Oct 13 13:02:38 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Oct 13 13:02:38 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 13 13:02:38 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Oct 13 13:02:38 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 13 13:02:38 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 13 13:02:38 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 13 13:02:38 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 13 13:02:38 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 13 13:02:38 localhost kernel: signal: max sigframe size: 1776
Oct 13 13:02:38 localhost kernel: BIOS-provided physical RAM map:
Oct 13 13:02:38 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 13 13:02:38 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 13 13:02:38 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 13 13:02:38 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 13 13:02:38 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 13 13:02:38 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 13 13:02:38 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 13 13:02:38 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Oct 13 13:02:38 localhost kernel: NX (Execute Disable) protection: active
Oct 13 13:02:38 localhost kernel: SMBIOS 2.8 present.
Oct 13 13:02:38 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 13 13:02:38 localhost kernel: Hypervisor detected: KVM
Oct 13 13:02:38 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 13 13:02:38 localhost kernel: kvm-clock: using sched offset of 2821681067 cycles
Oct 13 13:02:38 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 13 13:02:38 localhost kernel: tsc: Detected 2799.998 MHz processor
Oct 13 13:02:38 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 13 13:02:38 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 13 13:02:38 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Oct 13 13:02:38 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 13 13:02:38 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 13 13:02:38 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 13 13:02:38 localhost kernel: Using GB pages for direct mapping
Oct 13 13:02:38 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Oct 13 13:02:38 localhost kernel: ACPI: Early table checksum verification disabled
Oct 13 13:02:38 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 13 13:02:38 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 13 13:02:38 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 13 13:02:38 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 13 13:02:38 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 13 13:02:38 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 13 13:02:38 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 13 13:02:38 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 13 13:02:38 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 13 13:02:38 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 13 13:02:38 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 13 13:02:38 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 13 13:02:38 localhost kernel: No NUMA configuration found
Oct 13 13:02:38 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Oct 13 13:02:38 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff]
Oct 13 13:02:38 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Oct 13 13:02:38 localhost kernel: Zone ranges:
Oct 13 13:02:38 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 13 13:02:38 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 13 13:02:38 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000043fffffff]
Oct 13 13:02:38 localhost kernel:   Device   empty
Oct 13 13:02:38 localhost kernel: Movable zone start for each node
Oct 13 13:02:38 localhost kernel: Early memory node ranges
Oct 13 13:02:38 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 13 13:02:38 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 13 13:02:38 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000043fffffff]
Oct 13 13:02:38 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Oct 13 13:02:38 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 13 13:02:38 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 13 13:02:38 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 13 13:02:38 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 13 13:02:38 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 13 13:02:38 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 13 13:02:38 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 13 13:02:38 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 13 13:02:38 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 13 13:02:38 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 13 13:02:38 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 13 13:02:38 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 13 13:02:38 localhost kernel: TSC deadline timer available
Oct 13 13:02:38 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Oct 13 13:02:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 13 13:02:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 13 13:02:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 13 13:02:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 13 13:02:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 13 13:02:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 13 13:02:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 13 13:02:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 13 13:02:38 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 13 13:02:38 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 13 13:02:38 localhost kernel: Booting paravirtualized kernel on KVM
Oct 13 13:02:38 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 13 13:02:38 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 13 13:02:38 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Oct 13 13:02:38 localhost kernel: pcpu-alloc: s188416 r8192 d28672 u262144 alloc=1*2097152
Oct 13 13:02:38 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 13 13:02:38 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 13 13:02:38 localhost kernel: Fallback order for Node 0: 0 
Oct 13 13:02:38 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 4128475
Oct 13 13:02:38 localhost kernel: Policy zone: Normal
Oct 13 13:02:38 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Oct 13 13:02:38 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Oct 13 13:02:38 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Oct 13 13:02:38 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 13 13:02:38 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 13:02:38 localhost kernel: software IO TLB: area num 8.
Oct 13 13:02:38 localhost kernel: Memory: 2873456K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved)
Oct 13 13:02:38 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Oct 13 13:02:38 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 13 13:02:38 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Oct 13 13:02:38 localhost kernel: ftrace: allocated 176 pages with 3 groups
Oct 13 13:02:38 localhost kernel: Dynamic Preempt: voluntary
Oct 13 13:02:38 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 13:02:38 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 13 13:02:38 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 13 13:02:38 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 13 13:02:38 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 13 13:02:38 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 13:02:38 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 13 13:02:38 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 13 13:02:38 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 13:02:38 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 13 13:02:38 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Oct 13 13:02:38 localhost kernel: Console: colour VGA+ 80x25
Oct 13 13:02:38 localhost kernel: printk: console [tty0] enabled
Oct 13 13:02:38 localhost kernel: printk: console [ttyS0] enabled
Oct 13 13:02:38 localhost kernel: ACPI: Core revision 20211217
Oct 13 13:02:38 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 13 13:02:38 localhost kernel: x2apic enabled
Oct 13 13:02:38 localhost kernel: Switched APIC routing to physical x2apic.
Oct 13 13:02:38 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 13 13:02:38 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Oct 13 13:02:38 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 13 13:02:38 localhost kernel: LSM: Security Framework initializing
Oct 13 13:02:38 localhost kernel: Yama: becoming mindful.
Oct 13 13:02:38 localhost kernel: SELinux:  Initializing.
Oct 13 13:02:38 localhost kernel: LSM support for eBPF active
Oct 13 13:02:38 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 13 13:02:38 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 13 13:02:38 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 13 13:02:38 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 13 13:02:38 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 13 13:02:38 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 13 13:02:38 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 13 13:02:38 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 13 13:02:38 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 13 13:02:38 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 13 13:02:38 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 13 13:02:38 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 13 13:02:38 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 13 13:02:38 localhost kernel: Freeing SMP alternatives memory: 36K
Oct 13 13:02:38 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 13 13:02:38 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Oct 13 13:02:38 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Oct 13 13:02:38 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Oct 13 13:02:38 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Oct 13 13:02:38 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 13 13:02:38 localhost kernel: ... version:                0
Oct 13 13:02:38 localhost kernel: ... bit width:              48
Oct 13 13:02:38 localhost kernel: ... generic registers:      6
Oct 13 13:02:38 localhost kernel: ... value mask:             0000ffffffffffff
Oct 13 13:02:38 localhost kernel: ... max period:             00007fffffffffff
Oct 13 13:02:38 localhost kernel: ... fixed-purpose events:   0
Oct 13 13:02:38 localhost kernel: ... event mask:             000000000000003f
Oct 13 13:02:38 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 13 13:02:38 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 13 13:02:38 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 13 13:02:38 localhost kernel: x86: Booting SMP configuration:
Oct 13 13:02:38 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 13 13:02:38 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 13 13:02:38 localhost kernel: smpboot: Max logical packages: 8
Oct 13 13:02:38 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Oct 13 13:02:38 localhost kernel: node 0 deferred pages initialised in 23ms
Oct 13 13:02:38 localhost kernel: devtmpfs: initialized
Oct 13 13:02:38 localhost kernel: x86/mm: Memory block size: 128MB
Oct 13 13:02:38 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 13:02:38 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 13 13:02:38 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 13:02:38 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 13:02:38 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Oct 13 13:02:38 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 13 13:02:38 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 13 13:02:38 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 13 13:02:38 localhost kernel: audit: type=2000 audit(1760360557.098:1): state=initialized audit_enabled=0 res=1
Oct 13 13:02:38 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 13 13:02:38 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 13:02:38 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 13 13:02:38 localhost kernel: cpuidle: using governor menu
Oct 13 13:02:38 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Oct 13 13:02:38 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 13:02:38 localhost kernel: PCI: Using configuration type 1 for base access
Oct 13 13:02:38 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 13 13:02:38 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 13 13:02:38 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Oct 13 13:02:38 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 13:02:38 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 13:02:38 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 13 13:02:38 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 13 13:02:38 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 13 13:02:38 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 13 13:02:38 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 13:02:38 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 13 13:02:38 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 13 13:02:38 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 13 13:02:38 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 13:02:38 localhost kernel: ACPI: Interpreter enabled
Oct 13 13:02:38 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 13 13:02:38 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 13 13:02:38 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 13 13:02:38 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 13 13:02:38 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 13 13:02:38 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 13 13:02:38 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [3] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [4] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [5] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [6] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [7] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [8] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [9] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [10] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [11] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [12] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [13] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [14] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [15] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [16] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [17] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [18] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [19] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [20] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [21] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [22] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [23] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [24] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [25] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [26] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [27] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [28] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [29] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [30] registered
Oct 13 13:02:38 localhost kernel: acpiphp: Slot [31] registered
Oct 13 13:02:38 localhost kernel: PCI host bridge to bus 0000:00
Oct 13 13:02:38 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 13 13:02:38 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 13 13:02:38 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 13 13:02:38 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 13 13:02:38 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Oct 13 13:02:38 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 13 13:02:38 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct 13 13:02:38 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Oct 13 13:02:38 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Oct 13 13:02:38 localhost kernel: pci 0000:00:01.1: reg 0x20: [io  0xc140-0xc14f]
Oct 13 13:02:38 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Oct 13 13:02:38 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Oct 13 13:02:38 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Oct 13 13:02:38 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Oct 13 13:02:38 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Oct 13 13:02:38 localhost kernel: pci 0000:00:01.2: reg 0x20: [io  0xc100-0xc11f]
Oct 13 13:02:38 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Oct 13 13:02:38 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 13 13:02:38 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 13 13:02:38 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Oct 13 13:02:38 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Oct 13 13:02:38 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 13 13:02:38 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Oct 13 13:02:38 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Oct 13 13:02:38 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 13 13:02:38 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 13 13:02:38 localhost kernel: pci 0000:00:03.0: reg 0x10: [io  0xc080-0xc0bf]
Oct 13 13:02:38 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Oct 13 13:02:38 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 13 13:02:38 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Oct 13 13:02:38 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Oct 13 13:02:38 localhost kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc07f]
Oct 13 13:02:38 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Oct 13 13:02:38 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 13 13:02:38 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Oct 13 13:02:38 localhost kernel: pci 0000:00:05.0: reg 0x10: [io  0xc0c0-0xc0ff]
Oct 13 13:02:38 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 13 13:02:38 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Oct 13 13:02:38 localhost kernel: pci 0000:00:06.0: reg 0x10: [io  0xc120-0xc13f]
Oct 13 13:02:38 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 13 13:02:38 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 13 13:02:38 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 13 13:02:38 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 13 13:02:38 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 13 13:02:38 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 13 13:02:38 localhost kernel: iommu: Default domain type: Translated 
Oct 13 13:02:38 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Oct 13 13:02:38 localhost kernel: SCSI subsystem initialized
Oct 13 13:02:38 localhost kernel: ACPI: bus type USB registered
Oct 13 13:02:38 localhost kernel: usbcore: registered new interface driver usbfs
Oct 13 13:02:38 localhost kernel: usbcore: registered new interface driver hub
Oct 13 13:02:38 localhost kernel: usbcore: registered new device driver usb
Oct 13 13:02:38 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 13 13:02:38 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 13 13:02:38 localhost kernel: PTP clock support registered
Oct 13 13:02:38 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 13 13:02:38 localhost kernel: NetLabel: Initializing
Oct 13 13:02:38 localhost kernel: NetLabel:  domain hash size = 128
Oct 13 13:02:38 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 13 13:02:38 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 13 13:02:38 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 13 13:02:38 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 13 13:02:38 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 13 13:02:38 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 13 13:02:38 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 13 13:02:38 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 13 13:02:38 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 13 13:02:38 localhost kernel: vgaarb: loaded
Oct 13 13:02:38 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 13 13:02:38 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 13 13:02:38 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 13 13:02:38 localhost kernel: pnp: PnP ACPI init
Oct 13 13:02:38 localhost kernel: pnp 00:03: [dma 2]
Oct 13 13:02:38 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 13 13:02:38 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 13 13:02:38 localhost kernel: NET: Registered PF_INET protocol family
Oct 13 13:02:38 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 13 13:02:38 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Oct 13 13:02:38 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 13 13:02:38 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 13 13:02:38 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 13 13:02:38 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Oct 13 13:02:38 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Oct 13 13:02:38 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Oct 13 13:02:38 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Oct 13 13:02:38 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 13 13:02:38 localhost kernel: NET: Registered PF_XDP protocol family
Oct 13 13:02:38 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 13 13:02:38 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 13 13:02:38 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 13 13:02:38 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 13 13:02:38 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Oct 13 13:02:38 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 13 13:02:38 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 13 13:02:38 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 13 13:02:38 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 29535 usecs
Oct 13 13:02:38 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 13 13:02:38 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 13 13:02:38 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 13 13:02:38 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 13 13:02:38 localhost kernel: ACPI: bus type thunderbolt registered
Oct 13 13:02:38 localhost kernel: Initialise system trusted keyrings
Oct 13 13:02:38 localhost kernel: Key type blacklist registered
Oct 13 13:02:38 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Oct 13 13:02:38 localhost kernel: zbud: loaded
Oct 13 13:02:38 localhost kernel: integrity: Platform Keyring initialized
Oct 13 13:02:38 localhost kernel: NET: Registered PF_ALG protocol family
Oct 13 13:02:38 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 13 13:02:38 localhost kernel: Key type asymmetric registered
Oct 13 13:02:38 localhost kernel: Asymmetric key parser 'x509' registered
Oct 13 13:02:38 localhost kernel: Running certificate verification selftests
Oct 13 13:02:38 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 13 13:02:38 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 13 13:02:38 localhost kernel: io scheduler mq-deadline registered
Oct 13 13:02:38 localhost kernel: io scheduler kyber registered
Oct 13 13:02:38 localhost kernel: io scheduler bfq registered
Oct 13 13:02:38 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 13 13:02:38 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 13 13:02:38 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 13 13:02:38 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 13 13:02:38 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 13 13:02:38 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 13 13:02:38 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 13 13:02:38 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 13:02:38 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 13 13:02:38 localhost kernel: Non-volatile memory driver v1.3
Oct 13 13:02:38 localhost kernel: rdac: device handler registered
Oct 13 13:02:38 localhost kernel: hp_sw: device handler registered
Oct 13 13:02:38 localhost kernel: emc: device handler registered
Oct 13 13:02:38 localhost kernel: alua: device handler registered
Oct 13 13:02:38 localhost kernel: libphy: Fixed MDIO Bus: probed
Oct 13 13:02:38 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Oct 13 13:02:38 localhost kernel: ehci-pci: EHCI PCI platform driver
Oct 13 13:02:38 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Oct 13 13:02:38 localhost kernel: ohci-pci: OHCI PCI platform driver
Oct 13 13:02:38 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Oct 13 13:02:38 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 13 13:02:38 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 13 13:02:38 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 13 13:02:38 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 13 13:02:38 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 13 13:02:38 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 13 13:02:38 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 13 13:02:38 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Oct 13 13:02:38 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 13 13:02:38 localhost kernel: hub 1-0:1.0: USB hub found
Oct 13 13:02:38 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 13 13:02:38 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 13 13:02:38 localhost kernel: usbserial: USB Serial support registered for generic
Oct 13 13:02:38 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 13 13:02:38 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 13 13:02:38 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 13 13:02:38 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 13 13:02:38 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 13 13:02:38 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 13 13:02:38 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 13 13:02:38 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-13T13:02:37 UTC (1760360557)
Oct 13 13:02:38 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 13 13:02:38 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 13 13:02:38 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 13 13:02:38 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 13 13:02:38 localhost kernel: usbcore: registered new interface driver usbhid
Oct 13 13:02:38 localhost kernel: usbhid: USB HID core driver
Oct 13 13:02:38 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 13 13:02:38 localhost kernel: Initializing XFRM netlink socket
Oct 13 13:02:38 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 13 13:02:38 localhost kernel: Segment Routing with IPv6
Oct 13 13:02:38 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 13 13:02:38 localhost kernel: mpls_gso: MPLS GSO support
Oct 13 13:02:38 localhost kernel: IPI shorthand broadcast: enabled
Oct 13 13:02:38 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 13 13:02:38 localhost kernel: AES CTR mode by8 optimization enabled
Oct 13 13:02:38 localhost kernel: sched_clock: Marking stable (726068757, 179066294)->(1038582931, -133447880)
Oct 13 13:02:38 localhost kernel: registered taskstats version 1
Oct 13 13:02:38 localhost kernel: Loading compiled-in X.509 certificates
Oct 13 13:02:38 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Oct 13 13:02:38 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 13 13:02:38 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 13 13:02:38 localhost kernel: zswap: loaded using pool lzo/zbud
Oct 13 13:02:38 localhost kernel: page_owner is disabled
Oct 13 13:02:38 localhost kernel: Key type big_key registered
Oct 13 13:02:38 localhost kernel: Freeing initrd memory: 74232K
Oct 13 13:02:38 localhost kernel: Key type encrypted registered
Oct 13 13:02:38 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 13 13:02:38 localhost kernel: Loading compiled-in module X.509 certificates
Oct 13 13:02:38 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Oct 13 13:02:38 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 13 13:02:38 localhost kernel: ima: No architecture policies found
Oct 13 13:02:38 localhost kernel: evm: Initialising EVM extended attributes:
Oct 13 13:02:38 localhost kernel: evm: security.selinux
Oct 13 13:02:38 localhost kernel: evm: security.SMACK64 (disabled)
Oct 13 13:02:38 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 13 13:02:38 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 13 13:02:38 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 13 13:02:38 localhost kernel: evm: security.apparmor (disabled)
Oct 13 13:02:38 localhost kernel: evm: security.ima
Oct 13 13:02:38 localhost kernel: evm: security.capability
Oct 13 13:02:38 localhost kernel: evm: HMAC attrs: 0x1
Oct 13 13:02:38 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 13 13:02:38 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 13 13:02:38 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 13 13:02:38 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 13 13:02:38 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 13 13:02:38 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 13 13:02:38 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 13 13:02:38 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 13 13:02:38 localhost kernel: Freeing unused decrypted memory: 2036K
Oct 13 13:02:38 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Oct 13 13:02:38 localhost kernel: Write protecting the kernel read-only data: 26624k
Oct 13 13:02:38 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Oct 13 13:02:38 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Oct 13 13:02:38 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 13 13:02:38 localhost kernel: Run /init as init process
Oct 13 13:02:38 localhost kernel:   with arguments:
Oct 13 13:02:38 localhost kernel:     /init
Oct 13 13:02:38 localhost kernel:   with environment:
Oct 13 13:02:38 localhost kernel:     HOME=/
Oct 13 13:02:38 localhost kernel:     TERM=linux
Oct 13 13:02:38 localhost kernel:     BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64
Oct 13 13:02:38 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 13 13:02:38 localhost systemd[1]: Detected virtualization kvm.
Oct 13 13:02:38 localhost systemd[1]: Detected architecture x86-64.
Oct 13 13:02:38 localhost systemd[1]: Running in initrd.
Oct 13 13:02:38 localhost systemd[1]: No hostname configured, using default hostname.
Oct 13 13:02:38 localhost systemd[1]: Hostname set to <localhost>.
Oct 13 13:02:38 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 13 13:02:38 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 13 13:02:38 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 13 13:02:38 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 13 13:02:38 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 13 13:02:38 localhost systemd[1]: Reached target Local File Systems.
Oct 13 13:02:38 localhost systemd[1]: Reached target Path Units.
Oct 13 13:02:38 localhost systemd[1]: Reached target Slice Units.
Oct 13 13:02:38 localhost systemd[1]: Reached target Swaps.
Oct 13 13:02:38 localhost systemd[1]: Reached target Timer Units.
Oct 13 13:02:38 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 13 13:02:38 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 13 13:02:38 localhost systemd[1]: Listening on Journal Socket.
Oct 13 13:02:38 localhost systemd[1]: Listening on udev Control Socket.
Oct 13 13:02:38 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 13 13:02:38 localhost systemd[1]: Reached target Socket Units.
Oct 13 13:02:38 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 13 13:02:38 localhost systemd[1]: Starting Journal Service...
Oct 13 13:02:38 localhost systemd[1]: Starting Load Kernel Modules...
Oct 13 13:02:38 localhost systemd[1]: Starting Create System Users...
Oct 13 13:02:38 localhost systemd[1]: Starting Setup Virtual Console...
Oct 13 13:02:38 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 13 13:02:38 localhost systemd[1]: Finished Load Kernel Modules.
Oct 13 13:02:38 localhost systemd-journald[284]: Journal started
Oct 13 13:02:38 localhost systemd-journald[284]: Runtime Journal (/run/log/journal/6635e8bc8f5a4138a382e7fa61dbcec1) is 8.0M, max 314.7M, 306.7M free.
Oct 13 13:02:38 localhost systemd-modules-load[285]: Module 'msr' is built in
Oct 13 13:02:38 localhost systemd[1]: Started Journal Service.
Oct 13 13:02:38 localhost systemd[1]: Finished Setup Virtual Console.
Oct 13 13:02:38 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 13 13:02:38 localhost systemd[1]: Starting dracut cmdline hook...
Oct 13 13:02:38 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 13 13:02:38 localhost systemd-sysusers[286]: Creating group 'sgx' with GID 997.
Oct 13 13:02:38 localhost systemd-sysusers[286]: Creating group 'users' with GID 100.
Oct 13 13:02:38 localhost systemd-sysusers[286]: Creating group 'dbus' with GID 81.
Oct 13 13:02:38 localhost systemd-sysusers[286]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 13 13:02:38 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 13 13:02:38 localhost systemd[1]: Finished Create System Users.
Oct 13 13:02:38 localhost dracut-cmdline[289]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Oct 13 13:02:38 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 13 13:02:38 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 13 13:02:38 localhost dracut-cmdline[289]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Oct 13 13:02:38 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 13 13:02:38 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 13 13:02:38 localhost systemd[1]: Finished dracut cmdline hook.
Oct 13 13:02:38 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 13 13:02:38 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 13 13:02:38 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 13 13:02:38 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Oct 13 13:02:38 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 13 13:02:38 localhost kernel: RPC: Registered udp transport module.
Oct 13 13:02:38 localhost kernel: RPC: Registered tcp transport module.
Oct 13 13:02:38 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 13 13:02:38 localhost rpc.statd[406]: Version 2.5.4 starting
Oct 13 13:02:38 localhost rpc.statd[406]: Initializing NSM state
Oct 13 13:02:38 localhost rpc.idmapd[411]: Setting log level to 0
Oct 13 13:02:38 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 13 13:02:38 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 13 13:02:38 localhost systemd-udevd[424]: Using default interface naming scheme 'rhel-9.0'.
Oct 13 13:02:38 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 13 13:02:38 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 13 13:02:38 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 13 13:02:38 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 13 13:02:38 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 13 13:02:38 localhost systemd[1]: Reached target System Initialization.
Oct 13 13:02:38 localhost systemd[1]: Reached target Basic System.
Oct 13 13:02:38 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 13 13:02:38 localhost systemd[1]: Reached target Network.
Oct 13 13:02:38 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 13 13:02:38 localhost systemd[1]: Starting dracut initqueue hook...
Oct 13 13:02:38 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Oct 13 13:02:38 localhost kernel: libata version 3.00 loaded.
Oct 13 13:02:38 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 13 13:02:38 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 13 13:02:38 localhost kernel: GPT:20971519 != 838860799
Oct 13 13:02:38 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 13 13:02:38 localhost kernel: GPT:20971519 != 838860799
Oct 13 13:02:38 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 13 13:02:38 localhost kernel:  vda: vda1 vda2 vda3 vda4
Oct 13 13:02:38 localhost kernel: scsi host0: ata_piix
Oct 13 13:02:39 localhost kernel: scsi host1: ata_piix
Oct 13 13:02:39 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Oct 13 13:02:39 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Oct 13 13:02:39 localhost systemd-udevd[448]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 13:02:39 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Oct 13 13:02:39 localhost systemd[1]: Reached target Initrd Root Device.
Oct 13 13:02:39 localhost kernel: ata1: found unknown device (class 0)
Oct 13 13:02:39 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 13 13:02:39 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 13 13:02:39 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 13 13:02:39 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 13 13:02:39 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 13 13:02:39 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 13 13:02:39 localhost systemd[1]: Finished dracut initqueue hook.
Oct 13 13:02:39 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 13 13:02:39 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 13 13:02:39 localhost systemd[1]: Reached target Remote File Systems.
Oct 13 13:02:39 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 13 13:02:39 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 13 13:02:39 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Oct 13 13:02:39 localhost systemd-fsck[512]: /usr/sbin/fsck.xfs: XFS file system.
Oct 13 13:02:39 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Oct 13 13:02:39 localhost systemd[1]: Mounting /sysroot...
Oct 13 13:02:39 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 13 13:02:39 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Oct 13 13:02:39 localhost kernel: XFS (vda4): Ending clean mount
Oct 13 13:02:39 localhost systemd[1]: Mounted /sysroot.
Oct 13 13:02:39 localhost systemd[1]: Reached target Initrd Root File System.
Oct 13 13:02:39 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 13 13:02:39 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 13 13:02:39 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 13 13:02:39 localhost systemd[1]: Reached target Initrd File Systems.
Oct 13 13:02:39 localhost systemd[1]: Reached target Initrd Default Target.
Oct 13 13:02:39 localhost systemd[1]: Starting dracut mount hook...
Oct 13 13:02:39 localhost systemd[1]: Finished dracut mount hook.
Oct 13 13:02:39 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 13 13:02:39 localhost rpc.idmapd[411]: exiting on signal 15
Oct 13 13:02:39 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 13 13:02:39 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 13 13:02:39 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 13 13:02:39 localhost systemd[1]: Stopped target Network.
Oct 13 13:02:39 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 13 13:02:39 localhost systemd[1]: Stopped target Timer Units.
Oct 13 13:02:39 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 13 13:02:39 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 13 13:02:39 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 13 13:02:39 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 13 13:02:39 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 13 13:02:39 localhost systemd[1]: Stopped target Basic System.
Oct 13 13:02:39 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 13 13:02:39 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 13 13:02:39 localhost systemd[1]: Stopped target Path Units.
Oct 13 13:02:39 localhost systemd[1]: Stopped target Remote File Systems.
Oct 13 13:02:39 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 13 13:02:39 localhost systemd[1]: Stopped target Slice Units.
Oct 13 13:02:39 localhost systemd[1]: Stopped target Socket Units.
Oct 13 13:02:39 localhost systemd[1]: Stopped target System Initialization.
Oct 13 13:02:39 localhost systemd[1]: Stopped target Local File Systems.
Oct 13 13:02:39 localhost systemd[1]: Stopped target Swaps.
Oct 13 13:02:39 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 13 13:02:39 localhost systemd[1]: Stopped dracut mount hook.
Oct 13 13:02:39 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 13 13:02:39 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 13 13:02:39 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 13 13:02:39 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 13 13:02:39 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 13 13:02:39 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 13 13:02:39 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 13 13:02:39 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 13:02:39 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 13 13:02:39 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 13:02:39 localhost systemd[1]: Stopped Load Kernel Modules.
Oct 13 13:02:39 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 13 13:02:39 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 13 13:02:39 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 13 13:02:39 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 13 13:02:39 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 13 13:02:39 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 13 13:02:39 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 13 13:02:39 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 13:02:39 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 13 13:02:40 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 13 13:02:40 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 13 13:02:40 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 13 13:02:40 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 13 13:02:40 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 13 13:02:40 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 13 13:02:40 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 13 13:02:40 localhost systemd[1]: Closed udev Control Socket.
Oct 13 13:02:40 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 13 13:02:40 localhost systemd[1]: Closed udev Kernel Socket.
Oct 13 13:02:40 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 13 13:02:40 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 13 13:02:40 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 13 13:02:40 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 13 13:02:40 localhost systemd[1]: Starting Cleanup udev Database...
Oct 13 13:02:40 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 13 13:02:40 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 13 13:02:40 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 13 13:02:40 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 13 13:02:40 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 13 13:02:40 localhost systemd[1]: Stopped Create System Users.
Oct 13 13:02:40 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 13 13:02:40 localhost systemd[1]: Finished Cleanup udev Database.
Oct 13 13:02:40 localhost systemd[1]: Reached target Switch Root.
Oct 13 13:02:40 localhost systemd[1]: Starting Switch Root...
Oct 13 13:02:40 localhost systemd[1]: Switching root.
Oct 13 13:02:40 localhost systemd-journald[284]: Journal stopped
Oct 13 13:02:41 localhost systemd-journald[284]: Received SIGTERM from PID 1 (systemd).
Oct 13 13:02:41 localhost kernel: audit: type=1404 audit(1760360560.172:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 13 13:02:41 localhost kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:02:41 localhost kernel: SELinux:  policy capability open_perms=1
Oct 13 13:02:41 localhost kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:02:41 localhost kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:02:41 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:02:41 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:02:41 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:02:41 localhost kernel: audit: type=1403 audit(1760360560.303:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 13 13:02:41 localhost systemd[1]: Successfully loaded SELinux policy in 136.332ms.
Oct 13 13:02:41 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 40.199ms.
Oct 13 13:02:41 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 13 13:02:41 localhost systemd[1]: Detected virtualization kvm.
Oct 13 13:02:41 localhost systemd[1]: Detected architecture x86-64.
Oct 13 13:02:41 localhost systemd-rc-local-generator[582]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:02:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:02:41 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 13 13:02:41 localhost systemd[1]: Stopped Switch Root.
Oct 13 13:02:41 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 13 13:02:41 localhost systemd[1]: Created slice Slice /system/getty.
Oct 13 13:02:41 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 13 13:02:41 localhost systemd[1]: Created slice Slice /system/serial-getty.
Oct 13 13:02:41 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 13 13:02:41 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Oct 13 13:02:41 localhost systemd[1]: Created slice User and Session Slice.
Oct 13 13:02:41 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 13 13:02:41 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 13 13:02:41 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 13 13:02:41 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 13 13:02:41 localhost systemd[1]: Stopped target Switch Root.
Oct 13 13:02:41 localhost systemd[1]: Stopped target Initrd File Systems.
Oct 13 13:02:41 localhost systemd[1]: Stopped target Initrd Root File System.
Oct 13 13:02:41 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 13 13:02:41 localhost systemd[1]: Reached target Path Units.
Oct 13 13:02:41 localhost systemd[1]: Reached target rpc_pipefs.target.
Oct 13 13:02:41 localhost systemd[1]: Reached target Slice Units.
Oct 13 13:02:41 localhost systemd[1]: Reached target Swaps.
Oct 13 13:02:41 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Oct 13 13:02:41 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 13 13:02:41 localhost systemd[1]: Reached target RPC Port Mapper.
Oct 13 13:02:41 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 13 13:02:41 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 13 13:02:41 localhost systemd[1]: Listening on udev Control Socket.
Oct 13 13:02:41 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 13 13:02:41 localhost systemd[1]: Mounting Huge Pages File System...
Oct 13 13:02:41 localhost systemd[1]: Mounting POSIX Message Queue File System...
Oct 13 13:02:41 localhost systemd[1]: Mounting Kernel Debug File System...
Oct 13 13:02:41 localhost systemd[1]: Mounting Kernel Trace File System...
Oct 13 13:02:41 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 13 13:02:41 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 13 13:02:41 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 13 13:02:41 localhost systemd[1]: Starting Load Kernel Module drm...
Oct 13 13:02:41 localhost systemd[1]: Starting Load Kernel Module fuse...
Oct 13 13:02:41 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 13 13:02:41 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 13 13:02:41 localhost systemd[1]: Stopped File System Check on Root Device.
Oct 13 13:02:41 localhost systemd[1]: Stopped Journal Service.
Oct 13 13:02:41 localhost systemd[1]: Starting Journal Service...
Oct 13 13:02:41 localhost systemd[1]: Starting Load Kernel Modules...
Oct 13 13:02:41 localhost systemd[1]: Starting Generate network units from Kernel command line...
Oct 13 13:02:41 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 13 13:02:41 localhost kernel: fuse: init (API version 7.36)
Oct 13 13:02:41 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 13:02:41 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 13 13:02:41 localhost systemd[1]: Mounted Huge Pages File System.
Oct 13 13:02:41 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 13 13:02:41 localhost systemd-journald[618]: Journal started
Oct 13 13:02:41 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/89829c24d904bea15dec4d2c9d1ee875) is 8.0M, max 314.7M, 306.7M free.
Oct 13 13:02:40 localhost systemd[1]: Queued start job for default target Multi-User System.
Oct 13 13:02:40 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 13 13:02:41 localhost systemd-modules-load[619]: Module 'msr' is built in
Oct 13 13:02:41 localhost systemd[1]: Started Journal Service.
Oct 13 13:02:41 localhost systemd[1]: Mounted POSIX Message Queue File System.
Oct 13 13:02:41 localhost systemd[1]: Mounted Kernel Debug File System.
Oct 13 13:02:41 localhost systemd[1]: Mounted Kernel Trace File System.
Oct 13 13:02:41 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 13 13:02:41 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 13 13:02:41 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 13 13:02:41 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 13 13:02:41 localhost systemd[1]: Finished Load Kernel Module fuse.
Oct 13 13:02:41 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 13 13:02:41 localhost systemd[1]: Finished Load Kernel Modules.
Oct 13 13:02:41 localhost systemd[1]: Finished Generate network units from Kernel command line.
Oct 13 13:02:41 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 13 13:02:41 localhost systemd[1]: Mounting FUSE Control File System...
Oct 13 13:02:41 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 13 13:02:41 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 13 13:02:41 localhost systemd[1]: Starting Rebuild Hardware Database...
Oct 13 13:02:41 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 13 13:02:41 localhost systemd[1]: Starting Load/Save Random Seed...
Oct 13 13:02:41 localhost kernel: ACPI: bus type drm_connector registered
Oct 13 13:02:41 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 13 13:02:41 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/89829c24d904bea15dec4d2c9d1ee875) is 8.0M, max 314.7M, 306.7M free.
Oct 13 13:02:41 localhost systemd-journald[618]: Received client request to flush runtime journal.
Oct 13 13:02:41 localhost systemd[1]: Starting Create System Users...
Oct 13 13:02:41 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 13:02:41 localhost systemd[1]: Finished Load Kernel Module drm.
Oct 13 13:02:41 localhost systemd[1]: Mounted FUSE Control File System.
Oct 13 13:02:41 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 13 13:02:41 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 13 13:02:41 localhost systemd[1]: Finished Load/Save Random Seed.
Oct 13 13:02:41 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 13 13:02:41 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 13 13:02:41 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 13 13:02:41 localhost systemd-sysusers[630]: Creating group 'sgx' with GID 989.
Oct 13 13:02:41 localhost systemd-sysusers[630]: Creating group 'systemd-oom' with GID 988.
Oct 13 13:02:41 localhost systemd-sysusers[630]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Oct 13 13:02:41 localhost systemd[1]: Finished Create System Users.
Oct 13 13:02:41 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 13 13:02:41 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 13 13:02:41 localhost systemd[1]: Reached target Preparation for Local File Systems.
Oct 13 13:02:41 localhost systemd[1]: Set up automount EFI System Partition Automount.
Oct 13 13:02:41 localhost systemd[1]: Finished Rebuild Hardware Database.
Oct 13 13:02:41 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 13 13:02:41 localhost systemd-udevd[635]: Using default interface naming scheme 'rhel-9.0'.
Oct 13 13:02:41 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 13 13:02:41 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 13 13:02:41 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 13 13:02:41 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 13 13:02:41 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 13 13:02:41 localhost systemd-udevd[637]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 13:02:41 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 13 13:02:41 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 13 13:02:41 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Oct 13 13:02:41 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Oct 13 13:02:41 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Oct 13 13:02:41 localhost systemd-fsck[682]: fsck.fat 4.2 (2021-01-31)
Oct 13 13:02:41 localhost systemd-fsck[682]: /dev/vda2: 12 files, 1782/51145 clusters
Oct 13 13:02:41 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Oct 13 13:02:41 localhost kernel: SVM: TSC scaling supported
Oct 13 13:02:41 localhost kernel: kvm: Nested Virtualization enabled
Oct 13 13:02:41 localhost kernel: SVM: kvm: Nested Paging enabled
Oct 13 13:02:41 localhost kernel: SVM: LBR virtualization supported
Oct 13 13:02:41 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 13 13:02:41 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 13 13:02:41 localhost kernel: Console: switching to colour dummy device 80x25
Oct 13 13:02:41 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 13 13:02:41 localhost kernel: [drm] features: -context_init
Oct 13 13:02:41 localhost kernel: [drm] number of scanouts: 1
Oct 13 13:02:41 localhost kernel: [drm] number of cap sets: 0
Oct 13 13:02:41 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Oct 13 13:02:41 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Oct 13 13:02:41 localhost kernel: Console: switching to colour frame buffer device 128x48
Oct 13 13:02:42 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 13 13:02:42 localhost systemd[1]: Mounting /boot...
Oct 13 13:02:42 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Oct 13 13:02:42 localhost kernel: XFS (vda3): Ending clean mount
Oct 13 13:02:42 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Oct 13 13:02:42 localhost systemd[1]: Mounted /boot.
Oct 13 13:02:42 localhost systemd[1]: Mounting /boot/efi...
Oct 13 13:02:42 localhost systemd[1]: Mounted /boot/efi.
Oct 13 13:02:42 localhost systemd[1]: Reached target Local File Systems.
Oct 13 13:02:42 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 13 13:02:42 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 13 13:02:42 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 13:02:42 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 13 13:02:42 localhost systemd[1]: Starting Automatic Boot Loader Update...
Oct 13 13:02:42 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 13 13:02:42 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 13 13:02:42 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 717 (bootctl)
Oct 13 13:02:42 localhost systemd[1]: Starting File System Check on /dev/vda2...
Oct 13 13:02:42 localhost systemd[1]: Finished File System Check on /dev/vda2.
Oct 13 13:02:42 localhost systemd[1]: Mounting EFI System Partition Automount...
Oct 13 13:02:42 localhost systemd[1]: Mounted EFI System Partition Automount.
Oct 13 13:02:42 localhost systemd[1]: Finished Automatic Boot Loader Update.
Oct 13 13:02:42 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 13 13:02:42 localhost systemd[1]: Starting Security Auditing Service...
Oct 13 13:02:42 localhost systemd[1]: Starting RPC Bind...
Oct 13 13:02:42 localhost systemd[1]: Starting Rebuild Journal Catalog...
Oct 13 13:02:42 localhost systemd[1]: Finished Rebuild Journal Catalog.
Oct 13 13:02:42 localhost auditd[726]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Oct 13 13:02:42 localhost auditd[726]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Oct 13 13:02:42 localhost systemd[1]: Started RPC Bind.
Oct 13 13:02:42 localhost augenrules[731]: /sbin/augenrules: No change
Oct 13 13:02:42 localhost augenrules[741]: No rules
Oct 13 13:02:42 localhost augenrules[741]: enabled 1
Oct 13 13:02:42 localhost augenrules[741]: failure 1
Oct 13 13:02:42 localhost augenrules[741]: pid 726
Oct 13 13:02:42 localhost augenrules[741]: rate_limit 0
Oct 13 13:02:42 localhost augenrules[741]: backlog_limit 8192
Oct 13 13:02:42 localhost augenrules[741]: lost 0
Oct 13 13:02:42 localhost augenrules[741]: backlog 4
Oct 13 13:02:42 localhost augenrules[741]: backlog_wait_time 60000
Oct 13 13:02:42 localhost augenrules[741]: backlog_wait_time_actual 0
Oct 13 13:02:42 localhost augenrules[741]: enabled 1
Oct 13 13:02:42 localhost augenrules[741]: failure 1
Oct 13 13:02:42 localhost augenrules[741]: pid 726
Oct 13 13:02:42 localhost augenrules[741]: rate_limit 0
Oct 13 13:02:42 localhost augenrules[741]: backlog_limit 8192
Oct 13 13:02:42 localhost augenrules[741]: lost 0
Oct 13 13:02:42 localhost augenrules[741]: backlog 0
Oct 13 13:02:42 localhost augenrules[741]: backlog_wait_time 60000
Oct 13 13:02:42 localhost augenrules[741]: backlog_wait_time_actual 0
Oct 13 13:02:42 localhost augenrules[741]: enabled 1
Oct 13 13:02:42 localhost augenrules[741]: failure 1
Oct 13 13:02:42 localhost augenrules[741]: pid 726
Oct 13 13:02:42 localhost augenrules[741]: rate_limit 0
Oct 13 13:02:42 localhost augenrules[741]: backlog_limit 8192
Oct 13 13:02:42 localhost augenrules[741]: lost 0
Oct 13 13:02:42 localhost augenrules[741]: backlog 3
Oct 13 13:02:42 localhost augenrules[741]: backlog_wait_time 60000
Oct 13 13:02:42 localhost augenrules[741]: backlog_wait_time_actual 0
Oct 13 13:02:42 localhost systemd[1]: Started Security Auditing Service.
Oct 13 13:02:42 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 13 13:02:42 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 13 13:02:42 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 13 13:02:42 localhost systemd[1]: Starting Update is Completed...
Oct 13 13:02:42 localhost systemd[1]: Finished Update is Completed.
Oct 13 13:02:42 localhost systemd[1]: Reached target System Initialization.
Oct 13 13:02:42 localhost systemd[1]: Started dnf makecache --timer.
Oct 13 13:02:42 localhost systemd[1]: Started Daily rotation of log files.
Oct 13 13:02:42 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 13 13:02:42 localhost systemd[1]: Reached target Timer Units.
Oct 13 13:02:42 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 13 13:02:42 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 13 13:02:42 localhost systemd[1]: Reached target Socket Units.
Oct 13 13:02:42 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Oct 13 13:02:42 localhost systemd[1]: Starting D-Bus System Message Bus...
Oct 13 13:02:42 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 13 13:02:42 localhost systemd[1]: Started D-Bus System Message Bus.
Oct 13 13:02:42 localhost systemd[1]: Reached target Basic System.
Oct 13 13:02:42 localhost dbus-broker-lau[751]: Ready
Oct 13 13:02:42 localhost systemd[1]: Starting NTP client/server...
Oct 13 13:02:42 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 13 13:02:42 localhost systemd[1]: Started irqbalance daemon.
Oct 13 13:02:42 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 13 13:02:42 localhost systemd[1]: Starting System Logging Service...
Oct 13 13:02:42 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 13:02:42 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 13:02:42 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 13:02:42 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 13 13:02:42 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 13 13:02:42 localhost systemd[1]: Reached target User and Group Name Lookups.
Oct 13 13:02:42 localhost systemd[1]: Starting User Login Management...
Oct 13 13:02:42 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 13 13:02:42 localhost systemd[1]: Started System Logging Service.
Oct 13 13:02:42 localhost rsyslogd[759]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="759" x-info="https://www.rsyslog.com"] start
Oct 13 13:02:42 localhost rsyslogd[759]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Oct 13 13:02:42 localhost chronyd[766]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 13 13:02:42 localhost systemd-logind[760]: New seat seat0.
Oct 13 13:02:42 localhost chronyd[766]: Using right/UTC timezone to obtain leap second data
Oct 13 13:02:42 localhost rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 13:02:42 localhost chronyd[766]: Loaded seccomp filter (level 2)
Oct 13 13:02:42 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 13 13:02:42 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 13 13:02:42 localhost systemd[1]: Started User Login Management.
Oct 13 13:02:42 localhost systemd[1]: Started NTP client/server.
Oct 13 13:02:43 localhost cloud-init[770]: Cloud-init v. 22.1-9.el9 running 'init-local' at Mon, 13 Oct 2025 13:02:43 +0000. Up 6.35 seconds.
Oct 13 13:02:43 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Oct 13 13:02:43 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Oct 13 13:02:43 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpzb0_7ntl.mount: Deactivated successfully.
Oct 13 13:02:43 localhost systemd[1]: Starting Hostname Service...
Oct 13 13:02:43 localhost systemd[1]: Started Hostname Service.
Oct 13 13:02:43 np0005484548.novalocal systemd-hostnamed[784]: Hostname set to <np0005484548.novalocal> (static)
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Finished Initial cloud-init job (pre-networking).
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Reached target Preparation for Network.
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Starting Network Manager...
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.7816] NetworkManager (version 1.42.2-1.el9) is starting... (boot:d585dfa6-4d92-480a-a639-e5f89c370e7c)
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.7821] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Started Network Manager.
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.7870] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Reached target Network.
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.7975] manager[0x563d1c13e020]: monitoring kernel firmware directory '/lib/firmware'.
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8015] hostname: hostname: using hostnamed
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8015] hostname: static hostname changed from (none) to "np0005484548.novalocal"
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Starting Enable periodic update of entitlement certificates....
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8029] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Started Enable periodic update of entitlement certificates..
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Reached target NFS client services.
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Reached target Remote File Systems.
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8370] manager[0x563d1c13e020]: rfkill: Wi-Fi hardware radio set enabled
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8372] manager[0x563d1c13e020]: rfkill: WWAN hardware radio set enabled
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8452] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8453] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8463] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8463] manager: Networking is enabled by state file
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8503] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8506] settings: Loaded settings plugin: keyfile (internal)
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8548] dhcp: init: Using DHCP client 'internal'
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8552] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8571] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8580] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8592] device (lo): Activation: starting connection 'lo' (77fe12d1-f185-4d37-80c9-1be3be197acf)
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8604] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8609] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8664] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8668] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8671] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8675] device (eth0): carrier: link connected
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8711] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8719] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8731] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8735] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8740] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8740] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8748] device (lo): Activation: successful, device activated.
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8755] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8758] manager: NetworkManager state is now CONNECTING
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8760] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8772] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8777] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8815] dhcp4 (eth0): state changed new lease, address=38.102.83.151
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8820] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8853] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8871] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8874] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8878] manager: NetworkManager state is now CONNECTED_SITE
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8881] device (eth0): Activation: successful, device activated.
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8887] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 13 13:02:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360563.8891] manager: startup complete
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 13 13:02:43 np0005484548.novalocal systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: Cloud-init v. 22.1-9.el9 running 'init' at Mon, 13 Oct 2025 13:02:44 +0000. Up 7.29 seconds.
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: |  eth0  | True |        38.102.83.151         | 255.255.255.0 | global | fa:16:3e:2e:69:a7 |
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: |  eth0  | True | fe80::f816:3eff:fe2e:69a7/64 |       .       |  link  | fa:16:3e:2e:69:a7 |
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 13 13:02:44 np0005484548.novalocal cloud-init[898]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 13 13:02:44 np0005484548.novalocal systemd[1]: Starting Authorization Manager...
Oct 13 13:02:44 np0005484548.novalocal systemd[1]: Started Dynamic System Tuning Daemon.
Oct 13 13:02:44 np0005484548.novalocal polkitd[1036]: Started polkitd version 0.117
Oct 13 13:02:44 np0005484548.novalocal polkitd[1036]: Loading rules from directory /etc/polkit-1/rules.d
Oct 13 13:02:44 np0005484548.novalocal polkitd[1036]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 13 13:02:44 np0005484548.novalocal polkitd[1036]: Finished loading, compiling and executing 4 rules
Oct 13 13:02:44 np0005484548.novalocal systemd[1]: Started Authorization Manager.
Oct 13 13:02:44 np0005484548.novalocal polkitd[1036]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 13 13:02:46 np0005484548.novalocal useradd[1119]: new group: name=cloud-user, GID=1001
Oct 13 13:02:46 np0005484548.novalocal useradd[1119]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Oct 13 13:02:46 np0005484548.novalocal useradd[1119]: add 'cloud-user' to group 'adm'
Oct 13 13:02:46 np0005484548.novalocal useradd[1119]: add 'cloud-user' to group 'systemd-journal'
Oct 13 13:02:46 np0005484548.novalocal useradd[1119]: add 'cloud-user' to shadow group 'adm'
Oct 13 13:02:46 np0005484548.novalocal useradd[1119]: add 'cloud-user' to shadow group 'systemd-journal'
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: Generating public/private rsa key pair.
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: The key fingerprint is:
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: SHA256:Gtjkk2Bu/YE4Ns4A3UyQo1WreTInqSbBBSNRrjagHz4 root@np0005484548.novalocal
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: The key's randomart image is:
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: +---[RSA 3072]----+
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |o=o+o            |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |.o*+ .           |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |oooo* .          |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |=o.* O o         |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |++X @ O S        |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |.=.& o = .       |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |o.E o . .        |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |o  .             |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |                 |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: +----[SHA256]-----+
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: Generating public/private ecdsa key pair.
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: The key fingerprint is:
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: SHA256:XXIkP5RY9ebpgP3WGfGymFtjhu3detpz1ka07nrj2g0 root@np0005484548.novalocal
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: The key's randomart image is:
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: +---[ECDSA 256]---+
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |          .o+o.  |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |          .=.  . |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |          . =  .o|
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |         . +o. o=|
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |        S .. o.++|
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |             =++=|
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |            + E*o|
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |             *oXO|
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |            .oXXO|
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: +----[SHA256]-----+
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: Generating public/private ed25519 key pair.
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: The key fingerprint is:
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: SHA256:186fy8BI5LuLtCbjkzrWqYkSfmeHUAWPZIxZw/pxqyA root@np0005484548.novalocal
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: The key's randomart image is:
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: +--[ED25519 256]--+
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |    **.          |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |   ooo+.         |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |    ....  .      |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |   . o . o .     |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |    o o S + .    |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |.E o . . o *     |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |... o.oo. o =    |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: |.. oo*Bo.o . + . |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: | .o.*=o=o o.  =. |
Oct 13 13:02:47 np0005484548.novalocal cloud-init[898]: +----[SHA256]-----+
Oct 13 13:02:47 np0005484548.novalocal sm-notify[1132]: Version 2.5.4 starting
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Oct 13 13:02:47 np0005484548.novalocal sshd[1133]: Server listening on 0.0.0.0 port 22.
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Reached target Cloud-config availability.
Oct 13 13:02:47 np0005484548.novalocal sshd[1133]: Server listening on :: port 22.
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Reached target Network is Online.
Oct 13 13:02:47 np0005484548.novalocal sshd[1143]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Starting Apply the settings specified in cloud-config...
Oct 13 13:02:47 np0005484548.novalocal sshd[1143]: Connection reset by 38.102.83.114 port 40182 [preauth]
Oct 13 13:02:47 np0005484548.novalocal sshd[1133]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Oct 13 13:02:47 np0005484548.novalocal sshd[1158]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Starting Crash recovery kernel arming...
Oct 13 13:02:47 np0005484548.novalocal sshd[1158]: Unable to negotiate with 38.102.83.114 port 40184: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Oct 13 13:02:47 np0005484548.novalocal crond[1139]: (CRON) STARTUP (1.5.7)
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Starting OpenSSH server daemon...
Oct 13 13:02:47 np0005484548.novalocal crond[1139]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Starting Permit User Sessions...
Oct 13 13:02:47 np0005484548.novalocal crond[1139]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 48% if used.)
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Started Notify NFS peers of a restart.
Oct 13 13:02:47 np0005484548.novalocal crond[1139]: (CRON) INFO (running with inotify support)
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Finished Permit User Sessions.
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Started Command Scheduler.
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Started Getty on tty1.
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Started Serial Getty on ttyS0.
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Reached target Login Prompts.
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Started OpenSSH server daemon.
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Reached target Multi-User System.
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 13 13:02:47 np0005484548.novalocal sshd[1166]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:02:47 np0005484548.novalocal sshd[1173]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:02:47 np0005484548.novalocal sshd[1173]: Unable to negotiate with 38.102.83.114 port 40200: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Oct 13 13:02:47 np0005484548.novalocal sshd[1181]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:02:47 np0005484548.novalocal sshd[1181]: Unable to negotiate with 38.102.83.114 port 40216: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Oct 13 13:02:47 np0005484548.novalocal sshd[1194]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:02:47 np0005484548.novalocal kdumpctl[1136]: kdump: No kdump initial ramdisk found.
Oct 13 13:02:47 np0005484548.novalocal kdumpctl[1136]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Oct 13 13:02:47 np0005484548.novalocal sshd[1194]: Connection closed by 38.102.83.114 port 40230 [preauth]
Oct 13 13:02:47 np0005484548.novalocal sshd[1201]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:02:47 np0005484548.novalocal sshd[1166]: Connection closed by 38.102.83.114 port 40192 [preauth]
Oct 13 13:02:47 np0005484548.novalocal sshd[1201]: Connection reset by 38.102.83.114 port 40236 [preauth]
Oct 13 13:02:47 np0005484548.novalocal sshd[1231]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:02:47 np0005484548.novalocal sshd[1231]: fatal: mm_answer_sign: sign: error in libcrypto
Oct 13 13:02:47 np0005484548.novalocal sshd[1242]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:02:47 np0005484548.novalocal sshd[1242]: Unable to negotiate with 38.102.83.114 port 40264: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Oct 13 13:02:47 np0005484548.novalocal cloud-init[1270]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Mon, 13 Oct 2025 13:02:47 +0000. Up 10.53 seconds.
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Finished Apply the settings specified in cloud-config.
Oct 13 13:02:47 np0005484548.novalocal systemd[1]: Starting Execute cloud user/final scripts...
Oct 13 13:02:47 np0005484548.novalocal dracut[1436]: dracut-057-21.git20230214.el9
Oct 13 13:02:47 np0005484548.novalocal cloud-init[1452]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Mon, 13 Oct 2025 13:02:47 +0000. Up 10.88 seconds.
Oct 13 13:02:47 np0005484548.novalocal cloud-init[1457]: #############################################################
Oct 13 13:02:47 np0005484548.novalocal cloud-init[1459]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 13 13:02:47 np0005484548.novalocal cloud-init[1467]: 256 SHA256:XXIkP5RY9ebpgP3WGfGymFtjhu3detpz1ka07nrj2g0 root@np0005484548.novalocal (ECDSA)
Oct 13 13:02:47 np0005484548.novalocal dracut[1439]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Oct 13 13:02:47 np0005484548.novalocal cloud-init[1474]: 256 SHA256:186fy8BI5LuLtCbjkzrWqYkSfmeHUAWPZIxZw/pxqyA root@np0005484548.novalocal (ED25519)
Oct 13 13:02:47 np0005484548.novalocal cloud-init[1482]: 3072 SHA256:Gtjkk2Bu/YE4Ns4A3UyQo1WreTInqSbBBSNRrjagHz4 root@np0005484548.novalocal (RSA)
Oct 13 13:02:47 np0005484548.novalocal cloud-init[1483]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 13 13:02:47 np0005484548.novalocal cloud-init[1485]: #############################################################
Oct 13 13:02:47 np0005484548.novalocal cloud-init[1452]: Cloud-init v. 22.1-9.el9 finished at Mon, 13 Oct 2025 13:02:47 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.14 seconds
Oct 13 13:02:48 np0005484548.novalocal systemd[1]: Reloading Network Manager...
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Oct 13 13:02:48 np0005484548.novalocal NetworkManager[789]: <info>  [1760360568.0653] audit: op="reload" arg="0" pid=1575 uid=0 result="success"
Oct 13 13:02:48 np0005484548.novalocal NetworkManager[789]: <info>  [1760360568.0663] config: signal: SIGHUP (no changes from disk)
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Oct 13 13:02:48 np0005484548.novalocal systemd[1]: Reloaded Network Manager.
Oct 13 13:02:48 np0005484548.novalocal systemd[1]: Finished Execute cloud user/final scripts.
Oct 13 13:02:48 np0005484548.novalocal systemd[1]: Reached target Cloud-init target.
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'ifcfg' will not be installed, because it's in the list to be omitted!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'plymouth' will not be installed, because it's in the list to be omitted!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'resume' will not be installed, because it's in the list to be omitted!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'earlykdump' will not be installed, because it's in the list to be omitted!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: memstrack is not available
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Oct 13 13:02:48 np0005484548.novalocal chronyd[766]: Selected source 149.56.19.163 (2.rhel.pool.ntp.org)
Oct 13 13:02:48 np0005484548.novalocal chronyd[766]: System clock TAI offset set to 37 seconds
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: memstrack is not available
Oct 13 13:02:48 np0005484548.novalocal dracut[1439]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Oct 13 13:02:49 np0005484548.novalocal dracut[1439]: *** Including module: systemd ***
Oct 13 13:02:49 np0005484548.novalocal dracut[1439]: *** Including module: systemd-initrd ***
Oct 13 13:02:49 np0005484548.novalocal dracut[1439]: *** Including module: i18n ***
Oct 13 13:02:49 np0005484548.novalocal dracut[1439]: No KEYMAP configured.
Oct 13 13:02:49 np0005484548.novalocal dracut[1439]: *** Including module: drm ***
Oct 13 13:02:50 np0005484548.novalocal dracut[1439]: *** Including module: prefixdevname ***
Oct 13 13:02:50 np0005484548.novalocal dracut[1439]: *** Including module: kernel-modules ***
Oct 13 13:02:50 np0005484548.novalocal dracut[1439]: *** Including module: kernel-modules-extra ***
Oct 13 13:02:50 np0005484548.novalocal dracut[1439]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Oct 13 13:02:50 np0005484548.novalocal dracut[1439]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Oct 13 13:02:50 np0005484548.novalocal dracut[1439]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Oct 13 13:02:50 np0005484548.novalocal dracut[1439]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Oct 13 13:02:50 np0005484548.novalocal dracut[1439]: *** Including module: qemu ***
Oct 13 13:02:50 np0005484548.novalocal dracut[1439]: *** Including module: fstab-sys ***
Oct 13 13:02:50 np0005484548.novalocal dracut[1439]: *** Including module: rootfs-block ***
Oct 13 13:02:50 np0005484548.novalocal dracut[1439]: *** Including module: terminfo ***
Oct 13 13:02:50 np0005484548.novalocal dracut[1439]: *** Including module: udev-rules ***
Oct 13 13:02:51 np0005484548.novalocal dracut[1439]: Skipping udev rule: 91-permissions.rules
Oct 13 13:02:51 np0005484548.novalocal dracut[1439]: Skipping udev rule: 80-drivers-modprobe.rules
Oct 13 13:02:51 np0005484548.novalocal dracut[1439]: *** Including module: virtiofs ***
Oct 13 13:02:51 np0005484548.novalocal dracut[1439]: *** Including module: dracut-systemd ***
Oct 13 13:02:51 np0005484548.novalocal dracut[1439]: *** Including module: usrmount ***
Oct 13 13:02:51 np0005484548.novalocal dracut[1439]: *** Including module: base ***
Oct 13 13:02:51 np0005484548.novalocal dracut[1439]: *** Including module: fs-lib ***
Oct 13 13:02:51 np0005484548.novalocal dracut[1439]: *** Including module: kdumpbase ***
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]: *** Including module: microcode_ctl-fw_dir_override ***
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:   microcode_ctl module: mangling fw_dir
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: configuration "intel" is ignored
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]:     microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware"
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]: *** Including module: shutdown ***
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]: *** Including module: squash ***
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]: *** Including modules done ***
Oct 13 13:02:52 np0005484548.novalocal dracut[1439]: *** Installing kernel module dependencies ***
Oct 13 13:02:53 np0005484548.novalocal dracut[1439]: *** Installing kernel module dependencies done ***
Oct 13 13:02:53 np0005484548.novalocal dracut[1439]: *** Resolving executable dependencies ***
Oct 13 13:02:53 np0005484548.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: *** Resolving executable dependencies done ***
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: *** Hardlinking files ***
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: Mode:           real
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: Files:          1099
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: Linked:         3 files
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: Compared:       0 xattrs
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: Compared:       373 files
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: Saved:          61.04 KiB
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: Duration:       0.040885 seconds
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: *** Hardlinking files done ***
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: Could not find 'strip'. Not stripping the initramfs.
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: *** Generating early-microcode cpio image ***
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: *** Constructing AuthenticAMD.bin ***
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: *** Store current command line parameters ***
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: Stored kernel commandline:
Oct 13 13:02:54 np0005484548.novalocal dracut[1439]: No dracut internal kernel commandline stored in the initramfs
Oct 13 13:02:55 np0005484548.novalocal dracut[1439]: *** Install squash loader ***
Oct 13 13:02:55 np0005484548.novalocal dracut[1439]: *** Squashing the files inside the initramfs ***
Oct 13 13:02:56 np0005484548.novalocal dracut[1439]: *** Squashing the files inside the initramfs done ***
Oct 13 13:02:56 np0005484548.novalocal dracut[1439]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' ***
Oct 13 13:02:56 np0005484548.novalocal dracut[1439]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done ***
Oct 13 13:02:57 np0005484548.novalocal kdumpctl[1136]: kdump: kexec: loaded kdump kernel
Oct 13 13:02:57 np0005484548.novalocal kdumpctl[1136]: kdump: Starting kdump: [OK]
Oct 13 13:02:57 np0005484548.novalocal systemd[1]: Finished Crash recovery kernel arming.
Oct 13 13:02:57 np0005484548.novalocal systemd[1]: Startup finished in 1.242s (kernel) + 2.107s (initrd) + 17.367s (userspace) = 20.717s.
Oct 13 13:02:59 np0005484548.novalocal sshd[4173]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:02:59 np0005484548.novalocal sshd[4173]: Accepted publickey for zuul from 38.102.83.114 port 35268 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Oct 13 13:02:59 np0005484548.novalocal systemd[1]: Created slice User Slice of UID 1000.
Oct 13 13:02:59 np0005484548.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 13 13:02:59 np0005484548.novalocal systemd-logind[760]: New session 1 of user zuul.
Oct 13 13:02:59 np0005484548.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 13 13:02:59 np0005484548.novalocal systemd[1]: Starting User Manager for UID 1000...
Oct 13 13:02:59 np0005484548.novalocal systemd[4177]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by (uid=0)
Oct 13 13:02:59 np0005484548.novalocal systemd[4177]: Queued start job for default target Main User Target.
Oct 13 13:02:59 np0005484548.novalocal systemd[4177]: Created slice User Application Slice.
Oct 13 13:02:59 np0005484548.novalocal systemd[4177]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 13 13:02:59 np0005484548.novalocal systemd[4177]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 13:02:59 np0005484548.novalocal systemd[4177]: Reached target Paths.
Oct 13 13:02:59 np0005484548.novalocal systemd[4177]: Reached target Timers.
Oct 13 13:02:59 np0005484548.novalocal systemd[4177]: Starting D-Bus User Message Bus Socket...
Oct 13 13:02:59 np0005484548.novalocal systemd[4177]: Starting Create User's Volatile Files and Directories...
Oct 13 13:02:59 np0005484548.novalocal systemd[4177]: Listening on D-Bus User Message Bus Socket.
Oct 13 13:02:59 np0005484548.novalocal systemd[4177]: Reached target Sockets.
Oct 13 13:02:59 np0005484548.novalocal systemd[4177]: Finished Create User's Volatile Files and Directories.
Oct 13 13:02:59 np0005484548.novalocal systemd[4177]: Reached target Basic System.
Oct 13 13:02:59 np0005484548.novalocal systemd[4177]: Reached target Main User Target.
Oct 13 13:02:59 np0005484548.novalocal systemd[4177]: Startup finished in 96ms.
Oct 13 13:02:59 np0005484548.novalocal systemd[1]: Started User Manager for UID 1000.
Oct 13 13:02:59 np0005484548.novalocal systemd[1]: Started Session 1 of User zuul.
Oct 13 13:02:59 np0005484548.novalocal sshd[4173]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Oct 13 13:03:00 np0005484548.novalocal python3[4229]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 13:03:04 np0005484548.novalocal python3[4247]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 13:03:08 np0005484548.novalocal python3[4300]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 13:03:09 np0005484548.novalocal python3[4330]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 13 13:03:11 np0005484548.novalocal python3[4346]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjKugkjRYwasypXRQCjmNKsHyHQbYeza3+zMIuVtFa4xUmQUXK0ObIX3Y/0LubiVeDts2rPaGbc9m50+t0OC8pr9tHfJHL+an7sZBNDTxWgi6AhO6cmswpRXQVxr0TMJm2Yc0rp+CK6mz3/ZM7zdzR1Opkh9tZn4B22qIoMSuDScNY60hiNYzPyISgXDNKoldJTvsCOZ7PcFmgSEu63nJ5tdUgGEmNF9gfdIC6CSgSbctPtA8pJsitAi/BiV4UG6pJDkogUkJCmgebHQae79F4rKmfEKmMPHuPO7/r4fnWc4jC30NY1XMU2FcsrhxM9xkuipqFEnnhq0Ak0KA2d5NTULF+1VHO47Q6hZCvlI0WhRATmCt7deLr+etP3mD2qbeb7w/RwZFdPvXRKp/Z4GrBq+tplzNxWIodxqx5XjWvhwQvXEu0+ISfyehOO/PJ7/HypWFFgsG8pUdWtC5e3O8PiN6WfEJR4YicNADfAVzK0lGReUN8LRwRtwxh7sjfwpc= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:12 np0005484548.novalocal python3[4360]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:12 np0005484548.novalocal python3[4419]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:03:12 np0005484548.novalocal python3[4460]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760360592.3002682-210-150149784528140/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=f4ac5eb2606345a98a765b53b3e868f1_id_rsa follow=False checksum=a852b1bb9aaa7d35e14f42ee10ac51ee4296cce6 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:13 np0005484548.novalocal python3[4533]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:03:13 np0005484548.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 13 13:03:13 np0005484548.novalocal python3[4574]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760360593.28283-244-115885223277797/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=f4ac5eb2606345a98a765b53b3e868f1_id_rsa.pub follow=False checksum=1b4a9289705ee369f83be33824a5990dd03bc576 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:15 np0005484548.novalocal python3[4604]: ansible-ping Invoked with data=pong
Oct 13 13:03:16 np0005484548.novalocal python3[4618]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 13:03:17 np0005484548.novalocal python3[4671]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 13 13:03:18 np0005484548.novalocal python3[4693]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:18 np0005484548.novalocal python3[4707]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:19 np0005484548.novalocal python3[4721]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:19 np0005484548.novalocal python3[4735]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:19 np0005484548.novalocal python3[4749]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:19 np0005484548.novalocal python3[4763]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:21 np0005484548.novalocal sudo[4777]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dtyzeghaerkjsipduwvfnsqngfbfnxsu ; /usr/bin/python3
Oct 13 13:03:21 np0005484548.novalocal sudo[4777]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:03:21 np0005484548.novalocal python3[4779]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:21 np0005484548.novalocal sudo[4777]: pam_unix(sudo:session): session closed for user root
Oct 13 13:03:21 np0005484548.novalocal sudo[4825]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-puhhaqlsoytnrqniouwjivsvdzfbxwgj ; /usr/bin/python3
Oct 13 13:03:21 np0005484548.novalocal sudo[4825]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:03:22 np0005484548.novalocal python3[4827]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:03:22 np0005484548.novalocal sudo[4825]: pam_unix(sudo:session): session closed for user root
Oct 13 13:03:22 np0005484548.novalocal sudo[4868]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wwzxaporwnnpagjfkibocacdpvqspety ; /usr/bin/python3
Oct 13 13:03:22 np0005484548.novalocal sudo[4868]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:03:22 np0005484548.novalocal python3[4871]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760360601.7873383-24-62981415952913/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:22 np0005484548.novalocal sudo[4868]: pam_unix(sudo:session): session closed for user root
Oct 13 13:03:23 np0005484548.novalocal python3[4899]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:23 np0005484548.novalocal python3[4913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:23 np0005484548.novalocal python3[4927]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:24 np0005484548.novalocal python3[4941]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:24 np0005484548.novalocal python3[4955]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:24 np0005484548.novalocal python3[4969]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:24 np0005484548.novalocal python3[4983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:25 np0005484548.novalocal python3[4997]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:25 np0005484548.novalocal python3[5011]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:25 np0005484548.novalocal python3[5025]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:25 np0005484548.novalocal python3[5039]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:26 np0005484548.novalocal python3[5053]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:26 np0005484548.novalocal python3[5067]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:26 np0005484548.novalocal python3[5081]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:26 np0005484548.novalocal python3[5095]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:27 np0005484548.novalocal python3[5109]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:27 np0005484548.novalocal python3[5123]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:27 np0005484548.novalocal python3[5137]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:28 np0005484548.novalocal python3[5151]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:28 np0005484548.novalocal python3[5165]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:28 np0005484548.novalocal python3[5179]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:28 np0005484548.novalocal python3[5193]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:29 np0005484548.novalocal python3[5207]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:29 np0005484548.novalocal python3[5221]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:29 np0005484548.novalocal python3[5235]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:29 np0005484548.novalocal python3[5249]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:03:32 np0005484548.novalocal sudo[5263]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lniuxuyuyjjzxfkzcwwtxfgfaiofqcqf ; /usr/bin/python3
Oct 13 13:03:32 np0005484548.novalocal sudo[5263]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:03:33 np0005484548.novalocal python3[5265]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 13 13:03:33 np0005484548.novalocal systemd[1]: Starting Time & Date Service...
Oct 13 13:03:33 np0005484548.novalocal systemd[1]: Started Time & Date Service.
Oct 13 13:03:33 np0005484548.novalocal systemd-timedated[5267]: Changed time zone to 'UTC' (UTC).
Oct 13 13:03:33 np0005484548.novalocal sudo[5263]: pam_unix(sudo:session): session closed for user root
Oct 13 13:03:33 np0005484548.novalocal sudo[5284]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ftvixnszdturupqcgienattutmvyprts ; /usr/bin/python3
Oct 13 13:03:33 np0005484548.novalocal sudo[5284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:03:33 np0005484548.novalocal python3[5286]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:33 np0005484548.novalocal sudo[5284]: pam_unix(sudo:session): session closed for user root
Oct 13 13:03:34 np0005484548.novalocal python3[5332]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:03:34 np0005484548.novalocal python3[5373]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760360613.7459826-156-7626913544435/source _original_basename=tmpu9kyae5q follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:34 np0005484548.novalocal python3[5433]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:03:35 np0005484548.novalocal python3[5474]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760360614.6794257-187-7619962088299/source _original_basename=tmpcnizu4bi follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:36 np0005484548.novalocal sudo[5534]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-meuweppuyxvxetehktvlzccnhihlvlen ; /usr/bin/python3
Oct 13 13:03:36 np0005484548.novalocal sudo[5534]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:03:36 np0005484548.novalocal python3[5536]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:03:36 np0005484548.novalocal sudo[5534]: pam_unix(sudo:session): session closed for user root
Oct 13 13:03:36 np0005484548.novalocal sudo[5577]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vwavmmmzqvitubcvsryekjdtlyhqmqfs ; /usr/bin/python3
Oct 13 13:03:36 np0005484548.novalocal sudo[5577]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:03:36 np0005484548.novalocal python3[5579]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760360615.989496-235-123265336274743/source _original_basename=tmpo79b3keg follow=False checksum=3a1440758208a7ff90a6a51d370205d9deb30bcc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:36 np0005484548.novalocal sudo[5577]: pam_unix(sudo:session): session closed for user root
Oct 13 13:03:37 np0005484548.novalocal python3[5607]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:03:37 np0005484548.novalocal python3[5623]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:03:37 np0005484548.novalocal sudo[5671]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vpgenkerhiuwudqtsyqmcegymoylpoww ; /usr/bin/python3
Oct 13 13:03:37 np0005484548.novalocal sudo[5671]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:03:38 np0005484548.novalocal python3[5673]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:03:38 np0005484548.novalocal sudo[5671]: pam_unix(sudo:session): session closed for user root
Oct 13 13:03:38 np0005484548.novalocal sudo[5714]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fxyhvfkfogxznvumxeiujagruvdktwms ; /usr/bin/python3
Oct 13 13:03:38 np0005484548.novalocal sudo[5714]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:03:38 np0005484548.novalocal python3[5716]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760360617.7885869-277-108440392245769/source _original_basename=tmpkjqnnj97 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:38 np0005484548.novalocal sudo[5714]: pam_unix(sudo:session): session closed for user root
Oct 13 13:03:38 np0005484548.novalocal sudo[5745]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bezgngjbldvpkxxqkruduyvyrjbdgitv ; /usr/bin/python3
Oct 13 13:03:38 np0005484548.novalocal sudo[5745]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:03:39 np0005484548.novalocal python3[5747]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-58fa-0c49-00000000001d-1-standalone zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:03:39 np0005484548.novalocal sudo[5745]: pam_unix(sudo:session): session closed for user root
Oct 13 13:03:39 np0005484548.novalocal python3[5765]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ec2-ffbe-58fa-0c49-00000000001e-1-standalone zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 13 13:03:40 np0005484548.novalocal python3[5783]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:57 np0005484548.novalocal sudo[5797]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qbvckhebxfhnqwtbtpcfinpobjloughq ; /usr/bin/python3
Oct 13 13:03:57 np0005484548.novalocal sudo[5797]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:03:57 np0005484548.novalocal python3[5799]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:03:57 np0005484548.novalocal sudo[5797]: pam_unix(sudo:session): session closed for user root
Oct 13 13:04:03 np0005484548.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 13 13:04:57 np0005484548.novalocal sshd[4186]: Received disconnect from 38.102.83.114 port 35268:11: disconnected by user
Oct 13 13:04:57 np0005484548.novalocal sshd[4186]: Disconnected from user zuul 38.102.83.114 port 35268
Oct 13 13:04:57 np0005484548.novalocal sshd[4173]: pam_unix(sshd:session): session closed for user zuul
Oct 13 13:04:57 np0005484548.novalocal systemd-logind[760]: Session 1 logged out. Waiting for processes to exit.
Oct 13 13:05:09 np0005484548.novalocal systemd[4177]: Starting Mark boot as successful...
Oct 13 13:05:09 np0005484548.novalocal systemd[4177]: Finished Mark boot as successful.
Oct 13 13:05:24 np0005484548.novalocal sshd[5806]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:05:24 np0005484548.novalocal sshd[5806]: error: kex_exchange_identification: client sent invalid protocol identifier "GET /squid-internal-mgr/cachemgr.cgi HTTP/1.1"
Oct 13 13:05:24 np0005484548.novalocal sshd[5806]: banner exchange: Connection from 174.138.52.189 port 54330: invalid format
Oct 13 13:05:31 np0005484548.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000
Oct 13 13:05:31 np0005484548.novalocal kernel: pci 0000:00:07.0: reg 0x10: [io  0x0000-0x003f]
Oct 13 13:05:31 np0005484548.novalocal kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff]
Oct 13 13:05:31 np0005484548.novalocal kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
Oct 13 13:05:31 np0005484548.novalocal kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref]
Oct 13 13:05:31 np0005484548.novalocal kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref]
Oct 13 13:05:31 np0005484548.novalocal kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref]
Oct 13 13:05:31 np0005484548.novalocal kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff]
Oct 13 13:05:31 np0005484548.novalocal kernel: pci 0000:00:07.0: BAR 0: assigned [io  0x1000-0x103f]
Oct 13 13:05:31 np0005484548.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 13 13:05:31 np0005484548.novalocal NetworkManager[789]: <info>  [1760360731.3736] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 13 13:05:31 np0005484548.novalocal systemd-udevd[5808]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 13:05:31 np0005484548.novalocal NetworkManager[789]: <info>  [1760360731.3873] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Oct 13 13:05:31 np0005484548.novalocal NetworkManager[789]: <info>  [1760360731.3897] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 13 13:05:31 np0005484548.novalocal NetworkManager[789]: <info>  [1760360731.3900] device (eth1): carrier: link connected
Oct 13 13:05:31 np0005484548.novalocal NetworkManager[789]: <info>  [1760360731.3902] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Oct 13 13:05:31 np0005484548.novalocal NetworkManager[789]: <info>  [1760360731.3906] policy: auto-activating connection 'Wired connection 1' (1411bd80-06ae-3c92-a323-d34fe33ef77c)
Oct 13 13:05:31 np0005484548.novalocal NetworkManager[789]: <info>  [1760360731.3910] device (eth1): Activation: starting connection 'Wired connection 1' (1411bd80-06ae-3c92-a323-d34fe33ef77c)
Oct 13 13:05:31 np0005484548.novalocal NetworkManager[789]: <info>  [1760360731.3911] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Oct 13 13:05:31 np0005484548.novalocal NetworkManager[789]: <info>  [1760360731.3913] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Oct 13 13:05:31 np0005484548.novalocal NetworkManager[789]: <info>  [1760360731.3917] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Oct 13 13:05:31 np0005484548.novalocal NetworkManager[789]: <info>  [1760360731.3920] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 13 13:05:31 np0005484548.novalocal sshd[5810]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:05:31 np0005484548.novalocal sshd[5810]: Accepted publickey for zuul from 38.102.83.114 port 42926 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:05:31 np0005484548.novalocal systemd-logind[760]: New session 3 of user zuul.
Oct 13 13:05:31 np0005484548.novalocal systemd[1]: Started Session 3 of User zuul.
Oct 13 13:05:31 np0005484548.novalocal sshd[5810]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Oct 13 13:05:32 np0005484548.novalocal python3[5827]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-5ccb-fb54-0000000001ca-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:05:32 np0005484548.novalocal kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Oct 13 13:05:41 np0005484548.novalocal sudo[5875]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rkpcyibkxulhyudtgqewtsooqbimokhh ; OS_CLOUD=vexxhost /usr/bin/python3
Oct 13 13:05:41 np0005484548.novalocal sudo[5875]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:05:42 np0005484548.novalocal python3[5877]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:05:42 np0005484548.novalocal sudo[5875]: pam_unix(sudo:session): session closed for user root
Oct 13 13:05:42 np0005484548.novalocal sudo[5918]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mgahkaskvvewevaardffzhzxibxcgtyp ; OS_CLOUD=vexxhost /usr/bin/python3
Oct 13 13:05:42 np0005484548.novalocal sudo[5918]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:05:42 np0005484548.novalocal python3[5920]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760360741.8695831-268-66811606718321/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=92086e9c3dcaba928761e0c19a9547a15b19860c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:05:42 np0005484548.novalocal sudo[5918]: pam_unix(sudo:session): session closed for user root
Oct 13 13:05:42 np0005484548.novalocal sudo[5948]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qcjnbpxzdvrtqnqaksnggjphyqisivuk ; OS_CLOUD=vexxhost /usr/bin/python3
Oct 13 13:05:42 np0005484548.novalocal sudo[5948]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:05:43 np0005484548.novalocal python3[5950]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 13:05:43 np0005484548.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 13 13:05:43 np0005484548.novalocal systemd[1]: Stopped Network Manager Wait Online.
Oct 13 13:05:43 np0005484548.novalocal systemd[1]: Stopping Network Manager Wait Online...
Oct 13 13:05:43 np0005484548.novalocal systemd[1]: Stopping Network Manager...
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360743.1257] caught SIGTERM, shutting down normally.
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360743.1344] dhcp4 (eth0): canceled DHCP transaction
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360743.1345] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360743.1345] dhcp4 (eth0): state changed no lease
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360743.1349] manager: NetworkManager state is now CONNECTING
Oct 13 13:05:43 np0005484548.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360743.1444] dhcp4 (eth1): canceled DHCP transaction
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360743.1445] dhcp4 (eth1): state changed no lease
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[789]: <info>  [1760360743.1525] exiting (success)
Oct 13 13:05:43 np0005484548.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 13 13:05:43 np0005484548.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 13 13:05:43 np0005484548.novalocal systemd[1]: Stopped Network Manager.
Oct 13 13:05:43 np0005484548.novalocal systemd[1]: NetworkManager.service: Consumed 1.090s CPU time.
Oct 13 13:05:43 np0005484548.novalocal systemd[1]: Starting Network Manager...
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.2093] NetworkManager (version 1.42.2-1.el9) is starting... (after a restart, boot:d585dfa6-4d92-480a-a639-e5f89c370e7c)
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.2096] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Oct 13 13:05:43 np0005484548.novalocal systemd[1]: Started Network Manager.
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.2130] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 13 13:05:43 np0005484548.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.2196] manager[0x556af0790090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 13 13:05:43 np0005484548.novalocal systemd[1]: Starting Hostname Service...
Oct 13 13:05:43 np0005484548.novalocal sudo[5948]: pam_unix(sudo:session): session closed for user root
Oct 13 13:05:43 np0005484548.novalocal systemd[1]: Started Hostname Service.
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3045] hostname: hostname: using hostnamed
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3046] hostname: static hostname changed from (none) to "np0005484548.novalocal"
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3052] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3061] manager[0x556af0790090]: rfkill: Wi-Fi hardware radio set enabled
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3061] manager[0x556af0790090]: rfkill: WWAN hardware radio set enabled
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3102] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3103] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3104] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3105] manager: Networking is enabled by state file
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3116] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3117] settings: Loaded settings plugin: keyfile (internal)
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3163] dhcp: init: Using DHCP client 'internal'
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3167] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3175] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3182] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3195] device (lo): Activation: starting connection 'lo' (77fe12d1-f185-4d37-80c9-1be3be197acf)
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3204] device (eth0): carrier: link connected
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3210] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3217] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3218] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3228] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3238] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3245] device (eth1): carrier: link connected
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3251] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3259] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (1411bd80-06ae-3c92-a323-d34fe33ef77c) (indicated)
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3259] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3267] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3278] device (eth1): Activation: starting connection 'Wired connection 1' (1411bd80-06ae-3c92-a323-d34fe33ef77c)
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3306] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3310] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3315] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3318] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3323] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3327] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3329] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3332] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3338] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3342] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3359] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3363] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3410] dhcp4 (eth0): state changed new lease, address=38.102.83.151
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3417] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3507] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3511] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3524] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3531] device (lo): Activation: successful, device activated.
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3538] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3540] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3545] manager: NetworkManager state is now CONNECTED_SITE
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3548] device (eth0): Activation: successful, device activated.
Oct 13 13:05:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360743.3554] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 13 13:05:43 np0005484548.novalocal python3[6017]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-5ccb-fb54-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:05:53 np0005484548.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 13 13:06:02 np0005484548.novalocal sudo[6079]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mhiatbpcwwfbwmemhuobfrqwgfllesdc ; OS_CLOUD=vexxhost /usr/bin/python3
Oct 13 13:06:02 np0005484548.novalocal sudo[6079]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:06:02 np0005484548.novalocal python3[6081]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:06:02 np0005484548.novalocal sudo[6079]: pam_unix(sudo:session): session closed for user root
Oct 13 13:06:03 np0005484548.novalocal sudo[6122]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sgwbieiepikyewvltfefwyvymihsycmv ; OS_CLOUD=vexxhost /usr/bin/python3
Oct 13 13:06:03 np0005484548.novalocal sudo[6122]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:06:03 np0005484548.novalocal python3[6124]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760360762.567256-302-83261774123225/source _original_basename=tmp5s0ov7h9 follow=False checksum=10a0d52c9d693008fdb6a86a630ee8dace55a901 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:06:03 np0005484548.novalocal sudo[6122]: pam_unix(sudo:session): session closed for user root
Oct 13 13:06:04 np0005484548.novalocal sshd[5810]: pam_unix(sshd:session): session closed for user zuul
Oct 13 13:06:04 np0005484548.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Oct 13 13:06:04 np0005484548.novalocal systemd[1]: session-3.scope: Consumed 2.076s CPU time.
Oct 13 13:06:04 np0005484548.novalocal systemd-logind[760]: Session 3 logged out. Waiting for processes to exit.
Oct 13 13:06:04 np0005484548.novalocal systemd-logind[760]: Removed session 3.
Oct 13 13:06:13 np0005484548.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 13 13:06:28 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360788.8117] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Oct 13 13:06:28 np0005484548.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 13 13:06:28 np0005484548.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 13 13:06:28 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360788.8318] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Oct 13 13:06:28 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360788.8321] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Oct 13 13:06:28 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360788.8326] device (eth1): Activation: successful, device activated.
Oct 13 13:06:28 np0005484548.novalocal NetworkManager[5962]: <info>  [1760360788.8331] manager: startup complete
Oct 13 13:06:28 np0005484548.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 13 13:06:38 np0005484548.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 13 13:07:35 np0005484548.novalocal sshd[6156]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:07:36 np0005484548.novalocal sshd[6156]: Received disconnect from 193.46.255.217 port 32942:11:  [preauth]
Oct 13 13:07:36 np0005484548.novalocal sshd[6156]: Disconnected from authenticating user root 193.46.255.217 port 32942 [preauth]
Oct 13 13:07:50 np0005484548.novalocal sshd[6158]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:07:50 np0005484548.novalocal sshd[6158]: error: kex_exchange_identification: banner line contains invalid characters
Oct 13 13:07:50 np0005484548.novalocal sshd[6158]: banner exchange: Connection from 101.200.236.207 port 54373: invalid format
Oct 13 13:08:04 np0005484548.novalocal systemd[1]: Unmounting EFI System Partition Automount...
Oct 13 13:08:04 np0005484548.novalocal systemd[1]: efi.mount: Deactivated successfully.
Oct 13 13:08:04 np0005484548.novalocal systemd[1]: Unmounted EFI System Partition Automount.
Oct 13 13:08:04 np0005484548.novalocal systemd[4177]: Created slice User Background Tasks Slice.
Oct 13 13:08:04 np0005484548.novalocal systemd[4177]: Starting Cleanup of User's Temporary Files and Directories...
Oct 13 13:08:04 np0005484548.novalocal systemd[4177]: Finished Cleanup of User's Temporary Files and Directories.
Oct 13 13:11:04 np0005484548.novalocal sshd[6164]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:11:04 np0005484548.novalocal sshd[6164]: Accepted publickey for zuul from 38.102.83.114 port 52410 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:11:04 np0005484548.novalocal systemd-logind[760]: New session 4 of user zuul.
Oct 13 13:11:04 np0005484548.novalocal systemd[1]: Started Session 4 of User zuul.
Oct 13 13:11:04 np0005484548.novalocal sshd[6164]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Oct 13 13:11:04 np0005484548.novalocal sudo[6181]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eopyecpzmueiusnexuvaotoacvyonhbz ; /usr/bin/python3
Oct 13 13:11:04 np0005484548.novalocal sudo[6181]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:11:04 np0005484548.novalocal python3[6183]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-4420-96b5-000000001ce8-1-standalone zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:11:04 np0005484548.novalocal sudo[6181]: pam_unix(sudo:session): session closed for user root
Oct 13 13:11:05 np0005484548.novalocal sudo[6200]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-inlbevmvwlbkuxtautouvziyamunuyyr ; /usr/bin/python3
Oct 13 13:11:05 np0005484548.novalocal sudo[6200]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:11:05 np0005484548.novalocal python3[6202]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:11:05 np0005484548.novalocal sudo[6200]: pam_unix(sudo:session): session closed for user root
Oct 13 13:11:05 np0005484548.novalocal sudo[6216]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-snhluxlclnvfbwtawbisonpjuccrhpcz ; /usr/bin/python3
Oct 13 13:11:05 np0005484548.novalocal sudo[6216]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:11:06 np0005484548.novalocal python3[6218]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:11:06 np0005484548.novalocal sudo[6216]: pam_unix(sudo:session): session closed for user root
Oct 13 13:11:06 np0005484548.novalocal sudo[6232]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-msncjngkpthitccfmmnksrdiawqqvuub ; /usr/bin/python3
Oct 13 13:11:06 np0005484548.novalocal sudo[6232]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:11:06 np0005484548.novalocal python3[6234]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:11:06 np0005484548.novalocal sudo[6232]: pam_unix(sudo:session): session closed for user root
Oct 13 13:11:06 np0005484548.novalocal sudo[6248]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wfscyailwrgberyszvbktqrbhbnbjxqr ; /usr/bin/python3
Oct 13 13:11:06 np0005484548.novalocal sudo[6248]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:11:06 np0005484548.novalocal python3[6250]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:11:06 np0005484548.novalocal sudo[6248]: pam_unix(sudo:session): session closed for user root
Oct 13 13:11:06 np0005484548.novalocal sudo[6264]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ehbrohyffacgoomtywvcbazsguttyegq ; /usr/bin/python3
Oct 13 13:11:06 np0005484548.novalocal sudo[6264]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:11:07 np0005484548.novalocal python3[6266]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:11:07 np0005484548.novalocal python3[6266]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 13 13:11:07 np0005484548.novalocal sudo[6264]: pam_unix(sudo:session): session closed for user root
Oct 13 13:11:07 np0005484548.novalocal sudo[6280]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aiigwvujkwphkladjaaetomnrjghaivu ; /usr/bin/python3
Oct 13 13:11:07 np0005484548.novalocal sudo[6280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:11:07 np0005484548.novalocal python3[6282]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 13:11:07 np0005484548.novalocal systemd[1]: Reloading.
Oct 13 13:11:07 np0005484548.novalocal systemd-rc-local-generator[6299]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:11:08 np0005484548.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:11:08 np0005484548.novalocal sudo[6280]: pam_unix(sudo:session): session closed for user root
Oct 13 13:11:09 np0005484548.novalocal sudo[6326]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mcmgmzibvuyxzszjjhqbpifmdraomulo ; /usr/bin/python3
Oct 13 13:11:09 np0005484548.novalocal sudo[6326]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:11:09 np0005484548.novalocal python3[6328]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 13 13:11:09 np0005484548.novalocal sudo[6326]: pam_unix(sudo:session): session closed for user root
Oct 13 13:11:09 np0005484548.novalocal sudo[6342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rxzlspepsgumzwmkcdysmlntcjvzsmdl ; /usr/bin/python3
Oct 13 13:11:09 np0005484548.novalocal sudo[6342]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:11:09 np0005484548.novalocal python3[6344]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:11:09 np0005484548.novalocal sudo[6342]: pam_unix(sudo:session): session closed for user root
Oct 13 13:11:09 np0005484548.novalocal sudo[6360]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-injtjzfrlpzqhembjyfjihmdqgjihhae ; /usr/bin/python3
Oct 13 13:11:09 np0005484548.novalocal sudo[6360]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:11:10 np0005484548.novalocal python3[6362]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:11:10 np0005484548.novalocal sudo[6360]: pam_unix(sudo:session): session closed for user root
Oct 13 13:11:10 np0005484548.novalocal sudo[6378]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hcibxoahbxmeihsfxlcsxjxpzvyrsiyd ; /usr/bin/python3
Oct 13 13:11:10 np0005484548.novalocal sudo[6378]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:11:10 np0005484548.novalocal python3[6380]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:11:10 np0005484548.novalocal sudo[6378]: pam_unix(sudo:session): session closed for user root
Oct 13 13:11:10 np0005484548.novalocal sudo[6396]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-briggwhangliwlghvjzuujgijvymaxmp ; /usr/bin/python3
Oct 13 13:11:10 np0005484548.novalocal sudo[6396]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:11:10 np0005484548.novalocal python3[6398]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:11:10 np0005484548.novalocal sudo[6396]: pam_unix(sudo:session): session closed for user root
Oct 13 13:11:11 np0005484548.novalocal python3[6415]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-4420-96b5-000000001cee-1-standalone zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:11:11 np0005484548.novalocal python3[6435]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:11:13 np0005484548.novalocal sshd[6164]: pam_unix(sshd:session): session closed for user zuul
Oct 13 13:11:13 np0005484548.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Oct 13 13:11:13 np0005484548.novalocal systemd[1]: session-4.scope: Consumed 3.151s CPU time.
Oct 13 13:11:13 np0005484548.novalocal systemd-logind[760]: Session 4 logged out. Waiting for processes to exit.
Oct 13 13:11:13 np0005484548.novalocal systemd-logind[760]: Removed session 4.
Oct 13 13:12:26 np0005484548.novalocal sshd[6443]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:12:27 np0005484548.novalocal sshd[6443]: Accepted publickey for zuul from 38.102.83.114 port 53106 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:12:27 np0005484548.novalocal systemd-logind[760]: New session 5 of user zuul.
Oct 13 13:12:27 np0005484548.novalocal systemd[1]: Started Session 5 of User zuul.
Oct 13 13:12:27 np0005484548.novalocal sshd[6443]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Oct 13 13:12:27 np0005484548.novalocal sudo[6460]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xmlshjnuhutuakisncgcccafowaabfdt ; /usr/bin/python3
Oct 13 13:12:27 np0005484548.novalocal sudo[6460]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:12:27 np0005484548.novalocal systemd[1]: Starting RHSM dbus service...
Oct 13 13:12:28 np0005484548.novalocal systemd[1]: Started RHSM dbus service.
Oct 13 13:12:28 np0005484548.novalocal rhsm-service[6467]:  INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 13 13:12:28 np0005484548.novalocal rhsm-service[6467]:  INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 13 13:12:28 np0005484548.novalocal rhsm-service[6467]:  INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 13 13:12:28 np0005484548.novalocal rhsm-service[6467]:  INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 13 13:12:31 np0005484548.novalocal rhsm-service[6467]:  INFO [subscription_manager.managerlib:90] Consumer created: np0005484548.novalocal (8ba5dc1a-e4d1-4e93-afbc-cee2f5a7a9d3)
Oct 13 13:12:31 np0005484548.novalocal subscription-manager[6467]: Registered system with identity: 8ba5dc1a-e4d1-4e93-afbc-cee2f5a7a9d3
Oct 13 13:12:31 np0005484548.novalocal rhsm-service[6467]:  INFO [subscription_manager.entcertlib:131] certs updated:
Oct 13 13:12:31 np0005484548.novalocal rhsm-service[6467]: Total updates: 1
Oct 13 13:12:31 np0005484548.novalocal rhsm-service[6467]: Found (local) serial# []
Oct 13 13:12:31 np0005484548.novalocal rhsm-service[6467]: Expected (UEP) serial# [8052638211762784002]
Oct 13 13:12:31 np0005484548.novalocal rhsm-service[6467]: Added (new)
Oct 13 13:12:31 np0005484548.novalocal rhsm-service[6467]:   [sn:8052638211762784002 ( Content Access,) @ /etc/pki/entitlement/8052638211762784002.pem]
Oct 13 13:12:31 np0005484548.novalocal rhsm-service[6467]: Deleted (rogue):
Oct 13 13:12:31 np0005484548.novalocal rhsm-service[6467]:   <NONE>
Oct 13 13:12:31 np0005484548.novalocal subscription-manager[6467]: Added subscription for 'Content Access' contract 'None'
Oct 13 13:12:31 np0005484548.novalocal subscription-manager[6467]: Added subscription for product ' Content Access'
Oct 13 13:12:32 np0005484548.novalocal rhsm-service[6467]:  INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 13 13:12:32 np0005484548.novalocal rhsm-service[6467]:  INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Oct 13 13:12:32 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:12:33 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:12:33 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:12:33 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:12:33 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:12:34 np0005484548.novalocal sudo[6460]: pam_unix(sudo:session): session closed for user root
Oct 13 13:12:34 np0005484548.novalocal python3[6559]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ec2-ffbe-9b3c-69ac-000000000007-1-standalone zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:12:35 np0005484548.novalocal sudo[6576]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-epqrnpxenvcclmgiskdvqsvxvisskngm ; /usr/bin/python3
Oct 13 13:12:35 np0005484548.novalocal sudo[6576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:12:35 np0005484548.novalocal python3[6578]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:13:06 np0005484548.novalocal setsebool[6653]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 13 13:13:06 np0005484548.novalocal setsebool[6653]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 13 13:13:14 np0005484548.novalocal kernel: SELinux:  Converting 408 SID table entries...
Oct 13 13:13:14 np0005484548.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:13:14 np0005484548.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 13 13:13:14 np0005484548.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:13:14 np0005484548.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:13:14 np0005484548.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:13:14 np0005484548.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:13:14 np0005484548.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:13:27 np0005484548.novalocal dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=3 res=1
Oct 13 13:13:27 np0005484548.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 13:13:27 np0005484548.novalocal systemd[1]: Starting man-db-cache-update.service...
Oct 13 13:13:27 np0005484548.novalocal systemd[1]: Reloading.
Oct 13 13:13:27 np0005484548.novalocal systemd-rc-local-generator[7485]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:13:27 np0005484548.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:13:27 np0005484548.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 13:13:28 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:13:28 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:13:28 np0005484548.novalocal sudo[6576]: pam_unix(sudo:session): session closed for user root
Oct 13 13:13:28 np0005484548.novalocal sudo[9386]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqxekacuudbjkoqfiikxivvpdfxinari ; /usr/bin/python3
Oct 13 13:13:28 np0005484548.novalocal sudo[9386]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:13:29 np0005484548.novalocal podman[9668]: 2025-10-13 13:13:29.082243802 +0000 UTC m=+0.128305548 system refresh
Oct 13 13:13:29 np0005484548.novalocal sudo[9386]: pam_unix(sudo:session): session closed for user root
Oct 13 13:13:29 np0005484548.novalocal systemd[4177]: Starting D-Bus User Message Bus...
Oct 13 13:13:29 np0005484548.novalocal dbus-broker-launch[11024]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 13 13:13:29 np0005484548.novalocal dbus-broker-launch[11024]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 13 13:13:29 np0005484548.novalocal systemd[4177]: Started D-Bus User Message Bus.
Oct 13 13:13:29 np0005484548.novalocal dbus-broker-lau[11024]: Ready
Oct 13 13:13:29 np0005484548.novalocal systemd[4177]: selinux: avc:  op=load_policy lsm=selinux seqno=3 res=1
Oct 13 13:13:29 np0005484548.novalocal systemd[4177]: Created slice Slice /user.
Oct 13 13:13:29 np0005484548.novalocal systemd[4177]: podman-10872.scope: unit configures an IP firewall, but not running as root.
Oct 13 13:13:29 np0005484548.novalocal systemd[4177]: (This warning is only shown for the first unit using IP firewalling.)
Oct 13 13:13:29 np0005484548.novalocal systemd[4177]: Started podman-10872.scope.
Oct 13 13:13:30 np0005484548.novalocal systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 13:13:30 np0005484548.novalocal systemd[4177]: Started podman-pause-50b95cda.scope.
Oct 13 13:13:30 np0005484548.novalocal sshd[6443]: pam_unix(sshd:session): session closed for user zuul
Oct 13 13:13:30 np0005484548.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Oct 13 13:13:30 np0005484548.novalocal systemd[1]: session-5.scope: Consumed 50.510s CPU time.
Oct 13 13:13:30 np0005484548.novalocal systemd-logind[760]: Session 5 logged out. Waiting for processes to exit.
Oct 13 13:13:30 np0005484548.novalocal systemd-logind[760]: Removed session 5.
Oct 13 13:13:35 np0005484548.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 13:13:35 np0005484548.novalocal systemd[1]: Finished man-db-cache-update.service.
Oct 13 13:13:35 np0005484548.novalocal systemd[1]: man-db-cache-update.service: Consumed 9.968s CPU time.
Oct 13 13:13:35 np0005484548.novalocal systemd[1]: run-rf8d747f354af4656acc4b11057d5e303.service: Deactivated successfully.
Oct 13 13:13:46 np0005484548.novalocal sshd[18309]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:13:46 np0005484548.novalocal sshd[18310]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:13:46 np0005484548.novalocal sshd[18308]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:13:46 np0005484548.novalocal sshd[18311]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:13:46 np0005484548.novalocal sshd[18312]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:13:46 np0005484548.novalocal sshd[18310]: Connection closed by 38.102.83.198 port 45806 [preauth]
Oct 13 13:13:46 np0005484548.novalocal sshd[18308]: Connection closed by 38.102.83.198 port 45814 [preauth]
Oct 13 13:13:46 np0005484548.novalocal sshd[18311]: Unable to negotiate with 38.102.83.198 port 45830: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 13 13:13:46 np0005484548.novalocal sshd[18309]: Unable to negotiate with 38.102.83.198 port 45842: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 13 13:13:46 np0005484548.novalocal sshd[18312]: Unable to negotiate with 38.102.83.198 port 45856: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 13 13:13:50 np0005484548.novalocal sshd[18318]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:13:50 np0005484548.novalocal sshd[18318]: Accepted publickey for zuul from 38.102.83.114 port 47202 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:13:50 np0005484548.novalocal systemd-logind[760]: New session 6 of user zuul.
Oct 13 13:13:50 np0005484548.novalocal systemd[1]: Started Session 6 of User zuul.
Oct 13 13:13:50 np0005484548.novalocal sshd[18318]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Oct 13 13:13:50 np0005484548.novalocal python3[18335]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBnXeKKX2J9whySBr04BzMODHc0SQ9h3Lq/INLiBXf090pPe5ba3to6L/ArqbP5ZcieDfRJgK+V9IDAQyZ15z2o= zuul@np0005484546.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:13:51 np0005484548.novalocal sudo[18349]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vtqaktovphgmghcddhcbaqwxejqcblmg ; /usr/bin/python3
Oct 13 13:13:51 np0005484548.novalocal sudo[18349]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:13:51 np0005484548.novalocal python3[18351]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBnXeKKX2J9whySBr04BzMODHc0SQ9h3Lq/INLiBXf090pPe5ba3to6L/ArqbP5ZcieDfRJgK+V9IDAQyZ15z2o= zuul@np0005484546.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:13:51 np0005484548.novalocal sudo[18349]: pam_unix(sudo:session): session closed for user root
Oct 13 13:13:52 np0005484548.novalocal sshd[18318]: pam_unix(sshd:session): session closed for user zuul
Oct 13 13:13:52 np0005484548.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Oct 13 13:13:52 np0005484548.novalocal systemd-logind[760]: Session 6 logged out. Waiting for processes to exit.
Oct 13 13:13:52 np0005484548.novalocal systemd-logind[760]: Removed session 6.
Oct 13 13:15:08 np0005484548.novalocal sshd[18352]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:15:08 np0005484548.novalocal sshd[18352]: Received disconnect from 193.46.255.20 port 34252:11:  [preauth]
Oct 13 13:15:08 np0005484548.novalocal sshd[18352]: Disconnected from authenticating user root 193.46.255.20 port 34252 [preauth]
Oct 13 13:15:19 np0005484548.novalocal sshd[18355]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:15:19 np0005484548.novalocal sshd[18355]: Accepted publickey for zuul from 38.102.83.114 port 57844 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:15:19 np0005484548.novalocal systemd-logind[760]: New session 7 of user zuul.
Oct 13 13:15:19 np0005484548.novalocal systemd[1]: Started Session 7 of User zuul.
Oct 13 13:15:19 np0005484548.novalocal sshd[18355]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Oct 13 13:15:19 np0005484548.novalocal sudo[18372]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kochdljrabdldainuvbgsahnmeivbjwh ; /usr/bin/python3
Oct 13 13:15:19 np0005484548.novalocal sudo[18372]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:15:19 np0005484548.novalocal python3[18374]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjKugkjRYwasypXRQCjmNKsHyHQbYeza3+zMIuVtFa4xUmQUXK0ObIX3Y/0LubiVeDts2rPaGbc9m50+t0OC8pr9tHfJHL+an7sZBNDTxWgi6AhO6cmswpRXQVxr0TMJm2Yc0rp+CK6mz3/ZM7zdzR1Opkh9tZn4B22qIoMSuDScNY60hiNYzPyISgXDNKoldJTvsCOZ7PcFmgSEu63nJ5tdUgGEmNF9gfdIC6CSgSbctPtA8pJsitAi/BiV4UG6pJDkogUkJCmgebHQae79F4rKmfEKmMPHuPO7/r4fnWc4jC30NY1XMU2FcsrhxM9xkuipqFEnnhq0Ak0KA2d5NTULF+1VHO47Q6hZCvlI0WhRATmCt7deLr+etP3mD2qbeb7w/RwZFdPvXRKp/Z4GrBq+tplzNxWIodxqx5XjWvhwQvXEu0+ISfyehOO/PJ7/HypWFFgsG8pUdWtC5e3O8PiN6WfEJR4YicNADfAVzK0lGReUN8LRwRtwxh7sjfwpc= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 13:15:19 np0005484548.novalocal sudo[18372]: pam_unix(sudo:session): session closed for user root
Oct 13 13:15:20 np0005484548.novalocal sudo[18388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gmedrdsmjixriihellbhszbprspdkgks ; /usr/bin/python3
Oct 13 13:15:20 np0005484548.novalocal sudo[18388]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:15:20 np0005484548.novalocal python3[18390]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005484548.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 13 13:15:21 np0005484548.novalocal sudo[18388]: pam_unix(sudo:session): session closed for user root
Oct 13 13:15:21 np0005484548.novalocal sudo[18438]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ibdujiwozqywnhlyyxatellcfagvqshf ; /usr/bin/python3
Oct 13 13:15:21 np0005484548.novalocal sudo[18438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:15:21 np0005484548.novalocal python3[18440]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:15:21 np0005484548.novalocal sudo[18438]: pam_unix(sudo:session): session closed for user root
Oct 13 13:15:22 np0005484548.novalocal sudo[18481]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rsfnwgzsngxkwqmcbbiyadtmrbxiqofe ; /usr/bin/python3
Oct 13 13:15:22 np0005484548.novalocal sudo[18481]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:15:22 np0005484548.novalocal python3[18483]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760361321.431139-37-247935791614868/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=f4ac5eb2606345a98a765b53b3e868f1_id_rsa follow=False checksum=a852b1bb9aaa7d35e14f42ee10ac51ee4296cce6 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:15:22 np0005484548.novalocal sudo[18481]: pam_unix(sudo:session): session closed for user root
Oct 13 13:15:23 np0005484548.novalocal sudo[18543]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hazibfaoaymtpghyyfnrtmivrvnmxdlz ; /usr/bin/python3
Oct 13 13:15:23 np0005484548.novalocal sudo[18543]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:15:23 np0005484548.novalocal python3[18545]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:15:23 np0005484548.novalocal sudo[18543]: pam_unix(sudo:session): session closed for user root
Oct 13 13:15:23 np0005484548.novalocal sudo[18586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iwdvktqpdxvizhsqgnxncmuqlujtrymg ; /usr/bin/python3
Oct 13 13:15:23 np0005484548.novalocal sudo[18586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:15:23 np0005484548.novalocal python3[18588]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760361322.9202452-67-181012086971644/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=f4ac5eb2606345a98a765b53b3e868f1_id_rsa.pub follow=False checksum=1b4a9289705ee369f83be33824a5990dd03bc576 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:15:23 np0005484548.novalocal sudo[18586]: pam_unix(sudo:session): session closed for user root
Oct 13 13:15:24 np0005484548.novalocal sudo[18616]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tbatqcthyuiiugexseaqlrcbkezlagqd ; /usr/bin/python3
Oct 13 13:15:24 np0005484548.novalocal sudo[18616]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:15:24 np0005484548.novalocal python3[18618]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:15:24 np0005484548.novalocal sudo[18616]: pam_unix(sudo:session): session closed for user root
Oct 13 13:15:25 np0005484548.novalocal python3[18664]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:15:25 np0005484548.novalocal python3[18680]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmprqonlm91 recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:15:25 np0005484548.novalocal python3[18740]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:15:26 np0005484548.novalocal python3[18756]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmp_sw_v72i recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:15:26 np0005484548.novalocal python3[18816]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:15:27 np0005484548.novalocal python3[18832]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmp_4mp9yfk recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:15:27 np0005484548.novalocal sshd[18355]: pam_unix(sshd:session): session closed for user zuul
Oct 13 13:15:27 np0005484548.novalocal systemd[1]: session-7.scope: Deactivated successfully.
Oct 13 13:15:27 np0005484548.novalocal systemd[1]: session-7.scope: Consumed 3.515s CPU time.
Oct 13 13:15:27 np0005484548.novalocal systemd-logind[760]: Session 7 logged out. Waiting for processes to exit.
Oct 13 13:15:27 np0005484548.novalocal systemd-logind[760]: Removed session 7.
Oct 13 13:17:35 np0005484548.novalocal sshd[18848]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:17:35 np0005484548.novalocal sshd[18848]: Accepted publickey for zuul from 38.102.83.198 port 46584 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:17:35 np0005484548.novalocal systemd-logind[760]: New session 8 of user zuul.
Oct 13 13:17:35 np0005484548.novalocal systemd[1]: Started Session 8 of User zuul.
Oct 13 13:17:35 np0005484548.novalocal sshd[18848]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Oct 13 13:17:36 np0005484548.novalocal python3[18894]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:17:43 np0005484548.novalocal sshd[18896]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:17:43 np0005484548.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Oct 13 13:17:43 np0005484548.novalocal sshd[18896]: error: kex_exchange_identification: banner line contains invalid characters
Oct 13 13:17:43 np0005484548.novalocal sshd[18896]: banner exchange: Connection from 65.49.1.122 port 45856: invalid format
Oct 13 13:17:43 np0005484548.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 13 13:17:43 np0005484548.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Oct 13 13:17:43 np0005484548.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 13 13:22:31 np0005484548.novalocal sshd[18901]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:22:31 np0005484548.novalocal sshd[18901]: Received disconnect from 193.46.255.159 port 59446:11:  [preauth]
Oct 13 13:22:31 np0005484548.novalocal sshd[18901]: Disconnected from authenticating user root 193.46.255.159 port 59446 [preauth]
Oct 13 13:22:36 np0005484548.novalocal sshd[18851]: Received disconnect from 38.102.83.198 port 46584:11: disconnected by user
Oct 13 13:22:36 np0005484548.novalocal sshd[18851]: Disconnected from user zuul 38.102.83.198 port 46584
Oct 13 13:22:36 np0005484548.novalocal sshd[18848]: pam_unix(sshd:session): session closed for user zuul
Oct 13 13:22:36 np0005484548.novalocal systemd[1]: session-8.scope: Deactivated successfully.
Oct 13 13:22:36 np0005484548.novalocal systemd-logind[760]: Session 8 logged out. Waiting for processes to exit.
Oct 13 13:22:36 np0005484548.novalocal systemd-logind[760]: Removed session 8.
Oct 13 13:29:43 np0005484548.novalocal sshd[18906]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:29:43 np0005484548.novalocal sshd[18906]: Accepted publickey for zuul from 38.102.83.114 port 49364 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:29:43 np0005484548.novalocal systemd-logind[760]: New session 9 of user zuul.
Oct 13 13:29:43 np0005484548.novalocal systemd[1]: Started Session 9 of User zuul.
Oct 13 13:29:43 np0005484548.novalocal sshd[18906]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Oct 13 13:29:44 np0005484548.novalocal python3[18923]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ec2-ffbe-688c-bd86-000000000006-1-standalone zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:29:44 np0005484548.novalocal sudo[18941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bfmmnhanfbveikgcurgajpsjlyrocwoy ; /usr/bin/python3
Oct 13 13:29:44 np0005484548.novalocal sudo[18941]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:29:44 np0005484548.novalocal python3[18943]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163ec2-ffbe-688c-bd86-000000000007-1-standalone zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:29:46 np0005484548.novalocal sudo[18941]: pam_unix(sudo:session): session closed for user root
Oct 13 13:29:47 np0005484548.novalocal sudo[18960]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gfplscamktglhjdeecelmdphfuyttbak ; /usr/bin/python3
Oct 13 13:29:47 np0005484548.novalocal sudo[18960]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:29:47 np0005484548.novalocal python3[18962]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Oct 13 13:29:52 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:29:52 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:30:05 np0005484548.novalocal sshd[19097]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:30:06 np0005484548.novalocal sshd[19097]: Received disconnect from 193.46.255.7 port 13570:11:  [preauth]
Oct 13 13:30:06 np0005484548.novalocal sshd[19097]: Disconnected from authenticating user root 193.46.255.7 port 13570 [preauth]
Oct 13 13:30:22 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:30:23 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:30:31 np0005484548.novalocal sudo[18960]: pam_unix(sudo:session): session closed for user root
Oct 13 13:30:31 np0005484548.novalocal sudo[19255]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwzjnefqyasyxmhpgcjnzdzzdamgvrxq ; /usr/bin/python3
Oct 13 13:30:31 np0005484548.novalocal sudo[19255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:30:31 np0005484548.novalocal python3[19257]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Oct 13 13:30:37 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:30:37 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:30:43 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:30:43 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:30:57 np0005484548.novalocal sudo[19255]: pam_unix(sudo:session): session closed for user root
Oct 13 13:30:57 np0005484548.novalocal sudo[19593]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wxmsacsuqswhrenptxrqghyfmwyqjgmv ; /usr/bin/python3
Oct 13 13:30:57 np0005484548.novalocal sudo[19593]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:30:57 np0005484548.novalocal python3[19595]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled
                                                        _uses_shell=True zuul_log_id=fa163ec2-ffbe-688c-bd86-00000000000a-1-standalone zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:30:59 np0005484548.novalocal sudo[19593]: pam_unix(sudo:session): session closed for user root
Oct 13 13:31:00 np0005484548.novalocal sudo[19612]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-illqjqjxglcntheglqagyknwxndjtujc ; /usr/bin/python3
Oct 13 13:31:00 np0005484548.novalocal sudo[19612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:31:00 np0005484548.novalocal python3[19614]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:31:14 np0005484548.novalocal groupadd[19703]: group added to /etc/group: name=unbound, GID=987
Oct 13 13:31:14 np0005484548.novalocal groupadd[19703]: group added to /etc/gshadow: name=unbound
Oct 13 13:31:14 np0005484548.novalocal groupadd[19703]: new group: name=unbound, GID=987
Oct 13 13:31:14 np0005484548.novalocal useradd[19710]: new user: name=unbound, UID=987, GID=987, home=/etc/unbound, shell=/sbin/nologin, from=none
Oct 13 13:31:14 np0005484548.novalocal systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 13 13:31:23 np0005484548.novalocal kernel: SELinux:  Converting 500 SID table entries...
Oct 13 13:31:23 np0005484548.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:31:23 np0005484548.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 13 13:31:23 np0005484548.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:31:23 np0005484548.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:31:23 np0005484548.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:31:23 np0005484548.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:31:23 np0005484548.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:31:23 np0005484548.novalocal groupadd[19827]: group added to /etc/group: name=openvswitch, GID=986
Oct 13 13:31:23 np0005484548.novalocal groupadd[19827]: group added to /etc/gshadow: name=openvswitch
Oct 13 13:31:23 np0005484548.novalocal groupadd[19827]: new group: name=openvswitch, GID=986
Oct 13 13:31:23 np0005484548.novalocal useradd[19834]: new user: name=openvswitch, UID=986, GID=986, home=/, shell=/sbin/nologin, from=none
Oct 13 13:31:23 np0005484548.novalocal groupadd[19842]: group added to /etc/group: name=hugetlbfs, GID=985
Oct 13 13:31:23 np0005484548.novalocal groupadd[19842]: group added to /etc/gshadow: name=hugetlbfs
Oct 13 13:31:23 np0005484548.novalocal groupadd[19842]: new group: name=hugetlbfs, GID=985
Oct 13 13:31:23 np0005484548.novalocal usermod[19850]: add 'openvswitch' to group 'hugetlbfs'
Oct 13 13:31:23 np0005484548.novalocal usermod[19850]: add 'openvswitch' to shadow group 'hugetlbfs'
Oct 13 13:31:25 np0005484548.novalocal dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Oct 13 13:31:25 np0005484548.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 13:31:25 np0005484548.novalocal systemd[1]: Starting man-db-cache-update.service...
Oct 13 13:31:25 np0005484548.novalocal systemd[1]: Reloading.
Oct 13 13:31:25 np0005484548.novalocal systemd-sysv-generator[20349]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:31:25 np0005484548.novalocal systemd-rc-local-generator[20343]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:31:25 np0005484548.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:31:25 np0005484548.novalocal systemd[1]: Starting dnf makecache...
Oct 13 13:31:25 np0005484548.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 13:31:25 np0005484548.novalocal dnf[20464]: Updating Subscription Management repositories.
Oct 13 13:31:26 np0005484548.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 13:31:26 np0005484548.novalocal systemd[1]: Finished man-db-cache-update.service.
Oct 13 13:31:26 np0005484548.novalocal systemd[1]: run-rf19967daabb143599cdc00e2d359f5a7.service: Deactivated successfully.
Oct 13 13:31:26 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:31:27 np0005484548.novalocal sudo[19612]: pam_unix(sudo:session): session closed for user root
Oct 13 13:31:27 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:31:27 np0005484548.novalocal sudo[20866]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zigblwdupifkphfxbzslkgdvkqusuizl ; /usr/bin/python3
Oct 13 13:31:27 np0005484548.novalocal sudo[20866]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:31:27 np0005484548.novalocal python3[20868]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:31:27 np0005484548.novalocal sudo[20866]: pam_unix(sudo:session): session closed for user root
Oct 13 13:31:27 np0005484548.novalocal dnf[20464]: Failed determining last makecache time.
Oct 13 13:31:27 np0005484548.novalocal dnf[20464]: Red Hat Enterprise Linux 9 for x86_64 - AppStre  42 kB/s | 4.5 kB     00:00
Oct 13 13:31:27 np0005484548.novalocal dnf[20464]: Fast Datapath for RHEL 9 x86_64 (RPMs)           45 kB/s | 4.0 kB     00:00
Oct 13 13:31:27 np0005484548.novalocal sudo[20916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ypfajjrzzkwsmwoxcriqtpdaladjycxp ; /usr/bin/python3
Oct 13 13:31:27 np0005484548.novalocal sudo[20916]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:31:28 np0005484548.novalocal dnf[20464]: Red Hat Enterprise Linux 9 for x86_64 - AppStre  33 kB/s | 4.5 kB     00:00
Oct 13 13:31:28 np0005484548.novalocal python3[20919]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/standalone_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:31:28 np0005484548.novalocal sudo[20916]: pam_unix(sudo:session): session closed for user root
Oct 13 13:31:28 np0005484548.novalocal sudo[20961]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nwpkajxmuuhryftznckeigofbineizgk ; /usr/bin/python3
Oct 13 13:31:28 np0005484548.novalocal sudo[20961]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:31:28 np0005484548.novalocal dnf[20464]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS   13 kB/s | 4.1 kB     00:00
Oct 13 13:31:28 np0005484548.novalocal python3[20963]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760362287.654407-33-100103102367146/source dest=/etc/os-net-config/standalone_config.yaml mode=None follow=False _original_basename=net_config.j2 checksum=d537a62ac7893a5a4933ff3ff553fd99c038e706 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:31:28 np0005484548.novalocal dnf[20464]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS   51 kB/s | 4.1 kB     00:00
Oct 13 13:31:28 np0005484548.novalocal sudo[20961]: pam_unix(sudo:session): session closed for user root
Oct 13 13:31:28 np0005484548.novalocal dnf[20464]: Red Hat OpenStack Platform 17.1 for RHEL 9 x86_  27 kB/s | 4.0 kB     00:00
Oct 13 13:31:28 np0005484548.novalocal sudo[20993]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-atzjjhggxrakibzbnffsyrohozazlypc ; /usr/bin/python3
Oct 13 13:31:28 np0005484548.novalocal sudo[20993]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:31:29 np0005484548.novalocal python3[20995]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network  state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 13 13:31:29 np0005484548.novalocal dnf[20464]: Metadata cache created.
Oct 13 13:31:29 np0005484548.novalocal systemd-journald[618]: Field hash table of /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal has a fill level at 91.6 (305 of 333 items), suggesting rotation.
Oct 13 13:31:29 np0005484548.novalocal systemd-journald[618]: /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 13 13:31:29 np0005484548.novalocal rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 13:31:29 np0005484548.novalocal rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 13:31:29 np0005484548.novalocal sudo[20993]: pam_unix(sudo:session): session closed for user root
Oct 13 13:31:29 np0005484548.novalocal sudo[21014]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-haeulzvhgttrrnwkibhkpvdgiulgvthx ; /usr/bin/python3
Oct 13 13:31:29 np0005484548.novalocal sudo[21014]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:31:29 np0005484548.novalocal systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 13 13:31:29 np0005484548.novalocal systemd[1]: Finished dnf makecache.
Oct 13 13:31:29 np0005484548.novalocal systemd[1]: dnf-makecache.service: Consumed 2.787s CPU time.
Oct 13 13:31:29 np0005484548.novalocal python3[21016]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 13 13:31:29 np0005484548.novalocal sudo[21014]: pam_unix(sudo:session): session closed for user root
Oct 13 13:31:29 np0005484548.novalocal sudo[21034]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sulqlrtgazgqolwpqmzxkhjfxiadqbeo ; /usr/bin/python3
Oct 13 13:31:29 np0005484548.novalocal sudo[21034]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:31:29 np0005484548.novalocal python3[21036]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 13 13:31:29 np0005484548.novalocal sudo[21034]: pam_unix(sudo:session): session closed for user root
Oct 13 13:31:29 np0005484548.novalocal sudo[21054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qzbilgkccnaujstithbhqrqsughxbimo ; /usr/bin/python3
Oct 13 13:31:29 np0005484548.novalocal sudo[21054]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:31:30 np0005484548.novalocal python3[21056]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Oct 13 13:31:30 np0005484548.novalocal sudo[21054]: pam_unix(sudo:session): session closed for user root
Oct 13 13:31:30 np0005484548.novalocal sudo[21074]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nafxptswuqgsypivopfpjxzmmclwkcxl ; /usr/bin/python3
Oct 13 13:31:30 np0005484548.novalocal sudo[21074]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:31:30 np0005484548.novalocal python3[21076]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 13:31:30 np0005484548.novalocal systemd[1]: Starting LSB: Bring up/down networking...
Oct 13 13:31:30 np0005484548.novalocal network[21079]: WARN      : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 13:31:30 np0005484548.novalocal network[21090]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 13:31:30 np0005484548.novalocal network[21079]: WARN      : [network] 'network-scripts' will be removed from distribution in near future.
Oct 13 13:31:30 np0005484548.novalocal network[21091]: 'network-scripts' will be removed from distribution in near future.
Oct 13 13:31:30 np0005484548.novalocal network[21079]: WARN      : [network] It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 13:31:30 np0005484548.novalocal network[21092]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 13:31:30 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362290.9641] audit: op="connections-reload" pid=21120 uid=0 result="success"
Oct 13 13:31:31 np0005484548.novalocal network[21079]: Bringing up loopback interface:  [  OK  ]
Oct 13 13:31:31 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362291.1915] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=21208 uid=0 result="success"
Oct 13 13:31:31 np0005484548.novalocal network[21079]: Bringing up interface eth0:  [  OK  ]
Oct 13 13:31:31 np0005484548.novalocal systemd[1]: Started LSB: Bring up/down networking.
Oct 13 13:31:31 np0005484548.novalocal sudo[21074]: pam_unix(sudo:session): session closed for user root
Oct 13 13:31:31 np0005484548.novalocal sudo[21247]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fylzbvslkrmwhdovsnagnbbirusgpcim ; /usr/bin/python3
Oct 13 13:31:31 np0005484548.novalocal sudo[21247]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:31:31 np0005484548.novalocal python3[21249]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 13:31:31 np0005484548.novalocal systemd[1]: Starting Open vSwitch Database Unit...
Oct 13 13:31:31 np0005484548.novalocal chown[21253]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 13 13:31:31 np0005484548.novalocal ovs-ctl[21258]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 13 13:31:31 np0005484548.novalocal ovs-ctl[21258]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 13 13:31:31 np0005484548.novalocal ovs-ctl[21258]: Starting ovsdb-server [  OK  ]
Oct 13 13:31:31 np0005484548.novalocal ovs-vsctl[21308]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 13 13:31:32 np0005484548.novalocal ovs-vsctl[21328]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-110.el9fdp "external-ids:system-id=\"90b9e3f7-f5c9-4d48-9eca-66995f72d494\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\""
Oct 13 13:31:32 np0005484548.novalocal ovs-ctl[21258]: Configuring Open vSwitch system IDs [  OK  ]
Oct 13 13:31:32 np0005484548.novalocal ovs-ctl[21258]: Enabling remote OVSDB managers [  OK  ]
Oct 13 13:31:32 np0005484548.novalocal ovs-vsctl[21334]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005484548.novalocal
Oct 13 13:31:32 np0005484548.novalocal systemd[1]: Started Open vSwitch Database Unit.
Oct 13 13:31:32 np0005484548.novalocal systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 13 13:31:32 np0005484548.novalocal systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 13 13:31:32 np0005484548.novalocal systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 13 13:31:32 np0005484548.novalocal kernel: openvswitch: Open vSwitch switching datapath
Oct 13 13:31:32 np0005484548.novalocal ovs-ctl[21378]: Inserting openvswitch module [  OK  ]
Oct 13 13:31:32 np0005484548.novalocal ovs-ctl[21347]: Starting ovs-vswitchd [  OK  ]
Oct 13 13:31:32 np0005484548.novalocal ovs-vsctl[21396]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005484548.novalocal
Oct 13 13:31:32 np0005484548.novalocal ovs-ctl[21347]: Enabling remote OVSDB managers [  OK  ]
Oct 13 13:31:32 np0005484548.novalocal systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 13 13:31:32 np0005484548.novalocal systemd[1]: Starting Open vSwitch...
Oct 13 13:31:32 np0005484548.novalocal systemd[1]: Finished Open vSwitch.
Oct 13 13:31:32 np0005484548.novalocal sudo[21247]: pam_unix(sudo:session): session closed for user root
Oct 13 13:31:32 np0005484548.novalocal sudo[21412]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ntvpytirelhttveogekhadkjbfmwiwee ; /usr/bin/python3
Oct 13 13:31:32 np0005484548.novalocal sudo[21412]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 13:31:32 np0005484548.novalocal python3[21414]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/standalone_config.yaml
                                                        _uses_shell=True zuul_log_id=fa163ec2-ffbe-688c-bd86-000000000010-1-standalone zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:31:33 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362293.8006] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=21572 uid=0 result="success"
Oct 13 13:31:33 np0005484548.novalocal ifup[21573]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 13:31:33 np0005484548.novalocal ifup[21574]: 'network-scripts' will be removed from distribution in near future.
Oct 13 13:31:33 np0005484548.novalocal ifup[21575]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 13:31:33 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362293.8319] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=21581 uid=0 result="success"
Oct 13 13:31:33 np0005484548.novalocal ovs-vsctl[21583]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ctlplane -- set bridge br-ctlplane other-config:mac-table-size=50000 -- set bridge br-ctlplane other-config:hwaddr=fa:16:3e:0b:50:81 -- set bridge br-ctlplane fail_mode=standalone -- del-controller br-ctlplane
Oct 13 13:31:33 np0005484548.novalocal kernel: device ovs-system entered promiscuous mode
Oct 13 13:31:33 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362293.8561] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4)
Oct 13 13:31:33 np0005484548.novalocal kernel: Timeout policy base is empty
Oct 13 13:31:33 np0005484548.novalocal kernel: Failed to associated timeout policy `ovs_test_tp'
Oct 13 13:31:33 np0005484548.novalocal systemd-udevd[21321]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 13:31:33 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362293.8911] manager: (br-ctlplane): new Generic device (/org/freedesktop/NetworkManager/Devices/5)
Oct 13 13:31:33 np0005484548.novalocal kernel: device br-ctlplane entered promiscuous mode
Oct 13 13:31:33 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362293.9176] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=21608 uid=0 result="success"
Oct 13 13:31:33 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362293.9578] device (br-ctlplane): carrier: link connected
Oct 13 13:31:39 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362299.6054] device (eth1): state change: activated -> unavailable (reason 'carrier-changed', sys-iface-state: 'managed')
Oct 13 13:31:39 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362299.6154] dhcp4 (eth1): canceled DHCP transaction
Oct 13 13:31:39 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362299.6154] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 13 13:31:39 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362299.6155] dhcp4 (eth1): state changed no lease
Oct 13 13:31:39 np0005484548.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 13 13:31:39 np0005484548.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 13 13:31:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362303.1096] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=21681 uid=0 result="success"
Oct 13 13:31:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362303.1634] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=21696 uid=0 result="success"
Oct 13 13:31:43 np0005484548.novalocal NET[21721]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
Oct 13 13:31:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362303.2632] device (eth1): state change: unavailable -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed')
Oct 13 13:31:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362303.2649] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=21730 uid=0 result="success"
Oct 13 13:31:43 np0005484548.novalocal ifup[21731]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 13:31:43 np0005484548.novalocal ifup[21732]: 'network-scripts' will be removed from distribution in near future.
Oct 13 13:31:43 np0005484548.novalocal ifup[21733]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 13:31:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362303.2972] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=21739 uid=0 result="success"
Oct 13 13:31:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362303.3861] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=21748 uid=0 result="success"
Oct 13 13:31:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362303.3929] device (eth1): carrier: link connected
Oct 13 13:31:43 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362303.4159] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=21757 uid=0 result="success"
Oct 13 13:31:43 np0005484548.novalocal ipv6_wait_tentative[21769]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Oct 13 13:31:44 np0005484548.novalocal ipv6_wait_tentative[21774]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Oct 13 13:31:45 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362305.4943] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=21783 uid=0 result="success"
Oct 13 13:31:45 np0005484548.novalocal ovs-vsctl[21798]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane eth1 -- add-port br-ctlplane eth1
Oct 13 13:31:45 np0005484548.novalocal kernel: device eth1 entered promiscuous mode
Oct 13 13:31:45 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362305.6007] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=21806 uid=0 result="success"
Oct 13 13:31:45 np0005484548.novalocal ifup[21807]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 13:31:45 np0005484548.novalocal ifup[21808]: 'network-scripts' will be removed from distribution in near future.
Oct 13 13:31:45 np0005484548.novalocal ifup[21809]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 13:31:45 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362305.6312] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=21815 uid=0 result="success"
Oct 13 13:31:45 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362305.6749] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=21825 uid=0 result="success"
Oct 13 13:31:45 np0005484548.novalocal ifup[21826]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 13:31:45 np0005484548.novalocal ifup[21827]: 'network-scripts' will be removed from distribution in near future.
Oct 13 13:31:45 np0005484548.novalocal ifup[21828]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 13:31:45 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362305.7051] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=21834 uid=0 result="success"
Oct 13 13:31:45 np0005484548.novalocal ovs-vsctl[21837]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan22 -- add-port br-ctlplane vlan22 tag=22 -- set Interface vlan22 type=internal
Oct 13 13:31:45 np0005484548.novalocal kernel: device vlan22 entered promiscuous mode
Oct 13 13:31:45 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362305.7458] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/6)
Oct 13 13:31:45 np0005484548.novalocal systemd-udevd[21839]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 13:31:45 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362305.7726] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=21848 uid=0 result="success"
Oct 13 13:31:45 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362305.7933] device (vlan22): carrier: link connected
Oct 13 13:31:48 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362308.8472] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=21877 uid=0 result="success"
Oct 13 13:31:48 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362308.8987] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=21892 uid=0 result="success"
Oct 13 13:31:48 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362308.9528] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=21913 uid=0 result="success"
Oct 13 13:31:48 np0005484548.novalocal ifup[21914]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 13:31:48 np0005484548.novalocal ifup[21915]: 'network-scripts' will be removed from distribution in near future.
Oct 13 13:31:48 np0005484548.novalocal ifup[21916]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 13:31:48 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362308.9812] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=21922 uid=0 result="success"
Oct 13 13:31:49 np0005484548.novalocal ovs-vsctl[21925]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan20 -- add-port br-ctlplane vlan20 tag=20 -- set Interface vlan20 type=internal
Oct 13 13:31:49 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362309.0213] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/7)
Oct 13 13:31:49 np0005484548.novalocal kernel: device vlan20 entered promiscuous mode
Oct 13 13:31:49 np0005484548.novalocal systemd-udevd[21927]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 13:31:49 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362309.0466] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=21937 uid=0 result="success"
Oct 13 13:31:49 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362309.0797] device (vlan20): carrier: link connected
Oct 13 13:31:49 np0005484548.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 13 13:31:55 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362315.1828] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=21984 uid=0 result="success"
Oct 13 13:31:55 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362315.2365] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=21999 uid=0 result="success"
Oct 13 13:31:55 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362315.2981] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22020 uid=0 result="success"
Oct 13 13:31:55 np0005484548.novalocal ifup[22021]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 13:31:55 np0005484548.novalocal ifup[22022]: 'network-scripts' will be removed from distribution in near future.
Oct 13 13:31:55 np0005484548.novalocal ifup[22023]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 13:31:55 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362315.3252] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22029 uid=0 result="success"
Oct 13 13:31:55 np0005484548.novalocal ovs-vsctl[22032]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan21 -- add-port br-ctlplane vlan21 tag=21 -- set Interface vlan21 type=internal
Oct 13 13:31:55 np0005484548.novalocal kernel: device vlan21 entered promiscuous mode
Oct 13 13:31:55 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362315.3613] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/8)
Oct 13 13:31:55 np0005484548.novalocal systemd-udevd[22034]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 13:31:55 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362315.3871] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22044 uid=0 result="success"
Oct 13 13:31:55 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362315.4156] device (vlan21): carrier: link connected
Oct 13 13:32:01 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362321.4975] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22090 uid=0 result="success"
Oct 13 13:32:01 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362321.5390] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22105 uid=0 result="success"
Oct 13 13:32:01 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362321.6010] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22126 uid=0 result="success"
Oct 13 13:32:01 np0005484548.novalocal ifup[22127]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 13:32:01 np0005484548.novalocal ifup[22128]: 'network-scripts' will be removed from distribution in near future.
Oct 13 13:32:01 np0005484548.novalocal ifup[22129]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 13:32:01 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362321.6313] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22135 uid=0 result="success"
Oct 13 13:32:01 np0005484548.novalocal ovs-vsctl[22138]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan44 -- add-port br-ctlplane vlan44 tag=44 -- set Interface vlan44 type=internal
Oct 13 13:32:01 np0005484548.novalocal kernel: device vlan44 entered promiscuous mode
Oct 13 13:32:01 np0005484548.novalocal systemd-udevd[22140]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 13:32:01 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362321.6995] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/9)
Oct 13 13:32:01 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362321.7293] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22150 uid=0 result="success"
Oct 13 13:32:01 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362321.7603] device (vlan44): carrier: link connected
Oct 13 13:32:07 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362327.8478] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22197 uid=0 result="success"
Oct 13 13:32:07 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362327.8960] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22212 uid=0 result="success"
Oct 13 13:32:07 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362327.9614] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22233 uid=0 result="success"
Oct 13 13:32:07 np0005484548.novalocal ifup[22234]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 13:32:07 np0005484548.novalocal ifup[22235]: 'network-scripts' will be removed from distribution in near future.
Oct 13 13:32:07 np0005484548.novalocal ifup[22236]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 13:32:07 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362327.9978] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22242 uid=0 result="success"
Oct 13 13:32:08 np0005484548.novalocal ovs-vsctl[22245]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan23 -- add-port br-ctlplane vlan23 tag=23 -- set Interface vlan23 type=internal
Oct 13 13:32:08 np0005484548.novalocal systemd-udevd[22247]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 13:32:08 np0005484548.novalocal kernel: device vlan23 entered promiscuous mode
Oct 13 13:32:08 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362328.0426] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/10)
Oct 13 13:32:08 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362328.0716] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22257 uid=0 result="success"
Oct 13 13:32:08 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362328.1024] device (vlan23): carrier: link connected
Oct 13 13:32:14 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362334.1892] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22303 uid=0 result="success"
Oct 13 13:32:14 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362334.2320] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22318 uid=0 result="success"
Oct 13 13:32:14 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362334.2946] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22339 uid=0 result="success"
Oct 13 13:32:14 np0005484548.novalocal ifup[22340]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 13:32:14 np0005484548.novalocal ifup[22341]: 'network-scripts' will be removed from distribution in near future.
Oct 13 13:32:14 np0005484548.novalocal ifup[22342]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 13:32:14 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362334.3255] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22348 uid=0 result="success"
Oct 13 13:32:14 np0005484548.novalocal ovs-vsctl[22351]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan44 -- add-port br-ctlplane vlan44 tag=44 -- set Interface vlan44 type=internal
Oct 13 13:32:14 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362334.4080] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22358 uid=0 result="success"
Oct 13 13:32:16 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362336.4842] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22399 uid=0 result="success"
Oct 13 13:32:16 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362336.5301] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22414 uid=0 result="success"
Oct 13 13:32:16 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362336.5911] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22435 uid=0 result="success"
Oct 13 13:32:16 np0005484548.novalocal ifup[22436]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 13:32:16 np0005484548.novalocal ifup[22437]: 'network-scripts' will be removed from distribution in near future.
Oct 13 13:32:16 np0005484548.novalocal ifup[22438]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 13:32:16 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362336.6265] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22444 uid=0 result="success"
Oct 13 13:32:16 np0005484548.novalocal ovs-vsctl[22447]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan20 -- add-port br-ctlplane vlan20 tag=20 -- set Interface vlan20 type=internal
Oct 13 13:32:16 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362336.7209] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22454 uid=0 result="success"
Oct 13 13:32:18 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362338.8129] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22496 uid=0 result="success"
Oct 13 13:32:18 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362338.8632] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22511 uid=0 result="success"
Oct 13 13:32:18 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362338.9301] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22532 uid=0 result="success"
Oct 13 13:32:18 np0005484548.novalocal ifup[22533]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 13:32:18 np0005484548.novalocal ifup[22534]: 'network-scripts' will be removed from distribution in near future.
Oct 13 13:32:18 np0005484548.novalocal ifup[22535]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 13:32:18 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362338.9631] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22541 uid=0 result="success"
Oct 13 13:32:18 np0005484548.novalocal ovs-vsctl[22544]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan21 -- add-port br-ctlplane vlan21 tag=21 -- set Interface vlan21 type=internal
Oct 13 13:32:19 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362339.0188] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22551 uid=0 result="success"
Oct 13 13:32:21 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362341.1067] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22593 uid=0 result="success"
Oct 13 13:32:21 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362341.1506] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22608 uid=0 result="success"
Oct 13 13:32:21 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362341.2093] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22629 uid=0 result="success"
Oct 13 13:32:21 np0005484548.novalocal ifup[22630]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 13:32:21 np0005484548.novalocal ifup[22631]: 'network-scripts' will be removed from distribution in near future.
Oct 13 13:32:21 np0005484548.novalocal ifup[22632]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 13:32:21 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362341.2407] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22638 uid=0 result="success"
Oct 13 13:32:21 np0005484548.novalocal ovs-vsctl[22641]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan23 -- add-port br-ctlplane vlan23 tag=23 -- set Interface vlan23 type=internal
Oct 13 13:32:21 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362341.3223] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22648 uid=0 result="success"
Oct 13 13:32:23 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362343.4340] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22690 uid=0 result="success"
Oct 13 13:32:23 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362343.4801] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22705 uid=0 result="success"
Oct 13 13:32:23 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362343.5421] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22726 uid=0 result="success"
Oct 13 13:32:23 np0005484548.novalocal ifup[22727]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 13:32:23 np0005484548.novalocal ifup[22728]: 'network-scripts' will be removed from distribution in near future.
Oct 13 13:32:23 np0005484548.novalocal ifup[22729]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 13:32:23 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362343.5748] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22735 uid=0 result="success"
Oct 13 13:32:23 np0005484548.novalocal ovs-vsctl[22738]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan22 -- add-port br-ctlplane vlan22 tag=22 -- set Interface vlan22 type=internal
Oct 13 13:32:23 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362343.6646] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22745 uid=0 result="success"
Oct 13 13:32:24 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362344.7265] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22773 uid=0 result="success"
Oct 13 13:32:24 np0005484548.novalocal NetworkManager[5962]: <info>  [1760362344.7758] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22788 uid=0 result="success"
Oct 13 13:32:24 np0005484548.novalocal sudo[21412]: pam_unix(sudo:session): session closed for user root
Oct 13 13:32:25 np0005484548.novalocal python3[22818]: ansible-ansible.legacy.command Invoked with _raw_params=ip a
                                                       ping -c 2 -W 2 192.168.122.10
                                                       ping -c 2 -W 2 192.168.122.11
                                                        _uses_shell=True zuul_log_id=fa163ec2-ffbe-688c-bd86-000000000011-1-standalone zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:32:27 np0005484548.novalocal sshd[18906]: pam_unix(sshd:session): session closed for user zuul
Oct 13 13:32:27 np0005484548.novalocal systemd-logind[760]: Session 9 logged out. Waiting for processes to exit.
Oct 13 13:32:27 np0005484548.novalocal systemd[1]: session-9.scope: Deactivated successfully.
Oct 13 13:32:27 np0005484548.novalocal systemd[1]: session-9.scope: Consumed 1min 19.681s CPU time.
Oct 13 13:32:27 np0005484548.novalocal systemd-logind[760]: Removed session 9.
Oct 13 13:32:32 np0005484548.novalocal sshd[22826]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:32:33 np0005484548.novalocal sshd[22826]: Accepted publickey for root from 192.168.122.11 port 58746 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:32:33 np0005484548.novalocal systemd-logind[760]: New session 10 of user root.
Oct 13 13:32:33 np0005484548.novalocal systemd[1]: Created slice User Slice of UID 0.
Oct 13 13:32:33 np0005484548.novalocal systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 13 13:32:33 np0005484548.novalocal systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 13 13:32:33 np0005484548.novalocal systemd[1]: Starting User Manager for UID 0...
Oct 13 13:32:33 np0005484548.novalocal systemd[22830]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:33 np0005484548.novalocal systemd[22830]: Queued start job for default target Main User Target.
Oct 13 13:32:33 np0005484548.novalocal systemd[22830]: Created slice User Application Slice.
Oct 13 13:32:33 np0005484548.novalocal systemd[22830]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 13 13:32:33 np0005484548.novalocal systemd[22830]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 13:32:33 np0005484548.novalocal systemd[22830]: Reached target Paths.
Oct 13 13:32:33 np0005484548.novalocal systemd[22830]: Reached target Timers.
Oct 13 13:32:33 np0005484548.novalocal systemd[22830]: Starting D-Bus User Message Bus Socket...
Oct 13 13:32:33 np0005484548.novalocal systemd[22830]: Starting Create User's Volatile Files and Directories...
Oct 13 13:32:33 np0005484548.novalocal systemd[22830]: Listening on D-Bus User Message Bus Socket.
Oct 13 13:32:33 np0005484548.novalocal systemd[22830]: Reached target Sockets.
Oct 13 13:32:33 np0005484548.novalocal systemd[22830]: Finished Create User's Volatile Files and Directories.
Oct 13 13:32:33 np0005484548.novalocal systemd[22830]: Reached target Basic System.
Oct 13 13:32:33 np0005484548.novalocal systemd[22830]: Reached target Main User Target.
Oct 13 13:32:33 np0005484548.novalocal systemd[22830]: Startup finished in 112ms.
Oct 13 13:32:33 np0005484548.novalocal systemd[1]: Started User Manager for UID 0.
Oct 13 13:32:33 np0005484548.novalocal systemd[1]: Started Session 10 of User root.
Oct 13 13:32:33 np0005484548.novalocal sshd[22826]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:33 np0005484548.novalocal sshd[22845]: Received disconnect from 192.168.122.11 port 58746:11: disconnected by user
Oct 13 13:32:33 np0005484548.novalocal sshd[22845]: Disconnected from user root 192.168.122.11 port 58746
Oct 13 13:32:33 np0005484548.novalocal sshd[22826]: pam_unix(sshd:session): session closed for user root
Oct 13 13:32:33 np0005484548.novalocal systemd[1]: session-10.scope: Deactivated successfully.
Oct 13 13:32:33 np0005484548.novalocal systemd-logind[760]: Session 10 logged out. Waiting for processes to exit.
Oct 13 13:32:33 np0005484548.novalocal systemd-logind[760]: Removed session 10.
Oct 13 13:32:33 np0005484548.novalocal sshd[22859]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:32:33 np0005484548.novalocal sshd[22859]: Accepted publickey for root from 192.168.122.11 port 58750 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:32:33 np0005484548.novalocal systemd-logind[760]: New session 12 of user root.
Oct 13 13:32:33 np0005484548.novalocal systemd[1]: Started Session 12 of User root.
Oct 13 13:32:33 np0005484548.novalocal sshd[22859]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:33 np0005484548.novalocal sshd[22862]: Received disconnect from 192.168.122.11 port 58750:11: disconnected by user
Oct 13 13:32:33 np0005484548.novalocal sshd[22862]: Disconnected from user root 192.168.122.11 port 58750
Oct 13 13:32:34 np0005484548.novalocal sshd[22859]: pam_unix(sshd:session): session closed for user root
Oct 13 13:32:34 np0005484548.novalocal systemd[1]: session-12.scope: Deactivated successfully.
Oct 13 13:32:34 np0005484548.novalocal systemd-logind[760]: Session 12 logged out. Waiting for processes to exit.
Oct 13 13:32:34 np0005484548.novalocal systemd-logind[760]: Removed session 12.
Oct 13 13:32:34 np0005484548.novalocal sshd[22876]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:32:34 np0005484548.novalocal sshd[22876]: Accepted publickey for root from 192.168.122.11 port 58760 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:32:34 np0005484548.novalocal systemd-logind[760]: New session 13 of user root.
Oct 13 13:32:34 np0005484548.novalocal systemd[1]: Started Session 13 of User root.
Oct 13 13:32:34 np0005484548.novalocal sshd[22876]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:34 np0005484548.novalocal sshd[22879]: Received disconnect from 192.168.122.11 port 58760:11: disconnected by user
Oct 13 13:32:34 np0005484548.novalocal sshd[22879]: Disconnected from user root 192.168.122.11 port 58760
Oct 13 13:32:34 np0005484548.novalocal sshd[22876]: pam_unix(sshd:session): session closed for user root
Oct 13 13:32:34 np0005484548.novalocal systemd[1]: session-13.scope: Deactivated successfully.
Oct 13 13:32:34 np0005484548.novalocal systemd-logind[760]: Session 13 logged out. Waiting for processes to exit.
Oct 13 13:32:34 np0005484548.novalocal systemd-logind[760]: Removed session 13.
Oct 13 13:32:34 np0005484548.novalocal sshd[22893]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:32:34 np0005484548.novalocal sshd[22893]: Accepted publickey for root from 192.168.122.11 port 58764 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:32:34 np0005484548.novalocal systemd-logind[760]: New session 14 of user root.
Oct 13 13:32:34 np0005484548.novalocal systemd[1]: Started Session 14 of User root.
Oct 13 13:32:34 np0005484548.novalocal sshd[22893]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:34 np0005484548.novalocal sshd[22896]: Received disconnect from 192.168.122.11 port 58764:11: disconnected by user
Oct 13 13:32:34 np0005484548.novalocal sshd[22896]: Disconnected from user root 192.168.122.11 port 58764
Oct 13 13:32:34 np0005484548.novalocal sshd[22893]: pam_unix(sshd:session): session closed for user root
Oct 13 13:32:34 np0005484548.novalocal systemd[1]: session-14.scope: Deactivated successfully.
Oct 13 13:32:34 np0005484548.novalocal systemd-logind[760]: Session 14 logged out. Waiting for processes to exit.
Oct 13 13:32:34 np0005484548.novalocal systemd-logind[760]: Removed session 14.
Oct 13 13:32:34 np0005484548.novalocal sshd[22910]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:32:34 np0005484548.novalocal sshd[22910]: Accepted publickey for root from 192.168.122.11 port 58770 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:32:34 np0005484548.novalocal systemd-logind[760]: New session 15 of user root.
Oct 13 13:32:34 np0005484548.novalocal systemd[1]: Started Session 15 of User root.
Oct 13 13:32:34 np0005484548.novalocal sshd[22910]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:34 np0005484548.novalocal sshd[22913]: Received disconnect from 192.168.122.11 port 58770:11: disconnected by user
Oct 13 13:32:34 np0005484548.novalocal sshd[22913]: Disconnected from user root 192.168.122.11 port 58770
Oct 13 13:32:34 np0005484548.novalocal sshd[22910]: pam_unix(sshd:session): session closed for user root
Oct 13 13:32:34 np0005484548.novalocal systemd[1]: session-15.scope: Deactivated successfully.
Oct 13 13:32:34 np0005484548.novalocal systemd-logind[760]: Session 15 logged out. Waiting for processes to exit.
Oct 13 13:32:34 np0005484548.novalocal systemd-logind[760]: Removed session 15.
Oct 13 13:32:34 np0005484548.novalocal sshd[22927]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:32:34 np0005484548.novalocal sshd[22927]: Accepted publickey for root from 192.168.122.11 port 58780 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:32:34 np0005484548.novalocal systemd-logind[760]: New session 16 of user root.
Oct 13 13:32:34 np0005484548.novalocal systemd[1]: Started Session 16 of User root.
Oct 13 13:32:34 np0005484548.novalocal sshd[22927]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:34 np0005484548.novalocal sshd[22930]: Received disconnect from 192.168.122.11 port 58780:11: disconnected by user
Oct 13 13:32:34 np0005484548.novalocal sshd[22930]: Disconnected from user root 192.168.122.11 port 58780
Oct 13 13:32:34 np0005484548.novalocal sshd[22927]: pam_unix(sshd:session): session closed for user root
Oct 13 13:32:34 np0005484548.novalocal systemd[1]: session-16.scope: Deactivated successfully.
Oct 13 13:32:34 np0005484548.novalocal systemd-logind[760]: Session 16 logged out. Waiting for processes to exit.
Oct 13 13:32:34 np0005484548.novalocal systemd-logind[760]: Removed session 16.
Oct 13 13:32:34 np0005484548.novalocal sshd[22944]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:32:35 np0005484548.novalocal sshd[22944]: Accepted publickey for root from 192.168.122.11 port 58796 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:32:35 np0005484548.novalocal systemd-logind[760]: New session 17 of user root.
Oct 13 13:32:35 np0005484548.novalocal systemd[1]: Started Session 17 of User root.
Oct 13 13:32:35 np0005484548.novalocal sshd[22944]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:35 np0005484548.novalocal sshd[22947]: Received disconnect from 192.168.122.11 port 58796:11: disconnected by user
Oct 13 13:32:35 np0005484548.novalocal sshd[22947]: Disconnected from user root 192.168.122.11 port 58796
Oct 13 13:32:35 np0005484548.novalocal sshd[22944]: pam_unix(sshd:session): session closed for user root
Oct 13 13:32:35 np0005484548.novalocal systemd[1]: session-17.scope: Deactivated successfully.
Oct 13 13:32:35 np0005484548.novalocal systemd-logind[760]: Session 17 logged out. Waiting for processes to exit.
Oct 13 13:32:35 np0005484548.novalocal systemd-logind[760]: Removed session 17.
Oct 13 13:32:35 np0005484548.novalocal sshd[22961]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:32:35 np0005484548.novalocal sshd[22961]: Accepted publickey for root from 192.168.122.11 port 58812 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:32:35 np0005484548.novalocal systemd-logind[760]: New session 18 of user root.
Oct 13 13:32:35 np0005484548.novalocal systemd[1]: Started Session 18 of User root.
Oct 13 13:32:35 np0005484548.novalocal sshd[22961]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:35 np0005484548.novalocal sshd[22964]: Received disconnect from 192.168.122.11 port 58812:11: disconnected by user
Oct 13 13:32:35 np0005484548.novalocal sshd[22964]: Disconnected from user root 192.168.122.11 port 58812
Oct 13 13:32:35 np0005484548.novalocal sshd[22961]: pam_unix(sshd:session): session closed for user root
Oct 13 13:32:35 np0005484548.novalocal systemd[1]: session-18.scope: Deactivated successfully.
Oct 13 13:32:35 np0005484548.novalocal systemd-logind[760]: Session 18 logged out. Waiting for processes to exit.
Oct 13 13:32:35 np0005484548.novalocal systemd-logind[760]: Removed session 18.
Oct 13 13:32:35 np0005484548.novalocal sshd[22978]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:32:35 np0005484548.novalocal sshd[22978]: Accepted publickey for root from 192.168.122.11 port 58822 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:32:35 np0005484548.novalocal systemd-logind[760]: New session 19 of user root.
Oct 13 13:32:35 np0005484548.novalocal systemd[1]: Started Session 19 of User root.
Oct 13 13:32:35 np0005484548.novalocal sshd[22978]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:35 np0005484548.novalocal sshd[22981]: Received disconnect from 192.168.122.11 port 58822:11: disconnected by user
Oct 13 13:32:35 np0005484548.novalocal sshd[22981]: Disconnected from user root 192.168.122.11 port 58822
Oct 13 13:32:35 np0005484548.novalocal sshd[22978]: pam_unix(sshd:session): session closed for user root
Oct 13 13:32:35 np0005484548.novalocal systemd[1]: session-19.scope: Deactivated successfully.
Oct 13 13:32:35 np0005484548.novalocal systemd-logind[760]: Session 19 logged out. Waiting for processes to exit.
Oct 13 13:32:35 np0005484548.novalocal systemd-logind[760]: Removed session 19.
Oct 13 13:32:35 np0005484548.novalocal sshd[22995]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:32:35 np0005484548.novalocal sshd[22995]: Accepted publickey for root from 192.168.122.11 port 58824 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:32:35 np0005484548.novalocal systemd-logind[760]: New session 20 of user root.
Oct 13 13:32:35 np0005484548.novalocal systemd[1]: Started Session 20 of User root.
Oct 13 13:32:35 np0005484548.novalocal sshd[22995]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:35 np0005484548.novalocal sshd[22998]: Received disconnect from 192.168.122.11 port 58824:11: disconnected by user
Oct 13 13:32:35 np0005484548.novalocal sshd[22998]: Disconnected from user root 192.168.122.11 port 58824
Oct 13 13:32:35 np0005484548.novalocal sshd[22995]: pam_unix(sshd:session): session closed for user root
Oct 13 13:32:35 np0005484548.novalocal systemd[1]: session-20.scope: Deactivated successfully.
Oct 13 13:32:35 np0005484548.novalocal systemd-logind[760]: Session 20 logged out. Waiting for processes to exit.
Oct 13 13:32:35 np0005484548.novalocal systemd-logind[760]: Removed session 20.
Oct 13 13:32:35 np0005484548.novalocal sshd[23012]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:32:35 np0005484548.novalocal sshd[23012]: Accepted publickey for root from 192.168.122.11 port 58828 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:32:35 np0005484548.novalocal systemd-logind[760]: New session 21 of user root.
Oct 13 13:32:35 np0005484548.novalocal systemd[1]: Started Session 21 of User root.
Oct 13 13:32:35 np0005484548.novalocal sshd[23012]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:35 np0005484548.novalocal sshd[23015]: Received disconnect from 192.168.122.11 port 58828:11: disconnected by user
Oct 13 13:32:35 np0005484548.novalocal sshd[23015]: Disconnected from user root 192.168.122.11 port 58828
Oct 13 13:32:36 np0005484548.novalocal sshd[23012]: pam_unix(sshd:session): session closed for user root
Oct 13 13:32:36 np0005484548.novalocal systemd[1]: session-21.scope: Deactivated successfully.
Oct 13 13:32:36 np0005484548.novalocal systemd-logind[760]: Session 21 logged out. Waiting for processes to exit.
Oct 13 13:32:36 np0005484548.novalocal systemd-logind[760]: Removed session 21.
Oct 13 13:32:36 np0005484548.novalocal sshd[23029]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:32:36 np0005484548.novalocal sshd[23029]: Accepted publickey for root from 192.168.122.11 port 58832 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:32:36 np0005484548.novalocal systemd-logind[760]: New session 22 of user root.
Oct 13 13:32:36 np0005484548.novalocal systemd[1]: Started Session 22 of User root.
Oct 13 13:32:36 np0005484548.novalocal sshd[23029]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:36 np0005484548.novalocal sshd[23032]: Received disconnect from 192.168.122.11 port 58832:11: disconnected by user
Oct 13 13:32:36 np0005484548.novalocal sshd[23032]: Disconnected from user root 192.168.122.11 port 58832
Oct 13 13:32:36 np0005484548.novalocal sshd[23029]: pam_unix(sshd:session): session closed for user root
Oct 13 13:32:36 np0005484548.novalocal systemd[1]: session-22.scope: Deactivated successfully.
Oct 13 13:32:36 np0005484548.novalocal systemd-logind[760]: Session 22 logged out. Waiting for processes to exit.
Oct 13 13:32:36 np0005484548.novalocal systemd-logind[760]: Removed session 22.
Oct 13 13:32:36 np0005484548.novalocal sshd[23046]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:32:36 np0005484548.novalocal sshd[23046]: Accepted publickey for root from 192.168.122.11 port 58834 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:32:36 np0005484548.novalocal systemd-logind[760]: New session 23 of user root.
Oct 13 13:32:36 np0005484548.novalocal systemd[1]: Started Session 23 of User root.
Oct 13 13:32:36 np0005484548.novalocal sshd[23046]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:39 np0005484548.novalocal rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:32:49 np0005484548.novalocal sshd[23049]: Received disconnect from 192.168.122.11 port 58834:11: disconnected by user
Oct 13 13:32:49 np0005484548.novalocal sshd[23049]: Disconnected from user root 192.168.122.11 port 58834
Oct 13 13:32:49 np0005484548.novalocal sshd[23046]: pam_unix(sshd:session): session closed for user root
Oct 13 13:32:49 np0005484548.novalocal systemd[1]: session-23.scope: Deactivated successfully.
Oct 13 13:32:49 np0005484548.novalocal systemd[1]: session-23.scope: Consumed 8.958s CPU time.
Oct 13 13:32:49 np0005484548.novalocal systemd-logind[760]: Session 23 logged out. Waiting for processes to exit.
Oct 13 13:32:49 np0005484548.novalocal systemd-logind[760]: Removed session 23.
Oct 13 13:32:49 np0005484548.novalocal sshd[23265]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:32:49 np0005484548.novalocal sshd[23265]: Accepted publickey for root from 192.168.122.11 port 57470 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:32:49 np0005484548.novalocal systemd-logind[760]: New session 24 of user root.
Oct 13 13:32:49 np0005484548.novalocal systemd[1]: Started Session 24 of User root.
Oct 13 13:32:49 np0005484548.novalocal sshd[23265]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:49 np0005484548.novalocal sshd[23268]: Received disconnect from 192.168.122.11 port 57470:11: disconnected by user
Oct 13 13:32:49 np0005484548.novalocal sshd[23268]: Disconnected from user root 192.168.122.11 port 57470
Oct 13 13:32:49 np0005484548.novalocal sshd[23265]: pam_unix(sshd:session): session closed for user root
Oct 13 13:32:49 np0005484548.novalocal systemd[1]: session-24.scope: Deactivated successfully.
Oct 13 13:32:49 np0005484548.novalocal systemd-logind[760]: Session 24 logged out. Waiting for processes to exit.
Oct 13 13:32:49 np0005484548.novalocal systemd-logind[760]: Removed session 24.
Oct 13 13:32:49 np0005484548.novalocal sshd[23282]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:32:49 np0005484548.novalocal sshd[23282]: Accepted publickey for root from 192.168.122.11 port 57482 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 13:32:49 np0005484548.novalocal systemd-logind[760]: New session 25 of user root.
Oct 13 13:32:49 np0005484548.novalocal systemd[1]: Started Session 25 of User root.
Oct 13 13:32:49 np0005484548.novalocal sshd[23282]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:32:49 np0005484548.novalocal sudo[23299]:     root : PWD=/root ; USER=root ; COMMAND=/bin/dnf install -y podman python3-tripleoclient util-linux lvm2 cephadm
Oct 13 13:32:49 np0005484548.novalocal sudo[23299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:33:22 np0005484548.novalocal systemd[1]: Reloading.
Oct 13 13:33:22 np0005484548.novalocal systemd-rc-local-generator[23735]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:33:22 np0005484548.novalocal systemd-sysv-generator[23742]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:33:22 np0005484548.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:33:22 np0005484548.novalocal systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 13 13:33:22 np0005484548.novalocal systemd[1]: Reloading.
Oct 13 13:33:23 np0005484548.novalocal systemd-sysv-generator[23778]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:33:23 np0005484548.novalocal systemd-rc-local-generator[23775]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:33:23 np0005484548.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:33:23 np0005484548.novalocal systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 13 13:33:23 np0005484548.novalocal systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 13 13:33:23 np0005484548.novalocal systemd[1]: Reloading.
Oct 13 13:33:23 np0005484548.novalocal systemd-rc-local-generator[23815]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:33:23 np0005484548.novalocal systemd-sysv-generator[23820]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:33:23 np0005484548.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:33:23 np0005484548.novalocal systemd[1]: Listening on LVM2 poll daemon socket.
Oct 13 13:34:12 np0005484548.novalocal kernel: SELinux:  Converting 2682 SID table entries...
Oct 13 13:34:12 np0005484548.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:34:12 np0005484548.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 13 13:34:12 np0005484548.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:34:12 np0005484548.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:34:12 np0005484548.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:34:12 np0005484548.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:34:12 np0005484548.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:34:24 np0005484548.novalocal kernel: SELinux:  Converting 2683 SID table entries...
Oct 13 13:34:24 np0005484548.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:34:24 np0005484548.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 13 13:34:24 np0005484548.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:34:24 np0005484548.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:34:24 np0005484548.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:34:24 np0005484548.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:34:24 np0005484548.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:34:34 np0005484548.novalocal groupadd[24577]: group added to /etc/group: name=heat, GID=187
Oct 13 13:34:34 np0005484548.novalocal groupadd[24577]: group added to /etc/gshadow: name=heat
Oct 13 13:34:34 np0005484548.novalocal groupadd[24577]: new group: name=heat, GID=187
Oct 13 13:34:34 np0005484548.novalocal useradd[24584]: new user: name=heat, UID=187, GID=187, home=/var/lib/heat, shell=/sbin/nologin, from=none
Oct 13 13:34:36 np0005484548.novalocal groupadd[24602]: group added to /etc/group: name=puppet, GID=52
Oct 13 13:34:36 np0005484548.novalocal groupadd[24602]: group added to /etc/gshadow: name=puppet
Oct 13 13:34:36 np0005484548.novalocal groupadd[24602]: new group: name=puppet, GID=52
Oct 13 13:34:36 np0005484548.novalocal useradd[24609]: new user: name=puppet, UID=52, GID=52, home=/var/lib/puppet, shell=/sbin/nologin, from=none
Oct 13 13:34:41 np0005484548.novalocal groupadd[24621]: group added to /etc/group: name=cephadm, GID=984
Oct 13 13:34:41 np0005484548.novalocal groupadd[24621]: group added to /etc/gshadow: name=cephadm
Oct 13 13:34:41 np0005484548.novalocal groupadd[24621]: new group: name=cephadm, GID=984
Oct 13 13:34:42 np0005484548.novalocal useradd[24628]: new user: name=cephadm, UID=985, GID=984, home=/var/lib/cephadm, shell=/bin/bash, from=none
Oct 13 13:34:43 np0005484548.novalocal dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Oct 13 13:34:43 np0005484548.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 13:34:43 np0005484548.novalocal systemd[1]: Starting man-db-cache-update.service...
Oct 13 13:34:43 np0005484548.novalocal systemd[1]: Reloading.
Oct 13 13:34:43 np0005484548.novalocal systemd-rc-local-generator[25197]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:34:43 np0005484548.novalocal systemd-sysv-generator[25202]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:34:43 np0005484548.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:34:43 np0005484548.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 13:34:43 np0005484548.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 13:34:45 np0005484548.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 13:34:45 np0005484548.novalocal systemd[1]: Finished man-db-cache-update.service.
Oct 13 13:34:45 np0005484548.novalocal systemd[1]: man-db-cache-update.service: Consumed 2.045s CPU time.
Oct 13 13:34:45 np0005484548.novalocal systemd[1]: run-re9fcb0c8bd4e4fbfb4a127f2ce3107d1.service: Deactivated successfully.
Oct 13 13:34:45 np0005484548.novalocal systemd[1]: run-ra6479f1176a448e2ba57e6c06ae10879.service: Deactivated successfully.
Oct 13 13:34:46 np0005484548.novalocal sudo[23299]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:46 np0005484548.novalocal sudo[27737]:     root : PWD=/root ; USER=root ; COMMAND=/bin/hostnamectl set-hostname standalone.localdomain
Oct 13 13:34:46 np0005484548.novalocal sudo[27737]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:46 np0005484548.novalocal systemd[1]: Starting Hostname Service...
Oct 13 13:34:46 np0005484548.novalocal systemd[1]: Started Hostname Service.
Oct 13 13:34:46 standalone.localdomain systemd-hostnamed[27740]: Hostname set to <standalone.localdomain> (static)
Oct 13 13:34:46 standalone.localdomain NetworkManager[5962]: <info>  [1760362486.8397] hostname: static hostname changed from "np0005484548.novalocal" to "standalone.localdomain"
Oct 13 13:34:46 standalone.localdomain sudo[27737]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:46 standalone.localdomain systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 13 13:34:46 standalone.localdomain systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 13 13:34:46 standalone.localdomain sudo[27741]:     root : PWD=/root ; USER=root ; COMMAND=/bin/hostnamectl set-hostname standalone.localdomain --transient
Oct 13 13:34:46 standalone.localdomain sudo[27741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:46 standalone.localdomain sudo[27741]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:48 standalone.localdomain sudo[27772]:     root : PWD=/root ; USER=root ; COMMAND=/bin/mkdir -p /etc/os-net-config
Oct 13 13:34:48 standalone.localdomain sudo[27772]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:48 standalone.localdomain sudo[27772]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:48 standalone.localdomain sudo[27776]:     root : PWD=/root ; USER=root ; COMMAND=/bin/tee /etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg
Oct 13 13:34:48 standalone.localdomain sudo[27776]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:48 standalone.localdomain sudo[27776]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:48 standalone.localdomain sudo[27779]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl enable network
Oct 13 13:34:48 standalone.localdomain sudo[27779]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:48 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:34:48 standalone.localdomain systemd-rc-local-generator[27807]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:34:48 standalone.localdomain systemd-sysv-generator[27813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:34:48 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:34:49 standalone.localdomain sudo[27779]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:49 standalone.localdomain sudo[27819]:     root : PWD=/root ; USER=root ; COMMAND=/bin/cp /tmp/net_config.yaml /etc/os-net-config/config.yaml
Oct 13 13:34:49 standalone.localdomain sudo[27819]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:49 standalone.localdomain sudo[27819]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:49 standalone.localdomain sudo[27822]:     root : PWD=/root ; USER=root ; COMMAND=/bin/os-net-config -c /etc/os-net-config/config.yaml
Oct 13 13:34:49 standalone.localdomain sudo[27822]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:49 standalone.localdomain sudo[27822]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:49 standalone.localdomain sudo[27829]:     root : PWD=/root ; USER=root ; COMMAND=/bin/cp /tmp/net_config.yaml /root/standalone_net_config.j2
Oct 13 13:34:49 standalone.localdomain sudo[27829]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:49 standalone.localdomain sudo[27829]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:49 standalone.localdomain sudo[27832]:     root : PWD=/root ; USER=root ; COMMAND=/bin/cp /tmp/network_data.yaml /root/network_data.yaml
Oct 13 13:34:49 standalone.localdomain sudo[27832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:49 standalone.localdomain sudo[27832]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:49 standalone.localdomain sudo[27835]:     root : PWD=/root ; USER=root ; COMMAND=/bin/cp /tmp/deployed_network.yaml /root/deployed_network.yaml
Oct 13 13:34:49 standalone.localdomain sudo[27835]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:49 standalone.localdomain sudo[27835]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:49 standalone.localdomain sudo[27838]:     root : PWD=/root ; USER=root ; COMMAND=/bin/cp /tmp/Standalone.yaml /root/Standalone.yaml
Oct 13 13:34:49 standalone.localdomain sudo[27838]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:49 standalone.localdomain sudo[27838]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:50 standalone.localdomain sudo[27842]:     root : PWD=/root ; USER=root ; COMMAND=/bin/dd if=/dev/zero of=/var/lib/ceph-osd.img bs=1 count=0 seek=7G
Oct 13 13:34:50 standalone.localdomain sudo[27842]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:50 standalone.localdomain sudo[27842]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:50 standalone.localdomain sudo[27845]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/losetup /dev/loop3 /var/lib/ceph-osd.img
Oct 13 13:34:50 standalone.localdomain sudo[27845]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:50 standalone.localdomain kernel: loop: module loaded
Oct 13 13:34:50 standalone.localdomain kernel: loop3: detected capacity change from 0 to 14680064
Oct 13 13:34:50 standalone.localdomain sudo[27845]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:50 standalone.localdomain sudo[27854]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pvcreate /dev/loop3
Oct 13 13:34:50 standalone.localdomain sudo[27854]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:50 standalone.localdomain lvm[27857]: PV /dev/loop3 not used.
Oct 13 13:34:50 standalone.localdomain sudo[27854]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:50 standalone.localdomain sudo[27858]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/vgcreate vg2 /dev/loop3
Oct 13 13:34:50 standalone.localdomain sudo[27858]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:50 standalone.localdomain lvm[27861]: PV /dev/loop3 online, VG vg2 is complete.
Oct 13 13:34:50 standalone.localdomain sudo[27858]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:50 standalone.localdomain systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event vg2.
Oct 13 13:34:50 standalone.localdomain sudo[27862]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/lvcreate -n data-lv2 -l +100%FREE vg2
Oct 13 13:34:50 standalone.localdomain sudo[27862]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:50 standalone.localdomain lvm[27865]:   0 logical volume(s) in volume group "vg2" now active
Oct 13 13:34:50 standalone.localdomain systemd[1]: lvm-activate-vg2.service: Deactivated successfully.
Oct 13 13:34:50 standalone.localdomain lvm[27873]: PV /dev/loop3 online, VG vg2 is complete.
Oct 13 13:34:50 standalone.localdomain lvm[27873]: VG vg2 finished
Oct 13 13:34:50 standalone.localdomain sudo[27862]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:50 standalone.localdomain sudo[27875]:     root : PWD=/root ; USER=root ; COMMAND=/bin/openstack overcloud ceph spec --standalone --mon-ip 172.18.0.100 --osd-spec /root/osd_spec.yaml --output /root/ceph_spec.yaml
Oct 13 13:34:50 standalone.localdomain sudo[27875]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:52 standalone.localdomain python3[27901]: ansible-ceph_spec_bootstrap Invoked with new_ceph_spec=/root/ceph_spec.yaml osd_spec={'data_devices': {'paths': ['/dev/vg2/data-lv2']}} mon_ip=172.18.0.100 standalone=True tld= deployed_metalsmith=None tripleo_ansible_inventory=None ceph_service_types=None tripleo_roles=None fqdn=None crush_hierarchy=None method=None
Oct 13 13:34:53 standalone.localdomain sudo[27875]: pam_unix(sudo:session): session closed for user root
Oct 13 13:34:53 standalone.localdomain sudo[27906]:     root : PWD=/root ; USER=root ; COMMAND=/bin/openstack overcloud ceph user enable --standalone /root/ceph_spec.yaml
Oct 13 13:34:53 standalone.localdomain sudo[27906]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:34:55 standalone.localdomain python3[27928]: ansible-stat Invoked with path=/root/.ssh/ceph-admin-id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:34:55 standalone.localdomain python3[27932]: ansible-stat Invoked with path=/root/.ssh/ceph-admin-id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:34:56 standalone.localdomain python3[27941]: ansible-ansible.legacy.command Invoked with _raw_params=ssh-keygen -y -f /root/.ssh/ceph-admin-id_rsa > /root/.ssh/ceph-admin-id_rsa.pub _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:34:56 standalone.localdomain systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 13 13:34:57 standalone.localdomain python3[27953]: ansible-slurp Invoked with src=/root/.ssh/ceph-admin-id_rsa.pub
Oct 13 13:34:57 standalone.localdomain useradd[27959]: new group: name=ceph-admin, GID=1002
Oct 13 13:34:57 standalone.localdomain useradd[27959]: new user: name=ceph-admin, UID=1002, GID=1002, home=/home/ceph-admin, shell=/bin/bash, from=none
Oct 13 13:35:01 standalone.localdomain sudo[27906]: pam_unix(sudo:session): session closed for user root
Oct 13 13:35:01 standalone.localdomain sudo[28044]:     root : PWD=/root ; USER=root ; COMMAND=/bin/sed -i /--yes-i-know/d /usr/share/ansible/roles/tripleo_cephadm/tasks/bootstrap.yaml
Oct 13 13:35:01 standalone.localdomain sudo[28044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:35:01 standalone.localdomain sudo[28044]: pam_unix(sudo:session): session closed for user root
Oct 13 13:35:01 standalone.localdomain sudo[28047]:     root : PWD=/root ; USER=root ; COMMAND=/bin/openstack overcloud ceph deploy --mon-ip 172.18.0.100 --ceph-spec /root/ceph_spec.yaml --config /root/initial_ceph.conf --container-image-prepare /root/containers-prepare-parameters.yaml --standalone --single-host-defaults --skip-hosts-config --skip-container-registry-config --skip-user-create --network-data /tmp/network_data.yaml --ntp-server pool.ntp.org --output /root/deployed_ceph.yaml
Oct 13 13:35:01 standalone.localdomain sudo[28047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:35:04 standalone.localdomain python3[28069]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 13:35:04 standalone.localdomain python3[28070]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 13:35:05 standalone.localdomain python3[28106]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:35:05 standalone.localdomain python3[28105]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:35:05 standalone.localdomain python3[28122]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:35:05 standalone.localdomain python3[28121]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:35:10 standalone.localdomain python3[28164]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:35:10 standalone.localdomain python3[28163]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:35:11 standalone.localdomain python3[28175]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760362510.4553945-28144-272270287873528/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:35:11 standalone.localdomain python3[28176]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760362510.4257233-28143-271529188034398/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:35:12 standalone.localdomain python3[28199]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:35:12 standalone.localdomain python3[28200]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:35:13 standalone.localdomain python3[28211]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 13:35:13 standalone.localdomain chronyd[766]: chronyd exiting
Oct 13 13:35:13 standalone.localdomain systemd[1]: Stopping NTP client/server...
Oct 13 13:35:13 standalone.localdomain systemd[1]: chronyd.service: Deactivated successfully.
Oct 13 13:35:13 standalone.localdomain systemd[1]: Stopped NTP client/server.
Oct 13 13:35:13 standalone.localdomain systemd[1]: chronyd.service: Consumed 115ms CPU time, read 1.9M from disk, written 0B to disk.
Oct 13 13:35:13 standalone.localdomain systemd[1]: Starting NTP client/server...
Oct 13 13:35:13 standalone.localdomain chronyd[28218]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 13 13:35:13 standalone.localdomain chronyd[28218]: Frequency -30.428 +/- 0.095 ppm read from /var/lib/chrony/drift
Oct 13 13:35:13 standalone.localdomain chronyd[28218]: Loaded seccomp filter (level 2)
Oct 13 13:35:13 standalone.localdomain systemd[1]: Started NTP client/server.
Oct 13 13:35:14 standalone.localdomain python3[28237]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:35:14 standalone.localdomain python3[28246]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:35:14 standalone.localdomain python3[28250]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760362513.9580426-28225-19552702103165/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:35:14 standalone.localdomain python3[28254]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760362513.9802895-28226-269784085272518/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:35:14 standalone.localdomain python3[28264]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:35:14 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:35:14 standalone.localdomain python3[28266]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:35:14 standalone.localdomain systemd-rc-local-generator[28294]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:35:14 standalone.localdomain systemd-sysv-generator[28299]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:35:14 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:35:14 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:35:14 standalone.localdomain systemd-rc-local-generator[28332]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:35:14 standalone.localdomain systemd-sysv-generator[28336]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:35:15 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:35:15 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:35:15 standalone.localdomain systemd-rc-local-generator[28369]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:35:15 standalone.localdomain systemd-sysv-generator[28372]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:35:15 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:35:15 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:35:15 standalone.localdomain systemd-sysv-generator[28404]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:35:15 standalone.localdomain systemd-rc-local-generator[28400]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:35:15 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:35:15 standalone.localdomain systemd[1]: Starting chronyd online sources service...
Oct 13 13:35:15 standalone.localdomain chronyc[28416]: 200 OK
Oct 13 13:35:15 standalone.localdomain systemd[1]: chrony-online.service: Deactivated successfully.
Oct 13 13:35:15 standalone.localdomain systemd[1]: Finished chronyd online sources service.
Oct 13 13:35:15 standalone.localdomain python3[28426]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:35:16 standalone.localdomain chronyd[28218]: System clock was stepped by 0.000000 seconds
Oct 13 13:35:16 standalone.localdomain python3[28428]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:35:16 standalone.localdomain chronyd[28218]: System clock was stepped by 0.000000 seconds
Oct 13 13:35:16 standalone.localdomain python3[28436]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:35:16 standalone.localdomain python3[28438]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:35:16 standalone.localdomain systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 13 13:35:17 standalone.localdomain chronyd[28218]: Selected source 167.160.187.12 (pool.ntp.org)
Oct 13 13:35:26 standalone.localdomain python3[28449]: ansible-timezone Invoked with name=UTC hwclock=None
Oct 13 13:35:26 standalone.localdomain systemd[1]: Starting Time & Date Service...
Oct 13 13:35:26 standalone.localdomain python3[28450]: ansible-timezone Invoked with name=UTC hwclock=None
Oct 13 13:35:26 standalone.localdomain systemd[1]: Started Time & Date Service.
Oct 13 13:35:27 standalone.localdomain python3[28466]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 13:35:27 standalone.localdomain chronyd[28218]: chronyd exiting
Oct 13 13:35:27 standalone.localdomain systemd[1]: Stopping NTP client/server...
Oct 13 13:35:27 standalone.localdomain systemd[1]: chronyd.service: Deactivated successfully.
Oct 13 13:35:27 standalone.localdomain systemd[1]: Stopped NTP client/server.
Oct 13 13:35:27 standalone.localdomain systemd[1]: Starting NTP client/server...
Oct 13 13:35:27 standalone.localdomain chronyd[28474]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 13 13:35:27 standalone.localdomain chronyd[28474]: Frequency -30.428 +/- 0.136 ppm read from /var/lib/chrony/drift
Oct 13 13:35:27 standalone.localdomain chronyd[28474]: Loaded seccomp filter (level 2)
Oct 13 13:35:27 standalone.localdomain systemd[1]: Started NTP client/server.
Oct 13 13:35:29 standalone.localdomain python3[28516]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=latest releasever=9 allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None
Oct 13 13:35:31 standalone.localdomain chronyd[28474]: Selected source 162.159.200.1 (pool.ntp.org)
Oct 13 13:35:32 standalone.localdomain python3[28521]: ansible-stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:35:33 standalone.localdomain python3[28529]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:35:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 13:35:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 13:35:34 standalone.localdomain python3[28583]: ansible-file Invoked with path=/etc/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:35:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 13:35:34 standalone.localdomain python3[28587]: ansible-file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:35:34 standalone.localdomain python3[28591]: ansible-stat Invoked with path=/root/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:35:34 standalone.localdomain python3[28605]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:35:35 standalone.localdomain python3[28609]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760362534.8017225-28595-174932020033492/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yaml follow=False checksum=5a7900e149097cd39ae79c624bf5278d1b9d64eb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:35:35 standalone.localdomain python3[28615]: ansible-stat Invoked with path=/root/initial_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:35:35 standalone.localdomain python3[28629]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:35:36 standalone.localdomain python3[28633]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760362535.7296164-28619-25312611011256/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=b642163a043cd016e2098b4e69353b03b1e8ed38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:35:36 standalone.localdomain python3[28643]: ansible-stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:35:36 standalone.localdomain python3[28648]: ansible-stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:35:37 standalone.localdomain python3[28655]: ansible-stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:35:37 standalone.localdomain python3[28661]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest \bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 627e7f45-65aa-56de-94df-66eaee84a56e --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 172.18.0.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:35:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 13:35:37 standalone.localdomain sshd[28678]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:35:37 standalone.localdomain sshd[28678]: Accepted publickey for ceph-admin from fe80::f816:3eff:fe2e:69a7%eth0 port 47774 ssh2: RSA SHA256:cMHhkoBmUfNJ2/6UHMJENiofkvyf7CKW3i3hi1dNE6U
Oct 13 13:35:37 standalone.localdomain systemd-logind[760]: New session 26 of user ceph-admin.
Oct 13 13:35:37 standalone.localdomain systemd[1]: Created slice User Slice of UID 1002.
Oct 13 13:35:37 standalone.localdomain systemd[1]: Starting User Runtime Directory /run/user/1002...
Oct 13 13:35:37 standalone.localdomain systemd[1]: Finished User Runtime Directory /run/user/1002.
Oct 13 13:35:37 standalone.localdomain systemd[1]: Starting User Manager for UID 1002...
Oct 13 13:35:37 standalone.localdomain systemd[28682]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Oct 13 13:35:38 standalone.localdomain systemd[28682]: Queued start job for default target Main User Target.
Oct 13 13:35:38 standalone.localdomain systemd[28682]: Created slice User Application Slice.
Oct 13 13:35:38 standalone.localdomain systemd[28682]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 13 13:35:38 standalone.localdomain systemd[28682]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 13:35:38 standalone.localdomain systemd[28682]: Reached target Paths.
Oct 13 13:35:38 standalone.localdomain systemd[28682]: Reached target Timers.
Oct 13 13:35:38 standalone.localdomain systemd[28682]: Starting D-Bus User Message Bus Socket...
Oct 13 13:35:38 standalone.localdomain systemd[28682]: Starting Create User's Volatile Files and Directories...
Oct 13 13:35:38 standalone.localdomain systemd[28682]: Finished Create User's Volatile Files and Directories.
Oct 13 13:35:38 standalone.localdomain systemd[28682]: Listening on D-Bus User Message Bus Socket.
Oct 13 13:35:38 standalone.localdomain systemd[28682]: Reached target Sockets.
Oct 13 13:35:38 standalone.localdomain systemd[28682]: Reached target Basic System.
Oct 13 13:35:38 standalone.localdomain systemd[28682]: Reached target Main User Target.
Oct 13 13:35:38 standalone.localdomain systemd[28682]: Startup finished in 118ms.
Oct 13 13:35:38 standalone.localdomain systemd[1]: Started User Manager for UID 1002.
Oct 13 13:35:38 standalone.localdomain systemd[1]: Started Session 26 of User ceph-admin.
Oct 13 13:35:38 standalone.localdomain sshd[28678]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Oct 13 13:35:38 standalone.localdomain sudo[28698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Oct 13 13:35:38 standalone.localdomain sudo[28698]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:35:38 standalone.localdomain sudo[28698]: pam_unix(sudo:session): session closed for user root
Oct 13 13:35:38 standalone.localdomain sshd[28697]: Received disconnect from fe80::f816:3eff:fe2e:69a7%eth0 port 47774:11: disconnected by user
Oct 13 13:35:38 standalone.localdomain sshd[28697]: Disconnected from user ceph-admin fe80::f816:3eff:fe2e:69a7%eth0 port 47774
Oct 13 13:35:38 standalone.localdomain sshd[28678]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 13 13:35:38 standalone.localdomain systemd[1]: session-26.scope: Deactivated successfully.
Oct 13 13:35:38 standalone.localdomain systemd-logind[760]: Session 26 logged out. Waiting for processes to exit.
Oct 13 13:35:38 standalone.localdomain systemd-logind[760]: Removed session 26.
Oct 13 13:35:42 standalone.localdomain kernel: VFS: idmapped mount is not enabled.
Oct 13 13:35:48 standalone.localdomain systemd[1]: Stopping User Manager for UID 1002...
Oct 13 13:35:48 standalone.localdomain systemd[28682]: Activating special unit Exit the Session...
Oct 13 13:35:48 standalone.localdomain systemd[28682]: Stopped target Main User Target.
Oct 13 13:35:48 standalone.localdomain systemd[28682]: Stopped target Basic System.
Oct 13 13:35:48 standalone.localdomain systemd[28682]: Stopped target Paths.
Oct 13 13:35:48 standalone.localdomain systemd[28682]: Stopped target Sockets.
Oct 13 13:35:48 standalone.localdomain systemd[28682]: Stopped target Timers.
Oct 13 13:35:48 standalone.localdomain systemd[28682]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 13 13:35:48 standalone.localdomain systemd[28682]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 13:35:48 standalone.localdomain systemd[28682]: Closed D-Bus User Message Bus Socket.
Oct 13 13:35:48 standalone.localdomain systemd[28682]: Stopped Create User's Volatile Files and Directories.
Oct 13 13:35:48 standalone.localdomain systemd[28682]: Removed slice User Application Slice.
Oct 13 13:35:48 standalone.localdomain systemd[28682]: Reached target Shutdown.
Oct 13 13:35:48 standalone.localdomain systemd[28682]: Finished Exit the Session.
Oct 13 13:35:48 standalone.localdomain systemd[28682]: Reached target Exit the Session.
Oct 13 13:35:48 standalone.localdomain systemd[1]: user@1002.service: Deactivated successfully.
Oct 13 13:35:48 standalone.localdomain systemd[1]: Stopped User Manager for UID 1002.
Oct 13 13:35:48 standalone.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1002...
Oct 13 13:35:48 standalone.localdomain systemd[1]: run-user-1002.mount: Deactivated successfully.
Oct 13 13:35:48 standalone.localdomain systemd[1]: user-runtime-dir@1002.service: Deactivated successfully.
Oct 13 13:35:48 standalone.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1002.
Oct 13 13:35:48 standalone.localdomain systemd[1]: Removed slice User Slice of UID 1002.
Oct 13 13:35:56 standalone.localdomain systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 13 13:36:00 standalone.localdomain podman[28719]: 2025-10-13 13:35:38.219947682 +0000 UTC m=+0.041595711 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 13:36:00 standalone.localdomain podman[28862]: 
Oct 13 13:36:00 standalone.localdomain podman[28862]: 2025-10-13 13:36:00.105056673 +0000 UTC m=+0.062861197 container create f2db2ee23b989e4e47b6cd7a62eaeb62ef3527fbd1de77260069bbe89feaedd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_banzai, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, ceph=True, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:36:00 standalone.localdomain systemd[1]: Created slice Slice /machine.
Oct 13 13:36:00 standalone.localdomain systemd[1]: Started libpod-conmon-f2db2ee23b989e4e47b6cd7a62eaeb62ef3527fbd1de77260069bbe89feaedd2.scope.
Oct 13 13:36:00 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:00 standalone.localdomain podman[28862]: 2025-10-13 13:36:00.075841823 +0000 UTC m=+0.033646377 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:00 standalone.localdomain podman[28862]: 2025-10-13 13:36:00.208211659 +0000 UTC m=+0.166016193 container init f2db2ee23b989e4e47b6cd7a62eaeb62ef3527fbd1de77260069bbe89feaedd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_banzai, RELEASE=main, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git)
Oct 13 13:36:00 standalone.localdomain podman[28862]: 2025-10-13 13:36:00.217254217 +0000 UTC m=+0.175058751 container start f2db2ee23b989e4e47b6cd7a62eaeb62ef3527fbd1de77260069bbe89feaedd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_banzai, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=553, GIT_CLEAN=True, ceph=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7)
Oct 13 13:36:00 standalone.localdomain podman[28862]: 2025-10-13 13:36:00.217478204 +0000 UTC m=+0.175282758 container attach f2db2ee23b989e4e47b6cd7a62eaeb62ef3527fbd1de77260069bbe89feaedd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_banzai, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, name=rhceph, io.openshift.tags=rhceph ceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:00 standalone.localdomain dazzling_banzai[28876]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable)
Oct 13 13:36:00 standalone.localdomain systemd[1]: libpod-f2db2ee23b989e4e47b6cd7a62eaeb62ef3527fbd1de77260069bbe89feaedd2.scope: Deactivated successfully.
Oct 13 13:36:00 standalone.localdomain podman[28862]: 2025-10-13 13:36:00.296779765 +0000 UTC m=+0.254584379 container died f2db2ee23b989e4e47b6cd7a62eaeb62ef3527fbd1de77260069bbe89feaedd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_banzai, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, distribution-scope=public, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, version=7, release=553, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True)
Oct 13 13:36:00 standalone.localdomain podman[28881]: 2025-10-13 13:36:00.379749379 +0000 UTC m=+0.068943953 container remove f2db2ee23b989e4e47b6cd7a62eaeb62ef3527fbd1de77260069bbe89feaedd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_banzai, build-date=2025-09-24T08:57:55, release=553, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., version=7, ceph=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph)
Oct 13 13:36:00 standalone.localdomain systemd[1]: libpod-conmon-f2db2ee23b989e4e47b6cd7a62eaeb62ef3527fbd1de77260069bbe89feaedd2.scope: Deactivated successfully.
Oct 13 13:36:00 standalone.localdomain podman[28893]: 
Oct 13 13:36:00 standalone.localdomain podman[28893]: 2025-10-13 13:36:00.488919 +0000 UTC m=+0.068670055 container create 0e97fe3baa3e859f8475d62fd6900239f75c3950755a0ee00793dfef3408b6b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_napier, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vendor=Red Hat, Inc., name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, architecture=x86_64)
Oct 13 13:36:00 standalone.localdomain systemd[1]: Started libpod-conmon-0e97fe3baa3e859f8475d62fd6900239f75c3950755a0ee00793dfef3408b6b1.scope.
Oct 13 13:36:00 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:00 standalone.localdomain podman[28893]: 2025-10-13 13:36:00.541969393 +0000 UTC m=+0.121720458 container init 0e97fe3baa3e859f8475d62fd6900239f75c3950755a0ee00793dfef3408b6b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_napier, io.openshift.expose-services=, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.openshift.tags=rhceph ceph)
Oct 13 13:36:00 standalone.localdomain podman[28893]: 2025-10-13 13:36:00.548497334 +0000 UTC m=+0.128248389 container start 0e97fe3baa3e859f8475d62fd6900239f75c3950755a0ee00793dfef3408b6b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_napier, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, ceph=True, CEPH_POINT_RELEASE=, release=553, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:36:00 standalone.localdomain podman[28893]: 2025-10-13 13:36:00.548772162 +0000 UTC m=+0.128523267 container attach 0e97fe3baa3e859f8475d62fd6900239f75c3950755a0ee00793dfef3408b6b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_napier, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, architecture=x86_64, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, vcs-type=git, RELEASE=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main)
Oct 13 13:36:00 standalone.localdomain musing_napier[28909]: 167 167
Oct 13 13:36:00 standalone.localdomain systemd[1]: libpod-0e97fe3baa3e859f8475d62fd6900239f75c3950755a0ee00793dfef3408b6b1.scope: Deactivated successfully.
Oct 13 13:36:00 standalone.localdomain podman[28893]: 2025-10-13 13:36:00.550368452 +0000 UTC m=+0.130119527 container died 0e97fe3baa3e859f8475d62fd6900239f75c3950755a0ee00793dfef3408b6b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_napier, ceph=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, release=553, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_CLEAN=True, vendor=Red Hat, Inc.)
Oct 13 13:36:00 standalone.localdomain podman[28893]: 2025-10-13 13:36:00.463038464 +0000 UTC m=+0.042789529 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:00 standalone.localdomain podman[28914]: 2025-10-13 13:36:00.611493494 +0000 UTC m=+0.054430057 container remove 0e97fe3baa3e859f8475d62fd6900239f75c3950755a0ee00793dfef3408b6b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_napier, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_CLEAN=True, vendor=Red Hat, Inc., RELEASE=main, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, architecture=x86_64, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:36:00 standalone.localdomain systemd[1]: libpod-conmon-0e97fe3baa3e859f8475d62fd6900239f75c3950755a0ee00793dfef3408b6b1.scope: Deactivated successfully.
Oct 13 13:36:00 standalone.localdomain podman[28928]: 
Oct 13 13:36:00 standalone.localdomain podman[28928]: 2025-10-13 13:36:00.703291109 +0000 UTC m=+0.059680588 container create 43c552d99f9fd143c7f77bc105287a40cf45bac8c16d05b15912a84f48f99156 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_satoshi, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, release=553, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vendor=Red Hat, Inc.)
Oct 13 13:36:00 standalone.localdomain systemd[1]: Started libpod-conmon-43c552d99f9fd143c7f77bc105287a40cf45bac8c16d05b15912a84f48f99156.scope.
Oct 13 13:36:00 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:00 standalone.localdomain podman[28928]: 2025-10-13 13:36:00.754729003 +0000 UTC m=+0.111118522 container init 43c552d99f9fd143c7f77bc105287a40cf45bac8c16d05b15912a84f48f99156 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_satoshi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64)
Oct 13 13:36:00 standalone.localdomain podman[28928]: 2025-10-13 13:36:00.763384939 +0000 UTC m=+0.119774458 container start 43c552d99f9fd143c7f77bc105287a40cf45bac8c16d05b15912a84f48f99156 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_satoshi, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, vcs-type=git, architecture=x86_64, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_BRANCH=main)
Oct 13 13:36:00 standalone.localdomain podman[28928]: 2025-10-13 13:36:00.765087731 +0000 UTC m=+0.121477240 container attach 43c552d99f9fd143c7f77bc105287a40cf45bac8c16d05b15912a84f48f99156 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_satoshi, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=553, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, architecture=x86_64, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:36:00 standalone.localdomain infallible_satoshi[28943]: AQBAAO1omg1uLhAALF/3yLR90ryWlfOhnRMimQ==
Oct 13 13:36:00 standalone.localdomain systemd[1]: libpod-43c552d99f9fd143c7f77bc105287a40cf45bac8c16d05b15912a84f48f99156.scope: Deactivated successfully.
Oct 13 13:36:00 standalone.localdomain podman[28928]: 2025-10-13 13:36:00.781782045 +0000 UTC m=+0.138171554 container died 43c552d99f9fd143c7f77bc105287a40cf45bac8c16d05b15912a84f48f99156 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_satoshi, GIT_CLEAN=True, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, release=553, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:36:00 standalone.localdomain podman[28928]: 2025-10-13 13:36:00.683253022 +0000 UTC m=+0.039642521 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:00 standalone.localdomain podman[28950]: 2025-10-13 13:36:00.853157303 +0000 UTC m=+0.065133486 container remove 43c552d99f9fd143c7f77bc105287a40cf45bac8c16d05b15912a84f48f99156 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_satoshi, version=7, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, architecture=x86_64, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, release=553, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:36:00 standalone.localdomain systemd[1]: libpod-conmon-43c552d99f9fd143c7f77bc105287a40cf45bac8c16d05b15912a84f48f99156.scope: Deactivated successfully.
Oct 13 13:36:00 standalone.localdomain podman[28963]: 
Oct 13 13:36:00 standalone.localdomain podman[28963]: 2025-10-13 13:36:00.947091154 +0000 UTC m=+0.060529474 container create 3a61200349619a422dcff68a66646b9a41ae92c00aaed2266a43d880460a20aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_albattani, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main)
Oct 13 13:36:00 standalone.localdomain systemd[1]: Started libpod-conmon-3a61200349619a422dcff68a66646b9a41ae92c00aaed2266a43d880460a20aa.scope.
Oct 13 13:36:00 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:00 standalone.localdomain podman[28963]: 2025-10-13 13:36:00.998072564 +0000 UTC m=+0.111510924 container init 3a61200349619a422dcff68a66646b9a41ae92c00aaed2266a43d880460a20aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_albattani, RELEASE=main, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:36:01 standalone.localdomain podman[28963]: 2025-10-13 13:36:01.00540819 +0000 UTC m=+0.118846510 container start 3a61200349619a422dcff68a66646b9a41ae92c00aaed2266a43d880460a20aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_albattani, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_BRANCH=main, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True)
Oct 13 13:36:01 standalone.localdomain podman[28963]: 2025-10-13 13:36:01.00574695 +0000 UTC m=+0.119185340 container attach 3a61200349619a422dcff68a66646b9a41ae92c00aaed2266a43d880460a20aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_albattani, io.openshift.expose-services=, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, release=553, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, CEPH_POINT_RELEASE=, name=rhceph, GIT_BRANCH=main, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:36:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ac4e2a84662593658b93b67399d42f9fae33a9d9a910e1336a9666ad9ecfcaf5-merged.mount: Deactivated successfully.
Oct 13 13:36:01 standalone.localdomain podman[28963]: 2025-10-13 13:36:00.92810517 +0000 UTC m=+0.041543500 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:01 standalone.localdomain happy_albattani[28979]: AQBBAO1oXEfNARAAQzdf5NzEVA7SgfHNx+vDyQ==
Oct 13 13:36:01 standalone.localdomain systemd[1]: libpod-3a61200349619a422dcff68a66646b9a41ae92c00aaed2266a43d880460a20aa.scope: Deactivated successfully.
Oct 13 13:36:01 standalone.localdomain podman[28963]: 2025-10-13 13:36:01.033258897 +0000 UTC m=+0.146697317 container died 3a61200349619a422dcff68a66646b9a41ae92c00aaed2266a43d880460a20aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_albattani, vcs-type=git, GIT_CLEAN=True, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, version=7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, architecture=x86_64)
Oct 13 13:36:01 standalone.localdomain systemd[1]: tmp-crun.DhCTYq.mount: Deactivated successfully.
Oct 13 13:36:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b28dd8067e0ee38f6bac9ac1a140e37004cce8e293eef0bfde60987700e99f7e-merged.mount: Deactivated successfully.
Oct 13 13:36:01 standalone.localdomain podman[28986]: 2025-10-13 13:36:01.112327061 +0000 UTC m=+0.068616633 container remove 3a61200349619a422dcff68a66646b9a41ae92c00aaed2266a43d880460a20aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_albattani, GIT_BRANCH=main, io.openshift.expose-services=, architecture=x86_64, name=rhceph, ceph=True, distribution-scope=public, io.buildah.version=1.33.12, RELEASE=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, CEPH_POINT_RELEASE=, release=553, GIT_CLEAN=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc.)
Oct 13 13:36:01 standalone.localdomain systemd[1]: libpod-conmon-3a61200349619a422dcff68a66646b9a41ae92c00aaed2266a43d880460a20aa.scope: Deactivated successfully.
Oct 13 13:36:01 standalone.localdomain podman[29000]: 
Oct 13 13:36:01 standalone.localdomain podman[29000]: 2025-10-13 13:36:01.180875141 +0000 UTC m=+0.040516588 container create 2b2d7c83a019fdd7c74e8fe3730208d75cd3fef8e1331728b05b104cea1580e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_cerf, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.openshift.tags=rhceph ceph, version=7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:36:01 standalone.localdomain systemd[1]: Started libpod-conmon-2b2d7c83a019fdd7c74e8fe3730208d75cd3fef8e1331728b05b104cea1580e4.scope.
Oct 13 13:36:01 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:01 standalone.localdomain podman[29000]: 2025-10-13 13:36:01.233216673 +0000 UTC m=+0.092858120 container init 2b2d7c83a019fdd7c74e8fe3730208d75cd3fef8e1331728b05b104cea1580e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_cerf, release=553, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.expose-services=, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git)
Oct 13 13:36:01 standalone.localdomain podman[29000]: 2025-10-13 13:36:01.241049013 +0000 UTC m=+0.100690450 container start 2b2d7c83a019fdd7c74e8fe3730208d75cd3fef8e1331728b05b104cea1580e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_cerf, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, RELEASE=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, release=553, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:36:01 standalone.localdomain podman[29000]: 2025-10-13 13:36:01.241376763 +0000 UTC m=+0.101018470 container attach 2b2d7c83a019fdd7c74e8fe3730208d75cd3fef8e1331728b05b104cea1580e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_cerf, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.buildah.version=1.33.12, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, com.redhat.component=rhceph-container, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553)
Oct 13 13:36:01 standalone.localdomain relaxed_cerf[29016]: AQBBAO1o13jDDxAAWQianv2pgyRNJbvnEIYyzQ==
Oct 13 13:36:01 standalone.localdomain systemd[1]: libpod-2b2d7c83a019fdd7c74e8fe3730208d75cd3fef8e1331728b05b104cea1580e4.scope: Deactivated successfully.
Oct 13 13:36:01 standalone.localdomain podman[29000]: 2025-10-13 13:36:01.167112178 +0000 UTC m=+0.026753605 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:01 standalone.localdomain podman[29000]: 2025-10-13 13:36:01.267365754 +0000 UTC m=+0.127007181 container died 2b2d7c83a019fdd7c74e8fe3730208d75cd3fef8e1331728b05b104cea1580e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_cerf, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, version=7, architecture=x86_64, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, release=553, io.buildah.version=1.33.12, ceph=True, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:36:01 standalone.localdomain podman[29023]: 2025-10-13 13:36:01.344981803 +0000 UTC m=+0.069304554 container remove 2b2d7c83a019fdd7c74e8fe3730208d75cd3fef8e1331728b05b104cea1580e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_cerf, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, version=7, build-date=2025-09-24T08:57:55, RELEASE=main, name=rhceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public)
Oct 13 13:36:01 standalone.localdomain systemd[1]: libpod-conmon-2b2d7c83a019fdd7c74e8fe3730208d75cd3fef8e1331728b05b104cea1580e4.scope: Deactivated successfully.
Oct 13 13:36:01 standalone.localdomain podman[29037]: 
Oct 13 13:36:01 standalone.localdomain podman[29037]: 2025-10-13 13:36:01.448598183 +0000 UTC m=+0.053603211 container create a899c125497ddf901b1d2050d09ba2de88fe2d9c94d0832933333a669ad5f331 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_bohr, io.openshift.tags=rhceph ceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, ceph=True, architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12)
Oct 13 13:36:01 standalone.localdomain systemd[1]: Started libpod-conmon-a899c125497ddf901b1d2050d09ba2de88fe2d9c94d0832933333a669ad5f331.scope.
Oct 13 13:36:01 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:01 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7174c1733a40b025d711881b39b5408b05d3eb284b98be1f9ea814e4a4c20faf/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:01 standalone.localdomain podman[29037]: 2025-10-13 13:36:01.512725267 +0000 UTC m=+0.117730315 container init a899c125497ddf901b1d2050d09ba2de88fe2d9c94d0832933333a669ad5f331 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_bohr, CEPH_POINT_RELEASE=, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.openshift.expose-services=, name=rhceph, io.buildah.version=1.33.12, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-type=git, version=7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:36:01 standalone.localdomain podman[29037]: 2025-10-13 13:36:01.526068618 +0000 UTC m=+0.131073636 container start a899c125497ddf901b1d2050d09ba2de88fe2d9c94d0832933333a669ad5f331 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_bohr, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-type=git, GIT_BRANCH=main, release=553, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.openshift.tags=rhceph ceph, distribution-scope=public, name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.openshift.expose-services=, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:36:01 standalone.localdomain podman[29037]: 2025-10-13 13:36:01.526335946 +0000 UTC m=+0.131340974 container attach a899c125497ddf901b1d2050d09ba2de88fe2d9c94d0832933333a669ad5f331 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_bohr, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, release=553, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, ceph=True, version=7)
Oct 13 13:36:01 standalone.localdomain podman[29037]: 2025-10-13 13:36:01.431853297 +0000 UTC m=+0.036858335 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:01 standalone.localdomain bold_bohr[29054]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct 13 13:36:01 standalone.localdomain bold_bohr[29054]: setting min_mon_release = pacific
Oct 13 13:36:01 standalone.localdomain bold_bohr[29054]: /usr/bin/monmaptool: set fsid to 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:01 standalone.localdomain bold_bohr[29054]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct 13 13:36:01 standalone.localdomain systemd[1]: libpod-a899c125497ddf901b1d2050d09ba2de88fe2d9c94d0832933333a669ad5f331.scope: Deactivated successfully.
Oct 13 13:36:01 standalone.localdomain podman[29037]: 2025-10-13 13:36:01.558455295 +0000 UTC m=+0.163460343 container died a899c125497ddf901b1d2050d09ba2de88fe2d9c94d0832933333a669ad5f331 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_bohr, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main)
Oct 13 13:36:01 standalone.localdomain podman[29061]: 2025-10-13 13:36:01.626366166 +0000 UTC m=+0.062074132 container remove a899c125497ddf901b1d2050d09ba2de88fe2d9c94d0832933333a669ad5f331 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_bohr, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, build-date=2025-09-24T08:57:55, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, name=rhceph, RELEASE=main, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12)
Oct 13 13:36:01 standalone.localdomain systemd[1]: libpod-conmon-a899c125497ddf901b1d2050d09ba2de88fe2d9c94d0832933333a669ad5f331.scope: Deactivated successfully.
Oct 13 13:36:01 standalone.localdomain podman[29076]: 
Oct 13 13:36:01 standalone.localdomain podman[29076]: 2025-10-13 13:36:01.737620031 +0000 UTC m=+0.075099583 container create b58718aca6fd56d0f66e6ee632e7ad8a3e4bdc09e954d689ad207ce5a1196cd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_keldysh, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vendor=Red Hat, Inc., distribution-scope=public, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, ceph=True, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7)
Oct 13 13:36:01 standalone.localdomain systemd[1]: Started libpod-conmon-b58718aca6fd56d0f66e6ee632e7ad8a3e4bdc09e954d689ad207ce5a1196cd4.scope.
Oct 13 13:36:01 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:01 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b178f17c16709fbba3daff0610aa9196f5f5e9bae9acc5ee905d00f7f89c1d2/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:01 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b178f17c16709fbba3daff0610aa9196f5f5e9bae9acc5ee905d00f7f89c1d2/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:01 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b178f17c16709fbba3daff0610aa9196f5f5e9bae9acc5ee905d00f7f89c1d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:01 standalone.localdomain podman[29076]: 2025-10-13 13:36:01.708300368 +0000 UTC m=+0.045779940 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:01 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b178f17c16709fbba3daff0610aa9196f5f5e9bae9acc5ee905d00f7f89c1d2/merged/var/lib/ceph/mon/ceph-standalone supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:01 standalone.localdomain podman[29076]: 2025-10-13 13:36:01.833526223 +0000 UTC m=+0.171005735 container init b58718aca6fd56d0f66e6ee632e7ad8a3e4bdc09e954d689ad207ce5a1196cd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_keldysh, io.buildah.version=1.33.12, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, CEPH_POINT_RELEASE=, release=553, com.redhat.component=rhceph-container, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:36:01 standalone.localdomain podman[29076]: 2025-10-13 13:36:01.843102607 +0000 UTC m=+0.180582119 container start b58718aca6fd56d0f66e6ee632e7ad8a3e4bdc09e954d689ad207ce5a1196cd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_keldysh, version=7, GIT_BRANCH=main, ceph=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., GIT_CLEAN=True, architecture=x86_64, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:01 standalone.localdomain podman[29076]: 2025-10-13 13:36:01.843343795 +0000 UTC m=+0.180823327 container attach b58718aca6fd56d0f66e6ee632e7ad8a3e4bdc09e954d689ad207ce5a1196cd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_keldysh, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vcs-type=git, name=rhceph, version=7, architecture=x86_64, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, RELEASE=main, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph)
Oct 13 13:36:01 standalone.localdomain systemd[1]: libpod-b58718aca6fd56d0f66e6ee632e7ad8a3e4bdc09e954d689ad207ce5a1196cd4.scope: Deactivated successfully.
Oct 13 13:36:01 standalone.localdomain podman[29076]: 2025-10-13 13:36:01.927227687 +0000 UTC m=+0.264707209 container died b58718aca6fd56d0f66e6ee632e7ad8a3e4bdc09e954d689ad207ce5a1196cd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_keldysh, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., RELEASE=main, vcs-type=git, io.buildah.version=1.33.12, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container)
Oct 13 13:36:02 standalone.localdomain podman[29117]: 2025-10-13 13:36:02.003507015 +0000 UTC m=+0.065144166 container remove b58718aca6fd56d0f66e6ee632e7ad8a3e4bdc09e954d689ad207ce5a1196cd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_keldysh, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, ceph=True, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, name=rhceph, io.buildah.version=1.33.12, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:36:02 standalone.localdomain systemd[1]: libpod-conmon-b58718aca6fd56d0f66e6ee632e7ad8a3e4bdc09e954d689ad207ce5a1196cd4.scope: Deactivated successfully.
Oct 13 13:36:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 13:36:02 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:36:02 standalone.localdomain systemd-rc-local-generator[29161]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:36:02 standalone.localdomain systemd-sysv-generator[29164]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:36:02 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:36:02 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:36:02 standalone.localdomain systemd-rc-local-generator[29192]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:36:02 standalone.localdomain systemd-sysv-generator[29198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:36:02 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:36:02 standalone.localdomain systemd[1]: Reached target All Ceph clusters and services.
Oct 13 13:36:02 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:36:02 standalone.localdomain systemd-sysv-generator[29234]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:36:02 standalone.localdomain systemd-rc-local-generator[29230]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:36:02 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:36:02 standalone.localdomain systemd[1]: Reached target Ceph cluster 627e7f45-65aa-56de-94df-66eaee84a56e.
Oct 13 13:36:02 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:36:02 standalone.localdomain systemd-rc-local-generator[29274]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:36:02 standalone.localdomain systemd-sysv-generator[29280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:36:02 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:36:02 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:36:03 standalone.localdomain systemd-sysv-generator[29319]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:36:03 standalone.localdomain systemd-rc-local-generator[29313]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:36:03 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:36:03 standalone.localdomain systemd[1]: Created slice Slice /system/ceph-627e7f45-65aa-56de-94df-66eaee84a56e.
Oct 13 13:36:03 standalone.localdomain systemd[1]: Reached target System Time Set.
Oct 13 13:36:03 standalone.localdomain systemd[1]: Reached target System Time Synchronized.
Oct 13 13:36:03 standalone.localdomain systemd[1]: Starting Ceph mon.standalone for 627e7f45-65aa-56de-94df-66eaee84a56e...
Oct 13 13:36:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 13:36:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 13:36:03 standalone.localdomain podman[29379]: 
Oct 13 13:36:03 standalone.localdomain podman[29379]: 2025-10-13 13:36:03.561181158 +0000 UTC m=+0.064861618 container create fb8967e163854103b28c54dc00fdc218f1ef3c6f8a1b6845517f592fd42ed66d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-type=git, version=7, distribution-scope=public, build-date=2025-09-24T08:57:55)
Oct 13 13:36:03 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ac39df488c558a5f8efee7367834847bf5199d815df3b1b62fb1c2683df75f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:03 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ac39df488c558a5f8efee7367834847bf5199d815df3b1b62fb1c2683df75f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:03 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ac39df488c558a5f8efee7367834847bf5199d815df3b1b62fb1c2683df75f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:03 standalone.localdomain podman[29379]: 2025-10-13 13:36:03.53915881 +0000 UTC m=+0.042839290 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:03 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ac39df488c558a5f8efee7367834847bf5199d815df3b1b62fb1c2683df75f/merged/var/lib/ceph/mon/ceph-standalone supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:03 standalone.localdomain podman[29379]: 2025-10-13 13:36:03.657671148 +0000 UTC m=+0.161351638 container init fb8967e163854103b28c54dc00fdc218f1ef3c6f8a1b6845517f592fd42ed66d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, RELEASE=main, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, distribution-scope=public, vcs-type=git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container)
Oct 13 13:36:03 standalone.localdomain podman[29379]: 2025-10-13 13:36:03.669528493 +0000 UTC m=+0.173208973 container start fb8967e163854103b28c54dc00fdc218f1ef3c6f8a1b6845517f592fd42ed66d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, architecture=x86_64, CEPH_POINT_RELEASE=, RELEASE=main, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:36:03 standalone.localdomain bash[29379]: fb8967e163854103b28c54dc00fdc218f1ef3c6f8a1b6845517f592fd42ed66d
Oct 13 13:36:03 standalone.localdomain systemd[1]: Started Ceph mon.standalone for 627e7f45-65aa-56de-94df-66eaee84a56e.
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: set uid:gid to 167:167 (ceph:ceph)
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: pidfile_write: ignore empty --pid-file
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: load: jerasure load: lrc 
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: RocksDB version: 7.9.2
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Git sha 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Compile date 2025-09-23 00:00:00
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: DB SUMMARY
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: DB Session ID:  QLPWGFADYNWOFFHHOE4C
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: CURRENT file:  CURRENT
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: IDENTITY file:  IDENTITY
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: SST files in /var/lib/ceph/mon/ceph-standalone/store.db dir, Total Num: 0, files: 
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-standalone/store.db: 000004.log size: 811 ; 
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                         Options.error_if_exists: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                       Options.create_if_missing: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                         Options.paranoid_checks: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                                     Options.env: 0x560d3a7d99e0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                                Options.info_log: 0x560d3bdf4c80
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                Options.max_file_opening_threads: 16
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                              Options.statistics: (nil)
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                               Options.use_fsync: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                       Options.max_log_file_size: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                         Options.allow_fallocate: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                        Options.use_direct_reads: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:          Options.create_missing_column_families: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                              Options.db_log_dir: 
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                                 Options.wal_dir: 
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                   Options.advise_random_on_open: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                    Options.write_buffer_manager: 0x560d3be05540
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                            Options.rate_limiter: (nil)
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                  Options.unordered_write: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                               Options.row_cache: None
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                              Options.wal_filter: None
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.allow_ingest_behind: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.two_write_queues: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.manual_wal_flush: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.wal_compression: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.atomic_flush: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                 Options.log_readahead_size: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.allow_data_in_errors: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.db_host_id: __hostname__
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.max_background_jobs: 2
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.max_background_compactions: -1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.max_subcompactions: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.max_total_wal_size: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                          Options.max_open_files: -1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                          Options.bytes_per_sync: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:       Options.compaction_readahead_size: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                  Options.max_background_flushes: -1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Compression algorithms supported:
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         kZSTD supported: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         kXpressCompression supported: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         kBZip2Compression supported: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         kLZ4Compression supported: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         kZlibCompression supported: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         kLZ4HCCompression supported: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         kSnappyCompression supported: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-standalone/store.db/MANIFEST-000005
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:           Options.merge_operator: 
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:        Options.compaction_filter: None
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560d3bdf4900)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x560d3bdf1350
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 536870912
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:        Options.write_buffer_size: 33554432
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:  Options.max_write_buffer_number: 2
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:          Options.compression: NoCompression
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.num_levels: 7
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                   Options.table_properties_collectors: 
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-standalone/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4ca07273-cffb-4aba-ab62-7eb3dc930841
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760362563707446, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760362563709995, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1935, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 823, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 700, "raw_average_value_size": 140, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "QLPWGFADYNWOFFHHOE4C", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760362563710149, "job": 1, "event": "recovery_finished"}
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560d3be18e00
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: DB pointer 0x560d3bf0e000
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 0.0 total, 0.0 interval
                                                        Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                        Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                        Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      1/0    1.89 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                         Sum      1/0    1.89 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 0.0 total, 0.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x560d3bdf1350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.3e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: starting mon.standalone rank 0 at public addrs [v2:172.18.0.100:3300/0,v1:172.18.0.100:6789/0] at bind addrs [v2:172.18.0.100:3300/0,v1:172.18.0.100:6789/0] mon_data /var/lib/ceph/mon/ceph-standalone fsid 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@-1(???) e0 preinit fsid 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@-1(probing) e0  my rank is now 0 (was -1)
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(probing) e0 win_standalone_election
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: log_channel(cluster) log [INF] : mon.standalone is new leader, mons standalone in quorum (ranks 0)
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(probing) e1 win_standalone_election
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct 13 13:36:03 standalone.localdomain podman[29397]: 
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: log_channel(cluster) log [INF] : mon.standalone is new leader, mons standalone in quorum (ranks 0)
Oct 13 13:36:03 standalone.localdomain podman[29397]: 2025-10-13 13:36:03.791857419 +0000 UTC m=+0.087873577 container create f9c1a52d4a2e125025651a39f9ad281f4f8227c8f42cb7602b658593bc421f3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_chatelet, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, release=553, com.redhat.component=rhceph-container, RELEASE=main, CEPH_POINT_RELEASE=, GIT_BRANCH=main, name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True)
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: log_channel(cluster) log [DBG] : monmap epoch 1
Oct 13 13:36:03 standalone.localdomain podman[29397]: 2025-10-13 13:36:03.753789787 +0000 UTC m=+0.049805935 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: log_channel(cluster) log [DBG] : fsid 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: log_channel(cluster) log [DBG] : last_changed 2025-10-13T13:36:01.554952+0000
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: log_channel(cluster) log [DBG] : created 2025-10-13T13:36:01.554952+0000
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.100:3300/0,v1:172.18.0.100:6789/0] mon.standalone
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mgrc update_daemon_metadata mon.standalone metadata {addrs=[v2:172.18.0.100:3300/0,v1:172.18.0.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=standalone.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.6 (Plow),distro_version=9.6,hostname=standalone.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=0,mem_total_kb=16116612,os=Linux}
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).mds e1 new map
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).mds e1 print_map
                                                        e1
                                                        enable_multiple, ever_enabled_multiple: 1,1
                                                        default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                        legacy client fscid: -1
                                                         
                                                        No filesystems configured
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: log_channel(cluster) log [DBG] : fsmap 
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mkfs 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 13 13:36:03 standalone.localdomain systemd-journald[618]: Field hash table of /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal has a fill level at 75.7 (252 of 333 items), suggesting rotation.
Oct 13 13:36:03 standalone.localdomain systemd-journald[618]: /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 13 13:36:03 standalone.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 13 13:36:03 standalone.localdomain systemd[1]: Started libpod-conmon-f9c1a52d4a2e125025651a39f9ad281f4f8227c8f42cb7602b658593bc421f3d.scope.
Oct 13 13:36:03 standalone.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 13 13:36:03 standalone.localdomain ceph-mon[29396]: mon.standalone is new leader, mons standalone in quorum (ranks 0)
Oct 13 13:36:03 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:03 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abae7d137ed2f6ca89b99ef93f740af00a9002d619f7a7582c75187fcd9bab3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:03 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abae7d137ed2f6ca89b99ef93f740af00a9002d619f7a7582c75187fcd9bab3d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abae7d137ed2f6ca89b99ef93f740af00a9002d619f7a7582c75187fcd9bab3d/merged/var/lib/ceph/mon/ceph-standalone supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:04 standalone.localdomain podman[29397]: 2025-10-13 13:36:04.01894083 +0000 UTC m=+0.314956988 container init f9c1a52d4a2e125025651a39f9ad281f4f8227c8f42cb7602b658593bc421f3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_chatelet, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_BRANCH=main, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:36:04 standalone.localdomain podman[29397]: 2025-10-13 13:36:04.028646118 +0000 UTC m=+0.324662266 container start f9c1a52d4a2e125025651a39f9ad281f4f8227c8f42cb7602b658593bc421f3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_chatelet, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, release=553, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:36:04 standalone.localdomain podman[29397]: 2025-10-13 13:36:04.028892706 +0000 UTC m=+0.324908904 container attach f9c1a52d4a2e125025651a39f9ad281f4f8227c8f42cb7602b658593bc421f3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_chatelet, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, GIT_CLEAN=True, io.buildah.version=1.33.12, name=rhceph, vcs-type=git, release=553, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1165688321' entity='client.admin' cmd={"prefix": "status"} : dispatch
Oct 13 13:36:04 standalone.localdomain agitated_chatelet[29451]:   cluster:
Oct 13 13:36:04 standalone.localdomain agitated_chatelet[29451]:     id:     627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:04 standalone.localdomain agitated_chatelet[29451]:     health: HEALTH_OK
Oct 13 13:36:04 standalone.localdomain agitated_chatelet[29451]:  
Oct 13 13:36:04 standalone.localdomain agitated_chatelet[29451]:   services:
Oct 13 13:36:04 standalone.localdomain agitated_chatelet[29451]:     mon: 1 daemons, quorum standalone (age 0.411258s)
Oct 13 13:36:04 standalone.localdomain agitated_chatelet[29451]:     mgr: no daemons active
Oct 13 13:36:04 standalone.localdomain agitated_chatelet[29451]:     osd: 0 osds: 0 up, 0 in
Oct 13 13:36:04 standalone.localdomain agitated_chatelet[29451]:  
Oct 13 13:36:04 standalone.localdomain agitated_chatelet[29451]:   data:
Oct 13 13:36:04 standalone.localdomain agitated_chatelet[29451]:     pools:   0 pools, 0 pgs
Oct 13 13:36:04 standalone.localdomain agitated_chatelet[29451]:     objects: 0 objects, 0 B
Oct 13 13:36:04 standalone.localdomain agitated_chatelet[29451]:     usage:   0 B used, 0 B / 0 B avail
Oct 13 13:36:04 standalone.localdomain agitated_chatelet[29451]:     pgs:     
Oct 13 13:36:04 standalone.localdomain agitated_chatelet[29451]:  
Oct 13 13:36:04 standalone.localdomain systemd[1]: libpod-f9c1a52d4a2e125025651a39f9ad281f4f8227c8f42cb7602b658593bc421f3d.scope: Deactivated successfully.
Oct 13 13:36:04 standalone.localdomain podman[29397]: 2025-10-13 13:36:04.211153437 +0000 UTC m=+0.507169635 container died f9c1a52d4a2e125025651a39f9ad281f4f8227c8f42cb7602b658593bc421f3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_chatelet, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-09-24T08:57:55, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_BRANCH=main)
Oct 13 13:36:04 standalone.localdomain podman[29477]: 2025-10-13 13:36:04.297134593 +0000 UTC m=+0.075624369 container remove f9c1a52d4a2e125025651a39f9ad281f4f8227c8f42cb7602b658593bc421f3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_chatelet, GIT_CLEAN=True, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, CEPH_POINT_RELEASE=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph)
Oct 13 13:36:04 standalone.localdomain systemd[1]: libpod-conmon-f9c1a52d4a2e125025651a39f9ad281f4f8227c8f42cb7602b658593bc421f3d.scope: Deactivated successfully.
Oct 13 13:36:04 standalone.localdomain podman[29492]: 
Oct 13 13:36:04 standalone.localdomain podman[29492]: 2025-10-13 13:36:04.36848072 +0000 UTC m=+0.053402245 container create 6a2d8778413ca4668a1e6bbd18663e5739fea15e009ea18836ebbd656d361c11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_tu, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, release=553, name=rhceph, version=7, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:36:04 standalone.localdomain systemd[1]: Started libpod-conmon-6a2d8778413ca4668a1e6bbd18663e5739fea15e009ea18836ebbd656d361c11.scope.
Oct 13 13:36:04 standalone.localdomain systemd[1]: tmp-crun.GXtEJO.mount: Deactivated successfully.
Oct 13 13:36:04 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6312ea26da1e0de7632f387342b80ffa765a0a0b907e6da476f601348bae992/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:04 standalone.localdomain podman[29492]: 2025-10-13 13:36:04.342362306 +0000 UTC m=+0.027283831 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6312ea26da1e0de7632f387342b80ffa765a0a0b907e6da476f601348bae992/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6312ea26da1e0de7632f387342b80ffa765a0a0b907e6da476f601348bae992/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6312ea26da1e0de7632f387342b80ffa765a0a0b907e6da476f601348bae992/merged/var/lib/ceph/mon/ceph-standalone supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:04 standalone.localdomain podman[29492]: 2025-10-13 13:36:04.482876462 +0000 UTC m=+0.167797997 container init 6a2d8778413ca4668a1e6bbd18663e5739fea15e009ea18836ebbd656d361c11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_tu, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, com.redhat.component=rhceph-container, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=553, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, ceph=True)
Oct 13 13:36:04 standalone.localdomain podman[29492]: 2025-10-13 13:36:04.490683582 +0000 UTC m=+0.175605127 container start 6a2d8778413ca4668a1e6bbd18663e5739fea15e009ea18836ebbd656d361c11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_tu, distribution-scope=public, io.openshift.tags=rhceph ceph, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, ceph=True, vcs-type=git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, version=7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:36:04 standalone.localdomain podman[29492]: 2025-10-13 13:36:04.49126322 +0000 UTC m=+0.176184755 container attach 6a2d8778413ca4668a1e6bbd18663e5739fea15e009ea18836ebbd656d361c11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_tu, name=rhceph, version=7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, architecture=x86_64, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git)
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/508469761' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/508469761' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 13 13:36:04 standalone.localdomain interesting_tu[29508]: 
Oct 13 13:36:04 standalone.localdomain interesting_tu[29508]: [global]
Oct 13 13:36:04 standalone.localdomain interesting_tu[29508]:         fsid = 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:04 standalone.localdomain interesting_tu[29508]:         mon_host = [v2:172.18.0.100:3300,v1:172.18.0.100:6789]
Oct 13 13:36:04 standalone.localdomain interesting_tu[29508]:         osd_crush_chooseleaf_type = 0
Oct 13 13:36:04 standalone.localdomain systemd[1]: libpod-6a2d8778413ca4668a1e6bbd18663e5739fea15e009ea18836ebbd656d361c11.scope: Deactivated successfully.
Oct 13 13:36:04 standalone.localdomain podman[29492]: 2025-10-13 13:36:04.67090145 +0000 UTC m=+0.355823015 container died 6a2d8778413ca4668a1e6bbd18663e5739fea15e009ea18836ebbd656d361c11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_tu, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, distribution-scope=public, RELEASE=main, GIT_CLEAN=True, architecture=x86_64, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7)
Oct 13 13:36:04 standalone.localdomain podman[29534]: 2025-10-13 13:36:04.755632088 +0000 UTC m=+0.074923807 container remove 6a2d8778413ca4668a1e6bbd18663e5739fea15e009ea18836ebbd656d361c11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_tu, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, name=rhceph, release=553, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.openshift.expose-services=)
Oct 13 13:36:04 standalone.localdomain systemd[1]: libpod-conmon-6a2d8778413ca4668a1e6bbd18663e5739fea15e009ea18836ebbd656d361c11.scope: Deactivated successfully.
Oct 13 13:36:04 standalone.localdomain podman[29548]: 
Oct 13 13:36:04 standalone.localdomain podman[29548]: 2025-10-13 13:36:04.826257652 +0000 UTC m=+0.052105135 container create 8e8ca0dd3c4f4a2ffc224046c2628d721edb2ff01995b607544f4bc86934c76b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_pare, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, architecture=x86_64, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, ceph=True)
Oct 13 13:36:04 standalone.localdomain systemd[1]: Started libpod-conmon-8e8ca0dd3c4f4a2ffc224046c2628d721edb2ff01995b607544f4bc86934c76b.scope.
Oct 13 13:36:04 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aec261a754b366fead85165afb91504f16836c3302f2eb3f6c8368a2b3fac29/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aec261a754b366fead85165afb91504f16836c3302f2eb3f6c8368a2b3fac29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aec261a754b366fead85165afb91504f16836c3302f2eb3f6c8368a2b3fac29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:04 standalone.localdomain podman[29548]: 2025-10-13 13:36:04.798884759 +0000 UTC m=+0.024732242 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aec261a754b366fead85165afb91504f16836c3302f2eb3f6c8368a2b3fac29/merged/var/lib/ceph/mon/ceph-standalone supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:04 standalone.localdomain podman[29548]: 2025-10-13 13:36:04.925174527 +0000 UTC m=+0.151022020 container init 8e8ca0dd3c4f4a2ffc224046c2628d721edb2ff01995b607544f4bc86934c76b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_pare, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, vendor=Red Hat, Inc., distribution-scope=public, ceph=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, name=rhceph, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, io.openshift.tags=rhceph ceph)
Oct 13 13:36:04 standalone.localdomain podman[29548]: 2025-10-13 13:36:04.93338543 +0000 UTC m=+0.159232913 container start 8e8ca0dd3c4f4a2ffc224046c2628d721edb2ff01995b607544f4bc86934c76b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_pare, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, name=rhceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.openshift.expose-services=, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, CEPH_POINT_RELEASE=)
Oct 13 13:36:04 standalone.localdomain podman[29548]: 2025-10-13 13:36:04.93373059 +0000 UTC m=+0.159578103 container attach 8e8ca0dd3c4f4a2ffc224046c2628d721edb2ff01995b607544f4bc86934c76b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_pare, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, ceph=True, build-date=2025-09-24T08:57:55, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, GIT_CLEAN=True, vendor=Red Hat, Inc.)
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: mon.standalone is new leader, mons standalone in quorum (ranks 0)
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: monmap epoch 1
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: fsid 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: last_changed 2025-10-13T13:36:01.554952+0000
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: created 2025-10-13T13:36:01.554952+0000
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: min_mon_release 18 (reef)
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: election_strategy: 1
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: 0: [v2:172.18.0.100:3300/0,v1:172.18.0.100:6789/0] mon.standalone
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: fsmap 
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: osdmap e1: 0 total, 0 up, 0 in
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: mgrmap e1: no daemons active
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: from='client.? 172.18.0.100:0/1165688321' entity='client.admin' cmd={"prefix": "status"} : dispatch
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: from='client.? 172.18.0.100:0/508469761' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Oct 13 13:36:04 standalone.localdomain ceph-mon[29396]: from='client.? 172.18.0.100:0/508469761' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 13 13:36:05 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:36:05 standalone.localdomain ceph-mon[29396]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3487856685' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:05 standalone.localdomain systemd[1]: libpod-8e8ca0dd3c4f4a2ffc224046c2628d721edb2ff01995b607544f4bc86934c76b.scope: Deactivated successfully.
Oct 13 13:36:05 standalone.localdomain podman[29548]: 2025-10-13 13:36:05.159718288 +0000 UTC m=+0.385565811 container died 8e8ca0dd3c4f4a2ffc224046c2628d721edb2ff01995b607544f4bc86934c76b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_pare, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.openshift.expose-services=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, architecture=x86_64)
Oct 13 13:36:05 standalone.localdomain podman[29590]: 2025-10-13 13:36:05.241182965 +0000 UTC m=+0.069834510 container remove 8e8ca0dd3c4f4a2ffc224046c2628d721edb2ff01995b607544f4bc86934c76b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_pare, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, CEPH_POINT_RELEASE=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, release=553, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.expose-services=, build-date=2025-09-24T08:57:55)
Oct 13 13:36:05 standalone.localdomain systemd[1]: libpod-conmon-8e8ca0dd3c4f4a2ffc224046c2628d721edb2ff01995b607544f4bc86934c76b.scope: Deactivated successfully.
Oct 13 13:36:05 standalone.localdomain systemd[1]: Stopping Ceph mon.standalone for 627e7f45-65aa-56de-94df-66eaee84a56e...
Oct 13 13:36:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d6312ea26da1e0de7632f387342b80ffa765a0a0b907e6da476f601348bae992-merged.mount: Deactivated successfully.
Oct 13 13:36:05 standalone.localdomain ceph-mon[29396]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.standalone -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 13 13:36:05 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader) e1 *** Got Signal Terminated ***
Oct 13 13:36:05 standalone.localdomain ceph-mon[29396]: mon.standalone@0(leader) e1 shutdown
Oct 13 13:36:05 standalone.localdomain ceph-mon[29396]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 13 13:36:05 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone[29392]: 2025-10-13T13:36:05.536+0000 7f5c94df9640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.standalone -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 13 13:36:05 standalone.localdomain ceph-mon[29396]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 13 13:36:05 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone[29392]: 2025-10-13T13:36:05.536+0000 7f5c94df9640 -1 mon.standalone@0(leader) e1 *** Got Signal Terminated ***
Oct 13 13:36:05 standalone.localdomain podman[29632]: 2025-10-13 13:36:05.801875206 +0000 UTC m=+0.321983243 container died fb8967e163854103b28c54dc00fdc218f1ef3c6f8a1b6845517f592fd42ed66d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, ceph=True, RELEASE=main, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main)
Oct 13 13:36:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-78ac39df488c558a5f8efee7367834847bf5199d815df3b1b62fb1c2683df75f-merged.mount: Deactivated successfully.
Oct 13 13:36:05 standalone.localdomain podman[29632]: 2025-10-13 13:36:05.817197988 +0000 UTC m=+0.337305975 container cleanup fb8967e163854103b28c54dc00fdc218f1ef3c6f8a1b6845517f592fd42ed66d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, CEPH_POINT_RELEASE=, io.openshift.expose-services=, name=rhceph, distribution-scope=public, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, version=7, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, ceph=True, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:36:05 standalone.localdomain bash[29632]: ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone
Oct 13 13:36:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 13:36:05 standalone.localdomain podman[29644]: 2025-10-13 13:36:05.848840812 +0000 UTC m=+0.043722987 container remove fb8967e163854103b28c54dc00fdc218f1ef3c6f8a1b6845517f592fd42ed66d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.openshift.expose-services=, release=553, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:36:06 standalone.localdomain systemd[1]: ceph-627e7f45-65aa-56de-94df-66eaee84a56e@mon.standalone.service: Deactivated successfully.
Oct 13 13:36:06 standalone.localdomain systemd[1]: Stopped Ceph mon.standalone for 627e7f45-65aa-56de-94df-66eaee84a56e.
Oct 13 13:36:06 standalone.localdomain systemd[1]: ceph-627e7f45-65aa-56de-94df-66eaee84a56e@mon.standalone.service: Consumed 1.672s CPU time.
Oct 13 13:36:06 standalone.localdomain systemd[1]: Starting Ceph mon.standalone for 627e7f45-65aa-56de-94df-66eaee84a56e...
Oct 13 13:36:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 13:36:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 13:36:06 standalone.localdomain podman[29737]: 
Oct 13 13:36:06 standalone.localdomain podman[29737]: 2025-10-13 13:36:06.588831992 +0000 UTC m=+0.065234069 container create 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, description=Red Hat Ceph Storage 7, release=553, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, RELEASE=main, ceph=True, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git)
Oct 13 13:36:06 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f97edaf59cdb5f655428f44a7e0dccafb42e86a6a6dd0f845e655ef6b4f305/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:06 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f97edaf59cdb5f655428f44a7e0dccafb42e86a6a6dd0f845e655ef6b4f305/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:06 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f97edaf59cdb5f655428f44a7e0dccafb42e86a6a6dd0f845e655ef6b4f305/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:06 standalone.localdomain podman[29737]: 2025-10-13 13:36:06.56700843 +0000 UTC m=+0.043410517 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:06 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f97edaf59cdb5f655428f44a7e0dccafb42e86a6a6dd0f845e655ef6b4f305/merged/var/lib/ceph/mon/ceph-standalone supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:06 standalone.localdomain podman[29737]: 2025-10-13 13:36:06.684703443 +0000 UTC m=+0.161105560 container init 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.component=rhceph-container)
Oct 13 13:36:06 standalone.localdomain podman[29737]: 2025-10-13 13:36:06.693544496 +0000 UTC m=+0.169946613 container start 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.component=rhceph-container, RELEASE=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True)
Oct 13 13:36:06 standalone.localdomain bash[29737]: 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580
Oct 13 13:36:06 standalone.localdomain systemd[1]: Started Ceph mon.standalone for 627e7f45-65aa-56de-94df-66eaee84a56e.
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: set uid:gid to 167:167 (ceph:ceph)
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: pidfile_write: ignore empty --pid-file
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: load: jerasure load: lrc 
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: RocksDB version: 7.9.2
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Git sha 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Compile date 2025-09-23 00:00:00
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: DB SUMMARY
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: DB Session ID:  GD13TFTIGQ1NUS7TF7HW
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: CURRENT file:  CURRENT
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: IDENTITY file:  IDENTITY
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: SST files in /var/lib/ceph/mon/ceph-standalone/store.db dir, Total Num: 1, files: 000008.sst 
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-standalone/store.db: 000009.log size: 57017 ; 
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                         Options.error_if_exists: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                       Options.create_if_missing: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                         Options.paranoid_checks: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                                     Options.env: 0x557b7d5839e0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                                Options.info_log: 0x557b7fb5ed60
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                Options.max_file_opening_threads: 16
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                              Options.statistics: (nil)
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                               Options.use_fsync: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                       Options.max_log_file_size: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                         Options.allow_fallocate: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                        Options.use_direct_reads: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:          Options.create_missing_column_families: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                              Options.db_log_dir: 
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                                 Options.wal_dir: 
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                   Options.advise_random_on_open: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                    Options.write_buffer_manager: 0x557b7fb6f540
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                            Options.rate_limiter: (nil)
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                  Options.unordered_write: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                               Options.row_cache: None
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                              Options.wal_filter: None
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.allow_ingest_behind: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.two_write_queues: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.manual_wal_flush: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.wal_compression: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.atomic_flush: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                 Options.log_readahead_size: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.allow_data_in_errors: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.db_host_id: __hostname__
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.max_background_jobs: 2
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.max_background_compactions: -1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.max_subcompactions: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.max_total_wal_size: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                          Options.max_open_files: -1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                          Options.bytes_per_sync: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:       Options.compaction_readahead_size: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                  Options.max_background_flushes: -1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Compression algorithms supported:
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         kZSTD supported: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         kXpressCompression supported: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         kBZip2Compression supported: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         kLZ4Compression supported: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         kZlibCompression supported: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         kLZ4HCCompression supported: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         kSnappyCompression supported: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-standalone/store.db/MANIFEST-000010
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:           Options.merge_operator: 
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:        Options.compaction_filter: None
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557b7fb5ea80)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x557b7fb5b350
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 536870912
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:        Options.write_buffer_size: 33554432
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:  Options.max_write_buffer_number: 2
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:          Options.compression: NoCompression
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.num_levels: 7
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                   Options.table_properties_collectors: 
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-standalone/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4ca07273-cffb-4aba-ab62-7eb3dc930841
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760362566737138, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760362566740461, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 56911, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 132, "table_properties": {"data_size": 55469, "index_size": 161, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2803, "raw_average_key_size": 28, "raw_value_size": 53113, "raw_average_value_size": 536, "num_data_blocks": 9, "num_entries": 99, "num_filter_entries": 99, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362566, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760362566740595, "job": 1, "event": "recovery_finished"}
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557b7fb82e00
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: DB pointer 0x557b7fc78000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 0.0 total, 0.0 interval
                                                        Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                        Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                        Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      2/0   57.47 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                         Sum      2/0   57.47 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 0.0 total, 0.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 4.41 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 4.41 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x557b7fb5b350#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: starting mon.standalone rank 0 at public addrs [v2:172.18.0.100:3300/0,v1:172.18.0.100:6789/0] at bind addrs [v2:172.18.0.100:3300/0,v1:172.18.0.100:6789/0] mon_data /var/lib/ceph/mon/ceph-standalone fsid 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: mon.standalone@-1(???) e1 preinit fsid 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: mon.standalone@-1(???).mds e1 new map
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: mon.standalone@-1(???).mds e1 print_map
                                                        e1
                                                        enable_multiple, ever_enabled_multiple: 1,1
                                                        default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                        legacy client fscid: -1
                                                         
                                                        No filesystems configured
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: mon.standalone@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: mon.standalone@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: mon.standalone@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: mon.standalone@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: mon.standalone@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: mon.standalone@-1(probing) e1  my rank is now 0 (was -1)
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(probing) e1 win_standalone_election
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [INF] : mon.standalone is new leader, mons standalone in quorum (ranks 0)
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : monmap epoch 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : fsid 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : last_changed 2025-10-13T13:36:01.554952+0000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : created 2025-10-13T13:36:01.554952+0000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.100:3300/0,v1:172.18.0.100:6789/0] mon.standalone
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : fsmap 
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 13 13:36:06 standalone.localdomain podman[29757]: 
Oct 13 13:36:06 standalone.localdomain podman[29757]: 2025-10-13 13:36:06.790635074 +0000 UTC m=+0.073533714 container create 4f8c5025cb2c78ffa3403cc81f09cdb131afbf86039afef6dd75f6841bc98f92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_dhawan, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, vendor=Red Hat, Inc., release=553, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, ceph=True, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main)
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: mon.standalone is new leader, mons standalone in quorum (ranks 0)
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: monmap epoch 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: fsid 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: last_changed 2025-10-13T13:36:01.554952+0000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: created 2025-10-13T13:36:01.554952+0000
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: min_mon_release 18 (reef)
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: election_strategy: 1
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: 0: [v2:172.18.0.100:3300/0,v1:172.18.0.100:6789/0] mon.standalone
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: fsmap 
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: osdmap e1: 0 total, 0 up, 0 in
Oct 13 13:36:06 standalone.localdomain ceph-mon[29756]: mgrmap e1: no daemons active
Oct 13 13:36:06 standalone.localdomain systemd[1]: Started libpod-conmon-4f8c5025cb2c78ffa3403cc81f09cdb131afbf86039afef6dd75f6841bc98f92.scope.
Oct 13 13:36:06 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:06 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a24a3453f9a83ce9208f175efab97995b7ef3695222ad03043e330306ca2edc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:06 standalone.localdomain podman[29757]: 2025-10-13 13:36:06.760400734 +0000 UTC m=+0.043299404 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:06 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a24a3453f9a83ce9208f175efab97995b7ef3695222ad03043e330306ca2edc1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:06 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a24a3453f9a83ce9208f175efab97995b7ef3695222ad03043e330306ca2edc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:06 standalone.localdomain podman[29757]: 2025-10-13 13:36:06.875742685 +0000 UTC m=+0.158641295 container init 4f8c5025cb2c78ffa3403cc81f09cdb131afbf86039afef6dd75f6841bc98f92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_dhawan, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, RELEASE=main, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, ceph=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:36:06 standalone.localdomain podman[29757]: 2025-10-13 13:36:06.885478444 +0000 UTC m=+0.168377064 container start 4f8c5025cb2c78ffa3403cc81f09cdb131afbf86039afef6dd75f6841bc98f92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_dhawan, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, name=rhceph, RELEASE=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, ceph=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553)
Oct 13 13:36:06 standalone.localdomain podman[29757]: 2025-10-13 13:36:06.885840995 +0000 UTC m=+0.168739605 container attach 4f8c5025cb2c78ffa3403cc81f09cdb131afbf86039afef6dd75f6841bc98f92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_dhawan, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=553, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, RELEASE=main, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:36:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Oct 13 13:36:07 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/3249715057' entity='client.admin' 
Oct 13 13:36:07 standalone.localdomain systemd[1]: libpod-4f8c5025cb2c78ffa3403cc81f09cdb131afbf86039afef6dd75f6841bc98f92.scope: Deactivated successfully.
Oct 13 13:36:07 standalone.localdomain podman[29837]: 2025-10-13 13:36:07.158225491 +0000 UTC m=+0.051211599 container died 4f8c5025cb2c78ffa3403cc81f09cdb131afbf86039afef6dd75f6841bc98f92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_dhawan, name=rhceph, ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=553, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:36:07 standalone.localdomain podman[29837]: 2025-10-13 13:36:07.187666147 +0000 UTC m=+0.080652205 container remove 4f8c5025cb2c78ffa3403cc81f09cdb131afbf86039afef6dd75f6841bc98f92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_dhawan, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, release=553, io.openshift.expose-services=, name=rhceph, distribution-scope=public, RELEASE=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=)
Oct 13 13:36:07 standalone.localdomain systemd[1]: libpod-conmon-4f8c5025cb2c78ffa3403cc81f09cdb131afbf86039afef6dd75f6841bc98f92.scope: Deactivated successfully.
Oct 13 13:36:07 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:36:07 standalone.localdomain systemd-sysv-generator[29879]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:36:07 standalone.localdomain systemd-rc-local-generator[29874]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:36:07 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:36:07 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:36:07 standalone.localdomain systemd-rc-local-generator[29916]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:36:07 standalone.localdomain systemd-sysv-generator[29919]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:36:07 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:36:07 standalone.localdomain systemd[1]: Starting Ceph mgr.standalone.ectizd for 627e7f45-65aa-56de-94df-66eaee84a56e...
Oct 13 13:36:08 standalone.localdomain podman[29981]: 
Oct 13 13:36:08 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3249715057' entity='client.admin' 
Oct 13 13:36:08 standalone.localdomain podman[29981]: 2025-10-13 13:36:08.091346146 +0000 UTC m=+0.068656705 container create 45f657e5908aacdd8c5d77c40731f5216f343a0998e730c79942a85fa5849f2b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., distribution-scope=public, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, version=7, RELEASE=main)
Oct 13 13:36:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c984ccd709a4eac448b7b346ea1f075f79f17302b5b535185f64bc5024f3e2f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c984ccd709a4eac448b7b346ea1f075f79f17302b5b535185f64bc5024f3e2f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c984ccd709a4eac448b7b346ea1f075f79f17302b5b535185f64bc5024f3e2f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c984ccd709a4eac448b7b346ea1f075f79f17302b5b535185f64bc5024f3e2f1/merged/var/lib/ceph/mgr/ceph-standalone.ectizd supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:08 standalone.localdomain podman[29981]: 2025-10-13 13:36:08.064392636 +0000 UTC m=+0.041703195 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:08 standalone.localdomain podman[29981]: 2025-10-13 13:36:08.182416929 +0000 UTC m=+0.159727498 container init 45f657e5908aacdd8c5d77c40731f5216f343a0998e730c79942a85fa5849f2b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, release=553, io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, version=7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, architecture=x86_64, RELEASE=main, CEPH_POINT_RELEASE=)
Oct 13 13:36:08 standalone.localdomain podman[29981]: 2025-10-13 13:36:08.189820258 +0000 UTC m=+0.167130817 container start 45f657e5908aacdd8c5d77c40731f5216f343a0998e730c79942a85fa5849f2b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, RELEASE=main, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12)
Oct 13 13:36:08 standalone.localdomain bash[29981]: 45f657e5908aacdd8c5d77c40731f5216f343a0998e730c79942a85fa5849f2b
Oct 13 13:36:08 standalone.localdomain systemd[1]: Started Ceph mgr.standalone.ectizd for 627e7f45-65aa-56de-94df-66eaee84a56e.
Oct 13 13:36:08 standalone.localdomain ceph-mgr[29999]: set uid:gid to 167:167 (ceph:ceph)
Oct 13 13:36:08 standalone.localdomain ceph-mgr[29999]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2
Oct 13 13:36:08 standalone.localdomain ceph-mgr[29999]: pidfile_write: ignore empty --pid-file
Oct 13 13:36:08 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'alerts'
Oct 13 13:36:08 standalone.localdomain podman[30000]: 
Oct 13 13:36:08 standalone.localdomain podman[30000]: 2025-10-13 13:36:08.29675904 +0000 UTC m=+0.076561148 container create 181d1a0f6b2dc04be61f1c3d87cbd11444822ca4971b39d5e7474987c04d591b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_swanson, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, GIT_CLEAN=True, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main)
Oct 13 13:36:08 standalone.localdomain systemd[1]: Started libpod-conmon-181d1a0f6b2dc04be61f1c3d87cbd11444822ca4971b39d5e7474987c04d591b.scope.
Oct 13 13:36:08 standalone.localdomain ceph-mgr[29999]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 13 13:36:08 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'balancer'
Oct 13 13:36:08 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:08.352+0000 7ff09577b140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 13 13:36:08 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:08 standalone.localdomain podman[30000]: 2025-10-13 13:36:08.266875859 +0000 UTC m=+0.046677987 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4d103f8dd16d07d823ccdfbb34f6278a10c4fffe0b8c4be3a801be1c908fdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4d103f8dd16d07d823ccdfbb34f6278a10c4fffe0b8c4be3a801be1c908fdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4d103f8dd16d07d823ccdfbb34f6278a10c4fffe0b8c4be3a801be1c908fdb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:08 standalone.localdomain podman[30000]: 2025-10-13 13:36:08.406202679 +0000 UTC m=+0.186004787 container init 181d1a0f6b2dc04be61f1c3d87cbd11444822ca4971b39d5e7474987c04d591b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_swanson, GIT_BRANCH=main, RELEASE=main, io.openshift.expose-services=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, version=7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, release=553, name=rhceph, vcs-type=git)
Oct 13 13:36:08 standalone.localdomain podman[30000]: 2025-10-13 13:36:08.415419602 +0000 UTC m=+0.195221720 container start 181d1a0f6b2dc04be61f1c3d87cbd11444822ca4971b39d5e7474987c04d591b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_swanson, distribution-scope=public, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, release=553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git)
Oct 13 13:36:08 standalone.localdomain podman[30000]: 2025-10-13 13:36:08.415730772 +0000 UTC m=+0.195532930 container attach 181d1a0f6b2dc04be61f1c3d87cbd11444822ca4971b39d5e7474987c04d591b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_swanson, version=7, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, com.redhat.component=rhceph-container, release=553, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:36:08 standalone.localdomain ceph-mgr[29999]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 13 13:36:08 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'cephadm'
Oct 13 13:36:08 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:08.419+0000 7ff09577b140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 13 13:36:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 13 13:36:08 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2002294890' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]: 
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]: {
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     "fsid": "627e7f45-65aa-56de-94df-66eaee84a56e",
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     "health": {
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "status": "HEALTH_OK",
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "checks": {},
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "mutes": []
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     },
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     "election_epoch": 5,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     "quorum": [
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         0
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     ],
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     "quorum_names": [
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "standalone"
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     ],
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     "quorum_age": 1,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     "monmap": {
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "epoch": 1,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "min_mon_release_name": "reef",
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "num_mons": 1
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     },
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     "osdmap": {
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "epoch": 1,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "num_osds": 0,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "num_up_osds": 0,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "osd_up_since": 0,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "num_in_osds": 0,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "osd_in_since": 0,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "num_remapped_pgs": 0
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     },
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     "pgmap": {
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "pgs_by_state": [],
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "num_pgs": 0,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "num_pools": 0,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "num_objects": 0,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "data_bytes": 0,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "bytes_used": 0,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "bytes_avail": 0,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "bytes_total": 0
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     },
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     "fsmap": {
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "epoch": 1,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "by_rank": [],
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "up:standby": 0
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     },
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     "mgrmap": {
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "available": false,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "num_standbys": 0,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "modules": [
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:             "iostat",
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:             "nfs",
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:             "restful"
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         ],
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "services": {}
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     },
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     "servicemap": {
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "epoch": 1,
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "modified": "2025-10-13T13:36:03.878879+0000",
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:         "services": {}
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     },
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]:     "progress_events": {}
Oct 13 13:36:08 standalone.localdomain jovial_swanson[30041]: }
Oct 13 13:36:08 standalone.localdomain systemd[1]: libpod-181d1a0f6b2dc04be61f1c3d87cbd11444822ca4971b39d5e7474987c04d591b.scope: Deactivated successfully.
Oct 13 13:36:08 standalone.localdomain podman[30000]: 2025-10-13 13:36:08.671938739 +0000 UTC m=+0.451740857 container died 181d1a0f6b2dc04be61f1c3d87cbd11444822ca4971b39d5e7474987c04d591b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_swanson, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.buildah.version=1.33.12, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, vendor=Red Hat, Inc., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, distribution-scope=public)
Oct 13 13:36:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5b4d103f8dd16d07d823ccdfbb34f6278a10c4fffe0b8c4be3a801be1c908fdb-merged.mount: Deactivated successfully.
Oct 13 13:36:08 standalone.localdomain podman[30067]: 2025-10-13 13:36:08.755063219 +0000 UTC m=+0.071954617 container remove 181d1a0f6b2dc04be61f1c3d87cbd11444822ca4971b39d5e7474987c04d591b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_swanson, version=7, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, ceph=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, release=553, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 13:36:08 standalone.localdomain systemd[1]: libpod-conmon-181d1a0f6b2dc04be61f1c3d87cbd11444822ca4971b39d5e7474987c04d591b.scope: Deactivated successfully.
Oct 13 13:36:08 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'crash'
Oct 13 13:36:09 standalone.localdomain ceph-mgr[29999]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 13 13:36:09 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'dashboard'
Oct 13 13:36:09 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:09.057+0000 7ff09577b140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 13 13:36:09 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2002294890' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Oct 13 13:36:09 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'devicehealth'
Oct 13 13:36:09 standalone.localdomain ceph-mgr[29999]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 13 13:36:09 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'diskprediction_local'
Oct 13 13:36:09 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:09.633+0000 7ff09577b140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 13 13:36:09 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 13 13:36:09 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 13 13:36:09 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]:   from numpy import show_config as show_numpy_config
Oct 13 13:36:09 standalone.localdomain ceph-mgr[29999]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 13 13:36:09 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:09.765+0000 7ff09577b140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 13 13:36:09 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'influx'
Oct 13 13:36:09 standalone.localdomain ceph-mgr[29999]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 13 13:36:09 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'insights'
Oct 13 13:36:09 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:09.824+0000 7ff09577b140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 13 13:36:09 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'iostat'
Oct 13 13:36:09 standalone.localdomain ceph-mgr[29999]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 13 13:36:09 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'k8sevents'
Oct 13 13:36:09 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:09.977+0000 7ff09577b140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 13 13:36:10 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'localpool'
Oct 13 13:36:10 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'mds_autoscaler'
Oct 13 13:36:10 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'mirroring'
Oct 13 13:36:10 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'nfs'
Oct 13 13:36:10 standalone.localdomain ceph-mgr[29999]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 13 13:36:10 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'orchestrator'
Oct 13 13:36:10 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:10.701+0000 7ff09577b140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 13 13:36:10 standalone.localdomain podman[30086]: 
Oct 13 13:36:10 standalone.localdomain ceph-mgr[29999]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 13 13:36:10 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'osd_perf_query'
Oct 13 13:36:10 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:10.844+0000 7ff09577b140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 13 13:36:10 standalone.localdomain podman[30086]: 2025-10-13 13:36:10.848594476 +0000 UTC m=+0.070576943 container create b8d5f84155391b7db4055a848bc95695d2e79251426e8f5a220bf5109af834cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_nightingale, distribution-scope=public, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, architecture=x86_64, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.12, version=7, ceph=True, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:36:10 standalone.localdomain systemd[1]: Started libpod-conmon-b8d5f84155391b7db4055a848bc95695d2e79251426e8f5a220bf5109af834cb.scope.
Oct 13 13:36:10 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:10 standalone.localdomain ceph-mgr[29999]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 13 13:36:10 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'osd_support'
Oct 13 13:36:10 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:10.908+0000 7ff09577b140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 13 13:36:10 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a11856cccbd260ee38f7414fbd4dd22788b2e7482274c569362b92fe10aa36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:10 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a11856cccbd260ee38f7414fbd4dd22788b2e7482274c569362b92fe10aa36/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:10 standalone.localdomain podman[30086]: 2025-10-13 13:36:10.820628965 +0000 UTC m=+0.042611432 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:10 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a11856cccbd260ee38f7414fbd4dd22788b2e7482274c569362b92fe10aa36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:10 standalone.localdomain podman[30086]: 2025-10-13 13:36:10.956104106 +0000 UTC m=+0.178086573 container init b8d5f84155391b7db4055a848bc95695d2e79251426e8f5a220bf5109af834cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_nightingale, RELEASE=main, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, name=rhceph, architecture=x86_64, GIT_CLEAN=True)
Oct 13 13:36:10 standalone.localdomain systemd[1]: tmp-crun.uesapL.mount: Deactivated successfully.
Oct 13 13:36:10 standalone.localdomain ceph-mgr[29999]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 13 13:36:10 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'pg_autoscaler'
Oct 13 13:36:10 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:10.966+0000 7ff09577b140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 13 13:36:10 standalone.localdomain podman[30086]: 2025-10-13 13:36:10.970347824 +0000 UTC m=+0.192330261 container start b8d5f84155391b7db4055a848bc95695d2e79251426e8f5a220bf5109af834cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_nightingale, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, ceph=True, description=Red Hat Ceph Storage 7, architecture=x86_64, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, version=7, RELEASE=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, distribution-scope=public, build-date=2025-09-24T08:57:55)
Oct 13 13:36:10 standalone.localdomain podman[30086]: 2025-10-13 13:36:10.980491827 +0000 UTC m=+0.202474304 container attach b8d5f84155391b7db4055a848bc95695d2e79251426e8f5a220bf5109af834cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_nightingale, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, version=7, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, RELEASE=main, GIT_CLEAN=True, vcs-type=git, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:11 standalone.localdomain ceph-mgr[29999]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 13 13:36:11 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'progress'
Oct 13 13:36:11 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:11.034+0000 7ff09577b140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 13 13:36:11 standalone.localdomain ceph-mgr[29999]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 13 13:36:11 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'prometheus'
Oct 13 13:36:11 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:11.094+0000 7ff09577b140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 13 13:36:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 13 13:36:11 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1846454075' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]: 
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]: {
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     "fsid": "627e7f45-65aa-56de-94df-66eaee84a56e",
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     "health": {
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "status": "HEALTH_OK",
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "checks": {},
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "mutes": []
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     },
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     "election_epoch": 5,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     "quorum": [
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         0
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     ],
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     "quorum_names": [
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "standalone"
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     ],
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     "quorum_age": 4,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     "monmap": {
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "epoch": 1,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "min_mon_release_name": "reef",
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "num_mons": 1
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     },
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     "osdmap": {
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "epoch": 1,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "num_osds": 0,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "num_up_osds": 0,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "osd_up_since": 0,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "num_in_osds": 0,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "osd_in_since": 0,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "num_remapped_pgs": 0
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     },
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     "pgmap": {
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "pgs_by_state": [],
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "num_pgs": 0,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "num_pools": 0,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "num_objects": 0,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "data_bytes": 0,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "bytes_used": 0,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "bytes_avail": 0,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "bytes_total": 0
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     },
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     "fsmap": {
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "epoch": 1,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "by_rank": [],
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "up:standby": 0
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     },
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     "mgrmap": {
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "available": false,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "num_standbys": 0,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "modules": [
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:             "iostat",
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:             "nfs",
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:             "restful"
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         ],
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "services": {}
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     },
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     "servicemap": {
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "epoch": 1,
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "modified": "2025-10-13T13:36:03.878879+0000",
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:         "services": {}
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     },
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]:     "progress_events": {}
Oct 13 13:36:11 standalone.localdomain competent_nightingale[30102]: }
Oct 13 13:36:11 standalone.localdomain systemd[1]: libpod-b8d5f84155391b7db4055a848bc95695d2e79251426e8f5a220bf5109af834cb.scope: Deactivated successfully.
Oct 13 13:36:11 standalone.localdomain podman[30086]: 2025-10-13 13:36:11.200332824 +0000 UTC m=+0.422315261 container died b8d5f84155391b7db4055a848bc95695d2e79251426e8f5a220bf5109af834cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_nightingale, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, CEPH_POINT_RELEASE=, release=553, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, architecture=x86_64)
Oct 13 13:36:11 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1846454075' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Oct 13 13:36:11 standalone.localdomain podman[30128]: 2025-10-13 13:36:11.286302691 +0000 UTC m=+0.080445698 container remove b8d5f84155391b7db4055a848bc95695d2e79251426e8f5a220bf5109af834cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_nightingale, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, release=553, architecture=x86_64, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:36:11 standalone.localdomain systemd[1]: libpod-conmon-b8d5f84155391b7db4055a848bc95695d2e79251426e8f5a220bf5109af834cb.scope: Deactivated successfully.
Oct 13 13:36:11 standalone.localdomain ceph-mgr[29999]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 13 13:36:11 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'rbd_support'
Oct 13 13:36:11 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:11.388+0000 7ff09577b140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 13 13:36:11 standalone.localdomain ceph-mgr[29999]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 13 13:36:11 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'restful'
Oct 13 13:36:11 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:11.467+0000 7ff09577b140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 13 13:36:11 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'rgw'
Oct 13 13:36:11 standalone.localdomain ceph-mgr[29999]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 13 13:36:11 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'rook'
Oct 13 13:36:11 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:11.781+0000 7ff09577b140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 13 13:36:11 standalone.localdomain systemd[1]: tmp-crun.BGekWr.mount: Deactivated successfully.
Oct 13 13:36:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a0a11856cccbd260ee38f7414fbd4dd22788b2e7482274c569362b92fe10aa36-merged.mount: Deactivated successfully.
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'selftest'
Oct 13 13:36:12 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:12.187+0000 7ff09577b140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'snap_schedule'
Oct 13 13:36:12 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:12.248+0000 7ff09577b140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'stats'
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'status'
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'telegraf'
Oct 13 13:36:12 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:12.433+0000 7ff09577b140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'telemetry'
Oct 13 13:36:12 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:12.490+0000 7ff09577b140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'test_orchestrator'
Oct 13 13:36:12 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:12.615+0000 7ff09577b140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'volumes'
Oct 13 13:36:12 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:12.756+0000 7ff09577b140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'zabbix'
Oct 13 13:36:12 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:12.936+0000 7ff09577b140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:12.992+0000 7ff09577b140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 13 13:36:12 standalone.localdomain ceph-mgr[29999]: ms_deliver_dispatch: unhandled message 0x5642b4758f20 mon_map magic: 0 from mon.0 v2:172.18.0.100:3300/0
Oct 13 13:36:12 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [INF] : Activating manager daemon standalone.ectizd
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: mgr handle_mgr_map Activating!
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: mgr handle_mgr_map I am now activating
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : mgrmap e2: standalone.ectizd(active, starting, since 0.0120251s)
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "mds metadata"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).mds e1 all = 1
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "mon metadata"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "mds metadata"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).mds e1 all = 1
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "mon metadata"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "standalone"} v 0)
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "mon metadata", "id": "standalone"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "standalone.ectizd", "id": "standalone.ectizd"} v 0)
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "mgr metadata", "who": "standalone.ectizd", "id": "standalone.ectizd"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: balancer
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: crash
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Starting
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [INF] : Manager daemon standalone.ectizd is now available
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: devicehealth
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:36:13
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] No pools available
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Starting
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: iostat
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: nfs
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: orchestrator
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: pg_autoscaler
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: progress
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Loading...
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [progress INFO root] No stored events to load
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Loaded [] historic events
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Loaded OSDMap, ready.
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] recovery thread starting
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] starting setup
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: rbd_support
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/standalone.ectizd/mirror_snapshot_schedule"} v 0)
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/standalone.ectizd/mirror_snapshot_schedule"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: restful
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: status
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: telemetry
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [restful INFO root] server_addr: :: server_port: 8003
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [restful WARNING root] server not running: no certificate configured
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] PerfHandler: starting
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TaskHandler: starting
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/standalone.ectizd/trash_purge_schedule"} v 0)
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/standalone.ectizd/trash_purge_schedule"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: Activating manager daemon standalone.ectizd
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mgrmap e2: standalone.ectizd(active, starting, since 0.0120251s)
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "mds metadata"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "mon metadata"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "mds metadata"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "mon metadata"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "mon metadata", "id": "standalone"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix": "mgr metadata", "who": "standalone.ectizd", "id": "standalone.ectizd"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: Manager daemon standalone.ectizd is now available
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/standalone.ectizd/mirror_snapshot_schedule"} : dispatch
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' 
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] setup complete
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Oct 13 13:36:13 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: volumes
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' 
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' 
Oct 13 13:36:13 standalone.localdomain podman[30220]: 
Oct 13 13:36:13 standalone.localdomain podman[30220]: 2025-10-13 13:36:13.375861197 +0000 UTC m=+0.068387286 container create 65fbb1ba2ef2790c0fa929f71dabd77afc3c0fb6e7c82d4df1f0a08476210b73 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_nash, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.expose-services=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.buildah.version=1.33.12, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, com.redhat.component=rhceph-container)
Oct 13 13:36:13 standalone.localdomain systemd[1]: Started libpod-conmon-65fbb1ba2ef2790c0fa929f71dabd77afc3c0fb6e7c82d4df1f0a08476210b73.scope.
Oct 13 13:36:13 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:13 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d714bafac8fb1f5675e8a1f80ad598082a6b3cfa234655e782ce82b2dd790f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:13 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d714bafac8fb1f5675e8a1f80ad598082a6b3cfa234655e782ce82b2dd790f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:13 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d714bafac8fb1f5675e8a1f80ad598082a6b3cfa234655e782ce82b2dd790f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:13 standalone.localdomain podman[30220]: 2025-10-13 13:36:13.339758915 +0000 UTC m=+0.032285024 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:13 standalone.localdomain podman[30220]: 2025-10-13 13:36:13.441528308 +0000 UTC m=+0.134054377 container init 65fbb1ba2ef2790c0fa929f71dabd77afc3c0fb6e7c82d4df1f0a08476210b73 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_nash, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, release=553, ceph=True, distribution-scope=public, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:36:13 standalone.localdomain podman[30220]: 2025-10-13 13:36:13.445871392 +0000 UTC m=+0.138397471 container start 65fbb1ba2ef2790c0fa929f71dabd77afc3c0fb6e7c82d4df1f0a08476210b73 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_nash, GIT_BRANCH=main, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, release=553, ceph=True, vcs-type=git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12)
Oct 13 13:36:13 standalone.localdomain podman[30220]: 2025-10-13 13:36:13.446072588 +0000 UTC m=+0.138598667 container attach 65fbb1ba2ef2790c0fa929f71dabd77afc3c0fb6e7c82d4df1f0a08476210b73 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_nash, name=rhceph, description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, release=553, ceph=True, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container)
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 13 13:36:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1683452675' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]: 
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]: {
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     "fsid": "627e7f45-65aa-56de-94df-66eaee84a56e",
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     "health": {
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "status": "HEALTH_OK",
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "checks": {},
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "mutes": []
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     },
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     "election_epoch": 5,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     "quorum": [
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         0
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     ],
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     "quorum_names": [
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "standalone"
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     ],
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     "quorum_age": 6,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     "monmap": {
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "epoch": 1,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "min_mon_release_name": "reef",
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "num_mons": 1
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     },
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     "osdmap": {
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "epoch": 1,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "num_osds": 0,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "num_up_osds": 0,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "osd_up_since": 0,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "num_in_osds": 0,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "osd_in_since": 0,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "num_remapped_pgs": 0
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     },
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     "pgmap": {
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "pgs_by_state": [],
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "num_pgs": 0,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "num_pools": 0,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "num_objects": 0,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "data_bytes": 0,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "bytes_used": 0,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "bytes_avail": 0,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "bytes_total": 0
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     },
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     "fsmap": {
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "epoch": 1,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "by_rank": [],
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "up:standby": 0
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     },
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     "mgrmap": {
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "available": false,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "num_standbys": 0,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "modules": [
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:             "iostat",
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:             "nfs",
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:             "restful"
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         ],
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "services": {}
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     },
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     "servicemap": {
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "epoch": 1,
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "modified": "2025-10-13T13:36:03.878879+0000",
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:         "services": {}
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     },
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]:     "progress_events": {}
Oct 13 13:36:13 standalone.localdomain dazzling_nash[30234]: }
Oct 13 13:36:13 standalone.localdomain systemd[1]: libpod-65fbb1ba2ef2790c0fa929f71dabd77afc3c0fb6e7c82d4df1f0a08476210b73.scope: Deactivated successfully.
Oct 13 13:36:13 standalone.localdomain podman[30220]: 2025-10-13 13:36:13.629506285 +0000 UTC m=+0.322032384 container died 65fbb1ba2ef2790c0fa929f71dabd77afc3c0fb6e7c82d4df1f0a08476210b73 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_nash, CEPH_POINT_RELEASE=, architecture=x86_64, ceph=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vendor=Red Hat, Inc., RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:36:13 standalone.localdomain podman[30260]: 2025-10-13 13:36:13.746255029 +0000 UTC m=+0.111196914 container remove 65fbb1ba2ef2790c0fa929f71dabd77afc3c0fb6e7c82d4df1f0a08476210b73 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_nash, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, description=Red Hat Ceph Storage 7, ceph=True, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=)
Oct 13 13:36:13 standalone.localdomain systemd[1]: libpod-conmon-65fbb1ba2ef2790c0fa929f71dabd77afc3c0fb6e7c82d4df1f0a08476210b73.scope: Deactivated successfully.
Oct 13 13:36:14 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : mgrmap e3: standalone.ectizd(active, since 1.01855s)
Oct 13 13:36:14 standalone.localdomain ceph-mon[29756]: from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/standalone.ectizd/trash_purge_schedule"} : dispatch
Oct 13 13:36:14 standalone.localdomain ceph-mon[29756]: from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' 
Oct 13 13:36:14 standalone.localdomain ceph-mon[29756]: from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' 
Oct 13 13:36:14 standalone.localdomain ceph-mon[29756]: from='mgr.14100 172.18.0.100:0/1793176007' entity='mgr.standalone.ectizd' 
Oct 13 13:36:14 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1683452675' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Oct 13 13:36:14 standalone.localdomain ceph-mon[29756]: mgrmap e3: standalone.ectizd(active, since 1.01855s)
Oct 13 13:36:14 standalone.localdomain systemd[1]: tmp-crun.5E9wfp.mount: Deactivated successfully.
Oct 13 13:36:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a0d714bafac8fb1f5675e8a1f80ad598082a6b3cfa234655e782ce82b2dd790f-merged.mount: Deactivated successfully.
Oct 13 13:36:15 standalone.localdomain ceph-mgr[29999]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 13 13:36:15 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : mgrmap e4: standalone.ectizd(active, since 2s)
Oct 13 13:36:15 standalone.localdomain podman[30274]: 
Oct 13 13:36:15 standalone.localdomain podman[30274]: 2025-10-13 13:36:15.836365123 +0000 UTC m=+0.063085773 container create b20c85596a1d193d7ca5a06b9ccf23f0e331779453624d58b6ea360c39a3d00f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_rhodes, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, version=7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, RELEASE=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.expose-services=)
Oct 13 13:36:15 standalone.localdomain systemd[1]: Started libpod-conmon-b20c85596a1d193d7ca5a06b9ccf23f0e331779453624d58b6ea360c39a3d00f.scope.
Oct 13 13:36:15 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4077048414c0ec3248cfe808da1440b188ad8c7f9c2a4cedc312f079143316/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4077048414c0ec3248cfe808da1440b188ad8c7f9c2a4cedc312f079143316/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4077048414c0ec3248cfe808da1440b188ad8c7f9c2a4cedc312f079143316/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:15 standalone.localdomain podman[30274]: 2025-10-13 13:36:15.819821453 +0000 UTC m=+0.046542103 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:15 standalone.localdomain podman[30274]: 2025-10-13 13:36:15.921228336 +0000 UTC m=+0.147948986 container init b20c85596a1d193d7ca5a06b9ccf23f0e331779453624d58b6ea360c39a3d00f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_rhodes, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:36:15 standalone.localdomain podman[30274]: 2025-10-13 13:36:15.928941962 +0000 UTC m=+0.155662642 container start b20c85596a1d193d7ca5a06b9ccf23f0e331779453624d58b6ea360c39a3d00f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_rhodes, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, ceph=True, GIT_BRANCH=main, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:36:15 standalone.localdomain podman[30274]: 2025-10-13 13:36:15.92918193 +0000 UTC m=+0.155902610 container attach b20c85596a1d193d7ca5a06b9ccf23f0e331779453624d58b6ea360c39a3d00f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_rhodes, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, name=rhceph, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, architecture=x86_64, RELEASE=main, release=553, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, build-date=2025-09-24T08:57:55, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:36:16 standalone.localdomain ceph-mon[29756]: mgrmap e4: standalone.ectizd(active, since 2s)
Oct 13 13:36:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 13 13:36:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/247993590' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]: 
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]: {
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     "fsid": "627e7f45-65aa-56de-94df-66eaee84a56e",
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     "health": {
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "status": "HEALTH_OK",
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "checks": {},
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "mutes": []
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     },
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     "election_epoch": 5,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     "quorum": [
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         0
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     ],
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     "quorum_names": [
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "standalone"
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     ],
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     "quorum_age": 9,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     "monmap": {
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "epoch": 1,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "min_mon_release_name": "reef",
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "num_mons": 1
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     },
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     "osdmap": {
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "epoch": 1,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "num_osds": 0,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "num_up_osds": 0,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "osd_up_since": 0,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "num_in_osds": 0,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "osd_in_since": 0,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "num_remapped_pgs": 0
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     },
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     "pgmap": {
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "pgs_by_state": [],
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "num_pgs": 0,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "num_pools": 0,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "num_objects": 0,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "data_bytes": 0,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "bytes_used": 0,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "bytes_avail": 0,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "bytes_total": 0
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     },
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     "fsmap": {
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "epoch": 1,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "by_rank": [],
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "up:standby": 0
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     },
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     "mgrmap": {
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "available": true,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "num_standbys": 0,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "modules": [
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:             "iostat",
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:             "nfs",
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:             "restful"
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         ],
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "services": {}
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     },
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     "servicemap": {
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "epoch": 1,
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "modified": "2025-10-13T13:36:03.878879+0000",
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:         "services": {}
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     },
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]:     "progress_events": {}
Oct 13 13:36:16 standalone.localdomain hungry_rhodes[30290]: }
Oct 13 13:36:16 standalone.localdomain systemd[1]: libpod-b20c85596a1d193d7ca5a06b9ccf23f0e331779453624d58b6ea360c39a3d00f.scope: Deactivated successfully.
Oct 13 13:36:16 standalone.localdomain podman[30274]: 2025-10-13 13:36:16.376034916 +0000 UTC m=+0.602755606 container died b20c85596a1d193d7ca5a06b9ccf23f0e331779453624d58b6ea360c39a3d00f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_rhodes, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, distribution-scope=public, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, name=rhceph, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, io.buildah.version=1.33.12, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main)
Oct 13 13:36:16 standalone.localdomain podman[30316]: 2025-10-13 13:36:16.464645035 +0000 UTC m=+0.079437067 container remove b20c85596a1d193d7ca5a06b9ccf23f0e331779453624d58b6ea360c39a3d00f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_rhodes, ceph=True, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.buildah.version=1.33.12, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main)
Oct 13 13:36:16 standalone.localdomain systemd[1]: libpod-conmon-b20c85596a1d193d7ca5a06b9ccf23f0e331779453624d58b6ea360c39a3d00f.scope: Deactivated successfully.
Oct 13 13:36:16 standalone.localdomain podman[30331]: 
Oct 13 13:36:16 standalone.localdomain podman[30331]: 2025-10-13 13:36:16.551023223 +0000 UTC m=+0.064290680 container create a81adc1953d0582df812a7da6cc8bf5cb46895580c43f51281dc21839d6b70b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bhabha, description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-type=git, GIT_BRANCH=main, io.buildah.version=1.33.12, release=553, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, RELEASE=main, com.redhat.component=rhceph-container)
Oct 13 13:36:16 standalone.localdomain systemd[1]: Started libpod-conmon-a81adc1953d0582df812a7da6cc8bf5cb46895580c43f51281dc21839d6b70b3.scope.
Oct 13 13:36:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d0ce7ef53dc7d5435e5ccb1bceb4736a5c2542b01f341cf344dedfb1fb276f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d0ce7ef53dc7d5435e5ccb1bceb4736a5c2542b01f341cf344dedfb1fb276f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:16 standalone.localdomain podman[30331]: 2025-10-13 13:36:16.527971164 +0000 UTC m=+0.041238611 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d0ce7ef53dc7d5435e5ccb1bceb4736a5c2542b01f341cf344dedfb1fb276f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d0ce7ef53dc7d5435e5ccb1bceb4736a5c2542b01f341cf344dedfb1fb276f/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:16 standalone.localdomain podman[30331]: 2025-10-13 13:36:16.649719182 +0000 UTC m=+0.162986639 container init a81adc1953d0582df812a7da6cc8bf5cb46895580c43f51281dc21839d6b70b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bhabha, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, ceph=True, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.33.12, release=553, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, architecture=x86_64)
Oct 13 13:36:16 standalone.localdomain podman[30331]: 2025-10-13 13:36:16.657327986 +0000 UTC m=+0.170595443 container start a81adc1953d0582df812a7da6cc8bf5cb46895580c43f51281dc21839d6b70b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bhabha, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, version=7, ceph=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public)
Oct 13 13:36:16 standalone.localdomain podman[30331]: 2025-10-13 13:36:16.657561413 +0000 UTC m=+0.170828920 container attach a81adc1953d0582df812a7da6cc8bf5cb46895580c43f51281dc21839d6b70b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bhabha, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, name=rhceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, RELEASE=main, release=553, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., GIT_CLEAN=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, ceph=True)
Oct 13 13:36:16 standalone.localdomain systemd[1]: tmp-crun.CVmxDQ.mount: Deactivated successfully.
Oct 13 13:36:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8b4077048414c0ec3248cfe808da1440b188ad8c7f9c2a4cedc312f079143316-merged.mount: Deactivated successfully.
Oct 13 13:36:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct 13 13:36:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/2869098369' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Oct 13 13:36:16 standalone.localdomain systemd[1]: libpod-a81adc1953d0582df812a7da6cc8bf5cb46895580c43f51281dc21839d6b70b3.scope: Deactivated successfully.
Oct 13 13:36:16 standalone.localdomain podman[30331]: 2025-10-13 13:36:16.964697348 +0000 UTC m=+0.477964855 container died a81adc1953d0582df812a7da6cc8bf5cb46895580c43f51281dc21839d6b70b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bhabha, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, release=553, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.12, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.openshift.expose-services=, architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Oct 13 13:36:17 standalone.localdomain ceph-mgr[29999]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 13 13:36:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-94d0ce7ef53dc7d5435e5ccb1bceb4736a5c2542b01f341cf344dedfb1fb276f-merged.mount: Deactivated successfully.
Oct 13 13:36:17 standalone.localdomain podman[30372]: 2025-10-13 13:36:17.0326471 +0000 UTC m=+0.062990130 container remove a81adc1953d0582df812a7da6cc8bf5cb46895580c43f51281dc21839d6b70b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bhabha, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.33.12, ceph=True, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7)
Oct 13 13:36:17 standalone.localdomain systemd[1]: libpod-conmon-a81adc1953d0582df812a7da6cc8bf5cb46895580c43f51281dc21839d6b70b3.scope: Deactivated successfully.
Oct 13 13:36:17 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/247993590' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Oct 13 13:36:17 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2869098369' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Oct 13 13:36:17 standalone.localdomain podman[30388]: 
Oct 13 13:36:17 standalone.localdomain podman[30388]: 2025-10-13 13:36:17.125410535 +0000 UTC m=+0.070913814 container create 976d061a6ea3f6e5d38e3cf1ac72b80a45d6c8a94b0098f5e9d84093418f618c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_chatelet, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.tags=rhceph ceph, ceph=True, GIT_BRANCH=main, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:36:17 standalone.localdomain systemd[1]: Started libpod-conmon-976d061a6ea3f6e5d38e3cf1ac72b80a45d6c8a94b0098f5e9d84093418f618c.scope.
Oct 13 13:36:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f17b1f6b470158da6cb8e3a9dc8155c32af40df4a8b67ad8ed9abf70f5c0ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f17b1f6b470158da6cb8e3a9dc8155c32af40df4a8b67ad8ed9abf70f5c0ae/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:17 standalone.localdomain podman[30388]: 2025-10-13 13:36:17.097991962 +0000 UTC m=+0.043495241 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f17b1f6b470158da6cb8e3a9dc8155c32af40df4a8b67ad8ed9abf70f5c0ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:17 standalone.localdomain podman[30388]: 2025-10-13 13:36:17.209465963 +0000 UTC m=+0.154969202 container init 976d061a6ea3f6e5d38e3cf1ac72b80a45d6c8a94b0098f5e9d84093418f618c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_chatelet, description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, GIT_CLEAN=True, release=553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, ceph=True, com.redhat.component=rhceph-container, version=7, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=)
Oct 13 13:36:17 standalone.localdomain podman[30388]: 2025-10-13 13:36:17.216769078 +0000 UTC m=+0.162272337 container start 976d061a6ea3f6e5d38e3cf1ac72b80a45d6c8a94b0098f5e9d84093418f618c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_chatelet, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, release=553, architecture=x86_64, description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.33.12, GIT_BRANCH=main, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:36:17 standalone.localdomain podman[30388]: 2025-10-13 13:36:17.216906202 +0000 UTC m=+0.162409451 container attach 976d061a6ea3f6e5d38e3cf1ac72b80a45d6c8a94b0098f5e9d84093418f618c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_chatelet, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=553, distribution-scope=public, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.expose-services=, ceph=True, io.buildah.version=1.33.12, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, name=rhceph, vcs-type=git, RELEASE=main)
Oct 13 13:36:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Oct 13 13:36:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/3919051493' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Oct 13 13:36:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3919051493' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr respawn  1: '-n'
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr respawn  2: 'mgr.standalone.ectizd'
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr respawn  3: '-f'
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr respawn  4: '--setuser'
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr respawn  5: 'ceph'
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr respawn  6: '--setgroup'
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr respawn  7: 'ceph'
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr respawn  8: '--default-log-to-file=false'
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr respawn  9: '--default-log-to-journald=true'
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr respawn  exe_path /proc/self/exe
Oct 13 13:36:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/3919051493' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 13 13:36:18 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : mgrmap e5: standalone.ectizd(active, since 5s)
Oct 13 13:36:18 standalone.localdomain systemd[1]: libpod-976d061a6ea3f6e5d38e3cf1ac72b80a45d6c8a94b0098f5e9d84093418f618c.scope: Deactivated successfully.
Oct 13 13:36:18 standalone.localdomain podman[30388]: 2025-10-13 13:36:18.125504652 +0000 UTC m=+1.071007971 container died 976d061a6ea3f6e5d38e3cf1ac72b80a45d6c8a94b0098f5e9d84093418f618c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_chatelet, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, release=553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhceph, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:36:18 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: ignoring --setuser ceph since I am not root
Oct 13 13:36:18 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: ignoring --setgroup ceph since I am not root
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: pidfile_write: ignore empty --pid-file
Oct 13 13:36:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f1f17b1f6b470158da6cb8e3a9dc8155c32af40df4a8b67ad8ed9abf70f5c0ae-merged.mount: Deactivated successfully.
Oct 13 13:36:18 standalone.localdomain podman[30429]: 2025-10-13 13:36:18.206705592 +0000 UTC m=+0.070347016 container remove 976d061a6ea3f6e5d38e3cf1ac72b80a45d6c8a94b0098f5e9d84093418f618c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_chatelet, version=7, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:36:18 standalone.localdomain systemd[1]: libpod-conmon-976d061a6ea3f6e5d38e3cf1ac72b80a45d6c8a94b0098f5e9d84093418f618c.scope: Deactivated successfully.
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'alerts'
Oct 13 13:36:18 standalone.localdomain podman[30465]: 
Oct 13 13:36:18 standalone.localdomain podman[30465]: 2025-10-13 13:36:18.292816833 +0000 UTC m=+0.066099925 container create 13ebf3596ea1fa1500c2bb30878f73d9101f91038958fd2a9c64aae9dfa9b1ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_williams, name=rhceph, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-09-24T08:57:55)
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'balancer'
Oct 13 13:36:18 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:18.306+0000 7f02c4621140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 13 13:36:18 standalone.localdomain systemd[1]: Started libpod-conmon-13ebf3596ea1fa1500c2bb30878f73d9101f91038958fd2a9c64aae9dfa9b1ce.scope.
Oct 13 13:36:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bfdc4ad46fcdde8c28eff4c50bc6b8a5c0e27ef4a3cb4da26e65920c90c5b0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bfdc4ad46fcdde8c28eff4c50bc6b8a5c0e27ef4a3cb4da26e65920c90c5b0e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bfdc4ad46fcdde8c28eff4c50bc6b8a5c0e27ef4a3cb4da26e65920c90c5b0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:18 standalone.localdomain podman[30465]: 2025-10-13 13:36:18.266638188 +0000 UTC m=+0.039921280 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:18 standalone.localdomain podman[30465]: 2025-10-13 13:36:18.369473224 +0000 UTC m=+0.142756316 container init 13ebf3596ea1fa1500c2bb30878f73d9101f91038958fd2a9c64aae9dfa9b1ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_williams, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, version=7, vcs-type=git, name=rhceph, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_BRANCH=main)
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'cephadm'
Oct 13 13:36:18 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:18.375+0000 7f02c4621140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 13 13:36:18 standalone.localdomain podman[30465]: 2025-10-13 13:36:18.378914364 +0000 UTC m=+0.152197416 container start 13ebf3596ea1fa1500c2bb30878f73d9101f91038958fd2a9c64aae9dfa9b1ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_williams, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64)
Oct 13 13:36:18 standalone.localdomain podman[30465]: 2025-10-13 13:36:18.379301886 +0000 UTC m=+0.152585018 container attach 13ebf3596ea1fa1500c2bb30878f73d9101f91038958fd2a9c64aae9dfa9b1ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_williams, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, RELEASE=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-type=git, ceph=True, name=rhceph)
Oct 13 13:36:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 13 13:36:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1900985817' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Oct 13 13:36:18 standalone.localdomain recursing_williams[30481]: {
Oct 13 13:36:18 standalone.localdomain recursing_williams[30481]:     "epoch": 5,
Oct 13 13:36:18 standalone.localdomain recursing_williams[30481]:     "available": true,
Oct 13 13:36:18 standalone.localdomain recursing_williams[30481]:     "active_name": "standalone.ectizd",
Oct 13 13:36:18 standalone.localdomain recursing_williams[30481]:     "num_standby": 0
Oct 13 13:36:18 standalone.localdomain recursing_williams[30481]: }
Oct 13 13:36:18 standalone.localdomain systemd[1]: libpod-13ebf3596ea1fa1500c2bb30878f73d9101f91038958fd2a9c64aae9dfa9b1ce.scope: Deactivated successfully.
Oct 13 13:36:18 standalone.localdomain podman[30465]: 2025-10-13 13:36:18.793875758 +0000 UTC m=+0.567158870 container died 13ebf3596ea1fa1500c2bb30878f73d9101f91038958fd2a9c64aae9dfa9b1ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_williams, GIT_BRANCH=main, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, release=553, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, version=7, name=rhceph)
Oct 13 13:36:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1bfdc4ad46fcdde8c28eff4c50bc6b8a5c0e27ef4a3cb4da26e65920c90c5b0e-merged.mount: Deactivated successfully.
Oct 13 13:36:18 standalone.localdomain podman[30508]: 2025-10-13 13:36:18.863342577 +0000 UTC m=+0.058464441 container remove 13ebf3596ea1fa1500c2bb30878f73d9101f91038958fd2a9c64aae9dfa9b1ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_williams, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.component=rhceph-container, RELEASE=main, name=rhceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_CLEAN=True, io.buildah.version=1.33.12, GIT_BRANCH=main)
Oct 13 13:36:18 standalone.localdomain systemd[1]: libpod-conmon-13ebf3596ea1fa1500c2bb30878f73d9101f91038958fd2a9c64aae9dfa9b1ce.scope: Deactivated successfully.
Oct 13 13:36:18 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'crash'
Oct 13 13:36:18 standalone.localdomain podman[30526]: 
Oct 13 13:36:18 standalone.localdomain podman[30526]: 2025-10-13 13:36:18.958443165 +0000 UTC m=+0.069089809 container create 8f430dd2639e376e756ca779cfcc096b9eb4c35446c22e5b8c0c1f6ed7b11201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_heisenberg, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, name=rhceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, release=553, build-date=2025-09-24T08:57:55, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:36:19 standalone.localdomain systemd[1]: Started libpod-conmon-8f430dd2639e376e756ca779cfcc096b9eb4c35446c22e5b8c0c1f6ed7b11201.scope.
Oct 13 13:36:19 standalone.localdomain ceph-mgr[29999]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 13 13:36:19 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'dashboard'
Oct 13 13:36:19 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:19.000+0000 7f02c4621140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 13 13:36:19 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:19 standalone.localdomain podman[30526]: 2025-10-13 13:36:18.929039789 +0000 UTC m=+0.039686463 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37c0f137cff833a35ba8c85cd5ef8c5cc554bddc16c59d3c339998de7cbc0a81/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37c0f137cff833a35ba8c85cd5ef8c5cc554bddc16c59d3c339998de7cbc0a81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37c0f137cff833a35ba8c85cd5ef8c5cc554bddc16c59d3c339998de7cbc0a81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:19 standalone.localdomain podman[30526]: 2025-10-13 13:36:19.071413352 +0000 UTC m=+0.182059976 container init 8f430dd2639e376e756ca779cfcc096b9eb4c35446c22e5b8c0c1f6ed7b11201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_heisenberg, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_BRANCH=main, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, release=553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:36:19 standalone.localdomain podman[30526]: 2025-10-13 13:36:19.076238171 +0000 UTC m=+0.186884785 container start 8f430dd2639e376e756ca779cfcc096b9eb4c35446c22e5b8c0c1f6ed7b11201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_heisenberg, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, vcs-type=git, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, RELEASE=main, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 13:36:19 standalone.localdomain podman[30526]: 2025-10-13 13:36:19.076613602 +0000 UTC m=+0.187260256 container attach 8f430dd2639e376e756ca779cfcc096b9eb4c35446c22e5b8c0c1f6ed7b11201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_heisenberg, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, version=7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, architecture=x86_64, name=rhceph, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, distribution-scope=public, GIT_CLEAN=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3919051493' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 13 13:36:19 standalone.localdomain ceph-mon[29756]: mgrmap e5: standalone.ectizd(active, since 5s)
Oct 13 13:36:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1900985817' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Oct 13 13:36:19 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'devicehealth'
Oct 13 13:36:19 standalone.localdomain ceph-mgr[29999]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 13 13:36:19 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'diskprediction_local'
Oct 13 13:36:19 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:19.556+0000 7f02c4621140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 13 13:36:19 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 13 13:36:19 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 13 13:36:19 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]:   from numpy import show_config as show_numpy_config
Oct 13 13:36:19 standalone.localdomain ceph-mgr[29999]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 13 13:36:19 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'influx'
Oct 13 13:36:19 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:19.684+0000 7f02c4621140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 13 13:36:19 standalone.localdomain ceph-mgr[29999]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 13 13:36:19 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'insights'
Oct 13 13:36:19 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:19.742+0000 7f02c4621140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 13 13:36:19 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'iostat'
Oct 13 13:36:19 standalone.localdomain ceph-mgr[29999]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 13 13:36:19 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'k8sevents'
Oct 13 13:36:19 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:19.856+0000 7f02c4621140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 13 13:36:20 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'localpool'
Oct 13 13:36:20 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'mds_autoscaler'
Oct 13 13:36:20 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'mirroring'
Oct 13 13:36:20 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'nfs'
Oct 13 13:36:20 standalone.localdomain ceph-mgr[29999]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 13 13:36:20 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'orchestrator'
Oct 13 13:36:20 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:20.605+0000 7f02c4621140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 13 13:36:20 standalone.localdomain ceph-mgr[29999]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 13 13:36:20 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'osd_perf_query'
Oct 13 13:36:20 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:20.762+0000 7f02c4621140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 13 13:36:20 standalone.localdomain ceph-mgr[29999]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 13 13:36:20 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'osd_support'
Oct 13 13:36:20 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:20.827+0000 7f02c4621140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 13 13:36:20 standalone.localdomain ceph-mgr[29999]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 13 13:36:20 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'pg_autoscaler'
Oct 13 13:36:20 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:20.884+0000 7f02c4621140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 13 13:36:20 standalone.localdomain ceph-mgr[29999]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 13 13:36:20 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'progress'
Oct 13 13:36:20 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:20.951+0000 7f02c4621140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 13 13:36:21 standalone.localdomain ceph-mgr[29999]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 13 13:36:21 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'prometheus'
Oct 13 13:36:21 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:21.010+0000 7f02c4621140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 13 13:36:21 standalone.localdomain ceph-mgr[29999]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 13 13:36:21 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'rbd_support'
Oct 13 13:36:21 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:21.315+0000 7f02c4621140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 13 13:36:21 standalone.localdomain ceph-mgr[29999]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 13 13:36:21 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'restful'
Oct 13 13:36:21 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:21.399+0000 7f02c4621140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 13 13:36:21 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'rgw'
Oct 13 13:36:21 standalone.localdomain ceph-mgr[29999]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 13 13:36:21 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'rook'
Oct 13 13:36:21 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:21.742+0000 7f02c4621140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'selftest'
Oct 13 13:36:22 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:22.178+0000 7f02c4621140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'snap_schedule'
Oct 13 13:36:22 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:22.240+0000 7f02c4621140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'stats'
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'status'
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'telegraf'
Oct 13 13:36:22 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:22.431+0000 7f02c4621140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'telemetry'
Oct 13 13:36:22 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:22.490+0000 7f02c4621140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'test_orchestrator'
Oct 13 13:36:22 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:22.621+0000 7f02c4621140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'volumes'
Oct 13 13:36:22 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:22.767+0000 7f02c4621140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 13 13:36:22 standalone.localdomain ceph-mgr[29999]: mgr[py] Loading python module 'zabbix'
Oct 13 13:36:22 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:22.961+0000 7f02c4621140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 13 13:36:23 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T13:36:23.022+0000 7f02c4621140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: ms_deliver_dispatch: unhandled message 0x55aee33ca420 mon_map magic: 0 from mon.0 v2:172.18.0.100:3300/0
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [INF] : Active manager daemon standalone.ectizd restarted
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e1 do_prune osdmap full prune enabled
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [INF] : Activating manager daemon standalone.ectizd
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr handle_mgr_map Activating!
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr handle_mgr_map I am now activating
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : mgrmap e6: standalone.ectizd(active, starting, since 0.0110708s)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "standalone"} v 0)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mon metadata", "id": "standalone"} : dispatch
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "standalone.ectizd", "id": "standalone.ectizd"} v 0)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mgr metadata", "who": "standalone.ectizd", "id": "standalone.ectizd"} : dispatch
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mds metadata"} : dispatch
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).mds e1 all = 1
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata"} : dispatch
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mon metadata"} : dispatch
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: balancer
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Starting
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [INF] : Manager daemon standalone.ectizd is now available
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:36:23
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] No pools available
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: Active manager daemon standalone.ectizd restarted
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: Activating manager daemon standalone.ectizd
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: osdmap e2: 0 total, 0 up, 0 in
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mgrmap e6: standalone.ectizd(active, starting, since 0.0110708s)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mon metadata", "id": "standalone"} : dispatch
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mgr metadata", "who": "standalone.ectizd", "id": "standalone.ectizd"} : dispatch
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mds metadata"} : dispatch
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata"} : dispatch
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mon metadata"} : dispatch
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: Manager daemon standalone.ectizd is now available
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: cephadm
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: crash
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: devicehealth
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: iostat
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config dump", "format": "json"} : dispatch
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Starting
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: nfs
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: orchestrator
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config dump", "format": "json"} : dispatch
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: pg_autoscaler
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: progress
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Loading...
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [progress INFO root] No stored events to load
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Loaded [] historic events
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Loaded OSDMap, ready.
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] recovery thread starting
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] starting setup
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: rbd_support
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: restful
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: status
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [restful INFO root] server_addr: :: server_port: 8003
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [restful WARNING root] server not running: no certificate configured
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: telemetry
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/standalone.ectizd/mirror_snapshot_schedule"} v 0)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/standalone.ectizd/mirror_snapshot_schedule"} : dispatch
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] PerfHandler: starting
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TaskHandler: starting
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/standalone.ectizd/trash_purge_schedule"} v 0)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/standalone.ectizd/trash_purge_schedule"} : dispatch
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] setup complete
Oct 13 13:36:23 standalone.localdomain ceph-mgr[29999]: mgr load Constructed class from module: volumes
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Oct 13 13:36:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:24 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : mgrmap e7: standalone.ectizd(active, since 1.01721s)
Oct 13 13:36:24 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14124 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 13 13:36:24 standalone.localdomain clever_heisenberg[30542]: {
Oct 13 13:36:24 standalone.localdomain clever_heisenberg[30542]:     "mgrmap_epoch": 7,
Oct 13 13:36:24 standalone.localdomain clever_heisenberg[30542]:     "initialized": true
Oct 13 13:36:24 standalone.localdomain clever_heisenberg[30542]: }
Oct 13 13:36:24 standalone.localdomain systemd[1]: libpod-8f430dd2639e376e756ca779cfcc096b9eb4c35446c22e5b8c0c1f6ed7b11201.scope: Deactivated successfully.
Oct 13 13:36:24 standalone.localdomain podman[30526]: 2025-10-13 13:36:24.072338512 +0000 UTC m=+5.182985196 container died 8f430dd2639e376e756ca779cfcc096b9eb4c35446c22e5b8c0c1f6ed7b11201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_heisenberg, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, ceph=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: Found migration_current of "None". Setting to last migration.
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config dump", "format": "json"} : dispatch
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config dump", "format": "json"} : dispatch
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/standalone.ectizd/mirror_snapshot_schedule"} : dispatch
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/standalone.ectizd/trash_purge_schedule"} : dispatch
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: mgrmap e7: standalone.ectizd(active, since 1.01721s)
Oct 13 13:36:24 standalone.localdomain systemd[1]: tmp-crun.zWLXc3.mount: Deactivated successfully.
Oct 13 13:36:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-37c0f137cff833a35ba8c85cd5ef8c5cc554bddc16c59d3c339998de7cbc0a81-merged.mount: Deactivated successfully.
Oct 13 13:36:24 standalone.localdomain podman[30678]: 2025-10-13 13:36:24.154921305 +0000 UTC m=+0.071883314 container remove 8f430dd2639e376e756ca779cfcc096b9eb4c35446c22e5b8c0c1f6ed7b11201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_heisenberg, vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, build-date=2025-09-24T08:57:55, name=rhceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, RELEASE=main)
Oct 13 13:36:24 standalone.localdomain systemd[1]: libpod-conmon-8f430dd2639e376e756ca779cfcc096b9eb4c35446c22e5b8c0c1f6ed7b11201.scope: Deactivated successfully.
Oct 13 13:36:24 standalone.localdomain podman[30692]: 
Oct 13 13:36:24 standalone.localdomain podman[30692]: 2025-10-13 13:36:24.234848805 +0000 UTC m=+0.061072531 container create ee09cac1173f2fee12d778679c5e563e99eb60edc4de7d02ff44335f5d06d2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_bell, release=553, distribution-scope=public, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-type=git, RELEASE=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph)
Oct 13 13:36:24 standalone.localdomain systemd[1]: Started libpod-conmon-ee09cac1173f2fee12d778679c5e563e99eb60edc4de7d02ff44335f5d06d2fc.scope.
Oct 13 13:36:24 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35ad24a8cc4fc453e0120125769128c565614be29d72bb5e97c2beb394576a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:24 standalone.localdomain podman[30692]: 2025-10-13 13:36:24.20381549 +0000 UTC m=+0.030039226 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35ad24a8cc4fc453e0120125769128c565614be29d72bb5e97c2beb394576a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35ad24a8cc4fc453e0120125769128c565614be29d72bb5e97c2beb394576a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:24 standalone.localdomain podman[30692]: 2025-10-13 13:36:24.320589165 +0000 UTC m=+0.146812861 container init ee09cac1173f2fee12d778679c5e563e99eb60edc4de7d02ff44335f5d06d2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_bell, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, RELEASE=main, release=553, version=7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, GIT_BRANCH=main)
Oct 13 13:36:24 standalone.localdomain podman[30692]: 2025-10-13 13:36:24.326074974 +0000 UTC m=+0.152298720 container start ee09cac1173f2fee12d778679c5e563e99eb60edc4de7d02ff44335f5d06d2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_bell, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_CLEAN=True, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:36:24 standalone.localdomain podman[30692]: 2025-10-13 13:36:24.328502539 +0000 UTC m=+0.154726235 container attach ee09cac1173f2fee12d778679c5e563e99eb60edc4de7d02ff44335f5d06d2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_bell, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, release=553, description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, version=7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:36:24 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cherrypy.error] [13/Oct/2025:13:36:24] ENGINE Bus STARTING
Oct 13 13:36:24 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : [13/Oct/2025:13:36:24] ENGINE Bus STARTING
Oct 13 13:36:24 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cherrypy.error] [13/Oct/2025:13:36:24] ENGINE Serving on http://172.18.0.100:8765
Oct 13 13:36:24 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : [13/Oct/2025:13:36:24] ENGINE Serving on http://172.18.0.100:8765
Oct 13 13:36:24 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cherrypy.error] [13/Oct/2025:13:36:24] ENGINE Serving on https://172.18.0.100:7150
Oct 13 13:36:24 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : [13/Oct/2025:13:36:24] ENGINE Serving on https://172.18.0.100:7150
Oct 13 13:36:24 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cherrypy.error] [13/Oct/2025:13:36:24] ENGINE Bus STARTED
Oct 13 13:36:24 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : [13/Oct/2025:13:36:24] ENGINE Bus STARTED
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config dump", "format": "json"} : dispatch
Oct 13 13:36:24 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cherrypy.error] [13/Oct/2025:13:36:24] ENGINE Client ('172.18.0.100', 53908) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 13 13:36:24 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : [13/Oct/2025:13:36:24] ENGINE Client ('172.18.0.100', 53908) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 13 13:36:24 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 13 13:36:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config dump", "format": "json"} : dispatch
Oct 13 13:36:24 standalone.localdomain systemd[1]: libpod-ee09cac1173f2fee12d778679c5e563e99eb60edc4de7d02ff44335f5d06d2fc.scope: Deactivated successfully.
Oct 13 13:36:24 standalone.localdomain podman[30692]: 2025-10-13 13:36:24.683809447 +0000 UTC m=+0.510033143 container died ee09cac1173f2fee12d778679c5e563e99eb60edc4de7d02ff44335f5d06d2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_bell, GIT_CLEAN=True, version=7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, CEPH_POINT_RELEASE=, release=553, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:36:24 standalone.localdomain podman[30757]: 2025-10-13 13:36:24.749271421 +0000 UTC m=+0.058928235 container remove ee09cac1173f2fee12d778679c5e563e99eb60edc4de7d02ff44335f5d06d2fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_bell, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, ceph=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, com.redhat.component=rhceph-container, vcs-type=git, RELEASE=main, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:24 standalone.localdomain systemd[1]: libpod-conmon-ee09cac1173f2fee12d778679c5e563e99eb60edc4de7d02ff44335f5d06d2fc.scope: Deactivated successfully.
Oct 13 13:36:24 standalone.localdomain podman[30771]: 
Oct 13 13:36:24 standalone.localdomain podman[30771]: 2025-10-13 13:36:24.830844853 +0000 UTC m=+0.064476956 container create 1144d22441de77384e2bfea6805e43ec1e4840d3fed2ef04661bcd55f2108a37 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_pare, GIT_CLEAN=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.buildah.version=1.33.12, ceph=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, name=rhceph, version=7)
Oct 13 13:36:24 standalone.localdomain systemd[1]: Started libpod-conmon-1144d22441de77384e2bfea6805e43ec1e4840d3fed2ef04661bcd55f2108a37.scope.
Oct 13 13:36:24 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6abb17f268b93eacf91876e3ffdaad8e9b111860ae9fbaaec31b34b82c690bd1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:24 standalone.localdomain podman[30771]: 2025-10-13 13:36:24.797888528 +0000 UTC m=+0.031520641 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6abb17f268b93eacf91876e3ffdaad8e9b111860ae9fbaaec31b34b82c690bd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6abb17f268b93eacf91876e3ffdaad8e9b111860ae9fbaaec31b34b82c690bd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:24 standalone.localdomain podman[30771]: 2025-10-13 13:36:24.951319771 +0000 UTC m=+0.184952114 container init 1144d22441de77384e2bfea6805e43ec1e4840d3fed2ef04661bcd55f2108a37 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_pare, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, ceph=True, release=553, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:24 standalone.localdomain podman[30771]: 2025-10-13 13:36:24.957082259 +0000 UTC m=+0.190714342 container start 1144d22441de77384e2bfea6805e43ec1e4840d3fed2ef04661bcd55f2108a37 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_pare, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, architecture=x86_64, version=7, vcs-type=git, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, name=rhceph, vendor=Red Hat, Inc.)
Oct 13 13:36:24 standalone.localdomain podman[30771]: 2025-10-13 13:36:24.957267604 +0000 UTC m=+0.190899737 container attach 1144d22441de77384e2bfea6805e43ec1e4840d3fed2ef04661bcd55f2108a37 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_pare, RELEASE=main, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:36:25 standalone.localdomain ceph-mgr[29999]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 13 13:36:25 standalone.localdomain ceph-mon[29756]: from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 13 13:36:25 standalone.localdomain ceph-mon[29756]: from='client.14124 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 13 13:36:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config dump", "format": "json"} : dispatch
Oct 13 13:36:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config dump", "format": "json"} : dispatch
Oct 13 13:36:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b35ad24a8cc4fc453e0120125769128c565614be29d72bb5e97c2beb394576a2-merged.mount: Deactivated successfully.
Oct 13 13:36:25 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Oct 13 13:36:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:25 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Set ssh ssh_user
Oct 13 13:36:25 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct 13 13:36:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Oct 13 13:36:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:25 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Set ssh ssh_config
Oct 13 13:36:25 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct 13 13:36:25 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct 13 13:36:25 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct 13 13:36:25 standalone.localdomain quizzical_pare[30787]: ssh user set to ceph-admin. sudo will be used
Oct 13 13:36:25 standalone.localdomain systemd[1]: libpod-1144d22441de77384e2bfea6805e43ec1e4840d3fed2ef04661bcd55f2108a37.scope: Deactivated successfully.
Oct 13 13:36:25 standalone.localdomain podman[30771]: 2025-10-13 13:36:25.315393479 +0000 UTC m=+0.549025552 container died 1144d22441de77384e2bfea6805e43ec1e4840d3fed2ef04661bcd55f2108a37 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_pare, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, release=553, version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:36:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6abb17f268b93eacf91876e3ffdaad8e9b111860ae9fbaaec31b34b82c690bd1-merged.mount: Deactivated successfully.
Oct 13 13:36:25 standalone.localdomain podman[30813]: 2025-10-13 13:36:25.369824854 +0000 UTC m=+0.045626425 container remove 1144d22441de77384e2bfea6805e43ec1e4840d3fed2ef04661bcd55f2108a37 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_pare, vcs-type=git, distribution-scope=public, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, name=rhceph, CEPH_POINT_RELEASE=)
Oct 13 13:36:25 standalone.localdomain systemd[1]: libpod-conmon-1144d22441de77384e2bfea6805e43ec1e4840d3fed2ef04661bcd55f2108a37.scope: Deactivated successfully.
Oct 13 13:36:25 standalone.localdomain podman[30825]: 2025-10-13 13:36:25.462228179 +0000 UTC m=+0.073088561 container create 0c614f10ef5af489ffe585eaaadb832d09f175c53601d1ba61efcba63adfa53e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_shamir, vcs-type=git, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.buildah.version=1.33.12, RELEASE=main, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.)
Oct 13 13:36:25 standalone.localdomain systemd[1]: Started libpod-conmon-0c614f10ef5af489ffe585eaaadb832d09f175c53601d1ba61efcba63adfa53e.scope.
Oct 13 13:36:25 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:25 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff43eb9ac1521674adaf1a803e350acfb35a703823fe794022330077979eb58a/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:25 standalone.localdomain podman[30825]: 2025-10-13 13:36:25.431130392 +0000 UTC m=+0.041990744 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:25 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff43eb9ac1521674adaf1a803e350acfb35a703823fe794022330077979eb58a/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:25 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff43eb9ac1521674adaf1a803e350acfb35a703823fe794022330077979eb58a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:25 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff43eb9ac1521674adaf1a803e350acfb35a703823fe794022330077979eb58a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:25 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff43eb9ac1521674adaf1a803e350acfb35a703823fe794022330077979eb58a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:25 standalone.localdomain podman[30825]: 2025-10-13 13:36:25.578783317 +0000 UTC m=+0.189643639 container init 0c614f10ef5af489ffe585eaaadb832d09f175c53601d1ba61efcba63adfa53e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_shamir, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, version=7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., ceph=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, RELEASE=main)
Oct 13 13:36:25 standalone.localdomain podman[30825]: 2025-10-13 13:36:25.586803014 +0000 UTC m=+0.197663376 container start 0c614f10ef5af489ffe585eaaadb832d09f175c53601d1ba61efcba63adfa53e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_shamir, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main, CEPH_POINT_RELEASE=)
Oct 13 13:36:25 standalone.localdomain podman[30825]: 2025-10-13 13:36:25.587187546 +0000 UTC m=+0.198047888 container attach 0c614f10ef5af489ffe585eaaadb832d09f175c53601d1ba61efcba63adfa53e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_shamir, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vendor=Red Hat, Inc., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, distribution-scope=public, io.openshift.expose-services=, release=553, com.redhat.component=rhceph-container, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main)
Oct 13 13:36:25 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : mgrmap e8: standalone.ectizd(active, since 2s)
Oct 13 13:36:25 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Oct 13 13:36:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:25 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Set ssh ssh_identity_key
Oct 13 13:36:25 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct 13 13:36:25 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Set ssh private key
Oct 13 13:36:25 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Set ssh private key
Oct 13 13:36:25 standalone.localdomain systemd[1]: libpod-0c614f10ef5af489ffe585eaaadb832d09f175c53601d1ba61efcba63adfa53e.scope: Deactivated successfully.
Oct 13 13:36:25 standalone.localdomain podman[30825]: 2025-10-13 13:36:25.989152441 +0000 UTC m=+0.600012803 container died 0c614f10ef5af489ffe585eaaadb832d09f175c53601d1ba61efcba63adfa53e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_shamir, CEPH_POINT_RELEASE=, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, build-date=2025-09-24T08:57:55, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:26 standalone.localdomain ceph-mon[29756]: [13/Oct/2025:13:36:24] ENGINE Bus STARTING
Oct 13 13:36:26 standalone.localdomain ceph-mon[29756]: [13/Oct/2025:13:36:24] ENGINE Serving on http://172.18.0.100:8765
Oct 13 13:36:26 standalone.localdomain ceph-mon[29756]: [13/Oct/2025:13:36:24] ENGINE Serving on https://172.18.0.100:7150
Oct 13 13:36:26 standalone.localdomain ceph-mon[29756]: [13/Oct/2025:13:36:24] ENGINE Bus STARTED
Oct 13 13:36:26 standalone.localdomain ceph-mon[29756]: [13/Oct/2025:13:36:24] ENGINE Client ('172.18.0.100', 53908) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 13 13:36:26 standalone.localdomain ceph-mon[29756]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:26 standalone.localdomain ceph-mon[29756]: mgrmap e8: standalone.ectizd(active, since 2s)
Oct 13 13:36:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e2 _set_new_cache_sizes cache_size:1019924059 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:36:27 standalone.localdomain ceph-mgr[29999]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 13 13:36:27 standalone.localdomain ceph-mon[29756]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:27 standalone.localdomain ceph-mon[29756]: Set ssh ssh_user
Oct 13 13:36:27 standalone.localdomain ceph-mon[29756]: Set ssh ssh_config
Oct 13 13:36:27 standalone.localdomain ceph-mon[29756]: ssh user set to ceph-admin. sudo will be used
Oct 13 13:36:27 standalone.localdomain ceph-mon[29756]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:27 standalone.localdomain ceph-mon[29756]: Set ssh ssh_identity_key
Oct 13 13:36:27 standalone.localdomain ceph-mon[29756]: Set ssh private key
Oct 13 13:36:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ff43eb9ac1521674adaf1a803e350acfb35a703823fe794022330077979eb58a-merged.mount: Deactivated successfully.
Oct 13 13:36:28 standalone.localdomain podman[30867]: 2025-10-13 13:36:28.07946865 +0000 UTC m=+2.079544239 container remove 0c614f10ef5af489ffe585eaaadb832d09f175c53601d1ba61efcba63adfa53e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_shamir, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, distribution-scope=public, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., ceph=True, io.buildah.version=1.33.12, release=553, CEPH_POINT_RELEASE=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main)
Oct 13 13:36:28 standalone.localdomain systemd[1]: libpod-conmon-0c614f10ef5af489ffe585eaaadb832d09f175c53601d1ba61efcba63adfa53e.scope: Deactivated successfully.
Oct 13 13:36:28 standalone.localdomain podman[31002]: 2025-10-13 13:36:28.187805595 +0000 UTC m=+0.080118227 container create 0d734433fd1c12045cd13757fff7d6e485f4f21a2db8966af897f63339626dc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_hawking, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, distribution-scope=public, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, ceph=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, name=rhceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553)
Oct 13 13:36:28 standalone.localdomain systemd[1]: Started libpod-conmon-0d734433fd1c12045cd13757fff7d6e485f4f21a2db8966af897f63339626dc2.scope.
Oct 13 13:36:28 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:28 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc123c55feff268e0a7270caa478b645b698f26617dfa2ac2af2b19224704411/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:28 standalone.localdomain podman[31002]: 2025-10-13 13:36:28.154796289 +0000 UTC m=+0.047108921 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:28 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc123c55feff268e0a7270caa478b645b698f26617dfa2ac2af2b19224704411/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:28 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc123c55feff268e0a7270caa478b645b698f26617dfa2ac2af2b19224704411/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:28 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc123c55feff268e0a7270caa478b645b698f26617dfa2ac2af2b19224704411/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:28 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc123c55feff268e0a7270caa478b645b698f26617dfa2ac2af2b19224704411/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:28 standalone.localdomain podman[31002]: 2025-10-13 13:36:28.286981768 +0000 UTC m=+0.179294370 container init 0d734433fd1c12045cd13757fff7d6e485f4f21a2db8966af897f63339626dc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_hawking, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, io.buildah.version=1.33.12, release=553, RELEASE=main, distribution-scope=public)
Oct 13 13:36:28 standalone.localdomain podman[31002]: 2025-10-13 13:36:28.296020076 +0000 UTC m=+0.188332678 container start 0d734433fd1c12045cd13757fff7d6e485f4f21a2db8966af897f63339626dc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_hawking, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_CLEAN=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, release=553)
Oct 13 13:36:28 standalone.localdomain podman[31002]: 2025-10-13 13:36:28.296302205 +0000 UTC m=+0.188614987 container attach 0d734433fd1c12045cd13757fff7d6e485f4f21a2db8966af897f63339626dc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_hawking, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, name=rhceph, description=Red Hat Ceph Storage 7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container)
Oct 13 13:36:28 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Oct 13 13:36:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:28 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct 13 13:36:28 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct 13 13:36:28 standalone.localdomain systemd[1]: libpod-0d734433fd1c12045cd13757fff7d6e485f4f21a2db8966af897f63339626dc2.scope: Deactivated successfully.
Oct 13 13:36:28 standalone.localdomain podman[31002]: 2025-10-13 13:36:28.631625528 +0000 UTC m=+0.523938140 container died 0d734433fd1c12045cd13757fff7d6e485f4f21a2db8966af897f63339626dc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_hawking, GIT_BRANCH=main, release=553, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, vcs-type=git, name=rhceph, build-date=2025-09-24T08:57:55)
Oct 13 13:36:28 standalone.localdomain podman[31044]: 2025-10-13 13:36:28.713687884 +0000 UTC m=+0.071195882 container remove 0d734433fd1c12045cd13757fff7d6e485f4f21a2db8966af897f63339626dc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_hawking, io.openshift.expose-services=, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, version=7, ceph=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph)
Oct 13 13:36:28 standalone.localdomain systemd[1]: libpod-conmon-0d734433fd1c12045cd13757fff7d6e485f4f21a2db8966af897f63339626dc2.scope: Deactivated successfully.
Oct 13 13:36:28 standalone.localdomain podman[31058]: 2025-10-13 13:36:28.802149288 +0000 UTC m=+0.066799298 container create 064ed9c6891a48974d5fa0411ae63c77c951ccd0e3af9862a3ab13ef40d65f52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nightingale, vendor=Red Hat, Inc., GIT_CLEAN=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, release=553, distribution-scope=public, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, com.redhat.component=rhceph-container)
Oct 13 13:36:28 standalone.localdomain systemd[1]: Started libpod-conmon-064ed9c6891a48974d5fa0411ae63c77c951ccd0e3af9862a3ab13ef40d65f52.scope.
Oct 13 13:36:28 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:28 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deab699b7748d544a72042821d88f8dbf1aefbaa929386c8c3ac408d90009197/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:28 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deab699b7748d544a72042821d88f8dbf1aefbaa929386c8c3ac408d90009197/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:28 standalone.localdomain podman[31058]: 2025-10-13 13:36:28.774512417 +0000 UTC m=+0.039162427 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:28 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deab699b7748d544a72042821d88f8dbf1aefbaa929386c8c3ac408d90009197/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:28 standalone.localdomain podman[31058]: 2025-10-13 13:36:28.8935355 +0000 UTC m=+0.158185530 container init 064ed9c6891a48974d5fa0411ae63c77c951ccd0e3af9862a3ab13ef40d65f52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nightingale, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, version=7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, vendor=Red Hat, Inc., RELEASE=main, distribution-scope=public, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:36:28 standalone.localdomain podman[31058]: 2025-10-13 13:36:28.90131665 +0000 UTC m=+0.165966640 container start 064ed9c6891a48974d5fa0411ae63c77c951ccd0e3af9862a3ab13ef40d65f52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nightingale, release=553, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, version=7, ceph=True)
Oct 13 13:36:28 standalone.localdomain podman[31058]: 2025-10-13 13:36:28.901571268 +0000 UTC m=+0.166221328 container attach 064ed9c6891a48974d5fa0411ae63c77c951ccd0e3af9862a3ab13ef40d65f52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nightingale, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, io.openshift.expose-services=, RELEASE=main, io.buildah.version=1.33.12, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, version=7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:36:29 standalone.localdomain ceph-mgr[29999]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 13 13:36:29 standalone.localdomain systemd[1]: tmp-crun.DKjwb0.mount: Deactivated successfully.
Oct 13 13:36:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bc123c55feff268e0a7270caa478b645b698f26617dfa2ac2af2b19224704411-merged.mount: Deactivated successfully.
Oct 13 13:36:29 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:29 standalone.localdomain awesome_nightingale[31073]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChG/bUV5/Hx9Y/HVa2cqPF7Akvrc34rzTqxAdSSOMyxDELl4KaMq+NbQ4PvElck+3Ky66xCRmLnbedYcabBW5hM4X3iBAJsk+5ALMBpXxfGMDwzY4aR5iaNVOKSaweMGrCto/Gq2q0BEFXqCCl8G8AydW+daTq6klF53V67SZsttygoz6jc+/F9zcDchV7YIgR4mCDLcE2b89KDGcue5IIwCY+lK+ca/6GSukymHtFa/AH3YCkGoGGlLdTA2kbkbA+dbEuNF7D9XI+RkrPnbR7v8SSkRcjPWNhIj3OF2juuLe5JX4tiVUxTzVmCRNrY8Ab5jrkgcnyPeWWm0nT4lpbA4pyLFBnkTy3K1iKS9P8iG9kyNHZ/R7/h2c2M4mL3NjqE57/FkIPoEkbgdJe4Ef343PSS3L6/rb8FfA0b7U196TGC/eNmSXuFWxseSiDK7MKH6zjpWyjhYy27vRIj4AQBLcPcf11V/qcgz5hqzLICekLlzM+9eMaXvoork/0ouU= root@standalone.localdomain
Oct 13 13:36:29 standalone.localdomain systemd[1]: libpod-064ed9c6891a48974d5fa0411ae63c77c951ccd0e3af9862a3ab13ef40d65f52.scope: Deactivated successfully.
Oct 13 13:36:29 standalone.localdomain podman[31058]: 2025-10-13 13:36:29.289594283 +0000 UTC m=+0.554244313 container died 064ed9c6891a48974d5fa0411ae63c77c951ccd0e3af9862a3ab13ef40d65f52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nightingale, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, release=553, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public)
Oct 13 13:36:29 standalone.localdomain systemd[1]: tmp-crun.Oac769.mount: Deactivated successfully.
Oct 13 13:36:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-deab699b7748d544a72042821d88f8dbf1aefbaa929386c8c3ac408d90009197-merged.mount: Deactivated successfully.
Oct 13 13:36:29 standalone.localdomain podman[31099]: 2025-10-13 13:36:29.377243881 +0000 UTC m=+0.079258990 container remove 064ed9c6891a48974d5fa0411ae63c77c951ccd0e3af9862a3ab13ef40d65f52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_nightingale, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, release=553, ceph=True, version=7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=)
Oct 13 13:36:29 standalone.localdomain systemd[1]: libpod-conmon-064ed9c6891a48974d5fa0411ae63c77c951ccd0e3af9862a3ab13ef40d65f52.scope: Deactivated successfully.
Oct 13 13:36:29 standalone.localdomain podman[31113]: 
Oct 13 13:36:29 standalone.localdomain podman[31113]: 2025-10-13 13:36:29.470247094 +0000 UTC m=+0.066589681 container create 6c1a29e9b30629a44b040a8eef8cfae67d1a76c8f5ff4f7d8cc7f50cf3f22d04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_napier, vendor=Red Hat, Inc., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, architecture=x86_64)
Oct 13 13:36:29 standalone.localdomain systemd[1]: Started libpod-conmon-6c1a29e9b30629a44b040a8eef8cfae67d1a76c8f5ff4f7d8cc7f50cf3f22d04.scope.
Oct 13 13:36:29 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fbba057120c8f5b75ddd49c2d857151f7a9aa0cd9b1584ca9dd9beb575a6cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fbba057120c8f5b75ddd49c2d857151f7a9aa0cd9b1584ca9dd9beb575a6cb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:29 standalone.localdomain podman[31113]: 2025-10-13 13:36:29.448259107 +0000 UTC m=+0.044601724 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fbba057120c8f5b75ddd49c2d857151f7a9aa0cd9b1584ca9dd9beb575a6cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:29 standalone.localdomain podman[31113]: 2025-10-13 13:36:29.566014922 +0000 UTC m=+0.162357509 container init 6c1a29e9b30629a44b040a8eef8cfae67d1a76c8f5ff4f7d8cc7f50cf3f22d04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_napier, RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, description=Red Hat Ceph Storage 7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, distribution-scope=public, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph)
Oct 13 13:36:29 standalone.localdomain podman[31113]: 2025-10-13 13:36:29.572386299 +0000 UTC m=+0.168728966 container start 6c1a29e9b30629a44b040a8eef8cfae67d1a76c8f5ff4f7d8cc7f50cf3f22d04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_napier, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, name=rhceph, description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:36:29 standalone.localdomain podman[31113]: 2025-10-13 13:36:29.572679998 +0000 UTC m=+0.169022605 container attach 6c1a29e9b30629a44b040a8eef8cfae67d1a76c8f5ff4f7d8cc7f50cf3f22d04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_napier, version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 13:36:29 standalone.localdomain ceph-mon[29756]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:29 standalone.localdomain ceph-mon[29756]: Set ssh ssh_identity_pub
Oct 13 13:36:29 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "standalone.localdomain", "addr": "172.18.0.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:30 standalone.localdomain sshd[31154]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:36:30 standalone.localdomain sshd[31154]: Accepted publickey for ceph-admin from 172.18.0.100 port 46270 ssh2: RSA SHA256:cMHhkoBmUfNJ2/6UHMJENiofkvyf7CKW3i3hi1dNE6U
Oct 13 13:36:30 standalone.localdomain systemd-logind[760]: New session 28 of user ceph-admin.
Oct 13 13:36:30 standalone.localdomain systemd[1]: Created slice User Slice of UID 1002.
Oct 13 13:36:30 standalone.localdomain systemd[1]: Starting User Runtime Directory /run/user/1002...
Oct 13 13:36:30 standalone.localdomain systemd[1]: Finished User Runtime Directory /run/user/1002.
Oct 13 13:36:30 standalone.localdomain systemd[1]: Starting User Manager for UID 1002...
Oct 13 13:36:30 standalone.localdomain systemd[31158]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Oct 13 13:36:30 standalone.localdomain sshd[31170]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:36:30 standalone.localdomain systemd[31158]: Queued start job for default target Main User Target.
Oct 13 13:36:30 standalone.localdomain systemd[31158]: Created slice User Application Slice.
Oct 13 13:36:30 standalone.localdomain systemd[31158]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 13 13:36:30 standalone.localdomain systemd[31158]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 13:36:30 standalone.localdomain systemd[31158]: Reached target Paths.
Oct 13 13:36:30 standalone.localdomain systemd[31158]: Reached target Timers.
Oct 13 13:36:30 standalone.localdomain systemd[31158]: Starting D-Bus User Message Bus Socket...
Oct 13 13:36:30 standalone.localdomain systemd[31158]: Starting Create User's Volatile Files and Directories...
Oct 13 13:36:30 standalone.localdomain systemd[31158]: Finished Create User's Volatile Files and Directories.
Oct 13 13:36:30 standalone.localdomain systemd[31158]: Listening on D-Bus User Message Bus Socket.
Oct 13 13:36:30 standalone.localdomain systemd[31158]: Reached target Sockets.
Oct 13 13:36:30 standalone.localdomain systemd[31158]: Reached target Basic System.
Oct 13 13:36:30 standalone.localdomain systemd[1]: Started User Manager for UID 1002.
Oct 13 13:36:30 standalone.localdomain systemd[31158]: Reached target Main User Target.
Oct 13 13:36:30 standalone.localdomain systemd[31158]: Startup finished in 124ms.
Oct 13 13:36:30 standalone.localdomain systemd[1]: Started Session 28 of User ceph-admin.
Oct 13 13:36:30 standalone.localdomain sshd[31170]: Accepted publickey for ceph-admin from 172.18.0.100 port 46276 ssh2: RSA SHA256:cMHhkoBmUfNJ2/6UHMJENiofkvyf7CKW3i3hi1dNE6U
Oct 13 13:36:30 standalone.localdomain sshd[31154]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Oct 13 13:36:30 standalone.localdomain systemd-logind[760]: New session 30 of user ceph-admin.
Oct 13 13:36:30 standalone.localdomain systemd[1]: Started Session 30 of User ceph-admin.
Oct 13 13:36:30 standalone.localdomain sshd[31170]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Oct 13 13:36:30 standalone.localdomain sudo[31177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:30 standalone.localdomain sudo[31177]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:30 standalone.localdomain sudo[31177]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:30 standalone.localdomain ceph-mon[29756]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:30 standalone.localdomain ceph-mon[29756]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "standalone.localdomain", "addr": "172.18.0.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:30 standalone.localdomain sshd[31192]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:36:30 standalone.localdomain sshd[31192]: Accepted publickey for ceph-admin from 172.18.0.100 port 46288 ssh2: RSA SHA256:cMHhkoBmUfNJ2/6UHMJENiofkvyf7CKW3i3hi1dNE6U
Oct 13 13:36:30 standalone.localdomain systemd-logind[760]: New session 31 of user ceph-admin.
Oct 13 13:36:30 standalone.localdomain systemd[1]: Started Session 31 of User ceph-admin.
Oct 13 13:36:30 standalone.localdomain sshd[31192]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Oct 13 13:36:30 standalone.localdomain sudo[31196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host --expect-hostname standalone.localdomain
Oct 13 13:36:30 standalone.localdomain sudo[31196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:30 standalone.localdomain sudo[31196]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:31 standalone.localdomain ceph-mgr[29999]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 13 13:36:31 standalone.localdomain sshd[31211]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:36:31 standalone.localdomain sshd[31211]: Accepted publickey for ceph-admin from 172.18.0.100 port 46302 ssh2: RSA SHA256:cMHhkoBmUfNJ2/6UHMJENiofkvyf7CKW3i3hi1dNE6U
Oct 13 13:36:31 standalone.localdomain systemd-logind[760]: New session 32 of user ceph-admin.
Oct 13 13:36:31 standalone.localdomain systemd[1]: Started Session 32 of User ceph-admin.
Oct 13 13:36:31 standalone.localdomain sshd[31211]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Oct 13 13:36:31 standalone.localdomain sudo[31215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3
Oct 13 13:36:31 standalone.localdomain sudo[31215]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:31 standalone.localdomain sudo[31215]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:31 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.serve] Deploying cephadm binary to standalone.localdomain
Oct 13 13:36:31 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Deploying cephadm binary to standalone.localdomain
Oct 13 13:36:31 standalone.localdomain sshd[31230]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:36:31 standalone.localdomain sshd[31230]: Accepted publickey for ceph-admin from 172.18.0.100 port 46312 ssh2: RSA SHA256:cMHhkoBmUfNJ2/6UHMJENiofkvyf7CKW3i3hi1dNE6U
Oct 13 13:36:31 standalone.localdomain systemd-logind[760]: New session 33 of user ceph-admin.
Oct 13 13:36:31 standalone.localdomain systemd[1]: Started Session 33 of User ceph-admin.
Oct 13 13:36:31 standalone.localdomain sshd[31230]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Oct 13 13:36:31 standalone.localdomain sudo[31234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:31 standalone.localdomain sudo[31234]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:31 standalone.localdomain sudo[31234]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053074 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:36:31 standalone.localdomain sshd[31249]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:36:32 standalone.localdomain sshd[31249]: Accepted publickey for ceph-admin from 172.18.0.100 port 46324 ssh2: RSA SHA256:cMHhkoBmUfNJ2/6UHMJENiofkvyf7CKW3i3hi1dNE6U
Oct 13 13:36:32 standalone.localdomain systemd-logind[760]: New session 34 of user ceph-admin.
Oct 13 13:36:32 standalone.localdomain systemd[1]: Started Session 34 of User ceph-admin.
Oct 13 13:36:32 standalone.localdomain sshd[31249]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Oct 13 13:36:32 standalone.localdomain sudo[31253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:32 standalone.localdomain sudo[31253]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:32 standalone.localdomain sudo[31253]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:32 standalone.localdomain sshd[31268]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:36:32 standalone.localdomain sshd[31268]: Accepted publickey for ceph-admin from 172.18.0.100 port 46334 ssh2: RSA SHA256:cMHhkoBmUfNJ2/6UHMJENiofkvyf7CKW3i3hi1dNE6U
Oct 13 13:36:32 standalone.localdomain systemd-logind[760]: New session 35 of user ceph-admin.
Oct 13 13:36:32 standalone.localdomain systemd[1]: Started Session 35 of User ceph-admin.
Oct 13 13:36:32 standalone.localdomain sshd[31268]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Oct 13 13:36:32 standalone.localdomain sudo[31272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3.new
Oct 13 13:36:32 standalone.localdomain sudo[31272]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:32 standalone.localdomain sudo[31272]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:32 standalone.localdomain sshd[31287]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:36:32 standalone.localdomain sshd[31287]: Accepted publickey for ceph-admin from 172.18.0.100 port 46344 ssh2: RSA SHA256:cMHhkoBmUfNJ2/6UHMJENiofkvyf7CKW3i3hi1dNE6U
Oct 13 13:36:32 standalone.localdomain ceph-mon[29756]: Deploying cephadm binary to standalone.localdomain
Oct 13 13:36:32 standalone.localdomain systemd-logind[760]: New session 36 of user ceph-admin.
Oct 13 13:36:32 standalone.localdomain systemd[1]: Started Session 36 of User ceph-admin.
Oct 13 13:36:32 standalone.localdomain sshd[31287]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Oct 13 13:36:33 standalone.localdomain sudo[31291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:33 standalone.localdomain sudo[31291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:33 standalone.localdomain sudo[31291]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:33 standalone.localdomain ceph-mgr[29999]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 13 13:36:33 standalone.localdomain sshd[31306]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:36:33 standalone.localdomain sshd[31306]: Accepted publickey for ceph-admin from 172.18.0.100 port 46358 ssh2: RSA SHA256:cMHhkoBmUfNJ2/6UHMJENiofkvyf7CKW3i3hi1dNE6U
Oct 13 13:36:33 standalone.localdomain systemd-logind[760]: New session 37 of user ceph-admin.
Oct 13 13:36:33 standalone.localdomain systemd[1]: Started Session 37 of User ceph-admin.
Oct 13 13:36:33 standalone.localdomain sshd[31306]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Oct 13 13:36:33 standalone.localdomain sudo[31310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3.new
Oct 13 13:36:33 standalone.localdomain sudo[31310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:33 standalone.localdomain sudo[31310]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:33 standalone.localdomain sshd[31325]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:36:33 standalone.localdomain sshd[31325]: Accepted publickey for ceph-admin from 172.18.0.100 port 46366 ssh2: RSA SHA256:cMHhkoBmUfNJ2/6UHMJENiofkvyf7CKW3i3hi1dNE6U
Oct 13 13:36:33 standalone.localdomain systemd-logind[760]: New session 38 of user ceph-admin.
Oct 13 13:36:33 standalone.localdomain systemd[1]: Started Session 38 of User ceph-admin.
Oct 13 13:36:33 standalone.localdomain sshd[31325]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Oct 13 13:36:34 standalone.localdomain sshd[31342]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:36:34 standalone.localdomain sshd[31342]: Accepted publickey for ceph-admin from 172.18.0.100 port 46370 ssh2: RSA SHA256:cMHhkoBmUfNJ2/6UHMJENiofkvyf7CKW3i3hi1dNE6U
Oct 13 13:36:34 standalone.localdomain systemd-logind[760]: New session 39 of user ceph-admin.
Oct 13 13:36:34 standalone.localdomain systemd[1]: Started Session 39 of User ceph-admin.
Oct 13 13:36:34 standalone.localdomain sshd[31342]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Oct 13 13:36:34 standalone.localdomain sudo[31346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3.new /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3
Oct 13 13:36:34 standalone.localdomain sudo[31346]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:34 standalone.localdomain sudo[31346]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:34 standalone.localdomain sshd[31361]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:36:34 standalone.localdomain sshd[31361]: Accepted publickey for ceph-admin from 172.18.0.100 port 46386 ssh2: RSA SHA256:cMHhkoBmUfNJ2/6UHMJENiofkvyf7CKW3i3hi1dNE6U
Oct 13 13:36:34 standalone.localdomain systemd-logind[760]: New session 40 of user ceph-admin.
Oct 13 13:36:34 standalone.localdomain systemd[1]: Started Session 40 of User ceph-admin.
Oct 13 13:36:34 standalone.localdomain sshd[31361]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Oct 13 13:36:34 standalone.localdomain sudo[31365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host --expect-hostname standalone.localdomain
Oct 13 13:36:34 standalone.localdomain sudo[31365]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:35 standalone.localdomain ceph-mgr[29999]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 13 13:36:35 standalone.localdomain sudo[31365]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 13 13:36:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:35 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Added host standalone.localdomain
Oct 13 13:36:35 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Added host standalone.localdomain
Oct 13 13:36:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 13 13:36:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config dump", "format": "json"} : dispatch
Oct 13 13:36:35 standalone.localdomain loving_napier[31128]: Added host 'standalone.localdomain' with addr '172.18.0.100'
Oct 13 13:36:35 standalone.localdomain systemd[1]: libpod-6c1a29e9b30629a44b040a8eef8cfae67d1a76c8f5ff4f7d8cc7f50cf3f22d04.scope: Deactivated successfully.
Oct 13 13:36:35 standalone.localdomain podman[31113]: 2025-10-13 13:36:35.249976259 +0000 UTC m=+5.846318886 container died 6c1a29e9b30629a44b040a8eef8cfae67d1a76c8f5ff4f7d8cc7f50cf3f22d04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_napier, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_CLEAN=True, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, architecture=x86_64, CEPH_POINT_RELEASE=, release=553)
Oct 13 13:36:35 standalone.localdomain sudo[31403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:35 standalone.localdomain sudo[31403]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:35 standalone.localdomain sudo[31403]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-31fbba057120c8f5b75ddd49c2d857151f7a9aa0cd9b1584ca9dd9beb575a6cb-merged.mount: Deactivated successfully.
Oct 13 13:36:35 standalone.localdomain podman[31416]: 2025-10-13 13:36:35.352905788 +0000 UTC m=+0.085991568 container remove 6c1a29e9b30629a44b040a8eef8cfae67d1a76c8f5ff4f7d8cc7f50cf3f22d04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_napier, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64)
Oct 13 13:36:35 standalone.localdomain systemd[1]: libpod-conmon-6c1a29e9b30629a44b040a8eef8cfae67d1a76c8f5ff4f7d8cc7f50cf3f22d04.scope: Deactivated successfully.
Oct 13 13:36:35 standalone.localdomain sudo[31429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 inspect-image
Oct 13 13:36:35 standalone.localdomain sudo[31429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:35 standalone.localdomain podman[31446]: 
Oct 13 13:36:35 standalone.localdomain podman[31446]: 2025-10-13 13:36:35.441499276 +0000 UTC m=+0.065226370 container create 061121f84c1e6821b75411f49ff37852f2b9190dabef29347fee115db93ffbc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_austin, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, release=553, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., name=rhceph, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_CLEAN=True, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:36:35 standalone.localdomain systemd[1]: Started libpod-conmon-061121f84c1e6821b75411f49ff37852f2b9190dabef29347fee115db93ffbc2.scope.
Oct 13 13:36:35 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:35 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04a8e188731c87130c6ce5174b9a9a443b82cbe75819b624831e39c9364c20d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:35 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04a8e188731c87130c6ce5174b9a9a443b82cbe75819b624831e39c9364c20d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:35 standalone.localdomain podman[31446]: 2025-10-13 13:36:35.418495527 +0000 UTC m=+0.042222601 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:35 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04a8e188731c87130c6ce5174b9a9a443b82cbe75819b624831e39c9364c20d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:35 standalone.localdomain podman[31446]: 2025-10-13 13:36:35.541899787 +0000 UTC m=+0.165626891 container init 061121f84c1e6821b75411f49ff37852f2b9190dabef29347fee115db93ffbc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_austin, RELEASE=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, release=553, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, GIT_BRANCH=main, name=rhceph, GIT_CLEAN=True)
Oct 13 13:36:35 standalone.localdomain podman[31446]: 2025-10-13 13:36:35.552843203 +0000 UTC m=+0.176570277 container start 061121f84c1e6821b75411f49ff37852f2b9190dabef29347fee115db93ffbc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_austin, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, name=rhceph, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, ceph=True, release=553)
Oct 13 13:36:35 standalone.localdomain podman[31446]: 2025-10-13 13:36:35.553336019 +0000 UTC m=+0.177063173 container attach 061121f84c1e6821b75411f49ff37852f2b9190dabef29347fee115db93ffbc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_austin, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, RELEASE=main, name=rhceph, GIT_BRANCH=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:36:35 standalone.localdomain podman[31511]: 
Oct 13 13:36:35 standalone.localdomain podman[31511]: 2025-10-13 13:36:35.830399877 +0000 UTC m=+0.067929312 container create 1ee6493f30fafbbd62cfaac134fb7b4e680615adc26353e2e7ec3a8e126c18b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_ptolemy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, version=7, architecture=x86_64)
Oct 13 13:36:35 standalone.localdomain systemd[1]: Started libpod-conmon-1ee6493f30fafbbd62cfaac134fb7b4e680615adc26353e2e7ec3a8e126c18b6.scope.
Oct 13 13:36:35 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:35 standalone.localdomain podman[31511]: 2025-10-13 13:36:35.888832626 +0000 UTC m=+0.126362011 container init 1ee6493f30fafbbd62cfaac134fb7b4e680615adc26353e2e7ec3a8e126c18b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_ptolemy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, distribution-scope=public, ceph=True, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=)
Oct 13 13:36:35 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:35 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct 13 13:36:35 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct 13 13:36:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 13 13:36:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:35 standalone.localdomain stupefied_austin[31462]: Scheduled mon update...
Oct 13 13:36:35 standalone.localdomain podman[31511]: 2025-10-13 13:36:35.897176733 +0000 UTC m=+0.134706178 container start 1ee6493f30fafbbd62cfaac134fb7b4e680615adc26353e2e7ec3a8e126c18b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_ptolemy, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, name=rhceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True)
Oct 13 13:36:35 standalone.localdomain podman[31511]: 2025-10-13 13:36:35.897461042 +0000 UTC m=+0.134990507 container attach 1ee6493f30fafbbd62cfaac134fb7b4e680615adc26353e2e7ec3a8e126c18b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_ptolemy, GIT_CLEAN=True, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git)
Oct 13 13:36:35 standalone.localdomain podman[31511]: 2025-10-13 13:36:35.801527059 +0000 UTC m=+0.039056504 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:35 standalone.localdomain systemd[1]: libpod-061121f84c1e6821b75411f49ff37852f2b9190dabef29347fee115db93ffbc2.scope: Deactivated successfully.
Oct 13 13:36:35 standalone.localdomain podman[31446]: 2025-10-13 13:36:35.909948987 +0000 UTC m=+0.533676061 container died 061121f84c1e6821b75411f49ff37852f2b9190dabef29347fee115db93ffbc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_austin, ceph=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-type=git, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph)
Oct 13 13:36:35 standalone.localdomain busy_ptolemy[31526]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable)
Oct 13 13:36:35 standalone.localdomain podman[31533]: 2025-10-13 13:36:35.995300054 +0000 UTC m=+0.075986040 container remove 061121f84c1e6821b75411f49ff37852f2b9190dabef29347fee115db93ffbc2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_austin, vcs-type=git, vendor=Red Hat, Inc., release=553, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, architecture=x86_64, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:36:36 standalone.localdomain systemd[1]: libpod-conmon-061121f84c1e6821b75411f49ff37852f2b9190dabef29347fee115db93ffbc2.scope: Deactivated successfully.
Oct 13 13:36:36 standalone.localdomain systemd[1]: libpod-1ee6493f30fafbbd62cfaac134fb7b4e680615adc26353e2e7ec3a8e126c18b6.scope: Deactivated successfully.
Oct 13 13:36:36 standalone.localdomain podman[31546]: 2025-10-13 13:36:36.043209418 +0000 UTC m=+0.028705704 container died 1ee6493f30fafbbd62cfaac134fb7b4e680615adc26353e2e7ec3a8e126c18b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_ptolemy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:36:36 standalone.localdomain podman[31547]: 
Oct 13 13:36:36 standalone.localdomain podman[31547]: 2025-10-13 13:36:36.05982768 +0000 UTC m=+0.043484039 container create c7623b4900c00e6d813fcc4aeabcd5480a4766ba1bdef0ee4d9ad146cddbe8c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_noether, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, version=7, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True)
Oct 13 13:36:36 standalone.localdomain systemd[1]: Started libpod-conmon-c7623b4900c00e6d813fcc4aeabcd5480a4766ba1bdef0ee4d9ad146cddbe8c2.scope.
Oct 13 13:36:36 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:36 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc9ee95ca4077ab28fd55974f44537c65ac78355cd2d66b67425b3aa135a6fee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:36 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc9ee95ca4077ab28fd55974f44537c65ac78355cd2d66b67425b3aa135a6fee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:36 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc9ee95ca4077ab28fd55974f44537c65ac78355cd2d66b67425b3aa135a6fee/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:36 standalone.localdomain podman[31547]: 2025-10-13 13:36:36.044673354 +0000 UTC m=+0.028329723 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:36 standalone.localdomain podman[31546]: 2025-10-13 13:36:36.145590381 +0000 UTC m=+0.131086637 container remove 1ee6493f30fafbbd62cfaac134fb7b4e680615adc26353e2e7ec3a8e126c18b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_ptolemy, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, release=553, GIT_BRANCH=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55)
Oct 13 13:36:36 standalone.localdomain systemd[1]: libpod-conmon-1ee6493f30fafbbd62cfaac134fb7b4e680615adc26353e2e7ec3a8e126c18b6.scope: Deactivated successfully.
Oct 13 13:36:36 standalone.localdomain sudo[31429]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Oct 13 13:36:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:36 standalone.localdomain podman[31547]: 2025-10-13 13:36:36.194812186 +0000 UTC m=+0.178468555 container init c7623b4900c00e6d813fcc4aeabcd5480a4766ba1bdef0ee4d9ad146cddbe8c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_noether, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, architecture=x86_64, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, distribution-scope=public, version=7, release=553, io.openshift.tags=rhceph ceph)
Oct 13 13:36:36 standalone.localdomain podman[31547]: 2025-10-13 13:36:36.199624454 +0000 UTC m=+0.183280813 container start c7623b4900c00e6d813fcc4aeabcd5480a4766ba1bdef0ee4d9ad146cddbe8c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_noether, name=rhceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., version=7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, architecture=x86_64, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public)
Oct 13 13:36:36 standalone.localdomain podman[31547]: 2025-10-13 13:36:36.199914553 +0000 UTC m=+0.183570962 container attach c7623b4900c00e6d813fcc4aeabcd5480a4766ba1bdef0ee4d9ad146cddbe8c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_noether, io.openshift.expose-services=, CEPH_POINT_RELEASE=, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.buildah.version=1.33.12)
Oct 13 13:36:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:36 standalone.localdomain ceph-mon[29756]: Added host standalone.localdomain
Oct 13 13:36:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config dump", "format": "json"} : dispatch
Oct 13 13:36:36 standalone.localdomain ceph-mon[29756]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:36 standalone.localdomain ceph-mon[29756]: Saving service mon spec with placement count:5
Oct 13 13:36:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:36 standalone.localdomain sudo[31580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:36 standalone.localdomain sudo[31580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:36 standalone.localdomain sudo[31580]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:36 standalone.localdomain sudo[31595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Oct 13 13:36:36 standalone.localdomain sudo[31595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a04a8e188731c87130c6ce5174b9a9a443b82cbe75819b624831e39c9364c20d-merged.mount: Deactivated successfully.
Oct 13 13:36:36 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:36 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct 13 13:36:36 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct 13 13:36:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 13 13:36:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:36 standalone.localdomain laughing_noether[31574]: Scheduled mgr update...
Oct 13 13:36:36 standalone.localdomain systemd[1]: libpod-c7623b4900c00e6d813fcc4aeabcd5480a4766ba1bdef0ee4d9ad146cddbe8c2.scope: Deactivated successfully.
Oct 13 13:36:36 standalone.localdomain podman[31547]: 2025-10-13 13:36:36.58312015 +0000 UTC m=+0.566776519 container died c7623b4900c00e6d813fcc4aeabcd5480a4766ba1bdef0ee4d9ad146cddbe8c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_noether, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=553, name=rhceph, architecture=x86_64, RELEASE=main, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:36:36 standalone.localdomain systemd[1]: tmp-crun.KtvJOp.mount: Deactivated successfully.
Oct 13 13:36:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fc9ee95ca4077ab28fd55974f44537c65ac78355cd2d66b67425b3aa135a6fee-merged.mount: Deactivated successfully.
Oct 13 13:36:36 standalone.localdomain podman[31641]: 2025-10-13 13:36:36.652855406 +0000 UTC m=+0.064306880 container remove c7623b4900c00e6d813fcc4aeabcd5480a4766ba1bdef0ee4d9ad146cddbe8c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_noether, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=)
Oct 13 13:36:36 standalone.localdomain systemd[1]: libpod-conmon-c7623b4900c00e6d813fcc4aeabcd5480a4766ba1bdef0ee4d9ad146cddbe8c2.scope: Deactivated successfully.
Oct 13 13:36:36 standalone.localdomain sudo[31595]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:36:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:36 standalone.localdomain podman[31667]: 
Oct 13 13:36:36 standalone.localdomain podman[31667]: 2025-10-13 13:36:36.736607055 +0000 UTC m=+0.060978339 container create fba10429788cc27432f42d19266faa79bd325eaa12f28bd1d9e05b243e0d1b81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_cerf, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, name=rhceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph)
Oct 13 13:36:36 standalone.localdomain sudo[31681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:36 standalone.localdomain systemd[1]: Started libpod-conmon-fba10429788cc27432f42d19266faa79bd325eaa12f28bd1d9e05b243e0d1b81.scope.
Oct 13 13:36:36 standalone.localdomain sudo[31681]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:36 standalone.localdomain sudo[31681]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:36 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:36 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56cdd250b4354ecda3b4d3913e8f71973f96d5495ce3153df5dad4351c30b3ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:36 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56cdd250b4354ecda3b4d3913e8f71973f96d5495ce3153df5dad4351c30b3ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:36 standalone.localdomain podman[31667]: 2025-10-13 13:36:36.713650328 +0000 UTC m=+0.038021652 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:36 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56cdd250b4354ecda3b4d3913e8f71973f96d5495ce3153df5dad4351c30b3ae/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:36 standalone.localdomain podman[31667]: 2025-10-13 13:36:36.835159028 +0000 UTC m=+0.159530322 container init fba10429788cc27432f42d19266faa79bd325eaa12f28bd1d9e05b243e0d1b81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_cerf, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, name=rhceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, release=553, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:36 standalone.localdomain podman[31667]: 2025-10-13 13:36:36.841820264 +0000 UTC m=+0.166191528 container start fba10429788cc27432f42d19266faa79bd325eaa12f28bd1d9e05b243e0d1b81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_cerf, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, architecture=x86_64, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, GIT_BRANCH=main, version=7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:36:36 standalone.localdomain podman[31667]: 2025-10-13 13:36:36.842042691 +0000 UTC m=+0.166413985 container attach fba10429788cc27432f42d19266faa79bd325eaa12f28bd1d9e05b243e0d1b81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_cerf, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, version=7, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, release=553, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, RELEASE=main, GIT_CLEAN=True)
Oct 13 13:36:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:36:36 standalone.localdomain sudo[31700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f --timeout 895 ls
Oct 13 13:36:36 standalone.localdomain sudo[31700]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:37 standalone.localdomain ceph-mgr[29999]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 13 13:36:37 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:37 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Saving service crash spec with placement *
Oct 13 13:36:37 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct 13 13:36:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 13 13:36:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:37 standalone.localdomain stupefied_cerf[31696]: Scheduled crash update...
Oct 13 13:36:37 standalone.localdomain podman[31667]: 2025-10-13 13:36:37.239262468 +0000 UTC m=+0.563633732 container died fba10429788cc27432f42d19266faa79bd325eaa12f28bd1d9e05b243e0d1b81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_cerf, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, release=553, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:36:37 standalone.localdomain systemd[1]: libpod-fba10429788cc27432f42d19266faa79bd325eaa12f28bd1d9e05b243e0d1b81.scope: Deactivated successfully.
Oct 13 13:36:37 standalone.localdomain podman[31765]: 2025-10-13 13:36:37.321649375 +0000 UTC m=+0.074193645 container remove fba10429788cc27432f42d19266faa79bd325eaa12f28bd1d9e05b243e0d1b81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_cerf, io.openshift.expose-services=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_CLEAN=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, name=rhceph, ceph=True, com.redhat.component=rhceph-container)
Oct 13 13:36:37 standalone.localdomain systemd[1]: libpod-conmon-fba10429788cc27432f42d19266faa79bd325eaa12f28bd1d9e05b243e0d1b81.scope: Deactivated successfully.
Oct 13 13:36:37 standalone.localdomain systemd[1]: tmp-crun.nSW7dz.mount: Deactivated successfully.
Oct 13 13:36:37 standalone.localdomain podman[31824]: 2025-10-13 13:36:37.465192264 +0000 UTC m=+0.073267037 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vcs-type=git, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:36:37 standalone.localdomain podman[31822]: 
Oct 13 13:36:37 standalone.localdomain podman[31822]: 2025-10-13 13:36:37.48779859 +0000 UTC m=+0.098981688 container create 77e4e080e18b8be395766816380ffb613b132b6dc9aa8408f9ab6f479c170620 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_black, io.k8s.description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_CLEAN=True, io.openshift.expose-services=, release=553, CEPH_POINT_RELEASE=, name=rhceph, build-date=2025-09-24T08:57:55, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7)
Oct 13 13:36:37 standalone.localdomain podman[31822]: 2025-10-13 13:36:37.414584756 +0000 UTC m=+0.025767904 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:37 standalone.localdomain systemd[1]: Started libpod-conmon-77e4e080e18b8be395766816380ffb613b132b6dc9aa8408f9ab6f479c170620.scope.
Oct 13 13:36:37 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afeebbe26cd07d9b04d143a7f9a0d9efaf20143d4cdc1e718b504a4bc554b292/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afeebbe26cd07d9b04d143a7f9a0d9efaf20143d4cdc1e718b504a4bc554b292/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:37 standalone.localdomain ceph-mon[29756]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:37 standalone.localdomain ceph-mon[29756]: Saving service mgr spec with placement count:2
Oct 13 13:36:37 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:37 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:37 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afeebbe26cd07d9b04d143a7f9a0d9efaf20143d4cdc1e718b504a4bc554b292/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:37 standalone.localdomain podman[31822]: 2025-10-13 13:36:37.576939534 +0000 UTC m=+0.188122622 container init 77e4e080e18b8be395766816380ffb613b132b6dc9aa8408f9ab6f479c170620 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_black, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_CLEAN=True, RELEASE=main, io.openshift.expose-services=, ceph=True, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, architecture=x86_64)
Oct 13 13:36:37 standalone.localdomain podman[31822]: 2025-10-13 13:36:37.585190728 +0000 UTC m=+0.196373846 container start 77e4e080e18b8be395766816380ffb613b132b6dc9aa8408f9ab6f479c170620 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_black, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, architecture=x86_64, release=553, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, vcs-type=git)
Oct 13 13:36:37 standalone.localdomain podman[31822]: 2025-10-13 13:36:37.585463696 +0000 UTC m=+0.196646794 container attach 77e4e080e18b8be395766816380ffb613b132b6dc9aa8408f9ab6f479c170620 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_black, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, architecture=x86_64, io.buildah.version=1.33.12, GIT_CLEAN=True, io.openshift.expose-services=, distribution-scope=public, release=553, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7)
Oct 13 13:36:37 standalone.localdomain podman[31824]: 2025-10-13 13:36:37.586778416 +0000 UTC m=+0.194853179 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, release=553, GIT_CLEAN=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_BRANCH=main)
Oct 13 13:36:37 standalone.localdomain sudo[31700]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:36:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:37 standalone.localdomain sudo[31909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:37 standalone.localdomain sudo[31909]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:37 standalone.localdomain sudo[31909]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:37 standalone.localdomain sudo[31924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:36:37 standalone.localdomain sudo[31924]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Oct 13 13:36:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/1686432409' entity='client.admin' 
Oct 13 13:36:38 standalone.localdomain systemd[1]: libpod-77e4e080e18b8be395766816380ffb613b132b6dc9aa8408f9ab6f479c170620.scope: Deactivated successfully.
Oct 13 13:36:38 standalone.localdomain podman[31822]: 2025-10-13 13:36:38.007565131 +0000 UTC m=+0.618748249 container died 77e4e080e18b8be395766816380ffb613b132b6dc9aa8408f9ab6f479c170620 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_black, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, ceph=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, release=553, GIT_BRANCH=main, vendor=Red Hat, Inc.)
Oct 13 13:36:38 standalone.localdomain podman[31941]: 2025-10-13 13:36:38.065094401 +0000 UTC m=+0.050656400 container remove 77e4e080e18b8be395766816380ffb613b132b6dc9aa8408f9ab6f479c170620 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_black, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64)
Oct 13 13:36:38 standalone.localdomain systemd[1]: libpod-conmon-77e4e080e18b8be395766816380ffb613b132b6dc9aa8408f9ab6f479c170620.scope: Deactivated successfully.
Oct 13 13:36:38 standalone.localdomain podman[31953]: 
Oct 13 13:36:38 standalone.localdomain podman[31953]: 2025-10-13 13:36:38.151879693 +0000 UTC m=+0.065791036 container create 9abed286ec074805ca6532771619de56c6787850f0b38c49a0b4a01a08a46868 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_williamson, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vcs-type=git, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, release=553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, version=7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:36:38 standalone.localdomain systemd[1]: Started libpod-conmon-9abed286ec074805ca6532771619de56c6787850f0b38c49a0b4a01a08a46868.scope.
Oct 13 13:36:38 standalone.localdomain systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 31981 (sysctl)
Oct 13 13:36:38 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:38 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d978e8fdec444d124f644f9d41662e8b4170e683d48e3cf1cf69f2773164ded/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:38 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d978e8fdec444d124f644f9d41662e8b4170e683d48e3cf1cf69f2773164ded/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:38 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d978e8fdec444d124f644f9d41662e8b4170e683d48e3cf1cf69f2773164ded/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:38 standalone.localdomain podman[31953]: 2025-10-13 13:36:38.129131583 +0000 UTC m=+0.043042916 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:38 standalone.localdomain podman[31953]: 2025-10-13 13:36:38.242281426 +0000 UTC m=+0.156192769 container init 9abed286ec074805ca6532771619de56c6787850f0b38c49a0b4a01a08a46868 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_williamson, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, name=rhceph, release=553, RELEASE=main, CEPH_POINT_RELEASE=)
Oct 13 13:36:38 standalone.localdomain podman[31953]: 2025-10-13 13:36:38.251228082 +0000 UTC m=+0.165139435 container start 9abed286ec074805ca6532771619de56c6787850f0b38c49a0b4a01a08a46868 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_williamson, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_CLEAN=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, version=7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, distribution-scope=public)
Oct 13 13:36:38 standalone.localdomain podman[31953]: 2025-10-13 13:36:38.25150069 +0000 UTC m=+0.165412083 container attach 9abed286ec074805ca6532771619de56c6787850f0b38c49a0b4a01a08a46868 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_williamson, version=7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, release=553, architecture=x86_64, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-afeebbe26cd07d9b04d143a7f9a0d9efaf20143d4cdc1e718b504a4bc554b292-merged.mount: Deactivated successfully.
Oct 13 13:36:38 standalone.localdomain systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 13 13:36:38 standalone.localdomain systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 13 13:36:38 standalone.localdomain sudo[31924]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:38 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Oct 13 13:36:38 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:38 standalone.localdomain systemd[1]: libpod-9abed286ec074805ca6532771619de56c6787850f0b38c49a0b4a01a08a46868.scope: Deactivated successfully.
Oct 13 13:36:38 standalone.localdomain podman[31953]: 2025-10-13 13:36:38.694372141 +0000 UTC m=+0.608283524 container died 9abed286ec074805ca6532771619de56c6787850f0b38c49a0b4a01a08a46868 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_williamson, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, RELEASE=main, ceph=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:36:38 standalone.localdomain ceph-mon[29756]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:38 standalone.localdomain ceph-mon[29756]: Saving service crash spec with placement *
Oct 13 13:36:38 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:38 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1686432409' entity='client.admin' 
Oct 13 13:36:38 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:38 standalone.localdomain sudo[32035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:38 standalone.localdomain systemd[1]: tmp-crun.GLLZns.mount: Deactivated successfully.
Oct 13 13:36:38 standalone.localdomain sudo[32035]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:38 standalone.localdomain sudo[32035]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4d978e8fdec444d124f644f9d41662e8b4170e683d48e3cf1cf69f2773164ded-merged.mount: Deactivated successfully.
Oct 13 13:36:38 standalone.localdomain podman[32029]: 2025-10-13 13:36:38.781198543 +0000 UTC m=+0.077473565 container remove 9abed286ec074805ca6532771619de56c6787850f0b38c49a0b4a01a08a46868 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_williamson, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, name=rhceph, com.redhat.component=rhceph-container, version=7, io.buildah.version=1.33.12, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55)
Oct 13 13:36:38 standalone.localdomain systemd[1]: libpod-conmon-9abed286ec074805ca6532771619de56c6787850f0b38c49a0b4a01a08a46868.scope: Deactivated successfully.
Oct 13 13:36:38 standalone.localdomain sudo[32056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f --timeout 895 list-networks
Oct 13 13:36:38 standalone.localdomain sudo[32056]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:38 standalone.localdomain podman[32069]: 
Oct 13 13:36:38 standalone.localdomain podman[32069]: 2025-10-13 13:36:38.877621191 +0000 UTC m=+0.069123248 container create 1ef5cf326a58efb930da110249847ed8b4e9bad48405be38066931bc839a00a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_babbage, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, name=rhceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, architecture=x86_64, GIT_BRANCH=main, RELEASE=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public)
Oct 13 13:36:38 standalone.localdomain systemd[1]: Started libpod-conmon-1ef5cf326a58efb930da110249847ed8b4e9bad48405be38066931bc839a00a0.scope.
Oct 13 13:36:38 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:38 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8493793de9daa2fc89c3bf55dc04a0122bc8990cb587ce1faea9d05413e705de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:38 standalone.localdomain podman[32069]: 2025-10-13 13:36:38.84413522 +0000 UTC m=+0.035637287 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:38 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8493793de9daa2fc89c3bf55dc04a0122bc8990cb587ce1faea9d05413e705de/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:38 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8493793de9daa2fc89c3bf55dc04a0122bc8990cb587ce1faea9d05413e705de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:38 standalone.localdomain podman[32069]: 2025-10-13 13:36:38.982887651 +0000 UTC m=+0.174389708 container init 1ef5cf326a58efb930da110249847ed8b4e9bad48405be38066931bc839a00a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_babbage, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:36:38 standalone.localdomain podman[32069]: 2025-10-13 13:36:38.994735715 +0000 UTC m=+0.186237762 container start 1ef5cf326a58efb930da110249847ed8b4e9bad48405be38066931bc839a00a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_babbage, name=rhceph, release=553, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=)
Oct 13 13:36:38 standalone.localdomain podman[32069]: 2025-10-13 13:36:38.994940391 +0000 UTC m=+0.186442438 container attach 1ef5cf326a58efb930da110249847ed8b4e9bad48405be38066931bc839a00a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_babbage, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, name=rhceph, architecture=x86_64, release=553, GIT_BRANCH=main, ceph=True, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., RELEASE=main, io.buildah.version=1.33.12, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=)
Oct 13 13:36:39 standalone.localdomain ceph-mgr[29999]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 13 13:36:39 standalone.localdomain sudo[32056]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:36:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:39 standalone.localdomain sudo[32128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:39 standalone.localdomain sudo[32128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:39 standalone.localdomain sudo[32128]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:39 standalone.localdomain sudo[32143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f --timeout 895 ceph-volume --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -- inventory --format=json-pretty --filter-for-batch
Oct 13 13:36:39 standalone.localdomain sudo[32143]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:39 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "standalone.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 13 13:36:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:39 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Added label _admin to host standalone.localdomain
Oct 13 13:36:39 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Added label _admin to host standalone.localdomain
Oct 13 13:36:39 standalone.localdomain confident_babbage[32086]: Added label _admin to host standalone.localdomain
Oct 13 13:36:39 standalone.localdomain systemd[1]: libpod-1ef5cf326a58efb930da110249847ed8b4e9bad48405be38066931bc839a00a0.scope: Deactivated successfully.
Oct 13 13:36:39 standalone.localdomain podman[32069]: 2025-10-13 13:36:39.427334238 +0000 UTC m=+0.618836325 container died 1ef5cf326a58efb930da110249847ed8b4e9bad48405be38066931bc839a00a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_babbage, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-type=git, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, name=rhceph, RELEASE=main, architecture=x86_64, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.openshift.expose-services=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:36:39 standalone.localdomain systemd[1]: tmp-crun.qinZ6g.mount: Deactivated successfully.
Oct 13 13:36:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8493793de9daa2fc89c3bf55dc04a0122bc8990cb587ce1faea9d05413e705de-merged.mount: Deactivated successfully.
Oct 13 13:36:39 standalone.localdomain podman[32160]: 2025-10-13 13:36:39.498398326 +0000 UTC m=+0.062396751 container remove 1ef5cf326a58efb930da110249847ed8b4e9bad48405be38066931bc839a00a0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_babbage, release=553, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=)
Oct 13 13:36:39 standalone.localdomain systemd[1]: libpod-conmon-1ef5cf326a58efb930da110249847ed8b4e9bad48405be38066931bc839a00a0.scope: Deactivated successfully.
Oct 13 13:36:39 standalone.localdomain podman[32173]: 
Oct 13 13:36:39 standalone.localdomain podman[32173]: 2025-10-13 13:36:39.580440951 +0000 UTC m=+0.060414730 container create 330129ca27eecc406838ee68d1606e285af4f651a6eaeba1a6619f196472e4f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_bohr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, version=7, release=553, GIT_CLEAN=True, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7)
Oct 13 13:36:39 standalone.localdomain systemd[1]: Started libpod-conmon-330129ca27eecc406838ee68d1606e285af4f651a6eaeba1a6619f196472e4f3.scope.
Oct 13 13:36:39 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:39 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc38cf7d2e34040be76daad6a767f0fee371c7842502b1a08dfc257cea265291/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:39 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc38cf7d2e34040be76daad6a767f0fee371c7842502b1a08dfc257cea265291/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:39 standalone.localdomain podman[32173]: 2025-10-13 13:36:39.551736657 +0000 UTC m=+0.031710446 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:39 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc38cf7d2e34040be76daad6a767f0fee371c7842502b1a08dfc257cea265291/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:39 standalone.localdomain podman[32173]: 2025-10-13 13:36:39.659867675 +0000 UTC m=+0.139841464 container init 330129ca27eecc406838ee68d1606e285af4f651a6eaeba1a6619f196472e4f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_bohr, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, release=553, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, RELEASE=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.buildah.version=1.33.12, architecture=x86_64, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, ceph=True)
Oct 13 13:36:39 standalone.localdomain podman[32173]: 2025-10-13 13:36:39.668589404 +0000 UTC m=+0.148563193 container start 330129ca27eecc406838ee68d1606e285af4f651a6eaeba1a6619f196472e4f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_bohr, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, architecture=x86_64, name=rhceph, com.redhat.component=rhceph-container, release=553, distribution-scope=public, RELEASE=main, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True)
Oct 13 13:36:39 standalone.localdomain podman[32173]: 2025-10-13 13:36:39.668877723 +0000 UTC m=+0.148851542 container attach 330129ca27eecc406838ee68d1606e285af4f651a6eaeba1a6619f196472e4f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_bohr, io.openshift.tags=rhceph ceph, name=rhceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, distribution-scope=public, ceph=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:36:39 standalone.localdomain ceph-mon[29756]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:39 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:39 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:39 standalone.localdomain podman[32239]: 
Oct 13 13:36:39 standalone.localdomain podman[32239]: 2025-10-13 13:36:39.8641066 +0000 UTC m=+0.064826545 container create 0d73201cc506fad5b709c54bde7fbfcc5899f3cead07a6552177f3c8a963176f (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f, name=trusting_mahavira, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, distribution-scope=public, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main)
Oct 13 13:36:39 standalone.localdomain systemd[1]: Started libpod-conmon-0d73201cc506fad5b709c54bde7fbfcc5899f3cead07a6552177f3c8a963176f.scope.
Oct 13 13:36:39 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:39 standalone.localdomain podman[32239]: 2025-10-13 13:36:39.918402202 +0000 UTC m=+0.119122147 container init 0d73201cc506fad5b709c54bde7fbfcc5899f3cead07a6552177f3c8a963176f (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f, name=trusting_mahavira, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_BRANCH=main, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:39 standalone.localdomain podman[32239]: 2025-10-13 13:36:39.925847731 +0000 UTC m=+0.126567676 container start 0d73201cc506fad5b709c54bde7fbfcc5899f3cead07a6552177f3c8a963176f (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f, name=trusting_mahavira, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, version=7, release=553, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, ceph=True)
Oct 13 13:36:39 standalone.localdomain podman[32239]: 2025-10-13 13:36:39.926328086 +0000 UTC m=+0.127048071 container attach 0d73201cc506fad5b709c54bde7fbfcc5899f3cead07a6552177f3c8a963176f (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f, name=trusting_mahavira, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, release=553, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, RELEASE=main, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, ceph=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55)
Oct 13 13:36:39 standalone.localdomain trusting_mahavira[32263]: 167 167
Oct 13 13:36:39 standalone.localdomain systemd[1]: libpod-0d73201cc506fad5b709c54bde7fbfcc5899f3cead07a6552177f3c8a963176f.scope: Deactivated successfully.
Oct 13 13:36:39 standalone.localdomain podman[32239]: 2025-10-13 13:36:39.929655588 +0000 UTC m=+0.130375533 container died 0d73201cc506fad5b709c54bde7fbfcc5899f3cead07a6552177f3c8a963176f (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f, name=trusting_mahavira, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, name=rhceph, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, version=7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:39 standalone.localdomain podman[32239]: 2025-10-13 13:36:39.838586235 +0000 UTC m=+0.039306230 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f
Oct 13 13:36:40 standalone.localdomain podman[32268]: 2025-10-13 13:36:40.017617895 +0000 UTC m=+0.078655772 container remove 0d73201cc506fad5b709c54bde7fbfcc5899f3cead07a6552177f3c8a963176f (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f, name=trusting_mahavira, GIT_BRANCH=main, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, RELEASE=main, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, version=7)
Oct 13 13:36:40 standalone.localdomain systemd[1]: libpod-conmon-0d73201cc506fad5b709c54bde7fbfcc5899f3cead07a6552177f3c8a963176f.scope: Deactivated successfully.
Oct 13 13:36:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0)
Oct 13 13:36:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/1125134539' entity='client.admin' 
Oct 13 13:36:40 standalone.localdomain systemd[1]: libpod-330129ca27eecc406838ee68d1606e285af4f651a6eaeba1a6619f196472e4f3.scope: Deactivated successfully.
Oct 13 13:36:40 standalone.localdomain podman[32173]: 2025-10-13 13:36:40.059846095 +0000 UTC m=+0.539819904 container died 330129ca27eecc406838ee68d1606e285af4f651a6eaeba1a6619f196472e4f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_bohr, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, release=553, ceph=True, io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc.)
Oct 13 13:36:40 standalone.localdomain podman[32283]: 2025-10-13 13:36:40.124923717 +0000 UTC m=+0.058534121 container remove 330129ca27eecc406838ee68d1606e285af4f651a6eaeba1a6619f196472e4f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_bohr, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:36:40 standalone.localdomain systemd[1]: libpod-conmon-330129ca27eecc406838ee68d1606e285af4f651a6eaeba1a6619f196472e4f3.scope: Deactivated successfully.
Oct 13 13:36:40 standalone.localdomain podman[32297]: 
Oct 13 13:36:40 standalone.localdomain podman[32297]: 2025-10-13 13:36:40.19908761 +0000 UTC m=+0.058976116 container create 04be825c98a6a559f84127f8b71f480e3e4627cac2a0d187a4962c930c7b3766 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_spence, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_CLEAN=True, io.buildah.version=1.33.12, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, ceph=True, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git)
Oct 13 13:36:40 standalone.localdomain systemd[1]: Started libpod-conmon-04be825c98a6a559f84127f8b71f480e3e4627cac2a0d187a4962c930c7b3766.scope.
Oct 13 13:36:40 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:40 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d387a9c69cff14b0d5ba96b1d5019e42e78bc1bfb70133e13308afc08669fa96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:40 standalone.localdomain podman[32297]: 2025-10-13 13:36:40.171703437 +0000 UTC m=+0.031591973 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:40 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d387a9c69cff14b0d5ba96b1d5019e42e78bc1bfb70133e13308afc08669fa96/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:40 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d387a9c69cff14b0d5ba96b1d5019e42e78bc1bfb70133e13308afc08669fa96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:40 standalone.localdomain podman[32297]: 2025-10-13 13:36:40.295370053 +0000 UTC m=+0.155258559 container init 04be825c98a6a559f84127f8b71f480e3e4627cac2a0d187a4962c930c7b3766 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_spence, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container)
Oct 13 13:36:40 standalone.localdomain podman[32297]: 2025-10-13 13:36:40.301204052 +0000 UTC m=+0.161092588 container start 04be825c98a6a559f84127f8b71f480e3e4627cac2a0d187a4962c930c7b3766 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_spence, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, release=553, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, vcs-type=git, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, ceph=True, io.openshift.expose-services=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:36:40 standalone.localdomain podman[32297]: 2025-10-13 13:36:40.301511833 +0000 UTC m=+0.161400369 container attach 04be825c98a6a559f84127f8b71f480e3e4627cac2a0d187a4962c930c7b3766 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_spence, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, release=553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., version=7, architecture=x86_64, distribution-scope=public, vcs-type=git)
Oct 13 13:36:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-dc38cf7d2e34040be76daad6a767f0fee371c7842502b1a08dfc257cea265291-merged.mount: Deactivated successfully.
Oct 13 13:36:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Oct 13 13:36:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/1084305879' entity='client.admin' 
Oct 13 13:36:40 standalone.localdomain goofy_spence[32312]: set mgr/dashboard/cluster/status
Oct 13 13:36:40 standalone.localdomain systemd[1]: libpod-04be825c98a6a559f84127f8b71f480e3e4627cac2a0d187a4962c930c7b3766.scope: Deactivated successfully.
Oct 13 13:36:40 standalone.localdomain podman[32297]: 2025-10-13 13:36:40.782966669 +0000 UTC m=+0.642855175 container died 04be825c98a6a559f84127f8b71f480e3e4627cac2a0d187a4962c930c7b3766 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_spence, com.redhat.component=rhceph-container, release=553, vcs-type=git, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, version=7)
Oct 13 13:36:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d387a9c69cff14b0d5ba96b1d5019e42e78bc1bfb70133e13308afc08669fa96-merged.mount: Deactivated successfully.
Oct 13 13:36:40 standalone.localdomain podman[32338]: 2025-10-13 13:36:40.8713577 +0000 UTC m=+0.077652971 container remove 04be825c98a6a559f84127f8b71f480e3e4627cac2a0d187a4962c930c7b3766 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_spence, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, release=553, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=7, RELEASE=main)
Oct 13 13:36:40 standalone.localdomain systemd[1]: libpod-conmon-04be825c98a6a559f84127f8b71f480e3e4627cac2a0d187a4962c930c7b3766.scope: Deactivated successfully.
Oct 13 13:36:41 standalone.localdomain ceph-mgr[29999]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 13 13:36:41 standalone.localdomain ceph-mon[29756]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "standalone.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:41 standalone.localdomain ceph-mon[29756]: Added label _admin to host standalone.localdomain
Oct 13 13:36:41 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1125134539' entity='client.admin' 
Oct 13 13:36:41 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1084305879' entity='client.admin' 
Oct 13 13:36:41 standalone.localdomain podman[32365]: 
Oct 13 13:36:41 standalone.localdomain podman[32365]: 2025-10-13 13:36:41.088526313 +0000 UTC m=+0.052871897 container create ada6db9468df3dd2d5e16032db4c6dff1325132799480258ad8fd221490f4dc6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f, name=angry_hertz, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, distribution-scope=public, vcs-type=git)
Oct 13 13:36:41 standalone.localdomain systemd[1]: Started libpod-conmon-ada6db9468df3dd2d5e16032db4c6dff1325132799480258ad8fd221490f4dc6.scope.
Oct 13 13:36:41 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:41 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6445fb5e8d8a703c3f7203b566d3bc69830d4f92e4a64cca74658565f059d96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:41 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6445fb5e8d8a703c3f7203b566d3bc69830d4f92e4a64cca74658565f059d96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:41 standalone.localdomain podman[32365]: 2025-10-13 13:36:41.069537099 +0000 UTC m=+0.033882723 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f
Oct 13 13:36:41 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6445fb5e8d8a703c3f7203b566d3bc69830d4f92e4a64cca74658565f059d96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:41 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6445fb5e8d8a703c3f7203b566d3bc69830d4f92e4a64cca74658565f059d96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:41 standalone.localdomain podman[32365]: 2025-10-13 13:36:41.188726497 +0000 UTC m=+0.153072101 container init ada6db9468df3dd2d5e16032db4c6dff1325132799480258ad8fd221490f4dc6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f, name=angry_hertz, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.component=rhceph-container, architecture=x86_64, ceph=True, release=553, name=rhceph, version=7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55)
Oct 13 13:36:41 standalone.localdomain podman[32365]: 2025-10-13 13:36:41.199184429 +0000 UTC m=+0.163530043 container start ada6db9468df3dd2d5e16032db4c6dff1325132799480258ad8fd221490f4dc6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f, name=angry_hertz, name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, ceph=True)
Oct 13 13:36:41 standalone.localdomain podman[32365]: 2025-10-13 13:36:41.199556851 +0000 UTC m=+0.163902535 container attach ada6db9468df3dd2d5e16032db4c6dff1325132799480258ad8fd221490f4dc6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f, name=angry_hertz, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, name=rhceph, io.buildah.version=1.33.12, version=7, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., RELEASE=main, distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_BRANCH=main)
Oct 13 13:36:41 standalone.localdomain python3[32390]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                                        _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:36:41 standalone.localdomain podman[32396]: 
Oct 13 13:36:41 standalone.localdomain podman[32396]: 2025-10-13 13:36:41.591052839 +0000 UTC m=+0.076987600 container create 6960bebf883ef4883bca5b120111eca9a08a3c158e1d6b8deeff64a4a14bd384 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hawking, CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.tags=rhceph ceph, architecture=x86_64, io.buildah.version=1.33.12, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, release=553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:36:41 standalone.localdomain systemd[1]: Started libpod-conmon-6960bebf883ef4883bca5b120111eca9a08a3c158e1d6b8deeff64a4a14bd384.scope.
Oct 13 13:36:41 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:41 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4818e0ee3d7fde17e1ba09a97b5776bad502ac7e512345dd218423a4e813751/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:41 standalone.localdomain podman[32396]: 2025-10-13 13:36:41.564174032 +0000 UTC m=+0.050108793 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:41 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4818e0ee3d7fde17e1ba09a97b5776bad502ac7e512345dd218423a4e813751/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:41 standalone.localdomain podman[32396]: 2025-10-13 13:36:41.687790056 +0000 UTC m=+0.173724817 container init 6960bebf883ef4883bca5b120111eca9a08a3c158e1d6b8deeff64a4a14bd384 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hawking, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, RELEASE=main, GIT_CLEAN=True, architecture=x86_64, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, release=553, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container)
Oct 13 13:36:41 standalone.localdomain systemd[1]: tmp-crun.oBL2u1.mount: Deactivated successfully.
Oct 13 13:36:41 standalone.localdomain podman[32396]: 2025-10-13 13:36:41.70058866 +0000 UTC m=+0.186523391 container start 6960bebf883ef4883bca5b120111eca9a08a3c158e1d6b8deeff64a4a14bd384 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hawking, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_CLEAN=True, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:36:41 standalone.localdomain podman[32396]: 2025-10-13 13:36:41.70122327 +0000 UTC m=+0.187158041 container attach 6960bebf883ef4883bca5b120111eca9a08a3c158e1d6b8deeff64a4a14bd384 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hawking, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, RELEASE=main, io.buildah.version=1.33.12, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, version=7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, release=553)
Oct 13 13:36:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]: [
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:     {
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:         "available": false,
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:         "ceph_device": false,
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:         "lsm_data": {},
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:         "lvs": [],
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:         "path": "/dev/sr0",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:         "rejected_reasons": [
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "Insufficient space (<5GB)",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "Has a FileSystem"
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:         ],
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:         "sys_api": {
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "actuators": null,
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "device_nodes": "sr0",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "human_readable_size": "482.00 KB",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "id_bus": "ata",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "model": "QEMU DVD-ROM",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "nr_requests": "2",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "partitions": {},
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "path": "/dev/sr0",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "removable": "1",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "rev": "2.5+",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "ro": "0",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "rotational": "1",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "sas_address": "",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "sas_device_handle": "",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "scheduler_mode": "mq-deadline",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "sectors": 0,
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "sectorsize": "2048",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "size": 493568.0,
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "support_discard": "0",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "type": "disk",
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:             "vendor": "QEMU"
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:         }
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]:     }
Oct 13 13:36:41 standalone.localdomain angry_hertz[32382]: ]
Oct 13 13:36:41 standalone.localdomain systemd[1]: libpod-ada6db9468df3dd2d5e16032db4c6dff1325132799480258ad8fd221490f4dc6.scope: Deactivated successfully.
Oct 13 13:36:41 standalone.localdomain podman[32365]: 2025-10-13 13:36:41.963813221 +0000 UTC m=+0.928158895 container died ada6db9468df3dd2d5e16032db4c6dff1325132799480258ad8fd221490f4dc6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f, name=angry_hertz, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_BRANCH=main, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, version=7, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:36:42 standalone.localdomain podman[33549]: 2025-10-13 13:36:42.023139107 +0000 UTC m=+0.051773334 container remove ada6db9468df3dd2d5e16032db4c6dff1325132799480258ad8fd221490f4dc6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:acbbd001ec0d511cce0af82966a041eabbb032c11399bf602a6ae2a93f8d951f, name=angry_hertz, RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_BRANCH=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, ceph=True, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:42 standalone.localdomain systemd[1]: libpod-conmon-ada6db9468df3dd2d5e16032db4c6dff1325132799480258ad8fd221490f4dc6.scope: Deactivated successfully.
Oct 13 13:36:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Oct 13 13:36:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/1499487963' entity='client.admin' 
Oct 13 13:36:42 standalone.localdomain systemd[1]: libpod-6960bebf883ef4883bca5b120111eca9a08a3c158e1d6b8deeff64a4a14bd384.scope: Deactivated successfully.
Oct 13 13:36:42 standalone.localdomain podman[32396]: 2025-10-13 13:36:42.066791831 +0000 UTC m=+0.552726602 container died 6960bebf883ef4883bca5b120111eca9a08a3c158e1d6b8deeff64a4a14bd384 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hawking, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, RELEASE=main, release=553, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, build-date=2025-09-24T08:57:55, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main)
Oct 13 13:36:42 standalone.localdomain sudo[32143]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:36:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:36:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:36:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:36:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:standalone", "name": "osd_memory_target"} v 0)
Oct 13 13:36:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config rm", "who": "osd/host:standalone", "name": "osd_memory_target"} : dispatch
Oct 13 13:36:42 standalone.localdomain sudo[33572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:42 standalone.localdomain sudo[33572]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:42 standalone.localdomain sudo[33572]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:42 standalone.localdomain podman[33564]: 2025-10-13 13:36:42.144091439 +0000 UTC m=+0.064797695 container remove 6960bebf883ef4883bca5b120111eca9a08a3c158e1d6b8deeff64a4a14bd384 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hawking, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_CLEAN=True, architecture=x86_64, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, distribution-scope=public, release=553, version=7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, RELEASE=main)
Oct 13 13:36:42 standalone.localdomain systemd[1]: libpod-conmon-6960bebf883ef4883bca5b120111eca9a08a3c158e1d6b8deeff64a4a14bd384.scope: Deactivated successfully.
Oct 13 13:36:42 standalone.localdomain sudo[33593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 _orch set-coredump-overrides --fsid 627e7f45-65aa-56de-94df-66eaee84a56e --coredump-max-size=32G
Oct 13 13:36:42 standalone.localdomain sudo[33593]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d4818e0ee3d7fde17e1ba09a97b5776bad502ac7e512345dd218423a4e813751-merged.mount: Deactivated successfully.
Oct 13 13:36:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b6445fb5e8d8a703c3f7203b566d3bc69830d4f92e4a64cca74658565f059d96-merged.mount: Deactivated successfully.
Oct 13 13:36:42 standalone.localdomain python3[33611]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global container_image registry.redhat.io/rhceph/rhceph-7-rhel9:latest
                                                        _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:36:42 standalone.localdomain systemd[1]: systemd-coredump.socket: Deactivated successfully.
Oct 13 13:36:42 standalone.localdomain systemd[1]: Closed Process Core Dump Socket.
Oct 13 13:36:42 standalone.localdomain systemd[1]: Stopping Process Core Dump Socket...
Oct 13 13:36:42 standalone.localdomain systemd[1]: Listening on Process Core Dump Socket.
Oct 13 13:36:42 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:36:42 standalone.localdomain podman[33624]: 
Oct 13 13:36:42 standalone.localdomain podman[33624]: 2025-10-13 13:36:42.514733497 +0000 UTC m=+0.059683249 container create 08af3aa00cb157bd88644e2c0ae7a22b23288cb4f22d8b823a771d48cd420e80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_bouman, release=553, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:42 standalone.localdomain systemd-rc-local-generator[33668]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:36:42 standalone.localdomain podman[33624]: 2025-10-13 13:36:42.485172357 +0000 UTC m=+0.030122119 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:42 standalone.localdomain systemd-sysv-generator[33672]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:36:42 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:36:42 standalone.localdomain systemd[1]: Started libpod-conmon-08af3aa00cb157bd88644e2c0ae7a22b23288cb4f22d8b823a771d48cd420e80.scope.
Oct 13 13:36:42 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:42 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:36:42 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b10b185d97a4f5ac430e0794b1c5c66538e3ba6b50d39dfa0e19bc9dab3257/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:42 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b10b185d97a4f5ac430e0794b1c5c66538e3ba6b50d39dfa0e19bc9dab3257/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:42 standalone.localdomain podman[33624]: 2025-10-13 13:36:42.774258273 +0000 UTC m=+0.319208015 container init 08af3aa00cb157bd88644e2c0ae7a22b23288cb4f22d8b823a771d48cd420e80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_bouman, CEPH_POINT_RELEASE=, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, release=553, RELEASE=main)
Oct 13 13:36:42 standalone.localdomain podman[33624]: 2025-10-13 13:36:42.79137503 +0000 UTC m=+0.336324792 container start 08af3aa00cb157bd88644e2c0ae7a22b23288cb4f22d8b823a771d48cd420e80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_bouman, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, distribution-scope=public, release=553, vendor=Red Hat, Inc., GIT_BRANCH=main, RELEASE=main, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph)
Oct 13 13:36:42 standalone.localdomain podman[33624]: 2025-10-13 13:36:42.801471801 +0000 UTC m=+0.346421583 container attach 08af3aa00cb157bd88644e2c0ae7a22b23288cb4f22d8b823a771d48cd420e80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_bouman, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, version=7, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, release=553, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12)
Oct 13 13:36:42 standalone.localdomain systemd-sysv-generator[33712]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:36:42 standalone.localdomain systemd-rc-local-generator[33709]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:36:42 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:36:42 standalone.localdomain sudo[33593]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:36:43 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.serve] Updating standalone.localdomain:/etc/ceph/ceph.conf
Oct 13 13:36:43 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Updating standalone.localdomain:/etc/ceph/ceph.conf
Oct 13 13:36:43 standalone.localdomain ceph-mgr[29999]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct 13 13:36:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1499487963' entity='client.admin' 
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config rm", "who": "osd/host:standalone", "name": "osd_memory_target"} : dispatch
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:36:43 standalone.localdomain sudo[33738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 13 13:36:43 standalone.localdomain sudo[33738]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:43 standalone.localdomain sudo[33738]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:43 standalone.localdomain sudo[33753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/etc/ceph
Oct 13 13:36:43 standalone.localdomain sudo[33753]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:43 standalone.localdomain sudo[33753]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Oct 13 13:36:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/2302986053' entity='client.admin' 
Oct 13 13:36:43 standalone.localdomain systemd[1]: libpod-08af3aa00cb157bd88644e2c0ae7a22b23288cb4f22d8b823a771d48cd420e80.scope: Deactivated successfully.
Oct 13 13:36:43 standalone.localdomain podman[33624]: 2025-10-13 13:36:43.16469559 +0000 UTC m=+0.709645362 container died 08af3aa00cb157bd88644e2c0ae7a22b23288cb4f22d8b823a771d48cd420e80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_bouman, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, release=553, vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public)
Oct 13 13:36:43 standalone.localdomain sudo[33768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/etc/ceph/ceph.conf.new
Oct 13 13:36:43 standalone.localdomain sudo[33768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:43 standalone.localdomain sudo[33768]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:43 standalone.localdomain podman[33783]: 2025-10-13 13:36:43.262845811 +0000 UTC m=+0.083380898 container remove 08af3aa00cb157bd88644e2c0ae7a22b23288cb4f22d8b823a771d48cd420e80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_bouman, io.openshift.expose-services=, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., release=553, vcs-type=git, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:36:43 standalone.localdomain sudo[33794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:43 standalone.localdomain systemd[1]: libpod-conmon-08af3aa00cb157bd88644e2c0ae7a22b23288cb4f22d8b823a771d48cd420e80.scope: Deactivated successfully.
Oct 13 13:36:43 standalone.localdomain sudo[33794]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:43 standalone.localdomain sudo[33794]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:43 standalone.localdomain sudo[33814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/etc/ceph/ceph.conf.new
Oct 13 13:36:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a3b10b185d97a4f5ac430e0794b1c5c66538e3ba6b50d39dfa0e19bc9dab3257-merged.mount: Deactivated successfully.
Oct 13 13:36:43 standalone.localdomain sudo[33814]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:43 standalone.localdomain sudo[33814]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:43 standalone.localdomain sudo[33846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/etc/ceph/ceph.conf.new
Oct 13 13:36:43 standalone.localdomain sudo[33846]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:43 standalone.localdomain sudo[33846]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:43 standalone.localdomain sudo[33863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/etc/ceph/ceph.conf.new
Oct 13 13:36:43 standalone.localdomain sudo[33863]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:43 standalone.localdomain sudo[33863]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:43 standalone.localdomain sudo[33881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 13 13:36:43 standalone.localdomain sudo[33881]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:43 standalone.localdomain sudo[33881]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:43 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.serve] Updating standalone.localdomain:/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.conf
Oct 13 13:36:43 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Updating standalone.localdomain:/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.conf
Oct 13 13:36:43 standalone.localdomain sudo[33897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config
Oct 13 13:36:43 standalone.localdomain sudo[33897]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:43 standalone.localdomain sudo[33897]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:43 standalone.localdomain sudo[33912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config
Oct 13 13:36:43 standalone.localdomain sudo[33912]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:43 standalone.localdomain sudo[33912]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:43 standalone.localdomain sudo[33928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.conf.new
Oct 13 13:36:43 standalone.localdomain sudo[33928]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:43 standalone.localdomain sudo[33928]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:43 standalone.localdomain sudo[33943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:43 standalone.localdomain sudo[33943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:43 standalone.localdomain sudo[33943]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:43 standalone.localdomain sudo[33958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.conf.new
Oct 13 13:36:43 standalone.localdomain sudo[33958]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:43 standalone.localdomain sudo[33958]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:43 standalone.localdomain sudo[33986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.conf.new
Oct 13 13:36:43 standalone.localdomain sudo[33986]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:43 standalone.localdomain sudo[33986]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain sudo[34001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.conf.new
Oct 13 13:36:44 standalone.localdomain sudo[34001]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:44 standalone.localdomain sudo[34001]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain ceph-mon[29756]: Updating standalone.localdomain:/etc/ceph/ceph.conf
Oct 13 13:36:44 standalone.localdomain ceph-mon[29756]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:44 standalone.localdomain ceph-mon[29756]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 13 13:36:44 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2302986053' entity='client.admin' 
Oct 13 13:36:44 standalone.localdomain sudo[34016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.conf.new /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.conf
Oct 13 13:36:44 standalone.localdomain sudo[34016]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:44 standalone.localdomain sudo[34016]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.serve] Updating standalone.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 13 13:36:44 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Updating standalone.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 13 13:36:44 standalone.localdomain sudo[34032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 13 13:36:44 standalone.localdomain sudo[34032]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:44 standalone.localdomain sudo[34032]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain sudo[34047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/etc/ceph
Oct 13 13:36:44 standalone.localdomain sudo[34047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:44 standalone.localdomain sudo[34047]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain sudo[34062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/etc/ceph/ceph.client.admin.keyring.new
Oct 13 13:36:44 standalone.localdomain sudo[34062]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:44 standalone.localdomain sudo[34062]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain sudo[34079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:44 standalone.localdomain sudo[34079]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:44 standalone.localdomain sudo[34079]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain sudo[34094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/etc/ceph/ceph.client.admin.keyring.new
Oct 13 13:36:44 standalone.localdomain sudo[34094]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:44 standalone.localdomain sudo[34094]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain python3[34069]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global public_network 172.18.0.0/24
                                                        _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:36:44 standalone.localdomain podman[34122]: 
Oct 13 13:36:44 standalone.localdomain sudo[34128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/etc/ceph/ceph.client.admin.keyring.new
Oct 13 13:36:44 standalone.localdomain sudo[34128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:44 standalone.localdomain podman[34122]: 2025-10-13 13:36:44.4738614 +0000 UTC m=+0.061965008 container create 3a0345a3faad7d62cff182faabf92bd471ab9b227384b7230f115cb3686ac74f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_chaplygin, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=553, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, version=7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, ceph=True, architecture=x86_64, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:36:44 standalone.localdomain sudo[34128]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain systemd[1]: Started libpod-conmon-3a0345a3faad7d62cff182faabf92bd471ab9b227384b7230f115cb3686ac74f.scope.
Oct 13 13:36:44 standalone.localdomain podman[34122]: 2025-10-13 13:36:44.446939991 +0000 UTC m=+0.035043569 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:44 standalone.localdomain sudo[34150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/etc/ceph/ceph.client.admin.keyring.new
Oct 13 13:36:44 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:44 standalone.localdomain sudo[34150]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:44 standalone.localdomain sudo[34150]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceee1e4cab92aad2fa366fd557dd4a95535983912cf397f8a30610d2a2448eb8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:44 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceee1e4cab92aad2fa366fd557dd4a95535983912cf397f8a30610d2a2448eb8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:44 standalone.localdomain podman[34122]: 2025-10-13 13:36:44.58595928 +0000 UTC m=+0.174062838 container init 3a0345a3faad7d62cff182faabf92bd471ab9b227384b7230f115cb3686ac74f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_chaplygin, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, com.redhat.component=rhceph-container, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, release=553)
Oct 13 13:36:44 standalone.localdomain podman[34122]: 2025-10-13 13:36:44.597186306 +0000 UTC m=+0.185289844 container start 3a0345a3faad7d62cff182faabf92bd471ab9b227384b7230f115cb3686ac74f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_chaplygin, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, release=553, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=)
Oct 13 13:36:44 standalone.localdomain podman[34122]: 2025-10-13 13:36:44.599466156 +0000 UTC m=+0.187569694 container attach 3a0345a3faad7d62cff182faabf92bd471ab9b227384b7230f115cb3686ac74f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_chaplygin, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_CLEAN=True, GIT_BRANCH=main, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-type=git, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:36:44 standalone.localdomain sudo[34170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 13 13:36:44 standalone.localdomain sudo[34170]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:44 standalone.localdomain sudo[34170]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.serve] Updating standalone.localdomain:/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.client.admin.keyring
Oct 13 13:36:44 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Updating standalone.localdomain:/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.client.admin.keyring
Oct 13 13:36:44 standalone.localdomain sudo[34186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config
Oct 13 13:36:44 standalone.localdomain sudo[34186]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:44 standalone.localdomain sudo[34186]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain sudo[34211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config
Oct 13 13:36:44 standalone.localdomain sudo[34211]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:44 standalone.localdomain sudo[34211]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain sudo[34235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.client.admin.keyring.new
Oct 13 13:36:44 standalone.localdomain sudo[34235]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:44 standalone.localdomain sudo[34235]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain sudo[34250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:44 standalone.localdomain sudo[34250]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:44 standalone.localdomain sudo[34250]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Oct 13 13:36:44 standalone.localdomain systemd[1]: libpod-3a0345a3faad7d62cff182faabf92bd471ab9b227384b7230f115cb3686ac74f.scope: Deactivated successfully.
Oct 13 13:36:44 standalone.localdomain podman[34122]: 2025-10-13 13:36:44.916514094 +0000 UTC m=+0.504617682 container died 3a0345a3faad7d62cff182faabf92bd471ab9b227384b7230f115cb3686ac74f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_chaplygin, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, architecture=x86_64, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, RELEASE=main, vendor=Red Hat, Inc., release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:36:44 standalone.localdomain sudo[34265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.client.admin.keyring.new
Oct 13 13:36:44 standalone.localdomain sudo[34265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:44 standalone.localdomain sudo[34265]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:44 standalone.localdomain podman[34280]: 2025-10-13 13:36:44.98171097 +0000 UTC m=+0.055902612 container remove 3a0345a3faad7d62cff182faabf92bd471ab9b227384b7230f115cb3686ac74f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_chaplygin, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, vcs-type=git, GIT_CLEAN=True)
Oct 13 13:36:44 standalone.localdomain systemd[1]: libpod-conmon-3a0345a3faad7d62cff182faabf92bd471ab9b227384b7230f115cb3686ac74f.scope: Deactivated successfully.
Oct 13 13:36:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:45 standalone.localdomain ceph-mon[29756]: Updating standalone.localdomain:/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.conf
Oct 13 13:36:45 standalone.localdomain ceph-mon[29756]: Updating standalone.localdomain:/etc/ceph/ceph.client.admin.keyring
Oct 13 13:36:45 standalone.localdomain sudo[34309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.client.admin.keyring.new
Oct 13 13:36:45 standalone.localdomain sudo[34309]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:45 standalone.localdomain sudo[34309]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:45 standalone.localdomain sudo[34326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.client.admin.keyring.new
Oct 13 13:36:45 standalone.localdomain sudo[34326]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:45 standalone.localdomain sudo[34326]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:45 standalone.localdomain sudo[34341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-627e7f45-65aa-56de-94df-66eaee84a56e/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.client.admin.keyring.new /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.client.admin.keyring
Oct 13 13:36:45 standalone.localdomain sudo[34341]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:45 standalone.localdomain sudo[34341]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:36:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:36:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:36:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev d0c1f475-1f4b-4b8f-b0d8-4362ce7b8823 (Updating crash deployment (+1 -> 1))
Oct 13 13:36:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.standalone.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct 13 13:36:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get-or-create", "entity": "client.crash.standalone.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 13 13:36:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.standalone.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 13 13:36:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:36:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:45 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.serve] Deploying daemon crash.standalone on standalone.localdomain
Oct 13 13:36:45 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Deploying daemon crash.standalone on standalone.localdomain
Oct 13 13:36:45 standalone.localdomain python3[34324]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global cluster_network 172.20.0.0/24
                                                        _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:36:45 standalone.localdomain sudo[34356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:45 standalone.localdomain sudo[34356]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:45 standalone.localdomain sudo[34356]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:45 standalone.localdomain podman[34369]: 
Oct 13 13:36:45 standalone.localdomain podman[34369]: 2025-10-13 13:36:45.283088046 +0000 UTC m=+0.038117755 container create 6d1736e16447beb76590edb2aab9cc29a058b8c1abc1e34e2dcf6fc003c15f69 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_cerf, io.openshift.tags=rhceph ceph, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_CLEAN=True, build-date=2025-09-24T08:57:55)
Oct 13 13:36:45 standalone.localdomain systemd[1]: Started libpod-conmon-6d1736e16447beb76590edb2aab9cc29a058b8c1abc1e34e2dcf6fc003c15f69.scope.
Oct 13 13:36:45 standalone.localdomain sudo[34380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:45 standalone.localdomain sudo[34380]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:45 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:45 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1afcddb7eb5ded43fdecce2a65aaf5e678eb5ca900fceaee8651f39945c7661f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:45 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1afcddb7eb5ded43fdecce2a65aaf5e678eb5ca900fceaee8651f39945c7661f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:45 standalone.localdomain podman[34369]: 2025-10-13 13:36:45.343217796 +0000 UTC m=+0.098247505 container init 6d1736e16447beb76590edb2aab9cc29a058b8c1abc1e34e2dcf6fc003c15f69 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_cerf, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, release=553, name=rhceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=)
Oct 13 13:36:45 standalone.localdomain podman[34369]: 2025-10-13 13:36:45.268166476 +0000 UTC m=+0.023196195 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:45 standalone.localdomain podman[34369]: 2025-10-13 13:36:45.36772786 +0000 UTC m=+0.122757569 container start 6d1736e16447beb76590edb2aab9cc29a058b8c1abc1e34e2dcf6fc003c15f69 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_cerf, vcs-type=git, GIT_BRANCH=main, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.buildah.version=1.33.12, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, version=7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, distribution-scope=public)
Oct 13 13:36:45 standalone.localdomain podman[34369]: 2025-10-13 13:36:45.367977718 +0000 UTC m=+0.123007487 container attach 6d1736e16447beb76590edb2aab9cc29a058b8c1abc1e34e2dcf6fc003c15f69 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_cerf, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, vcs-type=git, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main, release=553, io.openshift.tags=rhceph ceph, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:36:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ceee1e4cab92aad2fa366fd557dd4a95535983912cf397f8a30610d2a2448eb8-merged.mount: Deactivated successfully.
Oct 13 13:36:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Oct 13 13:36:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/2956692428' entity='client.admin' 
Oct 13 13:36:45 standalone.localdomain podman[34369]: 2025-10-13 13:36:45.68939405 +0000 UTC m=+0.444423769 container died 6d1736e16447beb76590edb2aab9cc29a058b8c1abc1e34e2dcf6fc003c15f69 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_cerf, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, name=rhceph, version=7, ceph=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:45 standalone.localdomain systemd[1]: libpod-6d1736e16447beb76590edb2aab9cc29a058b8c1abc1e34e2dcf6fc003c15f69.scope: Deactivated successfully.
Oct 13 13:36:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1afcddb7eb5ded43fdecce2a65aaf5e678eb5ca900fceaee8651f39945c7661f-merged.mount: Deactivated successfully.
Oct 13 13:36:45 standalone.localdomain podman[34442]: 2025-10-13 13:36:45.786682265 +0000 UTC m=+0.084839413 container remove 6d1736e16447beb76590edb2aab9cc29a058b8c1abc1e34e2dcf6fc003c15f69 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_cerf, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.buildah.version=1.33.12, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-type=git, com.redhat.component=rhceph-container, name=rhceph, GIT_BRANCH=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:36:45 standalone.localdomain systemd[1]: libpod-conmon-6d1736e16447beb76590edb2aab9cc29a058b8c1abc1e34e2dcf6fc003c15f69.scope: Deactivated successfully.
Oct 13 13:36:45 standalone.localdomain podman[34480]: 
Oct 13 13:36:45 standalone.localdomain podman[34480]: 2025-10-13 13:36:45.923053851 +0000 UTC m=+0.071295145 container create 2bac6ff19bb470ea19046f0064d2f80bc6d13bcdbed150cba144563978571b03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_bose, io.buildah.version=1.33.12, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.tags=rhceph ceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., release=553, version=7, CEPH_POINT_RELEASE=)
Oct 13 13:36:45 standalone.localdomain systemd[1]: Started libpod-conmon-2bac6ff19bb470ea19046f0064d2f80bc6d13bcdbed150cba144563978571b03.scope.
Oct 13 13:36:45 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:45 standalone.localdomain podman[34480]: 2025-10-13 13:36:45.980078397 +0000 UTC m=+0.128319691 container init 2bac6ff19bb470ea19046f0064d2f80bc6d13bcdbed150cba144563978571b03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_bose, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, architecture=x86_64, GIT_BRANCH=main, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, vcs-type=git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:36:45 standalone.localdomain podman[34480]: 2025-10-13 13:36:45.988348961 +0000 UTC m=+0.136590265 container start 2bac6ff19bb470ea19046f0064d2f80bc6d13bcdbed150cba144563978571b03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_bose, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:36:45 standalone.localdomain podman[34480]: 2025-10-13 13:36:45.988616899 +0000 UTC m=+0.136858193 container attach 2bac6ff19bb470ea19046f0064d2f80bc6d13bcdbed150cba144563978571b03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_bose, version=7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, release=553, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, vcs-type=git, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, architecture=x86_64, ceph=True, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:36:45 standalone.localdomain happy_bose[34497]: 167 167
Oct 13 13:36:45 standalone.localdomain systemd[1]: libpod-2bac6ff19bb470ea19046f0064d2f80bc6d13bcdbed150cba144563978571b03.scope: Deactivated successfully.
Oct 13 13:36:45 standalone.localdomain podman[34480]: 2025-10-13 13:36:45.991606171 +0000 UTC m=+0.139847535 container died 2bac6ff19bb470ea19046f0064d2f80bc6d13bcdbed150cba144563978571b03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_bose, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, distribution-scope=public)
Oct 13 13:36:45 standalone.localdomain podman[34480]: 2025-10-13 13:36:45.892979475 +0000 UTC m=+0.041220799 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:46 standalone.localdomain python3[34488]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global ms_bind_ipv4 True
                                                        _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:36:46 standalone.localdomain ceph-mon[29756]: Updating standalone.localdomain:/var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/config/ceph.client.admin.keyring
Oct 13 13:36:46 standalone.localdomain ceph-mon[29756]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get-or-create", "entity": "client.crash.standalone.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Oct 13 13:36:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.standalone.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 13 13:36:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:46 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2956692428' entity='client.admin' 
Oct 13 13:36:46 standalone.localdomain podman[34502]: 2025-10-13 13:36:46.084892982 +0000 UTC m=+0.082817570 container remove 2bac6ff19bb470ea19046f0064d2f80bc6d13bcdbed150cba144563978571b03 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_bose, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_BRANCH=main, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553)
Oct 13 13:36:46 standalone.localdomain systemd[1]: libpod-conmon-2bac6ff19bb470ea19046f0064d2f80bc6d13bcdbed150cba144563978571b03.scope: Deactivated successfully.
Oct 13 13:36:46 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:36:46 standalone.localdomain podman[34510]: 
Oct 13 13:36:46 standalone.localdomain podman[34510]: 2025-10-13 13:36:46.159258801 +0000 UTC m=+0.127604589 container create 4014f2a7e4caf8e6795987e1ba6c50eebdd2e04f376f52bc4b3100f29ea2604a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mestorf, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, RELEASE=main, io.buildah.version=1.33.12, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=)
Oct 13 13:36:46 standalone.localdomain podman[34510]: 2025-10-13 13:36:46.126979647 +0000 UTC m=+0.095325475 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:46 standalone.localdomain systemd-rc-local-generator[34551]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:36:46 standalone.localdomain systemd-sysv-generator[34559]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:36:46 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:36:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b509f34e94dd94467494c3a9d7bbb7c1f5372a8e3d96254bfb5064553a9fc0f8-merged.mount: Deactivated successfully.
Oct 13 13:36:46 standalone.localdomain systemd[1]: Started libpod-conmon-4014f2a7e4caf8e6795987e1ba6c50eebdd2e04f376f52bc4b3100f29ea2604a.scope.
Oct 13 13:36:46 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:46 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f085d799de2618f2e0673fa3e7b3d2bd9f5b594bff1bfc625eb07f4ea0926ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:46 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f085d799de2618f2e0673fa3e7b3d2bd9f5b594bff1bfc625eb07f4ea0926ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:46 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:36:46 standalone.localdomain podman[34510]: 2025-10-13 13:36:46.477766613 +0000 UTC m=+0.446112401 container init 4014f2a7e4caf8e6795987e1ba6c50eebdd2e04f376f52bc4b3100f29ea2604a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mestorf, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:36:46 standalone.localdomain podman[34510]: 2025-10-13 13:36:46.486816371 +0000 UTC m=+0.455162149 container start 4014f2a7e4caf8e6795987e1ba6c50eebdd2e04f376f52bc4b3100f29ea2604a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mestorf, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, RELEASE=main, GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, architecture=x86_64, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:36:46 standalone.localdomain podman[34510]: 2025-10-13 13:36:46.487012468 +0000 UTC m=+0.455358246 container attach 4014f2a7e4caf8e6795987e1ba6c50eebdd2e04f376f52bc4b3100f29ea2604a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mestorf, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.buildah.version=1.33.12, vcs-type=git, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, RELEASE=main)
Oct 13 13:36:46 standalone.localdomain systemd-rc-local-generator[34600]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:36:46 standalone.localdomain systemd-sysv-generator[34605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:36:46 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:36:46 standalone.localdomain systemd[1]: Starting Ceph crash.standalone for 627e7f45-65aa-56de-94df-66eaee84a56e...
Oct 13 13:36:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:36:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=ms_bind_ipv4}] v 0)
Oct 13 13:36:46 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/2203486585' entity='client.admin' 
Oct 13 13:36:46 standalone.localdomain systemd[1]: libpod-4014f2a7e4caf8e6795987e1ba6c50eebdd2e04f376f52bc4b3100f29ea2604a.scope: Deactivated successfully.
Oct 13 13:36:46 standalone.localdomain podman[34510]: 2025-10-13 13:36:46.903211406 +0000 UTC m=+0.871557164 container died 4014f2a7e4caf8e6795987e1ba6c50eebdd2e04f376f52bc4b3100f29ea2604a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mestorf, RELEASE=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_CLEAN=True, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vcs-type=git, release=553, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, version=7, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:46 standalone.localdomain systemd[1]: tmp-crun.tOUzzu.mount: Deactivated successfully.
Oct 13 13:36:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1f085d799de2618f2e0673fa3e7b3d2bd9f5b594bff1bfc625eb07f4ea0926ba-merged.mount: Deactivated successfully.
Oct 13 13:36:46 standalone.localdomain podman[34679]: 2025-10-13 13:36:46.973627483 +0000 UTC m=+0.060903245 container remove 4014f2a7e4caf8e6795987e1ba6c50eebdd2e04f376f52bc4b3100f29ea2604a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mestorf, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, ceph=True, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:36:46 standalone.localdomain systemd[1]: libpod-conmon-4014f2a7e4caf8e6795987e1ba6c50eebdd2e04f376f52bc4b3100f29ea2604a.scope: Deactivated successfully.
Oct 13 13:36:47 standalone.localdomain podman[34697]: 
Oct 13 13:36:47 standalone.localdomain podman[34697]: 2025-10-13 13:36:47.018903807 +0000 UTC m=+0.055152738 container create 8bf78d4760c437ecfd45b901001e842b8dc4ddf077c2c5944852a4937c4b1180 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-crash-standalone, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, distribution-scope=public, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64)
Oct 13 13:36:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:47 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/856004ac29115bd4dcab3f60ba10f4ecd3297b5c67e06ae2e947cdd52c54d832/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: Deploying daemon crash.standalone on standalone.localdomain
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2203486585' entity='client.admin' 
Oct 13 13:36:47 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/856004ac29115bd4dcab3f60ba10f4ecd3297b5c67e06ae2e947cdd52c54d832/merged/etc/ceph/ceph.client.crash.standalone.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:47 standalone.localdomain podman[34697]: 2025-10-13 13:36:46.992252487 +0000 UTC m=+0.028501438 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:47 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/856004ac29115bd4dcab3f60ba10f4ecd3297b5c67e06ae2e947cdd52c54d832/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:47 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/856004ac29115bd4dcab3f60ba10f4ecd3297b5c67e06ae2e947cdd52c54d832/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:47 standalone.localdomain podman[34697]: 2025-10-13 13:36:47.121064731 +0000 UTC m=+0.157313652 container init 8bf78d4760c437ecfd45b901001e842b8dc4ddf077c2c5944852a4937c4b1180 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-crash-standalone, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vendor=Red Hat, Inc., release=553, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:36:47 standalone.localdomain podman[34697]: 2025-10-13 13:36:47.126814768 +0000 UTC m=+0.163063699 container start 8bf78d4760c437ecfd45b901001e842b8dc4ddf077c2c5944852a4937c4b1180 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-crash-standalone, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, version=7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, release=553)
Oct 13 13:36:47 standalone.localdomain bash[34697]: 8bf78d4760c437ecfd45b901001e842b8dc4ddf077c2c5944852a4937c4b1180
Oct 13 13:36:47 standalone.localdomain systemd[1]: Started Ceph crash.standalone for 627e7f45-65aa-56de-94df-66eaee84a56e.
Oct 13 13:36:47 standalone.localdomain sudo[34380]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:47 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-crash-standalone[34709]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:36:47 standalone.localdomain python3[34712]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global ms_bind_ipv6 False
                                                        _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:47 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev d0c1f475-1f4b-4b8f-b0d8-4362ce7b8823 (Updating crash deployment (+1 -> 1))
Oct 13 13:36:47 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event d0c1f475-1f4b-4b8f-b0d8-4362ce7b8823 (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:47 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 15fd75fc-e7cf-4709-b4d9-e9060af5f4c5 (Updating mgr deployment (+1 -> 2))
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.standalone.aquevm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get-or-create", "entity": "mgr.standalone.aquevm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.standalone.aquevm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mgr services"} : dispatch
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:47 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.serve] Deploying daemon mgr.standalone.aquevm on standalone.localdomain
Oct 13 13:36:47 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Deploying daemon mgr.standalone.aquevm on standalone.localdomain
Oct 13 13:36:47 standalone.localdomain podman[34718]: 
Oct 13 13:36:47 standalone.localdomain podman[34718]: 2025-10-13 13:36:47.319830079 +0000 UTC m=+0.074271358 container create c7a170b5864bd50cb16dd3cbc6a6d667e9f822a4da21b7cbe01fe28479b9e0cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_tharp, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_BRANCH=main, ceph=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, release=553, distribution-scope=public, GIT_CLEAN=True, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, name=rhceph)
Oct 13 13:36:47 standalone.localdomain sudo[34728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:47 standalone.localdomain sudo[34728]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:47 standalone.localdomain sudo[34728]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:47 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-crash-standalone[34709]: 2025-10-13T13:36:47.342+0000 7ff66fa88640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 13 13:36:47 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-crash-standalone[34709]: 2025-10-13T13:36:47.342+0000 7ff66fa88640 -1 AuthRegistry(0x7ff668067910) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 13 13:36:47 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-crash-standalone[34709]: 2025-10-13T13:36:47.344+0000 7ff66fa88640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 13 13:36:47 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-crash-standalone[34709]: 2025-10-13T13:36:47.344+0000 7ff66fa88640 -1 AuthRegistry(0x7ff66fa87000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 13 13:36:47 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-crash-standalone[34709]: 2025-10-13T13:36:47.346+0000 7ff66d7fd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 13 13:36:47 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-crash-standalone[34709]: 2025-10-13T13:36:47.346+0000 7ff66fa88640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 13 13:36:47 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-crash-standalone[34709]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 13 13:36:47 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-crash-standalone[34709]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 13 13:36:47 standalone.localdomain systemd[1]: Started libpod-conmon-c7a170b5864bd50cb16dd3cbc6a6d667e9f822a4da21b7cbe01fe28479b9e0cf.scope.
Oct 13 13:36:47 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:47 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbdf0ce9b0e1825e2e08cefded248b7a3e96ee07cc7b3c9832ec4f0eb48fd469/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:47 standalone.localdomain podman[34718]: 2025-10-13 13:36:47.289971909 +0000 UTC m=+0.044413188 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:47 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbdf0ce9b0e1825e2e08cefded248b7a3e96ee07cc7b3c9832ec4f0eb48fd469/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:47 standalone.localdomain podman[34718]: 2025-10-13 13:36:47.407764724 +0000 UTC m=+0.162205973 container init c7a170b5864bd50cb16dd3cbc6a6d667e9f822a4da21b7cbe01fe28479b9e0cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_tharp, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, ceph=True, CEPH_POINT_RELEASE=, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, com.redhat.component=rhceph-container, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, release=553)
Oct 13 13:36:47 standalone.localdomain sudo[34757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:47 standalone.localdomain sudo[34757]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:47 standalone.localdomain podman[34718]: 2025-10-13 13:36:47.418843595 +0000 UTC m=+0.173284874 container start c7a170b5864bd50cb16dd3cbc6a6d667e9f822a4da21b7cbe01fe28479b9e0cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_tharp, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, GIT_CLEAN=True, RELEASE=main, io.openshift.expose-services=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, release=553, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:36:47 standalone.localdomain podman[34718]: 2025-10-13 13:36:47.419094023 +0000 UTC m=+0.173535312 container attach c7a170b5864bd50cb16dd3cbc6a6d667e9f822a4da21b7cbe01fe28479b9e0cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_tharp, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, build-date=2025-09-24T08:57:55, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, ceph=True, com.redhat.component=rhceph-container, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=ms_bind_ipv6}] v 0)
Oct 13 13:36:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/3722799613' entity='client.admin' 
Oct 13 13:36:47 standalone.localdomain systemd[1]: libpod-c7a170b5864bd50cb16dd3cbc6a6d667e9f822a4da21b7cbe01fe28479b9e0cf.scope: Deactivated successfully.
Oct 13 13:36:47 standalone.localdomain podman[34718]: 2025-10-13 13:36:47.835104767 +0000 UTC m=+0.589546026 container died c7a170b5864bd50cb16dd3cbc6a6d667e9f822a4da21b7cbe01fe28479b9e0cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_tharp, io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, release=553, io.openshift.expose-services=, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-09-24T08:57:55, distribution-scope=public, name=rhceph, GIT_CLEAN=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:36:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-cbdf0ce9b0e1825e2e08cefded248b7a3e96ee07cc7b3c9832ec4f0eb48fd469-merged.mount: Deactivated successfully.
Oct 13 13:36:47 standalone.localdomain podman[34822]: 2025-10-13 13:36:47.937452656 +0000 UTC m=+0.094912562 container remove c7a170b5864bd50cb16dd3cbc6a6d667e9f822a4da21b7cbe01fe28479b9e0cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_tharp, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, version=7, architecture=x86_64, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, io.buildah.version=1.33.12)
Oct 13 13:36:47 standalone.localdomain systemd[1]: libpod-conmon-c7a170b5864bd50cb16dd3cbc6a6d667e9f822a4da21b7cbe01fe28479b9e0cf.scope: Deactivated successfully.
Oct 13 13:36:48 standalone.localdomain podman[34854]: 
Oct 13 13:36:48 standalone.localdomain podman[34854]: 2025-10-13 13:36:48.063119694 +0000 UTC m=+0.065481736 container create 5ad621190a13ce08d3195c6ed08beaa8deb19dd4aea11080c3eb122117fc3201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_dijkstra, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, release=553, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, distribution-scope=public, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:36:48 standalone.localdomain ceph-mon[29756]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get-or-create", "entity": "mgr.standalone.aquevm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 13 13:36:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.standalone.aquevm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 13 13:36:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mgr services"} : dispatch
Oct 13 13:36:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:48 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3722799613' entity='client.admin' 
Oct 13 13:36:48 standalone.localdomain systemd[1]: Started libpod-conmon-5ad621190a13ce08d3195c6ed08beaa8deb19dd4aea11080c3eb122117fc3201.scope.
Oct 13 13:36:48 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:48 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 1 completed events
Oct 13 13:36:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:36:48 standalone.localdomain podman[34854]: 2025-10-13 13:36:48.120442278 +0000 UTC m=+0.122804280 container init 5ad621190a13ce08d3195c6ed08beaa8deb19dd4aea11080c3eb122117fc3201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_dijkstra, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.expose-services=, com.redhat.component=rhceph-container)
Oct 13 13:36:48 standalone.localdomain stoic_dijkstra[34872]: 167 167
Oct 13 13:36:48 standalone.localdomain systemd[1]: libpod-5ad621190a13ce08d3195c6ed08beaa8deb19dd4aea11080c3eb122117fc3201.scope: Deactivated successfully.
Oct 13 13:36:48 standalone.localdomain podman[34854]: 2025-10-13 13:36:48.139796224 +0000 UTC m=+0.142158246 container start 5ad621190a13ce08d3195c6ed08beaa8deb19dd4aea11080c3eb122117fc3201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_dijkstra, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, RELEASE=main, description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, release=553, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:36:48 standalone.localdomain podman[34854]: 2025-10-13 13:36:48.040140967 +0000 UTC m=+0.042502989 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:48 standalone.localdomain podman[34854]: 2025-10-13 13:36:48.140198516 +0000 UTC m=+0.142560598 container attach 5ad621190a13ce08d3195c6ed08beaa8deb19dd4aea11080c3eb122117fc3201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_dijkstra, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_CLEAN=True, io.openshift.expose-services=, version=7, CEPH_POINT_RELEASE=, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, vendor=Red Hat, Inc., ceph=True, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:36:48 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:48 standalone.localdomain podman[34854]: 2025-10-13 13:36:48.144438296 +0000 UTC m=+0.146800338 container died 5ad621190a13ce08d3195c6ed08beaa8deb19dd4aea11080c3eb122117fc3201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_dijkstra, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, vcs-type=git, release=553, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, name=rhceph, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:36:48 standalone.localdomain python3[34869]: ansible-stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:36:48 standalone.localdomain podman[34877]: 2025-10-13 13:36:48.224989356 +0000 UTC m=+0.078414305 container remove 5ad621190a13ce08d3195c6ed08beaa8deb19dd4aea11080c3eb122117fc3201 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_dijkstra, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:36:48 standalone.localdomain systemd[1]: libpod-conmon-5ad621190a13ce08d3195c6ed08beaa8deb19dd4aea11080c3eb122117fc3201.scope: Deactivated successfully.
Oct 13 13:36:48 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:36:48 standalone.localdomain systemd-rc-local-generator[34917]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:36:48 standalone.localdomain systemd-sysv-generator[34921]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:36:48 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:36:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ead0114060d2eda9d8764c507874ae3650f06cc11fe6c6c66bdcec77d2324bbf-merged.mount: Deactivated successfully.
Oct 13 13:36:48 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:36:48 standalone.localdomain systemd-rc-local-generator[34964]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:36:48 standalone.localdomain systemd-sysv-generator[34968]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:36:48 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:36:48 standalone.localdomain python3[34980]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:36:48 standalone.localdomain systemd[1]: Starting Ceph mgr.standalone.aquevm for 627e7f45-65aa-56de-94df-66eaee84a56e...
Oct 13 13:36:48 standalone.localdomain podman[34983]: 
Oct 13 13:36:48 standalone.localdomain podman[34983]: 2025-10-13 13:36:48.878934041 +0000 UTC m=+0.063153584 container create 6fbce6bb8260853affd529aaa10432c57b95a51152caec4665e91cfe1cd5f9c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_grothendieck, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, RELEASE=main)
Oct 13 13:36:48 standalone.localdomain systemd[1]: Started libpod-conmon-6fbce6bb8260853affd529aaa10432c57b95a51152caec4665e91cfe1cd5f9c5.scope.
Oct 13 13:36:48 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:48 standalone.localdomain podman[34983]: 2025-10-13 13:36:48.852862649 +0000 UTC m=+0.037082192 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:48 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c06afdb7a35603b2be2d3ac3075680a56d7d7812647e9ecd4c791c707354f0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:48 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c06afdb7a35603b2be2d3ac3075680a56d7d7812647e9ecd4c791c707354f0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:48 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c06afdb7a35603b2be2d3ac3075680a56d7d7812647e9ecd4c791c707354f0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:48 standalone.localdomain podman[34983]: 2025-10-13 13:36:48.984764408 +0000 UTC m=+0.168983951 container init 6fbce6bb8260853affd529aaa10432c57b95a51152caec4665e91cfe1cd5f9c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_grothendieck, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-type=git, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:36:49 standalone.localdomain podman[34983]: 2025-10-13 13:36:49.002578586 +0000 UTC m=+0.186798129 container start 6fbce6bb8260853affd529aaa10432c57b95a51152caec4665e91cfe1cd5f9c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_grothendieck, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, CEPH_POINT_RELEASE=, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, architecture=x86_64, GIT_CLEAN=True, release=553, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:36:49 standalone.localdomain podman[34983]: 2025-10-13 13:36:49.002853145 +0000 UTC m=+0.187072698 container attach 6fbce6bb8260853affd529aaa10432c57b95a51152caec4665e91cfe1cd5f9c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_grothendieck, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.buildah.version=1.33.12, name=rhceph)
Oct 13 13:36:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:49 standalone.localdomain podman[35045]: 
Oct 13 13:36:49 standalone.localdomain podman[35045]: 2025-10-13 13:36:49.134242138 +0000 UTC m=+0.068359454 container create a80d623c6a7fcbb2f8c2ea590181a85c4ec7361f0626480103a2eb216c1886eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, release=553, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, architecture=x86_64, io.buildah.version=1.33.12, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:36:49 standalone.localdomain ceph-mon[29756]: Deploying daemon mgr.standalone.aquevm on standalone.localdomain
Oct 13 13:36:49 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2b06862fc6e59d23d92b8f78fabb346520491ae95de3ad1e63ff3dcc5b51fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:49 standalone.localdomain podman[35045]: 2025-10-13 13:36:49.106865386 +0000 UTC m=+0.040982702 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2b06862fc6e59d23d92b8f78fabb346520491ae95de3ad1e63ff3dcc5b51fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2b06862fc6e59d23d92b8f78fabb346520491ae95de3ad1e63ff3dcc5b51fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2b06862fc6e59d23d92b8f78fabb346520491ae95de3ad1e63ff3dcc5b51fc/merged/var/lib/ceph/mgr/ceph-standalone.aquevm supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:49 standalone.localdomain podman[35045]: 2025-10-13 13:36:49.245239525 +0000 UTC m=+0.179356821 container init a80d623c6a7fcbb2f8c2ea590181a85c4ec7361f0626480103a2eb216c1886eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, release=553, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:36:49 standalone.localdomain podman[35045]: 2025-10-13 13:36:49.251884379 +0000 UTC m=+0.186001685 container start a80d623c6a7fcbb2f8c2ea590181a85c4ec7361f0626480103a2eb216c1886eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-type=git, version=7, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public)
Oct 13 13:36:49 standalone.localdomain bash[35045]: a80d623c6a7fcbb2f8c2ea590181a85c4ec7361f0626480103a2eb216c1886eb
Oct 13 13:36:49 standalone.localdomain systemd[1]: Started Ceph mgr.standalone.aquevm for 627e7f45-65aa-56de-94df-66eaee84a56e.
Oct 13 13:36:49 standalone.localdomain sudo[34757]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:49 standalone.localdomain ceph-mgr[35083]: set uid:gid to 167:167 (ceph:ceph)
Oct 13 13:36:49 standalone.localdomain ceph-mgr[35083]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2
Oct 13 13:36:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:36:49 standalone.localdomain ceph-mgr[35083]: pidfile_write: ignore empty --pid-file
Oct 13 13:36:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:36:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 13 13:36:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:49 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 15fd75fc-e7cf-4709-b4d9-e9060af5f4c5 (Updating mgr deployment (+1 -> 2))
Oct 13 13:36:49 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 15fd75fc-e7cf-4709-b4d9-e9060af5f4c5 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Oct 13 13:36:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 13 13:36:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:49 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'alerts'
Oct 13 13:36:49 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 13 13:36:49 standalone.localdomain keen_grothendieck[35029]: 
Oct 13 13:36:49 standalone.localdomain keen_grothendieck[35029]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 13 13:36:49 standalone.localdomain sudo[35108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:36:49 standalone.localdomain sudo[35108]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:49 standalone.localdomain sudo[35108]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:49 standalone.localdomain systemd[1]: libpod-6fbce6bb8260853affd529aaa10432c57b95a51152caec4665e91cfe1cd5f9c5.scope: Deactivated successfully.
Oct 13 13:36:49 standalone.localdomain podman[34983]: 2025-10-13 13:36:49.423629615 +0000 UTC m=+0.607849188 container died 6fbce6bb8260853affd529aaa10432c57b95a51152caec4665e91cfe1cd5f9c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_grothendieck, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, GIT_BRANCH=main, io.openshift.expose-services=, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:36:49 standalone.localdomain ceph-mgr[35083]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 13 13:36:49 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'balancer'
Oct 13 13:36:49 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:49.441+0000 7f5f8b38c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 13 13:36:49 standalone.localdomain sudo[35131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:49 standalone.localdomain sudo[35131]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:49 standalone.localdomain sudo[35131]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:49 standalone.localdomain podman[35125]: 2025-10-13 13:36:49.507251589 +0000 UTC m=+0.077320041 container remove 6fbce6bb8260853affd529aaa10432c57b95a51152caec4665e91cfe1cd5f9c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_grothendieck, description=Red Hat Ceph Storage 7, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, com.redhat.component=rhceph-container, ceph=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:36:49 standalone.localdomain ceph-mgr[35083]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 13 13:36:49 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'cephadm'
Oct 13 13:36:49 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:49.511+0000 7f5f8b38c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 13 13:36:49 standalone.localdomain systemd[1]: libpod-conmon-6fbce6bb8260853affd529aaa10432c57b95a51152caec4665e91cfe1cd5f9c5.scope: Deactivated successfully.
Oct 13 13:36:49 standalone.localdomain sudo[35151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 13:36:49 standalone.localdomain sudo[35151]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:49 standalone.localdomain systemd[1]: tmp-crun.Yyk4X5.mount: Deactivated successfully.
Oct 13 13:36:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b1c06afdb7a35603b2be2d3ac3075680a56d7d7812647e9ecd4c791c707354f0-merged.mount: Deactivated successfully.
Oct 13 13:36:50 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'crash'
Oct 13 13:36:50 standalone.localdomain python3[35221]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/container_image_base registry.redhat.io/rhceph/rhceph-7-rhel9 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:36:50 standalone.localdomain ceph-mgr[35083]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 13 13:36:50 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'dashboard'
Oct 13 13:36:50 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:50.129+0000 7f5f8b38c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 13 13:36:50 standalone.localdomain podman[35240]: 
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:50 standalone.localdomain podman[35240]: 2025-10-13 13:36:50.178978001 +0000 UTC m=+0.081187120 container create cbef9211a2fb6e70f021ec11fb1c8bb023f5d81876bd61b0137538f6b289f046 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_lehmann, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, distribution-scope=public, com.redhat.component=rhceph-container, ceph=True, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_BRANCH=main, release=553, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:36:50 standalone.localdomain systemd[1]: Started libpod-conmon-cbef9211a2fb6e70f021ec11fb1c8bb023f5d81876bd61b0137538f6b289f046.scope.
Oct 13 13:36:50 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:50 standalone.localdomain podman[35240]: 2025-10-13 13:36:50.142341714 +0000 UTC m=+0.044550913 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c78529677846e88696520682e00229f85c09b22f8a9cf2e909b5d38cadc353/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c78529677846e88696520682e00229f85c09b22f8a9cf2e909b5d38cadc353/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c78529677846e88696520682e00229f85c09b22f8a9cf2e909b5d38cadc353/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:50 standalone.localdomain podman[35240]: 2025-10-13 13:36:50.287057538 +0000 UTC m=+0.189266657 container init cbef9211a2fb6e70f021ec11fb1c8bb023f5d81876bd61b0137538f6b289f046 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_lehmann, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.buildah.version=1.33.12, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=)
Oct 13 13:36:50 standalone.localdomain podman[35265]: 2025-10-13 13:36:50.288588135 +0000 UTC m=+0.096064908 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, version=7, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, architecture=x86_64)
Oct 13 13:36:50 standalone.localdomain podman[35240]: 2025-10-13 13:36:50.295698463 +0000 UTC m=+0.197907582 container start cbef9211a2fb6e70f021ec11fb1c8bb023f5d81876bd61b0137538f6b289f046 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_lehmann, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:36:50 standalone.localdomain podman[35240]: 2025-10-13 13:36:50.29592047 +0000 UTC m=+0.198129589 container attach cbef9211a2fb6e70f021ec11fb1c8bb023f5d81876bd61b0137538f6b289f046 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_lehmann, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, release=553, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64)
Oct 13 13:36:50 standalone.localdomain podman[35265]: 2025-10-13 13:36:50.39371202 +0000 UTC m=+0.201188783 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, CEPH_POINT_RELEASE=, distribution-scope=public, vendor=Red Hat, Inc., ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.openshift.tags=rhceph ceph, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:36:50 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'devicehealth'
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_image_base}] v 0)
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/189604614' entity='client.admin' 
Oct 13 13:36:50 standalone.localdomain systemd[1]: libpod-cbef9211a2fb6e70f021ec11fb1c8bb023f5d81876bd61b0137538f6b289f046.scope: Deactivated successfully.
Oct 13 13:36:50 standalone.localdomain podman[35240]: 2025-10-13 13:36:50.67574934 +0000 UTC m=+0.577958479 container died cbef9211a2fb6e70f021ec11fb1c8bb023f5d81876bd61b0137538f6b289f046 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_lehmann, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, architecture=x86_64, version=7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, name=rhceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, RELEASE=main, vcs-type=git)
Oct 13 13:36:50 standalone.localdomain ceph-mgr[35083]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 13 13:36:50 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'diskprediction_local'
Oct 13 13:36:50 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:50.682+0000 7f5f8b38c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 13 13:36:50 standalone.localdomain sudo[35151]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:36:50 standalone.localdomain podman[35365]: 2025-10-13 13:36:50.789625904 +0000 UTC m=+0.100859145 container remove cbef9211a2fb6e70f021ec11fb1c8bb023f5d81876bd61b0137538f6b289f046 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_lehmann, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, RELEASE=main, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, version=7, vendor=Red Hat, Inc., name=rhceph)
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:50 standalone.localdomain systemd[1]: libpod-conmon-cbef9211a2fb6e70f021ec11fb1c8bb023f5d81876bd61b0137538f6b289f046.scope: Deactivated successfully.
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:36:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b4c78529677846e88696520682e00229f85c09b22f8a9cf2e909b5d38cadc353-merged.mount: Deactivated successfully.
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:50 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 13 13:36:50 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 13 13:36:50 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]:   from numpy import show_config as show_numpy_config
Oct 13 13:36:50 standalone.localdomain ceph-mgr[35083]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 13 13:36:50 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'influx'
Oct 13 13:36:50 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:50.825+0000 7f5f8b38c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 13 13:36:50 standalone.localdomain sudo[35388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:36:50 standalone.localdomain sudo[35388]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:50 standalone.localdomain ceph-mgr[35083]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 13 13:36:50 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'insights'
Oct 13 13:36:50 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:50.886+0000 7f5f8b38c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 13 13:36:50 standalone.localdomain sudo[35388]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:50 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.serve] Reconfiguring mon.standalone (unknown last config time)...
Oct 13 13:36:50 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Reconfiguring mon.standalone (unknown last config time)...
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:36:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:50 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.standalone on standalone.localdomain
Oct 13 13:36:50 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.standalone on standalone.localdomain
Oct 13 13:36:50 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'iostat'
Oct 13 13:36:50 standalone.localdomain sudo[35407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:50 standalone.localdomain sudo[35407]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:50 standalone.localdomain sudo[35407]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:51 standalone.localdomain ceph-mgr[35083]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 13 13:36:51 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'k8sevents'
Oct 13 13:36:51 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:51.011+0000 7f5f8b38c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 13 13:36:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:51 standalone.localdomain sudo[35423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:51 standalone.localdomain sudo[35423]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/189604614' entity='client.admin' 
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:51 standalone.localdomain python3[35443]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:36:51 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'localpool'
Oct 13 13:36:51 standalone.localdomain podman[35444]: 
Oct 13 13:36:51 standalone.localdomain podman[35444]: 2025-10-13 13:36:51.383726718 +0000 UTC m=+0.061629407 container create 6f65f196fb92826600f7ea6fae95db1190a43e7c74f64930344dbb711a97851a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_tu, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vendor=Red Hat, Inc., GIT_CLEAN=True, io.buildah.version=1.33.12, RELEASE=main, GIT_BRANCH=main, architecture=x86_64)
Oct 13 13:36:51 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'mds_autoscaler'
Oct 13 13:36:51 standalone.localdomain systemd[1]: Started libpod-conmon-6f65f196fb92826600f7ea6fae95db1190a43e7c74f64930344dbb711a97851a.scope.
Oct 13 13:36:51 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:51 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1dc8479d51cab72241ecaf416da9fe18b99f36396b8a1c2e069b9a34a7dfc5f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:51 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1dc8479d51cab72241ecaf416da9fe18b99f36396b8a1c2e069b9a34a7dfc5f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:51 standalone.localdomain podman[35444]: 2025-10-13 13:36:51.366538179 +0000 UTC m=+0.044440878 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:51 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1dc8479d51cab72241ecaf416da9fe18b99f36396b8a1c2e069b9a34a7dfc5f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:51 standalone.localdomain podman[35444]: 2025-10-13 13:36:51.483149599 +0000 UTC m=+0.161052318 container init 6f65f196fb92826600f7ea6fae95db1190a43e7c74f64930344dbb711a97851a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_tu, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, version=7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:36:51 standalone.localdomain podman[35444]: 2025-10-13 13:36:51.493541839 +0000 UTC m=+0.171444548 container start 6f65f196fb92826600f7ea6fae95db1190a43e7c74f64930344dbb711a97851a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_tu, com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, version=7, vendor=Red Hat, Inc., io.openshift.expose-services=, release=553, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:36:51 standalone.localdomain podman[35444]: 2025-10-13 13:36:51.494101806 +0000 UTC m=+0.172004515 container attach 6f65f196fb92826600f7ea6fae95db1190a43e7c74f64930344dbb711a97851a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_tu, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=553, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, RELEASE=main, version=7, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:36:51 standalone.localdomain podman[35476]: 
Oct 13 13:36:51 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'mirroring'
Oct 13 13:36:51 standalone.localdomain podman[35476]: 2025-10-13 13:36:51.556690652 +0000 UTC m=+0.104860908 container create 6706b71084a6cd56f23f3eccad73c0c264c5c9a3a53bea430978195c0f1e21f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bhaskara, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, release=553, com.redhat.component=rhceph-container, distribution-scope=public, vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_BRANCH=main)
Oct 13 13:36:51 standalone.localdomain systemd[1]: Started libpod-conmon-6706b71084a6cd56f23f3eccad73c0c264c5c9a3a53bea430978195c0f1e21f0.scope.
Oct 13 13:36:51 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:51 standalone.localdomain podman[35476]: 2025-10-13 13:36:51.614254824 +0000 UTC m=+0.162425080 container init 6706b71084a6cd56f23f3eccad73c0c264c5c9a3a53bea430978195c0f1e21f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bhaskara, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vendor=Red Hat, Inc., name=rhceph, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, architecture=x86_64, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7)
Oct 13 13:36:51 standalone.localdomain podman[35476]: 2025-10-13 13:36:51.62257527 +0000 UTC m=+0.170745556 container start 6706b71084a6cd56f23f3eccad73c0c264c5c9a3a53bea430978195c0f1e21f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bhaskara, io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, CEPH_POINT_RELEASE=)
Oct 13 13:36:51 standalone.localdomain dazzling_bhaskara[35509]: 167 167
Oct 13 13:36:51 standalone.localdomain podman[35476]: 2025-10-13 13:36:51.623274741 +0000 UTC m=+0.171445007 container attach 6706b71084a6cd56f23f3eccad73c0c264c5c9a3a53bea430978195c0f1e21f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bhaskara, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12)
Oct 13 13:36:51 standalone.localdomain systemd[1]: libpod-6706b71084a6cd56f23f3eccad73c0c264c5c9a3a53bea430978195c0f1e21f0.scope: Deactivated successfully.
Oct 13 13:36:51 standalone.localdomain podman[35476]: 2025-10-13 13:36:51.624985594 +0000 UTC m=+0.173155910 container died 6706b71084a6cd56f23f3eccad73c0c264c5c9a3a53bea430978195c0f1e21f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bhaskara, GIT_BRANCH=main, io.buildah.version=1.33.12, vendor=Red Hat, Inc., ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, version=7, release=553, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:36:51 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'nfs'
Oct 13 13:36:51 standalone.localdomain podman[35476]: 2025-10-13 13:36:51.542906397 +0000 UTC m=+0.091076653 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:51 standalone.localdomain podman[35514]: 2025-10-13 13:36:51.68690971 +0000 UTC m=+0.054372865 container remove 6706b71084a6cd56f23f3eccad73c0c264c5c9a3a53bea430978195c0f1e21f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bhaskara, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, version=7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git)
Oct 13 13:36:51 standalone.localdomain systemd[1]: libpod-conmon-6706b71084a6cd56f23f3eccad73c0c264c5c9a3a53bea430978195c0f1e21f0.scope: Deactivated successfully.
Oct 13 13:36:51 standalone.localdomain sudo[35423]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:51 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.serve] Reconfiguring mgr.standalone.ectizd (unknown last config time)...
Oct 13 13:36:51 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Reconfiguring mgr.standalone.ectizd (unknown last config time)...
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.standalone.ectizd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get-or-create", "entity": "mgr.standalone.ectizd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mgr services"} : dispatch
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:51 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.standalone.ectizd on standalone.localdomain
Oct 13 13:36:51 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.standalone.ectizd on standalone.localdomain
Oct 13 13:36:51 standalone.localdomain ceph-mgr[35083]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 13 13:36:51 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'orchestrator'
Oct 13 13:36:51 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:51.790+0000 7f5f8b38c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 13 13:36:51 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:51 standalone.localdomain sudo[35529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:51 standalone.localdomain sudo[35529]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:51 standalone.localdomain sudo[35529]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:36:51 standalone.localdomain sudo[35544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:51 standalone.localdomain sudo[35544]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:51 standalone.localdomain sudo[35544]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:51 standalone.localdomain sudo[35558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:36:51 standalone.localdomain sudo[35558]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:51 standalone.localdomain ceph-mgr[35083]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 13 13:36:51 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'osd_perf_query'
Oct 13 13:36:51 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:51.930+0000 7f5f8b38c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 13 13:36:51 standalone.localdomain sudo[35574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host --expect-hostname standalone.localdomain
Oct 13 13:36:51 standalone.localdomain sudo[35574]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:51 standalone.localdomain ceph-mgr[35083]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 13 13:36:51 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'osd_support'
Oct 13 13:36:51 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:51.991+0000 7f5f8b38c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 13 13:36:52 standalone.localdomain ceph-mgr[35083]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 13 13:36:52 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'pg_autoscaler'
Oct 13 13:36:52 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:52.045+0000 7f5f8b38c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 13 13:36:52 standalone.localdomain ceph-mgr[35083]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 13 13:36:52 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:52.111+0000 7f5f8b38c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 13 13:36:52 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'progress'
Oct 13 13:36:52 standalone.localdomain ceph-mgr[35083]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 13 13:36:52 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:52.169+0000 7f5f8b38c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 13 13:36:52 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'prometheus'
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: Reconfiguring mon.standalone (unknown last config time)...
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: Reconfiguring daemon mon.standalone on standalone.localdomain
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get-or-create", "entity": "mgr.standalone.ectizd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mgr services"} : dispatch
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:52 standalone.localdomain podman[35608]: 
Oct 13 13:36:52 standalone.localdomain podman[35608]: 2025-10-13 13:36:52.369812666 +0000 UTC m=+0.083575272 container create 02ba8edece38e89c419b330e41db4cca69c89487ac7ab88387b32b0891bbf07c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_mcnulty, build-date=2025-09-24T08:57:55, version=7, release=553, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.buildah.version=1.33.12)
Oct 13 13:36:52 standalone.localdomain systemd[1]: Started libpod-conmon-02ba8edece38e89c419b330e41db4cca69c89487ac7ab88387b32b0891bbf07c.scope.
Oct 13 13:36:52 standalone.localdomain podman[35608]: 2025-10-13 13:36:52.331723814 +0000 UTC m=+0.045486480 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:52 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:52 standalone.localdomain sudo[35574]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 13 13:36:52 standalone.localdomain podman[35608]: 2025-10-13 13:36:52.460490737 +0000 UTC m=+0.174253333 container init 02ba8edece38e89c419b330e41db4cca69c89487ac7ab88387b32b0891bbf07c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_mcnulty, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, ceph=True, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, RELEASE=main, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 13 13:36:52 standalone.localdomain podman[35608]: 2025-10-13 13:36:52.469863966 +0000 UTC m=+0.183626572 container start 02ba8edece38e89c419b330e41db4cca69c89487ac7ab88387b32b0891bbf07c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_mcnulty, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, architecture=x86_64, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, RELEASE=main, release=553, name=rhceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main)
Oct 13 13:36:52 standalone.localdomain podman[35608]: 2025-10-13 13:36:52.470065642 +0000 UTC m=+0.183828298 container attach 02ba8edece38e89c419b330e41db4cca69c89487ac7ab88387b32b0891bbf07c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_mcnulty, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, ceph=True, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, distribution-scope=public, architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:52 standalone.localdomain vigorous_mcnulty[35643]: 167 167
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:52 standalone.localdomain systemd[1]: libpod-02ba8edece38e89c419b330e41db4cca69c89487ac7ab88387b32b0891bbf07c.scope: Deactivated successfully.
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 13 13:36:52 standalone.localdomain podman[35608]: 2025-10-13 13:36:52.474503029 +0000 UTC m=+0.188265675 container died 02ba8edece38e89c419b330e41db4cca69c89487ac7ab88387b32b0891bbf07c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_mcnulty, io.buildah.version=1.33.12, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.openshift.expose-services=, vcs-type=git, version=7, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc.)
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct 13 13:36:52 standalone.localdomain ceph-mgr[35083]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 13 13:36:52 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:52.478+0000 7f5f8b38c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 13 13:36:52 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'rbd_support'
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:52 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Added host standalone.localdomain
Oct 13 13:36:52 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Added host standalone.localdomain
Oct 13 13:36:52 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Marking host: standalone.localdomain for OSDSpec preview refresh.
Oct 13 13:36:52 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Marking host: standalone.localdomain for OSDSpec preview refresh.
Oct 13 13:36:52 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement standalone.localdomain
Oct 13 13:36:52 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement standalone.localdomain
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:52 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Saving service mon spec with placement standalone.localdomain
Oct 13 13:36:52 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Saving service mon spec with placement standalone.localdomain
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:52 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Saving service mgr spec with placement standalone.localdomain
Oct 13 13:36:52 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement standalone.localdomain
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:52 standalone.localdomain laughing_tu[35471]: Added host 'standalone.localdomain' with addr '172.18.0.100'
Oct 13 13:36:52 standalone.localdomain laughing_tu[35471]: Scheduled osd.default_drive_group update...
Oct 13 13:36:52 standalone.localdomain laughing_tu[35471]: Scheduled mon update...
Oct 13 13:36:52 standalone.localdomain laughing_tu[35471]: Scheduled mgr update...
Oct 13 13:36:52 standalone.localdomain systemd[1]: libpod-6f65f196fb92826600f7ea6fae95db1190a43e7c74f64930344dbb711a97851a.scope: Deactivated successfully.
Oct 13 13:36:52 standalone.localdomain podman[35444]: 2025-10-13 13:36:52.523135965 +0000 UTC m=+1.201038654 container died 6f65f196fb92826600f7ea6fae95db1190a43e7c74f64930344dbb711a97851a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_tu, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:36:52 standalone.localdomain ceph-mgr[35083]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 13 13:36:52 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:52.570+0000 7f5f8b38c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 13 13:36:52 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'restful'
Oct 13 13:36:52 standalone.localdomain podman[35648]: 2025-10-13 13:36:52.57593335 +0000 UTC m=+0.089664410 container remove 02ba8edece38e89c419b330e41db4cca69c89487ac7ab88387b32b0891bbf07c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_mcnulty, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, ceph=True, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-type=git, version=7, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, release=553, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:36:52 standalone.localdomain systemd[1]: libpod-conmon-02ba8edece38e89c419b330e41db4cca69c89487ac7ab88387b32b0891bbf07c.scope: Deactivated successfully.
Oct 13 13:36:52 standalone.localdomain podman[35659]: 2025-10-13 13:36:52.612873827 +0000 UTC m=+0.080116736 container remove 6f65f196fb92826600f7ea6fae95db1190a43e7c74f64930344dbb711a97851a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_tu, name=rhceph, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 13 13:36:52 standalone.localdomain systemd[1]: libpod-conmon-6f65f196fb92826600f7ea6fae95db1190a43e7c74f64930344dbb711a97851a.scope: Deactivated successfully.
Oct 13 13:36:52 standalone.localdomain sudo[35558]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-proxy}] v 0)
Oct 13 13:36:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:52 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'rgw'
Oct 13 13:36:52 standalone.localdomain sudo[35679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:52 standalone.localdomain sudo[35679]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:52 standalone.localdomain sudo[35679]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-546d5ec0abda2e9f33d05bb1f0c5cb58f4c7c7f96437c3714d047617bd69db2a-merged.mount: Deactivated successfully.
Oct 13 13:36:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d1dc8479d51cab72241ecaf416da9fe18b99f36396b8a1c2e069b9a34a7dfc5f-merged.mount: Deactivated successfully.
Oct 13 13:36:52 standalone.localdomain sudo[35695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 13:36:52 standalone.localdomain sudo[35695]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:52 standalone.localdomain ceph-mgr[35083]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 13 13:36:52 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:52.913+0000 7f5f8b38c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 13 13:36:52 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'rook'
Oct 13 13:36:53 standalone.localdomain python3[35713]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:36:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:53 standalone.localdomain podman[35715]: 
Oct 13 13:36:53 standalone.localdomain podman[35715]: 2025-10-13 13:36:53.067650834 +0000 UTC m=+0.053771907 container create 14ae794ee2c83c61b9bac94767c3a0c4923deb339ef6779d1a4351c83d9737ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_faraday, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, distribution-scope=public, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=553)
Oct 13 13:36:53 standalone.localdomain systemd[1]: Started libpod-conmon-14ae794ee2c83c61b9bac94767c3a0c4923deb339ef6779d1a4351c83d9737ca.scope.
Oct 13 13:36:53 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:53 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5406a4ca338687672051f0c3b7d92034cf277a08cbe77d79b9e9070b8a920fb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:53 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5406a4ca338687672051f0c3b7d92034cf277a08cbe77d79b9e9070b8a920fb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:36:53 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5406a4ca338687672051f0c3b7d92034cf277a08cbe77d79b9e9070b8a920fb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:36:53 standalone.localdomain podman[35715]: 2025-10-13 13:36:53.138634268 +0000 UTC m=+0.124755371 container init 14ae794ee2c83c61b9bac94767c3a0c4923deb339ef6779d1a4351c83d9737ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_faraday, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.buildah.version=1.33.12, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, com.redhat.component=rhceph-container, architecture=x86_64, vcs-type=git)
Oct 13 13:36:53 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 2 completed events
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:36:53 standalone.localdomain podman[35715]: 2025-10-13 13:36:53.052386233 +0000 UTC m=+0.038507296 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:53 standalone.localdomain podman[35715]: 2025-10-13 13:36:53.149360388 +0000 UTC m=+0.135481471 container start 14ae794ee2c83c61b9bac94767c3a0c4923deb339ef6779d1a4351c83d9737ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_faraday, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_BRANCH=main, release=553, ceph=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:36:53 standalone.localdomain podman[35715]: 2025-10-13 13:36:53.149636966 +0000 UTC m=+0.135758120 container attach 14ae794ee2c83c61b9bac94767c3a0c4923deb339ef6779d1a4351c83d9737ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_faraday, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, name=rhceph, distribution-scope=public, ceph=True, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: Reconfiguring mgr.standalone.ectizd (unknown last config time)...
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: Reconfiguring daemon mgr.standalone.ectizd on standalone.localdomain
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:53 standalone.localdomain ceph-mgr[35083]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 13 13:36:53 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:53.352+0000 7f5f8b38c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 13 13:36:53 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'selftest'
Oct 13 13:36:53 standalone.localdomain ceph-mgr[35083]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 13 13:36:53 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:53.413+0000 7f5f8b38c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 13 13:36:53 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'snap_schedule'
Oct 13 13:36:53 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'stats'
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 13 13:36:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/855359815' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Oct 13 13:36:53 standalone.localdomain frosty_faraday[35730]: 
Oct 13 13:36:53 standalone.localdomain frosty_faraday[35730]: {"fsid":"627e7f45-65aa-56de-94df-66eaee84a56e","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["standalone"],"quorum_age":46,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":2,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-13T13:36:03.878879+0000","services":{}},"progress_events":{}}
Oct 13 13:36:53 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'status'
Oct 13 13:36:53 standalone.localdomain systemd[1]: libpod-14ae794ee2c83c61b9bac94767c3a0c4923deb339ef6779d1a4351c83d9737ca.scope: Deactivated successfully.
Oct 13 13:36:53 standalone.localdomain podman[35715]: 2025-10-13 13:36:53.55431893 +0000 UTC m=+0.540440044 container died 14ae794ee2c83c61b9bac94767c3a0c4923deb339ef6779d1a4351c83d9737ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_faraday, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=553, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_CLEAN=True, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:36:53 standalone.localdomain podman[35822]: 2025-10-13 13:36:53.560729288 +0000 UTC m=+0.077543167 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, io.buildah.version=1.33.12, GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, version=7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:36:53 standalone.localdomain ceph-mgr[35083]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 13 13:36:53 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:53.608+0000 7f5f8b38c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 13 13:36:53 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'telegraf'
Oct 13 13:36:53 standalone.localdomain podman[35840]: 2025-10-13 13:36:53.620802057 +0000 UTC m=+0.058029717 container remove 14ae794ee2c83c61b9bac94767c3a0c4923deb339ef6779d1a4351c83d9737ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_faraday, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, version=7, release=553, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:36:53 standalone.localdomain systemd[1]: libpod-conmon-14ae794ee2c83c61b9bac94767c3a0c4923deb339ef6779d1a4351c83d9737ca.scope: Deactivated successfully.
Oct 13 13:36:53 standalone.localdomain ceph-mgr[35083]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 13 13:36:53 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:53.667+0000 7f5f8b38c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 13 13:36:53 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'telemetry'
Oct 13 13:36:53 standalone.localdomain podman[35822]: 2025-10-13 13:36:53.700715507 +0000 UTC m=+0.217529426 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, RELEASE=main, ceph=True, release=553, build-date=2025-09-24T08:57:55, io.openshift.expose-services=)
Oct 13 13:36:53 standalone.localdomain ceph-mgr[35083]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 13 13:36:53 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:53.799+0000 7f5f8b38c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 13 13:36:53 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'test_orchestrator'
Oct 13 13:36:53 standalone.localdomain systemd[1]: tmp-crun.J9KxJe.mount: Deactivated successfully.
Oct 13 13:36:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a5406a4ca338687672051f0c3b7d92034cf277a08cbe77d79b9e9070b8a920fb-merged.mount: Deactivated successfully.
Oct 13 13:36:53 standalone.localdomain ceph-mgr[35083]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 13 13:36:53 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:53.946+0000 7f5f8b38c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 13 13:36:53 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'volumes'
Oct 13 13:36:54 standalone.localdomain sudo[35695]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:54 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 314c9a07-dfdf-4211-bdab-cbe7b2080f50 (Updating mgr deployment (-1 -> 1))
Oct 13 13:36:54 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.serve] Removing daemon mgr.standalone.aquevm from standalone.localdomain -- ports [8765]
Oct 13 13:36:54 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Removing daemon mgr.standalone.aquevm from standalone.localdomain -- ports [8765]
Oct 13 13:36:54 standalone.localdomain ceph-mgr[35083]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 13 13:36:54 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:54.132+0000 7f5f8b38c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 13 13:36:54 standalone.localdomain ceph-mgr[35083]: mgr[py] Loading python module 'zabbix'
Oct 13 13:36:54 standalone.localdomain sudo[35920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:54 standalone.localdomain sudo[35920]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:54 standalone.localdomain sudo[35920]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: Added host standalone.localdomain
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: Marking host: standalone.localdomain for OSDSpec preview refresh.
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: Saving service osd.default_drive_group spec with placement standalone.localdomain
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: Saving service mon spec with placement standalone.localdomain
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: Saving service mgr spec with placement standalone.localdomain
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/855359815' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:54 standalone.localdomain sudo[35935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 rm-daemon --fsid 627e7f45-65aa-56de-94df-66eaee84a56e --name mgr.standalone.aquevm --force --tcp-ports 8765
Oct 13 13:36:54 standalone.localdomain ceph-mgr[35083]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 13 13:36:54 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm[35079]: 2025-10-13T13:36:54.190+0000 7f5f8b38c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 13 13:36:54 standalone.localdomain sudo[35935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:54 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : Standby manager daemon standalone.aquevm started
Oct 13 13:36:54 standalone.localdomain ceph-mgr[35083]: ms_deliver_dispatch: unhandled message 0x55f1f8bf0420 mon_map magic: 0 from mon.0 v2:172.18.0.100:3300/0
Oct 13 13:36:54 standalone.localdomain ceph-mgr[29999]: mgr.server handle_open ignoring open from mgr.standalone.aquevm 172.18.0.100:0/1076240970; not ready for session (expect reconnect)
Oct 13 13:36:54 standalone.localdomain ceph-mgr[35083]: client.0 ms_handle_reset on v2:172.18.0.100:6800/1677275897
Oct 13 13:36:54 standalone.localdomain systemd[1]: Stopping Ceph mgr.standalone.aquevm for 627e7f45-65aa-56de-94df-66eaee84a56e...
Oct 13 13:36:55 standalone.localdomain podman[36017]: 2025-10-13 13:36:55.033767742 +0000 UTC m=+0.091780135 container died a80d623c6a7fcbb2f8c2ea590181a85c4ec7361f0626480103a2eb216c1886eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm, distribution-scope=public, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, architecture=x86_64, io.buildah.version=1.33.12, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container)
Oct 13 13:36:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4b2b06862fc6e59d23d92b8f78fabb346520491ae95de3ad1e63ff3dcc5b51fc-merged.mount: Deactivated successfully.
Oct 13 13:36:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:55 standalone.localdomain podman[36017]: 2025-10-13 13:36:55.056662187 +0000 UTC m=+0.114674560 container cleanup a80d623c6a7fcbb2f8c2ea590181a85c4ec7361f0626480103a2eb216c1886eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm, release=553, ceph=True, distribution-scope=public, io.buildah.version=1.33.12, RELEASE=main, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, version=7, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:36:55 standalone.localdomain bash[36017]: ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm
Oct 13 13:36:55 standalone.localdomain podman[36030]: 2025-10-13 13:36:55.155475558 +0000 UTC m=+0.114282998 container remove a80d623c6a7fcbb2f8c2ea590181a85c4ec7361f0626480103a2eb216c1886eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-aquevm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, build-date=2025-09-24T08:57:55, vcs-type=git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, name=rhceph, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., release=553, ceph=True, GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, RELEASE=main)
Oct 13 13:36:55 standalone.localdomain systemd[1]: ceph-627e7f45-65aa-56de-94df-66eaee84a56e@mgr.standalone.aquevm.service: Main process exited, code=exited, status=143/n/a
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: Removing daemon mgr.standalone.aquevm from standalone.localdomain -- ports [8765]
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: Standby manager daemon standalone.aquevm started
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : mgrmap e9: standalone.ectizd(active, since 32s), standbys: standalone.aquevm
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "standalone.aquevm", "id": "standalone.aquevm"} v 0)
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mgr metadata", "who": "standalone.aquevm", "id": "standalone.aquevm"} : dispatch
Oct 13 13:36:55 standalone.localdomain systemd[1]: ceph-627e7f45-65aa-56de-94df-66eaee84a56e@mgr.standalone.aquevm.service: Failed with result 'exit-code'.
Oct 13 13:36:55 standalone.localdomain systemd[1]: Stopped Ceph mgr.standalone.aquevm for 627e7f45-65aa-56de-94df-66eaee84a56e.
Oct 13 13:36:55 standalone.localdomain systemd[1]: ceph-627e7f45-65aa-56de-94df-66eaee84a56e@mgr.standalone.aquevm.service: Consumed 6.600s CPU time.
Oct 13 13:36:55 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:36:55 standalone.localdomain systemd-rc-local-generator[36098]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:36:55 standalone.localdomain systemd-sysv-generator[36103]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:36:55 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:36:55 standalone.localdomain sudo[35935]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:55 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.standalone.aquevm
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.standalone.aquevm"} v 0)
Oct 13 13:36:55 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Removing key for mgr.standalone.aquevm
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth rm", "entity": "mgr.standalone.aquevm"} : dispatch
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "auth rm", "entity": "mgr.standalone.aquevm"}]': finished
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:55 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 314c9a07-dfdf-4211-bdab-cbe7b2080f50 (Updating mgr deployment (-1 -> 1))
Oct 13 13:36:55 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 314c9a07-dfdf-4211-bdab-cbe7b2080f50 (Updating mgr deployment (-1 -> 1)) in 2 seconds
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:36:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:56 standalone.localdomain sudo[36112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:36:56 standalone.localdomain sudo[36112]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:56 standalone.localdomain sudo[36112]: pam_unix(sudo:session): session closed for user root
Oct 13 13:36:56 standalone.localdomain sudo[36127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 627e7f45-65aa-56de-94df-66eaee84a56e --config-json - -- lvm batch --no-auto /dev/vg2/data-lv2 --yes --no-systemd
Oct 13 13:36:56 standalone.localdomain sudo[36127]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:36:56 standalone.localdomain ceph-mon[29756]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:56 standalone.localdomain ceph-mon[29756]: mgrmap e9: standalone.ectizd(active, since 32s), standbys: standalone.aquevm
Oct 13 13:36:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mgr metadata", "who": "standalone.aquevm", "id": "standalone.aquevm"} : dispatch
Oct 13 13:36:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth rm", "entity": "mgr.standalone.aquevm"} : dispatch
Oct 13 13:36:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "auth rm", "entity": "mgr.standalone.aquevm"}]': finished
Oct 13 13:36:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:36:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Oct 13 13:36:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:36:56 standalone.localdomain podman[36178]: 2025-10-13 13:36:56.630580205 +0000 UTC m=+0.072735289 container create 9fbaca3228ced642102e7239fd1f1f541b464c0fdf6655bb207ca3e6a84b3876 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_lewin, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, com.redhat.component=rhceph-container, name=rhceph, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, CEPH_POINT_RELEASE=, RELEASE=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Oct 13 13:36:56 standalone.localdomain systemd[1]: Started libpod-conmon-9fbaca3228ced642102e7239fd1f1f541b464c0fdf6655bb207ca3e6a84b3876.scope.
Oct 13 13:36:56 standalone.localdomain podman[36178]: 2025-10-13 13:36:56.602877734 +0000 UTC m=+0.045032818 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:56 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:56 standalone.localdomain podman[36178]: 2025-10-13 13:36:56.722290668 +0000 UTC m=+0.164445762 container init 9fbaca3228ced642102e7239fd1f1f541b464c0fdf6655bb207ca3e6a84b3876 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_lewin, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, name=rhceph, vendor=Red Hat, Inc., distribution-scope=public, ceph=True, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, vcs-type=git)
Oct 13 13:36:56 standalone.localdomain podman[36178]: 2025-10-13 13:36:56.734080421 +0000 UTC m=+0.176235505 container start 9fbaca3228ced642102e7239fd1f1f541b464c0fdf6655bb207ca3e6a84b3876 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_lewin, io.buildah.version=1.33.12, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, name=rhceph, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, build-date=2025-09-24T08:57:55, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main)
Oct 13 13:36:56 standalone.localdomain podman[36178]: 2025-10-13 13:36:56.734325748 +0000 UTC m=+0.176480882 container attach 9fbaca3228ced642102e7239fd1f1f541b464c0fdf6655bb207ca3e6a84b3876 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_lewin, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, release=553, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.openshift.expose-services=, version=7, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public)
Oct 13 13:36:56 standalone.localdomain suspicious_lewin[36193]: 167 167
Oct 13 13:36:56 standalone.localdomain systemd[1]: libpod-9fbaca3228ced642102e7239fd1f1f541b464c0fdf6655bb207ca3e6a84b3876.scope: Deactivated successfully.
Oct 13 13:36:56 standalone.localdomain podman[36178]: 2025-10-13 13:36:56.751063393 +0000 UTC m=+0.193218487 container died 9fbaca3228ced642102e7239fd1f1f541b464c0fdf6655bb207ca3e6a84b3876 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_lewin, release=553, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-type=git, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55)
Oct 13 13:36:56 standalone.localdomain systemd[1]: tmp-crun.UxEgX6.mount: Deactivated successfully.
Oct 13 13:36:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:36:56 standalone.localdomain podman[36198]: 2025-10-13 13:36:56.873570424 +0000 UTC m=+0.107354245 container remove 9fbaca3228ced642102e7239fd1f1f541b464c0fdf6655bb207ca3e6a84b3876 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_lewin, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, name=rhceph, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:36:56 standalone.localdomain systemd[1]: libpod-conmon-9fbaca3228ced642102e7239fd1f1f541b464c0fdf6655bb207ca3e6a84b3876.scope: Deactivated successfully.
Oct 13 13:36:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:57 standalone.localdomain podman[36217]: 2025-10-13 13:36:57.070296858 +0000 UTC m=+0.058767540 container create 8e2f0d992ecbc8dd9db9975d5e05b38cac9d430a79703b1f133f44528166f26a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_chaum, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.buildah.version=1.33.12, GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, name=rhceph, GIT_BRANCH=main, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:36:57 standalone.localdomain systemd[1]: Started libpod-conmon-8e2f0d992ecbc8dd9db9975d5e05b38cac9d430a79703b1f133f44528166f26a.scope.
Oct 13 13:36:57 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:36:57 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9283d7a82f063e8523c0edbc8396133f28d2f570bd460d41f9ec09e1ad2f8112/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:57 standalone.localdomain podman[36217]: 2025-10-13 13:36:57.048751796 +0000 UTC m=+0.037222498 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:36:57 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9283d7a82f063e8523c0edbc8396133f28d2f570bd460d41f9ec09e1ad2f8112/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:57 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9283d7a82f063e8523c0edbc8396133f28d2f570bd460d41f9ec09e1ad2f8112/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:57 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9283d7a82f063e8523c0edbc8396133f28d2f570bd460d41f9ec09e1ad2f8112/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:57 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9283d7a82f063e8523c0edbc8396133f28d2f570bd460d41f9ec09e1ad2f8112/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 13:36:57 standalone.localdomain podman[36217]: 2025-10-13 13:36:57.195956355 +0000 UTC m=+0.184427017 container init 8e2f0d992ecbc8dd9db9975d5e05b38cac9d430a79703b1f133f44528166f26a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_chaum, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, RELEASE=main, vendor=Red Hat, Inc., architecture=x86_64, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, CEPH_POINT_RELEASE=)
Oct 13 13:36:57 standalone.localdomain ceph-mon[29756]: Removing key for mgr.standalone.aquevm
Oct 13 13:36:57 standalone.localdomain podman[36217]: 2025-10-13 13:36:57.20421876 +0000 UTC m=+0.192689422 container start 8e2f0d992ecbc8dd9db9975d5e05b38cac9d430a79703b1f133f44528166f26a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_chaum, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, ceph=True, GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:36:57 standalone.localdomain podman[36217]: 2025-10-13 13:36:57.204594341 +0000 UTC m=+0.193065043 container attach 8e2f0d992ecbc8dd9db9975d5e05b38cac9d430a79703b1f133f44528166f26a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_chaum, version=7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:36:57 standalone.localdomain quizzical_chaum[36232]: --> passed data devices: 0 physical, 1 LVM
Oct 13 13:36:57 standalone.localdomain quizzical_chaum[36232]: --> relative data size: 1.0
Oct 13 13:36:57 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 13 13:36:57 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b05d1f73-98ec-4f65-983d-44df16fccc62
Oct 13 13:36:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-877ee58817e840f5fb1c880f5fd91e61148d9e373da9a1bf71787b7ece25feff-merged.mount: Deactivated successfully.
Oct 13 13:36:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "b05d1f73-98ec-4f65-983d-44df16fccc62"} v 0)
Oct 13 13:36:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/2718634641' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "b05d1f73-98ec-4f65-983d-44df16fccc62"} : dispatch
Oct 13 13:36:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e2 do_prune osdmap full prune enabled
Oct 13 13:36:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 13 13:36:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e3 e3: 1 total, 0 up, 1 in
Oct 13 13:36:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/2718634641' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b05d1f73-98ec-4f65-983d-44df16fccc62"}]': finished
Oct 13 13:36:57 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e3: 1 total, 0 up, 1 in
Oct 13 13:36:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 13 13:36:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:36:57 standalone.localdomain ceph-mgr[29999]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 13 13:36:58 standalone.localdomain lvm[36279]: PV /dev/loop3 online, VG vg2 is complete.
Oct 13 13:36:58 standalone.localdomain lvm[36279]: VG vg2 finished
Oct 13 13:36:58 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 13 13:36:58 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct 13 13:36:58 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/chown -h ceph:ceph /dev/vg2/data-lv2
Oct 13 13:36:58 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 13 13:36:58 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/ln -s /dev/vg2/data-lv2 /var/lib/ceph/osd/ceph-0/block
Oct 13 13:36:58 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct 13 13:36:58 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 3 completed events
Oct 13 13:36:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:36:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:58 standalone.localdomain ceph-mon[29756]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:58 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2718634641' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "b05d1f73-98ec-4f65-983d-44df16fccc62"} : dispatch
Oct 13 13:36:58 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2718634641' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b05d1f73-98ec-4f65-983d-44df16fccc62"}]': finished
Oct 13 13:36:58 standalone.localdomain ceph-mon[29756]: osdmap e3: 1 total, 0 up, 1 in
Oct 13 13:36:58 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:36:58 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:36:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct 13 13:36:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/398257708' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Oct 13 13:36:58 standalone.localdomain quizzical_chaum[36232]:  stderr: got monmap epoch 1
Oct 13 13:36:58 standalone.localdomain quizzical_chaum[36232]: --> Creating keyring file for osd.0
Oct 13 13:36:58 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct 13 13:36:58 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct 13 13:36:58 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid b05d1f73-98ec-4f65-983d-44df16fccc62 --setuser ceph --setgroup ceph
Oct 13 13:36:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:36:59 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 13 13:36:59 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 13 13:36:59 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/398257708' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Oct 13 13:36:59 standalone.localdomain ceph-mon[29756]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:37:00 standalone.localdomain ceph-mon[29756]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 13 13:37:00 standalone.localdomain ceph-mon[29756]: Cluster is now healthy
Oct 13 13:37:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:37:01 standalone.localdomain quizzical_chaum[36232]:  stderr: 2025-10-13T13:36:58.760+0000 7fe550408a80 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 13 13:37:01 standalone.localdomain quizzical_chaum[36232]:  stderr: 2025-10-13T13:36:58.760+0000 7fe550408a80 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct 13 13:37:01 standalone.localdomain quizzical_chaum[36232]: --> ceph-volume lvm prepare successful for: vg2/data-lv2
Oct 13 13:37:01 standalone.localdomain ceph-mon[29756]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:37:01 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 13 13:37:01 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/vg2/data-lv2 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 13 13:37:01 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/ln -snf /dev/vg2/data-lv2 /var/lib/ceph/osd/ceph-0/block
Oct 13 13:37:01 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 13 13:37:01 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 13 13:37:01 standalone.localdomain quizzical_chaum[36232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 13 13:37:01 standalone.localdomain quizzical_chaum[36232]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 13 13:37:01 standalone.localdomain quizzical_chaum[36232]: --> ceph-volume lvm create successful for: vg2/data-lv2
Oct 13 13:37:01 standalone.localdomain systemd[1]: libpod-8e2f0d992ecbc8dd9db9975d5e05b38cac9d430a79703b1f133f44528166f26a.scope: Deactivated successfully.
Oct 13 13:37:01 standalone.localdomain systemd[1]: libpod-8e2f0d992ecbc8dd9db9975d5e05b38cac9d430a79703b1f133f44528166f26a.scope: Consumed 1.985s CPU time.
Oct 13 13:37:01 standalone.localdomain podman[37197]: 2025-10-13 13:37:01.432930813 +0000 UTC m=+0.052332981 container died 8e2f0d992ecbc8dd9db9975d5e05b38cac9d430a79703b1f133f44528166f26a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_chaum, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, name=rhceph, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, ceph=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main)
Oct 13 13:37:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-9283d7a82f063e8523c0edbc8396133f28d2f570bd460d41f9ec09e1ad2f8112-merged.mount: Deactivated successfully.
Oct 13 13:37:01 standalone.localdomain podman[37197]: 2025-10-13 13:37:01.471315464 +0000 UTC m=+0.090717602 container remove 8e2f0d992ecbc8dd9db9975d5e05b38cac9d430a79703b1f133f44528166f26a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_chaum, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, release=553, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, ceph=True, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, vcs-type=git, RELEASE=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7)
Oct 13 13:37:01 standalone.localdomain systemd[1]: libpod-conmon-8e2f0d992ecbc8dd9db9975d5e05b38cac9d430a79703b1f133f44528166f26a.scope: Deactivated successfully.
Oct 13 13:37:01 standalone.localdomain sudo[36127]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:01 standalone.localdomain sudo[37209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:37:01 standalone.localdomain sudo[37209]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:37:01 standalone.localdomain sudo[37209]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:01 standalone.localdomain sudo[37224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -- lvm list --format json
Oct 13 13:37:01 standalone.localdomain sudo[37224]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:37:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:37:02 standalone.localdomain podman[37278]: 2025-10-13 13:37:02.17626407 +0000 UTC m=+0.066967002 container create ae651f0f446aee808f5ea164a14db07f37836867e98fe92050e97a7aeb532505 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_margulis, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, release=553, GIT_BRANCH=main, RELEASE=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, vcs-type=git, version=7)
Oct 13 13:37:02 standalone.localdomain systemd[1]: Started libpod-conmon-ae651f0f446aee808f5ea164a14db07f37836867e98fe92050e97a7aeb532505.scope.
Oct 13 13:37:02 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:37:02 standalone.localdomain podman[37278]: 2025-10-13 13:37:02.248174293 +0000 UTC m=+0.138877235 container init ae651f0f446aee808f5ea164a14db07f37836867e98fe92050e97a7aeb532505 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_margulis, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, ceph=True, GIT_BRANCH=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, RELEASE=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:37:02 standalone.localdomain podman[37278]: 2025-10-13 13:37:02.150080864 +0000 UTC m=+0.040783826 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:37:02 standalone.localdomain podman[37278]: 2025-10-13 13:37:02.259659657 +0000 UTC m=+0.150362549 container start ae651f0f446aee808f5ea164a14db07f37836867e98fe92050e97a7aeb532505 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_margulis, GIT_BRANCH=main, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, architecture=x86_64)
Oct 13 13:37:02 standalone.localdomain podman[37278]: 2025-10-13 13:37:02.259887094 +0000 UTC m=+0.150590056 container attach ae651f0f446aee808f5ea164a14db07f37836867e98fe92050e97a7aeb532505 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_margulis, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, architecture=x86_64, RELEASE=main, version=7, ceph=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7)
Oct 13 13:37:02 standalone.localdomain kind_margulis[37293]: 167 167
Oct 13 13:37:02 standalone.localdomain systemd[1]: libpod-ae651f0f446aee808f5ea164a14db07f37836867e98fe92050e97a7aeb532505.scope: Deactivated successfully.
Oct 13 13:37:02 standalone.localdomain podman[37278]: 2025-10-13 13:37:02.26401693 +0000 UTC m=+0.154719912 container died ae651f0f446aee808f5ea164a14db07f37836867e98fe92050e97a7aeb532505 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_margulis, build-date=2025-09-24T08:57:55, RELEASE=main, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, version=7, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:37:02 standalone.localdomain podman[37298]: 2025-10-13 13:37:02.353263697 +0000 UTC m=+0.079598820 container remove ae651f0f446aee808f5ea164a14db07f37836867e98fe92050e97a7aeb532505 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_margulis, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, version=7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:37:02 standalone.localdomain systemd[1]: libpod-conmon-ae651f0f446aee808f5ea164a14db07f37836867e98fe92050e97a7aeb532505.scope: Deactivated successfully.
Oct 13 13:37:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b5894a8709ae3e8337a8f8b89f62afcd1c07d1c7838c899aacf9ba9684607a6c-merged.mount: Deactivated successfully.
Oct 13 13:37:02 standalone.localdomain podman[37318]: 2025-10-13 13:37:02.549503657 +0000 UTC m=+0.070154971 container create 086b7a959373bdfd89c89f70ae9d539e77cc0809d1972018974c678e9f82634e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_feistel, GIT_BRANCH=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.expose-services=, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, RELEASE=main, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55)
Oct 13 13:37:02 standalone.localdomain systemd[1]: Started libpod-conmon-086b7a959373bdfd89c89f70ae9d539e77cc0809d1972018974c678e9f82634e.scope.
Oct 13 13:37:02 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:37:02 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5efece38a790c9199261f5b9da25e775c2b196c1baf453f23ff1c7a192e2596/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:02 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5efece38a790c9199261f5b9da25e775c2b196c1baf453f23ff1c7a192e2596/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:02 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5efece38a790c9199261f5b9da25e775c2b196c1baf453f23ff1c7a192e2596/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:02 standalone.localdomain podman[37318]: 2025-10-13 13:37:02.527669204 +0000 UTC m=+0.048320488 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:37:02 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5efece38a790c9199261f5b9da25e775c2b196c1baf453f23ff1c7a192e2596/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:02 standalone.localdomain podman[37318]: 2025-10-13 13:37:02.635846834 +0000 UTC m=+0.156498118 container init 086b7a959373bdfd89c89f70ae9d539e77cc0809d1972018974c678e9f82634e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_feistel, io.openshift.expose-services=, io.buildah.version=1.33.12, ceph=True, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, release=553, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, distribution-scope=public, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:37:02 standalone.localdomain podman[37318]: 2025-10-13 13:37:02.642317883 +0000 UTC m=+0.162969167 container start 086b7a959373bdfd89c89f70ae9d539e77cc0809d1972018974c678e9f82634e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_feistel, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.component=rhceph-container, release=553, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., version=7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64)
Oct 13 13:37:02 standalone.localdomain podman[37318]: 2025-10-13 13:37:02.6425508 +0000 UTC m=+0.163202194 container attach 086b7a959373bdfd89c89f70ae9d539e77cc0809d1972018974c678e9f82634e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_feistel, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vcs-type=git, ceph=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, release=553, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, RELEASE=main, io.buildah.version=1.33.12)
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]: {
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:     "0": [
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:         {
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:             "devices": [
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:                 "/dev/loop3"
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:             ],
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:             "lv_name": "data-lv2",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:             "lv_path": "/dev/vg2/data-lv2",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:             "lv_size": "7511998464",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:             "lv_tags": "ceph.block_device=/dev/vg2/data-lv2,ceph.block_uuid=YBQJZH-3xxI-1qUK-zBIs-U7kv-8Q2m-ERDMVZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=627e7f45-65aa-56de-94df-66eaee84a56e,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b05d1f73-98ec-4f65-983d-44df16fccc62,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:             "lv_uuid": "YBQJZH-3xxI-1qUK-zBIs-U7kv-8Q2m-ERDMVZ",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:             "name": "data-lv2",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:             "path": "/dev/vg2/data-lv2",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:             "tags": {
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:                 "ceph.block_device": "/dev/vg2/data-lv2",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:                 "ceph.block_uuid": "YBQJZH-3xxI-1qUK-zBIs-U7kv-8Q2m-ERDMVZ",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:                 "ceph.cephx_lockbox_secret": "",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:                 "ceph.cluster_fsid": "627e7f45-65aa-56de-94df-66eaee84a56e",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:                 "ceph.cluster_name": "ceph",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:                 "ceph.crush_device_class": "",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:                 "ceph.encrypted": "0",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:                 "ceph.osd_fsid": "b05d1f73-98ec-4f65-983d-44df16fccc62",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:                 "ceph.osd_id": "0",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:                 "ceph.type": "block",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:                 "ceph.vdo": "0"
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:             },
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:             "type": "block",
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:             "vg_name": "vg2"
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:         }
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]:     ]
Oct 13 13:37:02 standalone.localdomain confident_feistel[37333]: }
Oct 13 13:37:02 standalone.localdomain systemd[1]: libpod-086b7a959373bdfd89c89f70ae9d539e77cc0809d1972018974c678e9f82634e.scope: Deactivated successfully.
Oct 13 13:37:02 standalone.localdomain podman[37318]: 2025-10-13 13:37:02.974544838 +0000 UTC m=+0.495196182 container died 086b7a959373bdfd89c89f70ae9d539e77cc0809d1972018974c678e9f82634e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_feistel, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, version=7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, architecture=x86_64, release=553, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph)
Oct 13 13:37:03 standalone.localdomain podman[37342]: 2025-10-13 13:37:03.046353098 +0000 UTC m=+0.063696421 container remove 086b7a959373bdfd89c89f70ae9d539e77cc0809d1972018974c678e9f82634e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_feistel, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, release=553, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, name=rhceph, vendor=Red Hat, Inc., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.tags=rhceph ceph, architecture=x86_64)
Oct 13 13:37:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:37:03 standalone.localdomain systemd[1]: libpod-conmon-086b7a959373bdfd89c89f70ae9d539e77cc0809d1972018974c678e9f82634e.scope: Deactivated successfully.
Oct 13 13:37:03 standalone.localdomain sudo[37224]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Oct 13 13:37:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Oct 13 13:37:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:37:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:37:03 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on standalone.localdomain
Oct 13 13:37:03 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on standalone.localdomain
Oct 13 13:37:03 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Oct 13 13:37:03 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:37:03 standalone.localdomain sudo[37355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:37:03 standalone.localdomain sudo[37355]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:37:03 standalone.localdomain sudo[37355]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:03 standalone.localdomain sudo[37370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:37:03 standalone.localdomain sudo[37370]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:37:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f5efece38a790c9199261f5b9da25e775c2b196c1baf453f23ff1c7a192e2596-merged.mount: Deactivated successfully.
Oct 13 13:37:03 standalone.localdomain podman[37429]: 2025-10-13 13:37:03.866796558 +0000 UTC m=+0.080195599 container create 7403a61cf68dafab97f6e47e64a3a8cfd123b8e24e29d130435024aa779d53aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_ramanujan, name=rhceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc., ceph=True, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, RELEASE=main, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git)
Oct 13 13:37:03 standalone.localdomain systemd[1]: Started libpod-conmon-7403a61cf68dafab97f6e47e64a3a8cfd123b8e24e29d130435024aa779d53aa.scope.
Oct 13 13:37:03 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:37:03 standalone.localdomain podman[37429]: 2025-10-13 13:37:03.835310849 +0000 UTC m=+0.048709920 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:37:03 standalone.localdomain podman[37429]: 2025-10-13 13:37:03.947852812 +0000 UTC m=+0.161251853 container init 7403a61cf68dafab97f6e47e64a3a8cfd123b8e24e29d130435024aa779d53aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_ramanujan, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, vcs-type=git, ceph=True, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container)
Oct 13 13:37:03 standalone.localdomain systemd[1]: tmp-crun.AgxHRm.mount: Deactivated successfully.
Oct 13 13:37:03 standalone.localdomain podman[37429]: 2025-10-13 13:37:03.961635626 +0000 UTC m=+0.175034667 container start 7403a61cf68dafab97f6e47e64a3a8cfd123b8e24e29d130435024aa779d53aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_ramanujan, distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_CLEAN=True, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55)
Oct 13 13:37:03 standalone.localdomain podman[37429]: 2025-10-13 13:37:03.962303327 +0000 UTC m=+0.175702409 container attach 7403a61cf68dafab97f6e47e64a3a8cfd123b8e24e29d130435024aa779d53aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_ramanujan, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph)
Oct 13 13:37:03 standalone.localdomain nifty_ramanujan[37444]: 167 167
Oct 13 13:37:03 standalone.localdomain systemd[1]: libpod-7403a61cf68dafab97f6e47e64a3a8cfd123b8e24e29d130435024aa779d53aa.scope: Deactivated successfully.
Oct 13 13:37:03 standalone.localdomain podman[37429]: 2025-10-13 13:37:03.968700224 +0000 UTC m=+0.182099295 container died 7403a61cf68dafab97f6e47e64a3a8cfd123b8e24e29d130435024aa779d53aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_ramanujan, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, build-date=2025-09-24T08:57:55, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, com.redhat.component=rhceph-container, vcs-type=git, GIT_BRANCH=main, release=553, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc.)
Oct 13 13:37:04 standalone.localdomain podman[37449]: 2025-10-13 13:37:04.051799092 +0000 UTC m=+0.075010830 container remove 7403a61cf68dafab97f6e47e64a3a8cfd123b8e24e29d130435024aa779d53aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_ramanujan, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:37:04 standalone.localdomain systemd[1]: libpod-conmon-7403a61cf68dafab97f6e47e64a3a8cfd123b8e24e29d130435024aa779d53aa.scope: Deactivated successfully.
Oct 13 13:37:04 standalone.localdomain ceph-mon[29756]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:37:04 standalone.localdomain ceph-mon[29756]: Deploying daemon osd.0 on standalone.localdomain
Oct 13 13:37:04 standalone.localdomain podman[37477]: 2025-10-13 13:37:04.353878168 +0000 UTC m=+0.071768789 container create abb00c67e818a448d6cb513ff01339646773a22edeb6f17989e03a2cd1fd85af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate-test, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, version=7, name=rhceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:37:04 standalone.localdomain systemd[1]: Started libpod-conmon-abb00c67e818a448d6cb513ff01339646773a22edeb6f17989e03a2cd1fd85af.scope.
Oct 13 13:37:04 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:37:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e358ce0ae4908a0120d98e02d5fd87d73139d0f541fb05c9300694272d478a0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:04 standalone.localdomain podman[37477]: 2025-10-13 13:37:04.327535297 +0000 UTC m=+0.045425848 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:37:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e358ce0ae4908a0120d98e02d5fd87d73139d0f541fb05c9300694272d478a0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e358ce0ae4908a0120d98e02d5fd87d73139d0f541fb05c9300694272d478a0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4a3d1e3bc7a5343cb91ca1616c8c198edd6808e4888431bf3d9ccbed9673be5e-merged.mount: Deactivated successfully.
Oct 13 13:37:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e358ce0ae4908a0120d98e02d5fd87d73139d0f541fb05c9300694272d478a0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e358ce0ae4908a0120d98e02d5fd87d73139d0f541fb05c9300694272d478a0a/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:04 standalone.localdomain podman[37477]: 2025-10-13 13:37:04.476545303 +0000 UTC m=+0.194435884 container init abb00c67e818a448d6cb513ff01339646773a22edeb6f17989e03a2cd1fd85af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate-test, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, name=rhceph, GIT_CLEAN=True, release=553)
Oct 13 13:37:04 standalone.localdomain podman[37477]: 2025-10-13 13:37:04.486583103 +0000 UTC m=+0.204473684 container start abb00c67e818a448d6cb513ff01339646773a22edeb6f17989e03a2cd1fd85af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate-test, distribution-scope=public, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.tags=rhceph ceph, release=553, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, architecture=x86_64, version=7, name=rhceph, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12)
Oct 13 13:37:04 standalone.localdomain podman[37477]: 2025-10-13 13:37:04.486870491 +0000 UTC m=+0.204761112 container attach abb00c67e818a448d6cb513ff01339646773a22edeb6f17989e03a2cd1fd85af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate-test, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.buildah.version=1.33.12, RELEASE=main, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True)
Oct 13 13:37:04 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate-test[37493]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 13 13:37:04 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate-test[37493]:                             [--no-systemd] [--no-tmpfs]
Oct 13 13:37:04 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate-test[37493]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 13 13:37:04 standalone.localdomain systemd[1]: libpod-abb00c67e818a448d6cb513ff01339646773a22edeb6f17989e03a2cd1fd85af.scope: Deactivated successfully.
Oct 13 13:37:04 standalone.localdomain podman[37477]: 2025-10-13 13:37:04.725165555 +0000 UTC m=+0.443056146 container died abb00c67e818a448d6cb513ff01339646773a22edeb6f17989e03a2cd1fd85af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate-test, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, RELEASE=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, CEPH_POINT_RELEASE=)
Oct 13 13:37:04 standalone.localdomain systemd[1]: tmp-crun.mtPpeN.mount: Deactivated successfully.
Oct 13 13:37:04 standalone.localdomain podman[37498]: 2025-10-13 13:37:04.817916139 +0000 UTC m=+0.084558233 container remove abb00c67e818a448d6cb513ff01339646773a22edeb6f17989e03a2cd1fd85af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate-test, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, RELEASE=main, build-date=2025-09-24T08:57:55, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.buildah.version=1.33.12)
Oct 13 13:37:04 standalone.localdomain systemd[1]: libpod-conmon-abb00c67e818a448d6cb513ff01339646773a22edeb6f17989e03a2cd1fd85af.scope: Deactivated successfully.
Oct 13 13:37:05 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:37:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:37:05 standalone.localdomain systemd-sysv-generator[37557]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:37:05 standalone.localdomain systemd-rc-local-generator[37551]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:37:05 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:37:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e358ce0ae4908a0120d98e02d5fd87d73139d0f541fb05c9300694272d478a0a-merged.mount: Deactivated successfully.
Oct 13 13:37:05 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:37:05 standalone.localdomain systemd-sysv-generator[37598]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:37:05 standalone.localdomain systemd-rc-local-generator[37593]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:37:05 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:37:05 standalone.localdomain systemd[1]: Starting Ceph osd.0 for 627e7f45-65aa-56de-94df-66eaee84a56e...
Oct 13 13:37:05 standalone.localdomain podman[37658]: 2025-10-13 13:37:05.915696155 +0000 UTC m=+0.070580833 container create aa063390e39c6adf224a712510b756690f917acec0283716f7081f4cad50779d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate, GIT_CLEAN=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, ceph=True, distribution-scope=public, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:37:05 standalone.localdomain systemd[1]: tmp-crun.HD3Yv3.mount: Deactivated successfully.
Oct 13 13:37:05 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:37:05 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2836b5bb5231cbdbd190070c7892953d75c8443d809cc23413f501bd5d404d54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:05 standalone.localdomain podman[37658]: 2025-10-13 13:37:05.887033713 +0000 UTC m=+0.041918461 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:37:05 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2836b5bb5231cbdbd190070c7892953d75c8443d809cc23413f501bd5d404d54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:06 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2836b5bb5231cbdbd190070c7892953d75c8443d809cc23413f501bd5d404d54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:06 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2836b5bb5231cbdbd190070c7892953d75c8443d809cc23413f501bd5d404d54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:06 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2836b5bb5231cbdbd190070c7892953d75c8443d809cc23413f501bd5d404d54/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:06 standalone.localdomain podman[37658]: 2025-10-13 13:37:06.030282141 +0000 UTC m=+0.185166849 container init aa063390e39c6adf224a712510b756690f917acec0283716f7081f4cad50779d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, version=7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, name=rhceph, description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, release=553, io.buildah.version=1.33.12, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:37:06 standalone.localdomain podman[37658]: 2025-10-13 13:37:06.040725253 +0000 UTC m=+0.195609941 container start aa063390e39c6adf224a712510b756690f917acec0283716f7081f4cad50779d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate, io.openshift.expose-services=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, CEPH_POINT_RELEASE=, release=553, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, version=7)
Oct 13 13:37:06 standalone.localdomain podman[37658]: 2025-10-13 13:37:06.04094968 +0000 UTC m=+0.195834368 container attach aa063390e39c6adf224a712510b756690f917acec0283716f7081f4cad50779d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:37:06 standalone.localdomain ceph-mon[29756]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:37:06 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate[37672]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 13 13:37:06 standalone.localdomain bash[37658]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 13 13:37:06 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate[37672]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/vg2-data--lv2
Oct 13 13:37:06 standalone.localdomain bash[37658]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/vg2-data--lv2
Oct 13 13:37:06 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate[37672]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/vg2-data--lv2
Oct 13 13:37:06 standalone.localdomain bash[37658]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/vg2-data--lv2
Oct 13 13:37:06 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate[37672]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 13 13:37:06 standalone.localdomain bash[37658]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 13 13:37:06 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate[37672]: Running command: /usr/bin/ln -s /dev/mapper/vg2-data--lv2 /var/lib/ceph/osd/ceph-0/block
Oct 13 13:37:06 standalone.localdomain bash[37658]: Running command: /usr/bin/ln -s /dev/mapper/vg2-data--lv2 /var/lib/ceph/osd/ceph-0/block
Oct 13 13:37:06 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate[37672]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 13 13:37:06 standalone.localdomain bash[37658]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 13 13:37:06 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate[37672]: --> ceph-volume raw activate successful for osd ID: 0
Oct 13 13:37:06 standalone.localdomain bash[37658]: --> ceph-volume raw activate successful for osd ID: 0
Oct 13 13:37:06 standalone.localdomain systemd[1]: libpod-aa063390e39c6adf224a712510b756690f917acec0283716f7081f4cad50779d.scope: Deactivated successfully.
Oct 13 13:37:06 standalone.localdomain podman[37799]: 2025-10-13 13:37:06.820662726 +0000 UTC m=+0.047198394 container died aa063390e39c6adf224a712510b756690f917acec0283716f7081f4cad50779d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate, ceph=True, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_BRANCH=main, name=rhceph, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553)
Oct 13 13:37:06 standalone.localdomain systemd[1]: tmp-crun.7qFw5K.mount: Deactivated successfully.
Oct 13 13:37:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2836b5bb5231cbdbd190070c7892953d75c8443d809cc23413f501bd5d404d54-merged.mount: Deactivated successfully.
Oct 13 13:37:06 standalone.localdomain podman[37799]: 2025-10-13 13:37:06.858568912 +0000 UTC m=+0.085104530 container remove aa063390e39c6adf224a712510b756690f917acec0283716f7081f4cad50779d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0-activate, RELEASE=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, release=553, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, architecture=x86_64, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:37:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:37:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:37:07 standalone.localdomain podman[37861]: 2025-10-13 13:37:07.159874345 +0000 UTC m=+0.073323617 container create 573915c9ea97d0b8390376409db64ef054af183df4e04cabf85661fb82b3aae4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0, architecture=x86_64, release=553, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, com.redhat.component=rhceph-container, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 13:37:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e184ecfe6fa01cbce3e5cb24cce564677f2d1d3b609f8f9cd323c8c7f90f4ad2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:07 standalone.localdomain podman[37861]: 2025-10-13 13:37:07.130606335 +0000 UTC m=+0.044055657 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:37:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e184ecfe6fa01cbce3e5cb24cce564677f2d1d3b609f8f9cd323c8c7f90f4ad2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e184ecfe6fa01cbce3e5cb24cce564677f2d1d3b609f8f9cd323c8c7f90f4ad2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e184ecfe6fa01cbce3e5cb24cce564677f2d1d3b609f8f9cd323c8c7f90f4ad2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e184ecfe6fa01cbce3e5cb24cce564677f2d1d3b609f8f9cd323c8c7f90f4ad2/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:07 standalone.localdomain podman[37861]: 2025-10-13 13:37:07.277949689 +0000 UTC m=+0.191398951 container init 573915c9ea97d0b8390376409db64ef054af183df4e04cabf85661fb82b3aae4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, release=553, description=Red Hat Ceph Storage 7, vcs-type=git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph)
Oct 13 13:37:07 standalone.localdomain podman[37861]: 2025-10-13 13:37:07.286316637 +0000 UTC m=+0.199765899 container start 573915c9ea97d0b8390376409db64ef054af183df4e04cabf85661fb82b3aae4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., release=553, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, distribution-scope=public)
Oct 13 13:37:07 standalone.localdomain bash[37861]: 573915c9ea97d0b8390376409db64ef054af183df4e04cabf85661fb82b3aae4
Oct 13 13:37:07 standalone.localdomain systemd[1]: Started Ceph osd.0 for 627e7f45-65aa-56de-94df-66eaee84a56e.
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: set uid:gid to 167:167 (ceph:ceph)
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: pidfile_write: ignore empty --pid-file
Oct 13 13:37:07 standalone.localdomain sudo[37370]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: bdev(0x55a742816e00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: bdev(0x55a742816e00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: bdev(0x55a742816e00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: bdev(0x55a742817180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: bdev(0x55a742817180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: bdev(0x55a742817180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: bdev(0x55a742817180 /var/lib/ceph/osd/ceph-0/block) close
Oct 13 13:37:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:37:07 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:37:07 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:07 standalone.localdomain sudo[37891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:37:07 standalone.localdomain sudo[37891]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:37:07 standalone.localdomain sudo[37891]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:07 standalone.localdomain sudo[37906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -- raw list --format json
Oct 13 13:37:07 standalone.localdomain sudo[37906]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: bdev(0x55a742816e00 /var/lib/ceph/osd/ceph-0/block) close
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct 13 13:37:07 standalone.localdomain ceph-mon[29756]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:37:07 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:07 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: load: jerasure load: lrc 
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: bdev(0x55a742816e00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: bdev(0x55a742816e00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: bdev(0x55a742816e00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Oct 13 13:37:07 standalone.localdomain ceph-osd[37878]: bdev(0x55a742816e00 /var/lib/ceph/osd/ceph-0/block) close
Oct 13 13:37:08 standalone.localdomain podman[37968]: 
Oct 13 13:37:08 standalone.localdomain podman[37968]: 2025-10-13 13:37:08.050135564 +0000 UTC m=+0.048894146 container create 433ec591a0d8f0f2dcd04967b74d35463bd71aaea222aa5d11690edbaa2596d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_bose, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-type=git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64)
Oct 13 13:37:08 standalone.localdomain systemd[1]: Started libpod-conmon-433ec591a0d8f0f2dcd04967b74d35463bd71aaea222aa5d11690edbaa2596d6.scope.
Oct 13 13:37:08 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:37:08 standalone.localdomain podman[37968]: 2025-10-13 13:37:08.115909018 +0000 UTC m=+0.114667600 container init 433ec591a0d8f0f2dcd04967b74d35463bd71aaea222aa5d11690edbaa2596d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_bose, vcs-type=git, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:37:08 standalone.localdomain podman[37968]: 2025-10-13 13:37:08.126135233 +0000 UTC m=+0.124893815 container start 433ec591a0d8f0f2dcd04967b74d35463bd71aaea222aa5d11690edbaa2596d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_bose, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.buildah.version=1.33.12, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, CEPH_POINT_RELEASE=, RELEASE=main)
Oct 13 13:37:08 standalone.localdomain podman[37968]: 2025-10-13 13:37:08.126267827 +0000 UTC m=+0.125026409 container attach 433ec591a0d8f0f2dcd04967b74d35463bd71aaea222aa5d11690edbaa2596d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_bose, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git, RELEASE=main, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, CEPH_POINT_RELEASE=, release=553)
Oct 13 13:37:08 standalone.localdomain exciting_bose[37984]: 167 167
Oct 13 13:37:08 standalone.localdomain systemd[1]: libpod-433ec591a0d8f0f2dcd04967b74d35463bd71aaea222aa5d11690edbaa2596d6.scope: Deactivated successfully.
Oct 13 13:37:08 standalone.localdomain podman[37968]: 2025-10-13 13:37:08.130370143 +0000 UTC m=+0.129128815 container died 433ec591a0d8f0f2dcd04967b74d35463bd71aaea222aa5d11690edbaa2596d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_bose, distribution-scope=public, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_CLEAN=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:37:08 standalone.localdomain podman[37968]: 2025-10-13 13:37:08.032971366 +0000 UTC m=+0.031729958 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bdev(0x55a742816e00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bdev(0x55a742816e00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bdev(0x55a742816e00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bdev(0x55a742816e00 /var/lib/ceph/osd/ceph-0/block) close
Oct 13 13:37:08 standalone.localdomain podman[37989]: 2025-10-13 13:37:08.204574477 +0000 UTC m=+0.064588399 container remove 433ec591a0d8f0f2dcd04967b74d35463bd71aaea222aa5d11690edbaa2596d6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_bose, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, name=rhceph, com.redhat.component=rhceph-container, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, distribution-scope=public)
Oct 13 13:37:08 standalone.localdomain systemd[1]: libpod-conmon-433ec591a0d8f0f2dcd04967b74d35463bd71aaea222aa5d11690edbaa2596d6.scope: Deactivated successfully.
Oct 13 13:37:08 standalone.localdomain podman[38013]: 
Oct 13 13:37:08 standalone.localdomain podman[38013]: 2025-10-13 13:37:08.386976431 +0000 UTC m=+0.073841194 container create 77d38a854a98f8a08ac1d8726c4201307c0cef18486c326ebc150876e6913eb9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_lumiere, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, release=553, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7)
Oct 13 13:37:08 standalone.localdomain systemd[1]: Started libpod-conmon-77d38a854a98f8a08ac1d8726c4201307c0cef18486c326ebc150876e6913eb9.scope.
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bdev(0x55a742816e00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bdev(0x55a742816e00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bdev(0x55a742816e00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bdev(0x55a742817180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bdev(0x55a742817180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bdev(0x55a742817180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluefs mount
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluefs mount shared_bdev_used = 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct 13 13:37:08 standalone.localdomain podman[38013]: 2025-10-13 13:37:08.358551426 +0000 UTC m=+0.045416179 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: RocksDB version: 7.9.2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Git sha 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Compile date 2025-09-23 00:00:00
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: DB SUMMARY
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: DB Session ID:  O5Z7BDAR7WYULNIYDCR1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: CURRENT file:  CURRENT
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: IDENTITY file:  IDENTITY
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                         Options.error_if_exists: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.create_if_missing: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                         Options.paranoid_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                                     Options.env: 0x55a742aaad20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                                Options.info_log: 0x55a74379c480
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_file_opening_threads: 16
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                              Options.statistics: (nil)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.use_fsync: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.max_log_file_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                         Options.allow_fallocate: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.use_direct_reads: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.create_missing_column_families: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                              Options.db_log_dir: 
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                                 Options.wal_dir: db.wal
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.advise_random_on_open: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.write_buffer_manager: 0x55a742800140
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                            Options.rate_limiter: (nil)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.unordered_write: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.row_cache: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                              Options.wal_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.allow_ingest_behind: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.two_write_queues: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.manual_wal_flush: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.wal_compression: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.atomic_flush: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.log_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.allow_data_in_errors: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.db_host_id: __hostname__
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.max_background_jobs: 4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.max_background_compactions: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.max_subcompactions: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.max_open_files: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.bytes_per_sync: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.max_background_flushes: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Compression algorithms supported:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kZSTD supported: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kXpressCompression supported: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kBZip2Compression supported: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kLZ4Compression supported: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kZlibCompression supported: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kLZ4HCCompression supported: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kSnappyCompression supported: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a74379c640)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427eedd0
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 483183820
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: 
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a74379c640)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427eedd0
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 483183820
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a74379c640)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427eedd0
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 483183820
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a74379c640)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427eedd0
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 483183820
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a74379c640)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427eedd0
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 483183820
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a74379c640)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427eedd0
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 483183820
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b95ce32c38d9a89a96eaccfe0082d8f66ca7ab7da773b76048ea01220e57f39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a74379c640)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427eedd0
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 483183820
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a74379c860)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427ee430
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 536870912
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a74379c860)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427ee430
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 536870912
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a74379c860)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427ee430
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 536870912
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b95ce32c38d9a89a96eaccfe0082d8f66ca7ab7da773b76048ea01220e57f39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4368670c-df6a-4181-b13a-1a2680d93301
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760362628493024, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760362628493557, "job": 1, "event": "recovery_finished"}
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: freelist init
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: freelist _read_cfg
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluefs umount
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bdev(0x55a742817180 /var/lib/ceph/osd/ceph-0/block) close
Oct 13 13:37:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b95ce32c38d9a89a96eaccfe0082d8f66ca7ab7da773b76048ea01220e57f39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b95ce32c38d9a89a96eaccfe0082d8f66ca7ab7da773b76048ea01220e57f39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:08 standalone.localdomain podman[38013]: 2025-10-13 13:37:08.534063448 +0000 UTC m=+0.220928211 container init 77d38a854a98f8a08ac1d8726c4201307c0cef18486c326ebc150876e6913eb9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_lumiere, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, version=7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, release=553, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:37:08 standalone.localdomain podman[38013]: 2025-10-13 13:37:08.544491458 +0000 UTC m=+0.231356201 container start 77d38a854a98f8a08ac1d8726c4201307c0cef18486c326ebc150876e6913eb9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_lumiere, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, architecture=x86_64, version=7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, name=rhceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:37:08 standalone.localdomain podman[38013]: 2025-10-13 13:37:08.544786707 +0000 UTC m=+0.231651520 container attach 77d38a854a98f8a08ac1d8726c4201307c0cef18486c326ebc150876e6913eb9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_lumiere, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, vcs-type=git, name=rhceph, version=7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, release=553, GIT_BRANCH=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, ceph=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:37:08 standalone.localdomain systemd[1]: tmp-crun.vB20gR.mount: Deactivated successfully.
Oct 13 13:37:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a151c08619c3e4f51c32b711d4de1aafcc40a3d11e0c7b8af34f8d1fbf159cf9-merged.mount: Deactivated successfully.
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bdev(0x55a742817180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bdev(0x55a742817180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bdev(0x55a742817180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluefs mount
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluefs mount shared_bdev_used = 4718592
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: RocksDB version: 7.9.2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Git sha 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Compile date 2025-09-23 00:00:00
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: DB SUMMARY
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: DB Session ID:  O5Z7BDAR7WYULNIYDCR0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: CURRENT file:  CURRENT
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: IDENTITY file:  IDENTITY
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                         Options.error_if_exists: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.create_if_missing: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                         Options.paranoid_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                                     Options.env: 0x55a742aab730
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                                Options.info_log: 0x55a74379ca20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_file_opening_threads: 16
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                              Options.statistics: (nil)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.use_fsync: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.max_log_file_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                         Options.allow_fallocate: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.use_direct_reads: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.create_missing_column_families: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                              Options.db_log_dir: 
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                                 Options.wal_dir: db.wal
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.advise_random_on_open: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.write_buffer_manager: 0x55a7428015e0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                            Options.rate_limiter: (nil)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.unordered_write: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.row_cache: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                              Options.wal_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.allow_ingest_behind: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.two_write_queues: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.manual_wal_flush: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.wal_compression: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.atomic_flush: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.log_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.allow_data_in_errors: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.db_host_id: __hostname__
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.max_background_jobs: 4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.max_background_compactions: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.max_subcompactions: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.max_open_files: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.bytes_per_sync: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.max_background_flushes: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Compression algorithms supported:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kZSTD supported: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kXpressCompression supported: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kBZip2Compression supported: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kLZ4Compression supported: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kZlibCompression supported: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kLZ4HCCompression supported: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         kSnappyCompression supported: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a743822a80)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427ee430
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 483183820
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: 
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a743822a80)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427ee430
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 483183820
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a743822a80)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427ee430
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 483183820
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a743822a80)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427ee430
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 483183820
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a743822a80)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427ee430
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 483183820
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a743822a80)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427ee430
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 483183820
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a743822a80)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427ee430
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 483183820
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a743822bc0)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427eedd0
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 536870912
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a743822bc0)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427eedd0
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 536870912
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:           Options.merge_operator: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.compaction_filter_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.sst_partitioner_factory: None
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a743822bc0)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x55a7427eedd0
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 536870912
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.write_buffer_size: 16777216
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.max_write_buffer_number: 64
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.compression: LZ4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.num_levels: 7
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.level: 32767
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.compression_opts.strategy: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                  Options.compression_opts.enabled: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.arena_block_size: 1048576
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.disable_auto_compactions: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.inplace_update_support: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.bloom_locality: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                    Options.max_successive_merges: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.paranoid_file_checks: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.force_consistency_checks: 1
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.report_bg_io_stats: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                               Options.ttl: 2592000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                       Options.enable_blob_files: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                           Options.min_blob_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                          Options.blob_file_size: 268435456
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb:                Options.blob_file_starting_level: 0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4368670c-df6a-4181-b13a-1a2680d93301
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760362628745818, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760362628766883, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1259, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362628, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4368670c-df6a-4181-b13a-1a2680d93301", "db_session_id": "O5Z7BDAR7WYULNIYDCR0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760362628771071, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1607, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362628, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4368670c-df6a-4181-b13a-1a2680d93301", "db_session_id": "O5Z7BDAR7WYULNIYDCR0", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760362628774597, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1288, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362628, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4368670c-df6a-4181-b13a-1a2680d93301", "db_session_id": "O5Z7BDAR7WYULNIYDCR0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760362628778520, "job": 1, "event": "recovery_finished"}
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a742817500
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: DB pointer 0x55a7436f3a00
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 0.1 total, 0.1 interval
                                                        Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                        Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                        Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                                         Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 0.1 total, 0.1 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
                                                        
                                                        ** Compaction Stats [m-0] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [m-0] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 0.1 total, 0.1 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [m-0] **
                                                        
                                                        ** Compaction Stats [m-1] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [m-1] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 0.1 total, 0.1 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [m-1] **
                                                        
                                                        ** Compaction Stats [m-2] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [m-2] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 0.1 total, 0.1 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [m-2] **
                                                        
                                                        ** Compaction Stats [p-0] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [p-0] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 0.1 total, 0.1 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [p-0] **
                                                        
                                                        ** Compaction Stats [p-1] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [p-1] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 0.1 total, 0.1 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [p-1] **
                                                        
                                                        ** Compaction Stats [p-2] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [p-2] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 0.1 total, 0.1 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [p-2] **
                                                        
                                                        ** Compaction Stats [O-0] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [O-0] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 0.1 total, 0.1 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427eedd0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                                        Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [O-0] **
                                                        
                                                        ** Compaction Stats [O-1] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [O-1] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 0.1 total, 0.1 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427eedd0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                                        Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [O-1] **
                                                        
                                                        ** Compaction Stats [O-2] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                         Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [O-2] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 0.1 total, 0.1 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427eedd0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                                        Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [O-2] **
                                                        
                                                        ** Compaction Stats [L] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [L] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 0.1 total, 0.1 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [L] **
                                                        
                                                        ** Compaction Stats [P] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [P] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 0.1 total, 0.1 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [P] **
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: _get_class not permitted to load lua
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: _get_class not permitted to load sdk
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: _get_class not permitted to load test_remote_reads
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: osd.0 0 load_pgs
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: osd.0 0 load_pgs opened 0 pgs
Oct 13 13:37:08 standalone.localdomain ceph-osd[37878]: osd.0 0 log_to_monitors true
Oct 13 13:37:08 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0[37874]: 2025-10-13T13:37:08.817+0000 7f16c8e87a80 -1 osd.0 0 log_to_monitors true
Oct 13 13:37:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Oct 13 13:37:08 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='osd.0 [v2:172.18.0.100:6802/3642281347,v1:172.18.0.100:6803/3642281347]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Oct 13 13:37:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e3 do_prune osdmap full prune enabled
Oct 13 13:37:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 13 13:37:08 standalone.localdomain ceph-mon[29756]: from='osd.0 [v2:172.18.0.100:6802/3642281347,v1:172.18.0.100:6803/3642281347]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Oct 13 13:37:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct 13 13:37:08 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='osd.0 [v2:172.18.0.100:6802/3642281347,v1:172.18.0.100:6803/3642281347]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 13 13:37:08 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct 13 13:37:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 13 13:37:08 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:37:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=standalone", "root=default"]} v 0)
Oct 13 13:37:08 standalone.localdomain ceph-mgr[29999]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 13 13:37:08 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='osd.0 [v2:172.18.0.100:6802/3642281347,v1:172.18.0.100:6803/3642281347]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=standalone", "root=default"]} : dispatch
Oct 13 13:37:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e4 create-or-move crush item name 'osd.0' initial_weight 0.0068 at location {host=standalone,root=default}
Oct 13 13:37:09 standalone.localdomain vigilant_lumiere[38028]: {
Oct 13 13:37:09 standalone.localdomain vigilant_lumiere[38028]:     "b05d1f73-98ec-4f65-983d-44df16fccc62": {
Oct 13 13:37:09 standalone.localdomain vigilant_lumiere[38028]:         "ceph_fsid": "627e7f45-65aa-56de-94df-66eaee84a56e",
Oct 13 13:37:09 standalone.localdomain vigilant_lumiere[38028]:         "device": "/dev/mapper/vg2-data--lv2",
Oct 13 13:37:09 standalone.localdomain vigilant_lumiere[38028]:         "osd_id": 0,
Oct 13 13:37:09 standalone.localdomain vigilant_lumiere[38028]:         "osd_uuid": "b05d1f73-98ec-4f65-983d-44df16fccc62",
Oct 13 13:37:09 standalone.localdomain vigilant_lumiere[38028]:         "type": "bluestore"
Oct 13 13:37:09 standalone.localdomain vigilant_lumiere[38028]:     }
Oct 13 13:37:09 standalone.localdomain vigilant_lumiere[38028]: }
Oct 13 13:37:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:37:09 standalone.localdomain systemd[1]: libpod-77d38a854a98f8a08ac1d8726c4201307c0cef18486c326ebc150876e6913eb9.scope: Deactivated successfully.
Oct 13 13:37:09 standalone.localdomain podman[38013]: 2025-10-13 13:37:09.078035779 +0000 UTC m=+0.764900532 container died 77d38a854a98f8a08ac1d8726c4201307c0cef18486c326ebc150876e6913eb9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_lumiere, GIT_BRANCH=main, io.buildah.version=1.33.12, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:37:09 standalone.localdomain systemd[1]: tmp-crun.iLCgZg.mount: Deactivated successfully.
Oct 13 13:37:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1b95ce32c38d9a89a96eaccfe0082d8f66ca7ab7da773b76048ea01220e57f39-merged.mount: Deactivated successfully.
Oct 13 13:37:09 standalone.localdomain podman[38467]: 2025-10-13 13:37:09.176015994 +0000 UTC m=+0.088640149 container remove 77d38a854a98f8a08ac1d8726c4201307c0cef18486c326ebc150876e6913eb9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_lumiere, name=rhceph, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, version=7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, RELEASE=main, distribution-scope=public, build-date=2025-09-24T08:57:55, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:37:09 standalone.localdomain systemd[1]: libpod-conmon-77d38a854a98f8a08ac1d8726c4201307c0cef18486c326ebc150876e6913eb9.scope: Deactivated successfully.
Oct 13 13:37:09 standalone.localdomain sudo[37906]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:09 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 2604a9ac-729e-4efe-a25a-87d5d9651d32 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:37:09 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 2604a9ac-729e-4efe-a25a-87d5d9651d32 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:37:09 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 2604a9ac-729e-4efe-a25a-87d5d9651d32 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-proxy}] v 0)
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:09 standalone.localdomain sudo[38482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:37:09 standalone.localdomain sudo[38482]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:37:09 standalone.localdomain sudo[38482]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:09 standalone.localdomain sudo[38497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:37:09 standalone.localdomain sudo[38497]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:37:09 standalone.localdomain sudo[38497]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:09 standalone.localdomain sudo[38512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 13:37:09 standalone.localdomain sudo[38512]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:37:09 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 13 13:37:09 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e4 do_prune osdmap full prune enabled
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e5 e5: 1 total, 0 up, 1 in
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='osd.0 [v2:172.18.0.100:6802/3642281347,v1:172.18.0.100:6803/3642281347]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=standalone", "root=default"]}]': finished
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e5: 1 total, 0 up, 1 in
Oct 13 13:37:09 standalone.localdomain ceph-osd[37878]: osd.0 0 done with init, starting boot process
Oct 13 13:37:09 standalone.localdomain ceph-osd[37878]: osd.0 0 start_boot
Oct 13 13:37:09 standalone.localdomain ceph-osd[37878]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 13 13:37:09 standalone.localdomain ceph-osd[37878]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 13 13:37:09 standalone.localdomain ceph-osd[37878]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 13 13:37:09 standalone.localdomain ceph-osd[37878]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 13 13:37:09 standalone.localdomain ceph-osd[37878]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:37:09 standalone.localdomain ceph-mgr[29999]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: from='osd.0 [v2:172.18.0.100:6802/3642281347,v1:172.18.0.100:6803/3642281347]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: osdmap e4: 1 total, 0 up, 1 in
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: from='osd.0 [v2:172.18.0.100:6802/3642281347,v1:172.18.0.100:6803/3642281347]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=standalone", "root=default"]} : dispatch
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:09 standalone.localdomain ceph-mgr[29999]: mgr.server handle_open ignoring open from osd.0 v2:172.18.0.100:6802/3642281347; not ready for session (expect reconnect)
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 13 13:37:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:37:09 standalone.localdomain ceph-mgr[29999]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 13 13:37:10 standalone.localdomain systemd[1]: tmp-crun.wcNJSm.mount: Deactivated successfully.
Oct 13 13:37:10 standalone.localdomain podman[38595]: 2025-10-13 13:37:10.211683887 +0000 UTC m=+0.084514971 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, com.redhat.component=rhceph-container, GIT_BRANCH=main, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=)
Oct 13 13:37:10 standalone.localdomain podman[38595]: 2025-10-13 13:37:10.300671296 +0000 UTC m=+0.173502300 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, release=553, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, version=7, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git)
Oct 13 13:37:10 standalone.localdomain sudo[38512]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:37:10 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:37:10 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:10 standalone.localdomain sudo[38674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:37:10 standalone.localdomain sudo[38674]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:37:10 standalone.localdomain sudo[38674]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:10 standalone.localdomain sudo[38689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -- inventory --format=json-pretty --filter-for-batch
Oct 13 13:37:10 standalone.localdomain sudo[38689]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:37:10 standalone.localdomain ceph-mgr[29999]: mgr.server handle_open ignoring open from osd.0 v2:172.18.0.100:6802/3642281347; not ready for session (expect reconnect)
Oct 13 13:37:10 standalone.localdomain ceph-mon[29756]: from='osd.0 [v2:172.18.0.100:6802/3642281347,v1:172.18.0.100:6803/3642281347]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=standalone", "root=default"]}]': finished
Oct 13 13:37:10 standalone.localdomain ceph-mon[29756]: osdmap e5: 1 total, 0 up, 1 in
Oct 13 13:37:10 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:37:10 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:37:10 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:10 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 13 13:37:10 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:37:10 standalone.localdomain ceph-mgr[29999]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 13 13:37:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:37:11 standalone.localdomain podman[38745]: 
Oct 13 13:37:11 standalone.localdomain podman[38745]: 2025-10-13 13:37:11.25478064 +0000 UTC m=+0.059735170 container create e2612b27f777015b48b7c9a6d6ca4205227e8725b321809fed1e165d86b0b6f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_wilbur, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, RELEASE=main, version=7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, ceph=True, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:37:11 standalone.localdomain systemd[1]: Started libpod-conmon-e2612b27f777015b48b7c9a6d6ca4205227e8725b321809fed1e165d86b0b6f1.scope.
Oct 13 13:37:11 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:37:11 standalone.localdomain podman[38745]: 2025-10-13 13:37:11.230249415 +0000 UTC m=+0.035203945 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:37:11 standalone.localdomain podman[38745]: 2025-10-13 13:37:11.343318525 +0000 UTC m=+0.148273095 container init e2612b27f777015b48b7c9a6d6ca4205227e8725b321809fed1e165d86b0b6f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_wilbur, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, release=553, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_CLEAN=True)
Oct 13 13:37:11 standalone.localdomain unruffled_wilbur[38760]: 167 167
Oct 13 13:37:11 standalone.localdomain systemd[1]: libpod-e2612b27f777015b48b7c9a6d6ca4205227e8725b321809fed1e165d86b0b6f1.scope: Deactivated successfully.
Oct 13 13:37:11 standalone.localdomain podman[38745]: 2025-10-13 13:37:11.360195054 +0000 UTC m=+0.165149584 container start e2612b27f777015b48b7c9a6d6ca4205227e8725b321809fed1e165d86b0b6f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_wilbur, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, release=553, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container, GIT_BRANCH=main)
Oct 13 13:37:11 standalone.localdomain podman[38745]: 2025-10-13 13:37:11.360643879 +0000 UTC m=+0.165598519 container attach e2612b27f777015b48b7c9a6d6ca4205227e8725b321809fed1e165d86b0b6f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_wilbur, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, com.redhat.component=rhceph-container, release=553, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=)
Oct 13 13:37:11 standalone.localdomain podman[38745]: 2025-10-13 13:37:11.364416404 +0000 UTC m=+0.169370944 container died e2612b27f777015b48b7c9a6d6ca4205227e8725b321809fed1e165d86b0b6f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_wilbur, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, architecture=x86_64, build-date=2025-09-24T08:57:55, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, com.redhat.component=rhceph-container, release=553, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph)
Oct 13 13:37:11 standalone.localdomain podman[38765]: 2025-10-13 13:37:11.454809816 +0000 UTC m=+0.087768282 container remove e2612b27f777015b48b7c9a6d6ca4205227e8725b321809fed1e165d86b0b6f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_wilbur, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, release=553)
Oct 13 13:37:11 standalone.localdomain systemd[1]: libpod-conmon-e2612b27f777015b48b7c9a6d6ca4205227e8725b321809fed1e165d86b0b6f1.scope: Deactivated successfully.
Oct 13 13:37:11 standalone.localdomain podman[38785]: 
Oct 13 13:37:11 standalone.localdomain podman[38785]: 2025-10-13 13:37:11.694287897 +0000 UTC m=+0.092288892 container create 533ab6ed22dd74b5b15ac28f6a4028175d8c621e0986768981e27b87a6a4dd94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_sammet, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, release=553, ceph=True, RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, io.openshift.tags=rhceph ceph, architecture=x86_64, io.openshift.expose-services=)
Oct 13 13:37:11 standalone.localdomain systemd[1]: Started libpod-conmon-533ab6ed22dd74b5b15ac28f6a4028175d8c621e0986768981e27b87a6a4dd94.scope.
Oct 13 13:37:11 standalone.localdomain podman[38785]: 2025-10-13 13:37:11.655510453 +0000 UTC m=+0.053511478 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:37:11 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:37:11 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad54a0e3244b9d0dbb0260f4e7aef2396389e3fb6e4ef714d4b27a889bd924f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:11 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad54a0e3244b9d0dbb0260f4e7aef2396389e3fb6e4ef714d4b27a889bd924f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:11 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad54a0e3244b9d0dbb0260f4e7aef2396389e3fb6e4ef714d4b27a889bd924f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:11 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad54a0e3244b9d0dbb0260f4e7aef2396389e3fb6e4ef714d4b27a889bd924f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:11 standalone.localdomain podman[38785]: 2025-10-13 13:37:11.83674139 +0000 UTC m=+0.234742345 container init 533ab6ed22dd74b5b15ac28f6a4028175d8c621e0986768981e27b87a6a4dd94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_sammet, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, name=rhceph, release=553, build-date=2025-09-24T08:57:55, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, RELEASE=main, CEPH_POINT_RELEASE=)
Oct 13 13:37:11 standalone.localdomain podman[38785]: 2025-10-13 13:37:11.847833471 +0000 UTC m=+0.245834446 container start 533ab6ed22dd74b5b15ac28f6a4028175d8c621e0986768981e27b87a6a4dd94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_sammet, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_CLEAN=True, vcs-type=git, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, RELEASE=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, distribution-scope=public, com.redhat.component=rhceph-container)
Oct 13 13:37:11 standalone.localdomain podman[38785]: 2025-10-13 13:37:11.848151722 +0000 UTC m=+0.246152687 container attach 533ab6ed22dd74b5b15ac28f6a4028175d8c621e0986768981e27b87a6a4dd94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_sammet, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, GIT_CLEAN=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, name=rhceph, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, release=553)
Oct 13 13:37:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:37:11 standalone.localdomain ceph-mgr[29999]: mgr.server handle_open ignoring open from osd.0 v2:172.18.0.100:6802/3642281347; not ready for session (expect reconnect)
Oct 13 13:37:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 13 13:37:11 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:37:11 standalone.localdomain ceph-mgr[29999]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 13 13:37:11 standalone.localdomain ceph-mon[29756]: purged_snaps scrub starts
Oct 13 13:37:11 standalone.localdomain ceph-mon[29756]: purged_snaps scrub ok
Oct 13 13:37:11 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:37:11 standalone.localdomain ceph-mon[29756]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:37:11 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:37:12 standalone.localdomain ceph-osd[37878]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 31.993 iops: 8190.234 elapsed_sec: 0.366
Oct 13 13:37:12 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [WRN] : OSD bench result of 8190.233505 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 13 13:37:12 standalone.localdomain ceph-osd[37878]: osd.0 0 waiting for initial osdmap
Oct 13 13:37:12 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0[37874]: 2025-10-13T13:37:12.038+0000 7f16c4e06640 -1 osd.0 0 waiting for initial osdmap
Oct 13 13:37:12 standalone.localdomain ceph-osd[37878]: osd.0 5 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 13 13:37:12 standalone.localdomain ceph-osd[37878]: osd.0 5 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 13 13:37:12 standalone.localdomain ceph-osd[37878]: osd.0 5 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 13 13:37:12 standalone.localdomain ceph-osd[37878]: osd.0 5 check_osdmap_features require_osd_release unknown -> reef
Oct 13 13:37:12 standalone.localdomain ceph-osd[37878]: osd.0 5 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 13 13:37:12 standalone.localdomain ceph-osd[37878]: osd.0 5 set_numa_affinity not setting numa affinity
Oct 13 13:37:12 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-osd-0[37874]: 2025-10-13T13:37:12.053+0000 7f16c0430640 -1 osd.0 5 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 13 13:37:12 standalone.localdomain ceph-osd[37878]: osd.0 5 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct 13 13:37:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2ba6c7b6b1b5b4daad01082ec41c3cbb7ede6e5b08acd00fcd2cd4caf6dd8fde-merged.mount: Deactivated successfully.
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]: [
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:     {
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:         "available": false,
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:         "ceph_device": false,
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:         "lsm_data": {},
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:         "lvs": [],
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:         "path": "/dev/sr0",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:         "rejected_reasons": [
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "Has a FileSystem",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "Insufficient space (<5GB)"
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:         ],
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:         "sys_api": {
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "actuators": null,
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "device_nodes": "sr0",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "human_readable_size": "482.00 KB",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "id_bus": "ata",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "model": "QEMU DVD-ROM",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "nr_requests": "2",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "partitions": {},
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "path": "/dev/sr0",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "removable": "1",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "rev": "2.5+",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "ro": "0",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "rotational": "1",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "sas_address": "",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "sas_device_handle": "",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "scheduler_mode": "mq-deadline",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "sectors": 0,
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "sectorsize": "2048",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "size": 493568.0,
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "support_discard": "0",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "type": "disk",
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:             "vendor": "QEMU"
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:         }
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]:     }
Oct 13 13:37:12 standalone.localdomain exciting_sammet[38801]: ]
Oct 13 13:37:12 standalone.localdomain systemd[1]: libpod-533ab6ed22dd74b5b15ac28f6a4028175d8c621e0986768981e27b87a6a4dd94.scope: Deactivated successfully.
Oct 13 13:37:12 standalone.localdomain podman[38785]: 2025-10-13 13:37:12.597300718 +0000 UTC m=+0.995301713 container died 533ab6ed22dd74b5b15ac28f6a4028175d8c621e0986768981e27b87a6a4dd94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_sammet, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, release=553, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, com.redhat.component=rhceph-container, ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, GIT_CLEAN=True, description=Red Hat Ceph Storage 7)
Oct 13 13:37:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-dad54a0e3244b9d0dbb0260f4e7aef2396389e3fb6e4ef714d4b27a889bd924f-merged.mount: Deactivated successfully.
Oct 13 13:37:12 standalone.localdomain podman[39761]: 2025-10-13 13:37:12.679010142 +0000 UTC m=+0.069559322 container remove 533ab6ed22dd74b5b15ac28f6a4028175d8c621e0986768981e27b87a6a4dd94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_sammet, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, release=553, io.openshift.expose-services=)
Oct 13 13:37:12 standalone.localdomain systemd[1]: libpod-conmon-533ab6ed22dd74b5b15ac28f6a4028175d8c621e0986768981e27b87a6a4dd94.scope: Deactivated successfully.
Oct 13 13:37:12 standalone.localdomain sudo[38689]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 13 13:37:12 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Adjusting osd_memory_target on standalone.localdomain to  5769M
Oct 13 13:37:12 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on standalone.localdomain to  5769M
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:37:12 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 86bd32a9-8a9e-49ac-9424-9f8c49c2b3e4 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:37:12 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 86bd32a9-8a9e-49ac-9424-9f8c49c2b3e4 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:37:12 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 86bd32a9-8a9e-49ac-9424-9f8c49c2b3e4 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:37:12 standalone.localdomain sudo[39774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:37:12 standalone.localdomain sudo[39774]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:37:12 standalone.localdomain sudo[39774]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:12 standalone.localdomain ceph-mgr[29999]: mgr.server handle_open ignoring open from osd.0 v2:172.18.0.100:6802/3642281347; not ready for session (expect reconnect)
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:37:12 standalone.localdomain ceph-mgr[29999]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 13 13:37:12 standalone.localdomain sudo[39789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:37:12 standalone.localdomain sudo[39789]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:37:12 standalone.localdomain sudo[39789]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e5 do_prune osdmap full prune enabled
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e6 e6: 1 total, 1 up, 1 in
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [INF] : osd.0 [v2:172.18.0.100:6802/3642281347,v1:172.18.0.100:6803/3642281347] boot
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e6: 1 total, 1 up, 1 in
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct 13 13:37:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:37:12 standalone.localdomain ceph-osd[37878]: osd.0 6 state: booting -> active
Oct 13 13:37:13 standalone.localdomain sudo[39804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 13:37:13 standalone.localdomain sudo[39804]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:37:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:37:13 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] creating mgr pool
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Oct 13 13:37:13 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 5 completed events
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: OSD bench result of 8190.233505 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: Adjusting osd_memory_target on standalone.localdomain to  5769M
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: osd.0 [v2:172.18.0.100:6802/3642281347,v1:172.18.0.100:6803/3642281347] boot
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: osdmap e6: 1 total, 1 up, 1 in
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:13 standalone.localdomain podman[39893]: 2025-10-13 13:37:13.813365733 +0000 UTC m=+0.096488720 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, vcs-type=git, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vendor=Red Hat, Inc., RELEASE=main, ceph=True, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:37:13 standalone.localdomain podman[39893]: 2025-10-13 13:37:13.945730366 +0000 UTC m=+0.228853293 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, GIT_CLEAN=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_BRANCH=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph, version=7, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e6 do_prune osdmap full prune enabled
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e7 e7: 1 total, 1 up, 1 in
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e7 crush map has features 3314933000852226048, adjusting msgr requires
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e7 crush map has features 288514051259236352, adjusting msgr requires
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e7 crush map has features 288514051259236352, adjusting msgr requires
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e7 crush map has features 288514051259236352, adjusting msgr requires
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e7: 1 total, 1 up, 1 in
Oct 13 13:37:13 standalone.localdomain ceph-osd[37878]: osd.0 7 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 13 13:37:13 standalone.localdomain ceph-osd[37878]: osd.0 7 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 13 13:37:13 standalone.localdomain ceph-osd[37878]: osd.0 7 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 13 13:37:13 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 7 pg[1.0( empty local-lis/les=0/0 n=0 ec=7/7 lis/c=0/0 les/c/f=0/0/0 sis=7) [0] r=0 lpr=7 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Oct 13 13:37:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Oct 13 13:37:14 standalone.localdomain sudo[39804]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:37:14 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 2bb8f061-6cb4-4cda-bae4-007d0c563f9b (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:37:14 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 2bb8f061-6cb4-4cda-bae4-007d0c563f9b (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:37:14 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 2bb8f061-6cb4-4cda-bae4-007d0c563f9b (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:37:14 standalone.localdomain sudo[39976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:37:14 standalone.localdomain sudo[39976]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:37:14 standalone.localdomain sudo[39976]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e7 do_prune osdmap full prune enabled
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e8 e8: 1 total, 1 up, 1 in
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e8: 1 total, 1 up, 1 in
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: osdmap e7: 1 total, 1 up, 1 in
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:37:14 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 8 pg[1.0( empty local-lis/les=7/8 n=0 ec=7/7 lis/c=0/0 les/c/f=0/0/0 sis=7) [0] r=0 lpr=7 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:37:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v25: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Oct 13 13:37:15 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] creating main.db for devicehealth
Oct 13 13:37:15 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Check health
Oct 13 13:37:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 13 13:37:15 standalone.localdomain sudo[40003]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Oct 13 13:37:15 standalone.localdomain sudo[40003]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 13:37:15 standalone.localdomain sudo[40003]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Oct 13 13:37:15 standalone.localdomain sudo[40003]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 13 13:37:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "standalone"} v 0)
Oct 13 13:37:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mon metadata", "id": "standalone"} : dispatch
Oct 13 13:37:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e8 do_prune osdmap full prune enabled
Oct 13 13:37:15 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 13 13:37:15 standalone.localdomain ceph-mon[29756]: osdmap e8: 1 total, 1 up, 1 in
Oct 13 13:37:15 standalone.localdomain ceph-mon[29756]: pgmap v25: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Oct 13 13:37:15 standalone.localdomain ceph-mon[29756]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 13 13:37:15 standalone.localdomain ceph-mon[29756]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 13 13:37:15 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mon metadata", "id": "standalone"} : dispatch
Oct 13 13:37:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 e9: 1 total, 1 up, 1 in
Oct 13 13:37:16 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e9: 1 total, 1 up, 1 in
Oct 13 13:37:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:37:17 standalone.localdomain ceph-mon[29756]: osdmap e9: 1 total, 1 up, 1 in
Oct 13 13:37:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v27: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Oct 13 13:37:18 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : mgrmap e10: standalone.ectizd(active, since 54s), standbys: standalone.aquevm
Oct 13 13:37:18 standalone.localdomain ceph-mon[29756]: pgmap v27: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Oct 13 13:37:18 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 6 completed events
Oct 13 13:37:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:37:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:19 standalone.localdomain ceph-mon[29756]: mgrmap e10: standalone.ectizd(active, since 54s), standbys: standalone.aquevm
Oct 13 13:37:19 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:37:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v28: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:20 standalone.localdomain ceph-mon[29756]: pgmap v28: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v29: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:37:22 standalone.localdomain ceph-mon[29756]: pgmap v29: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v30: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:37:23
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:37:23 standalone.localdomain python3[40007]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:37:23 standalone.localdomain podman[40009]: 
Oct 13 13:37:23 standalone.localdomain podman[40009]: 2025-10-13 13:37:23.99882325 +0000 UTC m=+0.074809554 container create e54bb5e963545983c1571282622d1ff4ad0e58f5ac1a3cd395ea30ff087e8b2b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hawking, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553)
Oct 13 13:37:24 standalone.localdomain systemd[1]: Started libpod-conmon-e54bb5e963545983c1571282622d1ff4ad0e58f5ac1a3cd395ea30ff087e8b2b.scope.
Oct 13 13:37:24 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:37:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c686c7b3a82468b5048e3ca27d3f1d67b0365d278d6c35e6fccd2d9915987fbe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:24 standalone.localdomain podman[40009]: 2025-10-13 13:37:23.967717393 +0000 UTC m=+0.043703737 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:37:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c686c7b3a82468b5048e3ca27d3f1d67b0365d278d6c35e6fccd2d9915987fbe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c686c7b3a82468b5048e3ca27d3f1d67b0365d278d6c35e6fccd2d9915987fbe/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:24 standalone.localdomain podman[40009]: 2025-10-13 13:37:24.097737794 +0000 UTC m=+0.173724158 container init e54bb5e963545983c1571282622d1ff4ad0e58f5ac1a3cd395ea30ff087e8b2b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hawking, GIT_BRANCH=main, io.openshift.expose-services=, name=rhceph, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_CLEAN=True, release=553, com.redhat.component=rhceph-container)
Oct 13 13:37:24 standalone.localdomain podman[40009]: 2025-10-13 13:37:24.109136784 +0000 UTC m=+0.185123128 container start e54bb5e963545983c1571282622d1ff4ad0e58f5ac1a3cd395ea30ff087e8b2b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hawking, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, release=553, version=7, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container)
Oct 13 13:37:24 standalone.localdomain podman[40009]: 2025-10-13 13:37:24.109455074 +0000 UTC m=+0.185441468 container attach e54bb5e963545983c1571282622d1ff4ad0e58f5ac1a3cd395ea30ff087e8b2b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hawking, ceph=True, io.buildah.version=1.33.12, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, distribution-scope=public, RELEASE=main, version=7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph)
Oct 13 13:37:24 standalone.localdomain ceph-mon[29756]: pgmap v30: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct 13 13:37:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1315104308' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Oct 13 13:37:24 standalone.localdomain magical_hawking[40024]: 
Oct 13 13:37:24 standalone.localdomain magical_hawking[40024]: {"fsid":"627e7f45-65aa-56de-94df-66eaee84a56e","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["standalone"],"quorum_age":77,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":9,"num_osds":1,"num_up_osds":1,"osd_up_since":1760362632,"num_in_osds":1,"osd_in_since":1760362617,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":28016640,"bytes_avail":7483981824,"bytes_total":7511998464},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":1,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-13T13:36:03.878879+0000","services":{}},"progress_events":{}}
Oct 13 13:37:24 standalone.localdomain systemd[1]: libpod-e54bb5e963545983c1571282622d1ff4ad0e58f5ac1a3cd395ea30ff087e8b2b.scope: Deactivated successfully.
Oct 13 13:37:24 standalone.localdomain podman[40009]: 2025-10-13 13:37:24.554506371 +0000 UTC m=+0.630492705 container died e54bb5e963545983c1571282622d1ff4ad0e58f5ac1a3cd395ea30ff087e8b2b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hawking, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.expose-services=, ceph=True, architecture=x86_64, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public)
Oct 13 13:37:24 standalone.localdomain systemd[1]: tmp-crun.c47Aq9.mount: Deactivated successfully.
Oct 13 13:37:24 standalone.localdomain podman[40049]: 2025-10-13 13:37:24.653673634 +0000 UTC m=+0.088978540 container remove e54bb5e963545983c1571282622d1ff4ad0e58f5ac1a3cd395ea30ff087e8b2b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hawking, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, name=rhceph, GIT_BRANCH=main, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-type=git, release=553, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:37:24 standalone.localdomain systemd[1]: libpod-conmon-e54bb5e963545983c1571282622d1ff4ad0e58f5ac1a3cd395ea30ff087e8b2b.scope: Deactivated successfully.
Oct 13 13:37:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c686c7b3a82468b5048e3ca27d3f1d67b0365d278d6c35e6fccd2d9915987fbe-merged.mount: Deactivated successfully.
Oct 13 13:37:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:25 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1315104308' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Oct 13 13:37:26 standalone.localdomain ceph-mon[29756]: pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:26 standalone.localdomain python3[40107]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:37:26 standalone.localdomain podman[40108]: 
Oct 13 13:37:26 standalone.localdomain podman[40108]: 2025-10-13 13:37:26.433693905 +0000 UTC m=+0.060397339 container create ded841c98b5782af9ecb85b840c4ab9751a694c4b2f770c734b7622eb989cda8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_stonebraker, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, architecture=x86_64, name=rhceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-type=git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:37:26 standalone.localdomain systemd[1]: Started libpod-conmon-ded841c98b5782af9ecb85b840c4ab9751a694c4b2f770c734b7622eb989cda8.scope.
Oct 13 13:37:26 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:37:26 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb56c1af3b7b73ef761ea11da09ebd8798df97d8228341c1a038ddd9e52566a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:26 standalone.localdomain podman[40108]: 2025-10-13 13:37:26.405049694 +0000 UTC m=+0.031753158 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:37:26 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb56c1af3b7b73ef761ea11da09ebd8798df97d8228341c1a038ddd9e52566a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:26 standalone.localdomain podman[40108]: 2025-10-13 13:37:26.51894506 +0000 UTC m=+0.145648524 container init ded841c98b5782af9ecb85b840c4ab9751a694c4b2f770c734b7622eb989cda8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_stonebraker, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, version=7, ceph=True, io.openshift.expose-services=, name=rhceph, release=553, distribution-scope=public, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:37:26 standalone.localdomain systemd[1]: tmp-crun.3RrE9o.mount: Deactivated successfully.
Oct 13 13:37:26 standalone.localdomain podman[40108]: 2025-10-13 13:37:26.530005819 +0000 UTC m=+0.156709283 container start ded841c98b5782af9ecb85b840c4ab9751a694c4b2f770c734b7622eb989cda8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_stonebraker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, name=rhceph, version=7, description=Red Hat Ceph Storage 7, release=553, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=)
Oct 13 13:37:26 standalone.localdomain podman[40108]: 2025-10-13 13:37:26.53032029 +0000 UTC m=+0.157023764 container attach ded841c98b5782af9ecb85b840c4ab9751a694c4b2f770c734b7622eb989cda8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_stonebraker, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, release=553, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., name=rhceph, version=7, vcs-type=git, architecture=x86_64, ceph=True)
Oct 13 13:37:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:37:26 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14194 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 13 13:37:26 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : mgrmap e11: standalone.ectizd(active, since 63s)
Oct 13 13:37:26 standalone.localdomain infallible_stonebraker[40122]: 
Oct 13 13:37:26 standalone.localdomain infallible_stonebraker[40122]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 13 13:37:26 standalone.localdomain systemd[1]: libpod-ded841c98b5782af9ecb85b840c4ab9751a694c4b2f770c734b7622eb989cda8.scope: Deactivated successfully.
Oct 13 13:37:26 standalone.localdomain podman[40108]: 2025-10-13 13:37:26.914672458 +0000 UTC m=+0.541375922 container died ded841c98b5782af9ecb85b840c4ab9751a694c4b2f770c734b7622eb989cda8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_stonebraker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, ceph=True, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, name=rhceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:37:26 standalone.localdomain podman[40147]: 2025-10-13 13:37:26.988216831 +0000 UTC m=+0.064289429 container remove ded841c98b5782af9ecb85b840c4ab9751a694c4b2f770c734b7622eb989cda8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_stonebraker, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.openshift.tags=rhceph ceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, distribution-scope=public)
Oct 13 13:37:26 standalone.localdomain systemd[1]: libpod-conmon-ded841c98b5782af9ecb85b840c4ab9751a694c4b2f770c734b7622eb989cda8.scope: Deactivated successfully.
Oct 13 13:37:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:27 standalone.localdomain python3[40166]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:37:27 standalone.localdomain podman[40167]: 
Oct 13 13:37:27 standalone.localdomain podman[40167]: 2025-10-13 13:37:27.30823728 +0000 UTC m=+0.058503552 container create ccae381441fd36a50729b490ef65eafaf2de4f541a30566bc944ff917b411883 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_lewin, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, architecture=x86_64, name=rhceph, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.buildah.version=1.33.12, RELEASE=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:37:27 standalone.localdomain systemd[1]: Started libpod-conmon-ccae381441fd36a50729b490ef65eafaf2de4f541a30566bc944ff917b411883.scope.
Oct 13 13:37:27 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:37:27 standalone.localdomain podman[40167]: 2025-10-13 13:37:27.276286617 +0000 UTC m=+0.026552899 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:37:27 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0eeeca6fbfe62bac182d9d7be8aff49d1719d5ee23d23471915e1e0410ec8ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:27 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0eeeca6fbfe62bac182d9d7be8aff49d1719d5ee23d23471915e1e0410ec8ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:27 standalone.localdomain podman[40167]: 2025-10-13 13:37:27.399099817 +0000 UTC m=+0.149366079 container init ccae381441fd36a50729b490ef65eafaf2de4f541a30566bc944ff917b411883 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_lewin, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.openshift.expose-services=, release=553, version=7, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_CLEAN=True, ceph=True)
Oct 13 13:37:27 standalone.localdomain podman[40167]: 2025-10-13 13:37:27.410679703 +0000 UTC m=+0.160945945 container start ccae381441fd36a50729b490ef65eafaf2de4f541a30566bc944ff917b411883 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_lewin, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, RELEASE=main, io.openshift.expose-services=, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-type=git, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_BRANCH=main)
Oct 13 13:37:27 standalone.localdomain podman[40167]: 2025-10-13 13:37:27.410829708 +0000 UTC m=+0.161095940 container attach ccae381441fd36a50729b490ef65eafaf2de4f541a30566bc944ff917b411883 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_lewin, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, release=553, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, architecture=x86_64, distribution-scope=public, ceph=True, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55)
Oct 13 13:37:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4fb56c1af3b7b73ef761ea11da09ebd8798df97d8228341c1a038ddd9e52566a-merged.mount: Deactivated successfully.
Oct 13 13:37:27 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14196 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: service_type: crash
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: service_name: crash
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: placement:
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]:   host_pattern: '*'
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: ---
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: service_type: mgr
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: service_name: mgr
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: placement:
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]:   hosts:
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]:   - standalone.localdomain
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: ---
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: service_type: mon
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: service_name: mon
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: placement:
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]:   hosts:
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]:   - standalone.localdomain
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: ---
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: service_type: node-proxy
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: service_name: node-proxy
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: placement:
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]:   host_pattern: '*'
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: ---
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: service_type: osd
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: service_id: default_drive_group
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: service_name: osd.default_drive_group
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: placement:
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]:   hosts:
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]:   - standalone.localdomain
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]: spec:
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]:   data_devices:
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]:     paths:
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]:     - /dev/vg2/data-lv2
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]:   filter_logic: AND
Oct 13 13:37:27 standalone.localdomain brave_lewin[40181]:   objectstore: bluestore
Oct 13 13:37:27 standalone.localdomain systemd[1]: libpod-ccae381441fd36a50729b490ef65eafaf2de4f541a30566bc944ff917b411883.scope: Deactivated successfully.
Oct 13 13:37:27 standalone.localdomain podman[40167]: 2025-10-13 13:37:27.784340343 +0000 UTC m=+0.534606605 container died ccae381441fd36a50729b490ef65eafaf2de4f541a30566bc944ff917b411883 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_lewin, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.openshift.tags=rhceph ceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, ceph=True, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main)
Oct 13 13:37:27 standalone.localdomain systemd[1]: tmp-crun.xylB8x.mount: Deactivated successfully.
Oct 13 13:37:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c0eeeca6fbfe62bac182d9d7be8aff49d1719d5ee23d23471915e1e0410ec8ed-merged.mount: Deactivated successfully.
Oct 13 13:37:27 standalone.localdomain podman[40206]: 2025-10-13 13:37:27.85671086 +0000 UTC m=+0.062387841 container remove ccae381441fd36a50729b490ef65eafaf2de4f541a30566bc944ff917b411883 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_lewin, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, release=553)
Oct 13 13:37:27 standalone.localdomain systemd[1]: libpod-conmon-ccae381441fd36a50729b490ef65eafaf2de4f541a30566bc944ff917b411883.scope: Deactivated successfully.
Oct 13 13:37:27 standalone.localdomain ceph-mon[29756]: from='client.14194 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 13 13:37:27 standalone.localdomain ceph-mon[29756]: mgrmap e11: standalone.ectizd(active, since 63s)
Oct 13 13:37:27 standalone.localdomain ceph-mon[29756]: pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:28 standalone.localdomain python3[40224]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:37:28 standalone.localdomain podman[40225]: 
Oct 13 13:37:28 standalone.localdomain podman[40225]: 2025-10-13 13:37:28.256518075 +0000 UTC m=+0.076206196 container create 7ae720f1df7f03bb1f4f561f1f1c01405c1d262a8cdb3dff800ca73ce6e99e22 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_driscoll, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, CEPH_POINT_RELEASE=, RELEASE=main, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, release=553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55)
Oct 13 13:37:28 standalone.localdomain systemd[1]: Started libpod-conmon-7ae720f1df7f03bb1f4f561f1f1c01405c1d262a8cdb3dff800ca73ce6e99e22.scope.
Oct 13 13:37:28 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:37:28 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04936200a1a07469637e224ee0842ec127056c5cac0364276a1a9e775f1ed333/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:28 standalone.localdomain podman[40225]: 2025-10-13 13:37:28.223897501 +0000 UTC m=+0.043585622 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:37:28 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04936200a1a07469637e224ee0842ec127056c5cac0364276a1a9e775f1ed333/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:37:28 standalone.localdomain podman[40225]: 2025-10-13 13:37:28.331186842 +0000 UTC m=+0.150874963 container init 7ae720f1df7f03bb1f4f561f1f1c01405c1d262a8cdb3dff800ca73ce6e99e22 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_driscoll, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, ceph=True, release=553, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main)
Oct 13 13:37:28 standalone.localdomain podman[40225]: 2025-10-13 13:37:28.337023153 +0000 UTC m=+0.156711254 container start 7ae720f1df7f03bb1f4f561f1f1c01405c1d262a8cdb3dff800ca73ce6e99e22 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_driscoll, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, release=553, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, version=7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12)
Oct 13 13:37:28 standalone.localdomain podman[40225]: 2025-10-13 13:37:28.337444145 +0000 UTC m=+0.157132266 container attach 7ae720f1df7f03bb1f4f561f1f1c01405c1d262a8cdb3dff800ca73ce6e99e22 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_driscoll, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, architecture=x86_64, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, release=553, io.buildah.version=1.33.12, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:37:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Oct 13 13:37:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/4178606710' entity='client.admin' cmd={"prefix": "status"} : dispatch
Oct 13 13:37:28 standalone.localdomain sleepy_driscoll[40239]:   cluster:
Oct 13 13:37:28 standalone.localdomain sleepy_driscoll[40239]:     id:     627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:37:28 standalone.localdomain sleepy_driscoll[40239]:     health: HEALTH_OK
Oct 13 13:37:28 standalone.localdomain sleepy_driscoll[40239]:  
Oct 13 13:37:28 standalone.localdomain sleepy_driscoll[40239]:   services:
Oct 13 13:37:28 standalone.localdomain sleepy_driscoll[40239]:     mon: 1 daemons, quorum standalone (age 81s)
Oct 13 13:37:28 standalone.localdomain sleepy_driscoll[40239]:     mgr: standalone.ectizd(active, since 65s)
Oct 13 13:37:28 standalone.localdomain sleepy_driscoll[40239]:     osd: 1 osds: 1 up (since 15s), 1 in (since 30s)
Oct 13 13:37:28 standalone.localdomain sleepy_driscoll[40239]:  
Oct 13 13:37:28 standalone.localdomain sleepy_driscoll[40239]:   data:
Oct 13 13:37:28 standalone.localdomain sleepy_driscoll[40239]:     pools:   1 pools, 1 pgs
Oct 13 13:37:28 standalone.localdomain sleepy_driscoll[40239]:     objects: 2 objects, 449 KiB
Oct 13 13:37:28 standalone.localdomain sleepy_driscoll[40239]:     usage:   27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:28 standalone.localdomain sleepy_driscoll[40239]:     pgs:     1 active+clean
Oct 13 13:37:28 standalone.localdomain sleepy_driscoll[40239]:  
Oct 13 13:37:28 standalone.localdomain systemd[1]: libpod-7ae720f1df7f03bb1f4f561f1f1c01405c1d262a8cdb3dff800ca73ce6e99e22.scope: Deactivated successfully.
Oct 13 13:37:28 standalone.localdomain podman[40225]: 2025-10-13 13:37:28.724496617 +0000 UTC m=+0.544184818 container died 7ae720f1df7f03bb1f4f561f1f1c01405c1d262a8cdb3dff800ca73ce6e99e22 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_driscoll, release=553, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container)
Oct 13 13:37:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-04936200a1a07469637e224ee0842ec127056c5cac0364276a1a9e775f1ed333-merged.mount: Deactivated successfully.
Oct 13 13:37:28 standalone.localdomain podman[40264]: 2025-10-13 13:37:28.806755708 +0000 UTC m=+0.074598186 container remove 7ae720f1df7f03bb1f4f561f1f1c01405c1d262a8cdb3dff800ca73ce6e99e22 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_driscoll, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, release=553, GIT_BRANCH=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:37:28 standalone.localdomain systemd[1]: libpod-conmon-7ae720f1df7f03bb1f4f561f1f1c01405c1d262a8cdb3dff800ca73ce6e99e22.scope: Deactivated successfully.
Oct 13 13:37:28 standalone.localdomain ceph-mon[29756]: from='client.14196 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:37:28 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4178606710' entity='client.admin' cmd={"prefix": "status"} : dispatch
Oct 13 13:37:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:29 standalone.localdomain python3[40289]: ansible-ansible.legacy.stat Invoked with path=/root/deployed_ceph.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:37:29 standalone.localdomain python3[40293]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760362648.9407856-40279-236480483782582/source dest=/root/deployed_ceph.yaml mode=420 force=True follow=False _original_basename=deployed_ceph.yaml.j2 checksum=7f2394b05e65269d90923121dfb137a97abd17fa backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:37:29 standalone.localdomain ceph-mon[29756]: pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:29 standalone.localdomain sudo[28047]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:30 standalone.localdomain sudo[40302]:     root : PWD=/root ; USER=root ; COMMAND=/bin/openstack tripleo deploy --templates /usr/share/openstack-tripleo-heat-templates --local-ip=192.168.122.100/24 --control-virtual-ip=192.168.122.99 --output-dir /root --standalone-role Standalone -r /root/Standalone.yaml -n /root/network_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml -e /root/enable_heat.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/barbican.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/barbican-backend-simple-crypto.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml -e
Oct 13 13:37:30 standalone.localdomain sudo[40302]:     root : (command continued) /root/standalone_parameters.yaml -e /root/deployed_ceph.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm-rbd-only.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml -e /root/containers-prepare-parameters.yaml -e /root/deployed_network.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml -e /root/sriov_template.yaml -e /root/dhcp_agent_template.yaml
Oct 13 13:37:30 standalone.localdomain sudo[40302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:37:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:31 standalone.localdomain systemd[1]: Starting Hostname Service...
Oct 13 13:37:31 standalone.localdomain systemd[1]: Started Hostname Service.
Oct 13 13:37:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:37:32 standalone.localdomain ceph-mon[29756]: pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:34 standalone.localdomain ceph-mon[29756]: pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:36 standalone.localdomain sudo[40323]:     root : PWD=/root ; USER=root ; COMMAND=/bin/chown -R root /root/tripleo-standalone-passwords.yaml
Oct 13 13:37:36 standalone.localdomain sudo[40323]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:37:36 standalone.localdomain sudo[40323]: pam_unix(sudo:session): session closed for user root
Oct 13 13:37:36 standalone.localdomain ceph-mon[29756]: pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:37:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:37 standalone.localdomain ceph-mon[29756]: pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:40 standalone.localdomain ceph-mon[29756]: pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:37:42 standalone.localdomain ceph-mon[29756]: pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:44 standalone.localdomain ceph-mon[29756]: pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:46 standalone.localdomain ceph-mon[29756]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:37:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:47 standalone.localdomain ceph-mon[29756]: pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:48 standalone.localdomain sshd[40329]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:37:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:49 standalone.localdomain sshd[40329]: Received disconnect from 80.94.93.233 port 27738:11:  [preauth]
Oct 13 13:37:49 standalone.localdomain sshd[40329]: Disconnected from authenticating user root 80.94.93.233 port 27738 [preauth]
Oct 13 13:37:50 standalone.localdomain ceph-mon[29756]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:37:52 standalone.localdomain ceph-mon[29756]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:37:54 standalone.localdomain ceph-mon[29756]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:56 standalone.localdomain ceph-mon[29756]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:37:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:57 standalone.localdomain ceph-mon[29756]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:37:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:00 standalone.localdomain ceph-mon[29756]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:01 standalone.localdomain systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 13 13:38:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:38:02 standalone.localdomain ceph-mon[29756]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:04 standalone.localdomain ceph-mon[29756]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:06 standalone.localdomain ceph-mon[29756]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:38:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:07 standalone.localdomain ceph-mon[29756]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:09 standalone.localdomain systemd[22830]: Created slice User Background Tasks Slice.
Oct 13 13:38:09 standalone.localdomain systemd[22830]: Starting Cleanup of User's Temporary Files and Directories...
Oct 13 13:38:09 standalone.localdomain systemd[22830]: Finished Cleanup of User's Temporary Files and Directories.
Oct 13 13:38:10 standalone.localdomain ceph-mon[29756]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:38:12 standalone.localdomain ceph-mon[29756]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:14 standalone.localdomain ceph-mon[29756]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:14 standalone.localdomain sudo[40334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:38:14 standalone.localdomain sudo[40334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:38:14 standalone.localdomain sudo[40334]: pam_unix(sudo:session): session closed for user root
Oct 13 13:38:14 standalone.localdomain sudo[40349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:38:14 standalone.localdomain sudo[40349]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:38:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:15 standalone.localdomain sudo[40349]: pam_unix(sudo:session): session closed for user root
Oct 13 13:38:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:38:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:38:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:38:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:38:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:38:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:38:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:38:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:38:15 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 1d9b7193-72cb-4541-adcb-bf1036cecd4c (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:38:15 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 1d9b7193-72cb-4541-adcb-bf1036cecd4c (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:38:15 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 1d9b7193-72cb-4541-adcb-bf1036cecd4c (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:38:15 standalone.localdomain sudo[40395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:38:15 standalone.localdomain sudo[40395]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:38:15 standalone.localdomain sudo[40395]: pam_unix(sudo:session): session closed for user root
Oct 13 13:38:16 standalone.localdomain ceph-mon[29756]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:16 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:38:16 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:38:16 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:38:16 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:38:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:38:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:17 standalone.localdomain ceph-mon[29756]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:18 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 7 completed events
Oct 13 13:38:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:38:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:38:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:19 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:38:19 standalone.localdomain ceph-mon[29756]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:38:22 standalone.localdomain ceph-mon[29756]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:38:23
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:38:24 standalone.localdomain ceph-mon[29756]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:26 standalone.localdomain ceph-mon[29756]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:38:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:27 standalone.localdomain ceph-mon[29756]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:30 standalone.localdomain ceph-mon[29756]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:38:32 standalone.localdomain ceph-mon[29756]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:34 standalone.localdomain ceph-mon[29756]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:36 standalone.localdomain ceph-mon[29756]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:38:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:37 standalone.localdomain ceph-mon[29756]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:40 standalone.localdomain ceph-mon[29756]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:38:42 standalone.localdomain ceph-mon[29756]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:44 standalone.localdomain ceph-mon[29756]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:46 standalone.localdomain ceph-mon[29756]: pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:38:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:47 standalone.localdomain ceph-mon[29756]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:50 standalone.localdomain ceph-mon[29756]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:38:52 standalone.localdomain ceph-mon[29756]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:38:54 standalone.localdomain ceph-mon[29756]: pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:56 standalone.localdomain ceph-mon[29756]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:38:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:57 standalone.localdomain ceph-mon[29756]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:38:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:00 standalone.localdomain ceph-mon[29756]: pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:39:02 standalone.localdomain ceph-mon[29756]: pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:04 standalone.localdomain ceph-mon[29756]: pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:06 standalone.localdomain ceph-mon[29756]: pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:39:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:07 standalone.localdomain ceph-mon[29756]: pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:09 standalone.localdomain systemd[31158]: Starting Mark boot as successful...
Oct 13 13:39:09 standalone.localdomain systemd[31158]: Finished Mark boot as successful.
Oct 13 13:39:10 standalone.localdomain ceph-mon[29756]: pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:39:12 standalone.localdomain ceph-mon[29756]: pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:14 standalone.localdomain ceph-mon[29756]: pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:15 standalone.localdomain sudo[40412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:39:15 standalone.localdomain sudo[40412]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:39:15 standalone.localdomain sudo[40412]: pam_unix(sudo:session): session closed for user root
Oct 13 13:39:15 standalone.localdomain sudo[40427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:39:15 standalone.localdomain sudo[40427]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:39:15 standalone.localdomain sudo[40427]: pam_unix(sudo:session): session closed for user root
Oct 13 13:39:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:39:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:39:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:39:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:39:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:39:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:39:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:39:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:39:16 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 5bff8f2c-8a0b-498a-b7bb-fe6f2edf7fd3 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:39:16 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 5bff8f2c-8a0b-498a-b7bb-fe6f2edf7fd3 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:39:16 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 5bff8f2c-8a0b-498a-b7bb-fe6f2edf7fd3 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:39:16 standalone.localdomain sudo[40475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:39:16 standalone.localdomain sudo[40475]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:39:16 standalone.localdomain sudo[40475]: pam_unix(sudo:session): session closed for user root
Oct 13 13:39:16 standalone.localdomain ceph-mon[29756]: pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:16 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:39:16 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:39:16 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:39:16 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:39:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:39:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:17 standalone.localdomain ceph-mon[29756]: pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:18 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 8 completed events
Oct 13 13:39:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:39:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:39:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:19 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:39:19 standalone.localdomain ceph-mon[29756]: pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:39:22 standalone.localdomain ceph-mon[29756]: pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:39:23
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:39:24 standalone.localdomain ceph-mon[29756]: pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:26 standalone.localdomain ceph-mon[29756]: pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:39:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:27 standalone.localdomain ceph-mon[29756]: pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:30 standalone.localdomain ceph-mon[29756]: pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:39:32 standalone.localdomain ceph-mon[29756]: pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:34 standalone.localdomain ceph-mon[29756]: pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:36 standalone.localdomain ceph-mon[29756]: pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:39:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:38 standalone.localdomain ceph-mon[29756]: pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:40 standalone.localdomain ceph-mon[29756]: pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:39:42 standalone.localdomain ceph-mon[29756]: pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:44 standalone.localdomain ceph-mon[29756]: pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:46 standalone.localdomain ceph-mon[29756]: pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:39:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:48 standalone.localdomain ceph-mon[29756]: pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:50 standalone.localdomain ceph-mon[29756]: pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:39:52 standalone.localdomain ceph-mon[29756]: pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:39:54 standalone.localdomain ceph-mon[29756]: pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:56 standalone.localdomain ceph-mon[29756]: pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:39:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:58 standalone.localdomain ceph-mon[29756]: pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:39:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:00 standalone.localdomain ceph-mon[29756]: pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:40:02 standalone.localdomain ceph-mon[29756]: pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:04 standalone.localdomain ceph-mon[29756]: pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:06 standalone.localdomain ceph-mon[29756]: pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:40:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:08 standalone.localdomain ceph-mon[29756]: pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:10 standalone.localdomain ceph-mon[29756]: pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:40:12 standalone.localdomain ceph-mon[29756]: pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:14 standalone.localdomain ceph-mon[29756]: pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:16 standalone.localdomain sudo[40490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:40:16 standalone.localdomain sudo[40490]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:40:16 standalone.localdomain sudo[40490]: pam_unix(sudo:session): session closed for user root
Oct 13 13:40:16 standalone.localdomain ceph-mon[29756]: pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:16 standalone.localdomain sudo[40505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:40:16 standalone.localdomain sudo[40505]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:40:16 standalone.localdomain sudo[40505]: pam_unix(sudo:session): session closed for user root
Oct 13 13:40:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:40:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:40:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:40:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:40:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:40:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:40:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:40:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:40:16 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev d0a76bd1-3ea7-4602-8be8-cac06a8eee01 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:40:16 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev d0a76bd1-3ea7-4602-8be8-cac06a8eee01 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:40:16 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event d0a76bd1-3ea7-4602-8be8-cac06a8eee01 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:40:16 standalone.localdomain sudo[40551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:40:16 standalone.localdomain sudo[40551]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:40:16 standalone.localdomain sudo[40551]: pam_unix(sudo:session): session closed for user root
Oct 13 13:40:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:40:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:17 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:40:17 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:40:17 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:40:17 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:40:18 standalone.localdomain ceph-mon[29756]: pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:18 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 9 completed events
Oct 13 13:40:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:40:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:40:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:19 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:40:19 standalone.localdomain ceph-mon[29756]: pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:40:22 standalone.localdomain ceph-mon[29756]: pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:40:23
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:40:24 standalone.localdomain ceph-mon[29756]: pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:26 standalone.localdomain ceph-mon[29756]: pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:40:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:28 standalone.localdomain ceph-mon[29756]: pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:30 standalone.localdomain ceph-mon[29756]: pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:40:32 standalone.localdomain ceph-mon[29756]: pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:34 standalone.localdomain ceph-mon[29756]: pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:36 standalone.localdomain ceph-mon[29756]: pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:40:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:38 standalone.localdomain ceph-mon[29756]: pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:40 standalone.localdomain ceph-mon[29756]: pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:40:42 standalone.localdomain ceph-mon[29756]: pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:44 standalone.localdomain ceph-mon[29756]: pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:46 standalone.localdomain ceph-mon[29756]: pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:40:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:48 standalone.localdomain ceph-mon[29756]: pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:50 standalone.localdomain ceph-mon[29756]: pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:40:52 standalone.localdomain ceph-mon[29756]: pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:40:54 standalone.localdomain ceph-mon[29756]: pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:56 standalone.localdomain ceph-mon[29756]: pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:40:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:58 standalone.localdomain ceph-mon[29756]: pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:40:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:00 standalone.localdomain ceph-mon[29756]: pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:41:02 standalone.localdomain ceph-mon[29756]: pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:04 standalone.localdomain ceph-mon[29756]: pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:06 standalone.localdomain ceph-mon[29756]: pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:41:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:08 standalone.localdomain ceph-mon[29756]: pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:10 standalone.localdomain ceph-mon[29756]: pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:41:12 standalone.localdomain ceph-mon[29756]: pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:14 standalone.localdomain ceph-mon[29756]: pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:16 standalone.localdomain ceph-mon[29756]: pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:16 standalone.localdomain sudo[40566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:41:16 standalone.localdomain sudo[40566]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:41:16 standalone.localdomain sudo[40566]: pam_unix(sudo:session): session closed for user root
Oct 13 13:41:16 standalone.localdomain sudo[40581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:41:16 standalone.localdomain sudo[40581]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:41:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:41:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:17 standalone.localdomain sudo[40581]: pam_unix(sudo:session): session closed for user root
Oct 13 13:41:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:41:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:41:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:41:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:41:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:41:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:41:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:41:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:41:17 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev c78caf79-2a44-4c75-8e5d-3f8a78def5bd (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:41:17 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev c78caf79-2a44-4c75-8e5d-3f8a78def5bd (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:41:17 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event c78caf79-2a44-4c75-8e5d-3f8a78def5bd (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:41:17 standalone.localdomain sudo[40627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:41:17 standalone.localdomain sudo[40627]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:41:17 standalone.localdomain sudo[40627]: pam_unix(sudo:session): session closed for user root
Oct 13 13:41:18 standalone.localdomain ceph-mon[29756]: pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:18 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:41:18 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:41:18 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:41:18 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:41:18 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 10 completed events
Oct 13 13:41:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:41:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:41:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:19 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:41:19 standalone.localdomain ceph-mon[29756]: pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:41:22 standalone.localdomain ceph-mon[29756]: pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:41:23
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:41:24 standalone.localdomain ceph-mon[29756]: pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:26 standalone.localdomain ceph-mon[29756]: pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:41:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:28 standalone.localdomain ceph-mon[29756]: pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:30 standalone.localdomain ceph-mon[29756]: pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:41:32 standalone.localdomain ceph-mon[29756]: pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:34 standalone.localdomain ceph-mon[29756]: pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:36 standalone.localdomain ceph-mon[29756]: pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:41:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:38 standalone.localdomain ceph-mon[29756]: pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:40 standalone.localdomain ceph-mon[29756]: pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:41:42 standalone.localdomain ceph-mon[29756]: pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:44 standalone.localdomain ceph-mon[29756]: pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:46 standalone.localdomain ceph-mon[29756]: pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:41:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:48 standalone.localdomain ceph-mon[29756]: pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:50 standalone.localdomain ceph-mon[29756]: pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:41:52 standalone.localdomain ceph-mon[29756]: pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:41:54 standalone.localdomain ceph-mon[29756]: pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:56 standalone.localdomain ceph-mon[29756]: pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:41:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:58 standalone.localdomain ceph-mon[29756]: pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:41:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:00 standalone.localdomain ceph-mon[29756]: pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:42:02 standalone.localdomain ceph-mon[29756]: pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:04 standalone.localdomain ceph-mon[29756]: pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:06 standalone.localdomain ceph-mon[29756]: pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:42:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:08 standalone.localdomain ceph-mon[29756]: pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:09 standalone.localdomain systemd[31158]: Created slice User Background Tasks Slice.
Oct 13 13:42:09 standalone.localdomain systemd[31158]: Starting Cleanup of User's Temporary Files and Directories...
Oct 13 13:42:09 standalone.localdomain systemd[31158]: Finished Cleanup of User's Temporary Files and Directories.
Oct 13 13:42:10 standalone.localdomain ceph-mon[29756]: pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:42:12 standalone.localdomain ceph-mon[29756]: pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:14 standalone.localdomain ceph-mon[29756]: pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:16 standalone.localdomain ceph-mon[29756]: pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:42:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:17 standalone.localdomain sudo[40643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:42:17 standalone.localdomain sudo[40643]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:42:17 standalone.localdomain sudo[40643]: pam_unix(sudo:session): session closed for user root
Oct 13 13:42:17 standalone.localdomain sudo[40658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:42:17 standalone.localdomain sudo[40658]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:42:18 standalone.localdomain ceph-mon[29756]: pgmap v177: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:18 standalone.localdomain sudo[40658]: pam_unix(sudo:session): session closed for user root
Oct 13 13:42:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:42:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:42:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:42:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:42:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:42:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:42:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:42:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:42:18 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev c620d5f4-2b80-455a-a589-77779f8a27c4 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:42:18 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev c620d5f4-2b80-455a-a589-77779f8a27c4 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:42:18 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event c620d5f4-2b80-455a-a589-77779f8a27c4 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:42:18 standalone.localdomain sudo[40704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:42:18 standalone.localdomain sudo[40704]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:42:18 standalone.localdomain sudo[40704]: pam_unix(sudo:session): session closed for user root
Oct 13 13:42:19 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:42:19 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:42:19 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:42:19 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:42:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:20 standalone.localdomain ceph-mon[29756]: pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:42:22 standalone.localdomain ceph-mon[29756]: pgmap v179: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:42:23
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v180: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:42:23 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 11 completed events
Oct 13 13:42:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:42:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:42:24 standalone.localdomain ceph-mon[29756]: pgmap v180: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:42:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:26 standalone.localdomain ceph-mon[29756]: pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:42:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:28 standalone.localdomain ceph-mon[29756]: pgmap v182: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:30 standalone.localdomain ceph-mon[29756]: pgmap v183: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:42:32 standalone.localdomain ceph-mon[29756]: pgmap v184: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:34 standalone.localdomain ceph-mon[29756]: pgmap v185: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:36 standalone.localdomain ceph-mon[29756]: pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:42:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:38 standalone.localdomain ceph-mon[29756]: pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:40 standalone.localdomain ceph-mon[29756]: pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:42:42 standalone.localdomain ceph-mon[29756]: pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:44 standalone.localdomain ceph-mon[29756]: pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:46 standalone.localdomain ceph-mon[29756]: pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:42:48 standalone.localdomain ceph-mon[29756]: pgmap v192: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:50 standalone.localdomain ceph-mon[29756]: pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:42:52 standalone.localdomain ceph-mon[29756]: pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:42:54 standalone.localdomain ceph-mon[29756]: pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:55 standalone.localdomain ceph-mon[29756]: pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:42:58 standalone.localdomain ceph-mon[29756]: pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:42:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:00 standalone.localdomain ceph-mon[29756]: pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:43:02 standalone.localdomain ceph-mon[29756]: pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:04 standalone.localdomain ceph-mon[29756]: pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:06 standalone.localdomain ceph-mon[29756]: pgmap v201: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:43:08 standalone.localdomain ceph-mon[29756]: pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:10 standalone.localdomain ceph-mon[29756]: pgmap v203: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:43:12 standalone.localdomain ceph-mon[29756]: pgmap v204: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:14 standalone.localdomain ceph-mon[29756]: pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:15 standalone.localdomain ceph-mon[29756]: pgmap v206: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:43:18 standalone.localdomain ceph-mon[29756]: pgmap v207: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:18 standalone.localdomain sudo[40719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:43:18 standalone.localdomain sudo[40719]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:43:18 standalone.localdomain sudo[40719]: pam_unix(sudo:session): session closed for user root
Oct 13 13:43:18 standalone.localdomain sudo[40734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:43:18 standalone.localdomain sudo[40734]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:43:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:19 standalone.localdomain sudo[40734]: pam_unix(sudo:session): session closed for user root
Oct 13 13:43:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:43:19 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:43:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:43:19 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:43:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:43:19 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:43:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:43:19 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:43:19 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 8f8ff0c4-e748-4125-add5-db325fd5d7cf (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:43:19 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 8f8ff0c4-e748-4125-add5-db325fd5d7cf (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:43:19 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 8f8ff0c4-e748-4125-add5-db325fd5d7cf (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:43:19 standalone.localdomain sudo[40780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:43:19 standalone.localdomain sudo[40780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:43:19 standalone.localdomain sudo[40780]: pam_unix(sudo:session): session closed for user root
Oct 13 13:43:20 standalone.localdomain ceph-mon[29756]: pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:20 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:43:20 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:43:20 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:43:20 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:43:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:43:22 standalone.localdomain ceph-mon[29756]: pgmap v209: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:43:23
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:43:23 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 12 completed events
Oct 13 13:43:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:43:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:43:24 standalone.localdomain ceph-mon[29756]: pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:43:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:25 standalone.localdomain ceph-mon[29756]: pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:43:28 standalone.localdomain ceph-mon[29756]: pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:30 standalone.localdomain ceph-mon[29756]: pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:43:32 standalone.localdomain ceph-mon[29756]: pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:34 standalone.localdomain ceph-mon[29756]: pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v216: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:36 standalone.localdomain ceph-mon[29756]: pgmap v216: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:43:38 standalone.localdomain ceph-mon[29756]: pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:40 standalone.localdomain ceph-mon[29756]: pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v219: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:43:42 standalone.localdomain ceph-mon[29756]: pgmap v219: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v220: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:44 standalone.localdomain ceph-mon[29756]: pgmap v220: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v221: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:45 standalone.localdomain ceph-mon[29756]: pgmap v221: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v222: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:43:48 standalone.localdomain ceph-mon[29756]: pgmap v222: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v223: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:49 standalone.localdomain ceph-mon[29756]: pgmap v223: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v224: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:43:52 standalone.localdomain ceph-mon[29756]: pgmap v224: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:43:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v225: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:53 standalone.localdomain ceph-mon[29756]: pgmap v225: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v226: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:56 standalone.localdomain ceph-mon[29756]: pgmap v226: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v227: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:43:58 standalone.localdomain ceph-mon[29756]: pgmap v227: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v228: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:43:59 standalone.localdomain ceph-mon[29756]: pgmap v228: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v229: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:02 standalone.localdomain ceph-mon[29756]: pgmap v229: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:44:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v230: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:03 standalone.localdomain systemd[1]: root-heat_launcher-tripleo_deploy\x2dpeyvtmmb.mount: Deactivated successfully.
Oct 13 13:44:04 standalone.localdomain ceph-mon[29756]: pgmap v230: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v231: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:05 standalone.localdomain python3[40807]: ansible-tripleo_ovn_mac_addresses Invoked with ovn_bridge_mappings=['datacentre:br-ctlplane'] ovn_static_bridge_mac_mappings={'standalone': {'datacentre': 'fa:16:3a:00:53:00'}} playbook_dir=/root/standalone-ansible-fa02z7re role_name=Standalone server_resource_names=['standalone'] stack_name=standalone wait=True timeout=180 interface=public concurrency=0 cloud=None auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 13:44:06 standalone.localdomain python3[40813]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 13:44:06 standalone.localdomain ceph-mon[29756]: pgmap v231: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:06 standalone.localdomain python3[40830]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 13:44:06 standalone.localdomain python3[40829]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 13:44:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v232: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:44:08 standalone.localdomain ceph-mon[29756]: pgmap v232: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:08 standalone.localdomain python3[41152]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config
Oct 13 13:44:08 standalone.localdomain python3[41164]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Oct 13 13:44:09 standalone.localdomain python3[41163]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Oct 13 13:44:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v233: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:09 standalone.localdomain python3[41186]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.5cseqvsxtmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:09 standalone.localdomain python3[41187]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.5o3xbkzrtmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:09 standalone.localdomain python3[41199]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.5o3xbkzrtmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:09 standalone.localdomain python3[41198]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.5cseqvsxtmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:10 standalone.localdomain python3[41207]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.5cseqvsxtmphosts insertbefore=BOF block=172.17.0.100 standalone.localdomain standalone
                                                       172.18.0.100 standalone.storage.localdomain standalone.storage
                                                       172.20.0.100 standalone.storagemgmt.localdomain standalone.storagemgmt
                                                       172.17.0.100 standalone.internalapi.localdomain standalone.internalapi
                                                       172.19.0.100 standalone.tenant.localdomain standalone.tenant
                                                       172.21.0.100 standalone.external.localdomain standalone.external
                                                       192.168.122.100 standalone.ctlplane.localdomain standalone.ctlplane
                                                       
                                                        marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: standalone marker_end=END_HOST_ENTRIES_FOR_STACK: standalone state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:10 standalone.localdomain ceph-mon[29756]: pgmap v233: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:10 standalone.localdomain python3[41209]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.5o3xbkzrtmphosts insertbefore=BOF block=172.17.0.100 standalone.localdomain standalone
                                                       172.18.0.100 standalone.storage.localdomain standalone.storage
                                                       172.20.0.100 standalone.storagemgmt.localdomain standalone.storagemgmt
                                                       172.17.0.100 standalone.internalapi.localdomain standalone.internalapi
                                                       172.19.0.100 standalone.tenant.localdomain standalone.tenant
                                                       172.21.0.100 standalone.external.localdomain standalone.external
                                                       192.168.122.100 standalone.ctlplane.localdomain standalone.ctlplane
                                                       
                                                        marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: standalone marker_end=END_HOST_ENTRIES_FOR_STACK: standalone state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:10 standalone.localdomain python3[41217]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.5o3xbkzrtmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:10 standalone.localdomain python3[41216]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.5cseqvsxtmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:11 standalone.localdomain python3[41226]: ansible-file Invoked with path=/tmp/ansible.5cseqvsxtmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:11 standalone.localdomain python3[41227]: ansible-file Invoked with path=/tmp/ansible.5o3xbkzrtmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v234: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:11 standalone.localdomain ceph-mon[29756]: pgmap v234: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:11 standalone.localdomain python3[41239]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:44:12 standalone.localdomain python3[41244]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:44:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v235: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:14 standalone.localdomain ceph-mon[29756]: pgmap v235: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v236: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:15 standalone.localdomain ceph-mon[29756]: pgmap v236: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:17 standalone.localdomain python3[41258]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v237: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:44:17 standalone.localdomain python3[41263]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:44:18 standalone.localdomain ceph-mon[29756]: pgmap v237: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v238: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:19 standalone.localdomain sudo[41265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:44:19 standalone.localdomain sudo[41265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:44:19 standalone.localdomain sudo[41265]: pam_unix(sudo:session): session closed for user root
Oct 13 13:44:19 standalone.localdomain sudo[41280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:44:19 standalone.localdomain sudo[41280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:44:19 standalone.localdomain sudo[41280]: pam_unix(sudo:session): session closed for user root
Oct 13 13:44:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:44:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:44:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:44:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:44:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:44:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:44:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:44:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:44:20 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 0bdb7b73-6002-4881-90a3-5e62864b46c2 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:44:20 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 0bdb7b73-6002-4881-90a3-5e62864b46c2 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:44:20 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 0bdb7b73-6002-4881-90a3-5e62864b46c2 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:44:20 standalone.localdomain sudo[41330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:44:20 standalone.localdomain sudo[41330]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:44:20 standalone.localdomain sudo[41330]: pam_unix(sudo:session): session closed for user root
Oct 13 13:44:20 standalone.localdomain ceph-mon[29756]: pgmap v238: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:20 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:44:20 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:44:20 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:44:20 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:44:21 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 13:44:21 standalone.localdomain systemd[1]: Starting man-db-cache-update.service...
Oct 13 13:44:21 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:44:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v239: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:21 standalone.localdomain systemd-sysv-generator[41396]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:44:21 standalone.localdomain systemd-rc-local-generator[41391]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:44:21 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:44:21 standalone.localdomain ceph-mon[29756]: pgmap v239: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:21 standalone.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 13:44:21 standalone.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 13:44:21 standalone.localdomain systemd[1]: Finished man-db-cache-update.service.
Oct 13 13:44:21 standalone.localdomain systemd[1]: run-r200929623cab4cd692a653dabf6315ad.service: Deactivated successfully.
Oct 13 13:44:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:44:22 standalone.localdomain python3[41648]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:44:23
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v240: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:23 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 13 completed events
Oct 13 13:44:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:44:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:44:23 standalone.localdomain python3[41775]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:44:23 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:44:24 standalone.localdomain systemd-rc-local-generator[41800]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:44:24 standalone.localdomain systemd-sysv-generator[41804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:44:24 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:44:24 standalone.localdomain ceph-mon[29756]: pgmap v240: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:44:24 standalone.localdomain python3[41817]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:24 standalone.localdomain python3[41821]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v241: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:25 standalone.localdomain python3[41828]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 13 13:44:25 standalone.localdomain ceph-mon[29756]: pgmap v241: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:25 standalone.localdomain python3[41834]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:25 standalone.localdomain python3[41840]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:26 standalone.localdomain python3[41846]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 13:44:26 standalone.localdomain systemd[1]: Reloading Network Manager...
Oct 13 13:44:26 standalone.localdomain NetworkManager[5962]: <info>  [1760363066.3513] audit: op="reload" arg="0" pid=41849 uid=0 result="success"
Oct 13 13:44:26 standalone.localdomain NetworkManager[5962]: <info>  [1760363066.3526] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf))
Oct 13 13:44:26 standalone.localdomain NetworkManager[5962]: <info>  [1760363066.3527] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Oct 13 13:44:26 standalone.localdomain systemd[1]: Reloaded Network Manager.
Oct 13 13:44:26 standalone.localdomain python3[41853]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:26 standalone.localdomain python3[41858]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:44:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v242: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:27 standalone.localdomain python3[41864]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:44:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:44:27 standalone.localdomain python3[41868]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:27 standalone.localdomain python3[41874]: ansible-tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 13 13:44:28 standalone.localdomain python3[41878]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:44:28 standalone.localdomain ceph-mon[29756]: pgmap v242: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:28 standalone.localdomain python3[41888]: ansible-blockinfile Invoked with path=/tmp/ansible.8101ej7t block=[192.168.122.100]*,[standalone.ctlplane.localdomain]*,[172.21.0.100]*,[standalone.external.localdomain]*,[172.17.0.100]*,[standalone.internalapi.localdomain]*,[172.18.0.100]*,[standalone.storage.localdomain]*,[172.20.0.100]*,[standalone.storagemgmt.localdomain]*,[172.19.0.100]*,[standalone.tenant.localdomain]*,[standalone.localdomain]*,[standalone]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCNkoVwZ2oohlYvcD/qhxAyRnIat7PUr6hW4eXifpaxVFWOGSjkOVawsiECVgrbBhDIPW+M4HrGs8BsLzCO0q0HJQ9IvnS8ZQr5mRbjei4swbOm3FFwBq6RW1FXifUOD0K8Ob/JoKwFFoitFHI4w4TTVbUXlub6YgM8FJZfZVQF+OkgsOUPKravagy8zyzUWCJZq3CWyzjEMKb2CoJMV+uoryALhR6A/iAJGGAg/TtDOXCxsHfuGUPXo04ks63xpOCgGpzdQnD5c9q5EySCejiYhtsgm+CTNrcsvO9yhjXYVQ8IB15W8KmZDJNAcEe5i2bBYbTLiC0NgdyINHQ8jnb7mTztBjhdNbFYPqvgD8p8i3obhP1yvSBxk5yJ3qzht6ixTnggCh5Yev6rIcetij2XYREtvsTTCvq1To+Ke7Qem4wRnRtqwQxvH4yBUu+WrqFWJIeAH7wB4QH05ordBku7Dqss4Ak64CSJfQ+NnAEx2NaEhGfvNPOAT2GHFm8zuCU=
                                                        create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:28 standalone.localdomain python3[41892]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.8101ej7t' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:28 standalone.localdomain python3[41898]: ansible-file Invoked with path=/tmp/ansible.8101ej7t state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v243: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:29 standalone.localdomain python3[41905]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:44:29 standalone.localdomain python3[41909]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:29 standalone.localdomain python3[41915]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:30 standalone.localdomain python3[41922]: ansible-community.general.cloud_init_data_facts Invoked with filter=status
Oct 13 13:44:30 standalone.localdomain ceph-mon[29756]: pgmap v243: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v244: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:31 standalone.localdomain ceph-mon[29756]: pgmap v244: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:31 standalone.localdomain python3[41956]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:32 standalone.localdomain python3[41961]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:44:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:44:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v245: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:34 standalone.localdomain ceph-mon[29756]: pgmap v245: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v246: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:35 standalone.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 13 13:44:35 standalone.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: pgmap v246: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:44:35.380493) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363075380720, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6065, "num_deletes": 252, "total_data_size": 4061910, "memory_usage": 4255064, "flush_reason": "Manual Compaction"}
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363075397684, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 2429354, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 135, "largest_seqno": 6197, "table_properties": {"data_size": 2415133, "index_size": 7976, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 5189, "raw_key_size": 43535, "raw_average_key_size": 21, "raw_value_size": 2378266, "raw_average_value_size": 1150, "num_data_blocks": 370, "num_entries": 2067, "num_filter_entries": 2067, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362566, "oldest_key_time": 1760362566, "file_creation_time": 1760363075, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 17257 microseconds, and 10699 cpu microseconds.
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:44:35.397770) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 2429354 bytes OK
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:44:35.397803) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:44:35.399335) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:44:35.399365) EVENT_LOG_v1 {"time_micros": 1760363075399358, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:44:35.399399) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 4037227, prev total WAL file size 4037227, number of live WAL files 2.
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:44:35.400362) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(2372KB) 13(55KB) 8(1935B)]
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363075400485, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 2488200, "oldest_snapshot_seqno": -1}
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 1878 keys, 2442790 bytes, temperature: kUnknown
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363075416909, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 2442790, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 2428972, "index_size": 8026, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4741, "raw_key_size": 41275, "raw_average_key_size": 21, "raw_value_size": 2393860, "raw_average_value_size": 1274, "num_data_blocks": 373, "num_entries": 1878, "num_filter_entries": 1878, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760363075, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:44:35.417125) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 2442790 bytes
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:44:35.421743) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.0 rd, 148.2 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(2.4, 0.0 +0.0 blob) out(2.3 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2171, records dropped: 293 output_compression: NoCompression
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:44:35.421771) EVENT_LOG_v1 {"time_micros": 1760363075421758, "job": 4, "event": "compaction_finished", "compaction_time_micros": 16481, "compaction_time_cpu_micros": 9123, "output_level": 6, "num_output_files": 1, "total_output_size": 2442790, "num_input_records": 2171, "num_output_records": 1878, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363075422215, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363075422294, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363075422338, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 13 13:44:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:44:35.400262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:44:35 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 13:44:35 standalone.localdomain systemd[1]: Starting man-db-cache-update.service...
Oct 13 13:44:35 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:44:35 standalone.localdomain systemd-rc-local-generator[42015]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:44:35 standalone.localdomain systemd-sysv-generator[42022]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:44:35 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:44:35 standalone.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 13:44:35 standalone.localdomain systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 13 13:44:36 standalone.localdomain systemd[1]: tuned.service: Deactivated successfully.
Oct 13 13:44:36 standalone.localdomain systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 13 13:44:36 standalone.localdomain systemd[1]: tuned.service: Consumed 1.326s CPU time.
Oct 13 13:44:36 standalone.localdomain systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 13 13:44:36 standalone.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 13:44:36 standalone.localdomain systemd[1]: Finished man-db-cache-update.service.
Oct 13 13:44:36 standalone.localdomain systemd[1]: run-r419cebb3443842b9aeba148eb527216e.service: Deactivated successfully.
Oct 13 13:44:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v247: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:44:37 standalone.localdomain systemd[1]: Started Dynamic System Tuning Daemon.
Oct 13 13:44:37 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 13:44:37 standalone.localdomain systemd[1]: Starting man-db-cache-update.service...
Oct 13 13:44:37 standalone.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 13:44:37 standalone.localdomain systemd[1]: Finished man-db-cache-update.service.
Oct 13 13:44:37 standalone.localdomain systemd[1]: run-rbf9c1ddd4e5843d4807bbb713d73e952.service: Deactivated successfully.
Oct 13 13:44:38 standalone.localdomain ceph-mon[29756]: pgmap v247: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:38 standalone.localdomain python3[42389]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:44:38 standalone.localdomain systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 13 13:44:38 standalone.localdomain systemd[1]: tuned.service: Deactivated successfully.
Oct 13 13:44:38 standalone.localdomain systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 13 13:44:38 standalone.localdomain systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 13 13:44:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v248: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:39 standalone.localdomain ceph-mon[29756]: pgmap v248: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:40 standalone.localdomain systemd[1]: Started Dynamic System Tuning Daemon.
Oct 13 13:44:40 standalone.localdomain python3[42572]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:41 standalone.localdomain python3[42583]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Oct 13 13:44:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v249: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:41 standalone.localdomain python3[42587]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:44:41 standalone.localdomain python3[42595]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:42 standalone.localdomain ceph-mon[29756]: pgmap v249: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:44:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v250: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:43 standalone.localdomain python3[42605]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:43 standalone.localdomain ceph-mon[29756]: pgmap v250: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:43 standalone.localdomain python3[42610]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:44:44 standalone.localdomain python3[42642]: ansible-replace Invoked with regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v251: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:46 standalone.localdomain ceph-mon[29756]: pgmap v251: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v252: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:44:47 standalone.localdomain python3[42701]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:47 standalone.localdomain python3[42713]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:44:47 standalone.localdomain python3[42719]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/root/.ansible/tmp/ansible-tmp-1760363087.4848087-42703-484097256309/source _original_basename=tmp_gvvp_j2 follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:48 standalone.localdomain python3[42725]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:48 standalone.localdomain ceph-mon[29756]: pgmap v252: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:48 standalone.localdomain python3[42739]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:44:48 standalone.localdomain python3[42743]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363088.3554804-42729-186326741818980/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=6311e8b18ce1dfd98d3815e5a7902e05c1af773b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:49 standalone.localdomain python3[42757]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:44:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v253: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:49 standalone.localdomain python3[42761]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363088.8644562-42747-118046949753107/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=63b3da9ec99121682302955da415924de19007e7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:49 standalone.localdomain python3[42774]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:44:49 standalone.localdomain python3[42778]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363089.3568277-42747-114248592143886/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2 checksum=caefdcfbacc3c2fd9409bb7f7afdb9e52a38d583 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:50 standalone.localdomain python3[42790]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:44:50 standalone.localdomain ceph-mon[29756]: pgmap v253: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:50 standalone.localdomain python3[42794]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363089.8606555-42747-248419050247515/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=b629be8660bd0f48c0911269c1a9dc411b7a0c82 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:50 standalone.localdomain python3[42806]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:44:50 standalone.localdomain python3[42810]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363090.462058-42747-225841713343315/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v254: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:51 standalone.localdomain python3[42822]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:44:51 standalone.localdomain python3[42826]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363090.9294684-42747-189033436612492/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=c6b21e0d62457e185fbacc59f4e7b469ac9494d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:51 standalone.localdomain python3[42838]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:44:51 standalone.localdomain python3[42842]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363091.4074168-42747-117509652652408/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=e5ad1300dc31c6dc4168ebee1fb42cd0a482bc95 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:52 standalone.localdomain python3[42854]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:44:52 standalone.localdomain ceph-mon[29756]: pgmap v254: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:44:52 standalone.localdomain python3[42858]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363091.8912134-42747-83810750362241/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=ea1287ab36cc5e41f89178b6322bcab00a091ec8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:52 standalone.localdomain python3[42870]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:44:52 standalone.localdomain python3[42874]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363092.3798397-42747-89566086311494/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:53 standalone.localdomain python3[42886]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:44:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v255: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:53 standalone.localdomain python3[42890]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363092.8753912-42747-16875911201714/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=64405b73be2d8dad826f8e3a924c847407b9b445 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:53 standalone.localdomain ceph-mon[29756]: pgmap v255: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:53 standalone.localdomain python3[42902]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:44:53 standalone.localdomain python3[42906]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363093.3639793-42747-275715180505415/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=7d8c1375fd9f3f44fc342692924ab21dda1986a2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:54 standalone.localdomain python3[42913]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:44:54 standalone.localdomain python3[42929]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:44:54 standalone.localdomain python3[42933]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760363094.454164-42919-58499076608100/source _original_basename=tmpero2ubpo follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v256: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:55 standalone.localdomain python3[42948]: ansible-sefcontext Invoked with target=/var/lib/tripleo-config(/.*)? setype=container_file_t selevel=s0 state=present ignore_selinux_state=False ftype=a reload=True seuser=None
Oct 13 13:44:56 standalone.localdomain ceph-mon[29756]: pgmap v256: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:56 standalone.localdomain python3[42958]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:56 standalone.localdomain python3[42965]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:44:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v257: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:44:58 standalone.localdomain python3[43092]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=no-auto-default value=* backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:44:58 standalone.localdomain ceph-mon[29756]: pgmap v257: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:58 standalone.localdomain python3[43098]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 13:44:58 standalone.localdomain systemd[1]: Reloading Network Manager...
Oct 13 13:44:58 standalone.localdomain NetworkManager[5962]: <info>  [1760363098.4282] audit: op="reload" arg="0" pid=43101 uid=0 result="success"
Oct 13 13:44:58 standalone.localdomain NetworkManager[5962]: <info>  [1760363098.4294] config: signal: SIGHUP (no changes from disk)
Oct 13 13:44:58 standalone.localdomain systemd[1]: Reloaded Network Manager.
Oct 13 13:44:58 standalone.localdomain python3[43105]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 13:44:58 standalone.localdomain python3[43109]: ansible-stat Invoked with path=/var/lib/tripleo-config/os-net-config.returncode follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:44:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v258: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:44:59 standalone.localdomain python3[43115]: ansible-stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:44:59 standalone.localdomain python3[43121]: ansible-file Invoked with path=/etc/os-net-config state=directory recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:00 standalone.localdomain python3[43125]: ansible-tripleo_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 13 13:45:00 standalone.localdomain ceph-mon[29756]: pgmap v258: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:00 standalone.localdomain python3[43131]: ansible-file Invoked with path=/var/lib/tripleo-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:01 standalone.localdomain python3[43151]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v259: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:01 standalone.localdomain python3[43157]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363100.9184365-43141-127776916032226/source dest=/etc/os-net-config/config.yaml mode=420 backup=True follow=False _original_basename=standalone_net_config.j2 checksum=b46618d992bf3b70e379c56c5b7ced0507959f14 force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:01 standalone.localdomain ansible-async_wrapper.py[43175]: Invoked with 12572657992 300 /root/.ansible/tmp/ansible-tmp-1760363101.4895942-43163-45794506557339/AnsiballZ_tripleo_os_net_config.py _
Oct 13 13:45:01 standalone.localdomain ansible-async_wrapper.py[43178]: Starting module and watcher
Oct 13 13:45:01 standalone.localdomain ansible-async_wrapper.py[43178]: Start watching 43179 (300)
Oct 13 13:45:01 standalone.localdomain ansible-async_wrapper.py[43179]: Start module (43179)
Oct 13 13:45:01 standalone.localdomain ansible-async_wrapper.py[43175]: Return async_wrapper task started.
Oct 13 13:45:02 standalone.localdomain python3[43180]: ansible-tripleo_os_net_config Invoked with config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False purge=False cleanup=False
Oct 13 13:45:02 standalone.localdomain ceph-mon[29756]: pgmap v259: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:45:02 standalone.localdomain ansible-async_wrapper.py[43179]: Module complete (43179)
Oct 13 13:45:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v260: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:03 standalone.localdomain ceph-mon[29756]: pgmap v260: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v261: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:05 standalone.localdomain python3[43183]: ansible-ansible.legacy.async_status Invoked with jid=12572657992.43175 mode=status _async_dir=/tmp/.ansible_async
Oct 13 13:45:05 standalone.localdomain python3[43185]: ansible-ansible.legacy.async_status Invoked with jid=12572657992.43175 mode=cleanup _async_dir=/tmp/.ansible_async
Oct 13 13:45:05 standalone.localdomain python3[43197]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/os-net-config.returncode follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:05 standalone.localdomain python3[43201]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/tripleo-config/os-net-config.returncode src=/root/.ansible/tmp/ansible-tmp-1760363105.5347676-43187-152177736288664/source _original_basename=tmpn5lwnth0 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:06 standalone.localdomain python3[43215]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-tripleo-disable-network-config.cfg follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:06 standalone.localdomain ceph-mon[29756]: pgmap v261: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:06 standalone.localdomain python3[43219]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-tripleo-disable-network-config.cfg src=/root/.ansible/tmp/ansible-tmp-1760363106.0347311-43205-96060339034843/source _original_basename=tmphal8r8ms follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:06 standalone.localdomain python3[43225]: ansible-systemd Invoked with name=network enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 13 13:45:06 standalone.localdomain ansible-async_wrapper.py[43178]: Done in kid B.
Oct 13 13:45:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v262: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:45:07 standalone.localdomain python3[43235]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 13:45:08 standalone.localdomain ceph-mon[29756]: pgmap v262: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:08 standalone.localdomain python3[43297]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:08 standalone.localdomain python3[43309]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:08 standalone.localdomain python3[43313]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmpvo4gahys recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v263: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:09 standalone.localdomain python3[43319]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:09 standalone.localdomain python3[43333]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:09 standalone.localdomain python3[43337]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:10 standalone.localdomain python3[43351]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:10 standalone.localdomain ceph-mon[29756]: pgmap v263: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:10 standalone.localdomain python3[43355]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:10 standalone.localdomain python3[43368]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:10 standalone.localdomain python3[43372]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v264: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:11 standalone.localdomain python3[43384]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:11 standalone.localdomain python3[43388]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:11 standalone.localdomain python3[43400]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:11 standalone.localdomain python3[43404]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:12 standalone.localdomain python3[43416]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: pgmap v264: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.335673) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363112335750, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 570, "num_deletes": 250, "total_data_size": 133558, "memory_usage": 144928, "flush_reason": "Manual Compaction"}
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363112338554, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 96466, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6198, "largest_seqno": 6767, "table_properties": {"data_size": 94129, "index_size": 386, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6611, "raw_average_key_size": 18, "raw_value_size": 89091, "raw_average_value_size": 256, "num_data_blocks": 19, "num_entries": 348, "num_filter_entries": 348, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760363077, "oldest_key_time": 1760363077, "file_creation_time": 1760363112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 2938 microseconds, and 952 cpu microseconds.
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.338610) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 96466 bytes OK
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.338642) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.341681) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.341696) EVENT_LOG_v1 {"time_micros": 1760363112341692, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.341710) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 130374, prev total WAL file size 130696, number of live WAL files 2.
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.343089) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(94KB)], [20(2385KB)]
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363112343137, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 2539256, "oldest_snapshot_seqno": -1}
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 1726 keys, 1958765 bytes, temperature: kUnknown
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363112359599, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 1958765, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 1947562, "index_size": 5799, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4357, "raw_key_size": 38533, "raw_average_key_size": 22, "raw_value_size": 1916616, "raw_average_value_size": 1110, "num_data_blocks": 273, "num_entries": 1726, "num_filter_entries": 1726, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760363112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.359850) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 1958765 bytes
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.361483) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.8 rd, 118.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 2.3 +0.0 blob) out(1.9 +0.0 blob), read-write-amplify(46.6) write-amplify(20.3) OK, records in: 2226, records dropped: 500 output_compression: NoCompression
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.361513) EVENT_LOG_v1 {"time_micros": 1760363112361500, "job": 6, "event": "compaction_finished", "compaction_time_micros": 16513, "compaction_time_cpu_micros": 8687, "output_level": 6, "num_output_files": 1, "total_output_size": 1958765, "num_input_records": 2226, "num_output_records": 1726, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363112361643, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363112361926, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.343031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.361955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.361960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.361963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.361966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:45:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:45:12.361968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:45:12 standalone.localdomain python3[43420]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:12 standalone.localdomain python3[43432]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:12 standalone.localdomain python3[43436]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v265: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:13 standalone.localdomain python3[43448]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:13 standalone.localdomain ceph-mon[29756]: pgmap v265: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:13 standalone.localdomain python3[43452]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:13 standalone.localdomain python3[43464]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:13 standalone.localdomain python3[43468]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:14 standalone.localdomain python3[43480]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:14 standalone.localdomain python3[43484]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:14 standalone.localdomain python3[43496]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:14 standalone.localdomain python3[43500]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v266: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:15 standalone.localdomain python3[43507]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:45:15 standalone.localdomain python3[43523]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:15 standalone.localdomain python3[43527]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmpzhyle7m5 recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:16 standalone.localdomain ceph-mon[29756]: pgmap v266: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:17 standalone.localdomain python3[43546]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:45:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v267: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:45:18 standalone.localdomain ceph-mon[29756]: pgmap v267: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v268: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:20 standalone.localdomain sudo[43559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:45:20 standalone.localdomain sudo[43559]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:45:20 standalone.localdomain sudo[43559]: pam_unix(sudo:session): session closed for user root
Oct 13 13:45:20 standalone.localdomain ceph-mon[29756]: pgmap v268: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:20 standalone.localdomain sudo[43576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:45:20 standalone.localdomain sudo[43576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:45:20 standalone.localdomain sudo[43576]: pam_unix(sudo:session): session closed for user root
Oct 13 13:45:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:45:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:45:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:45:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:45:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:45:20 standalone.localdomain python3[43617]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:45:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:45:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:45:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:45:20 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev b62f9ec9-149f-4956-9c8d-15e4495474f3 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:45:20 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev b62f9ec9-149f-4956-9c8d-15e4495474f3 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:45:20 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event b62f9ec9-149f-4956-9c8d-15e4495474f3 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:45:20 standalone.localdomain sudo[43637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:45:20 standalone.localdomain sudo[43637]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:45:20 standalone.localdomain sudo[43637]: pam_unix(sudo:session): session closed for user root
Oct 13 13:45:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v269: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:21 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:45:21 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:45:21 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:45:21 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:45:21 standalone.localdomain python3[43654]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:45:21 standalone.localdomain python3[43659]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:45:21 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:45:21 standalone.localdomain systemd-rc-local-generator[43685]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:45:21 standalone.localdomain systemd-sysv-generator[43689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:45:21 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:45:22 standalone.localdomain systemd[1]: Starting Netfilter Tables...
Oct 13 13:45:22 standalone.localdomain systemd[1]: Finished Netfilter Tables.
Oct 13 13:45:22 standalone.localdomain ceph-mon[29756]: pgmap v269: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:45:22 standalone.localdomain python3[43720]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:22 standalone.localdomain python3[43724]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/root/.ansible/tmp/ansible-tmp-1760363122.4081235-43710-274589508029303/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:23 standalone.localdomain python3[43730]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:45:23
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v270: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:23 standalone.localdomain ceph-mon[29756]: pgmap v270: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:23 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 14 completed events
Oct 13 13:45:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:45:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:45:23 standalone.localdomain python3[43740]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:45:23 standalone.localdomain python3[43753]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:24 standalone.localdomain python3[43757]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/root/.ansible/tmp/ansible-tmp-1760363123.7158039-43743-94857416826580/source mode=None follow=False _original_basename=jump-chain.j2 checksum=a58fa6b4962df4e82ea0d541352b347fb05f5a78 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:45:24 standalone.localdomain python3[43771]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:24 standalone.localdomain python3[43775]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/root/.ansible/tmp/ansible-tmp-1760363124.2587917-43761-9246957275394/source mode=None follow=False _original_basename=jump-chain.j2 checksum=a58fa6b4962df4e82ea0d541352b347fb05f5a78 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:25 standalone.localdomain python3[43789]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v271: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:25 standalone.localdomain python3[43793]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/root/.ansible/tmp/ansible-tmp-1760363124.8807316-43779-40994241627968/source mode=None follow=False _original_basename=flush-chain.j2 checksum=ebf10f893abbc750c8ed26ea1c5dae0c0eda93be backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:25 standalone.localdomain sshd[43796]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:45:25 standalone.localdomain ceph-mon[29756]: pgmap v271: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:25 standalone.localdomain python3[43809]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:25 standalone.localdomain python3[43813]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/root/.ansible/tmp/ansible-tmp-1760363125.420924-43799-116066580572892/source mode=None follow=False _original_basename=chains.j2 checksum=9f98791395dff1b0ca99f958411ac5fca3e887b0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:26 standalone.localdomain sshd[43796]: Received disconnect from 193.46.255.159 port 52814:11:  [preauth]
Oct 13 13:45:26 standalone.localdomain sshd[43796]: Disconnected from authenticating user root 193.46.255.159 port 52814 [preauth]
Oct 13 13:45:27 standalone.localdomain python3[43827]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v272: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:27 standalone.localdomain python3[43831]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/root/.ansible/tmp/ansible-tmp-1760363125.8870652-43817-193224445688935/source mode=None follow=False _original_basename=ruleset.j2 checksum=332f810bc315efb75a7dd3ce37bd978d587c4cb0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:45:27 standalone.localdomain python3[43837]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:45:28 standalone.localdomain python3[43890]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"
                                                       include "/etc/nftables/tripleo-chains.nft"
                                                       include "/etc/nftables/tripleo-rules.nft"
                                                       include "/etc/nftables/tripleo-jumps.nft"
                                                        state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:28 standalone.localdomain ceph-mon[29756]: pgmap v272: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:28 standalone.localdomain python3[43897]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:45:28 standalone.localdomain python3[43902]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:45:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v273: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:29 standalone.localdomain python3[43909]: ansible-file Invoked with mode=0750 path=/var/log/containers/barbican setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:29 standalone.localdomain python3[43912]: ansible-file Invoked with mode=0750 path=/var/log/containers/httpd/barbican-api setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:29 standalone.localdomain python3[43915]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 13 13:45:30 standalone.localdomain ceph-mon[29756]: pgmap v273: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:30 standalone.localdomain python3[43923]: ansible-file Invoked with mode=0750 path=/var/log/containers/cinder setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:31 standalone.localdomain python3[43926]: ansible-file Invoked with mode=0750 path=/var/log/containers/httpd/cinder-api setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v274: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:31 standalone.localdomain python3[43929]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/cinder(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Oct 13 13:45:32 standalone.localdomain kernel: SELinux:  Converting 2708 SID table entries...
Oct 13 13:45:32 standalone.localdomain ceph-mon[29756]: pgmap v274: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:32 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:45:32 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 13:45:32 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:45:32 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:45:32 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:45:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:45:32 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:45:32 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:45:32 standalone.localdomain python3[43938]: ansible-file Invoked with mode=0750 path=/var/log/containers/cinder setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:32 standalone.localdomain python3[43941]: ansible-file Invoked with path=/var/lib/cinder setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v275: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:33 standalone.localdomain python3[43944]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:33 standalone.localdomain ceph-mon[29756]: pgmap v275: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:33 standalone.localdomain python3[43948]: ansible-file Invoked with mode=0750 path=/var/log/containers/cinder setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:34 standalone.localdomain python3[43970]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False
Oct 13 13:45:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v276: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:35 standalone.localdomain python3[43978]: ansible-community.general.seport Invoked with ports=['8787', '13787'] proto=tcp setype=http_port_t state=present ignore_selinux_state=False reload=True
Oct 13 13:45:35 standalone.localdomain kernel: SELinux:  Converting 2708 SID table entries...
Oct 13 13:45:36 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:45:36 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 13:45:36 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:45:36 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:45:36 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:45:36 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:45:36 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:45:36 standalone.localdomain ceph-mon[29756]: pgmap v276: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:36 standalone.localdomain kernel: SELinux:  Converting 2708 SID table entries...
Oct 13 13:45:36 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:45:36 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 13:45:36 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:45:36 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:45:36 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:45:36 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:45:36 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:45:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v277: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:45:37 standalone.localdomain python3[43996]: ansible-ansible.legacy.dnf Invoked with name=['httpd'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:45:38 standalone.localdomain ceph-mon[29756]: pgmap v277: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v278: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:40 standalone.localdomain ceph-mon[29756]: pgmap v278: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:40 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 13 13:45:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v279: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:41 standalone.localdomain groupadd[44015]: group added to /etc/group: name=apache, GID=48
Oct 13 13:45:41 standalone.localdomain groupadd[44015]: group added to /etc/gshadow: name=apache
Oct 13 13:45:41 standalone.localdomain groupadd[44015]: new group: name=apache, GID=48
Oct 13 13:45:41 standalone.localdomain useradd[44022]: new user: name=apache, UID=48, GID=48, home=/usr/share/httpd, shell=/sbin/nologin, from=none
Oct 13 13:45:41 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 13:45:41 standalone.localdomain systemd[1]: Starting man-db-cache-update.service...
Oct 13 13:45:41 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:45:42 standalone.localdomain systemd-rc-local-generator[44083]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:45:42 standalone.localdomain systemd-sysv-generator[44090]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:45:42 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:45:42 standalone.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 13:45:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:45:42 standalone.localdomain ceph-mon[29756]: pgmap v279: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:42 standalone.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 13:45:42 standalone.localdomain systemd[1]: Finished man-db-cache-update.service.
Oct 13 13:45:42 standalone.localdomain systemd[1]: run-re61823b1c16b4335ba40331708cd9566.service: Deactivated successfully.
Oct 13 13:45:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v280: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:43 standalone.localdomain ceph-mon[29756]: pgmap v280: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:43 standalone.localdomain python3[44347]: ansible-file Invoked with state=directory path=/var/lib/image-serve/v2 mode=493 owner=root group=root setype=httpd_sys_content_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:43 standalone.localdomain python3[44359]: ansible-ansible.legacy.stat Invoked with path=/var/lib/image-serve/v2/index.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:43 standalone.localdomain python3[44363]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/image-serve/v2/index.json mode=420 owner=root group=root setype=httpd_sys_content_t src=/root/.ansible/tmp/ansible-tmp-1760363143.5792658-44349-225431005135544/source _original_basename=tmp6myk2djt follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:44 standalone.localdomain python3[44369]: ansible-lineinfile Invoked with path=/etc/httpd/conf/httpd.conf regexp=^\s*Listen(.*)$ line=# Listen \1 state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:44 standalone.localdomain python3[44381]: ansible-ansible.legacy.stat Invoked with path=/etc/httpd/conf.d/image-serve.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:44 standalone.localdomain python3[44385]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363144.3367593-44371-173999061131187/source dest=/etc/httpd/conf.d/image-serve.conf mode=None follow=False _original_basename=image-serve.conf.j2 checksum=b30714d27273d428c6ba27b62f9f48253fe25d38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:45 standalone.localdomain python3[44391]: ansible-systemd Invoked with name=httpd state=restarted enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:45:45 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:45:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v281: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:45 standalone.localdomain systemd-rc-local-generator[44412]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:45:45 standalone.localdomain systemd-sysv-generator[44415]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:45:45 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:45:45 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:45:45 standalone.localdomain systemd-rc-local-generator[44454]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:45:45 standalone.localdomain systemd-sysv-generator[44459]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:45:45 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:45:45 standalone.localdomain systemd[1]: Starting The Apache HTTP Server...
Oct 13 13:45:45 standalone.localdomain systemd[1]: Started The Apache HTTP Server.
Oct 13 13:45:45 standalone.localdomain httpd[44467]: Server configured, listening on: port 8787
Oct 13 13:45:46 standalone.localdomain python3[44686]: ansible-file Invoked with mode=0750 path=/var/log/containers/glance setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:46 standalone.localdomain python3[44689]: ansible-file Invoked with mode=0750 path=/var/log/containers/httpd/glance setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:46 standalone.localdomain ceph-mon[29756]: pgmap v281: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:46 standalone.localdomain python3[44696]: ansible-file Invoked with path=/var/lib/glance setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v282: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:45:47 standalone.localdomain python3[44728]: ansible-file Invoked with mode=0750 path=/var/log/containers/haproxy setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:48 standalone.localdomain python3[44731]: ansible-file Invoked with path=/var/lib/haproxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:48 standalone.localdomain ceph-mon[29756]: pgmap v282: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v283: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:50 standalone.localdomain ceph-mon[29756]: pgmap v283: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v284: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:52 standalone.localdomain ceph-mon[29756]: pgmap v284: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:45:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v285: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:53 standalone.localdomain ceph-mon[29756]: pgmap v285: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:53 standalone.localdomain python3[44857]: ansible-file Invoked with mode=0750 path=/var/log/containers/heat setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:53 standalone.localdomain python3[44860]: ansible-file Invoked with mode=0750 path=/var/log/containers/httpd/heat-api setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:54 standalone.localdomain python3[44863]: ansible-file Invoked with mode=0750 path=/var/log/containers/heat setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:54 standalone.localdomain python3[44866]: ansible-file Invoked with mode=0750 path=/var/log/containers/httpd/heat-api-cfn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:54 standalone.localdomain python3[44869]: ansible-file Invoked with mode=0750 path=/var/log/containers/heat setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:54 standalone.localdomain python3[44873]: ansible-file Invoked with mode=0750 path=/var/log/containers/horizon setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:55 standalone.localdomain python3[44876]: ansible-file Invoked with mode=0750 path=/var/log/containers/httpd/horizon setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v286: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:55 standalone.localdomain python3[44878]: ansible-file Invoked with path=/var/www setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:55 standalone.localdomain python3[44880]: ansible-file Invoked with mode=01777 path=/var/tmp/horizon setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:45:55 standalone.localdomain python3[44891]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/var-tmp-horizon.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:45:56 standalone.localdomain python3[44895]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/var-tmp-horizon.conf src=/root/.ansible/tmp/ansible-tmp-1760363155.678528-44881-86954548871138/source _original_basename=tmp3jhg9jy7 follow=False checksum=804a78abbf39204f4c8abd5e4269fa10d8cb9df3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:45:56 standalone.localdomain ceph-mon[29756]: pgmap v286: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:56 standalone.localdomain python3[44901]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Oct 13 13:45:57 standalone.localdomain kernel: SELinux:  Converting 2715 SID table entries...
Oct 13 13:45:57 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:45:57 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 13:45:57 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:45:57 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:45:57 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:45:57 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:45:57 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:45:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v287: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:45:57 standalone.localdomain python3[44913]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Oct 13 13:45:58 standalone.localdomain ceph-mon[29756]: pgmap v287: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:58 standalone.localdomain kernel: SELinux:  Converting 2715 SID table entries...
Oct 13 13:45:58 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:45:58 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 13:45:58 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:45:58 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:45:58 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:45:58 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:45:58 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:45:59 standalone.localdomain python3[44920]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Oct 13 13:45:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v288: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:45:59 standalone.localdomain kernel: SELinux:  Converting 2715 SID table entries...
Oct 13 13:45:59 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:45:59 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 13:45:59 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:45:59 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:45:59 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:45:59 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:45:59 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:46:00 standalone.localdomain python3[44928]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:00 standalone.localdomain python3[44931]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:00 standalone.localdomain ceph-mon[29756]: pgmap v288: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:00 standalone.localdomain python3[44933]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:00 standalone.localdomain python3[44936]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:46:00 standalone.localdomain python3[44942]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:46:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v289: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:01 standalone.localdomain ceph-mon[29756]: pgmap v289: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:01 standalone.localdomain python3[44953]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:46:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:46:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v290: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:03 standalone.localdomain ceph-mon[29756]: pgmap v290: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:04 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 13 13:46:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v291: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:05 standalone.localdomain python3[44958]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:05 standalone.localdomain python3[44970]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:46:05 standalone.localdomain python3[44974]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363165.3479586-44960-275652922674474/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:06 standalone.localdomain python3[44980]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 13:46:06 standalone.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 13:46:06 standalone.localdomain systemd[1]: Stopped Load Kernel Modules.
Oct 13 13:46:06 standalone.localdomain systemd[1]: Stopping Load Kernel Modules...
Oct 13 13:46:06 standalone.localdomain systemd[1]: Starting Load Kernel Modules...
Oct 13 13:46:06 standalone.localdomain kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 13 13:46:06 standalone.localdomain systemd-modules-load[44983]: Inserted module 'br_netfilter'
Oct 13 13:46:06 standalone.localdomain kernel: Bridge firewalling registered
Oct 13 13:46:06 standalone.localdomain systemd-modules-load[44983]: Module 'msr' is built in
Oct 13 13:46:06 standalone.localdomain systemd[1]: Finished Load Kernel Modules.
Oct 13 13:46:06 standalone.localdomain ceph-mon[29756]: pgmap v291: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:06 standalone.localdomain python3[44999]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:46:06 standalone.localdomain python3[45003]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363166.2480538-44989-132552492118690/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 13:46:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 600.0 total, 600.0 interval
                                                        Cumulative writes: 1728 writes, 7145 keys, 1728 commit groups, 1.0 writes per commit group, ingest: 0.00 GB, 0.01 MB/s
                                                        Cumulative WAL: 1728 writes, 1728 syncs, 1.00 writes per sync, written: 0.00 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1728 writes, 7145 keys, 1728 commit groups, 1.0 writes per commit group, ingest: 4.17 MB, 0.01 MB/s
                                                        Interval WAL: 1728 writes, 1728 syncs, 1.00 writes per sync, written: 0.00 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    106.2      0.02              0.01         3    0.008       0      0       0.0       0.0
                                                          L6      1/0    1.87 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.7    145.3    127.2      0.03              0.02         2    0.016    4397    793       0.0       0.0
                                                         Sum      1/0    1.87 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     85.3    118.5      0.06              0.03         5    0.011    4397    793       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     90.1    124.2      0.05              0.03         4    0.013    4397    793       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    145.3    127.2      0.03              0.02         2    0.016    4397    793       0.0       0.0
                                                        High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    119.3      0.02              0.01         2    0.010       0      0       0.0       0.0
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 600.0 total, 600.0 interval
                                                        Flush(GB): cumulative 0.002, interval 0.002
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.01 GB write, 0.01 MB/s write, 0.00 GB read, 0.01 MB/s read, 0.1 seconds
                                                        Interval compaction: 0.01 GB write, 0.01 MB/s write, 0.00 GB read, 0.01 MB/s read, 0.1 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x557b7fb5b350#2 capacity: 308.00 MB usage: 608.47 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7.5e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(39,568.34 KB,0.180202%) FilterBlock(6,17.05 KB,0.00540498%) IndexBlock(6,23.08 KB,0.00731728%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
Oct 13 13:46:07 standalone.localdomain python3[45009]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v292: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:46:07 standalone.localdomain python3[45014]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:07 standalone.localdomain python3[45018]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:07 standalone.localdomain python3[45021]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:08 standalone.localdomain python3[45024]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:08 standalone.localdomain ceph-mon[29756]: pgmap v292: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:08 standalone.localdomain python3[45027]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:08 standalone.localdomain python3[45031]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:08 standalone.localdomain python3[45035]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:09 standalone.localdomain python3[45039]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v293: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:09 standalone.localdomain python3[45043]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:09 standalone.localdomain python3[45047]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:09 standalone.localdomain python3[45051]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:10 standalone.localdomain python3[45055]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:10 standalone.localdomain python3[45058]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:10 standalone.localdomain ceph-mon[29756]: pgmap v293: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:10 standalone.localdomain python3[45061]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:10 standalone.localdomain python3[45064]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:10 standalone.localdomain python3[45067]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Oct 13 13:46:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v294: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:11 standalone.localdomain python3[45072]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 13:46:11 standalone.localdomain systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 13:46:11 standalone.localdomain systemd[1]: Stopped Apply Kernel Variables.
Oct 13 13:46:11 standalone.localdomain systemd[1]: Stopping Apply Kernel Variables...
Oct 13 13:46:11 standalone.localdomain systemd[1]: Starting Apply Kernel Variables...
Oct 13 13:46:11 standalone.localdomain systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 13 13:46:11 standalone.localdomain systemd[1]: Finished Apply Kernel Variables.
Oct 13 13:46:11 standalone.localdomain python3[45080]: ansible-file Invoked with mode=0750 path=/var/log/containers/keystone setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:11 standalone.localdomain python3[45083]: ansible-file Invoked with mode=0750 path=/var/log/containers/httpd/keystone setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:12 standalone.localdomain python3[45086]: ansible-stat Invoked with path=/etc/openldap/certs/certs_valid follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:46:12 standalone.localdomain ceph-mon[29756]: pgmap v294: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:12 standalone.localdomain python3[45090]: ansible-stat Invoked with path=/etc/openldap/certs/cert9.db follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:46:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:46:12 standalone.localdomain python3[45094]: ansible-stat Invoked with path=/etc/openldap/certs/key4.db follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:46:12 standalone.localdomain python3[45100]: ansible-file Invoked with mode=0750 path=/var/log/containers/manila setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:13 standalone.localdomain python3[45103]: ansible-file Invoked with mode=0750 path=/var/log/containers/httpd/manila-api setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v295: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:13 standalone.localdomain python3[45106]: ansible-file Invoked with mode=0750 path=/var/log/containers/manila setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:13 standalone.localdomain ceph-mon[29756]: pgmap v295: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:13 standalone.localdomain python3[45110]: ansible-file Invoked with mode=0750 path=/var/log/containers/manila setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:13 standalone.localdomain python3[45113]: ansible-file Invoked with path=/var/lib/manila setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:14 standalone.localdomain python3[45116]: ansible-file Invoked with mode=0750 path=/var/log/containers/memcached setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:14 standalone.localdomain python3[45120]: ansible-file Invoked with mode=0750 path=/var/log/containers/mysql setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:14 standalone.localdomain python3[45123]: ansible-file Invoked with path=/var/lib/mysql setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:14 standalone.localdomain python3[45125]: ansible-file Invoked with mode=0750 path=/var/log/mariadb setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v296: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:15 standalone.localdomain python3[45128]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:15 standalone.localdomain python3[45131]: ansible-file Invoked with mode=0750 path=/var/log/containers/httpd/neutron-api setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:15 standalone.localdomain python3[45134]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:15 standalone.localdomain python3[45138]: ansible-file Invoked with group=root mode=488 owner=root path=/var/lib/pci_passthrough_whitelist_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:16 standalone.localdomain python3[45150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/pci_passthrough_whitelist_scripts/derive_pci_passthrough_whitelist.py follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:46:16 standalone.localdomain ceph-mon[29756]: pgmap v296: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:16 standalone.localdomain python3[45154]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/pci_passthrough_whitelist_scripts/derive_pci_passthrough_whitelist.py mode=448 src=/root/.ansible/tmp/ansible-tmp-1760363176.053921-45140-245998030476070/source _original_basename=derive_pci_passthrough_whitelist.py follow=False checksum=020cb44edcd036afc0b7531b445fb44db4dcd4dc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:16 standalone.localdomain python3[45160]: ansible-ansible.legacy.command Invoked with _raw_params=/var/lib/pci_passthrough_whitelist_scripts/derive_pci_passthrough_whitelist.py _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:46:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v297: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:46:17 standalone.localdomain python3[45166]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:46:17 standalone.localdomain python3[45171]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:46:17 standalone.localdomain systemd[1]: run-netns-ns_temp.mount: Deactivated successfully.
Oct 13 13:46:17 standalone.localdomain python3[45176]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:18 standalone.localdomain ceph-mon[29756]: pgmap v297: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:18 standalone.localdomain python3[45182]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:18 standalone.localdomain python3[45194]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:46:18 standalone.localdomain python3[45198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=493 src=/root/.ansible/tmp/ansible-tmp-1760363178.3660142-45184-43324036040139/source _original_basename=tmpyilxg59a follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:18 standalone.localdomain python3[45212]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:46:19 standalone.localdomain python3[45216]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/root/.ansible/tmp/ansible-tmp-1760363178.8013804-45202-164337721127835/source _original_basename=tmpuv09n9z4 follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v298: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:19 standalone.localdomain python3[45222]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:19 standalone.localdomain python3[45225]: ansible-file Invoked with mode=0750 path=/var/log/containers/httpd/nova-api setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:19 standalone.localdomain python3[45228]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:20 standalone.localdomain python3[45232]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:20 standalone.localdomain ceph-mon[29756]: pgmap v298: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:20 standalone.localdomain python3[45235]: ansible-file Invoked with mode=0750 path=/var/log/containers/httpd/nova-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:20 standalone.localdomain python3[45238]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:46:20 standalone.localdomain python3[45242]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:21 standalone.localdomain sudo[45246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:46:21 standalone.localdomain sudo[45246]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:46:21 standalone.localdomain sudo[45246]: pam_unix(sudo:session): session closed for user root
Oct 13 13:46:21 standalone.localdomain python3[45245]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:21 standalone.localdomain sudo[45261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:46:21 standalone.localdomain sudo[45261]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:46:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v299: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:21 standalone.localdomain python3[45277]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:21 standalone.localdomain python3[45279]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:21 standalone.localdomain sudo[45261]: pam_unix(sudo:session): session closed for user root
Oct 13 13:46:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:46:21 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:46:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:46:21 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:46:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:46:21 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:46:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:46:21 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:46:21 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 59e8b988-b537-48c4-862b-1688310ab9b9 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:46:21 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 59e8b988-b537-48c4-862b-1688310ab9b9 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:46:21 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 59e8b988-b537-48c4-862b-1688310ab9b9 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:46:21 standalone.localdomain python3[45300]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:21 standalone.localdomain sudo[45313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:46:21 standalone.localdomain sudo[45313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:46:21 standalone.localdomain sudo[45313]: pam_unix(sudo:session): session closed for user root
Oct 13 13:46:22 standalone.localdomain python3[45338]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:46:22 standalone.localdomain python3[45342]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/root/.ansible/tmp/ansible-tmp-1760363181.825991-45328-37518655961719/source _original_basename=tmpfmrqhyx3 follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:22 standalone.localdomain ceph-mon[29756]: pgmap v299: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:22 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:46:22 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:46:22 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:46:22 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:46:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:46:22 standalone.localdomain python3[45348]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:46:23
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v300: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:23 standalone.localdomain ceph-mon[29756]: pgmap v300: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:23 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 15 completed events
Oct 13 13:46:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:46:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:46:23 standalone.localdomain python3[45391]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:24 standalone.localdomain python3[45395]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:24 standalone.localdomain python3[45398]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:46:24 standalone.localdomain python3[45400]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:24 standalone.localdomain python3[45402]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:25 standalone.localdomain python3[45404]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v301: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:25 standalone.localdomain python3[45406]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:25 standalone.localdomain ceph-mon[29756]: pgmap v301: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:25 standalone.localdomain python3[45408]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:25 standalone.localdomain python3[45410]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:26 standalone.localdomain python3[45413]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False
Oct 13 13:46:26 standalone.localdomain groupadd[45414]: group added to /etc/group: name=qemu, GID=107
Oct 13 13:46:26 standalone.localdomain groupadd[45414]: group added to /etc/gshadow: name=qemu
Oct 13 13:46:26 standalone.localdomain groupadd[45414]: new group: name=qemu, GID=107
Oct 13 13:46:26 standalone.localdomain python3[45423]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on standalone.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 13 13:46:26 standalone.localdomain useradd[45425]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=none
Oct 13 13:46:26 standalone.localdomain systemd-journald[618]: Field hash table of /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal has a fill level at 77.2 (257 of 333 items), suggesting rotation.
Oct 13 13:46:26 standalone.localdomain systemd-journald[618]: /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 13 13:46:26 standalone.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 13:46:26 standalone.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 13:46:27 standalone.localdomain python3[45436]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None
Oct 13 13:46:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v302: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:27 standalone.localdomain python3[45440]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:46:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:46:27 standalone.localdomain python3[45455]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:46:27 standalone.localdomain python3[45459]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/root/.ansible/tmp/ansible-tmp-1760363187.5240712-45445-217526810812718/source _original_basename=tmp3vdx8ahx follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:28 standalone.localdomain python3[45465]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 13 13:46:28 standalone.localdomain ceph-mon[29756]: pgmap v302: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v303: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:29 standalone.localdomain python3[45473]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:29 standalone.localdomain python3[45477]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:29 standalone.localdomain python3[45480]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:30 standalone.localdomain python3[45491]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:46:30 standalone.localdomain python3[45495]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/root/.ansible/tmp/ansible-tmp-1760363189.9374776-45481-79047081478971/source _original_basename=tmp6f_5ijm1 follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:30 standalone.localdomain ceph-mon[29756]: pgmap v303: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:30 standalone.localdomain python3[45509]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:46:30 standalone.localdomain python3[45513]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/root/.ansible/tmp/ansible-tmp-1760363190.4444902-45499-50117620289760/source _original_basename=tmprcmlugjj follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v304: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:31 standalone.localdomain python3[45519]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 13 13:46:31 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=16 res=1
Oct 13 13:46:31 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:46:31 standalone.localdomain ceph-mon[29756]: pgmap v304: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:31 standalone.localdomain systemd-sysv-generator[45544]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:46:31 standalone.localdomain systemd-rc-local-generator[45538]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:46:31 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:46:31 standalone.localdomain python3[45560]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:32 standalone.localdomain python3[45563]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:46:32 standalone.localdomain python3[45575]: ansible-ansible.legacy.dnf Invoked with name=['python-aodhclient', 'python-barbicanclient', 'python-cinderclient', 'python-designateclient', 'python-glanceclient', 'python-gnocchiclient', 'python-heatclient', 'python-ironicclient', 'python-keystoneclient', 'python-manilaclient', 'python-mistralclient', 'python-neutronclient', 'python-novaclient', 'python-openstackclient', 'python-osc-placement', 'python-saharaclient', 'python-swiftclient', 'python-zaqarclient'] state=present releasever=9 allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None
Oct 13 13:46:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v305: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:33 standalone.localdomain ceph-mon[29756]: pgmap v305: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v306: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:36 standalone.localdomain ceph-mon[29756]: pgmap v306: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v307: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:46:37 standalone.localdomain python3[45589]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/rabbitmq(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Oct 13 13:46:38 standalone.localdomain ceph-mon[29756]: pgmap v307: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:38 standalone.localdomain kernel: SELinux:  Converting 2718 SID table entries...
Oct 13 13:46:38 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:46:38 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 13:46:38 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:46:38 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:46:38 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:46:38 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:46:38 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:46:38 standalone.localdomain python3[45599]: ansible-file Invoked with path=/var/lib/rabbitmq setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:39 standalone.localdomain python3[45602]: ansible-file Invoked with mode=0750 path=/var/log/containers/rabbitmq setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v308: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:39 standalone.localdomain python3[45605]: ansible-ansible.legacy.command Invoked with _raw_params=echo 'export ERL_EPMD_ADDRESS=127.0.0.1' > /etc/rabbitmq/rabbitmq-env.conf
                                                       echo 'export ERL_EPMD_PORT=4370' >> /etc/rabbitmq/rabbitmq-env.conf
                                                       for pid in $(pgrep epmd --ns 1 --nslist pid); do kill $pid; done
                                                        _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:46:39 standalone.localdomain python3[45613]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/logind.conf regexp=^\s*#?\s*HandlePowerKey\s*=.* state=absent backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None line=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:39 standalone.localdomain python3[45617]: ansible-ansible.builtin.lineinfile Invoked with line=HandlePowerKey=ignore path=/etc/systemd/logind.conf regexp=^#?HandlePowerKey state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:40 standalone.localdomain python3[45621]: ansible-ansible.legacy.systemd Invoked with name=systemd-logind state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 13:46:40 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=17 res=1
Oct 13 13:46:40 standalone.localdomain systemd[1]: Stopping User Login Management...
Oct 13 13:46:40 standalone.localdomain systemd[1]: systemd-logind.service: Deactivated successfully.
Oct 13 13:46:40 standalone.localdomain systemd[1]: Stopped User Login Management.
Oct 13 13:46:40 standalone.localdomain systemd[1]: Starting Load Kernel Module drm...
Oct 13 13:46:40 standalone.localdomain systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 13:46:40 standalone.localdomain systemd[1]: Finished Load Kernel Module drm.
Oct 13 13:46:40 standalone.localdomain systemd[1]: Starting User Login Management...
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: New seat seat0.
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 13 13:46:40 standalone.localdomain systemd[1]: Started User Login Management.
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: New session 31 of user ceph-admin.
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: New session 25 of user root.
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: New session 40 of user ceph-admin.
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: New session 36 of user ceph-admin.
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: New session 30 of user ceph-admin.
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: New session 35 of user ceph-admin.
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: New session 28 of user ceph-admin.
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: New session 38 of user ceph-admin.
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: New session 32 of user ceph-admin.
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: New session 37 of user ceph-admin.
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: New session 39 of user ceph-admin.
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: New session 33 of user ceph-admin.
Oct 13 13:46:40 standalone.localdomain systemd-logind[45629]: New session 34 of user ceph-admin.
Oct 13 13:46:40 standalone.localdomain ceph-mon[29756]: pgmap v308: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:40 standalone.localdomain python3[45637]: ansible-file Invoked with mode=0750 path=/var/log/containers/placement setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:40 standalone.localdomain python3[45640]: ansible-file Invoked with mode=0750 path=/var/log/containers/httpd/placement setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v309: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:41 standalone.localdomain ceph-mon[29756]: pgmap v309: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:41 standalone.localdomain python3[45653]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:46:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:46:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v310: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:43 standalone.localdomain ceph-mon[29756]: pgmap v310: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:44 standalone.localdomain python3[45658]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 13:46:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v311: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:45 standalone.localdomain python3[45713]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:45 standalone.localdomain python3[45717]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                                        _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:46:46 standalone.localdomain ceph-mon[29756]: pgmap v311: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:46 standalone.localdomain python3[45740]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:46:46 standalone.localdomain python3[45744]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363205.9744124-45730-59748445109663/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=1f48096f39da4d32bd26b10cb04503bd3fcbb1d7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:46 standalone.localdomain python3[45758]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:46:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v312: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:47 standalone.localdomain python3[45764]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363206.6535654-45748-99179535911315/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=49d7b5ca5aa289d6b9b0e8b7734f14864fc638e4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:46:47 standalone.localdomain python3[45770]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:47 standalone.localdomain python3[45773]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:47 standalone.localdomain python3[45775]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:48 standalone.localdomain python3[45777]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:48 standalone.localdomain ceph-mon[29756]: pgmap v312: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:48 standalone.localdomain python3[45794]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:46:48 standalone.localdomain python3[45798]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/root/.ansible/tmp/ansible-tmp-1760363208.5633554-45784-143508531329602/source _original_basename=tmpolywvpw6 follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v313: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:49 standalone.localdomain python3[45804]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog'] state=installed allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:46:50 standalone.localdomain ceph-mon[29756]: pgmap v313: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v314: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:52 standalone.localdomain ceph-mon[29756]: pgmap v314: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:52 standalone.localdomain python3[45809]: ansible-systemd Invoked with enabled=True name=rsyslog state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:46:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:46:52 standalone.localdomain python3[45819]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:46:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v315: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:53 standalone.localdomain ceph-mon[29756]: pgmap v315: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v316: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:56 standalone.localdomain ceph-mon[29756]: pgmap v316: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:56 standalone.localdomain python3[45846]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:46:56 standalone.localdomain python3[45852]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/root/.ansible/tmp/ansible-tmp-1760363216.0421202-45836-51399274384819/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:46:56 standalone.localdomain python3[45859]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:46:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v317: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:46:57 standalone.localdomain systemd[1]: Stopping OpenSSH server daemon...
Oct 13 13:46:57 standalone.localdomain sshd[1133]: Received signal 15; terminating.
Oct 13 13:46:57 standalone.localdomain systemd[1]: sshd.service: Deactivated successfully.
Oct 13 13:46:57 standalone.localdomain systemd[1]: Stopped OpenSSH server daemon.
Oct 13 13:46:57 standalone.localdomain systemd[1]: sshd.service: Consumed 2.561s CPU time, read 1.9M from disk, written 0B to disk.
Oct 13 13:46:57 standalone.localdomain systemd[1]: Stopped target sshd-keygen.target.
Oct 13 13:46:57 standalone.localdomain systemd[1]: Stopping sshd-keygen.target...
Oct 13 13:46:57 standalone.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 13:46:57 standalone.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 13:46:57 standalone.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 13:46:57 standalone.localdomain systemd[1]: Reached target sshd-keygen.target.
Oct 13 13:46:57 standalone.localdomain systemd[1]: Starting OpenSSH server daemon...
Oct 13 13:46:57 standalone.localdomain sshd[45863]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:46:57 standalone.localdomain sshd[45863]: Server listening on 0.0.0.0 port 22.
Oct 13 13:46:57 standalone.localdomain sshd[45863]: Server listening on :: port 22.
Oct 13 13:46:57 standalone.localdomain systemd[1]: Started OpenSSH server daemon.
Oct 13 13:46:58 standalone.localdomain ceph-mon[29756]: pgmap v317: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:58 standalone.localdomain python3[45867]: ansible-file Invoked with path=/srv/node setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:58 standalone.localdomain python3[45870]: ansible-file Invoked with path=/var/log/swift setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:58 standalone.localdomain python3[45872]: ansible-file Invoked with mode=0750 path=/var/log/containers/swift setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:59 standalone.localdomain python3[45875]: ansible-file Invoked with path=/srv/node setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v318: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:46:59 standalone.localdomain python3[45878]: ansible-file Invoked with path=/var/cache/swift setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:59 standalone.localdomain python3[45880]: ansible-file Invoked with mode=0750 path=/var/log/containers/swift setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:46:59 standalone.localdomain python3[45887]: ansible-file Invoked with path=/srv/node/d1 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:47:00 standalone.localdomain ceph-mon[29756]: pgmap v318: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:00 standalone.localdomain python3[45903]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:47:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v319: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:01 standalone.localdomain python3[45923]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:47:01 standalone.localdomain python3[45931]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:47:02 standalone.localdomain ceph-mon[29756]: pgmap v319: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:47:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v320: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:03 standalone.localdomain ceph-mon[29756]: pgmap v320: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:05 standalone.localdomain python3[45952]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:47:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v321: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:05 standalone.localdomain python3[45956]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:47:05 standalone.localdomain python3[45968]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:47:06 standalone.localdomain ceph-mon[29756]: pgmap v321: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v322: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:07 standalone.localdomain python3[45986]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:47:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:47:07 standalone.localdomain python3[45990]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:47:07 standalone.localdomain python3[45996]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:47:07 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:47:08 standalone.localdomain systemd-sysv-generator[46024]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:47:08 standalone.localdomain systemd-rc-local-generator[46016]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:47:08 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:47:08 standalone.localdomain systemd[1]: Starting chronyd online sources service...
Oct 13 13:47:08 standalone.localdomain chronyc[46036]: 200 OK
Oct 13 13:47:08 standalone.localdomain systemd[1]: chrony-online.service: Deactivated successfully.
Oct 13 13:47:08 standalone.localdomain systemd[1]: Finished chronyd online sources service.
Oct 13 13:47:08 standalone.localdomain ceph-mon[29756]: pgmap v322: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:08 standalone.localdomain python3[46043]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:47:08 standalone.localdomain chronyd[28474]: System clock was stepped by 0.000062 seconds
Oct 13 13:47:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 13:47:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 600.1 total, 600.0 interval
                                                        Cumulative writes: 3388 writes, 16K keys, 3388 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s
                                                        Cumulative WAL: 3388 writes, 198 syncs, 17.11 writes per sync, written: 0.01 GB, 0.03 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 3388 writes, 16K keys, 3388 commit groups, 1.0 writes per commit group, ingest: 15.19 MB, 0.03 MB/s
                                                        Interval WAL: 3388 writes, 198 syncs, 17.11 writes per sync, written: 0.01 GB, 0.03 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                                         Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 600.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
                                                        
                                                        ** Compaction Stats [m-0] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [m-0] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 600.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [m-0] **
                                                        
                                                        ** Compaction Stats [m-1] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [m-1] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 600.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [m-1] **
                                                        
                                                        ** Compaction Stats [m-2] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [m-2] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 600.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [m-2] **
                                                        
                                                        ** Compaction Stats [p-0] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [p-0] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 600.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [p-0] **
                                                        
                                                        ** Compaction Stats [p-1] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [p-1] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 600.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [p-1] **
                                                        
                                                        ** Compaction Stats [p-2] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [p-2] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 600.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [p-2] **
                                                        
                                                        ** Compaction Stats [O-0] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [O-0] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 600.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427eedd0#2 capacity: 288.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,6.88765e-05%) FilterBlock(1,0.11 KB,3.70873e-05%) IndexBlock(1,0.14 KB,4.76837e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [O-0] **
                                                        
                                                        ** Compaction Stats [O-1] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [O-1] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 600.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427eedd0#2 capacity: 288.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,6.88765e-05%) FilterBlock(1,0.11 KB,3.70873e-05%) IndexBlock(1,0.14 KB,4.76837e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [O-1] **
                                                        
                                                        ** Compaction Stats [O-2] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                         Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [O-2] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 600.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427eedd0#2 capacity: 288.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,6.88765e-05%) FilterBlock(1,0.11 KB,3.70873e-05%) IndexBlock(1,0.14 KB,4.76837e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [O-2] **
                                                        
                                                        ** Compaction Stats [L] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [L] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 600.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [L] **
                                                        
                                                        ** Compaction Stats [P] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [P] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 600.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [P] **
Oct 13 13:47:08 standalone.localdomain python3[46048]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:47:09 standalone.localdomain python3[46053]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:47:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v323: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:09 standalone.localdomain chronyd[28474]: System clock was stepped by 0.000000 seconds
Oct 13 13:47:09 standalone.localdomain python3[46058]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:47:09 standalone.localdomain python3[46065]: ansible-timezone Invoked with name=UTC hwclock=None
Oct 13 13:47:10 standalone.localdomain systemd[1]: Starting Time & Date Service...
Oct 13 13:47:10 standalone.localdomain systemd[1]: Started Time & Date Service.
Oct 13 13:47:10 standalone.localdomain ceph-mon[29756]: pgmap v323: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:10 standalone.localdomain python3[46075]: ansible-ansible.legacy.dnf Invoked with name=['tmpwatch'] state=installed allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:47:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v324: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:12 standalone.localdomain ceph-mon[29756]: pgmap v324: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:47:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v325: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:13 standalone.localdomain ceph-mon[29756]: pgmap v325: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:13 standalone.localdomain python3[46088]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:47:14 standalone.localdomain python3[46097]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:47:14 standalone.localdomain python3[46108]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Oct 13 13:47:15 standalone.localdomain python3[46112]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:47:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v326: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:15 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Check health
Oct 13 13:47:15 standalone.localdomain python3[46127]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 13:47:16 standalone.localdomain python3[46131]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None
Oct 13 13:47:16 standalone.localdomain ceph-mon[29756]: pgmap v326: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:16 standalone.localdomain python3[46135]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 13:47:16 standalone.localdomain python3[46139]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:47:16 standalone.localdomain python3[46143]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:47:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v327: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:17 standalone.localdomain python3[46147]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Oct 13 13:47:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:47:17 standalone.localdomain kernel: SELinux:  Converting 2718 SID table entries...
Oct 13 13:47:17 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:47:17 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 13:47:17 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:47:17 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:47:17 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:47:17 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:47:17 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:47:18 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=18 res=1
Oct 13 13:47:18 standalone.localdomain systemd[1]: Stopping User Manager for UID 1000...
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Activating special unit Exit the Session...
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Stopping podman-pause-50b95cda.scope...
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Removed slice User Background Tasks Slice.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Stopped target Main User Target.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Stopped target Basic System.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Stopped target Paths.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Stopped target Sockets.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Stopped target Timers.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 13:47:18 standalone.localdomain dbus-broker[11042]: Dispatched 2884 messages @ 2(±27)μs / message.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Stopping D-Bus User Message Bus...
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Stopped Create User's Volatile Files and Directories.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Stopped podman-pause-50b95cda.scope.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Stopped D-Bus User Message Bus.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Removed slice Slice /user.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Closed D-Bus User Message Bus Socket.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Removed slice User Application Slice.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Reached target Shutdown.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Finished Exit the Session.
Oct 13 13:47:18 standalone.localdomain systemd[4177]: Reached target Exit the Session.
Oct 13 13:47:18 standalone.localdomain systemd[1]: user@1000.service: Deactivated successfully.
Oct 13 13:47:18 standalone.localdomain systemd[1]: Stopped User Manager for UID 1000.
Oct 13 13:47:18 standalone.localdomain systemd[1]: user@1000.service: Consumed 1.455s CPU time, read 12.0K from disk, written 4.0K to disk.
Oct 13 13:47:18 standalone.localdomain ceph-mon[29756]: pgmap v327: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:18 standalone.localdomain python3[46156]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:47:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v328: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:19 standalone.localdomain python3[46182]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_0':********@\'localhost\' WITH GRANT OPTION;"\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "DELETE FROM mysql.user WHERE user = \'root\' AND host NOT IN (\'%\',\'localhost\');"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"$(hiera \'mysql::server::root_password\')" shutdown'], 'detach': False, 'environment': {'DB_MARIABACKUP_PASSWORD': 'QHxmQQkEtb7pnNBAuySM7gZpW', 'DB_MARIABACKUP_USER': 'mariabackup', 'DB_MAX_TIMEOUT': 60, 'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json:rw,z', '/var/lib/config-data/puppet-generated/mysql:/var/lib/kolla/config_files/src:ro,z', '/var/lib/mysql:/var/lib/mysql:rw,z']}, 'mysql_data_ownership': {'command': ['chown', '-R', 'mysql:', '/var/lib/mysql'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/var/lib/mysql:/var/lib/mysql:z']}, 'rabbitmq_bootstrap': {'command': ['bash', '-ec', 'kolla_set_configs\nif [[ -e "/var/lib/rabbitmq/.erlang.cookie" ]]; then rm -f /var/lib/rabbitmq/.erlang.cookie; fi\nhiera \'rabbitmq::erlang_cookie\' > /var/lib/rabbitmq/.erlang.cookie\nchown 
rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie\nchmod 400 /var/lib/rabbitmq/.erlang.cookie'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'net': 'host', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rabbitmq:/var/lib/kolla/config_files/src:ro', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/var/lib/rabbitmq:/var/lib/rabbitmq:z', '/etc/puppet:/etc/puppet:ro,z']}}, 'step_2': {'barbican_init_log': {'command': ['/bin/bash', '-c', 'chown -R barbican:barbican /var/log/barbican'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z']}, 'cinder_api_init_logs': {'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, 'cinder_scheduler_init_logs': {'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z']}, 'clustercheck': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, 'create_dnsmasq_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'glance_init_logs': {'command': ['/bin/bash', '-c', 'chown -R glance:glance /var/log/glance'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 
'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z']}, 'heat_init_log': {'command': ['/bin/bash', '-c', 'chown -R heat:heat /var/log/heat'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/heat:/var/log/heat:z']}, 'horizon_fix_perms': {'command': ['/bin/bash', '-c', 'touch /var/log/horizon/horizon.log ; chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'keystone_init_log': {'command': ['/bin/bash', '-c', 'chown -R keystone:keystone /var/log/keystone'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'none', 'start_order': 1, 'user': 'root', 'volumes': ['/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z']}, 'manila_init_logs': {'command': ['/bin/bash', '-c', 'chown -R manila:manila /var/log/manila'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, 'mysql_wait_bundle': {'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,galera_ready,mysql_database,mysql_grant,mysql_user', 'include tripleo::profile::pacemaker::database::mysql_bundle'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/mysql:/var/lib/mysql:rw,z', '/var/lib/config-data/puppet-generated/mysql/root:/root:rw']}, 'neutron_init_logs': {'command': ['/bin/bash', '-c', 'chown -R neutron:neutron /var/log/neutron'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z']}, 'nova_api_init_logs': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_conductor_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_metadata_init_logs': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, 'placement_init_log': {'command': ['/bin/bash', '-c', 'chown -R placement:placement /var/log/placement'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'none', 'start_order': 1, 'user': 'root', 'volumes': ['/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z']}, 'rabbitmq_wait_bundle': {'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', 'include tripleo::profile::pacemaker::rabbitmq_bundle', ''], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/bin/true:/bin/epmd']}}, 'step_3': {'barbican_api': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, 'barbican_api_db_sync': {'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro']}, 'barbican_api_secret_store_sync': {'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_secret_store_sync.json:/var/lib/kolla/config_files/config.json:ro']}, 'barbican_keystone_listener': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, 'barbican_worker': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, 'cinder_api_db_sync': {'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 
'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z']}, 'cinder_backup_init_logs': {'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z']}, 'cinder_volume_init_logs': {'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder']}, 'glance_api_db_sync': {'cap_add': ['AUDIT_WRITE'], 'command': "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, 'heat_engine_db_sync': {'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, 'horizon': {'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'keystone': {'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, 'keystone_bootstrap': {'action': 'exec', 'command': ['keystone', '/usr/bin/bootstrap_host_exec', 'keystone', 'keystone-manage', 'bootstrap'], 'environment': {'KOLLA_BOOTSTRAP': True, 'OS_BOOTSTRAP_ADMIN_URL': 'http://192.168.122.99:35357', 'OS_BOOTSTRAP_INTERNAL_URL': 'http://172.17.0.2:5000', 'OS_BOOTSTRAP_PASSWORD': '0l8h5BMDlaumaGKKyzjOKn9mJ', 'OS_BOOTSTRAP_PROJECT_NAME': 'admin', 'OS_BOOTSTRAP_PUBLIC_URL': 'http://172.21.0.2:5000', 'OS_BOOTSTRAP_REGION_ID': 'regionOne', 'OS_BOOTSTRAP_ROLE_NAME': 'admin', 'OS_BOOTSTRAP_SERVICE_NAME': 'keystone', 'OS_BOOTSTRAP_USERNAME': 'admin'}, 'start_order': 3, 'user': 'root'}, 'keystone_cron': {'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, 'keystone_db_sync': {'command': ['/usr/bin/bootstrap_host_exec', 'keystone', '/usr/local/bin/kolla_start'], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, 'manila_api_db_sync': 
{'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, 'manila_share_init_logs': {'command': ['/bin/bash', '-c', 'chown -R manila:manila /var/log/manila'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/manila:/var/log/manila:z']}, 'neutron_db_sync': {'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, 'nova_api_db_sync': {'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, 'nova_api_ensure_default_cells': {'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_ensure_default_cells.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_db_sync': {'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', 
'/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, 'nova_virtnodedevd': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, 'nova_virtproxyd': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 
'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, 'nova_virtqemud': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, 'nova_virtsecretd': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, 'nova_virtstoraged': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, 'placement_api_db_sync': {'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, 'swift_copy_rings': {'command': ['/bin/bash', '-c', 'cp -v -dR --preserve -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw,z', '/var/lib/config-data/puppet-generated/swift_ringbuilder:/swift_ringbuilder:ro']}, 'swift_setup_srv': {'command': ['find', '/srv/node', '-maxdepth', '1', '-type', 'd', '-exec', 'chown', 'swift:', '{}', ';'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/srv/node:/srv/node:z']}}, 'step_4': {'cinder_api': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, 'cinder_api_cron': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, 'cinder_scheduler': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, 'configure_cms_options': {'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, 'glance_api': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/glance:/var/lib/glance:shared']}, 'glance_api_cron': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, 'glance_api_internal': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', 
'/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, 'heat_api': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, 'heat_api_cfn': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, 'heat_api_cron': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, 'heat_engine': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, 'keystone_refresh': {'action': 'exec', 'command': ['keystone', 'pkill', '--signal', 'USR1', 'httpd'], 'start_order': 1, 'user': 'root'}, 'logrotate_crond': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 'manila_api': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, 'manila_api_cron': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, 'manila_scheduler': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, 'neutron_api': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, 'neutron_dhcp': {'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 'neutron_sriov_agent': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, 'nova_api': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, 'nova_api_cron': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, 'nova_conductor': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, 'nova_libvirt_init_secret': {'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, 'nova_metadata': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, 'nova_migration_target': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 'nova_scheduler': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, 'nova_vnc_proxy': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, 'nova_wait_for_api_service': {'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_wait_for_api_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'ovn_controller': {'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 'ovn_metadata_agent': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 'placement_api': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, 'placement_wait_for_service': {'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_wait_for_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'setup_ovs_manager': {'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 
'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, 'swift_account_reaper': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift:z', '/var/log/containers/swift:/var/log/swift:z']}, 'swift_account_server': {'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 'swift_container_server': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
'swift_container_updater': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 'swift_object_expirer': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 'swift_object_server': 
{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 'swift_object_updater': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 'swift_proxy': 
{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}}, 'step_5': {'nova_compute': {'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 'nova_wait_for_compute_service': {'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}}}
Oct 13 13:47:20 standalone.localdomain python3[46186]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 13:47:20 standalone.localdomain rsyslogd[759]: message too long (92527) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 13:47:20 standalone.localdomain ceph-mon[29756]: pgmap v328: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:20 standalone.localdomain python3[46190]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 13:47:20 standalone.localdomain python3[46194]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/barbican_api.json': {'command': '/usr/sbin/httpd -DFOREGROUND', 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/barbican_api_create_hmac.json': {'command': "/usr/bin/bootstrap_host_exec barbican_api su barbican -s /bin/bash -c '/usr/bin/barbican-manage  hsm check_hmac --label  || /usr/bin/barbican-manage hsm gen_hmac --label  '", 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/barbican_api_create_mkek.json': {'command': "/usr/bin/bootstrap_host_exec barbican_api su barbican -s /bin/bash -c '/usr/bin/barbican-manage  hsm check_mkek --label  || /usr/bin/barbican-manage  hsm gen_mkek --label  '", 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/barbican_api_db_sync.json': {'command': "/usr/bin/bootstrap_host_exec barbican_api su barbican -s /bin/bash -c '/usr/bin/barbican-manage  db upgrade '", 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/barbican_api_get_from_rfs.json': {'command': '/opt/nfast/bin/rfs-sync --update', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/barbican_api_rewrap_pkeks.json': {'command': 
"/usr/bin/bootstrap_host_exec barbican_api su barbican -s /bin/bash -c '/usr/bin/barbican-manage  hsm rewrap_pkek '", 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/barbican_api_secret_store_sync.json': {'command': "/usr/bin/bootstrap_host_exec barbican_api su barbican -s /bin/bash -c '/usr/bin/barbican-manage  db sync_secret_stores --verbose '", 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/barbican_api_update_rfs_server.json': {'command': '/usr/bin/bootstrap_host_exec barbican_api /opt/nfast/bin/rfs-sync --commit', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/barbican_keystone_listener.json': {'command': '/usr/bin/barbican-keystone-listener', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/barbican_worker.json': {'command': '/usr/bin/barbican-worker', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/cinder_api.json': {'command': '/usr/sbin/httpd -DFOREGROUND', 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'cinder:cinder', 
'path': '/var/log/cinder', 'recurse': True}, {'owner': 'cinder:cinder', 'path': '/etc/pki/tls/certs/etcd.crt'}, {'owner': 'cinder:cinder', 'path': '/etc/pki/tls/private/etcd.key'}]}, '/var/lib/kolla/config_files/cinder_api_cron.json': {'command': '/usr/sbin/crond -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'cinder:cinder', 'path': '/var/log/cinder', 'recurse': True}]}, '/var/lib/kolla/config_files/cinder_api_db_sync.json': {'command': "/usr/bin/bootstrap_host_exec cinder_api su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'", 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'cinder:cinder', 'path': '/var/log/cinder', 'recurse': True}, {'owner': 'cinder:cinder', 'path': '/etc/pki/tls/certs/etcd.crt'}, {'owner': 'cinder:cinder', 'path': '/etc/pki/tls/private/etcd.key'}]}, '/var/lib/kolla/config_files/cinder_backup.json': {'command': '/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/', 'merge': True, 
'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'cinder:cinder', 'path': '/var/log/cinder', 'recurse': True}, {'owner': 'cinder:cinder', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}, {'owner': 'cinder:cinder', 'path': '/etc/pki/tls/certs/etcd.crt'}, {'owner': 'cinder:cinder', 'path': '/etc/pki/tls/private/etcd.key'}]}, '/var/lib/kolla/config_files/cinder_scheduler.json': {'command': '/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'cinder:cinder', 'path': '/var/log/cinder', 'recurse': True}, {'owner': 'cinder:cinder', 'path': '/etc/pki/tls/certs/etcd.crt'}, {'owner': 'cinder:cinder', 'path': '/etc/pki/tls/private/etcd.key'}]}, '/var/lib/kolla/config_files/cinder_volume.json': {'command': '/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'cinder:cinder', 'path': '/var/log/cinder', 'recurse': True}, {'owner': 'cinder:cinder', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}, {'owner': 'cinder:cinder', 'path': '/etc/pki/tls/certs/etcd.crt'}, 
{'owner': 'cinder:cinder', 'path': '/etc/pki/tls/private/etcd.key'}]}, '/var/lib/kolla/config_files/clustercheck.json': {'command': 'bash -c $* -- eval source /etc/sysconfig/clustercheck; exec socat -T"${TRIPLEO_HEALTHCHECK_TIMEOUT:-2}" "$TRIPLEO_SOCAT_BIND" system:"grep -qPe \\"^\\\\r\\$\\" && /usr/bin/clustercheck"', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/glance_api.json': {'command': '/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf --config-file /etc/glance/glance-image-import.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}], 'permissions': [{'owner': 'glance:glance', 'path': '/var/lib/glance', 'recurse': True}, {'owner': 'glance:glance', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/glance_api_cron.json': {'command': '/usr/sbin/crond -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'glance:glance', 'path': '/var/log/glance', 'recurse': True}]}, '/var/lib/kolla/config_files/glance_api_tls_proxy.json': {'command': '/usr/sbin/httpd -DFOREGROUND', 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 
'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/haproxy.json': {'command': 'bash -c $* -- eval if [ -f /usr/sbin/haproxy-systemd-wrapper ]; then exec /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg; else exec /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws; fi', 'config_files': [{'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'haproxy:haproxy', 'path': '/var/lib/haproxy', 'recurse': True}, {'optional': True, 'owner': 'haproxy:haproxy', 'path': '/etc/pki/tls/certs/haproxy/*', 'perm': '0600'}, {'optional': True, 'owner': 'haproxy:haproxy', 'path': '/etc/pki/tls/private/haproxy/*', 'perm': '0600'}]}, '/var/lib/kolla/config_files/heat_api.json': {'command': '/usr/sbin/httpd -DFOREGROUND', 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'heat:heat', 'path': '/var/log/heat', 'recurse': True}]}, '/var/lib/kolla/config_files/heat_api_cfn.json': {'command': '/usr/sbin/httpd -DFOREGROUND', 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': 
[{'owner': 'heat:heat', 'path': '/var/log/heat', 'recurse': True}]}, '/var/lib/kolla/config_files/heat_api_cron.json': {'command': '/usr/sbin/crond -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'heat:heat', 'path': '/var/log/heat', 'recurse': True}]}, '/var/lib/kolla/config_files/heat_engine.json': {'command': '/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'heat:heat', 'path': '/var/log/heat', 'recurse': True}]}, '/var/lib/kolla/config_files/heat_engine_db_sync.json': {'command': "/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'heat:heat', 'path': '/var/log/heat', 'recurse': True}]}, '/var/lib/kolla/config_files/horizon.json': {'command': '/usr/sbin/httpd -DFOREGROUND', 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'apache:apache', 'path': '/var/log/horizon/', 'recurse': True}, {'owner': 'apache:apache', 'path': '/etc/openstack-dashboard/', 'recurse': True}, {'owner': 'apache:apache', 'path': '/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': 'apache:apache', 'path': 
'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/keystone.json': {'command': '/usr/sbin/httpd', 'config_files': [{'dest': '/etc/keystone/fernet-keys', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/keystone/fernet-keys'}, {'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/keystone_cron.json': {'command': '/usr/sbin/crond -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/manila_api.json': {'command': '/usr/sbin/httpd -DFOREGROUND', 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'manila:manila', 'path': '/var/log/manila', 'recurse': True}]}, 
'/var/lib/kolla/config_files/manila_api_cron.json': {'command': '/usr/sbin/crond -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'manila:manila', 'path': '/var/log/manila', 'recurse': True}]}, '/var/lib/kolla/config_files/manila_api_db_sync.json': {'command': "/usr/bin/bootstrap_host_exec manila_api su manila -s /bin/bash -c '/usr/bin/manila-manage db sync'", 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'manila:manila', 'path': '/var/log/manila', 'recurse': True}]}, '/var/lib/kolla/config_files/manila_scheduler.json': {'command': '/usr/bin/manila-scheduler --config-file /usr/share/manila/manila-dist.conf --config-file /etc/manila/manila.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'manila:manila', 'path': '/var/log/manila', 'recurse': True}]}, '/var/lib/kolla/config_files/manila_share.json': {'command': '/usr/bin/manila-share --config-file /usr/share/manila/manila-dist.conf --config-file /etc/manila/manila.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'manila:manila', 'path': '/var/log/manila', 'recurse': True}]}, '/var/lib/kolla/config_files/memcached.json': {'command': 'bash -c $* -- eval source /etc/sysconfig/memcached; exec 
/usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS >> /var/log/memcached/memcached.log 2>&1', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'memcached:memcached', 'path': '/var/log/memcached', 'recurse': True}, {'optional': True, 'owner': 'memcached:memcached', 'path': '/etc/pki/tls/certs/memcached.crt'}, {'optional': True, 'owner': 'memcached:memcached', 'path': '/etc/pki/tls/private/memcached.key'}]}, '/var/lib/kolla/config_files/mysql.json': {'command': '/usr/sbin/pacemaker_remoted', 'config_files': [{'dest': '/etc/libqb/force-filesystem-sockets', 'owner': 'root', 'perm': '0644', 'source': '/dev/null'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'mysql:mysql', 'path': '/var/log/mysql', 'recurse': True}, {'optional': True, 'owner': 'mysql:mysql', 'path': '/etc/pki/tls/certs/mysql.crt', 'perm': '0600'}, {'optional': True, 'owner': 'mysql:mysql', 'path': '/etc/pki/tls/private/mysql.key', 'perm': '0600'}]}, '/var/lib/kolla/config_files/neutron_api.json': {'command': '/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': 
'/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/neutron_ovn.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/neutron_ovn.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/neutron_api_db_sync.json': {'command': '/usr/bin/bootstrap_host_exec neutron_api neutron-db-manage upgrade heads', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/neutron_ovn.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/neutron_ovn.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/neutron_dhcp.json': {'command': '/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/neutron.crt'}, {'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/neutron.key'}]}, 
'/var/lib/kolla/config_files/neutron_server_tls_proxy.json': {'command': '/usr/sbin/httpd -DFOREGROUND', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}]}, '/var/lib/kolla/config_files/neutron_sriov_agent.json': {'command': '/usr/bin/neutron-sriov-nic-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/sriov_agent.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/sriov-nic-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_api.json': {'command': '/usr/sbin/httpd -DFOREGROUND', 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 
'recurse': True}]}, '/var/lib/kolla/config_files/nova_api_cron.json': {'command': '/usr/sbin/crond -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_api_db_sync.json': {'command': "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_api_ensure_default_cells.json': {'command': "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/container-config-scripts/pyshim.sh /container-config-scripts/nova_api_ensure_default_cells.py'", 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 
'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_conductor.json': {'command': '/usr/bin/nova-conductor ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_conductor_db_sync.json': {'command': "/usr/bin/bootstrap_host_exec nova_conductor su nova -s /bin/bash -c '/usr/bin/nova-manage db sync '", 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_metadata.json': {'command': '/usr/sbin/httpd -DFOREGROUND', 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 
'recurse': True}]}, '/var/lib/kolla/config_files/nova_scheduler.json': {'command': '/usr/bin/nova-scheduler ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 
'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_vnc_proxy.json': {'command': '/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'root:root', 'path': '/etc/pki/tls/certs/novnc-proxy.crt', 'perm': '0644'}, {'owner': 'root:qemu', 'path': '/etc/pki/tls/private/novnc-proxy.key', 'perm': '0640'}, {'owner': 'root:root', 'path': '/etc/pki/tls/certs/libvirt-vnc-client-cert.crt', 'perm': '0644'}, {'owner': 'root:qemu', 'path': '/etc/pki/tls/private/libvirt-vnc-client-cert.key', 'perm': '0640'}]}, 
'/var/lib/kolla/config_files/nova_wait_for_api_service.json': {'command': "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_api_service.py'", 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_cluster_north_db_server.json': {'command': 'bash -c $* -- eval source /etc/sysconfig/ovn_cluster; exec /usr/local/bin/start-nb-db-server ${OVN_NB_DB_OPTS}', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_cluster_northd.json': {'command': 'bash -c $* -- eval source /etc/sysconfig/ovn_cluster; exec /usr/bin/ovn-northd ${OVN_NORTHD_OPTS}', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_cluster_south_db_server.json': {'command': 'bash -c $* -- 
eval source /etc/sysconfig/ovn_cluster; exec /usr/local/bin/start-sb-db-server ${OVN_SB_DB_OPTS}', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/placement_api.json': {'command': '/usr/sbin/httpd -DFOREGROUND', 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': 
'/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'placement:placement', 'path': '/var/log/placement', 'recurse': True}]}, '/var/lib/kolla/config_files/placement_api_db_sync.json': {'command': "/usr/bin/bootstrap_host_exec placement su placement -s /bin/bash -c '/usr/bin/placement-manage db sync'", 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'placement:placement', 'path': '/var/log/placement', 'recurse': True}]}, '/var/lib/kolla/config_files/placement_api_wait_for_service.json': {'command': "/usr/bin/bootstrap_host_exec placement su placement -s /bin/bash -c '/container-config-scripts/pyshim.sh /container-config-scripts/placement_wait_for_service.py'", 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'placement:placement', 'path': '/var/log/placement', 'recurse': True}]}, '/var/lib/kolla/config_files/rabbitmq.json': {'command': '/usr/sbin/pacemaker_remoted', 'config_files': [{'dest': '/etc/libqb/force-filesystem-sockets', 'owner': 'root', 'perm': '0644', 'source': '/dev/null'}, {'dest': '/var/log/btmp', 'owner': 'root:utmp', 'perm': '0600', 
'source': '/dev/null'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'rabbitmq:rabbitmq', 'path': '/var/lib/rabbitmq', 'recurse': True}, {'owner': 'rabbitmq:rabbitmq', 'path': '/var/log/rabbitmq', 'recurse': True}, {'optional': True, 'owner': 'rabbitmq:rabbitmq', 'path': '/etc/pki/tls/certs/rabbitmq.crt', 'perm': '0600'}, {'optional': True, 'owner': 'rabbitmq:rabbitmq', 'path': '/etc/pki/tls/private/rabbitmq.key', 'perm': '0600'}]}, '/var/lib/kolla/config_files/swift_account_auditor.json': {'command': '/usr/bin/swift-account-auditor /etc/swift/account-server.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/swift_account_reaper.json': {'command': '/usr/bin/swift-account-reaper /etc/swift/account-server.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/swift_account_replicator.json': {'command': '/usr/bin/swift-account-replicator /etc/swift/account-server.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/swift_account_server.json': {'command': '/usr/bin/swift-account-server /etc/swift/account-server.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/swift_container_auditor.json': {'command': '/usr/bin/swift-container-auditor /etc/swift/container-server.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, 
'/var/lib/kolla/config_files/swift_container_replicator.json': {'command': '/usr/bin/swift-container-replicator /etc/swift/container-server.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/swift_container_server.json': {'command': '/usr/bin/swift-container-server /etc/swift/container-server.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/swift_container_sharder.json': {'command': '/usr/bin/swift-container-sharder /etc/swift/container-server.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/swift_container_updater.json': {'command': '/usr/bin/swift-container-updater /etc/swift/container-server.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/swift_object_auditor.json': {'command': '/usr/bin/swift-object-auditor /etc/swift/object-server.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/swift_object_expirer.json': {'command': '/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/swift_object_replicator.json': {'command': '/usr/bin/swift-object-replicator /etc/swift/object-server.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/swift_object_server.json': {'command': '/usr/bin/swift-object-server /etc/swift/object-server.conf', 'config_files': 
[{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'swift:swift', 'path': '/var/cache/swift', 'recurse': True}]}, '/var/lib/kolla/config_files/swift_object_updater.json': {'command': '/usr/bin/swift-object-updater /etc/swift/object-server.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/swift_proxy.json': {'command': '/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/swift_proxy_tls_proxy.json': {'command': '/usr/sbin/httpd -DFOREGROUND', 'config_files': [{'dest': '/etc/httpd/conf.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.d'}, {'dest': '/etc/httpd/conf.modules.d', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/httpd/conf.modules.d'}, {'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/swift_rsync.json': {'command': '/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}}
Oct 13 13:47:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v329: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:21 standalone.localdomain rsyslogd[759]: message too long (43139) with configured size 8096, begin of message is: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/conf [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 13:47:21 standalone.localdomain sudo[46213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:47:21 standalone.localdomain sudo[46213]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:47:21 standalone.localdomain sudo[46213]: pam_unix(sudo:session): session closed for user root
Oct 13 13:47:21 standalone.localdomain sudo[46228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Oct 13 13:47:21 standalone.localdomain sudo[46228]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:47:22 standalone.localdomain ceph-mon[29756]: pgmap v329: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:22 standalone.localdomain sudo[46228]: pam_unix(sudo:session): session closed for user root
Oct 13 13:47:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:47:22 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:47:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:47:22 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:47:22 standalone.localdomain python3[46212]: ansible-tripleo_container_image_prepare Invoked with roles_data=[{'CountDefault': 1, 'RoleParametersDefault': {'OVNCMSOptions': 'enable-chassis-as-gw'}, 'ServicesDefault': ['OS::TripleO::Services::Aide', 'OS::TripleO::Services::AodhApi', 'OS::TripleO::Services::AodhEvaluator', 'OS::TripleO::Services::AodhListener', 'OS::TripleO::Services::AodhNotifier', 'OS::TripleO::Services::AuditD', 'OS::TripleO::Services::BootParams', 'OS::TripleO::Services::BarbicanApi', 'OS::TripleO::Services::BarbicanBackendDogtag', 'OS::TripleO::Services::BarbicanBackendKmip', 'OS::TripleO::Services::BarbicanBackendPkcs11Crypto', 'OS::TripleO::Services::BarbicanBackendSimpleCrypto', 'OS::TripleO::Services::CACerts', 'OS::TripleO::Services::CeilometerAgentCentral', 'OS::TripleO::Services::CeilometerAgentNotification', 'OS::TripleO::Services::CephClient', 'OS::TripleO::Services::CephExternal', 'OS::TripleO::Services::CephGrafana', 'OS::TripleO::Services::CephMds', 'OS::TripleO::Services::CephMgr', 'OS::TripleO::Services::CephMon', 'OS::TripleO::Services::CephNfs', 'OS::TripleO::Services::CephRbdMirror', 'OS::TripleO::Services::CephRgw', 'OS::TripleO::Services::CephOSD', 'OS::TripleO::Services::CinderApi', 'OS::TripleO::Services::CinderBackendDellEMCPowerFlex', 'OS::TripleO::Services::CinderBackendDellEMCPowermax', 'OS::TripleO::Services::CinderBackendDellEMCPowerStore', 'OS::TripleO::Services::CinderBackendDellEMCSc', 'OS::TripleO::Services::CinderBackendDellEMCUnity', 'OS::TripleO::Services::CinderBackendDellEMCVMAXISCSI', 'OS::TripleO::Services::CinderBackendDellEMCVNX', 'OS::TripleO::Services::CinderBackendDellEMCVxFlexOS', 'OS::TripleO::Services::CinderBackendDellEMCXtremio', 'OS::TripleO::Services::CinderBackendDellSc', 'OS::TripleO::Services::CinderBackendNVMeOF', 'OS::TripleO::Services::CinderBackendPure', 'OS::TripleO::Services::CinderBackendNetApp', 'OS::TripleO::Services::CinderBackendScaleIO', 
'OS::TripleO::Services::CinderBackup', 'OS::TripleO::Services::CinderHPELeftHandISCSI', 'OS::TripleO::Services::CinderScheduler', 'OS::TripleO::Services::CinderVolume', 'OS::TripleO::Services::Clustercheck', 'OS::TripleO::Services::Collectd', 'OS::TripleO::Services::ComputeCeilometerAgent', 'OS::TripleO::Services::CeilometerAgentIpmi', 'OS::TripleO::Services::ContainerImagePrepare', 'OS::TripleO::Services::ContainersLogrotateCrond', 'OS::TripleO::Services::DesignateApi', 'OS::TripleO::Services::DesignateCentral', 'OS::TripleO::Services::DesignateMDNS', 'OS::TripleO::Services::DesignateProducer', 'OS::TripleO::Services::DesignateSink', 'OS::TripleO::Services::DesignateBind', 'OS::TripleO::Services::DesignateWorker', 'OS::TripleO::Services::DockerRegistry', 'OS::TripleO::Services::Etcd', 'OS::TripleO::Services::ExternalSwiftProxy', 'OS::TripleO::Services::Frr', 'OS::TripleO::Services::GlanceApi', 'OS::TripleO::Services::GlanceApiInternal', 'OS::TripleO::Services::GnocchiApi', 'OS::TripleO::Services::GnocchiMetricd', 'OS::TripleO::Services::GnocchiStatsd', 'OS::TripleO::Services::HAproxy', 'OS::TripleO::Services::HeatApi', 'OS::TripleO::Services::HeatApiCfn', 'OS::TripleO::Services::HeatApiCloudwatch', 'OS::TripleO::Services::HeatEngine', 'OS::TripleO::Services::Horizon', 'OS::TripleO::Services::IpaClient', 'OS::TripleO::Services::Ipsec', 'OS::TripleO::Services::IronicApi', 'OS::TripleO::Services::IronicConductor', 'OS::TripleO::Services::IronicInspector', 'OS::TripleO::Services::IronicNeutronAgent', 'OS::TripleO::Services::IronicPxe', 'OS::TripleO::Services::Iscsid', 'OS::TripleO::Services::Kernel', 'OS::TripleO::Services::Keystone', 'OS::TripleO::Services::LoginDefs', 'OS::TripleO::Services::ManilaApi', 'OS::TripleO::Services::ManilaBackendCephFs', 'OS::TripleO::Services::ManilaBackendIsilon', 'OS::TripleO::Services::ManilaBackendNetapp', 'OS::TripleO::Services::ManilaBackendPowerMax', 'OS::TripleO::Services::ManilaBackendUnity', 
'OS::TripleO::Services::ManilaBackendVMAX', 'OS::TripleO::Services::ManilaBackendVNX', 'OS::TripleO::Services::ManilaScheduler', 'OS::TripleO::Services::ManilaShare', 'OS::TripleO::Services::MasqueradeNetworks', 'OS::TripleO::Services::Memcached', 'OS::TripleO::Services::MetricsQdr', 'OS::TripleO::Services::Multipathd', 'OS::TripleO::Services::MySQL', 'OS::TripleO::Services::MySQLClient', 'OS::TripleO::Services::NeutronApi', 'OS::TripleO::Services::NeutronBgpVpnApi', 'OS::TripleO::Services::NeutronBgpVpnBagpipe', 'OS::TripleO::Services::NeutronCorePlugin', 'OS::TripleO::Services::NeutronL2gwAgent', 'OS::TripleO::Services::NeutronL2gwApi', 'OS::TripleO::Services::NeutronL3Agent', 'OS::TripleO::Services::NeutronLinuxbridgeAgent', 'OS::TripleO::Services::NeutronMetadataAgent', 'OS::TripleO::Services::NeutronOvsAgent', 'OS::TripleO::Services::NeutronSfcApi', 'OS::TripleO::Services::NeutronSriovAgent', 'OS::TripleO::Services::NeutronDhcpAgent', 'OS::TripleO::Services::NeutronVppAgent', 'OS::TripleO::Services::NovaApi', 'OS::TripleO::Services::NovaConductor', 'OS::TripleO::Services::NovaIronic', 'OS::TripleO::Services::NovaMetadata', 'OS::TripleO::Services::NovaScheduler', 'OS::TripleO::Services::NovaCompute', 'OS::TripleO::Services::NovaLibvirt', 'OS::TripleO::Services::NovaMigrationTarget', 'OS::TripleO::Services::NovaVncProxy', 'OS::TripleO::Services::OVNController', 'OS::TripleO::Services::OVNDBs', 'OS::TripleO::Services::OVNMetadataAgent', 'OS::TripleO::Services::OctaviaApi', 'OS::TripleO::Services::OctaviaDeploymentConfig', 'OS::TripleO::Services::OctaviaHealthManager', 'OS::TripleO::Services::OctaviaHousekeeping', 'OS::TripleO::Services::OctaviaWorker', 'OS::TripleO::Services::OpenStackClients', 'OS::TripleO::Services::OsloMessagingNotify', 'OS::TripleO::Services::OsloMessagingRpc', 'OS::TripleO::Services::Pacemaker', 'OS::TripleO::Services::PlacementApi', 'OS::TripleO::Services::Podman', 'OS::TripleO::Services::Redis', 'OS::TripleO::Services::Rhsm', 
'OS::TripleO::Services::Rsyslog', 'OS::TripleO::Services::RsyslogSidecar', 'OS::TripleO::Services::Securetty', 'OS::TripleO::Services::Snmp', 'OS::TripleO::Services::Sshd', 'OS::TripleO::Services::SwiftDispersion', 'OS::TripleO::Services::SwiftProxy', 'OS::TripleO::Services::SwiftRingBuilder', 'OS::TripleO::Services::SwiftStorage', 'OS::TripleO::Services::Timesync', 'OS::TripleO::Services::Timezone', 'OS::TripleO::Services::Tmpwatch', 'OS::TripleO::Services::TripleoFirewall', 'OS::TripleO::Services::TripleoPackages', 'OS::TripleO::Services::Unbound', 'OS::TripleO::Services::Tuned', 'OS::TripleO::Services::Vpp'], 'default_route_networks': [], 'description': "A standalone role that a minimal set of services. This can be used for\ntesting in a single node configuration with the\n'openstack tripleo deploy --standalone' command or via an Undercloud using\n'openstack overcloud deploy'.\n", 'name': 'Standalone', 'networks': {'External': {'subnet': 'external_subnet'}, 'InternalApi': {'subnet': 'internal_api_subnet'}, 'Storage': {'subnet': 'storage_subnet'}, 'StorageMgmt': {'subnet': 'storage_mgmt_subnet'}, 'StorageNFS': {'subnet': 'storage_nfs_subnet'}, 'Tenant': {'subnet': 'tenant_subnet'}}, 'tags': ['primary', 'controller', 'standalone']}] environment={'parameter_defaults': {'AdditionalArchitectures': [], 'ContainerImagePrepare': [{'set': {'ceph_alertmanager_image': 'ose-prometheus-alertmanager', 'ceph_alertmanager_namespace': 'registry.redhat.io/openshift4', 'ceph_alertmanager_tag': 'v4.12', 'ceph_grafana_image': 'rhceph-6-dashboard-rhel9', 'ceph_grafana_namespace': 'registry.redhat.io/rhceph', 'ceph_grafana_tag': 'latest', 'ceph_image': 'rhceph-7-rhel9', 'ceph_namespace': 'registry.redhat.io/rhceph', 'ceph_node_exporter_image': 'ose-prometheus-node-exporter', 'ceph_node_exporter_namespace': 'registry.redhat.io/openshift4', 'ceph_node_exporter_tag': 'v4.12', 'ceph_prometheus_image': 'ose-prometheus', 'ceph_prometheus_namespace': 'registry.redhat.io/openshift4', 
'ceph_prometheus_tag': 'v4.12', 'ceph_tag': 'latest', 'name_prefix': 'openstack-', 'name_suffix': '', 'namespace': 'registry.redhat.io/rhosp-rhel9', 'neutron_driver': 'ovn', 'rhel_containers': False, 'tag': '17.1'}, 'tag_from_label': '{version}-{release}'}], 'ContainerImageRegistryCredentials': {}, 'DockerInsecureRegistryAddress': ['192.168.122.100:8787'], 'DockerRegistryMirror': '', 'NeutronMechanismDrivers': ['sriovnicswitch', 'ovn'], 'StandaloneContainerImagePrepare': {}, 'StandaloneCount': 1, 'StandaloneServices': ['OS::TripleO::Services::BootParams', 'OS::TripleO::Services::BarbicanApi', 'OS::TripleO::Services::BarbicanBackendSimpleCrypto', 'OS::TripleO::Services::CACerts', 'OS::TripleO::Services::CephClient', 'OS::TripleO::Services::CephMds', 'OS::TripleO::Services::CephMgr', 'OS::TripleO::Services::CephMon', 'OS::TripleO::Services::CephOSD', 'OS::TripleO::Services::CinderApi', 'OS::TripleO::Services::CinderBackup', 'OS::TripleO::Services::CinderScheduler', 'OS::TripleO::Services::CinderVolume', 'OS::TripleO::Services::Clustercheck', 'OS::TripleO::Services::ContainerImagePrepare', 'OS::TripleO::Services::ContainersLogrotateCrond', 'OS::TripleO::Services::DockerRegistry', 'OS::TripleO::Services::GlanceApi', 'OS::TripleO::Services::GlanceApiInternal', 'OS::TripleO::Services::HAproxy', 'OS::TripleO::Services::HeatApi', 'OS::TripleO::Services::HeatApiCfn', 'OS::TripleO::Services::HeatEngine', 'OS::TripleO::Services::Horizon', 'OS::TripleO::Services::Iscsid', 'OS::TripleO::Services::Kernel', 'OS::TripleO::Services::Keystone', 'OS::TripleO::Services::ManilaApi', 'OS::TripleO::Services::ManilaBackendCephFs', 'OS::TripleO::Services::ManilaScheduler', 'OS::TripleO::Services::ManilaShare', 'OS::TripleO::Services::Memcached', 'OS::TripleO::Services::MySQL', 'OS::TripleO::Services::MySQLClient', 'OS::TripleO::Services::NeutronApi', 'OS::TripleO::Services::NeutronCorePlugin', 'OS::TripleO::Services::NeutronSriovAgent', 'OS::TripleO::Services::NeutronDhcpAgent', 
'OS::TripleO::Services::NovaApi', 'OS::TripleO::Services::NovaConductor', 'OS::TripleO::Services::NovaMetadata', 'OS::TripleO::Services::NovaScheduler', 'OS::TripleO::Services::NovaCompute', 'OS::TripleO::Services::NovaLibvirt', 'OS::TripleO::Services::NovaMigrationTarget', 'OS::TripleO::Services::NovaVncProxy', 'OS::TripleO::Services::OVNController', 'OS::TripleO::Services::OVNDBs', 'OS::TripleO::Services::OVNMetadataAgent', 'OS::TripleO::Services::OpenStackClients', 'OS::TripleO::Services::OsloMessagingNotify', 'OS::TripleO::Services::OsloMessagingRpc', 'OS::TripleO::Services::Pacemaker', 'OS::TripleO::Services::PlacementApi', 'OS::TripleO::Services::Podman', 'OS::TripleO::Services::Rsyslog', 'OS::TripleO::Services::Snmp', 'OS::TripleO::Services::Sshd', 'OS::TripleO::Services::SwiftProxy', 'OS::TripleO::Services::SwiftRingBuilder', 'OS::TripleO::Services::SwiftStorage', 'OS::TripleO::Services::Timesync', 'OS::TripleO::Services::Timezone', 'OS::TripleO::Services::Tmpwatch', 'OS::TripleO::Services::TripleoFirewall', 'OS::TripleO::Services::TripleoPackages', 'OS::TripleO::Services::Tuned']}} cleanup=partial log_file=/var/log/tripleo-container-image-prepare.log debug=True wait=True timeout=180 interface=public dry_run=False cloud=None auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 13:47:22 standalone.localdomain sudo[46265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:47:22 standalone.localdomain sudo[46265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:47:22 standalone.localdomain sudo[46265]: pam_unix(sudo:session): session closed for user root
Oct 13 13:47:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:47:22 standalone.localdomain sudo[46280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 13:47:22 standalone.localdomain sudo[46280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:47:22 standalone.localdomain rsyslogd[759]: message too long (11273) with configured size 8096, begin of message is: ansible-tripleo_container_image_prepare Invoked with roles_data=[{'CountDefault' [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 13:47:22 standalone.localdomain python3[46298]: ansible-file Invoked with mode=0755 owner=root path=/etc/openstack state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:47:23
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:47:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v330: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:23 standalone.localdomain python3[46344]: ansible-stat Invoked with path=/etc/openstack/clouds.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:47:23 standalone.localdomain podman[46370]: 2025-10-13 13:47:23.28487218 +0000 UTC m=+0.073802791 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.expose-services=, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main)
Oct 13 13:47:23 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:47:23 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:47:23 standalone.localdomain podman[46370]: 2025-10-13 13:47:23.413886979 +0000 UTC m=+0.202817640 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, release=553, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, ceph=True, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64)
Oct 13 13:47:23 standalone.localdomain python3[46398]: ansible-ansible.legacy.stat Invoked with path=/etc/openstack/clouds.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:47:23 standalone.localdomain python3[46434]: ansible-ansible.legacy.copy Invoked with dest=/etc/openstack/clouds.yaml src=/tmp/ansible-root/ansible-tmp-1760363243.3129983-46387-49928412486997/source _original_basename=tmp5jqppvep follow=False checksum=196ef56683789c543bd79e5b4b160c89fd6b5d26 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:47:23 standalone.localdomain sudo[46280]: pam_unix(sudo:session): session closed for user root
Oct 13 13:47:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:47:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:47:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:47:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:47:23 standalone.localdomain sudo[46471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:47:23 standalone.localdomain sudo[46471]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:47:23 standalone.localdomain sudo[46471]: pam_unix(sudo:session): session closed for user root
Oct 13 13:47:23 standalone.localdomain sudo[46488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:47:23 standalone.localdomain sudo[46488]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:47:24 standalone.localdomain python3[46513]: ansible-slurp Invoked with src=/etc/openstack/clouds.yaml
Oct 13 13:47:24 standalone.localdomain ceph-mon[29756]: pgmap v330: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:47:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:47:24 standalone.localdomain python3[46527]: ansible-copy Invoked with src=/tmp/ansible-root/ansible-tmp-1760363244.0417006-46504-60783736995614/source dest=/etc/openstack/clouds.yaml owner=root group=root mode=0600 _original_basename=clouds.yaml follow=True backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:47:24 standalone.localdomain sudo[46488]: pam_unix(sudo:session): session closed for user root
Oct 13 13:47:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:47:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:47:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:47:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:47:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:47:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:47:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:47:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:47:24 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 3ae22957-5a3a-4764-863e-1ef16884a792 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:47:24 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 3ae22957-5a3a-4764-863e-1ef16884a792 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:47:24 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 3ae22957-5a3a-4764-863e-1ef16884a792 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:47:24 standalone.localdomain sudo[46551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:47:24 standalone.localdomain sudo[46551]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:47:24 standalone.localdomain sudo[46551]: pam_unix(sudo:session): session closed for user root
Oct 13 13:47:24 standalone.localdomain python3[46578]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:47:25 standalone.localdomain python3[46582]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/root/.ansible/tmp/ansible-tmp-1760363244.6709373-46568-115754326455498/source _original_basename=tmpqbjpq9x1 follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:47:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v331: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:25 standalone.localdomain python3[46588]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:47:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:47:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:47:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:47:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:47:25 standalone.localdomain ceph-mon[29756]: pgmap v331: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:26 standalone.localdomain python3[46622]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 13:47:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v332: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:47:27 standalone.localdomain python3[46652]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Oct 13 13:47:28 standalone.localdomain ceph-mon[29756]: pgmap v332: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 16 completed events
Oct 13 13:47:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:47:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:47:28 standalone.localdomain python3[46672]: ansible-ansible.legacy.command Invoked with _raw_params=podman pull registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:47:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v333: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:47:29 standalone.localdomain ceph-mon[29756]: pgmap v333: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v334: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:32 standalone.localdomain ceph-mon[29756]: pgmap v334: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:47:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v335: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:33 standalone.localdomain ceph-mon[29756]: pgmap v335: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v336: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:36 standalone.localdomain ceph-mon[29756]: pgmap v336: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v337: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:47:38 standalone.localdomain ceph-mon[29756]: pgmap v337: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v338: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:40 standalone.localdomain systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 13 13:47:40 standalone.localdomain ceph-mon[29756]: pgmap v338: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v339: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:41 standalone.localdomain ceph-mon[29756]: pgmap v339: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:47:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v340: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:43 standalone.localdomain ceph-mon[29756]: pgmap v340: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:44 standalone.localdomain podman[46673]: 2025-10-13 13:47:28.564000224 +0000 UTC m=+0.044849184 image pull  registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1
Oct 13 13:47:44 standalone.localdomain python3[46893]: ansible-ansible.legacy.command Invoked with _raw_params=podman tag registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1 cluster.common.tag/cinder-backup:pcmklatest _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:47:44 standalone.localdomain podman[46894]: 2025-10-13 13:47:44.407528915 +0000 UTC m=+0.029246891 image tag a494a4709399d3b48f7a72d9a7ba248a8aef77a31bc332f2b941ea1a9802eb23 cluster.common.tag/cinder-backup:pcmklatest
Oct 13 13:47:44 standalone.localdomain python3[46910]: ansible-ansible.legacy.command Invoked with _raw_params=podman pull registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:47:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v341: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:46 standalone.localdomain ceph-mon[29756]: pgmap v341: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v342: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:47:48 standalone.localdomain ceph-mon[29756]: pgmap v342: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v343: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:50 standalone.localdomain ceph-mon[29756]: pgmap v343: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v344: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: pgmap v344: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.311720) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363272311787, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1781, "num_deletes": 251, "total_data_size": 669369, "memory_usage": 700216, "flush_reason": "Manual Compaction"}
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363272319852, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 622598, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6768, "largest_seqno": 8548, "table_properties": {"data_size": 617061, "index_size": 2562, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14244, "raw_average_key_size": 18, "raw_value_size": 604246, "raw_average_value_size": 804, "num_data_blocks": 120, "num_entries": 751, "num_filter_entries": 751, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760363112, "oldest_key_time": 1760363112, "file_creation_time": 1760363272, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 8260 microseconds, and 4864 cpu microseconds.
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.319971) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 622598 bytes OK
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.320013) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.321676) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.321703) EVENT_LOG_v1 {"time_micros": 1760363272321696, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.321734) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 661619, prev total WAL file size 661619, number of live WAL files 2.
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.322340) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(608KB)], [23(1912KB)]
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363272322384, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 2581363, "oldest_snapshot_seqno": -1}
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 1957 keys, 2203211 bytes, temperature: kUnknown
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363272335662, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 2203211, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 2190395, "index_size": 6832, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4933, "raw_key_size": 43669, "raw_average_key_size": 22, "raw_value_size": 2155238, "raw_average_value_size": 1101, "num_data_blocks": 317, "num_entries": 1957, "num_filter_entries": 1957, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760363272, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.335888) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 2203211 bytes
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.337672) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.3 rd, 164.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 1.9 +0.0 blob) out(2.1 +0.0 blob), read-write-amplify(7.7) write-amplify(3.5) OK, records in: 2477, records dropped: 520 output_compression: NoCompression
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.337693) EVENT_LOG_v1 {"time_micros": 1760363272337683, "job": 8, "event": "compaction_finished", "compaction_time_micros": 13357, "compaction_time_cpu_micros": 8148, "output_level": 6, "num_output_files": 1, "total_output_size": 2203211, "num_input_records": 2477, "num_output_records": 1957, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363272337861, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363272338057, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.322277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.338086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.338091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.338093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.338094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:47:52.338096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:47:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:47:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v345: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:53 standalone.localdomain ceph-mon[29756]: pgmap v345: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:55 standalone.localdomain podman[46911]: 2025-10-13 13:47:44.818998929 +0000 UTC m=+0.043236187 image pull  registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1
Oct 13 13:47:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v346: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:55 standalone.localdomain python3[47265]: ansible-ansible.legacy.command Invoked with _raw_params=podman tag registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1 cluster.common.tag/cinder-volume:pcmklatest _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:47:55 standalone.localdomain podman[47266]: 2025-10-13 13:47:55.544240558 +0000 UTC m=+0.051969215 image tag 6478a0150ae96c1577ab98090485db34bd923bea7fbd99b449759855054ad48a cluster.common.tag/cinder-volume:pcmklatest
Oct 13 13:47:55 standalone.localdomain python3[47280]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active rsyslog _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:47:56 standalone.localdomain python3[47285]: ansible-blockinfile Invoked with content=if $syslogfacility-text == 'local0' and $programname == 'haproxy' then -/var/log/containers/haproxy/haproxy.log
                                                       & stop
                                                        create=True path=/etc/rsyslog.d/openstack-haproxy.conf block=if $syslogfacility-text == 'local0' and $programname == 'haproxy' then -/var/log/containers/haproxy/haproxy.log
                                                       & stop
                                                        state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:47:56 standalone.localdomain ceph-mon[29756]: pgmap v346: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:56 standalone.localdomain python3[47289]: ansible-ansible.legacy.systemd Invoked with name=rsyslog state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 13:47:56 standalone.localdomain systemd[1]: Stopping System Logging Service...
Oct 13 13:47:56 standalone.localdomain rsyslogd[759]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="759" x-info="https://www.rsyslog.com"] exiting on signal 15.
Oct 13 13:47:56 standalone.localdomain systemd[1]: rsyslog.service: Deactivated successfully.
Oct 13 13:47:56 standalone.localdomain systemd[1]: Stopped System Logging Service.
Oct 13 13:47:56 standalone.localdomain systemd[1]: rsyslog.service: Consumed 1.605s CPU time, read 920.0K from disk, written 2.9M to disk.
Oct 13 13:47:56 standalone.localdomain systemd[1]: Starting System Logging Service...
Oct 13 13:47:56 standalone.localdomain rsyslogd[47292]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="47292" x-info="https://www.rsyslog.com"] start
Oct 13 13:47:56 standalone.localdomain systemd[1]: Started System Logging Service.
Oct 13 13:47:56 standalone.localdomain rsyslogd[47292]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 13:47:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v347: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:57 standalone.localdomain python3[47301]: ansible-ansible.legacy.command Invoked with _raw_params=podman pull registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:47:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:47:58 standalone.localdomain ceph-mon[29756]: pgmap v347: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:47:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v348: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:00 standalone.localdomain ceph-mon[29756]: pgmap v348: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v349: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:02 standalone.localdomain ceph-mon[29756]: pgmap v349: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:48:03 standalone.localdomain podman[47302]: 2025-10-13 13:47:57.488822497 +0000 UTC m=+0.044137312 image pull  registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1
Oct 13 13:48:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v350: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:03 standalone.localdomain python3[47401]: ansible-ansible.legacy.command Invoked with _raw_params=podman tag registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1 cluster.common.tag/haproxy:pcmklatest _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:48:03 standalone.localdomain ceph-mon[29756]: pgmap v350: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:03 standalone.localdomain podman[47402]: 2025-10-13 13:48:03.538848246 +0000 UTC m=+0.042740172 image tag ca7d8eb100efddb0a29cba30061b2c9178f350b380fee5b69208dfdf229628f7 cluster.common.tag/haproxy:pcmklatest
Oct 13 13:48:03 standalone.localdomain python3[47419]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:48:04 standalone.localdomain python3[47442]: ansible-ansible.legacy.command Invoked with _raw_params=podman pull registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:48:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v351: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:06 standalone.localdomain ceph-mon[29756]: pgmap v351: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v352: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:48:08 standalone.localdomain ceph-mon[29756]: pgmap v352: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v353: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:10 standalone.localdomain ceph-mon[29756]: pgmap v353: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v354: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:12 standalone.localdomain ceph-mon[29756]: pgmap v354: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:48:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v355: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:13 standalone.localdomain ceph-mon[29756]: pgmap v355: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:14 standalone.localdomain podman[47443]: 2025-10-13 13:48:05.023246562 +0000 UTC m=+0.042799473 image pull  registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1
Oct 13 13:48:14 standalone.localdomain python3[47552]: ansible-ansible.legacy.command Invoked with _raw_params=podman tag registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1 cluster.common.tag/manila-share:pcmklatest _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:48:14 standalone.localdomain podman[47553]: 2025-10-13 13:48:14.742214579 +0000 UTC m=+0.046803423 image tag 5b8751731d4e88def30b35fe920ea8b7144eb85593c68b79fefc7a35d737b9c5 cluster.common.tag/manila-share:pcmklatest
Oct 13 13:48:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v356: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:15 standalone.localdomain ceph-mon[29756]: pgmap v356: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:15 standalone.localdomain python3[47572]: ansible-ansible.legacy.command Invoked with _raw_params=podman pull registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:48:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v357: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:48:18 standalone.localdomain ceph-mon[29756]: pgmap v357: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v358: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:20 standalone.localdomain ceph-mon[29756]: pgmap v358: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v359: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:22 standalone.localdomain ceph-mon[29756]: pgmap v359: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:48:23
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:48:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v360: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:23 standalone.localdomain ceph-mon[29756]: pgmap v360: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:23 standalone.localdomain podman[47674]: 2025-10-13 13:48:15.719024914 +0000 UTC m=+0.047845794 image pull  registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1
Oct 13 13:48:23 standalone.localdomain python3[47773]: ansible-ansible.legacy.command Invoked with _raw_params=podman tag registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1 cluster.common.tag/mariadb:pcmklatest _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:48:24 standalone.localdomain podman[47774]: 2025-10-13 13:48:24.033222703 +0000 UTC m=+0.050345178 image tag eb2deacebcb0e732dc23a6e49150052f3f1724aa184df8abe621651e45eee06c cluster.common.tag/mariadb:pcmklatest
Oct 13 13:48:24 standalone.localdomain python3[47790]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 13:48:24 standalone.localdomain sudo[47792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:48:24 standalone.localdomain sudo[47792]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:48:24 standalone.localdomain sudo[47792]: pam_unix(sudo:session): session closed for user root
Oct 13 13:48:24 standalone.localdomain sudo[47807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:48:24 standalone.localdomain sudo[47807]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:48:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v361: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:25 standalone.localdomain sudo[47807]: pam_unix(sudo:session): session closed for user root
Oct 13 13:48:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:48:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:48:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:48:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:48:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:48:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:48:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:48:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:48:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 48cebfa1-a145-459c-9684-f8308b1cdf1d (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:48:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 48cebfa1-a145-459c-9684-f8308b1cdf1d (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:48:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 48cebfa1-a145-459c-9684-f8308b1cdf1d (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:48:25 standalone.localdomain sudo[47853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:48:25 standalone.localdomain sudo[47853]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:48:25 standalone.localdomain sudo[47853]: pam_unix(sudo:session): session closed for user root
Oct 13 13:48:26 standalone.localdomain ceph-mon[29756]: pgmap v361: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:48:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:48:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:48:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:48:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v362: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:27 standalone.localdomain ceph-mon[29756]: pgmap v362: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:48:28 standalone.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 13 13:48:28 standalone.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 13 13:48:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 17 completed events
Oct 13 13:48:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:48:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:48:28 standalone.localdomain systemd[1]: Reexecuting.
Oct 13 13:48:28 standalone.localdomain systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 13 13:48:28 standalone.localdomain systemd[1]: Detected virtualization kvm.
Oct 13 13:48:28 standalone.localdomain systemd[1]: Detected architecture x86-64.
Oct 13 13:48:28 standalone.localdomain systemd-rc-local-generator[47963]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:48:28 standalone.localdomain systemd-sysv-generator[47968]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:48:28 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:48:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v363: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:48:29 standalone.localdomain ceph-mon[29756]: pgmap v363: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v364: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:32 standalone.localdomain ceph-mon[29756]: pgmap v364: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:48:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v365: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:33 standalone.localdomain ceph-mon[29756]: pgmap v365: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v366: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:36 standalone.localdomain ceph-mon[29756]: pgmap v366: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v367: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:37 standalone.localdomain ceph-mon[29756]: pgmap v367: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:48:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v368: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:40 standalone.localdomain ceph-mon[29756]: pgmap v368: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:40 standalone.localdomain kernel: SELinux:  Converting 2718 SID table entries...
Oct 13 13:48:40 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:48:40 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 13:48:40 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:48:40 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:48:40 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:48:40 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:48:40 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:48:40 standalone.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 13 13:48:40 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=19 res=1
Oct 13 13:48:40 standalone.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 13 13:48:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v369: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:41 standalone.localdomain ceph-mon[29756]: pgmap v369: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:41 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 13:48:41 standalone.localdomain systemd[1]: Starting man-db-cache-update.service...
Oct 13 13:48:41 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:48:41 standalone.localdomain systemd-sysv-generator[48332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:48:41 standalone.localdomain systemd-rc-local-generator[48328]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:48:41 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:48:42 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 13:48:42 standalone.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 13:48:42 standalone.localdomain systemd-journald[618]: Journal stopped
Oct 13 13:48:42 standalone.localdomain systemd-journald[618]: Received SIGTERM from PID 1 (systemd).
Oct 13 13:48:42 standalone.localdomain systemd[1]: Stopping Journal Service...
Oct 13 13:48:42 standalone.localdomain systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 13 13:48:42 standalone.localdomain systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 13 13:48:42 standalone.localdomain systemd[1]: Stopped Journal Service.
Oct 13 13:48:42 standalone.localdomain systemd[1]: systemd-journald.service: Consumed 2.443s CPU time.
Oct 13 13:48:42 standalone.localdomain systemd[1]: Starting Journal Service...
Oct 13 13:48:42 standalone.localdomain systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 13 13:48:42 standalone.localdomain systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 13 13:48:42 standalone.localdomain systemd[1]: systemd-udevd.service: Consumed 2.613s CPU time.
Oct 13 13:48:42 standalone.localdomain systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 13 13:48:42 standalone.localdomain systemd-journald[48591]: Journal started
Oct 13 13:48:42 standalone.localdomain systemd-journald[48591]: Runtime Journal (/run/log/journal/89829c24d904bea15dec4d2c9d1ee875) is 18.0M, max 314.7M, 296.7M free.
Oct 13 13:48:42 standalone.localdomain systemd[1]: Started Journal Service.
Oct 13 13:48:42 standalone.localdomain systemd-udevd[48596]: Using default interface naming scheme 'rhel-9.0'.
Oct 13 13:48:42 standalone.localdomain systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 13 13:48:42 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:48:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:48:42 standalone.localdomain systemd-rc-local-generator[49229]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:48:42 standalone.localdomain systemd-sysv-generator[49232]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:48:42 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:48:42 standalone.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 13:48:43 standalone.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 13:48:43 standalone.localdomain systemd[1]: Finished man-db-cache-update.service.
Oct 13 13:48:43 standalone.localdomain systemd[1]: man-db-cache-update.service: Consumed 1.418s CPU time.
Oct 13 13:48:43 standalone.localdomain systemd[1]: run-r7e87bd8c7368467ea166d18a644203a3.service: Deactivated successfully.
Oct 13 13:48:43 standalone.localdomain systemd[1]: run-r8a8cdc48690f4b2694de9ba285dd390c.service: Deactivated successfully.
Oct 13 13:48:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v370: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:43 standalone.localdomain ceph-mon[29756]: pgmap v370: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:44 standalone.localdomain python3[49660]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False
Oct 13 13:48:45 standalone.localdomain python3[49669]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:48:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v371: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:46 standalone.localdomain python3[49689]: ansible-ansible.builtin.service_facts Invoked
Oct 13 13:48:46 standalone.localdomain network[49706]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 13:48:46 standalone.localdomain network[49707]: 'network-scripts' will be removed from distribution in near future.
Oct 13 13:48:46 standalone.localdomain network[49708]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 13:48:46 standalone.localdomain ceph-mon[29756]: pgmap v371: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v372: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:47 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:48:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:48:48 standalone.localdomain ceph-mon[29756]: pgmap v372: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v373: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:50 standalone.localdomain ceph-mon[29756]: pgmap v373: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:50 standalone.localdomain python3[49840]: ansible-ansible.builtin.file Invoked with path=/var/lib/config-data/ansible-generated/ovn/etc/sysconfig recurse=True state=directory selevel=s0 setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 13:48:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v374: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:51 standalone.localdomain python3[49846]: ansible-ansible.legacy.command Invoked with _raw_params=ovsdb-tool --help|grep -q election-timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:48:51 standalone.localdomain python3[49863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/ovn/etc/sysconfig/ovn_cluster follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:48:52 standalone.localdomain python3[49867]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363331.5304067-49853-200530620942244/source dest=/var/lib/config-data/ansible-generated/ovn/etc/sysconfig/ovn_cluster mode=640 selevel=s0 setype=container_file_t follow=False _original_basename=ovn_cluster.j2 checksum=52b097dd1c9dbbcf5f0623a04814beb0dc8f1d2f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 13:48:52 standalone.localdomain ceph-mon[29756]: pgmap v374: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:52 standalone.localdomain python3[49875]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active --quiet tripleo_cluster_north_db_server _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:48:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:48:52 standalone.localdomain python3[49882]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active --quiet tripleo_cluster_south_db_server _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:48:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v375: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:53 standalone.localdomain python3[49889]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active --quiet tripleo_cluster_northd _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:48:53 standalone.localdomain ceph-mon[29756]: pgmap v375: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:53 standalone.localdomain python3[49902]: ansible-ansible.legacy.command Invoked with _raw_params=podman pull registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:48:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v376: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:56 standalone.localdomain ceph-mon[29756]: pgmap v376: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v377: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:48:58 standalone.localdomain ceph-mon[29756]: pgmap v377: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:48:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v378: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:00 standalone.localdomain ceph-mon[29756]: pgmap v378: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v379: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:01 standalone.localdomain podman[49903]: 2025-10-13 13:48:53.975930848 +0000 UTC m=+0.044346474 image pull  registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1
Oct 13 13:49:01 standalone.localdomain python3[50002]: ansible-ansible.legacy.command Invoked with _raw_params=podman tag registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1 cluster.common.tag/rabbitmq:pcmklatest _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:49:01 standalone.localdomain podman[50003]: 2025-10-13 13:49:01.927641105 +0000 UTC m=+0.039169082 image tag 0fe17636bae901521de90b30f1dee371909a7d6e5bc2e1f72392fb6f24c02e4d cluster.common.tag/rabbitmq:pcmklatest
Oct 13 13:49:02 standalone.localdomain ceph-mon[29756]: pgmap v379: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:02 standalone.localdomain python3[50019]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:49:02 standalone.localdomain python3[50019]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1 --format json
Oct 13 13:49:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:49:02 standalone.localdomain python3[50019]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1 -q --tls-verify=false
Oct 13 13:49:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v380: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:03 standalone.localdomain ceph-mon[29756]: pgmap v380: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v381: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:06 standalone.localdomain ceph-mon[29756]: pgmap v381: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v382: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:49:07 standalone.localdomain podman[50032]: 2025-10-13 13:49:02.636227171 +0000 UTC m=+0.045109835 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1
Oct 13 13:49:07 standalone.localdomain python3[50019]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect fb8a6693beee68287a0f02201de79217e47346216a623f9fa87d3a80e8a156d0 --format json
Oct 13 13:49:07 standalone.localdomain python3[50153]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:49:07 standalone.localdomain python3[50153]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1 --format json
Oct 13 13:49:07 standalone.localdomain python3[50153]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1 -q --tls-verify=false
Oct 13 13:49:08 standalone.localdomain ceph-mon[29756]: pgmap v382: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v383: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:10 standalone.localdomain ceph-mon[29756]: pgmap v383: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v384: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:12 standalone.localdomain ceph-mon[29756]: pgmap v384: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:49:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v385: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:13 standalone.localdomain podman[50166]: 2025-10-13 13:49:08.024199328 +0000 UTC m=+0.032588923 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1
Oct 13 13:49:13 standalone.localdomain python3[50153]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 9f1d58481b92b0ac72a6e8cc6e9beeecd3d8f2fa81e58d20b5d3093170a87b09 --format json
Oct 13 13:49:13 standalone.localdomain ceph-mon[29756]: pgmap v385: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:13 standalone.localdomain python3[50274]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:49:13 standalone.localdomain python3[50274]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1 --format json
Oct 13 13:49:13 standalone.localdomain python3[50274]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1 -q --tls-verify=false
Oct 13 13:49:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v386: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:16 standalone.localdomain ceph-mon[29756]: pgmap v386: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:17 standalone.localdomain podman[50288]: 2025-10-13 13:49:13.800866489 +0000 UTC m=+0.044763596 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1
Oct 13 13:49:17 standalone.localdomain python3[50274]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 9f44ac27c6ae5ee5c1f78a00a0d801e404d75691dd3bd9c73675c21893b32179 --format json
Oct 13 13:49:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v387: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:17 standalone.localdomain python3[50396]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:49:17 standalone.localdomain python3[50396]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1 --format json
Oct 13 13:49:17 standalone.localdomain python3[50396]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1 -q --tls-verify=false
Oct 13 13:49:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:49:18 standalone.localdomain ceph-mon[29756]: pgmap v387: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v388: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:20 standalone.localdomain ceph-mon[29756]: pgmap v388: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:20 standalone.localdomain podman[50408]: 2025-10-13 13:49:17.528667976 +0000 UTC m=+0.044444627 image pull  registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1
Oct 13 13:49:20 standalone.localdomain python3[50396]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect e73a71f3bab32912341886b13c2f3c5c0880a42512494d8bfe2884e7d7d6824c --format json
Oct 13 13:49:21 standalone.localdomain python3[50517]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:49:21 standalone.localdomain python3[50517]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1 --format json
Oct 13 13:49:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v389: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:21 standalone.localdomain python3[50517]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1 -q --tls-verify=false
Oct 13 13:49:22 standalone.localdomain ceph-mon[29756]: pgmap v389: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:49:23
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:49:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v390: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:23 standalone.localdomain ceph-mon[29756]: pgmap v390: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:23 standalone.localdomain podman[50529]: 2025-10-13 13:49:21.362649418 +0000 UTC m=+0.043939453 image pull  registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1
Oct 13 13:49:23 standalone.localdomain python3[50517]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect eb2deacebcb0e732dc23a6e49150052f3f1724aa184df8abe621651e45eee06c --format json
Oct 13 13:49:24 standalone.localdomain python3[50625]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:49:24 standalone.localdomain python3[50625]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1 --format json
Oct 13 13:49:24 standalone.localdomain python3[50625]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1 -q --tls-verify=false
Oct 13 13:49:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v391: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:25 standalone.localdomain sudo[50650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:49:25 standalone.localdomain sudo[50650]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:49:25 standalone.localdomain sudo[50650]: pam_unix(sudo:session): session closed for user root
Oct 13 13:49:25 standalone.localdomain sudo[50665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:49:25 standalone.localdomain sudo[50665]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:49:26 standalone.localdomain podman[50637]: 2025-10-13 13:49:24.16372438 +0000 UTC m=+0.044399346 image pull  registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1
Oct 13 13:49:26 standalone.localdomain python3[50625]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 0fe17636bae901521de90b30f1dee371909a7d6e5bc2e1f72392fb6f24c02e4d --format json
Oct 13 13:49:26 standalone.localdomain sudo[50665]: pam_unix(sudo:session): session closed for user root
Oct 13 13:49:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:49:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:49:26 standalone.localdomain ceph-mon[29756]: pgmap v391: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:49:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:49:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:49:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:49:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:49:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:49:26 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev ebc74b46-cc39-429b-9a64-3e1aecd09236 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:49:26 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev ebc74b46-cc39-429b-9a64-3e1aecd09236 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:49:26 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event ebc74b46-cc39-429b-9a64-3e1aecd09236 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:49:26 standalone.localdomain sudo[50798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:49:26 standalone.localdomain sudo[50798]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:49:26 standalone.localdomain sudo[50798]: pam_unix(sudo:session): session closed for user root
Oct 13 13:49:26 standalone.localdomain python3[50797]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:49:26 standalone.localdomain python3[50797]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1 --format json
Oct 13 13:49:26 standalone.localdomain python3[50797]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1 -q --tls-verify=false
Oct 13 13:49:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v392: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:49:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:49:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:49:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:49:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:49:28 standalone.localdomain ceph-mon[29756]: pgmap v392: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 18 completed events
Oct 13 13:49:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:49:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:49:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v393: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:49:29 standalone.localdomain ceph-mon[29756]: pgmap v393: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:31 standalone.localdomain podman[50825]: 2025-10-13 13:49:26.599817538 +0000 UTC m=+0.046595696 image pull  registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1
Oct 13 13:49:31 standalone.localdomain python3[50797]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 77bd1b0ebdee597174b14966eeb7f7c67840f28bdf48fe0c62c5ea3429eeb306 --format json
Oct 13 13:49:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v394: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:31 standalone.localdomain python3[50947]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:49:31 standalone.localdomain python3[50947]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1 --format json
Oct 13 13:49:31 standalone.localdomain python3[50947]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1 -q --tls-verify=false
Oct 13 13:49:32 standalone.localdomain ceph-mon[29756]: pgmap v394: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:49:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v395: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:33 standalone.localdomain ceph-mon[29756]: pgmap v395: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v396: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:35 standalone.localdomain ceph-mon[29756]: pgmap v396: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:36 standalone.localdomain podman[50958]: 2025-10-13 13:49:31.580582225 +0000 UTC m=+0.042306509 image pull  registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1
Oct 13 13:49:36 standalone.localdomain python3[50947]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect e1025d0160926b2478cff0c3ae6dabd48bf901c8d824a98ef2b2e78a3c7fa2eb --format json
Oct 13 13:49:36 standalone.localdomain python3[51134]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:49:36 standalone.localdomain python3[51134]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1 --format json
Oct 13 13:49:37 standalone.localdomain python3[51134]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1 -q --tls-verify=false
Oct 13 13:49:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v397: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:49:38 standalone.localdomain ceph-mon[29756]: pgmap v397: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v398: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:39 standalone.localdomain podman[51147]: 2025-10-13 13:49:37.115160767 +0000 UTC m=+0.041581824 image pull  registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1
Oct 13 13:49:39 standalone.localdomain python3[51134]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 4cfb90098422284da1d8ef01853709c60dec6ef1c9a2dcdf7ca7407d826e5512 --format json
Oct 13 13:49:40 standalone.localdomain python3[51253]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:49:40 standalone.localdomain python3[51253]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1 --format json
Oct 13 13:49:40 standalone.localdomain ceph-mon[29756]: pgmap v398: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:40 standalone.localdomain python3[51253]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1 -q --tls-verify=false
Oct 13 13:49:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v399: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:42 standalone.localdomain ceph-mon[29756]: pgmap v399: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:49:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v400: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:43 standalone.localdomain ceph-mon[29756]: pgmap v400: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v401: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:46 standalone.localdomain ceph-mon[29756]: pgmap v401: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v402: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:47 standalone.localdomain ceph-mon[29756]: pgmap v402: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:49:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v403: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:49 standalone.localdomain podman[51265]: 2025-10-13 13:49:40.417152043 +0000 UTC m=+0.047011347 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1
Oct 13 13:49:49 standalone.localdomain python3[51253]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 4af3a6dfd4e6a0f01ed463aba45c145674fed1894057a9e05eb33d749dd724a6 --format json
Oct 13 13:49:50 standalone.localdomain python3[51399]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:49:50 standalone.localdomain python3[51399]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json
Oct 13 13:49:50 standalone.localdomain python3[51399]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false
Oct 13 13:49:50 standalone.localdomain ceph-mon[29756]: pgmap v403: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v404: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:52 standalone.localdomain ceph-mon[29756]: pgmap v404: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:49:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v405: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:53 standalone.localdomain ceph-mon[29756]: pgmap v405: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:54 standalone.localdomain podman[51412]: 2025-10-13 13:49:50.122829088 +0000 UTC m=+0.027788842 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Oct 13 13:49:54 standalone.localdomain python3[51399]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 1e3eee8f9b979ec527f69dda079bc969bf9ddbe65c90f0543f3891d72e56a75e --format json
Oct 13 13:49:55 standalone.localdomain python3[51520]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:49:55 standalone.localdomain python3[51520]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json
Oct 13 13:49:55 standalone.localdomain python3[51520]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false
Oct 13 13:49:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v406: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:56 standalone.localdomain ceph-mon[29756]: pgmap v406: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v407: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:49:58 standalone.localdomain ceph-mon[29756]: pgmap v407: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:49:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v408: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:00 standalone.localdomain ceph-mon[29756]: pgmap v408: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v409: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:02 standalone.localdomain ceph-mon[29756]: pgmap v409: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:50:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v410: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:03 standalone.localdomain ceph-mon[29756]: pgmap v410: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v411: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:06 standalone.localdomain ceph-mon[29756]: pgmap v411: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v412: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:50:08 standalone.localdomain ceph-mon[29756]: pgmap v412: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v413: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:10 standalone.localdomain ceph-mon[29756]: pgmap v413: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v414: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:11 standalone.localdomain ceph-mon[29756]: pgmap v414: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:50:12 standalone.localdomain podman[51534]: 2025-10-13 13:49:55.163083812 +0000 UTC m=+0.041645226 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 13:50:12 standalone.localdomain python3[51520]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a56a2196ea2290002b5e3e60b4c440f2326e4f1173ca4d9c0a320716a756e568 --format json
Oct 13 13:50:13 standalone.localdomain python3[52119]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:50:13 standalone.localdomain python3[52119]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1 --format json
Oct 13 13:50:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v415: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:13 standalone.localdomain python3[52119]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1 -q --tls-verify=false
Oct 13 13:50:13 standalone.localdomain ceph-mon[29756]: pgmap v415: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v416: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:16 standalone.localdomain ceph-mon[29756]: pgmap v416: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v417: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:50:18 standalone.localdomain ceph-mon[29756]: pgmap v417: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v418: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:20 standalone.localdomain ceph-mon[29756]: pgmap v418: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v419: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:22 standalone.localdomain ceph-mon[29756]: pgmap v419: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:50:23
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:50:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v420: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:24 standalone.localdomain ceph-mon[29756]: pgmap v420: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:24 standalone.localdomain podman[52133]: 2025-10-13 13:50:13.352728116 +0000 UTC m=+0.034088640 image pull  registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1
Oct 13 13:50:24 standalone.localdomain python3[52119]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 447401b86bcb477eb5bae5c77932dcf0cb066b2a802d29cff00771dfb8ca2877 --format json
Oct 13 13:50:24 standalone.localdomain python3[52243]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:50:24 standalone.localdomain python3[52243]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1 --format json
Oct 13 13:50:24 standalone.localdomain python3[52243]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1 -q --tls-verify=false
Oct 13 13:50:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v421: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:26 standalone.localdomain ceph-mon[29756]: pgmap v421: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:26 standalone.localdomain sudo[52268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:50:26 standalone.localdomain sudo[52268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:50:26 standalone.localdomain sudo[52268]: pam_unix(sudo:session): session closed for user root
Oct 13 13:50:26 standalone.localdomain sudo[52283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:50:26 standalone.localdomain sudo[52283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:50:27 standalone.localdomain sudo[52283]: pam_unix(sudo:session): session closed for user root
Oct 13 13:50:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:50:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:50:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:50:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:50:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:50:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:50:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:50:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:50:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev d64c42cb-ea18-4e60-b530-9465e354b8f7 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:50:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev d64c42cb-ea18-4e60-b530-9465e354b8f7 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:50:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event d64c42cb-ea18-4e60-b530-9465e354b8f7 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:50:27 standalone.localdomain sudo[52399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:50:27 standalone.localdomain sudo[52399]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:50:27 standalone.localdomain sudo[52399]: pam_unix(sudo:session): session closed for user root
Oct 13 13:50:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v422: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:50:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:50:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:50:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:50:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:50:28 standalone.localdomain ceph-mon[29756]: pgmap v422: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 19 completed events
Oct 13 13:50:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:50:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:50:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v423: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:50:29 standalone.localdomain ceph-mon[29756]: pgmap v423: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v424: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:32 standalone.localdomain ceph-mon[29756]: pgmap v424: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:50:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v425: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:34 standalone.localdomain ceph-mon[29756]: pgmap v425: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:34 standalone.localdomain podman[52255]: 2025-10-13 13:50:24.712808031 +0000 UTC m=+0.043177681 image pull  registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1
Oct 13 13:50:34 standalone.localdomain python3[52243]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 15779505045749d0f87c23467f6cb9935f02e499ee1aacdf2d6b1567f859af96 --format json
Oct 13 13:50:34 standalone.localdomain python3[52452]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:50:34 standalone.localdomain python3[52452]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1 --format json
Oct 13 13:50:34 standalone.localdomain python3[52452]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1 -q --tls-verify=false
Oct 13 13:50:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v426: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:36 standalone.localdomain ceph-mon[29756]: pgmap v426: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v427: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:50:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v428: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:39 standalone.localdomain ceph-mon[29756]: pgmap v427: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:40 standalone.localdomain ceph-mon[29756]: pgmap v428: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v429: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:41 standalone.localdomain ceph-mon[29756]: pgmap v429: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:50:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v430: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:44 standalone.localdomain ceph-mon[29756]: pgmap v430: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v431: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:45 standalone.localdomain podman[52464]: 2025-10-13 13:50:34.70194511 +0000 UTC m=+0.035706339 image pull  registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1
Oct 13 13:50:45 standalone.localdomain python3[52452]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 167fda607a6d7dc8af502904a099903e441c7fea928fd5acd5ba236b573ce9af --format json
Oct 13 13:50:45 standalone.localdomain python3[52573]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:50:45 standalone.localdomain python3[52573]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1 --format json
Oct 13 13:50:45 standalone.localdomain python3[52573]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1 -q --tls-verify=false
Oct 13 13:50:46 standalone.localdomain ceph-mon[29756]: pgmap v431: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v432: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:50:48 standalone.localdomain ceph-mon[29756]: pgmap v432: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v433: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:50 standalone.localdomain ceph-mon[29756]: pgmap v433: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:50 standalone.localdomain podman[52585]: 2025-10-13 13:50:45.739762447 +0000 UTC m=+0.043913152 image pull  registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1
Oct 13 13:50:50 standalone.localdomain python3[52573]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a82affbd47d98c40f4cfe573861dfc3fa58282e5f9cddc7973ab269093c8a1c2 --format json
Oct 13 13:50:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v434: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:51 standalone.localdomain python3[52693]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:50:51 standalone.localdomain python3[52693]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1 --format json
Oct 13 13:50:51 standalone.localdomain python3[52693]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1 -q --tls-verify=false
Oct 13 13:50:52 standalone.localdomain ceph-mon[29756]: pgmap v434: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:50:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v435: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:54 standalone.localdomain ceph-mon[29756]: pgmap v435: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v436: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:56 standalone.localdomain ceph-mon[29756]: pgmap v436: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:56 standalone.localdomain podman[52707]: 2025-10-13 13:50:51.423982844 +0000 UTC m=+0.078239713 image pull  registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1
Oct 13 13:50:56 standalone.localdomain python3[52693]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 3b5a08d67a860e3c0a9f57caa528e73254e53d40d0befae3ec354732ae396d0b --format json
Oct 13 13:50:56 standalone.localdomain python3[52816]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:50:56 standalone.localdomain python3[52816]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 --format json
Oct 13 13:50:56 standalone.localdomain python3[52816]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 -q --tls-verify=false
Oct 13 13:50:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v437: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:50:58 standalone.localdomain ceph-mon[29756]: pgmap v437: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:50:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v438: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:00 standalone.localdomain ceph-mon[29756]: pgmap v438: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:00 standalone.localdomain podman[52828]: 2025-10-13 13:50:56.893614554 +0000 UTC m=+0.043784948 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Oct 13 13:51:00 standalone.localdomain python3[52816]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a803884207d9182b24f7629d8c3a7431e2cd91b63902dc1d773d21683f1360cf --format json
Oct 13 13:51:01 standalone.localdomain python3[52936]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:51:01 standalone.localdomain python3[52936]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1 --format json
Oct 13 13:51:01 standalone.localdomain python3[52936]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1 -q --tls-verify=false
Oct 13 13:51:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v439: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:02 standalone.localdomain ceph-mon[29756]: pgmap v439: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:51:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v440: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:04 standalone.localdomain ceph-mon[29756]: pgmap v440: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v441: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:06 standalone.localdomain ceph-mon[29756]: pgmap v441: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v442: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:07 standalone.localdomain podman[52948]: 2025-10-13 13:51:01.141881672 +0000 UTC m=+0.029004806 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 13:51:07 standalone.localdomain python3[52936]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 8507cfad8adf60ce0ac55ad72a86aac763a357b88300cebf796de3bc732b7f58 --format json
Oct 13 13:51:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:51:08 standalone.localdomain python3[53057]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:51:08 standalone.localdomain python3[53057]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json
Oct 13 13:51:08 standalone.localdomain python3[53057]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false
Oct 13 13:51:08 standalone.localdomain ceph-mon[29756]: pgmap v442: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v443: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:10 standalone.localdomain ceph-mon[29756]: pgmap v443: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v444: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:12 standalone.localdomain ceph-mon[29756]: pgmap v444: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:51:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v445: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: pgmap v445: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.159696) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363474159774, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2108, "num_deletes": 251, "total_data_size": 757430, "memory_usage": 795384, "flush_reason": "Manual Compaction"}
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363474167606, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 708275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8549, "largest_seqno": 10656, "table_properties": {"data_size": 702000, "index_size": 3043, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16252, "raw_average_key_size": 19, "raw_value_size": 687429, "raw_average_value_size": 812, "num_data_blocks": 142, "num_entries": 846, "num_filter_entries": 846, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760363272, "oldest_key_time": 1760363272, "file_creation_time": 1760363474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 7936 microseconds, and 4778 cpu microseconds.
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.167642) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 708275 bytes OK
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.167664) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.169591) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.169606) EVENT_LOG_v1 {"time_micros": 1760363474169602, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.169623) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 748447, prev total WAL file size 748447, number of live WAL files 2.
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.170099) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(691KB)], [26(2151KB)]
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363474170126, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 2911486, "oldest_snapshot_seqno": -1}
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 2285 keys, 2526239 bytes, temperature: kUnknown
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363474185310, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 2526239, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 2510940, "index_size": 8483, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 5765, "raw_key_size": 50947, "raw_average_key_size": 22, "raw_value_size": 2469752, "raw_average_value_size": 1080, "num_data_blocks": 385, "num_entries": 2285, "num_filter_entries": 2285, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760363474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.185462) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 2526239 bytes
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.187088) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.2 rd, 165.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 2.1 +0.0 blob) out(2.4 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 2803, records dropped: 518 output_compression: NoCompression
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.187108) EVENT_LOG_v1 {"time_micros": 1760363474187099, "job": 10, "event": "compaction_finished", "compaction_time_micros": 15228, "compaction_time_cpu_micros": 6625, "output_level": 6, "num_output_files": 1, "total_output_size": 2526239, "num_input_records": 2803, "num_output_records": 2285, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363474187249, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363474187545, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.170058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.187571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.187575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.187577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.187589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:51:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:51:14.187591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:51:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v446: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:16 standalone.localdomain ceph-mon[29756]: pgmap v446: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v447: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:51:18 standalone.localdomain ceph-mon[29756]: pgmap v447: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v448: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:20 standalone.localdomain ceph-mon[29756]: pgmap v448: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:20 standalone.localdomain podman[53071]: 2025-10-13 13:51:08.274090327 +0000 UTC m=+0.032791485 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Oct 13 13:51:20 standalone.localdomain python3[53057]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 89ed729ad5d881399a0bbd370b8f3c39b84e5a87c6e02b0d1f2c943d2d9cfb7a --format json
Oct 13 13:51:20 standalone.localdomain python3[53565]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:51:20 standalone.localdomain python3[53565]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1 --format json
Oct 13 13:51:21 standalone.localdomain python3[53565]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1 -q --tls-verify=false
Oct 13 13:51:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v449: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:22 standalone.localdomain ceph-mon[29756]: pgmap v449: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:51:23
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:51:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:51:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v450: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:24 standalone.localdomain ceph-mgr[29999]: client.0 ms_handle_reset on v2:172.18.0.100:6800/1677275897
Oct 13 13:51:24 standalone.localdomain ceph-mon[29756]: pgmap v450: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v451: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:25 standalone.localdomain podman[53577]: 2025-10-13 13:51:21.116062954 +0000 UTC m=+0.044778129 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1
Oct 13 13:51:25 standalone.localdomain python3[53565]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect f2875b56e5b8bc82f1d30120eea91196f619a58f2bcf511cdc876202254191dc --format json
Oct 13 13:51:25 standalone.localdomain python3[53686]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:51:25 standalone.localdomain python3[53686]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1 --format json
Oct 13 13:51:25 standalone.localdomain python3[53686]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1 -q --tls-verify=false
Oct 13 13:51:26 standalone.localdomain ceph-mon[29756]: pgmap v451: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:27 standalone.localdomain sudo[53713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:51:27 standalone.localdomain sudo[53713]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:51:27 standalone.localdomain sudo[53713]: pam_unix(sudo:session): session closed for user root
Oct 13 13:51:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v452: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:27 standalone.localdomain sudo[53728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:51:27 standalone.localdomain sudo[53728]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:51:27 standalone.localdomain sudo[53728]: pam_unix(sudo:session): session closed for user root
Oct 13 13:51:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:51:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:51:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:51:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:51:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:51:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:51:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:51:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:51:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 76bd0d6b-b54c-4aa7-abbb-1240441420bf (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:51:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 76bd0d6b-b54c-4aa7-abbb-1240441420bf (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:51:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 76bd0d6b-b54c-4aa7-abbb-1240441420bf (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:51:27 standalone.localdomain sudo[53775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:51:27 standalone.localdomain sudo[53775]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:51:28 standalone.localdomain sudo[53775]: pam_unix(sudo:session): session closed for user root
Oct 13 13:51:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:51:28 standalone.localdomain ceph-mon[29756]: pgmap v452: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:51:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:51:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:51:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:51:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 20 completed events
Oct 13 13:51:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:51:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:51:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v453: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:51:29 standalone.localdomain ceph-mon[29756]: pgmap v453: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:30 standalone.localdomain podman[53700]: 2025-10-13 13:51:25.88585976 +0000 UTC m=+0.041443414 image pull  registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1
Oct 13 13:51:31 standalone.localdomain python3[53686]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 3fb7c9f61f2d12d8d26596a592d8309d080d5f1541f5a96437b72b53fdd69feb --format json
Oct 13 13:51:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v454: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:31 standalone.localdomain python3[53887]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:51:31 standalone.localdomain python3[53887]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1 --format json
Oct 13 13:51:31 standalone.localdomain python3[53887]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1 -q --tls-verify=false
Oct 13 13:51:32 standalone.localdomain ceph-mon[29756]: pgmap v454: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:51:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v455: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:34 standalone.localdomain ceph-mon[29756]: pgmap v455: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:34 standalone.localdomain podman[53900]: 2025-10-13 13:51:31.43658822 +0000 UTC m=+0.038001096 image pull  registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1
Oct 13 13:51:34 standalone.localdomain python3[53887]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 67d42cf10b1eccb2a60f819a2d59a2783690038dd1569fdb8eea42487ec2910f --format json
Oct 13 13:51:35 standalone.localdomain python3[54009]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:51:35 standalone.localdomain python3[54009]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1 --format json
Oct 13 13:51:35 standalone.localdomain python3[54009]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1 -q --tls-verify=false
Oct 13 13:51:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v456: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:36 standalone.localdomain ceph-mon[29756]: pgmap v456: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v457: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:51:38 standalone.localdomain ceph-mon[29756]: pgmap v457: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v458: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:40 standalone.localdomain podman[54022]: 2025-10-13 13:51:35.299945305 +0000 UTC m=+0.032910958 image pull  registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1
Oct 13 13:51:40 standalone.localdomain ceph-mon[29756]: pgmap v458: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:40 standalone.localdomain python3[54009]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 7b3bf2f62c77ab269f2ffc253a7ff15279e9120346132ed5f3a205bb91b35ee3 --format json
Oct 13 13:51:40 standalone.localdomain python3[54131]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:51:40 standalone.localdomain python3[54131]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1 --format json
Oct 13 13:51:40 standalone.localdomain python3[54131]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1 -q --tls-verify=false
Oct 13 13:51:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v459: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:41 standalone.localdomain ceph-mon[29756]: pgmap v459: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:51:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v460: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:43 standalone.localdomain podman[54145]: 2025-10-13 13:51:40.873095098 +0000 UTC m=+0.045325966 image pull  registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1
Oct 13 13:51:43 standalone.localdomain python3[54131]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a494a4709399d3b48f7a72d9a7ba248a8aef77a31bc332f2b941ea1a9802eb23 --format json
Oct 13 13:51:44 standalone.localdomain ceph-mon[29756]: pgmap v460: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:44 standalone.localdomain python3[54241]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:51:44 standalone.localdomain python3[54241]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1 --format json
Oct 13 13:51:44 standalone.localdomain python3[54241]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1 -q --tls-verify=false
Oct 13 13:51:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v461: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:46 standalone.localdomain ceph-mon[29756]: pgmap v461: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:46 standalone.localdomain podman[54254]: 2025-10-13 13:51:44.356984188 +0000 UTC m=+0.043679374 image pull  registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1
Oct 13 13:51:46 standalone.localdomain python3[54241]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 6478a0150ae96c1577ab98090485db34bd923bea7fbd99b449759855054ad48a --format json
Oct 13 13:51:47 standalone.localdomain python3[54351]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:51:47 standalone.localdomain python3[54351]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json
Oct 13 13:51:47 standalone.localdomain python3[54351]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false
Oct 13 13:51:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v462: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:51:48 standalone.localdomain ceph-mon[29756]: pgmap v462: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v463: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:50 standalone.localdomain ceph-mon[29756]: pgmap v463: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:50 standalone.localdomain podman[54364]: 2025-10-13 13:51:47.302112429 +0000 UTC m=+0.044590097 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Oct 13 13:51:50 standalone.localdomain python3[54351]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 5b5e3dbf480a168d795a47e53d0695cd833f381ef10119a3de87e5946f6b53e5 --format json
Oct 13 13:51:51 standalone.localdomain python3[54472]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:51:51 standalone.localdomain python3[54472]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1 --format json
Oct 13 13:51:51 standalone.localdomain python3[54472]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1 -q --tls-verify=false
Oct 13 13:51:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v464: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:52 standalone.localdomain ceph-mon[29756]: pgmap v464: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:51:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:51:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v465: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:53 standalone.localdomain ceph-mon[29756]: pgmap v465: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:53 standalone.localdomain podman[54484]: 2025-10-13 13:51:51.226395348 +0000 UTC m=+0.041075269 image pull  registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1
Oct 13 13:51:53 standalone.localdomain python3[54472]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 5b8751731d4e88def30b35fe920ea8b7144eb85593c68b79fefc7a35d737b9c5 --format json
Oct 13 13:51:54 standalone.localdomain python3[54580]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:51:54 standalone.localdomain python3[54580]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1 --format json
Oct 13 13:51:54 standalone.localdomain python3[54580]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1 -q --tls-verify=false
Oct 13 13:51:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v466: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:56 standalone.localdomain ceph-mon[29756]: pgmap v466: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v467: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:58 standalone.localdomain ceph-mon[29756]: pgmap v467: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:51:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v468: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:59 standalone.localdomain podman[54593]: 2025-10-13 13:51:54.431946324 +0000 UTC m=+0.045639130 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1
Oct 13 13:51:59 standalone.localdomain python3[54580]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 5eb0b610e4075103526444dd3d6b0b4adc5ff85409bd781bbba9af29712a9f3a --format json
Oct 13 13:51:59 standalone.localdomain ceph-mon[29756]: pgmap v468: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:51:59 standalone.localdomain python3[54714]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:51:59 standalone.localdomain python3[54714]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1 --format json
Oct 13 13:51:59 standalone.localdomain python3[54714]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1 -q --tls-verify=false
Oct 13 13:52:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v469: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:02 standalone.localdomain ceph-mon[29756]: pgmap v469: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v470: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:52:03 standalone.localdomain podman[54726]: 2025-10-13 13:51:59.92261696 +0000 UTC m=+0.059782357 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1
Oct 13 13:52:03 standalone.localdomain python3[54714]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 9e24f189b3636f0d661cc8aff88c6ce2baa4d125a24f4bd633cc49c6fd37815d --format json
Oct 13 13:52:04 standalone.localdomain python3[54835]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:52:04 standalone.localdomain python3[54835]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json
Oct 13 13:52:04 standalone.localdomain python3[54835]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false
Oct 13 13:52:04 standalone.localdomain ceph-mon[29756]: pgmap v470: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v471: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:06 standalone.localdomain ceph-mon[29756]: pgmap v471: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:07 standalone.localdomain podman[54848]: 2025-10-13 13:52:04.143991675 +0000 UTC m=+0.039451512 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Oct 13 13:52:07 standalone.localdomain python3[54835]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 01fc8d861e2b923ef0bf1d5c40a269bd976b00e8a31e8c56d63f3504b82b1c76 --format json
Oct 13 13:52:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v472: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:07 standalone.localdomain python3[54957]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:52:07 standalone.localdomain python3[54957]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1 --format json
Oct 13 13:52:07 standalone.localdomain python3[54957]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1 -q --tls-verify=false
Oct 13 13:52:08 standalone.localdomain ceph-mon[29756]: pgmap v472: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:52:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v473: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:09 standalone.localdomain ceph-mon[29756]: pgmap v473: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:10 standalone.localdomain podman[54970]: 2025-10-13 13:52:07.673930805 +0000 UTC m=+0.047254386 image pull  registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1
Oct 13 13:52:10 standalone.localdomain python3[54957]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 60f7c11feb43ac02004f92bead49aafd69ddde0f5e252aa745d4bfbaad3cc7d4 --format json
Oct 13 13:52:11 standalone.localdomain python3[55078]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:52:11 standalone.localdomain python3[55078]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1 --format json
Oct 13 13:52:11 standalone.localdomain python3[55078]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1 -q --tls-verify=false
Oct 13 13:52:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v474: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:12 standalone.localdomain ceph-mon[29756]: pgmap v474: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v475: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:52:14 standalone.localdomain ceph-mon[29756]: pgmap v475: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:14 standalone.localdomain podman[55092]: 2025-10-13 13:52:11.181496226 +0000 UTC m=+0.050062200 image pull  registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1
Oct 13 13:52:14 standalone.localdomain python3[55078]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 00462d008b1708fdd147e5a33f4e1be6c29fa6d29fc5e6616de11485b3a7d26f --format json
Oct 13 13:52:14 standalone.localdomain python3[55201]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:52:14 standalone.localdomain python3[55201]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json
Oct 13 13:52:14 standalone.localdomain python3[55201]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false
Oct 13 13:52:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v476: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:16 standalone.localdomain ceph-mon[29756]: pgmap v476: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:17 standalone.localdomain podman[55214]: 2025-10-13 13:52:14.97215789 +0000 UTC m=+0.049731632 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Oct 13 13:52:17 standalone.localdomain python3[55201]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 7f7fcb1a516a6191c7a8cb132a460e04d50ca4381f114f08dcbfe84340e49ac0 --format json
Oct 13 13:52:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v477: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:17 standalone.localdomain python3[55323]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:52:17 standalone.localdomain python3[55323]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1 --format json
Oct 13 13:52:17 standalone.localdomain python3[55323]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1 -q --tls-verify=false
Oct 13 13:52:18 standalone.localdomain ceph-mon[29756]: pgmap v477: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:52:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v478: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:19 standalone.localdomain ceph-mon[29756]: pgmap v478: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:20 standalone.localdomain podman[55336]: 2025-10-13 13:52:17.727411081 +0000 UTC m=+0.047227146 image pull  registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1
Oct 13 13:52:20 standalone.localdomain python3[55323]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 903322252b49f98ece47489a0bd245a5c7f7a9e6f465f0df1184168e1b2e45c0 --format json
Oct 13 13:52:20 standalone.localdomain python3[55445]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:52:20 standalone.localdomain python3[55445]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1 --format json
Oct 13 13:52:21 standalone.localdomain python3[55445]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1 -q --tls-verify=false
Oct 13 13:52:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v479: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:22 standalone.localdomain ceph-mon[29756]: pgmap v479: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:52:23
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:52:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v480: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:52:24 standalone.localdomain ceph-mon[29756]: pgmap v480: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:24 standalone.localdomain podman[55457]: 2025-10-13 13:52:21.072423262 +0000 UTC m=+0.032147770 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1
Oct 13 13:52:24 standalone.localdomain python3[55445]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 2d6b413b8e4c7b477c11432bfe3469d584fca0a49e5dd0287c04848c6073d147 --format json
Oct 13 13:52:25 standalone.localdomain python3[55566]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:52:25 standalone.localdomain python3[55566]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1 --format json
Oct 13 13:52:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v481: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:25 standalone.localdomain python3[55566]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1 -q --tls-verify=false
Oct 13 13:52:26 standalone.localdomain ceph-mon[29756]: pgmap v481: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v482: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:28 standalone.localdomain sudo[55591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:52:28 standalone.localdomain sudo[55591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:52:28 standalone.localdomain sudo[55591]: pam_unix(sudo:session): session closed for user root
Oct 13 13:52:28 standalone.localdomain sudo[55613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:52:28 standalone.localdomain sudo[55613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:52:28 standalone.localdomain ceph-mon[29756]: pgmap v482: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:52:28 standalone.localdomain sudo[55613]: pam_unix(sudo:session): session closed for user root
Oct 13 13:52:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:52:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:52:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:52:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:52:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:52:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:52:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:52:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:52:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev cb903fe8-9950-4d25-b821-69c8d0ffc08d (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:52:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev cb903fe8-9950-4d25-b821-69c8d0ffc08d (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:52:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event cb903fe8-9950-4d25-b821-69c8d0ffc08d (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:52:28 standalone.localdomain sudo[55723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:52:28 standalone.localdomain sudo[55723]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:52:28 standalone.localdomain sudo[55723]: pam_unix(sudo:session): session closed for user root
Oct 13 13:52:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v483: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:29 standalone.localdomain podman[55578]: 2025-10-13 13:52:25.393718428 +0000 UTC m=+0.047416630 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1
Oct 13 13:52:29 standalone.localdomain python3[55566]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect ebc633f430db717bee8411568d84409b0e467361d8d610d04f36c0f5099c2ac6 --format json
Oct 13 13:52:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:52:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:52:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:52:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:52:29 standalone.localdomain ceph-mon[29756]: pgmap v483: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:29 standalone.localdomain python3[55763]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:52:29 standalone.localdomain python3[55763]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1 --format json
Oct 13 13:52:29 standalone.localdomain python3[55763]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1 -q --tls-verify=false
Oct 13 13:52:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v484: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:32 standalone.localdomain ceph-mon[29756]: pgmap v484: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v485: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:33 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 21 completed events
Oct 13 13:52:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:52:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:52:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:52:34 standalone.localdomain ceph-mon[29756]: pgmap v485: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:52:34 standalone.localdomain podman[55776]: 2025-10-13 13:52:30.024915216 +0000 UTC m=+0.044114205 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1
Oct 13 13:52:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v486: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:35 standalone.localdomain python3[55763]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 6ae7dc8c23d11fdd05dd9c7188ab03eadc1499f98f781f6f2e46e49b4bdcfcd0 --format json
Oct 13 13:52:36 standalone.localdomain python3[55887]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:52:36 standalone.localdomain python3[55887]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1 --format json
Oct 13 13:52:36 standalone.localdomain python3[55887]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1 -q --tls-verify=false
Oct 13 13:52:36 standalone.localdomain ceph-mon[29756]: pgmap v486: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v487: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:38 standalone.localdomain ceph-mon[29756]: pgmap v487: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:52:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v488: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:39 standalone.localdomain podman[55900]: 2025-10-13 13:52:36.184114868 +0000 UTC m=+0.046283724 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1
Oct 13 13:52:39 standalone.localdomain python3[55887]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 3f76d3697c703d2332d9793a607c3ad5895f1e231f3d109c5894b6df0fc3e2d4 --format json
Oct 13 13:52:39 standalone.localdomain ceph-mon[29756]: pgmap v488: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:39 standalone.localdomain python3[56009]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Oct 13 13:52:39 standalone.localdomain python3[56009]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1 --format json
Oct 13 13:52:39 standalone.localdomain python3[56009]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1 -q --tls-verify=false
Oct 13 13:52:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v489: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:42 standalone.localdomain ceph-mon[29756]: pgmap v489: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v490: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:43 standalone.localdomain podman[56021]: 2025-10-13 13:52:40.027127246 +0000 UTC m=+0.035710786 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1
Oct 13 13:52:43 standalone.localdomain python3[56009]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 64ee247085a151b9b85535ec576fe4bfbffeda6b9cbe0d7a85dc8f1216ce65a0 --format json
Oct 13 13:52:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:52:44 standalone.localdomain python3[56130]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active rsyslog _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:52:44 standalone.localdomain ceph-mon[29756]: pgmap v490: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:44 standalone.localdomain python3[56143]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/openstack-swift.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:52:44 standalone.localdomain python3[56147]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/openstack-swift.conf src=/root/.ansible/tmp/ansible-tmp-1760363564.4728172-56133-220526211321907/source _original_basename=tmpkm0u14__ follow=False checksum=c046f7cecbacf1ea879045a0e157e300a2726c73 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:52:45 standalone.localdomain python3[56153]: ansible-ansible.legacy.systemd Invoked with name=rsyslog state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 13:52:45 standalone.localdomain systemd[1]: Stopping System Logging Service...
Oct 13 13:52:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v491: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:45 standalone.localdomain rsyslogd[47292]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="47292" x-info="https://www.rsyslog.com"] exiting on signal 15.
Oct 13 13:52:45 standalone.localdomain systemd[1]: rsyslog.service: Deactivated successfully.
Oct 13 13:52:45 standalone.localdomain systemd[1]: Stopped System Logging Service.
Oct 13 13:52:45 standalone.localdomain systemd[1]: Starting System Logging Service...
Oct 13 13:52:45 standalone.localdomain rsyslogd[56156]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="56156" x-info="https://www.rsyslog.com"] start
Oct 13 13:52:45 standalone.localdomain systemd[1]: Started System Logging Service.
Oct 13 13:52:45 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 13:52:45 standalone.localdomain python3[56163]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:52:46 standalone.localdomain ceph-mon[29756]: pgmap v491: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:46 standalone.localdomain ansible-async_wrapper.py[56199]: Invoked with 853654849487 3600 /root/.ansible/tmp/ansible-tmp-1760363566.791877-56187-6513428854035/AnsiballZ_command.py _
Oct 13 13:52:46 standalone.localdomain ansible-async_wrapper.py[56202]: Starting module and watcher
Oct 13 13:52:46 standalone.localdomain ansible-async_wrapper.py[56202]: Start watching 56203 (3600)
Oct 13 13:52:46 standalone.localdomain ansible-async_wrapper.py[56203]: Start module (56203)
Oct 13 13:52:46 standalone.localdomain ansible-async_wrapper.py[56199]: Return async_wrapper task started.
Oct 13 13:52:47 standalone.localdomain python3[56208]: ansible-ansible.legacy.async_status Invoked with jid=853654849487.56199 mode=status _async_dir=/tmp/.ansible_async
Oct 13 13:52:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v492: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:48 standalone.localdomain ceph-mon[29756]: pgmap v492: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:52:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v493: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:49 standalone.localdomain ceph-mon[29756]: pgmap v493: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v494: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:51 standalone.localdomain puppet-user[56211]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:52:51 standalone.localdomain puppet-user[56211]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:52:51 standalone.localdomain puppet-user[56211]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:52:51 standalone.localdomain puppet-user[56211]:    (file & line not available)
Oct 13 13:52:51 standalone.localdomain puppet-user[56211]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:52:51 standalone.localdomain puppet-user[56211]:    (file & line not available)
Oct 13 13:52:51 standalone.localdomain puppet-user[56211]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Oct 13 13:52:51 standalone.localdomain ansible-async_wrapper.py[56202]: 56203 still running (3600)
Oct 13 13:52:52 standalone.localdomain puppet-user[56211]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Oct 13 13:52:52 standalone.localdomain puppet-user[56211]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.42 seconds
Oct 13 13:52:52 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully
Oct 13 13:52:52 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Oct 13 13:52:52 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Oct 13 13:52:52 standalone.localdomain ceph-mon[29756]: pgmap v494: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:52:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v495: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:52:54 standalone.localdomain ceph-mon[29756]: pgmap v495: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v496: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:56 standalone.localdomain ceph-mon[29756]: pgmap v496: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:56 standalone.localdomain ansible-async_wrapper.py[56202]: 56203 still running (3595)
Oct 13 13:52:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v497: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:57 standalone.localdomain python3[56360]: ansible-ansible.legacy.async_status Invoked with jid=853654849487.56199 mode=status _async_dir=/tmp/.ansible_async
Oct 13 13:52:57 standalone.localdomain groupadd[56366]: group added to /etc/group: name=haclient, GID=189
Oct 13 13:52:57 standalone.localdomain groupadd[56366]: group added to /etc/gshadow: name=haclient
Oct 13 13:52:57 standalone.localdomain groupadd[56366]: new group: name=haclient, GID=189
Oct 13 13:52:57 standalone.localdomain useradd[56373]: new user: name=hacluster, UID=189, GID=189, home=/home/hacluster, shell=/sbin/nologin, from=none
Oct 13 13:52:58 standalone.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 13 13:52:58 standalone.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 13 13:52:58 standalone.localdomain ceph-mon[29756]: pgmap v497: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:58 standalone.localdomain sshd[45863]: Received signal 15; terminating.
Oct 13 13:52:58 standalone.localdomain systemd[1]: Stopping OpenSSH server daemon...
Oct 13 13:52:58 standalone.localdomain systemd[1]: sshd.service: Deactivated successfully.
Oct 13 13:52:58 standalone.localdomain systemd[1]: Stopped OpenSSH server daemon.
Oct 13 13:52:58 standalone.localdomain systemd[1]: Stopped target sshd-keygen.target.
Oct 13 13:52:58 standalone.localdomain systemd[1]: Stopping sshd-keygen.target...
Oct 13 13:52:58 standalone.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 13:52:58 standalone.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 13:52:58 standalone.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 13:52:58 standalone.localdomain systemd[1]: Reached target sshd-keygen.target.
Oct 13 13:52:58 standalone.localdomain systemd[1]: Starting OpenSSH server daemon...
Oct 13 13:52:58 standalone.localdomain sshd[56396]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:52:58 standalone.localdomain sshd[56396]: Server listening on 0.0.0.0 port 22.
Oct 13 13:52:58 standalone.localdomain sshd[56396]: Server listening on :: port 22.
Oct 13 13:52:58 standalone.localdomain systemd[1]: Started OpenSSH server daemon.
Oct 13 13:52:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:52:58 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 13:52:58 standalone.localdomain systemd[1]: Starting man-db-cache-update.service...
Oct 13 13:52:58 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:52:59 standalone.localdomain systemd-rc-local-generator[56439]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:52:59 standalone.localdomain systemd-sysv-generator[56444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:52:59 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:52:59 standalone.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 13:52:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v498: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:52:59 standalone.localdomain ceph-mon[29756]: pgmap v498: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:00 standalone.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 13:53:00 standalone.localdomain systemd[1]: Finished man-db-cache-update.service.
Oct 13 13:53:00 standalone.localdomain systemd[1]: run-rcb280af283024c2397597524c33790f5.service: Deactivated successfully.
Oct 13 13:53:01 standalone.localdomain rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 13:53:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v499: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:01 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Pacemaker::Install/Package[pacemaker]/ensure: created
Oct 13 13:53:01 standalone.localdomain sshd[57340]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 13:53:01 standalone.localdomain ansible-async_wrapper.py[56202]: 56203 still running (3590)
Oct 13 13:53:02 standalone.localdomain ceph-mon[29756]: pgmap v499: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:02 standalone.localdomain sshd[57340]: Received disconnect from 193.46.255.99 port 43366:11:  [preauth]
Oct 13 13:53:02 standalone.localdomain sshd[57340]: Disconnected from authenticating user root 193.46.255.99 port 43366 [preauth]
Oct 13 13:53:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v500: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:53:04 standalone.localdomain ceph-mon[29756]: pgmap v500: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v501: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:06 standalone.localdomain ceph-mon[29756]: pgmap v501: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:06 standalone.localdomain ansible-async_wrapper.py[56202]: 56203 still running (3585)
Oct 13 13:53:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v502: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:07 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:53:07 standalone.localdomain systemd-rc-local-generator[57384]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:53:07 standalone.localdomain systemd-sysv-generator[57389]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:53:07 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:53:07 standalone.localdomain python3[57401]: ansible-ansible.legacy.async_status Invoked with jid=853654849487.56199 mode=status _async_dir=/tmp/.ansible_async
Oct 13 13:53:07 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 13:53:07 standalone.localdomain systemd[1]: Starting man-db-cache-update.service...
Oct 13 13:53:07 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:53:08 standalone.localdomain systemd-sysv-generator[57473]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:53:08 standalone.localdomain systemd-rc-local-generator[57466]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:53:08 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:53:08 standalone.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 13:53:08 standalone.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 13:53:08 standalone.localdomain systemd[1]: Finished man-db-cache-update.service.
Oct 13 13:53:08 standalone.localdomain systemd[1]: run-r8392bd853b894a5d90a78d46eead85af.service: Deactivated successfully.
Oct 13 13:53:08 standalone.localdomain ceph-mon[29756]: pgmap v502: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:53:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v503: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:09 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Pacemaker::Install/Package[pcs]/ensure: created
Oct 13 13:53:09 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_bind_addr]/ensure: created
Oct 13 13:53:09 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/password: changed [redacted] to [redacted]
Oct 13 13:53:09 standalone.localdomain usermod[57535]: add 'hacluster' to group 'haclient'
Oct 13 13:53:09 standalone.localdomain usermod[57535]: add 'hacluster' to shadow group 'haclient'
Oct 13 13:53:09 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/groups: groups changed  to ['haclient']
Oct 13 13:53:09 standalone.localdomain ceph-mon[29756]: pgmap v503: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:09 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:53:09 standalone.localdomain systemd-sysv-generator[57570]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:53:09 standalone.localdomain systemd-rc-local-generator[57566]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:53:09 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:53:09 standalone.localdomain systemd[1]: Starting PCS GUI and remote configuration interface (Ruby)...
Oct 13 13:53:10 standalone.localdomain pcsd[57581]: 2025-10-13 13:53:10 +0000  INFO Notifying systemd we are running (socket /run/systemd/notify)
Oct 13 13:53:10 standalone.localdomain systemd[1]: Started PCS GUI and remote configuration interface (Ruby).
Oct 13 13:53:10 standalone.localdomain systemd[1]: Starting PCS GUI and remote configuration interface...
Oct 13 13:53:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v504: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:11 standalone.localdomain systemd[1]: Started PCS GUI and remote configuration interface.
Oct 13 13:53:11 standalone.localdomain ansible-async_wrapper.py[56202]: 56203 still running (3580)
Oct 13 13:53:12 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:53:12 standalone.localdomain systemd-sysv-generator[57709]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:53:12 standalone.localdomain systemd-rc-local-generator[57704]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:53:12 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:53:12 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:53:12 standalone.localdomain systemd-rc-local-generator[57740]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:53:12 standalone.localdomain systemd-sysv-generator[57743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:53:12 standalone.localdomain ceph-mon[29756]: pgmap v504: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:12 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:53:12 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Pacemaker::Service/Service[pcsd]/ensure: ensure changed 'stopped' to 'running'
Oct 13 13:53:13 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Pacemaker::Corosync/Exec[check-for-local-authentication]/returns: executed successfully
Oct 13 13:53:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v505: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:53:14 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Triggered 'refresh' from 3 events
Oct 13 13:53:14 standalone.localdomain ceph-mon[29756]: pgmap v505: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v506: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:16 standalone.localdomain ceph-mon[29756]: pgmap v506: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:16 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:53:16 standalone.localdomain systemd-rc-local-generator[57821]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:53:16 standalone.localdomain systemd-sysv-generator[57827]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:53:16 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:53:16 standalone.localdomain ansible-async_wrapper.py[56202]: 56203 still running (3575)
Oct 13 13:53:17 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:53:17 standalone.localdomain systemd-sysv-generator[57860]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:53:17 standalone.localdomain systemd-rc-local-generator[57856]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:53:17 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:53:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v507: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:17 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/returns: executed successfully
Oct 13 13:53:17 standalone.localdomain python3[57886]: ansible-ansible.legacy.async_status Invoked with jid=853654849487.56199 mode=status _async_dir=/tmp/.ansible_async
Oct 13 13:53:18 standalone.localdomain ceph-mon[29756]: pgmap v507: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:53:18 standalone.localdomain systemd[1]: Starting Corosync Cluster Engine...
Oct 13 13:53:18 standalone.localdomain corosync[57893]:   [MAIN  ] Corosync Cluster Engine 3.1.7 starting up
Oct 13 13:53:18 standalone.localdomain corosync[57893]:   [MAIN  ] Corosync built-in features: dbus systemd xmlconf vqsim nozzle snmp pie relro bindnow
Oct 13 13:53:18 standalone.localdomain corosync[57893]:   [TOTEM ] Initializing transport (Kronosnet).
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [TOTEM ] totemknet initialized
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [KNET  ] pmtud: MTU manually set to: 0
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [KNET  ] common: crypto_nss.so has been loaded from /usr/lib64/kronosnet/crypto_nss.so
Oct 13 13:53:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v508: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [SERV  ] Service engine loaded: corosync configuration map access [0]
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [QB    ] server name: cmap
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [SERV  ] Service engine loaded: corosync configuration service [1]
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [QB    ] server name: cfg
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [QB    ] server name: cpg
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [SERV  ] Service engine loaded: corosync profile loading service [4]
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [QUORUM] Using quorum provider corosync_votequorum
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [QUORUM] This node is within the primary component and will provide service.
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [QUORUM] Members[0]:
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [QB    ] server name: votequorum
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [QB    ] server name: quorum
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [TOTEM ] Configuring link 0
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [TOTEM ] Configured link number 0: local addr: 172.17.0.100, port=5405
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [KNET  ] link: Resetting MTU for link 0 because host 1 joined
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [QUORUM] Sync members[1]: 1
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [QUORUM] Sync joined[1]: 1
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [TOTEM ] A new membership (1.5) was formed. Members joined: 1
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [QUORUM] Members[1]: 1
Oct 13 13:53:19 standalone.localdomain corosync[57893]:   [MAIN  ] Completed service synchronization, ready to provide service.
Oct 13 13:53:19 standalone.localdomain systemd[1]: Started Corosync Cluster Engine.
Oct 13 13:53:19 standalone.localdomain systemd[1]: Reached target resource-agents dependencies.
Oct 13 13:53:19 standalone.localdomain systemd[1]: Started Pacemaker High Availability Cluster Manager.
Oct 13 13:53:19 standalone.localdomain pacemakerd[57905]:  notice: Additional logging available in /var/log/pacemaker/pacemaker.log
Oct 13 13:53:19 standalone.localdomain pacemakerd[57905]:  notice: Starting Pacemaker 2.1.5-9.el9_2.4
Oct 13 13:53:19 standalone.localdomain pacemakerd[57905]:  notice: Pacemaker daemon successfully started and accepting connections
Oct 13 13:53:19 standalone.localdomain pacemaker-fenced[57907]:  notice: Additional logging available in /var/log/pacemaker/pacemaker.log
Oct 13 13:53:19 standalone.localdomain pacemaker-fenced[57907]:  notice: Starting Pacemaker fencer
Oct 13 13:53:19 standalone.localdomain pacemaker-fenced[57907]:  notice: Connecting to corosync cluster infrastructure
Oct 13 13:53:19 standalone.localdomain pacemaker-execd[57908]:  notice: Additional logging available in /var/log/pacemaker/pacemaker.log
Oct 13 13:53:19 standalone.localdomain pacemaker-execd[57908]:  notice: Starting Pacemaker local executor
Oct 13 13:53:19 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Starting Pacemaker scheduler
Oct 13 13:53:19 standalone.localdomain pacemaker-execd[57908]:  notice: Pacemaker local executor successfully started and accepting connections
Oct 13 13:53:19 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Pacemaker scheduler successfully started and accepting connections
Oct 13 13:53:19 standalone.localdomain pacemaker-execd[57908]:  notice: OCF resource agent search path is /usr/lib/ocf/resource.d
Oct 13 13:53:19 standalone.localdomain pacemaker-based[57906]:  notice: Additional logging available in /var/log/pacemaker/pacemaker.log
Oct 13 13:53:19 standalone.localdomain pacemaker-attrd[57909]:  notice: Additional logging available in /var/log/pacemaker/pacemaker.log
Oct 13 13:53:19 standalone.localdomain pacemaker-based[57906]:  notice: Starting Pacemaker CIB manager
Oct 13 13:53:19 standalone.localdomain pacemaker-attrd[57909]:  notice: Starting Pacemaker node attribute manager
Oct 13 13:53:19 standalone.localdomain pacemaker-based[57906]:  notice: /var/lib/pacemaker/cib/cib.xml not found: No such file or directory
Oct 13 13:53:19 standalone.localdomain pacemaker-based[57906]:  notice: /var/lib/pacemaker/cib/cib.xml.sig not found: No such file or directory
Oct 13 13:53:19 standalone.localdomain pacemaker-based[57906]:  warning: Could not verify cluster configuration file /var/lib/pacemaker/cib/cib.xml: No such file or directory (2)
Oct 13 13:53:19 standalone.localdomain pacemaker-based[57906]:  warning: Primary configuration corrupt or unusable, trying backups in /var/lib/pacemaker/cib
Oct 13 13:53:19 standalone.localdomain pacemaker-based[57906]:  warning: Continuing with an empty configuration.
Oct 13 13:53:19 standalone.localdomain pacemaker-controld[57911]:  notice: Additional logging available in /var/log/pacemaker/pacemaker.log
Oct 13 13:53:19 standalone.localdomain pacemaker-controld[57911]:  notice: Starting Pacemaker controller
Oct 13 13:53:19 standalone.localdomain pacemaker-fenced[57907]:  notice: Node standalone state is now member
Oct 13 13:53:19 standalone.localdomain pacemaker-based[57906]:  notice: Connecting to corosync cluster infrastructure
Oct 13 13:53:19 standalone.localdomain pacemaker-based[57906]:  notice: Node standalone state is now member
Oct 13 13:53:19 standalone.localdomain pacemaker-based[57906]:  notice: Pacemaker CIB manager successfully started and accepting connections
Oct 13 13:53:19 standalone.localdomain pacemaker-based[57912]:  warning: Could not verify cluster configuration file /var/lib/pacemaker/cib/cib.xml: No such file or directory (2)
Oct 13 13:53:19 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/returns: executed successfully
Oct 13 13:53:19 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:53:19 standalone.localdomain ceph-mon[29756]: pgmap v508: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:19 standalone.localdomain systemd-sysv-generator[57940]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:53:19 standalone.localdomain systemd-rc-local-generator[57936]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:53:19 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:53:19 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:53:20 standalone.localdomain systemd-rc-local-generator[57973]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:53:20 standalone.localdomain systemd-sysv-generator[57977]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:53:20 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:53:20 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Pacemaker::Service/Service[corosync]/enable: enable changed 'false' to 'true'
Oct 13 13:53:20 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:53:20 standalone.localdomain systemd-sysv-generator[58016]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:53:20 standalone.localdomain systemd-rc-local-generator[58011]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:53:20 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:53:20 standalone.localdomain pacemaker-attrd[57909]:  notice: Connecting to corosync cluster infrastructure
Oct 13 13:53:20 standalone.localdomain pacemaker-controld[57911]:  notice: Connecting to corosync cluster infrastructure
Oct 13 13:53:20 standalone.localdomain pacemaker-fenced[57907]:  notice: Pacemaker fencer successfully started and accepting connections
Oct 13 13:53:20 standalone.localdomain pacemaker-attrd[57909]:  notice: Node standalone state is now member
Oct 13 13:53:20 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:53:20 standalone.localdomain pacemaker-attrd[57909]:  notice: Pacemaker node attribute manager successfully started and accepting connections
Oct 13 13:53:20 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting #attrd-protocol[standalone]: (unset) -> 5
Oct 13 13:53:20 standalone.localdomain pacemaker-attrd[57909]:  notice: Recorded local node as attribute writer (was unset)
Oct 13 13:53:20 standalone.localdomain pacemaker-controld[57911]:  notice: Quorum acquired
Oct 13 13:53:20 standalone.localdomain pacemaker-controld[57911]:  notice: Node standalone state is now member
Oct 13 13:53:20 standalone.localdomain pacemaker-controld[57911]:  notice: Pacemaker controller successfully started and accepting connections
Oct 13 13:53:20 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_STARTING -> S_PENDING
Oct 13 13:53:20 standalone.localdomain systemd-rc-local-generator[58047]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:53:20 standalone.localdomain systemd-sysv-generator[58053]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:53:20 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:53:20 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Pacemaker::Service/Service[pacemaker]/enable: enable changed 'false' to 'true'
Oct 13 13:53:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v509: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:21 standalone.localdomain pacemaker-controld[57911]:  notice: Fencer successfully connected
Oct 13 13:53:22 standalone.localdomain ansible-async_wrapper.py[56202]: 56203 still running (3570)
Oct 13 13:53:22 standalone.localdomain ceph-mon[29756]: pgmap v509: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:53:23
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:53:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v510: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:53:24 standalone.localdomain ceph-mon[29756]: pgmap v510: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v511: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:26 standalone.localdomain ceph-mon[29756]: pgmap v511: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:27 standalone.localdomain ansible-async_wrapper.py[56202]: 56203 still running (3565)
Oct 13 13:53:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v512: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:28 standalone.localdomain python3[58092]: ansible-ansible.legacy.async_status Invoked with jid=853654849487.56199 mode=status _async_dir=/tmp/.ansible_async
Oct 13 13:53:28 standalone.localdomain ceph-mon[29756]: pgmap v512: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:53:29 standalone.localdomain sudo[58093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:53:29 standalone.localdomain sudo[58093]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:53:29 standalone.localdomain sudo[58093]: pam_unix(sudo:session): session closed for user root
Oct 13 13:53:29 standalone.localdomain sudo[58108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:53:29 standalone.localdomain sudo[58108]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:53:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v513: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:29 standalone.localdomain sudo[58108]: pam_unix(sudo:session): session closed for user root
Oct 13 13:53:29 standalone.localdomain ceph-mon[29756]: pgmap v513: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:53:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:53:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:53:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:53:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:53:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:53:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:53:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:53:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev a890cb6b-d95d-4ed8-bbd6-dc9f9bc5c63f (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:53:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev a890cb6b-d95d-4ed8-bbd6-dc9f9bc5c63f (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:53:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event a890cb6b-d95d-4ed8-bbd6-dc9f9bc5c63f (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:53:29 standalone.localdomain sudo[58156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:53:29 standalone.localdomain sudo[58156]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:53:29 standalone.localdomain sudo[58156]: pam_unix(sudo:session): session closed for user root
Oct 13 13:53:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:53:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:53:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:53:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:53:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v514: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:31 standalone.localdomain ceph-mon[29756]: pgmap v514: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:32 standalone.localdomain ansible-async_wrapper.py[56202]: 56203 still running (3560)
Oct 13 13:53:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v515: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:33 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 22 completed events
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.716728) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363613716857, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1536, "num_deletes": 251, "total_data_size": 558508, "memory_usage": 587488, "flush_reason": "Manual Compaction"}
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363613725297, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 358671, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10657, "largest_seqno": 12192, "table_properties": {"data_size": 354589, "index_size": 1491, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12061, "raw_average_key_size": 19, "raw_value_size": 344812, "raw_average_value_size": 568, "num_data_blocks": 70, "num_entries": 607, "num_filter_entries": 607, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760363475, "oldest_key_time": 1760363475, "file_creation_time": 1760363613, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 8611 microseconds, and 2588 cpu microseconds.
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.725357) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 358671 bytes OK
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.725391) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.731934) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.731966) EVENT_LOG_v1 {"time_micros": 1760363613731958, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.731997) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 551681, prev total WAL file size 552170, number of live WAL files 2.
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.732818) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(350KB)], [29(2467KB)]
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363613732884, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 2884910, "oldest_snapshot_seqno": -1}
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 2438 keys, 2409091 bytes, temperature: kUnknown
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363613752979, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 2409091, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 2394047, "index_size": 7844, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6149, "raw_key_size": 54244, "raw_average_key_size": 22, "raw_value_size": 2351395, "raw_average_value_size": 964, "num_data_blocks": 359, "num_entries": 2438, "num_filter_entries": 2438, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760363613, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.753289) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 2409091 bytes
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.756593) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.9 rd, 119.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 2.4 +0.0 blob) out(2.3 +0.0 blob), read-write-amplify(14.8) write-amplify(6.7) OK, records in: 2892, records dropped: 454 output_compression: NoCompression
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.756678) EVENT_LOG_v1 {"time_micros": 1760363613756640, "job": 12, "event": "compaction_finished", "compaction_time_micros": 20189, "compaction_time_cpu_micros": 10325, "output_level": 6, "num_output_files": 1, "total_output_size": 2409091, "num_input_records": 2892, "num_output_records": 2438, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363613757069, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363613757534, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.732728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.757586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.757595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.757599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.757603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:53:33 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:53:33.757607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:53:34 standalone.localdomain ceph-mon[29756]: pgmap v515: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:53:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v516: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:36 standalone.localdomain ceph-mon[29756]: pgmap v516: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:37 standalone.localdomain ansible-async_wrapper.py[56202]: 56203 still running (3555)
Oct 13 13:53:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v517: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:38 standalone.localdomain python3[58189]: ansible-ansible.legacy.async_status Invoked with jid=853654849487.56199 mode=status _async_dir=/tmp/.ansible_async
Oct 13 13:53:38 standalone.localdomain ceph-mon[29756]: pgmap v517: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:53:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v518: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:39 standalone.localdomain ceph-mon[29756]: pgmap v518: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v519: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:41 standalone.localdomain pacemaker-controld[57911]:  warning: Input I_DC_TIMEOUT received in state S_PENDING from crm_timer_popped
Oct 13 13:53:41 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_ELECTION -> S_INTEGRATION
Oct 13 13:53:41 standalone.localdomain pacemaker-controld[57911]:  notice: Cluster does not have watchdog fencing device
Oct 13 13:53:41 standalone.localdomain pacemaker-controld[57911]:  notice: Cluster does not have watchdog fencing device
Oct 13 13:53:41 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting #feature-set[standalone]: (unset) -> 3.16.2
Oct 13 13:53:42 standalone.localdomain ansible-async_wrapper.py[56202]: 56203 still running (3550)
Oct 13 13:53:42 standalone.localdomain ceph-mon[29756]: pgmap v519: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:42 standalone.localdomain pacemaker-schedulerd[57910]:  error: Resource start-up disabled since no STONITH resources have been defined
Oct 13 13:53:42 standalone.localdomain pacemaker-schedulerd[57910]:  error: Either configure some or disable STONITH with the stonith-enabled option
Oct 13 13:53:42 standalone.localdomain pacemaker-schedulerd[57910]:  error: NOTE: Clusters with shared data need STONITH to ensure data integrity
Oct 13 13:53:42 standalone.localdomain pacemaker-schedulerd[57910]:  notice: No fencing will be done until there are resources to manage
Oct 13 13:53:42 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 0, saving inputs in /var/lib/pacemaker/pengine/pe-input-0.bz2
Oct 13 13:53:42 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Configuration errors found during scheduler processing,  please run "crm_verify -L" to identify issues
Oct 13 13:53:42 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 0 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): Complete
Oct 13 13:53:42 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:53:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v520: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:43 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: executed successfully
Oct 13 13:53:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:53:44 standalone.localdomain ceph-mon[29756]: pgmap v520: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:45 standalone.localdomain puppet-user[56211]: Deprecation Warning: This command is deprecated and will be removed. Please use 'pcs property config' instead.
Oct 13 13:53:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v521: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:46 standalone.localdomain ceph-mon[29756]: pgmap v521: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:47 standalone.localdomain ansible-async_wrapper.py[56202]: 56203 still running (3545)
Oct 13 13:53:47 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]: Notice: /Stage[main]/Pacemaker::Stonith/Pacemaker::Property[Disable STONITH]/Pcmk_property[property--stonith-enabled]/ensure: created
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]: Notice: Applied catalog in 55.06 seconds
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]: Application:
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:    Initial environment: production
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:    Converged environment: production
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:          Run mode: user
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]: Changes:
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:             Total: 17
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]: Events:
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:           Success: 17
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:             Total: 17
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]: Resources:
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:         Restarted: 1
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:           Changed: 16
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:       Out of sync: 16
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:             Total: 26
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]: Time:
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:        Filebucket: 0.00
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:          Schedule: 0.00
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:         File line: 0.00
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:              File: 0.00
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:            Augeas: 0.02
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:              User: 0.17
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:    Config retrieval: 0.48
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:           Package: 17.08
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:          Last run: 1760363627
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:              Exec: 28.85
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:     Pcmk property: 3.59
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:           Service: 4.10
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:    Transaction evaluation: 55.05
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:    Catalog application: 55.06
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:             Total: 55.06
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]: Version:
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:            Config: 1760363571
Oct 13 13:53:47 standalone.localdomain puppet-user[56211]:            Puppet: 7.10.0
Oct 13 13:53:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v522: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:47 standalone.localdomain ansible-async_wrapper.py[56203]: Module complete (56203)
Oct 13 13:53:48 standalone.localdomain pacemaker-controld[57911]:  notice: Cluster does not have watchdog fencing device
Oct 13 13:53:48 standalone.localdomain pacemaker-schedulerd[57910]:  warning: Blind faith: not fencing unseen nodes
Oct 13 13:53:48 standalone.localdomain pacemaker-schedulerd[57910]:  notice: No fencing will be done until there are resources to manage
Oct 13 13:53:48 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 1, saving inputs in /var/lib/pacemaker/pengine/pe-input-1.bz2
Oct 13 13:53:48 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 1 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): Complete
Oct 13 13:53:48 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:53:48 standalone.localdomain ceph-mon[29756]: pgmap v522: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:48 standalone.localdomain python3[58235]: ansible-ansible.legacy.async_status Invoked with jid=853654849487.56199 mode=status _async_dir=/tmp/.ansible_async
Oct 13 13:53:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:53:49 standalone.localdomain python3[58244]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 13:53:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v523: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:49 standalone.localdomain python3[58248]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:53:49 standalone.localdomain ceph-mon[29756]: pgmap v523: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:49 standalone.localdomain python3[58262]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:53:50 standalone.localdomain python3[58266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/root/.ansible/tmp/ansible-tmp-1760363629.641832-58252-264661159688109/source _original_basename=tmpd7t7ns3q follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 13:53:50 standalone.localdomain python3[58272]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:53:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v524: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:51 standalone.localdomain python3[58353]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Oct 13 13:53:51 standalone.localdomain python3[58362]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 13:53:51 standalone.localdomain systemd-journald[48591]: Field hash table of /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal has a fill level at 78.1 (260 of 333 items), suggesting rotation.
Oct 13 13:53:51 standalone.localdomain systemd-journald[48591]: /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 13 13:53:51 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 13:53:51 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 13:53:52 standalone.localdomain ansible-async_wrapper.py[56202]: Done in kid B.
Oct 13 13:53:52 standalone.localdomain python3[58367]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=standalone step=1 update_config_hash_only=False
Oct 13 13:53:52 standalone.localdomain ceph-mon[29756]: pgmap v524: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:52 standalone.localdomain python3[58375]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:53:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:53:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:53:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:53:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:53:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:53:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:53:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v525: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:53:54 standalone.localdomain ceph-mon[29756]: pgmap v525: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:54 standalone.localdomain python3[58425]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False
Oct 13 13:53:55 standalone.localdomain podman[58606]: 2025-10-13 13:53:55.314134729 +0000 UTC m=+0.096553018 container create 848a68670d9e27ef81c92311480d2157a355ca0b1e46f3525a85617af4f7db66 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=container-puppet-glance_api, config_id=tripleo_puppet_step1, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config', 'NAME': 'glance_api', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::glance::api\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-glance-api, distribution-scope=public, com.redhat.component=openstack-glance-api-container, batch=17.1_20250721.1, release=1, build-date=2025-07-21T13:58:20, version=17.1.9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, container_name=container-puppet-glance_api, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1)
Oct 13 13:53:55 standalone.localdomain podman[58613]: 2025-10-13 13:53:55.32353301 +0000 UTC m=+0.094878397 container create 48939fe4969186d618d37decb411b78f51e8550ba212d5ea72284648b672804c (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=container-puppet-cinder, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=container-puppet-cinder, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T15:58:55, release=1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.component=openstack-cinder-api-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_api_paste_ini,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', 'NAME': 'cinder', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::cinder::api\ninclude tripleo::profile::base::database::mysql::client\n\ninclude tripleo::profile::base::cinder::backup::ceph\ninclude tripleo::profile::base::database::mysql::client\ninclude 
tripleo::profile::pacemaker::cinder::backup_bundle\ninclude tripleo::profile::base::cinder::scheduler\n\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::lvm\ninclude tripleo::profile::pacemaker::cinder::volume_bundle\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 13:53:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v526: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:55 standalone.localdomain podman[58606]: 2025-10-13 13:53:55.246457899 +0000 UTC m=+0.028876188 image pull  registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1
Oct 13 13:53:55 standalone.localdomain podman[58620]: 2025-10-13 13:53:55.348396326 +0000 UTC m=+0.114963059 container create 13c8cbdf23fb0e277cf2f17648f69ab57feecd870ecfb5ab703fd045af92caea (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=container-puppet-clustercheck, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.component=openstack-mariadb-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, io.openshift.expose-services=, container_name=container-puppet-clustercheck, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, batch=17.1_20250721.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'clustercheck', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::pacemaker::clustercheck'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, config_id=tripleo_puppet_step1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, build-date=2025-07-21T12:58:45, vcs-type=git, name=rhosp17/openstack-mariadb)
Oct 13 13:53:55 standalone.localdomain podman[58613]: 2025-10-13 13:53:55.25953396 +0000 UTC m=+0.030879327 image pull  registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1
Oct 13 13:53:55 standalone.localdomain systemd[1]: Started libpod-conmon-48939fe4969186d618d37decb411b78f51e8550ba212d5ea72284648b672804c.scope.
Oct 13 13:53:55 standalone.localdomain podman[58620]: 2025-10-13 13:53:55.265910932 +0000 UTC m=+0.032477665 image pull  registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1
Oct 13 13:53:55 standalone.localdomain podman[58631]: 2025-10-13 13:53:55.367339474 +0000 UTC m=+0.122023241 container create e97197612237163624938886ba8b8c063cad9e59c6d7b00b29bcde0313832270 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, build-date=2025-07-21T13:07:52, container_name=container-puppet-crond, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_puppet_step1, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 13 13:53:55 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:53:55 standalone.localdomain systemd[1]: Started libpod-conmon-13c8cbdf23fb0e277cf2f17648f69ab57feecd870ecfb5ab703fd045af92caea.scope.
Oct 13 13:53:55 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebc8707f4c7cba2ecd74c41ce3f00c568695e2bd83874b7af7633e3c76f6461/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:53:55 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:53:55 standalone.localdomain podman[58613]: 2025-10-13 13:53:55.390102757 +0000 UTC m=+0.161448154 container init 48939fe4969186d618d37decb411b78f51e8550ba212d5ea72284648b672804c (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=container-puppet-cinder, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-cinder-api, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, container_name=container-puppet-cinder, description=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, build-date=2025-07-21T15:58:55, release=1, com.redhat.component=openstack-cinder-api-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_api_paste_ini,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', 'NAME': 'cinder', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::cinder::api\ninclude tripleo::profile::base::database::mysql::client\n\ninclude tripleo::profile::base::cinder::backup::ceph\ninclude tripleo::profile::base::database::mysql::client\ninclude 
tripleo::profile::pacemaker::cinder::backup_bundle\ninclude tripleo::profile::base::cinder::scheduler\n\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::lvm\ninclude tripleo::profile::pacemaker::cinder::volume_bundle\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, tcib_managed=true, version=17.1.9)
Oct 13 13:53:55 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f7a4d09cf0fc6d635961ac1a5e419617e8dfdf8c2aec5e2a272814a4290da2/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:53:55 standalone.localdomain podman[58613]: 2025-10-13 13:53:55.39784457 +0000 UTC m=+0.169189967 container start 48939fe4969186d618d37decb411b78f51e8550ba212d5ea72284648b672804c (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=container-puppet-cinder, build-date=2025-07-21T15:58:55, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, release=1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_api_paste_ini,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', 'NAME': 'cinder', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::cinder::api\ninclude tripleo::profile::base::database::mysql::client\n\ninclude tripleo::profile::base::cinder::backup::ceph\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::pacemaker::cinder::backup_bundle\ninclude tripleo::profile::base::cinder::scheduler\n\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::lvm\ninclude tripleo::profile::pacemaker::cinder::volume_bundle\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, container_name=container-puppet-cinder, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-cinder-api-container, config_id=tripleo_puppet_step1, name=rhosp17/openstack-cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:53:55 standalone.localdomain podman[58613]: 2025-10-13 13:53:55.398148729 +0000 UTC m=+0.169494096 container attach 48939fe4969186d618d37decb411b78f51e8550ba212d5ea72284648b672804c (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=container-puppet-cinder, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_api_paste_ini,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', 'NAME': 'cinder', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::cinder::api\ninclude tripleo::profile::base::database::mysql::client\n\ninclude tripleo::profile::base::cinder::backup::ceph\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::pacemaker::cinder::backup_bundle\ninclude tripleo::profile::base::cinder::scheduler\n\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::lvm\ninclude tripleo::profile::pacemaker::cinder::volume_bundle\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, tcib_managed=true, architecture=x86_64, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T15:58:55, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-cinder-api-container, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, container_name=container-puppet-cinder, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 13:53:55 standalone.localdomain systemd[1]: Started libpod-conmon-848a68670d9e27ef81c92311480d2157a355ca0b1e46f3525a85617af4f7db66.scope.
Oct 13 13:53:55 standalone.localdomain systemd[1]: Started libpod-conmon-e97197612237163624938886ba8b8c063cad9e59c6d7b00b29bcde0313832270.scope.
Oct 13 13:53:55 standalone.localdomain podman[58620]: 2025-10-13 13:53:55.402741316 +0000 UTC m=+0.169308049 container init 13c8cbdf23fb0e277cf2f17648f69ab57feecd870ecfb5ab703fd045af92caea (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=container-puppet-clustercheck, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, build-date=2025-07-21T12:58:45, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'clustercheck', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::pacemaker::clustercheck'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, 
config_id=tripleo_puppet_step1, container_name=container-puppet-clustercheck, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 13:53:55 standalone.localdomain podman[58631]: 2025-10-13 13:53:55.304232841 +0000 UTC m=+0.058916618 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Oct 13 13:53:55 standalone.localdomain podman[58636]: 2025-10-13 13:53:55.306957573 +0000 UTC m=+0.056599859 image pull  registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1
Oct 13 13:53:55 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:53:55 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:53:55 standalone.localdomain podman[58620]: 2025-10-13 13:53:55.409643583 +0000 UTC m=+0.176210316 container start 13c8cbdf23fb0e277cf2f17648f69ab57feecd870ecfb5ab703fd045af92caea (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=container-puppet-clustercheck, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, config_id=tripleo_puppet_step1, container_name=container-puppet-clustercheck, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, name=rhosp17/openstack-mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:45, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'clustercheck', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::pacemaker::clustercheck'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 13:53:55 standalone.localdomain podman[58620]: 2025-10-13 13:53:55.410092237 +0000 UTC m=+0.176658970 container attach 13c8cbdf23fb0e277cf2f17648f69ab57feecd870ecfb5ab703fd045af92caea (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=container-puppet-clustercheck, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-mariadb-container, vcs-type=git, io.buildah.version=1.33.12, version=17.1.9, container_name=container-puppet-clustercheck, build-date=2025-07-21T12:58:45, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'clustercheck', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::pacemaker::clustercheck'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, architecture=x86_64, config_id=tripleo_puppet_step1, name=rhosp17/openstack-mariadb, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 13:53:55 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e53c626af8f2c4d4f73282ff962891d82a21f7b0dd776480787a910cef793ec/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:53:55 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db193506eaeea5bc352278b61a9192c8541e438157f80fdad5fbfa7bd832e228/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:53:55 standalone.localdomain podman[58636]: 2025-10-13 13:53:55.418473758 +0000 UTC m=+0.168116054 container create a1f0330d550b2d6044ac233ed638fe82a5fa02cbfb4d5d5b9b6760c1e472335a (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=container-puppet-barbican, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-api, batch=17.1_20250721.1, config_id=tripleo_puppet_step1, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, distribution-scope=public, com.redhat.component=openstack-barbican-api-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,barbican_api_paste_ini,barbican_config', 'NAME': 'barbican', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::barbican::api\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, container_name=container-puppet-barbican)
Oct 13 13:53:55 standalone.localdomain podman[58606]: 2025-10-13 13:53:55.421644603 +0000 UTC m=+0.204062892 container init 848a68670d9e27ef81c92311480d2157a355ca0b1e46f3525a85617af4f7db66 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=container-puppet-glance_api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, release=1, io.openshift.expose-services=, container_name=container-puppet-glance_api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config', 'NAME': 'glance_api', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::glance::api\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_puppet_step1, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-glance-api-container)
Oct 13 13:53:55 standalone.localdomain podman[58606]: 2025-10-13 13:53:55.438822938 +0000 UTC m=+0.221241267 container start 848a68670d9e27ef81c92311480d2157a355ca0b1e46f3525a85617af4f7db66 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=container-puppet-glance_api, tcib_managed=true, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config', 'NAME': 'glance_api', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::glance::api\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, 
batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-glance-api-container, config_id=tripleo_puppet_step1, version=17.1.9, container_name=container-puppet-glance_api, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20, release=1)
Oct 13 13:53:55 standalone.localdomain podman[58606]: 2025-10-13 13:53:55.439063845 +0000 UTC m=+0.221482144 container attach 848a68670d9e27ef81c92311480d2157a355ca0b1e46f3525a85617af4f7db66 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=container-puppet-glance_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-glance-api-container, release=1, build-date=2025-07-21T13:58:20, container_name=container-puppet-glance_api, version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config', 'NAME': 'glance_api', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::glance::api\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.33.12, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public)
Oct 13 13:53:55 standalone.localdomain systemd[1]: Started libpod-conmon-a1f0330d550b2d6044ac233ed638fe82a5fa02cbfb4d5d5b9b6760c1e472335a.scope.
Oct 13 13:53:55 standalone.localdomain podman[58679]: 2025-10-13 13:53:55.456009513 +0000 UTC m=+0.166393321 container create 82f315f8d78a88d5e6f207d8087ac567c6fc668af9126f55ca61b69f7cd349ae (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=container-puppet-glance_api_internal, build-date=2025-07-21T13:58:20, distribution-scope=public, config_id=tripleo_puppet_step1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config', 'NAME': 'glance_api_internal', 'STEP_CONFIG': "include ::tripleo::packages\nclass { 'tripleo::profile::base::glance::api':\n  bind_port => 9293,\n  tls_proxy_port => 9293,\n  log_file => '/var/log/glance/api_internal.log',\n  show_image_direct_url => true,\n  show_multiple_locations => true,\n}\n\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, container_name=container-puppet-glance_api_internal, maintainer=OpenStack TripleO Team)
Oct 13 13:53:55 standalone.localdomain podman[58679]: 2025-10-13 13:53:55.365799938 +0000 UTC m=+0.076183756 image pull  registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1
Oct 13 13:53:55 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:53:55 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59dd28e840ca55d017be8e7da1cf03b089ee3ce35fdcc06fb7e7fd2d34874879/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:53:55 standalone.localdomain podman[58631]: 2025-10-13 13:53:55.480671413 +0000 UTC m=+0.235355190 container init e97197612237163624938886ba8b8c063cad9e59c6d7b00b29bcde0313832270 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., 
config_id=tripleo_puppet_step1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, container_name=container-puppet-crond, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 13:53:55 standalone.localdomain systemd[1]: Started libpod-conmon-82f315f8d78a88d5e6f207d8087ac567c6fc668af9126f55ca61b69f7cd349ae.scope.
Oct 13 13:53:55 standalone.localdomain podman[58631]: 2025-10-13 13:53:55.488804477 +0000 UTC m=+0.243488264 container start e97197612237163624938886ba8b8c063cad9e59c6d7b00b29bcde0313832270 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, release=1, container_name=container-puppet-crond, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_puppet_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron)
Oct 13 13:53:55 standalone.localdomain podman[58631]: 2025-10-13 13:53:55.488950512 +0000 UTC m=+0.243634289 container attach e97197612237163624938886ba8b8c063cad9e59c6d7b00b29bcde0313832270 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, version=17.1.9, config_id=tripleo_puppet_step1, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, container_name=container-puppet-crond, name=rhosp17/openstack-cron, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1)
Oct 13 13:53:55 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:53:55 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7237bb817dfe05d48f2606885847d1490e7fbe49ddf4120a0989298e8583c1c8/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:53:55 standalone.localdomain podman[58679]: 2025-10-13 13:53:55.502640852 +0000 UTC m=+0.213024660 container init 82f315f8d78a88d5e6f207d8087ac567c6fc668af9126f55ca61b69f7cd349ae (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=container-puppet-glance_api_internal, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., container_name=container-puppet-glance_api_internal, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, release=1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config', 'NAME': 'glance_api_internal', 'STEP_CONFIG': "include ::tripleo::packages\nclass { 'tripleo::profile::base::glance::api':\n  bind_port => 9293,\n  tls_proxy_port => 9293,\n  log_file => '/var/log/glance/api_internal.log',\n  show_image_direct_url => true,\n  show_multiple_locations => true,\n}\n\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-glance-api-container, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64)
Oct 13 13:53:55 standalone.localdomain podman[58679]: 2025-10-13 13:53:55.511040394 +0000 UTC m=+0.221424202 container start 82f315f8d78a88d5e6f207d8087ac567c6fc668af9126f55ca61b69f7cd349ae (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=container-puppet-glance_api_internal, container_name=container-puppet-glance_api_internal, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, io.openshift.expose-services=, io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9, vcs-type=git, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config', 'NAME': 'glance_api_internal', 'STEP_CONFIG': "include ::tripleo::packages\nclass { 'tripleo::profile::base::glance::api':\n  bind_port => 9293,\n  tls_proxy_port => 9293,\n  log_file => '/var/log/glance/api_internal.log',\n  show_image_direct_url => true,\n  show_multiple_locations => true,\n}\n\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-glance-api-container, tcib_managed=true, build-date=2025-07-21T13:58:20, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 13:53:55 standalone.localdomain podman[58679]: 2025-10-13 13:53:55.51122051 +0000 UTC m=+0.221604328 container attach 82f315f8d78a88d5e6f207d8087ac567c6fc668af9126f55ca61b69f7cd349ae (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=container-puppet-glance_api_internal, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, container_name=container-puppet-glance_api_internal, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config', 'NAME': 'glance_api_internal', 'STEP_CONFIG': "include ::tripleo::packages\nclass { 'tripleo::profile::base::glance::api':\n  bind_port => 9293,\n  tls_proxy_port => 9293,\n  log_file => '/var/log/glance/api_internal.log',\n  show_image_direct_url => true,\n  show_multiple_locations => true,\n}\n\ninclude 
tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-glance-api-container)
Oct 13 13:53:55 standalone.localdomain podman[58636]: 2025-10-13 13:53:55.535573421 +0000 UTC m=+0.285215717 container init a1f0330d550b2d6044ac233ed638fe82a5fa02cbfb4d5d5b9b6760c1e472335a (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=container-puppet-barbican, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, release=1, container_name=container-puppet-barbican, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-barbican-api-container, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,barbican_api_paste_ini,barbican_config', 'NAME': 'barbican', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::barbican::api\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-07-21T15:22:44, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team)
Oct 13 13:53:55 standalone.localdomain podman[58636]: 2025-10-13 13:53:55.547512208 +0000 UTC m=+0.297154514 container start a1f0330d550b2d6044ac233ed638fe82a5fa02cbfb4d5d5b9b6760c1e472335a (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=container-puppet-barbican, com.redhat.component=openstack-barbican-api-container, version=17.1.9, name=rhosp17/openstack-barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,barbican_api_paste_ini,barbican_config', 'NAME': 'barbican', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::barbican::api\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.33.12, container_name=container-puppet-barbican, 
io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_puppet_step1, build-date=2025-07-21T15:22:44, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1)
Oct 13 13:53:55 standalone.localdomain podman[58636]: 2025-10-13 13:53:55.547679823 +0000 UTC m=+0.297322119 container attach a1f0330d550b2d6044ac233ed638fe82a5fa02cbfb4d5d5b9b6760c1e472335a (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=container-puppet-barbican, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,barbican_api_paste_ini,barbican_config', 'NAME': 'barbican', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::barbican::api\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, 
io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_puppet_step1, build-date=2025-07-21T15:22:44, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, com.redhat.component=openstack-barbican-api-container, container_name=container-puppet-barbican, release=1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed)
Oct 13 13:53:56 standalone.localdomain ceph-mon[29756]: pgmap v526: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:    (file & line not available)
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:    (file & line not available)
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.07 seconds
Oct 13 13:53:57 standalone.localdomain puppet-user[58752]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:53:57 standalone.localdomain puppet-user[58752]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:53:57 standalone.localdomain puppet-user[58752]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:53:57 standalone.localdomain puppet-user[58752]:    (file & line not available)
Oct 13 13:53:57 standalone.localdomain puppet-user[58760]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:53:57 standalone.localdomain puppet-user[58760]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:53:57 standalone.localdomain puppet-user[58760]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:53:57 standalone.localdomain puppet-user[58760]:    (file & line not available)
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0'
Oct 13 13:53:57 standalone.localdomain crontab[59126]: (root) LIST (root)
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created
Oct 13 13:53:57 standalone.localdomain puppet-user[58752]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:53:57 standalone.localdomain puppet-user[58752]:    (file & line not available)
Oct 13 13:53:57 standalone.localdomain puppet-user[58752]: Warning: Scope(Class[Tripleo::Profile::Base::Cinder::Api]): The keymgr_backend parameter has been deprecated and has no effect.
Oct 13 13:53:57 standalone.localdomain crontab[59137]: (root) REPLACE (root)
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]: Notice: Applied catalog in 0.04 seconds
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]: Application:
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:    Initial environment: production
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:    Converged environment: production
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:          Run mode: user
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]: Changes:
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:             Total: 2
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]: Events:
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:           Success: 2
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:             Total: 2
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]: Resources:
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:           Changed: 2
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:       Out of sync: 2
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:           Skipped: 7
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:             Total: 9
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]: Time:
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:              File: 0.01
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:              Cron: 0.01
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:    Transaction evaluation: 0.03
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:    Catalog application: 0.04
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:    Config retrieval: 0.10
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:          Last run: 1760363637
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:             Total: 0.04
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]: Version:
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:            Config: 1760363637
Oct 13 13:53:57 standalone.localdomain puppet-user[58782]:            Puppet: 7.10.0
Oct 13 13:53:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v527: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:57 standalone.localdomain puppet-user[58760]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:53:57 standalone.localdomain puppet-user[58760]:    (file & line not available)
Oct 13 13:53:57 standalone.localdomain puppet-user[58795]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:53:57 standalone.localdomain puppet-user[58795]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:53:57 standalone.localdomain puppet-user[58795]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:53:57 standalone.localdomain puppet-user[58795]:    (file & line not available)
Oct 13 13:53:57 standalone.localdomain puppet-user[58806]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:53:57 standalone.localdomain puppet-user[58806]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:53:57 standalone.localdomain puppet-user[58806]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:53:57 standalone.localdomain puppet-user[58806]:    (file & line not available)
Oct 13 13:53:57 standalone.localdomain puppet-user[58795]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:53:57 standalone.localdomain puppet-user[58795]:    (file & line not available)
Oct 13 13:53:57 standalone.localdomain puppet-user[58806]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:53:57 standalone.localdomain puppet-user[58806]:    (file & line not available)
Oct 13 13:53:57 standalone.localdomain puppet-user[58752]: Warning: Scope(Class[Cinder]): The database_connection parameter is deprecated and will be \
Oct 13 13:53:57 standalone.localdomain puppet-user[58752]: removed in a future release. Use cinder::db::database_connection instead
Oct 13 13:53:57 standalone.localdomain puppet-user[58752]: Warning: Unknown variable: 'cinder::api::keymgr_backend'. (file: /etc/puppet/modules/cinder/manifests/init.pp, line: 455, column: 31)
Oct 13 13:53:57 standalone.localdomain puppet-user[58752]: Warning: Unknown variable: 'cinder::api::keymgr_encryption_api_url'. (file: /etc/puppet/modules/cinder/manifests/init.pp, line: 456, column: 42)
Oct 13 13:53:57 standalone.localdomain puppet-user[58752]: Warning: Unknown variable: 'cinder::api::keymgr_encryption_auth_url'. (file: /etc/puppet/modules/cinder/manifests/init.pp, line: 458, column: 43)
Oct 13 13:53:57 standalone.localdomain systemd[1]: libpod-e97197612237163624938886ba8b8c063cad9e59c6d7b00b29bcde0313832270.scope: Deactivated successfully.
Oct 13 13:53:57 standalone.localdomain systemd[1]: libpod-e97197612237163624938886ba8b8c063cad9e59c6d7b00b29bcde0313832270.scope: Consumed 1.989s CPU time.
Oct 13 13:53:57 standalone.localdomain podman[58631]: 2025-10-13 13:53:57.614512247 +0000 UTC m=+2.369196014 container died e97197612237163624938886ba8b8c063cad9e59c6d7b00b29bcde0313832270 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20250721.1, vendor=Red Hat, Inc., container_name=container-puppet-crond, name=rhosp17/openstack-cron, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git)
Oct 13 13:53:57 standalone.localdomain puppet-user[58760]: Warning: Scope(Class[Glance::Api]): glance::api::os_region_name is deprecated. Use \
Oct 13 13:53:57 standalone.localdomain puppet-user[58760]: glance::backend::multistore::cinder::cinder_os_region_name instead.
Oct 13 13:53:57 standalone.localdomain puppet-user[58795]: Warning: Scope(Class[Glance::Api]): glance::api::os_region_name is deprecated. Use \
Oct 13 13:53:57 standalone.localdomain puppet-user[58795]: glance::backend::multistore::cinder::cinder_os_region_name instead.
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:    (file & line not available)
Oct 13 13:53:57 standalone.localdomain systemd[1]: tmp-crun.CpNR7d.mount: Deactivated successfully.
Oct 13 13:53:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e97197612237163624938886ba8b8c063cad9e59c6d7b00b29bcde0313832270-userdata-shm.mount: Deactivated successfully.
Oct 13 13:53:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8e53c626af8f2c4d4f73282ff962891d82a21f7b0dd776480787a910cef793ec-merged.mount: Deactivated successfully.
Oct 13 13:53:57 standalone.localdomain puppet-user[58795]: Warning: Scope(Class[Glance::Api]): The show_multiple_locations parameter is deprecated, and will be removed in a future release
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:    (file & line not available)
Oct 13 13:53:57 standalone.localdomain podman[59228]: 2025-10-13 13:53:57.727928119 +0000 UTC m=+0.105094783 container cleanup e97197612237163624938886ba8b8c063cad9e59c6d7b00b29bcde0313832270 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, release=1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, container_name=container-puppet-crond, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1)
Oct 13 13:53:57 standalone.localdomain systemd[1]: libpod-conmon-e97197612237163624938886ba8b8c063cad9e59c6d7b00b29bcde0313832270.scope: Deactivated successfully.
Oct 13 13:53:57 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.08 seconds
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]: Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{sha256}af29a9eb31131202cc1348fc00331eb91d25f5407fa7ea33ff6c8da25b2edba6'
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]: Notice: /Stage[main]/Xinetd/File[/etc/xinetd.d]/ensure: created
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]: Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/ensure: defined content as '{sha256}37f48d3e5dce056a46519d144042a388bde95ea9fa161d3362eedf70e7987a91'
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]: Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/Xinetd::Service[galera-monitor]/File[/etc/xinetd.d/galera-monitor]/ensure: defined content as '{sha256}590421a08aeb8c31fbd521c49c271c93bc0ab53d170f5cc35e1f8de09fcb27be'
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]: Notice: Applied catalog in 0.03 seconds
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]: Application:
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:    Initial environment: production
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:    Converged environment: production
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:          Run mode: user
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]: Changes:
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:             Total: 4
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]: Events:
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:           Success: 4
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:             Total: 4
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]: Resources:
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:           Changed: 4
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:       Out of sync: 4
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:           Skipped: 9
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:             Total: 13
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]: Time:
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:              File: 0.02
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:    Transaction evaluation: 0.03
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:    Catalog application: 0.03
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:    Config retrieval: 0.12
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:          Last run: 1760363637
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:             Total: 0.03
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]: Version:
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:            Config: 1760363637
Oct 13 13:53:57 standalone.localdomain puppet-user[58751]:            Puppet: 7.10.0
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.78 seconds
Oct 13 13:53:58 standalone.localdomain systemd[1]: libpod-13c8cbdf23fb0e277cf2f17648f69ab57feecd870ecfb5ab703fd045af92caea.scope: Deactivated successfully.
Oct 13 13:53:58 standalone.localdomain systemd[1]: libpod-13c8cbdf23fb0e277cf2f17648f69ab57feecd870ecfb5ab703fd045af92caea.scope: Consumed 2.551s CPU time.
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.77 seconds
Oct 13 13:53:58 standalone.localdomain podman[58620]: 2025-10-13 13:53:58.103808584 +0000 UTC m=+2.870375327 container died 13c8cbdf23fb0e277cf2f17648f69ab57feecd870ecfb5ab703fd045af92caea (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=container-puppet-clustercheck, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'clustercheck', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::pacemaker::clustercheck'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, release=1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_id=tripleo_puppet_step1, container_name=container-puppet-clustercheck, maintainer=OpenStack TripleO Team, architecture=x86_64, 
build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, vendor=Red Hat, Inc., vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true, com.redhat.component=openstack-mariadb-container)
Oct 13 13:53:58 standalone.localdomain puppet-user[58752]: Warning: Unknown variable: 'ensure'. (file: /etc/puppet/modules/cinder/manifests/backup.pp, line: 94, column: 18)
Oct 13 13:53:58 standalone.localdomain podman[59366]: 2025-10-13 13:53:58.181800183 +0000 UTC m=+0.074555007 container cleanup 13c8cbdf23fb0e277cf2f17648f69ab57feecd870ecfb5ab703fd045af92caea (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=container-puppet-clustercheck, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, container_name=container-puppet-clustercheck, description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-mariadb, vcs-type=git, build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., release=1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'clustercheck', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::pacemaker::clustercheck'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Oct 13 13:53:58 standalone.localdomain systemd[1]: libpod-conmon-13c8cbdf23fb0e277cf2f17648f69ab57feecd870ecfb5ab703fd045af92caea.scope: Deactivated successfully.
Oct 13 13:53:58 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-clustercheck --conmon-pidfile /run/container-puppet-clustercheck.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=clustercheck --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::pacemaker::clustercheck --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-clustercheck --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'clustercheck', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::pacemaker::clustercheck'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-clustercheck.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Oct 13 13:53:58 standalone.localdomain puppet-user[58752]: Warning: Unknown variable: 'ensure'. (file: /etc/puppet/modules/cinder/manifests/volume.pp, line: 69, column: 18)
Oct 13 13:53:58 standalone.localdomain podman[59348]: 2025-10-13 13:53:58.220784192 +0000 UTC m=+0.150625718 container create 2ef790e6a8e9a3fe6dd46fd9b770d8d2bcb1e89f9dc08e5c32df29c47a4ac6c5 (image=registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1, name=container-puppet-haproxy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, container_name=container-puppet-haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, config_id=tripleo_puppet_step1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,haproxy_config', 'NAME': 'haproxy', 'STEP_CONFIG': "include ::tripleo::packages\nexec {'wait-for-settle': command => '/bin/true' }\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::pacemaker::haproxy_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-07-21T13:08:11, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, com.redhat.component=openstack-haproxy-container, tcib_managed=true, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc.)
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_host]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_port]/ensure: created
Oct 13 13:53:58 standalone.localdomain systemd[1]: Started libpod-conmon-2ef790e6a8e9a3fe6dd46fd9b770d8d2bcb1e89f9dc08e5c32df29c47a4ac6c5.scope.
Oct 13 13:53:58 standalone.localdomain podman[59348]: 2025-10-13 13:53:58.160799433 +0000 UTC m=+0.090640989 image pull  registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1
Oct 13 13:53:58 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:53:58 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0a15d821c5264bbbef5ba19d0556f6e72bc0e0289105b378b947c98fd86551/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/workers]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_host]/ensure: created
Oct 13 13:53:58 standalone.localdomain podman[59348]: 2025-10-13 13:53:58.280009138 +0000 UTC m=+0.209850704 container init 2ef790e6a8e9a3fe6dd46fd9b770d8d2bcb1e89f9dc08e5c32df29c47a4ac6c5 (image=registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1, name=container-puppet-haproxy, vendor=Red Hat, Inc., release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-haproxy, tcib_managed=true, build-date=2025-07-21T13:08:11, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, container_name=container-puppet-haproxy, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,haproxy_config', 'NAME': 'haproxy', 'STEP_CONFIG': "include ::tripleo::packages\nexec {'wait-for-settle': command => '/bin/true' }\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::pacemaker::haproxy_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.component=openstack-haproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, config_id=tripleo_puppet_step1, architecture=x86_64, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1)
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_image_direct_url]/ensure: created
Oct 13 13:53:58 standalone.localdomain podman[59348]: 2025-10-13 13:53:58.286584156 +0000 UTC m=+0.216425692 container start 2ef790e6a8e9a3fe6dd46fd9b770d8d2bcb1e89f9dc08e5c32df29c47a4ac6c5 (image=registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1, name=container-puppet-haproxy, vendor=Red Hat, Inc., vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, com.redhat.component=openstack-haproxy-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, release=1, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,haproxy_config', 'NAME': 'haproxy', 'STEP_CONFIG': "include ::tripleo::packages\nexec {'wait-for-settle': command => '/bin/true' }\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::pacemaker::haproxy_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, container_name=container-puppet-haproxy, build-date=2025-07-21T13:08:11, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team)
Oct 13 13:53:58 standalone.localdomain podman[59348]: 2025-10-13 13:53:58.287854614 +0000 UTC m=+0.217696180 container attach 2ef790e6a8e9a3fe6dd46fd9b770d8d2bcb1e89f9dc08e5c32df29c47a4ac6c5 (image=registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1, name=container-puppet-haproxy, batch=17.1_20250721.1, name=rhosp17/openstack-haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, tcib_managed=true, release=1, config_id=tripleo_puppet_step1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, description=Red Hat OpenStack Platform 17.1 haproxy, container_name=container-puppet-haproxy, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,haproxy_config', 'NAME': 'haproxy', 'STEP_CONFIG': "include ::tripleo::packages\nexec {'wait-for-settle': command => '/bin/true' }\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String 
$val| { noop_resource($val) }\ninclude tripleo::profile::pacemaker::haproxy_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-haproxy-container)
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_port]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/workers]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_image_direct_url]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_cache_dir]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_cache_dir]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enabled_import_methods]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enabled_import_methods]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/node_staging_uri]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_member_quota]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/node_staging_uri]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_multiple_locations]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_member_quota]/ensure: created
Oct 13 13:53:58 standalone.localdomain ceph-mon[29756]: pgmap v527: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_multiple_locations]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enabled_backends]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/default_backend]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: Warning: Scope(Apache::Vhost[barbican_wsgi_main]):
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]:     It is possible for the $name parameter to be defined with spaces in it. Although supported on POSIX systems, this
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]:     can lead to cumbersome file names. The $servername attribute has stricter conditions from Apache (i.e. no spaces)
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]:     When $use_servername_for_filenames = true, the $servername parameter, sanitized, is used to construct log and config
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]:     file names.
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: 
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]:     From version v7.0.0 of the puppetlabs-apache module, this parameter will default to true. From version v8.0.0 of the
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]:     module, the $use_servername_for_filenames will be removed and log/config file names will be derived from the
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]:     sanitized $servername parameter when not explicitly defined.
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enabled_backends]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/default_backend]/ensure: created
Oct 13 13:53:58 standalone.localdomain podman[59452]: 2025-10-13 13:53:58.567832852 +0000 UTC m=+0.083529937 container create b208739d05df384bfccedbbf3c0b2bfb59c3b46c72979b34c93d345e59091a91 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=container-puppet-heat_api, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line,heat_api_paste_ini', 'NAME': 'heat_api', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::api\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-heat-api, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, 
build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=container-puppet-heat_api, summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530)
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/image_cache_dir]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_image_import_config[image_import_opts/image_import_plugins]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_image_import_config[image_conversion/output_format]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_image_import_config[inject_metadata_properties/ignore_user_roles]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_api_config[paste_deploy/flavor]/ensure: created
Oct 13 13:53:58 standalone.localdomain systemd[1]: Started libpod-conmon-b208739d05df384bfccedbbf3c0b2bfb59c3b46c72979b34c93d345e59091a91.scope.
Oct 13 13:53:58 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/image_cache_dir]/ensure: created
Oct 13 13:53:58 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/333396cca4f2fb744fdc3f640c53a832d015dd9da8cc6511785f638d57d7bd1c/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_image_import_config[image_import_opts/image_import_plugins]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_image_import_config[image_conversion/output_format]/ensure: created
Oct 13 13:53:58 standalone.localdomain podman[59452]: 2025-10-13 13:53:58.616651467 +0000 UTC m=+0.132348552 container init b208739d05df384bfccedbbf3c0b2bfb59c3b46c72979b34c93d345e59091a91 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=container-puppet-heat_api, config_id=tripleo_puppet_step1, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line,heat_api_paste_ini', 'NAME': 'heat_api', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::api\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, io.openshift.tags=rhosp osp openstack 
osp-17.1, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, release=1, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=container-puppet-heat_api, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_api_config[key_manager/backend]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_image_import_config[inject_metadata_properties/ignore_user_roles]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_api_config[barbican/barbican_endpoint]/ensure: created
Oct 13 13:53:58 standalone.localdomain podman[59452]: 2025-10-13 13:53:58.626905033 +0000 UTC m=+0.142602118 container start b208739d05df384bfccedbbf3c0b2bfb59c3b46c72979b34c93d345e59091a91 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=container-puppet-heat_api, build-date=2025-07-21T15:56:26, batch=17.1_20250721.1, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_puppet_step1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line,heat_api_paste_ini', 'NAME': 'heat_api', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::api\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-heat-api-container, container_name=container-puppet-heat_api, vendor=Red Hat, Inc.)
Oct 13 13:53:58 standalone.localdomain podman[59452]: 2025-10-13 13:53:58.627086929 +0000 UTC m=+0.142784014 container attach b208739d05df384bfccedbbf3c0b2bfb59c3b46c72979b34c93d345e59091a91 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=container-puppet-heat_api, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=container-puppet-heat_api, name=rhosp17/openstack-heat-api, com.redhat.component=openstack-heat-api-container, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, architecture=x86_64, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line,heat_api_paste_ini', 'NAME': 'heat_api', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::api\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, batch=17.1_20250721.1)
Oct 13 13:53:58 standalone.localdomain puppet-user[58752]: Warning: Unknown variable: '::pacemaker::pcs_010'. (file: /etc/puppet/modules/pacemaker/manifests/resource/bundle.pp, line: 159, column: 6)
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_api_config[paste_deploy/flavor]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Glance_api_config[barbican/auth_endpoint]/ensure: created
Oct 13 13:53:58 standalone.localdomain podman[59452]: 2025-10-13 13:53:58.533588825 +0000 UTC m=+0.049285910 image pull  registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: Notice: Compiled catalog for standalone.localdomain in environment production in 1.28 seconds
Oct 13 13:53:58 standalone.localdomain crontab[59474]: (root) LIST (root)
Oct 13 13:53:58 standalone.localdomain crontab[59477]: (root) LIST (glance)
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Cron::Db_purge/Cron[glance-manage db purge]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_api_config[key_manager/backend]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58752]: Warning: Unknown variable: '::pacemaker::pcs_010'. (file: /etc/puppet/modules/pacemaker/manifests/resource/bundle.pp, line: 159, column: 6)
Oct 13 13:53:58 standalone.localdomain crontab[59480]: (root) REPLACE (glance)
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_api_config[barbican/barbican_endpoint]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Glance_api_config[barbican/auth_endpoint]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/www_authenticate_uri]/ensure: created
Oct 13 13:53:58 standalone.localdomain crontab[59486]: (root) LIST (root)
Oct 13 13:53:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f2f7a4d09cf0fc6d635961ac1a5e419617e8dfdf8c2aec5e2a272814a4290da2-merged.mount: Deactivated successfully.
Oct 13 13:53:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13c8cbdf23fb0e277cf2f17648f69ab57feecd870ecfb5ab703fd045af92caea-userdata-shm.mount: Deactivated successfully.
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_type]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58752]: Warning: Scope(Apache::Vhost[cinder_wsgi]):
Oct 13 13:53:58 standalone.localdomain puppet-user[58752]:     It is possible for the $name parameter to be defined with spaces in it. Although supported on POSIX systems, this
Oct 13 13:53:58 standalone.localdomain puppet-user[58752]:     can lead to cumbersome file names. The $servername attribute has stricter conditions from Apache (i.e. no spaces)
Oct 13 13:53:58 standalone.localdomain puppet-user[58752]:     When $use_servername_for_filenames = true, the $servername parameter, sanitized, is used to construct log and config
Oct 13 13:53:58 standalone.localdomain puppet-user[58752]:     file names.
Oct 13 13:53:58 standalone.localdomain puppet-user[58752]: 
Oct 13 13:53:58 standalone.localdomain puppet-user[58752]:     From version v7.0.0 of the puppetlabs-apache module, this parameter will default to true. From version v8.0.0 of the
Oct 13 13:53:58 standalone.localdomain puppet-user[58752]:     module, the $use_servername_for_filenames will be removed and log/config file names will be derived from the
Oct 13 13:53:58 standalone.localdomain puppet-user[58752]:     sanitized $servername parameter when not explicitly defined.
Oct 13 13:53:58 standalone.localdomain crontab[59487]: (root) LIST (glance)
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Cron::Db_purge/Cron[glance-manage db purge]/ensure: created
Oct 13 13:53:58 standalone.localdomain crontab[59488]: (root) REPLACE (glance)
Oct 13 13:53:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/www_authenticate_uri]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_type]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/memcache_use_advanced_pool]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/memcached_servers]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/region_name]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_url]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/username]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/password]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58752]: Notice: Compiled catalog for standalone.localdomain in environment production in 1.60 seconds
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/user_domain_name]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_name]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/memcache_use_advanced_pool]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_domain_name]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/memcached_servers]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/region_name]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Plugins::Simple_crypto/Barbican_config[secretstore:simple_crypto/secret_store_plugin]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Plugins::Simple_crypto/Barbican_config[secretstore:simple_crypto/crypto_plugin]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Plugins::Simple_crypto/Barbican_config[secretstore:simple_crypto/global_default]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Plugins::Simple_crypto/Barbican_config[simple_crypto_plugin/kek]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Db/Barbican_config[DEFAULT/sql_connection]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_url]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/interface]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api/File[/var/lib/barbican]/owner: owner changed 'barbican' to 'root'
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api/File[/var/lib/barbican]/mode: mode changed '0755' to '0770'
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api/Barbican_config[DEFAULT/bind_host]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/username]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api/Barbican_config[DEFAULT/bind_port]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api/Barbican_config[DEFAULT/host_href]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api/Barbican_config[queue/enable]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/password]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/user_domain_name]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/debug]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api/Barbican_config[certificate/enabled_certificate_plugins]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_name]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api/Barbican_config[secretstore/enable_multiple_secret_stores]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_file]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_domain_name]/ensure: created
Oct 13 13:53:58 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_dir]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api/Barbican_config[secretstore/stores_lookup_suffix]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api/Barbican_api_paste_ini[pipeline:barbican_api/pipeline]/value: value changed cors http_proxy_to_wsgi unauthenticated-context apiapp to cors authtoken context apiapp
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api/Barbican_config[DEFAULT/db_auto_create]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/interface]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Keystone::Notification/Barbican_config[keystone_notifications/enable]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Keystone::Notification/Barbican_config[keystone_notifications/topic]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{sha256}3416848459dfd1bd419fb071f68b2ea5d8e6e9867a76d5341dc8d9efed0948cb'
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/debug]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{sha256}ab8ffe3256e845dfb6a4c5088ae25445d4344a295858a1e3c2daa88f27527d4f'
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{sha256}847a6fcb41eb25248553082108cde5327c624189fe47009f65d11c3885cab78c'
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_file]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_dir]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Keystone::Authtoken/Keystone::Resource::Authtoken[barbican_config]/Barbican_config[keystone_authtoken/www_authenticate_uri]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Keystone::Authtoken/Keystone::Resource::Authtoken[barbican_config]/Barbican_config[keystone_authtoken/auth_type]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{sha256}3416848459dfd1bd419fb071f68b2ea5d8e6e9867a76d5341dc8d9efed0948cb'
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{sha256}ab8ffe3256e845dfb6a4c5088ae25445d4344a295858a1e3c2daa88f27527d4f'
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{sha256}847a6fcb41eb25248553082108cde5327c624189fe47009f65d11c3885cab78c'
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Keystone::Authtoken/Keystone::Resource::Authtoken[barbican_config]/Barbican_config[keystone_authtoken/memcache_use_advanced_pool]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Keystone::Authtoken/Keystone::Resource::Authtoken[barbican_config]/Barbican_config[keystone_authtoken/memcached_servers]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Keystone::Authtoken/Keystone::Resource::Authtoken[barbican_config]/Barbican_config[keystone_authtoken/region_name]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Keystone::Authtoken/Keystone::Resource::Authtoken[barbican_config]/Barbican_config[keystone_authtoken/auth_url]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Keystone::Authtoken/Keystone::Resource::Authtoken[barbican_config]/Barbican_config[keystone_authtoken/username]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Keystone::Authtoken/Keystone::Resource::Authtoken[barbican_config]/Barbican_config[keystone_authtoken/password]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Policy/Oslo::Policy[glance_api_config]/Glance_api_config[oslo_policy/policy_file]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Keystone::Authtoken/Keystone::Resource::Authtoken[barbican_config]/Barbican_config[keystone_authtoken/user_domain_name]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Keystone::Authtoken/Keystone::Resource::Authtoken[barbican_config]/Barbican_config[keystone_authtoken/project_name]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Keystone::Authtoken/Keystone::Resource::Authtoken[barbican_config]/Barbican_config[keystone_authtoken/project_domain_name]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Keystone::Authtoken/Keystone::Resource::Authtoken[barbican_config]/Barbican_config[keystone_authtoken/interface]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api::Db/Oslo::Db[glance_api_config]/Glance_api_config[database/connection]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Policy/Oslo::Policy[glance_api_config]/Glance_api_config[oslo_policy/policy_file]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Db/Oslo::Db[barbican_config]/Barbican_config[database/connection]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api::Db/Oslo::Db[glance_api_config]/Glance_api_config[database/connection]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Policy/Oslo::Policy[barbican_config]/Barbican_config[oslo_policy/policy_file]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api/Oslo::Messaging::Rabbit[barbican_config]/Barbican_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 13 13:53:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v528: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Oslo::Concurrency[glance_api_config]/Glance_api_config[oslo_concurrency/lock_path]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Api/Oslo::Middleware[glance_api_config]/Glance_api_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api/Oslo::Messaging::Default[barbican_config]/Barbican_config[DEFAULT/transport_url]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Tripleo::Profile::Base::Glance::Backend::Rbd/Glance::Backend::Multistore::Rbd[default_backend]/Glance_api_config[default_backend/rbd_store_ceph_conf]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Tripleo::Profile::Base::Glance::Backend::Rbd/Glance::Backend::Multistore::Rbd[default_backend]/Glance_api_config[default_backend/rbd_store_user]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api/Oslo::Messaging::Notifications[barbican_config]/Barbican_config[oslo_messaging_notifications/driver]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Tripleo::Profile::Base::Glance::Backend::Rbd/Glance::Backend::Multistore::Rbd[default_backend]/Glance_api_config[default_backend/rbd_store_pool]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api/Oslo::Messaging::Notifications[barbican_config]/Barbican_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Tripleo::Profile::Base::Glance::Backend::Rbd/Glance::Backend::Multistore::Rbd[default_backend]/Glance_api_config[default_backend/rbd_thin_provisioning]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api::Logging/Oslo::Log[barbican_config]/Barbican_config[DEFAULT/debug]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Oslo::Concurrency[glance_api_config]/Glance_api_config[oslo_concurrency/lock_path]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Tripleo::Profile::Base::Glance::Backend::Rbd/Glance::Backend::Multistore::Rbd[default_backend]/Glance_api_config[default_backend/store_description]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Api::Logging/Oslo::Log[barbican_config]/Barbican_config[DEFAULT/log_dir]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_api_config]/Glance_api_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Api/Oslo::Middleware[glance_api_config]/Glance_api_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{sha256}6287b5cb73a993ea294337d15b55daa28928d113cf111a35616ac88212b27915'
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{sha256}b8a7429cbef3ecabe9e4f331123adb372ecfa3e82e76bc33d6cce997b36874bb' to '{sha256}acb6231bff742206c144c5740288a1d6caf14b1704f7c1d0c98aca319c969540'
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{sha256}8dbb5887d99b1bd7e8e6700b2c3bcfebc3d6ce5fdb66b8504b224d99ce5981a7'
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{sha256}55fd1ffb0fbb31ed1635c6175b7904207ae53c25e37a8de928aeeb6efb2f21eb'
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{sha256}eb9bf7ff02774b28c59bc3cc355fe6bea4b7b1b6780453d078fb1558b2d714fd'
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{sha256}53f359b7deca28aff7c56ca0ac425ccb8323bc5121f64e4c5f04036898e6d866'
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Tripleo::Profile::Base::Glance::Backend::Rbd/Glance::Backend::Multistore::Rbd[default_backend]/Glance_api_config[default_backend/rbd_store_ceph_conf]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{sha256}ca2fe478af71981984e353dd168b51c9bc993005157b9bff497c9aa7a7125700'
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{sha256}197eae5f99bc425f01e493b3390d78b186be5364d81fc5e3a6df370be3c3f734'
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Tripleo::Profile::Base::Glance::Backend::Rbd/Glance::Backend::Multistore::Rbd[default_backend]/Glance_api_config[default_backend/rbd_store_user]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{sha256}8cbdbfcf32c28d41e5ca9206eea0e3be34dce45cff3a0c408ad2d23761560052'
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Tripleo::Profile::Base::Glance::Backend::Rbd/Glance::Backend::Multistore::Rbd[default_backend]/Glance_api_config[default_backend/rbd_store_pool]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{sha256}a6ff35715035af2d397f744cbd2023805fad6fd3dd17a10d225e497fcb7ac808'
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{sha256}2086e39dec178d39012a52700badd7b3cc6f2d97c06d197807e0cad8877e5f16'
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/ensure: defined content as '{sha256}4350f1dd81b2dcc0bf8903458871e8e60dfcb531dd3230247012f2fb48c5e22b'
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{sha256}88f04c415dbd1bf0d074965d37261e056d073b675a047a02e55222818640c6e8'
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Tripleo::Profile::Base::Glance::Backend::Rbd/Glance::Backend::Multistore::Rbd[default_backend]/Glance_api_config[default_backend/rbd_thin_provisioning]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{sha256}9feefdc48c65f8b73ab77f3fc813d60744dc97b336bbd60e16bbd763b99c5d66'
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Tripleo::Profile::Base::Glance::Backend::Rbd/Glance::Backend::Multistore::Rbd[default_backend]/Glance_api_config[default_backend/store_description]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/ssl.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Default[glance_api_config]/Glance_api_config[DEFAULT/transport_url]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{sha256}19cb9bd7248ea35b8e882d1d21458b114cfa18be60fb8acbf1eb5cc9cab1afb7'
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Wsgi::Apache/File[/etc/httpd/conf.d/barbican-api.conf]/ensure: defined content as '{sha256}eb80525234bbea58a5e24efba9b599f761648240b84fe8dda31e0cea5a4ae91c'
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{sha256}ca7e6bca762fed4f5860c5961f7d7873dfa06890a8dae109803984f2a57c857d'
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/driver]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Wsgi::Apache/Openstacklib::Wsgi::Apache[barbican_wsgi_main]/File[/var/www/cgi-bin/barbican]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_api_config]/Glance_api_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Wsgi::Apache/Openstacklib::Wsgi::Apache[barbican_wsgi_main]/File[barbican_wsgi_main]/ensure: defined content as '{sha256}aa0af9dcf64fdc301b82c80246a8df6a735052749f676abc2b64cf200edea6a6'
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{sha256}3906459aafe799c09305ffbfe0105de3fb9d05a4636cd93e6af9f82e10c8788b'
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Notice: Applied catalog in 1.46 seconds
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Application:
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:    Initial environment: production
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:    Converged environment: production
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:          Run mode: user
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Changes:
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:             Total: 51
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Events:
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:           Success: 51
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:             Total: 51
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Resources:
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:           Skipped: 25
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:           Changed: 51
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:       Out of sync: 51
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:             Total: 214
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Time:
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:            Anchor: 0.00
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:              File: 0.00
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:    Glance image import config: 0.01
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:    Glance cache config: 0.01
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:            Augeas: 0.02
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:              Cron: 0.02
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:           Package: 0.02
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:    Config retrieval: 0.88
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:    Glance api config: 1.28
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:    Transaction evaluation: 1.45
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:    Catalog application: 1.46
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:          Last run: 1760363639
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:         Resources: 0.00
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:             Total: 1.46
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]: Version:
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:            Config: 1760363637
Oct 13 13:53:59 standalone.localdomain puppet-user[58760]:            Puppet: 7.10.0
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{sha256}736d628e01f143a2d94f46af14446fe584d90a1a5dc68a9153e5c676f5888b15'
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-brotli.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-optional.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi-python3.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/README]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: /Stage[main]/Barbican::Wsgi::Apache/Openstacklib::Wsgi::Apache[barbican_wsgi_main]/Apache::Vhost[barbican_wsgi_main]/Concat[10-barbican_wsgi_main.conf]/File[/etc/httpd/conf.d/10-barbican_wsgi_main.conf]/ensure: defined content as '{sha256}cd481b39ce91758292c2fde92973e06bef1f9397b03e62e42ab88f1234ed9e37'
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Notice: Applied catalog in 0.84 seconds
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Application:
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:    Initial environment: production
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:    Converged environment: production
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:          Run mode: user
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Changes:
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:             Total: 82
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Events:
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:           Success: 82
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:             Total: 82
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Resources:
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:           Skipped: 31
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:           Changed: 81
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:       Out of sync: 81
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:             Total: 279
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Time:
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:       Concat file: 0.00
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:            Anchor: 0.00
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:    Concat fragment: 0.00
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:            Augeas: 0.02
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:    Barbican api paste ini: 0.02
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:           Package: 0.03
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:              File: 0.12
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:    Barbican config: 0.45
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:    Transaction evaluation: 0.83
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:    Catalog application: 0.84
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:    Config retrieval: 1.47
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:          Last run: 1760363639
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:             Total: 0.84
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]: Version:
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:            Config: 1760363637
Oct 13 13:53:59 standalone.localdomain puppet-user[58806]:            Puppet: 7.10.0
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Tripleo::Profile::Base::Lvm/Augeas[udev options in lvm.conf]/returns: executed successfully
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{sha256}53350997fb8fa05833a5a1b61654614ddf969bf5de2701590cf83a04e71f9d91'
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{sha256}b8a7429cbef3ecabe9e4f331123adb372ecfa3e82e76bc33d6cce997b36874bb' to '{sha256}acb6231bff742206c144c5740288a1d6caf14b1704f7c1d0c98aca319c969540'
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{sha256}8dbb5887d99b1bd7e8e6700b2c3bcfebc3d6ce5fdb66b8504b224d99ce5981a7'
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{sha256}55fd1ffb0fbb31ed1635c6175b7904207ae53c25e37a8de928aeeb6efb2f21eb'
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{sha256}eb9bf7ff02774b28c59bc3cc355fe6bea4b7b1b6780453d078fb1558b2d714fd'
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Default[glance_api_config]/Glance_api_config[DEFAULT/transport_url]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{sha256}53f359b7deca28aff7c56ca0ac425ccb8323bc5121f64e4c5f04036898e6d866'
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{sha256}ca2fe478af71981984e353dd168b51c9bc993005157b9bff497c9aa7a7125700'
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{sha256}197eae5f99bc425f01e493b3390d78b186be5364d81fc5e3a6df370be3c3f734'
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/driver]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{sha256}8cbdbfcf32c28d41e5ca9206eea0e3be34dce45cff3a0c408ad2d23761560052'
Oct 13 13:53:59 standalone.localdomain ceph-mon[29756]: pgmap v528: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{sha256}a6ff35715035af2d397f744cbd2023805fad6fd3dd17a10d225e497fcb7ac808'
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{sha256}2086e39dec178d39012a52700badd7b3cc6f2d97c06d197807e0cad8877e5f16'
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/ensure: defined content as '{sha256}4350f1dd81b2dcc0bf8903458871e8e60dfcb531dd3230247012f2fb48c5e22b'
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{sha256}88f04c415dbd1bf0d074965d37261e056d073b675a047a02e55222818640c6e8'
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Notice: Applied catalog in 1.58 seconds
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Application:
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:    Initial environment: production
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:    Converged environment: production
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:          Run mode: user
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Changes:
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:             Total: 51
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Events:
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:           Success: 51
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:             Total: 51
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Resources:
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:           Skipped: 25
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:           Changed: 51
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:       Out of sync: 51
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:             Total: 214
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Time:
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:            Anchor: 0.00
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:              File: 0.00
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:    Glance image import config: 0.01
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:              Cron: 0.01
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:    Glance cache config: 0.01
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:            Augeas: 0.02
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:           Package: 0.02
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:    Config retrieval: 0.86
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:    Glance api config: 1.39
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:    Transaction evaluation: 1.57
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:    Catalog application: 1.58
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:          Last run: 1760363639
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:         Resources: 0.00
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:             Total: 1.58
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]: Version:
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:            Config: 1760363637
Oct 13 13:53:59 standalone.localdomain puppet-user[58795]:            Puppet: 7.10.0
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{sha256}9feefdc48c65f8b73ab77f3fc813d60744dc97b336bbd60e16bbd763b99c5d66'
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/ssl.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{sha256}19cb9bd7248ea35b8e882d1d21458b114cfa18be60fb8acbf1eb5cc9cab1afb7'
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{sha256}ca7e6bca762fed4f5860c5961f7d7873dfa06890a8dae109803984f2a57c857d'
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[59421]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder/Cinder_config[key_manager/backend]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder/Cinder_config[barbican/barbican_endpoint]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder/Cinder_config[barbican/auth_endpoint]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Nova/Cinder_config[nova/region_name]/ensure: created
Oct 13 13:53:59 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Nova/Cinder_config[nova/interface]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Nova/Cinder_config[nova/auth_type]/ensure: created
Oct 13 13:54:00 standalone.localdomain systemd[1]: libpod-a1f0330d550b2d6044ac233ed638fe82a5fa02cbfb4d5d5b9b6760c1e472335a.scope: Deactivated successfully.
Oct 13 13:54:00 standalone.localdomain systemd[1]: libpod-a1f0330d550b2d6044ac233ed638fe82a5fa02cbfb4d5d5b9b6760c1e472335a.scope: Consumed 4.348s CPU time.
Oct 13 13:54:00 standalone.localdomain systemd[1]: libpod-848a68670d9e27ef81c92311480d2157a355ca0b1e46f3525a85617af4f7db66.scope: Deactivated successfully.
Oct 13 13:54:00 standalone.localdomain systemd[1]: libpod-848a68670d9e27ef81c92311480d2157a355ca0b1e46f3525a85617af4f7db66.scope: Consumed 4.518s CPU time.
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Nova/Cinder_config[nova/auth_url]/ensure: created
Oct 13 13:54:00 standalone.localdomain podman[58606]: 2025-10-13 13:54:00.166186124 +0000 UTC m=+4.948604413 container died 848a68670d9e27ef81c92311480d2157a355ca0b1e46f3525a85617af4f7db66 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=container-puppet-glance_api, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config', 'NAME': 'glance_api', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::glance::api\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, distribution-scope=public, container_name=container-puppet-glance_api, vcs-type=git, com.redhat.component=openstack-glance-api-container, batch=17.1_20250721.1, version=17.1.9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_id=tripleo_puppet_step1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Nova/Cinder_config[nova/username]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Nova/Cinder_config[nova/user_domain_name]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Nova/Cinder_config[nova/password]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Nova/Cinder_config[nova/project_name]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Nova/Cinder_config[nova/project_domain_name]/ensure: created
Oct 13 13:54:00 standalone.localdomain podman[58636]: 2025-10-13 13:54:00.217122062 +0000 UTC m=+4.966764368 container died a1f0330d550b2d6044ac233ed638fe82a5fa02cbfb4d5d5b9b6760c1e472335a (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=container-puppet-barbican, build-date=2025-07-21T15:22:44, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,barbican_api_paste_ini,barbican_config', 'NAME': 'barbican', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::barbican::api\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-barbican-api-container, name=rhosp17/openstack-barbican-api, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=container-puppet-barbican, release=1, batch=17.1_20250721.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 13:54:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-848a68670d9e27ef81c92311480d2157a355ca0b1e46f3525a85617af4f7db66-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db193506eaeea5bc352278b61a9192c8541e438157f80fdad5fbfa7bd832e228-merged.mount: Deactivated successfully.
Oct 13 13:54:00 standalone.localdomain crontab[59637]: (root) LIST (root)
Oct 13 13:54:00 standalone.localdomain crontab[59638]: (root) LIST (cinder)
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]/ensure: created
Oct 13 13:54:00 standalone.localdomain systemd[1]: libpod-82f315f8d78a88d5e6f207d8087ac567c6fc668af9126f55ca61b69f7cd349ae.scope: Deactivated successfully.
Oct 13 13:54:00 standalone.localdomain systemd[1]: libpod-82f315f8d78a88d5e6f207d8087ac567c6fc668af9126f55ca61b69f7cd349ae.scope: Consumed 4.634s CPU time.
Oct 13 13:54:00 standalone.localdomain crontab[59640]: (root) REPLACE (cinder)
Oct 13 13:54:00 standalone.localdomain podman[58679]: 2025-10-13 13:54:00.350718489 +0000 UTC m=+5.061102327 container died 82f315f8d78a88d5e6f207d8087ac567c6fc668af9126f55ca61b69f7cd349ae (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=container-puppet-glance_api_internal, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=container-puppet-glance_api_internal, com.redhat.component=openstack-glance-api-container, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config', 'NAME': 'glance_api_internal', 'STEP_CONFIG': "include ::tripleo::packages\nclass { 'tripleo::profile::base::glance::api':\n  bind_port => 9293,\n  tls_proxy_port => 9293,\n  log_file => '/var/log/glance/api_internal.log',\n  show_image_direct_url => true,\n  show_multiple_locations => true,\n}\n\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, name=rhosp17/openstack-glance-api, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, distribution-scope=public)
Oct 13 13:54:00 standalone.localdomain podman[59594]: 2025-10-13 13:54:00.352263315 +0000 UTC m=+0.177141974 container cleanup 848a68670d9e27ef81c92311480d2157a355ca0b1e46f3525a85617af4f7db66 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=container-puppet-glance_api, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, version=17.1.9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config', 'NAME': 'glance_api', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::glance::api\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, distribution-scope=public, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=container-puppet-glance_api, architecture=x86_64)
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_listen]/ensure: created
Oct 13 13:54:00 standalone.localdomain systemd[1]: libpod-conmon-848a68670d9e27ef81c92311480d2157a355ca0b1e46f3525a85617af4f7db66.scope: Deactivated successfully.
Oct 13 13:54:00 standalone.localdomain podman[59593]: 2025-10-13 13:54:00.385889404 +0000 UTC m=+0.209440244 container cleanup a1f0330d550b2d6044ac233ed638fe82a5fa02cbfb4d5d5b9b6760c1e472335a (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=container-puppet-barbican, architecture=x86_64, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, container_name=container-puppet-barbican, description=Red Hat OpenStack Platform 17.1 barbican-api, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, com.redhat.component=openstack-barbican-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.openshift.expose-services=, name=rhosp17/openstack-barbican-api, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,barbican_api_paste_ini,barbican_config', 'NAME': 'barbican', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::barbican::api\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.33.12, vcs-type=git, release=1, batch=17.1_20250721.1, build-date=2025-07-21T15:22:44, managed_by=tripleo_ansible, tcib_managed=true)
Oct 13 13:54:00 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-barbican --conmon-pidfile /run/container-puppet-barbican.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,barbican_api_paste_ini,barbican_config --env NAME=barbican --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::barbican::api
                                                       include tripleo::profile::base::database::mysql::client --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-barbican --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,barbican_api_paste_ini,barbican_config', 'NAME': 'barbican', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::barbican::api\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-barbican.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1
Oct 13 13:54:00 standalone.localdomain systemd[1]: libpod-conmon-a1f0330d550b2d6044ac233ed638fe82a5fa02cbfb4d5d5b9b6760c1e472335a.scope: Deactivated successfully.
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_workers]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/default_volume_type]/ensure: created
Oct 13 13:54:00 standalone.localdomain ovs-vsctl[59674]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Oct 13 13:54:00 standalone.localdomain podman[59649]: 2025-10-13 13:54:00.461901804 +0000 UTC m=+0.091711521 container cleanup 82f315f8d78a88d5e6f207d8087ac567c6fc668af9126f55ca61b69f7cd349ae (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=container-puppet-glance_api_internal, build-date=2025-07-21T13:58:20, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config', 'NAME': 'glance_api_internal', 'STEP_CONFIG': "include ::tripleo::packages\nclass { 'tripleo::profile::base::glance::api':\n  bind_port => 9293,\n  tls_proxy_port => 9293,\n  log_file => '/var/log/glance/api_internal.log',\n  show_image_direct_url => true,\n  show_multiple_locations => true,\n}\n\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, container_name=container-puppet-glance_api_internal, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/auth_strategy]/ensure: created
Oct 13 13:54:00 standalone.localdomain systemd[1]: libpod-conmon-82f315f8d78a88d5e6f207d8087ac567c6fc668af9126f55ca61b69f7cd349ae.scope: Deactivated successfully.
Oct 13 13:54:00 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-glance_api_internal --conmon-pidfile /run/container-puppet-glance_api_internal.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config --env NAME=glance_api_internal --env STEP_CONFIG=include ::tripleo::packages
                                                       class { 'tripleo::profile::base::glance::api':
                                                         bind_port => 9293,
                                                         tls_proxy_port => 9293,
                                                         log_file => '/var/log/glance/api_internal.log',
                                                         show_image_direct_url => true,
                                                         show_multiple_locations => true,
                                                       }
                                                       
                                                       include tripleo::profile::base::database::mysql::client --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-glance_api_internal --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config', 'NAME': 'glance_api_internal', 'STEP_CONFIG': "include ::tripleo::packages\nclass { 'tripleo::profile::base::glance::api':\n  bind_port => 9293,\n  tls_proxy_port => 9293,\n  log_file => '/var/log/glance/api_internal.log',\n  show_image_direct_url => true,\n  show_multiple_locations => true,\n}\n\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-glance_api_internal.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1
Oct 13 13:54:00 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-glance_api --conmon-pidfile /run/container-puppet-glance_api.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config --env NAME=glance_api --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::glance::api
                                                       include tripleo::profile::base::database::mysql::client --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-glance_api --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config,glance_image_import_config', 'NAME': 'glance_api', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::glance::api\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-glance_api.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_workers]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Backup/Cinder_config[DEFAULT/backup_max_operations]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_driver]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[59421]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:00 standalone.localdomain puppet-user[59421]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:00 standalone.localdomain puppet-user[59421]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:00 standalone.localdomain puppet-user[59421]:    (file & line not available)
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_conf]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_user]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_chunk_size]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[59485]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:00 standalone.localdomain puppet-user[59485]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:00 standalone.localdomain puppet-user[59485]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:00 standalone.localdomain puppet-user[59485]:    (file & line not available)
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_pool]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_unit]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_count]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[59485]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:00 standalone.localdomain puppet-user[59485]:    (file & line not available)
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Scheduler/Cinder_config[DEFAULT/scheduler_driver]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[59421]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:00 standalone.localdomain puppet-user[59421]:    (file & line not available)
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[59485]: Warning: Scope(Class[Heat]): The database_connection parameter is deprecated and will be \
Oct 13 13:54:00 standalone.localdomain puppet-user[59485]: removed in a future realse. Use heat::db::database_connection instead
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 13 13:54:00 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Oct 13 13:54:00 standalone.localdomain podman[59905]: 2025-10-13 13:54:00.871710767 +0000 UTC m=+0.073547508 container create 1196a293686b80c8120eb352633d509fd3b9dac6f4b0bb13f0791dda7da389a2 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=container-puppet-heat_api_cfn, architecture=x86_64, release=1, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T14:49:55, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', 'NAME': 'heat_api_cfn', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::api_cfn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, container_name=container-puppet-heat_api_cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 
heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_id=tripleo_puppet_step1, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-heat-api-cfn-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-type=git)
Oct 13 13:54:00 standalone.localdomain systemd[1]: Started libpod-conmon-1196a293686b80c8120eb352633d509fd3b9dac6f4b0bb13f0791dda7da389a2.scope.
Oct 13 13:54:00 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:00 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6ca37cfa9f0e5053b4b5f65562ad93b9cc6ac5c10b8f2ac2713ae975ff1e87/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:00 standalone.localdomain podman[59905]: 2025-10-13 13:54:00.830635604 +0000 UTC m=+0.032472355 image pull  registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1
Oct 13 13:54:00 standalone.localdomain podman[59905]: 2025-10-13 13:54:00.941271972 +0000 UTC m=+0.143108713 container init 1196a293686b80c8120eb352633d509fd3b9dac6f4b0bb13f0791dda7da389a2 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=container-puppet-heat_api_cfn, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:49:55, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', 'NAME': 'heat_api_cfn', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::api_cfn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-cfn-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=container-puppet-heat_api_cfn, version=17.1.9, name=rhosp17/openstack-heat-api-cfn)
Oct 13 13:54:00 standalone.localdomain podman[59905]: 2025-10-13 13:54:00.953700245 +0000 UTC m=+0.155536986 container start 1196a293686b80c8120eb352633d509fd3b9dac6f4b0bb13f0791dda7da389a2 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=container-puppet-heat_api_cfn, container_name=container-puppet-heat_api_cfn, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-cfn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api-cfn, architecture=x86_64, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', 'NAME': 'heat_api_cfn', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::api_cfn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T14:49:55, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 13:54:00 standalone.localdomain podman[59905]: 2025-10-13 13:54:00.953950393 +0000 UTC m=+0.155787164 container attach 1196a293686b80c8120eb352633d509fd3b9dac6f4b0bb13f0791dda7da389a2 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=container-puppet-heat_api_cfn, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-heat-api-cfn-container, config_id=tripleo_puppet_step1, container_name=container-puppet-heat_api_cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api-cfn, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', 'NAME': 'heat_api_cfn', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::api_cfn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, build-date=2025-07-21T14:49:55, architecture=x86_64, vendor=Red Hat, Inc.)
Oct 13 13:54:00 standalone.localdomain podman[59949]: 2025-10-13 13:54:00.962017535 +0000 UTC m=+0.071092774 container create f2436a379d8f08d628e1b8357f6fcb1654c8cddbc4a9222c3d173cdff3a680a3 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=container-puppet-heat, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', 'NAME': 'heat', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::engine\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, 
architecture=x86_64, name=rhosp17/openstack-heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, distribution-scope=public, container_name=container-puppet-heat, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, com.redhat.component=openstack-heat-api-container, config_id=tripleo_puppet_step1, batch=17.1_20250721.1, io.buildah.version=1.33.12)
Oct 13 13:54:00 standalone.localdomain systemd[1]: Started libpod-conmon-f2436a379d8f08d628e1b8357f6fcb1654c8cddbc4a9222c3d173cdff3a680a3.scope.
Oct 13 13:54:01 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:01 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7861ba3a2d0fa6c88e81d0b962398561746b4f0944affb5e220c19f62f3ba4ad/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:01 standalone.localdomain podman[59949]: 2025-10-13 13:54:01.02652423 +0000 UTC m=+0.135599469 container init f2436a379d8f08d628e1b8357f6fcb1654c8cddbc4a9222c3d173cdff3a680a3 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=container-puppet-heat, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, container_name=container-puppet-heat, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-heat-api, io.buildah.version=1.33.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', 'NAME': 'heat', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::engine\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Oct 13 13:54:01 standalone.localdomain podman[59949]: 2025-10-13 13:54:00.927310003 +0000 UTC m=+0.036385272 image pull  registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1
Oct 13 13:54:01 standalone.localdomain podman[59949]: 2025-10-13 13:54:01.034655814 +0000 UTC m=+0.143731063 container start f2436a379d8f08d628e1b8357f6fcb1654c8cddbc4a9222c3d173cdff3a680a3 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=container-puppet-heat, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', 'NAME': 'heat', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::engine\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 heat-api, 
io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, name=rhosp17/openstack-heat-api, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=container-puppet-heat, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, com.redhat.component=openstack-heat-api-container)
Oct 13 13:54:01 standalone.localdomain podman[59949]: 2025-10-13 13:54:01.035304243 +0000 UTC m=+0.144379502 container attach f2436a379d8f08d628e1b8357f6fcb1654c8cddbc4a9222c3d173cdff3a680a3 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=container-puppet-heat, summary=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T15:56:26, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, container_name=container-puppet-heat, description=Red Hat OpenStack Platform 17.1 heat-api, release=1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, tcib_managed=true, com.redhat.component=openstack-heat-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-heat-api, batch=17.1_20250721.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', 'NAME': 'heat', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::engine\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9)
Oct 13 13:54:01 standalone.localdomain podman[59959]: 2025-10-13 13:54:01.053881161 +0000 UTC m=+0.129573649 container create 132c290807346c1ea1944bc4c59a9fc1d6f32f082883814680aeaf3701e3f731 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=container-puppet-horizon, build-date=2025-07-21T13:58:15, com.redhat.component=openstack-horizon-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 horizon, vendor=Red Hat, Inc., vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, container_name=container-puppet-horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 horizon, release=1, io.buildah.version=1.33.12, vcs-type=git, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-horizon, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,horizon_config', 'NAME': 'horizon', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::horizon\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 13:54:01 standalone.localdomain systemd[1]: Started libpod-conmon-132c290807346c1ea1944bc4c59a9fc1d6f32f082883814680aeaf3701e3f731.scope.
Oct 13 13:54:01 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:01 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b385782410aae5c113d9546c3022ac0ff632d6109f408ddd4832d9b2db2c7c16/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]: Warning: Unknown variable: '::pacemaker::pcs_010'. (file: /etc/puppet/modules/pacemaker/manifests/resource/bundle.pp, line: 159, column: 6)
Oct 13 13:54:01 standalone.localdomain podman[59959]: 2025-10-13 13:54:01.018220801 +0000 UTC m=+0.093913309 image pull  registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1
Oct 13 13:54:01 standalone.localdomain podman[59959]: 2025-10-13 13:54:01.123103566 +0000 UTC m=+0.198796074 container init 132c290807346c1ea1944bc4c59a9fc1d6f32f082883814680aeaf3701e3f731 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=container-puppet-horizon, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,horizon_config', 'NAME': 'horizon', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::horizon\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 horizon, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, tcib_managed=true, container_name=container-puppet-horizon, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, version=17.1.9, 
name=rhosp17/openstack-horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 horizon, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:15, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, architecture=x86_64, release=1, vcs-type=git, com.redhat.component=openstack-horizon-container)
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]: Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications.
Oct 13 13:54:01 standalone.localdomain podman[59959]: 2025-10-13 13:54:01.130755636 +0000 UTC m=+0.206448124 container start 132c290807346c1ea1944bc4c59a9fc1d6f32f082883814680aeaf3701e3f731 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=container-puppet-horizon, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, container_name=container-puppet-horizon, tcib_managed=true, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, config_id=tripleo_puppet_step1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, description=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-horizon-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,horizon_config', 'NAME': 'horizon', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::horizon\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:15, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:54:01 standalone.localdomain podman[59959]: 2025-10-13 13:54:01.131047615 +0000 UTC m=+0.206740123 container attach 132c290807346c1ea1944bc4c59a9fc1d6f32f082883814680aeaf3701e3f731 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=container-puppet-horizon, container_name=container-puppet-horizon, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, name=rhosp17/openstack-horizon, distribution-scope=public, config_id=tripleo_puppet_step1, tcib_managed=true, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, com.redhat.component=openstack-horizon-container, build-date=2025-07-21T13:58:15, summary=Red Hat OpenStack Platform 17.1 horizon, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,horizon_config', 'NAME': 'horizon', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::horizon\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1)
Oct 13 13:54:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7237bb817dfe05d48f2606885847d1490e7fbe49ddf4120a0989298e8583c1c8-merged.mount: Deactivated successfully.
Oct 13 13:54:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82f315f8d78a88d5e6f207d8087ac567c6fc668af9126f55ca61b69f7cd349ae-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-59dd28e840ca55d017be8e7da1cf03b089ee3ce35fdcc06fb7e7fd2d34874879-merged.mount: Deactivated successfully.
Oct 13 13:54:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a1f0330d550b2d6044ac233ed638fe82a5fa02cbfb4d5d5b9b6760c1e472335a-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/rpc_response_timeout]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]/ensure: created
Oct 13 13:54:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v529: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.78 seconds
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]: Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content: content changed '{sha256}8afc9a0bcc462f08af54b6ac1cbfc3b8343b1feee00b4ab07d7a8c7b47065f0b' to '{sha256}258c836c5dad1cff2b47968af5aad948d8ea117f06d43c70cb5a85e29193d0b6'
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]: Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/mode: mode changed '0644' to '0640'
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Service_user/Keystone::Resource::Service_user[cinder_config]/Cinder_config[service_user/auth_type]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]: Notice: Applied catalog in 0.16 seconds
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]: Application:
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:    Initial environment: production
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:    Converged environment: production
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:          Run mode: user
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]: Changes:
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:             Total: 2
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]: Events:
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:           Success: 2
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:             Total: 2
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]: Resources:
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:           Changed: 1
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:       Out of sync: 1
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:           Skipped: 29
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:             Total: 69
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]: Time:
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:       Concat file: 0.00
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:    Concat fragment: 0.00
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:              File: 0.04
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:    Transaction evaluation: 0.15
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:    Catalog application: 0.16
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:    Config retrieval: 0.86
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:          Last run: 1760363641
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:             Total: 0.16
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]: Version:
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:            Config: 1760363640
Oct 13 13:54:01 standalone.localdomain puppet-user[59421]:            Puppet: 7.10.0
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Service_user/Keystone::Resource::Service_user[cinder_config]/Cinder_config[service_user/region_name]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Service_user/Keystone::Resource::Service_user[cinder_config]/Cinder_config[service_user/auth_url]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Service_user/Keystone::Resource::Service_user[cinder_config]/Cinder_config[service_user/username]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Service_user/Keystone::Resource::Service_user[cinder_config]/Cinder_config[service_user/password]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Service_user/Keystone::Resource::Service_user[cinder_config]/Cinder_config[service_user/user_domain_name]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Service_user/Keystone::Resource::Service_user[cinder_config]/Cinder_config[service_user/project_name]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Service_user/Keystone::Resource::Service_user[cinder_config]/Cinder_config[service_user/project_domain_name]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Service_user/Keystone::Resource::Service_user[cinder_config]/Cinder_config[service_user/send_service_user_token]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/www_authenticate_uri]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_type]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[59485]: Warning: Scope(Apache::Vhost[heat_api_wsgi]):
Oct 13 13:54:01 standalone.localdomain puppet-user[59485]:     It is possible for the $name parameter to be defined with spaces in it. Although supported on POSIX systems, this
Oct 13 13:54:01 standalone.localdomain puppet-user[59485]:     can lead to cumbersome file names. The $servername attribute has stricter conditions from Apache (i.e. no spaces)
Oct 13 13:54:01 standalone.localdomain puppet-user[59485]:     When $use_servername_for_filenames = true, the $servername parameter, sanitized, is used to construct log and config
Oct 13 13:54:01 standalone.localdomain puppet-user[59485]:     file names.
Oct 13 13:54:01 standalone.localdomain puppet-user[59485]: 
Oct 13 13:54:01 standalone.localdomain puppet-user[59485]:     From version v7.0.0 of the puppetlabs-apache module, this parameter will default to true. From version v8.0.0 of the
Oct 13 13:54:01 standalone.localdomain puppet-user[59485]:     module, the $use_servername_for_filenames will be removed and log/config file names will be derived from the
Oct 13 13:54:01 standalone.localdomain puppet-user[59485]:     sanitized $servername parameter when not explicitly defined.
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/memcache_use_advanced_pool]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[59485]: Notice: Compiled catalog for standalone.localdomain in environment production in 1.28 seconds
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/memcached_servers]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/region_name]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_url]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/username]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/password]/ensure: created
Oct 13 13:54:01 standalone.localdomain systemd[1]: libpod-2ef790e6a8e9a3fe6dd46fd9b770d8d2bcb1e89f9dc08e5c32df29c47a4ac6c5.scope: Deactivated successfully.
Oct 13 13:54:01 standalone.localdomain systemd[1]: libpod-2ef790e6a8e9a3fe6dd46fd9b770d8d2bcb1e89f9dc08e5c32df29c47a4ac6c5.scope: Consumed 3.444s CPU time.
Oct 13 13:54:01 standalone.localdomain podman[59348]: 2025-10-13 13:54:01.914170835 +0000 UTC m=+3.844012381 container died 2ef790e6a8e9a3fe6dd46fd9b770d8d2bcb1e89f9dc08e5c32df29c47a4ac6c5 (image=registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1, name=container-puppet-haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11, container_name=container-puppet-haproxy, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,haproxy_config', 'NAME': 'haproxy', 'STEP_CONFIG': "include ::tripleo::packages\nexec {'wait-for-settle': command => '/bin/true' }\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::pacemaker::haproxy_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-haproxy-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, name=rhosp17/openstack-haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9)
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/user_domain_name]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_name]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_domain_name]/ensure: created
Oct 13 13:54:01 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/interface]/ensure: created
Oct 13 13:54:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ef790e6a8e9a3fe6dd46fd9b770d8d2bcb1e89f9dc08e5c32df29c47a4ac6c5-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ca0a15d821c5264bbbef5ba19d0556f6e72bc0e0289105b378b947c98fd86551-merged.mount: Deactivated successfully.
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Policy/Oslo::Policy[cinder_config]/Cinder_config[oslo_policy/policy_file]/ensure: created
Oct 13 13:54:02 standalone.localdomain podman[60067]: 2025-10-13 13:54:02.034970568 +0000 UTC m=+0.109122064 container cleanup 2ef790e6a8e9a3fe6dd46fd9b770d8d2bcb1e89f9dc08e5c32df29c47a4ac6c5 (image=registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1, name=container-puppet-haproxy, release=1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, name=rhosp17/openstack-haproxy, build-date=2025-07-21T13:08:11, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=container-puppet-haproxy, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,haproxy_config', 'NAME': 'haproxy', 'STEP_CONFIG': "include ::tripleo::packages\nexec {'wait-for-settle': command => '/bin/true' }\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::pacemaker::haproxy_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-haproxy-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 haproxy, managed_by=tripleo_ansible, io.buildah.version=1.33.12, architecture=x86_64)
Oct 13 13:54:02 standalone.localdomain systemd[1]: libpod-conmon-2ef790e6a8e9a3fe6dd46fd9b770d8d2bcb1e89f9dc08e5c32df29c47a4ac6c5.scope: Deactivated successfully.
Oct 13 13:54:02 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-haproxy --conmon-pidfile /run/container-puppet-haproxy.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,haproxy_config --env NAME=haproxy --env STEP_CONFIG=include ::tripleo::packages
                                                       exec {'wait-for-settle': command => '/bin/true' }
                                                       class tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}
                                                       ['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }
                                                       include tripleo::profile::pacemaker::haproxy_bundle --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-haproxy --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,haproxy_config', 'NAME': 'haproxy', 'STEP_CONFIG': "include ::tripleo::packages\nexec {'wait-for-settle': command => '/bin/true' }\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::pacemaker::haproxy_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver 
k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-haproxy.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-haproxy:17.1
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Api/Oslo::Middleware[cinder_config]/Cinder_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/File[/var/www/cgi-bin/cinder]/group: group changed 'root' to 'cinder'
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/File[cinder_wsgi]/ensure: defined content as '{sha256}4edb31cc3eee33c28f8a9c6c47aaad65265bca7f5a84782e4481666691daacea'
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_flatten_volume_from_snapshot]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin_password]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_user_domain_name]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Heat_config[trustee/auth_type]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Heat_config[trustee/auth_url]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/report_discard_supported]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Heat_config[trustee/username]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File[/etc/sysconfig/openstack-cinder-volume]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Heat_config[trustee/password]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Heat_config[trustee/project_domain_name]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Heat_config[trustee/user_domain_name]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{sha256}3906459aafe799c09305ffbfe0105de3fb9d05a4636cd93e6af9f82e10c8788b'
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{sha256}736d628e01f143a2d94f46af14446fe584d90a1a5dc68a9153e5c676f5888b15'
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Heat_config[DEFAULT/max_json_body_size]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Heat_config[DEFAULT/region_name_for_services]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-brotli.conf]/ensure: removed
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-optional.conf]/ensure: removed
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi-python3.conf]/ensure: removed
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/README]/ensure: removed
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Heat_config[ec2authtoken/auth_uri]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/Apache::Vhost[cinder_wsgi]/Concat[10-cinder_wsgi.conf]/File[/etc/httpd/conf.d/10-cinder_wsgi.conf]/ensure: defined content as '{sha256}aa8227a91b8c5c3edd5d4f7f9e4db8a6d8af7a689b4505fae4d3498c1cd571ef'
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Heat_config[yaql/limit_iterators]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Heat_config[yaql/memory_quota]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Notice: Applied catalog in 3.28 seconds
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Application:
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:    Initial environment: production
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:    Converged environment: production
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:          Run mode: user
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Changes:
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:             Total: 124
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Events:
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:           Success: 124
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:             Total: 124
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Resources:
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:           Changed: 124
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:       Out of sync: 124
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:           Skipped: 38
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:             Total: 379
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Time:
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:         Resources: 0.00
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:       Concat file: 0.00
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:         File line: 0.00
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:            Anchor: 0.00
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:    Concat fragment: 0.00
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:           Package: 0.03
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:              Cron: 0.06
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:              File: 0.10
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:            Augeas: 0.53
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:    Config retrieval: 1.76
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:          Last run: 1760363642
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:     Cinder config: 2.30
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:    Transaction evaluation: 3.27
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:    Catalog application: 3.28
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:             Total: 3.28
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]: Version:
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:            Config: 1760363637
Oct 13 13:54:02 standalone.localdomain puppet-user[58752]:            Puppet: 7.10.0
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Cache/Heat_config[resource_finder_cache/caching]/ensure: created
Oct 13 13:54:02 standalone.localdomain crontab[60135]: (root) LIST (root)
Oct 13 13:54:02 standalone.localdomain crontab[60137]: (root) LIST (heat)
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Cron::Purge_deleted/Cron[heat-manage purge_deleted]/ensure: created
Oct 13 13:54:02 standalone.localdomain crontab[60143]: (root) REPLACE (heat)
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Api/Heat_config[heat_api/bind_host]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{sha256}3416848459dfd1bd419fb071f68b2ea5d8e6e9867a76d5341dc8d9efed0948cb'
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{sha256}ab8ffe3256e845dfb6a4c5088ae25445d4344a295858a1e3c2daa88f27527d4f'
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{sha256}847a6fcb41eb25248553082108cde5327c624189fe47009f65d11c3885cab78c'
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/www_authenticate_uri]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_type]/ensure: created
Oct 13 13:54:02 standalone.localdomain ceph-mon[29756]: pgmap v529: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:02 standalone.localdomain podman[60146]: 2025-10-13 13:54:02.42614374 +0000 UTC m=+0.063695820 container create f961214072af6346858a06478203dc71c49226493903f5b19207b41ccc675d74 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, build-date=2025-07-21T13:27:15, tcib_managed=true, release=1, com.redhat.component=openstack-iscsid-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, config_id=tripleo_puppet_step1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, version=17.1.9)
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/memcache_use_advanced_pool]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/memcached_servers]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/region_name]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_url]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/username]/ensure: created
Oct 13 13:54:02 standalone.localdomain systemd[1]: Started libpod-conmon-f961214072af6346858a06478203dc71c49226493903f5b19207b41ccc675d74.scope.
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/password]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/user_domain_name]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_name]/ensure: created
Oct 13 13:54:02 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:02 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b84240793023d0060bc44a6d5c6a81ebdf8f4232b50de9763d8b1b4c56d59c81/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:02 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b84240793023d0060bc44a6d5c6a81ebdf8f4232b50de9763d8b1b4c56d59c81/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_domain_name]/ensure: created
Oct 13 13:54:02 standalone.localdomain podman[60146]: 2025-10-13 13:54:02.481365537 +0000 UTC m=+0.118917617 container init f961214072af6346858a06478203dc71c49226493903f5b19207b41ccc675d74 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, container_name=container-puppet-iscsid, version=17.1.9, tcib_managed=true, release=1, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:54:02 standalone.localdomain podman[60146]: 2025-10-13 13:54:02.491313136 +0000 UTC m=+0.128865216 container start f961214072af6346858a06478203dc71c49226493903f5b19207b41ccc675d74 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, release=1, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64)
Oct 13 13:54:02 standalone.localdomain podman[60146]: 2025-10-13 13:54:02.491607505 +0000 UTC m=+0.129159605 container attach f961214072af6346858a06478203dc71c49226493903f5b19207b41ccc675d74 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, build-date=2025-07-21T13:27:15, tcib_managed=true, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_puppet_step1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=container-puppet-iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:54:02 standalone.localdomain podman[60146]: 2025-10-13 13:54:02.394308306 +0000 UTC m=+0.031860436 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/interface]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/connection]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/max_retries]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/db_max_retries]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Oct 13 13:54:02 standalone.localdomain ovs-vsctl[60193]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/driver]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/rpc_response_timeout]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/transport_url]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59990]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:02 standalone.localdomain puppet-user[59990]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat/Oslo::Middleware[heat_config]/Heat_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59990]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:02 standalone.localdomain puppet-user[59990]:    (file & line not available)
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/expose_headers]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/max_age]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/allow_headers]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59990]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:02 standalone.localdomain puppet-user[59990]:    (file & line not available)
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/debug]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/log_dir]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Cache/Oslo::Cache[heat_config]/Heat_config[cache/backend]/ensure: created
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Cache/Oslo::Cache[heat_config]/Heat_config[cache/enabled]/ensure: created
Oct 13 13:54:02 standalone.localdomain systemd[1]: libpod-48939fe4969186d618d37decb411b78f51e8550ba212d5ea72284648b672804c.scope: Deactivated successfully.
Oct 13 13:54:02 standalone.localdomain systemd[1]: libpod-48939fe4969186d618d37decb411b78f51e8550ba212d5ea72284648b672804c.scope: Consumed 7.179s CPU time.
Oct 13 13:54:02 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Cache/Oslo::Cache[heat_config]/Heat_config[cache/memcache_servers]/ensure: created
Oct 13 13:54:02 standalone.localdomain podman[58613]: 2025-10-13 13:54:02.993189499 +0000 UTC m=+7.764534936 container died 48939fe4969186d618d37decb411b78f51e8550ba212d5ea72284648b672804c (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=container-puppet-cinder, name=rhosp17/openstack-cinder-api, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, build-date=2025-07-21T15:58:55, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, container_name=container-puppet-cinder, description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.component=openstack-cinder-api-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_api_paste_ini,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', 'NAME': 'cinder', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::cinder::api\ninclude tripleo::profile::base::database::mysql::client\n\ninclude tripleo::profile::base::cinder::backup::ceph\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::pacemaker::cinder::backup_bundle\ninclude tripleo::profile::base::cinder::scheduler\n\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::lvm\ninclude tripleo::profile::pacemaker::cinder::volume_bundle\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., batch=17.1_20250721.1)
Oct 13 13:54:03 standalone.localdomain ovs-vsctl[60304]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Oct 13 13:54:03 standalone.localdomain puppet-user[59990]: Warning: Scope(Class[Heat]): The database_connection parameter is deprecated and will be \
Oct 13 13:54:03 standalone.localdomain puppet-user[59990]: removed in a future release. Use heat::db::database_connection instead
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Cache/Oslo::Cache[heat_config]/Heat_config[cache/tls_enabled]/ensure: created
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Policy/Oslo::Policy[heat_config]/Heat_config[oslo_policy/policy_file]/ensure: created
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{sha256}4e7bfa267538f068da8a29b6e498a2584cee1a817aa01f7e085fc1e5fd81e88e'
Oct 13 13:54:03 standalone.localdomain podman[60292]: 2025-10-13 13:54:03.110647273 +0000 UTC m=+0.112012851 container cleanup 48939fe4969186d618d37decb411b78f51e8550ba212d5ea72284648b672804c (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=container-puppet-cinder, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_api_paste_ini,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', 'NAME': 'cinder', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::cinder::api\ninclude tripleo::profile::base::database::mysql::client\n\ninclude tripleo::profile::base::cinder::backup::ceph\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::pacemaker::cinder::backup_bundle\ninclude tripleo::profile::base::cinder::scheduler\n\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::lvm\ninclude tripleo::profile::pacemaker::cinder::volume_bundle\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.33.12, container_name=container-puppet-cinder, architecture=x86_64, com.redhat.component=openstack-cinder-api-container, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T15:58:55, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-api, maintainer=OpenStack TripleO Team)
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{sha256}b8a7429cbef3ecabe9e4f331123adb372ecfa3e82e76bc33d6cce997b36874bb' to '{sha256}acb6231bff742206c144c5740288a1d6caf14b1704f7c1d0c98aca319c969540'
Oct 13 13:54:03 standalone.localdomain systemd[1]: libpod-conmon-48939fe4969186d618d37decb411b78f51e8550ba212d5ea72284648b672804c.scope: Deactivated successfully.
Oct 13 13:54:03 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-cinder --conmon-pidfile /run/container-puppet-cinder.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_api_paste_ini,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line --env NAME=cinder --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::cinder::api
                                                       include tripleo::profile::base::database::mysql::client
                                                       
                                                       include tripleo::profile::base::cinder::backup::ceph
                                                       include tripleo::profile::base::database::mysql::client
                                                       include tripleo::profile::pacemaker::cinder::backup_bundle
                                                       include tripleo::profile::base::cinder::scheduler
                                                       
                                                       include tripleo::profile::base::database::mysql::client
                                                       include tripleo::profile::base::lvm
                                                       include tripleo::profile::pacemaker::cinder::volume_bundle
                                                       include tripleo::profile::base::database::mysql::client --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-cinder --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_api_paste_ini,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', 'NAME': 'cinder', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::cinder::api\ninclude tripleo::profile::base::database::mysql::client\n\ninclude tripleo::profile::base::cinder::backup::ceph\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::pacemaker::cinder::backup_bundle\ninclude tripleo::profile::base::cinder::scheduler\n\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::lvm\ninclude tripleo::profile::pacemaker::cinder::volume_bundle\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-cinder.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{sha256}8dbb5887d99b1bd7e8e6700b2c3bcfebc3d6ce5fdb66b8504b224d99ce5981a7'
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{sha256}55fd1ffb0fbb31ed1635c6175b7904207ae53c25e37a8de928aeeb6efb2f21eb'
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{sha256}eb9bf7ff02774b28c59bc3cc355fe6bea4b7b1b6780453d078fb1558b2d714fd'
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{sha256}53f359b7deca28aff7c56ca0ac425ccb8323bc5121f64e4c5f04036898e6d866'
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{sha256}ca2fe478af71981984e353dd168b51c9bc993005157b9bff497c9aa7a7125700'
Oct 13 13:54:03 standalone.localdomain puppet-user[60007]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:03 standalone.localdomain puppet-user[60007]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:03 standalone.localdomain puppet-user[60007]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:03 standalone.localdomain puppet-user[60007]:    (file & line not available)
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{sha256}197eae5f99bc425f01e493b3390d78b186be5364d81fc5e3a6df370be3c3f734'
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{sha256}8cbdbfcf32c28d41e5ca9206eea0e3be34dce45cff3a0c408ad2d23761560052'
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{sha256}a6ff35715035af2d397f744cbd2023805fad6fd3dd17a10d225e497fcb7ac808'
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{sha256}2086e39dec178d39012a52700badd7b3cc6f2d97c06d197807e0cad8877e5f16'
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:    (file & line not available)
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/ensure: defined content as '{sha256}4350f1dd81b2dcc0bf8903458871e8e60dfcb531dd3230247012f2fb48c5e22b'
Oct 13 13:54:03 standalone.localdomain puppet-user[60007]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:03 standalone.localdomain puppet-user[60007]:    (file & line not available)
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{sha256}88f04c415dbd1bf0d074965d37261e056d073b675a047a02e55222818640c6e8'
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{sha256}9feefdc48c65f8b73ab77f3fc813d60744dc97b336bbd60e16bbd763b99c5d66'
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:    (file & line not available)
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/ssl.conf]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{sha256}19cb9bd7248ea35b8e882d1d21458b114cfa18be60fb8acbf1eb5cc9cab1afb7'
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{sha256}ca7e6bca762fed4f5860c5961f7d7873dfa06890a8dae109803984f2a57c857d'
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[heat_api_wsgi]/ensure: defined content as '{sha256}63c23a972f142aef4c001999d58b5cb122b43aa3aebf0785080bcc6e56385a66'
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{sha256}3906459aafe799c09305ffbfe0105de3fb9d05a4636cd93e6af9f82e10c8788b'
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{sha256}736d628e01f143a2d94f46af14446fe584d90a1a5dc68a9153e5c676f5888b15'
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-brotli.conf]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-optional.conf]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi-python3.conf]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/README]/ensure: removed
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/Apache::Vhost[heat_api_wsgi]/Concat[10-heat_api_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_wsgi.conf]/ensure: defined content as '{sha256}e131feabdda214b085cb54e1d72c666deb1552657c4e90a05e1b26f0c1733032'
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Notice: Applied catalog in 1.21 seconds
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Application:
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:    Initial environment: production
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:    Converged environment: production
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:          Run mode: user
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Changes:
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:             Total: 89
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Events:
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:           Success: 89
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:             Total: 89
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Resources:
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:           Skipped: 31
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:           Changed: 89
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:       Out of sync: 89
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:             Total: 331
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Time:
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:       Concat file: 0.00
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:            Anchor: 0.00
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:    Concat fragment: 0.00
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:              Cron: 0.01
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:           Package: 0.03
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:              File: 0.12
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:       Heat config: 0.75
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:    Transaction evaluation: 1.20
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:    Catalog application: 1.21
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:    Config retrieval: 1.48
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:          Last run: 1760363643
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:         Resources: 0.00
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:             Total: 1.21
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]: Version:
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:            Config: 1760363640
Oct 13 13:54:03 standalone.localdomain puppet-user[59485]:            Puppet: 7.10.0
Oct 13 13:54:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v530: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:03 standalone.localdomain puppet-user[60007]: Warning: Scope(Class[Heat]): The database_connection parameter is deprecated and will be \
Oct 13 13:54:03 standalone.localdomain puppet-user[60007]: removed in a future realse. Use heat::db::database_connection instead
Oct 13 13:54:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-cebc8707f4c7cba2ecd74c41ce3f00c568695e2bd83874b7af7633e3c76f6461-merged.mount: Deactivated successfully.
Oct 13 13:54:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48939fe4969186d618d37decb411b78f51e8550ba212d5ea72284648b672804c-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:03 standalone.localdomain podman[60481]: 2025-10-13 13:54:03.453903488 +0000 UTC m=+0.036336731 image pull  registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1
Oct 13 13:54:03 standalone.localdomain podman[60481]: 2025-10-13 13:54:03.492106054 +0000 UTC m=+0.074539247 container create c345d86697c9779b8502f8137e9a8e4a68bc47a5d50271d1fcf60c2cd0fb8295 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=container-puppet-keystone, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, config_id=tripleo_puppet_step1, name=rhosp17/openstack-keystone, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', 'NAME': 'keystone', 'STEP_CONFIG': "include ::tripleo::packages\n['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::base::keystone\n\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, container_name=container-puppet-keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1)
Oct 13 13:54:03 standalone.localdomain systemd[1]: Started libpod-conmon-c345d86697c9779b8502f8137e9a8e4a68bc47a5d50271d1fcf60c2cd0fb8295.scope.
Oct 13 13:54:03 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:03 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71faf331040c26a5f484565c527989857decd5d4ab474ba00bf72472b9f3d87/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:03 standalone.localdomain podman[60481]: 2025-10-13 13:54:03.558458345 +0000 UTC m=+0.140891528 container init c345d86697c9779b8502f8137e9a8e4a68bc47a5d50271d1fcf60c2cd0fb8295 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=container-puppet-keystone, name=rhosp17/openstack-keystone, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, description=Red Hat OpenStack Platform 17.1 keystone, container_name=container-puppet-keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, version=17.1.9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', 'NAME': 'keystone', 'STEP_CONFIG': "include ::tripleo::packages\n['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::base::keystone\n\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'volumes': ['/dev/log:/dev/log:rw', 
'/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 13:54:03 standalone.localdomain podman[60481]: 2025-10-13 13:54:03.566358382 +0000 UTC m=+0.148791585 container start c345d86697c9779b8502f8137e9a8e4a68bc47a5d50271d1fcf60c2cd0fb8295 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=container-puppet-keystone, architecture=x86_64, build-date=2025-07-21T13:27:18, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_puppet_step1, com.redhat.component=openstack-keystone-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, tcib_managed=true, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=container-puppet-keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, name=rhosp17/openstack-keystone, version=17.1.9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', 'NAME': 'keystone', 'STEP_CONFIG': "include ::tripleo::packages\n['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::base::keystone\n\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Oct 13 13:54:03 standalone.localdomain podman[60481]: 2025-10-13 13:54:03.566732553 +0000 UTC m=+0.149165736 container attach c345d86697c9779b8502f8137e9a8e4a68bc47a5d50271d1fcf60c2cd0fb8295 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=container-puppet-keystone, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, name=rhosp17/openstack-keystone, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, version=17.1.9, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, config_id=tripleo_puppet_step1, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', 'NAME': 'keystone', 'STEP_CONFIG': "include ::tripleo::packages\n['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::base::keystone\n\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, maintainer=OpenStack TripleO Team, container_name=container-puppet-keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]: Warning: This parameter is deprecated, please use `internal_proxy`. at ["/etc/puppet/modules/apache/manifests/mod/remoteip.pp", 77]:["/etc/puppet/modules/tripleo/manifests/profile/base/horizon.pp", 103]
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 13 13:54:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:54:03 standalone.localdomain puppet-user[60007]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.72 seconds
Oct 13 13:54:03 standalone.localdomain systemd[1]: libpod-b208739d05df384bfccedbbf3c0b2bfb59c3b46c72979b34c93d345e59091a91.scope: Deactivated successfully.
Oct 13 13:54:03 standalone.localdomain systemd[1]: libpod-b208739d05df384bfccedbbf3c0b2bfb59c3b46c72979b34c93d345e59091a91.scope: Consumed 5.002s CPU time.
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]: Warning: Scope(Apache::Vhost[horizon_vhost]):
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:     It is possible for the $name parameter to be defined with spaces in it. Although supported on POSIX systems, this
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:     can lead to cumbersome file names. The $servername attribute has stricter conditions from Apache (i.e. no spaces)
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:     When $use_servername_for_filenames = true, the $servername parameter, sanitized, is used to construct log and config
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:     file names.
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]: 
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:     From version v7.0.0 of the puppetlabs-apache module, this parameter will default to true. From version v8.0.0 of the
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:     module, the $use_servername_for_filenames will be removed and log/config file names will be derived from the
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:     sanitized $servername parameter when not explicitly defined.
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]: Warning: Scope(Apache::Vhost[horizon_ssl_vhost]):
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:     It is possible for the $name parameter to be defined with spaces in it. Although supported on POSIX systems, this
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:     can lead to cumbersome file names. The $servername attribute has stricter conditions from Apache (i.e. no spaces)
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:     When $use_servername_for_filenames = true, the $servername parameter, sanitized, is used to construct log and config
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:     file names.
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]: 
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:     From version v7.0.0 of the puppetlabs-apache module, this parameter will default to true. From version v8.0.0 of the
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:     module, the $use_servername_for_filenames will be removed and log/config file names will be derived from the
Oct 13 13:54:03 standalone.localdomain puppet-user[60028]:     sanitized $servername parameter when not explicitly defined.
Oct 13 13:54:03 standalone.localdomain podman[59452]: 2025-10-13 13:54:03.953312858 +0000 UTC m=+5.469009933 container died b208739d05df384bfccedbbf3c0b2bfb59c3b46c72979b34c93d345e59091a91 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=container-puppet-heat_api, release=1, container_name=container-puppet-heat_api, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line,heat_api_paste_ini', 'NAME': 'heat_api', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::api\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, build-date=2025-07-21T15:56:26, distribution-scope=public, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 13:54:03 standalone.localdomain puppet-user[59990]: Warning: Scope(Apache::Vhost[heat_api_cfn_wsgi]):
Oct 13 13:54:03 standalone.localdomain puppet-user[59990]:     It is possible for the $name parameter to be defined with spaces in it. Although supported on POSIX systems, this
Oct 13 13:54:03 standalone.localdomain puppet-user[59990]:     can lead to cumbersome file names. The $servername attribute has stricter conditions from Apache (i.e. no spaces)
Oct 13 13:54:03 standalone.localdomain puppet-user[59990]:     When $use_servername_for_filenames = true, the $servername parameter, sanitized, is used to construct log and config
Oct 13 13:54:03 standalone.localdomain puppet-user[59990]:     file names.
Oct 13 13:54:03 standalone.localdomain puppet-user[59990]: 
Oct 13 13:54:03 standalone.localdomain puppet-user[59990]:     From version v7.0.0 of the puppetlabs-apache module, this parameter will default to true. From version v8.0.0 of the
Oct 13 13:54:03 standalone.localdomain puppet-user[59990]:     module, the $use_servername_for_filenames will be removed and log/config file names will be derived from the
Oct 13 13:54:03 standalone.localdomain puppet-user[59990]:     sanitized $servername parameter when not explicitly defined.
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin_password]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_user_domain_name]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.92 seconds
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Heat_config[trustee/auth_type]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: Compiled catalog for standalone.localdomain in environment production in 1.36 seconds
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Heat_config[trustee/auth_url]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Heat_config[trustee/username]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Heat_config[trustee/password]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Heat_config[trustee/project_domain_name]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Heat_config[trustee/user_domain_name]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Heat_config[DEFAULT/max_json_body_size]/ensure: created
Oct 13 13:54:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-333396cca4f2fb744fdc3f640c53a832d015dd9da8cc6511785f638d57d7bd1c-merged.mount: Deactivated successfully.
Oct 13 13:54:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b208739d05df384bfccedbbf3c0b2bfb59c3b46c72979b34c93d345e59091a91-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin_password]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_user_domain_name]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Heat_config[trustee/auth_type]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Heat_config[DEFAULT/region_name_for_services]/ensure: created
Oct 13 13:54:04 standalone.localdomain ceph-mon[29756]: pgmap v530: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:04 standalone.localdomain podman[60547]: 2025-10-13 13:54:04.60315432 +0000 UTC m=+0.639400140 container cleanup b208739d05df384bfccedbbf3c0b2bfb59c3b46c72979b34c93d345e59091a91 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=container-puppet-heat_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, container_name=container-puppet-heat_api, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-heat-api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_puppet_step1, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:56:26, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line,heat_api_paste_ini', 'NAME': 'heat_api', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::api\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, maintainer=OpenStack TripleO Team)
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{sha256}3416848459dfd1bd419fb071f68b2ea5d8e6e9867a76d5341dc8d9efed0948cb'
Oct 13 13:54:04 standalone.localdomain systemd[1]: libpod-conmon-b208739d05df384bfccedbbf3c0b2bfb59c3b46c72979b34c93d345e59091a91.scope: Deactivated successfully.
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Heat_config[trustee/auth_url]/ensure: created
Oct 13 13:54:04 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-heat_api --conmon-pidfile /run/container-puppet-heat_api.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line,heat_api_paste_ini --env NAME=heat_api --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::heat::api
                                                        --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-heat_api --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line,heat_api_paste_ini', 'NAME': 'heat_api', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::api\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-heat_api.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Heat_config[trustee/username]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{sha256}ab8ffe3256e845dfb6a4c5088ae25445d4344a295858a1e3c2daa88f27527d4f'
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Heat_config[trustee/password]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{sha256}847a6fcb41eb25248553082108cde5327c624189fe47009f65d11c3885cab78c'
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Heat_config[ec2authtoken/auth_uri]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Remoteip/File[remoteip.conf]/ensure: defined content as '{sha256}9c8d4355af8c0547dc87c380e06a19f272a0bd3fac83afce5f8eb116cf574c2e'
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Heat_config[trustee/project_domain_name]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Heat_config[trustee/user_domain_name]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Horizon/File[/etc/openstack-dashboard/local_settings.d]/mode: mode changed '0750' to '0755'
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon]/mode: mode changed '0750' to '0751'
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon/horizon.log]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Heat_config[yaql/limit_iterators]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{sha256}04a02d56f6c6528fed872eeeb7dd0b5ca4a739b859823074471b23f2b9bfb0dd'
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Heat_config[yaql/memory_quota]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Heat_config[DEFAULT/max_json_body_size]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{sha256}b8a7429cbef3ecabe9e4f331123adb372ecfa3e82e76bc33d6cce997b36874bb' to '{sha256}acb6231bff742206c144c5740288a1d6caf14b1704f7c1d0c98aca319c969540'
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{sha256}8dbb5887d99b1bd7e8e6700b2c3bcfebc3d6ce5fdb66b8504b224d99ce5981a7'
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Heat_config[DEFAULT/region_name_for_services]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{sha256}55fd1ffb0fbb31ed1635c6175b7904207ae53c25e37a8de928aeeb6efb2f21eb'
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Heat_config[ec2authtoken/auth_uri]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Cache/Heat_config[resource_finder_cache/caching]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Heat_config[yaql/limit_iterators]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{sha256}eb9bf7ff02774b28c59bc3cc355fe6bea4b7b1b6780453d078fb1558b2d714fd'
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Heat_config[yaql/memory_quota]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{sha256}53f359b7deca28aff7c56ca0ac425ccb8323bc5121f64e4c5f04036898e6d866'
Oct 13 13:54:04 standalone.localdomain crontab[60614]: (root) LIST (root)
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{sha256}ca2fe478af71981984e353dd168b51c9bc993005157b9bff497c9aa7a7125700'
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{sha256}197eae5f99bc425f01e493b3390d78b186be5364d81fc5e3a6df370be3c3f734'
Oct 13 13:54:04 standalone.localdomain crontab[60627]: (root) LIST (heat)
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Cache/Heat_config[resource_finder_cache/caching]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Cron::Purge_deleted/Cron[heat-manage purge_deleted]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{sha256}8cbdbfcf32c28d41e5ca9206eea0e3be34dce45cff3a0c408ad2d23761560052'
Oct 13 13:54:04 standalone.localdomain crontab[60642]: (root) LIST (root)
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{sha256}a6ff35715035af2d397f744cbd2023805fad6fd3dd17a10d225e497fcb7ac808'
Oct 13 13:54:04 standalone.localdomain crontab[60637]: (root) REPLACE (heat)
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{sha256}2086e39dec178d39012a52700badd7b3cc6f2d97c06d197807e0cad8877e5f16'
Oct 13 13:54:04 standalone.localdomain crontab[60652]: (root) LIST (heat)
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/auth_encryption_key]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/ensure: defined content as '{sha256}4350f1dd81b2dcc0bf8903458871e8e60dfcb531dd3230247012f2fb48c5e22b'
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Cron::Purge_deleted/Cron[heat-manage purge_deleted]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{sha256}88f04c415dbd1bf0d074965d37261e056d073b675a047a02e55222818640c6e8'
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{sha256}9feefdc48c65f8b73ab77f3fc813d60744dc97b336bbd60e16bbd763b99c5d66'
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_metadata_server_url]/ensure: created
Oct 13 13:54:04 standalone.localdomain crontab[60656]: (root) REPLACE (heat)
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Remoteip/Apache::Mod[remoteip]/File[remoteip.load]/ensure: defined content as '{sha256}3977211787f6c6bf5629e4156b32d1dc95c37bc640452d0027b2bc9b1ec9f2d7'
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_waitcondition_server_url]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Api_cfn/Heat_config[heat_api_cfn/bind_host]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{sha256}3416848459dfd1bd419fb071f68b2ea5d8e6e9867a76d5341dc8d9efed0948cb'
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/content: content changed '{sha256}79743e7ac35b48eefd773c622783f4b85fb483cb9697ce173c5af112525ef0c3' to '{sha256}dd5a2d4cf6f15be16c5fcff6095f3b898b7d7214f4111ebfd4d46c57d8faf56e'
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{sha256}ab8ffe3256e845dfb6a4c5088ae25445d4344a295858a1e3c2daa88f27527d4f'
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{sha256}847a6fcb41eb25248553082108cde5327c624189fe47009f65d11c3885cab78c'
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/www_authenticate_uri]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/ssl.conf]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[60184]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:04 standalone.localdomain puppet-user[60184]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:04 standalone.localdomain puppet-user[60184]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:04 standalone.localdomain puppet-user[60184]:    (file & line not available)
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_type]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{sha256}19cb9bd7248ea35b8e882d1d21458b114cfa18be60fb8acbf1eb5cc9cab1afb7'
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/etc/httpd/conf.d/openstack-dashboard.conf]/content: content changed '{sha256}2674ec0a2b4f3412930e918216e5698d5bc877be4364105136866cad3f2ae4bb' to '{sha256}5c3c01834d94a99528a118d4a02978297ac0bf0250e0729c7e4bbd47d4865680'
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{sha256}ca7e6bca762fed4f5860c5961f7d7873dfa06890a8dae109803984f2a57c857d'
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Alias/File[alias.conf]/ensure: defined content as '{sha256}8c17a7de4a27d92b2aca6b156dca9e26b9e0bf31b8cc43f63c971aeed09d4e54'
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Horizon::Dashboards::Heat/Concat[/etc/openstack-dashboard/local_settings.d/_1699_orchestration_settings.py]/File[/etc/openstack-dashboard/local_settings.d/_1699_orchestration_settings.py]/ensure: defined content as '{sha256}dd44da5c856beb5e53df88fc72180a79669b73c9c04d487b5033290279692113'
Oct 13 13:54:04 standalone.localdomain puppet-user[60184]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:04 standalone.localdomain puppet-user[60184]:    (file & line not available)
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Horizon::Dashboards::Manila/Concat[/etc/openstack-dashboard/local_settings.d/_90_manila_shares.py]/File[/etc/openstack-dashboard/local_settings.d/_90_manila_shares.py]/content: content changed '{sha256}30de7bbf440460cde78da0e2cf1cab2404921ef0b0e926f5ebaaf973dc4252c5' to '{sha256}838e0d57b7536ffc538afd2e4d916f26ccb29380c13563d0f1ef51d676112c34'
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Horizon::Dashboards::Manila/Concat[/etc/openstack-dashboard/local_settings.d/_90_manila_shares.py]/File[/etc/openstack-dashboard/local_settings.d/_90_manila_shares.py]/group: group changed 'root' to 'apache'
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Horizon::Dashboards::Manila/Concat[/etc/openstack-dashboard/local_settings.d/_90_manila_shares.py]/File[/etc/openstack-dashboard/local_settings.d/_90_manila_shares.py]/mode: mode changed '0644' to '0640'
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache::Mod::Alias/Apache::Mod[alias]/File[alias.load]/ensure: defined content as '{sha256}824016275330b45fd8bd04b07792de5f9aaa337f8272bfc01c5b57bb515fc9b4'
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/trusts_delegated_roles]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_resources_per_stack]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/memcache_use_advanced_pool]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/memcached_servers]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/num_engine_workers]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/region_name]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-brotli.conf]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-optional.conf]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/convergence_engine]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi-python3.conf]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/README]/ensure: removed
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_url]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/reauthentication_auth_method]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/username]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[10-horizon_vhost.conf]/File[/etc/httpd/conf.d/10-horizon_vhost.conf]/ensure: defined content as '{sha256}965433cd4fdeaddea847ed3d0095ae8fdb2df1f89ddde7da57d582a23fb19b7e'
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/password]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/user_domain_name]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_nested_stack_depth]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60184]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.14 seconds
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_name]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/client_retry_limit]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_domain_name]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Notice: Applied catalog in 0.77 seconds
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Application:
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:    Initial environment: production
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:    Converged environment: production
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:          Run mode: user
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Changes:
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:             Total: 49
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Events:
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:           Success: 49
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:             Total: 49
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Resources:
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:           Skipped: 33
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:           Changed: 47
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:       Out of sync: 47
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:             Total: 138
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Time:
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:       Concat file: 0.00
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:    Concat fragment: 0.00
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:              File: 0.58
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:    Transaction evaluation: 0.76
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:    Catalog application: 0.77
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:    Config retrieval: 1.01
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:          Last run: 1760363644
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:             Total: 0.77
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]: Version:
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:            Config: 1760363643
Oct 13 13:54:04 standalone.localdomain puppet-user[60028]:            Puppet: 7.10.0
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/interface]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/www_authenticate_uri]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_type]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60184]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/connection]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60184]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/max_retries]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/memcache_use_advanced_pool]/ensure: created
Oct 13 13:54:04 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/memcached_servers]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/region_name]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/db_max_retries]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_url]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/username]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/password]/ensure: created
Oct 13 13:54:05 standalone.localdomain podman[60695]: 2025-10-13 13:54:05.031197238 +0000 UTC m=+0.081431173 container create a60806ae6781c47faafde023f39dbdb243c633403c9920899c32304e4e298498 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=container-puppet-manila, com.redhat.component=openstack-manila-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 manila-api, description=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-manila-api, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, container_name=container-puppet-manila, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, config_id=tripleo_puppet_step1, build-date=2025-07-21T16:06:43, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,manila_config,manila_api_paste_ini,manila_config,manila_scheduler_paste_ini,manila_config,file,concat,file_line', 'NAME': 'manila', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::manila::api\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::manila::scheduler\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::pacemaker::manila::share_bundle\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-type=git)
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/user_domain_name]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_name]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_domain_name]/ensure: created
Oct 13 13:54:05 standalone.localdomain systemd[1]: Started libpod-conmon-a60806ae6781c47faafde023f39dbdb243c633403c9920899c32304e4e298498.scope.
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/interface]/ensure: created
Oct 13 13:54:05 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:05 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee02a78034d58b97a791f95c9d2a559665d6d7cb40b95ff70c9188da0f009dd0/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:05 standalone.localdomain podman[60695]: 2025-10-13 13:54:05.082672583 +0000 UTC m=+0.132906538 container init a60806ae6781c47faafde023f39dbdb243c633403c9920899c32304e4e298498 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=container-puppet-manila, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, config_id=tripleo_puppet_step1, container_name=container-puppet-manila, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,manila_config,manila_api_paste_ini,manila_config,manila_scheduler_paste_ini,manila_config,file,concat,file_line', 'NAME': 'manila', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::manila::api\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::manila::scheduler\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::pacemaker::manila::share_bundle\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:06:43, batch=17.1_20250721.1, architecture=x86_64, com.redhat.component=openstack-manila-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, name=rhosp17/openstack-manila-api, description=Red Hat OpenStack Platform 17.1 manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8)
Oct 13 13:54:05 standalone.localdomain podman[60695]: 2025-10-13 13:54:05.088193669 +0000 UTC m=+0.138427634 container start a60806ae6781c47faafde023f39dbdb243c633403c9920899c32304e4e298498 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=container-puppet-manila, tcib_managed=true, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 manila-api, description=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-manila-api-container, container_name=container-puppet-manila, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, config_id=tripleo_puppet_step1, name=rhosp17/openstack-manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,manila_config,manila_api_paste_ini,manila_config,manila_scheduler_paste_ini,manila_config,file,concat,file_line', 'NAME': 'manila', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::manila::api\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::manila::scheduler\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::pacemaker::manila::share_bundle\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.buildah.version=1.33.12, build-date=2025-07-21T16:06:43, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 13:54:05 standalone.localdomain podman[60695]: 2025-10-13 13:54:05.088430926 +0000 UTC m=+0.138664871 container attach a60806ae6781c47faafde023f39dbdb243c633403c9920899c32304e4e298498 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=container-puppet-manila, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-manila-api-container, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, version=17.1.9, config_id=tripleo_puppet_step1, release=1, description=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,manila_config,manila_api_paste_ini,manila_config,manila_scheduler_paste_ini,manila_config,file,concat,file_line', 'NAME': 'manila', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::manila::api\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::manila::scheduler\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::pacemaker::manila::share_bundle\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, container_name=container-puppet-manila, build-date=2025-07-21T16:06:43, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-manila-api, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api)
Oct 13 13:54:05 standalone.localdomain podman[60695]: 2025-10-13 13:54:04.994300422 +0000 UTC m=+0.044534377 image pull  registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/connection]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/max_retries]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/db_max_retries]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/driver]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/rpc_response_timeout]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/transport_url]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat/Oslo::Middleware[heat_config]/Heat_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/expose_headers]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/max_age]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/allow_headers]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/debug]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/log_dir]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/driver]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 13 13:54:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v531: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/rpc_response_timeout]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/transport_url]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat/Oslo::Middleware[heat_config]/Heat_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/expose_headers]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/max_age]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/allow_headers]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/debug]/ensure: created
Oct 13 13:54:05 standalone.localdomain systemd[1]: libpod-132c290807346c1ea1944bc4c59a9fc1d6f32f082883814680aeaf3701e3f731.scope: Deactivated successfully.
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/log_dir]/ensure: created
Oct 13 13:54:05 standalone.localdomain systemd[1]: libpod-132c290807346c1ea1944bc4c59a9fc1d6f32f082883814680aeaf3701e3f731.scope: Consumed 3.590s CPU time.
Oct 13 13:54:05 standalone.localdomain podman[59959]: 2025-10-13 13:54:05.397910018 +0000 UTC m=+4.473602516 container died 132c290807346c1ea1944bc4c59a9fc1d6f32f082883814680aeaf3701e3f731 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=container-puppet-horizon, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, build-date=2025-07-21T13:58:15, release=1, vcs-type=git, io.openshift.expose-services=, container_name=container-puppet-horizon, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,horizon_config', 'NAME': 'horizon', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::horizon\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 horizon, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-horizon, com.redhat.component=openstack-horizon-container, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Cache/Oslo::Cache[heat_config]/Heat_config[cache/backend]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Cache/Oslo::Cache[heat_config]/Heat_config[cache/enabled]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Cache/Oslo::Cache[heat_config]/Heat_config[cache/memcache_servers]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Cache/Oslo::Cache[heat_config]/Heat_config[cache/tls_enabled]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60519]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:05 standalone.localdomain puppet-user[60519]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:05 standalone.localdomain puppet-user[60519]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:05 standalone.localdomain puppet-user[60519]:    (file & line not available)
Oct 13 13:54:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-132c290807346c1ea1944bc4c59a9fc1d6f32f082883814680aeaf3701e3f731-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b385782410aae5c113d9546c3022ac0ff632d6109f408ddd4832d9b2db2c7c16-merged.mount: Deactivated successfully.
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Cache/Oslo::Cache[heat_config]/Heat_config[cache/backend]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Cache/Oslo::Cache[heat_config]/Heat_config[cache/enabled]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Policy/Oslo::Policy[heat_config]/Heat_config[oslo_policy/policy_file]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Cache/Oslo::Cache[heat_config]/Heat_config[cache/memcache_servers]/ensure: created
Oct 13 13:54:05 standalone.localdomain podman[60832]: 2025-10-13 13:54:05.503001291 +0000 UTC m=+0.093758194 container cleanup 132c290807346c1ea1944bc4c59a9fc1d6f32f082883814680aeaf3701e3f731 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=container-puppet-horizon, container_name=container-puppet-horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, com.redhat.component=openstack-horizon-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T13:58:15, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 horizon, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.buildah.version=1.33.12, version=17.1.9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,horizon_config', 'NAME': 'horizon', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::horizon\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, io.openshift.expose-services=)
Oct 13 13:54:05 standalone.localdomain systemd[1]: libpod-conmon-132c290807346c1ea1944bc4c59a9fc1d6f32f082883814680aeaf3701e3f731.scope: Deactivated successfully.
Oct 13 13:54:05 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-horizon --conmon-pidfile /run/container-puppet-horizon.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,horizon_config --env NAME=horizon --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::horizon
                                                        --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-horizon --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,horizon_config', 'NAME': 'horizon', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::horizon\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-horizon.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{sha256}9c07fd1c45d5592fa4b64fda68d21e4881542a149acb2452d09a45f929aee437'
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{sha256}b8a7429cbef3ecabe9e4f331123adb372ecfa3e82e76bc33d6cce997b36874bb' to '{sha256}acb6231bff742206c144c5740288a1d6caf14b1704f7c1d0c98aca319c969540'
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{sha256}8dbb5887d99b1bd7e8e6700b2c3bcfebc3d6ce5fdb66b8504b224d99ce5981a7'
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{sha256}55fd1ffb0fbb31ed1635c6175b7904207ae53c25e37a8de928aeeb6efb2f21eb'
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]: Notice: Applied catalog in 0.63 seconds
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]: Application:
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:    Initial environment: production
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:    Converged environment: production
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:          Run mode: user
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]: Changes:
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:             Total: 4
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]: Events:
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:           Success: 4
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:             Total: 4
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]: Resources:
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:           Changed: 4
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:       Out of sync: 4
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:           Skipped: 8
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:             Total: 13
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]: Time:
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:              File: 0.00
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:              Exec: 0.09
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:    Config retrieval: 0.17
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:            Augeas: 0.53
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:    Transaction evaluation: 0.62
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:    Catalog application: 0.63
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:          Last run: 1760363645
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:             Total: 0.63
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]: Version:
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:            Config: 1760363644
Oct 13 13:54:05 standalone.localdomain puppet-user[60184]:            Puppet: 7.10.0
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{sha256}eb9bf7ff02774b28c59bc3cc355fe6bea4b7b1b6780453d078fb1558b2d714fd'
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{sha256}53f359b7deca28aff7c56ca0ac425ccb8323bc5121f64e4c5f04036898e6d866'
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{sha256}ca2fe478af71981984e353dd168b51c9bc993005157b9bff497c9aa7a7125700'
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{sha256}197eae5f99bc425f01e493b3390d78b186be5364d81fc5e3a6df370be3c3f734'
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{sha256}8cbdbfcf32c28d41e5ca9206eea0e3be34dce45cff3a0c408ad2d23761560052'
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{sha256}a6ff35715035af2d397f744cbd2023805fad6fd3dd17a10d225e497fcb7ac808'
Oct 13 13:54:05 standalone.localdomain puppet-user[60519]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:05 standalone.localdomain puppet-user[60519]:    (file & line not available)
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{sha256}2086e39dec178d39012a52700badd7b3cc6f2d97c06d197807e0cad8877e5f16'
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/ensure: defined content as '{sha256}4350f1dd81b2dcc0bf8903458871e8e60dfcb531dd3230247012f2fb48c5e22b'
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{sha256}88f04c415dbd1bf0d074965d37261e056d073b675a047a02e55222818640c6e8'
Oct 13 13:54:05 standalone.localdomain ceph-mon[29756]: pgmap v531: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Cache/Oslo::Cache[heat_config]/Heat_config[cache/tls_enabled]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{sha256}9feefdc48c65f8b73ab77f3fc813d60744dc97b336bbd60e16bbd763b99c5d66'
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Mod::Headers/Apache::Mod[headers]/File[headers.load]/ensure: defined content as '{sha256}afb3543781a0adb6e46645cb5079509a9f1e3246c2285967df9cdf5b25fadd4f'
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/ssl.conf]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{sha256}19cb9bd7248ea35b8e882d1d21458b114cfa18be60fb8acbf1eb5cc9cab1afb7'
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{sha256}ca7e6bca762fed4f5860c5961f7d7873dfa06890a8dae109803984f2a57c857d'
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: /Stage[main]/Heat::Policy/Oslo::Policy[heat_config]/Heat_config[oslo_policy/policy_file]/ensure: created
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[heat_api_cfn_wsgi]/ensure: defined content as '{sha256}00dfd79a2e891b11ddd21cb5ce9d8c56f440a274b42eb9e7e9616c7c7e326582'
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{sha256}3906459aafe799c09305ffbfe0105de3fb9d05a4636cd93e6af9f82e10c8788b'
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Notice: Applied catalog in 1.74 seconds
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{sha256}736d628e01f143a2d94f46af14446fe584d90a1a5dc68a9153e5c676f5888b15'
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Application:
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:    Initial environment: production
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:    Converged environment: production
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:          Run mode: user
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Changes:
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:             Total: 61
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Events:
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:           Success: 61
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:             Total: 61
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Resources:
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:           Skipped: 21
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:           Changed: 61
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:       Out of sync: 61
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:             Total: 260
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Time:
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:              File: 0.00
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:            Anchor: 0.00
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:              Cron: 0.01
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:            Augeas: 0.02
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:           Package: 0.04
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:    Config retrieval: 0.83
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:       Heat config: 1.53
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:    Transaction evaluation: 1.73
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:    Catalog application: 1.74
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:          Last run: 1760363645
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:         Resources: 0.00
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:             Total: 1.74
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]: Version:
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:            Config: 1760363643
Oct 13 13:54:05 standalone.localdomain puppet-user[60007]:            Puppet: 7.10.0
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-brotli.conf]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-optional.conf]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi-python3.conf]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/README]/ensure: removed
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/Apache::Vhost[heat_api_cfn_wsgi]/Concat[10-heat_api_cfn_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_cfn_wsgi.conf]/ensure: defined content as '{sha256}969903cf4f0f82030bdb9bac6fd904c130f027d297ae7b562a96ca79b40bd672'
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Notice: Applied catalog in 1.45 seconds
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Application:
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:    Initial environment: production
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:    Converged environment: production
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:          Run mode: user
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Changes:
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:             Total: 90
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Events:
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:           Success: 90
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:             Total: 90
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Resources:
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:           Skipped: 32
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:           Changed: 90
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:       Out of sync: 90
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:             Total: 333
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Time:
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:            Anchor: 0.00
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:       Concat file: 0.00
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:    Concat fragment: 0.00
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:              Cron: 0.02
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:           Package: 0.04
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:              File: 0.15
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:       Heat config: 0.93
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:    Transaction evaluation: 1.44
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:    Catalog application: 1.45
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:    Config retrieval: 1.55
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:          Last run: 1760363645
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:         Resources: 0.00
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:             Total: 1.45
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]: Version:
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:            Config: 1760363642
Oct 13 13:54:05 standalone.localdomain puppet-user[59990]:            Puppet: 7.10.0
Oct 13 13:54:05 standalone.localdomain puppet-user[60519]: Notice: Accepting previously invalid value for target type 'Enum['sql', 'template']'
Oct 13 13:54:05 standalone.localdomain puppet-user[60519]: Warning: Scope(Class[Keystone]): The database_connection parameter is deprecated and will be \
Oct 13 13:54:05 standalone.localdomain puppet-user[60519]: removed in a future release. Use keystone::db::database_connection instead
Oct 13 13:54:05 standalone.localdomain systemd[1]: libpod-f961214072af6346858a06478203dc71c49226493903f5b19207b41ccc675d74.scope: Deactivated successfully.
Oct 13 13:54:05 standalone.localdomain systemd[1]: libpod-f961214072af6346858a06478203dc71c49226493903f5b19207b41ccc675d74.scope: Consumed 3.057s CPU time.
Oct 13 13:54:05 standalone.localdomain podman[60146]: 2025-10-13 13:54:05.866751281 +0000 UTC m=+3.504303361 container died f961214072af6346858a06478203dc71c49226493903f5b19207b41ccc675d74 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_puppet_step1, version=17.1.9, maintainer=OpenStack TripleO Team, container_name=container-puppet-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid)
Oct 13 13:54:05 standalone.localdomain podman[60935]: 2025-10-13 13:54:05.947250985 +0000 UTC m=+0.069678500 container cleanup f961214072af6346858a06478203dc71c49226493903f5b19207b41ccc675d74 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, version=17.1.9, container_name=container-puppet-iscsid, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_id=tripleo_puppet_step1, name=rhosp17/openstack-iscsid)
Oct 13 13:54:05 standalone.localdomain systemd[1]: libpod-conmon-f961214072af6346858a06478203dc71c49226493903f5b19207b41ccc675d74.scope: Deactivated successfully.
Oct 13 13:54:05 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::iscsid
                                                        --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Oct 13 13:54:06 standalone.localdomain podman[60944]: 2025-10-13 13:54:06.000762361 +0000 UTC m=+0.102651310 container create 0e9939c7623197f57ceb85a30c91b02087819f8de092c3f3f1dd23d3098f949e (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=container-puppet-memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=container-puppet-memcached, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'memcached', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::memcached\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.expose-services=, name=rhosp17/openstack-memcached, distribution-scope=public, com.redhat.component=openstack-memcached-container, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, build-date=2025-07-21T12:58:43)
Oct 13 13:54:06 standalone.localdomain systemd[1]: Started libpod-conmon-0e9939c7623197f57ceb85a30c91b02087819f8de092c3f3f1dd23d3098f949e.scope.
Oct 13 13:54:06 standalone.localdomain podman[60944]: 2025-10-13 13:54:05.939058359 +0000 UTC m=+0.040947268 image pull  registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1
Oct 13 13:54:06 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:06 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87046473c186226e71335669ea31ba5572060b2d006f721960f031336019f3c2/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:06 standalone.localdomain podman[60944]: 2025-10-13 13:54:06.072882114 +0000 UTC m=+0.174770993 container init 0e9939c7623197f57ceb85a30c91b02087819f8de092c3f3f1dd23d3098f949e (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=container-puppet-memcached, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:43, com.redhat.component=openstack-memcached-container, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_puppet_step1, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-memcached, container_name=container-puppet-memcached, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'memcached', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::memcached\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Oct 13 13:54:06 standalone.localdomain podman[60944]: 2025-10-13 13:54:06.083532273 +0000 UTC m=+0.185421172 container start 0e9939c7623197f57ceb85a30c91b02087819f8de092c3f3f1dd23d3098f949e (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=container-puppet-memcached, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, release=1, name=rhosp17/openstack-memcached, tcib_managed=true, build-date=2025-07-21T12:58:43, container_name=container-puppet-memcached, architecture=x86_64, config_id=tripleo_puppet_step1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, version=17.1.9, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'memcached', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::memcached\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Oct 13 13:54:06 standalone.localdomain podman[60944]: 2025-10-13 13:54:06.083724169 +0000 UTC m=+0.185613068 container attach 0e9939c7623197f57ceb85a30c91b02087819f8de092c3f3f1dd23d3098f949e (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=container-puppet-memcached, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_id=tripleo_puppet_step1, name=rhosp17/openstack-memcached, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:43, com.redhat.component=openstack-memcached-container, release=1, io.openshift.expose-services=, container_name=container-puppet-memcached, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'memcached', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::memcached\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 13:54:06 standalone.localdomain systemd[1]: libpod-f2436a379d8f08d628e1b8357f6fcb1654c8cddbc4a9222c3d173cdff3a680a3.scope: Deactivated successfully.
Oct 13 13:54:06 standalone.localdomain systemd[1]: libpod-f2436a379d8f08d628e1b8357f6fcb1654c8cddbc4a9222c3d173cdff3a680a3.scope: Consumed 4.457s CPU time.
Oct 13 13:54:06 standalone.localdomain podman[59949]: 2025-10-13 13:54:06.242856002 +0000 UTC m=+5.351931261 container died f2436a379d8f08d628e1b8357f6fcb1654c8cddbc4a9222c3d173cdff3a680a3 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=container-puppet-heat, com.redhat.component=openstack-heat-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, release=1, container_name=container-puppet-heat, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', 'NAME': 'heat', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::engine\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, config_id=tripleo_puppet_step1)
Oct 13 13:54:06 standalone.localdomain podman[61079]: 2025-10-13 13:54:06.378377857 +0000 UTC m=+0.124770133 container cleanup f2436a379d8f08d628e1b8357f6fcb1654c8cddbc4a9222c3d173cdff3a680a3 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=container-puppet-heat, config_id=tripleo_puppet_step1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, batch=17.1_20250721.1, architecture=x86_64, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', 'NAME': 'heat', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::engine\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=container-puppet-heat, tcib_managed=true, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container)
Oct 13 13:54:06 standalone.localdomain systemd[1]: libpod-conmon-f2436a379d8f08d628e1b8357f6fcb1654c8cddbc4a9222c3d173cdff3a680a3.scope: Deactivated successfully.
Oct 13 13:54:06 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-heat --conmon-pidfile /run/container-puppet-heat.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::heat::engine
                                                       include tripleo::profile::base::database::mysql::client --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-heat --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', 'NAME': 'heat', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::engine\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-heat.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1
Oct 13 13:54:06 standalone.localdomain systemd[1]: libpod-1196a293686b80c8120eb352633d509fd3b9dac6f4b0bb13f0791dda7da389a2.scope: Deactivated successfully.
Oct 13 13:54:06 standalone.localdomain systemd[1]: libpod-1196a293686b80c8120eb352633d509fd3b9dac6f4b0bb13f0791dda7da389a2.scope: Consumed 4.993s CPU time.
Oct 13 13:54:06 standalone.localdomain podman[59905]: 2025-10-13 13:54:06.409387287 +0000 UTC m=+5.611224018 container died 1196a293686b80c8120eb352633d509fd3b9dac6f4b0bb13f0791dda7da389a2 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=container-puppet-heat_api_cfn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-heat-api-cfn, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', 'NAME': 'heat_api_cfn', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::api_cfn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., 
version=17.1.9, build-date=2025-07-21T14:49:55, com.redhat.component=openstack-heat-api-cfn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=container-puppet-heat_api_cfn, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 13:54:06 standalone.localdomain podman[61108]: 2025-10-13 13:54:06.427756878 +0000 UTC m=+0.109803364 container create 20e55a940c0b3ab75f4fd8e0729f7e2cb59429f76437b23c77c9f1116ef115b5 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=container-puppet-mysql, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:45, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_id=tripleo_puppet_step1, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'mysql', 'STEP_CONFIG': "include ::tripleo::packages\n['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\nexec {'wait-for-settle': command => '/bin/true' }\ninclude tripleo::profile::pacemaker::database::mysql_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, container_name=container-puppet-mysql, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64)
Oct 13 13:54:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b84240793023d0060bc44a6d5c6a81ebdf8f4232b50de9763d8b1b4c56d59c81-merged.mount: Deactivated successfully.
Oct 13 13:54:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f961214072af6346858a06478203dc71c49226493903f5b19207b41ccc675d74-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7861ba3a2d0fa6c88e81d0b962398561746b4f0944affb5e220c19f62f3ba4ad-merged.mount: Deactivated successfully.
Oct 13 13:54:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f2436a379d8f08d628e1b8357f6fcb1654c8cddbc4a9222c3d173cdff3a680a3-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:06 standalone.localdomain systemd[1]: Started libpod-conmon-20e55a940c0b3ab75f4fd8e0729f7e2cb59429f76437b23c77c9f1116ef115b5.scope.
Oct 13 13:54:06 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:06 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bbf92a4c859842b06884bea493d8bbbda1c9f40c81fd52f77228f976bab6cee/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:06 standalone.localdomain podman[61108]: 2025-10-13 13:54:06.474836641 +0000 UTC m=+0.156883127 container init 20e55a940c0b3ab75f4fd8e0729f7e2cb59429f76437b23c77c9f1116ef115b5 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=container-puppet-mysql, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T12:58:45, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-mariadb, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=container-puppet-mysql, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'mysql', 'STEP_CONFIG': "include ::tripleo::packages\n['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\nexec {'wait-for-settle': command => '/bin/true' }\ninclude tripleo::profile::pacemaker::database::mysql_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20250721.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible)
Oct 13 13:54:06 standalone.localdomain podman[61108]: 2025-10-13 13:54:06.489187361 +0000 UTC m=+0.171233837 container start 20e55a940c0b3ab75f4fd8e0729f7e2cb59429f76437b23c77c9f1116ef115b5 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=container-puppet-mysql, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'mysql', 'STEP_CONFIG': "include ::tripleo::packages\n['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\nexec {'wait-for-settle': command => '/bin/true' }\ninclude tripleo::profile::pacemaker::database::mysql_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_id=tripleo_puppet_step1, build-date=2025-07-21T12:58:45, release=1, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=container-puppet-mysql, distribution-scope=public)
Oct 13 13:54:06 standalone.localdomain podman[61108]: 2025-10-13 13:54:06.489572822 +0000 UTC m=+0.171619318 container attach 20e55a940c0b3ab75f4fd8e0729f7e2cb59429f76437b23c77c9f1116ef115b5 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=container-puppet-mysql, release=1, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:45, version=17.1.9, container_name=container-puppet-mysql, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-mariadb, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'mysql', 'STEP_CONFIG': "include ::tripleo::packages\n['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\nexec {'wait-for-settle': command => '/bin/true' }\ninclude tripleo::profile::pacemaker::database::mysql_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 13:54:06 standalone.localdomain podman[61108]: 2025-10-13 13:54:06.391227303 +0000 UTC m=+0.073273819 image pull  registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1
Oct 13 13:54:06 standalone.localdomain podman[61132]: 2025-10-13 13:54:06.595090018 +0000 UTC m=+0.176462664 container cleanup 1196a293686b80c8120eb352633d509fd3b9dac6f4b0bb13f0791dda7da389a2 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=container-puppet-heat_api_cfn, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, name=rhosp17/openstack-heat-api-cfn, version=17.1.9, container_name=container-puppet-heat_api_cfn, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T14:49:55, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-cfn-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, tcib_managed=true, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', 'NAME': 'heat_api_cfn', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::api_cfn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 13:54:06 standalone.localdomain systemd[1]: libpod-conmon-1196a293686b80c8120eb352633d509fd3b9dac6f4b0bb13f0791dda7da389a2.scope: Deactivated successfully.
Oct 13 13:54:06 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-heat_api_cfn --conmon-pidfile /run/container-puppet-heat_api_cfn.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api_cfn --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::heat::api_cfn
                                                        --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-heat_api_cfn --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', 'NAME': 'heat_api_cfn', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::heat::api_cfn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-heat_api_cfn.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1
Oct 13 13:54:06 standalone.localdomain puppet-user[60519]: Warning: Scope(Apache::Vhost[keystone_wsgi]):
Oct 13 13:54:06 standalone.localdomain puppet-user[60519]:     It is possible for the $name parameter to be defined with spaces in it. Although supported on POSIX systems, this
Oct 13 13:54:06 standalone.localdomain puppet-user[60519]:     can lead to cumbersome file names. The $servername attribute has stricter conditions from Apache (i.e. no spaces)
Oct 13 13:54:06 standalone.localdomain puppet-user[60519]:     When $use_servername_for_filenames = true, the $servername parameter, sanitized, is used to construct log and config
Oct 13 13:54:06 standalone.localdomain puppet-user[60519]:     file names.
Oct 13 13:54:06 standalone.localdomain puppet-user[60519]: 
Oct 13 13:54:06 standalone.localdomain puppet-user[60519]:     From version v7.0.0 of the puppetlabs-apache module, this parameter will default to true. From version v8.0.0 of the
Oct 13 13:54:06 standalone.localdomain puppet-user[60519]:     module, the $use_servername_for_filenames will be removed and log/config file names will be derived from the
Oct 13 13:54:06 standalone.localdomain puppet-user[60519]:     sanitized $servername parameter when not explicitly defined.
Oct 13 13:54:06 standalone.localdomain puppet-user[60519]: Notice: Compiled catalog for standalone.localdomain in environment production in 1.28 seconds
Oct 13 13:54:06 standalone.localdomain podman[61255]: 2025-10-13 13:54:06.891110196 +0000 UTC m=+0.077881286 container create 5c3d60524c932de95be2ae05c30e04d03535d69fd13796a0c0462e1e414f0241 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, distribution-scope=public, build-date=2025-07-21T15:44:03, name=rhosp17/openstack-neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, release=1, tcib_managed=true, container_name=container-puppet-neutron, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,neutron_api_paste_ini,neutron_plugin_ml2,neutron_config,neutron_agent_sriov_numvfs,neutron_sriov_agent_config,neutron_config,neutron_dhcp_agent_config,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::server\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::neutron::plugins::ml2\n\ninclude tripleo::profile::base::neutron::sriov\n\ninclude tripleo::profile::base::neutron::dhcp\n\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container)
Oct 13 13:54:06 standalone.localdomain systemd[1]: Started libpod-conmon-5c3d60524c932de95be2ae05c30e04d03535d69fd13796a0c0462e1e414f0241.scope.
Oct 13 13:54:06 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:06 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c691834ad3a3edd918cbfe39bcf9284ebfbeb04084cbf00c4514adb5db703ef9/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:06 standalone.localdomain podman[61255]: 2025-10-13 13:54:06.931955442 +0000 UTC m=+0.118726532 container init 5c3d60524c932de95be2ae05c30e04d03535d69fd13796a0c0462e1e414f0241 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, build-date=2025-07-21T15:44:03, io.openshift.expose-services=, release=1, name=rhosp17/openstack-neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,neutron_api_paste_ini,neutron_plugin_ml2,neutron_config,neutron_agent_sriov_numvfs,neutron_sriov_agent_config,neutron_config,neutron_dhcp_agent_config,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::server\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::neutron::plugins::ml2\n\ninclude tripleo::profile::base::neutron::sriov\n\ninclude tripleo::profile::base::neutron::dhcp\n\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 
'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, container_name=container-puppet-neutron, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d)
Oct 13 13:54:06 standalone.localdomain podman[61255]: 2025-10-13 13:54:06.93758576 +0000 UTC m=+0.124356890 container start 5c3d60524c932de95be2ae05c30e04d03535d69fd13796a0c0462e1e414f0241 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, container_name=container-puppet-neutron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-07-21T15:44:03, io.openshift.expose-services=, name=rhosp17/openstack-neutron-server, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, architecture=x86_64, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-neutron-server-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,neutron_api_paste_ini,neutron_plugin_ml2,neutron_config,neutron_agent_sriov_numvfs,neutron_sriov_agent_config,neutron_config,neutron_dhcp_agent_config,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::server\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::neutron::plugins::ml2\n\ninclude tripleo::profile::base::neutron::sriov\n\ninclude 
tripleo::profile::base::neutron::dhcp\n\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20250721.1, tcib_managed=true, release=1, managed_by=tripleo_ansible)
Oct 13 13:54:06 standalone.localdomain podman[61255]: 2025-10-13 13:54:06.938089685 +0000 UTC m=+0.124860795 container attach 5c3d60524c932de95be2ae05c30e04d03535d69fd13796a0c0462e1e414f0241 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, batch=17.1_20250721.1, build-date=2025-07-21T15:44:03, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, version=17.1.9, container_name=container-puppet-neutron, release=1, name=rhosp17/openstack-neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,neutron_api_paste_ini,neutron_plugin_ml2,neutron_config,neutron_agent_sriov_numvfs,neutron_sriov_agent_config,neutron_config,neutron_dhcp_agent_config,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::server\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::neutron::plugins::ml2\n\ninclude tripleo::profile::base::neutron::sriov\n\ninclude tripleo::profile::base::neutron::dhcp\n\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-server-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 13:54:06 standalone.localdomain podman[61279]: 2025-10-13 13:54:06.957605181 +0000 UTC m=+0.086612409 container create f2f33f0f48a0fa5fd93a14483b0a39b75bf37f0806c70e292a200e9e1a98ff46 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=container-puppet-nova, com.redhat.component=openstack-nova-api-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, vcs-type=git, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., container_name=container-puppet-nova, build-date=2025-07-21T16:05:11, summary=Red Hat OpenStack Platform 17.1 nova-api, release=1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-api, batch=17.1_20250721.1, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini,nova_config,nova_config,nova_config', 'NAME': 'nova', 'STEP_CONFIG': "include ::tripleo::packages\n['Nova_cell_v2'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::base::nova::api\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::conductor\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::scheduler\ninclude tripleo::profile::base::database::mysql::client\ninclude 
tripleo::profile::base::nova::vncproxy\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible)
Oct 13 13:54:06 standalone.localdomain podman[61255]: 2025-10-13 13:54:06.858690484 +0000 UTC m=+0.045461614 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Oct 13 13:54:07 standalone.localdomain podman[61279]: 2025-10-13 13:54:06.907362384 +0000 UTC m=+0.036369602 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 13:54:07 standalone.localdomain systemd[1]: Started libpod-conmon-f2f33f0f48a0fa5fd93a14483b0a39b75bf37f0806c70e292a200e9e1a98ff46.scope.
Oct 13 13:54:07 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Keystone_config[token/expiration]/ensure: created
Oct 13 13:54:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed161745c15cbd99d7c949d604dd3eb9ddf0df704d54afd7fe76909dd70cd6ab/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:07 standalone.localdomain podman[61279]: 2025-10-13 13:54:07.036084475 +0000 UTC m=+0.165091703 container init f2f33f0f48a0fa5fd93a14483b0a39b75bf37f0806c70e292a200e9e1a98ff46 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=container-puppet-nova, com.redhat.component=openstack-nova-api-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, build-date=2025-07-21T16:05:11, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-api, distribution-scope=public, tcib_managed=true, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini,nova_config,nova_config,nova_config', 'NAME': 'nova', 'STEP_CONFIG': "include ::tripleo::packages\n['Nova_cell_v2'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::base::nova::api\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::conductor\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::scheduler\ninclude tripleo::profile::base::database::mysql::client\ninclude 
tripleo::profile::base::nova::vncproxy\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-nova, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, version=17.1.9)
Oct 13 13:54:07 standalone.localdomain podman[61279]: 2025-10-13 13:54:07.042311131 +0000 UTC m=+0.171318329 container start f2f33f0f48a0fa5fd93a14483b0a39b75bf37f0806c70e292a200e9e1a98ff46 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=container-puppet-nova, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini,nova_config,nova_config,nova_config', 'NAME': 'nova', 'STEP_CONFIG': "include ::tripleo::packages\n['Nova_cell_v2'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::base::nova::api\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::conductor\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::scheduler\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::vncproxy\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=container-puppet-nova, build-date=2025-07-21T16:05:11, config_id=tripleo_puppet_step1, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 13:54:07 standalone.localdomain podman[61279]: 2025-10-13 13:54:07.042647121 +0000 UTC m=+0.171654379 container attach f2f33f0f48a0fa5fd93a14483b0a39b75bf37f0806c70e292a200e9e1a98ff46 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=container-puppet-nova, name=rhosp17/openstack-nova-api, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, container_name=container-puppet-nova, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, batch=17.1_20250721.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini,nova_config,nova_config,nova_config', 'NAME': 'nova', 'STEP_CONFIG': "include ::tripleo::packages\n['Nova_cell_v2'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::base::nova::api\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::conductor\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::scheduler\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::vncproxy\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_id=tripleo_puppet_step1, com.redhat.component=openstack-nova-api-container, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T16:05:11, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, maintainer=OpenStack TripleO Team)
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Keystone_config[ssl/enable]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Keystone_config[catalog/driver]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Keystone_config[catalog/template_file]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Keystone_config[token/provider]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/notification_format]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/0]/ensure: defined content as '{sha256}88805955656f368a53297ae57e44c9827d9f331e82cb0c48c901d61ff21c93cb'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/1]/ensure: defined content as '{sha256}534c8b95a0b582a97715ace9c822132fe30c3a897d57123937e7038fb2114749'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/0]/ensure: defined content as '{sha256}9a41858833966763d46047d5de4018a225ba5b60b405a06900d700a6982333ed'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/1]/ensure: defined content as '{sha256}bc5d96fed63dbe6badbd2ecc5b1e6c2fc1c5d8b6677b76ebc5350a6fe39e1553'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/key_repository]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Keystone_config[token/revoke_by_id]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/max_active_keys]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Keystone_config[credential/key_repository]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{sha256}3416848459dfd1bd419fb071f68b2ea5d8e6e9867a76d5341dc8d9efed0948cb'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{sha256}ab8ffe3256e845dfb6a4c5088ae25445d4344a295858a1e3c2daa88f27527d4f'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{sha256}847a6fcb41eb25248553082108cde5327c624189fe47009f65d11c3885cab78c'
Oct 13 13:54:07 standalone.localdomain crontab[61387]: (root) LIST (root)
Oct 13 13:54:07 standalone.localdomain crontab[61394]: (root) LIST (keystone)
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone::Cron::Trust_flush/Cron[keystone-manage trust_flush]/ensure: created
Oct 13 13:54:07 standalone.localdomain crontab[61395]: (root) REPLACE (keystone)
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60733]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:07 standalone.localdomain puppet-user[60733]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:07 standalone.localdomain puppet-user[60733]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:07 standalone.localdomain puppet-user[60733]:    (file & line not available)
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone::Cache/Oslo::Cache[keystone_config]/Keystone_config[cache/backend]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60733]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:07 standalone.localdomain puppet-user[60733]:    (file & line not available)
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone::Cache/Oslo::Cache[keystone_config]/Keystone_config[cache/backend_argument]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone::Cache/Oslo::Cache[keystone_config]/Keystone_config[cache/enabled]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone::Cache/Oslo::Cache[keystone_config]/Keystone_config[cache/memcache_servers]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone::Cache/Oslo::Cache[keystone_config]/Keystone_config[cache/tls_enabled]/ensure: created
Oct 13 13:54:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v532: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:07 standalone.localdomain puppet-user[60733]: Warning: Scope(Class[Manila]): The sql_connection parameter is deprecated and will be \
Oct 13 13:54:07 standalone.localdomain puppet-user[60733]: removed in a future release. Use manila::db::database_connection instead
Oct 13 13:54:07 standalone.localdomain systemd[1]: tmp-crun.lxvpIg.mount: Deactivated successfully.
Oct 13 13:54:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6a6ca37cfa9f0e5053b4b5f65562ad93b9cc6ac5c10b8f2ac2713ae975ff1e87-merged.mount: Deactivated successfully.
Oct 13 13:54:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1196a293686b80c8120eb352633d509fd3b9dac6f4b0bb13f0791dda7da389a2-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/debug]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/log_dir]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone::Policy/Oslo::Policy[keystone_config]/Keystone_config[oslo_policy/policy_file]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/connection]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/max_retries]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/db_max_retries]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Oslo::Middleware[keystone_config]/Keystone_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Oslo::Messaging::Default[keystone_config]/Keystone_config[DEFAULT/transport_url]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/driver]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/topics]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{sha256}5675f80f8be7667cc0726512d8a325d75e13603e2fab1ec9630eabace91e4865'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{sha256}b8a7429cbef3ecabe9e4f331123adb372ecfa3e82e76bc33d6cce997b36874bb' to '{sha256}acb6231bff742206c144c5740288a1d6caf14b1704f7c1d0c98aca319c969540'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{sha256}8dbb5887d99b1bd7e8e6700b2c3bcfebc3d6ce5fdb66b8504b224d99ce5981a7'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{sha256}55fd1ffb0fbb31ed1635c6175b7904207ae53c25e37a8de928aeeb6efb2f21eb'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{sha256}eb9bf7ff02774b28c59bc3cc355fe6bea4b7b1b6780453d078fb1558b2d714fd'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{sha256}53f359b7deca28aff7c56ca0ac425ccb8323bc5121f64e4c5f04036898e6d866'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{sha256}ca2fe478af71981984e353dd168b51c9bc993005157b9bff497c9aa7a7125700'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{sha256}197eae5f99bc425f01e493b3390d78b186be5364d81fc5e3a6df370be3c3f734'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{sha256}8cbdbfcf32c28d41e5ca9206eea0e3be34dce45cff3a0c408ad2d23761560052'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{sha256}a6ff35715035af2d397f744cbd2023805fad6fd3dd17a10d225e497fcb7ac808'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{sha256}2086e39dec178d39012a52700badd7b3cc6f2d97c06d197807e0cad8877e5f16'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/ensure: defined content as '{sha256}4350f1dd81b2dcc0bf8903458871e8e60dfcb531dd3230247012f2fb48c5e22b'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{sha256}88f04c415dbd1bf0d074965d37261e056d073b675a047a02e55222818640c6e8'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{sha256}9feefdc48c65f8b73ab77f3fc813d60744dc97b336bbd60e16bbd763b99c5d66'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi]/File[/var/www/cgi-bin/keystone]/group: group changed 'root' to 'keystone'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi]/File[keystone_wsgi]/ensure: defined content as '{sha256}55e95baab868583f1b6646e2dcc61edb7f403991f97d4397478e9a9dd3e7d1f2'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/auth_mellon.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/auth_openidc.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/ssl.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{sha256}19cb9bd7248ea35b8e882d1d21458b114cfa18be60fb8acbf1eb5cc9cab1afb7'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{sha256}ca7e6bca762fed4f5860c5961f7d7873dfa06890a8dae109803984f2a57c857d'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{sha256}3906459aafe799c09305ffbfe0105de3fb9d05a4636cd93e6af9f82e10c8788b'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{sha256}736d628e01f143a2d94f46af14446fe584d90a1a5dc68a9153e5c676f5888b15'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-brotli.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-optional.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_gssapi.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_mellon.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_openidc.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi-python3.conf]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/README]/ensure: removed
Oct 13 13:54:07 standalone.localdomain puppet-user[61032]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:07 standalone.localdomain puppet-user[61032]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:07 standalone.localdomain puppet-user[61032]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:07 standalone.localdomain puppet-user[61032]:    (file & line not available)
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi]/Apache::Vhost[keystone_wsgi]/Concat[10-keystone_wsgi.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi.conf]/ensure: defined content as '{sha256}5894990a279a0728821d339c9e44d16604882211157eb6e608dbf34aa18168ac'
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Notice: Applied catalog in 1.05 seconds
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Application:
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:    Initial environment: production
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:    Converged environment: production
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:          Run mode: user
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Changes:
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:             Total: 83
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Events:
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:           Success: 83
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:             Total: 83
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Resources:
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:           Skipped: 32
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:           Changed: 83
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:       Out of sync: 83
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:             Total: 275
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Time:
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:       Concat file: 0.00
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:    Concat fragment: 0.00
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:              Cron: 0.01
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:            Augeas: 0.02
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:           Package: 0.04
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:              File: 0.11
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:    Keystone config: 0.65
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:    Transaction evaluation: 1.04
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:    Catalog application: 1.05
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:    Config retrieval: 1.46
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:          Last run: 1760363647
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:         Resources: 0.00
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:             Total: 1.05
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]: Version:
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:            Config: 1760363645
Oct 13 13:54:07 standalone.localdomain puppet-user[60519]:            Puppet: 7.10.0
Oct 13 13:54:07 standalone.localdomain puppet-user[61032]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:07 standalone.localdomain puppet-user[61032]:    (file & line not available)
Oct 13 13:54:07 standalone.localdomain puppet-user[61032]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.08 seconds
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]: Notice: /Stage[main]/Memcached/File[/etc/sysconfig/memcached]/content: content changed '{sha256}31f7d20fad86bdd2bc5692619928af8785dc0e9f858863aeece67cff0e4edfd2' to '{sha256}4ad053d3f853d950fd8114b543f5bf4a0fc687729b67a92487b0939abbc8cf1b'
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]: Notice: Applied catalog in 0.02 seconds
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]: Application:
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:    Initial environment: production
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:    Converged environment: production
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:          Run mode: user
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]: Changes:
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:             Total: 1
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]: Events:
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:           Success: 1
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:             Total: 1
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]: Resources:
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:           Changed: 1
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:       Out of sync: 1
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:           Skipped: 9
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:             Total: 10
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]: Time:
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:              File: 0.01
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:    Transaction evaluation: 0.01
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:    Catalog application: 0.02
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:    Config retrieval: 0.11
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:          Last run: 1760363648
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:             Total: 0.02
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]: Version:
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:            Config: 1760363647
Oct 13 13:54:08 standalone.localdomain puppet-user[61032]:            Puppet: 7.10.0
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]: Warning: Unknown variable: 'ensure'. (file: /etc/puppet/modules/manila/manifests/share.pp, line: 50, column: 18)
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]: Warning: Unknown variable: 'manila_generic_backend'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/manila/share.pp, line: 292, column: 48)
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]: Warning: Unknown variable: 'manila_netapp_backend'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/manila/share.pp, line: 294, column: 39)
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]: Warning: Unknown variable: 'manila_vmax_backend'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/manila/share.pp, line: 295, column: 39)
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]: Warning: Unknown variable: 'manila_powermax_backend'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/manila/share.pp, line: 296, column: 39)
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]: Warning: Unknown variable: 'manila_isilon_backend'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/manila/share.pp, line: 297, column: 39)
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]: Warning: Unknown variable: 'manila_unity_backend'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/manila/share.pp, line: 298, column: 39)
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]: Warning: Unknown variable: 'manila_vnx_backend'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/manila/share.pp, line: 299, column: 39)
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]: Warning: Unknown variable: 'manila_flashblade_backend'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/manila/share.pp, line: 300, column: 39)
Oct 13 13:54:08 standalone.localdomain systemd[1]: libpod-0e9939c7623197f57ceb85a30c91b02087819f8de092c3f3f1dd23d3098f949e.scope: Deactivated successfully.
Oct 13 13:54:08 standalone.localdomain systemd[1]: libpod-0e9939c7623197f57ceb85a30c91b02087819f8de092c3f3f1dd23d3098f949e.scope: Consumed 2.064s CPU time.
Oct 13 13:54:08 standalone.localdomain podman[60944]: 2025-10-13 13:54:08.305650635 +0000 UTC m=+2.407539534 container died 0e9939c7623197f57ceb85a30c91b02087819f8de092c3f3f1dd23d3098f949e (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=container-puppet-memcached, name=rhosp17/openstack-memcached, io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'memcached', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::memcached\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 
17.1 memcached, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-memcached-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=container-puppet-memcached, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:43, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1)
Oct 13 13:54:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e9939c7623197f57ceb85a30c91b02087819f8de092c3f3f1dd23d3098f949e-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-87046473c186226e71335669ea31ba5572060b2d006f721960f031336019f3c2-merged.mount: Deactivated successfully.
Oct 13 13:54:08 standalone.localdomain ceph-mon[29756]: pgmap v532: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:08 standalone.localdomain podman[61531]: 2025-10-13 13:54:08.417580762 +0000 UTC m=+0.105516935 container cleanup 0e9939c7623197f57ceb85a30c91b02087819f8de092c3f3f1dd23d3098f949e (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=container-puppet-memcached, container_name=container-puppet-memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, build-date=2025-07-21T12:58:43, distribution-scope=public, name=rhosp17/openstack-memcached, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'memcached', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::memcached\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 
memcached, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 13:54:08 standalone.localdomain systemd[1]: libpod-conmon-0e9939c7623197f57ceb85a30c91b02087819f8de092c3f3f1dd23d3098f949e.scope: Deactivated successfully.
Oct 13 13:54:08 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-memcached --conmon-pidfile /run/container-puppet-memcached.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=memcached --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::memcached
                                                        --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-memcached --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'memcached', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::memcached\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-memcached.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1
Oct 13 13:54:08 standalone.localdomain systemd[1]: libpod-c345d86697c9779b8502f8137e9a8e4a68bc47a5d50271d1fcf60c2cd0fb8295.scope: Deactivated successfully.
Oct 13 13:54:08 standalone.localdomain systemd[1]: libpod-c345d86697c9779b8502f8137e9a8e4a68bc47a5d50271d1fcf60c2cd0fb8295.scope: Consumed 4.630s CPU time.
Oct 13 13:54:08 standalone.localdomain podman[60481]: 2025-10-13 13:54:08.465911882 +0000 UTC m=+5.048345065 container died c345d86697c9779b8502f8137e9a8e4a68bc47a5d50271d1fcf60c2cd0fb8295 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=container-puppet-keystone, description=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, container_name=container-puppet-keystone, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-keystone-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', 'NAME': 'keystone', 'STEP_CONFIG': "include ::tripleo::packages\n['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::base::keystone\n\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, release=1)
Oct 13 13:54:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c345d86697c9779b8502f8137e9a8e4a68bc47a5d50271d1fcf60c2cd0fb8295-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f71faf331040c26a5f484565c527989857decd5d4ab474ba00bf72472b9f3d87-merged.mount: Deactivated successfully.
Oct 13 13:54:08 standalone.localdomain podman[61576]: 2025-10-13 13:54:08.548784498 +0000 UTC m=+0.074413894 container cleanup c345d86697c9779b8502f8137e9a8e4a68bc47a5d50271d1fcf60c2cd0fb8295 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=container-puppet-keystone, batch=17.1_20250721.1, name=rhosp17/openstack-keystone, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-keystone-container, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', 'NAME': 'keystone', 'STEP_CONFIG': "include ::tripleo::packages\n['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::base::keystone\n\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, container_name=container-puppet-keystone, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=)
Oct 13 13:54:08 standalone.localdomain systemd[1]: libpod-conmon-c345d86697c9779b8502f8137e9a8e4a68bc47a5d50271d1fcf60c2cd0fb8295.scope: Deactivated successfully.
Oct 13 13:54:08 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-keystone --conmon-pidfile /run/container-puppet-keystone.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config --env NAME=keystone --env STEP_CONFIG=include ::tripleo::packages
                                                       ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }
                                                       include tripleo::profile::base::keystone
                                                       
                                                       include tripleo::profile::base::database::mysql::client --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-keystone --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', 'NAME': 'keystone', 'STEP_CONFIG': "include ::tripleo::packages\n['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::base::keystone\n\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-keystone.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume 
/etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1
Oct 13 13:54:08 standalone.localdomain puppet-user[61309]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass
Oct 13 13:54:08 standalone.localdomain ovs-vsctl[61699]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Oct 13 13:54:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]: Warning: Unknown variable: '::pacemaker::pcs_010'. (file: /etc/puppet/modules/pacemaker/manifests/resource/bundle.pp, line: 159, column: 6)
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]: Warning: Scope(Apache::Vhost[manila_wsgi]):
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]:     It is possible for the $name parameter to be defined with spaces in it. Although supported on POSIX systems, this
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]:     can lead to cumbersome file names. The $servername attribute has stricter conditions from Apache (i.e. no spaces)
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]:     When $use_servername_for_filenames = true, the $servername parameter, sanitized, is used to construct log and config
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]:     file names.
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]: 
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]:     From version v7.0.0 of the puppetlabs-apache module, this parameter will default to true. From version v8.0.0 of the
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]:     module, the $use_servername_for_filenames will be removed and log/config file names will be derived from the
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]:     sanitized $servername parameter when not explicitly defined.
Oct 13 13:54:08 standalone.localdomain puppet-user[61173]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:08 standalone.localdomain puppet-user[61173]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:08 standalone.localdomain puppet-user[61173]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:08 standalone.localdomain puppet-user[61173]:    (file & line not available)
Oct 13 13:54:08 standalone.localdomain podman[61738]: 2025-10-13 13:54:08.831616921 +0000 UTC m=+0.061465344 container create 8bc349857c7dd0419f3cdb5b6b64f912584f4b858f54b05f34a30a8a4c46cd60 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=container-puppet-nova_metadata, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-api-container, container_name=container-puppet-nova_metadata, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-api, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini', 'NAME': 'nova_metadata', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::nova::metadata\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 13:54:08 standalone.localdomain puppet-user[61309]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:08 standalone.localdomain puppet-user[61309]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:08 standalone.localdomain puppet-user[61309]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:08 standalone.localdomain puppet-user[61309]:    (file & line not available)
Oct 13 13:54:08 standalone.localdomain puppet-user[61336]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:08 standalone.localdomain puppet-user[61336]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:08 standalone.localdomain puppet-user[61336]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:08 standalone.localdomain puppet-user[61336]:    (file & line not available)
Oct 13 13:54:08 standalone.localdomain systemd[1]: Started libpod-conmon-8bc349857c7dd0419f3cdb5b6b64f912584f4b858f54b05f34a30a8a4c46cd60.scope.
Oct 13 13:54:08 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:08 standalone.localdomain puppet-user[61309]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:08 standalone.localdomain puppet-user[61309]:    (file & line not available)
Oct 13 13:54:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5481f769777c9753e381b725ebc95f095fff1a1888ddebb0aedc5c68ba2eae3/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:08 standalone.localdomain podman[61738]: 2025-10-13 13:54:08.797100086 +0000 UTC m=+0.026948529 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 13:54:08 standalone.localdomain podman[61738]: 2025-10-13 13:54:08.902566379 +0000 UTC m=+0.132414802 container init 8bc349857c7dd0419f3cdb5b6b64f912584f4b858f54b05f34a30a8a4c46cd60 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=container-puppet-nova_metadata, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini', 'NAME': 'nova_metadata', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::nova::metadata\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-api-container, name=rhosp17/openstack-nova-api, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_id=tripleo_puppet_step1, 
vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, maintainer=OpenStack TripleO Team, container_name=container-puppet-nova_metadata, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, version=17.1.9, vcs-type=git, build-date=2025-07-21T16:05:11, io.buildah.version=1.33.12)
Oct 13 13:54:08 standalone.localdomain podman[61738]: 2025-10-13 13:54:08.90959113 +0000 UTC m=+0.139439563 container start 8bc349857c7dd0419f3cdb5b6b64f912584f4b858f54b05f34a30a8a4c46cd60 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=container-puppet-nova_metadata, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-api-container, vcs-type=git, build-date=2025-07-21T16:05:11, managed_by=tripleo_ansible, container_name=container-puppet-nova_metadata, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini', 'NAME': 'nova_metadata', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::nova::metadata\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, 
name=rhosp17/openstack-nova-api, vendor=Red Hat, Inc., release=1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public)
Oct 13 13:54:08 standalone.localdomain podman[61738]: 2025-10-13 13:54:08.910348893 +0000 UTC m=+0.140197356 container attach 8bc349857c7dd0419f3cdb5b6b64f912584f4b858f54b05f34a30a8a4c46cd60 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=container-puppet-nova_metadata, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini', 'NAME': 'nova_metadata', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::nova::metadata\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.9, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-api, container_name=container-puppet-nova_metadata, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, com.redhat.component=openstack-nova-api-container, build-date=2025-07-21T16:05:11)
Oct 13 13:54:08 standalone.localdomain puppet-user[61173]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:08 standalone.localdomain puppet-user[61173]:    (file & line not available)
Oct 13 13:54:08 standalone.localdomain puppet-user[60733]: Notice: Compiled catalog for standalone.localdomain in environment production in 1.73 seconds
Oct 13 13:54:08 standalone.localdomain puppet-user[61336]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:08 standalone.localdomain puppet-user[61336]:    (file & line not available)
Oct 13 13:54:09 standalone.localdomain podman[61900]: 2025-10-13 13:54:09.012614891 +0000 UTC m=+0.079287630 container create d74033809a79eafed324d8dd7abd5ec6a93f6ad7bcf010bef6d4c897d543a1fb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:59, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=2, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=container-puppet-nova_libvirt)
Oct 13 13:54:09 standalone.localdomain systemd[1]: Started libpod-conmon-d74033809a79eafed324d8dd7abd5ec6a93f6ad7bcf010bef6d4c897d543a1fb.scope.
Oct 13 13:54:09 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:09 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b1648ca07963a73362905865b5e148d2096f74e5b48b5a7bd40d6efde1ab3f/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:09 standalone.localdomain podman[61900]: 2025-10-13 13:54:08.974961771 +0000 UTC m=+0.041634500 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 13:54:09 standalone.localdomain podman[61900]: 2025-10-13 13:54:09.081809235 +0000 UTC m=+0.148481964 container init d74033809a79eafed324d8dd7abd5ec6a93f6ad7bcf010bef6d4c897d543a1fb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.33.12, container_name=container-puppet-nova_libvirt, config_id=tripleo_puppet_step1, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, release=2, tcib_managed=true, batch=17.1_20250721.1, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:54:09 standalone.localdomain podman[61900]: 2025-10-13 13:54:09.090209107 +0000 UTC m=+0.156881836 container start d74033809a79eafed324d8dd7abd5ec6a93f6ad7bcf010bef6d4c897d543a1fb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, io.buildah.version=1.33.12, container_name=container-puppet-nova_libvirt, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-07-21T14:56:59, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 13 13:54:09 standalone.localdomain podman[61900]: 2025-10-13 13:54:09.090555018 +0000 UTC m=+0.157227747 container attach d74033809a79eafed324d8dd7abd5ec6a93f6ad7bcf010bef6d4c897d543a1fb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=container-puppet-nova_libvirt, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_puppet_step1, version=17.1.9, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:59, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 13 13:54:09 standalone.localdomain puppet-user[61173]: Could not connect to the CIB: Transport endpoint is not connected
Oct 13 13:54:09 standalone.localdomain puppet-user[61336]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \
Oct 13 13:54:09 standalone.localdomain puppet-user[61336]: in a future release. Use nova::cinder::os_region_name instead
Oct 13 13:54:09 standalone.localdomain puppet-user[61336]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \
Oct 13 13:54:09 standalone.localdomain puppet-user[61336]: in a future release. Use nova::cinder::catalog_info instead
Oct 13 13:54:09 standalone.localdomain puppet-user[61336]: Warning: Scope(Class[Nova]): The database_connection parameter is deprecated and will be \
Oct 13 13:54:09 standalone.localdomain puppet-user[61336]: removed in a future release. Use nova::db::database_connection instead
Oct 13 13:54:09 standalone.localdomain puppet-user[61336]: Warning: Scope(Class[Nova]): The api_database_connection parameter is deprecated and will be \
Oct 13 13:54:09 standalone.localdomain puppet-user[61336]: removed in a future release. Use nova::db::api_database_connection instead
Oct 13 13:54:09 standalone.localdomain puppet-user[61173]: Init failed, could not perform requested operations
Oct 13 13:54:09 standalone.localdomain puppet-user[61173]: -:1: parser error : Document is empty
Oct 13 13:54:09 standalone.localdomain puppet-user[61173]: 
Oct 13 13:54:09 standalone.localdomain puppet-user[61173]: ^
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{sha256}3416848459dfd1bd419fb071f68b2ea5d8e6e9867a76d5341dc8d9efed0948cb'
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{sha256}ab8ffe3256e845dfb6a4c5088ae25445d4344a295858a1e3c2daa88f27527d4f'
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{sha256}847a6fcb41eb25248553082108cde5327c624189fe47009f65d11c3885cab78c'
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Oct 13 13:54:09 standalone.localdomain puppet-user[61336]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41)
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Manila_config[DEFAULT/api_paste_config]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Manila_config[DEFAULT/storage_availability_zone]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Manila_config[DEFAULT/rootwrap_config]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Manila_config[DEFAULT/state_path]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Manila_config[DEFAULT/host]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Api/Manila_config[DEFAULT/osapi_share_listen]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Api/Manila_config[DEFAULT/enabled_share_protocols]/ensure: created
Oct 13 13:54:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v533: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Api/Manila_config[DEFAULT/default_share_type]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Api/Manila_config[DEFAULT/osapi_share_workers]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Api/Manila_config[DEFAULT/auth_strategy]/ensure: created
Oct 13 13:54:09 standalone.localdomain crontab[61936]: (root) LIST (root)
Oct 13 13:54:09 standalone.localdomain crontab[61937]: (root) LIST (manila)
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Cron::Db_purge/Cron[manila-manage db purge]/ensure: created
Oct 13 13:54:09 standalone.localdomain crontab[61938]: (root) REPLACE (manila)
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Compute::Nova/Manila_config[nova/auth_url]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Compute::Nova/Manila_config[nova/auth_type]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Compute::Nova/Manila_config[nova/region_name]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Compute::Nova/Manila_config[nova/username]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Compute::Nova/Manila_config[nova/user_domain_name]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Compute::Nova/Manila_config[nova/password]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Compute::Nova/Manila_config[nova/project_name]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Compute::Nova/Manila_config[nova/project_domain_name]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Network::Neutron/Manila_config[DEFAULT/network_api_class]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Network::Neutron/Manila_config[neutron/auth_url]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Network::Neutron/Manila_config[neutron/auth_type]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Network::Neutron/Manila_config[neutron/region_name]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Network::Neutron/Manila_config[neutron/username]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Network::Neutron/Manila_config[neutron/user_domain_name]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Network::Neutron/Manila_config[neutron/password]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Network::Neutron/Manila_config[neutron/project_name]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Network::Neutron/Manila_config[neutron/project_domain_name]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Network::Neutron/Manila_config[DEFAULT/network_plugin_ipv4_enabled]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Network::Neutron/Manila_config[DEFAULT/network_plugin_ipv6_enabled]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Backends/Manila_config[DEFAULT/enabled_share_backends]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Db/Oslo::Db[manila_config]/Manila_config[database/connection]/ensure: created
Oct 13 13:54:09 standalone.localdomain ceph-mon[29756]: pgmap v533: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Db/Oslo::Db[manila_config]/Manila_config[database/max_retries]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Db/Oslo::Db[manila_config]/Manila_config[database/db_max_retries]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Oslo::Messaging::Rabbit[manila_config]/Manila_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[61336]: Warning: Unknown variable: '::nova::scheduler::filter::scheduler_max_attempts'. (file: /etc/puppet/modules/nova/manifests/scheduler.pp, line: 122, column: 29)
Oct 13 13:54:09 standalone.localdomain puppet-user[61336]: Warning: Unknown variable: '::nova::scheduler::filter::periodic_task_interval'. (file: /etc/puppet/modules/nova/manifests/scheduler.pp, line: 123, column: 39)
Oct 13 13:54:09 standalone.localdomain puppet-user[61336]: Warning: Scope(Class[Nova::Scheduler::Filter]): The nova::scheduler::filter::scheduler_max_attempts parameter has been deprecated and \
Oct 13 13:54:09 standalone.localdomain puppet-user[61336]: will be removed in a future release. Use the nova::scheduler::max_attempts parameter instead.
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Oslo::Messaging::Amqp[manila_config]/Manila_config[oslo_messaging_amqp/server_request_prefix]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Oslo::Messaging::Amqp[manila_config]/Manila_config[oslo_messaging_amqp/broadcast_prefix]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Oslo::Messaging::Amqp[manila_config]/Manila_config[oslo_messaging_amqp/group_request_prefix]/ensure: created
Oct 13 13:54:09 standalone.localdomain puppet-user[61309]: Warning: Scope(Class[Neutron::Plugins::Ml2::Sriov_driver]): The vnic_type_blacklist parameter is deprecated. Use vnic_type_prohibit_list instead
Oct 13 13:54:09 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Oslo::Messaging::Amqp[manila_config]/Manila_config[oslo_messaging_amqp/container_name]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Oslo::Messaging::Amqp[manila_config]/Manila_config[oslo_messaging_amqp/idle_timeout]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Oslo::Messaging::Amqp[manila_config]/Manila_config[oslo_messaging_amqp/trace]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]: Warning: Unknown variable: '::pacemaker::pcs_010'. (file: /etc/puppet/modules/pacemaker/manifests/resource/bundle.pp, line: 159, column: 6)
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Oslo::Messaging::Default[manila_config]/Manila_config[DEFAULT/transport_url]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Oslo::Messaging::Default[manila_config]/Manila_config[DEFAULT/control_exchange]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Oslo::Messaging::Notifications[manila_config]/Manila_config[oslo_messaging_notifications/driver]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Oslo::Messaging::Notifications[manila_config]/Manila_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila/Oslo::Concurrency[manila_config]/Manila_config[oslo_concurrency/lock_path]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Logging/Oslo::Log[manila_config]/Manila_config[DEFAULT/debug]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Logging/Oslo::Log[manila_config]/Manila_config[DEFAULT/log_dir]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]: Notice: Compiled catalog for standalone.localdomain in environment production in 1.42 seconds
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Keystone::Authtoken/Keystone::Resource::Authtoken[manila_config]/Manila_config[keystone_authtoken/www_authenticate_uri]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Keystone::Authtoken/Keystone::Resource::Authtoken[manila_config]/Manila_config[keystone_authtoken/auth_type]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Keystone::Authtoken/Keystone::Resource::Authtoken[manila_config]/Manila_config[keystone_authtoken/memcache_use_advanced_pool]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Keystone::Authtoken/Keystone::Resource::Authtoken[manila_config]/Manila_config[keystone_authtoken/memcached_servers]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Keystone::Authtoken/Keystone::Resource::Authtoken[manila_config]/Manila_config[keystone_authtoken/region_name]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Warning: Scope(Apache::Vhost[nova_api_wsgi]):
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]:     It is possible for the $name parameter to be defined with spaces in it. Although supported on POSIX systems, this
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]:     can lead to cumbersome file names. The $servername attribute has stricter conditions from Apache (i.e. no spaces)
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]:     When $use_servername_for_filenames = true, the $servername parameter, sanitized, is used to construct log and config
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]:     file names.
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: 
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]:     From version v7.0.0 of the puppetlabs-apache module, this parameter will default to true. From version v8.0.0 of the
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]:     module, the $use_servername_for_filenames will be removed and log/config file names will be derived from the
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]:     sanitized $servername parameter when not explicitly defined.
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Keystone::Authtoken/Keystone::Resource::Authtoken[manila_config]/Manila_config[keystone_authtoken/auth_url]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Keystone::Authtoken/Keystone::Resource::Authtoken[manila_config]/Manila_config[keystone_authtoken/username]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Keystone::Authtoken/Keystone::Resource::Authtoken[manila_config]/Manila_config[keystone_authtoken/password]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Keystone::Authtoken/Keystone::Resource::Authtoken[manila_config]/Manila_config[keystone_authtoken/user_domain_name]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Keystone::Authtoken/Keystone::Resource::Authtoken[manila_config]/Manila_config[keystone_authtoken/project_name]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Keystone::Authtoken/Keystone::Resource::Authtoken[manila_config]/Manila_config[keystone_authtoken/project_domain_name]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Keystone::Authtoken/Keystone::Resource::Authtoken[manila_config]/Manila_config[keystone_authtoken/interface]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]: Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{sha256}79f7de8818e5d7d02ba998b9655b6df5248c9e15df5cda769a34f6cb5431eae7'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{sha256}5430101641470fdaa6627ed634ccd56656fefff7a9a0650441c6a9c2f1086132'
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]: Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{sha256}df7b18b99470a82afb1aebff284d910b04ceb628c9ac89d8e2fe9fb4682d5fc9' to '{sha256}f625a41e1bbb0e235dfcc63eadfa059cb11b5d8ffe338b9aa3b21dc2e3db610f'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{sha256}b8a7429cbef3ecabe9e4f331123adb372ecfa3e82e76bc33d6cce997b36874bb' to '{sha256}acb6231bff742206c144c5740288a1d6caf14b1704f7c1d0c98aca319c969540'
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]: Notice: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{sha256}8dbb5887d99b1bd7e8e6700b2c3bcfebc3d6ce5fdb66b8504b224d99ce5981a7'
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: Compiled catalog for standalone.localdomain in environment production in 1.65 seconds
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{sha256}55fd1ffb0fbb31ed1635c6175b7904207ae53c25e37a8de928aeeb6efb2f21eb'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{sha256}eb9bf7ff02774b28c59bc3cc355fe6bea4b7b1b6780453d078fb1558b2d714fd'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{sha256}53f359b7deca28aff7c56ca0ac425ccb8323bc5121f64e4c5f04036898e6d866'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{sha256}ca2fe478af71981984e353dd168b51c9bc993005157b9bff497c9aa7a7125700'
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]: Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/ensure: defined content as '{sha256}750609517a439d6b5e2bf790f41c729c3920449fd955ed758b147fe38e446fac'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{sha256}197eae5f99bc425f01e493b3390d78b186be5364d81fc5e3a6df370be3c3f734'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{sha256}8cbdbfcf32c28d41e5ca9206eea0e3be34dce45cff3a0c408ad2d23761560052'
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]: Notice: Applied catalog in 0.21 seconds
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]: Application:
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:    Initial environment: production
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:    Converged environment: production
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:          Run mode: user
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]: Changes:
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:             Total: 4
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]: Events:
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:           Success: 4
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:             Total: 4
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]: Resources:
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:           Skipped: 162
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:           Changed: 4
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:       Out of sync: 4
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:             Total: 167
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]: Time:
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:              File: 0.02
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:    Transaction evaluation: 0.21
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:    Catalog application: 0.21
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:    Config retrieval: 1.52
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:          Last run: 1760363650
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:             Total: 0.21
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]: Version:
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:            Config: 1760363648
Oct 13 13:54:10 standalone.localdomain puppet-user[61173]:            Puppet: 7.10.0
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{sha256}a6ff35715035af2d397f744cbd2023805fad6fd3dd17a10d225e497fcb7ac808'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{sha256}2086e39dec178d39012a52700badd7b3cc6f2d97c06d197807e0cad8877e5f16'
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: Compiled catalog for standalone.localdomain in environment production in 1.66 seconds
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/ensure: defined content as '{sha256}4350f1dd81b2dcc0bf8903458871e8e60dfcb531dd3230247012f2fb48c5e22b'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{sha256}88f04c415dbd1bf0d074965d37261e056d073b675a047a02e55222818640c6e8'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{sha256}9feefdc48c65f8b73ab77f3fc813d60744dc97b336bbd60e16bbd763b99c5d66'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Policy/Oslo::Policy[manila_config]/Manila_config[oslo_policy/policy_file]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Api/Oslo::Middleware[manila_config]/Manila_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/ssl.conf]/ensure: removed
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed
Oct 13 13:54:10 standalone.localdomain ovs-vsctl[61960]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{sha256}19cb9bd7248ea35b8e882d1d21458b114cfa18be60fb8acbf1eb5cc9cab1afb7'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{sha256}ca7e6bca762fed4f5860c5961f7d7873dfa06890a8dae109803984f2a57c857d'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Wsgi::Apache/Openstacklib::Wsgi::Apache[manila_wsgi]/File[/var/www/cgi-bin/manila]/group: group changed 'root' to 'manila'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Wsgi::Apache/Openstacklib::Wsgi::Apache[manila_wsgi]/File[manila_wsgi]/ensure: defined content as '{sha256}ca07199ace8325d094e754ccdfefb2b5a78a13cd657d68221b3a68b53b0dd893'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Tripleo::Profile::Base::Manila::Share/Manila::Backend::Cephfs[cephfs]/Manila_config[cephfs/driver_handles_share_servers]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Tripleo::Profile::Base::Manila::Share/Manila::Backend::Cephfs[cephfs]/Manila_config[cephfs/share_backend_name]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Tripleo::Profile::Base::Manila::Share/Manila::Backend::Cephfs[cephfs]/Manila_config[cephfs/share_driver]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Tripleo::Profile::Base::Manila::Share/Manila::Backend::Cephfs[cephfs]/Manila_config[cephfs/cephfs_conf_path]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Tripleo::Profile::Base::Manila::Share/Manila::Backend::Cephfs[cephfs]/Manila_config[cephfs/cephfs_auth_id]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Tripleo::Profile::Base::Manila::Share/Manila::Backend::Cephfs[cephfs]/Manila_config[cephfs/cephfs_cluster_name]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Tripleo::Profile::Base::Manila::Share/Manila::Backend::Cephfs[cephfs]/Manila_config[cephfs/cephfs_ganesha_server_ip]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Tripleo::Profile::Base::Manila::Share/Manila::Backend::Cephfs[cephfs]/Manila_config[cephfs/cephfs_ganesha_server_is_remote]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Tripleo::Profile::Base::Manila::Share/Manila::Backend::Cephfs[cephfs]/Manila_config[cephfs/cephfs_volume_mode]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Tripleo::Profile::Base::Manila::Share/Manila::Backend::Cephfs[cephfs]/Manila_config[cephfs/cephfs_protocol_helper_type]/ensure: created
Oct 13 13:54:10 standalone.localdomain ovs-vsctl[62006]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{sha256}3906459aafe799c09305ffbfe0105de3fb9d05a4636cd93e6af9f82e10c8788b'
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{sha256}736d628e01f143a2d94f46af14446fe584d90a1a5dc68a9153e5c676f5888b15'
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{sha256}3416848459dfd1bd419fb071f68b2ea5d8e6e9867a76d5341dc8d9efed0948cb'
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{sha256}ab8ffe3256e845dfb6a4c5088ae25445d4344a295858a1e3c2daa88f27527d4f'
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{sha256}847a6fcb41eb25248553082108cde5327c624189fe47009f65d11c3885cab78c'
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-brotli.conf]/ensure: removed
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-optional.conf]/ensure: removed
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi-python3.conf]/ensure: removed
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/README]/ensure: removed
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: /Stage[main]/Manila::Wsgi::Apache/Openstacklib::Wsgi::Apache[manila_wsgi]/Apache::Vhost[manila_wsgi]/Concat[10-manila_wsgi.conf]/File[/etc/httpd/conf.d/10-manila_wsgi.conf]/ensure: defined content as '{sha256}e833d10456a7678869e13a5466f835614312c4aa4cb2bffd2f6962ee16390d20'
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61905]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:10 standalone.localdomain puppet-user[61905]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:10 standalone.localdomain puppet-user[61905]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:10 standalone.localdomain puppet-user[61905]:    (file & line not available)
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Notice: Applied catalog in 1.71 seconds
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Application:
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:    Initial environment: production
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:    Converged environment: production
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:          Run mode: user
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Changes:
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:             Total: 115
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Events:
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:           Success: 115
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:             Total: 115
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Resources:
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:           Changed: 115
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:       Out of sync: 115
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:           Skipped: 49
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:             Total: 331
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Time:
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:         Resources: 0.00
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:       Concat file: 0.00
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:    Concat fragment: 0.00
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:            Augeas: 0.02
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:              Cron: 0.02
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:           Package: 0.03
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:              File: 0.13
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:     Manila config: 1.23
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:    Transaction evaluation: 1.70
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:    Catalog application: 1.71
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:    Config retrieval: 1.91
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:          Last run: 1760363650
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:             Total: 1.71
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]: Version:
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:            Config: 1760363647
Oct 13 13:54:10 standalone.localdomain puppet-user[60733]:            Puppet: 7.10.0
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{sha256}3416848459dfd1bd419fb071f68b2ea5d8e6e9867a76d5341dc8d9efed0948cb'
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{sha256}ab8ffe3256e845dfb6a4c5088ae25445d4344a295858a1e3c2daa88f27527d4f'
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61905]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:10 standalone.localdomain puppet-user[61905]:    (file & line not available)
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{sha256}847a6fcb41eb25248553082108cde5327c624189fe47009f65d11c3885cab78c'
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Oct 13 13:54:10 standalone.localdomain puppet-user[61935]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:10 standalone.localdomain puppet-user[61935]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:10 standalone.localdomain puppet-user[61935]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:10 standalone.localdomain puppet-user[61935]:    (file & line not available)
Oct 13 13:54:10 standalone.localdomain systemd[1]: libpod-20e55a940c0b3ab75f4fd8e0729f7e2cb59429f76437b23c77c9f1116ef115b5.scope: Deactivated successfully.
Oct 13 13:54:10 standalone.localdomain systemd[1]: libpod-20e55a940c0b3ab75f4fd8e0729f7e2cb59429f76437b23c77c9f1116ef115b5.scope: Consumed 4.151s CPU time.
Oct 13 13:54:10 standalone.localdomain podman[61108]: 2025-10-13 13:54:10.865380823 +0000 UTC m=+4.547427319 container died 20e55a940c0b3ab75f4fd8e0729f7e2cb59429f76437b23c77c9f1116ef115b5 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=container-puppet-mysql, version=17.1.9, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, container_name=container-puppet-mysql, architecture=x86_64, config_id=tripleo_puppet_step1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-mariadb, batch=17.1_20250721.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'mysql', 'STEP_CONFIG': "include ::tripleo::packages\n['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\nexec {'wait-for-settle': command => '/bin/true' }\ninclude tripleo::profile::pacemaker::database::mysql_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true)
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61935]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61935]:    (file & line not available)
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{sha256}8111b284f1dfc6f050def1ba208f3afefd52da48e2a6797afe626248081b9e0f'
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{sha256}b8a7429cbef3ecabe9e4f331123adb372ecfa3e82e76bc33d6cce997b36874bb' to '{sha256}acb6231bff742206c144c5740288a1d6caf14b1704f7c1d0c98aca319c969540'
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Notifications::Nova/Neutron_config[nova/auth_url]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Notifications::Nova/Neutron_config[nova/username]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{sha256}8dbb5887d99b1bd7e8e6700b2c3bcfebc3d6ce5fdb66b8504b224d99ce5981a7'
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Notifications::Nova/Neutron_config[nova/password]/ensure: created
Oct 13 13:54:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-20e55a940c0b3ab75f4fd8e0729f7e2cb59429f76437b23c77c9f1116ef115b5-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{sha256}55fd1ffb0fbb31ed1635c6175b7904207ae53c25e37a8de928aeeb6efb2f21eb'
Oct 13 13:54:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4bbf92a4c859842b06884bea493d8bbbda1c9f40c81fd52f77228f976bab6cee-merged.mount: Deactivated successfully.
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{sha256}eb9bf7ff02774b28c59bc3cc355fe6bea4b7b1b6780453d078fb1558b2d714fd'
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{sha256}53f359b7deca28aff7c56ca0ac425ccb8323bc5121f64e4c5f04036898e6d866'
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Notifications::Nova/Neutron_config[nova/project_domain_name]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Notifications::Nova/Neutron_config[nova/project_name]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{sha256}ca2fe478af71981984e353dd168b51c9bc993005157b9bff497c9aa7a7125700'
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Notifications::Nova/Neutron_config[nova/user_domain_name]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Notifications::Nova/Neutron_config[nova/region_name]/ensure: created
Oct 13 13:54:10 standalone.localdomain podman[62122]: 2025-10-13 13:54:10.96729557 +0000 UTC m=+0.093377002 container cleanup 20e55a940c0b3ab75f4fd8e0729f7e2cb59429f76437b23c77c9f1116ef115b5 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=container-puppet-mysql, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, release=1, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, distribution-scope=public, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'mysql', 'STEP_CONFIG': "include ::tripleo::packages\n['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\nexec {'wait-for-settle': command => '/bin/true' }\ninclude tripleo::profile::pacemaker::database::mysql_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, container_name=container-puppet-mysql)
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{sha256}197eae5f99bc425f01e493b3390d78b186be5364d81fc5e3a6df370be3c3f734'
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Notifications::Nova/Neutron_config[nova/endpoint_type]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{sha256}8cbdbfcf32c28d41e5ca9206eea0e3be34dce45cff3a0c408ad2d23761560052'
Oct 13 13:54:10 standalone.localdomain systemd[1]: libpod-conmon-20e55a940c0b3ab75f4fd8e0729f7e2cb59429f76437b23c77c9f1116ef115b5.scope: Deactivated successfully.
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Notifications::Nova/Neutron_config[nova/auth_type]/ensure: created
Oct 13 13:54:10 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-mysql --conmon-pidfile /run/container-puppet-mysql.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=mysql --env STEP_CONFIG=include ::tripleo::packages
                                                       ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }
                                                       exec {'wait-for-settle': command => '/bin/true' }
                                                       include tripleo::profile::pacemaker::database::mysql_bundle --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-mysql --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file', 'NAME': 'mysql', 'STEP_CONFIG': "include ::tripleo::packages\n['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\nexec {'wait-for-settle': command => '/bin/true' }\ninclude tripleo::profile::pacemaker::database::mysql_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-mysql.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{sha256}a6ff35715035af2d397f744cbd2023805fad6fd3dd17a10d225e497fcb7ac808'
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{sha256}2086e39dec178d39012a52700badd7b3cc6f2d97c06d197807e0cad8877e5f16'
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/ensure: defined content as '{sha256}4350f1dd81b2dcc0bf8903458871e8e60dfcb531dd3230247012f2fb48c5e22b'
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Placement/Neutron_config[placement/auth_url]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{sha256}88f04c415dbd1bf0d074965d37261e056d073b675a047a02e55222818640c6e8'
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Placement/Neutron_config[placement/username]/ensure: created
Oct 13 13:54:10 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{sha256}9feefdc48c65f8b73ab77f3fc813d60744dc97b336bbd60e16bbd763b99c5d66'
Oct 13 13:54:10 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Placement/Neutron_config[placement/password]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Placement/Neutron_config[placement/project_domain_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/ssl.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Placement/Neutron_config[placement/project_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Placement/Neutron_config[placement/user_domain_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{sha256}19cb9bd7248ea35b8e882d1d21458b114cfa18be60fb8acbf1eb5cc9cab1afb7'
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{sha256}ca7e6bca762fed4f5860c5961f7d7873dfa06890a8dae109803984f2a57c857d'
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[nova_api_wsgi]/ensure: defined content as '{sha256}901cc9636a87a089b1b6620430d7a36909add0ca7dc2216b74d7bb9dc627d776'
Oct 13 13:54:11 standalone.localdomain puppet-user[61905]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \
Oct 13 13:54:11 standalone.localdomain puppet-user[61905]: in a future release. Use nova::cinder::os_region_name instead
Oct 13 13:54:11 standalone.localdomain puppet-user[61905]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \
Oct 13 13:54:11 standalone.localdomain puppet-user[61905]: in a future release. Use nova::cinder::catalog_info instead
Oct 13 13:54:11 standalone.localdomain puppet-user[61905]: Warning: Scope(Class[Nova]): The database_connection parameter is deprecated and will be \
Oct 13 13:54:11 standalone.localdomain puppet-user[61905]: removed in a future realse. Use nova::db::database_connection instead
Oct 13 13:54:11 standalone.localdomain puppet-user[61905]: Warning: Scope(Class[Nova]): The api_database_connection parameter is deprecated and will be \
Oct 13 13:54:11 standalone.localdomain puppet-user[61905]: removed in a future release. Use nova::db::api_database_connection instead
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Placement/Neutron_config[placement/region_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: in a future release. Use nova::cinder::os_region_name instead
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: in a future release. Use nova::cinder::catalog_info instead
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: Warning: Scope(Class[Nova]): The database_connection parameter is deprecated and will be \
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: removed in a future release. Use nova::db::database_connection instead
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: Warning: Scope(Class[Nova]): The api_database_connection parameter is deprecated and will be \
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: removed in a future release. Use nova::db::api_database_connection instead
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server::Placement/Neutron_config[placement/auth_type]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/max_l3_agents_per_router]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/api_workers]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/rpc_workers]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/agent_down_time]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_scheduler_driver]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/allow_automatic_l3agent_failover]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61905]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41)
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server/Neutron_config[ovs/igmp_snooping_enable]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_port]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41)
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_security_group]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_network_gateway]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_packet_filter]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ml2::Sriov/Neutron_sriov_agent_config[sriov_nic/physical_device_mappings]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ml2::Sriov/Neutron_sriov_agent_config[agent/extensions]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ml2::Sriov/Neutron_sriov_agent_config[agent/polling_interval]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ml2::Sriov/Neutron_sriov_agent_config[securitygroup/firewall_driver]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ml2::Sriov/Neutron_sriov_agent_config[sriov_nic/resource_provider_default_hypervisor]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_isolated_metadata]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/force_metadata]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_metadata_network]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/debug]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/state_path]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/resync_interval]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/interface_driver]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/root_helper]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dhcp_broadcast_reply]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_dns_servers]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_local_resolv]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_enable_addr6_list]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5)
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5)
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5)
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set.
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. Use the same parameter in nova::glance
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Oct 13 13:54:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v534: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created
Oct 13 13:54:11 standalone.localdomain podman[62200]: 2025-10-13 13:54:11.441540465 +0000 UTC m=+0.113473055 container create 23209d7039fdef74b6a865f6f7a80565d8cc9a01bf213354ccd4c315bbd87e4f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, vcs-type=git, container_name=container-puppet-ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.9, build-date=2025-07-21T13:28:44)
Oct 13 13:54:11 standalone.localdomain podman[62200]: 2025-10-13 13:54:11.364878775 +0000 UTC m=+0.036811385 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created
Oct 13 13:54:11 standalone.localdomain podman[60695]: 2025-10-13 13:54:11.472736491 +0000 UTC m=+6.522970416 container died a60806ae6781c47faafde023f39dbdb243c633403c9920899c32304e4e298498 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=container-puppet-manila, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-manila-api, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 manila-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, build-date=2025-07-21T16:06:43, architecture=x86_64, com.redhat.component=openstack-manila-api-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,manila_config,manila_api_paste_ini,manila_config,manila_scheduler_paste_ini,manila_config,file,concat,file_line', 'NAME': 'manila', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::manila::api\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::manila::scheduler\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::pacemaker::manila::share_bundle\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, container_name=container-puppet-manila, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true)
Oct 13 13:54:11 standalone.localdomain systemd[1]: Started libpod-conmon-23209d7039fdef74b6a865f6f7a80565d8cc9a01bf213354ccd4c315bbd87e4f.scope.
Oct 13 13:54:11 standalone.localdomain systemd[1]: libpod-a60806ae6781c47faafde023f39dbdb243c633403c9920899c32304e4e298498.scope: Deactivated successfully.
Oct 13 13:54:11 standalone.localdomain systemd[1]: libpod-a60806ae6781c47faafde023f39dbdb243c633403c9920899c32304e4e298498.scope: Consumed 6.069s CPU time.
Oct 13 13:54:11 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created
Oct 13 13:54:11 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7f16864f80a2c037ac48def8b5c01d605b8cc2e9a20cb2523be725cfabe631b/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:11 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7f16864f80a2c037ac48def8b5c01d605b8cc2e9a20cb2523be725cfabe631b/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created
Oct 13 13:54:11 standalone.localdomain podman[62200]: 2025-10-13 13:54:11.490518854 +0000 UTC m=+0.162451434 container init 23209d7039fdef74b6a865f6f7a80565d8cc9a01bf213354ccd4c315bbd87e4f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=container-puppet-ovn_controller, build-date=2025-07-21T13:28:44, release=1, com.redhat.component=openstack-ovn-controller-container)
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created
Oct 13 13:54:11 standalone.localdomain podman[62200]: 2025-10-13 13:54:11.496153353 +0000 UTC m=+0.168085933 container start 23209d7039fdef74b6a865f6f7a80565d8cc9a01bf213354ccd4c315bbd87e4f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, container_name=container-puppet-ovn_controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, 
vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9)
Oct 13 13:54:11 standalone.localdomain podman[62200]: 2025-10-13 13:54:11.496290847 +0000 UTC m=+0.168223427 container attach 23209d7039fdef74b6a865f6f7a80565d8cc9a01bf213354ccd4c315bbd87e4f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=container-puppet-ovn_controller, build-date=2025-07-21T13:28:44, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, version=17.1.9, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/www_authenticate_uri]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_type]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created
Oct 13 13:54:11 standalone.localdomain podman[62238]: 2025-10-13 13:54:11.546964267 +0000 UTC m=+0.066969400 container cleanup a60806ae6781c47faafde023f39dbdb243c633403c9920899c32304e4e298498 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=container-puppet-manila, description=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T16:06:43, release=1, com.redhat.component=openstack-manila-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, batch=17.1_20250721.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,manila_config,manila_api_paste_ini,manila_config,manila_scheduler_paste_ini,manila_config,file,concat,file_line', 'NAME': 'manila', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::manila::api\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::manila::scheduler\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::pacemaker::manila::share_bundle\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-manila-api, architecture=x86_64, config_id=tripleo_puppet_step1, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, version=17.1.9, container_name=container-puppet-manila, managed_by=tripleo_ansible)
Oct 13 13:54:11 standalone.localdomain systemd[1]: libpod-conmon-a60806ae6781c47faafde023f39dbdb243c633403c9920899c32304e4e298498.scope: Deactivated successfully.
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-manila --conmon-pidfile /run/container-puppet-manila.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,manila_config,manila_api_paste_ini,manila_config,manila_scheduler_paste_ini,manila_config,file,concat,file_line --env NAME=manila --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::manila::api
                                                       include tripleo::profile::base::database::mysql::client
                                                       include tripleo::profile::base::manila::scheduler
                                                       include tripleo::profile::base::database::mysql::client
                                                       include tripleo::profile::pacemaker::manila::share_bundle
                                                       include tripleo::profile::base::database::mysql::client --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-manila --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,manila_config,manila_api_paste_ini,manila_config,manila_scheduler_paste_ini,manila_config,file,concat,file_line', 'NAME': 'manila', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::manila::api\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::manila::scheduler\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::pacemaker::manila::share_bundle\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt 
path=/var/log/containers/stdouts/container-puppet-manila.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61935]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used.
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/instance_name_template]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/enabled_apis]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Api/Nova_config[wsgi/api_paste_config]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen_port]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_workers]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Api/Nova_config[api/use_forwarded_for]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Api/Nova_config[api/max_limit]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcache_use_advanced_pool]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/memcached_servers]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/region_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_url]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/username]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/password]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/user_domain_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_domain_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/interface]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{sha256}6c2ed8ff71ada905bd1020b95469f141bdf523b3ef9db77a58b86b09ad28324c'
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{sha256}b8a7429cbef3ecabe9e4f331123adb372ecfa3e82e76bc33d6cce997b36874bb' to '{sha256}acb6231bff742206c144c5740288a1d6caf14b1704f7c1d0c98aca319c969540'
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{sha256}8dbb5887d99b1bd7e8e6700b2c3bcfebc3d6ce5fdb66b8504b224d99ce5981a7'
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{sha256}55fd1ffb0fbb31ed1635c6175b7904207ae53c25e37a8de928aeeb6efb2f21eb'
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{sha256}eb9bf7ff02774b28c59bc3cc355fe6bea4b7b1b6780453d078fb1558b2d714fd'
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{sha256}53f359b7deca28aff7c56ca0ac425ccb8323bc5121f64e4c5f04036898e6d866'
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{sha256}ca2fe478af71981984e353dd168b51c9bc993005157b9bff497c9aa7a7125700'
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{sha256}197eae5f99bc425f01e493b3390d78b186be5364d81fc5e3a6df370be3c3f734'
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{sha256}8cbdbfcf32c28d41e5ca9206eea0e3be34dce45cff3a0c408ad2d23761560052'
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{sha256}a6ff35715035af2d397f744cbd2023805fad6fd3dd17a10d225e497fcb7ac808'
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{sha256}2086e39dec178d39012a52700badd7b3cc6f2d97c06d197807e0cad8877e5f16'
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/ssl.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/ensure: defined content as '{sha256}4350f1dd81b2dcc0bf8903458871e8e60dfcb531dd3230247012f2fb48c5e22b'
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{sha256}88f04c415dbd1bf0d074965d37261e056d073b675a047a02e55222818640c6e8'
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{sha256}9feefdc48c65f8b73ab77f3fc813d60744dc97b336bbd60e16bbd763b99c5d66'
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-brotli.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-optional.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/README]/ensure: removed
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_retries]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_max_retries]/ensure: created
Oct 13 13:54:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee02a78034d58b97a791f95c9d2a559665d6d7cb40b95ff70c9188da0f009dd0-merged.mount: Deactivated successfully.
Oct 13 13:54:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a60806ae6781c47faafde023f39dbdb243c633403c9920899c32304e4e298498-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_file]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Server/Oslo::Middleware[neutron_config]/Neutron_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created
Oct 13 13:54:11 standalone.localdomain podman[62326]: 2025-10-13 13:54:11.945939345 +0000 UTC m=+0.067647890 container create da25f04b81165f539615f5b90c3c6210e742164ef5310656704da23cb7def95b (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=container-puppet-rabbitmq, name=rhosp17/openstack-rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 rabbitmq, container_name=container-puppet-rabbitmq, architecture=x86_64, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_puppet_step1, tcib_managed=true, build-date=2025-07-21T13:08:05, com.redhat.component=openstack-rabbitmq-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file,file_line', 'NAME': 'rabbitmq', 'STEP_CONFIG': "include ::tripleo::packages\n['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::pacemaker::rabbitmq_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible)
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[geneve]/Neutron_plugin_ml2[ml2_type_geneve/max_header_size]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[geneve]/Neutron_plugin_ml2[ml2_type_geneve/vni_ranges]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2::Ovn/Neutron_plugin_ml2[ovn/ovn_nb_connection]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2::Ovn/Neutron_plugin_ml2[ovn/ovn_sb_connection]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2::Ovn/Neutron_plugin_ml2[ovn/ovsdb_connection_timeout]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2::Ovn/Neutron_plugin_ml2[ovn/ovsdb_probe_interval]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2::Ovn/Neutron_plugin_ml2[ovn/neutron_sync_mode]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created
Oct 13 13:54:11 standalone.localdomain systemd[1]: Started libpod-conmon-da25f04b81165f539615f5b90c3c6210e742164ef5310656704da23cb7def95b.scope.
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2::Ovn/Neutron_plugin_ml2[ovn/ovn_metadata_enabled]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2::Ovn/Neutron_plugin_ml2[ovn/enable_distributed_floating_ip]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2::Ovn/Neutron_plugin_ml2[ovn/dns_servers]/ensure: created
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2::Ovn/Neutron_plugin_ml2[ovn/ovn_emit_need_to_frag]/ensure: created
Oct 13 13:54:11 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:11 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2::Ovn/Neutron_plugin_ml2[network_log/rate_limit]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]: Notice: /Stage[main]/Neutron::Plugins::Ml2::Ovn/Neutron_plugin_ml2[network_log/burst_limit]/ensure: created
Oct 13 13:54:12 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/363ca58f2fdd66d3f83e207fb876fe212480858dd07aa9a917453d678975c0bc/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:12 standalone.localdomain podman[62326]: 2025-10-13 13:54:11.911879023 +0000 UTC m=+0.033587568 image pull  registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1
Oct 13 13:54:12 standalone.localdomain podman[62326]: 2025-10-13 13:54:12.015899153 +0000 UTC m=+0.137607678 container init da25f04b81165f539615f5b90c3c6210e742164ef5310656704da23cb7def95b (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=container-puppet-rabbitmq, release=1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, config_id=tripleo_puppet_step1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file,file_line', 'NAME': 'rabbitmq', 'STEP_CONFIG': "include ::tripleo::packages\n['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::pacemaker::rabbitmq_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=container-puppet-rabbitmq, name=rhosp17/openstack-rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 13:54:12 standalone.localdomain podman[62326]: 2025-10-13 13:54:12.026699497 +0000 UTC m=+0.148408042 container start da25f04b81165f539615f5b90c3c6210e742164ef5310656704da23cb7def95b (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=container-puppet-rabbitmq, container_name=container-puppet-rabbitmq, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, config_id=tripleo_puppet_step1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file,file_line', 'NAME': 'rabbitmq', 'STEP_CONFIG': "include ::tripleo::packages\n['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::pacemaker::rabbitmq_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, com.redhat.component=openstack-rabbitmq-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 13:54:12 standalone.localdomain podman[62326]: 2025-10-13 13:54:12.027031607 +0000 UTC m=+0.148740122 container attach da25f04b81165f539615f5b90c3c6210e742164ef5310656704da23cb7def95b (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=container-puppet-rabbitmq, build-date=2025-07-21T13:08:05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=container-puppet-rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=openstack-rabbitmq-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file,file_line', 'NAME': 'rabbitmq', 'STEP_CONFIG': "include ::tripleo::packages\n['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::pacemaker::rabbitmq_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-rabbitmq, vcs-type=git, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_puppet_step1)
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]: Notice: Applied catalog in 1.40 seconds
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]: Application:
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:    Initial environment: production
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:    Converged environment: production
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:          Run mode: user
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]: Changes:
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:             Total: 157
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]: Events:
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:           Success: 157
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:             Total: 157
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]: Resources:
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:           Changed: 157
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:       Out of sync: 157
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:           Skipped: 48
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:             Total: 439
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]: Time:
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:       Concat file: 0.00
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:         Resources: 0.00
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:    Concat fragment: 0.00
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:    Neutron sriov agent config: 0.01
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:    Ovn metadata agent config: 0.02
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:            Augeas: 0.02
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:           Package: 0.02
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:    Neutron dhcp agent config: 0.03
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:    Neutron plugin ml2: 0.05
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:              File: 0.07
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:    Neutron config: 0.94
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:    Transaction evaluation: 1.38
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:    Catalog application: 1.40
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:    Config retrieval: 1.82
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:          Last run: 1760363652
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:             Total: 1.40
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]: Version:
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:            Config: 1760363648
Oct 13 13:54:12 standalone.localdomain puppet-user[61309]:            Puppet: 7.10.0
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Warning: Scope(Apache::Vhost[nova_metadata_wsgi]):
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]:     It is possible for the $name parameter to be defined with spaces in it. Although supported on POSIX systems, this
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]:     can lead to cumbersome file names. The $servername attribute has stricter conditions from Apache (i.e. no spaces)
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]:     When $use_servername_for_filenames = true, the $servername parameter, sanitized, is used to construct log and config
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]:     file names.
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: 
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]:     From version v7.0.0 of the puppetlabs-apache module, this parameter will default to true. From version v8.0.0 of the
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]:     module, the $use_servername_for_filenames will be removed and log/config file names will be derived from the
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]:     sanitized $servername parameter when not explicitly defined.
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Conductor/Nova_config[conductor/workers]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/workers]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/discover_hosts_in_cells_interval]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/query_placement_for_image_type_support]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/limit_tenants_to_placement_aggregate]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/placement_aggregate_required_for_tenants]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/enable_isolated_aggregate_filtering]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: Compiled catalog for standalone.localdomain in environment production in 1.48 seconds
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/query_placement_for_availability_zone]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/query_placement_for_routed_network_aggregates]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/host_subset_size]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61935]: Notice: Compiled catalog for standalone.localdomain in environment production in 1.46 seconds
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/available_filters]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/weight_classes]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/enabled_filters]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/shuffle_best_same_weighed_hosts]/ensure: created
Oct 13 13:54:12 standalone.localdomain ceph-mon[29756]: pgmap v534: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_host]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_port]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/auth_schemes]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{sha256}3416848459dfd1bd419fb071f68b2ea5d8e6e9867a76d5341dc8d9efed0948cb'
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{sha256}ab8ffe3256e845dfb6a4c5088ae25445d4344a295858a1e3c2daa88f27527d4f'
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{sha256}847a6fcb41eb25248553082108cde5327c624189fe47009f65d11c3885cab78c'
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{sha256}1dceed4d49186af6bcffcdb8e52bc8c99d8b0a9d907d90175f372e4ff3440e8f'
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{sha256}b8a7429cbef3ecabe9e4f331123adb372ecfa3e82e76bc33d6cce997b36874bb' to '{sha256}acb6231bff742206c144c5740288a1d6caf14b1704f7c1d0c98aca319c969540'
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{sha256}8dbb5887d99b1bd7e8e6700b2c3bcfebc3d6ce5fdb66b8504b224d99ce5981a7'
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{sha256}55fd1ffb0fbb31ed1635c6175b7904207ae53c25e37a8de928aeeb6efb2f21eb'
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{sha256}eb9bf7ff02774b28c59bc3cc355fe6bea4b7b1b6780453d078fb1558b2d714fd'
Oct 13 13:54:12 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}5801b08719adb82e7ba408316c09acba375194043bcdc26e3775a280967ab489'
Oct 13 13:54:12 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{sha256}53f359b7deca28aff7c56ca0ac425ccb8323bc5121f64e4c5f04036898e6d866'
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{sha256}ca2fe478af71981984e353dd168b51c9bc993005157b9bff497c9aa7a7125700'
Oct 13 13:54:12 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe'
Oct 13 13:54:12 standalone.localdomain puppet-user[61935]: Warning: Empty environment setting 'TLS_PASSWORD'
Oct 13 13:54:12 standalone.localdomain puppet-user[61935]:    (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182)
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{sha256}197eae5f99bc425f01e493b3390d78b186be5364d81fc5e3a6df370be3c3f734'
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{sha256}8cbdbfcf32c28d41e5ca9206eea0e3be34dce45cff3a0c408ad2d23761560052'
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{sha256}a6ff35715035af2d397f744cbd2023805fad6fd3dd17a10d225e497fcb7ac808'
Oct 13 13:54:12 standalone.localdomain systemd[1]: libpod-5c3d60524c932de95be2ae05c30e04d03535d69fd13796a0c0462e1e414f0241.scope: Deactivated successfully.
Oct 13 13:54:12 standalone.localdomain systemd[1]: libpod-5c3d60524c932de95be2ae05c30e04d03535d69fd13796a0c0462e1e414f0241.scope: Consumed 5.510s CPU time.
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{sha256}2086e39dec178d39012a52700badd7b3cc6f2d97c06d197807e0cad8877e5f16'
Oct 13 13:54:12 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully
Oct 13 13:54:12 standalone.localdomain podman[61255]: 2025-10-13 13:54:12.775432994 +0000 UTC m=+5.962204184 container died 5c3d60524c932de95be2ae05c30e04d03535d69fd13796a0c0462e1e414f0241 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, summary=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-07-21T15:44:03, name=rhosp17/openstack-neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,neutron_api_paste_ini,neutron_plugin_ml2,neutron_config,neutron_agent_sriov_numvfs,neutron_sriov_agent_config,neutron_config,neutron_dhcp_agent_config,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::server\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::neutron::plugins::ml2\n\ninclude tripleo::profile::base::neutron::sriov\n\ninclude tripleo::profile::base::neutron::dhcp\n\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-neutron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, release=1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/ensure: defined content as '{sha256}4350f1dd81b2dcc0bf8903458871e8e60dfcb531dd3230247012f2fb48c5e22b'
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{sha256}88f04c415dbd1bf0d074965d37261e056d073b675a047a02e55222818640c6e8'
Oct 13 13:54:12 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}fe42762c98e502801224ac303c014611f71d4650d3ca1362e7289eb7ca11d8bc'
Oct 13 13:54:12 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{sha256}9feefdc48c65f8b73ab77f3fc813d60744dc97b336bbd60e16bbd763b99c5d66'
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/ssl.conf]/ensure: removed
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{sha256}19cb9bd7248ea35b8e882d1d21458b114cfa18be60fb8acbf1eb5cc9cab1afb7'
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{sha256}ca7e6bca762fed4f5860c5961f7d7873dfa06890a8dae109803984f2a57c857d'
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created
Oct 13 13:54:12 standalone.localdomain podman[62402]: 2025-10-13 13:54:12.904956629 +0000 UTC m=+0.120066772 container cleanup 5c3d60524c932de95be2ae05c30e04d03535d69fd13796a0c0462e1e414f0241 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,neutron_api_paste_ini,neutron_plugin_ml2,neutron_config,neutron_agent_sriov_numvfs,neutron_sriov_agent_config,neutron_config,neutron_dhcp_agent_config,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::server\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::neutron::plugins::ml2\n\ninclude tripleo::profile::base::neutron::sriov\n\ninclude tripleo::profile::base::neutron::dhcp\n\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, architecture=x86_64, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, release=1, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T15:44:03, com.redhat.component=openstack-neutron-server-container, summary=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, container_name=container-puppet-neutron, name=rhosp17/openstack-neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 13:54:12 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created
Oct 13 13:54:12 standalone.localdomain systemd[1]: libpod-conmon-5c3d60524c932de95be2ae05c30e04d03535d69fd13796a0c0462e1e414f0241.scope: Deactivated successfully.
Oct 13 13:54:12 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,neutron_api_paste_ini,neutron_plugin_ml2,neutron_config,neutron_agent_sriov_numvfs,neutron_sriov_agent_config,neutron_config,neutron_dhcp_agent_config,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::neutron::server
                                                       include tripleo::profile::base::database::mysql::client
                                                       include tripleo::profile::base::neutron::plugins::ml2
                                                       
                                                       include tripleo::profile::base::neutron::sriov
                                                       
                                                       include tripleo::profile::base::neutron::dhcp
                                                       
                                                       include tripleo::profile::base::neutron::ovn_metadata
                                                        --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,neutron_api_paste_ini,neutron_plugin_ml2,neutron_config,neutron_agent_sriov_numvfs,neutron_sriov_agent_config,neutron_config,neutron_dhcp_agent_config,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::server\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::neutron::plugins::ml2\n\ninclude tripleo::profile::base::neutron::sriov\n\ninclude tripleo::profile::base::neutron::dhcp\n\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Oct 13 13:54:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c691834ad3a3edd918cbfe39bcf9284ebfbeb04084cbf00c4514adb5db703ef9-merged.mount: Deactivated successfully.
Oct 13 13:54:12 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created
Oct 13 13:54:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5c3d60524c932de95be2ae05c30e04d03535d69fd13796a0c0462e1e414f0241-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created
Oct 13 13:54:12 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created
Oct 13 13:54:13 standalone.localdomain podman[62473]: 2025-10-13 13:54:13.3213901 +0000 UTC m=+0.072797724 container create 934e8e433ff43a88c53ffa71ea59bf8be89efe6daeaeacb03d8545256b498aff (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=container-puppet-placement, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, version=17.1.9, tcib_managed=true, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, vendor=Red Hat, Inc., container_name=container-puppet-placement, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,placement_config', 'NAME': 'placement', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::placement::api'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:12, name=rhosp17/openstack-placement-api, com.redhat.component=openstack-placement-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 placement-api, distribution-scope=public)
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v535: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain systemd[1]: Started libpod-conmon-934e8e433ff43a88c53ffa71ea59bf8be89efe6daeaeacb03d8545256b498aff.scope.
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created
Oct 13 13:54:13 standalone.localdomain podman[62473]: 2025-10-13 13:54:13.282769742 +0000 UTC m=+0.034177366 image pull  registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1
Oct 13 13:54:13 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:13 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf1f29ceeb931335ec036bcac576e0e50be69d47f8ecf9dd6e53a9d522384f3/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]:    (file & line not available)
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created
Oct 13 13:54:13 standalone.localdomain podman[62473]: 2025-10-13 13:54:13.398672018 +0000 UTC m=+0.150079632 container init 934e8e433ff43a88c53ffa71ea59bf8be89efe6daeaeacb03d8545256b498aff (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=container-puppet-placement, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, name=rhosp17/openstack-placement-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,placement_config', 'NAME': 'placement', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::placement::api'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, com.redhat.component=openstack-placement-api-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, summary=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=container-puppet-placement, vcs-type=git, version=17.1.9, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T13:58:12, maintainer=OpenStack TripleO Team)
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created
Oct 13 13:54:13 standalone.localdomain podman[62473]: 2025-10-13 13:54:13.41574905 +0000 UTC m=+0.167156704 container start 934e8e433ff43a88c53ffa71ea59bf8be89efe6daeaeacb03d8545256b498aff (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=container-puppet-placement, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, release=1, architecture=x86_64, name=rhosp17/openstack-placement-api, io.openshift.expose-services=, com.redhat.component=openstack-placement-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,placement_config', 'NAME': 'placement', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::placement::api'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T13:58:12, config_id=tripleo_puppet_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, container_name=container-puppet-placement)
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created
Oct 13 13:54:13 standalone.localdomain podman[62473]: 2025-10-13 13:54:13.416194534 +0000 UTC m=+0.167602168 container attach 934e8e433ff43a88c53ffa71ea59bf8be89efe6daeaeacb03d8545256b498aff (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=container-puppet-placement, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-placement-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:12, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, container_name=container-puppet-placement, name=rhosp17/openstack-placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,placement_config', 'NAME': 'placement', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::placement::api'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]:    (file & line not available)
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Metadata/Nova_config[api/local_metadata_per_cell]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Metadata/Nova_config[neutron/service_metadata_proxy]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Metadata/Nova_config[neutron/metadata_proxy_shared_secret]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/www_authenticate_uri]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_type]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/memcache_use_advanced_pool]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/memcached_servers]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.28 seconds
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/region_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_url]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/username]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/password]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/user_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created
Oct 13 13:54:13 standalone.localdomain ovs-vsctl[62563]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote=tcp:172.17.0.100:6642
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created
Oct 13 13:54:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/interface]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created
Oct 13 13:54:13 standalone.localdomain ovs-vsctl[62580]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created
Oct 13 13:54:13 standalone.localdomain ovs-vsctl[62596]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.100
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created
Oct 13 13:54:13 standalone.localdomain ovs-vsctl[62599]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=standalone.localdomain
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005484548.novalocal' to 'standalone.localdomain'
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created
Oct 13 13:54:13 standalone.localdomain ovs-vsctl[62601]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created
Oct 13 13:54:13 standalone.localdomain ovs-vsctl[62603]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created
Oct 13 13:54:13 standalone.localdomain ovs-vsctl[62605]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Db/Oslo::Db[api_database]/Nova_config[api_database/connection]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created
Oct 13 13:54:13 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Pci/Nova_config[pci/passthrough_whitelist]/ensure: created
Oct 13 13:54:13 standalone.localdomain ovs-vsctl[62607]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created
Oct 13 13:54:13 standalone.localdomain ovs-vsctl[62609]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000
Oct 13 13:54:13 standalone.localdomain puppet-user[62268]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created
Oct 13 13:54:14 standalone.localdomain ovs-vsctl[62611]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created
Oct 13 13:54:14 standalone.localdomain ovs-vsctl[62614]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3a:00:53:00
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 13 13:54:14 standalone.localdomain runuser[62616]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:54:14 standalone.localdomain ovs-vsctl[62623]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ctlplane
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Policy/Oslo::Policy[nova_config]/Nova_config[oslo_policy/enforce_scope]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created
Oct 13 13:54:14 standalone.localdomain ovs-vsctl[62636]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Policy/Oslo::Policy[nova_config]/Nova_config[oslo_policy/enforce_new_defaults]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Policy/Oslo::Policy[nova_config]/Nova_config[oslo_policy/policy_file]/ensure: created
Oct 13 13:54:14 standalone.localdomain ovs-vsctl[62650]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:garp-max-timeout-sec=0
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Api/Oslo::Middleware[nova_config]/Nova_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[barbican/auth_endpoint]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]: Notice: Applied catalog in 0.47 seconds
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]: Application:
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:    Initial environment: production
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:    Converged environment: production
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:          Run mode: user
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]: Changes:
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:             Total: 14
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]: Events:
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:           Success: 14
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:             Total: 14
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]: Resources:
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:           Skipped: 12
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:           Changed: 14
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:       Out of sync: 14
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:             Total: 29
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]: Time:
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:              Exec: 0.02
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:    Config retrieval: 0.32
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:         Vs config: 0.41
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:    Transaction evaluation: 0.46
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:    Catalog application: 0.47
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:          Last run: 1760363654
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:             Total: 0.47
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]: Version:
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:            Config: 1760363653
Oct 13 13:54:14 standalone.localdomain puppet-user[62268]:            Puppet: 7.10.0
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[barbican/barbican_endpoint]/ensure: created
Oct 13 13:54:14 standalone.localdomain crontab[62672]: (root) LIST (root)
Oct 13 13:54:14 standalone.localdomain crontab[62673]: (root) LIST (nova)
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Cron::Archive_deleted_rows/Cron[nova-manage db archive_deleted_rows]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created
Oct 13 13:54:14 standalone.localdomain crontab[62674]: (root) REPLACE (nova)
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Cron::Purge_shadow_tables/Cron[nova-manage db purge]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created
Oct 13 13:54:14 standalone.localdomain crontab[62675]: (root) REPLACE (nova)
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{sha256}3906459aafe799c09305ffbfe0105de3fb9d05a4636cd93e6af9f82e10c8788b'
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{sha256}736d628e01f143a2d94f46af14446fe584d90a1a5dc68a9153e5c676f5888b15'
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-brotli.conf]/ensure: removed
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-optional.conf]/ensure: removed
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi-python3.conf]/ensure: removed
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/README]/ensure: removed
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/Apache::Vhost[nova_api_wsgi]/Concat[10-nova_api_wsgi.conf]/File[/etc/httpd/conf.d/10-nova_api_wsgi.conf]/ensure: defined content as '{sha256}7e08ebd0b74ef678e4dd65a63b1a848c69da44bc7b936faba98aa71adf0be1bf'
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Notice: Applied catalog in 3.60 seconds
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Application:
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:    Initial environment: production
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:    Converged environment: production
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:          Run mode: user
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Changes:
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:             Total: 157
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Events:
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:           Success: 157
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:             Total: 157
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Resources:
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:           Changed: 157
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:       Out of sync: 157
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:           Skipped: 51
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:             Total: 514
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Time:
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:       Concat file: 0.00
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:            Anchor: 0.00
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:    Concat fragment: 0.01
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:            Augeas: 0.02
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:           Package: 0.03
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:              Cron: 0.04
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:              File: 0.13
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:    Config retrieval: 1.85
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:          Last run: 1760363654
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:       Nova config: 3.07
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:    Transaction evaluation: 3.58
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:    Catalog application: 3.60
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:         Resources: 0.00
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:             Total: 3.60
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]: Version:
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:            Config: 1760363648
Oct 13 13:54:14 standalone.localdomain puppet-user[61336]:            Puppet: 7.10.0
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created
Oct 13 13:54:14 standalone.localdomain ceph-mon[29756]: pgmap v535: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created
Oct 13 13:54:14 standalone.localdomain systemd[1]: libpod-23209d7039fdef74b6a865f6f7a80565d8cc9a01bf213354ccd4c315bbd87e4f.scope: Deactivated successfully.
Oct 13 13:54:14 standalone.localdomain systemd[1]: libpod-23209d7039fdef74b6a865f6f7a80565d8cc9a01bf213354ccd4c315bbd87e4f.scope: Consumed 2.765s CPU time.
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/www_authenticate_uri]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_type]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}4e31a0777f64f79685d0715f365c3c7c199ff97b1ffdc418fd9b2146622f4ab3'
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created
Oct 13 13:54:14 standalone.localdomain podman[62719]: 2025-10-13 13:54:14.614256629 +0000 UTC m=+0.041844697 container died 23209d7039fdef74b6a865f6f7a80565d8cc9a01bf213354ccd4c315bbd87e4f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, version=17.1.9, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., architecture=x86_64, container_name=container-puppet-ovn_controller, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, distribution-scope=public)
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created
Oct 13 13:54:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23209d7039fdef74b6a865f6f7a80565d8cc9a01bf213354ccd4c315bbd87e4f-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e7f16864f80a2c037ac48def8b5c01d605b8cc2e9a20cb2523be725cfabe631b-merged.mount: Deactivated successfully.
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created
Oct 13 13:54:14 standalone.localdomain podman[62719]: 2025-10-13 13:54:14.677399103 +0000 UTC m=+0.104987171 container cleanup 23209d7039fdef74b6a865f6f7a80565d8cc9a01bf213354ccd4c315bbd87e4f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, container_name=container-puppet-ovn_controller, release=1)
Oct 13 13:54:14 standalone.localdomain systemd[1]: libpod-conmon-23209d7039fdef74b6a865f6f7a80565d8cc9a01bf213354ccd4c315bbd87e4f.scope: Deactivated successfully.
Oct 13 13:54:14 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::neutron::agents::ovn
                                                        --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/memcache_use_advanced_pool]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/memcached_servers]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/region_name]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_url]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/username]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/password]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/user_domain_name]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_name]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created
Oct 13 13:54:14 standalone.localdomain runuser[62616]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_domain_name]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created
Oct 13 13:54:14 standalone.localdomain runuser[62792]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/interface]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created
Oct 13 13:54:14 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created
Oct 13 13:54:15 standalone.localdomain podman[62870]: 2025-10-13 13:54:15.018887006 +0000 UTC m=+0.059003021 container create d8f125aa176f851926986bc036464f9a241e83bdc2e18189c4293bf1ec235572 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=container-puppet-swift, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-proxy-server-container, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, release=1, container_name=container-puppet-swift, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', 'NAME': 'swift', 'STEP_CONFIG': "include ::tripleo::packages\ninclude tripleo::profile::base::swift::proxy\n\nclass xinetd() {}\ndefine xinetd::service($bind='',$port='',$server='',$server_args='') {}\nnoop_resource('service')\ninclude tripleo::profile::base::swift::storage"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, vcs-type=git, version=17.1.9, batch=17.1_20250721.1)
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created
Oct 13 13:54:15 standalone.localdomain systemd[1]: Started libpod-conmon-d8f125aa176f851926986bc036464f9a241e83bdc2e18189c4293bf1ec235572.scope.
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Db/Oslo::Db[api_database]/Nova_config[api_database/connection]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created
Oct 13 13:54:15 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created
Oct 13 13:54:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce87643b35ec000a1792ecfe3a8ec00ec1224e0b2865439ada2796f407c1075b/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created
Oct 13 13:54:15 standalone.localdomain podman[62870]: 2025-10-13 13:54:15.062629597 +0000 UTC m=+0.102745632 container init d8f125aa176f851926986bc036464f9a241e83bdc2e18189c4293bf1ec235572 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=container-puppet-swift, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', 'NAME': 'swift', 'STEP_CONFIG': "include ::tripleo::packages\ninclude tripleo::profile::base::swift::proxy\n\nclass xinetd() {}\ndefine xinetd::service($bind='',$port='',$server='',$server_args='') {}\nnoop_resource('service')\ninclude tripleo::profile::base::swift::storage"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.9, vendor=Red Hat, Inc., container_name=container-puppet-swift, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-proxy-server-container, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_puppet_step1)
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created
Oct 13 13:54:15 standalone.localdomain podman[62870]: 2025-10-13 13:54:15.069178695 +0000 UTC m=+0.109294720 container start d8f125aa176f851926986bc036464f9a241e83bdc2e18189c4293bf1ec235572 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=container-puppet-swift, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=container-puppet-swift, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', 'NAME': 'swift', 'STEP_CONFIG': "include ::tripleo::packages\ninclude tripleo::profile::base::swift::proxy\n\nclass xinetd() {}\ndefine xinetd::service($bind='',$port='',$server='',$server_args='') {}\nnoop_resource('service')\ninclude tripleo::profile::base::swift::storage"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_puppet_step1, name=rhosp17/openstack-swift-proxy-server, build-date=2025-07-21T14:48:37, release=1)
Oct 13 13:54:15 standalone.localdomain podman[62870]: 2025-10-13 13:54:15.069413861 +0000 UTC m=+0.109529886 container attach d8f125aa176f851926986bc036464f9a241e83bdc2e18189c4293bf1ec235572 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=container-puppet-swift, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T14:48:37, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_puppet_step1, version=17.1.9, batch=17.1_20250721.1, container_name=container-puppet-swift, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-proxy-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', 'NAME': 'swift', 'STEP_CONFIG': "include ::tripleo::packages\ninclude tripleo::profile::base::swift::proxy\n\nclass xinetd() {}\ndefine xinetd::service($bind='',$port='',$server='',$server_args='') {}\nnoop_resource('service')\ninclude tripleo::profile::base::swift::storage"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12)
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created
Oct 13 13:54:15 standalone.localdomain podman[62870]: 2025-10-13 13:54:14.994854985 +0000 UTC m=+0.034971010 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1
Oct 13 13:54:15 standalone.localdomain systemd[1]: libpod-f2f33f0f48a0fa5fd93a14483b0a39b75bf37f0806c70e292a200e9e1a98ff46.scope: Deactivated successfully.
Oct 13 13:54:15 standalone.localdomain systemd[1]: libpod-f2f33f0f48a0fa5fd93a14483b0a39b75bf37f0806c70e292a200e9e1a98ff46.scope: Consumed 7.762s CPU time.
Oct 13 13:54:15 standalone.localdomain podman[61279]: 2025-10-13 13:54:15.18403599 +0000 UTC m=+8.313043228 container died f2f33f0f48a0fa5fd93a14483b0a39b75bf37f0806c70e292a200e9e1a98ff46 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=container-puppet-nova, description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:05:11, vcs-type=git, config_id=tripleo_puppet_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, batch=17.1_20250721.1, container_name=container-puppet-nova, summary=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., release=1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini,nova_config,nova_config,nova_config', 'NAME': 'nova', 'STEP_CONFIG': "include ::tripleo::packages\n['Nova_cell_v2'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::base::nova::api\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::conductor\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::scheduler\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::vncproxy\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-nova-api, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Wsgi::Apache_metadata/Openstacklib::Wsgi::Apache[nova_metadata_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Wsgi::Apache_metadata/Openstacklib::Wsgi::Apache[nova_metadata_wsgi]/File[nova_metadata_wsgi]/ensure: defined content as '{sha256}7311c9047eec89f1e952197038ebb53e3ab5810905e7292a2f802c7e4fc0351c'
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{sha256}3906459aafe799c09305ffbfe0105de3fb9d05a4636cd93e6af9f82e10c8788b'
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{sha256}736d628e01f143a2d94f46af14446fe584d90a1a5dc68a9153e5c676f5888b15'
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-brotli.conf]/ensure: removed
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-optional.conf]/ensure: removed
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi-python3.conf]/ensure: removed
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/README]/ensure: removed
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: /Stage[main]/Nova::Wsgi::Apache_metadata/Openstacklib::Wsgi::Apache[nova_metadata_wsgi]/Apache::Vhost[nova_metadata_wsgi]/Concat[10-nova_metadata_wsgi.conf]/File[/etc/httpd/conf.d/10-nova_metadata_wsgi.conf]/ensure: defined content as '{sha256}48c184c30e4fa60a5c688a97a3847ed13f3724673621b1b2e571d5f4c326ac6f'
Oct 13 13:54:15 standalone.localdomain podman[62929]: 2025-10-13 13:54:15.260245325 +0000 UTC m=+0.065663981 container cleanup f2f33f0f48a0fa5fd93a14483b0a39b75bf37f0806c70e292a200e9e1a98ff46 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=container-puppet-nova, vcs-type=git, release=1, container_name=container-puppet-nova, summary=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini,nova_config,nova_config,nova_config', 'NAME': 'nova', 'STEP_CONFIG': "include ::tripleo::packages\n['Nova_cell_v2'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::base::nova::api\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::conductor\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::scheduler\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::vncproxy\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., io.buildah.version=1.33.12, name=rhosp17/openstack-nova-api, distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, com.redhat.component=openstack-nova-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-07-21T16:05:11, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:54:15 standalone.localdomain systemd[1]: libpod-conmon-f2f33f0f48a0fa5fd93a14483b0a39b75bf37f0806c70e292a200e9e1a98ff46.scope: Deactivated successfully.
Oct 13 13:54:15 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova --conmon-pidfile /run/container-puppet-nova.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini,nova_config,nova_config,nova_config --env NAME=nova --env STEP_CONFIG=include ::tripleo::packages
                                                       ['Nova_cell_v2'].each |String $val| { noop_resource($val) }
                                                       include tripleo::profile::base::nova::api
                                                       include tripleo::profile::base::database::mysql::client
                                                       include tripleo::profile::base::nova::conductor
                                                       include tripleo::profile::base::database::mysql::client
                                                       include tripleo::profile::base::nova::scheduler
                                                       include tripleo::profile::base::database::mysql::client
                                                       include tripleo::profile::base::nova::vncproxy
                                                       include tripleo::profile::base::database::mysql::client --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini,nova_config,nova_config,nova_config', 'NAME': 'nova', 'STEP_CONFIG': "include ::tripleo::packages\n['Nova_cell_v2'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::base::nova::api\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::conductor\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::scheduler\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::nova::vncproxy\ninclude tripleo::profile::base::database::mysql::client"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Notice: Applied catalog in 2.80 seconds
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Application:
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:    Initial environment: production
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:    Converged environment: production
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:          Run mode: user
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Changes:
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:             Total: 128
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Events:
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:           Success: 128
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:             Total: 128
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Resources:
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:           Changed: 128
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:       Out of sync: 128
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:           Skipped: 37
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:             Total: 421
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Time:
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:       Concat file: 0.00
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:            Anchor: 0.00
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:    Concat fragment: 0.00
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:            Augeas: 0.02
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:           Package: 0.04
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:              File: 0.12
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:    Config retrieval: 1.70
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:          Last run: 1760363655
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:       Nova config: 2.32
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:    Transaction evaluation: 2.79
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:    Catalog application: 2.80
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:         Resources: 0.00
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:             Total: 2.80
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]: Version:
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:            Config: 1760363650
Oct 13 13:54:15 standalone.localdomain puppet-user[61905]:            Puppet: 7.10.0
Oct 13 13:54:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v536: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:15 standalone.localdomain puppet-user[62555]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:15 standalone.localdomain puppet-user[62555]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:15 standalone.localdomain puppet-user[62555]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:15 standalone.localdomain puppet-user[62555]:    (file & line not available)
Oct 13 13:54:15 standalone.localdomain puppet-user[62555]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:15 standalone.localdomain puppet-user[62555]:    (file & line not available)
Oct 13 13:54:15 standalone.localdomain runuser[62792]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:54:15 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully
Oct 13 13:54:15 standalone.localdomain podman[63065]: 2025-10-13 13:54:15.585615384 +0000 UTC m=+0.079004390 container create 8639db24c8d81b791da3c0b8437f87e745d5b37c1bf36ecf8bc7e095d07eae49 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=container-puppet-swift_ringbuilder, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-swift-proxy-server-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-proxy-server, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, container_name=container-puppet-swift_ringbuilder, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_puppet_step1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,swift_config,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', 'NAME': 'swift_ringbuilder', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::swift::ringbuilder\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'volumes': 
['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 13:54:15 standalone.localdomain systemd[1]: Started libpod-conmon-8639db24c8d81b791da3c0b8437f87e745d5b37c1bf36ecf8bc7e095d07eae49.scope.
Oct 13 13:54:15 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8e7524cf6b6fffc8c0516c67342c7bc57cbe29bccdd18f09acae4c940add05/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:15 standalone.localdomain podman[63065]: 2025-10-13 13:54:15.544423619 +0000 UTC m=+0.037812625 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1
Oct 13 13:54:15 standalone.localdomain podman[63065]: 2025-10-13 13:54:15.672848321 +0000 UTC m=+0.166237327 container init 8639db24c8d81b791da3c0b8437f87e745d5b37c1bf36ecf8bc7e095d07eae49 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=container-puppet-swift_ringbuilder, version=17.1.9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,swift_config,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', 'NAME': 'swift_ringbuilder', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::swift::ringbuilder\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, build-date=2025-07-21T14:48:37, config_id=tripleo_puppet_step1, com.redhat.component=openstack-swift-proxy-server-container, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=container-puppet-swift_ringbuilder, architecture=x86_64, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Oct 13 13:54:15 standalone.localdomain podman[63065]: 2025-10-13 13:54:15.694751908 +0000 UTC m=+0.188140954 container start 8639db24c8d81b791da3c0b8437f87e745d5b37c1bf36ecf8bc7e095d07eae49 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=container-puppet-swift_ringbuilder, name=rhosp17/openstack-swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=container-puppet-swift_ringbuilder, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-proxy-server-container, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,swift_config,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', 'NAME': 'swift_ringbuilder', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::swift::ringbuilder\n'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 13:54:15 standalone.localdomain podman[63065]: 2025-10-13 13:54:15.695020116 +0000 UTC m=+0.188409162 container attach 8639db24c8d81b791da3c0b8437f87e745d5b37c1bf36ecf8bc7e095d07eae49 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=container-puppet-swift_ringbuilder, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, version=17.1.9, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,swift_config,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', 'NAME': 'swift_ringbuilder', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::swift::ringbuilder\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, name=rhosp17/openstack-swift-proxy-server, architecture=x86_64, container_name=container-puppet-swift_ringbuilder, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]:    (file & line not available)
Oct 13 13:54:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ed161745c15cbd99d7c949d604dd3eb9ddf0df704d54afd7fe76909dd70cd6ab-merged.mount: Deactivated successfully.
Oct 13 13:54:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f2f33f0f48a0fa5fd93a14483b0a39b75bf37f0806c70e292a200e9e1a98ff46-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]:    (file & line not available)
Oct 13 13:54:16 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully
Oct 13 13:54:16 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Warning: Unknown variable: '::pacemaker::pcs_010'. (file: /etc/puppet/modules/pacemaker/manifests/resource/bundle.pp, line: 159, column: 6)
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.34 seconds
Oct 13 13:54:16 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/mode: mode changed '0755' to '2755'
Oct 13 13:54:16 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created
Oct 13 13:54:16 standalone.localdomain puppet-user[62555]: Warning: Scope(Apache::Vhost[placement_wsgi]):
Oct 13 13:54:16 standalone.localdomain puppet-user[62555]:     It is possible for the $name parameter to be defined with spaces in it. Although supported on POSIX systems, this
Oct 13 13:54:16 standalone.localdomain puppet-user[62555]:     can lead to cumbersome file names. The $servername attribute has stricter conditions from Apache (i.e. no spaces)
Oct 13 13:54:16 standalone.localdomain puppet-user[62555]:     When $use_servername_for_filenames = true, the $servername parameter, sanitized, is used to construct log and config
Oct 13 13:54:16 standalone.localdomain puppet-user[62555]:     file names.
Oct 13 13:54:16 standalone.localdomain puppet-user[62555]: 
Oct 13 13:54:16 standalone.localdomain puppet-user[62555]:     From version v7.0.0 of the puppetlabs-apache module, this parameter will default to true. From version v8.0.0 of the
Oct 13 13:54:16 standalone.localdomain puppet-user[62555]:     module, the $use_servername_for_filenames will be removed and log/config file names will be derived from the
Oct 13 13:54:16 standalone.localdomain puppet-user[62555]:     sanitized $servername parameter when not explicitly defined.
Oct 13 13:54:16 standalone.localdomain systemd[1]: libpod-8bc349857c7dd0419f3cdb5b6b64f912584f4b858f54b05f34a30a8a4c46cd60.scope: Deactivated successfully.
Oct 13 13:54:16 standalone.localdomain systemd[1]: libpod-8bc349857c7dd0419f3cdb5b6b64f912584f4b858f54b05f34a30a8a4c46cd60.scope: Consumed 7.141s CPU time.
Oct 13 13:54:16 standalone.localdomain podman[61738]: 2025-10-13 13:54:16.54765746 +0000 UTC m=+7.777505913 container died 8bc349857c7dd0419f3cdb5b6b64f912584f4b858f54b05f34a30a8a4c46cd60 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=container-puppet-nova_metadata, com.redhat.component=openstack-nova-api-container, vcs-type=git, build-date=2025-07-21T16:05:11, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-api, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_id=tripleo_puppet_step1, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, container_name=container-puppet-nova_metadata, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini', 'NAME': 'nova_metadata', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::nova::metadata\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 13:54:16 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created
Oct 13 13:54:16 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created
Oct 13 13:54:16 standalone.localdomain puppet-user[62555]: Notice: Compiled catalog for standalone.localdomain in environment production in 1.22 seconds
Oct 13 13:54:16 standalone.localdomain ceph-mon[29756]: pgmap v536: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Notice: /Stage[main]/Tripleo::Profile::Base::Rabbitmq/File[/etc/rabbitmq/ssl-dist.conf]/ensure: defined content as '{sha256}4eb53695c05b3733d34f4ce7e9e421888b3887fd10fb06f494336918922d00a9'
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]/ensure: created
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/ensure: defined content as '{sha256}72d11ad53d3dcef3cf8ff0433548561c40b2e89f01218ae647a047ad13309579'
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/ensure: defined content as '{sha256}b83b8080dbcdf2a49fff2f747972e7343801f7518a0f1dcb3e2a301e50aef551'
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Notice: /Stage[main]/Rabbitmq::Config/File[enabled_plugins]/ensure: defined content as '{sha256}3b5c9fba2f1456d923499fc142bc9ef7a7f6d53d6bc4c0ae88310332eb10a31c'
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Notice: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{sha256}b984a5f0a62696715f206ca0a602fd9d2d497894c6c24502896fb3010ee0c557'
Oct 13 13:54:16 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created
Oct 13 13:54:16 standalone.localdomain systemd[1]: tmp-crun.gbUcvS.mount: Deactivated successfully.
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/ensure: defined content as '{sha256}173a6e73d0e5d8950d61840c0aae8cbe3f09312e51a827fde788b67d1cd4417a'
Oct 13 13:54:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8bc349857c7dd0419f3cdb5b6b64f912584f4b858f54b05f34a30a8a4c46cd60-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File[/var/lib/rabbitmq/.erlang.cookie]/content: content changed '{sha256}0b00a5005230202e26b2c621b6d67fee0e3f110c946fb63a88c5ebdc73c465f4' to '{sha256}b749eebc1b7e3b8277dce5c52929cf6cd625f90317d86363e0dacd688986068c'
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File_line[rabbitmq-pamd-systemd]/ensure: removed
Oct 13 13:54:16 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Oct 13 13:54:16 standalone.localdomain puppet-user[62357]: Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Rabbitmq_bundle/File_line[rabbitmq-pamd-succeed]/ensure: created
Oct 13 13:54:16 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d]/ensure: created
Oct 13 13:54:16 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Oct 13 13:54:16 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Oct 13 13:54:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c5481f769777c9753e381b725ebc95f095fff1a1888ddebb0aedc5c68ba2eae3-merged.mount: Deactivated successfully.
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]: Notice: Applied catalog in 0.90 seconds
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]: Application:
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:    Initial environment: production
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:    Converged environment: production
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:          Run mode: user
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]: Changes:
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:             Total: 11
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]: Events:
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:           Success: 11
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:             Total: 11
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]: Resources:
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:           Changed: 11
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:       Out of sync: 11
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:           Skipped: 15
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:             Total: 26
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]: Time:
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:         File line: 0.03
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:              File: 0.37
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:    Config retrieval: 0.39
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:    Transaction evaluation: 0.42
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:    Catalog application: 0.90
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:          Last run: 1760363657
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:             Total: 0.90
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]: Version:
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:            Config: 1760363655
Oct 13 13:54:17 standalone.localdomain puppet-user[62357]:            Puppet: 7.10.0
Oct 13 13:54:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v537: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Oct 13 13:54:17 standalone.localdomain podman[63185]: 2025-10-13 13:54:17.360962476 +0000 UTC m=+0.806015928 container cleanup 8bc349857c7dd0419f3cdb5b6b64f912584f4b858f54b05f34a30a8a4c46cd60 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=container-puppet-nova_metadata, build-date=2025-07-21T16:05:11, name=rhosp17/openstack-nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, tcib_managed=true, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini', 'NAME': 'nova_metadata', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::nova::metadata\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack 
osp-17.1, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-nova-api-container, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, container_name=container-puppet-nova_metadata, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team)
Oct 13 13:54:17 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_metadata --conmon-pidfile /run/container-puppet-nova_metadata.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini --env NAME=nova_metadata --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::nova::metadata
                                                       include tripleo::profile::base::database::mysql::client --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_metadata --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,nova_api_paste_ini', 'NAME': 'nova_metadata', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::nova::metadata\ninclude tripleo::profile::base::database::mysql::client'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_metadata.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 13:54:17 standalone.localdomain systemd[1]: libpod-conmon-8bc349857c7dd0419f3cdb5b6b64f912584f4b858f54b05f34a30a8a4c46cd60.scope: Deactivated successfully.
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62927]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:17 standalone.localdomain puppet-user[62927]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:17 standalone.localdomain puppet-user[62927]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:17 standalone.localdomain puppet-user[62927]:    (file & line not available)
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{sha256}3416848459dfd1bd419fb071f68b2ea5d8e6e9867a76d5341dc8d9efed0948cb'
Oct 13 13:54:17 standalone.localdomain puppet-user[62927]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:17 standalone.localdomain puppet-user[62927]:    (file & line not available)
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{sha256}ab8ffe3256e845dfb6a4c5088ae25445d4344a295858a1e3c2daa88f27527d4f'
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created
Oct 13 13:54:17 standalone.localdomain ceph-mon[29756]: pgmap v537: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{sha256}847a6fcb41eb25248553082108cde5327c624189fe47009f65d11c3885cab78c'
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Logging/Oslo::Log[placement_config]/Placement_config[DEFAULT/debug]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Logging/Oslo::Log[placement_config]/Placement_config[DEFAULT/log_file]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Logging/Oslo::Log[placement_config]/Placement_config[DEFAULT/log_dir]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}3443f12bc8c547cc5f40146a4ca19ad0c6c8123e9ad5f12a15fd0d5530f0cff5'
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Notice: Applied catalog in 5.13 seconds
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Application:
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:    Initial environment: production
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:    Converged environment: production
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:          Run mode: user
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Changes:
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:             Total: 186
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Events:
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:           Success: 186
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:             Total: 186
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Resources:
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:           Changed: 186
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:       Out of sync: 186
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:           Skipped: 58
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:             Total: 488
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Time:
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:       Concat file: 0.00
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:         Resources: 0.00
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:    Concat fragment: 0.00
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:            Anchor: 0.00
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:         File line: 0.00
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:    Virtlogd config: 0.00
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:    Virtstoraged config: 0.01
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:    Virtnodedevd config: 0.01
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:    Virtsecretd config: 0.01
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:    Virtqemud config: 0.01
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:              Exec: 0.02
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:    Virtproxyd config: 0.03
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:              File: 0.03
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:           Package: 0.04
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:            Augeas: 1.10
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:    Config retrieval: 1.80
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:          Last run: 1760363657
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:       Nova config: 3.58
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:    Transaction evaluation: 5.10
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:    Catalog application: 5.13
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:             Total: 5.13
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]: Version:
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:            Config: 1760363650
Oct 13 13:54:17 standalone.localdomain puppet-user[61935]:            Puppet: 7.10.0
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Db/Oslo::Db[placement_config]/Placement_config[placement_database/connection]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Keystone::Authtoken/Keystone::Resource::Authtoken[placement_config]/Placement_config[keystone_authtoken/www_authenticate_uri]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Keystone::Authtoken/Keystone::Resource::Authtoken[placement_config]/Placement_config[keystone_authtoken/auth_type]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Keystone::Authtoken/Keystone::Resource::Authtoken[placement_config]/Placement_config[keystone_authtoken/memcache_use_advanced_pool]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Keystone::Authtoken/Keystone::Resource::Authtoken[placement_config]/Placement_config[keystone_authtoken/memcached_servers]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Keystone::Authtoken/Keystone::Resource::Authtoken[placement_config]/Placement_config[keystone_authtoken/region_name]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Keystone::Authtoken/Keystone::Resource::Authtoken[placement_config]/Placement_config[keystone_authtoken/auth_url]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Keystone::Authtoken/Keystone::Resource::Authtoken[placement_config]/Placement_config[keystone_authtoken/username]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Keystone::Authtoken/Keystone::Resource::Authtoken[placement_config]/Placement_config[keystone_authtoken/password]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Keystone::Authtoken/Keystone::Resource::Authtoken[placement_config]/Placement_config[keystone_authtoken/user_domain_name]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Keystone::Authtoken/Keystone::Resource::Authtoken[placement_config]/Placement_config[keystone_authtoken/project_name]/ensure: created
Oct 13 13:54:17 standalone.localdomain systemd[1]: libpod-da25f04b81165f539615f5b90c3c6210e742164ef5310656704da23cb7def95b.scope: Deactivated successfully.
Oct 13 13:54:17 standalone.localdomain systemd[1]: libpod-da25f04b81165f539615f5b90c3c6210e742164ef5310656704da23cb7def95b.scope: Consumed 5.137s CPU time.
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Keystone::Authtoken/Keystone::Resource::Authtoken[placement_config]/Placement_config[keystone_authtoken/project_domain_name]/ensure: created
Oct 13 13:54:17 standalone.localdomain podman[62326]: 2025-10-13 13:54:17.87023806 +0000 UTC m=+5.991946585 container died da25f04b81165f539615f5b90c3c6210e742164ef5310656704da23cb7def95b (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=container-puppet-rabbitmq, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, managed_by=tripleo_ansible, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file,file_line', 'NAME': 'rabbitmq', 'STEP_CONFIG': "include ::tripleo::packages\n['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::pacemaker::rabbitmq_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, name=rhosp17/openstack-rabbitmq, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=container-puppet-rabbitmq)
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Keystone::Authtoken/Keystone::Resource::Authtoken[placement_config]/Placement_config[keystone_authtoken/interface]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{sha256}d23aed885c10d0f1d27ee09365b2329ed0dab39817de56ddfb375456e8636767'
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{sha256}b8a7429cbef3ecabe9e4f331123adb372ecfa3e82e76bc33d6cce997b36874bb' to '{sha256}acb6231bff742206c144c5740288a1d6caf14b1704f7c1d0c98aca319c969540'
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{sha256}8dbb5887d99b1bd7e8e6700b2c3bcfebc3d6ce5fdb66b8504b224d99ce5981a7'
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{sha256}55fd1ffb0fbb31ed1635c6175b7904207ae53c25e37a8de928aeeb6efb2f21eb'
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{sha256}eb9bf7ff02774b28c59bc3cc355fe6bea4b7b1b6780453d078fb1558b2d714fd'
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{sha256}53f359b7deca28aff7c56ca0ac425ccb8323bc5121f64e4c5f04036898e6d866'
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{sha256}ca2fe478af71981984e353dd168b51c9bc993005157b9bff497c9aa7a7125700'
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as '{sha256}197eae5f99bc425f01e493b3390d78b186be5364d81fc5e3a6df370be3c3f734'
Oct 13 13:54:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-da25f04b81165f539615f5b90c3c6210e742164ef5310656704da23cb7def95b-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-363ca58f2fdd66d3f83e207fb876fe212480858dd07aa9a917453d678975c0bc-merged.mount: Deactivated successfully.
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{sha256}8cbdbfcf32c28d41e5ca9206eea0e3be34dce45cff3a0c408ad2d23761560052'
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{sha256}a6ff35715035af2d397f744cbd2023805fad6fd3dd17a10d225e497fcb7ac808'
Oct 13 13:54:17 standalone.localdomain podman[63341]: 2025-10-13 13:54:17.955138478 +0000 UTC m=+0.074677622 container cleanup da25f04b81165f539615f5b90c3c6210e742164ef5310656704da23cb7def95b (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=container-puppet-rabbitmq, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, build-date=2025-07-21T13:08:05, com.redhat.component=openstack-rabbitmq-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, config_id=tripleo_puppet_step1, version=17.1.9, container_name=container-puppet-rabbitmq, release=1, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file,file_line', 'NAME': 'rabbitmq', 'STEP_CONFIG': "include ::tripleo::packages\n['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::pacemaker::rabbitmq_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{sha256}2086e39dec178d39012a52700badd7b3cc6f2d97c06d197807e0cad8877e5f16'
Oct 13 13:54:17 standalone.localdomain systemd[1]: libpod-conmon-da25f04b81165f539615f5b90c3c6210e742164ef5310656704da23cb7def95b.scope: Deactivated successfully.
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/ensure: defined content as '{sha256}4350f1dd81b2dcc0bf8903458871e8e60dfcb531dd3230247012f2fb48c5e22b'
Oct 13 13:54:17 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rabbitmq --conmon-pidfile /run/container-puppet-rabbitmq.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file,file_line --env NAME=rabbitmq --env STEP_CONFIG=include ::tripleo::packages
                                                       ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }
                                                       include tripleo::profile::pacemaker::rabbitmq_bundle --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rabbitmq --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,file,file_line', 'NAME': 'rabbitmq', 'STEP_CONFIG': "include ::tripleo::packages\n['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\ninclude tripleo::profile::pacemaker::rabbitmq_bundle"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rabbitmq.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{sha256}88f04c415dbd1bf0d074965d37261e056d073b675a047a02e55222818640c6e8'
Oct 13 13:54:17 standalone.localdomain puppet-user[62927]: Warning: Scope(Class[Swift::Proxy::S3token]): Usage of the default password is deprecated and will be removed in a future release. \
Oct 13 13:54:17 standalone.localdomain puppet-user[62927]: Please set password parameter
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{sha256}9feefdc48c65f8b73ab77f3fc813d60744dc97b336bbd60e16bbd763b99c5d66'
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Policy/Oslo::Policy[placement_config]/Placement_config[oslo_policy/enforce_scope]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Policy/Oslo::Policy[placement_config]/Placement_config[oslo_policy/enforce_new_defaults]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Policy/Oslo::Policy[placement_config]/Placement_config[oslo_policy/policy_file]/ensure: created
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/ssl.conf]/ensure: removed
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed
Oct 13 13:54:17 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{sha256}19cb9bd7248ea35b8e882d1d21458b114cfa18be60fb8acbf1eb5cc9cab1afb7'
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Wsgi::Apache/File[/etc/httpd/conf.d/00-placement-api.conf]/content: content changed '{sha256}829e74856246ff8f4a56a4995cd421edd210e3c0342c998de9e934d33c2d229f' to '{sha256}a742a33fca7bd0225b70d9c9c3f9977f3f5b1391a7c4db389c2e405e7a0e7ecc'
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{sha256}ca7e6bca762fed4f5860c5961f7d7873dfa06890a8dae109803984f2a57c857d'
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Wsgi::Apache/Openstacklib::Wsgi::Apache[placement_wsgi]/File[/var/www/cgi-bin/placement]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Wsgi::Apache/Openstacklib::Wsgi::Apache[placement_wsgi]/File[placement_wsgi]/ensure: defined content as '{sha256}7330573e2f484b77671e7cd10bec4bf8fe4471ba5a127b8362286c6c89a050fe'
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Warning: Scope(Class[Swift::Keymaster]): password parameter is missing
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]:    (file & line not available)
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{sha256}3906459aafe799c09305ffbfe0105de3fb9d05a4636cd93e6af9f82e10c8788b'
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{sha256}736d628e01f143a2d94f46af14446fe584d90a1a5dc68a9153e5c676f5888b15'
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]:    (file & line not available)
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Warning: Scope(Class[Swift::Storage::All]): The default port for the object storage server has changed \
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: from 6000 to 6200 and will be changed in a later release
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Warning: Scope(Class[Swift::Storage::All]): The default port for the container storage server has changed \
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: from 6001 to 6201 and will be changed in a later release
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Warning: Scope(Class[Swift::Storage::All]): The default port for the account storage server has changed \
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: from 6002 to 6202 and will be changed in a later release
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-brotli.conf]/ensure: removed
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-optional.conf]/ensure: removed
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi-python3.conf]/ensure: removed
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/README]/ensure: removed
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: /Stage[main]/Placement::Wsgi::Apache/Openstacklib::Wsgi::Apache[placement_wsgi]/Apache::Vhost[placement_wsgi]/Concat[10-placement_wsgi.conf]/File[/etc/httpd/conf.d/10-placement_wsgi.conf]/ensure: defined content as '{sha256}7d6abda294aa9a3c525068f62e7f5355fd49cf75cdcae362942b46d4519a4e42'
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Notice: Applied catalog in 1.40 seconds
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Application:
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:    Initial environment: production
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:    Converged environment: production
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:          Run mode: user
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Changes:
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:             Total: 63
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Events:
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:           Success: 63
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:             Total: 63
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Resources:
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:           Skipped: 31
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:           Changed: 63
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:       Out of sync: 63
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:             Total: 207
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Time:
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:            Anchor: 0.00
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:       Concat file: 0.00
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:    Concat fragment: 0.00
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:           Package: 0.03
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:    Placement config: 0.18
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:              File: 0.41
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:            Augeas: 0.54
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:    Config retrieval: 1.34
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:    Transaction evaluation: 1.39
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:    Catalog application: 1.40
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:          Last run: 1760363658
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:             Total: 1.40
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]: Version:
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:            Config: 1760363655
Oct 13 13:54:18 standalone.localdomain puppet-user[62555]:            Puppet: 7.10.0
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]: Warning: The string '1' was automatically coerced to the numerical value 1 (file: /etc/puppet/modules/tripleo/manifests/profile/base/swift/add_devices.pp, line: 39, column: 13)
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]: Warning: The string '1' was automatically coerced to the numerical value 1 (file: /etc/puppet/modules/tripleo/manifests/profile/base/swift/add_devices.pp, line: 39, column: 25)
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]: Warning: validate_legacy(validate_re) expects an Integer value, got String at ["/etc/puppet/modules/swift/manifests/ringbuilder/rebalance.pp", 23]:
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.75 seconds
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.30 seconds
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]: Notice: /Stage[main]/Swift/File[/var/lib/swift]/group: group changed 'root' to 'swift'
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]: Notice: /Stage[main]/Swift/File[/var/run/swift]/group: group changed 'root' to 'swift'
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]: Notice: /Stage[main]/Swift/File[/etc/swift/swift.conf]/owner: owner changed 'root' to 'swift'
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]: Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed %SWIFT_HASH_PATH_SUFFIX% to dZWxCiyYYMNBdNYstzHYUytrZ
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]: Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_prefix]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]: Notice: /Stage[main]/Swift/Swift_config[swift-constraints/max_header_size]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Rsync::Server/Concat[/etc/rsyncd.conf]/File[/etc/rsyncd.conf]/content: content changed '{sha256}189b30972178b755e8e70eab81b1d261c4def61b342300f11760e6f2e706ff64' to '{sha256}8210088fdc2e22733ccc647dd21297f6a13e582b2887bc8bfc10ab58389c8004'
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Rsync::Server/Service[rsyncd]/ensure: ensure changed 0 to 'running'
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[filter:cache/memcache_servers]/value: value changed 127.0.0.1:11211 to standalone.internalapi.localdomain:11211
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[filter:cache/tls_enabled]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[filter:proxy-logging/use]/value: value changed egg:swift#poxy_logging to egg:swift#proxy_logging
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[pipeline:main/pipeline]/value: value changed catch_errors proxy-logging cache proxy-server to catch_errors cache proxy-server
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/auto_create_account_prefix]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/concurrency]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/expiring_objects_account_name]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/interval]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/process]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/processes]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/reclaim_age]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/recon_cache_path]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/report_interval]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_facility]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_level]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift/File[/var/lib/swift]/group: group changed 'root' to 'swift'
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift/File[/var/run/swift]/group: group changed 'root' to 'swift'
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift/File[/etc/swift/swift.conf]/owner: owner changed 'root' to 'swift'
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed %SWIFT_HASH_PATH_SUFFIX% to dZWxCiyYYMNBdNYstzHYUytrZ
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_prefix]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift/Swift_config[swift-constraints/max_header_size]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/bind_ip]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/workers]/value: value changed 8 to 1
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_name]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_facility]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_level]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_headers]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_address]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[pipeline:main/pipeline]/value: value changed catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server to catch_errors gatekeeper healthcheck proxy-logging cache listing_formats ratelimit bulk tempurl formpost authtoken s3api s3token keystone staticweb copy container_quotas account_quotas slo dlo versioned_writes proxy-logging proxy-server
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_name]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_facility]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_level]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_address]/ensure: created
Oct 13 13:54:18 standalone.localdomain systemd[1]: libpod-934e8e433ff43a88c53ffa71ea59bf8be89efe6daeaeacb03d8545256b498aff.scope: Deactivated successfully.
Oct 13 13:54:18 standalone.localdomain systemd[1]: libpod-934e8e433ff43a88c53ffa71ea59bf8be89efe6daeaeacb03d8545256b498aff.scope: Consumed 4.085s CPU time.
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/log_handoffs]/ensure: created
Oct 13 13:54:18 standalone.localdomain podman[62473]: 2025-10-13 13:54:18.606291888 +0000 UTC m=+5.357699522 container died 934e8e433ff43a88c53ffa71ea59bf8be89efe6daeaeacb03d8545256b498aff (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=container-puppet-placement, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-placement-api, build-date=2025-07-21T13:58:12, summary=Red Hat OpenStack Platform 17.1 placement-api, release=1, com.redhat.component=openstack-placement-api-container, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,placement_config', 'NAME': 'placement', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::placement::api'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, container_name=container-puppet-placement, description=Red Hat OpenStack Platform 17.1 placement-api)
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/object_chunk_size]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/client_chunk_size]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/allow_account_management]/value: value changed true to True
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/account_autocreate]/value: value changed true to True
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/max_containers_per_account]/ensure: created
Oct 13 13:54:18 standalone.localdomain systemd[1]: libpod-d74033809a79eafed324d8dd7abd5ec6a93f6ad7bcf010bef6d4c897d543a1fb.scope: Deactivated successfully.
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/node_timeout]/ensure: created
Oct 13 13:54:18 standalone.localdomain systemd[1]: libpod-d74033809a79eafed324d8dd7abd5ec6a93f6ad7bcf010bef6d4c897d543a1fb.scope: Consumed 8.901s CPU time.
Oct 13 13:54:18 standalone.localdomain podman[61900]: 2025-10-13 13:54:18.626887167 +0000 UTC m=+9.693559866 container died d74033809a79eafed324d8dd7abd5ec6a93f6ad7bcf010bef6d4c897d543a1fb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-libvirt, vcs-type=git, batch=17.1_20250721.1, container_name=container-puppet-nova_libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-07-21T14:56:59, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc.)
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/recoverable_node_timeout]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Gatekeeper/Swift_proxy_config[filter:gatekeeper/set log_name]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Gatekeeper/Swift_proxy_config[filter:gatekeeper/set log_facility]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Gatekeeper/Swift_proxy_config[filter:gatekeeper/set log_level]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Gatekeeper/Swift_proxy_config[filter:gatekeeper/set log_headers]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Gatekeeper/Swift_proxy_config[filter:gatekeeper/set log_address]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Cache/Swift_proxy_config[filter:cache/memcache_servers]/value: value changed 127.0.0.1:11211 to standalone.internalapi.localdomain:11211
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Cache/Swift_proxy_config[filter:cache/tls_enabled]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Cache/Swift_proxy_config[filter:cache/memcache_max_connections]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Listing_formats/Swift_proxy_config[filter:listing_formats/use]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/clock_accuracy]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/max_sleep_time_seconds]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/log_sleep_time_seconds]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/rate_buffer_seconds]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/account_ratelimit]/ensure: created
Oct 13 13:54:18 standalone.localdomain systemd[1]: tmp-crun.1FN5z3.mount: Deactivated successfully.
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_containers_per_extraction]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_failed_extractions]/ensure: created
Oct 13 13:54:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-934e8e433ff43a88c53ffa71ea59bf8be89efe6daeaeacb03d8545256b498aff-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_deletes_per_request]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/yield_frequency]/ensure: created
Oct 13 13:54:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-caf1f29ceeb931335ec036bcac576e0e50be69d47f8ecf9dd6e53a9d522384f3-merged.mount: Deactivated successfully.
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Formpost/Swift_proxy_config[filter:formpost/use]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/log_name]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/www_authenticate_uri]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_url]/value: value changed http://127.0.0.1:5000 to http://172.17.0.2:5000
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_type]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_domain_id]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[63095]: Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[object]/Exec[create_object]/returns: executed successfully
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/user_domain_id]/ensure: created
Oct 13 13:54:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:54:18 standalone.localdomain podman[63498]: 2025-10-13 13:54:18.765752781 +0000 UTC m=+0.149969998 container cleanup 934e8e433ff43a88c53ffa71ea59bf8be89efe6daeaeacb03d8545256b498aff (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=container-puppet-placement, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,placement_config', 'NAME': 'placement', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::placement::api'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=container-puppet-placement, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 placement-api, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, release=1, build-date=2025-07-21T13:58:12, version=17.1.9, config_id=tripleo_puppet_step1, com.redhat.component=openstack-placement-api-container, name=rhosp17/openstack-placement-api, batch=17.1_20250721.1)
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_name]/value: value changed %SERVICE_TENANT_NAME% to service
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/username]/value: value changed %SERVICE_USER% to swift
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/password]/value: value changed [old secret redacted] to [new secret redacted]
Oct 13 13:54:18 standalone.localdomain podman[63513]: 2025-10-13 13:54:18.774457583 +0000 UTC m=+0.137644890 container cleanup d74033809a79eafed324d8dd7abd5ec6a93f6ad7bcf010bef6d4c897d543a1fb (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, container_name=container-puppet-nova_libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_puppet_step1, architecture=x86_64, maintainer=OpenStack TripleO Team, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/region_name]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/delay_auth_decision]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/cache]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/include_service_catalog]/ensure: created
Oct 13 13:54:18 standalone.localdomain systemd[1]: libpod-conmon-934e8e433ff43a88c53ffa71ea59bf8be89efe6daeaeacb03d8545256b498aff.scope: Deactivated successfully.
Oct 13 13:54:18 standalone.localdomain systemd[1]: libpod-conmon-d74033809a79eafed324d8dd7abd5ec6a93f6ad7bcf010bef6d4c897d543a1fb.scope: Deactivated successfully.
Oct 13 13:54:18 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-placement --conmon-pidfile /run/container-puppet-placement.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,placement_config --env NAME=placement --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::database::mysql::client
                                                       include tripleo::profile::base::placement::api --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-placement --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,placement_config', 'NAME': 'placement', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::database::mysql::client\ninclude tripleo::profile::base::placement::api'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-placement.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/interface]/ensure: created
Oct 13 13:54:18 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages
                                                       # TODO(emilien): figure how to deal with libvirt profile.
                                                       # We'll probably treat it like we do with Neutron plugins.
                                                       # Until then, just include it in the default nova-compute role.
                                                       include tripleo::profile::base::nova::compute::libvirt
                                                       
                                                       include tripleo::profile::base::nova::libvirt
                                                       
                                                       include tripleo::profile::base::sshd
                                                       include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/use]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/auth_pipeline_check]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/use]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/auth_uri]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/reseller_prefix]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/delay_auth_decision]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/secret_cache_duration]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/auth_url]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/auth_type]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/username]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/password]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/project_name]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/project_domain_id]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/user_domain_id]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/operator_roles]/value: value changed admin, SwiftOperator to admin, swiftoperator, ResellerAdmin
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/reseller_prefix]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/system_reader_roles]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/use]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/url_base]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Copy/Swift_proxy_config[filter:copy/object_post_as_copy]/value: value changed false to True
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Container_quotas/Swift_proxy_config[filter:container_quotas/use]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Account_quotas/Swift_proxy_config[filter:account_quotas/use]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_segments]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_size]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/min_segment_size]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_after_segment]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_segments_per_sec]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_get_time]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_after_segment]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_segments_per_sec]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/max_get_time]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Versioned_writes/Swift_proxy_config[filter:versioned_writes/allow_versioned_writes]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/use]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/keymaster_config_path]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/use]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/disable_encryption]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/api_class]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/username]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_name]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_domain_id]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/user_domain_id]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/meta_version_to_write]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node/d1]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/File[/etc/swift/account-server/]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/File[/etc/swift/container-server/]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/File[/etc/swift/object-server/]/ensure: created
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/etc/swift/account-server.conf]/ensure: defined content as '{sha256}d2a6eae26e6bb266e89c924af5daf51baa1176313653f492df0e2dee30e7979b'
Oct 13 13:54:18 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/etc/swift/container-server.conf]/ensure: defined content as '{sha256}be1f7bb1043c590ea50b8893c4e6a01a721e90a760bfff17d69d4d759ff6fcba'
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]: Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/etc/swift/object-server.conf]/ensure: defined content as '{sha256}41b5f6e5fdef39efe7ae6c6b073c38d8857dd8103bab6e16d0207f6bf805bcf0'
Oct 13 13:54:19 standalone.localdomain puppet-user[63095]: Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[account]/Exec[create_account]/returns: executed successfully
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]: Notice: Applied catalog in 0.59 seconds
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]: Application:
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:    Initial environment: production
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:    Converged environment: production
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:          Run mode: user
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]: Changes:
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:             Total: 126
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]: Events:
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:           Success: 126
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:             Total: 126
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]: Resources:
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:           Changed: 126
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:       Out of sync: 126
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:           Skipped: 37
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:             Total: 249
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]: Time:
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:         Resources: 0.00
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:       Concat file: 0.00
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:    Concat fragment: 0.00
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:           Service: 0.00
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:      Swift config: 0.00
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:    Swift keymaster config: 0.01
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:    Swift object expirer config: 0.02
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:           Package: 0.02
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:              File: 0.03
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:    Swift proxy config: 0.34
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:    Transaction evaluation: 0.58
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:    Catalog application: 0.59
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:    Config retrieval: 0.90
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:          Last run: 1760363659
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:             Total: 0.59
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]: Version:
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:            Config: 1760363657
Oct 13 13:54:19 standalone.localdomain puppet-user[62927]:            Puppet: 7.10.0
Oct 13 13:54:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v538: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:19 standalone.localdomain puppet-user[63095]: Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[container]/Exec[create_container]/returns: executed successfully
Oct 13 13:54:19 standalone.localdomain systemd[1]: libpod-d8f125aa176f851926986bc036464f9a241e83bdc2e18189c4293bf1ec235572.scope: Deactivated successfully.
Oct 13 13:54:19 standalone.localdomain systemd[1]: libpod-d8f125aa176f851926986bc036464f9a241e83bdc2e18189c4293bf1ec235572.scope: Consumed 4.095s CPU time.
Oct 13 13:54:19 standalone.localdomain podman[62870]: 2025-10-13 13:54:19.601140538 +0000 UTC m=+4.641256593 container died d8f125aa176f851926986bc036464f9a241e83bdc2e18189c4293bf1ec235572 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=container-puppet-swift, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', 'NAME': 'swift', 'STEP_CONFIG': "include ::tripleo::packages\ninclude tripleo::profile::base::swift::proxy\n\nclass xinetd() {}\ndefine xinetd::service($bind='',$port='',$server='',$server_args='') {}\nnoop_resource('service')\ninclude tripleo::profile::base::swift::storage"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, managed_by=tripleo_ansible, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, container_name=container-puppet-swift, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-proxy-server)
Oct 13 13:54:19 standalone.localdomain puppet-user[63095]: Warning: Unexpected line: Ring file /etc/swift/object.ring.gz not found, probably it hasn't been written yet
Oct 13 13:54:19 standalone.localdomain puppet-user[63095]: Warning: Unexpected line: Devices:   id region zone ip address:port replication ip:port  name weight partitions balance flags meta
Oct 13 13:54:19 standalone.localdomain puppet-user[63095]: Warning: Unexpected line: There are no devices in this ring, or all devices have been deleted
Oct 13 13:54:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c7b1648ca07963a73362905865b5e148d2096f74e5b48b5a7bd40d6efde1ab3f-merged.mount: Deactivated successfully.
Oct 13 13:54:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d74033809a79eafed324d8dd7abd5ec6a93f6ad7bcf010bef6d4c897d543a1fb-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:19 standalone.localdomain systemd[1]: tmp-crun.GiLW28.mount: Deactivated successfully.
Oct 13 13:54:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8f125aa176f851926986bc036464f9a241e83bdc2e18189c4293bf1ec235572-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ce87643b35ec000a1792ecfe3a8ec00ec1224e0b2865439ada2796f407c1075b-merged.mount: Deactivated successfully.
Oct 13 13:54:19 standalone.localdomain podman[63619]: 2025-10-13 13:54:19.732925781 +0000 UTC m=+0.118169905 container cleanup d8f125aa176f851926986bc036464f9a241e83bdc2e18189c4293bf1ec235572 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=container-puppet-swift, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=container-puppet-swift, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', 'NAME': 'swift', 'STEP_CONFIG': "include ::tripleo::packages\ninclude tripleo::profile::base::swift::proxy\n\nclass xinetd() {}\ndefine xinetd::service($bind='',$port='',$server='',$server_args='') {}\nnoop_resource('service')\ninclude tripleo::profile::base::swift::storage"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-swift-proxy-server-container, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, release=1, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 13:54:19 standalone.localdomain systemd[1]: libpod-conmon-d8f125aa176f851926986bc036464f9a241e83bdc2e18189c4293bf1ec235572.scope: Deactivated successfully.
Oct 13 13:54:19 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-swift --conmon-pidfile /run/container-puppet-swift.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server --env NAME=swift --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::swift::proxy
                                                       
                                                       class xinetd() {}
                                                       define xinetd::service($bind='',$port='',$server='',$server_args='') {}
                                                       noop_resource('service')
                                                       include tripleo::profile::base::swift::storage --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-swift --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', 'NAME': 'swift', 'STEP_CONFIG': "include ::tripleo::packages\ninclude tripleo::profile::base::swift::proxy\n\nclass xinetd() {}\ndefine xinetd::service($bind='',$port='',$server='',$server_args='') {}\nnoop_resource('service')\ninclude tripleo::profile::base::swift::storage"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-swift.log 
--network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1
Oct 13 13:54:19 standalone.localdomain ceph-mon[29756]: pgmap v538: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:19 standalone.localdomain puppet-user[63095]: Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.20.0.100:%PORT%/d1]/Ring_object_device[172.20.0.100:6000/d1]/ensure: created
Oct 13 13:54:20 standalone.localdomain puppet-user[63095]: Warning: Unexpected line: Ring file /etc/swift/container.ring.gz not found, probably it hasn't been written yet
Oct 13 13:54:20 standalone.localdomain puppet-user[63095]: Warning: Unexpected line: Devices:   id region zone ip address:port replication ip:port  name weight partitions balance flags meta
Oct 13 13:54:20 standalone.localdomain puppet-user[63095]: Warning: Unexpected line: There are no devices in this ring, or all devices have been deleted
Oct 13 13:54:20 standalone.localdomain puppet-user[63095]: Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.20.0.100:%PORT%/d1]/Ring_container_device[172.20.0.100:6001/d1]/ensure: created
Oct 13 13:54:20 standalone.localdomain puppet-user[63095]: Warning: Unexpected line: Ring file /etc/swift/account.ring.gz not found, probably it hasn't been written yet
Oct 13 13:54:20 standalone.localdomain puppet-user[63095]: Warning: Unexpected line: Devices:   id region zone ip address:port replication ip:port  name weight partitions balance flags meta
Oct 13 13:54:20 standalone.localdomain puppet-user[63095]: Warning: Unexpected line: There are no devices in this ring, or all devices have been deleted
Oct 13 13:54:21 standalone.localdomain puppet-user[63095]: Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.20.0.100:%PORT%/d1]/Ring_account_device[172.20.0.100:6002/d1]/ensure: created
Oct 13 13:54:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v539: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:21 standalone.localdomain puppet-user[63095]: Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[object]/Exec[rebalance_object]: Triggered 'refresh' from 1 event
Oct 13 13:54:21 standalone.localdomain puppet-user[63095]: Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[account]/Exec[rebalance_account]: Triggered 'refresh' from 1 event
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]: Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[container]/Exec[rebalance_container]: Triggered 'refresh' from 1 event
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]: Notice: Applied catalog in 3.72 seconds
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]: Application:
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:    Initial environment: production
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:    Converged environment: production
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:          Run mode: user
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]: Changes:
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:             Total: 15
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]: Events:
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:           Success: 15
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:             Total: 15
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]: Resources:
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:           Changed: 15
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:       Out of sync: 15
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:           Skipped: 16
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:         Restarted: 3
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:             Total: 32
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]: Time:
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:      Swift config: 0.00
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:              File: 0.01
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:    Config retrieval: 0.38
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:    Ring object device: 0.57
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:    Ring account device: 0.59
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:    Ring container device: 0.62
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:              Exec: 0.95
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:          Last run: 1760363662
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:    Transaction evaluation: 3.71
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:    Catalog application: 3.72
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:             Total: 3.72
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]: Version:
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:            Config: 1760363658
Oct 13 13:54:22 standalone.localdomain puppet-user[63095]:            Puppet: 7.10.0
Oct 13 13:54:22 standalone.localdomain ceph-mon[29756]: pgmap v539: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:22 standalone.localdomain systemd[1]: libpod-8639db24c8d81b791da3c0b8437f87e745d5b37c1bf36ecf8bc7e095d07eae49.scope: Deactivated successfully.
Oct 13 13:54:22 standalone.localdomain systemd[1]: libpod-8639db24c8d81b791da3c0b8437f87e745d5b37c1bf36ecf8bc7e095d07eae49.scope: Consumed 6.339s CPU time.
Oct 13 13:54:22 standalone.localdomain podman[63065]: 2025-10-13 13:54:22.509447473 +0000 UTC m=+7.002836529 container died 8639db24c8d81b791da3c0b8437f87e745d5b37c1bf36ecf8bc7e095d07eae49 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=container-puppet-swift_ringbuilder, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, name=rhosp17/openstack-swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, container_name=container-puppet-swift_ringbuilder, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_puppet_step1, com.redhat.component=openstack-swift-proxy-server-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,swift_config,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', 'NAME': 'swift_ringbuilder', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::swift::ringbuilder\n'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77)
Oct 13 13:54:22 standalone.localdomain systemd[1]: tmp-crun.NJ3K5k.mount: Deactivated successfully.
Oct 13 13:54:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8639db24c8d81b791da3c0b8437f87e745d5b37c1bf36ecf8bc7e095d07eae49-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0c8e7524cf6b6fffc8c0516c67342c7bc57cbe29bccdd18f09acae4c940add05-merged.mount: Deactivated successfully.
Oct 13 13:54:22 standalone.localdomain podman[63708]: 2025-10-13 13:54:22.62769636 +0000 UTC m=+0.110491146 container cleanup 8639db24c8d81b791da3c0b8437f87e745d5b37c1bf36ecf8bc7e095d07eae49 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=container-puppet-swift_ringbuilder, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, com.redhat.component=openstack-swift-proxy-server-container, batch=17.1_20250721.1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=container-puppet-swift_ringbuilder, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, build-date=2025-07-21T14:48:37, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,swift_config,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', 'NAME': 'swift_ringbuilder', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::swift::ringbuilder\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1)
Oct 13 13:54:22 standalone.localdomain systemd[1]: libpod-conmon-8639db24c8d81b791da3c0b8437f87e745d5b37c1bf36ecf8bc7e095d07eae49.scope: Deactivated successfully.
Oct 13 13:54:22 standalone.localdomain python3[58425]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-swift_ringbuilder --conmon-pidfile /run/container-puppet-swift_ringbuilder.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=false --env HOSTNAME=standalone --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,swift_config,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball --env NAME=swift_ringbuilder --env STEP_CONFIG=include ::tripleo::packages
                                                       include tripleo::profile::base::swift::ringbuilder
                                                        --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-swift_ringbuilder --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'false', 'HOSTNAME': 'standalone', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,swift_config,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', 'NAME': 'swift_ringbuilder', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::swift::ringbuilder\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-swift_ringbuilder.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume 
/etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:54:23
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr']
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:54:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v540: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:23 standalone.localdomain python3[63760]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:54:24 standalone.localdomain ceph-mon[29756]: pgmap v540: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:24 standalone.localdomain python3[63772]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:54:24 standalone.localdomain python3[63788]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:54:25 standalone.localdomain python3[63792]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363664.6392114-63778-19410840395217/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v541: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:25 standalone.localdomain python3[63805]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:54:25 standalone.localdomain python3[63809]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363665.1631556-63778-166567021056445/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:25 standalone.localdomain python3[63822]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:54:26 standalone.localdomain python3[63826]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363665.7115643-63812-131377172775472/source dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:26 standalone.localdomain python3[63840]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:54:26 standalone.localdomain ceph-mon[29756]: pgmap v541: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:26 standalone.localdomain python3[63844]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363666.2464552-63830-102905299445574/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:27 standalone.localdomain python3[63850]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:54:27 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:54:27 standalone.localdomain systemd-sysv-generator[63876]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:54:27 standalone.localdomain systemd-rc-local-generator[63873]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:54:27 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:54:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v542: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:27 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:54:27 standalone.localdomain systemd-sysv-generator[63915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:54:27 standalone.localdomain systemd-rc-local-generator[63911]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:54:27 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:54:27 standalone.localdomain systemd[1]: Starting TripleO Container Shutdown...
Oct 13 13:54:27 standalone.localdomain systemd[1]: Finished TripleO Container Shutdown.
Oct 13 13:54:27 standalone.localdomain python3[63937]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:54:28 standalone.localdomain python3[63941]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363667.7470582-63927-95340442867848/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:28 standalone.localdomain ceph-mon[29756]: pgmap v542: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:28 standalone.localdomain python3[63955]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:54:28 standalone.localdomain python3[63959]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363668.2769246-63945-203722155423920/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:54:29 standalone.localdomain python3[63965]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:54:29 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:54:29 standalone.localdomain systemd-rc-local-generator[63987]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:54:29 standalone.localdomain systemd-sysv-generator[63995]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:54:29 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:54:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v543: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:29 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:54:29 standalone.localdomain systemd-sysv-generator[64035]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:54:29 standalone.localdomain systemd-rc-local-generator[64030]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:54:29 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:54:29 standalone.localdomain systemd[1]: Starting Create netns directory...
Oct 13 13:54:29 standalone.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 13:54:29 standalone.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 13:54:29 standalone.localdomain systemd[1]: Finished Create netns directory.
Oct 13 13:54:29 standalone.localdomain ceph-mon[29756]: pgmap v543: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:29 standalone.localdomain sudo[64046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:54:29 standalone.localdomain sudo[64046]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:54:29 standalone.localdomain sudo[64046]: pam_unix(sudo:session): session closed for user root
Oct 13 13:54:29 standalone.localdomain sudo[64062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:54:29 standalone.localdomain sudo[64062]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for memcached, new hash: fe36aaf6c21495a0eb63602becb8c29a
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for mysql_bootstrap, new hash: c719b864882013747e4a6fe61c577719
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for rabbitmq_bootstrap, new hash: 2be21da3032ad75ccbe4135cd0e338d1
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for clustercheck, new hash: a7af8ae2ee3733ec3f843ca782a7d238
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for horizon_fix_perms, new hash: eff67e8d67d5f186cef6e48df141386b
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for mysql_wait_bundle, new hash: c719b864882013747e4a6fe61c577719
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for barbican_api, new hash: d98175232adee9e03624d31ed0b22944
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for barbican_api_db_sync, new hash: d98175232adee9e03624d31ed0b22944
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for barbican_api_secret_store_sync, new hash: d98175232adee9e03624d31ed0b22944
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for barbican_keystone_listener, new hash: d98175232adee9e03624d31ed0b22944
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for barbican_worker, new hash: d98175232adee9e03624d31ed0b22944
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for cinder_api_db_sync, new hash: bf2556c9454e19b68e3daf4f11f84a81
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for glance_api_db_sync, new hash: 20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for heat_engine_db_sync, new hash: 02af0e891420e5c6be1d463bd6e7979c
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for horizon, new hash: eff67e8d67d5f186cef6e48df141386b
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: 9002d51dff127f237c02241c0a80d08e
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for keystone, new hash: 32ceb64403625ed4f04d4f0bcdc85988
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for keystone_cron, new hash: 32ceb64403625ed4f04d4f0bcdc85988
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for keystone_db_sync, new hash: 32ceb64403625ed4f04d4f0bcdc85988
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for manila_api_db_sync, new hash: 6f421a64683e0927a1c7cc208f5b5307
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for neutron_db_sync, new hash: bfea4b567a7178c2bf424bc40994d7e4
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_api_db_sync, new hash: ff6c887813d25a6bfef54f8920eca651
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_api_ensure_default_cells, new hash: ff6c887813d25a6bfef54f8920eca651
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_db_sync, new hash: ff6c887813d25a6bfef54f8920eca651
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: 5ce329a35cfc30978bc40d323681fc5e
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: 5ce329a35cfc30978bc40d323681fc5e
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: 5ce329a35cfc30978bc40d323681fc5e
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: 5ce329a35cfc30978bc40d323681fc5e
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: 5ce329a35cfc30978bc40d323681fc5e
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: 5ce329a35cfc30978bc40d323681fc5e
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for placement_api_db_sync, new hash: 2c5d4de4e0570c35257b2db1f73cb503
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for swift_copy_rings, new hash: d8b7706e98ee5ba7e1c4d758abf97545-1d74184aa75622548b4006bd57ec8160
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for cinder_api, new hash: bf2556c9454e19b68e3daf4f11f84a81
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for cinder_api_cron, new hash: bf2556c9454e19b68e3daf4f11f84a81
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for cinder_scheduler, new hash: bf2556c9454e19b68e3daf4f11f84a81
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for glance_api, new hash: 20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for glance_api_cron, new hash: 20647e333af6a74e07ef3325107e31dd
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for glance_api_internal, new hash: 2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for heat_api, new hash: 171dfb6084155b43ebab325c6674ca84
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for heat_api_cfn, new hash: 8a8bc26dd2600d9ce5af7a68a05ecc83
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for heat_api_cron, new hash: 171dfb6084155b43ebab325c6674ca84
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for heat_engine, new hash: 02af0e891420e5c6be1d463bd6e7979c
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for manila_api, new hash: 6f421a64683e0927a1c7cc208f5b5307
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for manila_api_cron, new hash: 6f421a64683e0927a1c7cc208f5b5307
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for manila_scheduler, new hash: 6f421a64683e0927a1c7cc208f5b5307
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for neutron_api, new hash: bfea4b567a7178c2bf424bc40994d7e4
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for neutron_dhcp, new hash: bfea4b567a7178c2bf424bc40994d7e4
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for neutron_sriov_agent, new hash: bfea4b567a7178c2bf424bc40994d7e4
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_api, new hash: ff6c887813d25a6bfef54f8920eca651
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_api_cron, new hash: ff6c887813d25a6bfef54f8920eca651
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_conductor, new hash: ff6c887813d25a6bfef54f8920eca651
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: 5ce329a35cfc30978bc40d323681fc5e
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_metadata, new hash: 512448c809be25559cd4e0a76027bdf9
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: 5ce329a35cfc30978bc40d323681fc5e
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_scheduler, new hash: ff6c887813d25a6bfef54f8920eca651
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_vnc_proxy, new hash: ff6c887813d25a6bfef54f8920eca651
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_api_service, new hash: ff6c887813d25a6bfef54f8920eca651
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: bfea4b567a7178c2bf424bc40994d7e4
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for placement_api, new hash: 2c5d4de4e0570c35257b2db1f73cb503
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for placement_wait_for_service, new hash: 2c5d4de4e0570c35257b2db1f73cb503
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for swift_account_reaper, new hash: d8b7706e98ee5ba7e1c4d758abf97545
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for swift_account_server, new hash: d8b7706e98ee5ba7e1c4d758abf97545
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for swift_container_server, new hash: d8b7706e98ee5ba7e1c4d758abf97545
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for swift_container_updater, new hash: d8b7706e98ee5ba7e1c4d758abf97545
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for swift_object_expirer, new hash: d8b7706e98ee5ba7e1c4d758abf97545
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for swift_object_server, new hash: d8b7706e98ee5ba7e1c4d758abf97545
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for swift_object_updater, new hash: d8b7706e98ee5ba7e1c4d758abf97545
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for swift_proxy, new hash: d8b7706e98ee5ba7e1c4d758abf97545
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: 9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e
Oct 13 13:54:30 standalone.localdomain python3[64077]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: 5ce329a35cfc30978bc40d323681fc5e
Oct 13 13:54:30 standalone.localdomain sudo[64062]: pam_unix(sudo:session): session closed for user root
Oct 13 13:54:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:54:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:54:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:54:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:54:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:54:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:54:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:54:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:54:30 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 42223d03-12c2-4394-9e17-6a5d3eb8cdf3 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:54:30 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 42223d03-12c2-4394-9e17-6a5d3eb8cdf3 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:54:30 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 42223d03-12c2-4394-9e17-6a5d3eb8cdf3 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:54:30 standalone.localdomain sudo[64116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:54:30 standalone.localdomain sudo[64116]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:54:30 standalone.localdomain sudo[64116]: pam_unix(sudo:session): session closed for user root
Oct 13 13:54:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:54:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:54:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:54:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:54:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v544: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:31 standalone.localdomain ceph-mon[29756]: pgmap v544: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:31 standalone.localdomain python3[64166]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Oct 13 13:54:32 standalone.localdomain podman[64280]: 2025-10-13 13:54:32.277157002 +0000 UTC m=+0.073898177 container create 5a1c313d052d2a90d6a9b5d69dc2bc56ced1254f77df3a8320b7c1b10fc78363 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_data_ownership, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, container_name=mysql_data_ownership, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 mariadb, release=1, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, com.redhat.component=openstack-mariadb-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T12:58:45, io.buildah.version=1.33.12, config_data={'command': ['chown', '-R', 'mysql:', '/var/lib/mysql'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/var/lib/mysql:/var/lib/mysql:z']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:54:32 standalone.localdomain systemd[1]: Started libpod-conmon-5a1c313d052d2a90d6a9b5d69dc2bc56ced1254f77df3a8320b7c1b10fc78363.scope.
Oct 13 13:54:32 standalone.localdomain podman[64279]: 2025-10-13 13:54:32.324253145 +0000 UTC m=+0.122164406 container create ddd83346d30924c80a959fff63d164667a77d94362b7bef121d77a63aedb4d1a (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=rabbitmq_bootstrap, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, build-date=2025-07-21T13:08:05, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=rabbitmq_bootstrap, config_data={'command': ['bash', '-ec', 'kolla_set_configs\nif [[ -e "/var/lib/rabbitmq/.erlang.cookie" ]]; then rm -f /var/lib/rabbitmq/.erlang.cookie; fi\nhiera \'rabbitmq::erlang_cookie\' > /var/lib/rabbitmq/.erlang.cookie\nchown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie\nchmod 400 /var/lib/rabbitmq/.erlang.cookie'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '2be21da3032ad75ccbe4135cd0e338d1'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'net': 'host', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rabbitmq:/var/lib/kolla/config_files/src:ro', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/var/lib/rabbitmq:/var/lib/rabbitmq:z', '/etc/puppet:/etc/puppet:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, batch=17.1_20250721.1, com.redhat.component=openstack-rabbitmq-container, summary=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, name=rhosp17/openstack-rabbitmq, managed_by=tripleo_ansible, tcib_managed=true)
Oct 13 13:54:32 standalone.localdomain podman[64269]: 2025-10-13 13:54:32.232029508 +0000 UTC m=+0.046412752 image pull  registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1
Oct 13 13:54:32 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:32 standalone.localdomain podman[64279]: 2025-10-13 13:54:32.238598476 +0000 UTC m=+0.036509767 image pull  registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1
Oct 13 13:54:32 standalone.localdomain podman[64280]: 2025-10-13 13:54:32.240587945 +0000 UTC m=+0.037329130 image pull  registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1
Oct 13 13:54:32 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be825dd8871dd84d3ff7059da1d66b913a6d0660f0a5515a236de88c784717e2/merged/var/lib/mysql supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:32 standalone.localdomain podman[64269]: 2025-10-13 13:54:32.345576984 +0000 UTC m=+0.159960188 container create 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_id=tripleo_step1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, container_name=memcached, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, com.redhat.component=openstack-memcached-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']})
Oct 13 13:54:32 standalone.localdomain systemd[1]: Started libpod-conmon-ddd83346d30924c80a959fff63d164667a77d94362b7bef121d77a63aedb4d1a.scope.
Oct 13 13:54:32 standalone.localdomain systemd[1]: Started libpod-conmon-3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.scope.
Oct 13 13:54:32 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:32 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:32 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927bd228992d5daf6278b2cf1e7a3606939c12c35e65ead5742f030df49b151a/merged/var/lib/rabbitmq supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:32 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43593073e6f183b06ef562b34ef0cb60361ac1108d165f04c57da6bff77c59d1/merged/var/log/memcached supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:32 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43593073e6f183b06ef562b34ef0cb60361ac1108d165f04c57da6bff77c59d1/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:32 standalone.localdomain podman[64280]: 2025-10-13 13:54:32.435232214 +0000 UTC m=+0.231973389 container init 5a1c313d052d2a90d6a9b5d69dc2bc56ced1254f77df3a8320b7c1b10fc78363 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_data_ownership, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-mariadb-container, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-mariadb, release=1, container_name=mysql_data_ownership, version=17.1.9, build-date=2025-07-21T12:58:45, config_data={'command': ['chown', '-R', 'mysql:', '/var/lib/mysql'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/var/lib/mysql:/var/lib/mysql:z']}, description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, config_id=tripleo_step1, distribution-scope=public)
Oct 13 13:54:32 standalone.localdomain podman[64280]: 2025-10-13 13:54:32.443293146 +0000 UTC m=+0.240034321 container start 5a1c313d052d2a90d6a9b5d69dc2bc56ced1254f77df3a8320b7c1b10fc78363 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_data_ownership, com.redhat.component=openstack-mariadb-container, name=rhosp17/openstack-mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, container_name=mysql_data_ownership, build-date=2025-07-21T12:58:45, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., tcib_managed=true, config_data={'command': ['chown', '-R', 'mysql:', '/var/lib/mysql'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/var/lib/mysql:/var/lib/mysql:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, release=1, architecture=x86_64)
Oct 13 13:54:32 standalone.localdomain podman[64280]: 2025-10-13 13:54:32.443406479 +0000 UTC m=+0.240147654 container attach 5a1c313d052d2a90d6a9b5d69dc2bc56ced1254f77df3a8320b7c1b10fc78363 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_data_ownership, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, distribution-scope=public, config_id=tripleo_step1, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T12:58:45, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, config_data={'command': ['chown', '-R', 'mysql:', '/var/lib/mysql'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/var/lib/mysql:/var/lib/mysql:z']}, vcs-type=git, managed_by=tripleo_ansible, container_name=mysql_data_ownership, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 13:54:32 standalone.localdomain systemd[1]: libpod-5a1c313d052d2a90d6a9b5d69dc2bc56ced1254f77df3a8320b7c1b10fc78363.scope: Deactivated successfully.
Oct 13 13:54:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 13:54:32 standalone.localdomain podman[64280]: 2025-10-13 13:54:32.448855693 +0000 UTC m=+0.245596878 container died 5a1c313d052d2a90d6a9b5d69dc2bc56ced1254f77df3a8320b7c1b10fc78363 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_data_ownership, summary=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 mariadb, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, batch=17.1_20250721.1, container_name=mysql_data_ownership, version=17.1.9, config_data={'command': ['chown', '-R', 'mysql:', '/var/lib/mysql'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/var/lib/mysql:/var/lib/mysql:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, vcs-type=git, tcib_managed=true, build-date=2025-07-21T12:58:45, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 13:54:32 standalone.localdomain podman[64269]: 2025-10-13 13:54:32.449856042 +0000 UTC m=+0.264239226 container init 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, com.redhat.component=openstack-memcached-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, container_name=memcached, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, build-date=2025-07-21T12:58:43, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, 
Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public)
Oct 13 13:54:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 13:54:32 standalone.localdomain podman[64269]: 2025-10-13 13:54:32.473928555 +0000 UTC m=+0.288311719 container start 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, summary=Red Hat OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.openshift.expose-services=, container_name=memcached, maintainer=OpenStack TripleO Team, 
batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, config_id=tripleo_step1, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, release=1, build-date=2025-07-21T12:58:43)
Oct 13 13:54:32 standalone.localdomain sudo[64330]: memcached : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 13:54:32 standalone.localdomain sudo[64330]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42457)
Oct 13 13:54:32 standalone.localdomain python3[64166]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name memcached --conmon-pidfile /run/memcached.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=fe36aaf6c21495a0eb63602becb8c29a --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=memcached --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/memcached.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z --volume /var/log/containers/memcached:/var/log/memcached:rw registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1
Oct 13 13:54:32 standalone.localdomain podman[64279]: 2025-10-13 13:54:32.487842101 +0000 UTC m=+0.285753382 container init ddd83346d30924c80a959fff63d164667a77d94362b7bef121d77a63aedb4d1a (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=rabbitmq_bootstrap, com.redhat.component=openstack-rabbitmq-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:08:05, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-rabbitmq, architecture=x86_64, config_data={'command': ['bash', '-ec', 'kolla_set_configs\nif [[ -e "/var/lib/rabbitmq/.erlang.cookie" ]]; then rm -f /var/lib/rabbitmq/.erlang.cookie; fi\nhiera \'rabbitmq::erlang_cookie\' > /var/lib/rabbitmq/.erlang.cookie\nchown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie\nchmod 400 /var/lib/rabbitmq/.erlang.cookie'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '2be21da3032ad75ccbe4135cd0e338d1'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'net': 'host', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rabbitmq:/var/lib/kolla/config_files/src:ro', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/var/lib/rabbitmq:/var/lib/rabbitmq:z', '/etc/puppet:/etc/puppet:ro,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, container_name=rabbitmq_bootstrap, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step1, io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 rabbitmq, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:54:32 standalone.localdomain podman[64279]: 2025-10-13 13:54:32.496847852 +0000 UTC m=+0.294759113 container start ddd83346d30924c80a959fff63d164667a77d94362b7bef121d77a63aedb4d1a (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=rabbitmq_bootstrap, summary=Red Hat OpenStack Platform 17.1 rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, build-date=2025-07-21T13:08:05, com.redhat.component=openstack-rabbitmq-container, container_name=rabbitmq_bootstrap, distribution-scope=public, vcs-type=git, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, release=1, name=rhosp17/openstack-rabbitmq, config_id=tripleo_step1, config_data={'command': ['bash', '-ec', 'kolla_set_configs\nif [[ -e "/var/lib/rabbitmq/.erlang.cookie" ]]; then rm -f /var/lib/rabbitmq/.erlang.cookie; fi\nhiera \'rabbitmq::erlang_cookie\' > /var/lib/rabbitmq/.erlang.cookie\nchown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie\nchmod 400 /var/lib/rabbitmq/.erlang.cookie'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '2be21da3032ad75ccbe4135cd0e338d1'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'net': 'host', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rabbitmq:/var/lib/kolla/config_files/src:ro', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/var/lib/rabbitmq:/var/lib/rabbitmq:z', '/etc/puppet:/etc/puppet:ro,z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:54:32 standalone.localdomain python3[64166]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rabbitmq_bootstrap --conmon-pidfile /run/rabbitmq_bootstrap.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --env TRIPLEO_CONFIG_HASH=2be21da3032ad75ccbe4135cd0e338d1 --label config_id=tripleo_step1 --label container_name=rabbitmq_bootstrap --label managed_by=tripleo_ansible --label config_data={'command': ['bash', '-ec', 'kolla_set_configs\nif [[ -e "/var/lib/rabbitmq/.erlang.cookie" ]]; then rm -f /var/lib/rabbitmq/.erlang.cookie; fi\nhiera \'rabbitmq::erlang_cookie\' > /var/lib/rabbitmq/.erlang.cookie\nchown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie\nchmod 400 /var/lib/rabbitmq/.erlang.cookie'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '2be21da3032ad75ccbe4135cd0e338d1'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'net': 'host', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rabbitmq:/var/lib/kolla/config_files/src:ro', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/var/lib/rabbitmq:/var/lib/rabbitmq:z', '/etc/puppet:/etc/puppet:ro,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rabbitmq_bootstrap.log --network host --privileged=False --user root --volume /var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rabbitmq:/var/lib/kolla/config_files/src:ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /var/lib/rabbitmq:/var/lib/rabbitmq:z --volume /etc/puppet:/etc/puppet:ro,z registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1 bash -ec kolla_set_configs
                                                       if [[ -e "/var/lib/rabbitmq/.erlang.cookie" ]]; then rm -f /var/lib/rabbitmq/.erlang.cookie; fi
                                                       hiera 'rabbitmq::erlang_cookie' > /var/lib/rabbitmq/.erlang.cookie
                                                       chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
                                                       chmod 400 /var/lib/rabbitmq/.erlang.cookie
Oct 13 13:54:32 standalone.localdomain sudo[64330]: pam_unix(sudo:session): session closed for user root
Oct 13 13:54:32 standalone.localdomain podman[64336]: 2025-10-13 13:54:32.579943724 +0000 UTC m=+0.097313209 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=starting, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, name=rhosp17/openstack-memcached, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, 
config_id=tripleo_step1, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:43, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, container_name=memcached, vcs-type=git, tcib_managed=true, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe)
Oct 13 13:54:32 standalone.localdomain podman[64336]: 2025-10-13 13:54:32.601738908 +0000 UTC m=+0.119108363 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, name=rhosp17/openstack-memcached, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, build-date=2025-07-21T12:58:43, io.openshift.expose-services=, architecture=x86_64, container_name=memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, distribution-scope=public, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container)
Oct 13 13:54:32 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 13:54:32 standalone.localdomain systemd[1]: libpod-ddd83346d30924c80a959fff63d164667a77d94362b7bef121d77a63aedb4d1a.scope: Deactivated successfully.
Oct 13 13:54:32 standalone.localdomain podman[64328]: 2025-10-13 13:54:32.657096618 +0000 UTC m=+0.193858225 container cleanup 5a1c313d052d2a90d6a9b5d69dc2bc56ced1254f77df3a8320b7c1b10fc78363 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_data_ownership, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, config_data={'command': ['chown', '-R', 'mysql:', '/var/lib/mysql'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/var/lib/mysql:/var/lib/mysql:z']}, container_name=mysql_data_ownership, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, config_id=tripleo_step1, architecture=x86_64, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, release=1, distribution-scope=public)
Oct 13 13:54:32 standalone.localdomain systemd[1]: libpod-conmon-5a1c313d052d2a90d6a9b5d69dc2bc56ced1254f77df3a8320b7c1b10fc78363.scope: Deactivated successfully.
Oct 13 13:54:32 standalone.localdomain python3[64166]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name mysql_data_ownership --conmon-pidfile /run/mysql_data_ownership.pid --detach=False --label config_id=tripleo_step1 --label container_name=mysql_data_ownership --label managed_by=tripleo_ansible --label config_data={'command': ['chown', '-R', 'mysql:', '/var/lib/mysql'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/var/lib/mysql:/var/lib/mysql:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/mysql_data_ownership.log --network host --user root --volume /var/lib/mysql:/var/lib/mysql:z registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1 chown -R mysql: /var/lib/mysql
Oct 13 13:54:32 standalone.localdomain podman[64418]: 2025-10-13 13:54:32.736628993 +0000 UTC m=+0.090123394 container died ddd83346d30924c80a959fff63d164667a77d94362b7bef121d77a63aedb4d1a (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=rabbitmq_bootstrap, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, build-date=2025-07-21T13:08:05, config_data={'command': ['bash', '-ec', 'kolla_set_configs\nif [[ -e "/var/lib/rabbitmq/.erlang.cookie" ]]; then rm -f /var/lib/rabbitmq/.erlang.cookie; fi\nhiera \'rabbitmq::erlang_cookie\' > /var/lib/rabbitmq/.erlang.cookie\nchown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie\nchmod 400 /var/lib/rabbitmq/.erlang.cookie'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '2be21da3032ad75ccbe4135cd0e338d1'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'net': 'host', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rabbitmq:/var/lib/kolla/config_files/src:ro', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/var/lib/rabbitmq:/var/lib/rabbitmq:z', '/etc/puppet:/etc/puppet:ro,z']}, container_name=rabbitmq_bootstrap, distribution-scope=public, com.redhat.component=openstack-rabbitmq-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, name=rhosp17/openstack-rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 13 13:54:32 standalone.localdomain podman[64418]: 2025-10-13 13:54:32.755062167 +0000 UTC m=+0.108556528 container cleanup ddd83346d30924c80a959fff63d164667a77d94362b7bef121d77a63aedb4d1a (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=rabbitmq_bootstrap, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, com.redhat.component=openstack-rabbitmq-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, config_id=tripleo_step1, name=rhosp17/openstack-rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:08:05, vcs-type=git, config_data={'command': ['bash', '-ec', 'kolla_set_configs\nif [[ -e "/var/lib/rabbitmq/.erlang.cookie" ]]; then rm -f /var/lib/rabbitmq/.erlang.cookie; fi\nhiera \'rabbitmq::erlang_cookie\' > /var/lib/rabbitmq/.erlang.cookie\nchown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie\nchmod 400 /var/lib/rabbitmq/.erlang.cookie'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '2be21da3032ad75ccbe4135cd0e338d1'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'net': 'host', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rabbitmq:/var/lib/kolla/config_files/src:ro', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/var/lib/rabbitmq:/var/lib/rabbitmq:z', 
'/etc/puppet:/etc/puppet:ro,z']}, container_name=rabbitmq_bootstrap, distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 13:54:32 standalone.localdomain systemd[1]: libpod-conmon-ddd83346d30924c80a959fff63d164667a77d94362b7bef121d77a63aedb4d1a.scope: Deactivated successfully.
Oct 13 13:54:33 standalone.localdomain podman[64501]: 2025-10-13 13:54:33.086437496 +0000 UTC m=+0.068537476 container create e0bd8bc2c84478b83c50bb6c7cbd74aab5c7e81245d71dfd92d966e2fa4032e0 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_bootstrap, com.redhat.component=openstack-mariadb-container, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['bash', '-ec', 'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nexport DB_ROOT_PASSWORD=$(hiera \'mysql::server::root_password\')\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'while pgrep -af /usr/bin/mysqld_safe | grep -q -v grep; do sleep 1; done\'\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"$(hiera \'mysql::server::root_password\')" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'$(hiera mysql_clustercheck_password)\';"\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "DELETE FROM mysql.user WHERE user = \'root\' AND host NOT IN (\'%\',\'localhost\');"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"$(hiera \'mysql::server::root_password\')" shutdown'], 'detach': False, 'environment': {'DB_MARIABACKUP_PASSWORD': 'QHxmQQkEtb7pnNBAuySM7gZpW', 'DB_MARIABACKUP_USER': 'mariabackup', 'DB_MAX_TIMEOUT': 60, 'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'c719b864882013747e4a6fe61c577719'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json:rw,z', '/var/lib/config-data/puppet-generated/mysql:/var/lib/kolla/config_files/src:ro,z', '/var/lib/mysql:/var/lib/mysql:rw,z']}, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, version=17.1.9, release=1, vcs-type=git, architecture=x86_64, config_id=tripleo_step1, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, container_name=mysql_bootstrap, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 13:54:33 standalone.localdomain systemd[1]: Started libpod-conmon-e0bd8bc2c84478b83c50bb6c7cbd74aab5c7e81245d71dfd92d966e2fa4032e0.scope.
Oct 13 13:54:33 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:54:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b51a5cc988a88f9993b66e8a15f32d57fa9953fbb5e785faf55f0f27288b1b/merged/var/lib/mysql supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b51a5cc988a88f9993b66e8a15f32d57fa9953fbb5e785faf55f0f27288b1b/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 13 13:54:33 standalone.localdomain podman[64501]: 2025-10-13 13:54:33.046151758 +0000 UTC m=+0.028251768 image pull  registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1
Oct 13 13:54:33 standalone.localdomain podman[64501]: 2025-10-13 13:54:33.157811027 +0000 UTC m=+0.139911017 container init e0bd8bc2c84478b83c50bb6c7cbd74aab5c7e81245d71dfd92d966e2fa4032e0 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_bootstrap, com.redhat.component=openstack-mariadb-container, managed_by=tripleo_ansible, architecture=x86_64, release=1, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-mariadb, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., config_data={'command': ['bash', '-ec', 'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nexport DB_ROOT_PASSWORD=$(hiera \'mysql::server::root_password\')\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'while pgrep -af /usr/bin/mysqld_safe | grep -q -v grep; do sleep 1; done\'\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"$(hiera \'mysql::server::root_password\')" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'$(hiera mysql_clustercheck_password)\';"\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "DELETE FROM mysql.user WHERE user = \'root\' AND host NOT IN (\'%\',\'localhost\');"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"$(hiera \'mysql::server::root_password\')" shutdown'], 'detach': False, 'environment': {'DB_MARIABACKUP_PASSWORD': 'QHxmQQkEtb7pnNBAuySM7gZpW', 'DB_MARIABACKUP_USER': 'mariabackup', 'DB_MAX_TIMEOUT': 60, 'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c719b864882013747e4a6fe61c577719'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json:rw,z', '/var/lib/config-data/puppet-generated/mysql:/var/lib/kolla/config_files/src:ro,z', '/var/lib/mysql:/var/lib/mysql:rw,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, container_name=mysql_bootstrap, summary=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:45, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 13:54:33 standalone.localdomain podman[64501]: 2025-10-13 13:54:33.167795266 +0000 UTC m=+0.149895246 container start e0bd8bc2c84478b83c50bb6c7cbd74aab5c7e81245d71dfd92d966e2fa4032e0 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_bootstrap, com.redhat.component=openstack-mariadb-container, container_name=mysql_bootstrap, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T12:58:45, description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step1, config_data={'command': ['bash', '-ec', 'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nexport DB_ROOT_PASSWORD=$(hiera \'mysql::server::root_password\')\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'while pgrep -af /usr/bin/mysqld_safe | grep -q -v grep; do sleep 1; done\'\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"$(hiera \'mysql::server::root_password\')" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'$(hiera mysql_clustercheck_password)\';"\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "DELETE FROM mysql.user 
WHERE user = \'root\' AND host NOT IN (\'%\',\'localhost\');"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"$(hiera \'mysql::server::root_password\')" shutdown'], 'detach': False, 'environment': {'DB_MARIABACKUP_PASSWORD': 'QHxmQQkEtb7pnNBAuySM7gZpW', 'DB_MARIABACKUP_USER': 'mariabackup', 'DB_MAX_TIMEOUT': 60, 'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c719b864882013747e4a6fe61c577719'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json:rw,z', '/var/lib/config-data/puppet-generated/mysql:/var/lib/kolla/config_files/src:ro,z', '/var/lib/mysql:/var/lib/mysql:rw,z']}, managed_by=tripleo_ansible, release=1, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 13:54:33 standalone.localdomain podman[64501]: 2025-10-13 13:54:33.168110006 +0000 UTC m=+0.150210026 container attach e0bd8bc2c84478b83c50bb6c7cbd74aab5c7e81245d71dfd92d966e2fa4032e0 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_bootstrap, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., config_data={'command': ['bash', '-ec', 'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nexport DB_ROOT_PASSWORD=$(hiera \'mysql::server::root_password\')\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'while pgrep -af /usr/bin/mysqld_safe | grep -q -v grep; do sleep 1; done\'\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"$(hiera \'mysql::server::root_password\')" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'$(hiera mysql_clustercheck_password)\';"\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "DELETE FROM mysql.user WHERE user = \'root\' AND host NOT IN (\'%\',\'localhost\');"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"$(hiera \'mysql::server::root_password\')" shutdown'], 'detach': False, 'environment': {'DB_MARIABACKUP_PASSWORD': 
'QHxmQQkEtb7pnNBAuySM7gZpW', 'DB_MARIABACKUP_USER': 'mariabackup', 'DB_MAX_TIMEOUT': 60, 'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c719b864882013747e4a6fe61c577719'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json:rw,z', '/var/lib/config-data/puppet-generated/mysql:/var/lib/kolla/config_files/src:ro,z', '/var/lib/mysql:/var/lib/mysql:rw,z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, container_name=mysql_bootstrap, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 13:54:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-927bd228992d5daf6278b2cf1e7a3606939c12c35e65ead5742f030df49b151a-merged.mount: Deactivated successfully.
Oct 13 13:54:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ddd83346d30924c80a959fff63d164667a77d94362b7bef121d77a63aedb4d1a-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-be825dd8871dd84d3ff7059da1d66b913a6d0660f0a5515a236de88c784717e2-merged.mount: Deactivated successfully.
Oct 13 13:54:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a1c313d052d2a90d6a9b5d69dc2bc56ced1254f77df3a8320b7c1b10fc78363-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v545: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:33 standalone.localdomain sudo[64521]:     root : PWD=/ ; USER=mysql ; COMMAND=/usr/local/bin/kolla_extend_start
Oct 13 13:54:33 standalone.localdomain sudo[64521]: pam_unix(sudo:session): session opened for user mysql(uid=42434) by (uid=0)
Oct 13 13:54:33 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 23 completed events
Oct 13 13:54:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:54:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:54:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:54:34 standalone.localdomain ceph-mon[29756]: pgmap v545: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:54:35 standalone.localdomain sudo[64878]:    mysql : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_security_reset
Oct 13 13:54:35 standalone.localdomain sudo[64878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42434)
Oct 13 13:54:35 standalone.localdomain sudo[64878]: pam_unix(sudo:session): session closed for user root
Oct 13 13:54:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v546: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:36 standalone.localdomain sudo[64521]: pam_unix(sudo:session): session closed for user mysql
Oct 13 13:54:36 standalone.localdomain ceph-mon[29756]: pgmap v546: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v547: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:37 standalone.localdomain ceph-mon[29756]: pgmap v547: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:54:39 standalone.localdomain systemd[1]: libpod-e0bd8bc2c84478b83c50bb6c7cbd74aab5c7e81245d71dfd92d966e2fa4032e0.scope: Deactivated successfully.
Oct 13 13:54:39 standalone.localdomain systemd[1]: libpod-e0bd8bc2c84478b83c50bb6c7cbd74aab5c7e81245d71dfd92d966e2fa4032e0.scope: Consumed 2.462s CPU time.
Oct 13 13:54:39 standalone.localdomain podman[64501]: 2025-10-13 13:54:39.209631871 +0000 UTC m=+6.191731831 container died e0bd8bc2c84478b83c50bb6c7cbd74aab5c7e81245d71dfd92d966e2fa4032e0 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_bootstrap, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true, config_data={'command': ['bash', '-ec', 'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nexport DB_ROOT_PASSWORD=$(hiera \'mysql::server::root_password\')\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'while pgrep -af /usr/bin/mysqld_safe | grep -q -v grep; do sleep 1; done\'\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"$(hiera \'mysql::server::root_password\')" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'$(hiera mysql_clustercheck_password)\';"\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "DELETE FROM mysql.user WHERE user = \'root\' AND host NOT IN (\'%\',\'localhost\');"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"$(hiera \'mysql::server::root_password\')" shutdown'], 'detach': False, 'environment': {'DB_MARIABACKUP_PASSWORD': 'QHxmQQkEtb7pnNBAuySM7gZpW', 'DB_MARIABACKUP_USER': 'mariabackup', 'DB_MAX_TIMEOUT': 60, 'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c719b864882013747e4a6fe61c577719'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 1, 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json:rw,z', '/var/lib/config-data/puppet-generated/mysql:/var/lib/kolla/config_files/src:ro,z', '/var/lib/mysql:/var/lib/mysql:rw,z']}, com.redhat.component=openstack-mariadb-container, container_name=mysql_bootstrap, name=rhosp17/openstack-mariadb, release=1, io.openshift.expose-services=, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, architecture=x86_64, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:54:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v548: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0bd8bc2c84478b83c50bb6c7cbd74aab5c7e81245d71dfd92d966e2fa4032e0-userdata-shm.mount: Deactivated successfully.
Oct 13 13:54:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-96b51a5cc988a88f9993b66e8a15f32d57fa9953fbb5e785faf55f0f27288b1b-merged.mount: Deactivated successfully.
Oct 13 13:54:39 standalone.localdomain podman[65239]: 2025-10-13 13:54:39.562198666 +0000 UTC m=+0.340085212 container cleanup e0bd8bc2c84478b83c50bb6c7cbd74aab5c7e81245d71dfd92d966e2fa4032e0 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_bootstrap, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=mysql_bootstrap, tcib_managed=true, version=17.1.9, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-mariadb, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, release=1, io.buildah.version=1.33.12, config_id=tripleo_step1, distribution-scope=public, build-date=2025-07-21T12:58:45, config_data={'command': ['bash', '-ec', 'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nexport DB_ROOT_PASSWORD=$(hiera \'mysql::server::root_password\')\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'while pgrep -af /usr/bin/mysqld_safe | grep -q -v grep; do sleep 1; done\'\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"$(hiera \'mysql::server::root_password\')" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'$(hiera mysql_clustercheck_password)\';"\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "DELETE FROM mysql.user WHERE user = \'root\' AND host NOT IN 
(\'%\',\'localhost\');"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"$(hiera \'mysql::server::root_password\')" shutdown'], 'detach': False, 'environment': {'DB_MARIABACKUP_PASSWORD': 'QHxmQQkEtb7pnNBAuySM7gZpW', 'DB_MARIABACKUP_USER': 'mariabackup', 'DB_MAX_TIMEOUT': 60, 'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c719b864882013747e4a6fe61c577719'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json:rw,z', '/var/lib/config-data/puppet-generated/mysql:/var/lib/kolla/config_files/src:ro,z', '/var/lib/mysql:/var/lib/mysql:rw,z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 13:54:39 standalone.localdomain systemd[1]: libpod-conmon-e0bd8bc2c84478b83c50bb6c7cbd74aab5c7e81245d71dfd92d966e2fa4032e0.scope: Deactivated successfully.
Oct 13 13:54:39 standalone.localdomain python3[64166]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name mysql_bootstrap --conmon-pidfile /run/mysql_bootstrap.pid --detach=False --env DB_MARIABACKUP_PASSWORD=QHxmQQkEtb7pnNBAuySM7gZpW --env DB_MARIABACKUP_USER=mariabackup --env DB_MAX_TIMEOUT=60 --env KOLLA_BOOTSTRAP=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=c719b864882013747e4a6fe61c577719 --label config_id=tripleo_step1 --label container_name=mysql_bootstrap --label managed_by=tripleo_ansible --label config_data={'command': ['bash', '-ec', 'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e "\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nexport DB_ROOT_PASSWORD=$(hiera \'mysql::server::root_password\')\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'while pgrep -af /usr/bin/mysqld_safe | grep -q -v grep; do sleep 1; done\'\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"$(hiera \'mysql::server::root_password\')" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'$(hiera mysql_clustercheck_password)\';"\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\nmysql -uroot -p"$(hiera \'mysql::server::root_password\')" -e "DELETE FROM mysql.user WHERE user = \'root\' AND host NOT IN (\'%\',\'localhost\');"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"$(hiera \'mysql::server::root_password\')" shutdown'], 'detach': False, 'environment': {'DB_MARIABACKUP_PASSWORD': 'QHxmQQkEtb7pnNBAuySM7gZpW', 'DB_MARIABACKUP_USER': 'mariabackup', 'DB_MAX_TIMEOUT': 60, 'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c719b864882013747e4a6fe61c577719'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json:rw,z', '/var/lib/config-data/puppet-generated/mysql:/var/lib/kolla/config_files/src:ro,z', '/var/lib/mysql:/var/lib/mysql:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/mysql_bootstrap.log --network host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json:rw,z --volume /var/lib/config-data/puppet-generated/mysql:/var/lib/kolla/config_files/src:ro,z --volume /var/lib/mysql:/var/lib/mysql:rw,z registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1 bash -ec if [ -e /var/lib/mysql/mysql ]; then exit 0; fi
                                                       echo -e "\n[mysqld]\nwsrep_provider=none" >> /etc/my.cnf
                                                       export DB_ROOT_PASSWORD=$(hiera 'mysql::server::root_password')
                                                       kolla_set_configs
                                                       sudo -u mysql -E kolla_extend_start
                                                       timeout ${DB_MAX_TIMEOUT} /bin/bash -c 'while pgrep -af /usr/bin/mysqld_safe | grep -q -v grep; do sleep 1; done'
                                                       mysqld_safe --skip-networking --wsrep-on=OFF &
                                                       timeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p"$(hiera 'mysql::server::root_password')" ping 2>/dev/null; do sleep 1; done'
                                                       mysql -uroot -p"$(hiera 'mysql::server::root_password')" -e "CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '$(hiera mysql_clustercheck_password)';"
                                                       mysql -uroot -p"$(hiera 'mysql::server::root_password')" -e "GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;"
                                                       mysql -uroot -p"$(hiera 'mysql::server::root_password')" -e "DELETE FROM mysql.user WHERE user = 'root' AND host NOT IN ('%','localhost');"
                                                       timeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"$(hiera 'mysql::server::root_password')" shutdown
Oct 13 13:54:39 standalone.localdomain ceph-mon[29756]: pgmap v548: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:40 standalone.localdomain python3[65283]: ansible-file Invoked with path=/etc/systemd/system/tripleo_memcached.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:40 standalone.localdomain python3[65285]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_memcached_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:54:40 standalone.localdomain python3[65295]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363680.256834-65281-47948064893437/source dest=/etc/systemd/system/tripleo_memcached.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:40 standalone.localdomain python3[65297]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 13:54:40 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:54:40 standalone.localdomain systemd-sysv-generator[65321]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:54:40 standalone.localdomain systemd-rc-local-generator[65314]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:54:41 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:54:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v549: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:41 standalone.localdomain python3[65335]: ansible-systemd Invoked with state=restarted name=tripleo_memcached.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:54:41 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:54:42 standalone.localdomain systemd-sysv-generator[65365]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:54:42 standalone.localdomain systemd-rc-local-generator[65360]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:54:42 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:54:42 standalone.localdomain systemd[1]: Starting memcached container...
Oct 13 13:54:42 standalone.localdomain systemd[1]: Started memcached container.
Oct 13 13:54:42 standalone.localdomain ceph-mon[29756]: pgmap v549: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:42 standalone.localdomain python3[65391]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v550: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:43 standalone.localdomain ceph-mon[29756]: pgmap v550: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:43 standalone.localdomain python3[65417]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=standalone step=1 update_config_hash_only=False
Oct 13 13:54:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:54:44 standalone.localdomain python3[65425]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v551: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:45 standalone.localdomain python3[65450]: ansible-file Invoked with path=/root/standalone-ansible-fa02z7re/cephadm state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:46 standalone.localdomain python3[65454]: ansible-file Invoked with src=/root/standalone-ansible-fa02z7re/inventory.yaml dest=/root/standalone-ansible-fa02z7re/cephadm/inventory.yml state=link force=True path=/root/standalone-ansible-fa02z7re/cephadm/inventory.yml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:46 standalone.localdomain ceph-mon[29756]: pgmap v551: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v552: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:47 standalone.localdomain ceph-mon[29756]: pgmap v552: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:54:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v553: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:50 standalone.localdomain python3[65535]: ansible-ansible.legacy.stat Invoked with path=/root/standalone-ansible-fa02z7re/cephadm/cephadm-extra-vars-heat.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:54:50 standalone.localdomain ceph-mon[29756]: pgmap v553: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:50 standalone.localdomain python3[65539]: ansible-ansible.legacy.copy Invoked with dest=/root/standalone-ansible-fa02z7re/cephadm/cephadm-extra-vars-heat.yml src=/tmp/ansible-root/ansible-tmp-1760363689.8843522-65526-16501381837891/source _original_basename=tmp1jnyhk_c follow=False checksum=b55c3435c8f9f8fa19ea256b2f6dc8fad537b2a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:50 standalone.localdomain python3[65558]: ansible-ansible.legacy.stat Invoked with path=/root/standalone-ansible-fa02z7re/cephadm/cephadm-extra-vars-ansible.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:54:51 standalone.localdomain python3[65562]: ansible-ansible.legacy.copy Invoked with dest=/root/standalone-ansible-fa02z7re/cephadm/cephadm-extra-vars-ansible.yml src=/tmp/ansible-root/ansible-tmp-1760363690.7539165-65549-157801504878811/source _original_basename=tmpflga_2zm follow=False checksum=27531c640d43ede77c8e3ff3d9c95d513b46c105 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v554: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:52 standalone.localdomain ceph-mon[29756]: pgmap v554: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:52 standalone.localdomain python3[65578]: ansible-ansible.legacy.command Invoked with chdir=/root/standalone-ansible-fa02z7re/cephadm/ _raw_params=if [[ -e cephadm_command.log ]]; then
                                                         mv cephadm_command.log cephadm_command.log-$(date "+%Y-%m-%dT%H:%M:%S");
                                                       fi
                                                        _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None executable=None creates=None removes=None stdin=None
Oct 13 13:54:52 standalone.localdomain python3[65590]: ansible-ansible.legacy.stat Invoked with path=/root/standalone-ansible-fa02z7re/cephadm/cephadm_command.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:54:53 standalone.localdomain python3[65594]: ansible-ansible.legacy.copy Invoked with dest=/root/standalone-ansible-fa02z7re/cephadm/cephadm_command.sh mode=0755 src=/tmp/ansible-root/ansible-tmp-1760363692.6952517-65581-114626830356021/source _original_basename=tmppffo8etr follow=False checksum=79afe32cb8f0d143188ff05a860da0fee6d73b60 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:54:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:54:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:54:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:54:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:54:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:54:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v555: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:53 standalone.localdomain python3[65602]: ansible-ansible.legacy.command Invoked with _raw_params=/root/standalone-ansible-fa02z7re/cephadm/cephadm_command.sh _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:54:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:54:54 standalone.localdomain ceph-mon[29756]: pgmap v555: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:54 standalone.localdomain python3[65614]: ansible-stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:54:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v556: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:55 standalone.localdomain python3[65622]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:54:56 standalone.localdomain ceph-mon[29756]: pgmap v556: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:56 standalone.localdomain python3[65677]: ansible-file Invoked with path=/etc/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:57 standalone.localdomain python3[65681]: ansible-file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v557: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:57 standalone.localdomain python3[65685]: ansible-stat Invoked with path=/root/standalone-ansible-fa02z7re/cephadm/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:54:58 standalone.localdomain python3[65703]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363697.8211691-65693-114073129370541/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 _original_basename=ceph.conf.j2 follow=True backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:54:58 standalone.localdomain ceph-mon[29756]: pgmap v557: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:54:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v558: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:54:59 standalone.localdomain python3[65729]: ansible-sefcontext Invoked with seuser=system_u target=/etc/ceph/ceph.conf setype=etc_t state=present ignore_selinux_state=False ftype=a reload=True selevel=None
Oct 13 13:55:00 standalone.localdomain ceph-mon[29756]: pgmap v558: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:00 standalone.localdomain kernel: SELinux:  Converting 2741 SID table entries...
Oct 13 13:55:00 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:55:00 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 13:55:00 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:55:00 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:55:00 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:55:00 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:55:00 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:55:00 standalone.localdomain python3[65737]: ansible-sefcontext Invoked with seuser=system_u target=/etc/ceph/ceph.client.admin.keyring setype=etc_t state=present ignore_selinux_state=False ftype=a reload=True selevel=None
Oct 13 13:55:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v559: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:01 standalone.localdomain kernel: SELinux:  Converting 2741 SID table entries...
Oct 13 13:55:01 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 13:55:01 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 13:55:01 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 13:55:01 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 13:55:01 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 13:55:01 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 13:55:01 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 13:55:02 standalone.localdomain python3[65745]: ansible-ansible.legacy.command Invoked with _raw_params=restorecon -R -v /etc/ceph _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:02 standalone.localdomain ceph-mon[29756]: pgmap v559: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:02 standalone.localdomain python3[65754]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf
                                                       -i /home/assimilate_ceph.conf
                                                        _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:02 standalone.localdomain podman[65755]: 
Oct 13 13:55:02 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=21 res=1
Oct 13 13:55:02 standalone.localdomain podman[65755]: 2025-10-13 13:55:02.718519802 +0000 UTC m=+0.079375212 container create 4678c0ed89eee20d9ad193c8645cbbf4a803b42168cfe556e290aa0f05846118 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_greider, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, release=553, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, version=7, GIT_CLEAN=True, ceph=True, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public)
Oct 13 13:55:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 13:55:02 standalone.localdomain systemd[1]: Started libpod-conmon-4678c0ed89eee20d9ad193c8645cbbf4a803b42168cfe556e290aa0f05846118.scope.
Oct 13 13:55:02 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:02 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df78b30c8abe05ac73719f8cf4ad4fa47dc4aa240c3aee27ba85e29ee0723ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:02 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df78b30c8abe05ac73719f8cf4ad4fa47dc4aa240c3aee27ba85e29ee0723ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:02 standalone.localdomain podman[65755]: 2025-10-13 13:55:02.686305876 +0000 UTC m=+0.047161286 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:02 standalone.localdomain podman[65755]: 2025-10-13 13:55:02.788337326 +0000 UTC m=+0.149192726 container init 4678c0ed89eee20d9ad193c8645cbbf4a803b42168cfe556e290aa0f05846118 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_greider, io.buildah.version=1.33.12, vcs-type=git, release=553, RELEASE=main, GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, version=7, build-date=2025-09-24T08:57:55, distribution-scope=public, name=rhceph)
Oct 13 13:55:02 standalone.localdomain podman[65755]: 2025-10-13 13:55:02.798036107 +0000 UTC m=+0.158891497 container start 4678c0ed89eee20d9ad193c8645cbbf4a803b42168cfe556e290aa0f05846118 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_greider, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=553, GIT_BRANCH=main, vcs-type=git, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, distribution-scope=public, io.openshift.expose-services=)
Oct 13 13:55:02 standalone.localdomain podman[65755]: 2025-10-13 13:55:02.798295355 +0000 UTC m=+0.159150795 container attach 4678c0ed89eee20d9ad193c8645cbbf4a803b42168cfe556e290aa0f05846118 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_greider, distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, GIT_BRANCH=main, RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:55:02 standalone.localdomain podman[65768]: 2025-10-13 13:55:02.879933083 +0000 UTC m=+0.141136334 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, container_name=memcached, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 
memcached, build-date=2025-07-21T12:58:43, com.redhat.component=openstack-memcached-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-memcached, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 13:55:02 standalone.localdomain podman[65768]: 2025-10-13 13:55:02.909894312 +0000 UTC m=+0.171097553 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, release=1, tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.component=openstack-memcached-container, build-date=2025-07-21T12:58:43, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, 
io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, container_name=memcached, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, architecture=x86_64, vcs-type=git, config_id=tripleo_step1)
Oct 13 13:55:02 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 13:55:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct 13 13:55:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/3240994274' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Oct 13 13:55:03 standalone.localdomain wizardly_greider[65779]: 
Oct 13 13:55:03 standalone.localdomain wizardly_greider[65779]: [global]
Oct 13 13:55:03 standalone.localdomain wizardly_greider[65779]:         fsid = 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:55:03 standalone.localdomain wizardly_greider[65779]:         mon_host = 172.18.0.100
Oct 13 13:55:03 standalone.localdomain systemd[1]: libpod-4678c0ed89eee20d9ad193c8645cbbf4a803b42168cfe556e290aa0f05846118.scope: Deactivated successfully.
Oct 13 13:55:03 standalone.localdomain podman[65755]: 2025-10-13 13:55:03.139649504 +0000 UTC m=+0.500504914 container died 4678c0ed89eee20d9ad193c8645cbbf4a803b42168cfe556e290aa0f05846118 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_greider, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, name=rhceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., version=7, architecture=x86_64, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, release=553, GIT_BRANCH=main)
Oct 13 13:55:03 standalone.localdomain podman[65817]: 2025-10-13 13:55:03.238814139 +0000 UTC m=+0.088033823 container remove 4678c0ed89eee20d9ad193c8645cbbf4a803b42168cfe556e290aa0f05846118 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_greider, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.buildah.version=1.33.12, version=7, architecture=x86_64, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, name=rhceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:55:03 standalone.localdomain systemd[1]: libpod-conmon-4678c0ed89eee20d9ad193c8645cbbf4a803b42168cfe556e290aa0f05846118.scope: Deactivated successfully.
Oct 13 13:55:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v560: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:03 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3240994274' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
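The `config assimilate-conf` command dispatched above ingests an INI-style ceph.conf (mounted into the one-off container as /home/assimilate_ceph.conf) and moves its options into the monitors' centralized config store. A minimal sketch of reading such a file with Python's configparser; the file contents below are hypothetical stand-ins (the option values mirror entries later visible in this host's `config dump`), since the real file is never logged:

```python
import configparser

# Hypothetical assimilate_ceph.conf contents; the real file is bind-mounted
# at /home/assimilate_ceph.conf inside the container and not shown in the log.
sample = """
[global]
public_network = 172.18.0.0/24
osd_pool_default_size = 1

[mon]
mon_warn_on_pool_no_redundancy = false
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# Flatten to (section, option, value) triples, roughly the shape the
# monitors store after assimilation.
triples = [(s, o, cfg.get(s, o)) for s in cfg.sections() for o in cfg[s]]
for t in triples:
    print(t)
```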
Oct 13 13:55:03 standalone.localdomain systemd[1]: tmp-crun.xA9Hw4.mount: Deactivated successfully.
Oct 13 13:55:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7df78b30c8abe05ac73719f8cf4ad4fa47dc4aa240c3aee27ba85e29ee0723ef-merged.mount: Deactivated successfully.
Oct 13 13:55:03 standalone.localdomain python3[65840]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/container_image_base registry.redhat.io/rhceph/rhceph-7-rhel9 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:03 standalone.localdomain podman[65841]: 
Oct 13 13:55:03 standalone.localdomain podman[65841]: 2025-10-13 13:55:03.961523455 +0000 UTC m=+0.062202466 container create c3ab3b2833eee568fbad8a0a35ab51f6298a94c8730ba58e180ce80ec1680883 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_euclid, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, release=553, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, ceph=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:55:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:55:04 standalone.localdomain systemd[1]: Started libpod-conmon-c3ab3b2833eee568fbad8a0a35ab51f6298a94c8730ba58e180ce80ec1680883.scope.
Oct 13 13:55:04 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a04ab311b5d9e101a8c6a789ceb26d35667ce657833c29f25c2072a8ef69972/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a04ab311b5d9e101a8c6a789ceb26d35667ce657833c29f25c2072a8ef69972/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:04 standalone.localdomain podman[65841]: 2025-10-13 13:55:03.937470894 +0000 UTC m=+0.038149885 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:04 standalone.localdomain podman[65841]: 2025-10-13 13:55:04.037627469 +0000 UTC m=+0.138306440 container init c3ab3b2833eee568fbad8a0a35ab51f6298a94c8730ba58e180ce80ec1680883 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_euclid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, RELEASE=main, ceph=True, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, release=553, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:55:04 standalone.localdomain podman[65841]: 2025-10-13 13:55:04.046948038 +0000 UTC m=+0.147627019 container start c3ab3b2833eee568fbad8a0a35ab51f6298a94c8730ba58e180ce80ec1680883 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_euclid, vcs-type=git, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, release=553, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, architecture=x86_64)
Oct 13 13:55:04 standalone.localdomain podman[65841]: 2025-10-13 13:55:04.047175265 +0000 UTC m=+0.147854286 container attach c3ab3b2833eee568fbad8a0a35ab51f6298a94c8730ba58e180ce80ec1680883 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_euclid, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, architecture=x86_64, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, RELEASE=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, vendor=Red Hat, Inc., version=7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553)
Oct 13 13:55:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_image_base}] v 0)
Oct 13 13:55:04 standalone.localdomain systemd[1]: libpod-c3ab3b2833eee568fbad8a0a35ab51f6298a94c8730ba58e180ce80ec1680883.scope: Deactivated successfully.
Oct 13 13:55:04 standalone.localdomain podman[65841]: 2025-10-13 13:55:04.405519313 +0000 UTC m=+0.506198284 container died c3ab3b2833eee568fbad8a0a35ab51f6298a94c8730ba58e180ce80ec1680883 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_euclid, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.openshift.tags=rhceph ceph, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_CLEAN=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, distribution-scope=public, CEPH_POINT_RELEASE=)
Oct 13 13:55:04 standalone.localdomain ceph-mon[29756]: pgmap v560: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:04 standalone.localdomain podman[65881]: 2025-10-13 13:55:04.497897454 +0000 UTC m=+0.079017811 container remove c3ab3b2833eee568fbad8a0a35ab51f6298a94c8730ba58e180ce80ec1680883 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_euclid, RELEASE=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, version=7, ceph=True, distribution-scope=public, release=553, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:55:04 standalone.localdomain systemd[1]: libpod-conmon-c3ab3b2833eee568fbad8a0a35ab51f6298a94c8730ba58e180ce80ec1680883.scope: Deactivated successfully.
Oct 13 13:55:04 standalone.localdomain systemd[1]: tmp-crun.ax76jS.mount: Deactivated successfully.
Oct 13 13:55:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4a04ab311b5d9e101a8c6a789ceb26d35667ce657833c29f25c2072a8ef69972-merged.mount: Deactivated successfully.
Oct 13 13:55:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v561: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:05 standalone.localdomain ansible-async_wrapper.py[65921]: Invoked with 75224417555 30 /root/.ansible/tmp/ansible-tmp-1760363705.170069-65909-189442584777564/AnsiballZ_command.py _
Oct 13 13:55:05 standalone.localdomain ansible-async_wrapper.py[65924]: Starting module and watcher
Oct 13 13:55:05 standalone.localdomain ansible-async_wrapper.py[65924]: Start watching 65925 (30)
Oct 13 13:55:05 standalone.localdomain ansible-async_wrapper.py[65925]: Start module (65925)
Oct 13 13:55:05 standalone.localdomain ansible-async_wrapper.py[65921]: Return async_wrapper task started.
Oct 13 13:55:05 standalone.localdomain python3[65926]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:05 standalone.localdomain podman[65927]: 
Oct 13 13:55:05 standalone.localdomain podman[65927]: 2025-10-13 13:55:05.788859746 +0000 UTC m=+0.073910758 container create 1bb5b444e3ceb0fb0ae4ba2384bfebfd183eb75409575848039e9633678ea23f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_chatelet, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, name=rhceph, ceph=True, GIT_BRANCH=main, architecture=x86_64, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_CLEAN=True, release=553, RELEASE=main, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12)
Oct 13 13:55:05 standalone.localdomain systemd[1]: Started libpod-conmon-1bb5b444e3ceb0fb0ae4ba2384bfebfd183eb75409575848039e9633678ea23f.scope.
Oct 13 13:55:05 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:05 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72121eb6fb362baa59f158cd9ea07896c2e6e94c06117eb3fc2bcdd9ac5c7934/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:05 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72121eb6fb362baa59f158cd9ea07896c2e6e94c06117eb3fc2bcdd9ac5c7934/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:05 standalone.localdomain podman[65927]: 2025-10-13 13:55:05.848630659 +0000 UTC m=+0.133681661 container init 1bb5b444e3ceb0fb0ae4ba2384bfebfd183eb75409575848039e9633678ea23f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_chatelet, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, vcs-type=git, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.33.12, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, name=rhceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:55:05 standalone.localdomain podman[65927]: 2025-10-13 13:55:05.857327339 +0000 UTC m=+0.142378351 container start 1bb5b444e3ceb0fb0ae4ba2384bfebfd183eb75409575848039e9633678ea23f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_chatelet, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_BRANCH=main, name=rhceph, release=553, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-type=git, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph)
Oct 13 13:55:05 standalone.localdomain podman[65927]: 2025-10-13 13:55:05.857610308 +0000 UTC m=+0.142661350 container attach 1bb5b444e3ceb0fb0ae4ba2384bfebfd183eb75409575848039e9633678ea23f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_chatelet, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, release=553, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_CLEAN=True)
Oct 13 13:55:05 standalone.localdomain podman[65927]: 2025-10-13 13:55:05.759541066 +0000 UTC m=+0.044592088 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:06 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14204 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 13 13:55:06 standalone.localdomain friendly_chatelet[65941]: 
Oct 13 13:55:06 standalone.localdomain friendly_chatelet[65941]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
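The `orch status --format json` payload emitted by the friendly_chatelet container above is what the Ansible task polls before proceeding. A minimal sketch of checking it programmatically; the JSON string is taken verbatim from the log line above:

```python
import json

# Verbatim orchestrator status payload from the container output above.
raw = '{"available": true, "backend": "cephadm", "paused": false, "workers": 10}'

status = json.loads(raw)

# A playbook would typically gate on these fields before issuing further
# orchestrator commands.
assert status["available"] and not status["paused"]
print(status["backend"], status["workers"])  # → cephadm 10
```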
Oct 13 13:55:06 standalone.localdomain systemd[1]: libpod-1bb5b444e3ceb0fb0ae4ba2384bfebfd183eb75409575848039e9633678ea23f.scope: Deactivated successfully.
Oct 13 13:55:06 standalone.localdomain podman[65927]: 2025-10-13 13:55:06.215554455 +0000 UTC m=+0.500605477 container died 1bb5b444e3ceb0fb0ae4ba2384bfebfd183eb75409575848039e9633678ea23f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_chatelet, RELEASE=main, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.buildah.version=1.33.12, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, ceph=True, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:55:06 standalone.localdomain podman[65966]: 2025-10-13 13:55:06.308978736 +0000 UTC m=+0.079365581 container remove 1bb5b444e3ceb0fb0ae4ba2384bfebfd183eb75409575848039e9633678ea23f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_chatelet, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.tags=rhceph ceph, version=7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, name=rhceph, release=553, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:55:06 standalone.localdomain systemd[1]: libpod-conmon-1bb5b444e3ceb0fb0ae4ba2384bfebfd183eb75409575848039e9633678ea23f.scope: Deactivated successfully.
Oct 13 13:55:06 standalone.localdomain ansible-async_wrapper.py[65925]: Module complete (65925)
Oct 13 13:55:06 standalone.localdomain ceph-mon[29756]: pgmap v561: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-72121eb6fb362baa59f158cd9ea07896c2e6e94c06117eb3fc2bcdd9ac5c7934-merged.mount: Deactivated successfully.
Oct 13 13:55:06 standalone.localdomain python3[65982]: ansible-ansible.legacy.async_status Invoked with jid=75224417555.65921 mode=status _async_dir=/tmp/.ansible_async
Oct 13 13:55:07 standalone.localdomain python3[65984]: ansible-ansible.legacy.async_status Invoked with jid=75224417555.65921 mode=cleanup _async_dir=/tmp/.ansible_async
Oct 13 13:55:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v562: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:07 standalone.localdomain ceph-mon[29756]: from='client.14204 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 13 13:55:08 standalone.localdomain ceph-mon[29756]: pgmap v562: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:55:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v563: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:09 standalone.localdomain python3[66028]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:09 standalone.localdomain podman[66029]: 
Oct 13 13:55:09 standalone.localdomain podman[66029]: 2025-10-13 13:55:09.453016152 +0000 UTC m=+0.067931259 container create 920ab8cc283920fca8b5fd0f39ef88ce681a92474aa759f95c8835541f3f3496 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_antonelli, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph)
Oct 13 13:55:09 standalone.localdomain systemd[1]: Started libpod-conmon-920ab8cc283920fca8b5fd0f39ef88ce681a92474aa759f95c8835541f3f3496.scope.
Oct 13 13:55:09 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:09 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea7145a6d740d31c4e874e19c12211f9d1db8ec94d5fe5794d1e547e544f5a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:09 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea7145a6d740d31c4e874e19c12211f9d1db8ec94d5fe5794d1e547e544f5a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:09 standalone.localdomain podman[66029]: 2025-10-13 13:55:09.417172166 +0000 UTC m=+0.032087283 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:09 standalone.localdomain podman[66029]: 2025-10-13 13:55:09.527390822 +0000 UTC m=+0.142305919 container init 920ab8cc283920fca8b5fd0f39ef88ce681a92474aa759f95c8835541f3f3496 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_antonelli, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, name=rhceph, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-type=git, ceph=True)
Oct 13 13:55:09 standalone.localdomain systemd[1]: tmp-crun.0GEyIG.mount: Deactivated successfully.
Oct 13 13:55:09 standalone.localdomain podman[66029]: 2025-10-13 13:55:09.541718532 +0000 UTC m=+0.156633629 container start 920ab8cc283920fca8b5fd0f39ef88ce681a92474aa759f95c8835541f3f3496 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_antonelli, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, description=Red Hat Ceph Storage 7, RELEASE=main, release=553, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_CLEAN=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph)
Oct 13 13:55:09 standalone.localdomain podman[66029]: 2025-10-13 13:55:09.542046032 +0000 UTC m=+0.156961149 container attach 920ab8cc283920fca8b5fd0f39ef88ce681a92474aa759f95c8835541f3f3496 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_antonelli, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, release=553, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:55:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 13 13:55:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3494366539' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Oct 13 13:55:09 standalone.localdomain musing_antonelli[66043]: 
Oct 13 13:55:09 standalone.localdomain musing_antonelli[66043]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"registry.redhat.io/rhceph/rhceph-7-rhel9:latest","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"172.18.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_image_base","value":"registry.redhat.io/rhceph/rhceph-7-rhel9","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"6049460633","level":"basic","can_update_at_runtime":true,"mask":"host:standalone","location_type":"host","location_value":"standalone"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""}]
Oct 13 13:55:09 standalone.localdomain systemd[1]: libpod-920ab8cc283920fca8b5fd0f39ef88ce681a92474aa759f95c8835541f3f3496.scope: Deactivated successfully.
Oct 13 13:55:09 standalone.localdomain podman[66029]: 2025-10-13 13:55:09.890521174 +0000 UTC m=+0.505436311 container died 920ab8cc283920fca8b5fd0f39ef88ce681a92474aa759f95c8835541f3f3496 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_antonelli, CEPH_POINT_RELEASE=, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, release=553, ceph=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:55:09 standalone.localdomain podman[66068]: 2025-10-13 13:55:09.962211865 +0000 UTC m=+0.063369802 container remove 920ab8cc283920fca8b5fd0f39ef88ce681a92474aa759f95c8835541f3f3496 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_antonelli, RELEASE=main, name=rhceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, ceph=True, distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7)
Oct 13 13:55:09 standalone.localdomain systemd[1]: libpod-conmon-920ab8cc283920fca8b5fd0f39ef88ce681a92474aa759f95c8835541f3f3496.scope: Deactivated successfully.
Oct 13 13:55:09 standalone.localdomain ceph-mon[29756]: pgmap v563: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:09 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3494366539' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Oct 13 13:55:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-dea7145a6d740d31c4e874e19c12211f9d1db8ec94d5fe5794d1e547e544f5a3-merged.mount: Deactivated successfully.
Oct 13 13:55:10 standalone.localdomain ansible-async_wrapper.py[65924]: Done in kid B.
Oct 13 13:55:10 standalone.localdomain python3[66087]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on --target-size-ratio 0 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:11 standalone.localdomain podman[66088]: 
Oct 13 13:55:11 standalone.localdomain podman[66088]: 2025-10-13 13:55:11.048429495 +0000 UTC m=+0.078097284 container create 58d90cdc3b08b31df51f344aa91332d983f4740ed6e8678d713a4d01519cdd77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_gould, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, CEPH_POINT_RELEASE=, version=7, distribution-scope=public, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, release=553, ceph=True, vendor=Red Hat, Inc.)
Oct 13 13:55:11 standalone.localdomain systemd[1]: Started libpod-conmon-58d90cdc3b08b31df51f344aa91332d983f4740ed6e8678d713a4d01519cdd77.scope.
Oct 13 13:55:11 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:11 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fa60d2abc6963d634dbb7a16309878cf8c51fda7be5c60db6faf7278842098/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:11 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fa60d2abc6963d634dbb7a16309878cf8c51fda7be5c60db6faf7278842098/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:11 standalone.localdomain podman[66088]: 2025-10-13 13:55:11.018074784 +0000 UTC m=+0.047742633 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:11 standalone.localdomain podman[66088]: 2025-10-13 13:55:11.123972431 +0000 UTC m=+0.153640190 container init 58d90cdc3b08b31df51f344aa91332d983f4740ed6e8678d713a4d01519cdd77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_gould, com.redhat.component=rhceph-container, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, name=rhceph, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, GIT_BRANCH=main, release=553, ceph=True, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:55:11 standalone.localdomain podman[66088]: 2025-10-13 13:55:11.142709853 +0000 UTC m=+0.172377632 container start 58d90cdc3b08b31df51f344aa91332d983f4740ed6e8678d713a4d01519cdd77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_gould, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, release=553, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, architecture=x86_64, io.openshift.tags=rhceph ceph, version=7, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:55:11 standalone.localdomain podman[66088]: 2025-10-13 13:55:11.143342222 +0000 UTC m=+0.173010011 container attach 58d90cdc3b08b31df51f344aa91332d983f4740ed6e8678d713a4d01519cdd77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_gould, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, distribution-scope=public, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, CEPH_POINT_RELEASE=, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Oct 13 13:55:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v564: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:11 standalone.localdomain systemd[1]: tmp-crun.fWtYRc.mount: Deactivated successfully.
Oct 13 13:55:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0} v 0)
Oct 13 13:55:11 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/1584347847' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0} : dispatch
Oct 13 13:55:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e9 do_prune osdmap full prune enabled
Oct 13 13:55:12 standalone.localdomain ceph-mon[29756]: pgmap v564: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:12 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1584347847' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0} : dispatch
Oct 13 13:55:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e10 e10: 1 total, 1 up, 1 in
Oct 13 13:55:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/1584347847' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0}]': finished
Oct 13 13:55:12 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e10: 1 total, 1 up, 1 in
Oct 13 13:55:12 standalone.localdomain interesting_gould[66103]: pool 'vms' created
Oct 13 13:55:12 standalone.localdomain systemd[1]: libpod-58d90cdc3b08b31df51f344aa91332d983f4740ed6e8678d713a4d01519cdd77.scope: Deactivated successfully.
Oct 13 13:55:12 standalone.localdomain podman[66088]: 2025-10-13 13:55:12.606938552 +0000 UTC m=+1.636606381 container died 58d90cdc3b08b31df51f344aa91332d983f4740ed6e8678d713a4d01519cdd77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_gould, vendor=Red Hat, Inc., RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, distribution-scope=public, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_BRANCH=main)
Oct 13 13:55:12 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 10 pg[2.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:12 standalone.localdomain systemd[1]: tmp-crun.PpTwAN.mount: Deactivated successfully.
Oct 13 13:55:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-54fa60d2abc6963d634dbb7a16309878cf8c51fda7be5c60db6faf7278842098-merged.mount: Deactivated successfully.
Oct 13 13:55:12 standalone.localdomain podman[66130]: 2025-10-13 13:55:12.946440716 +0000 UTC m=+0.329010001 container remove 58d90cdc3b08b31df51f344aa91332d983f4740ed6e8678d713a4d01519cdd77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_gould, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, ceph=True, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:55:12 standalone.localdomain systemd[1]: libpod-conmon-58d90cdc3b08b31df51f344aa91332d983f4740ed6e8678d713a4d01519cdd77.scope: Deactivated successfully.
Oct 13 13:55:13 standalone.localdomain python3[66144]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on --target-size-ratio 0 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:13 standalone.localdomain podman[66145]: 
Oct 13 13:55:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v566: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:13 standalone.localdomain podman[66145]: 2025-10-13 13:55:13.350039372 +0000 UTC m=+0.075460365 container create d2061d5593b222b888f33a6a59d4d5b3260354499e63d5b4419b80de3e92375d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_dhawan, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.openshift.expose-services=, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7)
Oct 13 13:55:13 standalone.localdomain systemd[1]: Started libpod-conmon-d2061d5593b222b888f33a6a59d4d5b3260354499e63d5b4419b80de3e92375d.scope.
Oct 13 13:55:13 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:13 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ba604864fe506d389a5785c7b8ef483518b3d4e9c81bdc2ce53b27b58cae01/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:13 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ba604864fe506d389a5785c7b8ef483518b3d4e9c81bdc2ce53b27b58cae01/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:13 standalone.localdomain podman[66145]: 2025-10-13 13:55:13.410856306 +0000 UTC m=+0.136277309 container init d2061d5593b222b888f33a6a59d4d5b3260354499e63d5b4419b80de3e92375d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_dhawan, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.)
Oct 13 13:55:13 standalone.localdomain podman[66145]: 2025-10-13 13:55:13.416786264 +0000 UTC m=+0.142207237 container start d2061d5593b222b888f33a6a59d4d5b3260354499e63d5b4419b80de3e92375d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_dhawan, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, GIT_BRANCH=main, version=7, RELEASE=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:55:13 standalone.localdomain podman[66145]: 2025-10-13 13:55:13.417141454 +0000 UTC m=+0.142562497 container attach d2061d5593b222b888f33a6a59d4d5b3260354499e63d5b4419b80de3e92375d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_dhawan, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, version=7, release=553, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:55:13 standalone.localdomain podman[66145]: 2025-10-13 13:55:13.321782274 +0000 UTC m=+0.047203317 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:13 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 13 13:55:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e10 do_prune osdmap full prune enabled
Oct 13 13:55:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e11 e11: 1 total, 1 up, 1 in
Oct 13 13:55:13 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1584347847' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0}]': finished
Oct 13 13:55:13 standalone.localdomain ceph-mon[29756]: osdmap e10: 1 total, 1 up, 1 in
Oct 13 13:55:13 standalone.localdomain ceph-mon[29756]: pgmap v566: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:13 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e11: 1 total, 1 up, 1 in
Oct 13 13:55:13 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 11 pg[2.0( empty local-lis/les=10/11 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0} v 0)
Oct 13 13:55:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/1588171972' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0} : dispatch
Oct 13 13:55:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:55:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e11 do_prune osdmap full prune enabled
Oct 13 13:55:14 standalone.localdomain ceph-mon[29756]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 13 13:55:14 standalone.localdomain ceph-mon[29756]: osdmap e11: 1 total, 1 up, 1 in
Oct 13 13:55:14 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1588171972' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0} : dispatch
Oct 13 13:55:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e12 e12: 1 total, 1 up, 1 in
Oct 13 13:55:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/1588171972' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0}]': finished
Oct 13 13:55:14 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e12: 1 total, 1 up, 1 in
Oct 13 13:55:14 standalone.localdomain busy_dhawan[66160]: pool 'volumes' created
Oct 13 13:55:14 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 12 pg[3.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:14 standalone.localdomain systemd[1]: libpod-d2061d5593b222b888f33a6a59d4d5b3260354499e63d5b4419b80de3e92375d.scope: Deactivated successfully.
Oct 13 13:55:14 standalone.localdomain podman[66145]: 2025-10-13 13:55:14.610073047 +0000 UTC m=+1.335494050 container died d2061d5593b222b888f33a6a59d4d5b3260354499e63d5b4419b80de3e92375d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_dhawan, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, RELEASE=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, name=rhceph, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:55:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-64ba604864fe506d389a5785c7b8ef483518b3d4e9c81bdc2ce53b27b58cae01-merged.mount: Deactivated successfully.
Oct 13 13:55:14 standalone.localdomain podman[66187]: 2025-10-13 13:55:14.702415786 +0000 UTC m=+0.079606729 container remove d2061d5593b222b888f33a6a59d4d5b3260354499e63d5b4419b80de3e92375d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_dhawan, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., ceph=True, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, GIT_BRANCH=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, release=553, version=7, name=rhceph, io.openshift.tags=rhceph ceph, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:55:14 standalone.localdomain systemd[1]: libpod-conmon-d2061d5593b222b888f33a6a59d4d5b3260354499e63d5b4419b80de3e92375d.scope: Deactivated successfully.
Oct 13 13:55:14 standalone.localdomain python3[66202]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on --target-size-ratio 0 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:15 standalone.localdomain podman[66203]: 
Oct 13 13:55:15 standalone.localdomain podman[66203]: 2025-10-13 13:55:15.05495104 +0000 UTC m=+0.076885907 container create f6491bc0b1dda03fc4d4092f9fd9ba3e5d8955af9b259841b0dcb37934687905 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_wu, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, architecture=x86_64, GIT_CLEAN=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, ceph=True, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc.)
Oct 13 13:55:15 standalone.localdomain systemd[1]: Started libpod-conmon-f6491bc0b1dda03fc4d4092f9fd9ba3e5d8955af9b259841b0dcb37934687905.scope.
Oct 13 13:55:15 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e476d1fb19af84d0d01e97f859f83c715624c72150ddcda8abb5130d092924da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e476d1fb19af84d0d01e97f859f83c715624c72150ddcda8abb5130d092924da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:15 standalone.localdomain podman[66203]: 2025-10-13 13:55:15.112250049 +0000 UTC m=+0.134184956 container init f6491bc0b1dda03fc4d4092f9fd9ba3e5d8955af9b259841b0dcb37934687905 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_wu, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-type=git, build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_BRANCH=main, name=rhceph, description=Red Hat Ceph Storage 7, release=553, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, architecture=x86_64)
Oct 13 13:55:15 standalone.localdomain systemd[1]: tmp-crun.hriilZ.mount: Deactivated successfully.
Oct 13 13:55:15 standalone.localdomain podman[66203]: 2025-10-13 13:55:15.023923499 +0000 UTC m=+0.045858416 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:15 standalone.localdomain podman[66203]: 2025-10-13 13:55:15.12497517 +0000 UTC m=+0.146910037 container start f6491bc0b1dda03fc4d4092f9fd9ba3e5d8955af9b259841b0dcb37934687905 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_wu, io.buildah.version=1.33.12, architecture=x86_64, ceph=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, release=553, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True)
Oct 13 13:55:15 standalone.localdomain podman[66203]: 2025-10-13 13:55:15.12526141 +0000 UTC m=+0.147196327 container attach f6491bc0b1dda03fc4d4092f9fd9ba3e5d8955af9b259841b0dcb37934687905 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_wu, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, name=rhceph, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., release=553, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container)
Oct 13 13:55:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v569: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0} v 0)
Oct 13 13:55:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/3596132536' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0} : dispatch
Oct 13 13:55:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e12 do_prune osdmap full prune enabled
Oct 13 13:55:15 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1588171972' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0}]': finished
Oct 13 13:55:15 standalone.localdomain ceph-mon[29756]: osdmap e12: 1 total, 1 up, 1 in
Oct 13 13:55:15 standalone.localdomain ceph-mon[29756]: pgmap v569: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:15 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3596132536' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0} : dispatch
Oct 13 13:55:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e13 e13: 1 total, 1 up, 1 in
Oct 13 13:55:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/3596132536' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0}]': finished
Oct 13 13:55:15 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e13: 1 total, 1 up, 1 in
Oct 13 13:55:15 standalone.localdomain reverent_wu[66217]: pool 'images' created
Oct 13 13:55:15 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 13 pg[4.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:15 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 13 pg[3.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:15 standalone.localdomain systemd[1]: libpod-f6491bc0b1dda03fc4d4092f9fd9ba3e5d8955af9b259841b0dcb37934687905.scope: Deactivated successfully.
Oct 13 13:55:15 standalone.localdomain podman[66203]: 2025-10-13 13:55:15.75339078 +0000 UTC m=+0.775325667 container died f6491bc0b1dda03fc4d4092f9fd9ba3e5d8955af9b259841b0dcb37934687905 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_wu, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, RELEASE=main, name=rhceph, ceph=True, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:55:15 standalone.localdomain podman[66244]: 2025-10-13 13:55:15.849383259 +0000 UTC m=+0.082280058 container remove f6491bc0b1dda03fc4d4092f9fd9ba3e5d8955af9b259841b0dcb37934687905 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_wu, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.expose-services=, RELEASE=main)
Oct 13 13:55:15 standalone.localdomain systemd[1]: libpod-conmon-f6491bc0b1dda03fc4d4092f9fd9ba3e5d8955af9b259841b0dcb37934687905.scope: Deactivated successfully.
Oct 13 13:55:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e476d1fb19af84d0d01e97f859f83c715624c72150ddcda8abb5130d092924da-merged.mount: Deactivated successfully.
Oct 13 13:55:16 standalone.localdomain python3[66259]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on --target-size-ratio 0 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:16 standalone.localdomain podman[66260]: 
Oct 13 13:55:16 standalone.localdomain podman[66260]: 2025-10-13 13:55:16.245390127 +0000 UTC m=+0.077147845 container create 2a358c8f3ef7ba8ba8f09917bce370f16423f431638d294284de94e8e6205e66 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_mccarthy, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, distribution-scope=public, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, name=rhceph, com.redhat.component=rhceph-container, release=553, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:55:16 standalone.localdomain systemd[1]: Started libpod-conmon-2a358c8f3ef7ba8ba8f09917bce370f16423f431638d294284de94e8e6205e66.scope.
Oct 13 13:55:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d78fd28f8e5d61d5b651231cd7ce63ee35fae9c56c80814057bf5e19b5349d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d78fd28f8e5d61d5b651231cd7ce63ee35fae9c56c80814057bf5e19b5349d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:16 standalone.localdomain podman[66260]: 2025-10-13 13:55:16.304462509 +0000 UTC m=+0.136220157 container init 2a358c8f3ef7ba8ba8f09917bce370f16423f431638d294284de94e8e6205e66 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_mccarthy, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git)
Oct 13 13:55:16 standalone.localdomain podman[66260]: 2025-10-13 13:55:16.312258013 +0000 UTC m=+0.144015691 container start 2a358c8f3ef7ba8ba8f09917bce370f16423f431638d294284de94e8e6205e66 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_mccarthy, release=553, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, version=7, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vendor=Red Hat, Inc.)
Oct 13 13:55:16 standalone.localdomain podman[66260]: 2025-10-13 13:55:16.313078748 +0000 UTC m=+0.144836456 container attach 2a358c8f3ef7ba8ba8f09917bce370f16423f431638d294284de94e8e6205e66 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_mccarthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, name=rhceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, build-date=2025-09-24T08:57:55, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph)
Oct 13 13:55:16 standalone.localdomain podman[66260]: 2025-10-13 13:55:16.215890043 +0000 UTC m=+0.047647781 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0} v 0)
Oct 13 13:55:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/2288015040' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0} : dispatch
Oct 13 13:55:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e13 do_prune osdmap full prune enabled
Oct 13 13:55:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e14 e14: 1 total, 1 up, 1 in
Oct 13 13:55:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/2288015040' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0}]': finished
Oct 13 13:55:16 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e14: 1 total, 1 up, 1 in
Oct 13 13:55:16 standalone.localdomain reverent_mccarthy[66274]: pool 'backups' created
Oct 13 13:55:16 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 14 pg[5.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:16 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3596132536' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0}]': finished
Oct 13 13:55:16 standalone.localdomain ceph-mon[29756]: osdmap e13: 1 total, 1 up, 1 in
Oct 13 13:55:16 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2288015040' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0} : dispatch
Oct 13 13:55:16 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 14 pg[4.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:16 standalone.localdomain systemd[1]: libpod-2a358c8f3ef7ba8ba8f09917bce370f16423f431638d294284de94e8e6205e66.scope: Deactivated successfully.
Oct 13 13:55:16 standalone.localdomain podman[66260]: 2025-10-13 13:55:16.868177128 +0000 UTC m=+0.699934856 container died 2a358c8f3ef7ba8ba8f09917bce370f16423f431638d294284de94e8e6205e66 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_mccarthy, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, version=7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, RELEASE=main, description=Red Hat Ceph Storage 7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:55:16 standalone.localdomain podman[66301]: 2025-10-13 13:55:16.972800926 +0000 UTC m=+0.090211247 container remove 2a358c8f3ef7ba8ba8f09917bce370f16423f431638d294284de94e8e6205e66 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_mccarthy, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., vcs-type=git, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, ceph=True, version=7, name=rhceph)
Oct 13 13:55:16 standalone.localdomain systemd[1]: libpod-conmon-2a358c8f3ef7ba8ba8f09917bce370f16423f431638d294284de94e8e6205e66.scope: Deactivated successfully.
Oct 13 13:55:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5d78fd28f8e5d61d5b651231cd7ce63ee35fae9c56c80814057bf5e19b5349d0-merged.mount: Deactivated successfully.
Oct 13 13:55:17 standalone.localdomain python3[66315]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v572: 5 pgs: 3 unknown, 2 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:17 standalone.localdomain podman[66316]: 
Oct 13 13:55:17 standalone.localdomain podman[66316]: 2025-10-13 13:55:17.404067471 +0000 UTC m=+0.077753763 container create 3cdbd07a432b530b6b676cb4bcca92c5f0ff348b63494c0bede0f3d80a88dacb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_pascal, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, RELEASE=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, release=553, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, version=7)
Oct 13 13:55:17 standalone.localdomain systemd[1]: Started libpod-conmon-3cdbd07a432b530b6b676cb4bcca92c5f0ff348b63494c0bede0f3d80a88dacb.scope.
Oct 13 13:55:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2a76c4575d3c24c2546a8857dfbc573f65e8bbc419d5e0d857e5c86faa4a3d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2a76c4575d3c24c2546a8857dfbc573f65e8bbc419d5e0d857e5c86faa4a3d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:17 standalone.localdomain podman[66316]: 2025-10-13 13:55:17.46638014 +0000 UTC m=+0.140066432 container init 3cdbd07a432b530b6b676cb4bcca92c5f0ff348b63494c0bede0f3d80a88dacb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_pascal, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-type=git, ceph=True, vendor=Red Hat, Inc., architecture=x86_64, release=553, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:55:17 standalone.localdomain podman[66316]: 2025-10-13 13:55:17.373268208 +0000 UTC m=+0.046954730 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:17 standalone.localdomain podman[66316]: 2025-10-13 13:55:17.474872145 +0000 UTC m=+0.148558427 container start 3cdbd07a432b530b6b676cb4bcca92c5f0ff348b63494c0bede0f3d80a88dacb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_pascal, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, release=553, ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, name=rhceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=rhceph-container)
Oct 13 13:55:17 standalone.localdomain podman[66316]: 2025-10-13 13:55:17.475099532 +0000 UTC m=+0.148785814 container attach 3cdbd07a432b530b6b676cb4bcca92c5f0ff348b63494c0bede0f3d80a88dacb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_pascal, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, ceph=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7)
Oct 13 13:55:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Oct 13 13:55:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/3333218387' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Oct 13 13:55:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e14 do_prune osdmap full prune enabled
Oct 13 13:55:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e15 e15: 1 total, 1 up, 1 in
Oct 13 13:55:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/3333218387' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 13 13:55:17 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e15: 1 total, 1 up, 1 in
Oct 13 13:55:17 standalone.localdomain fervent_pascal[66330]: enabled application 'rbd' on pool 'vms'
Oct 13 13:55:17 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2288015040' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on", "target_size_ratio": 0.0}]': finished
Oct 13 13:55:17 standalone.localdomain ceph-mon[29756]: osdmap e14: 1 total, 1 up, 1 in
Oct 13 13:55:17 standalone.localdomain ceph-mon[29756]: pgmap v572: 5 pgs: 3 unknown, 2 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:17 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3333218387' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Oct 13 13:55:17 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 15 pg[5.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:17 standalone.localdomain systemd[1]: libpod-3cdbd07a432b530b6b676cb4bcca92c5f0ff348b63494c0bede0f3d80a88dacb.scope: Deactivated successfully.
Oct 13 13:55:17 standalone.localdomain podman[66316]: 2025-10-13 13:55:17.88711368 +0000 UTC m=+0.560799962 container died 3cdbd07a432b530b6b676cb4bcca92c5f0ff348b63494c0bede0f3d80a88dacb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_pascal, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, RELEASE=main, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, io.openshift.expose-services=, GIT_BRANCH=main, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-type=git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:55:17 standalone.localdomain podman[66355]: 2025-10-13 13:55:17.978753259 +0000 UTC m=+0.078296550 container remove 3cdbd07a432b530b6b676cb4bcca92c5f0ff348b63494c0bede0f3d80a88dacb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_pascal, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, architecture=x86_64, distribution-scope=public, RELEASE=main, version=7, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:55:17 standalone.localdomain systemd[1]: libpod-conmon-3cdbd07a432b530b6b676cb4bcca92c5f0ff348b63494c0bede0f3d80a88dacb.scope: Deactivated successfully.
Oct 13 13:55:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4c2a76c4575d3c24c2546a8857dfbc573f65e8bbc419d5e0d857e5c86faa4a3d-merged.mount: Deactivated successfully.
Oct 13 13:55:18 standalone.localdomain python3[66371]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:18 standalone.localdomain podman[66372]: 
Oct 13 13:55:18 standalone.localdomain podman[66372]: 2025-10-13 13:55:18.394733967 +0000 UTC m=+0.079677742 container create ab331abed06392e5c6e09f31d0800186a19df6f34cec79504398bb477b648bf7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_jackson, version=7, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, ceph=True, GIT_CLEAN=True, RELEASE=main, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55)
Oct 13 13:55:18 standalone.localdomain systemd[1]: Started libpod-conmon-ab331abed06392e5c6e09f31d0800186a19df6f34cec79504398bb477b648bf7.scope.
Oct 13 13:55:18 standalone.localdomain systemd[1]: tmp-crun.yjtmRX.mount: Deactivated successfully.
Oct 13 13:55:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd77ab528a3747f91556cbf1f12d24b02e246330743a8d99554188be8590ab19/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd77ab528a3747f91556cbf1f12d24b02e246330743a8d99554188be8590ab19/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:18 standalone.localdomain podman[66372]: 2025-10-13 13:55:18.360040565 +0000 UTC m=+0.044984400 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:18 standalone.localdomain podman[66372]: 2025-10-13 13:55:18.462279042 +0000 UTC m=+0.147222817 container init ab331abed06392e5c6e09f31d0800186a19df6f34cec79504398bb477b648bf7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_jackson, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, ceph=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-type=git, name=rhceph, distribution-scope=public, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, RELEASE=main)
Oct 13 13:55:18 standalone.localdomain podman[66372]: 2025-10-13 13:55:18.47020801 +0000 UTC m=+0.155151785 container start ab331abed06392e5c6e09f31d0800186a19df6f34cec79504398bb477b648bf7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_jackson, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, distribution-scope=public, release=553, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., name=rhceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_CLEAN=True, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, architecture=x86_64)
Oct 13 13:55:18 standalone.localdomain podman[66372]: 2025-10-13 13:55:18.470400906 +0000 UTC m=+0.155344691 container attach ab331abed06392e5c6e09f31d0800186a19df6f34cec79504398bb477b648bf7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_jackson, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, description=Red Hat Ceph Storage 7)
Oct 13 13:55:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Oct 13 13:55:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/2690813316' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Oct 13 13:55:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e15 do_prune osdmap full prune enabled
Oct 13 13:55:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e16 e16: 1 total, 1 up, 1 in
Oct 13 13:55:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/2690813316' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 13 13:55:18 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e16: 1 total, 1 up, 1 in
Oct 13 13:55:18 standalone.localdomain serene_jackson[66386]: enabled application 'rbd' on pool 'volumes'
Oct 13 13:55:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3333218387' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 13 13:55:18 standalone.localdomain ceph-mon[29756]: osdmap e15: 1 total, 1 up, 1 in
Oct 13 13:55:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2690813316' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Oct 13 13:55:18 standalone.localdomain systemd[1]: libpod-ab331abed06392e5c6e09f31d0800186a19df6f34cec79504398bb477b648bf7.scope: Deactivated successfully.
Oct 13 13:55:18 standalone.localdomain podman[66372]: 2025-10-13 13:55:18.939597689 +0000 UTC m=+0.624541504 container died ab331abed06392e5c6e09f31d0800186a19df6f34cec79504398bb477b648bf7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_jackson, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=553, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, GIT_CLEAN=True, vcs-type=git, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:55:18 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 13 13:55:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:55:19 standalone.localdomain podman[66411]: 2025-10-13 13:55:19.024780235 +0000 UTC m=+0.074985131 container remove ab331abed06392e5c6e09f31d0800186a19df6f34cec79504398bb477b648bf7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_jackson, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph, release=553, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_CLEAN=True, architecture=x86_64, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12)
Oct 13 13:55:19 standalone.localdomain systemd[1]: libpod-conmon-ab331abed06392e5c6e09f31d0800186a19df6f34cec79504398bb477b648bf7.scope: Deactivated successfully.
Oct 13 13:55:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fd77ab528a3747f91556cbf1f12d24b02e246330743a8d99554188be8590ab19-merged.mount: Deactivated successfully.
Oct 13 13:55:19 standalone.localdomain python3[66427]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:19 standalone.localdomain podman[66428]: 
Oct 13 13:55:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v575: 5 pgs: 5 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:19 standalone.localdomain podman[66428]: 2025-10-13 13:55:19.347981069 +0000 UTC m=+0.045347821 container create 29c1be7b48fa4b130386d65cac7a1052c5236c799e80b1f5a2eb06ba1f9420ad (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_faraday, vendor=Red Hat, Inc., io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, vcs-type=git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.expose-services=)
Oct 13 13:55:19 standalone.localdomain systemd[1]: Started libpod-conmon-29c1be7b48fa4b130386d65cac7a1052c5236c799e80b1f5a2eb06ba1f9420ad.scope.
Oct 13 13:55:19 standalone.localdomain systemd[1]: tmp-crun.Pu54iD.mount: Deactivated successfully.
Oct 13 13:55:19 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5bd5b4168682ae13e980c99aa25e584d8f48989b04ed82666fd90dcccd37b9f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5bd5b4168682ae13e980c99aa25e584d8f48989b04ed82666fd90dcccd37b9f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:19 standalone.localdomain podman[66428]: 2025-10-13 13:55:19.401514984 +0000 UTC m=+0.098881746 container init 29c1be7b48fa4b130386d65cac7a1052c5236c799e80b1f5a2eb06ba1f9420ad (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_faraday, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-type=git, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph)
Oct 13 13:55:19 standalone.localdomain podman[66428]: 2025-10-13 13:55:19.410959317 +0000 UTC m=+0.108326099 container start 29c1be7b48fa4b130386d65cac7a1052c5236c799e80b1f5a2eb06ba1f9420ad (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_faraday, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vcs-type=git, ceph=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main)
Oct 13 13:55:19 standalone.localdomain podman[66428]: 2025-10-13 13:55:19.411191574 +0000 UTC m=+0.108558366 container attach 29c1be7b48fa4b130386d65cac7a1052c5236c799e80b1f5a2eb06ba1f9420ad (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_faraday, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=553, vendor=Red Hat, Inc., name=rhceph, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, com.redhat.component=rhceph-container)
Oct 13 13:55:19 standalone.localdomain podman[66428]: 2025-10-13 13:55:19.330937797 +0000 UTC m=+0.028304559 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Oct 13 13:55:19 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/1869223117' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Oct 13 13:55:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2690813316' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 13 13:55:19 standalone.localdomain ceph-mon[29756]: osdmap e16: 1 total, 1 up, 1 in
Oct 13 13:55:19 standalone.localdomain ceph-mon[29756]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 13 13:55:19 standalone.localdomain ceph-mon[29756]: pgmap v575: 5 pgs: 5 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1869223117' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Oct 13 13:55:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e16 do_prune osdmap full prune enabled
Oct 13 13:55:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e17 e17: 1 total, 1 up, 1 in
Oct 13 13:55:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/1869223117' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 13 13:55:20 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e17: 1 total, 1 up, 1 in
Oct 13 13:55:20 standalone.localdomain sharp_faraday[66443]: enabled application 'rbd' on pool 'images'
Oct 13 13:55:20 standalone.localdomain systemd[1]: libpod-29c1be7b48fa4b130386d65cac7a1052c5236c799e80b1f5a2eb06ba1f9420ad.scope: Deactivated successfully.
Oct 13 13:55:20 standalone.localdomain podman[66428]: 2025-10-13 13:55:20.027845261 +0000 UTC m=+0.725212103 container died 29c1be7b48fa4b130386d65cac7a1052c5236c799e80b1f5a2eb06ba1f9420ad (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_faraday, architecture=x86_64, RELEASE=main, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.openshift.expose-services=, vcs-type=git, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:55:20 standalone.localdomain systemd[1]: tmp-crun.o6xcF7.mount: Deactivated successfully.
Oct 13 13:55:20 standalone.localdomain systemd[1]: tmp-crun.efCod1.mount: Deactivated successfully.
Oct 13 13:55:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a5bd5b4168682ae13e980c99aa25e584d8f48989b04ed82666fd90dcccd37b9f-merged.mount: Deactivated successfully.
Oct 13 13:55:20 standalone.localdomain podman[66468]: 2025-10-13 13:55:20.123985074 +0000 UTC m=+0.088580648 container remove 29c1be7b48fa4b130386d65cac7a1052c5236c799e80b1f5a2eb06ba1f9420ad (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_faraday, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, ceph=True, com.redhat.component=rhceph-container, distribution-scope=public, name=rhceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, version=7, io.openshift.tags=rhceph ceph, release=553, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=)
Oct 13 13:55:20 standalone.localdomain systemd[1]: libpod-conmon-29c1be7b48fa4b130386d65cac7a1052c5236c799e80b1f5a2eb06ba1f9420ad.scope: Deactivated successfully.
Oct 13 13:55:20 standalone.localdomain python3[66483]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:20 standalone.localdomain podman[66484]: 
Oct 13 13:55:20 standalone.localdomain podman[66484]: 2025-10-13 13:55:20.461755855 +0000 UTC m=+0.075592917 container create 835311b39034d893cefe615e42034428aeb5c24c145aa4a80c857d7d177d7730 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_taussig, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, RELEASE=main, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, distribution-scope=public)
Oct 13 13:55:20 standalone.localdomain systemd[1]: Started libpod-conmon-835311b39034d893cefe615e42034428aeb5c24c145aa4a80c857d7d177d7730.scope.
Oct 13 13:55:20 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:20 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f698d03475c0819b929d3032150597d3b9c94d7c1dce7a06adac183aa2cdbe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:20 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f698d03475c0819b929d3032150597d3b9c94d7c1dce7a06adac183aa2cdbe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:20 standalone.localdomain podman[66484]: 2025-10-13 13:55:20.512600751 +0000 UTC m=+0.126437803 container init 835311b39034d893cefe615e42034428aeb5c24c145aa4a80c857d7d177d7730 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_taussig, RELEASE=main, name=rhceph, ceph=True, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, release=553, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:55:20 standalone.localdomain podman[66484]: 2025-10-13 13:55:20.520890839 +0000 UTC m=+0.134727881 container start 835311b39034d893cefe615e42034428aeb5c24c145aa4a80c857d7d177d7730 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_taussig, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, ceph=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, release=553, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, RELEASE=main, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:55:20 standalone.localdomain podman[66484]: 2025-10-13 13:55:20.521179888 +0000 UTC m=+0.135016930 container attach 835311b39034d893cefe615e42034428aeb5c24c145aa4a80c857d7d177d7730 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_taussig, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, com.redhat.component=rhceph-container, RELEASE=main, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph)
Oct 13 13:55:20 standalone.localdomain podman[66484]: 2025-10-13 13:55:20.432281332 +0000 UTC m=+0.046118404 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Oct 13 13:55:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/3424365521' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Oct 13 13:55:21 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1869223117' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 13 13:55:21 standalone.localdomain ceph-mon[29756]: osdmap e17: 1 total, 1 up, 1 in
Oct 13 13:55:21 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3424365521' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Oct 13 13:55:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e17 do_prune osdmap full prune enabled
Oct 13 13:55:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e18 e18: 1 total, 1 up, 1 in
Oct 13 13:55:21 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/3424365521' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 13 13:55:21 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e18: 1 total, 1 up, 1 in
Oct 13 13:55:21 standalone.localdomain serene_taussig[66498]: enabled application 'rbd' on pool 'backups'
Oct 13 13:55:21 standalone.localdomain systemd[1]: libpod-835311b39034d893cefe615e42034428aeb5c24c145aa4a80c857d7d177d7730.scope: Deactivated successfully.
Oct 13 13:55:21 standalone.localdomain podman[66523]: 2025-10-13 13:55:21.108630587 +0000 UTC m=+0.051348380 container died 835311b39034d893cefe615e42034428aeb5c24c145aa4a80c857d7d177d7730 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_taussig, vcs-type=git, name=rhceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.openshift.tags=rhceph ceph, version=7, RELEASE=main, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, com.redhat.component=rhceph-container, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:55:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28f698d03475c0819b929d3032150597d3b9c94d7c1dce7a06adac183aa2cdbe-merged.mount: Deactivated successfully.
Oct 13 13:55:21 standalone.localdomain podman[66523]: 2025-10-13 13:55:21.141157073 +0000 UTC m=+0.083874816 container remove 835311b39034d893cefe615e42034428aeb5c24c145aa4a80c857d7d177d7730 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_taussig, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, name=rhceph, version=7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True)
Oct 13 13:55:21 standalone.localdomain systemd[1]: libpod-conmon-835311b39034d893cefe615e42034428aeb5c24c145aa4a80c857d7d177d7730.scope: Deactivated successfully.
Oct 13 13:55:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v578: 5 pgs: 5 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:22 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Oct 13 13:55:22 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 13 13:55:22 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3424365521' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 13 13:55:22 standalone.localdomain ceph-mon[29756]: osdmap e18: 1 total, 1 up, 1 in
Oct 13 13:55:22 standalone.localdomain ceph-mon[29756]: pgmap v578: 5 pgs: 5 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:22 standalone.localdomain python3[66550]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls rgw --export --format json _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:22 standalone.localdomain podman[66551]: 
Oct 13 13:55:22 standalone.localdomain podman[66551]: 2025-10-13 13:55:22.198409025 +0000 UTC m=+0.088059102 container create acbcfd575d11ab3b5f5958e3ea908a829be931853218ad4d9348cad2d9ea2eb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_murdock, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, name=rhceph, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, version=7, vcs-type=git, release=553, GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:55:22 standalone.localdomain systemd[1]: Started libpod-conmon-acbcfd575d11ab3b5f5958e3ea908a829be931853218ad4d9348cad2d9ea2eb2.scope.
Oct 13 13:55:22 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c369fd46a80f25363955415ae06cfa973a6d838b6a6647497145486f365b6997/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c369fd46a80f25363955415ae06cfa973a6d838b6a6647497145486f365b6997/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:22 standalone.localdomain podman[66551]: 2025-10-13 13:55:22.162991533 +0000 UTC m=+0.052641650 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:22 standalone.localdomain podman[66551]: 2025-10-13 13:55:22.265335313 +0000 UTC m=+0.154985380 container init acbcfd575d11ab3b5f5958e3ea908a829be931853218ad4d9348cad2d9ea2eb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_murdock, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:55:22 standalone.localdomain podman[66551]: 2025-10-13 13:55:22.282316382 +0000 UTC m=+0.171966449 container start acbcfd575d11ab3b5f5958e3ea908a829be931853218ad4d9348cad2d9ea2eb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_murdock, name=rhceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 13:55:22 standalone.localdomain podman[66551]: 2025-10-13 13:55:22.282809137 +0000 UTC m=+0.172459244 container attach acbcfd575d11ab3b5f5958e3ea908a829be931853218ad4d9348cad2d9ea2eb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_murdock, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, distribution-scope=public, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:55:22 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14224 -' entity='client.admin' cmd=[{"prefix": "orch ls", "service_type": "rgw", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 13 13:55:22 standalone.localdomain interesting_murdock[66566]: 
Oct 13 13:55:22 standalone.localdomain interesting_murdock[66566]: No services reported
Oct 13 13:55:22 standalone.localdomain systemd[1]: libpod-acbcfd575d11ab3b5f5958e3ea908a829be931853218ad4d9348cad2d9ea2eb2.scope: Deactivated successfully.
Oct 13 13:55:22 standalone.localdomain podman[66551]: 2025-10-13 13:55:22.673846516 +0000 UTC m=+0.563496573 container died acbcfd575d11ab3b5f5958e3ea908a829be931853218ad4d9348cad2d9ea2eb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_murdock, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, release=553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, version=7, vcs-type=git, ceph=True)
Oct 13 13:55:22 standalone.localdomain podman[66591]: 2025-10-13 13:55:22.772131134 +0000 UTC m=+0.086359772 container remove acbcfd575d11ab3b5f5958e3ea908a829be931853218ad4d9348cad2d9ea2eb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_murdock, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vendor=Red Hat, Inc., name=rhceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, release=553, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, ceph=True)
Oct 13 13:55:22 standalone.localdomain systemd[1]: libpod-conmon-acbcfd575d11ab3b5f5958e3ea908a829be931853218ad4d9348cad2d9ea2eb2.scope: Deactivated successfully.
Oct 13 13:55:23 standalone.localdomain ceph-mon[29756]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Oct 13 13:55:23 standalone.localdomain ceph-mon[29756]: Cluster is now healthy
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:55:23
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['images', 'volumes', 'vms', 'backups', '.mgr']
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 13 13:55:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Oct 13 13:55:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Oct 13 13:55:23 standalone.localdomain systemd[1]: tmp-crun.5BVt2p.mount: Deactivated successfully.
Oct 13 13:55:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c369fd46a80f25363955415ae06cfa973a6d838b6a6647497145486f365b6997-merged.mount: Deactivated successfully.
Oct 13 13:55:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v579: 5 pgs: 5 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:23 standalone.localdomain python3[66624]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls mds --export --format json _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:23 standalone.localdomain podman[66625]: 
Oct 13 13:55:23 standalone.localdomain podman[66625]: 2025-10-13 13:55:23.942215131 +0000 UTC m=+0.077485286 container create 09742a6799faf8c9b7dfc828766eaff45c3ae3c12ffdd89ca6c824f6af39df05 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_volhard, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, name=rhceph, architecture=x86_64, version=7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:55:23 standalone.localdomain systemd[1]: Started libpod-conmon-09742a6799faf8c9b7dfc828766eaff45c3ae3c12ffdd89ca6c824f6af39df05.scope.
Oct 13 13:55:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:55:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e2709dc148017c0c6fbfd2a91222a6a1212e674b14adb7db7018d21a88dfde/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e2709dc148017c0c6fbfd2a91222a6a1212e674b14adb7db7018d21a88dfde/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:23 standalone.localdomain podman[66625]: 2025-10-13 13:55:23.995344054 +0000 UTC m=+0.130614189 container init 09742a6799faf8c9b7dfc828766eaff45c3ae3c12ffdd89ca6c824f6af39df05 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_volhard, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, RELEASE=main, ceph=True, vcs-type=git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:55:24 standalone.localdomain podman[66625]: 2025-10-13 13:55:24.001525839 +0000 UTC m=+0.136795984 container start 09742a6799faf8c9b7dfc828766eaff45c3ae3c12ffdd89ca6c824f6af39df05 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_volhard, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, release=553, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.expose-services=)
Oct 13 13:55:24 standalone.localdomain podman[66625]: 2025-10-13 13:55:24.001745796 +0000 UTC m=+0.137015981 container attach 09742a6799faf8c9b7dfc828766eaff45c3ae3c12ffdd89ca6c824f6af39df05 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_volhard, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.buildah.version=1.33.12, release=553, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container)
Oct 13 13:55:24 standalone.localdomain podman[66625]: 2025-10-13 13:55:23.910799378 +0000 UTC m=+0.046069573 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e18 do_prune osdmap full prune enabled
Oct 13 13:55:24 standalone.localdomain ceph-mon[29756]: from='client.14224 -' entity='client.admin' cmd=[{"prefix": "orch ls", "service_type": "rgw", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 13 13:55:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Oct 13 13:55:24 standalone.localdomain ceph-mon[29756]: pgmap v579: 5 pgs: 5 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e19 e19: 1 total, 1 up, 1 in
Oct 13 13:55:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 13 13:55:24 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e19: 1 total, 1 up, 1 in
Oct 13 13:55:24 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 89cf7ae9-9153-4304-85a2-3107ea75ca2d (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 13 13:55:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Oct 13 13:55:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Oct 13 13:55:24 standalone.localdomain systemd[1]: tmp-crun.Ez57q3.mount: Deactivated successfully.
Oct 13 13:55:24 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14226 -' entity='client.admin' cmd=[{"prefix": "orch ls", "service_type": "mds", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 13 13:55:24 standalone.localdomain beautiful_volhard[66640]: 
Oct 13 13:55:24 standalone.localdomain beautiful_volhard[66640]: No services reported
Oct 13 13:55:24 standalone.localdomain systemd[1]: libpod-09742a6799faf8c9b7dfc828766eaff45c3ae3c12ffdd89ca6c824f6af39df05.scope: Deactivated successfully.
Oct 13 13:55:24 standalone.localdomain podman[66625]: 2025-10-13 13:55:24.37824791 +0000 UTC m=+0.513518065 container died 09742a6799faf8c9b7dfc828766eaff45c3ae3c12ffdd89ca6c824f6af39df05 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_volhard, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, ceph=True, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.)
Oct 13 13:55:24 standalone.localdomain systemd[1]: tmp-crun.oRDVkG.mount: Deactivated successfully.
Oct 13 13:55:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e5e2709dc148017c0c6fbfd2a91222a6a1212e674b14adb7db7018d21a88dfde-merged.mount: Deactivated successfully.
Oct 13 13:55:24 standalone.localdomain podman[66665]: 2025-10-13 13:55:24.468671252 +0000 UTC m=+0.081478645 container remove 09742a6799faf8c9b7dfc828766eaff45c3ae3c12ffdd89ca6c824f6af39df05 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_volhard, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-type=git, ceph=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, release=553, version=7, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 13:55:24 standalone.localdomain systemd[1]: libpod-conmon-09742a6799faf8c9b7dfc828766eaff45c3ae3c12ffdd89ca6c824f6af39df05.scope: Deactivated successfully.
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e19 do_prune osdmap full prune enabled
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: osdmap e19: 1 total, 1 up, 1 in
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: from='client.14226 -' entity='client.admin' cmd=[{"prefix": "orch ls", "service_type": "mds", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e20 e20: 1 total, 1 up, 1 in
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e20: 1 total, 1 up, 1 in
Oct 13 13:55:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 60a02b67-d405-4354-9fbd-043246d91f9c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Oct 13 13:55:25 standalone.localdomain python3[66688]: ansible-ceph_mkspec Invoked with service_type=mds cluster=ceph apply=True hosts=['standalone.localdomain'] render_path=/home/ceph-admin/specs service_id=None service_name=None host_pattern=None networks=None labels=None spec=None extra=None
Oct 13 13:55:25 standalone.localdomain podman[66689]: 
Oct 13 13:55:25 standalone.localdomain podman[66689]: 2025-10-13 13:55:25.279891373 +0000 UTC m=+0.064897627 container create 1acf2cb767432acd1fad59ada2ca8e587ea454f513b2b7a5ea32e404b4f8c41f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_mestorf, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, release=553, RELEASE=main, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, distribution-scope=public, build-date=2025-09-24T08:57:55, name=rhceph, version=7, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:55:25 standalone.localdomain systemd[1]: Started libpod-conmon-1acf2cb767432acd1fad59ada2ca8e587ea454f513b2b7a5ea32e404b4f8c41f.scope.
Oct 13 13:55:25 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:25 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/268daa83d9a45c628dabddc9efdfb514e006c45b436872dabf039daa502a8652/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:25 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/268daa83d9a45c628dabddc9efdfb514e006c45b436872dabf039daa502a8652/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:25 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/268daa83d9a45c628dabddc9efdfb514e006c45b436872dabf039daa502a8652/merged/var/lib/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:25 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/268daa83d9a45c628dabddc9efdfb514e006c45b436872dabf039daa502a8652/merged/home/ceph-admin/specs/mds supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:25 standalone.localdomain podman[66689]: 2025-10-13 13:55:25.343944945 +0000 UTC m=+0.128951199 container init 1acf2cb767432acd1fad59ada2ca8e587ea454f513b2b7a5ea32e404b4f8c41f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_mestorf, com.redhat.component=rhceph-container, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, GIT_BRANCH=main, distribution-scope=public, ceph=True, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, RELEASE=main, build-date=2025-09-24T08:57:55, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:55:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v582: 5 pgs: 5 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Oct 13 13:55:25 standalone.localdomain podman[66689]: 2025-10-13 13:55:25.352683257 +0000 UTC m=+0.137689541 container start 1acf2cb767432acd1fad59ada2ca8e587ea454f513b2b7a5ea32e404b4f8c41f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_mestorf, ceph=True, build-date=2025-09-24T08:57:55, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, release=553, GIT_BRANCH=main, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph)
Oct 13 13:55:25 standalone.localdomain podman[66689]: 2025-10-13 13:55:25.352958025 +0000 UTC m=+0.137964359 container attach 1acf2cb767432acd1fad59ada2ca8e587ea454f513b2b7a5ea32e404b4f8c41f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_mestorf, io.buildah.version=1.33.12, name=rhceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=553, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git)
Oct 13 13:55:25 standalone.localdomain podman[66689]: 2025-10-13 13:55:25.26109426 +0000 UTC m=+0.046100554 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:25 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14228 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:55:25 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Saving service mds.mds spec with placement standalone.localdomain
Oct 13 13:55:25 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Saving service mds.mds spec with placement standalone.localdomain
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.mds}] v 0)
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:25 standalone.localdomain practical_mestorf[66704]: Scheduled mds.mds update...
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:55:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev cc64293b-e415-4bf4-a60a-415747518cd5 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:55:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev cc64293b-e415-4bf4-a60a-415747518cd5 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:55:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event cc64293b-e415-4bf4-a60a-415747518cd5 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:55:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev c5331695-a276-49c4-8d49-a4c4c87932cb (Updating mds.mds deployment (+1 -> 1))
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.standalone.ophgjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.standalone.ophgjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 13 13:55:25 standalone.localdomain systemd[1]: libpod-1acf2cb767432acd1fad59ada2ca8e587ea454f513b2b7a5ea32e404b4f8c41f.scope: Deactivated successfully.
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "auth get-or-create", "entity": "mds.mds.standalone.ophgjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:55:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:55:25 standalone.localdomain ceph-mgr[29999]: [cephadm INFO cephadm.serve] Deploying daemon mds.mds.standalone.ophgjq on standalone.localdomain
Oct 13 13:55:25 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Deploying daemon mds.mds.standalone.ophgjq on standalone.localdomain
Oct 13 13:55:25 standalone.localdomain podman[66729]: 2025-10-13 13:55:25.863909751 +0000 UTC m=+0.056774034 container died 1acf2cb767432acd1fad59ada2ca8e587ea454f513b2b7a5ea32e404b4f8c41f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_mestorf, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, ceph=True, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:55:25 standalone.localdomain sudo[66735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:55:25 standalone.localdomain sudo[66735]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:55:25 standalone.localdomain sudo[66735]: pam_unix(sudo:session): session closed for user root
Oct 13 13:55:25 standalone.localdomain podman[66729]: 2025-10-13 13:55:25.897654133 +0000 UTC m=+0.090518366 container remove 1acf2cb767432acd1fad59ada2ca8e587ea454f513b2b7a5ea32e404b4f8c41f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_mestorf, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, ceph=True, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public, io.buildah.version=1.33.12, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553)
Oct 13 13:55:25 standalone.localdomain systemd[1]: libpod-conmon-1acf2cb767432acd1fad59ada2ca8e587ea454f513b2b7a5ea32e404b4f8c41f.scope: Deactivated successfully.
Oct 13 13:55:25 standalone.localdomain sudo[66756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:55:25 standalone.localdomain sudo[66756]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e20 do_prune osdmap full prune enabled
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e21 e21: 1 total, 1 up, 1 in
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e21: 1 total, 1 up, 1 in
Oct 13 13:55:26 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 92590e90-9831-45ef-abfa-26cef0a0bdfd (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: osdmap e20: 1 total, 1 up, 1 in
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: pgmap v582: 5 pgs: 5 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.standalone.ophgjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "auth get-or-create", "entity": "mds.mds.standalone.ophgjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 13 13:55:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:55:26 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 21 pg[2.0( empty local-lis/les=10/11 n=0 ec=10/10 lis/c=10/10 les/c/f=11/11/0 sis=21 pruub=11.494238853s) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active pruub 1108.764038086s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 13 13:55:26 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 21 pg[3.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=21 pruub=13.658046722s) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active pruub 1110.927856445s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 13 13:55:26 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 21 pg[3.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=21 pruub=13.658046722s) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown pruub 1110.927856445s@ mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:26 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 21 pg[2.0( empty local-lis/les=10/11 n=0 ec=10/10 lis/c=10/10 les/c/f=11/11/0 sis=21 pruub=11.494238853s) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown pruub 1108.764038086s@ mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-268daa83d9a45c628dabddc9efdfb514e006c45b436872dabf039daa502a8652-merged.mount: Deactivated successfully.
Oct 13 13:55:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1acf2cb767432acd1fad59ada2ca8e587ea454f513b2b7a5ea32e404b4f8c41f-userdata-shm.mount: Deactivated successfully.
Oct 13 13:55:26 standalone.localdomain podman[66822]: 
Oct 13 13:55:26 standalone.localdomain podman[66822]: 2025-10-13 13:55:26.57967937 +0000 UTC m=+0.065319480 container create b2fdd850e9340fad2db07e4a9c375788e9c4cc5b13ec4699561f8c2a05767f06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_gates, io.openshift.expose-services=, name=rhceph, com.redhat.component=rhceph-container, version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, build-date=2025-09-24T08:57:55, RELEASE=main, architecture=x86_64)
Oct 13 13:55:26 standalone.localdomain systemd[1]: Started libpod-conmon-b2fdd850e9340fad2db07e4a9c375788e9c4cc5b13ec4699561f8c2a05767f06.scope.
Oct 13 13:55:26 standalone.localdomain python3[66821]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create manila_data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:26 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:26 standalone.localdomain podman[66822]: 2025-10-13 13:55:26.548576707 +0000 UTC m=+0.034216867 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:26 standalone.localdomain podman[66822]: 2025-10-13 13:55:26.65004534 +0000 UTC m=+0.135685460 container init b2fdd850e9340fad2db07e4a9c375788e9c4cc5b13ec4699561f8c2a05767f06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_gates, RELEASE=main, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:55:26 standalone.localdomain podman[66822]: 2025-10-13 13:55:26.660434312 +0000 UTC m=+0.146074382 container start b2fdd850e9340fad2db07e4a9c375788e9c4cc5b13ec4699561f8c2a05767f06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_gates, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, version=7, architecture=x86_64, release=553)
Oct 13 13:55:26 standalone.localdomain podman[66822]: 2025-10-13 13:55:26.66070896 +0000 UTC m=+0.146349070 container attach b2fdd850e9340fad2db07e4a9c375788e9c4cc5b13ec4699561f8c2a05767f06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_gates, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git, name=rhceph, RELEASE=main, architecture=x86_64, release=553, GIT_CLEAN=True)
Oct 13 13:55:26 standalone.localdomain silly_gates[66837]: 167 167
Oct 13 13:55:26 standalone.localdomain systemd[1]: libpod-b2fdd850e9340fad2db07e4a9c375788e9c4cc5b13ec4699561f8c2a05767f06.scope: Deactivated successfully.
Oct 13 13:55:26 standalone.localdomain podman[66822]: 2025-10-13 13:55:26.663396851 +0000 UTC m=+0.149036961 container died b2fdd850e9340fad2db07e4a9c375788e9c4cc5b13ec4699561f8c2a05767f06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_gates, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.tags=rhceph ceph, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, release=553, com.redhat.component=rhceph-container, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True)
Oct 13 13:55:26 standalone.localdomain podman[66839]: 
Oct 13 13:55:26 standalone.localdomain podman[66839]: 2025-10-13 13:55:26.724942927 +0000 UTC m=+0.080554967 container create 11a4c99f4653979dc434cc48c492570d7770fc04c4611f812edf26265d06c2e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_austin, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, ceph=True, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, release=553, architecture=x86_64)
Oct 13 13:55:26 standalone.localdomain podman[66850]: 2025-10-13 13:55:26.762939927 +0000 UTC m=+0.087200686 container remove b2fdd850e9340fad2db07e4a9c375788e9c4cc5b13ec4699561f8c2a05767f06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_gates, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.openshift.tags=rhceph ceph, ceph=True, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, architecture=x86_64, CEPH_POINT_RELEASE=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_BRANCH=main, vendor=Red Hat, Inc.)
Oct 13 13:55:26 standalone.localdomain systemd[1]: libpod-conmon-b2fdd850e9340fad2db07e4a9c375788e9c4cc5b13ec4699561f8c2a05767f06.scope: Deactivated successfully.
Oct 13 13:55:26 standalone.localdomain podman[66839]: 2025-10-13 13:55:26.699240776 +0000 UTC m=+0.054852816 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:26 standalone.localdomain systemd[1]: Started libpod-conmon-11a4c99f4653979dc434cc48c492570d7770fc04c4611f812edf26265d06c2e3.scope.
Oct 13 13:55:26 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:26 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f15cf9eb1c45ce742c8de6079a8a870bff118a641935337bacbd3fb63d7658e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:26 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:55:26 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f15cf9eb1c45ce742c8de6079a8a870bff118a641935337bacbd3fb63d7658e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:26 standalone.localdomain podman[66839]: 2025-10-13 13:55:26.828264086 +0000 UTC m=+0.183876156 container init 11a4c99f4653979dc434cc48c492570d7770fc04c4611f812edf26265d06c2e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_austin, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, architecture=x86_64, GIT_BRANCH=main, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Oct 13 13:55:26 standalone.localdomain podman[66839]: 2025-10-13 13:55:26.836430601 +0000 UTC m=+0.192042671 container start 11a4c99f4653979dc434cc48c492570d7770fc04c4611f812edf26265d06c2e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_austin, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, release=553, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:55:26 standalone.localdomain podman[66839]: 2025-10-13 13:55:26.836684389 +0000 UTC m=+0.192296449 container attach 11a4c99f4653979dc434cc48c492570d7770fc04c4611f812edf26265d06c2e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_austin, vcs-type=git, com.redhat.component=rhceph-container, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 13:55:26 standalone.localdomain systemd-rc-local-generator[66897]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:55:26 standalone.localdomain systemd-sysv-generator[66901]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:55:26 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:55:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-60154188585fc8ac6c1c4bb223c53e748c794e5ec884bd44476fa72626f7160f-merged.mount: Deactivated successfully.
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e21 do_prune osdmap full prune enabled
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e22 e22: 1 total, 1 up, 1 in
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e22: 1 total, 1 up, 1 in
Oct 13 13:55:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev aef60ca1-9fef-4398-ace7-51bd82002b59 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: from='client.14228 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: Saving service mds.mds spec with placement standalone.localdomain
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: Deploying daemon mds.mds.standalone.ophgjq on standalone.localdomain
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: osdmap e21: 1 total, 1 up, 1 in
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Oct 13 13:55:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 89cf7ae9-9153-4304-85a2-3107ea75ca2d (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 13 13:55:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 89cf7ae9-9153-4304-85a2-3107ea75ca2d (PG autoscaler increasing pool 2 PGs from 1 to 32) in 3 seconds
Oct 13 13:55:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 60a02b67-d405-4354-9fbd-043246d91f9c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 13 13:55:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 60a02b67-d405-4354-9fbd-043246d91f9c (PG autoscaler increasing pool 3 PGs from 1 to 32) in 2 seconds
Oct 13 13:55:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 92590e90-9831-45ef-abfa-26cef0a0bdfd (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 13 13:55:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 92590e90-9831-45ef-abfa-26cef0a0bdfd (PG autoscaler increasing pool 4 PGs from 1 to 32) in 1 seconds
Oct 13 13:55:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev aef60ca1-9fef-4398-ace7-51bd82002b59 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 13 13:55:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event aef60ca1-9fef-4398-ace7-51bd82002b59 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 0 seconds
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.19( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.17( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.18( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.16( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.16( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.15( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.17( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.14( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.14( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.15( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.13( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.12( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.12( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.13( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.11( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.10( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.10( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.11( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.f( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.f( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.d( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.e( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.e( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.c( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.c( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.d( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.b( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.a( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.a( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.b( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.9( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.8( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.3( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.2( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.1( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.1( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.6( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.7( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.7( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.2( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.6( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.4( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.5( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.5( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.4( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.8( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.9( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.18( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.19( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.1b( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.1a( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.1a( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.1b( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.1d( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.1c( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.1c( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.1d( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.1f( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.3( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.1e( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.1e( empty local-lis/les=12/13 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.1f( empty local-lis/les=10/11 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.17( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.19( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.16( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.16( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.18( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.14( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.15( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.15( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.14( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.13( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.12( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.13( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.12( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.11( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.10( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.10( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.17( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.f( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.11( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.e( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.c( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.e( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.c( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.b( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.d( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.a( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.b( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.a( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.9( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.f( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.2( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.0( empty local-lis/les=21/22 n=0 ec=10/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.3( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.0( empty local-lis/les=21/22 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.1( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.1( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.6( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.7( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.2( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.4( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.6( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.7( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.5( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.5( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.4( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.18( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.9( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.19( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.8( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.1a( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.1a( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.1d( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.1c( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.1c( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.1b( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.1f( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.3( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.1e( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[3.1e( empty local-lis/les=21/22 n=0 ec=21/12 lis/c=12/12 les/c/f=13/13/0 sis=21) [0] r=0 lpr=21 pi=[12,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.1f( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.1d( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 22 pg[2.d( empty local-lis/les=21/22 n=0 ec=21/10 lis/c=10/10 les/c/f=11/11/0 sis=21) [0] r=0 lpr=21 pi=[10,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:27 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:55:27 standalone.localdomain systemd-sysv-generator[66962]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:55:27 standalone.localdomain systemd-rc-local-generator[66959]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:55:27 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "manila_data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/240330404' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "manila_data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Oct 13 13:55:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v585: 67 pgs: 62 unknown, 5 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:27 standalone.localdomain systemd[1]: Starting Ceph mds.mds.standalone.ophgjq for 627e7f45-65aa-56de-94df-66eaee84a56e...
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Oct 13 13:55:27 standalone.localdomain podman[67026]: 
Oct 13 13:55:27 standalone.localdomain podman[67026]: 2025-10-13 13:55:27.643671134 +0000 UTC m=+0.077504125 container create 12809c8f16597aaa197f738bbd8442c22ce664289d1643ac3743017d5cad0e2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mds-mds-standalone-ophgjq, release=553, ceph=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public)
Oct 13 13:55:27 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9581c2cded5115d11a55646b643211d496e57ca59ee66dc10b468d4d8e2f4767/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:27 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9581c2cded5115d11a55646b643211d496e57ca59ee66dc10b468d4d8e2f4767/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:27 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9581c2cded5115d11a55646b643211d496e57ca59ee66dc10b468d4d8e2f4767/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:27 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9581c2cded5115d11a55646b643211d496e57ca59ee66dc10b468d4d8e2f4767/merged/var/lib/ceph/mds/ceph-mds.standalone.ophgjq supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:27 standalone.localdomain podman[67026]: 2025-10-13 13:55:27.700150838 +0000 UTC m=+0.133983829 container init 12809c8f16597aaa197f738bbd8442c22ce664289d1643ac3743017d5cad0e2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mds-mds-standalone-ophgjq, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vcs-type=git, ceph=True, io.openshift.expose-services=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, release=553, com.redhat.component=rhceph-container, name=rhceph, io.openshift.tags=rhceph ceph)
Oct 13 13:55:27 standalone.localdomain podman[67026]: 2025-10-13 13:55:27.710125058 +0000 UTC m=+0.143958049 container start 12809c8f16597aaa197f738bbd8442c22ce664289d1643ac3743017d5cad0e2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mds-mds-standalone-ophgjq, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_CLEAN=True, version=7, vendor=Red Hat, Inc., architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph)
Oct 13 13:55:27 standalone.localdomain bash[67026]: 12809c8f16597aaa197f738bbd8442c22ce664289d1643ac3743017d5cad0e2a
Oct 13 13:55:27 standalone.localdomain podman[67026]: 2025-10-13 13:55:27.613119868 +0000 UTC m=+0.046952899 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:27 standalone.localdomain systemd[1]: Started Ceph mds.mds.standalone.ophgjq for 627e7f45-65aa-56de-94df-66eaee84a56e.
Oct 13 13:55:27 standalone.localdomain ceph-mds[67044]: set uid:gid to 167:167 (ceph:ceph)
Oct 13 13:55:27 standalone.localdomain ceph-mds[67044]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mds, pid 2
Oct 13 13:55:27 standalone.localdomain ceph-mds[67044]: main not setting numa affinity
Oct 13 13:55:27 standalone.localdomain sudo[66756]: pam_unix(sudo:session): session closed for user root
Oct 13 13:55:27 standalone.localdomain ceph-mds[67044]: pidfile_write: ignore empty --pid-file
Oct 13 13:55:27 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mds-mds-standalone-ophgjq[67040]: starting mds.mds.standalone.ophgjq at 
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:55:27 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq Updating MDS map to version 1 from mon.0
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).mds e2 new map
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).mds e2 print_map
                                                        e2
                                                        enable_multiple, ever_enabled_multiple: 1,1
                                                        default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                        legacy client fscid: -1
                                                         
                                                        No filesystems configured
                                                        Standby daemons:
                                                         
                                                        [mds.mds.standalone.ophgjq{-1:14232} state up:standby seq 1 addr [v2:172.18.0.100:6806/3678097372,v1:172.18.0.100:6807/3678097372] compat {c=[1],r=[1],i=[17ff]}]
Oct 13 13:55:27 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq Updating MDS map to version 2 from mon.0
Oct 13 13:55:27 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq Monitors have assigned me to become a standby.
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : mds.? [v2:172.18.0.100:6806/3678097372,v1:172.18.0.100:6807/3678097372] up:boot
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : fsmap  1 up:standby
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "mds.standalone.ophgjq"} v 0)
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mds metadata", "who": "mds.standalone.ophgjq"} : dispatch
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).mds e2 all = 0
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.mds}] v 0)
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev c5331695-a276-49c4-8d49-a4c4c87932cb (Updating mds.mds deployment (+1 -> 1))
Oct 13 13:55:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event c5331695-a276-49c4-8d49-a4c4c87932cb (Updating mds.mds deployment (+1 -> 1)) in 2 seconds
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.mds}] v 0)
Oct 13 13:55:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:27 standalone.localdomain sudo[67064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:55:27 standalone.localdomain sudo[67064]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:55:27 standalone.localdomain sudo[67064]: pam_unix(sudo:session): session closed for user root
Oct 13 13:55:27 standalone.localdomain sudo[67079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:55:27 standalone.localdomain sudo[67079]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:55:27 standalone.localdomain sudo[67079]: pam_unix(sudo:session): session closed for user root
Oct 13 13:55:28 standalone.localdomain sudo[67094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 13:55:28 standalone.localdomain sudo[67094]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e22 do_prune osdmap full prune enabled
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e23 e23: 1 total, 1 up, 1 in
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: osdmap e22: 1 total, 1 up, 1 in
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/240330404' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "manila_data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: pgmap v585: 67 pgs: 62 unknown, 5 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: mds.? [v2:172.18.0.100:6806/3678097372,v1:172.18.0.100:6807/3678097372] up:boot
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: fsmap  1 up:standby
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "mds metadata", "who": "mds.standalone.ophgjq"} : dispatch
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/240330404' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "manila_data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e23: 1 total, 1 up, 1 in
Oct 13 13:55:28 standalone.localdomain relaxed_austin[66870]: pool 'manila_data' created
Oct 13 13:55:28 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:28 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=23 pruub=12.747773170s) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active pruub 1112.049438477s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 13 13:55:28 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 23 pg[5.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=23 pruub=13.763231277s) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active pruub 1113.064941406s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 13 13:55:28 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=23 pruub=12.747773170s) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown pruub 1112.049438477s@ mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:28 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 23 pg[5.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=23 pruub=13.763231277s) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown pruub 1113.064941406s@ mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:28 standalone.localdomain systemd[1]: libpod-11a4c99f4653979dc434cc48c492570d7770fc04c4611f812edf26265d06c2e3.scope: Deactivated successfully.
Oct 13 13:55:28 standalone.localdomain podman[66839]: 2025-10-13 13:55:28.129026292 +0000 UTC m=+1.484638332 container died 11a4c99f4653979dc434cc48c492570d7770fc04c4611f812edf26265d06c2e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_austin, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, ceph=True, io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, name=rhceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main)
Oct 13 13:55:28 standalone.localdomain podman[67110]: 2025-10-13 13:55:28.207274309 +0000 UTC m=+0.067479705 container remove 11a4c99f4653979dc434cc48c492570d7770fc04c4611f812edf26265d06c2e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_austin, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, release=553, build-date=2025-09-24T08:57:55, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:55:28 standalone.localdomain systemd[1]: libpod-conmon-11a4c99f4653979dc434cc48c492570d7770fc04c4611f812edf26265d06c2e3.scope: Deactivated successfully.
Oct 13 13:55:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8f15cf9eb1c45ce742c8de6079a8a870bff118a641935337bacbd3fb63d7658e-merged.mount: Deactivated successfully.
Oct 13 13:55:28 standalone.localdomain python3[67126]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create manila_metadata  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:28 standalone.localdomain podman[67154]: 
Oct 13 13:55:28 standalone.localdomain podman[67154]: 2025-10-13 13:55:28.540143904 +0000 UTC m=+0.069995841 container create 67aac8081702f81ee956e97b4b77f4cdbf763d6f1e78c9ba9ff016285b656875 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_mcclintock, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, version=7, vcs-type=git, architecture=x86_64)
Oct 13 13:55:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 29 completed events
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:28 standalone.localdomain systemd[1]: Started libpod-conmon-67aac8081702f81ee956e97b4b77f4cdbf763d6f1e78c9ba9ff016285b656875.scope.
Oct 13 13:55:28 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:28 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7ead51503ffa295956fc51e535a9bd8aad134f89f9088839d63f25778e636/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:28 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7ead51503ffa295956fc51e535a9bd8aad134f89f9088839d63f25778e636/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:28 standalone.localdomain podman[67154]: 2025-10-13 13:55:28.607545675 +0000 UTC m=+0.137397612 container init 67aac8081702f81ee956e97b4b77f4cdbf763d6f1e78c9ba9ff016285b656875 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_mcclintock, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-09-24T08:57:55, ceph=True, release=553, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main)
Oct 13 13:55:28 standalone.localdomain podman[67154]: 2025-10-13 13:55:28.50833567 +0000 UTC m=+0.038187617 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:28 standalone.localdomain podman[67154]: 2025-10-13 13:55:28.615822674 +0000 UTC m=+0.145674611 container start 67aac8081702f81ee956e97b4b77f4cdbf763d6f1e78c9ba9ff016285b656875 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_mcclintock, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.expose-services=, name=rhceph, RELEASE=main, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.tags=rhceph ceph)
Oct 13 13:55:28 standalone.localdomain podman[67154]: 2025-10-13 13:55:28.616022479 +0000 UTC m=+0.145874416 container attach 67aac8081702f81ee956e97b4b77f4cdbf763d6f1e78c9ba9ff016285b656875 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_mcclintock, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.buildah.version=1.33.12, GIT_CLEAN=True, distribution-scope=public, RELEASE=main, version=7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, release=553, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, name=rhceph)
Oct 13 13:55:28 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct 13 13:55:28 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct 13 13:55:28 standalone.localdomain podman[67238]: 2025-10-13 13:55:28.917604876 +0000 UTC m=+0.083923529 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, release=553, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_BRANCH=main, RELEASE=main, name=rhceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "manila_metadata", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 13 13:55:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/2325259354' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "manila_metadata", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Oct 13 13:55:29 standalone.localdomain podman[67238]: 2025-10-13 13:55:29.025914485 +0000 UTC m=+0.192233148 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, name=rhceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, version=7, com.redhat.component=rhceph-container, release=553, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, RELEASE=main, vcs-type=git, GIT_CLEAN=True)
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e23 do_prune osdmap full prune enabled
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e24 e24: 1 total, 1 up, 1 in
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/240330404' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "manila_data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: osdmap e23: 1 total, 1 up, 1 in
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2325259354' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "manila_metadata", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/2325259354' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "manila_metadata", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.19( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.18( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.18( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.19( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.1b( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.1a( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.1a( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.1b( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e24: 1 total, 1 up, 1 in
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.1d( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.1c( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.1d( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.1c( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.1e( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.1f( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.e( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.f( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.2( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.3( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.2( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.3( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.4( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.5( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.1( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.1( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.6( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.7( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.6( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.7( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.5( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.4( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.f( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.e( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.c( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.d( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.c( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.d( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.a( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.b( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.b( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.a( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.9( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.8( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.9( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.8( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.16( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.17( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.16( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.17( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.15( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.15( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.14( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.12( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain laughing_mcclintock[67183]: pool 'manila_metadata' created
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.14( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.13( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.13( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.12( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.10( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.11( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.11( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.10( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.1f( empty local-lis/les=14/15 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.1e( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.18( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.19( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.1a( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.1d( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.1c( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.1b( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.1e( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.f( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.2( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.3( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.0( empty local-lis/les=23/24 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.0( empty local-lis/les=23/24 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.5( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.1( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.6( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.7( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.7( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.4( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.4( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.e( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.d( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.c( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.b( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.a( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.8( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.9( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.16( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.17( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.17( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.16( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.14( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.12( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.12( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.15( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.10( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.11( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.11( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.10( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.1e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.13( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[5.1f( empty local-lis/les=23/24 n=0 ec=23/14 lis/c=14/14 les/c/f=15/15/0 sis=23) [0] r=0 lpr=23 pi=[14,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 24 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:29 standalone.localdomain systemd[1]: libpod-67aac8081702f81ee956e97b4b77f4cdbf763d6f1e78c9ba9ff016285b656875.scope: Deactivated successfully.
Oct 13 13:55:29 standalone.localdomain podman[67154]: 2025-10-13 13:55:29.225742368 +0000 UTC m=+0.755594345 container died 67aac8081702f81ee956e97b4b77f4cdbf763d6f1e78c9ba9ff016285b656875 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_mcclintock, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, distribution-scope=public, GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, architecture=x86_64, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git)
Oct 13 13:55:29 standalone.localdomain podman[67303]: 2025-10-13 13:55:29.306260274 +0000 UTC m=+0.070280039 container remove 67aac8081702f81ee956e97b4b77f4cdbf763d6f1e78c9ba9ff016285b656875 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_mcclintock, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., ceph=True, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, version=7, description=Red Hat Ceph Storage 7, vcs-type=git)
Oct 13 13:55:29 standalone.localdomain systemd[1]: libpod-conmon-67aac8081702f81ee956e97b4b77f4cdbf763d6f1e78c9ba9ff016285b656875.scope: Deactivated successfully.
Oct 13 13:55:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v588: 131 pgs: 1 creating+peering, 2 peering, 63 unknown, 65 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-62d7ead51503ffa295956fc51e535a9bd8aad134f89f9088839d63f25778e636-merged.mount: Deactivated successfully.
Oct 13 13:55:29 standalone.localdomain sudo[67094]: pam_unix(sudo:session): session closed for user root
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:55:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 10883134-3984-4e0e-a093-2f69d280788a (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:55:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 10883134-3984-4e0e-a093-2f69d280788a (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:55:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 10883134-3984-4e0e-a093-2f69d280788a (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:55:29 standalone.localdomain sudo[67357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:55:29 standalone.localdomain sudo[67357]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:55:29 standalone.localdomain sudo[67357]: pam_unix(sudo:session): session closed for user root
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:55:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:55:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev dfb5c9a6-ef3b-4eb5-9075-5d5d60682ed0 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:55:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev dfb5c9a6-ef3b-4eb5-9075-5d5d60682ed0 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:55:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event dfb5c9a6-ef3b-4eb5-9075-5d5d60682ed0 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.1 deep-scrub starts
Oct 13 13:55:29 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.1 deep-scrub ok
Oct 13 13:55:29 standalone.localdomain sudo[67373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:55:29 standalone.localdomain sudo[67373]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:55:29 standalone.localdomain sudo[67373]: pam_unix(sudo:session): session closed for user root
Oct 13 13:55:29 standalone.localdomain python3[67358]: ansible-ceph_fs Invoked with name=cephfs cluster=ceph data=manila_data metadata=manila_metadata state=present max_mds=None
Oct 13 13:55:29 standalone.localdomain podman[67388]: 
Oct 13 13:55:29 standalone.localdomain podman[67388]: 2025-10-13 13:55:29.840069914 +0000 UTC m=+0.063335310 container create 57e2c84fb79c68bc35e2c5de1f406165ebe2795c1c40be88a62f4599447662b7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_sanderson, ceph=True, release=553, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git)
Oct 13 13:55:29 standalone.localdomain systemd[1]: Started libpod-conmon-57e2c84fb79c68bc35e2c5de1f406165ebe2795c1c40be88a62f4599447662b7.scope.
Oct 13 13:55:29 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0139dce5c72e8d4edb83e998ca3958b454a9f1c4585566fbfafaf249a14e237/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0139dce5c72e8d4edb83e998ca3958b454a9f1c4585566fbfafaf249a14e237/merged/var/lib/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0139dce5c72e8d4edb83e998ca3958b454a9f1c4585566fbfafaf249a14e237/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:29 standalone.localdomain podman[67388]: 2025-10-13 13:55:29.898520998 +0000 UTC m=+0.121786424 container init 57e2c84fb79c68bc35e2c5de1f406165ebe2795c1c40be88a62f4599447662b7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_sanderson, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.buildah.version=1.33.12, GIT_BRANCH=main, version=7, build-date=2025-09-24T08:57:55, architecture=x86_64, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553)
Oct 13 13:55:29 standalone.localdomain podman[67388]: 2025-10-13 13:55:29.90727925 +0000 UTC m=+0.130544676 container start 57e2c84fb79c68bc35e2c5de1f406165ebe2795c1c40be88a62f4599447662b7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_sanderson, RELEASE=main, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., release=553)
Oct 13 13:55:29 standalone.localdomain podman[67388]: 2025-10-13 13:55:29.907550518 +0000 UTC m=+0.130816014 container attach 57e2c84fb79c68bc35e2c5de1f406165ebe2795c1c40be88a62f4599447662b7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_sanderson, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.expose-services=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, com.redhat.component=rhceph-container, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:55:29 standalone.localdomain podman[67388]: 2025-10-13 13:55:29.825540549 +0000 UTC m=+0.048805955 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e24 do_prune osdmap full prune enabled
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [WRN] : Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e25 e25: 1 total, 1 up, 1 in
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e25: 1 total, 1 up, 1 in
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: 3.1 scrub starts
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: 3.1 scrub ok
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2325259354' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "manila_metadata", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: osdmap e24: 1 total, 1 up, 1 in
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: pgmap v588: 131 pgs: 1 creating+peering, 2 peering, 63 unknown, 65 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:55:30 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "fs get", "fs_name": "cephfs", "format": "json"} v 0)
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2009177780' entity='client.admin' cmd={"prefix": "fs get", "fs_name": "cephfs", "format": "json"} : dispatch
Oct 13 13:55:30 standalone.localdomain flamboyant_sanderson[67402]: 
Oct 13 13:55:30 standalone.localdomain flamboyant_sanderson[67402]: Error ENOENT: filesystem 'cephfs' not found
Oct 13 13:55:30 standalone.localdomain systemd[1]: libpod-57e2c84fb79c68bc35e2c5de1f406165ebe2795c1c40be88a62f4599447662b7.scope: Deactivated successfully.
Oct 13 13:55:30 standalone.localdomain podman[67388]: 2025-10-13 13:55:30.262121364 +0000 UTC m=+0.485386820 container died 57e2c84fb79c68bc35e2c5de1f406165ebe2795c1c40be88a62f4599447662b7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_sanderson, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:55:30 standalone.localdomain podman[67426]: 2025-10-13 13:55:30.354077743 +0000 UTC m=+0.083523097 container remove 57e2c84fb79c68bc35e2c5de1f406165ebe2795c1c40be88a62f4599447662b7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_sanderson, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, RELEASE=main, release=553, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:55:30 standalone.localdomain systemd[1]: libpod-conmon-57e2c84fb79c68bc35e2c5de1f406165ebe2795c1c40be88a62f4599447662b7.scope: Deactivated successfully.
Oct 13 13:55:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a0139dce5c72e8d4edb83e998ca3958b454a9f1c4585566fbfafaf249a14e237-merged.mount: Deactivated successfully.
Oct 13 13:55:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57e2c84fb79c68bc35e2c5de1f406165ebe2795c1c40be88a62f4599447662b7-userdata-shm.mount: Deactivated successfully.
Oct 13 13:55:30 standalone.localdomain podman[67441]: 
Oct 13 13:55:30 standalone.localdomain podman[67441]: 2025-10-13 13:55:30.45636584 +0000 UTC m=+0.077941519 container create 399ee36568ed1740bd31b7eb9faf9b5a48d9c2c8a6d9b1fbcda9c0206e01f959 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_hermann, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.buildah.version=1.33.12, vcs-type=git, RELEASE=main, build-date=2025-09-24T08:57:55, ceph=True, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7)
Oct 13 13:55:30 standalone.localdomain systemd[1]: Started libpod-conmon-399ee36568ed1740bd31b7eb9faf9b5a48d9c2c8a6d9b1fbcda9c0206e01f959.scope.
Oct 13 13:55:30 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:30 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a160a061a9e32722cac66042bd1e95a71c4c3a0954a8c60345061a1f2224764e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:30 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a160a061a9e32722cac66042bd1e95a71c4c3a0954a8c60345061a1f2224764e/merged/var/lib/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:30 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a160a061a9e32722cac66042bd1e95a71c4c3a0954a8c60345061a1f2224764e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:30 standalone.localdomain podman[67441]: 2025-10-13 13:55:30.425923367 +0000 UTC m=+0.047499106 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:30 standalone.localdomain podman[67441]: 2025-10-13 13:55:30.528819153 +0000 UTC m=+0.150394842 container init 399ee36568ed1740bd31b7eb9faf9b5a48d9c2c8a6d9b1fbcda9c0206e01f959 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_hermann, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, ceph=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, GIT_BRANCH=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7)
Oct 13 13:55:30 standalone.localdomain podman[67441]: 2025-10-13 13:55:30.538100792 +0000 UTC m=+0.159676481 container start 399ee36568ed1740bd31b7eb9faf9b5a48d9c2c8a6d9b1fbcda9c0206e01f959 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_hermann, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, ceph=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:55:30 standalone.localdomain podman[67441]: 2025-10-13 13:55:30.538391231 +0000 UTC m=+0.159966960 container attach 399ee36568ed1740bd31b7eb9faf9b5a48d9c2c8a6d9b1fbcda9c0206e01f959 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_hermann, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, version=7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., release=553)
Oct 13 13:55:30 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct 13 13:55:30 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "manila_metadata", "data": "manila_data"} v 0)
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/4225231235' entity='client.admin' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "manila_metadata", "data": "manila_data"} : dispatch
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e25 do_prune osdmap full prune enabled
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [INF] : daemon mds.mds.standalone.ophgjq assigned to filesystem cephfs as rank 0
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).mds e3 new map
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).mds e3 print_map
                                                        e3
                                                        enable_multiple, ever_enabled_multiple: 1,1
                                                        default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                        legacy client fscid: 1
                                                         
                                                        Filesystem 'cephfs' (1)
                                                        fs_name        cephfs
                                                        epoch        3
                                                        flags        12 joinable allow_snaps allow_multimds_snaps
                                                        created        2025-10-13T13:55:30.936602+0000
                                                        modified        2025-10-13T13:55:30.936764+0000
                                                        tableserver        0
                                                        root        0
                                                        session_timeout        60
                                                        session_autoclose        300
                                                        max_file_size        1099511627776
                                                        required_client_features        {}
                                                        last_failure        0
                                                        last_failure_osd_epoch        0
                                                        compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                        max_mds        1
                                                        in        0
                                                        up        {0=14232}
                                                        failed        
                                                        damaged        
                                                        stopped        
                                                        data_pools        [6]
                                                        metadata_pool        7
                                                        inline_data        disabled
                                                        balancer        
                                                        bal_rank_mask        -1
                                                        standby_count_wanted        0
                                                        qdb_cluster        leader: 0 members: 
                                                        [mds.mds.standalone.ophgjq{0:14232} state up:creating seq 1 addr [v2:172.18.0.100:6806/3678097372,v1:172.18.0.100:6807/3678097372] compat {c=[1],r=[1],i=[17ff]}]
                                                         
                                                         
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e26 e26: 1 total, 1 up, 1 in
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq Updating MDS map to version 3 from mon.0
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.0.3 handle_mds_map i am now mds.0.3
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.0.cache creating system inode with ino:0x1
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.0.cache creating system inode with ino:0x100
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.0.cache creating system inode with ino:0x600
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.0.cache creating system inode with ino:0x601
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.0.cache creating system inode with ino:0x602
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.0.cache creating system inode with ino:0x603
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.0.cache creating system inode with ino:0x604
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.0.cache creating system inode with ino:0x605
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.0.cache creating system inode with ino:0x606
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.0.cache creating system inode with ino:0x607
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.0.cache creating system inode with ino:0x608
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.0.cache creating system inode with ino:0x609
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e26: 1 total, 1 up, 1 in
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/4225231235' entity='client.admin' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "manila_metadata", "data": "manila_data"}]': finished
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.standalone.ophgjq=up:creating}
Oct 13 13:55:30 standalone.localdomain distracted_hermann[67455]:   Pool 'manila_data' (id '6') has pg autoscale mode 'on' but is not marked as bulk.
Oct 13 13:55:30 standalone.localdomain distracted_hermann[67455]:   Consider setting the flag by running
Oct 13 13:55:30 standalone.localdomain distracted_hermann[67455]:     # ceph osd pool set manila_data bulk true
Oct 13 13:55:30 standalone.localdomain distracted_hermann[67455]: new fs with metadata pool 7 and data pool 6
Oct 13 13:55:30 standalone.localdomain systemd[1]: libpod-399ee36568ed1740bd31b7eb9faf9b5a48d9c2c8a6d9b1fbcda9c0206e01f959.scope: Deactivated successfully.
Oct 13 13:55:30 standalone.localdomain podman[67441]: 2025-10-13 13:55:30.962286475 +0000 UTC m=+0.583862124 container died 399ee36568ed1740bd31b7eb9faf9b5a48d9c2c8a6d9b1fbcda9c0206e01f959 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_hermann, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, version=7)
Oct 13 13:55:30 standalone.localdomain ceph-mds[67044]: mds.0.3 creating_done
Oct 13 13:55:30 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [INF] : daemon mds.mds.standalone.ophgjq is now active in filesystem cephfs as rank 0
Oct 13 13:55:31 standalone.localdomain podman[67488]: 2025-10-13 13:55:31.03577752 +0000 UTC m=+0.067002181 container remove 399ee36568ed1740bd31b7eb9faf9b5a48d9c2c8a6d9b1fbcda9c0206e01f959 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_hermann, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, RELEASE=main, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=)
Oct 13 13:55:31 standalone.localdomain systemd[1]: libpod-conmon-399ee36568ed1740bd31b7eb9faf9b5a48d9c2c8a6d9b1fbcda9c0206e01f959.scope: Deactivated successfully.
Oct 13 13:55:31 standalone.localdomain ceph-mon[29756]: 2.1 deep-scrub starts
Oct 13 13:55:31 standalone.localdomain ceph-mon[29756]: 2.1 deep-scrub ok
Oct 13 13:55:31 standalone.localdomain ceph-mon[29756]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 13 13:55:31 standalone.localdomain ceph-mon[29756]: osdmap e25: 1 total, 1 up, 1 in
Oct 13 13:55:31 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2009177780' entity='client.admin' cmd={"prefix": "fs get", "fs_name": "cephfs", "format": "json"} : dispatch
Oct 13 13:55:31 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4225231235' entity='client.admin' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "manila_metadata", "data": "manila_data"} : dispatch
Oct 13 13:55:31 standalone.localdomain ceph-mon[29756]: daemon mds.mds.standalone.ophgjq assigned to filesystem cephfs as rank 0
Oct 13 13:55:31 standalone.localdomain ceph-mon[29756]: osdmap e26: 1 total, 1 up, 1 in
Oct 13 13:55:31 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4225231235' entity='client.admin' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "manila_metadata", "data": "manila_data"}]': finished
Oct 13 13:55:31 standalone.localdomain ceph-mon[29756]: fsmap cephfs:1 {0=mds.standalone.ophgjq=up:creating}
Oct 13 13:55:31 standalone.localdomain ceph-mon[29756]: daemon mds.mds.standalone.ophgjq is now active in filesystem cephfs as rank 0
Oct 13 13:55:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v591: 131 pgs: 1 creating+peering, 2 peering, 63 unknown, 65 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a160a061a9e32722cac66042bd1e95a71c4c3a0954a8c60345061a1f2224764e-merged.mount: Deactivated successfully.
Oct 13 13:55:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-399ee36568ed1740bd31b7eb9faf9b5a48d9c2c8a6d9b1fbcda9c0206e01f959-userdata-shm.mount: Deactivated successfully.
Oct 13 13:55:31 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts
Oct 13 13:55:31 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok
Oct 13 13:55:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).mds e4 new map
Oct 13 13:55:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).mds e4 print_map
                                                        e4
                                                        enable_multiple, ever_enabled_multiple: 1,1
                                                        default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                        legacy client fscid: 1
                                                         
                                                        Filesystem 'cephfs' (1)
                                                        fs_name        cephfs
                                                        epoch        4
                                                        flags        12 joinable allow_snaps allow_multimds_snaps
                                                        created        2025-10-13T13:55:30.936602+0000
                                                        modified        2025-10-13T13:55:31.942657+0000
                                                        tableserver        0
                                                        root        0
                                                        session_timeout        60
                                                        session_autoclose        300
                                                        max_file_size        1099511627776
                                                        required_client_features        {}
                                                        last_failure        0
                                                        last_failure_osd_epoch        0
                                                        compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                        max_mds        1
                                                        in        0
                                                        up        {0=14232}
                                                        failed        
                                                        damaged        
                                                        stopped        
                                                        data_pools        [6]
                                                        metadata_pool        7
                                                        inline_data        disabled
                                                        balancer        
                                                        bal_rank_mask        -1
                                                        standby_count_wanted        0
                                                        qdb_cluster        leader: 14232 members: 14232
                                                        [mds.mds.standalone.ophgjq{0:14232} state up:active seq 2 addr [v2:172.18.0.100:6806/3678097372,v1:172.18.0.100:6807/3678097372] compat {c=[1],r=[1],i=[17ff]}]
                                                         
                                                         
Oct 13 13:55:31 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq Updating MDS map to version 4 from mon.0
Oct 13 13:55:31 standalone.localdomain ceph-mds[67044]: mds.0.3 handle_mds_map i am now mds.0.3
Oct 13 13:55:31 standalone.localdomain ceph-mds[67044]: mds.0.3 handle_mds_map state change up:creating --> up:active
Oct 13 13:55:31 standalone.localdomain ceph-mds[67044]: mds.0.3 recovery_done -- successful recovery!
Oct 13 13:55:31 standalone.localdomain ceph-mds[67044]: mds.0.3 active_start
Oct 13 13:55:31 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : mds.? [v2:172.18.0.100:6806/3678097372,v1:172.18.0.100:6807/3678097372] up:active
Oct 13 13:55:31 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.standalone.ophgjq=up:active}
Oct 13 13:55:32 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Oct 13 13:55:32 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 13 13:55:32 standalone.localdomain ceph-mon[29756]: 3.2 scrub starts
Oct 13 13:55:32 standalone.localdomain ceph-mon[29756]: 3.2 scrub ok
Oct 13 13:55:32 standalone.localdomain ceph-mon[29756]: pgmap v591: 131 pgs: 1 creating+peering, 2 peering, 63 unknown, 65 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:32 standalone.localdomain ceph-mon[29756]: mds.? [v2:172.18.0.100:6806/3678097372,v1:172.18.0.100:6807/3678097372] up:active
Oct 13 13:55:32 standalone.localdomain ceph-mon[29756]: fsmap cephfs:1 {0=mds.standalone.ophgjq=up:active}
Oct 13 13:55:32 standalone.localdomain python3[67533]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module ls --format json _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:32 standalone.localdomain podman[67534]: 
Oct 13 13:55:32 standalone.localdomain podman[67534]: 2025-10-13 13:55:32.671640117 +0000 UTC m=+0.081793564 container create 01641c6f5edd0754ccdc578a6f4490cdbe49e02363d0952bba33847baf957d56 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_wing, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, release=553, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:55:32 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct 13 13:55:32 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct 13 13:55:32 standalone.localdomain systemd[1]: Started libpod-conmon-01641c6f5edd0754ccdc578a6f4490cdbe49e02363d0952bba33847baf957d56.scope.
Oct 13 13:55:32 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:32 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6211f9d686faa8f4cb71327918b665f84939578408faa3dbd46d1804c0c4fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:32 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6211f9d686faa8f4cb71327918b665f84939578408faa3dbd46d1804c0c4fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:32 standalone.localdomain podman[67534]: 2025-10-13 13:55:32.735252065 +0000 UTC m=+0.145405512 container init 01641c6f5edd0754ccdc578a6f4490cdbe49e02363d0952bba33847baf957d56 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_wing, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=553, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-type=git)
Oct 13 13:55:32 standalone.localdomain podman[67534]: 2025-10-13 13:55:32.639574636 +0000 UTC m=+0.049728133 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:32 standalone.localdomain systemd[1]: tmp-crun.ToeOzh.mount: Deactivated successfully.
Oct 13 13:55:32 standalone.localdomain podman[67534]: 2025-10-13 13:55:32.745823951 +0000 UTC m=+0.155977398 container start 01641c6f5edd0754ccdc578a6f4490cdbe49e02363d0952bba33847baf957d56 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_wing, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=553, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55)
Oct 13 13:55:32 standalone.localdomain podman[67534]: 2025-10-13 13:55:32.746203574 +0000 UTC m=+0.156357021 container attach 01641c6f5edd0754ccdc578a6f4490cdbe49e02363d0952bba33847baf957d56 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_wing, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, release=553, architecture=x86_64, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, version=7, ceph=True, io.openshift.expose-services=, name=rhceph, com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main)
Oct 13 13:55:32 standalone.localdomain ceph-mds[67044]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct 13 13:55:32 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mds-mds-standalone-ophgjq[67040]: 2025-10-13T13:55:32.789+0000 7f5975006640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct 13 13:55:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json"} v 0)
Oct 13 13:55:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2203989479' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json"} : dispatch
Oct 13 13:55:33 standalone.localdomain romantic_wing[67548]: 
Oct 13 13:55:33 standalone.localdomain romantic_wing[67548]: {"always_on_modules":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"enabled_modules":["cephadm","iostat","nfs","restful"],"disabled_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: 
name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate 
as","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"CALL_HOME_REMIND_LATER_ON":{"name":"CALL_HOME_REMIND_LATER_ON","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","ma
x":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MANAGED_BY_CLUSTERS":{"name":"MANAGED_BY_CLUSTERS","type":"str","level":"advanced","flags":0,"default_value":"[]","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MULTICLUSTER_CONFIG":{"name":"MULTICLUSTER_CONFIG","type":"str","level":"advanced","flags":0,"default_value":"{}","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSI
ON_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"de
fault_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"STORAGE_INSIGHTS_REMIND_LATER_ON":{"name":"STORAGE_INSIGHTS_REMIND_LATER_ON","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD
_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_als
o":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level"
Oct 13 13:55:33 standalone.localdomain romantic_wing[67548]: :"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval"
,"type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB.  
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within 
local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_als
Oct 13 13:55:33 standalone.localdomain romantic_wing[67548]: o":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"se
e_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","ma
x":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level
":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_va
Oct 13 13:55:33 standalone.localdomain romantic_wing[67548]: lue":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","l
ong_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"drive_group_interval":{"name":"drive_group_interval","type":"float","level":"advanced","flags":0,"default_value":"300.0","min":"","max":"","enum_allowed":[],"desc":"interval in seconds between re-application of applied drive_groups","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_als
Oct 13 13:55:33 standalone.localdomain romantic_wing[67548]: o":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags"
:[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"se
e_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}
Oct 13 13:55:33 standalone.localdomain systemd[1]: libpod-01641c6f5edd0754ccdc578a6f4490cdbe49e02363d0952bba33847baf957d56.scope: Deactivated successfully.
Oct 13 13:55:33 standalone.localdomain podman[67534]: 2025-10-13 13:55:33.144308645 +0000 UTC m=+0.554462132 container died 01641c6f5edd0754ccdc578a6f4490cdbe49e02363d0952bba33847baf957d56 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_wing, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, distribution-scope=public, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, release=553, io.openshift.expose-services=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:55:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 13:55:33 standalone.localdomain podman[67574]: 2025-10-13 13:55:33.220209021 +0000 UTC m=+0.053328410 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.component=openstack-memcached-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-memcached, container_name=memcached, version=17.1.9, config_id=tripleo_step1, tcib_managed=true, build-date=2025-07-21T12:58:43, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, release=1, 
vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, vendor=Red Hat, Inc.)
Oct 13 13:55:33 standalone.localdomain ceph-mon[29756]: 3.3 deep-scrub starts
Oct 13 13:55:33 standalone.localdomain ceph-mon[29756]: 3.3 deep-scrub ok
Oct 13 13:55:33 standalone.localdomain ceph-mon[29756]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Oct 13 13:55:33 standalone.localdomain ceph-mon[29756]: Cluster is now healthy
Oct 13 13:55:33 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2203989479' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json"} : dispatch
Oct 13 13:55:33 standalone.localdomain podman[67574]: 2025-10-13 13:55:33.263899991 +0000 UTC m=+0.097019410 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, batch=17.1_20250721.1, architecture=x86_64, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.component=openstack-memcached-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, vcs-type=git, version=17.1.9, 
build-date=2025-07-21T12:58:43, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, container_name=memcached, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, managed_by=tripleo_ansible, distribution-scope=public)
Oct 13 13:55:33 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 13:55:33 standalone.localdomain podman[67573]: 2025-10-13 13:55:33.292967544 +0000 UTC m=+0.134223238 container remove 01641c6f5edd0754ccdc578a6f4490cdbe49e02363d0952bba33847baf957d56 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_wing, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., ceph=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, distribution-scope=public, name=rhceph, version=7, release=553, RELEASE=main)
Oct 13 13:55:33 standalone.localdomain systemd[1]: libpod-conmon-01641c6f5edd0754ccdc578a6f4490cdbe49e02363d0952bba33847baf957d56.scope: Deactivated successfully.
Oct 13 13:55:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v592: 131 pgs: 1 creating+peering, 2 peering, 63 unknown, 65 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:33 standalone.localdomain rsyslogd[56156]: message too long (16383) with configured size 8096, begin of message is: {"always_on_modules":["balancer","crash","devicehealth","orchestrator","pg_autos [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 13:55:33 standalone.localdomain rsyslogd[56156]: message too long (8192) with configured size 8096, begin of message is: :"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 13:55:33 standalone.localdomain rsyslogd[56156]: message too long (8192) with configured size 8096, begin of message is: o":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1 [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 13:55:33 standalone.localdomain rsyslogd[56156]: message too long (8192) with configured size 8096, begin of message is: lue":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address o [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 13:55:33 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 31 completed events
Oct 13 13:55:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:55:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d6211f9d686faa8f4cb71327918b665f84939578408faa3dbd46d1804c0c4fa-merged.mount: Deactivated successfully.
Oct 13 13:55:33 standalone.localdomain python3[67621]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:55:33 standalone.localdomain python3[67625]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363733.5818343-67611-146242596036105/source dest=/etc/ceph/ceph.client.openstack.keyring mode=420 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=e2b92839f87802a8b24ea6e6780e50ee4d6ed8f3 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:55:34 standalone.localdomain ceph-mon[29756]: 2.2 scrub starts
Oct 13 13:55:34 standalone.localdomain ceph-mon[29756]: 2.2 scrub ok
Oct 13 13:55:34 standalone.localdomain ceph-mon[29756]: pgmap v592: 131 pgs: 1 creating+peering, 2 peering, 63 unknown, 65 active+clean; 449 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:55:34 standalone.localdomain python3[67638]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:55:34 standalone.localdomain python3[67642]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363734.1038802-67611-95907002615177/source dest=/etc/ceph/ceph.client.manila.keyring mode=420 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=a6bbe08b4ca238c19d82de3bedd06393264e7b1f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:35 standalone.localdomain python3[67651]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:35 standalone.localdomain podman[67652]: 
Oct 13 13:55:35 standalone.localdomain podman[67652]: 2025-10-13 13:55:35.252417416 +0000 UTC m=+0.080040971 container create 7a6a25af70461468f1932c4d7c8460b9709fbd416bc51a2dd3064296c92fc9c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_chebyshev, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, RELEASE=main, name=rhceph, vcs-type=git, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, CEPH_POINT_RELEASE=, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 13:55:35 standalone.localdomain systemd[1]: Started libpod-conmon-7a6a25af70461468f1932c4d7c8460b9709fbd416bc51a2dd3064296c92fc9c0.scope.
Oct 13 13:55:35 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:35 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0c8151110f44276d5ba07d5265abb7fcd4b1cf678a33049248966345a1eecd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:35 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0c8151110f44276d5ba07d5265abb7fcd4b1cf678a33049248966345a1eecd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:35 standalone.localdomain podman[67652]: 2025-10-13 13:55:35.320588352 +0000 UTC m=+0.148211907 container init 7a6a25af70461468f1932c4d7c8460b9709fbd416bc51a2dd3064296c92fc9c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_chebyshev, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, name=rhceph, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, ceph=True, distribution-scope=public, com.redhat.component=rhceph-container, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:55:35 standalone.localdomain podman[67652]: 2025-10-13 13:55:35.222773748 +0000 UTC m=+0.050397323 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:35 standalone.localdomain podman[67652]: 2025-10-13 13:55:35.330627242 +0000 UTC m=+0.158250787 container start 7a6a25af70461468f1932c4d7c8460b9709fbd416bc51a2dd3064296c92fc9c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_chebyshev, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:55:35 standalone.localdomain podman[67652]: 2025-10-13 13:55:35.330927851 +0000 UTC m=+0.158551436 container attach 7a6a25af70461468f1932c4d7c8460b9709fbd416bc51a2dd3064296c92fc9c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_chebyshev, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, RELEASE=main, architecture=x86_64, ceph=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, CEPH_POINT_RELEASE=, release=553, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7)
Oct 13 13:55:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v593: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail; 2.3 KiB/s wr, 7 op/s
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e26 do_prune osdmap full prune enabled
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e27 e27: 1 total, 1 up, 1 in
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e27: 1 total, 1 up, 1 in
Oct 13 13:55:35 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct 13 13:55:35 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/2147382860' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Oct 13 13:55:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/2147382860' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 13 13:55:35 standalone.localdomain systemd[1]: libpod-7a6a25af70461468f1932c4d7c8460b9709fbd416bc51a2dd3064296c92fc9c0.scope: Deactivated successfully.
Oct 13 13:55:35 standalone.localdomain podman[67691]: 2025-10-13 13:55:35.957826925 +0000 UTC m=+0.051478175 container died 7a6a25af70461468f1932c4d7c8460b9709fbd416bc51a2dd3064296c92fc9c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_chebyshev, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:55:35 standalone.localdomain podman[67691]: 2025-10-13 13:55:35.994049642 +0000 UTC m=+0.087700852 container remove 7a6a25af70461468f1932c4d7c8460b9709fbd416bc51a2dd3064296c92fc9c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_chebyshev, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_BRANCH=main, distribution-scope=public, com.redhat.component=rhceph-container, release=553, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:55:35 standalone.localdomain systemd[1]: libpod-conmon-7a6a25af70461468f1932c4d7c8460b9709fbd416bc51a2dd3064296c92fc9c0.scope: Deactivated successfully.
Oct 13 13:55:36 standalone.localdomain python3[67707]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.manila.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2a0c8151110f44276d5ba07d5265abb7fcd4b1cf678a33049248966345a1eecd-merged.mount: Deactivated successfully.
Oct 13 13:55:36 standalone.localdomain podman[67708]: 
Oct 13 13:55:36 standalone.localdomain podman[67708]: 2025-10-13 13:55:36.31838042 +0000 UTC m=+0.058155496 container create f04c5b323f0c6805b9fc1f880f7e780e5ecb659aec4c291f5cc04b3a085029fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_rubin, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, CEPH_POINT_RELEASE=, name=rhceph, vendor=Red Hat, Inc., GIT_CLEAN=True, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 13:55:36 standalone.localdomain systemd[1]: Started libpod-conmon-f04c5b323f0c6805b9fc1f880f7e780e5ecb659aec4c291f5cc04b3a085029fc.scope.
Oct 13 13:55:36 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:36 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f0d2c9ec7f0ccf6849b7c2fbe7b0a477e4ec0f63d959444f1bbb610e593668/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:36 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f0d2c9ec7f0ccf6849b7c2fbe7b0a477e4ec0f63d959444f1bbb610e593668/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:36 standalone.localdomain podman[67708]: 2025-10-13 13:55:36.377227775 +0000 UTC m=+0.117002871 container init f04c5b323f0c6805b9fc1f880f7e780e5ecb659aec4c291f5cc04b3a085029fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_rubin, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc.)
Oct 13 13:55:36 standalone.localdomain podman[67708]: 2025-10-13 13:55:36.385949967 +0000 UTC m=+0.125725033 container start f04c5b323f0c6805b9fc1f880f7e780e5ecb659aec4c291f5cc04b3a085029fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_rubin, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_CLEAN=True, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.openshift.expose-services=, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, release=553, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:55:36 standalone.localdomain podman[67708]: 2025-10-13 13:55:36.386129582 +0000 UTC m=+0.125904708 container attach f04c5b323f0c6805b9fc1f880f7e780e5ecb659aec4c291f5cc04b3a085029fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_rubin, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, ceph=True, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Oct 13 13:55:36 standalone.localdomain podman[67708]: 2025-10-13 13:55:36.290121052 +0000 UTC m=+0.029896178 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:36 standalone.localdomain ceph-mon[29756]: pgmap v593: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail; 2.3 KiB/s wr, 7 op/s
Oct 13 13:55:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 13 13:55:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 13 13:55:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 13 13:55:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 13 13:55:36 standalone.localdomain ceph-mon[29756]: osdmap e27: 1 total, 1 up, 1 in
Oct 13 13:55:36 standalone.localdomain ceph-mon[29756]: 3.4 scrub starts
Oct 13 13:55:36 standalone.localdomain ceph-mon[29756]: 3.4 scrub ok
Oct 13 13:55:36 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2147382860' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Oct 13 13:55:36 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2147382860' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 13 13:55:36 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct 13 13:55:36 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct 13 13:55:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Oct 13 13:55:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/3851980873' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Oct 13 13:55:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/3851980873' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 13 13:55:36 standalone.localdomain systemd[1]: libpod-f04c5b323f0c6805b9fc1f880f7e780e5ecb659aec4c291f5cc04b3a085029fc.scope: Deactivated successfully.
Oct 13 13:55:36 standalone.localdomain podman[67708]: 2025-10-13 13:55:36.815418278 +0000 UTC m=+0.555193454 container died f04c5b323f0c6805b9fc1f880f7e780e5ecb659aec4c291f5cc04b3a085029fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_rubin, architecture=x86_64, name=rhceph, vcs-type=git, distribution-scope=public, release=553, ceph=True, version=7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 13:55:36 standalone.localdomain podman[67747]: 2025-10-13 13:55:36.884582333 +0000 UTC m=+0.063647450 container remove f04c5b323f0c6805b9fc1f880f7e780e5ecb659aec4c291f5cc04b3a085029fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=competent_rubin, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph)
Oct 13 13:55:36 standalone.localdomain systemd[1]: libpod-conmon-f04c5b323f0c6805b9fc1f880f7e780e5ecb659aec4c291f5cc04b3a085029fc.scope: Deactivated successfully.
Oct 13 13:55:37 standalone.localdomain systemd[1]: tmp-crun.RkNNM4.mount: Deactivated successfully.
Oct 13 13:55:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-53f0d2c9ec7f0ccf6849b7c2fbe7b0a477e4ec0f63d959444f1bbb610e593668-merged.mount: Deactivated successfully.
Oct 13 13:55:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v595: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail; 2.0 KiB/s wr, 6 op/s
Oct 13 13:55:37 standalone.localdomain ceph-mon[29756]: 2.3 scrub starts
Oct 13 13:55:37 standalone.localdomain ceph-mon[29756]: 2.3 scrub ok
Oct 13 13:55:37 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3851980873' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Oct 13 13:55:37 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3851980873' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 13 13:55:37 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct 13 13:55:37 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct 13 13:55:38 standalone.localdomain python3[67784]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:38 standalone.localdomain podman[67785]: 
Oct 13 13:55:38 standalone.localdomain podman[67785]: 2025-10-13 13:55:38.375781681 +0000 UTC m=+0.078495086 container create e483a7c599907a2161a5888d821e6a66f5da1c92e59af2921c64a0f15ea8d5ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_elion, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, architecture=x86_64, vcs-type=git, description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, distribution-scope=public, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, version=7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 13:55:38 standalone.localdomain systemd[1]: Started libpod-conmon-e483a7c599907a2161a5888d821e6a66f5da1c92e59af2921c64a0f15ea8d5ce.scope.
Oct 13 13:55:38 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:38 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd4529b377f212744cc28d64b398e381bb36a09b2f3f3a6a4a011e8571011cf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:38 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd4529b377f212744cc28d64b398e381bb36a09b2f3f3a6a4a011e8571011cf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:38 standalone.localdomain ceph-mon[29756]: pgmap v595: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail; 2.0 KiB/s wr, 6 op/s
Oct 13 13:55:38 standalone.localdomain ceph-mon[29756]: 3.5 scrub starts
Oct 13 13:55:38 standalone.localdomain ceph-mon[29756]: 3.5 scrub ok
Oct 13 13:55:38 standalone.localdomain podman[67785]: 2025-10-13 13:55:38.441574455 +0000 UTC m=+0.144287860 container init e483a7c599907a2161a5888d821e6a66f5da1c92e59af2921c64a0f15ea8d5ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_elion, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, build-date=2025-09-24T08:57:55, name=rhceph, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=rhceph-container, release=553)
Oct 13 13:55:38 standalone.localdomain podman[67785]: 2025-10-13 13:55:38.343630487 +0000 UTC m=+0.046343922 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:38 standalone.localdomain podman[67785]: 2025-10-13 13:55:38.456718058 +0000 UTC m=+0.159431463 container start e483a7c599907a2161a5888d821e6a66f5da1c92e59af2921c64a0f15ea8d5ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_elion, RELEASE=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, name=rhceph, distribution-scope=public, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.buildah.version=1.33.12, release=553, version=7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:55:38 standalone.localdomain podman[67785]: 2025-10-13 13:55:38.456989276 +0000 UTC m=+0.159702681 container attach e483a7c599907a2161a5888d821e6a66f5da1c92e59af2921c64a0f15ea8d5ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_elion, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, distribution-scope=public, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:55:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 13 13:55:38 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3784425065' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 13:55:38 standalone.localdomain keen_elion[67799]: 
Oct 13 13:55:38 standalone.localdomain keen_elion[67799]: {"epoch":1,"fsid":"627e7f45-65aa-56de-94df-66eaee84a56e","modified":"2025-10-13T13:36:01.554952Z","created":"2025-10-13T13:36:01.554952Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"standalone","public_addrs":{"addrvec":[{"type":"v2","addr":"172.18.0.100:3300","nonce":0},{"type":"v1","addr":"172.18.0.100:6789","nonce":0}]},"addr":"172.18.0.100:6789/0","public_addr":"172.18.0.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Oct 13 13:55:38 standalone.localdomain keen_elion[67799]: dumped monmap epoch 1
Oct 13 13:55:38 standalone.localdomain systemd[1]: libpod-e483a7c599907a2161a5888d821e6a66f5da1c92e59af2921c64a0f15ea8d5ce.scope: Deactivated successfully.
Oct 13 13:55:38 standalone.localdomain podman[67785]: 2025-10-13 13:55:38.909710526 +0000 UTC m=+0.612423951 container died e483a7c599907a2161a5888d821e6a66f5da1c92e59af2921c64a0f15ea8d5ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_elion, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, version=7, GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, release=553, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 13:55:38 standalone.localdomain podman[67824]: 2025-10-13 13:55:38.980807568 +0000 UTC m=+0.065394102 container remove e483a7c599907a2161a5888d821e6a66f5da1c92e59af2921c64a0f15ea8d5ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_elion, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=553, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, io.buildah.version=1.33.12, vcs-type=git)
Oct 13 13:55:38 standalone.localdomain systemd[1]: libpod-conmon-e483a7c599907a2161a5888d821e6a66f5da1c92e59af2921c64a0f15ea8d5ce.scope: Deactivated successfully.
Oct 13 13:55:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:55:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v596: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail; 1.7 KiB/s wr, 5 op/s
Oct 13 13:55:39 standalone.localdomain systemd[1]: tmp-crun.ae4g59.mount: Deactivated successfully.
Oct 13 13:55:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fdd4529b377f212744cc28d64b398e381bb36a09b2f3f3a6a4a011e8571011cf-merged.mount: Deactivated successfully.
Oct 13 13:55:39 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3784425065' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 13:55:39 standalone.localdomain python3[67849]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:39 standalone.localdomain podman[67850]: 
Oct 13 13:55:39 standalone.localdomain podman[67850]: 2025-10-13 13:55:39.853594478 +0000 UTC m=+0.070141825 container create 851b91a3b0c7d54a328851a93010aa10e7eceddeabc69db1050ed2cdf234a0bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_hellman, ceph=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=553, vcs-type=git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main)
Oct 13 13:55:39 standalone.localdomain systemd[1]: Started libpod-conmon-851b91a3b0c7d54a328851a93010aa10e7eceddeabc69db1050ed2cdf234a0bf.scope.
Oct 13 13:55:39 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:39 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3f5830190d29bfdf1bd1c97f8dfd12a0609d14de6317b4bdecfd86281d4cbe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:39 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3f5830190d29bfdf1bd1c97f8dfd12a0609d14de6317b4bdecfd86281d4cbe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:39 standalone.localdomain podman[67850]: 2025-10-13 13:55:39.912908517 +0000 UTC m=+0.129455874 container init 851b91a3b0c7d54a328851a93010aa10e7eceddeabc69db1050ed2cdf234a0bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_hellman, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, ceph=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., name=rhceph, version=7, distribution-scope=public, vcs-type=git, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main)
Oct 13 13:55:39 standalone.localdomain podman[67850]: 2025-10-13 13:55:39.819636709 +0000 UTC m=+0.036184126 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:39 standalone.localdomain podman[67850]: 2025-10-13 13:55:39.925519524 +0000 UTC m=+0.142066871 container start 851b91a3b0c7d54a328851a93010aa10e7eceddeabc69db1050ed2cdf234a0bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_hellman, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, build-date=2025-09-24T08:57:55, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, release=553, distribution-scope=public)
Oct 13 13:55:39 standalone.localdomain podman[67850]: 2025-10-13 13:55:39.926050981 +0000 UTC m=+0.142598368 container attach 851b91a3b0c7d54a328851a93010aa10e7eceddeabc69db1050ed2cdf234a0bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_hellman, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, release=553, com.redhat.component=rhceph-container, version=7, distribution-scope=public, name=rhceph, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:55:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Oct 13 13:55:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/1201434590' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Oct 13 13:55:40 standalone.localdomain vigorous_hellman[67863]: [client.openstack]
Oct 13 13:55:40 standalone.localdomain vigorous_hellman[67863]:         key = AQCfAO1oAAAAABAAiB3Q1b3KtZcxj3AF45GhgA==
Oct 13 13:55:40 standalone.localdomain vigorous_hellman[67863]:         caps mgr = "allow *"
Oct 13 13:55:40 standalone.localdomain vigorous_hellman[67863]:         caps mon = "profile rbd"
Oct 13 13:55:40 standalone.localdomain vigorous_hellman[67863]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images, profile rbd pool=backups"
Oct 13 13:55:40 standalone.localdomain systemd[1]: libpod-851b91a3b0c7d54a328851a93010aa10e7eceddeabc69db1050ed2cdf234a0bf.scope: Deactivated successfully.
Oct 13 13:55:40 standalone.localdomain podman[67850]: 2025-10-13 13:55:40.362136431 +0000 UTC m=+0.578683828 container died 851b91a3b0c7d54a328851a93010aa10e7eceddeabc69db1050ed2cdf234a0bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_hellman, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, vcs-type=git, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:55:40 standalone.localdomain systemd[1]: tmp-crun.zCkuak.mount: Deactivated successfully.
Oct 13 13:55:40 standalone.localdomain systemd[1]: tmp-crun.FjZl7g.mount: Deactivated successfully.
Oct 13 13:55:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7f3f5830190d29bfdf1bd1c97f8dfd12a0609d14de6317b4bdecfd86281d4cbe-merged.mount: Deactivated successfully.
Oct 13 13:55:40 standalone.localdomain ceph-mon[29756]: pgmap v596: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail; 1.7 KiB/s wr, 5 op/s
Oct 13 13:55:40 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1201434590' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Oct 13 13:55:40 standalone.localdomain podman[67890]: 2025-10-13 13:55:40.460421589 +0000 UTC m=+0.088144995 container remove 851b91a3b0c7d54a328851a93010aa10e7eceddeabc69db1050ed2cdf234a0bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_hellman, vcs-type=git, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, RELEASE=main, build-date=2025-09-24T08:57:55, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12)
Oct 13 13:55:40 standalone.localdomain systemd[1]: libpod-conmon-851b91a3b0c7d54a328851a93010aa10e7eceddeabc69db1050ed2cdf234a0bf.scope: Deactivated successfully.
Oct 13 13:55:40 standalone.localdomain python3[67910]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.manila _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:40 standalone.localdomain podman[67911]: 
Oct 13 13:55:40 standalone.localdomain podman[67911]: 2025-10-13 13:55:40.930159579 +0000 UTC m=+0.083253779 container create bc471bd067d796782527583626211bdd4cb8e6db777590615677845f2d4e6ffd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_clarke, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, name=rhceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, ceph=True, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553)
Oct 13 13:55:40 standalone.localdomain systemd[1]: Started libpod-conmon-bc471bd067d796782527583626211bdd4cb8e6db777590615677845f2d4e6ffd.scope.
Oct 13 13:55:40 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:40 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87a8a68103e5cc5c341f54487e933e2ba9a4c2d9390700152448f961c7b74d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:40 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87a8a68103e5cc5c341f54487e933e2ba9a4c2d9390700152448f961c7b74d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:40 standalone.localdomain podman[67911]: 2025-10-13 13:55:40.985628603 +0000 UTC m=+0.138722793 container init bc471bd067d796782527583626211bdd4cb8e6db777590615677845f2d4e6ffd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_clarke, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_BRANCH=main, release=553, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-type=git, ceph=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc.)
Oct 13 13:55:40 standalone.localdomain podman[67911]: 2025-10-13 13:55:40.995381205 +0000 UTC m=+0.148475395 container start bc471bd067d796782527583626211bdd4cb8e6db777590615677845f2d4e6ffd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_clarke, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, release=553, GIT_BRANCH=main, name=rhceph, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, RELEASE=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:55:40 standalone.localdomain podman[67911]: 2025-10-13 13:55:40.995606301 +0000 UTC m=+0.148700491 container attach bc471bd067d796782527583626211bdd4cb8e6db777590615677845f2d4e6ffd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_clarke, vcs-type=git, distribution-scope=public, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, release=553, ceph=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_CLEAN=True, RELEASE=main)
Oct 13 13:55:40 standalone.localdomain podman[67911]: 2025-10-13 13:55:40.896905241 +0000 UTC m=+0.049999441 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v597: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail; 1.4 KiB/s wr, 4 op/s
Oct 13 13:55:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.manila"} v 0)
Oct 13 13:55:41 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/2359896826' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.manila"} : dispatch
Oct 13 13:55:41 standalone.localdomain intelligent_clarke[67924]: [client.manila]
Oct 13 13:55:41 standalone.localdomain intelligent_clarke[67924]:         key = AQCfAO1oAAAAABAALGtW3Z0UHpT9m5DdPzse3w==
Oct 13 13:55:41 standalone.localdomain intelligent_clarke[67924]:         caps mgr = "allow rw"
Oct 13 13:55:41 standalone.localdomain intelligent_clarke[67924]:         caps mon = "allow r"
Oct 13 13:55:41 standalone.localdomain intelligent_clarke[67924]:         caps osd = "allow rw pool manila_data"
Oct 13 13:55:41 standalone.localdomain systemd[1]: libpod-bc471bd067d796782527583626211bdd4cb8e6db777590615677845f2d4e6ffd.scope: Deactivated successfully.
Oct 13 13:55:41 standalone.localdomain podman[67911]: 2025-10-13 13:55:41.456168616 +0000 UTC m=+0.609262836 container died bc471bd067d796782527583626211bdd4cb8e6db777590615677845f2d4e6ffd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_clarke, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, ceph=True, architecture=x86_64, RELEASE=main, io.buildah.version=1.33.12, release=553, io.openshift.expose-services=)
Oct 13 13:55:41 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2359896826' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.manila"} : dispatch
Oct 13 13:55:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c87a8a68103e5cc5c341f54487e933e2ba9a4c2d9390700152448f961c7b74d4-merged.mount: Deactivated successfully.
Oct 13 13:55:41 standalone.localdomain podman[67949]: 2025-10-13 13:55:41.5306512 +0000 UTC m=+0.069546956 container remove bc471bd067d796782527583626211bdd4cb8e6db777590615677845f2d4e6ffd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_clarke, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, ceph=True, io.openshift.tags=rhceph ceph, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, name=rhceph, GIT_CLEAN=True, distribution-scope=public)
Oct 13 13:55:41 standalone.localdomain systemd[1]: libpod-conmon-bc471bd067d796782527583626211bdd4cb8e6db777590615677845f2d4e6ffd.scope: Deactivated successfully.
Oct 13 13:55:41 standalone.localdomain python3[67968]: ansible-file Invoked with path=/root/overcloud-deploy/overcloud state=directory recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:42 standalone.localdomain python3[67980]: ansible-ansible.legacy.stat Invoked with path=/root/overcloud-deploy/overcloud/ceph_client.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:55:42 standalone.localdomain python3[67984]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363741.9741604-67970-207447399020042/source dest=/root/overcloud-deploy/overcloud/ceph_client.yml mode=420 force=True follow=False _original_basename=ceph_client.yaml.j2 checksum=17bf905c161e23c965e1b1c333db722115c2a0ed backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:42 standalone.localdomain ceph-mon[29756]: pgmap v597: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail; 1.4 KiB/s wr, 4 op/s
Oct 13 13:55:42 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.6 deep-scrub starts
Oct 13 13:55:42 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.6 deep-scrub ok
Oct 13 13:55:42 standalone.localdomain ansible-async_wrapper.py[68004]: Invoked with 445644255357 30 /root/.ansible/tmp/ansible-tmp-1760363742.7509906-67992-86311331645961/AnsiballZ_command.py _
Oct 13 13:55:42 standalone.localdomain ansible-async_wrapper.py[68007]: Starting module and watcher
Oct 13 13:55:42 standalone.localdomain ansible-async_wrapper.py[68007]: Start watching 68008 (30)
Oct 13 13:55:42 standalone.localdomain ansible-async_wrapper.py[68008]: Start module (68008)
Oct 13 13:55:42 standalone.localdomain ansible-async_wrapper.py[68004]: Return async_wrapper task started.
Oct 13 13:55:43 standalone.localdomain python3[68009]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:43 standalone.localdomain podman[68010]: 
Oct 13 13:55:43 standalone.localdomain podman[68010]: 2025-10-13 13:55:43.204137906 +0000 UTC m=+0.081458834 container create b7eafc5c1772d0a4832cd59067f543caa22593ab3b494318b3a8403a8746d91b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_gagarin, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_BRANCH=main, distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_CLEAN=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=rhceph-container, RELEASE=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 13:55:43 standalone.localdomain systemd[1]: Started libpod-conmon-b7eafc5c1772d0a4832cd59067f543caa22593ab3b494318b3a8403a8746d91b.scope.
Oct 13 13:55:43 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:43 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f8df2485a4b6aa0fc3f617d290cc52307a1fc60784254f551bd7af7a4fbe42f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:43 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f8df2485a4b6aa0fc3f617d290cc52307a1fc60784254f551bd7af7a4fbe42f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:43 standalone.localdomain podman[68010]: 2025-10-13 13:55:43.172645841 +0000 UTC m=+0.049966789 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:43 standalone.localdomain podman[68010]: 2025-10-13 13:55:43.274014312 +0000 UTC m=+0.151335230 container init b7eafc5c1772d0a4832cd59067f543caa22593ab3b494318b3a8403a8746d91b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_gagarin, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, ceph=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:55:43 standalone.localdomain podman[68010]: 2025-10-13 13:55:43.284143235 +0000 UTC m=+0.161464153 container start b7eafc5c1772d0a4832cd59067f543caa22593ab3b494318b3a8403a8746d91b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_gagarin, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.expose-services=, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, ceph=True, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vendor=Red Hat, Inc., GIT_BRANCH=main, architecture=x86_64)
Oct 13 13:55:43 standalone.localdomain podman[68010]: 2025-10-13 13:55:43.285680022 +0000 UTC m=+0.163000950 container attach b7eafc5c1772d0a4832cd59067f543caa22593ab3b494318b3a8403a8746d91b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_gagarin, architecture=x86_64, com.redhat.component=rhceph-container, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, name=rhceph, io.openshift.tags=rhceph ceph, ceph=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, distribution-scope=public)
Oct 13 13:55:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v598: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail; 1.4 KiB/s wr, 4 op/s
Oct 13 13:55:43 standalone.localdomain ceph-mon[29756]: 3.6 deep-scrub starts
Oct 13 13:55:43 standalone.localdomain ceph-mon[29756]: 3.6 deep-scrub ok
Oct 13 13:55:43 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14252 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 13 13:55:43 standalone.localdomain gracious_gagarin[68024]: 
Oct 13 13:55:43 standalone.localdomain gracious_gagarin[68024]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 13 13:55:43 standalone.localdomain systemd[1]: libpod-b7eafc5c1772d0a4832cd59067f543caa22593ab3b494318b3a8403a8746d91b.scope: Deactivated successfully.
Oct 13 13:55:43 standalone.localdomain podman[68010]: 2025-10-13 13:55:43.659296248 +0000 UTC m=+0.536617256 container died b7eafc5c1772d0a4832cd59067f543caa22593ab3b494318b3a8403a8746d91b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_gagarin, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.openshift.expose-services=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, distribution-scope=public, architecture=x86_64, ceph=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=553)
Oct 13 13:55:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0f8df2485a4b6aa0fc3f617d290cc52307a1fc60784254f551bd7af7a4fbe42f-merged.mount: Deactivated successfully.
Oct 13 13:55:43 standalone.localdomain podman[68049]: 2025-10-13 13:55:43.752307598 +0000 UTC m=+0.082064863 container remove b7eafc5c1772d0a4832cd59067f543caa22593ab3b494318b3a8403a8746d91b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_gagarin, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, version=7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container)
Oct 13 13:55:43 standalone.localdomain systemd[1]: libpod-conmon-b7eafc5c1772d0a4832cd59067f543caa22593ab3b494318b3a8403a8746d91b.scope: Deactivated successfully.
Oct 13 13:55:43 standalone.localdomain ansible-async_wrapper.py[68008]: Module complete (68008)
Oct 13 13:55:44 standalone.localdomain python3[68064]: ansible-ansible.legacy.async_status Invoked with jid=445644255357.68004 mode=status _async_dir=/tmp/.ansible_async
Oct 13 13:55:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:55:44 standalone.localdomain python3[68066]: ansible-ansible.legacy.async_status Invoked with jid=445644255357.68004 mode=cleanup _async_dir=/tmp/.ansible_async
Oct 13 13:55:44 standalone.localdomain ceph-mon[29756]: pgmap v598: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail; 1.4 KiB/s wr, 4 op/s
Oct 13 13:55:44 standalone.localdomain python3[68082]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:45 standalone.localdomain podman[68083]: 
Oct 13 13:55:45 standalone.localdomain podman[68083]: 2025-10-13 13:55:45.07059978 +0000 UTC m=+0.113926719 container create aaa21e44253bd7fa28b3dfacc3d0392f9c5460f12f9b672276c2da49b533b3c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_neumann, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, ceph=True, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc.)
Oct 13 13:55:45 standalone.localdomain podman[68083]: 2025-10-13 13:55:44.993453915 +0000 UTC m=+0.036780904 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:45 standalone.localdomain systemd[1]: Started libpod-conmon-aaa21e44253bd7fa28b3dfacc3d0392f9c5460f12f9b672276c2da49b533b3c5.scope.
Oct 13 13:55:45 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:45 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2c94ceb1567d8721380610c9744c8a3f1a5dc7b95ec384db747fdef8155ae5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:45 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2c94ceb1567d8721380610c9744c8a3f1a5dc7b95ec384db747fdef8155ae5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:45 standalone.localdomain podman[68083]: 2025-10-13 13:55:45.147443554 +0000 UTC m=+0.190770533 container init aaa21e44253bd7fa28b3dfacc3d0392f9c5460f12f9b672276c2da49b533b3c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_neumann, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vcs-type=git, GIT_BRANCH=main, distribution-scope=public, ceph=True, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:55:45 standalone.localdomain podman[68083]: 2025-10-13 13:55:45.155923569 +0000 UTC m=+0.199250498 container start aaa21e44253bd7fa28b3dfacc3d0392f9c5460f12f9b672276c2da49b533b3c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_neumann, release=553, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, RELEASE=main, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc.)
Oct 13 13:55:45 standalone.localdomain podman[68083]: 2025-10-13 13:55:45.156119926 +0000 UTC m=+0.199446925 container attach aaa21e44253bd7fa28b3dfacc3d0392f9c5460f12f9b672276c2da49b533b3c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_neumann, name=rhceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., release=553, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, architecture=x86_64, RELEASE=main, version=7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_BRANCH=main)
Oct 13 13:55:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v599: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:45 standalone.localdomain ceph-mon[29756]: from='client.14252 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 13 13:55:45 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14254 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 13 13:55:45 standalone.localdomain thirsty_neumann[68098]: 
Oct 13 13:55:45 standalone.localdomain thirsty_neumann[68098]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 13 13:55:45 standalone.localdomain systemd[1]: libpod-aaa21e44253bd7fa28b3dfacc3d0392f9c5460f12f9b672276c2da49b533b3c5.scope: Deactivated successfully.
Oct 13 13:55:45 standalone.localdomain podman[68083]: 2025-10-13 13:55:45.52469652 +0000 UTC m=+0.568023479 container died aaa21e44253bd7fa28b3dfacc3d0392f9c5460f12f9b672276c2da49b533b3c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_neumann, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, architecture=x86_64, description=Red Hat Ceph Storage 7)
Oct 13 13:55:45 standalone.localdomain podman[68123]: 2025-10-13 13:55:45.582644769 +0000 UTC m=+0.052460345 container remove aaa21e44253bd7fa28b3dfacc3d0392f9c5460f12f9b672276c2da49b533b3c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_neumann, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, ceph=True, release=553, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., version=7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12)
Oct 13 13:55:45 standalone.localdomain systemd[1]: libpod-conmon-aaa21e44253bd7fa28b3dfacc3d0392f9c5460f12f9b672276c2da49b533b3c5.scope: Deactivated successfully.
Oct 13 13:55:45 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct 13 13:55:45 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct 13 13:55:45 standalone.localdomain python3[68141]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:45 standalone.localdomain podman[68142]: 
Oct 13 13:55:45 standalone.localdomain podman[68142]: 2025-10-13 13:55:45.948724349 +0000 UTC m=+0.069469104 container create f283ea2f217426cf6a831fe03f171702d5af38def459517f8648d3403982a31a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_kapitsa, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, RELEASE=main, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=, io.buildah.version=1.33.12, distribution-scope=public, ceph=True, architecture=x86_64, name=rhceph, CEPH_POINT_RELEASE=)
Oct 13 13:55:45 standalone.localdomain systemd[1]: Started libpod-conmon-f283ea2f217426cf6a831fe03f171702d5af38def459517f8648d3403982a31a.scope.
Oct 13 13:55:45 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:45 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a76525e178e9d4be01c4f2b46982453abc61a348a72ac501223447890415f2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:45 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63a76525e178e9d4be01c4f2b46982453abc61a348a72ac501223447890415f2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:45 standalone.localdomain podman[68142]: 2025-10-13 13:55:45.997017948 +0000 UTC m=+0.117762703 container init f283ea2f217426cf6a831fe03f171702d5af38def459517f8648d3403982a31a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_kapitsa, GIT_CLEAN=True, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 13:55:46 standalone.localdomain podman[68142]: 2025-10-13 13:55:46.002102741 +0000 UTC m=+0.122847496 container start f283ea2f217426cf6a831fe03f171702d5af38def459517f8648d3403982a31a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_kapitsa, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, distribution-scope=public, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., release=553, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 13:55:46 standalone.localdomain podman[68142]: 2025-10-13 13:55:46.002320087 +0000 UTC m=+0.123064842 container attach f283ea2f217426cf6a831fe03f171702d5af38def459517f8648d3403982a31a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_kapitsa, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, distribution-scope=public, ceph=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:55:46 standalone.localdomain podman[68142]: 2025-10-13 13:55:45.9247644 +0000 UTC m=+0.045509185 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4f2c94ceb1567d8721380610c9744c8a3f1a5dc7b95ec384db747fdef8155ae5-merged.mount: Deactivated successfully.
Oct 13 13:55:46 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: service_type: crash
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: service_name: crash
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: placement:
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]:   host_pattern: '*'
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: ---
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: service_type: mds
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: service_id: mds
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: service_name: mds.mds
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: placement:
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]:   hosts:
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]:   - standalone.localdomain
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: ---
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: service_type: mgr
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: service_name: mgr
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: placement:
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]:   hosts:
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]:   - standalone.localdomain
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: ---
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: service_type: mon
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: service_name: mon
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: placement:
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]:   hosts:
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]:   - standalone.localdomain
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: ---
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: service_type: node-proxy
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: service_name: node-proxy
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: placement:
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]:   host_pattern: '*'
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: ---
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: service_type: osd
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: service_id: default_drive_group
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: service_name: osd.default_drive_group
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: placement:
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]:   hosts:
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]:   - standalone.localdomain
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]: spec:
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]:   data_devices:
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]:     paths:
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]:     - /dev/vg2/data-lv2
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]:   filter_logic: AND
Oct 13 13:55:46 standalone.localdomain beautiful_kapitsa[68156]:   objectstore: bluestore
Oct 13 13:55:46 standalone.localdomain systemd[1]: libpod-f283ea2f217426cf6a831fe03f171702d5af38def459517f8648d3403982a31a.scope: Deactivated successfully.
Oct 13 13:55:46 standalone.localdomain podman[68142]: 2025-10-13 13:55:46.36414419 +0000 UTC m=+0.484888945 container died f283ea2f217426cf6a831fe03f171702d5af38def459517f8648d3403982a31a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_kapitsa, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, ceph=True, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 13:55:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-63a76525e178e9d4be01c4f2b46982453abc61a348a72ac501223447890415f2-merged.mount: Deactivated successfully.
Oct 13 13:55:46 standalone.localdomain podman[68182]: 2025-10-13 13:55:46.461134028 +0000 UTC m=+0.083998380 container remove f283ea2f217426cf6a831fe03f171702d5af38def459517f8648d3403982a31a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_kapitsa, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, release=553, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:55:46 standalone.localdomain systemd[1]: libpod-conmon-f283ea2f217426cf6a831fe03f171702d5af38def459517f8648d3403982a31a.scope: Deactivated successfully.
Oct 13 13:55:46 standalone.localdomain ceph-mon[29756]: pgmap v599: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:46 standalone.localdomain ceph-mon[29756]: from='client.14254 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 13 13:55:46 standalone.localdomain ceph-mon[29756]: 2.4 scrub starts
Oct 13 13:55:46 standalone.localdomain ceph-mon[29756]: 2.4 scrub ok
Oct 13 13:55:46 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct 13 13:55:46 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct 13 13:55:46 standalone.localdomain python3[68201]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph registry.redhat.io/rhceph/rhceph-7-rhel9:latest --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:46 standalone.localdomain podman[68202]: 
Oct 13 13:55:46 standalone.localdomain podman[68202]: 2025-10-13 13:55:46.869219529 +0000 UTC m=+0.068199706 container create 98ef8edd2b91dbf50c3bf8117416292af492ce0c35c98c15446120f9038bf573 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bassi, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.)
Oct 13 13:55:46 standalone.localdomain systemd[1]: Started libpod-conmon-98ef8edd2b91dbf50c3bf8117416292af492ce0c35c98c15446120f9038bf573.scope.
Oct 13 13:55:46 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:55:46 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6ef353a5e65036ab517755c711bb212b6b33b3cebb9bc6a53db334c4ee7dd2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:46 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6ef353a5e65036ab517755c711bb212b6b33b3cebb9bc6a53db334c4ee7dd2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 13:55:46 standalone.localdomain podman[68202]: 2025-10-13 13:55:46.92125039 +0000 UTC m=+0.120230587 container init 98ef8edd2b91dbf50c3bf8117416292af492ce0c35c98c15446120f9038bf573 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bassi, GIT_CLEAN=True, version=7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=553, architecture=x86_64, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container)
Oct 13 13:55:46 standalone.localdomain podman[68202]: 2025-10-13 13:55:46.928429015 +0000 UTC m=+0.127409232 container start 98ef8edd2b91dbf50c3bf8117416292af492ce0c35c98c15446120f9038bf573 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bassi, release=553, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, version=7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 13:55:46 standalone.localdomain podman[68202]: 2025-10-13 13:55:46.928711993 +0000 UTC m=+0.127692220 container attach 98ef8edd2b91dbf50c3bf8117416292af492ce0c35c98c15446120f9038bf573 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bassi, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, build-date=2025-09-24T08:57:55, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, name=rhceph, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 13:55:46 standalone.localdomain podman[68202]: 2025-10-13 13:55:46.838188108 +0000 UTC m=+0.037168365 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 13:55:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Oct 13 13:55:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3614795384' entity='client.admin' cmd={"prefix": "status"} : dispatch
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:   cluster:
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:     id:     627e7f45-65aa-56de-94df-66eaee84a56e
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:     health: HEALTH_OK
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:  
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:   services:
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:     mon: 1 daemons, quorum standalone (age 19m)
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:     mgr: standalone.ectizd(active, since 19m)
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:     mds: 1/1 daemons up
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:     osd: 1 osds: 1 up (since 18m), 1 in (since 18m)
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:  
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:   data:
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:     volumes: 1/1 healthy
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:     pools:   7 pools, 131 pgs
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:     objects: 24 objects, 451 KiB
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:     usage:   27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:     pgs:     131 active+clean
Oct 13 13:55:47 standalone.localdomain dazzling_bassi[68216]:  
Oct 13 13:55:47 standalone.localdomain systemd[1]: libpod-98ef8edd2b91dbf50c3bf8117416292af492ce0c35c98c15446120f9038bf573.scope: Deactivated successfully.
Oct 13 13:55:47 standalone.localdomain podman[68202]: 2025-10-13 13:55:47.292805564 +0000 UTC m=+0.491785781 container died 98ef8edd2b91dbf50c3bf8117416292af492ce0c35c98c15446120f9038bf573 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bassi, CEPH_POINT_RELEASE=, version=7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, release=553, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Oct 13 13:55:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ec6ef353a5e65036ab517755c711bb212b6b33b3cebb9bc6a53db334c4ee7dd2-merged.mount: Deactivated successfully.
Oct 13 13:55:47 standalone.localdomain podman[68241]: 2025-10-13 13:55:47.353258428 +0000 UTC m=+0.051594369 container remove 98ef8edd2b91dbf50c3bf8117416292af492ce0c35c98c15446120f9038bf573 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_bassi, distribution-scope=public, com.redhat.component=rhceph-container, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=)
Oct 13 13:55:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v600: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:47 standalone.localdomain systemd[1]: libpod-conmon-98ef8edd2b91dbf50c3bf8117416292af492ce0c35c98c15446120f9038bf573.scope: Deactivated successfully.
Oct 13 13:55:47 standalone.localdomain ceph-mon[29756]: from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 13 13:55:47 standalone.localdomain ceph-mon[29756]: 3.7 scrub starts
Oct 13 13:55:47 standalone.localdomain ceph-mon[29756]: 3.7 scrub ok
Oct 13 13:55:47 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3614795384' entity='client.admin' cmd={"prefix": "status"} : dispatch
Oct 13 13:55:47 standalone.localdomain ansible-async_wrapper.py[68007]: Done in kid B.
Oct 13 13:55:48 standalone.localdomain python3[68265]: ansible-slurp Invoked with src=/etc/ceph/ceph.client.admin.keyring
Oct 13 13:55:48 standalone.localdomain python3[68269]: ansible-file Invoked with path=/etc/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:48 standalone.localdomain ceph-mon[29756]: pgmap v600: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:48 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct 13 13:55:48 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct 13 13:55:48 standalone.localdomain python3[68275]: ansible-sefcontext Invoked with seuser=system_u target=/etc/ceph/ceph.conf setype=etc_t state=present ignore_selinux_state=False ftype=a reload=True selevel=None
Oct 13 13:55:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:55:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v601: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:49 standalone.localdomain python3[68278]: ansible-sefcontext Invoked with seuser=system_u target=/etc/ceph/ceph.client.admin.keyring setype=etc_t state=present ignore_selinux_state=False ftype=a reload=True selevel=None
Oct 13 13:55:49 standalone.localdomain python3[68281]: ansible-ansible.legacy.command Invoked with _raw_params=restorecon -R -v /etc/ceph _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:50 standalone.localdomain ceph-mon[29756]: 2.5 scrub starts
Oct 13 13:55:50 standalone.localdomain ceph-mon[29756]: 2.5 scrub ok
Oct 13 13:55:50 standalone.localdomain ceph-mon[29756]: pgmap v601: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:50 standalone.localdomain python3[68291]: ansible-stat Invoked with path=/root/overcloud-deploy/overcloud/ceph_client.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:55:50 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Oct 13 13:55:50 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Oct 13 13:55:51 standalone.localdomain python3[68305]: ansible-file Invoked with state=absent path=/root/standalone-ansible-fa02z7re/ceph_client_fetch_dir/ recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v602: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:51 standalone.localdomain ceph-mon[29756]: 3.8 scrub starts
Oct 13 13:55:51 standalone.localdomain ceph-mon[29756]: 3.8 scrub ok
Oct 13 13:55:51 standalone.localdomain python3[68309]: ansible-file Invoked with path=/root/standalone-ansible-fa02z7re/ceph_client_fetch_dir state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:51 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct 13 13:55:51 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct 13 13:55:51 standalone.localdomain python3[68313]: ansible-stat Invoked with path=/root/overcloud-deploy/overcloud/ceph_client.yml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:55:52 standalone.localdomain python3[68334]: ansible-ansible.legacy.stat Invoked with path=/root/standalone-ansible-fa02z7re/ceph_client_fetch_dir/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:55:52 standalone.localdomain ceph-mon[29756]: pgmap v602: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:52 standalone.localdomain ceph-mon[29756]: 3.9 scrub starts
Oct 13 13:55:52 standalone.localdomain ceph-mon[29756]: 3.9 scrub ok
Oct 13 13:55:52 standalone.localdomain python3[68338]: ansible-ansible.legacy.copy Invoked with src=/tmp/ansible-root/ansible-tmp-1760363752.155741-68325-11112914170241/source dest=/root/standalone-ansible-fa02z7re/ceph_client_fetch_dir/ceph.client.openstack.keyring mode=384 force=True follow=False _original_basename=ceph_key.j2 checksum=e2b92839f87802a8b24ea6e6780e50ee4d6ed8f3 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:52 standalone.localdomain python3[68350]: ansible-ansible.legacy.stat Invoked with path=/root/standalone-ansible-fa02z7re/ceph_client_fetch_dir/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:55:53 standalone.localdomain python3[68354]: ansible-ansible.legacy.copy Invoked with src=/tmp/ansible-root/ansible-tmp-1760363752.6223547-68325-168578301332028/source dest=/root/standalone-ansible-fa02z7re/ceph_client_fetch_dir/ceph.client.manila.keyring mode=384 force=True follow=False _original_basename=ceph_key.j2 checksum=a6bbe08b4ca238c19d82de3bedd06393264e7b1f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:55:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:55:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:55:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:55:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:55:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:55:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v603: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:53 standalone.localdomain python3[68368]: ansible-ansible.legacy.stat Invoked with path=/root/standalone-ansible-fa02z7re/ceph_client_fetch_dir/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:55:53 standalone.localdomain python3[68372]: ansible-ansible.legacy.copy Invoked with src=/tmp/ansible-root/ansible-tmp-1760363753.2196722-68359-39708031034130/source dest=/root/standalone-ansible-fa02z7re/ceph_client_fetch_dir/ceph.conf group=root owner=root mode=420 force=True follow=False _original_basename=ceph_conf.j2 checksum=4761edcb9f264c48bfd4a33c10f622d5334a0538 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:55:54 standalone.localdomain ceph-mon[29756]: pgmap v603: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:54 standalone.localdomain python3[68386]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:54 standalone.localdomain python3[68389]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:54 standalone.localdomain python3[68391]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:55 standalone.localdomain python3[68402]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:55:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v604: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:55 standalone.localdomain python3[68406]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363755.0302398-68392-179638197794139/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=e2b92839f87802a8b24ea6e6780e50ee4d6ed8f3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:55 standalone.localdomain python3[68419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:55:55 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct 13 13:55:55 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct 13 13:55:55 standalone.localdomain python3[68423]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363755.4280274-68392-151980407999741/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=a6bbe08b4ca238c19d82de3bedd06393264e7b1f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:56 standalone.localdomain python3[68435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:55:56 standalone.localdomain python3[68439]: ansible-ansible.legacy.copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363755.8817215-68392-50037280579534/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=4761edcb9f264c48bfd4a33c10f622d5334a0538 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:56 standalone.localdomain ceph-mon[29756]: pgmap v604: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:56 standalone.localdomain ceph-mon[29756]: 2.6 scrub starts
Oct 13 13:55:56 standalone.localdomain ceph-mon[29756]: 2.6 scrub ok
Oct 13 13:55:56 standalone.localdomain python3[68446]: ansible-file Invoked with state=absent path=/root/standalone-ansible-fa02z7re/ceph_client_fetch_dir/ recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:57 standalone.localdomain python3[68469]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:55:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v605: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:57 standalone.localdomain python3[68475]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/root/.ansible/tmp/ansible-tmp-1760363757.193678-68459-26275131811945/source _original_basename=tmp6q_n85cc follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:57 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 13 13:55:57 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 13 13:55:58 standalone.localdomain python3[68491]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:55:58 standalone.localdomain python3[68495]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/root/.ansible/tmp/ansible-tmp-1760363757.8470106-68481-188069999336305/source _original_basename=tmp4ywyxva9 follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:55:58 standalone.localdomain ceph-mon[29756]: pgmap v605: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:58 standalone.localdomain ceph-mon[29756]: 3.a scrub starts
Oct 13 13:55:58 standalone.localdomain ceph-mon[29756]: 3.a scrub ok
Oct 13 13:55:58 standalone.localdomain python3[68501]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None
Oct 13 13:55:58 standalone.localdomain crontab[68502]: (root) LIST (root)
Oct 13 13:55:58 standalone.localdomain crontab[68503]: (root) REPLACE (root)
Oct 13 13:55:59 standalone.localdomain python3[68509]: ansible-ansible.legacy.command Invoked with _raw_params=pcs resource config "haproxy-bundle"
                                                        _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:55:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:55:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v606: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:55:59 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct 13 13:55:59 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct 13 13:55:59 standalone.localdomain python3[68515]: ansible-ansible.legacy.command Invoked with _raw_params=puppet apply  --detailed-exitcodes --summarize --color=false --modulepath '/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules' --tags 'pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation' -e 'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle'
                                                        _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:56:00 standalone.localdomain ceph-mon[29756]: pgmap v606: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:01 standalone.localdomain ceph-mon[29756]: 2.7 scrub starts
Oct 13 13:56:01 standalone.localdomain ceph-mon[29756]: 2.7 scrub ok
Oct 13 13:56:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v607: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:02 standalone.localdomain ceph-mon[29756]: pgmap v607: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v608: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:03 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct 13 13:56:03 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct 13 13:56:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 13:56:03 standalone.localdomain podman[68638]: 2025-10-13 13:56:03.815987323 +0000 UTC m=+0.085276800 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, vcs-type=git, container_name=memcached, summary=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, distribution-scope=public, com.redhat.component=openstack-memcached-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:43, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, name=rhosp17/openstack-memcached)
Oct 13 13:56:03 standalone.localdomain podman[68638]: 2025-10-13 13:56:03.860021507 +0000 UTC m=+0.129310944 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, summary=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, batch=17.1_20250721.1, config_id=tripleo_step1, name=rhosp17/openstack-memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, container_name=memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, release=1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T12:58:43, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git)
Oct 13 13:56:03 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 13:56:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:56:04 standalone.localdomain ceph-mon[29756]: pgmap v608: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:04 standalone.localdomain ceph-mon[29756]: 3.b scrub starts
Oct 13 13:56:04 standalone.localdomain ceph-mon[29756]: 3.b scrub ok
Oct 13 13:56:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v609: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:06 standalone.localdomain ceph-mon[29756]: pgmap v609: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:06 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct 13 13:56:06 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct 13 13:56:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 13:56:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 1200.0 total, 600.0 interval
                                                        Cumulative writes: 3344 writes, 14K keys, 3344 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s
                                                        Cumulative WAL: 3344 writes, 3344 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1616 writes, 7121 keys, 1616 commit groups, 1.0 writes per commit group, ingest: 2.90 MB, 0.00 MB/s
                                                        Interval WAL: 1616 writes, 1616 syncs, 1.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     84.9      0.05              0.02         6    0.008       0      0       0.0       0.0
                                                          L6      1/0    2.30 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7    156.3    134.6      0.08              0.04         5    0.016     12K   2285       0.0       0.0
                                                         Sum      1/0    2.30 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.7     98.5    116.2      0.13              0.07        11    0.012     12K   2285       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.2    108.6    114.4      0.07              0.04         6    0.012    8172   1492       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    156.3    134.6      0.08              0.04         5    0.016     12K   2285       0.0       0.0
                                                        High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     89.3      0.05              0.02         5    0.009       0      0       0.0       0.0
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 1200.0 total, 600.0 interval
                                                        Flush(GB): cumulative 0.004, interval 0.002
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.01 GB write, 0.01 MB/s write, 0.01 GB read, 0.01 MB/s read, 0.1 seconds
                                                        Interval compaction: 0.01 GB write, 0.01 MB/s write, 0.01 GB read, 0.01 MB/s read, 0.1 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x557b7fb5b350#2 capacity: 308.00 MB usage: 1.42 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.00069 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(84,1.32 MB,0.429104%) FilterBlock(12,40.73 KB,0.0129155%) IndexBlock(12,55.80 KB,0.0176913%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
Oct 13 13:56:07 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=167.94.138.135 DST=38.102.83.151 LEN=60 TOS=0x08 PREC=0x40 TTL=52 ID=15783 PROTO=TCP SPT=47815 DPT=9090 SEQ=4103467486 ACK=0 WINDOW=42340 RES=0x00 SYN URGP=0 OPT (020405B40402080A68DC1AAD000000000103030A) 
Oct 13 13:56:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v610: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:07 standalone.localdomain ceph-mon[29756]: 2.8 scrub starts
Oct 13 13:56:07 standalone.localdomain ceph-mon[29756]: 2.8 scrub ok
Oct 13 13:56:08 standalone.localdomain ceph-mon[29756]: pgmap v610: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:08 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct 13 13:56:08 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct 13 13:56:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:56:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v611: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:10 standalone.localdomain ceph-mon[29756]: 3.c scrub starts
Oct 13 13:56:10 standalone.localdomain ceph-mon[29756]: 3.c scrub ok
Oct 13 13:56:10 standalone.localdomain ceph-mon[29756]: pgmap v611: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:11 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:56:11 standalone.localdomain pacemaker-schedulerd[57910]:  notice: No fencing will be done until there are resources to manage
Oct 13 13:56:11 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 2, saving inputs in /var/lib/pacemaker/pengine/pe-input-2.bz2
Oct 13 13:56:11 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 2 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-2.bz2): Complete
Oct 13 13:56:11 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:56:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v612: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:11 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct 13 13:56:11 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct 13 13:56:12 standalone.localdomain ceph-mon[29756]: pgmap v612: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:12 standalone.localdomain ceph-mon[29756]: 3.d scrub starts
Oct 13 13:56:12 standalone.localdomain ceph-mon[29756]: 3.d scrub ok
Oct 13 13:56:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v613: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:56:14 standalone.localdomain ceph-mon[29756]: pgmap v613: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v614: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:15 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 13 13:56:15 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 13 13:56:16 standalone.localdomain ceph-mon[29756]: pgmap v614: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:16 standalone.localdomain ceph-mon[29756]: 2.9 scrub starts
Oct 13 13:56:16 standalone.localdomain ceph-mon[29756]: 2.9 scrub ok
Oct 13 13:56:17 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:56:17 standalone.localdomain pacemaker-fenced[57907]:  warning: Blind faith: not fencing unseen nodes
Oct 13 13:56:17 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 3, saving inputs in /var/lib/pacemaker/pengine/pe-input-3.bz2
Oct 13 13:56:17 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation ip-192.168.122.99_monitor_0 locally on standalone
Oct 13 13:56:17 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of probe operation for ip-192.168.122.99 on standalone
Oct 13 13:56:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v615: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:17 standalone.localdomain pacemaker-controld[57911]:  notice: Result of probe operation for ip-192.168.122.99 on standalone: ok
Oct 13 13:56:17 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 3 aborted by operation ip-192.168.122.99_monitor_0 'modify' on standalone: Event failed
Oct 13 13:56:17 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 3 action 1 (ip-192.168.122.99_monitor_0 on standalone): expected 'not running' but got 'ok'
Oct 13 13:56:17 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 3 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-3.bz2): Complete
Oct 13 13:56:17 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       ip-192.168.122.99     ( standalone )  due to node availability
Oct 13 13:56:17 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 4, saving inputs in /var/lib/pacemaker/pengine/pe-input-4.bz2
Oct 13 13:56:17 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation ip-192.168.122.99_stop_0 locally on standalone
Oct 13 13:56:17 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for ip-192.168.122.99 on standalone
Oct 13 13:56:17 standalone.localdomain IPaddr2(ip-192.168.122.99)[68830]: INFO: IP status = ok, IP_CIP=
Oct 13 13:56:17 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for ip-192.168.122.99 on standalone: ok
Oct 13 13:56:17 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 4 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-4.bz2): Complete
Oct 13 13:56:17 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:56:18 standalone.localdomain ceph-mon[29756]: pgmap v615: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:19 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:56:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:56:19 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 5, saving inputs in /var/lib/pacemaker/pengine/pe-input-5.bz2
Oct 13 13:56:19 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 5 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-5.bz2): Complete
Oct 13 13:56:19 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:56:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v616: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:19 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct 13 13:56:19 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct 13 13:56:20 standalone.localdomain ceph-mon[29756]: pgmap v616: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:20 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct 13 13:56:20 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct 13 13:56:21 standalone.localdomain ceph-mon[29756]: 3.e scrub starts
Oct 13 13:56:21 standalone.localdomain ceph-mon[29756]: 3.e scrub ok
Oct 13 13:56:21 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:56:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v617: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:21 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      ip-192.168.122.99     ( standalone )
Oct 13 13:56:21 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 6, saving inputs in /var/lib/pacemaker/pengine/pe-input-6.bz2
Oct 13 13:56:21 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation ip-192.168.122.99_start_0 locally on standalone
Oct 13 13:56:21 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for ip-192.168.122.99 on standalone
Oct 13 13:56:21 standalone.localdomain IPaddr2(ip-192.168.122.99)[68914]: INFO: Adding inet address 192.168.122.99/32 with broadcast address 192.168.122.255 to device br-ctlplane
Oct 13 13:56:21 standalone.localdomain IPaddr2(ip-192.168.122.99)[68920]: INFO: Bringing device br-ctlplane up
Oct 13 13:56:21 standalone.localdomain IPaddr2(ip-192.168.122.99)[68926]: INFO: /usr/libexec/heartbeat/send_arp  -i 200 -r 5 -p /run/resource-agents/send_arp-192.168.122.99 br-ctlplane 192.168.122.99 auto not_used not_used
Oct 13 13:56:21 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for ip-192.168.122.99 on standalone: ok
Oct 13 13:56:21 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation ip-192.168.122.99_monitor_10000 locally on standalone
Oct 13 13:56:21 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for ip-192.168.122.99 on standalone
Oct 13 13:56:21 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for ip-192.168.122.99 on standalone: ok
Oct 13 13:56:21 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 6 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-6.bz2): Complete
Oct 13 13:56:21 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:56:21 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct 13 13:56:21 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct 13 13:56:22 standalone.localdomain ceph-mon[29756]: 2.a scrub starts
Oct 13 13:56:22 standalone.localdomain ceph-mon[29756]: 2.a scrub ok
Oct 13 13:56:22 standalone.localdomain ceph-mon[29756]: pgmap v617: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:22 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct 13 13:56:22 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:56:23
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr', 'images', 'manila_metadata', 'vms', 'volumes', 'manila_data', 'backups']
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 13:56:23 standalone.localdomain ceph-mon[29756]: 3.f scrub starts
Oct 13 13:56:23 standalone.localdomain ceph-mon[29756]: 3.f scrub ok
Oct 13 13:56:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v618: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:23 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct 13 13:56:23 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 13 13:56:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:56:24 standalone.localdomain ceph-mon[29756]: 2.b scrub starts
Oct 13 13:56:24 standalone.localdomain ceph-mon[29756]: 2.b scrub ok
Oct 13 13:56:24 standalone.localdomain ceph-mon[29756]: pgmap v618: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:25 standalone.localdomain ceph-mon[29756]: 3.10 scrub starts
Oct 13 13:56:25 standalone.localdomain ceph-mon[29756]: 3.10 scrub ok
Oct 13 13:56:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v619: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:25 standalone.localdomain IPaddr2(ip-192.168.122.99)[69003]: INFO: ARPING 192.168.122.99 from 192.168.122.99 br-ctlplane
                                                                          Sent 5 probes (5 broadcast(s))
                                                                          Received 0 response(s)
Oct 13 13:56:26 standalone.localdomain ceph-mon[29756]: pgmap v619: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 1)
Oct 13 13:56:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_data", "var": "pg_num", "val": "32"} v 0)
Oct 13 13:56:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_data", "var": "pg_num", "val": "32"} : dispatch
Oct 13 13:56:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e27 do_prune osdmap full prune enabled
Oct 13 13:56:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e28 e28: 1 total, 1 up, 1 in
Oct 13 13:56:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_data", "var": "pg_num", "val": "32"}]': finished
Oct 13 13:56:27 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e28: 1 total, 1 up, 1 in
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 24e8ccae-c109-4bd5-84e2-8a13decfd934 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct 13 13:56:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pg_num", "val": "16"} v 0)
Oct 13 13:56:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pg_num", "val": "16"} : dispatch
Oct 13 13:56:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_data", "var": "pg_num", "val": "32"} : dispatch
Oct 13 13:56:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v621: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_data", "var": "pg_num_actual", "val": "32"} v 0)
Oct 13 13:56:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_data", "var": "pg_num_actual", "val": "32"} : dispatch
Oct 13 13:56:27 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:56:27 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 7, saving inputs in /var/lib/pacemaker/pengine/pe-input-7.bz2
Oct 13 13:56:27 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation ip-172.21.0.2_monitor_0 locally on standalone
Oct 13 13:56:27 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of probe operation for ip-172.21.0.2 on standalone
Oct 13 13:56:27 standalone.localdomain pacemaker-controld[57911]:  notice: Result of probe operation for ip-172.21.0.2 on standalone: ok
Oct 13 13:56:27 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 7 aborted by operation ip-172.21.0.2_monitor_0 'modify' on standalone: Event failed
Oct 13 13:56:27 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 7 action 2 (ip-172.21.0.2_monitor_0 on standalone): expected 'not running' but got 'ok'
Oct 13 13:56:27 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 7 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-7.bz2): Complete
Oct 13 13:56:27 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       ip-172.21.0.2         ( standalone )  due to node availability
Oct 13 13:56:27 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 8, saving inputs in /var/lib/pacemaker/pengine/pe-input-8.bz2
Oct 13 13:56:27 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation ip-172.21.0.2_stop_0 locally on standalone
Oct 13 13:56:27 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for ip-172.21.0.2 on standalone
Oct 13 13:56:27 standalone.localdomain IPaddr2(ip-172.21.0.2)[69107]: INFO: IP status = ok, IP_CIP=
Oct 13 13:56:27 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for ip-172.21.0.2 on standalone: ok
Oct 13 13:56:27 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 8 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-8.bz2): Complete
Oct 13 13:56:27 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:56:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e28 do_prune osdmap full prune enabled
Oct 13 13:56:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e29 e29: 1 total, 1 up, 1 in
Oct 13 13:56:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pg_num", "val": "16"}]': finished
Oct 13 13:56:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 13 13:56:28 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e29: 1 total, 1 up, 1 in
Oct 13 13:56:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 152427bd-1550-481c-9978-f60046f7b682 (PG autoscaler increasing pool 7 PGs from 1 to 16)
Oct 13 13:56:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 24e8ccae-c109-4bd5-84e2-8a13decfd934 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct 13 13:56:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 24e8ccae-c109-4bd5-84e2-8a13decfd934 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Oct 13 13:56:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 152427bd-1550-481c-9978-f60046f7b682 (PG autoscaler increasing pool 7 PGs from 1 to 16)
Oct 13 13:56:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 152427bd-1550-481c-9978-f60046f7b682 (PG autoscaler increasing pool 7 PGs from 1 to 16) in 0 seconds
Oct 13 13:56:28 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 29 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=12.912283897s) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active pruub 1172.396606445s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 13 13:56:28 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 29 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=29 pruub=12.912283897s) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown pruub 1172.396606445s@ mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_data", "var": "pg_num", "val": "32"}]': finished
Oct 13 13:56:28 standalone.localdomain ceph-mon[29756]: osdmap e28: 1 total, 1 up, 1 in
Oct 13 13:56:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pg_num", "val": "16"} : dispatch
Oct 13 13:56:28 standalone.localdomain ceph-mon[29756]: pgmap v621: 131 pgs: 131 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_data", "var": "pg_num_actual", "val": "32"} : dispatch
Oct 13 13:56:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pg_num", "val": "16"}]': finished
Oct 13 13:56:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 13 13:56:28 standalone.localdomain ceph-mon[29756]: osdmap e29: 1 total, 1 up, 1 in
Oct 13 13:56:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 33 completed events
Oct 13 13:56:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:56:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:56:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:56:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e29 do_prune osdmap full prune enabled
Oct 13 13:56:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e30 e30: 1 total, 1 up, 1 in
Oct 13 13:56:29 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e30: 1 total, 1 up, 1 in
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.1b( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.19( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.18( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.1f( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.1e( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.1d( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.c( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.1a( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.1( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.6( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.3( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.2( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.5( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.4( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.7( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.d( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.e( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.8( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.f( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.9( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.a( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.b( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.14( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.15( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.16( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.17( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.10( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.11( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.13( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.1c( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.12( empty local-lis/les=23/24 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.1b( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.19( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.1e( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.18( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.1f( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.1d( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.c( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.1( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.0( empty local-lis/les=29/30 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.6( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.3( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.2( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.4( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.5( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.1a( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.7( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.d( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.8( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.e( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.f( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.b( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.14( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.16( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.9( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.a( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.15( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.10( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.17( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.13( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.1c( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.12( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 30 pg[6.11( empty local-lis/les=29/30 n=0 ec=29/23 lis/c=23/23 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[23,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v624: 162 pgs: 1 peering, 31 unknown, 130 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pg_num_actual", "val": "16"} v 0)
Oct 13 13:56:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pg_num_actual", "val": "16"} : dispatch
Oct 13 13:56:29 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:56:29 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 9, saving inputs in /var/lib/pacemaker/pengine/pe-input-9.bz2
Oct 13 13:56:29 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 9 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-9.bz2): Complete
Oct 13 13:56:29 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:56:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:56:29 standalone.localdomain ceph-mon[29756]: osdmap e30: 1 total, 1 up, 1 in
Oct 13 13:56:29 standalone.localdomain ceph-mon[29756]: pgmap v624: 162 pgs: 1 peering, 31 unknown, 130 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pg_num_actual", "val": "16"} : dispatch
Oct 13 13:56:29 standalone.localdomain sudo[69125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:56:29 standalone.localdomain sudo[69125]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:56:29 standalone.localdomain sudo[69125]: pam_unix(sudo:session): session closed for user root
Oct 13 13:56:29 standalone.localdomain sudo[69140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:56:29 standalone.localdomain sudo[69140]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:56:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e30 do_prune osdmap full prune enabled
Oct 13 13:56:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e31 e31: 1 total, 1 up, 1 in
Oct 13 13:56:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pg_num_actual", "val": "16"}]': finished
Oct 13 13:56:30 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e31: 1 total, 1 up, 1 in
Oct 13 13:56:30 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 31 pg[7.0( v 26'39 (0'0,26'39] local-lis/les=24/25 n=22 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=31 pruub=11.897431374s) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 26'38 mlcod 26'38 active pruub 1173.403076172s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 13 13:56:30 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 31 pg[7.0( v 26'39 lc 0'0 (0'0,26'39] local-lis/les=24/25 n=1 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=31 pruub=11.897431374s) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 26'38 mlcod 0'0 unknown pruub 1173.403076172s@ mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:30 standalone.localdomain sudo[69140]: pam_unix(sudo:session): session closed for user root
Oct 13 13:56:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:56:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:56:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:56:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:56:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:56:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:56:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:56:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:56:30 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 5964b600-6059-46e4-aed1-147c84cdbc5d (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:56:30 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 5964b600-6059-46e4-aed1-147c84cdbc5d (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:56:30 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 5964b600-6059-46e4-aed1-147c84cdbc5d (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:56:30 standalone.localdomain sudo[69190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:56:30 standalone.localdomain sudo[69190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:56:30 standalone.localdomain sudo[69190]: pam_unix(sudo:session): session closed for user root
Oct 13 13:56:30 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.c scrub starts
Oct 13 13:56:30 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.c scrub ok
Oct 13 13:56:31 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pg_num_actual", "val": "16"}]': finished
Oct 13 13:56:31 standalone.localdomain ceph-mon[29756]: osdmap e31: 1 total, 1 up, 1 in
Oct 13 13:56:31 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:56:31 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:56:31 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:56:31 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:56:31 standalone.localdomain ceph-mon[29756]: 2.c scrub starts
Oct 13 13:56:31 standalone.localdomain ceph-mon[29756]: 2.c scrub ok
Oct 13 13:56:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e31 do_prune osdmap full prune enabled
Oct 13 13:56:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e32 e32: 1 total, 1 up, 1 in
Oct 13 13:56:31 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e32: 1 total, 1 up, 1 in
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.d( v 26'39 lc 0'0 (0'0,26'39] local-lis/les=24/25 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.1( v 26'39 (0'0,26'39] local-lis/les=24/25 n=2 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.7( v 26'39 lc 0'0 (0'0,26'39] local-lis/les=24/25 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.2( v 26'39 lc 0'0 (0'0,26'39] local-lis/les=24/25 n=2 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.3( v 26'39 lc 0'0 (0'0,26'39] local-lis/les=24/25 n=2 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.4( v 26'39 lc 0'0 (0'0,26'39] local-lis/les=24/25 n=2 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.5( v 26'39 lc 0'0 (0'0,26'39] local-lis/les=24/25 n=2 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.6( v 26'39 lc 0'0 (0'0,26'39] local-lis/les=24/25 n=2 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.c( v 26'39 lc 0'0 (0'0,26'39] local-lis/les=24/25 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.f( v 26'39 lc 0'0 (0'0,26'39] local-lis/les=24/25 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.e( v 26'39 lc 0'0 (0'0,26'39] local-lis/les=24/25 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.9( v 26'39 lc 0'0 (0'0,26'39] local-lis/les=24/25 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.8( v 26'39 lc 0'0 (0'0,26'39] local-lis/les=24/25 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.b( v 26'39 lc 0'0 (0'0,26'39] local-lis/les=24/25 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.a( v 26'39 lc 0'0 (0'0,26'39] local-lis/les=24/25 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.d( v 26'39 (0'0,26'39] local-lis/les=31/32 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.0( v 26'39 (0'0,26'39] local-lis/les=31/32 n=1 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 26'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.1( v 26'39 (0'0,26'39] local-lis/les=31/32 n=2 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.7( v 26'39 (0'0,26'39] local-lis/les=31/32 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.2( v 26'39 (0'0,26'39] local-lis/les=31/32 n=2 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.3( v 26'39 (0'0,26'39] local-lis/les=31/32 n=2 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.4( v 26'39 (0'0,26'39] local-lis/les=31/32 n=2 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.6( v 26'39 (0'0,26'39] local-lis/les=31/32 n=2 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.f( v 26'39 (0'0,26'39] local-lis/les=31/32 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.c( v 26'39 (0'0,26'39] local-lis/les=31/32 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.9( v 26'39 (0'0,26'39] local-lis/les=31/32 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.8( v 26'39 (0'0,26'39] local-lis/les=31/32 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.b( v 26'39 (0'0,26'39] local-lis/les=31/32 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.e( v 26'39 (0'0,26'39] local-lis/les=31/32 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.a( v 26'39 (0'0,26'39] local-lis/les=31/32 n=1 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-osd[37878]: osd.0 pg_epoch: 32 pg[7.5( v 26'39 (0'0,26'39] local-lis/les=31/32 n=2 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [0] r=0 lpr=31 pi=[24,31)/1 crt=26'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 13 13:56:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v627: 177 pgs: 1 peering, 46 unknown, 130 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:31 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:56:31 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      ip-172.21.0.2         ( standalone )
Oct 13 13:56:31 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 10, saving inputs in /var/lib/pacemaker/pengine/pe-input-10.bz2
Oct 13 13:56:31 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation ip-172.21.0.2_start_0 locally on standalone
Oct 13 13:56:31 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for ip-172.21.0.2 on standalone
Oct 13 13:56:31 standalone.localdomain IPaddr2(ip-172.21.0.2)[69309]: INFO: Adding inet address 172.21.0.2/32 with broadcast address 172.21.0.255 to device vlan44
Oct 13 13:56:31 standalone.localdomain IPaddr2(ip-172.21.0.2)[69315]: INFO: Bringing device vlan44 up
Oct 13 13:56:31 standalone.localdomain IPaddr2(ip-172.21.0.2)[69321]: INFO: /usr/libexec/heartbeat/send_arp  -i 200 -r 5 -p /run/resource-agents/send_arp-172.21.0.2 vlan44 172.21.0.2 auto not_used not_used
Oct 13 13:56:31 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for ip-172.21.0.2 on standalone: ok
Oct 13 13:56:31 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation ip-172.21.0.2_monitor_10000 locally on standalone
Oct 13 13:56:31 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for ip-172.21.0.2 on standalone
Oct 13 13:56:31 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for ip-172.21.0.2 on standalone: ok
Oct 13 13:56:31 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 10 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-10.bz2): Complete
Oct 13 13:56:31 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:56:32 standalone.localdomain ceph-mon[29756]: osdmap e32: 1 total, 1 up, 1 in
Oct 13 13:56:32 standalone.localdomain ceph-mon[29756]: pgmap v627: 177 pgs: 1 peering, 46 unknown, 130 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v628: 177 pgs: 1 peering, 46 unknown, 130 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:33 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 34 completed events
Oct 13 13:56:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:56:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:56:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:56:34 standalone.localdomain ceph-mon[29756]: pgmap v628: 177 pgs: 1 peering, 46 unknown, 130 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:56:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 13:56:34 standalone.localdomain podman[69388]: 2025-10-13 13:56:34.821139376 +0000 UTC m=+0.081955529 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, name=rhosp17/openstack-memcached, com.redhat.component=openstack-memcached-container, container_name=memcached, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, build-date=2025-07-21T12:58:43, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step1, vcs-type=git)
Oct 13 13:56:34 standalone.localdomain podman[69388]: 2025-10-13 13:56:34.845565886 +0000 UTC m=+0.106382039 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, summary=Red Hat OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, managed_by=tripleo_ansible, version=17.1.9, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.component=openstack-memcached-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-memcached, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, build-date=2025-07-21T12:58:43, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']})
Oct 13 13:56:34 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 13:56:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v629: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_data", "var": "pgp_num_actual", "val": "32"} v 0)
Oct 13 13:56:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_data", "var": "pgp_num_actual", "val": "32"} : dispatch
Oct 13 13:56:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "2"} v 0)
Oct 13 13:56:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "2"} : dispatch
Oct 13 13:56:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e32 do_prune osdmap full prune enabled
Oct 13 13:56:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e33 e33: 1 total, 1 up, 1 in
Oct 13 13:56:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 13 13:56:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 13 13:56:35 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e33: 1 total, 1 up, 1 in
Oct 13 13:56:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_data", "var": "pgp_num_actual", "val": "32"} : dispatch
Oct 13 13:56:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "2"} : dispatch
Oct 13 13:56:35 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct 13 13:56:35 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct 13 13:56:35 standalone.localdomain IPaddr2(ip-172.21.0.2)[69416]: INFO: ARPING 172.21.0.2 from 172.21.0.2 vlan44
                                                                      Sent 5 probes (5 broadcast(s))
                                                                      Received 0 response(s)
Oct 13 13:56:36 standalone.localdomain ceph-mon[29756]: pgmap v629: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 13 13:56:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 13 13:56:36 standalone.localdomain ceph-mon[29756]: osdmap e33: 1 total, 1 up, 1 in
Oct 13 13:56:36 standalone.localdomain ceph-mon[29756]: 3.11 scrub starts
Oct 13 13:56:36 standalone.localdomain ceph-mon[29756]: 3.11 scrub ok
Oct 13 13:56:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "3"} v 0)
Oct 13 13:56:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "3"} : dispatch
Oct 13 13:56:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e33 do_prune osdmap full prune enabled
Oct 13 13:56:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e34 e34: 1 total, 1 up, 1 in
Oct 13 13:56:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 13 13:56:37 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e34: 1 total, 1 up, 1 in
Oct 13 13:56:37 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "3"} : dispatch
Oct 13 13:56:37 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct 13 13:56:37 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct 13 13:56:38 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:56:38 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 11, saving inputs in /var/lib/pacemaker/pengine/pe-input-11.bz2
Oct 13 13:56:38 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation ip-172.17.0.2_monitor_0 locally on standalone
Oct 13 13:56:38 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of probe operation for ip-172.17.0.2 on standalone
Oct 13 13:56:38 standalone.localdomain pacemaker-controld[57911]:  notice: Result of probe operation for ip-172.17.0.2 on standalone: ok
Oct 13 13:56:38 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 11 aborted by operation ip-172.17.0.2_monitor_0 'modify' on standalone: Event failed
Oct 13 13:56:38 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 11 action 3 (ip-172.17.0.2_monitor_0 on standalone): expected 'not running' but got 'ok'
Oct 13 13:56:38 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 11 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-11.bz2): Complete
Oct 13 13:56:38 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       ip-172.17.0.2         ( standalone )  due to node availability
Oct 13 13:56:38 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 12, saving inputs in /var/lib/pacemaker/pengine/pe-input-12.bz2
Oct 13 13:56:38 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation ip-172.17.0.2_stop_0 locally on standalone
Oct 13 13:56:38 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for ip-172.17.0.2 on standalone
Oct 13 13:56:38 standalone.localdomain IPaddr2(ip-172.17.0.2)[69526]: INFO: IP status = ok, IP_CIP=
Oct 13 13:56:38 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for ip-172.17.0.2 on standalone: ok
Oct 13 13:56:38 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 12 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-12.bz2): Complete
Oct 13 13:56:38 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:56:38 standalone.localdomain ceph-mon[29756]: pgmap v631: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:38 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 13 13:56:38 standalone.localdomain ceph-mon[29756]: osdmap e34: 1 total, 1 up, 1 in
Oct 13 13:56:38 standalone.localdomain ceph-mon[29756]: 3.12 scrub starts
Oct 13 13:56:38 standalone.localdomain ceph-mon[29756]: 3.12 scrub ok
Oct 13 13:56:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:56:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "4"} v 0)
Oct 13 13:56:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "4"} : dispatch
Oct 13 13:56:40 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:56:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e34 do_prune osdmap full prune enabled
Oct 13 13:56:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e35 e35: 1 total, 1 up, 1 in
Oct 13 13:56:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 13 13:56:40 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e35: 1 total, 1 up, 1 in
Oct 13 13:56:40 standalone.localdomain ceph-mon[29756]: pgmap v633: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "4"} : dispatch
Oct 13 13:56:40 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 13, saving inputs in /var/lib/pacemaker/pengine/pe-input-13.bz2
Oct 13 13:56:40 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 13 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-13.bz2): Complete
Oct 13 13:56:40 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:56:41 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 13 13:56:41 standalone.localdomain ceph-mon[29756]: osdmap e35: 1 total, 1 up, 1 in
Oct 13 13:56:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v635: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "5"} v 0)
Oct 13 13:56:41 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "5"} : dispatch
Oct 13 13:56:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e35 do_prune osdmap full prune enabled
Oct 13 13:56:42 standalone.localdomain ceph-mon[29756]: pgmap v635: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:42 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "5"} : dispatch
Oct 13 13:56:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e36 e36: 1 total, 1 up, 1 in
Oct 13 13:56:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 13 13:56:42 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e36: 1 total, 1 up, 1 in
Oct 13 13:56:42 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:56:42 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      ip-172.17.0.2         ( standalone )
Oct 13 13:56:42 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 14, saving inputs in /var/lib/pacemaker/pengine/pe-input-14.bz2
Oct 13 13:56:42 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation ip-172.17.0.2_start_0 locally on standalone
Oct 13 13:56:42 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for ip-172.17.0.2 on standalone
Oct 13 13:56:42 standalone.localdomain IPaddr2(ip-172.17.0.2)[69692]: INFO: Adding inet address 172.17.0.2/32 with broadcast address 172.17.0.255 to device vlan20
Oct 13 13:56:42 standalone.localdomain IPaddr2(ip-172.17.0.2)[69698]: INFO: Bringing device vlan20 up
Oct 13 13:56:42 standalone.localdomain IPaddr2(ip-172.17.0.2)[69704]: INFO: /usr/libexec/heartbeat/send_arp  -i 200 -r 5 -p /run/resource-agents/send_arp-172.17.0.2 vlan20 172.17.0.2 auto not_used not_used
Oct 13 13:56:42 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for ip-172.17.0.2 on standalone: ok
Oct 13 13:56:42 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation ip-172.17.0.2_monitor_10000 locally on standalone
Oct 13 13:56:42 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for ip-172.17.0.2 on standalone
Oct 13 13:56:42 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct 13 13:56:42 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct 13 13:56:42 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for ip-172.17.0.2 on standalone: ok
Oct 13 13:56:42 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 14 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-14.bz2): Complete
Oct 13 13:56:42 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:56:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 13 13:56:43 standalone.localdomain ceph-mon[29756]: osdmap e36: 1 total, 1 up, 1 in
Oct 13 13:56:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v637: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "6"} v 0)
Oct 13 13:56:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "6"} : dispatch
Oct 13 13:56:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:56:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e36 do_prune osdmap full prune enabled
Oct 13 13:56:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e37 e37: 1 total, 1 up, 1 in
Oct 13 13:56:44 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 13 13:56:44 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e37: 1 total, 1 up, 1 in
Oct 13 13:56:44 standalone.localdomain ceph-mon[29756]: 3.13 scrub starts
Oct 13 13:56:44 standalone.localdomain ceph-mon[29756]: 3.13 scrub ok
Oct 13 13:56:44 standalone.localdomain ceph-mon[29756]: pgmap v637: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "6"} : dispatch
Oct 13 13:56:44 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.d scrub starts
Oct 13 13:56:44 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.d scrub ok
Oct 13 13:56:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v639: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "7"} v 0)
Oct 13 13:56:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "7"} : dispatch
Oct 13 13:56:45 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 13 13:56:45 standalone.localdomain ceph-mon[29756]: osdmap e37: 1 total, 1 up, 1 in
Oct 13 13:56:45 standalone.localdomain ceph-mon[29756]: 2.d scrub starts
Oct 13 13:56:45 standalone.localdomain ceph-mon[29756]: 2.d scrub ok
Oct 13 13:56:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e37 do_prune osdmap full prune enabled
Oct 13 13:56:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e38 e38: 1 total, 1 up, 1 in
Oct 13 13:56:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 13 13:56:45 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e38: 1 total, 1 up, 1 in
Oct 13 13:56:46 standalone.localdomain ceph-mon[29756]: pgmap v639: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "7"} : dispatch
Oct 13 13:56:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 13 13:56:46 standalone.localdomain ceph-mon[29756]: osdmap e38: 1 total, 1 up, 1 in
Oct 13 13:56:46 standalone.localdomain IPaddr2(ip-172.17.0.2)[69777]: INFO: ARPING 172.17.0.2 from 172.17.0.2 vlan20
                                                                      Sent 5 probes (5 broadcast(s))
                                                                      Received 0 response(s)
Oct 13 13:56:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "8"} v 0)
Oct 13 13:56:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "8"} : dispatch
Oct 13 13:56:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e38 do_prune osdmap full prune enabled
Oct 13 13:56:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e39 e39: 1 total, 1 up, 1 in
Oct 13 13:56:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 13 13:56:47 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e39: 1 total, 1 up, 1 in
Oct 13 13:56:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "8"} : dispatch
Oct 13 13:56:47 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct 13 13:56:47 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct 13 13:56:48 standalone.localdomain ceph-mon[29756]: pgmap v641: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 13 13:56:48 standalone.localdomain ceph-mon[29756]: osdmap e39: 1 total, 1 up, 1 in
Oct 13 13:56:48 standalone.localdomain ceph-mon[29756]: 3.14 scrub starts
Oct 13 13:56:48 standalone.localdomain ceph-mon[29756]: 3.14 scrub ok
Oct 13 13:56:49 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:56:49 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 15, saving inputs in /var/lib/pacemaker/pengine/pe-input-15.bz2
Oct 13 13:56:49 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation ip-172.18.0.2_monitor_0 locally on standalone
Oct 13 13:56:49 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of probe operation for ip-172.18.0.2 on standalone
Oct 13 13:56:49 standalone.localdomain pacemaker-controld[57911]:  notice: Result of probe operation for ip-172.18.0.2 on standalone: ok
Oct 13 13:56:49 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 15 aborted by operation ip-172.18.0.2_monitor_0 'modify' on standalone: Event failed
Oct 13 13:56:49 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 15 action 4 (ip-172.18.0.2_monitor_0 on standalone): expected 'not running' but got 'ok'
Oct 13 13:56:49 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 15 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-15.bz2): Complete
Oct 13 13:56:49 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       ip-172.18.0.2         ( standalone )  due to node availability
Oct 13 13:56:49 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 16, saving inputs in /var/lib/pacemaker/pengine/pe-input-16.bz2
Oct 13 13:56:49 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation ip-172.18.0.2_stop_0 locally on standalone
Oct 13 13:56:49 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for ip-172.18.0.2 on standalone
Oct 13 13:56:49 standalone.localdomain IPaddr2(ip-172.18.0.2)[69887]: INFO: IP status = ok, IP_CIP=
Oct 13 13:56:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:56:49 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for ip-172.18.0.2 on standalone: ok
Oct 13 13:56:49 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 16 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-16.bz2): Complete
Oct 13 13:56:49 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:56:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v643: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "9"} v 0)
Oct 13 13:56:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "9"} : dispatch
Oct 13 13:56:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e39 do_prune osdmap full prune enabled
Oct 13 13:56:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e40 e40: 1 total, 1 up, 1 in
Oct 13 13:56:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 13 13:56:50 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e40: 1 total, 1 up, 1 in
Oct 13 13:56:50 standalone.localdomain ceph-mon[29756]: pgmap v643: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "9"} : dispatch
Oct 13 13:56:51 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:56:51 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 17, saving inputs in /var/lib/pacemaker/pengine/pe-input-17.bz2
Oct 13 13:56:51 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 17 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-17.bz2): Complete
Oct 13 13:56:51 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:56:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 13 13:56:51 standalone.localdomain ceph-mon[29756]: osdmap e40: 1 total, 1 up, 1 in
Oct 13 13:56:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "10"} v 0)
Oct 13 13:56:51 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "10"} : dispatch
Oct 13 13:56:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e40 do_prune osdmap full prune enabled
Oct 13 13:56:52 standalone.localdomain ceph-mon[29756]: pgmap v645: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:52 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "10"} : dispatch
Oct 13 13:56:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e41 e41: 1 total, 1 up, 1 in
Oct 13 13:56:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 13 13:56:52 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e41: 1 total, 1 up, 1 in
Oct 13 13:56:52 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct 13 13:56:52 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct 13 13:56:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:56:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:56:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:56:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:56:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:56:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:56:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 13 13:56:53 standalone.localdomain ceph-mon[29756]: osdmap e41: 1 total, 1 up, 1 in
Oct 13 13:56:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "11"} v 0)
Oct 13 13:56:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "11"} : dispatch
Oct 13 13:56:53 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:56:53 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      ip-172.18.0.2         ( standalone )
Oct 13 13:56:53 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 18, saving inputs in /var/lib/pacemaker/pengine/pe-input-18.bz2
Oct 13 13:56:53 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation ip-172.18.0.2_start_0 locally on standalone
Oct 13 13:56:53 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct 13 13:56:53 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for ip-172.18.0.2 on standalone
Oct 13 13:56:53 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct 13 13:56:53 standalone.localdomain IPaddr2(ip-172.18.0.2)[70094]: INFO: Adding inet address 172.18.0.2/32 with broadcast address 172.18.0.255 to device vlan21
Oct 13 13:56:53 standalone.localdomain IPaddr2(ip-172.18.0.2)[70100]: INFO: Bringing device vlan21 up
Oct 13 13:56:53 standalone.localdomain IPaddr2(ip-172.18.0.2)[70106]: INFO: /usr/libexec/heartbeat/send_arp  -i 200 -r 5 -p /run/resource-agents/send_arp-172.18.0.2 vlan21 172.18.0.2 auto not_used not_used
Oct 13 13:56:53 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for ip-172.18.0.2 on standalone: ok
Oct 13 13:56:53 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation ip-172.18.0.2_monitor_10000 locally on standalone
Oct 13 13:56:53 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for ip-172.18.0.2 on standalone
Oct 13 13:56:53 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for ip-172.18.0.2 on standalone: ok
Oct 13 13:56:53 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 18 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-18.bz2): Complete
Oct 13 13:56:53 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:56:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:56:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e41 do_prune osdmap full prune enabled
Oct 13 13:56:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e42 e42: 1 total, 1 up, 1 in
Oct 13 13:56:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 13 13:56:54 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e42: 1 total, 1 up, 1 in
Oct 13 13:56:54 standalone.localdomain ceph-mon[29756]: 2.e scrub starts
Oct 13 13:56:54 standalone.localdomain ceph-mon[29756]: 2.e scrub ok
Oct 13 13:56:54 standalone.localdomain ceph-mon[29756]: pgmap v647: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "11"} : dispatch
Oct 13 13:56:55 standalone.localdomain ceph-mon[29756]: 3.15 scrub starts
Oct 13 13:56:55 standalone.localdomain ceph-mon[29756]: 3.15 scrub ok
Oct 13 13:56:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 13 13:56:55 standalone.localdomain ceph-mon[29756]: osdmap e42: 1 total, 1 up, 1 in
Oct 13 13:56:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v649: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "12"} v 0)
Oct 13 13:56:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "12"} : dispatch
Oct 13 13:56:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e42 do_prune osdmap full prune enabled
Oct 13 13:56:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e43 e43: 1 total, 1 up, 1 in
Oct 13 13:56:56 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 13 13:56:56 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e43: 1 total, 1 up, 1 in
Oct 13 13:56:56 standalone.localdomain ceph-mon[29756]: pgmap v649: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "12"} : dispatch
Oct 13 13:56:57 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 13 13:56:57 standalone.localdomain ceph-mon[29756]: osdmap e43: 1 total, 1 up, 1 in
Oct 13 13:56:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v651: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "13"} v 0)
Oct 13 13:56:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "13"} : dispatch
Oct 13 13:56:57 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.f scrub starts
Oct 13 13:56:57 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.f scrub ok
Oct 13 13:56:57 standalone.localdomain IPaddr2(ip-172.18.0.2)[70180]: INFO: ARPING 172.18.0.2 from 172.18.0.2 vlan21
                                                                      Sent 5 probes (5 broadcast(s))
                                                                      Received 0 response(s)
Oct 13 13:56:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e43 do_prune osdmap full prune enabled
Oct 13 13:56:58 standalone.localdomain ceph-mon[29756]: pgmap v651: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:58 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "13"} : dispatch
Oct 13 13:56:58 standalone.localdomain ceph-mon[29756]: 2.f scrub starts
Oct 13 13:56:58 standalone.localdomain ceph-mon[29756]: 2.f scrub ok
Oct 13 13:56:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e44 e44: 1 total, 1 up, 1 in
Oct 13 13:56:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 13 13:56:58 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e44: 1 total, 1 up, 1 in
Oct 13 13:56:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:56:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 13 13:56:59 standalone.localdomain ceph-mon[29756]: osdmap e44: 1 total, 1 up, 1 in
Oct 13 13:56:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v653: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:56:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "14"} v 0)
Oct 13 13:56:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "14"} : dispatch
Oct 13 13:56:59 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:56:59 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 19, saving inputs in /var/lib/pacemaker/pengine/pe-input-19.bz2
Oct 13 13:56:59 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation ip-172.20.0.2_monitor_0 locally on standalone
Oct 13 13:56:59 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of probe operation for ip-172.20.0.2 on standalone
Oct 13 13:56:59 standalone.localdomain pacemaker-controld[57911]:  notice: Result of probe operation for ip-172.20.0.2 on standalone: ok
Oct 13 13:56:59 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 19 aborted by operation ip-172.20.0.2_monitor_0 'modify' on standalone: Event failed
Oct 13 13:56:59 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 19 action 5 (ip-172.20.0.2_monitor_0 on standalone): expected 'not running' but got 'ok'
Oct 13 13:56:59 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 19 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-19.bz2): Complete
Oct 13 13:56:59 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       ip-172.20.0.2         ( standalone )  due to node availability
Oct 13 13:56:59 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 20, saving inputs in /var/lib/pacemaker/pengine/pe-input-20.bz2
Oct 13 13:56:59 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation ip-172.20.0.2_stop_0 locally on standalone
Oct 13 13:56:59 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for ip-172.20.0.2 on standalone
Oct 13 13:56:59 standalone.localdomain IPaddr2(ip-172.20.0.2)[70290]: INFO: IP status = ok, IP_CIP=
Oct 13 13:56:59 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for ip-172.20.0.2 on standalone: ok
Oct 13 13:56:59 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 20 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-20.bz2): Complete
Oct 13 13:56:59 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e44 do_prune osdmap full prune enabled
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e45 e45: 1 total, 1 up, 1 in
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e45: 1 total, 1 up, 1 in
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: pgmap v653: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "14"} : dispatch
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.376263) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363820376385, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 3290, "num_deletes": 502, "total_data_size": 2127990, "memory_usage": 2191328, "flush_reason": "Manual Compaction"}
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363820386668, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2033079, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12193, "largest_seqno": 15482, "table_properties": {"data_size": 2021300, "index_size": 6754, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3973, "raw_key_size": 32148, "raw_average_key_size": 20, "raw_value_size": 1992974, "raw_average_value_size": 1266, "num_data_blocks": 305, "num_entries": 1574, "num_filter_entries": 1574, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760363613, "oldest_key_time": 1760363613, "file_creation_time": 1760363820, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 10428 microseconds, and 5113 cpu microseconds.
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.386711) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2033079 bytes OK
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.386732) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.388432) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.388466) EVENT_LOG_v1 {"time_micros": 1760363820388441, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.388503) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2112762, prev total WAL file size 2112762, number of live WAL files 2.
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.389078) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1985KB)], [32(2352KB)]
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363820389131, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 4442170, "oldest_snapshot_seqno": -1}
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 2980 keys, 3637272 bytes, temperature: kUnknown
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363820403585, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 3637272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 3617094, "index_size": 11634, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7493, "raw_key_size": 67978, "raw_average_key_size": 22, "raw_value_size": 3563226, "raw_average_value_size": 1195, "num_data_blocks": 513, "num_entries": 2980, "num_filter_entries": 2980, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760363820, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.403733) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 3637272 bytes
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.405195) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 306.1 rd, 250.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 2.3 +0.0 blob) out(3.5 +0.0 blob), read-write-amplify(4.0) write-amplify(1.8) OK, records in: 4012, records dropped: 1032 output_compression: NoCompression
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.405210) EVENT_LOG_v1 {"time_micros": 1760363820405203, "job": 14, "event": "compaction_finished", "compaction_time_micros": 14511, "compaction_time_cpu_micros": 6959, "output_level": 6, "num_output_files": 1, "total_output_size": 3637272, "num_input_records": 4012, "num_output_records": 2980, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363820405408, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363820405588, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.389033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.405605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.405608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.405609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.405611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:57:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:57:00.405612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:57:01 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 13 13:57:01 standalone.localdomain ceph-mon[29756]: osdmap e45: 1 total, 1 up, 1 in
Oct 13 13:57:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v655: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "15"} v 0)
Oct 13 13:57:01 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "15"} : dispatch
Oct 13 13:57:01 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:01 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 21, saving inputs in /var/lib/pacemaker/pengine/pe-input-21.bz2
Oct 13 13:57:01 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 21 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-21.bz2): Complete
Oct 13 13:57:01 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e45 do_prune osdmap full prune enabled
Oct 13 13:57:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e46 e46: 1 total, 1 up, 1 in
Oct 13 13:57:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 13 13:57:02 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e46: 1 total, 1 up, 1 in
Oct 13 13:57:02 standalone.localdomain ceph-mon[29756]: pgmap v655: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "15"} : dispatch
Oct 13 13:57:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v657: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "16"} v 0)
Oct 13 13:57:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "16"} : dispatch
Oct 13 13:57:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e46 do_prune osdmap full prune enabled
Oct 13 13:57:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 e47: 1 total, 1 up, 1 in
Oct 13 13:57:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 13 13:57:03 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e47: 1 total, 1 up, 1 in
Oct 13 13:57:03 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 13 13:57:03 standalone.localdomain ceph-mon[29756]: osdmap e46: 1 total, 1 up, 1 in
Oct 13 13:57:03 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "16"} : dispatch
Oct 13 13:57:03 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Oct 13 13:57:03 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Oct 13 13:57:03 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:03 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      ip-172.20.0.2         ( standalone )
Oct 13 13:57:03 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 22, saving inputs in /var/lib/pacemaker/pengine/pe-input-22.bz2
Oct 13 13:57:03 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation ip-172.20.0.2_start_0 locally on standalone
Oct 13 13:57:03 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for ip-172.20.0.2 on standalone
Oct 13 13:57:03 standalone.localdomain IPaddr2(ip-172.20.0.2)[70538]: INFO: Adding inet address 172.20.0.2/32 with broadcast address 172.20.0.255 to device vlan23
Oct 13 13:57:03 standalone.localdomain IPaddr2(ip-172.20.0.2)[70544]: INFO: Bringing device vlan23 up
Oct 13 13:57:03 standalone.localdomain IPaddr2(ip-172.20.0.2)[70550]: INFO: /usr/libexec/heartbeat/send_arp  -i 200 -r 5 -p /run/resource-agents/send_arp-172.20.0.2 vlan23 172.20.0.2 auto not_used not_used
Oct 13 13:57:03 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for ip-172.20.0.2 on standalone: ok
Oct 13 13:57:03 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation ip-172.20.0.2_monitor_10000 locally on standalone
Oct 13 13:57:03 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for ip-172.20.0.2 on standalone
Oct 13 13:57:03 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for ip-172.20.0.2 on standalone: ok
Oct 13 13:57:03 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 22 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-22.bz2): Complete
Oct 13 13:57:03 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:57:04 standalone.localdomain ceph-mon[29756]: pgmap v657: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:04 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd='[{"prefix": "osd pool set", "pool": "manila_metadata", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 13 13:57:04 standalone.localdomain ceph-mon[29756]: osdmap e47: 1 total, 1 up, 1 in
Oct 13 13:57:04 standalone.localdomain ceph-mon[29756]: 3.16 scrub starts
Oct 13 13:57:04 standalone.localdomain ceph-mon[29756]: 3.16 scrub ok
Oct 13 13:57:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v659: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 13:57:05 standalone.localdomain podman[70612]: 2025-10-13 13:57:05.557944803 +0000 UTC m=+0.102264833 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-memcached, description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, architecture=x86_64, version=17.1.9, build-date=2025-07-21T12:58:43, com.redhat.component=openstack-memcached-container, container_name=memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vcs-type=git, managed_by=tripleo_ansible, release=1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 13:57:05 standalone.localdomain podman[70612]: 2025-10-13 13:57:05.581738114 +0000 UTC m=+0.126058074 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, build-date=2025-07-21T12:58:43, com.redhat.component=openstack-memcached-container, config_id=tripleo_step1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, vendor=Red Hat, Inc., container_name=memcached, 
batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.buildah.version=1.33.12)
Oct 13 13:57:05 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 13:57:05 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.10 deep-scrub starts
Oct 13 13:57:05 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.10 deep-scrub ok
Oct 13 13:57:06 standalone.localdomain ceph-mon[29756]: pgmap v659: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:06 standalone.localdomain ceph-mon[29756]: 2.10 deep-scrub starts
Oct 13 13:57:06 standalone.localdomain ceph-mon[29756]: 2.10 deep-scrub ok
Oct 13 13:57:06 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.17 deep-scrub starts
Oct 13 13:57:06 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.17 deep-scrub ok
Oct 13 13:57:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:07 standalone.localdomain ceph-mon[29756]: 3.17 deep-scrub starts
Oct 13 13:57:07 standalone.localdomain ceph-mon[29756]: 3.17 deep-scrub ok
Oct 13 13:57:07 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Oct 13 13:57:07 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Oct 13 13:57:07 standalone.localdomain IPaddr2(ip-172.20.0.2)[70653]: INFO: ARPING 172.20.0.2 from 172.20.0.2 vlan23
                                                                      Sent 5 probes (5 broadcast(s))
                                                                      Received 0 response(s)
Oct 13 13:57:08 standalone.localdomain ceph-mon[29756]: pgmap v660: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:08 standalone.localdomain ceph-mon[29756]: 2.11 scrub starts
Oct 13 13:57:08 standalone.localdomain ceph-mon[29756]: 2.11 scrub ok
Oct 13 13:57:08 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:08 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 23, saving inputs in /var/lib/pacemaker/pengine/pe-input-23.bz2
Oct 13 13:57:08 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation haproxy-bundle-podman-0_monitor_0 locally on standalone
Oct 13 13:57:08 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of probe operation for haproxy-bundle-podman-0 on standalone
Oct 13 13:57:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 13:57:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 1200.1 total, 600.0 interval
                                                        Cumulative writes: 4266 writes, 20K keys, 4266 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                        Cumulative WAL: 4266 writes, 361 syncs, 11.82 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 878 writes, 4226 keys, 878 commit groups, 1.0 writes per commit group, ingest: 1.69 MB, 0.00 MB/s
                                                        Interval WAL: 878 writes, 163 syncs, 5.39 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                                         Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 1200.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
                                                        
                                                        ** Compaction Stats [m-0] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [m-0] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 1200.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [m-0] **
                                                        
                                                        ** Compaction Stats [m-1] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [m-1] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 1200.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [m-1] **
                                                        
                                                        ** Compaction Stats [m-2] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [m-2] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 1200.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [m-2] **
                                                        
                                                        ** Compaction Stats [p-0] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [p-0] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 1200.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [p-0] **
                                                        
                                                        ** Compaction Stats [p-1] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [p-1] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 1200.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [p-1] **
                                                        
                                                        ** Compaction Stats [p-2] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [p-2] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 1200.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [p-2] **
                                                        
                                                        ** Compaction Stats [O-0] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [O-0] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 1200.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427eedd0#2 capacity: 288.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,6.88765e-05%) FilterBlock(1,0.11 KB,3.70873e-05%) IndexBlock(1,0.14 KB,4.76837e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [O-0] **
                                                        
                                                        ** Compaction Stats [O-1] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [O-1] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 1200.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427eedd0#2 capacity: 288.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,6.88765e-05%) FilterBlock(1,0.11 KB,3.70873e-05%) IndexBlock(1,0.14 KB,4.76837e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [O-1] **
                                                        
                                                        ** Compaction Stats [O-2] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                         Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [O-2] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 1200.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427eedd0#2 capacity: 288.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,6.88765e-05%) FilterBlock(1,0.11 KB,3.70873e-05%) IndexBlock(1,0.14 KB,4.76837e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [O-2] **
                                                        
                                                        ** Compaction Stats [L] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [L] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 1200.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [L] **
                                                        
                                                        ** Compaction Stats [P] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [P] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 1200.1 total, 600.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 1.72 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,7.88949e-05%) FilterBlock(3,0.33 KB,1.82065e-05%) IndexBlock(3,0.34 KB,1.90735e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [P] **
Oct 13 13:57:08 standalone.localdomain pacemaker-controld[57911]:  notice: Result of probe operation for haproxy-bundle-podman-0 on standalone: not running
Oct 13 13:57:08 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 23 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-23.bz2): Complete
Oct 13 13:57:08 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:57:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v661: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:10 standalone.localdomain ceph-mon[29756]: pgmap v661: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:10 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.18 deep-scrub starts
Oct 13 13:57:10 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.18 deep-scrub ok
Oct 13 13:57:10 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:10 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 24, saving inputs in /var/lib/pacemaker/pengine/pe-input-24.bz2
Oct 13 13:57:10 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 24 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-24.bz2): Complete
Oct 13 13:57:10 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:11 standalone.localdomain ceph-mon[29756]: 3.18 deep-scrub starts
Oct 13 13:57:11 standalone.localdomain ceph-mon[29756]: 3.18 deep-scrub ok
Oct 13 13:57:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:11 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Oct 13 13:57:11 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Oct 13 13:57:12 standalone.localdomain ceph-mon[29756]: pgmap v662: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:12 standalone.localdomain ceph-mon[29756]: 2.12 scrub starts
Oct 13 13:57:12 standalone.localdomain ceph-mon[29756]: 2.12 scrub ok
Oct 13 13:57:12 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct 13 13:57:12 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct 13 13:57:13 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:13 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      haproxy-bundle-podman-0     ( standalone )
Oct 13 13:57:13 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 25, saving inputs in /var/lib/pacemaker/pengine/pe-input-25.bz2
Oct 13 13:57:13 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation haproxy-bundle-podman-0_start_0 locally on standalone
Oct 13 13:57:13 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for haproxy-bundle-podman-0 on standalone
Oct 13 13:57:13 standalone.localdomain ceph-mon[29756]: 3.19 scrub starts
Oct 13 13:57:13 standalone.localdomain ceph-mon[29756]: 3.19 scrub ok
Oct 13 13:57:13 standalone.localdomain podman(haproxy-bundle-podman-0)[70882]: INFO: running container haproxy-bundle-podman-0 for the first time
Oct 13 13:57:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v663: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:13 standalone.localdomain podman[70886]: 2025-10-13 13:57:13.462554055 +0000 UTC m=+0.071792698 container create 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, name=rhosp17/openstack-haproxy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-haproxy-container, description=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 haproxy, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 13:57:13 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:57:13 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/148ce7eb9cddf6791c4b1db7affe1d541b049874ebcbdb399317e20ea109207b/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 13 13:57:13 standalone.localdomain podman[70886]: 2025-10-13 13:57:13.530629626 +0000 UTC m=+0.139868309 container init 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, tcib_managed=true, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T13:08:11, vcs-type=git, com.redhat.component=openstack-haproxy-container, batch=17.1_20250721.1)
Oct 13 13:57:13 standalone.localdomain podman[70886]: 2025-10-13 13:57:13.431083878 +0000 UTC m=+0.040322551 image pull  cluster.common.tag/haproxy:pcmklatest
Oct 13 13:57:13 standalone.localdomain podman[70886]: 2025-10-13 13:57:13.541223362 +0000 UTC m=+0.150462035 container start 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-haproxy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-haproxy-container)
Oct 13 13:57:13 standalone.localdomain sudo[70906]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 13:57:13 standalone.localdomain sudo[70906]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:57:13 standalone.localdomain podman(haproxy-bundle-podman-0)[70911]: INFO: 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb
Oct 13 13:57:13 standalone.localdomain sudo[70906]: pam_unix(sudo:session): session closed for user root
Oct 13 13:57:13 standalone.localdomain podman(haproxy-bundle-podman-0)[70935]: INFO: Creating drop-in dependency for "haproxy-bundle-podman-0" (60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb)
Oct 13 13:57:13 standalone.localdomain haproxy[70940]: Server barbican/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:13 standalone.localdomain haproxy[70940]: proxy barbican has no server available!
Oct 13 13:57:13 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:57:13 standalone.localdomain haproxy[70940]: Server cinder/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:13 standalone.localdomain haproxy[70940]: proxy cinder has no server available!
Oct 13 13:57:13 standalone.localdomain systemd-sysv-generator[71020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:57:13 standalone.localdomain systemd-rc-local-generator[71013]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:57:13 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:57:13 standalone.localdomain haproxy[70940]: Server glance_api/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 9ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:13 standalone.localdomain haproxy[70940]: proxy glance_api has no server available!
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: Server glance_api_internal/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: proxy glance_api_internal has no server available!
Oct 13 13:57:14 standalone.localdomain podman[71030]: 2025-10-13 13:57:14.049467477 +0000 UTC m=+0.084203048 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-haproxy, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.component=openstack-haproxy-container, description=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T13:08:11, distribution-scope=public)
Oct 13 13:57:14 standalone.localdomain podman[71030]: 2025-10-13 13:57:14.052659005 +0000 UTC m=+0.087394576 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, name=rhosp17/openstack-haproxy, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, com.redhat.component=openstack-haproxy-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:08:11, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, summary=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: Server heat_api/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: proxy heat_api has no server available!
Oct 13 13:57:14 standalone.localdomain podman[71095]: 2025-10-13 13:57:14.163913494 +0000 UTC m=+0.091079219 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, version=17.1.9, description=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.component=openstack-haproxy-container, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, distribution-scope=public, build-date=2025-07-21T13:08:11, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-haproxy, release=1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 13:57:14 standalone.localdomain podman[71095]: 2025-10-13 13:57:14.192288705 +0000 UTC m=+0.119454420 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.openshift.expose-services=, build-date=2025-07-21T13:08:11, com.redhat.component=openstack-haproxy-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, batch=17.1_20250721.1)
Oct 13 13:57:14 standalone.localdomain podman(haproxy-bundle-podman-0)[71133]: NOTICE: Container haproxy-bundle-podman-0  started successfully
Oct 13 13:57:14 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for haproxy-bundle-podman-0 on standalone: ok
Oct 13 13:57:14 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation haproxy-bundle-podman-0_monitor_60000 locally on standalone
Oct 13 13:57:14 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for haproxy-bundle-podman-0 on standalone
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: Server heat_cfn/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: proxy heat_cfn has no server available!
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: Backup Server mysql/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: proxy mysql has no server available!
Oct 13 13:57:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:57:14 standalone.localdomain podman[71139]: 2025-10-13 13:57:14.296138426 +0000 UTC m=+0.059144128 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-haproxy-container, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, build-date=2025-07-21T13:08:11, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 13:57:14 standalone.localdomain podman[71139]: 2025-10-13 13:57:14.323645592 +0000 UTC m=+0.086651304 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-haproxy-container, name=rhosp17/openstack-haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, version=17.1.9, build-date=2025-07-21T13:08:11, tcib_managed=true, release=1, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=)
Oct 13 13:57:14 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for haproxy-bundle-podman-0 on standalone: ok
Oct 13 13:57:14 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 25 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-25.bz2): Complete
Oct 13 13:57:14 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: Server horizon/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: proxy horizon has no server available!
Oct 13 13:57:14 standalone.localdomain ceph-mon[29756]: pgmap v663: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: Server keystone_admin/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: proxy keystone_admin has no server available!
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: Server keystone_public/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: proxy keystone_public has no server available!
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: Server manila/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: proxy manila has no server available!
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: Server neutron/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:14 standalone.localdomain haproxy[70940]: proxy neutron has no server available!
Oct 13 13:57:15 standalone.localdomain haproxy[70940]: Server nova_metadata/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:15 standalone.localdomain haproxy[70940]: proxy nova_metadata has no server available!
Oct 13 13:57:15 standalone.localdomain haproxy[70940]: Server nova_novncproxy/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:15 standalone.localdomain haproxy[70940]: proxy nova_novncproxy has no server available!
Oct 13 13:57:15 standalone.localdomain haproxy[70940]: Server nova_osapi/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:15 standalone.localdomain haproxy[70940]: proxy nova_osapi has no server available!
Oct 13 13:57:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:15 standalone.localdomain haproxy[70940]: Server placement/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:15 standalone.localdomain haproxy[70940]: proxy placement has no server available!
Oct 13 13:57:15 standalone.localdomain haproxy[70940]: Server swift_proxy_server/standalone.storage.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 13:57:15 standalone.localdomain haproxy[70940]: proxy swift_proxy_server has no server available!
Oct 13 13:57:15 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Check health
Oct 13 13:57:16 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:16 standalone.localdomain ceph-mon[29756]: pgmap v664: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:16 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 26, saving inputs in /var/lib/pacemaker/pengine/pe-input-26.bz2
Oct 13 13:57:16 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 26 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-26.bz2): Complete
Oct 13 13:57:16 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:18 standalone.localdomain ceph-mon[29756]: pgmap v665: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:18 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Oct 13 13:57:18 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Oct 13 13:57:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:57:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:19 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct 13 13:57:19 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct 13 13:57:19 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:19 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 27, saving inputs in /var/lib/pacemaker/pengine/pe-input-27.bz2
Oct 13 13:57:19 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 27 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-27.bz2): Complete
Oct 13 13:57:19 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:20 standalone.localdomain ceph-mon[29756]: 2.13 scrub starts
Oct 13 13:57:20 standalone.localdomain ceph-mon[29756]: 2.13 scrub ok
Oct 13 13:57:20 standalone.localdomain ceph-mon[29756]: pgmap v666: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:20 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.1b deep-scrub starts
Oct 13 13:57:20 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.1b deep-scrub ok
Oct 13 13:57:21 standalone.localdomain ceph-mon[29756]: 3.1a scrub starts
Oct 13 13:57:21 standalone.localdomain ceph-mon[29756]: 3.1a scrub ok
Oct 13 13:57:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v667: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:22 standalone.localdomain ceph-mon[29756]: 3.1b deep-scrub starts
Oct 13 13:57:22 standalone.localdomain ceph-mon[29756]: 3.1b deep-scrub ok
Oct 13 13:57:22 standalone.localdomain ceph-mon[29756]: pgmap v667: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:22 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:22 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 28, saving inputs in /var/lib/pacemaker/pengine/pe-input-28.bz2
Oct 13 13:57:22 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 28 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-28.bz2): Complete
Oct 13 13:57:22 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:57:23
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', 'backups', '.mgr', 'manila_metadata', 'manila_data', 'vms', 'images']
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 13:57:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v668: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:57:24 standalone.localdomain ceph-mon[29756]: pgmap v668: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:24 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Oct 13 13:57:24 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Oct 13 13:57:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:25 standalone.localdomain ceph-mon[29756]: 2.14 scrub starts
Oct 13 13:57:25 standalone.localdomain ceph-mon[29756]: 2.14 scrub ok
Oct 13 13:57:25 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:25 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 29, saving inputs in /var/lib/pacemaker/pengine/pe-input-29.bz2
Oct 13 13:57:25 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 29 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-29.bz2): Complete
Oct 13 13:57:25 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:26 standalone.localdomain ceph-mon[29756]: pgmap v669: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:26 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.1c deep-scrub starts
Oct 13 13:57:26 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.1c deep-scrub ok
Oct 13 13:57:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:27 standalone.localdomain ceph-mon[29756]: 3.1c deep-scrub starts
Oct 13 13:57:27 standalone.localdomain ceph-mon[29756]: 3.1c deep-scrub ok
Oct 13 13:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 13:57:28 standalone.localdomain ceph-mon[29756]: pgmap v670: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:57:29 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:29 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 30, saving inputs in /var/lib/pacemaker/pengine/pe-input-30.bz2
Oct 13 13:57:29 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 30 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-30.bz2): Complete
Oct 13 13:57:29 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:30 standalone.localdomain ceph-mon[29756]: pgmap v671: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:30 standalone.localdomain sudo[71475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:57:30 standalone.localdomain sudo[71475]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:57:30 standalone.localdomain sudo[71475]: pam_unix(sudo:session): session closed for user root
Oct 13 13:57:30 standalone.localdomain sudo[71490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Oct 13 13:57:30 standalone.localdomain sudo[71490]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:57:31 standalone.localdomain sudo[71490]: pam_unix(sudo:session): session closed for user root
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:57:31 standalone.localdomain sudo[71530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:57:31 standalone.localdomain sudo[71530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:57:31 standalone.localdomain sudo[71530]: pam_unix(sudo:session): session closed for user root
Oct 13 13:57:31 standalone.localdomain sudo[71545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:57:31 standalone.localdomain sudo[71545]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:57:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:31 standalone.localdomain sudo[71545]: pam_unix(sudo:session): session closed for user root
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 13 13:57:31 standalone.localdomain ceph-mgr[29999]: [cephadm INFO root] Adjusting osd_memory_target on standalone.localdomain to  1673M
Oct 13 13:57:31 standalone.localdomain ceph-mgr[29999]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on standalone.localdomain to  1673M
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:57:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:57:31 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 69f0649d-1c3a-4c28-8419-eb6b597db543 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:57:31 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 69f0649d-1c3a-4c28-8419-eb6b597db543 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:57:31 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 69f0649d-1c3a-4c28-8419-eb6b597db543 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:57:31 standalone.localdomain sudo[71594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:57:31 standalone.localdomain sudo[71594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:57:31 standalone.localdomain sudo[71594]: pam_unix(sudo:session): session closed for user root
Oct 13 13:57:32 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:57:32 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:57:32 standalone.localdomain ceph-mon[29756]: pgmap v672: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:32 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Oct 13 13:57:32 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:57:32 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:57:32 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:57:32 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:57:32 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:57:32 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:32 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 31, saving inputs in /var/lib/pacemaker/pengine/pe-input-31.bz2
Oct 13 13:57:32 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 31 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-31.bz2): Complete
Oct 13 13:57:32 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:33 standalone.localdomain ceph-mon[29756]: Adjusting osd_memory_target on standalone.localdomain to  1673M
Oct 13 13:57:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:33 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Oct 13 13:57:33 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Oct 13 13:57:33 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 35 completed events
Oct 13 13:57:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:57:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:57:34 standalone.localdomain ceph-mon[29756]: pgmap v673: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:57:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:57:34 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct 13 13:57:34 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct 13 13:57:35 standalone.localdomain ceph-mon[29756]: 2.15 scrub starts
Oct 13 13:57:35 standalone.localdomain ceph-mon[29756]: 2.15 scrub ok
Oct 13 13:57:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v674: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:35 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:35 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 32, saving inputs in /var/lib/pacemaker/pengine/pe-input-32.bz2
Oct 13 13:57:35 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 32 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-32.bz2): Complete
Oct 13 13:57:35 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 13:57:35 standalone.localdomain podman[71841]: 2025-10-13 13:57:35.833617843 +0000 UTC m=+0.097033322 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=memcached, summary=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-memcached-container, io.openshift.expose-services=, batch=17.1_20250721.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, vcs-type=git, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, name=rhosp17/openstack-memcached, description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12)
Oct 13 13:57:35 standalone.localdomain podman[71841]: 2025-10-13 13:57:35.884326091 +0000 UTC m=+0.147741570 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, com.redhat.component=openstack-memcached-container, name=rhosp17/openstack-memcached, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, architecture=x86_64, container_name=memcached, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, build-date=2025-07-21T12:58:43, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, version=17.1.9)
Oct 13 13:57:35 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 13:57:36 standalone.localdomain ceph-mon[29756]: 3.1d scrub starts
Oct 13 13:57:36 standalone.localdomain ceph-mon[29756]: 3.1d scrub ok
Oct 13 13:57:36 standalone.localdomain ceph-mon[29756]: pgmap v674: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v675: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:38 standalone.localdomain ceph-mon[29756]: pgmap v675: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:38 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:38 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 33, saving inputs in /var/lib/pacemaker/pengine/pe-input-33.bz2
Oct 13 13:57:38 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 33 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-33.bz2): Complete
Oct 13 13:57:38 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:57:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:39 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 13 13:57:39 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 13 13:57:40 standalone.localdomain ceph-mon[29756]: pgmap v676: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:41 standalone.localdomain ceph-mon[29756]: 3.1e scrub starts
Oct 13 13:57:41 standalone.localdomain ceph-mon[29756]: 3.1e scrub ok
Oct 13 13:57:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:41 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:41 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 34, saving inputs in /var/lib/pacemaker/pengine/pe-input-34.bz2
Oct 13 13:57:41 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 34 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-34.bz2): Complete
Oct 13 13:57:41 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:42 standalone.localdomain ceph-mon[29756]: pgmap v677: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:42 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 13 13:57:42 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 13 13:57:43 standalone.localdomain ceph-mon[29756]: 3.1f scrub starts
Oct 13 13:57:43 standalone.localdomain ceph-mon[29756]: 3.1f scrub ok
Oct 13 13:57:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:57:44 standalone.localdomain ceph-mon[29756]: pgmap v678: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:44 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct 13 13:57:44 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct 13 13:57:44 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 35, saving inputs in /var/lib/pacemaker/pengine/pe-input-35.bz2
Oct 13 13:57:44 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 35 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-35.bz2): Complete
Oct 13 13:57:44 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:57:45 standalone.localdomain ceph-mon[29756]: 2.16 scrub starts
Oct 13 13:57:45 standalone.localdomain ceph-mon[29756]: 2.16 scrub ok
Oct 13 13:57:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v679: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:45 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct 13 13:57:45 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct 13 13:57:45 standalone.localdomain python3[72129]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/haproxy.md5sum follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:57:45 standalone.localdomain python3[72135]: ansible-tripleo_diff_exec Invoked with command=/var/lib/container-config-scripts/pacemaker_restart_bundle.sh haproxy haproxy-bundle haproxy-bundle Started state_file=/var/lib/config-data/puppet-generated/haproxy.md5sum state_file_suffix=.previous_run environment={'TRIPLEO_MINOR_UPDATE': '', 'TRIPLEO_HA_WRAPPER_RESOURCE_EXISTS': 'False'} return_codes=[0]
Oct 13 13:57:46 standalone.localdomain pcmkrestart[72140]: Initial deployment, skipping the restart of haproxy-bundle
Oct 13 13:57:46 standalone.localdomain ceph-mon[29756]: pgmap v679: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:46 standalone.localdomain ceph-mon[29756]: 2.17 scrub starts
Oct 13 13:57:46 standalone.localdomain ceph-mon[29756]: 2.17 scrub ok
Oct 13 13:57:46 standalone.localdomain python3[72144]: ansible-stat Invoked with path=/tmp/tripleo_ha_image_haproxy-bundle follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:57:47 standalone.localdomain python3[72160]: ansible-ansible.legacy.command Invoked with _raw_params=pcs resource config "galera-bundle" _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:57:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:48 standalone.localdomain python3[72166]: ansible-ansible.legacy.command Invoked with _raw_params=puppet apply  --detailed-exitcodes --summarize --color=false --modulepath '/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules' --tags 'pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation' -e '["Mysql_datadir", "Mysql_user", "Mysql_database", "Mysql_grant", "Mysql_plugin"].each |String $val| { noop_resource($val) }; include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::database::mysql_bundle' _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:57:48 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Oct 13 13:57:48 standalone.localdomain ceph-mon[29756]: pgmap v680: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:48 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Oct 13 13:57:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:57:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v681: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:50 standalone.localdomain ceph-mon[29756]: 2.18 scrub starts
Oct 13 13:57:50 standalone.localdomain ceph-mon[29756]: 2.18 scrub ok
Oct 13 13:57:50 standalone.localdomain ceph-mon[29756]: pgmap v681: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:52 standalone.localdomain ceph-mon[29756]: pgmap v682: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:57:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:57:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:57:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:57:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:57:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:57:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:57:54 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.19 deep-scrub starts
Oct 13 13:57:54 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.19 deep-scrub ok
Oct 13 13:57:54 standalone.localdomain ceph-mon[29756]: pgmap v683: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:55 standalone.localdomain ceph-mon[29756]: 2.19 deep-scrub starts
Oct 13 13:57:55 standalone.localdomain ceph-mon[29756]: 2.19 deep-scrub ok
Oct 13 13:57:56 standalone.localdomain ceph-mon[29756]: pgmap v684: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:58 standalone.localdomain ceph-mon[29756]: pgmap v685: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:58 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct 13 13:57:58 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct 13 13:57:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:57:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:57:59 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:57:59 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 36, saving inputs in /var/lib/pacemaker/pengine/pe-input-36.bz2
Oct 13 13:57:59 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 36 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-36.bz2): Complete
Oct 13 13:57:59 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:58:00 standalone.localdomain ceph-mon[29756]: 2.1a scrub starts
Oct 13 13:58:00 standalone.localdomain ceph-mon[29756]: 2.1a scrub ok
Oct 13 13:58:00 standalone.localdomain ceph-mon[29756]: pgmap v686: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v687: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:02 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct 13 13:58:02 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct 13 13:58:02 standalone.localdomain ceph-mon[29756]: pgmap v687: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:03 standalone.localdomain ceph-mon[29756]: 2.1b scrub starts
Oct 13 13:58:03 standalone.localdomain ceph-mon[29756]: 2.1b scrub ok
Oct 13 13:58:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:58:04 standalone.localdomain ceph-mon[29756]: pgmap v688: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:04 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:58:04 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 37, saving inputs in /var/lib/pacemaker/pengine/pe-input-37.bz2
Oct 13 13:58:04 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation galera-bundle-podman-0_monitor_0 locally on standalone
Oct 13 13:58:04 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of probe operation for galera-bundle-podman-0 on standalone
Oct 13 13:58:05 standalone.localdomain pacemaker-controld[57911]:  notice: Result of probe operation for galera-bundle-podman-0 on standalone: not running
Oct 13 13:58:05 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 37 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-37.bz2): Complete
Oct 13 13:58:05 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:58:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:06 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct 13 13:58:06 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct 13 13:58:06 standalone.localdomain ceph-mon[29756]: pgmap v689: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:06 standalone.localdomain ceph-mon[29756]: 2.1c scrub starts
Oct 13 13:58:06 standalone.localdomain ceph-mon[29756]: 2.1c scrub ok
Oct 13 13:58:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 13:58:06 standalone.localdomain systemd[1]: tmp-crun.kteCq7.mount: Deactivated successfully.
Oct 13 13:58:06 standalone.localdomain podman[72798]: 2025-10-13 13:58:06.821065931 +0000 UTC m=+0.086779538 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., config_id=tripleo_step1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, name=rhosp17/openstack-memcached, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, architecture=x86_64, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, batch=17.1_20250721.1, com.redhat.component=openstack-memcached-container, container_name=memcached)
Oct 13 13:58:06 standalone.localdomain podman[72798]: 2025-10-13 13:58:06.847065439 +0000 UTC m=+0.112779036 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 memcached, release=1, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.component=openstack-memcached-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.openshift.expose-services=, container_name=memcached, distribution-scope=public, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:43, vendor=Red Hat, Inc., vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, tcib_managed=true, name=rhosp17/openstack-memcached)
Oct 13 13:58:06 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 13:58:07 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:58:07 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 38, saving inputs in /var/lib/pacemaker/pengine/pe-input-38.bz2
Oct 13 13:58:07 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 38 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-38.bz2): Complete
Oct 13 13:58:07 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:58:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:08 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Oct 13 13:58:08 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Oct 13 13:58:08 standalone.localdomain ceph-mon[29756]: pgmap v690: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:08 standalone.localdomain ceph-mon[29756]: 2.1d scrub starts
Oct 13 13:58:08 standalone.localdomain ceph-mon[29756]: 2.1d scrub ok
Oct 13 13:58:09 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:58:09 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      galera-bundle-podman-0      ( standalone )
Oct 13 13:58:09 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 39, saving inputs in /var/lib/pacemaker/pengine/pe-input-39.bz2
Oct 13 13:58:09 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation galera-bundle-podman-0_start_0 locally on standalone
Oct 13 13:58:09 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for galera-bundle-podman-0 on standalone
Oct 13 13:58:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:58:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:09 standalone.localdomain podman(galera-bundle-podman-0)[72889]: INFO: running container galera-bundle-podman-0 for the first time
Oct 13 13:58:09 standalone.localdomain podman[72893]: 2025-10-13 13:58:09.52436448 +0000 UTC m=+0.060390627 container create 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:58:09 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:58:09 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d626563e53c89bef652db085e448e67d77bfce0754c19eda902493a58ac4abd1/merged/var/log/mariadb supports timestamps until 2038 (0x7fffffff)
Oct 13 13:58:09 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d626563e53c89bef652db085e448e67d77bfce0754c19eda902493a58ac4abd1/merged/var/log/mysql supports timestamps until 2038 (0x7fffffff)
Oct 13 13:58:09 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d626563e53c89bef652db085e448e67d77bfce0754c19eda902493a58ac4abd1/merged/var/lib/mysql supports timestamps until 2038 (0x7fffffff)
Oct 13 13:58:09 standalone.localdomain podman[72893]: 2025-10-13 13:58:09.575933254 +0000 UTC m=+0.111959441 container init 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-mariadb-container, build-date=2025-07-21T12:58:45, distribution-scope=public, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64)
Oct 13 13:58:09 standalone.localdomain podman[72893]: 2025-10-13 13:58:09.585506539 +0000 UTC m=+0.121532706 container start 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45)
Oct 13 13:58:09 standalone.localdomain podman[72893]: 2025-10-13 13:58:09.499784754 +0000 UTC m=+0.035810941 image pull  cluster.common.tag/mariadb:pcmklatest
Oct 13 13:58:09 standalone.localdomain podman(galera-bundle-podman-0)[72918]: INFO: 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3
Oct 13 13:58:09 standalone.localdomain sudo[72913]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 13:58:09 standalone.localdomain sudo[72913]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:58:09 standalone.localdomain sudo[72913]: pam_unix(sudo:session): session closed for user root
Oct 13 13:58:09 standalone.localdomain pacemaker-remoted[72912]:  notice: Additional logging available in /var/log/pacemaker/pacemaker.log
Oct 13 13:58:09 standalone.localdomain pacemaker-remoted[72912]:  notice: Starting Pacemaker remote executor
Oct 13 13:58:09 standalone.localdomain podman(galera-bundle-podman-0)[72940]: INFO: Creating drop-in dependency for "galera-bundle-podman-0" (95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3)
Oct 13 13:58:09 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:58:09 standalone.localdomain systemd-rc-local-generator[72970]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:58:09 standalone.localdomain systemd-sysv-generator[72973]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:58:09 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:58:09 standalone.localdomain pacemaker-remoted[72912]:  warning: Could not read Pacemaker Remote key from default location /etc/pacemaker/authkey (or fallback location /etc/corosync/authkey): No such file or directory
Oct 13 13:58:09 standalone.localdomain pacemaker-remoted[72912]:  warning: A cluster connection will not be possible until the key is available
Oct 13 13:58:09 standalone.localdomain pacemaker-remoted[72912]:  notice: Pacemaker remote executor successfully started and accepting connections
Oct 13 13:58:09 standalone.localdomain pacemaker-remoted[72912]:  notice: OCF resource agent search path is /usr/lib/ocf/resource.d
Oct 13 13:58:10 standalone.localdomain podman[72988]: 2025-10-13 13:58:10.096446347 +0000 UTC m=+0.103615805 container exec 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-mariadb-container)
Oct 13 13:58:10 standalone.localdomain podman[72988]: 2025-10-13 13:58:10.129963757 +0000 UTC m=+0.137133215 container exec_died 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, release=1, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, build-date=2025-07-21T12:58:45, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-mariadb-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 13:58:10 standalone.localdomain podman[73018]: 2025-10-13 13:58:10.231154746 +0000 UTC m=+0.082561537 container exec 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, batch=17.1_20250721.1, name=rhosp17/openstack-mariadb, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, architecture=x86_64, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 13:58:10 standalone.localdomain podman[73018]: 2025-10-13 13:58:10.260066875 +0000 UTC m=+0.111473666 container exec_died 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vendor=Red Hat, Inc., name=rhosp17/openstack-mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, release=1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 13:58:10 standalone.localdomain podman(galera-bundle-podman-0)[73052]: NOTICE: Container galera-bundle-podman-0  started successfully
Oct 13 13:58:10 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for galera-bundle-podman-0 on standalone: ok
Oct 13 13:58:10 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation galera-bundle-podman-0_monitor_60000 locally on standalone
Oct 13 13:58:10 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for galera-bundle-podman-0 on standalone
Oct 13 13:58:10 standalone.localdomain ceph-mon[29756]: pgmap v691: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:10 standalone.localdomain podman[73059]: 2025-10-13 13:58:10.394586018 +0000 UTC m=+0.073329365 container exec 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, summary=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, name=rhosp17/openstack-mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-mariadb-container, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 13:58:10 standalone.localdomain podman[73059]: 2025-10-13 13:58:10.422918828 +0000 UTC m=+0.101662215 container exec_died 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, com.redhat.component=openstack-mariadb-container, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:58:10 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for galera-bundle-podman-0 on standalone: ok
Oct 13 13:58:10 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 39 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-39.bz2): Complete
Oct 13 13:58:10 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:58:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:12 standalone.localdomain ceph-mon[29756]: pgmap v692: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:13 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct 13 13:58:13 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct 13 13:58:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:14 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.289873) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363894289925, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1105, "num_deletes": 254, "total_data_size": 820268, "memory_usage": 840984, "flush_reason": "Manual Compaction"}
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363894296534, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 804268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15483, "largest_seqno": 16587, "table_properties": {"data_size": 799510, "index_size": 2167, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11513, "raw_average_key_size": 19, "raw_value_size": 789104, "raw_average_value_size": 1339, "num_data_blocks": 98, "num_entries": 589, "num_filter_entries": 589, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760363821, "oldest_key_time": 1760363821, "file_creation_time": 1760363894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 6708 microseconds, and 2681 cpu microseconds.
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.296581) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 804268 bytes OK
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.296604) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.300552) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.300571) EVENT_LOG_v1 {"time_micros": 1760363894300565, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.300588) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 814895, prev total WAL file size 815384, number of live WAL files 2.
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.301202) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(785KB)], [35(3552KB)]
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363894301263, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 4441540, "oldest_snapshot_seqno": -1}
Oct 13 13:58:14 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Restart    galera-bundle-podman-0      ( standalone )  due to resource definition change
Oct 13 13:58:14 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      galera-bundle-0             ( standalone )
Oct 13 13:58:14 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      galera:0                    ( galera-bundle-0 )
Oct 13 13:58:14 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 40, saving inputs in /var/lib/pacemaker/pengine/pe-input-40.bz2
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3037 keys, 4293131 bytes, temperature: kUnknown
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363894324833, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 4293131, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4271171, "index_size": 13288, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7621, "raw_key_size": 70764, "raw_average_key_size": 23, "raw_value_size": 4214631, "raw_average_value_size": 1387, "num_data_blocks": 581, "num_entries": 3037, "num_filter_entries": 3037, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760363894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 13:58:14 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation galera-bundle-podman-0_stop_0 locally on standalone
Oct 13 13:58:14 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for galera-bundle-podman-0 on standalone
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.325044) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 4293131 bytes
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.327894) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.0 rd, 181.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 3.5 +0.0 blob) out(4.1 +0.0 blob), read-write-amplify(10.9) write-amplify(5.3) OK, records in: 3569, records dropped: 532 output_compression: NoCompression
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.327943) EVENT_LOG_v1 {"time_micros": 1760363894327925, "job": 16, "event": "compaction_finished", "compaction_time_micros": 23622, "compaction_time_cpu_micros": 11047, "output_level": 6, "num_output_files": 1, "total_output_size": 4293131, "num_input_records": 3569, "num_output_records": 3037, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363894328319, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363894328869, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.301144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.329025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.329030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.329032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.329034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:58:14.329036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:58:14 standalone.localdomain podman[73299]: 2025-10-13 13:58:14.411235069 +0000 UTC m=+0.063554958 container exec 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, release=1, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 13:58:14 standalone.localdomain podman[73312]: 2025-10-13 13:58:14.427549644 +0000 UTC m=+0.068554182 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, summary=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:08:11, description=Red Hat OpenStack Platform 17.1 haproxy, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-haproxy-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, version=17.1.9, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 13:58:14 standalone.localdomain podman[73299]: 2025-10-13 13:58:14.444763297 +0000 UTC m=+0.097083166 container exec_died 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, com.redhat.component=openstack-mariadb-container, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 13:58:14 standalone.localdomain podman[73312]: 2025-10-13 13:58:14.457807071 +0000 UTC m=+0.098811629 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-haproxy, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T13:08:11, description=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.component=openstack-haproxy-container, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: 2.1e scrub starts
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: 2.1e scrub ok
Oct 13 13:58:14 standalone.localdomain ceph-mon[29756]: pgmap v693: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:14 standalone.localdomain pacemaker-remoted[72912]:  notice: Caught 'Terminated' signal
Oct 13 13:58:14 standalone.localdomain systemd[1]: libpod-95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3.scope: Deactivated successfully.
Oct 13 13:58:14 standalone.localdomain podman[73396]: 2025-10-13 13:58:14.517498798 +0000 UTC m=+0.048815992 container stop 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-mariadb-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:45, name=rhosp17/openstack-mariadb, architecture=x86_64, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:58:14 standalone.localdomain podman[73396]: 2025-10-13 13:58:14.549668944 +0000 UTC m=+0.080986118 container died 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-mariadb, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, build-date=2025-07-21T12:58:45, description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 13:58:14 standalone.localdomain podman[73396]: 2025-10-13 13:58:14.575840724 +0000 UTC m=+0.107157898 container cleanup 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, build-date=2025-07-21T12:58:45, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=1, vcs-type=git, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, batch=17.1_20250721.1)
Oct 13 13:58:14 standalone.localdomain podman(galera-bundle-podman-0)[73426]: INFO: galera-bundle-podman-0
Oct 13 13:58:14 standalone.localdomain podman[73409]: 2025-10-13 13:58:14.592331594 +0000 UTC m=+0.064231189 container cleanup 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-mariadb-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:45, name=rhosp17/openstack-mariadb, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 13:58:14 standalone.localdomain podman(galera-bundle-podman-0)[73449]: NOTICE: Cleaning up inactive container, galera-bundle-podman-0.
Oct 13 13:58:14 standalone.localdomain podman[73453]: 2025-10-13 13:58:14.730417988 +0000 UTC m=+0.063159875 container remove 95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3 (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T12:58:45, release=1, architecture=x86_64, name=rhosp17/openstack-mariadb, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 13:58:14 standalone.localdomain podman(galera-bundle-podman-0)[73476]: INFO: galera-bundle-podman-0
Oct 13 13:58:14 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for galera-bundle-podman-0 on standalone: ok
Oct 13 13:58:14 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation galera-bundle-podman-0_start_0 locally on standalone
Oct 13 13:58:14 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for galera-bundle-podman-0 on standalone
Oct 13 13:58:14 standalone.localdomain python3[73471]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/mysql.md5sum follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:58:14 standalone.localdomain podman(galera-bundle-podman-0)[73532]: INFO: running container galera-bundle-podman-0 for the first time
Oct 13 13:58:15 standalone.localdomain podman[73536]: 2025-10-13 13:58:15.05872526 +0000 UTC m=+0.095654512 container create cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, version=17.1.9, build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 13:58:15 standalone.localdomain podman[73536]: 2025-10-13 13:58:15.010791806 +0000 UTC m=+0.047721108 image pull  cluster.common.tag/mariadb:pcmklatest
Oct 13 13:58:15 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:58:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d576005e5a03efdde860f6e828ca520a2e06f29826e732542b52aab0e5905866/merged/var/log supports timestamps until 2038 (0x7fffffff)
Oct 13 13:58:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d576005e5a03efdde860f6e828ca520a2e06f29826e732542b52aab0e5905866/merged/var/log/mariadb supports timestamps until 2038 (0x7fffffff)
Oct 13 13:58:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d576005e5a03efdde860f6e828ca520a2e06f29826e732542b52aab0e5905866/merged/var/lib/mysql supports timestamps until 2038 (0x7fffffff)
Oct 13 13:58:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d576005e5a03efdde860f6e828ca520a2e06f29826e732542b52aab0e5905866/merged/var/log/mysql supports timestamps until 2038 (0x7fffffff)
Oct 13 13:58:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d576005e5a03efdde860f6e828ca520a2e06f29826e732542b52aab0e5905866/merged/etc/pacemaker/authkey supports timestamps until 2038 (0x7fffffff)
Oct 13 13:58:15 standalone.localdomain podman[73536]: 2025-10-13 13:58:15.12819237 +0000 UTC m=+0.165121622 container init cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, tcib_managed=true, release=1, build-date=2025-07-21T12:58:45, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, io.openshift.expose-services=, architecture=x86_64)
Oct 13 13:58:15 standalone.localdomain podman[73536]: 2025-10-13 13:58:15.136051633 +0000 UTC m=+0.172980885 container start cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, com.redhat.component=openstack-mariadb-container, architecture=x86_64, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:45, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, distribution-scope=public, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, batch=17.1_20250721.1, vendor=Red Hat, Inc.)
Oct 13 13:58:15 standalone.localdomain podman(galera-bundle-podman-0)[73562]: INFO: cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c
Oct 13 13:58:15 standalone.localdomain sudo[73557]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 13:58:15 standalone.localdomain sudo[73557]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:58:15 standalone.localdomain python3[73549]: ansible-tripleo_diff_exec Invoked with command=/var/lib/container-config-scripts/pacemaker_restart_bundle.sh mysql galera galera-bundle Master state_file=/var/lib/config-data/puppet-generated/mysql.md5sum state_file_suffix=.previous_run environment={'TRIPLEO_MINOR_UPDATE': '', 'TRIPLEO_HA_WRAPPER_RESOURCE_EXISTS': 'False'} return_codes=[0]
Oct 13 13:58:15 standalone.localdomain sudo[73557]: pam_unix(sudo:session): session closed for user root
Oct 13 13:58:15 standalone.localdomain pacemaker-remoted[73556]:  warning: Logging to '/var/log/pacemaker/pacemaker.log' is disabled: No such file or directory
Oct 13 13:58:15 standalone.localdomain pacemaker-remoted[73556]:  notice: Starting Pacemaker remote executor
Oct 13 13:58:15 standalone.localdomain podman(galera-bundle-podman-0)[73587]: INFO: Creating drop-in dependency for "galera-bundle-podman-0" (cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c)
Oct 13 13:58:15 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:58:15 standalone.localdomain pacemaker-remoted[73556]:  notice: Pacemaker remote executor successfully started and accepting connections
Oct 13 13:58:15 standalone.localdomain pacemaker-remoted[73556]:  notice: OCF resource agent search path is /usr/lib/ocf/resource.d
Oct 13 13:58:15 standalone.localdomain pcmkrestart[73595]: Initial deployment, skipping the restart of galera-bundle
Oct 13 13:58:15 standalone.localdomain systemd-rc-local-generator[73612]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:58:15 standalone.localdomain systemd-sysv-generator[73618]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:58:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:15 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:58:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d626563e53c89bef652db085e448e67d77bfce0754c19eda902493a58ac4abd1-merged.mount: Deactivated successfully.
Oct 13 13:58:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-95a57e38a22e5f1a2bc13c81dae8b9a31eea0d43ac19f5b59fda68adea6ec6e3-userdata-shm.mount: Deactivated successfully.
Oct 13 13:58:15 standalone.localdomain python3[73634]: ansible-stat Invoked with path=/tmp/tripleo_ha_image_galera-bundle follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:58:15 standalone.localdomain podman[73636]: 2025-10-13 13:58:15.667643197 +0000 UTC m=+0.101975988 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, vcs-type=git, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64)
Oct 13 13:58:15 standalone.localdomain podman[73636]: 2025-10-13 13:58:15.700859765 +0000 UTC m=+0.135192526 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.component=openstack-mariadb-container, distribution-scope=public, architecture=x86_64)
Oct 13 13:58:15 standalone.localdomain systemd[1]: tmp-crun.Lcl0Nc.mount: Deactivated successfully.
Oct 13 13:58:15 standalone.localdomain podman[73666]: 2025-10-13 13:58:15.817921748 +0000 UTC m=+0.093164735 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, name=rhosp17/openstack-mariadb, version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, batch=17.1_20250721.1, vcs-type=git, io.openshift.expose-services=)
Oct 13 13:58:15 standalone.localdomain podman[73666]: 2025-10-13 13:58:15.851101934 +0000 UTC m=+0.126344911 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:45, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 13:58:15 standalone.localdomain podman(galera-bundle-podman-0)[73705]: NOTICE: Container galera-bundle-podman-0  started successfully
Oct 13 13:58:15 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for galera-bundle-podman-0 on standalone: ok
Oct 13 13:58:15 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation galera-bundle-podman-0_monitor_60000 locally on standalone
Oct 13 13:58:15 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for galera-bundle-podman-0 on standalone
Oct 13 13:58:15 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation galera-bundle-0_monitor_0 locally on standalone
Oct 13 13:58:15 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of probe operation for galera-bundle-0 on standalone
Oct 13 13:58:15 standalone.localdomain pacemaker-controld[57911]:  notice: Result of probe operation for galera-bundle-0 on standalone: not running (Remote connection inactive)
Oct 13 13:58:15 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation galera-bundle-0_start_0 locally on standalone
Oct 13 13:58:15 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for galera-bundle-0 on standalone
Oct 13 13:58:15 standalone.localdomain podman[73714]: 2025-10-13 13:58:15.99212534 +0000 UTC m=+0.087578902 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, build-date=2025-07-21T12:58:45, architecture=x86_64, io.buildah.version=1.33.12, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.expose-services=, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-mariadb, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 13:58:16 standalone.localdomain podman[73714]: 2025-10-13 13:58:16.026583526 +0000 UTC m=+0.122037168 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-mariadb-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb)
Oct 13 13:58:16 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for galera-bundle-podman-0 on standalone: ok
Oct 13 13:58:16 standalone.localdomain python3[73756]: ansible-ansible.legacy.command Invoked with _raw_params=pcs resource config "rabbitmq-bundle"
                                                        _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:58:16 standalone.localdomain pacemaker-remoted[73556]:  notice: Remote client connection accepted
Oct 13 13:58:16 standalone.localdomain ceph-mon[29756]: pgmap v694: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:16 standalone.localdomain pacemaker-controld[57911]:  notice: Node galera-bundle-0 state is now member
Oct 13 13:58:16 standalone.localdomain pacemaker-attrd[57909]:  notice: Removing all galera-bundle-0 attributes for peer standalone
Oct 13 13:58:16 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 40 aborted: Pacemaker Remote node integrated
Oct 13 13:58:16 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for galera-bundle-0 on standalone: ok
Oct 13 13:58:16 standalone.localdomain pacemaker-fenced[57907]:  notice: Node galera-bundle-0 state is now member
Oct 13 13:58:16 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 40 (Complete=8, Pending=0, Fired=0, Skipped=2, Incomplete=8, Source=/var/lib/pacemaker/pengine/pe-input-40.bz2): Stopped
Oct 13 13:58:16 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      galera:0                    ( galera-bundle-0 )
Oct 13 13:58:16 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 41, saving inputs in /var/lib/pacemaker/pengine/pe-input-41.bz2
Oct 13 13:58:16 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation galera:0_monitor_0 locally on galera-bundle-0
Oct 13 13:58:16 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation galera-bundle-0_monitor_30000 locally on standalone
Oct 13 13:58:16 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for galera-bundle-0 on standalone
Oct 13 13:58:16 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of probe operation for galera on galera-bundle-0
Oct 13 13:58:16 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for galera-bundle-0 on standalone: ok
Oct 13 13:58:16 standalone.localdomain galera(galera)[73804]: INFO: MySQL is not running
Oct 13 13:58:17 standalone.localdomain python3[73812]: ansible-ansible.legacy.command Invoked with _raw_params=puppet apply  --detailed-exitcodes --summarize --color=false --modulepath '/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules' --tags 'pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation' -e '["Rabbitmq_policy", "Rabbitmq_user"].each |String $val| { noop_resource($val) }; include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::rabbitmq_bundle'
                                                        _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:58:17 standalone.localdomain pacemaker-controld[57911]:  notice: Result of probe operation for galera on galera-bundle-0: not running
Oct 13 13:58:17 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation galera:0_start_0 locally on galera-bundle-0
Oct 13 13:58:17 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for galera on galera-bundle-0
Oct 13 13:58:17 standalone.localdomain galera(galera)[73867]: ERROR: MySQL is not running
Oct 13 13:58:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:17 standalone.localdomain galera(galera)[73878]: INFO: Creating PID dir: /var/run/mysql
Oct 13 13:58:17 standalone.localdomain runuser[73883]: pam_unix(runuser-l:session): session opened for user mysql(uid=42434) by (uid=0)
Oct 13 13:58:17 standalone.localdomain runuser[73883]: pam_unix(runuser-l:session): session closed for user mysql
Oct 13 13:58:17 standalone.localdomain runuser[73899]: pam_unix(runuser-l:session): session opened for user mysql(uid=42434) by (uid=0)
Oct 13 13:58:17 standalone.localdomain runuser[73899]: pam_unix(runuser-l:session): session closed for user mysql
Oct 13 13:58:17 standalone.localdomain runuser[73915]: pam_unix(runuser-l:session): session opened for user mysql(uid=42434) by (uid=0)
Oct 13 13:58:17 standalone.localdomain runuser[73915]: pam_unix(runuser-l:session): session closed for user mysql
Oct 13 13:58:17 standalone.localdomain galera(galera)[73946]: INFO: attempting to detect last commit version by reading /var/lib/mysql/grastate.dat
Oct 13 13:58:18 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting galera-no-grastate[standalone]: (unset) -> true
Oct 13 13:58:18 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 41 aborted by status-1-galera-no-grastate doing create galera-no-grastate=true: Transient attribute change
Oct 13 13:58:18 standalone.localdomain galera(galera)[73955]: INFO: now attempting to detect last commit version using 'mysqld_safe --wsrep-recover'
Oct 13 13:58:18 standalone.localdomain runuser[73957]: pam_unix(runuser-l:session): session opened for user mysql(uid=42434) by (uid=0)
Oct 13 13:58:18 standalone.localdomain ceph-mon[29756]: pgmap v695: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:19 standalone.localdomain runuser[73957]: pam_unix(runuser-l:session): session closed for user mysql
Oct 13 13:58:19 standalone.localdomain galera(galera)[74328]: INFO: Last commit version found:  -1
Oct 13 13:58:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:58:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:19 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct 13 13:58:19 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct 13 13:58:19 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting galera-last-committed[standalone]: (unset) -> -1
Oct 13 13:58:20 standalone.localdomain ceph-mon[29756]: pgmap v696: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:20 standalone.localdomain galera(galera)[74469]: INFO: Promoting standalone to be our bootstrap node
Oct 13 13:58:20 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting galera-bootstrap[standalone]: (unset) -> true
Oct 13 13:58:21 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting master-galera[standalone]: (unset) -> 100
Oct 13 13:58:21 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for galera on galera-bundle-0: ok
Oct 13 13:58:21 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 41 (Complete=7, Pending=0, Fired=0, Skipped=2, Incomplete=2, Source=/var/lib/pacemaker/pengine/pe-input-41.bz2): Stopped
Oct 13 13:58:21 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Promote    galera:0                    ( Unpromoted -> Promoted galera-bundle-0 )
Oct 13 13:58:21 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 42, saving inputs in /var/lib/pacemaker/pengine/pe-input-42.bz2
Oct 13 13:58:21 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating promote operation galera_promote_0 locally on galera-bundle-0
Oct 13 13:58:21 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of promote operation for galera on galera-bundle-0
Oct 13 13:58:21 standalone.localdomain ceph-mon[29756]: 2.1f scrub starts
Oct 13 13:58:21 standalone.localdomain ceph-mon[29756]: 2.1f scrub ok
Oct 13 13:58:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:21 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Oct 13 13:58:21 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Oct 13 13:58:21 standalone.localdomain galera(galera)[74547]: INFO: Node <standalone> is bootstrapping the cluster
Oct 13 13:58:21 standalone.localdomain galera(galera)[74550]: ERROR: MySQL is not running
Oct 13 13:58:22 standalone.localdomain ceph-mon[29756]: pgmap v697: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:22 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct 13 13:58:22 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:58:23
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', 'manila_data', 'backups', '.mgr', 'images', 'manila_metadata', 'vms']
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:58:23 standalone.localdomain galera(galera)[74698]: INFO: Promoting standalone to be our bootstrap node
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 13:58:23 standalone.localdomain ceph-mon[29756]: 4.1 scrub starts
Oct 13 13:58:23 standalone.localdomain ceph-mon[29756]: 4.1 scrub ok
Oct 13 13:58:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v698: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:23 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting galera-last-committed[standalone]: -1 -> (unset)
Oct 13 13:58:23 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 42 aborted by deletion of nvpair[@id='status-1-galera-last-committed']: Transient attribute change
Oct 13 13:58:24 standalone.localdomain runuser[74775]: pam_unix(runuser-l:session): session opened for user mysql(uid=42434) by (uid=0)
Oct 13 13:58:24 standalone.localdomain runuser[74775]: pam_unix(runuser-l:session): session closed for user mysql
Oct 13 13:58:24 standalone.localdomain runuser[74818]: pam_unix(runuser-l:session): session opened for user mysql(uid=42434) by (uid=0)
Oct 13 13:58:24 standalone.localdomain runuser[74818]: pam_unix(runuser-l:session): session closed for user mysql
Oct 13 13:58:24 standalone.localdomain runuser[74834]: pam_unix(runuser-l:session): session opened for user mysql(uid=42434) by (uid=0)
Oct 13 13:58:24 standalone.localdomain runuser[74834]: pam_unix(runuser-l:session): session closed for user mysql
Oct 13 13:58:24 standalone.localdomain runuser[74850]: pam_unix(runuser-l:session): session opened for user mysql(uid=42434) by (uid=0)
Oct 13 13:58:24 standalone.localdomain galera(galera)[74853]: INFO: MySQL is not running
Oct 13 13:58:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:58:24 standalone.localdomain ceph-mon[29756]: 4.2 scrub starts
Oct 13 13:58:24 standalone.localdomain ceph-mon[29756]: 4.2 scrub ok
Oct 13 13:58:24 standalone.localdomain ceph-mon[29756]: pgmap v698: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:26 standalone.localdomain ceph-mon[29756]: pgmap v699: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 13:58:28 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 13 13:58:28 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 13 13:58:28 standalone.localdomain ceph-mon[29756]: pgmap v700: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:28 standalone.localdomain ceph-mon[29756]: 5.1 scrub starts
Oct 13 13:58:28 standalone.localdomain ceph-mon[29756]: 5.1 scrub ok
Oct 13 13:58:28 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting galera-bootstrap[standalone]: true -> (unset)
Oct 13 13:58:29 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting galera-no-grastate[standalone]: true -> (unset)
Oct 13 13:58:29 standalone.localdomain galera(galera)[75319]: INFO: Bootstrap complete, promoting the rest of the galera instances.
Oct 13 13:58:29 standalone.localdomain galera(galera)[75323]: INFO: Galera started
Oct 13 13:58:29 standalone.localdomain pacemaker-controld[57911]:  notice: Result of promote operation for galera on galera-bundle-0: ok
Oct 13 13:58:29 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 42 (Complete=5, Pending=0, Fired=0, Skipped=1, Incomplete=1, Source=/var/lib/pacemaker/pengine/pe-input-42.bz2): Stopped
Oct 13 13:58:29 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 43, saving inputs in /var/lib/pacemaker/pengine/pe-input-43.bz2
Oct 13 13:58:29 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation galera_monitor_10000 locally on galera-bundle-0
Oct 13 13:58:29 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for galera on galera-bundle-0
Oct 13 13:58:29 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for galera on galera-bundle-0: promoted
Oct 13 13:58:29 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 43 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-43.bz2): Complete
Oct 13 13:58:29 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:58:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:58:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:30 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct 13 13:58:30 standalone.localdomain ceph-mon[29756]: pgmap v701: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:30 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct 13 13:58:31 standalone.localdomain ceph-mon[29756]: 4.3 scrub starts
Oct 13 13:58:31 standalone.localdomain ceph-mon[29756]: 4.3 scrub ok
Oct 13 13:58:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:31 standalone.localdomain sudo[75397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:58:31 standalone.localdomain sudo[75397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:58:31 standalone.localdomain sudo[75397]: pam_unix(sudo:session): session closed for user root
Oct 13 13:58:32 standalone.localdomain sudo[75412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:58:32 standalone.localdomain sudo[75412]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:58:32 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 13 13:58:32 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 13 13:58:32 standalone.localdomain ceph-mon[29756]: pgmap v702: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:32 standalone.localdomain sudo[75412]: pam_unix(sudo:session): session closed for user root
Oct 13 13:58:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:58:32 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:58:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:58:32 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:58:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:58:32 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:58:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:58:32 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:58:32 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 815ac67f-4871-4dcf-b69a-e26612851a45 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:58:32 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 815ac67f-4871-4dcf-b69a-e26612851a45 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:58:32 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 815ac67f-4871-4dcf-b69a-e26612851a45 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:58:32 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:58:32 standalone.localdomain sudo[75542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:58:32 standalone.localdomain sudo[75542]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:58:32 standalone.localdomain sudo[75542]: pam_unix(sudo:session): session closed for user root
Oct 13 13:58:32 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 44, saving inputs in /var/lib/pacemaker/pengine/pe-input-44.bz2
Oct 13 13:58:32 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation rabbitmq-bundle-podman-0_monitor_0 locally on standalone
Oct 13 13:58:32 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of probe operation for rabbitmq-bundle-podman-0 on standalone
Oct 13 13:58:32 standalone.localdomain pacemaker-controld[57911]:  notice: Result of probe operation for rabbitmq-bundle-podman-0 on standalone: not running
Oct 13 13:58:32 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 44 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-44.bz2): Complete
Oct 13 13:58:32 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:58:33 standalone.localdomain ceph-mon[29756]: 5.2 scrub starts
Oct 13 13:58:33 standalone.localdomain ceph-mon[29756]: 5.2 scrub ok
Oct 13 13:58:33 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:58:33 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:58:33 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:58:33 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:58:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:33 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 36 completed events
Oct 13 13:58:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:58:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:58:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:58:34 standalone.localdomain ceph-mon[29756]: pgmap v703: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:58:34 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:58:34 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 45, saving inputs in /var/lib/pacemaker/pengine/pe-input-45.bz2
Oct 13 13:58:34 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 45 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-45.bz2): Complete
Oct 13 13:58:34 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:58:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v704: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:36 standalone.localdomain ceph-mon[29756]: pgmap v704: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:37 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:58:37 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      rabbitmq-bundle-podman-0    (                             standalone )
Oct 13 13:58:37 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 46, saving inputs in /var/lib/pacemaker/pengine/pe-input-46.bz2
Oct 13 13:58:37 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation rabbitmq-bundle-podman-0_start_0 locally on standalone
Oct 13 13:58:37 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for rabbitmq-bundle-podman-0 on standalone
Oct 13 13:58:37 standalone.localdomain podman(rabbitmq-bundle-podman-0)[75778]: INFO: running container rabbitmq-bundle-podman-0 for the first time
Oct 13 13:58:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:37 standalone.localdomain podman[75782]: 2025-10-13 13:58:37.426795518 +0000 UTC m=+0.089912614 container create 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7 (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, name=rhosp17/openstack-rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, com.redhat.component=openstack-rabbitmq-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, build-date=2025-07-21T13:08:05, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 13:58:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 13:58:37 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:58:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3746c6aebf6b9c6bb1f531f8a01807e21f3d7e4f303f0f910c6c11728bfaf71/merged/var/log/rabbitmq supports timestamps until 2038 (0x7fffffff)
Oct 13 13:58:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3746c6aebf6b9c6bb1f531f8a01807e21f3d7e4f303f0f910c6c11728bfaf71/merged/var/lib/rabbitmq supports timestamps until 2038 (0x7fffffff)
Oct 13 13:58:37 standalone.localdomain podman[75782]: 2025-10-13 13:58:37.486793696 +0000 UTC m=+0.149910802 container init 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7 (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, release=1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:08:05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vendor=Red Hat, Inc., name=rhosp17/openstack-rabbitmq)
Oct 13 13:58:37 standalone.localdomain podman[75782]: 2025-10-13 13:58:37.389561586 +0000 UTC m=+0.052678672 image pull  cluster.common.tag/rabbitmq:pcmklatest
Oct 13 13:58:37 standalone.localdomain podman[75782]: 2025-10-13 13:58:37.494363329 +0000 UTC m=+0.157480395 container start 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7 (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, name=rhosp17/openstack-rabbitmq, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-rabbitmq-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, architecture=x86_64, distribution-scope=public, tcib_managed=true, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git)
Oct 13 13:58:37 standalone.localdomain sudo[75810]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 13:58:37 standalone.localdomain sudo[75810]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:58:37 standalone.localdomain podman(rabbitmq-bundle-podman-0)[75816]: INFO: 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7
Oct 13 13:58:37 standalone.localdomain podman[75799]: 2025-10-13 13:58:37.558723401 +0000 UTC m=+0.083979160 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, distribution-scope=public, batch=17.1_20250721.1, container_name=memcached, managed_by=tripleo_ansible, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-memcached-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, build-date=2025-07-21T12:58:43, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 13:58:37 standalone.localdomain sudo[75810]: pam_unix(sudo:session): session closed for user root
Oct 13 13:58:37 standalone.localdomain pacemaker-remoted[75809]:  notice: Additional logging available in /var/log/pacemaker/pacemaker.log
Oct 13 13:58:37 standalone.localdomain pacemaker-remoted[75809]:  notice: Starting Pacemaker remote executor
Oct 13 13:58:37 standalone.localdomain podman[75799]: 2025-10-13 13:58:37.608059639 +0000 UTC m=+0.133315388 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-memcached, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:43, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-type=git, version=17.1.9, com.redhat.component=openstack-memcached-container, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 13:58:37 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 13:58:37 standalone.localdomain podman(rabbitmq-bundle-podman-0)[75855]: INFO: Creating drop-in dependency for "rabbitmq-bundle-podman-0" (60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7)
Oct 13 13:58:37 standalone.localdomain pacemaker-remoted[75809]:  warning: Could not read Pacemaker Remote key from default location /etc/pacemaker/authkey (or fallback location /etc/corosync/authkey): No such file or directory
Oct 13 13:58:37 standalone.localdomain pacemaker-remoted[75809]:  warning: A cluster connection will not be possible until the key is available
Oct 13 13:58:37 standalone.localdomain pacemaker-remoted[75809]:  notice: Pacemaker remote executor successfully started and accepting connections
Oct 13 13:58:37 standalone.localdomain pacemaker-remoted[75809]:  notice: OCF resource agent search path is /usr/lib/ocf/resource.d
Oct 13 13:58:37 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:58:37 standalone.localdomain systemd-rc-local-generator[75886]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:58:37 standalone.localdomain systemd-sysv-generator[75891]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:58:37 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:58:38 standalone.localdomain podman[75899]: 2025-10-13 13:58:38.076138426 +0000 UTC m=+0.082186874 container exec 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7 (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, vendor=Red Hat, Inc., release=1, distribution-scope=public, build-date=2025-07-21T13:08:05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team)
Oct 13 13:58:38 standalone.localdomain podman[75899]: 2025-10-13 13:58:38.108855289 +0000 UTC m=+0.114903727 container exec_died 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7 (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, build-date=2025-07-21T13:08:05, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-rabbitmq-container, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-rabbitmq, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, version=17.1.9)
Oct 13 13:58:38 standalone.localdomain podman[75928]: 2025-10-13 13:58:38.212230318 +0000 UTC m=+0.079073078 container exec 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7 (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.component=openstack-rabbitmq-container, build-date=2025-07-21T13:08:05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:58:38 standalone.localdomain podman[75928]: 2025-10-13 13:58:38.242756463 +0000 UTC m=+0.109599193 container exec_died 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7 (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, name=rhosp17/openstack-rabbitmq, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-rabbitmq-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T13:08:05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public)
Oct 13 13:58:38 standalone.localdomain podman(rabbitmq-bundle-podman-0)[75970]: NOTICE: Container rabbitmq-bundle-podman-0  started successfully
Oct 13 13:58:38 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for rabbitmq-bundle-podman-0 on standalone: ok
Oct 13 13:58:38 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation rabbitmq-bundle-podman-0_monitor_60000 locally on standalone
Oct 13 13:58:38 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for rabbitmq-bundle-podman-0 on standalone
Oct 13 13:58:38 standalone.localdomain podman[75978]: 2025-10-13 13:58:38.377771333 +0000 UTC m=+0.079427250 container exec 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7 (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, release=1, version=17.1.9, build-date=2025-07-21T13:08:05, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, com.redhat.component=openstack-rabbitmq-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-rabbitmq, vcs-type=git)
Oct 13 13:58:38 standalone.localdomain podman[75978]: 2025-10-13 13:58:38.406072309 +0000 UTC m=+0.107728306 container exec_died 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7 (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, batch=17.1_20250721.1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vendor=Red Hat, Inc., name=rhosp17/openstack-rabbitmq, release=1, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-rabbitmq-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T13:08:05, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1)
Oct 13 13:58:38 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for rabbitmq-bundle-podman-0 on standalone: ok
Oct 13 13:58:38 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 46 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-46.bz2): Complete
Oct 13 13:58:38 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:58:38 standalone.localdomain ceph-mon[29756]: pgmap v705: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:58:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v706: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:40 standalone.localdomain ceph-mon[29756]: pgmap v706: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:42 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 13:58:42 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Restart    rabbitmq-bundle-podman-0    (                             standalone )  due to resource definition change
Oct 13 13:58:42 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      rabbitmq-bundle-0           (                             standalone )
Oct 13 13:58:42 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      rabbitmq:0                  (                      rabbitmq-bundle-0 )
Oct 13 13:58:42 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 47, saving inputs in /var/lib/pacemaker/pengine/pe-input-47.bz2
Oct 13 13:58:42 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation rabbitmq-bundle-podman-0_stop_0 locally on standalone
Oct 13 13:58:42 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for rabbitmq-bundle-podman-0 on standalone
Oct 13 13:58:42 standalone.localdomain systemd[1]: tmp-crun.RcifFD.mount: Deactivated successfully.
Oct 13 13:58:42 standalone.localdomain podman[76093]: 2025-10-13 13:58:42.284203861 +0000 UTC m=+0.084635031 container exec 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7 (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T13:08:05, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rabbitmq-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, vcs-type=git, release=1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:58:42 standalone.localdomain podman[76093]: 2025-10-13 13:58:42.312355272 +0000 UTC m=+0.112786462 container exec_died 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7 (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, description=Red Hat OpenStack Platform 17.1 rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, name=rhosp17/openstack-rabbitmq, build-date=2025-07-21T13:08:05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-rabbitmq-container, release=1, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 13 13:58:42 standalone.localdomain systemd[1]: tmp-crun.yP0mEX.mount: Deactivated successfully.
Oct 13 13:58:42 standalone.localdomain pacemaker-remoted[75809]:  notice: Caught 'Terminated' signal
Oct 13 13:58:42 standalone.localdomain systemd[1]: libpod-60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7.scope: Deactivated successfully.
Oct 13 13:58:42 standalone.localdomain podman[76136]: 2025-10-13 13:58:42.408997294 +0000 UTC m=+0.072521436 container died 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7 (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, name=rhosp17/openstack-rabbitmq, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-rabbitmq-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, release=1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:08:05)
Oct 13 13:58:42 standalone.localdomain podman[76136]: 2025-10-13 13:58:42.437904098 +0000 UTC m=+0.101428240 container cleanup 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7 (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, tcib_managed=true, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-rabbitmq, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, build-date=2025-07-21T13:08:05, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-rabbitmq-container, description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, maintainer=OpenStack TripleO Team)
Oct 13 13:58:42 standalone.localdomain podman(rabbitmq-bundle-podman-0)[76195]: INFO: rabbitmq-bundle-podman-0
Oct 13 13:58:42 standalone.localdomain podman[76177]: 2025-10-13 13:58:42.462547271 +0000 UTC m=+0.052147215 container cleanup 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7 (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-rabbitmq, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, version=17.1.9, com.redhat.component=openstack-rabbitmq-container, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 13:58:42 standalone.localdomain ceph-mon[29756]: pgmap v707: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:42 standalone.localdomain podman(rabbitmq-bundle-podman-0)[76217]: NOTICE: Cleaning up inactive container, rabbitmq-bundle-podman-0.
Oct 13 13:58:42 standalone.localdomain podman[76221]: 2025-10-13 13:58:42.618184088 +0000 UTC m=+0.072975190 container remove 60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7 (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, tcib_managed=true, build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, release=1, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, com.redhat.component=openstack-rabbitmq-container, maintainer=OpenStack TripleO Team)
Oct 13 13:58:42 standalone.localdomain podman(rabbitmq-bundle-podman-0)[76281]: INFO: rabbitmq-bundle-podman-0
Oct 13 13:58:42 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for rabbitmq-bundle-podman-0 on standalone: ok
Oct 13 13:58:42 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation rabbitmq-bundle-podman-0_start_0 locally on standalone
Oct 13 13:58:42 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for rabbitmq-bundle-podman-0 on standalone
Oct 13 13:58:42 standalone.localdomain python3[76210]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/rabbitmq.md5sum follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:58:42 standalone.localdomain podman(rabbitmq-bundle-podman-0)[76336]: INFO: running container rabbitmq-bundle-podman-0 for the first time
Oct 13 13:58:42 standalone.localdomain podman[76340]: 2025-10-13 13:58:42.955114927 +0000 UTC m=+0.090893275 container create f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, release=1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T13:08:05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.component=openstack-rabbitmq-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 13:58:42 standalone.localdomain python3[76324]: ansible-tripleo_diff_exec Invoked with command=/var/lib/container-config-scripts/pacemaker_restart_bundle.sh oslo_messaging_rpc rabbitmq rabbitmq-bundle Started state_file=/var/lib/config-data/puppet-generated/rabbitmq.md5sum state_file_suffix=.previous_run environment={'TRIPLEO_MINOR_UPDATE': '', 'TRIPLEO_HA_WRAPPER_RESOURCE_EXISTS': 'False'} return_codes=[0]
Oct 13 13:58:43 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:58:43 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308f052ed4aa72a8b162d1a4c4bd3dde09a0351b4551b7d966ee67d9a4fe7a7b/merged/var/log supports timestamps until 2038 (0x7fffffff)
Oct 13 13:58:43 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308f052ed4aa72a8b162d1a4c4bd3dde09a0351b4551b7d966ee67d9a4fe7a7b/merged/var/log/rabbitmq supports timestamps until 2038 (0x7fffffff)
Oct 13 13:58:43 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308f052ed4aa72a8b162d1a4c4bd3dde09a0351b4551b7d966ee67d9a4fe7a7b/merged/etc/pacemaker/authkey supports timestamps until 2038 (0x7fffffff)
Oct 13 13:58:43 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308f052ed4aa72a8b162d1a4c4bd3dde09a0351b4551b7d966ee67d9a4fe7a7b/merged/var/lib/rabbitmq supports timestamps until 2038 (0x7fffffff)
Oct 13 13:58:43 standalone.localdomain podman[76340]: 2025-10-13 13:58:42.909644159 +0000 UTC m=+0.045422507 image pull  cluster.common.tag/rabbitmq:pcmklatest
Oct 13 13:58:43 standalone.localdomain podman[76340]: 2025-10-13 13:58:43.027852368 +0000 UTC m=+0.163630716 container init f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-rabbitmq, release=1, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, batch=17.1_20250721.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, build-date=2025-07-21T13:08:05)
Oct 13 13:58:43 standalone.localdomain podman[76340]: 2025-10-13 13:58:43.036356511 +0000 UTC m=+0.172134849 container start f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, com.redhat.component=openstack-rabbitmq-container, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 13:58:43 standalone.localdomain podman(rabbitmq-bundle-podman-0)[76368]: INFO: f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd
Oct 13 13:58:43 standalone.localdomain sudo[76363]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 13:58:43 standalone.localdomain sudo[76363]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:58:43 standalone.localdomain pcmkrestart[76386]: Initial deployment, skipping the restart of rabbitmq-bundle
Oct 13 13:58:43 standalone.localdomain sudo[76363]: pam_unix(sudo:session): session closed for user root
Oct 13 13:58:43 standalone.localdomain pacemaker-remoted[76362]:  warning: Logging to '/var/log/pacemaker/pacemaker.log' is disabled: No such file or directory
Oct 13 13:58:43 standalone.localdomain pacemaker-remoted[76362]:  notice: Starting Pacemaker remote executor
Oct 13 13:58:43 standalone.localdomain podman(rabbitmq-bundle-podman-0)[76389]: INFO: Creating drop-in dependency for "rabbitmq-bundle-podman-0" (f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd)
Oct 13 13:58:43 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:58:43 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct 13 13:58:43 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct 13 13:58:43 standalone.localdomain systemd-sysv-generator[76422]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:58:43 standalone.localdomain systemd-rc-local-generator[76418]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:58:43 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:58:43 standalone.localdomain python3[76420]: ansible-stat Invoked with path=/tmp/tripleo_ha_image_rabbitmq-bundle follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:58:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-60e0e7d001f5ed37b89383c6b149d7acaefdd361c9d6abdef0d074e14db366c7-userdata-shm.mount: Deactivated successfully.
Oct 13 13:58:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b3746c6aebf6b9c6bb1f531f8a01807e21f3d7e4f303f0f910c6c11728bfaf71-merged.mount: Deactivated successfully.
Oct 13 13:58:43 standalone.localdomain pacemaker-remoted[76362]:  notice: Pacemaker remote executor successfully started and accepting connections
Oct 13 13:58:43 standalone.localdomain pacemaker-remoted[76362]:  notice: OCF resource agent search path is /usr/lib/ocf/resource.d
Oct 13 13:58:43 standalone.localdomain ceph-mon[29756]: 4.4 scrub starts
Oct 13 13:58:43 standalone.localdomain ceph-mon[29756]: 4.4 scrub ok
Oct 13 13:58:43 standalone.localdomain podman[76479]: 2025-10-13 13:58:43.548584955 +0000 UTC m=+0.068964775 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, io.buildah.version=1.33.12, com.redhat.component=openstack-rabbitmq-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, vendor=Red Hat, Inc., release=1, vcs-type=git, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, description=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 13:58:43 standalone.localdomain podman[76479]: 2025-10-13 13:58:43.580810832 +0000 UTC m=+0.101190393 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, com.redhat.component=openstack-rabbitmq-container, vcs-type=git, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:08:05, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-rabbitmq)
Oct 13 13:58:43 standalone.localdomain podman[76515]: 2025-10-13 13:58:43.674258564 +0000 UTC m=+0.073663710 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, com.redhat.component=openstack-rabbitmq-container, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T13:08:05, io.buildah.version=1.33.12, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rabbitmq, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 13:58:43 standalone.localdomain podman[76515]: 2025-10-13 13:58:43.70673713 +0000 UTC m=+0.106142256 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.buildah.version=1.33.12, name=rhosp17/openstack-rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-rabbitmq-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64)
Oct 13 13:58:43 standalone.localdomain podman(rabbitmq-bundle-podman-0)[76550]: NOTICE: Container rabbitmq-bundle-podman-0  started successfully
Oct 13 13:58:43 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for rabbitmq-bundle-podman-0 on standalone: ok
Oct 13 13:58:43 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation rabbitmq-bundle-podman-0_monitor_60000 locally on standalone
Oct 13 13:58:43 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for rabbitmq-bundle-podman-0 on standalone
Oct 13 13:58:43 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation rabbitmq-bundle-0_monitor_0 locally on standalone
Oct 13 13:58:43 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of probe operation for rabbitmq-bundle-0 on standalone
Oct 13 13:58:43 standalone.localdomain pacemaker-controld[57911]:  notice: Result of probe operation for rabbitmq-bundle-0 on standalone: not running (Remote connection inactive)
Oct 13 13:58:43 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation rabbitmq-bundle-0_start_0 locally on standalone
Oct 13 13:58:43 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for rabbitmq-bundle-0 on standalone
Oct 13 13:58:43 standalone.localdomain podman[76559]: 2025-10-13 13:58:43.836674861 +0000 UTC m=+0.079542702 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-rabbitmq-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T13:08:05, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, name=rhosp17/openstack-rabbitmq, io.buildah.version=1.33.12)
Oct 13 13:58:43 standalone.localdomain podman[76559]: 2025-10-13 13:58:43.869081825 +0000 UTC m=+0.111949576 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-rabbitmq, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-rabbitmq-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:58:43 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for rabbitmq-bundle-podman-0 on standalone: ok
Oct 13 13:58:44 standalone.localdomain python3[76592]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:58:44 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct 13 13:58:44 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct 13 13:58:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:58:44 standalone.localdomain pacemaker-remoted[76362]:  notice: Remote client connection accepted
Oct 13 13:58:44 standalone.localdomain pacemaker-controld[57911]:  notice: Node rabbitmq-bundle-0 state is now member
Oct 13 13:58:44 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 47 aborted: Pacemaker Remote node integrated
Oct 13 13:58:44 standalone.localdomain pacemaker-attrd[57909]:  notice: Removing all rabbitmq-bundle-0 attributes for peer standalone
Oct 13 13:58:44 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for rabbitmq-bundle-0 on standalone: ok
Oct 13 13:58:44 standalone.localdomain pacemaker-fenced[57907]:  notice: Node rabbitmq-bundle-0 state is now member
Oct 13 13:58:44 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 47 (Complete=10, Pending=0, Fired=0, Skipped=2, Incomplete=10, Source=/var/lib/pacemaker/pengine/pe-input-47.bz2): Stopped
Oct 13 13:58:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      rabbitmq:0                  (                      rabbitmq-bundle-0 )
Oct 13 13:58:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 48, saving inputs in /var/lib/pacemaker/pengine/pe-input-48.bz2
Oct 13 13:58:44 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation rabbitmq:0_monitor_0 locally on rabbitmq-bundle-0
Oct 13 13:58:44 standalone.localdomain ceph-mon[29756]: pgmap v708: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:44 standalone.localdomain ceph-mon[29756]: 5.3 scrub starts
Oct 13 13:58:44 standalone.localdomain ceph-mon[29756]: 5.3 scrub ok
Oct 13 13:58:44 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation rabbitmq-bundle-0_monitor_30000 locally on standalone
Oct 13 13:58:44 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for rabbitmq-bundle-0 on standalone
Oct 13 13:58:44 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of probe operation for rabbitmq on rabbitmq-bundle-0
Oct 13 13:58:44 standalone.localdomain runuser[76706]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:58:44 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for rabbitmq-bundle-0 on standalone: ok
Oct 13 13:58:44 standalone.localdomain ansible-async_wrapper.py[76734]: Invoked with 944954193872 3600 /root/.ansible/tmp/ansible-tmp-1760363924.7613611-76701-53748748068366/AnsiballZ_command.py _
Oct 13 13:58:44 standalone.localdomain ansible-async_wrapper.py[76765]: Starting module and watcher
Oct 13 13:58:44 standalone.localdomain ansible-async_wrapper.py[76765]: Start watching 76766 (3600)
Oct 13 13:58:44 standalone.localdomain ansible-async_wrapper.py[76766]: Start module (76766)
Oct 13 13:58:44 standalone.localdomain ansible-async_wrapper.py[76734]: Return async_wrapper task started.
Oct 13 13:58:45 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 13 13:58:45 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 13 13:58:45 standalone.localdomain python3[76771]: ansible-ansible.legacy.async_status Invoked with jid=944954193872.76734 mode=status _async_dir=/tmp/.ansible_async
Oct 13 13:58:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:45 standalone.localdomain ceph-mon[29756]: 4.5 scrub starts
Oct 13 13:58:45 standalone.localdomain ceph-mon[29756]: 4.5 scrub ok
Oct 13 13:58:45 standalone.localdomain runuser[76706]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:58:45 standalone.localdomain rabbitmq-cluster(rabbitmq)[76822]: INFO: RabbitMQ server could not get cluster status from mnesia
Oct 13 13:58:45 standalone.localdomain pacemaker-controld[57911]:  notice: Result of probe operation for rabbitmq on rabbitmq-bundle-0: not running
Oct 13 13:58:45 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation rabbitmq:0_start_0 locally on rabbitmq-bundle-0
Oct 13 13:58:45 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for rabbitmq on rabbitmq-bundle-0
Oct 13 13:58:45 standalone.localdomain runuser[76844]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:58:46 standalone.localdomain ceph-mon[29756]: pgmap v709: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:46 standalone.localdomain runuser[76844]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:58:46 standalone.localdomain rabbitmq-cluster(rabbitmq)[76905]: INFO: RabbitMQ server could not get cluster status from mnesia
Oct 13 13:58:47 standalone.localdomain rabbitmq-cluster(rabbitmq)[76957]: INFO: Bootstrapping rabbitmq cluster
Oct 13 13:58:47 standalone.localdomain rabbitmq-cluster(rabbitmq)[76963]: INFO: Waiting for server to start
Oct 13 13:58:47 standalone.localdomain runuser[76964]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:58:47 standalone.localdomain runuser[76969]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:58:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v710: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:48 standalone.localdomain ceph-mon[29756]: pgmap v710: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:58:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v711: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:49 standalone.localdomain puppet-user[76774]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 13:58:49 standalone.localdomain puppet-user[76774]:    (file: /etc/puppet/hiera.yaml)
Oct 13 13:58:49 standalone.localdomain puppet-user[76774]: Warning: Undefined variable '::deploy_config_name';
Oct 13 13:58:49 standalone.localdomain puppet-user[76774]:    (file & line not available)
Oct 13 13:58:49 standalone.localdomain puppet-user[76774]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 13:58:49 standalone.localdomain puppet-user[76774]:    (file & line not available)
Oct 13 13:58:49 standalone.localdomain puppet-user[76774]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Oct 13 13:58:49 standalone.localdomain ansible-async_wrapper.py[76765]: 76766 still running (3600)
Oct 13 13:58:50 standalone.localdomain puppet-user[76774]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Oct 13 13:58:50 standalone.localdomain puppet-user[76774]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.45 seconds
Oct 13 13:58:50 standalone.localdomain ceph-mon[29756]: pgmap v711: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:52 standalone.localdomain ceph-mon[29756]: pgmap v712: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:52 standalone.localdomain runuser[76969]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:58:52 standalone.localdomain runuser[77366]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:58:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:58:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:58:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:58:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:58:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:58:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:58:53 standalone.localdomain runuser[77366]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:58:53 standalone.localdomain rabbitmq-cluster(rabbitmq)[77436]: INFO: cluster bootstrapped
Oct 13 13:58:53 standalone.localdomain runuser[77440]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:58:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:54 standalone.localdomain runuser[77440]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:58:54 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct 13 13:58:54 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct 13 13:58:54 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting rmq-node-attr-rabbitmq[standalone]: (unset) -> rabbit@standalone.internalapi.localdomain
Oct 13 13:58:54 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 48 aborted by status-1-rmq-node-attr-rabbitmq doing create rmq-node-attr-rabbitmq=rabbit@standalone.internalapi.localdomain: Transient attribute change
Oct 13 13:58:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:58:54 standalone.localdomain ceph-mon[29756]: pgmap v713: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:54 standalone.localdomain ceph-mon[29756]: 5.4 scrub starts
Oct 13 13:58:54 standalone.localdomain ceph-mon[29756]: 5.4 scrub ok
Oct 13 13:58:54 standalone.localdomain runuser[77632]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:58:55 standalone.localdomain ansible-async_wrapper.py[76765]: 76766 still running (3595)
Oct 13 13:58:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v714: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:55 standalone.localdomain runuser[77632]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:58:55 standalone.localdomain rabbitmq-cluster(rabbitmq)[77689]: INFO: Policy set: ha-all ^(?!amq\.).* {"ha-mode":"exactly","ha-params":1,"ha-promote-on-shutdown":"always"}
Oct 13 13:58:55 standalone.localdomain runuser[77692]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:58:55 standalone.localdomain python3[77681]: ansible-ansible.legacy.async_status Invoked with jid=944954193872.76734 mode=status _async_dir=/tmp/.ansible_async
Oct 13 13:58:55 standalone.localdomain puppet-user[76774]: Notice: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]/ensure: created
Oct 13 13:58:56 standalone.localdomain runuser[77692]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:58:56 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for rabbitmq on rabbitmq-bundle-0: ok
Oct 13 13:58:56 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating notify operation rabbitmq:0_post_notify_start_0 locally on rabbitmq-bundle-0
Oct 13 13:58:56 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of notify operation for rabbitmq on rabbitmq-bundle-0
Oct 13 13:58:56 standalone.localdomain pacemaker-controld[57911]:  notice: Result of notify operation for rabbitmq on rabbitmq-bundle-0: ok
Oct 13 13:58:56 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 48 (Complete=12, Pending=0, Fired=0, Skipped=1, Incomplete=1, Source=/var/lib/pacemaker/pengine/pe-input-48.bz2): Stopped
Oct 13 13:58:56 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 49, saving inputs in /var/lib/pacemaker/pengine/pe-input-49.bz2
Oct 13 13:58:56 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation rabbitmq_monitor_10000 locally on rabbitmq-bundle-0
Oct 13 13:58:56 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for rabbitmq on rabbitmq-bundle-0
Oct 13 13:58:56 standalone.localdomain runuser[77760]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:58:56 standalone.localdomain ceph-mon[29756]: pgmap v714: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:57 standalone.localdomain runuser[77760]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:58:57 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Oct 13 13:58:57 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Oct 13 13:58:57 standalone.localdomain runuser[77827]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:58:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v715: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:57 standalone.localdomain ceph-mon[29756]: 4.6 scrub starts
Oct 13 13:58:57 standalone.localdomain ceph-mon[29756]: 4.6 scrub ok
Oct 13 13:58:57 standalone.localdomain runuser[77827]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:58:57 standalone.localdomain runuser[77885]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:58:58 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.5 deep-scrub starts
Oct 13 13:58:58 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.5 deep-scrub ok
Oct 13 13:58:58 standalone.localdomain ceph-mon[29756]: pgmap v715: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:58 standalone.localdomain ceph-mon[29756]: 5.5 deep-scrub starts
Oct 13 13:58:58 standalone.localdomain ceph-mon[29756]: 5.5 deep-scrub ok
Oct 13 13:58:58 standalone.localdomain runuser[77885]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:58:58 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 49 aborted by op_defaults. 'create': Configuration change
Oct 13 13:58:58 standalone.localdomain puppet-user[76774]: Notice: /Stage[main]/Pacemaker::Resource_op_defaults/Pcmk_resource_op_default[bundle]/ensure: created
Oct 13 13:58:59 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for rabbitmq on rabbitmq-bundle-0: ok
Oct 13 13:58:59 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 49 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-49.bz2): Complete
Oct 13 13:58:59 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 50, saving inputs in /var/lib/pacemaker/pengine/pe-input-50.bz2
Oct 13 13:58:59 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation haproxy-bundle-podman-0_monitor_60000 locally on standalone
Oct 13 13:58:59 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for haproxy-bundle-podman-0 on standalone
Oct 13 13:58:59 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation galera-bundle-podman-0_monitor_60000 locally on standalone
Oct 13 13:58:59 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for galera-bundle-podman-0 on standalone
Oct 13 13:58:59 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation rabbitmq-bundle-podman-0_monitor_60000 locally on standalone
Oct 13 13:58:59 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for rabbitmq-bundle-podman-0 on standalone
Oct 13 13:58:59 standalone.localdomain podman[77957]: 2025-10-13 13:58:59.307259565 +0000 UTC m=+0.084415763 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, release=1, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 13:58:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:58:59 standalone.localdomain podman[77957]: 2025-10-13 13:58:59.341840865 +0000 UTC m=+0.118997093 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, name=rhosp17/openstack-mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, release=1, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team)
Oct 13 13:58:59 standalone.localdomain systemd[1]: tmp-crun.d3zeVp.mount: Deactivated successfully.
Oct 13 13:58:59 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for galera-bundle-podman-0 on standalone: ok
Oct 13 13:58:59 standalone.localdomain podman[77962]: 2025-10-13 13:58:59.357574973 +0000 UTC m=+0.131812001 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, name=rhosp17/openstack-rabbitmq, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.component=openstack-rabbitmq-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., build-date=2025-07-21T13:08:05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:58:59 standalone.localdomain podman[77962]: 2025-10-13 13:58:59.385988522 +0000 UTC m=+0.160225500 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.component=openstack-rabbitmq-container, build-date=2025-07-21T13:08:05, distribution-scope=public, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, vcs-type=git, tcib_managed=true, version=17.1.9)
Oct 13 13:58:59 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for rabbitmq-bundle-podman-0 on standalone: ok
Oct 13 13:58:59 standalone.localdomain podman[77959]: 2025-10-13 13:58:59.41275361 +0000 UTC m=+0.194152170 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, batch=17.1_20250721.1, build-date=2025-07-21T13:08:11, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, release=1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-haproxy, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-haproxy-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git)
Oct 13 13:58:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v716: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:58:59 standalone.localdomain podman[77959]: 2025-10-13 13:58:59.458903619 +0000 UTC m=+0.240302209 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, name=rhosp17/openstack-haproxy, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-haproxy-container, io.buildah.version=1.33.12, build-date=2025-07-21T13:08:11, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1)
Oct 13 13:58:59 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for haproxy-bundle-podman-0 on standalone: ok
Oct 13 13:58:59 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 50 (Complete=3, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-50.bz2): Complete
Oct 13 13:58:59 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 13:59:00 standalone.localdomain ansible-async_wrapper.py[76765]: 76766 still running (3590)
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]: Deprecation Warning: This command is deprecated and will be removed. Please use 'pcs property config' instead.
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]: Notice: Applied catalog in 10.11 seconds
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]: Application:
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:    Initial environment: production
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:    Converged environment: production
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:          Run mode: user
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]: Changes:
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:             Total: 2
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]: Events:
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:           Success: 2
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:             Total: 2
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]: Resources:
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:           Changed: 2
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:       Out of sync: 2
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:             Total: 28
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]: Time:
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:          Schedule: 0.00
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:           Package: 0.00
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:         File line: 0.00
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:              File: 0.00
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:              User: 0.01
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:            Augeas: 0.01
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:           Service: 0.08
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:    Config retrieval: 0.50
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:     Pcmk property: 1.51
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:              Exec: 1.95
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:    Transaction evaluation: 10.09
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:    Catalog application: 10.11
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:          Last run: 1760363940
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:    Pcmk resource op default: 3.15
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:    Pcmk resource default: 3.16
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:        Filebucket: 0.00
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:             Total: 10.11
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]: Version:
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:            Config: 1760363929
Oct 13 13:59:00 standalone.localdomain puppet-user[76774]:            Puppet: 7.10.0
Oct 13 13:59:00 standalone.localdomain ceph-mon[29756]: pgmap v716: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:00 standalone.localdomain ansible-async_wrapper.py[76766]: Module complete (76766)
Oct 13 13:59:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v717: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:02 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 13 13:59:02 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 13 13:59:02 standalone.localdomain ceph-mon[29756]: pgmap v717: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:02 standalone.localdomain ceph-mon[29756]: 4.7 scrub starts
Oct 13 13:59:02 standalone.localdomain ceph-mon[29756]: 4.7 scrub ok
Oct 13 13:59:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v718: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:59:04 standalone.localdomain ceph-mon[29756]: pgmap v718: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:05 standalone.localdomain ansible-async_wrapper.py[76765]: Done in kid B.
Oct 13 13:59:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:05 standalone.localdomain python3[78310]: ansible-ansible.legacy.async_status Invoked with jid=944954193872.76734 mode=status _async_dir=/tmp/.ansible_async
Oct 13 13:59:06 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct 13 13:59:06 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct 13 13:59:06 standalone.localdomain python3[78319]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 13:59:06 standalone.localdomain python3[78323]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:59:06 standalone.localdomain ceph-mon[29756]: pgmap v719: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:06 standalone.localdomain ceph-mon[29756]: 5.6 scrub starts
Oct 13 13:59:06 standalone.localdomain ceph-mon[29756]: 5.6 scrub ok
Oct 13 13:59:06 standalone.localdomain python3[78339]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:59:07 standalone.localdomain python3[78343]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp80p5i0w1 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 13:59:07 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Oct 13 13:59:07 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Oct 13 13:59:07 standalone.localdomain python3[78349]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v720: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:07 standalone.localdomain ceph-mon[29756]: 5.7 deep-scrub starts
Oct 13 13:59:07 standalone.localdomain ceph-mon[29756]: 5.7 deep-scrub ok
Oct 13 13:59:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 13:59:07 standalone.localdomain podman[78356]: 2025-10-13 13:59:07.80307082 +0000 UTC m=+0.068157090 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=memcached, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, io.buildah.version=1.33.12, 
name=rhosp17/openstack-memcached, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, com.redhat.component=openstack-memcached-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:59:07 standalone.localdomain podman[78356]: 2025-10-13 13:59:07.832885253 +0000 UTC m=+0.097971533 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, name=rhosp17/openstack-memcached, build-date=2025-07-21T12:58:43, version=17.1.9, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-memcached-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:59:07 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 13:59:08 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct 13 13:59:08 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct 13 13:59:08 standalone.localdomain python3[78456]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Oct 13 13:59:08 standalone.localdomain ceph-mon[29756]: pgmap v720: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:08 standalone.localdomain ceph-mon[29756]: 4.8 scrub starts
Oct 13 13:59:08 standalone.localdomain ceph-mon[29756]: 4.8 scrub ok
Oct 13 13:59:09 standalone.localdomain python3[78469]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:09 standalone.localdomain runuser[78476]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:59:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:09 standalone.localdomain runuser[78476]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:09 standalone.localdomain python3[78588]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:59:09 standalone.localdomain runuser[78603]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:10 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct 13 13:59:10 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct 13 13:59:10 standalone.localdomain python3[78660]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:59:10 standalone.localdomain ceph-mon[29756]: pgmap v721: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:10 standalone.localdomain python3[78664]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:10 standalone.localdomain runuser[78603]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:10 standalone.localdomain runuser[78674]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:10 standalone.localdomain python3[78733]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:59:10 standalone.localdomain python3[78737]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:11 standalone.localdomain runuser[78674]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:11 standalone.localdomain python3[78759]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:59:11 standalone.localdomain ceph-mon[29756]: 5.8 scrub starts
Oct 13 13:59:11 standalone.localdomain ceph-mon[29756]: 5.8 scrub ok
Oct 13 13:59:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v722: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:11 standalone.localdomain python3[78764]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:11 standalone.localdomain python3[78779]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:59:12 standalone.localdomain python3[78783]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:12 standalone.localdomain ceph-mon[29756]: pgmap v722: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:12 standalone.localdomain python3[78791]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:59:12 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:59:12 standalone.localdomain systemd-rc-local-generator[78809]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:59:12 standalone.localdomain systemd-sysv-generator[78816]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:59:12 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:59:13 standalone.localdomain python3[78922]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:59:13 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct 13 13:59:13 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct 13 13:59:13 standalone.localdomain python3[78926]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:13 standalone.localdomain ceph-mon[29756]: 4.9 scrub starts
Oct 13 13:59:13 standalone.localdomain ceph-mon[29756]: 4.9 scrub ok
Oct 13 13:59:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v723: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:13 standalone.localdomain python3[78940]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:59:13 standalone.localdomain python3[78958]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:14 standalone.localdomain python3[78991]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:59:14 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:59:14 standalone.localdomain systemd-sysv-generator[79021]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:59:14 standalone.localdomain systemd-rc-local-generator[79016]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:59:14 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct 13 13:59:14 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct 13 13:59:14 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:59:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:59:14 standalone.localdomain ceph-mon[29756]: pgmap v723: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:14 standalone.localdomain ceph-mon[29756]: 5.9 scrub starts
Oct 13 13:59:14 standalone.localdomain ceph-mon[29756]: 5.9 scrub ok
Oct 13 13:59:14 standalone.localdomain systemd[1]: Starting Create netns directory...
Oct 13 13:59:14 standalone.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 13:59:14 standalone.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 13:59:14 standalone.localdomain systemd[1]: Finished Create netns directory.
Oct 13 13:59:14 standalone.localdomain python3[79078]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Oct 13 13:59:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:16 standalone.localdomain ceph-mon[29756]: pgmap v724: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:16 standalone.localdomain python3[79162]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Oct 13 13:59:16 standalone.localdomain podman[79312]: 2025-10-13 13:59:16.852147449 +0000 UTC m=+0.068688037 container create 17db54ea3a1fc9edcf145d3be0f141a084370c22c8a6c18aeaacf94c0f6260e4 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_init_log, build-date=2025-07-21T15:44:11, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-heat-engine-container, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, name=rhosp17/openstack-heat-engine, container_name=heat_init_log, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/bin/bash', '-c', 'chown -R heat:heat /var/log/heat'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/heat:/var/log/heat:z']}, config_id=tripleo_step2, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 13:59:16 standalone.localdomain podman[79316]: 2025-10-13 13:59:16.877732971 +0000 UTC m=+0.097572001 container create fac4a7bab9f10135839cb5e568f1fad7d37cd7b905abbbe47f89a6220d8e2616 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_init_log, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T15:22:44, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12, distribution-scope=public, name=rhosp17/openstack-barbican-api, com.redhat.component=openstack-barbican-api-container, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step2, config_data={'command': ['/bin/bash', '-c', 'chown -R barbican:barbican /var/log/barbican'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z']}, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, description=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, vendor=Red Hat, Inc., release=1, container_name=barbican_init_log, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:59:16 standalone.localdomain podman[79332]: 2025-10-13 13:59:16.887144712 +0000 UTC m=+0.088072237 container create 723b5dec589f98514b504b21163bdc1dec105240a49dcd4ec56c81a0cf07d6dc (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_init_logs, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-api, tcib_managed=true, config_id=tripleo_step2, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, io.openshift.expose-services=, container_name=cinder_api_init_logs, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:58:55, vendor=Red Hat, Inc., vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, com.redhat.component=openstack-cinder-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 13:59:16 standalone.localdomain systemd[1]: Started libpod-conmon-17db54ea3a1fc9edcf145d3be0f141a084370c22c8a6c18aeaacf94c0f6260e4.scope.
Oct 13 13:59:16 standalone.localdomain systemd[1]: Started libpod-conmon-723b5dec589f98514b504b21163bdc1dec105240a49dcd4ec56c81a0cf07d6dc.scope.
Oct 13 13:59:16 standalone.localdomain podman[79312]: 2025-10-13 13:59:16.808614221 +0000 UTC m=+0.025154809 image pull  registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1
Oct 13 13:59:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:16 standalone.localdomain podman[79316]: 2025-10-13 13:59:16.809646103 +0000 UTC m=+0.029485153 image pull  registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1
Oct 13 13:59:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01d7f3190520dadfdb112eff8448e727d7969c79a4854317f6519c23470ffe05/merged/var/log/heat supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:16 standalone.localdomain podman[79347]: 2025-10-13 13:59:16.917020447 +0000 UTC m=+0.107732956 container create 7dbac50bda1230c89073173a131c877252eb4dba314d7a9e9f79eb31a0d6d0ee (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, com.redhat.component=openstack-cinder-scheduler-container, container_name=cinder_scheduler_init_logs, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z']}, build-date=2025-07-21T16:10:12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, distribution-scope=public, release=1, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cinder-scheduler, name=rhosp17/openstack-cinder-scheduler, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, vcs-type=git, managed_by=tripleo_ansible)
Oct 13 13:59:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edcb83c23c59fae27c97060729ed4fcf0db2402d1d24ba03f5c92901d5ee987/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edcb83c23c59fae27c97060729ed4fcf0db2402d1d24ba03f5c92901d5ee987/merged/var/log/cinder supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:16 standalone.localdomain podman[79370]: 2025-10-13 13:59:16.925415127 +0000 UTC m=+0.075557140 container create 539e0068a28e6ad4f6cde2e552557b19403cbfb8a6933fd39c2a3914401d8615 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_init_logs, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, com.redhat.component=openstack-glance-api-container, container_name=glance_init_logs, description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R glance:glance /var/log/glance'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 13:59:16 standalone.localdomain podman[79332]: 2025-10-13 13:59:16.82795902 +0000 UTC m=+0.028886595 image pull  registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1
Oct 13 13:59:16 standalone.localdomain podman[79347]: 2025-10-13 13:59:16.842670035 +0000 UTC m=+0.033382564 image pull  registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1
Oct 13 13:59:16 standalone.localdomain systemd[1]: Started libpod-conmon-539e0068a28e6ad4f6cde2e552557b19403cbfb8a6933fd39c2a3914401d8615.scope.
Oct 13 13:59:16 standalone.localdomain systemd[1]: Started libpod-conmon-fac4a7bab9f10135839cb5e568f1fad7d37cd7b905abbbe47f89a6220d8e2616.scope.
Oct 13 13:59:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b0412472b532a34bb78587014039d8acbf511963f662c6f865f972c75cba2f/merged/var/log/glance supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56b0412472b532a34bb78587014039d8acbf511963f662c6f865f972c75cba2f/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec428e69571aa789160db41f2d270b1a0ec90be244ec1a5a2bef3ac87e11321b/merged/var/log/barbican supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec428e69571aa789160db41f2d270b1a0ec90be244ec1a5a2bef3ac87e11321b/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:16 standalone.localdomain podman[79316]: 2025-10-13 13:59:16.977257721 +0000 UTC m=+0.197096751 container init fac4a7bab9f10135839cb5e568f1fad7d37cd7b905abbbe47f89a6220d8e2616 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_init_log, version=17.1.9, build-date=2025-07-21T15:22:44, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step2, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=barbican_init_log, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, description=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.component=openstack-barbican-api-container, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'chown -R barbican:barbican /var/log/barbican'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z']}, distribution-scope=public, managed_by=tripleo_ansible)
Oct 13 13:59:16 standalone.localdomain podman[79370]: 2025-10-13 13:59:16.878995149 +0000 UTC m=+0.029137192 image pull  registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1
Oct 13 13:59:16 standalone.localdomain systemd[1]: Started libpod-conmon-7dbac50bda1230c89073173a131c877252eb4dba314d7a9e9f79eb31a0d6d0ee.scope.
Oct 13 13:59:16 standalone.localdomain podman[79316]: 2025-10-13 13:59:16.984565717 +0000 UTC m=+0.204404757 container start fac4a7bab9f10135839cb5e568f1fad7d37cd7b905abbbe47f89a6220d8e2616 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_init_log, version=17.1.9, build-date=2025-07-21T15:22:44, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, description=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.component=openstack-barbican-api-container, container_name=barbican_init_log, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, config_id=tripleo_step2, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-barbican-api, config_data={'command': ['/bin/bash', '-c', 'chown -R barbican:barbican /var/log/barbican'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible)
Oct 13 13:59:16 standalone.localdomain systemd[1]: libpod-fac4a7bab9f10135839cb5e568f1fad7d37cd7b905abbbe47f89a6220d8e2616.scope: Deactivated successfully.
Oct 13 13:59:16 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name barbican_init_log --conmon-pidfile /run/barbican_init_log.pid --detach=True --label config_id=tripleo_step2 --label container_name=barbican_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R barbican:barbican /var/log/barbican'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/barbican_init_log.log --network none --user root --volume /var/log/containers/barbican:/var/log/barbican:z --volume /var/log/containers/httpd/barbican-api:/var/log/httpd:z registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1 /bin/bash -c chown -R barbican:barbican /var/log/barbican
Oct 13 13:59:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5fd54a930c5dd347b749289ddb340e344fa5157d8593c8cf9b000e4c28a692/merged/var/log/cinder supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:17 standalone.localdomain podman[79347]: 2025-10-13 13:59:17.00210268 +0000 UTC m=+0.192815189 container init 7dbac50bda1230c89073173a131c877252eb4dba314d7a9e9f79eb31a0d6d0ee (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler_init_logs, io.openshift.expose-services=, config_id=tripleo_step2, build-date=2025-07-21T16:10:12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.component=openstack-cinder-scheduler-container, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., release=1, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, container_name=cinder_scheduler_init_logs, distribution-scope=public, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 13:59:17 standalone.localdomain podman[79347]: 2025-10-13 13:59:17.009554701 +0000 UTC m=+0.200267240 container start 7dbac50bda1230c89073173a131c877252eb4dba314d7a9e9f79eb31a0d6d0ee (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z']}, name=rhosp17/openstack-cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, io.buildah.version=1.33.12, container_name=cinder_scheduler_init_logs, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, build-date=2025-07-21T16:10:12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.component=openstack-cinder-scheduler-container, release=1, io.openshift.expose-services=, config_id=tripleo_step2, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cinder-scheduler)
Oct 13 13:59:17 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name cinder_scheduler_init_logs --conmon-pidfile /run/cinder_scheduler_init_logs.pid --detach=True --label config_id=tripleo_step2 --label container_name=cinder_scheduler_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/cinder_scheduler_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/cinder:/var/log/cinder:z registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1 /bin/bash -c chown -R cinder:cinder /var/log/cinder
Oct 13 13:59:17 standalone.localdomain systemd[1]: libpod-7dbac50bda1230c89073173a131c877252eb4dba314d7a9e9f79eb31a0d6d0ee.scope: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain podman[79332]: 2025-10-13 13:59:17.027855487 +0000 UTC m=+0.228783022 container init 723b5dec589f98514b504b21163bdc1dec105240a49dcd4ec56c81a0cf07d6dc (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_init_logs, config_id=tripleo_step2, container_name=cinder_api_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, build-date=2025-07-21T15:58:55, version=17.1.9, release=1, description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-api, com.redhat.component=openstack-cinder-api-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Oct 13 13:59:17 standalone.localdomain podman[79370]: 2025-10-13 13:59:17.030751146 +0000 UTC m=+0.180893179 container init 539e0068a28e6ad4f6cde2e552557b19403cbfb8a6933fd39c2a3914401d8615 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_init_logs, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, build-date=2025-07-21T13:58:20, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, name=rhosp17/openstack-glance-api, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-glance-api-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', 'chown -R glance:glance /var/log/glance'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z']}, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=glance_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1)
Oct 13 13:59:17 standalone.localdomain podman[79370]: 2025-10-13 13:59:17.035256506 +0000 UTC m=+0.185398539 container start 539e0068a28e6ad4f6cde2e552557b19403cbfb8a6933fd39c2a3914401d8615 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_init_logs, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, com.redhat.component=openstack-glance-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, release=1, config_data={'command': ['/bin/bash', '-c', 'chown -R glance:glance /var/log/glance'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z']}, tcib_managed=true, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, container_name=glance_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:59:17 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name glance_init_logs --conmon-pidfile /run/glance_init_logs.pid --detach=True --label config_id=tripleo_step2 --label container_name=glance_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R glance:glance /var/log/glance'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/glance_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/glance:/var/log/glance:z --volume /var/log/containers/httpd/glance:/var/log/httpd:z registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1 /bin/bash -c chown -R glance:glance /var/log/glance
Oct 13 13:59:17 standalone.localdomain systemd[1]: libpod-723b5dec589f98514b504b21163bdc1dec105240a49dcd4ec56c81a0cf07d6dc.scope: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain systemd[1]: libpod-539e0068a28e6ad4f6cde2e552557b19403cbfb8a6933fd39c2a3914401d8615.scope: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain podman[79431]: 2025-10-13 13:59:17.060703554 +0000 UTC m=+0.037992438 container died 7dbac50bda1230c89073173a131c877252eb4dba314d7a9e9f79eb31a0d6d0ee (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler_init_logs, container_name=cinder_scheduler_init_logs, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, release=1, config_id=tripleo_step2, name=rhosp17/openstack-cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, build-date=2025-07-21T16:10:12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, com.redhat.component=openstack-cinder-scheduler-container, maintainer=OpenStack TripleO Team)
Oct 13 13:59:17 standalone.localdomain podman[79312]: 2025-10-13 13:59:17.075824851 +0000 UTC m=+0.292365429 container init 17db54ea3a1fc9edcf145d3be0f141a084370c22c8a6c18aeaacf94c0f6260e4 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_init_log, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-heat-engine-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'command': ['/bin/bash', '-c', 'chown -R heat:heat /var/log/heat'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/heat:/var/log/heat:z']}, build-date=2025-07-21T15:44:11, name=rhosp17/openstack-heat-engine, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, release=1, container_name=heat_init_log, summary=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vcs-type=git)
Oct 13 13:59:17 standalone.localdomain podman[79411]: 2025-10-13 13:59:17.083803168 +0000 UTC m=+0.087393925 container died fac4a7bab9f10135839cb5e568f1fad7d37cd7b905abbbe47f89a6220d8e2616 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_init_log, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, release=1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, build-date=2025-07-21T15:22:44, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, com.redhat.component=openstack-barbican-api-container, name=rhosp17/openstack-barbican-api, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'chown -R barbican:barbican /var/log/barbican'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z']}, container_name=barbican_init_log, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step2, io.openshift.expose-services=)
Oct 13 13:59:17 standalone.localdomain systemd[1]: libpod-17db54ea3a1fc9edcf145d3be0f141a084370c22c8a6c18aeaacf94c0f6260e4.scope: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain podman[79312]: 2025-10-13 13:59:17.133098605 +0000 UTC m=+0.349639173 container start 17db54ea3a1fc9edcf145d3be0f141a084370c22c8a6c18aeaacf94c0f6260e4 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_init_log, distribution-scope=public, io.buildah.version=1.33.12, container_name=heat_init_log, architecture=x86_64, com.redhat.component=openstack-heat-engine-container, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/bin/bash', '-c', 'chown -R heat:heat /var/log/heat'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/heat:/var/log/heat:z']}, build-date=2025-07-21T15:44:11, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step2, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, release=1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Oct 13 13:59:17 standalone.localdomain podman[79496]: 2025-10-13 13:59:17.134230619 +0000 UTC m=+0.040222725 container died 17db54ea3a1fc9edcf145d3be0f141a084370c22c8a6c18aeaacf94c0f6260e4 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_init_log, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, com.redhat.component=openstack-heat-engine-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, version=17.1.9, config_data={'command': ['/bin/bash', '-c', 'chown -R heat:heat /var/log/heat'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/heat:/var/log/heat:z']}, description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-heat-engine, tcib_managed=true, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, release=1, build-date=2025-07-21T15:44:11, container_name=heat_init_log, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2)
Oct 13 13:59:17 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name heat_init_log --conmon-pidfile /run/heat_init_log.pid --detach=True --label config_id=tripleo_step2 --label container_name=heat_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R heat:heat /var/log/heat'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/heat:/var/log/heat:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/heat_init_log.log --network none --user root --volume /var/log/containers/heat:/var/log/heat:z registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1 /bin/bash -c chown -R heat:heat /var/log/heat
Oct 13 13:59:17 standalone.localdomain podman[79456]: 2025-10-13 13:59:17.183244967 +0000 UTC m=+0.136306390 container died 539e0068a28e6ad4f6cde2e552557b19403cbfb8a6933fd39c2a3914401d8615 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_init_logs, config_id=tripleo_step2, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=glance_init_logs, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', 'chown -R glance:glance /var/log/glance'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z']})
Oct 13 13:59:17 standalone.localdomain podman[79496]: 2025-10-13 13:59:17.248110884 +0000 UTC m=+0.154102970 container cleanup 17db54ea3a1fc9edcf145d3be0f141a084370c22c8a6c18aeaacf94c0f6260e4 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_init_log, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'command': ['/bin/bash', '-c', 'chown -R heat:heat /var/log/heat'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/heat:/var/log/heat:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, com.redhat.component=openstack-heat-engine-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, container_name=heat_init_log, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:44:11, config_id=tripleo_step2, tcib_managed=true)
Oct 13 13:59:17 standalone.localdomain systemd[1]: libpod-conmon-17db54ea3a1fc9edcf145d3be0f141a084370c22c8a6c18aeaacf94c0f6260e4.scope: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain podman[79457]: 2025-10-13 13:59:17.254579864 +0000 UTC m=+0.207361979 container cleanup 539e0068a28e6ad4f6cde2e552557b19403cbfb8a6933fd39c2a3914401d8615 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_init_logs, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, config_data={'command': ['/bin/bash', '-c', 'chown -R glance:glance /var/log/glance'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z']}, container_name=glance_init_logs, vcs-type=git, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, distribution-scope=public, tcib_managed=true)
Oct 13 13:59:17 standalone.localdomain systemd[1]: libpod-conmon-539e0068a28e6ad4f6cde2e552557b19403cbfb8a6933fd39c2a3914401d8615.scope: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain podman[79432]: 2025-10-13 13:59:17.28704636 +0000 UTC m=+0.261127524 container cleanup 7dbac50bda1230c89073173a131c877252eb4dba314d7a9e9f79eb31a0d6d0ee (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler_init_logs, build-date=2025-07-21T16:10:12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, version=17.1.9, name=rhosp17/openstack-cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, container_name=cinder_scheduler_init_logs, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, com.redhat.component=openstack-cinder-scheduler-container, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z']}, config_id=tripleo_step2, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., release=1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:59:17 standalone.localdomain systemd[1]: libpod-conmon-7dbac50bda1230c89073173a131c877252eb4dba314d7a9e9f79eb31a0d6d0ee.scope: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain podman[79332]: 2025-10-13 13:59:17.295666696 +0000 UTC m=+0.496594241 container start 723b5dec589f98514b504b21163bdc1dec105240a49dcd4ec56c81a0cf07d6dc (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_init_logs, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step2, container_name=cinder_api_init_logs, build-date=2025-07-21T15:58:55, name=rhosp17/openstack-cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 13:59:17 standalone.localdomain podman[79458]: 2025-10-13 13:59:17.2970995 +0000 UTC m=+0.244949532 container died 723b5dec589f98514b504b21163bdc1dec105240a49dcd4ec56c81a0cf07d6dc (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_init_logs, build-date=2025-07-21T15:58:55, com.redhat.component=openstack-cinder-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=cinder_api_init_logs, managed_by=tripleo_ansible, name=rhosp17/openstack-cinder-api, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, version=17.1.9, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, maintainer=OpenStack TripleO Team, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 13:59:17 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name cinder_api_init_logs --conmon-pidfile /run/cinder_api_init_logs.pid --detach=True --label config_id=tripleo_step2 --label container_name=cinder_api_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/cinder_api_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/cinder:/var/log/cinder:z --volume /var/log/containers/httpd/cinder-api:/var/log/httpd:z registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1 /bin/bash -c chown -R cinder:cinder /var/log/cinder
Oct 13 13:59:17 standalone.localdomain podman[79458]: 2025-10-13 13:59:17.314772858 +0000 UTC m=+0.262622870 container cleanup 723b5dec589f98514b504b21163bdc1dec105240a49dcd4ec56c81a0cf07d6dc (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_init_logs, container_name=cinder_api_init_logs, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-cinder-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cinder-api, release=1, description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, managed_by=tripleo_ansible, name=rhosp17/openstack-cinder-api, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-07-21T15:58:55, vendor=Red Hat, Inc., architecture=x86_64)
Oct 13 13:59:17 standalone.localdomain systemd[1]: libpod-conmon-723b5dec589f98514b504b21163bdc1dec105240a49dcd4ec56c81a0cf07d6dc.scope: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain podman[79412]: 2025-10-13 13:59:17.342847757 +0000 UTC m=+0.343865365 container cleanup fac4a7bab9f10135839cb5e568f1fad7d37cd7b905abbbe47f89a6220d8e2616 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_init_log, build-date=2025-07-21T15:22:44, config_data={'command': ['/bin/bash', '-c', 'chown -R barbican:barbican /var/log/barbican'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, release=1, com.redhat.component=openstack-barbican-api-container, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, name=rhosp17/openstack-barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 barbican-api, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, container_name=barbican_init_log, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, tcib_managed=true)
Oct 13 13:59:17 standalone.localdomain systemd[1]: libpod-conmon-fac4a7bab9f10135839cb5e568f1fad7d37cd7b905abbbe47f89a6220d8e2616.scope: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v725: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:17 standalone.localdomain podman[79671]: 2025-10-13 13:59:17.44731333 +0000 UTC m=+0.071343329 container create 66ee3689e13827480617643d9ed02ae50580d5a94228aa19aa264db63ae93a16 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon_fix_perms, release=1, tcib_managed=true, container_name=horizon_fix_perms, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, com.redhat.component=openstack-horizon-container, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, description=Red Hat OpenStack Platform 17.1 horizon, config_data={'command': ['/bin/bash', '-c', 'touch /var/log/horizon/horizon.log ; chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard'], 'environment': {'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T13:58:15, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step2, name=rhosp17/openstack-horizon, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12)
Oct 13 13:59:17 standalone.localdomain systemd[1]: Started libpod-conmon-66ee3689e13827480617643d9ed02ae50580d5a94228aa19aa264db63ae93a16.scope.
Oct 13 13:59:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f93319d9ea9a072f29f7188e9587784800c667ed7ca47085ef153ed67451a066/merged/etc/openstack-dashboard supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f93319d9ea9a072f29f7188e9587784800c667ed7ca47085ef153ed67451a066/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f93319d9ea9a072f29f7188e9587784800c667ed7ca47085ef153ed67451a066/merged/var/log/horizon supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:17 standalone.localdomain podman[79672]: 2025-10-13 13:59:17.49513057 +0000 UTC m=+0.114143894 container create 4cf6a95daaebadd5428c4245529dc25cac44733c7069949fd71bf6c3709b0e3b (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_init_logs, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R manila:manila /var/log/manila'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.component=openstack-manila-api-container, summary=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, container_name=manila_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-manila-api, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8)
Oct 13 13:59:17 standalone.localdomain podman[79671]: 2025-10-13 13:59:17.407660413 +0000 UTC m=+0.031690422 image pull  registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1
Oct 13 13:59:17 standalone.localdomain podman[79723]: 2025-10-13 13:59:17.523185598 +0000 UTC m=+0.076730075 container create d61b38d71dd3ed0f59598221199949b56a856f34d25ea898557a32c04dbb2294 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_wait_bundle, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, build-date=2025-07-21T12:58:45, io.buildah.version=1.33.12, version=17.1.9, config_data={'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,galera_ready,mysql_database,mysql_grant,mysql_user', 'include tripleo::profile::pacemaker::database::mysql_bundle'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'c719b864882013747e4a6fe61c577719'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/mysql:/var/lib/mysql:rw,z', '/var/lib/config-data/puppet-generated/mysql/root:/root:rw']}, container_name=mysql_wait_bundle, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-mariadb-container, tcib_managed=true, release=1, vcs-type=git)
Oct 13 13:59:17 standalone.localdomain systemd[1]: Started libpod-conmon-4cf6a95daaebadd5428c4245529dc25cac44733c7069949fd71bf6c3709b0e3b.scope.
Oct 13 13:59:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d88355902d8f002faf7767ac1094db78824bb783ddbe91278ee46e87ad271e/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d88355902d8f002faf7767ac1094db78824bb783ddbe91278ee46e87ad271e/merged/var/log/manila supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:17 standalone.localdomain podman[79671]: 2025-10-13 13:59:17.544663253 +0000 UTC m=+0.168693252 container init 66ee3689e13827480617643d9ed02ae50580d5a94228aa19aa264db63ae93a16 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon_fix_perms, version=17.1.9, vcs-type=git, architecture=x86_64, container_name=horizon_fix_perms, io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T13:58:15, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, description=Red Hat OpenStack Platform 17.1 horizon, com.redhat.component=openstack-horizon-container, config_data={'command': ['/bin/bash', '-c', 'touch /var/log/horizon/horizon.log ; chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard'], 'environment': {'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, name=rhosp17/openstack-horizon, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, managed_by=tripleo_ansible)
Oct 13 13:59:17 standalone.localdomain podman[79671]: 2025-10-13 13:59:17.561613418 +0000 UTC m=+0.185643417 container start 66ee3689e13827480617643d9ed02ae50580d5a94228aa19aa264db63ae93a16 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon_fix_perms, name=rhosp17/openstack-horizon, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 horizon, com.redhat.component=openstack-horizon-container, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:15, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, release=1, distribution-scope=public, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, container_name=horizon_fix_perms, config_id=tripleo_step2, config_data={'command': ['/bin/bash', '-c', 'touch /var/log/horizon/horizon.log ; chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard'], 'environment': {'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 13:59:17 standalone.localdomain podman[79672]: 2025-10-13 13:59:17.462921003 +0000 UTC m=+0.081934347 image pull  registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1
Oct 13 13:59:17 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name horizon_fix_perms --conmon-pidfile /run/horizon_fix_perms.pid --detach=True --env TRIPLEO_CONFIG_HASH=eff67e8d67d5f186cef6e48df141386b --label config_id=tripleo_step2 --label container_name=horizon_fix_perms --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'touch /var/log/horizon/horizon.log ; chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard'], 'environment': {'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/horizon_fix_perms.log --network none --user root --volume /var/log/containers/horizon:/var/log/horizon:z --volume /var/log/containers/httpd/horizon:/var/log/httpd:z --volume /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1 /bin/bash -c touch /var/log/horizon/horizon.log ; chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard
Oct 13 13:59:17 standalone.localdomain systemd[1]: Started libpod-conmon-d61b38d71dd3ed0f59598221199949b56a856f34d25ea898557a32c04dbb2294.scope.
Oct 13 13:59:17 standalone.localdomain systemd[1]: libpod-66ee3689e13827480617643d9ed02ae50580d5a94228aa19aa264db63ae93a16.scope: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d29d2ea8946261949ed750c0626f01bb8c54f722399da45bf75e3237c128367f/merged/root supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d29d2ea8946261949ed750c0626f01bb8c54f722399da45bf75e3237c128367f/merged/var/lib/mysql supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:17 standalone.localdomain podman[79723]: 2025-10-13 13:59:17.486312847 +0000 UTC m=+0.039857334 image pull  registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1
Oct 13 13:59:17 standalone.localdomain podman[79723]: 2025-10-13 13:59:17.590260874 +0000 UTC m=+0.143805351 container init d61b38d71dd3ed0f59598221199949b56a856f34d25ea898557a32c04dbb2294 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_wait_bundle, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, tcib_managed=true, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, config_data={'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,galera_ready,mysql_database,mysql_grant,mysql_user', 'include tripleo::profile::pacemaker::database::mysql_bundle'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'c719b864882013747e4a6fe61c577719'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/mysql:/var/lib/mysql:rw,z', '/var/lib/config-data/puppet-generated/mysql/root:/root:rw']}, com.redhat.component=openstack-mariadb-container, build-date=2025-07-21T12:58:45, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, container_name=mysql_wait_bundle, architecture=x86_64, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct 13 13:59:17 standalone.localdomain podman[79672]: 2025-10-13 13:59:17.594734113 +0000 UTC m=+0.213747437 container init 4cf6a95daaebadd5428c4245529dc25cac44733c7069949fd71bf6c3709b0e3b (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_init_logs, description=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, managed_by=tripleo_ansible, container_name=manila_init_logs, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-manila-api, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-manila-api-container, release=1, summary=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, config_data={'command': ['/bin/bash', '-c', 'chown -R manila:manila /var/log/manila'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8)
Oct 13 13:59:17 standalone.localdomain podman[79723]: 2025-10-13 13:59:17.596514337 +0000 UTC m=+0.150058824 container start d61b38d71dd3ed0f59598221199949b56a856f34d25ea898557a32c04dbb2294 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_wait_bundle, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step2, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T12:58:45, managed_by=tripleo_ansible, container_name=mysql_wait_bundle, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, version=17.1.9, name=rhosp17/openstack-mariadb, distribution-scope=public, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, config_data={'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,galera_ready,mysql_database,mysql_grant,mysql_user', 'include tripleo::profile::pacemaker::database::mysql_bundle'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'c719b864882013747e4a6fe61c577719'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/mysql:/var/lib/mysql:rw,z', '/var/lib/config-data/puppet-generated/mysql/root:/root:rw']}, io.buildah.version=1.33.12, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, batch=17.1_20250721.1)
Oct 13 13:59:17 standalone.localdomain podman[79723]: 2025-10-13 13:59:17.596724904 +0000 UTC m=+0.150269381 container attach d61b38d71dd3ed0f59598221199949b56a856f34d25ea898557a32c04dbb2294 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_wait_bundle, container_name=mysql_wait_bundle, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.component=openstack-mariadb-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, version=17.1.9, vcs-type=git, config_data={'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,galera_ready,mysql_database,mysql_grant,mysql_user', 'include tripleo::profile::pacemaker::database::mysql_bundle'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'c719b864882013747e4a6fe61c577719'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/mysql:/var/lib/mysql:rw,z', '/var/lib/config-data/puppet-generated/mysql/root:/root:rw']}, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 13:59:17 standalone.localdomain systemd[1]: libpod-4cf6a95daaebadd5428c4245529dc25cac44733c7069949fd71bf6c3709b0e3b.scope: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain podman[79774]: 2025-10-13 13:59:17.614935338 +0000 UTC m=+0.041356101 container died 66ee3689e13827480617643d9ed02ae50580d5a94228aa19aa264db63ae93a16 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon_fix_perms, build-date=2025-07-21T13:58:15, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'touch /var/log/horizon/horizon.log ; chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard'], 'environment': {'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, com.redhat.component=openstack-horizon-container, container_name=horizon_fix_perms, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, summary=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, config_id=tripleo_step2, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1)
Oct 13 13:59:17 standalone.localdomain podman[79672]: 2025-10-13 13:59:17.663172381 +0000 UTC m=+0.282185705 container start 4cf6a95daaebadd5428c4245529dc25cac44733c7069949fd71bf6c3709b0e3b (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_init_logs, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, tcib_managed=true, container_name=manila_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R manila:manila /var/log/manila'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, name=rhosp17/openstack-manila-api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-manila-api-container, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T16:06:43)
Oct 13 13:59:17 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name manila_init_logs --conmon-pidfile /run/manila_init_logs.pid --detach=True --label config_id=tripleo_step2 --label container_name=manila_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R manila:manila /var/log/manila'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/manila_init_logs.log --network none --user root --volume /var/log/containers/manila:/var/log/manila:z --volume /var/log/containers/httpd/manila-api:/var/log/httpd:z registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1 /bin/bash -c chown -R manila:manila /var/log/manila
Oct 13 13:59:17 standalone.localdomain podman[79868]: 2025-10-13 13:59:17.718946897 +0000 UTC m=+0.041568538 container died 4cf6a95daaebadd5428c4245529dc25cac44733c7069949fd71bf6c3709b0e3b (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_init_logs, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-manila-api-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, architecture=x86_64, version=17.1.9, config_id=tripleo_step2, name=rhosp17/openstack-manila-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, container_name=manila_init_logs, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:06:43, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', 'chown -R manila:manila /var/log/manila'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, summary=Red Hat OpenStack Platform 17.1 manila-api, tcib_managed=true, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:59:17 standalone.localdomain podman[79846]: 2025-10-13 13:59:17.76654445 +0000 UTC m=+0.114558356 container create 7d434b3f1e50123d695597424f5c71954eeaddb918956037901c7c3081b7de77 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_init_logs, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=neutron_init_logs, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_step2, build-date=2025-07-21T15:44:03, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, vcs-type=git, config_data={'command': ['/bin/bash', '-c', 'chown -R neutron:neutron /var/log/neutron'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z']}, release=1, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, com.redhat.component=openstack-neutron-server-container)
Oct 13 13:59:17 standalone.localdomain podman[79846]: 2025-10-13 13:59:17.691823897 +0000 UTC m=+0.039837813 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Oct 13 13:59:17 standalone.localdomain systemd[1]: Started libpod-conmon-7d434b3f1e50123d695597424f5c71954eeaddb918956037901c7c3081b7de77.scope.
Oct 13 13:59:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a179da2c9d84bbf21427b6e010a4aba1f19cf7e5641074900b9e6b403199bfdf/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a179da2c9d84bbf21427b6e010a4aba1f19cf7e5641074900b9e6b403199bfdf/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:17 standalone.localdomain podman[79846]: 2025-10-13 13:59:17.83082711 +0000 UTC m=+0.178841036 container init 7d434b3f1e50123d695597424f5c71954eeaddb918956037901c7c3081b7de77 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_init_logs, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:03, com.redhat.component=openstack-neutron-server-container, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-neutron-server, config_data={'command': ['/bin/bash', '-c', 'chown -R neutron:neutron /var/log/neutron'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z']}, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, config_id=tripleo_step2, container_name=neutron_init_logs, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 13:59:17 standalone.localdomain podman[79846]: 2025-10-13 13:59:17.835913028 +0000 UTC m=+0.183926934 container start 7d434b3f1e50123d695597424f5c71954eeaddb918956037901c7c3081b7de77 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_init_logs, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-server-container, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, config_data={'command': ['/bin/bash', '-c', 'chown -R neutron:neutron /var/log/neutron'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, name=rhosp17/openstack-neutron-server, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, vcs-type=git, container_name=neutron_init_logs, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, build-date=2025-07-21T15:44:03, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 13:59:17 standalone.localdomain systemd[1]: libpod-7d434b3f1e50123d695597424f5c71954eeaddb918956037901c7c3081b7de77.scope: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name neutron_init_logs --conmon-pidfile /run/neutron_init_logs.pid --detach=True --label config_id=tripleo_step2 --label container_name=neutron_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R neutron:neutron /var/log/neutron'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/neutron_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/log/containers/httpd/neutron-api:/var/log/httpd:z registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 /bin/bash -c chown -R neutron:neutron /var/log/neutron
Oct 13 13:59:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3edcb83c23c59fae27c97060729ed4fcf0db2402d1d24ba03f5c92901d5ee987-merged.mount: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-723b5dec589f98514b504b21163bdc1dec105240a49dcd4ec56c81a0cf07d6dc-userdata-shm.mount: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ec428e69571aa789160db41f2d270b1a0ec90be244ec1a5a2bef3ac87e11321b-merged.mount: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fac4a7bab9f10135839cb5e568f1fad7d37cd7b905abbbe47f89a6220d8e2616-userdata-shm.mount: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-01d7f3190520dadfdb112eff8448e727d7969c79a4854317f6519c23470ffe05-merged.mount: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17db54ea3a1fc9edcf145d3be0f141a084370c22c8a6c18aeaacf94c0f6260e4-userdata-shm.mount: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain podman[79889]: 2025-10-13 13:59:17.863327816 +0000 UTC m=+0.125046121 container create 2a5b405d8eeae8e39580814bbae17e1aa822f9daa6b5d67357d1004d50326108 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_init_logs, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-type=git, version=17.1.9, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z']}, name=rhosp17/openstack-nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_init_logs, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_id=tripleo_step2, io.openshift.expose-services=, com.redhat.component=openstack-nova-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:05:11)
Oct 13 13:59:17 standalone.localdomain podman[79929]: 2025-10-13 13:59:17.907040239 +0000 UTC m=+0.056482309 container died 7d434b3f1e50123d695597424f5c71954eeaddb918956037901c7c3081b7de77 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_init_logs, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T15:44:03, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, com.redhat.component=openstack-neutron-server-container, config_data={'command': ['/bin/bash', '-c', 'chown -R neutron:neutron /var/log/neutron'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, name=rhosp17/openstack-neutron-server, container_name=neutron_init_logs, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 13:59:17 standalone.localdomain systemd[1]: Started libpod-conmon-2a5b405d8eeae8e39580814bbae17e1aa822f9daa6b5d67357d1004d50326108.scope.
Oct 13 13:59:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4525e4f58503c299e14f2206981efdbe4549b63528f450366b37f72d8ab6a848/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4525e4f58503c299e14f2206981efdbe4549b63528f450366b37f72d8ab6a848/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:17 standalone.localdomain podman[79889]: 2025-10-13 13:59:17.826571338 +0000 UTC m=+0.088289663 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 13:59:17 standalone.localdomain podman[79889]: 2025-10-13 13:59:17.930237287 +0000 UTC m=+0.191955602 container init 2a5b405d8eeae8e39580814bbae17e1aa822f9daa6b5d67357d1004d50326108 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_init_logs, architecture=x86_64, release=1, container_name=nova_api_init_logs, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, build-date=2025-07-21T16:05:11, vcs-type=git, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z']}, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, name=rhosp17/openstack-nova-api, io.openshift.expose-services=, com.redhat.component=openstack-nova-api-container, distribution-scope=public, maintainer=OpenStack TripleO Team)
Oct 13 13:59:17 standalone.localdomain systemd[1]: libpod-2a5b405d8eeae8e39580814bbae17e1aa822f9daa6b5d67357d1004d50326108.scope: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain podman[79773]: 2025-10-13 13:59:17.963574828 +0000 UTC m=+0.394294704 container cleanup 66ee3689e13827480617643d9ed02ae50580d5a94228aa19aa264db63ae93a16 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon_fix_perms, vendor=Red Hat, Inc., name=rhosp17/openstack-horizon, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step2, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, build-date=2025-07-21T13:58:15, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'touch /var/log/horizon/horizon.log ; chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard'], 'environment': {'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}}, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=horizon_fix_perms, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, com.redhat.component=openstack-horizon-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, description=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 13:59:17 standalone.localdomain systemd[1]: libpod-conmon-66ee3689e13827480617643d9ed02ae50580d5a94228aa19aa264db63ae93a16.scope: Deactivated successfully.
Oct 13 13:59:17 standalone.localdomain podman[79889]: 2025-10-13 13:59:17.986109856 +0000 UTC m=+0.247828171 container start 2a5b405d8eeae8e39580814bbae17e1aa822f9daa6b5d67357d1004d50326108 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_init_logs, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, release=1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, architecture=x86_64, container_name=nova_api_init_logs, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_id=tripleo_step2, com.redhat.component=openstack-nova-api-container, name=rhosp17/openstack-nova-api, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z']}, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, build-date=2025-07-21T16:05:11, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 13:59:17 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_api_init_logs --conmon-pidfile /run/nova_api_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --label config_id=tripleo_step2 --label container_name=nova_api_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_api_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z --volume /var/log/containers/httpd/nova-api:/var/log/httpd:z registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1 /bin/bash -c chown -R nova:nova /var/log/nova
Oct 13 13:59:18 standalone.localdomain podman[79825]: 2025-10-13 13:59:18.03924117 +0000 UTC m=+0.417273486 container cleanup 4cf6a95daaebadd5428c4245529dc25cac44733c7069949fd71bf6c3709b0e3b (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_init_logs, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 manila-api, batch=17.1_20250721.1, build-date=2025-07-21T16:06:43, config_data={'command': ['/bin/bash', '-c', 'chown -R manila:manila /var/log/manila'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.component=openstack-manila-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, tcib_managed=true, container_name=manila_init_logs, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, version=17.1.9, description=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, release=1, config_id=tripleo_step2, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-manila-api)
Oct 13 13:59:18 standalone.localdomain systemd[1]: libpod-conmon-4cf6a95daaebadd5428c4245529dc25cac44733c7069949fd71bf6c3709b0e3b.scope: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain podman[79988]: 2025-10-13 13:59:18.09220777 +0000 UTC m=+0.093666620 container died 2a5b405d8eeae8e39580814bbae17e1aa822f9daa6b5d67357d1004d50326108 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_init_logs, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-api-container, summary=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=nova_api_init_logs, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z']}, distribution-scope=public, build-date=2025-07-21T16:05:11, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true)
Oct 13 13:59:18 standalone.localdomain podman[79929]: 2025-10-13 13:59:18.140151544 +0000 UTC m=+0.289593604 container cleanup 7d434b3f1e50123d695597424f5c71954eeaddb918956037901c7c3081b7de77 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_init_logs, name=rhosp17/openstack-neutron-server, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, architecture=x86_64, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, config_data={'command': ['/bin/bash', '-c', 'chown -R neutron:neutron /var/log/neutron'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-07-21T15:44:03, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-neutron-server-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-server, container_name=neutron_init_logs)
Oct 13 13:59:18 standalone.localdomain systemd[1]: libpod-conmon-7d434b3f1e50123d695597424f5c71954eeaddb918956037901c7c3081b7de77.scope: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain podman[79977]: 2025-10-13 13:59:18.198885872 +0000 UTC m=+0.252389333 container cleanup 2a5b405d8eeae8e39580814bbae17e1aa822f9daa6b5d67357d1004d50326108 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_init_logs, com.redhat.component=openstack-nova-api-container, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step2, vcs-type=git, managed_by=tripleo_ansible, container_name=nova_api_init_logs, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, build-date=2025-07-21T16:05:11, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:59:18 standalone.localdomain systemd[1]: libpod-conmon-2a5b405d8eeae8e39580814bbae17e1aa822f9daa6b5d67357d1004d50326108.scope: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain podman[80040]: 2025-10-13 13:59:18.269035663 +0000 UTC m=+0.139240430 container create c1c79e94a2dfbaa19fef6ae97d78176298fc5be6628c154c18b22b79b2ae617b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, version=17.1.9, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, name=rhosp17/openstack-nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, container_name=nova_compute_init_log, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 13:59:18 standalone.localdomain systemd[1]: Started libpod-conmon-c1c79e94a2dfbaa19fef6ae97d78176298fc5be6628c154c18b22b79b2ae617b.scope.
Oct 13 13:59:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bedf3a02c6a3e654014c5b18d30521b2df9e55f326acde83b2ea6fe9d2600174/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:18 standalone.localdomain podman[80040]: 2025-10-13 13:59:18.313249271 +0000 UTC m=+0.183454038 container init c1c79e94a2dfbaa19fef6ae97d78176298fc5be6628c154c18b22b79b2ae617b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, managed_by=tripleo_ansible, tcib_managed=true, container_name=nova_compute_init_log, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37)
Oct 13 13:59:18 standalone.localdomain podman[80040]: 2025-10-13 13:59:18.318954318 +0000 UTC m=+0.189159085 container start c1c79e94a2dfbaa19fef6ae97d78176298fc5be6628c154c18b22b79b2ae617b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, build-date=2025-07-21T14:48:37, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.33.12, container_name=nova_compute_init_log, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step2)
Oct 13 13:59:18 standalone.localdomain podman[80040]: 2025-10-13 13:59:18.229612173 +0000 UTC m=+0.099816970 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Oct 13 13:59:18 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova
Oct 13 13:59:18 standalone.localdomain systemd[1]: libpod-c1c79e94a2dfbaa19fef6ae97d78176298fc5be6628c154c18b22b79b2ae617b.scope: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain podman[80141]: 2025-10-13 13:59:18.380288286 +0000 UTC m=+0.039218674 container died c1c79e94a2dfbaa19fef6ae97d78176298fc5be6628c154c18b22b79b2ae617b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, container_name=nova_compute_init_log, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1)
Oct 13 13:59:18 standalone.localdomain podman[80141]: 2025-10-13 13:59:18.399546543 +0000 UTC m=+0.058476931 container cleanup c1c79e94a2dfbaa19fef6ae97d78176298fc5be6628c154c18b22b79b2ae617b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute_init_log, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, version=17.1.9, config_id=tripleo_step2, vcs-type=git)
Oct 13 13:59:18 standalone.localdomain systemd[1]: libpod-conmon-c1c79e94a2dfbaa19fef6ae97d78176298fc5be6628c154c18b22b79b2ae617b.scope: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain podman[80079]: 2025-10-13 13:59:18.451091098 +0000 UTC m=+0.232329332 container create b9697b30bbe81d23f78162621ccb8809188fae8f20d8faa5271546763165bf87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor_init_log, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=nova_conductor_init_log, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-conductor, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, com.redhat.component=openstack-nova-conductor-container, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-conductor, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, config_id=tripleo_step2, build-date=2025-07-21T15:44:17, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-conductor, tcib_managed=true)
Oct 13 13:59:18 standalone.localdomain podman[80079]: 2025-10-13 13:59:18.371955278 +0000 UTC m=+0.153193512 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1
Oct 13 13:59:18 standalone.localdomain systemd[1]: Started libpod-conmon-b9697b30bbe81d23f78162621ccb8809188fae8f20d8faa5271546763165bf87.scope.
Oct 13 13:59:18 standalone.localdomain ceph-mon[29756]: pgmap v725: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:18 standalone.localdomain podman[80190]: 2025-10-13 13:59:18.503584133 +0000 UTC m=+0.101556794 container create fc9bd13b21b4843dec5266896270cab5f99cb30bb84c3a19bcbec588904eff5f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, build-date=2025-07-21T14:56:59, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, architecture=x86_64, config_id=tripleo_step2, managed_by=tripleo_ansible, distribution-scope=public, release=2, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2)
Oct 13 13:59:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d307bd82fd874de705c72ae1d9d0871388589f1e07b79f85f6bf3ab4a2175384/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:18 standalone.localdomain podman[80079]: 2025-10-13 13:59:18.51965435 +0000 UTC m=+0.300892604 container init b9697b30bbe81d23f78162621ccb8809188fae8f20d8faa5271546763165bf87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor_init_log, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.component=openstack-nova-conductor-container, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=nova_conductor_init_log, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, config_id=tripleo_step2, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-nova-conductor, build-date=2025-07-21T15:44:17, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 13:59:18 standalone.localdomain podman[80190]: 2025-10-13 13:59:18.424450344 +0000 UTC m=+0.022423015 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 13:59:18 standalone.localdomain podman[80079]: 2025-10-13 13:59:18.526386158 +0000 UTC m=+0.307624422 container start b9697b30bbe81d23f78162621ccb8809188fae8f20d8faa5271546763165bf87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor_init_log, config_id=tripleo_step2, io.openshift.expose-services=, build-date=2025-07-21T15:44:17, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-nova-conductor, com.redhat.component=openstack-nova-conductor-container, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, container_name=nova_conductor_init_log, architecture=x86_64, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, summary=Red Hat OpenStack Platform 17.1 nova-conductor, tcib_managed=true, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 13:59:18 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_conductor_init_log --conmon-pidfile /run/nova_conductor_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --label config_id=tripleo_step2 --label container_name=nova_conductor_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_conductor_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1 /bin/bash -c chown -R nova:nova /var/log/nova
Oct 13 13:59:18 standalone.localdomain systemd[1]: Started libpod-conmon-fc9bd13b21b4843dec5266896270cab5f99cb30bb84c3a19bcbec588904eff5f.scope.
Oct 13 13:59:18 standalone.localdomain systemd[1]: libpod-b9697b30bbe81d23f78162621ccb8809188fae8f20d8faa5271546763165bf87.scope: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d47d2bb79e4159e5b63c2e82543365dfb186987c7afdef56d8598cc2a6c5bdf/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:18 standalone.localdomain podman[80190]: 2025-10-13 13:59:18.558346748 +0000 UTC m=+0.156319419 container init fc9bd13b21b4843dec5266896270cab5f99cb30bb84c3a19bcbec588904eff5f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud_init_logs, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step2, io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-nova-libvirt, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:59, managed_by=tripleo_ansible, release=2, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:59:18 standalone.localdomain podman[80190]: 2025-10-13 13:59:18.56778914 +0000 UTC m=+0.165761801 container start fc9bd13b21b4843dec5266896270cab5f99cb30bb84c3a19bcbec588904eff5f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, build-date=2025-07-21T14:56:59, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, batch=17.1_20250721.1, container_name=nova_virtqemud_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, config_id=tripleo_step2, io.openshift.expose-services=, vcs-type=git, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0)
Oct 13 13:59:18 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm
Oct 13 13:59:18 standalone.localdomain systemd[1]: libpod-fc9bd13b21b4843dec5266896270cab5f99cb30bb84c3a19bcbec588904eff5f.scope: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain podman[80220]: 2025-10-13 13:59:18.61886203 +0000 UTC m=+0.127970461 container create bc95f2511a775392bb0fe1df02ab6c50cd9b3390e418b54698b4eac88a061218 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata_init_logs, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, name=rhosp17/openstack-nova-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z']}, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, summary=Red Hat OpenStack Platform 17.1 nova-api, release=1, com.redhat.component=openstack-nova-api-container, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=nova_metadata_init_logs, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 13:59:18 standalone.localdomain podman[80245]: 2025-10-13 13:59:18.632804402 +0000 UTC m=+0.090258954 container died b9697b30bbe81d23f78162621ccb8809188fae8f20d8faa5271546763165bf87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor_init_log, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-conductor, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-conductor, build-date=2025-07-21T15:44:17, distribution-scope=public, name=rhosp17/openstack-nova-conductor, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-conductor-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, container_name=nova_conductor_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, vendor=Red Hat, Inc.)
Oct 13 13:59:18 standalone.localdomain systemd[1]: Started libpod-conmon-bc95f2511a775392bb0fe1df02ab6c50cd9b3390e418b54698b4eac88a061218.scope.
Oct 13 13:59:18 standalone.localdomain podman[80245]: 2025-10-13 13:59:18.666124123 +0000 UTC m=+0.123578655 container cleanup b9697b30bbe81d23f78162621ccb8809188fae8f20d8faa5271546763165bf87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor_init_log, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:44:17, com.redhat.component=openstack-nova-conductor-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_conductor_init_log, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vendor=Red Hat, Inc., release=1, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-conductor, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 13:59:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:18 standalone.localdomain systemd[1]: libpod-conmon-b9697b30bbe81d23f78162621ccb8809188fae8f20d8faa5271546763165bf87.scope: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88290a70e6b0d5a66e3861564060c18eea821ae554c9c70ccab8ad18232a364f/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88290a70e6b0d5a66e3861564060c18eea821ae554c9c70ccab8ad18232a364f/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:18 standalone.localdomain podman[80220]: 2025-10-13 13:59:18.579848193 +0000 UTC m=+0.088956634 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 13:59:18 standalone.localdomain podman[80267]: 2025-10-13 13:59:18.683394338 +0000 UTC m=+0.104314900 container died fc9bd13b21b4843dec5266896270cab5f99cb30bb84c3a19bcbec588904eff5f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, build-date=2025-07-21T14:56:59, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step2, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, tcib_managed=true, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, container_name=nova_virtqemud_init_logs, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 13 13:59:18 standalone.localdomain podman[80281]: 2025-10-13 13:59:18.766064447 +0000 UTC m=+0.161128239 container cleanup fc9bd13b21b4843dec5266896270cab5f99cb30bb84c3a19bcbec588904eff5f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=nova_virtqemud_init_logs, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:59, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, release=2, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step2, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1)
Oct 13 13:59:18 standalone.localdomain systemd[1]: libpod-conmon-fc9bd13b21b4843dec5266896270cab5f99cb30bb84c3a19bcbec588904eff5f.scope: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain podman[80220]: 2025-10-13 13:59:18.781806964 +0000 UTC m=+0.290915425 container init bc95f2511a775392bb0fe1df02ab6c50cd9b3390e418b54698b4eac88a061218 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata_init_logs, io.buildah.version=1.33.12, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_metadata_init_logs, version=17.1.9, com.redhat.component=openstack-nova-api-container, build-date=2025-07-21T16:05:11, io.openshift.expose-services=, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 13:59:18 standalone.localdomain podman[80220]: 2025-10-13 13:59:18.792973959 +0000 UTC m=+0.302082420 container start bc95f2511a775392bb0fe1df02ab6c50cd9b3390e418b54698b4eac88a061218 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata_init_logs, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, name=rhosp17/openstack-nova-api, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.33.12, version=17.1.9, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-nova-api-container, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, container_name=nova_metadata_init_logs)
Oct 13 13:59:18 standalone.localdomain systemd[1]: libpod-bc95f2511a775392bb0fe1df02ab6c50cd9b3390e418b54698b4eac88a061218.scope: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_metadata_init_logs --conmon-pidfile /run/nova_metadata_init_logs.pid --detach=True --label config_id=tripleo_step2 --label container_name=nova_metadata_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_metadata_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z --volume /var/log/containers/httpd/nova-metadata:/var/log/httpd:z registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1 /bin/bash -c chown -R nova:nova /var/log/nova
Oct 13 13:59:18 standalone.localdomain podman[80351]: 2025-10-13 13:59:18.849711296 +0000 UTC m=+0.041490945 container died bc95f2511a775392bb0fe1df02ab6c50cd9b3390e418b54698b4eac88a061218 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata_init_logs, description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_metadata_init_logs, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-api-container, build-date=2025-07-21T16:05:11, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-api)
Oct 13 13:59:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4525e4f58503c299e14f2206981efdbe4549b63528f450366b37f72d8ab6a848-merged.mount: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a5b405d8eeae8e39580814bbae17e1aa822f9daa6b5d67357d1004d50326108-userdata-shm.mount: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a179da2c9d84bbf21427b6e010a4aba1f19cf7e5641074900b9e6b403199bfdf-merged.mount: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d434b3f1e50123d695597424f5c71954eeaddb918956037901c7c3081b7de77-userdata-shm.mount: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-61d88355902d8f002faf7767ac1094db78824bb783ddbe91278ee46e87ad271e-merged.mount: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4cf6a95daaebadd5428c4245529dc25cac44733c7069949fd71bf6c3709b0e3b-userdata-shm.mount: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f93319d9ea9a072f29f7188e9587784800c667ed7ca47085ef153ed67451a066-merged.mount: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-66ee3689e13827480617643d9ed02ae50580d5a94228aa19aa264db63ae93a16-userdata-shm.mount: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc95f2511a775392bb0fe1df02ab6c50cd9b3390e418b54698b4eac88a061218-userdata-shm.mount: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-88290a70e6b0d5a66e3861564060c18eea821ae554c9c70ccab8ad18232a364f-merged.mount: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain podman[80350]: 2025-10-13 13:59:18.915790411 +0000 UTC m=+0.112003538 container cleanup bc95f2511a775392bb0fe1df02ab6c50cd9b3390e418b54698b4eac88a061218 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata_init_logs, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_id=tripleo_step2, name=rhosp17/openstack-nova-api, architecture=x86_64, build-date=2025-07-21T16:05:11, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, container_name=nova_metadata_init_logs, release=1, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 13:59:18 standalone.localdomain systemd[1]: libpod-conmon-bc95f2511a775392bb0fe1df02ab6c50cd9b3390e418b54698b4eac88a061218.scope: Deactivated successfully.
Oct 13 13:59:18 standalone.localdomain podman[80397]: 2025-10-13 13:59:18.967870143 +0000 UTC m=+0.056089627 container create 553a06ff454a7df858c9dcb2fdc2aa6520e22dfa0dd0f73573be7005770358b9 (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=rabbitmq_wait_bundle, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-07-21T13:08:05, com.redhat.component=openstack-rabbitmq-container, io.buildah.version=1.33.12, name=rhosp17/openstack-rabbitmq, batch=17.1_20250721.1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, container_name=rabbitmq_wait_bundle, version=17.1.9, config_data={'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', 'include tripleo::profile::pacemaker::rabbitmq_bundle', ''], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/bin/true:/bin/epmd']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 13:59:19 standalone.localdomain systemd[1]: Started libpod-conmon-553a06ff454a7df858c9dcb2fdc2aa6520e22dfa0dd0f73573be7005770358b9.scope.
Oct 13 13:59:19 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81afee617d1c2ad30d64b9a6344392209bb792d6cbb7bddbfe6b6c66b765cc51/merged/usr/lib64/erlang/erts-12.3.2.2/bin/epmd supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:19 standalone.localdomain podman[80397]: 2025-10-13 13:59:19.033765032 +0000 UTC m=+0.121984516 container init 553a06ff454a7df858c9dcb2fdc2aa6520e22dfa0dd0f73573be7005770358b9 (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=rabbitmq_wait_bundle, summary=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-rabbitmq, version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, config_data={'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', 'include tripleo::profile::pacemaker::rabbitmq_bundle', ''], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/bin/true:/bin/epmd']}, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:08:05, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=rabbitmq_wait_bundle, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, managed_by=tripleo_ansible)
Oct 13 13:59:19 standalone.localdomain podman[80397]: 2025-10-13 13:59:19.041232464 +0000 UTC m=+0.129451948 container start 553a06ff454a7df858c9dcb2fdc2aa6520e22dfa0dd0f73573be7005770358b9 (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=rabbitmq_wait_bundle, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., build-date=2025-07-21T13:08:05, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, config_data={'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', 'include tripleo::profile::pacemaker::rabbitmq_bundle', ''], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/bin/true:/bin/epmd']}, release=1, config_id=tripleo_step2, container_name=rabbitmq_wait_bundle, version=17.1.9, com.redhat.component=openstack-rabbitmq-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-rabbitmq, tcib_managed=true)
Oct 13 13:59:19 standalone.localdomain podman[80397]: 2025-10-13 13:59:19.041381088 +0000 UTC m=+0.129600572 container attach 553a06ff454a7df858c9dcb2fdc2aa6520e22dfa0dd0f73573be7005770358b9 (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=rabbitmq_wait_bundle, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, container_name=rabbitmq_wait_bundle, version=17.1.9, config_data={'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', 'include tripleo::profile::pacemaker::rabbitmq_bundle', ''], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/bin/true:/bin/epmd']}, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, com.redhat.component=openstack-rabbitmq-container, name=rhosp17/openstack-rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 13:59:19 standalone.localdomain podman[80397]: 2025-10-13 13:59:18.944061096 +0000 UTC m=+0.032280600 image pull  registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1
Oct 13 13:59:19 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct 13 13:59:19 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct 13 13:59:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:59:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v726: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:20 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts
Oct 13 13:59:20 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok
Oct 13 13:59:20 standalone.localdomain ceph-mon[29756]: 4.a scrub starts
Oct 13 13:59:20 standalone.localdomain ceph-mon[29756]: 4.a scrub ok
Oct 13 13:59:20 standalone.localdomain ceph-mon[29756]: pgmap v726: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:21 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Oct 13 13:59:21 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Oct 13 13:59:21 standalone.localdomain runuser[80620]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:21 standalone.localdomain ceph-mon[29756]: 5.a deep-scrub starts
Oct 13 13:59:21 standalone.localdomain ceph-mon[29756]: 5.a deep-scrub ok
Oct 13 13:59:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v727: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:21 standalone.localdomain runuser[80620]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:21 standalone.localdomain runuser[80667]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:21 standalone.localdomain runuser[80688]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:22 standalone.localdomain runuser[80667]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:22 standalone.localdomain runuser[80688]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:22 standalone.localdomain ceph-mon[29756]: 4.b deep-scrub starts
Oct 13 13:59:22 standalone.localdomain ceph-mon[29756]: 4.b deep-scrub ok
Oct 13 13:59:22 standalone.localdomain ceph-mon[29756]: pgmap v727: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:22 standalone.localdomain runuser[80820]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:23 standalone.localdomain runuser[80820]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:23 standalone.localdomain runuser[81033]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_13:59:23
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr', 'manila_metadata', 'volumes', 'vms', 'manila_data', 'images', 'backups']
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 13:59:23 standalone.localdomain runuser[81095]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v728: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:23 standalone.localdomain runuser[81033]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:23 standalone.localdomain runuser[81095]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:23 standalone.localdomain runuser[81220]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:59:24 standalone.localdomain systemd[1]: libpod-d61b38d71dd3ed0f59598221199949b56a856f34d25ea898557a32c04dbb2294.scope: Deactivated successfully.
Oct 13 13:59:24 standalone.localdomain systemd[1]: libpod-d61b38d71dd3ed0f59598221199949b56a856f34d25ea898557a32c04dbb2294.scope: Consumed 5.956s CPU time.
Oct 13 13:59:24 standalone.localdomain podman[81275]: 2025-10-13 13:59:24.450571529 +0000 UTC m=+0.055198610 container died d61b38d71dd3ed0f59598221199949b56a856f34d25ea898557a32c04dbb2294 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_wait_bundle, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, build-date=2025-07-21T12:58:45, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, vendor=Red Hat, Inc., release=1, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-mariadb-container, config_data={'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,galera_ready,mysql_database,mysql_grant,mysql_user', 'include tripleo::profile::pacemaker::database::mysql_bundle'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'c719b864882013747e4a6fe61c577719'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/mysql:/var/lib/mysql:rw,z', '/var/lib/config-data/puppet-generated/mysql/root:/root:rw']}, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.buildah.version=1.33.12, container_name=mysql_wait_bundle, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, architecture=x86_64)
Oct 13 13:59:24 standalone.localdomain systemd[1]: tmp-crun.VXgoiG.mount: Deactivated successfully.
Oct 13 13:59:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d29d2ea8946261949ed750c0626f01bb8c54f722399da45bf75e3237c128367f-merged.mount: Deactivated successfully.
Oct 13 13:59:24 standalone.localdomain ceph-mon[29756]: pgmap v728: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:24 standalone.localdomain podman[81275]: 2025-10-13 13:59:24.487518763 +0000 UTC m=+0.092145804 container cleanup d61b38d71dd3ed0f59598221199949b56a856f34d25ea898557a32c04dbb2294 (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=mysql_wait_bundle, container_name=mysql_wait_bundle, managed_by=tripleo_ansible, com.redhat.component=openstack-mariadb-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T12:58:45, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, version=17.1.9, vcs-type=git, config_data={'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,galera_ready,mysql_database,mysql_grant,mysql_user', 'include tripleo::profile::pacemaker::database::mysql_bundle'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'c719b864882013747e4a6fe61c577719'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/mysql:/var/lib/mysql:rw,z', '/var/lib/config-data/puppet-generated/mysql/root:/root:rw']}, config_id=tripleo_step2, batch=17.1_20250721.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, distribution-scope=public)
Oct 13 13:59:24 standalone.localdomain systemd[1]: libpod-conmon-d61b38d71dd3ed0f59598221199949b56a856f34d25ea898557a32c04dbb2294.scope: Deactivated successfully.
Oct 13 13:59:24 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name mysql_wait_bundle --conmon-pidfile /run/mysql_wait_bundle.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --env TRIPLEO_CONFIG_HASH=c719b864882013747e4a6fe61c577719 --ipc host --label config_id=tripleo_step2 --label container_name=mysql_wait_bundle --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,galera_ready,mysql_database,mysql_grant,mysql_user', 'include tripleo::profile::pacemaker::database::mysql_bundle'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'c719b864882013747e4a6fe61c577719'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/mysql:/var/lib/mysql:rw,z', '/var/lib/config-data/puppet-generated/mysql/root:/root:rw']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/mysql_wait_bundle.log --network host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/mysql:/var/lib/mysql:rw,z --volume /var/lib/config-data/puppet-generated/mysql/root:/root:rw registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1 /container_puppet_apply.sh 2 file,file_line,concat,augeas,galera_ready,mysql_database,mysql_grant,mysql_user include tripleo::profile::pacemaker::database::mysql_bundle
Oct 13 13:59:24 standalone.localdomain runuser[81220]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:24 standalone.localdomain runuser[81366]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:25 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.c scrub starts
Oct 13 13:59:25 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.c scrub ok
Oct 13 13:59:25 standalone.localdomain runuser[81366]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:25 standalone.localdomain runuser[81459]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v729: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:25 standalone.localdomain ceph-mon[29756]: 4.c scrub starts
Oct 13 13:59:25 standalone.localdomain ceph-mon[29756]: 4.c scrub ok
Oct 13 13:59:26 standalone.localdomain runuser[81459]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:26 standalone.localdomain runuser[81511]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:26 standalone.localdomain ceph-mon[29756]: pgmap v729: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:26 standalone.localdomain runuser[81511]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:26 standalone.localdomain runuser[81565]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v730: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:27 standalone.localdomain runuser[81565]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:27 standalone.localdomain runuser[81617]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:28 standalone.localdomain runuser[81617]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 13:59:28 standalone.localdomain systemd[1]: libpod-553a06ff454a7df858c9dcb2fdc2aa6520e22dfa0dd0f73573be7005770358b9.scope: Deactivated successfully.
Oct 13 13:59:28 standalone.localdomain systemd[1]: libpod-553a06ff454a7df858c9dcb2fdc2aa6520e22dfa0dd0f73573be7005770358b9.scope: Consumed 10.236s CPU time.
Oct 13 13:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 13:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 13:59:28 standalone.localdomain podman[80397]: 2025-10-13 13:59:28.322592213 +0000 UTC m=+9.410811737 container died 553a06ff454a7df858c9dcb2fdc2aa6520e22dfa0dd0f73573be7005770358b9 (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=rabbitmq_wait_bundle, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, com.redhat.component=openstack-rabbitmq-container, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T13:08:05, config_data={'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', 'include tripleo::profile::pacemaker::rabbitmq_bundle', ''], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/bin/true:/bin/epmd']}, architecture=x86_64, vcs-type=git, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, name=rhosp17/openstack-rabbitmq, container_name=rabbitmq_wait_bundle, description=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 13:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 13:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 13:59:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-81afee617d1c2ad30d64b9a6344392209bb792d6cbb7bddbfe6b6c66b765cc51-merged.mount: Deactivated successfully.
Oct 13 13:59:28 standalone.localdomain podman[81669]: 2025-10-13 13:59:28.420771961 +0000 UTC m=+0.082340399 container cleanup 553a06ff454a7df858c9dcb2fdc2aa6520e22dfa0dd0f73573be7005770358b9 (image=registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1, name=rabbitmq_wait_bundle, config_data={'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', 'include tripleo::profile::pacemaker::rabbitmq_bundle', ''], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/bin/true:/bin/epmd']}, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-rabbitmq-container, container_name=rabbitmq_wait_bundle, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, version=17.1.9, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:08:05, config_id=tripleo_step2)
Oct 13 13:59:28 standalone.localdomain systemd[1]: libpod-conmon-553a06ff454a7df858c9dcb2fdc2aa6520e22dfa0dd0f73573be7005770358b9.scope: Deactivated successfully.
Oct 13 13:59:28 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rabbitmq_wait_bundle --conmon-pidfile /run/rabbitmq_wait_bundle.pid --detach=False --env KOLLA_BOOTSTRAP=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --ipc host --label config_id=tripleo_step2 --label container_name=rabbitmq_wait_bundle --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '2', 'file,file_line,concat,augeas,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', 'include tripleo::profile::pacemaker::rabbitmq_bundle', ''], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1', 'ipc': 'host', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/bin/true:/bin/epmd']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rabbitmq_wait_bundle.log --network host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /bin/true:/bin/epmd registry.redhat.io/rhosp-rhel9/openstack-rabbitmq:17.1 /container_puppet_apply.sh 2 file,file_line,concat,augeas,rabbitmq_policy,rabbitmq_user,rabbitmq_ready include tripleo::profile::pacemaker::rabbitmq_bundle 
Oct 13 13:59:28 standalone.localdomain ceph-mon[29756]: pgmap v730: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:28 standalone.localdomain podman[81866]: 2025-10-13 13:59:28.943033047 +0000 UTC m=+0.090842144 container create 393b3dc0b7e843737344e7cf830611e72adc31f1d5376dcf07d68d475264c042 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=create_dnsmasq_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-dhcp-agent-container, name=rhosp17/openstack-neutron-dhcp-agent, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., distribution-scope=public, container_name=create_dnsmasq_wrapper, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, release=1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, batch=17.1_20250721.1, build-date=2025-07-21T16:28:54, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team)
Oct 13 13:59:28 standalone.localdomain podman[81866]: 2025-10-13 13:59:28.885767664 +0000 UTC m=+0.033576801 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1
Oct 13 13:59:28 standalone.localdomain podman[81911]: 2025-10-13 13:59:28.98935386 +0000 UTC m=+0.097683575 container create cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, summary=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step2, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, name=rhosp17/openstack-mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=clustercheck, release=1, batch=17.1_20250721.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:45)
Oct 13 13:59:28 standalone.localdomain podman[81872]: 2025-10-13 13:59:28.894642568 +0000 UTC m=+0.035433467 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Oct 13 13:59:28 standalone.localdomain systemd[1]: Started libpod-conmon-393b3dc0b7e843737344e7cf830611e72adc31f1d5376dcf07d68d475264c042.scope.
Oct 13 13:59:29 standalone.localdomain podman[81870]: 2025-10-13 13:59:28.901919834 +0000 UTC m=+0.044957913 image pull  registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1
Oct 13 13:59:29 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:29 standalone.localdomain podman[81872]: 2025-10-13 13:59:29.012834967 +0000 UTC m=+0.153625846 container create 5949603598ddd49b1e1905db9f6e327ace8e6a9961ced30429e34a6b2c56743c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=create_haproxy_wrapper, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Oct 13 13:59:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1725e6410ebd724af9a156d33baa34cd7b609f52e9eac888ddfe9ef0e5981ec4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:29 standalone.localdomain podman[81911]: 2025-10-13 13:59:28.930356864 +0000 UTC m=+0.038686579 image pull  registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1
Oct 13 13:59:29 standalone.localdomain systemd[1]: Started libpod-conmon-5949603598ddd49b1e1905db9f6e327ace8e6a9961ced30429e34a6b2c56743c.scope.
Oct 13 13:59:29 standalone.localdomain podman[81870]: 2025-10-13 13:59:29.070650036 +0000 UTC m=+0.213688175 container create 959f6d4af46dafd6258750e66aeef14bd6783bf284775fa69ebfb3c1ffc10264 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_init_log, config_data={'command': ['/bin/bash', '-c', 'chown -R keystone:keystone /var/log/keystone'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'none', 'start_order': 1, 'user': 'root', 'volumes': ['/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64, batch=17.1_20250721.1, container_name=keystone_init_log, io.openshift.expose-services=, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, name=rhosp17/openstack-keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 13:59:29 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:29 standalone.localdomain podman[81866]: 2025-10-13 13:59:29.08045981 +0000 UTC m=+0.228268947 container init 393b3dc0b7e843737344e7cf830611e72adc31f1d5376dcf07d68d475264c042 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=create_dnsmasq_wrapper, architecture=x86_64, name=rhosp17/openstack-neutron-dhcp-agent, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, com.redhat.component=openstack-neutron-dhcp-agent-container, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, managed_by=tripleo_ansible, vcs-type=git, container_name=create_dnsmasq_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12)
Oct 13 13:59:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988b334fbbc02e37b29bbd89994cd142c427e41613fb7debd89bcf8ffd5cd653/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:29 standalone.localdomain systemd[1]: Started libpod-conmon-cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.scope.
Oct 13 13:59:29 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:29 standalone.localdomain podman[81872]: 2025-10-13 13:59:29.091523762 +0000 UTC m=+0.232314681 container init 5949603598ddd49b1e1905db9f6e327ace8e6a9961ced30429e34a6b2c56743c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, architecture=x86_64, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=create_haproxy_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9)
Oct 13 13:59:29 standalone.localdomain podman[81914]: 2025-10-13 13:59:28.997644466 +0000 UTC m=+0.089945534 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 13:59:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03412483bd4ca7b5539c903f8c964f9a8806e7da0b632626e9e28c6f9211cb67/merged/var/lib/mysql supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03412483bd4ca7b5539c903f8c964f9a8806e7da0b632626e9e28c6f9211cb67/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:29 standalone.localdomain podman[81872]: 2025-10-13 13:59:29.105593028 +0000 UTC m=+0.246383947 container start 5949603598ddd49b1e1905db9f6e327ace8e6a9961ced30429e34a6b2c56743c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step2, container_name=create_haproxy_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64)
Oct 13 13:59:29 standalone.localdomain podman[81872]: 2025-10-13 13:59:29.106512846 +0000 UTC m=+0.247303765 container attach 5949603598ddd49b1e1905db9f6e327ace8e6a9961ced30429e34a6b2c56743c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, build-date=2025-07-21T16:28:53, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=create_haproxy_wrapper)
Oct 13 13:59:29 standalone.localdomain podman[81914]: 2025-10-13 13:59:29.119461637 +0000 UTC m=+0.211762675 container create 4fd593e167be6332da3b6657ab9628ff56dbf9e68b26e438f73792213003f285 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, tcib_managed=true, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, release=2, config_id=tripleo_step2, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:59, version=17.1.9, maintainer=OpenStack TripleO Team, distribution-scope=public, 
summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, container_name=create_virtlogd_wrapper, io.openshift.expose-services=)
Oct 13 13:59:29 standalone.localdomain podman[81866]: 2025-10-13 13:59:29.145381429 +0000 UTC m=+0.293190556 container start 393b3dc0b7e843737344e7cf830611e72adc31f1d5376dcf07d68d475264c042 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=create_dnsmasq_wrapper, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, container_name=create_dnsmasq_wrapper, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, build-date=2025-07-21T16:28:54, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-dhcp-agent-container, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, release=1)
Oct 13 13:59:29 standalone.localdomain podman[81866]: 2025-10-13 13:59:29.145670618 +0000 UTC m=+0.293479785 container attach 393b3dc0b7e843737344e7cf830611e72adc31f1d5376dcf07d68d475264c042 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=create_dnsmasq_wrapper, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-dhcp-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=create_dnsmasq_wrapper, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T16:28:54, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, name=rhosp17/openstack-neutron-dhcp-agent)
Oct 13 13:59:29 standalone.localdomain systemd[1]: Started libpod-conmon-4fd593e167be6332da3b6657ab9628ff56dbf9e68b26e438f73792213003f285.scope.
Oct 13 13:59:29 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 13:59:29 standalone.localdomain podman[81911]: 2025-10-13 13:59:29.177767011 +0000 UTC m=+0.286096746 container init cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, container_name=clustercheck, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_id=tripleo_step2, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, release=1, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T12:58:45, 
batch=17.1_20250721.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, name=rhosp17/openstack-mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., io.buildah.version=1.33.12)
Oct 13 13:59:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85e96b15229bea211cd977b193c3b43c6267f471f74ae0329da8bebc24a16a86/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:29 standalone.localdomain podman[81914]: 2025-10-13 13:59:29.18742283 +0000 UTC m=+0.279723908 container init 4fd593e167be6332da3b6657ab9628ff56dbf9e68b26e438f73792213003f285 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.openshift.expose-services=, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, container_name=create_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, 
name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step2, build-date=2025-07-21T14:56:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true)
Oct 13 13:59:29 standalone.localdomain podman[81914]: 2025-10-13 13:59:29.195904813 +0000 UTC m=+0.288205871 container start 4fd593e167be6332da3b6657ab9628ff56dbf9e68b26e438f73792213003f285 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, release=2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, version=17.1.9, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, build-date=2025-07-21T14:56:59)
Oct 13 13:59:29 standalone.localdomain podman[81914]: 2025-10-13 13:59:29.196203832 +0000 UTC m=+0.288504940 container attach 4fd593e167be6332da3b6657ab9628ff56dbf9e68b26e438f73792213003f285 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, tcib_managed=true, container_name=create_virtlogd_wrapper, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, build-date=2025-07-21T14:56:59, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 13 13:59:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 13:59:29 standalone.localdomain sudo[81971]:    mysql : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 13:59:29 standalone.localdomain sudo[81971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42434)
Oct 13 13:59:29 standalone.localdomain systemd[1]: Started libpod-conmon-959f6d4af46dafd6258750e66aeef14bd6783bf284775fa69ebfb3c1ffc10264.scope.
Oct 13 13:59:29 standalone.localdomain podman[81911]: 2025-10-13 13:59:29.218607316 +0000 UTC m=+0.326937021 container start cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, name=rhosp17/openstack-mariadb, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=clustercheck, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, 
summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T12:58:45, batch=17.1_20250721.1, vcs-type=git)
Oct 13 13:59:29 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name clustercheck --conmon-pidfile /run/clustercheck.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=a7af8ae2ee3733ec3f843ca782a7d238 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step2 --label container_name=clustercheck --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/clustercheck.log --network host --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log 
--volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json --volume /var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro --volume /var/lib/mysql:/var/lib/mysql registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1
Oct 13 13:59:29 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868ee7f6adbabfd307c31c3f6589e2eb42d60dc9c98a3ffc82e4943ad43a114b/merged/var/log/keystone supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868ee7f6adbabfd307c31c3f6589e2eb42d60dc9c98a3ffc82e4943ad43a114b/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:29 standalone.localdomain podman[81870]: 2025-10-13 13:59:29.253840967 +0000 UTC m=+0.396879076 container init 959f6d4af46dafd6258750e66aeef14bd6783bf284775fa69ebfb3c1ffc10264 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_init_log, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, container_name=keystone_init_log, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, config_id=tripleo_step2, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', 'chown -R keystone:keystone /var/log/keystone'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'none', 'start_order': 1, 'user': 'root', 'volumes': ['/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 13:59:29 standalone.localdomain podman[81870]: 2025-10-13 13:59:29.270078448 +0000 UTC m=+0.413116557 container start 959f6d4af46dafd6258750e66aeef14bd6783bf284775fa69ebfb3c1ffc10264 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_init_log, managed_by=tripleo_ansible, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T13:27:18, config_data={'command': ['/bin/bash', '-c', 'chown -R keystone:keystone /var/log/keystone'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'none', 'start_order': 1, 'user': 'root', 'volumes': ['/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z']}, com.redhat.component=openstack-keystone-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., version=17.1.9, container_name=keystone_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 13:59:29 standalone.localdomain systemd[1]: libpod-959f6d4af46dafd6258750e66aeef14bd6783bf284775fa69ebfb3c1ffc10264.scope: Deactivated successfully.
Oct 13 13:59:29 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name keystone_init_log --conmon-pidfile /run/keystone_init_log.pid --detach=True --label config_id=tripleo_step2 --label container_name=keystone_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R keystone:keystone /var/log/keystone'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'none', 'start_order': 1, 'user': 'root', 'volumes': ['/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/keystone_init_log.log --network none --user root --volume /var/log/containers/keystone:/var/log/keystone:z --volume /var/log/containers/httpd/keystone:/var/log/httpd:z registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1 /bin/bash -c chown -R keystone:keystone /var/log/keystone
Oct 13 13:59:29 standalone.localdomain sudo[81971]: pam_unix(sudo:session): session closed for user root
Oct 13 13:59:29 standalone.localdomain podman[82001]: 2025-10-13 13:59:29.322906593 +0000 UTC m=+0.033808477 container died 959f6d4af46dafd6258750e66aeef14bd6783bf284775fa69ebfb3c1ffc10264 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_init_log, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step2, name=rhosp17/openstack-keystone, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, container_name=keystone_init_log, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, build-date=2025-07-21T13:27:18, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', 'chown -R keystone:keystone /var/log/keystone'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'none', 'start_order': 1, 'user': 'root', 'volumes': ['/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z']}, distribution-scope=public, vcs-type=git, release=1, com.redhat.component=openstack-keystone-container, summary=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64)
Oct 13 13:59:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:59:29 standalone.localdomain podman[81976]: 2025-10-13 13:59:29.329180847 +0000 UTC m=+0.108555990 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=starting, distribution-scope=public, container_name=clustercheck, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:59:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v731: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:29 standalone.localdomain podman[82001]: 2025-10-13 13:59:29.459020157 +0000 UTC m=+0.169922041 container cleanup 959f6d4af46dafd6258750e66aeef14bd6783bf284775fa69ebfb3c1ffc10264 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_init_log, vendor=Red Hat, Inc., com.redhat.component=openstack-keystone-container, build-date=2025-07-21T13:27:18, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true, container_name=keystone_init_log, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'command': ['/bin/bash', '-c', 'chown -R keystone:keystone /var/log/keystone'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'none', 'start_order': 1, 'user': 'root', 'volumes': ['/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z']}, config_id=tripleo_step2, vcs-type=git, maintainer=OpenStack TripleO Team)
Oct 13 13:59:29 standalone.localdomain podman[81976]: 2025-10-13 13:59:29.459634156 +0000 UTC m=+0.239009389 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, container_name=clustercheck, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, config_id=tripleo_step2, batch=17.1_20250721.1, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 13:59:29 standalone.localdomain systemd[1]: libpod-conmon-959f6d4af46dafd6258750e66aeef14bd6783bf284775fa69ebfb3c1ffc10264.scope: Deactivated successfully.
Oct 13 13:59:29 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 13:59:29 standalone.localdomain podman[82122]: 2025-10-13 13:59:29.834978133 +0000 UTC m=+0.107261501 container create 00a66ec9ed428728a8b49c60b19c83bda4e44d06a42624a00b4f296e3c17a8c1 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_init_log, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', 'chown -R placement:placement /var/log/placement'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'none', 'start_order': 1, 'user': 'root', 'volumes': ['/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z']}, distribution-scope=public, com.redhat.component=openstack-placement-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, name=rhosp17/openstack-placement-api, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=placement_init_log, version=17.1.9, maintainer=OpenStack TripleO Team, config_id=tripleo_step2, build-date=2025-07-21T13:58:12, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:59:29 standalone.localdomain podman[82122]: 2025-10-13 13:59:29.777746762 +0000 UTC m=+0.050030220 image pull  registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1
Oct 13 13:59:29 standalone.localdomain systemd[1]: Started libpod-conmon-00a66ec9ed428728a8b49c60b19c83bda4e44d06a42624a00b4f296e3c17a8c1.scope.
Oct 13 13:59:29 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76bdac62c12a19965f1d6e6025533ee72572af7604275a0c89647dc23f734651/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76bdac62c12a19965f1d6e6025533ee72572af7604275a0c89647dc23f734651/merged/var/log/placement supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:29 standalone.localdomain podman[82122]: 2025-10-13 13:59:29.903414511 +0000 UTC m=+0.175697879 container init 00a66ec9ed428728a8b49c60b19c83bda4e44d06a42624a00b4f296e3c17a8c1 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_init_log, name=rhosp17/openstack-placement-api, io.buildah.version=1.33.12, release=1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, config_data={'command': ['/bin/bash', '-c', 'chown -R placement:placement /var/log/placement'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'none', 'start_order': 1, 'user': 'root', 'volumes': ['/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z']}, version=17.1.9, com.redhat.component=openstack-placement-api-container, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step2, batch=17.1_20250721.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 placement-api, container_name=placement_init_log, summary=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1)
Oct 13 13:59:29 standalone.localdomain podman[82122]: 2025-10-13 13:59:29.913019479 +0000 UTC m=+0.185302837 container start 00a66ec9ed428728a8b49c60b19c83bda4e44d06a42624a00b4f296e3c17a8c1 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_init_log, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-placement-api-container, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, distribution-scope=public, container_name=placement_init_log, version=17.1.9, config_data={'command': ['/bin/bash', '-c', 'chown -R placement:placement /var/log/placement'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'none', 'start_order': 1, 'user': 'root', 'volumes': ['/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z']}, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, release=1, architecture=x86_64, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 placement-api, config_id=tripleo_step2, vcs-type=git, name=rhosp17/openstack-placement-api, tcib_managed=true)
Oct 13 13:59:29 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name placement_init_log --conmon-pidfile /run/placement_init_log.pid --detach=True --label config_id=tripleo_step2 --label container_name=placement_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R placement:placement /var/log/placement'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'none', 'start_order': 1, 'user': 'root', 'volumes': ['/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/placement_init_log.log --network none --user root --volume /var/log/containers/placement:/var/log/placement:z --volume /var/log/containers/httpd/placement:/var/log/httpd:z registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1 /bin/bash -c chown -R placement:placement /var/log/placement
Oct 13 13:59:29 standalone.localdomain systemd[1]: libpod-00a66ec9ed428728a8b49c60b19c83bda4e44d06a42624a00b4f296e3c17a8c1.scope: Deactivated successfully.
Oct 13 13:59:30 standalone.localdomain podman[82230]: 2025-10-13 13:59:30.001753035 +0000 UTC m=+0.075932592 container died 00a66ec9ed428728a8b49c60b19c83bda4e44d06a42624a00b4f296e3c17a8c1 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, com.redhat.component=openstack-placement-api-container, release=1, container_name=placement_init_log, build-date=2025-07-21T13:58:12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 placement-api, summary=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-placement-api, config_id=tripleo_step2, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, config_data={'command': ['/bin/bash', '-c', 'chown -R placement:placement /var/log/placement'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'none', 'start_order': 1, 'user': 'root', 'volumes': ['/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api)
Oct 13 13:59:30 standalone.localdomain podman[82237]: 2025-10-13 13:59:30.071902966 +0000 UTC m=+0.135569177 container cleanup 00a66ec9ed428728a8b49c60b19c83bda4e44d06a42624a00b4f296e3c17a8c1 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_init_log, managed_by=tripleo_ansible, version=17.1.9, name=rhosp17/openstack-placement-api, maintainer=OpenStack TripleO Team, container_name=placement_init_log, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-placement-api-container, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'chown -R placement:placement /var/log/placement'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'none', 'start_order': 1, 'user': 'root', 'volumes': ['/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, description=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T13:58:12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 placement-api, batch=17.1_20250721.1, distribution-scope=public)
Oct 13 13:59:30 standalone.localdomain systemd[1]: libpod-conmon-00a66ec9ed428728a8b49c60b19c83bda4e44d06a42624a00b4f296e3c17a8c1.scope: Deactivated successfully.
Oct 13 13:59:30 standalone.localdomain ceph-mon[29756]: pgmap v731: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:30 standalone.localdomain haproxy[70940]: Backup Server mysql/standalone.internalapi.localdomain is UP, reason: Layer7 check passed, code: 200, check duration: 20ms. 0 active and 1 backup servers online. Running on backup. 0 sessions requeued, 0 total in queue.
Oct 13 13:59:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-76bdac62c12a19965f1d6e6025533ee72572af7604275a0c89647dc23f734651-merged.mount: Deactivated successfully.
Oct 13 13:59:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00a66ec9ed428728a8b49c60b19c83bda4e44d06a42624a00b4f296e3c17a8c1-userdata-shm.mount: Deactivated successfully.
Oct 13 13:59:30 standalone.localdomain ovs-vsctl[82361]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Oct 13 13:59:31 standalone.localdomain systemd[1]: libpod-4fd593e167be6332da3b6657ab9628ff56dbf9e68b26e438f73792213003f285.scope: Deactivated successfully.
Oct 13 13:59:31 standalone.localdomain systemd[1]: libpod-4fd593e167be6332da3b6657ab9628ff56dbf9e68b26e438f73792213003f285.scope: Consumed 2.000s CPU time.
Oct 13 13:59:31 standalone.localdomain podman[82551]: 2025-10-13 13:59:31.282689212 +0000 UTC m=+0.053744155 container died 4fd593e167be6332da3b6657ab9628ff56dbf9e68b26e438f73792213003f285 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, config_id=tripleo_step2, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, container_name=create_virtlogd_wrapper, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=2, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, maintainer=OpenStack TripleO Team)
Oct 13 13:59:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4fd593e167be6332da3b6657ab9628ff56dbf9e68b26e438f73792213003f285-userdata-shm.mount: Deactivated successfully.
Oct 13 13:59:31 standalone.localdomain podman[82551]: 2025-10-13 13:59:31.319378797 +0000 UTC m=+0.090433700 container cleanup 4fd593e167be6332da3b6657ab9628ff56dbf9e68b26e438f73792213003f285 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, container_name=create_virtlogd_wrapper, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, build-date=2025-07-21T14:56:59, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 13:59:31 standalone.localdomain systemd[1]: libpod-conmon-4fd593e167be6332da3b6657ab9628ff56dbf9e68b26e438f73792213003f285.scope: Deactivated successfully.
Oct 13 13:59:31 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper
Oct 13 13:59:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-85e96b15229bea211cd977b193c3b43c6267f471f74ae0329da8bebc24a16a86-merged.mount: Deactivated successfully.
Oct 13 13:59:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v732: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:31 standalone.localdomain systemd[1]: libpod-5949603598ddd49b1e1905db9f6e327ace8e6a9961ced30429e34a6b2c56743c.scope: Deactivated successfully.
Oct 13 13:59:31 standalone.localdomain systemd[1]: libpod-5949603598ddd49b1e1905db9f6e327ace8e6a9961ced30429e34a6b2c56743c.scope: Consumed 2.092s CPU time.
Oct 13 13:59:31 standalone.localdomain podman[81872]: 2025-10-13 13:59:31.972054848 +0000 UTC m=+3.112845737 container died 5949603598ddd49b1e1905db9f6e327ace8e6a9961ced30429e34a6b2c56743c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, architecture=x86_64, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, container_name=create_haproxy_wrapper, config_id=tripleo_step2, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 13:59:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5949603598ddd49b1e1905db9f6e327ace8e6a9961ced30429e34a6b2c56743c-userdata-shm.mount: Deactivated successfully.
Oct 13 13:59:32 standalone.localdomain podman[82596]: 2025-10-13 13:59:32.054743187 +0000 UTC m=+0.069767101 container cleanup 5949603598ddd49b1e1905db9f6e327ace8e6a9961ced30429e34a6b2c56743c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, config_id=tripleo_step2, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, container_name=create_haproxy_wrapper, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, batch=17.1_20250721.1)
Oct 13 13:59:32 standalone.localdomain systemd[1]: libpod-conmon-5949603598ddd49b1e1905db9f6e327ace8e6a9961ced30429e34a6b2c56743c.scope: Deactivated successfully.
Oct 13 13:59:32 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers
Oct 13 13:59:32 standalone.localdomain systemd[1]: libpod-393b3dc0b7e843737344e7cf830611e72adc31f1d5376dcf07d68d475264c042.scope: Deactivated successfully.
Oct 13 13:59:32 standalone.localdomain systemd[1]: libpod-393b3dc0b7e843737344e7cf830611e72adc31f1d5376dcf07d68d475264c042.scope: Consumed 1.952s CPU time.
Oct 13 13:59:32 standalone.localdomain podman[81866]: 2025-10-13 13:59:32.201146379 +0000 UTC m=+3.348955576 container died 393b3dc0b7e843737344e7cf830611e72adc31f1d5376dcf07d68d475264c042 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=create_dnsmasq_wrapper, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, name=rhosp17/openstack-neutron-dhcp-agent, version=17.1.9, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, batch=17.1_20250721.1, build-date=2025-07-21T16:28:54, vendor=Red Hat, Inc., container_name=create_dnsmasq_wrapper, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-dhcp-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, managed_by=tripleo_ansible)
Oct 13 13:59:32 standalone.localdomain podman[82637]: 2025-10-13 13:59:32.274649424 +0000 UTC m=+0.063145876 container cleanup 393b3dc0b7e843737344e7cf830611e72adc31f1d5376dcf07d68d475264c042 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=create_dnsmasq_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, 
com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=create_dnsmasq_wrapper, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step2, io.buildah.version=1.33.12)
Oct 13 13:59:32 standalone.localdomain systemd[1]: libpod-conmon-393b3dc0b7e843737344e7cf830611e72adc31f1d5376dcf07d68d475264c042.scope: Deactivated successfully.
Oct 13 13:59:32 standalone.localdomain python3[79162]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_dnsmasq_wrapper --conmon-pidfile /run/create_dnsmasq_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_dnsmasq_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_dnsmasq_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume 
/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::dhcp_agent_wrappers
Oct 13 13:59:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-988b334fbbc02e37b29bbd89994cd142c427e41613fb7debd89bcf8ffd5cd653-merged.mount: Deactivated successfully.
Oct 13 13:59:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1725e6410ebd724af9a156d33baa34cd7b609f52e9eac888ddfe9ef0e5981ec4-merged.mount: Deactivated successfully.
Oct 13 13:59:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-393b3dc0b7e843737344e7cf830611e72adc31f1d5376dcf07d68d475264c042-userdata-shm.mount: Deactivated successfully.
Oct 13 13:59:32 standalone.localdomain ceph-mon[29756]: pgmap v732: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:32 standalone.localdomain python3[82687]: ansible-file Invoked with path=/etc/systemd/system/tripleo_clustercheck.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:32 standalone.localdomain sudo[82729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 13:59:32 standalone.localdomain sudo[82729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:59:32 standalone.localdomain sudo[82729]: pam_unix(sudo:session): session closed for user root
Oct 13 13:59:32 standalone.localdomain sudo[82746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 13:59:32 standalone.localdomain sudo[82746]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:59:32 standalone.localdomain python3[82740]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_clustercheck_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:59:33 standalone.localdomain python3[82811]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363972.957255-82685-214353781394974/source dest=/etc/systemd/system/tripleo_clustercheck.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v733: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:33 standalone.localdomain sudo[82746]: pam_unix(sudo:session): session closed for user root
Oct 13 13:59:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 13:59:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:59:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 13:59:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:59:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 13:59:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:59:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 13:59:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:59:33 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev f6a13162-9836-45e4-8305-f93fb2fb4bd5 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:59:33 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev f6a13162-9836-45e4-8305-f93fb2fb4bd5 (Updating node-proxy deployment (+1 -> 1))
Oct 13 13:59:33 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event f6a13162-9836-45e4-8305-f93fb2fb4bd5 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 13:59:33 standalone.localdomain sudo[82855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 13:59:33 standalone.localdomain sudo[82855]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 13:59:33 standalone.localdomain sudo[82855]: pam_unix(sudo:session): session closed for user root
Oct 13 13:59:33 standalone.localdomain python3[82832]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 13:59:33 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:59:33 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 37 completed events
Oct 13 13:59:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 13:59:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:59:33 standalone.localdomain systemd-rc-local-generator[82890]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:59:33 standalone.localdomain systemd-sysv-generator[82893]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:59:33 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:59:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:59:34 standalone.localdomain runuser[82953]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:34 standalone.localdomain ceph-mon[29756]: pgmap v733: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 13:59:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 13:59:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:59:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 13:59:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 13:59:34 standalone.localdomain python3[82948]: ansible-systemd Invoked with state=restarted name=tripleo_clustercheck.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:59:35 standalone.localdomain runuser[82953]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:35 standalone.localdomain runuser[83106]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v734: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:35 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:59:35 standalone.localdomain systemd-rc-local-generator[83180]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:59:35 standalone.localdomain systemd-sysv-generator[83184]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:59:35 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:59:35 standalone.localdomain systemd[1]: Starting clustercheck container...
Oct 13 13:59:35 standalone.localdomain runuser[83106]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:36 standalone.localdomain runuser[83213]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:36 standalone.localdomain systemd[1]: Started clustercheck container.
Oct 13 13:59:36 standalone.localdomain python3[83270]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:36 standalone.localdomain ceph-mon[29756]: pgmap v734: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:36 standalone.localdomain runuser[83213]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:37 standalone.localdomain python3[83315]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=standalone step=2 update_config_hash_only=False
Oct 13 13:59:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v735: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:37 standalone.localdomain python3[83331]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:38 standalone.localdomain ceph-mon[29756]: pgmap v735: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 13:59:38 standalone.localdomain podman[83357]: 2025-10-13 13:59:38.821162146 +0000 UTC m=+0.091959907 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-07-21T12:58:43, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, com.redhat.component=openstack-memcached-container, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, 
vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vcs-type=git, version=17.1.9, config_id=tripleo_step1, name=rhosp17/openstack-memcached, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.buildah.version=1.33.12)
Oct 13 13:59:38 standalone.localdomain podman[83357]: 2025-10-13 13:59:38.845850711 +0000 UTC m=+0.116648532 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:43, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, com.redhat.component=openstack-memcached-container, config_id=tripleo_step1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 memcached, release=1, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, managed_by=tripleo_ansible, version=17.1.9)
Oct 13 13:59:38 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 13:59:39 standalone.localdomain python3[83399]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 13:59:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:59:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v736: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:39 standalone.localdomain python3[83405]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/root/.ansible/tmp/ansible-tmp-1760363979.049419-83389-262835412805317/source _original_basename=tmpddpd7r_4 follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:40 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.b scrub starts
Oct 13 13:59:40 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.b scrub ok
Oct 13 13:59:40 standalone.localdomain python3[83478]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:40 standalone.localdomain ceph-mon[29756]: pgmap v736: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:40 standalone.localdomain ceph-mon[29756]: 5.b scrub starts
Oct 13 13:59:40 standalone.localdomain ceph-mon[29756]: 5.b scrub ok
Oct 13 13:59:41 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 13 13:59:41 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 13 13:59:41 standalone.localdomain ceph-mon[29756]: 4.d scrub starts
Oct 13 13:59:41 standalone.localdomain ceph-mon[29756]: 4.d scrub ok
Oct 13 13:59:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v737: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:42 standalone.localdomain ceph-mon[29756]: pgmap v737: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:42 standalone.localdomain python3[83550]: ansible-tripleo_container_manage Invoked with config_id=ovn_cluster_north_db_server config_dir=/var/lib/tripleo-config/container-startup-config/step_0 config_patterns=ovn_cluster_north_db_server.json config_overrides={} concurrency=1 log_base_path=/var/log/containers/stdouts debug=False
Oct 13 13:59:43 standalone.localdomain podman[83680]: 2025-10-13 13:59:43.258181198 +0000 UTC m=+0.101482053 container create ab93f5a8aebab950d7042eb12f896fc198b839db33e9cc399846b7838f5955fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1, name=ovn_cluster_north_db_server, vcs-type=git, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-nb-db-server/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, config_id=ovn_cluster_north_db_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ovn_cluster_north_db_server, vcs-ref=32be821f5e6e2eafd9d374c1cb4e3391f317d8a0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-nb-db-server-container, summary=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:29, name=rhosp17/openstack-ovn-nb-db-server, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ovn_cluster_north_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., release=1, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 13:59:43 standalone.localdomain podman[83680]: 2025-10-13 13:59:43.205879349 +0000 UTC m=+0.049180274 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1
Oct 13 13:59:43 standalone.localdomain systemd[1]: Started libpod-conmon-ab93f5a8aebab950d7042eb12f896fc198b839db33e9cc399846b7838f5955fd.scope.
Oct 13 13:59:43 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:43 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07eab6a202bcefe1fc265b9ba1883e76a885192cf5ee2f262b48971b016e8214/merged/run/openvswitch supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:43 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07eab6a202bcefe1fc265b9ba1883e76a885192cf5ee2f262b48971b016e8214/merged/etc/openvswitch supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:43 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07eab6a202bcefe1fc265b9ba1883e76a885192cf5ee2f262b48971b016e8214/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:43 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07eab6a202bcefe1fc265b9ba1883e76a885192cf5ee2f262b48971b016e8214/merged/etc/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:43 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07eab6a202bcefe1fc265b9ba1883e76a885192cf5ee2f262b48971b016e8214/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:43 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07eab6a202bcefe1fc265b9ba1883e76a885192cf5ee2f262b48971b016e8214/merged/var/lib/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:43 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07eab6a202bcefe1fc265b9ba1883e76a885192cf5ee2f262b48971b016e8214/merged/var/lib/openvswitch supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:43 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07eab6a202bcefe1fc265b9ba1883e76a885192cf5ee2f262b48971b016e8214/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:43 standalone.localdomain podman[83680]: 2025-10-13 13:59:43.369300957 +0000 UTC m=+0.212601812 container init ab93f5a8aebab950d7042eb12f896fc198b839db33e9cc399846b7838f5955fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1, name=ovn_cluster_north_db_server, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-nb-db-server/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, batch=17.1_20250721.1, container_name=ovn_cluster_north_db_server, vcs-type=git, com.redhat.component=openstack-ovn-nb-db-server-container, name=rhosp17/openstack-ovn-nb-db-server, release=1, config_id=ovn_cluster_north_db_server, maintainer=OpenStack TripleO Team, vcs-ref=32be821f5e6e2eafd9d374c1cb4e3391f317d8a0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:29, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_north_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', 
'/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, vendor=Red Hat, Inc.)
Oct 13 13:59:43 standalone.localdomain podman[83680]: 2025-10-13 13:59:43.379882294 +0000 UTC m=+0.223183149 container start ab93f5a8aebab950d7042eb12f896fc198b839db33e9cc399846b7838f5955fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1, name=ovn_cluster_north_db_server, container_name=ovn_cluster_north_db_server, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, version=17.1.9, vcs-ref=32be821f5e6e2eafd9d374c1cb4e3391f317d8a0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_north_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-nb-db-server/images/17.1.9-1, name=rhosp17/openstack-ovn-nb-db-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, io.openshift.expose-services=, com.redhat.component=openstack-ovn-nb-db-server-container, build-date=2025-07-21T13:58:29, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, config_id=ovn_cluster_north_db_server, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git)
Oct 13 13:59:43 standalone.localdomain python3[83550]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_cluster_north_db_server --conmon-pidfile /run/ovn_cluster_north_db_server.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=ovn_cluster_north_db_server --label container_name=ovn_cluster_north_db_server --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_north_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_cluster_north_db_server.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ovn_cluster_north_db_server.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z --volume /var/lib/openvswitch/ovn:/run/openvswitch:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/lib/openvswitch/ovn:/var/lib/ovn:shared,z --volume /var/lib/openvswitch/ovn:/etc/openvswitch:shared,z --volume /var/lib/openvswitch/ovn:/etc/ovn:shared,z --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/ovn:z --volume /var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1
Oct 13 13:59:43 standalone.localdomain sudo[83698]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 13:59:43 standalone.localdomain sudo[83698]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:59:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v738: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:43 standalone.localdomain sudo[83698]: pam_unix(sudo:session): session closed for user root
Oct 13 13:59:43 standalone.localdomain ovsdb-tool[83797]: ovs|00001|ovsdb|WARN|/usr/share/ovn/ovn-nb.ovsschema: changed 2 columns in 'OVN_Northbound' database from ephemeral to persistent, including 'status' column in 'Connection' table, because clusters do not support ephemeral columns
Oct 13 13:59:43 standalone.localdomain ovsdb-server[83804]: ovs|00001|vlog|INFO|opened log file /var/log/ovn/ovsdb-server-nb.log
Oct 13 13:59:43 standalone.localdomain ovn-nbctl[83808]: ovs|00001|ovn_dbctl|INFO|Called as ovn-nbctl --no-leader-only --db=unix:/var/run/ovn/ovnnb_db.sock init
Oct 13 13:59:43 standalone.localdomain ovn-nbctl[83808]: ovs|00002|db_ctl_base|ERR|unix:/var/run/ovn/ovnnb_db.sock: database connection failed (No such file or directory)
Oct 13 13:59:43 standalone.localdomain ovsdb-server[83804]: ovs|00002|raft|INFO|term 2: 3426829 ms timeout expired, starting election (vote)
Oct 13 13:59:43 standalone.localdomain ovsdb-server[83804]: ovs|00003|raft|INFO|term 2: elected leader by 1+ of 1 servers
Oct 13 13:59:43 standalone.localdomain ovsdb-server[83804]: ovs|00004|raft|INFO|local server ID is 4a1c
Oct 13 13:59:43 standalone.localdomain ovsdb-server[83804]: ovs|00005|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 3.3.5-110.el9fdp
Oct 13 13:59:43 standalone.localdomain ovsdb-server[83804]: ovs|00006|raft|INFO|Election timer changed from 1000 to 10000
Oct 13 13:59:43 standalone.localdomain ovsdb-client[83812]: ovs|00001|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connecting...
Oct 13 13:59:43 standalone.localdomain ovsdb-client[83812]: ovs|00002|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connected
Oct 13 13:59:43 standalone.localdomain python3[83817]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_cluster_north_db_server.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:44 standalone.localdomain python3[83868]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_cluster_north_db_server_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:59:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:59:44 standalone.localdomain python3[83878]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363984.0798428-83792-155718319912193/source dest=/etc/systemd/system/tripleo_ovn_cluster_north_db_server.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:44 standalone.localdomain ceph-mon[29756]: pgmap v738: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:44 standalone.localdomain python3[83880]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 13:59:44 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:59:44 standalone.localdomain systemd-rc-local-generator[83947]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:59:44 standalone.localdomain systemd-sysv-generator[83950]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:59:44 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:59:45 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct 13 13:59:45 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct 13 13:59:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v739: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:45 standalone.localdomain ceph-mon[29756]: 5.c scrub starts
Oct 13 13:59:45 standalone.localdomain ceph-mon[29756]: 5.c scrub ok
Oct 13 13:59:45 standalone.localdomain python3[84007]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_cluster_north_db_server.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:59:45 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:59:45 standalone.localdomain systemd-sysv-generator[84044]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:59:45 standalone.localdomain systemd-rc-local-generator[84040]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:59:45 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct 13 13:59:46 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct 13 13:59:46 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:59:46 standalone.localdomain systemd[1]: Starting ovn_cluster_north_db_server container...
Oct 13 13:59:46 standalone.localdomain systemd[1]: Started ovn_cluster_north_db_server container.
Oct 13 13:59:46 standalone.localdomain ceph-mon[29756]: pgmap v739: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:46 standalone.localdomain ceph-mon[29756]: 4.e scrub starts
Oct 13 13:59:46 standalone.localdomain ceph-mon[29756]: 4.e scrub ok
Oct 13 13:59:46 standalone.localdomain python3[84075]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:47 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct 13 13:59:47 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct 13 13:59:47 standalone.localdomain runuser[84096]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v740: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:47 standalone.localdomain ceph-mon[29756]: 5.d scrub starts
Oct 13 13:59:47 standalone.localdomain ceph-mon[29756]: 5.d scrub ok
Oct 13 13:59:47 standalone.localdomain runuser[84096]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:48 standalone.localdomain runuser[84200]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:48 standalone.localdomain ceph-mon[29756]: pgmap v740: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:48 standalone.localdomain runuser[84200]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:48 standalone.localdomain runuser[84257]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 13:59:49 standalone.localdomain python3[84318]: ansible-tripleo_container_manage Invoked with config_id=ovn_cluster_south_db_server config_dir=/var/lib/tripleo-config/container-startup-config/step_0 config_patterns=ovn_cluster_south_db_server.json config_overrides={} concurrency=1 log_base_path=/var/log/containers/stdouts debug=False
Oct 13 13:59:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:59:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v741: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:49 standalone.localdomain podman[84362]: 2025-10-13 13:59:49.595577598 +0000 UTC m=+0.077640615 container create 7272b751557c8cdc0da43a14f05ee355557fd0fa740aa3b5d15804858f1764dc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1, name=ovn_cluster_south_db_server, build-date=2025-07-21T13:30:03, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=adefe2c1281b2ff32c3fa30eaf67641e1f998cc3, com.redhat.component=openstack-ovn-sb-db-server-container, container_name=ovn_cluster_south_db_server, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, version=17.1.9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-sb-db-server/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, batch=17.1_20250721.1, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_south_db_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, release=1, config_id=ovn_cluster_south_db_server, vcs-type=git, name=rhosp17/openstack-ovn-sb-db-server)
Oct 13 13:59:49 standalone.localdomain runuser[84257]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 13:59:49 standalone.localdomain systemd[1]: Started libpod-conmon-7272b751557c8cdc0da43a14f05ee355557fd0fa740aa3b5d15804858f1764dc.scope.
Oct 13 13:59:49 standalone.localdomain podman[84362]: 2025-10-13 13:59:49.555813157 +0000 UTC m=+0.037876254 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1
Oct 13 13:59:49 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99acd49219591d82b934db38de143aa2ac465a793b0260c168f14dd18945007e/merged/etc/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99acd49219591d82b934db38de143aa2ac465a793b0260c168f14dd18945007e/merged/run/openvswitch supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99acd49219591d82b934db38de143aa2ac465a793b0260c168f14dd18945007e/merged/etc/openvswitch supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99acd49219591d82b934db38de143aa2ac465a793b0260c168f14dd18945007e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99acd49219591d82b934db38de143aa2ac465a793b0260c168f14dd18945007e/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99acd49219591d82b934db38de143aa2ac465a793b0260c168f14dd18945007e/merged/var/lib/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99acd49219591d82b934db38de143aa2ac465a793b0260c168f14dd18945007e/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99acd49219591d82b934db38de143aa2ac465a793b0260c168f14dd18945007e/merged/var/lib/openvswitch supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:49 standalone.localdomain podman[84362]: 2025-10-13 13:59:49.672219319 +0000 UTC m=+0.154282336 container init 7272b751557c8cdc0da43a14f05ee355557fd0fa740aa3b5d15804858f1764dc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1, name=ovn_cluster_south_db_server, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., vcs-ref=adefe2c1281b2ff32c3fa30eaf67641e1f998cc3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_south_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, 
com.redhat.component=openstack-ovn-sb-db-server-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-sb-db-server/images/17.1.9-1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public, architecture=x86_64, config_id=ovn_cluster_south_db_server, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:30:03, name=rhosp17/openstack-ovn-sb-db-server, maintainer=OpenStack TripleO Team, container_name=ovn_cluster_south_db_server)
Oct 13 13:59:49 standalone.localdomain podman[84362]: 2025-10-13 13:59:49.680753333 +0000 UTC m=+0.162816350 container start 7272b751557c8cdc0da43a14f05ee355557fd0fa740aa3b5d15804858f1764dc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1, name=ovn_cluster_south_db_server, maintainer=OpenStack TripleO Team, config_id=ovn_cluster_south_db_server, container_name=ovn_cluster_south_db_server, tcib_managed=true, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, io.openshift.expose-services=, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_south_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-sb-db-server/images/17.1.9-1, vcs-type=git, vcs-ref=adefe2c1281b2ff32c3fa30eaf67641e1f998cc3, name=rhosp17/openstack-ovn-sb-db-server, description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, com.redhat.component=openstack-ovn-sb-db-server-container, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, build-date=2025-07-21T13:30:03)
Oct 13 13:59:49 standalone.localdomain python3[84318]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_cluster_south_db_server --conmon-pidfile /run/ovn_cluster_south_db_server.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=ovn_cluster_south_db_server --label container_name=ovn_cluster_south_db_server --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_south_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_cluster_south_db_server.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ovn_cluster_south_db_server.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z --volume /var/lib/openvswitch/ovn:/run/openvswitch:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/lib/openvswitch/ovn:/var/lib/ovn:shared,z --volume /var/lib/openvswitch/ovn:/etc/openvswitch:shared,z --volume /var/lib/openvswitch/ovn:/etc/ovn:shared,z --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/ovn:z --volume /var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1
Oct 13 13:59:49 standalone.localdomain sudo[84384]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 13:59:49 standalone.localdomain sudo[84384]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:59:49 standalone.localdomain sudo[84384]: pam_unix(sudo:session): session closed for user root
Oct 13 13:59:49 standalone.localdomain ovsdb-tool[84484]: ovs|00001|ovsdb|WARN|/usr/share/ovn/ovn-sb.ovsschema: changed 2 columns in 'OVN_Southbound' database from ephemeral to persistent, including 'status' column in 'Connection' table, because clusters do not support ephemeral columns
Oct 13 13:59:49 standalone.localdomain ovsdb-server[84491]: ovs|00001|vlog|INFO|opened log file /var/log/ovn/ovsdb-server-sb.log
Oct 13 13:59:49 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct 13 13:59:49 standalone.localdomain ovsdb-server[84491]: ovs|00002|raft|INFO|term 2: 3433135 ms timeout expired, starting election (vote)
Oct 13 13:59:49 standalone.localdomain ovsdb-server[84491]: ovs|00003|raft|INFO|term 2: elected leader by 1+ of 1 servers
Oct 13 13:59:49 standalone.localdomain ovsdb-server[84491]: ovs|00004|raft|INFO|local server ID is fd0a
Oct 13 13:59:49 standalone.localdomain ovn-sbctl[84495]: ovs|00001|ovn_dbctl|INFO|Called as ovn-sbctl --no-leader-only --db=unix:/var/run/ovn/ovnsb_db.sock init
Oct 13 13:59:49 standalone.localdomain ovsdb-server[84491]: ovs|00005|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 3.3.5-110.el9fdp
Oct 13 13:59:49 standalone.localdomain ovsdb-server[84491]: ovs|00006|raft|INFO|Election timer changed from 1000 to 16000
Oct 13 13:59:49 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct 13 13:59:49 standalone.localdomain ovsdb-client[84504]: ovs|00001|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting...
Oct 13 13:59:49 standalone.localdomain ovsdb-client[84504]: ovs|00002|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connected
Oct 13 13:59:50 standalone.localdomain python3[84539]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_cluster_south_db_server.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:50 standalone.localdomain python3[84567]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_cluster_south_db_server_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:59:50 standalone.localdomain ceph-mon[29756]: pgmap v741: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:50 standalone.localdomain ceph-mon[29756]: 4.f scrub starts
Oct 13 13:59:50 standalone.localdomain ceph-mon[29756]: 4.f scrub ok
Oct 13 13:59:50 standalone.localdomain python3[84577]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363990.3577003-84479-167353864078653/source dest=/etc/systemd/system/tripleo_ovn_cluster_south_db_server.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:50 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct 13 13:59:50 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct 13 13:59:51 standalone.localdomain python3[84579]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 13:59:51 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:59:51 standalone.localdomain systemd-sysv-generator[84615]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:59:51 standalone.localdomain systemd-rc-local-generator[84611]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:59:51 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:59:51 standalone.localdomain ceph-mon[29756]: 5.e scrub starts
Oct 13 13:59:51 standalone.localdomain ceph-mon[29756]: 5.e scrub ok
Oct 13 13:59:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v742: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:52 standalone.localdomain python3[84625]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_cluster_south_db_server.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:59:52 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:59:52 standalone.localdomain systemd-rc-local-generator[84660]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:59:52 standalone.localdomain systemd-sysv-generator[84664]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:59:52 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:59:52 standalone.localdomain ceph-mon[29756]: pgmap v742: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:52 standalone.localdomain systemd[1]: Starting ovn_cluster_south_db_server container...
Oct 13 13:59:52 standalone.localdomain systemd[1]: Started ovn_cluster_south_db_server container.
Oct 13 13:59:53 standalone.localdomain python3[84772]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:59:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:59:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:59:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:59:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 13:59:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 13:59:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v743: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:53 standalone.localdomain ovsdb-server[83804]: ovs|00007|memory|INFO|11216 kB peak resident set size after 10.0 seconds
Oct 13 13:59:53 standalone.localdomain ovsdb-server[83804]: ovs|00008|memory|INFO|atoms:17 cells:20 monitors:0 n-weak-refs:0 raft-log:2 txn-history:1
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: pgmap v743: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.505638) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363994505896, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1352, "num_deletes": 251, "total_data_size": 1041950, "memory_usage": 1068048, "flush_reason": "Manual Compaction"}
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363994513925, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1012003, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16588, "largest_seqno": 17939, "table_properties": {"data_size": 1006677, "index_size": 2607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13199, "raw_average_key_size": 20, "raw_value_size": 995056, "raw_average_value_size": 1516, "num_data_blocks": 121, "num_entries": 656, "num_filter_entries": 656, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760363894, "oldest_key_time": 1760363894, "file_creation_time": 1760363994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 8326 microseconds, and 4105 cpu microseconds.
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.513980) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1012003 bytes OK
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.514002) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.516035) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.516059) EVENT_LOG_v1 {"time_micros": 1760363994516052, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.516082) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1035667, prev total WAL file size 1035667, number of live WAL files 2.
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.516832) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(988KB)], [38(4192KB)]
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363994516879, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 5305134, "oldest_snapshot_seqno": -1}
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 3175 keys, 4889245 bytes, temperature: kUnknown
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363994539953, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 4889245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4865860, "index_size": 14329, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 74487, "raw_average_key_size": 23, "raw_value_size": 4806329, "raw_average_value_size": 1513, "num_data_blocks": 624, "num_entries": 3175, "num_filter_entries": 3175, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760363994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.540199) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 4889245 bytes
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.541832) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 229.1 rd, 211.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 4.1 +0.0 blob) out(4.7 +0.0 blob), read-write-amplify(10.1) write-amplify(4.8) OK, records in: 3693, records dropped: 518 output_compression: NoCompression
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.541860) EVENT_LOG_v1 {"time_micros": 1760363994541849, "job": 18, "event": "compaction_finished", "compaction_time_micros": 23159, "compaction_time_cpu_micros": 15003, "output_level": 6, "num_output_files": 1, "total_output_size": 4889245, "num_input_records": 3693, "num_output_records": 3175, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363994542234, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760363994542941, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.516706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.543107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.543115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.543118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.543122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:59:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-13:59:54.543125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 13:59:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v744: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:55 standalone.localdomain python3[85011]: ansible-tripleo_container_manage Invoked with config_id=ovn_cluster_northd config_dir=/var/lib/tripleo-config/container-startup-config/step_0 config_patterns=ovn_cluster_northd.json config_overrides={} concurrency=1 log_base_path=/var/log/containers/stdouts debug=False
Oct 13 13:59:56 standalone.localdomain podman[85052]: 2025-10-13 13:59:56.131601136 +0000 UTC m=+0.096274771 container create 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, batch=17.1_20250721.1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, description=Red Hat OpenStack Platform 17.1 ovn-northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, container_name=ovn_cluster_northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, distribution-scope=public, config_id=ovn_cluster_northd, build-date=2025-07-21T13:30:04, 
com.redhat.component=openstack-ovn-northd-container, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 13:59:56 standalone.localdomain podman[85052]: 2025-10-13 13:59:56.08426309 +0000 UTC m=+0.048936745 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1
Oct 13 13:59:56 standalone.localdomain systemd[1]: Started libpod-conmon-89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.scope.
Oct 13 13:59:56 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 13:59:56 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3942391ca9f80237581af7e7b18037e35031ab92d48c420734c7963817de1c30/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:56 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3942391ca9f80237581af7e7b18037e35031ab92d48c420734c7963817de1c30/merged/run/openvswitch supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:56 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3942391ca9f80237581af7e7b18037e35031ab92d48c420734c7963817de1c30/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:56 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3942391ca9f80237581af7e7b18037e35031ab92d48c420734c7963817de1c30/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 13:59:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 13:59:56 standalone.localdomain podman[85052]: 2025-10-13 13:59:56.242723665 +0000 UTC m=+0.207397310 container init 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, release=1, build-date=2025-07-21T13:30:04, com.redhat.component=openstack-ovn-northd-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, container_name=ovn_cluster_northd, io.buildah.version=1.33.12, vcs-type=git, config_id=ovn_cluster_northd, io.openshift.expose-services=, tcib_managed=true, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4)
Oct 13 13:59:56 standalone.localdomain sudo[85080]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 13:59:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 13:59:56 standalone.localdomain sudo[85080]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 13:59:56 standalone.localdomain podman[85052]: 2025-10-13 13:59:56.276700617 +0000 UTC m=+0.241374262 container start 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, batch=17.1_20250721.1, build-date=2025-07-21T13:30:04, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, name=rhosp17/openstack-ovn-northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_id=ovn_cluster_northd, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', 
'/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_cluster_northd, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.component=openstack-ovn-northd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, architecture=x86_64, tcib_managed=true)
Oct 13 13:59:56 standalone.localdomain python3[85011]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_cluster_northd --conmon-pidfile /run/ovn_cluster_northd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_cluster_northd --label container_name=ovn_cluster_northd --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_cluster_northd.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /var/lib/openvswitch/ovn:/run/openvswitch:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/ovn:z --volume /var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1
Oct 13 13:59:56 standalone.localdomain sudo[85080]: pam_unix(sudo:session): session closed for user root
Oct 13 13:59:56 standalone.localdomain podman[85081]: 2025-10-13 13:59:56.375575307 +0000 UTC m=+0.089101519 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=starting, vcs-type=git, container_name=ovn_cluster_northd, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-northd, version=17.1.9, config_id=ovn_cluster_northd, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, build-date=2025-07-21T13:30:04, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-northd-container, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 13:59:56 standalone.localdomain podman[85081]: 2025-10-13 13:59:56.386205436 +0000 UTC m=+0.099731678 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, name=rhosp17/openstack-ovn-northd, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04, com.redhat.component=openstack-ovn-northd-container, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, container_name=ovn_cluster_northd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=ovn_cluster_northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, release=1)
Oct 13 13:59:56 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 13:59:56 standalone.localdomain ceph-mon[29756]: pgmap v744: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:56 standalone.localdomain python3[85134]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_cluster_northd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:57 standalone.localdomain python3[85136]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_cluster_northd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 13:59:57 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct 13 13:59:57 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct 13 13:59:57 standalone.localdomain python3[85150]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760363997.048041-85132-92242844862586/source dest=/etc/systemd/system/tripleo_ovn_cluster_northd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 13:59:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v745: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:57 standalone.localdomain ceph-mon[29756]: 4.10 scrub starts
Oct 13 13:59:57 standalone.localdomain ceph-mon[29756]: 4.10 scrub ok
Oct 13 13:59:57 standalone.localdomain python3[85156]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 13:59:57 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:59:57 standalone.localdomain systemd-sysv-generator[85179]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:59:57 standalone.localdomain systemd-rc-local-generator[85173]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:59:57 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:59:58 standalone.localdomain ceph-mon[29756]: pgmap v745: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:58 standalone.localdomain python3[85202]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_cluster_northd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 13:59:58 standalone.localdomain systemd[1]: Reloading.
Oct 13 13:59:58 standalone.localdomain systemd-sysv-generator[85232]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 13:59:58 standalone.localdomain systemd-rc-local-generator[85227]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 13:59:58 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 13:59:59 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct 13 13:59:59 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct 13 13:59:59 standalone.localdomain systemd[1]: Starting ovn_cluster_northd container...
Oct 13 13:59:59 standalone.localdomain systemd[1]: Started ovn_cluster_northd container.
Oct 13 13:59:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 13:59:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v746: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 13:59:59 standalone.localdomain podman[85267]: 2025-10-13 13:59:59.506372429 +0000 UTC m=+0.118574502 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, name=rhosp17/openstack-mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 13:59:59 standalone.localdomain ceph-mon[29756]: 5.f scrub starts
Oct 13 13:59:59 standalone.localdomain ceph-mon[29756]: 5.f scrub ok
Oct 13 13:59:59 standalone.localdomain podman[85298]: 2025-10-13 13:59:59.586790007 +0000 UTC m=+0.090791900 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.openshift.expose-services=, release=1, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:08:11, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, com.redhat.component=openstack-haproxy-container, maintainer=OpenStack TripleO Team, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 13:59:59 standalone.localdomain podman[85267]: 2025-10-13 13:59:59.5907346 +0000 UTC m=+0.202936693 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, batch=17.1_20250721.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, release=1, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T12:58:45, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, name=rhosp17/openstack-mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 13:59:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 13:59:59 standalone.localdomain podman[85298]: 2025-10-13 13:59:59.622964627 +0000 UTC m=+0.126966510 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-haproxy, io.openshift.expose-services=, release=1, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-haproxy-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 13:59:59 standalone.localdomain python3[85294]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec ovn_cluster_north_db_server bash -c "ovn-nbctl --no-leader-only --inactivity-probe=60000 set-connection ptcp:6641:0.0.0.0"
                                                        _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 13:59:59 standalone.localdomain podman[85278]: 2025-10-13 13:59:59.639164649 +0000 UTC m=+0.215483281 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-rabbitmq, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-07-21T13:08:05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, com.redhat.component=openstack-rabbitmq-container, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1)
Oct 13 13:59:59 standalone.localdomain podman[85357]: 2025-10-13 13:59:59.731904039 +0000 UTC m=+0.081757421 container exec ab93f5a8aebab950d7042eb12f896fc198b839db33e9cc399846b7838f5955fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1, name=ovn_cluster_north_db_server, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, tcib_managed=true, config_id=ovn_cluster_north_db_server, build-date=2025-07-21T13:58:29, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_north_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, vcs-ref=32be821f5e6e2eafd9d374c1cb4e3391f317d8a0, 
version=17.1.9, managed_by=tripleo_ansible, container_name=ovn_cluster_north_db_server, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-nb-db-server/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, com.redhat.component=openstack-ovn-nb-db-server-container, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-nb-db-server, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server)
Oct 13 13:59:59 standalone.localdomain ovn-nbctl[85387]: ovs|00001|ovn_dbctl|INFO|Called as ovn-nbctl --no-leader-only --inactivity-probe=60000 set-connection ptcp:6641:0.0.0.0
Oct 13 13:59:59 standalone.localdomain podman[85358]: 2025-10-13 13:59:59.776665424 +0000 UTC m=+0.120543221 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, com.redhat.component=openstack-rabbitmq-container, name=rhosp17/openstack-rabbitmq, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, build-date=2025-07-21T13:08:05, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 13:59:59 standalone.localdomain podman[85357]: 2025-10-13 13:59:59.777037686 +0000 UTC m=+0.126891108 container exec_died ab93f5a8aebab950d7042eb12f896fc198b839db33e9cc399846b7838f5955fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1, name=ovn_cluster_north_db_server, release=1, container_name=ovn_cluster_north_db_server, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-nb-db-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_north_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, vcs-ref=32be821f5e6e2eafd9d374c1cb4e3391f317d8a0, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-nb-db-server/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, tcib_managed=true, build-date=2025-07-21T13:58:29, config_id=ovn_cluster_north_db_server, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-nb-db-server, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 13:59:59 standalone.localdomain podman[85332]: 2025-10-13 13:59:59.743425385 +0000 UTC m=+0.138351202 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, com.redhat.component=openstack-mariadb-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=clustercheck, io.openshift.expose-services=, config_id=tripleo_step2)
Oct 13 13:59:59 standalone.localdomain podman[85332]: 2025-10-13 13:59:59.797676885 +0000 UTC m=+0.192602722 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, com.redhat.component=openstack-mariadb-container, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step2, 
io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, name=rhosp17/openstack-mariadb, build-date=2025-07-21T12:58:45, container_name=clustercheck, tcib_managed=true)
Oct 13 13:59:59 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 13:59:59 standalone.localdomain podman[85278]: 2025-10-13 13:59:59.833341858 +0000 UTC m=+0.409660490 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, build-date=2025-07-21T13:08:05, summary=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, tcib_managed=true, name=rhosp17/openstack-rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-rabbitmq-container)
Oct 13 13:59:59 standalone.localdomain ovsdb-server[84491]: ovs|00007|memory|INFO|11240 kB peak resident set size after 10.0 seconds
Oct 13 13:59:59 standalone.localdomain ovsdb-server[84491]: ovs|00008|memory|INFO|atoms:21 cells:28 monitors:0 n-weak-refs:0 raft-log:3 txn-history:2 txn-history-atoms:4
Oct 13 14:00:00 standalone.localdomain python3[85428]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec ovn_cluster_south_db_server bash -c "ovn-sbctl --no-leader-only --inactivity-probe=60000 set-connection ptcp:6642:0.0.0.0"
                                                        _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:00:00 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct 13 14:00:00 standalone.localdomain podman[85431]: 2025-10-13 14:00:00.178606116 +0000 UTC m=+0.104192827 container exec 7272b751557c8cdc0da43a14f05ee355557fd0fa740aa3b5d15804858f1764dc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1, name=ovn_cluster_south_db_server, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, name=rhosp17/openstack-ovn-sb-db-server, description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:30:03, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=ovn_cluster_south_db_server, com.redhat.component=openstack-ovn-sb-db-server-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=ovn_cluster_south_db_server, architecture=x86_64, distribution-scope=public, vcs-ref=adefe2c1281b2ff32c3fa30eaf67641e1f998cc3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_south_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-sb-db-server/images/17.1.9-1, release=1)
Oct 13 14:00:00 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct 13 14:00:00 standalone.localdomain ovn-sbctl[85500]: ovs|00001|ovn_dbctl|INFO|Called as ovn-sbctl --no-leader-only --inactivity-probe=60000 set-connection ptcp:6642:0.0.0.0
Oct 13 14:00:00 standalone.localdomain podman[85431]: 2025-10-13 14:00:00.19911366 +0000 UTC m=+0.124700401 container exec_died 7272b751557c8cdc0da43a14f05ee355557fd0fa740aa3b5d15804858f1764dc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1, name=ovn_cluster_south_db_server, vendor=Red Hat, Inc., tcib_managed=true, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, vcs-ref=adefe2c1281b2ff32c3fa30eaf67641e1f998cc3, com.redhat.component=openstack-ovn-sb-db-server-container, version=17.1.9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-sb-db-server/images/17.1.9-1, build-date=2025-07-21T13:30:03, container_name=ovn_cluster_south_db_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_south_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', 
'/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, io.openshift.expose-services=, config_id=ovn_cluster_south_db_server, name=rhosp17/openstack-ovn-sb-db-server, summary=Red Hat OpenStack Platform 17.1 ovn-sb-db-server)
Oct 13 14:00:00 standalone.localdomain runuser[85503]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:00:00 standalone.localdomain python3[85567]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:00:00 standalone.localdomain ceph-mon[29756]: pgmap v746: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:00 standalone.localdomain ceph-mon[29756]: 4.11 scrub starts
Oct 13 14:00:00 standalone.localdomain ceph-mon[29756]: 4.11 scrub ok
Oct 13 14:00:01 standalone.localdomain runuser[85503]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:00:01 standalone.localdomain runuser[85608]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:00:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v747: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:01 standalone.localdomain ansible-async_wrapper.py[85676]: Invoked with 493058270737 3600 /root/.ansible/tmp/ansible-tmp-1760364001.2884429-85656-19465304919695/AnsiballZ_command.py _
Oct 13 14:00:01 standalone.localdomain ansible-async_wrapper.py[85679]: Starting module and watcher
Oct 13 14:00:01 standalone.localdomain ansible-async_wrapper.py[85679]: Start watching 85680 (3600)
Oct 13 14:00:01 standalone.localdomain ansible-async_wrapper.py[85680]: Start module (85680)
Oct 13 14:00:01 standalone.localdomain ansible-async_wrapper.py[85676]: Return async_wrapper task started.
Oct 13 14:00:01 standalone.localdomain python3[85685]: ansible-ansible.legacy.async_status Invoked with jid=493058270737.85676 mode=status _async_dir=/tmp/.ansible_async
Oct 13 14:00:01 standalone.localdomain runuser[85608]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:00:01 standalone.localdomain runuser[85699]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:00:02 standalone.localdomain ceph-mon[29756]: pgmap v747: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:02 standalone.localdomain runuser[85699]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:00:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v748: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:00:04 standalone.localdomain ceph-mon[29756]: pgmap v748: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v749: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:06 standalone.localdomain puppet-user[85688]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 14:00:06 standalone.localdomain puppet-user[85688]:    (file: /etc/puppet/hiera.yaml)
Oct 13 14:00:06 standalone.localdomain puppet-user[85688]: Warning: Undefined variable '::deploy_config_name';
Oct 13 14:00:06 standalone.localdomain puppet-user[85688]:    (file & line not available)
Oct 13 14:00:06 standalone.localdomain puppet-user[85688]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 14:00:06 standalone.localdomain puppet-user[85688]:    (file & line not available)
Oct 13 14:00:06 standalone.localdomain puppet-user[85688]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Oct 13 14:00:06 standalone.localdomain ansible-async_wrapper.py[85679]: 85680 still running (3600)
Oct 13 14:00:06 standalone.localdomain puppet-user[85688]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Oct 13 14:00:06 standalone.localdomain puppet-user[85688]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.45 seconds
Oct 13 14:00:06 standalone.localdomain ceph-mon[29756]: pgmap v749: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v750: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:08 standalone.localdomain ceph-mon[29756]: pgmap v750: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:00:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v751: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:00:09 standalone.localdomain podman[86203]: 2025-10-13 14:00:09.826326925 +0000 UTC m=+0.088968055 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, batch=17.1_20250721.1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, container_name=memcached, distribution-scope=public, name=rhosp17/openstack-memcached, build-date=2025-07-21T12:58:43, description=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, 
io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-memcached-container, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vcs-type=git)
Oct 13 14:00:09 standalone.localdomain podman[86203]: 2025-10-13 14:00:09.849694378 +0000 UTC m=+0.112335488 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, container_name=memcached, name=rhosp17/openstack-memcached, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, distribution-scope=public, version=17.1.9, build-date=2025-07-21T12:58:43, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.component=openstack-memcached-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64)
Oct 13 14:00:09 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:00:10 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct 13 14:00:10 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct 13 14:00:10 standalone.localdomain ceph-mon[29756]: pgmap v751: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:10 standalone.localdomain ceph-mon[29756]: 5.10 scrub starts
Oct 13 14:00:10 standalone.localdomain ceph-mon[29756]: 5.10 scrub ok
Oct 13 14:00:10 standalone.localdomain puppet-user[85688]: Notice: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]/ensure: created
Oct 13 14:00:11 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct 13 14:00:11 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct 13 14:00:11 standalone.localdomain ceph-mon[29756]: 4.12 scrub starts
Oct 13 14:00:11 standalone.localdomain ceph-mon[29756]: 4.12 scrub ok
Oct 13 14:00:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v752: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:11 standalone.localdomain ansible-async_wrapper.py[85679]: 85680 still running (3595)
Oct 13 14:00:11 standalone.localdomain python3[86311]: ansible-ansible.legacy.async_status Invoked with jid=493058270737.85676 mode=status _async_dir=/tmp/.ansible_async
Oct 13 14:00:12 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct 13 14:00:12 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct 13 14:00:12 standalone.localdomain ceph-mon[29756]: pgmap v752: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:12 standalone.localdomain ceph-mon[29756]: 5.11 scrub starts
Oct 13 14:00:12 standalone.localdomain ceph-mon[29756]: 5.11 scrub ok
Oct 13 14:00:13 standalone.localdomain puppet-user[85688]: Notice: /Stage[main]/Pacemaker::Resource_op_defaults/Pcmk_resource_op_default[bundle]/ensure: created
Oct 13 14:00:13 standalone.localdomain runuser[86405]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:00:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v753: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:13 standalone.localdomain runuser[86405]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:00:13 standalone.localdomain runuser[86488]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:00:14 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct 13 14:00:14 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct 13 14:00:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:00:14 standalone.localdomain ceph-mon[29756]: pgmap v753: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:14 standalone.localdomain ceph-mon[29756]: 4.13 scrub starts
Oct 13 14:00:14 standalone.localdomain ceph-mon[29756]: 4.13 scrub ok
Oct 13 14:00:14 standalone.localdomain runuser[86488]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]: Deprecation Warning: This command is deprecated and will be removed. Please use 'pcs property config' instead.
Oct 13 14:00:14 standalone.localdomain runuser[86588]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]: Notice: Applied catalog in 8.14 seconds
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]: Application:
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:    Initial environment: production
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:    Converged environment: production
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:          Run mode: user
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]: Changes:
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:             Total: 2
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]: Events:
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:           Success: 2
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:             Total: 2
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]: Resources:
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:           Changed: 2
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:       Out of sync: 2
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:             Total: 28
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]: Time:
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:        Filebucket: 0.00
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:          Schedule: 0.00
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:           Package: 0.00
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:         File line: 0.00
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:              File: 0.00
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:              User: 0.01
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:            Augeas: 0.01
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:           Service: 0.07
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:    Config retrieval: 0.50
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:     Pcmk property: 1.56
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:              Exec: 2.00
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:          Last run: 1760364014
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:    Pcmk resource op default: 2.15
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:    Pcmk resource default: 2.17
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:    Transaction evaluation: 8.12
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:    Catalog application: 8.14
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:             Total: 8.15
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]: Version:
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:            Config: 1760364006
Oct 13 14:00:14 standalone.localdomain puppet-user[85688]:            Puppet: 7.10.0
Oct 13 14:00:14 standalone.localdomain ansible-async_wrapper.py[85680]: Module complete (85680)
Oct 13 14:00:15 standalone.localdomain runuser[86588]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:00:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v754: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:16 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct 13 14:00:16 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct 13 14:00:16 standalone.localdomain ceph-mon[29756]: pgmap v754: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:16 standalone.localdomain ceph-mon[29756]: 5.12 scrub starts
Oct 13 14:00:16 standalone.localdomain ceph-mon[29756]: 5.12 scrub ok
Oct 13 14:00:16 standalone.localdomain ansible-async_wrapper.py[85679]: Done in kid B.
Oct 13 14:00:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v755: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:18 standalone.localdomain ceph-mon[29756]: pgmap v755: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:00:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v756: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:20 standalone.localdomain ceph-mon[29756]: pgmap v756: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:21 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct 13 14:00:21 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct 13 14:00:21 standalone.localdomain ceph-mon[29756]: 4.14 scrub starts
Oct 13 14:00:21 standalone.localdomain ceph-mon[29756]: 4.14 scrub ok
Oct 13 14:00:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v757: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:22 standalone.localdomain python3[86846]: ansible-ansible.legacy.async_status Invoked with jid=493058270737.85676 mode=status _async_dir=/tmp/.ansible_async
Oct 13 14:00:22 standalone.localdomain ceph-mon[29756]: pgmap v757: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:22 standalone.localdomain python3[86857]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 14:00:23 standalone.localdomain python3[86869]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:00:23
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'images', 'manila_metadata', 'manila_data', '.mgr']
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:00:23 standalone.localdomain sshd[86925]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:00:23 standalone.localdomain python3[86969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:00:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v758: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:23 standalone.localdomain python3[86973]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpf_vh5br_ recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 14:00:23 standalone.localdomain python3[86979]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:00:23 standalone.localdomain sshd[86925]: Received disconnect from 80.94.93.176 port 19338:11:  [preauth]
Oct 13 14:00:23 standalone.localdomain sshd[86925]: Disconnected from authenticating user root 80.94.93.176 port 19338 [preauth]
Oct 13 14:00:24 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct 13 14:00:24 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct 13 14:00:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:00:24 standalone.localdomain ceph-mon[29756]: pgmap v758: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:24 standalone.localdomain ceph-mon[29756]: 5.13 scrub starts
Oct 13 14:00:24 standalone.localdomain ceph-mon[29756]: 5.13 scrub ok
Oct 13 14:00:24 standalone.localdomain python3[87110]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Oct 13 14:00:25 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct 13 14:00:25 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct 13 14:00:25 standalone.localdomain python3[87213]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:00:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v759: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:25 standalone.localdomain ceph-mon[29756]: 4.15 scrub starts
Oct 13 14:00:25 standalone.localdomain ceph-mon[29756]: 4.15 scrub ok
Oct 13 14:00:25 standalone.localdomain runuser[87227]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:00:26 standalone.localdomain python3[87253]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:00:26 standalone.localdomain ceph-mon[29756]: pgmap v759: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:26 standalone.localdomain python3[87298]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:00:26 standalone.localdomain runuser[87227]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:00:26 standalone.localdomain python3[87309]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:00:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:00:26 standalone.localdomain runuser[87330]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:00:26 standalone.localdomain podman[87319]: 2025-10-13 14:00:26.828559325 +0000 UTC m=+0.094500976 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ovn-northd-container, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T13:30:04, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_cluster_northd, config_id=ovn_cluster_northd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ovn-northd)
Oct 13 14:00:26 standalone.localdomain podman[87319]: 2025-10-13 14:00:26.837224365 +0000 UTC m=+0.103165996 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-northd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 
ovn-northd, name=rhosp17/openstack-ovn-northd, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, container_name=ovn_cluster_northd, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T13:30:04, io.openshift.expose-services=, config_id=ovn_cluster_northd)
Oct 13 14:00:26 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:00:27 standalone.localdomain python3[87366]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:00:27 standalone.localdomain python3[87407]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:00:27 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct 13 14:00:27 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct 13 14:00:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v760: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:27 standalone.localdomain runuser[87330]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:00:27 standalone.localdomain runuser[87430]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:00:27 standalone.localdomain python3[87422]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:00:27 standalone.localdomain ceph-mon[29756]: 4.16 scrub starts
Oct 13 14:00:27 standalone.localdomain ceph-mon[29756]: 4.16 scrub ok
Oct 13 14:00:27 standalone.localdomain python3[87480]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:00:28 standalone.localdomain python3[87494]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:00:28 standalone.localdomain runuser[87430]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:00:28 standalone.localdomain python3[87513]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:00:28 standalone.localdomain ceph-mon[29756]: pgmap v760: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:28 standalone.localdomain python3[87522]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:00:28 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:00:28 standalone.localdomain systemd-sysv-generator[87547]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:00:28 standalone.localdomain systemd-rc-local-generator[87543]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:00:28 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:00:29 standalone.localdomain python3[87581]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:00:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:00:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v761: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:29 standalone.localdomain python3[87585]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:00:29 standalone.localdomain python3[87599]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:00:30 standalone.localdomain python3[87603]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:00:30 standalone.localdomain ceph-mon[29756]: pgmap v761: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:30 standalone.localdomain python3[87617]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:00:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:00:30 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:00:30 standalone.localdomain systemd-rc-local-generator[87705]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:00:30 standalone.localdomain systemd-sysv-generator[87709]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:00:30 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:00:30 standalone.localdomain podman[87672]: 2025-10-13 14:00:30.646548094 +0000 UTC m=+0.101507995 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., config_id=tripleo_step2, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, vcs-type=git, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=clustercheck)
Oct 13 14:00:30 standalone.localdomain podman[87672]: 2025-10-13 14:00:30.699933248 +0000 UTC m=+0.154893209 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, container_name=clustercheck, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, name=rhosp17/openstack-mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, version=17.1.9, distribution-scope=public, com.redhat.component=openstack-mariadb-container, tcib_managed=true, vcs-type=git, build-date=2025-07-21T12:58:45, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step2, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team)
Oct 13 14:00:30 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:00:30 standalone.localdomain systemd[1]: Starting Create netns directory...
Oct 13 14:00:30 standalone.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 14:00:30 standalone.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 14:00:30 standalone.localdomain systemd[1]: Finished Create netns directory.
Oct 13 14:00:31 standalone.localdomain python3[87762]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Oct 13 14:00:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v762: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:32 standalone.localdomain ceph-mon[29756]: pgmap v762: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:33 standalone.localdomain python3[87857]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Oct 13 14:00:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v763: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:33 standalone.localdomain sudo[88105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:00:33 standalone.localdomain podman[88068]: 2025-10-13 14:00:33.667121036 +0000 UTC m=+0.067020360 container create 25716feee99bcd3cd2497feee6c71712573a1666b141c257cbf4d304edce3feb (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_db_sync, description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, com.redhat.component=openstack-glance-api-container, container_name=glance_api_db_sync, vcs-type=git, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, batch=17.1_20250721.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, name=rhosp17/openstack-glance-api, managed_by=tripleo_ansible, config_data={'cap_add': ['AUDIT_WRITE'], 'command': "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, release=1)
Oct 13 14:00:33 standalone.localdomain sudo[88105]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:00:33 standalone.localdomain sudo[88105]: pam_unix(sudo:session): session closed for user root
Oct 13 14:00:33 standalone.localdomain podman[88077]: 2025-10-13 14:00:33.69904459 +0000 UTC m=+0.094115293 container create 3b9ee7dc83a27e63189c1fdb52d39050081ae38b49a5fb112427f00ad322f26c (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1, name=cinder_volume_init_logs, architecture=x86_64, build-date=2025-07-21T16:13:39, distribution-scope=public, container_name=cinder_volume_init_logs, name=rhosp17/openstack-cinder-volume, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder']}, com.redhat.component=openstack-cinder-volume-container, description=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-volume, release=1, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, version=17.1.9, vendor=Red Hat, Inc.)
Oct 13 14:00:33 standalone.localdomain systemd[1]: Started libpod-conmon-25716feee99bcd3cd2497feee6c71712573a1666b141c257cbf4d304edce3feb.scope.
Oct 13 14:00:33 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:33 standalone.localdomain systemd[1]: Started libpod-conmon-3b9ee7dc83a27e63189c1fdb52d39050081ae38b49a5fb112427f00ad322f26c.scope.
Oct 13 14:00:33 standalone.localdomain sudo[88147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:00:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf4e134c83bd0efc99bc4dcd3651f85181d00d223a72ceb281bbba817d781e1/merged/var/log/glance supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf4e134c83bd0efc99bc4dcd3651f85181d00d223a72ceb281bbba817d781e1/merged/var/lib/glance supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf4e134c83bd0efc99bc4dcd3651f85181d00d223a72ceb281bbba817d781e1/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf4e134c83bd0efc99bc4dcd3651f85181d00d223a72ceb281bbba817d781e1/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf4e134c83bd0efc99bc4dcd3651f85181d00d223a72ceb281bbba817d781e1/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:33 standalone.localdomain podman[88068]: 2025-10-13 14:00:33.632800066 +0000 UTC m=+0.032699410 image pull  registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1
Oct 13 14:00:33 standalone.localdomain sudo[88147]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:00:33 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57a101a7ef6ab2b3f3e0eb8648411f7f09d848d387bab284c11bac1038a2c94b/merged/var/log/cinder supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:33 standalone.localdomain podman[88077]: 2025-10-13 14:00:33.646239764 +0000 UTC m=+0.041310487 image pull  registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1
Oct 13 14:00:33 standalone.localdomain podman[88077]: 2025-10-13 14:00:33.747300003 +0000 UTC m=+0.142370696 container init 3b9ee7dc83a27e63189c1fdb52d39050081ae38b49a5fb112427f00ad322f26c (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1, name=cinder_volume_init_logs, architecture=x86_64, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1, io.buildah.version=1.33.12, container_name=cinder_volume_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, name=rhosp17/openstack-cinder-volume, io.openshift.expose-services=, com.redhat.component=openstack-cinder-volume-container, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-volume, summary=Red Hat OpenStack Platform 17.1 cinder-volume, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:13:39, maintainer=OpenStack TripleO Team)
Oct 13 14:00:33 standalone.localdomain podman[88084]: 2025-10-13 14:00:33.651793537 +0000 UTC m=+0.038535920 image pull  registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1
Oct 13 14:00:33 standalone.localdomain podman[88092]: 2025-10-13 14:00:33.654721749 +0000 UTC m=+0.036164277 image pull  registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1
Oct 13 14:00:33 standalone.localdomain podman[88077]: 2025-10-13 14:00:33.759239176 +0000 UTC m=+0.154309879 container start 3b9ee7dc83a27e63189c1fdb52d39050081ae38b49a5fb112427f00ad322f26c (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1, name=cinder_volume_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, container_name=cinder_volume_init_logs, summary=Red Hat OpenStack Platform 17.1 cinder-volume, build-date=2025-07-21T16:13:39, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, config_id=tripleo_step3, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, name=rhosp17/openstack-cinder-volume, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, architecture=x86_64, io.buildah.version=1.33.12, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder']}, com.redhat.component=openstack-cinder-volume-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.expose-services=, release=1, tcib_managed=true, version=17.1.9)
Oct 13 14:00:33 standalone.localdomain podman[88092]: 2025-10-13 14:00:33.760759483 +0000 UTC m=+0.142201991 container create 159b033647a38fcc89f8c08dea702931096d81138d0095e80c7e66e6528432ee (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1, name=cinder_backup_init_logs, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-backup, container_name=cinder_backup_init_logs, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-backup, build-date=2025-07-21T16:18:24, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.expose-services=, architecture=x86_64, release=1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, batch=17.1_20250721.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cinder-backup-container)
Oct 13 14:00:33 standalone.localdomain systemd[1]: libpod-3b9ee7dc83a27e63189c1fdb52d39050081ae38b49a5fb112427f00ad322f26c.scope: Deactivated successfully.
Oct 13 14:00:33 standalone.localdomain podman[88068]: 2025-10-13 14:00:33.765351936 +0000 UTC m=+0.165251260 container init 25716feee99bcd3cd2497feee6c71712573a1666b141c257cbf4d304edce3feb (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_db_sync, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step3, container_name=glance_api_db_sync, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.expose-services=, config_data={'cap_add': ['AUDIT_WRITE'], 'command': "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Oct 13 14:00:33 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name cinder_volume_init_logs --conmon-pidfile /run/cinder_volume_init_logs.pid --detach=True --label config_id=tripleo_step3 --label container_name=cinder_volume_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/cinder_volume_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/cinder:/var/log/cinder registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1 /bin/bash -c chown -R cinder:cinder /var/log/cinder
Oct 13 14:00:33 standalone.localdomain podman[88068]: 2025-10-13 14:00:33.774417949 +0000 UTC m=+0.174317273 container start 25716feee99bcd3cd2497feee6c71712573a1666b141c257cbf4d304edce3feb (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_db_sync, summary=Red Hat OpenStack Platform 17.1 glance-api, config_data={'cap_add': ['AUDIT_WRITE'], 'command': "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-glance-api, release=1, container_name=glance_api_db_sync, tcib_managed=true, build-date=2025-07-21T13:58:20)
Oct 13 14:00:33 standalone.localdomain podman[88068]: 2025-10-13 14:00:33.774645946 +0000 UTC m=+0.174545290 container attach 25716feee99bcd3cd2497feee6c71712573a1666b141c257cbf4d304edce3feb (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_db_sync, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, config_data={'cap_add': ['AUDIT_WRITE'], 'command': "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, config_id=tripleo_step3, release=1, io.openshift.expose-services=, container_name=glance_api_db_sync, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, vcs-type=git, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team)
Oct 13 14:00:33 standalone.localdomain systemd[1]: Started libpod-conmon-159b033647a38fcc89f8c08dea702931096d81138d0095e80c7e66e6528432ee.scope.
Oct 13 14:00:33 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:33 standalone.localdomain podman[88084]: 2025-10-13 14:00:33.798499869 +0000 UTC m=+0.185242242 container create 8da90624bd166c94a8f74056f7c1500633c39b5ccb88fc4a26099af974c7f9d6 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_db_sync, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, name=rhosp17/openstack-cinder-api, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-cinder-api-container, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:58:55, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, container_name=cinder_api_db_sync, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/cinder:/var/log/cinder:z']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public)
Oct 13 14:00:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ac4d190a9bc6bdffd7c21680e10b5fe3531211b9daf7795599b16a76f460b2/merged/var/log/cinder supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:33 standalone.localdomain podman[88134]: 2025-10-13 14:00:33.706675198 +0000 UTC m=+0.043469556 image pull  registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1
Oct 13 14:00:33 standalone.localdomain podman[88134]: 2025-10-13 14:00:33.810468882 +0000 UTC m=+0.147263220 container create 2cccb79f7615a4a0733153a9d0379cd68366ac98e392eb972d213ddff7046a93 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine_db_sync, io.openshift.expose-services=, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, architecture=x86_64, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-engine-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine_db_sync, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, distribution-scope=public, release=1, vcs-type=git, version=17.1.9, 
config_id=tripleo_step3, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, build-date=2025-07-21T15:44:11, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:00:33 standalone.localdomain systemd[1]: Started libpod-conmon-8da90624bd166c94a8f74056f7c1500633c39b5ccb88fc4a26099af974c7f9d6.scope.
Oct 13 14:00:33 standalone.localdomain systemd[1]: Started libpod-conmon-2cccb79f7615a4a0733153a9d0379cd68366ac98e392eb972d213ddff7046a93.scope.
Oct 13 14:00:33 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53bdd984fb7049ab7f79c8d6ad9e4d45d8ae9e4d29126e230f8a5d1abdc81fab/merged/var/log/cinder supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:33 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c800d59e331506d01353fe0ada7b93084552dc4f0ee734e61788460834519d95/merged/var/log/heat supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:33 standalone.localdomain podman[88084]: 2025-10-13 14:00:33.849793557 +0000 UTC m=+0.236535930 container init 8da90624bd166c94a8f74056f7c1500633c39b5ccb88fc4a26099af974c7f9d6 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_db_sync, io.buildah.version=1.33.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-cinder-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, build-date=2025-07-21T15:58:55, version=17.1.9, release=1, config_id=tripleo_step3, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z']}, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-cinder-api, maintainer=OpenStack TripleO Team, container_name=cinder_api_db_sync)
Oct 13 14:00:33 standalone.localdomain podman[88134]: 2025-10-13 14:00:33.852210493 +0000 UTC m=+0.189004831 container init 2cccb79f7615a4a0733153a9d0379cd68366ac98e392eb972d213ddff7046a93 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine_db_sync, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-engine-container, io.buildah.version=1.33.12, container_name=heat_engine_db_sync, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, 
name=rhosp17/openstack-heat-engine, vendor=Red Hat, Inc., config_id=tripleo_step3, tcib_managed=true, release=1, description=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11)
Oct 13 14:00:33 standalone.localdomain podman[88186]: 2025-10-13 14:00:33.859525491 +0000 UTC m=+0.085992591 container died 3b9ee7dc83a27e63189c1fdb52d39050081ae38b49a5fb112427f00ad322f26c (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1, name=cinder_volume_init_logs, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cinder-volume, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_volume_init_logs, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, com.redhat.component=openstack-cinder-volume-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder']}, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T16:13:39, description=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-volume)
Oct 13 14:00:33 standalone.localdomain sudo[88224]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:00:33 standalone.localdomain sudo[88225]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:00:33 standalone.localdomain sudo[88224]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:00:33 standalone.localdomain podman[88092]: 2025-10-13 14:00:33.870304946 +0000 UTC m=+0.251747454 container init 159b033647a38fcc89f8c08dea702931096d81138d0095e80c7e66e6528432ee (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1, name=cinder_backup_init_logs, io.openshift.expose-services=, name=rhosp17/openstack-cinder-backup, build-date=2025-07-21T16:18:24, distribution-scope=public, config_id=tripleo_step3, vcs-type=git, com.redhat.component=openstack-cinder-backup-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cinder-backup, managed_by=tripleo_ansible, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z']}, description=Red Hat OpenStack Platform 17.1 cinder-backup, container_name=cinder_backup_init_logs, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, vendor=Red Hat, Inc.)
Oct 13 14:00:33 standalone.localdomain sudo[88225]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:00:33 standalone.localdomain sudo[88225]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:00:33 standalone.localdomain su[88228]: (to glance) root on none
Oct 13 14:00:33 standalone.localdomain su[88228]: pam_systemd(su:session): Failed to connect to system bus: No such file or directory
Oct 13 14:00:33 standalone.localdomain su[88228]: pam_unix(su:session): session opened for user glance(uid=42415) by (uid=0)
Oct 13 14:00:33 standalone.localdomain podman[88092]: 2025-10-13 14:00:33.878722189 +0000 UTC m=+0.260164697 container start 159b033647a38fcc89f8c08dea702931096d81138d0095e80c7e66e6528432ee (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1, name=cinder_backup_init_logs, name=rhosp17/openstack-cinder-backup, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cinder-backup, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, vendor=Red Hat, Inc., vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z']}, description=Red Hat OpenStack Platform 17.1 cinder-backup, container_name=cinder_backup_init_logs, managed_by=tripleo_ansible, build-date=2025-07-21T16:18:24, release=1, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, com.redhat.component=openstack-cinder-backup-container)
Oct 13 14:00:33 standalone.localdomain systemd[1]: libpod-159b033647a38fcc89f8c08dea702931096d81138d0095e80c7e66e6528432ee.scope: Deactivated successfully.
Oct 13 14:00:33 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name cinder_backup_init_logs --conmon-pidfile /run/cinder_backup_init_logs.pid --detach=True --label config_id=tripleo_step3 --label container_name=cinder_backup_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/cinder_backup_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/cinder:/var/log/cinder:z registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1 /bin/bash -c chown -R cinder:cinder /var/log/cinder
Oct 13 14:00:33 standalone.localdomain sudo[88233]:   glance : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:00:33 standalone.localdomain sudo[88233]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:00:33 standalone.localdomain sudo[88233]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42415)
Oct 13 14:00:33 standalone.localdomain podman[88134]: 2025-10-13 14:00:33.917662682 +0000 UTC m=+0.254457030 container start 2cccb79f7615a4a0733153a9d0379cd68366ac98e392eb972d213ddff7046a93 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine_db_sync, container_name=heat_engine_db_sync, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-engine-container, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:44:11, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, 
vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:00:33 standalone.localdomain podman[88134]: 2025-10-13 14:00:33.918083395 +0000 UTC m=+0.254877733 container attach 2cccb79f7615a4a0733153a9d0379cd68366ac98e392eb972d213ddff7046a93 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine_db_sync, summary=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, build-date=2025-07-21T15:44:11, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, architecture=x86_64, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, release=1, container_name=heat_engine_db_sync, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-engine-container, vcs-type=git, 
maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1)
Oct 13 14:00:33 standalone.localdomain sudo[88224]: pam_unix(sudo:session): session closed for user root
Oct 13 14:00:33 standalone.localdomain podman[88234]: 2025-10-13 14:00:33.92883504 +0000 UTC m=+0.040022818 container died 159b033647a38fcc89f8c08dea702931096d81138d0095e80c7e66e6528432ee (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1, name=cinder_backup_init_logs, architecture=x86_64, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, container_name=cinder_backup_init_logs, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-backup, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cinder-backup, build-date=2025-07-21T16:18:24, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-backup-container, description=Red Hat OpenStack Platform 17.1 cinder-backup, io.buildah.version=1.33.12, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup)
Oct 13 14:00:33 standalone.localdomain sudo[88233]: pam_unix(sudo:session): session closed for user root
Oct 13 14:00:33 standalone.localdomain sudo[88225]: pam_unix(sudo:session): session closed for user root
Oct 13 14:00:33 standalone.localdomain podman[88084]: 2025-10-13 14:00:33.962697725 +0000 UTC m=+0.349440088 container start 8da90624bd166c94a8f74056f7c1500633c39b5ccb88fc4a26099af974c7f9d6 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_db_sync, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cinder-api, container_name=cinder_api_db_sync, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z']}, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-cinder-api-container, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, managed_by=tripleo_ansible, name=rhosp17/openstack-cinder-api, batch=17.1_20250721.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T15:58:55, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true)
Oct 13 14:00:33 standalone.localdomain podman[88084]: 2025-10-13 14:00:33.963513171 +0000 UTC m=+0.350255544 container attach 8da90624bd166c94a8f74056f7c1500633c39b5ccb88fc4a26099af974c7f9d6 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_db_sync, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, name=rhosp17/openstack-cinder-api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z']}, description=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
cinder-api, version=17.1.9, build-date=2025-07-21T15:58:55, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.component=openstack-cinder-api-container, summary=Red Hat OpenStack Platform 17.1 cinder-api, container_name=cinder_api_db_sync, vcs-type=git)
Oct 13 14:00:33 standalone.localdomain podman[88186]: 2025-10-13 14:00:33.978496398 +0000 UTC m=+0.204963508 container cleanup 3b9ee7dc83a27e63189c1fdb52d39050081ae38b49a5fb112427f00ad322f26c (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1, name=cinder_volume_init_logs, tcib_managed=true, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-volume:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T16:13:39, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cinder-volume, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, com.redhat.component=openstack-cinder-volume-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, container_name=cinder_volume_init_logs, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cinder-volume, version=17.1.9, name=rhosp17/openstack-cinder-volume, io.openshift.expose-services=)
Oct 13 14:00:33 standalone.localdomain systemd[1]: libpod-conmon-3b9ee7dc83a27e63189c1fdb52d39050081ae38b49a5fb112427f00ad322f26c.scope: Deactivated successfully.
Oct 13 14:00:34 standalone.localdomain podman[88234]: 2025-10-13 14:00:34.014865341 +0000 UTC m=+0.126053099 container cleanup 159b033647a38fcc89f8c08dea702931096d81138d0095e80c7e66e6528432ee (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1, name=cinder_backup_init_logs, vendor=Red Hat, Inc., container_name=cinder_backup_init_logs, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-backup, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, managed_by=tripleo_ansible, name=rhosp17/openstack-cinder-backup, com.redhat.component=openstack-cinder-backup-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 cinder-backup, architecture=x86_64, release=1, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/bin/bash', '-c', 'chown -R cinder:cinder /var/log/cinder'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-backup:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/cinder:/var/log/cinder:z']}, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-07-21T16:18:24, version=17.1.9)
Oct 13 14:00:34 standalone.localdomain su[88269]: (to heat) root on none
Oct 13 14:00:34 standalone.localdomain systemd[1]: libpod-conmon-159b033647a38fcc89f8c08dea702931096d81138d0095e80c7e66e6528432ee.scope: Deactivated successfully.
Oct 13 14:00:34 standalone.localdomain su[88269]: pam_unix(su:session): session opened for user heat(uid=42418) by (uid=0)
Oct 13 14:00:34 standalone.localdomain su[88269]: pam_lastlog(su:session): file /var/log/lastlog created
Oct 13 14:00:34 standalone.localdomain su[88269]: pam_lastlog(su:session): unable to open /var/log/btmp: No such file or directory
Oct 13 14:00:34 standalone.localdomain su[88291]: (to cinder) root on none
Oct 13 14:00:34 standalone.localdomain su[88291]: pam_systemd(su:session): Failed to connect to system bus: No such file or directory
Oct 13 14:00:34 standalone.localdomain su[88291]: pam_unix(su:session): session opened for user cinder(uid=42407) by (uid=0)
Oct 13 14:00:34 standalone.localdomain sudo[88147]: pam_unix(sudo:session): session closed for user root
Oct 13 14:00:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:00:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:00:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:00:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:00:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:00:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:00:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:00:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:00:34 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 32bb9fd8-9635-4b18-be18-42208aab5527 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:00:34 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 32bb9fd8-9635-4b18-be18-42208aab5527 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:00:34 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 32bb9fd8-9635-4b18-be18-42208aab5527 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:00:34 standalone.localdomain podman[88428]: 2025-10-13 14:00:34.367674875 +0000 UTC m=+0.077686672 container create 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, container_name=horizon, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:15, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 
horizon, vcs-type=git, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-horizon-container, description=Red Hat OpenStack Platform 17.1 horizon, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, io.openshift.expose-services=, architecture=x86_64)
Oct 13 14:00:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:00:34 standalone.localdomain sudo[88448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:00:34 standalone.localdomain systemd[1]: Started libpod-conmon-808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.scope.
Oct 13 14:00:34 standalone.localdomain sudo[88448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:00:34 standalone.localdomain sudo[88448]: pam_unix(sudo:session): session closed for user root
Oct 13 14:00:34 standalone.localdomain podman[88447]: 2025-10-13 14:00:34.412986087 +0000 UTC m=+0.087217429 container create fd9cd2fa2a7531a1532a0cf9c7d72ba7ee3833e067699c9a6586e171e04be9e1 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_db_sync, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=keystone_db_sync, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, build-date=2025-07-21T13:27:18, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, config_id=tripleo_step3, vendor=Red Hat, Inc., config_data={'command': ['/usr/bin/bootstrap_host_exec', 'keystone', '/usr/local/bin/kolla_start'], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, name=rhosp17/openstack-keystone, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.component=openstack-keystone-container, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, summary=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:00:34 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:34 standalone.localdomain podman[88428]: 2025-10-13 14:00:34.323267771 +0000 UTC m=+0.033279568 image pull  registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1
Oct 13 14:00:34 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca11670fe9d9ec31418d826d4a147639a712e2ddff8e509e9c02ae8cdba4233/merged/var/tmp supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:34 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca11670fe9d9ec31418d826d4a147639a712e2ddff8e509e9c02ae8cdba4233/merged/var/log/horizon supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:34 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca11670fe9d9ec31418d826d4a147639a712e2ddff8e509e9c02ae8cdba4233/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:34 standalone.localdomain systemd[1]: Started libpod-conmon-fd9cd2fa2a7531a1532a0cf9c7d72ba7ee3833e067699c9a6586e171e04be9e1.scope.
Oct 13 14:00:34 standalone.localdomain podman[88447]: 2025-10-13 14:00:34.358338834 +0000 UTC m=+0.032570166 image pull  registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1
Oct 13 14:00:34 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:34 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33a74ddba43a8c0deb95d8e678513479d7605354398a4facd4f04bb49f5adb5/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:34 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33a74ddba43a8c0deb95d8e678513479d7605354398a4facd4f04bb49f5adb5/merged/var/log/keystone supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:34 standalone.localdomain podman[88447]: 2025-10-13 14:00:34.473156742 +0000 UTC m=+0.147388044 container init fd9cd2fa2a7531a1532a0cf9c7d72ba7ee3833e067699c9a6586e171e04be9e1 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_db_sync, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, version=17.1.9, container_name=keystone_db_sync, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_data={'command': ['/usr/bin/bootstrap_host_exec', 'keystone', '/usr/local/bin/kolla_start'], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., 
name=rhosp17/openstack-keystone, com.redhat.component=openstack-keystone-container, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:00:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:00:34 standalone.localdomain podman[88428]: 2025-10-13 14:00:34.475620228 +0000 UTC m=+0.185631995 container init 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, release=1, container_name=horizon, com.redhat.component=openstack-horizon-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 horizon, 
name=rhosp17/openstack-horizon, io.openshift.expose-services=, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:15)
Oct 13 14:00:34 standalone.localdomain sudo[88489]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:00:34 standalone.localdomain sudo[88489]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:00:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:00:34 standalone.localdomain podman[88447]: 2025-10-13 14:00:34.497765328 +0000 UTC m=+0.171996670 container start fd9cd2fa2a7531a1532a0cf9c7d72ba7ee3833e067699c9a6586e171e04be9e1 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_db_sync, name=rhosp17/openstack-keystone, io.buildah.version=1.33.12, config_data={'command': ['/usr/bin/bootstrap_host_exec', 'keystone', '/usr/local/bin/kolla_start'], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, version=17.1.9, container_name=keystone_db_sync, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, architecture=x86_64, 
com.redhat.component=openstack-keystone-container, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, config_id=tripleo_step3, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:00:34 standalone.localdomain podman[88447]: 2025-10-13 14:00:34.500648288 +0000 UTC m=+0.174879620 container attach fd9cd2fa2a7531a1532a0cf9c7d72ba7ee3833e067699c9a6586e171e04be9e1 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_db_sync, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-keystone-container, name=rhosp17/openstack-keystone, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, version=17.1.9, config_id=tripleo_step3, container_name=keystone_db_sync, summary=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, batch=17.1_20250721.1, release=1, description=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'command': ['/usr/bin/bootstrap_host_exec', 'keystone', '/usr/local/bin/kolla_start'], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=)
Oct 13 14:00:34 standalone.localdomain podman[88428]: 2025-10-13 14:00:34.514035125 +0000 UTC m=+0.224046932 container start 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 horizon, summary=Red Hat OpenStack Platform 17.1 horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step3, io.buildah.version=1.33.12, com.redhat.component=openstack-horizon-container, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, name=rhosp17/openstack-horizon, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:15, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, container_name=horizon, maintainer=OpenStack TripleO Team, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1)
Oct 13 14:00:34 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name horizon --conmon-pidfile /run/horizon.pid --detach=True --env ENABLE_DESIGNATE=yes --env ENABLE_HEAT=yes --env ENABLE_IRONIC=yes --env ENABLE_MANILA=yes --env ENABLE_OCTAVIA=yes --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=eff67e8d67d5f186cef6e48df141386b --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=horizon --label managed_by=tripleo_ansible --label config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/horizon.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/horizon:/var/log/horizon:z --volume /var/log/containers/httpd/horizon:/var/log/httpd:z --volume /var/tmp/horizon:/var/tmp:z --volume /var/www:/var/www:ro registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1
Oct 13 14:00:34 standalone.localdomain ceph-mon[29756]: pgmap v763: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:00:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:00:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:00:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:00:34 standalone.localdomain podman[88493]: 2025-10-13 14:00:34.596824815 +0000 UTC m=+0.090689547 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=starting, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, summary=Red Hat OpenStack Platform 17.1 horizon, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-horizon, description=Red Hat OpenStack Platform 17.1 horizon, build-date=2025-07-21T13:58:15, architecture=x86_64, version=17.1.9, com.redhat.component=openstack-horizon-container, tcib_managed=true, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', 
'/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, vcs-type=git)
Oct 13 14:00:34 standalone.localdomain sudo[88489]: pam_unix(sudo:session): session closed for user root
Oct 13 14:00:34 standalone.localdomain podman[88493]: 2025-10-13 14:00:34.621171353 +0000 UTC m=+0.115036075 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, distribution-scope=public, version=17.1.9, container_name=horizon, io.openshift.expose-services=, release=1, architecture=x86_64, vcs-type=git, build-date=2025-07-21T13:58:15, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, com.redhat.component=openstack-horizon-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, summary=Red Hat OpenStack Platform 17.1 horizon, vendor=Red Hat, Inc., config_id=tripleo_step3, name=rhosp17/openstack-horizon, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:00:34 standalone.localdomain sudo[88531]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:00:34 standalone.localdomain sudo[88531]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:00:34 standalone.localdomain podman[88493]: unhealthy
Oct 13 14:00:34 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:00:34 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Failed with result 'exit-code'.
Oct 13 14:00:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-57a101a7ef6ab2b3f3e0eb8648411f7f09d848d387bab284c11bac1038a2c94b-merged.mount: Deactivated successfully.
Oct 13 14:00:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3b9ee7dc83a27e63189c1fdb52d39050081ae38b49a5fb112427f00ad322f26c-userdata-shm.mount: Deactivated successfully.
Oct 13 14:00:34 standalone.localdomain sudo[88531]: pam_unix(sudo:session): session closed for user root
Oct 13 14:00:34 standalone.localdomain sudo[88579]:     root : PWD=/ ; USER=keystone ; COMMAND=/usr/bin/keystone-manage db_sync
Oct 13 14:00:34 standalone.localdomain sudo[88579]: pam_unix(sudo:session): session opened for user keystone(uid=42425) by (uid=0)
Oct 13 14:00:35 standalone.localdomain podman[88664]: 2025-10-13 14:00:35.005251882 +0000 UTC m=+0.068286649 container create 200ca74a091c100ac69a956dc9e7b94d99f7ae7d1fb23739c0842dfd61f8c5a8 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_db_sync, name=rhosp17/openstack-manila-api, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-07-21T16:06:43, vcs-type=git, batch=17.1_20250721.1, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, summary=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, release=1, description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack 
osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, container_name=manila_api_db_sync, architecture=x86_64, com.redhat.component=openstack-manila-api-container, tcib_managed=true)
Oct 13 14:00:35 standalone.localdomain systemd[1]: Started libpod-conmon-200ca74a091c100ac69a956dc9e7b94d99f7ae7d1fb23739c0842dfd61f8c5a8.scope.
Oct 13 14:00:35 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:35 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccbbc5cc888099d6bfaecb9e9bb51395da78f6c50564be8b58709cb79be8f753/merged/var/log/manila supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:35 standalone.localdomain podman[88664]: 2025-10-13 14:00:35.047169507 +0000 UTC m=+0.110204274 container init 200ca74a091c100ac69a956dc9e7b94d99f7ae7d1fb23739c0842dfd61f8c5a8 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_db_sync, build-date=2025-07-21T16:06:43, name=rhosp17/openstack-manila-api, description=Red Hat OpenStack Platform 17.1 manila-api, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, distribution-scope=public, container_name=manila_api_db_sync, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, com.redhat.component=openstack-manila-api-container, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, 
vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1)
Oct 13 14:00:35 standalone.localdomain podman[88664]: 2025-10-13 14:00:35.058442199 +0000 UTC m=+0.121476966 container start 200ca74a091c100ac69a956dc9e7b94d99f7ae7d1fb23739c0842dfd61f8c5a8 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_db_sync, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, config_id=tripleo_step3, managed_by=tripleo_ansible, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.component=openstack-manila-api-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=manila_api_db_sync, build-date=2025-07-21T16:06:43)
Oct 13 14:00:35 standalone.localdomain podman[88664]: 2025-10-13 14:00:35.05911318 +0000 UTC m=+0.122147947 container attach 200ca74a091c100ac69a956dc9e7b94d99f7ae7d1fb23739c0842dfd61f8c5a8 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_db_sync, build-date=2025-07-21T16:06:43, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, com.redhat.component=openstack-manila-api-container, release=1, description=Red Hat OpenStack Platform 17.1 manila-api, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-manila-api, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 manila-api, tcib_managed=true, vendor=Red Hat, Inc., container_name=manila_api_db_sync, maintainer=OpenStack TripleO 
Team, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64)
Oct 13 14:00:35 standalone.localdomain podman[88664]: 2025-10-13 14:00:34.964858363 +0000 UTC m=+0.027893140 image pull  registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1
Oct 13 14:00:35 standalone.localdomain sudo[88681]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:00:35 standalone.localdomain sudo[88681]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:00:35 standalone.localdomain sudo[88681]: pam_unix(sudo:session): session closed for user root
Oct 13 14:00:35 standalone.localdomain su[88745]: (to manila) root on none
Oct 13 14:00:35 standalone.localdomain su[88745]: pam_unix(su:session): session opened for user manila(uid=42429) by (uid=0)
Oct 13 14:00:35 standalone.localdomain su[88745]: pam_lastlog(su:session): file /var/log/lastlog created
Oct 13 14:00:35 standalone.localdomain su[88745]: pam_lastlog(su:session): unable to open /var/log/btmp: No such file or directory
Oct 13 14:00:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v764: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:36 standalone.localdomain ceph-mon[29756]: pgmap v764: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:37 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct 13 14:00:37 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct 13 14:00:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v765: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:37 standalone.localdomain ceph-mon[29756]: 5.14 scrub starts
Oct 13 14:00:37 standalone.localdomain ceph-mon[29756]: 5.14 scrub ok
Oct 13 14:00:38 standalone.localdomain haproxy[70940]: 172.17.0.100:48949 [13/Oct/2025:14:00:35.063] mysql mysql/standalone.internalapi.localdomain 1/0/2958 35540 -- 7/7/6/6/0 0/0
Oct 13 14:00:38 standalone.localdomain su[88269]: pam_unix(su:session): session closed for user heat
Oct 13 14:00:38 standalone.localdomain systemd[1]: libpod-2cccb79f7615a4a0733153a9d0379cd68366ac98e392eb972d213ddff7046a93.scope: Deactivated successfully.
Oct 13 14:00:38 standalone.localdomain systemd[1]: libpod-2cccb79f7615a4a0733153a9d0379cd68366ac98e392eb972d213ddff7046a93.scope: Consumed 1.783s CPU time.
Oct 13 14:00:38 standalone.localdomain podman[88134]: 2025-10-13 14:00:38.165929629 +0000 UTC m=+4.502724037 container died 2cccb79f7615a4a0733153a9d0379cd68366ac98e392eb972d213ddff7046a93 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine_db_sync, config_id=tripleo_step3, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, container_name=heat_engine_db_sync, name=rhosp17/openstack-heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, 
version=17.1.9, build-date=2025-07-21T15:44:11, com.redhat.component=openstack-heat-engine-container, release=1, summary=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:00:38 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct 13 14:00:38 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct 13 14:00:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cccb79f7615a4a0733153a9d0379cd68366ac98e392eb972d213ddff7046a93-userdata-shm.mount: Deactivated successfully.
Oct 13 14:00:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c800d59e331506d01353fe0ada7b93084552dc4f0ee734e61788460834519d95-merged.mount: Deactivated successfully.
Oct 13 14:00:38 standalone.localdomain podman[88769]: 2025-10-13 14:00:38.442703804 +0000 UTC m=+0.260235840 container cleanup 2cccb79f7615a4a0733153a9d0379cd68366ac98e392eb972d213ddff7046a93 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine_db_sync, architecture=x86_64, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, container_name=heat_engine_db_sync, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, tcib_managed=true, com.redhat.component=openstack-heat-engine-container, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, version=17.1.9, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11)
Oct 13 14:00:38 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name heat_engine_db_sync --cap-add AUDIT_WRITE --conmon-pidfile /run/heat_engine_db_sync.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --env TRIPLEO_CONFIG_HASH=02af0e891420e5c6be1d463bd6e7979c --label config_id=tripleo_step3 --label container_name=heat_engine_db_sync --label managed_by=tripleo_ansible --label config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/heat_engine_db_sync.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/heat:/var/log/heat:z --volume /var/lib/kolla/config_files/heat_engine_db_sync.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1
Oct 13 14:00:38 standalone.localdomain systemd[1]: libpod-conmon-2cccb79f7615a4a0733153a9d0379cd68366ac98e392eb972d213ddff7046a93.scope: Deactivated successfully.
Oct 13 14:00:38 standalone.localdomain ceph-mon[29756]: pgmap v765: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:38 standalone.localdomain ceph-mon[29756]: 4.17 scrub starts
Oct 13 14:00:38 standalone.localdomain ceph-mon[29756]: 4.17 scrub ok
Oct 13 14:00:38 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 38 completed events
Oct 13 14:00:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:00:38 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:00:38 standalone.localdomain runuser[88855]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:00:38 standalone.localdomain podman[88870]: 2025-10-13 14:00:38.873580769 +0000 UTC m=+0.078712923 container create 5305f60565da922fcf4ae66b4ec1858c099967d1fa7e119077d0263e9e0a74c3 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1, name=manila_share_init_logs, name=rhosp17/openstack-manila-share, summary=Red Hat OpenStack Platform 17.1 manila-share, config_id=tripleo_step3, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, tcib_managed=true, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, architecture=x86_64, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T15:22:36, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R manila:manila /var/log/manila'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/manila:/var/log/manila:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, com.redhat.component=openstack-manila-share-container, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, container_name=manila_share_init_logs, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 manila-share)
Oct 13 14:00:38 standalone.localdomain systemd[1]: Started libpod-conmon-5305f60565da922fcf4ae66b4ec1858c099967d1fa7e119077d0263e9e0a74c3.scope.
Oct 13 14:00:38 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:38 standalone.localdomain podman[88870]: 2025-10-13 14:00:38.830657042 +0000 UTC m=+0.035789236 image pull  registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1
Oct 13 14:00:38 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3347794ca7bacaae66520ae91082dfbc824007d4215bbab2b34c037a86b9c119/merged/var/log/manila supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:38 standalone.localdomain podman[88870]: 2025-10-13 14:00:38.945097738 +0000 UTC m=+0.150229892 container init 5305f60565da922fcf4ae66b4ec1858c099967d1fa7e119077d0263e9e0a74c3 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1, name=manila_share_init_logs, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'command': ['/bin/bash', '-c', 'chown -R manila:manila /var/log/manila'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/manila:/var/log/manila:z']}, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=manila_share_init_logs, description=Red Hat OpenStack Platform 17.1 manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, com.redhat.component=openstack-manila-share-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, build-date=2025-07-21T15:22:36, summary=Red Hat OpenStack Platform 17.1 manila-share, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, name=rhosp17/openstack-manila-share, tcib_managed=true, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:00:38 standalone.localdomain podman[88870]: 2025-10-13 14:00:38.959079654 +0000 UTC m=+0.164211808 container start 5305f60565da922fcf4ae66b4ec1858c099967d1fa7e119077d0263e9e0a74c3 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1, name=manila_share_init_logs, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T15:22:36, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=manila_share_init_logs, description=Red Hat OpenStack Platform 17.1 manila-share, config_data={'command': ['/bin/bash', '-c', 'chown -R manila:manila /var/log/manila'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/manila:/var/log/manila:z']}, config_id=tripleo_step3, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-manila-share-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, version=17.1.9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 manila-share, batch=17.1_20250721.1, name=rhosp17/openstack-manila-share, tcib_managed=true, release=1)
Oct 13 14:00:38 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name manila_share_init_logs --conmon-pidfile /run/manila_share_init_logs.pid --detach=True --label config_id=tripleo_step3 --label container_name=manila_share_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R manila:manila /var/log/manila'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/manila:/var/log/manila:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/manila_share_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/manila:/var/log/manila:z registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1 /bin/bash -c chown -R manila:manila /var/log/manila
Oct 13 14:00:38 standalone.localdomain systemd[1]: libpod-5305f60565da922fcf4ae66b4ec1858c099967d1fa7e119077d0263e9e0a74c3.scope: Deactivated successfully.
Oct 13 14:00:39 standalone.localdomain podman[88924]: 2025-10-13 14:00:39.020138027 +0000 UTC m=+0.042971291 container died 5305f60565da922fcf4ae66b4ec1858c099967d1fa7e119077d0263e9e0a74c3 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1, name=manila_share_init_logs, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-manila-share-container, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:22:36, distribution-scope=public, config_id=tripleo_step3, config_data={'command': ['/bin/bash', '-c', 'chown -R manila:manila /var/log/manila'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/manila:/var/log/manila:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-manila-share, description=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, container_name=manila_share_init_logs, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, vendor=Red Hat, Inc.)
Oct 13 14:00:39 standalone.localdomain podman[88925]: 2025-10-13 14:00:39.109665347 +0000 UTC m=+0.123752348 container cleanup 5305f60565da922fcf4ae66b4ec1858c099967d1fa7e119077d0263e9e0a74c3 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1, name=manila_share_init_logs, vendor=Red Hat, Inc., com.redhat.component=openstack-manila-share-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 manila-share, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, build-date=2025-07-21T15:22:36, name=rhosp17/openstack-manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=manila_share_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 manila-share, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, config_data={'command': ['/bin/bash', '-c', 'chown -R manila:manila /var/log/manila'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-share:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/manila:/var/log/manila:z']}, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, distribution-scope=public)
Oct 13 14:00:39 standalone.localdomain systemd[1]: libpod-conmon-5305f60565da922fcf4ae66b4ec1858c099967d1fa7e119077d0263e9e0a74c3.scope: Deactivated successfully.
Oct 13 14:00:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3347794ca7bacaae66520ae91082dfbc824007d4215bbab2b34c037a86b9c119-merged.mount: Deactivated successfully.
Oct 13 14:00:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5305f60565da922fcf4ae66b4ec1858c099967d1fa7e119077d0263e9e0a74c3-userdata-shm.mount: Deactivated successfully.
Oct 13 14:00:39 standalone.localdomain podman[88998]: 2025-10-13 14:00:39.373646432 +0000 UTC m=+0.075418401 container create 4dbfeea909f06815c5fc95444b41e56458113758c4919ab38494d8ee46c1a0f9 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_db_sync, name=rhosp17/openstack-neutron-server, managed_by=tripleo_ansible, build-date=2025-07-21T15:44:03, config_id=tripleo_step3, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.openshift.expose-services=, com.redhat.component=openstack-neutron-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, container_name=neutron_db_sync, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:00:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:00:39 standalone.localdomain systemd[1]: Started libpod-conmon-4dbfeea909f06815c5fc95444b41e56458113758c4919ab38494d8ee46c1a0f9.scope.
Oct 13 14:00:39 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:39 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97bde4ffeeebee51995225d1b54e9cfe542557c553e6334a0f543d8b59656baa/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:39 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97bde4ffeeebee51995225d1b54e9cfe542557c553e6334a0f543d8b59656baa/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:39 standalone.localdomain podman[88998]: 2025-10-13 14:00:39.427605753 +0000 UTC m=+0.129377722 container init 4dbfeea909f06815c5fc95444b41e56458113758c4919ab38494d8ee46c1a0f9 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_db_sync, com.redhat.component=openstack-neutron-server-container, release=1, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-07-21T15:44:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, name=rhosp17/openstack-neutron-server, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-server, container_name=neutron_db_sync, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, batch=17.1_20250721.1)
Oct 13 14:00:39 standalone.localdomain podman[88998]: 2025-10-13 14:00:39.432750114 +0000 UTC m=+0.134522093 container start 4dbfeea909f06815c5fc95444b41e56458113758c4919ab38494d8ee46c1a0f9 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_db_sync, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-07-21T15:44:03, container_name=neutron_db_sync, com.redhat.component=openstack-neutron-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1)
Oct 13 14:00:39 standalone.localdomain podman[88998]: 2025-10-13 14:00:39.432932349 +0000 UTC m=+0.134704318 container attach 4dbfeea909f06815c5fc95444b41e56458113758c4919ab38494d8ee46c1a0f9 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_db_sync, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, build-date=2025-07-21T15:44:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, container_name=neutron_db_sync, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, architecture=x86_64, io.buildah.version=1.33.12, config_id=tripleo_step3, com.redhat.component=openstack-neutron-server-container, io.openshift.expose-services=, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:00:39 standalone.localdomain podman[88998]: 2025-10-13 14:00:39.336363631 +0000 UTC m=+0.038135640 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Oct 13 14:00:39 standalone.localdomain runuser[88855]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:00:39 standalone.localdomain sudo[89029]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:00:39 standalone.localdomain sudo[89029]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:00:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v766: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:39 standalone.localdomain sudo[89029]: pam_unix(sudo:session): session closed for user root
Oct 13 14:00:39 standalone.localdomain runuser[89043]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:00:39 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:00:39 standalone.localdomain ceph-mon[29756]: pgmap v766: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:40 standalone.localdomain haproxy[70940]: 172.17.0.100:37091 [13/Oct/2025:14:00:35.028] mysql mysql/standalone.internalapi.localdomain 1/0/5190 14174 -- 6/6/5/5/0 0/0
Oct 13 14:00:40 standalone.localdomain haproxy[70940]: 172.17.0.100:44915 [13/Oct/2025:14:00:35.183] mysql mysql/standalone.internalapi.localdomain 1/0/5036 10499 -- 5/5/4/4/0 0/0
Oct 13 14:00:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:00:40 standalone.localdomain podman[89089]: 2025-10-13 14:00:40.294567948 +0000 UTC m=+0.064263903 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, distribution-scope=public, config_id=tripleo_step1, com.redhat.component=openstack-memcached-container, description=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, container_name=memcached, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, release=1, build-date=2025-07-21T12:58:43)
Oct 13 14:00:40 standalone.localdomain podman[89089]: 2025-10-13 14:00:40.321849648 +0000 UTC m=+0.091545573 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached, release=1, vendor=Red Hat, Inc., config_id=tripleo_step1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-memcached-container, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=memcached, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, batch=17.1_20250721.1)
Oct 13 14:00:40 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:00:40 standalone.localdomain runuser[89043]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:00:40 standalone.localdomain runuser[89130]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:00:41 standalone.localdomain haproxy[70940]: 172.17.0.100:52633 [13/Oct/2025:14:00:41.089] mysql mysql/standalone.internalapi.localdomain 1/0/83 3312 -- 5/5/4/4/0 0/0
Oct 13 14:00:41 standalone.localdomain runuser[89130]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:00:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v767: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:41 standalone.localdomain haproxy[70940]: 172.17.0.100:49699 [13/Oct/2025:14:00:35.280] mysql mysql/standalone.internalapi.localdomain 1/0/6486 52345 -- 6/6/5/5/0 0/0
Oct 13 14:00:41 standalone.localdomain su[88291]: pam_unix(su:session): session closed for user cinder
Oct 13 14:00:41 standalone.localdomain systemd[1]: libpod-8da90624bd166c94a8f74056f7c1500633c39b5ccb88fc4a26099af974c7f9d6.scope: Deactivated successfully.
Oct 13 14:00:41 standalone.localdomain systemd[1]: libpod-8da90624bd166c94a8f74056f7c1500633c39b5ccb88fc4a26099af974c7f9d6.scope: Consumed 2.185s CPU time.
Oct 13 14:00:41 standalone.localdomain podman[89250]: 2025-10-13 14:00:41.944889473 +0000 UTC m=+0.071096327 container died 8da90624bd166c94a8f74056f7c1500633c39b5ccb88fc4a26099af974c7f9d6 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_db_sync, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, com.redhat.component=openstack-cinder-api-container, managed_by=tripleo_ansible, name=rhosp17/openstack-cinder-api, build-date=2025-07-21T15:58:55, release=1, description=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, config_id=tripleo_step3, container_name=cinder_api_db_sync, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z']}, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 14:00:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8da90624bd166c94a8f74056f7c1500633c39b5ccb88fc4a26099af974c7f9d6-userdata-shm.mount: Deactivated successfully.
Oct 13 14:00:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-53bdd984fb7049ab7f79c8d6ad9e4d45d8ae9e4d29126e230f8a5d1abdc81fab-merged.mount: Deactivated successfully.
Oct 13 14:00:41 standalone.localdomain podman[89250]: 2025-10-13 14:00:41.99682581 +0000 UTC m=+0.123032614 container cleanup 8da90624bd166c94a8f74056f7c1500633c39b5ccb88fc4a26099af974c7f9d6 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_db_sync, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-cinder-api, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, config_id=tripleo_step3, container_name=cinder_api_db_sync, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:58:55, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z']}, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1)
Oct 13 14:00:42 standalone.localdomain systemd[1]: libpod-conmon-8da90624bd166c94a8f74056f7c1500633c39b5ccb88fc4a26099af974c7f9d6.scope: Deactivated successfully.
Oct 13 14:00:42 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name cinder_api_db_sync --cap-add AUDIT_WRITE --conmon-pidfile /run/cinder_api_db_sync.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --env TRIPLEO_CONFIG_HASH=bf2556c9454e19b68e3daf4f11f84a81 --label config_id=tripleo_step3 --label container_name=cinder_api_db_sync --label managed_by=tripleo_ansible --label config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/cinder_api_db_sync.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/cinder_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/cinder:/var/log/cinder:z registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1
Oct 13 14:00:42 standalone.localdomain podman[89363]: 2025-10-13 14:00:42.48811612 +0000 UTC m=+0.064899014 container create 5fcad3bf4a7f18b3e6d256d14dc206fccf1b2c237c8b81d562b82949bbd43cd9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_db_sync, com.redhat.component=openstack-nova-api-container, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, description=Red Hat OpenStack Platform 17.1 nova-api, release=1, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T16:05:11, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_id=tripleo_step3, architecture=x86_64, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_db_sync, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:00:42 standalone.localdomain ceph-mon[29756]: pgmap v767: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:42 standalone.localdomain systemd[1]: Started libpod-conmon-5fcad3bf4a7f18b3e6d256d14dc206fccf1b2c237c8b81d562b82949bbd43cd9.scope.
Oct 13 14:00:42 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:42 standalone.localdomain podman[89363]: 2025-10-13 14:00:42.447962148 +0000 UTC m=+0.024745022 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 14:00:42 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20ffedf2813c4567ec9b5d5ba8ae086d18f1a9d9860d73e5badf30326d6edd4c/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:42 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20ffedf2813c4567ec9b5d5ba8ae086d18f1a9d9860d73e5badf30326d6edd4c/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:42 standalone.localdomain podman[89363]: 2025-10-13 14:00:42.561232498 +0000 UTC m=+0.138015372 container init 5fcad3bf4a7f18b3e6d256d14dc206fccf1b2c237c8b81d562b82949bbd43cd9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_db_sync, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, container_name=nova_api_db_sync, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, version=17.1.9, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, name=rhosp17/openstack-nova-api, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team)
Oct 13 14:00:42 standalone.localdomain podman[89363]: 2025-10-13 14:00:42.568211655 +0000 UTC m=+0.144994529 container start 5fcad3bf4a7f18b3e6d256d14dc206fccf1b2c237c8b81d562b82949bbd43cd9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_db_sync, com.redhat.component=openstack-nova-api-container, name=rhosp17/openstack-nova-api, io.buildah.version=1.33.12, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step3, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vendor=Red Hat, Inc., container_name=nova_api_db_sync, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, release=1, description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, build-date=2025-07-21T16:05:11, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:00:42 standalone.localdomain podman[89363]: 2025-10-13 14:00:42.5683854 +0000 UTC m=+0.145168284 container attach 5fcad3bf4a7f18b3e6d256d14dc206fccf1b2c237c8b81d562b82949bbd43cd9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_db_sync, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, build-date=2025-07-21T16:05:11, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_db_sync, release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, vcs-type=git, name=rhosp17/openstack-nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, tcib_managed=true)
Oct 13 14:00:42 standalone.localdomain sudo[89385]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:00:42 standalone.localdomain sudo[89385]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:00:42 standalone.localdomain sudo[89385]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:00:42 standalone.localdomain sudo[89385]: pam_unix(sudo:session): session closed for user root
Oct 13 14:00:42 standalone.localdomain su[89390]: (to nova) root on none
Oct 13 14:00:42 standalone.localdomain su[89390]: pam_systemd(su:session): Failed to connect to system bus: No such file or directory
Oct 13 14:00:42 standalone.localdomain su[89390]: pam_unix(su:session): session opened for user nova(uid=42436) by (uid=0)
Oct 13 14:00:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v768: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:00:44 standalone.localdomain haproxy[70940]: Server horizon/standalone.internalapi.localdomain is UP, reason: Layer7 check passed, code: 301, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:00:44 standalone.localdomain ceph-mon[29756]: pgmap v768: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v769: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:46 standalone.localdomain ceph-mon[29756]: pgmap v769: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v770: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:48 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct 13 14:00:48 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct 13 14:00:48 standalone.localdomain haproxy[70940]: 172.17.0.100:59205 [13/Oct/2025:14:00:44.546] mysql mysql/standalone.internalapi.localdomain 1/0/3855 22299 -- 6/6/5/5/0 0/0
Oct 13 14:00:48 standalone.localdomain su[89390]: pam_unix(su:session): session closed for user nova
Oct 13 14:00:48 standalone.localdomain systemd[1]: libpod-5fcad3bf4a7f18b3e6d256d14dc206fccf1b2c237c8b81d562b82949bbd43cd9.scope: Deactivated successfully.
Oct 13 14:00:48 standalone.localdomain systemd[1]: libpod-5fcad3bf4a7f18b3e6d256d14dc206fccf1b2c237c8b81d562b82949bbd43cd9.scope: Consumed 2.600s CPU time.
Oct 13 14:00:48 standalone.localdomain podman[89363]: 2025-10-13 14:00:48.520759387 +0000 UTC m=+6.097542341 container died 5fcad3bf4a7f18b3e6d256d14dc206fccf1b2c237c8b81d562b82949bbd43cd9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_db_sync, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-nova-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:05:11, release=1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, container_name=nova_api_db_sync, tcib_managed=true)
Oct 13 14:00:48 standalone.localdomain ceph-mon[29756]: pgmap v770: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:48 standalone.localdomain ceph-mon[29756]: 4.18 scrub starts
Oct 13 14:00:48 standalone.localdomain ceph-mon[29756]: 4.18 scrub ok
Oct 13 14:00:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5fcad3bf4a7f18b3e6d256d14dc206fccf1b2c237c8b81d562b82949bbd43cd9-userdata-shm.mount: Deactivated successfully.
Oct 13 14:00:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-20ffedf2813c4567ec9b5d5ba8ae086d18f1a9d9860d73e5badf30326d6edd4c-merged.mount: Deactivated successfully.
Oct 13 14:00:48 standalone.localdomain podman[89638]: 2025-10-13 14:00:48.784111244 +0000 UTC m=+0.253787560 container cleanup 5fcad3bf4a7f18b3e6d256d14dc206fccf1b2c237c8b81d562b82949bbd43cd9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_db_sync, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-nova-api, release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, distribution-scope=public, version=17.1.9, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T16:05:11, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-api-container, container_name=nova_api_db_sync)
Oct 13 14:00:48 standalone.localdomain systemd[1]: libpod-conmon-5fcad3bf4a7f18b3e6d256d14dc206fccf1b2c237c8b81d562b82949bbd43cd9.scope: Deactivated successfully.
Oct 13 14:00:48 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_api_db_sync --cap-add AUDIT_WRITE --conmon-pidfile /run/nova_api_db_sync.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --env TRIPLEO_CONFIG_HASH=ff6c887813d25a6bfef54f8920eca651 --label config_id=tripleo_step3 --label container_name=nova_api_db_sync --label managed_by=tripleo_ansible --label config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_api_db_sync.log --network host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova:z --volume /var/log/containers/httpd/nova-api:/var/log/httpd:z --volume /var/lib/kolla/config_files/nova_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 14:00:49 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.15 deep-scrub starts
Oct 13 14:00:49 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.15 deep-scrub ok
Oct 13 14:00:49 standalone.localdomain podman[89721]: 2025-10-13 14:00:49.276962912 +0000 UTC m=+0.062532000 container create e6e7778fa4592e33578dca0758aeb52bd416a30f0cb1b9436349e643ca76e090 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner, config_id=tripleo_step3, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:00:49 standalone.localdomain systemd[1]: Started libpod-conmon-e6e7778fa4592e33578dca0758aeb52bd416a30f0cb1b9436349e643ca76e090.scope.
Oct 13 14:00:49 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ad63805e4e2b4992e516cdb5b5f8a715356c3206cbc457c40619f8af2b3b5c/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ad63805e4e2b4992e516cdb5b5f8a715356c3206cbc457c40619f8af2b3b5c/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ad63805e4e2b4992e516cdb5b5f8a715356c3206cbc457c40619f8af2b3b5c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:49 standalone.localdomain podman[89721]: 2025-10-13 14:00:49.245071798 +0000 UTC m=+0.030640866 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Oct 13 14:00:49 standalone.localdomain podman[89721]: 2025-10-13 14:00:49.353394493 +0000 UTC m=+0.138963541 container init e6e7778fa4592e33578dca0758aeb52bd416a30f0cb1b9436349e643ca76e090 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=nova_statedir_owner, release=1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']})
Oct 13 14:00:49 standalone.localdomain podman[89721]: 2025-10-13 14:00:49.359775682 +0000 UTC m=+0.145344730 container start e6e7778fa4592e33578dca0758aeb52bd416a30f0cb1b9436349e643ca76e090 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_statedir_owner)
Oct 13 14:00:49 standalone.localdomain podman[89721]: 2025-10-13 14:00:49.359927797 +0000 UTC m=+0.145496845 container attach e6e7778fa4592e33578dca0758aeb52bd416a30f0cb1b9436349e643ca76e090 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., release=1, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step3, managed_by=tripleo_ansible, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=nova_statedir_owner, io.buildah.version=1.33.12)
Oct 13 14:00:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:00:49 standalone.localdomain systemd[1]: libpod-e6e7778fa4592e33578dca0758aeb52bd416a30f0cb1b9436349e643ca76e090.scope: Deactivated successfully.
Oct 13 14:00:49 standalone.localdomain podman[89721]: 2025-10-13 14:00:49.430574008 +0000 UTC m=+0.216143116 container died e6e7778fa4592e33578dca0758aeb52bd416a30f0cb1b9436349e643ca76e090 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, config_id=tripleo_step3, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner, release=1, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true)
Oct 13 14:00:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v771: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:49 standalone.localdomain podman[89741]: 2025-10-13 14:00:49.502270032 +0000 UTC m=+0.064762449 container cleanup e6e7778fa4592e33578dca0758aeb52bd416a30f0cb1b9436349e643ca76e090 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, container_name=nova_statedir_owner, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, config_id=tripleo_step3, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute)
Oct 13 14:00:49 standalone.localdomain systemd[1]: libpod-conmon-e6e7778fa4592e33578dca0758aeb52bd416a30f0cb1b9436349e643ca76e090.scope: Deactivated successfully.
Oct 13 14:00:49 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py
Oct 13 14:00:49 standalone.localdomain ceph-mon[29756]: 5.15 deep-scrub starts
Oct 13 14:00:49 standalone.localdomain ceph-mon[29756]: 5.15 deep-scrub ok
Oct 13 14:00:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e6e7778fa4592e33578dca0758aeb52bd416a30f0cb1b9436349e643ca76e090-userdata-shm.mount: Deactivated successfully.
Oct 13 14:00:49 standalone.localdomain podman[89825]: 2025-10-13 14:00:49.985047686 +0000 UTC m=+0.102199256 container create a9c557e2ca2ff127a60c268328aa01d5890c24fc27957408cd4028350e5e2189 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:59, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, container_name=nova_virtlogd_wrapper, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64)
Oct 13 14:00:50 standalone.localdomain systemd[1]: Started libpod-conmon-a9c557e2ca2ff127a60c268328aa01d5890c24fc27957408cd4028350e5e2189.scope.
Oct 13 14:00:50 standalone.localdomain podman[89825]: 2025-10-13 14:00:49.937402782 +0000 UTC m=+0.054554362 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 14:00:50 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf0b4ae8ca561a8c7e45cdbb30d417510f12bd335f16e8c8ff70bfec89efede/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf0b4ae8ca561a8c7e45cdbb30d417510f12bd335f16e8c8ff70bfec89efede/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf0b4ae8ca561a8c7e45cdbb30d417510f12bd335f16e8c8ff70bfec89efede/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf0b4ae8ca561a8c7e45cdbb30d417510f12bd335f16e8c8ff70bfec89efede/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf0b4ae8ca561a8c7e45cdbb30d417510f12bd335f16e8c8ff70bfec89efede/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf0b4ae8ca561a8c7e45cdbb30d417510f12bd335f16e8c8ff70bfec89efede/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf0b4ae8ca561a8c7e45cdbb30d417510f12bd335f16e8c8ff70bfec89efede/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:50 standalone.localdomain podman[89825]: 2025-10-13 14:00:50.069514748 +0000 UTC m=+0.186666328 container init a9c557e2ca2ff127a60c268328aa01d5890c24fc27957408cd4028350e5e2189 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, release=2, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-nova-libvirt, build-date=2025-07-21T14:56:59, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, container_name=nova_virtlogd_wrapper, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2)
Oct 13 14:00:50 standalone.localdomain podman[89825]: 2025-10-13 14:00:50.078810608 +0000 UTC m=+0.195962188 container start a9c557e2ca2ff127a60c268328aa01d5890c24fc27957408cd4028350e5e2189 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, container_name=nova_virtlogd_wrapper, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=2, io.buildah.version=1.33.12, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, build-date=2025-07-21T14:56:59, com.redhat.component=openstack-nova-libvirt-container)
Oct 13 14:00:50 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5ce329a35cfc30978bc40d323681fc5e --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 14:00:50 standalone.localdomain sudo[89844]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:00:50 standalone.localdomain systemd-logind[45629]: Existing logind session ID 25 used by new audit session, ignoring.
Oct 13 14:00:50 standalone.localdomain systemd[1]: Started Session c1 of User root.
Oct 13 14:00:50 standalone.localdomain sudo[89844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:00:50 standalone.localdomain sudo[89844]: pam_unix(sudo:session): session closed for user root
Oct 13 14:00:50 standalone.localdomain systemd[1]: session-c1.scope: Deactivated successfully.
Oct 13 14:00:50 standalone.localdomain podman[89944]: 2025-10-13 14:00:50.505123431 +0000 UTC m=+0.059175224 container create 807bff0b0315d134dc602c777f0ee0244bf144919a784d6d898b4029faa66432 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, release=2, version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T14:56:59, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt)
Oct 13 14:00:50 standalone.localdomain systemd[1]: Started libpod-conmon-807bff0b0315d134dc602c777f0ee0244bf144919a784d6d898b4029faa66432.scope.
Oct 13 14:00:50 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022ad1fccb5a07d341e4b8a6e9305339a3bb07c497484ef58989394e0aab9453/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022ad1fccb5a07d341e4b8a6e9305339a3bb07c497484ef58989394e0aab9453/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022ad1fccb5a07d341e4b8a6e9305339a3bb07c497484ef58989394e0aab9453/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/022ad1fccb5a07d341e4b8a6e9305339a3bb07c497484ef58989394e0aab9453/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:50 standalone.localdomain podman[89944]: 2025-10-13 14:00:50.571618624 +0000 UTC m=+0.125670467 container init 807bff0b0315d134dc602c777f0ee0244bf144919a784d6d898b4029faa66432 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=2, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, build-date=2025-07-21T14:56:59)
Oct 13 14:00:50 standalone.localdomain ceph-mon[29756]: pgmap v771: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:50 standalone.localdomain podman[89944]: 2025-10-13 14:00:50.478394619 +0000 UTC m=+0.032446432 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 14:00:50 standalone.localdomain podman[89965]: 2025-10-13 14:00:50.57983455 +0000 UTC m=+0.088026424 container create 4487f7f14a9a7dca143a9e4ef7453777d81255096a67647ef25a999395e5295c (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_copy_rings, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, container_name=swift_copy_rings, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step3, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'command': ['/bin/bash', '-c', 'cp -v -dR --preserve -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545-1d74184aa75622548b4006bd57ec8160'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw,z', '/var/lib/config-data/puppet-generated/swift_ringbuilder:/swift_ringbuilder:ro']}, architecture=x86_64, vcs-type=git, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1)
Oct 13 14:00:50 standalone.localdomain systemd[1]: Started libpod-conmon-4487f7f14a9a7dca143a9e4ef7453777d81255096a67647ef25a999395e5295c.scope.
Oct 13 14:00:50 standalone.localdomain podman[89965]: 2025-10-13 14:00:50.532240237 +0000 UTC m=+0.040432091 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1
Oct 13 14:00:50 standalone.localdomain podman[89944]: 2025-10-13 14:00:50.636598658 +0000 UTC m=+0.190650451 container start 807bff0b0315d134dc602c777f0ee0244bf144919a784d6d898b4029faa66432 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vendor=Red Hat, Inc., release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, build-date=2025-07-21T14:56:59, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12)
Oct 13 14:00:50 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afea50fcd80de40189eae447b9f1e1ce11d3f127805d4e44f9b8663c38c63435/merged/etc/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:50 standalone.localdomain podman[89965]: 2025-10-13 14:00:50.659541844 +0000 UTC m=+0.167733698 container init 4487f7f14a9a7dca143a9e4ef7453777d81255096a67647ef25a999395e5295c (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_copy_rings, name=rhosp17/openstack-swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'command': ['/bin/bash', '-c', 'cp -v -dR --preserve -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545-1d74184aa75622548b4006bd57ec8160'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw,z', '/var/lib/config-data/puppet-generated/swift_ringbuilder:/swift_ringbuilder:ro']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, container_name=swift_copy_rings, vendor=Red Hat, Inc.)
Oct 13 14:00:50 standalone.localdomain podman[89965]: 2025-10-13 14:00:50.671833707 +0000 UTC m=+0.180025551 container start 4487f7f14a9a7dca143a9e4ef7453777d81255096a67647ef25a999395e5295c (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_copy_rings, container_name=swift_copy_rings, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'cp -v -dR --preserve -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545-1d74184aa75622548b4006bd57ec8160'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw,z', '/var/lib/config-data/puppet-generated/swift_ringbuilder:/swift_ringbuilder:ro']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, name=rhosp17/openstack-swift-proxy-server, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, build-date=2025-07-21T14:48:37, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1)
Oct 13 14:00:50 standalone.localdomain podman[89965]: 2025-10-13 14:00:50.672051833 +0000 UTC m=+0.180243697 container attach 4487f7f14a9a7dca143a9e4ef7453777d81255096a67647ef25a999395e5295c (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_copy_rings, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, tcib_managed=true, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, managed_by=tripleo_ansible, distribution-scope=public, container_name=swift_copy_rings, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.buildah.version=1.33.12, batch=17.1_20250721.1, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', 'cp -v -dR --preserve -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545-1d74184aa75622548b4006bd57ec8160'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw,z', '/var/lib/config-data/puppet-generated/swift_ringbuilder:/swift_ringbuilder:ro']})
Oct 13 14:00:50 standalone.localdomain systemd[1]: libpod-4487f7f14a9a7dca143a9e4ef7453777d81255096a67647ef25a999395e5295c.scope: Deactivated successfully.
Oct 13 14:00:50 standalone.localdomain podman[89965]: 2025-10-13 14:00:50.682474298 +0000 UTC m=+0.190666182 container died 4487f7f14a9a7dca143a9e4ef7453777d81255096a67647ef25a999395e5295c (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_copy_rings, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'cp -v -dR --preserve -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545-1d74184aa75622548b4006bd57ec8160'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw,z', '/var/lib/config-data/puppet-generated/swift_ringbuilder:/swift_ringbuilder:ro']}, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.component=openstack-swift-proxy-server-container, release=1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_copy_rings, tcib_managed=true, io.buildah.version=1.33.12)
Oct 13 14:00:50 standalone.localdomain podman[90034]: 2025-10-13 14:00:50.793665272 +0000 UTC m=+0.101516334 container cleanup 4487f7f14a9a7dca143a9e4ef7453777d81255096a67647ef25a999395e5295c (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_copy_rings, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-swift-proxy-server-container, config_data={'command': ['/bin/bash', '-c', 'cp -v -dR --preserve -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545-1d74184aa75622548b4006bd57ec8160'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw,z', '/var/lib/config-data/puppet-generated/swift_ringbuilder:/swift_ringbuilder:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, container_name=swift_copy_rings, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server)
Oct 13 14:00:50 standalone.localdomain systemd[1]: libpod-conmon-4487f7f14a9a7dca143a9e4ef7453777d81255096a67647ef25a999395e5295c.scope: Deactivated successfully.
Oct 13 14:00:50 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name swift_copy_rings --conmon-pidfile /run/swift_copy_rings.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --env TRIPLEO_CONFIG_HASH=d8b7706e98ee5ba7e1c4d758abf97545-1d74184aa75622548b4006bd57ec8160 --label config_id=tripleo_step3 --label container_name=swift_copy_rings --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'cp -v -dR --preserve -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545-1d74184aa75622548b4006bd57ec8160'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw,z', '/var/lib/config-data/puppet-generated/swift_ringbuilder:/swift_ringbuilder:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/swift_copy_rings.log --network none --user root --volume /var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw,z --volume /var/lib/config-data/puppet-generated/swift_ringbuilder:/swift_ringbuilder:ro registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1 /bin/bash -c cp -v -dR --preserve -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups
Oct 13 14:00:51 standalone.localdomain haproxy[70940]: 172.17.0.100:55507 [13/Oct/2025:14:00:41.191] mysql mysql/standalone.internalapi.localdomain 1/0/9921 119515 -- 5/5/4/4/0 0/0
Oct 13 14:00:51 standalone.localdomain su[88228]: pam_unix(su:session): session closed for user glance
Oct 13 14:00:51 standalone.localdomain systemd[1]: libpod-25716feee99bcd3cd2497feee6c71712573a1666b141c257cbf4d304edce3feb.scope: Deactivated successfully.
Oct 13 14:00:51 standalone.localdomain systemd[1]: libpod-25716feee99bcd3cd2497feee6c71712573a1666b141c257cbf4d304edce3feb.scope: Consumed 3.721s CPU time.
Oct 13 14:00:51 standalone.localdomain podman[88068]: 2025-10-13 14:00:51.173813778 +0000 UTC m=+17.573713112 container died 25716feee99bcd3cd2497feee6c71712573a1666b141c257cbf4d304edce3feb (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_db_sync, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step3, config_data={'cap_add': ['AUDIT_WRITE'], 'command': "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_db_sync, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:00:51 standalone.localdomain podman[90141]: 2025-10-13 14:00:51.257355231 +0000 UTC m=+0.069529687 container cleanup 25716feee99bcd3cd2497feee6c71712573a1666b141c257cbf4d304edce3feb (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_db_sync, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-glance-api, release=1, config_data={'cap_add': ['AUDIT_WRITE'], 'command': "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, config_id=tripleo_step3, build-date=2025-07-21T13:58:20, container_name=glance_api_db_sync, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:00:51 standalone.localdomain systemd[1]: libpod-conmon-25716feee99bcd3cd2497feee6c71712573a1666b141c257cbf4d304edce3feb.scope: Deactivated successfully.
Oct 13 14:00:51 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name glance_api_db_sync --cap-add AUDIT_WRITE --conmon-pidfile /run/glance_api_db_sync.pid --detach=False --env KOLLA_BOOTSTRAP=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --env TRIPLEO_CONFIG_HASH=20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e --label config_id=tripleo_step3 --label container_name=glance_api_db_sync --label managed_by=tripleo_ansible --label config_data={'cap_add': ['AUDIT_WRITE'], 'command': "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/glance_api_db_sync.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/glance:/var/log/glance:z --volume /var/log/containers/httpd/glance:/var/log/httpd:z --volume /var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json --volume /var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/glance:/var/lib/glance:shared registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1 /usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'
Oct 13 14:00:51 standalone.localdomain podman[90132]: 2025-10-13 14:00:51.284187898 +0000 UTC m=+0.131643473 container create cdfb3026bb713f8e473c502404ab9b9d75d7071ad88b917d60b2a51e402b3391 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_setup_srv, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, name=rhosp17/openstack-swift-account, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'command': ['find', '/srv/node', '-maxdepth', '1', '-type', 'd', '-exec', 'chown', 'swift:', '{}', ';'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/srv/node:/srv/node:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step3, release=1, tcib_managed=true, com.redhat.component=openstack-swift-account-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, container_name=swift_setup_srv)
Oct 13 14:00:51 standalone.localdomain podman[90132]: 2025-10-13 14:00:51.19312687 +0000 UTC m=+0.040582475 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1
Oct 13 14:00:51 standalone.localdomain systemd[1]: Started libpod-conmon-cdfb3026bb713f8e473c502404ab9b9d75d7071ad88b917d60b2a51e402b3391.scope.
Oct 13 14:00:51 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:51 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e5a8eff02736f76a8eaf288551e295af46a07d57fc5a8429f8e2724a24fb08/merged/srv/node supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:51 standalone.localdomain podman[90132]: 2025-10-13 14:00:51.344466636 +0000 UTC m=+0.191922221 container init cdfb3026bb713f8e473c502404ab9b9d75d7071ad88b917d60b2a51e402b3391 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_setup_srv, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, config_data={'command': ['find', '/srv/node', '-maxdepth', '1', '-type', 'd', '-exec', 'chown', 'swift:', '{}', ';'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/srv/node:/srv/node:z']}, architecture=x86_64, release=1, build-date=2025-07-21T16:11:22, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_setup_srv, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:00:51 standalone.localdomain podman[90132]: 2025-10-13 14:00:51.350446482 +0000 UTC m=+0.197902057 container start cdfb3026bb713f8e473c502404ab9b9d75d7071ad88b917d60b2a51e402b3391 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_setup_srv, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_setup_srv, release=1, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, config_data={'command': ['find', '/srv/node', '-maxdepth', '1', '-type', 'd', '-exec', 'chown', 'swift:', '{}', ';'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/srv/node:/srv/node:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 14:00:51 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name swift_setup_srv --conmon-pidfile /run/swift_setup_srv.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --label config_id=tripleo_step3 --label container_name=swift_setup_srv --label managed_by=tripleo_ansible --label config_data={'command': ['find', '/srv/node', '-maxdepth', '1', '-type', 'd', '-exec', 'chown', 'swift:', '{}', ';'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/srv/node:/srv/node:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/swift_setup_srv.log --network none --user root --volume /srv/node:/srv/node:z registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1 find /srv/node -maxdepth 1 -type d -exec chown swift: {} ;
Oct 13 14:00:51 standalone.localdomain systemd[1]: libpod-cdfb3026bb713f8e473c502404ab9b9d75d7071ad88b917d60b2a51e402b3391.scope: Deactivated successfully.
Oct 13 14:00:51 standalone.localdomain podman[90193]: 2025-10-13 14:00:51.415671104 +0000 UTC m=+0.039373378 container died cdfb3026bb713f8e473c502404ab9b9d75d7071ad88b917d60b2a51e402b3391 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_setup_srv, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, container_name=swift_setup_srv, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, config_data={'command': ['find', '/srv/node', '-maxdepth', '1', '-type', 'd', '-exec', 'chown', 'swift:', '{}', ';'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/srv/node:/srv/node:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:00:51 standalone.localdomain podman[90193]: 2025-10-13 14:00:51.437457503 +0000 UTC m=+0.061159767 container cleanup cdfb3026bb713f8e473c502404ab9b9d75d7071ad88b917d60b2a51e402b3391 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_setup_srv, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, config_data={'command': ['find', '/srv/node', '-maxdepth', '1', '-type', 'd', '-exec', 'chown', 'swift:', '{}', ';'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'none', 'user': 'root', 'volumes': ['/srv/node:/srv/node:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, container_name=swift_setup_srv, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, batch=17.1_20250721.1)
Oct 13 14:00:51 standalone.localdomain systemd[1]: libpod-conmon-cdfb3026bb713f8e473c502404ab9b9d75d7071ad88b917d60b2a51e402b3391.scope: Deactivated successfully.
Oct 13 14:00:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v772: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-afea50fcd80de40189eae447b9f1e1ce11d3f127805d4e44f9b8663c38c63435-merged.mount: Deactivated successfully.
Oct 13 14:00:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4487f7f14a9a7dca143a9e4ef7453777d81255096a67647ef25a999395e5295c-userdata-shm.mount: Deactivated successfully.
Oct 13 14:00:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ccf4e134c83bd0efc99bc4dcd3651f85181d00d223a72ceb281bbba817d781e1-merged.mount: Deactivated successfully.
Oct 13 14:00:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-25716feee99bcd3cd2497feee6c71712573a1666b141c257cbf4d304edce3feb-userdata-shm.mount: Deactivated successfully.
Oct 13 14:00:51 standalone.localdomain runuser[90243]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:00:52 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct 13 14:00:52 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct 13 14:00:52 standalone.localdomain runuser[90243]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:00:52 standalone.localdomain ceph-mon[29756]: pgmap v772: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:52 standalone.localdomain ceph-mon[29756]: 4.19 scrub starts
Oct 13 14:00:52 standalone.localdomain ceph-mon[29756]: 4.19 scrub ok
Oct 13 14:00:52 standalone.localdomain runuser[90314]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:00:53 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 13 14:00:53 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 13 14:00:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:00:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:00:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:00:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:00:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:00:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:00:53 standalone.localdomain runuser[90314]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:00:53 standalone.localdomain runuser[90445]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:00:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v773: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:53 standalone.localdomain ceph-mon[29756]: 5.16 scrub starts
Oct 13 14:00:53 standalone.localdomain ceph-mon[29756]: 5.16 scrub ok
Oct 13 14:00:53 standalone.localdomain haproxy[70940]: 172.17.0.100:34533 [13/Oct/2025:14:00:36.409] mysql mysql/standalone.internalapi.localdomain 1/0/17320 7048 -- 4/4/3/3/0 0/0
Oct 13 14:00:53 standalone.localdomain haproxy[70940]: 172.17.0.100:54267 [13/Oct/2025:14:00:36.313] mysql mysql/standalone.internalapi.localdomain 1/0/17417 164933 -- 3/3/2/2/0 0/0
Oct 13 14:00:53 standalone.localdomain su[88745]: pam_unix(su:session): session closed for user manila
Oct 13 14:00:53 standalone.localdomain systemd[1]: libpod-200ca74a091c100ac69a956dc9e7b94d99f7ae7d1fb23739c0842dfd61f8c5a8.scope: Deactivated successfully.
Oct 13 14:00:53 standalone.localdomain systemd[1]: libpod-200ca74a091c100ac69a956dc9e7b94d99f7ae7d1fb23739c0842dfd61f8c5a8.scope: Consumed 2.397s CPU time.
Oct 13 14:00:53 standalone.localdomain podman[88664]: 2025-10-13 14:00:53.796014657 +0000 UTC m=+18.859049434 container died 200ca74a091c100ac69a956dc9e7b94d99f7ae7d1fb23739c0842dfd61f8c5a8 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_db_sync, release=1, description=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, managed_by=tripleo_ansible, container_name=manila_api_db_sync, build-date=2025-07-21T16:06:43, tcib_managed=true, config_id=tripleo_step3, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-manila-api, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-manila-api-container, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, summary=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api)
Oct 13 14:00:53 standalone.localdomain systemd[1]: tmp-crun.6PJ9Tj.mount: Deactivated successfully.
Oct 13 14:00:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-200ca74a091c100ac69a956dc9e7b94d99f7ae7d1fb23739c0842dfd61f8c5a8-userdata-shm.mount: Deactivated successfully.
Oct 13 14:00:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ccbbc5cc888099d6bfaecb9e9bb51395da78f6c50564be8b58709cb79be8f753-merged.mount: Deactivated successfully.
Oct 13 14:00:53 standalone.localdomain podman[90541]: 2025-10-13 14:00:53.869005051 +0000 UTC m=+0.061909280 container cleanup 200ca74a091c100ac69a956dc9e7b94d99f7ae7d1fb23739c0842dfd61f8c5a8 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_db_sync, com.redhat.component=openstack-manila-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-manila-api, build-date=2025-07-21T16:06:43, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, release=1, vcs-type=git, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., container_name=manila_api_db_sync, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, io.openshift.expose-services=)
Oct 13 14:00:53 standalone.localdomain systemd[1]: libpod-conmon-200ca74a091c100ac69a956dc9e7b94d99f7ae7d1fb23739c0842dfd61f8c5a8.scope: Deactivated successfully.
Oct 13 14:00:53 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name manila_api_db_sync --cap-add AUDIT_WRITE --conmon-pidfile /run/manila_api_db_sync.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=6f421a64683e0927a1c7cc208f5b5307 --label config_id=tripleo_step3 --label container_name=manila_api_db_sync --label managed_by=tripleo_ansible --label config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/manila_api_db_sync.log --network host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/manila_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/manila:/var/log/manila:z registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1
Oct 13 14:00:54 standalone.localdomain runuser[90445]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:00:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:00:54 standalone.localdomain ceph-mon[29756]: pgmap v773: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v774: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:55 standalone.localdomain haproxy[70940]: 172.17.0.100:50075 [13/Oct/2025:14:00:35.705] mysql mysql/standalone.internalapi.localdomain 1/0/19948 328199 -- 2/2/1/1/0 0/0
Oct 13 14:00:55 standalone.localdomain sudo[88579]: pam_unix(sudo:session): session closed for user keystone
Oct 13 14:00:55 standalone.localdomain systemd[1]: libpod-fd9cd2fa2a7531a1532a0cf9c7d72ba7ee3833e067699c9a6586e171e04be9e1.scope: Deactivated successfully.
Oct 13 14:00:55 standalone.localdomain systemd[1]: libpod-fd9cd2fa2a7531a1532a0cf9c7d72ba7ee3833e067699c9a6586e171e04be9e1.scope: Consumed 3.851s CPU time.
Oct 13 14:00:55 standalone.localdomain podman[88447]: 2025-10-13 14:00:55.702917217 +0000 UTC m=+21.377148559 container died fd9cd2fa2a7531a1532a0cf9c7d72ba7ee3833e067699c9a6586e171e04be9e1 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_db_sync, io.openshift.expose-services=, config_id=tripleo_step3, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, com.redhat.component=openstack-keystone-container, build-date=2025-07-21T13:27:18, config_data={'command': ['/usr/bin/bootstrap_host_exec', 'keystone', '/usr/local/bin/kolla_start'], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=keystone_db_sync, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, name=rhosp17/openstack-keystone, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team)
Oct 13 14:00:55 standalone.localdomain systemd[1]: tmp-crun.jThbz3.mount: Deactivated successfully.
Oct 13 14:00:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd9cd2fa2a7531a1532a0cf9c7d72ba7ee3833e067699c9a6586e171e04be9e1-userdata-shm.mount: Deactivated successfully.
Oct 13 14:00:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d33a74ddba43a8c0deb95d8e678513479d7605354398a4facd4f04bb49f5adb5-merged.mount: Deactivated successfully.
Oct 13 14:00:55 standalone.localdomain podman[90721]: 2025-10-13 14:00:55.809644742 +0000 UTC m=+0.100530193 container cleanup fd9cd2fa2a7531a1532a0cf9c7d72ba7ee3833e067699c9a6586e171e04be9e1 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_db_sync, description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'command': ['/usr/bin/bootstrap_host_exec', 'keystone', '/usr/local/bin/kolla_start'], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, tcib_managed=true, 
vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_db_sync, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, managed_by=tripleo_ansible, com.redhat.component=openstack-keystone-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, version=17.1.9)
Oct 13 14:00:55 standalone.localdomain systemd[1]: libpod-conmon-fd9cd2fa2a7531a1532a0cf9c7d72ba7ee3833e067699c9a6586e171e04be9e1.scope: Deactivated successfully.
Oct 13 14:00:55 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name keystone_db_sync --conmon-pidfile /run/keystone_db_sync.pid --detach=False --env KOLLA_BOOTSTRAP=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --env TRIPLEO_CONFIG_HASH=32ceb64403625ed4f04d4f0bcdc85988 --label config_id=tripleo_step3 --label container_name=keystone_db_sync --label managed_by=tripleo_ansible --label config_data={'command': ['/usr/bin/bootstrap_host_exec', 'keystone', '/usr/local/bin/kolla_start'], 'detach': False, 'environment': {'KOLLA_BOOTSTRAP': True, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/keystone_db_sync.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/keystone:/var/log/keystone:z --volume /var/log/containers/httpd/keystone:/var/log/httpd:z --volume /etc/openldap:/etc/openldap:ro --volume /var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1 /usr/bin/bootstrap_host_exec keystone /usr/local/bin/kolla_start
Oct 13 14:00:56 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct 13 14:00:56 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct 13 14:00:56 standalone.localdomain ceph-mon[29756]: pgmap v774: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:56 standalone.localdomain ceph-mon[29756]: 4.1a scrub starts
Oct 13 14:00:56 standalone.localdomain ceph-mon[29756]: 4.1a scrub ok
Oct 13 14:00:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v775: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:00:57 standalone.localdomain systemd[1]: tmp-crun.BGByB5.mount: Deactivated successfully.
Oct 13 14:00:57 standalone.localdomain podman[90777]: 2025-10-13 14:00:57.840609018 +0000 UTC m=+0.100042068 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, com.redhat.component=openstack-ovn-northd-container, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_id=ovn_cluster_northd, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-northd, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T13:30:04, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, name=rhosp17/openstack-ovn-northd, tcib_managed=true, version=17.1.9, vendor=Red Hat, Inc., container_name=ovn_cluster_northd)
Oct 13 14:00:57 standalone.localdomain podman[90777]: 2025-10-13 14:00:57.852938202 +0000 UTC m=+0.112371202 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.component=openstack-ovn-northd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-northd, version=17.1.9, 
maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-northd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:30:04, config_id=ovn_cluster_northd, container_name=ovn_cluster_northd, tcib_managed=true)
Oct 13 14:00:57 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:00:57 standalone.localdomain haproxy[70940]: 172.17.0.100:39687 [13/Oct/2025:14:00:41.175] mysql mysql/standalone.internalapi.localdomain 1/0/16730 60265 -- 1/1/0/0/0 0/0
Oct 13 14:00:57 standalone.localdomain haproxy[70940]: 172.17.0.100:56479 [13/Oct/2025:14:00:57.915] mysql mysql/standalone.internalapi.localdomain 1/0/38 2378 -- 1/1/0/0/0 0/0
Oct 13 14:00:58 standalone.localdomain ceph-mon[29756]: pgmap v775: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:58 standalone.localdomain haproxy[70940]: 172.17.0.100:40801 [13/Oct/2025:14:00:57.959] mysql mysql/standalone.internalapi.localdomain 1/0/812 9267 -- 1/1/0/0/0 0/0
Oct 13 14:00:59 standalone.localdomain systemd[1]: libpod-4dbfeea909f06815c5fc95444b41e56458113758c4919ab38494d8ee46c1a0f9.scope: Deactivated successfully.
Oct 13 14:00:59 standalone.localdomain systemd[1]: libpod-4dbfeea909f06815c5fc95444b41e56458113758c4919ab38494d8ee46c1a0f9.scope: Consumed 3.167s CPU time.
Oct 13 14:00:59 standalone.localdomain podman[88998]: 2025-10-13 14:00:59.060936524 +0000 UTC m=+19.762708583 container died 4dbfeea909f06815c5fc95444b41e56458113758c4919ab38494d8ee46c1a0f9 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_db_sync, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-server, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, build-date=2025-07-21T15:44:03, batch=17.1_20250721.1, container_name=neutron_db_sync, tcib_managed=true, name=rhosp17/openstack-neutron-server, distribution-scope=public, config_id=tripleo_step3, com.redhat.component=openstack-neutron-server-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:00:59 standalone.localdomain systemd[1]: tmp-crun.HQoHpc.mount: Deactivated successfully.
Oct 13 14:00:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4dbfeea909f06815c5fc95444b41e56458113758c4919ab38494d8ee46c1a0f9-userdata-shm.mount: Deactivated successfully.
Oct 13 14:00:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-97bde4ffeeebee51995225d1b54e9cfe542557c553e6334a0f543d8b59656baa-merged.mount: Deactivated successfully.
Oct 13 14:00:59 standalone.localdomain podman[90810]: 2025-10-13 14:00:59.17730072 +0000 UTC m=+0.105058254 container cleanup 4dbfeea909f06815c5fc95444b41e56458113758c4919ab38494d8ee46c1a0f9 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_db_sync, config_id=tripleo_step3, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, com.redhat.component=openstack-neutron-server-container, batch=17.1_20250721.1, build-date=2025-07-21T15:44:03, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, container_name=neutron_db_sync, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64)
Oct 13 14:00:59 standalone.localdomain systemd[1]: libpod-conmon-4dbfeea909f06815c5fc95444b41e56458113758c4919ab38494d8ee46c1a0f9.scope: Deactivated successfully.
Oct 13 14:00:59 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name neutron_db_sync --cap-add AUDIT_WRITE --conmon-pidfile /run/neutron_db_sync.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --env TRIPLEO_CONFIG_HASH=bfea4b567a7178c2bf424bc40994d7e4 --label config_id=tripleo_step3 --label container_name=neutron_db_sync --label managed_by=tripleo_ansible --label config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/neutron_db_sync.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/log/containers/httpd/neutron-api:/var/log/httpd:z --volume /var/lib/kolla/config_files/neutron_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Oct 13 14:00:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:00:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v776: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:00:59 standalone.localdomain podman[90952]: 2025-10-13 14:00:59.634866197 +0000 UTC m=+0.075222314 container create 75990014f13da7493fd18a3c440ac10b59a0876044c9a47df8372312afd5f1c6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, config_id=tripleo_step3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, distribution-scope=public, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:59, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtsecretd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, release=2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-nova-libvirt)
Oct 13 14:00:59 standalone.localdomain podman[90969]: 2025-10-13 14:00:59.659244627 +0000 UTC m=+0.075083551 container create a3983a374637911ded1a8b3c997f57c0a12bf4fcdfc12f89ed189722d9125b04 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api_db_sync, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, build-date=2025-07-21T13:58:12, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-placement-api-container, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 placement-api, config_id=tripleo_step3, release=1, name=rhosp17/openstack-placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, container_name=placement_api_db_sync)
Oct 13 14:00:59 standalone.localdomain systemd[1]: Started libpod-conmon-75990014f13da7493fd18a3c440ac10b59a0876044c9a47df8372312afd5f1c6.scope.
Oct 13 14:00:59 standalone.localdomain systemd[1]: Started libpod-conmon-a3983a374637911ded1a8b3c997f57c0a12bf4fcdfc12f89ed189722d9125b04.scope.
Oct 13 14:00:59 standalone.localdomain podman[90952]: 2025-10-13 14:00:59.587037897 +0000 UTC m=+0.027394004 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 14:00:59 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:59 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7b2ee1594520f0295fcac9eea884c48f523498b8393234dd7cc19be148b00a/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf7f5cc363ca2027e31b478163d4d24e61ff18cbc712f3882286eff78f2e65ae/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf7f5cc363ca2027e31b478163d4d24e61ff18cbc712f3882286eff78f2e65ae/merged/var/log/placement supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7b2ee1594520f0295fcac9eea884c48f523498b8393234dd7cc19be148b00a/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7b2ee1594520f0295fcac9eea884c48f523498b8393234dd7cc19be148b00a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7b2ee1594520f0295fcac9eea884c48f523498b8393234dd7cc19be148b00a/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7b2ee1594520f0295fcac9eea884c48f523498b8393234dd7cc19be148b00a/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7b2ee1594520f0295fcac9eea884c48f523498b8393234dd7cc19be148b00a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7b2ee1594520f0295fcac9eea884c48f523498b8393234dd7cc19be148b00a/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:59 standalone.localdomain podman[90995]: 2025-10-13 14:00:59.701327308 +0000 UTC m=+0.070392124 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, com.redhat.component=openstack-mariadb-container, version=17.1.9, name=rhosp17/openstack-mariadb, tcib_managed=true, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Oct 13 14:00:59 standalone.localdomain podman[90952]: 2025-10-13 14:00:59.704899649 +0000 UTC m=+0.145255736 container init 75990014f13da7493fd18a3c440ac10b59a0876044c9a47df8372312afd5f1c6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, release=2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vcs-type=git, build-date=2025-07-21T14:56:59, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-nova-libvirt, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtsecretd, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible)
Oct 13 14:00:59 standalone.localdomain podman[90952]: 2025-10-13 14:00:59.711550257 +0000 UTC m=+0.151906354 container start 75990014f13da7493fd18a3c440ac10b59a0876044c9a47df8372312afd5f1c6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, version=17.1.9, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T14:56:59, distribution-scope=public, maintainer=OpenStack TripleO Team, release=2, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, vcs-type=git, container_name=nova_virtsecretd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, batch=17.1_20250721.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:00:59 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5ce329a35cfc30978bc40d323681fc5e --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 14:00:59 standalone.localdomain podman[90969]: 2025-10-13 14:00:59.631821872 +0000 UTC m=+0.047660806 image pull  registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1
Oct 13 14:00:59 standalone.localdomain sudo[91044]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:00:59 standalone.localdomain podman[90970]: 2025-10-13 14:00:59.741578623 +0000 UTC m=+0.149308314 container create b0d5af6659c3360267e7f653fff8db1eb89aa8ad92cb679bdf1b2445aacbd264 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_ensure_default_cells, distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:05:11, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-nova-api, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, release=1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_ensure_default_cells, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-api-container, summary=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_ensure_default_cells.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, architecture=x86_64, vendor=Red Hat, Inc.)
Oct 13 14:00:59 standalone.localdomain systemd-logind[45629]: Existing logind session ID 25 used by new audit session, ignoring.
Oct 13 14:00:59 standalone.localdomain systemd[1]: Started Session c2 of User root.
Oct 13 14:00:59 standalone.localdomain podman[90969]: 2025-10-13 14:00:59.757094116 +0000 UTC m=+0.172933050 container init a3983a374637911ded1a8b3c997f57c0a12bf4fcdfc12f89ed189722d9125b04 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api_db_sync, version=17.1.9, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-placement-api, description=Red Hat OpenStack Platform 17.1 placement-api, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 placement-api, container_name=placement_api_db_sync, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, com.redhat.component=openstack-placement-api-container, release=1, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:58:12, maintainer=OpenStack TripleO Team)
Oct 13 14:00:59 standalone.localdomain sudo[91044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:00:59 standalone.localdomain systemd[1]: Started libpod-conmon-b0d5af6659c3360267e7f653fff8db1eb89aa8ad92cb679bdf1b2445aacbd264.scope.
Oct 13 14:00:59 standalone.localdomain sudo[91070]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:00:59 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:00:59 standalone.localdomain sudo[91070]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:00:59 standalone.localdomain podman[91008]: 2025-10-13 14:00:59.779781623 +0000 UTC m=+0.120376612 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, summary=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, version=17.1.9, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T13:08:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.component=openstack-haproxy-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:00:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b507ea62e23c4605e5566f890bdcc22037cf6450e2b1b38a6be624fb320b6bf/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b507ea62e23c4605e5566f890bdcc22037cf6450e2b1b38a6be624fb320b6bf/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b507ea62e23c4605e5566f890bdcc22037cf6450e2b1b38a6be624fb320b6bf/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:00:59 standalone.localdomain podman[90995]: 2025-10-13 14:00:59.787869035 +0000 UTC m=+0.156933881 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.component=openstack-mariadb-container, build-date=2025-07-21T12:58:45, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:00:59 standalone.localdomain podman[90970]: 2025-10-13 14:00:59.693131733 +0000 UTC m=+0.100861434 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 14:00:59 standalone.localdomain podman[90970]: 2025-10-13 14:00:59.840935089 +0000 UTC m=+0.248664780 container init b0d5af6659c3360267e7f653fff8db1eb89aa8ad92cb679bdf1b2445aacbd264 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_ensure_default_cells, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-api, vcs-type=git, com.redhat.component=openstack-nova-api-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, release=1, description=Red Hat OpenStack Platform 17.1 nova-api, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_ensure_default_cells.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, managed_by=tripleo_ansible, architecture=x86_64, container_name=nova_api_ensure_default_cells, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:00:59 standalone.localdomain sudo[91044]: pam_unix(sudo:session): session closed for user root
Oct 13 14:00:59 standalone.localdomain systemd[1]: session-c2.scope: Deactivated successfully.
Oct 13 14:00:59 standalone.localdomain sudo[91070]: pam_unix(sudo:session): session closed for user root
Oct 13 14:00:59 standalone.localdomain podman[90970]: 2025-10-13 14:00:59.84771102 +0000 UTC m=+0.255440731 container start b0d5af6659c3360267e7f653fff8db1eb89aa8ad92cb679bdf1b2445aacbd264 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_ensure_default_cells, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step3, architecture=x86_64, name=rhosp17/openstack-nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-api-container, managed_by=tripleo_ansible, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T16:05:11, io.buildah.version=1.33.12, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_ensure_default_cells.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_ensure_default_cells)
Oct 13 14:00:59 standalone.localdomain podman[90970]: 2025-10-13 14:00:59.847943067 +0000 UTC m=+0.255672788 container attach b0d5af6659c3360267e7f653fff8db1eb89aa8ad92cb679bdf1b2445aacbd264 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_ensure_default_cells, architecture=x86_64, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-api-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_ensure_default_cells.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T16:05:11, release=1, batch=17.1_20250721.1, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_ensure_default_cells)
Oct 13 14:00:59 standalone.localdomain sudo[91096]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:00:59 standalone.localdomain sudo[91096]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:00:59 standalone.localdomain sudo[91096]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:00:59 standalone.localdomain podman[90969]: 2025-10-13 14:00:59.869082936 +0000 UTC m=+0.284921870 container start a3983a374637911ded1a8b3c997f57c0a12bf4fcdfc12f89ed189722d9125b04 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api_db_sync, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=placement_api_db_sync, com.redhat.component=openstack-placement-api-container, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 placement-api, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-placement-api, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3)
Oct 13 14:00:59 standalone.localdomain podman[90969]: 2025-10-13 14:00:59.869805028 +0000 UTC m=+0.285644022 container attach a3983a374637911ded1a8b3c997f57c0a12bf4fcdfc12f89ed189722d9125b04 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api_db_sync, summary=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step3, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, build-date=2025-07-21T13:58:12, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, name=rhosp17/openstack-placement-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, container_name=placement_api_db_sync, description=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.component=openstack-placement-api-container, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible)
Oct 13 14:00:59 standalone.localdomain podman[91008]: 2025-10-13 14:00:59.869832429 +0000 UTC m=+0.210427438 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, build-date=2025-07-21T13:08:11, description=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-haproxy-container, summary=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team)
Oct 13 14:00:59 standalone.localdomain sudo[91096]: pam_unix(sudo:session): session closed for user root
Oct 13 14:00:59 standalone.localdomain podman[91107]: 2025-10-13 14:00:59.953584709 +0000 UTC m=+0.082183212 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-rabbitmq-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, tcib_managed=true, build-date=2025-07-21T13:08:05, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, name=rhosp17/openstack-rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:00:59 standalone.localdomain su[91163]: (to placement) root on none
Oct 13 14:00:59 standalone.localdomain su[91163]: pam_unix(su:session): session opened for user placement(uid=998) by (uid=0)
Oct 13 14:00:59 standalone.localdomain su[91163]: pam_lastlog(su:session): file /var/log/lastlog created
Oct 13 14:00:59 standalone.localdomain su[91163]: pam_lastlog(su:session): unable to open /var/log/btmp: No such file or directory
Oct 13 14:00:59 standalone.localdomain podman[91107]: 2025-10-13 14:00:59.985807973 +0000 UTC m=+0.114406466 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, description=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rabbitmq-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, version=17.1.9, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1)
Oct 13 14:01:00 standalone.localdomain su[91169]: (to nova) root on none
Oct 13 14:01:00 standalone.localdomain su[91169]: pam_systemd(su:session): Failed to connect to system bus: No such file or directory
Oct 13 14:01:00 standalone.localdomain su[91169]: pam_unix(su:session): session opened for user nova(uid=42436) by (uid=0)
Oct 13 14:01:00 standalone.localdomain ceph-mon[29756]: pgmap v776: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:01 standalone.localdomain haproxy[70940]: 172.17.0.100:45531 [13/Oct/2025:14:01:00.674] mysql mysql/standalone.internalapi.localdomain 1/0/460 6063 -- 1/1/0/0/0 0/0
Oct 13 14:01:01 standalone.localdomain su[91163]: pam_unix(su:session): session closed for user placement
Oct 13 14:01:01 standalone.localdomain systemd[1]: libpod-a3983a374637911ded1a8b3c997f57c0a12bf4fcdfc12f89ed189722d9125b04.scope: Deactivated successfully.
Oct 13 14:01:01 standalone.localdomain systemd[1]: libpod-a3983a374637911ded1a8b3c997f57c0a12bf4fcdfc12f89ed189722d9125b04.scope: Consumed 1.090s CPU time.
Oct 13 14:01:01 standalone.localdomain podman[90969]: 2025-10-13 14:01:01.197016435 +0000 UTC m=+1.612855449 container died a3983a374637911ded1a8b3c997f57c0a12bf4fcdfc12f89ed189722d9125b04 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api_db_sync, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=placement_api_db_sync, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-placement-api, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.component=openstack-placement-api-container, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 placement-api, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3, release=1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67)
Oct 13 14:01:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:01:01 standalone.localdomain CROND[91260]: (root) CMD (run-parts /etc/cron.hourly)
Oct 13 14:01:01 standalone.localdomain run-parts[91264]: (/etc/cron.hourly) starting 0anacron
Oct 13 14:01:01 standalone.localdomain anacron[91273]: Anacron started on 2025-10-13
Oct 13 14:01:01 standalone.localdomain anacron[91273]: Will run job `cron.daily' in 5 min.
Oct 13 14:01:01 standalone.localdomain anacron[91273]: Will run job `cron.weekly' in 25 min.
Oct 13 14:01:01 standalone.localdomain anacron[91273]: Will run job `cron.monthly' in 45 min.
Oct 13 14:01:01 standalone.localdomain anacron[91273]: Jobs will be executed sequentially
Oct 13 14:01:01 standalone.localdomain run-parts[91275]: (/etc/cron.hourly) finished 0anacron
Oct 13 14:01:01 standalone.localdomain CROND[91259]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 13 14:01:01 standalone.localdomain systemd[1]: tmp-crun.XgIcAJ.mount: Deactivated successfully.
Oct 13 14:01:01 standalone.localdomain podman[91241]: 2025-10-13 14:01:01.323679261 +0000 UTC m=+0.098608943 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, version=17.1.9, build-date=2025-07-21T12:58:45, config_id=tripleo_step2, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, name=rhosp17/openstack-mariadb, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, container_name=clustercheck, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, release=1)
Oct 13 14:01:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a3983a374637911ded1a8b3c997f57c0a12bf4fcdfc12f89ed189722d9125b04-userdata-shm.mount: Deactivated successfully.
Oct 13 14:01:01 standalone.localdomain podman[91235]: 2025-10-13 14:01:01.357761623 +0000 UTC m=+0.149182889 container cleanup a3983a374637911ded1a8b3c997f57c0a12bf4fcdfc12f89ed189722d9125b04 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api_db_sync, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, description=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-placement-api, com.redhat.component=openstack-placement-api-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, container_name=placement_api_db_sync, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, release=1, build-date=2025-07-21T13:58:12, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:01:01 standalone.localdomain systemd[1]: libpod-conmon-a3983a374637911ded1a8b3c997f57c0a12bf4fcdfc12f89ed189722d9125b04.scope: Deactivated successfully.
Oct 13 14:01:01 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name placement_api_db_sync --cap-add AUDIT_WRITE --conmon-pidfile /run/placement_api_db_sync.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --env TRIPLEO_CONFIG_HASH=2c5d4de4e0570c35257b2db1f73cb503 --label config_id=tripleo_step3 --label container_name=placement_api_db_sync --label managed_by=tripleo_ansible --label config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/placement_api_db_sync.log --network host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/placement:/var/log/placement:z --volume /var/log/containers/httpd/placement:/var/log/httpd:z --volume /var/lib/kolla/config_files/placement_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1
Oct 13 14:01:01 standalone.localdomain podman[91241]: 2025-10-13 14:01:01.379002435 +0000 UTC m=+0.153932117 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-mariadb-container, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, container_name=clustercheck, vcs-type=git, name=rhosp17/openstack-mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:45)
Oct 13 14:01:01 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:01:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v777: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:02 standalone.localdomain haproxy[70940]: 172.17.0.100:33237 [13/Oct/2025:14:01:01.860] mysql mysql/standalone.internalapi.localdomain 1/0/326 3040 -- 1/1/0/0/0 0/0
Oct 13 14:01:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bf7f5cc363ca2027e31b478163d4d24e61ff18cbc712f3882286eff78f2e65ae-merged.mount: Deactivated successfully.
Oct 13 14:01:02 standalone.localdomain ceph-mon[29756]: pgmap v777: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:02 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct 13 14:01:02 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct 13 14:01:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v778: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:03 standalone.localdomain ceph-mon[29756]: 5.17 scrub starts
Oct 13 14:01:03 standalone.localdomain ceph-mon[29756]: 5.17 scrub ok
Oct 13 14:01:04 standalone.localdomain haproxy[70940]: 172.17.0.100:55407 [13/Oct/2025:14:01:03.983] mysql mysql/standalone.internalapi.localdomain 1/0/425 2307 -- 1/1/0/0/0 0/0
Oct 13 14:01:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:01:04 standalone.localdomain ceph-mon[29756]: pgmap v778: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:04 standalone.localdomain runuser[91485]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:01:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:01:04 standalone.localdomain podman[91530]: 2025-10-13 14:01:04.827374497 +0000 UTC m=+0.090768269 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=starting, tcib_managed=true, batch=17.1_20250721.1, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.openshift.expose-services=, name=rhosp17/openstack-horizon, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 horizon, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack 
TripleO Team, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, version=17.1.9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:15, com.redhat.component=openstack-horizon-container, container_name=horizon, architecture=x86_64)
Oct 13 14:01:04 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct 13 14:01:04 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct 13 14:01:05 standalone.localdomain runuser[91485]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:01:05 standalone.localdomain runuser[91662]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:01:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v779: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:05 standalone.localdomain ceph-mon[29756]: 4.1b scrub starts
Oct 13 14:01:05 standalone.localdomain ceph-mon[29756]: 4.1b scrub ok
Oct 13 14:01:05 standalone.localdomain runuser[91662]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:01:05 standalone.localdomain runuser[91716]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:01:06 standalone.localdomain podman[91530]: 2025-10-13 14:01:06.209848305 +0000 UTC m=+1.473242077 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, summary=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, vendor=Red Hat, Inc., name=rhosp17/openstack-horizon, com.redhat.component=openstack-horizon-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, build-date=2025-07-21T13:58:15, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step3, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=horizon, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, description=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5)
Oct 13 14:01:06 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:01:06 standalone.localdomain haproxy[70940]: 172.17.0.100:46941 [13/Oct/2025:14:01:06.117] mysql mysql/standalone.internalapi.localdomain 1/0/426 3341 -- 1/1/0/0/0 0/0
Oct 13 14:01:06 standalone.localdomain ceph-mon[29756]: pgmap v779: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:06 standalone.localdomain runuser[91716]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:01:06 standalone.localdomain su[91169]: pam_unix(su:session): session closed for user nova
Oct 13 14:01:06 standalone.localdomain systemd[1]: libpod-b0d5af6659c3360267e7f653fff8db1eb89aa8ad92cb679bdf1b2445aacbd264.scope: Deactivated successfully.
Oct 13 14:01:06 standalone.localdomain systemd[1]: libpod-b0d5af6659c3360267e7f653fff8db1eb89aa8ad92cb679bdf1b2445aacbd264.scope: Consumed 6.724s CPU time.
Oct 13 14:01:06 standalone.localdomain podman[90970]: 2025-10-13 14:01:06.686101165 +0000 UTC m=+7.093830926 container died b0d5af6659c3360267e7f653fff8db1eb89aa8ad92cb679bdf1b2445aacbd264 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_ensure_default_cells, architecture=x86_64, version=17.1.9, name=rhosp17/openstack-nova-api, vcs-type=git, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_ensure_default_cells.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, tcib_managed=true, com.redhat.component=openstack-nova-api-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_api_ensure_default_cells, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, build-date=2025-07-21T16:05:11, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, 
managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:01:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b0d5af6659c3360267e7f653fff8db1eb89aa8ad92cb679bdf1b2445aacbd264-userdata-shm.mount: Deactivated successfully.
Oct 13 14:01:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b507ea62e23c4605e5566f890bdcc22037cf6450e2b1b38a6be624fb320b6bf-merged.mount: Deactivated successfully.
Oct 13 14:01:06 standalone.localdomain podman[91786]: 2025-10-13 14:01:06.790165609 +0000 UTC m=+0.089882142 container cleanup b0d5af6659c3360267e7f653fff8db1eb89aa8ad92cb679bdf1b2445aacbd264 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_ensure_default_cells, container_name=nova_api_ensure_default_cells, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:05:11, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_ensure_default_cells.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, config_id=tripleo_step3, distribution-scope=public, 
maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-api, com.redhat.component=openstack-nova-api-container, summary=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:01:06 standalone.localdomain systemd[1]: libpod-conmon-b0d5af6659c3360267e7f653fff8db1eb89aa8ad92cb679bdf1b2445aacbd264.scope: Deactivated successfully.
Oct 13 14:01:06 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_api_ensure_default_cells --conmon-pidfile /run/nova_api_ensure_default_cells.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=ff6c887813d25a6bfef54f8920eca651 --label config_id=tripleo_step3 --label container_name=nova_api_ensure_default_cells --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_ensure_default_cells.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_api_ensure_default_cells.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova:z --volume /var/log/containers/httpd/nova-api:/var/log/httpd:z --volume /var/lib/kolla/config_files/nova_api_ensure_default_cells.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 14:01:07 standalone.localdomain podman[91932]: 2025-10-13 14:01:07.196374066 +0000 UTC m=+0.059992951 container create 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1, io.openshift.expose-services=, com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:18, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', 
'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, name=rhosp17/openstack-keystone, container_name=keystone, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64)
Oct 13 14:01:07 standalone.localdomain podman[91942]: 2025-10-13 14:01:07.220176747 +0000 UTC m=+0.066507073 container create 4f84c950665978030a1f16ed1143582e5f9c249fe033ab2a2782c01be99cc5fc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, build-date=2025-07-21T14:56:59, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=2, maintainer=OpenStack TripleO Team, container_name=nova_virtnodedevd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, distribution-scope=public, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0)
Oct 13 14:01:07 standalone.localdomain podman[91944]: 2025-10-13 14:01:07.246460126 +0000 UTC m=+0.084839394 container create 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, tcib_managed=true, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=iscsid, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vcs-type=git, name=rhosp17/openstack-iscsid, version=17.1.9, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:01:07 standalone.localdomain systemd[1]: Started libpod-conmon-0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.scope.
Oct 13 14:01:07 standalone.localdomain systemd[1]: Started libpod-conmon-4f84c950665978030a1f16ed1143582e5f9c249fe033ab2a2782c01be99cc5fc.scope.
Oct 13 14:01:07 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:01:07 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:01:07 standalone.localdomain podman[91932]: 2025-10-13 14:01:07.160707725 +0000 UTC m=+0.024326610 image pull  registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1
Oct 13 14:01:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0152fbf47c47b98476307a2f85450ead7caad437035e3d8d6c9790cae3a7e742/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0152fbf47c47b98476307a2f85450ead7caad437035e3d8d6c9790cae3a7e742/merged/var/log/keystone supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33375115d3f8cb6947ff63348a6809984ef15de15f68be27bcaefc607b71c68/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33375115d3f8cb6947ff63348a6809984ef15de15f68be27bcaefc607b71c68/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33375115d3f8cb6947ff63348a6809984ef15de15f68be27bcaefc607b71c68/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33375115d3f8cb6947ff63348a6809984ef15de15f68be27bcaefc607b71c68/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33375115d3f8cb6947ff63348a6809984ef15de15f68be27bcaefc607b71c68/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33375115d3f8cb6947ff63348a6809984ef15de15f68be27bcaefc607b71c68/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33375115d3f8cb6947ff63348a6809984ef15de15f68be27bcaefc607b71c68/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:07 standalone.localdomain podman[91942]: 2025-10-13 14:01:07.274566443 +0000 UTC m=+0.120896789 container init 4f84c950665978030a1f16ed1143582e5f9c249fe033ab2a2782c01be99cc5fc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, tcib_managed=true, distribution-scope=public, config_id=tripleo_step3, container_name=nova_virtnodedevd, build-date=2025-07-21T14:56:59, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt)
Oct 13 14:01:07 standalone.localdomain podman[91942]: 2025-10-13 14:01:07.284640506 +0000 UTC m=+0.130970862 container start 4f84c950665978030a1f16ed1143582e5f9c249fe033ab2a2782c01be99cc5fc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, release=2, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:59, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_virtnodedevd, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0)
Oct 13 14:01:07 standalone.localdomain podman[91942]: 2025-10-13 14:01:07.186638282 +0000 UTC m=+0.032968628 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 14:01:07 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5ce329a35cfc30978bc40d323681fc5e --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 14:01:07 standalone.localdomain systemd[1]: Started libpod-conmon-0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.scope.
Oct 13 14:01:07 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:01:07 standalone.localdomain sudo[91984]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:01:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6b0f95b63c63d227c33e5b75d478d141a6629833a5ec16c3909f4639560a9e/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:07 standalone.localdomain podman[91944]: 2025-10-13 14:01:07.214297344 +0000 UTC m=+0.052676612 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Oct 13 14:01:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6b0f95b63c63d227c33e5b75d478d141a6629833a5ec16c3909f4639560a9e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:07 standalone.localdomain systemd-logind[45629]: Existing logind session ID 25 used by new audit session, ignoring.
Oct 13 14:01:07 standalone.localdomain systemd[1]: Started Session c3 of User root.
Oct 13 14:01:07 standalone.localdomain sudo[91984]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:01:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:01:07 standalone.localdomain podman[91944]: 2025-10-13 14:01:07.344819292 +0000 UTC m=+0.183198590 container init 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=iscsid, distribution-scope=public, config_id=tripleo_step3, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:01:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:01:07 standalone.localdomain podman[91944]: 2025-10-13 14:01:07.374064823 +0000 UTC m=+0.212444091 container start 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-iscsid-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=iscsid, tcib_managed=true, build-date=2025-07-21T13:27:15)
Oct 13 14:01:07 standalone.localdomain sudo[92004]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:01:07 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=9002d51dff127f237c02241c0a80d08e --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Oct 13 14:01:07 standalone.localdomain systemd-logind[45629]: Existing logind session ID 25 used by new audit session, ignoring.
Oct 13 14:01:07 standalone.localdomain systemd[1]: Started Session c4 of User root.
Oct 13 14:01:07 standalone.localdomain sudo[91984]: pam_unix(sudo:session): session closed for user root
Oct 13 14:01:07 standalone.localdomain systemd[1]: session-c3.scope: Deactivated successfully.
Oct 13 14:01:07 standalone.localdomain sudo[92004]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:01:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:01:07 standalone.localdomain podman[91932]: 2025-10-13 14:01:07.444500207 +0000 UTC m=+0.308119092 container init 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, maintainer=OpenStack TripleO Team, container_name=keystone, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, 
io.buildah.version=1.33.12, name=rhosp17/openstack-keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3)
Oct 13 14:01:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v780: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:07 standalone.localdomain sudo[92058]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:01:07 standalone.localdomain sudo[92058]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:01:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:01:07 standalone.localdomain podman[92005]: 2025-10-13 14:01:07.475892405 +0000 UTC m=+0.093259066 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2025-07-21T13:27:15)
Oct 13 14:01:07 standalone.localdomain sudo[92004]: pam_unix(sudo:session): session closed for user root
Oct 13 14:01:07 standalone.localdomain systemd[1]: session-c4.scope: Deactivated successfully.
Oct 13 14:01:07 standalone.localdomain podman[92005]: 2025-10-13 14:01:07.513685713 +0000 UTC m=+0.131052384 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, batch=17.1_20250721.1, architecture=x86_64, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git)
Oct 13 14:01:07 standalone.localdomain kernel: Loading iSCSI transport class v2.0-870.
Oct 13 14:01:07 standalone.localdomain podman[92005]: unhealthy
Oct 13 14:01:07 standalone.localdomain sudo[92058]: pam_unix(sudo:session): session closed for user root
Oct 13 14:01:07 standalone.localdomain podman[91932]: 2025-10-13 14:01:07.527620958 +0000 UTC m=+0.391239833 container start 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, distribution-scope=public, build-date=2025-07-21T13:27:18, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-keystone-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 keystone, 
tcib_managed=true, name=rhosp17/openstack-keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone)
Oct 13 14:01:07 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:01:07 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Failed with result 'exit-code'.
Oct 13 14:01:07 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name keystone --conmon-pidfile /run/keystone.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=32ceb64403625ed4f04d4f0bcdc85988 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=keystone --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/keystone.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/keystone:/var/log/keystone:z --volume /var/log/containers/httpd/keystone:/var/log/httpd:z --volume /etc/openldap:/etc/openldap:ro --volume /var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1
Oct 13 14:01:07 standalone.localdomain podman[92064]: 2025-10-13 14:01:07.564547748 +0000 UTC m=+0.086743974 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=starting, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-keystone, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, 
com.redhat.component=openstack-keystone-container, managed_by=tripleo_ansible, container_name=keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.expose-services=, config_id=tripleo_step3, vcs-type=git, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, release=1)
Oct 13 14:01:07 standalone.localdomain podman[92217]: 2025-10-13 14:01:07.867977373 +0000 UTC m=+0.099850812 container exec 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-keystone, container_name=keystone, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:18, config_id=tripleo_step3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, com.redhat.component=openstack-keystone-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9)
Oct 13 14:01:07 standalone.localdomain podman[92290]: 2025-10-13 14:01:07.968889317 +0000 UTC m=+0.091608925 container create 77773b5c48917d01c3b9d9c27ee0facaf8a47d75ca76a3f47196db5ac150bd04 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api_db_sync, container_name=barbican_api_db_sync, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step3, release=1, io.buildah.version=1.33.12, com.redhat.component=openstack-barbican-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro']}, 
io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, name=rhosp17/openstack-barbican-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team)
Oct 13 14:01:07 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Oct 13 14:01:07 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Oct 13 14:01:07 standalone.localdomain podman[92292]: 2025-10-13 14:01:07.998775019 +0000 UTC m=+0.116681327 container create fdae5f2685dd6c0b2c5a94e58177ae691a9367fd824c6b6c63dbd8bb248bf72c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_db_sync, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-nova-conductor, vcs-type=git, com.redhat.component=openstack-nova-conductor-container, io.openshift.expose-services=, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T15:44:17, summary=Red Hat OpenStack Platform 17.1 nova-conductor, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, vendor=Red Hat, 
Inc., description=Red Hat OpenStack Platform 17.1 nova-conductor, container_name=nova_db_sync, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, managed_by=tripleo_ansible)
Oct 13 14:01:08 standalone.localdomain podman[92290]: 2025-10-13 14:01:07.91633047 +0000 UTC m=+0.039050098 image pull  registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1
Oct 13 14:01:08 standalone.localdomain podman[92313]: 2025-10-13 14:01:08.016354706 +0000 UTC m=+0.098407207 container create dceaaeec95442e25c469c925f0684b91675a6d5970f6b543e393f9f795d2496d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, build-date=2025-07-21T14:56:59, distribution-scope=public, config_id=tripleo_step3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, release=2, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team)
Oct 13 14:01:08 standalone.localdomain podman[92292]: 2025-10-13 14:01:07.923617407 +0000 UTC m=+0.041523725 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1
Oct 13 14:01:08 standalone.localdomain systemd[1]: Started libpod-conmon-77773b5c48917d01c3b9d9c27ee0facaf8a47d75ca76a3f47196db5ac150bd04.scope.
Oct 13 14:01:08 standalone.localdomain systemd[1]: Started libpod-conmon-fdae5f2685dd6c0b2c5a94e58177ae691a9367fd824c6b6c63dbd8bb248bf72c.scope.
Oct 13 14:01:08 standalone.localdomain podman[92313]: 2025-10-13 14:01:07.953317162 +0000 UTC m=+0.035369673 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 14:01:08 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:01:08 standalone.localdomain systemd[1]: Started libpod-conmon-dceaaeec95442e25c469c925f0684b91675a6d5970f6b543e393f9f795d2496d.scope.
Oct 13 14:01:08 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:01:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb0f78246b2c0232f2a45eb268f58d7cda22f69a7a6ba5bc0279db237ec6e56/merged/var/log/barbican supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb0f78246b2c0232f2a45eb268f58d7cda22f69a7a6ba5bc0279db237ec6e56/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88928f310f1ba382800c4333f5e27b201b6df1b159808455dd99069f0d30925c/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:08 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:01:08 standalone.localdomain podman[92292]: 2025-10-13 14:01:08.067527811 +0000 UTC m=+0.185434109 container init fdae5f2685dd6c0b2c5a94e58177ae691a9367fd824c6b6c63dbd8bb248bf72c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_db_sync, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, description=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, distribution-scope=public, com.redhat.component=openstack-nova-conductor-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:44:17, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-conductor, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-conductor, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, 
vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step3, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, version=17.1.9, container_name=nova_db_sync)
Oct 13 14:01:08 standalone.localdomain podman[92290]: 2025-10-13 14:01:08.070465823 +0000 UTC m=+0.193185441 container init 77773b5c48917d01c3b9d9c27ee0facaf8a47d75ca76a3f47196db5ac150bd04 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api_db_sync, description=Red Hat OpenStack Platform 17.1 barbican-api, container_name=barbican_api_db_sync, com.redhat.component=openstack-barbican-api-container, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:22:44, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, distribution-scope=public, release=1, 
vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1)
Oct 13 14:01:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b72b428a811fff966752483db02baf6eee3c29de2fa12a8e31c22c4982857ec/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b72b428a811fff966752483db02baf6eee3c29de2fa12a8e31c22c4982857ec/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b72b428a811fff966752483db02baf6eee3c29de2fa12a8e31c22c4982857ec/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b72b428a811fff966752483db02baf6eee3c29de2fa12a8e31c22c4982857ec/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b72b428a811fff966752483db02baf6eee3c29de2fa12a8e31c22c4982857ec/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b72b428a811fff966752483db02baf6eee3c29de2fa12a8e31c22c4982857ec/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b72b428a811fff966752483db02baf6eee3c29de2fa12a8e31c22c4982857ec/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:08 standalone.localdomain podman[92292]: 2025-10-13 14:01:08.07423593 +0000 UTC m=+0.192142238 container start fdae5f2685dd6c0b2c5a94e58177ae691a9367fd824c6b6c63dbd8bb248bf72c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_db_sync, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-nova-conductor-container, config_id=tripleo_step3, build-date=2025-07-21T15:44:17, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_db_sync, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, 
vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-conductor, batch=17.1_20250721.1, name=rhosp17/openstack-nova-conductor, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:01:08 standalone.localdomain podman[92292]: 2025-10-13 14:01:08.074577531 +0000 UTC m=+0.192483939 container attach fdae5f2685dd6c0b2c5a94e58177ae691a9367fd824c6b6c63dbd8bb248bf72c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_db_sync, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=nova_db_sync, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, batch=17.1_20250721.1, version=17.1.9, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-nova-conductor-container, release=1, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T15:44:17, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, description=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:01:08 standalone.localdomain podman[92290]: 2025-10-13 14:01:08.077898734 +0000 UTC m=+0.200618342 container start 77773b5c48917d01c3b9d9c27ee0facaf8a47d75ca76a3f47196db5ac150bd04 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api_db_sync, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 barbican-api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, io.buildah.version=1.33.12, 
com.redhat.component=openstack-barbican-api-container, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, container_name=barbican_api_db_sync, name=rhosp17/openstack-barbican-api, io.openshift.expose-services=, build-date=2025-07-21T15:22:44, release=1)
Oct 13 14:01:08 standalone.localdomain podman[92290]: 2025-10-13 14:01:08.078016918 +0000 UTC m=+0.200736536 container attach 77773b5c48917d01c3b9d9c27ee0facaf8a47d75ca76a3f47196db5ac150bd04 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api_db_sync, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=barbican_api_db_sync, name=rhosp17/openstack-barbican-api, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, description=Red Hat OpenStack Platform 17.1 barbican-api, distribution-scope=public, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-api-container, version=17.1.9, architecture=x86_64, release=1)
Oct 13 14:01:08 standalone.localdomain sudo[92356]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:01:08 standalone.localdomain sudo[92356]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:01:08 standalone.localdomain sudo[92356]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:01:08 standalone.localdomain sudo[92358]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:01:08 standalone.localdomain sudo[92358]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:01:08 standalone.localdomain podman[92313]: 2025-10-13 14:01:08.129619756 +0000 UTC m=+0.211672287 container init dceaaeec95442e25c469c925f0684b91675a6d5970f6b543e393f9f795d2496d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.33.12, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, container_name=nova_virtstoraged, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, config_id=tripleo_step3, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=2, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git)
Oct 13 14:01:08 standalone.localdomain podman[92313]: 2025-10-13 14:01:08.136671715 +0000 UTC m=+0.218724246 container start dceaaeec95442e25c469c925f0684b91675a6d5970f6b543e393f9f795d2496d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, batch=17.1_20250721.1, container_name=nova_virtstoraged, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:59, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, architecture=x86_64)
Oct 13 14:01:08 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5ce329a35cfc30978bc40d323681fc5e --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 14:01:08 standalone.localdomain sudo[92356]: pam_unix(sudo:session): session closed for user root
Oct 13 14:01:08 standalone.localdomain sudo[92358]: pam_unix(sudo:session): session closed for user root
Oct 13 14:01:08 standalone.localdomain sudo[92364]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:01:08 standalone.localdomain systemd-logind[45629]: Existing logind session ID 25 used by new audit session, ignoring.
Oct 13 14:01:08 standalone.localdomain systemd[1]: Started Session c5 of User root.
Oct 13 14:01:08 standalone.localdomain sudo[92364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:01:08 standalone.localdomain su[92390]: (to barbican) root on none
Oct 13 14:01:08 standalone.localdomain su[92390]: pam_unix(su:session): session opened for user barbican(uid=42403) by (uid=0)
Oct 13 14:01:08 standalone.localdomain su[92390]: pam_lastlog(su:session): file /var/log/lastlog created
Oct 13 14:01:08 standalone.localdomain su[92390]: pam_lastlog(su:session): unable to open /var/log/btmp: No such file or directory
Oct 13 14:01:08 standalone.localdomain sudo[92364]: pam_unix(sudo:session): session closed for user root
Oct 13 14:01:08 standalone.localdomain systemd[1]: session-c5.scope: Deactivated successfully.
Oct 13 14:01:08 standalone.localdomain su[92393]: (to nova) root on none
Oct 13 14:01:08 standalone.localdomain su[92393]: pam_systemd(su:session): Failed to connect to system bus: No such file or directory
Oct 13 14:01:08 standalone.localdomain su[92393]: pam_unix(su:session): session opened for user nova(uid=42436) by (uid=0)
Oct 13 14:01:08 standalone.localdomain ceph-mon[29756]: pgmap v780: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:08 standalone.localdomain ceph-mon[29756]: 5.18 deep-scrub starts
Oct 13 14:01:08 standalone.localdomain ceph-mon[29756]: 5.18 deep-scrub ok
Oct 13 14:01:08 standalone.localdomain podman[92064]: 2025-10-13 14:01:08.830465025 +0000 UTC m=+1.352661281 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, architecture=x86_64, container_name=keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, 
build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:01:08 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:01:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:01:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v781: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:09 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct 13 14:01:10 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct 13 14:01:10 standalone.localdomain haproxy[70940]: 172.17.0.100:41137 [13/Oct/2025:14:01:09.059] mysql mysql/standalone.internalapi.localdomain 1/0/1061 14348 -- 4/4/3/3/0 0/0
Oct 13 14:01:10 standalone.localdomain podman[92217]: 2025-10-13 14:01:10.231912004 +0000 UTC m=+2.463785433 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, container_name=keystone, distribution-scope=public, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, 
name=rhosp17/openstack-keystone, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-keystone-container, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:18, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:01:10 standalone.localdomain haproxy[70940]: 172.17.0.100:42885 [13/Oct/2025:14:01:09.025] mysql mysql/standalone.internalapi.localdomain 1/0/1274 4376 -- 4/4/3/3/0 0/0
Oct 13 14:01:10 standalone.localdomain haproxy[70940]: 172.17.0.100:48323 [13/Oct/2025:14:01:10.161] mysql mysql/standalone.internalapi.localdomain 1/0/137 272 -- 4/4/2/2/0 0/0
Oct 13 14:01:10 standalone.localdomain su[92390]: pam_unix(su:session): session closed for user barbican
Oct 13 14:01:10 standalone.localdomain systemd[1]: libpod-77773b5c48917d01c3b9d9c27ee0facaf8a47d75ca76a3f47196db5ac150bd04.scope: Deactivated successfully.
Oct 13 14:01:10 standalone.localdomain systemd[1]: libpod-77773b5c48917d01c3b9d9c27ee0facaf8a47d75ca76a3f47196db5ac150bd04.scope: Consumed 1.186s CPU time.
Oct 13 14:01:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:01:10 standalone.localdomain ceph-mon[29756]: pgmap v781: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:10 standalone.localdomain ceph-mon[29756]: 4.1c scrub starts
Oct 13 14:01:10 standalone.localdomain ceph-mon[29756]: 4.1c scrub ok
Oct 13 14:01:10 standalone.localdomain systemd[1]: tmp-crun.2ck3Y0.mount: Deactivated successfully.
Oct 13 14:01:10 standalone.localdomain podman[92449]: 2025-10-13 14:01:10.480289283 +0000 UTC m=+0.063252542 container died 77773b5c48917d01c3b9d9c27ee0facaf8a47d75ca76a3f47196db5ac150bd04 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api_db_sync, build-date=2025-07-21T15:22:44, com.redhat.component=openstack-barbican-api-container, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, release=1, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team, 
tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=barbican_api_db_sync, name=rhosp17/openstack-barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vcs-type=git, version=17.1.9)
Oct 13 14:01:10 standalone.localdomain podman[92454]: 2025-10-13 14:01:10.497546771 +0000 UTC m=+0.072087047 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, config_id=tripleo_step1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, container_name=memcached, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, com.redhat.component=openstack-memcached-container, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:01:10 standalone.localdomain podman[92454]: 2025-10-13 14:01:10.518777873 +0000 UTC m=+0.093318129 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-memcached, container_name=memcached, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.component=openstack-memcached-container, build-date=2025-07-21T12:58:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 14:01:10 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:01:10 standalone.localdomain podman[92449]: 2025-10-13 14:01:10.578674879 +0000 UTC m=+0.161638098 container cleanup 77773b5c48917d01c3b9d9c27ee0facaf8a47d75ca76a3f47196db5ac150bd04 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api_db_sync, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, container_name=barbican_api_db_sync, summary=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-barbican-api-container, name=rhosp17/openstack-barbican-api, release=1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, 
config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T15:22:44, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, batch=17.1_20250721.1)
Oct 13 14:01:10 standalone.localdomain systemd[1]: libpod-conmon-77773b5c48917d01c3b9d9c27ee0facaf8a47d75ca76a3f47196db5ac150bd04.scope: Deactivated successfully.
Oct 13 14:01:10 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name barbican_api_db_sync --cap-add AUDIT_WRITE --conmon-pidfile /run/barbican_api_db_sync.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d98175232adee9e03624d31ed0b22944 --label config_id=tripleo_step3 --label container_name=barbican_api_db_sync --label managed_by=tripleo_ansible --label config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/barbican_api_db_sync.log --network host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/barbican:/var/log/barbican:z --volume /var/log/containers/httpd/barbican-api:/var/log/httpd:z --volume /var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro --volume /var/lib/kolla/config_files/barbican_api_db_sync.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1
Oct 13 14:01:10 standalone.localdomain haproxy[70940]: Server keystone_admin/standalone.internalapi.localdomain is UP, reason: Layer7 check passed, code: 200, check duration: 5ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:01:10 standalone.localdomain haproxy[70940]: Server keystone_public/standalone.internalapi.localdomain is UP, reason: Layer7 check passed, code: 200, check duration: 7ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:01:10 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 13 14:01:10 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 13 14:01:11 standalone.localdomain ceph-mon[29756]: 5.19 scrub starts
Oct 13 14:01:11 standalone.localdomain ceph-mon[29756]: 5.19 scrub ok
Oct 13 14:01:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v782: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eeb0f78246b2c0232f2a45eb268f58d7cda22f69a7a6ba5bc0279db237ec6e56-merged.mount: Deactivated successfully.
Oct 13 14:01:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77773b5c48917d01c3b9d9c27ee0facaf8a47d75ca76a3f47196db5ac150bd04-userdata-shm.mount: Deactivated successfully.
Oct 13 14:01:11 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Oct 13 14:01:11 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Oct 13 14:01:12 standalone.localdomain ceph-mon[29756]: pgmap v782: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:12 standalone.localdomain ceph-mon[29756]: 4.1d scrub starts
Oct 13 14:01:12 standalone.localdomain ceph-mon[29756]: 4.1d scrub ok
Oct 13 14:01:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v783: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:01:14 standalone.localdomain ceph-mon[29756]: pgmap v783: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v784: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:16 standalone.localdomain ceph-mon[29756]: pgmap v784: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:17 standalone.localdomain runuser[92825]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:01:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v785: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:17 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct 13 14:01:17 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct 13 14:01:17 standalone.localdomain runuser[92825]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:01:18 standalone.localdomain runuser[92894]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:01:18 standalone.localdomain ceph-mon[29756]: pgmap v785: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:18 standalone.localdomain ceph-mon[29756]: 5.1a scrub starts
Oct 13 14:01:18 standalone.localdomain ceph-mon[29756]: 5.1a scrub ok
Oct 13 14:01:18 standalone.localdomain runuser[92894]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:01:18 standalone.localdomain runuser[92956]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:01:19 standalone.localdomain runuser[92956]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:01:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:01:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v786: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:20 standalone.localdomain haproxy[70940]: 172.17.0.100:48543 [13/Oct/2025:14:01:10.062] mysql mysql/standalone.internalapi.localdomain 1/0/10251 96105 -- 3/3/2/2/0 0/0
Oct 13 14:01:20 standalone.localdomain haproxy[70940]: 172.17.0.100:47779 [13/Oct/2025:14:01:14.980] mysql mysql/standalone.internalapi.localdomain 1/0/5334 95715 -- 2/2/0/0/0 0/0
Oct 13 14:01:20 standalone.localdomain haproxy[70940]: 172.17.0.100:58015 [13/Oct/2025:14:01:09.941] mysql mysql/standalone.internalapi.localdomain 1/0/10373 3252 -- 2/2/0/0/0 0/0
Oct 13 14:01:20 standalone.localdomain su[92393]: pam_unix(su:session): session closed for user nova
Oct 13 14:01:20 standalone.localdomain systemd[1]: libpod-fdae5f2685dd6c0b2c5a94e58177ae691a9367fd824c6b6c63dbd8bb248bf72c.scope: Deactivated successfully.
Oct 13 14:01:20 standalone.localdomain systemd[1]: libpod-fdae5f2685dd6c0b2c5a94e58177ae691a9367fd824c6b6c63dbd8bb248bf72c.scope: Consumed 4.025s CPU time.
Oct 13 14:01:20 standalone.localdomain podman[92292]: 2025-10-13 14:01:20.438351969 +0000 UTC m=+12.556258297 container died fdae5f2685dd6c0b2c5a94e58177ae691a9367fd824c6b6c63dbd8bb248bf72c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_db_sync, tcib_managed=true, com.redhat.component=openstack-nova-conductor-container, container_name=nova_db_sync, build-date=2025-07-21T15:44:17, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, config_id=tripleo_step3, release=1, name=rhosp17/openstack-nova-conductor, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, architecture=x86_64, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:01:20 standalone.localdomain ceph-mon[29756]: pgmap v786: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fdae5f2685dd6c0b2c5a94e58177ae691a9367fd824c6b6c63dbd8bb248bf72c-userdata-shm.mount: Deactivated successfully.
Oct 13 14:01:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-88928f310f1ba382800c4333f5e27b201b6df1b159808455dd99069f0d30925c-merged.mount: Deactivated successfully.
Oct 13 14:01:20 standalone.localdomain podman[93030]: 2025-10-13 14:01:20.545669543 +0000 UTC m=+0.092713150 container cleanup fdae5f2685dd6c0b2c5a94e58177ae691a9367fd824c6b6c63dbd8bb248bf72c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_db_sync, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, summary=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, release=1, container_name=nova_db_sync, build-date=2025-07-21T15:44:17, config_id=tripleo_step3, batch=17.1_20250721.1, com.redhat.component=openstack-nova-conductor-container, managed_by=tripleo_ansible, config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-conductor, 
io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-conductor)
Oct 13 14:01:20 standalone.localdomain systemd[1]: libpod-conmon-fdae5f2685dd6c0b2c5a94e58177ae691a9367fd824c6b6c63dbd8bb248bf72c.scope: Deactivated successfully.
Oct 13 14:01:20 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_db_sync --cap-add AUDIT_WRITE --conmon-pidfile /run/nova_db_sync.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --env TRIPLEO_CONFIG_HASH=ff6c887813d25a6bfef54f8920eca651 --label config_id=tripleo_step3 --label container_name=nova_db_sync --label managed_by=tripleo_ansible --label config_data={'cap_add': ['AUDIT_WRITE'], 'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor_db_sync.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_db_sync.log --network host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/kolla/config_files/nova_conductor_db_sync.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1
Oct 13 14:01:20 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct 13 14:01:20 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct 13 14:01:21 standalone.localdomain podman[93178]: 2025-10-13 14:01:21.048812 +0000 UTC m=+0.093464523 container create 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, build-date=2025-07-21T13:27:18, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, name=rhosp17/openstack-keystone, io.openshift.expose-services=, tcib_managed=true, release=1, container_name=keystone_cron, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 14:01:21 standalone.localdomain podman[93185]: 2025-10-13 14:01:21.075058709 +0000 UTC m=+0.112975572 container create 1500fd74fde13a67761de0823a6a7c9804b695da50d1f8aec6313a83a310f67e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, build-date=2025-07-21T14:56:59, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtqemud)
Oct 13 14:01:21 standalone.localdomain podman[93208]: 2025-10-13 14:01:21.10110018 +0000 UTC m=+0.124809670 container create 429bc2cca9435dff000dec9a48cdb0f8ecbd6088075dde57480c41c2a6b5aaf6 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api_secret_store_sync, config_id=tripleo_step3, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.expose-services=, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_secret_store_sync.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.component=openstack-barbican-api-container, build-date=2025-07-21T15:22:44, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat 
OpenStack Platform 17.1 barbican-api, version=17.1.9, release=1, name=rhosp17/openstack-barbican-api, architecture=x86_64, container_name=barbican_api_secret_store_sync, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed)
Oct 13 14:01:21 standalone.localdomain podman[93178]: 2025-10-13 14:01:21.014624925 +0000 UTC m=+0.059277448 image pull  registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1
Oct 13 14:01:21 standalone.localdomain systemd[1]: Started libpod-conmon-1500fd74fde13a67761de0823a6a7c9804b695da50d1f8aec6313a83a310f67e.scope.
Oct 13 14:01:21 standalone.localdomain systemd[1]: Started libpod-conmon-429bc2cca9435dff000dec9a48cdb0f8ecbd6088075dde57480c41c2a6b5aaf6.scope.
Oct 13 14:01:21 standalone.localdomain podman[93185]: 2025-10-13 14:01:21.019136016 +0000 UTC m=+0.057053029 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 14:01:21 standalone.localdomain systemd[1]: Started libpod-conmon-2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.scope.
Oct 13 14:01:21 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:01:21 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:01:21 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:01:21 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffa5554b0cfe650195b161e4e00ab11f8ed81bad83af3fd33e2eb249df9058a/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:21 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffa5554b0cfe650195b161e4e00ab11f8ed81bad83af3fd33e2eb249df9058a/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:21 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffa5554b0cfe650195b161e4e00ab11f8ed81bad83af3fd33e2eb249df9058a/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:21 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffa5554b0cfe650195b161e4e00ab11f8ed81bad83af3fd33e2eb249df9058a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:21 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b48f191c76ffab71d400aed04746f4c46427f833446e2083cf65ea27103edc34/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:21 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffa5554b0cfe650195b161e4e00ab11f8ed81bad83af3fd33e2eb249df9058a/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:21 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b48f191c76ffab71d400aed04746f4c46427f833446e2083cf65ea27103edc34/merged/var/log/barbican supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:21 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffa5554b0cfe650195b161e4e00ab11f8ed81bad83af3fd33e2eb249df9058a/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:21 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffa5554b0cfe650195b161e4e00ab11f8ed81bad83af3fd33e2eb249df9058a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:21 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffa5554b0cfe650195b161e4e00ab11f8ed81bad83af3fd33e2eb249df9058a/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:21 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e224b1120f02376fec61e07ffc17d3c98868215655f2d726d537b67be2bd79/merged/var/log/keystone supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:21 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44e224b1120f02376fec61e07ffc17d3c98868215655f2d726d537b67be2bd79/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:21 standalone.localdomain podman[93185]: 2025-10-13 14:01:21.140286171 +0000 UTC m=+0.178202994 container init 1500fd74fde13a67761de0823a6a7c9804b695da50d1f8aec6313a83a310f67e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtqemud, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.33.12, release=2, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, build-date=2025-07-21T14:56:59, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git)
Oct 13 14:01:21 standalone.localdomain podman[93185]: 2025-10-13 14:01:21.146578667 +0000 UTC m=+0.184495510 container start 1500fd74fde13a67761de0823a6a7c9804b695da50d1f8aec6313a83a310f67e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T14:56:59, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.buildah.version=1.33.12, release=2, container_name=nova_virtqemud, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-libvirt-container)
Oct 13 14:01:21 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5ce329a35cfc30978bc40d323681fc5e --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume 
/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 14:01:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:01:21 standalone.localdomain podman[93178]: 2025-10-13 14:01:21.159278913 +0000 UTC m=+0.203931456 container init 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.buildah.version=1.33.12, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, batch=17.1_20250721.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T13:27:18, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone_cron, config_id=tripleo_step3, managed_by=tripleo_ansible)
Oct 13 14:01:21 standalone.localdomain podman[93208]: 2025-10-13 14:01:21.066965606 +0000 UTC m=+0.090675086 image pull  registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1
Oct 13 14:01:21 standalone.localdomain sudo[93293]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:01:21 standalone.localdomain systemd-logind[45629]: Existing logind session ID 25 used by new audit session, ignoring.
Oct 13 14:01:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:01:21 standalone.localdomain podman[93178]: 2025-10-13 14:01:21.187544724 +0000 UTC m=+0.232197247 container start 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, build-date=2025-07-21T13:27:18, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, com.redhat.component=openstack-keystone-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12)
Oct 13 14:01:21 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name keystone_cron --conmon-pidfile /run/keystone_cron.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=32ceb64403625ed4f04d4f0bcdc85988 --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron keystone --label config_id=tripleo_step3 --label container_name=keystone_cron --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/keystone_cron.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/keystone:/var/log/keystone:z --volume /var/log/containers/httpd/keystone:/var/log/httpd:z --volume /var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1 /bin/bash -c /usr/local/bin/kolla_set_configs && /usr/sbin/crond -n
Oct 13 14:01:21 standalone.localdomain systemd[1]: Started Session c6 of User root.
Oct 13 14:01:21 standalone.localdomain podman[93208]: 2025-10-13 14:01:21.198244217 +0000 UTC m=+0.221953677 container init 429bc2cca9435dff000dec9a48cdb0f8ecbd6088075dde57480c41c2a6b5aaf6 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api_secret_store_sync, config_id=tripleo_step3, batch=17.1_20250721.1, release=1, build-date=2025-07-21T15:22:44, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, container_name=barbican_api_secret_store_sync, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_secret_store_sync.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-barbican-api, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.component=openstack-barbican-api-container, vcs-type=git, architecture=x86_64, 
vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, managed_by=tripleo_ansible)
Oct 13 14:01:21 standalone.localdomain podman[93208]: 2025-10-13 14:01:21.20634219 +0000 UTC m=+0.230051660 container start 429bc2cca9435dff000dec9a48cdb0f8ecbd6088075dde57480c41c2a6b5aaf6 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api_secret_store_sync, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, managed_by=tripleo_ansible, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_secret_store_sync.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.9, com.redhat.component=openstack-barbican-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-barbican-api, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, vcs-type=git, 
maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_api_secret_store_sync, description=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, release=1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:01:21 standalone.localdomain podman[93208]: 2025-10-13 14:01:21.209054084 +0000 UTC m=+0.232763574 container attach 429bc2cca9435dff000dec9a48cdb0f8ecbd6088075dde57480c41c2a6b5aaf6 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api_secret_store_sync, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, container_name=barbican_api_secret_store_sync, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_secret_store_sync.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.component=openstack-barbican-api-container, batch=17.1_20250721.1, release=1, name=rhosp17/openstack-barbican-api, 
build-date=2025-07-21T15:22:44, tcib_managed=true, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:01:21 standalone.localdomain sudo[93293]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:01:21 standalone.localdomain sudo[93324]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:01:21 standalone.localdomain sudo[93324]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:01:21 standalone.localdomain crond[93302]: (CRON) STARTUP (1.5.7)
Oct 13 14:01:21 standalone.localdomain crond[93302]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 13 14:01:21 standalone.localdomain crond[93302]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 99% if used.)
Oct 13 14:01:21 standalone.localdomain crond[93302]: (CRON) INFO (running with inotify support)
Oct 13 14:01:21 standalone.localdomain sudo[93293]: pam_unix(sudo:session): session closed for user root
Oct 13 14:01:21 standalone.localdomain systemd[1]: session-c6.scope: Deactivated successfully.
Oct 13 14:01:21 standalone.localdomain sudo[93324]: pam_unix(sudo:session): session closed for user root
Oct 13 14:01:21 standalone.localdomain podman[93310]: 2025-10-13 14:01:21.327575397 +0000 UTC m=+0.129156816 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=starting, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, release=1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, vcs-type=git, com.redhat.component=openstack-keystone-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, build-date=2025-07-21T13:27:18, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, batch=17.1_20250721.1, container_name=keystone_cron, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 13 14:01:21 standalone.localdomain su[93384]: (to barbican) root on none
Oct 13 14:01:21 standalone.localdomain su[93384]: pam_unix(su:session): session opened for user barbican(uid=42403) by (uid=0)
Oct 13 14:01:21 standalone.localdomain su[93384]: pam_lastlog(su:session): file /var/log/lastlog created
Oct 13 14:01:21 standalone.localdomain su[93384]: pam_lastlog(su:session): unable to open /var/log/btmp: No such file or directory
Oct 13 14:01:21 standalone.localdomain podman[93310]: 2025-10-13 14:01:21.408031914 +0000 UTC m=+0.209613333 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-keystone, release=1, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=0693142a4093f932157b8019660e85aa608befc8, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-keystone-container, tcib_managed=true, build-date=2025-07-21T13:27:18, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step3, container_name=keystone_cron, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:01:21 standalone.localdomain ceph-mon[29756]: 4.1e scrub starts
Oct 13 14:01:21 standalone.localdomain ceph-mon[29756]: 4.1e scrub ok
Oct 13 14:01:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v787: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:21 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:01:21 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 13 14:01:21 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 13 14:01:22 standalone.localdomain haproxy[70940]: 172.17.0.100:46689 [13/Oct/2025:14:01:22.142] mysql mysql/standalone.internalapi.localdomain 1/0/210 3232 -- 1/1/0/0/0 0/0
Oct 13 14:01:22 standalone.localdomain su[93384]: pam_unix(su:session): session closed for user barbican
Oct 13 14:01:22 standalone.localdomain systemd[1]: libpod-429bc2cca9435dff000dec9a48cdb0f8ecbd6088075dde57480c41c2a6b5aaf6.scope: Deactivated successfully.
Oct 13 14:01:22 standalone.localdomain systemd[1]: libpod-429bc2cca9435dff000dec9a48cdb0f8ecbd6088075dde57480c41c2a6b5aaf6.scope: Consumed 1.161s CPU time.
Oct 13 14:01:22 standalone.localdomain ceph-mon[29756]: pgmap v787: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:22 standalone.localdomain ceph-mon[29756]: 5.1b scrub starts
Oct 13 14:01:22 standalone.localdomain ceph-mon[29756]: 5.1b scrub ok
Oct 13 14:01:22 standalone.localdomain podman[93408]: 2025-10-13 14:01:22.465649159 +0000 UTC m=+0.056343727 container died 429bc2cca9435dff000dec9a48cdb0f8ecbd6088075dde57480c41c2a6b5aaf6 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api_secret_store_sync, name=rhosp17/openstack-barbican-api, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, io.buildah.version=1.33.12, version=17.1.9, vendor=Red Hat, Inc., config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_secret_store_sync.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-07-21T15:22:44, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, 
com.redhat.component=openstack-barbican-api-container, container_name=barbican_api_secret_store_sync, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:01:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-429bc2cca9435dff000dec9a48cdb0f8ecbd6088075dde57480c41c2a6b5aaf6-userdata-shm.mount: Deactivated successfully.
Oct 13 14:01:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b48f191c76ffab71d400aed04746f4c46427f833446e2083cf65ea27103edc34-merged.mount: Deactivated successfully.
Oct 13 14:01:22 standalone.localdomain podman[93408]: 2025-10-13 14:01:22.527231158 +0000 UTC m=+0.117925696 container cleanup 429bc2cca9435dff000dec9a48cdb0f8ecbd6088075dde57480c41c2a6b5aaf6 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api_secret_store_sync, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step3, version=17.1.9, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, build-date=2025-07-21T15:22:44, description=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, release=1, summary=Red Hat OpenStack Platform 17.1 barbican-api, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=barbican_api_secret_store_sync, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, tcib_managed=true, batch=17.1_20250721.1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', 
'/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_secret_store_sync.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-barbican-api-container)
Oct 13 14:01:22 standalone.localdomain systemd[1]: libpod-conmon-429bc2cca9435dff000dec9a48cdb0f8ecbd6088075dde57480c41c2a6b5aaf6.scope: Deactivated successfully.
Oct 13 14:01:22 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name barbican_api_secret_store_sync --conmon-pidfile /run/barbican_api_secret_store_sync.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d98175232adee9e03624d31ed0b22944 --label config_id=tripleo_step3 --label container_name=barbican_api_secret_store_sync --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro', '/var/lib/kolla/config_files/barbican_api_secret_store_sync.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/barbican_api_secret_store_sync.log --network host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/barbican:/var/log/barbican:z --volume /var/log/containers/httpd/barbican-api:/var/log/httpd:z --volume /var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro --volume /var/lib/kolla/config_files/barbican_api_secret_store_sync.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1
Oct 13 14:01:23 standalone.localdomain podman[93527]: 2025-10-13 14:01:23.060198745 +0000 UTC m=+0.072277893 container create 082f90b100c1e61ff254c4e8202645f7e953ca071f54d740dec70e32afe4da6f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, release=2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtproxyd, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:01:23
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'backups', 'manila_metadata', 'images', 'manila_data', '.mgr', 'volumes']
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:01:23 standalone.localdomain systemd[1]: Started libpod-conmon-082f90b100c1e61ff254c4e8202645f7e953ca071f54d740dec70e32afe4da6f.scope.
Oct 13 14:01:23 standalone.localdomain podman[93533]: 2025-10-13 14:01:23.10847222 +0000 UTC m=+0.106420967 container create 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:22:44, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-api-container, name=rhosp17/openstack-barbican-api, distribution-scope=public, release=1, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, 
vcs-type=git, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 barbican-api, container_name=barbican_api, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, managed_by=tripleo_ansible)
Oct 13 14:01:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:01:23 standalone.localdomain podman[93527]: 2025-10-13 14:01:23.021764678 +0000 UTC m=+0.033843896 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 14:01:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcff515959244485933a3af9172fbefc10c4438a8f7f3991d4ddc0d64213ed9a/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcff515959244485933a3af9172fbefc10c4438a8f7f3991d4ddc0d64213ed9a/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcff515959244485933a3af9172fbefc10c4438a8f7f3991d4ddc0d64213ed9a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcff515959244485933a3af9172fbefc10c4438a8f7f3991d4ddc0d64213ed9a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcff515959244485933a3af9172fbefc10c4438a8f7f3991d4ddc0d64213ed9a/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcff515959244485933a3af9172fbefc10c4438a8f7f3991d4ddc0d64213ed9a/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcff515959244485933a3af9172fbefc10c4438a8f7f3991d4ddc0d64213ed9a/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:23 standalone.localdomain podman[93527]: 2025-10-13 14:01:23.134003855 +0000 UTC m=+0.146083003 container init 082f90b100c1e61ff254c4e8202645f7e953ca071f54d740dec70e32afe4da6f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, container_name=nova_virtproxyd, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, release=2, build-date=2025-07-21T14:56:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, architecture=x86_64, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3)
Oct 13 14:01:23 standalone.localdomain systemd[1]: Started libpod-conmon-491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.scope.
Oct 13 14:01:23 standalone.localdomain podman[93527]: 2025-10-13 14:01:23.142711366 +0000 UTC m=+0.154790524 container start 082f90b100c1e61ff254c4e8202645f7e953ca071f54d740dec70e32afe4da6f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9, container_name=nova_virtproxyd, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step3, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, release=2, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-07-21T14:56:59, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible)
Oct 13 14:01:23 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5ce329a35cfc30978bc40d323681fc5e --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 14:01:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:01:23 standalone.localdomain podman[93533]: 2025-10-13 14:01:23.060320668 +0000 UTC m=+0.058269406 image pull  registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1
Oct 13 14:01:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3ceeb611a5bf472a6616e0e2799f4dfa33cb00f5de706c2cb6f3f3948731f3/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3ceeb611a5bf472a6616e0e2799f4dfa33cb00f5de706c2cb6f3f3948731f3/merged/var/log/barbican supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:01:23 standalone.localdomain sudo[93563]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:01:23 standalone.localdomain systemd-logind[45629]: Existing logind session ID 25 used by new audit session, ignoring.
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:01:23 standalone.localdomain systemd[1]: Started Session c7 of User root.
Oct 13 14:01:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:01:23 standalone.localdomain podman[93533]: 2025-10-13 14:01:23.201158038 +0000 UTC m=+0.199106765 container init 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=barbican_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 barbican-api, vendor=Red Hat, Inc., config_id=tripleo_step3, build-date=2025-07-21T15:22:44, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-barbican-api-container, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, name=rhosp17/openstack-barbican-api, version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12)
Oct 13 14:01:23 standalone.localdomain sudo[93563]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:01:23 standalone.localdomain sudo[93579]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:01:23 standalone.localdomain sudo[93579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:01:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:01:23 standalone.localdomain podman[93533]: 2025-10-13 14:01:23.243571529 +0000 UTC m=+0.241520286 container start 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-type=git, tcib_managed=true, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, com.redhat.component=openstack-barbican-api-container, name=rhosp17/openstack-barbican-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, release=1, container_name=barbican_api, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44)
Oct 13 14:01:23 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name barbican_api --conmon-pidfile /run/barbican_api.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d98175232adee9e03624d31ed0b22944 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=barbican_api --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/barbican_api.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/barbican:/var/log/barbican:z --volume /var/log/containers/httpd/barbican-api:/var/log/httpd:z --volume /var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1
Oct 13 14:01:23 standalone.localdomain sudo[93563]: pam_unix(sudo:session): session closed for user root
Oct 13 14:01:23 standalone.localdomain systemd[1]: session-c7.scope: Deactivated successfully.
Oct 13 14:01:23 standalone.localdomain sudo[93579]: pam_unix(sudo:session): session closed for user root
Oct 13 14:01:23 standalone.localdomain podman[93583]: 2025-10-13 14:01:23.38961571 +0000 UTC m=+0.138226288 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=starting, build-date=2025-07-21T15:22:44, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, vcs-type=git, container_name=barbican_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.component=openstack-barbican-api-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-barbican-api, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', 
'/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:01:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v788: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:23 standalone.localdomain podman[93829]: 2025-10-13 14:01:23.777531937 +0000 UTC m=+0.089148348 container create 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, version=17.1.9, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_id=tripleo_step3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=barbican_keystone_listener, batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, release=1, name=rhosp17/openstack-barbican-keystone-listener, build-date=2025-07-21T16:18:19, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-barbican-keystone-listener-container)
Oct 13 14:01:23 standalone.localdomain systemd[1]: Started libpod-conmon-5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.scope.
Oct 13 14:01:23 standalone.localdomain podman[93829]: 2025-10-13 14:01:23.72979954 +0000 UTC m=+0.041415961 image pull  registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1
Oct 13 14:01:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:01:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6698bdef4914a669560655838bf20b3925b44012adb424488f64497bc052998/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6698bdef4914a669560655838bf20b3925b44012adb424488f64497bc052998/merged/var/log/barbican supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:01:23 standalone.localdomain podman[93829]: 2025-10-13 14:01:23.872530407 +0000 UTC m=+0.184146848 container init 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, managed_by=tripleo_ansible, config_id=tripleo_step3, distribution-scope=public, name=rhosp17/openstack-barbican-keystone-listener, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 
barbican-keystone-listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-barbican-keystone-listener-container, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, container_name=barbican_keystone_listener, architecture=x86_64, build-date=2025-07-21T16:18:19, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:01:23 standalone.localdomain sudo[93849]: barbican : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:01:23 standalone.localdomain sudo[93849]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42403)
Oct 13 14:01:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:01:23 standalone.localdomain podman[93829]: 2025-10-13 14:01:23.904211815 +0000 UTC m=+0.215828186 container start 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_keystone_listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-barbican-keystone-listener, batch=17.1_20250721.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, tcib_managed=true, io.openshift.tags=rhosp 
osp openstack osp-17.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, build-date=2025-07-21T16:18:19, io.buildah.version=1.33.12, distribution-scope=public, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, com.redhat.component=openstack-barbican-keystone-listener-container, release=1)
Oct 13 14:01:23 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name barbican_keystone_listener --conmon-pidfile /run/barbican_keystone_listener.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d98175232adee9e03624d31ed0b22944 --healthcheck-command /openstack/healthcheck 5672 --label config_id=tripleo_step3 --label container_name=barbican_keystone_listener --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/barbican_keystone_listener.log --network host --privileged=False --user barbican --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/barbican:/var/log/barbican:z --volume /var/log/containers/httpd/barbican-api:/var/log/httpd:z --volume /var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1
Oct 13 14:01:23 standalone.localdomain sudo[93849]: pam_unix(sudo:session): session closed for user root
Oct 13 14:01:24 standalone.localdomain podman[93851]: 2025-10-13 14:01:24.069882197 +0000 UTC m=+0.153766072 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=starting, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, release=1, batch=17.1_20250721.1, build-date=2025-07-21T16:18:19, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, container_name=barbican_keystone_listener, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', 
'/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.component=openstack-barbican-keystone-listener-container, io.openshift.expose-services=)
Oct 13 14:01:24 standalone.localdomain podman[93851]: 2025-10-13 14:01:24.09177491 +0000 UTC m=+0.175658775 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, io.openshift.expose-services=, release=1, version=17.1.9, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, build-date=2025-07-21T16:18:19, distribution-scope=public, batch=17.1_20250721.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-barbican-keystone-listener-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, config_id=tripleo_step3, tcib_managed=true, container_name=barbican_keystone_listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:01:24 standalone.localdomain podman[93851]: unhealthy
Oct 13 14:01:24 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:01:24 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Failed with result 'exit-code'.
Oct 13 14:01:24 standalone.localdomain podman[93936]: 2025-10-13 14:01:24.434001193 +0000 UTC m=+0.083230975 container create 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, build-date=2025-07-21T15:36:22, com.redhat.component=openstack-barbican-worker-container, name=rhosp17/openstack-barbican-worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, container_name=barbican_worker, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, architecture=x86_64, release=1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:01:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:01:24 standalone.localdomain systemd[1]: Started libpod-conmon-8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.scope.
Oct 13 14:01:24 standalone.localdomain ceph-mon[29756]: pgmap v788: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:24 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:01:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dfe70c52a10e8bd718ed4752cbab0ea6264e7dcefe217a06d9a919012c408d9/merged/var/log/barbican supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dfe70c52a10e8bd718ed4752cbab0ea6264e7dcefe217a06d9a919012c408d9/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:01:24 standalone.localdomain podman[93936]: 2025-10-13 14:01:24.396683651 +0000 UTC m=+0.045913463 image pull  registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1
Oct 13 14:01:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:01:24 standalone.localdomain podman[93936]: 2025-10-13 14:01:24.513987835 +0000 UTC m=+0.163217637 container init 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-type=git, managed_by=tripleo_ansible, container_name=barbican_worker, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T15:36:22, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 barbican-worker, config_id=tripleo_step3, com.redhat.component=openstack-barbican-worker-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', 
'/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-barbican-worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:01:24 standalone.localdomain sudo[93996]: barbican : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:01:24 standalone.localdomain sudo[93996]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42403)
Oct 13 14:01:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:01:24 standalone.localdomain podman[93936]: 2025-10-13 14:01:24.544771614 +0000 UTC m=+0.194001396 container start 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, container_name=barbican_worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, build-date=2025-07-21T15:36:22, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, com.redhat.component=openstack-barbican-worker-container, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 barbican-worker, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-barbican-worker, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step3, version=17.1.9)
Oct 13 14:01:24 standalone.localdomain python3[87857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name barbican_worker --conmon-pidfile /run/barbican_worker.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d98175232adee9e03624d31ed0b22944 --healthcheck-command /openstack/healthcheck 5672 --label config_id=tripleo_step3 --label container_name=barbican_worker --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/barbican_worker.log --network host --privileged=False --user barbican --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/barbican:/var/log/barbican:z --volume /var/log/containers/httpd/barbican-api:/var/log/httpd:z --volume /var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1
Oct 13 14:01:24 standalone.localdomain sudo[93996]: pam_unix(sudo:session): session closed for user root
Oct 13 14:01:24 standalone.localdomain podman[93998]: 2025-10-13 14:01:24.643065147 +0000 UTC m=+0.090709677 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=starting, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, build-date=2025-07-21T15:36:22, container_name=barbican_worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, com.redhat.component=openstack-barbican-worker-container, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, name=rhosp17/openstack-barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, config_id=tripleo_step3)
Oct 13 14:01:24 standalone.localdomain podman[93998]: 2025-10-13 14:01:24.68582636 +0000 UTC m=+0.133470880 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, com.redhat.component=openstack-barbican-worker-container, name=rhosp17/openstack-barbican-worker, build-date=2025-07-21T15:36:22, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, container_name=barbican_worker, batch=17.1_20250721.1)
Oct 13 14:01:24 standalone.localdomain podman[93998]: unhealthy
Oct 13 14:01:24 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:01:24 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Failed with result 'exit-code'.
Oct 13 14:01:24 standalone.localdomain podman[93583]: 2025-10-13 14:01:24.721204282 +0000 UTC m=+1.469814860 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-barbican-api-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, description=Red Hat OpenStack Platform 17.1 barbican-api, release=1, build-date=2025-07-21T15:22:44)
Oct 13 14:01:24 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:01:24 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct 13 14:01:24 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct 13 14:01:25 standalone.localdomain python3[94057]: ansible-file Invoked with path=/etc/systemd/system/tripleo_barbican_api.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:25 standalone.localdomain python3[94059]: ansible-file Invoked with path=/etc/systemd/system/tripleo_barbican_keystone_listener.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:25 standalone.localdomain python3[94102]: ansible-file Invoked with path=/etc/systemd/system/tripleo_barbican_worker.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v789: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:25 standalone.localdomain ceph-mon[29756]: 4.1f scrub starts
Oct 13 14:01:25 standalone.localdomain ceph-mon[29756]: 4.1f scrub ok
Oct 13 14:01:25 standalone.localdomain python3[94106]: ansible-file Invoked with path=/etc/systemd/system/tripleo_horizon.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:25 standalone.localdomain python3[94147]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:25 standalone.localdomain python3[94157]: ansible-file Invoked with path=/etc/systemd/system/tripleo_keystone.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:26 standalone.localdomain python3[94159]: ansible-file Invoked with path=/etc/systemd/system/tripleo_keystone_cron.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:26 standalone.localdomain python3[94161]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:26 standalone.localdomain python3[94163]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:26 standalone.localdomain ceph-mon[29756]: pgmap v789: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:26 standalone.localdomain python3[94165]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:26 standalone.localdomain python3[94167]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:26 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct 13 14:01:26 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct 13 14:01:27 standalone.localdomain python3[94177]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:27 standalone.localdomain python3[94179]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:27 standalone.localdomain python3[94181]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_barbican_api_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:01:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v790: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:27 standalone.localdomain ceph-mon[29756]: 5.1c scrub starts
Oct 13 14:01:27 standalone.localdomain ceph-mon[29756]: 5.1c scrub ok
Oct 13 14:01:27 standalone.localdomain python3[94183]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_barbican_keystone_listener_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:01:27 standalone.localdomain python3[94185]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_barbican_worker_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:01:27 standalone.localdomain python3[94195]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_horizon_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:01:27 standalone.localdomain python3[94197]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:01:28 standalone.localdomain python3[94199]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_keystone_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:01:28 standalone.localdomain haproxy[70940]: Server barbican/standalone.internalapi.localdomain is UP, reason: Layer7 check passed, code: 200, check duration: 1236ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:01:28 standalone.localdomain python3[94201]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_keystone_cron_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:01:28 standalone.localdomain python3[94203]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:01:28 standalone.localdomain ceph-mon[29756]: pgmap v790: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:28 standalone.localdomain python3[94205]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:01:28 standalone.localdomain python3[94207]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:01:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:01:28 standalone.localdomain podman[94217]: 2025-10-13 14:01:28.824589185 +0000 UTC m=+0.088823409 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, com.redhat.component=openstack-ovn-northd-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, build-date=2025-07-21T13:30:04, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, container_name=ovn_cluster_northd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, config_id=ovn_cluster_northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:01:28 standalone.localdomain podman[94217]: 2025-10-13 14:01:28.86293033 +0000 UTC m=+0.127164584 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, name=rhosp17/openstack-ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, vcs-type=git, config_id=ovn_cluster_northd, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ovn-northd-container, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04, tcib_managed=true, container_name=ovn_cluster_northd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:01:28 standalone.localdomain python3[94209]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:01:28 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:01:28 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Oct 13 14:01:29 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Oct 13 14:01:29 standalone.localdomain python3[94238]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:01:29 standalone.localdomain python3[94240]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:01:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:01:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v791: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:29 standalone.localdomain ceph-mon[29756]: 5.1d deep-scrub starts
Oct 13 14:01:29 standalone.localdomain ceph-mon[29756]: 5.1d deep-scrub ok
Oct 13 14:01:29 standalone.localdomain python3[94250]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364089.222693-94055-84021428536824/source dest=/etc/systemd/system/tripleo_barbican_api.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:29 standalone.localdomain runuser[94265]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:01:29 standalone.localdomain python3[94260]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364089.222693-94055-84021428536824/source dest=/etc/systemd/system/tripleo_barbican_keystone_listener.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:30 standalone.localdomain python3[94311]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364089.222693-94055-84021428536824/source dest=/etc/systemd/system/tripleo_barbican_worker.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:30 standalone.localdomain ceph-mon[29756]: pgmap v791: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:30 standalone.localdomain runuser[94265]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:01:30 standalone.localdomain python3[94320]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364089.222693-94055-84021428536824/source dest=/etc/systemd/system/tripleo_horizon.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:30 standalone.localdomain runuser[94330]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:01:31 standalone.localdomain python3[94384]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364089.222693-94055-84021428536824/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:31 standalone.localdomain runuser[94330]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:01:31 standalone.localdomain runuser[94449]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:01:31 standalone.localdomain python3[94439]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364089.222693-94055-84021428536824/source dest=/etc/systemd/system/tripleo_keystone.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v792: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:01:31 standalone.localdomain python3[94497]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364089.222693-94055-84021428536824/source dest=/etc/systemd/system/tripleo_keystone_cron.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:31 standalone.localdomain podman[94498]: 2025-10-13 14:01:31.828044843 +0000 UTC m=+0.093939647 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, config_id=tripleo_step2, name=rhosp17/openstack-mariadb, container_name=clustercheck, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, com.redhat.component=openstack-mariadb-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, version=17.1.9, 
build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, vendor=Red Hat, Inc.)
Oct 13 14:01:31 standalone.localdomain podman[94498]: 2025-10-13 14:01:31.911067151 +0000 UTC m=+0.176961935 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, release=1, tcib_managed=true, config_id=tripleo_step2, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, name=rhosp17/openstack-mariadb, build-date=2025-07-21T12:58:45, container_name=clustercheck, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=)
Oct 13 14:01:31 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:01:32 standalone.localdomain runuser[94449]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:01:32 standalone.localdomain python3[94547]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364089.222693-94055-84021428536824/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:32 standalone.localdomain ceph-mon[29756]: pgmap v792: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:32 standalone.localdomain python3[94558]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364089.222693-94055-84021428536824/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:32 standalone.localdomain python3[94560]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364089.222693-94055-84021428536824/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:33 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.1e deep-scrub starts
Oct 13 14:01:33 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.1e deep-scrub ok
Oct 13 14:01:33 standalone.localdomain python3[94570]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364089.222693-94055-84021428536824/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v793: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:33 standalone.localdomain ceph-mon[29756]: 5.1e deep-scrub starts
Oct 13 14:01:33 standalone.localdomain ceph-mon[29756]: 5.1e deep-scrub ok
Oct 13 14:01:33 standalone.localdomain python3[94613]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364089.222693-94055-84021428536824/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:34 standalone.localdomain python3[94656]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364089.222693-94055-84021428536824/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:34 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct 13 14:01:34 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct 13 14:01:34 standalone.localdomain python3[94666]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 14:01:34 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:34 standalone.localdomain systemd-rc-local-generator[94685]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:34 standalone.localdomain systemd-sysv-generator[94694]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:01:34 standalone.localdomain sudo[94699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:01:34 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:34 standalone.localdomain ceph-mon[29756]: pgmap v793: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:34 standalone.localdomain ceph-mon[29756]: 5.1f scrub starts
Oct 13 14:01:34 standalone.localdomain ceph-mon[29756]: 5.1f scrub ok
Oct 13 14:01:34 standalone.localdomain sudo[94699]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:01:34 standalone.localdomain sudo[94699]: pam_unix(sudo:session): session closed for user root
Oct 13 14:01:34 standalone.localdomain sudo[94758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:01:34 standalone.localdomain sudo[94758]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:01:35 standalone.localdomain sudo[94758]: pam_unix(sudo:session): session closed for user root
Oct 13 14:01:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:01:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:01:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:01:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:01:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:01:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:01:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:01:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:01:35 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev f6b1bcb5-f764-4d8f-8cce-8c12f4088f03 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:01:35 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev f6b1bcb5-f764-4d8f-8cce-8c12f4088f03 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:01:35 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event f6b1bcb5-f764-4d8f-8cce-8c12f4088f03 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:01:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v794: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:35 standalone.localdomain sudo[94855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:01:35 standalone.localdomain sudo[94855]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:01:35 standalone.localdomain sudo[94855]: pam_unix(sudo:session): session closed for user root
Oct 13 14:01:35 standalone.localdomain python3[94813]: ansible-systemd Invoked with state=restarted name=tripleo_barbican_api.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:01:35 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:01:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:01:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:01:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:01:35 standalone.localdomain systemd-rc-local-generator[94933]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:35 standalone.localdomain systemd-sysv-generator[94938]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:35 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:35 standalone.localdomain systemd[1]: Starting barbican_api container...
Oct 13 14:01:35 standalone.localdomain systemd[1]: Started barbican_api container.
Oct 13 14:01:36 standalone.localdomain ceph-mon[29756]: pgmap v794: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:01:36 standalone.localdomain python3[94971]: ansible-systemd Invoked with state=restarted name=tripleo_barbican_keystone_listener.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:01:36 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:36 standalone.localdomain podman[94972]: 2025-10-13 14:01:36.834459154 +0000 UTC m=+0.100351567 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:15, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, summary=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=horizon, description=Red Hat OpenStack Platform 17.1 horizon, config_id=tripleo_step3, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, release=1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, com.redhat.component=openstack-horizon-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-horizon, vcs-type=git)
Oct 13 14:01:36 standalone.localdomain systemd-sysv-generator[95027]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:36 standalone.localdomain systemd-rc-local-generator[95019]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:36 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:37 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.1 deep-scrub starts
Oct 13 14:01:37 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.1 deep-scrub ok
Oct 13 14:01:37 standalone.localdomain systemd[1]: Starting barbican_keystone_listener container...
Oct 13 14:01:37 standalone.localdomain systemd[1]: Started barbican_keystone_listener container.
Oct 13 14:01:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v795: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:37 standalone.localdomain ceph-mon[29756]: 6.1 deep-scrub starts
Oct 13 14:01:37 standalone.localdomain ceph-mon[29756]: 6.1 deep-scrub ok
Oct 13 14:01:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:01:37 standalone.localdomain systemd[1]: tmp-crun.MaEyrN.mount: Deactivated successfully.
Oct 13 14:01:37 standalone.localdomain podman[95060]: 2025-10-13 14:01:37.819039314 +0000 UTC m=+0.083078920 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, release=1, version=17.1.9, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, 
name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:01:37 standalone.localdomain podman[95060]: 2025-10-13 14:01:37.829652785 +0000 UTC m=+0.093692431 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid)
Oct 13 14:01:37 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:01:37 standalone.localdomain python3[95059]: ansible-systemd Invoked with state=restarted name=tripleo_barbican_worker.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:01:38 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:38 standalone.localdomain systemd-rc-local-generator[95119]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:38 standalone.localdomain systemd-sysv-generator[95122]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:38 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 13 14:01:38 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 13 14:01:38 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:38 standalone.localdomain podman[94972]: 2025-10-13 14:01:38.298096872 +0000 UTC m=+1.563989295 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, tcib_managed=true, release=1, build-date=2025-07-21T13:58:15, com.redhat.component=openstack-horizon-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, vendor=Red Hat, Inc., config_id=tripleo_step3, architecture=x86_64, io.openshift.expose-services=, 
vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, container_name=horizon, name=rhosp17/openstack-horizon, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 14:01:38 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:01:38 standalone.localdomain systemd[1]: Starting barbican_worker container...
Oct 13 14:01:38 standalone.localdomain systemd[1]: Started barbican_worker container.
Oct 13 14:01:38 standalone.localdomain ceph-mon[29756]: pgmap v795: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:38 standalone.localdomain ceph-mon[29756]: 6.2 scrub starts
Oct 13 14:01:38 standalone.localdomain ceph-mon[29756]: 6.2 scrub ok
Oct 13 14:01:38 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 39 completed events
Oct 13 14:01:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:01:38 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:01:39 standalone.localdomain python3[95146]: ansible-systemd Invoked with state=restarted name=tripleo_horizon.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:01:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:01:39 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:39 standalone.localdomain podman[95156]: 2025-10-13 14:01:39.330647386 +0000 UTC m=+0.065439800 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-keystone, build-date=2025-07-21T13:27:18, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-keystone-container, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.expose-services=, release=1, vcs-type=git, config_id=tripleo_step3, batch=17.1_20250721.1, container_name=keystone, maintainer=OpenStack TripleO Team, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:01:39 standalone.localdomain podman[95156]: 2025-10-13 14:01:39.367967039 +0000 UTC m=+0.102759493 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:18, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.expose-services=, 
vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, name=rhosp17/openstack-keystone, description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, config_id=tripleo_step3)
Oct 13 14:01:39 standalone.localdomain systemd-rc-local-generator[95206]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:39 standalone.localdomain systemd-sysv-generator[95211]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:01:39 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v796: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:39 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:01:39 standalone.localdomain systemd[1]: Starting horizon container...
Oct 13 14:01:39 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:01:39 standalone.localdomain ceph-mon[29756]: pgmap v796: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:39 standalone.localdomain systemd[1]: Started horizon container.
Oct 13 14:01:40 standalone.localdomain python3[95243]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:01:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:01:40 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:40 standalone.localdomain podman[95245]: 2025-10-13 14:01:40.642825604 +0000 UTC m=+0.073861483 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, config_id=tripleo_step1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, summary=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, build-date=2025-07-21T12:58:43, io.openshift.expose-services=, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-memcached, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, batch=17.1_20250721.1, container_name=memcached, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 memcached, release=1, vcs-type=git)
Oct 13 14:01:40 standalone.localdomain podman[95245]: 2025-10-13 14:01:40.695194446 +0000 UTC m=+0.126230385 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, build-date=2025-07-21T12:58:43, version=17.1.9, name=rhosp17/openstack-memcached, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-memcached-container, config_id=tripleo_step1, container_name=memcached, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-type=git, distribution-scope=public)
Oct 13 14:01:40 standalone.localdomain systemd-sysv-generator[95296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:40 standalone.localdomain systemd-rc-local-generator[95293]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:40 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:40 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:01:40 standalone.localdomain systemd[1]: Starting iscsid container...
Oct 13 14:01:41 standalone.localdomain systemd[1]: Started iscsid container.
Oct 13 14:01:41 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 13 14:01:41 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 13 14:01:41 standalone.localdomain ceph-mon[29756]: 6.3 scrub starts
Oct 13 14:01:41 standalone.localdomain ceph-mon[29756]: 6.3 scrub ok
Oct 13 14:01:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v797: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:41 standalone.localdomain python3[95381]: ansible-systemd Invoked with state=restarted name=tripleo_keystone.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:01:41 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:42 standalone.localdomain systemd-rc-local-generator[95409]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:42 standalone.localdomain systemd-sysv-generator[95412]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:42 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:42 standalone.localdomain systemd[1]: Starting keystone container...
Oct 13 14:01:42 standalone.localdomain systemd[1]: Started keystone container.
Oct 13 14:01:42 standalone.localdomain ceph-mon[29756]: pgmap v797: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:42 standalone.localdomain runuser[95447]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:01:43 standalone.localdomain python3[95493]: ansible-systemd Invoked with state=restarted name=tripleo_keystone_cron.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:01:43 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:43 standalone.localdomain systemd-rc-local-generator[95524]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:43 standalone.localdomain systemd-sysv-generator[95528]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:43 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:43 standalone.localdomain runuser[95447]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:01:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v798: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:43 standalone.localdomain systemd[1]: Starting keystone_cron container...
Oct 13 14:01:43 standalone.localdomain runuser[95598]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:01:43 standalone.localdomain systemd[1]: Started keystone_cron container.
Oct 13 14:01:44 standalone.localdomain runuser[95598]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:01:44 standalone.localdomain runuser[95715]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:01:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:01:44 standalone.localdomain python3[95711]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:01:44 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:44 standalone.localdomain ceph-mon[29756]: pgmap v798: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:44 standalone.localdomain systemd-rc-local-generator[95819]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:44 standalone.localdomain systemd-sysv-generator[95824]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:44 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:44 standalone.localdomain systemd[1]: Starting nova_virtlogd_wrapper container...
Oct 13 14:01:44 standalone.localdomain systemd[1]: Started nova_virtlogd_wrapper container.
Oct 13 14:01:45 standalone.localdomain runuser[95715]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:01:45 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 13 14:01:45 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 13 14:01:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v799: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:45 standalone.localdomain ceph-mon[29756]: 6.4 scrub starts
Oct 13 14:01:45 standalone.localdomain ceph-mon[29756]: 6.4 scrub ok
Oct 13 14:01:45 standalone.localdomain python3[95916]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:01:46 standalone.localdomain ceph-mon[29756]: pgmap v799: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:46 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:46 standalone.localdomain systemd-sysv-generator[95996]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:46 standalone.localdomain systemd-rc-local-generator[95992]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:46 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:47 standalone.localdomain systemd[1]: Starting nova_virtnodedevd container...
Oct 13 14:01:47 standalone.localdomain tripleo-start-podman-container[96005]: Creating additional drop-in dependency for "nova_virtnodedevd" (4f84c950665978030a1f16ed1143582e5f9c249fe033ab2a2782c01be99cc5fc)
Oct 13 14:01:47 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:47 standalone.localdomain systemd-rc-local-generator[96064]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:47 standalone.localdomain systemd-sysv-generator[96069]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v800: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:47 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:47 standalone.localdomain systemd[1]: Started nova_virtnodedevd container.
Oct 13 14:01:48 standalone.localdomain python3[96083]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:01:48 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:48 standalone.localdomain ceph-mon[29756]: pgmap v800: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:48 standalone.localdomain systemd-rc-local-generator[96117]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:48 standalone.localdomain systemd-sysv-generator[96122]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:48 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:48 standalone.localdomain systemd[1]: Starting nova_virtproxyd container...
Oct 13 14:01:49 standalone.localdomain tripleo-start-podman-container[96131]: Creating additional drop-in dependency for "nova_virtproxyd" (082f90b100c1e61ff254c4e8202645f7e953ca071f54d740dec70e32afe4da6f)
Oct 13 14:01:49 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:49 standalone.localdomain systemd-rc-local-generator[96188]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:49 standalone.localdomain systemd-sysv-generator[96192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:49 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:49 standalone.localdomain systemd[1]: Started nova_virtproxyd container.
Oct 13 14:01:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:01:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v801: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:50 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 13 14:01:50 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 13 14:01:50 standalone.localdomain python3[96209]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:01:50 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:50 standalone.localdomain ceph-mon[29756]: pgmap v801: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:50 standalone.localdomain systemd-sysv-generator[96249]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:50 standalone.localdomain ceph-mon[29756]: 6.5 scrub starts
Oct 13 14:01:50 standalone.localdomain ceph-mon[29756]: 6.5 scrub ok
Oct 13 14:01:50 standalone.localdomain systemd-rc-local-generator[96245]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:50 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:50 standalone.localdomain systemd[1]: Starting nova_virtqemud container...
Oct 13 14:01:50 standalone.localdomain tripleo-start-podman-container[96257]: Creating additional drop-in dependency for "nova_virtqemud" (1500fd74fde13a67761de0823a6a7c9804b695da50d1f8aec6313a83a310f67e)
Oct 13 14:01:50 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:51 standalone.localdomain systemd-rc-local-generator[96314]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:51 standalone.localdomain systemd-sysv-generator[96317]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:51 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:51 standalone.localdomain systemd[1]: Started nova_virtqemud container.
Oct 13 14:01:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v802: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:01:51 standalone.localdomain podman[96387]: 2025-10-13 14:01:51.814856017 +0000 UTC m=+0.084937687 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.buildah.version=1.33.12, release=1, vcs-type=git, container_name=keystone_cron, 
vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-keystone-container, summary=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, version=17.1.9)
Oct 13 14:01:51 standalone.localdomain podman[96387]: 2025-10-13 14:01:51.825984564 +0000 UTC m=+0.096066234 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, container_name=keystone_cron, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, name=rhosp17/openstack-keystone, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, build-date=2025-07-21T13:27:18, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', 
'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-keystone-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, release=1)
Oct 13 14:01:51 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:01:52 standalone.localdomain python3[96407]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:01:52 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:52 standalone.localdomain systemd-rc-local-generator[96431]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:52 standalone.localdomain systemd-sysv-generator[96436]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:52 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:52 standalone.localdomain ceph-mon[29756]: pgmap v802: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:52 standalone.localdomain systemd[1]: Starting nova_virtsecretd container...
Oct 13 14:01:52 standalone.localdomain tripleo-start-podman-container[96457]: Creating additional drop-in dependency for "nova_virtsecretd" (75990014f13da7493fd18a3c440ac10b59a0876044c9a47df8372312afd5f1c6)
Oct 13 14:01:52 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:52 standalone.localdomain systemd-rc-local-generator[96522]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:52 standalone.localdomain systemd-sysv-generator[96525]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:52 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:53 standalone.localdomain systemd[1]: Started nova_virtsecretd container.
Oct 13 14:01:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:01:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:01:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:01:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:01:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:01:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:01:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v803: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:53 standalone.localdomain python3[96612]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:01:54 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:54 standalone.localdomain systemd-rc-local-generator[96683]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:54 standalone.localdomain systemd-sysv-generator[96686]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:54 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:01:54 standalone.localdomain systemd[1]: Starting nova_virtstoraged container...
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.463760) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364114463843, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1537, "num_deletes": 250, "total_data_size": 1230661, "memory_usage": 1265672, "flush_reason": "Manual Compaction"}
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364114471463, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 754706, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17940, "largest_seqno": 19476, "table_properties": {"data_size": 750005, "index_size": 1918, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13872, "raw_average_key_size": 20, "raw_value_size": 738892, "raw_average_value_size": 1102, "num_data_blocks": 89, "num_entries": 670, "num_filter_entries": 670, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760363995, "oldest_key_time": 1760363995, "file_creation_time": 1760364114, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 7760 microseconds, and 3497 cpu microseconds.
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.471529) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 754706 bytes OK
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.471549) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.474810) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.474830) EVENT_LOG_v1 {"time_micros": 1760364114474823, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.474858) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1223684, prev total WAL file size 1224173, number of live WAL files 2.
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.475569) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(737KB)], [41(4774KB)]
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364114475612, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 5643951, "oldest_snapshot_seqno": -1}
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 3383 keys, 4359351 bytes, temperature: kUnknown
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364114494200, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 4359351, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4337327, "index_size": 12456, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 79443, "raw_average_key_size": 23, "raw_value_size": 4276757, "raw_average_value_size": 1264, "num_data_blocks": 544, "num_entries": 3383, "num_filter_entries": 3383, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760364114, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.494446) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 4359351 bytes
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.495883) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 302.0 rd, 233.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 4.7 +0.0 blob) out(4.2 +0.0 blob), read-write-amplify(13.3) write-amplify(5.8) OK, records in: 3845, records dropped: 462 output_compression: NoCompression
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.495903) EVENT_LOG_v1 {"time_micros": 1760364114495894, "job": 20, "event": "compaction_finished", "compaction_time_micros": 18686, "compaction_time_cpu_micros": 9778, "output_level": 6, "num_output_files": 1, "total_output_size": 4359351, "num_input_records": 3845, "num_output_records": 3383, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364114496098, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364114496792, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.475439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.496849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.496853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.496855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.496857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:01:54.496859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:01:54 standalone.localdomain ceph-mon[29756]: pgmap v803: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:54 standalone.localdomain podman[96694]: 2025-10-13 14:01:54.51488334 +0000 UTC m=+0.129572039 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=starting, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, version=17.1.9, tcib_managed=true, container_name=barbican_keystone_listener, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-barbican-keystone-listener-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, 
name=rhosp17/openstack-barbican-keystone-listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, build-date=2025-07-21T16:18:19)
Oct 13 14:01:54 standalone.localdomain tripleo-start-podman-container[96695]: Creating additional drop-in dependency for "nova_virtstoraged" (dceaaeec95442e25c469c925f0684b91675a6d5970f6b543e393f9f795d2496d)
Oct 13 14:01:54 standalone.localdomain podman[96694]: 2025-10-13 14:01:54.601058445 +0000 UTC m=+0.215747174 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, config_id=tripleo_step3, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, build-date=2025-07-21T16:18:19, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-barbican-keystone-listener-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, container_name=barbican_keystone_listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-barbican-keystone-listener, tcib_managed=true, io.buildah.version=1.33.12, architecture=x86_64, version=17.1.9)
Oct 13 14:01:54 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:01:54 standalone.localdomain systemd-rc-local-generator[96824]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:01:54 standalone.localdomain systemd-sysv-generator[96829]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:01:54 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:01:55 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:01:55 standalone.localdomain systemd[1]: Started nova_virtstoraged container.
Oct 13 14:01:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:01:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:01:55 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct 13 14:01:55 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct 13 14:01:55 standalone.localdomain podman[96836]: 2025-10-13 14:01:55.256825279 +0000 UTC m=+0.148466828 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, com.redhat.component=openstack-barbican-api-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-api, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, container_name=barbican_api, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step3, distribution-scope=public)
Oct 13 14:01:55 standalone.localdomain podman[96837]: 2025-10-13 14:01:55.310302796 +0000 UTC m=+0.200559021 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=starting, description=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.component=openstack-barbican-worker-container, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, architecture=x86_64, container_name=barbican_worker, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-barbican-worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, tcib_managed=true, build-date=2025-07-21T15:36:22)
Oct 13 14:01:55 standalone.localdomain podman[96836]: 2025-10-13 14:01:55.374999072 +0000 UTC m=+0.266640661 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, summary=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, container_name=barbican_api, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, batch=17.1_20250721.1, build-date=2025-07-21T15:22:44, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, tcib_managed=true, 
distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vcs-type=git, version=17.1.9, com.redhat.component=openstack-barbican-api-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:01:55 standalone.localdomain podman[96837]: 2025-10-13 14:01:55.40318479 +0000 UTC m=+0.293441045 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, description=Red Hat OpenStack Platform 17.1 barbican-worker, name=rhosp17/openstack-barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, com.redhat.component=openstack-barbican-worker-container, build-date=2025-07-21T15:36:22, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step3, io.openshift.expose-services=, architecture=x86_64, 
container_name=barbican_worker, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, summary=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, managed_by=tripleo_ansible)
Oct 13 14:01:55 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:01:55 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:01:55 standalone.localdomain python3[96886]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v804: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:55 standalone.localdomain ceph-mon[29756]: 6.6 scrub starts
Oct 13 14:01:55 standalone.localdomain ceph-mon[29756]: 6.6 scrub ok
Oct 13 14:01:55 standalone.localdomain runuser[96978]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:01:56 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 13 14:01:56 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 13 14:01:56 standalone.localdomain runuser[96978]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:01:56 standalone.localdomain python3[97060]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=standalone step=3 update_config_hash_only=False
Oct 13 14:01:56 standalone.localdomain runuser[97083]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:01:56 standalone.localdomain ceph-mon[29756]: pgmap v804: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:56 standalone.localdomain ceph-mon[29756]: 6.7 scrub starts
Oct 13 14:01:56 standalone.localdomain ceph-mon[29756]: 6.7 scrub ok
Oct 13 14:01:56 standalone.localdomain python3[97135]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:01:57 standalone.localdomain runuser[97083]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:01:57 standalone.localdomain runuser[97149]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:01:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v805: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:58 standalone.localdomain runuser[97149]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:01:58 standalone.localdomain python3[97232]: ansible-openstack.cloud.catalog_service Invoked with cloud=standalone name=cinderv2 service_type=volumev2 state=absent type=volumev2 wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None description=None
Oct 13 14:01:58 standalone.localdomain ceph-mon[29756]: pgmap v805: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37042 [13/Oct/2025:14:01:58.858] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 11/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:01:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37042 [13/Oct/2025:14:01:58.864] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/530/530 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:01:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37042 [13/Oct/2025:14:01:59.397] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/34/34 200 497 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:01:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:01:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v806: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:01:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:01:59 standalone.localdomain python3[97253]: ansible-openstack.cloud.catalog_service Invoked with cloud=standalone name=cinderv3 service_type=volume state=absent type=volume wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None description=None
Oct 13 14:01:59 standalone.localdomain podman[97254]: 2025-10-13 14:01:59.830759544 +0000 UTC m=+0.091365248 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, container_name=ovn_cluster_northd, distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=ovn_cluster_northd, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, architecture=x86_64, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, name=rhosp17/openstack-ovn-northd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, vendor=Red Hat, Inc., build-date=2025-07-21T13:30:04, summary=Red Hat OpenStack Platform 17.1 ovn-northd, version=17.1.9, com.redhat.component=openstack-ovn-northd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:01:59 standalone.localdomain podman[97254]: 2025-10-13 14:01:59.846983859 +0000 UTC m=+0.107589583 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, architecture=x86_64, build-date=2025-07-21T13:30:04, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, release=1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, version=17.1.9, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, name=rhosp17/openstack-ovn-northd, config_id=ovn_cluster_northd, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, container_name=ovn_cluster_northd, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-northd-container)
Oct 13 14:01:59 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:01:59 standalone.localdomain systemd[1]: tmp-crun.4CMR8p.mount: Deactivated successfully.
Oct 13 14:01:59 standalone.localdomain podman[97275]: 2025-10-13 14:01:59.917645601 +0000 UTC m=+0.084306748 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-mariadb-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T12:58:45, name=rhosp17/openstack-mariadb, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git)
Oct 13 14:02:00 standalone.localdomain podman[97275]: 2025-10-13 14:01:59.999896655 +0000 UTC m=+0.166557772 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-mariadb-container, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, tcib_managed=true, vcs-type=git, distribution-scope=public, build-date=2025-07-21T12:58:45, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.buildah.version=1.33.12)
Oct 13 14:02:00 standalone.localdomain podman[97294]: 2025-10-13 14:02:00.045858336 +0000 UTC m=+0.141699066 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, name=rhosp17/openstack-haproxy, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-haproxy-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-type=git, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, description=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:02:00 standalone.localdomain podman[97294]: 2025-10-13 14:02:00.052818603 +0000 UTC m=+0.148659363 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-haproxy-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, build-date=2025-07-21T13:08:11, description=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, distribution-scope=public)
Oct 13 14:02:00 standalone.localdomain podman[97324]: 2025-10-13 14:02:00.103133491 +0000 UTC m=+0.084455523 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, release=1, com.redhat.component=openstack-rabbitmq-container, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T13:08:05, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, batch=17.1_20250721.1, version=17.1.9, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:02:00 standalone.localdomain podman[97324]: 2025-10-13 14:02:00.133801286 +0000 UTC m=+0.115123328 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, release=1, com.redhat.component=openstack-rabbitmq-container, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, name=rhosp17/openstack-rabbitmq, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, tcib_managed=true, version=17.1.9, vendor=Red Hat, Inc.)
Oct 13 14:02:00 standalone.localdomain haproxy[70940]: 172.21.0.2:55114 [13/Oct/2025:14:02:00.301] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:00 standalone.localdomain ceph-mon[29756]: pgmap v806: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:00 standalone.localdomain haproxy[70940]: 172.21.0.2:55114 [13/Oct/2025:14:02:00.305] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/265/265 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:00 standalone.localdomain haproxy[70940]: 172.21.0.2:55114 [13/Oct/2025:14:02:00.573] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/13/13 200 497 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:00 standalone.localdomain systemd[1]: tmp-crun.4mLRuv.mount: Deactivated successfully.
Oct 13 14:02:01 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct 13 14:02:01 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct 13 14:02:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v807: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:01 standalone.localdomain ceph-mon[29756]: 6.8 scrub starts
Oct 13 14:02:01 standalone.localdomain ceph-mon[29756]: 6.8 scrub ok
Oct 13 14:02:01 standalone.localdomain ansible-async_wrapper.py[97393]: Invoked with 442651142569 60 /tmp/ansible-root/ansible-tmp-1760364121.1042054-97382-84281240186135/AnsiballZ_project.py _
Oct 13 14:02:01 standalone.localdomain ansible-async_wrapper.py[97449]: Starting module and watcher
Oct 13 14:02:01 standalone.localdomain ansible-async_wrapper.py[97449]: Start watching 97450 (60)
Oct 13 14:02:01 standalone.localdomain ansible-async_wrapper.py[97450]: Start module (97450)
Oct 13 14:02:01 standalone.localdomain ansible-async_wrapper.py[97393]: Return async_wrapper task started.
Oct 13 14:02:01 standalone.localdomain python3[97451]: ansible-openstack.cloud.project Invoked with cloud=standalone name=admin domain_id=default state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None description=None properties=None
Oct 13 14:02:01 standalone.localdomain ansible-async_wrapper.py[97471]: Invoked with 302725679469 60 /tmp/ansible-root/ansible-tmp-1760364121.6317725-97382-121541579045199/AnsiballZ_project.py _
Oct 13 14:02:01 standalone.localdomain ansible-async_wrapper.py[97474]: Starting module and watcher
Oct 13 14:02:01 standalone.localdomain ansible-async_wrapper.py[97474]: Start watching 97475 (60)
Oct 13 14:02:01 standalone.localdomain ansible-async_wrapper.py[97475]: Start module (97475)
Oct 13 14:02:01 standalone.localdomain ansible-async_wrapper.py[97471]: Return async_wrapper task started.
Oct 13 14:02:02 standalone.localdomain python3[97476]: ansible-openstack.cloud.project Invoked with cloud=standalone name=service domain_id=default state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None description=None properties=None
Oct 13 14:02:02 standalone.localdomain python3[97480]: ansible-ansible.legacy.async_status Invoked with jid=442651142569.97393 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:02 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct 13 14:02:02 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct 13 14:02:02 standalone.localdomain haproxy[70940]: 172.21.0.2:55126 [13/Oct/2025:14:02:02.255] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:02 standalone.localdomain ceph-mon[29756]: pgmap v807: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:02 standalone.localdomain ceph-mon[29756]: 6.9 scrub starts
Oct 13 14:02:02 standalone.localdomain ceph-mon[29756]: 6.9 scrub ok
Oct 13 14:02:02 standalone.localdomain haproxy[70940]: 172.21.0.2:55126 [13/Oct/2025:14:02:02.260] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/305/305 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:02 standalone.localdomain haproxy[70940]: 172.21.0.2:55126 [13/Oct/2025:14:02:02.569] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/11/11 200 396 - - ---- 12/1/0/0/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:02:02 standalone.localdomain haproxy[70940]: 172.21.0.2:55126 [13/Oct/2025:14:02:02.586] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/18/18 200 651 - - ---- 13/2/1/1/0 0/0 "GET /v3/projects?domain_id=default HTTP/1.1"
Oct 13 14:02:02 standalone.localdomain haproxy[70940]: 172.21.0.2:55132 [13/Oct/2025:14:02:02.598] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/8/8 300 515 - - ---- 13/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:02 standalone.localdomain ansible-async_wrapper.py[97450]: Module complete (97450)
Oct 13 14:02:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:02:02 standalone.localdomain podman[97493]: 2025-10-13 14:02:02.818320277 +0000 UTC m=+0.080902382 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, managed_by=tripleo_ansible, container_name=clustercheck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:45, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, io.openshift.expose-services=, 
description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, version=17.1.9, name=rhosp17/openstack-mariadb, vcs-type=git)
Oct 13 14:02:02 standalone.localdomain podman[97493]: 2025-10-13 14:02:02.861814742 +0000 UTC m=+0.124396897 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step2, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-mariadb-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:45, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, container_name=clustercheck, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=)
Oct 13 14:02:02 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:02:02 standalone.localdomain haproxy[70940]: 172.21.0.2:55132 [13/Oct/2025:14:02:02.607] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/274/274 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:02 standalone.localdomain haproxy[70940]: 172.21.0.2:55132 [13/Oct/2025:14:02:02.883] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/10/10 200 396 - - ---- 12/1/0/0/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:02:02 standalone.localdomain haproxy[70940]: 172.21.0.2:55132 [13/Oct/2025:14:02:02.895] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/16/16 200 651 - - ---- 12/1/0/0/0 0/0 "GET /v3/projects?domain_id=default HTTP/1.1"
Oct 13 14:02:03 standalone.localdomain haproxy[70940]: 172.21.0.2:55132 [13/Oct/2025:14:02:02.918] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/92/92 201 505 - - ---- 12/1/0/0/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:02:03 standalone.localdomain haproxy[70940]: 172.21.0.2:55132 [13/Oct/2025:14:02:03.015] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/54/54 200 919 - - ---- 12/1/0/0/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:02:03 standalone.localdomain haproxy[70940]: 172.21.0.2:55132 [13/Oct/2025:14:02:03.075] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/28/28 200 513 - - ---- 12/1/0/0/0 0/0 "PATCH /v3/projects/206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:02:03 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct 13 14:02:03 standalone.localdomain ansible-async_wrapper.py[97475]: Module complete (97475)
Oct 13 14:02:03 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct 13 14:02:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v808: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:03 standalone.localdomain ceph-mon[29756]: 6.a scrub starts
Oct 13 14:02:03 standalone.localdomain ceph-mon[29756]: 6.a scrub ok
Oct 13 14:02:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:02:04 standalone.localdomain ceph-mon[29756]: pgmap v808: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v809: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:06 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct 13 14:02:06 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct 13 14:02:06 standalone.localdomain ansible-async_wrapper.py[97449]: Done in kid B.
Oct 13 14:02:06 standalone.localdomain ceph-mon[29756]: pgmap v809: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:06 standalone.localdomain ceph-mon[29756]: 6.b scrub starts
Oct 13 14:02:06 standalone.localdomain ceph-mon[29756]: 6.b scrub ok
Oct 13 14:02:06 standalone.localdomain ansible-async_wrapper.py[97474]: Done in kid B.
Oct 13 14:02:07 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct 13 14:02:07 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct 13 14:02:07 standalone.localdomain python3[97772]: ansible-ansible.legacy.async_status Invoked with jid=442651142569.97393 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v810: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:07 standalone.localdomain python3[97775]: ansible-ansible.legacy.async_status Invoked with jid=302725679469.97471 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:07 standalone.localdomain ceph-mon[29756]: 6.c scrub starts
Oct 13 14:02:07 standalone.localdomain ceph-mon[29756]: 6.c scrub ok
Oct 13 14:02:08 standalone.localdomain python3[97786]: ansible-openstack.cloud.identity_role Invoked with cloud=standalone name=admin wait=True timeout=180 interface=public state=present auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:08 standalone.localdomain ceph-mon[29756]: pgmap v810: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:08 standalone.localdomain runuser[97791]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:02:08 standalone.localdomain haproxy[70940]: 172.21.0.2:55138 [13/Oct/2025:14:02:08.614] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2/2 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:02:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:02:08 standalone.localdomain podman[97836]: 2025-10-13 14:02:08.820757924 +0000 UTC m=+0.087002622 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, config_id=tripleo_step3, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1, name=rhosp17/openstack-iscsid, version=17.1.9, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:02:08 standalone.localdomain podman[97836]: 2025-10-13 14:02:08.859157961 +0000 UTC m=+0.125402719 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, release=1, architecture=x86_64)
Oct 13 14:02:08 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:02:08 standalone.localdomain haproxy[70940]: 172.21.0.2:55138 [13/Oct/2025:14:02:08.619] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/287/287 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:08 standalone.localdomain podman[97837]: 2025-10-13 14:02:08.872140736 +0000 UTC m=+0.136042251 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, container_name=horizon, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, release=1, description=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, config_id=tripleo_step3, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:15, summary=Red Hat OpenStack Platform 17.1 horizon, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, vcs-type=git, com.redhat.component=openstack-horizon-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:02:08 standalone.localdomain haproxy[70940]: 172.21.0.2:55138 [13/Oct/2025:14:02:08.910] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/15/15 200 962 - - ---- 12/1/0/0/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:02:09 standalone.localdomain runuser[97791]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:02:09 standalone.localdomain runuser[97909]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:02:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:02:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v811: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:09 standalone.localdomain python3[97908]: ansible-openstack.cloud.identity_user_info Invoked with name=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER auth_type=v3password auth=NOT_LOGGING_PARAMETER wait=True timeout=180 interface=public cloud=None region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None domain=None filters=None
Oct 13 14:02:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:02:09 standalone.localdomain systemd[1]: tmp-crun.7rdqSf.mount: Deactivated successfully.
Oct 13 14:02:09 standalone.localdomain podman[97954]: 2025-10-13 14:02:09.807666987 +0000 UTC m=+0.077459845 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, config_id=tripleo_step3, container_name=keystone, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-keystone, tcib_managed=true, batch=17.1_20250721.1)
Oct 13 14:02:09 standalone.localdomain podman[97954]: 2025-10-13 14:02:09.843969358 +0000 UTC m=+0.113762266 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, summary=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, distribution-scope=public, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-keystone, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:02:09 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:02:09 standalone.localdomain runuser[97909]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:02:10 standalone.localdomain runuser[97998]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:02:10 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct 13 14:02:10 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct 13 14:02:10 standalone.localdomain podman[97837]: 2025-10-13 14:02:10.288312123 +0000 UTC m=+1.552213638 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, batch=17.1_20250721.1, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, summary=Red Hat OpenStack Platform 17.1 horizon, build-date=2025-07-21T13:58:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-horizon-container, distribution-scope=public, vcs-type=git, container_name=horizon, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 14:02:10 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:02:10 standalone.localdomain haproxy[70940]: 172.21.0.2:38758 [13/Oct/2025:14:02:10.069] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/307/307 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:10 standalone.localdomain haproxy[70940]: 172.21.0.2:38758 [13/Oct/2025:14:02:10.378] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2/2 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:10 standalone.localdomain haproxy[70940]: 172.21.0.2:38758 [13/Oct/2025:14:02:10.383] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/26/26 200 544 - - ---- 12/1/0/0/0 0/0 "GET /v3/users?name=admin HTTP/1.1"
Oct 13 14:02:10 standalone.localdomain ceph-mon[29756]: pgmap v811: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:10 standalone.localdomain ceph-mon[29756]: 6.d scrub starts
Oct 13 14:02:10 standalone.localdomain ceph-mon[29756]: 6.d scrub ok
Oct 13 14:02:10 standalone.localdomain runuser[97998]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:02:11 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 13 14:02:11 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 13 14:02:11 standalone.localdomain haproxy[70940]: 172.21.0.2:38764 [13/Oct/2025:14:02:11.461] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v812: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:11 standalone.localdomain ceph-mon[29756]: 6.e scrub starts
Oct 13 14:02:11 standalone.localdomain ceph-mon[29756]: 6.e scrub ok
Oct 13 14:02:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:02:11 standalone.localdomain haproxy[70940]: 172.21.0.2:38764 [13/Oct/2025:14:02:11.466] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/288/288 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:11 standalone.localdomain haproxy[70940]: 172.21.0.2:38764 [13/Oct/2025:14:02:11.766] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/15/15 200 396 - - ---- 12/1/0/0/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:02:11 standalone.localdomain podman[98126]: 2025-10-13 14:02:11.806622484 +0000 UTC m=+0.075152763 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, tcib_managed=true, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:43, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=memcached, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.component=openstack-memcached-container, name=rhosp17/openstack-memcached)
Oct 13 14:02:11 standalone.localdomain podman[98126]: 2025-10-13 14:02:11.827798934 +0000 UTC m=+0.096329193 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, batch=17.1_20250721.1, container_name=memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:43, summary=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=)
Oct 13 14:02:11 standalone.localdomain haproxy[70940]: 172.21.0.2:38764 [13/Oct/2025:14:02:11.788] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/41/41 200 562 - - ---- 12/1/0/0/0 0/0 "GET /v3/users?domain_id=default&name=admin HTTP/1.1"
Oct 13 14:02:11 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:02:11 standalone.localdomain haproxy[70940]: 172.21.0.2:38764 [13/Oct/2025:14:02:11.833] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/27/27 200 551 - - ---- 12/1/0/0/0 0/0 "GET /v3/users?domain_id=default HTTP/1.1"
Oct 13 14:02:12 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.f deep-scrub starts
Oct 13 14:02:12 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.f deep-scrub ok
Oct 13 14:02:12 standalone.localdomain haproxy[70940]: 172.21.0.2:38764 [13/Oct/2025:14:02:11.865] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/284/284 200 511 - - ---- 12/1/0/0/0 0/0 "PATCH /v3/users/3d9eeef137fc40c78332936114fd7ee4 HTTP/1.1"
Oct 13 14:02:12 standalone.localdomain ceph-mon[29756]: pgmap v812: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:12 standalone.localdomain ceph-mon[29756]: 6.f deep-scrub starts
Oct 13 14:02:12 standalone.localdomain ceph-mon[29756]: 6.f deep-scrub ok
Oct 13 14:02:12 standalone.localdomain python3[98161]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=admin project=admin role=admin domain=default wait=True timeout=180 interface=public state=present auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:02:13 standalone.localdomain haproxy[70940]: 172.21.0.2:38778 [13/Oct/2025:14:02:13.201] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v813: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:13 standalone.localdomain haproxy[70940]: 172.21.0.2:38778 [13/Oct/2025:14:02:13.206] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/336/336 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:13 standalone.localdomain haproxy[70940]: 172.21.0.2:38778 [13/Oct/2025:14:02:13.547] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/18/18 404 291 - - ---- 12/1/0/0/0 0/0 "GET /v3/roles/admin HTTP/1.1"
Oct 13 14:02:13 standalone.localdomain haproxy[70940]: 172.21.0.2:38778 [13/Oct/2025:14:02:13.567] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/17/17 200 531 - - ---- 12/1/0/0/0 0/0 "GET /v3/roles?name=admin HTTP/1.1"
Oct 13 14:02:13 standalone.localdomain haproxy[70940]: 172.21.0.2:38778 [13/Oct/2025:14:02:13.586] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/13/13 200 396 - - ---- 12/1/0/0/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:02:13 standalone.localdomain haproxy[70940]: 172.21.0.2:38778 [13/Oct/2025:14:02:13.602] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/33/33 404 291 - - ---- 12/1/0/0/0 0/0 "GET /v3/users/admin?domain_id=default HTTP/1.1"
Oct 13 14:02:13 standalone.localdomain haproxy[70940]: 172.21.0.2:38778 [13/Oct/2025:14:02:13.638] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/21/21 200 590 - - ---- 12/1/0/0/0 0/0 "GET /v3/users?domain_id=default&name=admin HTTP/1.1"
Oct 13 14:02:13 standalone.localdomain haproxy[70940]: 172.21.0.2:38778 [13/Oct/2025:14:02:13.662] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/22/22 404 294 - - ---- 12/1/0/0/0 0/0 "GET /v3/projects/admin?domain_id=default HTTP/1.1"
Oct 13 14:02:13 standalone.localdomain haproxy[70940]: 172.21.0.2:38778 [13/Oct/2025:14:02:13.689] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/16/16 200 662 - - ---- 12/1/0/0/0 0/0 "GET /v3/projects?domain_id=default&name=admin HTTP/1.1"
Oct 13 14:02:13 standalone.localdomain haproxy[70940]: 172.21.0.2:38778 [13/Oct/2025:14:02:13.712] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/14/14 200 800 - - ---- 12/1/0/0/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=3d9eeef137fc40c78332936114fd7ee4&scope.project.id=e44641a80bcb466cb3dd688e48b72d8e HTTP/1.1"
Oct 13 14:02:14 standalone.localdomain python3[98265]: ansible-openstack.cloud.catalog_service Invoked with cloud=standalone name=keystone service_type=identity type=identity wait=True timeout=180 interface=public enabled=True state=present auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None description=None
Oct 13 14:02:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:02:14 standalone.localdomain ceph-mon[29756]: pgmap v813: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:14 standalone.localdomain haproxy[70940]: 172.21.0.2:38792 [13/Oct/2025:14:02:14.781] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:15 standalone.localdomain haproxy[70940]: 172.21.0.2:38792 [13/Oct/2025:14:02:14.786] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/307/307 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:15 standalone.localdomain haproxy[70940]: 172.21.0.2:38792 [13/Oct/2025:14:02:15.098] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/13/13 200 497 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v814: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:15 standalone.localdomain python3[98318]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=keystone url=http://172.21.0.2:5000 endpoint_interface=public region=regionOne wait=True timeout=180 interface=public enabled=True state=present auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:16 standalone.localdomain haproxy[70940]: 172.21.0.2:38804 [13/Oct/2025:14:02:16.143] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:16 standalone.localdomain haproxy[70940]: 172.21.0.2:38804 [13/Oct/2025:14:02:16.149] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/303/303 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:16 standalone.localdomain haproxy[70940]: 172.21.0.2:38804 [13/Oct/2025:14:02:16.457] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/13/13 200 497 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:16 standalone.localdomain haproxy[70940]: 172.21.0.2:38804 [13/Oct/2025:14:02:16.475] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/19/19 200 1236 - - ---- 12/1/0/0/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:16 standalone.localdomain ceph-mon[29756]: pgmap v814: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:16 standalone.localdomain python3[98412]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=keystone url=http://172.17.0.2:5000 endpoint_interface=internal region=regionOne wait=True timeout=180 interface=public enabled=True state=present auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:17 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Oct 13 14:02:17 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Oct 13 14:02:17 standalone.localdomain haproxy[70940]: 172.21.0.2:38812 [13/Oct/2025:14:02:17.345] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v815: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:17 standalone.localdomain ceph-mon[29756]: 6.10 scrub starts
Oct 13 14:02:17 standalone.localdomain ceph-mon[29756]: 6.10 scrub ok
Oct 13 14:02:17 standalone.localdomain haproxy[70940]: 172.21.0.2:38812 [13/Oct/2025:14:02:17.350] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/302/302 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:17 standalone.localdomain haproxy[70940]: 172.21.0.2:38812 [13/Oct/2025:14:02:17.658] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/17/17 200 497 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:17 standalone.localdomain haproxy[70940]: 172.21.0.2:38812 [13/Oct/2025:14:02:17.679] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/16/16 200 1236 - - ---- 12/1/0/0/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:18 standalone.localdomain python3[98425]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=keystone url=http://192.168.122.99:35357 endpoint_interface=admin region=regionOne wait=True timeout=180 interface=public enabled=True state=present auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:18 standalone.localdomain haproxy[70940]: 172.21.0.2:38824 [13/Oct/2025:14:02:18.570] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2/2 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:18 standalone.localdomain ceph-mon[29756]: pgmap v815: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:18 standalone.localdomain haproxy[70940]: 172.21.0.2:38824 [13/Oct/2025:14:02:18.573] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/284/284 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:18 standalone.localdomain haproxy[70940]: 172.21.0.2:38824 [13/Oct/2025:14:02:18.863] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/12/12 200 497 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:18 standalone.localdomain haproxy[70940]: 172.21.0.2:38824 [13/Oct/2025:14:02:18.878] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/17/17 200 1236 - - ---- 12/1/0/0/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:19 standalone.localdomain ansible-async_wrapper.py[98456]: Invoked with 879573449533 60 /tmp/ansible-root/ansible-tmp-1760364139.2534788-98445-77361265410845/AnsiballZ_project.py _
Oct 13 14:02:19 standalone.localdomain ansible-async_wrapper.py[98459]: Starting module and watcher
Oct 13 14:02:19 standalone.localdomain ansible-async_wrapper.py[98459]: Start watching 98460 (60)
Oct 13 14:02:19 standalone.localdomain ansible-async_wrapper.py[98460]: Start module (98460)
Oct 13 14:02:19 standalone.localdomain ansible-async_wrapper.py[98456]: Return async_wrapper task started.
Oct 13 14:02:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v816: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:02:19 standalone.localdomain python3[98461]: ansible-openstack.cloud.project Invoked with cloud=standalone name=service domain_id=default state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None description=None properties=None
Oct 13 14:02:19 standalone.localdomain python3[98466]: ansible-ansible.legacy.async_status Invoked with jid=879573449533.98456 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:20 standalone.localdomain haproxy[70940]: 172.21.0.2:48020 [13/Oct/2025:14:02:20.149] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:20 standalone.localdomain haproxy[70940]: 172.21.0.2:48020 [13/Oct/2025:14:02:20.154] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/315/315 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:20 standalone.localdomain haproxy[70940]: 172.21.0.2:48020 [13/Oct/2025:14:02:20.473] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/14/14 200 396 - - ---- 12/1/0/0/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:02:20 standalone.localdomain haproxy[70940]: 172.21.0.2:48020 [13/Oct/2025:14:02:20.490] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/30/30 200 937 - - ---- 12/1/0/0/0 0/0 "GET /v3/projects?domain_id=default HTTP/1.1"
Oct 13 14:02:20 standalone.localdomain ansible-async_wrapper.py[98460]: Module complete (98460)
Oct 13 14:02:20 standalone.localdomain ceph-mon[29756]: pgmap v816: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:21 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Oct 13 14:02:21 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Oct 13 14:02:21 standalone.localdomain runuser[98488]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:02:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v817: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:21 standalone.localdomain ceph-mon[29756]: 6.11 scrub starts
Oct 13 14:02:21 standalone.localdomain ceph-mon[29756]: 6.11 scrub ok
Oct 13 14:02:21 standalone.localdomain runuser[98488]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:02:22 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Oct 13 14:02:22 standalone.localdomain runuser[98610]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:02:22 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Oct 13 14:02:22 standalone.localdomain ceph-mon[29756]: pgmap v817: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:22 standalone.localdomain ceph-mon[29756]: 6.12 scrub starts
Oct 13 14:02:22 standalone.localdomain ceph-mon[29756]: 6.12 scrub ok
Oct 13 14:02:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:02:22 standalone.localdomain runuser[98610]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:02:22 standalone.localdomain runuser[98679]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:02:22 standalone.localdomain podman[98662]: 2025-10-13 14:02:22.82119287 +0000 UTC m=+0.085643900 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, com.redhat.component=openstack-keystone-container, container_name=keystone_cron, io.openshift.expose-services=, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, description=Red Hat OpenStack Platform 17.1 keystone, release=1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:02:22 standalone.localdomain podman[98662]: 2025-10-13 14:02:22.857025487 +0000 UTC m=+0.121476487 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, container_name=keystone_cron, name=rhosp17/openstack-keystone, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12)
Oct 13 14:02:22 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:02:23
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_data', 'manila_metadata', 'images', 'backups', 'vms', 'volumes', '.mgr']
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:02:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v818: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:23 standalone.localdomain runuser[98679]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:02:24 standalone.localdomain ansible-async_wrapper.py[98459]: Done in kid B.
Oct 13 14:02:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:02:24 standalone.localdomain ceph-mon[29756]: pgmap v818: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:24 standalone.localdomain python3[98843]: ansible-ansible.legacy.async_status Invoked with jid=879573449533.98456 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:25 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Oct 13 14:02:25 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Oct 13 14:02:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v819: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:25 standalone.localdomain ansible-async_wrapper.py[98906]: Invoked with 3970389003 60 /tmp/ansible-root/ansible-tmp-1760364145.1655676-98887-278806538607248/AnsiballZ_identity_domain.py _
Oct 13 14:02:25 standalone.localdomain ansible-async_wrapper.py[98923]: Starting module and watcher
Oct 13 14:02:25 standalone.localdomain ansible-async_wrapper.py[98923]: Start watching 98924 (60)
Oct 13 14:02:25 standalone.localdomain ansible-async_wrapper.py[98924]: Start module (98924)
Oct 13 14:02:25 standalone.localdomain ansible-async_wrapper.py[98906]: Return async_wrapper task started.
Oct 13 14:02:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:02:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:02:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:02:25 standalone.localdomain ceph-mon[29756]: 6.13 scrub starts
Oct 13 14:02:25 standalone.localdomain ceph-mon[29756]: 6.13 scrub ok
Oct 13 14:02:25 standalone.localdomain systemd[1]: tmp-crun.rAoZKl.mount: Deactivated successfully.
Oct 13 14:02:25 standalone.localdomain podman[98955]: 2025-10-13 14:02:25.711145141 +0000 UTC m=+0.126315227 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, architecture=x86_64, build-date=2025-07-21T16:18:19, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, com.redhat.component=openstack-barbican-keystone-listener-container, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-barbican-keystone-listener, release=1, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, container_name=barbican_keystone_listener, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, io.openshift.expose-services=)
Oct 13 14:02:25 standalone.localdomain python3[98928]: ansible-openstack.cloud.identity_domain Invoked with cloud=standalone name=heat_stack state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None description=None
Oct 13 14:02:25 standalone.localdomain podman[98955]: 2025-10-13 14:02:25.741812627 +0000 UTC m=+0.156982723 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, com.redhat.component=openstack-barbican-keystone-listener-container, tcib_managed=true, version=17.1.9, build-date=2025-07-21T16:18:19, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, managed_by=tripleo_ansible, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-barbican-keystone-listener, release=1, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=barbican_keystone_listener)
Oct 13 14:02:25 standalone.localdomain podman[98954]: 2025-10-13 14:02:25.748532346 +0000 UTC m=+0.163717432 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, distribution-scope=public, release=1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-api, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-barbican-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T15:22:44, container_name=barbican_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api)
Oct 13 14:02:25 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:02:25 standalone.localdomain podman[98956]: 2025-10-13 14:02:25.671658151 +0000 UTC m=+0.082909565 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, architecture=x86_64, distribution-scope=public, version=17.1.9, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-barbican-worker, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 barbican-worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-barbican-worker-container, build-date=2025-07-21T15:36:22, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, container_name=barbican_worker, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:02:25 standalone.localdomain podman[98954]: 2025-10-13 14:02:25.779777379 +0000 UTC m=+0.194962425 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, architecture=x86_64, config_id=tripleo_step3, batch=17.1_20250721.1, build-date=2025-07-21T15:22:44, container_name=barbican_api, io.openshift.expose-services=, release=1, name=rhosp17/openstack-barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, description=Red Hat OpenStack Platform 17.1 barbican-api, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-barbican-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 14:02:25 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:02:25 standalone.localdomain podman[98956]: 2025-10-13 14:02:25.803403546 +0000 UTC m=+0.214654960 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, build-date=2025-07-21T15:36:22, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-barbican-worker-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, container_name=barbican_worker, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:02:25 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:02:25 standalone.localdomain python3[99006]: ansible-ansible.legacy.async_status Invoked with jid=3970389003.98906 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:26 standalone.localdomain haproxy[70940]: 172.21.0.2:48028 [13/Oct/2025:14:02:26.243] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:26 standalone.localdomain haproxy[70940]: 172.21.0.2:48028 [13/Oct/2025:14:02:26.247] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/270/270 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:26 standalone.localdomain haproxy[70940]: 172.21.0.2:48028 [13/Oct/2025:14:02:26.525] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/14/14 200 322 - - ---- 12/1/0/0/0 0/0 "GET /v3/domains?name=heat_stack HTTP/1.1"
Oct 13 14:02:26 standalone.localdomain haproxy[70940]: 172.21.0.2:48028 [13/Oct/2025:14:02:26.545] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/16/16 201 436 - - ---- 12/1/0/0/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:02:26 standalone.localdomain ceph-mon[29756]: pgmap v819: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:26 standalone.localdomain ansible-async_wrapper.py[98924]: Module complete (98924)
Oct 13 14:02:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v820: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:28 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Oct 13 14:02:28 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Oct 13 14:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:02:28 standalone.localdomain ceph-mon[29756]: pgmap v820: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:28 standalone.localdomain ceph-mon[29756]: 6.14 scrub starts
Oct 13 14:02:28 standalone.localdomain ceph-mon[29756]: 6.14 scrub ok
Oct 13 14:02:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v821: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:02:30 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.15 deep-scrub starts
Oct 13 14:02:30 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.15 deep-scrub ok
Oct 13 14:02:30 standalone.localdomain ansible-async_wrapper.py[98923]: Done in kid B.
Oct 13 14:02:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:02:30 standalone.localdomain ceph-mon[29756]: pgmap v821: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:30 standalone.localdomain ceph-mon[29756]: 6.15 deep-scrub starts
Oct 13 14:02:30 standalone.localdomain ceph-mon[29756]: 6.15 deep-scrub ok
Oct 13 14:02:30 standalone.localdomain podman[99113]: 2025-10-13 14:02:30.67671443 +0000 UTC m=+0.085952250 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, container_name=ovn_cluster_northd, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, version=17.1.9, release=1, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-ovn-northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=ovn_cluster_northd, tcib_managed=true, com.redhat.component=openstack-ovn-northd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:02:30 standalone.localdomain podman[99113]: 2025-10-13 14:02:30.717885893 +0000 UTC m=+0.127123713 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, config_id=ovn_cluster_northd, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ovn-northd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, name=rhosp17/openstack-ovn-northd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-northd, batch=17.1_20250721.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, vendor=Red Hat, Inc., build-date=2025-07-21T13:30:04, container_name=ovn_cluster_northd, managed_by=tripleo_ansible, io.buildah.version=1.33.12)
Oct 13 14:02:30 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:02:31 standalone.localdomain python3[99134]: ansible-ansible.legacy.async_status Invoked with jid=3970389003.98906 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v822: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:31 standalone.localdomain python3[99147]: ansible-openstack.cloud.identity_domain_info Invoked with cloud=standalone wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None name=None filters=None
Oct 13 14:02:31 standalone.localdomain ceph-mon[29756]: pgmap v822: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:32 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Oct 13 14:02:32 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Oct 13 14:02:32 standalone.localdomain haproxy[70940]: 172.21.0.2:41888 [13/Oct/2025:14:02:32.129] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:32 standalone.localdomain haproxy[70940]: 172.21.0.2:41888 [13/Oct/2025:14:02:32.134] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/330/330 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:32 standalone.localdomain haproxy[70940]: 172.21.0.2:41888 [13/Oct/2025:14:02:32.468] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/17/17 200 705 - - ---- 12/1/0/0/0 0/0 "GET /v3/domains HTTP/1.1"
Oct 13 14:02:32 standalone.localdomain ceph-mon[29756]: 6.16 scrub starts
Oct 13 14:02:32 standalone.localdomain ceph-mon[29756]: 6.16 scrub ok
Oct 13 14:02:33 standalone.localdomain ansible-async_wrapper.py[99227]: Invoked with 201692964266 60 /tmp/ansible-root/ansible-tmp-1760364152.965913-99216-102444902371939/AnsiballZ_catalog_service.py _
Oct 13 14:02:33 standalone.localdomain ansible-async_wrapper.py[99230]: Starting module and watcher
Oct 13 14:02:33 standalone.localdomain ansible-async_wrapper.py[99230]: Start watching 99231 (60)
Oct 13 14:02:33 standalone.localdomain ansible-async_wrapper.py[99231]: Start module (99231)
Oct 13 14:02:33 standalone.localdomain ansible-async_wrapper.py[99227]: Return async_wrapper task started.
Oct 13 14:02:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:02:33 standalone.localdomain podman[99234]: 2025-10-13 14:02:33.326771476 +0000 UTC m=+0.074849033 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git, container_name=clustercheck, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, name=rhosp17/openstack-mariadb, build-date=2025-07-21T12:58:45, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, config_id=tripleo_step2, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team)
Oct 13 14:02:33 standalone.localdomain python3[99232]: ansible-openstack.cloud.catalog_service Invoked with cloud=standalone name=barbican service_type=key-manager description=OpenStack Key-Manager Service state=present type=key-manager wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:33 standalone.localdomain podman[99234]: 2025-10-13 14:02:33.377790186 +0000 UTC m=+0.125867733 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, container_name=clustercheck, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45)
Oct 13 14:02:33 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:02:33 standalone.localdomain ansible-async_wrapper.py[99261]: Invoked with 241294654939 60 /tmp/ansible-root/ansible-tmp-1760364153.280077-99216-126995734523461/AnsiballZ_catalog_service.py _
Oct 13 14:02:33 standalone.localdomain ansible-async_wrapper.py[99293]: Starting module and watcher
Oct 13 14:02:33 standalone.localdomain ansible-async_wrapper.py[99293]: Start watching 99294 (60)
Oct 13 14:02:33 standalone.localdomain ansible-async_wrapper.py[99294]: Start module (99294)
Oct 13 14:02:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v823: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:33 standalone.localdomain ansible-async_wrapper.py[99261]: Return async_wrapper task started.
Oct 13 14:02:33 standalone.localdomain python3[99295]: ansible-openstack.cloud.catalog_service Invoked with cloud=standalone name=cinderv3 service_type=volumev3 description=OpenStack Volumev3 Service state=present type=volumev3 wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:33 standalone.localdomain ceph-mon[29756]: pgmap v823: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:33 standalone.localdomain ansible-async_wrapper.py[99306]: Invoked with 441249086382 60 /tmp/ansible-root/ansible-tmp-1760364153.5592914-99216-235627548987150/AnsiballZ_catalog_service.py _
Oct 13 14:02:33 standalone.localdomain ansible-async_wrapper.py[99309]: Starting module and watcher
Oct 13 14:02:33 standalone.localdomain ansible-async_wrapper.py[99309]: Start watching 99310 (60)
Oct 13 14:02:33 standalone.localdomain ansible-async_wrapper.py[99310]: Start module (99310)
Oct 13 14:02:33 standalone.localdomain ansible-async_wrapper.py[99306]: Return async_wrapper task started.
Oct 13 14:02:33 standalone.localdomain python3[99311]: ansible-openstack.cloud.catalog_service Invoked with cloud=standalone name=glance service_type=image description=OpenStack Image Service state=present type=image wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:33 standalone.localdomain haproxy[70940]: 172.21.0.2:41890 [13/Oct/2025:14:02:33.969] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99363]: Invoked with 851716979299 60 /tmp/ansible-root/ansible-tmp-1760364153.8810828-99216-260569533153933/AnsiballZ_catalog_service.py _
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99413]: Starting module and watcher
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99413]: Start watching 99414 (60)
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99414]: Start module (99414)
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99363]: Return async_wrapper task started.
Oct 13 14:02:34 standalone.localdomain runuser[99409]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:02:34 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Oct 13 14:02:34 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Oct 13 14:02:34 standalone.localdomain python3[99418]: ansible-openstack.cloud.catalog_service Invoked with cloud=standalone name=heat service_type=orchestration description=OpenStack Orchestration Service state=present type=orchestration wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:34 standalone.localdomain haproxy[70940]: 172.21.0.2:41890 [13/Oct/2025:14:02:33.973] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/265/265 201 1619 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:34 standalone.localdomain haproxy[70940]: 172.21.0.2:41890 [13/Oct/2025:14:02:34.239] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/15/15 200 497 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:34 standalone.localdomain haproxy[70940]: 172.21.0.2:41890 [13/Oct/2025:14:02:34.256] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/20/20 201 461 - - ---- 13/2/1/1/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:02:34 standalone.localdomain haproxy[70940]: 172.21.0.2:41894 [13/Oct/2025:14:02:34.262] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/16/16 300 515 - - ---- 13/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99469]: Invoked with 59767413275 60 /tmp/ansible-root/ansible-tmp-1760364154.1492527-99216-140032519938142/AnsiballZ_catalog_service.py _
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99472]: Starting module and watcher
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99472]: Start watching 99473 (60)
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99473]: Start module (99473)
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99469]: Return async_wrapper task started.
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99231]: Module complete (99231)
Oct 13 14:02:34 standalone.localdomain python3[99474]: ansible-openstack.cloud.catalog_service Invoked with cloud=standalone name=heat-cfn service_type=cloudformation description=OpenStack Cloudformation Service state=present type=cloudformation wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:02:34 standalone.localdomain haproxy[70940]: 172.21.0.2:41894 [13/Oct/2025:14:02:34.281] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/271/271 201 1723 - - ---- 13/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:34 standalone.localdomain haproxy[70940]: 172.21.0.2:41900 [13/Oct/2025:14:02:34.530] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/23/23 300 515 - - ---- 13/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:34 standalone.localdomain haproxy[70940]: 172.21.0.2:41894 [13/Oct/2025:14:02:34.555] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/15/15 200 739 - - ---- 13/2/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99493]: Invoked with 277628983310 60 /tmp/ansible-root/ansible-tmp-1760364154.4327457-99216-166547842362141/AnsiballZ_catalog_service.py _
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99496]: Starting module and watcher
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99496]: Start watching 99497 (60)
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99497]: Start module (99497)
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99493]: Return async_wrapper task started.
Oct 13 14:02:34 standalone.localdomain ceph-mon[29756]: 6.17 scrub starts
Oct 13 14:02:34 standalone.localdomain ceph-mon[29756]: 6.17 scrub ok
Oct 13 14:02:34 standalone.localdomain runuser[99409]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:02:34 standalone.localdomain python3[99498]: ansible-openstack.cloud.catalog_service Invoked with cloud=standalone name=manila service_type=share description=OpenStack Share Service state=present type=share wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:34 standalone.localdomain haproxy[70940]: 172.21.0.2:41900 [13/Oct/2025:14:02:34.556] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/276/276 201 1723 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99513]: Invoked with 478460337492 60 /tmp/ansible-root/ansible-tmp-1760364154.6736834-99216-65417584603782/AnsiballZ_catalog_service.py _
Oct 13 14:02:34 standalone.localdomain haproxy[70940]: 172.21.0.2:41894 [13/Oct/2025:14:02:34.574] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/272/272 201 455 - - ---- 14/3/2/2/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:02:34 standalone.localdomain haproxy[70940]: 172.21.0.2:41916 [13/Oct/2025:14:02:34.774] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/77/77 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99530]: Starting module and watcher
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99530]: Start watching 99533 (60)
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99533]: Start module (99533)
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99513]: Return async_wrapper task started.
Oct 13 14:02:34 standalone.localdomain haproxy[70940]: 172.21.0.2:41900 [13/Oct/2025:14:02:34.835] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/34/34 200 975 - - ---- 14/3/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:34 standalone.localdomain ansible-async_wrapper.py[99294]: Module complete (99294)
Oct 13 14:02:34 standalone.localdomain runuser[99572]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:02:35 standalone.localdomain python3[99535]: ansible-openstack.cloud.catalog_service Invoked with cloud=standalone name=manilav2 service_type=sharev2 description=OpenStack Sharev2 Service state=present type=sharev2 wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:35 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Oct 13 14:02:35 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99599]: Invoked with 230856793293 60 /tmp/ansible-root/ansible-tmp-1760364154.9452748-99216-277062354847098/AnsiballZ_catalog_service.py _
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99630]: Starting module and watcher
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99630]: Start watching 99631 (60)
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99631]: Start module (99631)
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99599]: Return async_wrapper task started.
Oct 13 14:02:35 standalone.localdomain haproxy[70940]: 172.21.0.2:41916 [13/Oct/2025:14:02:34.853] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/284/284 201 1824 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:35 standalone.localdomain haproxy[70940]: 172.21.0.2:41900 [13/Oct/2025:14:02:34.871] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/280/280 201 447 - - ---- 14/3/2/2/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:02:35 standalone.localdomain haproxy[70940]: 172.21.0.2:41926 [13/Oct/2025:14:02:35.117] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/36/36 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:35 standalone.localdomain haproxy[70940]: 172.21.0.2:41916 [13/Oct/2025:14:02:35.139] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/28/28 200 1204 - - ---- 14/3/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99310]: Module complete (99310)
Oct 13 14:02:35 standalone.localdomain python3[99632]: ansible-openstack.cloud.catalog_service Invoked with cloud=standalone name=neutron service_type=network description=OpenStack Network Service state=present type=network wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99643]: Invoked with 966289471031 60 /tmp/ansible-root/ansible-tmp-1760364155.2073154-99216-61868377516062/AnsiballZ_catalog_service.py _
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99659]: Starting module and watcher
Oct 13 14:02:35 standalone.localdomain haproxy[70940]: 172.21.0.2:41926 [13/Oct/2025:14:02:35.155] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/271/271 201 1920 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99659]: Start watching 99660 (60)
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99660]: Start module (99660)
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99643]: Return async_wrapper task started.
Oct 13 14:02:35 standalone.localdomain haproxy[70940]: 172.21.0.2:41916 [13/Oct/2025:14:02:35.170] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/293/293 201 461 - - ---- 14/3/2/2/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:02:35 standalone.localdomain haproxy[70940]: 172.21.0.2:41942 [13/Oct/2025:14:02:35.399] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/67/67 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:35 standalone.localdomain runuser[99572]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:02:35 standalone.localdomain haproxy[70940]: 172.21.0.2:41926 [13/Oct/2025:14:02:35.431] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/51/51 200 1446 - - ---- 14/3/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v824: 177 pgs: 1 active+clean+scrubbing, 176 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:35 standalone.localdomain runuser[99667]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99414]: Module complete (99414)
Oct 13 14:02:35 standalone.localdomain sudo[99673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:02:35 standalone.localdomain sudo[99673]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:02:35 standalone.localdomain sudo[99673]: pam_unix(sudo:session): session closed for user root
Oct 13 14:02:35 standalone.localdomain python3[99661]: ansible-openstack.cloud.catalog_service Invoked with cloud=standalone name=nova service_type=compute description=OpenStack Compute Service state=present type=compute wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:35 standalone.localdomain sudo[99740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:02:35 standalone.localdomain sudo[99740]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99743]: Invoked with 59113699536 60 /tmp/ansible-root/ansible-tmp-1760364155.5150137-99216-149343214358668/AnsiballZ_catalog_service.py _
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99799]: Starting module and watcher
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99799]: Start watching 99800 (60)
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99800]: Start module (99800)
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99743]: Return async_wrapper task started.
Oct 13 14:02:35 standalone.localdomain ceph-mon[29756]: 6.18 scrub starts
Oct 13 14:02:35 standalone.localdomain ceph-mon[29756]: 6.18 scrub ok
Oct 13 14:02:35 standalone.localdomain ceph-mon[29756]: pgmap v824: 177 pgs: 1 active+clean+scrubbing, 176 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:35 standalone.localdomain haproxy[70940]: 172.21.0.2:41942 [13/Oct/2025:14:02:35.469] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/295/295 201 2022 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:35 standalone.localdomain haproxy[70940]: 172.21.0.2:41926 [13/Oct/2025:14:02:35.484] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/300/300 201 467 - - ---- 14/3/2/2/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:02:35 standalone.localdomain haproxy[70940]: 172.21.0.2:41950 [13/Oct/2025:14:02:35.608] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/178/178 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:35 standalone.localdomain haproxy[70940]: 172.21.0.2:41942 [13/Oct/2025:14:02:35.768] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/42/42 200 1694 - - ---- 13/2/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:35 standalone.localdomain ansible-async_wrapper.py[99473]: Module complete (99473)
Oct 13 14:02:35 standalone.localdomain python3[99801]: ansible-openstack.cloud.catalog_service Invoked with cloud=standalone name=placement service_type=placement description=OpenStack Placement Service state=present type=placement wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:35 standalone.localdomain python3[99828]: ansible-ansible.legacy.async_status Invoked with jid=201692964266.99227 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:36 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Oct 13 14:02:36 standalone.localdomain haproxy[70940]: 172.21.0.2:41950 [13/Oct/2025:14:02:35.788] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/293/293 201 2129 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:36 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Oct 13 14:02:36 standalone.localdomain haproxy[70940]: 172.21.0.2:41942 [13/Oct/2025:14:02:35.814] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/286/286 201 447 - - ---- 14/3/2/2/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:02:36 standalone.localdomain haproxy[70940]: 172.21.0.2:41956 [13/Oct/2025:14:02:35.842] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/260/260 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:36 standalone.localdomain haproxy[70940]: 172.21.0.2:41950 [13/Oct/2025:14:02:36.082] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/42/42 200 1922 - - ---- 14/3/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:36 standalone.localdomain ansible-async_wrapper.py[99497]: Module complete (99497)
Oct 13 14:02:36 standalone.localdomain python3[99869]: ansible-ansible.legacy.async_status Invoked with jid=241294654939.99261 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:36 standalone.localdomain sudo[99740]: pam_unix(sudo:session): session closed for user root
Oct 13 14:02:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:02:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:02:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:02:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:02:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:02:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:02:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:02:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:02:36 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev b5e7112d-4ece-4888-a8c1-37b84a907230 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:02:36 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev b5e7112d-4ece-4888-a8c1-37b84a907230 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:02:36 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event b5e7112d-4ece-4888-a8c1-37b84a907230 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:02:36 standalone.localdomain runuser[99667]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:02:36 standalone.localdomain sudo[99894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:02:36 standalone.localdomain sudo[99894]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:02:36 standalone.localdomain sudo[99894]: pam_unix(sudo:session): session closed for user root
Oct 13 14:02:36 standalone.localdomain haproxy[70940]: 172.21.0.2:41956 [13/Oct/2025:14:02:36.104] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/289/289 201 2225 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:36 standalone.localdomain haproxy[70940]: 172.21.0.2:41950 [13/Oct/2025:14:02:36.128] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/286/286 201 453 - - ---- 14/3/2/2/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:02:36 standalone.localdomain haproxy[70940]: 172.21.0.2:41972 [13/Oct/2025:14:02:36.187] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/228/228 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:36 standalone.localdomain python3[99893]: ansible-ansible.legacy.async_status Invoked with jid=441249086382.99306 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:36 standalone.localdomain haproxy[70940]: 172.21.0.2:41956 [13/Oct/2025:14:02:36.396] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/34/34 200 2156 - - ---- 14/3/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:36 standalone.localdomain ansible-async_wrapper.py[99533]: Module complete (99533)
Oct 13 14:02:36 standalone.localdomain python3[99920]: ansible-ansible.legacy.async_status Invoked with jid=851716979299.99363 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:36 standalone.localdomain haproxy[70940]: 172.21.0.2:41972 [13/Oct/2025:14:02:36.417] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/296/296 201 2325 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:36 standalone.localdomain haproxy[70940]: 172.21.0.2:41956 [13/Oct/2025:14:02:36.440] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/295/295 201 452 - - ---- 14/3/2/2/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:02:36 standalone.localdomain haproxy[70940]: 172.21.0.2:41982 [13/Oct/2025:14:02:36.502] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/237/237 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:36 standalone.localdomain ceph-mon[29756]: 6.19 scrub starts
Oct 13 14:02:36 standalone.localdomain ceph-mon[29756]: 6.19 scrub ok
Oct 13 14:02:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:02:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:02:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:02:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:02:36 standalone.localdomain haproxy[70940]: 172.21.0.2:41972 [13/Oct/2025:14:02:36.719] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/58/58 200 2389 - - ---- 13/2/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:36 standalone.localdomain ansible-async_wrapper.py[99631]: Module complete (99631)
Oct 13 14:02:36 standalone.localdomain python3[99926]: ansible-ansible.legacy.async_status Invoked with jid=59767413275.99469 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:37 standalone.localdomain haproxy[70940]: 172.21.0.2:41982 [13/Oct/2025:14:02:36.743] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/300/300 201 2424 - - ---- 13/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:37 standalone.localdomain haproxy[70940]: 172.21.0.2:41972 [13/Oct/2025:14:02:36.782] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/286/286 201 449 - - ---- 13/2/1/1/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:02:37 standalone.localdomain haproxy[70940]: 172.21.0.2:41982 [13/Oct/2025:14:02:37.048] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/44/44 200 2619 - - ---- 13/2/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:37 standalone.localdomain haproxy[70940]: 172.21.0.2:41982 [13/Oct/2025:14:02:37.100] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/22/22 201 458 - - ---- 12/1/0/0/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:02:37 standalone.localdomain ansible-async_wrapper.py[99660]: Module complete (99660)
Oct 13 14:02:37 standalone.localdomain ansible-async_wrapper.py[99800]: Module complete (99800)
Oct 13 14:02:37 standalone.localdomain python3[99929]: ansible-ansible.legacy.async_status Invoked with jid=277628983310.99493 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v825: 177 pgs: 1 active+clean+scrubbing, 176 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:37 standalone.localdomain python3[99932]: ansible-ansible.legacy.async_status Invoked with jid=478460337492.99513 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:37 standalone.localdomain python3[99943]: ansible-ansible.legacy.async_status Invoked with jid=230856793293.99599 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:37 standalone.localdomain ceph-mon[29756]: pgmap v825: 177 pgs: 1 active+clean+scrubbing, 176 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:37 standalone.localdomain python3[99946]: ansible-ansible.legacy.async_status Invoked with jid=966289471031.99643 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:38 standalone.localdomain ansible-async_wrapper.py[99230]: Done in kid B.
Oct 13 14:02:38 standalone.localdomain python3[99949]: ansible-ansible.legacy.async_status Invoked with jid=59113699536.99743 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:38 standalone.localdomain ansible-async_wrapper.py[99293]: Done in kid B.
Oct 13 14:02:38 standalone.localdomain ansible-async_wrapper.py[99961]: Invoked with 983244401138 60 /tmp/ansible-root/ansible-tmp-1760364158.3656263-99950-70497888242110/AnsiballZ_catalog_service.py _
Oct 13 14:02:38 standalone.localdomain ansible-async_wrapper.py[99972]: Starting module and watcher
Oct 13 14:02:38 standalone.localdomain ansible-async_wrapper.py[99972]: Start watching 99973 (60)
Oct 13 14:02:38 standalone.localdomain ansible-async_wrapper.py[99973]: Start module (99973)
Oct 13 14:02:38 standalone.localdomain ansible-async_wrapper.py[99961]: Return async_wrapper task started.
Oct 13 14:02:38 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 40 completed events
Oct 13 14:02:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:02:38 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:02:38 standalone.localdomain python3[99974]: ansible-openstack.cloud.catalog_service Invoked with cloud=standalone name=swift service_type=object-store description=OpenStack Object-Store Service state=present type=object-store wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:38 standalone.localdomain ansible-async_wrapper.py[99309]: Done in kid B.
Oct 13 14:02:38 standalone.localdomain python3[99979]: ansible-ansible.legacy.async_status Invoked with jid=983244401138.99961 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:39 standalone.localdomain ansible-async_wrapper.py[99413]: Done in kid B.
Oct 13 14:02:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:02:39 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Oct 13 14:02:39 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Oct 13 14:02:39 standalone.localdomain podman[99981]: 2025-10-13 14:02:39.176361377 +0000 UTC m=+0.078858543 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=iscsid, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, com.redhat.component=openstack-iscsid-container, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1)
Oct 13 14:02:39 standalone.localdomain podman[99981]: 2025-10-13 14:02:39.183959813 +0000 UTC m=+0.086456969 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:02:39 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:02:39 standalone.localdomain haproxy[70940]: 172.21.0.2:41988 [13/Oct/2025:14:02:39.316] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:39 standalone.localdomain ansible-async_wrapper.py[99472]: Done in kid B.
Oct 13 14:02:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v826: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:02:39 standalone.localdomain haproxy[70940]: 172.21.0.2:41988 [13/Oct/2025:14:02:39.322] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/278/278 201 2623 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:39 standalone.localdomain ansible-async_wrapper.py[99496]: Done in kid B.
Oct 13 14:02:39 standalone.localdomain haproxy[70940]: 172.21.0.2:41988 [13/Oct/2025:14:02:39.604] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/13/13 200 2858 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:39 standalone.localdomain haproxy[70940]: 172.21.0.2:41988 [13/Oct/2025:14:02:39.621] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/15/15 201 460 - - ---- 12/1/0/0/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:02:39 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:02:39 standalone.localdomain ceph-mon[29756]: 6.1a scrub starts
Oct 13 14:02:39 standalone.localdomain ceph-mon[29756]: 6.1a scrub ok
Oct 13 14:02:39 standalone.localdomain ceph-mon[29756]: pgmap v826: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:39 standalone.localdomain ansible-async_wrapper.py[99973]: Module complete (99973)
Oct 13 14:02:39 standalone.localdomain ansible-async_wrapper.py[99530]: Done in kid B.
Oct 13 14:02:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:02:40 standalone.localdomain systemd[1]: tmp-crun.9TwJvh.mount: Deactivated successfully.
Oct 13 14:02:40 standalone.localdomain podman[100007]: 2025-10-13 14:02:40.007024158 +0000 UTC m=+0.095976044 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, container_name=keystone, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, build-date=2025-07-21T13:27:18, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, name=rhosp17/openstack-keystone, com.redhat.component=openstack-keystone-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:02:40 standalone.localdomain podman[100007]: 2025-10-13 14:02:40.041190017 +0000 UTC m=+0.130141933 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, name=rhosp17/openstack-keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., version=17.1.9, container_name=keystone, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, batch=17.1_20250721.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git)
Oct 13 14:02:40 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:02:40 standalone.localdomain ansible-async_wrapper.py[99630]: Done in kid B.
Oct 13 14:02:40 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Oct 13 14:02:40 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Oct 13 14:02:40 standalone.localdomain ansible-async_wrapper.py[99659]: Done in kid B.
Oct 13 14:02:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:02:40 standalone.localdomain podman[100035]: 2025-10-13 14:02:40.564006232 +0000 UTC m=+0.087657397 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-horizon-container, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.openshift.expose-services=, build-date=2025-07-21T13:58:15, name=rhosp17/openstack-horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, distribution-scope=public, config_id=tripleo_step3, release=1, description=Red Hat OpenStack Platform 17.1 horizon, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., container_name=horizon, tcib_managed=true)
Oct 13 14:02:40 standalone.localdomain ansible-async_wrapper.py[99799]: Done in kid B.
Oct 13 14:02:40 standalone.localdomain ceph-mon[29756]: 6.1b scrub starts
Oct 13 14:02:40 standalone.localdomain ceph-mon[29756]: 6.1b scrub ok
Oct 13 14:02:41 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Oct 13 14:02:41 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Oct 13 14:02:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v827: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:41 standalone.localdomain ceph-mon[29756]: 6.1c scrub starts
Oct 13 14:02:41 standalone.localdomain ceph-mon[29756]: 6.1c scrub ok
Oct 13 14:02:41 standalone.localdomain ceph-mon[29756]: pgmap v827: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:02:42 standalone.localdomain podman[100035]: 2025-10-13 14:02:42.035113911 +0000 UTC m=+1.558765076 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, distribution-scope=public, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 horizon, description=Red Hat OpenStack Platform 17.1 horizon, release=1, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, tcib_managed=true, version=17.1.9, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, com.redhat.component=openstack-horizon-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, name=rhosp17/openstack-horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:15, container_name=horizon)
Oct 13 14:02:42 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:02:42 standalone.localdomain systemd[1]: tmp-crun.e7SIT7.mount: Deactivated successfully.
Oct 13 14:02:42 standalone.localdomain podman[100136]: 2025-10-13 14:02:42.096975937 +0000 UTC m=+0.068849074 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, name=rhosp17/openstack-memcached, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, container_name=memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.component=openstack-memcached-container, vcs-type=git, build-date=2025-07-21T12:58:43, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.openshift.expose-services=)
Oct 13 14:02:42 standalone.localdomain podman[100136]: 2025-10-13 14:02:42.122036624 +0000 UTC m=+0.093909781 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, build-date=2025-07-21T12:58:43, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-memcached-container, container_name=memcached, architecture=x86_64, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:02:42 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:02:42 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Oct 13 14:02:42 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Oct 13 14:02:42 standalone.localdomain ceph-mon[29756]: 6.1d scrub starts
Oct 13 14:02:42 standalone.localdomain ceph-mon[29756]: 6.1d scrub ok
Oct 13 14:02:43 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Oct 13 14:02:43 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Oct 13 14:02:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v828: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:43 standalone.localdomain ansible-async_wrapper.py[99972]: Done in kid B.
Oct 13 14:02:43 standalone.localdomain ceph-mon[29756]: 6.1e scrub starts
Oct 13 14:02:43 standalone.localdomain ceph-mon[29756]: 6.1e scrub ok
Oct 13 14:02:43 standalone.localdomain ceph-mon[29756]: pgmap v828: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:44 standalone.localdomain python3[100180]: ansible-ansible.legacy.async_status Invoked with jid=983244401138.99961 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:02:44 standalone.localdomain ansible-async_wrapper.py[100281]: Invoked with 344793307256 60 /tmp/ansible-root/ansible-tmp-1760364164.571465-100267-169998711663864/AnsiballZ_endpoint.py _
Oct 13 14:02:44 standalone.localdomain ansible-async_wrapper.py[100289]: Starting module and watcher
Oct 13 14:02:44 standalone.localdomain ansible-async_wrapper.py[100289]: Start watching 100290 (60)
Oct 13 14:02:44 standalone.localdomain ansible-async_wrapper.py[100290]: Start module (100290)
Oct 13 14:02:44 standalone.localdomain ansible-async_wrapper.py[100281]: Return async_wrapper task started.
Oct 13 14:02:44 standalone.localdomain python3[100291]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=barbican url=http://172.21.0.2:9311 endpoint_interface=public region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100308]: Invoked with 36056709265 60 /tmp/ansible-root/ansible-tmp-1760364164.8835099-100267-269049179617539/AnsiballZ_endpoint.py _
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100347]: Starting module and watcher
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100347]: Start watching 100348 (60)
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100348]: Start module (100348)
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100308]: Return async_wrapper task started.
Oct 13 14:02:45 standalone.localdomain python3[100349]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=cinderv3 url=http://172.21.0.2:8776/v3/%(tenant_id)s endpoint_interface=public region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100360]: Invoked with 432829332897 60 /tmp/ansible-root/ansible-tmp-1760364165.146967-100267-168318965734197/AnsiballZ_endpoint.py _
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100363]: Starting module and watcher
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100363]: Start watching 100364 (60)
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100364]: Start module (100364)
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100360]: Return async_wrapper task started.
Oct 13 14:02:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v829: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:45 standalone.localdomain haproxy[70940]: 172.21.0.2:60370 [13/Oct/2025:14:02:45.493] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:45 standalone.localdomain python3[100365]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=glance url=http://172.21.0.2:9292 endpoint_interface=public region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100376]: Invoked with 636683430691 60 /tmp/ansible-root/ansible-tmp-1760364165.3724637-100267-164363098787288/AnsiballZ_endpoint.py _
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100379]: Starting module and watcher
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100379]: Start watching 100380 (60)
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100380]: Start module (100380)
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100376]: Return async_wrapper task started.
Oct 13 14:02:45 standalone.localdomain python3[100381]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=heat url=http://172.21.0.2:8004/v1/%(tenant_id)s endpoint_interface=public region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:45 standalone.localdomain haproxy[70940]: 172.21.0.2:60384 [13/Oct/2025:14:02:45.800] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/6/6 300 515 - - ---- 13/2/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:45 standalone.localdomain haproxy[70940]: 172.21.0.2:60370 [13/Oct/2025:14:02:45.499] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/308/308 201 2725 - - ---- 13/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100441]: Invoked with 357916116060 60 /tmp/ansible-root/ansible-tmp-1760364165.6511881-100267-100638005928734/AnsiballZ_endpoint.py _
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100444]: Starting module and watcher
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100444]: Start watching 100445 (60)
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100445]: Start module (100445)
Oct 13 14:02:45 standalone.localdomain ansible-async_wrapper.py[100441]: Return async_wrapper task started.
Oct 13 14:02:46 standalone.localdomain python3[100447]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=heat-cfn url=http://172.21.0.2:8000/v1 endpoint_interface=public region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60384 [13/Oct/2025:14:02:45.808] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/263/263 201 2725 - - ---- 13/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60370 [13/Oct/2025:14:02:45.810] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/281/281 200 3099 - - ---- 13/2/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60384 [13/Oct/2025:14:02:46.073] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/29/29 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60394 [13/Oct/2025:14:02:46.093] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/11/11 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60370 [13/Oct/2025:14:02:46.093] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/26/26 200 1236 - - ---- 14/3/2/2/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100498]: Invoked with 433155979894 60 /tmp/ansible-root/ansible-tmp-1760364165.958552-100267-118085521212289/AnsiballZ_endpoint.py _
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100501]: Starting module and watcher
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100501]: Start watching 100502 (60)
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100502]: Start module (100502)
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100498]: Return async_wrapper task started.
Oct 13 14:02:46 standalone.localdomain python3[100503]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=manila url=http://172.21.0.2:8786/v1/%(tenant_id)s endpoint_interface=public region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60394 [13/Oct/2025:14:02:46.105] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/276/276 201 2725 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60384 [13/Oct/2025:14:02:46.106] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/290/290 200 1236 - - ---- 15/4/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60370 [13/Oct/2025:14:02:46.123] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/285/285 200 3099 - - ---- 15/4/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60396 [13/Oct/2025:14:02:46.361] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/48/48 300 515 - - ---- 15/4/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60394 [13/Oct/2025:14:02:46.385] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/38/38 200 3099 - - ---- 15/4/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100514]: Invoked with 884381504433 60 /tmp/ansible-root/ansible-tmp-1760364166.2173412-100267-240298573009476/AnsiballZ_endpoint.py _
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100517]: Starting module and watcher
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100517]: Start watching 100518 (60)
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100518]: Start module (100518)
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100514]: Return async_wrapper task started.
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60384 [13/Oct/2025:14:02:46.399] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/40/40 200 3099 - - ---- 15/4/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain ceph-mon[29756]: pgmap v829: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:46 standalone.localdomain python3[100519]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=manilav2 url=http://172.21.0.2:8786/v2 endpoint_interface=public region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100530]: Invoked with 614147080496 60 /tmp/ansible-root/ansible-tmp-1760364166.5005822-100267-253784951978119/AnsiballZ_endpoint.py _
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100541]: Starting module and watcher
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100541]: Start watching 100542 (60)
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100542]: Start module (100542)
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100530]: Return async_wrapper task started.
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60396 [13/Oct/2025:14:02:46.412] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/313/313 201 2725 - - ---- 16/5/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60370 [13/Oct/2025:14:02:46.416] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/342/342 201 527 - - ---- 16/5/4/4/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60394 [13/Oct/2025:14:02:46.427] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/362/362 200 1543 - - ---- 16/5/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60384 [13/Oct/2025:14:02:46.446] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/370/370 201 544 - - ---- 15/4/3/3/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60406 [13/Oct/2025:14:02:46.597] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/220/220 300 515 - - ---- 15/4/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60396 [13/Oct/2025:14:02:46.728] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/117/117 200 3099 - - ---- 16/5/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100290]: Module complete (100290)
Oct 13 14:02:46 standalone.localdomain haproxy[70940]: 172.21.0.2:60394 [13/Oct/2025:14:02:46.791] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/72/72 200 3099 - - ---- 15/4/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:46 standalone.localdomain python3[100543]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=neutron url=http://172.21.0.2:9696 endpoint_interface=public region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:46 standalone.localdomain runuser[100559]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100348]: Module complete (100348)
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100554]: Invoked with 733150735568 60 /tmp/ansible-root/ansible-tmp-1760364166.7804983-100267-130629324973107/AnsiballZ_endpoint.py _
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100606]: Starting module and watcher
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100606]: Start watching 100607 (60)
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100607]: Start module (100607)
Oct 13 14:02:46 standalone.localdomain ansible-async_wrapper.py[100554]: Return async_wrapper task started.
Oct 13 14:02:47 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Oct 13 14:02:47 standalone.localdomain python3[100608]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=nova url=http://172.21.0.2:8774/v2.1 endpoint_interface=public region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:47 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60406 [13/Oct/2025:14:02:46.820] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/321/321 201 3055 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60422 [13/Oct/2025:14:02:46.839] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/306/306 300 515 - - ---- 15/4/3/3/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60396 [13/Oct/2025:14:02:46.849] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/307/307 200 1867 - - ---- 15/4/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60394 [13/Oct/2025:14:02:46.866] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/308/308 201 527 - - ---- 15/4/3/3/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60406 [13/Oct/2025:14:02:47.145] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/48/48 200 3099 - - ---- 15/4/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain ansible-async_wrapper.py[100619]: Invoked with 902392342257 60 /tmp/ansible-root/ansible-tmp-1760364167.044448-100267-232388227163953/AnsiballZ_endpoint.py _
Oct 13 14:02:47 standalone.localdomain ansible-async_wrapper.py[100622]: Starting module and watcher
Oct 13 14:02:47 standalone.localdomain ansible-async_wrapper.py[100622]: Start watching 100623 (60)
Oct 13 14:02:47 standalone.localdomain ansible-async_wrapper.py[100623]: Start module (100623)
Oct 13 14:02:47 standalone.localdomain ansible-async_wrapper.py[100619]: Return async_wrapper task started.
Oct 13 14:02:47 standalone.localdomain ansible-async_wrapper.py[100364]: Module complete (100364)
Oct 13 14:02:47 standalone.localdomain python3[100624]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=placement url=http://172.21.0.2:8778/placement endpoint_interface=public region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60422 [13/Oct/2025:14:02:47.147] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/313/313 201 3202 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60396 [13/Oct/2025:14:02:47.158] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/315/315 200 3099 - - ---- 15/4/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60406 [13/Oct/2025:14:02:47.197] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/287/287 200 2174 - - ---- 15/4/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60430 [13/Oct/2025:14:02:47.244] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/242/242 300 515 - - ---- 15/4/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v830: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60422 [13/Oct/2025:14:02:47.464] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/38/38 200 3099 - - ---- 15/4/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60396 [13/Oct/2025:14:02:47.477] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/40/40 201 544 - - ---- 16/5/4/3/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain ceph-mon[29756]: 6.1f scrub starts
Oct 13 14:02:47 standalone.localdomain ceph-mon[29756]: 6.1f scrub ok
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60406 [13/Oct/2025:14:02:47.487] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/48/48 200 3099 - - ---- 16/5/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain python3[100628]: ansible-ansible.legacy.async_status Invoked with jid=344793307256.100281 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:47 standalone.localdomain ansible-async_wrapper.py[100380]: Module complete (100380)
Oct 13 14:02:47 standalone.localdomain runuser[100559]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:02:47 standalone.localdomain python3[100637]: ansible-ansible.legacy.async_status Invoked with jid=36056709265.100308 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60430 [13/Oct/2025:14:02:47.488] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/326/326 201 3385 - - ---- 16/5/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60422 [13/Oct/2025:14:02:47.506] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/322/322 200 2498 - - ---- 16/5/4/4/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60434 [13/Oct/2025:14:02:47.517] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/313/313 300 515 - - ---- 16/5/3/3/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain runuser[100658]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60438 [13/Oct/2025:14:02:47.652] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/214/214 300 515 - - ---- 16/5/4/4/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60406 [13/Oct/2025:14:02:47.540] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/324/324 201 530 - - ---- 16/5/4/4/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60430 [13/Oct/2025:14:02:47.817] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/80/80 200 3099 - - ---- 15/4/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain haproxy[70940]: 172.21.0.2:60422 [13/Oct/2025:14:02:47.834] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/77/77 200 3099 - - ---- 15/4/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:47 standalone.localdomain ansible-async_wrapper.py[100445]: Module complete (100445)
Oct 13 14:02:48 standalone.localdomain python3[100689]: ansible-ansible.legacy.async_status Invoked with jid=432829332897.100360 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60434 [13/Oct/2025:14:02:47.835] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/382/382 201 3535 - - ---- 16/5/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain python3[100708]: ansible-ansible.legacy.async_status Invoked with jid=636683430691.100376 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60438 [13/Oct/2025:14:02:47.869] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/616/616 201 3535 - - ---- 16/5/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain runuser[100658]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60430 [13/Oct/2025:14:02:47.901] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/597/597 200 2808 - - ---- 16/5/4/4/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain runuser[100721]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60422 [13/Oct/2025:14:02:47.916] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/602/602 201 544 - - ---- 16/5/4/4/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60454 [13/Oct/2025:14:02:48.021] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/511/511 300 515 - - ---- 16/5/3/3/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: pgmap v830: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:48 standalone.localdomain python3[100716]: ansible-ansible.legacy.async_status Invoked with jid=357916116060.100441 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.542119) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364168542186, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 897, "num_deletes": 251, "total_data_size": 583066, "memory_usage": 599576, "flush_reason": "Manual Compaction"}
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364168546942, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 570302, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19477, "largest_seqno": 20373, "table_properties": {"data_size": 566468, "index_size": 1499, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10141, "raw_average_key_size": 20, "raw_value_size": 558004, "raw_average_value_size": 1100, "num_data_blocks": 69, "num_entries": 507, "num_filter_entries": 507, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760364114, "oldest_key_time": 1760364114, "file_creation_time": 1760364168, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 4838 microseconds, and 1571 cpu microseconds.
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.546965) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 570302 bytes OK
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.546978) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.548930) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.548943) EVENT_LOG_v1 {"time_micros": 1760364168548939, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.548961) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 578528, prev total WAL file size 578528, number of live WAL files 2.
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.549400) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(556KB)], [44(4257KB)]
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364168549458, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 4929653, "oldest_snapshot_seqno": -1}
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 3372 keys, 3968866 bytes, temperature: kUnknown
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364168565934, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 3968866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 3947930, "index_size": 11432, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 80108, "raw_average_key_size": 23, "raw_value_size": 3888461, "raw_average_value_size": 1153, "num_data_blocks": 494, "num_entries": 3372, "num_filter_entries": 3372, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760364168, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.566204) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 3968866 bytes
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.567734) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 297.0 rd, 239.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 4.2 +0.0 blob) out(3.8 +0.0 blob), read-write-amplify(15.6) write-amplify(7.0) OK, records in: 3890, records dropped: 518 output_compression: NoCompression
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.567768) EVENT_LOG_v1 {"time_micros": 1760364168567757, "job": 22, "event": "compaction_finished", "compaction_time_micros": 16598, "compaction_time_cpu_micros": 8932, "output_level": 6, "num_output_files": 1, "total_output_size": 3968866, "num_input_records": 3890, "num_output_records": 3372, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364168567968, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364168568329, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.549320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.568360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.568363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.568365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.568366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:02:48 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:02:48.568368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60434 [13/Oct/2025:14:02:48.221] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/352/352 200 3099 - - ---- 15/4/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60438 [13/Oct/2025:14:02:48.488] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/105/105 200 3099 - - ---- 15/4/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60430 [13/Oct/2025:14:02:48.502] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/105/105 200 3099 - - ---- 15/4/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain ansible-async_wrapper.py[100502]: Module complete (100502)
Oct 13 14:02:48 standalone.localdomain python3[100770]: ansible-ansible.legacy.async_status Invoked with jid=433155979894.100498 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60454 [13/Oct/2025:14:02:48.535] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/343/343 201 3718 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60434 [13/Oct/2025:14:02:48.576] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/312/312 200 3132 - - ---- 15/4/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60438 [13/Oct/2025:14:02:48.598] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/302/302 200 3132 - - ---- 15/4/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60430 [13/Oct/2025:14:02:48.610] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/306/306 201 530 - - ---- 15/4/3/3/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60454 [13/Oct/2025:14:02:48.883] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/53/53 200 3099 - - ---- 15/4/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60434 [13/Oct/2025:14:02:48.892] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/57/57 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60438 [13/Oct/2025:14:02:48.905] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/56/56 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60454 [13/Oct/2025:14:02:48.941] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/33/33 200 3442 - - ---- 14/3/2/2/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:48 standalone.localdomain haproxy[70940]: 172.21.0.2:60434 [13/Oct/2025:14:02:48.958] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/32/32 201 527 - - ---- 14/3/2/2/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:49 standalone.localdomain haproxy[70940]: 172.21.0.2:60438 [13/Oct/2025:14:02:48.968] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/48/48 201 532 - - ---- 14/3/1/1/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:49 standalone.localdomain ansible-async_wrapper.py[100518]: Module complete (100518)
Oct 13 14:02:49 standalone.localdomain python3[100781]: ansible-ansible.legacy.async_status Invoked with jid=884381504433.100514 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:49 standalone.localdomain haproxy[70940]: 172.21.0.2:60454 [13/Oct/2025:14:02:48.978] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/59/59 200 3099 - - ---- 14/3/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:49 standalone.localdomain haproxy[70940]: 172.21.0.2:60454 [13/Oct/2025:14:02:49.047] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/24/24 201 537 - - ---- 13/2/0/0/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:49 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.1 deep-scrub starts
Oct 13 14:02:49 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.1 deep-scrub ok
Oct 13 14:02:49 standalone.localdomain ansible-async_wrapper.py[100542]: Module complete (100542)
Oct 13 14:02:49 standalone.localdomain ansible-async_wrapper.py[100607]: Module complete (100607)
Oct 13 14:02:49 standalone.localdomain ansible-async_wrapper.py[100623]: Module complete (100623)
Oct 13 14:02:49 standalone.localdomain runuser[100721]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:02:49 standalone.localdomain python3[100788]: ansible-ansible.legacy.async_status Invoked with jid=614147080496.100530 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v831: 177 pgs: 1 active+clean+scrubbing+deep, 176 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:49 standalone.localdomain python3[100797]: ansible-ansible.legacy.async_status Invoked with jid=733150735568.100554 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:02:49 standalone.localdomain ceph-mon[29756]: 7.1 deep-scrub starts
Oct 13 14:02:49 standalone.localdomain ceph-mon[29756]: 7.1 deep-scrub ok
Oct 13 14:02:49 standalone.localdomain python3[100801]: ansible-ansible.legacy.async_status Invoked with jid=902392342257.100619 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:49 standalone.localdomain ansible-async_wrapper.py[100289]: Done in kid B.
Oct 13 14:02:50 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct 13 14:02:50 standalone.localdomain ansible-async_wrapper.py[100347]: Done in kid B.
Oct 13 14:02:50 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct 13 14:02:50 standalone.localdomain ansible-async_wrapper.py[100821]: Invoked with 371672775075 60 /tmp/ansible-root/ansible-tmp-1760364169.873439-100810-53313628512210/AnsiballZ_endpoint.py _
Oct 13 14:02:50 standalone.localdomain ansible-async_wrapper.py[100824]: Starting module and watcher
Oct 13 14:02:50 standalone.localdomain ansible-async_wrapper.py[100824]: Start watching 100825 (60)
Oct 13 14:02:50 standalone.localdomain ansible-async_wrapper.py[100825]: Start module (100825)
Oct 13 14:02:50 standalone.localdomain ansible-async_wrapper.py[100821]: Return async_wrapper task started.
Oct 13 14:02:50 standalone.localdomain python3[100826]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=swift url=http://172.21.0.2:8080/v1/AUTH_%(tenant_id)s endpoint_interface=public region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:50 standalone.localdomain ansible-async_wrapper.py[100363]: Done in kid B.
Oct 13 14:02:50 standalone.localdomain python3[100831]: ansible-ansible.legacy.async_status Invoked with jid=371672775075.100821 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:50 standalone.localdomain ansible-async_wrapper.py[100379]: Done in kid B.
Oct 13 14:02:50 standalone.localdomain ceph-mon[29756]: pgmap v831: 177 pgs: 1 active+clean+scrubbing+deep, 176 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:50 standalone.localdomain ceph-mon[29756]: 7.2 scrub starts
Oct 13 14:02:50 standalone.localdomain ceph-mon[29756]: 7.2 scrub ok
Oct 13 14:02:50 standalone.localdomain haproxy[70940]: 172.21.0.2:37430 [13/Oct/2025:14:02:50.758] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2/2 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:50 standalone.localdomain ansible-async_wrapper.py[100444]: Done in kid B.
Oct 13 14:02:51 standalone.localdomain haproxy[70940]: 172.21.0.2:37430 [13/Oct/2025:14:02:50.762] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/281/281 201 4324 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:51 standalone.localdomain haproxy[70940]: 172.21.0.2:37430 [13/Oct/2025:14:02:51.045] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/11/11 200 3099 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:51 standalone.localdomain haproxy[70940]: 172.21.0.2:37430 [13/Oct/2025:14:02:51.059] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/11/11 200 4378 - - ---- 12/1/0/0/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:51 standalone.localdomain haproxy[70940]: 172.21.0.2:37430 [13/Oct/2025:14:02:51.072] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/11/11 200 3099 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:51 standalone.localdomain haproxy[70940]: 172.21.0.2:37430 [13/Oct/2025:14:02:51.087] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/16/16 201 549 - - ---- 12/1/0/0/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:51 standalone.localdomain ansible-async_wrapper.py[100501]: Done in kid B.
Oct 13 14:02:51 standalone.localdomain ansible-async_wrapper.py[100825]: Module complete (100825)
Oct 13 14:02:51 standalone.localdomain ansible-async_wrapper.py[100517]: Done in kid B.
Oct 13 14:02:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v832: 177 pgs: 1 active+clean+scrubbing+deep, 176 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:51 standalone.localdomain ansible-async_wrapper.py[100541]: Done in kid B.
Oct 13 14:02:51 standalone.localdomain ansible-async_wrapper.py[100606]: Done in kid B.
Oct 13 14:02:52 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct 13 14:02:52 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct 13 14:02:52 standalone.localdomain ansible-async_wrapper.py[100622]: Done in kid B.
Oct 13 14:02:52 standalone.localdomain ceph-mon[29756]: pgmap v832: 177 pgs: 1 active+clean+scrubbing+deep, 176 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:52 standalone.localdomain ceph-mon[29756]: 7.3 scrub starts
Oct 13 14:02:52 standalone.localdomain ceph-mon[29756]: 7.3 scrub ok
Oct 13 14:02:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:02:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:02:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:02:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:02:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:02:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:02:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v833: 177 pgs: 1 active+clean+scrubbing+deep, 176 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:02:53 standalone.localdomain podman[100948]: 2025-10-13 14:02:53.817579796 +0000 UTC m=+0.078971028 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, container_name=keystone_cron, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9)
Oct 13 14:02:53 standalone.localdomain podman[100948]: 2025-10-13 14:02:53.828620688 +0000 UTC m=+0.090011710 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, batch=17.1_20250721.1, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., name=rhosp17/openstack-keystone, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public)
Oct 13 14:02:53 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:02:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:02:54 standalone.localdomain ceph-mon[29756]: pgmap v833: 177 pgs: 1 active+clean+scrubbing+deep, 176 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:55 standalone.localdomain ansible-async_wrapper.py[100824]: Done in kid B.
Oct 13 14:02:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v834: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:55 standalone.localdomain python3[101108]: ansible-ansible.legacy.async_status Invoked with jid=371672775075.100821 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:02:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:02:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:02:55 standalone.localdomain podman[101152]: 2025-10-13 14:02:55.943083425 +0000 UTC m=+0.101257657 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, com.redhat.component=openstack-barbican-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, name=rhosp17/openstack-barbican-api, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, vendor=Red Hat, Inc., distribution-scope=public, container_name=barbican_api)
Oct 13 14:02:55 standalone.localdomain podman[101152]: 2025-10-13 14:02:55.979013689 +0000 UTC m=+0.137187911 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, com.redhat.component=openstack-barbican-api-container, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-api, vcs-type=git, build-date=2025-07-21T15:22:44, container_name=barbican_api, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, name=rhosp17/openstack-barbican-api)
Oct 13 14:02:55 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:02:56 standalone.localdomain systemd[1]: tmp-crun.jNve0t.mount: Deactivated successfully.
Oct 13 14:02:56 standalone.localdomain podman[101153]: 2025-10-13 14:02:56.070976407 +0000 UTC m=+0.224786193 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, release=1, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, batch=17.1_20250721.1, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-barbican-keystone-listener-container, vendor=Red Hat, Inc., build-date=2025-07-21T16:18:19, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, name=rhosp17/openstack-barbican-keystone-listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, container_name=barbican_keystone_listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, tcib_managed=true)
Oct 13 14:02:56 standalone.localdomain podman[101153]: 2025-10-13 14:02:56.097983874 +0000 UTC m=+0.251793710 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, distribution-scope=public, release=1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, build-date=2025-07-21T16:18:19, io.openshift.expose-services=, name=rhosp17/openstack-barbican-keystone-listener, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.component=openstack-barbican-keystone-listener-container, io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=barbican_keystone_listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101237]: Invoked with 689891016018 60 /tmp/ansible-root/ansible-tmp-1760364175.914838-101154-82066310523546/AnsiballZ_endpoint.py _
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101282]: Starting module and watcher
Oct 13 14:02:56 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101282]: Start watching 101283 (60)
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101283]: Start module (101283)
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101237]: Return async_wrapper task started.
Oct 13 14:02:56 standalone.localdomain podman[101155]: 2025-10-13 14:02:56.106257971 +0000 UTC m=+0.256375733 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 barbican-worker, name=rhosp17/openstack-barbican-worker, build-date=2025-07-21T15:36:22, com.redhat.component=openstack-barbican-worker-container, maintainer=OpenStack TripleO Team, container_name=barbican_worker, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=)
Oct 13 14:02:56 standalone.localdomain podman[101155]: 2025-10-13 14:02:56.18792538 +0000 UTC m=+0.338043162 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, release=1, build-date=2025-07-21T15:36:22, description=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 barbican-worker, architecture=x86_64, com.redhat.component=openstack-barbican-worker-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=barbican_worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-worker)
Oct 13 14:02:56 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:02:56 standalone.localdomain python3[101284]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=barbican url=http://172.17.0.2:9311 endpoint_interface=admin region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101300]: Invoked with 133803291853 60 /tmp/ansible-root/ansible-tmp-1760364176.1810708-101154-142303314462031/AnsiballZ_endpoint.py _
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101303]: Starting module and watcher
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101303]: Start watching 101304 (60)
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101304]: Start module (101304)
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101300]: Return async_wrapper task started.
Oct 13 14:02:56 standalone.localdomain python3[101305]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=cinderv3 url=http://172.17.0.2:8776/v3/%(tenant_id)s endpoint_interface=admin region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:56 standalone.localdomain ceph-mon[29756]: pgmap v834: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101316]: Invoked with 695544313734 60 /tmp/ansible-root/ansible-tmp-1760364176.4476514-101154-55380393395938/AnsiballZ_endpoint.py _
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101319]: Starting module and watcher
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101319]: Start watching 101320 (60)
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101320]: Start module (101320)
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101316]: Return async_wrapper task started.
Oct 13 14:02:56 standalone.localdomain python3[101321]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=glance url=http://172.17.0.2:9293 endpoint_interface=admin region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:56 standalone.localdomain haproxy[70940]: 172.21.0.2:37432 [13/Oct/2025:14:02:56.849] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2/2 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101332]: Invoked with 309201929839 60 /tmp/ansible-root/ansible-tmp-1760364176.756422-101154-140206874134744/AnsiballZ_endpoint.py _
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101343]: Starting module and watcher
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101343]: Start watching 101344 (60)
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101344]: Start module (101344)
Oct 13 14:02:56 standalone.localdomain ansible-async_wrapper.py[101332]: Return async_wrapper task started.
Oct 13 14:02:57 standalone.localdomain haproxy[70940]: 172.21.0.2:37432 [13/Oct/2025:14:02:56.853] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/272/272 201 4512 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:57 standalone.localdomain haproxy[70940]: 172.21.0.2:37432 [13/Oct/2025:14:02:57.130] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/10/10 200 3099 - - ---- 13/2/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:57 standalone.localdomain python3[101345]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=heat url=http://172.17.0.2:8004/v1/%(tenant_id)s endpoint_interface=admin region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:57 standalone.localdomain haproxy[70940]: 172.21.0.2:37444 [13/Oct/2025:14:02:57.136] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/6/6 300 515 - - ---- 13/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101356]: Invoked with 58688191570 60 /tmp/ansible-root/ansible-tmp-1760364177.068722-101154-74983357881178/AnsiballZ_endpoint.py _
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101359]: Starting module and watcher
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101359]: Start watching 101360 (60)
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101360]: Start module (101360)
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101356]: Return async_wrapper task started.
Oct 13 14:02:57 standalone.localdomain haproxy[70940]: 172.21.0.2:37444 [13/Oct/2025:14:02:57.143] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/273/273 201 4512 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:57 standalone.localdomain haproxy[70940]: 172.21.0.2:37432 [13/Oct/2025:14:02:57.146] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/286/286 200 4707 - - ---- 14/3/2/2/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:57 standalone.localdomain haproxy[70940]: 172.21.0.2:37446 [13/Oct/2025:14:02:57.381] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/53/53 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:57 standalone.localdomain python3[101361]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=heat-cfn url=http://172.17.0.2:8000/v1 endpoint_interface=admin region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:57 standalone.localdomain haproxy[70940]: 172.21.0.2:37444 [13/Oct/2025:14:02:57.419] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/35/35 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:57 standalone.localdomain haproxy[70940]: 172.21.0.2:37432 [13/Oct/2025:14:02:57.437] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/28/28 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v835: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101372]: Invoked with 830893222608 60 /tmp/ansible-root/ansible-tmp-1760364177.3589427-101154-72174445228103/AnsiballZ_endpoint.py _
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101375]: Starting module and watcher
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101375]: Start watching 101376 (60)
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101376]: Start module (101376)
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101372]: Return async_wrapper task started.
Oct 13 14:02:57 standalone.localdomain haproxy[70940]: 172.21.0.2:37446 [13/Oct/2025:14:02:57.438] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/288/288 201 4512 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:57 standalone.localdomain haproxy[70940]: 172.21.0.2:37444 [13/Oct/2025:14:02:57.459] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/273/273 200 4707 - - ---- 15/4/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:57 standalone.localdomain haproxy[70940]: 172.21.0.2:37432 [13/Oct/2025:14:02:57.470] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/277/277 201 526 - - ---- 15/4/3/3/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:57 standalone.localdomain haproxy[70940]: 172.21.0.2:37462 [13/Oct/2025:14:02:57.667] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/80/80 300 515 - - ---- 15/4/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:57 standalone.localdomain haproxy[70940]: 172.21.0.2:37446 [13/Oct/2025:14:02:57.729] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/36/36 200 3099 - - ---- 15/4/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:57 standalone.localdomain python3[101377]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=manila url=http://172.17.0.2:8786/v1/%(tenant_id)s endpoint_interface=admin region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:57 standalone.localdomain haproxy[70940]: 172.21.0.2:37444 [13/Oct/2025:14:02:57.739] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/36/36 200 3099 - - ---- 15/4/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101283]: Module complete (101283)
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101388]: Invoked with 179138187889 60 /tmp/ansible-root/ansible-tmp-1760364177.6313074-101154-22718994678706/AnsiballZ_endpoint.py _
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101395]: Starting module and watcher
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101395]: Start watching 101399 (60)
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101399]: Start module (101399)
Oct 13 14:02:57 standalone.localdomain ansible-async_wrapper.py[101388]: Return async_wrapper task started.
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37462 [13/Oct/2025:14:02:57.751] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/283/283 201 4660 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37446 [13/Oct/2025:14:02:57.769] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/276/276 200 5013 - - ---- 15/4/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37444 [13/Oct/2025:14:02:57.783] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/278/278 201 543 - - ---- 15/4/3/3/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37464 [13/Oct/2025:14:02:57.986] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/76/76 300 515 - - ---- 15/4/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37462 [13/Oct/2025:14:02:58.037] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/42/42 200 3099 - - ---- 15/4/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37446 [13/Oct/2025:14:02:58.051] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/38/38 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain python3[101400]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=manilav2 url=http://172.17.0.2:8786/v2 endpoint_interface=admin region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101304]: Module complete (101304)
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101412]: Invoked with 258400775622 60 /tmp/ansible-root/ansible-tmp-1760364177.9925475-101154-224062085327225/AnsiballZ_endpoint.py _
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101415]: Starting module and watcher
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101415]: Start watching 101416 (60)
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101416]: Start module (101416)
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101412]: Return async_wrapper task started.
Oct 13 14:02:58 standalone.localdomain python3[101417]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=neutron url=http://172.17.0.2:9696 endpoint_interface=admin region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37464 [13/Oct/2025:14:02:58.065] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/282/282 201 4844 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37462 [13/Oct/2025:14:02:58.083] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/276/276 200 5336 - - ---- 15/4/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37446 [13/Oct/2025:14:02:58.098] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/274/274 201 526 - - ---- 15/4/3/3/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37480 [13/Oct/2025:14:02:58.334] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/40/40 300 515 - - ---- 15/4/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37464 [13/Oct/2025:14:02:58.354] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/38/38 200 3099 - - ---- 15/4/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37462 [13/Oct/2025:14:02:58.362] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/41/41 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101320]: Module complete (101320)
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101428]: Invoked with 224784376298 60 /tmp/ansible-root/ansible-tmp-1760364178.2612777-101154-184339460851415/AnsiballZ_endpoint.py _
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101431]: Starting module and watcher
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101431]: Start watching 101432 (60)
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101432]: Start module (101432)
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101428]: Return async_wrapper task started.
Oct 13 14:02:58 standalone.localdomain ceph-mon[29756]: pgmap v835: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:58 standalone.localdomain python3[101433]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=nova url=http://172.17.0.2:8774/v2.1 endpoint_interface=admin region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37480 [13/Oct/2025:14:02:58.375] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/289/289 201 4992 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37464 [13/Oct/2025:14:02:58.400] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/276/276 200 5642 - - ---- 15/4/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37462 [13/Oct/2025:14:02:58.407] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/283/283 201 543 - - ---- 15/4/3/3/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37490 [13/Oct/2025:14:02:58.647] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/48/48 300 515 - - ---- 15/4/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37480 [13/Oct/2025:14:02:58.669] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/47/47 200 3099 - - ---- 15/4/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37464 [13/Oct/2025:14:02:58.680] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/49/49 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101344]: Module complete (101344)
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101444]: Invoked with 669827091703 60 /tmp/ansible-root/ansible-tmp-1760364178.5871272-101154-240311757196174/AnsiballZ_endpoint.py _
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101447]: Starting module and watcher
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101447]: Start watching 101448 (60)
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101448]: Start module (101448)
Oct 13 14:02:58 standalone.localdomain ansible-async_wrapper.py[101444]: Return async_wrapper task started.
Oct 13 14:02:58 standalone.localdomain python3[101449]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=placement url=http://172.17.0.2:8778/placement endpoint_interface=admin region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:02:58 standalone.localdomain haproxy[70940]: 172.21.0.2:37490 [13/Oct/2025:14:02:58.699] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/293/293 201 5176 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37480 [13/Oct/2025:14:02:58.721] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/282/282 200 5965 - - ---- 15/4/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37464 [13/Oct/2025:14:02:58.734] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/288/288 201 529 - - ---- 15/4/3/3/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37504 [13/Oct/2025:14:02:58.877] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/147/147 300 515 - - ---- 15/4/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37490 [13/Oct/2025:14:02:58.995] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/47/47 200 3099 - - ---- 15/4/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37480 [13/Oct/2025:14:02:59.007] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/46/46 200 3099 - - ---- 15/4/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain python3[101461]: ansible-ansible.legacy.async_status Invoked with jid=689891016018.101237 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:59 standalone.localdomain ansible-async_wrapper.py[101360]: Module complete (101360)
Oct 13 14:02:59 standalone.localdomain python3[101465]: ansible-ansible.legacy.async_status Invoked with jid=133803291853.101300 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37504 [13/Oct/2025:14:02:59.025] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/285/285 201 5327 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37490 [13/Oct/2025:14:02:59.045] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/275/275 200 6274 - - ---- 15/4/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37480 [13/Oct/2025:14:02:59.057] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/278/278 201 543 - - ---- 15/4/2/2/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37508 [13/Oct/2025:14:02:59.207] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/128/128 300 515 - - ---- 15/4/3/3/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37504 [13/Oct/2025:14:02:59.317] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/36/36 200 3099 - - ---- 15/4/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37490 [13/Oct/2025:14:02:59.323] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/42/42 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain ansible-async_wrapper.py[101376]: Module complete (101376)
Oct 13 14:02:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v836: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:02:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:02:59 standalone.localdomain python3[101468]: ansible-ansible.legacy.async_status Invoked with jid=695544313734.101316 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37508 [13/Oct/2025:14:02:59.339] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/286/286 201 5511 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37504 [13/Oct/2025:14:02:59.361] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/276/276 200 6597 - - ---- 15/4/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37490 [13/Oct/2025:14:02:59.369] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/286/286 201 529 - - ---- 15/4/3/3/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37514 [13/Oct/2025:14:02:59.502] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/155/155 300 515 - - ---- 15/4/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37508 [13/Oct/2025:14:02:59.631] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/52/52 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37504 [13/Oct/2025:14:02:59.642] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/57/57 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain ansible-async_wrapper.py[101399]: Module complete (101399)
Oct 13 14:02:59 standalone.localdomain python3[101471]: ansible-ansible.legacy.async_status Invoked with jid=309201929839.101332 mode=status _async_dir=/root/.ansible_async
Oct 13 14:02:59 standalone.localdomain runuser[101476]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37514 [13/Oct/2025:14:02:59.674] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/281/281 201 5662 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37508 [13/Oct/2025:14:02:59.691] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/275/275 200 6906 - - ---- 14/3/2/2/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37504 [13/Oct/2025:14:02:59.703] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/280/280 201 526 - - ---- 14/3/2/2/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:02:59 standalone.localdomain haproxy[70940]: 172.21.0.2:37514 [13/Oct/2025:14:02:59.960] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/39/39 200 3099 - - ---- 14/3/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:00 standalone.localdomain haproxy[70940]: 172.21.0.2:37508 [13/Oct/2025:14:02:59.973] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/38/38 200 3099 - - ---- 14/3/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:00 standalone.localdomain haproxy[70940]: 172.21.0.2:37514 [13/Oct/2025:14:03:00.004] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/17/17 200 7212 - - ---- 13/2/1/1/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:03:00 standalone.localdomain haproxy[70940]: 172.21.0.2:37508 [13/Oct/2025:14:03:00.018] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/17/17 201 531 - - ---- 13/2/1/1/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:03:00 standalone.localdomain haproxy[70940]: 172.21.0.2:37514 [13/Oct/2025:14:03:00.024] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/24/24 200 3099 - - ---- 13/2/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:00 standalone.localdomain python3[101495]: ansible-ansible.legacy.async_status Invoked with jid=58688191570.101356 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:00 standalone.localdomain haproxy[70940]: 172.21.0.2:37514 [13/Oct/2025:14:03:00.053] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/15/15 201 536 - - ---- 13/2/0/0/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:03:00 standalone.localdomain ansible-async_wrapper.py[101416]: Module complete (101416)
Oct 13 14:03:00 standalone.localdomain podman[101535]: 2025-10-13 14:03:00.141551297 +0000 UTC m=+0.081938108 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:45, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 14:03:00 standalone.localdomain ansible-async_wrapper.py[101432]: Module complete (101432)
Oct 13 14:03:00 standalone.localdomain ansible-async_wrapper.py[101448]: Module complete (101448)
Oct 13 14:03:00 standalone.localdomain podman[101535]: 2025-10-13 14:03:00.174177178 +0000 UTC m=+0.114563719 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, distribution-scope=public, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 14:03:00 standalone.localdomain systemd[1]: tmp-crun.KNHR5W.mount: Deactivated successfully.
Oct 13 14:03:00 standalone.localdomain podman[101544]: 2025-10-13 14:03:00.272646248 +0000 UTC m=+0.183339590 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:08:11, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.component=openstack-haproxy-container, io.openshift.expose-services=, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vcs-type=git, description=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy)
Oct 13 14:03:00 standalone.localdomain python3[101583]: ansible-ansible.legacy.async_status Invoked with jid=830893222608.101372 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:00 standalone.localdomain podman[101544]: 2025-10-13 14:03:00.304878206 +0000 UTC m=+0.215571608 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, name=rhosp17/openstack-haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.component=openstack-haproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, build-date=2025-07-21T13:08:11, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64)
Oct 13 14:03:00 standalone.localdomain podman[101570]: 2025-10-13 14:03:00.383798272 +0000 UTC m=+0.227459018 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, name=rhosp17/openstack-rabbitmq, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, com.redhat.component=openstack-rabbitmq-container, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 14:03:00 standalone.localdomain podman[101570]: 2025-10-13 14:03:00.415189974 +0000 UTC m=+0.258850700 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, version=17.1.9, com.redhat.component=openstack-rabbitmq-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, distribution-scope=public)
Oct 13 14:03:00 standalone.localdomain runuser[101476]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:03:00 standalone.localdomain python3[101616]: ansible-ansible.legacy.async_status Invoked with jid=179138187889.101388 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:00 standalone.localdomain runuser[101652]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:03:00 standalone.localdomain ceph-mon[29756]: pgmap v836: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:00 standalone.localdomain python3[101651]: ansible-ansible.legacy.async_status Invoked with jid=258400775622.101412 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:00 standalone.localdomain python3[101699]: ansible-ansible.legacy.async_status Invoked with jid=224784376298.101428 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:01 standalone.localdomain ansible-async_wrapper.py[101282]: Done in kid B.
Oct 13 14:03:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:03:01 standalone.localdomain podman[101711]: 2025-10-13 14:03:01.226396901 +0000 UTC m=+0.057247704 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=ovn_cluster_northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, tcib_managed=true, container_name=ovn_cluster_northd, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T13:30:04, com.redhat.component=openstack-ovn-northd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, release=1, vendor=Red Hat, Inc.)
Oct 13 14:03:01 standalone.localdomain podman[101711]: 2025-10-13 14:03:01.238754305 +0000 UTC m=+0.069605128 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, config_id=ovn_cluster_northd, com.redhat.component=openstack-ovn-northd-container, description=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-northd, tcib_managed=true, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_cluster_northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible)
Oct 13 14:03:01 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:03:01 standalone.localdomain python3[101710]: ansible-ansible.legacy.async_status Invoked with jid=669827091703.101444 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:01 standalone.localdomain ansible-async_wrapper.py[101303]: Done in kid B.
Oct 13 14:03:01 standalone.localdomain runuser[101652]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:03:01 standalone.localdomain runuser[101740]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:03:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v837: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:01 standalone.localdomain ansible-async_wrapper.py[101797]: Invoked with 773258891774 60 /tmp/ansible-root/ansible-tmp-1760364181.431741-101737-126124572931699/AnsiballZ_endpoint.py _
Oct 13 14:03:01 standalone.localdomain ansible-async_wrapper.py[101800]: Starting module and watcher
Oct 13 14:03:01 standalone.localdomain ansible-async_wrapper.py[101800]: Start watching 101801 (60)
Oct 13 14:03:01 standalone.localdomain ansible-async_wrapper.py[101801]: Start module (101801)
Oct 13 14:03:01 standalone.localdomain ansible-async_wrapper.py[101797]: Return async_wrapper task started.
Oct 13 14:03:01 standalone.localdomain ansible-async_wrapper.py[101319]: Done in kid B.
Oct 13 14:03:01 standalone.localdomain python3[101802]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=swift url=http://172.18.0.2:8080 endpoint_interface=admin region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:01 standalone.localdomain ansible-async_wrapper.py[101343]: Done in kid B.
Oct 13 14:03:01 standalone.localdomain python3[101807]: ansible-ansible.legacy.async_status Invoked with jid=773258891774.101797 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:02 standalone.localdomain runuser[101740]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:03:02 standalone.localdomain ansible-async_wrapper.py[101359]: Done in kid B.
Oct 13 14:03:02 standalone.localdomain haproxy[70940]: 172.21.0.2:51964 [13/Oct/2025:14:03:02.442] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:02 standalone.localdomain ansible-async_wrapper.py[101375]: Done in kid B.
Oct 13 14:03:02 standalone.localdomain ceph-mon[29756]: pgmap v837: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:02 standalone.localdomain haproxy[70940]: 172.21.0.2:51964 [13/Oct/2025:14:03:02.449] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/284/284 201 6121 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:02 standalone.localdomain haproxy[70940]: 172.21.0.2:51964 [13/Oct/2025:14:03:02.738] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/14/14 200 3099 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:02 standalone.localdomain haproxy[70940]: 172.21.0.2:51964 [13/Oct/2025:14:03:02.758] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/14/14 200 7839 - - ---- 12/1/0/0/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:03:02 standalone.localdomain haproxy[70940]: 172.21.0.2:51964 [13/Oct/2025:14:03:02.777] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/13/13 200 3099 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:02 standalone.localdomain haproxy[70940]: 172.21.0.2:51964 [13/Oct/2025:14:03:02.793] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/16/16 201 526 - - ---- 12/1/0/0/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:03:02 standalone.localdomain ansible-async_wrapper.py[101801]: Module complete (101801)
Oct 13 14:03:02 standalone.localdomain ansible-async_wrapper.py[101395]: Done in kid B.
Oct 13 14:03:03 standalone.localdomain ansible-async_wrapper.py[101415]: Done in kid B.
Oct 13 14:03:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v838: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:03 standalone.localdomain ansible-async_wrapper.py[101431]: Done in kid B.
Oct 13 14:03:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:03:03 standalone.localdomain podman[101891]: 2025-10-13 14:03:03.619697036 +0000 UTC m=+0.074017684 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, config_id=tripleo_step2, version=17.1.9, com.redhat.component=openstack-mariadb-container, container_name=clustercheck, build-date=2025-07-21T12:58:45, distribution-scope=public, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:03:03 standalone.localdomain podman[101891]: 2025-10-13 14:03:03.686228386 +0000 UTC m=+0.140549034 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, architecture=x86_64, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, config_id=tripleo_step2, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, container_name=clustercheck, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, build-date=2025-07-21T12:58:45, managed_by=tripleo_ansible, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 13 14:03:03 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:03:03 standalone.localdomain ansible-async_wrapper.py[101447]: Done in kid B.
Oct 13 14:03:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:03:04 standalone.localdomain ceph-mon[29756]: pgmap v838: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v839: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:05 standalone.localdomain ceph-mon[29756]: pgmap v839: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:06 standalone.localdomain ansible-async_wrapper.py[101800]: Done in kid B.
Oct 13 14:03:07 standalone.localdomain python3[102161]: ansible-ansible.legacy.async_status Invoked with jid=773258891774.101797 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v840: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:07 standalone.localdomain ansible-async_wrapper.py[102183]: Invoked with 199745938798 60 /tmp/ansible-root/ansible-tmp-1760364187.4885607-102172-156636882926310/AnsiballZ_endpoint.py _
Oct 13 14:03:07 standalone.localdomain ansible-async_wrapper.py[102186]: Starting module and watcher
Oct 13 14:03:07 standalone.localdomain ansible-async_wrapper.py[102186]: Start watching 102187 (60)
Oct 13 14:03:07 standalone.localdomain ansible-async_wrapper.py[102187]: Start module (102187)
Oct 13 14:03:07 standalone.localdomain ansible-async_wrapper.py[102183]: Return async_wrapper task started.
Oct 13 14:03:07 standalone.localdomain python3[102188]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=barbican url=http://172.17.0.2:9311 endpoint_interface=internal region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:07 standalone.localdomain ansible-async_wrapper.py[102200]: Invoked with 339927603649 60 /tmp/ansible-root/ansible-tmp-1760364187.7494195-102172-186151580563994/AnsiballZ_endpoint.py _
Oct 13 14:03:07 standalone.localdomain ansible-async_wrapper.py[102203]: Starting module and watcher
Oct 13 14:03:07 standalone.localdomain ansible-async_wrapper.py[102203]: Start watching 102204 (60)
Oct 13 14:03:07 standalone.localdomain ansible-async_wrapper.py[102204]: Start module (102204)
Oct 13 14:03:07 standalone.localdomain ansible-async_wrapper.py[102200]: Return async_wrapper task started.
Oct 13 14:03:08 standalone.localdomain python3[102205]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=cinderv3 url=http://172.17.0.2:8776/v3/%(tenant_id)s endpoint_interface=internal region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:08 standalone.localdomain ansible-async_wrapper.py[102216]: Invoked with 895853747547 60 /tmp/ansible-root/ansible-tmp-1760364188.0505385-102172-156278920733345/AnsiballZ_endpoint.py _
Oct 13 14:03:08 standalone.localdomain ansible-async_wrapper.py[102227]: Starting module and watcher
Oct 13 14:03:08 standalone.localdomain ansible-async_wrapper.py[102227]: Start watching 102228 (60)
Oct 13 14:03:08 standalone.localdomain ansible-async_wrapper.py[102228]: Start module (102228)
Oct 13 14:03:08 standalone.localdomain ansible-async_wrapper.py[102216]: Return async_wrapper task started.
Oct 13 14:03:08 standalone.localdomain python3[102229]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=glance url=http://172.17.0.2:9293 endpoint_interface=internal region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:08 standalone.localdomain haproxy[70940]: 172.21.0.2:51980 [13/Oct/2025:14:03:08.382] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:08 standalone.localdomain ansible-async_wrapper.py[102240]: Invoked with 452124137992 60 /tmp/ansible-root/ansible-tmp-1760364188.304064-102172-259409516507487/AnsiballZ_endpoint.py _
Oct 13 14:03:08 standalone.localdomain ansible-async_wrapper.py[102243]: Starting module and watcher
Oct 13 14:03:08 standalone.localdomain ansible-async_wrapper.py[102243]: Start watching 102244 (60)
Oct 13 14:03:08 standalone.localdomain ansible-async_wrapper.py[102244]: Start module (102244)
Oct 13 14:03:08 standalone.localdomain ansible-async_wrapper.py[102240]: Return async_wrapper task started.
Oct 13 14:03:08 standalone.localdomain ceph-mon[29756]: pgmap v840: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:08 standalone.localdomain haproxy[70940]: 172.21.0.2:51980 [13/Oct/2025:14:03:08.388] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/298/298 201 6269 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:08 standalone.localdomain haproxy[70940]: 172.21.0.2:51980 [13/Oct/2025:14:03:08.691] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/12/12 200 3099 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:08 standalone.localdomain haproxy[70940]: 172.21.0.2:51996 [13/Oct/2025:14:03:08.704] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 13/2/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:08 standalone.localdomain haproxy[70940]: 172.21.0.2:51980 [13/Oct/2025:14:03:08.707] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/15/16 200 8145 - - ---- 13/2/1/1/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:03:08 standalone.localdomain python3[102245]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=heat url=http://172.17.0.2:8004/v1/%(tenant_id)s endpoint_interface=internal region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:08 standalone.localdomain ansible-async_wrapper.py[102256]: Invoked with 356168878389 60 /tmp/ansible-root/ansible-tmp-1760364188.6212418-102172-2308917380293/AnsiballZ_endpoint.py _
Oct 13 14:03:08 standalone.localdomain ansible-async_wrapper.py[102259]: Starting module and watcher
Oct 13 14:03:08 standalone.localdomain ansible-async_wrapper.py[102259]: Start watching 102260 (60)
Oct 13 14:03:08 standalone.localdomain ansible-async_wrapper.py[102260]: Start module (102260)
Oct 13 14:03:08 standalone.localdomain ansible-async_wrapper.py[102256]: Return async_wrapper task started.
Oct 13 14:03:08 standalone.localdomain python3[102261]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=heat-cfn url=http://172.17.0.2:8000/v1 endpoint_interface=internal region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:08 standalone.localdomain haproxy[70940]: 172.21.0.2:51996 [13/Oct/2025:14:03:08.715] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/270/270 201 6269 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:08 standalone.localdomain haproxy[70940]: 172.21.0.2:51980 [13/Oct/2025:14:03:08.727] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/271/271 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:52008 [13/Oct/2025:14:03:08.925] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/78/78 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:51996 [13/Oct/2025:14:03:08.988] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/28/28 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:51980 [13/Oct/2025:14:03:09.003] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/32/32 201 529 - - ---- 14/3/2/2/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102272]: Invoked with 853512254706 60 /tmp/ansible-root/ansible-tmp-1760364188.891037-102172-89109730313615/AnsiballZ_endpoint.py _
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102275]: Starting module and watcher
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102275]: Start watching 102276 (60)
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102276]: Start module (102276)
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102272]: Return async_wrapper task started.
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102187]: Module complete (102187)
Oct 13 14:03:09 standalone.localdomain python3[102277]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=manila url=http://172.17.0.2:8786/v1/%(tenant_id)s endpoint_interface=internal region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:52008 [13/Oct/2025:14:03:09.005] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/296/296 201 6420 - - ---- 13/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:51996 [13/Oct/2025:14:03:09.019] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/292/292 200 8454 - - ---- 13/2/1/1/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:52008 [13/Oct/2025:14:03:09.305] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/17/17 200 3099 - - ---- 13/2/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:51996 [13/Oct/2025:14:03:09.315] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/16/16 200 3099 - - ---- 13/2/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:52008 [13/Oct/2025:14:03:09.327] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/19/19 200 8454 - - ---- 13/2/1/1/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102292]: Invoked with 782179867518 60 /tmp/ansible-root/ansible-tmp-1760364189.166744-102172-3996728619972/AnsiballZ_endpoint.py _
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102299]: Starting module and watcher
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102299]: Start watching 102300 (60)
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:51996 [13/Oct/2025:14:03:09.335] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/30/30 201 546 - - ---- 14/3/2/2/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:52024 [13/Oct/2025:14:03:09.349] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/16/16 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102300]: Start module (102300)
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102292]: Return async_wrapper task started.
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:52008 [13/Oct/2025:14:03:09.352] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/34/34 200 3099 - - ---- 14/3/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102204]: Module complete (102204)
Oct 13 14:03:09 standalone.localdomain podman[102302]: 2025-10-13 14:03:09.478670632 +0000 UTC m=+0.072625310 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, batch=17.1_20250721.1, container_name=iscsid, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public)
Oct 13 14:03:09 standalone.localdomain podman[102302]: 2025-10-13 14:03:09.486346551 +0000 UTC m=+0.080301249 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1)
Oct 13 14:03:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v841: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:09 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:03:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:03:09 standalone.localdomain python3[102301]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=manilav2 url=http://172.17.0.2:8786/v2 endpoint_interface=internal region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102329]: Invoked with 91096664032 60 /tmp/ansible-root/ansible-tmp-1760364189.4401221-102172-92307892674765/AnsiballZ_endpoint.py _
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102332]: Starting module and watcher
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102332]: Start watching 102333 (60)
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102333]: Start module (102333)
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102329]: Return async_wrapper task started.
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:52024 [13/Oct/2025:14:03:09.374] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/277/277 201 6607 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:52008 [13/Oct/2025:14:03:09.394] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/269/269 201 529 - - ---- 14/3/2/2/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:52038 [13/Oct/2025:14:03:09.491] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/177/177 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:52024 [13/Oct/2025:14:03:09.654] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/30/30 200 3099 - - ---- 14/3/1/1/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102228]: Module complete (102228)
Oct 13 14:03:09 standalone.localdomain python3[102334]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=neutron url=http://172.17.0.2:9696 endpoint_interface=internal region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102345]: Invoked with 117844826629 60 /tmp/ansible-root/ansible-tmp-1760364189.662113-102172-163352446245908/AnsiballZ_endpoint.py _
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102348]: Starting module and watcher
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102348]: Start watching 102349 (60)
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102349]: Start module (102349)
Oct 13 14:03:09 standalone.localdomain ansible-async_wrapper.py[102345]: Return async_wrapper task started.
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:52038 [13/Oct/2025:14:03:09.673] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/273/273 201 6758 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:52024 [13/Oct/2025:14:03:09.689] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/268/268 200 9089 - - ---- 14/3/2/2/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:54122 [13/Oct/2025:14:03:09.884] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/75/75 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:09 standalone.localdomain haproxy[70940]: 172.21.0.2:52038 [13/Oct/2025:14:03:09.950] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/21/21 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:10 standalone.localdomain python3[102350]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=nova url=http://172.17.0.2:8774/v2.1 endpoint_interface=internal region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:10 standalone.localdomain ansible-async_wrapper.py[102361]: Invoked with 956774550400 60 /tmp/ansible-root/ansible-tmp-1760364189.907403-102172-255060849421010/AnsiballZ_endpoint.py _
Oct 13 14:03:10 standalone.localdomain ansible-async_wrapper.py[102364]: Starting module and watcher
Oct 13 14:03:10 standalone.localdomain ansible-async_wrapper.py[102364]: Start watching 102365 (60)
Oct 13 14:03:10 standalone.localdomain ansible-async_wrapper.py[102365]: Start module (102365)
Oct 13 14:03:10 standalone.localdomain ansible-async_wrapper.py[102361]: Return async_wrapper task started.
Oct 13 14:03:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:03:10 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct 13 14:03:10 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct 13 14:03:10 standalone.localdomain haproxy[70940]: 172.21.0.2:54122 [13/Oct/2025:14:03:09.961] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/284/284 201 6758 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:10 standalone.localdomain haproxy[70940]: 172.21.0.2:52024 [13/Oct/2025:14:03:09.961] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/293/293 200 3099 - - ---- 15/4/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:10 standalone.localdomain podman[102367]: 2025-10-13 14:03:10.266495606 +0000 UTC m=+0.073100625 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, vcs-type=git, build-date=2025-07-21T13:27:18, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, name=rhosp17/openstack-keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, com.redhat.component=openstack-keystone-container, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, maintainer=OpenStack TripleO Team)
Oct 13 14:03:10 standalone.localdomain haproxy[70940]: 172.21.0.2:52038 [13/Oct/2025:14:03:09.976] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/291/291 200 9089 - - ---- 15/4/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:03:10 standalone.localdomain haproxy[70940]: 172.21.0.2:54132 [13/Oct/2025:14:03:10.133] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/134/134 300 515 - - ---- 15/4/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:10 standalone.localdomain haproxy[70940]: 172.21.0.2:54122 [13/Oct/2025:14:03:10.247] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/33/33 200 3099 - - ---- 15/4/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:10 standalone.localdomain haproxy[70940]: 172.21.0.2:52024 [13/Oct/2025:14:03:10.258] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/37/37 201 546 - - ---- 15/4/3/3/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:03:10 standalone.localdomain python3[102366]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=placement url=http://172.17.0.2:8778/placement endpoint_interface=internal region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:10 standalone.localdomain haproxy[70940]: 172.21.0.2:52038 [13/Oct/2025:14:03:10.270] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/41/41 200 3099 - - ---- 15/4/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:10 standalone.localdomain ansible-async_wrapper.py[102244]: Module complete (102244)
Oct 13 14:03:10 standalone.localdomain python3[102400]: ansible-ansible.legacy.async_status Invoked with jid=199745938798.102183 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:10 standalone.localdomain ceph-mon[29756]: pgmap v841: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:10 standalone.localdomain ceph-mon[29756]: 7.4 scrub starts
Oct 13 14:03:10 standalone.localdomain ceph-mon[29756]: 7.4 scrub ok
Oct 13 14:03:10 standalone.localdomain haproxy[70940]: 172.21.0.2:54132 [13/Oct/2025:14:03:10.272] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/309/309 201 6945 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:10 standalone.localdomain haproxy[70940]: 172.21.0.2:54122 [13/Oct/2025:14:03:10.284] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/311/311 200 9415 - - ---- 16/5/4/4/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:03:10 standalone.localdomain haproxy[70940]: 172.21.0.2:54134 [13/Oct/2025:14:03:10.372] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/245/245 300 515 - - ---- 16/5/4/4/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:10 standalone.localdomain haproxy[70940]: 172.21.0.2:52038 [13/Oct/2025:14:03:10.314] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/303/303 201 532 - - ---- 16/5/3/3/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:03:10 standalone.localdomain haproxy[70940]: 172.21.0.2:54144 [13/Oct/2025:14:03:10.583] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/37/37 300 515 - - ---- 16/5/3/3/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:10 standalone.localdomain podman[102367]: 2025-10-13 14:03:10.627018284 +0000 UTC m=+0.433623293 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, tcib_managed=true, config_id=tripleo_step3, container_name=keystone, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, version=17.1.9, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.expose-services=, name=rhosp17/openstack-keystone, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-keystone-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T13:27:18, description=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1)
Oct 13 14:03:10 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:03:10 standalone.localdomain haproxy[70940]: 172.21.0.2:54132 [13/Oct/2025:14:03:10.588] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/51/51 200 3099 - - ---- 16/5/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:10 standalone.localdomain haproxy[70940]: 172.21.0.2:54122 [13/Oct/2025:14:03:10.599] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/50/50 200 3099 - - ---- 15/4/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:10 standalone.localdomain python3[102407]: ansible-ansible.legacy.async_status Invoked with jid=339927603649.102200 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:10 standalone.localdomain ansible-async_wrapper.py[102260]: Module complete (102260)
Oct 13 14:03:10 standalone.localdomain python3[102410]: ansible-ansible.legacy.async_status Invoked with jid=895853747547.102216 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:10 standalone.localdomain haproxy[70940]: 172.21.0.2:54134 [13/Oct/2025:14:03:10.620] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/296/296 201 7099 - - ---- 16/5/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain python3[102413]: ansible-ansible.legacy.async_status Invoked with jid=452124137992.102240 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54144 [13/Oct/2025:14:03:10.622] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/555/555 201 7099 - - ---- 16/5/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54132 [13/Oct/2025:14:03:10.646] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/543/543 200 9727 - - ---- 16/5/4/4/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54122 [13/Oct/2025:14:03:10.653] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/556/556 201 546 - - ---- 16/5/4/4/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54146 [13/Oct/2025:14:03:10.830] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/381/381 300 515 - - ---- 16/5/3/3/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54134 [13/Oct/2025:14:03:10.927] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/303/303 200 3099 - - ---- 16/5/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54144 [13/Oct/2025:14:03:11.184] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/62/62 200 3099 - - ---- 16/5/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54132 [13/Oct/2025:14:03:11.198] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/62/62 200 3099 - - ---- 15/4/3/3/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain ansible-async_wrapper.py[102276]: Module complete (102276)
Oct 13 14:03:11 standalone.localdomain python3[102416]: ansible-ansible.legacy.async_status Invoked with jid=356168878389.102256 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v842: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:11 standalone.localdomain python3[102427]: ansible-ansible.legacy.async_status Invoked with jid=853512254706.102272 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54146 [13/Oct/2025:14:03:11.229] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/290/290 201 7286 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54134 [13/Oct/2025:14:03:11.237] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/294/294 200 10053 - - ---- 15/4/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54144 [13/Oct/2025:14:03:11.254] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/287/287 200 10053 - - ---- 15/4/3/3/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54132 [13/Oct/2025:14:03:11.267] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/290/290 201 532 - - ---- 15/4/3/3/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54146 [13/Oct/2025:14:03:11.527] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/54/54 200 3099 - - ---- 15/4/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54134 [13/Oct/2025:14:03:11.539] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/52/52 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54144 [13/Oct/2025:14:03:11.550] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/51/51 200 3099 - - ---- 14/3/2/2/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54146 [13/Oct/2025:14:03:11.589] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/23/23 200 10366 - - ---- 14/3/2/2/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54134 [13/Oct/2025:14:03:11.600] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/26/26 201 529 - - ---- 14/3/2/2/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain ansible-async_wrapper.py[102300]: Module complete (102300)
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54144 [13/Oct/2025:14:03:11.607] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/49/49 201 534 - - ---- 14/3/1/1/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54146 [13/Oct/2025:14:03:11.618] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/64/64 200 3099 - - ---- 13/2/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54146 [13/Oct/2025:14:03:11.690] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/20/20 201 539 - - ---- 12/1/0/0/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:03:11 standalone.localdomain ansible-async_wrapper.py[102349]: Module complete (102349)
Oct 13 14:03:11 standalone.localdomain python3[102430]: ansible-ansible.legacy.async_status Invoked with jid=782179867518.102292 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:11 standalone.localdomain ansible-async_wrapper.py[102333]: Module complete (102333)
Oct 13 14:03:11 standalone.localdomain ansible-async_wrapper.py[102365]: Module complete (102365)
Oct 13 14:03:11 standalone.localdomain python3[102433]: ansible-ansible.legacy.async_status Invoked with jid=91096664032.102329 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:12 standalone.localdomain python3[102461]: ansible-ansible.legacy.async_status Invoked with jid=117844826629.102345 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:12 standalone.localdomain python3[102502]: ansible-ansible.legacy.async_status Invoked with jid=956774550400.102361 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:12 standalone.localdomain ceph-mon[29756]: pgmap v842: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:12 standalone.localdomain runuser[102518]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:03:12 standalone.localdomain ansible-async_wrapper.py[102186]: Done in kid B.
Oct 13 14:03:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:03:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:03:12 standalone.localdomain systemd[1]: tmp-crun.DYaX5B.mount: Deactivated successfully.
Oct 13 14:03:12 standalone.localdomain podman[102536]: 2025-10-13 14:03:12.830995935 +0000 UTC m=+0.097711528 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vcs-type=git, com.redhat.component=openstack-memcached-container, io.openshift.expose-services=, 
config_id=tripleo_step1, maintainer=OpenStack TripleO Team, container_name=memcached, io.buildah.version=1.33.12, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:43, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:03:12 standalone.localdomain ansible-async_wrapper.py[102520]: Invoked with 440932491754 60 /tmp/ansible-root/ansible-tmp-1760364192.6159093-102503-172239564957207/AnsiballZ_endpoint.py _
Oct 13 14:03:12 standalone.localdomain ansible-async_wrapper.py[102600]: Starting module and watcher
Oct 13 14:03:12 standalone.localdomain ansible-async_wrapper.py[102600]: Start watching 102602 (60)
Oct 13 14:03:12 standalone.localdomain ansible-async_wrapper.py[102602]: Start module (102602)
Oct 13 14:03:12 standalone.localdomain ansible-async_wrapper.py[102520]: Return async_wrapper task started.
Oct 13 14:03:12 standalone.localdomain podman[102537]: 2025-10-13 14:03:12.880670963 +0000 UTC m=+0.144163147 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, container_name=horizon, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-horizon-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, build-date=2025-07-21T13:58:15, io.openshift.expose-services=, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 horizon, distribution-scope=public, release=1, vendor=Red Hat, Inc., config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack TripleO Team)
Oct 13 14:03:12 standalone.localdomain podman[102536]: 2025-10-13 14:03:12.90122292 +0000 UTC m=+0.167938523 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:43, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step1, name=rhosp17/openstack-memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, managed_by=tripleo_ansible, container_name=memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', 
'/var/log/containers/memcached:/var/log/memcached:rw']}, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:03:12 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:03:12 standalone.localdomain podman[102537]: 2025-10-13 14:03:12.946802701 +0000 UTC m=+0.210294845 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, container_name=horizon, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-horizon-container, io.openshift.expose-services=, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 horizon, build-date=2025-07-21T13:58:15, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-horizon, architecture=x86_64, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']})
Oct 13 14:03:12 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:03:12 standalone.localdomain ansible-async_wrapper.py[102203]: Done in kid B.
Oct 13 14:03:13 standalone.localdomain python3[102605]: ansible-openstack.cloud.endpoint Invoked with cloud=standalone service=swift url=http://172.18.0.2:8080/v1/AUTH_%(tenant_id)s endpoint_interface=internal region=regionOne state=present wait=True timeout=180 interface=public enabled=True auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:13 standalone.localdomain python3[102621]: ansible-ansible.legacy.async_status Invoked with jid=440932491754.102520 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:13 standalone.localdomain ansible-async_wrapper.py[102227]: Done in kid B.
Oct 13 14:03:13 standalone.localdomain runuser[102518]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:03:13 standalone.localdomain runuser[102647]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:03:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v843: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:13 standalone.localdomain ansible-async_wrapper.py[102243]: Done in kid B.
Oct 13 14:03:13 standalone.localdomain haproxy[70940]: 172.21.0.2:54156 [13/Oct/2025:14:03:13.633] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/5/5 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:13 standalone.localdomain ansible-async_wrapper.py[102259]: Done in kid B.
Oct 13 14:03:13 standalone.localdomain haproxy[70940]: 172.21.0.2:54156 [13/Oct/2025:14:03:13.639] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/308/308 201 7908 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:13 standalone.localdomain haproxy[70940]: 172.21.0.2:54156 [13/Oct/2025:14:03:13.955] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/12/12 200 3099 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:13 standalone.localdomain haproxy[70940]: 172.21.0.2:54156 [13/Oct/2025:14:03:13.974] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/12/12 200 11308 - - ---- 12/1/0/0/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:03:14 standalone.localdomain haproxy[70940]: 172.21.0.2:54156 [13/Oct/2025:14:03:13.993] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/11/11 200 3099 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:14 standalone.localdomain haproxy[70940]: 172.21.0.2:54156 [13/Oct/2025:14:03:14.011] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/16/16 201 551 - - ---- 12/1/0/0/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:03:14 standalone.localdomain runuser[102647]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:03:14 standalone.localdomain runuser[102701]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:03:14 standalone.localdomain ansible-async_wrapper.py[102275]: Done in kid B.
Oct 13 14:03:14 standalone.localdomain ansible-async_wrapper.py[102602]: Module complete (102602)
Oct 13 14:03:14 standalone.localdomain ansible-async_wrapper.py[102299]: Done in kid B.
Oct 13 14:03:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:03:14 standalone.localdomain ceph-mon[29756]: pgmap v843: 177 pgs: 177 active+clean; 450 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:14 standalone.localdomain ansible-async_wrapper.py[102332]: Done in kid B.
Oct 13 14:03:14 standalone.localdomain runuser[102701]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:03:14 standalone.localdomain ansible-async_wrapper.py[102348]: Done in kid B.
Oct 13 14:03:15 standalone.localdomain ansible-async_wrapper.py[102364]: Done in kid B.
Oct 13 14:03:15 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct 13 14:03:15 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct 13 14:03:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v844: 177 pgs: 177 active+clean; 452 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:15 standalone.localdomain ceph-mon[29756]: 7.5 scrub starts
Oct 13 14:03:15 standalone.localdomain ceph-mon[29756]: 7.5 scrub ok
Oct 13 14:03:16 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct 13 14:03:16 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct 13 14:03:16 standalone.localdomain ceph-mon[29756]: pgmap v844: 177 pgs: 177 active+clean; 452 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:16 standalone.localdomain ceph-mon[29756]: 7.6 scrub starts
Oct 13 14:03:16 standalone.localdomain ceph-mon[29756]: 7.6 scrub ok
Oct 13 14:03:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v845: 177 pgs: 177 active+clean; 452 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:17 standalone.localdomain ansible-async_wrapper.py[102600]: Done in kid B.
Oct 13 14:03:18 standalone.localdomain python3[102998]: ansible-ansible.legacy.async_status Invoked with jid=440932491754.102520 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:18 standalone.localdomain ceph-mon[29756]: pgmap v845: 177 pgs: 177 active+clean; 452 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:18 standalone.localdomain python3[103011]: ansible-ansible.legacy.command Invoked with _raw_params=openstack service list -c "Name" -c "Type" -f json 2>/dev/null _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:03:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v846: 177 pgs: 177 active+clean; 452 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:03:20 standalone.localdomain haproxy[70940]: 172.21.0.2:42490 [13/Oct/2025:14:03:20.601] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/5/5 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:20 standalone.localdomain ceph-mon[29756]: pgmap v846: 177 pgs: 177 active+clean; 452 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:20 standalone.localdomain haproxy[70940]: 172.21.0.2:42490 [13/Oct/2025:14:03:20.608] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/321/321 201 8100 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:20 standalone.localdomain haproxy[70940]: 172.21.0.2:42490 [13/Oct/2025:14:03:20.949] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/14/14 200 3099 - - ---- 12/1/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:03:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v847: 177 pgs: 177 active+clean; 452 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103068]: Invoked with 665701911124 60 /tmp/ansible-root/ansible-tmp-1760364201.953464-103057-100659802510986/AnsiballZ_identity_role.py _
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103073]: Starting module and watcher
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103073]: Start watching 103074 (60)
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103074]: Start module (103074)
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103068]: Return async_wrapper task started.
Oct 13 14:03:22 standalone.localdomain python3[103076]: ansible-openstack.cloud.identity_role Invoked with cloud=standalone name=key-manager:service-admin state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103138]: Invoked with 53197620365 60 /tmp/ansible-root/ansible-tmp-1760364202.2282147-103057-193842980872865/AnsiballZ_identity_role.py _
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103143]: Starting module and watcher
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103143]: Start watching 103144 (60)
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103144]: Start module (103144)
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103138]: Return async_wrapper task started.
Oct 13 14:03:22 standalone.localdomain python3[103145]: ansible-openstack.cloud.identity_role Invoked with cloud=standalone name=creator state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:22 standalone.localdomain ceph-mon[29756]: pgmap v847: 177 pgs: 177 active+clean; 452 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103156]: Invoked with 114758719869 60 /tmp/ansible-root/ansible-tmp-1760364202.483014-103057-216081829720529/AnsiballZ_identity_role.py _
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103167]: Starting module and watcher
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103167]: Start watching 103168 (60)
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103168]: Start module (103168)
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103156]: Return async_wrapper task started.
Oct 13 14:03:22 standalone.localdomain haproxy[70940]: 172.21.0.2:42492 [13/Oct/2025:14:03:22.840] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2/2 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:22 standalone.localdomain python3[103169]: ansible-openstack.cloud.identity_role Invoked with cloud=standalone name=observer state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103180]: Invoked with 905057617674 60 /tmp/ansible-root/ansible-tmp-1760364202.7762308-103057-106605101310004/AnsiballZ_identity_role.py _
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103183]: Starting module and watcher
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103183]: Start watching 103184 (60)
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103184]: Start module (103184)
Oct 13 14:03:22 standalone.localdomain ansible-async_wrapper.py[103180]: Return async_wrapper task started.
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:03:23
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', 'manila_metadata', '.mgr', 'backups', 'vms', 'manila_data', 'images']
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:03:23 standalone.localdomain haproxy[70940]: 172.21.0.2:42492 [13/Oct/2025:14:03:22.844] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/265/265 201 8100 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:23 standalone.localdomain haproxy[70940]: 172.21.0.2:42492 [13/Oct/2025:14:03:23.114] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/14/14 200 962 - - ---- 13/2/1/1/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:03:23 standalone.localdomain haproxy[70940]: 172.21.0.2:42508 [13/Oct/2025:14:03:23.121] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/8/8 300 515 - - ---- 13/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:03:23 standalone.localdomain python3[103185]: ansible-openstack.cloud.identity_role Invoked with cloud=standalone name=audit state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103196]: Invoked with 302082679617 60 /tmp/ansible-root/ansible-tmp-1760364203.0417223-103057-87930699380017/AnsiballZ_identity_role.py _
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103199]: Starting module and watcher
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103199]: Start watching 103200 (60)
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103200]: Start module (103200)
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103196]: Return async_wrapper task started.
Oct 13 14:03:23 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.7 deep-scrub starts
Oct 13 14:03:23 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.7 deep-scrub ok
Oct 13 14:03:23 standalone.localdomain haproxy[70940]: 172.21.0.2:42508 [13/Oct/2025:14:03:23.131] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/265/265 201 8100 - - ---- 13/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:23 standalone.localdomain haproxy[70940]: 172.21.0.2:42492 [13/Oct/2025:14:03:23.133] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/278/278 201 439 - - ---- 13/2/1/1/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:03:23 standalone.localdomain haproxy[70940]: 172.21.0.2:42508 [13/Oct/2025:14:03:23.400] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/33/33 200 1185 - - ---- 13/2/0/0/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:03:23 standalone.localdomain python3[103201]: ansible-openstack.cloud.identity_role Invoked with cloud=standalone name=service state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:23 standalone.localdomain haproxy[70940]: 172.21.0.2:42508 [13/Oct/2025:14:03:23.442] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/16/16 201 421 - - ---- 12/1/0/0/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:03:23 standalone.localdomain haproxy[70940]: 172.21.0.2:42520 [13/Oct/2025:14:03:23.459] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 13/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103074]: Module complete (103074)
Oct 13 14:03:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v848: 177 pgs: 177 active+clean; 452 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103144]: Module complete (103144)
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103212]: Invoked with 921884833798 60 /tmp/ansible-root/ansible-tmp-1760364203.3446884-103057-125572351391519/AnsiballZ_identity_role.py _
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103223]: Starting module and watcher
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103223]: Start watching 103224 (60)
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103224]: Start module (103224)
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103212]: Return async_wrapper task started.
Oct 13 14:03:23 standalone.localdomain ceph-mon[29756]: 7.7 deep-scrub starts
Oct 13 14:03:23 standalone.localdomain ceph-mon[29756]: 7.7 deep-scrub ok
Oct 13 14:03:23 standalone.localdomain haproxy[70940]: 172.21.0.2:42520 [13/Oct/2025:14:03:23.464] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/262/262 201 8100 - - ---- 13/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:23 standalone.localdomain haproxy[70940]: 172.21.0.2:42524 [13/Oct/2025:14:03:23.701] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/27/27 300 515 - - ---- 13/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:23 standalone.localdomain python3[103225]: ansible-openstack.cloud.identity_role Invoked with cloud=standalone name=heat_stack_user state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103236]: Invoked with 341788615558 60 /tmp/ansible-root/ansible-tmp-1760364203.6864383-103057-216146700180899/AnsiballZ_identity_role.py _
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103239]: Starting module and watcher
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103239]: Start watching 103240 (60)
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103240]: Start module (103240)
Oct 13 14:03:23 standalone.localdomain ansible-async_wrapper.py[103236]: Return async_wrapper task started.
Oct 13 14:03:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:03:24 standalone.localdomain haproxy[70940]: 172.21.0.2:42524 [13/Oct/2025:14:03:23.729] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/273/273 201 8100 - - ---- 13/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:24 standalone.localdomain haproxy[70940]: 172.21.0.2:42520 [13/Oct/2025:14:03:23.733] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/284/284 200 1391 - - ---- 13/2/1/1/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:03:24 standalone.localdomain podman[103242]: 2025-10-13 14:03:24.024539365 +0000 UTC m=+0.082152655 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, container_name=keystone_cron, name=rhosp17/openstack-keystone, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, tcib_managed=true, release=1)
Oct 13 14:03:24 standalone.localdomain podman[103242]: 2025-10-13 14:03:24.035841905 +0000 UTC m=+0.093455255 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-keystone-container, name=rhosp17/openstack-keystone, container_name=keystone_cron, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, tcib_managed=true, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Oct 13 14:03:24 standalone.localdomain haproxy[70940]: 172.21.0.2:42524 [13/Oct/2025:14:03:24.006] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/31/31 200 1391 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:03:24 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:03:24 standalone.localdomain haproxy[70940]: 172.21.0.2:42520 [13/Oct/2025:14:03:24.023] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/31/31 201 422 - - ---- 14/3/2/2/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:03:24 standalone.localdomain haproxy[70940]: 172.21.0.2:42534 [13/Oct/2025:14:03:24.027] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/32/32 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:24 standalone.localdomain python3[103241]: ansible-openstack.cloud.identity_role Invoked with cloud=standalone name=swiftoperator state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:24 standalone.localdomain haproxy[70940]: 172.21.0.2:42524 [13/Oct/2025:14:03:24.040] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/36/36 201 419 - - ---- 14/3/1/1/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:03:24 standalone.localdomain ansible-async_wrapper.py[103168]: Module complete (103168)
Oct 13 14:03:24 standalone.localdomain ansible-async_wrapper.py[103184]: Module complete (103184)
Oct 13 14:03:24 standalone.localdomain ansible-async_wrapper.py[103268]: Invoked with 603946525573 60 /tmp/ansible-root/ansible-tmp-1760364203.9873977-103057-11733468760844/AnsiballZ_identity_role.py _
Oct 13 14:03:24 standalone.localdomain ansible-async_wrapper.py[103280]: Starting module and watcher
Oct 13 14:03:24 standalone.localdomain ansible-async_wrapper.py[103280]: Start watching 103281 (60)
Oct 13 14:03:24 standalone.localdomain ansible-async_wrapper.py[103281]: Start module (103281)
Oct 13 14:03:24 standalone.localdomain ansible-async_wrapper.py[103268]: Return async_wrapper task started.
Oct 13 14:03:24 standalone.localdomain haproxy[70940]: 172.21.0.2:42534 [13/Oct/2025:14:03:24.061] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/294/294 201 8100 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:24 standalone.localdomain python3[103284]: ansible-openstack.cloud.identity_role Invoked with cloud=standalone name=ResellerAdmin state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None
Oct 13 14:03:24 standalone.localdomain haproxy[70940]: 172.21.0.2:42534 [13/Oct/2025:14:03:24.359] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/14/14 200 1800 - - ---- 12/1/0/0/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:03:24 standalone.localdomain haproxy[70940]: 172.21.0.2:42534 [13/Oct/2025:14:03:24.376] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/14/14 201 421 - - ---- 12/1/0/0/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:03:24 standalone.localdomain python3[103335]: ansible-ansible.legacy.async_status Invoked with jid=665701911124.103068 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:24 standalone.localdomain haproxy[70940]: 172.21.0.2:42538 [13/Oct/2025:14:03:24.450] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:24 standalone.localdomain ansible-async_wrapper.py[103200]: Module complete (103200)
Oct 13 14:03:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:03:24 standalone.localdomain python3[103363]: ansible-ansible.legacy.async_status Invoked with jid=53197620365.103138 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:24 standalone.localdomain ceph-mon[29756]: pgmap v848: 177 pgs: 177 active+clean; 452 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:24 standalone.localdomain haproxy[70940]: 172.21.0.2:42538 [13/Oct/2025:14:03:24.458] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/298/298 201 8100 - - ---- 13/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:24 standalone.localdomain haproxy[70940]: 172.21.0.2:42542 [13/Oct/2025:14:03:24.595] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/161/161 300 515 - - ---- 13/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:24 standalone.localdomain python3[103374]: ansible-ansible.legacy.async_status Invoked with jid=114758719869.103156 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:25 standalone.localdomain haproxy[70940]: 172.21.0.2:42542 [13/Oct/2025:14:03:24.760] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/264/264 201 8100 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:25 standalone.localdomain haproxy[70940]: 172.21.0.2:42538 [13/Oct/2025:14:03:24.763] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/275/275 200 2005 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:03:25 standalone.localdomain haproxy[70940]: 172.21.0.2:42556 [13/Oct/2025:14:03:24.957] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/83/83 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:25 standalone.localdomain haproxy[70940]: 172.21.0.2:42542 [13/Oct/2025:14:03:25.028] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/35/35 200 2005 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:03:25 standalone.localdomain python3[103377]: ansible-ansible.legacy.async_status Invoked with jid=905057617674.103180 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:25 standalone.localdomain runuser[103426]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:03:25 standalone.localdomain haproxy[70940]: 172.21.0.2:42556 [13/Oct/2025:14:03:25.042] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/339/339 201 8100 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:25 standalone.localdomain python3[103423]: ansible-ansible.legacy.async_status Invoked with jid=302082679617.103196 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:25 standalone.localdomain haproxy[70940]: 172.21.0.2:42538 [13/Oct/2025:14:03:25.045] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/361/361 201 429 - - ---- 14/3/2/2/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:03:25 standalone.localdomain haproxy[70940]: 172.21.0.2:42542 [13/Oct/2025:14:03:25.069] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/377/377 201 427 - - ---- 13/2/1/1/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:03:25 standalone.localdomain haproxy[70940]: 172.21.0.2:42556 [13/Oct/2025:14:03:25.389] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/79/79 200 2429 - - ---- 13/2/0/0/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:03:25 standalone.localdomain ansible-async_wrapper.py[103224]: Module complete (103224)
Oct 13 14:03:25 standalone.localdomain haproxy[70940]: 172.21.0.2:42556 [13/Oct/2025:14:03:25.475] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/19/19 201 427 - - ---- 12/1/0/0/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:03:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v849: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:25 standalone.localdomain ansible-async_wrapper.py[103240]: Module complete (103240)
Oct 13 14:03:25 standalone.localdomain ansible-async_wrapper.py[103281]: Module complete (103281)
Oct 13 14:03:25 standalone.localdomain python3[103473]: ansible-ansible.legacy.async_status Invoked with jid=921884833798.103212 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:25 standalone.localdomain runuser[103426]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:03:25 standalone.localdomain python3[103484]: ansible-ansible.legacy.async_status Invoked with jid=341788615558.103236 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:26 standalone.localdomain runuser[103545]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:03:26 standalone.localdomain python3[103544]: ansible-ansible.legacy.async_status Invoked with jid=603946525573.103268 mode=status _async_dir=/root/.ansible_async
Oct 13 14:03:26 standalone.localdomain runuser[103545]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:03:26 standalone.localdomain runuser[103653]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:03:26 standalone.localdomain ceph-mon[29756]: pgmap v849: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:03:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:03:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:03:26 standalone.localdomain podman[103701]: 2025-10-13 14:03:26.813045683 +0000 UTC m=+0.075100528 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, managed_by=tripleo_ansible, container_name=barbican_keystone_listener, version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-barbican-keystone-listener-container, release=1, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, build-date=2025-07-21T16:18:19, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-barbican-keystone-listener, architecture=x86_64)
Oct 13 14:03:26 standalone.localdomain podman[103701]: 2025-10-13 14:03:26.857079087 +0000 UTC m=+0.119133932 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, release=1, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, container_name=barbican_keystone_listener, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, build-date=2025-07-21T16:18:19, config_id=tripleo_step3, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-keystone-listener-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-barbican-keystone-listener, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=)
Oct 13 14:03:26 standalone.localdomain systemd[1]: tmp-crun.OgCPj6.mount: Deactivated successfully.
Oct 13 14:03:26 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:03:26 standalone.localdomain podman[103700]: 2025-10-13 14:03:26.877382686 +0000 UTC m=+0.139841533 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, distribution-scope=public, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, vcs-type=git, description=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.component=openstack-barbican-api-container, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 barbican-api, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=barbican_api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, version=17.1.9)
Oct 13 14:03:26 standalone.localdomain systemd[1]: tmp-crun.1hhbEo.mount: Deactivated successfully.
Oct 13 14:03:26 standalone.localdomain podman[103702]: 2025-10-13 14:03:26.953659048 +0000 UTC m=+0.215425994 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-barbican-worker, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, com.redhat.component=openstack-barbican-worker-container, build-date=2025-07-21T15:36:22, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, architecture=x86_64, vcs-type=git, release=1, managed_by=tripleo_ansible, config_id=tripleo_step3, container_name=barbican_worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, distribution-scope=public, vendor=Red Hat, Inc.)
Oct 13 14:03:26 standalone.localdomain podman[103700]: 2025-10-13 14:03:26.968532518 +0000 UTC m=+0.230991365 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, version=17.1.9, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T15:22:44, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.component=openstack-barbican-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, container_name=barbican_api, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-api, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:03:26 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:03:27 standalone.localdomain podman[103702]: 2025-10-13 14:03:27.000050816 +0000 UTC m=+0.261817812 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-worker, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=barbican_worker, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-barbican-worker-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, config_id=tripleo_step3, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, version=17.1.9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, build-date=2025-07-21T15:36:22, vendor=Red Hat, Inc.)
Oct 13 14:03:27 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:03:27 standalone.localdomain ansible-async_wrapper.py[103073]: Done in kid B.
Oct 13 14:03:27 standalone.localdomain haproxy[70940]: 172.21.0.2:42560 [13/Oct/2025:14:03:27.326] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/44/44 401 377 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:27 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct 13 14:03:27 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct 13 14:03:27 standalone.localdomain ansible-async_wrapper.py[103143]: Done in kid B.
Oct 13 14:03:27 standalone.localdomain runuser[103653]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:03:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v850: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:27 standalone.localdomain ceph-mon[29756]: 7.8 scrub starts
Oct 13 14:03:27 standalone.localdomain ceph-mon[29756]: 7.8 scrub ok
Oct 13 14:03:27 standalone.localdomain ceph-mon[29756]: pgmap v850: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:27 standalone.localdomain ansible-async_wrapper.py[103167]: Done in kid B.
Oct 13 14:03:27 standalone.localdomain ansible-async_wrapper.py[103183]: Done in kid B.
Oct 13 14:03:28 standalone.localdomain ansible-async_wrapper.py[103199]: Done in kid B.
Oct 13 14:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:03:28 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct 13 14:03:28 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct 13 14:03:28 standalone.localdomain haproxy[70940]: 172.21.0.2:42576 [13/Oct/2025:14:03:28.416] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/35/35 401 377 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:28 standalone.localdomain ansible-async_wrapper.py[103223]: Done in kid B.
Oct 13 14:03:28 standalone.localdomain ansible-async_wrapper.py[103239]: Done in kid B.
Oct 13 14:03:29 standalone.localdomain ansible-async_wrapper.py[103280]: Done in kid B.
Oct 13 14:03:29 standalone.localdomain haproxy[70940]: 172.21.0.2:42592 [13/Oct/2025:14:03:29.343] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/31/31 401 377 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:29 standalone.localdomain ceph-mon[29756]: 7.9 scrub starts
Oct 13 14:03:29 standalone.localdomain ceph-mon[29756]: 7.9 scrub ok
Oct 13 14:03:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v851: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:03:30 standalone.localdomain haproxy[70940]: 172.21.0.2:35610 [13/Oct/2025:14:03:30.219] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/32/32 401 377 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:30 standalone.localdomain ceph-mon[29756]: pgmap v851: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:31 standalone.localdomain haproxy[70940]: 172.21.0.2:35626 [13/Oct/2025:14:03:31.177] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/29/29 401 377 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:31 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct 13 14:03:31 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct 13 14:03:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v852: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:03:31 standalone.localdomain systemd[1]: tmp-crun.VWmUtd.mount: Deactivated successfully.
Oct 13 14:03:31 standalone.localdomain podman[103824]: 2025-10-13 14:03:31.827865492 +0000 UTC m=+0.095401626 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, com.redhat.component=openstack-ovn-northd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., build-date=2025-07-21T13:30:04, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-northd, config_id=ovn_cluster_northd, container_name=ovn_cluster_northd, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-northd)
Oct 13 14:03:31 standalone.localdomain podman[103824]: 2025-10-13 14:03:31.841992669 +0000 UTC m=+0.109528793 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, build-date=2025-07-21T13:30:04, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-northd, config_id=ovn_cluster_northd, release=1, description=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', 
'/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, container_name=ovn_cluster_northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-northd, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-ovn-northd-container, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public)
Oct 13 14:03:31 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:03:32 standalone.localdomain haproxy[70940]: 172.21.0.2:35632 [13/Oct/2025:14:03:32.114] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/14/14 401 377 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:32 standalone.localdomain ceph-mon[29756]: 7.a scrub starts
Oct 13 14:03:32 standalone.localdomain ceph-mon[29756]: 7.a scrub ok
Oct 13 14:03:32 standalone.localdomain ceph-mon[29756]: pgmap v852: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:33 standalone.localdomain haproxy[70940]: 172.21.0.2:35648 [13/Oct/2025:14:03:33.015] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/16/16 401 377 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v853: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:03:33 standalone.localdomain systemd[1]: tmp-crun.NQlacf.mount: Deactivated successfully.
Oct 13 14:03:33 standalone.localdomain podman[103919]: 2025-10-13 14:03:33.813981364 +0000 UTC m=+0.081753954 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, container_name=clustercheck, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-mariadb, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1)
Oct 13 14:03:33 standalone.localdomain podman[103919]: 2025-10-13 14:03:33.86454888 +0000 UTC m=+0.132321420 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=clustercheck, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, build-date=2025-07-21T12:58:45, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, 
maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-mariadb-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, batch=17.1_20250721.1)
Oct 13 14:03:33 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:03:33 standalone.localdomain haproxy[70940]: 172.21.0.2:35658 [13/Oct/2025:14:03:33.913] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/30/30 401 377 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:34 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct 13 14:03:34 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct 13 14:03:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:03:34 standalone.localdomain ceph-mon[29756]: pgmap v853: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:34 standalone.localdomain haproxy[70940]: 172.21.0.2:35664 [13/Oct/2025:14:03:34.819] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/31/31 401 377 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:35 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 13 14:03:35 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 13 14:03:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v854: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:35 standalone.localdomain ceph-mon[29756]: 7.b scrub starts
Oct 13 14:03:35 standalone.localdomain ceph-mon[29756]: 7.b scrub ok
Oct 13 14:03:35 standalone.localdomain haproxy[70940]: 172.21.0.2:35680 [13/Oct/2025:14:03:35.773] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/29/29 401 377 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:36 standalone.localdomain ansible-async_wrapper.py[104162]: Invoked with 536486879699 60 /tmp/ansible-root/ansible-tmp-1760364216.029798-104119-271609060448714/AnsiballZ_identity_user.py _
Oct 13 14:03:36 standalone.localdomain ansible-async_wrapper.py[104206]: Starting module and watcher
Oct 13 14:03:36 standalone.localdomain ansible-async_wrapper.py[104206]: Start watching 104207 (60)
Oct 13 14:03:36 standalone.localdomain ansible-async_wrapper.py[104207]: Start module (104207)
Oct 13 14:03:36 standalone.localdomain ansible-async_wrapper.py[104162]: Return async_wrapper task started.
Oct 13 14:03:36 standalone.localdomain sudo[104210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:03:36 standalone.localdomain sudo[104210]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:03:36 standalone.localdomain sudo[104210]: pam_unix(sudo:session): session closed for user root
Oct 13 14:03:36 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct 13 14:03:36 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct 13 14:03:36 standalone.localdomain sudo[104235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:03:36 standalone.localdomain sudo[104235]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:03:36 standalone.localdomain ceph-mon[29756]: 7.c scrub starts
Oct 13 14:03:36 standalone.localdomain ceph-mon[29756]: 7.c scrub ok
Oct 13 14:03:36 standalone.localdomain ceph-mon[29756]: pgmap v854: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:36 standalone.localdomain ansible-async_wrapper.py[104237]: Invoked with 784830324929 60 /tmp/ansible-root/ansible-tmp-1760364216.39129-104119-75015140410597/AnsiballZ_identity_user.py _
Oct 13 14:03:36 standalone.localdomain ansible-async_wrapper.py[104253]: Starting module and watcher
Oct 13 14:03:36 standalone.localdomain ansible-async_wrapper.py[104253]: Start watching 104254 (60)
Oct 13 14:03:36 standalone.localdomain ansible-async_wrapper.py[104254]: Start module (104254)
Oct 13 14:03:36 standalone.localdomain ansible-async_wrapper.py[104237]: Return async_wrapper task started.
Oct 13 14:03:36 standalone.localdomain ansible-async_wrapper.py[104274]: Invoked with 706176306296 60 /tmp/ansible-root/ansible-tmp-1760364216.7241833-104119-14101250480646/AnsiballZ_identity_user.py _
Oct 13 14:03:36 standalone.localdomain ansible-async_wrapper.py[104293]: Starting module and watcher
Oct 13 14:03:36 standalone.localdomain haproxy[70940]: 172.21.0.2:35696 [13/Oct/2025:14:03:36.913] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:36 standalone.localdomain ansible-async_wrapper.py[104293]: Start watching 104294 (60)
Oct 13 14:03:36 standalone.localdomain ansible-async_wrapper.py[104294]: Start module (104294)
Oct 13 14:03:36 standalone.localdomain ansible-async_wrapper.py[104274]: Return async_wrapper task started.
Oct 13 14:03:37 standalone.localdomain sudo[104235]: pam_unix(sudo:session): session closed for user root
Oct 13 14:03:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:03:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:03:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:03:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:03:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:03:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:03:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:03:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:03:37 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev ce38a4ca-02ab-4d3c-b91c-695838f64e40 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:03:37 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev ce38a4ca-02ab-4d3c-b91c-695838f64e40 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:03:37 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event ce38a4ca-02ab-4d3c-b91c-695838f64e40 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:03:37 standalone.localdomain sudo[104319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:03:37 standalone.localdomain sudo[104319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:03:37 standalone.localdomain sudo[104319]: pam_unix(sudo:session): session closed for user root
Oct 13 14:03:37 standalone.localdomain haproxy[70940]: 172.21.0.2:35696 [13/Oct/2025:14:03:36.918] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/278/278 201 8100 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:37 standalone.localdomain haproxy[70940]: 172.21.0.2:35696 [13/Oct/2025:14:03:37.202] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/13/13 200 396 - - ---- 12/1/0/0/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104327]: Invoked with 769937505430 60 /tmp/ansible-root/ansible-tmp-1760364217.0531693-104119-17377863117100/AnsiballZ_identity_user.py _
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104338]: Starting module and watcher
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104338]: Start watching 104339 (60)
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104339]: Start module (104339)
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104327]: Return async_wrapper task started.
Oct 13 14:03:37 standalone.localdomain haproxy[70940]: 172.21.0.2:35696 [13/Oct/2025:14:03:37.221] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/53/53 200 334 - - ---- 12/1/0/0/0 0/0 "GET /v3/users?domain_id=default&name=barbican HTTP/1.1"
Oct 13 14:03:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v855: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:37 standalone.localdomain ceph-mon[29756]: 7.d scrub starts
Oct 13 14:03:37 standalone.localdomain ceph-mon[29756]: 7.d scrub ok
Oct 13 14:03:37 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:03:37 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:03:37 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:03:37 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:03:37 standalone.localdomain haproxy[70940]: 172.21.0.2:35696 [13/Oct/2025:14:03:37.281] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/299/299 201 504 - - ---- 13/2/1/1/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:03:37 standalone.localdomain haproxy[70940]: 172.21.0.2:35712 [13/Oct/2025:14:03:37.353] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/228/228 300 515 - - ---- 13/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104351]: Invoked with 129040324027 60 /tmp/ansible-root/ansible-tmp-1760364217.381711-104119-29505571400028/AnsiballZ_identity_user.py _
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104354]: Starting module and watcher
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104354]: Start watching 104355 (60)
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104355]: Start module (104355)
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104351]: Return async_wrapper task started.
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104207]: Module complete (104207)
Oct 13 14:03:37 standalone.localdomain haproxy[70940]: 172.21.0.2:35712 [13/Oct/2025:14:03:37.584] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/264/264 201 8100 - - ---- 13/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:37 standalone.localdomain haproxy[70940]: 172.21.0.2:35720 [13/Oct/2025:14:03:37.677] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/175/175 300 515 - - ---- 13/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104367]: Invoked with 170378815619 60 /tmp/ansible-root/ansible-tmp-1760364217.7311404-104119-244037796666458/AnsiballZ_identity_user.py _
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104378]: Starting module and watcher
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104378]: Start watching 104379 (60)
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104379]: Start module (104379)
Oct 13 14:03:37 standalone.localdomain ansible-async_wrapper.py[104367]: Return async_wrapper task started.
Oct 13 14:03:38 standalone.localdomain runuser[104385]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:03:38 standalone.localdomain haproxy[70940]: 172.21.0.2:35720 [13/Oct/2025:14:03:37.853] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/267/267 201 8100 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:38 standalone.localdomain haproxy[70940]: 172.21.0.2:35712 [13/Oct/2025:14:03:37.856] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/272/272 200 396 - - ---- 14/3/1/1/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:03:38 standalone.localdomain haproxy[70940]: 172.21.0.2:35736 [13/Oct/2025:14:03:37.953] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/176/176 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:38 standalone.localdomain haproxy[70940]: 172.21.0.2:35720 [13/Oct/2025:14:03:38.127] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/11/11 200 396 - - ---- 14/3/2/2/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104440]: Invoked with 573896897206 60 /tmp/ansible-root/ansible-tmp-1760364218.084363-104119-154631868526025/AnsiballZ_identity_user.py _
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104443]: Starting module and watcher
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104443]: Start watching 104444 (60)
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104444]: Start module (104444)
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104440]: Return async_wrapper task started.
Oct 13 14:03:38 standalone.localdomain haproxy[70940]: 172.21.0.2:35736 [13/Oct/2025:14:03:38.131] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/266/266 201 8100 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:38 standalone.localdomain haproxy[70940]: 172.21.0.2:35712 [13/Oct/2025:14:03:38.132] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/286/286 200 332 - - ---- 15/4/3/3/0 0/0 "GET /v3/users?domain_id=default&name=cinder HTTP/1.1"
Oct 13 14:03:38 standalone.localdomain haproxy[70940]: 172.21.0.2:35720 [13/Oct/2025:14:03:38.142] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/294/294 200 334 - - ---- 15/4/3/3/0 0/0 "GET /v3/users?domain_id=default&name=cinderv3 HTTP/1.1"
Oct 13 14:03:38 standalone.localdomain haproxy[70940]: 172.21.0.2:35750 [13/Oct/2025:14:03:38.357] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/84/84 300 515 - - ---- 15/4/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:38 standalone.localdomain haproxy[70940]: 172.21.0.2:35736 [13/Oct/2025:14:03:38.408] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/39/39 200 396 - - ---- 15/4/3/3/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104456]: Invoked with 667758918017 60 /tmp/ansible-root/ansible-tmp-1760364218.3619678-104119-6386694253633/AnsiballZ_identity_user.py _
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104464]: Starting module and watcher
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104464]: Start watching 104465 (60)
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104465]: Start module (104465)
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104456]: Return async_wrapper task started.
Oct 13 14:03:38 standalone.localdomain ceph-mon[29756]: pgmap v855: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:38 standalone.localdomain runuser[104385]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:03:38 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 41 completed events
Oct 13 14:03:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:03:38 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:03:38 standalone.localdomain haproxy[70940]: 172.21.0.2:35712 [13/Oct/2025:14:03:38.426] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/281/281 201 500 - - ---- 16/5/4/4/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104254]: Module complete (104254)
Oct 13 14:03:38 standalone.localdomain runuser[104489]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104488]: Invoked with 264834298806 60 /tmp/ansible-root/ansible-tmp-1760364218.7000651-104119-251777382661798/AnsiballZ_identity_user.py _
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104536]: Starting module and watcher
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104536]: Start watching 104537 (60)
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104537]: Start module (104537)
Oct 13 14:03:38 standalone.localdomain ansible-async_wrapper.py[104488]: Return async_wrapper task started.
Oct 13 14:03:38 standalone.localdomain haproxy[70940]: 172.21.0.2:35750 [13/Oct/2025:14:03:38.442] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/533/533 201 8100 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:39 standalone.localdomain ansible-async_wrapper.py[104557]: Invoked with 257982814113 60 /tmp/ansible-root/ansible-tmp-1760364218.9747777-104119-52403185824786/AnsiballZ_identity_user.py _
Oct 13 14:03:39 standalone.localdomain ansible-async_wrapper.py[104560]: Starting module and watcher
Oct 13 14:03:39 standalone.localdomain ansible-async_wrapper.py[104560]: Start watching 104561 (60)
Oct 13 14:03:39 standalone.localdomain ansible-async_wrapper.py[104561]: Start module (104561)
Oct 13 14:03:39 standalone.localdomain ansible-async_wrapper.py[104557]: Return async_wrapper task started.
Oct 13 14:03:39 standalone.localdomain haproxy[70940]: 172.21.0.2:35720 [13/Oct/2025:14:03:38.443] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/792/792 201 504 - - ---- 16/5/4/4/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:03:39 standalone.localdomain haproxy[70940]: 172.21.0.2:35736 [13/Oct/2025:14:03:38.452] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/803/803 200 332 - - ---- 16/5/3/3/0 0/0 "GET /v3/users?domain_id=default&name=glance HTTP/1.1"
Oct 13 14:03:39 standalone.localdomain haproxy[70940]: 172.21.0.2:35758 [13/Oct/2025:14:03:38.650] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/608/608 300 515 - - ---- 16/5/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:39 standalone.localdomain haproxy[70940]: 172.21.0.2:35750 [13/Oct/2025:14:03:38.979] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/293/293 200 396 - - ---- 15/4/3/3/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:03:39 standalone.localdomain haproxy[70940]: 172.21.0.2:35768 [13/Oct/2025:14:03:38.981] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/295/295 300 515 - - ---- 15/4/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:39 standalone.localdomain ansible-async_wrapper.py[104294]: Module complete (104294)
Oct 13 14:03:39 standalone.localdomain runuser[104489]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:03:39 standalone.localdomain runuser[104577]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:03:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v856: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:39 standalone.localdomain haproxy[70940]: 172.21.0.2:35758 [13/Oct/2025:14:03:39.259] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/275/275 201 8100 - - ---- 17/6/5/5/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:03:39 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:03:39 standalone.localdomain ceph-mon[29756]: pgmap v856: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:03:39 standalone.localdomain haproxy[70940]: 172.21.0.2:35736 [13/Oct/2025:14:03:39.260] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/530/530 201 500 - - ---- 17/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:03:39 standalone.localdomain podman[104627]: 2025-10-13 14:03:39.825112785 +0000 UTC m=+0.092251349 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, vcs-type=git, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:03:39 standalone.localdomain haproxy[70940]: 172.21.0.2:35750 [13/Oct/2025:14:03:39.276] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/557/557 200 330 - - ---- 17/6/5/5/0 0/0 "GET /v3/users?domain_id=default&name=heat HTTP/1.1"
Oct 13 14:03:39 standalone.localdomain podman[104627]: 2025-10-13 14:03:39.842454862 +0000 UTC m=+0.109593446 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, container_name=iscsid, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2025-07-21T13:27:15, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:03:39 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:03:39 standalone.localdomain ansible-async_wrapper.py[104339]: Module complete (104339)
Oct 13 14:03:40 standalone.localdomain haproxy[70940]: 172.21.0.2:35768 [13/Oct/2025:14:03:39.278] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/819/819 201 8100 - - ---- 17/6/5/5/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:40 standalone.localdomain haproxy[70940]: 172.21.0.2:35770 [13/Oct/2025:14:03:39.300] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/799/799 300 515 - - ---- 17/6/4/4/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:40 standalone.localdomain haproxy[70940]: 172.21.0.2:35786 [13/Oct/2025:14:03:39.526] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/578/578 300 515 - - ---- 17/6/4/4/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:40 standalone.localdomain haproxy[70940]: 172.21.0.2:35758 [13/Oct/2025:14:03:39.547] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/595/595 404 298 - - ---- 17/6/5/5/0 0/0 "GET /v3/domains/heat_stack HTTP/1.1"
Oct 13 14:03:40 standalone.localdomain haproxy[70940]: 172.21.0.2:34756 [13/Oct/2025:14:03:39.821] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/327/327 300 515 - - ---- 17/6/5/5/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:40 standalone.localdomain runuser[104577]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:03:40 standalone.localdomain haproxy[70940]: 172.21.0.2:35750 [13/Oct/2025:14:03:39.839] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/583/583 201 496 - - ---- 17/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:03:40 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 13 14:03:40 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 13 14:03:40 standalone.localdomain ansible-async_wrapper.py[104355]: Module complete (104355)
Oct 13 14:03:40 standalone.localdomain haproxy[70940]: 172.21.0.2:35770 [13/Oct/2025:14:03:40.104] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/583/583 201 8100 - - ---- 16/5/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:03:40 standalone.localdomain podman[104674]: 2025-10-13 14:03:40.786040351 +0000 UTC m=+0.061478926 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, build-date=2025-07-21T13:27:18, distribution-scope=public, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, name=rhosp17/openstack-keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, release=1, batch=17.1_20250721.1, managed_by=tripleo_ansible, container_name=keystone, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:03:40 standalone.localdomain haproxy[70940]: 172.21.0.2:35786 [13/Oct/2025:14:03:40.106] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/891/891 201 8100 - - ---- 16/5/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:41 standalone.localdomain haproxy[70940]: 172.21.0.2:35768 [13/Oct/2025:14:03:40.109] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/915/915 200 396 - - ---- 16/5/4/4/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:03:41 standalone.localdomain haproxy[70940]: 172.21.0.2:35758 [13/Oct/2025:14:03:40.144] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/912/912 200 538 - - ---- 16/5/4/4/0 0/0 "GET /v3/domains?name=heat_stack HTTP/1.1"
Oct 13 14:03:41 standalone.localdomain ansible-async_wrapper.py[104206]: Done in kid B.
Oct 13 14:03:41 standalone.localdomain haproxy[70940]: 172.21.0.2:34756 [13/Oct/2025:14:03:40.150] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1196/1196 201 8100 - - ---- 16/5/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:41 standalone.localdomain haproxy[70940]: 172.21.0.2:35770 [13/Oct/2025:14:03:40.703] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/663/663 200 396 - - ---- 16/5/4/4/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:03:41 standalone.localdomain podman[104674]: 2025-10-13 14:03:41.369077111 +0000 UTC m=+0.644515716 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, config_id=tripleo_step3, architecture=x86_64, build-date=2025-07-21T13:27:18, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, maintainer=OpenStack TripleO Team, container_name=keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, release=1, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, name=rhosp17/openstack-keystone, distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:03:41 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:03:41 standalone.localdomain haproxy[70940]: 172.21.0.2:35786 [13/Oct/2025:14:03:41.014] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/379/379 200 396 - - ---- 16/5/4/4/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:03:41 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct 13 14:03:41 standalone.localdomain ceph-osd[37878]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct 13 14:03:41 standalone.localdomain haproxy[70940]: 172.21.0.2:35768 [13/Oct/2025:14:03:41.031] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/395/395 200 334 - - ---- 16/5/4/4/0 0/0 "GET /v3/users?domain_id=default&name=heat-cfn HTTP/1.1"
Oct 13 14:03:41 standalone.localdomain haproxy[70940]: 172.21.0.2:35758 [13/Oct/2025:14:03:41.067] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/395/395 200 374 - - ---- 16/5/4/4/0 0/0 "GET /v3/users?domain_id=802de09a9466458a91c2aee8ab64ffc3&name=heat_stack_domain_admin HTTP/1.1"
Oct 13 14:03:41 standalone.localdomain ceph-mon[29756]: 7.e scrub starts
Oct 13 14:03:41 standalone.localdomain ceph-mon[29756]: 7.e scrub ok
Oct 13 14:03:41 standalone.localdomain ceph-mon[29756]: 7.f scrub starts
Oct 13 14:03:41 standalone.localdomain haproxy[70940]: 172.21.0.2:34756 [13/Oct/2025:14:03:41.351] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/129/129 200 396 - - ---- 16/5/4/4/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:03:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v857: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:41 standalone.localdomain haproxy[70940]: 172.21.0.2:35770 [13/Oct/2025:14:03:41.374] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/150/150 200 332 - - ---- 16/5/4/4/0 0/0 "GET /v3/users?domain_id=default&name=manila HTTP/1.1"
Oct 13 14:03:41 standalone.localdomain haproxy[70940]: 172.21.0.2:35786 [13/Oct/2025:14:03:41.402] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/163/163 200 334 - - ---- 16/5/4/4/0 0/0 "GET /v3/users?domain_id=default&name=manilav2 HTTP/1.1"
Oct 13 14:03:41 standalone.localdomain ansible-async_wrapper.py[104253]: Done in kid B.
Oct 13 14:03:41 standalone.localdomain haproxy[70940]: 172.21.0.2:35768 [13/Oct/2025:14:03:41.435] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/476/476 201 504 - - ---- 16/5/4/4/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:03:41 standalone.localdomain ansible-async_wrapper.py[104293]: Done in kid B.
Oct 13 14:03:42 standalone.localdomain ansible-async_wrapper.py[104444]: Module complete (104444)
Oct 13 14:03:42 standalone.localdomain haproxy[70940]: 172.21.0.2:35758 [13/Oct/2025:14:03:41.471] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/721/721 201 559 - - ---- 15/4/3/3/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:03:42 standalone.localdomain haproxy[70940]: 172.21.0.2:34756 [13/Oct/2025:14:03:41.485] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/755/755 200 333 - - ---- 14/3/2/2/0 0/0 "GET /v3/users?domain_id=default&name=neutron HTTP/1.1"
Oct 13 14:03:42 standalone.localdomain ansible-async_wrapper.py[104338]: Done in kid B.
Oct 13 14:03:42 standalone.localdomain ansible-async_wrapper.py[104379]: Module complete (104379)
Oct 13 14:03:42 standalone.localdomain ceph-mon[29756]: 7.f scrub ok
Oct 13 14:03:42 standalone.localdomain ceph-mon[29756]: pgmap v857: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:42 standalone.localdomain haproxy[70940]: 172.21.0.2:35770 [13/Oct/2025:14:03:41.531] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/997/997 201 500 - - ---- 14/3/2/2/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:03:42 standalone.localdomain ansible-async_wrapper.py[104465]: Module complete (104465)
Oct 13 14:03:42 standalone.localdomain ansible-async_wrapper.py[104354]: Done in kid B.
Oct 13 14:03:42 standalone.localdomain haproxy[70940]: 172.21.0.2:35786 [13/Oct/2025:14:03:41.573] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1222/1222 201 504 - - ---- 13/2/1/1/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:03:42 standalone.localdomain ansible-async_wrapper.py[104537]: Module complete (104537)
Oct 13 14:03:42 standalone.localdomain ansible-async_wrapper.py[104378]: Done in kid B.
Oct 13 14:03:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:03:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:03:43 standalone.localdomain podman[104773]: 2025-10-13 14:03:43.071143463 +0000 UTC m=+0.068381728 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 horizon, tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 horizon, com.redhat.component=openstack-horizon-container, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, container_name=horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, build-date=2025-07-21T13:58:15, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:03:43 standalone.localdomain haproxy[70940]: 172.21.0.2:34756 [13/Oct/2025:14:03:42.246] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/837/837 201 502 - - ---- 12/1/0/0/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:03:43 standalone.localdomain podman[104773]: 2025-10-13 14:03:43.110752331 +0000 UTC m=+0.107990586 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, build-date=2025-07-21T13:58:15, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step3, name=rhosp17/openstack-horizon, release=1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 horizon, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 horizon, tcib_managed=true, com.redhat.component=openstack-horizon-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=horizon, io.openshift.expose-services=)
Oct 13 14:03:43 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:03:43 standalone.localdomain ansible-async_wrapper.py[104561]: Module complete (104561)
Oct 13 14:03:43 standalone.localdomain podman[104772]: 2025-10-13 14:03:43.202587015 +0000 UTC m=+0.201348638 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-memcached-container, container_name=memcached, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, summary=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, architecture=x86_64, release=1, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T12:58:43, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, version=17.1.9, distribution-scope=public)
Oct 13 14:03:43 standalone.localdomain podman[104772]: 2025-10-13 14:03:43.225839546 +0000 UTC m=+0.224601149 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, container_name=memcached, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, name=rhosp17/openstack-memcached, version=17.1.9, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-memcached-container, description=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, release=1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:03:43 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:03:43 standalone.localdomain ansible-async_wrapper.py[104443]: Done in kid B.
Oct 13 14:03:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v858: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:43 standalone.localdomain ansible-async_wrapper.py[104464]: Done in kid B.
Oct 13 14:03:43 standalone.localdomain ansible-async_wrapper.py[104536]: Done in kid B.
Oct 13 14:03:44 standalone.localdomain ansible-async_wrapper.py[104560]: Done in kid B.
Oct 13 14:03:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:03:44 standalone.localdomain ceph-mon[29756]: pgmap v858: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v859: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:46 standalone.localdomain ceph-mon[29756]: pgmap v859: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v860: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:48 standalone.localdomain haproxy[70940]: 172.21.0.2:34772 [13/Oct/2025:14:03:47.983] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/28/28 401 377 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:48 standalone.localdomain ceph-mon[29756]: pgmap v860: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:48 standalone.localdomain haproxy[70940]: 172.21.0.2:34784 [13/Oct/2025:14:03:48.937] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/22/22 401 377 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v861: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:03:49 standalone.localdomain haproxy[70940]: 172.21.0.2:60664 [13/Oct/2025:14:03:49.836] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/31/31 401 377 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:50 standalone.localdomain ansible-async_wrapper.py[105116]: Invoked with 720173558041 60 /tmp/ansible-root/ansible-tmp-1760364230.1306274-105105-279904673969191/AnsiballZ_identity_user.py _
Oct 13 14:03:50 standalone.localdomain ansible-async_wrapper.py[105127]: Starting module and watcher
Oct 13 14:03:50 standalone.localdomain ansible-async_wrapper.py[105127]: Start watching 105128 (60)
Oct 13 14:03:50 standalone.localdomain ansible-async_wrapper.py[105128]: Start module (105128)
Oct 13 14:03:50 standalone.localdomain ansible-async_wrapper.py[105116]: Return async_wrapper task started.
Oct 13 14:03:50 standalone.localdomain ceph-mon[29756]: pgmap v861: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:50 standalone.localdomain ansible-async_wrapper.py[105141]: Invoked with 4495671038 60 /tmp/ansible-root/ansible-tmp-1760364230.4800656-105105-182324331135541/AnsiballZ_identity_user.py _
Oct 13 14:03:50 standalone.localdomain ansible-async_wrapper.py[105144]: Starting module and watcher
Oct 13 14:03:50 standalone.localdomain ansible-async_wrapper.py[105144]: Start watching 105145 (60)
Oct 13 14:03:50 standalone.localdomain ansible-async_wrapper.py[105145]: Start module (105145)
Oct 13 14:03:50 standalone.localdomain ansible-async_wrapper.py[105141]: Return async_wrapper task started.
Oct 13 14:03:50 standalone.localdomain runuser[105158]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:03:51 standalone.localdomain ansible-async_wrapper.py[105173]: Invoked with 253634340073 60 /tmp/ansible-root/ansible-tmp-1760364230.8150873-105105-249245467458445/AnsiballZ_identity_user.py _
Oct 13 14:03:51 standalone.localdomain ansible-async_wrapper.py[105209]: Starting module and watcher
Oct 13 14:03:51 standalone.localdomain ansible-async_wrapper.py[105209]: Start watching 105210 (60)
Oct 13 14:03:51 standalone.localdomain ansible-async_wrapper.py[105210]: Start module (105210)
Oct 13 14:03:51 standalone.localdomain ansible-async_wrapper.py[105173]: Return async_wrapper task started.
Oct 13 14:03:51 standalone.localdomain haproxy[70940]: 172.21.0.2:60678 [13/Oct/2025:14:03:51.114] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2/2 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:51 standalone.localdomain haproxy[70940]: 172.21.0.2:60678 [13/Oct/2025:14:03:51.119] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/296/296 201 8100 - - ---- 13/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:51 standalone.localdomain haproxy[70940]: 172.21.0.2:60686 [13/Oct/2025:14:03:51.380] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/36/36 300 515 - - ---- 13/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:51 standalone.localdomain runuser[105158]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:03:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v862: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:51 standalone.localdomain runuser[105241]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:03:51 standalone.localdomain haproxy[70940]: 172.21.0.2:60686 [13/Oct/2025:14:03:51.419] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/261/261 201 8100 - - ---- 13/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:51 standalone.localdomain haproxy[70940]: 172.21.0.2:60678 [13/Oct/2025:14:03:51.426] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/268/268 200 396 - - ---- 13/2/0/0/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:03:51 standalone.localdomain haproxy[70940]: 172.21.0.2:60686 [13/Oct/2025:14:03:51.694] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/10/10 200 396 - - ---- 13/2/1/1/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:03:51 standalone.localdomain haproxy[70940]: 172.21.0.2:60678 [13/Oct/2025:14:03:51.699] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/32/32 200 330 - - ---- 14/3/2/2/0 0/0 "GET /v3/users?domain_id=default&name=nova HTTP/1.1"
Oct 13 14:03:51 standalone.localdomain haproxy[70940]: 172.21.0.2:60696 [13/Oct/2025:14:03:51.707] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/32/32 300 515 - - ---- 14/3/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:51 standalone.localdomain haproxy[70940]: 172.21.0.2:60686 [13/Oct/2025:14:03:51.713] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/46/46 200 335 - - ---- 14/3/2/2/0 0/0 "GET /v3/users?domain_id=default&name=placement HTTP/1.1"
Oct 13 14:03:52 standalone.localdomain haproxy[70940]: 172.21.0.2:60678 [13/Oct/2025:14:03:51.738] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/281/281 201 496 - - ---- 14/3/2/2/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:03:52 standalone.localdomain ansible-async_wrapper.py[105128]: Module complete (105128)
Oct 13 14:03:52 standalone.localdomain runuser[105241]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:03:52 standalone.localdomain runuser[105295]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:03:52 standalone.localdomain haproxy[70940]: 172.21.0.2:60696 [13/Oct/2025:14:03:51.741] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/543/543 201 8100 - - ---- 13/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:52 standalone.localdomain haproxy[70940]: 172.21.0.2:60686 [13/Oct/2025:14:03:51.766] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/788/788 201 506 - - ---- 13/2/1/1/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:03:52 standalone.localdomain haproxy[70940]: 172.21.0.2:60696 [13/Oct/2025:14:03:52.297] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/274/274 200 396 - - ---- 13/2/0/0/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:03:52 standalone.localdomain ceph-mon[29756]: pgmap v862: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:52 standalone.localdomain haproxy[70940]: 172.21.0.2:60696 [13/Oct/2025:14:03:52.584] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/31/31 200 331 - - ---- 12/1/0/0/0 0/0 "GET /v3/users?domain_id=default&name=swift HTTP/1.1"
Oct 13 14:03:52 standalone.localdomain ansible-async_wrapper.py[105145]: Module complete (105145)
Oct 13 14:03:52 standalone.localdomain haproxy[70940]: 172.21.0.2:60696 [13/Oct/2025:14:03:52.622] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/269/269 201 498 - - ---- 12/1/0/0/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:03:52 standalone.localdomain runuser[105295]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:03:52 standalone.localdomain ansible-async_wrapper.py[105210]: Module complete (105210)
Oct 13 14:03:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:03:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:03:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:03:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:03:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:03:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:03:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v863: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:03:54 standalone.localdomain ceph-mon[29756]: pgmap v863: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:03:54 standalone.localdomain podman[105550]: 2025-10-13 14:03:54.794899179 +0000 UTC m=+0.064159159 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, com.redhat.component=openstack-keystone-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, distribution-scope=public, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-keystone, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:03:54 standalone.localdomain podman[105550]: 2025-10-13 14:03:54.808876412 +0000 UTC m=+0.078136442 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:18, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-keystone, tcib_managed=true, vcs-type=git)
Oct 13 14:03:54 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:03:55 standalone.localdomain ansible-async_wrapper.py[105127]: Done in kid B.
Oct 13 14:03:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v864: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:55 standalone.localdomain ansible-async_wrapper.py[105144]: Done in kid B.
Oct 13 14:03:56 standalone.localdomain ansible-async_wrapper.py[105209]: Done in kid B.
Oct 13 14:03:56 standalone.localdomain ceph-mon[29756]: pgmap v864: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v865: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:03:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:03:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:03:57 standalone.localdomain systemd[1]: tmp-crun.WD2C3a.mount: Deactivated successfully.
Oct 13 14:03:57 standalone.localdomain podman[105741]: 2025-10-13 14:03:57.82145333 +0000 UTC m=+0.079022288 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, com.redhat.component=openstack-barbican-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=barbican_api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, managed_by=tripleo_ansible, release=1, vcs-type=git, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 barbican-api, description=Red Hat OpenStack Platform 17.1 barbican-api, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:22:44)
Oct 13 14:03:57 standalone.localdomain ansible-async_wrapper.py[105740]: Invoked with 732851860048 60 /tmp/ansible-root/ansible-tmp-1760364237.6033583-105729-79548804479046/AnsiballZ_role_assignment.py _
Oct 13 14:03:57 standalone.localdomain ansible-async_wrapper.py[105790]: Starting module and watcher
Oct 13 14:03:57 standalone.localdomain ansible-async_wrapper.py[105790]: Start watching 105791 (60)
Oct 13 14:03:57 standalone.localdomain ansible-async_wrapper.py[105791]: Start module (105791)
Oct 13 14:03:57 standalone.localdomain ansible-async_wrapper.py[105740]: Return async_wrapper task started.
Oct 13 14:03:57 standalone.localdomain podman[105742]: 2025-10-13 14:03:57.875610678 +0000 UTC m=+0.126629654 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, managed_by=tripleo_ansible, com.redhat.component=openstack-barbican-keystone-listener-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, release=1, container_name=barbican_keystone_listener, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T16:18:19, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-barbican-keystone-listener, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, architecture=x86_64)
Oct 13 14:03:57 standalone.localdomain podman[105741]: 2025-10-13 14:03:57.927691241 +0000 UTC m=+0.185260229 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, name=rhosp17/openstack-barbican-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, container_name=barbican_api, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-barbican-api-container, tcib_managed=true)
Oct 13 14:03:57 standalone.localdomain podman[105748]: 2025-10-13 14:03:57.938600279 +0000 UTC m=+0.185136396 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, build-date=2025-07-21T15:36:22, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, config_id=tripleo_step3, container_name=barbican_worker, name=rhosp17/openstack-barbican-worker, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, managed_by=tripleo_ansible, release=1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-barbican-worker-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9)
Oct 13 14:03:57 standalone.localdomain podman[105748]: 2025-10-13 14:03:57.960124735 +0000 UTC m=+0.206660842 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, version=17.1.9, config_id=tripleo_step3, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, architecture=x86_64, container_name=barbican_worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-type=git, build-date=2025-07-21T15:36:22, name=rhosp17/openstack-barbican-worker, vendor=Red Hat, Inc., release=1, com.redhat.component=openstack-barbican-worker-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-worker, maintainer=OpenStack TripleO Team)
Oct 13 14:03:57 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:03:57 standalone.localdomain podman[105742]: 2025-10-13 14:03:57.983346545 +0000 UTC m=+0.234365591 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, container_name=barbican_keystone_listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, com.redhat.component=openstack-barbican-keystone-listener-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, version=17.1.9, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, build-date=2025-07-21T16:18:19, vendor=Red Hat, Inc.)
Oct 13 14:03:57 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:03:58 standalone.localdomain python3[105793]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=barbican project=service domain= role=admin state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:03:58 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:03:58 standalone.localdomain ansible-async_wrapper.py[105832]: Invoked with 818742412048 60 /tmp/ansible-root/ansible-tmp-1760364237.9532113-105729-51812772876018/AnsiballZ_role_assignment.py _
Oct 13 14:03:58 standalone.localdomain ansible-async_wrapper.py[105835]: Starting module and watcher
Oct 13 14:03:58 standalone.localdomain ansible-async_wrapper.py[105835]: Start watching 105836 (60)
Oct 13 14:03:58 standalone.localdomain ansible-async_wrapper.py[105836]: Start module (105836)
Oct 13 14:03:58 standalone.localdomain ansible-async_wrapper.py[105832]: Return async_wrapper task started.
Oct 13 14:03:58 standalone.localdomain python3[105837]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=cinder project=service domain= role=admin state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:03:58 standalone.localdomain ansible-async_wrapper.py[105848]: Invoked with 86467490211 60 /tmp/ansible-root/ansible-tmp-1760364238.2387357-105729-124408837506661/AnsiballZ_role_assignment.py _
Oct 13 14:03:58 standalone.localdomain ansible-async_wrapper.py[105859]: Starting module and watcher
Oct 13 14:03:58 standalone.localdomain ansible-async_wrapper.py[105859]: Start watching 105860 (60)
Oct 13 14:03:58 standalone.localdomain ansible-async_wrapper.py[105860]: Start module (105860)
Oct 13 14:03:58 standalone.localdomain ansible-async_wrapper.py[105848]: Return async_wrapper task started.
Oct 13 14:03:58 standalone.localdomain haproxy[70940]: 172.21.0.2:60702 [13/Oct/2025:14:03:58.606] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:58 standalone.localdomain python3[105861]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=cinderv3 project=service domain= role=admin state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:03:58 standalone.localdomain ceph-mon[29756]: pgmap v865: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:58 standalone.localdomain ansible-async_wrapper.py[105872]: Invoked with 703112658352 60 /tmp/ansible-root/ansible-tmp-1760364238.5175576-105729-144009444223876/AnsiballZ_role_assignment.py _
Oct 13 14:03:58 standalone.localdomain ansible-async_wrapper.py[105875]: Starting module and watcher
Oct 13 14:03:58 standalone.localdomain ansible-async_wrapper.py[105875]: Start watching 105876 (60)
Oct 13 14:03:58 standalone.localdomain ansible-async_wrapper.py[105876]: Start module (105876)
Oct 13 14:03:58 standalone.localdomain ansible-async_wrapper.py[105872]: Return async_wrapper task started.
Oct 13 14:03:58 standalone.localdomain haproxy[70940]: 172.21.0.2:60702 [13/Oct/2025:14:03:58.612] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/284/284 201 8100 - - ---- 13/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:58 standalone.localdomain haproxy[70940]: 172.21.0.2:60712 [13/Oct/2025:14:03:58.888] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/9/9 300 515 - - ---- 13/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:58 standalone.localdomain python3[105877]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=glance project=service domain= role=admin state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105888]: Invoked with 405914794906 60 /tmp/ansible-root/ansible-tmp-1760364238.8225749-105729-64247145817883/AnsiballZ_role_assignment.py _
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105891]: Starting module and watcher
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105892]: Start module (105892)
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105891]: Start watching 105892 (60)
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105888]: Return async_wrapper task started.
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60712 [13/Oct/2025:14:03:58.899] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/263/263 201 8100 - - ---- 13/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60702 [13/Oct/2025:14:03:58.901] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/280/280 404 291 - - ---- 13/2/1/1/0 0/0 "GET /v3/roles/admin HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain python3[105893]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=heat project=service domain= role=admin state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60712 [13/Oct/2025:14:03:59.171] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/33/33 404 291 - - ---- 13/2/1/1/0 0/0 "GET /v3/roles/admin HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60702 [13/Oct/2025:14:03:59.182] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/36/36 200 531 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles?name=admin HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60712 [13/Oct/2025:14:03:59.208] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/23/23 200 531 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles?name=admin HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60728 [13/Oct/2025:14:03:59.209] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/28/28 300 515 - - ---- 14/3/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60702 [13/Oct/2025:14:03:59.221] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/39/39 404 294 - - ---- 14/3/2/2/0 0/0 "GET /v3/users/barbican HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60712 [13/Oct/2025:14:03:59.236] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/46/46 404 292 - - ---- 14/3/2/2/0 0/0 "GET /v3/users/cinder HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105904]: Invoked with 475458525056 60 /tmp/ansible-root/ansible-tmp-1760364239.1311347-105729-123508256505491/AnsiballZ_role_assignment.py _
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105907]: Starting module and watcher
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105907]: Start watching 105908 (60)
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105908]: Start module (105908)
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105904]: Return async_wrapper task started.
Oct 13 14:03:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v866: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:03:59 standalone.localdomain python3[105909]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=heat_stack_domain_admin domain=802de09a9466458a91c2aee8ab64ffc3 role=admin state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None project=None
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60728 [13/Oct/2025:14:03:59.238] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/307/307 201 8100 - - ---- 15/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60702 [13/Oct/2025:14:03:59.262] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/309/309 200 602 - - ---- 15/4/3/3/0 0/0 "GET /v3/users?name=barbican HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60712 [13/Oct/2025:14:03:59.288] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/306/306 200 596 - - ---- 15/4/3/3/0 0/0 "GET /v3/users?name=cinder HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60736 [13/Oct/2025:14:03:59.507] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/89/89 300 515 - - ---- 15/4/2/2/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60728 [13/Oct/2025:14:03:59.561] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/54/54 404 291 - - ---- 15/4/3/3/0 0/0 "GET /v3/roles/admin HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105928]: Invoked with 663528935911 60 /tmp/ansible-root/ansible-tmp-1760364239.4621804-105729-234262841336893/AnsiballZ_role_assignment.py _
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105931]: Starting module and watcher
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105931]: Start watching 105932 (60)
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105932]: Start module (105932)
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105928]: Return async_wrapper task started.
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60702 [13/Oct/2025:14:03:59.576] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/64/64 404 296 - - ---- 15/4/3/3/0 0/0 "GET /v3/projects/service HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60712 [13/Oct/2025:14:03:59.598] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/68/68 404 296 - - ---- 15/4/3/3/0 0/0 "GET /v3/projects/service HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain python3[105933]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=heat-cfn project=service domain= role=admin state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60736 [13/Oct/2025:14:03:59.599] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/328/328 201 8100 - - ---- 16/5/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60728 [13/Oct/2025:14:03:59.619] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/321/321 200 531 - - ---- 16/5/4/4/0 0/0 "GET /v3/roles?name=admin HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60702 [13/Oct/2025:14:03:59.645] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/311/311 200 605 - - ---- 16/5/4/4/0 0/0 "GET /v3/projects?name=service HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105944]: Invoked with 617572776341 60 /tmp/ansible-root/ansible-tmp-1760364239.7128494-105729-86947673424379/AnsiballZ_role_assignment.py _
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60712 [13/Oct/2025:14:03:59.668] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/306/306 200 605 - - ---- 16/5/4/4/0 0/0 "GET /v3/projects?name=service HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain haproxy[70940]: 172.21.0.2:50678 [13/Oct/2025:14:03:59.725] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/250/250 300 515 - - ---- 16/5/3/3/0 0/0 "GET / HTTP/1.1"
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105947]: Starting module and watcher
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105947]: Start watching 105948 (60)
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105948]: Start module (105948)
Oct 13 14:03:59 standalone.localdomain ansible-async_wrapper.py[105944]: Return async_wrapper task started.
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60736 [13/Oct/2025:14:03:59.936] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/129/129 404 291 - - ---- 16/5/4/4/0 0/0 "GET /v3/roles/admin HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60728 [13/Oct/2025:14:03:59.945] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/148/148 404 294 - - ---- 16/5/4/4/0 0/0 "GET /v3/users/cinderv3 HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60702 [13/Oct/2025:14:03:59.964] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/141/141 200 456 - - ---- 17/6/5/5/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=9d7dff422a3f478abaea6a6f86b4f702&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60712 [13/Oct/2025:14:03:59.980] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/138/138 200 456 - - ---- 17/6/5/5/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=006fcf9d033646a883a614fee0c9f470&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain python3[105949]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=manila project=service domain= role=admin state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:04:00 standalone.localdomain podman[105964]: 2025-10-13 14:04:00.271396079 +0000 UTC m=+0.055867341 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, build-date=2025-07-21T12:58:45, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-mariadb-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 14:04:00 standalone.localdomain podman[105964]: 2025-10-13 14:04:00.298940303 +0000 UTC m=+0.083411545 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, release=1, maintainer=OpenStack TripleO Team, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-mariadb-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T12:58:45, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:04:00 standalone.localdomain ansible-async_wrapper.py[105960]: Invoked with 669893488899 60 /tmp/ansible-root/ansible-tmp-1760364240.0770469-105729-102294565157358/AnsiballZ_role_assignment.py _
Oct 13 14:04:00 standalone.localdomain ansible-async_wrapper.py[105993]: Starting module and watcher
Oct 13 14:04:00 standalone.localdomain ansible-async_wrapper.py[105993]: Start watching 105994 (60)
Oct 13 14:04:00 standalone.localdomain ansible-async_wrapper.py[105994]: Start module (105994)
Oct 13 14:04:00 standalone.localdomain ansible-async_wrapper.py[105960]: Return async_wrapper task started.
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:50678 [13/Oct/2025:14:03:59.981] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/414/414 201 8100 - - ---- 17/6/5/5/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain podman[105999]: 2025-10-13 14:04:00.395966199 +0000 UTC m=+0.054491570 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T13:08:11, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, version=17.1.9, com.redhat.component=openstack-haproxy-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1)
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60736 [13/Oct/2025:14:04:00.068] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/346/346 200 531 - - ---- 17/6/5/5/0 0/0 "GET /v3/roles?name=admin HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:50688 [13/Oct/2025:14:04:00.095] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/323/323 300 515 - - ---- 17/6/4/4/0 0/0 "GET / HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain podman[105999]: 2025-10-13 14:04:00.424910165 +0000 UTC m=+0.083435466 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, summary=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, architecture=x86_64, release=1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-haproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, com.redhat.component=openstack-haproxy-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T13:08:11, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60728 [13/Oct/2025:14:04:00.097] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/352/352 200 602 - - ---- 17/6/5/5/0 0/0 "GET /v3/users?name=cinderv3 HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60702 [13/Oct/2025:14:04:00.110] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/354/354 200 2640 - - ---- 17/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60712 [13/Oct/2025:14:04:00.120] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/361/361 200 2640 - - ---- 17/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain python3[105995]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=manilav2 project=service domain= role=admin state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:50678 [13/Oct/2025:14:04:00.401] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/101/101 404 291 - - ---- 17/6/5/5/0 0/0 "GET /v3/roles/admin HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain podman[106045]: 2025-10-13 14:04:00.527158133 +0000 UTC m=+0.070437534 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, name=rhosp17/openstack-rabbitmq, tcib_managed=true, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.component=openstack-rabbitmq-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, maintainer=OpenStack TripleO Team)
Oct 13 14:04:00 standalone.localdomain podman[106045]: 2025-10-13 14:04:00.560574758 +0000 UTC m=+0.103854159 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, name=rhosp17/openstack-rabbitmq, release=1, batch=17.1_20250721.1, architecture=x86_64, com.redhat.component=openstack-rabbitmq-container, summary=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, version=17.1.9)
Oct 13 14:04:00 standalone.localdomain ansible-async_wrapper.py[106052]: Invoked with 410888138125 60 /tmp/ansible-root/ansible-tmp-1760364240.407162-105729-216787592654322/AnsiballZ_role_assignment.py _
Oct 13 14:04:00 standalone.localdomain ansible-async_wrapper.py[106077]: Starting module and watcher
Oct 13 14:04:00 standalone.localdomain ansible-async_wrapper.py[106077]: Start watching 106078 (60)
Oct 13 14:04:00 standalone.localdomain ansible-async_wrapper.py[106078]: Start module (106078)
Oct 13 14:04:00 standalone.localdomain ansible-async_wrapper.py[106052]: Return async_wrapper task started.
Oct 13 14:04:00 standalone.localdomain ceph-mon[29756]: pgmap v866: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:00 standalone.localdomain python3[106079]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=neutron project=service domain= role=admin state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:50688 [13/Oct/2025:14:04:00.420] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/348/348 201 8100 - - ---- 18/7/6/6/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60736 [13/Oct/2025:14:04:00.419] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/370/370 404 292 - - ---- 18/7/6/6/0 0/0 "GET /v3/users/glance HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60728 [13/Oct/2025:14:04:00.455] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/352/352 404 296 - - ---- 19/8/7/7/0 0/0 "GET /v3/projects/service HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60702 [13/Oct/2025:14:04:00.469] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/360/360 200 602 - - ---- 19/8/7/7/0 0/0 "GET /v3/users?name=barbican HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60712 [13/Oct/2025:14:04:00.484] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/414/414 200 596 - - ---- 19/8/7/7/0 0/0 "GET /v3/users?name=cinder HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain python3[106083]: ansible-ansible.legacy.async_status Invoked with jid=732851860048.105740 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:50678 [13/Oct/2025:14:04:00.506] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/406/406 200 531 - - ---- 19/8/7/7/0 0/0 "GET /v3/roles?name=admin HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:50704 [13/Oct/2025:14:04:00.512] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/402/402 300 515 - - ---- 19/8/6/6/0 0/0 "GET / HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:50688 [13/Oct/2025:14:04:00.774] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/156/156 404 291 - - ---- 19/8/7/7/0 0/0 "GET /v3/roles/admin HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60736 [13/Oct/2025:14:04:00.792] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/161/161 200 596 - - ---- 19/8/7/7/0 0/0 "GET /v3/users?name=glance HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:50712 [13/Oct/2025:14:04:00.798] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/159/159 300 515 - - ---- 19/8/6/6/0 0/0 "GET / HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60728 [13/Oct/2025:14:04:00.810] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/164/164 200 605 - - ---- 19/8/7/7/0 0/0 "GET /v3/projects?name=service HTTP/1.1"
Oct 13 14:04:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60702 [13/Oct/2025:14:04:00.836] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/151/151 200 919 - - ---- 19/8/7/7/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:60712 [13/Oct/2025:14:04:00.903] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/97/97 200 919 - - ---- 19/8/6/6/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:50704 [13/Oct/2025:14:04:00.915] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/342/342 201 8100 - - ---- 20/9/8/8/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:50678 [13/Oct/2025:14:04:00.917] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/364/364 404 290 - - ---- 20/9/8/8/0 0/0 "GET /v3/users/heat HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:50688 [13/Oct/2025:14:04:00.934] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/361/361 200 531 - - ---- 21/10/9/9/0 0/0 "GET /v3/roles?name=admin HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:60736 [13/Oct/2025:14:04:00.958] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/356/356 404 296 - - ---- 21/10/9/9/0 0/0 "GET /v3/projects/service HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v867: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:50712 [13/Oct/2025:14:04:00.960] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/616/616 201 8100 - - ---- 21/10/9/9/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:60728 [13/Oct/2025:14:04:00.983] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/605/605 200 456 - - ---- 21/10/9/9/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=da52d851b182439cafdb19c4843c5299&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:60702 [13/Oct/2025:14:04:01.002] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/597/597 200 456 - - ---- 21/10/9/9/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=9d7dff422a3f478abaea6a6f86b4f702&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:60712 [13/Oct/2025:14:04:01.009] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/601/601 200 456 - - ---- 21/10/9/9/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=006fcf9d033646a883a614fee0c9f470&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:50726 [13/Oct/2025:14:04:01.035] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/577/577 300 515 - - ---- 21/10/8/8/0 0/0 "GET / HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:50704 [13/Oct/2025:14:04:01.262] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/374/374 404 291 - - ---- 21/10/9/9/0 0/0 "GET /v3/roles/admin HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:50678 [13/Oct/2025:14:04:01.284] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/383/383 200 590 - - ---- 21/10/9/9/0 0/0 "GET /v3/users?name=heat HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:50740 [13/Oct/2025:14:04:01.291] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/379/379 300 515 - - ---- 21/10/8/8/0 0/0 "GET / HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:50688 [13/Oct/2025:14:04:01.300] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/383/383 200 431 - - ---- 21/10/9/9/0 0/0 "GET /v3/domains/802de09a9466458a91c2aee8ab64ffc3 HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:60736 [13/Oct/2025:14:04:01.317] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/381/381 200 605 - - ---- 21/10/9/9/0 0/0 "GET /v3/projects?name=service HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:50712 [13/Oct/2025:14:04:01.586] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/129/129 404 291 - - ---- 21/10/9/9/0 0/0 "GET /v3/roles/admin HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:60728 [13/Oct/2025:14:04:01.594] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/139/139 200 2640 - - ---- 21/10/9/9/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain haproxy[70940]: 172.21.0.2:60702 [13/Oct/2025:14:04:01.605] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/162/162 204 165 - - ---- 21/10/9/9/0 0/0 "PUT /v3/projects/206ad444831c476581ca5d3bd344a788/users/9d7dff422a3f478abaea6a6f86b4f702/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:04:01 standalone.localdomain ansible-async_wrapper.py[105791]: Module complete (105791)
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50726 [13/Oct/2025:14:04:01.616] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/432/432 201 8100 - - ---- 20/9/8/8/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:60712 [13/Oct/2025:14:04:01.616] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/459/459 204 165 - - ---- 20/9/8/8/0 0/0 "PUT /v3/projects/206ad444831c476581ca5d3bd344a788/users/006fcf9d033646a883a614fee0c9f470/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50704 [13/Oct/2025:14:04:01.643] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/479/479 200 531 - - ---- 19/8/7/7/0 0/0 "GET /v3/roles?name=admin HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain ansible-async_wrapper.py[105836]: Module complete (105836)
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50740 [13/Oct/2025:14:04:01.674] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/716/716 201 8100 - - ---- 19/8/7/7/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50678 [13/Oct/2025:14:04:01.676] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/732/732 404 296 - - ---- 19/8/7/7/0 0/0 "GET /v3/projects/service HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50688 [13/Oct/2025:14:04:01.689] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/742/742 404 310 - - ---- 19/8/7/7/0 0/0 "GET /v3/users/heat_stack_domain_admin?domain_id=802de09a9466458a91c2aee8ab64ffc3 HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:60736 [13/Oct/2025:14:04:01.705] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/737/737 200 456 - - ---- 19/8/7/7/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=754da14aa25448e4850b03194fd6ca26&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50712 [13/Oct/2025:14:04:01.719] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/736/736 200 531 - - ---- 19/8/7/7/0 0/0 "GET /v3/roles?name=admin HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:60728 [13/Oct/2025:14:04:01.740] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/746/746 200 602 - - ---- 19/8/7/7/0 0/0 "GET /v3/users?name=cinderv3 HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50726 [13/Oct/2025:14:04:02.058] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/447/447 404 291 - - ---- 19/8/7/7/0 0/0 "GET /v3/roles/admin HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50704 [13/Oct/2025:14:04:02.125] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/402/402 404 294 - - ---- 19/8/7/7/0 0/0 "GET /v3/users/heat-cfn HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50740 [13/Oct/2025:14:04:02.399] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/142/142 404 291 - - ---- 19/8/7/7/0 0/0 "GET /v3/roles/admin HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50678 [13/Oct/2025:14:04:02.411] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/155/155 200 605 - - ---- 19/8/7/7/0 0/0 "GET /v3/projects?name=service HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50688 [13/Oct/2025:14:04:02.435] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/169/169 200 715 - - ---- 19/8/7/7/0 0/0 "GET /v3/users?domain_id=802de09a9466458a91c2aee8ab64ffc3&name=heat_stack_domain_admin HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:60736 [13/Oct/2025:14:04:02.448] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/179/179 200 2640 - - ---- 19/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain ceph-mon[29756]: pgmap v867: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50712 [13/Oct/2025:14:04:02.460] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/197/197 404 292 - - ---- 19/8/7/7/0 0/0 "GET /v3/users/manila HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:60728 [13/Oct/2025:14:04:02.493] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/190/190 200 919 - - ---- 19/8/7/7/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50726 [13/Oct/2025:14:04:02.508] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/194/194 200 531 - - ---- 19/8/7/7/0 0/0 "GET /v3/roles?name=admin HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50704 [13/Oct/2025:14:04:02.531] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/214/214 200 602 - - ---- 19/8/7/7/0 0/0 "GET /v3/users?name=heat-cfn HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50740 [13/Oct/2025:14:04:02.546] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/225/225 200 531 - - ---- 19/8/7/7/0 0/0 "GET /v3/roles?name=admin HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50678 [13/Oct/2025:14:04:02.578] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/215/215 200 456 - - ---- 19/8/7/7/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=d5f09829ce434a5e808443d0fecdba73&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50688 [13/Oct/2025:14:04:02.612] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/207/207 200 406 - - ---- 19/8/7/7/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=0a6a73501bd448b999d89b837335e3b2 HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain podman[106157]: 2025-10-13 14:04:02.838744606 +0000 UTC m=+0.105691915 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, version=17.1.9, config_id=ovn_cluster_northd, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, container_name=ovn_cluster_northd, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T13:30:04, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, name=rhosp17/openstack-ovn-northd, com.redhat.component=openstack-ovn-northd-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 14:04:02 standalone.localdomain ansible-async_wrapper.py[105790]: Done in kid B.
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:60736 [13/Oct/2025:14:04:02.632] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/224/224 200 596 - - ---- 19/8/7/7/0 0/0 "GET /v3/users?name=glance HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain podman[106157]: 2025-10-13 14:04:02.877621311 +0000 UTC m=+0.144568650 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=ovn_cluster_northd, version=17.1.9, distribution-scope=public, architecture=x86_64, container_name=ovn_cluster_northd, com.redhat.component=openstack-ovn-northd-container, summary=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, release=1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:30:04, description=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-northd, batch=17.1_20250721.1, io.openshift.expose-services=)
Oct 13 14:04:02 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50712 [13/Oct/2025:14:04:02.661] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/232/232 200 596 - - ---- 19/8/7/7/0 0/0 "GET /v3/users?name=manila HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:60728 [13/Oct/2025:14:04:02.688] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/220/220 200 456 - - ---- 19/8/7/7/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=da52d851b182439cafdb19c4843c5299&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50726 [13/Oct/2025:14:04:02.709] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/224/224 404 294 - - ---- 19/8/7/7/0 0/0 "GET /v3/users/manilav2 HTTP/1.1"
Oct 13 14:04:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50704 [13/Oct/2025:14:04:02.752] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/230/230 404 296 - - ---- 19/8/7/7/0 0/0 "GET /v3/projects/service HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50740 [13/Oct/2025:14:04:02.778] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/245/245 404 293 - - ---- 19/8/7/7/0 0/0 "GET /v3/users/neutron HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50678 [13/Oct/2025:14:04:02.800] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/242/242 200 2640 - - ---- 19/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50688 [13/Oct/2025:14:04:02.826] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/235/235 200 2640 - - ---- 19/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:60736 [13/Oct/2025:14:04:02.865] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/216/216 200 919 - - ---- 19/8/7/7/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50712 [13/Oct/2025:14:04:02.899] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/209/209 404 296 - - ---- 19/8/7/7/0 0/0 "GET /v3/projects/service HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:60728 [13/Oct/2025:14:04:02.913] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/226/226 204 165 - - ---- 19/8/7/7/0 0/0 "PUT /v3/projects/206ad444831c476581ca5d3bd344a788/users/da52d851b182439cafdb19c4843c5299/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain ansible-async_wrapper.py[105835]: Done in kid B.
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50726 [13/Oct/2025:14:04:02.935] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/284/284 200 602 - - ---- 18/7/6/6/0 0/0 "GET /v3/users?name=manilav2 HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain ansible-async_wrapper.py[105860]: Module complete (105860)
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50704 [13/Oct/2025:14:04:02.986] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/255/255 200 605 - - ---- 18/7/6/6/0 0/0 "GET /v3/projects?name=service HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50740 [13/Oct/2025:14:04:03.025] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/240/240 200 599 - - ---- 18/7/6/6/0 0/0 "GET /v3/users?name=neutron HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50678 [13/Oct/2025:14:04:03.049] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/238/238 200 590 - - ---- 18/7/6/6/0 0/0 "GET /v3/users?name=heat HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50688 [13/Oct/2025:14:04:03.067] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/230/230 200 431 - - ---- 18/7/6/6/0 0/0 "GET /v3/domains/802de09a9466458a91c2aee8ab64ffc3 HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:60736 [13/Oct/2025:14:04:03.091] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/217/217 200 456 - - ---- 18/7/6/6/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=754da14aa25448e4850b03194fd6ca26&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50712 [13/Oct/2025:14:04:03.112] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/211/211 200 605 - - ---- 18/7/6/6/0 0/0 "GET /v3/projects?name=service HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50726 [13/Oct/2025:14:04:03.225] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/120/120 404 296 - - ---- 18/7/6/6/0 0/0 "GET /v3/projects/service HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50704 [13/Oct/2025:14:04:03.254] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/102/102 200 456 - - ---- 18/7/6/6/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=d0e2538a83ad4c7b9eb079c3a71f0187&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50740 [13/Oct/2025:14:04:03.271] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/105/105 404 296 - - ---- 18/7/6/6/0 0/0 "GET /v3/projects/service HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50678 [13/Oct/2025:14:04:03.295] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/95/95 200 919 - - ---- 18/7/6/6/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50688 [13/Oct/2025:14:04:03.302] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/110/110 200 715 - - ---- 18/7/6/6/0 0/0 "GET /v3/users?domain_id=802de09a9466458a91c2aee8ab64ffc3&name=heat_stack_domain_admin HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain ansible-async_wrapper.py[105859]: Done in kid B.
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:60736 [13/Oct/2025:14:04:03.312] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/127/127 204 165 - - ---- 18/7/6/6/0 0/0 "PUT /v3/projects/206ad444831c476581ca5d3bd344a788/users/754da14aa25448e4850b03194fd6ca26/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v868: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:03 standalone.localdomain runuser[106182]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:04:03 standalone.localdomain ansible-async_wrapper.py[105876]: Module complete (105876)
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50712 [13/Oct/2025:14:04:03.331] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/211/211 200 456 - - ---- 17/6/5/5/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=370caf092b1848c094ed1eb6f2ac50a9&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50726 [13/Oct/2025:14:04:03.350] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/216/216 200 605 - - ---- 17/6/5/5/0 0/0 "GET /v3/projects?name=service HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50704 [13/Oct/2025:14:04:03.364] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/229/229 200 2640 - - ---- 17/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50740 [13/Oct/2025:14:04:03.381] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/240/240 200 605 - - ---- 17/6/5/5/0 0/0 "GET /v3/projects?name=service HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50678 [13/Oct/2025:14:04:03.400] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/243/243 200 456 - - ---- 17/6/5/5/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=d5f09829ce434a5e808443d0fecdba73&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50688 [13/Oct/2025:14:04:03.422] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/244/244 200 455 - - ---- 17/6/5/5/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=0a6a73501bd448b999d89b837335e3b2&scope.domain.id=802de09a9466458a91c2aee8ab64ffc3 HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50712 [13/Oct/2025:14:04:03.547] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/137/137 200 2640 - - ---- 17/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50726 [13/Oct/2025:14:04:03.576] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/123/123 200 456 - - ---- 17/6/5/5/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=d58ee94dbc88452685d808c956aeae06&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50704 [13/Oct/2025:14:04:03.598] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/133/133 200 602 - - ---- 17/6/5/5/0 0/0 "GET /v3/users?name=heat-cfn HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain ansible-async_wrapper.py[105875]: Done in kid B.
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50740 [13/Oct/2025:14:04:03.631] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/127/127 200 456 - - ---- 17/6/5/5/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=150f517a42bd40f99c171eab4e1a8605&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50678 [13/Oct/2025:14:04:03.648] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/156/156 204 165 - - ---- 17/6/5/5/0 0/0 "PUT /v3/projects/206ad444831c476581ca5d3bd344a788/users/d5f09829ce434a5e808443d0fecdba73/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain ansible-async_wrapper.py[105892]: Module complete (105892)
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50688 [13/Oct/2025:14:04:03.673] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/221/221 204 165 - - ---- 16/5/4/4/0 0/0 "PUT /v3/domains/802de09a9466458a91c2aee8ab64ffc3/users/0a6a73501bd448b999d89b837335e3b2/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50712 [13/Oct/2025:14:04:03.692] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/276/276 200 596 - - ---- 15/4/3/3/0 0/0 "GET /v3/users?name=manila HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50726 [13/Oct/2025:14:04:03.704] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/282/282 200 2640 - - ---- 15/4/3/3/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:03 standalone.localdomain ansible-async_wrapper.py[105908]: Module complete (105908)
Oct 13 14:04:04 standalone.localdomain haproxy[70940]: 172.21.0.2:50704 [13/Oct/2025:14:04:03.737] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/272/272 200 919 - - ---- 15/4/3/3/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:04:04 standalone.localdomain ansible-async_wrapper.py[105891]: Done in kid B.
Oct 13 14:04:04 standalone.localdomain haproxy[70940]: 172.21.0.2:50740 [13/Oct/2025:14:04:03.763] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/262/262 200 2640 - - ---- 15/4/3/3/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:04 standalone.localdomain haproxy[70940]: 172.21.0.2:50712 [13/Oct/2025:14:04:03.973] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/69/69 200 919 - - ---- 15/4/3/3/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:04:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:04:04 standalone.localdomain haproxy[70940]: 172.21.0.2:50726 [13/Oct/2025:14:04:03.993] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/75/75 200 602 - - ---- 15/4/3/3/0 0/0 "GET /v3/users?name=manilav2 HTTP/1.1"
Oct 13 14:04:04 standalone.localdomain haproxy[70940]: 172.21.0.2:50704 [13/Oct/2025:14:04:04.014] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/66/66 200 456 - - ---- 15/4/3/3/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=d0e2538a83ad4c7b9eb079c3a71f0187&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:04 standalone.localdomain haproxy[70940]: 172.21.0.2:50740 [13/Oct/2025:14:04:04.028] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/84/84 200 599 - - ---- 15/4/3/3/0 0/0 "GET /v3/users?name=neutron HTTP/1.1"
Oct 13 14:04:04 standalone.localdomain haproxy[70940]: 172.21.0.2:50712 [13/Oct/2025:14:04:04.048] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/82/82 200 456 - - ---- 15/4/3/3/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=370caf092b1848c094ed1eb6f2ac50a9&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:04 standalone.localdomain podman[106233]: 2025-10-13 14:04:04.146475505 +0000 UTC m=+0.083411405 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.buildah.version=1.33.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, architecture=x86_64, com.redhat.component=openstack-mariadb-container, release=1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., vcs-type=git, container_name=clustercheck, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:04:04 standalone.localdomain haproxy[70940]: 172.21.0.2:50726 [13/Oct/2025:14:04:04.071] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/79/79 200 919 - - ---- 15/4/3/3/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:04:04 standalone.localdomain haproxy[70940]: 172.21.0.2:50704 [13/Oct/2025:14:04:04.084] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/100/100 204 165 - - ---- 15/4/3/3/0 0/0 "PUT /v3/projects/206ad444831c476581ca5d3bd344a788/users/d0e2538a83ad4c7b9eb079c3a71f0187/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:04:04 standalone.localdomain runuser[106182]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:04:04 standalone.localdomain podman[106233]: 2025-10-13 14:04:04.213934574 +0000 UTC m=+0.150870494 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, container_name=clustercheck, com.redhat.component=openstack-mariadb-container, config_id=tripleo_step2, vendor=Red Hat, Inc., vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:45, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:04:04 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:04:04 standalone.localdomain haproxy[70940]: 172.21.0.2:50740 [13/Oct/2025:14:04:04.116] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/129/129 200 919 - - ---- 14/3/2/2/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:04:04 standalone.localdomain ansible-async_wrapper.py[105932]: Module complete (105932)
Oct 13 14:04:04 standalone.localdomain haproxy[70940]: 172.21.0.2:50712 [13/Oct/2025:14:04:04.133] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/145/145 204 165 - - ---- 14/3/2/2/0 0/0 "PUT /v3/projects/206ad444831c476581ca5d3bd344a788/users/370caf092b1848c094ed1eb6f2ac50a9/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:04:04 standalone.localdomain haproxy[70940]: 172.21.0.2:50726 [13/Oct/2025:14:04:04.155] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/173/173 200 456 - - ---- 13/2/1/1/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=d58ee94dbc88452685d808c956aeae06&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:04 standalone.localdomain haproxy[70940]: 172.21.0.2:50740 [13/Oct/2025:14:04:04.250] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/94/94 200 456 - - ---- 13/2/1/1/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=150f517a42bd40f99c171eab4e1a8605&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:04 standalone.localdomain ansible-async_wrapper.py[105907]: Done in kid B.
Oct 13 14:04:04 standalone.localdomain runuser[106288]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:04:04 standalone.localdomain ansible-async_wrapper.py[105948]: Module complete (105948)
Oct 13 14:04:04 standalone.localdomain haproxy[70940]: 172.21.0.2:50726 [13/Oct/2025:14:04:04.333] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/53/53 204 165 - - ---- 13/2/1/1/0 0/0 "PUT /v3/projects/206ad444831c476581ca5d3bd344a788/users/d58ee94dbc88452685d808c956aeae06/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:04:04 standalone.localdomain haproxy[70940]: 172.21.0.2:50740 [13/Oct/2025:14:04:04.352] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/89/89 204 165 - - ---- 12/1/0/0/0 0/0 "PUT /v3/projects/206ad444831c476581ca5d3bd344a788/users/150f517a42bd40f99c171eab4e1a8605/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:04:04 standalone.localdomain ansible-async_wrapper.py[105994]: Module complete (105994)
Oct 13 14:04:04 standalone.localdomain ansible-async_wrapper.py[106078]: Module complete (106078)
Oct 13 14:04:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:04:04 standalone.localdomain ansible-async_wrapper.py[105931]: Done in kid B.
Oct 13 14:04:04 standalone.localdomain ceph-mon[29756]: pgmap v868: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:04 standalone.localdomain runuser[106288]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:04:04 standalone.localdomain ansible-async_wrapper.py[105947]: Done in kid B.
Oct 13 14:04:05 standalone.localdomain runuser[106432]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:04:05 standalone.localdomain ansible-async_wrapper.py[105993]: Done in kid B.
Oct 13 14:04:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v869: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:05 standalone.localdomain ansible-async_wrapper.py[106077]: Done in kid B.
Oct 13 14:04:05 standalone.localdomain runuser[106432]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:04:06 standalone.localdomain python3[106541]: ansible-ansible.legacy.async_status Invoked with jid=732851860048.105740 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:06 standalone.localdomain python3[106585]: ansible-ansible.legacy.async_status Invoked with jid=818742412048.105832 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:06 standalone.localdomain python3[106629]: ansible-ansible.legacy.async_status Invoked with jid=86467490211.105848 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:06 standalone.localdomain ceph-mon[29756]: pgmap v869: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:06 standalone.localdomain python3[106640]: ansible-ansible.legacy.async_status Invoked with jid=703112658352.105872 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:07 standalone.localdomain python3[106643]: ansible-ansible.legacy.async_status Invoked with jid=405914794906.105888 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:07 standalone.localdomain python3[106646]: ansible-ansible.legacy.async_status Invoked with jid=475458525056.105904 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v870: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:07 standalone.localdomain python3[106649]: ansible-ansible.legacy.async_status Invoked with jid=663528935911.105928 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:07 standalone.localdomain python3[106660]: ansible-ansible.legacy.async_status Invoked with jid=617572776341.105944 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:08 standalone.localdomain python3[106663]: ansible-ansible.legacy.async_status Invoked with jid=669893488899.105960 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:08 standalone.localdomain python3[106666]: ansible-ansible.legacy.async_status Invoked with jid=410888138125.106052 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:08 standalone.localdomain ceph-mon[29756]: pgmap v870: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:08 standalone.localdomain ansible-async_wrapper.py[106678]: Invoked with 611147018590 60 /tmp/ansible-root/ansible-tmp-1760364248.590236-106667-123203592131361/AnsiballZ_role_assignment.py _
Oct 13 14:04:08 standalone.localdomain ansible-async_wrapper.py[106689]: Starting module and watcher
Oct 13 14:04:08 standalone.localdomain ansible-async_wrapper.py[106689]: Start watching 106690 (60)
Oct 13 14:04:08 standalone.localdomain ansible-async_wrapper.py[106690]: Start module (106690)
Oct 13 14:04:08 standalone.localdomain ansible-async_wrapper.py[106678]: Return async_wrapper task started.
Oct 13 14:04:09 standalone.localdomain python3[106691]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=nova project=service domain= role=admin state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:04:09 standalone.localdomain ansible-async_wrapper.py[106703]: Invoked with 465770704704 60 /tmp/ansible-root/ansible-tmp-1760364248.9327674-106667-200325569796315/AnsiballZ_role_assignment.py _
Oct 13 14:04:09 standalone.localdomain ansible-async_wrapper.py[106706]: Starting module and watcher
Oct 13 14:04:09 standalone.localdomain ansible-async_wrapper.py[106706]: Start watching 106707 (60)
Oct 13 14:04:09 standalone.localdomain ansible-async_wrapper.py[106707]: Start module (106707)
Oct 13 14:04:09 standalone.localdomain ansible-async_wrapper.py[106703]: Return async_wrapper task started.
Oct 13 14:04:09 standalone.localdomain python3[106708]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=placement project=service domain= role=admin state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:04:09 standalone.localdomain ansible-async_wrapper.py[106719]: Invoked with 19384240053 60 /tmp/ansible-root/ansible-tmp-1760364249.2805562-106667-260896514864263/AnsiballZ_role_assignment.py _
Oct 13 14:04:09 standalone.localdomain ansible-async_wrapper.py[106722]: Starting module and watcher
Oct 13 14:04:09 standalone.localdomain ansible-async_wrapper.py[106722]: Start watching 106723 (60)
Oct 13 14:04:09 standalone.localdomain ansible-async_wrapper.py[106723]: Start module (106723)
Oct 13 14:04:09 standalone.localdomain ansible-async_wrapper.py[106719]: Return async_wrapper task started.
Oct 13 14:04:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v871: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:04:09 standalone.localdomain haproxy[70940]: 172.21.0.2:50742 [13/Oct/2025:14:04:09.573] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2/2 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:04:09 standalone.localdomain python3[106724]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=swift project=service domain= role=admin state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:04:09 standalone.localdomain python3[106728]: ansible-ansible.legacy.async_status Invoked with jid=611147018590.106678 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:09 standalone.localdomain haproxy[70940]: 172.21.0.2:50742 [13/Oct/2025:14:04:09.577] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/288/288 201 8100 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:04:09 standalone.localdomain haproxy[70940]: 172.21.0.2:50742 [13/Oct/2025:14:04:09.873] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/21/21 404 291 - - ---- 13/2/1/1/0 0/0 "GET /v3/roles/admin HTTP/1.1"
Oct 13 14:04:09 standalone.localdomain haproxy[70940]: 172.21.0.2:41582 [13/Oct/2025:14:04:09.885] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/14/14 300 515 - - ---- 13/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:04:09 standalone.localdomain haproxy[70940]: 172.21.0.2:50742 [13/Oct/2025:14:04:09.900] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/15/15 200 531 - - ---- 13/2/1/1/0 0/0 "GET /v3/roles?name=admin HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:41582 [13/Oct/2025:14:04:09.900] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/323/323 201 8100 - - ---- 13/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:50742 [13/Oct/2025:14:04:09.922] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/363/363 404 290 - - ---- 14/3/2/2/0 0/0 "GET /v3/users/nova HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:41582 [13/Oct/2025:14:04:10.227] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/96/96 404 291 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles/admin HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:41594 [13/Oct/2025:14:04:10.254] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/73/73 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:50742 [13/Oct/2025:14:04:10.289] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/86/86 200 590 - - ---- 14/3/2/2/0 0/0 "GET /v3/users?name=nova HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:41582 [13/Oct/2025:14:04:10.329] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/71/71 200 531 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles?name=admin HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:41594 [13/Oct/2025:14:04:10.331] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/351/351 201 8100 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain ceph-mon[29756]: pgmap v871: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:50742 [13/Oct/2025:14:04:10.381] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/325/325 404 296 - - ---- 14/3/2/2/0 0/0 "GET /v3/projects/service HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:41582 [13/Oct/2025:14:04:10.406] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/326/326 404 295 - - ---- 14/3/2/2/0 0/0 "GET /v3/users/placement HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:41594 [13/Oct/2025:14:04:10.695] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/55/55 404 291 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles/admin HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:50742 [13/Oct/2025:14:04:10.712] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/57/57 200 605 - - ---- 14/3/2/2/0 0/0 "GET /v3/projects?name=service HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:41582 [13/Oct/2025:14:04:10.734] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/60/60 200 605 - - ---- 14/3/2/2/0 0/0 "GET /v3/users?name=placement HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain podman[106747]: 2025-10-13 14:04:10.812593174 +0000 UTC m=+0.076654545 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:41594 [13/Oct/2025:14:04:10.754] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/60/60 200 531 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles?name=admin HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain podman[106747]: 2025-10-13 14:04:10.822932064 +0000 UTC m=+0.086993435 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1)
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:50742 [13/Oct/2025:14:04:10.781] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/50/50 200 456 - - ---- 14/3/2/2/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=f9ccee51fa914ab59733fdcbd8ba3194&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:41582 [13/Oct/2025:14:04:10.798] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/58/58 404 296 - - ---- 14/3/2/2/0 0/0 "GET /v3/projects/service HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:41594 [13/Oct/2025:14:04:10.819] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/131/131 404 291 - - ---- 14/3/2/2/0 0/0 "GET /v3/users/swift HTTP/1.1"
Oct 13 14:04:10 standalone.localdomain haproxy[70940]: 172.21.0.2:50742 [13/Oct/2025:14:04:10.839] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/135/135 200 2640 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41582 [13/Oct/2025:14:04:10.860] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/142/142 200 605 - - ---- 14/3/2/2/0 0/0 "GET /v3/projects?name=service HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41594 [13/Oct/2025:14:04:10.955] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/88/88 200 593 - - ---- 14/3/2/2/0 0/0 "GET /v3/users?name=swift HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:50742 [13/Oct/2025:14:04:10.980] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/101/101 200 590 - - ---- 14/3/2/2/0 0/0 "GET /v3/users?name=nova HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41582 [13/Oct/2025:14:04:11.012] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/84/84 200 456 - - ---- 14/3/2/2/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=cbabdefa3ba54f91b7fdc6b6827ba846&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41594 [13/Oct/2025:14:04:11.045] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/76/76 404 296 - - ---- 14/3/2/2/0 0/0 "GET /v3/projects/service HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:50742 [13/Oct/2025:14:04:11.089] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/54/54 200 919 - - ---- 14/3/2/2/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41582 [13/Oct/2025:14:04:11.101] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/60/60 200 2640 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41594 [13/Oct/2025:14:04:11.124] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/56/56 200 605 - - ---- 14/3/2/2/0 0/0 "GET /v3/projects?name=service HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:50742 [13/Oct/2025:14:04:11.152] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/45/45 200 456 - - ---- 14/3/2/2/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=f9ccee51fa914ab59733fdcbd8ba3194&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41582 [13/Oct/2025:14:04:11.172] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/47/47 200 605 - - ---- 14/3/2/2/0 0/0 "GET /v3/users?name=placement HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41594 [13/Oct/2025:14:04:11.186] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/44/44 200 456 - - ---- 14/3/2/2/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=4d2a0342504f40469650191deb127018&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:50742 [13/Oct/2025:14:04:11.205] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/64/64 204 165 - - ---- 14/3/2/2/0 0/0 "PUT /v3/projects/206ad444831c476581ca5d3bd344a788/users/f9ccee51fa914ab59733fdcbd8ba3194/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41582 [13/Oct/2025:14:04:11.226] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/98/98 200 919 - - ---- 13/2/1/1/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41594 [13/Oct/2025:14:04:11.233] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/109/109 200 2640 - - ---- 13/2/1/1/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain ansible-async_wrapper.py[106690]: Module complete (106690)
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41582 [13/Oct/2025:14:04:11.330] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/41/41 200 456 - - ---- 13/2/1/1/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=cbabdefa3ba54f91b7fdc6b6827ba846&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41594 [13/Oct/2025:14:04:11.347] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/55/55 200 593 - - ---- 13/2/1/1/0 0/0 "GET /v3/users?name=swift HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41582 [13/Oct/2025:14:04:11.373] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/71/71 204 165 - - ---- 13/2/1/1/0 0/0 "PUT /v3/projects/206ad444831c476581ca5d3bd344a788/users/cbabdefa3ba54f91b7fdc6b6827ba846/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41594 [13/Oct/2025:14:04:11.407] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/103/103 200 919 - - ---- 12/1/0/0/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v872: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:11 standalone.localdomain ansible-async_wrapper.py[106707]: Module complete (106707)
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41594 [13/Oct/2025:14:04:11.515] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/21/21 200 456 - - ---- 12/1/0/0/0 0/0 "GET /v3/role_assignments?role.id=c256a819446e40c281f7e7aa56296425&user.id=4d2a0342504f40469650191deb127018&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain haproxy[70940]: 172.21.0.2:41594 [13/Oct/2025:14:04:11.541] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/39/39 204 165 - - ---- 12/1/0/0/0 0/0 "PUT /v3/projects/206ad444831c476581ca5d3bd344a788/users/4d2a0342504f40469650191deb127018/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:04:11 standalone.localdomain ansible-async_wrapper.py[106723]: Module complete (106723)
Oct 13 14:04:11 standalone.localdomain ceph-mon[29756]: pgmap v872: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:04:11 standalone.localdomain systemd[1]: tmp-crun.CM6lVq.mount: Deactivated successfully.
Oct 13 14:04:11 standalone.localdomain podman[106767]: 2025-10-13 14:04:11.81743387 +0000 UTC m=+0.086746118 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-keystone-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, version=17.1.9, container_name=keystone, io.openshift.expose-services=, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-type=git)
Oct 13 14:04:11 standalone.localdomain podman[106767]: 2025-10-13 14:04:11.854475677 +0000 UTC m=+0.123787925 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO Team, container_name=keystone, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 keystone, 
distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, name=rhosp17/openstack-keystone)
Oct 13 14:04:11 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:04:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v873: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:04:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:04:13 standalone.localdomain ansible-async_wrapper.py[106689]: Done in kid B.
Oct 13 14:04:13 standalone.localdomain podman[106865]: 2025-10-13 14:04:13.838005238 +0000 UTC m=+0.100364480 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, version=17.1.9, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, name=rhosp17/openstack-horizon, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, container_name=horizon, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, build-date=2025-07-21T13:58:15, description=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-horizon-container, managed_by=tripleo_ansible, io.openshift.expose-services=)
Oct 13 14:04:13 standalone.localdomain podman[106865]: 2025-10-13 14:04:13.871069783 +0000 UTC m=+0.133429095 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, name=rhosp17/openstack-horizon, description=Red Hat OpenStack Platform 17.1 horizon, tcib_managed=true, release=1, vendor=Red Hat, Inc., config_id=tripleo_step3, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:15, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack TripleO Team, container_name=horizon, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 horizon, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-horizon-container, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, version=17.1.9)
Oct 13 14:04:13 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:04:13 standalone.localdomain podman[106864]: 2025-10-13 14:04:13.886536251 +0000 UTC m=+0.152062771 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-memcached, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', 
'/var/log/containers/memcached:/var/log/memcached:rw']}, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, managed_by=tripleo_ansible, com.redhat.component=openstack-memcached-container, build-date=2025-07-21T12:58:43, description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:04:13 standalone.localdomain podman[106864]: 2025-10-13 14:04:13.908675567 +0000 UTC m=+0.174202117 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:43, description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, version=17.1.9, com.redhat.component=openstack-memcached-container, summary=Red Hat 
OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, maintainer=OpenStack TripleO Team)
Oct 13 14:04:13 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:04:14 standalone.localdomain ansible-async_wrapper.py[106706]: Done in kid B.
Oct 13 14:04:14 standalone.localdomain ansible-async_wrapper.py[106722]: Done in kid B.
Oct 13 14:04:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:04:14 standalone.localdomain ceph-mon[29756]: pgmap v873: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:15 standalone.localdomain python3[107010]: ansible-ansible.legacy.async_status Invoked with jid=611147018590.106678 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:15 standalone.localdomain python3[107013]: ansible-ansible.legacy.async_status Invoked with jid=465770704704.106703 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v874: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:15 standalone.localdomain python3[107016]: ansible-ansible.legacy.async_status Invoked with jid=19384240053.106719 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:16 standalone.localdomain ansible-async_wrapper.py[107079]: Invoked with 674269058205 60 /tmp/ansible-root/ansible-tmp-1760364255.8791451-107060-117489373669325/AnsiballZ_role_assignment.py _
Oct 13 14:04:16 standalone.localdomain ansible-async_wrapper.py[107082]: Starting module and watcher
Oct 13 14:04:16 standalone.localdomain ansible-async_wrapper.py[107082]: Start watching 107083 (60)
Oct 13 14:04:16 standalone.localdomain ansible-async_wrapper.py[107083]: Start module (107083)
Oct 13 14:04:16 standalone.localdomain ansible-async_wrapper.py[107079]: Return async_wrapper task started.
Oct 13 14:04:16 standalone.localdomain python3[107084]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=cinder project=service domain= role=service state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:04:16 standalone.localdomain runuser[107142]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:04:16 standalone.localdomain ansible-async_wrapper.py[107137]: Invoked with 107057291203 60 /tmp/ansible-root/ansible-tmp-1760364256.153073-107060-85685964364998/AnsiballZ_role_assignment.py _
Oct 13 14:04:16 standalone.localdomain ansible-async_wrapper.py[107191]: Starting module and watcher
Oct 13 14:04:16 standalone.localdomain ansible-async_wrapper.py[107191]: Start watching 107195 (60)
Oct 13 14:04:16 standalone.localdomain ansible-async_wrapper.py[107195]: Start module (107195)
Oct 13 14:04:16 standalone.localdomain ansible-async_wrapper.py[107137]: Return async_wrapper task started.
Oct 13 14:04:16 standalone.localdomain ceph-mon[29756]: pgmap v874: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:16 standalone.localdomain python3[107198]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=cinderv3 project=service domain= role=service state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:04:16 standalone.localdomain ansible-async_wrapper.py[107243]: Invoked with 813467366382 60 /tmp/ansible-root/ansible-tmp-1760364256.5301006-107060-40624831326787/AnsiballZ_role_assignment.py _
Oct 13 14:04:16 standalone.localdomain ansible-async_wrapper.py[107246]: Starting module and watcher
Oct 13 14:04:16 standalone.localdomain ansible-async_wrapper.py[107246]: Start watching 107247 (60)
Oct 13 14:04:16 standalone.localdomain ansible-async_wrapper.py[107247]: Start module (107247)
Oct 13 14:04:16 standalone.localdomain ansible-async_wrapper.py[107243]: Return async_wrapper task started.
Oct 13 14:04:16 standalone.localdomain haproxy[70940]: 172.21.0.2:41608 [13/Oct/2025:14:04:16.760] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 12/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:04:16 standalone.localdomain python3[107248]: ansible-openstack.cloud.role_assignment Invoked with cloud=standalone user=nova project=service domain= role=service state=present wait=True timeout=180 interface=public auth_type=None auth=NOT_LOGGING_PARAMETER region_name=None availability_zone=None validate_certs=None ca_cert=None client_cert=None client_key=NOT_LOGGING_PARAMETER api_timeout=None group=None
Oct 13 14:04:16 standalone.localdomain runuser[107142]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:04:16 standalone.localdomain python3[107265]: ansible-ansible.legacy.async_status Invoked with jid=674269058205.107079 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41608 [13/Oct/2025:14:04:16.764] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/313/313 201 8100 - - ---- 12/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain runuser[107278]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41608 [13/Oct/2025:14:04:17.081] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/17/17 404 293 - - ---- 12/1/0/0/0 0/0 "GET /v3/roles/service HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41608 [13/Oct/2025:14:04:17.101] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/15/15 200 518 - - ---- 12/1/0/0/0 0/0 "GET /v3/roles?name=service HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41608 [13/Oct/2025:14:04:17.119] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/23/23 404 292 - - ---- 12/1/0/0/0 0/0 "GET /v3/users/cinder HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41608 [13/Oct/2025:14:04:17.144] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/24/24 200 596 - - ---- 12/1/0/0/0 0/0 "GET /v3/users?name=cinder HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41608 [13/Oct/2025:14:04:17.171] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/19/19 404 296 - - ---- 13/2/1/1/0 0/0 "GET /v3/projects/service HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41622 [13/Oct/2025:14:04:17.184] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/8/8 300 515 - - ---- 13/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41622 [13/Oct/2025:14:04:17.194] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/262/262 201 8100 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41608 [13/Oct/2025:14:04:17.196] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/276/276 200 605 - - ---- 14/3/2/2/0 0/0 "GET /v3/projects?name=service HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41628 [13/Oct/2025:14:04:17.413] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/61/61 300 515 - - ---- 14/3/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41622 [13/Oct/2025:14:04:17.467] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/28/28 404 293 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles/service HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v875: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:17 standalone.localdomain runuser[107278]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:04:17 standalone.localdomain runuser[107332]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41628 [13/Oct/2025:14:04:17.475] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/285/285 201 8100 - - ---- 14/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41608 [13/Oct/2025:14:04:17.483] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/290/290 200 456 - - ---- 14/3/2/2/0 0/0 "GET /v3/role_assignments?role.id=0abff0352b4342aaa2f7c3affc739552&user.id=006fcf9d033646a883a614fee0c9f470&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41622 [13/Oct/2025:14:04:17.499] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/290/290 200 518 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles?name=service HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41628 [13/Oct/2025:14:04:17.768] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/41/41 404 293 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles/service HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41608 [13/Oct/2025:14:04:17.780] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/44/44 200 2640 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41622 [13/Oct/2025:14:04:17.793] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/56/56 404 294 - - ---- 14/3/2/2/0 0/0 "GET /v3/users/cinderv3 HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41628 [13/Oct/2025:14:04:17.814] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/55/55 200 518 - - ---- 14/3/2/2/0 0/0 "GET /v3/roles?name=service HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41608 [13/Oct/2025:14:04:17.833] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/66/66 200 596 - - ---- 14/3/2/2/0 0/0 "GET /v3/users?name=cinder HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41622 [13/Oct/2025:14:04:17.853] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/72/72 200 602 - - ---- 14/3/2/2/0 0/0 "GET /v3/users?name=cinderv3 HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41628 [13/Oct/2025:14:04:17.873] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/76/76 404 290 - - ---- 14/3/2/2/0 0/0 "GET /v3/users/nova HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41608 [13/Oct/2025:14:04:17.908] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/62/62 200 919 - - ---- 14/3/2/2/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:04:17 standalone.localdomain haproxy[70940]: 172.21.0.2:41622 [13/Oct/2025:14:04:17.930] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/62/62 404 296 - - ---- 14/3/2/2/0 0/0 "GET /v3/projects/service HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41628 [13/Oct/2025:14:04:17.952] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/69/69 200 590 - - ---- 14/3/2/2/0 0/0 "GET /v3/users?name=nova HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41608 [13/Oct/2025:14:04:17.979] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/55/55 200 456 - - ---- 14/3/2/2/0 0/0 "GET /v3/role_assignments?role.id=0abff0352b4342aaa2f7c3affc739552&user.id=006fcf9d033646a883a614fee0c9f470&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41622 [13/Oct/2025:14:04:17.995] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/54/54 200 605 - - ---- 14/3/2/2/0 0/0 "GET /v3/projects?name=service HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41628 [13/Oct/2025:14:04:18.026] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/42/42 404 296 - - ---- 14/3/2/2/0 0/0 "GET /v3/projects/service HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41608 [13/Oct/2025:14:04:18.039] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/49/49 204 165 - - ---- 14/3/2/2/0 0/0 "PUT /v3/projects/206ad444831c476581ca5d3bd344a788/users/006fcf9d033646a883a614fee0c9f470/roles/0abff0352b4342aaa2f7c3affc739552 HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41622 [13/Oct/2025:14:04:18.057] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/76/76 200 456 - - ---- 13/2/1/1/0 0/0 "GET /v3/role_assignments?role.id=0abff0352b4342aaa2f7c3affc739552&user.id=da52d851b182439cafdb19c4843c5299&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41628 [13/Oct/2025:14:04:18.076] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/72/72 200 605 - - ---- 13/2/1/1/0 0/0 "GET /v3/projects?name=service HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41622 [13/Oct/2025:14:04:18.138] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/23/23 200 2640 - - ---- 13/2/1/1/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41628 [13/Oct/2025:14:04:18.158] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/16/16 200 456 - - ---- 13/2/1/1/0 0/0 "GET /v3/role_assignments?role.id=0abff0352b4342aaa2f7c3affc739552&user.id=f9ccee51fa914ab59733fdcbd8ba3194&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain ansible-async_wrapper.py[107083]: Module complete (107083)
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41622 [13/Oct/2025:14:04:18.167] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/31/31 200 602 - - ---- 13/2/1/1/0 0/0 "GET /v3/users?name=cinderv3 HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41628 [13/Oct/2025:14:04:18.177] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/35/35 200 2640 - - ---- 13/2/1/1/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41622 [13/Oct/2025:14:04:18.201] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/27/27 200 919 - - ---- 13/2/1/1/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41628 [13/Oct/2025:14:04:18.220] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/31/31 200 590 - - ---- 13/2/1/1/0 0/0 "GET /v3/users?name=nova HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41622 [13/Oct/2025:14:04:18.235] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/26/26 200 456 - - ---- 13/2/1/1/0 0/0 "GET /v3/role_assignments?role.id=0abff0352b4342aaa2f7c3affc739552&user.id=da52d851b182439cafdb19c4843c5299&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41628 [13/Oct/2025:14:04:18.258] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/17/17 200 919 - - ---- 13/2/1/1/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41622 [13/Oct/2025:14:04:18.268] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/27/27 204 165 - - ---- 13/2/1/1/0 0/0 "PUT /v3/projects/206ad444831c476581ca5d3bd344a788/users/da52d851b182439cafdb19c4843c5299/roles/0abff0352b4342aaa2f7c3affc739552 HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41628 [13/Oct/2025:14:04:18.284] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/60/60 200 456 - - ---- 12/1/0/0/0 0/0 "GET /v3/role_assignments?role.id=0abff0352b4342aaa2f7c3affc739552&user.id=f9ccee51fa914ab59733fdcbd8ba3194&scope.project.id=206ad444831c476581ca5d3bd344a788 HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain haproxy[70940]: 172.21.0.2:41628 [13/Oct/2025:14:04:18.350] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/25/25 204 165 - - ---- 12/1/0/0/0 0/0 "PUT /v3/projects/206ad444831c476581ca5d3bd344a788/users/f9ccee51fa914ab59733fdcbd8ba3194/roles/0abff0352b4342aaa2f7c3affc739552 HTTP/1.1"
Oct 13 14:04:18 standalone.localdomain ansible-async_wrapper.py[107195]: Module complete (107195)
Oct 13 14:04:18 standalone.localdomain runuser[107332]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:04:18 standalone.localdomain ansible-async_wrapper.py[107247]: Module complete (107247)
Oct 13 14:04:18 standalone.localdomain ceph-mon[29756]: pgmap v875: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v876: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:04:20 standalone.localdomain ceph-mon[29756]: pgmap v876: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:21 standalone.localdomain ansible-async_wrapper.py[107082]: Done in kid B.
Oct 13 14:04:21 standalone.localdomain ansible-async_wrapper.py[107191]: Done in kid B.
Oct 13 14:04:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v877: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:21 standalone.localdomain ansible-async_wrapper.py[107246]: Done in kid B.
Oct 13 14:04:22 standalone.localdomain python3[107432]: ansible-ansible.legacy.async_status Invoked with jid=674269058205.107079 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:22 standalone.localdomain python3[107435]: ansible-ansible.legacy.async_status Invoked with jid=107057291203.107137 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:22 standalone.localdomain ceph-mon[29756]: pgmap v877: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:22 standalone.localdomain python3[107440]: ansible-ansible.legacy.async_status Invoked with jid=813467366382.107243 mode=status _async_dir=/root/.ansible_async
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:04:23
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'manila_metadata', 'manila_data', 'images', '.mgr', 'volumes', 'backups']
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:04:23 standalone.localdomain python3[107521]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:04:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v878: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:23 standalone.localdomain python3[107527]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/root/.ansible/tmp/ansible-tmp-1760364263.2275486-107511-43066702607662/source _original_basename=tmpy7_esqts follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:04:24 standalone.localdomain python3[107543]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:04:24 standalone.localdomain python3[107555]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/root/.ansible/tmp/ansible-tmp-1760364263.8565245-107533-137604917577695/source _original_basename=tmp3mypsmz_ follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:04:24 standalone.localdomain python3[107569]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:04:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:04:24 standalone.localdomain ceph-mon[29756]: pgmap v878: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:24 standalone.localdomain python3[107614]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760364264.2958105-107559-260910415204343/source _original_basename=tmpg4ouv17x follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:04:25 standalone.localdomain python3[107669]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:04:25 standalone.localdomain python3[107678]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760364264.8263702-107659-180086504433256/source _original_basename=tmpjtlxzif8 follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:04:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v879: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:25 standalone.localdomain python3[107687]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 13 14:04:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:04:25 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:04:25 standalone.localdomain systemd-rc-local-generator[107763]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:04:25 standalone.localdomain podman[107730]: 2025-10-13 14:04:25.815970248 +0000 UTC m=+0.112939030 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', 
'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, name=rhosp17/openstack-keystone, architecture=x86_64, build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-keystone-container, managed_by=tripleo_ansible, container_name=keystone_cron, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:04:25 standalone.localdomain systemd-sysv-generator[107772]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:04:25 standalone.localdomain podman[107730]: 2025-10-13 14:04:25.828908918 +0000 UTC m=+0.125877720 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, container_name=keystone_cron, distribution-scope=public, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 keystone, 
name=rhosp17/openstack-keystone, release=1, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:27:18, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-keystone-container, io.openshift.expose-services=, batch=17.1_20250721.1)
Oct 13 14:04:25 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:04:26 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:04:26 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:04:26 standalone.localdomain systemd-sysv-generator[107815]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:04:26 standalone.localdomain systemd-rc-local-generator[107809]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:04:26 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:04:26 standalone.localdomain ceph-mon[29756]: pgmap v879: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:26 standalone.localdomain python3[107912]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:04:26 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:04:26 standalone.localdomain systemd-sysv-generator[107936]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:04:26 standalone.localdomain systemd-rc-local-generator[107932]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:04:27 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:04:27 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:04:27 standalone.localdomain systemd-sysv-generator[107986]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:04:27 standalone.localdomain systemd-rc-local-generator[107983]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:04:27 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:04:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v880: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:27 standalone.localdomain systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m.
Oct 13 14:04:27 standalone.localdomain python3[107999]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:04:27 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:04:28 standalone.localdomain systemd-sysv-generator[108026]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:04:28 standalone.localdomain systemd-rc-local-generator[108023]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:04:28 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:04:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:04:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:04:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:04:28 standalone.localdomain podman[108045]: 2025-10-13 14:04:28.364643786 +0000 UTC m=+0.096403348 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, container_name=barbican_keystone_listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, architecture=x86_64, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-barbican-keystone-listener-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:18:19, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:04:28 standalone.localdomain podman[108046]: 2025-10-13 14:04:28.409818785 +0000 UTC m=+0.143127065 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, build-date=2025-07-21T15:36:22, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, architecture=x86_64, container_name=barbican_worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.component=openstack-barbican-worker-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-barbican-worker, tcib_managed=true)
Oct 13 14:04:28 standalone.localdomain podman[108045]: 2025-10-13 14:04:28.415015566 +0000 UTC m=+0.146775148 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, container_name=barbican_keystone_listener, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-barbican-keystone-listener, architecture=x86_64, build-date=2025-07-21T16:18:19, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, config_id=tripleo_step3, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vendor=Red Hat, 
Inc., release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-barbican-keystone-listener-container, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:04:28 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:04:28 standalone.localdomain podman[108044]: 2025-10-13 14:04:28.494617751 +0000 UTC m=+0.227856808 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, version=17.1.9, name=rhosp17/openstack-barbican-api, build-date=2025-07-21T15:22:44, com.redhat.component=openstack-barbican-api-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, container_name=barbican_api, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 barbican-api, distribution-scope=public, tcib_managed=true, release=1, vendor=Red Hat, Inc.)
Oct 13 14:04:28 standalone.localdomain podman[108046]: 2025-10-13 14:04:28.5113687 +0000 UTC m=+0.244676920 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, config_id=tripleo_step3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, architecture=x86_64, container_name=barbican_worker, name=rhosp17/openstack-barbican-worker, distribution-scope=public, build-date=2025-07-21T15:36:22, com.redhat.component=openstack-barbican-worker-container, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc.)
Oct 13 14:04:28 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:04:28 standalone.localdomain podman[108044]: 2025-10-13 14:04:28.531271977 +0000 UTC m=+0.264511044 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, build-date=2025-07-21T15:22:44, config_id=tripleo_step3, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, distribution-scope=public, container_name=barbican_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, com.redhat.component=openstack-barbican-api-container, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:04:28 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:04:28 standalone.localdomain python3[108108]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:04:28 standalone.localdomain ceph-mon[29756]: pgmap v880: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:28 standalone.localdomain python3[108132]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760364268.3819597-108077-114354449518612/source _original_basename=tmpfxddmkw_ follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:04:29 standalone.localdomain runuser[108143]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:04:29 standalone.localdomain python3[108138]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:04:29 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:04:29 standalone.localdomain systemd-rc-local-generator[108216]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:04:29 standalone.localdomain systemd-sysv-generator[108222]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:04:29 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:04:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v881: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:04:29 standalone.localdomain systemd[1]: Reached target tripleo_nova_libvirt.target.
Oct 13 14:04:29 standalone.localdomain runuser[108143]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:04:29 standalone.localdomain runuser[108256]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:04:30 standalone.localdomain python3[108305]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:04:30 standalone.localdomain runuser[108256]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:04:30 standalone.localdomain runuser[108342]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:04:30 standalone.localdomain ceph-mon[29756]: pgmap v881: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:30 standalone.localdomain ansible-async_wrapper.py[108405]: Invoked with 355681657509 3600 /root/.ansible/tmp/ansible-tmp-1760364270.7696729-108393-160141221292877/AnsiballZ_command.py _
Oct 13 14:04:30 standalone.localdomain ansible-async_wrapper.py[108408]: Starting module and watcher
Oct 13 14:04:30 standalone.localdomain ansible-async_wrapper.py[108408]: Start watching 108409 (3600)
Oct 13 14:04:30 standalone.localdomain ansible-async_wrapper.py[108409]: Start module (108409)
Oct 13 14:04:30 standalone.localdomain ansible-async_wrapper.py[108405]: Return async_wrapper task started.
Oct 13 14:04:31 standalone.localdomain python3[108418]: ansible-ansible.legacy.async_status Invoked with jid=355681657509.108405 mode=status _async_dir=/tmp/.ansible_async
Oct 13 14:04:31 standalone.localdomain runuser[108342]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:04:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v882: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:32 standalone.localdomain ceph-mon[29756]: pgmap v882: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v883: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:04:33 standalone.localdomain podman[108550]: 2025-10-13 14:04:33.82355327 +0000 UTC m=+0.080265946 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.component=openstack-ovn-northd-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, config_id=ovn_cluster_northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:30:04, version=17.1.9, architecture=x86_64, container_name=ovn_cluster_northd, name=rhosp17/openstack-ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:04:33 standalone.localdomain podman[108550]: 2025-10-13 14:04:33.83675728 +0000 UTC m=+0.093469936 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, vcs-type=git, config_id=ovn_cluster_northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, com.redhat.component=openstack-ovn-northd-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, container_name=ovn_cluster_northd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.buildah.version=1.33.12, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04, name=rhosp17/openstack-ovn-northd, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:04:33 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:04:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:04:34 standalone.localdomain podman[108598]: 2025-10-13 14:04:34.492893134 +0000 UTC m=+0.063954802 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-mariadb, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, container_name=clustercheck, io.openshift.expose-services=, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, maintainer=OpenStack TripleO Team)
Oct 13 14:04:34 standalone.localdomain podman[108598]: 2025-10-13 14:04:34.555801393 +0000 UTC m=+0.126863051 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, batch=17.1_20250721.1, distribution-scope=public, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T12:58:45, container_name=clustercheck, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:04:34 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:04:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:04:34 standalone.localdomain ceph-mon[29756]: pgmap v883: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]:    (file: /etc/puppet/hiera.yaml)
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]: Warning: Undefined variable '::deploy_config_name';
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]:    (file & line not available)
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]:    (file & line not available)
Oct 13 14:04:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v884: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 6]
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]:                     with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 6]
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]:                     with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 6]
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]:                     with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 6]
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]:                     with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 6]
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]:                     with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 6]
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Oct 13 14:04:35 standalone.localdomain ansible-async_wrapper.py[108408]: 108409 still running (3600)
Oct 13 14:04:35 standalone.localdomain puppet-user[108419]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.52 seconds
Oct 13 14:04:36 standalone.localdomain ceph-mon[29756]: pgmap v884: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:37 standalone.localdomain sudo[108935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:04:37 standalone.localdomain sudo[108935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:04:37 standalone.localdomain sudo[108935]: pam_unix(sudo:session): session closed for user root
Oct 13 14:04:37 standalone.localdomain sudo[108950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:04:37 standalone.localdomain sudo[108950]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:04:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v885: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:37 standalone.localdomain ceph-mon[29756]: pgmap v885: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:37 standalone.localdomain sudo[108950]: pam_unix(sudo:session): session closed for user root
Oct 13 14:04:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:04:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:04:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:04:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:04:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:04:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:04:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:04:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:04:37 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev a4cbbf7a-0972-49ad-8176-f7895a108a45 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:04:37 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev a4cbbf7a-0972-49ad-8176-f7895a108a45 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:04:37 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event a4cbbf7a-0972-49ad-8176-f7895a108a45 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:04:37 standalone.localdomain sudo[109022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:04:37 standalone.localdomain sudo[109022]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:04:37 standalone.localdomain sudo[109022]: pam_unix(sudo:session): session closed for user root
Oct 13 14:04:38 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 42 completed events
Oct 13 14:04:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:04:38 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:04:38 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:04:38 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:04:38 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:04:38 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:04:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v886: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.588419) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364279588524, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1347, "num_deletes": 257, "total_data_size": 1111041, "memory_usage": 1137688, "flush_reason": "Manual Compaction"}
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364279597222, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1080725, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20374, "largest_seqno": 21720, "table_properties": {"data_size": 1075288, "index_size": 2781, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12028, "raw_average_key_size": 18, "raw_value_size": 1063789, "raw_average_value_size": 1675, "num_data_blocks": 129, "num_entries": 635, "num_filter_entries": 635, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760364169, "oldest_key_time": 1760364169, "file_creation_time": 1760364279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 8847 microseconds, and 3994 cpu microseconds.
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.597274) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1080725 bytes OK
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.597296) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.599480) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.599523) EVENT_LOG_v1 {"time_micros": 1760364279599517, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.599548) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1104835, prev total WAL file size 1105324, number of live WAL files 2.
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.600221) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1055KB)], [47(3875KB)]
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364279600301, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 5049591, "oldest_snapshot_seqno": -1}
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 3477 keys, 4961939 bytes, temperature: kUnknown
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364279627076, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 4961939, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4938252, "index_size": 13927, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8709, "raw_key_size": 83423, "raw_average_key_size": 23, "raw_value_size": 4874942, "raw_average_value_size": 1402, "num_data_blocks": 602, "num_entries": 3477, "num_filter_entries": 3477, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760364279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.627343) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 4961939 bytes
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.629531) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.8 rd, 186.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 3.8 +0.0 blob) out(4.7 +0.0 blob), read-write-amplify(9.3) write-amplify(4.6) OK, records in: 4007, records dropped: 530 output_compression: NoCompression
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.629566) EVENT_LOG_v1 {"time_micros": 1760364279629551, "job": 24, "event": "compaction_finished", "compaction_time_micros": 26609, "compaction_time_cpu_micros": 15661, "output_level": 6, "num_output_files": 1, "total_output_size": 4961939, "num_input_records": 4007, "num_output_records": 3477, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364279629870, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364279630662, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.600059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.630704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.630710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.630713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.630716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:04:39.630719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:04:39 standalone.localdomain ceph-mon[29756]: pgmap v886: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:40 standalone.localdomain puppet-user[108419]: Notice: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]/ensure: created
Oct 13 14:04:40 standalone.localdomain ansible-async_wrapper.py[108408]: 108409 still running (3595)
Oct 13 14:04:41 standalone.localdomain python3[109095]: ansible-ansible.legacy.async_status Invoked with jid=355681657509.108405 mode=status _async_dir=/tmp/.ansible_async
Oct 13 14:04:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v887: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:04:41 standalone.localdomain systemd[1]: tmp-crun.vlePp0.mount: Deactivated successfully.
Oct 13 14:04:41 standalone.localdomain podman[109107]: 2025-10-13 14:04:41.816849812 +0000 UTC m=+0.083631422 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, release=1, vcs-type=git, container_name=iscsid, io.buildah.version=1.33.12)
Oct 13 14:04:41 standalone.localdomain podman[109107]: 2025-10-13 14:04:41.831027501 +0000 UTC m=+0.097809171 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, architecture=x86_64, distribution-scope=public, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:04:41 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:04:41 standalone.localdomain runuser[109130]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:04:42 standalone.localdomain runuser[109130]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:04:42 standalone.localdomain puppet-user[108419]: Notice: /Stage[main]/Pacemaker::Resource_op_defaults/Pcmk_resource_op_default[bundle]/ensure: created
Oct 13 14:04:42 standalone.localdomain ceph-mon[29756]: pgmap v887: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:42 standalone.localdomain runuser[109209]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:04:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:04:42 standalone.localdomain systemd[1]: tmp-crun.owJYmK.mount: Deactivated successfully.
Oct 13 14:04:42 standalone.localdomain podman[109221]: 2025-10-13 14:04:42.839636524 +0000 UTC m=+0.107411219 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T13:27:18, vcs-type=git, container_name=keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, com.redhat.component=openstack-keystone-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step3, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-keystone, vendor=Red Hat, Inc.)
Oct 13 14:04:42 standalone.localdomain podman[109221]: 2025-10-13 14:04:42.880224 +0000 UTC m=+0.147998735 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, version=17.1.9, name=rhosp17/openstack-keystone, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, container_name=keystone, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, com.redhat.component=openstack-keystone-container, build-date=2025-07-21T13:27:18, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, vcs-type=git)
Oct 13 14:04:42 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:04:43 standalone.localdomain runuser[109209]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:04:43 standalone.localdomain runuser[109344]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:04:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v888: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:44 standalone.localdomain runuser[109344]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:04:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:04:44 standalone.localdomain ceph-mon[29756]: pgmap v888: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:04:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:04:44 standalone.localdomain podman[109462]: 2025-10-13 14:04:44.820554206 +0000 UTC m=+0.087124864 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 horizon, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 horizon, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-horizon, version=17.1.9, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step3, container_name=horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, build-date=2025-07-21T13:58:15, com.redhat.component=openstack-horizon-container, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, maintainer=OpenStack TripleO Team)
Oct 13 14:04:44 standalone.localdomain podman[109462]: 2025-10-13 14:04:44.856949385 +0000 UTC m=+0.123520083 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3, version=17.1.9, build-date=2025-07-21T13:58:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, name=rhosp17/openstack-horizon, release=1, com.redhat.component=openstack-horizon-container, summary=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:04:44 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:04:44 standalone.localdomain podman[109459]: 2025-10-13 14:04:44.879717321 +0000 UTC m=+0.147743718 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=memcached, distribution-scope=public, architecture=x86_64, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, summary=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43)
Oct 13 14:04:44 standalone.localdomain podman[109459]: 2025-10-13 14:04:44.905438825 +0000 UTC m=+0.173465212 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=memcached, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, config_id=tripleo_step1, com.redhat.component=openstack-memcached-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, name=rhosp17/openstack-memcached, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:43, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, release=1, vendor=Red Hat, Inc., vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe)
Oct 13 14:04:44 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:04:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v889: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:45 standalone.localdomain ansible-async_wrapper.py[108408]: 108409 still running (3590)
Oct 13 14:04:46 standalone.localdomain ceph-mon[29756]: pgmap v889: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:46 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 14:04:46 standalone.localdomain systemd[1]: Starting man-db-cache-update.service...
Oct 13 14:04:46 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:04:46 standalone.localdomain systemd-sysv-generator[109780]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:04:46 standalone.localdomain systemd-rc-local-generator[109774]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:04:46 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:04:47 standalone.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 14:04:47 standalone.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 14:04:47 standalone.localdomain systemd[1]: Finished man-db-cache-update.service.
Oct 13 14:04:47 standalone.localdomain systemd[1]: run-r44d2d818c79843e6b50e286f9ad490c4.service: Deactivated successfully.
Oct 13 14:04:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v890: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:48 standalone.localdomain puppet-user[108419]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created
Oct 13 14:04:48 standalone.localdomain puppet-user[108419]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}16aee50c6df07a52bdd69fea227775eebbb3d3ae1538699447cdb3874bd74865'
Oct 13 14:04:48 standalone.localdomain puppet-user[108419]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd'
Oct 13 14:04:48 standalone.localdomain puppet-user[108419]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea'
Oct 13 14:04:48 standalone.localdomain puppet-user[108419]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97'
Oct 13 14:04:48 standalone.localdomain puppet-user[108419]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events
Oct 13 14:04:48 standalone.localdomain ceph-mon[29756]: pgmap v890: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v891: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:04:49 standalone.localdomain puppet-user[108419]: Deprecation Warning: This command is deprecated and will be removed. Please use 'pcs property config' instead.
Oct 13 14:04:50 standalone.localdomain ceph-mon[29756]: pgmap v891: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:50 standalone.localdomain ansible-async_wrapper.py[108408]: 108409 still running (3585)
Oct 13 14:04:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v892: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:51 standalone.localdomain python3[110036]: ansible-ansible.legacy.async_status Invoked with jid=355681657509.108405 mode=status _async_dir=/tmp/.ansible_async
Oct 13 14:04:52 standalone.localdomain ceph-mon[29756]: pgmap v892: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:04:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:04:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:04:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:04:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:04:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:04:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v893: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:04:54 standalone.localdomain ceph-mon[29756]: pgmap v893: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:54 standalone.localdomain runuser[110197]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:04:54 standalone.localdomain puppet-user[108419]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully
Oct 13 14:04:54 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:04:55 standalone.localdomain systemd-rc-local-generator[110320]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:04:55 standalone.localdomain systemd-sysv-generator[110325]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:04:55 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:04:55 standalone.localdomain systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon....
Oct 13 14:04:55 standalone.localdomain snmpd[110332]: Can't find directory of RPM packages
Oct 13 14:04:55 standalone.localdomain snmpd[110332]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB
Oct 13 14:04:55 standalone.localdomain systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon..
Oct 13 14:04:55 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:04:55 standalone.localdomain runuser[110197]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:04:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v894: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:55 standalone.localdomain systemd-rc-local-generator[110370]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:04:55 standalone.localdomain systemd-sysv-generator[110374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:04:55 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:04:55 standalone.localdomain runuser[110385]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:04:55 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:04:55 standalone.localdomain systemd-rc-local-generator[110506]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:04:55 standalone.localdomain systemd-sysv-generator[110509]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:04:55 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:04:56 standalone.localdomain ansible-async_wrapper.py[108408]: 108409 still running (3580)
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'
Oct 13 14:04:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]: Notice: Applied catalog in 20.18 seconds
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]: Application:
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:    Initial environment: production
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:    Converged environment: production
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:          Run mode: user
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]: Changes:
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:             Total: 10
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]: Events:
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:           Success: 10
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:             Total: 10
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]: Resources:
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:         Restarted: 1
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:           Changed: 10
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:       Out of sync: 10
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:             Total: 37
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]: Time:
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:          Schedule: 0.00
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:         File line: 0.00
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:            Augeas: 0.01
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:              User: 0.01
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:              File: 0.06
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:    Config retrieval: 0.59
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:     Pcmk property: 1.50
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:           Service: 1.51
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:          Last run: 1760364296
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:    Pcmk resource op default: 2.14
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:    Pcmk resource default: 2.15
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:    Transaction evaluation: 20.16
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:    Catalog application: 20.18
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:           Package: 5.60
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:              Exec: 6.98
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:        Filebucket: 0.00
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:             Total: 20.18
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]: Version:
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:            Config: 1760364275
Oct 13 14:04:56 standalone.localdomain puppet-user[108419]:            Puppet: 7.10.0
Oct 13 14:04:56 standalone.localdomain systemd[1]: tmp-crun.UfZv6C.mount: Deactivated successfully.
Oct 13 14:04:56 standalone.localdomain podman[110516]: 2025-10-13 14:04:56.326876536 +0000 UTC m=+0.099723413 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.buildah.version=1.33.12, name=rhosp17/openstack-keystone, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, container_name=keystone_cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:04:56 standalone.localdomain ansible-async_wrapper.py[108409]: Module complete (108409)
Oct 13 14:04:56 standalone.localdomain podman[110516]: 2025-10-13 14:04:56.362970555 +0000 UTC m=+0.135817462 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, tcib_managed=true, container_name=keystone_cron, distribution-scope=public, vcs-type=git, version=17.1.9, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:18, config_id=tripleo_step3, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-keystone-container, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team)
Oct 13 14:04:56 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:04:56 standalone.localdomain runuser[110385]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:04:56 standalone.localdomain runuser[110543]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:04:56 standalone.localdomain ceph-mon[29756]: pgmap v894: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:57 standalone.localdomain runuser[110543]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:04:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v895: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:58 standalone.localdomain ceph-mon[29756]: pgmap v895: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:04:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:04:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:04:58 standalone.localdomain podman[110698]: 2025-10-13 14:04:58.819644288 +0000 UTC m=+0.086654879 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, com.redhat.component=openstack-barbican-api-container, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, container_name=barbican_api, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:22:44, description=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, distribution-scope=public, release=1, tcib_managed=true, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible)
Oct 13 14:04:58 standalone.localdomain podman[110700]: 2025-10-13 14:04:58.878502753 +0000 UTC m=+0.139805885 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, description=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, name=rhosp17/openstack-barbican-worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_worker, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-barbican-worker-container, build-date=2025-07-21T15:36:22)
Oct 13 14:04:58 standalone.localdomain podman[110698]: 2025-10-13 14:04:58.88292311 +0000 UTC m=+0.149933721 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, summary=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, com.redhat.component=openstack-barbican-api-container, vcs-type=git, vendor=Red Hat, Inc., container_name=barbican_api, managed_by=tripleo_ansible, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, build-date=2025-07-21T15:22:44, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, name=rhosp17/openstack-barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, description=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:04:58 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:04:58 standalone.localdomain podman[110699]: 2025-10-13 14:04:58.928049239 +0000 UTC m=+0.191874355 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, distribution-scope=public, com.redhat.component=openstack-barbican-keystone-listener-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, container_name=barbican_keystone_listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T16:18:19, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, version=17.1.9, maintainer=OpenStack TripleO Team, config_id=tripleo_step3)
Oct 13 14:04:58 standalone.localdomain podman[110700]: 2025-10-13 14:04:58.934014867 +0000 UTC m=+0.195317949 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-barbican-worker-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step3, vendor=Red Hat, Inc., release=1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, container_name=barbican_worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, build-date=2025-07-21T15:36:22, summary=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:04:58 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:04:58 standalone.localdomain podman[110699]: 2025-10-13 14:04:58.959895586 +0000 UTC m=+0.223720702 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-barbican-keystone-listener-container, tcib_managed=true, container_name=barbican_keystone_listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, config_id=tripleo_step3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T16:18:19, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:04:58 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:04:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v896: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:04:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:05:00 standalone.localdomain podman[110792]: 2025-10-13 14:05:00.426548758 +0000 UTC m=+0.086248605 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, build-date=2025-07-21T12:58:45, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, tcib_managed=true, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:05:00 standalone.localdomain podman[110792]: 2025-10-13 14:05:00.460033351 +0000 UTC m=+0.119733248 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, vcs-type=git, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:05:00 standalone.localdomain podman[110819]: 2025-10-13 14:05:00.559258996 +0000 UTC m=+0.089056659 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, build-date=2025-07-21T13:08:11, com.redhat.component=openstack-haproxy-container, batch=17.1_20250721.1, name=rhosp17/openstack-haproxy, io.openshift.expose-services=, release=1)
Oct 13 14:05:00 standalone.localdomain podman[110819]: 2025-10-13 14:05:00.569265659 +0000 UTC m=+0.099063292 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, name=rhosp17/openstack-haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, distribution-scope=public, build-date=2025-07-21T13:08:11, com.redhat.component=openstack-haproxy-container, description=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:05:00 standalone.localdomain ceph-mon[29756]: pgmap v896: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:00 standalone.localdomain podman[110856]: 2025-10-13 14:05:00.683155091 +0000 UTC m=+0.084014882 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, build-date=2025-07-21T13:08:05, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, name=rhosp17/openstack-rabbitmq, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-rabbitmq-container, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true)
Oct 13 14:05:00 standalone.localdomain podman[110856]: 2025-10-13 14:05:00.714959388 +0000 UTC m=+0.115819179 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-rabbitmq-container, name=rhosp17/openstack-rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b)
Oct 13 14:05:01 standalone.localdomain ansible-async_wrapper.py[108408]: Done in kid B.
Oct 13 14:05:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v897: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:01 standalone.localdomain python3[110899]: ansible-ansible.legacy.async_status Invoked with jid=355681657509.108405 mode=status _async_dir=/tmp/.ansible_async
Oct 13 14:05:02 standalone.localdomain python3[110916]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 14:05:02 standalone.localdomain ceph-mon[29756]: pgmap v897: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:02 standalone.localdomain python3[110922]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:03 standalone.localdomain python3[110938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:05:03 standalone.localdomain python3[111003]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmptb79wu7p recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 14:05:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v898: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:03 standalone.localdomain python3[111009]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:04 standalone.localdomain python3[111098]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Oct 13 14:05:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:05:04 standalone.localdomain ceph-mon[29756]: pgmap v898: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:05:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:05:04 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:05:04 standalone.localdomain recover_tripleo_nova_virtqemud[111122]: 93291
Oct 13 14:05:04 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:05:04 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:05:04 standalone.localdomain podman[111106]: 2025-10-13 14:05:04.836845728 +0000 UTC m=+0.094635794 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, config_id=ovn_cluster_northd, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-type=git, io.buildah.version=1.33.12, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=ovn_cluster_northd, com.redhat.component=openstack-ovn-northd-container, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04)
Oct 13 14:05:04 standalone.localdomain podman[111107]: 2025-10-13 14:05:04.897094008 +0000 UTC m=+0.152256207 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, version=17.1.9, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-mariadb, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, summary=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, release=1, container_name=clustercheck, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, 
io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-mariadb-container, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1)
Oct 13 14:05:04 standalone.localdomain podman[111106]: 2025-10-13 14:05:04.906942516 +0000 UTC m=+0.164732632 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, config_id=ovn_cluster_northd, name=rhosp17/openstack-ovn-northd, version=17.1.9, com.redhat.component=openstack-ovn-northd-container, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, 
container_name=ovn_cluster_northd, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04)
Oct 13 14:05:04 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:05:04 standalone.localdomain podman[111107]: 2025-10-13 14:05:04.959965487 +0000 UTC m=+0.215127676 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step2, com.redhat.license_terms=https://www.redhat.com/agreements, 
container_name=clustercheck, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 14:05:04 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:05:05 standalone.localdomain python3[111254]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v899: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:05 standalone.localdomain ceph-mon[29756]: pgmap v899: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:06 standalone.localdomain python3[111315]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:06 standalone.localdomain python3[111339]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:05:06 standalone.localdomain python3[111343]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:06 standalone.localdomain python3[111397]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:05:06 standalone.localdomain python3[111442]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:07 standalone.localdomain python3[111455]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:05:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v900: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:07 standalone.localdomain python3[111467]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:07 standalone.localdomain runuser[111477]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:05:07 standalone.localdomain python3[111499]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:05:08 standalone.localdomain python3[111534]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:08 standalone.localdomain runuser[111477]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:05:08 standalone.localdomain python3[111548]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:05:08 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:05:08 standalone.localdomain ceph-mon[29756]: pgmap v900: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:08 standalone.localdomain runuser[111566]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:05:08 standalone.localdomain systemd-rc-local-generator[111604]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:05:08 standalone.localdomain systemd-sysv-generator[111607]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:05:08 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:05:09 standalone.localdomain python3[111659]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:05:09 standalone.localdomain runuser[111566]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:05:09 standalone.localdomain runuser[111681]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:05:09 standalone.localdomain python3[111678]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v901: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:05:09 standalone.localdomain python3[111741]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:05:09 standalone.localdomain python3[111745]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:10 standalone.localdomain runuser[111681]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:05:10 standalone.localdomain python3[111760]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:05:10 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:05:10 standalone.localdomain systemd-sysv-generator[111797]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:05:10 standalone.localdomain systemd-rc-local-generator[111791]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:05:10 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:05:10 standalone.localdomain ceph-mon[29756]: pgmap v901: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:10 standalone.localdomain systemd[1]: Starting Create netns directory...
Oct 13 14:05:10 standalone.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 14:05:10 standalone.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 14:05:10 standalone.localdomain systemd[1]: Finished Create netns directory.
Oct 13 14:05:11 standalone.localdomain python3[111817]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Oct 13 14:05:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v902: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:12 standalone.localdomain ceph-mon[29756]: pgmap v902: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:05:12 standalone.localdomain podman[111870]: 2025-10-13 14:05:12.740209393 +0000 UTC m=+0.086076520 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=iscsid, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 14:05:12 standalone.localdomain podman[111870]: 2025-10-13 14:05:12.776531909 +0000 UTC m=+0.122398946 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, version=17.1.9, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-07-21T13:27:15)
Oct 13 14:05:12 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:05:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v903: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:05:13 standalone.localdomain python3[111958]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Oct 13 14:05:13 standalone.localdomain podman[111959]: 2025-10-13 14:05:13.835581254 +0000 UTC m=+0.100773128 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, release=1, container_name=keystone, name=rhosp17/openstack-keystone, version=17.1.9, build-date=2025-07-21T13:27:18, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.buildah.version=1.33.12)
Oct 13 14:05:13 standalone.localdomain podman[111959]: 2025-10-13 14:05:13.870020227 +0000 UTC m=+0.135212081 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, com.redhat.component=openstack-keystone-container, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, container_name=keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, build-date=2025-07-21T13:27:18, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, name=rhosp17/openstack-keystone, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:05:13 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:05:14 standalone.localdomain podman[112151]: 2025-10-13 14:05:14.082105361 +0000 UTC m=+0.074894678 container create e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, com.redhat.component=openstack-cinder-scheduler-container, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-scheduler, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, release=1, distribution-scope=public, architecture=x86_64, container_name=cinder_scheduler, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T16:10:12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1)
Oct 13 14:05:14 standalone.localdomain podman[112152]: 2025-10-13 14:05:14.118764679 +0000 UTC m=+0.110707808 container create f6ec1b1884e0b6f756c649669fb955770606583007461e4c98e38292393eb632 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, container_name=configure_cms_options, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, release=1, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started libpod-conmon-e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.scope.
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:14 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e167f4177388f76c1ad6441c898b0a79fb6ad068ac7f87433fc33b2068b41b1/merged/var/log/cinder supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:14 standalone.localdomain podman[112198]: 2025-10-13 14:05:14.143294084 +0000 UTC m=+0.097128717 container create bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, build-date=2025-07-21T15:58:55, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, container_name=cinder_api_cron, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public)
Oct 13 14:05:14 standalone.localdomain podman[112151]: 2025-10-13 14:05:14.045070902 +0000 UTC m=+0.037860219 image pull  registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started libpod-conmon-f6ec1b1884e0b6f756c649669fb955770606583007461e4c98e38292393eb632.scope.
Oct 13 14:05:14 standalone.localdomain podman[112152]: 2025-10-13 14:05:14.053848843 +0000 UTC m=+0.045791982 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:14 standalone.localdomain podman[112148]: 2025-10-13 14:05:14.163520095 +0000 UTC m=+0.160934435 container create 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, vendor=Red Hat, Inc., vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, container_name=heat_api, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api)
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started libpod-conmon-bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.scope.
Oct 13 14:05:14 standalone.localdomain podman[112152]: 2025-10-13 14:05:14.170619372 +0000 UTC m=+0.162562491 container init f6ec1b1884e0b6f756c649669fb955770606583007461e4c98e38292393eb632 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=configure_cms_options, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:14 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d88352a2ff0b111ac4658de741be23293b9e0b9247be5bcd90b986f8e074eeae/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:14 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d88352a2ff0b111ac4658de741be23293b9e0b9247be5bcd90b986f8e074eeae/merged/var/log/cinder supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:14 standalone.localdomain podman[112152]: 2025-10-13 14:05:14.180900872 +0000 UTC m=+0.172843991 container start f6ec1b1884e0b6f756c649669fb955770606583007461e4c98e38292393eb632 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=configure_cms_options, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:05:14 standalone.localdomain podman[112198]: 2025-10-13 14:05:14.080086644 +0000 UTC m=+0.033921317 image pull  registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1
Oct 13 14:05:14 standalone.localdomain podman[112152]: 2025-10-13 14:05:14.181587586 +0000 UTC m=+0.173530735 container attach f6ec1b1884e0b6f756c649669fb955770606583007461e4c98e38292393eb632 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, build-date=2025-07-21T13:28:44, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, container_name=configure_cms_options, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:05:14 standalone.localdomain podman[112210]: 2025-10-13 14:05:14.183627533 +0000 UTC m=+0.122995706 container create 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, summary=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, name=rhosp17/openstack-cinder-api, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, container_name=cinder_api, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:58:55, com.redhat.component=openstack-cinder-api-container, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started libpod-conmon-9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.scope.
Oct 13 14:05:14 standalone.localdomain podman[112210]: 2025-10-13 14:05:14.093428047 +0000 UTC m=+0.032796120 image pull  registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1
Oct 13 14:05:14 standalone.localdomain podman[112148]: 2025-10-13 14:05:14.098102873 +0000 UTC m=+0.095517223 image pull  registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:14 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9420cd69b63e4cf6dac0289efb2e21fb7a99dd6512be09e26d2ba629cde543/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:14 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9420cd69b63e4cf6dac0289efb2e21fb7a99dd6512be09e26d2ba629cde543/merged/var/log/heat supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started libpod-conmon-4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.scope.
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:05:14 standalone.localdomain podman[112151]: 2025-10-13 14:05:14.229976773 +0000 UTC m=+0.222766080 container init e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-scheduler-container, release=1, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, name=rhosp17/openstack-cinder-scheduler, build-date=2025-07-21T16:10:12, 
com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, container_name=cinder_scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:05:14 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f573d32cfb4fe0600026989e0f3eb3e23bdbbcdc54637c567c3b456cea4c706a/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:14 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f573d32cfb4fe0600026989e0f3eb3e23bdbbcdc54637c567c3b456cea4c706a/merged/var/log/cinder supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:05:14 standalone.localdomain podman[112148]: 2025-10-13 14:05:14.244923589 +0000 UTC m=+0.242337969 container init 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, name=rhosp17/openstack-heat-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, managed_by=tripleo_ansible, container_name=heat_api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-heat-api-container, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:05:14 standalone.localdomain sudo[112261]:   cinder : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:14 standalone.localdomain sudo[112261]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:05:14 standalone.localdomain sudo[112261]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42407)
Oct 13 14:05:14 standalone.localdomain sudo[112265]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:14 standalone.localdomain sudo[112265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:05:14 standalone.localdomain podman[112151]: 2025-10-13 14:05:14.27296296 +0000 UTC m=+0.265752267 container start e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, container_name=cinder_scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, name=rhosp17/openstack-cinder-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, summary=Red Hat OpenStack 
Platform 17.1 cinder-scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, vendor=Red Hat, Inc., release=1, managed_by=tripleo_ansible, build-date=2025-07-21T16:10:12, tcib_managed=true, com.redhat.component=openstack-cinder-scheduler-container, vcs-type=git, config_id=tripleo_step4)
Oct 13 14:05:14 standalone.localdomain ovs-vsctl[112268]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:ovn-cms-options=enable-chassis-as-gw
Oct 13 14:05:14 standalone.localdomain podman[112152]: 2025-10-13 14:05:14.279534939 +0000 UTC m=+0.271478078 container died f6ec1b1884e0b6f756c649669fb955770606583007461e4c98e38292393eb632 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, distribution-scope=public, release=1, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=configure_cms_options, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team)
Oct 13 14:05:14 standalone.localdomain systemd[1]: libpod-f6ec1b1884e0b6f756c649669fb955770606583007461e4c98e38292393eb632.scope: Deactivated successfully.
Oct 13 14:05:14 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name cinder_scheduler --conmon-pidfile /run/cinder_scheduler.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bf2556c9454e19b68e3daf4f11f84a81 --healthcheck-command /openstack/healthcheck 5672 --label config_id=tripleo_step4 --label container_name=cinder_scheduler --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/cinder_scheduler.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/cinder:/var/log/cinder:z --volume /var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:05:14 standalone.localdomain podman[112198]: 2025-10-13 14:05:14.288537548 +0000 UTC m=+0.242372181 container init bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, io.openshift.expose-services=, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-cinder-api, distribution-scope=public, container_name=cinder_api_cron, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, tcib_managed=true, com.redhat.component=openstack-cinder-api-container, build-date=2025-07-21T15:58:55, 
config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, managed_by=tripleo_ansible)
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:05:14 standalone.localdomain sudo[112289]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:14 standalone.localdomain podman[112148]: 2025-10-13 14:05:14.305877204 +0000 UTC m=+0.303291554 container start 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T15:56:26, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-api, release=1, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-heat-api, maintainer=OpenStack TripleO Team)
Oct 13 14:05:14 standalone.localdomain sudo[112289]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:05:14 standalone.localdomain sudo[112289]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:14 standalone.localdomain sudo[112261]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:14 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name heat_api --conmon-pidfile /run/heat_api.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=171dfb6084155b43ebab325c6674ca84 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=heat_api --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/heat_api.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/heat:/var/log/heat:z --volume /var/log/containers/httpd/heat-api:/var/log/httpd:z --volume /var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:05:14 standalone.localdomain podman[112210]: 2025-10-13 14:05:14.322720413 +0000 UTC m=+0.262088486 container init 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-api-container, config_id=tripleo_step4, batch=17.1_20250721.1, container_name=cinder_api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, 
build-date=2025-07-21T15:58:55, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:05:14 standalone.localdomain sudo[112265]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:05:14 standalone.localdomain sudo[112318]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:14 standalone.localdomain sudo[112318]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:05:14 standalone.localdomain sudo[112318]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:14 standalone.localdomain sudo[112289]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:05:14 standalone.localdomain crond[112288]: (CRON) STARTUP (1.5.7)
Oct 13 14:05:14 standalone.localdomain crond[112288]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 13 14:05:14 standalone.localdomain crond[112288]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 31% if used.)
Oct 13 14:05:14 standalone.localdomain crond[112288]: (CRON) INFO (running with inotify support)
Oct 13 14:05:14 standalone.localdomain podman[112210]: 2025-10-13 14:05:14.397565699 +0000 UTC m=+0.336933762 container start 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, vcs-type=git, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=cinder_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, name=rhosp17/openstack-cinder-api, build-date=2025-07-21T15:58:55, distribution-scope=public, release=1)
Oct 13 14:05:14 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name cinder_api --conmon-pidfile /run/cinder_api.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bf2556c9454e19b68e3daf4f11f84a81 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=cinder_api --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/cinder_api.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/cinder:/var/log/cinder:z --volume /var/log/containers/httpd/cinder-api:/var/log/httpd:z registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1
Oct 13 14:05:14 standalone.localdomain sudo[112318]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:14 standalone.localdomain podman[112271]: 2025-10-13 14:05:14.457598693 +0000 UTC m=+0.169369676 container cleanup f6ec1b1884e0b6f756c649669fb955770606583007461e4c98e38292393eb632 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, 
distribution-scope=public, architecture=x86_64, container_name=configure_cms_options, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:05:14 standalone.localdomain podman[112267]: 2025-10-13 14:05:14.407571551 +0000 UTC m=+0.125304882 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=starting, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=cinder_scheduler, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, distribution-scope=public, name=rhosp17/openstack-cinder-scheduler, com.redhat.component=openstack-cinder-scheduler-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack 
Platform 17.1 cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, build-date=2025-07-21T16:10:12)
Oct 13 14:05:14 standalone.localdomain systemd[1]: libpod-conmon-f6ec1b1884e0b6f756c649669fb955770606583007461e4c98e38292393eb632.scope: Deactivated successfully.
Oct 13 14:05:14 standalone.localdomain podman[112295]: 2025-10-13 14:05:14.517389478 +0000 UTC m=+0.210142510 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=starting, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_id=tripleo_step4, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-type=git, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, container_name=heat_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1)
Oct 13 14:05:14 standalone.localdomain podman[112198]: 2025-10-13 14:05:14.563902833 +0000 UTC m=+0.517737486 container start bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, container_name=cinder_api_cron, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, version=17.1.9, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, build-date=2025-07-21T15:58:55, io.buildah.version=1.33.12, name=rhosp17/openstack-cinder-api, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, config_id=tripleo_step4, 
architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git)
Oct 13 14:05:14 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name cinder_api_cron --conmon-pidfile /run/cinder_api_cron.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bf2556c9454e19b68e3daf4f11f84a81 --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron cinder --label config_id=tripleo_step4 --label container_name=cinder_api_cron --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/cinder_api_cron.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/cinder:/var/log/cinder:z --volume /var/log/containers/httpd/cinder-api:/var/log/httpd:z registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1
Oct 13 14:05:14 standalone.localdomain podman[112267]: 2025-10-13 14:05:14.588725478 +0000 UTC m=+0.306458819 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cinder-scheduler-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-cinder-scheduler, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, build-date=2025-07-21T16:10:12, container_name=cinder_scheduler, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_id=tripleo_step4, version=17.1.9, 
io.openshift.expose-services=, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, vcs-type=git)
Oct 13 14:05:14 standalone.localdomain podman[112267]: unhealthy
Oct 13 14:05:14 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:14 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Failed with result 'exit-code'.
Oct 13 14:05:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:05:14 standalone.localdomain ceph-mon[29756]: pgmap v903: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:14 standalone.localdomain podman[112316]: 2025-10-13 14:05:14.591542891 +0000 UTC m=+0.250962876 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=starting, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vcs-type=git, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:58:55, name=rhosp17/openstack-cinder-api, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, container_name=cinder_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, release=1, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, summary=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:05:14 standalone.localdomain podman[112347]: 2025-10-13 14:05:14.650531271 +0000 UTC m=+0.266884655 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=starting, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, container_name=cinder_api, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, architecture=x86_64, build-date=2025-07-21T15:58:55, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, tcib_managed=true, name=rhosp17/openstack-cinder-api, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-cinder-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team)
Oct 13 14:05:14 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi
Oct 13 14:05:14 standalone.localdomain podman[112316]: 2025-10-13 14:05:14.776111912 +0000 UTC m=+0.435531907 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, vcs-type=git, release=1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, name=rhosp17/openstack-cinder-api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-api-container, distribution-scope=public, 
build-date=2025-07-21T15:58:55, summary=Red Hat OpenStack Platform 17.1 cinder-api, container_name=cinder_api_cron, description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:05:14 standalone.localdomain podman[112556]: 2025-10-13 14:05:14.815574252 +0000 UTC m=+0.091738218 container create 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, version=17.1.9, com.redhat.component=openstack-heat-api-cfn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, build-date=2025-07-21T14:49:55, io.openshift.expose-services=, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, container_name=heat_api_cfn, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:05:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-696b99ecafeb62775d4be1c37484863243dbe05bbb108c98eb21626c3e1b01e2-merged.mount: Deactivated successfully.
Oct 13 14:05:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6ec1b1884e0b6f756c649669fb955770606583007461e4c98e38292393eb632-userdata-shm.mount: Deactivated successfully.
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started libpod-conmon-0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.scope.
Oct 13 14:05:14 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:05:14 standalone.localdomain podman[112556]: 2025-10-13 14:05:14.763585815 +0000 UTC m=+0.039749791 image pull  registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1
Oct 13 14:05:14 standalone.localdomain systemd[1]: tmp-crun.Qwc6G0.mount: Deactivated successfully.
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:05:14 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c62a6a2c8d47bc643de7e472c16fea3d5facb0eb72e0880a216e4f3b3830d9/merged/var/log/heat supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:14 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c62a6a2c8d47bc643de7e472c16fea3d5facb0eb72e0880a216e4f3b3830d9/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:05:14 standalone.localdomain podman[112556]: 2025-10-13 14:05:14.904536747 +0000 UTC m=+0.180700683 container init 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, build-date=2025-07-21T14:49:55, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, container_name=heat_api_cfn, 
version=17.1.9, com.redhat.component=openstack-heat-api-cfn-container, release=1, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git)
Oct 13 14:05:14 standalone.localdomain sudo[112675]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:14 standalone.localdomain sudo[112675]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:05:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:05:14 standalone.localdomain sudo[112675]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:14 standalone.localdomain podman[112651]: 2025-10-13 14:05:14.989374445 +0000 UTC m=+0.107320775 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-horizon-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 
17.1 horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, vendor=Red Hat, Inc., version=17.1.9, release=1, container_name=horizon, io.openshift.expose-services=, name=rhosp17/openstack-horizon, build-date=2025-07-21T13:58:15, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack TripleO Team)
Oct 13 14:05:15 standalone.localdomain podman[112681]: 2025-10-13 14:05:15.003835465 +0000 UTC m=+0.067753981 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-memcached-container, config_id=tripleo_step1, name=rhosp17/openstack-memcached, build-date=2025-07-21T12:58:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:05:15 standalone.localdomain podman[112681]: 2025-10-13 14:05:15.048768047 +0000 UTC m=+0.112686543 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, summary=Red Hat OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, config_id=tripleo_step1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, io.openshift.expose-services=, architecture=x86_64, container_name=memcached, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T12:58:43)
Oct 13 14:05:15 standalone.localdomain podman[112556]: 2025-10-13 14:05:15.05878695 +0000 UTC m=+0.334950876 container start 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, io.buildah.version=1.33.12, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, build-date=2025-07-21T14:49:55, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, container_name=heat_api_cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:05:15 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:05:15 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name heat_api_cfn --conmon-pidfile /run/heat_api_cfn.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=8a8bc26dd2600d9ce5af7a68a05ecc83 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=heat_api_cfn --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/heat_api_cfn.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/heat:/var/log/heat:z --volume /var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z --volume /var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1
Oct 13 14:05:15 standalone.localdomain podman[112702]: 2025-10-13 14:05:15.080204841 +0000 UTC m=+0.123539054 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=starting, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, container_name=heat_api_cfn, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, com.redhat.component=openstack-heat-api-cfn-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, batch=17.1_20250721.1, name=rhosp17/openstack-heat-api-cfn, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T14:49:55, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:05:15 standalone.localdomain podman[112651]: 2025-10-13 14:05:15.110101014 +0000 UTC m=+0.228047344 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., release=1, com.redhat.component=openstack-horizon-container, config_id=tripleo_step3, distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-horizon, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 horizon, container_name=horizon, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:15)
Oct 13 14:05:15 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:05:15 standalone.localdomain podman[112817]: 2025-10-13 14:05:15.178553658 +0000 UTC m=+0.064030877 container create f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, release=1, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, vendor=Red Hat, Inc., container_name=logrotate_crond, build-date=2025-07-21T13:07:52, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:05:15 standalone.localdomain systemd[1]: Started libpod-conmon-f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.scope.
Oct 13 14:05:15 standalone.localdomain podman[112817]: 2025-10-13 14:05:15.141204407 +0000 UTC m=+0.026681636 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Oct 13 14:05:15 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c9c267afdf127d08eca445cb214129244d6909cac33b961d5f917bf640f921/merged/var/log/containers supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:05:15 standalone.localdomain podman[112817]: 2025-10-13 14:05:15.292813563 +0000 UTC m=+0.178290782 container init f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true, name=rhosp17/openstack-cron, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, container_name=logrotate_crond, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron)
Oct 13 14:05:15 standalone.localdomain sudo[112901]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:15 standalone.localdomain sudo[112901]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:05:15 standalone.localdomain podman[112817]: 2025-10-13 14:05:15.332662047 +0000 UTC m=+0.218139266 container start f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, config_id=tripleo_step4, managed_by=tripleo_ansible)
Oct 13 14:05:15 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Oct 13 14:05:15 standalone.localdomain sudo[112901]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:15 standalone.localdomain crond[112897]: (CRON) STARTUP (1.5.7)
Oct 13 14:05:15 standalone.localdomain crond[112897]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 95% if used.)
Oct 13 14:05:15 standalone.localdomain crond[112897]: (CRON) INFO (running with inotify support)
Oct 13 14:05:15 standalone.localdomain podman[112891]: 2025-10-13 14:05:15.453887013 +0000 UTC m=+0.149127594 container create 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:44:11, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-engine-container, container_name=heat_engine, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, release=1, name=rhosp17/openstack-heat-engine, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, config_id=tripleo_step4, version=17.1.9)
Oct 13 14:05:15 standalone.localdomain podman[112891]: 2025-10-13 14:05:15.361786284 +0000 UTC m=+0.057026885 image pull  registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1
Oct 13 14:05:15 standalone.localdomain podman[112883]: 2025-10-13 14:05:15.362934812 +0000 UTC m=+0.067213654 image pull  registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1
Oct 13 14:05:15 standalone.localdomain podman[112883]: 2025-10-13 14:05:15.484353385 +0000 UTC m=+0.188632207 container create 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, container_name=heat_api_cron, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-api, com.redhat.component=openstack-heat-api-container)
Oct 13 14:05:15 standalone.localdomain podman[112915]: 2025-10-13 14:05:15.434666104 +0000 UTC m=+0.098112869 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vcs-type=git, tcib_managed=true, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Oct 13 14:05:15 standalone.localdomain systemd[1]: Started libpod-conmon-1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.scope.
Oct 13 14:05:15 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072572511f39c4d4ecb97b214b4148a1acd564ad346e67866a611cfc25e867ff/merged/var/log/heat supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:05:15 standalone.localdomain podman[112891]: 2025-10-13 14:05:15.536089043 +0000 UTC m=+0.231329644 container init 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, name=rhosp17/openstack-heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, config_id=tripleo_step4, container_name=heat_engine, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-engine-container, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:05:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v904: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:15 standalone.localdomain podman[112961]: 2025-10-13 14:05:15.547537283 +0000 UTC m=+0.114500984 container create 2f21788ca9e8bc9b7b15f5240d61deed89691ca825526b0f4d0e5b27742d7038 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api, build-date=2025-07-21T16:06:43, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=manila_api, distribution-scope=public, release=1, 
tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, name=rhosp17/openstack-manila-api, architecture=x86_64, com.redhat.component=openstack-manila-api-container)
Oct 13 14:05:15 standalone.localdomain sudo[113010]:     heat : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:05:15 standalone.localdomain sudo[113010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42418)
Oct 13 14:05:15 standalone.localdomain podman[112891]: 2025-10-13 14:05:15.563840455 +0000 UTC m=+0.259081036 container start 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, container_name=heat_engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, release=1, architecture=x86_64, com.redhat.component=openstack-heat-engine-container, build-date=2025-07-21T15:44:11, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, tcib_managed=true)
Oct 13 14:05:15 standalone.localdomain podman[112915]: 2025-10-13 14:05:15.57273381 +0000 UTC m=+0.236180575 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, architecture=x86_64, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12)
Oct 13 14:05:15 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name heat_engine --conmon-pidfile /run/heat_engine.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=02af0e891420e5c6be1d463bd6e7979c --healthcheck-command /openstack/healthcheck 5672 --label config_id=tripleo_step4 --label container_name=heat_engine --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/heat_engine.log --network host --privileged=False --stop-timeout 60 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro 
--volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/heat:/var/log/heat:z --volume /var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1
Oct 13 14:05:15 standalone.localdomain systemd[1]: Started libpod-conmon-8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.scope.
Oct 13 14:05:15 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:05:15 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103f3389f1c0c290f1c4785293a9b4615c276e3f45d72abe9bd5ad8e441a786/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103f3389f1c0c290f1c4785293a9b4615c276e3f45d72abe9bd5ad8e441a786/merged/var/log/heat supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:15 standalone.localdomain systemd[1]: Started libpod-conmon-2f21788ca9e8bc9b7b15f5240d61deed89691ca825526b0f4d0e5b27742d7038.scope.
Oct 13 14:05:15 standalone.localdomain podman[112961]: 2025-10-13 14:05:15.512031404 +0000 UTC m=+0.078995125 image pull  registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1
Oct 13 14:05:15 standalone.localdomain podman[112295]: 2025-10-13 14:05:15.623723963 +0000 UTC m=+1.316477005 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, config_id=tripleo_step4, version=17.1.9, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, 
managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:05:15 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:15 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:05:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d229f95c042aa50f015ecf04790d97311b8723b668a1b17e0a931684b436bc8/merged/var/log/manila supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d229f95c042aa50f015ecf04790d97311b8723b668a1b17e0a931684b436bc8/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:15 standalone.localdomain sudo[113010]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:05:15 standalone.localdomain podman[112883]: 2025-10-13 14:05:15.649189869 +0000 UTC m=+0.353468691 container init 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T15:56:26, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=heat_api_cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1)
Oct 13 14:05:15 standalone.localdomain podman[112961]: 2025-10-13 14:05:15.65553153 +0000 UTC m=+0.222495241 container init 2f21788ca9e8bc9b7b15f5240d61deed89691ca825526b0f4d0e5b27742d7038 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api, build-date=2025-07-21T16:06:43, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 manila-api, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, com.redhat.component=openstack-manila-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=manila_api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-manila-api, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/manila_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']})
Oct 13 14:05:15 standalone.localdomain sudo[113057]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:15 standalone.localdomain sudo[113057]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:15 standalone.localdomain sudo[113062]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:15 standalone.localdomain sudo[113062]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:15 standalone.localdomain podman[112961]: 2025-10-13 14:05:15.675541895 +0000 UTC m=+0.242505596 container start 2f21788ca9e8bc9b7b15f5240d61deed89691ca825526b0f4d0e5b27742d7038 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, container_name=manila_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, batch=17.1_20250721.1, build-date=2025-07-21T16:06:43, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-manila-api-container, io.buildah.version=1.33.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat 
OpenStack Platform 17.1 manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 14:05:15 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name manila_api --conmon-pidfile /run/manila_api.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=6f421a64683e0927a1c7cc208f5b5307 --label config_id=tripleo_step4 --label container_name=manila_api --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/manila_api.log --network host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume 
/etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/manila_api.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/manila:/var/log/manila:z --volume /var/log/containers/httpd/manila-api:/var/log/httpd:z registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1
Oct 13 14:05:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:05:15 standalone.localdomain podman[112883]: 2025-10-13 14:05:15.705566272 +0000 UTC m=+0.409845084 container start 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, build-date=2025-07-21T15:56:26, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-api-container, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-heat-api, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api_cron, tcib_managed=true)
Oct 13 14:05:15 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name heat_api_cron --conmon-pidfile /run/heat_api_cron.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=171dfb6084155b43ebab325c6674ca84 --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron heat --label config_id=tripleo_step4 --label container_name=heat_api_cron --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/heat_api_cron.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/heat:/var/log/heat:z --volume /var/log/containers/httpd/heat-api:/var/log/httpd:z --volume /var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1
Oct 13 14:05:15 standalone.localdomain podman[113012]: 2025-10-13 14:05:15.664547509 +0000 UTC m=+0.094600763 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=starting, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, config_id=tripleo_step4, release=1, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, com.redhat.component=openstack-heat-engine-container, name=rhosp17/openstack-heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=heat_engine, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, distribution-scope=public, build-date=2025-07-21T15:44:11)
Oct 13 14:05:15 standalone.localdomain sudo[113057]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:15 standalone.localdomain crond[113056]: (CRON) STARTUP (1.5.7)
Oct 13 14:05:15 standalone.localdomain crond[113056]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 13 14:05:15 standalone.localdomain crond[113056]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 17% if used.)
Oct 13 14:05:15 standalone.localdomain crond[113056]: (CRON) INFO (running with inotify support)
Oct 13 14:05:15 standalone.localdomain sudo[113062]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:15 standalone.localdomain podman[113069]: 2025-10-13 14:05:15.782318141 +0000 UTC m=+0.090943401 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=starting, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, name=rhosp17/openstack-heat-api, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-heat-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, container_name=heat_api_cron, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public)
Oct 13 14:05:15 standalone.localdomain systemd[1]: tmp-crun.0FzEwW.mount: Deactivated successfully.
Oct 13 14:05:15 standalone.localdomain podman[113013]: 2025-10-13 14:05:15.772399221 +0000 UTC m=+0.200616033 image pull  registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1
Oct 13 14:05:15 standalone.localdomain podman[113069]: 2025-10-13 14:05:15.875519276 +0000 UTC m=+0.184144546 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=heat_api_cron, name=rhosp17/openstack-heat-api, batch=17.1_20250721.1, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:05:15 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:05:15 standalone.localdomain podman[113012]: 2025-10-13 14:05:15.906471834 +0000 UTC m=+0.336525088 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, com.redhat.component=openstack-heat-engine-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, config_id=tripleo_step4, container_name=heat_engine, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T15:44:11, name=rhosp17/openstack-heat-engine, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3)
Oct 13 14:05:15 standalone.localdomain podman[113012]: unhealthy
Oct 13 14:05:15 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:15 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Failed with result 'exit-code'.
Oct 13 14:05:15 standalone.localdomain podman[113013]: 2025-10-13 14:05:15.928182425 +0000 UTC m=+0.356399247 container create 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, release=1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., name=rhosp17/openstack-manila-api, io.buildah.version=1.33.12, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, maintainer=OpenStack TripleO Team, container_name=manila_api_cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 manila-api, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.component=openstack-manila-api-container, build-date=2025-07-21T16:06:43, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:05:15 standalone.localdomain haproxy[70940]: 172.17.0.100:54981 [13/Oct/2025:14:05:15.892] mysql mysql/standalone.internalapi.localdomain 1/0/94 7196 -- 12/12/11/11/0 0/0
Oct 13 14:05:16 standalone.localdomain haproxy[70940]: 172.17.0.100:51221 [13/Oct/2025:14:05:15.988] mysql mysql/standalone.internalapi.localdomain 1/0/17 7185 -- 12/12/11/11/0 0/0
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started libpod-conmon-7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.scope.
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea91c370b256a874ce5d07e2995f6a00f5191e3d387d95202833d1f28499d361/merged/var/log/manila supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea91c370b256a874ce5d07e2995f6a00f5191e3d387d95202833d1f28499d361/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:05:16 standalone.localdomain podman[113013]: 2025-10-13 14:05:16.15955299 +0000 UTC m=+0.587769812 container init 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:06:43, container_name=manila_api_cron, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, com.redhat.component=openstack-manila-api-container, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 manila-api, config_id=tripleo_step4, name=rhosp17/openstack-manila-api, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, batch=17.1_20250721.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:05:16 standalone.localdomain sudo[113317]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:16 standalone.localdomain sudo[113317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:05:16 standalone.localdomain podman[113013]: 2025-10-13 14:05:16.206356474 +0000 UTC m=+0.634573276 container start 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T16:06:43, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-manila-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, container_name=manila_api_cron, distribution-scope=public, tcib_managed=true, architecture=x86_64, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9)
Oct 13 14:05:16 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name manila_api_cron --conmon-pidfile /run/manila_api_cron.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=6f421a64683e0927a1c7cc208f5b5307 --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron manila --label config_id=tripleo_step4 --label container_name=manila_api_cron --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/manila_api_cron.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/manila:/var/log/manila:z --volume /var/log/containers/httpd/manila-api:/var/log/httpd:z registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1
Oct 13 14:05:16 standalone.localdomain podman[113276]: 2025-10-13 14:05:16.218331273 +0000 UTC m=+0.122016934 container create 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, build-date=2025-07-21T15:44:03, container_name=neutron_api, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, summary=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-server, release=1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 14:05:16 standalone.localdomain podman[113276]: 2025-10-13 14:05:16.142278507 +0000 UTC m=+0.045964178 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Oct 13 14:05:16 standalone.localdomain sudo[113317]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:16 standalone.localdomain crond[113315]: (CRON) STARTUP (1.5.7)
Oct 13 14:05:16 standalone.localdomain crond[113315]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 13 14:05:16 standalone.localdomain crond[113315]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 72% if used.)
Oct 13 14:05:16 standalone.localdomain crond[113315]: (CRON) INFO (running with inotify support)
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started libpod-conmon-9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.scope.
Oct 13 14:05:16 standalone.localdomain podman[113343]: 2025-10-13 14:05:16.29294516 +0000 UTC m=+0.081612991 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=starting, container_name=manila_api_cron, distribution-scope=public, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.component=openstack-manila-api-container, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-manila-api, release=1, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T16:06:43, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1)
Oct 13 14:05:16 standalone.localdomain podman[113343]: 2025-10-13 14:05:16.300827342 +0000 UTC m=+0.089495193 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, name=rhosp17/openstack-manila-api, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-manila-api-container, container_name=manila_api_cron, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, vendor=Red Hat, Inc.)
Oct 13 14:05:16 standalone.localdomain haproxy[70940]: 172.17.0.100:50985 [13/Oct/2025:14:05:16.174] mysql mysql/standalone.internalapi.localdomain 1/0/130 9216 -- 13/13/12/12/0 0/0
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:16 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:05:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb8db4c42546afae0a3aee10f13d090e0d588e1e1565732d0102851e5608eab/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb8db4c42546afae0a3aee10f13d090e0d588e1e1565732d0102851e5608eab/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:16 standalone.localdomain podman[112702]: 2025-10-13 14:05:16.318686285 +0000 UTC m=+1.362020488 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:49:55, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=heat_api_cfn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, com.redhat.component=openstack-heat-api-cfn-container, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99)
Oct 13 14:05:16 standalone.localdomain haproxy[70940]: 172.17.0.100:47113 [13/Oct/2025:14:05:16.309] mysql mysql/standalone.internalapi.localdomain 1/0/10 3865 -- 13/13/12/12/0 0/0
Oct 13 14:05:16 standalone.localdomain podman[113321]: 2025-10-13 14:05:16.338805514 +0000 UTC m=+0.158177885 container create ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, release=1, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, com.redhat.component=openstack-nova-conductor-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, container_name=nova_conductor, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:44:17, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, description=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:05:16 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:05:16 standalone.localdomain podman[113276]: 2025-10-13 14:05:16.350497682 +0000 UTC m=+0.254183353 container init 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, distribution-scope=public, version=17.1.9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, build-date=2025-07-21T15:44:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, architecture=x86_64, name=rhosp17/openstack-neutron-server, io.buildah.version=1.33.12, container_name=neutron_api)
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started libpod-conmon-ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.scope.
Oct 13 14:05:16 standalone.localdomain sudo[113411]:  neutron : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:16 standalone.localdomain sudo[113411]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42435)
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:16 standalone.localdomain podman[113353]: 2025-10-13 14:05:16.275641946 +0000 UTC m=+0.035363026 image pull  registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:05:16 standalone.localdomain podman[113353]: 2025-10-13 14:05:16.381526873 +0000 UTC m=+0.141247933 container create 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, description=Red Hat OpenStack Platform 17.1 manila-scheduler, container_name=manila_scheduler, build-date=2025-07-21T15:56:28, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, version=17.1.9, release=1, com.redhat.component=openstack-manila-scheduler-container, name=rhosp17/openstack-manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12)
Oct 13 14:05:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9ec403efa2eb805f6ccd9ed5698840a2ea6e37a8c8a3241e70550634ef8a13/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:16 standalone.localdomain podman[113321]: 2025-10-13 14:05:16.286412304 +0000 UTC m=+0.105784705 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1
Oct 13 14:05:16 standalone.localdomain podman[113276]: 2025-10-13 14:05:16.385464993 +0000 UTC m=+0.289150654 container start 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T15:44:03, description=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., 
vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, container_name=neutron_api, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, name=rhosp17/openstack-neutron-server, version=17.1.9, release=1, config_id=tripleo_step4)
Oct 13 14:05:16 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name neutron_api --conmon-pidfile /run/neutron_api.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bfea4b567a7178c2bf424bc40994d7e4 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=neutron_api --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/neutron_api.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/log/containers/httpd/neutron-api:/var/log/httpd:z --volume /var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:05:16 standalone.localdomain podman[113321]: 2025-10-13 14:05:16.415725818 +0000 UTC m=+0.235098199 container init ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-conductor-container, build-date=2025-07-21T15:44:17, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, container_name=nova_conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:05:16 standalone.localdomain sudo[113453]:     nova : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:16 standalone.localdomain sudo[113453]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:05:16 standalone.localdomain sudo[113453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42436)
Oct 13 14:05:16 standalone.localdomain sudo[113411]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:16 standalone.localdomain podman[113321]: 2025-10-13 14:05:16.455904422 +0000 UTC m=+0.275276803 container start ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, release=1, io.openshift.tags=rhosp 
osp openstack osp-17.1, com.redhat.component=openstack-nova-conductor-container, version=17.1.9, name=rhosp17/openstack-nova-conductor, vendor=Red Hat, Inc., container_name=nova_conductor, build-date=2025-07-21T15:44:17, tcib_managed=true, io.openshift.expose-services=)
Oct 13 14:05:16 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_conductor --conmon-pidfile /run/nova_conductor.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ff6c887813d25a6bfef54f8920eca651 --healthcheck-command /openstack/healthcheck 5672 --label config_id=tripleo_step4 --label container_name=nova_conductor --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_conductor.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1
Oct 13 14:05:16 standalone.localdomain podman[112347]: 2025-10-13 14:05:16.479681183 +0000 UTC m=+2.096034577 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, io.buildah.version=1.33.12, container_name=cinder_api, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, config_id=tripleo_step4, com.redhat.component=openstack-cinder-api-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, 
vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, name=rhosp17/openstack-cinder-api, release=1, build-date=2025-07-21T15:58:55)
Oct 13 14:05:16 standalone.localdomain podman[113399]: 2025-10-13 14:05:16.395795306 +0000 UTC m=+0.048215642 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Oct 13 14:05:16 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:05:16 standalone.localdomain sudo[113453]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:16 standalone.localdomain podman[113399]: 2025-10-13 14:05:16.501623642 +0000 UTC m=+0.154043948 container create 3211c40ee443e50002113c5d71408f64c4a714459867b9370a99dfaa072a0b14 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, release=2, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-07-21T14:56:59, distribution-scope=public, batch=17.1_20250721.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, tcib_managed=true, version=17.1.9, container_name=nova_libvirt_init_secret, description=Red Hat 
OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started libpod-conmon-3211c40ee443e50002113c5d71408f64c4a714459867b9370a99dfaa072a0b14.scope.
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300d4c0e2feb52eaa1178d138869677b040789f75a3bb2c582bfc2b3fc939b12/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300d4c0e2feb52eaa1178d138869677b040789f75a3bb2c582bfc2b3fc939b12/merged/etc/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300d4c0e2feb52eaa1178d138869677b040789f75a3bb2c582bfc2b3fc939b12/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:16 standalone.localdomain podman[113399]: 2025-10-13 14:05:16.548975604 +0000 UTC m=+0.201395920 container init 3211c40ee443e50002113c5d71408f64c4a714459867b9370a99dfaa072a0b14 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, tcib_managed=true, architecture=x86_64, container_name=nova_libvirt_init_secret, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, vcs-type=git, release=2, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, version=17.1.9)
Oct 13 14:05:16 standalone.localdomain podman[113423]: 2025-10-13 14:05:16.589885832 +0000 UTC m=+0.205115963 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=starting, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-server, container_name=neutron_api, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T15:44:03, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:05:16 standalone.localdomain podman[113459]: 2025-10-13 14:05:16.629599681 +0000 UTC m=+0.183747443 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=starting, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-conductor-container, description=Red Hat OpenStack Platform 17.1 nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, build-date=2025-07-21T15:44:17, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, distribution-scope=public, 
name=rhosp17/openstack-nova-conductor, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_conductor, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:05:16 standalone.localdomain ceph-mon[29756]: pgmap v904: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:16 standalone.localdomain podman[113459]: 2025-10-13 14:05:16.669826718 +0000 UTC m=+0.223974480 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, container_name=nova_conductor, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, description=Red Hat OpenStack Platform 17.1 nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, com.redhat.component=openstack-nova-conductor-container, build-date=2025-07-21T15:44:17, 
config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:05:16 standalone.localdomain podman[113459]: unhealthy
Oct 13 14:05:16 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:16 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Failed with result 'exit-code'.
Oct 13 14:05:16 standalone.localdomain podman[113399]: 2025-10-13 14:05:16.710579761 +0000 UTC m=+0.363000077 container start 3211c40ee443e50002113c5d71408f64c4a714459867b9370a99dfaa072a0b14 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, 
maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, container_name=nova_libvirt_init_secret)
Oct 13 14:05:16 standalone.localdomain podman[113399]: 2025-10-13 14:05:16.71142692 +0000 UTC m=+0.363847266 container attach 3211c40ee443e50002113c5d71408f64c4a714459867b9370a99dfaa072a0b14 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, build-date=2025-07-21T14:56:59, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, release=2, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, container_name=nova_libvirt_init_secret, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 13 14:05:16 standalone.localdomain systemd[1]: libpod-3211c40ee443e50002113c5d71408f64c4a714459867b9370a99dfaa072a0b14.scope: Deactivated successfully.
Oct 13 14:05:16 standalone.localdomain podman[113399]: 2025-10-13 14:05:16.716715096 +0000 UTC m=+0.369135422 container died 3211c40ee443e50002113c5d71408f64c4a714459867b9370a99dfaa072a0b14 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, container_name=nova_libvirt_init_secret, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, build-date=2025-07-21T14:56:59, io.buildah.version=1.33.12, release=2, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0)
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started libpod-conmon-194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.scope.
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d1deaee7c92302295cfa6e4899573241006c9f9f1593a37994b270ad876377d/merged/var/log/manila supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:16 standalone.localdomain podman[113623]: 2025-10-13 14:05:16.743698801 +0000 UTC m=+0.064725451 container create 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public)
Oct 13 14:05:16 standalone.localdomain podman[113423]: 2025-10-13 14:05:16.750851959 +0000 UTC m=+0.366082100 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, architecture=x86_64, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=neutron_api, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:03, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-neutron-server, batch=17.1_20250721.1, config_id=tripleo_step4, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, com.redhat.component=openstack-neutron-server-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:05:16 standalone.localdomain podman[113353]: 2025-10-13 14:05:16.758094629 +0000 UTC m=+0.517815689 container init 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.component=openstack-manila-scheduler-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 manila-scheduler, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, name=rhosp17/openstack-manila-scheduler, container_name=manila_scheduler, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2025-07-21T15:56:28, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']})
Oct 13 14:05:16 standalone.localdomain podman[113423]: unhealthy
Oct 13 14:05:16 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:16 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Failed with result 'exit-code'.
Oct 13 14:05:16 standalone.localdomain sudo[113674]:   manila : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:16 standalone.localdomain sudo[113674]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42429)
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:05:16 standalone.localdomain podman[113623]: 2025-10-13 14:05:16.708366538 +0000 UTC m=+0.029393198 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Oct 13 14:05:16 standalone.localdomain podman[113353]: 2025-10-13 14:05:16.821179085 +0000 UTC m=+0.580900145 container start 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 manila-scheduler, distribution-scope=public, build-date=2025-07-21T15:56:28, com.redhat.component=openstack-manila-scheduler-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=manila_scheduler, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 
manila-scheduler, managed_by=tripleo_ansible, release=1, name=rhosp17/openstack-manila-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1)
Oct 13 14:05:16 standalone.localdomain sudo[113674]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:16 standalone.localdomain podman[113647]: 2025-10-13 14:05:16.835017254 +0000 UTC m=+0.109898700 container cleanup 3211c40ee443e50002113c5d71408f64c4a714459867b9370a99dfaa072a0b14 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, release=2, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, container_name=nova_libvirt_init_secret, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0)
Oct 13 14:05:16 standalone.localdomain systemd[1]: libpod-conmon-3211c40ee443e50002113c5d71408f64c4a714459867b9370a99dfaa072a0b14.scope: Deactivated successfully.
Oct 13 14:05:16 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name manila_scheduler --conmon-pidfile /run/manila_scheduler.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=6f421a64683e0927a1c7cc208f5b5307 --healthcheck-command /openstack/healthcheck 5672 --label config_id=tripleo_step4 --label container_name=manila_scheduler --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/manila_scheduler.log --network host --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/manila:/var/log/manila:z registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1
Oct 13 14:05:16 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=5ce329a35cfc30978bc40d323681fc5e --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started libpod-conmon-6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.scope.
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36dd811e02ba985b5a336c17f03e47ad2b35e431e58c3e755073c9d6afb5f7b6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:16 standalone.localdomain podman[113677]: 2025-10-13 14:05:16.964325049 +0000 UTC m=+0.167330599 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=starting, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, build-date=2025-07-21T15:56:28, description=Red Hat OpenStack Platform 17.1 manila-scheduler, managed_by=tripleo_ansible, name=rhosp17/openstack-manila-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-manila-scheduler-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, release=1, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, container_name=manila_scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:05:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:05:16 standalone.localdomain podman[113623]: 2025-10-13 14:05:16.978675486 +0000 UTC m=+0.299702136 container init 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, batch=17.1_20250721.1, release=1, 
io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:05:16 standalone.localdomain sudo[113828]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:16 standalone.localdomain sudo[113828]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:05:16 standalone.localdomain sudo[113828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:05:17 standalone.localdomain podman[113623]: 2025-10-13 14:05:17.009705826 +0000 UTC m=+0.330732476 container start 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.9, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:05:17 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5ce329a35cfc30978bc40d323681fc5e --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro 
--volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Oct 13 14:05:17 standalone.localdomain podman[113765]: 2025-10-13 14:05:16.952747925 +0000 UTC m=+0.059123575 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1
Oct 13 14:05:17 standalone.localdomain podman[113765]: 2025-10-13 14:05:17.05499682 +0000 UTC m=+0.161372470 container create 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, io.openshift.expose-services=, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_scheduler, build-date=2025-07-21T16:02:54, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-scheduler, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, name=rhosp17/openstack-nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler)
Oct 13 14:05:17 standalone.localdomain sudo[113828]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:17 standalone.localdomain sshd[113856]: Server listening on 0.0.0.0 port 2022.
Oct 13 14:05:17 standalone.localdomain sshd[113856]: Server listening on :: port 2022.
Oct 13 14:05:17 standalone.localdomain systemd[1]: Started libpod-conmon-37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.scope.
Oct 13 14:05:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9405f1b2f2f7c524365dd7da82f79d342db2340e35c55f8c8aa59d7a473d1f4f/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:17 standalone.localdomain podman[113677]: 2025-10-13 14:05:17.114523278 +0000 UTC m=+0.317528838 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, vcs-type=git, version=17.1.9, com.redhat.component=openstack-manila-scheduler-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-manila-scheduler, release=1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, architecture=x86_64, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, build-date=2025-07-21T15:56:28, distribution-scope=public, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, container_name=manila_scheduler, 
maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:05:17 standalone.localdomain podman[113677]: unhealthy
Oct 13 14:05:17 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:17 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Failed with result 'exit-code'.
Oct 13 14:05:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:05:17 standalone.localdomain podman[113765]: 2025-10-13 14:05:17.135531655 +0000 UTC m=+0.241907315 container init 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, com.redhat.component=openstack-nova-scheduler-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, build-date=2025-07-21T16:02:54, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_scheduler, vcs-type=git, 
managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-scheduler, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, name=rhosp17/openstack-nova-scheduler, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 14:05:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:05:17 standalone.localdomain podman[113765]: 2025-10-13 14:05:17.155276381 +0000 UTC m=+0.261652021 container start 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-scheduler-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_scheduler, distribution-scope=public, version=17.1.9, build-date=2025-07-21T16:02:54, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler)
Oct 13 14:05:17 standalone.localdomain sudo[113877]:     nova : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:17 standalone.localdomain sudo[113877]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:05:17 standalone.localdomain sudo[113877]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42436)
Oct 13 14:05:17 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_scheduler --conmon-pidfile /run/nova_scheduler.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ff6c887813d25a6bfef54f8920eca651 --healthcheck-command /openstack/healthcheck 5672 --label config_id=tripleo_step4 --label container_name=nova_scheduler --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_scheduler.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1
Oct 13 14:05:17 standalone.localdomain podman[113834]: 2025-10-13 14:05:17.214540299 +0000 UTC m=+0.200929784 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute)
Oct 13 14:05:17 standalone.localdomain sudo[113877]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:17 standalone.localdomain podman[113882]: 2025-10-13 14:05:17.232971652 +0000 UTC m=+0.071193736 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=starting, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T16:02:54, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, managed_by=tripleo_ansible, container_name=nova_scheduler, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-nova-scheduler-container, description=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, name=rhosp17/openstack-nova-scheduler, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, version=17.1.9)
Oct 13 14:05:17 standalone.localdomain podman[113882]: 2025-10-13 14:05:17.362749702 +0000 UTC m=+0.200971786 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T16:02:54, config_id=tripleo_step4, container_name=nova_scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-nova-scheduler-container, name=rhosp17/openstack-nova-scheduler, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 14:05:17 standalone.localdomain podman[113882]: unhealthy
Oct 13 14:05:17 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:17 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Failed with result 'exit-code'.
Oct 13 14:05:17 standalone.localdomain podman[113913]: 2025-10-13 14:05:17.402927847 +0000 UTC m=+0.184280782 container create e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, tcib_managed=true, com.redhat.component=openstack-nova-novncproxy-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, io.openshift.expose-services=, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, batch=17.1_20250721.1, build-date=2025-07-21T15:24:10, io.buildah.version=1.33.12, vcs-type=git, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, release=1, container_name=nova_vnc_proxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy)
Oct 13 14:05:17 standalone.localdomain podman[113913]: 2025-10-13 14:05:17.328212255 +0000 UTC m=+0.109565190 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1
Oct 13 14:05:17 standalone.localdomain systemd[1]: Started libpod-conmon-e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.scope.
Oct 13 14:05:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8675278b750d990aa4a2dbfe907b259973251cdfa372b24fe7e026c128a60e7a/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v905: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:05:17 standalone.localdomain podman[113913]: 2025-10-13 14:05:17.557511361 +0000 UTC m=+0.338864306 container init e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, container_name=nova_vnc_proxy, name=rhosp17/openstack-nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, release=1, batch=17.1_20250721.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, distribution-scope=public, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, 
build-date=2025-07-21T15:24:10, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-novncproxy-container)
Oct 13 14:05:17 standalone.localdomain podman[113834]: 2025-10-13 14:05:17.573840223 +0000 UTC m=+0.560229688 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Oct 13 14:05:17 standalone.localdomain sudo[114049]:     nova : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:17 standalone.localdomain sudo[114049]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:05:17 standalone.localdomain sudo[114049]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42436)
Oct 13 14:05:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:05:17 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:05:17 standalone.localdomain podman[113913]: 2025-10-13 14:05:17.614595146 +0000 UTC m=+0.395948091 container start e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-novncproxy-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vendor=Red Hat, Inc., build-date=2025-07-21T15:24:10, container_name=nova_vnc_proxy, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, batch=17.1_20250721.1, name=rhosp17/openstack-nova-novncproxy, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.expose-services=, vcs-type=git)
Oct 13 14:05:17 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_vnc_proxy --conmon-pidfile /run/nova_vnc_proxy.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ff6c887813d25a6bfef54f8920eca651 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_vnc_proxy --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_vnc_proxy.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1
Oct 13 14:05:17 standalone.localdomain haproxy[70940]: Server heat_api/standalone.internalapi.localdomain is UP, reason: Layer7 check passed, code: 200, check duration: 5ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:05:17 standalone.localdomain podman[114024]: 2025-10-13 14:05:17.706995435 +0000 UTC m=+0.174496906 container create c46419555a2b01acc004e3cf715442a4f4285ca9471540f0334739420820f405 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=setup_ovs_manager, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T16:28:53, 
io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1)
Oct 13 14:05:17 standalone.localdomain podman[114063]: 2025-10-13 14:05:17.721779656 +0000 UTC m=+0.117662639 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=starting, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:24:10, com.redhat.component=openstack-nova-novncproxy-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-novncproxy, maintainer=OpenStack TripleO Team, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-novncproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, io.openshift.expose-services=, container_name=nova_vnc_proxy, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true)
Oct 13 14:05:17 standalone.localdomain podman[114044]: 2025-10-13 14:05:17.738740249 +0000 UTC m=+0.147870361 container create 42a0e28a14f5b02bb473af4947fa59b3f287b8a8aefdb52b635575d15fef57da (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_reaper, build-date=2025-07-21T16:11:22, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-swift-account, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_account_reaper, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift:z', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12)
Oct 13 14:05:17 standalone.localdomain podman[114024]: 2025-10-13 14:05:17.645962568 +0000 UTC m=+0.113464049 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Oct 13 14:05:17 standalone.localdomain sudo[114049]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:17 standalone.localdomain podman[114044]: 2025-10-13 14:05:17.677252947 +0000 UTC m=+0.086383059 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1
Oct 13 14:05:17 standalone.localdomain systemd[1]: Started libpod-conmon-42a0e28a14f5b02bb473af4947fa59b3f287b8a8aefdb52b635575d15fef57da.scope.
Oct 13 14:05:17 standalone.localdomain systemd[1]: Started libpod-conmon-c46419555a2b01acc004e3cf715442a4f4285ca9471540f0334739420820f405.scope.
Oct 13 14:05:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057747de5dc3b18feb0c2569a593e1b23867d3fe8ee9a8da6a87f98766bbc5e9/merged/srv/node supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057747de5dc3b18feb0c2569a593e1b23867d3fe8ee9a8da6a87f98766bbc5e9/merged/var/log/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057747de5dc3b18feb0c2569a593e1b23867d3fe8ee9a8da6a87f98766bbc5e9/merged/var/cache/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:17 standalone.localdomain podman[114044]: 2025-10-13 14:05:17.832448982 +0000 UTC m=+0.241579094 container init 42a0e28a14f5b02bb473af4947fa59b3f287b8a8aefdb52b635575d15fef57da (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_reaper, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift:z', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, architecture=x86_64, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-07-21T16:11:22, container_name=swift_account_reaper, com.redhat.component=openstack-swift-account-container, 
vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 13 14:05:17 standalone.localdomain systemd[1]: tmp-crun.N1UFSY.mount: Deactivated successfully.
Oct 13 14:05:17 standalone.localdomain systemd[1]: tmp-crun.hdgEDY.mount: Deactivated successfully.
Oct 13 14:05:17 standalone.localdomain podman[114024]: 2025-10-13 14:05:17.861427214 +0000 UTC m=+0.328928685 container init c46419555a2b01acc004e3cf715442a4f4285ca9471540f0334739420820f405 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, distribution-scope=public, tcib_managed=true, container_name=setup_ovs_manager, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4)
Oct 13 14:05:17 standalone.localdomain podman[114044]: 2025-10-13 14:05:17.866926627 +0000 UTC m=+0.276056739 container start 42a0e28a14f5b02bb473af4947fa59b3f287b8a8aefdb52b635575d15fef57da (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_reaper, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift:z', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=swift_account_reaper, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, 
vcs-type=git, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:05:17 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name swift_account_reaper --conmon-pidfile /run/swift_account_reaper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d8b7706e98ee5ba7e1c4d758abf97545 --label config_id=tripleo_step4 --label container_name=swift_account_reaper --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift:z', '/var/log/containers/swift:/var/log/swift:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/swift_account_reaper.log --network host --user swift --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro --volume /srv/node:/srv/node --volume /dev:/dev --volume /var/cache/swift:/var/cache/swift:z --volume /var/log/containers/swift:/var/log/swift:z registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1
Oct 13 14:05:17 standalone.localdomain sudo[114172]:    swift : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:17 standalone.localdomain sudo[114172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42445)
Oct 13 14:05:17 standalone.localdomain podman[114024]: 2025-10-13 14:05:17.882737663 +0000 UTC m=+0.350239124 container start c46419555a2b01acc004e3cf715442a4f4285ca9471540f0334739420820f405 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=setup_ovs_manager, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:05:17 standalone.localdomain podman[114024]: 2025-10-13 14:05:17.883550089 +0000 UTC m=+0.351051580 container attach c46419555a2b01acc004e3cf715442a4f4285ca9471540f0334739420820f405 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., tcib_managed=true, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-07-21T16:28:53, version=17.1.9, managed_by=tripleo_ansible, 
summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, container_name=setup_ovs_manager)
Oct 13 14:05:17 standalone.localdomain sudo[114172]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:17 standalone.localdomain podman[114125]: 2025-10-13 14:05:17.924118847 +0000 UTC m=+0.168052562 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1
Oct 13 14:05:17 standalone.localdomain podman[114130]: 2025-10-13 14:05:17.998534908 +0000 UTC m=+0.233807666 container create dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, 
build-date=2025-07-21T16:11:22, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git)
Oct 13 14:05:18 standalone.localdomain podman[114130]: 2025-10-13 14:05:17.912409707 +0000 UTC m=+0.147682485 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1
Oct 13 14:05:18 standalone.localdomain podman[114125]: 2025-10-13 14:05:18.025706271 +0000 UTC m=+0.269639966 container create ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, architecture=x86_64, build-date=2025-07-21T15:54:32, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_container_server, com.redhat.component=openstack-swift-container-container, release=1, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git)
Oct 13 14:05:18 standalone.localdomain podman[114063]: 2025-10-13 14:05:18.054334942 +0000 UTC m=+0.450217915 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, container_name=nova_vnc_proxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, version=17.1.9, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-nova-novncproxy, architecture=x86_64, com.redhat.component=openstack-nova-novncproxy-container, build-date=2025-07-21T15:24:10, release=1, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1)
Oct 13 14:05:18 standalone.localdomain podman[114063]: unhealthy
Oct 13 14:05:18 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:18 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Failed with result 'exit-code'.
Oct 13 14:05:18 standalone.localdomain systemd[1]: Started libpod-conmon-dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.scope.
Oct 13 14:05:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290c6a69c52e09ef654eb4e34437a2ecd4e29f978b5c8799ac61de8be76cdd0b/merged/srv/node supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290c6a69c52e09ef654eb4e34437a2ecd4e29f978b5c8799ac61de8be76cdd0b/merged/var/log/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290c6a69c52e09ef654eb4e34437a2ecd4e29f978b5c8799ac61de8be76cdd0b/merged/var/cache/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:05:18 standalone.localdomain podman[114130]: 2025-10-13 14:05:18.165782313 +0000 UTC m=+0.401055101 container init dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, 
vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:05:18 standalone.localdomain systemd[1]: Started libpod-conmon-ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.scope.
Oct 13 14:05:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:05:18 standalone.localdomain podman[114130]: 2025-10-13 14:05:18.184352799 +0000 UTC m=+0.419625557 container start dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, build-date=2025-07-21T16:11:22, release=1, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=swift_account_server, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:05:18 standalone.localdomain sudo[114233]:    swift : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:18 standalone.localdomain sudo[114233]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42445)
Oct 13 14:05:18 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name swift_account_server --conmon-pidfile /run/swift_account_server.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d8b7706e98ee5ba7e1c4d758abf97545 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=swift_account_server --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/swift_account_server.log --network host --user swift --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro --volume /srv/node:/srv/node --volume /dev:/dev --volume /var/cache/swift:/var/cache/swift --volume /var/log/containers/swift:/var/log/swift:z registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1
Oct 13 14:05:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/180eac1687c2c7a652d4f4066f69aec68a885bf0be9992b8f99ea4584f13d882/merged/srv/node supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/180eac1687c2c7a652d4f4066f69aec68a885bf0be9992b8f99ea4584f13d882/merged/var/log/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/180eac1687c2c7a652d4f4066f69aec68a885bf0be9992b8f99ea4584f13d882/merged/var/cache/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:18 standalone.localdomain sudo[114233]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:18 standalone.localdomain account-server[114171]: Starting 2
Oct 13 14:05:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:05:18 standalone.localdomain podman[114125]: 2025-10-13 14:05:18.319066535 +0000 UTC m=+0.563000240 container init ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-container-container, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, 
name=rhosp17/openstack-swift-container, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1)
Oct 13 14:05:18 standalone.localdomain podman[114234]: 2025-10-13 14:05:18.334565689 +0000 UTC m=+0.135990817 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=starting, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=swift_account_server, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12)
Oct 13 14:05:18 standalone.localdomain sudo[114306]:    swift : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:18 standalone.localdomain sudo[114306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42445)
Oct 13 14:05:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:05:18 standalone.localdomain haproxy[70940]: 172.17.0.2:48722 [13/Oct/2025:14:05:18.374] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 16/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:05:18 standalone.localdomain podman[114125]: 2025-10-13 14:05:18.384830148 +0000 UTC m=+0.628763823 container start ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32)
Oct 13 14:05:18 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name swift_container_server --conmon-pidfile /run/swift_container_server.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d8b7706e98ee5ba7e1c4d758abf97545 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=swift_container_server --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/swift_container_server.log --network host --user swift --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro --volume /srv/node:/srv/node --volume /dev:/dev --volume /var/cache/swift:/var/cache/swift --volume /var/log/containers/swift:/var/log/swift:z registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1
Oct 13 14:05:18 standalone.localdomain haproxy[70940]: Server cinder/standalone.internalapi.localdomain is UP, reason: Layer7 check passed, code: 200, check duration: 4ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:05:18 standalone.localdomain haproxy[70940]: Server heat_cfn/standalone.internalapi.localdomain is UP, reason: Layer7 check passed, code: 200, check duration: 2ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:05:18 standalone.localdomain podman[114280]: 2025-10-13 14:05:18.490064174 +0000 UTC m=+0.188254654 container create 59d24ac6b1e68a9d9f38a32622f37532e6c5c6b98954ed9b63a3495aea9c83c8 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_updater, com.redhat.component=openstack-swift-container-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 
17.1 swift-container, managed_by=tripleo_ansible, container_name=swift_container_updater, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Oct 13 14:05:18 standalone.localdomain sudo[114306]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:18 standalone.localdomain podman[114280]: 2025-10-13 14:05:18.428677565 +0000 UTC m=+0.126868055 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1
Oct 13 14:05:18 standalone.localdomain systemd[1]: Started libpod-conmon-59d24ac6b1e68a9d9f38a32622f37532e6c5c6b98954ed9b63a3495aea9c83c8.scope.
Oct 13 14:05:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac93ada7b9e12cb2208c0d0c89c4ad77e146d6a19778550b80e3e09571ae2f3e/merged/srv/node supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac93ada7b9e12cb2208c0d0c89c4ad77e146d6a19778550b80e3e09571ae2f3e/merged/var/log/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac93ada7b9e12cb2208c0d0c89c4ad77e146d6a19778550b80e3e09571ae2f3e/merged/var/cache/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:18 standalone.localdomain podman[114318]: 2025-10-13 14:05:18.62213056 +0000 UTC m=+0.246931482 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=starting, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, version=17.1.9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, vendor=Red Hat, Inc.)
Oct 13 14:05:18 standalone.localdomain podman[114280]: 2025-10-13 14:05:18.643546011 +0000 UTC m=+0.341736491 container init 59d24ac6b1e68a9d9f38a32622f37532e6c5c6b98954ed9b63a3495aea9c83c8 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_updater, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, build-date=2025-07-21T15:54:32, container_name=swift_container_updater, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=)
Oct 13 14:05:18 standalone.localdomain podman[114280]: 2025-10-13 14:05:18.655707785 +0000 UTC m=+0.353898255 container start 59d24ac6b1e68a9d9f38a32622f37532e6c5c6b98954ed9b63a3495aea9c83c8 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_updater, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, release=1, container_name=swift_container_updater, summary=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12)
Oct 13 14:05:18 standalone.localdomain ceph-mon[29756]: pgmap v905: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:18 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name swift_container_updater --conmon-pidfile /run/swift_container_updater.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d8b7706e98ee5ba7e1c4d758abf97545 --label config_id=tripleo_step4 --label container_name=swift_container_updater --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/swift_container_updater.log --network host --user swift --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro --volume /srv/node:/srv/node --volume /dev:/dev --volume /var/cache/swift:/var/cache/swift --volume /var/log/containers/swift:/var/log/swift:z registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1
Oct 13 14:05:18 standalone.localdomain podman[114234]: 2025-10-13 14:05:18.664372693 +0000 UTC m=+0.465797821 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.buildah.version=1.33.12, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, 
tcib_managed=true, container_name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:05:18 standalone.localdomain sudo[114431]:    swift : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:18 standalone.localdomain sudo[114431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42445)
Oct 13 14:05:18 standalone.localdomain podman[114234]: unhealthy
Oct 13 14:05:18 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:18 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Failed with result 'exit-code'.
Oct 13 14:05:18 standalone.localdomain podman[114385]: 2025-10-13 14:05:18.696089986 +0000 UTC m=+0.131304892 container create e424aa3074ca6a84ef71baa8ad166a3f305c806ab6c804d6946a55f18e820802 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_object_expirer, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-swift-proxy-server, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_object_expirer, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, version=17.1.9, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:05:18 standalone.localdomain podman[114385]: 2025-10-13 14:05:18.65915597 +0000 UTC m=+0.094370906 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1
Oct 13 14:05:18 standalone.localdomain systemd[1]: Started libpod-conmon-e424aa3074ca6a84ef71baa8ad166a3f305c806ab6c804d6946a55f18e820802.scope.
Oct 13 14:05:18 standalone.localdomain sudo[114431]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4611ac2707d84481c333707eab6b4546b4ba86a2fffa29044d185ab9d3919bfd/merged/srv/node supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4611ac2707d84481c333707eab6b4546b4ba86a2fffa29044d185ab9d3919bfd/merged/var/cache/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4611ac2707d84481c333707eab6b4546b4ba86a2fffa29044d185ab9d3919bfd/merged/var/log/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:18 standalone.localdomain podman[114385]: 2025-10-13 14:05:18.802734888 +0000 UTC m=+0.237949804 container init e424aa3074ca6a84ef71baa8ad166a3f305c806ab6c804d6946a55f18e820802 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_object_expirer, architecture=x86_64, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, release=1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, 
config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_object_expirer, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, maintainer=OpenStack TripleO Team)
Oct 13 14:05:18 standalone.localdomain podman[114385]: 2025-10-13 14:05:18.810450385 +0000 UTC m=+0.245665301 container start e424aa3074ca6a84ef71baa8ad166a3f305c806ab6c804d6946a55f18e820802 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_object_expirer, container_name=swift_object_expirer, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, vcs-type=git, vendor=Red 
Hat, Inc., architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-swift-proxy-server, distribution-scope=public, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, release=1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 14:05:18 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name swift_object_expirer --conmon-pidfile /run/swift_object_expirer.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d8b7706e98ee5ba7e1c4d758abf97545 --label config_id=tripleo_step4 --label container_name=swift_object_expirer --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/swift_object_expirer.log --network host --user swift --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro --volume /srv/node:/srv/node --volume /dev:/dev --volume /var/cache/swift:/var/cache/swift --volume /var/log/containers/swift:/var/log/swift:z registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1
Oct 13 14:05:18 standalone.localdomain sudo[114488]:    swift : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:18 standalone.localdomain sudo[114488]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42445)
Oct 13 14:05:18 standalone.localdomain haproxy[70940]: 172.17.0.2:48722 [13/Oct/2025:14:05:18.388] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/476/476 201 8106 - - ---- 17/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:05:18 standalone.localdomain haproxy[70940]: 172.17.0.2:47196 [13/Oct/2025:14:05:18.867] placement placement/<NOSRV> 0/-1/-1/-1/0 503 217 - - SC-- 18/1/0/0/0 0/0 "GET /placement HTTP/1.1"
Oct 13 14:05:18 standalone.localdomain sudo[114488]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:18 standalone.localdomain podman[114318]: 2025-10-13 14:05:18.922975992 +0000 UTC m=+0.547776904 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, tcib_managed=true, io.buildah.version=1.33.12, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, release=1, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.component=openstack-swift-container-container, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 14:05:18 standalone.localdomain podman[114318]: unhealthy
Oct 13 14:05:18 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:18 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Failed with result 'exit-code'.
Oct 13 14:05:18 standalone.localdomain container-server[114305]: Option allow_versions is deprecated. Configure the versioned_writes middleware in the proxy-server instead. This option will be ignored in a future release.
Oct 13 14:05:18 standalone.localdomain container-server[114540]: Option allow_versions is deprecated. Configure the versioned_writes middleware in the proxy-server instead. This option will be ignored in a future release.
Oct 13 14:05:18 standalone.localdomain container-server[114305]: Started child 20 from parent 2
Oct 13 14:05:18 standalone.localdomain account-server[114232]: Started child 20 from parent 2
Oct 13 14:05:19 standalone.localdomain container-server[114542]: Option allow_versions is deprecated. Configure the versioned_writes middleware in the proxy-server instead. This option will be ignored in a future release.
Oct 13 14:05:19 standalone.localdomain account-server[114232]: Started child 21 from parent 2
Oct 13 14:05:19 standalone.localdomain container-server[114305]: Started child 21 from parent 2
Oct 13 14:05:19 standalone.localdomain container-server[114569]: Option allow_versions is deprecated. Configure the versioned_writes middleware in the proxy-server instead. This option will be ignored in a future release.
Oct 13 14:05:19 standalone.localdomain account-server[114232]: Started child 22 from parent 2
Oct 13 14:05:19 standalone.localdomain container-server[114305]: Started child 22 from parent 2
Oct 13 14:05:19 standalone.localdomain account-server[114232]: Started child 23 from parent 2
Oct 13 14:05:19 standalone.localdomain podman[114497]: 2025-10-13 14:05:19.080182113 +0000 UTC m=+0.214333309 container create 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, build-date=2025-07-21T14:56:28, tcib_managed=true, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.expose-services=, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, distribution-scope=public, release=1, vendor=Red Hat, Inc.)
Oct 13 14:05:19 standalone.localdomain podman[114497]: 2025-10-13 14:05:18.985366194 +0000 UTC m=+0.119517410 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1
Oct 13 14:05:19 standalone.localdomain container-server[114572]: Option allow_versions is deprecated. Configure the versioned_writes middleware in the proxy-server instead. This option will be ignored in a future release.
Oct 13 14:05:19 standalone.localdomain container-server[114305]: Started child 23 from parent 2
Oct 13 14:05:19 standalone.localdomain container-server[114430]: Starting 2
Oct 13 14:05:19 standalone.localdomain haproxy[70940]: 172.17.0.2:48724 [13/Oct/2025:14:05:19.169] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 22/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:05:19 standalone.localdomain systemd[1]: Started libpod-conmon-45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.scope.
Oct 13 14:05:19 standalone.localdomain podman[114573]: 2025-10-13 14:05:19.181105995 +0000 UTC m=+0.066530300 container create 47c57a4b5f5ee71cc5c313caaede512007071e00b14223567402343b3ae9a278 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_updater, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_updater, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, 
com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:56:28, release=1, description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.buildah.version=1.33.12)
Oct 13 14:05:19 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:19 standalone.localdomain systemd[1]: Started libpod-conmon-47c57a4b5f5ee71cc5c313caaede512007071e00b14223567402343b3ae9a278.scope.
Oct 13 14:05:19 standalone.localdomain haproxy[70940]: 172.17.0.100:43155 [13/Oct/2025:14:05:19.007] mysql mysql/standalone.internalapi.localdomain 1/0/206 2414 -- 23/20/19/19/0 0/0
Oct 13 14:05:19 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab0964a79985e2fd611aaf821e802f5e019f533471da6fd545000f2a550d3bf9/merged/srv/node supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab0964a79985e2fd611aaf821e802f5e019f533471da6fd545000f2a550d3bf9/merged/var/cache/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab0964a79985e2fd611aaf821e802f5e019f533471da6fd545000f2a550d3bf9/merged/var/log/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6615dd679f8372d0d093a72d395c2fc705d2d0f7becbcfead13f9068eb8b44e7/merged/srv/node supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6615dd679f8372d0d093a72d395c2fc705d2d0f7becbcfead13f9068eb8b44e7/merged/var/cache/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6615dd679f8372d0d093a72d395c2fc705d2d0f7becbcfead13f9068eb8b44e7/merged/var/log/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:19 standalone.localdomain podman[114573]: 2025-10-13 14:05:19.232049987 +0000 UTC m=+0.117474292 container init 47c57a4b5f5ee71cc5c313caaede512007071e00b14223567402343b3ae9a278 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_updater, container_name=swift_object_updater, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, config_id=tripleo_step4, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, name=rhosp17/openstack-swift-object, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-swift-object-container, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat 
OpenStack Platform 17.1 swift-object, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:05:19 standalone.localdomain sudo[114602]:    swift : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:19 standalone.localdomain sudo[114602]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42445)
Oct 13 14:05:19 standalone.localdomain podman[114573]: 2025-10-13 14:05:19.154367187 +0000 UTC m=+0.039791492 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1
Oct 13 14:05:19 standalone.localdomain object-expirer[114487]: Starting 2
Oct 13 14:05:19 standalone.localdomain object-expirer[114487]: Option auto_create_account_prefix is deprecated. Configure auto_create_account_prefix under the swift-constraints section of swift.conf. This option will be ignored in a future release.
Oct 13 14:05:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:05:19 standalone.localdomain podman[114497]: 2025-10-13 14:05:19.279581036 +0000 UTC m=+0.413732242 container init 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.buildah.version=1.33.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, version=17.1.9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, release=1, build-date=2025-07-21T14:56:28, name=rhosp17/openstack-swift-object, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-type=git, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, container_name=swift_object_server)
Oct 13 14:05:19 standalone.localdomain podman[114573]: 2025-10-13 14:05:19.288048927 +0000 UTC m=+0.173473222 container start 47c57a4b5f5ee71cc5c313caaede512007071e00b14223567402343b3ae9a278 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_updater, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, container_name=swift_object_updater, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-swift-object-container, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, tcib_managed=true, release=1, build-date=2025-07-21T14:56:28)
Oct 13 14:05:19 standalone.localdomain sudo[114602]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:19 standalone.localdomain sudo[114609]:    swift : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:19 standalone.localdomain sudo[114609]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42445)
Oct 13 14:05:19 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name swift_object_updater --conmon-pidfile /run/swift_object_updater.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d8b7706e98ee5ba7e1c4d758abf97545 --label config_id=tripleo_step4 --label container_name=swift_object_updater --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/swift_object_updater.log --network host --user swift --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro --volume /srv/node:/srv/node --volume /dev:/dev --volume /var/cache/swift:/var/cache/swift --volume /var/log/containers/swift:/var/log/swift:z registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1
Oct 13 14:05:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:05:19 standalone.localdomain podman[114497]: 2025-10-13 14:05:19.323541186 +0000 UTC m=+0.457692382 container start 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, container_name=swift_object_server, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, 
build-date=2025-07-21T14:56:28, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.component=openstack-swift-object-container, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible)
Oct 13 14:05:19 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name swift_object_server --conmon-pidfile /run/swift_object_server.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d8b7706e98ee5ba7e1c4d758abf97545 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=swift_object_server --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/swift_object_server.log --network host --user swift --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro --volume /srv/node:/srv/node --volume /dev:/dev --volume /var/cache/swift:/var/cache/swift --volume /var/log/containers/swift:/var/log/swift:z registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1
Oct 13 14:05:19 standalone.localdomain sudo[114609]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:19 standalone.localdomain podman[114613]: 2025-10-13 14:05:19.391064158 +0000 UTC m=+0.078255020 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=starting, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T14:56:28, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, 
distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:05:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v906: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:19 standalone.localdomain podman[114613]: 2025-10-13 14:05:19.559755641 +0000 UTC m=+0.246946513 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, com.redhat.component=openstack-swift-object-container, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, 
description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:05:19 standalone.localdomain podman[114613]: unhealthy
Oct 13 14:05:19 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:19 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Failed with result 'exit-code'.
Oct 13 14:05:19 standalone.localdomain haproxy[70940]: 172.17.0.2:48724 [13/Oct/2025:14:05:19.173] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/407/407 201 8106 - - ---- 22/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:05:19 standalone.localdomain haproxy[70940]: 172.17.0.2:47200 [13/Oct/2025:14:05:19.583] placement placement/<NOSRV> 0/-1/-1/-1/0 503 217 - - SC-- 23/2/0/0/0 0/0 "GET /placement HTTP/1.1"
Oct 13 14:05:19 standalone.localdomain object-server[114601]: Starting 2
Oct 13 14:05:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:05:19 standalone.localdomain haproxy[70940]: 172.17.0.2:44294 [13/Oct/2025:14:05:19.703] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 26/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:05:19 standalone.localdomain object-server[114608]: Started child 22 from parent 2
Oct 13 14:05:19 standalone.localdomain object-server[114608]: Started child 23 from parent 2
Oct 13 14:05:19 standalone.localdomain object-server[114608]: Started child 24 from parent 2
Oct 13 14:05:19 standalone.localdomain object-server[114608]: Started child 25 from parent 2
Oct 13 14:05:20 standalone.localdomain haproxy[70940]: 172.17.0.2:44294 [13/Oct/2025:14:05:19.708] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/303/303 201 8106 - - ---- 26/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:05:20 standalone.localdomain haproxy[70940]: 172.17.0.2:34182 [13/Oct/2025:14:05:20.016] placement placement/<NOSRV> 0/-1/-1/-1/0 503 217 - - SC-- 27/3/0/0/0 0/0 "GET /placement HTTP/1.1"
Oct 13 14:05:20 standalone.localdomain ceph-mon[29756]: pgmap v906: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:20 standalone.localdomain runuser[114803]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:05:21 standalone.localdomain haproxy[70940]: Server manila/standalone.internalapi.localdomain is UP, reason: Layer7 check passed, code: 200, check duration: 857ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:05:21 standalone.localdomain ovs-vsctl[114856]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 13 14:05:21 standalone.localdomain runuser[114803]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:05:21 standalone.localdomain systemd[1]: libpod-c46419555a2b01acc004e3cf715442a4f4285ca9471540f0334739420820f405.scope: Deactivated successfully.
Oct 13 14:05:21 standalone.localdomain systemd[1]: libpod-c46419555a2b01acc004e3cf715442a4f4285ca9471540f0334739420820f405.scope: Consumed 2.880s CPU time.
Oct 13 14:05:21 standalone.localdomain podman[114873]: 2025-10-13 14:05:21.508100562 +0000 UTC m=+0.042622356 container died c46419555a2b01acc004e3cf715442a4f4285ca9471540f0334739420820f405 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, container_name=setup_ovs_manager, batch=17.1_20250721.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public)
Oct 13 14:05:21 standalone.localdomain runuser[114886]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:05:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c46419555a2b01acc004e3cf715442a4f4285ca9471540f0334739420820f405-userdata-shm.mount: Deactivated successfully.
Oct 13 14:05:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e1ccf73ed83c48b16553af543aff822fe00fea1e3c80e1686975885d7aa5f294-merged.mount: Deactivated successfully.
Oct 13 14:05:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v907: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:21 standalone.localdomain podman[114873]: 2025-10-13 14:05:21.62275996 +0000 UTC m=+0.157281724 container cleanup c46419555a2b01acc004e3cf715442a4f4285ca9471540f0334739420820f405 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=setup_ovs_manager, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:05:21 standalone.localdomain systemd[1]: libpod-conmon-c46419555a2b01acc004e3cf715442a4f4285ca9471540f0334739420820f405.scope: Deactivated successfully.
Oct 13 14:05:21 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1760362656 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1760362656'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata
Oct 13 14:05:21 standalone.localdomain podman[115030]: 2025-10-13 14:05:21.971535904 +0000 UTC m=+0.096263978 container exec 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, version=17.1.9, name=rhosp17/openstack-keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-keystone-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, container_name=keystone, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 14:05:22 standalone.localdomain podman[115030]: 2025-10-13 14:05:22.00243326 +0000 UTC m=+0.127161324 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., container_name=keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_id=tripleo_step3, vcs-type=git, version=17.1.9, release=1, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, name=rhosp17/openstack-keystone, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, com.redhat.component=openstack-keystone-container, io.openshift.expose-services=, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', 
'/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:05:22 standalone.localdomain podman[115087]: 2025-10-13 14:05:22.057448398 +0000 UTC m=+0.090061333 container create 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, build-date=2025-07-21T13:58:12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, com.redhat.component=openstack-placement-api-container, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-placement-api, description=Red Hat OpenStack Platform 17.1 placement-api, config_id=tripleo_step4, container_name=placement_api, summary=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12)
Oct 13 14:05:22 standalone.localdomain systemd[1]: Started libpod-conmon-53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.scope.
Oct 13 14:05:22 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:22 standalone.localdomain podman[115134]: 2025-10-13 14:05:22.102183493 +0000 UTC m=+0.081442415 container create c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245)
Oct 13 14:05:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e410b4438f0c300953cb1d70d8d493e57beab26e7850ef1c9e5f40f4d7c5012/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e410b4438f0c300953cb1d70d8d493e57beab26e7850ef1c9e5f40f4d7c5012/merged/var/log/placement supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:22 standalone.localdomain podman[115087]: 2025-10-13 14:05:22.019822508 +0000 UTC m=+0.052435443 image pull  registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1
Oct 13 14:05:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:05:22 standalone.localdomain podman[115087]: 2025-10-13 14:05:22.124941459 +0000 UTC m=+0.157554404 container init 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, container_name=placement_api, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-07-21T13:58:12, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, name=rhosp17/openstack-placement-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', 
'/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, release=1, distribution-scope=public, com.redhat.component=openstack-placement-api-container)
Oct 13 14:05:22 standalone.localdomain podman[115141]: 2025-10-13 14:05:22.12587267 +0000 UTC m=+0.094743497 container create bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, distribution-scope=public, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, release=1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 14:05:22 standalone.localdomain sudo[115177]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:05:22 standalone.localdomain sudo[115177]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:22 standalone.localdomain systemd[1]: Started libpod-conmon-c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.scope.
Oct 13 14:05:22 standalone.localdomain podman[115087]: 2025-10-13 14:05:22.151149279 +0000 UTC m=+0.183762214 container start 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, managed_by=tripleo_ansible, name=rhosp17/openstack-placement-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, container_name=placement_api, release=1, com.redhat.component=openstack-placement-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', 
'/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, tcib_managed=true)
Oct 13 14:05:22 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:22 standalone.localdomain podman[115134]: 2025-10-13 14:05:22.063722806 +0000 UTC m=+0.042981728 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Oct 13 14:05:22 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name placement_api --conmon-pidfile /run/placement_api.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=2c5d4de4e0570c35257b2db1f73cb503 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=placement_api --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/placement_api.log --network host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/placement:/var/log/placement:z --volume /var/log/containers/httpd/placement:/var/log/httpd:z --volume /var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1
Oct 13 14:05:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2c307638fd1cbbca6e0bda1d22f05adab63cc3db84a8ec2dbb12be5e361693/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2c307638fd1cbbca6e0bda1d22f05adab63cc3db84a8ec2dbb12be5e361693/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2c307638fd1cbbca6e0bda1d22f05adab63cc3db84a8ec2dbb12be5e361693/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:22 standalone.localdomain podman[115141]: 2025-10-13 14:05:22.067323136 +0000 UTC m=+0.036193973 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Oct 13 14:05:22 standalone.localdomain systemd[1]: Started libpod-conmon-bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.scope.
Oct 13 14:05:22 standalone.localdomain runuser[114886]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:05:22 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82bb009abdf7407c55d58243c3d7fa70c1d552bab8d3e6d4259a3c852e69c497/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82bb009abdf7407c55d58243c3d7fa70c1d552bab8d3e6d4259a3c852e69c497/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82bb009abdf7407c55d58243c3d7fa70c1d552bab8d3e6d4259a3c852e69c497/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:05:22 standalone.localdomain podman[115134]: 2025-10-13 14:05:22.207899285 +0000 UTC m=+0.187158217 container init c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, release=1, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44)
Oct 13 14:05:22 standalone.localdomain runuser[115210]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:05:22 standalone.localdomain sudo[115177]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:05:22 standalone.localdomain podman[115134]: 2025-10-13 14:05:22.24509011 +0000 UTC m=+0.224349032 container start c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T13:28:44, tcib_managed=true, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=)
Oct 13 14:05:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:05:22 standalone.localdomain systemd-logind[45629]: Existing logind session ID 25 used by new audit session, ignoring.
Oct 13 14:05:22 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Oct 13 14:05:22 standalone.localdomain podman[115141]: 2025-10-13 14:05:22.252380052 +0000 UTC m=+0.221250929 container init bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, distribution-scope=public, container_name=ovn_metadata_agent, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, io.buildah.version=1.33.12, release=1, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible)
Oct 13 14:05:22 standalone.localdomain systemd[1]: Started Session c8 of User root.
Oct 13 14:05:22 standalone.localdomain sudo[115248]:  neutron : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:22 standalone.localdomain sudo[115248]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42435)
Oct 13 14:05:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:05:22 standalone.localdomain podman[115141]: 2025-10-13 14:05:22.29898775 +0000 UTC m=+0.267858587 container start bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Oct 13 14:05:22 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bfea4b567a7178c2bf424bc40994d7e4 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Oct 13 14:05:22 standalone.localdomain podman[115179]: 2025-10-13 14:05:22.32036025 +0000 UTC m=+0.165687874 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=starting, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, container_name=placement_api, distribution-scope=public, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.component=openstack-placement-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T13:58:12, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-placement-api, architecture=x86_64, maintainer=OpenStack TripleO Team)
Oct 13 14:05:22 standalone.localdomain sudo[115248]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:22 standalone.localdomain systemd[1]: session-c8.scope: Deactivated successfully.
Oct 13 14:05:22 standalone.localdomain kernel: device br-int entered promiscuous mode
Oct 13 14:05:22 standalone.localdomain NetworkManager[5962]: <info>  [1760364322.4410] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11)
Oct 13 14:05:22 standalone.localdomain systemd-udevd[115365]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 14:05:22 standalone.localdomain podman[115278]: 2025-10-13 14:05:22.462695437 +0000 UTC m=+0.164301908 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., release=1, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1)
Oct 13 14:05:22 standalone.localdomain podman[115238]: 2025-10-13 14:05:22.428101328 +0000 UTC m=+0.176741811 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, release=1, batch=17.1_20250721.1, managed_by=tripleo_ansible)
Oct 13 14:05:22 standalone.localdomain haproxy[70940]: 172.17.0.100:60419 [13/Oct/2025:14:05:18.823] mysql mysql/standalone.internalapi.localdomain 1/0/3663 71796 -- 29/23/22/22/0 0/0
Oct 13 14:05:22 standalone.localdomain podman[115278]: 2025-10-13 14:05:22.510743883 +0000 UTC m=+0.212350344 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Oct 13 14:05:22 standalone.localdomain podman[115278]: unhealthy
Oct 13 14:05:22 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:22 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:05:22 standalone.localdomain podman[115238]: 2025-10-13 14:05:22.5624549 +0000 UTC m=+0.311095383 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:05:22 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:05:22 standalone.localdomain ceph-mon[29756]: pgmap v907: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:22 standalone.localdomain podman[115562]: 2025-10-13 14:05:22.921833487 +0000 UTC m=+0.076205583 container create 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, build-date=2025-07-21T13:58:20, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, release=1, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_cron, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-glance-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1)
Oct 13 14:05:22 standalone.localdomain podman[115571]: 2025-10-13 14:05:22.960535132 +0000 UTC m=+0.105156313 container create a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T16:05:11, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, summary=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, name=rhosp17/openstack-nova-api, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, container_name=nova_metadata, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:05:22 standalone.localdomain systemd[1]: Started libpod-conmon-21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.scope.
Oct 13 14:05:22 standalone.localdomain podman[115562]: 2025-10-13 14:05:22.884081683 +0000 UTC m=+0.038453789 image pull  registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started libpod-conmon-a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.scope.
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ccaca48a76da707f9da9049cdbb0b9663c20b38f1eca84b44474ecd1a130ba/merged/var/lib/glance supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ccaca48a76da707f9da9049cdbb0b9663c20b38f1eca84b44474ecd1a130ba/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ccaca48a76da707f9da9049cdbb0b9663c20b38f1eca84b44474ecd1a130ba/merged/var/log/glance supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ccaca48a76da707f9da9049cdbb0b9663c20b38f1eca84b44474ecd1a130ba/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain podman[115571]: 2025-10-13 14:05:22.911441142 +0000 UTC m=+0.056062343 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 14:05:23 standalone.localdomain podman[115582]: 2025-10-13 14:05:23.017061 +0000 UTC m=+0.152613661 container create c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T13:58:20, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, container_name=glance_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, release=1, name=rhosp17/openstack-glance-api)
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:23 standalone.localdomain podman[115596]: 2025-10-13 14:05:22.931114415 +0000 UTC m=+0.047119086 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 14:05:23 standalone.localdomain podman[115582]: 2025-10-13 14:05:22.93576048 +0000 UTC m=+0.071313141 image pull  registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87f962b5d36e9727ad9caa16c1660fa93b6a87798b880bf239662f5eb69d8c8/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87f962b5d36e9727ad9caa16c1660fa93b6a87798b880bf239662f5eb69d8c8/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:05:23 standalone.localdomain podman[115562]: 2025-10-13 14:05:23.038270254 +0000 UTC m=+0.192642360 container init 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, 
io.openshift.expose-services=, com.redhat.component=openstack-glance-api-container, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron)
Oct 13 14:05:23 standalone.localdomain podman[115596]: 2025-10-13 14:05:23.045560916 +0000 UTC m=+0.161565567 container create 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T16:05:11, name=rhosp17/openstack-nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, container_name=nova_api, 
io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.component=openstack-nova-api-container, distribution-scope=public, tcib_managed=true)
Oct 13 14:05:23 standalone.localdomain sudo[115643]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:23 standalone.localdomain sudo[115643]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:05:23 standalone.localdomain podman[115602]: 2025-10-13 14:05:22.955452264 +0000 UTC m=+0.058946059 image pull  registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1
Oct 13 14:05:23 standalone.localdomain sudo[115643]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started libpod-conmon-c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.scope.
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6dc651db1dd688436f3130013ee90dcee561a6fcd9f4951c6044c10caca8f67/merged/var/log/glance supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6dc651db1dd688436f3130013ee90dcee561a6fcd9f4951c6044c10caca8f67/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6dc651db1dd688436f3130013ee90dcee561a6fcd9f4951c6044c10caca8f67/merged/var/lib/glance supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6dc651db1dd688436f3130013ee90dcee561a6fcd9f4951c6044c10caca8f67/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6dc651db1dd688436f3130013ee90dcee561a6fcd9f4951c6044c10caca8f67/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain podman[115562]: 2025-10-13 14:05:23.076916697 +0000 UTC m=+0.231288813 container start 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.component=openstack-glance-api-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, version=17.1.9, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=glance_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api)
Oct 13 14:05:23 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name glance_api_cron --conmon-pidfile /run/glance_api_cron.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=20647e333af6a74e07ef3325107e31dd --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron glance --label config_id=tripleo_step4 --label container_name=glance_api_cron --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/glance_api_cron.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/glance:/var/log/glance:z --volume /var/log/containers/httpd/glance:/var/log/httpd:z --volume /var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json --volume /var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro --volume /var/lib/glance:/var/lib/glance:shared registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started libpod-conmon-934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.scope.
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:05:23
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_data', 'backups', 'manila_metadata', 'volumes', '.mgr', 'images', 'vms']
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:05:23 standalone.localdomain podman[115602]: 2025-10-13 14:05:23.112054194 +0000 UTC m=+0.215547979 container create ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, release=1, container_name=glance_api_internal, name=rhosp17/openstack-glance-api, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, build-date=2025-07-21T13:58:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, architecture=x86_64)
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006dca8b53e2bed08474999b261511727460c03a977d9613a5d06166fa6108c2/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006dca8b53e2bed08474999b261511727460c03a977d9613a5d06166fa6108c2/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain sudo[115643]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:23 standalone.localdomain crond[115642]: (CRON) STARTUP (1.5.7)
Oct 13 14:05:23 standalone.localdomain crond[115642]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 13 14:05:23 standalone.localdomain crond[115642]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 36% if used.)
Oct 13 14:05:23 standalone.localdomain crond[115642]: (CRON) INFO (running with inotify support)
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started libpod-conmon-ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.scope.
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:05:23 standalone.localdomain podman[115582]: 2025-10-13 14:05:23.149569491 +0000 UTC m=+0.285122172 container init c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-glance-api-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, name=rhosp17/openstack-glance-api, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc.)
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172e54c564f32af96811ba336bd9fe2affd9d6300f093bed683876643f1afee7/merged/var/log/glance supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172e54c564f32af96811ba336bd9fe2affd9d6300f093bed683876643f1afee7/merged/var/lib/glance supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172e54c564f32af96811ba336bd9fe2affd9d6300f093bed683876643f1afee7/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172e54c564f32af96811ba336bd9fe2affd9d6300f093bed683876643f1afee7/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172e54c564f32af96811ba336bd9fe2affd9d6300f093bed683876643f1afee7/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:05:23 standalone.localdomain podman[115596]: 2025-10-13 14:05:23.164577469 +0000 UTC m=+0.280582120 container init 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, name=rhosp17/openstack-nova-api, vcs-type=git, build-date=2025-07-21T16:05:11, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, summary=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, container_name=nova_api, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., release=1)
Oct 13 14:05:23 standalone.localdomain sudo[115684]:   glance : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:23 standalone.localdomain sudo[115684]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:05:23 standalone.localdomain sudo[115684]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42415)
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:05:23 standalone.localdomain sudo[115689]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:23 standalone.localdomain sudo[115689]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:05:23 standalone.localdomain sudo[115689]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:05:23 standalone.localdomain podman[115571]: 2025-10-13 14:05:23.202815999 +0000 UTC m=+0.347437180 container init a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_metadata, tcib_managed=true, name=rhosp17/openstack-nova-api, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T16:05:11, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-api-container, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team)
Oct 13 14:05:23 standalone.localdomain podman[115582]: 2025-10-13 14:05:23.20314354 +0000 UTC m=+0.338696201 container start c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-glance-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, tcib_managed=true, build-date=2025-07-21T13:58:20)
Oct 13 14:05:23 standalone.localdomain podman[115602]: 2025-10-13 14:05:23.205661273 +0000 UTC m=+0.309155058 container init ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, summary=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, 
version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=glance_api_internal, release=1)
Oct 13 14:05:23 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name glance_api --conmon-pidfile /run/glance_api.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=glance_api --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/glance_api.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/glance:/var/log/glance:z --volume /var/log/containers/httpd/glance:/var/log/httpd:z --volume /var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json --volume /var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/glance:/var/lib/glance:shared registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:05:23 standalone.localdomain podman[115596]: 2025-10-13 14:05:23.221755028 +0000 UTC m=+0.337759669 container start 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-api, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, build-date=2025-07-21T16:05:11, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, 
com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-api-container, batch=17.1_20250721.1, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api)
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:05:23 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_api --conmon-pidfile /run/nova_api.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ff6c887813d25a6bfef54f8920eca651 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_api --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_api.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova:z --volume /var/log/containers/httpd/nova-api:/var/log/httpd:z --volume /var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:05:23 standalone.localdomain sudo[115684]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:23 standalone.localdomain podman[115602]: 2025-10-13 14:05:23.245895919 +0000 UTC m=+0.349389714 container start ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, release=1, vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=glance_api_internal, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, 
description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, distribution-scope=public, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, config_id=tripleo_step4)
Oct 13 14:05:23 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name glance_api_internal --conmon-pidfile /run/glance_api_internal.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=glance_api_internal --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/glance_api_internal.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume 
/etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/glance:/var/log/glance:z --volume /var/log/containers/httpd/glance:/var/log/httpd:z --volume /var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json --volume /var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/glance:/var/lib/glance:shared registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1
Oct 13 14:05:23 standalone.localdomain sudo[115713]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:23 standalone.localdomain sudo[115713]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:05:23 standalone.localdomain sudo[115713]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:23 standalone.localdomain sudo[115714]:   glance : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:23 standalone.localdomain sudo[115714]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:05:23 standalone.localdomain sudo[115714]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42415)
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:05:23 standalone.localdomain sudo[115689]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:23 standalone.localdomain podman[115695]: 2025-10-13 14:05:23.318695227 +0000 UTC m=+0.117022338 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=starting, distribution-scope=public, batch=17.1_20250721.1, name=rhosp17/openstack-glance-api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, container_name=glance_api, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-glance-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20)
Oct 13 14:05:23 standalone.localdomain podman[115571]: 2025-10-13 14:05:23.336093865 +0000 UTC m=+0.480715046 container start a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, name=rhosp17/openstack-nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T16:05:11, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_metadata, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true)
Oct 13 14:05:23 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_metadata --conmon-pidfile /run/nova_metadata.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=512448c809be25559cd4e0a76027bdf9 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_metadata --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_metadata.log --network host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova:z --volume /var/log/containers/httpd/nova-metadata:/var/log/httpd:z --volume /var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 14:05:23 standalone.localdomain sudo[115714]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:23 standalone.localdomain sudo[115713]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:23 standalone.localdomain podman[115650]: 2025-10-13 14:05:23.217392443 +0000 UTC m=+0.135773760 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=starting, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.component=openstack-glance-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, container_name=glance_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 14:05:23 standalone.localdomain podman[115179]: 2025-10-13 14:05:23.381141622 +0000 UTC m=+1.226469246 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-placement-api-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-placement-api, build-date=2025-07-21T13:58:12, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 placement-api, container_name=placement_api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 placement-api)
Oct 13 14:05:23 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:05:23 standalone.localdomain podman[115755]: 2025-10-13 14:05:23.396405739 +0000 UTC m=+0.129681178 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=starting, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, container_name=nova_metadata, 
com.redhat.component=openstack-nova-api-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:05:11, architecture=x86_64, tcib_managed=true, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4)
Oct 13 14:05:23 standalone.localdomain haproxy[70940]: Server nova_novncproxy/standalone.internalapi.localdomain is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:05:23 standalone.localdomain runuser[115210]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:05:23 standalone.localdomain podman[115755]: 2025-10-13 14:05:23.441438664 +0000 UTC m=+0.174714103 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, build-date=2025-07-21T16:05:11, io.openshift.expose-services=, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, version=17.1.9, name=rhosp17/openstack-nova-api, 
container_name=nova_metadata, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-api, release=1, maintainer=OpenStack TripleO Team)
Oct 13 14:05:23 standalone.localdomain podman[115755]: unhealthy
Oct 13 14:05:23 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:23 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Failed with result 'exit-code'.
Oct 13 14:05:23 standalone.localdomain podman[115650]: 2025-10-13 14:05:23.503550487 +0000 UTC m=+0.421931824 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, vendor=Red Hat, Inc., vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', 
'/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=glance_api_cron, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, version=17.1.9)
Oct 13 14:05:23 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:05:23 standalone.localdomain podman[115736]: 2025-10-13 14:05:23.528359801 +0000 UTC m=+0.277796007 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=starting, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, release=1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', 
'/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 14:05:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v908: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:23 standalone.localdomain podman[115710]: 2025-10-13 14:05:23.646822895 +0000 UTC m=+0.426527827 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=starting, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.component=openstack-nova-api-container, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api, 
io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, architecture=x86_64)
Oct 13 14:05:23 standalone.localdomain podman[115695]: 2025-10-13 14:05:23.656358453 +0000 UTC m=+0.454685554 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vendor=Red Hat, Inc., 
architecture=x86_64, io.openshift.expose-services=, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, container_name=glance_api)
Oct 13 14:05:23 standalone.localdomain podman[115695]: unhealthy
Oct 13 14:05:23 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:23 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Failed with result 'exit-code'.
Oct 13 14:05:23 standalone.localdomain podman[115736]: 2025-10-13 14:05:23.733759023 +0000 UTC m=+0.483195219 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, config_id=tripleo_step4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, 
container_name=glance_api_internal, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible)
Oct 13 14:05:23 standalone.localdomain podman[115736]: unhealthy
Oct 13 14:05:23 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:23 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Failed with result 'exit-code'.
Oct 13 14:05:23 standalone.localdomain podman[116038]: 2025-10-13 14:05:23.884752138 +0000 UTC m=+0.063273442 container create 084a170dba311b76d3d20edb05518b0e436fd137029f1f41fde40d3f4b6f5e59 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_wait_for_service, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_wait_for_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, summary=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:58:12, name=rhosp17/openstack-placement-api, release=1, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, 
container_name=placement_wait_for_service, tcib_managed=true, com.redhat.component=openstack-placement-api-container, version=17.1.9, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, config_id=tripleo_step4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started libpod-conmon-084a170dba311b76d3d20edb05518b0e436fd137029f1f41fde40d3f4b6f5e59.scope.
Oct 13 14:05:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a0c56f56276045e0525efb95a9b4d54690d2d3f7b572370d173646d1ca4046/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a0c56f56276045e0525efb95a9b4d54690d2d3f7b572370d173646d1ca4046/merged/var/log/placement supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a0c56f56276045e0525efb95a9b4d54690d2d3f7b572370d173646d1ca4046/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:23 standalone.localdomain podman[116038]: 2025-10-13 14:05:23.934434318 +0000 UTC m=+0.112955612 container init 084a170dba311b76d3d20edb05518b0e436fd137029f1f41fde40d3f4b6f5e59 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_wait_for_service, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_wait_for_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, release=1, description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=placement_wait_for_service, name=rhosp17/openstack-placement-api, build-date=2025-07-21T13:58:12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.component=openstack-placement-api-container, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 14:05:23 standalone.localdomain podman[116038]: 2025-10-13 14:05:23.942783985 +0000 UTC m=+0.121305289 container start 084a170dba311b76d3d20edb05518b0e436fd137029f1f41fde40d3f4b6f5e59 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_wait_for_service, distribution-scope=public, container_name=placement_wait_for_service, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, description=Red Hat OpenStack Platform 17.1 placement-api, name=rhosp17/openstack-placement-api, io.openshift.expose-services=, release=1, vcs-type=git, com.redhat.component=openstack-placement-api-container, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_wait_for_service.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, summary=Red Hat OpenStack Platform 17.1 placement-api, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, managed_by=tripleo_ansible)
Oct 13 14:05:23 standalone.localdomain podman[116038]: 2025-10-13 14:05:23.94321665 +0000 UTC m=+0.121737954 container attach 084a170dba311b76d3d20edb05518b0e436fd137029f1f41fde40d3f4b6f5e59 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_wait_for_service, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-placement-api-container, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-placement-api, release=1, maintainer=OpenStack TripleO Team, container_name=placement_wait_for_service, build-date=2025-07-21T13:58:12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_wait_for_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']})
Oct 13 14:05:23 standalone.localdomain sudo[116067]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:23 standalone.localdomain podman[116038]: 2025-10-13 14:05:23.8553007 +0000 UTC m=+0.033822014 image pull  registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1
Oct 13 14:05:23 standalone.localdomain sudo[116067]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:23 standalone.localdomain podman[116047]: 2025-10-13 14:05:23.970333941 +0000 UTC m=+0.110475391 container create 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, version=17.1.9, distribution-scope=public, container_name=swift_proxy, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-proxy-server-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, 
tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_id=tripleo_step4, name=rhosp17/openstack-swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:05:24 standalone.localdomain systemd[1]: Started libpod-conmon-84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.scope.
Oct 13 14:05:24 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78170edbfbc1c4123acff74d0c223c6ed428518291f85e84a08885022922a1b7/merged/srv/node supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:24 standalone.localdomain sudo[116067]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78170edbfbc1c4123acff74d0c223c6ed428518291f85e84a08885022922a1b7/merged/var/log/swift supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:24 standalone.localdomain podman[116047]: 2025-10-13 14:05:23.937007204 +0000 UTC m=+0.077148694 image pull  registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1
Oct 13 14:05:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:05:24 standalone.localdomain podman[116047]: 2025-10-13 14:05:24.040350096 +0000 UTC m=+0.180491546 container init 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, name=rhosp17/openstack-swift-proxy-server, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=swift_proxy, release=1, com.redhat.component=openstack-swift-proxy-server-container, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:05:24 standalone.localdomain sudo[116082]:    swift : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:05:24 standalone.localdomain sudo[116082]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42445)
Oct 13 14:05:24 standalone.localdomain podman[116047]: 2025-10-13 14:05:24.07027975 +0000 UTC m=+0.210421220 container start 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-proxy-server-container, config_id=tripleo_step4, version=17.1.9, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, io.openshift.expose-services=, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, vcs-type=git, container_name=swift_proxy, maintainer=OpenStack TripleO Team)
Oct 13 14:05:24 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name swift_proxy --conmon-pidfile /run/swift_proxy.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d8b7706e98ee5ba7e1c4d758abf97545 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=swift_proxy --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/swift_proxy.log --network host --user swift --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro 
--volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro --volume /srv/node:/srv/node --volume /dev:/dev --volume /var/log/containers/swift:/var/log/swift:z registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1
Oct 13 14:05:24 standalone.localdomain su[116097]: (to placement) root on none
Oct 13 14:05:24 standalone.localdomain su[116097]: pam_unix(su:session): session opened for user placement(uid=998) by (uid=0)
Oct 13 14:05:24 standalone.localdomain su[116097]: pam_lastlog(su:session): file /var/log/lastlog created
Oct 13 14:05:24 standalone.localdomain su[116097]: pam_lastlog(su:session): unable to open /var/log/btmp: No such file or directory
Oct 13 14:05:24 standalone.localdomain sudo[116082]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:24 standalone.localdomain podman[116083]: 2025-10-13 14:05:24.177536883 +0000 UTC m=+0.110948827 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=starting, container_name=swift_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-swift-proxy-server, batch=17.1_20250721.1, 
build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, managed_by=tripleo_ansible, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 14:05:24 standalone.localdomain sudo[116128]:  neutron : PWD=/ ; USER=root ; COMMAND=/usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --privsep_context neutron.privileged.default --privsep_sock_path /tmp/tmpi0t_zvcp/privsep.sock
Oct 13 14:05:24 standalone.localdomain sudo[116128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42435)
Oct 13 14:05:24 standalone.localdomain podman[116083]: 2025-10-13 14:05:24.333990009 +0000 UTC m=+0.267401943 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/log/containers/swift:/var/log/swift:z']}, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:05:24 standalone.localdomain podman[116083]: unhealthy
Oct 13 14:05:24 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:24 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Failed with result 'exit-code'.
Oct 13 14:05:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:05:24 standalone.localdomain ceph-mon[29756]: pgmap v908: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:24 standalone.localdomain kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 13 14:05:24 standalone.localdomain proxy-server[116081]: Starting Keystone auth_token middleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116081]: AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True.
Oct 13 14:05:24 standalone.localdomain sudo[116128]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: Starting Keystone auth_token middleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "bind_port" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "workers" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "user" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "bind_ip" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "log_name" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "log_facility" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "log_level" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "log_headers" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "log_address" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "project_name" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "username" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "password" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "auth_url" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "signing_dir" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "project_domain_id" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "user_domain_id" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: STDERR: The option "__name__" is not known to keystonemiddleware
Oct 13 14:05:24 standalone.localdomain proxy-server[116182]: AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True.
Oct 13 14:05:24 standalone.localdomain proxy-server[116081]: Started child 20 from parent 2
Oct 13 14:05:24 standalone.localdomain haproxy[70940]: 172.17.0.100:36403 [13/Oct/2025:14:01:58.892] mysql mysql/standalone.internalapi.localdomain 1/0/206092 2627801 -- 33/26/25/25/0 0/0
Oct 13 14:05:25 standalone.localdomain haproxy[70940]: Server neutron/standalone.internalapi.localdomain is UP, reason: Layer7 check passed, code: 200, check duration: 5ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:05:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v909: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:25 standalone.localdomain haproxy[70940]: Server placement/standalone.internalapi.localdomain is UP, reason: Layer7 check passed, code: 200, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:05:25 standalone.localdomain podman[115710]: 2025-10-13 14:05:25.870014065 +0000 UTC m=+2.649719007 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.expose-services=, build-date=2025-07-21T16:05:11, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, container_name=nova_api, description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, name=rhosp17/openstack-nova-api, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 nova-api, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:05:25 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:05:26 standalone.localdomain haproxy[70940]: 172.17.0.2:44296 [13/Oct/2025:14:05:24.538] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1960/1960 201 8106 - - ---- 37/4/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:05:26 standalone.localdomain haproxy[70940]: 172.17.0.2:44296 [13/Oct/2025:14:05:26.503] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2/2 300 515 - - ---- 37/4/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:05:26 standalone.localdomain haproxy[70940]: 172.17.0.2:44296 [13/Oct/2025:14:05:26.507] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/89/89 200 560 - - ---- 37/4/0/0/0 0/0 "GET /v3/services?name=placement HTTP/1.1"
Oct 13 14:05:26 standalone.localdomain haproxy[70940]: 172.17.0.2:44296 [13/Oct/2025:14:05:26.602] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/23/23 200 710 - - ---- 37/4/0/0/0 0/0 "GET /v3/endpoints?service_id=3816782cfb8e45d7822fb85dd96e83c3&interface=internal&region_id=regionOne HTTP/1.1"
Oct 13 14:05:26 standalone.localdomain haproxy[70940]: 172.17.0.2:34196 [13/Oct/2025:14:05:26.632] placement placement/standalone.internalapi.localdomain 0/0/0/2/2 200 381 - - ---- 38/4/0/0/0 0/0 "GET /placement/ HTTP/1.1"
Oct 13 14:05:26 standalone.localdomain su[116097]: pam_unix(su:session): session closed for user placement
Oct 13 14:05:26 standalone.localdomain systemd[1]: libpod-084a170dba311b76d3d20edb05518b0e436fd137029f1f41fde40d3f4b6f5e59.scope: Deactivated successfully.
Oct 13 14:05:26 standalone.localdomain ceph-mon[29756]: pgmap v909: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:05:26 standalone.localdomain podman[116413]: 2025-10-13 14:05:26.741683416 +0000 UTC m=+0.033877367 container died 084a170dba311b76d3d20edb05518b0e436fd137029f1f41fde40d3f4b6f5e59 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_wait_for_service, distribution-scope=public, vcs-type=git, architecture=x86_64, build-date=2025-07-21T13:58:12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, description=Red Hat OpenStack Platform 17.1 placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, container_name=placement_wait_for_service, vendor=Red Hat, Inc., config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_wait_for_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro', 
'/var/lib/container-config-scripts:/container-config-scripts:z']}, managed_by=tripleo_ansible, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, batch=17.1_20250721.1, com.redhat.component=openstack-placement-api-container, io.buildah.version=1.33.12, name=rhosp17/openstack-placement-api, config_id=tripleo_step4, io.openshift.expose-services=, release=1, tcib_managed=true)
Oct 13 14:05:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0a0c56f56276045e0525efb95a9b4d54690d2d3f7b572370d173646d1ca4046-merged.mount: Deactivated successfully.
Oct 13 14:05:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-084a170dba311b76d3d20edb05518b0e436fd137029f1f41fde40d3f4b6f5e59-userdata-shm.mount: Deactivated successfully.
Oct 13 14:05:26 standalone.localdomain podman[116412]: 2025-10-13 14:05:26.848136701 +0000 UTC m=+0.136772684 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, architecture=x86_64, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, distribution-scope=public, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, container_name=keystone_cron, managed_by=tripleo_ansible, release=1, name=rhosp17/openstack-keystone, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-keystone-container, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:05:26 standalone.localdomain podman[116412]: 2025-10-13 14:05:26.859829309 +0000 UTC m=+0.148465332 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, build-date=2025-07-21T13:27:18, version=17.1.9, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, container_name=keystone_cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, 
config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.component=openstack-keystone-container, release=1, name=rhosp17/openstack-keystone, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:05:26 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:05:26 standalone.localdomain podman[116413]: 2025-10-13 14:05:26.915197089 +0000 UTC m=+0.207391030 container cleanup 084a170dba311b76d3d20edb05518b0e436fd137029f1f41fde40d3f4b6f5e59 (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_wait_for_service, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=placement_wait_for_service, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, batch=17.1_20250721.1, name=rhosp17/openstack-placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_wait_for_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, distribution-scope=public, 
config_id=tripleo_step4, build-date=2025-07-21T13:58:12, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-placement-api-container, description=Red Hat OpenStack Platform 17.1 placement-api, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, tcib_managed=true)
Oct 13 14:05:26 standalone.localdomain systemd[1]: libpod-conmon-084a170dba311b76d3d20edb05518b0e436fd137029f1f41fde40d3f4b6f5e59.scope: Deactivated successfully.
Oct 13 14:05:26 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name placement_wait_for_service --conmon-pidfile /run/placement_wait_for_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=2c5d4de4e0570c35257b2db1f73cb503 --label config_id=tripleo_step4 --label container_name=placement_wait_for_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api_wait_for_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/placement_wait_for_service.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/placement:/var/log/placement:z --volume /var/log/containers/httpd/placement:/var/log/httpd:z --volume /var/lib/kolla/config_files/placement_api_wait_for_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1
Oct 13 14:05:27 standalone.localdomain podman[116551]: 2025-10-13 14:05:27.365984551 +0000 UTC m=+0.067580126 container create 9d920186a9daead738b7368217b8813c98ac1b0778fc986eb1bb21fa49950ebf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_wait_for_api_service, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_wait_for_api_service, release=1, description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:05:11, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, tcib_managed=true, com.redhat.component=openstack-nova-api-container, name=rhosp17/openstack-nova-api, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_wait_for_api_service.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, batch=17.1_20250721.1)
Oct 13 14:05:27 standalone.localdomain systemd[1]: Started libpod-conmon-9d920186a9daead738b7368217b8813c98ac1b0778fc986eb1bb21fa49950ebf.scope.
Oct 13 14:05:27 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:27 standalone.localdomain podman[116551]: 2025-10-13 14:05:27.328692142 +0000 UTC m=+0.030287687 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 14:05:27 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a016b8ee5ebe6d708ab6632d2306c689124fcec83b86e66eb34d55977e3e3b6e/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:27 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a016b8ee5ebe6d708ab6632d2306c689124fcec83b86e66eb34d55977e3e3b6e/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:27 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a016b8ee5ebe6d708ab6632d2306c689124fcec83b86e66eb34d55977e3e3b6e/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:27 standalone.localdomain podman[116551]: 2025-10-13 14:05:27.444595372 +0000 UTC m=+0.146190937 container init 9d920186a9daead738b7368217b8813c98ac1b0778fc986eb1bb21fa49950ebf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_wait_for_api_service, batch=17.1_20250721.1, name=rhosp17/openstack-nova-api, container_name=nova_wait_for_api_service, build-date=2025-07-21T16:05:11, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_wait_for_api_service.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, version=17.1.9, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=)
Oct 13 14:05:27 standalone.localdomain podman[116551]: 2025-10-13 14:05:27.455463553 +0000 UTC m=+0.157059118 container start 9d920186a9daead738b7368217b8813c98ac1b0778fc986eb1bb21fa49950ebf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_wait_for_api_service, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, vcs-type=git, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_wait_for_api_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, container_name=nova_wait_for_api_service, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, build-date=2025-07-21T16:05:11, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-api-container, release=1)
Oct 13 14:05:27 standalone.localdomain podman[116551]: 2025-10-13 14:05:27.455718461 +0000 UTC m=+0.157314026 container attach 9d920186a9daead738b7368217b8813c98ac1b0778fc986eb1bb21fa49950ebf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_wait_for_api_service, batch=17.1_20250721.1, name=rhosp17/openstack-nova-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:05:11, release=1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_wait_for_api_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, 
io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, container_name=nova_wait_for_api_service, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:05:27 standalone.localdomain sudo[116571]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:27 standalone.localdomain sudo[116571]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:05:27 standalone.localdomain sudo[116571]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v910: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:27 standalone.localdomain sudo[116571]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:27 standalone.localdomain su[116576]: (to nova) root on none
Oct 13 14:05:27 standalone.localdomain su[116576]: pam_systemd(su:session): Failed to connect to system bus: No such file or directory
Oct 13 14:05:27 standalone.localdomain su[116576]: pam_unix(su:session): session opened for user nova(uid=42436) by (uid=0)
Oct 13 14:05:27 standalone.localdomain haproxy[70940]: Server swift_proxy_server/standalone.storage.localdomain is UP, reason: Layer7 check passed, code: 200, check duration: 1ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:05:27 standalone.localdomain haproxy[70940]: Server nova_osapi/standalone.internalapi.localdomain is UP, reason: Layer7 check passed, code: 200, check duration: 3ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:05:28 standalone.localdomain haproxy[70940]: Server glance_api/standalone.internalapi.localdomain is UP, reason: Layer7 check passed, code: 200, check duration: 5ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:05:28 standalone.localdomain haproxy[70940]: 172.17.0.2:44308 [13/Oct/2025:14:05:28.210] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 200 452 - - ---- 41/4/0/0/0 0/0 "GET /v3 HTTP/1.1"
Oct 13 14:05:28 standalone.localdomain haproxy[70940]: Server glance_api_internal/standalone.internalapi.localdomain is UP, reason: Layer7 check passed, code: 200, check duration: 4ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:05:28 standalone.localdomain haproxy[70940]: 172.17.0.2:44308 [13/Oct/2025:14:05:28.215] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/379/379 201 8104 - - ---- 41/4/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:05:28 standalone.localdomain haproxy[70940]: 172.17.0.2:39856 [13/Oct/2025:14:05:28.600] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/2/2 200 493 - - ---- 42/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:05:28 standalone.localdomain su[116576]: pam_unix(su:session): session closed for user nova
Oct 13 14:05:28 standalone.localdomain systemd[1]: libpod-9d920186a9daead738b7368217b8813c98ac1b0778fc986eb1bb21fa49950ebf.scope: Deactivated successfully.
Oct 13 14:05:28 standalone.localdomain podman[116551]: 2025-10-13 14:05:28.658925904 +0000 UTC m=+1.360521469 container died 9d920186a9daead738b7368217b8813c98ac1b0778fc986eb1bb21fa49950ebf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_wait_for_api_service, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, tcib_managed=true, container_name=nova_wait_for_api_service, release=1, description=Red Hat OpenStack Platform 17.1 nova-api, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_wait_for_api_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f)
Oct 13 14:05:28 standalone.localdomain ceph-mon[29756]: pgmap v910: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d920186a9daead738b7368217b8813c98ac1b0778fc986eb1bb21fa49950ebf-userdata-shm.mount: Deactivated successfully.
Oct 13 14:05:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a016b8ee5ebe6d708ab6632d2306c689124fcec83b86e66eb34d55977e3e3b6e-merged.mount: Deactivated successfully.
Oct 13 14:05:28 standalone.localdomain podman[116588]: 2025-10-13 14:05:28.765462692 +0000 UTC m=+0.092638138 container cleanup 9d920186a9daead738b7368217b8813c98ac1b0778fc986eb1bb21fa49950ebf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_wait_for_api_service, container_name=nova_wait_for_api_service, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-nova-api, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_wait_for_api_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']}, version=17.1.9, 
vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, vcs-type=git, io.openshift.expose-services=, build-date=2025-07-21T16:05:11, description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:05:28 standalone.localdomain systemd[1]: libpod-conmon-9d920186a9daead738b7368217b8813c98ac1b0778fc986eb1bb21fa49950ebf.scope: Deactivated successfully.
Oct 13 14:05:28 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_api_service --conmon-pidfile /run/nova_wait_for_api_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=ff6c887813d25a6bfef54f8920eca651 --label config_id=tripleo_step4 --label container_name=nova_wait_for_api_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'start_order': 3, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_wait_for_api_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_api_service.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova:z --volume /var/log/containers/httpd/nova-api:/var/log/httpd:z --volume /var/lib/kolla/config_files/nova_wait_for_api_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 14:05:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:05:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:05:29 standalone.localdomain haproxy[70940]: Server nova_metadata/standalone.internalapi.localdomain is UP, reason: Layer7 check passed, code: 200, check duration: 3ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 13 14:05:29 standalone.localdomain systemd[1]: tmp-crun.vI8FMQ.mount: Deactivated successfully.
Oct 13 14:05:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:05:29 standalone.localdomain podman[116640]: 2025-10-13 14:05:29.079472831 +0000 UTC m=+0.100397445 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9, build-date=2025-07-21T15:36:22, com.redhat.component=openstack-barbican-worker-container, name=rhosp17/openstack-barbican-worker, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step3, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 barbican-worker, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, container_name=barbican_worker, managed_by=tripleo_ansible, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, architecture=x86_64)
Oct 13 14:05:29 standalone.localdomain podman[116639]: 2025-10-13 14:05:29.141741529 +0000 UTC m=+0.161911809 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 barbican-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 
17.1 barbican-api, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-barbican-api-container, version=17.1.9, container_name=barbican_api, distribution-scope=public, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44)
Oct 13 14:05:29 standalone.localdomain podman[116678]: 2025-10-13 14:05:29.215188108 +0000 UTC m=+0.107894713 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, build-date=2025-07-21T16:18:19, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-barbican-keystone-listener, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat 
OpenStack Platform 17.1 barbican-keystone-listener, version=17.1.9, config_id=tripleo_step3, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-barbican-keystone-listener-container, release=1, io.openshift.expose-services=, container_name=barbican_keystone_listener)
Oct 13 14:05:29 standalone.localdomain podman[116640]: 2025-10-13 14:05:29.228283984 +0000 UTC m=+0.249208568 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, name=rhosp17/openstack-barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_worker, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 barbican-worker, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-barbican-worker-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:36:22, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, release=1, config_id=tripleo_step3)
Oct 13 14:05:29 standalone.localdomain podman[116639]: 2025-10-13 14:05:29.234062656 +0000 UTC m=+0.254232976 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, config_id=tripleo_step3, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, container_name=barbican_api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-barbican-api-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T15:22:44, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-barbican-api)
Oct 13 14:05:29 standalone.localdomain podman[116678]: 2025-10-13 14:05:29.241675459 +0000 UTC m=+0.134382004 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, container_name=barbican_keystone_listener, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, 
distribution-scope=public, release=1, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, build-date=2025-07-21T16:18:19, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, com.redhat.component=openstack-barbican-keystone-listener-container, vcs-type=git, tcib_managed=true)
Oct 13 14:05:29 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:05:29 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:05:29 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:05:29 standalone.localdomain podman[116741]: 2025-10-13 14:05:29.447939559 +0000 UTC m=+0.090742425 container create 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, container_name=nova_api_cron, batch=17.1_20250721.1, name=rhosp17/openstack-nova-api, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 14:05:29 standalone.localdomain podman[116741]: 2025-10-13 14:05:29.396296594 +0000 UTC m=+0.039099480 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 14:05:29 standalone.localdomain systemd[1]: Started libpod-conmon-9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.scope.
Oct 13 14:05:29 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954d2e0c0f5814816175699e6bf4259e781f0fc918aa8a9354f4e7f06e62c6b8/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954d2e0c0f5814816175699e6bf4259e781f0fc918aa8a9354f4e7f06e62c6b8/merged/var/log/httpd supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v911: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:05:29 standalone.localdomain podman[116741]: 2025-10-13 14:05:29.562889997 +0000 UTC m=+0.205692853 container init 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, container_name=nova_api_cron, summary=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, vcs-type=git, com.redhat.component=openstack-nova-api-container, io.buildah.version=1.33.12, io.openshift.expose-services=, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:05:11, release=1)
Oct 13 14:05:29 standalone.localdomain sudo[116761]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:29 standalone.localdomain sudo[116761]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:05:29 standalone.localdomain sudo[116761]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:05:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:05:29 standalone.localdomain podman[116741]: 2025-10-13 14:05:29.611130379 +0000 UTC m=+0.253933235 container start 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T16:05:11, distribution-scope=public, container_name=nova_api_cron, com.redhat.component=openstack-nova-api-container, release=1, vendor=Red Hat, Inc.)
Oct 13 14:05:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:05:29 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_api_cron --conmon-pidfile /run/nova_api_cron.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ff6c887813d25a6bfef54f8920eca651 --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron nova --label config_id=tripleo_step4 --label container_name=nova_api_cron --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_api_cron.log --network host --privileged=False --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova:z --volume /var/log/containers/httpd/nova-api:/var/log/httpd:z --volume /var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1
Oct 13 14:05:29 standalone.localdomain sudo[116761]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:29 standalone.localdomain crond[116760]: (CRON) STARTUP (1.5.7)
Oct 13 14:05:29 standalone.localdomain crond[116760]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 13 14:05:29 standalone.localdomain crond[116760]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 93% if used.)
Oct 13 14:05:29 standalone.localdomain crond[116760]: (CRON) INFO (running with inotify support)
Oct 13 14:05:29 standalone.localdomain podman[116762]: 2025-10-13 14:05:29.717669977 +0000 UTC m=+0.101290015 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=starting, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_api_cron, com.redhat.component=openstack-nova-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-api, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, version=17.1.9, build-date=2025-07-21T16:05:11, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, distribution-scope=public)
Oct 13 14:05:29 standalone.localdomain podman[116762]: 2025-10-13 14:05:29.752037089 +0000 UTC m=+0.135657147 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, tcib_managed=true, distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, io.openshift.expose-services=, 
managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.buildah.version=1.33.12)
Oct 13 14:05:29 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:05:30 standalone.localdomain podman[116897]: 2025-10-13 14:05:30.223612271 +0000 UTC m=+0.120553105 container create 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, com.redhat.component=openstack-neutron-sriov-agent-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, vcs-type=git, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, release=1, 
io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, io.buildah.version=1.33.12, container_name=neutron_sriov_agent, build-date=2025-07-21T16:03:34)
Oct 13 14:05:30 standalone.localdomain podman[116891]: 2025-10-13 14:05:30.23410525 +0000 UTC m=+0.141085077 container create 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-dhcp-agent, io.openshift.expose-services=, release=1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, container_name=neutron_dhcp, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Oct 13 14:05:30 standalone.localdomain podman[116891]: 2025-10-13 14:05:30.145794587 +0000 UTC m=+0.052774454 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1
Oct 13 14:05:30 standalone.localdomain podman[116897]: 2025-10-13 14:05:30.146176549 +0000 UTC m=+0.043117363 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1
Oct 13 14:05:30 standalone.localdomain systemd[1]: Started libpod-conmon-054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.scope.
Oct 13 14:05:30 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:30 standalone.localdomain systemd[1]: Started libpod-conmon-7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.scope.
Oct 13 14:05:30 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c400a0b1febfb43b3487c71ecbd5a108092b42e80cd739860482d3019963fe9e/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:30 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c400a0b1febfb43b3487c71ecbd5a108092b42e80cd739860482d3019963fe9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:30 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c400a0b1febfb43b3487c71ecbd5a108092b42e80cd739860482d3019963fe9e/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:30 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:05:30 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1e79337f55b7749d10f6b1db0bdc54fe96b01e19a9e1616f21af66c8b2090d8/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 14:05:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:05:30 standalone.localdomain podman[116891]: 2025-10-13 14:05:30.365324999 +0000 UTC m=+0.272304906 container init 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-dhcp-agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, container_name=neutron_dhcp, com.redhat.component=openstack-neutron-dhcp-agent-container, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, release=1)
Oct 13 14:05:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:05:30 standalone.localdomain podman[116897]: 2025-10-13 14:05:30.389678357 +0000 UTC m=+0.286619171 container init 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, container_name=neutron_sriov_agent, vcs-type=git, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, build-date=2025-07-21T16:03:34, release=1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-sriov-agent-container, name=rhosp17/openstack-neutron-sriov-agent, version=17.1.9)
Oct 13 14:05:30 standalone.localdomain sudo[116929]:  neutron : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:30 standalone.localdomain sudo[116929]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42435)
Oct 13 14:05:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:05:30 standalone.localdomain sudo[116934]:  neutron : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:05:30 standalone.localdomain podman[116891]: 2025-10-13 14:05:30.415573978 +0000 UTC m=+0.322553805 container start 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-dhcp-agent, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-neutron-dhcp-agent-container, build-date=2025-07-21T16:28:54, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:05:30 standalone.localdomain systemd-logind[45629]: Existing logind session ID 25 used by new audit session, ignoring.
Oct 13 14:05:30 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name neutron_dhcp --cgroupns=host --conmon-pidfile /run/neutron_dhcp.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bfea4b567a7178c2bf424bc40994d7e4 --healthcheck-command /openstack/healthcheck 5672 --label config_id=tripleo_step4 --label container_name=neutron_dhcp --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']} 
--log-driver k8s-file --log-opt path=/var/log/containers/stdouts/neutron_dhcp.log --network host --pid host --privileged=True --security-opt label=disable --ulimit nofile=16384 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1
Oct 13 14:05:30 standalone.localdomain systemd[1]: Started Session c9 of User root.
Oct 13 14:05:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:05:30 standalone.localdomain podman[116897]: 2025-10-13 14:05:30.447369073 +0000 UTC m=+0.344309877 container start 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, build-date=2025-07-21T16:03:34, io.buildah.version=1.33.12, version=17.1.9, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, architecture=x86_64, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, 
com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-sriov-agent-container, config_id=tripleo_step4, name=rhosp17/openstack-neutron-sriov-agent, container_name=neutron_sriov_agent, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1)
Oct 13 14:05:30 standalone.localdomain python3[111958]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name neutron_sriov_agent --conmon-pidfile /run/neutron_sriov_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bfea4b567a7178c2bf424bc40994d7e4 --healthcheck-command /openstack/healthcheck 5672 --label config_id=tripleo_step4 --label container_name=neutron_sriov_agent --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/neutron_sriov_agent.log --network host --pid host --privileged=True --ulimit nofile=16384 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /sys/class/net:/sys/class/net:rw registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1
Oct 13 14:05:30 standalone.localdomain sudo[116934]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42435)
Oct 13 14:05:30 standalone.localdomain sudo[116929]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:30 standalone.localdomain sudo[116934]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:30 standalone.localdomain systemd[1]: session-c9.scope: Deactivated successfully.
Oct 13 14:05:30 standalone.localdomain podman[116942]: 2025-10-13 14:05:30.541989286 +0000 UTC m=+0.088963166 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=starting, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, com.redhat.component=openstack-neutron-sriov-agent-container, build-date=2025-07-21T16:03:34, name=rhosp17/openstack-neutron-sriov-agent, release=1, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, container_name=neutron_sriov_agent, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']})
Oct 13 14:05:30 standalone.localdomain podman[116935]: 2025-10-13 14:05:30.524084341 +0000 UTC m=+0.099856757 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=starting, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vendor=Red Hat, Inc., architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', 
'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, build-date=2025-07-21T16:28:54, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, release=1, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, version=17.1.9, name=rhosp17/openstack-neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-dhcp-agent-container, container_name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:05:30 standalone.localdomain ceph-mon[29756]: pgmap v911: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:30 standalone.localdomain podman[116935]: 2025-10-13 14:05:30.659932873 +0000 UTC m=+0.235705359 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, com.redhat.component=openstack-neutron-dhcp-agent-container, name=rhosp17/openstack-neutron-dhcp-agent, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T16:28:54)
Oct 13 14:05:30 standalone.localdomain podman[116935]: unhealthy
Oct 13 14:05:30 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:30 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Failed with result 'exit-code'.
Oct 13 14:05:30 standalone.localdomain podman[116942]: 2025-10-13 14:05:30.677652631 +0000 UTC m=+0.224626521 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-sriov-agent, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T16:03:34, com.redhat.component=openstack-neutron-sriov-agent-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, container_name=neutron_sriov_agent, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.buildah.version=1.33.12)
Oct 13 14:05:30 standalone.localdomain podman[116942]: unhealthy
Oct 13 14:05:30 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:05:30 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Failed with result 'exit-code'.
Oct 13 14:05:31 standalone.localdomain python3[117047]: ansible-file Invoked with path=/etc/systemd/system/tripleo_cinder_api.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:31 standalone.localdomain python3[117049]: ansible-file Invoked with path=/etc/systemd/system/tripleo_cinder_api_cron.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:31 standalone.localdomain python3[117051]: ansible-file Invoked with path=/etc/systemd/system/tripleo_cinder_scheduler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v912: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:31 standalone.localdomain python3[117053]: ansible-file Invoked with path=/etc/systemd/system/tripleo_glance_api.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:31 standalone.localdomain python3[117055]: ansible-file Invoked with path=/etc/systemd/system/tripleo_glance_api_cron.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:32 standalone.localdomain python3[117065]: ansible-file Invoked with path=/etc/systemd/system/tripleo_glance_api_internal.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:32 standalone.localdomain python3[117067]: ansible-file Invoked with path=/etc/systemd/system/tripleo_heat_api.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:32 standalone.localdomain python3[117070]: ansible-file Invoked with path=/etc/systemd/system/tripleo_heat_api_cfn.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:32 standalone.localdomain ceph-mon[29756]: pgmap v912: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:32 standalone.localdomain python3[117074]: ansible-file Invoked with path=/etc/systemd/system/tripleo_heat_api_cron.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:32 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 14:05:32 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 14:05:32 standalone.localdomain python3[117077]: ansible-file Invoked with path=/etc/systemd/system/tripleo_heat_engine.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:32 standalone.localdomain python3[117079]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:33 standalone.localdomain python3[117090]: ansible-file Invoked with path=/etc/systemd/system/tripleo_manila_api.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:33 standalone.localdomain python3[117092]: ansible-file Invoked with path=/etc/systemd/system/tripleo_manila_api_cron.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:33 standalone.localdomain python3[117094]: ansible-file Invoked with path=/etc/systemd/system/tripleo_manila_scheduler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v913: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:33 standalone.localdomain python3[117096]: ansible-file Invoked with path=/etc/systemd/system/tripleo_neutron_api.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:33 standalone.localdomain python3[117122]: ansible-file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:33 standalone.localdomain python3[117153]: ansible-file Invoked with path=/etc/systemd/system/tripleo_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:34 standalone.localdomain runuser[117168]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:05:34 standalone.localdomain python3[117164]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_api.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:34 standalone.localdomain python3[117214]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:34 standalone.localdomain python3[117216]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:05:34 standalone.localdomain ceph-mon[29756]: pgmap v913: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:34 standalone.localdomain python3[117218]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:34 standalone.localdomain runuser[117168]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:05:34 standalone.localdomain python3[117224]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:34 standalone.localdomain runuser[117247]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:05:35 standalone.localdomain python3[117249]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:35 standalone.localdomain python3[117369]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:35 standalone.localdomain python3[117377]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v914: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:35 standalone.localdomain python3[117379]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:35 standalone.localdomain runuser[117247]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:05:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:05:35 standalone.localdomain runuser[117391]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:05:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:05:35 standalone.localdomain python3[117382]: ansible-file Invoked with path=/etc/systemd/system/tripleo_placement_api.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:35 standalone.localdomain systemd[1]: tmp-crun.CYeGSL.mount: Deactivated successfully.
Oct 13 14:05:35 standalone.localdomain podman[117403]: 2025-10-13 14:05:35.826561153 +0000 UTC m=+0.085673036 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, vcs-type=git, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=clustercheck, name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, version=17.1.9, config_id=tripleo_step2, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:05:35 standalone.localdomain podman[117401]: 2025-10-13 14:05:35.852645539 +0000 UTC m=+0.111049149 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, com.redhat.component=openstack-ovn-northd-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=ovn_cluster_northd, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, config_id=ovn_cluster_northd, name=rhosp17/openstack-ovn-northd, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, release=1, description=Red Hat OpenStack Platform 17.1 ovn-northd, architecture=x86_64, build-date=2025-07-21T13:30:04, summary=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:05:35 standalone.localdomain podman[117401]: 2025-10-13 14:05:35.900107385 +0000 UTC m=+0.158511085 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-ovn-northd, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_cluster_northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04, config_id=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, batch=17.1_20250721.1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, summary=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-ovn-northd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public)
Oct 13 14:05:35 standalone.localdomain podman[117403]: 2025-10-13 14:05:35.91798937 +0000 UTC m=+0.177101233 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, config_id=tripleo_step2, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=clustercheck, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:05:35 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:05:35 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:05:35 standalone.localdomain python3[117467]: ansible-file Invoked with path=/etc/systemd/system/tripleo_swift_account_reaper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:36 standalone.localdomain python3[117509]: ansible-file Invoked with path=/etc/systemd/system/tripleo_swift_account_server.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:36 standalone.localdomain python3[117547]: ansible-file Invoked with path=/etc/systemd/system/tripleo_swift_container_server.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:36 standalone.localdomain runuser[117391]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:05:36 standalone.localdomain python3[117554]: ansible-file Invoked with path=/etc/systemd/system/tripleo_swift_container_updater.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:36 standalone.localdomain ceph-mon[29756]: pgmap v914: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:36 standalone.localdomain python3[117561]: ansible-file Invoked with path=/etc/systemd/system/tripleo_swift_object_expirer.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:36 standalone.localdomain python3[117605]: ansible-file Invoked with path=/etc/systemd/system/tripleo_swift_object_server.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:37 standalone.localdomain python3[117646]: ansible-file Invoked with path=/etc/systemd/system/tripleo_swift_object_updater.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:37 standalone.localdomain python3[117658]: ansible-file Invoked with path=/etc/systemd/system/tripleo_swift_proxy.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:37 standalone.localdomain python3[117660]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_cinder_api_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v915: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:37 standalone.localdomain python3[117662]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_cinder_api_cron_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:37 standalone.localdomain python3[117664]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_cinder_scheduler_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:37 standalone.localdomain python3[117666]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_glance_api_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:38 standalone.localdomain sudo[117667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:05:38 standalone.localdomain sudo[117667]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:05:38 standalone.localdomain sudo[117667]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:38 standalone.localdomain sudo[117692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 14:05:38 standalone.localdomain sudo[117692]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:05:38 standalone.localdomain python3[117689]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_glance_api_cron_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:38 standalone.localdomain python3[117708]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_glance_api_internal_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:38 standalone.localdomain python3[117710]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_heat_api_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:38 standalone.localdomain ceph-mon[29756]: pgmap v915: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:38 standalone.localdomain python3[117739]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_heat_api_cfn_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:38 standalone.localdomain python3[117757]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_heat_api_cron_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:39 standalone.localdomain podman[117790]: 2025-10-13 14:05:39.021425585 +0000 UTC m=+0.090295711 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, CEPH_POINT_RELEASE=, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, name=rhceph, RELEASE=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 14:05:39 standalone.localdomain python3[117787]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_heat_engine_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:39 standalone.localdomain podman[117790]: 2025-10-13 14:05:39.175682318 +0000 UTC m=+0.244552424 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.buildah.version=1.33.12, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, distribution-scope=public, GIT_CLEAN=True, ceph=True, build-date=2025-09-24T08:57:55, version=7)
Oct 13 14:05:39 standalone.localdomain python3[117818]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:39 standalone.localdomain python3[117850]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_manila_api_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v916: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:05:39 standalone.localdomain python3[117890]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_manila_api_cron_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:39 standalone.localdomain sudo[117692]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:05:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:05:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:05:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:05:39 standalone.localdomain sudo[117911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:05:39 standalone.localdomain sudo[117911]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:05:39 standalone.localdomain sudo[117911]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:39 standalone.localdomain python3[117910]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_manila_scheduler_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:39 standalone.localdomain sudo[117926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:05:39 standalone.localdomain sudo[117926]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:05:39 standalone.localdomain python3[117942]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_neutron_api_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:40 standalone.localdomain python3[117944]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:40 standalone.localdomain python3[117969]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_neutron_sriov_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:40 standalone.localdomain sudo[117926]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:05:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:05:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:05:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:05:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:05:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:05:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:05:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:05:40 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 032ada09-3395-4ca7-ba49-7b48d106dcf1 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:05:40 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 032ada09-3395-4ca7-ba49-7b48d106dcf1 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:05:40 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 032ada09-3395-4ca7-ba49-7b48d106dcf1 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:05:40 standalone.localdomain sudo[117989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:05:40 standalone.localdomain sudo[117989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:05:40 standalone.localdomain sudo[117989]: pam_unix(sudo:session): session closed for user root
Oct 13 14:05:40 standalone.localdomain python3[117988]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_api_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:40 standalone.localdomain ceph-mon[29756]: pgmap v916: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:05:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:05:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:05:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:05:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:05:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:05:40 standalone.localdomain python3[118005]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_api_cron_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:40 standalone.localdomain python3[118007]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_conductor_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:41 standalone.localdomain python3[118009]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_metadata_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:41 standalone.localdomain python3[118011]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:41 standalone.localdomain python3[118021]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_scheduler_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:41 standalone.localdomain python3[118023]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v917: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:41 standalone.localdomain python3[118026]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:41 standalone.localdomain python3[118028]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:42 standalone.localdomain python3[118030]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_placement_api_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:42 standalone.localdomain python3[118039]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_swift_account_reaper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:42 standalone.localdomain python3[118042]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_swift_account_server_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:42 standalone.localdomain python3[118046]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_swift_container_server_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:42 standalone.localdomain ceph-mon[29756]: pgmap v917: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:42 standalone.localdomain python3[118048]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_swift_container_updater_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:43 standalone.localdomain python3[118050]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_swift_object_expirer_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:43 standalone.localdomain python3[118059]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_swift_object_server_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:43 standalone.localdomain python3[118062]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_swift_object_updater_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v918: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:43 standalone.localdomain python3[118064]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_swift_proxy_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:05:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:05:43 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 43 completed events
Oct 13 14:05:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:05:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:05:43 standalone.localdomain podman[118068]: 2025-10-13 14:05:43.832256417 +0000 UTC m=+0.094141908 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Oct 13 14:05:43 standalone.localdomain podman[118068]: 2025-10-13 14:05:43.869823314 +0000 UTC m=+0.131708815 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, 
tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:05:43 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:05:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:05:43 standalone.localdomain podman[118145]: 2025-10-13 14:05:43.996408658 +0000 UTC m=+0.086563945 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=keystone, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, com.redhat.component=openstack-keystone-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', 
'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:18, batch=17.1_20250721.1, architecture=x86_64)
Oct 13 14:05:44 standalone.localdomain podman[118145]: 2025-10-13 14:05:44.034012818 +0000 UTC m=+0.124168095 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, name=rhosp17/openstack-keystone, tcib_managed=true, build-date=2025-07-21T13:27:18, container_name=keystone, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-keystone-container, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, 
maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Oct 13 14:05:44 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:05:44 standalone.localdomain python3[118157]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_cinder_api.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:44 standalone.localdomain python3[118181]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_cinder_api_cron.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:05:44 standalone.localdomain ceph-mon[29756]: pgmap v918: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:05:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:05:44 standalone.localdomain python3[118183]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_cinder_scheduler.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:44 standalone.localdomain systemd[1]: tmp-crun.ABuqdS.mount: Deactivated successfully.
Oct 13 14:05:44 standalone.localdomain podman[118184]: 2025-10-13 14:05:44.842839392 +0000 UTC m=+0.106048774 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=starting, config_id=tripleo_step4, name=rhosp17/openstack-cinder-scheduler, release=1, batch=17.1_20250721.1, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, build-date=2025-07-21T16:10:12, com.redhat.component=openstack-cinder-scheduler-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, container_name=cinder_scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, distribution-scope=public, version=17.1.9)
Oct 13 14:05:44 standalone.localdomain podman[118184]: 2025-10-13 14:05:44.866700154 +0000 UTC m=+0.129909526 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, build-date=2025-07-21T16:10:12, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, version=17.1.9, container_name=cinder_scheduler, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-scheduler-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler)
Oct 13 14:05:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:05:44 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:05:44 standalone.localdomain systemd[1]: tmp-crun.GcXXUn.mount: Deactivated successfully.
Oct 13 14:05:44 standalone.localdomain podman[118207]: 2025-10-13 14:05:44.977895587 +0000 UTC m=+0.088155679 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, io.openshift.expose-services=, release=1, build-date=2025-07-21T15:58:55, distribution-scope=public, managed_by=tripleo_ansible, container_name=cinder_api_cron, name=rhosp17/openstack-cinder-api, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, com.redhat.component=openstack-cinder-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, batch=17.1_20250721.1)
Oct 13 14:05:44 standalone.localdomain podman[118207]: 2025-10-13 14:05:44.991956874 +0000 UTC m=+0.102217026 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, distribution-scope=public, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-cinder-api-container, build-date=2025-07-21T15:58:55, batch=17.1_20250721.1, vcs-type=git, container_name=cinder_api_cron, description=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, 
name=rhosp17/openstack-cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:05:45 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:05:45 standalone.localdomain python3[118229]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_glance_api.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v919: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:45 standalone.localdomain python3[118319]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_glance_api_cron.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:05:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:05:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:05:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:05:45 standalone.localdomain podman[118321]: 2025-10-13 14:05:45.837458266 +0000 UTC m=+0.095620138 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 horizon, summary=Red Hat OpenStack Platform 17.1 horizon, com.redhat.component=openstack-horizon-container, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, version=17.1.9, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', 
'/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-horizon, container_name=horizon, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:15, vcs-type=git, release=1, io.buildah.version=1.33.12, batch=17.1_20250721.1)
Oct 13 14:05:45 standalone.localdomain podman[118322]: 2025-10-13 14:05:45.905254458 +0000 UTC m=+0.159267431 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=heat_api, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, build-date=2025-07-21T15:56:26, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc.)
Oct 13 14:05:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:05:45 standalone.localdomain python3[118352]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_glance_api_internal.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:45 standalone.localdomain podman[118322]: 2025-10-13 14:05:45.946377264 +0000 UTC m=+0.200390187 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, container_name=heat_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T15:56:26, release=1, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:05:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:05:45 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:05:45 standalone.localdomain podman[118321]: 2025-10-13 14:05:45.970824365 +0000 UTC m=+0.228986257 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-horizon-container, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, io.buildah.version=1.33.12, container_name=horizon, managed_by=tripleo_ansible, release=1, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, version=17.1.9, build-date=2025-07-21T13:58:15, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon)
Oct 13 14:05:45 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:05:46 standalone.localdomain podman[118325]: 2025-10-13 14:05:46.061109794 +0000 UTC m=+0.309790750 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron)
Oct 13 14:05:46 standalone.localdomain podman[118325]: 2025-10-13 14:05:46.071775798 +0000 UTC m=+0.320456734 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, 
io.openshift.expose-services=, container_name=logrotate_crond, tcib_managed=true, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public)
Oct 13 14:05:46 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:05:46 standalone.localdomain podman[118320]: 2025-10-13 14:05:45.971740236 +0000 UTC m=+0.232157992 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-memcached-container, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, name=rhosp17/openstack-memcached, description=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, build-date=2025-07-21T12:58:43, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 memcached, batch=17.1_20250721.1, container_name=memcached, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, release=1)
Oct 13 14:05:46 standalone.localdomain podman[118405]: 2025-10-13 14:05:46.154908269 +0000 UTC m=+0.184056474 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=starting, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, build-date=2025-07-21T15:44:11, summary=Red Hat OpenStack Platform 17.1 heat-engine, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-engine, 
name=rhosp17/openstack-heat-engine, tcib_managed=true, architecture=x86_64, container_name=heat_engine, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-heat-engine-container, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Oct 13 14:05:46 standalone.localdomain podman[118320]: 2025-10-13 14:05:46.162219423 +0000 UTC m=+0.422637179 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=memcached, name=rhosp17/openstack-memcached, version=17.1.9, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:43, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.buildah.version=1.33.12, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe)
Oct 13 14:05:46 standalone.localdomain podman[118385]: 2025-10-13 14:05:46.125842974 +0000 UTC m=+0.203193640 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, release=1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-heat-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, container_name=heat_api_cron, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1)
Oct 13 14:05:46 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:05:46 standalone.localdomain podman[118405]: 2025-10-13 14:05:46.181840134 +0000 UTC m=+0.210988309 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, com.redhat.component=openstack-heat-engine-container, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp 
openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=heat_engine, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, name=rhosp17/openstack-heat-engine, tcib_managed=true, config_id=tripleo_step4)
Oct 13 14:05:46 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:05:46 standalone.localdomain podman[118385]: 2025-10-13 14:05:46.208811039 +0000 UTC m=+0.286161715 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-heat-api, container_name=heat_api_cron, release=1, version=17.1.9, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-heat-api-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26)
Oct 13 14:05:46 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:05:46 standalone.localdomain python3[118490]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_heat_api.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:46 standalone.localdomain python3[118507]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_heat_api_cfn.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:46 standalone.localdomain ceph-mon[29756]: pgmap v919: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:05:46 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 14:05:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:05:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:05:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:05:46 standalone.localdomain object-server[118527]: Object update sweep starting on /srv/node/d1 (pid: 11)
Oct 13 14:05:46 standalone.localdomain object-server[118527]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 11)
Oct 13 14:05:46 standalone.localdomain object-server[118527]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 14:05:46 standalone.localdomain object-server[114601]: Object update sweep completed: 0.09s
Oct 13 14:05:46 standalone.localdomain podman[118526]: 2025-10-13 14:05:46.865537752 +0000 UTC m=+0.126377238 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, container_name=cinder_api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., build-date=2025-07-21T15:58:55, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, description=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-api, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']})
Oct 13 14:05:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:05:46 standalone.localdomain podman[118533]: 2025-10-13 14:05:46.905719936 +0000 UTC m=+0.159935673 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, description=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-manila-api-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 manila-api, container_name=manila_api_cron, name=rhosp17/openstack-manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:06:43, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']})
Oct 13 14:05:46 standalone.localdomain podman[118525]: 2025-10-13 14:05:46.833664693 +0000 UTC m=+0.102037480 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, build-date=2025-07-21T14:49:55, io.buildah.version=1.33.12, 
io.openshift.expose-services=, version=17.1.9, config_id=tripleo_step4, com.redhat.component=openstack-heat-api-cfn-container, maintainer=OpenStack TripleO Team, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:05:46 standalone.localdomain podman[118533]: 2025-10-13 14:05:46.947038699 +0000 UTC m=+0.201254436 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, build-date=2025-07-21T16:06:43, summary=Red Hat OpenStack Platform 17.1 manila-api, container_name=manila_api_cron, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, 
vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, tcib_managed=true, io.buildah.version=1.33.12, com.redhat.component=openstack-manila-api-container, name=rhosp17/openstack-manila-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:05:46 standalone.localdomain podman[118534]: 2025-10-13 14:05:46.952458809 +0000 UTC m=+0.204158692 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=starting, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, build-date=2025-07-21T15:44:17, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-nova-conductor-container, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, container_name=nova_conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, 
managed_by=tripleo_ansible, release=1, vcs-type=git, name=rhosp17/openstack-nova-conductor, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:05:46 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:05:46 standalone.localdomain podman[118525]: 2025-10-13 14:05:46.963373591 +0000 UTC m=+0.231746378 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, 
batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-api-cfn, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T14:49:55, com.redhat.component=openstack-heat-api-cfn-container, io.buildah.version=1.33.12, container_name=heat_api_cfn, vcs-type=git)
Oct 13 14:05:46 standalone.localdomain podman[118534]: 2025-10-13 14:05:46.976511327 +0000 UTC m=+0.228211210 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, description=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-conductor, architecture=x86_64, io.openshift.expose-services=, release=1, build-date=2025-07-21T15:44:17, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, container_name=nova_conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, com.redhat.component=openstack-nova-conductor-container, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-nova-conductor)
Oct 13 14:05:46 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:05:47 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:05:47 standalone.localdomain python3[118635]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_heat_api_cron.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:47 standalone.localdomain podman[118614]: 2025-10-13 14:05:47.058091516 +0000 UTC m=+0.175882241 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=starting, tcib_managed=true, batch=17.1_20250721.1, container_name=neutron_api, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, managed_by=tripleo_ansible, release=1, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:44:03, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, 
maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-server)
Oct 13 14:05:47 standalone.localdomain podman[118526]: 2025-10-13 14:05:47.080719708 +0000 UTC m=+0.341559224 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T15:58:55, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-cinder-api, com.redhat.component=openstack-cinder-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., container_name=cinder_api, summary=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64)
Oct 13 14:05:47 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:05:47 standalone.localdomain runuser[118706]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:05:47 standalone.localdomain podman[118614]: 2025-10-13 14:05:47.187928429 +0000 UTC m=+0.305719154 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, config_id=tripleo_step4, distribution-scope=public, build-date=2025-07-21T15:44:03, io.buildah.version=1.33.12, container_name=neutron_api, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-neutron-server-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-server, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:05:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:05:47 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:05:47 standalone.localdomain podman[118735]: 2025-10-13 14:05:47.273662766 +0000 UTC m=+0.063943695 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=starting, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:28, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.component=openstack-manila-scheduler-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, container_name=manila_scheduler, release=1, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, name=rhosp17/openstack-manila-scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler)
Oct 13 14:05:47 standalone.localdomain podman[118735]: 2025-10-13 14:05:47.293803895 +0000 UTC m=+0.084084844 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, distribution-scope=public, tcib_managed=true, vcs-type=git, version=17.1.9, name=rhosp17/openstack-manila-scheduler, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 manila-scheduler, managed_by=tripleo_ansible, build-date=2025-07-21T15:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, com.redhat.component=openstack-manila-scheduler-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, container_name=manila_scheduler, release=1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, maintainer=OpenStack TripleO Team)
Oct 13 14:05:47 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:05:47 standalone.localdomain python3[118727]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_heat_engine.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v920: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:47 standalone.localdomain runuser[118706]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:05:47 standalone.localdomain python3[118789]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:05:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:05:47 standalone.localdomain ceph-mon[29756]: pgmap v920: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:47 standalone.localdomain podman[118806]: 2025-10-13 14:05:47.807574809 +0000 UTC m=+0.075810928 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=starting, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, batch=17.1_20250721.1, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-scheduler, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-scheduler, build-date=2025-07-21T16:02:54, container_name=nova_scheduler)
Oct 13 14:05:47 standalone.localdomain podman[118807]: 2025-10-13 14:05:47.875551647 +0000 UTC m=+0.142026458 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:05:47 standalone.localdomain runuser[118839]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:05:47 standalone.localdomain podman[118806]: 2025-10-13 14:05:47.89459444 +0000 UTC m=+0.162830559 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-scheduler, version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, container_name=nova_scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-nova-scheduler-container, build-date=2025-07-21T16:02:54, io.buildah.version=1.33.12, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, release=1, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:05:47 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:05:48 standalone.localdomain python3[118861]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_manila_api.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:48 standalone.localdomain podman[118807]: 2025-10-13 14:05:48.202717203 +0000 UTC m=+0.469191944 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, io.buildah.version=1.33.12, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., release=1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:05:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:05:48 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:05:48 standalone.localdomain podman[118896]: 2025-10-13 14:05:48.290575041 +0000 UTC m=+0.069056084 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=starting, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-nova-novncproxy, com.redhat.component=openstack-nova-novncproxy-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, container_name=nova_vnc_proxy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, managed_by=tripleo_ansible, build-date=2025-07-21T15:24:10, release=1, version=17.1.9)
Oct 13 14:05:48 standalone.localdomain python3[118909]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_manila_api_cron.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:48 standalone.localdomain runuser[118839]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:05:48 standalone.localdomain podman[118896]: 2025-10-13 14:05:48.594814746 +0000 UTC m=+0.373295769 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, com.redhat.component=openstack-nova-novncproxy-container, tcib_managed=true, container_name=nova_vnc_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, build-date=2025-07-21T15:24:10, name=rhosp17/openstack-nova-novncproxy, release=1, version=17.1.9, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Oct 13 14:05:48 standalone.localdomain runuser[118942]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:05:48 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:05:48 standalone.localdomain python3[118938]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_manila_scheduler.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:05:48 standalone.localdomain podman[118990]: 2025-10-13 14:05:48.822892041 +0000 UTC m=+0.097680645 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=starting, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, distribution-scope=public, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, container_name=swift_account_server, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:05:48 standalone.localdomain systemd[1]: tmp-crun.cvLWXv.mount: Deactivated successfully.
Oct 13 14:05:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:05:49 standalone.localdomain python3[119013]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_neutron_api.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:49 standalone.localdomain podman[118990]: 2025-10-13 14:05:49.073889308 +0000 UTC m=+0.348677932 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.buildah.version=1.33.12, release=1, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, version=17.1.9, io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T16:11:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, distribution-scope=public, container_name=swift_account_server, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 14:05:49 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:05:49 standalone.localdomain systemd[1]: tmp-crun.AZd0gk.mount: Deactivated successfully.
Oct 13 14:05:49 standalone.localdomain podman[119020]: 2025-10-13 14:05:49.159538962 +0000 UTC m=+0.092797573 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=starting, architecture=x86_64, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_id=tripleo_step4, container_name=swift_container_server, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:05:49 standalone.localdomain python3[119048]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_neutron_dhcp.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:49 standalone.localdomain runuser[118942]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:05:49 standalone.localdomain podman[119020]: 2025-10-13 14:05:49.458880674 +0000 UTC m=+0.392139285 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, container_name=swift_container_server, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 14:05:49 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:05:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v921: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:05:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:05:49 standalone.localdomain python3[119069]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:49 standalone.localdomain podman[119071]: 2025-10-13 14:05:49.813767941 +0000 UTC m=+0.081167767 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=starting, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-swift-object-container, distribution-scope=public, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, container_name=swift_object_server, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64)
Oct 13 14:05:50 standalone.localdomain podman[119071]: 2025-10-13 14:05:50.014896862 +0000 UTC m=+0.282296678 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container, distribution-scope=public, vcs-type=git, build-date=2025-07-21T14:56:28, tcib_managed=true, name=rhosp17/openstack-swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server)
Oct 13 14:05:50 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:05:50 standalone.localdomain python3[119098]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_nova_api.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:50 standalone.localdomain python3[119101]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_nova_api_cron.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:50 standalone.localdomain ceph-mon[29756]: pgmap v921: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:50 standalone.localdomain python3[119111]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_nova_conductor.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:51 standalone.localdomain python3[119113]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_nova_metadata.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:51 standalone.localdomain python3[119115]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v922: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:51 standalone.localdomain python3[119125]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_nova_scheduler.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:52 standalone.localdomain python3[119127]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_nova_vnc_proxy.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:52 standalone.localdomain python3[119136]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:52 standalone.localdomain ceph-mon[29756]: pgmap v922: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:05:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:05:52 standalone.localdomain podman[119140]: 2025-10-13 14:05:52.828663385 +0000 UTC m=+0.095697819 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, 
io.buildah.version=1.33.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:05:52 standalone.localdomain podman[119141]: 2025-10-13 14:05:52.880706364 +0000 UTC m=+0.147587343 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ovn-controller, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git)
Oct 13 14:05:52 standalone.localdomain systemd[1]: tmp-crun.9ttZJJ.mount: Deactivated successfully.
Oct 13 14:05:52 standalone.localdomain podman[119141]: 2025-10-13 14:05:52.910896007 +0000 UTC m=+0.177776946 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, distribution-scope=public, build-date=2025-07-21T13:28:44, tcib_managed=true, version=17.1.9, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Oct 13 14:05:52 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:05:52 standalone.localdomain podman[119140]: 2025-10-13 14:05:52.965147439 +0000 UTC m=+0.232181913 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1)
Oct 13 14:05:52 standalone.localdomain python3[119165]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:52 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:05:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:05:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:05:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:05:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:05:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:05:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:05:53 standalone.localdomain python3[119192]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_placement_api.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v923: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:05:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:05:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:05:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:05:53 standalone.localdomain python3[119202]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_swift_account_reaper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:53 standalone.localdomain podman[119239]: 2025-10-13 14:05:53.843607805 +0000 UTC m=+0.105797165 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, architecture=x86_64, com.redhat.component=openstack-glance-api-container, container_name=glance_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, 
build-date=2025-07-21T13:58:20, io.openshift.expose-services=, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, io.buildah.version=1.33.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:05:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:05:53 standalone.localdomain podman[119241]: 2025-10-13 14:05:53.893530513 +0000 UTC m=+0.154674898 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=starting, container_name=nova_metadata, release=1, description=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, version=17.1.9, distribution-scope=public, build-date=2025-07-21T16:05:11, io.openshift.expose-services=, name=rhosp17/openstack-nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64)
Oct 13 14:05:53 standalone.localdomain systemd[1]: tmp-crun.EtWsiO.mount: Deactivated successfully.
Oct 13 14:05:53 standalone.localdomain podman[119240]: 2025-10-13 14:05:53.873522778 +0000 UTC m=+0.138406027 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, name=rhosp17/openstack-placement-api, summary=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T13:58:12, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., 
container_name=placement_api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.component=openstack-placement-api-container, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1)
Oct 13 14:05:53 standalone.localdomain podman[119239]: 2025-10-13 14:05:53.927993808 +0000 UTC m=+0.190183158 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=glance_api_cron, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12)
Oct 13 14:05:53 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:05:53 standalone.localdomain podman[119300]: 2025-10-13 14:05:53.947701742 +0000 UTC m=+0.092017197 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=starting, release=1, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, container_name=glance_api_internal, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api)
Oct 13 14:05:53 standalone.localdomain podman[119242]: 2025-10-13 14:05:53.913136684 +0000 UTC m=+0.171067913 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=starting, name=rhosp17/openstack-glance-api, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, container_name=glance_api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, release=1, config_id=tripleo_step4, io.openshift.expose-services=)
Oct 13 14:05:53 standalone.localdomain podman[119241]: 2025-10-13 14:05:53.999514723 +0000 UTC m=+0.260659128 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=nova_metadata, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, 
version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-nova-api-container, distribution-scope=public, maintainer=OpenStack TripleO Team)
Oct 13 14:05:54 standalone.localdomain podman[119240]: 2025-10-13 14:05:54.004757247 +0000 UTC m=+0.269640506 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, tcib_managed=true, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=placement_api, com.redhat.component=openstack-placement-api-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, distribution-scope=public, version=17.1.9, 
build-date=2025-07-21T13:58:12, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, name=rhosp17/openstack-placement-api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 placement-api)
Oct 13 14:05:54 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:05:54 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:05:54 standalone.localdomain python3[119401]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_swift_account_server.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:54 standalone.localdomain podman[119242]: 2025-10-13 14:05:54.125564949 +0000 UTC m=+0.383496208 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, managed_by=tripleo_ansible, architecture=x86_64, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, vendor=Red Hat, Inc., name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:05:54 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:05:54 standalone.localdomain podman[119300]: 2025-10-13 14:05:54.142608155 +0000 UTC m=+0.286923630 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_internal, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, release=1, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red 
Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, vcs-type=git, build-date=2025-07-21T13:58:20, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:05:54 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:05:54 standalone.localdomain python3[119414]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_swift_container_server.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:05:54 standalone.localdomain ceph-mon[29756]: pgmap v923: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:05:54 standalone.localdomain python3[119424]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_swift_container_updater.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:54 standalone.localdomain podman[119425]: 2025-10-13 14:05:54.819140536 +0000 UTC m=+0.084416805 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=starting, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container, name=rhosp17/openstack-swift-proxy-server, batch=17.1_20250721.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', 
'/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_proxy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4)
Oct 13 14:05:55 standalone.localdomain podman[119425]: 2025-10-13 14:05:55.038816421 +0000 UTC m=+0.304092620 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, com.redhat.component=openstack-swift-proxy-server-container, name=rhosp17/openstack-swift-proxy-server, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team)
Oct 13 14:05:55 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:05:55 standalone.localdomain python3[119452]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_swift_object_expirer.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:55 standalone.localdomain snmpd[110332]: empty variable list in _query
Oct 13 14:05:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v924: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:55 standalone.localdomain python3[119539]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_swift_object_server.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:55 standalone.localdomain python3[119549]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_swift_object_updater.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:05:56 standalone.localdomain systemd[1]: tmp-crun.CszVzh.mount: Deactivated successfully.
Oct 13 14:05:56 standalone.localdomain podman[119550]: 2025-10-13 14:05:56.126820408 +0000 UTC m=+0.105834796 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, name=rhosp17/openstack-nova-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f)
Oct 13 14:05:56 standalone.localdomain podman[119550]: 2025-10-13 14:05:56.166565108 +0000 UTC m=+0.145579526 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, build-date=2025-07-21T16:05:11, distribution-scope=public, name=rhosp17/openstack-nova-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-api-container, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=nova_api, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, release=1)
Oct 13 14:05:56 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:05:56 standalone.localdomain python3[119598]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364343.7308564-117033-12672121912509/source dest=/etc/systemd/system/tripleo_swift_proxy.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:05:56 standalone.localdomain python3[119620]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 14:05:56 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:05:56 standalone.localdomain ceph-mon[29756]: pgmap v924: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:56 standalone.localdomain systemd-sysv-generator[119657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:05:56 standalone.localdomain systemd-rc-local-generator[119652]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:05:56 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:05:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:05:57 standalone.localdomain podman[119747]: 2025-10-13 14:05:57.236326218 +0000 UTC m=+0.095395339 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, container_name=keystone_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, config_id=tripleo_step3, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:27:18, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-keystone-container)
Oct 13 14:05:57 standalone.localdomain podman[119747]: 2025-10-13 14:05:57.265934511 +0000 UTC m=+0.125003672 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, com.redhat.component=openstack-keystone-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=keystone_cron, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_id=tripleo_step3, batch=17.1_20250721.1, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:05:57 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:05:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v925: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:58 standalone.localdomain python3[119774]: ansible-systemd Invoked with state=restarted name=tripleo_cinder_api.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:05:58 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:05:58 standalone.localdomain systemd-rc-local-generator[119798]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:05:58 standalone.localdomain systemd-sysv-generator[119804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:05:58 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:05:58 standalone.localdomain ceph-mon[29756]: pgmap v925: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:58 standalone.localdomain systemd[1]: Starting cinder_api container...
Oct 13 14:05:58 standalone.localdomain systemd[1]: Started cinder_api container.
Oct 13 14:05:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v926: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:05:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:05:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:05:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:05:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:05:59 standalone.localdomain podman[119845]: 2025-10-13 14:05:59.822443021 +0000 UTC m=+0.083922089 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=barbican_keystone_listener, build-date=2025-07-21T16:18:19, tcib_managed=true, vendor=Red Hat, Inc., release=1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-barbican-keystone-listener-container, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
barbican-keystone-listener, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-barbican-keystone-listener, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c)
Oct 13 14:05:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:05:59 standalone.localdomain podman[119845]: 2025-10-13 14:05:59.847881376 +0000 UTC m=+0.109360494 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, release=1, tcib_managed=true, distribution-scope=public, build-date=2025-07-21T16:18:19, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, config_id=tripleo_step3, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, name=rhosp17/openstack-barbican-keystone-listener, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red 
Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, container_name=barbican_keystone_listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-barbican-keystone-listener-container, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:05:59 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:05:59 standalone.localdomain python3[119843]: ansible-systemd Invoked with state=restarted name=tripleo_cinder_api_cron.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:05:59 standalone.localdomain podman[119889]: 2025-10-13 14:05:59.919565406 +0000 UTC m=+0.079971857 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, container_name=nova_api_cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., 
io.openshift.expose-services=, build-date=2025-07-21T16:05:11, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_id=tripleo_step4)
Oct 13 14:05:59 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:05:59 standalone.localdomain podman[119844]: 2025-10-13 14:05:59.8973808 +0000 UTC m=+0.160833763 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, container_name=barbican_api, io.openshift.expose-services=, build-date=2025-07-21T15:22:44, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-api, description=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, com.redhat.component=openstack-barbican-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-barbican-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, release=1, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:05:59 standalone.localdomain podman[119844]: 2025-10-13 14:05:59.983931304 +0000 UTC m=+0.247384297 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 barbican-api, distribution-scope=public, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, com.redhat.component=openstack-barbican-api-container, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-api, container_name=barbican_api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, build-date=2025-07-21T15:22:44, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', 
'/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:06:00 standalone.localdomain podman[119889]: 2025-10-13 14:06:00.005880013 +0000 UTC m=+0.166286484 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, name=rhosp17/openstack-nova-api, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, build-date=2025-07-21T16:05:11, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, container_name=nova_api_cron, vendor=Red Hat, Inc.)
Oct 13 14:06:00 standalone.localdomain runuser[119933]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:06:00 standalone.localdomain systemd-rc-local-generator[119972]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:00 standalone.localdomain systemd-sysv-generator[119976]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:00 standalone.localdomain podman[119846]: 2025-10-13 14:06:00.093008587 +0000 UTC m=+0.348673901 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, tcib_managed=true, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc., container_name=barbican_worker, io.buildah.version=1.33.12, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, version=17.1.9, com.redhat.component=openstack-barbican-worker-container, build-date=2025-07-21T15:36:22, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp17/openstack-barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:06:00 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:00 standalone.localdomain podman[119846]: 2025-10-13 14:06:00.137694291 +0000 UTC m=+0.393359605 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step3, distribution-scope=public, name=rhosp17/openstack-barbican-worker, com.redhat.component=openstack-barbican-worker-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, build-date=2025-07-21T15:36:22, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-type=git, tcib_managed=true, container_name=barbican_worker, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9)
Oct 13 14:06:00 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:06:00 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:06:00 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:06:00 standalone.localdomain systemd[1]: Starting cinder_api_cron container...
Oct 13 14:06:00 standalone.localdomain systemd[1]: Started cinder_api_cron container.
Oct 13 14:06:00 standalone.localdomain podman[120043]: 2025-10-13 14:06:00.62994406 +0000 UTC m=+0.118611820 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-mariadb-container, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T12:58:45, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:06:00 standalone.localdomain podman[120043]: 2025-10-13 14:06:00.661917332 +0000 UTC m=+0.150585112 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.buildah.version=1.33.12, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, maintainer=OpenStack TripleO Team)
Oct 13 14:06:00 standalone.localdomain runuser[119933]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:06:00 standalone.localdomain ceph-mon[29756]: pgmap v926: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:06:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:06:00 standalone.localdomain podman[120069]: 2025-10-13 14:06:00.744399062 +0000 UTC m=+0.137558870 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, build-date=2025-07-21T13:08:11, description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-haproxy, vcs-type=git, com.redhat.component=openstack-haproxy-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, version=17.1.9)
Oct 13 14:06:00 standalone.localdomain podman[120112]: 2025-10-13 14:06:00.77114468 +0000 UTC m=+0.060210881 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=starting, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T16:28:54, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-dhcp-agent, distribution-scope=public, container_name=neutron_dhcp, io.openshift.expose-services=)
Oct 13 14:06:00 standalone.localdomain podman[120112]: 2025-10-13 14:06:00.820122077 +0000 UTC m=+0.109188278 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, config_id=tripleo_step4, distribution-scope=public, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, release=1, com.redhat.component=openstack-neutron-dhcp-agent-container, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, container_name=neutron_dhcp, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:06:00 standalone.localdomain runuser[120169]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:06:00 standalone.localdomain podman[120138]: 2025-10-13 14:06:00.833865323 +0000 UTC m=+0.082689028 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, build-date=2025-07-21T13:08:11, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-haproxy-container, release=1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64)
Oct 13 14:06:00 standalone.localdomain systemd[1]: tmp-crun.BgSTlz.mount: Deactivated successfully.
Oct 13 14:06:00 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:06:00 standalone.localdomain podman[120069]: 2025-10-13 14:06:00.844202326 +0000 UTC m=+0.237362094 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, version=17.1.9, com.redhat.component=openstack-haproxy-container, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, name=rhosp17/openstack-haproxy, release=1, build-date=2025-07-21T13:08:11, architecture=x86_64)
Oct 13 14:06:00 standalone.localdomain podman[120113]: 2025-10-13 14:06:00.84552332 +0000 UTC m=+0.127831557 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=starting, build-date=2025-07-21T16:03:34, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.buildah.version=1.33.12, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, 
com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, io.openshift.expose-services=, release=1)
Oct 13 14:06:00 standalone.localdomain podman[120136]: 2025-10-13 14:06:00.897827048 +0000 UTC m=+0.144176640 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, io.buildah.version=1.33.12, com.redhat.component=openstack-rabbitmq-container, summary=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 14:06:00 standalone.localdomain podman[120113]: 2025-10-13 14:06:00.928055261 +0000 UTC m=+0.210363508 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-sriov-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, vcs-type=git, 
build-date=2025-07-21T16:03:34, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent)
Oct 13 14:06:00 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:06:00 standalone.localdomain podman[120235]: 2025-10-13 14:06:00.99273062 +0000 UTC m=+0.087211998 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, name=rhosp17/openstack-rabbitmq, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rabbitmq-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:06:00 standalone.localdomain podman[120136]: 2025-10-13 14:06:00.996606908 +0000 UTC m=+0.242956520 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, batch=17.1_20250721.1, com.redhat.component=openstack-rabbitmq-container, architecture=x86_64, release=1)
Oct 13 14:06:01 standalone.localdomain anacron[91273]: Job `cron.daily' started
Oct 13 14:06:01 standalone.localdomain anacron[91273]: Job `cron.daily' terminated
Oct 13 14:06:01 standalone.localdomain runuser[120169]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:06:01 standalone.localdomain runuser[120263]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:06:01 standalone.localdomain python3[120251]: ansible-systemd Invoked with state=restarted name=tripleo_cinder_scheduler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:01 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v927: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:01 standalone.localdomain systemd-sysv-generator[120344]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:01 standalone.localdomain systemd-rc-local-generator[120340]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:01 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:02 standalone.localdomain systemd[1]: Starting cinder_scheduler container...
Oct 13 14:06:02 standalone.localdomain systemd[1]: Started cinder_scheduler container.
Oct 13 14:06:02 standalone.localdomain runuser[120263]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: pgmap v927: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.716089) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364362716153, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 1035, "num_deletes": 251, "total_data_size": 850390, "memory_usage": 869576, "flush_reason": "Manual Compaction"}
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364362723343, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 834669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21721, "largest_seqno": 22755, "table_properties": {"data_size": 830117, "index_size": 2153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10024, "raw_average_key_size": 19, "raw_value_size": 820881, "raw_average_value_size": 1597, "num_data_blocks": 98, "num_entries": 514, "num_filter_entries": 514, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760364279, "oldest_key_time": 1760364279, "file_creation_time": 1760364362, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 7305 microseconds, and 4394 cpu microseconds.
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.723395) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 834669 bytes OK
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.723420) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.725611) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.725635) EVENT_LOG_v1 {"time_micros": 1760364362725628, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.725657) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 845438, prev total WAL file size 845438, number of live WAL files 2.
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.726275) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(815KB)], [50(4845KB)]
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364362726362, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 5796608, "oldest_snapshot_seqno": -1}
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 3471 keys, 4839157 bytes, temperature: kUnknown
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364362754258, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 4839157, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4815426, "index_size": 13971, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8709, "raw_key_size": 83837, "raw_average_key_size": 24, "raw_value_size": 4752079, "raw_average_value_size": 1369, "num_data_blocks": 601, "num_entries": 3471, "num_filter_entries": 3471, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760364362, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.754552) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 4839157 bytes
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.756151) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.0 rd, 172.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 4.7 +0.0 blob) out(4.6 +0.0 blob), read-write-amplify(12.7) write-amplify(5.8) OK, records in: 3991, records dropped: 520 output_compression: NoCompression
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.756177) EVENT_LOG_v1 {"time_micros": 1760364362756165, "job": 26, "event": "compaction_finished", "compaction_time_micros": 28000, "compaction_time_cpu_micros": 17246, "output_level": 6, "num_output_files": 1, "total_output_size": 4839157, "num_input_records": 3991, "num_output_records": 3471, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364362756396, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364362757103, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.726176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.757142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.757147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.757149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.757150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:06:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:06:02.757152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:06:02 standalone.localdomain python3[120392]: ansible-systemd Invoked with state=restarted name=tripleo_glance_api.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:03 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:03 standalone.localdomain systemd-rc-local-generator[120418]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:03 standalone.localdomain systemd-sysv-generator[120423]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:03 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:03 standalone.localdomain systemd[1]: Starting glance_api container...
Oct 13 14:06:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v928: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:03 standalone.localdomain systemd[1]: Started glance_api container.
Oct 13 14:06:03 standalone.localdomain ceph-mon[29756]: pgmap v928: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:04 standalone.localdomain python3[120507]: ansible-systemd Invoked with state=restarted name=tripleo_glance_api_cron.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:06:04 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:04 standalone.localdomain systemd-rc-local-generator[120539]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:04 standalone.localdomain systemd-sysv-generator[120542]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:04 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:05 standalone.localdomain systemd[1]: Starting glance_api_cron container...
Oct 13 14:06:05 standalone.localdomain systemd[1]: Started glance_api_cron container.
Oct 13 14:06:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v929: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:06:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:06:06 standalone.localdomain podman[120659]: 2025-10-13 14:06:06.111046355 +0000 UTC m=+0.082410089 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.component=openstack-ovn-northd-container, description=Red Hat OpenStack Platform 17.1 ovn-northd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, container_name=ovn_cluster_northd, config_id=ovn_cluster_northd, architecture=x86_64, name=rhosp17/openstack-ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, release=1)
Oct 13 14:06:06 standalone.localdomain python3[120658]: ansible-systemd Invoked with state=restarted name=tripleo_glance_api_internal.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:06 standalone.localdomain podman[120659]: 2025-10-13 14:06:06.125410622 +0000 UTC m=+0.096774396 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, container_name=ovn_cluster_northd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-ovn-northd-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', 
'/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, config_id=ovn_cluster_northd, release=1, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:30:04, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, name=rhosp17/openstack-ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:06:06 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:06:06 standalone.localdomain podman[120660]: 2025-10-13 14:06:06.166083822 +0000 UTC m=+0.136842456 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-mariadb-container, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true, config_id=tripleo_step2, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, 
version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, vcs-type=git, container_name=clustercheck, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:06:06 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:06 standalone.localdomain podman[120660]: 2025-10-13 14:06:06.241800877 +0000 UTC m=+0.212559501 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=clustercheck, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, config_id=tripleo_step2, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T12:58:45, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true)
Oct 13 14:06:06 standalone.localdomain systemd-sysv-generator[120786]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:06 standalone.localdomain systemd-rc-local-generator[120782]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:06 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:06 standalone.localdomain ceph-mon[29756]: pgmap v929: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:06 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:06:06 standalone.localdomain systemd[1]: Starting glance_api_internal container...
Oct 13 14:06:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 14:06:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 1800.0 total, 600.0 interval
                                                        Cumulative writes: 5064 writes, 22K keys, 5064 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s
                                                        Cumulative WAL: 5064 writes, 5064 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1720 writes, 8394 keys, 1720 commit groups, 1.0 writes per commit group, ingest: 6.17 MB, 0.01 MB/s
                                                        Interval WAL: 1720 writes, 1720 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    106.0      0.10              0.05        13    0.008       0      0       0.0       0.0
                                                          L6      1/0    4.61 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.7    200.7    173.9      0.23              0.13        12    0.019     39K   6397       0.0       0.0
                                                         Sum      1/0    4.61 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   4.7    139.5    153.2      0.34              0.18        25    0.013     39K   6397       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4    165.3    176.6      0.21              0.11        14    0.015     27K   4112       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    200.7    173.9      0.23              0.13        12    0.019     39K   6397       0.0       0.0
                                                        High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    108.7      0.10              0.05        12    0.008       0      0       0.0       0.0
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 1800.0 total, 600.0 interval
                                                        Flush(GB): cumulative 0.011, interval 0.007
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.05 GB write, 0.03 MB/s write, 0.05 GB read, 0.03 MB/s read, 0.3 seconds
                                                        Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x557b7fb5b350#2 capacity: 308.00 MB usage: 5.99 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.2e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(556,5.70 MB,1.85189%) FilterBlock(26,114.92 KB,0.0364378%) IndexBlock(26,176.39 KB,0.0559274%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
Oct 13 14:06:06 standalone.localdomain systemd[1]: Started glance_api_internal container.
Oct 13 14:06:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v930: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:07 standalone.localdomain python3[120898]: ansible-systemd Invoked with state=restarted name=tripleo_heat_api.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:07 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:07 standalone.localdomain systemd-rc-local-generator[120930]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:07 standalone.localdomain systemd-sysv-generator[120934]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:08 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:08 standalone.localdomain systemd[1]: Starting heat_api container...
Oct 13 14:06:08 standalone.localdomain systemd[1]: Started heat_api container.
Oct 13 14:06:08 standalone.localdomain ceph-mon[29756]: pgmap v930: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:09 standalone.localdomain python3[120966]: ansible-systemd Invoked with state=restarted name=tripleo_heat_api_cfn.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:09 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v931: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:09 standalone.localdomain systemd-rc-local-generator[120995]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:09 standalone.localdomain systemd-sysv-generator[120999]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:06:09 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:09 standalone.localdomain systemd[1]: Starting heat_api_cfn container...
Oct 13 14:06:10 standalone.localdomain systemd[1]: Started heat_api_cfn container.
Oct 13 14:06:10 standalone.localdomain ceph-mon[29756]: pgmap v931: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:11 standalone.localdomain python3[121028]: ansible-systemd Invoked with state=restarted name=tripleo_heat_api_cron.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:11 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:11 standalone.localdomain systemd-sysv-generator[121062]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:11 standalone.localdomain systemd-rc-local-generator[121059]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:11 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v932: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:11 standalone.localdomain systemd[1]: Starting heat_api_cron container...
Oct 13 14:06:11 standalone.localdomain systemd[1]: Started heat_api_cron container.
Oct 13 14:06:12 standalone.localdomain python3[121095]: ansible-systemd Invoked with state=restarted name=tripleo_heat_engine.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:12 standalone.localdomain ceph-mon[29756]: pgmap v932: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:12 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:12 standalone.localdomain runuser[121105]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:06:12 standalone.localdomain systemd-rc-local-generator[121147]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:12 standalone.localdomain systemd-sysv-generator[121152]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:12 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:13 standalone.localdomain systemd[1]: Starting heat_engine container...
Oct 13 14:06:13 standalone.localdomain systemd[1]: Started heat_engine container.
Oct 13 14:06:13 standalone.localdomain runuser[121105]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:06:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v933: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:13 standalone.localdomain runuser[121222]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:06:14 standalone.localdomain runuser[121222]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:06:14 standalone.localdomain runuser[121339]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:06:14 standalone.localdomain python3[121278]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:06:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:06:14 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:14 standalone.localdomain podman[121386]: 2025-10-13 14:06:14.496897105 +0000 UTC m=+0.127877938 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, version=17.1.9, com.redhat.component=openstack-keystone-container, batch=17.1_20250721.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone, build-date=2025-07-21T13:27:18, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-keystone, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, tcib_managed=true, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:06:14 standalone.localdomain systemd-sysv-generator[121445]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:14 standalone.localdomain systemd-rc-local-generator[121442]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:14 standalone.localdomain podman[121386]: 2025-10-13 14:06:14.561975467 +0000 UTC m=+0.192956160 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', 
'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, container_name=keystone, io.openshift.expose-services=, name=rhosp17/openstack-keystone, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64)
Oct 13 14:06:14 standalone.localdomain podman[121363]: 2025-10-13 14:06:14.562334318 +0000 UTC m=+0.192237655 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, container_name=iscsid, batch=17.1_20250721.1, 
summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:06:14 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:06:14 standalone.localdomain podman[121363]: 2025-10-13 14:06:14.643996361 +0000 UTC m=+0.273899748 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, config_id=tripleo_step3, architecture=x86_64, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, version=17.1.9, container_name=iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red 
Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:06:14 standalone.localdomain ceph-mon[29756]: pgmap v933: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:14 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:06:14 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:06:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:06:14 standalone.localdomain systemd[1]: Starting logrotate_crond container...
Oct 13 14:06:15 standalone.localdomain podman[121476]: 2025-10-13 14:06:15.001854746 +0000 UTC m=+0.063837411 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cinder-scheduler-container, build-date=2025-07-21T16:10:12, container_name=cinder_scheduler, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_id=tripleo_step4, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true)
Oct 13 14:06:15 standalone.localdomain systemd[1]: Started logrotate_crond container.
Oct 13 14:06:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:06:15 standalone.localdomain runuser[121339]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:06:15 standalone.localdomain podman[121476]: 2025-10-13 14:06:15.054917309 +0000 UTC m=+0.116899974 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, container_name=cinder_scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, build-date=2025-07-21T16:10:12, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-cinder-scheduler-container, architecture=x86_64, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-scheduler)
Oct 13 14:06:15 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:06:15 standalone.localdomain podman[121523]: 2025-10-13 14:06:15.10762776 +0000 UTC m=+0.060005924 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, container_name=cinder_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-cinder-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-cinder-api, release=1, build-date=2025-07-21T15:58:55)
Oct 13 14:06:15 standalone.localdomain podman[121523]: 2025-10-13 14:06:15.114799188 +0000 UTC m=+0.067177342 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, config_id=tripleo_step4, build-date=2025-07-21T15:58:55, io.openshift.expose-services=, name=rhosp17/openstack-cinder-api, container_name=cinder_api_cron, com.redhat.component=openstack-cinder-api-container, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, vcs-type=git)
Oct 13 14:06:15 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:06:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v934: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:15 standalone.localdomain python3[121628]: ansible-systemd Invoked with state=restarted name=tripleo_manila_api.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:15 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:16 standalone.localdomain systemd-rc-local-generator[121658]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:16 standalone.localdomain systemd-sysv-generator[121668]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:16 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:06:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:06:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:06:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:06:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:06:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:06:16 standalone.localdomain systemd[1]: Starting manila_api container...
Oct 13 14:06:16 standalone.localdomain systemd[1]: tmp-crun.iFs3w7.mount: Deactivated successfully.
Oct 13 14:06:16 standalone.localdomain podman[121720]: 2025-10-13 14:06:16.606361787 +0000 UTC m=+0.100306512 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, config_id=tripleo_step3, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, release=1, com.redhat.component=openstack-horizon-container, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:58:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat 
OpenStack Platform 17.1 horizon, io.buildah.version=1.33.12, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, name=rhosp17/openstack-horizon, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=horizon, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 14:06:16 standalone.localdomain podman[121718]: 2025-10-13 14:06:16.617563869 +0000 UTC m=+0.114242655 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, com.redhat.component=openstack-memcached-container, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.buildah.version=1.33.12, name=rhosp17/openstack-memcached, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, container_name=memcached, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, tcib_managed=true, build-date=2025-07-21T12:58:43, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 memcached, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:06:16 standalone.localdomain podman[121719]: 2025-10-13 14:06:16.650420661 +0000 UTC m=+0.143971533 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=tripleo_step4, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-container, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-api, 
summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:06:16 standalone.localdomain podman[121718]: 2025-10-13 14:06:16.682823756 +0000 UTC m=+0.179502552 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:43, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, container_name=memcached, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, config_id=tripleo_step1, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, name=rhosp17/openstack-memcached, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., io.buildah.version=1.33.12)
Oct 13 14:06:16 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:06:16 standalone.localdomain ceph-mon[29756]: pgmap v934: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:16 standalone.localdomain podman[121720]: 2025-10-13 14:06:16.714995625 +0000 UTC m=+0.208940350 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, com.redhat.component=openstack-horizon-container, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, config_id=tripleo_step3, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, container_name=horizon, name=rhosp17/openstack-horizon, build-date=2025-07-21T13:58:15, vendor=Red Hat, Inc., config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']})
Oct 13 14:06:16 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:06:16 standalone.localdomain podman[121721]: 2025-10-13 14:06:16.683480979 +0000 UTC m=+0.162891502 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, managed_by=tripleo_ansible, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, version=17.1.9)
Oct 13 14:06:16 standalone.localdomain podman[121721]: 2025-10-13 14:06:16.770404386 +0000 UTC m=+0.249814999 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, name=rhosp17/openstack-heat-api, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-container)
Oct 13 14:06:16 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:06:16 standalone.localdomain podman[121719]: 2025-10-13 14:06:16.788072273 +0000 UTC m=+0.281623205 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, container_name=heat_api_cron, distribution-scope=public, release=1, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, 
tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, build-date=2025-07-21T15:56:26, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc.)
Oct 13 14:06:16 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:06:16 standalone.localdomain podman[121728]: 2025-10-13 14:06:16.843454202 +0000 UTC m=+0.313837684 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, config_id=tripleo_step4, container_name=logrotate_crond, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1)
Oct 13 14:06:16 standalone.localdomain podman[121717]: 2025-10-13 14:06:16.862457653 +0000 UTC m=+0.357791854 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, release=1, container_name=heat_engine, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-heat-engine-container, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:44:11)
Oct 13 14:06:16 standalone.localdomain systemd[1]: Started manila_api container.
Oct 13 14:06:16 standalone.localdomain podman[121728]: 2025-10-13 14:06:16.882122336 +0000 UTC m=+0.352505848 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, version=17.1.9, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., release=1, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible)
Oct 13 14:06:16 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:06:16 standalone.localdomain podman[121717]: 2025-10-13 14:06:16.923940375 +0000 UTC m=+0.419274556 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, version=17.1.9, architecture=x86_64, build-date=2025-07-21T15:44:11, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, release=1, name=rhosp17/openstack-heat-engine, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:06:16 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:06:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:06:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:06:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:06:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:06:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:06:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:06:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v935: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:17 standalone.localdomain podman[121955]: 2025-10-13 14:06:17.634513765 +0000 UTC m=+0.136205975 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 manila-api, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, container_name=manila_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', 
'/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, com.redhat.component=openstack-manila-api-container, distribution-scope=public, name=rhosp17/openstack-manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.buildah.version=1.33.12, build-date=2025-07-21T16:06:43)
Oct 13 14:06:17 standalone.localdomain podman[121955]: 2025-10-13 14:06:17.643634368 +0000 UTC m=+0.145326578 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, container_name=manila_api_cron, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, version=17.1.9, name=rhosp17/openstack-manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-manila-api-container, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T16:06:43, description=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, vendor=Red Hat, Inc., release=1)
Oct 13 14:06:17 standalone.localdomain podman[121953]: 2025-10-13 14:06:17.566676053 +0000 UTC m=+0.077645331 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, build-date=2025-07-21T15:58:55, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, name=rhosp17/openstack-cinder-api, io.buildah.version=1.33.12, container_name=cinder_api, description=Red Hat OpenStack Platform 17.1 cinder-api, release=1, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-cinder-api-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=)
Oct 13 14:06:17 standalone.localdomain podman[121951]: 2025-10-13 14:06:17.683744331 +0000 UTC m=+0.197608725 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-heat-api-cfn, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-cfn-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=heat_api_cfn, release=1, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.openshift.expose-services=, 
version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:49:55, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:06:17 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:06:17 standalone.localdomain podman[121951]: 2025-10-13 14:06:17.725403904 +0000 UTC m=+0.239268268 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-api-cfn-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, build-date=2025-07-21T14:49:55, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, container_name=heat_api_cfn, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-heat-api-cfn, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, io.buildah.version=1.33.12)
Oct 13 14:06:17 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:06:17 standalone.localdomain podman[121952]: 2025-10-13 14:06:17.736071328 +0000 UTC m=+0.250148419 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, name=rhosp17/openstack-manila-scheduler, release=1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 manila-scheduler, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-manila-scheduler-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T15:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, config_id=tripleo_step4, container_name=manila_scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64)
Oct 13 14:06:17 standalone.localdomain podman[121968]: 2025-10-13 14:06:17.600604229 +0000 UTC m=+0.099338551 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-server, config_id=tripleo_step4, 
maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, distribution-scope=public, tcib_managed=true, vcs-type=git, container_name=neutron_api, version=17.1.9, build-date=2025-07-21T15:44:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:06:17 standalone.localdomain podman[121971]: 2025-10-13 14:06:17.652331027 +0000 UTC m=+0.141258322 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-conductor, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:44:17, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-nova-conductor-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, release=1, summary=Red Hat OpenStack Platform 17.1 nova-conductor, version=17.1.9, container_name=nova_conductor, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1)
Oct 13 14:06:17 standalone.localdomain podman[121952]: 2025-10-13 14:06:17.760774119 +0000 UTC m=+0.274851180 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, com.redhat.component=openstack-manila-scheduler-container, build-date=2025-07-21T15:56:28, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, container_name=manila_scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-manila-scheduler, release=1, config_id=tripleo_step4, io.buildah.version=1.33.12, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, distribution-scope=public)
Oct 13 14:06:17 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:06:17 standalone.localdomain podman[121968]: 2025-10-13 14:06:17.784809548 +0000 UTC m=+0.283543840 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., container_name=neutron_api, distribution-scope=public, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, 
build-date=2025-07-21T15:44:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, com.redhat.component=openstack-neutron-server-container, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, batch=17.1_20250721.1)
Oct 13 14:06:17 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:06:17 standalone.localdomain podman[121953]: 2025-10-13 14:06:17.803342882 +0000 UTC m=+0.314312180 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, container_name=cinder_api, io.openshift.expose-services=, release=1, com.redhat.component=openstack-cinder-api-container, description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-cinder-api, build-date=2025-07-21T15:58:55, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, summary=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:06:17 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:06:17 standalone.localdomain python3[122044]: ansible-systemd Invoked with state=restarted name=tripleo_manila_api_cron.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:17 standalone.localdomain podman[121971]: 2025-10-13 14:06:17.888887334 +0000 UTC m=+0.377814669 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, container_name=nova_conductor, distribution-scope=public, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:44:17, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-conductor, vendor=Red Hat, Inc., release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-nova-conductor-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:06:17 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:06:17 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:18 standalone.localdomain systemd-rc-local-generator[122111]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:18 standalone.localdomain systemd-sysv-generator[122114]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:18 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:18 standalone.localdomain systemd[1]: tmp-crun.pQqTfx.mount: Deactivated successfully.
Oct 13 14:06:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:06:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:06:18 standalone.localdomain systemd[1]: Starting manila_api_cron container...
Oct 13 14:06:18 standalone.localdomain systemd[1]: Started manila_api_cron container.
Oct 13 14:06:18 standalone.localdomain podman[122136]: 2025-10-13 14:06:18.510570602 +0000 UTC m=+0.119320194 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, tcib_managed=true, batch=17.1_20250721.1)
Oct 13 14:06:18 standalone.localdomain podman[122135]: 2025-10-13 14:06:18.564617307 +0000 UTC m=+0.175907964 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, batch=17.1_20250721.1, name=rhosp17/openstack-nova-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, vcs-type=git, version=17.1.9, com.redhat.component=openstack-nova-scheduler-container, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, architecture=x86_64, build-date=2025-07-21T16:02:54, container_name=nova_scheduler, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:06:18 standalone.localdomain podman[122135]: 2025-10-13 14:06:18.621582649 +0000 UTC m=+0.232873306 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T16:02:54, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, com.redhat.component=openstack-nova-scheduler-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, 
name=rhosp17/openstack-nova-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-type=git, container_name=nova_scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, maintainer=OpenStack TripleO Team)
Oct 13 14:06:18 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:06:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:06:18 standalone.localdomain ceph-mon[29756]: pgmap v935: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:18 standalone.localdomain systemd[1]: tmp-crun.vbFnGX.mount: Deactivated successfully.
Oct 13 14:06:18 standalone.localdomain podman[122192]: 2025-10-13 14:06:18.741909925 +0000 UTC m=+0.080287167 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-nova-novncproxy-container, container_name=nova_vnc_proxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-novncproxy, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, 
version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, build-date=2025-07-21T15:24:10, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:06:18 standalone.localdomain podman[122136]: 2025-10-13 14:06:18.873898259 +0000 UTC m=+0.482647821 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, 
com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, version=17.1.9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git)
Oct 13 14:06:18 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:06:19 standalone.localdomain podman[122192]: 2025-10-13 14:06:19.100939919 +0000 UTC m=+0.439317091 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, version=17.1.9, distribution-scope=public, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-nova-novncproxy-container, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_id=tripleo_step4, container_name=nova_vnc_proxy, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, 
vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, build-date=2025-07-21T15:24:10)
Oct 13 14:06:19 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:06:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:06:19 standalone.localdomain podman[122225]: 2025-10-13 14:06:19.249733222 +0000 UTC m=+0.107422659 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-swift-account-container, distribution-scope=public)
Oct 13 14:06:19 standalone.localdomain python3[122233]: ansible-systemd Invoked with state=restarted name=tripleo_manila_scheduler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:19 standalone.localdomain podman[122225]: 2025-10-13 14:06:19.448253925 +0000 UTC m=+0.305943372 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, version=17.1.9, batch=17.1_20250721.1, container_name=swift_account_server, architecture=x86_64, io.buildah.version=1.33.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:06:19 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v936: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:19 standalone.localdomain systemd-rc-local-generator[122276]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:19 standalone.localdomain systemd-sysv-generator[122279]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:06:19 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:19 standalone.localdomain systemd[1]: tmp-crun.csdUNM.mount: Deactivated successfully.
Oct 13 14:06:19 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:06:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:06:19 standalone.localdomain systemd[1]: Starting manila_scheduler container...
Oct 13 14:06:20 standalone.localdomain systemd[1]: Started manila_scheduler container.
Oct 13 14:06:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:06:20 standalone.localdomain podman[122319]: 2025-10-13 14:06:20.168705404 +0000 UTC m=+0.101534834 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, build-date=2025-07-21T14:56:28, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:06:20 standalone.localdomain podman[122294]: 2025-10-13 14:06:20.128648973 +0000 UTC m=+0.167996151 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.component=openstack-swift-container-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:06:20 standalone.localdomain podman[122294]: 2025-10-13 14:06:20.317881779 +0000 UTC m=+0.357228957 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, 
vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-swift-container-container, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:06:20 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:06:20 standalone.localdomain podman[122319]: 2025-10-13 14:06:20.376879018 +0000 UTC m=+0.309708468 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.component=openstack-swift-object-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:28, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, vendor=Red Hat, Inc., container_name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, batch=17.1_20250721.1)
Oct 13 14:06:20 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:06:20 standalone.localdomain ceph-mon[29756]: pgmap v936: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:20 standalone.localdomain python3[122369]: ansible-systemd Invoked with state=restarted name=tripleo_neutron_api.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:21 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:06:21 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:21 standalone.localdomain recover_tripleo_nova_virtqemud[122372]: 93291
Oct 13 14:06:21 standalone.localdomain systemd-sysv-generator[122406]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:21 standalone.localdomain systemd-rc-local-generator[122402]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:21 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:21 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:06:21 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:06:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v937: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:21 standalone.localdomain systemd[1]: Starting neutron_api container...
Oct 13 14:06:21 standalone.localdomain systemd[1]: Started neutron_api container.
Oct 13 14:06:22 standalone.localdomain python3[122440]: ansible-systemd Invoked with state=restarted name=tripleo_neutron_dhcp.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:22 standalone.localdomain ceph-mon[29756]: pgmap v937: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:22 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:22 standalone.localdomain systemd-rc-local-generator[122468]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:22 standalone.localdomain systemd-sysv-generator[122471]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:22 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:06:23
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', 'manila_data', 'images', 'manila_metadata', '.mgr', 'backups', 'vms']
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:06:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:06:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:06:23 standalone.localdomain systemd[1]: Starting neutron_dhcp container...
Oct 13 14:06:23 standalone.localdomain podman[122489]: 2025-10-13 14:06:23.366628497 +0000 UTC m=+0.086639049 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:06:23 standalone.localdomain systemd[1]: tmp-crun.YstdCr.mount: Deactivated successfully.
Oct 13 14:06:23 standalone.localdomain podman[122489]: 2025-10-13 14:06:23.489001841 +0000 UTC m=+0.209012443 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1)
Oct 13 14:06:23 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:06:23 standalone.localdomain podman[122490]: 2025-10-13 14:06:23.432199215 +0000 UTC m=+0.147519391 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-ovn-controller, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:06:23 standalone.localdomain tripleo-start-podman-container[122491]: Creating additional drop-in dependency for "neutron_dhcp" (054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2)
Oct 13 14:06:23 standalone.localdomain podman[122490]: 2025-10-13 14:06:23.561756468 +0000 UTC m=+0.277076614 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc., container_name=ovn_controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12)
Oct 13 14:06:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v938: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:23 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:23 standalone.localdomain systemd-rc-local-generator[122595]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:23 standalone.localdomain systemd-sysv-generator[122599]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:23 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:24 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:06:24 standalone.localdomain systemd[1]: Started neutron_dhcp container.
Oct 13 14:06:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:06:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:06:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:06:24 standalone.localdomain podman[122605]: 2025-10-13 14:06:24.203678878 +0000 UTC m=+0.071118553 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, com.redhat.component=openstack-placement-api-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 placement-api, summary=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=placement_api, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-placement-api, release=1, build-date=2025-07-21T13:58:12, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, vendor=Red Hat, Inc.)
Oct 13 14:06:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:06:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:06:24 standalone.localdomain podman[122604]: 2025-10-13 14:06:24.278118131 +0000 UTC m=+0.145163063 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container, container_name=glance_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, 
name=rhosp17/openstack-glance-api, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9)
Oct 13 14:06:24 standalone.localdomain podman[122606]: 2025-10-13 14:06:24.243460999 +0000 UTC m=+0.104515942 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, com.redhat.component=openstack-nova-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:05:11, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, release=1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, container_name=nova_metadata, name=rhosp17/openstack-nova-api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:06:24 standalone.localdomain podman[122604]: 2025-10-13 14:06:24.295016612 +0000 UTC m=+0.162061564 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, build-date=2025-07-21T13:58:20, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git)
Oct 13 14:06:24 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:06:24 standalone.localdomain podman[122667]: 2025-10-13 14:06:24.315222463 +0000 UTC m=+0.093074362 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-glance-api-container, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:06:24 standalone.localdomain podman[122671]: 2025-10-13 14:06:24.362531014 +0000 UTC m=+0.140948333 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:58:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, distribution-scope=public, com.redhat.component=openstack-glance-api-container, batch=17.1_20250721.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=glance_api, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']})
Oct 13 14:06:24 standalone.localdomain podman[122605]: 2025-10-13 14:06:24.387338488 +0000 UTC m=+0.254778163 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, com.redhat.component=openstack-placement-api-container, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, name=rhosp17/openstack-placement-api, summary=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T13:58:12, config_id=tripleo_step4, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, container_name=placement_api, architecture=x86_64, 
version=17.1.9, description=Red Hat OpenStack Platform 17.1 placement-api, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:06:24 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:06:24 standalone.localdomain podman[122606]: 2025-10-13 14:06:24.429869651 +0000 UTC m=+0.290924564 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, build-date=2025-07-21T16:05:11, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-nova-api, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step4, container_name=nova_metadata, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1)
Oct 13 14:06:24 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:06:24 standalone.localdomain podman[122667]: 2025-10-13 14:06:24.495737888 +0000 UTC m=+0.273589847 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, release=1, io.openshift.expose-services=, tcib_managed=true, container_name=glance_api_internal, summary=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:06:24 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:06:24 standalone.localdomain podman[122671]: 2025-10-13 14:06:24.587821696 +0000 UTC m=+0.366239005 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, release=1, version=17.1.9, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, container_name=glance_api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4)
Oct 13 14:06:24 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:06:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:06:24 standalone.localdomain ceph-mon[29756]: pgmap v938: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:25 standalone.localdomain python3[122788]: ansible-systemd Invoked with state=restarted name=tripleo_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:06:25 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:25 standalone.localdomain podman[122790]: 2025-10-13 14:06:25.238073483 +0000 UTC m=+0.088315004 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, architecture=x86_64, container_name=swift_proxy, distribution-scope=public, managed_by=tripleo_ansible, release=1)
Oct 13 14:06:25 standalone.localdomain systemd-sysv-generator[122894]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:25 standalone.localdomain systemd-rc-local-generator[122891]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:25 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:25 standalone.localdomain podman[122790]: 2025-10-13 14:06:25.461927868 +0000 UTC m=+0.312169369 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, container_name=swift_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, name=rhosp17/openstack-swift-proxy-server, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 14:06:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v939: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:25 standalone.localdomain runuser[122948]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:06:25 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:06:25 standalone.localdomain systemd[1]: Starting neutron_sriov_agent container...
Oct 13 14:06:25 standalone.localdomain systemd[1]: Started neutron_sriov_agent container.
Oct 13 14:06:26 standalone.localdomain runuser[122948]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:06:26 standalone.localdomain runuser[123073]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:06:26 standalone.localdomain python3[123072]: ansible-systemd Invoked with state=restarted name=tripleo_nova_api.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:06:26 standalone.localdomain ceph-mon[29756]: pgmap v939: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:26 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:26 standalone.localdomain podman[123119]: 2025-10-13 14:06:26.784624289 +0000 UTC m=+0.094297822 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, tcib_managed=true, distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T16:05:11, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack 
osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, architecture=x86_64, container_name=nova_api, name=rhosp17/openstack-nova-api, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-api-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:06:26 standalone.localdomain podman[123119]: 2025-10-13 14:06:26.815354549 +0000 UTC m=+0.125028052 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-api-container, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, build-date=2025-07-21T16:05:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api, io.openshift.expose-services=, batch=17.1_20250721.1, 
distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, name=rhosp17/openstack-nova-api, tcib_managed=true)
Oct 13 14:06:26 standalone.localdomain systemd-rc-local-generator[123174]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:26 standalone.localdomain systemd-sysv-generator[123177]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:26 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:27 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:06:27 standalone.localdomain runuser[123073]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:06:27 standalone.localdomain systemd[1]: Starting nova_api container...
Oct 13 14:06:27 standalone.localdomain runuser[123235]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:06:27 standalone.localdomain systemd[1]: Started nova_api container.
Oct 13 14:06:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:06:27 standalone.localdomain systemd[1]: tmp-crun.seBSsH.mount: Deactivated successfully.
Oct 13 14:06:27 standalone.localdomain podman[123327]: 2025-10-13 14:06:27.451055943 +0000 UTC m=+0.105912019 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, build-date=2025-07-21T13:27:18, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, container_name=keystone_cron, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack 
osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step3, name=rhosp17/openstack-keystone, release=1, summary=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:06:27 standalone.localdomain podman[123327]: 2025-10-13 14:06:27.457588251 +0000 UTC m=+0.112444327 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, name=rhosp17/openstack-keystone, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, container_name=keystone_cron, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, build-date=2025-07-21T13:27:18, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, distribution-scope=public)
Oct 13 14:06:27 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:06:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v940: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:28 standalone.localdomain runuser[123235]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:06:28 standalone.localdomain python3[123374]: ansible-systemd Invoked with state=restarted name=tripleo_nova_api_cron.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:06:28 standalone.localdomain ceph-mon[29756]: pgmap v940: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:29 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:29 standalone.localdomain systemd-rc-local-generator[123416]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:29 standalone.localdomain systemd-sysv-generator[123423]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v941: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:29 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:06:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:06:29 standalone.localdomain systemd[1]: Starting nova_api_cron container...
Oct 13 14:06:30 standalone.localdomain podman[123431]: 2025-10-13 14:06:30.01444342 +0000 UTC m=+0.084868689 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vendor=Red Hat, Inc., build-date=2025-07-21T16:18:19, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, container_name=barbican_keystone_listener, com.redhat.component=openstack-barbican-keystone-listener-container, 
config_id=tripleo_step3, tcib_managed=true, name=rhosp17/openstack-barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, vcs-type=git, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1)
Oct 13 14:06:30 standalone.localdomain systemd[1]: Started nova_api_cron container.
Oct 13 14:06:30 standalone.localdomain podman[123431]: 2025-10-13 14:06:30.067889125 +0000 UTC m=+0.138314414 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-barbican-keystone-listener-container, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, container_name=barbican_keystone_listener, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:18:19, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', 
'/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1)
Oct 13 14:06:30 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:06:30 standalone.localdomain ceph-mon[29756]: pgmap v941: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:06:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:06:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:06:30 standalone.localdomain systemd[1]: tmp-crun.dWmOdm.mount: Deactivated successfully.
Oct 13 14:06:30 standalone.localdomain podman[123480]: 2025-10-13 14:06:30.825637723 +0000 UTC m=+0.083371310 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, container_name=nova_api_cron, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-api, build-date=2025-07-21T16:05:11, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, release=1, tcib_managed=true, com.redhat.component=openstack-nova-api-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:06:30 standalone.localdomain podman[123480]: 2025-10-13 14:06:30.862846929 +0000 UTC m=+0.120580456 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, com.redhat.component=openstack-nova-api-container, container_name=nova_api_cron, batch=17.1_20250721.1, build-date=2025-07-21T16:05:11, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f)
Oct 13 14:06:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:06:30 standalone.localdomain podman[123478]: 2025-10-13 14:06:30.873458411 +0000 UTC m=+0.133214816 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, architecture=x86_64, build-date=2025-07-21T15:22:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, container_name=barbican_api, vcs-type=git, name=rhosp17/openstack-barbican-api, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-barbican-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12)
Oct 13 14:06:30 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:06:30 standalone.localdomain podman[123479]: 2025-10-13 14:06:30.888151859 +0000 UTC m=+0.144941525 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:36:22, maintainer=OpenStack TripleO Team, architecture=x86_64, 
distribution-scope=public, config_id=tripleo_step3, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-barbican-worker, container_name=barbican_worker, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-worker-container, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, description=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:06:30 standalone.localdomain podman[123478]: 2025-10-13 14:06:30.904893295 +0000 UTC m=+0.164649720 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhosp17/openstack-barbican-api, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, description=Red Hat OpenStack Platform 17.1 barbican-api, vcs-type=git, com.redhat.component=openstack-barbican-api-container, container_name=barbican_api, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T15:22:44)
Oct 13 14:06:30 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:06:30 standalone.localdomain podman[123479]: 2025-10-13 14:06:30.960322056 +0000 UTC m=+0.217111712 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T15:36:22, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, name=rhosp17/openstack-barbican-worker, container_name=barbican_worker, version=17.1.9, config_id=tripleo_step3, 
com.redhat.component=openstack-barbican-worker-container, io.openshift.expose-services=, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, description=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12)
Oct 13 14:06:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:06:30 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:06:30 standalone.localdomain podman[123525]: 2025-10-13 14:06:30.978606823 +0000 UTC m=+0.094276331 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', 
'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, tcib_managed=true, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=neutron_dhcp, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible)
Oct 13 14:06:31 standalone.localdomain python3[123477]: ansible-systemd Invoked with state=restarted name=tripleo_nova_conductor.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:31 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:31 standalone.localdomain podman[123557]: 2025-10-13 14:06:31.062794709 +0000 UTC m=+0.078507828 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, release=1, tcib_managed=true, com.redhat.component=openstack-neutron-sriov-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-07-21T16:03:34, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.buildah.version=1.33.12, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, container_name=neutron_sriov_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, batch=17.1_20250721.1)
Oct 13 14:06:31 standalone.localdomain podman[123525]: 2025-10-13 14:06:31.085067109 +0000 UTC m=+0.200736657 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, vendor=Red Hat, Inc., container_name=neutron_dhcp, distribution-scope=public, com.redhat.component=openstack-neutron-dhcp-agent-container, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54)
Oct 13 14:06:31 standalone.localdomain podman[123557]: 2025-10-13 14:06:31.131873504 +0000 UTC m=+0.147586623 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-sriov-agent, build-date=2025-07-21T16:03:34, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, com.redhat.component=openstack-neutron-sriov-agent-container, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, container_name=neutron_sriov_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git)
Oct 13 14:06:31 standalone.localdomain systemd-sysv-generator[123617]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:31 standalone.localdomain systemd-rc-local-generator[123613]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:31 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:31 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:06:31 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:06:31 standalone.localdomain systemd[1]: Starting nova_conductor container...
Oct 13 14:06:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v942: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:31 standalone.localdomain systemd[1]: Started nova_conductor container.
Oct 13 14:06:32 standalone.localdomain python3[123648]: ansible-systemd Invoked with state=restarted name=tripleo_nova_metadata.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:32 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:32 standalone.localdomain ceph-mon[29756]: pgmap v942: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:32 standalone.localdomain systemd-sysv-generator[123685]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:32 standalone.localdomain systemd-rc-local-generator[123678]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:32 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:33 standalone.localdomain systemd[1]: Starting nova_metadata container...
Oct 13 14:06:33 standalone.localdomain systemd[1]: Started nova_metadata container.
Oct 13 14:06:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v943: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:33 standalone.localdomain ceph-mon[29756]: pgmap v943: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:34 standalone.localdomain python3[123720]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:34 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:34 standalone.localdomain systemd-rc-local-generator[123771]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:34 standalone.localdomain systemd-sysv-generator[123778]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:34 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:06:34 standalone.localdomain systemd[1]: Starting nova_migration_target container...
Oct 13 14:06:34 standalone.localdomain systemd[1]: Started nova_migration_target container.
Oct 13 14:06:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v944: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:35 standalone.localdomain python3[123902]: ansible-systemd Invoked with state=restarted name=tripleo_nova_scheduler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:35 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:35 standalone.localdomain systemd-rc-local-generator[123949]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:35 standalone.localdomain systemd-sysv-generator[123957]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:36 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:06:36 standalone.localdomain systemd[1]: Starting nova_scheduler container...
Oct 13 14:06:36 standalone.localdomain podman[123964]: 2025-10-13 14:06:36.438629437 +0000 UTC m=+0.091560181 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-northd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, build-date=2025-07-21T13:30:04, vendor=Red Hat, Inc., container_name=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, config_id=ovn_cluster_northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64)
Oct 13 14:06:36 standalone.localdomain podman[123964]: 2025-10-13 14:06:36.452812818 +0000 UTC m=+0.105743582 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T13:30:04, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, config_id=ovn_cluster_northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, com.redhat.component=openstack-ovn-northd-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-northd, container_name=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:06:36 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:06:36 standalone.localdomain systemd[1]: Started nova_scheduler container.
Oct 13 14:06:36 standalone.localdomain ceph-mon[29756]: pgmap v944: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:06:36 standalone.localdomain systemd[1]: tmp-crun.PBb6qp.mount: Deactivated successfully.
Oct 13 14:06:36 standalone.localdomain podman[124045]: 2025-10-13 14:06:36.813711166 +0000 UTC m=+0.081695665 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, com.redhat.component=openstack-mariadb-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-mariadb, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, container_name=clustercheck, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_id=tripleo_step2, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 14:06:36 standalone.localdomain podman[124045]: 2025-10-13 14:06:36.893012726 +0000 UTC m=+0.160997215 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, container_name=clustercheck, com.redhat.component=openstack-mariadb-container, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, summary=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45)
Oct 13 14:06:36 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:06:37 standalone.localdomain python3[124125]: ansible-systemd Invoked with state=restarted name=tripleo_nova_vnc_proxy.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:37 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v945: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:37 standalone.localdomain systemd-sysv-generator[124207]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:37 standalone.localdomain systemd-rc-local-generator[124203]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:37 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:38 standalone.localdomain systemd[1]: Starting nova_vnc_proxy container...
Oct 13 14:06:38 standalone.localdomain systemd[1]: Started nova_vnc_proxy container.
Oct 13 14:06:38 standalone.localdomain runuser[124231]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:06:38 standalone.localdomain ceph-mon[29756]: pgmap v945: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:39 standalone.localdomain python3[124285]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:39 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:39 standalone.localdomain runuser[124231]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:06:39 standalone.localdomain systemd-rc-local-generator[124328]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:39 standalone.localdomain systemd-sysv-generator[124335]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:39 standalone.localdomain runuser[124323]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:06:39 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v946: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:06:39 standalone.localdomain systemd[1]: Starting ovn_controller container...
Oct 13 14:06:39 standalone.localdomain tripleo-start-podman-container[124390]: Creating additional drop-in dependency for "ovn_controller" (c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80)
Oct 13 14:06:39 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:39 standalone.localdomain runuser[124323]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:06:39 standalone.localdomain runuser[124437]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:06:40 standalone.localdomain systemd-sysv-generator[124508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:40 standalone.localdomain systemd-rc-local-generator[124504]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:40 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:40 standalone.localdomain systemd[1]: Started ovn_controller container.
Oct 13 14:06:40 standalone.localdomain sudo[124524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:06:40 standalone.localdomain sudo[124524]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:06:40 standalone.localdomain sudo[124524]: pam_unix(sudo:session): session closed for user root
Oct 13 14:06:40 standalone.localdomain sudo[124541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:06:40 standalone.localdomain sudo[124541]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:06:40 standalone.localdomain runuser[124437]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:06:40 standalone.localdomain ceph-mon[29756]: pgmap v946: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:41 standalone.localdomain sudo[124541]: pam_unix(sudo:session): session closed for user root
Oct 13 14:06:41 standalone.localdomain python3[124582]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:41 standalone.localdomain sudo[124600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:06:41 standalone.localdomain sudo[124600]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:06:41 standalone.localdomain sudo[124600]: pam_unix(sudo:session): session closed for user root
Oct 13 14:06:41 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:41 standalone.localdomain sudo[124618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Oct 13 14:06:41 standalone.localdomain systemd-rc-local-generator[124655]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:41 standalone.localdomain systemd-sysv-generator[124659]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v947: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:41 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:41 standalone.localdomain sudo[124618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:06:41 standalone.localdomain systemd[1]: Starting ovn_metadata_agent container...
Oct 13 14:06:42 standalone.localdomain systemd[1]: Started ovn_metadata_agent container.
Oct 13 14:06:42 standalone.localdomain sudo[124618]: pam_unix(sudo:session): session closed for user root
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:06:42 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev e1fd9be0-17f4-4a8a-b62b-640a557c1e05 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:06:42 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev e1fd9be0-17f4-4a8a-b62b-640a557c1e05 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:06:42 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event e1fd9be0-17f4-4a8a-b62b-640a557c1e05 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:06:42 standalone.localdomain sudo[124709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:06:42 standalone.localdomain sudo[124709]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:06:42 standalone.localdomain sudo[124709]: pam_unix(sudo:session): session closed for user root
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: pgmap v947: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:06:42 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:06:42 standalone.localdomain python3[124727]: ansible-systemd Invoked with state=restarted name=tripleo_placement_api.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:42 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:43 standalone.localdomain systemd-sysv-generator[124766]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:43 standalone.localdomain systemd-rc-local-generator[124762]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:43 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:43 standalone.localdomain systemd[1]: Starting placement_api container...
Oct 13 14:06:43 standalone.localdomain systemd[1]: Started placement_api container.
Oct 13 14:06:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v948: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:43 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 44 completed events
Oct 13 14:06:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:06:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:06:44 standalone.localdomain python3[124797]: ansible-systemd Invoked with state=restarted name=tripleo_swift_account_reaper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:44 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:06:44 standalone.localdomain systemd-sysv-generator[124879]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:44 standalone.localdomain systemd-rc-local-generator[124875]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:44 standalone.localdomain ceph-mon[29756]: pgmap v948: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:06:44 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:06:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:06:45 standalone.localdomain systemd[1]: Starting swift_account_reaper container...
Oct 13 14:06:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:06:45 standalone.localdomain systemd[1]: Started swift_account_reaper container.
Oct 13 14:06:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:06:45 standalone.localdomain podman[124898]: 2025-10-13 14:06:45.125795766 +0000 UTC m=+0.076969177 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, 
summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, release=1, description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:06:45 standalone.localdomain systemd[1]: tmp-crun.3WgMA2.mount: Deactivated successfully.
Oct 13 14:06:45 standalone.localdomain podman[124899]: 2025-10-13 14:06:45.20166316 +0000 UTC m=+0.151003675 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-keystone, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, build-date=2025-07-21T13:27:18, description=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone, 
batch=17.1_20250721.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:06:45 standalone.localdomain podman[124898]: 2025-10-13 14:06:45.215775324 +0000 UTC m=+0.166948735 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, tcib_managed=true, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2)
Oct 13 14:06:45 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:06:45 standalone.localdomain podman[124936]: 2025-10-13 14:06:45.26834594 +0000 UTC m=+0.135385754 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, name=rhosp17/openstack-cinder-scheduler, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T16:10:12, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, container_name=cinder_scheduler, io.openshift.expose-services=, com.redhat.component=openstack-cinder-scheduler-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, 
io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, distribution-scope=public)
Oct 13 14:06:45 standalone.localdomain podman[124899]: 2025-10-13 14:06:45.32068569 +0000 UTC m=+0.270026255 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, name=rhosp17/openstack-keystone, container_name=keystone, config_id=tripleo_step3, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, release=1, architecture=x86_64, io.openshift.expose-services=, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:18, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:06:45 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:06:45 standalone.localdomain podman[124936]: 2025-10-13 14:06:45.344893185 +0000 UTC m=+0.211932999 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, tcib_managed=true, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, com.redhat.component=openstack-cinder-scheduler-container, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-type=git, name=rhosp17/openstack-cinder-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_scheduler, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, architecture=x86_64, build-date=2025-07-21T16:10:12)
Oct 13 14:06:45 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:06:45 standalone.localdomain podman[124943]: 2025-10-13 14:06:45.322788075 +0000 UTC m=+0.174832468 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, io.openshift.expose-services=, release=1, container_name=cinder_api_cron, description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.buildah.version=1.33.12, build-date=2025-07-21T15:58:55, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, name=rhosp17/openstack-cinder-api, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:06:45 standalone.localdomain podman[124943]: 2025-10-13 14:06:45.402140525 +0000 UTC m=+0.254184878 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, name=rhosp17/openstack-cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, managed_by=tripleo_ansible, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, com.redhat.component=openstack-cinder-api-container, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=cinder_api_cron, version=17.1.9, build-date=2025-07-21T15:58:55, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, batch=17.1_20250721.1)
Oct 13 14:06:45 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:06:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v949: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:46 standalone.localdomain python3[125078]: ansible-systemd Invoked with state=restarted name=tripleo_swift_account_server.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:46 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:46 standalone.localdomain systemd-rc-local-generator[125114]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:46 standalone.localdomain systemd-sysv-generator[125118]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:46 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:46 standalone.localdomain ceph-mon[29756]: pgmap v949: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:06:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:06:46 standalone.localdomain systemd[1]: Starting swift_account_server container...
Oct 13 14:06:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:06:46 standalone.localdomain systemd[1]: tmp-crun.CRUmNd.mount: Deactivated successfully.
Oct 13 14:06:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:06:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:06:46 standalone.localdomain podman[125167]: 2025-10-13 14:06:46.878077517 +0000 UTC m=+0.085035456 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, distribution-scope=public, build-date=2025-07-21T13:58:15, vcs-type=git, description=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 horizon, com.redhat.component=openstack-horizon-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, version=17.1.9, name=rhosp17/openstack-horizon, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, container_name=horizon, config_id=tripleo_step3)
Oct 13 14:06:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:06:46 standalone.localdomain podman[125224]: 2025-10-13 14:06:46.981383044 +0000 UTC m=+0.077707611 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, container_name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-api, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., build-date=2025-07-21T15:56:26, summary=Red Hat 
OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=)
Oct 13 14:06:47 standalone.localdomain podman[125166]: 2025-10-13 14:06:46.959575253 +0000 UTC m=+0.165723458 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, description=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_id=tripleo_step1, container_name=memcached, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, architecture=x86_64, 
release=1, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git)
Oct 13 14:06:47 standalone.localdomain podman[125179]: 2025-10-13 14:06:46.912794804 +0000 UTC m=+0.109571671 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, container_name=heat_api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, name=rhosp17/openstack-heat-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:06:47 standalone.localdomain podman[125270]: 2025-10-13 14:06:47.023774337 +0000 UTC m=+0.061376588 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, 
build-date=2025-07-21T15:44:11, tcib_managed=true, name=rhosp17/openstack-heat-engine, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-heat-engine-container, io.buildah.version=1.33.12, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3)
Oct 13 14:06:47 standalone.localdomain podman[125166]: 2025-10-13 14:06:47.043783283 +0000 UTC m=+0.249931468 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T12:58:43, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-memcached, io.buildah.version=1.33.12, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, batch=17.1_20250721.1, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, 
com.redhat.component=openstack-memcached-container, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:06:47 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:06:47 standalone.localdomain podman[125270]: 2025-10-13 14:06:47.074727624 +0000 UTC m=+0.112329875 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-heat-engine, version=17.1.9, build-date=2025-07-21T15:44:11, description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, maintainer=OpenStack TripleO Team, container_name=heat_engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true)
Oct 13 14:06:47 standalone.localdomain systemd[1]: Started swift_account_server container.
Oct 13 14:06:47 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:06:47 standalone.localdomain podman[125179]: 2025-10-13 14:06:47.09474933 +0000 UTC m=+0.291526217 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, build-date=2025-07-21T15:56:26, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, container_name=heat_api, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9)
Oct 13 14:06:47 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:06:47 standalone.localdomain podman[125224]: 2025-10-13 14:06:47.118441389 +0000 UTC m=+0.214765966 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:56:26, managed_by=tripleo_ansible, vcs-type=git, container_name=heat_api_cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530)
Oct 13 14:06:47 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:06:47 standalone.localdomain podman[125243]: 2025-10-13 14:06:47.136137703 +0000 UTC m=+0.217735208 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, distribution-scope=public, release=1, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, 
version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team)
Oct 13 14:06:47 standalone.localdomain podman[125167]: 2025-10-13 14:06:47.165098703 +0000 UTC m=+0.372056712 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, distribution-scope=public, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, com.redhat.component=openstack-horizon-container, container_name=horizon, description=Red Hat OpenStack Platform 17.1 horizon, build-date=2025-07-21T13:58:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, summary=Red Hat OpenStack Platform 17.1 horizon, 
release=1, io.openshift.expose-services=, config_id=tripleo_step3, name=rhosp17/openstack-horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 14:06:47 standalone.localdomain podman[125243]: 2025-10-13 14:06:47.173702078 +0000 UTC m=+0.255299553 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, 
release=1, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team)
Oct 13 14:06:47 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:06:47 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:06:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v950: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:06:47 standalone.localdomain ceph-mon[29756]: pgmap v950: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:47 standalone.localdomain podman[125400]: 2025-10-13 14:06:47.816213028 +0000 UTC m=+0.086155181 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, version=17.1.9, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, name=rhosp17/openstack-manila-api, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, io.buildah.version=1.33.12, 
io.openshift.expose-services=, com.redhat.component=openstack-manila-api-container, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 manila-api, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T16:06:43, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vendor=Red Hat, Inc., container_name=manila_api_cron)
Oct 13 14:06:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:06:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:06:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:06:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:06:47 standalone.localdomain podman[125400]: 2025-10-13 14:06:47.850993787 +0000 UTC m=+0.120935920 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, release=1, summary=Red Hat OpenStack Platform 17.1 manila-api, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, build-date=2025-07-21T16:06:43, io.openshift.expose-services=, container_name=manila_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, vendor=Red Hat, Inc., com.redhat.component=openstack-manila-api-container, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64)
Oct 13 14:06:47 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:06:47 standalone.localdomain podman[125423]: 2025-10-13 14:06:47.93562857 +0000 UTC m=+0.091798693 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, release=1, architecture=x86_64, build-date=2025-07-21T15:44:03, summary=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-server, container_name=neutron_api, description=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, tcib_managed=true)
Oct 13 14:06:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:06:47 standalone.localdomain podman[125421]: 2025-10-13 14:06:47.993277443 +0000 UTC m=+0.156945317 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, com.redhat.component=openstack-heat-api-cfn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.buildah.version=1.33.12, batch=17.1_20250721.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, container_name=heat_api_cfn, version=17.1.9, 
build-date=2025-07-21T14:49:55, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, name=rhosp17/openstack-heat-api-cfn, distribution-scope=public)
Oct 13 14:06:48 standalone.localdomain podman[125481]: 2025-10-13 14:06:48.035191142 +0000 UTC m=+0.082305151 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, com.redhat.component=openstack-nova-conductor-container, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, container_name=nova_conductor, description=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-conductor, build-date=2025-07-21T15:44:17, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.buildah.version=1.33.12)
Oct 13 14:06:48 standalone.localdomain python3[125411]: ansible-systemd Invoked with state=restarted name=tripleo_swift_container_server.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:48 standalone.localdomain podman[125422]: 2025-10-13 14:06:48.052554266 +0000 UTC m=+0.212400063 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, name=rhosp17/openstack-manila-scheduler, tcib_managed=true, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, build-date=2025-07-21T15:56:28, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-manila-scheduler-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-scheduler, container_name=manila_scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']})
Oct 13 14:06:48 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:48 standalone.localdomain podman[125434]: 2025-10-13 14:06:48.103808362 +0000 UTC m=+0.252932669 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cinder-api, container_name=cinder_api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-cinder-api, architecture=x86_64, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp 
openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, build-date=2025-07-21T15:58:55, tcib_managed=true, distribution-scope=public)
Oct 13 14:06:48 standalone.localdomain podman[125423]: 2025-10-13 14:06:48.112889192 +0000 UTC m=+0.269059295 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, com.redhat.component=openstack-neutron-server-container, version=17.1.9, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, name=rhosp17/openstack-neutron-server, container_name=neutron_api, description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, distribution-scope=public, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:44:03, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git)
Oct 13 14:06:48 standalone.localdomain podman[125421]: 2025-10-13 14:06:48.126387358 +0000 UTC m=+0.290055202 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, com.redhat.component=openstack-heat-api-cfn-container, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=heat_api_cfn, tcib_managed=true, build-date=2025-07-21T14:49:55, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, release=1, managed_by=tripleo_ansible, batch=17.1_20250721.1)
Oct 13 14:06:48 standalone.localdomain podman[125434]: 2025-10-13 14:06:48.137912942 +0000 UTC m=+0.287037259 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, name=rhosp17/openstack-cinder-api, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-api-container, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, release=1, build-date=2025-07-21T15:58:55, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_api, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git)
Oct 13 14:06:48 standalone.localdomain podman[125481]: 2025-10-13 14:06:48.155717549 +0000 UTC m=+0.202831578 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, container_name=nova_conductor, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, 
summary=Red Hat OpenStack Platform 17.1 nova-conductor, config_id=tripleo_step4, com.redhat.component=openstack-nova-conductor-container, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T15:44:17, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:06:48 standalone.localdomain podman[125422]: 2025-10-13 14:06:48.18076944 +0000 UTC m=+0.340615257 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, batch=17.1_20250721.1, name=rhosp17/openstack-manila-scheduler, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=manila_scheduler, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, build-date=2025-07-21T15:56:28, com.redhat.component=openstack-manila-scheduler-container, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.tags=rhosp osp 
openstack osp-17.1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, distribution-scope=public, version=17.1.9)
Oct 13 14:06:48 standalone.localdomain systemd-rc-local-generator[125571]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:48 standalone.localdomain systemd-sysv-generator[125575]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:48 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:48 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:06:48 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:06:48 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:06:48 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:06:48 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:06:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:06:48 standalone.localdomain systemd[1]: Starting swift_container_server container...
Oct 13 14:06:48 standalone.localdomain podman[125586]: 2025-10-13 14:06:48.820540865 +0000 UTC m=+0.085523421 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-scheduler-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-scheduler, version=17.1.9, container_name=nova_scheduler, name=rhosp17/openstack-nova-scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20250721.1, distribution-scope=public, release=1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T16:02:54)
Oct 13 14:06:48 standalone.localdomain systemd[1]: Started swift_container_server container.
Oct 13 14:06:48 standalone.localdomain podman[125586]: 2025-10-13 14:06:48.850922869 +0000 UTC m=+0.115905435 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, com.redhat.component=openstack-nova-scheduler-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-scheduler, build-date=2025-07-21T16:02:54, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, batch=17.1_20250721.1, container_name=nova_scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, summary=Red Hat OpenStack Platform 17.1 nova-scheduler)
Oct 13 14:06:48 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:06:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v951: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:06:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:06:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:06:49 standalone.localdomain podman[125634]: 2025-10-13 14:06:49.814654878 +0000 UTC m=+0.081609471 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, batch=17.1_20250721.1, com.redhat.component=openstack-nova-novncproxy-container, config_id=tripleo_step4, container_name=nova_vnc_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:24:10, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, architecture=x86_64, 
description=Red Hat OpenStack Platform 17.1 nova-novncproxy, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:06:49 standalone.localdomain podman[125633]: 2025-10-13 14:06:49.8768243 +0000 UTC m=+0.144805764 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, release=1)
Oct 13 14:06:49 standalone.localdomain python3[125632]: ansible-systemd Invoked with state=restarted name=tripleo_swift_container_updater.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:06:49 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:50 standalone.localdomain podman[125685]: 2025-10-13 14:06:50.042820876 +0000 UTC m=+0.067832428 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-swift-account-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 14:06:50 standalone.localdomain systemd-sysv-generator[125735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:50 standalone.localdomain systemd-rc-local-generator[125731]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:50 standalone.localdomain podman[125634]: 2025-10-13 14:06:50.150843877 +0000 UTC m=+0.417798460 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, name=rhosp17/openstack-nova-novncproxy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, version=17.1.9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, batch=17.1_20250721.1, release=1, build-date=2025-07-21T15:24:10, com.redhat.component=openstack-nova-novncproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, 
container_name=nova_vnc_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Oct 13 14:06:50 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:50 standalone.localdomain podman[125633]: 2025-10-13 14:06:50.24654807 +0000 UTC m=+0.514529524 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, version=17.1.9)
Oct 13 14:06:50 standalone.localdomain podman[125685]: 2025-10-13 14:06:50.285897871 +0000 UTC m=+0.310909403 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., container_name=swift_account_server, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-account, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 14:06:50 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:06:50 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:06:50 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:06:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:06:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:06:50 standalone.localdomain systemd[1]: Starting swift_container_updater container...
Oct 13 14:06:50 standalone.localdomain systemd[1]: Started swift_container_updater container.
Oct 13 14:06:50 standalone.localdomain podman[125753]: 2025-10-13 14:06:50.607295675 +0000 UTC m=+0.073923984 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.component=openstack-swift-object-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, config_id=tripleo_step4)
Oct 13 14:06:50 standalone.localdomain podman[125754]: 2025-10-13 14:06:50.663638738 +0000 UTC m=+0.129365960 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, release=1, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., 
description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, distribution-scope=public)
Oct 13 14:06:50 standalone.localdomain ceph-mon[29756]: pgmap v951: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:50 standalone.localdomain systemd[1]: tmp-crun.pNPxCV.mount: Deactivated successfully.
Oct 13 14:06:50 standalone.localdomain podman[125753]: 2025-10-13 14:06:50.829757377 +0000 UTC m=+0.296385696 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, container_name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, name=rhosp17/openstack-swift-object, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-object, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, vendor=Red Hat, Inc.)
Oct 13 14:06:50 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:06:50 standalone.localdomain podman[125754]: 2025-10-13 14:06:50.90592841 +0000 UTC m=+0.371655692 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=6c85b2809937c86a19237cd4252af09396645905, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, io.buildah.version=1.33.12, version=17.1.9, 
managed_by=tripleo_ansible, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-swift-container-container)
Oct 13 14:06:50 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:06:51 standalone.localdomain runuser[125834]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:06:51 standalone.localdomain python3[125829]: ansible-systemd Invoked with state=restarted name=tripleo_swift_object_expirer.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:51 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v952: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:51 standalone.localdomain systemd-sysv-generator[125900]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:51 standalone.localdomain systemd-rc-local-generator[125897]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:51 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:51 standalone.localdomain runuser[125834]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:06:52 standalone.localdomain runuser[125940]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:06:52 standalone.localdomain systemd[1]: Starting swift_object_expirer container...
Oct 13 14:06:52 standalone.localdomain systemd[1]: Started swift_object_expirer container.
Oct 13 14:06:52 standalone.localdomain ceph-mon[29756]: pgmap v952: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:52 standalone.localdomain runuser[125940]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:06:52 standalone.localdomain runuser[126010]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:06:53 standalone.localdomain python3[126026]: ansible-systemd Invoked with state=restarted name=tripleo_swift_object_server.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:53 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:06:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:06:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:06:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:06:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:06:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:06:53 standalone.localdomain systemd-rc-local-generator[126086]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:53 standalone.localdomain systemd-sysv-generator[126093]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:53 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v953: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:53 standalone.localdomain runuser[126010]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:06:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:06:53 standalone.localdomain systemd[1]: Starting swift_object_server container...
Oct 13 14:06:53 standalone.localdomain systemd[1]: tmp-crun.D0HcK9.mount: Deactivated successfully.
Oct 13 14:06:53 standalone.localdomain podman[126116]: 2025-10-13 14:06:53.791334132 +0000 UTC m=+0.077478668 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, config_id=tripleo_step4, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1)
Oct 13 14:06:53 standalone.localdomain systemd[1]: Started swift_object_server container.
Oct 13 14:06:53 standalone.localdomain podman[126116]: 2025-10-13 14:06:53.835706847 +0000 UTC m=+0.121851423 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, release=1, 
tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1)
Oct 13 14:06:53 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:06:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:06:54 standalone.localdomain ceph-mon[29756]: pgmap v953: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:06:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:06:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:06:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:06:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:06:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:06:54 standalone.localdomain python3[126198]: ansible-systemd Invoked with state=restarted name=tripleo_swift_object_updater.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:54 standalone.localdomain podman[126252]: 2025-10-13 14:06:54.830955974 +0000 UTC m=+0.096512669 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, release=1, com.redhat.component=openstack-glance-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', 
'/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, architecture=x86_64, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true)
Oct 13 14:06:54 standalone.localdomain podman[126256]: 2025-10-13 14:06:54.850842791 +0000 UTC m=+0.107847965 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, vcs-type=git, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:06:54 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:54 standalone.localdomain podman[126252]: 2025-10-13 14:06:54.861299691 +0000 UTC m=+0.126856376 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, container_name=glance_api_cron, distribution-scope=public, release=1, tcib_managed=true, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T13:58:20, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, summary=Red Hat OpenStack Platform 17.1 glance-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:06:54 standalone.localdomain podman[126254]: 2025-10-13 14:06:54.811731027 +0000 UTC m=+0.078122127 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, build-date=2025-07-21T16:05:11, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-api, batch=17.1_20250721.1, version=17.1.9, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, release=1, vcs-type=git, com.redhat.component=openstack-nova-api-container, container_name=nova_metadata, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true)
Oct 13 14:06:54 standalone.localdomain podman[126254]: 2025-10-13 14:06:54.992831318 +0000 UTC m=+0.259222418 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, release=1, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-api-container, managed_by=tripleo_ansible, container_name=nova_metadata, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T16:05:11, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, name=rhosp17/openstack-nova-api)
Oct 13 14:06:55 standalone.localdomain podman[126253]: 2025-10-13 14:06:55.006241847 +0000 UTC m=+0.270412469 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, version=17.1.9, tcib_managed=true, com.redhat.component=openstack-placement-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-placement-api, vcs-type=git, container_name=placement_api, managed_by=tripleo_ansible, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4)
Oct 13 14:06:55 standalone.localdomain systemd-sysv-generator[126394]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:55 standalone.localdomain podman[126256]: 2025-10-13 14:06:55.025712202 +0000 UTC m=+0.282717346 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:06:55 standalone.localdomain systemd-rc-local-generator[126388]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:55 standalone.localdomain podman[126255]: 2025-10-13 14:06:54.998644316 +0000 UTC m=+0.259536308 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, container_name=glance_api_internal, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-glance-api, config_id=tripleo_step4, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9)
Oct 13 14:06:55 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:55 standalone.localdomain podman[126253]: 2025-10-13 14:06:55.101245619 +0000 UTC m=+0.365416301 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, config_id=tripleo_step4, io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, architecture=x86_64, container_name=placement_api, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:58:12, 
vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, com.redhat.component=openstack-placement-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-placement-api, summary=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 placement-api)
Oct 13 14:06:55 standalone.localdomain podman[126257]: 2025-10-13 14:06:55.113895036 +0000 UTC m=+0.366899277 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, architecture=x86_64, version=17.1.9, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.component=openstack-glance-api-container, vcs-type=git, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=glance_api, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:06:55 standalone.localdomain podman[126255]: 2025-10-13 14:06:55.207760652 +0000 UTC m=+0.468652614 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.openshift.expose-services=, release=1, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=glance_api_internal, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:06:55 standalone.localdomain podman[126257]: 2025-10-13 14:06:55.355035711 +0000 UTC m=+0.608039992 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, tcib_managed=true, com.redhat.component=openstack-glance-api-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, 
io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, config_id=tripleo_step4, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:06:55 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:06:55 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:06:55 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:06:55 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:06:55 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:06:55 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:06:55 standalone.localdomain systemd[1]: Starting swift_object_updater container...
Oct 13 14:06:55 standalone.localdomain systemd[1]: Started swift_object_updater container.
Oct 13 14:06:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v954: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:06:55 standalone.localdomain podman[126533]: 2025-10-13 14:06:55.827098548 +0000 UTC m=+0.095665043 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, container_name=swift_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, version=17.1.9, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, 
managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., architecture=x86_64, release=1)
Oct 13 14:06:56 standalone.localdomain podman[126533]: 2025-10-13 14:06:56.067953685 +0000 UTC m=+0.336520190 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, name=rhosp17/openstack-swift-proxy-server, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, 
version=17.1.9, build-date=2025-07-21T14:48:37, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_proxy, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 14:06:56 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:06:56 standalone.localdomain python3[126562]: ansible-systemd Invoked with state=restarted name=tripleo_swift_proxy.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:06:56 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:06:56 standalone.localdomain systemd-rc-local-generator[126638]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:06:56 standalone.localdomain systemd-sysv-generator[126643]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:06:56 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:06:56 standalone.localdomain ceph-mon[29756]: pgmap v954: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:56 standalone.localdomain systemd[1]: Starting swift_proxy container...
Oct 13 14:06:57 standalone.localdomain systemd[1]: Started swift_proxy container.
Oct 13 14:06:57 standalone.localdomain python3[126719]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:06:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v955: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:06:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:06:57 standalone.localdomain systemd[1]: tmp-crun.9mfaQe.mount: Deactivated successfully.
Oct 13 14:06:57 standalone.localdomain podman[126767]: 2025-10-13 14:06:57.844687311 +0000 UTC m=+0.107585517 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, managed_by=tripleo_ansible, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-keystone, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step3, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=keystone_cron, maintainer=OpenStack TripleO Team)
Oct 13 14:06:57 standalone.localdomain systemd[1]: tmp-crun.LGS3tc.mount: Deactivated successfully.
Oct 13 14:06:57 standalone.localdomain podman[126768]: 2025-10-13 14:06:57.887798058 +0000 UTC m=+0.148186987 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-nova-api-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:05:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, 
config_id=tripleo_step4, release=1, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, container_name=nova_api, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12)
Oct 13 14:06:57 standalone.localdomain podman[126768]: 2025-10-13 14:06:57.918851826 +0000 UTC m=+0.179240785 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, 
version=17.1.9, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T16:05:11, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20250721.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, container_name=nova_api)
Oct 13 14:06:57 standalone.localdomain podman[126767]: 2025-10-13 14:06:57.926737797 +0000 UTC m=+0.189635963 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, version=17.1.9, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, release=1, container_name=keystone_cron, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, config_id=tripleo_step3, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:06:57 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:06:57 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:06:58 standalone.localdomain python3[126840]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=standalone step=4 update_config_hash_only=False
Oct 13 14:06:58 standalone.localdomain ceph-mon[29756]: pgmap v955: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:59 standalone.localdomain python3[126848]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:06:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v956: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:06:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:07:00 standalone.localdomain python3[126879]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=if ! openstack volume type show "tripleo"; then
                                                            openstack volume type create --public "tripleo"
                                                        fi
                                                        eval $(openstack volume type show __DEFAULT__ -f shell -c id -c description)
                                                        if [ -n "$id" ]; then
                                                            vols=$(openstack volume list -f value -c ID)
                                                            tripleo_descr="For internal use, 'tripleo' is the default volume type"
                                                            if [ -z "$vols" ]; then
                                                                openstack volume type delete $id
                                                            elif [ "$description" != "$tripleo_descr" ]; then
                                                                openstack volume type set $id --description "$tripleo_descr"
                                                            fi
                                                        fi
                                                         _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 13 14:07:00 standalone.localdomain ceph-mon[29756]: pgmap v956: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:07:00 standalone.localdomain systemd[1]: tmp-crun.MMUr5w.mount: Deactivated successfully.
Oct 13 14:07:00 standalone.localdomain podman[126893]: 2025-10-13 14:07:00.830566027 +0000 UTC m=+0.122099220 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, maintainer=OpenStack TripleO Team, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, distribution-scope=public, com.redhat.component=openstack-mariadb-container)
Oct 13 14:07:00 standalone.localdomain systemd[1]: tmp-crun.l0M0Nc.mount: Deactivated successfully.
Oct 13 14:07:00 standalone.localdomain podman[126902]: 2025-10-13 14:07:00.850363732 +0000 UTC m=+0.106406481 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-barbican-keystone-listener-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=1, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, build-date=2025-07-21T16:18:19, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, container_name=barbican_keystone_listener, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, version=17.1.9)
Oct 13 14:07:00 standalone.localdomain podman[126893]: 2025-10-13 14:07:00.863938346 +0000 UTC m=+0.155471539 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vcs-type=git, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, release=1, distribution-scope=public)
Oct 13 14:07:00 standalone.localdomain podman[126902]: 2025-10-13 14:07:00.884980559 +0000 UTC m=+0.141023288 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, build-date=2025-07-21T16:18:19, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, container_name=barbican_keystone_listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-barbican-keystone-listener, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step3, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, com.redhat.component=openstack-barbican-keystone-listener-container, version=17.1.9, architecture=x86_64, distribution-scope=public, release=1, managed_by=tripleo_ansible)
Oct 13 14:07:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:07:00 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:07:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:07:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:07:01 standalone.localdomain podman[126944]: 2025-10-13 14:07:01.012021399 +0000 UTC m=+0.128972760 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:08:11, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-haproxy-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc.)
Oct 13 14:07:01 standalone.localdomain podman[126981]: 2025-10-13 14:07:01.101746099 +0000 UTC m=+0.081057286 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, build-date=2025-07-21T15:36:22, container_name=barbican_worker, name=rhosp17/openstack-barbican-worker, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, batch=17.1_20250721.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-barbican-worker-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, vcs-type=git)
Oct 13 14:07:01 standalone.localdomain podman[126990]: 2025-10-13 14:07:01.11911533 +0000 UTC m=+0.094960801 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, build-date=2025-07-21T13:08:11, description=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-haproxy, batch=17.1_20250721.1, release=1, io.buildah.version=1.33.12, com.redhat.component=openstack-haproxy-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vcs-type=git)
Oct 13 14:07:01 standalone.localdomain podman[126944]: 2025-10-13 14:07:01.122882725 +0000 UTC m=+0.239834086 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, summary=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, com.redhat.component=openstack-haproxy-container, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, build-date=2025-07-21T13:08:11, name=rhosp17/openstack-haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, architecture=x86_64, batch=17.1_20250721.1)
Oct 13 14:07:01 standalone.localdomain podman[126980]: 2025-10-13 14:07:01.137298456 +0000 UTC m=+0.125569647 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, description=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, batch=17.1_20250721.1, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., container_name=barbican_api, io.buildah.version=1.33.12, name=rhosp17/openstack-barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, release=1, com.redhat.component=openstack-barbican-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:22:44, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:07:01 standalone.localdomain podman[126981]: 2025-10-13 14:07:01.148098656 +0000 UTC m=+0.127409823 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, architecture=x86_64, container_name=barbican_worker, config_id=tripleo_step3, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-barbican-worker-container, io.buildah.version=1.33.12, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, release=1, build-date=2025-07-21T15:36:22, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, name=rhosp17/openstack-barbican-worker, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:07:01 standalone.localdomain podman[126953]: 2025-10-13 14:07:01.05363209 +0000 UTC m=+0.144725292 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, io.buildah.version=1.33.12, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, tcib_managed=true, distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, name=rhosp17/openstack-nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, version=17.1.9)
Oct 13 14:07:01 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:07:01 standalone.localdomain podman[126980]: 2025-10-13 14:07:01.169762568 +0000 UTC m=+0.158033779 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, io.buildah.version=1.33.12, config_id=tripleo_step3, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-api, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-api, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:22:44, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.component=openstack-barbican-api-container, version=17.1.9, container_name=barbican_api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team)
Oct 13 14:07:01 standalone.localdomain podman[126953]: 2025-10-13 14:07:01.187255552 +0000 UTC m=+0.278348724 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, name=rhosp17/openstack-nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T16:05:11, config_id=tripleo_step4, architecture=x86_64, release=1, container_name=nova_api_cron, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=)
Oct 13 14:07:01 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:07:01 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:07:01 standalone.localdomain podman[126997]: 2025-10-13 14:07:01.270718311 +0000 UTC m=+0.238807306 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, release=1, batch=17.1_20250721.1, name=rhosp17/openstack-rabbitmq, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, summary=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, com.redhat.component=openstack-rabbitmq-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T13:08:05, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=)
Oct 13 14:07:01 standalone.localdomain podman[126997]: 2025-10-13 14:07:01.302808561 +0000 UTC m=+0.270897566 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-rabbitmq-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, name=rhosp17/openstack-rabbitmq)
Oct 13 14:07:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v957: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:07:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:07:01 standalone.localdomain ceph-mon[29756]: pgmap v957: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:01 standalone.localdomain podman[127090]: 2025-10-13 14:07:01.817780819 +0000 UTC m=+0.083579014 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, container_name=neutron_dhcp, batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T16:28:54, com.redhat.component=openstack-neutron-dhcp-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12)
Oct 13 14:07:01 standalone.localdomain systemd[1]: tmp-crun.U4YaHE.mount: Deactivated successfully.
Oct 13 14:07:01 standalone.localdomain podman[127091]: 2025-10-13 14:07:01.895118812 +0000 UTC m=+0.157455051 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-sriov-agent-container, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=neutron_sriov_agent, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, build-date=2025-07-21T16:03:34, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team)
Oct 13 14:07:01 standalone.localdomain podman[127090]: 2025-10-13 14:07:01.924990443 +0000 UTC m=+0.190788648 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T16:28:54, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, container_name=neutron_dhcp, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Oct 13 14:07:01 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:07:01 standalone.localdomain podman[127091]: 2025-10-13 14:07:01.983851611 +0000 UTC m=+0.246187810 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, name=rhosp17/openstack-neutron-sriov-agent, container_name=neutron_sriov_agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-sriov-agent-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, build-date=2025-07-21T16:03:34, release=1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:07:01 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:07:02 standalone.localdomain haproxy[70940]: 172.21.0.2:53938 [13/Oct/2025:14:07:02.144] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/6/6 300 515 - - ---- 38/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:07:02 standalone.localdomain haproxy[70940]: 172.21.0.2:53938 [13/Oct/2025:14:07:02.151] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/341/341 201 8100 - - ---- 38/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:07:02 standalone.localdomain haproxy[70940]: 172.17.0.2:56340 [13/Oct/2025:14:07:02.613] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/5/5 300 515 - - ---- 40/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:07:03 standalone.localdomain haproxy[70940]: 172.17.0.2:56340 [13/Oct/2025:14:07:02.621] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/419/419 201 8166 - - ---- 40/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:07:03 standalone.localdomain haproxy[70940]: 172.17.0.2:56340 [13/Oct/2025:14:07:03.044] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/38/38 200 8095 - - ---- 40/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:07:03 standalone.localdomain haproxy[70940]: 172.21.0.2:49262 [13/Oct/2025:14:07:02.598] cinder cinder/standalone.internalapi.localdomain 0/0/1/709/710 404 393 - - ---- 41/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/types/tripleo HTTP/1.1"
Oct 13 14:07:03 standalone.localdomain haproxy[70940]: 172.21.0.2:49262 [13/Oct/2025:14:07:03.312] cinder cinder/standalone.internalapi.localdomain 0/0/0/15/15 200 321 - - ---- 41/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/types?name=tripleo&is_public=None HTTP/1.1"
Oct 13 14:07:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v958: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:04 standalone.localdomain runuser[127174]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:07:04 standalone.localdomain ceph-mon[29756]: pgmap v958: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:07:04 standalone.localdomain runuser[127174]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:07:05 standalone.localdomain haproxy[70940]: 172.21.0.2:53954 [13/Oct/2025:14:07:05.113] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 40/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:07:05 standalone.localdomain runuser[127304]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:07:05 standalone.localdomain haproxy[70940]: 172.21.0.2:53954 [13/Oct/2025:14:07:05.117] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/321/321 201 8100 - - ---- 40/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:07:05 standalone.localdomain haproxy[70940]: 172.17.0.2:56340 [13/Oct/2025:14:07:05.474] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/46/46 200 8095 - - ---- 41/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:07:05 standalone.localdomain haproxy[70940]: 172.21.0.2:49272 [13/Oct/2025:14:07:05.470] cinder cinder/standalone.internalapi.localdomain 0/0/0/84/84 200 484 - - ---- 41/1/0/0/0 0/0 "POST /v3/e44641a80bcb466cb3dd688e48b72d8e/types HTTP/1.1"
Oct 13 14:07:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v959: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:05 standalone.localdomain runuser[127304]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:07:05 standalone.localdomain runuser[127450]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:07:06 standalone.localdomain runuser[127450]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:07:06 standalone.localdomain ceph-mon[29756]: pgmap v959: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:07:06 standalone.localdomain systemd[1]: tmp-crun.VPetFI.mount: Deactivated successfully.
Oct 13 14:07:06 standalone.localdomain podman[127560]: 2025-10-13 14:07:06.833354497 +0000 UTC m=+0.098188960 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=ovn_cluster_northd, com.redhat.component=openstack-ovn-northd-container, name=rhosp17/openstack-ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, container_name=ovn_cluster_northd, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:30:04, release=1, version=17.1.9, distribution-scope=public, tcib_managed=true)
Oct 13 14:07:06 standalone.localdomain podman[127560]: 2025-10-13 14:07:06.848441298 +0000 UTC m=+0.113275711 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, managed_by=tripleo_ansible, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, com.redhat.component=openstack-ovn-northd-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vcs-type=git, release=1, config_id=ovn_cluster_northd, version=17.1.9, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:30:04, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, container_name=ovn_cluster_northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, distribution-scope=public)
Oct 13 14:07:06 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:07:07 standalone.localdomain haproxy[70940]: 172.21.0.2:53956 [13/Oct/2025:14:07:07.380] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 40/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:07:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v960: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:07 standalone.localdomain haproxy[70940]: 172.21.0.2:53956 [13/Oct/2025:14:07:07.386] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/302/302 201 8100 - - ---- 40/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:07:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:07:07 standalone.localdomain haproxy[70940]: 172.17.0.2:56340 [13/Oct/2025:14:07:07.721] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/38/38 200 8095 - - ---- 41/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:07:07 standalone.localdomain haproxy[70940]: 172.21.0.2:49282 [13/Oct/2025:14:07:07.715] cinder cinder/standalone.internalapi.localdomain 0/0/0/62/62 404 397 - - ---- 41/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/types/__DEFAULT__ HTTP/1.1"
Oct 13 14:07:07 standalone.localdomain haproxy[70940]: 172.21.0.2:49282 [13/Oct/2025:14:07:07.782] cinder cinder/standalone.internalapi.localdomain 0/0/0/11/11 200 530 - - ---- 41/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/types?name=__DEFAULT__&is_public=None HTTP/1.1"
Oct 13 14:07:07 standalone.localdomain podman[127678]: 2025-10-13 14:07:07.838110825 +0000 UTC m=+0.091118594 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.buildah.version=1.33.12, name=rhosp17/openstack-mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, container_name=clustercheck, summary=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, release=1, distribution-scope=public, config_id=tripleo_step2, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, architecture=x86_64, tcib_managed=true)
Oct 13 14:07:07 standalone.localdomain podman[127678]: 2025-10-13 14:07:07.893653932 +0000 UTC m=+0.146661641 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, container_name=clustercheck, build-date=2025-07-21T12:58:45, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, batch=17.1_20250721.1, config_id=tripleo_step2, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:07:07 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:07:08 standalone.localdomain ceph-mon[29756]: pgmap v960: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 14:07:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 1800.1 total, 600.0 interval
                                                        Cumulative writes: 4786 writes, 21K keys, 4786 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                        Cumulative WAL: 4786 writes, 621 syncs, 7.71 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 520 writes, 521 keys, 520 commit groups, 1.0 writes per commit group, ingest: 0.15 MB, 0.00 MB/s
                                                        Interval WAL: 520 writes, 260 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 13 14:07:09 standalone.localdomain haproxy[70940]: 172.21.0.2:53958 [13/Oct/2025:14:07:09.515] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 40/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:07:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v961: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:07:09 standalone.localdomain haproxy[70940]: 172.21.0.2:53958 [13/Oct/2025:14:07:09.520] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/309/309 201 8100 - - ---- 40/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:07:09 standalone.localdomain haproxy[70940]: 172.17.0.2:47996 [13/Oct/2025:14:07:09.936] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 42/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:07:10 standalone.localdomain haproxy[70940]: 172.17.0.2:47996 [13/Oct/2025:14:07:09.942] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/392/392 201 8164 - - ---- 42/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:07:10 standalone.localdomain haproxy[70940]: 172.17.0.2:47996 [13/Oct/2025:14:07:10.339] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/35/35 200 8095 - - ---- 42/3/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:07:10 standalone.localdomain haproxy[70940]: 172.21.0.2:36864 [13/Oct/2025:14:07:09.925] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/703/703 200 377 - - ---- 42/1/0/0/0 0/0 "GET /v2.1/servers/detail HTTP/1.1"
Oct 13 14:07:10 standalone.localdomain haproxy[70940]: 172.21.0.2:53516 [13/Oct/2025:14:07:10.633] cinder cinder/standalone.internalapi.localdomain 0/0/0/76/76 200 316 - - ---- 43/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/detail HTTP/1.1"
Oct 13 14:07:10 standalone.localdomain ceph-mon[29756]: pgmap v961: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v962: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:12 standalone.localdomain haproxy[70940]: 172.21.0.2:59176 [13/Oct/2025:14:07:12.465] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/5/5 300 515 - - ---- 41/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:07:12 standalone.localdomain ceph-mon[29756]: pgmap v962: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:12 standalone.localdomain haproxy[70940]: 172.21.0.2:59176 [13/Oct/2025:14:07:12.471] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/341/341 201 8100 - - ---- 41/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:07:12 standalone.localdomain haproxy[70940]: 172.17.0.2:56340 [13/Oct/2025:14:07:12.864] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/27/27 200 8095 - - ---- 42/3/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:07:12 standalone.localdomain haproxy[70940]: 172.21.0.2:53532 [13/Oct/2025:14:07:12.859] cinder cinder/standalone.internalapi.localdomain 0/0/0/52/52 200 527 - - ---- 42/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/types/e5ef7e77-1481-466f-86a0-4a5237613609 HTTP/1.1"
Oct 13 14:07:12 standalone.localdomain haproxy[70940]: 172.21.0.2:53532 [13/Oct/2025:14:07:12.915] cinder cinder/standalone.internalapi.localdomain 0/0/0/72/72 202 254 - - ---- 42/1/0/0/0 0/0 "DELETE /v3/e44641a80bcb466cb3dd688e48b72d8e/types/e5ef7e77-1481-466f-86a0-4a5237613609 HTTP/1.1"
Oct 13 14:07:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v963: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:13 standalone.localdomain python3[127800]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:07:13 standalone.localdomain ceph-mon[29756]: pgmap v963: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:13 standalone.localdomain python3[127814]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/root/.ansible/tmp/ansible-tmp-1760364433.572404-127790-274016529102693/source _original_basename=tmp3mjxlks2 follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:07:14 standalone.localdomain python3[127824]: ansible-ansible.legacy.command Invoked with _raw_params=pcs resource config "openstack-cinder-backup"
                                                         _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:07:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:07:15 standalone.localdomain python3[127891]: ansible-ansible.legacy.command Invoked with _raw_params=puppet apply  --detailed-exitcodes --summarize --color=false --modulepath '/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules' --tags 'pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation' -e 'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::cinder::backup_bundle'
                                                         _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:07:15 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Check health
Oct 13 14:07:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v964: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:07:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:07:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:07:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:07:15 standalone.localdomain podman[127981]: 2025-10-13 14:07:15.830172584 +0000 UTC m=+0.083586245 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, name=rhosp17/openstack-cinder-api, architecture=x86_64, vendor=Red Hat, Inc., release=1, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cinder-api, container_name=cinder_api_cron, build-date=2025-07-21T15:58:55, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git, version=17.1.9)
Oct 13 14:07:15 standalone.localdomain podman[127981]: 2025-10-13 14:07:15.881720968 +0000 UTC m=+0.135134649 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, container_name=cinder_api_cron, build-date=2025-07-21T15:58:55, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, com.redhat.component=openstack-cinder-api-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:07:15 standalone.localdomain podman[127979]: 2025-10-13 14:07:15.889785304 +0000 UTC m=+0.144838595 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, distribution-scope=public, name=rhosp17/openstack-keystone, container_name=keystone, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_id=tripleo_step3, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team)
Oct 13 14:07:15 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:07:15 standalone.localdomain podman[127979]: 2025-10-13 14:07:15.926875327 +0000 UTC m=+0.181928618 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 keystone, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, container_name=keystone, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container)
Oct 13 14:07:15 standalone.localdomain podman[127975]: 2025-10-13 14:07:15.94791615 +0000 UTC m=+0.203365643 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, tcib_managed=true, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., container_name=iscsid, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 13 14:07:15 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:07:15 standalone.localdomain podman[127975]: 2025-10-13 14:07:15.958797112 +0000 UTC m=+0.214246585 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step3, distribution-scope=public, batch=17.1_20250721.1, version=17.1.9)
Oct 13 14:07:15 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:07:16 standalone.localdomain podman[127983]: 2025-10-13 14:07:16.004510238 +0000 UTC m=+0.250406569 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-scheduler, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., container_name=cinder_scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, 
com.redhat.component=openstack-cinder-scheduler-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T16:10:12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler)
Oct 13 14:07:16 standalone.localdomain podman[127983]: 2025-10-13 14:07:16.027774738 +0000 UTC m=+0.273671049 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, com.redhat.component=openstack-cinder-scheduler-container, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T16:10:12, io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, name=rhosp17/openstack-cinder-scheduler, container_name=cinder_scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, release=1, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team)
Oct 13 14:07:16 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:07:16 standalone.localdomain ceph-mon[29756]: pgmap v964: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:17 standalone.localdomain runuser[128137]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:07:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v965: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:17 standalone.localdomain runuser[128137]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:07:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:07:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:07:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:07:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:07:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:07:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:07:17 standalone.localdomain runuser[128385]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:07:17 standalone.localdomain systemd[1]: tmp-crun.6JpjMc.mount: Deactivated successfully.
Oct 13 14:07:17 standalone.localdomain podman[128290]: 2025-10-13 14:07:17.862005321 +0000 UTC m=+0.123626997 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-horizon, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, container_name=horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 horizon, 
version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 horizon, build-date=2025-07-21T13:58:15, com.redhat.component=openstack-horizon-container, io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5)
Oct 13 14:07:17 standalone.localdomain podman[128288]: 2025-10-13 14:07:17.837915835 +0000 UTC m=+0.102140281 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, version=17.1.9, description=Red Hat OpenStack Platform 17.1 memcached, release=1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, config_id=tripleo_step1, container_name=memcached, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-memcached-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', 
'/var/log/containers/memcached:/var/log/memcached:rw']}, managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc.)
Oct 13 14:07:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:07:17 standalone.localdomain podman[128290]: 2025-10-13 14:07:17.914711121 +0000 UTC m=+0.176332787 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, com.redhat.component=openstack-horizon-container, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, build-date=2025-07-21T13:58:15, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, distribution-scope=public, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, container_name=horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, name=rhosp17/openstack-horizon, 
tcib_managed=true, vendor=Red Hat, Inc., release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 horizon, summary=Red Hat OpenStack Platform 17.1 horizon, version=17.1.9, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:07:17 standalone.localdomain podman[128288]: 2025-10-13 14:07:17.922998444 +0000 UTC m=+0.187222880 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, version=17.1.9, distribution-scope=public, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:43, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.openshift.expose-services=, tcib_managed=true, container_name=memcached, summary=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, name=rhosp17/openstack-memcached)
Oct 13 14:07:17 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:07:17 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:07:17 standalone.localdomain podman[128291]: 2025-10-13 14:07:17.813427737 +0000 UTC m=+0.067839723 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api, build-date=2025-07-21T15:56:26, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, release=1)
Oct 13 14:07:17 standalone.localdomain podman[128289]: 2025-10-13 14:07:17.972364042 +0000 UTC m=+0.236289849 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, io.buildah.version=1.33.12, architecture=x86_64, release=1, tcib_managed=true, version=17.1.9, container_name=heat_api_cron, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T15:56:26, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-api-container, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team)
Oct 13 14:07:17 standalone.localdomain podman[128291]: 2025-10-13 14:07:17.996287652 +0000 UTC m=+0.250699608 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, com.redhat.component=openstack-heat-api-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, 
version=17.1.9, architecture=x86_64, build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:07:18 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:07:18 standalone.localdomain podman[128289]: 2025-10-13 14:07:18.006782743 +0000 UTC m=+0.270708500 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, distribution-scope=public, managed_by=tripleo_ansible, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, container_name=heat_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:07:18 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:07:18 standalone.localdomain podman[128287]: 2025-10-13 14:07:17.917775034 +0000 UTC m=+0.182562827 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, com.redhat.component=openstack-heat-engine-container, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, container_name=heat_engine, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 
heat-engine, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, name=rhosp17/openstack-heat-engine, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:07:18 standalone.localdomain podman[128293]: 2025-10-13 14:07:18.071553501 +0000 UTC m=+0.324774001 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-cron-container, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, container_name=logrotate_crond, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20250721.1)
Oct 13 14:07:18 standalone.localdomain podman[128440]: 2025-10-13 14:07:18.134628907 +0000 UTC m=+0.226747997 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, container_name=manila_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-manila-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', 
'/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, release=1, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.component=openstack-manila-api-container)
Oct 13 14:07:18 standalone.localdomain podman[128440]: 2025-10-13 14:07:18.148275444 +0000 UTC m=+0.240394594 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, vcs-type=git, container_name=manila_api_cron, com.redhat.component=openstack-manila-api-container, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T16:06:43, description=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., name=rhosp17/openstack-manila-api, summary=Red Hat OpenStack 
Platform 17.1 manila-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, architecture=x86_64, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api)
Oct 13 14:07:18 standalone.localdomain podman[128293]: 2025-10-13 14:07:18.159306461 +0000 UTC m=+0.412527031 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, release=1, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12)
Oct 13 14:07:18 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:07:18 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:07:18 standalone.localdomain podman[128287]: 2025-10-13 14:07:18.206586595 +0000 UTC m=+0.471374448 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, maintainer=OpenStack TripleO Team, container_name=heat_engine, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-engine-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, 
name=rhosp17/openstack-heat-engine, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T15:44:11, config_id=tripleo_step4, vcs-type=git)
Oct 13 14:07:18 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:07:18 standalone.localdomain runuser[128385]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:07:18 standalone.localdomain runuser[128524]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:07:18 standalone.localdomain ceph-mon[29756]: pgmap v965: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:07:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:07:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:07:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:07:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:07:18 standalone.localdomain podman[128585]: 2025-10-13 14:07:18.84944173 +0000 UTC m=+0.106789324 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, container_name=manila_scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-manila-scheduler-container, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
manila-scheduler, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-manila-scheduler, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T15:56:28, tcib_managed=true, config_id=tripleo_step4)
Oct 13 14:07:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:07:18 standalone.localdomain podman[128585]: 2025-10-13 14:07:18.926669008 +0000 UTC m=+0.184016602 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.component=openstack-manila-scheduler-container, name=rhosp17/openstack-manila-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, container_name=manila_scheduler, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, architecture=x86_64, build-date=2025-07-21T15:56:28, 
io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 manila-scheduler, distribution-scope=public, io.openshift.expose-services=, vcs-type=git)
Oct 13 14:07:18 standalone.localdomain podman[128584]: 2025-10-13 14:07:18.876264259 +0000 UTC m=+0.142538565 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, name=rhosp17/openstack-heat-api-cfn, container_name=heat_api_cfn, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, architecture=x86_64, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, com.redhat.component=openstack-heat-api-cfn-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:07:18 standalone.localdomain podman[128587]: 2025-10-13 14:07:18.831616905 +0000 UTC m=+0.091723553 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, build-date=2025-07-21T15:44:03, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-neutron-server-container, description=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=neutron_api, name=rhosp17/openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:07:18 standalone.localdomain podman[128584]: 2025-10-13 14:07:18.96077634 +0000 UTC m=+0.227050656 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-heat-api-cfn, release=1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, com.redhat.component=openstack-heat-api-cfn-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, build-date=2025-07-21T14:49:55, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12)
Oct 13 14:07:18 standalone.localdomain podman[128679]: 2025-10-13 14:07:18.971977412 +0000 UTC m=+0.079066246 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2025-07-21T16:02:54, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, 
name=rhosp17/openstack-nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, description=Red Hat OpenStack Platform 17.1 nova-scheduler, distribution-scope=public, version=17.1.9, container_name=nova_scheduler, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true)
Oct 13 14:07:18 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:07:18 standalone.localdomain podman[128586]: 2025-10-13 14:07:18.93324734 +0000 UTC m=+0.195651827 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cinder-api, build-date=2025-07-21T15:58:55, container_name=cinder_api, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., com.redhat.component=openstack-cinder-api-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-type=git, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, name=rhosp17/openstack-cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, batch=17.1_20250721.1, tcib_managed=true)
Oct 13 14:07:18 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:07:18 standalone.localdomain podman[128679]: 2025-10-13 14:07:18.995302615 +0000 UTC m=+0.102391459 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, release=1, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, com.redhat.component=openstack-nova-scheduler-container, batch=17.1_20250721.1, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, config_id=tripleo_step4, build-date=2025-07-21T16:02:54, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-scheduler, architecture=x86_64, name=rhosp17/openstack-nova-scheduler, vendor=Red Hat, 
Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, version=17.1.9, io.buildah.version=1.33.12, container_name=nova_scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 14:07:19 standalone.localdomain podman[128589]: 2025-10-13 14:07:18.911743272 +0000 UTC m=+0.163995640 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.expose-services=, container_name=nova_conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, release=1, version=17.1.9, build-date=2025-07-21T15:44:17, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, com.redhat.component=openstack-nova-conductor-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:07:19 standalone.localdomain podman[128587]: 2025-10-13 14:07:19.031929073 +0000 UTC m=+0.292035751 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:44:03, config_id=tripleo_step4, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-neutron-server, version=17.1.9, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, container_name=neutron_api, description=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-neutron-server-container)
Oct 13 14:07:19 standalone.localdomain podman[128589]: 2025-10-13 14:07:19.045941461 +0000 UTC m=+0.298193859 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-conductor, build-date=2025-07-21T15:44:17, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.expose-services=, com.redhat.component=openstack-nova-conductor-container, container_name=nova_conductor)
Oct 13 14:07:19 standalone.localdomain podman[128586]: 2025-10-13 14:07:19.066383735 +0000 UTC m=+0.328788242 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-api, build-date=2025-07-21T15:58:55, version=17.1.9, config_id=tripleo_step4, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, batch=17.1_20250721.1, container_name=cinder_api, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:07:19 standalone.localdomain runuser[128524]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:07:19 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:07:19 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:07:19 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:07:19 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:07:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v966: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:07:20 standalone.localdomain ceph-mon[29756]: pgmap v966: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:07:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:07:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:07:20 standalone.localdomain systemd[1]: tmp-crun.s0GAaG.mount: Deactivated successfully.
Oct 13 14:07:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:07:20 standalone.localdomain systemd[1]: tmp-crun.bBQri1.mount: Deactivated successfully.
Oct 13 14:07:20 standalone.localdomain podman[128818]: 2025-10-13 14:07:20.905018562 +0000 UTC m=+0.166326021 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, container_name=swift_account_server, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, build-date=2025-07-21T16:11:22)
Oct 13 14:07:20 standalone.localdomain podman[128817]: 2025-10-13 14:07:20.818414187 +0000 UTC m=+0.083208102 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, container_name=nova_migration_target, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=)
Oct 13 14:07:20 standalone.localdomain podman[128819]: 2025-10-13 14:07:20.860700409 +0000 UTC m=+0.117412527 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, io.buildah.version=1.33.12, release=1, container_name=nova_vnc_proxy, distribution-scope=public, name=rhosp17/openstack-nova-novncproxy, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, build-date=2025-07-21T15:24:10, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-novncproxy-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:07:20 standalone.localdomain podman[128868]: 2025-10-13 14:07:20.967307335 +0000 UTC m=+0.090408873 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, batch=17.1_20250721.1, container_name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, distribution-scope=public, release=1, com.redhat.component=openstack-swift-object-container, name=rhosp17/openstack-swift-object, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:07:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:07:21 standalone.localdomain podman[128909]: 2025-10-13 14:07:21.045157892 +0000 UTC m=+0.058801227 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, vcs-type=git)
Oct 13 14:07:21 standalone.localdomain podman[128818]: 2025-10-13 14:07:21.108732334 +0000 UTC m=+0.370039763 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=swift_account_server, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, architecture=x86_64, build-date=2025-07-21T16:11:22)
Oct 13 14:07:21 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:07:21 standalone.localdomain podman[128868]: 2025-10-13 14:07:21.148778707 +0000 UTC m=+0.271880235 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, build-date=2025-07-21T14:56:28, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true)
Oct 13 14:07:21 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:07:21 standalone.localdomain podman[128819]: 2025-10-13 14:07:21.172830482 +0000 UTC m=+0.429542650 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, container_name=nova_vnc_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-novncproxy, release=1, io.openshift.expose-services=, com.redhat.component=openstack-nova-novncproxy-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-07-21T15:24:10, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-type=git)
Oct 13 14:07:21 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:07:21 standalone.localdomain podman[128817]: 2025-10-13 14:07:21.200864758 +0000 UTC m=+0.465658653 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:07:21 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:07:21 standalone.localdomain podman[128909]: 2025-10-13 14:07:21.230821183 +0000 UTC m=+0.244464498 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, build-date=2025-07-21T15:54:32, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, release=1, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:07:21 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:07:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v967: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:21 standalone.localdomain crontab[128955]: (root) LIST (root)
Oct 13 14:07:21 standalone.localdomain ceph-mon[29756]: pgmap v967: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:07:23
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_data', 'images', '.mgr', 'volumes', 'vms', 'manila_metadata', 'backups']
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:07:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v968: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:24 standalone.localdomain ceph-mon[29756]: pgmap v968: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:07:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:07:24 standalone.localdomain podman[129000]: 2025-10-13 14:07:24.8293044 +0000 UTC m=+0.088376500 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-07-21T16:28:53, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4)
Oct 13 14:07:24 standalone.localdomain podman[129000]: 2025-10-13 14:07:24.895724679 +0000 UTC m=+0.154796759 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Oct 13 14:07:24 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:07:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v969: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:07:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:07:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:07:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:07:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:07:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:07:25 standalone.localdomain podman[129140]: 2025-10-13 14:07:25.857570086 +0000 UTC m=+0.112185388 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.buildah.version=1.33.12, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T13:58:20, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', 
'/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:07:25 standalone.localdomain podman[129140]: 2025-10-13 14:07:25.868154479 +0000 UTC m=+0.122769791 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-glance-api-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, build-date=2025-07-21T13:58:20, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:07:25 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:07:25 standalone.localdomain systemd[1]: tmp-crun.l9ijjy.mount: Deactivated successfully.
Oct 13 14:07:25 standalone.localdomain podman[129162]: 2025-10-13 14:07:25.991155836 +0000 UTC m=+0.218168905 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, container_name=glance_api, managed_by=tripleo_ansible, io.buildah.version=1.33.12)
Oct 13 14:07:26 standalone.localdomain podman[129155]: 2025-10-13 14:07:25.94971357 +0000 UTC m=+0.184098524 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, batch=17.1_20250721.1, container_name=glance_api_internal, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, release=1, managed_by=tripleo_ansible, version=17.1.9)
Oct 13 14:07:26 standalone.localdomain podman[129144]: 2025-10-13 14:07:25.969662469 +0000 UTC m=+0.217582166 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, build-date=2025-07-21T13:58:12, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-placement-api, release=1, summary=Red Hat OpenStack Platform 17.1 placement-api, container_name=placement_api, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, 
maintainer=OpenStack TripleO Team, com.redhat.component=openstack-placement-api-container, description=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1)
Oct 13 14:07:26 standalone.localdomain podman[129144]: 2025-10-13 14:07:26.052817649 +0000 UTC m=+0.300737376 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, com.redhat.component=openstack-placement-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, 
vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-placement-api, description=Red Hat OpenStack Platform 17.1 placement-api, release=1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 placement-api, container_name=placement_api, vcs-type=git)
Oct 13 14:07:26 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:07:26 standalone.localdomain podman[129148]: 2025-10-13 14:07:25.920988133 +0000 UTC m=+0.161699460 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-api, architecture=x86_64, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, release=1, com.redhat.component=openstack-nova-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-type=git, container_name=nova_metadata, io.openshift.expose-services=, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true)
Oct 13 14:07:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:07:26 standalone.localdomain podman[129148]: 2025-10-13 14:07:26.105304663 +0000 UTC m=+0.346015990 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, name=rhosp17/openstack-nova-api, com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, container_name=nova_metadata, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:07:26 standalone.localdomain podman[129156]: 2025-10-13 14:07:26.057471692 +0000 UTC m=+0.297058144 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:28:44, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=ovn_controller, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Oct 13 14:07:26 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:07:26 standalone.localdomain podman[129155]: 2025-10-13 14:07:26.172777673 +0000 UTC m=+0.407162587 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, 
com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api_internal, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.component=openstack-glance-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:07:26 standalone.localdomain podman[129319]: 2025-10-13 14:07:26.180953283 +0000 UTC m=+0.081185071 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-proxy-server-container, version=17.1.9, container_name=swift_proxy, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:07:26 standalone.localdomain podman[129162]: 2025-10-13 14:07:26.185580754 +0000 UTC m=+0.412593833 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, release=1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, container_name=glance_api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_id=tripleo_step4, 
name=rhosp17/openstack-glance-api, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1)
Oct 13 14:07:26 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:07:26 standalone.localdomain podman[129156]: 2025-10-13 14:07:26.195659022 +0000 UTC m=+0.435245514 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_controller, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public)
Oct 13 14:07:26 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:07:26 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:07:26 standalone.localdomain podman[129319]: 2025-10-13 14:07:26.38453032 +0000 UTC m=+0.284762118 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, distribution-scope=public, com.redhat.component=openstack-swift-proxy-server-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., version=17.1.9, container_name=swift_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, name=rhosp17/openstack-swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12)
Oct 13 14:07:26 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:07:26 standalone.localdomain ceph-mon[29756]: pgmap v969: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:27 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:07:27 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 51, saving inputs in /var/lib/pacemaker/pengine/pe-input-51.bz2
Oct 13 14:07:27 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 51 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-51.bz2): Complete
Oct 13 14:07:27 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:07:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v970: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:07:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:07:28 standalone.localdomain ceph-mon[29756]: pgmap v970: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:07:28 standalone.localdomain podman[129518]: 2025-10-13 14:07:28.820545973 +0000 UTC m=+0.085637277 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, architecture=x86_64, build-date=2025-07-21T16:05:11, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-api, tcib_managed=true, vcs-type=git, version=17.1.9, com.redhat.component=openstack-nova-api-container, description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, 
maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api, release=1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f)
Oct 13 14:07:28 standalone.localdomain systemd[1]: tmp-crun.oijarj.mount: Deactivated successfully.
Oct 13 14:07:28 standalone.localdomain podman[129517]: 2025-10-13 14:07:28.893992916 +0000 UTC m=+0.158696988 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T13:27:18, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 keystone, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, container_name=keystone_cron, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, name=rhosp17/openstack-keystone, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12)
Oct 13 14:07:28 standalone.localdomain podman[129518]: 2025-10-13 14:07:28.910396297 +0000 UTC m=+0.175487601 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, name=rhosp17/openstack-nova-api, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, 
build-date=2025-07-21T16:05:11, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, container_name=nova_api)
Oct 13 14:07:28 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:07:28 standalone.localdomain podman[129517]: 2025-10-13 14:07:28.92686312 +0000 UTC m=+0.191567222 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, com.redhat.component=openstack-keystone-container, container_name=keystone_cron, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, tcib_managed=true, config_id=tripleo_step3, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red 
Hat, Inc., description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, name=rhosp17/openstack-keystone, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9)
Oct 13 14:07:28 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:07:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v971: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:07:29 standalone.localdomain runuser[129581]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:07:30 standalone.localdomain runuser[129581]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:07:30 standalone.localdomain runuser[129653]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:07:30 standalone.localdomain ceph-mon[29756]: pgmap v971: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:31 standalone.localdomain runuser[129653]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:07:31 standalone.localdomain runuser[129713]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:07:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v972: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:07:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:07:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:07:31 standalone.localdomain ceph-mon[29756]: pgmap v972: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:07:31 standalone.localdomain runuser[129713]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:07:31 standalone.localdomain systemd[1]: tmp-crun.Gs2HuG.mount: Deactivated successfully.
Oct 13 14:07:31 standalone.localdomain podman[129782]: 2025-10-13 14:07:31.846273366 +0000 UTC m=+0.097941142 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, name=rhosp17/openstack-barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T15:36:22, version=17.1.9, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-worker-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, managed_by=tripleo_ansible, 
release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, container_name=barbican_worker)
Oct 13 14:07:31 standalone.localdomain podman[129781]: 2025-10-13 14:07:31.827191513 +0000 UTC m=+0.086912546 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-barbican-keystone-listener-container, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_keystone_listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, tcib_managed=true, build-date=2025-07-21T16:18:19)
Oct 13 14:07:31 standalone.localdomain podman[129793]: 2025-10-13 14:07:31.88604536 +0000 UTC m=+0.138223332 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, 
io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, name=rhosp17/openstack-nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1)
Oct 13 14:07:31 standalone.localdomain podman[129782]: 2025-10-13 14:07:31.893922461 +0000 UTC m=+0.145590257 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, io.buildah.version=1.33.12, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 barbican-worker, build-date=2025-07-21T15:36:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.component=openstack-barbican-worker-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9, container_name=barbican_worker, tcib_managed=true, vcs-type=git, config_id=tripleo_step3, name=rhosp17/openstack-barbican-worker, vendor=Red Hat, Inc.)
Oct 13 14:07:31 standalone.localdomain podman[129793]: 2025-10-13 14:07:31.894006544 +0000 UTC m=+0.146184476 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.buildah.version=1.33.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., container_name=nova_api_cron, 
io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, build-date=2025-07-21T16:05:11, version=17.1.9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:07:31 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:07:31 standalone.localdomain podman[129780]: 2025-10-13 14:07:31.940923617 +0000 UTC m=+0.200318810 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.component=openstack-barbican-api-container, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-api, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, build-date=2025-07-21T15:22:44, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, container_name=barbican_api, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, vcs-type=git, tcib_managed=true)
Oct 13 14:07:31 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:07:31 standalone.localdomain podman[129781]: 2025-10-13 14:07:31.961275668 +0000 UTC m=+0.220996711 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, config_id=tripleo_step3, version=17.1.9, container_name=barbican_keystone_listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, distribution-scope=public, 
io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-barbican-keystone-listener-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, build-date=2025-07-21T16:18:19, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vendor=Red Hat, Inc., batch=17.1_20250721.1)
Oct 13 14:07:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:07:31 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:07:31 standalone.localdomain podman[129780]: 2025-10-13 14:07:31.991830732 +0000 UTC m=+0.251225915 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-api, description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, release=1, com.redhat.component=openstack-barbican-api-container, 
maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:22:44, name=rhosp17/openstack-barbican-api, architecture=x86_64, vcs-type=git, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_api)
Oct 13 14:07:32 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:07:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:07:32 standalone.localdomain podman[129871]: 2025-10-13 14:07:32.075567479 +0000 UTC m=+0.088855755 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-dhcp-agent-container, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T16:28:54, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp)
Oct 13 14:07:32 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:07:32 standalone.localdomain podman[129877]: 2025-10-13 14:07:32.090917938 +0000 UTC m=+0.072580438 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-sriov-agent, config_id=tripleo_step4, container_name=neutron_sriov_agent, com.redhat.component=openstack-neutron-sriov-agent-container, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T16:03:34, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/sys/class/net:/sys/class/net:rw']}, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, batch=17.1_20250721.1)
Oct 13 14:07:32 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 52, saving inputs in /var/lib/pacemaker/pengine/pe-input-52.bz2
Oct 13 14:07:32 standalone.localdomain podman[129871]: 2025-10-13 14:07:32.139797481 +0000 UTC m=+0.153085747 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-neutron-dhcp-agent-container, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-dhcp-agent, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T16:28:54, io.buildah.version=1.33.12)
Oct 13 14:07:32 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation openstack-cinder-backup-podman-0_monitor_0 locally on standalone
Oct 13 14:07:32 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of probe operation for openstack-cinder-backup-podman-0 on standalone
Oct 13 14:07:32 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:07:32 standalone.localdomain podman[129877]: 2025-10-13 14:07:32.164801765 +0000 UTC m=+0.146464265 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, managed_by=tripleo_ansible, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:07:32 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:07:32 standalone.localdomain pacemaker-controld[57911]:  notice: Result of probe operation for openstack-cinder-backup-podman-0 on standalone: not running
Oct 13 14:07:32 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 52 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-52.bz2): Complete
Oct 13 14:07:32 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:07:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v973: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:34 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:07:34 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 53, saving inputs in /var/lib/pacemaker/pengine/pe-input-53.bz2
Oct 13 14:07:34 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 53 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-53.bz2): Complete
Oct 13 14:07:34 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:07:34 standalone.localdomain ceph-mon[29756]: pgmap v973: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:07:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v974: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:36 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:07:36 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      openstack-cinder-backup-podman-0     (                             standalone )
Oct 13 14:07:36 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 54, saving inputs in /var/lib/pacemaker/pengine/pe-input-54.bz2
Oct 13 14:07:36 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation openstack-cinder-backup-podman-0_start_0 locally on standalone
Oct 13 14:07:36 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for openstack-cinder-backup-podman-0 on standalone
Oct 13 14:07:36 standalone.localdomain ceph-mon[29756]: pgmap v974: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:36 standalone.localdomain podman(openstack-cinder-backup-podman-0)[130230]: INFO: running container openstack-cinder-backup-podman-0 for the first time
Oct 13 14:07:36 standalone.localdomain podman[130234]: 2025-10-13 14:07:36.955584797 +0000 UTC m=+0.078719005 container create 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, name=rhosp17/openstack-cinder-backup, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:18:24, architecture=x86_64, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cinder-backup, description=Red Hat OpenStack Platform 17.1 cinder-backup, version=17.1.9, io.buildah.version=1.33.12, com.redhat.component=openstack-cinder-backup-container, release=1, distribution-scope=public)
Oct 13 14:07:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:07:37 standalone.localdomain podman[130234]: 2025-10-13 14:07:36.91342135 +0000 UTC m=+0.036555618 image pull  cluster.common.tag/cinder-backup:pcmklatest
Oct 13 14:07:37 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:07:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d6a2ce9ac4bdda31bae9808821f8518f622a69e5dcca675d77c846d63c16ec0/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 14:07:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d6a2ce9ac4bdda31bae9808821f8518f622a69e5dcca675d77c846d63c16ec0/merged/var/log/cinder supports timestamps until 2038 (0x7fffffff)
Oct 13 14:07:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d6a2ce9ac4bdda31bae9808821f8518f622a69e5dcca675d77c846d63c16ec0/merged/var/lib/cinder supports timestamps until 2038 (0x7fffffff)
Oct 13 14:07:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d6a2ce9ac4bdda31bae9808821f8518f622a69e5dcca675d77c846d63c16ec0/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:07:37 standalone.localdomain podman[130234]: 2025-10-13 14:07:37.023828291 +0000 UTC m=+0.146962489 container init 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, batch=17.1_20250721.1, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cinder-backup, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, name=rhosp17/openstack-cinder-backup, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, summary=Red Hat OpenStack Platform 17.1 cinder-backup, com.redhat.component=openstack-cinder-backup-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T16:18:24, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1)
Oct 13 14:07:37 standalone.localdomain podman[130234]: 2025-10-13 14:07:37.031047032 +0000 UTC m=+0.154181250 container start 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, distribution-scope=public, architecture=x86_64, tcib_managed=true, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-backup, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-backup, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cinder-backup-container, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T16:18:24, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, summary=Red Hat OpenStack Platform 17.1 cinder-backup)
Oct 13 14:07:37 standalone.localdomain sudo[130263]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:07:37 standalone.localdomain podman(openstack-cinder-backup-podman-0)[130269]: INFO: 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1
Oct 13 14:07:37 standalone.localdomain systemd[1]: Started Session c10 of User root.
Oct 13 14:07:37 standalone.localdomain sudo[130263]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:07:37 standalone.localdomain podman[130248]: 2025-10-13 14:07:37.090720695 +0000 UTC m=+0.100946535 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, com.redhat.component=openstack-ovn-northd-container, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, config_id=ovn_cluster_northd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-northd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, container_name=ovn_cluster_northd, architecture=x86_64, build-date=2025-07-21T13:30:04, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 14:07:37 standalone.localdomain podman[130248]: 2025-10-13 14:07:37.129350355 +0000 UTC m=+0.139576155 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-northd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_cluster_northd, build-date=2025-07-21T13:30:04, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, summary=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, config_id=ovn_cluster_northd)
Oct 13 14:07:37 standalone.localdomain sudo[130263]: pam_unix(sudo:session): session closed for user root
Oct 13 14:07:37 standalone.localdomain systemd[1]: session-c10.scope: Deactivated successfully.
Oct 13 14:07:37 standalone.localdomain sudo[130298]:     root : PWD=/ ; USER=root ; COMMAND=/usr/bin/chown -R cinder:kolla /var/lib/cinder
Oct 13 14:07:37 standalone.localdomain systemd[1]: Started Session c11 of User root.
Oct 13 14:07:37 standalone.localdomain sudo[130298]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:07:37 standalone.localdomain sudo[130298]: pam_unix(sudo:session): session closed for user root
Oct 13 14:07:37 standalone.localdomain systemd[1]: session-c11.scope: Deactivated successfully.
Oct 13 14:07:37 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:07:37 standalone.localdomain podman(openstack-cinder-backup-podman-0)[130303]: INFO: Creating drop-in dependency for "openstack-cinder-backup-podman-0" (4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1)
Oct 13 14:07:37 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:07:37 standalone.localdomain systemd-rc-local-generator[130348]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:07:37 standalone.localdomain systemd-sysv-generator[130351]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:07:37 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:07:37 standalone.localdomain python3[130322]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/cinder.md5sum follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:07:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v975: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:37 standalone.localdomain ceph-mon[29756]: pgmap v975: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:37 standalone.localdomain python3[130403]: ansible-tripleo_diff_exec Invoked with command=/var/lib/container-config-scripts/pacemaker_restart_bundle.sh cinder_backup openstack-cinder-backup openstack-cinder-backup _ Started state_file=/var/lib/config-data/puppet-generated/cinder.md5sum state_file_suffix=.cinder_backup_previous_run environment={'TRIPLEO_MINOR_UPDATE': '', 'TRIPLEO_HA_WRAPPER_RESOURCE_EXISTS': 'False'} return_codes=[0]
Oct 13 14:07:37 standalone.localdomain podman[130438]: 2025-10-13 14:07:37.894149913 +0000 UTC m=+0.123880604 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, name=rhosp17/openstack-cinder-backup, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T16:18:24, description=Red Hat OpenStack Platform 17.1 cinder-backup, summary=Red Hat OpenStack Platform 17.1 cinder-backup, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, distribution-scope=public, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, com.redhat.component=openstack-cinder-backup-container, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1)
Oct 13 14:07:37 standalone.localdomain podman[130438]: 2025-10-13 14:07:37.925777499 +0000 UTC m=+0.155508180 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, description=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, summary=Red Hat OpenStack Platform 17.1 cinder-backup, build-date=2025-07-21T16:18:24, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-backup, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, com.redhat.component=openstack-cinder-backup-container)
Oct 13 14:07:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:07:37 standalone.localdomain pcmkrestart[130500]: Initial deployment, skipping the restart of openstack-cinder-backup
Oct 13 14:07:38 standalone.localdomain podman[130481]: 2025-10-13 14:07:38.039605926 +0000 UTC m=+0.085489852 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, release=1, version=17.1.9, build-date=2025-07-21T16:18:24, com.redhat.component=openstack-cinder-backup-container, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-backup, description=Red Hat OpenStack Platform 17.1 cinder-backup, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, tcib_managed=true, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Oct 13 14:07:38 standalone.localdomain podman[130481]: 2025-10-13 14:07:38.072133709 +0000 UTC m=+0.118017635 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, summary=Red Hat OpenStack Platform 17.1 cinder-backup, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, architecture=x86_64, name=rhosp17/openstack-cinder-backup, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-backup, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T16:18:24, com.redhat.component=openstack-cinder-backup-container, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:07:38 standalone.localdomain podman(openstack-cinder-backup-podman-0)[130528]: NOTICE: Container openstack-cinder-backup-podman-0  started successfully
Oct 13 14:07:38 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for openstack-cinder-backup-podman-0 on standalone: ok
Oct 13 14:07:38 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation openstack-cinder-backup-podman-0_monitor_60000 locally on standalone
Oct 13 14:07:38 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for openstack-cinder-backup-podman-0 on standalone
Oct 13 14:07:38 standalone.localdomain podman[130535]: 2025-10-13 14:07:38.239582854 +0000 UTC m=+0.094957282 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, description=Red Hat OpenStack Platform 17.1 cinder-backup, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, distribution-scope=public, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cinder-backup, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, tcib_managed=true, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:18:24, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, name=rhosp17/openstack-cinder-backup, com.redhat.component=openstack-cinder-backup-container)
Oct 13 14:07:38 standalone.localdomain podman[130491]: 2025-10-13 14:07:38.211744803 +0000 UTC m=+0.227310843 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_id=tripleo_step2, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, container_name=clustercheck, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:07:38 standalone.localdomain podman[130535]: 2025-10-13 14:07:38.272772418 +0000 UTC m=+0.128146766 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, build-date=2025-07-21T16:18:24, description=Red Hat OpenStack Platform 17.1 cinder-backup, com.redhat.component=openstack-cinder-backup-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cinder-backup, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-backup, tcib_managed=true, release=1, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, version=17.1.9, architecture=x86_64, distribution-scope=public)
Oct 13 14:07:38 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for openstack-cinder-backup-podman-0 on standalone: ok
Oct 13 14:07:38 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 54 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-54.bz2): Complete
Oct 13 14:07:38 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:07:38 standalone.localdomain python3[130536]: ansible-stat Invoked with path=/tmp/tripleo_ha_image_openstack-cinder-backup follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:07:38 standalone.localdomain podman[130491]: 2025-10-13 14:07:38.295955455 +0000 UTC m=+0.311521505 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, container_name=clustercheck, release=1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, io.openshift.expose-services=, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, managed_by=tripleo_ansible)
Oct 13 14:07:38 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:07:38 standalone.localdomain haproxy[70940]: 172.17.0.100:34251 [13/Oct/2025:14:07:38.653] mysql mysql/standalone.internalapi.localdomain 1/0/150 12536 -- 40/40/39/39/0 0/0
Oct 13 14:07:38 standalone.localdomain haproxy[70940]: 172.17.0.100:58869 [13/Oct/2025:14:07:38.812] mysql mysql/standalone.internalapi.localdomain 1/0/14 3954 -- 40/40/39/39/0 0/0
Oct 13 14:07:38 standalone.localdomain systemd[1]: tmp-crun.fa3qQm.mount: Deactivated successfully.
Oct 13 14:07:39 standalone.localdomain python3[130695]: ansible-ansible.legacy.command Invoked with _raw_params=pcs resource config "openstack-cinder-volume"
                                                         _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:07:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v976: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:07:39 standalone.localdomain python3[130710]: ansible-ansible.legacy.command Invoked with _raw_params=puppet apply  --detailed-exitcodes --summarize --color=false --modulepath '/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules' --tags 'pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation' -e 'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::cinder::volume_bundle'
                                                         _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:07:40 standalone.localdomain account-server[114555]: 172.20.0.100 - - [13/Oct/2025:14:07:40 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx4caccc0c248c4477aee39-0068ed07ac" "proxy-server 2" 0.0017 "-" 20 -
Oct 13 14:07:40 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx4caccc0c248c4477aee39-0068ed07ac)
Oct 13 14:07:40 standalone.localdomain ceph-mon[29756]: pgmap v976: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v977: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:42 standalone.localdomain runuser[130775]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:07:42 standalone.localdomain sudo[130828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:07:42 standalone.localdomain sudo[130828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:07:42 standalone.localdomain sudo[130828]: pam_unix(sudo:session): session closed for user root
Oct 13 14:07:42 standalone.localdomain sudo[130843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Oct 13 14:07:42 standalone.localdomain sudo[130843]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:07:42 standalone.localdomain ceph-mon[29756]: pgmap v977: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:43 standalone.localdomain runuser[130775]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:07:43 standalone.localdomain sudo[130843]: pam_unix(sudo:session): session closed for user root
Oct 13 14:07:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:07:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:07:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:07:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:07:43 standalone.localdomain sudo[130915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:07:43 standalone.localdomain sudo[130915]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:07:43 standalone.localdomain sudo[130915]: pam_unix(sudo:session): session closed for user root
Oct 13 14:07:43 standalone.localdomain sudo[130930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:07:43 standalone.localdomain sudo[130930]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:07:43 standalone.localdomain runuser[130945]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:07:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v978: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:43 standalone.localdomain sudo[130930]: pam_unix(sudo:session): session closed for user root
Oct 13 14:07:43 standalone.localdomain sudo[131094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:07:43 standalone.localdomain sudo[131094]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:07:43 standalone.localdomain sudo[131094]: pam_unix(sudo:session): session closed for user root
Oct 13 14:07:43 standalone.localdomain runuser[130945]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:07:43 standalone.localdomain runuser[131121]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:07:43 standalone.localdomain sudo[131112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -- inventory --format=json-pretty --filter-for-batch
Oct 13 14:07:43 standalone.localdomain sudo[131112]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:07:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:07:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:07:44 standalone.localdomain ceph-mon[29756]: pgmap v978: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:44 standalone.localdomain runuser[131121]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:07:44 standalone.localdomain podman[131236]: 
Oct 13 14:07:44 standalone.localdomain podman[131236]: 2025-10-13 14:07:44.687678485 +0000 UTC m=+0.065796711 container create 1bb0c1b7f9ae4a590636a04c355b56dbbfac00f9b09943105bfed799b7d9a6de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_mcnulty, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 14:07:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:07:44 standalone.localdomain systemd[1]: Started libpod-conmon-1bb0c1b7f9ae4a590636a04c355b56dbbfac00f9b09943105bfed799b7d9a6de.scope.
Oct 13 14:07:44 standalone.localdomain podman[131236]: 2025-10-13 14:07:44.6574037 +0000 UTC m=+0.035521926 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 14:07:44 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:07:44 standalone.localdomain podman[131236]: 2025-10-13 14:07:44.785856654 +0000 UTC m=+0.163974890 container init 1bb0c1b7f9ae4a590636a04c355b56dbbfac00f9b09943105bfed799b7d9a6de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_mcnulty, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.openshift.expose-services=, com.redhat.component=rhceph-container, architecture=x86_64, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vendor=Red Hat, Inc., ceph=True, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, RELEASE=main, release=553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 14:07:44 standalone.localdomain podman[131236]: 2025-10-13 14:07:44.795033634 +0000 UTC m=+0.173151860 container start 1bb0c1b7f9ae4a590636a04c355b56dbbfac00f9b09943105bfed799b7d9a6de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_mcnulty, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=553, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=)
Oct 13 14:07:44 standalone.localdomain podman[131236]: 2025-10-13 14:07:44.795291511 +0000 UTC m=+0.173409747 container attach 1bb0c1b7f9ae4a590636a04c355b56dbbfac00f9b09943105bfed799b7d9a6de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_mcnulty, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, name=rhceph, io.buildah.version=1.33.12, GIT_CLEAN=True)
Oct 13 14:07:44 standalone.localdomain stupefied_mcnulty[131252]: 167 167
Oct 13 14:07:44 standalone.localdomain systemd[1]: libpod-1bb0c1b7f9ae4a590636a04c355b56dbbfac00f9b09943105bfed799b7d9a6de.scope: Deactivated successfully.
Oct 13 14:07:44 standalone.localdomain podman[131236]: 2025-10-13 14:07:44.803381289 +0000 UTC m=+0.181499585 container died 1bb0c1b7f9ae4a590636a04c355b56dbbfac00f9b09943105bfed799b7d9a6de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_mcnulty, vcs-type=git, architecture=x86_64, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7)
Oct 13 14:07:44 standalone.localdomain podman[131258]: 2025-10-13 14:07:44.902490456 +0000 UTC m=+0.087700670 container remove 1bb0c1b7f9ae4a590636a04c355b56dbbfac00f9b09943105bfed799b7d9a6de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_mcnulty, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, com.redhat.component=rhceph-container, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 14:07:44 standalone.localdomain systemd[1]: libpod-conmon-1bb0c1b7f9ae4a590636a04c355b56dbbfac00f9b09943105bfed799b7d9a6de.scope: Deactivated successfully.
Oct 13 14:07:45 standalone.localdomain podman[131280]: 
Oct 13 14:07:45 standalone.localdomain podman[131280]: 2025-10-13 14:07:45.104073773 +0000 UTC m=+0.087890996 container create d37fbb0e94f7d585be5f462fad5fc3432ede45ca82951f84bdf1cf4e9884d881 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_moser, architecture=x86_64, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.openshift.expose-services=, vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph)
Oct 13 14:07:45 standalone.localdomain systemd[1]: Started libpod-conmon-d37fbb0e94f7d585be5f462fad5fc3432ede45ca82951f84bdf1cf4e9884d881.scope.
Oct 13 14:07:45 standalone.localdomain podman[131280]: 2025-10-13 14:07:45.063694679 +0000 UTC m=+0.047511922 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 14:07:45 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:07:45 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a2a0100606262a1aa6f555569d94d05d7389a9e75b28b206c08bb9860b3b21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 13 14:07:45 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a2a0100606262a1aa6f555569d94d05d7389a9e75b28b206c08bb9860b3b21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 14:07:45 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a2a0100606262a1aa6f555569d94d05d7389a9e75b28b206c08bb9860b3b21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:07:45 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53a2a0100606262a1aa6f555569d94d05d7389a9e75b28b206c08bb9860b3b21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 14:07:45 standalone.localdomain podman[131280]: 2025-10-13 14:07:45.185983305 +0000 UTC m=+0.169800518 container init d37fbb0e94f7d585be5f462fad5fc3432ede45ca82951f84bdf1cf4e9884d881 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_moser, ceph=True, distribution-scope=public, release=553, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, architecture=x86_64, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_BRANCH=main, vendor=Red Hat, Inc.)
Oct 13 14:07:45 standalone.localdomain podman[131280]: 2025-10-13 14:07:45.197005691 +0000 UTC m=+0.180822904 container start d37fbb0e94f7d585be5f462fad5fc3432ede45ca82951f84bdf1cf4e9884d881 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_moser, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, distribution-scope=public, build-date=2025-09-24T08:57:55, release=553, GIT_CLEAN=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 14:07:45 standalone.localdomain podman[131280]: 2025-10-13 14:07:45.198045993 +0000 UTC m=+0.181863206 container attach d37fbb0e94f7d585be5f462fad5fc3432ede45ca82951f84bdf1cf4e9884d881 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_moser, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, RELEASE=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.expose-services=)
Oct 13 14:07:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v979: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c13ee385817e50c43885fdbace23d3571b7f7440a489567d44ff1f514ca1f60e-merged.mount: Deactivated successfully.
Oct 13 14:07:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:07:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:07:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:07:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:07:46 standalone.localdomain podman[132555]: 2025-10-13 14:07:46.228583769 +0000 UTC m=+0.091936949 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, architecture=x86_64, build-date=2025-07-21T13:27:15, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.33.12)
Oct 13 14:07:46 standalone.localdomain podman[132592]: 2025-10-13 14:07:46.242261817 +0000 UTC m=+0.098841360 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, name=rhosp17/openstack-cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, release=1, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-cinder-api-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, container_name=cinder_api_cron, build-date=2025-07-21T15:58:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:07:46 standalone.localdomain podman[132608]: 2025-10-13 14:07:46.281569317 +0000 UTC m=+0.137657615 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cinder-scheduler, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, container_name=cinder_scheduler, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, com.redhat.component=openstack-cinder-scheduler-container, batch=17.1_20250721.1, build-date=2025-07-21T16:10:12, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4)
Oct 13 14:07:46 standalone.localdomain podman[132555]: 2025-10-13 14:07:46.29609526 +0000 UTC m=+0.159448420 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=iscsid, tcib_managed=true, 
io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid)
Oct 13 14:07:46 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:07:46 standalone.localdomain podman[132608]: 2025-10-13 14:07:46.332795571 +0000 UTC m=+0.188883829 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, name=rhosp17/openstack-cinder-scheduler, build-date=2025-07-21T16:10:12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.component=openstack-cinder-scheduler-container, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=cinder_scheduler, 
description=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, version=17.1.9)
Oct 13 14:07:46 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:07:46 standalone.localdomain podman[132579]: 2025-10-13 14:07:46.382652444 +0000 UTC m=+0.243995213 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, tcib_managed=true, vcs-type=git, build-date=2025-07-21T13:27:18, vendor=Red Hat, Inc., container_name=keystone, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, io.buildah.version=1.33.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 keystone, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-keystone)
Oct 13 14:07:46 standalone.localdomain podman[132592]: 2025-10-13 14:07:46.435087286 +0000 UTC m=+0.291666829 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, managed_by=tripleo_ansible, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-cinder-api-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-cinder-api, release=1, build-date=2025-07-21T15:58:55, config_id=tripleo_step4, container_name=cinder_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:07:46 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:07:46 standalone.localdomain podman[132579]: 2025-10-13 14:07:46.488103106 +0000 UTC m=+0.349445945 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, io.openshift.expose-services=, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, managed_by=tripleo_ansible, config_id=tripleo_step3, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, build-date=2025-07-21T13:27:18, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true)
Oct 13 14:07:46 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]: [
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:     {
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:         "available": false,
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:         "ceph_device": false,
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:         "lsm_data": {},
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:         "lvs": [],
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:         "path": "/dev/sr0",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:         "rejected_reasons": [
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "Insufficient space (<5GB)",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "Has a FileSystem"
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:         ],
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:         "sys_api": {
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "actuators": null,
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "device_nodes": "sr0",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "human_readable_size": "482.00 KB",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "id_bus": "ata",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "model": "QEMU DVD-ROM",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "nr_requests": "2",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "partitions": {},
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "path": "/dev/sr0",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "removable": "1",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "rev": "2.5+",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "ro": "0",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "rotational": "1",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "sas_address": "",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "sas_device_handle": "",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "scheduler_mode": "mq-deadline",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "sectors": 0,
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "sectorsize": "2048",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "size": 493568.0,
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "support_discard": "0",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "type": "disk",
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:             "vendor": "QEMU"
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:         }
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]:     }
Oct 13 14:07:46 standalone.localdomain bold_moser[131348]: ]
Oct 13 14:07:46 standalone.localdomain systemd[1]: libpod-d37fbb0e94f7d585be5f462fad5fc3432ede45ca82951f84bdf1cf4e9884d881.scope: Deactivated successfully.
Oct 13 14:07:46 standalone.localdomain systemd[1]: libpod-d37fbb0e94f7d585be5f462fad5fc3432ede45ca82951f84bdf1cf4e9884d881.scope: Consumed 1.292s CPU time.
Oct 13 14:07:46 standalone.localdomain podman[131280]: 2025-10-13 14:07:46.598788856 +0000 UTC m=+1.582606059 container died d37fbb0e94f7d585be5f462fad5fc3432ede45ca82951f84bdf1cf4e9884d881 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_moser, version=7, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_BRANCH=main, RELEASE=main, vcs-type=git, io.openshift.expose-services=, ceph=True, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:07:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-53a2a0100606262a1aa6f555569d94d05d7389a9e75b28b206c08bb9860b3b21-merged.mount: Deactivated successfully.
Oct 13 14:07:46 standalone.localdomain podman[133956]: 2025-10-13 14:07:46.70730502 +0000 UTC m=+0.095404705 container remove d37fbb0e94f7d585be5f462fad5fc3432ede45ca82951f84bdf1cf4e9884d881 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_moser, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_BRANCH=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=553, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 13 14:07:46 standalone.localdomain systemd[1]: libpod-conmon-d37fbb0e94f7d585be5f462fad5fc3432ede45ca82951f84bdf1cf4e9884d881.scope: Deactivated successfully.
Oct 13 14:07:46 standalone.localdomain ceph-mon[29756]: pgmap v979: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:46 standalone.localdomain sudo[131112]: pam_unix(sudo:session): session closed for user root
Oct 13 14:07:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:07:46 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:07:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:07:46 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:07:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:07:46 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:07:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:07:46 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:07:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:07:46 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:07:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:07:46 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:07:46 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 6495044b-19bd-43b4-8d98-ae71e58942ee (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:07:46 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 6495044b-19bd-43b4-8d98-ae71e58942ee (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:07:46 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 6495044b-19bd-43b4-8d98-ae71e58942ee (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:07:46 standalone.localdomain sudo[133970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:07:46 standalone.localdomain sudo[133970]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:07:46 standalone.localdomain sudo[133970]: pam_unix(sudo:session): session closed for user root
Oct 13 14:07:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v980: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:07:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:07:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:07:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:07:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:07:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:07:47 standalone.localdomain ceph-mon[29756]: pgmap v980: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:07:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:07:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:07:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:07:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:07:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:07:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:07:48 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 45 completed events
Oct 13 14:07:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:07:48 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:07:48 standalone.localdomain podman[134142]: 2025-10-13 14:07:48.841172455 +0000 UTC m=+0.089965780 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T13:07:52, distribution-scope=public, maintainer=OpenStack TripleO Team, 
container_name=logrotate_crond, release=1, vcs-type=git, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, batch=17.1_20250721.1)
Oct 13 14:07:48 standalone.localdomain systemd[1]: tmp-crun.3TJfhZ.mount: Deactivated successfully.
Oct 13 14:07:48 standalone.localdomain crontab[134244]: (root) LIST (root)
Oct 13 14:07:48 standalone.localdomain podman[134142]: 2025-10-13 14:07:48.923673094 +0000 UTC m=+0.172466399 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, batch=17.1_20250721.1, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, 
com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52)
Oct 13 14:07:48 standalone.localdomain podman[134137]: 2025-10-13 14:07:48.926235882 +0000 UTC m=+0.180251016 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, container_name=memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step1, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, build-date=2025-07-21T12:58:43, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 
memcached, version=17.1.9, com.redhat.component=openstack-memcached-container, io.openshift.expose-services=, name=rhosp17/openstack-memcached, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe)
Oct 13 14:07:48 standalone.localdomain podman[134140]: 2025-10-13 14:07:48.888572642 +0000 UTC m=+0.141532734 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, tcib_managed=true, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.expose-services=, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, description=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, vcs-type=git, build-date=2025-07-21T13:58:15, com.redhat.component=openstack-horizon-container, container_name=horizon, architecture=x86_64)
Oct 13 14:07:48 standalone.localdomain podman[134137]: 2025-10-13 14:07:48.943005375 +0000 UTC m=+0.197020519 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, config_id=tripleo_step1, release=1, distribution-scope=public, build-date=2025-07-21T12:58:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, tcib_managed=true, com.redhat.component=openstack-memcached-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, 
io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, container_name=memcached, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible)
Oct 13 14:07:48 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:07:48 standalone.localdomain podman[134140]: 2025-10-13 14:07:48.970779563 +0000 UTC m=+0.223739655 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, com.redhat.component=openstack-horizon-container, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=horizon, maintainer=OpenStack TripleO Team, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', 
'/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, config_id=tripleo_step3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T13:58:15, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, vendor=Red Hat, Inc.)
Oct 13 14:07:48 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:07:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:07:49 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:07:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:07:49 standalone.localdomain podman[134138]: 2025-10-13 14:07:48.909707137 +0000 UTC m=+0.164600748 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, name=rhosp17/openstack-manila-api, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-api, release=1, com.redhat.component=openstack-manila-api-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T16:06:43, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, version=17.1.9, container_name=manila_api_cron, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1)
Oct 13 14:07:49 standalone.localdomain podman[134138]: 2025-10-13 14:07:49.040384009 +0000 UTC m=+0.295277630 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, container_name=manila_api_cron, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, description=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', 
'/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-manila-api, vcs-type=git, com.redhat.component=openstack-manila-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc.)
Oct 13 14:07:49 standalone.localdomain podman[134139]: 2025-10-13 14:07:49.053256521 +0000 UTC m=+0.306925384 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, version=17.1.9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=heat_api_cron, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, tcib_managed=true)
Oct 13 14:07:49 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:07:49 standalone.localdomain podman[134136]: 2025-10-13 14:07:48.878601277 +0000 UTC m=+0.132665202 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, com.redhat.component=openstack-heat-engine-container, config_id=tripleo_step4, version=17.1.9, container_name=heat_engine, description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:07:49 standalone.localdomain podman[134139]: 2025-10-13 14:07:49.088798767 +0000 UTC m=+0.342467630 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, vcs-type=git, container_name=heat_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, 
io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530)
Oct 13 14:07:49 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:07:49 standalone.localdomain podman[134141]: 2025-10-13 14:07:48.980469578 +0000 UTC m=+0.225833068 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, vendor=Red 
Hat, Inc., vcs-type=git, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, build-date=2025-07-21T15:56:26, container_name=heat_api)
Oct 13 14:07:49 standalone.localdomain podman[134136]: 2025-10-13 14:07:49.16320321 +0000 UTC m=+0.417267115 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-engine, com.redhat.component=openstack-heat-engine-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, release=1, container_name=heat_engine, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, tcib_managed=true, 
vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T15:44:11, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:07:49 standalone.localdomain podman[134141]: 2025-10-13 14:07:49.174937809 +0000 UTC m=+0.420301359 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, container_name=heat_api, vcs-type=git, 
io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-heat-api-container, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9)
Oct 13 14:07:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:07:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:07:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:07:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:07:49 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:07:49 standalone.localdomain podman[134280]: 2025-10-13 14:07:49.19168642 +0000 UTC m=+0.177738270 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, release=1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, container_name=manila_scheduler, tcib_managed=true, com.redhat.component=openstack-manila-scheduler-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:28, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-manila-scheduler, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 manila-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']})
Oct 13 14:07:49 standalone.localdomain podman[134279]: 2025-10-13 14:07:49.137411692 +0000 UTC m=+0.122385829 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, container_name=heat_api_cfn, com.redhat.component=openstack-heat-api-cfn-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, batch=17.1_20250721.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T14:49:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:07:49 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:07:49 standalone.localdomain podman[134280]: 2025-10-13 14:07:49.21593541 +0000 UTC m=+0.201987290 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, com.redhat.component=openstack-manila-scheduler-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, release=1, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, container_name=manila_scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 manila-scheduler, maintainer=OpenStack TripleO Team, 
build-date=2025-07-21T15:56:28, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-manila-scheduler, io.openshift.expose-services=)
Oct 13 14:07:49 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:07:49 standalone.localdomain podman[134279]: 2025-10-13 14:07:49.286430284 +0000 UTC m=+0.271404401 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-cfn-container, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, name=rhosp17/openstack-heat-api-cfn, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, 
io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, version=17.1.9, container_name=heat_api_cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Oct 13 14:07:49 standalone.localdomain podman[134332]: 2025-10-13 14:07:49.294838001 +0000 UTC m=+0.107717812 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-scheduler, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-scheduler, vendor=Red Hat, Inc., container_name=nova_scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T16:02:54, io.openshift.expose-services=, release=1, 
config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4f7cb55437a2b42333072591935a511528e79935)
Oct 13 14:07:49 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:07:49 standalone.localdomain podman[134334]: 2025-10-13 14:07:49.258696297 +0000 UTC m=+0.065103520 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:03, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, summary=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, container_name=neutron_api, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, 
name=rhosp17/openstack-neutron-server, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1)
Oct 13 14:07:49 standalone.localdomain podman[134332]: 2025-10-13 14:07:49.318785362 +0000 UTC m=+0.131665153 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, batch=17.1_20250721.1, build-date=2025-07-21T16:02:54, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-scheduler, name=rhosp17/openstack-nova-scheduler, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-scheduler-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, config_id=tripleo_step4, container_name=nova_scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler)
Oct 13 14:07:49 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:07:49 standalone.localdomain podman[134333]: 2025-10-13 14:07:49.281556145 +0000 UTC m=+0.078533200 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, name=rhosp17/openstack-cinder-api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, version=17.1.9, com.redhat.component=openstack-cinder-api-container, description=Red Hat OpenStack Platform 17.1 cinder-api, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, container_name=cinder_api, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T15:58:55, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, vcs-type=git, distribution-scope=public)
Oct 13 14:07:49 standalone.localdomain podman[134333]: 2025-10-13 14:07:49.363782206 +0000 UTC m=+0.160759231 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, com.redhat.component=openstack-cinder-api-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T15:58:55, name=rhosp17/openstack-cinder-api, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, container_name=cinder_api, description=Red Hat OpenStack Platform 17.1 cinder-api, managed_by=tripleo_ansible)
Oct 13 14:07:49 standalone.localdomain podman[134342]: 2025-10-13 14:07:49.372694448 +0000 UTC m=+0.170153197 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-conductor, tcib_managed=true, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-conductor, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, release=1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, managed_by=tripleo_ansible, build-date=2025-07-21T15:44:17, config_id=tripleo_step4, container_name=nova_conductor, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-nova-conductor-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:07:49 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:07:49 standalone.localdomain podman[134334]: 2025-10-13 14:07:49.408868873 +0000 UTC m=+0.215276086 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, name=rhosp17/openstack-neutron-server, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T15:44:03, config_id=tripleo_step4, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, vendor=Red Hat, Inc., container_name=neutron_api, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, 
managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, batch=17.1_20250721.1, architecture=x86_64, com.redhat.component=openstack-neutron-server-container, summary=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:07:49 standalone.localdomain podman[134342]: 2025-10-13 14:07:49.422798778 +0000 UTC m=+0.220257517 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_conductor, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.buildah.version=1.33.12, version=17.1.9, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, description=Red Hat OpenStack Platform 17.1 nova-conductor, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-conductor, 
config_id=tripleo_step4, com.redhat.component=openstack-nova-conductor-container, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T15:44:17, name=rhosp17/openstack-nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, release=1)
Oct 13 14:07:49 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:07:49 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:07:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v981: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:07:49 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:07:49 standalone.localdomain ceph-mon[29756]: pgmap v981: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v982: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:07:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:07:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:07:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:07:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:07:51 standalone.localdomain podman[134470]: 2025-10-13 14:07:51.84011336 +0000 UTC m=+0.092273670 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-novncproxy-container, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, container_name=nova_vnc_proxy, managed_by=tripleo_ansible, release=1, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, version=17.1.9, maintainer=OpenStack TripleO 
Team, build-date=2025-07-21T15:24:10, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, architecture=x86_64, vcs-type=git)
Oct 13 14:07:51 standalone.localdomain systemd[1]: tmp-crun.Lp4Ekt.mount: Deactivated successfully.
Oct 13 14:07:51 standalone.localdomain podman[134468]: 2025-10-13 14:07:51.894727708 +0000 UTC m=+0.154745708 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, container_name=swift_container_server, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container)
Oct 13 14:07:51 standalone.localdomain podman[134463]: 2025-10-13 14:07:51.944565449 +0000 UTC m=+0.203270808 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, build-date=2025-07-21T14:56:28, name=rhosp17/openstack-swift-object, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, release=1, vcs-type=git, container_name=swift_object_server, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, 
managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:07:51 standalone.localdomain podman[134469]: 2025-10-13 14:07:51.990153233 +0000 UTC m=+0.243196559 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, release=1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, container_name=swift_account_server, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, distribution-scope=public, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 14:07:52 standalone.localdomain podman[134464]: 2025-10-13 14:07:52.053445585 +0000 UTC m=+0.312321429 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., container_name=nova_migration_target, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, release=1, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:07:52 standalone.localdomain podman[134468]: 2025-10-13 14:07:52.09583986 +0000 UTC m=+0.355857880 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 14:07:52 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:07:52 standalone.localdomain podman[134463]: 2025-10-13 14:07:52.15771721 +0000 UTC m=+0.416422569 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, config_id=tripleo_step4, release=1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc.)
Oct 13 14:07:52 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:07:52 standalone.localdomain podman[134469]: 2025-10-13 14:07:52.209378328 +0000 UTC m=+0.462421664 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-swift-account-container, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 
17.1 swift-account, distribution-scope=public, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 14:07:52 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:07:52 standalone.localdomain podman[134470]: 2025-10-13 14:07:52.260358395 +0000 UTC m=+0.512518655 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, release=1, architecture=x86_64, container_name=nova_vnc_proxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-novncproxy, com.redhat.component=openstack-nova-novncproxy-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, build-date=2025-07-21T15:24:10, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, 
managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, version=17.1.9)
Oct 13 14:07:52 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:07:52 standalone.localdomain podman[134464]: 2025-10-13 14:07:52.456794534 +0000 UTC m=+0.715670428 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, version=17.1.9, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute)
Oct 13 14:07:52 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:07:52 standalone.localdomain ceph-mon[29756]: pgmap v982: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:52 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:07:53 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 55, saving inputs in /var/lib/pacemaker/pengine/pe-input-55.bz2
Oct 13 14:07:53 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 55 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-55.bz2): Complete
Oct 13 14:07:53 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:07:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:07:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:07:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:07:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:07:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:07:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:07:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v983: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:54 standalone.localdomain ceph-mon[29756]: pgmap v983: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:07:55 standalone.localdomain runuser[134691]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:07:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v984: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:07:55 standalone.localdomain runuser[134691]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:07:55 standalone.localdomain systemd[1]: tmp-crun.VLNclk.mount: Deactivated successfully.
Oct 13 14:07:55 standalone.localdomain podman[134798]: 2025-10-13 14:07:55.827609428 +0000 UTC m=+0.094825247 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, version=17.1.9, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53)
Oct 13 14:07:55 standalone.localdomain podman[134798]: 2025-10-13 14:07:55.880818284 +0000 UTC m=+0.148034133 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true)
Oct 13 14:07:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:07:55 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:07:55 standalone.localdomain runuser[134887]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:07:55 standalone.localdomain podman[134857]: 2025-10-13 14:07:55.979130436 +0000 UTC m=+0.078040335 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, com.redhat.component=openstack-glance-api-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, 
batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T13:58:20, version=17.1.9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:07:55 standalone.localdomain podman[134857]: 2025-10-13 14:07:55.98876375 +0000 UTC m=+0.087673619 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, version=17.1.9, distribution-scope=public, release=1, com.redhat.component=openstack-glance-api-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, container_name=glance_api_cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:07:55 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:07:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:07:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:07:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:07:56 standalone.localdomain podman[134987]: 2025-10-13 14:07:56.203129138 +0000 UTC m=+0.074508767 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 placement-api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-placement-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, container_name=placement_api, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, architecture=x86_64, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', 
'/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, build-date=2025-07-21T13:58:12, com.redhat.component=openstack-placement-api-container, batch=17.1_20250721.1, version=17.1.9)
Oct 13 14:07:56 standalone.localdomain podman[134988]: 2025-10-13 14:07:56.219317891 +0000 UTC m=+0.081371246 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, release=1, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, build-date=2025-07-21T16:05:11, config_id=tripleo_step4, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, container_name=nova_metadata, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-api-container, name=rhosp17/openstack-nova-api, architecture=x86_64)
Oct 13 14:07:56 standalone.localdomain podman[134987]: 2025-10-13 14:07:56.224327474 +0000 UTC m=+0.095707083 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-placement-api-container, container_name=placement_api, build-date=2025-07-21T13:58:12, batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-placement-api, release=1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, 
vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, maintainer=OpenStack TripleO Team)
Oct 13 14:07:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:07:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:07:56 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:07:56 standalone.localdomain podman[135017]: 2025-10-13 14:07:56.271098053 +0000 UTC m=+0.068455562 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, container_name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-glance-api, config_id=tripleo_step4, vendor=Red Hat, Inc., 
io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-glance-api-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, batch=17.1_20250721.1)
Oct 13 14:07:56 standalone.localdomain podman[134988]: 2025-10-13 14:07:56.291039522 +0000 UTC m=+0.153092977 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-api, batch=17.1_20250721.1, build-date=2025-07-21T16:05:11, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, release=1, com.redhat.component=openstack-nova-api-container, container_name=nova_metadata, description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:07:56 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:07:56 standalone.localdomain podman[135046]: 2025-10-13 14:07:56.379356449 +0000 UTC m=+0.134166058 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, release=1, build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, container_name=ovn_controller, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:07:56 standalone.localdomain podman[135050]: 2025-10-13 14:07:56.35479018 +0000 UTC m=+0.111271880 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, release=1, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, container_name=glance_api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, architecture=x86_64)
Oct 13 14:07:56 standalone.localdomain podman[135046]: 2025-10-13 14:07:56.43305133 +0000 UTC m=+0.187860969 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step4, release=1, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 14:07:56 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:07:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:07:56 standalone.localdomain podman[135017]: 2025-10-13 14:07:56.459111176 +0000 UTC m=+0.256468735 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, release=1, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, container_name=glance_api_internal, io.buildah.version=1.33.12, com.redhat.component=openstack-glance-api-container, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:07:56 standalone.localdomain runuser[134887]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:07:56 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:07:56 standalone.localdomain runuser[135122]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:07:56 standalone.localdomain podman[135114]: 2025-10-13 14:07:56.533671343 +0000 UTC m=+0.068010689 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.expose-services=, container_name=swift_proxy, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, io.buildah.version=1.33.12)
Oct 13 14:07:56 standalone.localdomain podman[135050]: 2025-10-13 14:07:56.546759762 +0000 UTC m=+0.303241472 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, distribution-scope=public, name=rhosp17/openstack-glance-api, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, container_name=glance_api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, com.redhat.component=openstack-glance-api-container, version=17.1.9, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:07:56 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:07:56 standalone.localdomain podman[135114]: 2025-10-13 14:07:56.731520625 +0000 UTC m=+0.265859931 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vcs-type=git, release=1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, container_name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:07:56 standalone.localdomain ceph-mon[29756]: pgmap v984: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:56 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:07:57 standalone.localdomain runuser[135122]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:07:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v985: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:58 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:07:58 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 56, saving inputs in /var/lib/pacemaker/pengine/pe-input-56.bz2
Oct 13 14:07:58 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation openstack-cinder-volume-podman-0_monitor_0 locally on standalone
Oct 13 14:07:58 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of probe operation for openstack-cinder-volume-podman-0 on standalone
Oct 13 14:07:58 standalone.localdomain pacemaker-controld[57911]:  notice: Result of probe operation for openstack-cinder-volume-podman-0 on standalone: not running
Oct 13 14:07:58 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 56 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-56.bz2): Complete
Oct 13 14:07:58 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:07:58 standalone.localdomain ceph-mon[29756]: pgmap v985: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v986: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:07:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:07:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:07:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:07:59 standalone.localdomain podman[135384]: 2025-10-13 14:07:59.822542394 +0000 UTC m=+0.081376847 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, vcs-type=git, batch=17.1_20250721.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T16:05:11, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-api-container, managed_by=tripleo_ansible, container_name=nova_api)
Oct 13 14:07:59 standalone.localdomain podman[135384]: 2025-10-13 14:07:59.857278694 +0000 UTC m=+0.116113177 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, build-date=2025-07-21T16:05:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, batch=17.1_20250721.1, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.component=openstack-nova-api-container, io.openshift.expose-services=, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, container_name=nova_api, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Oct 13 14:07:59 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:07:59 standalone.localdomain podman[135383]: 2025-10-13 14:07:59.93345276 +0000 UTC m=+0.192863811 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, architecture=x86_64, container_name=keystone_cron, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:18, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, tcib_managed=true, com.redhat.component=openstack-keystone-container, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step3, io.openshift.expose-services=)
Oct 13 14:07:59 standalone.localdomain podman[135383]: 2025-10-13 14:07:59.943885559 +0000 UTC m=+0.203296651 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-keystone-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, name=rhosp17/openstack-keystone, 
vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, container_name=keystone_cron, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:07:59 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:08:00 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:08:00 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 57, saving inputs in /var/lib/pacemaker/pengine/pe-input-57.bz2
Oct 13 14:08:00 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 57 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-57.bz2): Complete
Oct 13 14:08:00 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:08:00 standalone.localdomain ceph-mon[29756]: pgmap v986: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:01 standalone.localdomain systemd[1]: tmp-crun.I3QYb2.mount: Deactivated successfully.
Oct 13 14:08:01 standalone.localdomain podman[135447]: 2025-10-13 14:08:01.032836149 +0000 UTC m=+0.113864189 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, release=1, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, tcib_managed=true, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-mariadb)
Oct 13 14:08:01 standalone.localdomain podman[135447]: 2025-10-13 14:08:01.061916327 +0000 UTC m=+0.142944427 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:08:01 standalone.localdomain podman[135490]: 2025-10-13 14:08:01.265097013 +0000 UTC m=+0.097301873 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.buildah.version=1.33.12, build-date=2025-07-21T13:08:11, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-haproxy-container, name=rhosp17/openstack-haproxy, tcib_managed=true)
Oct 13 14:08:01 standalone.localdomain podman[135490]: 2025-10-13 14:08:01.293302824 +0000 UTC m=+0.125507724 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, build-date=2025-07-21T13:08:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, com.redhat.component=openstack-haproxy-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:08:01 standalone.localdomain podman[135524]: 2025-10-13 14:08:01.444367328 +0000 UTC m=+0.103019767 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.buildah.version=1.33.12, release=1, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T13:08:05, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, com.redhat.component=openstack-rabbitmq-container, distribution-scope=public, tcib_managed=true)
Oct 13 14:08:01 standalone.localdomain podman[135524]: 2025-10-13 14:08:01.472969942 +0000 UTC m=+0.131622401 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, version=17.1.9, release=1)
Oct 13 14:08:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v987: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:08:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:08:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:08:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:08:02 standalone.localdomain podman[135562]: 2025-10-13 14:08:02.05475307 +0000 UTC m=+0.072765953 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.component=openstack-nova-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-nova-api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-api, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:05:11, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:08:02 standalone.localdomain podman[135562]: 2025-10-13 14:08:02.064823818 +0000 UTC m=+0.082836701 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-api-container, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, build-date=2025-07-21T16:05:11, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, container_name=nova_api_cron, release=1)
Oct 13 14:08:02 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:08:02 standalone.localdomain podman[135591]: 2025-10-13 14:08:02.12577132 +0000 UTC m=+0.082374158 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, name=rhosp17/openstack-barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, config_id=tripleo_step3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-barbican-keystone-listener-container, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T16:18:19, container_name=barbican_keystone_listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true)
Oct 13 14:08:02 standalone.localdomain podman[135590]: 2025-10-13 14:08:02.170537997 +0000 UTC m=+0.130549559 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=barbican_api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, version=17.1.9, com.redhat.component=openstack-barbican-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:22:44, description=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed)
Oct 13 14:08:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:08:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:08:02 standalone.localdomain podman[135591]: 2025-10-13 14:08:02.197417848 +0000 UTC m=+0.154020776 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vendor=Red Hat, Inc., name=rhosp17/openstack-barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, io.openshift.expose-services=, container_name=barbican_keystone_listener, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-barbican-keystone-listener-container, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, batch=17.1_20250721.1, build-date=2025-07-21T16:18:19)
Oct 13 14:08:02 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:08:02 standalone.localdomain podman[135640]: 2025-10-13 14:08:02.329796732 +0000 UTC m=+0.140993768 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, release=1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-dhcp-agent, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, architecture=x86_64, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.component=openstack-neutron-dhcp-agent-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, container_name=neutron_dhcp)
Oct 13 14:08:02 standalone.localdomain podman[135561]: 2025-10-13 14:08:02.29602662 +0000 UTC m=+0.312751083 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-barbican-worker, vcs-type=git, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T15:36:22, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.component=openstack-barbican-worker-container, distribution-scope=public, architecture=x86_64, container_name=barbican_worker, version=17.1.9)
Oct 13 14:08:02 standalone.localdomain podman[135590]: 2025-10-13 14:08:02.358326003 +0000 UTC m=+0.318337585 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, summary=Red Hat OpenStack Platform 17.1 barbican-api, container_name=barbican_api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-api, release=1, com.redhat.component=openstack-barbican-api-container, batch=17.1_20250721.1, build-date=2025-07-21T15:22:44, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, version=17.1.9, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-api, tcib_managed=true)
Oct 13 14:08:02 standalone.localdomain podman[135641]: 2025-10-13 14:08:02.257781022 +0000 UTC m=+0.069419122 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, release=1, vcs-type=git, container_name=neutron_sriov_agent, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-neutron-sriov-agent-container, build-date=2025-07-21T16:03:34, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:08:02 standalone.localdomain podman[135640]: 2025-10-13 14:08:02.371819655 +0000 UTC m=+0.183016581 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-dhcp-agent-container, name=rhosp17/openstack-neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, build-date=2025-07-21T16:28:54, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, managed_by=tripleo_ansible, container_name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:08:02 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:08:02 standalone.localdomain podman[135641]: 2025-10-13 14:08:02.391043961 +0000 UTC m=+0.202682051 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, tcib_managed=true, container_name=neutron_sriov_agent, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, vcs-type=git, version=17.1.9, com.redhat.component=openstack-neutron-sriov-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T16:03:34, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent)
Oct 13 14:08:02 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:08:02 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:08:02 standalone.localdomain podman[135561]: 2025-10-13 14:08:02.429920109 +0000 UTC m=+0.446644562 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, build-date=2025-07-21T15:36:22, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, name=rhosp17/openstack-barbican-worker, architecture=x86_64, container_name=barbican_worker, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step3, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.component=openstack-barbican-worker-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team)
Oct 13 14:08:02 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:08:02 standalone.localdomain ceph-mon[29756]: pgmap v987: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:02 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:08:02 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      openstack-cinder-volume-podman-0     (                             standalone )
Oct 13 14:08:02 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 58, saving inputs in /var/lib/pacemaker/pengine/pe-input-58.bz2
Oct 13 14:08:02 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation openstack-cinder-volume-podman-0_start_0 locally on standalone
Oct 13 14:08:02 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for openstack-cinder-volume-podman-0 on standalone
Oct 13 14:08:03 standalone.localdomain podman(openstack-cinder-volume-podman-0)[135771]: INFO: running container openstack-cinder-volume-podman-0 for the first time
Oct 13 14:08:03 standalone.localdomain podman[135775]: 2025-10-13 14:08:03.290761072 +0000 UTC m=+0.088240167 container create a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-cinder-volume, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, build-date=2025-07-21T16:13:39, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, tcib_managed=true, version=17.1.9, release=1, io.openshift.expose-services=, com.redhat.component=openstack-cinder-volume-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-volume, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, architecture=x86_64)
Oct 13 14:08:03 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:08:03 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88091550f23654f6033122dfc572214e3b2bb01634a4ff86212dbe1ecacd4fa1/merged/var/log/cinder supports timestamps until 2038 (0x7fffffff)
Oct 13 14:08:03 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88091550f23654f6033122dfc572214e3b2bb01634a4ff86212dbe1ecacd4fa1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 14:08:03 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88091550f23654f6033122dfc572214e3b2bb01634a4ff86212dbe1ecacd4fa1/merged/var/lib/cinder supports timestamps until 2038 (0x7fffffff)
Oct 13 14:08:03 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88091550f23654f6033122dfc572214e3b2bb01634a4ff86212dbe1ecacd4fa1/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:08:03 standalone.localdomain podman[135775]: 2025-10-13 14:08:03.252618316 +0000 UTC m=+0.050097431 image pull  cluster.common.tag/cinder-volume:pcmklatest
Oct 13 14:08:03 standalone.localdomain podman[135775]: 2025-10-13 14:08:03.357804429 +0000 UTC m=+0.155283524 container init a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-cinder-volume, com.redhat.component=openstack-cinder-volume-container, build-date=2025-07-21T16:13:39, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cinder-volume, release=1, io.openshift.expose-services=, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cinder-volume, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1)
Oct 13 14:08:03 standalone.localdomain podman[135775]: 2025-10-13 14:08:03.366617778 +0000 UTC m=+0.164096893 container start a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, release=1, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, tcib_managed=true, com.redhat.component=openstack-cinder-volume-container, description=Red Hat OpenStack Platform 17.1 cinder-volume, summary=Red Hat OpenStack Platform 17.1 cinder-volume, build-date=2025-07-21T16:13:39, version=17.1.9, name=rhosp17/openstack-cinder-volume, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-type=git)
Oct 13 14:08:03 standalone.localdomain podman(openstack-cinder-volume-podman-0)[135800]: INFO: a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1
Oct 13 14:08:03 standalone.localdomain sudo[135798]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:08:03 standalone.localdomain systemd[1]: Started Session c12 of User root.
Oct 13 14:08:03 standalone.localdomain sudo[135798]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:08:03 standalone.localdomain sudo[135798]: pam_unix(sudo:session): session closed for user root
Oct 13 14:08:03 standalone.localdomain systemd[1]: session-c12.scope: Deactivated successfully.
Oct 13 14:08:03 standalone.localdomain podman(openstack-cinder-volume-podman-0)[135823]: INFO: Creating drop-in dependency for "openstack-cinder-volume-podman-0" (a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1)
Oct 13 14:08:03 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:08:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v988: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:03 standalone.localdomain systemd-rc-local-generator[135853]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:08:03 standalone.localdomain systemd-sysv-generator[135856]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:08:03 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:08:03 standalone.localdomain ceph-mon[29756]: pgmap v988: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:03 standalone.localdomain python3[135834]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/cinder.md5sum follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:08:04 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:08:04 standalone.localdomain recover_tripleo_nova_virtqemud[135891]: 93291
Oct 13 14:08:04 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:08:04 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:08:04 standalone.localdomain systemd[1]: tmp-crun.e24Yks.mount: Deactivated successfully.
Oct 13 14:08:04 standalone.localdomain podman[135884]: 2025-10-13 14:08:04.159579947 +0000 UTC m=+0.082213822 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, summary=Red Hat OpenStack Platform 17.1 cinder-volume, com.redhat.component=openstack-cinder-volume-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, description=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, version=17.1.9, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-volume, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:13:39, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, tcib_managed=true, architecture=x86_64)
Oct 13 14:08:04 standalone.localdomain podman[135884]: 2025-10-13 14:08:04.191979496 +0000 UTC m=+0.114613411 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-cinder-volume, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, version=17.1.9, release=1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:13:39, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, com.redhat.component=openstack-cinder-volume-container, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-volume)
Oct 13 14:08:04 standalone.localdomain python3[135875]: ansible-tripleo_diff_exec Invoked with command=/var/lib/container-config-scripts/pacemaker_restart_bundle.sh cinder_volume openstack-cinder-volume openstack-cinder-volume _ Started state_file=/var/lib/config-data/puppet-generated/cinder.md5sum state_file_suffix=.cinder_volume_previous_run environment={'TRIPLEO_MINOR_UPDATE': '', 'TRIPLEO_HA_WRAPPER_RESOURCE_EXISTS': 'False'} return_codes=[0]
Oct 13 14:08:04 standalone.localdomain podman[135917]: 2025-10-13 14:08:04.298781278 +0000 UTC m=+0.080027714 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T16:13:39, summary=Red Hat OpenStack Platform 17.1 cinder-volume, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cinder-volume, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cinder-volume, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-cinder-volume-container, io.openshift.expose-services=)
Oct 13 14:08:04 standalone.localdomain podman[135917]: 2025-10-13 14:08:04.326596968 +0000 UTC m=+0.107843454 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, com.redhat.component=openstack-cinder-volume-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, description=Red Hat OpenStack Platform 17.1 cinder-volume, architecture=x86_64, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T16:13:39, name=rhosp17/openstack-cinder-volume, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-type=git, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 14:08:04 standalone.localdomain podman(openstack-cinder-volume-podman-0)[135955]: NOTICE: Container openstack-cinder-volume-podman-0  started successfully
Oct 13 14:08:04 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for openstack-cinder-volume-podman-0 on standalone: ok
Oct 13 14:08:04 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation openstack-cinder-volume-podman-0_monitor_60000 locally on standalone
Oct 13 14:08:04 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for openstack-cinder-volume-podman-0 on standalone
Oct 13 14:08:04 standalone.localdomain pcmkrestart[135960]: Initial deployment, skipping the restart of openstack-cinder-volume
Oct 13 14:08:04 standalone.localdomain podman[135963]: 2025-10-13 14:08:04.499676974 +0000 UTC m=+0.113476126 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, name=rhosp17/openstack-cinder-volume, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, release=1, com.redhat.component=openstack-cinder-volume-container, distribution-scope=public, build-date=2025-07-21T16:13:39, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-volume, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-volume)
Oct 13 14:08:04 standalone.localdomain podman[135963]: 2025-10-13 14:08:04.529165365 +0000 UTC m=+0.142964567 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-volume, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-volume, version=17.1.9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-volume, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, build-date=2025-07-21T16:13:39, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, com.redhat.component=openstack-cinder-volume-container, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12)
Oct 13 14:08:04 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for openstack-cinder-volume-podman-0 on standalone: ok
Oct 13 14:08:04 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 58 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-58.bz2): Complete
Oct 13 14:08:04 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:08:04 standalone.localdomain python3[135986]: ansible-stat Invoked with path=/tmp/tripleo_ha_image_openstack-cinder-volume follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:08:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:08:04 standalone.localdomain haproxy[70940]: 172.17.0.100:43613 [13/Oct/2025:14:08:04.888] mysql mysql/standalone.internalapi.localdomain 1/0/86 5898 -- 42/42/41/41/0 0/0
Oct 13 14:08:05 standalone.localdomain haproxy[70940]: 172.17.0.100:41467 [13/Oct/2025:14:08:04.988] mysql mysql/standalone.internalapi.localdomain 1/0/28 12776 -- 42/42/41/41/0 0/0
Oct 13 14:08:05 standalone.localdomain haproxy[70940]: 172.17.0.100:52099 [13/Oct/2025:14:08:05.016] mysql mysql/standalone.internalapi.localdomain 1/0/18 4258 -- 42/42/41/41/0 0/0
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/840569222' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/840569222' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2942634520' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2942634520' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3115114652' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3115114652' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:08:05 standalone.localdomain python3[136161]: ansible-ansible.legacy.command Invoked with _raw_params=pcs resource config "openstack-manila-share"
                                                         _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v989: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/840569222' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/840569222' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2942634520' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2942634520' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3115114652' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:08:05 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3115114652' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:08:06 standalone.localdomain python3[136292]: ansible-ansible.legacy.command Invoked with _raw_params=puppet apply  --detailed-exitcodes --summarize --color=false --modulepath '/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules' --tags 'pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation' -e 'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::manila::share_bundle'
                                                         _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:06 standalone.localdomain ceph-mon[29756]: pgmap v989: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v990: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:08:07 standalone.localdomain runuser[136416]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:08:07 standalone.localdomain ceph-mon[29756]: pgmap v990: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:07 standalone.localdomain systemd[1]: tmp-crun.M8uvkb.mount: Deactivated successfully.
Oct 13 14:08:07 standalone.localdomain podman[136405]: 2025-10-13 14:08:07.835404046 +0000 UTC m=+0.107394671 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, container_name=ovn_cluster_northd, name=rhosp17/openstack-ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, release=1, config_id=ovn_cluster_northd, com.redhat.component=openstack-ovn-northd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, distribution-scope=public, tcib_managed=true, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:30:04, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, managed_by=tripleo_ansible)
Oct 13 14:08:07 standalone.localdomain podman[136405]: 2025-10-13 14:08:07.870004093 +0000 UTC m=+0.141994718 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-ovn-northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, com.redhat.component=openstack-ovn-northd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, config_id=ovn_cluster_northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, version=17.1.9, build-date=2025-07-21T13:30:04, summary=Red Hat OpenStack Platform 17.1 ovn-northd, container_name=ovn_cluster_northd, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:08:07 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:08:08 standalone.localdomain runuser[136416]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:08:08 standalone.localdomain runuser[136565]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:08:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:08:08 standalone.localdomain podman[136612]: 2025-10-13 14:08:08.83053229 +0000 UTC m=+0.091545958 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, io.buildah.version=1.33.12, release=1, version=17.1.9, container_name=clustercheck, 
com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45)
Oct 13 14:08:08 standalone.localdomain podman[136612]: 2025-10-13 14:08:08.875960187 +0000 UTC m=+0.136973865 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, container_name=clustercheck, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:45, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, com.redhat.license_terms=https://www.redhat.com/agreements, 
config_id=tripleo_step2, version=17.1.9, name=rhosp17/openstack-mariadb, release=1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, architecture=x86_64)
Oct 13 14:08:08 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:08:09 standalone.localdomain runuser[136565]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:08:09 standalone.localdomain runuser[136664]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:08:09 standalone.localdomain sshd[136711]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:08:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v991: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:08:09 standalone.localdomain runuser[136664]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:08:10 standalone.localdomain sshd[136711]: Received disconnect from 193.46.255.217 port 37130:11:  [preauth]
Oct 13 14:08:10 standalone.localdomain sshd[136711]: Disconnected from authenticating user root 193.46.255.217 port 37130 [preauth]
Oct 13 14:08:10 standalone.localdomain ceph-mon[29756]: pgmap v991: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v992: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:11 standalone.localdomain ceph-mon[29756]: pgmap v992: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v993: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:14 standalone.localdomain ceph-mon[29756]: pgmap v993: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:08:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v994: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:08:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:08:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:08:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:08:16 standalone.localdomain ceph-mon[29756]: pgmap v994: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:16 standalone.localdomain podman[137030]: 2025-10-13 14:08:16.839791293 +0000 UTC m=+0.103177653 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vendor=Red Hat, Inc., build-date=2025-07-21T15:58:55, com.redhat.component=openstack-cinder-api-container, io.openshift.expose-services=, name=rhosp17/openstack-cinder-api, container_name=cinder_api_cron, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1)
Oct 13 14:08:16 standalone.localdomain podman[137031]: 2025-10-13 14:08:16.87504601 +0000 UTC m=+0.136567083 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-scheduler-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, build-date=2025-07-21T16:10:12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, release=1, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, container_name=cinder_scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, 
managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, name=rhosp17/openstack-cinder-scheduler, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:08:16 standalone.localdomain podman[137030]: 2025-10-13 14:08:16.928562704 +0000 UTC m=+0.191949064 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, name=rhosp17/openstack-cinder-api, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=cinder_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 cinder-api, 
build-date=2025-07-21T15:58:55, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:08:16 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:08:16 standalone.localdomain podman[137029]: 2025-10-13 14:08:16.941093796 +0000 UTC m=+0.205449205 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, vendor=Red Hat, Inc., release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_id=tripleo_step3, container_name=keystone, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-07-21T13:27:18, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:08:16 standalone.localdomain podman[137029]: 2025-10-13 14:08:16.975128056 +0000 UTC m=+0.239483375 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, com.redhat.component=openstack-keystone-container, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, container_name=keystone, description=Red Hat OpenStack Platform 17.1 keystone, 
build-date=2025-07-21T13:27:18, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, release=1, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:08:16 standalone.localdomain podman[137031]: 2025-10-13 14:08:16.983836413 +0000 UTC m=+0.245357486 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-cinder-scheduler, vendor=Red Hat, Inc., vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cinder-scheduler-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, 
build-date=2025-07-21T16:10:12, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, container_name=cinder_scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler)
Oct 13 14:08:16 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:08:17 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:08:17 standalone.localdomain podman[137028]: 2025-10-13 14:08:16.987840834 +0000 UTC m=+0.246573471 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, 
architecture=x86_64, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:08:17 standalone.localdomain podman[137028]: 2025-10-13 14:08:17.071819679 +0000 UTC m=+0.330552296 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.9, name=rhosp17/openstack-iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, build-date=2025-07-21T13:27:15, vcs-type=git)
Oct 13 14:08:17 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:08:17 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:08:17 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 59, saving inputs in /var/lib/pacemaker/pengine/pe-input-59.bz2
Oct 13 14:08:17 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 59 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-59.bz2): Complete
Oct 13 14:08:17 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:08:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v995: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:17 standalone.localdomain ceph-mon[29756]: pgmap v995: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v996: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:08:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:08:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:08:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:08:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:08:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:08:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:08:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:08:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:08:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:08:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:08:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:08:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:08:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:08:19 standalone.localdomain podman[137291]: 2025-10-13 14:08:19.856627374 +0000 UTC m=+0.111192187 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.component=openstack-manila-scheduler-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=manila_scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 manila-scheduler, name=rhosp17/openstack-manila-scheduler, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, release=1, build-date=2025-07-21T15:56:28, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']})
Oct 13 14:08:19 standalone.localdomain podman[137291]: 2025-10-13 14:08:19.891574942 +0000 UTC m=+0.146139725 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-manila-scheduler, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, architecture=x86_64, build-date=2025-07-21T15:56:28, io.openshift.expose-services=, vcs-type=git, 
container_name=manila_scheduler, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.component=openstack-manila-scheduler-container, batch=17.1_20250721.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:08:19 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:08:19 standalone.localdomain podman[137290]: 2025-10-13 14:08:19.90886791 +0000 UTC m=+0.162912936 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, vcs-type=git, version=17.1.9, release=1, build-date=2025-07-21T14:49:55, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, name=rhosp17/openstack-heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, container_name=heat_api_cfn, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack 
Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.buildah.version=1.33.12)
Oct 13 14:08:19 standalone.localdomain podman[137292]: 2025-10-13 14:08:19.892852171 +0000 UTC m=+0.139878143 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:44:11, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, release=1, summary=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, 
io.buildah.version=1.33.12, version=17.1.9, container_name=heat_engine, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, com.redhat.component=openstack-heat-engine-container, maintainer=OpenStack TripleO Team)
Oct 13 14:08:19 standalone.localdomain podman[137337]: 2025-10-13 14:08:19.946673874 +0000 UTC m=+0.158939065 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, name=rhosp17/openstack-horizon, com.redhat.component=openstack-horizon-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, release=1, container_name=horizon, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, build-date=2025-07-21T13:58:15, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 14:08:20 standalone.localdomain podman[137297]: 2025-10-13 14:08:20.001386166 +0000 UTC m=+0.245756787 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-scheduler, batch=17.1_20250721.1, name=rhosp17/openstack-nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, 
distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_scheduler, build-date=2025-07-21T16:02:54, vendor=Red Hat, Inc.)
Oct 13 14:08:20 standalone.localdomain podman[137326]: 2025-10-13 14:08:20.00937862 +0000 UTC m=+0.235896496 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, vendor=Red Hat, Inc., release=1, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, container_name=heat_api_cron, name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public)
Oct 13 14:08:20 standalone.localdomain podman[137305]: 2025-10-13 14:08:19.970046949 +0000 UTC m=+0.209341455 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, config_id=tripleo_step1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-memcached, tcib_managed=true, container_name=memcached, build-date=2025-07-21T12:58:43, com.redhat.component=openstack-memcached-container, 
vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git)
Oct 13 14:08:20 standalone.localdomain podman[137290]: 2025-10-13 14:08:20.034477337 +0000 UTC m=+0.288522413 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, version=17.1.9, vcs-type=git, container_name=heat_api_cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-heat-api-cfn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, architecture=x86_64)
Oct 13 14:08:20 standalone.localdomain podman[137297]: 2025-10-13 14:08:20.04310185 +0000 UTC m=+0.287472491 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, version=17.1.9, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T16:02:54, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-scheduler, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, maintainer=OpenStack TripleO Team, container_name=nova_scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-type=git, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, 
io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-scheduler, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, architecture=x86_64, distribution-scope=public)
Oct 13 14:08:20 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:08:20 standalone.localdomain podman[137322]: 2025-10-13 14:08:20.049154215 +0000 UTC m=+0.274329530 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, name=rhosp17/openstack-manila-api, release=1, summary=Red Hat OpenStack Platform 17.1 manila-api, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, container_name=manila_api_cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, description=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-manila-api-container, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, build-date=2025-07-21T16:06:43)
Oct 13 14:08:20 standalone.localdomain podman[137305]: 2025-10-13 14:08:20.052815846 +0000 UTC m=+0.292110382 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, build-date=2025-07-21T12:58:43, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 
memcached, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-memcached-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, release=1)
Oct 13 14:08:20 standalone.localdomain podman[137322]: 2025-10-13 14:08:20.060949405 +0000 UTC m=+0.286124720 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, com.redhat.component=openstack-manila-api-container, maintainer=OpenStack TripleO Team, tcib_managed=true, 
release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, container_name=manila_api_cron, description=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, name=rhosp17/openstack-manila-api, managed_by=tripleo_ansible)
Oct 13 14:08:20 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:08:20 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:08:20 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:08:20 standalone.localdomain podman[137326]: 2025-10-13 14:08:20.090685344 +0000 UTC m=+0.317203220 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=heat_api_cron, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, config_id=tripleo_step4, 
io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1)
Oct 13 14:08:20 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:08:20 standalone.localdomain podman[137348]: 2025-10-13 14:08:20.107014821 +0000 UTC m=+0.314064313 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, release=1, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, version=17.1.9, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:08:20 standalone.localdomain podman[137292]: 2025-10-13 14:08:20.12398139 +0000 UTC m=+0.371007372 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, config_id=tripleo_step4, name=rhosp17/openstack-heat-engine, batch=17.1_20250721.1, build-date=2025-07-21T15:44:11, io.buildah.version=1.33.12, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., container_name=heat_engine, managed_by=tripleo_ansible, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, release=1, io.openshift.expose-services=, com.redhat.component=openstack-heat-engine-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:08:20 standalone.localdomain podman[137337]: 2025-10-13 14:08:20.131441978 +0000 UTC m=+0.343707229 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, summary=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=, build-date=2025-07-21T13:58:15, container_name=horizon, 
maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, com.redhat.component=openstack-horizon-container, description=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 14:08:20 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:08:20 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:08:20 standalone.localdomain podman[137348]: 2025-10-13 14:08:20.143440454 +0000 UTC m=+0.350489956 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, release=1, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, 
distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, config_id=tripleo_step4)
Oct 13 14:08:20 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:08:20 standalone.localdomain podman[137346]: 2025-10-13 14:08:20.205623754 +0000 UTC m=+0.410706765 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-server-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-server, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, summary=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-07-21T15:44:03, config_id=tripleo_step4, container_name=neutron_api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1)
Oct 13 14:08:20 standalone.localdomain podman[137338]: 2025-10-13 14:08:20.258086376 +0000 UTC m=+0.470965216 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, release=1, distribution-scope=public, container_name=heat_api, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO 
Team, architecture=x86_64, build-date=2025-07-21T15:56:26, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, vendor=Red Hat, Inc., batch=17.1_20250721.1)
Oct 13 14:08:20 standalone.localdomain podman[137347]: 2025-10-13 14:08:20.271550817 +0000 UTC m=+0.472805012 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, config_id=tripleo_step4, container_name=nova_conductor, description=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, build-date=2025-07-21T15:44:17, io.buildah.version=1.33.12, batch=17.1_20250721.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-nova-conductor, com.redhat.component=openstack-nova-conductor-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true)
Oct 13 14:08:20 standalone.localdomain podman[137347]: 2025-10-13 14:08:20.297730146 +0000 UTC m=+0.498984291 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-conductor, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, container_name=nova_conductor, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T15:44:17, description=Red Hat OpenStack Platform 17.1 nova-conductor, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-nova-conductor, release=1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.component=openstack-nova-conductor-container, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1)
Oct 13 14:08:20 standalone.localdomain podman[137338]: 2025-10-13 14:08:20.318991466 +0000 UTC m=+0.531870296 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, release=1, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, name=rhosp17/openstack-heat-api, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:08:20 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:08:20 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:08:20 standalone.localdomain podman[137346]: 2025-10-13 14:08:20.357868964 +0000 UTC m=+0.562952025 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, distribution-scope=public, 
container_name=neutron_api, com.redhat.component=openstack-neutron-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-07-21T15:44:03)
Oct 13 14:08:20 standalone.localdomain podman[137312]: 2025-10-13 14:08:20.3135704 +0000 UTC m=+0.546757820 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, managed_by=tripleo_ansible, com.redhat.component=openstack-cinder-api-container, description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.9, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:58:55, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-api, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_api, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1)
Oct 13 14:08:20 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:08:20 standalone.localdomain podman[137312]: 2025-10-13 14:08:20.399780464 +0000 UTC m=+0.632967794 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, container_name=cinder_api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, distribution-scope=public, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T15:58:55, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-api, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-cinder-api-container, name=rhosp17/openstack-cinder-api)
Oct 13 14:08:20 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:08:20 standalone.localdomain runuser[137589]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:08:20 standalone.localdomain ceph-mon[29756]: pgmap v996: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:21 standalone.localdomain runuser[137589]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:08:21 standalone.localdomain runuser[137666]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:08:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v997: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:21 standalone.localdomain ceph-mon[29756]: pgmap v997: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:21 standalone.localdomain runuser[137666]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:08:21 standalone.localdomain runuser[137732]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:08:22 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:08:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:08:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:08:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:08:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:08:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:08:22 standalone.localdomain runuser[137732]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:08:22 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 60, saving inputs in /var/lib/pacemaker/pengine/pe-input-60.bz2
Oct 13 14:08:22 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation openstack-manila-share-podman-0_monitor_0 locally on standalone
Oct 13 14:08:22 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of probe operation for openstack-manila-share-podman-0 on standalone
Oct 13 14:08:22 standalone.localdomain systemd[1]: tmp-crun.w5VykP.mount: Deactivated successfully.
Oct 13 14:08:22 standalone.localdomain podman[137815]: 2025-10-13 14:08:22.839092417 +0000 UTC m=+0.088638019 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, 
com.redhat.component=openstack-swift-account-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, name=rhosp17/openstack-swift-account, container_name=swift_account_server, io.buildah.version=1.33.12, version=17.1.9, release=1)
Oct 13 14:08:22 standalone.localdomain systemd[1]: tmp-crun.738YSA.mount: Deactivated successfully.
Oct 13 14:08:22 standalone.localdomain podman[137805]: 2025-10-13 14:08:22.880589383 +0000 UTC m=+0.133594710 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 
17.1 swift-container, version=17.1.9, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, distribution-scope=public, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12)
Oct 13 14:08:22 standalone.localdomain podman[137802]: 2025-10-13 14:08:22.892385744 +0000 UTC m=+0.145229926 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, version=17.1.9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, 
io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public)
Oct 13 14:08:22 standalone.localdomain podman[137822]: 2025-10-13 14:08:22.862880623 +0000 UTC m=+0.102215343 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-nova-novncproxy-container, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, container_name=nova_vnc_proxy, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, name=rhosp17/openstack-nova-novncproxy, build-date=2025-07-21T15:24:10, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:08:22 standalone.localdomain podman[137804]: 2025-10-13 14:08:22.965720824 +0000 UTC m=+0.226441667 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, release=1, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true)
Oct 13 14:08:22 standalone.localdomain pacemaker-controld[57911]:  notice: Result of probe operation for openstack-manila-share-podman-0 on standalone: not running
Oct 13 14:08:22 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 60 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-60.bz2): Complete
Oct 13 14:08:22 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:08:23 standalone.localdomain podman[137815]: 2025-10-13 14:08:23.019742134 +0000 UTC m=+0.269287756 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, release=1, name=rhosp17/openstack-swift-account, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, container_name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, 
io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:08:23 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:08:23 standalone.localdomain podman[137802]: 2025-10-13 14:08:23.059809617 +0000 UTC m=+0.312653779 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.component=openstack-swift-object-container, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, container_name=swift_object_server, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:08:23 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:08:23 standalone.localdomain podman[137805]: 2025-10-13 14:08:23.076154287 +0000 UTC m=+0.329159614 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, release=1, com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-swift-container, architecture=x86_64, managed_by=tripleo_ansible, container_name=swift_container_server, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git)
Oct 13 14:08:23 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:08:23
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr', 'backups', 'volumes', 'vms', 'manila_data', 'images', 'manila_metadata']
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:08:23 standalone.localdomain podman[137822]: 2025-10-13 14:08:23.130763265 +0000 UTC m=+0.370097975 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, version=17.1.9, config_id=tripleo_step4, com.redhat.component=openstack-nova-novncproxy-container, managed_by=tripleo_ansible, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-novncproxy, batch=17.1_20250721.1, container_name=nova_vnc_proxy, 
io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, build-date=2025-07-21T15:24:10, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1)
Oct 13 14:08:23 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:08:23 standalone.localdomain podman[137804]: 2025-10-13 14:08:23.33689547 +0000 UTC m=+0.597616363 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, container_name=nova_migration_target, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, 
io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:08:23 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:08:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v998: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:24 standalone.localdomain ceph-mon[29756]: pgmap v998: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:08:24 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:08:24 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 61, saving inputs in /var/lib/pacemaker/pengine/pe-input-61.bz2
Oct 13 14:08:24 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 61 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-61.bz2): Complete
Oct 13 14:08:24 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:08:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v999: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:08:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:08:26 standalone.localdomain podman[138127]: 2025-10-13 14:08:26.300573199 +0000 UTC m=+0.101844092 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, release=1, batch=17.1_20250721.1, distribution-scope=public, name=rhosp17/openstack-glance-api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step4, container_name=glance_api_cron, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.component=openstack-glance-api-container, tcib_managed=true, build-date=2025-07-21T13:58:20, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=)
Oct 13 14:08:26 standalone.localdomain podman[138127]: 2025-10-13 14:08:26.309901694 +0000 UTC m=+0.111172587 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, config_id=tripleo_step4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, tcib_managed=true, build-date=2025-07-21T13:58:20, container_name=glance_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64)
Oct 13 14:08:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:08:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:08:26 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:08:26 standalone.localdomain podman[138156]: 2025-10-13 14:08:26.403133791 +0000 UTC m=+0.076075284 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, vendor=Red Hat, Inc., com.redhat.component=openstack-placement-api-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, container_name=placement_api, distribution-scope=public, release=1, build-date=2025-07-21T13:58:12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, version=17.1.9, name=rhosp17/openstack-placement-api, vcs-type=git)
Oct 13 14:08:26 standalone.localdomain systemd[1]: tmp-crun.IHaWSH.mount: Deactivated successfully.
Oct 13 14:08:26 standalone.localdomain podman[138157]: 2025-10-13 14:08:26.445538576 +0000 UTC m=+0.110551567 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, managed_by=tripleo_ansible, container_name=nova_metadata, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T16:05:11, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-nova-api, release=1, com.redhat.component=openstack-nova-api-container, distribution-scope=public, io.openshift.expose-services=, version=17.1.9)
Oct 13 14:08:26 standalone.localdomain podman[138156]: 2025-10-13 14:08:26.452051445 +0000 UTC m=+0.124992958 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 placement-api, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-placement-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, container_name=placement_api, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 placement-api, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, vendor=Red Hat, Inc., release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, 
tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:12, com.redhat.component=openstack-placement-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api)
Oct 13 14:08:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:08:26 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:08:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:08:26 standalone.localdomain podman[138128]: 2025-10-13 14:08:26.518050651 +0000 UTC m=+0.318138108 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:08:26 standalone.localdomain podman[138157]: 2025-10-13 14:08:26.536451394 +0000 UTC m=+0.201464435 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, com.redhat.component=openstack-nova-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, build-date=2025-07-21T16:05:11, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-api, release=1, architecture=x86_64, config_id=tripleo_step4, container_name=nova_metadata, distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, 
io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1)
Oct 13 14:08:26 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:08:26 standalone.localdomain podman[138128]: 2025-10-13 14:08:26.590856385 +0000 UTC m=+0.390943822 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, version=17.1.9, container_name=ovn_metadata_agent, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=)
Oct 13 14:08:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:08:26 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:08:26 standalone.localdomain podman[138220]: 2025-10-13 14:08:26.615637702 +0000 UTC m=+0.104915236 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, managed_by=tripleo_ansible, container_name=glance_api_internal, version=17.1.9, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:08:26 standalone.localdomain podman[138203]: 2025-10-13 14:08:26.666698241 +0000 UTC m=+0.191230281 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, batch=17.1_20250721.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1)
Oct 13 14:08:26 standalone.localdomain podman[138247]: 2025-10-13 14:08:26.708958332 +0000 UTC m=+0.091511946 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, tcib_managed=true, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, release=1, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, 
build-date=2025-07-21T13:58:20, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_id=tripleo_step4, container_name=glance_api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64)
Oct 13 14:08:26 standalone.localdomain podman[138203]: 2025-10-13 14:08:26.715415509 +0000 UTC m=+0.239947579 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:08:26 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:08:26 standalone.localdomain ceph-mon[29756]: pgmap v999: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:08:26 standalone.localdomain podman[138220]: 2025-10-13 14:08:26.850885707 +0000 UTC m=+0.340163231 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 
glance-api, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:08:26 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:08:26 standalone.localdomain podman[138295]: 2025-10-13 14:08:26.930492508 +0000 UTC m=+0.084206313 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, 
batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_proxy, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, build-date=2025-07-21T14:48:37)
Oct 13 14:08:26 standalone.localdomain podman[138247]: 2025-10-13 14:08:26.958257746 +0000 UTC m=+0.340811390 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, 
vcs-type=git, distribution-scope=public, name=rhosp17/openstack-glance-api, container_name=glance_api, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:08:26 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:08:27 standalone.localdomain podman[138295]: 2025-10-13 14:08:27.128995491 +0000 UTC m=+0.282709306 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, container_name=swift_proxy, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=)
Oct 13 14:08:27 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:08:27 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:08:27 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Start      openstack-manila-share-podman-0      (                             standalone )
Oct 13 14:08:27 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 62, saving inputs in /var/lib/pacemaker/pengine/pe-input-62.bz2
Oct 13 14:08:27 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating start operation openstack-manila-share-podman-0_start_0 locally on standalone
Oct 13 14:08:27 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of start operation for openstack-manila-share-podman-0 on standalone
Oct 13 14:08:27 standalone.localdomain podman(openstack-manila-share-podman-0)[138423]: INFO: running container openstack-manila-share-podman-0 for the first time
Oct 13 14:08:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1000: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:27 standalone.localdomain podman[138427]: 2025-10-13 14:08:27.639458862 +0000 UTC m=+0.113604641 container create cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, com.redhat.component=openstack-manila-share-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 manila-share, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, architecture=x86_64, release=1, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, description=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.expose-services=, name=rhosp17/openstack-manila-share, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-type=git, build-date=2025-07-21T15:22:36, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, io.buildah.version=1.33.12, tcib_managed=true)
Oct 13 14:08:27 standalone.localdomain systemd[1]: tmp-crun.lN7Lsm.mount: Deactivated successfully.
Oct 13 14:08:27 standalone.localdomain podman[138427]: 2025-10-13 14:08:27.585160644 +0000 UTC m=+0.059306463 image pull  cluster.common.tag/manila-share:pcmklatest
Oct 13 14:08:27 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:08:27 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fab759af8d1b0f9e496bf7fab8e111ee85c632828a3c4090f86bd90b5fee65d/merged/var/lib/manila supports timestamps until 2038 (0x7fffffff)
Oct 13 14:08:27 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fab759af8d1b0f9e496bf7fab8e111ee85c632828a3c4090f86bd90b5fee65d/merged/var/log/manila supports timestamps until 2038 (0x7fffffff)
Oct 13 14:08:27 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fab759af8d1b0f9e496bf7fab8e111ee85c632828a3c4090f86bd90b5fee65d/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:08:27 standalone.localdomain podman[138427]: 2025-10-13 14:08:27.717232807 +0000 UTC m=+0.191378556 container init cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, summary=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, tcib_managed=true, release=1, com.redhat.component=openstack-manila-share-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, name=rhosp17/openstack-manila-share, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, build-date=2025-07-21T15:22:36, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.expose-services=)
Oct 13 14:08:27 standalone.localdomain podman[138427]: 2025-10-13 14:08:27.726314764 +0000 UTC m=+0.200460513 container start cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, name=rhosp17/openstack-manila-share, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 manila-share, architecture=x86_64, io.openshift.expose-services=, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, vendor=Red Hat, Inc., com.redhat.component=openstack-manila-share-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T15:22:36, distribution-scope=public, version=17.1.9)
Oct 13 14:08:27 standalone.localdomain podman(openstack-manila-share-podman-0)[138466]: INFO: cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84
Oct 13 14:08:27 standalone.localdomain sudo[138468]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:08:27 standalone.localdomain systemd[1]: Started Session c13 of User root.
Oct 13 14:08:27 standalone.localdomain sudo[138468]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:08:27 standalone.localdomain sudo[138468]: pam_unix(sudo:session): session closed for user root
Oct 13 14:08:27 standalone.localdomain systemd[1]: session-c13.scope: Deactivated successfully.
Oct 13 14:08:27 standalone.localdomain podman(openstack-manila-share-podman-0)[138523]: INFO: Creating drop-in dependency for "openstack-manila-share-podman-0" (cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84)
Oct 13 14:08:27 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:08:28 standalone.localdomain systemd-rc-local-generator[138591]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:08:28 standalone.localdomain systemd-sysv-generator[138600]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:08:28 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:08:28 standalone.localdomain python3[138537]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/manila.md5sum follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:08:28 standalone.localdomain python3[138616]: ansible-tripleo_diff_exec Invoked with command=/var/lib/container-config-scripts/pacemaker_restart_bundle.sh manila_share openstack-manila-share openstack-manila-share _ Started state_file=/var/lib/config-data/puppet-generated/manila.md5sum state_file_suffix=.previous_run environment={'TRIPLEO_MINOR_UPDATE': '', 'TRIPLEO_HA_WRAPPER_RESOURCE_EXISTS': 'False'} return_codes=[0]
Oct 13 14:08:28 standalone.localdomain podman[138618]: 2025-10-13 14:08:28.571606762 +0000 UTC m=+0.108756253 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, vcs-type=git, description=Red Hat OpenStack Platform 17.1 manila-share, vendor=Red Hat, Inc., com.redhat.component=openstack-manila-share-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:22:36, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, io.buildah.version=1.33.12, name=rhosp17/openstack-manila-share, tcib_managed=true, release=1)
Oct 13 14:08:28 standalone.localdomain podman[138618]: 2025-10-13 14:08:28.604992752 +0000 UTC m=+0.142142243 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, description=Red Hat OpenStack Platform 17.1 manila-share, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, tcib_managed=true, com.redhat.component=openstack-manila-share-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, batch=17.1_20250721.1, build-date=2025-07-21T15:22:36, name=rhosp17/openstack-manila-share, vcs-type=git, io.openshift.expose-services=, version=17.1.9, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, release=1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:08:28 standalone.localdomain pcmkrestart[138661]: Initial deployment, skipping the restart of openstack-manila-share
Oct 13 14:08:28 standalone.localdomain podman[138650]: 2025-10-13 14:08:28.719625213 +0000 UTC m=+0.090853236 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, com.redhat.component=openstack-manila-share-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, name=rhosp17/openstack-manila-share, vcs-type=git, description=Red Hat OpenStack Platform 17.1 manila-share, build-date=2025-07-21T15:22:36, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1)
Oct 13 14:08:28 standalone.localdomain podman[138650]: 2025-10-13 14:08:28.747188065 +0000 UTC m=+0.118416078 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, tcib_managed=true, com.redhat.component=openstack-manila-share-container, build-date=2025-07-21T15:22:36, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-manila-share, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, description=Red Hat OpenStack Platform 17.1 manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-share, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 14:08:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Oct 13 14:08:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3923363892' entity='client.manila' cmd={"prefix": "version", "format": "json"} : dispatch
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14284 -' entity='client.manila' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 13 14:08:28 standalone.localdomain ceph-mgr[29999]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 13 14:08:28 standalone.localdomain ceph-mon[29756]: pgmap v1000: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:28 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3923363892' entity='client.manila' cmd={"prefix": "version", "format": "json"} : dispatch
Oct 13 14:08:28 standalone.localdomain podman(openstack-manila-share-podman-0)[138712]: NOTICE: Container openstack-manila-share-podman-0  started successfully
Oct 13 14:08:28 standalone.localdomain pacemaker-controld[57911]:  notice: Result of start operation for openstack-manila-share-podman-0 on standalone: ok
Oct 13 14:08:28 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating monitor operation openstack-manila-share-podman-0_monitor_60000 locally on standalone
Oct 13 14:08:28 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of monitor operation for openstack-manila-share-podman-0 on standalone
Oct 13 14:08:28 standalone.localdomain podman[138718]: 2025-10-13 14:08:28.89503815 +0000 UTC m=+0.049438530 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, vcs-type=git, description=Red Hat OpenStack Platform 17.1 manila-share, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, com.redhat.component=openstack-manila-share-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-manila-share, summary=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T15:22:36, release=1, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 14:08:28 standalone.localdomain python3[138694]: ansible-stat Invoked with path=/tmp/tripleo_ha_image_openstack-manila-share follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:08:28 standalone.localdomain podman[138718]: 2025-10-13 14:08:28.92285166 +0000 UTC m=+0.077252020 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, architecture=x86_64, build-date=2025-07-21T15:22:36, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 manila-share, io.buildah.version=1.33.12, release=1, version=17.1.9, name=rhosp17/openstack-manila-share, batch=17.1_20250721.1, com.redhat.component=openstack-manila-share-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, description=Red Hat OpenStack Platform 17.1 manila-share, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1)
Oct 13 14:08:28 standalone.localdomain pacemaker-controld[57911]:  notice: Result of monitor operation for openstack-manila-share-podman-0 on standalone: ok
Oct 13 14:08:28 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 62 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-62.bz2): Complete
Oct 13 14:08:28 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:08:29 standalone.localdomain python3[138762]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec -u root swift_proxy /usr/local/bin/kolla_set_configs  _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1001: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:29 standalone.localdomain podman[138763]: 2025-10-13 14:08:29.709811935 +0000 UTC m=+0.109728762 container exec 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, release=1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-swift-proxy-server, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-swift-proxy-server-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., container_name=swift_proxy, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 14:08:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:08:29 standalone.localdomain ceph-mon[29756]: from='client.14284 -' entity='client.manila' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 13 14:08:29 standalone.localdomain podman[138763]: 2025-10-13 14:08:29.815931937 +0000 UTC m=+0.215848734 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-proxy-server, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, release=1, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77)
Oct 13 14:08:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:08:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:08:30 standalone.localdomain systemd[1]: tmp-crun.Kt4Db3.mount: Deactivated successfully.
Oct 13 14:08:30 standalone.localdomain podman[138804]: 2025-10-13 14:08:30.009829549 +0000 UTC m=+0.089844525 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.33.12, build-date=2025-07-21T16:05:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, tcib_managed=true, distribution-scope=public, release=1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.component=openstack-nova-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api, name=rhosp17/openstack-nova-api, vcs-type=git, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, maintainer=OpenStack TripleO Team)
Oct 13 14:08:30 standalone.localdomain podman[138824]: 2025-10-13 14:08:30.068992425 +0000 UTC m=+0.062902171 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, release=1, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-keystone, version=17.1.9, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=0693142a4093f932157b8019660e85aa608befc8, container_name=keystone_cron, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:08:30 standalone.localdomain podman[138804]: 2025-10-13 14:08:30.079115315 +0000 UTC m=+0.159130271 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-nova-api-container, version=17.1.9, release=1, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, build-date=2025-07-21T16:05:11, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., container_name=nova_api, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible)
Oct 13 14:08:30 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:08:30 standalone.localdomain podman[138824]: 2025-10-13 14:08:30.130008669 +0000 UTC m=+0.123918415 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, container_name=keystone_cron, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1, vcs-type=git, config_id=tripleo_step3, distribution-scope=public, name=rhosp17/openstack-keystone, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:18, managed_by=tripleo_ansible)
Oct 13 14:08:30 standalone.localdomain python3[138820]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec -u root swift_account_auditor /usr/local/bin/kolla_set_configs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:30 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:08:30 standalone.localdomain python3[138867]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec -u root swift_account_reaper /usr/local/bin/kolla_set_configs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:30 standalone.localdomain podman[138868]: 2025-10-13 14:08:30.585430829 +0000 UTC m=+0.118801580 container exec 42a0e28a14f5b02bb473af4947fa59b3f287b8a8aefdb52b635575d15fef57da (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_reaper, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, tcib_managed=true, io.openshift.expose-services=, version=17.1.9, container_name=swift_account_reaper, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, release=1, vcs-type=git, architecture=x86_64, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-account-container, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift:z', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:08:30 standalone.localdomain podman[138868]: 2025-10-13 14:08:30.692895811 +0000 UTC m=+0.226266602 container exec_died 42a0e28a14f5b02bb473af4947fa59b3f287b8a8aefdb52b635575d15fef57da (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_reaper, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, version=17.1.9, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., container_name=swift_account_reaper, summary=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift:z', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 14:08:30 standalone.localdomain ceph-mon[29756]: pgmap v1001: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:30 standalone.localdomain python3[138899]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec -u root swift_account_replicator /usr/local/bin/kolla_set_configs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:31 standalone.localdomain python3[138922]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec -u root swift_account_server /usr/local/bin/kolla_set_configs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:31 standalone.localdomain podman[138923]: 2025-10-13 14:08:31.289580376 +0000 UTC m=+0.079694915 container exec dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, batch=17.1_20250721.1, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, 
com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:08:31 standalone.localdomain podman[138923]: 2025-10-13 14:08:31.37419985 +0000 UTC m=+0.164314479 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, release=1, architecture=x86_64, build-date=2025-07-21T16:11:22, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., 
io.openshift.expose-services=, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:08:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1002: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:31 standalone.localdomain python3[138955]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec -u root swift_container_auditor /usr/local/bin/kolla_set_configs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:31 standalone.localdomain ceph-mon[29756]: pgmap v1002: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:31 standalone.localdomain python3[138970]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec -u root swift_container_replicator /usr/local/bin/kolla_set_configs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:32 standalone.localdomain python3[138991]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec -u root swift_container_server /usr/local/bin/kolla_set_configs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:08:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:08:32 standalone.localdomain podman[138992]: 2025-10-13 14:08:32.368178179 +0000 UTC m=+0.116097417 container exec ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_id=tripleo_step4, version=17.1.9, release=1, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, managed_by=tripleo_ansible, container_name=swift_container_server, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public)
Oct 13 14:08:32 standalone.localdomain systemd[1]: tmp-crun.TftUP6.mount: Deactivated successfully.
Oct 13 14:08:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:08:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:08:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:08:32 standalone.localdomain podman[139007]: 2025-10-13 14:08:32.452546585 +0000 UTC m=+0.096925460 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, distribution-scope=public, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, container_name=nova_api_cron, vcs-type=git, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.component=openstack-nova-api-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-api, release=1, name=rhosp17/openstack-nova-api, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12)
Oct 13 14:08:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:08:32 standalone.localdomain podman[138992]: 2025-10-13 14:08:32.481016736 +0000 UTC m=+0.228936034 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, build-date=2025-07-21T15:54:32, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, vendor=Red Hat, Inc.)
Oct 13 14:08:32 standalone.localdomain podman[139007]: 2025-10-13 14:08:32.488822633 +0000 UTC m=+0.133201478 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-api-container, 
io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T16:05:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, tcib_managed=true, distribution-scope=public, vcs-type=git)
Oct 13 14:08:32 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:08:32 standalone.localdomain podman[139006]: 2025-10-13 14:08:32.542463382 +0000 UTC m=+0.191497480 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, version=17.1.9, name=rhosp17/openstack-barbican-keystone-listener, container_name=barbican_keystone_listener, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-keystone-listener-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, batch=17.1_20250721.1, 
summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, build-date=2025-07-21T16:18:19, managed_by=tripleo_ansible, architecture=x86_64)
Oct 13 14:08:32 standalone.localdomain podman[139060]: 2025-10-13 14:08:32.607243291 +0000 UTC m=+0.139397479 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, build-date=2025-07-21T15:36:22, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, com.redhat.component=openstack-barbican-worker-container, io.buildah.version=1.33.12, tcib_managed=true, name=rhosp17/openstack-barbican-worker, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, container_name=barbican_worker)
Oct 13 14:08:32 standalone.localdomain podman[139040]: 2025-10-13 14:08:32.616834943 +0000 UTC m=+0.165502215 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, container_name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, build-date=2025-07-21T16:28:54, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Oct 13 14:08:32 standalone.localdomain podman[139041]: 2025-10-13 14:08:32.658279529 +0000 UTC m=+0.200597498 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, vcs-type=git, build-date=2025-07-21T15:22:44, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, tcib_managed=true, name=rhosp17/openstack-barbican-api, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, 
vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, container_name=barbican_api, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, com.redhat.component=openstack-barbican-api-container)
Oct 13 14:08:32 standalone.localdomain podman[139060]: 2025-10-13 14:08:32.66319082 +0000 UTC m=+0.195345008 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, config_id=tripleo_step3, container_name=barbican_worker, vcs-type=git, build-date=2025-07-21T15:36:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.component=openstack-barbican-worker-container, name=rhosp17/openstack-barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible)
Oct 13 14:08:32 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:08:32 standalone.localdomain podman[139041]: 2025-10-13 14:08:32.689970337 +0000 UTC m=+0.232288316 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, io.openshift.expose-services=, container_name=barbican_api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-api-container, 
name=rhosp17/openstack-barbican-api, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed)
Oct 13 14:08:32 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:08:32 standalone.localdomain podman[139040]: 2025-10-13 14:08:32.699988664 +0000 UTC m=+0.248655926 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, container_name=neutron_dhcp, io.openshift.expose-services=, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, version=17.1.9, build-date=2025-07-21T16:28:54, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', 
'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-neutron-dhcp-agent-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:08:32 standalone.localdomain python3[139110]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec -u root swift_container_updater /usr/local/bin/kolla_set_configs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:32 standalone.localdomain podman[139043]: 2025-10-13 14:08:32.708137942 +0000 UTC m=+0.248570103 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, container_name=neutron_sriov_agent, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-sriov-agent-container, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, build-date=2025-07-21T16:03:34, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, io.buildah.version=1.33.12)
Oct 13 14:08:32 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:08:32 standalone.localdomain podman[139006]: 2025-10-13 14:08:32.740675026 +0000 UTC m=+0.389709144 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-barbican-keystone-listener-container, io.openshift.expose-services=, name=rhosp17/openstack-barbican-keystone-listener, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
barbican-keystone-listener, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_id=tripleo_step3, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, build-date=2025-07-21T16:18:19, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, container_name=barbican_keystone_listener)
Oct 13 14:08:32 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:08:32 standalone.localdomain podman[139157]: 2025-10-13 14:08:32.822122083 +0000 UTC m=+0.104105170 container exec 59d24ac6b1e68a9d9f38a32622f37532e6c5c6b98954ed9b63a3495aea9c83c8 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_updater, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., container_name=swift_container_updater, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', 
'/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:08:32 standalone.localdomain podman[139043]: 2025-10-13 14:08:32.874822143 +0000 UTC m=+0.415254404 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-sriov-agent-container, build-date=2025-07-21T16:03:34, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, name=rhosp17/openstack-neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.33.12, container_name=neutron_sriov_agent, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:08:32 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:08:32 standalone.localdomain podman[139157]: 2025-10-13 14:08:32.911983598 +0000 UTC m=+0.193966745 container exec_died 59d24ac6b1e68a9d9f38a32622f37532e6c5c6b98954ed9b63a3495aea9c83c8 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_updater, com.redhat.component=openstack-swift-container-container, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.description=Red Hat OpenStack 
Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=swift_container_updater, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-swift-container)
Oct 13 14:08:33 standalone.localdomain python3[139205]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec -u root swift_object_auditor /usr/local/bin/kolla_set_configs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:33 standalone.localdomain runuser[139224]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:08:33 standalone.localdomain python3[139219]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec -u root swift_object_expirer /usr/local/bin/kolla_set_configs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1003: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:33 standalone.localdomain podman[139269]: 2025-10-13 14:08:33.649622878 +0000 UTC m=+0.143443473 container exec e424aa3074ca6a84ef71baa8ad166a3f305c806ab6c804d6946a55f18e820802 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_object_expirer, vcs-type=git, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vendor=Red Hat, Inc., container_name=swift_object_expirer, tcib_managed=true, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, 
summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, io.buildah.version=1.33.12, config_id=tripleo_step4, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:08:33 standalone.localdomain podman[139269]: 2025-10-13 14:08:33.75708924 +0000 UTC m=+0.250909785 container exec_died e424aa3074ca6a84ef71baa8ad166a3f305c806ab6c804d6946a55f18e820802 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_object_expirer, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_expirer, version=17.1.9, architecture=x86_64, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible)
Oct 13 14:08:34 standalone.localdomain python3[139299]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec -u root swift_object_replicator /usr/local/bin/kolla_set_configs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:34 standalone.localdomain runuser[139224]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:08:34 standalone.localdomain python3[139335]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec -u root swift_object_server /usr/local/bin/kolla_set_configs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:34 standalone.localdomain runuser[139336]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:08:34 standalone.localdomain podman[139344]: 2025-10-13 14:08:34.397700336 +0000 UTC m=+0.113584411 container exec 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=)
Oct 13 14:08:34 standalone.localdomain podman[139344]: 2025-10-13 14:08:34.501004531 +0000 UTC m=+0.216888586 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, container_name=swift_object_server, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 14:08:34 standalone.localdomain ceph-mon[29756]: pgmap v1003: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:34 standalone.localdomain python3[139412]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec -u root swift_object_updater /usr/local/bin/kolla_set_configs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:08:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:08:34 standalone.localdomain podman[139413]: 2025-10-13 14:08:34.903845274 +0000 UTC m=+0.149837227 container exec 47c57a4b5f5ee71cc5c313caaede512007071e00b14223567402343b3ae9a278 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_updater, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., container_name=swift_object_updater, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 14:08:34 standalone.localdomain podman[139413]: 2025-10-13 14:08:34.976799842 +0000 UTC m=+0.222791795 container exec_died 47c57a4b5f5ee71cc5c313caaede512007071e00b14223567402343b3ae9a278 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_updater, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, version=17.1.9, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T14:56:28, container_name=swift_object_updater, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public)
Oct 13 14:08:35 standalone.localdomain runuser[139336]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:08:35 standalone.localdomain runuser[139458]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:08:35 standalone.localdomain python3[139479]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:08:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1004: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:35 standalone.localdomain runuser[139458]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:08:36 standalone.localdomain ansible-async_wrapper.py[139615]: Invoked with 498832324596 3600 /root/.ansible/tmp/ansible-tmp-1760364516.0282023-139603-209624106228897/AnsiballZ_command.py _
Oct 13 14:08:36 standalone.localdomain ansible-async_wrapper.py[139700]: Starting module and watcher
Oct 13 14:08:36 standalone.localdomain ansible-async_wrapper.py[139700]: Start watching 139701 (3600)
Oct 13 14:08:36 standalone.localdomain ansible-async_wrapper.py[139701]: Start module (139701)
Oct 13 14:08:36 standalone.localdomain ansible-async_wrapper.py[139615]: Return async_wrapper task started.
Oct 13 14:08:36 standalone.localdomain python3[139706]: ansible-ansible.legacy.async_status Invoked with jid=498832324596.139615 mode=status _async_dir=/tmp/.ansible_async
Oct 13 14:08:36 standalone.localdomain ceph-mon[29756]: pgmap v1004: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1005: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:08:38 standalone.localdomain podman[139871]: 2025-10-13 14:08:38.413187929 +0000 UTC m=+0.092423934 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, tcib_managed=true, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:18:24, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, summary=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-cinder-backup-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-backup, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-type=git)
Oct 13 14:08:38 standalone.localdomain podman[139871]: 2025-10-13 14:08:38.441199464 +0000 UTC m=+0.120435459 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-type=git, com.redhat.component=openstack-cinder-backup-container, maintainer=OpenStack TripleO Team, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:18:24, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cinder-backup, io.buildah.version=1.33.12, name=rhosp17/openstack-cinder-backup)
Oct 13 14:08:38 standalone.localdomain podman[139889]: 2025-10-13 14:08:38.504326083 +0000 UTC m=+0.082815261 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, container_name=ovn_cluster_northd, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-northd, io.openshift.expose-services=, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-northd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04, config_id=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team)
Oct 13 14:08:38 standalone.localdomain podman[139889]: 2025-10-13 14:08:38.539179677 +0000 UTC m=+0.117668845 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, tcib_managed=true, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, config_id=ovn_cluster_northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_cluster_northd, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, name=rhosp17/openstack-ovn-northd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T13:30:04, com.redhat.component=openstack-ovn-northd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, maintainer=OpenStack TripleO Team)
Oct 13 14:08:38 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:08:38 standalone.localdomain ceph-mon[29756]: pgmap v1005: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1006: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:08:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:08:39 standalone.localdomain podman[139973]: 2025-10-13 14:08:39.818496091 +0000 UTC m=+0.078701385 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, config_id=tripleo_step2, release=1, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, vcs-type=git, build-date=2025-07-21T12:58:45, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, container_name=clustercheck, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=)
Oct 13 14:08:39 standalone.localdomain podman[139973]: 2025-10-13 14:08:39.869188179 +0000 UTC m=+0.129393513 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, vcs-type=git, container_name=clustercheck, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, config_id=tripleo_step2)
Oct 13 14:08:39 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:08:40 standalone.localdomain ceph-mon[29756]: pgmap v1006: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:41 standalone.localdomain ansible-async_wrapper.py[139700]: 139701 still running (3600)
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]:    (file: /etc/puppet/hiera.yaml)
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]: Warning: Undefined variable '::deploy_config_name';
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]:    (file & line not available)
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]:    (file & line not available)
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Oct 13 14:08:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1007: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:41 standalone.localdomain ceph-mon[29756]: pgmap v1007: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 6]
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]:                     with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 6]
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]:                     with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 6]
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]:                     with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 6]
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]:                     with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 6]
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]:                     with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 6]
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Oct 13 14:08:41 standalone.localdomain puppet-user[139709]: Notice: Compiled catalog for standalone.localdomain in environment production in 0.74 seconds
Oct 13 14:08:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1008: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:44 standalone.localdomain ceph-mon[29756]: pgmap v1008: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:08:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1009: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:46 standalone.localdomain ansible-async_wrapper.py[139700]: 139701 still running (3595)
Oct 13 14:08:46 standalone.localdomain runuser[140330]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:08:46 standalone.localdomain puppet-user[139709]: Notice: /Stage[main]/Pacemaker::Resource_defaults/Pcmk_resource_default[resource-stickiness]/ensure: created
Oct 13 14:08:46 standalone.localdomain python3[140378]: ansible-ansible.legacy.async_status Invoked with jid=498832324596.139615 mode=status _async_dir=/tmp/.ansible_async
Oct 13 14:08:46 standalone.localdomain ceph-mon[29756]: pgmap v1009: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:46 standalone.localdomain sudo[140381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:08:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:08:46 standalone.localdomain sudo[140381]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:08:46 standalone.localdomain sudo[140381]: pam_unix(sudo:session): session closed for user root
Oct 13 14:08:47 standalone.localdomain sudo[140406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:08:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:08:47 standalone.localdomain runuser[140330]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:08:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:08:47 standalone.localdomain sudo[140406]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:08:47 standalone.localdomain podman[140399]: 2025-10-13 14:08:47.081543022 +0000 UTC m=+0.107822904 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T15:58:55, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-api, vcs-type=git, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, container_name=cinder_api_cron, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, version=17.1.9, architecture=x86_64)
Oct 13 14:08:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:08:47 standalone.localdomain podman[140399]: 2025-10-13 14:08:47.112792227 +0000 UTC m=+0.139072079 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, tcib_managed=true, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, com.redhat.component=openstack-cinder-api-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cinder-api, container_name=cinder_api_cron, distribution-scope=public, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, name=rhosp17/openstack-cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, build-date=2025-07-21T15:58:55, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 14:08:47 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:08:47 standalone.localdomain podman[140431]: 2025-10-13 14:08:47.160043129 +0000 UTC m=+0.096262410 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, release=1, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, name=rhosp17/openstack-keystone, description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T13:27:18, container_name=keystone, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible)
Oct 13 14:08:47 standalone.localdomain runuser[140521]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:08:47 standalone.localdomain podman[140465]: 2025-10-13 14:08:47.233630178 +0000 UTC m=+0.129194167 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.9, container_name=iscsid, vcs-type=git, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, release=1)
Oct 13 14:08:47 standalone.localdomain podman[140434]: 2025-10-13 14:08:47.25892699 +0000 UTC m=+0.178654138 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, container_name=cinder_scheduler, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step4, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-cinder-scheduler-container, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, build-date=2025-07-21T16:10:12, name=rhosp17/openstack-cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1)
Oct 13 14:08:47 standalone.localdomain podman[140465]: 2025-10-13 14:08:47.266166001 +0000 UTC m=+0.161729970 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, tcib_managed=true, release=1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.openshift.expose-services=)
Oct 13 14:08:47 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:08:47 standalone.localdomain podman[140431]: 2025-10-13 14:08:47.288019388 +0000 UTC m=+0.224238679 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.buildah.version=1.33.12, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, tcib_managed=true, container_name=keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, architecture=x86_64)
Oct 13 14:08:47 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:08:47 standalone.localdomain podman[140434]: 2025-10-13 14:08:47.306032579 +0000 UTC m=+0.225759707 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, tcib_managed=true, distribution-scope=public, release=1, vcs-type=git, build-date=2025-07-21T16:10:12, vendor=Red Hat, Inc., com.redhat.component=openstack-cinder-scheduler-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.expose-services=, name=rhosp17/openstack-cinder-scheduler, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=cinder_scheduler)
Oct 13 14:08:47 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:08:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1010: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:47 standalone.localdomain sudo[140406]: pam_unix(sudo:session): session closed for user root
Oct 13 14:08:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:08:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:08:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:08:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:08:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:08:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:08:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:08:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:08:47 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 57be1fe7-3eda-4d9e-8806-52a27f71bf67 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:08:47 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 57be1fe7-3eda-4d9e-8806-52a27f71bf67 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:08:47 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 57be1fe7-3eda-4d9e-8806-52a27f71bf67 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:08:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:08:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:08:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:08:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:08:47 standalone.localdomain sudo[140644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:08:47 standalone.localdomain sudo[140644]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:08:47 standalone.localdomain sudo[140644]: pam_unix(sudo:session): session closed for user root
Oct 13 14:08:47 standalone.localdomain runuser[140521]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:08:47 standalone.localdomain runuser[140710]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:08:48 standalone.localdomain runuser[140710]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:08:48 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 46 completed events
Oct 13 14:08:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:08:48 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:08:48 standalone.localdomain ceph-mon[29756]: pgmap v1010: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:48 standalone.localdomain puppet-user[139709]: Notice: /Stage[main]/Pacemaker::Resource_op_defaults/Pcmk_resource_op_default[bundle]/ensure: created
Oct 13 14:08:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1011: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:08:49 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:08:49 standalone.localdomain ceph-mon[29756]: pgmap v1011: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]: Deprecation Warning: This command is deprecated and will be removed. Please use 'pcs property config' instead.
Oct 13 14:08:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:08:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:08:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:08:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:08:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:08:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:08:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:08:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:08:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:08:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:08:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:08:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:08:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]: Notice: Applied catalog in 8.49 seconds
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]: Application:
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:    Initial environment: production
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:    Converged environment: production
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:          Run mode: user
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]: Changes:
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:             Total: 2
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]: Events:
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:           Success: 2
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:             Total: 2
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]: Resources:
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:           Changed: 2
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:       Out of sync: 2
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:             Total: 37
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]: Time:
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:          Schedule: 0.00
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:         File line: 0.00
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:           Package: 0.00
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:            Augeas: 0.01
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:              User: 0.01
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:              File: 0.08
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:           Service: 0.22
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:    Config retrieval: 0.90
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:     Pcmk property: 1.55
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:          Last run: 1760364530
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:              Exec: 2.02
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:    Pcmk resource default: 2.19
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:    Pcmk resource op default: 2.20
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:    Transaction evaluation: 8.46
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:    Catalog application: 8.49
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:        Filebucket: 0.00
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:             Total: 8.50
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]: Version:
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:            Config: 1760364521
Oct 13 14:08:50 standalone.localdomain puppet-user[139709]:            Puppet: 7.10.0
Oct 13 14:08:50 standalone.localdomain podman[140918]: 2025-10-13 14:08:50.64993348 +0000 UTC m=+0.091645591 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, container_name=logrotate_crond, tcib_managed=true, vcs-type=git, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public)
Oct 13 14:08:50 standalone.localdomain podman[140918]: 2025-10-13 14:08:50.659701478 +0000 UTC m=+0.101413599 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, container_name=logrotate_crond, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, io.openshift.expose-services=, release=1, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1)
Oct 13 14:08:50 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:08:50 standalone.localdomain ansible-async_wrapper.py[139701]: Module complete (139701)
Oct 13 14:08:50 standalone.localdomain podman[140863]: 2025-10-13 14:08:50.695681828 +0000 UTC m=+0.167055664 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, container_name=memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, config_id=tripleo_step1, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-memcached, build-date=2025-07-21T12:58:43, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-memcached-container, batch=17.1_20250721.1, tcib_managed=true, release=1)
Oct 13 14:08:50 standalone.localdomain podman[140857]: 2025-10-13 14:08:50.663124403 +0000 UTC m=+0.131814527 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, config_id=tripleo_step4, name=rhosp17/openstack-nova-scheduler, container_name=nova_scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T16:02:54, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1)
Oct 13 14:08:50 standalone.localdomain podman[140907]: 2025-10-13 14:08:50.71704857 +0000 UTC m=+0.164425234 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=heat_api, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T15:56:26, distribution-scope=public, name=rhosp17/openstack-heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-heat-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530)
Oct 13 14:08:50 standalone.localdomain podman[140910]: 2025-10-13 14:08:50.748624924 +0000 UTC m=+0.196721779 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-07-21T15:44:03, name=rhosp17/openstack-neutron-server, io.buildah.version=1.33.12, vcs-type=git, container_name=neutron_api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, summary=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-server-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., release=1)
Oct 13 14:08:50 standalone.localdomain podman[140854]: 2025-10-13 14:08:50.759703072 +0000 UTC m=+0.242855328 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, build-date=2025-07-21T15:44:11, distribution-scope=public, container_name=heat_engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-engine-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vcs-type=git, version=17.1.9, config_id=tripleo_step4)
Oct 13 14:08:50 standalone.localdomain podman[140907]: 2025-10-13 14:08:50.764999494 +0000 UTC m=+0.212376168 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-heat-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., vcs-type=git)
Oct 13 14:08:50 standalone.localdomain podman[140863]: 2025-10-13 14:08:50.790954787 +0000 UTC m=+0.262328653 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, name=rhosp17/openstack-memcached, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-memcached-container, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, summary=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, container_name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, build-date=2025-07-21T12:58:43, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1)
Oct 13 14:08:50 standalone.localdomain podman[140857]: 2025-10-13 14:08:50.795933249 +0000 UTC m=+0.264623373 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, config_id=tripleo_step4, name=rhosp17/openstack-nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, build-date=2025-07-21T16:02:54, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, container_name=nova_scheduler, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-nova-scheduler-container, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:08:50 standalone.localdomain podman[140850]: 2025-10-13 14:08:50.802220962 +0000 UTC m=+0.287463062 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, description=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.component=openstack-manila-scheduler-container, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, container_name=manila_scheduler, version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-manila-scheduler, build-date=2025-07-21T15:56:28, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64)
Oct 13 14:08:50 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:08:50 standalone.localdomain podman[140854]: 2025-10-13 14:08:50.806796201 +0000 UTC m=+0.289948487 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-07-21T15:44:11, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, container_name=heat_engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, batch=17.1_20250721.1, com.redhat.component=openstack-heat-engine-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.buildah.version=1.33.12, tcib_managed=true, release=1, vcs-type=git, version=17.1.9)
Oct 13 14:08:50 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:08:50 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:08:50 standalone.localdomain podman[140849]: 2025-10-13 14:08:50.785645485 +0000 UTC m=+0.278138377 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, container_name=heat_api_cfn, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, build-date=2025-07-21T14:49:55, com.redhat.component=openstack-heat-api-cfn-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-heat-api-cfn, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:08:50 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:08:50 standalone.localdomain podman[140850]: 2025-10-13 14:08:50.851039372 +0000 UTC m=+0.336281472 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, container_name=manila_scheduler, config_id=tripleo_step4, name=rhosp17/openstack-manila-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:56:28, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 manila-scheduler, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, version=17.1.9, com.redhat.component=openstack-manila-scheduler-container, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, release=1)
Oct 13 14:08:50 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:08:50 standalone.localdomain podman[140872]: 2025-10-13 14:08:50.864696909 +0000 UTC m=+0.330293149 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, release=1, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-api-container, managed_by=tripleo_ansible, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, batch=17.1_20250721.1, tcib_managed=true, container_name=cinder_api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T15:58:55, io.openshift.expose-services=)
Oct 13 14:08:50 standalone.localdomain podman[140849]: 2025-10-13 14:08:50.869803736 +0000 UTC m=+0.362296658 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-heat-api-cfn-container, maintainer=OpenStack TripleO Team, release=1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.buildah.version=1.33.12, tcib_managed=true, container_name=heat_api_cfn, version=17.1.9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, build-date=2025-07-21T14:49:55, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1)
Oct 13 14:08:50 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:08:50 standalone.localdomain podman[140890]: 2025-10-13 14:08:50.853822777 +0000 UTC m=+0.305255324 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:58:15, com.redhat.component=openstack-horizon-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 horizon, config_id=tripleo_step3, release=1, container_name=horizon, io.buildah.version=1.33.12, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, maintainer=OpenStack TripleO Team)
Oct 13 14:08:50 standalone.localdomain podman[140910]: 2025-10-13 14:08:50.910625802 +0000 UTC m=+0.358722667 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-server, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, summary=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-server, 
config_id=tripleo_step4, container_name=neutron_api, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, build-date=2025-07-21T15:44:03, version=17.1.9, release=1, com.redhat.component=openstack-neutron-server-container, vcs-type=git, io.openshift.expose-services=)
Oct 13 14:08:50 standalone.localdomain podman[140911]: 2025-10-13 14:08:50.918348188 +0000 UTC m=+0.356177229 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-conductor, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-nova-conductor-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, container_name=nova_conductor, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, build-date=2025-07-21T15:44:17, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, description=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:08:50 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:08:50 standalone.localdomain podman[140874]: 2025-10-13 14:08:50.725350063 +0000 UTC m=+0.187624971 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-manila-api-container, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api, build-date=2025-07-21T16:06:43, batch=17.1_20250721.1, container_name=manila_api_cron, summary=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 manila-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, release=1, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git)
Oct 13 14:08:50 standalone.localdomain podman[140890]: 2025-10-13 14:08:50.937974637 +0000 UTC m=+0.389407234 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T13:58:15, description=Red Hat OpenStack Platform 17.1 horizon, com.redhat.component=openstack-horizon-container, container_name=horizon, io.openshift.expose-services=, name=rhosp17/openstack-horizon, architecture=x86_64, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, 
config_id=tripleo_step3, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, summary=Red Hat OpenStack Platform 17.1 horizon, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 14:08:50 standalone.localdomain podman[140911]: 2025-10-13 14:08:50.946993883 +0000 UTC m=+0.384822914 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-conductor, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, architecture=x86_64, build-date=2025-07-21T15:44:17, com.redhat.component=openstack-nova-conductor-container, container_name=nova_conductor, io.buildah.version=1.33.12, 
vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_id=tripleo_step4)
Oct 13 14:08:50 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:08:50 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:08:50 standalone.localdomain podman[140874]: 2025-10-13 14:08:50.961909589 +0000 UTC m=+0.424184517 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, com.redhat.component=openstack-manila-api-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=manila_api_cron, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-manila-api, architecture=x86_64, build-date=2025-07-21T16:06:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, release=1, summary=Red Hat OpenStack 
Platform 17.1 manila-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:08:50 standalone.localdomain podman[140872]: 2025-10-13 14:08:50.970403538 +0000 UTC m=+0.435999858 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git, config_id=tripleo_step4, container_name=cinder_api, description=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, build-date=2025-07-21T15:58:55, distribution-scope=public, tcib_managed=true, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, release=1, architecture=x86_64, com.redhat.component=openstack-cinder-api-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=)
Oct 13 14:08:50 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:08:50 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:08:51 standalone.localdomain podman[140877]: 2025-10-13 14:08:50.913047416 +0000 UTC m=+0.378364487 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, vcs-type=git, version=17.1.9, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, 
description=Red Hat OpenStack Platform 17.1 heat-api, release=1, build-date=2025-07-21T15:56:26, container_name=heat_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container)
Oct 13 14:08:51 standalone.localdomain podman[140877]: 2025-10-13 14:08:51.046203193 +0000 UTC m=+0.511520284 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, name=rhosp17/openstack-heat-api, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, build-date=2025-07-21T15:56:26, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, distribution-scope=public, release=1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, maintainer=OpenStack TripleO 
Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:08:51 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:08:51 standalone.localdomain ansible-async_wrapper.py[139700]: Done in kid B.
Oct 13 14:08:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1012: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:52 standalone.localdomain ceph-mon[29756]: pgmap v1012: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:08:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:08:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:08:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:08:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:08:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:08:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1013: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:08:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:08:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:08:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:08:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:08:53 standalone.localdomain podman[141169]: 2025-10-13 14:08:53.87490966 +0000 UTC m=+0.086375138 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, container_name=swift_account_server, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, 
batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-swift-account)
Oct 13 14:08:53 standalone.localdomain podman[141167]: 2025-10-13 14:08:53.924242767 +0000 UTC m=+0.139579844 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 
17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container)
Oct 13 14:08:53 standalone.localdomain podman[141166]: 2025-10-13 14:08:53.994558185 +0000 UTC m=+0.211659915 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:56:28, container_name=swift_object_server, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d)
Oct 13 14:08:54 standalone.localdomain podman[141168]: 2025-10-13 14:08:54.040928041 +0000 UTC m=+0.249820982 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 
swift-container, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, vendor=Red Hat, Inc.)
Oct 13 14:08:54 standalone.localdomain podman[141169]: 2025-10-13 14:08:54.071933438 +0000 UTC m=+0.283398926 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-account-container, release=1, batch=17.1_20250721.1, container_name=swift_account_server, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 14:08:54 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:08:54 standalone.localdomain podman[141170]: 2025-10-13 14:08:54.144621008 +0000 UTC m=+0.354411116 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.expose-services=, container_name=nova_vnc_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, com.redhat.component=openstack-nova-novncproxy-container, description=Red Hat OpenStack Platform 17.1 
nova-novncproxy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T15:24:10, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, vendor=Red Hat, Inc.)
Oct 13 14:08:54 standalone.localdomain podman[141166]: 2025-10-13 14:08:54.20004363 +0000 UTC m=+0.417145330 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, release=1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team)
Oct 13 14:08:54 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:08:54 standalone.localdomain podman[141168]: 2025-10-13 14:08:54.215519103 +0000 UTC m=+0.424412084 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-swift-container-container, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, release=1, build-date=2025-07-21T15:54:32, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:08:54 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:08:54 standalone.localdomain podman[141167]: 2025-10-13 14:08:54.319946943 +0000 UTC m=+0.535284000 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 13 14:08:54 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:08:54 standalone.localdomain podman[141170]: 2025-10-13 14:08:54.485990665 +0000 UTC m=+0.695780793 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, distribution-scope=public, container_name=nova_vnc_proxy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, build-date=2025-07-21T15:24:10, com.redhat.component=openstack-nova-novncproxy-container, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, name=rhosp17/openstack-nova-novncproxy, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, 
managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 13 14:08:54 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:08:54 standalone.localdomain ceph-mon[29756]: pgmap v1013: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:08:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1014: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:08:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:08:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:08:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:08:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:08:56 standalone.localdomain ceph-mon[29756]: pgmap v1014: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:56 standalone.localdomain podman[141495]: 2025-10-13 14:08:56.840627652 +0000 UTC m=+0.097723655 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:12, tcib_managed=true, name=rhosp17/openstack-placement-api, container_name=placement_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, release=1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-placement-api-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 placement-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:08:56 standalone.localdomain podman[141494]: 2025-10-13 14:08:56.855692152 +0000 UTC m=+0.113385753 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, container_name=glance_api_cron, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:08:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:08:56 standalone.localdomain podman[141494]: 2025-10-13 14:08:56.888324499 +0000 UTC m=+0.146018100 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, tcib_managed=true, build-date=2025-07-21T13:58:20, container_name=glance_api_cron, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack 
Platform 17.1 glance-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:08:56 standalone.localdomain podman[141496]: 2025-10-13 14:08:56.905163093 +0000 UTC m=+0.162703000 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_metadata, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red 
Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, build-date=2025-07-21T16:05:11, io.openshift.expose-services=, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:08:56 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:08:56 standalone.localdomain python3[141493]: ansible-ansible.legacy.async_status Invoked with jid=498832324596.139615 mode=status _async_dir=/tmp/.ansible_async
Oct 13 14:08:56 standalone.localdomain podman[141497]: 2025-10-13 14:08:56.942579936 +0000 UTC m=+0.199994309 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:08:56 standalone.localdomain podman[141498]: 2025-10-13 14:08:56.959589185 +0000 UTC m=+0.208207379 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, version=17.1.9, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:08:56 standalone.localdomain podman[141496]: 2025-10-13 14:08:56.9649664 +0000 UTC m=+0.222506317 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:05:11, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, com.redhat.component=openstack-nova-api-container, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, name=rhosp17/openstack-nova-api, config_id=tripleo_step4, container_name=nova_metadata, managed_by=tripleo_ansible, version=17.1.9)
Oct 13 14:08:56 standalone.localdomain podman[141498]: 2025-10-13 14:08:56.974511762 +0000 UTC m=+0.223129956 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, io.openshift.expose-services=, release=1, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, distribution-scope=public)
Oct 13 14:08:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:08:56 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:08:56 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:08:57 standalone.localdomain podman[141497]: 2025-10-13 14:08:57.014200854 +0000 UTC m=+0.271615197 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3)
Oct 13 14:08:57 standalone.localdomain podman[141495]: 2025-10-13 14:08:57.020380882 +0000 UTC m=+0.277476915 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=placement_api, summary=Red Hat OpenStack Platform 17.1 placement-api, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.component=openstack-placement-api-container, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:12, release=1, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 placement-api, name=rhosp17/openstack-placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api)
Oct 13 14:08:57 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:08:57 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:08:57 standalone.localdomain podman[141620]: 2025-10-13 14:08:57.112216497 +0000 UTC m=+0.120025747 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, architecture=x86_64, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, config_id=tripleo_step4, container_name=glance_api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, name=rhosp17/openstack-glance-api)
Oct 13 14:08:57 standalone.localdomain podman[141574]: 2025-10-13 14:08:57.155705006 +0000 UTC m=+0.275075043 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-glance-api, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, version=17.1.9, container_name=glance_api_internal, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container)
Oct 13 14:08:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:08:57 standalone.localdomain podman[141574]: 2025-10-13 14:08:57.351491165 +0000 UTC m=+0.470861212 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, container_name=glance_api_internal, config_id=tripleo_step4, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:20, distribution-scope=public, com.redhat.component=openstack-glance-api-container, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:08:57 standalone.localdomain podman[141620]: 2025-10-13 14:08:57.361159751 +0000 UTC m=+0.368968971 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, architecture=x86_64, tcib_managed=true, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=glance_api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20)
Oct 13 14:08:57 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:08:57 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:08:57 standalone.localdomain podman[141709]: 2025-10-13 14:08:57.443076173 +0000 UTC m=+0.086221665 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, 
tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T14:48:37, container_name=swift_proxy, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container)
Oct 13 14:08:57 standalone.localdomain python3[141710]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 14:08:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1015: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:57 standalone.localdomain podman[141709]: 2025-10-13 14:08:57.67023053 +0000 UTC m=+0.313376082 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T14:48:37, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
swift-proxy-server, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, container_name=swift_proxy, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., release=1, io.openshift.expose-services=)
Oct 13 14:08:57 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:08:57 standalone.localdomain python3[141744]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:08:58 standalone.localdomain python3[141810]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:08:58 standalone.localdomain python3[141855]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpnx38ufjj recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 14:08:58 standalone.localdomain python3[141861]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:08:58 standalone.localdomain ceph-mon[29756]: pgmap v1015: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:59 standalone.localdomain runuser[141938]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:08:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1016: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:08:59 standalone.localdomain python3[142000]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Oct 13 14:08:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:08:59 standalone.localdomain runuser[141938]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:09:00 standalone.localdomain runuser[142035]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:09:00 standalone.localdomain python3[142058]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:09:00 standalone.localdomain runuser[142035]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:09:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:09:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:09:00 standalone.localdomain runuser[142099]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:09:00 standalone.localdomain ceph-mon[29756]: pgmap v1016: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:00 standalone.localdomain podman[142111]: 2025-10-13 14:09:00.813993839 +0000 UTC m=+0.080656354 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, com.redhat.component=openstack-nova-api-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:05:11, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, name=rhosp17/openstack-nova-api, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, container_name=nova_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, distribution-scope=public)
Oct 13 14:09:00 standalone.localdomain systemd[1]: tmp-crun.M3uZ6k.mount: Deactivated successfully.
Oct 13 14:09:00 standalone.localdomain podman[142108]: 2025-10-13 14:09:00.870720492 +0000 UTC m=+0.136469709 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, name=rhosp17/openstack-keystone, build-date=2025-07-21T13:27:18, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-keystone-container, summary=Red Hat OpenStack Platform 17.1 keystone, release=1)
Oct 13 14:09:00 standalone.localdomain podman[142111]: 2025-10-13 14:09:00.872752453 +0000 UTC m=+0.139414978 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, batch=17.1_20250721.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, name=rhosp17/openstack-nova-api, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-nova-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:05:11, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, container_name=nova_api)
Oct 13 14:09:00 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:09:00 standalone.localdomain podman[142108]: 2025-10-13 14:09:00.953879561 +0000 UTC m=+0.219628808 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, architecture=x86_64, release=1, container_name=keystone_cron, description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, build-date=2025-07-21T13:27:18, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-keystone-container, distribution-scope=public)
Oct 13 14:09:00 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:09:00 standalone.localdomain python3[142201]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:09:01 standalone.localdomain podman[142213]: 2025-10-13 14:09:01.20951125 +0000 UTC m=+0.101543193 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, version=17.1.9, build-date=2025-07-21T12:58:45, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, io.buildah.version=1.33.12, distribution-scope=public, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1)
Oct 13 14:09:01 standalone.localdomain podman[142213]: 2025-10-13 14:09:01.240132455 +0000 UTC m=+0.132164378 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, batch=17.1_20250721.1, name=rhosp17/openstack-mariadb, maintainer=OpenStack TripleO Team, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, tcib_managed=true, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, release=1)
Oct 13 14:09:01 standalone.localdomain python3[142247]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:09:01 standalone.localdomain podman[142263]: 2025-10-13 14:09:01.427551538 +0000 UTC m=+0.097768676 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, vcs-type=git, distribution-scope=public, build-date=2025-07-21T13:08:11, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.component=openstack-haproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, architecture=x86_64, name=rhosp17/openstack-haproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, batch=17.1_20250721.1, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:09:01 standalone.localdomain podman[142263]: 2025-10-13 14:09:01.456178503 +0000 UTC m=+0.126395651 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:08:11, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-haproxy-container, vcs-type=git)
Oct 13 14:09:01 standalone.localdomain runuser[142099]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:09:01 standalone.localdomain podman[142301]: 2025-10-13 14:09:01.587280567 +0000 UTC m=+0.081386207 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, io.buildah.version=1.33.12, com.redhat.component=openstack-rabbitmq-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:09:01 standalone.localdomain python3[142292]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:09:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1017: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:01 standalone.localdomain podman[142301]: 2025-10-13 14:09:01.614996103 +0000 UTC m=+0.109101713 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, build-date=2025-07-21T13:08:05, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rabbitmq, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.buildah.version=1.33.12, vcs-type=git)
Oct 13 14:09:01 standalone.localdomain ceph-mon[29756]: pgmap v1017: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:01 standalone.localdomain python3[142344]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:09:02 standalone.localdomain python3[142356]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:09:02 standalone.localdomain python3[142369]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:09:02 standalone.localdomain python3[142373]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:09:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:09:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:09:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:09:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:09:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:09:02 standalone.localdomain podman[142392]: 2025-10-13 14:09:02.835198638 +0000 UTC m=+0.089237041 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, version=17.1.9, io.openshift.expose-services=, container_name=neutron_dhcp, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-dhcp-agent-container, tcib_managed=true, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, release=1, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, build-date=2025-07-21T16:28:54, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-dhcp-agent, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:09:02 standalone.localdomain systemd[1]: tmp-crun.3aDXD2.mount: Deactivated successfully.
Oct 13 14:09:02 standalone.localdomain podman[142392]: 2025-10-13 14:09:02.914472571 +0000 UTC m=+0.168510994 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T16:28:54, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.component=openstack-neutron-dhcp-agent-container, release=1, container_name=neutron_dhcp, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible)
Oct 13 14:09:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:09:02 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:09:02 standalone.localdomain podman[142379]: 2025-10-13 14:09:02.932299561 +0000 UTC m=+0.193846596 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-barbican-api, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-barbican-api-container, build-date=2025-07-21T15:22:44, vendor=Red Hat, Inc., container_name=barbican_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, release=1, distribution-scope=public, version=17.1.9, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:09:02 standalone.localdomain podman[142380]: 2025-10-13 14:09:02.89171956 +0000 UTC m=+0.153174872 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 barbican-worker, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-barbican-worker, maintainer=OpenStack TripleO Team, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, container_name=barbican_worker, batch=17.1_20250721.1, build-date=2025-07-21T15:36:22, io.openshift.expose-services=, config_id=tripleo_step3, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, com.redhat.component=openstack-barbican-worker-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9, vcs-type=git, io.buildah.version=1.33.12)
Oct 13 14:09:02 standalone.localdomain podman[142381]: 2025-10-13 14:09:02.983174909 +0000 UTC m=+0.238328717 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-nova-api-container, vcs-type=git, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-api, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:05:11, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, architecture=x86_64)
Oct 13 14:09:02 standalone.localdomain podman[142483]: 2025-10-13 14:09:02.995445327 +0000 UTC m=+0.062910590 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, build-date=2025-07-21T16:03:34, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, com.redhat.component=openstack-neutron-sriov-agent-container, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, batch=17.1_20250721.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, container_name=neutron_sriov_agent, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-sriov-agent, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:09:03 standalone.localdomain python3[142453]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:09:03 standalone.localdomain podman[142436]: 2025-10-13 14:09:02.911716476 +0000 UTC m=+0.070270447 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, container_name=barbican_keystone_listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-barbican-keystone-listener-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, build-date=2025-07-21T16:18:19, config_id=tripleo_step3, name=rhosp17/openstack-barbican-keystone-listener, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, tcib_managed=true, vcs-type=git)
Oct 13 14:09:03 standalone.localdomain podman[142381]: 2025-10-13 14:09:03.020807419 +0000 UTC m=+0.275961217 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, com.redhat.component=openstack-nova-api-container, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T16:05:11, 
vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:09:03 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:09:03 standalone.localdomain podman[142436]: 2025-10-13 14:09:03.045600013 +0000 UTC m=+0.204154024 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, tcib_managed=true, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, name=rhosp17/openstack-barbican-keystone-listener, container_name=barbican_keystone_listener, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, com.redhat.component=openstack-barbican-keystone-listener-container, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, distribution-scope=public, build-date=2025-07-21T16:18:19, release=1)
Oct 13 14:09:03 standalone.localdomain podman[142379]: 2025-10-13 14:09:03.064594089 +0000 UTC m=+0.326141124 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, version=17.1.9, build-date=2025-07-21T15:22:44, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-barbican-api, release=1, description=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, distribution-scope=public, com.redhat.component=openstack-barbican-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, vendor=Red Hat, Inc., batch=17.1_20250721.1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, container_name=barbican_api, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:09:03 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:09:03 standalone.localdomain podman[142380]: 2025-10-13 14:09:03.07697663 +0000 UTC m=+0.338431942 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., architecture=x86_64, release=1, tcib_managed=true, com.redhat.component=openstack-barbican-worker-container, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T15:36:22, name=rhosp17/openstack-barbican-worker, vcs-type=git, 
io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, container_name=barbican_worker, description=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, managed_by=tripleo_ansible)
Oct 13 14:09:03 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:09:03 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:09:03 standalone.localdomain podman[142483]: 2025-10-13 14:09:03.099103792 +0000 UTC m=+0.166569115 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, com.redhat.component=openstack-neutron-sriov-agent-container, vcs-type=git, build-date=2025-07-21T16:03:34, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-sriov-agent, release=1, 
vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:09:03 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:09:03 standalone.localdomain python3[142530]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:09:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1018: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:03 standalone.localdomain python3[142541]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:09:03 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:09:03 standalone.localdomain systemd-rc-local-generator[142566]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:09:03 standalone.localdomain systemd-sysv-generator[142569]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:09:03 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:09:04 standalone.localdomain python3[142599]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:09:04 standalone.localdomain ceph-mon[29756]: pgmap v1018: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:04 standalone.localdomain podman[142603]: 2025-10-13 14:09:04.682811675 +0000 UTC m=+0.096559177 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, version=17.1.9, tcib_managed=true, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, vcs-type=git, com.redhat.component=openstack-cinder-volume-container, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cinder-volume, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T16:13:39, description=Red Hat OpenStack Platform 17.1 cinder-volume, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, name=rhosp17/openstack-cinder-volume, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:09:04 standalone.localdomain podman[142603]: 2025-10-13 14:09:04.714006247 +0000 UTC m=+0.127753779 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, com.redhat.component=openstack-cinder-volume-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, name=rhosp17/openstack-cinder-volume, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T16:13:39, io.openshift.expose-services=, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, description=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cinder-volume, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:09:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:09:04 standalone.localdomain python3[142618]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:09:05 standalone.localdomain python3[142659]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:09:05 standalone.localdomain python3[142663]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:09:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1019: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:05 standalone.localdomain python3[142669]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:09:05 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:09:05 standalone.localdomain systemd-sysv-generator[142710]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:09:05 standalone.localdomain systemd-rc-local-generator[142705]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:09:05 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:09:06 standalone.localdomain systemd[1]: Starting Create netns directory...
Oct 13 14:09:06 standalone.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 14:09:06 standalone.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 14:09:06 standalone.localdomain systemd[1]: Finished Create netns directory.
Oct 13 14:09:06 standalone.localdomain ceph-mon[29756]: pgmap v1019: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:06 standalone.localdomain python3[142859]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Oct 13 14:09:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1020: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:09:08 standalone.localdomain ceph-mon[29756]: pgmap v1020: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:08 standalone.localdomain podman[143033]: 2025-10-13 14:09:08.827375528 +0000 UTC m=+0.092541193 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-northd, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, build-date=2025-07-21T13:30:04, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., architecture=x86_64, container_name=ovn_cluster_northd, config_id=ovn_cluster_northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-northd, 
description=Red Hat OpenStack Platform 17.1 ovn-northd, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, com.redhat.component=openstack-ovn-northd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, vcs-type=git)
Oct 13 14:09:08 standalone.localdomain podman[143033]: 2025-10-13 14:09:08.868995261 +0000 UTC m=+0.134160926 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, io.buildah.version=1.33.12, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, version=17.1.9, build-date=2025-07-21T13:30:04, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, architecture=x86_64, container_name=ovn_cluster_northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, 
io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-northd, distribution-scope=public, vendor=Red Hat, Inc., config_id=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-northd-container)
Oct 13 14:09:08 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:09:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1021: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:09:09 standalone.localdomain python3[143069]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Oct 13 14:09:10 standalone.localdomain podman[143117]: 2025-10-13 14:09:10.164947434 +0000 UTC m=+0.087699983 container create 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., vcs-type=git, release=1, version=17.1.9, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:09:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:09:10 standalone.localdomain systemd[1]: Started libpod-conmon-277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.scope.
Oct 13 14:09:10 standalone.localdomain podman[143117]: 2025-10-13 14:09:10.111383524 +0000 UTC m=+0.034136103 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Oct 13 14:09:10 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:09:10 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa059f493f94192e7dce82495f13d5d800e8608118e1cb05a72f419570640fd4/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:09:10 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa059f493f94192e7dce82495f13d5d800e8608118e1cb05a72f419570640fd4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:09:10 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa059f493f94192e7dce82495f13d5d800e8608118e1cb05a72f419570640fd4/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 14:09:10 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa059f493f94192e7dce82495f13d5d800e8608118e1cb05a72f419570640fd4/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 14:09:10 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa059f493f94192e7dce82495f13d5d800e8608118e1cb05a72f419570640fd4/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:09:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:09:10 standalone.localdomain podman[143117]: 2025-10-13 14:09:10.255758814 +0000 UTC m=+0.178511333 container init 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, release=1, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, version=17.1.9, config_id=tripleo_step5, architecture=x86_64, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container)
Oct 13 14:09:10 standalone.localdomain podman[143131]: 2025-10-13 14:09:10.274083698 +0000 UTC m=+0.075087564 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, distribution-scope=public, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, container_name=clustercheck, batch=17.1_20250721.1, architecture=x86_64, name=rhosp17/openstack-mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, 
build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 14:09:10 standalone.localdomain sudo[143156]:     nova : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:09:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:09:10 standalone.localdomain systemd-logind[45629]: Existing logind session ID 25 used by new audit session, ignoring.
Oct 13 14:09:10 standalone.localdomain podman[143117]: 2025-10-13 14:09:10.295756267 +0000 UTC m=+0.218508806 container start 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, distribution-scope=public, vendor=Red Hat, Inc., container_name=nova_compute, name=rhosp17/openstack-nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:09:10 standalone.localdomain systemd[1]: Started Session c14 of User root.
Oct 13 14:09:10 standalone.localdomain python3[143069]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Oct 13 14:09:10 standalone.localdomain sudo[143156]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42436)
Oct 13 14:09:10 standalone.localdomain podman[143131]: 2025-10-13 14:09:10.329773655 +0000 UTC m=+0.130777521 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, name=rhosp17/openstack-mariadb, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', 
'/var/lib/mysql:/var/lib/mysql']}, container_name=clustercheck, distribution-scope=public, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 14:09:10 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:09:10 standalone.localdomain sudo[143156]: pam_unix(sudo:session): session closed for user root
Oct 13 14:09:10 standalone.localdomain systemd[1]: session-c14.scope: Deactivated successfully.
Oct 13 14:09:10 standalone.localdomain podman[143167]: 2025-10-13 14:09:10.385852733 +0000 UTC m=+0.081856543 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 13 14:09:10 standalone.localdomain podman[143167]: 2025-10-13 14:09:10.435512584 +0000 UTC m=+0.131516444 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, distribution-scope=public, config_id=tripleo_step5, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:09:10 standalone.localdomain podman[143167]: unhealthy
Oct 13 14:09:10 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:09:10 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed with result 'exit-code'.
Oct 13 14:09:10 standalone.localdomain ceph-mon[29756]: pgmap v1021: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:10 standalone.localdomain podman[143266]: 2025-10-13 14:09:10.854449586 +0000 UTC m=+0.100248750 container create e1e961b24c94263e906e151226459fe433c4d3741c50997f33ff369591a18ed8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=nova_wait_for_compute_service, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']})
Oct 13 14:09:10 standalone.localdomain systemd[1]: Started libpod-conmon-e1e961b24c94263e906e151226459fe433c4d3741c50997f33ff369591a18ed8.scope.
Oct 13 14:09:10 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:09:10 standalone.localdomain podman[143266]: 2025-10-13 14:09:10.810573264 +0000 UTC m=+0.056372438 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Oct 13 14:09:10 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8876cdfee2ff3f96001da3a351ac752d9af9c42182b5980adef66a8d52c7f993/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Oct 13 14:09:10 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8876cdfee2ff3f96001da3a351ac752d9af9c42182b5980adef66a8d52c7f993/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 14:09:10 standalone.localdomain podman[143266]: 2025-10-13 14:09:10.924701592 +0000 UTC m=+0.170500756 container init e1e961b24c94263e906e151226459fe433c4d3741c50997f33ff369591a18ed8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, batch=17.1_20250721.1, container_name=nova_wait_for_compute_service, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:09:10 standalone.localdomain podman[143266]: 2025-10-13 14:09:10.933095481 +0000 UTC m=+0.178894615 container start e1e961b24c94263e906e151226459fe433c4d3741c50997f33ff369591a18ed8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, 
container_name=nova_wait_for_compute_service, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, version=17.1.9, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public)
Oct 13 14:09:10 standalone.localdomain podman[143266]: 2025-10-13 14:09:10.93338877 +0000 UTC m=+0.179187944 container attach e1e961b24c94263e906e151226459fe433c4d3741c50997f33ff369591a18ed8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step5, version=17.1.9, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, container_name=nova_wait_for_compute_service, 
name=rhosp17/openstack-nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 13 14:09:10 standalone.localdomain sudo[143285]:     nova : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 14:09:10 standalone.localdomain sudo[143285]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 14:09:10 standalone.localdomain sudo[143285]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42436)
Oct 13 14:09:11 standalone.localdomain sudo[143285]: pam_unix(sudo:session): session closed for user root
Oct 13 14:09:11 standalone.localdomain haproxy[70940]: 172.17.0.2:52212 [13/Oct/2025:14:09:11.501] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 200 452 - - ---- 44/1/0/0/0 0/0 "GET /v3 HTTP/1.1"
Oct 13 14:09:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1022: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:11 standalone.localdomain haproxy[70940]: 172.17.0.2:52212 [13/Oct/2025:14:09:11.507] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/268/268 201 8104 - - ---- 44/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:09:11 standalone.localdomain haproxy[70940]: 172.17.0.2:52224 [13/Oct/2025:14:09:11.790] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/20/20 200 8099 - - ---- 46/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:09:11 standalone.localdomain haproxy[70940]: 172.17.0.2:58234 [13/Oct/2025:14:09:11.779] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/83/83 200 380 - - ---- 47/1/0/0/0 0/0 "GET /v2.1/os-services?binary=nova-compute HTTP/1.1"
Oct 13 14:09:12 standalone.localdomain runuser[143303]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:09:12 standalone.localdomain haproxy[70940]: 172.17.0.2:52240 [13/Oct/2025:14:09:12.276] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/5/5 300 515 - - ---- 48/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:09:12 standalone.localdomain haproxy[70940]: 172.17.0.2:52240 [13/Oct/2025:14:09:12.283] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/309/309 201 8106 - - ---- 48/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:09:12 standalone.localdomain haproxy[70940]: 172.17.0.2:45796 [13/Oct/2025:14:09:12.597] placement placement/standalone.internalapi.localdomain 0/0/0/2/2 200 381 - - ---- 49/1/0/0/0 0/0 "GET /placement HTTP/1.1"
Oct 13 14:09:12 standalone.localdomain runuser[143303]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:09:12 standalone.localdomain ceph-mon[29756]: pgmap v1022: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:12 standalone.localdomain runuser[143376]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:09:13 standalone.localdomain runuser[143376]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:09:13 standalone.localdomain runuser[143478]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:09:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1023: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:09:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3713647211' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:09:14 standalone.localdomain runuser[143478]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:09:14 standalone.localdomain haproxy[70940]: 172.17.0.2:52242 [13/Oct/2025:14:09:14.457] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 51/4/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:09:14 standalone.localdomain haproxy[70940]: 172.17.0.2:52242 [13/Oct/2025:14:09:14.464] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/294/294 201 8106 - - ---- 51/4/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:09:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:09:14 standalone.localdomain haproxy[70940]: 172.17.0.2:52242 [13/Oct/2025:14:09:14.764] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/23/23 200 8101 - - ---- 51/4/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:09:14 standalone.localdomain ceph-mon[29756]: pgmap v1023: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:14 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3713647211' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:09:14 standalone.localdomain haproxy[70940]: 172.17.0.2:45796 [13/Oct/2025:14:09:14.451] placement placement/standalone.internalapi.localdomain 0/0/0/352/352 200 271 - - ---- 51/1/0/0/0 0/0 "GET /placement/resource_providers?in_tree=e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 HTTP/1.1"
Oct 13 14:09:14 standalone.localdomain haproxy[70940]: 172.17.0.2:45796 [13/Oct/2025:14:09:14.807] placement placement/standalone.internalapi.localdomain 0/0/0/26/26 200 1250 - - ---- 51/1/0/0/0 0/0 "POST /placement/resource_providers HTTP/1.1"
Oct 13 14:09:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:09:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/149615377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:09:15 standalone.localdomain haproxy[70940]: 172.17.0.2:45796 [13/Oct/2025:14:09:15.369] placement placement/standalone.internalapi.localdomain 0/0/0/17/17 200 704 - - ---- 51/1/0/0/0 0/0 "PUT /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/inventories HTTP/1.1"
Oct 13 14:09:15 standalone.localdomain haproxy[70940]: 172.17.0.2:45796 [13/Oct/2025:14:09:15.395] placement placement/standalone.internalapi.localdomain 0/0/0/8/8 200 1684 - - ---- 51/1/0/0/0 0/0 "GET /placement/traits?name=in:COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX2,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_BMI2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,HW_CPU_X86_CLMU
Oct 13 14:09:15 standalone.localdomain haproxy[70940]: 172.17.0.2:45796 [13/Oct/2025:14:09:15.406] placement placement/standalone.internalapi.localdomain 0/0/0/50/50 200 1719 - - ---- 51/1/0/0/0 0/0 "PUT /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/traits HTTP/1.1"
Oct 13 14:09:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1024: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:15 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/149615377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:09:15 standalone.localdomain ceph-mon[29756]: pgmap v1024: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1025: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:09:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:09:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:09:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:09:17 standalone.localdomain podman[143797]: 2025-10-13 14:09:17.851059055 +0000 UTC m=+0.102171129 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_scheduler, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-scheduler, com.redhat.component=openstack-cinder-scheduler-container, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T16:10:12, tcib_managed=true, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler)
Oct 13 14:09:17 standalone.localdomain podman[143797]: 2025-10-13 14:09:17.890710847 +0000 UTC m=+0.141822971 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T16:10:12, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, architecture=x86_64, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.component=openstack-cinder-scheduler-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=cinder_scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, name=rhosp17/openstack-cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f)
Oct 13 14:09:17 standalone.localdomain podman[143795]: 2025-10-13 14:09:17.893090921 +0000 UTC m=+0.150026706 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, vcs-type=git, release=1, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, container_name=keystone, maintainer=OpenStack 
TripleO Team, name=rhosp17/openstack-keystone, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:09:17 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:09:17 standalone.localdomain podman[143796]: 2025-10-13 14:09:17.953876325 +0000 UTC m=+0.207839477 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, com.redhat.component=openstack-cinder-api-container, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, container_name=cinder_api_cron, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-api, build-date=2025-07-21T15:58:55, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:09:17 standalone.localdomain podman[143796]: 2025-10-13 14:09:17.965660888 +0000 UTC m=+0.219624030 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=cinder_api_cron, build-date=2025-07-21T15:58:55, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, summary=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.component=openstack-cinder-api-container, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, version=17.1.9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=)
Oct 13 14:09:17 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:09:17 standalone.localdomain podman[143795]: 2025-10-13 14:09:17.977965186 +0000 UTC m=+0.234900971 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, build-date=2025-07-21T13:27:18, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, name=rhosp17/openstack-keystone, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-keystone-container, version=17.1.9, container_name=keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
keystone, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, vcs-type=git, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:09:17 standalone.localdomain podman[143794]: 2025-10-13 14:09:17.934610511 +0000 UTC m=+0.196271601 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, release=1, container_name=iscsid, name=rhosp17/openstack-iscsid, vcs-type=git)
Oct 13 14:09:18 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:09:18 standalone.localdomain podman[143794]: 2025-10-13 14:09:18.020788807 +0000 UTC m=+0.282449837 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, release=1, tcib_managed=true, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:09:18 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: pgmap v1025: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.692544) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364558692578, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2116, "num_deletes": 251, "total_data_size": 2031073, "memory_usage": 2078336, "flush_reason": "Manual Compaction"}
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364558701297, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1958555, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22756, "largest_seqno": 24871, "table_properties": {"data_size": 1950318, "index_size": 5004, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17080, "raw_average_key_size": 19, "raw_value_size": 1933455, "raw_average_value_size": 2242, "num_data_blocks": 230, "num_entries": 862, "num_filter_entries": 862, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760364363, "oldest_key_time": 1760364363, "file_creation_time": 1760364558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 8791 microseconds, and 3812 cpu microseconds.
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.701335) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1958555 bytes OK
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.701353) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.703273) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.703285) EVENT_LOG_v1 {"time_micros": 1760364558703281, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.703302) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 2022034, prev total WAL file size 2022034, number of live WAL files 2.
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.704008) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1912KB)], [53(4725KB)]
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364558704089, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 6797712, "oldest_snapshot_seqno": -1}
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 3813 keys, 5864510 bytes, temperature: kUnknown
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364558739601, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 5864510, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5836910, "index_size": 17007, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 91307, "raw_average_key_size": 23, "raw_value_size": 5765912, "raw_average_value_size": 1512, "num_data_blocks": 733, "num_entries": 3813, "num_filter_entries": 3813, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760364558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.739935) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 5864510 bytes
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.742076) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.7 rd, 164.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 4.6 +0.0 blob) out(5.6 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 4333, records dropped: 520 output_compression: NoCompression
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.742106) EVENT_LOG_v1 {"time_micros": 1760364558742093, "job": 28, "event": "compaction_finished", "compaction_time_micros": 35644, "compaction_time_cpu_micros": 22057, "output_level": 6, "num_output_files": 1, "total_output_size": 5864510, "num_input_records": 4333, "num_output_records": 3813, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364558742579, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364558743152, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.703878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.743226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.743236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.743241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.743245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:09:18 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:09:18.743249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:09:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1026: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:09:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:09:20 standalone.localdomain ceph-mon[29756]: pgmap v1026: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:09:20 standalone.localdomain podman[143988]: 2025-10-13 14:09:20.832776026 +0000 UTC m=+0.099775765 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20250721.1, container_name=logrotate_crond, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:09:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:09:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:09:20 standalone.localdomain podman[143988]: 2025-10-13 14:09:20.865497735 +0000 UTC m=+0.132497444 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, build-date=2025-07-21T13:07:52, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible)
Oct 13 14:09:20 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:09:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:09:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:09:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:09:20 standalone.localdomain podman[144009]: 2025-10-13 14:09:20.928963012 +0000 UTC m=+0.082299519 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, container_name=heat_api, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-type=git, release=1, com.redhat.component=openstack-heat-api-container, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, 
build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:09:20 standalone.localdomain podman[144006]: 2025-10-13 14:09:20.955694925 +0000 UTC m=+0.114940524 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T16:02:54, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-scheduler-container, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-scheduler, container_name=nova_scheduler, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler)
Oct 13 14:09:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:09:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:09:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:09:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:09:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:09:21 standalone.localdomain podman[144095]: 2025-10-13 14:09:21.038001672 +0000 UTC m=+0.063990593 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, container_name=nova_conductor, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-conductor, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-nova-conductor-container, description=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-conductor, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:44:17)
Oct 13 14:09:21 standalone.localdomain podman[144006]: 2025-10-13 14:09:21.063096286 +0000 UTC m=+0.222341885 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_scheduler, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:02:54, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-scheduler-container, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-scheduler, maintainer=OpenStack TripleO Team, 
vcs-ref=4f7cb55437a2b42333072591935a511528e79935, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible)
Oct 13 14:09:21 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:09:21 standalone.localdomain podman[144131]: 2025-10-13 14:09:21.078558782 +0000 UTC m=+0.061452165 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-manila-api-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=manila_api_cron, managed_by=tripleo_ansible, build-date=2025-07-21T16:06:43, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-manila-api, tcib_managed=true, architecture=x86_64)
Oct 13 14:09:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:09:21 standalone.localdomain podman[144042]: 2025-10-13 14:09:21.103720988 +0000 UTC m=+0.196360104 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, batch=17.1_20250721.1, managed_by=tripleo_ansible, container_name=heat_api_cfn, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, build-date=2025-07-21T14:49:55, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, com.redhat.component=openstack-heat-api-cfn-container, maintainer=OpenStack TripleO Team, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:09:21 standalone.localdomain podman[144130]: 2025-10-13 14:09:21.129649266 +0000 UTC m=+0.114459838 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.component=openstack-cinder-api-container, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, build-date=2025-07-21T15:58:55, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_api, release=1, summary=Red Hat OpenStack Platform 17.1 
cinder-api, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, description=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api)
Oct 13 14:09:21 standalone.localdomain podman[144043]: 2025-10-13 14:09:21.141347648 +0000 UTC m=+0.233911901 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-manila-scheduler, version=17.1.9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, description=Red Hat OpenStack Platform 17.1 manila-scheduler, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-manila-scheduler-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, architecture=x86_64, container_name=manila_scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, build-date=2025-07-21T15:56:28)
Oct 13 14:09:21 standalone.localdomain podman[144042]: 2025-10-13 14:09:21.149143977 +0000 UTC m=+0.241783083 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, com.redhat.component=openstack-heat-api-cfn-container, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, managed_by=tripleo_ansible, container_name=heat_api_cfn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api-cfn, build-date=2025-07-21T14:49:55, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 13 14:09:21 standalone.localdomain podman[144007]: 2025-10-13 14:09:21.057545415 +0000 UTC m=+0.211191201 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, build-date=2025-07-21T15:44:11, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, container_name=heat_engine, com.redhat.component=openstack-heat-engine-container)
Oct 13 14:09:21 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:09:21 standalone.localdomain podman[144131]: 2025-10-13 14:09:21.163024125 +0000 UTC m=+0.145917528 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, description=Red Hat OpenStack Platform 17.1 manila-api, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api, architecture=x86_64, tcib_managed=true, container_name=manila_api_cron, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-manila-api-container, release=1, distribution-scope=public, build-date=2025-07-21T16:06:43, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 13 14:09:21 standalone.localdomain podman[144044]: 2025-10-13 14:09:21.011514365 +0000 UTC m=+0.097264129 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.33.12, name=rhosp17/openstack-memcached, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, distribution-scope=public, config_id=tripleo_step1, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, tcib_managed=true, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T12:58:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', 
'/var/log/containers/memcached:/var/log/memcached:rw']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team)
Oct 13 14:09:21 standalone.localdomain podman[144130]: 2025-10-13 14:09:21.181793134 +0000 UTC m=+0.166603696 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, container_name=cinder_api, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-api, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, com.redhat.component=openstack-cinder-api-container, build-date=2025-07-21T15:58:55, release=1, 
vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, description=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:09:21 standalone.localdomain podman[144007]: 2025-10-13 14:09:21.191526234 +0000 UTC m=+0.345172000 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, build-date=2025-07-21T15:44:11, name=rhosp17/openstack-heat-engine, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, com.redhat.component=openstack-heat-engine-container, container_name=heat_engine, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:09:21 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:09:21 standalone.localdomain podman[144095]: 2025-10-13 14:09:21.209295792 +0000 UTC m=+0.235284733 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, name=rhosp17/openstack-nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-conductor, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-nova-conductor-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:44:17, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, maintainer=OpenStack 
TripleO Team, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-conductor, version=17.1.9, container_name=nova_conductor)
Oct 13 14:09:21 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:09:21 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:09:21 standalone.localdomain podman[144044]: 2025-10-13 14:09:21.245797207 +0000 UTC m=+0.331547021 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, batch=17.1_20250721.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, container_name=memcached, io.buildah.version=1.33.12, name=rhosp17/openstack-memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-memcached-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, release=1, config_id=tripleo_step1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, 
summary=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:09:21 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:09:21 standalone.localdomain podman[144090]: 2025-10-13 14:09:21.289880396 +0000 UTC m=+0.321828041 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, name=rhosp17/openstack-neutron-server, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 neutron-server, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-07-21T15:44:03, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-server-container, container_name=neutron_api, io.buildah.version=1.33.12, architecture=x86_64, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:09:21 standalone.localdomain podman[144191]: 2025-10-13 14:09:21.199444888 +0000 UTC m=+0.106982378 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=heat_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-heat-api-container, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:56:26, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, release=1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:09:21 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:09:21 standalone.localdomain podman[144191]: 2025-10-13 14:09:21.335045338 +0000 UTC m=+0.242582818 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, container_name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, version=17.1.9, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T15:56:26, config_id=tripleo_step4, name=rhosp17/openstack-heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:09:21 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:09:21 standalone.localdomain podman[144009]: 2025-10-13 14:09:21.365355702 +0000 UTC m=+0.518692239 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T15:56:26, release=1, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api)
Oct 13 14:09:21 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:09:21 standalone.localdomain podman[144043]: 2025-10-13 14:09:21.443754588 +0000 UTC m=+0.536318901 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, com.redhat.component=openstack-manila-scheduler-container, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, name=rhosp17/openstack-manila-scheduler, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, io.openshift.expose-services=, container_name=manila_scheduler, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/manila:/var/log/manila:z']}, build-date=2025-07-21T15:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible)
Oct 13 14:09:21 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:09:21 standalone.localdomain podman[144092]: 2025-10-13 14:09:21.526781187 +0000 UTC m=+0.550760336 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, container_name=horizon, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, com.redhat.component=openstack-horizon-container, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, name=rhosp17/openstack-horizon, build-date=2025-07-21T13:58:15, 
maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=1, version=17.1.9, config_id=tripleo_step3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 14:09:21 standalone.localdomain podman[144090]: 2025-10-13 14:09:21.557842994 +0000 UTC m=+0.589790689 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, build-date=2025-07-21T15:44:03, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-neutron-server, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, container_name=neutron_api)
Oct 13 14:09:21 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:09:21 standalone.localdomain podman[144092]: 2025-10-13 14:09:21.613246882 +0000 UTC m=+0.637226091 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 horizon, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, architecture=x86_64, 
com.redhat.component=openstack-horizon-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, container_name=horizon, description=Red Hat OpenStack Platform 17.1 horizon, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:15, managed_by=tripleo_ansible, version=17.1.9)
Oct 13 14:09:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1027: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:21 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:09:21 standalone.localdomain systemd[1]: tmp-crun.BaUkRT.mount: Deactivated successfully.
Oct 13 14:09:21 standalone.localdomain ceph-mon[29756]: pgmap v1027: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:21 standalone.localdomain haproxy[70940]: 172.17.0.2:45742 [13/Oct/2025:14:09:21.879] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/41/41 200 593 - - ---- 49/1/0/0/0 0/0 "GET /v2.1/os-services?binary=nova-compute HTTP/1.1"
Oct 13 14:09:21 standalone.localdomain systemd[1]: libpod-e1e961b24c94263e906e151226459fe433c4d3741c50997f33ff369591a18ed8.scope: Deactivated successfully.
Oct 13 14:09:21 standalone.localdomain podman[143266]: 2025-10-13 14:09:21.960014541 +0000 UTC m=+11.205813745 container died e1e961b24c94263e906e151226459fe433c4d3741c50997f33ff369591a18ed8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, name=rhosp17/openstack-nova-compute, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Oct 13 14:09:22 standalone.localdomain systemd[1]: tmp-crun.A9E0Ax.mount: Deactivated successfully.
Oct 13 14:09:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e1e961b24c94263e906e151226459fe433c4d3741c50997f33ff369591a18ed8-userdata-shm.mount: Deactivated successfully.
Oct 13 14:09:22 standalone.localdomain podman[144299]: 2025-10-13 14:09:22.072124266 +0000 UTC m=+0.102957234 container cleanup e1e961b24c94263e906e151226459fe433c4d3741c50997f33ff369591a18ed8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, architecture=x86_64, version=17.1.9, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.33.12, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, container_name=nova_wait_for_compute_service, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:09:22 standalone.localdomain systemd[1]: libpod-conmon-e1e961b24c94263e906e151226459fe433c4d3741c50997f33ff369591a18ed8.scope: Deactivated successfully.
Oct 13 14:09:22 standalone.localdomain python3[143069]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=5ce329a35cfc30978bc40d323681fc5e --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Oct 13 14:09:22 standalone.localdomain python3[144350]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:09:22 standalone.localdomain python3[144352]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:09:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8876cdfee2ff3f96001da3a351ac752d9af9c42182b5980adef66a8d52c7f993-merged.mount: Deactivated successfully.
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:09:23
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'images', 'vms', '.mgr', 'volumes', 'manila_data', 'backups']
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:09:23 standalone.localdomain python3[144364]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760364562.7826364-144340-221228649841784/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:09:23 standalone.localdomain python3[144366]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 14:09:23 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:09:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1028: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:23 standalone.localdomain systemd-sysv-generator[144401]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:09:23 standalone.localdomain systemd-rc-local-generator[144398]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:09:23 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:09:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:09:24 standalone.localdomain podman[144411]: 2025-10-13 14:09:24.218196392 +0000 UTC m=+0.079805641 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, container_name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, release=1, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1)
Oct 13 14:09:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:09:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:09:24 standalone.localdomain podman[144433]: 2025-10-13 14:09:24.339836061 +0000 UTC m=+0.090852491 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
container_name=swift_object_server, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, build-date=2025-07-21T14:56:28, tcib_managed=true, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container, distribution-scope=public)
Oct 13 14:09:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:09:24 standalone.localdomain podman[144434]: 2025-10-13 14:09:24.416003698 +0000 UTC m=+0.159354551 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_id=tripleo_step4, container_name=swift_container_server, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, 
io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, build-date=2025-07-21T15:54:32, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905)
Oct 13 14:09:24 standalone.localdomain podman[144467]: 2025-10-13 14:09:24.431253149 +0000 UTC m=+0.065685396 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:09:24 standalone.localdomain podman[144411]: 2025-10-13 14:09:24.44978731 +0000 UTC m=+0.311396559 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:11:22, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true)
Oct 13 14:09:24 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:09:24 standalone.localdomain podman[144433]: 2025-10-13 14:09:24.564319689 +0000 UTC m=+0.315336059 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:56:28, vcs-type=git, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, 
vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server)
Oct 13 14:09:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:09:24 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:09:24 standalone.localdomain podman[144520]: 2025-10-13 14:09:24.654022074 +0000 UTC m=+0.069594275 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, name=rhosp17/openstack-nova-novncproxy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, com.redhat.component=openstack-nova-novncproxy-container, build-date=2025-07-21T15:24:10, distribution-scope=public, container_name=nova_vnc_proxy, release=1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 14:09:24 standalone.localdomain podman[144434]: 2025-10-13 14:09:24.658998378 +0000 UTC m=+0.402349251 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team)
Oct 13 14:09:24 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:09:24 standalone.localdomain ceph-mon[29756]: pgmap v1028: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:09:24 standalone.localdomain podman[144467]: 2025-10-13 14:09:24.780845574 +0000 UTC m=+0.415277861 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, io.buildah.version=1.33.12, container_name=nova_migration_target, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true)
Oct 13 14:09:24 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:09:24 standalone.localdomain runuser[144550]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:09:24 standalone.localdomain podman[144520]: 2025-10-13 14:09:24.928808915 +0000 UTC m=+0.344381116 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, build-date=2025-07-21T15:24:10, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.component=openstack-nova-novncproxy-container, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, architecture=x86_64, container_name=nova_vnc_proxy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git, version=17.1.9, name=rhosp17/openstack-nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy)
Oct 13 14:09:24 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:09:25 standalone.localdomain python3[144570]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:09:25 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:09:25 standalone.localdomain systemd-sysv-generator[144625]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:09:25 standalone.localdomain systemd-rc-local-generator[144622]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:09:25 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:09:25 standalone.localdomain runuser[144550]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:09:25 standalone.localdomain runuser[144660]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:09:25 standalone.localdomain systemd[1]: tmp-crun.vZXZ4k.mount: Deactivated successfully.
Oct 13 14:09:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1029: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:25 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:09:25 standalone.localdomain systemd[1]: Starting nova_compute container...
Oct 13 14:09:25 standalone.localdomain recover_tripleo_nova_virtqemud[144708]: 93291
Oct 13 14:09:25 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:09:25 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:09:25 standalone.localdomain tripleo-start-podman-container[144707]: Creating additional drop-in dependency for "nova_compute" (277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c)
Oct 13 14:09:25 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:09:26 standalone.localdomain systemd-sysv-generator[144768]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:09:26 standalone.localdomain systemd-rc-local-generator[144765]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:09:26 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:09:26 standalone.localdomain runuser[144660]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:09:26 standalone.localdomain runuser[144839]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:09:26 standalone.localdomain systemd[1]: Started nova_compute container.
Oct 13 14:09:26 standalone.localdomain python3[144981]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:09:26 standalone.localdomain ceph-mon[29756]: pgmap v1029: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:27 standalone.localdomain runuser[144839]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:09:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1030: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:09:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:09:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:09:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:09:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:09:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:09:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:09:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:09:27 standalone.localdomain python3[145066]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=standalone step=5 update_config_hash_only=False
Oct 13 14:09:27 standalone.localdomain ceph-mon[29756]: pgmap v1030: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:27 standalone.localdomain podman[145068]: 2025-10-13 14:09:27.858603216 +0000 UTC m=+0.116580344 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, config_id=tripleo_step4, io.openshift.expose-services=, release=1, com.redhat.component=openstack-glance-api-container, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T13:58:20, tcib_managed=true, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron)
Oct 13 14:09:27 standalone.localdomain podman[145101]: 2025-10-13 14:09:27.927314854 +0000 UTC m=+0.148172248 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, release=1, com.redhat.component=openstack-glance-api-container, io.openshift.expose-services=, container_name=glance_api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, build-date=2025-07-21T13:58:20)
Oct 13 14:09:27 standalone.localdomain podman[145084]: 2025-10-13 14:09:27.882326027 +0000 UTC m=+0.107357159 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, batch=17.1_20250721.1, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:09:27 standalone.localdomain podman[145083]: 2025-10-13 14:09:27.908481763 +0000 UTC m=+0.152901933 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, container_name=glance_api_internal, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, name=rhosp17/openstack-glance-api, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:09:27 standalone.localdomain podman[145084]: 2025-10-13 14:09:27.965327665 +0000 UTC m=+0.190358767 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git)
Oct 13 14:09:27 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:09:27 standalone.localdomain podman[145072]: 2025-10-13 14:09:27.977575393 +0000 UTC m=+0.226913805 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, release=1, maintainer=OpenStack TripleO Team, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, 
tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_metadata, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T16:05:11, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9)
Oct 13 14:09:28 standalone.localdomain podman[145069]: 2025-10-13 14:09:28.014534632 +0000 UTC m=+0.270275281 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, build-date=2025-07-21T13:58:12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-placement-api, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, config_id=tripleo_step4, container_name=placement_api, 
architecture=x86_64, description=Red Hat OpenStack Platform 17.1 placement-api, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, release=1, com.redhat.component=openstack-placement-api-container)
Oct 13 14:09:28 standalone.localdomain podman[145072]: 2025-10-13 14:09:28.034445036 +0000 UTC m=+0.283783498 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, release=1, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, 
build-date=2025-07-21T16:05:11, container_name=nova_metadata, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-api)
Oct 13 14:09:28 standalone.localdomain podman[145068]: 2025-10-13 14:09:28.042804183 +0000 UTC m=+0.300781341 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_cron, build-date=2025-07-21T13:58:20, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, tcib_managed=true)
Oct 13 14:09:28 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:09:28 standalone.localdomain podman[145069]: 2025-10-13 14:09:28.05371558 +0000 UTC m=+0.309456219 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, build-date=2025-07-21T13:58:12, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, container_name=placement_api, name=rhosp17/openstack-placement-api, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-placement-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api)
Oct 13 14:09:28 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:09:28 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:09:28 standalone.localdomain podman[145083]: 2025-10-13 14:09:28.117769454 +0000 UTC m=+0.362189594 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=1, com.redhat.component=openstack-glance-api-container, container_name=glance_api_internal, tcib_managed=true, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, name=rhosp17/openstack-glance-api, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, distribution-scope=public)
Oct 13 14:09:28 standalone.localdomain podman[145070]: 2025-10-13 14:09:28.125439371 +0000 UTC m=+0.369802669 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, container_name=swift_proxy, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, name=rhosp17/openstack-swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:09:28 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:09:28 standalone.localdomain podman[145101]: 2025-10-13 14:09:28.133719686 +0000 UTC m=+0.354577090 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:20, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=glance_api, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1)
Oct 13 14:09:28 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:09:28 standalone.localdomain podman[145094]: 2025-10-13 14:09:28.185134451 +0000 UTC m=+0.420322147 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, release=1, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, batch=17.1_20250721.1, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, build-date=2025-07-21T13:28:44)
Oct 13 14:09:28 standalone.localdomain python3[145234]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:09:28 standalone.localdomain podman[145094]: 2025-10-13 14:09:28.209722599 +0000 UTC m=+0.444910275 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 14:09:28 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:09:28 standalone.localdomain podman[145070]: 2025-10-13 14:09:28.320770061 +0000 UTC m=+0.565133379 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, architecture=x86_64, vcs-type=git, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, container_name=swift_proxy, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:09:28 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:09:28 standalone.localdomain systemd[1]: tmp-crun.gCaFAL.mount: Deactivated successfully.
Oct 13 14:09:29 standalone.localdomain systemd[1]: tmp-crun.STEa3v.mount: Deactivated successfully.
Oct 13 14:09:29 standalone.localdomain podman[145371]: 2025-10-13 14:09:29.045416196 +0000 UTC m=+0.088117686 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, summary=Red Hat OpenStack Platform 17.1 manila-share, vcs-type=git, architecture=x86_64, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T15:22:36, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-manila-share-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, batch=17.1_20250721.1, name=rhosp17/openstack-manila-share, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, release=1, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share)
Oct 13 14:09:29 standalone.localdomain podman[145371]: 2025-10-13 14:09:29.078893278 +0000 UTC m=+0.121594708 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-manila-share-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.expose-services=, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 manila-share, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T15:22:36, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, release=1, name=rhosp17/openstack-manila-share, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:09:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1031: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:09:29 standalone.localdomain python3[145427]: ansible-ansible.legacy.command Invoked with _raw_params=podman exec nova_api nova-manage cell_v2 discover_hosts --by-service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:09:30 standalone.localdomain podman[145428]: 2025-10-13 14:09:30.114600581 +0000 UTC m=+0.100511039 container exec 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, build-date=2025-07-21T16:05:11, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-api-container, name=rhosp17/openstack-nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, container_name=nova_api, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:09:30 standalone.localdomain ceph-mon[29756]: pgmap v1031: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1032: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:09:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:09:31 standalone.localdomain podman[145463]: 2025-10-13 14:09:31.830983603 +0000 UTC m=+0.094759231 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, distribution-scope=public, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone_cron, description=Red Hat OpenStack Platform 17.1 keystone, 
io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.component=openstack-keystone-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:09:31 standalone.localdomain podman[145463]: 2025-10-13 14:09:31.843876001 +0000 UTC m=+0.107651639 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_id=tripleo_step3, com.redhat.component=openstack-keystone-container, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 keystone, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, 
architecture=x86_64, build-date=2025-07-21T13:27:18, summary=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, container_name=keystone_cron, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.9)
Oct 13 14:09:31 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:09:31 standalone.localdomain podman[145464]: 2025-10-13 14:09:31.935374121 +0000 UTC m=+0.196775397 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, container_name=nova_api, config_id=tripleo_step4, tcib_managed=true, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-api, version=17.1.9, com.redhat.component=openstack-nova-api-container, distribution-scope=public, build-date=2025-07-21T16:05:11, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, io.openshift.expose-services=)
Oct 13 14:09:31 standalone.localdomain podman[145464]: 2025-10-13 14:09:31.973268569 +0000 UTC m=+0.234669895 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, container_name=nova_api, tcib_managed=true, name=rhosp17/openstack-nova-api, vendor=Red Hat, Inc., release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, build-date=2025-07-21T16:05:11, description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, maintainer=OpenStack TripleO Team)
Oct 13 14:09:31 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:09:32 standalone.localdomain haproxy[70940]: 172.17.0.100:37149 [13/Oct/2025:14:09:31.856] mysql mysql/standalone.internalapi.localdomain 1/0/364 5951 -- 47/47/46/46/0 0/0
Oct 13 14:09:32 standalone.localdomain haproxy[70940]: 172.17.0.100:60739 [13/Oct/2025:14:09:31.964] mysql mysql/standalone.internalapi.localdomain 1/0/256 3535 -- 46/46/45/45/0 0/0
Oct 13 14:09:32 standalone.localdomain podman[145428]: 2025-10-13 14:09:32.340859078 +0000 UTC m=+2.326769546 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, container_name=nova_api, io.buildah.version=1.33.12, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, release=1, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.component=openstack-nova-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, batch=17.1_20250721.1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:09:32 standalone.localdomain ceph-mon[29756]: pgmap v1032: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1033: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:09:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:09:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:09:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:09:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:09:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:09:33 standalone.localdomain ceph-mon[29756]: pgmap v1033: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:33 standalone.localdomain podman[145558]: 2025-10-13 14:09:33.867853233 +0000 UTC m=+0.107061611 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-nova-api-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-api, build-date=2025-07-21T16:05:11, container_name=nova_api_cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:09:33 standalone.localdomain podman[145558]: 2025-10-13 14:09:33.8777915 +0000 UTC m=+0.116999878 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, 
vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, build-date=2025-07-21T16:05:11, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:09:33 standalone.localdomain podman[145537]: 2025-10-13 14:09:33.848839868 +0000 UTC m=+0.100862970 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, name=rhosp17/openstack-neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T16:28:54, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, com.redhat.component=openstack-neutron-dhcp-agent-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=)
Oct 13 14:09:33 standalone.localdomain podman[145537]: 2025-10-13 14:09:33.908838457 +0000 UTC m=+0.160861569 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.openshift.expose-services=, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:54, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-neutron-dhcp-agent)
Oct 13 14:09:33 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:09:33 standalone.localdomain podman[145539]: 2025-10-13 14:09:33.959177358 +0000 UTC m=+0.208631301 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, release=1, version=17.1.9, architecture=x86_64, container_name=barbican_keystone_listener, name=rhosp17/openstack-barbican-keystone-listener, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step3, distribution-scope=public, com.redhat.component=openstack-barbican-keystone-listener-container, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 
barbican-keystone-listener, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:18:19, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c)
Oct 13 14:09:33 standalone.localdomain podman[145538]: 2025-10-13 14:09:33.913677206 +0000 UTC m=+0.164548393 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-barbican-api-container, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 barbican-api, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T15:22:44, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, container_name=barbican_api, io.openshift.expose-services=, version=17.1.9, maintainer=OpenStack TripleO 
Team, name=rhosp17/openstack-barbican-api, description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, config_id=tripleo_step3, vendor=Red Hat, Inc.)
Oct 13 14:09:33 standalone.localdomain podman[145539]: 2025-10-13 14:09:33.977735951 +0000 UTC m=+0.227189914 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=barbican_keystone_listener, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, 
batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, build-date=2025-07-21T16:18:19, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-barbican-keystone-listener-container, release=1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, architecture=x86_64)
Oct 13 14:09:33 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:09:34 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:09:34 standalone.localdomain podman[145540]: 2025-10-13 14:09:34.060772779 +0000 UTC m=+0.305614640 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-neutron-sriov-agent-container, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=neutron_sriov_agent, name=rhosp17/openstack-neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']})
Oct 13 14:09:34 standalone.localdomain podman[145538]: 2025-10-13 14:09:34.098795182 +0000 UTC m=+0.349666459 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T15:22:44, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, com.redhat.component=openstack-barbican-api-container, config_id=tripleo_step3, version=17.1.9, name=rhosp17/openstack-barbican-api, io.buildah.version=1.33.12, container_name=barbican_api, summary=Red Hat OpenStack Platform 17.1 barbican-api, 
distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, description=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1)
Oct 13 14:09:34 standalone.localdomain podman[145548]: 2025-10-13 14:09:34.061657917 +0000 UTC m=+0.303416283 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 barbican-worker, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-barbican-worker, vendor=Red Hat, Inc., build-date=2025-07-21T15:36:22, io.buildah.version=1.33.12, com.redhat.component=openstack-barbican-worker-container, config_id=tripleo_step3, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, container_name=barbican_worker, batch=17.1_20250721.1, io.openshift.expose-services=)
Oct 13 14:09:34 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:09:34 standalone.localdomain podman[145540]: 2025-10-13 14:09:34.118760097 +0000 UTC m=+0.363601978 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, com.redhat.component=openstack-neutron-sriov-agent-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-sriov-agent, build-date=2025-07-21T16:03:34, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, container_name=neutron_sriov_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, release=1, vcs-type=git, batch=17.1_20250721.1)
Oct 13 14:09:34 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:09:34 standalone.localdomain podman[145548]: 2025-10-13 14:09:34.143912172 +0000 UTC m=+0.385670528 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, vendor=Red Hat, Inc., container_name=barbican_worker, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-worker, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.component=openstack-barbican-worker-container, io.buildah.version=1.33.12, name=rhosp17/openstack-barbican-worker, batch=17.1_20250721.1, build-date=2025-07-21T15:36:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, config_id=tripleo_step3, tcib_managed=true)
Oct 13 14:09:34 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:09:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:09:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1034: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:36 standalone.localdomain ceph-mon[29756]: pgmap v1034: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1035: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:37 standalone.localdomain runuser[145888]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:09:37 standalone.localdomain ceph-mon[29756]: pgmap v1035: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:38 standalone.localdomain runuser[145888]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:09:38 standalone.localdomain podman[146009]: 2025-10-13 14:09:38.576620396 +0000 UTC m=+0.094295826 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-backup, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cinder-backup, batch=17.1_20250721.1, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, com.redhat.component=openstack-cinder-backup-container, name=rhosp17/openstack-cinder-backup, build-date=2025-07-21T16:18:24, tcib_managed=true, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, io.buildah.version=1.33.12)
Oct 13 14:09:38 standalone.localdomain podman[146009]: 2025-10-13 14:09:38.604785595 +0000 UTC m=+0.122461035 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cinder-backup, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, distribution-scope=public, build-date=2025-07-21T16:18:24, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, com.redhat.component=openstack-cinder-backup-container, description=Red Hat OpenStack Platform 17.1 cinder-backup, release=1, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-backup, maintainer=OpenStack TripleO Team, tcib_managed=true)
Oct 13 14:09:38 standalone.localdomain runuser[146070]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:09:39 standalone.localdomain runuser[146070]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:09:39 standalone.localdomain runuser[146134]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:09:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1036: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:09:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:09:39 standalone.localdomain systemd[1]: tmp-crun.sdRuPv.mount: Deactivated successfully.
Oct 13 14:09:39 standalone.localdomain podman[146181]: 2025-10-13 14:09:39.831547476 +0000 UTC m=+0.102149479 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, config_id=ovn_cluster_northd, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, architecture=x86_64, container_name=ovn_cluster_northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:30:04, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.buildah.version=1.33.12, 
io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, com.redhat.component=openstack-ovn-northd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-northd, description=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:09:39 standalone.localdomain podman[146181]: 2025-10-13 14:09:39.869815006 +0000 UTC m=+0.140417009 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, container_name=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:30:04, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-northd, config_id=ovn_cluster_northd, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.component=openstack-ovn-northd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', 
'/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, version=17.1.9)
Oct 13 14:09:39 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:09:40 standalone.localdomain runuser[146134]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:09:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:09:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:09:40 standalone.localdomain ceph-mon[29756]: pgmap v1036: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:40 standalone.localdomain podman[146219]: 2025-10-13 14:09:40.825708078 +0000 UTC m=+0.090634294 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, config_id=tripleo_step2, container_name=clustercheck, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:45, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, 
release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Oct 13 14:09:40 standalone.localdomain podman[146218]: 2025-10-13 14:09:40.880391024 +0000 UTC m=+0.147500808 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, container_name=nova_compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, version=17.1.9)
Oct 13 14:09:40 standalone.localdomain podman[146219]: 2025-10-13 14:09:40.889929297 +0000 UTC m=+0.154855513 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.component=openstack-mariadb-container, config_id=tripleo_step2, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, container_name=clustercheck, managed_by=tripleo_ansible, release=1, distribution-scope=public, io.buildah.version=1.33.12, 
vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., name=rhosp17/openstack-mariadb, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T12:58:45, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:09:40 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:09:40 standalone.localdomain podman[146218]: 2025-10-13 14:09:40.915053692 +0000 UTC m=+0.182163506 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, distribution-scope=public, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 13 14:09:40 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:09:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1037: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:41 standalone.localdomain ceph-mon[29756]: pgmap v1037: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:42 standalone.localdomain sudo[146302]:     root : PWD=/root ; USER=root ; COMMAND=/bin/chown -R root /root/tripleo-heat-installer-templates
Oct 13 14:09:42 standalone.localdomain sudo[146302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:09:42 standalone.localdomain sudo[146302]: pam_unix(sudo:session): session closed for user root
Oct 13 14:09:42 standalone.localdomain sudo[146305]:     root : PWD=/root ; USER=root ; COMMAND=/bin/chown -R root /root/standalone-ansible-fa02z7re
Oct 13 14:09:42 standalone.localdomain sudo[146305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:09:42 standalone.localdomain sudo[146305]: pam_unix(sudo:session): session closed for user root
Oct 13 14:09:42 standalone.localdomain sudo[146308]:     root : PWD=/root ; USER=root ; COMMAND=/bin/chown -R root /root/.tripleo
Oct 13 14:09:42 standalone.localdomain sudo[146308]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:09:42 standalone.localdomain sudo[146308]: pam_unix(sudo:session): session closed for user root
Oct 13 14:09:43 standalone.localdomain sudo[146311]:     root : PWD=/root ; USER=root ; COMMAND=/bin/cp /etc/openstack/clouds.yaml /root/.config/openstack
Oct 13 14:09:43 standalone.localdomain sudo[146311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:09:43 standalone.localdomain sudo[146311]: pam_unix(sudo:session): session closed for user root
Oct 13 14:09:43 standalone.localdomain sudo[146314]:     root : PWD=/root ; USER=root ; COMMAND=/bin/chmod 0600 /root/.config/openstack/clouds.yaml
Oct 13 14:09:43 standalone.localdomain sudo[146314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:09:43 standalone.localdomain sudo[146314]: pam_unix(sudo:session): session closed for user root
Oct 13 14:09:43 standalone.localdomain sudo[146324]:     root : PWD=/root ; USER=root ; COMMAND=/bin/chown -R 0:0 /root/.config
Oct 13 14:09:43 standalone.localdomain sudo[146324]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:09:43 standalone.localdomain sudo[146324]: pam_unix(sudo:session): session closed for user root
Oct 13 14:09:43 standalone.localdomain sudo[40302]: pam_unix(sudo:session): session closed for user root
Oct 13 14:09:43 standalone.localdomain sshd[23285]: Received disconnect from 192.168.122.11 port 57482:11: disconnected by user
Oct 13 14:09:43 standalone.localdomain sshd[23285]: Disconnected from user root 192.168.122.11 port 57482
Oct 13 14:09:43 standalone.localdomain sshd[23282]: pam_unix(sshd:session): session closed for user root
Oct 13 14:09:43 standalone.localdomain systemd[1]: session-25.scope: Deactivated successfully.
Oct 13 14:09:43 standalone.localdomain systemd[1]: session-25.scope: Consumed 28min 20.140s CPU time.
Oct 13 14:09:43 standalone.localdomain systemd-logind[45629]: Session 25 logged out. Waiting for processes to exit.
Oct 13 14:09:43 standalone.localdomain systemd-logind[45629]: Removed session 25.
Oct 13 14:09:43 standalone.localdomain sshd[146330]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:09:43 standalone.localdomain sshd[146330]: Accepted publickey for root from 192.168.122.11 port 42338 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:09:43 standalone.localdomain systemd-logind[45629]: New session 41 of user root.
Oct 13 14:09:43 standalone.localdomain systemd[1]: Started Session 41 of User root.
Oct 13 14:09:43 standalone.localdomain sshd[146330]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:09:43 standalone.localdomain sshd[146333]: Received disconnect from 192.168.122.11 port 42338:11: disconnected by user
Oct 13 14:09:43 standalone.localdomain sshd[146333]: Disconnected from user root 192.168.122.11 port 42338
Oct 13 14:09:43 standalone.localdomain sshd[146330]: pam_unix(sshd:session): session closed for user root
Oct 13 14:09:43 standalone.localdomain systemd[1]: session-41.scope: Deactivated successfully.
Oct 13 14:09:43 standalone.localdomain systemd-logind[45629]: Session 41 logged out. Waiting for processes to exit.
Oct 13 14:09:43 standalone.localdomain systemd-logind[45629]: Removed session 41.
Oct 13 14:09:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1038: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:44 standalone.localdomain ceph-mon[29756]: pgmap v1038: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:09:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1039: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:45 standalone.localdomain ceph-mon[29756]: pgmap v1039: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1040: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:47 standalone.localdomain sudo[146555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:09:47 standalone.localdomain sudo[146555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:09:47 standalone.localdomain sudo[146555]: pam_unix(sudo:session): session closed for user root
Oct 13 14:09:47 standalone.localdomain sudo[146570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:09:47 standalone.localdomain sudo[146570]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:09:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:09:48 standalone.localdomain podman[146585]: 2025-10-13 14:09:48.070202398 +0000 UTC m=+0.099142396 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T16:10:12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, name=rhosp17/openstack-cinder-scheduler, vcs-type=git, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, container_name=cinder_scheduler, 
summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-scheduler-container, managed_by=tripleo_ansible, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler)
Oct 13 14:09:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:09:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:09:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:09:48 standalone.localdomain podman[146585]: 2025-10-13 14:09:48.094611081 +0000 UTC m=+0.123551069 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, io.openshift.expose-services=, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T16:10:12, name=rhosp17/openstack-cinder-scheduler, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, container_name=cinder_scheduler, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-scheduler-container, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:09:48 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:09:48 standalone.localdomain podman[146606]: 2025-10-13 14:09:48.174826553 +0000 UTC m=+0.084582108 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-07-21T13:27:15, release=1, tcib_managed=true, vcs-type=git, config_id=tripleo_step3, architecture=x86_64, container_name=iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid)
Oct 13 14:09:48 standalone.localdomain podman[146607]: 2025-10-13 14:09:48.239701343 +0000 UTC m=+0.140422329 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, name=rhosp17/openstack-keystone, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, container_name=keystone, vcs-type=git, build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', 
'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:09:48 standalone.localdomain podman[146616]: 2025-10-13 14:09:48.300159376 +0000 UTC m=+0.197484938 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=cinder_api_cron, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.openshift.expose-services=, name=rhosp17/openstack-cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T15:58:55, architecture=x86_64, 
vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-api-container)
Oct 13 14:09:48 standalone.localdomain podman[146606]: 2025-10-13 14:09:48.310608698 +0000 UTC m=+0.220364233 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, distribution-scope=public, com.redhat.component=openstack-iscsid-container, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, 
container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:09:48 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:09:48 standalone.localdomain podman[146607]: 2025-10-13 14:09:48.327089676 +0000 UTC m=+0.227810612 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-keystone-container, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, release=1, build-date=2025-07-21T13:27:18, container_name=keystone, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:09:48 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:09:48 standalone.localdomain podman[146616]: 2025-10-13 14:09:48.384286269 +0000 UTC m=+0.281611861 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-cinder-api, com.redhat.component=openstack-cinder-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T15:58:55, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, architecture=x86_64, version=17.1.9, container_name=cinder_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vendor=Red Hat, Inc., release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:09:48 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:09:48 standalone.localdomain sudo[146570]: pam_unix(sudo:session): session closed for user root
Oct 13 14:09:48 standalone.localdomain ceph-mon[29756]: pgmap v1040: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:09:48 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:09:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:09:48 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:09:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:09:48 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:09:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:09:48 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:09:48 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 1047f7cc-747a-42eb-9021-b53327423033 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:09:48 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 1047f7cc-747a-42eb-9021-b53327423033 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:09:48 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 1047f7cc-747a-42eb-9021-b53327423033 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:09:48 standalone.localdomain sudo[146793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:09:48 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 47 completed events
Oct 13 14:09:48 standalone.localdomain sudo[146793]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:09:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:09:48 standalone.localdomain sudo[146793]: pam_unix(sudo:session): session closed for user root
Oct 13 14:09:48 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:09:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1041: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:49 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:09:49 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:09:49 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:09:49 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:09:49 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:09:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:09:50 standalone.localdomain runuser[146828]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:09:50 standalone.localdomain ceph-mon[29756]: pgmap v1041: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:51 standalone.localdomain runuser[146828]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:09:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1042: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:51 standalone.localdomain runuser[146897]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:09:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:09:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:09:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:09:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:09:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:09:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:09:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:09:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:09:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:09:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:09:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:09:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:09:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:09:51 standalone.localdomain ceph-mon[29756]: pgmap v1042: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:51 standalone.localdomain systemd[1]: tmp-crun.z9GpmR.mount: Deactivated successfully.
Oct 13 14:09:51 standalone.localdomain podman[146944]: 2025-10-13 14:09:51.908211733 +0000 UTC m=+0.155969348 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-heat-engine-container, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, container_name=heat_engine, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-engine, architecture=x86_64, build-date=2025-07-21T15:44:11, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1)
Oct 13 14:09:51 standalone.localdomain podman[146944]: 2025-10-13 14:09:51.975987001 +0000 UTC m=+0.223744606 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, distribution-scope=public, name=rhosp17/openstack-heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, com.redhat.component=openstack-heat-engine-container, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, container_name=heat_engine, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:09:51 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:09:51 standalone.localdomain podman[146957]: 2025-10-13 14:09:51.938667421 +0000 UTC m=+0.179666188 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, vcs-type=git, com.redhat.component=openstack-memcached-container, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, build-date=2025-07-21T12:58:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-memcached, summary=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, container_name=memcached, release=1, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:09:52 standalone.localdomain podman[146957]: 2025-10-13 14:09:52.021750532 +0000 UTC m=+0.262749309 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, release=1, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-memcached, summary=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, build-date=2025-07-21T12:58:43, container_name=memcached, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe)
Oct 13 14:09:52 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:09:52 standalone.localdomain podman[146990]: 2025-10-13 14:09:51.95970605 +0000 UTC m=+0.175290004 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-heat-api-container, name=rhosp17/openstack-heat-api, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, container_name=heat_api, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git)
Oct 13 14:09:52 standalone.localdomain podman[146977]: 2025-10-13 14:09:52.109968221 +0000 UTC m=+0.326402931 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, name=rhosp17/openstack-horizon, release=1, io.openshift.expose-services=, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, description=Red Hat OpenStack Platform 17.1 horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, version=17.1.9, tcib_managed=true, build-date=2025-07-21T13:58:15, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-horizon-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, container_name=horizon, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 14:09:52 standalone.localdomain podman[146975]: 2025-10-13 14:09:51.961810094 +0000 UTC m=+0.184976372 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cron, name=rhosp17/openstack-heat-api, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, build-date=2025-07-21T15:56:26, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, io.openshift.expose-services=)
Oct 13 14:09:52 standalone.localdomain podman[146973]: 2025-10-13 14:09:52.124303803 +0000 UTC m=+0.358387598 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, build-date=2025-07-21T16:06:43, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=manila_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 manila-api, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-manila-api, com.redhat.component=openstack-manila-api-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, release=1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 manila-api)
Oct 13 14:09:52 standalone.localdomain podman[146963]: 2025-10-13 14:09:51.878241139 +0000 UTC m=+0.112169579 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, tcib_managed=true, name=rhosp17/openstack-cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, config_id=tripleo_step4, com.redhat.component=openstack-cinder-api-container, description=Red Hat OpenStack Platform 17.1 cinder-api, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, build-date=2025-07-21T15:58:55, container_name=cinder_api, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 14:09:52 standalone.localdomain podman[146942]: 2025-10-13 14:09:52.039756537 +0000 UTC m=+0.297711317 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api-cfn, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-cfn-container, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.openshift.expose-services=, release=1, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cfn, build-date=2025-07-21T14:49:55, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:09:52 standalone.localdomain podman[146975]: 2025-10-13 14:09:52.147532329 +0000 UTC m=+0.370698607 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, summary=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:56:26, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, architecture=x86_64, container_name=heat_api_cron, version=17.1.9, com.redhat.component=openstack-heat-api-container, release=1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4)
Oct 13 14:09:52 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:09:52 standalone.localdomain podman[146963]: 2025-10-13 14:09:52.165822783 +0000 UTC m=+0.399751233 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, name=rhosp17/openstack-cinder-api, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, container_name=cinder_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T15:58:55, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-cinder-api-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Oct 13 14:09:52 standalone.localdomain podman[146942]: 2025-10-13 14:09:52.173827169 +0000 UTC m=+0.431781979 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, config_id=tripleo_step4, release=1, build-date=2025-07-21T14:49:55, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, container_name=heat_api_cfn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible)
Oct 13 14:09:52 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:09:52 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:09:52 standalone.localdomain podman[147002]: 2025-10-13 14:09:51.981571923 +0000 UTC m=+0.196862138 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, distribution-scope=public, release=1, version=17.1.9, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, architecture=x86_64, name=rhosp17/openstack-cron, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible)
Oct 13 14:09:52 standalone.localdomain podman[146990]: 2025-10-13 14:09:52.194512307 +0000 UTC m=+0.410096301 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, build-date=2025-07-21T15:56:26, io.openshift.expose-services=, release=1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-container, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-api, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, summary=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, container_name=heat_api, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:09:52 standalone.localdomain podman[146997]: 2025-10-13 14:09:52.10083369 +0000 UTC m=+0.314025290 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:17, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, tcib_managed=true, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, container_name=nova_conductor, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-conductor, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-conductor-container, description=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:09:52 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:09:52 standalone.localdomain podman[146943]: 2025-10-13 14:09:52.014787947 +0000 UTC m=+0.270697293 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, tcib_managed=true, build-date=2025-07-21T15:56:28, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, container_name=manila_scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, name=rhosp17/openstack-manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, com.redhat.component=openstack-manila-scheduler-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible)
Oct 13 14:09:52 standalone.localdomain podman[146994]: 2025-10-13 14:09:52.07294532 +0000 UTC m=+0.290499885 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, build-date=2025-07-21T15:44:03, description=Red Hat OpenStack Platform 17.1 neutron-server, container_name=neutron_api, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-neutron-server-container, config_id=tripleo_step4, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:09:52 standalone.localdomain podman[146997]: 2025-10-13 14:09:52.231074533 +0000 UTC m=+0.444266123 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-conductor-container, description=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, managed_by=tripleo_ansible, container_name=nova_conductor, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T15:44:17, summary=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.license_terms=https://www.redhat.com/agreements, release=1)
Oct 13 14:09:52 standalone.localdomain podman[146973]: 2025-10-13 14:09:52.236725618 +0000 UTC m=+0.470809403 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, release=1, distribution-scope=public, container_name=manila_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, batch=17.1_20250721.1, com.redhat.component=openstack-manila-api-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T16:06:43, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 manila-api, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-manila-api)
Oct 13 14:09:52 standalone.localdomain podman[146956]: 2025-10-13 14:09:51.981943996 +0000 UTC m=+0.226343298 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-scheduler-container, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=nova_scheduler, build-date=2025-07-21T16:02:54, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, config_id=tripleo_step4, name=rhosp17/openstack-nova-scheduler, release=1, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, architecture=x86_64, distribution-scope=public)
Oct 13 14:09:52 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:09:52 standalone.localdomain podman[146943]: 2025-10-13 14:09:52.245273152 +0000 UTC m=+0.501182478 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-manila-scheduler-container, container_name=manila_scheduler, name=rhosp17/openstack-manila-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, version=17.1.9, build-date=2025-07-21T15:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible)
Oct 13 14:09:52 standalone.localdomain podman[146977]: 2025-10-13 14:09:52.249651886 +0000 UTC m=+0.466086656 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-horizon-container, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, description=Red Hat OpenStack Platform 17.1 horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, container_name=horizon, managed_by=tripleo_ansible, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-horizon, release=1, architecture=x86_64, version=17.1.9, build-date=2025-07-21T13:58:15)
Oct 13 14:09:52 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:09:52 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:09:52 standalone.localdomain podman[147002]: 2025-10-13 14:09:52.268411035 +0000 UTC m=+0.483701290 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Oct 13 14:09:52 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:09:52 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:09:52 standalone.localdomain podman[146994]: 2025-10-13 14:09:52.304543009 +0000 UTC m=+0.522097614 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=neutron_api, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-server-container, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, build-date=2025-07-21T15:44:03, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, release=1, name=rhosp17/openstack-neutron-server)
Oct 13 14:09:52 standalone.localdomain podman[146956]: 2025-10-13 14:09:52.31953419 +0000 UTC m=+0.563933492 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-nova-scheduler, batch=17.1_20250721.1, container_name=nova_scheduler, vcs-type=git, build-date=2025-07-21T16:02:54, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public)
Oct 13 14:09:52 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:09:52 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:09:52 standalone.localdomain runuser[146897]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:09:52 standalone.localdomain runuser[147252]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:09:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:09:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:09:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:09:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:09:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:09:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:09:53 standalone.localdomain runuser[147252]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:09:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1043: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:53 standalone.localdomain systemd[1]: Stopping User Manager for UID 0...
Oct 13 14:09:53 standalone.localdomain systemd[22830]: Activating special unit Exit the Session...
Oct 13 14:09:53 standalone.localdomain systemd[22830]: Removed slice User Background Tasks Slice.
Oct 13 14:09:53 standalone.localdomain systemd[22830]: Stopped target Main User Target.
Oct 13 14:09:53 standalone.localdomain systemd[22830]: Stopped target Basic System.
Oct 13 14:09:53 standalone.localdomain systemd[22830]: Stopped target Paths.
Oct 13 14:09:53 standalone.localdomain systemd[22830]: Stopped target Sockets.
Oct 13 14:09:53 standalone.localdomain systemd[22830]: Stopped target Timers.
Oct 13 14:09:53 standalone.localdomain systemd[22830]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 14:09:53 standalone.localdomain systemd[22830]: Closed D-Bus User Message Bus Socket.
Oct 13 14:09:53 standalone.localdomain systemd[22830]: Stopped Create User's Volatile Files and Directories.
Oct 13 14:09:53 standalone.localdomain systemd[22830]: Removed slice User Application Slice.
Oct 13 14:09:53 standalone.localdomain systemd[22830]: Reached target Shutdown.
Oct 13 14:09:53 standalone.localdomain systemd[22830]: Finished Exit the Session.
Oct 13 14:09:53 standalone.localdomain systemd[22830]: Reached target Exit the Session.
Oct 13 14:09:53 standalone.localdomain systemd[1]: user@0.service: Deactivated successfully.
Oct 13 14:09:53 standalone.localdomain systemd[1]: Stopped User Manager for UID 0.
Oct 13 14:09:53 standalone.localdomain systemd[1]: user@0.service: Consumed 5.177s CPU time, no IO.
Oct 13 14:09:53 standalone.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 13 14:09:53 standalone.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 13 14:09:53 standalone.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 13 14:09:53 standalone.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 13 14:09:53 standalone.localdomain systemd[1]: Removed slice User Slice of UID 0.
Oct 13 14:09:53 standalone.localdomain systemd[1]: user-0.slice: Consumed 28min 35.786s CPU time.
Oct 13 14:09:54 standalone.localdomain ceph-mon[29756]: pgmap v1043: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:09:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:09:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:09:54 standalone.localdomain sshd[147398]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:09:54 standalone.localdomain sshd[147398]: error: kex_exchange_identification: Connection closed by remote host
Oct 13 14:09:54 standalone.localdomain sshd[147398]: Connection closed by 192.168.122.11 port 48194
Oct 13 14:09:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:09:54 standalone.localdomain podman[147365]: 2025-10-13 14:09:54.802057527 +0000 UTC m=+0.072259408 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, container_name=swift_object_server, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, config_id=tripleo_step4, build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:09:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:09:54 standalone.localdomain podman[147368]: 2025-10-13 14:09:54.869912338 +0000 UTC m=+0.133668610 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9)
Oct 13 14:09:54 standalone.localdomain podman[147366]: 2025-10-13 14:09:54.905759593 +0000 UTC m=+0.170411453 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T15:54:32, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, container_name=swift_container_server, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, 
distribution-scope=public, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 14:09:54 standalone.localdomain systemd[1]: tmp-crun.vn4ZM7.mount: Deactivated successfully.
Oct 13 14:09:54 standalone.localdomain podman[147417]: 2025-10-13 14:09:54.944155057 +0000 UTC m=+0.072440064 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, architecture=x86_64, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:09:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:09:54 standalone.localdomain podman[147365]: 2025-10-13 14:09:54.986815622 +0000 UTC m=+0.257017403 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, 
vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:09:54 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:09:55 standalone.localdomain podman[147459]: 2025-10-13 14:09:55.032718087 +0000 UTC m=+0.071427103 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git, io.buildah.version=1.33.12, container_name=nova_vnc_proxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-novncproxy-container, build-date=2025-07-21T15:24:10)
Oct 13 14:09:55 standalone.localdomain podman[147368]: 2025-10-13 14:09:55.057731447 +0000 UTC m=+0.321487669 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, description=Red Hat OpenStack Platform 17.1 
swift-account, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account)
Oct 13 14:09:55 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:09:55 standalone.localdomain podman[147366]: 2025-10-13 14:09:55.095869733 +0000 UTC m=+0.360521623 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, version=17.1.9, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:09:55 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:09:55 standalone.localdomain sshd[147490]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:09:55 standalone.localdomain sshd[147492]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:09:55 standalone.localdomain sshd[147494]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:09:55 standalone.localdomain sshd[147491]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:09:55 standalone.localdomain sshd[147493]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:09:55 standalone.localdomain sshd[147490]: Connection closed by 192.168.122.11 port 48206 [preauth]
Oct 13 14:09:55 standalone.localdomain sshd[147491]: Connection closed by 192.168.122.11 port 48210 [preauth]
Oct 13 14:09:55 standalone.localdomain sshd[147492]: Connection closed by 192.168.122.11 port 48212 [preauth]
Oct 13 14:09:55 standalone.localdomain sshd[147493]: Unable to negotiate with 192.168.122.11 port 48228: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 13 14:09:55 standalone.localdomain sshd[147494]: Unable to negotiate with 192.168.122.11 port 48240: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 13 14:09:55 standalone.localdomain podman[147417]: 2025-10-13 14:09:55.274569361 +0000 UTC m=+0.402854388 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, container_name=nova_migration_target, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:09:55 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:09:55 standalone.localdomain podman[147459]: 2025-10-13 14:09:55.288624954 +0000 UTC m=+0.327333940 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, build-date=2025-07-21T15:24:10, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, com.redhat.component=openstack-nova-novncproxy-container, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git, container_name=nova_vnc_proxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, batch=17.1_20250721.1, tcib_managed=true, io.openshift.expose-services=, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, name=rhosp17/openstack-nova-novncproxy, io.buildah.version=1.33.12)
Oct 13 14:09:55 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:09:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1044: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:55 standalone.localdomain ceph-mon[29756]: pgmap v1044: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:56 standalone.localdomain sshd[147514]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:09:56 standalone.localdomain sshd[147514]: Accepted publickey for root from 192.168.122.11 port 48244 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:09:56 standalone.localdomain systemd[1]: Created slice User Slice of UID 0.
Oct 13 14:09:56 standalone.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 13 14:09:56 standalone.localdomain systemd-logind[45629]: New session 42 of user root.
Oct 13 14:09:56 standalone.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 13 14:09:56 standalone.localdomain systemd[1]: Starting User Manager for UID 0...
Oct 13 14:09:56 standalone.localdomain systemd[147530]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:09:56 standalone.localdomain systemd[147530]: Queued start job for default target Main User Target.
Oct 13 14:09:56 standalone.localdomain systemd[147530]: Created slice User Application Slice.
Oct 13 14:09:56 standalone.localdomain systemd[147530]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 13 14:09:56 standalone.localdomain systemd[147530]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 14:09:56 standalone.localdomain systemd[147530]: Reached target Paths.
Oct 13 14:09:56 standalone.localdomain systemd[147530]: Reached target Timers.
Oct 13 14:09:56 standalone.localdomain systemd[147530]: Starting D-Bus User Message Bus Socket...
Oct 13 14:09:56 standalone.localdomain systemd[147530]: Starting Create User's Volatile Files and Directories...
Oct 13 14:09:56 standalone.localdomain systemd[147530]: Listening on D-Bus User Message Bus Socket.
Oct 13 14:09:56 standalone.localdomain systemd[147530]: Reached target Sockets.
Oct 13 14:09:56 standalone.localdomain systemd[147530]: Finished Create User's Volatile Files and Directories.
Oct 13 14:09:56 standalone.localdomain systemd[147530]: Reached target Basic System.
Oct 13 14:09:56 standalone.localdomain systemd[147530]: Reached target Main User Target.
Oct 13 14:09:56 standalone.localdomain systemd[147530]: Startup finished in 237ms.
Oct 13 14:09:56 standalone.localdomain systemd[1]: Started User Manager for UID 0.
Oct 13 14:09:56 standalone.localdomain systemd[1]: Started Session 42 of User root.
Oct 13 14:09:56 standalone.localdomain sshd[147514]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:09:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:09:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1110252632' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:09:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:09:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1110252632' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:09:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1110252632' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:09:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1110252632' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:09:57 standalone.localdomain python3[147780]: ansible-ansible.legacy.stat Invoked with path=/root/tripleo-standalone-passwords.yaml follow=True get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:09:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1045: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:58 standalone.localdomain python3[147992]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=True get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:09:58 standalone.localdomain ceph-mon[29756]: pgmap v1045: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:09:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:09:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:09:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:09:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:09:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:09:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:09:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:09:58 standalone.localdomain podman[148114]: 2025-10-13 14:09:58.869037309 +0000 UTC m=+0.116017277 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.component=openstack-swift-proxy-server-container, 
tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, container_name=swift_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, io.buildah.version=1.33.12, distribution-scope=public)
Oct 13 14:09:58 standalone.localdomain podman[148132]: 2025-10-13 14:09:58.925240391 +0000 UTC m=+0.148004143 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, release=1, architecture=x86_64, version=17.1.9, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Oct 13 14:09:58 standalone.localdomain systemd[1]: tmp-crun.hTYZni.mount: Deactivated successfully.
Oct 13 14:09:58 standalone.localdomain podman[148138]: 2025-10-13 14:09:58.984330042 +0000 UTC m=+0.207432634 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., container_name=ovn_controller, io.openshift.expose-services=)
Oct 13 14:09:59 standalone.localdomain podman[148115]: 2025-10-13 14:09:59.027529704 +0000 UTC m=+0.270561720 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-nova-api, vendor=Red Hat, Inc., build-date=2025-07-21T16:05:11, batch=17.1_20250721.1, container_name=nova_metadata, tcib_managed=true, com.redhat.component=openstack-nova-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:09:59 standalone.localdomain podman[148132]: 2025-10-13 14:09:59.035388466 +0000 UTC m=+0.258152148 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Oct 13 14:09:59 standalone.localdomain podman[148138]: 2025-10-13 14:09:59.04072039 +0000 UTC m=+0.263822942 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1)
Oct 13 14:09:59 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:09:59 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:09:59 standalone.localdomain podman[148113]: 2025-10-13 14:09:59.072079967 +0000 UTC m=+0.324413190 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, com.redhat.component=openstack-placement-api-container, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 placement-api, distribution-scope=public, io.openshift.expose-services=, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, 
vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=placement_api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, name=rhosp17/openstack-placement-api, summary=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true)
Oct 13 14:09:59 standalone.localdomain podman[148114]: 2025-10-13 14:09:59.096627303 +0000 UTC m=+0.343607301 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, container_name=swift_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, vcs-type=git, name=rhosp17/openstack-swift-proxy-server, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, build-date=2025-07-21T14:48:37)
Oct 13 14:09:59 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:09:59 standalone.localdomain podman[148126]: 2025-10-13 14:09:59.145848111 +0000 UTC m=+0.382600384 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, com.redhat.component=openstack-glance-api-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, config_id=tripleo_step4, container_name=glance_api_internal)
Oct 13 14:09:59 standalone.localdomain podman[148113]: 2025-10-13 14:09:59.149337288 +0000 UTC m=+0.401670441 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:12, batch=17.1_20250721.1, container_name=placement_api, version=17.1.9, description=Red Hat OpenStack Platform 17.1 placement-api, name=rhosp17/openstack-placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-placement-api-container, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', 
'/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67)
Oct 13 14:09:59 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:09:59 standalone.localdomain podman[148115]: 2025-10-13 14:09:59.198350349 +0000 UTC m=+0.441382435 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, container_name=nova_metadata, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, release=1, build-date=2025-07-21T16:05:11, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, 
com.redhat.component=openstack-nova-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:09:59 standalone.localdomain podman[148112]: 2025-10-13 14:09:59.147558873 +0000 UTC m=+0.399900166 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=glance_api_cron, tcib_managed=true, com.redhat.component=openstack-glance-api-container, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, release=1)
Oct 13 14:09:59 standalone.localdomain podman[148112]: 2025-10-13 14:09:59.231866232 +0000 UTC m=+0.484207505 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-glance-api-container, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, build-date=2025-07-21T13:58:20, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, name=rhosp17/openstack-glance-api, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, managed_by=tripleo_ansible)
Oct 13 14:09:59 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:09:59 standalone.localdomain podman[148144]: 2025-10-13 14:09:59.245801841 +0000 UTC m=+0.468474070 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, release=1, vcs-type=git, build-date=2025-07-21T13:58:20, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-glance-api-container, container_name=glance_api, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9)
Oct 13 14:09:59 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:09:59 standalone.localdomain podman[148126]: 2025-10-13 14:09:59.323035732 +0000 UTC m=+0.559788115 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, build-date=2025-07-21T13:58:20, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_internal, release=1, 
distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, batch=17.1_20250721.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:09:59 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:09:59 standalone.localdomain python3[148401]: ansible-ansible.builtin.lineinfile Invoked with path=/root/.ssh/authorized_keys insertafter=EOF value=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjKugkjRYwasypXRQCjmNKsHyHQbYeza3+zMIuVtFa4xUmQUXK0ObIX3Y/0LubiVeDts2rPaGbc9m50+t0OC8pr9tHfJHL+an7sZBNDTxWgi6AhO6cmswpRXQVxr0TMJm2Yc0rp+CK6mz3/ZM7zdzR1Opkh9tZn4B22qIoMSuDScNY60hiNYzPyISgXDNKoldJTvsCOZ7PcFmgSEu63nJ5tdUgGEmNF9gfdIC6CSgSbctPtA8pJsitAi/BiV4UG6pJDkogUkJCmgebHQae79F4rKmfEKmMPHuPO7/r4fnWc4jC30NY1XMU2FcsrhxM9xkuipqFEnnhq0Ak0KA2d5NTULF+1VHO47Q6hZCvlI0WhRATmCt7deLr+etP3mD2qbeb7w/RwZFdPvXRKp/Z4GrBq+tplzNxWIodxqx5XjWvhwQvXEu0+ISfyehOO/PJ7/HypWFFgsG8pUdWtC5e3O8PiN6WfEJR4YicNADfAVzK0lGReUN8LRwRtwxh7sjfwpc= zuul-build-sshkey line=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjKugkjRYwasypXRQCjmNKsHyHQbYeza3+zMIuVtFa4xUmQUXK0ObIX3Y/0LubiVeDts2rPaGbc9m50+t0OC8pr9tHfJHL+an7sZBNDTxWgi6AhO6cmswpRXQVxr0TMJm2Yc0rp+CK6mz3/ZM7zdzR1Opkh9tZn4B22qIoMSuDScNY60hiNYzPyISgXDNKoldJTvsCOZ7PcFmgSEu63nJ5tdUgGEmNF9gfdIC6CSgSbctPtA8pJsitAi/BiV4UG6pJDkogUkJCmgebHQae79F4rKmfEKmMPHuPO7/r4fnWc4jC30NY1XMU2FcsrhxM9xkuipqFEnnhq0Ak0KA2d5NTULF+1VHO47Q6hZCvlI0WhRATmCt7deLr+etP3mD2qbeb7w/RwZFdPvXRKp/Z4GrBq+tplzNxWIodxqx5XjWvhwQvXEu0+ISfyehOO/PJ7/HypWFFgsG8pUdWtC5e3O8PiN6WfEJR4YicNADfAVzK0lGReUN8LRwRtwxh7sjfwpc= zuul-build-sshkey state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:09:59 standalone.localdomain podman[148144]: 2025-10-13 14:09:59.455115993 +0000 UTC m=+0.677788242 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, release=1, version=17.1.9, container_name=glance_api, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vendor=Red Hat, Inc., 
io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, build-date=2025-07-21T13:58:20)
Oct 13 14:09:59 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:09:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1046: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:09:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:10:00 standalone.localdomain ceph-mon[29756]: pgmap v1046: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:01 standalone.localdomain podman[148441]: 2025-10-13 14:10:01.389002369 +0000 UTC m=+0.095009340 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, name=rhosp17/openstack-mariadb, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, distribution-scope=public)
Oct 13 14:10:01 standalone.localdomain podman[148441]: 2025-10-13 14:10:01.395969883 +0000 UTC m=+0.101976794 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, build-date=2025-07-21T12:58:45, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 14:10:01 standalone.localdomain podman[148475]: 2025-10-13 14:10:01.587755274 +0000 UTC m=+0.094164803 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-haproxy-container, name=rhosp17/openstack-haproxy, release=1, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T13:08:11, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 14:10:01 standalone.localdomain podman[148475]: 2025-10-13 14:10:01.622082832 +0000 UTC m=+0.128492311 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T13:08:11, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-haproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, release=1)
Oct 13 14:10:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1047: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:01 standalone.localdomain podman[148514]: 2025-10-13 14:10:01.749803778 +0000 UTC m=+0.091700306 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, version=17.1.9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-rabbitmq, build-date=2025-07-21T13:08:05, summary=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-rabbitmq-container, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 14:10:01 standalone.localdomain podman[148514]: 2025-10-13 14:10:01.783112025 +0000 UTC m=+0.125008523 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, distribution-scope=public, com.redhat.component=openstack-rabbitmq-container, build-date=2025-07-21T13:08:05, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-rabbitmq, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=)
Oct 13 14:10:02 standalone.localdomain ceph-mon[29756]: pgmap v1047: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:10:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:10:02 standalone.localdomain podman[148554]: 2025-10-13 14:10:02.845478909 +0000 UTC m=+0.101160428 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, release=1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, io.openshift.expose-services=, distribution-scope=public, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, container_name=keystone_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:10:02 standalone.localdomain podman[148555]: 2025-10-13 14:10:02.888578368 +0000 UTC m=+0.143469322 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T16:05:11, description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, container_name=nova_api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vcs-type=git, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, name=rhosp17/openstack-nova-api, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-api-container, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:10:02 standalone.localdomain podman[148554]: 2025-10-13 14:10:02.9113569 +0000 UTC m=+0.167038409 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T13:27:18, description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, config_id=tripleo_step3, distribution-scope=public, container_name=keystone_cron, maintainer=OpenStack TripleO Team, 
io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, architecture=x86_64, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-keystone, com.redhat.component=openstack-keystone-container, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:10:02 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:10:02 standalone.localdomain podman[148555]: 2025-10-13 14:10:02.942904832 +0000 UTC m=+0.197795806 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-nova-api, release=1, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, batch=17.1_20250721.1, com.redhat.component=openstack-nova-api-container)
Oct 13 14:10:02 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:10:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1048: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:03 standalone.localdomain runuser[148615]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:10:04 standalone.localdomain ceph-mon[29756]: pgmap v1048: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:10:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:10:04 standalone.localdomain runuser[148615]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:10:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:10:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:10:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:10:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:10:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:10:04 standalone.localdomain systemd[1]: tmp-crun.DNkp7i.mount: Deactivated successfully.
Oct 13 14:10:04 standalone.localdomain systemd[1]: tmp-crun.eKTUOS.mount: Deactivated successfully.
Oct 13 14:10:04 standalone.localdomain podman[148678]: 2025-10-13 14:10:04.827701435 +0000 UTC m=+0.084848406 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, container_name=barbican_api, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, vcs-type=git, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 
barbican-api, com.redhat.component=openstack-barbican-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-barbican-api, io.openshift.expose-services=, io.buildah.version=1.33.12)
Oct 13 14:10:04 standalone.localdomain podman[148709]: 2025-10-13 14:10:04.891983087 +0000 UTC m=+0.130456133 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-api-container, description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, name=rhosp17/openstack-nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T16:05:11)
Oct 13 14:10:04 standalone.localdomain podman[148676]: 2025-10-13 14:10:04.848087523 +0000 UTC m=+0.110512907 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', 
'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:54, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-dhcp-agent-container, container_name=neutron_dhcp, vcs-type=git, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:10:04 standalone.localdomain runuser[148795]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:10:04 standalone.localdomain podman[148678]: 2025-10-13 14:10:04.918800924 +0000 UTC m=+0.175947885 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-barbican-api-container, container_name=barbican_api, release=1, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-barbican-api, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 barbican-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, build-date=2025-07-21T15:22:44, description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:10:04 standalone.localdomain podman[148709]: 2025-10-13 14:10:04.924782968 +0000 UTC m=+0.163255944 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, batch=17.1_20250721.1, distribution-scope=public, name=rhosp17/openstack-nova-api, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-api-container, build-date=2025-07-21T16:05:11, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api_cron, 
description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:10:04 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:10:04 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:10:04 standalone.localdomain podman[148676]: 2025-10-13 14:10:04.933377903 +0000 UTC m=+0.195803277 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., container_name=neutron_dhcp, architecture=x86_64, version=17.1.9, name=rhosp17/openstack-neutron-dhcp-agent, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:54, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9)
Oct 13 14:10:04 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:10:04 standalone.localdomain podman[148702]: 2025-10-13 14:10:04.99267367 +0000 UTC m=+0.234532690 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 barbican-worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, container_name=barbican_worker, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, com.redhat.component=openstack-barbican-worker-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T15:36:22, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1)
Oct 13 14:10:05 standalone.localdomain podman[148702]: 2025-10-13 14:10:05.015797783 +0000 UTC m=+0.257656743 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.buildah.version=1.33.12, release=1, tcib_managed=true, com.redhat.component=openstack-barbican-worker-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-barbican-worker, config_id=tripleo_step3, container_name=barbican_worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, build-date=2025-07-21T15:36:22, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:10:05 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:10:05 standalone.localdomain podman[148699]: 2025-10-13 14:10:04.969685482 +0000 UTC m=+0.210624173 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, build-date=2025-07-21T16:03:34, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-sriov-agent-container, batch=17.1_20250721.1, release=1, managed_by=tripleo_ansible, container_name=neutron_sriov_agent, name=rhosp17/openstack-neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vcs-type=git, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:10:05 standalone.localdomain podman[148684]: 2025-10-13 14:10:05.082633263 +0000 UTC m=+0.331753606 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, release=1, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-keystone-listener, architecture=x86_64, com.redhat.component=openstack-barbican-keystone-listener-container, build-date=2025-07-21T16:18:19, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, version=17.1.9, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, io.openshift.expose-services=, container_name=barbican_keystone_listener, io.buildah.version=1.33.12)
Oct 13 14:10:05 standalone.localdomain podman[148684]: 2025-10-13 14:10:05.11270358 +0000 UTC m=+0.361823963 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, build-date=2025-07-21T16:18:19, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-barbican-keystone-listener, release=1, container_name=barbican_keystone_listener, tcib_managed=true, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-barbican-keystone-listener-container, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 
barbican-keystone-listener, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12)
Oct 13 14:10:05 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:10:05 standalone.localdomain podman[148699]: 2025-10-13 14:10:05.156284312 +0000 UTC m=+0.397222983 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, tcib_managed=true, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, release=1, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, version=17.1.9, container_name=neutron_sriov_agent, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, build-date=2025-07-21T16:03:34, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc.)
Oct 13 14:10:05 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:10:05 standalone.localdomain podman[148701]: 2025-10-13 14:10:05.188203047 +0000 UTC m=+0.424561137 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, name=rhosp17/openstack-cinder-volume, com.redhat.component=openstack-cinder-volume-container, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, build-date=2025-07-21T16:13:39, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, version=17.1.9, tcib_managed=true)
Oct 13 14:10:05 standalone.localdomain podman[148701]: 2025-10-13 14:10:05.220058108 +0000 UTC m=+0.456416208 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, io.buildah.version=1.33.12, build-date=2025-07-21T16:13:39, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, com.redhat.component=openstack-cinder-volume-container, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cinder-volume, name=rhosp17/openstack-cinder-volume, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:10:05 standalone.localdomain runuser[148795]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:10:05 standalone.localdomain runuser[148906]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:10:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1049: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:05 standalone.localdomain ceph-mon[29756]: pgmap v1049: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:06 standalone.localdomain account-server[114571]: 172.20.0.100 - - [13/Oct/2025:14:10:06 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txe4b36a5cefd6425fb8043-0068ed083e" "proxy-server 2" 0.0017 "-" 23 -
Oct 13 14:10:06 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txe4b36a5cefd6425fb8043-0068ed083e)
Oct 13 14:10:06 standalone.localdomain runuser[148906]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:10:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1050: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:08 standalone.localdomain ceph-mon[29756]: pgmap v1050: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1051: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:10:10 standalone.localdomain ceph-mon[29756]: pgmap v1051: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:10:10 standalone.localdomain podman[149262]: 2025-10-13 14:10:10.795506744 +0000 UTC m=+0.065293813 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, architecture=x86_64, container_name=ovn_cluster_northd, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T13:30:04, maintainer=OpenStack TripleO Team, config_id=ovn_cluster_northd, com.redhat.component=openstack-ovn-northd-container, io.openshift.expose-services=, name=rhosp17/openstack-ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, version=17.1.9)
Oct 13 14:10:10 standalone.localdomain podman[149262]: 2025-10-13 14:10:10.80508737 +0000 UTC m=+0.074874469 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, container_name=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-northd, tcib_managed=true, release=1, name=rhosp17/openstack-ovn-northd, build-date=2025-07-21T13:30:04, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, config_id=ovn_cluster_northd, com.redhat.component=openstack-ovn-northd-container, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team)
Oct 13 14:10:10 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:10:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1052: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:10:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:10:11 standalone.localdomain systemd[1]: tmp-crun.1s05mv.mount: Deactivated successfully.
Oct 13 14:10:11 standalone.localdomain podman[149290]: 2025-10-13 14:10:11.848212381 +0000 UTC m=+0.105163632 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, config_id=tripleo_step2, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, container_name=clustercheck, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
mariadb, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:10:11 standalone.localdomain podman[149290]: 2025-10-13 14:10:11.914962618 +0000 UTC m=+0.171913899 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-mariadb, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step2, release=1, distribution-scope=public, build-date=2025-07-21T12:58:45, container_name=clustercheck, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, 
com.redhat.component=openstack-mariadb-container, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 14:10:11 standalone.localdomain systemd[1]: tmp-crun.hYCOor.mount: Deactivated successfully.
Oct 13 14:10:11 standalone.localdomain podman[149289]: 2025-10-13 14:10:11.932638313 +0000 UTC m=+0.192671570 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, container_name=nova_compute, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc.)
Oct 13 14:10:11 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:10:11 standalone.localdomain podman[149289]: 2025-10-13 14:10:11.988972329 +0000 UTC m=+0.249005526 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container)
Oct 13 14:10:12 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:10:12 standalone.localdomain ceph-mon[29756]: pgmap v1052: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:10:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3562527442' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:10:13 standalone.localdomain sshd[149393]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:10:13 standalone.localdomain haproxy[70940]: 172.17.0.2:35772 [13/Oct/2025:14:10:13.103] placement placement/standalone.internalapi.localdomain 0/0/0/15/15 200 298 - - ---- 46/1/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/allocations HTTP/1.1"
Oct 13 14:10:13 standalone.localdomain sshd[149393]: Accepted publickey for zuul from 38.102.83.114 port 39250 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:10:13 standalone.localdomain systemd[1]: Starting User Manager for UID 1000...
Oct 13 14:10:13 standalone.localdomain systemd-logind[45629]: New session 44 of user zuul.
Oct 13 14:10:13 standalone.localdomain systemd[149397]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by (uid=0)
Oct 13 14:10:13 standalone.localdomain systemd[149397]: Queued start job for default target Main User Target.
Oct 13 14:10:13 standalone.localdomain systemd[149397]: Created slice User Application Slice.
Oct 13 14:10:13 standalone.localdomain systemd[149397]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 13 14:10:13 standalone.localdomain systemd[149397]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 14:10:13 standalone.localdomain systemd[149397]: Reached target Paths.
Oct 13 14:10:13 standalone.localdomain systemd[149397]: Reached target Timers.
Oct 13 14:10:13 standalone.localdomain systemd[149397]: Starting D-Bus User Message Bus Socket...
Oct 13 14:10:13 standalone.localdomain systemd[149397]: Starting Create User's Volatile Files and Directories...
Oct 13 14:10:13 standalone.localdomain systemd[149397]: Finished Create User's Volatile Files and Directories.
Oct 13 14:10:13 standalone.localdomain systemd[149397]: Listening on D-Bus User Message Bus Socket.
Oct 13 14:10:13 standalone.localdomain systemd[149397]: Reached target Sockets.
Oct 13 14:10:13 standalone.localdomain systemd[149397]: Reached target Basic System.
Oct 13 14:10:13 standalone.localdomain systemd[149397]: Reached target Main User Target.
Oct 13 14:10:13 standalone.localdomain systemd[149397]: Startup finished in 210ms.
Oct 13 14:10:13 standalone.localdomain systemd[1]: Started User Manager for UID 1000.
Oct 13 14:10:13 standalone.localdomain systemd[1]: Started Session 44 of User zuul.
Oct 13 14:10:13 standalone.localdomain sshd[149393]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Oct 13 14:10:13 standalone.localdomain sudo[149445]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-crubiqapitjaxduqhqfjfjebihkvhpxh ; /usr/bin/python3
Oct 13 14:10:13 standalone.localdomain sudo[149445]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:10:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:10:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1174396327' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:10:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1053: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:13 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3562527442' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:10:13 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1174396327' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:10:13 standalone.localdomain python3[149447]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 14:10:14 standalone.localdomain ceph-mon[29756]: pgmap v1053: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:14 standalone.localdomain systemd-journald[48591]: Field hash table of /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal has a fill level at 80.2 (267 of 333 items), suggesting rotation.
Oct 13 14:10:14 standalone.localdomain systemd-journald[48591]: /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 13 14:10:14 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 14:10:14 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 14:10:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:10:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1054: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:15 standalone.localdomain ceph-mon[29756]: pgmap v1054: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:16 standalone.localdomain sudo[149445]: pam_unix(sudo:session): session closed for user root
Oct 13 14:10:16 standalone.localdomain sudo[149624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dqjlsaiooxctaxmjkemmfekrmsgokfte ; /usr/bin/python3
Oct 13 14:10:16 standalone.localdomain sudo[149624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:10:17 standalone.localdomain runuser[149639]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:10:17 standalone.localdomain python3[149626]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 14:10:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1055: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:17 standalone.localdomain runuser[149639]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:10:17 standalone.localdomain runuser[149742]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:10:18 standalone.localdomain runuser[149742]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:10:18 standalone.localdomain runuser[149845]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:10:18 standalone.localdomain ceph-mon[29756]: pgmap v1055: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:10:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:10:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:10:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:10:18 standalone.localdomain podman[149906]: 2025-10-13 14:10:18.847625256 +0000 UTC m=+0.097297060 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:10:12, name=rhosp17/openstack-cinder-scheduler, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, 
com.redhat.component=openstack-cinder-scheduler-container, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, container_name=cinder_scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1)
Oct 13 14:10:18 standalone.localdomain systemd[1]: tmp-crun.kPaqT6.mount: Deactivated successfully.
Oct 13 14:10:18 standalone.localdomain podman[149899]: 2025-10-13 14:10:18.883201632 +0000 UTC m=+0.130933226 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, version=17.1.9, build-date=2025-07-21T15:58:55, summary=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-api-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=cinder_api_cron, 
name=rhosp17/openstack-cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:10:18 standalone.localdomain podman[149894]: 2025-10-13 14:10:18.892584432 +0000 UTC m=+0.151510181 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1, io.buildah.version=1.33.12, 
version=17.1.9, tcib_managed=true, vcs-type=git, architecture=x86_64, distribution-scope=public, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:10:18 standalone.localdomain podman[149906]: 2025-10-13 14:10:18.930694476 +0000 UTC m=+0.180366280 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, architecture=x86_64, build-date=2025-07-21T16:10:12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, distribution-scope=public, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, com.redhat.component=openstack-cinder-scheduler-container, vendor=Red Hat, Inc., 
io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=cinder_scheduler, name=rhosp17/openstack-cinder-scheduler, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, version=17.1.9)
Oct 13 14:10:18 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:10:18 standalone.localdomain podman[149899]: 2025-10-13 14:10:18.946189964 +0000 UTC m=+0.193921568 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=cinder_api_cron, build-date=2025-07-21T15:58:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-api)
Oct 13 14:10:18 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:10:18 standalone.localdomain podman[149894]: 2025-10-13 14:10:18.981998898 +0000 UTC m=+0.240924697 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, release=1, architecture=x86_64, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, version=17.1.9, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=)
Oct 13 14:10:18 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:10:19 standalone.localdomain podman[149895]: 2025-10-13 14:10:19.070352041 +0000 UTC m=+0.326141713 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, container_name=keystone, description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-keystone)
Oct 13 14:10:19 standalone.localdomain podman[149895]: 2025-10-13 14:10:19.135007153 +0000 UTC m=+0.390796855 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, managed_by=tripleo_ansible, com.redhat.component=openstack-keystone-container, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., release=1, container_name=keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, name=rhosp17/openstack-keystone, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:10:19 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:10:19 standalone.localdomain runuser[149845]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:10:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1056: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:10:20 standalone.localdomain ceph-mon[29756]: pgmap v1056: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:20 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 14:10:20 standalone.localdomain systemd[1]: Starting man-db-cache-update.service...
Oct 13 14:10:21 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 14:10:21 standalone.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 14:10:21 standalone.localdomain systemd[1]: Finished man-db-cache-update.service.
Oct 13 14:10:21 standalone.localdomain systemd[1]: run-r69ae6863b915498aba0d292dcf74332e.service: Deactivated successfully.
Oct 13 14:10:21 standalone.localdomain systemd[1]: run-r042bee12a81e4b00ab20eb736fd2aad4.service: Deactivated successfully.
Oct 13 14:10:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1057: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:21 standalone.localdomain sudo[149624]: pam_unix(sudo:session): session closed for user root
Oct 13 14:10:22 standalone.localdomain sudo[150231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fnzqmgrivfrjljrhxudttypjqtwytydg ; /usr/bin/python3
Oct 13 14:10:22 standalone.localdomain sudo[150231]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:10:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:10:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:10:22 standalone.localdomain podman[150238]: 2025-10-13 14:10:22.245463834 +0000 UTC m=+0.087128177 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, distribution-scope=public, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, build-date=2025-07-21T12:58:43, container_name=memcached, io.openshift.expose-services=, com.redhat.component=openstack-memcached-container, description=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, 
managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:10:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:10:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:10:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:10:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:10:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:10:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:10:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:10:22 standalone.localdomain podman[150238]: 2025-10-13 14:10:22.279353248 +0000 UTC m=+0.121017541 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vcs-type=git, build-date=2025-07-21T12:58:43, container_name=memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-memcached-container, description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:10:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:10:22 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:10:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:10:22 standalone.localdomain systemd[1]: tmp-crun.idLtqM.mount: Deactivated successfully.
Oct 13 14:10:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:10:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:10:22 standalone.localdomain python3[150236]: ansible-ansible.legacy.dnf Invoked with name=['openstack-tempest'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 14:10:22 standalone.localdomain podman[150237]: 2025-10-13 14:10:22.364567185 +0000 UTC m=+0.206323211 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T15:44:11, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, summary=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-engine, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4)
Oct 13 14:10:22 standalone.localdomain podman[150357]: 2025-10-13 14:10:22.420745826 +0000 UTC m=+0.060027291 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, name=rhosp17/openstack-nova-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.expose-services=, release=1, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=nova_scheduler, build-date=2025-07-21T16:02:54, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-scheduler-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler)
Oct 13 14:10:22 standalone.localdomain podman[150271]: 2025-10-13 14:10:22.439840415 +0000 UTC m=+0.169048551 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=heat_api, name=rhosp17/openstack-heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-heat-api-container, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Oct 13 14:10:22 standalone.localdomain podman[150316]: 2025-10-13 14:10:22.479247609 +0000 UTC m=+0.179634117 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, release=1, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, 
maintainer=OpenStack TripleO Team, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron)
Oct 13 14:10:22 standalone.localdomain podman[150237]: 2025-10-13 14:10:22.498886565 +0000 UTC m=+0.340642571 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, distribution-scope=public, release=1, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, summary=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4, container_name=heat_engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1)
Oct 13 14:10:22 standalone.localdomain podman[150268]: 2025-10-13 14:10:22.498225064 +0000 UTC m=+0.232170697 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, com.redhat.component=openstack-cinder-api-container, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, version=17.1.9, container_name=cinder_api, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, release=1, vcs-type=git, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:58:55, io.k8s.description=Red Hat OpenStack 
Platform 17.1 cinder-api, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, architecture=x86_64)
Oct 13 14:10:22 standalone.localdomain podman[150295]: 2025-10-13 14:10:22.405772275 +0000 UTC m=+0.118966558 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, build-date=2025-07-21T13:58:15, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, tcib_managed=true, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, vendor=Red Hat, Inc., name=rhosp17/openstack-horizon, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, managed_by=tripleo_ansible, container_name=horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-horizon-container, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 14:10:22 standalone.localdomain podman[150271]: 2025-10-13 14:10:22.51075666 +0000 UTC m=+0.239964796 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-heat-api, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, container_name=heat_api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat 
OpenStack Platform 17.1 heat-api, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-container, io.openshift.expose-services=)
Oct 13 14:10:22 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:10:22 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:10:22 standalone.localdomain podman[150268]: 2025-10-13 14:10:22.57724279 +0000 UTC m=+0.311188423 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_api, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, name=rhosp17/openstack-cinder-api, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, architecture=x86_64, build-date=2025-07-21T15:58:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-cinder-api-container, maintainer=OpenStack TripleO Team)
Oct 13 14:10:22 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:10:22 standalone.localdomain podman[150357]: 2025-10-13 14:10:22.607721469 +0000 UTC m=+0.247002944 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, name=rhosp17/openstack-nova-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, io.buildah.version=1.33.12, container_name=nova_scheduler, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-scheduler-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, batch=17.1_20250721.1, build-date=2025-07-21T16:02:54, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, tcib_managed=true, vcs-type=git)
Oct 13 14:10:22 standalone.localdomain podman[150284]: 2025-10-13 14:10:22.562785684 +0000 UTC m=+0.283008514 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-conductor, container_name=nova_conductor, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.component=openstack-nova-conductor-container, config_id=tripleo_step4, build-date=2025-07-21T15:44:17, summary=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-type=git, release=1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-nova-conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team)
Oct 13 14:10:22 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:10:22 standalone.localdomain podman[150270]: 2025-10-13 14:10:22.623677381 +0000 UTC m=+0.344529130 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, container_name=heat_api_cron, batch=17.1_20250721.1, name=rhosp17/openstack-heat-api, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1, tcib_managed=true, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:10:22 standalone.localdomain podman[150278]: 2025-10-13 14:10:22.52631026 +0000 UTC m=+0.246377105 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=manila_scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, managed_by=tripleo_ansible, 
version=17.1.9, com.redhat.component=openstack-manila-scheduler-container, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T15:56:28, name=rhosp17/openstack-manila-scheduler, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d)
Oct 13 14:10:22 standalone.localdomain podman[150358]: 2025-10-13 14:10:22.583880374 +0000 UTC m=+0.213053158 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, version=17.1.9, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, container_name=neutron_api, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:03, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-server)
Oct 13 14:10:22 standalone.localdomain podman[150295]: 2025-10-13 14:10:22.638444066 +0000 UTC m=+0.351638369 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 horizon, release=1, io.openshift.expose-services=, build-date=2025-07-21T13:58:15, io.buildah.version=1.33.12, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, batch=17.1_20250721.1, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, vcs-type=git, version=17.1.9, com.redhat.component=openstack-horizon-container, container_name=horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 
horizon, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, config_id=tripleo_step3, distribution-scope=public, name=rhosp17/openstack-horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:10:22 standalone.localdomain podman[150284]: 2025-10-13 14:10:22.645959728 +0000 UTC m=+0.366182558 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-conductor, container_name=nova_conductor, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-conductor, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-nova-conductor-container, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:44:17)
Oct 13 14:10:22 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:10:22 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:10:22 standalone.localdomain podman[150278]: 2025-10-13 14:10:22.656324787 +0000 UTC m=+0.376391632 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-manila-scheduler, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, com.redhat.component=openstack-manila-scheduler-container, description=Red Hat OpenStack 
Platform 17.1 manila-scheduler, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, container_name=manila_scheduler, build-date=2025-07-21T15:56:28, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true)
Oct 13 14:10:22 standalone.localdomain podman[150316]: 2025-10-13 14:10:22.663709115 +0000 UTC m=+0.364095613 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, release=1, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, tcib_managed=true, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, config_id=tripleo_step4)
Oct 13 14:10:22 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:10:22 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:10:22 standalone.localdomain podman[150270]: 2025-10-13 14:10:22.708150755 +0000 UTC m=+0.429002514 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, release=1, name=rhosp17/openstack-heat-api, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T15:56:26, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:10:22 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:10:22 standalone.localdomain podman[150358]: 2025-10-13 14:10:22.718780333 +0000 UTC m=+0.347953137 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, version=17.1.9, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, release=1, container_name=neutron_api, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T15:44:03, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-server-container, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-server, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc.)
Oct 13 14:10:22 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:10:22 standalone.localdomain ceph-mon[29756]: pgmap v1057: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:22 standalone.localdomain podman[150267]: 2025-10-13 14:10:22.540847098 +0000 UTC m=+0.278463734 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-api-cfn-container, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vendor=Red Hat, Inc., container_name=heat_api_cfn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, build-date=2025-07-21T14:49:55, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, architecture=x86_64, version=17.1.9, name=rhosp17/openstack-heat-api-cfn, release=1, config_id=tripleo_step4, distribution-scope=public)
Oct 13 14:10:22 standalone.localdomain podman[150331]: 2025-10-13 14:10:22.711833478 +0000 UTC m=+0.387056901 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, container_name=manila_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T16:06:43, description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-manila-api, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, tcib_managed=true, 
com.redhat.component=openstack-manila-api-container, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 manila-api)
Oct 13 14:10:22 standalone.localdomain podman[150267]: 2025-10-13 14:10:22.779927196 +0000 UTC m=+0.517543862 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cfn, io.openshift.expose-services=, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:49:55, distribution-scope=public)
Oct 13 14:10:22 standalone.localdomain podman[150331]: 2025-10-13 14:10:22.791608827 +0000 UTC m=+0.466832320 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-manila-api-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, build-date=2025-07-21T16:06:43, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, container_name=manila_api_cron, io.buildah.version=1.33.12, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, 
version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api, release=1)
Oct 13 14:10:22 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:10:22 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:10:23
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_data', 'backups', 'volumes', 'images', 'vms', '.mgr', 'manila_metadata']
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:10:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1058: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:24 standalone.localdomain ceph-mon[29756]: pgmap v1058: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:10:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1059: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:10:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:10:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:10:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:10:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:10:25 standalone.localdomain systemd[1]: tmp-crun.je7xQD.mount: Deactivated successfully.
Oct 13 14:10:25 standalone.localdomain ceph-mon[29756]: pgmap v1059: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:25 standalone.localdomain podman[150572]: 2025-10-13 14:10:25.832321447 +0000 UTC m=+0.097194107 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, container_name=swift_object_server, tcib_managed=true, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-swift-object-container, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, io.openshift.expose-services=)
Oct 13 14:10:25 standalone.localdomain podman[150576]: 2025-10-13 14:10:25.87977837 +0000 UTC m=+0.139646425 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:24:10, container_name=nova_vnc_proxy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-nova-novncproxy, vcs-type=git, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-nova-novncproxy-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, release=1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:10:25 standalone.localdomain podman[150575]: 2025-10-13 14:10:25.860029712 +0000 UTC m=+0.120618039 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, managed_by=tripleo_ansible, container_name=swift_account_server, vcs-type=git, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-swift-account-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, version=17.1.9)
Oct 13 14:10:25 standalone.localdomain podman[150574]: 2025-10-13 14:10:25.947423125 +0000 UTC m=+0.209531379 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, architecture=x86_64, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, release=1, version=17.1.9, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, 
description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-type=git)
Oct 13 14:10:26 standalone.localdomain podman[150573]: 2025-10-13 14:10:26.003630247 +0000 UTC m=+0.266618698 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, 
io.openshift.expose-services=, architecture=x86_64, container_name=nova_migration_target, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12)
Oct 13 14:10:26 standalone.localdomain podman[150572]: 2025-10-13 14:10:26.067765024 +0000 UTC m=+0.332637694 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, 
com.redhat.component=openstack-swift-object-container, batch=17.1_20250721.1, container_name=swift_object_server, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:10:26 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:10:26 standalone.localdomain podman[150575]: 2025-10-13 14:10:26.080859108 +0000 UTC m=+0.341447425 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, release=1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, version=17.1.9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat 
OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 14:10:26 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:10:26 standalone.localdomain podman[150574]: 2025-10-13 14:10:26.115810245 +0000 UTC m=+0.377918479 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, name=rhosp17/openstack-swift-container, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, 
config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, container_name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:10:26 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:10:26 standalone.localdomain podman[150576]: 2025-10-13 14:10:26.183766549 +0000 UTC m=+0.443634604 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, com.redhat.component=openstack-nova-novncproxy-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, version=17.1.9, name=rhosp17/openstack-nova-novncproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, managed_by=tripleo_ansible, vendor=Red 
Hat, Inc., vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.buildah.version=1.33.12, container_name=nova_vnc_proxy, vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T15:24:10, description=Red Hat OpenStack Platform 17.1 nova-novncproxy)
Oct 13 14:10:26 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:10:26 standalone.localdomain podman[150573]: 2025-10-13 14:10:26.32880341 +0000 UTC m=+0.591791871 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, io.buildah.version=1.33.12, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, name=rhosp17/openstack-nova-compute)
Oct 13 14:10:26 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:10:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1060: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:10:28 standalone.localdomain sudo[150231]: pam_unix(sudo:session): session closed for user root
Oct 13 14:10:28 standalone.localdomain sudo[150950]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ybkyqbuyhsvuoausvamvummljbdbpegy ; /usr/bin/python3
Oct 13 14:10:28 standalone.localdomain sudo[150950]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:10:28 standalone.localdomain ceph-mon[29756]: pgmap v1060: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:28 standalone.localdomain python3[150961]: ansible-ansible.legacy.dnf Invoked with name=['python-subunit'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 14:10:29 standalone.localdomain systemd[1]: tmp-crun.MrMICH.mount: Deactivated successfully.
Oct 13 14:10:29 standalone.localdomain podman[151007]: 2025-10-13 14:10:29.233001793 +0000 UTC m=+0.111108876 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, release=1, summary=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, batch=17.1_20250721.1, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-manila-share-container, description=Red Hat OpenStack Platform 17.1 manila-share, build-date=2025-07-21T15:22:36, name=rhosp17/openstack-manila-share, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, io.buildah.version=1.33.12)
Oct 13 14:10:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:10:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:10:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:10:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:10:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:10:29 standalone.localdomain podman[151007]: 2025-10-13 14:10:29.278913658 +0000 UTC m=+0.157020741 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-share, com.redhat.component=openstack-manila-share-container, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, vcs-type=git, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 manila-share, io.buildah.version=1.33.12, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T15:22:36, name=rhosp17/openstack-manila-share, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1)
Oct 13 14:10:29 standalone.localdomain systemd[1]: tmp-crun.b7cQFd.mount: Deactivated successfully.
Oct 13 14:10:29 standalone.localdomain podman[151026]: 2025-10-13 14:10:29.377247298 +0000 UTC m=+0.107464533 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-placement-api, version=17.1.9, com.redhat.component=openstack-placement-api-container, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 placement-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 placement-api, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T13:58:12, io.buildah.version=1.33.12, container_name=placement_api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api)
Oct 13 14:10:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:10:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:10:29 standalone.localdomain podman[151027]: 2025-10-13 14:10:29.426170157 +0000 UTC m=+0.160353163 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, container_name=swift_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, batch=17.1_20250721.1, description=Red Hat 
OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_id=tripleo_step4, name=rhosp17/openstack-swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 13 14:10:29 standalone.localdomain podman[151029]: 2025-10-13 14:10:29.400226397 +0000 UTC m=+0.121191197 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container)
Oct 13 14:10:29 standalone.localdomain podman[151051]: 2025-10-13 14:10:29.464068774 +0000 UTC m=+0.180356169 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, name=rhosp17/openstack-glance-api, 
release=1, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_cron, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12)
Oct 13 14:10:29 standalone.localdomain podman[151051]: 2025-10-13 14:10:29.473818545 +0000 UTC m=+0.190105960 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, container_name=glance_api_cron, io.buildah.version=1.33.12, config_id=tripleo_step4, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, build-date=2025-07-21T13:58:20, maintainer=OpenStack TripleO Team, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1)
Oct 13 14:10:29 standalone.localdomain podman[151029]: 2025-10-13 14:10:29.48565183 +0000 UTC m=+0.206616620 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, container_name=ovn_controller, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Oct 13 14:10:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:10:29 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:10:29 standalone.localdomain podman[151026]: 2025-10-13 14:10:29.492687067 +0000 UTC m=+0.222904272 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, com.redhat.component=openstack-placement-api-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-placement-api, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T13:58:12, config_id=tripleo_step4, container_name=placement_api, description=Red Hat OpenStack Platform 17.1 placement-api)
Oct 13 14:10:29 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:10:29 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:10:29 standalone.localdomain podman[151165]: 2025-10-13 14:10:29.590167471 +0000 UTC m=+0.092534713 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, container_name=glance_api, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', 
'/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:10:29 standalone.localdomain podman[151028]: 2025-10-13 14:10:29.543945377 +0000 UTC m=+0.278087562 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Oct 13 14:10:29 standalone.localdomain podman[151111]: 2025-10-13 14:10:29.562822039 +0000 UTC m=+0.167616458 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-api, release=1, name=rhosp17/openstack-nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T16:05:11, container_name=nova_metadata, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, batch=17.1_20250721.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, 
com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-nova-api-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:10:29 standalone.localdomain podman[151027]: 2025-10-13 14:10:29.613615994 +0000 UTC m=+0.347798990 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, container_name=swift_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, distribution-scope=public, io.buildah.version=1.33.12, release=1, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.component=openstack-swift-proxy-server-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:10:29 standalone.localdomain podman[151028]: 2025-10-13 14:10:29.627734949 +0000 UTC m=+0.361877104 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, config_id=tripleo_step4, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1)
Oct 13 14:10:29 standalone.localdomain podman[151112]: 2025-10-13 14:10:29.63360943 +0000 UTC m=+0.232572219 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., container_name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:10:29 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:10:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1061: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:29 standalone.localdomain podman[151111]: 2025-10-13 14:10:29.644930939 +0000 UTC m=+0.249725338 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-api, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp 
openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T16:05:11, container_name=nova_metadata, batch=17.1_20250721.1, name=rhosp17/openstack-nova-api, distribution-scope=public, com.redhat.component=openstack-nova-api-container, vcs-type=git, config_id=tripleo_step4, architecture=x86_64)
Oct 13 14:10:29 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:10:29 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:10:29 standalone.localdomain podman[151165]: 2025-10-13 14:10:29.751757792 +0000 UTC m=+0.254125044 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, name=rhosp17/openstack-glance-api, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-glance-api-container, distribution-scope=public, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:10:29 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:10:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:10:29 standalone.localdomain podman[151112]: 2025-10-13 14:10:29.819893052 +0000 UTC m=+0.418855841 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp 
openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., container_name=glance_api_internal, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T13:58:20)
Oct 13 14:10:29 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:10:30 standalone.localdomain runuser[151238]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:10:30 standalone.localdomain ceph-mon[29756]: pgmap v1061: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:30 standalone.localdomain runuser[151238]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:10:30 standalone.localdomain runuser[151307]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:10:31 standalone.localdomain sudo[150950]: pam_unix(sudo:session): session closed for user root
Oct 13 14:10:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1062: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:31 standalone.localdomain runuser[151307]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:10:31 standalone.localdomain runuser[151369]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:10:31 standalone.localdomain sudo[151429]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ttsywmmvdhyedaccdyguxkslgyvxyqyl ; /usr/bin/python3
Oct 13 14:10:31 standalone.localdomain sudo[151429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:10:32 standalone.localdomain python3[151431]: ansible-ansible.builtin.file Invoked with path=/home/zuul/ci-framework-data/tests/pre-adoption-tempest/ state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:10:32 standalone.localdomain sudo[151429]: pam_unix(sudo:session): session closed for user root
Oct 13 14:10:32 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=25100 DF PROTO=TCP SPT=44120 DPT=19885 SEQ=2877125080 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0956BE300000000001030307) 
Oct 13 14:10:32 standalone.localdomain sudo[151445]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tpnmpknbbovaqpudgipktnodbkvvdpta ; /usr/bin/python3
Oct 13 14:10:32 standalone.localdomain sudo[151445]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:10:32 standalone.localdomain python3[151450]: ansible-ansible.legacy.command Invoked with creates=/home/zuul/ci-framework-data/tests/pre-adoption-tempest//dpa_tempest_workspace _raw_params=tempest init /home/zuul/ci-framework-data/tests/pre-adoption-tempest//dpa_tempest_workspace zuul_log_id=fa163ec2-ffbe-5cff-7786-000000000038-1-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None removes=None stdin=None
Oct 13 14:10:32 standalone.localdomain runuser[151369]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:10:32 standalone.localdomain ceph-mon[29756]: pgmap v1062: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:32 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 14:10:32 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 14:10:33 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=25101 DF PROTO=TCP SPT=44120 DPT=19885 SEQ=2877125080 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0956C2310000000001030307) 
Oct 13 14:10:33 standalone.localdomain sudo[151445]: pam_unix(sudo:session): session closed for user root
Oct 13 14:10:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1063: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:10:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:10:33 standalone.localdomain systemd[1]: tmp-crun.Ds2qck.mount: Deactivated successfully.
Oct 13 14:10:33 standalone.localdomain podman[151481]: 2025-10-13 14:10:33.822971054 +0000 UTC m=+0.087290171 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, com.redhat.component=openstack-keystone-container, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone_cron, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_id=tripleo_step3, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T13:27:18, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, name=rhosp17/openstack-keystone, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc.)
Oct 13 14:10:33 standalone.localdomain podman[151481]: 2025-10-13 14:10:33.836925374 +0000 UTC m=+0.101244541 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, container_name=keystone_cron, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, release=1, build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:10:33 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:10:33 standalone.localdomain systemd[1]: tmp-crun.s0DCUO.mount: Deactivated successfully.
Oct 13 14:10:33 standalone.localdomain podman[151482]: 2025-10-13 14:10:33.935164812 +0000 UTC m=+0.197483057 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, io.buildah.version=1.33.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api, io.openshift.expose-services=, tcib_managed=true, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, name=rhosp17/openstack-nova-api, release=1, build-date=2025-07-21T16:05:11, batch=17.1_20250721.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, managed_by=tripleo_ansible)
Oct 13 14:10:33 standalone.localdomain podman[151482]: 2025-10-13 14:10:33.998283808 +0000 UTC m=+0.260602063 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, release=1, vcs-type=git, container_name=nova_api, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-nova-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-api, build-date=2025-07-21T16:05:11, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f)
Oct 13 14:10:34 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:10:34 standalone.localdomain ceph-mon[29756]: pgmap v1063: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:10:35 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=25102 DF PROTO=TCP SPT=44120 DPT=19885 SEQ=2877125080 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0956CA310000000001030307) 
Oct 13 14:10:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1064: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:10:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:10:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:10:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:10:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:10:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:10:35 standalone.localdomain ceph-mon[29756]: pgmap v1064: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:35 standalone.localdomain podman[151558]: 2025-10-13 14:10:35.864171017 +0000 UTC m=+0.110563370 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, config_id=tripleo_step3, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, 
vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 barbican-worker, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-barbican-worker-container, build-date=2025-07-21T15:36:22, name=rhosp17/openstack-barbican-worker, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, container_name=barbican_worker, batch=17.1_20250721.1)
Oct 13 14:10:35 standalone.localdomain podman[151558]: 2025-10-13 14:10:35.908509953 +0000 UTC m=+0.154902276 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, vcs-type=git, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, batch=17.1_20250721.1, container_name=barbican_worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-barbican-worker-container, io.openshift.expose-services=, build-date=2025-07-21T15:36:22, description=Red Hat OpenStack Platform 17.1 barbican-worker, name=rhosp17/openstack-barbican-worker, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Oct 13 14:10:35 standalone.localdomain systemd[1]: tmp-crun.XCyDxc.mount: Deactivated successfully.
Oct 13 14:10:35 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:10:35 standalone.localdomain podman[151545]: 2025-10-13 14:10:35.89642953 +0000 UTC m=+0.155979237 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, release=1, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.component=openstack-barbican-api-container, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, version=17.1.9, distribution-scope=public, 
io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team)
Oct 13 14:10:35 standalone.localdomain podman[151544]: 2025-10-13 14:10:35.963022433 +0000 UTC m=+0.226131910 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, vcs-type=git, container_name=neutron_dhcp, 
name=rhosp17/openstack-neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-07-21T16:28:54, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9)
Oct 13 14:10:35 standalone.localdomain podman[151545]: 2025-10-13 14:10:35.981876925 +0000 UTC m=+0.241426672 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.expose-services=, com.redhat.component=openstack-barbican-api-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T15:22:44, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-barbican-api, 
vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_api, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 barbican-api, vcs-type=git)
Oct 13 14:10:35 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:10:36 standalone.localdomain podman[151544]: 2025-10-13 14:10:36.029268335 +0000 UTC m=+0.292377792 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T16:28:54, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.buildah.version=1.33.12, distribution-scope=public, container_name=neutron_dhcp, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-neutron-dhcp-agent, config_id=tripleo_step4)
Oct 13 14:10:36 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:10:36 standalone.localdomain podman[151552]: 2025-10-13 14:10:36.0761223 +0000 UTC m=+0.324583816 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, distribution-scope=public, container_name=neutron_sriov_agent, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:03:34, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git)
Oct 13 14:10:36 standalone.localdomain podman[151560]: 2025-10-13 14:10:36.116985289 +0000 UTC m=+0.356679675 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api_cron, vcs-type=git, version=17.1.9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T16:05:11, summary=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, 
io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:10:36 standalone.localdomain podman[151552]: 2025-10-13 14:10:36.132805607 +0000 UTC m=+0.381267073 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T16:03:34, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-sriov-agent, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, name=rhosp17/openstack-neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:10:36 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:10:36 standalone.localdomain podman[151560]: 2025-10-13 14:10:36.155988911 +0000 UTC m=+0.395683307 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T16:05:11, name=rhosp17/openstack-nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git)
Oct 13 14:10:36 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:10:36 standalone.localdomain podman[151546]: 2025-10-13 14:10:35.928765487 +0000 UTC m=+0.180928187 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T16:18:19, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, container_name=barbican_keystone_listener, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, 
managed_by=tripleo_ansible, io.buildah.version=1.33.12, com.redhat.component=openstack-barbican-keystone-listener-container, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:10:36 standalone.localdomain podman[151546]: 2025-10-13 14:10:36.217039223 +0000 UTC m=+0.469202003 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, build-date=2025-07-21T16:18:19, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, container_name=barbican_keystone_listener, distribution-scope=public, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-keystone-listener-container, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12, 
maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, version=17.1.9, name=rhosp17/openstack-barbican-keystone-listener)
Oct 13 14:10:36 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:10:37 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=24783 DF PROTO=TCP SPT=44134 DPT=19885 SEQ=2562795060 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0956D2010000000001030307) 
Oct 13 14:10:37 standalone.localdomain sudo[151843]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ykfacmqcdlrhjroxmptiahzhkvtjfwvd ; /usr/bin/python3
Oct 13 14:10:37 standalone.localdomain sudo[151843]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:10:37 standalone.localdomain python3[151845]: ansible-ansible.legacy.command Invoked with _raw_params=OS_CLOUD=standalone openstack network create tempest-ext-nw --external --provider-network-type flat --provider-physical-network datacentre -f json _uses_shell=True zuul_log_id=fa163ec2-ffbe-5cff-7786-000000000039-1-controller zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:10:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1065: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:38 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=24784 DF PROTO=TCP SPT=44134 DPT=19885 SEQ=2562795060 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0956D5F10000000001030307) 
Oct 13 14:10:38 standalone.localdomain ceph-mon[29756]: pgmap v1065: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:38 standalone.localdomain podman[151939]: 2025-10-13 14:10:38.7337033 +0000 UTC m=+0.096351440 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, build-date=2025-07-21T16:18:24, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cinder-backup, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-cinder-backup-container, io.openshift.expose-services=, name=rhosp17/openstack-cinder-backup, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup)
Oct 13 14:10:38 standalone.localdomain podman[151939]: 2025-10-13 14:10:38.766050748 +0000 UTC m=+0.128698938 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-cinder-backup-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cinder-backup, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:18:24, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, tcib_managed=true, release=1, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-backup, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, summary=Red Hat OpenStack Platform 17.1 cinder-backup)
Oct 13 14:10:39 standalone.localdomain haproxy[70940]: 172.21.0.2:36950 [13/Oct/2025:14:10:39.226] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 46/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:10:39 standalone.localdomain haproxy[70940]: 172.21.0.2:36950 [13/Oct/2025:14:10:39.230] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/260/260 201 8100 - - ---- 46/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:10:39 standalone.localdomain haproxy[70940]: 172.17.0.2:40792 [13/Oct/2025:14:10:39.521] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 48/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:10:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1066: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:39 standalone.localdomain haproxy[70940]: 172.17.0.2:40792 [13/Oct/2025:14:10:39.526] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/281/281 201 8104 - - ---- 48/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:10:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:10:39 standalone.localdomain haproxy[70940]: 172.17.0.2:40792 [13/Oct/2025:14:10:39.817] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/23/23 200 8095 - - ---- 48/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:10:40 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=24785 DF PROTO=TCP SPT=44134 DPT=19885 SEQ=2562795060 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0956DDF10000000001030307) 
Oct 13 14:10:40 standalone.localdomain ceph-mon[29756]: pgmap v1066: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:40 standalone.localdomain haproxy[70940]: 172.21.0.2:37964 [13/Oct/2025:14:10:39.507] neutron neutron/standalone.internalapi.localdomain 0/0/0/1339/1339 201 890 - - ---- 49/1/0/0/0 0/0 "POST /v2.0/networks HTTP/1.1"
Oct 13 14:10:41 standalone.localdomain sudo[151843]: pam_unix(sudo:session): session closed for user root
Oct 13 14:10:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1067: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:10:41 standalone.localdomain systemd[1]: tmp-crun.tWg5A6.mount: Deactivated successfully.
Oct 13 14:10:41 standalone.localdomain podman[152057]: 2025-10-13 14:10:41.817635434 +0000 UTC m=+0.081684779 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, com.redhat.component=openstack-ovn-northd-container, tcib_managed=true, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-northd, 
vcs-type=git, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, batch=17.1_20250721.1, config_id=ovn_cluster_northd, container_name=ovn_cluster_northd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-northd)
Oct 13 14:10:41 standalone.localdomain podman[152057]: 2025-10-13 14:10:41.830900843 +0000 UTC m=+0.094950218 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-northd-container, io.openshift.expose-services=, name=rhosp17/openstack-ovn-northd, release=1, batch=17.1_20250721.1, config_id=ovn_cluster_northd, architecture=x86_64, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, container_name=ovn_cluster_northd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, vcs-type=git, build-date=2025-07-21T13:30:04, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:10:41 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:10:42 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=42919 DF PROTO=TCP SPT=42330 DPT=19885 SEQ=3798428374 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0956E6280000000001030307) 
Oct 13 14:10:42 standalone.localdomain sudo[152089]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-redctbrmkugcaqirwsgcrauyaqmrzatr ; /usr/bin/python3
Oct 13 14:10:42 standalone.localdomain sudo[152089]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:10:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:10:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:10:42 standalone.localdomain podman[152093]: 2025-10-13 14:10:42.586050048 +0000 UTC m=+0.076207650 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, summary=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-mariadb, container_name=clustercheck, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, 
com.redhat.component=openstack-mariadb-container, version=17.1.9, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:10:42 standalone.localdomain systemd[1]: tmp-crun.gCVYOh.mount: Deactivated successfully.
Oct 13 14:10:42 standalone.localdomain podman[152093]: 2025-10-13 14:10:42.635815182 +0000 UTC m=+0.125972784 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., 
io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-mariadb-container, container_name=clustercheck, version=17.1.9, build-date=2025-07-21T12:58:45)
Oct 13 14:10:42 standalone.localdomain python3[152091]: ansible-ansible.legacy.command Invoked with _raw_params=OS_CLOUD=standalone openstack subnet create tempest-ext-nw-subnet --subnet-range 192.168.24.0/24 --allocation-pool start=192.168.24.150,end=192.168.24.250 --gateway 192.168.24.1 --no-dhcp --network tempest-ext-nw  -f json _uses_shell=True zuul_log_id=fa163ec2-ffbe-5cff-7786-00000000003b-1-controller zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:10:42 standalone.localdomain podman[152092]: 2025-10-13 14:10:42.651521076 +0000 UTC m=+0.142310797 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, vcs-type=git, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 14:10:42 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:10:42 standalone.localdomain podman[152092]: 2025-10-13 14:10:42.676160676 +0000 UTC m=+0.166950377 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, version=17.1.9, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc.)
Oct 13 14:10:42 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:10:42 standalone.localdomain ceph-mon[29756]: pgmap v1067: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:43 standalone.localdomain runuser[152174]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:10:43 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=42920 DF PROTO=TCP SPT=42330 DPT=19885 SEQ=3798428374 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0956EA310000000001030307) 
Oct 13 14:10:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1068: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:43 standalone.localdomain runuser[152174]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:10:43 standalone.localdomain runuser[152246]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:10:44 standalone.localdomain haproxy[70940]: 172.21.0.2:51448 [13/Oct/2025:14:10:44.176] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 48/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:10:44 standalone.localdomain haproxy[70940]: 172.21.0.2:51448 [13/Oct/2025:14:10:44.180] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/308/308 201 8100 - - ---- 48/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:10:44 standalone.localdomain runuser[152246]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:10:44 standalone.localdomain haproxy[70940]: 172.17.0.2:40792 [13/Oct/2025:14:10:44.512] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/28/28 200 8095 - - ---- 49/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:10:44 standalone.localdomain runuser[152307]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:10:44 standalone.localdomain haproxy[70940]: 172.21.0.2:47098 [13/Oct/2025:14:10:44.508] neutron neutron/standalone.internalapi.localdomain 0/0/0/51/51 404 297 - - ---- 49/1/0/0/0 0/0 "GET /v2.0/networks/tempest-ext-nw HTTP/1.1"
Oct 13 14:10:44 standalone.localdomain haproxy[70940]: 172.21.0.2:47098 [13/Oct/2025:14:10:44.561] neutron neutron/standalone.internalapi.localdomain 0/0/0/51/51 200 894 - - ---- 49/1/0/0/0 0/0 "GET /v2.0/networks?name=tempest-ext-nw HTTP/1.1"
Oct 13 14:10:44 standalone.localdomain ceph-mon[29756]: pgmap v1068: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:10:44 standalone.localdomain haproxy[70940]: 172.21.0.2:47098 [13/Oct/2025:14:10:44.616] neutron neutron/standalone.internalapi.localdomain 0/0/0/324/324 201 832 - - ---- 49/1/0/0/0 0/0 "POST /v2.0/subnets HTTP/1.1"
Oct 13 14:10:45 standalone.localdomain sudo[152089]: pam_unix(sudo:session): session closed for user root
Oct 13 14:10:45 standalone.localdomain runuser[152307]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:10:45 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=42921 DF PROTO=TCP SPT=42330 DPT=19885 SEQ=3798428374 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0956F2310000000001030307) 
Oct 13 14:10:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1069: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:45 standalone.localdomain ceph-mon[29756]: pgmap v1069: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:46 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 14:10:46 standalone.localdomain object-server[152392]: Object update sweep starting on /srv/node/d1 (pid: 13)
Oct 13 14:10:46 standalone.localdomain object-server[152392]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 13)
Oct 13 14:10:46 standalone.localdomain object-server[152392]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 14:10:46 standalone.localdomain object-server[114601]: Object update sweep completed: 0.08s
Oct 13 14:10:47 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=16796 DF PROTO=TCP SPT=42340 DPT=19885 SEQ=2779711697 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0956FA2B0000000001030307) 
Oct 13 14:10:47 standalone.localdomain sudo[152541]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xjqjafnfosuiaqdnadouanbmorghphmi ; /usr/bin/python3
Oct 13 14:10:47 standalone.localdomain sudo[152541]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:10:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1070: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:47 standalone.localdomain python3[152543]: ansible-ansible.legacy.command Invoked with _raw_params=discover-tempest-config --out /home/zuul/ci-framework-data/tests/pre-adoption-tempest//dpa_tempest_workspace/etc/tempest.conf --deployer-input ~/tempest-deployer-input.conf --os-cloud standalone --create --network b5c071a4-7b94-4335-b0f2-67d1f6b021bf --create DEFAULT.log_file /home/zuul/ci-framework-data/tests/pre-adoption-tempest/dpa_tempest_workspace/logs/tempest.log zuul_log_id=fa163ec2-ffbe-5cff-7786-00000000003d-1-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:10:48 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=16797 DF PROTO=TCP SPT=42340 DPT=19885 SEQ=2779711697 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0956FE310000000001030307) 
Oct 13 14:10:48 standalone.localdomain haproxy[70940]: 172.21.0.2:51454 [13/Oct/2025:14:10:48.618] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 48/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:10:48 standalone.localdomain ceph-mon[29756]: pgmap v1070: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:48 standalone.localdomain sudo[152636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:10:48 standalone.localdomain sudo[152636]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:10:48 standalone.localdomain sudo[152636]: pam_unix(sudo:session): session closed for user root
Oct 13 14:10:48 standalone.localdomain haproxy[70940]: 172.21.0.2:51460 [13/Oct/2025:14:10:48.624] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/308/308 201 8100 - - ---- 48/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:10:48 standalone.localdomain sudo[152659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:10:48 standalone.localdomain sudo[152659]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:10:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:10:48 standalone.localdomain haproxy[70940]: 172.21.0.2:51464 [13/Oct/2025:14:10:48.935] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/33/33 200 919 - - ---- 48/2/0/0/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:10:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:10:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:10:49 standalone.localdomain podman[152676]: 2025-10-13 14:10:49.083036739 +0000 UTC m=+0.110208479 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-07-21T16:10:12, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-cinder-scheduler-container, distribution-scope=public, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, container_name=cinder_scheduler, name=rhosp17/openstack-cinder-scheduler, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, description=Red 
Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git)
Oct 13 14:10:49 standalone.localdomain haproxy[70940]: 172.21.0.2:51468 [13/Oct/2025:14:10:48.974] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/123/123 201 541 - - ---- 48/2/0/0/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:10:49 standalone.localdomain haproxy[70940]: 172.21.0.2:51472 [13/Oct/2025:14:10:49.098] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/92/92 200 1242 - - ---- 48/2/0/0/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:10:49 standalone.localdomain systemd[1]: tmp-crun.OwOmbv.mount: Deactivated successfully.
Oct 13 14:10:49 standalone.localdomain podman[152676]: 2025-10-13 14:10:49.26575832 +0000 UTC m=+0.292930040 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T16:10:12, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.expose-services=, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-cinder-scheduler, com.redhat.component=openstack-cinder-scheduler-container, batch=17.1_20250721.1, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, container_name=cinder_scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1)
Oct 13 14:10:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:10:49 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:10:49 standalone.localdomain podman[152737]: 2025-10-13 14:10:49.268745902 +0000 UTC m=+0.168365770 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, build-date=2025-07-21T15:58:55, container_name=cinder_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, version=17.1.9, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, batch=17.1_20250721.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, name=rhosp17/openstack-cinder-api, release=1, summary=Red Hat OpenStack 
Platform 17.1 cinder-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cinder-api-container, maintainer=OpenStack TripleO Team)
Oct 13 14:10:49 standalone.localdomain haproxy[70940]: 172.21.0.2:51482 [13/Oct/2025:14:10:49.192] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/291/291 201 546 - - ---- 48/2/0/0/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:10:49 standalone.localdomain podman[152734]: 2025-10-13 14:10:49.487788754 +0000 UTC m=+0.391887580 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.9, config_id=tripleo_step3, architecture=x86_64, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=iscsid, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:10:49 standalone.localdomain podman[152737]: 2025-10-13 14:10:49.511943678 +0000 UTC m=+0.411563596 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, name=rhosp17/openstack-cinder-api, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-07-21T15:58:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vendor=Red Hat, Inc., tcib_managed=true, 
vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, container_name=cinder_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:10:49 standalone.localdomain haproxy[70940]: 172.21.0.2:51498 [13/Oct/2025:14:10:49.489] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/52/52 201 549 - - ---- 48/2/0/0/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:10:49 standalone.localdomain haproxy[70940]: 172.21.0.2:51504 [13/Oct/2025:14:10:49.544] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/65/65 200 1572 - - ---- 48/2/0/0/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:10:49 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:10:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1071: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:49 standalone.localdomain podman[152773]: 2025-10-13 14:10:49.702862323 +0000 UTC m=+0.418381396 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-keystone, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:18, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, container_name=keystone, 
io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, com.redhat.component=openstack-keystone-container)
Oct 13 14:10:49 standalone.localdomain podman[152734]: 2025-10-13 14:10:49.787979896 +0000 UTC m=+0.692078692 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, 
config_id=tripleo_step3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, container_name=iscsid, build-date=2025-07-21T13:27:15, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:10:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:10:49 standalone.localdomain haproxy[70940]: 172.21.0.2:40422 [13/Oct/2025:14:10:49.611] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/275/275 201 554 - - ---- 49/3/0/0/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:10:49 standalone.localdomain podman[152773]: 2025-10-13 14:10:49.895078808 +0000 UTC m=+0.610597931 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:18, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, vendor=Red Hat, Inc., com.redhat.component=openstack-keystone-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, release=1, container_name=keystone, distribution-scope=public, version=17.1.9, config_id=tripleo_step3, architecture=x86_64, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:10:49 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:10:49 standalone.localdomain haproxy[70940]: 172.21.0.2:40436 [13/Oct/2025:14:10:49.891] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/24/24 200 1572 - - ---- 48/2/0/0/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:10:49 standalone.localdomain ceph-mon[29756]: pgmap v1071: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:49 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:10:49 standalone.localdomain haproxy[70940]: 172.21.0.2:40450 [13/Oct/2025:14:10:49.918] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/27/27 200 4995 - - ---- 48/2/0/0/0 0/0 "GET /v3/users HTTP/1.1"
Oct 13 14:10:49 standalone.localdomain haproxy[70940]: 172.21.0.2:40462 [13/Oct/2025:14:10:49.948] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/17/17 200 2640 - - ---- 48/2/0/0/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:10:50 standalone.localdomain haproxy[70940]: 172.21.0.2:40470 [13/Oct/2025:14:10:49.969] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/55/55 204 165 - - ---- 48/2/0/0/0 0/0 "PUT /v3/projects/c979cdad54804317be260cfeb61b08c7/users/3d9eeef137fc40c78332936114fd7ee4/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:10:50 standalone.localdomain haproxy[70940]: 172.21.0.2:40482 [13/Oct/2025:14:10:50.025] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/75/75 200 3099 - - ---- 48/2/0/0/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:10:50 standalone.localdomain sudo[152659]: pam_unix(sudo:session): session closed for user root
Oct 13 14:10:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:10:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:10:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:10:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:10:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:10:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:10:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:10:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:10:50 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev ba0ebca0-66f1-4b4b-8e37-fae491fd9a29 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:10:50 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev ba0ebca0-66f1-4b4b-8e37-fae491fd9a29 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:10:50 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event ba0ebca0-66f1-4b4b-8e37-fae491fd9a29 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:10:50 standalone.localdomain sudo[152843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:10:50 standalone.localdomain sudo[152843]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:10:50 standalone.localdomain sudo[152843]: pam_unix(sudo:session): session closed for user root
Oct 13 14:10:50 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=16798 DF PROTO=TCP SPT=42340 DPT=19885 SEQ=2779711697 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A095706310000000001030307) 
Oct 13 14:10:50 standalone.localdomain haproxy[70940]: 172.21.0.2:56408 [13/Oct/2025:14:10:50.901] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/4/4 200 493 - - ---- 48/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:10:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:10:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:10:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:10:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:10:50 standalone.localdomain haproxy[70940]: 172.17.0.2:54476 [13/Oct/2025:14:10:50.913] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/80/80 200 8095 - - ---- 49/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain haproxy[70940]: 172.21.0.2:56418 [13/Oct/2025:14:10:50.906] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/125/125 200 623 - - ---- 49/1/0/0/0 0/0 "GET /v2.1/os-hosts HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain haproxy[70940]: 172.21.0.2:40496 [13/Oct/2025:14:10:51.035] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/11/11 300 515 - - ---- 49/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain haproxy[70940]: 172.21.0.2:51200 [13/Oct/2025:14:10:51.049] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/5/5 300 1507 - - ---- 49/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain haproxy[70940]: 172.21.0.2:51214 [13/Oct/2025:14:10:51.056] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/40/40 200 280 - - ---- 49/1/0/0/0 0/0 "GET /v2/info/stores HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain haproxy[70940]: 172.21.0.2:51230 [13/Oct/2025:14:10:51.099] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/100/100 200 242 - - ---- 50/1/0/0/0 0/0 "GET /v2/images HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain haproxy[70940]: 172.21.0.2:51932 [13/Oct/2025:14:10:51.203] neutron neutron/standalone.internalapi.localdomain 0/0/0/9/9 200 16918 - - ---- 50/1/0/0/0 0/0 "GET /v2.0/extensions.json HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain haproxy[70940]: 172.21.0.2:51940 [13/Oct/2025:14:10:51.213] neutron neutron/standalone.internalapi.localdomain 0/0/0/3/3 200 231 - - ---- 50/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain haproxy[70940]: 172.17.0.2:54492 [13/Oct/2025:14:10:51.227] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 51/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1072: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:51 standalone.localdomain haproxy[70940]: 172.17.0.2:54492 [13/Oct/2025:14:10:51.235] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/435/435 201 8102 - - ---- 51/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain haproxy[70940]: 172.17.0.2:54492 [13/Oct/2025:14:10:51.675] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/32/32 200 1048 - - ---- 51/3/0/0/0 0/0 "GET /v3/auth/tokens?nocatalog HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain proxy-server[116182]: 172.21.0.2 172.18.0.100 13/Oct/2025/14/10/51 GET /info HTTP/1.0 200 - python-urllib3/1.26.5 gAAAAABo7QhoKwsq... - 1671 - tx146074b15fdb44d8bdd0e-0068ed086b - 0.4935 - - 1760364651.219648600 1760364651.713121891 -
Oct 13 14:10:51 standalone.localdomain haproxy[70940]: 172.21.0.2:54154 [13/Oct/2025:14:10:51.218] swift_proxy_server swift_proxy_server/standalone.storage.localdomain 0/0/0/496/496 200 1899 - - ---- 51/1/0/0/0 0/0 "GET /info HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain haproxy[70940]: 172.21.0.2:54166 [13/Oct/2025:14:10:51.716] swift_proxy_server swift_proxy_server/standalone.storage.localdomain 0/0/0/1/1 200 206 - - ---- 51/1/0/0/0 0/0 "GET /healthcheck HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain haproxy[70940]: 172.21.0.2:40502 [13/Oct/2025:14:10:51.719] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/32/32 200 2640 - - ---- 51/4/0/0/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain haproxy[70940]: 172.21.0.2:51946 [13/Oct/2025:14:10:51.755] neutron neutron/standalone.internalapi.localdomain 0/0/0/93/93 200 932 - - ---- 51/1/0/0/0 0/0 "GET /v2.0/networks HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain haproxy[70940]: 172.21.0.2:40508 [13/Oct/2025:14:10:51.851] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/27/27 200 1572 - - ---- 51/4/0/0/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:10:51 standalone.localdomain ceph-mon[29756]: pgmap v1072: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:10:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:10:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:10:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:10:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:10:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:10:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:10:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:10:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:10:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:10:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:10:52 standalone.localdomain haproxy[70940]: 172.21.0.2:51952 [13/Oct/2025:14:10:51.880] neutron neutron/standalone.internalapi.localdomain 0/0/0/938/938 201 896 - - ---- 51/1/0/0/0 0/0 "POST /v2.0/networks HTTP/1.1"
Oct 13 14:10:52 standalone.localdomain systemd[1]: tmp-crun.91EfHg.mount: Deactivated successfully.
Oct 13 14:10:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:10:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:10:52 standalone.localdomain podman[152883]: 2025-10-13 14:10:52.922819078 +0000 UTC m=+0.174845170 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, com.redhat.component=openstack-cinder-api-container, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:58:55, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, release=1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=cinder_api, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:10:52 standalone.localdomain podman[152908]: 2025-10-13 14:10:52.886470987 +0000 UTC m=+0.116227332 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-cron-container, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 cron, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron)
Oct 13 14:10:52 standalone.localdomain podman[152880]: 2025-10-13 14:10:52.967528716 +0000 UTC m=+0.228548255 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, build-date=2025-07-21T12:58:43, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-memcached-container, maintainer=OpenStack TripleO Team, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, config_id=tripleo_step1, managed_by=tripleo_ansible, 
summary=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, version=17.1.9)
Oct 13 14:10:52 standalone.localdomain podman[152895]: 2025-10-13 14:10:52.88656122 +0000 UTC m=+0.141832422 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, release=1, description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, 
container_name=heat_api, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4)
Oct 13 14:10:53 standalone.localdomain podman[152994]: 2025-10-13 14:10:53.00658477 +0000 UTC m=+0.118199074 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 manila-api, batch=17.1_20250721.1, com.redhat.component=openstack-manila-api-container, config_id=tripleo_step4, 
vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, container_name=manila_api_cron, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-manila-api, build-date=2025-07-21T16:06:43)
Oct 13 14:10:53 standalone.localdomain podman[152895]: 2025-10-13 14:10:53.019836419 +0000 UTC m=+0.275107681 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, com.redhat.component=openstack-heat-api-container, vcs-type=git, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, config_id=tripleo_step4)
Oct 13 14:10:53 standalone.localdomain podman[152921]: 2025-10-13 14:10:53.01988039 +0000 UTC m=+0.259137099 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, release=1, name=rhosp17/openstack-heat-api, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, 
io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, container_name=heat_api_cron, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible)
Oct 13 14:10:53 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:10:53 standalone.localdomain podman[152993]: 2025-10-13 14:10:53.064096803 +0000 UTC m=+0.176890433 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, architecture=x86_64, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, com.redhat.component=openstack-heat-api-cfn-container, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T14:49:55, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vcs-type=git, batch=17.1_20250721.1, container_name=heat_api_cfn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:10:53 standalone.localdomain podman[152879]: 2025-10-13 14:10:53.072938695 +0000 UTC m=+0.330700424 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-scheduler-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, 
name=rhosp17/openstack-nova-scheduler, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, tcib_managed=true, release=1, container_name=nova_scheduler, build-date=2025-07-21T16:02:54)
Oct 13 14:10:53 standalone.localdomain podman[152880]: 2025-10-13 14:10:53.084965456 +0000 UTC m=+0.345985005 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T12:58:43, com.redhat.component=openstack-memcached-container, container_name=memcached, name=rhosp17/openstack-memcached, io.buildah.version=1.33.12, release=1)
Oct 13 14:10:53 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:10:53 standalone.localdomain podman[152908]: 2025-10-13 14:10:53.121320796 +0000 UTC m=+0.351077161 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, tcib_managed=true, vcs-type=git, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4)
Oct 13 14:10:53 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:10:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:10:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:10:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:10:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:10:53 standalone.localdomain podman[152994]: 2025-10-13 14:10:53.174085083 +0000 UTC m=+0.285699357 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.component=openstack-manila-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, description=Red Hat OpenStack Platform 17.1 manila-api, tcib_managed=true, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-manila-api, architecture=x86_64, batch=17.1_20250721.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=manila_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:10:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:10:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:10:53 standalone.localdomain podman[152903]: 2025-10-13 14:10:53.184603087 +0000 UTC m=+0.438844537 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.component=openstack-nova-conductor-container, vcs-type=git, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, build-date=2025-07-21T15:44:17, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=nova_conductor, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, architecture=x86_64, name=rhosp17/openstack-nova-conductor, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, vendor=Red Hat, Inc.)
Oct 13 14:10:53 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:10:53 standalone.localdomain podman[152924]: 2025-10-13 14:10:53.124411751 +0000 UTC m=+0.367005492 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=neutron_api, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-server, release=1, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, build-date=2025-07-21T15:44:03, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 14:10:53 standalone.localdomain podman[152878]: 2025-10-13 14:10:53.227847359 +0000 UTC m=+0.496592346 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-engine-container, config_id=tripleo_step4, 
version=17.1.9, build-date=2025-07-21T15:44:11, container_name=heat_engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, tcib_managed=true)
Oct 13 14:10:53 standalone.localdomain podman[152891]: 2025-10-13 14:10:53.235854356 +0000 UTC m=+0.485098232 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-horizon, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, release=1, build-date=2025-07-21T13:58:15, vcs-type=git, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, managed_by=tripleo_ansible, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.openshift.expose-services=, com.redhat.component=openstack-horizon-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=horizon, summary=Red Hat OpenStack Platform 17.1 horizon, tcib_managed=true, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 14:10:53 standalone.localdomain podman[152993]: 2025-10-13 14:10:53.257999829 +0000 UTC m=+0.370793499 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, config_id=tripleo_step4, container_name=heat_api_cfn, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vcs-type=git, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, name=rhosp17/openstack-heat-api-cfn, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:10:53 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:10:53 standalone.localdomain podman[152877]: 2025-10-13 14:10:53.274204679 +0000 UTC m=+0.541493472 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, name=rhosp17/openstack-manila-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, release=1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, container_name=manila_scheduler, description=Red Hat 
OpenStack Platform 17.1 manila-scheduler, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-manila-scheduler-container, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:56:28)
Oct 13 14:10:53 standalone.localdomain podman[152877]: 2025-10-13 14:10:53.297735664 +0000 UTC m=+0.565024427 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-manila-scheduler, io.buildah.version=1.33.12, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, batch=17.1_20250721.1, container_name=manila_scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 manila-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/manila:/var/log/manila:z']}, build-date=2025-07-21T15:56:28, com.redhat.component=openstack-manila-scheduler-container, version=17.1.9, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 manila-scheduler)
Oct 13 14:10:53 standalone.localdomain podman[152921]: 2025-10-13 14:10:53.305369049 +0000 UTC m=+0.544625768 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-container, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T15:56:26, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:10:53 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:10:53 standalone.localdomain podman[152903]: 2025-10-13 14:10:53.311675733 +0000 UTC m=+0.565917203 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, vcs-type=git, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 nova-conductor, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_id=tripleo_step4, com.redhat.component=openstack-nova-conductor-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-conductor, build-date=2025-07-21T15:44:17, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, container_name=nova_conductor, tcib_managed=true, batch=17.1_20250721.1)
Oct 13 14:10:53 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:10:53 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:10:53 standalone.localdomain podman[152883]: 2025-10-13 14:10:53.349079826 +0000 UTC m=+0.601105888 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, version=17.1.9, com.redhat.component=openstack-cinder-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, container_name=cinder_api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cinder-api, build-date=2025-07-21T15:58:55, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-cinder-api, description=Red Hat OpenStack Platform 17.1 
cinder-api, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:10:53 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:10:53 standalone.localdomain podman[152879]: 2025-10-13 14:10:53.359474797 +0000 UTC m=+0.617236536 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, container_name=nova_scheduler, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step4, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, build-date=2025-07-21T16:02:54, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-scheduler, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-scheduler, 
com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-scheduler-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, distribution-scope=public, vendor=Red Hat, Inc.)
Oct 13 14:10:53 standalone.localdomain podman[152924]: 2025-10-13 14:10:53.365540113 +0000 UTC m=+0.608133834 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-neutron-server, com.redhat.component=openstack-neutron-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, container_name=neutron_api, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, distribution-scope=public, architecture=x86_64, 
batch=17.1_20250721.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, build-date=2025-07-21T15:44:03, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:10:53 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:10:53 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:10:53 standalone.localdomain podman[152891]: 2025-10-13 14:10:53.403029539 +0000 UTC m=+0.652273415 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:15, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-horizon-container, vendor=Red Hat, Inc., container_name=horizon, config_id=tripleo_step3, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, version=17.1.9)
Oct 13 14:10:53 standalone.localdomain podman[152878]: 2025-10-13 14:10:53.413675568 +0000 UTC m=+0.682420535 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, com.redhat.component=openstack-heat-engine-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, 
build-date=2025-07-21T15:44:11, name=rhosp17/openstack-heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:10:53 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:10:53 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:10:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1073: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:53 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 48 completed events
Oct 13 14:10:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:10:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:10:53 standalone.localdomain haproxy[70940]: 172.21.0.2:51968 [13/Oct/2025:14:10:52.820] neutron neutron/standalone.internalapi.localdomain 0/0/0/1017/1017 201 835 - - ---- 51/1/0/0/0 0/0 "POST /v2.0/subnets HTTP/1.1"
Oct 13 14:10:53 standalone.localdomain haproxy[70940]: 172.21.0.2:40514 [13/Oct/2025:14:10:53.843] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/41/41 200 1572 - - ---- 51/4/0/0/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:10:53 standalone.localdomain haproxy[70940]: 172.21.0.2:40526 [13/Oct/2025:14:10:53.886] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/39/39 200 4995 - - ---- 51/4/0/0/0 0/0 "GET /v3/users HTTP/1.1"
Oct 13 14:10:53 standalone.localdomain haproxy[70940]: 172.21.0.2:40528 [13/Oct/2025:14:10:53.929] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/33/33 200 2640 - - ---- 51/4/0/0/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:10:54 standalone.localdomain haproxy[70940]: 172.21.0.2:40536 [13/Oct/2025:14:10:53.965] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/55/55 204 165 - - ---- 52/4/0/0/0 0/0 "PUT /v3/projects/c979cdad54804317be260cfeb61b08c7/users/9c9572cd8f6841f2ab3ddc58e39e48f4/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:10:54 standalone.localdomain haproxy[70940]: 172.21.0.2:52732 [13/Oct/2025:14:10:54.022] placement placement/standalone.internalapi.localdomain 0/0/0/2/2 200 381 - - ---- 52/1/0/0/0 0/0 "GET /placement HTTP/1.1"
Oct 13 14:10:54 standalone.localdomain haproxy[70940]: 172.21.0.2:55514 [13/Oct/2025:14:10:54.024] manila manila/standalone.internalapi.localdomain 0/0/1/18/19 300 882 - - ---- 51/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:10:54 standalone.localdomain haproxy[70940]: 172.21.0.2:55522 [13/Oct/2025:14:10:54.046] manila manila/standalone.internalapi.localdomain 0/0/0/245/245 200 1080 - - ---- 51/1/0/0/0 0/0 "GET /v1/e44641a80bcb466cb3dd688e48b72d8e/scheduler-stats/pools/detail HTTP/1.1"
Oct 13 14:10:54 standalone.localdomain haproxy[70940]: 172.21.0.2:55526 [13/Oct/2025:14:10:54.293] manila manila/standalone.internalapi.localdomain 0/0/0/16/16 300 882 - - ---- 51/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:10:54 standalone.localdomain sudo[153196]:  neutron : PWD=/ ; USER=root ; COMMAND=/usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --privsep_context neutron.privileged.default --privsep_sock_path /tmp/tmpng8tw8s9/privsep.sock
Oct 13 14:10:54 standalone.localdomain sudo[153196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42435)
Oct 13 14:10:54 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=16799 DF PROTO=TCP SPT=42340 DPT=19885 SEQ=2779711697 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A095715F10000000001030307) 
Oct 13 14:10:54 standalone.localdomain haproxy[70940]: 172.21.0.2:55542 [13/Oct/2025:14:10:54.312] manila manila/standalone.internalapi.localdomain 0/0/0/282/282 200 1080 - - ---- 50/1/0/0/0 0/0 "GET /v2/scheduler-stats/pools/detail HTTP/1.1"
Oct 13 14:10:54 standalone.localdomain haproxy[70940]: 172.21.0.2:33082 [13/Oct/2025:14:10:54.597] cinder cinder/standalone.internalapi.localdomain 0/0/0/9/9 200 5548 - - ---- 50/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/extensions HTTP/1.1"
Oct 13 14:10:54 standalone.localdomain haproxy[70940]: 172.21.0.2:33096 [13/Oct/2025:14:10:54.608] cinder cinder/standalone.internalapi.localdomain 0/0/0/3/3 300 942 - - ---- 50/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:10:54 standalone.localdomain haproxy[70940]: 172.21.0.2:33110 [13/Oct/2025:14:10:54.613] cinder cinder/standalone.internalapi.localdomain 0/0/0/64/64 200 361 - - ---- 50/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/scheduler-stats/get_pools HTTP/1.1"
Oct 13 14:10:54 standalone.localdomain ceph-mon[29756]: pgmap v1073: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:10:54 standalone.localdomain haproxy[70940]: 172.21.0.2:56428 [13/Oct/2025:14:10:54.681] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/36/36 200 377 - - ---- 50/1/0/0/0 0/0 "GET /v2.1/flavors HTTP/1.1"
Oct 13 14:10:54 standalone.localdomain haproxy[70940]: 172.21.0.2:56436 [13/Oct/2025:14:10:54.721] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/45/45 200 822 - - ---- 50/1/0/0/0 0/0 "POST /v2.1/flavors HTTP/1.1"
Oct 13 14:10:54 standalone.localdomain haproxy[70940]: 172.21.0.2:56446 [13/Oct/2025:14:10:54.771] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/20/20 200 405 - - ---- 50/1/0/0/0 0/0 "POST /v2.1/flavors/1f304cc6-f658-4c81-8601-a769ec289d72/os-extra_specs HTTP/1.1"
Oct 13 14:10:54 standalone.localdomain haproxy[70940]: 172.21.0.2:56448 [13/Oct/2025:14:10:54.794] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/13/13 200 823 - - ---- 50/1/0/0/0 0/0 "POST /v2.1/flavors HTTP/1.1"
Oct 13 14:10:54 standalone.localdomain haproxy[70940]: 172.21.0.2:56462 [13/Oct/2025:14:10:54.811] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/21/21 200 405 - - ---- 50/1/0/0/0 0/0 "POST /v2.1/flavors/a4dc5724-1446-4a58-9cd5-9b31504ed897/os-extra_specs HTTP/1.1"
Oct 13 14:10:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:10:54 standalone.localdomain haproxy[70940]: 172.21.0.2:51232 [13/Oct/2025:14:10:54.837] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/19/19 200 242 - - ---- 50/1/0/0/0 0/0 "GET /v2/images HTTP/1.1"
Oct 13 14:10:55 standalone.localdomain sudo[153196]: pam_unix(sudo:session): session closed for user root
Oct 13 14:10:55 standalone.localdomain haproxy[70940]: 172.21.0.2:51236 [13/Oct/2025:14:10:55.493] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/22/22 201 1001 - - ---- 50/1/0/0/0 0/0 "POST /v2/images HTTP/1.1"
Oct 13 14:10:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1074: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:55 standalone.localdomain runuser[153312]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:10:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e47 do_prune osdmap full prune enabled
Oct 13 14:10:55 standalone.localdomain kernel: device tapb7ed1856-72 entered promiscuous mode
Oct 13 14:10:55 standalone.localdomain NetworkManager[5962]: <info>  [1760364655.8451] manager: (tapb7ed1856-72): new Generic device (/org/freedesktop/NetworkManager/Devices/12)
Oct 13 14:10:55 standalone.localdomain systemd-udevd[153328]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 14:10:55 standalone.localdomain ceph-mon[29756]: pgmap v1074: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e48 e48: 1 total, 1 up, 1 in
Oct 13 14:10:55 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e48: 1 total, 1 up, 1 in
Oct 13 14:10:55 standalone.localdomain kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 13 14:10:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:10:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:10:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:10:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:10:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:10:56 standalone.localdomain sudo[153426]:  neutron : PWD=/ ; USER=root ; COMMAND=/usr/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
Oct 13 14:10:56 standalone.localdomain sudo[153426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42435)
Oct 13 14:10:56 standalone.localdomain podman[153425]: 2025-10-13 14:10:56.444772072 +0000 UTC m=+0.095440714 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, tcib_managed=true, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-nova-novncproxy-container, build-date=2025-07-21T15:24:10, container_name=nova_vnc_proxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, 
io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 14:10:56 standalone.localdomain podman[153444]: 2025-10-13 14:10:56.578191844 +0000 UTC m=+0.221489448 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=nova_migration_target, 
io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:10:56 standalone.localdomain podman[153423]: 2025-10-13 14:10:56.477650215 +0000 UTC m=+0.133130954 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, container_name=swift_container_server, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, name=rhosp17/openstack-swift-container, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, release=1, build-date=2025-07-21T15:54:32, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:10:56 standalone.localdomain podman[153424]: 2025-10-13 14:10:56.502640225 +0000 UTC m=+0.156616418 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=swift_account_server, io.buildah.version=1.33.12, vcs-type=git, release=1, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:10:56 standalone.localdomain podman[153422]: 2025-10-13 14:10:56.561549231 +0000 UTC m=+0.216312228 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, build-date=2025-07-21T14:56:28, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, release=1, batch=17.1_20250721.1)
Oct 13 14:10:56 standalone.localdomain runuser[153312]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:10:56 standalone.localdomain podman[153423]: 2025-10-13 14:10:56.693675063 +0000 UTC m=+0.349155802 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-container-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, 
io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, container_name=swift_container_server, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 14:10:56 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:10:56 standalone.localdomain podman[153424]: 2025-10-13 14:10:56.704545088 +0000 UTC m=+0.358521301 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, config_id=tripleo_step4, tcib_managed=true, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, 
vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=swift_account_server, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:10:56 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:10:56 standalone.localdomain podman[153422]: 2025-10-13 14:10:56.743796178 +0000 UTC m=+0.398559165 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=swift_object_server, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, name=rhosp17/openstack-swift-object, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:10:56 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:10:56 standalone.localdomain podman[153588]: 2025-10-13 14:10:56.766708764 +0000 UTC m=+0.064172848 container create b2d77df29e34af655c7c0dafd5c4fb72b58b3b64786c721b0f5671d07c75c50c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-neutron-dhcp-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, name=rhosp17/openstack-neutron-dhcp-agent, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, build-date=2025-07-21T16:28:54, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9)
Oct 13 14:10:56 standalone.localdomain podman[153425]: 2025-10-13 14:10:56.794070447 +0000 UTC m=+0.444739069 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_vnc_proxy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-novncproxy-container, build-date=2025-07-21T15:24:10)
Oct 13 14:10:56 standalone.localdomain runuser[153605]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:10:56 standalone.localdomain systemd[1]: Started libpod-conmon-b2d77df29e34af655c7c0dafd5c4fb72b58b3b64786c721b0f5671d07c75c50c.scope.
Oct 13 14:10:56 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:10:56 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:10:56 standalone.localdomain podman[153588]: 2025-10-13 14:10:56.729152137 +0000 UTC m=+0.026616221 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1
Oct 13 14:10:56 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd5889d2ca5f80aba463de7e4a66b72ebab8dece7bfb670659690bed14931210/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 14:10:56 standalone.localdomain podman[153588]: 2025-10-13 14:10:56.845939137 +0000 UTC m=+0.143403221 container init b2d77df29e34af655c7c0dafd5c4fb72b58b3b64786c721b0f5671d07c75c50c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-dhcp-agent-container, build-date=2025-07-21T16:28:54, name=rhosp17/openstack-neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, version=17.1.9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:10:56 standalone.localdomain podman[153588]: 2025-10-13 14:10:56.851717435 +0000 UTC m=+0.149181519 container start b2d77df29e34af655c7c0dafd5c4fb72b58b3b64786c721b0f5671d07c75c50c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, batch=17.1_20250721.1, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, name=rhosp17/openstack-neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, release=1, com.redhat.component=openstack-neutron-dhcp-agent-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Oct 13 14:10:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e48 do_prune osdmap full prune enabled
Oct 13 14:10:56 standalone.localdomain ceph-mon[29756]: osdmap e48: 1 total, 1 up, 1 in
Oct 13 14:10:56 standalone.localdomain dnsmasq[153655]: started, version 2.85 cachesize 150
Oct 13 14:10:56 standalone.localdomain dnsmasq[153655]: DNS service limited to local subnets
Oct 13 14:10:56 standalone.localdomain dnsmasq[153655]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 14:10:56 standalone.localdomain dnsmasq[153655]: warning: no upstream servers configured
Oct 13 14:10:56 standalone.localdomain dnsmasq-dhcp[153655]: DHCP, static leases only on 192.168.199.0, lease time 1d
Oct 13 14:10:56 standalone.localdomain dnsmasq[153655]: read /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/addn_hosts - 0 addresses
Oct 13 14:10:56 standalone.localdomain dnsmasq-dhcp[153655]: read /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/host
Oct 13 14:10:56 standalone.localdomain dnsmasq-dhcp[153655]: read /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/opts
Oct 13 14:10:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e49 e49: 1 total, 1 up, 1 in
Oct 13 14:10:56 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e49: 1 total, 1 up, 1 in
Oct 13 14:10:56 standalone.localdomain podman[153444]: 2025-10-13 14:10:56.952015356 +0000 UTC m=+0.595312970 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, vcs-type=git)
Oct 13 14:10:56 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:10:57 standalone.localdomain runuser[153605]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:10:57 standalone.localdomain runuser[153811]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:10:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:10:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3753288602' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:10:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:10:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3753288602' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:10:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1077: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e49 do_prune osdmap full prune enabled
Oct 13 14:10:57 standalone.localdomain ceph-mon[29756]: osdmap e49: 1 total, 1 up, 1 in
Oct 13 14:10:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3753288602' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:10:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3753288602' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:10:57 standalone.localdomain ceph-mon[29756]: pgmap v1077: 177 pgs: 177 active+clean; 451 KiB data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Oct 13 14:10:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e50 e50: 1 total, 1 up, 1 in
Oct 13 14:10:57 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e50: 1 total, 1 up, 1 in
Oct 13 14:10:57 standalone.localdomain haproxy[70940]: 172.21.0.2:51248 [13/Oct/2025:14:10:55.518] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/2457/2457 204 188 - - ---- 50/1/0/0/0 0/0 "PUT /v2/images/4cc48d12-8835-4f87-ba22-f39b71f102d9/file HTTP/1.1"
Oct 13 14:10:58 standalone.localdomain haproxy[70940]: 172.21.0.2:51252 [13/Oct/2025:14:10:57.979] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/21/21 200 1104 - - ---- 50/1/0/0/0 0/0 "GET /v2/images HTTP/1.1"
Oct 13 14:10:58 standalone.localdomain haproxy[70940]: 172.21.0.2:51266 [13/Oct/2025:14:10:58.002] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/19/19 201 1005 - - ---- 50/1/0/0/0 0/0 "POST /v2/images HTTP/1.1"
Oct 13 14:10:58 standalone.localdomain runuser[153811]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:10:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e50 do_prune osdmap full prune enabled
Oct 13 14:10:58 standalone.localdomain ceph-mon[29756]: osdmap e50: 1 total, 1 up, 1 in
Oct 13 14:10:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e51 e51: 1 total, 1 up, 1 in
Oct 13 14:10:58 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e51: 1 total, 1 up, 1 in
Oct 13 14:10:58 standalone.localdomain sshd[147676]: Received disconnect from 192.168.122.11 port 48244:11: disconnected by user
Oct 13 14:10:58 standalone.localdomain sshd[147676]: Disconnected from user root 192.168.122.11 port 48244
Oct 13 14:10:58 standalone.localdomain sshd[147514]: pam_unix(sshd:session): session closed for user root
Oct 13 14:10:58 standalone.localdomain systemd[1]: session-42.scope: Deactivated successfully.
Oct 13 14:10:58 standalone.localdomain systemd[1]: session-42.scope: Consumed 1.490s CPU time.
Oct 13 14:10:58 standalone.localdomain systemd-logind[45629]: Session 42 logged out. Waiting for processes to exit.
Oct 13 14:10:58 standalone.localdomain systemd-logind[45629]: Removed session 42.
Oct 13 14:10:59 standalone.localdomain haproxy[70940]: 172.21.0.2:51274 [13/Oct/2025:14:10:58.023] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/999/999 204 188 - - ---- 51/1/0/0/0 0/0 "PUT /v2/images/671a050b-a98d-4371-8e0e-c6d9b13e3835/file HTTP/1.1"
Oct 13 14:10:59 standalone.localdomain haproxy[70940]: 172.21.0.2:51970 [13/Oct/2025:14:10:59.025] neutron neutron/standalone.internalapi.localdomain 0/0/0/98/98 200 1664 - - ---- 50/1/0/0/0 0/0 "GET /v2.0/networks HTTP/1.1"
Oct 13 14:10:59 standalone.localdomain haproxy[70940]: 172.21.0.2:33112 [13/Oct/2025:14:10:59.128] cinder cinder/standalone.internalapi.localdomain 0/0/0/14/14 200 500 - - ---- 50/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/os-services?binary=cinder-backup HTTP/1.1"
Oct 13 14:10:59 standalone.localdomain haproxy[70940]: Connect from 172.21.0.2:56754 to 172.21.0.2:80 (horizon/HTTP)
Oct 13 14:10:59 standalone.localdomain haproxy[70940]: Connect from 172.21.0.2:56764 to 172.21.0.2:80 (horizon/HTTP)
Oct 13 14:10:59 standalone.localdomain haproxy[70940]: 172.21.0.2:40544 [13/Oct/2025:14:10:59.364] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/5/5 200 29891 - - ---- 50/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:10:59 standalone.localdomain sudo[152541]: pam_unix(sudo:session): session closed for user root
Oct 13 14:10:59 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=11314 DF PROTO=TCP SPT=49792 DPT=19885 SEQ=1750138889 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A095729A60000000001030307) 
Oct 13 14:10:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1080: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail; 63 KiB/s rd, 7.8 MiB/s wr, 90 op/s
Oct 13 14:10:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:10:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:10:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:10:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:10:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:10:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:10:59 standalone.localdomain sudo[154129]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jnubrkvbfxfyivozeqizinsgceadptge ; /usr/bin/python3
Oct 13 14:10:59 standalone.localdomain sudo[154129]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:10:59 standalone.localdomain podman[154134]: 2025-10-13 14:10:59.813818102 +0000 UTC m=+0.074155136 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.33.12, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=nova_metadata, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, release=1, batch=17.1_20250721.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f)
Oct 13 14:10:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:10:59 standalone.localdomain podman[154133]: 2025-10-13 14:10:59.831963632 +0000 UTC m=+0.092724819 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container, container_name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 14:10:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:10:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:10:59 standalone.localdomain systemd[1]: tmp-crun.SmujqM.mount: Deactivated successfully.
Oct 13 14:10:59 standalone.localdomain podman[154134]: 2025-10-13 14:10:59.895093847 +0000 UTC m=+0.155430901 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, distribution-scope=public, release=1, container_name=nova_metadata, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T16:05:11, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1)
Oct 13 14:10:59 standalone.localdomain ceph-mon[29756]: osdmap e51: 1 total, 1 up, 1 in
Oct 13 14:10:59 standalone.localdomain ceph-mon[29756]: pgmap v1080: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail; 63 KiB/s rd, 7.8 MiB/s wr, 90 op/s
Oct 13 14:10:59 standalone.localdomain python3[154163]: ansible-ansible.legacy.command Invoked with _raw_params=tempest run --config-file /home/zuul/ci-framework-data/tests/pre-adoption-tempest//dpa_tempest_workspace/etc/tempest.conf -r tempest.api.identity zuul_log_id=fa163ec2-ffbe-5cff-7786-00000000003e-1-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:10:59 standalone.localdomain podman[154206]: 2025-10-13 14:10:59.922623606 +0000 UTC m=+0.088718216 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, release=1, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api, summary=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, com.redhat.component=openstack-glance-api-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Oct 13 14:10:59 standalone.localdomain podman[154136]: 2025-10-13 14:10:59.887549504 +0000 UTC m=+0.132133303 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, container_name=ovn_controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:10:59 standalone.localdomain podman[154229]: 2025-10-13 14:10:59.942687504 +0000 UTC m=+0.075829128 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, distribution-scope=public, release=1, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=glance_api_internal, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:10:59 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:10:59 standalone.localdomain podman[154130]: 2025-10-13 14:10:59.919609133 +0000 UTC m=+0.180398161 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, release=1, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, version=17.1.9, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:10:59 standalone.localdomain podman[154136]: 2025-10-13 14:10:59.973817673 +0000 UTC m=+0.218401492 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, version=17.1.9, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, release=1, io.buildah.version=1.33.12)
Oct 13 14:10:59 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:10:59 standalone.localdomain podman[154130]: 2025-10-13 14:10:59.998829634 +0000 UTC m=+0.259618632 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, summary=Red Hat OpenStack Platform 17.1 glance-api, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, container_name=glance_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12)
Oct 13 14:11:00 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:11:00 standalone.localdomain podman[154206]: 2025-10-13 14:11:00.102712466 +0000 UTC m=+0.268807076 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-glance-api-container, distribution-scope=public, name=rhosp17/openstack-glance-api, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, architecture=x86_64)
Oct 13 14:11:00 standalone.localdomain podman[154133]: 2025-10-13 14:11:00.121742783 +0000 UTC m=+0.382503970 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, io.openshift.expose-services=, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vendor=Red Hat, Inc., release=1, version=17.1.9, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-swift-proxy-server, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12)
Oct 13 14:11:00 standalone.localdomain podman[154131]: 2025-10-13 14:10:59.925822114 +0000 UTC m=+0.186603822 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, build-date=2025-07-21T13:58:12, release=1, version=17.1.9, distribution-scope=public, tcib_managed=true, architecture=x86_64, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, summary=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-placement-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-placement-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=placement_api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12, batch=17.1_20250721.1)
Oct 13 14:11:00 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:11:00 standalone.localdomain podman[154131]: 2025-10-13 14:11:00.167808603 +0000 UTC m=+0.428590291 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 placement-api, distribution-scope=public, build-date=2025-07-21T13:58:12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, container_name=placement_api, io.buildah.version=1.33.12, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, com.redhat.component=openstack-placement-api-container)
Oct 13 14:11:00 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:11:00 standalone.localdomain podman[154135]: 2025-10-13 14:11:00.126981534 +0000 UTC m=+0.381786588 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1)
Oct 13 14:11:00 standalone.localdomain podman[154229]: 2025-10-13 14:11:00.192143342 +0000 UTC m=+0.325285016 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.openshift.expose-services=, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:11:00 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:11:00 standalone.localdomain podman[154135]: 2025-10-13 14:11:00.213875203 +0000 UTC m=+0.468680237 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9)
Oct 13 14:11:00 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:11:00 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:11:00 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=11315 DF PROTO=TCP SPT=49792 DPT=19885 SEQ=1750138889 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09572DB10000000001030307) 
Oct 13 14:11:01 standalone.localdomain podman[154347]: 2025-10-13 14:11:01.500389125 +0000 UTC m=+0.068813031 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:45, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, name=rhosp17/openstack-mariadb, release=1)
Oct 13 14:11:01 standalone.localdomain podman[154347]: 2025-10-13 14:11:01.531875895 +0000 UTC m=+0.100299831 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, name=rhosp17/openstack-mariadb, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, build-date=2025-07-21T12:58:45, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 14:11:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1081: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail; 43 KiB/s rd, 5.4 MiB/s wr, 62 op/s
Oct 13 14:11:01 standalone.localdomain podman[154379]: 2025-10-13 14:11:01.728694102 +0000 UTC m=+0.072158175 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, version=17.1.9, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-haproxy-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, description=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, build-date=2025-07-21T13:08:11, name=rhosp17/openstack-haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:11:01 standalone.localdomain podman[154379]: 2025-10-13 14:11:01.761805293 +0000 UTC m=+0.105269316 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.buildah.version=1.33.12, com.redhat.component=openstack-haproxy-container, architecture=x86_64, name=rhosp17/openstack-haproxy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11)
Oct 13 14:11:01 standalone.localdomain systemd[1]: tmp-crun.XyFBBc.mount: Deactivated successfully.
Oct 13 14:11:01 standalone.localdomain podman[154413]: 2025-10-13 14:11:01.889845459 +0000 UTC m=+0.066092918 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, architecture=x86_64, build-date=2025-07-21T13:08:05, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-rabbitmq-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, tcib_managed=true, distribution-scope=public)
Oct 13 14:11:01 standalone.localdomain podman[154413]: 2025-10-13 14:11:01.921012519 +0000 UTC m=+0.097260078 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, vcs-type=git, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, build-date=2025-07-21T13:08:05, com.redhat.component=openstack-rabbitmq-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 13 14:11:02 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=11316 DF PROTO=TCP SPT=49792 DPT=19885 SEQ=1750138889 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A095735B10000000001030307) 
Oct 13 14:11:02 standalone.localdomain ceph-mon[29756]: pgmap v1081: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail; 43 KiB/s rd, 5.4 MiB/s wr, 62 op/s
Oct 13 14:11:03 standalone.localdomain haproxy[70940]: 172.21.0.2:58172 [13/Oct/2025:14:11:02.800] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/294/294 201 8100 - - ---- 54/7/6/6/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:03 standalone.localdomain haproxy[70940]: 172.21.0.2:58178 [13/Oct/2025:14:11:02.821] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/540/540 201 8100 - - ---- 55/8/7/7/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:03 standalone.localdomain haproxy[70940]: 172.21.0.2:58190 [13/Oct/2025:14:11:02.836] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/800/800 201 8100 - - ---- 55/8/7/7/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1082: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail; 37 KiB/s rd, 4.6 MiB/s wr, 53 op/s
Oct 13 14:11:03 standalone.localdomain haproxy[70940]: 172.21.0.2:58198 [13/Oct/2025:14:11:02.891] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1013/1013 201 8100 - - ---- 55/8/7/7/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:04 standalone.localdomain haproxy[70940]: 172.21.0.2:58202 [13/Oct/2025:14:11:03.043] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1125/1125 201 8100 - - ---- 55/8/7/7/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:04 standalone.localdomain haproxy[70940]: 172.21.0.2:58212 [13/Oct/2025:14:11:03.052] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1404/1404 201 8100 - - ---- 55/8/7/7/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:11:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:11:04 standalone.localdomain haproxy[70940]: 172.21.0.2:58228 [13/Oct/2025:14:11:03.094] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1632/1632 201 8100 - - ---- 55/8/7/7/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:04 standalone.localdomain ceph-mon[29756]: pgmap v1082: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail; 37 KiB/s rd, 4.6 MiB/s wr, 53 op/s
Oct 13 14:11:04 standalone.localdomain systemd[1]: tmp-crun.TE3gmA.mount: Deactivated successfully.
Oct 13 14:11:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:11:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e51 do_prune osdmap full prune enabled
Oct 13 14:11:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 e52: 1 total, 1 up, 1 in
Oct 13 14:11:04 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e52: 1 total, 1 up, 1 in
Oct 13 14:11:04 standalone.localdomain podman[154484]: 2025-10-13 14:11:04.862056839 +0000 UTC m=+0.125792949 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, release=1, description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-keystone, container_name=keystone_cron, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:27:18, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:11:04 standalone.localdomain podman[154484]: 2025-10-13 14:11:04.870712145 +0000 UTC m=+0.134448265 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, architecture=x86_64, com.redhat.component=openstack-keystone-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, distribution-scope=public, version=17.1.9, build-date=2025-07-21T13:27:18)
Oct 13 14:11:04 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:11:04 standalone.localdomain podman[154485]: 2025-10-13 14:11:04.831699023 +0000 UTC m=+0.091739328 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, name=rhosp17/openstack-nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.9)
Oct 13 14:11:04 standalone.localdomain podman[154485]: 2025-10-13 14:11:04.910289275 +0000 UTC m=+0.170329570 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-api-container, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, distribution-scope=public, release=1, vcs-type=git, build-date=2025-07-21T16:05:11, container_name=nova_api, name=rhosp17/openstack-nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=)
Oct 13 14:11:04 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:11:05 standalone.localdomain haproxy[70940]: 172.21.0.2:58244 [13/Oct/2025:14:11:03.100] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1911/1911 201 8100 - - ---- 55/8/7/7/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:05 standalone.localdomain haproxy[70940]: 172.21.0.2:58256 [13/Oct/2025:14:11:03.186] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2092/2092 201 8100 - - ---- 55/7/6/6/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:05 standalone.localdomain podman[154533]: 2025-10-13 14:11:05.399929257 +0000 UTC m=+0.129308346 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, vcs-type=git, com.redhat.component=openstack-cinder-volume-container, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-volume, summary=Red Hat OpenStack Platform 17.1 cinder-volume, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, build-date=2025-07-21T16:13:39, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-cinder-volume, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc.)
Oct 13 14:11:05 standalone.localdomain podman[154533]: 2025-10-13 14:11:05.428459746 +0000 UTC m=+0.157838835 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, io.buildah.version=1.33.12, release=1, distribution-scope=public, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, description=Red Hat OpenStack Platform 17.1 cinder-volume, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, summary=Red Hat OpenStack Platform 17.1 cinder-volume, build-date=2025-07-21T16:13:39, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, name=rhosp17/openstack-cinder-volume, version=17.1.9, com.redhat.component=openstack-cinder-volume-container)
Oct 13 14:11:05 standalone.localdomain haproxy[70940]: 172.21.0.2:58258 [13/Oct/2025:14:11:03.368] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2186/2186 201 8100 - - ---- 55/7/6/6/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1084: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail; 32 KiB/s rd, 4.0 MiB/s wr, 46 op/s
Oct 13 14:11:05 standalone.localdomain systemd[1]: tmp-crun.bhZgd9.mount: Deactivated successfully.
Oct 13 14:11:05 standalone.localdomain haproxy[70940]: 172.21.0.2:58266 [13/Oct/2025:14:11:03.641] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2187/2187 201 8100 - - ---- 55/6/5/5/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:05 standalone.localdomain ceph-mon[29756]: osdmap e52: 1 total, 1 up, 1 in
Oct 13 14:11:05 standalone.localdomain ceph-mon[29756]: pgmap v1084: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail; 32 KiB/s rd, 4.0 MiB/s wr, 46 op/s
Oct 13 14:11:06 standalone.localdomain haproxy[70940]: 172.21.0.2:58280 [13/Oct/2025:14:11:03.914] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2250/2250 201 8100 - - ---- 55/5/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:11:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:11:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:11:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:11:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:11:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:11:06 standalone.localdomain systemd[1]: tmp-crun.DAi8q5.mount: Deactivated successfully.
Oct 13 14:11:06 standalone.localdomain podman[154580]: 2025-10-13 14:11:06.444533293 +0000 UTC m=+0.099221909 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, batch=17.1_20250721.1, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-dhcp-agent, release=1, version=17.1.9, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vcs-type=git, build-date=2025-07-21T16:28:54, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, container_name=neutron_dhcp, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:11:06 standalone.localdomain podman[154600]: 2025-10-13 14:11:06.462415424 +0000 UTC m=+0.098377353 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, name=rhosp17/openstack-nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.component=openstack-nova-api-container, io.openshift.expose-services=, release=1, vcs-type=git, build-date=2025-07-21T16:05:11)
Oct 13 14:11:06 standalone.localdomain haproxy[70940]: 172.21.0.2:58292 [13/Oct/2025:14:11:04.176] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2311/2311 201 8100 - - ---- 55/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:06 standalone.localdomain podman[154592]: 2025-10-13 14:11:06.493478002 +0000 UTC m=+0.136396355 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-sriov-agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, io.buildah.version=1.33.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent, batch=17.1_20250721.1, build-date=2025-07-21T16:03:34, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.9, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vcs-type=git, vendor=Red Hat, Inc., release=1)
Oct 13 14:11:06 standalone.localdomain podman[154580]: 2025-10-13 14:11:06.498720623 +0000 UTC m=+0.153409289 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, name=rhosp17/openstack-neutron-dhcp-agent, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-07-21T16:28:54, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, container_name=neutron_dhcp)
Oct 13 14:11:06 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:11:06 standalone.localdomain podman[154582]: 2025-10-13 14:11:06.542567205 +0000 UTC m=+0.185982143 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., container_name=barbican_keystone_listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T16:18:19, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-barbican-keystone-listener-container, name=rhosp17/openstack-barbican-keystone-listener, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:11:06 standalone.localdomain podman[154594]: 2025-10-13 14:11:06.588532561 +0000 UTC m=+0.229557796 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-type=git, description=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 barbican-worker, batch=17.1_20250721.1, name=rhosp17/openstack-barbican-worker, tcib_managed=true, com.redhat.component=openstack-barbican-worker-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T15:36:22, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, container_name=barbican_worker, release=1)
Oct 13 14:11:06 standalone.localdomain podman[154600]: 2025-10-13 14:11:06.595272229 +0000 UTC m=+0.231234138 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, com.redhat.component=openstack-nova-api-container, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.buildah.version=1.33.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_api_cron, description=Red Hat OpenStack Platform 17.1 nova-api, release=1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public)
Oct 13 14:11:06 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:11:06 standalone.localdomain podman[154581]: 2025-10-13 14:11:06.6089264 +0000 UTC m=+0.262922495 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, com.redhat.component=openstack-barbican-api-container, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, container_name=barbican_api, description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T15:22:44, name=rhosp17/openstack-barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, tcib_managed=true)
Oct 13 14:11:06 standalone.localdomain podman[154592]: 2025-10-13 14:11:06.614594775 +0000 UTC m=+0.257513168 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, vendor=Red Hat, Inc., release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, build-date=2025-07-21T16:03:34, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, name=rhosp17/openstack-neutron-sriov-agent, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4)
Oct 13 14:11:06 standalone.localdomain podman[154582]: 2025-10-13 14:11:06.623919162 +0000 UTC m=+0.267334120 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, architecture=x86_64, com.redhat.component=openstack-barbican-keystone-listener-container, batch=17.1_20250721.1, name=rhosp17/openstack-barbican-keystone-listener, build-date=2025-07-21T16:18:19, config_id=tripleo_step3, container_name=barbican_keystone_listener, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git)
Oct 13 14:11:06 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:11:06 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:11:06 standalone.localdomain podman[154581]: 2025-10-13 14:11:06.639253055 +0000 UTC m=+0.293249140 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, container_name=barbican_api, name=rhosp17/openstack-barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-barbican-api-container, release=1, io.buildah.version=1.33.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T15:22:44)
Oct 13 14:11:06 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:11:06 standalone.localdomain podman[154594]: 2025-10-13 14:11:06.692342721 +0000 UTC m=+0.333367966 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, description=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, release=1, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=barbican_worker, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, version=17.1.9, config_id=tripleo_step3, build-date=2025-07-21T15:36:22, com.redhat.component=openstack-barbican-worker-container)
Oct 13 14:11:06 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:11:06 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=11317 DF PROTO=TCP SPT=49792 DPT=19885 SEQ=1750138889 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A095745710000000001030307) 
Oct 13 14:11:06 standalone.localdomain haproxy[70940]: 172.21.0.2:58302 [13/Oct/2025:14:11:04.464] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2307/2307 201 8100 - - ---- 55/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:06 standalone.localdomain systemd[1]: tmp-crun.JYSkGt.mount: Deactivated successfully.
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 172.21.0.2:58308 [13/Oct/2025:14:11:04.732] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2356/2356 201 8100 - - ---- 55/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 192.168.122.99:43238 [13/Oct/2025:14:11:05.016] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/2089/2089 200 510 - - ---- 55/7/6/6/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 172.21.0.2:58312 [13/Oct/2025:14:11:05.286] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2092/2092 201 8100 - - ---- 55/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 192.168.122.99:43246 [13/Oct/2025:14:11:05.558] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1836/1836 200 510 - - ---- 55/8/7/7/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 192.168.122.99:43252 [13/Oct/2025:14:11:05.832] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1577/1577 200 510 - - ---- 55/8/7/7/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 192.168.122.99:43258 [13/Oct/2025:14:11:06.168] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1254/1254 200 510 - - ---- 55/8/7/7/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 192.168.122.99:43270 [13/Oct/2025:14:11:06.491] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/944/944 200 510 - - ---- 56/9/8/7/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 192.168.122.99:43278 [13/Oct/2025:14:11:06.776] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/674/674 200 510 - - ---- 55/8/7/7/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 192.168.122.99:43284 [13/Oct/2025:14:11:07.090] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/375/375 200 510 - - ---- 55/8/7/7/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 192.168.122.99:43290 [13/Oct/2025:14:11:07.114] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/372/372 201 578 - - ---- 55/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 192.168.122.99:43300 [13/Oct/2025:14:11:07.382] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/149/149 200 510 - - ---- 55/8/7/7/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 192.168.122.99:43306 [13/Oct/2025:14:11:07.400] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/158/158 201 574 - - ---- 55/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 192.168.122.99:43314 [13/Oct/2025:14:11:07.415] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/201/201 201 610 - - ---- 55/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1085: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail; 29 KiB/s rd, 3.6 MiB/s wr, 41 op/s
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 192.168.122.99:43328 [13/Oct/2025:14:11:07.429] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/247/247 201 582 - - ---- 55/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 192.168.122.99:43338 [13/Oct/2025:14:11:07.440] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/289/289 201 582 - - ---- 55/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 192.168.122.99:43348 [13/Oct/2025:14:11:07.453] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/328/328 201 596 - - ---- 55/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:07 standalone.localdomain haproxy[70940]: 192.168.122.99:43362 [13/Oct/2025:14:11:07.470] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/365/365 201 590 - - ---- 55/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:08 standalone.localdomain haproxy[70940]: 192.168.122.99:43370 [13/Oct/2025:14:11:07.489] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/639/639 201 500 - - ---- 55/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:08 standalone.localdomain haproxy[70940]: 192.168.122.99:43376 [13/Oct/2025:14:11:07.539] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/617/617 201 582 - - ---- 55/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:08 standalone.localdomain haproxy[70940]: 192.168.122.99:43380 [13/Oct/2025:14:11:07.562] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/882/882 201 498 - - ---- 55/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:08 standalone.localdomain runuser[154917]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:11:08 standalone.localdomain haproxy[70940]: 192.168.122.99:43382 [13/Oct/2025:14:11:07.620] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1084/1084 201 516 - - ---- 55/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:08 standalone.localdomain ceph-mon[29756]: pgmap v1085: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail; 29 KiB/s rd, 3.6 MiB/s wr, 41 op/s
Oct 13 14:11:08 standalone.localdomain systemd[1]: Stopping User Manager for UID 0...
Oct 13 14:11:08 standalone.localdomain systemd[147530]: Activating special unit Exit the Session...
Oct 13 14:11:08 standalone.localdomain systemd[147530]: Stopped target Main User Target.
Oct 13 14:11:08 standalone.localdomain systemd[147530]: Stopped target Basic System.
Oct 13 14:11:08 standalone.localdomain systemd[147530]: Stopped target Paths.
Oct 13 14:11:08 standalone.localdomain systemd[147530]: Stopped target Sockets.
Oct 13 14:11:08 standalone.localdomain systemd[147530]: Stopped target Timers.
Oct 13 14:11:08 standalone.localdomain systemd[147530]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 14:11:08 standalone.localdomain systemd[147530]: Closed D-Bus User Message Bus Socket.
Oct 13 14:11:08 standalone.localdomain systemd[147530]: Stopped Create User's Volatile Files and Directories.
Oct 13 14:11:08 standalone.localdomain systemd[147530]: Removed slice User Application Slice.
Oct 13 14:11:08 standalone.localdomain systemd[147530]: Reached target Shutdown.
Oct 13 14:11:08 standalone.localdomain systemd[147530]: Finished Exit the Session.
Oct 13 14:11:08 standalone.localdomain systemd[147530]: Reached target Exit the Session.
Oct 13 14:11:08 standalone.localdomain systemd[1]: user@0.service: Deactivated successfully.
Oct 13 14:11:08 standalone.localdomain systemd[1]: Stopped User Manager for UID 0.
Oct 13 14:11:08 standalone.localdomain haproxy[70940]: 192.168.122.99:43386 [13/Oct/2025:14:11:07.679] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1303/1303 201 502 - - ---- 55/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:08 standalone.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 13 14:11:08 standalone.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 13 14:11:08 standalone.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 13 14:11:08 standalone.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 13 14:11:09 standalone.localdomain systemd[1]: Removed slice User Slice of UID 0.
Oct 13 14:11:09 standalone.localdomain systemd[1]: user-0.slice: Consumed 2.362s CPU time.
Oct 13 14:11:09 standalone.localdomain haproxy[70940]: 192.168.122.99:43394 [13/Oct/2025:14:11:07.731] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1523/1523 201 502 - - ---- 55/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:09 standalone.localdomain runuser[154917]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:11:09 standalone.localdomain runuser[155071]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:11:09 standalone.localdomain haproxy[70940]: 192.168.122.99:43402 [13/Oct/2025:14:11:07.785] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1735/1735 201 509 - - ---- 55/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1086: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:09 standalone.localdomain haproxy[70940]: 192.168.122.99:43412 [13/Oct/2025:14:11:07.838] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1955/1955 201 506 - - ---- 55/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:09 standalone.localdomain haproxy[70940]: 192.168.122.99:43418 [13/Oct/2025:14:11:08.130] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1686/1686 200 2700 - - ---- 55/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:43430 [13/Oct/2025:14:11:08.158] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1918/1918 201 502 - - ---- 55/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain runuser[155071]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:11:10 standalone.localdomain runuser[155125]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:43442 [13/Oct/2025:14:11:08.445] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1661/1661 200 2700 - - ---- 55/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:43458 [13/Oct/2025:14:11:08.707] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1417/1417 200 2700 - - ---- 55/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:43472 [13/Oct/2025:14:11:08.984] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1153/1153 200 2700 - - ---- 56/9/8/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:43478 [13/Oct/2025:14:11:09.259] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/892/892 200 2700 - - ---- 55/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:43482 [13/Oct/2025:14:11:09.523] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/640/640 200 2700 - - ---- 56/9/8/8/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34492 [13/Oct/2025:14:11:09.795] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/381/381 200 2700 - - ---- 55/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34508 [13/Oct/2025:14:11:09.819] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/384/384 204 165 - - ---- 55/8/7/7/0 0/0 "PUT /v3/projects/cc054b469f9740b3abe54a3e348e7912/users/9246bdae8a134c218b20550a2f4e554c/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34524 [13/Oct/2025:14:11:10.077] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/171/171 200 2700 - - ---- 55/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34530 [13/Oct/2025:14:11:10.109] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/172/172 204 165 - - ---- 55/8/7/7/0 0/0 "PUT /v3/projects/a70ed38184164aba8e34a63bbbacc51f/users/17d8f35a3335444f8e6303657404a3d0/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34534 [13/Oct/2025:14:11:10.126] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/206/206 204 165 - - ---- 55/8/7/7/0 0/0 "PUT /v3/projects/995d7261fda2423391fc1ff4ea567667/users/f657c9375b254705bd6ef49a390a14a7/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34550 [13/Oct/2025:14:11:10.139] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/252/252 204 165 - - ---- 55/8/7/7/0 0/0 "PUT /v3/projects/7ab28603f6f94e898dbe5b4dae450b58/users/546e2e1736164c389984f658c8fbfd57/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34556 [13/Oct/2025:14:11:10.152] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/300/300 204 165 - - ---- 55/8/7/7/0 0/0 "PUT /v3/projects/c2fe7618d8674deda12a14c2ae3528ab/users/6664c1481bd143399b877a8eeb01fd60/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34560 [13/Oct/2025:14:11:10.166] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/354/354 204 165 - - ---- 55/8/7/7/0 0/0 "PUT /v3/projects/53bee85e2faf4ca9a286c485d060df94/users/30424e168f744318a01216b07f845ee0/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34562 [13/Oct/2025:14:11:10.180] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/406/406 204 165 - - ---- 55/8/7/7/0 0/0 "PUT /v3/projects/ec9c5c705d494e1897b1c4b3a1631d56/users/73f866596f6d49a59be43453e4b6c08d/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34572 [13/Oct/2025:14:11:10.205] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/425/425 200 2700 - - ---- 55/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34582 [13/Oct/2025:14:11:10.250] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/408/408 204 165 - - ---- 55/8/7/7/0 0/0 "PUT /v3/projects/a7b02bed147c4a6f87f4a733e2f52dc5/users/edc137d8358f4729be004bfb04f36f53/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34596 [13/Oct/2025:14:11:10.283] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/419/419 200 2700 - - ---- 55/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34598 [13/Oct/2025:14:11:10.334] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/383/383 200 2700 - - ---- 55/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain ceph-mon[29756]: pgmap v1086: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34614 [13/Oct/2025:14:11:10.394] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/337/337 200 2700 - - ---- 56/9/8/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34630 [13/Oct/2025:14:11:10.454] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/291/291 200 2700 - - ---- 55/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34638 [13/Oct/2025:14:11:10.523] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/237/237 200 2700 - - ---- 56/9/8/8/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34642 [13/Oct/2025:14:11:10.589] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/184/184 200 2700 - - ---- 55/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34650 [13/Oct/2025:14:11:10.634] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/156/156 204 165 - - ---- 55/8/7/7/0 0/0 "PUT /v3/projects/cc054b469f9740b3abe54a3e348e7912/users/9246bdae8a134c218b20550a2f4e554c/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain runuser[155125]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34662 [13/Oct/2025:14:11:10.660] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/171/171 200 2700 - - ---- 55/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34678 [13/Oct/2025:14:11:10.705] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/142/142 204 165 - - ---- 55/7/6/6/0 0/0 "PUT /v3/projects/a70ed38184164aba8e34a63bbbacc51f/users/17d8f35a3335444f8e6303657404a3d0/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34686 [13/Oct/2025:14:11:10.720] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/170/170 204 165 - - ---- 55/6/5/5/0 0/0 "PUT /v3/projects/995d7261fda2423391fc1ff4ea567667/users/f657c9375b254705bd6ef49a390a14a7/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34690 [13/Oct/2025:14:11:10.734] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/202/202 204 165 - - ---- 55/5/4/4/0 0/0 "PUT /v3/projects/7ab28603f6f94e898dbe5b4dae450b58/users/546e2e1736164c389984f658c8fbfd57/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:10 standalone.localdomain haproxy[70940]: 192.168.122.99:34702 [13/Oct/2025:14:11:10.748] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/229/229 204 165 - - ---- 55/4/3/3/0 0/0 "PUT /v3/projects/c2fe7618d8674deda12a14c2ae3528ab/users/6664c1481bd143399b877a8eeb01fd60/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:11 standalone.localdomain haproxy[70940]: 192.168.122.99:34704 [13/Oct/2025:14:11:10.762] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/256/256 204 165 - - ---- 55/3/2/2/0 0/0 "PUT /v3/projects/53bee85e2faf4ca9a286c485d060df94/users/30424e168f744318a01216b07f845ee0/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:11 standalone.localdomain haproxy[70940]: 192.168.122.99:34714 [13/Oct/2025:14:11:10.777] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/287/287 204 165 - - ---- 55/2/1/1/0 0/0 "PUT /v3/projects/ec9c5c705d494e1897b1c4b3a1631d56/users/73f866596f6d49a59be43453e4b6c08d/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:11 standalone.localdomain haproxy[70940]: 172.21.0.2:35280 [13/Oct/2025:14:11:10.799] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/665/665 201 8116 - - ---- 55/7/6/6/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:11 standalone.localdomain haproxy[70940]: 192.168.122.99:34722 [13/Oct/2025:14:11:10.834] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/694/694 204 165 - - ---- 55/2/1/1/0 0/0 "PUT /v3/projects/a7b02bed147c4a6f87f4a733e2f52dc5/users/edc137d8358f4729be004bfb04f36f53/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1087: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:11 standalone.localdomain haproxy[70940]: 172.21.0.2:35288 [13/Oct/2025:14:11:10.854] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1024/1024 201 8112 - - ---- 56/7/6/6/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:12 standalone.localdomain haproxy[70940]: 172.21.0.2:35298 [13/Oct/2025:14:11:10.896] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1302/1303 201 8148 - - ---- 55/6/5/5/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:11:12 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:11:12 standalone.localdomain recover_tripleo_nova_virtqemud[155213]: 93291
Oct 13 14:11:12 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:11:12 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:11:12 standalone.localdomain haproxy[70940]: 172.21.0.2:35314 [13/Oct/2025:14:11:10.943] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1599/1599 201 8120 - - ---- 55/5/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:12 standalone.localdomain podman[155211]: 2025-10-13 14:11:12.597213854 +0000 UTC m=+0.104368933 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-northd, container_name=ovn_cluster_northd, tcib_managed=true, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, build-date=2025-07-21T13:30:04, release=1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-northd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=ovn_cluster_northd, managed_by=tripleo_ansible, version=17.1.9)
Oct 13 14:11:12 standalone.localdomain podman[155211]: 2025-10-13 14:11:12.624930175 +0000 UTC m=+0.132085284 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-northd, version=17.1.9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-northd, com.redhat.component=openstack-ovn-northd-container, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, build-date=2025-07-21T13:30:04, config_id=ovn_cluster_northd, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, io.openshift.expose-services=, container_name=ovn_cluster_northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team)
Oct 13 14:11:12 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:11:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:11:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:11:12 standalone.localdomain ceph-mon[29756]: pgmap v1087: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:12 standalone.localdomain podman[155240]: 2025-10-13 14:11:12.835396122 +0000 UTC m=+0.092038374 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.9, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true)
Oct 13 14:11:12 standalone.localdomain haproxy[70940]: 172.21.0.2:35322 [13/Oct/2025:14:11:10.984] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1872/1872 201 8120 - - ---- 55/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:12 standalone.localdomain podman[155240]: 2025-10-13 14:11:12.886054836 +0000 UTC m=+0.142697068 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, architecture=x86_64)
Oct 13 14:11:12 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:11:12 standalone.localdomain podman[155241]: 2025-10-13 14:11:12.952854095 +0000 UTC m=+0.200584194 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, name=rhosp17/openstack-mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=clustercheck, managed_by=tripleo_ansible, 
distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_id=tripleo_step2)
Oct 13 14:11:13 standalone.localdomain podman[155241]: 2025-10-13 14:11:13.001676214 +0000 UTC m=+0.249406323 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=clustercheck, io.k8s.description=Red Hat OpenStack 
Platform 17.1 mariadb, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:11:13 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:11:13 standalone.localdomain haproxy[70940]: 172.21.0.2:35326 [13/Oct/2025:14:11:11.022] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2149/2149 201 8134 - - ---- 56/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:13 standalone.localdomain haproxy[70940]: 172.21.0.2:35332 [13/Oct/2025:14:11:11.070] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2416/2416 201 8128 - - ---- 55/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:13 standalone.localdomain haproxy[70940]: 192.168.122.99:34738 [13/Oct/2025:14:11:11.466] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/2074/2074 201 580 - - ---- 55/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:13 standalone.localdomain systemd[1]: tmp-crun.LJv1WF.mount: Deactivated successfully.
Oct 13 14:11:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1088: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:13 standalone.localdomain haproxy[70940]: 172.21.0.2:35338 [13/Oct/2025:14:11:11.533] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2336/2336 201 8120 - - ---- 56/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:13 standalone.localdomain haproxy[70940]: 192.168.122.99:34746 [13/Oct/2025:14:11:11.880] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/2038/2038 201 576 - - ---- 56/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:13 standalone.localdomain haproxy[70940]: 192.168.122.99:34758 [13/Oct/2025:14:11:12.201] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1766/1766 201 610 - - ---- 56/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:14 standalone.localdomain haproxy[70940]: 192.168.122.99:34770 [13/Oct/2025:14:11:12.545] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1478/1478 201 584 - - ---- 56/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:14 standalone.localdomain haproxy[70940]: 192.168.122.99:34782 [13/Oct/2025:14:11:12.860] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1213/1213 201 584 - - ---- 56/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:14 standalone.localdomain haproxy[70940]: 192.168.122.99:34794 [13/Oct/2025:14:11:13.172] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/4/949/953 201 596 - - ---- 57/9/8/8/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:14 standalone.localdomain haproxy[70940]: 192.168.122.99:34808 [13/Oct/2025:14:11:13.489] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/689/689 201 588 - - ---- 56/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:11:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3202467494' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:11:14 standalone.localdomain haproxy[70940]: 172.17.0.2:49002 [13/Oct/2025:14:11:14.388] placement placement/standalone.internalapi.localdomain 0/0/0/9/9 200 298 - - ---- 57/1/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/allocations HTTP/1.1"
Oct 13 14:11:14 standalone.localdomain haproxy[70940]: 192.168.122.99:34812 [13/Oct/2025:14:11:13.543] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/944/944 201 500 - - ---- 57/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:14 standalone.localdomain haproxy[70940]: 192.168.122.99:34822 [13/Oct/2025:14:11:13.874] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/658/658 201 580 - - ---- 57/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:14 standalone.localdomain ceph-mon[29756]: pgmap v1088: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:14 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3202467494' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:11:14 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=11318 DF PROTO=TCP SPT=49792 DPT=19885 SEQ=1750138889 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A095764F10000000001030307) 
Oct 13 14:11:14 standalone.localdomain haproxy[70940]: 192.168.122.99:34830 [13/Oct/2025:14:11:13.921] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/908/908 201 498 - - ---- 57/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:11:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1042771189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:11:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:11:15 standalone.localdomain haproxy[70940]: 192.168.122.99:34832 [13/Oct/2025:14:11:13.970] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1120/1120 201 515 - - ---- 57/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:15 standalone.localdomain haproxy[70940]: 192.168.122.99:34844 [13/Oct/2025:14:11:14.026] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1324/1324 201 502 - - ---- 57/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:15 standalone.localdomain haproxy[70940]: 192.168.122.99:34856 [13/Oct/2025:14:11:14.075] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1578/1578 201 502 - - ---- 57/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1089: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:15 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1042771189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:11:15 standalone.localdomain haproxy[70940]: 192.168.122.99:34870 [13/Oct/2025:14:11:14.127] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1805/1805 201 508 - - ---- 58/9/8/8/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:16 standalone.localdomain haproxy[70940]: 192.168.122.99:34872 [13/Oct/2025:14:11:14.179] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/2011/2011 201 504 - - ---- 57/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:16 standalone.localdomain haproxy[70940]: 192.168.122.99:34876 [13/Oct/2025:14:11:14.489] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1719/1719 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:16 standalone.localdomain haproxy[70940]: 192.168.122.99:34882 [13/Oct/2025:14:11:14.533] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1940/1940 201 500 - - ---- 57/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:16 standalone.localdomain haproxy[70940]: 192.168.122.99:34890 [13/Oct/2025:14:11:14.832] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1652/1652 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:16 standalone.localdomain haproxy[70940]: 192.168.122.99:34904 [13/Oct/2025:14:11:15.091] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1405/1405 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:16 standalone.localdomain haproxy[70940]: 192.168.122.99:34908 [13/Oct/2025:14:11:15.352] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1157/1157 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:16 standalone.localdomain haproxy[70940]: 192.168.122.99:34916 [13/Oct/2025:14:11:15.655] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/872/872 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:16 standalone.localdomain haproxy[70940]: 192.168.122.99:34932 [13/Oct/2025:14:11:15.935] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/606/606 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:16 standalone.localdomain haproxy[70940]: 192.168.122.99:34948 [13/Oct/2025:14:11:16.193] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/359/359 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:16 standalone.localdomain haproxy[70940]: 192.168.122.99:34964 [13/Oct/2025:14:11:16.210] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/373/373 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/8663c786e40e473ba326cdf580ab887d/users/2cc3076eb1d342548f5e45a8988ec05e/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:16 standalone.localdomain haproxy[70940]: 192.168.122.99:34972 [13/Oct/2025:14:11:16.476] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/150/150 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:16 standalone.localdomain haproxy[70940]: 192.168.122.99:34980 [13/Oct/2025:14:11:16.488] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/165/165 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/9966dc18c2ea40a0ae49e06de28b70e4/users/f1221768e62f4eefa3c4554167480f93/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:16 standalone.localdomain haproxy[70940]: 192.168.122.99:34992 [13/Oct/2025:14:11:16.500] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/237/237 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/9465038575e24dca948e14d8ae9cbc00/users/a5fe2580f42c42a29703503f5a611532/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:16 standalone.localdomain ceph-mon[29756]: pgmap v1089: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:16 standalone.localdomain haproxy[70940]: 192.168.122.99:35008 [13/Oct/2025:14:11:16.512] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/334/334 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/25a2c6e538c94819b72494fd6641c8f0/users/b90160d59360402ea0025fa93ac460e4/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:16 standalone.localdomain haproxy[70940]: 192.168.122.99:35014 [13/Oct/2025:14:11:16.530] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/401/401 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/f37605dc1af34c04802ac56a422aba80/users/ed437132214146b4aa0224ba3e63cb59/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35022 [13/Oct/2025:14:11:16.544] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/465/465 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/a663fa086a124da2bdd0b67d421aa2e9/users/843494dd95af4ec89b34dbe4eff72cb1/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35034 [13/Oct/2025:14:11:16.556] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/528/528 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/d77d536bdc074c768beae139f6057980/users/7c73a6df3f474f9687aa67305f443f67/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35042 [13/Oct/2025:14:11:16.588] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/533/533 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35050 [13/Oct/2025:14:11:16.630] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/516/516 204 165 - - ---- 58/9/8/7/0 0/0 "PUT /v3/projects/3c09edecb4504cf0a7553efa3eafcb89/users/3d242ff8224b4bdf8ade1e41650ae328/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35058 [13/Oct/2025:14:11:16.655] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/535/535 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35064 [13/Oct/2025:14:11:16.739] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/468/468 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35068 [13/Oct/2025:14:11:16.849] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/381/381 200 2700 - - ---- 58/9/8/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35082 [13/Oct/2025:14:11:16.935] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/313/313 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35098 [13/Oct/2025:14:11:17.012] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/253/253 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35104 [13/Oct/2025:14:11:17.085] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/199/199 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35120 [13/Oct/2025:14:11:17.122] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/185/185 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/8663c786e40e473ba326cdf580ab887d/users/2cc3076eb1d342548f5e45a8988ec05e/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35132 [13/Oct/2025:14:11:17.147] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/220/220 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35140 [13/Oct/2025:14:11:17.193] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/193/193 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/9966dc18c2ea40a0ae49e06de28b70e4/users/f1221768e62f4eefa3c4554167480f93/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35148 [13/Oct/2025:14:11:17.209] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/224/224 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/9465038575e24dca948e14d8ae9cbc00/users/a5fe2580f42c42a29703503f5a611532/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35162 [13/Oct/2025:14:11:17.232] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/251/251 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/25a2c6e538c94819b72494fd6641c8f0/users/b90160d59360402ea0025fa93ac460e4/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35172 [13/Oct/2025:14:11:17.250] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/282/282 204 165 - - ---- 58/9/8/8/0 0/0 "PUT /v3/projects/f37605dc1af34c04802ac56a422aba80/users/ed437132214146b4aa0224ba3e63cb59/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35178 [13/Oct/2025:14:11:17.268] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/362/362 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/a663fa086a124da2bdd0b67d421aa2e9/users/843494dd95af4ec89b34dbe4eff72cb1/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1090: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35190 [13/Oct/2025:14:11:17.286] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/416/416 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/d77d536bdc074c768beae139f6057980/users/7c73a6df3f474f9687aa67305f443f67/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35192 [13/Oct/2025:14:11:17.309] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/446/446 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35194 [13/Oct/2025:14:11:17.369] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/407/407 204 165 - - ---- 58/9/7/7/0 0/0 "PUT /v3/projects/3c09edecb4504cf0a7553efa3eafcb89/users/3d242ff8224b4bdf8ade1e41650ae328/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35202 [13/Oct/2025:14:11:17.389] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/431/431 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35208 [13/Oct/2025:14:11:17.435] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/399/399 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35210 [13/Oct/2025:14:11:17.485] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/364/364 200 2700 - - ---- 58/9/8/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35226 [13/Oct/2025:14:11:17.534] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/332/332 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35242 [13/Oct/2025:14:11:17.632] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/248/248 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35254 [13/Oct/2025:14:11:17.706] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/187/187 200 2700 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35270 [13/Oct/2025:14:11:17.757] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/157/157 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/8663c786e40e473ba326cdf580ab887d/users/2cc3076eb1d342548f5e45a8988ec05e/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35274 [13/Oct/2025:14:11:17.779] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/183/183 200 2700 - - ---- 57/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:17 standalone.localdomain haproxy[70940]: 192.168.122.99:35288 [13/Oct/2025:14:11:17.824] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/161/161 204 165 - - ---- 57/7/6/6/0 0/0 "PUT /v3/projects/9966dc18c2ea40a0ae49e06de28b70e4/users/f1221768e62f4eefa3c4554167480f93/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:18 standalone.localdomain haproxy[70940]: 192.168.122.99:35294 [13/Oct/2025:14:11:17.837] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/194/194 204 165 - - ---- 57/6/5/5/0 0/0 "PUT /v3/projects/9465038575e24dca948e14d8ae9cbc00/users/a5fe2580f42c42a29703503f5a611532/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:18 standalone.localdomain haproxy[70940]: 192.168.122.99:35298 [13/Oct/2025:14:11:17.851] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/227/227 204 165 - - ---- 57/5/4/4/0 0/0 "PUT /v3/projects/25a2c6e538c94819b72494fd6641c8f0/users/b90160d59360402ea0025fa93ac460e4/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:18 standalone.localdomain haproxy[70940]: 192.168.122.99:35312 [13/Oct/2025:14:11:17.870] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/253/253 204 165 - - ---- 57/4/3/3/0 0/0 "PUT /v3/projects/f37605dc1af34c04802ac56a422aba80/users/ed437132214146b4aa0224ba3e63cb59/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:18 standalone.localdomain haproxy[70940]: 192.168.122.99:35322 [13/Oct/2025:14:11:17.882] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/298/298 204 165 - - ---- 57/3/2/2/0 0/0 "PUT /v3/projects/a663fa086a124da2bdd0b67d421aa2e9/users/843494dd95af4ec89b34dbe4eff72cb1/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:18 standalone.localdomain haproxy[70940]: 192.168.122.99:35338 [13/Oct/2025:14:11:17.897] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/329/329 204 165 - - ---- 57/2/1/1/0 0/0 "PUT /v3/projects/d77d536bdc074c768beae139f6057980/users/7c73a6df3f474f9687aa67305f443f67/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:18 standalone.localdomain haproxy[70940]: 172.21.0.2:35346 [13/Oct/2025:14:11:17.920] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/630/630 201 8178 - - ---- 57/7/6/6/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:18 standalone.localdomain haproxy[70940]: 192.168.122.99:35342 [13/Oct/2025:14:11:17.964] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/640/640 204 165 - - ---- 57/2/1/1/0 0/0 "PUT /v3/projects/3c09edecb4504cf0a7553efa3eafcb89/users/3d242ff8224b4bdf8ade1e41650ae328/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:18 standalone.localdomain ceph-mon[29756]: pgmap v1090: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:18 standalone.localdomain haproxy[70940]: 172.21.0.2:35360 [13/Oct/2025:14:11:17.994] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/940/940 201 8174 - - ---- 58/7/6/6/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:19 standalone.localdomain haproxy[70940]: 172.21.0.2:35368 [13/Oct/2025:14:11:18.037] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1301/1301 201 8208 - - ---- 58/6/5/5/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:19 standalone.localdomain haproxy[70940]: 172.21.0.2:35370 [13/Oct/2025:14:11:18.083] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1555/1555 201 8182 - - ---- 57/5/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1091: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:11:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:11:19 standalone.localdomain podman[155663]: 2025-10-13 14:11:19.806395474 +0000 UTC m=+0.069721230 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-scheduler-container, container_name=cinder_scheduler, name=rhosp17/openstack-cinder-scheduler, version=17.1.9, architecture=x86_64, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, build-date=2025-07-21T16:10:12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=)
Oct 13 14:11:19 standalone.localdomain podman[155663]: 2025-10-13 14:11:19.829157612 +0000 UTC m=+0.092483368 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, name=rhosp17/openstack-cinder-scheduler, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, build-date=2025-07-21T16:10:12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=cinder_scheduler, com.redhat.component=openstack-cinder-scheduler-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler)
Oct 13 14:11:19 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:11:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:11:19 standalone.localdomain podman[155662]: 2025-10-13 14:11:19.918696959 +0000 UTC m=+0.180637533 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, name=rhosp17/openstack-cinder-api, container_name=cinder_api_cron, description=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, release=1, build-date=2025-07-21T15:58:55, com.redhat.component=openstack-cinder-api-container, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp 
openstack osp-17.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, vcs-type=git, architecture=x86_64)
Oct 13 14:11:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:11:19 standalone.localdomain podman[155662]: 2025-10-13 14:11:19.936867316 +0000 UTC m=+0.198807930 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, com.redhat.component=openstack-cinder-api-container, container_name=cinder_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, distribution-scope=public, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-api, build-date=2025-07-21T15:58:55, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']})
Oct 13 14:11:19 standalone.localdomain haproxy[70940]: 172.21.0.2:35380 [13/Oct/2025:14:11:18.132] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1812/1812 201 8182 - - ---- 57/4/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:19 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:11:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:11:20 standalone.localdomain podman[155714]: 2025-10-13 14:11:20.078825292 +0000 UTC m=+0.096951526 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, com.redhat.component=openstack-keystone-container, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone, build-date=2025-07-21T13:27:18, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, name=rhosp17/openstack-keystone, vcs-type=git, version=17.1.9)
Oct 13 14:11:20 standalone.localdomain podman[155703]: 2025-10-13 14:11:20.037934977 +0000 UTC m=+0.097504672 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, version=17.1.9, architecture=x86_64, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, vcs-type=git, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:11:20 standalone.localdomain podman[155703]: 2025-10-13 14:11:20.116800207 +0000 UTC m=+0.176369962 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, 
io.openshift.expose-services=, distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64)
Oct 13 14:11:20 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:11:20 standalone.localdomain haproxy[70940]: 172.21.0.2:35392 [13/Oct/2025:14:11:18.189] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2049/2049 201 8194 - - ---- 57/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:20 standalone.localdomain haproxy[70940]: 172.21.0.2:35404 [13/Oct/2025:14:11:18.236] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2306/2306 201 8186 - - ---- 57/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:20 standalone.localdomain haproxy[70940]: 192.168.122.99:35348 [13/Oct/2025:14:11:18.552] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/2046/2046 201 497 - - ---- 57/7/6/6/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:11:20 standalone.localdomain ceph-mon[29756]: pgmap v1091: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:20 standalone.localdomain systemd[1]: tmp-crun.ysNDDI.mount: Deactivated successfully.
Oct 13 14:11:20 standalone.localdomain haproxy[70940]: 172.21.0.2:35414 [13/Oct/2025:14:11:18.609] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2328/2328 201 8312 - - ---- 58/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:20 standalone.localdomain haproxy[70940]: 192.168.122.99:35352 [13/Oct/2025:14:11:18.938] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/2041/2041 201 484 - - ---- 57/8/7/7/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:35364 [13/Oct/2025:14:11:19.343] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1710/1710 201 446 - - ---- 57/8/7/7/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:58374 [13/Oct/2025:14:11:19.644] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1461/1461 201 452 - - ---- 57/8/7/7/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:58380 [13/Oct/2025:14:11:19.951] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1199/1199 201 473 - - ---- 58/9/8/8/0 0/0 "POST /v3/OS-OAUTH1/consumers HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain podman[155714]: 2025-10-13 14:11:21.179444939 +0000 UTC m=+1.197571203 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., container_name=keystone, config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, description=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', 
'/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-keystone-container, tcib_managed=true)
Oct 13 14:11:21 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:58388 [13/Oct/2025:14:11:20.243] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/957/957 200 591 - - ---- 57/8/7/7/0 0/0 "GET /v3/projects/53bee85e2faf4ca9a286c485d060df94 HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:58396 [13/Oct/2025:14:11:20.548] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/700/700 400 494 - - ---- 57/8/7/7/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:58404 [13/Oct/2025:14:11:20.601] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/695/695 201 540 - - ---- 57/8/7/7/0 0/0 "POST /v3/OS-EP-FILTER/endpoint_groups HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:58408 [13/Oct/2025:14:11:20.940] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/404/404 201 558 - - ---- 57/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:58420 [13/Oct/2025:14:11:20.982] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/424/424 201 488 - - ---- 57/8/7/7/0 0/0 "POST /v3/groups HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain runuser[155762]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:58422 [13/Oct/2025:14:11:21.055] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/421/421 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/995d7261fda2423391fc1ff4ea567667/users/f657c9375b254705bd6ef49a390a14a7/roles/4ee7cc114fd24766bf163e2872cf3611 HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:58434 [13/Oct/2025:14:11:21.108] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/437/437 201 617 - - ---- 57/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:58450 [13/Oct/2025:14:11:21.151] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/443/443 200 422 - - ---- 57/8/7/7/0 0/0 "GET /v3/OS-OAUTH1/consumers/3b8dc00414224bdba75ff441a68667e3 HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1092: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:58466 [13/Oct/2025:14:11:21.201] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/464/464 201 546 - - ---- 57/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:58472 [13/Oct/2025:14:11:21.251] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/462/462 400 623 - - ---- 57/8/7/7/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:58484 [13/Oct/2025:14:11:21.298] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/472/472 201 496 - - ---- 57/8/7/7/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:58496 [13/Oct/2025:14:11:21.346] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/485/485 201 557 - - ---- 57/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:21 standalone.localdomain haproxy[70940]: 192.168.122.99:58506 [13/Oct/2025:14:11:21.408] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/512/512 200 493 - - ---- 57/8/7/7/0 0/0 "PATCH /v3/groups/dc4a3048351c4e488cb35b1ced52f2f6 HTTP/1.1"
Oct 13 14:11:22 standalone.localdomain runuser[155762]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:11:22 standalone.localdomain runuser[155831]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:11:22 standalone.localdomain haproxy[70940]: 192.168.122.99:58514 [13/Oct/2025:14:11:21.479] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/796/796 201 865 - - ---- 57/8/7/7/0 0/0 "POST /v3/users/f657c9375b254705bd6ef49a390a14a7/application_credentials HTTP/1.1"
Oct 13 14:11:22 standalone.localdomain haproxy[70940]: 192.168.122.99:58518 [13/Oct/2025:14:11:21.548] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1037/1037 201 554 - - ---- 57/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:22 standalone.localdomain account-server[114555]: 172.20.0.100 - - [13/Oct/2025:14:11:22 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx524ee02c7dbf4e27ad69e-0068ed088a" "proxy-server 2" 0.0008 "-" 20 -
Oct 13 14:11:22 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx524ee02c7dbf4e27ad69e-0068ed088a)
Oct 13 14:11:22 standalone.localdomain haproxy[70940]: 192.168.122.99:58534 [13/Oct/2025:14:11:21.598] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1142/1142 204 165 - - ---- 58/8/7/7/0 0/0 "DELETE /v3/OS-OAUTH1/consumers/3b8dc00414224bdba75ff441a68667e3 HTTP/1.1"
Oct 13 14:11:22 standalone.localdomain ceph-mon[29756]: pgmap v1092: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:22 standalone.localdomain haproxy[70940]: 192.168.122.99:58544 [13/Oct/2025:14:11:21.668] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1154/1154 200 8887 - - ---- 57/7/6/6/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:11:22 standalone.localdomain runuser[155831]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:11:22 standalone.localdomain runuser[155895]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:11:22 standalone.localdomain haproxy[70940]: 192.168.122.99:58556 [13/Oct/2025:14:11:21.717] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1186/1186 201 484 - - ---- 57/7/6/6/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:11:22 standalone.localdomain haproxy[70940]: 192.168.122.99:58568 [13/Oct/2025:14:11:21.773] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1196/1196 201 540 - - ---- 57/7/6/6/0 0/0 "POST /v3/OS-EP-FILTER/endpoint_groups HTTP/1.1"
Oct 13 14:11:23 standalone.localdomain haproxy[70940]: 192.168.122.99:58582 [13/Oct/2025:14:11:21.834] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1192/1192 200 497 - - ---- 57/7/6/6/0 0/0 "GET /v3/users/edc137d8358f4729be004bfb04f36f53 HTTP/1.1"
Oct 13 14:11:23 standalone.localdomain haproxy[70940]: 192.168.122.99:58594 [13/Oct/2025:14:11:21.924] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1157/1157 200 493 - - ---- 57/7/6/6/0 0/0 "GET /v3/groups/dc4a3048351c4e488cb35b1ced52f2f6 HTTP/1.1"
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:11:23
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['images', 'vms', 'backups', 'volumes', 'manila_metadata', '.mgr', 'manila_data']
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:11:23 standalone.localdomain haproxy[70940]: 172.21.0.2:52508 [13/Oct/2025:14:11:22.277] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1138/1138 201 8552 - - ---- 57/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:23 standalone.localdomain haproxy[70940]: 192.168.122.99:58608 [13/Oct/2025:14:11:22.588] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/882/882 200 2930 - - ---- 57/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:23 standalone.localdomain haproxy[70940]: 192.168.122.99:58624 [13/Oct/2025:14:11:22.746] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/774/774 201 473 - - ---- 57/8/7/7/0 0/0 "POST /v3/OS-OAUTH1/consumers HTTP/1.1"
Oct 13 14:11:23 standalone.localdomain haproxy[70940]: 192.168.122.99:58640 [13/Oct/2025:14:11:22.826] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/736/736 200 591 - - ---- 58/9/8/8/0 0/0 "GET /v3/projects/53bee85e2faf4ca9a286c485d060df94 HTTP/1.1"
Oct 13 14:11:23 standalone.localdomain haproxy[70940]: 192.168.122.99:58646 [13/Oct/2025:14:11:22.907] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/720/720 403 329 - - ---- 58/9/8/7/0 0/0 "DELETE /v3/domains/ef20ef4f7b4b435996c32aff6b0d48ce HTTP/1.1"
Oct 13 14:11:23 standalone.localdomain haproxy[70940]: 192.168.122.99:58652 [13/Oct/2025:14:11:22.972] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/673/673 200 966 - - ---- 57/8/7/7/0 0/0 "GET /v3/OS-EP-FILTER/endpoint_groups HTTP/1.1"
Oct 13 14:11:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1093: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:23 standalone.localdomain haproxy[70940]: 192.168.122.99:58664 [13/Oct/2025:14:11:23.028] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/642/642 201 638 - - ---- 57/8/7/7/0 0/0 "POST /v3/credentials HTTP/1.1"
Oct 13 14:11:23 standalone.localdomain haproxy[70940]: 192.168.122.99:58666 [13/Oct/2025:14:11:23.084] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/614/614 200 509 - - ---- 57/8/7/7/0 0/0 "PATCH /v3/groups/dc4a3048351c4e488cb35b1ced52f2f6 HTTP/1.1"
Oct 13 14:11:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:11:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:11:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:11:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:11:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:11:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:11:23 standalone.localdomain runuser[155895]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:11:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:11:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:11:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:11:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:11:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:11:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:11:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:11:23 standalone.localdomain haproxy[70940]: 192.168.122.99:58674 [13/Oct/2025:14:11:23.419] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/371/371 204 165 - - ---- 57/8/7/7/0 0/0 "DELETE /v3/users/f657c9375b254705bd6ef49a390a14a7/application_credentials/8706fea0833643999c8522de2f3fcb80 HTTP/1.1"
Oct 13 14:11:23 standalone.localdomain haproxy[70940]: 192.168.122.99:58678 [13/Oct/2025:14:11:23.474] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/385/385 204 165 - - ---- 57/8/7/7/0 0/0 "PUT /v3/projects/88bdeae29fb74280acc76f7fc89a7cc1/users/30c1689f9cf143b28e847143b0ac2629/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:23 standalone.localdomain systemd[1]: tmp-crun.Sk08Of.mount: Deactivated successfully.
Oct 13 14:11:23 standalone.localdomain podman[155960]: 2025-10-13 14:11:23.85874637 +0000 UTC m=+0.097950316 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, build-date=2025-07-21T16:02:54, com.redhat.component=openstack-nova-scheduler-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-scheduler, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=nova_scheduler, name=rhosp17/openstack-nova-scheduler, architecture=x86_64, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:11:23 standalone.localdomain haproxy[70940]: 192.168.122.99:58692 [13/Oct/2025:14:11:23.523] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/402/402 200 422 - - ---- 57/7/6/6/0 0/0 "GET /v3/OS-OAUTH1/consumers/e5876d10e39f46aaa52986323fdc8924 HTTP/1.1"
Oct 13 14:11:23 standalone.localdomain podman[155960]: 2025-10-13 14:11:23.941886271 +0000 UTC m=+0.181090207 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=nova_scheduler, managed_by=tripleo_ansible, release=1, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, name=rhosp17/openstack-nova-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T16:02:54, tcib_managed=true, com.redhat.component=openstack-nova-scheduler-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-scheduler)
Oct 13 14:11:23 standalone.localdomain podman[155953]: 2025-10-13 14:11:23.911696435 +0000 UTC m=+0.155239253 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, com.redhat.component=openstack-heat-engine-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, batch=17.1_20250721.1, architecture=x86_64, name=rhosp17/openstack-heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T15:44:11, io.buildah.version=1.33.12, config_id=tripleo_step4, release=1, description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=)
Oct 13 14:11:23 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:11:23 standalone.localdomain haproxy[70940]: 192.168.122.99:58696 [13/Oct/2025:14:11:23.564] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/414/414 200 541 - - ---- 57/7/6/6/0 0/0 "GET /v3/projects/f85f8a84c0e94da29bb84777b75e5731 HTTP/1.1"
Oct 13 14:11:24 standalone.localdomain podman[155953]: 2025-10-13 14:11:24.000731467 +0000 UTC m=+0.244274305 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, batch=17.1_20250721.1, container_name=heat_engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-heat-engine-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, 
vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, name=rhosp17/openstack-heat-engine, build-date=2025-07-21T15:44:11)
Oct 13 14:11:24 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:11:24 standalone.localdomain haproxy[70940]: 192.168.122.99:58712 [13/Oct/2025:14:11:23.629] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/417/417 200 480 - - ---- 57/7/6/6/0 0/0 "PATCH /v3/domains/ef20ef4f7b4b435996c32aff6b0d48ce HTTP/1.1"
Oct 13 14:11:24 standalone.localdomain haproxy[70940]: 192.168.122.99:58716 [13/Oct/2025:14:11:23.646] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/450/450 200 535 - - ---- 57/7/6/6/0 0/0 "GET /v3/OS-EP-FILTER/endpoint_groups/09610200bef4494aa6e8f95024a524af HTTP/1.1"
Oct 13 14:11:24 standalone.localdomain podman[155961]: 2025-10-13 14:11:23.952089755 +0000 UTC m=+0.169179702 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, com.redhat.component=openstack-memcached-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, 
vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, build-date=2025-07-21T12:58:43, managed_by=tripleo_ansible, container_name=memcached, architecture=x86_64, name=rhosp17/openstack-memcached, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:11:24 standalone.localdomain podman[155983]: 2025-10-13 14:11:23.892461975 +0000 UTC m=+0.121361664 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, name=rhosp17/openstack-heat-api, com.redhat.component=openstack-heat-api-container, summary=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T15:56:26, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., 
config_id=tripleo_step4, batch=17.1_20250721.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:11:24 standalone.localdomain podman[156018]: 2025-10-13 14:11:24.15207877 +0000 UTC m=+0.349367370 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack 
osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, release=1, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 14:11:24 standalone.localdomain podman[156018]: 2025-10-13 14:11:24.163915983 +0000 UTC m=+0.361204593 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, config_id=tripleo_step4)
Oct 13 14:11:24 standalone.localdomain podman[155951]: 2025-10-13 14:11:24.071829208 +0000 UTC m=+0.300516751 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, release=1, name=rhosp17/openstack-heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 
heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, container_name=heat_api_cfn, managed_by=tripleo_ansible, build-date=2025-07-21T14:49:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, distribution-scope=public)
Oct 13 14:11:24 standalone.localdomain haproxy[70940]: 192.168.122.99:58732 [13/Oct/2025:14:11:23.673] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/503/503 200 638 - - ---- 57/7/6/6/0 0/0 "PATCH /v3/credentials/83483464ee97335280872071c4f41eb0715ffc012f5805567713684314784748 HTTP/1.1"
Oct 13 14:11:24 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:11:24 standalone.localdomain podman[155998]: 2025-10-13 14:11:24.202697433 +0000 UTC m=+0.417150629 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, container_name=heat_api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-heat-api-container, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26)
Oct 13 14:11:24 standalone.localdomain podman[155952]: 2025-10-13 14:11:24.168968248 +0000 UTC m=+0.417917952 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, com.redhat.component=openstack-manila-scheduler-container, managed_by=tripleo_ansible, name=rhosp17/openstack-manila-scheduler, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 manila-scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T15:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.tags=rhosp osp openstack 
osp-17.1, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., container_name=manila_scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, vcs-type=git, io.buildah.version=1.33.12)
Oct 13 14:11:24 standalone.localdomain podman[155961]: 2025-10-13 14:11:24.233599811 +0000 UTC m=+0.450689708 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, batch=17.1_20250721.1, config_id=tripleo_step1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, release=1, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T12:58:43, managed_by=tripleo_ansible, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-memcached-container, container_name=memcached, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, tcib_managed=true, vcs-type=git, architecture=x86_64)
Oct 13 14:11:24 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:11:24 standalone.localdomain podman[155952]: 2025-10-13 14:11:24.249804879 +0000 UTC m=+0.498754573 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, description=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-manila-scheduler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=manila_scheduler, build-date=2025-07-21T15:56:28, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack 
osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, com.redhat.component=openstack-manila-scheduler-container, summary=Red Hat OpenStack Platform 17.1 manila-scheduler)
Oct 13 14:11:24 standalone.localdomain haproxy[70940]: 192.168.122.99:58736 [13/Oct/2025:14:11:23.702] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/558/558 204 165 - - ---- 57/7/6/6/0 0/0 "DELETE /v3/groups/dc4a3048351c4e488cb35b1ced52f2f6 HTTP/1.1"
Oct 13 14:11:24 standalone.localdomain podman[155951]: 2025-10-13 14:11:24.261117546 +0000 UTC m=+0.489805119 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, maintainer=OpenStack TripleO Team, container_name=heat_api_cfn, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
heat-api-cfn, managed_by=tripleo_ansible, build-date=2025-07-21T14:49:55, com.redhat.component=openstack-heat-api-cfn-container, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, vcs-type=git)
Oct 13 14:11:24 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:11:24 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:11:24 standalone.localdomain podman[155969]: 2025-10-13 14:11:24.22281875 +0000 UTC m=+0.456084523 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, batch=17.1_20250721.1, config_id=tripleo_step4, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, version=17.1.9, tcib_managed=true, build-date=2025-07-21T15:58:55, name=rhosp17/openstack-cinder-api, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=cinder_api, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cinder-api-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:11:24 standalone.localdomain podman[155990]: 2025-10-13 14:11:24.130692974 +0000 UTC m=+0.325553059 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, com.redhat.component=openstack-horizon-container, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:58:15, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, container_name=horizon, description=Red Hat OpenStack Platform 17.1 horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=)
Oct 13 14:11:24 standalone.localdomain podman[155976]: 2025-10-13 14:11:24.235950033 +0000 UTC m=+0.445485138 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, name=rhosp17/openstack-manila-api, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step4, build-date=2025-07-21T16:06:43, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, com.redhat.component=openstack-manila-api-container, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=manila_api_cron, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-api)
Oct 13 14:11:24 standalone.localdomain podman[156002]: 2025-10-13 14:11:24.250564182 +0000 UTC m=+0.466084091 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-server-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, build-date=2025-07-21T15:44:03, managed_by=tripleo_ansible, container_name=neutron_api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20250721.1, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1)
Oct 13 14:11:24 standalone.localdomain podman[155969]: 2025-10-13 14:11:24.306812077 +0000 UTC m=+0.540077850 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, name=rhosp17/openstack-cinder-api, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T15:58:55, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, container_name=cinder_api, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 
17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-type=git)
Oct 13 14:11:24 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:11:24 standalone.localdomain podman[155983]: 2025-10-13 14:11:24.334162817 +0000 UTC m=+0.563062516 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, release=1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, container_name=heat_api_cron, name=rhosp17/openstack-heat-api, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T15:56:26, version=17.1.9)
Oct 13 14:11:24 standalone.localdomain haproxy[70940]: 192.168.122.99:58740 [13/Oct/2025:14:11:23.794] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/565/565 204 165 - - ---- 57/7/6/6/0 0/0 "DELETE /v3/roles/4ee7cc114fd24766bf163e2872cf3611 HTTP/1.1"
Oct 13 14:11:24 standalone.localdomain podman[155990]: 2025-10-13 14:11:24.365086475 +0000 UTC m=+0.559946550 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, tcib_managed=true, version=17.1.9, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, build-date=2025-07-21T13:58:15, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, architecture=x86_64, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-horizon, release=1, summary=Red Hat OpenStack Platform 17.1 horizon, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 horizon, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, config_id=tripleo_step3, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-horizon-container, container_name=horizon, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 14:11:24 standalone.localdomain podman[155976]: 2025-10-13 14:11:24.367714916 +0000 UTC m=+0.577249991 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:06:43, batch=17.1_20250721.1, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=manila_api_cron, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api, com.redhat.component=openstack-manila-api-container)
Oct 13 14:11:24 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:11:24 standalone.localdomain podman[156003]: 2025-10-13 14:11:24.354680096 +0000 UTC m=+0.560865819 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, release=1, summary=Red Hat OpenStack Platform 17.1 nova-conductor, architecture=x86_64, distribution-scope=public, vcs-type=git, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-conductor-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, description=Red Hat OpenStack Platform 17.1 nova-conductor, build-date=2025-07-21T15:44:17, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, batch=17.1_20250721.1, container_name=nova_conductor, config_id=tripleo_step4)
Oct 13 14:11:24 standalone.localdomain podman[155998]: 2025-10-13 14:11:24.432671999 +0000 UTC m=+0.647125175 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, version=17.1.9, build-date=2025-07-21T15:56:26, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, name=rhosp17/openstack-heat-api, 
io.openshift.expose-services=, container_name=heat_api, io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Oct 13 14:11:24 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:11:24 standalone.localdomain podman[156003]: 2025-10-13 14:11:24.442830721 +0000 UTC m=+0.649016444 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T15:44:17, com.redhat.component=openstack-nova-conductor-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, description=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, batch=17.1_20250721.1, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_conductor, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-conductor, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public)
Oct 13 14:11:24 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:11:24 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:11:24 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:11:24 standalone.localdomain podman[156002]: 2025-10-13 14:11:24.500247672 +0000 UTC m=+0.715767561 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, build-date=2025-07-21T15:44:03, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, com.redhat.license_terms=https://www.redhat.com/agreements, 
vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, container_name=neutron_api, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-server-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true)
Oct 13 14:11:24 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:11:24 standalone.localdomain haproxy[70940]: 172.21.0.2:52510 [13/Oct/2025:14:11:23.863] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/828/828 201 8489 - - ---- 56/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:24 standalone.localdomain haproxy[70940]: 192.168.122.99:58748 [13/Oct/2025:14:11:23.927] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/819/819 204 165 - - ---- 56/7/6/6/0 0/0 "DELETE /v3/OS-OAUTH1/consumers/e5876d10e39f46aaa52986323fdc8924 HTTP/1.1"
Oct 13 14:11:24 standalone.localdomain haproxy[70940]: 192.168.122.99:58756 [13/Oct/2025:14:11:23.982] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/815/815 200 8887 - - ---- 56/7/6/6/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:11:24 standalone.localdomain ceph-mon[29756]: pgmap v1093: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:11:24 standalone.localdomain haproxy[70940]: 192.168.122.99:58770 [13/Oct/2025:14:11:24.048] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/873/873 204 165 - - ---- 56/7/6/6/0 0/0 "DELETE /v3/domains/ef20ef4f7b4b435996c32aff6b0d48ce HTTP/1.1"
Oct 13 14:11:24 standalone.localdomain haproxy[70940]: 192.168.122.99:58772 [13/Oct/2025:14:11:24.097] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/868/868 200 202 - - ---- 56/7/6/6/0 0/0 "HEAD /v3/OS-EP-FILTER/endpoint_groups/09610200bef4494aa6e8f95024a524af HTTP/1.1"
Oct 13 14:11:25 standalone.localdomain haproxy[70940]: 192.168.122.99:58774 [13/Oct/2025:14:11:24.177] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/834/834 200 638 - - ---- 56/7/6/6/0 0/0 "GET /v3/credentials/83483464ee97335280872071c4f41eb0715ffc012f5805567713684314784748 HTTP/1.1"
Oct 13 14:11:25 standalone.localdomain haproxy[70940]: 192.168.122.99:58788 [13/Oct/2025:14:11:24.263] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/800/800 201 533 - - ---- 56/7/6/6/0 0/0 "POST /v3/groups HTTP/1.1"
Oct 13 14:11:25 standalone.localdomain haproxy[70940]: 192.168.122.99:58794 [13/Oct/2025:14:11:24.361] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/817/817 204 165 - - ---- 57/8/7/6/0 0/0 "DELETE /v3/users/f657c9375b254705bd6ef49a390a14a7 HTTP/1.1"
Oct 13 14:11:25 standalone.localdomain haproxy[70940]: 172.21.0.2:52522 [13/Oct/2025:14:11:24.698] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/789/789 201 8489 - - ---- 56/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:25 standalone.localdomain haproxy[70940]: 192.168.122.99:58806 [13/Oct/2025:14:11:24.750] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/785/785 404 282 - - ---- 56/8/7/7/0 0/0 "GET /v3/OS-OAUTH1/consumers/e5876d10e39f46aaa52986323fdc8924 HTTP/1.1"
Oct 13 14:11:25 standalone.localdomain haproxy[70940]: 192.168.122.99:58822 [13/Oct/2025:14:11:24.801] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/784/784 200 8507 - - ---- 56/8/7/7/0 0/0 "GET /v3/projects?domain_id=default HTTP/1.1"
Oct 13 14:11:25 standalone.localdomain haproxy[70940]: 192.168.122.99:58830 [13/Oct/2025:14:11:24.924] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/709/709 404 321 - - ---- 56/8/7/7/0 0/0 "DELETE /v3/domains/2663af57ed7a4cde82242e855b1d567d HTTP/1.1"
Oct 13 14:11:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1094: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:25 standalone.localdomain haproxy[70940]: 192.168.122.99:58844 [13/Oct/2025:14:11:24.968] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/795/795 204 165 - - ---- 56/8/7/7/0 0/0 "DELETE /v3/OS-EP-FILTER/endpoint_groups/09610200bef4494aa6e8f95024a524af HTTP/1.1"
Oct 13 14:11:25 standalone.localdomain haproxy[70940]: 192.168.122.99:58856 [13/Oct/2025:14:11:25.014] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/880/880 204 165 - - ---- 56/8/7/7/0 0/0 "DELETE /v3/credentials/83483464ee97335280872071c4f41eb0715ffc012f5805567713684314784748 HTTP/1.1"
Oct 13 14:11:25 standalone.localdomain ceph-mon[29756]: pgmap v1094: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:26 standalone.localdomain haproxy[70940]: 192.168.122.99:58868 [13/Oct/2025:14:11:25.065] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1151/1151 201 528 - - ---- 57/9/8/8/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:26 standalone.localdomain haproxy[70940]: 192.168.122.99:58872 [13/Oct/2025:14:11:25.179] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1136/1136 204 165 - - ---- 56/8/7/7/0 0/0 "DELETE /v3/users/a5fe2580f42c42a29703503f5a611532 HTTP/1.1"
Oct 13 14:11:26 standalone.localdomain haproxy[70940]: 192.168.122.99:58878 [13/Oct/2025:14:11:25.488] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/883/883 200 8484 - - ---- 56/7/6/6/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:26 standalone.localdomain haproxy[70940]: 192.168.122.99:58888 [13/Oct/2025:14:11:25.536] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/886/886 404 282 - - ---- 56/7/6/6/0 0/0 "DELETE /v3/OS-OAUTH1/consumers/e5876d10e39f46aaa52986323fdc8924 HTTP/1.1"
Oct 13 14:11:26 standalone.localdomain haproxy[70940]: 192.168.122.99:58896 [13/Oct/2025:14:11:25.586] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/883/883 200 8887 - - ---- 56/7/6/6/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:11:26 standalone.localdomain haproxy[70940]: 192.168.122.99:58904 [13/Oct/2025:14:11:25.637] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/881/881 201 460 - - ---- 56/7/6/6/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:11:26 standalone.localdomain haproxy[70940]: 192.168.122.99:58920 [13/Oct/2025:14:11:25.765] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/799/799 200 652 - - ---- 56/7/6/6/0 0/0 "GET /v3/OS-EP-FILTER/endpoint_groups HTTP/1.1"
Oct 13 14:11:26 standalone.localdomain haproxy[70940]: 192.168.122.99:58930 [13/Oct/2025:14:11:25.899] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/713/713 201 636 - - ---- 56/7/6/6/0 0/0 "POST /v3/credentials HTTP/1.1"
Oct 13 14:11:26 standalone.localdomain haproxy[70940]: 192.168.122.99:58946 [13/Oct/2025:14:11:26.220] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/457/457 204 165 - - ---- 56/7/6/6/0 0/0 "PUT /v3/groups/41d172d5c1e04f278dd25bb6ceae4290/users/e68f49c4a0fa47ef99a3e48029c07e34 HTTP/1.1"
Oct 13 14:11:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:11:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:11:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:11:26 standalone.localdomain podman[156270]: 2025-10-13 14:11:26.807761737 +0000 UTC m=+0.076472747 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, com.redhat.component=openstack-swift-container-container, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1)
Oct 13 14:11:26 standalone.localdomain systemd[1]: tmp-crun.OAMJVo.mount: Deactivated successfully.
Oct 13 14:11:26 standalone.localdomain podman[156271]: 2025-10-13 14:11:26.842919916 +0000 UTC m=+0.103627020 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, distribution-scope=public)
Oct 13 14:11:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:11:26 standalone.localdomain podman[156298]: 2025-10-13 14:11:26.883601944 +0000 UTC m=+0.074557568 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, 
com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-object-container, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 14:11:26 standalone.localdomain podman[156320]: 2025-10-13 14:11:26.933536976 +0000 UTC m=+0.078106308 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, tcib_managed=true, build-date=2025-07-21T15:24:10, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, com.redhat.component=openstack-nova-novncproxy-container, name=rhosp17/openstack-nova-novncproxy, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-novncproxy, version=17.1.9, container_name=nova_vnc_proxy, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.buildah.version=1.33.12)
Oct 13 14:11:26 standalone.localdomain podman[156270]: 2025-10-13 14:11:26.96235396 +0000 UTC m=+0.231064940 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, 
com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, vcs-type=git, build-date=2025-07-21T15:54:32, release=1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:11:26 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:11:26 standalone.localdomain haproxy[70940]: 172.21.0.2:52524 [13/Oct/2025:14:11:26.319] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/658/658 201 8368 - - ---- 57/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:11:27 standalone.localdomain podman[156298]: 2025-10-13 14:11:27.05883377 +0000 UTC m=+0.249789364 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.buildah.version=1.33.12, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-object-container, release=1, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, 
io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T14:56:28, container_name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible)
Oct 13 14:11:27 standalone.localdomain haproxy[70940]: 192.168.122.99:58952 [13/Oct/2025:14:11:26.376] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/685/685 204 165 - - ---- 57/7/6/6/0 0/0 "DELETE /v3/users/30c1689f9cf143b28e847143b0ac2629 HTTP/1.1"
Oct 13 14:11:27 standalone.localdomain podman[156374]: 2025-10-13 14:11:27.06470369 +0000 UTC m=+0.066414609 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container)
Oct 13 14:11:27 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:11:27 standalone.localdomain haproxy[70940]: 192.168.122.99:58968 [13/Oct/2025:14:11:26.427] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/680/680 201 474 - - ---- 58/8/7/7/0 0/0 "POST /v3/OS-OAUTH1/consumers HTTP/1.1"
Oct 13 14:11:27 standalone.localdomain podman[156271]: 2025-10-13 14:11:27.110129154 +0000 UTC m=+0.370836248 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, tcib_managed=true, release=1, version=17.1.9, container_name=swift_account_server, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-account, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 
swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:11:27 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:11:27 standalone.localdomain haproxy[70940]: 192.168.122.99:58982 [13/Oct/2025:14:11:26.473] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/679/679 200 8507 - - ---- 57/7/6/6/0 0/0 "GET /v3/projects?domain_id=default HTTP/1.1"
Oct 13 14:11:27 standalone.localdomain haproxy[70940]: 192.168.122.99:58996 [13/Oct/2025:14:11:26.520] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/681/681 409 420 - - ---- 57/7/6/6/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:11:27 standalone.localdomain podman[156320]: 2025-10-13 14:11:27.239497773 +0000 UTC m=+0.384067145 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, build-date=2025-07-21T15:24:10, release=1, com.redhat.component=openstack-nova-novncproxy-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-novncproxy, architecture=x86_64, batch=17.1_20250721.1, container_name=nova_vnc_proxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy)
Oct 13 14:11:27 standalone.localdomain haproxy[70940]: 192.168.122.99:59008 [13/Oct/2025:14:11:26.567] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/679/679 404 329 - - ---- 57/7/6/6/0 0/0 "DELETE /v3/OS-EP-FILTER/endpoint_groups/09610200bef4494aa6e8f95024a524af HTTP/1.1"
Oct 13 14:11:27 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:11:27 standalone.localdomain haproxy[70940]: 192.168.122.99:59016 [13/Oct/2025:14:11:26.614] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/673/673 201 638 - - ---- 57/7/6/6/0 0/0 "POST /v3/credentials HTTP/1.1"
Oct 13 14:11:27 standalone.localdomain podman[156374]: 2025-10-13 14:11:27.412763678 +0000 UTC m=+0.414474667 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 
nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, container_name=nova_migration_target)
Oct 13 14:11:27 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:11:27 standalone.localdomain haproxy[70940]: 192.168.122.99:59018 [13/Oct/2025:14:11:26.680] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/925/925 201 526 - - ---- 57/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1095: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:27 standalone.localdomain haproxy[70940]: 172.17.0.2:43886 [13/Oct/2025:14:11:26.985] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/786/786 200 8363 - - ---- 57/1/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:27 standalone.localdomain haproxy[70940]: 192.168.122.99:59034 [13/Oct/2025:14:11:27.064] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/913/913 204 165 - - ---- 57/7/6/6/0 0/0 "DELETE /v3/projects/88bdeae29fb74280acc76f7fc89a7cc1 HTTP/1.1"
Oct 13 14:11:27 standalone.localdomain ceph-mon[29756]: pgmap v1095: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:28 standalone.localdomain haproxy[70940]: 192.168.122.99:59050 [13/Oct/2025:14:11:27.108] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/931/931 201 473 - - ---- 57/7/6/6/0 0/0 "POST /v3/OS-OAUTH1/consumers HTTP/1.1"
Oct 13 14:11:28 standalone.localdomain haproxy[70940]: 172.21.0.2:37584 [13/Oct/2025:14:11:26.980] neutron neutron/standalone.internalapi.localdomain 0/0/0/1105/1105 200 2972 - - ---- 57/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=995d7261fda2423391fc1ff4ea567667&name=default HTTP/1.1"
Oct 13 14:11:28 standalone.localdomain haproxy[70940]: 192.168.122.99:59054 [13/Oct/2025:14:11:27.156] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/934/934 200 8489 - - ---- 57/7/6/6/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:11:28 standalone.localdomain haproxy[70940]: 192.168.122.99:59058 [13/Oct/2025:14:11:27.202] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/974/974 200 456 - - ---- 57/7/6/6/0 0/0 "PATCH /v3/domains/4673d2fb2599463a90e759b9f9de58fd HTTP/1.1"
Oct 13 14:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0043440946049692905 of space, bias 1.0, pg target 0.4344094604969291 quantized to 32 (current 32)
Oct 13 14:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:11:28 standalone.localdomain haproxy[70940]: 192.168.122.99:59072 [13/Oct/2025:14:11:27.246] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1194/1194 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/services/36313b9672f2462599b9faad6737572b HTTP/1.1"
Oct 13 14:11:28 standalone.localdomain haproxy[70940]: 192.168.122.99:59086 [13/Oct/2025:14:11:27.288] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1233/1233 200 1147 - - ---- 58/7/6/6/0 0/0 "GET /v3/credentials HTTP/1.1"
Oct 13 14:11:28 standalone.localdomain haproxy[70940]: 172.21.0.2:37592 [13/Oct/2025:14:11:28.088] neutron neutron/standalone.internalapi.localdomain 0/0/0/497/497 204 152 - - ---- 58/1/0/0/0 0/0 "DELETE /v2.0/security-groups/09b1b65f-a2bc-4176-bb64-44640c955d9b HTTP/1.1"
Oct 13 14:11:28 standalone.localdomain haproxy[70940]: 192.168.122.99:59096 [13/Oct/2025:14:11:27.608] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/993/993 204 165 - - ---- 58/8/7/7/0 0/0 "PUT /v3/groups/41d172d5c1e04f278dd25bb6ceae4290/users/3ece0b4c05dd450199ac631d937058c5 HTTP/1.1"
Oct 13 14:11:28 standalone.localdomain haproxy[70940]: 192.168.122.99:59104 [13/Oct/2025:14:11:27.980] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/684/684 200 448 - - ---- 58/8/7/7/0 0/0 "PATCH /v3/domains/4616292bee4e467f9df6e86a1933b00f HTTP/1.1"
Oct 13 14:11:28 standalone.localdomain haproxy[70940]: 192.168.122.99:59112 [13/Oct/2025:14:11:28.039] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/665/665 200 738 - - ---- 58/8/7/7/0 0/0 "GET /v3/OS-OAUTH1/consumers HTTP/1.1"
Oct 13 14:11:28 standalone.localdomain haproxy[70940]: 192.168.122.99:59120 [13/Oct/2025:14:11:28.091] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/660/660 200 739 - - ---- 58/8/7/7/0 0/0 "GET /v3/projects?name=tempest-ListProjectsStaticTestJSON-1311419457 HTTP/1.1"
Oct 13 14:11:28 standalone.localdomain haproxy[70940]: 192.168.122.99:59136 [13/Oct/2025:14:11:28.178] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/676/676 204 165 - - ---- 58/8/7/7/0 0/0 "DELETE /v3/domains/4673d2fb2599463a90e759b9f9de58fd HTTP/1.1"
Oct 13 14:11:28 standalone.localdomain haproxy[70940]: 192.168.122.99:59152 [13/Oct/2025:14:11:28.446] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/460/460 201 494 - - ---- 58/8/7/7/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:11:28 standalone.localdomain haproxy[70940]: 192.168.122.99:59166 [13/Oct/2025:14:11:28.524] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/438/438 204 165 - - ---- 58/8/7/7/0 0/0 "DELETE /v3/credentials/f74efc914be426cb6ab2f1f39f891b26229b4b364c7e0d989ebd71c4ef4c73ea HTTP/1.1"
Oct 13 14:11:29 standalone.localdomain haproxy[70940]: 192.168.122.99:59182 [13/Oct/2025:14:11:28.588] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/473/473 204 165 - - ---- 58/8/7/7/0 0/0 "DELETE /v3/projects/995d7261fda2423391fc1ff4ea567667 HTTP/1.1"
Oct 13 14:11:29 standalone.localdomain haproxy[70940]: 172.21.0.2:37596 [13/Oct/2025:14:11:29.065] neutron neutron/standalone.internalapi.localdomain 0/0/0/121/121 200 2972 - - ---- 58/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=9465038575e24dca948e14d8ae9cbc00&name=default HTTP/1.1"
Oct 13 14:11:29 standalone.localdomain haproxy[70940]: 192.168.122.99:59184 [13/Oct/2025:14:11:28.603] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/763/763 201 530 - - ---- 58/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:29 standalone.localdomain haproxy[70940]: 172.21.0.2:37604 [13/Oct/2025:14:11:29.192] neutron neutron/standalone.internalapi.localdomain 0/0/2/203/205 204 152 - - ---- 58/1/0/0/0 0/0 "DELETE /v2.0/security-groups/554b38ea-3872-4739-8af7-5fb8726d8cfc HTTP/1.1"
Oct 13 14:11:29 standalone.localdomain podman[156686]: 2025-10-13 14:11:29.437465007 +0000 UTC m=+0.105222330 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-manila-share, com.redhat.component=openstack-manila-share-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, release=1, build-date=2025-07-21T15:22:36, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 manila-share, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, architecture=x86_64, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:11:29 standalone.localdomain systemd[1]: tmp-crun.XPiZQ7.mount: Deactivated successfully.
Oct 13 14:11:29 standalone.localdomain podman[156686]: 2025-10-13 14:11:29.466943171 +0000 UTC m=+0.134700474 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, name=rhosp17/openstack-manila-share, vcs-type=git, build-date=2025-07-21T15:22:36, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 manila-share, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-manila-share-container, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, summary=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, release=1, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9)
Oct 13 14:11:29 standalone.localdomain haproxy[70940]: 192.168.122.99:59194 [13/Oct/2025:14:11:28.666] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/874/874 204 165 - - ---- 58/8/7/7/0 0/0 "DELETE /v3/domains/4616292bee4e467f9df6e86a1933b00f HTTP/1.1"
Oct 13 14:11:29 standalone.localdomain haproxy[70940]: 192.168.122.99:59198 [13/Oct/2025:14:11:28.708] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/900/900 204 165 - - ---- 58/8/7/7/0 0/0 "DELETE /v3/OS-OAUTH1/consumers/1c5683ad49044117bd38e637ef4a1879 HTTP/1.1"
Oct 13 14:11:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1096: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:29 standalone.localdomain haproxy[70940]: 192.168.122.99:59212 [13/Oct/2025:14:11:28.756] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/963/963 204 165 - - ---- 58/8/7/7/0 0/0 "DELETE /v3/projects/f85f8a84c0e94da29bb84777b75e5731 HTTP/1.1"
Oct 13 14:11:29 standalone.localdomain haproxy[70940]: 192.168.122.99:59216 [13/Oct/2025:14:11:28.858] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/915/915 404 321 - - ---- 58/8/7/7/0 0/0 "PATCH /v3/domains/ef20ef4f7b4b435996c32aff6b0d48ce HTTP/1.1"
Oct 13 14:11:29 standalone.localdomain haproxy[70940]: 192.168.122.99:59220 [13/Oct/2025:14:11:28.908] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/930/930 201 540 - - ---- 58/8/7/7/0 0/0 "POST /v3/OS-EP-FILTER/endpoint_groups HTTP/1.1"
Oct 13 14:11:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:11:29 standalone.localdomain haproxy[70940]: 192.168.122.99:59230 [13/Oct/2025:14:11:28.965] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/917/917 204 165 - - ---- 58/8/7/7/0 0/0 "DELETE /v3/credentials/498d172b22fd71c1cd917b9e272ae546a3840c45154ffd1c1aeb1629fb8cab10 HTTP/1.1"
Oct 13 14:11:29 standalone.localdomain haproxy[70940]: 192.168.122.99:59232 [13/Oct/2025:14:11:29.369] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/571/571 204 165 - - ---- 58/8/7/7/0 0/0 "PUT /v3/groups/41d172d5c1e04f278dd25bb6ceae4290/users/8ccd361a15d44db8a3ead77db9324e34 HTTP/1.1"
Oct 13 14:11:30 standalone.localdomain haproxy[70940]: 192.168.122.99:59234 [13/Oct/2025:14:11:29.398] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/621/621 204 165 - - ---- 58/8/7/7/0 0/0 "DELETE /v3/projects/9465038575e24dca948e14d8ae9cbc00 HTTP/1.1"
Oct 13 14:11:30 standalone.localdomain haproxy[70940]: 192.168.122.99:59240 [13/Oct/2025:14:11:29.542] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/581/581 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/users/546e2e1736164c389984f658c8fbfd57 HTTP/1.1"
Oct 13 14:11:30 standalone.localdomain haproxy[70940]: 192.168.122.99:60964 [13/Oct/2025:14:11:29.611] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/574/574 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/OS-OAUTH1/consumers/8ac69b52d87e4484a755d9cffb1c02a3 HTTP/1.1"
Oct 13 14:11:30 standalone.localdomain haproxy[70940]: 192.168.122.99:60968 [13/Oct/2025:14:11:29.720] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/556/556 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/users/30424e168f744318a01216b07f845ee0 HTTP/1.1"
Oct 13 14:11:30 standalone.localdomain haproxy[70940]: 192.168.122.99:60970 [13/Oct/2025:14:11:29.775] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/597/597 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/users/73f866596f6d49a59be43453e4b6c08d HTTP/1.1"
Oct 13 14:11:30 standalone.localdomain haproxy[70940]: 192.168.122.99:60978 [13/Oct/2025:14:11:29.840] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/588/588 201 495 - - ---- 59/8/7/7/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:11:30 standalone.localdomain haproxy[70940]: 192.168.122.99:60988 [13/Oct/2025:14:11:29.885] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/646/646 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/projects/49c69dda71fd4d4ea7ade1fc5b484e12 HTTP/1.1"
Oct 13 14:11:30 standalone.localdomain haproxy[70940]: 192.168.122.99:60998 [13/Oct/2025:14:11:29.943] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/648/648 200 1282 - - ---- 58/7/6/6/0 0/0 "GET /v3/groups/41d172d5c1e04f278dd25bb6ceae4290/users HTTP/1.1"
Oct 13 14:11:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:11:30 standalone.localdomain ceph-mon[29756]: pgmap v1096: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:11:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:11:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:11:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:11:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:11:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:11:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:11:30 standalone.localdomain systemd[1]: tmp-crun.01PUmS.mount: Deactivated successfully.
Oct 13 14:11:30 standalone.localdomain podman[156733]: 2025-10-13 14:11:30.852088338 +0000 UTC m=+0.101024371 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.component=openstack-swift-proxy-server-container, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, build-date=2025-07-21T14:48:37, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_proxy, batch=17.1_20250721.1)
Oct 13 14:11:30 standalone.localdomain haproxy[70940]: 172.21.0.2:43450 [13/Oct/2025:14:11:30.022] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/890/890 201 8499 - - ---- 58/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:30 standalone.localdomain podman[156758]: 2025-10-13 14:11:30.911543842 +0000 UTC m=+0.140132561 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, container_name=glance_api, release=1, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T13:58:20, architecture=x86_64, name=rhosp17/openstack-glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, distribution-scope=public, tcib_managed=true)
Oct 13 14:11:30 standalone.localdomain podman[156740]: 2025-10-13 14:11:30.895007684 +0000 UTC m=+0.140626575 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, name=rhosp17/openstack-glance-api, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_internal, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4)
Oct 13 14:11:30 standalone.localdomain haproxy[70940]: 192.168.122.99:32768 [13/Oct/2025:14:11:30.125] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/838/838 204 165 - - ---- 59/7/6/6/0 0/0 "DELETE /v3/users/b90160d59360402ea0025fa93ac460e4 HTTP/1.1"
Oct 13 14:11:30 standalone.localdomain podman[156725]: 2025-10-13 14:11:30.832428074 +0000 UTC m=+0.096389438 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, vcs-type=git, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, release=1, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, 
io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, managed_by=tripleo_ansible, container_name=glance_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public)
Oct 13 14:11:31 standalone.localdomain podman[156748]: 2025-10-13 14:11:31.009427185 +0000 UTC m=+0.248963730 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:11:31 standalone.localdomain podman[156745]: 2025-10-13 14:11:31.017897845 +0000 UTC m=+0.257025567 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, io.buildah.version=1.33.12)
Oct 13 14:11:31 standalone.localdomain haproxy[70940]: 192.168.122.99:32770 [13/Oct/2025:14:11:30.190] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/840/840 201 474 - - ---- 58/6/5/5/0 0/0 "POST /v3/OS-OAUTH1/consumers HTTP/1.1"
Oct 13 14:11:31 standalone.localdomain podman[156733]: 2025-10-13 14:11:31.06339707 +0000 UTC m=+0.312333093 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-proxy-server, release=1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-proxy-server-container, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, build-date=2025-07-21T14:48:37, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4)
Oct 13 14:11:31 standalone.localdomain podman[156725]: 2025-10-13 14:11:31.07251628 +0000 UTC m=+0.336477654 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, release=1, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, container_name=glance_api_cron, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 14:11:31 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:11:31 standalone.localdomain podman[156740]: 2025-10-13 14:11:31.080721672 +0000 UTC m=+0.326340403 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, config_id=tripleo_step4, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.component=openstack-glance-api-container, container_name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:11:31 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:11:31 standalone.localdomain podman[156745]: 2025-10-13 14:11:31.090896204 +0000 UTC m=+0.330023896 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.openshift.tags=rhosp 
osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, version=17.1.9, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.openshift.expose-services=)
Oct 13 14:11:31 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:11:31 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:11:31 standalone.localdomain podman[156748]: 2025-10-13 14:11:31.113671123 +0000 UTC m=+0.353207648 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, version=17.1.9, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245)
Oct 13 14:11:31 standalone.localdomain podman[156758]: 2025-10-13 14:11:31.136359009 +0000 UTC m=+0.364947698 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, summary=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., release=1, version=17.1.9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team)
Oct 13 14:11:31 standalone.localdomain haproxy[70940]: 192.168.122.99:32786 [13/Oct/2025:14:11:30.279] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/860/860 204 165 - - ---- 58/6/5/5/0 0/0 "DELETE /v3/users/843494dd95af4ec89b34dbe4eff72cb1 HTTP/1.1"
Oct 13 14:11:31 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:11:31 standalone.localdomain podman[156726]: 2025-10-13 14:11:30.961929237 +0000 UTC m=+0.219787714 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, build-date=2025-07-21T13:58:12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-placement-api-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-placement-api, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=placement_api, summary=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, description=Red Hat OpenStack Platform 17.1 placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Oct 13 14:11:31 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:11:31 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=11319 DF PROTO=TCP SPT=49792 DPT=19885 SEQ=1750138889 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0957A4F10000000001030307) 
Oct 13 14:11:31 standalone.localdomain podman[156734]: 2025-10-13 14:11:31.143301982 +0000 UTC m=+0.387028985 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, build-date=2025-07-21T16:05:11, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, container_name=nova_metadata, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, version=17.1.9, release=1)
Oct 13 14:11:31 standalone.localdomain haproxy[70940]: 192.168.122.99:32798 [13/Oct/2025:14:11:30.374] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/851/851 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/users/7c73a6df3f474f9687aa67305f443f67 HTTP/1.1"
Oct 13 14:11:31 standalone.localdomain podman[156734]: 2025-10-13 14:11:31.225892125 +0000 UTC m=+0.469619079 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-api-container, build-date=2025-07-21T16:05:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, release=1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=nova_metadata, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:11:31 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:11:31 standalone.localdomain podman[156726]: 2025-10-13 14:11:31.248168659 +0000 UTC m=+0.506027136 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, build-date=2025-07-21T13:58:12, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, container_name=placement_api, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, config_id=tripleo_step4, name=rhosp17/openstack-placement-api, distribution-scope=public, com.redhat.component=openstack-placement-api-container, summary=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1)
Oct 13 14:11:31 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:11:31 standalone.localdomain haproxy[70940]: 192.168.122.99:32802 [13/Oct/2025:14:11:30.431] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/854/854 200 537 - - ---- 58/4/3/3/0 0/0 "PATCH /v3/OS-EP-FILTER/endpoint_groups/9c5adf07e43c4789b41fe2fde8fd869b HTTP/1.1"
Oct 13 14:11:31 standalone.localdomain haproxy[70940]: 192.168.122.99:32816 [13/Oct/2025:14:11:30.534] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/841/841 204 165 - - ---- 58/4/3/3/0 0/0 "DELETE /v3/projects/9edcf97fc432453da6d79a29b8b153cb HTTP/1.1"
Oct 13 14:11:31 standalone.localdomain haproxy[70940]: 192.168.122.99:32826 [13/Oct/2025:14:11:30.593] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/837/837 204 165 - - ---- 58/4/3/3/0 0/0 "HEAD /v3/groups/41d172d5c1e04f278dd25bb6ceae4290/users/e68f49c4a0fa47ef99a3e48029c07e34 HTTP/1.1"
Oct 13 14:11:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1097: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:31 standalone.localdomain haproxy[70940]: 172.21.0.2:43460 [13/Oct/2025:14:11:30.917] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/875/875 201 8499 - - ---- 58/5/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:32 standalone.localdomain haproxy[70940]: 172.21.0.2:43476 [13/Oct/2025:14:11:30.964] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/1/1094/1095 201 8499 - - ---- 58/4/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:32 standalone.localdomain haproxy[70940]: 192.168.122.99:32834 [13/Oct/2025:14:11:31.032] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1076/1076 200 423 - - ---- 58/5/4/4/0 0/0 "PATCH /v3/OS-OAUTH1/consumers/9941192f20ea46b7b3538a22ac49efba HTTP/1.1"
Oct 13 14:11:32 standalone.localdomain haproxy[70940]: 172.21.0.2:43484 [13/Oct/2025:14:11:31.142] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1235/1235 201 8499 - - ---- 59/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:32 standalone.localdomain haproxy[70940]: 172.21.0.2:43492 [13/Oct/2025:14:11:31.228] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1413/1413 201 8499 - - ---- 59/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:32 standalone.localdomain haproxy[70940]: 192.168.122.99:32844 [13/Oct/2025:14:11:31.287] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1413/1413 204 165 - - ---- 60/5/4/4/0 0/0 "DELETE /v3/services/e80d9edd51dc4f158180169e01cd8df4 HTTP/1.1"
Oct 13 14:11:32 standalone.localdomain ceph-mon[29756]: pgmap v1097: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:32 standalone.localdomain haproxy[70940]: 192.168.122.99:32860 [13/Oct/2025:14:11:31.376] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1418/1418 204 165 - - ---- 60/5/4/4/0 0/0 "DELETE /v3/users/edc137d8358f4729be004bfb04f36f53 HTTP/1.1"
Oct 13 14:11:32 standalone.localdomain haproxy[70940]: 192.168.122.99:32862 [13/Oct/2025:14:11:31.431] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1426/1426 204 165 - - ---- 60/5/4/4/0 0/0 "DELETE /v3/groups/41d172d5c1e04f278dd25bb6ceae4290/users/e68f49c4a0fa47ef99a3e48029c07e34 HTTP/1.1"
Oct 13 14:11:32 standalone.localdomain haproxy[70940]: 192.168.122.99:32872 [13/Oct/2025:14:11:31.795] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1111/1111 200 510 - - ---- 60/5/4/4/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:11:32 standalone.localdomain haproxy[70940]: 172.17.0.2:43886 [13/Oct/2025:14:11:32.067] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/895/895 200 8361 - - ---- 60/3/2/2/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 192.168.122.99:32886 [13/Oct/2025:14:11:32.112] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/905/905 200 423 - - ---- 60/5/4/4/0 0/0 "GET /v3/OS-OAUTH1/consumers/9941192f20ea46b7b3538a22ac49efba HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 172.17.0.2:46050 [13/Oct/2025:14:11:32.384] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/658/658 200 8361 - - ---- 60/3/1/1/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 172.17.0.2:46058 [13/Oct/2025:14:11:32.651] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/423/423 200 8361 - - ---- 60/3/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 192.168.122.99:32898 [13/Oct/2025:14:11:32.703] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/439/439 204 165 - - ---- 60/5/4/4/0 0/0 "DELETE /v3/OS-EP-FILTER/endpoint_groups/9c5adf07e43c4789b41fe2fde8fd869b HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 192.168.122.99:32900 [13/Oct/2025:14:11:32.797] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/416/416 204 165 - - ---- 60/5/4/4/0 0/0 "DELETE /v3/users/3d242ff8224b4bdf8ade1e41650ae328 HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 172.21.0.2:53696 [13/Oct/2025:14:11:32.062] neutron neutron/standalone.internalapi.localdomain 0/0/0/1189/1189 200 2972 - - ---- 61/3/2/2/0 0/0 "GET /v2.0/security-groups?tenant_id=7ab28603f6f94e898dbe5b4dae450b58&name=default HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 192.168.122.99:32916 [13/Oct/2025:14:11:32.861] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/418/418 204 165 - - ---- 62/5/4/4/0 0/0 "HEAD /v3/groups/41d172d5c1e04f278dd25bb6ceae4290/users/3ece0b4c05dd450199ac631d937058c5 HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 172.21.0.2:53700 [13/Oct/2025:14:11:32.380] neutron neutron/standalone.internalapi.localdomain 0/0/0/953/953 200 2972 - - ---- 62/3/2/2/0 0/0 "GET /v2.0/security-groups?tenant_id=a663fa086a124da2bdd0b67d421aa2e9&name=default HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 192.168.122.99:32926 [13/Oct/2025:14:11:32.910] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/436/436 201 576 - - ---- 62/4/3/3/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 192.168.122.99:32940 [13/Oct/2025:14:11:33.018] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/389/389 204 165 - - ---- 62/4/3/3/0 0/0 "DELETE /v3/OS-OAUTH1/consumers/9941192f20ea46b7b3538a22ac49efba HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 172.21.0.2:53708 [13/Oct/2025:14:11:32.644] neutron neutron/standalone.internalapi.localdomain 0/0/0/779/779 200 2972 - - ---- 63/3/2/2/0 0/0 "GET /v2.0/security-groups?tenant_id=ec9c5c705d494e1897b1c4b3a1631d56&name=default HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 192.168.122.99:32946 [13/Oct/2025:14:11:33.145] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/321/321 204 165 - - ---- 63/4/3/3/0 0/0 "DELETE /v3/services/5f32ad5c06ac4cad816b7b3196cc2f3f HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 172.17.0.100:37025 [13/Oct/2025:14:11:33.546] mysql mysql/standalone.internalapi.localdomain 1/0/39 4301 -- 66/55/54/54/0 0/0
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 172.17.0.100:44123 [13/Oct/2025:14:11:33.304] mysql mysql/standalone.internalapi.localdomain 1/0/342 33713 -- 65/54/53/53/0 0/0
Oct 13 14:11:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1098: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 172.21.0.2:53712 [13/Oct/2025:14:11:33.253] neutron neutron/standalone.internalapi.localdomain 0/0/0/439/439 204 152 - - ---- 64/3/2/2/0 0/0 "DELETE /v2.0/security-groups/5d7fa714-6dd2-4805-a786-e0d1f27e4814 HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 172.17.0.100:43235 [13/Oct/2025:14:10:40.185] mysql mysql/standalone.internalapi.localdomain 1/0/53510 657777 -- 64/53/52/52/0 0/0
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 172.17.0.100:59807 [13/Oct/2025:14:05:22.498] mysql mysql/standalone.internalapi.localdomain 1/0/371223 574045 -- 63/52/51/51/0 0/0
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 172.21.0.2:53714 [13/Oct/2025:14:11:33.335] neutron neutron/standalone.internalapi.localdomain 0/0/0/387/387 204 152 - - ---- 63/2/1/1/0 0/0 "DELETE /v2.0/security-groups/836c725d-6320-406b-8d67-ca6eed3f64c2 HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 172.21.0.2:53728 [13/Oct/2025:14:11:33.425] neutron neutron/standalone.internalapi.localdomain 0/0/0/318/318 204 152 - - ---- 62/1/0/0/0 0/0 "DELETE /v2.0/security-groups/7373346a-ce41-47c7-90e5-87f81212fff2 HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 172.21.0.2:43508 [13/Oct/2025:14:11:33.215] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/620/620 201 8234 - - ---- 62/4/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:33 standalone.localdomain haproxy[70940]: 192.168.122.99:32958 [13/Oct/2025:14:11:33.282] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/711/711 204 165 - - ---- 62/7/6/6/0 0/0 "DELETE /v3/groups/41d172d5c1e04f278dd25bb6ceae4290/users/3ece0b4c05dd450199ac631d937058c5 HTTP/1.1"
Oct 13 14:11:34 standalone.localdomain haproxy[70940]: 192.168.122.99:32964 [13/Oct/2025:14:11:33.349] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/948/948 201 499 - - ---- 62/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:34 standalone.localdomain haproxy[70940]: 192.168.122.99:32970 [13/Oct/2025:14:11:33.412] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/948/948 204 165 - - ---- 62/7/6/6/0 0/0 "DELETE /v3/users/6664c1481bd143399b877a8eeb01fd60 HTTP/1.1"
Oct 13 14:11:34 standalone.localdomain runuser[156959]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:11:34 standalone.localdomain haproxy[70940]: 192.168.122.99:32974 [13/Oct/2025:14:11:33.469] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/952/952 204 165 - - ---- 62/7/6/6/0 0/0 "DELETE /v3/OS-EP-FILTER/endpoint_groups/9399e445f4074fc699488624cec0f792 HTTP/1.1"
Oct 13 14:11:34 standalone.localdomain haproxy[70940]: 192.168.122.99:32990 [13/Oct/2025:14:11:33.695] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/808/808 204 165 - - ---- 62/7/6/6/0 0/0 "DELETE /v3/projects/7ab28603f6f94e898dbe5b4dae450b58 HTTP/1.1"
Oct 13 14:11:34 standalone.localdomain haproxy[70940]: 192.168.122.99:33002 [13/Oct/2025:14:11:33.725] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/859/859 204 165 - - ---- 62/6/5/5/0 0/0 "DELETE /v3/projects/a663fa086a124da2bdd0b67d421aa2e9 HTTP/1.1"
Oct 13 14:11:34 standalone.localdomain haproxy[70940]: 192.168.122.99:33008 [13/Oct/2025:14:11:33.747] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/934/934 204 165 - - ---- 62/5/4/4/0 0/0 "DELETE /v3/projects/ec9c5c705d494e1897b1c4b3a1631d56 HTTP/1.1"
Oct 13 14:11:34 standalone.localdomain ceph-mon[29756]: pgmap v1098: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:34 standalone.localdomain haproxy[70940]: 172.21.0.2:53744 [13/Oct/2025:14:11:34.506] neutron neutron/standalone.internalapi.localdomain 0/0/0/248/248 200 2972 - - ---- 62/4/3/3/0 0/0 "GET /v2.0/security-groups?tenant_id=25a2c6e538c94819b72494fd6641c8f0&name=default HTTP/1.1"
Oct 13 14:11:34 standalone.localdomain haproxy[70940]: 172.17.0.2:46058 [13/Oct/2025:14:11:33.844] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/930/930 200 8229 - - ---- 62/3/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:34 standalone.localdomain haproxy[70940]: 192.168.122.99:33016 [13/Oct/2025:14:11:33.995] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/826/826 204 165 - - ---- 62/4/3/3/0 0/0 "HEAD /v3/groups/41d172d5c1e04f278dd25bb6ceae4290/users/8ccd361a15d44db8a3ead77db9324e34 HTTP/1.1"
Oct 13 14:11:34 standalone.localdomain haproxy[70940]: 192.168.122.99:33018 [13/Oct/2025:14:11:34.301] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/538/538 200 2700 - - ---- 62/4/3/3/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:11:34 standalone.localdomain haproxy[70940]: 172.21.0.2:53746 [13/Oct/2025:14:11:34.586] neutron neutron/standalone.internalapi.localdomain 0/0/0/301/301 200 2972 - - ---- 63/5/4/3/0 0/0 "GET /v2.0/security-groups?tenant_id=53bee85e2faf4ca9a286c485d060df94&name=default HTTP/1.1"
Oct 13 14:11:34 standalone.localdomain haproxy[70940]: 192.168.122.99:33020 [13/Oct/2025:14:11:34.363] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/545/545 204 165 - - ---- 62/4/3/3/0 0/0 "DELETE /v3/users/ed437132214146b4aa0224ba3e63cb59 HTTP/1.1"
Oct 13 14:11:34 standalone.localdomain haproxy[70940]: 192.168.122.99:33022 [13/Oct/2025:14:11:34.423] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/545/545 204 165 - - ---- 62/3/2/2/0 0/0 "DELETE /v3/services/8311775355bd423e97a1a0c00ff22ea0 HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 192.168.122.99:33034 [13/Oct/2025:14:11:34.823] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/205/205 204 165 - - ---- 63/3/2/2/0 0/0 "DELETE /v3/groups/41d172d5c1e04f278dd25bb6ceae4290/users/8ccd361a15d44db8a3ead77db9324e34 HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 192.168.122.99:33048 [13/Oct/2025:14:11:34.841] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/255/255 204 165 - - ---- 63/3/2/2/0 0/0 "PUT /v3/projects/dafccc5472334499b3f1b5b59042ec7d/users/3a357ded3f3d4e379ce776c0f3a29d56/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 172.21.0.2:53760 [13/Oct/2025:14:11:34.683] neutron neutron/standalone.internalapi.localdomain 0/0/0/416/416 200 2972 - - ---- 63/4/3/3/0 0/0 "GET /v2.0/security-groups?tenant_id=d77d536bdc074c768beae139f6057980&name=default HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain runuser[156959]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 172.21.0.2:53730 [13/Oct/2025:14:11:33.839] neutron neutron/standalone.internalapi.localdomain 0/0/0/1343/1343 200 2972 - - ---- 67/4/3/3/0 0/0 "GET /v2.0/security-groups?tenant_id=a7b02bed147c4a6f87f4a733e2f52dc5&name=default HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 172.17.0.100:45619 [13/Oct/2025:14:11:33.238] mysql mysql/standalone.internalapi.localdomain 1/0/2010 104061 -- 67/56/55/55/0 0/0
Oct 13 14:11:35 standalone.localdomain runuser[157029]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 172.17.0.100:50815 [13/Oct/2025:14:11:28.358] mysql mysql/standalone.internalapi.localdomain 1/0/7032 151316 -- 66/55/54/54/0 0/0
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 172.21.0.2:43510 [13/Oct/2025:14:11:34.913] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/525/525 201 8100 - - ---- 65/4/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 172.21.0.2:53770 [13/Oct/2025:14:11:34.757] neutron neutron/standalone.internalapi.localdomain 0/0/0/688/688 204 152 - - ---- 65/5/4/4/0 0/0 "DELETE /v2.0/security-groups/b7ae4bec-d27d-4504-ba70-9b297aeed667 HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 172.21.0.2:53778 [13/Oct/2025:14:11:34.890] neutron neutron/standalone.internalapi.localdomain 0/0/0/594/594 204 152 - - ---- 65/4/3/3/0 0/0 "DELETE /v2.0/security-groups/2b6d8b85-f8cc-4435-ba0c-a15d3f1ed3fb HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 172.17.0.100:40705 [13/Oct/2025:14:11:33.570] mysql mysql/standalone.internalapi.localdomain 1/0/1915 69342 -- 64/54/53/53/0 0/0
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 192.168.122.99:33050 [13/Oct/2025:14:11:34.970] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/545/545 204 165 - - ---- 64/5/4/4/0 0/0 "DELETE /v3/users/9246bdae8a134c218b20550a2f4e554c HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 172.17.0.100:48915 [13/Oct/2025:14:11:35.146] mysql mysql/standalone.internalapi.localdomain 1/0/374 22233 -- 64/53/52/52/0 0/0
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 172.17.0.100:33889 [13/Oct/2025:14:11:35.177] mysql mysql/standalone.internalapi.localdomain 1/0/345 18912 -- 63/52/51/51/0 0/0
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 172.21.0.2:53810 [13/Oct/2025:14:11:35.184] neutron neutron/standalone.internalapi.localdomain 0/0/0/367/367 204 152 - - ---- 62/3/2/2/0 0/0 "DELETE /v2.0/security-groups/204dd87c-9e8d-497c-8db6-3b8756cf6dc6 HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 172.21.0.2:53794 [13/Oct/2025:14:11:35.101] neutron neutron/standalone.internalapi.localdomain 0/0/0/465/465 204 152 - - ---- 62/2/1/1/0 0/0 "DELETE /v2.0/security-groups/0191e006-daa4-4f2c-b22e-b549c5411ca3 HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 192.168.122.99:33066 [13/Oct/2025:14:11:35.029] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/553/553 200 347 - - ---- 62/7/6/6/0 0/0 "GET /v3/groups/41d172d5c1e04f278dd25bb6ceae4290/users HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 192.168.122.99:33068 [13/Oct/2025:14:11:35.098] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/532/532 200 2700 - - ---- 62/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1099: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 192.168.122.99:33072 [13/Oct/2025:14:11:35.446] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/247/247 204 165 - - ---- 62/7/6/6/0 0/0 "DELETE /v3/projects/25a2c6e538c94819b72494fd6641c8f0 HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:11:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 172.17.0.2:46058 [13/Oct/2025:14:11:35.453] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/340/340 200 8095 - - ---- 62/4/1/1/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain podman[157075]: 2025-10-13 14:11:35.818821408 +0000 UTC m=+0.082215764 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, build-date=2025-07-21T16:05:11, name=rhosp17/openstack-nova-api, vcs-type=git, vendor=Red Hat, Inc., container_name=nova_api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, 
tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 192.168.122.99:33076 [13/Oct/2025:14:11:35.485] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/379/379 204 165 - - ---- 62/6/5/5/0 0/0 "DELETE /v3/projects/53bee85e2faf4ca9a286c485d060df94 HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain ceph-mon[29756]: pgmap v1099: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:35 standalone.localdomain systemd[1]: tmp-crun.yAbDO3.mount: Deactivated successfully.
Oct 13 14:11:35 standalone.localdomain podman[157074]: 2025-10-13 14:11:35.887828455 +0000 UTC m=+0.151056256 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-keystone, release=1, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone_cron, build-date=2025-07-21T13:27:18, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1)
Oct 13 14:11:35 standalone.localdomain podman[157074]: 2025-10-13 14:11:35.89581355 +0000 UTC m=+0.159041331 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-keystone-container, container_name=keystone_cron, io.buildah.version=1.33.12, name=rhosp17/openstack-keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, summary=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:11:35 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:11:35 standalone.localdomain podman[157075]: 2025-10-13 14:11:35.906981443 +0000 UTC m=+0.170375819 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=nova_api, batch=17.1_20250721.1, name=rhosp17/openstack-nova-api, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-nova-api-container, summary=Red Hat OpenStack Platform 17.1 nova-api, 
vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:11:35 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 192.168.122.99:33090 [13/Oct/2025:14:11:35.516] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/436/436 204 165 - - ---- 62/5/4/4/0 0/0 "DELETE /v3/users/2cc3076eb1d342548f5e45a8988ec05e HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain runuser[157029]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:11:35 standalone.localdomain haproxy[70940]: 172.21.0.2:53818 [13/Oct/2025:14:11:35.440] neutron neutron/standalone.internalapi.localdomain 0/0/0/548/548 200 2972 - - ---- 62/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=f37605dc1af34c04802ac56a422aba80&name=default HTTP/1.1"
Oct 13 14:11:35 standalone.localdomain runuser[157130]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:11:36 standalone.localdomain haproxy[70940]: 192.168.122.99:33102 [13/Oct/2025:14:11:35.553] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/479/479 204 165 - - ---- 62/4/3/3/0 0/0 "DELETE /v3/projects/a7b02bed147c4a6f87f4a733e2f52dc5 HTTP/1.1"
Oct 13 14:11:36 standalone.localdomain haproxy[70940]: 192.168.122.99:33112 [13/Oct/2025:14:11:35.568] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/554/554 204 165 - - ---- 62/3/2/2/0 0/0 "DELETE /v3/projects/d77d536bdc074c768beae139f6057980 HTTP/1.1"
Oct 13 14:11:36 standalone.localdomain haproxy[70940]: 172.21.0.2:53826 [13/Oct/2025:14:11:36.034] neutron neutron/standalone.internalapi.localdomain 0/0/0/181/181 200 2972 - - ---- 62/2/1/1/0 0/0 "GET /v2.0/security-groups?tenant_id=3c09edecb4504cf0a7553efa3eafcb89&name=default HTTP/1.1"
Oct 13 14:11:36 standalone.localdomain haproxy[70940]: 192.168.122.99:33126 [13/Oct/2025:14:11:35.585] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/636/636 204 165 - - ---- 62/2/1/1/0 0/0 "DELETE /v3/users/8ccd361a15d44db8a3ead77db9324e34 HTTP/1.1"
Oct 13 14:11:36 standalone.localdomain haproxy[70940]: 172.21.0.2:53820 [13/Oct/2025:14:11:35.990] neutron neutron/standalone.internalapi.localdomain 0/0/0/270/270 204 152 - - ---- 62/2/1/1/0 0/0 "DELETE /v2.0/security-groups/60c51701-f432-403c-9226-ea2628d00cfb HTTP/1.1"
Oct 13 14:11:36 standalone.localdomain haproxy[70940]: 192.168.122.99:33128 [13/Oct/2025:14:11:35.632] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/646/646 204 165 - - ---- 62/3/2/2/0 0/0 "PUT /v3/projects/dafccc5472334499b3f1b5b59042ec7d/users/3a357ded3f3d4e379ce776c0f3a29d56/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:36 standalone.localdomain haproxy[70940]: 172.21.0.2:53842 [13/Oct/2025:14:11:36.217] neutron neutron/standalone.internalapi.localdomain 0/0/0/175/175 204 152 - - ---- 62/1/0/0/0 0/0 "DELETE /v2.0/security-groups/127f7f18-8cec-4162-96c4-bba81869b7f4 HTTP/1.1"
Oct 13 14:11:36 standalone.localdomain haproxy[70940]: 172.21.0.2:43522 [13/Oct/2025:14:11:35.696] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/899/899 201 8100 - - ---- 62/8/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:36 standalone.localdomain runuser[157130]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:11:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:11:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:11:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:11:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:11:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:11:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:11:36 standalone.localdomain systemd[1]: tmp-crun.1BTkJ0.mount: Deactivated successfully.
Oct 13 14:11:36 standalone.localdomain podman[157198]: 2025-10-13 14:11:36.815687683 +0000 UTC m=+0.080445690 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, name=rhosp17/openstack-neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=neutron_sriov_agent, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, io.openshift.tags=rhosp osp openstack osp-17.1, 
io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-sriov-agent-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:11:36 standalone.localdomain podman[157194]: 2025-10-13 14:11:36.843292619 +0000 UTC m=+0.113836803 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, release=1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-dhcp-agent, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, com.redhat.component=openstack-neutron-dhcp-agent-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, container_name=neutron_dhcp, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=)
Oct 13 14:11:36 standalone.localdomain podman[157197]: 2025-10-13 14:11:36.826287367 +0000 UTC m=+0.086182884 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, build-date=2025-07-21T16:18:19, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, 
name=rhosp17/openstack-barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, config_id=tripleo_step3, version=17.1.9, com.redhat.component=openstack-barbican-keystone-listener-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, batch=17.1_20250721.1, container_name=barbican_keystone_listener)
Oct 13 14:11:36 standalone.localdomain podman[157209]: 2025-10-13 14:11:36.891522609 +0000 UTC m=+0.141818802 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, name=rhosp17/openstack-barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T15:36:22, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.component=openstack-barbican-worker-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, container_name=barbican_worker)
Oct 13 14:11:36 standalone.localdomain haproxy[70940]: 172.21.0.2:43528 [13/Oct/2025:14:11:35.867] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1038/1038 201 8100 - - ---- 62/8/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:36 standalone.localdomain podman[157197]: 2025-10-13 14:11:36.910732588 +0000 UTC m=+0.170628105 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:18:19, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, 
description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, name=rhosp17/openstack-barbican-keystone-listener, com.redhat.component=openstack-barbican-keystone-listener-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, container_name=barbican_keystone_listener, vcs-type=git)
Oct 13 14:11:36 standalone.localdomain podman[157194]: 2025-10-13 14:11:36.916806465 +0000 UTC m=+0.187350529 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.component=openstack-neutron-dhcp-agent-container, container_name=neutron_dhcp, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:54, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team)
Oct 13 14:11:36 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:11:36 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:11:36 standalone.localdomain podman[157196]: 2025-10-13 14:11:36.792666496 +0000 UTC m=+0.063002904 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, vcs-type=git, build-date=2025-07-21T15:22:44, container_name=barbican_api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.component=openstack-barbican-api-container, name=rhosp17/openstack-barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, release=1, batch=17.1_20250721.1, distribution-scope=public)
Oct 13 14:11:36 standalone.localdomain podman[157209]: 2025-10-13 14:11:36.962159776 +0000 UTC m=+0.212455999 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, maintainer=OpenStack TripleO Team, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, com.redhat.component=openstack-barbican-worker-container, release=1, vcs-type=git, container_name=barbican_worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 barbican-worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T15:36:22, version=17.1.9, 
io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, name=rhosp17/openstack-barbican-worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:11:36 standalone.localdomain podman[157198]: 2025-10-13 14:11:36.971654347 +0000 UTC m=+0.236412374 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, config_id=tripleo_step4, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, distribution-scope=public, name=rhosp17/openstack-neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent)
Oct 13 14:11:36 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:11:36 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:11:37 standalone.localdomain podman[157219]: 2025-10-13 14:11:37.028279954 +0000 UTC m=+0.285965404 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, container_name=nova_api_cron, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, name=rhosp17/openstack-nova-api, vendor=Red Hat, Inc., batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, version=17.1.9, build-date=2025-07-21T16:05:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-nova-api-container, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=)
Oct 13 14:11:37 standalone.localdomain podman[157219]: 2025-10-13 14:11:37.038927381 +0000 UTC m=+0.296612841 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, container_name=nova_api_cron, version=17.1.9, name=rhosp17/openstack-nova-api)
Oct 13 14:11:37 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:11:37 standalone.localdomain podman[157196]: 2025-10-13 14:11:37.080249929 +0000 UTC m=+0.350586377 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, release=1, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step3, container_name=barbican_api, distribution-scope=public, name=rhosp17/openstack-barbican-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T15:22:44, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-barbican-api-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed)
Oct 13 14:11:37 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:11:37 standalone.localdomain haproxy[70940]: 172.21.0.2:43538 [13/Oct/2025:14:11:35.955] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1220/1220 201 8100 - - ---- 62/8/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:37 standalone.localdomain haproxy[70940]: 172.21.0.2:43540 [13/Oct/2025:14:11:36.123] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1332/1332 201 8100 - - ---- 62/7/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:37 standalone.localdomain haproxy[70940]: 192.168.122.99:33132 [13/Oct/2025:14:11:36.222] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1341/1341 204 165 - - ---- 62/3/2/2/0 0/0 "DELETE /v3/users/3ece0b4c05dd450199ac631d937058c5 HTTP/1.1"
Oct 13 14:11:37 standalone.localdomain haproxy[70940]: 192.168.122.99:33138 [13/Oct/2025:14:11:36.262] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1382/1382 204 165 - - ---- 62/3/2/2/0 0/0 "DELETE /v3/projects/f37605dc1af34c04802ac56a422aba80 HTTP/1.1"
Oct 13 14:11:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1100: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:37 standalone.localdomain haproxy[70940]: 172.21.0.2:53854 [13/Oct/2025:14:11:37.647] neutron neutron/standalone.internalapi.localdomain 0/0/0/105/105 200 2972 - - ---- 62/2/1/1/0 0/0 "GET /v2.0/security-groups?tenant_id=c2fe7618d8674deda12a14c2ae3528ab&name=default HTTP/1.1"
Oct 13 14:11:37 standalone.localdomain systemd[1]: tmp-crun.q7XsTr.mount: Deactivated successfully.
Oct 13 14:11:37 standalone.localdomain haproxy[70940]: 172.21.0.2:53866 [13/Oct/2025:14:11:37.754] neutron neutron/standalone.internalapi.localdomain 0/0/0/186/186 204 152 - - ---- 62/2/1/1/0 0/0 "DELETE /v2.0/security-groups/5bdef9e9-88d4-4b63-9bfd-41d0cd4e6cb7 HTTP/1.1"
Oct 13 14:11:38 standalone.localdomain haproxy[70940]: 172.21.0.2:43552 [13/Oct/2025:14:11:36.283] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1775/1775 201 8114 - - ---- 62/7/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:38 standalone.localdomain haproxy[70940]: 192.168.122.99:33150 [13/Oct/2025:14:11:36.394] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1781/1781 204 165 - - ---- 62/4/3/3/0 0/0 "DELETE /v3/projects/3c09edecb4504cf0a7553efa3eafcb89 HTTP/1.1"
Oct 13 14:11:38 standalone.localdomain haproxy[70940]: 172.21.0.2:43560 [13/Oct/2025:14:11:36.599] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1876/1876 201 8100 - - ---- 62/7/4/4/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:38 standalone.localdomain ceph-mon[29756]: pgmap v1100: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:38 standalone.localdomain haproxy[70940]: 172.21.0.2:43576 [13/Oct/2025:14:11:36.912] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1836/1836 201 8100 - - ---- 62/6/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:38 standalone.localdomain haproxy[70940]: 172.17.0.2:46058 [13/Oct/2025:14:11:37.188] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1617/1617 200 8095 - - ---- 62/5/2/2/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:38 standalone.localdomain podman[157527]: 2025-10-13 14:11:38.918099104 +0000 UTC m=+0.110451339 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, description=Red Hat OpenStack Platform 17.1 cinder-backup, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-cinder-backup-container, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cinder-backup, build-date=2025-07-21T16:18:24, name=rhosp17/openstack-cinder-backup, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, release=1, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup)
Oct 13 14:11:38 standalone.localdomain podman[157527]: 2025-10-13 14:11:38.949978132 +0000 UTC m=+0.142330297 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, maintainer=OpenStack TripleO Team, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, summary=Red Hat OpenStack Platform 17.1 cinder-backup, architecture=x86_64, com.redhat.component=openstack-cinder-backup-container, name=rhosp17/openstack-cinder-backup, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cinder-backup, io.buildah.version=1.33.12, release=1, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, build-date=2025-07-21T16:18:24, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:11:38 standalone.localdomain haproxy[70940]: 172.21.0.2:53846 [13/Oct/2025:14:11:37.180] neutron neutron/standalone.internalapi.localdomain 0/0/0/1816/1816 200 2972 - - ---- 62/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=cc054b469f9740b3abe54a3e348e7912&name=default HTTP/1.1"
Oct 13 14:11:39 standalone.localdomain haproxy[70940]: 172.21.0.2:43582 [13/Oct/2025:14:11:37.465] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1619/1619 201 8100 - - ---- 62/5/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:39 standalone.localdomain haproxy[70940]: 192.168.122.99:33152 [13/Oct/2025:14:11:37.567] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1608/1608 204 165 - - ---- 62/6/5/5/0 0/0 "DELETE /v3/users/e68f49c4a0fa47ef99a3e48029c07e34 HTTP/1.1"
Oct 13 14:11:39 standalone.localdomain haproxy[70940]: 172.21.0.2:53876 [13/Oct/2025:14:11:39.001] neutron neutron/standalone.internalapi.localdomain 0/0/0/202/202 204 152 - - ---- 62/1/0/0/0 0/0 "DELETE /v2.0/security-groups/bfe330da-1db0-46ab-b0fe-c107abb6db57 HTTP/1.1"
Oct 13 14:11:39 standalone.localdomain haproxy[70940]: 192.168.122.99:33154 [13/Oct/2025:14:11:37.943] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1339/1339 204 165 - - ---- 62/7/6/6/0 0/0 "DELETE /v3/projects/c2fe7618d8674deda12a14c2ae3528ab HTTP/1.1"
Oct 13 14:11:39 standalone.localdomain haproxy[70940]: 192.168.122.99:33166 [13/Oct/2025:14:11:38.061] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1272/1272 201 578 - - ---- 62/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1101: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:39 standalone.localdomain haproxy[70940]: 172.21.0.2:43584 [13/Oct/2025:14:11:38.178] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1497/1497 201 8100 - - ---- 62/5/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:39 standalone.localdomain haproxy[70940]: 192.168.122.99:33174 [13/Oct/2025:14:11:38.477] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1221/1221 200 510 - - ---- 62/6/5/5/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:11:39 standalone.localdomain haproxy[70940]: 192.168.122.99:33176 [13/Oct/2025:14:11:38.751] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/969/969 200 510 - - ---- 62/6/5/5/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:11:39 standalone.localdomain haproxy[70940]: 192.168.122.99:33182 [13/Oct/2025:14:11:39.088] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/652/652 200 510 - - ---- 62/6/5/5/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:11:39 standalone.localdomain haproxy[70940]: 192.168.122.99:33192 [13/Oct/2025:14:11:39.177] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/634/634 204 165 - - ---- 62/6/5/5/0 0/0 "DELETE /v3/groups/41d172d5c1e04f278dd25bb6ceae4290 HTTP/1.1"
Oct 13 14:11:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:11:39 standalone.localdomain haproxy[70940]: 192.168.122.99:33206 [13/Oct/2025:14:11:39.207] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/701/701 204 165 - - ---- 62/6/5/5/0 0/0 "DELETE /v3/projects/cc054b469f9740b3abe54a3e348e7912 HTTP/1.1"
Oct 13 14:11:40 standalone.localdomain haproxy[70940]: 172.21.0.2:40256 [13/Oct/2025:14:11:39.911] neutron neutron/standalone.internalapi.localdomain 0/0/0/195/195 200 2972 - - ---- 62/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=8663c786e40e473ba326cdf580ab887d&name=default HTTP/1.1"
Oct 13 14:11:40 standalone.localdomain haproxy[70940]: 172.21.0.2:43590 [13/Oct/2025:14:11:39.284] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/926/926 201 8100 - - ---- 62/5/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:40 standalone.localdomain haproxy[70940]: 172.21.0.2:40260 [13/Oct/2025:14:11:40.109] neutron neutron/standalone.internalapi.localdomain 0/0/0/195/195 204 152 - - ---- 62/1/0/0/0 0/0 "DELETE /v2.0/security-groups/47a2a901-0cb9-46d2-ba3c-73bc8d7823a5 HTTP/1.1"
Oct 13 14:11:40 standalone.localdomain haproxy[70940]: 192.168.122.99:33214 [13/Oct/2025:14:11:39.335] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1136/1136 201 499 - - ---- 62/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:40 standalone.localdomain haproxy[70940]: 172.21.0.2:39566 [13/Oct/2025:14:11:39.680] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1056/1056 201 8100 - - ---- 62/5/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:40 standalone.localdomain ceph-mon[29756]: pgmap v1101: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:40 standalone.localdomain haproxy[70940]: 192.168.122.99:45128 [13/Oct/2025:14:11:39.703] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1062/1062 201 596 - - ---- 62/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:40 standalone.localdomain haproxy[70940]: 192.168.122.99:45144 [13/Oct/2025:14:11:39.723] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1096/1096 201 576 - - ---- 62/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:40 standalone.localdomain haproxy[70940]: 192.168.122.99:45158 [13/Oct/2025:14:11:39.742] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1129/1129 201 592 - - ---- 62/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:40 standalone.localdomain haproxy[70940]: 192.168.122.99:45162 [13/Oct/2025:14:11:39.816] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1107/1107 201 534 - - ---- 62/7/6/6/0 0/0 "POST /v3/groups HTTP/1.1"
Oct 13 14:11:41 standalone.localdomain haproxy[70940]: 172.21.0.2:39572 [13/Oct/2025:14:11:40.215] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1000/1000 201 8100 - - ---- 62/4/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:41 standalone.localdomain haproxy[70940]: 192.168.122.99:45172 [13/Oct/2025:14:11:40.308] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/971/971 204 165 - - ---- 62/8/7/7/0 0/0 "DELETE /v3/projects/8663c786e40e473ba326cdf580ab887d HTTP/1.1"
Oct 13 14:11:41 standalone.localdomain haproxy[70940]: 192.168.122.99:45184 [13/Oct/2025:14:11:40.475] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/849/849 200 2700 - - ---- 63/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:41 standalone.localdomain haproxy[70940]: 192.168.122.99:45188 [13/Oct/2025:14:11:40.740] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/604/604 200 510 - - ---- 62/7/6/6/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:11:41 standalone.localdomain haproxy[70940]: 192.168.122.99:45202 [13/Oct/2025:14:11:40.768] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/849/849 201 509 - - ---- 62/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1102: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:41 standalone.localdomain haproxy[70940]: 192.168.122.99:45212 [13/Oct/2025:14:11:40.823] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1055/1055 201 499 - - ---- 62/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:42 standalone.localdomain haproxy[70940]: 192.168.122.99:45218 [13/Oct/2025:14:11:40.874] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1266/1266 201 507 - - ---- 63/8/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:42 standalone.localdomain haproxy[70940]: 192.168.122.99:45224 [13/Oct/2025:14:11:40.925] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1257/1257 201 534 - - ---- 62/7/6/6/0 0/0 "POST /v3/groups HTTP/1.1"
Oct 13 14:11:42 standalone.localdomain haproxy[70940]: 192.168.122.99:45232 [13/Oct/2025:14:11:41.218] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/985/985 200 510 - - ---- 62/7/6/6/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:11:42 standalone.localdomain haproxy[70940]: 172.21.0.2:39576 [13/Oct/2025:14:11:41.283] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1186/1186 201 8100 - - ---- 62/4/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:42 standalone.localdomain haproxy[70940]: 192.168.122.99:45242 [13/Oct/2025:14:11:41.327] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1169/1169 204 165 - - ---- 62/7/6/6/0 0/0 "PUT /v3/projects/9337f12d85fb43dfb41b789d366e25b0/users/d0415f0af4dd4a22aaf2124bacf1cae8/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:42 standalone.localdomain haproxy[70940]: 192.168.122.99:45250 [13/Oct/2025:14:11:41.346] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1211/1211 201 592 - - ---- 62/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:42 standalone.localdomain haproxy[70940]: 192.168.122.99:45256 [13/Oct/2025:14:11:41.619] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/987/987 200 2700 - - ---- 62/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:42 standalone.localdomain haproxy[70940]: 192.168.122.99:45262 [13/Oct/2025:14:11:41.881] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/739/739 200 2700 - - ---- 62/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:42 standalone.localdomain haproxy[70940]: 192.168.122.99:45278 [13/Oct/2025:14:11:42.140] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/3/489/492 200 2700 - - ---- 63/8/7/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:42 standalone.localdomain haproxy[70940]: 192.168.122.99:45294 [13/Oct/2025:14:11:42.185] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/491/491 201 532 - - ---- 62/7/6/6/0 0/0 "POST /v3/groups HTTP/1.1"
Oct 13 14:11:42 standalone.localdomain haproxy[70940]: 192.168.122.99:45306 [13/Oct/2025:14:11:42.205] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/492/492 201 574 - - ---- 62/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:11:42 standalone.localdomain ceph-mon[29756]: pgmap v1102: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:42 standalone.localdomain podman[157670]: 2025-10-13 14:11:42.826901717 +0000 UTC m=+0.101140174 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, architecture=x86_64, release=1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=ovn_cluster_northd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=ovn_cluster_northd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-northd, build-date=2025-07-21T13:30:04, io.buildah.version=1.33.12, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.component=openstack-ovn-northd-container)
Oct 13 14:11:42 standalone.localdomain podman[157670]: 2025-10-13 14:11:42.874112936 +0000 UTC m=+0.148351373 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, tcib_managed=true, container_name=ovn_cluster_northd, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-northd-container, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-northd, name=rhosp17/openstack-ovn-northd, build-date=2025-07-21T13:30:04, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, io.buildah.version=1.33.12, config_id=ovn_cluster_northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:11:42 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:11:43 standalone.localdomain haproxy[70940]: 172.21.0.2:39592 [13/Oct/2025:14:11:42.477] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/554/554 201 8100 - - ---- 61/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:43 standalone.localdomain haproxy[70940]: 192.168.122.99:45312 [13/Oct/2025:14:11:42.498] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/556/556 200 2700 - - ---- 60/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:43 standalone.localdomain haproxy[70940]: 192.168.122.99:45324 [13/Oct/2025:14:11:42.561] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/786/786 201 507 - - ---- 60/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:43 standalone.localdomain haproxy[70940]: 192.168.122.99:45338 [13/Oct/2025:14:11:42.607] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/770/770 204 165 - - ---- 60/8/7/7/0 0/0 "PUT /v3/projects/56870b58ae40491a95a78e2c7a0320d2/users/9e54162d92344631bb5a200d471462a9/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:43 standalone.localdomain haproxy[70940]: 192.168.122.99:45340 [13/Oct/2025:14:11:42.622] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/804/804 204 165 - - ---- 60/8/7/7/0 0/0 "PUT /v3/projects/e0143409e3de40f4a710d083cdb21201/users/40c2dffc7dd248129d9e9328aa9f3ff6/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:43 standalone.localdomain haproxy[70940]: 192.168.122.99:45342 [13/Oct/2025:14:11:42.634] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/862/862 204 165 - - ---- 60/8/7/7/0 0/0 "PUT /v3/projects/699b2f70d8e742e3af63910a4fe07787/users/2f944f7f97924bc09890334e03fd090a/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:43 standalone.localdomain haproxy[70940]: 192.168.122.99:45358 [13/Oct/2025:14:11:42.679] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/880/880 200 1257 - - ---- 60/8/7/7/0 0/0 "GET /v3/groups HTTP/1.1"
Oct 13 14:11:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1103: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:11:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:11:43 standalone.localdomain systemd[1]: tmp-crun.xQpcTp.mount: Deactivated successfully.
Oct 13 14:11:43 standalone.localdomain podman[157699]: 2025-10-13 14:11:43.800688793 +0000 UTC m=+0.067863323 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, managed_by=tripleo_ansible, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true)
Oct 13 14:11:43 standalone.localdomain podman[157699]: 2025-10-13 14:11:43.853011929 +0000 UTC m=+0.120186479 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, managed_by=tripleo_ansible, vcs-type=git)
Oct 13 14:11:43 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:11:43 standalone.localdomain haproxy[70940]: 192.168.122.99:45374 [13/Oct/2025:14:11:42.702] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1183/1183 201 498 - - ---- 60/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:43 standalone.localdomain podman[157700]: 2025-10-13 14:11:43.854092242 +0000 UTC m=+0.116594639 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, architecture=x86_64, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.openshift.expose-services=, container_name=clustercheck, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, config_id=tripleo_step2, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
mariadb, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git)
Oct 13 14:11:43 standalone.localdomain haproxy[70940]: 192.168.122.99:45382 [13/Oct/2025:14:11:43.035] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/876/876 200 510 - - ---- 60/8/7/7/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:11:43 standalone.localdomain haproxy[70940]: 192.168.122.99:45386 [13/Oct/2025:14:11:43.056] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/873/873 204 165 - - ---- 60/8/7/7/0 0/0 "PUT /v3/projects/9337f12d85fb43dfb41b789d366e25b0/users/d0415f0af4dd4a22aaf2124bacf1cae8/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:43 standalone.localdomain podman[157700]: 2025-10-13 14:11:43.9391188 +0000 UTC m=+0.201621177 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, container_name=clustercheck, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T12:58:45, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat 
OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.component=openstack-mariadb-container)
Oct 13 14:11:43 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45402 [13/Oct/2025:14:11:43.349] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/654/654 200 2700 - - ---- 60/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45412 [13/Oct/2025:14:11:43.380] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/639/639 200 2700 - - ---- 60/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45428 [13/Oct/2025:14:11:43.429] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/606/606 200 2700 - - ---- 60/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45432 [13/Oct/2025:14:11:43.498] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/551/551 200 2700 - - ---- 60/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45434 [13/Oct/2025:14:11:43.561] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/545/545 204 165 - - ---- 60/8/7/7/0 0/0 "DELETE /v3/groups/37f5a0feaa96485489943ee47be4fa82 HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45448 [13/Oct/2025:14:11:43.889] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/259/259 200 2700 - - ---- 60/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45464 [13/Oct/2025:14:11:43.914] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/254/254 201 584 - - ---- 60/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45472 [13/Oct/2025:14:11:43.932] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/279/279 200 2700 - - ---- 60/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45484 [13/Oct/2025:14:11:44.007] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/228/228 204 165 - - ---- 60/8/7/7/0 0/0 "PUT /v3/projects/b4183140b95c4e72831517ba29d8157a/users/b92813fc734948bcbe44071573ead780/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45500 [13/Oct/2025:14:11:44.022] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/257/257 204 165 - - ---- 60/8/7/7/0 0/0 "PUT /v3/projects/56870b58ae40491a95a78e2c7a0320d2/users/9e54162d92344631bb5a200d471462a9/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45504 [13/Oct/2025:14:11:44.036] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/287/287 204 165 - - ---- 60/7/6/6/0 0/0 "PUT /v3/projects/e0143409e3de40f4a710d083cdb21201/users/40c2dffc7dd248129d9e9328aa9f3ff6/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45508 [13/Oct/2025:14:11:44.051] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/320/320 204 165 - - ---- 60/6/5/5/0 0/0 "PUT /v3/projects/699b2f70d8e742e3af63910a4fe07787/users/2f944f7f97924bc09890334e03fd090a/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45522 [13/Oct/2025:14:11:44.109] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/332/332 204 165 - - ---- 60/5/4/4/0 0/0 "DELETE /v3/groups/f80fd3e43f7d4a63bef8ef98aaef7086 HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45530 [13/Oct/2025:14:11:44.151] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/352/352 204 165 - - ---- 61/6/4/4/0 0/0 "PUT /v3/projects/83a3230fcc4b4fffb8daa3ac839cad44/users/159085cea54044308c908b757d445f19/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain ceph-mon[29756]: pgmap v1103: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45538 [13/Oct/2025:14:11:44.170] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/622/622 201 503 - - ---- 60/5/4/4/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45540 [13/Oct/2025:14:11:44.213] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/596/596 204 165 - - ---- 60/5/4/4/0 0/0 "PUT /v3/projects/9337f12d85fb43dfb41b789d366e25b0/users/d0415f0af4dd4a22aaf2124bacf1cae8/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain haproxy[70940]: 192.168.122.99:45544 [13/Oct/2025:14:11:44.237] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/617/617 200 2700 - - ---- 60/4/3/3/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:11:45 standalone.localdomain haproxy[70940]: 172.21.0.2:39604 [13/Oct/2025:14:11:44.286] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/903/903 201 8134 - - ---- 60/5/3/3/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:45 standalone.localdomain haproxy[70940]: 172.21.0.2:39618 [13/Oct/2025:14:11:44.330] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1164/1165 201 8114 - - ---- 60/4/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1104: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:45 standalone.localdomain haproxy[70940]: 172.21.0.2:39626 [13/Oct/2025:14:11:44.381] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1427/1427 201 8130 - - ---- 60/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:45 standalone.localdomain ceph-mon[29756]: pgmap v1104: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:45 standalone.localdomain haproxy[70940]: 192.168.122.99:45554 [13/Oct/2025:14:11:44.444] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1434/1434 204 165 - - ---- 60/7/6/6/0 0/0 "DELETE /v3/groups/392906db2cf84de793b86580c12b7aa1 HTTP/1.1"
Oct 13 14:11:45 standalone.localdomain haproxy[70940]: 192.168.122.99:45564 [13/Oct/2025:14:11:44.507] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1416/1416 200 2700 - - ---- 61/8/7/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:45 standalone.localdomain haproxy[70940]: 192.168.122.99:45576 [13/Oct/2025:14:11:44.795] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1144/1144 200 2700 - - ---- 60/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:46 standalone.localdomain haproxy[70940]: 172.21.0.2:39630 [13/Oct/2025:14:11:44.818] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1425/1425 201 8176 - - ---- 60/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:46 standalone.localdomain haproxy[70940]: 192.168.122.99:45582 [13/Oct/2025:14:11:44.856] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1403/1403 204 165 - - ---- 60/8/7/7/0 0/0 "PUT /v3/projects/b4183140b95c4e72831517ba29d8157a/users/b92813fc734948bcbe44071573ead780/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:46 standalone.localdomain haproxy[70940]: 192.168.122.99:45584 [13/Oct/2025:14:11:45.192] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1117/1117 201 596 - - ---- 60/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:46 standalone.localdomain haproxy[70940]: 192.168.122.99:45596 [13/Oct/2025:14:11:45.497] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/870/870 201 576 - - ---- 60/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:46 standalone.localdomain haproxy[70940]: 192.168.122.99:45612 [13/Oct/2025:14:11:45.813] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/610/610 201 590 - - ---- 60/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:46 standalone.localdomain haproxy[70940]: 192.168.122.99:45616 [13/Oct/2025:14:11:45.883] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/886/886 201 530 - - ---- 60/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:46 standalone.localdomain haproxy[70940]: 192.168.122.99:45630 [13/Oct/2025:14:11:45.928] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/905/905 204 165 - - ---- 60/7/6/6/0 0/0 "PUT /v3/projects/83a3230fcc4b4fffb8daa3ac839cad44/users/159085cea54044308c908b757d445f19/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:46 standalone.localdomain haproxy[70940]: 192.168.122.99:45646 [13/Oct/2025:14:11:45.943] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/953/953 204 165 - - ---- 60/6/5/5/0 0/0 "PUT /v3/projects/5f53e3d6e8674d1fae5d3c14fc857e80/users/3c9ccd70b129457c88e28da7ed342a30/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:46 standalone.localdomain haproxy[70940]: 192.168.122.99:45662 [13/Oct/2025:14:11:46.246] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/699/699 201 497 - - ---- 60/6/5/5/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:11:47 standalone.localdomain haproxy[70940]: 172.21.0.2:39634 [13/Oct/2025:14:11:46.268] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1001/1001 201 8264 - - ---- 60/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:47 standalone.localdomain runuser[157792]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:11:47 standalone.localdomain haproxy[70940]: 192.168.122.99:45672 [13/Oct/2025:14:11:46.312] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1248/1248 201 508 - - ---- 60/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1105: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:47 standalone.localdomain haproxy[70940]: 192.168.122.99:45686 [13/Oct/2025:14:11:46.370] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1447/1447 201 498 - - ---- 60/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:48 standalone.localdomain runuser[157792]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:11:48 standalone.localdomain haproxy[70940]: 192.168.122.99:45702 [13/Oct/2025:14:11:46.425] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1648/1648 201 505 - - ---- 60/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:48 standalone.localdomain haproxy[70940]: 192.168.122.99:45716 [13/Oct/2025:14:11:46.771] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1360/1360 201 532 - - ---- 60/7/6/6/0 0/0 "POST /v3/groups HTTP/1.1"
Oct 13 14:11:48 standalone.localdomain runuser[157996]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:11:48 standalone.localdomain haproxy[70940]: 172.21.0.2:39646 [13/Oct/2025:14:11:46.840] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1597/1598 201 8246 - - ---- 60/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:48 standalone.localdomain haproxy[70940]: 192.168.122.99:45718 [13/Oct/2025:14:11:46.898] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1554/1554 200 2700 - - ---- 60/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:48 standalone.localdomain haproxy[70940]: 192.168.122.99:45732 [13/Oct/2025:14:11:46.949] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1546/1546 201 566 - - ---- 60/8/7/7/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:11:48 standalone.localdomain haproxy[70940]: 192.168.122.99:45742 [13/Oct/2025:14:11:47.273] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1256/1256 201 592 - - ---- 60/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:48 standalone.localdomain haproxy[70940]: 192.168.122.99:45748 [13/Oct/2025:14:11:47.563] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1008/1008 200 2700 - - ---- 60/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:48 standalone.localdomain haproxy[70940]: 192.168.122.99:45764 [13/Oct/2025:14:11:47.820] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/766/766 200 2700 - - ---- 60/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:48 standalone.localdomain haproxy[70940]: 192.168.122.99:45772 [13/Oct/2025:14:11:48.076] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/531/531 200 2700 - - ---- 60/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:48 standalone.localdomain haproxy[70940]: 192.168.122.99:45778 [13/Oct/2025:14:11:48.133] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/554/554 204 165 - - ---- 60/8/7/7/0 0/0 "PUT /v3/groups/196a10c1713142fe87ab296408acb03e/users/138b62f76a0f48028e78d5c25df73ef3 HTTP/1.1"
Oct 13 14:11:48 standalone.localdomain ceph-mon[29756]: pgmap v1105: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:48 standalone.localdomain haproxy[70940]: 192.168.122.99:45780 [13/Oct/2025:14:11:48.443] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/301/301 201 574 - - ---- 60/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:48 standalone.localdomain haproxy[70940]: 192.168.122.99:45790 [13/Oct/2025:14:11:48.455] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/334/334 204 165 - - ---- 60/8/7/7/0 0/0 "PUT /v3/projects/5f53e3d6e8674d1fae5d3c14fc857e80/users/3c9ccd70b129457c88e28da7ed342a30/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:48 standalone.localdomain runuser[157996]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:11:48 standalone.localdomain runuser[158099]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:11:48 standalone.localdomain haproxy[70940]: 192.168.122.99:45800 [13/Oct/2025:14:11:48.495] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/341/341 200 380 - - ---- 59/7/6/6/0 0/0 "GET /v3/regions/tempest-region-549838454 HTTP/1.1"
Oct 13 14:11:49 standalone.localdomain haproxy[70940]: 192.168.122.99:45804 [13/Oct/2025:14:11:48.532] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/603/603 201 506 - - ---- 59/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:49 standalone.localdomain haproxy[70940]: 192.168.122.99:45806 [13/Oct/2025:14:11:48.574] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/589/589 204 165 - - ---- 59/7/6/6/0 0/0 "PUT /v3/projects/ffa315e21ceb48feb0b76550c3510add/users/1ef4cd5ff6bf4e7a8a8c47ff5bae60fb/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:49 standalone.localdomain haproxy[70940]: 192.168.122.99:45816 [13/Oct/2025:14:11:48.587] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/635/635 204 165 - - ---- 60/8/7/7/0 0/0 "PUT /v3/projects/6ad7a2bf79104bf1ac18df44fba91e94/users/e46141d47388480f964b43f3b80ff0f3/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:49 standalone.localdomain haproxy[70940]: 192.168.122.99:45832 [13/Oct/2025:14:11:48.609] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/667/667 204 165 - - ---- 59/7/6/6/0 0/0 "PUT /v3/projects/19f21c44e3fd4f7c9dc20d45e995e6d3/users/212c1349536041ebbf768e38304aaa6f/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:49 standalone.localdomain haproxy[70940]: 192.168.122.99:45836 [13/Oct/2025:14:11:48.690] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/632/632 201 533 - - ---- 59/7/6/6/0 0/0 "POST /v3/groups HTTP/1.1"
Oct 13 14:11:49 standalone.localdomain runuser[158099]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:11:49 standalone.localdomain haproxy[70940]: 192.168.122.99:45844 [13/Oct/2025:14:11:48.746] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/857/857 201 497 - - ---- 60/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1106: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:11:49 standalone.localdomain haproxy[70940]: 172.21.0.2:39662 [13/Oct/2025:14:11:48.800] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1131/1131 201 8437 - - ---- 59/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:49 standalone.localdomain haproxy[70940]: 192.168.122.99:45860 [13/Oct/2025:14:11:48.839] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1132/1132 201 496 - - ---- 59/8/7/7/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:11:49 standalone.localdomain haproxy[70940]: 192.168.122.99:45872 [13/Oct/2025:14:11:49.135] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/856/856 200 2700 - - ---- 59/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:45874 [13/Oct/2025:14:11:49.166] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/843/843 200 2700 - - ---- 60/9/8/8/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:45880 [13/Oct/2025:14:11:49.225] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/799/799 200 2700 - - ---- 59/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:45886 [13/Oct/2025:14:11:49.279] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/759/759 200 2700 - - ---- 59/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:45898 [13/Oct/2025:14:11:49.323] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/755/755 204 165 - - ---- 59/8/7/7/0 0/0 "PUT /v3/groups/b92e2de3cd71427b873ecdb2510bf58a/users/138b62f76a0f48028e78d5c25df73ef3 HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:55560 [13/Oct/2025:14:11:49.606] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/519/519 200 2700 - - ---- 59/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:55576 [13/Oct/2025:14:11:49.935] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/222/222 201 582 - - ---- 59/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:55580 [13/Oct/2025:14:11:49.974] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/251/251 201 570 - - ---- 59/8/7/7/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:55586 [13/Oct/2025:14:11:49.996] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/291/291 204 165 - - ---- 59/8/7/7/0 0/0 "PUT /v3/projects/0617fcc814d7403884ae063321c7b217/users/060ce40f5943498a97ba48f3a36a08fc/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:55588 [13/Oct/2025:14:11:50.011] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/315/315 204 165 - - ---- 59/8/7/7/0 0/0 "PUT /v3/projects/ffa315e21ceb48feb0b76550c3510add/users/1ef4cd5ff6bf4e7a8a8c47ff5bae60fb/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:55602 [13/Oct/2025:14:11:50.025] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/341/341 204 165 - - ---- 59/8/7/7/0 0/0 "PUT /v3/projects/6ad7a2bf79104bf1ac18df44fba91e94/users/e46141d47388480f964b43f3b80ff0f3/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:55606 [13/Oct/2025:14:11:50.042] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/365/365 204 165 - - ---- 59/8/7/7/0 0/0 "PUT /v3/projects/19f21c44e3fd4f7c9dc20d45e995e6d3/users/212c1349536041ebbf768e38304aaa6f/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:55612 [13/Oct/2025:14:11:50.080] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/377/377 200 1039 - - ---- 59/8/7/7/0 0/0 "GET /v3/users/138b62f76a0f48028e78d5c25df73ef3/groups HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:55622 [13/Oct/2025:14:11:50.128] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/402/402 204 165 - - ---- 59/8/7/7/0 0/0 "PUT /v3/projects/87a1a19f89ac46ca90a271c04ae6c45f/users/98bc3d66902a4dfe97d2f6a840be3dbc/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain sudo[158247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:11:50 standalone.localdomain sudo[158247]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:11:50 standalone.localdomain sudo[158247]: pam_unix(sudo:session): session closed for user root
Oct 13 14:11:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:11:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:11:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:11:50 standalone.localdomain sudo[158264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:11:50 standalone.localdomain sudo[158264]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:11:50 standalone.localdomain systemd[1]: tmp-crun.qFZzfe.mount: Deactivated successfully.
Oct 13 14:11:50 standalone.localdomain podman[158265]: 2025-10-13 14:11:50.664199407 +0000 UTC m=+0.077275042 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, version=17.1.9, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, io.buildah.version=1.33.12, container_name=cinder_scheduler, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, build-date=2025-07-21T16:10:12, com.redhat.component=openstack-cinder-scheduler-container, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-scheduler, release=1, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:11:50 standalone.localdomain systemd[1]: tmp-crun.KmmS23.mount: Deactivated successfully.
Oct 13 14:11:50 standalone.localdomain podman[158262]: 2025-10-13 14:11:50.719183154 +0000 UTC m=+0.135452737 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-iscsid-container, release=1, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid)
Oct 13 14:11:50 standalone.localdomain podman[158265]: 2025-10-13 14:11:50.721846766 +0000 UTC m=+0.134922421 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-cinder-scheduler-container, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:10:12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=cinder_scheduler, name=rhosp17/openstack-cinder-scheduler, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1)
Oct 13 14:11:50 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:11:50 standalone.localdomain ceph-mon[29756]: pgmap v1106: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:50 standalone.localdomain podman[158263]: 2025-10-13 14:11:50.783550899 +0000 UTC m=+0.197933734 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, vcs-type=git, tcib_managed=true, build-date=2025-07-21T15:58:55, container_name=cinder_api_cron, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, distribution-scope=public, release=1, version=17.1.9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, com.redhat.component=openstack-cinder-api-container, config_id=tripleo_step4)
Oct 13 14:11:50 standalone.localdomain podman[158263]: 2025-10-13 14:11:50.791189253 +0000 UTC m=+0.205572058 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, container_name=cinder_api_cron, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, build-date=2025-07-21T15:58:55, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b)
Oct 13 14:11:50 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:11:50 standalone.localdomain podman[158262]: 2025-10-13 14:11:50.810170605 +0000 UTC m=+0.226440208 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:55628 [13/Oct/2025:14:11:50.160] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/665/665 201 501 - - ---- 59/8/7/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:55634 [13/Oct/2025:14:11:50.227] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/641/641 200 382 - - ---- 59/8/7/7/0 0/0 "GET /v3/regions/tempest-region-1219306133 HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:55642 [13/Oct/2025:14:11:50.288] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/613/613 200 2700 - - ---- 59/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:55658 [13/Oct/2025:14:11:50.331] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/595/595 200 2700 - - ---- 59/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:55662 [13/Oct/2025:14:11:50.369] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/577/577 200 2700 - - ---- 59/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:50 standalone.localdomain haproxy[70940]: 192.168.122.99:55672 [13/Oct/2025:14:11:50.410] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/551/551 200 2700 - - ---- 59/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:51 standalone.localdomain haproxy[70940]: 192.168.122.99:55682 [13/Oct/2025:14:11:50.461] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/569/569 204 165 - - ---- 59/8/7/7/0 0/0 "DELETE /v3/groups/b92e2de3cd71427b873ecdb2510bf58a HTTP/1.1"
Oct 13 14:11:51 standalone.localdomain haproxy[70940]: 192.168.122.99:55692 [13/Oct/2025:14:11:50.534] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/539/539 200 2700 - - ---- 59/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:51 standalone.localdomain haproxy[70940]: 192.168.122.99:55698 [13/Oct/2025:14:11:50.828] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/273/273 200 2700 - - ---- 59/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:51 standalone.localdomain haproxy[70940]: 192.168.122.99:55710 [13/Oct/2025:14:11:50.871] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/305/305 201 568 - - ---- 59/8/7/7/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:11:51 standalone.localdomain sudo[158264]: pam_unix(sudo:session): session closed for user root
Oct 13 14:11:51 standalone.localdomain haproxy[70940]: 192.168.122.99:55716 [13/Oct/2025:14:11:50.906] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/296/296 204 165 - - ---- 59/8/7/7/0 0/0 "PUT /v3/projects/0617fcc814d7403884ae063321c7b217/users/060ce40f5943498a97ba48f3a36a08fc/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:11:51 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:11:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:11:51 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:11:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:11:51 standalone.localdomain haproxy[70940]: 192.168.122.99:55730 [13/Oct/2025:14:11:50.930] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/324/324 204 165 - - ---- 59/8/7/7/0 0/0 "PUT /v3/projects/ffa315e21ceb48feb0b76550c3510add/users/1ef4cd5ff6bf4e7a8a8c47ff5bae60fb/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:51 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:11:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:11:51 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:11:51 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev cf44f4c9-80f0-463b-a2cd-1a82ae8b6681 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:11:51 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev cf44f4c9-80f0-463b-a2cd-1a82ae8b6681 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:11:51 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event cf44f4c9-80f0-463b-a2cd-1a82ae8b6681 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:11:51 standalone.localdomain haproxy[70940]: 192.168.122.99:55740 [13/Oct/2025:14:11:50.948] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/356/356 204 165 - - ---- 59/7/6/6/0 0/0 "PUT /v3/projects/6ad7a2bf79104bf1ac18df44fba91e94/users/e46141d47388480f964b43f3b80ff0f3/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:51 standalone.localdomain sudo[158376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:11:51 standalone.localdomain sudo[158376]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:11:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:11:51 standalone.localdomain sudo[158376]: pam_unix(sudo:session): session closed for user root
Oct 13 14:11:51 standalone.localdomain haproxy[70940]: 192.168.122.99:55742 [13/Oct/2025:14:11:50.964] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/392/392 204 165 - - ---- 59/6/5/5/0 0/0 "PUT /v3/projects/19f21c44e3fd4f7c9dc20d45e995e6d3/users/212c1349536041ebbf768e38304aaa6f/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:51 standalone.localdomain haproxy[70940]: 192.168.122.99:55754 [13/Oct/2025:14:11:51.031] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/385/385 204 165 - - ---- 59/5/4/4/0 0/0 "DELETE /v3/groups/196a10c1713142fe87ab296408acb03e HTTP/1.1"
Oct 13 14:11:51 standalone.localdomain podman[158391]: 2025-10-13 14:11:51.437420059 +0000 UTC m=+0.090832377 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, batch=17.1_20250721.1, container_name=keystone, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9, vcs-type=git, config_id=tripleo_step3, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, release=1, distribution-scope=public, name=rhosp17/openstack-keystone, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64, io.openshift.expose-services=)
Oct 13 14:11:51 standalone.localdomain haproxy[70940]: 192.168.122.99:55768 [13/Oct/2025:14:11:51.076] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/382/382 204 165 - - ---- 59/5/4/4/0 0/0 "PUT /v3/projects/87a1a19f89ac46ca90a271c04ae6c45f/users/98bc3d66902a4dfe97d2f6a840be3dbc/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:51 standalone.localdomain haproxy[70940]: 192.168.122.99:55772 [13/Oct/2025:14:11:51.103] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/406/406 204 165 - - ---- 59/5/4/4/0 0/0 "PUT /v3/projects/de95c089676d4bbfbb51e3f4e94d9f9b/users/d86880aecab84377ad8f89c8f74127d2/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:51 standalone.localdomain haproxy[70940]: 192.168.122.99:55786 [13/Oct/2025:14:11:51.178] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/372/372 200 382 - - ---- 59/5/4/4/0 0/0 "GET /v3/regions/tempest-region-1887968760 HTTP/1.1"
Oct 13 14:11:51 standalone.localdomain haproxy[70940]: 192.168.122.99:55798 [13/Oct/2025:14:11:51.204] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/415/415 200 2700 - - ---- 59/5/4/4/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:51 standalone.localdomain systemd[1]: tmp-crun.8ltMJJ.mount: Deactivated successfully.
Oct 13 14:11:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1107: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:11:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:11:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:11:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:11:51 standalone.localdomain haproxy[70940]: 172.21.0.2:55380 [13/Oct/2025:14:11:51.259] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/684/684 201 9012 - - ---- 59/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:52 standalone.localdomain haproxy[70940]: 172.21.0.2:55386 [13/Oct/2025:14:11:51.311] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1027/1027 201 8992 - - ---- 59/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:52 standalone.localdomain haproxy[70940]: 172.21.0.2:55402 [13/Oct/2025:14:11:51.361] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1382/1382 201 9006 - - ---- 59/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:52 standalone.localdomain ceph-mon[29756]: pgmap v1107: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:52 standalone.localdomain haproxy[70940]: 192.168.122.99:55808 [13/Oct/2025:14:11:51.420] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1415/1415 204 165 - - ---- 59/8/7/7/0 0/0 "DELETE /v3/users/138b62f76a0f48028e78d5c25df73ef3 HTTP/1.1"
Oct 13 14:11:52 standalone.localdomain podman[158391]: 2025-10-13 14:11:52.844953802 +0000 UTC m=+1.498366180 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, summary=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, container_name=keystone, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, io.buildah.version=1.33.12, name=rhosp17/openstack-keystone, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Oct 13 14:11:52 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:11:52 standalone.localdomain haproxy[70940]: 192.168.122.99:55814 [13/Oct/2025:14:11:51.461] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1423/1423 200 2700 - - ---- 59/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:52 standalone.localdomain haproxy[70940]: 192.168.122.99:55818 [13/Oct/2025:14:11:51.512] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1385/1385 200 2700 - - ---- 59/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:52 standalone.localdomain haproxy[70940]: 192.168.122.99:55834 [13/Oct/2025:14:11:51.553] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1385/1385 200 12868 - - ---- 59/8/7/7/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:11:52 standalone.localdomain haproxy[70940]: 192.168.122.99:55836 [13/Oct/2025:14:11:51.623] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1337/1337 204 165 - - ---- 59/8/7/7/0 0/0 "PUT /v3/projects/0617fcc814d7403884ae063321c7b217/users/060ce40f5943498a97ba48f3a36a08fc/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:53 standalone.localdomain haproxy[70940]: 192.168.122.99:55844 [13/Oct/2025:14:11:51.947] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1063/1063 201 484 - - ---- 59/7/6/6/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:11:53 standalone.localdomain haproxy[70940]: 192.168.122.99:55854 [13/Oct/2025:14:11:52.342] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/724/724 201 566 - - ---- 59/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:53 standalone.localdomain haproxy[70940]: 192.168.122.99:55866 [13/Oct/2025:14:11:52.747] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/384/384 201 566 - - ---- 59/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:11:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:11:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:11:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:11:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:11:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:11:53 standalone.localdomain haproxy[70940]: 192.168.122.99:55880 [13/Oct/2025:14:11:52.843] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/365/365 200 480 - - ---- 59/7/6/6/0 0/0 "PATCH /v3/domains/697678ce715243a2a448efd9f6d8a61a HTTP/1.1"
Oct 13 14:11:53 standalone.localdomain haproxy[70940]: 192.168.122.99:55890 [13/Oct/2025:14:11:52.887] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/366/366 204 165 - - ---- 59/7/6/6/0 0/0 "PUT /v3/projects/87a1a19f89ac46ca90a271c04ae6c45f/users/98bc3d66902a4dfe97d2f6a840be3dbc/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:53 standalone.localdomain haproxy[70940]: 192.168.122.99:55902 [13/Oct/2025:14:11:52.900] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/402/402 204 165 - - ---- 59/6/5/5/0 0/0 "PUT /v3/projects/de95c089676d4bbfbb51e3f4e94d9f9b/users/d86880aecab84377ad8f89c8f74127d2/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:11:53 standalone.localdomain haproxy[70940]: 192.168.122.99:55904 [13/Oct/2025:14:11:52.942] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/407/407 200 563 - - ---- 59/6/5/5/0 0/0 "GET /v3/endpoints/bbb714d590ca47d6ad5f5a50fac03233 HTTP/1.1"
Oct 13 14:11:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1108: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:53 standalone.localdomain haproxy[70940]: 172.21.0.2:55412 [13/Oct/2025:14:11:52.970] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/731/731 201 9008 - - ---- 59/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:53 standalone.localdomain haproxy[70940]: 192.168.122.99:55918 [13/Oct/2025:14:11:53.012] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/769/769 201 345 - - ---- 59/7/6/6/0 0/0 "PUT /v3/domains/6431372d1658415b971f4d9c966eebff/config HTTP/1.1"
Oct 13 14:11:53 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 49 completed events
Oct 13 14:11:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:11:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:11:54 standalone.localdomain haproxy[70940]: 192.168.122.99:55920 [13/Oct/2025:14:11:53.068] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1007/1007 201 580 - - ---- 59/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:54 standalone.localdomain haproxy[70940]: 192.168.122.99:55928 [13/Oct/2025:14:11:53.134] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/991/991 403 345 - - ---- 59/7/6/6/0 0/0 "DELETE /v3/projects/ec979bb8ab3d4ad0a2d0268274808864 HTTP/1.1"
Oct 13 14:11:54 standalone.localdomain haproxy[70940]: 192.168.122.99:55938 [13/Oct/2025:14:11:53.209] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1037/1037 204 165 - - ---- 59/7/6/6/0 0/0 "DELETE /v3/domains/697678ce715243a2a448efd9f6d8a61a HTTP/1.1"
Oct 13 14:11:54 standalone.localdomain haproxy[70940]: 172.21.0.2:55414 [13/Oct/2025:14:11:53.258] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1313/1313 201 8990 - - ---- 60/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:54 standalone.localdomain haproxy[70940]: 192.168.122.99:55942 [13/Oct/2025:14:11:53.303] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1321/1321 200 2700 - - ---- 59/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:11:54 standalone.localdomain haproxy[70940]: 192.168.122.99:55944 [13/Oct/2025:14:11:53.351] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1318/1318 204 165 - - ---- 59/8/7/7/0 0/0 "DELETE /v3/endpoints/bbb714d590ca47d6ad5f5a50fac03233 HTTP/1.1"
Oct 13 14:11:54 standalone.localdomain haproxy[70940]: 192.168.122.99:55952 [13/Oct/2025:14:11:53.702] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1013/1013 201 496 - - ---- 59/8/7/7/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:11:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:11:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:11:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:11:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:11:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:11:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:11:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:11:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:11:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:11:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:11:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:11:54 standalone.localdomain haproxy[70940]: 192.168.122.99:55966 [13/Oct/2025:14:11:53.785] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/983/983 200 246 - - ---- 59/8/7/7/0 0/0 "GET /v3/domains/6431372d1658415b971f4d9c966eebff/config/identity HTTP/1.1"
Oct 13 14:11:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:11:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:11:54 standalone.localdomain haproxy[70940]: 192.168.122.99:55972 [13/Oct/2025:14:11:54.078] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/739/739 200 9690 - - ---- 59/8/7/7/0 0/0 "GET /v3/users?domain_id=default HTTP/1.1"
Oct 13 14:11:54 standalone.localdomain ceph-mon[29756]: pgmap v1108: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:11:54 standalone.localdomain systemd[1]: tmp-crun.mK5mgn.mount: Deactivated successfully.
Oct 13 14:11:54 standalone.localdomain podman[158505]: 2025-10-13 14:11:54.874652774 +0000 UTC m=+0.096767910 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, com.redhat.component=openstack-nova-conductor-container, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-conductor, vendor=Red Hat, Inc., distribution-scope=public, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-conductor, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, 
io.buildah.version=1.33.12, container_name=nova_conductor, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, build-date=2025-07-21T15:44:17, name=rhosp17/openstack-nova-conductor, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:11:54 standalone.localdomain podman[158485]: 2025-10-13 14:11:54.87516901 +0000 UTC m=+0.099255276 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 horizon, tcib_managed=true, vcs-type=git, container_name=horizon, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=, name=rhosp17/openstack-horizon, build-date=2025-07-21T13:58:15, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', 
'/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, version=17.1.9, com.redhat.component=openstack-horizon-container, description=Red Hat OpenStack Platform 17.1 horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12)
Oct 13 14:11:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:11:54 standalone.localdomain podman[158503]: 2025-10-13 14:11:54.911582117 +0000 UTC m=+0.139199812 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, name=rhosp17/openstack-heat-api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, container_name=heat_api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:56:26, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, maintainer=OpenStack TripleO Team)
Oct 13 14:11:54 standalone.localdomain podman[158503]: 2025-10-13 14:11:54.961497049 +0000 UTC m=+0.189114734 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, name=rhosp17/openstack-heat-api, architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T15:56:26, distribution-scope=public, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-api, 
config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible)
Oct 13 14:11:54 standalone.localdomain podman[158453]: 2025-10-13 14:11:54.968866485 +0000 UTC m=+0.222292012 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, build-date=2025-07-21T15:44:11, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, container_name=heat_engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.expose-services=, tcib_managed=true, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, com.redhat.component=openstack-heat-engine-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:11:54 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:11:54 standalone.localdomain haproxy[70940]: 192.168.122.99:55980 [13/Oct/2025:14:11:54.128] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/867/867 204 165 - - ---- 59/8/7/7/0 0/0 "DELETE /v3/projects/ec979bb8ab3d4ad0a2d0268274808864 HTTP/1.1"
Oct 13 14:11:55 standalone.localdomain podman[158453]: 2025-10-13 14:11:55.014105353 +0000 UTC m=+0.267530880 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, release=1, com.redhat.component=openstack-heat-engine-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, container_name=heat_engine, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, 
io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:11:55 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:11:55 standalone.localdomain podman[158473]: 2025-10-13 14:11:55.044619589 +0000 UTC m=+0.279003961 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, build-date=2025-07-21T15:58:55, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, config_id=tripleo_step4, container_name=cinder_api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-cinder-api-container, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:11:55 standalone.localdomain podman[158463]: 2025-10-13 14:11:54.945678533 +0000 UTC m=+0.189519915 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, name=rhosp17/openstack-memcached, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, com.redhat.component=openstack-memcached-container, managed_by=tripleo_ansible, config_id=tripleo_step1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, container_name=memcached, vcs-type=git, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, version=17.1.9)
Oct 13 14:11:55 standalone.localdomain podman[158485]: 2025-10-13 14:11:55.056093541 +0000 UTC m=+0.280179797 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, vcs-type=git, build-date=2025-07-21T13:58:15, container_name=horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, 
com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, com.redhat.component=openstack-horizon-container, description=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, version=17.1.9, io.openshift.expose-services=)
Oct 13 14:11:55 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:11:55 standalone.localdomain podman[158504]: 2025-10-13 14:11:55.017512397 +0000 UTC m=+0.245696629 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, build-date=2025-07-21T15:44:03, container_name=neutron_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-neutron-server-container, name=rhosp17/openstack-neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:11:55 standalone.localdomain podman[158478]: 2025-10-13 14:11:54.896784033 +0000 UTC m=+0.136375235 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 manila-api, tcib_managed=true, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-manila-api, vcs-type=git, batch=17.1_20250721.1, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, config_id=tripleo_step4, distribution-scope=public, container_name=manila_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-manila-api-container)
Oct 13 14:11:55 standalone.localdomain haproxy[70940]: 192.168.122.99:55994 [13/Oct/2025:14:11:54.249] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/866/866 204 165 - - ---- 59/8/7/7/0 0/0 "DELETE /v3/users/17d8f35a3335444f8e6303657404a3d0 HTTP/1.1"
Oct 13 14:11:55 standalone.localdomain podman[158473]: 2025-10-13 14:11:55.125743838 +0000 UTC m=+0.360128200 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, com.redhat.component=openstack-cinder-api-container, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T15:58:55, config_id=tripleo_step4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, 
io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=cinder_api, release=1, description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public)
Oct 13 14:11:55 standalone.localdomain podman[158505]: 2025-10-13 14:11:55.125882562 +0000 UTC m=+0.347997708 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, container_name=nova_conductor, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-conductor, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, build-date=2025-07-21T15:44:17, name=rhosp17/openstack-nova-conductor, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, tcib_managed=true, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-nova-conductor-container)
Oct 13 14:11:55 standalone.localdomain podman[158504]: 2025-10-13 14:11:55.153580962 +0000 UTC m=+0.381765214 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, com.redhat.component=openstack-neutron-server-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T15:44:03, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, name=rhosp17/openstack-neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, container_name=neutron_api, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, release=1)
Oct 13 14:11:55 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:11:55 standalone.localdomain haproxy[70940]: 192.168.122.99:55998 [13/Oct/2025:14:11:54.576] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/601/601 201 442 - - ---- 59/8/7/7/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:11:55 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:11:55 standalone.localdomain podman[158508]: 2025-10-13 14:11:54.996811023 +0000 UTC m=+0.217141164 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=logrotate_crond, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1)
Oct 13 14:11:55 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:11:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56000 [13/Oct/2025:14:11:54.628] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/640/640 204 165 - - ---- 59/8/7/7/0 0/0 "PUT /v3/projects/de95c089676d4bbfbb51e3f4e94d9f9b/users/d86880aecab84377ad8f89c8f74127d2/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:11:55 standalone.localdomain podman[158459]: 2025-10-13 14:11:55.267331522 +0000 UTC m=+0.523968987 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, container_name=nova_scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-scheduler, distribution-scope=public, tcib_managed=true, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, vendor=Red Hat, Inc., build-date=2025-07-21T16:02:54, batch=17.1_20250721.1, com.redhat.component=openstack-nova-scheduler-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, name=rhosp17/openstack-nova-scheduler, release=1, managed_by=tripleo_ansible)
Oct 13 14:11:55 standalone.localdomain podman[158478]: 2025-10-13 14:11:55.279213496 +0000 UTC m=+0.518804728 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=manila_api_cron, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-api, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, com.redhat.component=openstack-manila-api-container, 
architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-manila-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, build-date=2025-07-21T16:06:43, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api)
Oct 13 14:11:55 standalone.localdomain podman[158449]: 2025-10-13 14:11:55.288140861 +0000 UTC m=+0.549196742 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, io.buildah.version=1.33.12, container_name=manila_scheduler, release=1, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, com.redhat.component=openstack-manila-scheduler-container, build-date=2025-07-21T15:56:28, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, name=rhosp17/openstack-manila-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, maintainer=OpenStack TripleO Team)
Oct 13 14:11:55 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:11:55 standalone.localdomain podman[158459]: 2025-10-13 14:11:55.29236527 +0000 UTC m=+0.549002755 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, build-date=2025-07-21T16:02:54, version=17.1.9, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, name=rhosp17/openstack-nova-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-scheduler-container, description=Red Hat OpenStack Platform 17.1 nova-scheduler, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, container_name=nova_scheduler, managed_by=tripleo_ansible)
Oct 13 14:11:55 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:11:55 standalone.localdomain podman[158484]: 2025-10-13 14:11:55.206205196 +0000 UTC m=+0.448108509 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, tcib_managed=true, name=rhosp17/openstack-heat-api, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:56:26, distribution-scope=public, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 
heat-api, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:11:55 standalone.localdomain podman[158449]: 2025-10-13 14:11:55.317776129 +0000 UTC m=+0.578832020 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-manila-scheduler-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, config_id=tripleo_step4, container_name=manila_scheduler, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-manila-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, 
architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T15:56:28, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true)
Oct 13 14:11:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56004 [13/Oct/2025:14:11:54.673] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/656/656 200 12520 - - ---- 59/7/6/6/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:11:55 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:11:55 standalone.localdomain podman[158484]: 2025-10-13 14:11:55.338062431 +0000 UTC m=+0.579965734 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, config_id=tripleo_step4, release=1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, container_name=heat_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-api, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack 
osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.buildah.version=1.33.12)
Oct 13 14:11:55 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:11:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56010 [13/Oct/2025:14:11:54.719] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/657/657 400 494 - - ---- 59/7/6/6/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:11:55 standalone.localdomain podman[158463]: 2025-10-13 14:11:55.381312429 +0000 UTC m=+0.625153811 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.component=openstack-memcached-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, container_name=memcached, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_id=tripleo_step1, release=1, vcs-type=git, architecture=x86_64, build-date=2025-07-21T12:58:43, summary=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:11:55 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:11:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56022 [13/Oct/2025:14:11:54.769] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/647/647 200 232 - - ---- 59/7/6/6/0 0/0 "GET /v3/domains/6431372d1658415b971f4d9c966eebff/config/identity/driver HTTP/1.1"
Oct 13 14:11:55 standalone.localdomain podman[158447]: 2025-10-13 14:11:55.419251892 +0000 UTC m=+0.686077480 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-cfn-container, container_name=heat_api_cfn, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, batch=17.1_20250721.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-heat-api-cfn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, build-date=2025-07-21T14:49:55, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible)
Oct 13 14:11:55 standalone.localdomain podman[158508]: 2025-10-13 14:11:55.434115419 +0000 UTC m=+0.654445610 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, distribution-scope=public, tcib_managed=true)
Oct 13 14:11:55 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:11:55 standalone.localdomain podman[158447]: 2025-10-13 14:11:55.474600021 +0000 UTC m=+0.741425659 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, name=rhosp17/openstack-heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, architecture=x86_64, container_name=heat_api_cfn, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-heat-api-cfn-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T14:49:55, summary=Red Hat OpenStack Platform 17.1 
heat-api-cfn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:11:55 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:11:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1109: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56038 [13/Oct/2025:14:11:54.824] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/903/903 200 690 - - ---- 59/7/6/6/0 0/0 "PATCH /v3/users/5eb934ddc2cb456eaeef21fa72817dcd HTTP/1.1"
Oct 13 14:11:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56048 [13/Oct/2025:14:11:54.999] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/864/864 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/users/2f944f7f97924bc09890334e03fd090a HTTP/1.1"
Oct 13 14:11:55 standalone.localdomain ceph-mon[29756]: pgmap v1109: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56060 [13/Oct/2025:14:11:55.118] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/827/827 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/users/f1221768e62f4eefa3c4554167480f93 HTTP/1.1"
Oct 13 14:11:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56062 [13/Oct/2025:14:11:55.178] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/816/816 201 442 - - ---- 59/5/4/4/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:11:56 standalone.localdomain haproxy[70940]: 172.21.0.2:55430 [13/Oct/2025:14:11:55.272] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1037/1037 201 8946 - - ---- 60/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:56 standalone.localdomain haproxy[70940]: 192.168.122.99:56072 [13/Oct/2025:14:11:55.330] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1025/1025 404 323 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/endpoints/bbb714d590ca47d6ad5f5a50fac03233 HTTP/1.1"
Oct 13 14:11:56 standalone.localdomain haproxy[70940]: 192.168.122.99:56080 [13/Oct/2025:14:11:55.379] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1019/1019 400 492 - - ---- 59/6/5/5/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:11:56 standalone.localdomain haproxy[70940]: 192.168.122.99:56082 [13/Oct/2025:14:11:55.417] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1024/1024 200 308 - - ---- 59/6/5/5/0 0/0 "GET /v3/domains/6431372d1658415b971f4d9c966eebff/config/ldap HTTP/1.1"
Oct 13 14:11:56 standalone.localdomain haproxy[70940]: 172.21.0.2:55432 [13/Oct/2025:14:11:55.730] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/727/727 401 377 - - ---- 59/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:56 standalone.localdomain haproxy[70940]: 192.168.122.99:56094 [13/Oct/2025:14:11:55.867] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/673/673 204 165 - - ---- 59/7/6/6/0 0/0 "DELETE /v3/users/212c1349536041ebbf768e38304aaa6f HTTP/1.1"
Oct 13 14:11:56 standalone.localdomain haproxy[70940]: 172.21.0.2:55444 [13/Oct/2025:14:11:55.947] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/883/883 201 8866 - - ---- 59/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:56 standalone.localdomain haproxy[70940]: 192.168.122.99:56098 [13/Oct/2025:14:11:55.998] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/874/874 201 441 - - ---- 60/6/5/5/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:11:56 standalone.localdomain haproxy[70940]: 192.168.122.99:56104 [13/Oct/2025:14:11:56.314] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/605/605 201 541 - - ---- 60/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:56 standalone.localdomain haproxy[70940]: 192.168.122.99:56114 [13/Oct/2025:14:11:56.359] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/607/607 204 165 - - ---- 60/6/5/5/0 0/0 "DELETE /v3/regions/tempest-region-1887968760 HTTP/1.1"
Oct 13 14:11:57 standalone.localdomain haproxy[70940]: 192.168.122.99:56118 [13/Oct/2025:14:11:56.404] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/635/635 201 566 - - ---- 60/6/5/5/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:11:57 standalone.localdomain haproxy[70940]: 192.168.122.99:56128 [13/Oct/2025:14:11:56.444] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/642/642 200 247 - - ---- 60/6/5/5/0 0/0 "GET /v3/domains/6431372d1658415b971f4d9c966eebff/config/ldap/url HTTP/1.1"
Oct 13 14:11:57 standalone.localdomain haproxy[70940]: 192.168.122.99:56142 [13/Oct/2025:14:11:56.459] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/723/723 204 165 - - ---- 60/6/5/5/0 0/0 "DELETE /v3/users/5eb934ddc2cb456eaeef21fa72817dcd HTTP/1.1"
Oct 13 14:11:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:11:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2352174365' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:11:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:11:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2352174365' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:11:57 standalone.localdomain haproxy[70940]: 172.21.0.2:55452 [13/Oct/2025:14:11:56.543] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/955/955 201 9047 - - ---- 60/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2352174365' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:11:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2352174365' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:11:57 standalone.localdomain haproxy[70940]: 172.17.0.2:48818 [13/Oct/2025:14:11:56.838] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/716/716 200 9042 - - ---- 61/2/1/1/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:57 standalone.localdomain haproxy[70940]: 192.168.122.99:56148 [13/Oct/2025:14:11:56.874] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/745/745 201 484 - - ---- 61/6/5/5/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:11:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1110: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:57 standalone.localdomain haproxy[70940]: 172.21.0.2:36342 [13/Oct/2025:14:11:56.833] neutron neutron/standalone.internalapi.localdomain 0/0/0/870/870 200 2972 - - ---- 62/3/2/1/0 0/0 "GET /v2.0/security-groups?tenant_id=9966dc18c2ea40a0ae49e06de28b70e4&name=default HTTP/1.1"
Oct 13 14:11:57 standalone.localdomain haproxy[70940]: 192.168.122.99:56158 [13/Oct/2025:14:11:56.922] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/796/796 201 540 - - ---- 61/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:11:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:11:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:11:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:11:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:11:57 standalone.localdomain podman[158957]: 2025-10-13 14:11:57.801877282 +0000 UTC m=+0.066313556 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, release=1, config_id=tripleo_step4, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:11:57 standalone.localdomain haproxy[70940]: 192.168.122.99:56168 [13/Oct/2025:14:11:56.971] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/885/885 200 12866 - - ---- 61/6/5/5/0 0/0 "GET /v3/endpoints HTTP/1.1"
Oct 13 14:11:57 standalone.localdomain podman[158963]: 2025-10-13 14:11:57.90449691 +0000 UTC m=+0.161305439 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-swift-account, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Oct 13 14:11:57 standalone.localdomain haproxy[70940]: 192.168.122.99:56176 [13/Oct/2025:14:11:57.042] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/909/909 200 380 - - ---- 62/7/6/5/0 0/0 "GET /v3/regions/tempest-region-987243360 HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain podman[158956]: 2025-10-13 14:11:57.829524821 +0000 UTC m=+0.093164610 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, 
tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 172.21.0.2:36364 [13/Oct/2025:14:11:57.704] neutron neutron/standalone.internalapi.localdomain 0/0/0/327/327 204 152 - - ---- 62/2/1/1/0 0/0 "DELETE /v2.0/security-groups/47783b86-3327-459d-b6fa-cf46e543da49 HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain podman[158957]: 2025-10-13 14:11:58.044762294 +0000 UTC m=+0.309198538 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=swift_container_server, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container)
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56178 [13/Oct/2025:14:11:57.087] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/961/961 200 264 - - ---- 61/7/6/6/0 0/0 "GET /v3/domains/6431372d1658415b971f4d9c966eebff/config/ldap/user_tree_dn HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:11:58 standalone.localdomain podman[158974]: 2025-10-13 14:11:57.863519973 +0000 UTC m=+0.113233045 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, com.redhat.component=openstack-nova-novncproxy-container, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T15:24:10, container_name=nova_vnc_proxy, config_id=tripleo_step4, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, 
vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:11:58 standalone.localdomain podman[158955]: 2025-10-13 14:11:58.076539239 +0000 UTC m=+0.344016956 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, batch=17.1_20250721.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.expose-services=, tcib_managed=true, container_name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, release=1, config_id=tripleo_step4)
Oct 13 14:11:58 standalone.localdomain podman[158963]: 2025-10-13 14:11:58.099542214 +0000 UTC m=+0.356350743 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, build-date=2025-07-21T16:11:22, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, batch=17.1_20250721.1, release=1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 14:11:58 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56194 [13/Oct/2025:14:11:57.184] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/969/969 204 165 - - ---- 61/7/6/6/0 0/0 "DELETE /v3/projects/a5332e764bb7479182e42746ddc28ee2 HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain podman[158974]: 2025-10-13 14:11:58.165756986 +0000 UTC m=+0.415470058 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=tripleo_step4, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, com.redhat.component=openstack-nova-novncproxy-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, build-date=2025-07-21T15:24:10, container_name=nova_vnc_proxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 14:11:58 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:11:58 standalone.localdomain podman[158956]: 2025-10-13 14:11:58.205786664 +0000 UTC m=+0.469426443 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, 
build-date=2025-07-21T14:48:37, container_name=nova_migration_target, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git)
Oct 13 14:11:58 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 172.17.0.2:48824 [13/Oct/2025:14:11:57.506] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/725/725 200 9042 - - ---- 61/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain podman[158955]: 2025-10-13 14:11:58.253731225 +0000 UTC m=+0.521208952 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, container_name=swift_object_server, architecture=x86_64, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:11:58 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56196 [13/Oct/2025:14:11:57.621] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/700/700 201 604 - - ---- 61/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56198 [13/Oct/2025:14:11:57.720] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/732/732 201 564 - - ---- 61/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 172.21.0.2:36358 [13/Oct/2025:14:11:57.501] neutron neutron/standalone.internalapi.localdomain 0/0/0/977/977 200 2973 - - ---- 61/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=699b2f70d8e742e3af63910a4fe07787&name=default HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56204 [13/Oct/2025:14:11:57.860] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/647/647 200 703 - - ---- 61/7/6/6/0 0/0 "GET /v3/endpoints?service_id=98ee203e970e43ff84ccc4f04d53f50c HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56218 [13/Oct/2025:14:11:57.954] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/604/604 400 494 - - ---- 61/7/6/6/0 0/0 "PATCH /v3/endpoints/a0cee7470c204fde88c97ac47ec5d0fa HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain ceph-mon[29756]: pgmap v1110: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56222 [13/Oct/2025:14:11:58.033] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/608/608 204 165 - - ---- 61/7/6/6/0 0/0 "DELETE /v3/projects/9966dc18c2ea40a0ae49e06de28b70e4 HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 172.21.0.2:36372 [13/Oct/2025:14:11:58.480] neutron neutron/standalone.internalapi.localdomain 0/0/0/179/179 204 152 - - ---- 61/2/1/1/0 0/0 "DELETE /v2.0/security-groups/4b02910d-a2c0-4918-ba8d-44eedbc11f29 HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56236 [13/Oct/2025:14:11:58.050] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/639/639 204 165 - - ---- 61/7/6/6/0 0/0 "DELETE /v3/domains/6431372d1658415b971f4d9c966eebff/config HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56240 [13/Oct/2025:14:11:58.155] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/581/581 404 321 - - ---- 61/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 172.21.0.2:36388 [13/Oct/2025:14:11:58.643] neutron neutron/standalone.internalapi.localdomain 0/0/0/120/120 200 2976 - - ---- 61/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=a70ed38184164aba8e34a63bbbacc51f&name=default HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56250 [13/Oct/2025:14:11:58.323] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/465/465 201 507 - - ---- 62/8/7/6/0 0/0 "POST /v3/groups HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56264 [13/Oct/2025:14:11:58.454] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/398/398 200 8437 - - ---- 61/7/6/6/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56280 [13/Oct/2025:14:11:58.508] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/388/388 200 707 - - ---- 61/7/6/6/0 0/0 "GET /v3/endpoints?service_id=e3499078cd8c4418ab5bf995047400fc HTTP/1.1"
Oct 13 14:11:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56288 [13/Oct/2025:14:11:58.559] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/395/395 204 165 - - ---- 61/7/6/6/0 0/0 "DELETE /v3/endpoints/a0cee7470c204fde88c97ac47ec5d0fa HTTP/1.1"
Oct 13 14:11:59 standalone.localdomain haproxy[70940]: 172.21.0.2:36394 [13/Oct/2025:14:11:58.765] neutron neutron/standalone.internalapi.localdomain 0/0/0/254/254 204 152 - - ---- 61/1/0/0/0 0/0 "DELETE /v2.0/security-groups/25ba7c46-e8cf-46c5-b320-686ae67cb02b HTTP/1.1"
Oct 13 14:11:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56304 [13/Oct/2025:14:11:58.661] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/400/400 204 165 - - ---- 61/8/7/7/0 0/0 "DELETE /v3/projects/699b2f70d8e742e3af63910a4fe07787 HTTP/1.1"
Oct 13 14:11:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56312 [13/Oct/2025:14:11:58.691] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/447/447 200 480 - - ---- 61/7/6/6/0 0/0 "PATCH /v3/domains/6431372d1658415b971f4d9c966eebff HTTP/1.1"
Oct 13 14:11:59 standalone.localdomain haproxy[70940]: 172.21.0.2:36402 [13/Oct/2025:14:11:59.062] neutron neutron/standalone.internalapi.localdomain 0/0/0/120/120 200 2976 - - ---- 61/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=19f21c44e3fd4f7c9dc20d45e995e6d3&name=default HTTP/1.1"
Oct 13 14:11:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56314 [13/Oct/2025:14:11:58.740] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/492/492 204 165 - - ---- 61/7/6/6/0 0/0 "DELETE /v3/users/40c2dffc7dd248129d9e9328aa9f3ff6 HTTP/1.1"
Oct 13 14:11:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56324 [13/Oct/2025:14:11:58.792] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/507/507 201 442 - - ---- 61/7/6/6/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:11:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56340 [13/Oct/2025:14:11:58.854] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/502/502 200 647 - - ---- 61/7/6/6/0 0/0 "GET /v3/projects?enabled=False HTTP/1.1"
Oct 13 14:11:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56352 [13/Oct/2025:14:11:58.900] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/514/514 200 4515 - - ---- 61/7/6/6/0 0/0 "GET /v3/endpoints?interface=public HTTP/1.1"
Oct 13 14:11:59 standalone.localdomain haproxy[70940]: 172.21.0.2:36412 [13/Oct/2025:14:11:59.185] neutron neutron/standalone.internalapi.localdomain 0/0/0/236/236 204 152 - - ---- 61/1/0/0/0 0/0 "DELETE /v2.0/security-groups/f6985e3c-18f0-4a36-b869-65933b42cf3d HTTP/1.1"
Oct 13 14:11:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56362 [13/Oct/2025:14:11:58.957] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/514/514 204 165 - - ---- 61/8/7/7/0 0/0 "DELETE /v3/regions/tempest-region-987243360 HTTP/1.1"
Oct 13 14:11:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56372 [13/Oct/2025:14:11:59.021] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/537/537 204 165 - - ---- 61/8/7/7/0 0/0 "DELETE /v3/projects/a70ed38184164aba8e34a63bbbacc51f HTTP/1.1"
Oct 13 14:11:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1111: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:11:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56382 [13/Oct/2025:14:11:59.140] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/532/532 204 165 - - ---- 61/7/6/6/0 0/0 "DELETE /v3/domains/6431372d1658415b971f4d9c966eebff HTTP/1.1"
Oct 13 14:11:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56386 [13/Oct/2025:14:11:59.232] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/516/516 204 165 - - ---- 61/7/6/6/0 0/0 "DELETE /v3/users/e46141d47388480f964b43f3b80ff0f3 HTTP/1.1"
Oct 13 14:11:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:12:00 standalone.localdomain haproxy[70940]: 192.168.122.99:56398 [13/Oct/2025:14:11:59.300] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/791/791 201 649 - - ---- 61/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:00 standalone.localdomain runuser[159232]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:12:00 standalone.localdomain haproxy[70940]: 192.168.122.99:56406 [13/Oct/2025:14:11:59.360] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/810/810 200 699 - - ---- 61/6/5/5/0 0/0 "GET /v3/projects?parent_id=4f3f32edc5214c749550d5697e57f7e1 HTTP/1.1"
Oct 13 14:12:00 standalone.localdomain haproxy[70940]: 192.168.122.99:56420 [13/Oct/2025:14:11:59.416] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/824/824 200 4545 - - ---- 61/6/5/5/0 0/0 "GET /v3/endpoints?interface=internal HTTP/1.1"
Oct 13 14:12:00 standalone.localdomain haproxy[70940]: 192.168.122.99:56426 [13/Oct/2025:14:11:59.423] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/908/908 204 165 - - ---- 61/6/5/5/0 0/0 "DELETE /v3/projects/19f21c44e3fd4f7c9dc20d45e995e6d3 HTTP/1.1"
Oct 13 14:12:00 standalone.localdomain haproxy[70940]: 192.168.122.99:56442 [13/Oct/2025:14:11:59.473] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/927/927 201 565 - - ---- 61/5/4/4/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:12:00 standalone.localdomain haproxy[70940]: 172.21.0.2:55456 [13/Oct/2025:14:11:59.561] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1162/1162 201 9046 - - ---- 61/5/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:00 standalone.localdomain ceph-mon[29756]: pgmap v1111: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:00 standalone.localdomain runuser[159232]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:12:00 standalone.localdomain haproxy[70940]: 192.168.122.99:40080 [13/Oct/2025:14:11:59.677] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1112/1112 201 483 - - ---- 61/5/4/4/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:12:00 standalone.localdomain runuser[159293]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:12:01 standalone.localdomain haproxy[70940]: 172.21.0.2:45220 [13/Oct/2025:14:11:59.751] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1333/1333 201 9046 - - ---- 61/5/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:01 standalone.localdomain haproxy[70940]: 192.168.122.99:40088 [13/Oct/2025:14:12:00.095] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1046/1046 201 477 - - ---- 62/6/5/5/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:01 standalone.localdomain haproxy[70940]: 192.168.122.99:40098 [13/Oct/2025:14:12:00.173] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1059/1059 204 165 - - ---- 61/5/4/4/0 0/0 "DELETE /v3/projects/63870a3b988644258d69e894caf920ab HTTP/1.1"
Oct 13 14:12:01 standalone.localdomain haproxy[70940]: 192.168.122.99:40110 [13/Oct/2025:14:12:00.245] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1041/1041 201 495 - - ---- 61/5/4/4/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:12:01 standalone.localdomain runuser[159293]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:12:01 standalone.localdomain haproxy[70940]: 172.21.0.2:45224 [13/Oct/2025:14:12:00.333] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1294/1294 201 9178 - - ---- 61/4/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:01 standalone.localdomain runuser[159371]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:12:01 standalone.localdomain systemd[1]: tmp-crun.kuGgYx.mount: Deactivated successfully.
Oct 13 14:12:01 standalone.localdomain podman[159353]: 2025-10-13 14:12:01.655366917 +0000 UTC m=+0.094177339 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, release=1, version=17.1.9, build-date=2025-07-21T12:58:45, vcs-type=git, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:12:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1112: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:12:01 standalone.localdomain haproxy[70940]: 192.168.122.99:40112 [13/Oct/2025:14:12:00.402] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1277/1277 200 380 - - ---- 61/5/4/4/0 0/0 "GET /v3/regions/tempest-region-729517988 HTTP/1.1"
Oct 13 14:12:01 standalone.localdomain podman[159353]: 2025-10-13 14:12:01.684906594 +0000 UTC m=+0.123716936 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container)
Oct 13 14:12:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:12:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:12:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:12:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:12:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:12:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:12:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:12:01 standalone.localdomain systemd[1]: tmp-crun.HWGDQp.mount: Deactivated successfully.
Oct 13 14:12:01 standalone.localdomain podman[159426]: 2025-10-13 14:12:01.805601417 +0000 UTC m=+0.097919205 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, architecture=x86_64, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, 
managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, container_name=glance_api_internal, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1)
Oct 13 14:12:01 standalone.localdomain podman[159519]: 2025-10-13 14:12:01.87969944 +0000 UTC m=+0.085457523 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, com.redhat.component=openstack-haproxy-container, build-date=2025-07-21T13:08:11, name=rhosp17/openstack-haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:12:01 standalone.localdomain podman[159402]: 2025-10-13 14:12:01.781650552 +0000 UTC m=+0.095658136 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api, container_name=glance_api_cron, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-glance-api-container, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible)
Oct 13 14:12:01 standalone.localdomain podman[159404]: 2025-10-13 14:12:01.895829755 +0000 UTC m=+0.202533825 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-placement-api, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-placement-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, container_name=placement_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, architecture=x86_64, build-date=2025-07-21T13:58:12, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, batch=17.1_20250721.1, 
version=17.1.9, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true)
Oct 13 14:12:01 standalone.localdomain podman[159421]: 2025-10-13 14:12:01.851253757 +0000 UTC m=+0.144068560 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_metadata, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, summary=Red Hat OpenStack Platform 17.1 nova-api, 
com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-api, release=1, vcs-type=git, io.openshift.expose-services=)
Oct 13 14:12:01 standalone.localdomain podman[159402]: 2025-10-13 14:12:01.91910389 +0000 UTC m=+0.233111434 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api_cron, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-glance-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T13:58:20)
Oct 13 14:12:01 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:12:01 standalone.localdomain podman[159421]: 2025-10-13 14:12:01.929255431 +0000 UTC m=+0.222070224 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, description=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_metadata, build-date=2025-07-21T16:05:11, release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, 
vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64)
Oct 13 14:12:01 standalone.localdomain podman[159447]: 2025-10-13 14:12:01.956213808 +0000 UTC m=+0.236570080 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, vendor=Red Hat, Inc., container_name=glance_api, distribution-scope=public, release=1, build-date=2025-07-21T13:58:20, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true)
Oct 13 14:12:01 standalone.localdomain haproxy[70940]: 172.21.0.2:45238 [13/Oct/2025:14:12:00.729] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1228/1228 201 9178 - - ---- 61/4/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:01 standalone.localdomain podman[159411]: 2025-10-13 14:12:01.988313942 +0000 UTC m=+0.285157770 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, 
vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.expose-services=)
Oct 13 14:12:02 standalone.localdomain podman[159443]: 2025-10-13 14:12:02.002617471 +0000 UTC m=+0.290304527 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, version=17.1.9, container_name=ovn_controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245)
Oct 13 14:12:02 standalone.localdomain podman[159404]: 2025-10-13 14:12:02.020918443 +0000 UTC m=+0.327622533 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, name=rhosp17/openstack-placement-api, description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, config_id=tripleo_step4, container_name=placement_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, com.redhat.component=openstack-placement-api-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 placement-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64)
Oct 13 14:12:02 standalone.localdomain haproxy[70940]: 192.168.122.99:40128 [13/Oct/2025:14:12:00.791] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1236/1236 201 345 - - ---- 61/6/5/5/0 0/0 "PUT /v3/domains/9416b299d2f647e8802249a3f6e22a0b/config HTTP/1.1"
Oct 13 14:12:02 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:12:02 standalone.localdomain podman[159599]: 2025-10-13 14:12:02.041737991 +0000 UTC m=+0.079560091 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, version=17.1.9, name=rhosp17/openstack-rabbitmq, build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.component=openstack-rabbitmq-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, release=1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:12:02 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:12:02 standalone.localdomain podman[159443]: 2025-10-13 14:12:02.050681097 +0000 UTC m=+0.338368163 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:12:02 standalone.localdomain podman[159426]: 2025-10-13 14:12:02.058948979 +0000 UTC m=+0.351266777 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', 
'/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, container_name=glance_api_internal, build-date=2025-07-21T13:58:20)
Oct 13 14:12:02 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:12:02 standalone.localdomain podman[159519]: 2025-10-13 14:12:02.065363607 +0000 UTC m=+0.271121680 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-haproxy-container)
Oct 13 14:12:02 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:12:02 standalone.localdomain haproxy[70940]: 172.17.0.2:48824 [13/Oct/2025:14:12:01.093] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1008/1008 200 9173 - - ---- 61/3/1/1/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:02 standalone.localdomain podman[159430]: 2025-10-13 14:12:02.152067917 +0000 UTC m=+0.440557978 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, container_name=ovn_metadata_agent, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:12:02 standalone.localdomain podman[159447]: 2025-10-13 14:12:02.170477571 +0000 UTC m=+0.450833883 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, io.buildah.version=1.33.12, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api, description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:12:02 standalone.localdomain podman[159599]: 2025-10-13 14:12:02.18184384 +0000 UTC m=+0.219665940 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.buildah.version=1.33.12, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-rabbitmq-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1)
Oct 13 14:12:02 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:12:02 standalone.localdomain haproxy[70940]: 192.168.122.99:40142 [13/Oct/2025:14:12:01.145] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1044/1044 204 165 - - ---- 61/6/5/5/0 0/0 "PUT /v3/projects/fe3dffb53f0b414a8e095b67f132e54d/users/00515c3559934eb5861b350991c52f53/roles/8091c3067b0d4f8a973234f0f15b9591 HTTP/1.1"
Oct 13 14:12:02 standalone.localdomain podman[159430]: 2025-10-13 14:12:02.209159778 +0000 UTC m=+0.497649869 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:12:02 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:12:02 standalone.localdomain podman[159411]: 2025-10-13 14:12:02.238940312 +0000 UTC m=+0.535784180 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, container_name=swift_proxy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, architecture=x86_64, vcs-type=git, build-date=2025-07-21T14:48:37, release=1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 14:12:02 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:12:02 standalone.localdomain haproxy[70940]: 172.21.0.2:51914 [13/Oct/2025:14:12:01.087] neutron neutron/standalone.internalapi.localdomain 0/0/0/1189/1189 200 2976 - - ---- 61/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=e0143409e3de40f4a710d083cdb21201&name=default HTTP/1.1"
Oct 13 14:12:02 standalone.localdomain haproxy[70940]: 192.168.122.99:40148 [13/Oct/2025:14:12:01.235] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1067/1067 204 165 - - ---- 61/6/5/5/0 0/0 "DELETE /v3/projects/4f3f32edc5214c749550d5697e57f7e1 HTTP/1.1"
Oct 13 14:12:02 standalone.localdomain haproxy[70940]: 192.168.122.99:40156 [13/Oct/2025:14:12:01.289] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1085/1085 201 565 - - ---- 61/6/5/5/0 0/0 "POST /v3/endpoints HTTP/1.1"
Oct 13 14:12:02 standalone.localdomain runuser[159371]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:12:02 standalone.localdomain haproxy[70940]: 172.21.0.2:51916 [13/Oct/2025:14:12:02.278] neutron neutron/standalone.internalapi.localdomain 0/0/0/179/179 204 152 - - ---- 61/1/0/0/0 0/0 "DELETE /v2.0/security-groups/207e45fe-d498-4f0c-be60-3bf462ca582d HTTP/1.1"
Oct 13 14:12:02 standalone.localdomain haproxy[70940]: 172.21.0.2:45250 [13/Oct/2025:14:12:01.635] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1053/1053 201 9360 - - ---- 61/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:02 standalone.localdomain haproxy[70940]: 192.168.122.99:40160 [13/Oct/2025:14:12:01.683] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1063/1063 400 492 - - ---- 61/8/7/7/0 0/0 "PATCH /v3/endpoints/5c78d9ee479949219f54420079c44932 HTTP/1.1"
Oct 13 14:12:02 standalone.localdomain ceph-mon[29756]: pgmap v1112: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:02 standalone.localdomain haproxy[70940]: 192.168.122.99:40170 [13/Oct/2025:14:12:01.960] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/811/811 200 510 - - ---- 61/8/7/7/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:12:02 standalone.localdomain haproxy[70940]: 192.168.122.99:40174 [13/Oct/2025:14:12:02.028] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/809/809 200 344 - - ---- 61/8/7/7/0 0/0 "PATCH /v3/domains/9416b299d2f647e8802249a3f6e22a0b/config HTTP/1.1"
Oct 13 14:12:02 standalone.localdomain haproxy[70940]: 192.168.122.99:40190 [13/Oct/2025:14:12:02.192] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/727/727 200 430 - - ---- 61/8/7/7/0 0/0 "GET /v3/role_assignments?scope.project.id=fe3dffb53f0b414a8e095b67f132e54d&user.id=00515c3559934eb5861b350991c52f53&effective HTTP/1.1"
Oct 13 14:12:03 standalone.localdomain haproxy[70940]: 192.168.122.99:40196 [13/Oct/2025:14:12:02.304] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/746/746 204 165 - - ---- 61/8/7/7/0 0/0 "DELETE /v3/projects/787d27d5e9f34ec3a7884de1cadd9ba9 HTTP/1.1"
Oct 13 14:12:03 standalone.localdomain haproxy[70940]: 192.168.122.99:40202 [13/Oct/2025:14:12:02.376] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/685/685 200 380 - - ---- 62/9/7/7/0 0/0 "GET /v3/regions/tempest-region-974918093 HTTP/1.1"
Oct 13 14:12:03 standalone.localdomain haproxy[70940]: 192.168.122.99:40206 [13/Oct/2025:14:12:02.459] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/702/702 204 165 - - ---- 61/8/7/7/0 0/0 "DELETE /v3/projects/e0143409e3de40f4a710d083cdb21201 HTTP/1.1"
Oct 13 14:12:03 standalone.localdomain haproxy[70940]: 192.168.122.99:40220 [13/Oct/2025:14:12:02.690] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/525/525 200 510 - - ---- 61/7/6/6/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:12:03 standalone.localdomain haproxy[70940]: 192.168.122.99:40224 [13/Oct/2025:14:12:02.749] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/530/530 204 165 - - ---- 61/7/6/6/0 0/0 "DELETE /v3/endpoints/5c78d9ee479949219f54420079c44932 HTTP/1.1"
Oct 13 14:12:03 standalone.localdomain haproxy[70940]: 172.21.0.2:51932 [13/Oct/2025:14:12:03.163] neutron neutron/standalone.internalapi.localdomain 0/0/0/135/135 200 2976 - - ---- 61/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=6ad7a2bf79104bf1ac18df44fba91e94&name=default HTTP/1.1"
Oct 13 14:12:03 standalone.localdomain haproxy[70940]: 192.168.122.99:40234 [13/Oct/2025:14:12:02.772] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/546/546 201 576 - - ---- 61/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:03 standalone.localdomain haproxy[70940]: 192.168.122.99:40236 [13/Oct/2025:14:12:02.840] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/535/535 200 344 - - ---- 61/7/6/6/0 0/0 "GET /v3/domains/9416b299d2f647e8802249a3f6e22a0b/config HTTP/1.1"
Oct 13 14:12:03 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=11320 DF PROTO=TCP SPT=49792 DPT=19885 SEQ=1750138889 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A095822F10000000001030307) 
Oct 13 14:12:03 standalone.localdomain haproxy[70940]: 192.168.122.99:40242 [13/Oct/2025:14:12:02.923] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/530/530 204 165 - - ---- 61/7/6/6/0 0/0 "DELETE /v3/projects/fe3dffb53f0b414a8e095b67f132e54d/users/00515c3559934eb5861b350991c52f53/roles/8091c3067b0d4f8a973234f0f15b9591 HTTP/1.1"
Oct 13 14:12:03 standalone.localdomain haproxy[70940]: 172.21.0.2:51948 [13/Oct/2025:14:12:03.301] neutron neutron/standalone.internalapi.localdomain 0/0/0/229/229 204 152 - - ---- 61/1/0/0/0 0/0 "DELETE /v2.0/security-groups/d8d916f1-a5e9-4fe8-8428-be2548ab3e41 HTTP/1.1"
Oct 13 14:12:03 standalone.localdomain haproxy[70940]: 192.168.122.99:40244 [13/Oct/2025:14:12:03.053] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/485/485 204 165 - - ---- 61/8/7/7/0 0/0 "DELETE /v3/users/3c9ccd70b129457c88e28da7ed342a30 HTTP/1.1"
Oct 13 14:12:03 standalone.localdomain haproxy[70940]: 192.168.122.99:40260 [13/Oct/2025:14:12:03.062] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/536/536 200 566 - - ---- 61/8/7/7/0 0/0 "PATCH /v3/endpoints/d61bd615a2dc44e88fd1ec9c9d60ac66 HTTP/1.1"
Oct 13 14:12:03 standalone.localdomain haproxy[70940]: 192.168.122.99:40264 [13/Oct/2025:14:12:03.217] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/444/444 201 576 - - ---- 61/8/7/7/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1113: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:03 standalone.localdomain haproxy[70940]: 192.168.122.99:40268 [13/Oct/2025:14:12:03.281] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/449/449 204 165 - - ---- 61/8/7/7/0 0/0 "DELETE /v3/regions/tempest-region-729517988 HTTP/1.1"
Oct 13 14:12:04 standalone.localdomain haproxy[70940]: 192.168.122.99:40282 [13/Oct/2025:14:12:03.322] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/714/714 201 499 - - ---- 62/9/8/7/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:04 standalone.localdomain haproxy[70940]: 192.168.122.99:40288 [13/Oct/2025:14:12:03.378] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/701/701 204 165 - - ---- 61/8/7/7/0 0/0 "DELETE /v3/domains/9416b299d2f647e8802249a3f6e22a0b/config HTTP/1.1"
Oct 13 14:12:04 standalone.localdomain haproxy[70940]: 192.168.122.99:40290 [13/Oct/2025:14:12:03.454] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/685/685 204 165 - - ---- 61/8/7/7/0 0/0 "DELETE /v3/roles/8091c3067b0d4f8a973234f0f15b9591 HTTP/1.1"
Oct 13 14:12:04 standalone.localdomain haproxy[70940]: 192.168.122.99:40296 [13/Oct/2025:14:12:03.533] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/673/673 204 165 - - ---- 61/8/7/7/0 0/0 "DELETE /v3/projects/6ad7a2bf79104bf1ac18df44fba91e94 HTTP/1.1"
Oct 13 14:12:04 standalone.localdomain haproxy[70940]: 192.168.122.99:40308 [13/Oct/2025:14:12:03.542] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/730/730 204 165 - - ---- 62/7/6/6/0 0/0 "DELETE /v3/users/d86880aecab84377ad8f89c8f74127d2 HTTP/1.1"
Oct 13 14:12:04 standalone.localdomain haproxy[70940]: 192.168.122.99:40310 [13/Oct/2025:14:12:03.599] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/713/713 200 382 - - ---- 61/6/5/5/0 0/0 "GET /v3/regions/tempest-region-1205665261 HTTP/1.1"
Oct 13 14:12:04 standalone.localdomain haproxy[70940]: 192.168.122.99:40312 [13/Oct/2025:14:12:03.664] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/935/935 201 499 - - ---- 61/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:04 standalone.localdomain haproxy[70940]: 192.168.122.99:40314 [13/Oct/2025:14:12:03.734] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/924/924 204 165 - - ---- 61/6/5/5/0 0/0 "DELETE /v3/services/40e9c1437511469885cc33a3c7f908b6 HTTP/1.1"
Oct 13 14:12:04 standalone.localdomain haproxy[70940]: 192.168.122.99:40322 [13/Oct/2025:14:12:04.041] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/640/640 200 3603 - - ---- 61/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:04 standalone.localdomain haproxy[70940]: 192.168.122.99:40330 [13/Oct/2025:14:12:04.083] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/643/643 404 360 - - ---- 61/6/5/5/0 0/0 "GET /v3/domains/9416b299d2f647e8802249a3f6e22a0b/config HTTP/1.1"
Oct 13 14:12:04 standalone.localdomain ceph-mon[29756]: pgmap v1113: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:04 standalone.localdomain haproxy[70940]: 192.168.122.99:40338 [13/Oct/2025:14:12:04.145] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/641/641 204 165 - - ---- 61/6/5/5/0 0/0 "PUT /v3/projects/fe3dffb53f0b414a8e095b67f132e54d/users/00515c3559934eb5861b350991c52f53/roles/7c0bffc586494bf494dbaf620e203c6b HTTP/1.1"
Oct 13 14:12:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:12:05 standalone.localdomain haproxy[70940]: 172.21.0.2:45256 [13/Oct/2025:14:12:04.209] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/916/916 201 8865 - - ---- 61/4/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:05 standalone.localdomain haproxy[70940]: 172.21.0.2:45268 [13/Oct/2025:14:12:04.273] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1126/1126 201 8865 - - ---- 61/4/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:05 standalone.localdomain haproxy[70940]: 192.168.122.99:40352 [13/Oct/2025:14:12:04.313] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1138/1138 204 165 - - ---- 61/6/5/5/0 0/0 "DELETE /v3/endpoints/d61bd615a2dc44e88fd1ec9c9d60ac66 HTTP/1.1"
Oct 13 14:12:05 standalone.localdomain haproxy[70940]: 192.168.122.99:40362 [13/Oct/2025:14:12:04.602] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/877/877 200 3603 - - ---- 61/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:05 standalone.localdomain systemd[1]: tmp-crun.5tb8PB.mount: Deactivated successfully.
Oct 13 14:12:05 standalone.localdomain haproxy[70940]: 192.168.122.99:40378 [13/Oct/2025:14:12:04.660] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/882/882 204 165 - - ---- 61/6/5/5/0 0/0 "DELETE /v3/users/b92813fc734948bcbe44071573ead780 HTTP/1.1"
Oct 13 14:12:05 standalone.localdomain podman[159725]: 2025-10-13 14:12:05.54624817 +0000 UTC m=+0.076265500 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-volume, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-cinder-volume-container, name=rhosp17/openstack-cinder-volume, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, build-date=2025-07-21T16:13:39, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cinder-volume)
Oct 13 14:12:05 standalone.localdomain haproxy[70940]: 192.168.122.99:40394 [13/Oct/2025:14:12:04.684] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/912/912 204 165 - - ---- 61/6/5/5/0 0/0 "PUT /v3/projects/1057aedfc5ed4ffeb2b2beab0ef4a1d1/users/037469faa2bc4eecba568af2e037728d/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:05 standalone.localdomain podman[159744]: 2025-10-13 14:12:05.620625083 +0000 UTC m=+0.068273786 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-volume, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, com.redhat.component=openstack-cinder-volume-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:13:39, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cinder-volume, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-volume, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f)
Oct 13 14:12:05 standalone.localdomain podman[159725]: 2025-10-13 14:12:05.625996148 +0000 UTC m=+0.156013528 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, build-date=2025-07-21T16:13:39, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-volume, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, description=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cinder-volume-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-cinder-volume, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, maintainer=OpenStack TripleO Team)
Oct 13 14:12:05 standalone.localdomain haproxy[70940]: 192.168.122.99:40398 [13/Oct/2025:14:12:04.730] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/918/918 204 165 - - ---- 61/6/5/5/0 0/0 "DELETE /v3/domains/9416b299d2f647e8802249a3f6e22a0b/config HTTP/1.1"
Oct 13 14:12:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1114: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:05 standalone.localdomain haproxy[70940]: 192.168.122.99:40410 [13/Oct/2025:14:12:04.790] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/914/914 201 728 - - ---- 61/6/5/5/0 0/0 "PUT /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/8404b71aa506493ba9615df19ffdd490 HTTP/1.1"
Oct 13 14:12:05 standalone.localdomain ceph-mon[29756]: pgmap v1114: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:05 standalone.localdomain haproxy[70940]: 172.21.0.2:45284 [13/Oct/2025:14:12:05.130] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/864/864 201 8865 - - ---- 61/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:06 standalone.localdomain haproxy[70940]: 172.17.0.2:48824 [13/Oct/2025:14:12:05.407] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/647/647 200 8860 - - ---- 61/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:06 standalone.localdomain haproxy[70940]: 192.168.122.99:40412 [13/Oct/2025:14:12:05.454] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/658/658 204 165 - - ---- 62/8/6/6/0 0/0 "DELETE /v3/regions/tempest-region-1205665261 HTTP/1.1"
Oct 13 14:12:06 standalone.localdomain haproxy[70940]: 192.168.122.99:40416 [13/Oct/2025:14:12:05.481] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/677/677 204 165 - - ---- 61/7/6/6/0 0/0 "PUT /v3/projects/6558c1761bba47bf904e47a566fb0dc1/users/0112c550304d4b87bacf25897e05bc39/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:06 standalone.localdomain haproxy[70940]: 172.21.0.2:51960 [13/Oct/2025:14:12:05.401] neutron neutron/standalone.internalapi.localdomain 0/0/0/837/837 200 2976 - - ---- 61/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=de95c089676d4bbfbb51e3f4e94d9f9b&name=default HTTP/1.1"
Oct 13 14:12:06 standalone.localdomain haproxy[70940]: 192.168.122.99:40432 [13/Oct/2025:14:12:05.544] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/701/701 204 165 - - ---- 61/7/6/6/0 0/0 "DELETE /v3/users/060ce40f5943498a97ba48f3a36a08fc HTTP/1.1"
Oct 13 14:12:06 standalone.localdomain haproxy[70940]: 192.168.122.99:40448 [13/Oct/2025:14:12:05.597] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/699/699 200 3603 - - ---- 61/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:06 standalone.localdomain haproxy[70940]: 192.168.122.99:40458 [13/Oct/2025:14:12:05.649] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/720/720 200 479 - - ---- 61/6/5/5/0 0/0 "PATCH /v3/domains/9416b299d2f647e8802249a3f6e22a0b HTTP/1.1"
Oct 13 14:12:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:12:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:12:06 standalone.localdomain haproxy[70940]: 192.168.122.99:40460 [13/Oct/2025:14:12:05.704] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/758/758 200 1264 - - ---- 61/6/5/5/0 0/0 "GET /v3/role_assignments?scope.project.id=fe3dffb53f0b414a8e095b67f132e54d&user.id=00515c3559934eb5861b350991c52f53&effective HTTP/1.1"
Oct 13 14:12:06 standalone.localdomain podman[159765]: 2025-10-13 14:12:06.463686408 +0000 UTC m=+0.070587226 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, tcib_managed=true, io.buildah.version=1.33.12, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, release=1, container_name=keystone_cron, version=17.1.9, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:18, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-keystone-container, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:12:06 standalone.localdomain podman[159765]: 2025-10-13 14:12:06.472802327 +0000 UTC m=+0.079702825 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, distribution-scope=public, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, tcib_managed=true, container_name=keystone_cron, config_id=tripleo_step3, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, 
build-date=2025-07-21T13:27:18, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Oct 13 14:12:06 standalone.localdomain haproxy[70940]: 172.21.0.2:51970 [13/Oct/2025:14:12:06.240] neutron neutron/standalone.internalapi.localdomain 0/0/0/240/240 204 152 - - ---- 61/1/0/0/0 0/0 "DELETE /v2.0/security-groups/e6741bf1-f5b6-4a91-9ddd-9d75b178e06d HTTP/1.1"
Oct 13 14:12:06 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:12:06 standalone.localdomain haproxy[70940]: 192.168.122.99:40472 [13/Oct/2025:14:12:05.997] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/520/520 200 510 - - ---- 61/7/6/6/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:12:06 standalone.localdomain podman[159766]: 2025-10-13 14:12:06.518557441 +0000 UTC m=+0.120541619 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, com.redhat.component=openstack-nova-api-container, release=1, container_name=nova_api, io.buildah.version=1.33.12, build-date=2025-07-21T16:05:11, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, distribution-scope=public, config_id=tripleo_step4)
Oct 13 14:12:06 standalone.localdomain systemd[1]: tmp-crun.GtpGVP.mount: Deactivated successfully.
Oct 13 14:12:06 standalone.localdomain podman[159766]: 2025-10-13 14:12:06.56023212 +0000 UTC m=+0.162216238 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, build-date=2025-07-21T16:05:11, name=rhosp17/openstack-nova-api, com.redhat.component=openstack-nova-api-container, io.openshift.tags=rhosp osp openstack 
osp-17.1, container_name=nova_api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, vcs-type=git, release=1, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 14:12:06 standalone.localdomain haproxy[70940]: 192.168.122.99:40482 [13/Oct/2025:14:12:06.113] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/453/453 204 165 - - ---- 61/7/6/6/0 0/0 "DELETE /v3/regions/tempest-region-974918093 HTTP/1.1"
Oct 13 14:12:06 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:12:06 standalone.localdomain haproxy[70940]: 192.168.122.99:40488 [13/Oct/2025:14:12:06.160] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/431/431 200 3603 - - ---- 61/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:06 standalone.localdomain haproxy[70940]: 172.21.0.2:45288 [13/Oct/2025:14:12:06.246] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/606/606 201 8865 - - ---- 61/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:06 standalone.localdomain haproxy[70940]: 192.168.122.99:40498 [13/Oct/2025:14:12:06.299] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/574/574 204 165 - - ---- 61/7/6/6/0 0/0 "PUT /v3/projects/1057aedfc5ed4ffeb2b2beab0ef4a1d1/users/037469faa2bc4eecba568af2e037728d/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:07 standalone.localdomain haproxy[70940]: 192.168.122.99:40508 [13/Oct/2025:14:12:06.371] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/719/719 204 165 - - ---- 61/6/5/5/0 0/0 "DELETE /v3/domains/9416b299d2f647e8802249a3f6e22a0b HTTP/1.1"
Oct 13 14:12:07 standalone.localdomain haproxy[70940]: 192.168.122.99:40510 [13/Oct/2025:14:12:06.464] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/704/704 204 165 - - ---- 61/6/5/5/0 0/0 "DELETE /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/8404b71aa506493ba9615df19ffdd490 HTTP/1.1"
Oct 13 14:12:07 standalone.localdomain haproxy[70940]: 192.168.122.99:40524 [13/Oct/2025:14:12:06.482] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/794/794 204 165 - - ---- 61/6/5/5/0 0/0 "DELETE /v3/projects/de95c089676d4bbfbb51e3f4e94d9f9b HTTP/1.1"
Oct 13 14:12:07 standalone.localdomain haproxy[70940]: 192.168.122.99:40528 [13/Oct/2025:14:12:06.519] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/825/825 201 580 - - ---- 61/5/4/4/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:07 standalone.localdomain haproxy[70940]: 192.168.122.99:40536 [13/Oct/2025:14:12:06.570] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/841/841 204 165 - - ---- 61/5/4/4/0 0/0 "DELETE /v3/services/4840f9ef23f245aeb1a7966c16c7c964 HTTP/1.1"
Oct 13 14:12:07 standalone.localdomain haproxy[70940]: 172.21.0.2:51986 [13/Oct/2025:14:12:07.279] neutron neutron/standalone.internalapi.localdomain 0/0/0/176/176 200 2976 - - ---- 61/2/1/1/0 0/0 "GET /v2.0/security-groups?tenant_id=5f53e3d6e8674d1fae5d3c14fc857e80&name=default HTTP/1.1"
Oct 13 14:12:07 standalone.localdomain haproxy[70940]: 192.168.122.99:40540 [13/Oct/2025:14:12:06.594] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/879/879 204 165 - - ---- 61/5/4/4/0 0/0 "PUT /v3/projects/6558c1761bba47bf904e47a566fb0dc1/users/0112c550304d4b87bacf25897e05bc39/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:07 standalone.localdomain haproxy[70940]: 172.17.0.2:48824 [13/Oct/2025:14:12:06.861] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/732/732 200 8728 - - ---- 60/3/2/2/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1115: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:12:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:12:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:12:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:12:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:12:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:12:07 standalone.localdomain haproxy[70940]: 172.21.0.2:51996 [13/Oct/2025:14:12:07.459] neutron neutron/standalone.internalapi.localdomain 0/0/0/366/366 204 152 - - ---- 60/2/1/1/0 0/0 "DELETE /v2.0/security-groups/8b5d244f-1408-4402-9856-ce1f36d91d14 HTTP/1.1"
Oct 13 14:12:07 standalone.localdomain systemd[1]: tmp-crun.TK8eST.mount: Deactivated successfully.
Oct 13 14:12:07 standalone.localdomain haproxy[70940]: 172.21.0.2:51984 [13/Oct/2025:14:12:06.855] neutron neutron/standalone.internalapi.localdomain 0/0/0/1001/1001 200 2976 - - ---- 60/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=0617fcc814d7403884ae063321c7b217&name=default HTTP/1.1"
Oct 13 14:12:07 standalone.localdomain podman[159902]: 2025-10-13 14:12:07.851281459 +0000 UTC m=+0.118006221 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, tcib_managed=true, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, managed_by=tripleo_ansible, container_name=barbican_keystone_listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, com.redhat.component=openstack-barbican-keystone-listener-container, build-date=2025-07-21T16:18:19, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, name=rhosp17/openstack-barbican-keystone-listener)
Oct 13 14:12:07 standalone.localdomain podman[159901]: 2025-10-13 14:12:07.888364357 +0000 UTC m=+0.157284737 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, release=1, build-date=2025-07-21T15:22:44, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, com.redhat.component=openstack-barbican-api-container, name=rhosp17/openstack-barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.expose-services=, 
tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=barbican_api, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:12:07 standalone.localdomain podman[159908]: 2025-10-13 14:12:07.902988366 +0000 UTC m=+0.152798719 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-barbican-worker, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, config_id=tripleo_step3, architecture=x86_64, com.redhat.component=openstack-barbican-worker-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, container_name=barbican_worker, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, release=1, tcib_managed=true, build-date=2025-07-21T15:36:22, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:12:07 standalone.localdomain podman[159901]: 2025-10-13 14:12:07.90964495 +0000 UTC m=+0.178565340 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, com.redhat.component=openstack-barbican-api-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:22:44, name=rhosp17/openstack-barbican-api, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, container_name=barbican_api, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 barbican-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed)
Oct 13 14:12:07 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:12:07 standalone.localdomain haproxy[70940]: 172.21.0.2:45304 [13/Oct/2025:14:12:06.880] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1049/1049 201 8747 - - ---- 60/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:07 standalone.localdomain podman[159902]: 2025-10-13 14:12:07.946935814 +0000 UTC m=+0.213660596 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, name=rhosp17/openstack-barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, architecture=x86_64, distribution-scope=public, config_id=tripleo_step3, version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', 
'/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, container_name=barbican_keystone_listener, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-barbican-keystone-listener-container, build-date=2025-07-21T16:18:19, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 14:12:07 standalone.localdomain podman[159900]: 2025-10-13 14:12:07.952539896 +0000 UTC m=+0.218256248 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-dhcp-agent-container, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack 
Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:54, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, container_name=neutron_dhcp, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9)
Oct 13 14:12:07 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:12:08 standalone.localdomain haproxy[70940]: 192.168.122.99:40544 [13/Oct/2025:14:12:07.092] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/908/908 201 482 - - ---- 61/7/6/6/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:12:08 standalone.localdomain podman[159900]: 2025-10-13 14:12:08.007931145 +0000 UTC m=+0.273647507 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-dhcp-agent-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git, io.openshift.expose-services=, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-dhcp-agent, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:54)
Oct 13 14:12:08 standalone.localdomain podman[159903]: 2025-10-13 14:12:08.020950785 +0000 UTC m=+0.275234136 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, vcs-type=git, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, architecture=x86_64, name=rhosp17/openstack-neutron-sriov-agent, version=17.1.9, build-date=2025-07-21T16:03:34, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-neutron-sriov-agent-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, container_name=neutron_sriov_agent)
Oct 13 14:12:08 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:12:08 standalone.localdomain podman[159919]: 2025-10-13 14:12:07.879747982 +0000 UTC m=+0.134495677 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-api-container, architecture=x86_64, container_name=nova_api_cron, io.openshift.expose-services=, tcib_managed=true, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, vcs-type=git, build-date=2025-07-21T16:05:11, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, name=rhosp17/openstack-nova-api)
Oct 13 14:12:08 standalone.localdomain podman[159908]: 2025-10-13 14:12:08.053529354 +0000 UTC m=+0.303339717 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, io.buildah.version=1.33.12, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.component=openstack-barbican-worker-container, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, architecture=x86_64, container_name=barbican_worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, name=rhosp17/openstack-barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', 
'/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, io.openshift.expose-services=, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, build-date=2025-07-21T15:36:22)
Oct 13 14:12:08 standalone.localdomain haproxy[70940]: 172.21.0.2:52002 [13/Oct/2025:14:12:07.860] neutron neutron/standalone.internalapi.localdomain 0/0/0/229/229 204 152 - - ---- 60/1/0/0/0 0/0 "DELETE /v2.0/security-groups/5ed44b59-b8ba-401d-9046-50506d72427d HTTP/1.1"
Oct 13 14:12:08 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:12:08 standalone.localdomain podman[159903]: 2025-10-13 14:12:08.107354826 +0000 UTC m=+0.361638177 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-sriov-agent, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-neutron-sriov-agent-container, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, release=1, tcib_managed=true, io.openshift.expose-services=, build-date=2025-07-21T16:03:34, batch=17.1_20250721.1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969)
Oct 13 14:12:08 standalone.localdomain podman[159919]: 2025-10-13 14:12:08.116498286 +0000 UTC m=+0.371245961 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, com.redhat.component=openstack-nova-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, container_name=nova_api_cron, io.openshift.expose-services=, name=rhosp17/openstack-nova-api, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T16:05:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team)
Oct 13 14:12:08 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:12:08 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:12:08 standalone.localdomain haproxy[70940]: 192.168.122.99:40550 [13/Oct/2025:14:12:07.170] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/967/967 200 779 - - ---- 60/7/6/6/0 0/0 "GET /v3/role_assignments?scope.project.id=fe3dffb53f0b414a8e095b67f132e54d&user.id=00515c3559934eb5861b350991c52f53&effective HTTP/1.1"
Oct 13 14:12:08 standalone.localdomain haproxy[70940]: 192.168.122.99:40556 [13/Oct/2025:14:12:07.346] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1109/1109 201 501 - - ---- 60/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:08 standalone.localdomain haproxy[70940]: 192.168.122.99:40562 [13/Oct/2025:14:12:07.413] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1141/1141 204 165 - - ---- 60/7/6/6/0 0/0 "DELETE /v3/endpoints/c431c5c4a7b14833bea8c9db649273e3 HTTP/1.1"
Oct 13 14:12:08 standalone.localdomain ceph-mon[29756]: pgmap v1115: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:08 standalone.localdomain systemd[1]: tmp-crun.Byuxmu.mount: Deactivated successfully.
Oct 13 14:12:08 standalone.localdomain haproxy[70940]: 172.21.0.2:45320 [13/Oct/2025:14:12:07.478] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1411/1411 201 8562 - - ---- 60/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:08 standalone.localdomain haproxy[70940]: 192.168.122.99:40574 [13/Oct/2025:14:12:07.827] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1144/1144 204 165 - - ---- 60/8/7/7/0 0/0 "DELETE /v3/projects/5f53e3d6e8674d1fae5d3c14fc857e80 HTTP/1.1"
Oct 13 14:12:09 standalone.localdomain haproxy[70940]: 192.168.122.99:40576 [13/Oct/2025:14:12:07.932] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1097/1097 201 576 - - ---- 60/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:09 standalone.localdomain haproxy[70940]: 192.168.122.99:40590 [13/Oct/2025:14:12:08.002] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1080/1080 201 345 - - ---- 60/7/6/6/0 0/0 "PUT /v3/domains/513e43506f734dc9a938f4e6ada3ea99/config HTTP/1.1"
Oct 13 14:12:09 standalone.localdomain haproxy[70940]: 192.168.122.99:40598 [13/Oct/2025:14:12:08.091] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1066/1066 204 165 - - ---- 60/7/6/6/0 0/0 "DELETE /v3/projects/0617fcc814d7403884ae063321c7b217 HTTP/1.1"
Oct 13 14:12:09 standalone.localdomain haproxy[70940]: 192.168.122.99:40606 [13/Oct/2025:14:12:08.138] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1089/1089 404 346 - - ---- 61/7/5/5/0 0/0 "DELETE /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/8404b71aa506493ba9615df19ffdd490 HTTP/1.1"
Oct 13 14:12:09 standalone.localdomain haproxy[70940]: 192.168.122.99:40616 [13/Oct/2025:14:12:08.458] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/836/836 200 3603 - - ---- 60/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:09 standalone.localdomain haproxy[70940]: 172.21.0.2:52008 [13/Oct/2025:14:12:09.159] neutron neutron/standalone.internalapi.localdomain 0/0/0/156/156 200 2976 - - ---- 60/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=b4183140b95c4e72831517ba29d8157a&name=default HTTP/1.1"
Oct 13 14:12:09 standalone.localdomain haproxy[70940]: 192.168.122.99:40622 [13/Oct/2025:14:12:08.557] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/795/795 204 165 - - ---- 60/6/5/5/0 0/0 "DELETE /v3/regions/tempest-region-1219306133 HTTP/1.1"
Oct 13 14:12:09 standalone.localdomain haproxy[70940]: 192.168.122.99:40624 [13/Oct/2025:14:12:08.891] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/495/495 201 576 - - ---- 60/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:09 standalone.localdomain haproxy[70940]: 172.21.0.2:52024 [13/Oct/2025:14:12:09.318] neutron neutron/standalone.internalapi.localdomain 0/0/0/183/183 204 152 - - ---- 60/1/0/0/0 0/0 "DELETE /v2.0/security-groups/a0d251ce-15d5-4e2f-a87a-b9b45f293bb1 HTTP/1.1"
Oct 13 14:12:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1116: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:09 standalone.localdomain haproxy[70940]: 172.21.0.2:45324 [13/Oct/2025:14:12:08.975] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/701/701 201 8548 - - ---- 60/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:12:09 standalone.localdomain haproxy[70940]: 192.168.122.99:40626 [13/Oct/2025:14:12:09.033] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/899/899 201 498 - - ---- 60/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:09 standalone.localdomain haproxy[70940]: 192.168.122.99:40634 [13/Oct/2025:14:12:09.085] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/901/901 200 378 - - ---- 60/7/6/6/0 0/0 "PATCH /v3/domains/513e43506f734dc9a938f4e6ada3ea99/config/identity HTTP/1.1"
Oct 13 14:12:10 standalone.localdomain haproxy[70940]: 192.168.122.99:40636 [13/Oct/2025:14:12:09.230] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/814/814 204 165 - - ---- 61/8/7/7/0 0/0 "DELETE /v3/projects/fe3dffb53f0b414a8e095b67f132e54d/users/00515c3559934eb5861b350991c52f53/roles/7c0bffc586494bf494dbaf620e203c6b HTTP/1.1"
Oct 13 14:12:10 standalone.localdomain haproxy[70940]: 192.168.122.99:40644 [13/Oct/2025:14:12:09.296] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/803/803 204 165 - - ---- 60/7/6/6/0 0/0 "PUT /v3/projects/2c8c1f323f334df2a4b8f7cb2248488b/users/8dd8d7ad6008415084b5cb01b047b4ea/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:10 standalone.localdomain haproxy[70940]: 192.168.122.99:40658 [13/Oct/2025:14:12:09.353] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/804/804 204 165 - - ---- 60/7/6/6/0 0/0 "DELETE /v3/services/e3499078cd8c4418ab5bf995047400fc HTTP/1.1"
Oct 13 14:12:10 standalone.localdomain haproxy[70940]: 192.168.122.99:40662 [13/Oct/2025:14:12:09.388] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1060/1060 201 498 - - ---- 61/8/7/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:10 standalone.localdomain haproxy[70940]: 192.168.122.99:40676 [13/Oct/2025:14:12:09.503] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1002/1002 204 165 - - ---- 60/7/6/6/0 0/0 "DELETE /v3/projects/b4183140b95c4e72831517ba29d8157a HTTP/1.1"
Oct 13 14:12:10 standalone.localdomain ceph-mon[29756]: pgmap v1116: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:10 standalone.localdomain haproxy[70940]: 172.21.0.2:59098 [13/Oct/2025:14:12:09.679] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1177/1177 201 8415 - - ---- 60/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:10 standalone.localdomain haproxy[70940]: 192.168.122.99:56338 [13/Oct/2025:14:12:09.935] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/937/937 200 3603 - - ---- 60/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:10 standalone.localdomain haproxy[70940]: 192.168.122.99:56350 [13/Oct/2025:14:12:09.989] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/930/930 200 379 - - ---- 60/7/6/6/0 0/0 "PATCH /v3/domains/513e43506f734dc9a938f4e6ada3ea99/config/identity/driver HTTP/1.1"
Oct 13 14:12:10 standalone.localdomain haproxy[70940]: 192.168.122.99:56366 [13/Oct/2025:14:12:10.047] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/917/917 201 478 - - ---- 60/7/6/6/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:10 standalone.localdomain haproxy[70940]: 192.168.122.99:56382 [13/Oct/2025:14:12:10.101] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/879/879 200 3603 - - ---- 60/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:11 standalone.localdomain haproxy[70940]: 192.168.122.99:56390 [13/Oct/2025:14:12:10.159] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/870/870 204 165 - - ---- 60/7/6/6/0 0/0 "DELETE /v3/endpoints/1e1607ea58b84a089ab8cdd3d0dd0289 HTTP/1.1"
Oct 13 14:12:11 standalone.localdomain haproxy[70940]: 192.168.122.99:56394 [13/Oct/2025:14:12:10.453] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/598/598 200 3603 - - ---- 60/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:11 standalone.localdomain haproxy[70940]: 172.21.0.2:59108 [13/Oct/2025:14:12:10.509] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/813/813 201 8234 - - ---- 60/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:11 standalone.localdomain haproxy[70940]: 192.168.122.99:56398 [13/Oct/2025:14:12:10.860] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/475/475 200 510 - - ---- 60/7/6/6/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:12:11 standalone.localdomain haproxy[70940]: 192.168.122.99:56404 [13/Oct/2025:14:12:10.874] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/486/486 204 165 - - ---- 60/7/6/6/0 0/0 "PUT /v3/projects/1ac812309ced40e6ae6c399baafa16c0/users/fcea1cdffee440ed9d551e550ea18ad2/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:12:11 standalone.localdomain haproxy[70940]: 192.168.122.99:56414 [13/Oct/2025:14:12:10.924] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/482/482 204 165 - - ---- 60/7/6/6/0 0/0 "DELETE /v3/domains/513e43506f734dc9a938f4e6ada3ea99/config/identity/driver HTTP/1.1"
Oct 13 14:12:11 standalone.localdomain haproxy[70940]: 192.168.122.99:56430 [13/Oct/2025:14:12:10.968] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/484/484 200 610 - - ---- 60/7/6/6/0 0/0 "GET /v3/roles?domain_id=c7ca48b48fe14afc95f21d1df9f56183 HTTP/1.1"
Oct 13 14:12:11 standalone.localdomain haproxy[70940]: 192.168.122.99:56444 [13/Oct/2025:14:12:10.983] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/515/515 204 165 - - ---- 60/7/6/6/0 0/0 "PUT /v3/projects/2c8c1f323f334df2a4b8f7cb2248488b/users/8dd8d7ad6008415084b5cb01b047b4ea/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:11 standalone.localdomain haproxy[70940]: 192.168.122.99:56450 [13/Oct/2025:14:12:11.031] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/543/543 204 165 - - ---- 60/6/5/5/0 0/0 "DELETE /v3/regions/tempest-region-549838454 HTTP/1.1"
Oct 13 14:12:11 standalone.localdomain haproxy[70940]: 192.168.122.99:56454 [13/Oct/2025:14:12:11.053] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/579/579 204 165 - - ---- 60/6/5/5/0 0/0 "PUT /v3/projects/f1aab92925d047599fbe5e2ebedfc878/users/2ce10303f8f140a1bc69a24310c37e0a/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:12:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1117: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:11 standalone.localdomain haproxy[70940]: 172.21.0.2:59112 [13/Oct/2025:14:12:11.325] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/597/597 201 8234 - - ---- 60/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:11 standalone.localdomain haproxy[70940]: 192.168.122.99:56470 [13/Oct/2025:14:12:11.337] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/606/606 201 574 - - ---- 60/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:11 standalone.localdomain haproxy[70940]: 192.168.122.99:56474 [13/Oct/2025:14:12:11.361] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/624/624 200 3603 - - ---- 60/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:12 standalone.localdomain haproxy[70940]: 192.168.122.99:56482 [13/Oct/2025:14:12:11.408] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/662/662 404 380 - - ---- 60/7/6/6/0 0/0 "GET /v3/domains/513e43506f734dc9a938f4e6ada3ea99/config/identity/driver HTTP/1.1"
Oct 13 14:12:12 standalone.localdomain haproxy[70940]: 192.168.122.99:56490 [13/Oct/2025:14:12:11.454] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/676/676 204 165 - - ---- 60/7/6/6/0 0/0 "DELETE /v3/roles/9135167604ac4a1c9eb5757f43b1c8f4 HTTP/1.1"
Oct 13 14:12:12 standalone.localdomain haproxy[70940]: 172.21.0.2:59118 [13/Oct/2025:14:12:11.510] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/927/927 201 8252 - - ---- 60/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:12 standalone.localdomain haproxy[70940]: 192.168.122.99:56502 [13/Oct/2025:14:12:11.576] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/917/917 204 165 - - ---- 60/8/7/7/0 0/0 "DELETE /v3/services/98ee203e970e43ff84ccc4f04d53f50c HTTP/1.1"
Oct 13 14:12:12 standalone.localdomain haproxy[70940]: 192.168.122.99:56518 [13/Oct/2025:14:12:11.635] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/911/911 200 3603 - - ---- 60/8/7/7/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:12 standalone.localdomain haproxy[70940]: 192.168.122.99:56528 [13/Oct/2025:14:12:11.926] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/638/638 200 510 - - ---- 60/8/7/7/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:12:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:12:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/439948426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:12:12 standalone.localdomain ceph-mon[29756]: pgmap v1117: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:12 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/439948426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:12:12 standalone.localdomain haproxy[70940]: 192.168.122.99:56534 [13/Oct/2025:14:12:11.947] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/884/884 201 498 - - ---- 61/9/8/8/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:12 standalone.localdomain haproxy[70940]: 192.168.122.99:56544 [13/Oct/2025:14:12:11.988] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/866/866 204 165 - - ---- 60/8/7/7/0 0/0 "PUT /v3/projects/1ac812309ced40e6ae6c399baafa16c0/users/fcea1cdffee440ed9d551e550ea18ad2/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:12:12 standalone.localdomain haproxy[70940]: 192.168.122.99:56558 [13/Oct/2025:14:12:12.072] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/838/838 204 165 - - ---- 60/8/7/7/0 0/0 "DELETE /v3/domains/513e43506f734dc9a938f4e6ada3ea99/config/identity HTTP/1.1"
Oct 13 14:12:12 standalone.localdomain haproxy[70940]: 192.168.122.99:56560 [13/Oct/2025:14:12:12.132] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/827/827 200 350 - - ---- 61/8/7/7/0 0/0 "GET /v3/roles?domain_id=c7ca48b48fe14afc95f21d1df9f56183 HTTP/1.1"
Oct 13 14:12:12 standalone.localdomain haproxy[70940]: 172.17.0.2:34838 [13/Oct/2025:14:12:12.950] placement placement/standalone.internalapi.localdomain 0/0/0/14/14 200 298 - - ---- 61/1/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/allocations HTTP/1.1"
Oct 13 14:12:13 standalone.localdomain runuser[160271]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:12:13 standalone.localdomain haproxy[70940]: 192.168.122.99:56566 [13/Oct/2025:14:12:12.452] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/618/618 204 165 - - ---- 61/8/7/7/0 0/0 "DELETE /v3/users/8dd8d7ad6008415084b5cb01b047b4ea HTTP/1.1"
Oct 13 14:12:13 standalone.localdomain haproxy[70940]: 192.168.122.99:56570 [13/Oct/2025:14:12:12.494] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/691/691 204 165 - - ---- 61/7/6/6/0 0/0 "DELETE /v3/users/3a357ded3f3d4e379ce776c0f3a29d56 HTTP/1.1"
Oct 13 14:12:13 standalone.localdomain haproxy[70940]: 192.168.122.99:56584 [13/Oct/2025:14:12:12.549] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/690/690 204 165 - - ---- 61/7/6/6/0 0/0 "PUT /v3/projects/f1aab92925d047599fbe5e2ebedfc878/users/2ce10303f8f140a1bc69a24310c37e0a/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:12:13 standalone.localdomain haproxy[70940]: 192.168.122.99:56596 [13/Oct/2025:14:12:12.567] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/728/728 201 600 - - ---- 61/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:13 standalone.localdomain haproxy[70940]: 192.168.122.99:56604 [13/Oct/2025:14:12:12.833] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/516/516 200 3603 - - ---- 61/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:13 standalone.localdomain haproxy[70940]: 192.168.122.99:56616 [13/Oct/2025:14:12:12.855] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/514/514 200 3603 - - ---- 61/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:13 standalone.localdomain haproxy[70940]: 192.168.122.99:56624 [13/Oct/2025:14:12:12.914] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/526/526 404 363 - - ---- 61/7/6/6/0 0/0 "GET /v3/domains/513e43506f734dc9a938f4e6ada3ea99/config/identity HTTP/1.1"
Oct 13 14:12:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:12:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3411195987' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:12:13 standalone.localdomain haproxy[70940]: 192.168.122.99:56638 [13/Oct/2025:14:12:12.963] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/534/534 404 319 - - ---- 62/8/7/6/0 0/0 "DELETE /v3/roles/9135167604ac4a1c9eb5757f43b1c8f4 HTTP/1.1"
Oct 13 14:12:13 standalone.localdomain runuser[160271]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:12:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1118: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:12:13 standalone.localdomain haproxy[70940]: 172.21.0.2:59128 [13/Oct/2025:14:12:13.072] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/701/701 201 8100 - - ---- 61/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:13 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3411195987' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:12:13 standalone.localdomain podman[160361]: 2025-10-13 14:12:13.798805221 +0000 UTC m=+0.071101032 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-northd-container, version=17.1.9, build-date=2025-07-21T13:30:04, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp 
openstack osp-17.1, vcs-type=git, config_id=ovn_cluster_northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, distribution-scope=public, tcib_managed=true, container_name=ovn_cluster_northd, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, architecture=x86_64)
Oct 13 14:12:13 standalone.localdomain systemd[149397]: Starting Mark boot as successful...
Oct 13 14:12:13 standalone.localdomain systemd[149397]: Finished Mark boot as successful.
Oct 13 14:12:13 standalone.localdomain haproxy[70940]: 192.168.122.99:56648 [13/Oct/2025:14:12:13.187] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/643/643 204 165 - - ---- 61/7/6/6/0 0/0 "DELETE /v3/users/d0415f0af4dd4a22aaf2124bacf1cae8 HTTP/1.1"
Oct 13 14:12:13 standalone.localdomain podman[160361]: 2025-10-13 14:12:13.835991502 +0000 UTC m=+0.108287303 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.buildah.version=1.33.12, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:30:04, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=ovn_cluster_northd, container_name=ovn_cluster_northd, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-northd, name=rhosp17/openstack-ovn-northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-northd-container)
Oct 13 14:12:13 standalone.localdomain runuser[160381]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:12:13 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:12:13 standalone.localdomain haproxy[70940]: 192.168.122.99:56664 [13/Oct/2025:14:12:13.242] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/636/636 200 3603 - - ---- 62/7/6/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:14 standalone.localdomain haproxy[70940]: 192.168.122.99:56674 [13/Oct/2025:14:12:13.297] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/849/849 201 511 - - ---- 61/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:14 standalone.localdomain haproxy[70940]: 192.168.122.99:56680 [13/Oct/2025:14:12:13.352] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/829/829 204 165 - - ---- 61/6/5/5/0 0/0 "PUT /v3/projects/5128a58785f94eca8da2f446be10caf7/users/de844f7880684129a21bbf09ac52abb2/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:14 standalone.localdomain haproxy[70940]: 192.168.122.99:56692 [13/Oct/2025:14:12:13.370] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/866/866 204 165 - - ---- 61/6/5/5/0 0/0 "PUT /v3/projects/1ac812309ced40e6ae6c399baafa16c0/users/fcea1cdffee440ed9d551e550ea18ad2/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:14 standalone.localdomain haproxy[70940]: 192.168.122.99:56694 [13/Oct/2025:14:12:13.443] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/844/844 204 165 - - ---- 61/5/4/4/0 0/0 "DELETE /v3/domains/513e43506f734dc9a938f4e6ada3ea99/config HTTP/1.1"
Oct 13 14:12:14 standalone.localdomain haproxy[70940]: 192.168.122.99:56706 [13/Oct/2025:14:12:13.502] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/843/843 204 165 - - ---- 61/5/4/4/0 0/0 "PUT /v3/domains/c7ca48b48fe14afc95f21d1df9f56183/groups/565b6f9688f1414498e7d7c16c7b0c08/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:14 standalone.localdomain runuser[160381]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:12:14 standalone.localdomain runuser[160443]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:12:14 standalone.localdomain haproxy[70940]: 172.17.0.2:48824 [13/Oct/2025:14:12:13.784] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/642/642 200 8095 - - ---- 61/3/2/2/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:14 standalone.localdomain haproxy[70940]: 172.21.0.2:52546 [13/Oct/2025:14:12:13.778] neutron neutron/standalone.internalapi.localdomain 0/0/0/768/768 200 2976 - - ---- 61/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=2c8c1f323f334df2a4b8f7cb2248488b&name=default HTTP/1.1"
Oct 13 14:12:14 standalone.localdomain haproxy[70940]: 172.21.0.2:59138 [13/Oct/2025:14:12:13.832] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/865/865 201 8100 - - ---- 61/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:12:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:12:14 standalone.localdomain haproxy[70940]: 192.168.122.99:56720 [13/Oct/2025:14:12:13.881] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/846/846 204 165 - - ---- 61/5/4/4/0 0/0 "PUT /v3/projects/f1aab92925d047599fbe5e2ebedfc878/users/2ce10303f8f140a1bc69a24310c37e0a/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:14 standalone.localdomain haproxy[70940]: 172.21.0.2:52560 [13/Oct/2025:14:12:14.548] neutron neutron/standalone.internalapi.localdomain 0/0/0/185/185 204 152 - - ---- 61/2/1/1/0 0/0 "DELETE /v2.0/security-groups/431de9ed-cb9b-4d30-89c3-605527fb5668 HTTP/1.1"
Oct 13 14:12:14 standalone.localdomain haproxy[70940]: 192.168.122.99:56730 [13/Oct/2025:14:12:14.148] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/626/626 200 3603 - - ---- 61/5/4/4/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:14 standalone.localdomain ceph-mon[29756]: pgmap v1118: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:14 standalone.localdomain podman[160491]: 2025-10-13 14:12:14.788663531 +0000 UTC m=+0.059612620 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step2, build-date=2025-07-21T12:58:45, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, container_name=clustercheck, vcs-type=git, name=rhosp17/openstack-mariadb, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 14:12:14 standalone.localdomain haproxy[70940]: 192.168.122.99:56736 [13/Oct/2025:14:12:14.183] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/608/608 200 3603 - - ---- 61/5/4/4/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:14 standalone.localdomain podman[160491]: 2025-10-13 14:12:14.834962811 +0000 UTC m=+0.105911950 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, release=1, config_id=tripleo_step2, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:45, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.component=openstack-mariadb-container, tcib_managed=true, distribution-scope=public, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=clustercheck, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 14:12:14 standalone.localdomain podman[160490]: 2025-10-13 14:12:14.853474589 +0000 UTC m=+0.123351835 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, container_name=nova_compute, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=tripleo_step5, version=17.1.9, architecture=x86_64, build-date=2025-07-21T14:48:37)
Oct 13 14:12:14 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:12:14 standalone.localdomain podman[160490]: 2025-10-13 14:12:14.885804661 +0000 UTC m=+0.155681907 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, io.buildah.version=1.33.12, architecture=x86_64, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container)
Oct 13 14:12:14 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:12:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:12:15 standalone.localdomain runuser[160443]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:12:15 standalone.localdomain haproxy[70940]: 172.21.0.2:59140 [13/Oct/2025:14:12:14.245] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/856/856 201 8174 - - ---- 61/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:15 standalone.localdomain haproxy[70940]: 192.168.122.99:56746 [13/Oct/2025:14:12:14.290] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/871/871 200 478 - - ---- 61/6/5/5/0 0/0 "PATCH /v3/domains/513e43506f734dc9a938f4e6ada3ea99 HTTP/1.1"
Oct 13 14:12:15 standalone.localdomain haproxy[70940]: 192.168.122.99:56754 [13/Oct/2025:14:12:14.346] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/859/859 200 612 - - ---- 61/6/5/5/0 0/0 "GET /v3/domains/c7ca48b48fe14afc95f21d1df9f56183/groups/565b6f9688f1414498e7d7c16c7b0c08/roles HTTP/1.1"
Oct 13 14:12:15 standalone.localdomain haproxy[70940]: 172.17.0.2:48824 [13/Oct/2025:14:12:14.701] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/581/581 200 8095 - - ---- 61/2/1/1/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:15 standalone.localdomain haproxy[70940]: 172.21.0.2:52562 [13/Oct/2025:14:12:14.699] neutron neutron/standalone.internalapi.localdomain 0/0/0/764/764 200 2976 - - ---- 61/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=dafccc5472334499b3f1b5b59042ec7d&name=default HTTP/1.1"
Oct 13 14:12:15 standalone.localdomain haproxy[70940]: 172.21.0.2:59146 [13/Oct/2025:14:12:14.732] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/880/880 201 8174 - - ---- 61/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:15 standalone.localdomain haproxy[70940]: 172.21.0.2:52568 [13/Oct/2025:14:12:15.465] neutron neutron/standalone.internalapi.localdomain 0/0/0/167/167 204 152 - - ---- 61/1/0/0/0 0/0 "DELETE /v2.0/security-groups/f3423536-9c35-41dd-85b1-4790fc878201 HTTP/1.1"
Oct 13 14:12:15 standalone.localdomain haproxy[70940]: 192.168.122.99:56760 [13/Oct/2025:14:12:14.736] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/939/939 204 165 - - ---- 61/8/7/7/0 0/0 "DELETE /v3/projects/2c8c1f323f334df2a4b8f7cb2248488b HTTP/1.1"
Oct 13 14:12:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1119: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:15 standalone.localdomain haproxy[70940]: 192.168.122.99:56770 [13/Oct/2025:14:12:14.776] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/965/965 204 165 - - ---- 60/7/6/6/0 0/0 "PUT /v3/projects/625d325fc7e34c579a4ac6c402228c21/users/5de097abad124c08a8d74282f061da3d/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:15 standalone.localdomain haproxy[70940]: 192.168.122.99:56772 [13/Oct/2025:14:12:14.794] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/988/988 204 165 - - ---- 60/7/6/6/0 0/0 "PUT /v3/projects/5128a58785f94eca8da2f446be10caf7/users/de844f7880684129a21bbf09ac52abb2/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:15 standalone.localdomain haproxy[70940]: 192.168.122.99:56778 [13/Oct/2025:14:12:15.104] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/733/733 201 536 - - ---- 60/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:15 standalone.localdomain ceph-mon[29756]: pgmap v1119: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:15 standalone.localdomain haproxy[70940]: 192.168.122.99:56788 [13/Oct/2025:14:12:15.164] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/788/788 204 165 - - ---- 61/7/5/5/0 0/0 "DELETE /v3/domains/513e43506f734dc9a938f4e6ada3ea99 HTTP/1.1"
Oct 13 14:12:15 standalone.localdomain haproxy[70940]: 192.168.122.99:56796 [13/Oct/2025:14:12:15.207] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/786/786 204 165 - - ---- 60/6/5/5/0 0/0 "HEAD /v3/domains/c7ca48b48fe14afc95f21d1df9f56183/groups/565b6f9688f1414498e7d7c16c7b0c08/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:16 standalone.localdomain haproxy[70940]: 192.168.122.99:56804 [13/Oct/2025:14:12:15.615] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/416/416 201 446 - - ---- 60/6/5/5/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:12:16 standalone.localdomain haproxy[70940]: 192.168.122.99:56812 [13/Oct/2025:14:12:15.634] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/476/476 204 165 - - ---- 60/6/5/5/0 0/0 "DELETE /v3/projects/dafccc5472334499b3f1b5b59042ec7d HTTP/1.1"
Oct 13 14:12:16 standalone.localdomain haproxy[70940]: 192.168.122.99:56820 [13/Oct/2025:14:12:15.743] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/450/450 200 3603 - - ---- 60/5/4/4/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:16 standalone.localdomain haproxy[70940]: 172.21.0.2:52572 [13/Oct/2025:14:12:16.113] neutron neutron/standalone.internalapi.localdomain 0/0/0/171/171 200 2976 - - ---- 60/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=9337f12d85fb43dfb41b789d366e25b0&name=default HTTP/1.1"
Oct 13 14:12:16 standalone.localdomain haproxy[70940]: 172.21.0.2:52580 [13/Oct/2025:14:12:16.287] neutron neutron/standalone.internalapi.localdomain 0/0/0/188/188 204 152 - - ---- 60/1/0/0/0 0/0 "DELETE /v2.0/security-groups/2f6b0e24-f759-472f-908c-406cdca8f755 HTTP/1.1"
Oct 13 14:12:16 standalone.localdomain haproxy[70940]: 172.21.0.2:59152 [13/Oct/2025:14:12:15.792] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/715/715 201 8244 - - ---- 60/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:16 standalone.localdomain haproxy[70940]: 192.168.122.99:56828 [13/Oct/2025:14:12:15.840] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1002/1002 201 617 - - ---- 60/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:16 standalone.localdomain haproxy[70940]: 192.168.122.99:56830 [13/Oct/2025:14:12:15.958] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/976/976 200 1730 - - ---- 60/7/6/6/0 0/0 "GET /v3/domains/config/default HTTP/1.1"
Oct 13 14:12:17 standalone.localdomain haproxy[70940]: 192.168.122.99:56836 [13/Oct/2025:14:12:15.996] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1020/1020 204 165 - - ---- 60/7/6/6/0 0/0 "DELETE /v3/domains/c7ca48b48fe14afc95f21d1df9f56183/groups/565b6f9688f1414498e7d7c16c7b0c08/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:17 standalone.localdomain haproxy[70940]: 192.168.122.99:56842 [13/Oct/2025:14:12:16.034] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1055/1055 204 165 - - ---- 60/7/6/6/0 0/0 "DELETE /v3/services/53a22e7a5eb84e649f7f87dae76bd961 HTTP/1.1"
Oct 13 14:12:17 standalone.localdomain haproxy[70940]: 192.168.122.99:56850 [13/Oct/2025:14:12:16.196] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/953/953 204 165 - - ---- 60/7/6/6/0 0/0 "PUT /v3/projects/625d325fc7e34c579a4ac6c402228c21/users/5de097abad124c08a8d74282f061da3d/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:17 standalone.localdomain haproxy[70940]: 192.168.122.99:56862 [13/Oct/2025:14:12:16.479] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/748/748 204 165 - - ---- 60/6/5/5/0 0/0 "DELETE /v3/projects/9337f12d85fb43dfb41b789d366e25b0 HTTP/1.1"
Oct 13 14:12:17 standalone.localdomain haproxy[70940]: 192.168.122.99:56876 [13/Oct/2025:14:12:16.511] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/764/764 201 572 - - ---- 61/6/5/4/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:17 standalone.localdomain haproxy[70940]: 192.168.122.99:56878 [13/Oct/2025:14:12:16.847] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/482/482 201 451 - - ---- 60/5/4/4/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:17 standalone.localdomain haproxy[70940]: 192.168.122.99:56892 [13/Oct/2025:14:12:16.937] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/440/440 200 265 - - ---- 60/5/4/4/0 0/0 "GET /v3/domains/config/identity/default HTTP/1.1"
Oct 13 14:12:17 standalone.localdomain haproxy[70940]: 192.168.122.99:56908 [13/Oct/2025:14:12:17.021] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/412/412 204 165 - - ---- 60/5/4/4/0 0/0 "PUT /v3/projects/fe3dffb53f0b414a8e095b67f132e54d/groups/565b6f9688f1414498e7d7c16c7b0c08/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:17 standalone.localdomain haproxy[70940]: 192.168.122.99:56914 [13/Oct/2025:14:12:17.093] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/386/386 201 494 - - ---- 60/5/4/4/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:12:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1120: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:17 standalone.localdomain haproxy[70940]: 172.21.0.2:59160 [13/Oct/2025:14:12:17.158] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/643/643 201 8271 - - ---- 60/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:18 standalone.localdomain haproxy[70940]: 172.21.0.2:59170 [13/Oct/2025:14:12:17.230] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/882/882 201 8233 - - ---- 60/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:18 standalone.localdomain haproxy[70940]: 192.168.122.99:56922 [13/Oct/2025:14:12:17.277] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1092/1092 201 496 - - ---- 60/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:18 standalone.localdomain haproxy[70940]: 192.168.122.99:56928 [13/Oct/2025:14:12:17.330] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1100/1100 201 453 - - ---- 60/6/5/5/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:18 standalone.localdomain haproxy[70940]: 192.168.122.99:56934 [13/Oct/2025:14:12:17.379] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1111/1111 200 231 - - ---- 60/6/5/5/0 0/0 "GET /v3/domains/config/identity/driver/default HTTP/1.1"
Oct 13 14:12:18 standalone.localdomain haproxy[70940]: 192.168.122.99:56938 [13/Oct/2025:14:12:17.437] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1119/1119 200 613 - - ---- 60/6/5/5/0 0/0 "GET /v3/projects/fe3dffb53f0b414a8e095b67f132e54d/groups/565b6f9688f1414498e7d7c16c7b0c08/roles HTTP/1.1"
Oct 13 14:12:18 standalone.localdomain haproxy[70940]: 192.168.122.99:56944 [13/Oct/2025:14:12:17.482] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1104/1104 200 485 - - ---- 60/6/5/5/0 0/0 "PATCH /v3/services/d0546825111d4d78bd1cfd74dab412cf HTTP/1.1"
Oct 13 14:12:18 standalone.localdomain ceph-mon[29756]: pgmap v1120: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:18 standalone.localdomain haproxy[70940]: 192.168.122.99:56946 [13/Oct/2025:14:12:17.805] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1095/1095 201 995 - - ---- 60/6/5/5/0 0/0 "POST /v3/users/5de097abad124c08a8d74282f061da3d/application_credentials HTTP/1.1"
Oct 13 14:12:19 standalone.localdomain haproxy[70940]: 172.21.0.2:59184 [13/Oct/2025:14:12:18.115] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1061/1061 201 8233 - - ---- 60/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:19 standalone.localdomain haproxy[70940]: 192.168.122.99:56960 [13/Oct/2025:14:12:18.373] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/821/821 200 4075 - - ---- 60/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:19 standalone.localdomain haproxy[70940]: 192.168.122.99:56974 [13/Oct/2025:14:12:18.432] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/801/801 204 165 - - ---- 60/7/6/6/0 0/0 "PUT /v3/projects/ff59c404f4e34798b46c8a4591a56f7e/users/6c4a2a05ad284cc390f87dd761410ab1/roles/fa4dbfc9ce7f4152baf782ed2970e410 HTTP/1.1"
Oct 13 14:12:19 standalone.localdomain haproxy[70940]: 192.168.122.99:56986 [13/Oct/2025:14:12:18.495] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/819/819 200 234 - - ---- 60/7/6/6/0 0/0 "GET /v3/domains/config/identity/list_limit/default HTTP/1.1"
Oct 13 14:12:19 standalone.localdomain haproxy[70940]: 192.168.122.99:56988 [13/Oct/2025:14:12:18.561] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/865/865 204 165 - - ---- 61/7/6/6/0 0/0 "PUT /v3/groups/565b6f9688f1414498e7d7c16c7b0c08/users/00515c3559934eb5861b350991c52f53 HTTP/1.1"
Oct 13 14:12:19 standalone.localdomain haproxy[70940]: 192.168.122.99:56992 [13/Oct/2025:14:12:18.589] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/911/911 200 485 - - ---- 60/6/5/5/0 0/0 "GET /v3/services/d0546825111d4d78bd1cfd74dab412cf HTTP/1.1"
Oct 13 14:12:19 standalone.localdomain haproxy[70940]: 192.168.122.99:56998 [13/Oct/2025:14:12:18.904] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/650/650 200 890 - - ---- 60/6/5/5/0 0/0 "GET /v3/users/5de097abad124c08a8d74282f061da3d/application_credentials/5b33548ba9d74051a1cc513e391fddee HTTP/1.1"
Oct 13 14:12:19 standalone.localdomain haproxy[70940]: 192.168.122.99:57000 [13/Oct/2025:14:12:19.179] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/420/420 200 510 - - ---- 60/5/4/4/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:12:19 standalone.localdomain haproxy[70940]: 192.168.122.99:57008 [13/Oct/2025:14:12:19.197] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/430/430 204 165 - - ---- 60/5/4/4/0 0/0 "PUT /v3/projects/a23b4d807f024441940b662138ce53ea/users/5fee77256fad4ce08187671918ae31bb/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:12:19 standalone.localdomain haproxy[70940]: 192.168.122.99:57018 [13/Oct/2025:14:12:19.236] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/441/441 204 165 - - ---- 60/5/4/4/0 0/0 "PUT /v3/projects/ff59c404f4e34798b46c8a4591a56f7e/users/6c4a2a05ad284cc390f87dd761410ab1/roles/10cce3b904f140a0aeb00bf9f3ad55ac HTTP/1.1"
Oct 13 14:12:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1121: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:19 standalone.localdomain haproxy[70940]: 192.168.122.99:57032 [13/Oct/2025:14:12:19.316] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/401/401 200 1679 - - ---- 60/5/4/4/0 0/0 "GET /v3/domains/config/ldap/default HTTP/1.1"
Oct 13 14:12:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:12:20 standalone.localdomain haproxy[70940]: 172.21.0.2:59190 [13/Oct/2025:14:12:19.429] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/613/613 201 8260 - - ---- 60/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:20 standalone.localdomain haproxy[70940]: 192.168.122.99:57048 [13/Oct/2025:14:12:19.503] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/606/606 204 165 - - ---- 60/6/5/5/0 0/0 "DELETE /v3/services/d0546825111d4d78bd1cfd74dab412cf HTTP/1.1"
Oct 13 14:12:20 standalone.localdomain haproxy[70940]: 172.21.0.2:59200 [13/Oct/2025:14:12:19.555] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/882/883 201 8315 - - ---- 60/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:20 standalone.localdomain haproxy[70940]: 192.168.122.99:52518 [13/Oct/2025:14:12:19.601] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/945/945 201 578 - - ---- 60/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:20 standalone.localdomain haproxy[70940]: 192.168.122.99:52520 [13/Oct/2025:14:12:19.630] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/966/966 200 4075 - - ---- 60/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:20 standalone.localdomain haproxy[70940]: 192.168.122.99:52530 [13/Oct/2025:14:12:19.680] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/987/987 200 9049 - - ---- 60/7/6/6/0 0/0 "GET /v3/users HTTP/1.1"
Oct 13 14:12:20 standalone.localdomain haproxy[70940]: 192.168.122.99:52546 [13/Oct/2025:14:12:19.720] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1008/1008 200 241 - - ---- 60/6/5/5/0 0/0 "GET /v3/domains/config/ldap/url/default HTTP/1.1"
Oct 13 14:12:20 standalone.localdomain ceph-mon[29756]: pgmap v1121: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:20 standalone.localdomain haproxy[70940]: 192.168.122.99:52554 [13/Oct/2025:14:12:20.044] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/751/751 204 165 - - ---- 60/6/5/5/0 0/0 "HEAD /v3/projects/fe3dffb53f0b414a8e095b67f132e54d/groups/565b6f9688f1414498e7d7c16c7b0c08/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:20 standalone.localdomain haproxy[70940]: 192.168.122.99:52564 [13/Oct/2025:14:12:20.111] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/754/754 404 322 - - ---- 60/6/5/5/0 0/0 "GET /v3/services/d0546825111d4d78bd1cfd74dab412cf HTTP/1.1"
Oct 13 14:12:20 standalone.localdomain haproxy[70940]: 192.168.122.99:52580 [13/Oct/2025:14:12:20.440] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/474/474 204 165 - - ---- 60/6/5/5/0 0/0 "DELETE /v3/users/5de097abad124c08a8d74282f061da3d/application_credentials/5b33548ba9d74051a1cc513e391fddee HTTP/1.1"
Oct 13 14:12:21 standalone.localdomain haproxy[70940]: 192.168.122.99:52596 [13/Oct/2025:14:12:20.547] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/629/629 201 500 - - ---- 60/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:21 standalone.localdomain haproxy[70940]: 192.168.122.99:52610 [13/Oct/2025:14:12:20.598] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/601/601 204 165 - - ---- 60/6/5/5/0 0/0 "PUT /v3/projects/a23b4d807f024441940b662138ce53ea/users/5fee77256fad4ce08187671918ae31bb/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:12:21 standalone.localdomain haproxy[70940]: 172.21.0.2:42178 [13/Oct/2025:14:12:20.669] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/835/835 201 8139 - - ---- 60/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:21 standalone.localdomain haproxy[70940]: 192.168.122.99:52622 [13/Oct/2025:14:12:20.733] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/813/813 200 228 - - ---- 60/6/5/5/0 0/0 "GET /v3/domains/config/ldap/user/default HTTP/1.1"
Oct 13 14:12:21 standalone.localdomain haproxy[70940]: 192.168.122.99:52636 [13/Oct/2025:14:12:20.799] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/811/811 204 165 - - ---- 60/6/5/5/0 0/0 "DELETE /v3/projects/fe3dffb53f0b414a8e095b67f132e54d/groups/565b6f9688f1414498e7d7c16c7b0c08/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:21 standalone.localdomain haproxy[70940]: 192.168.122.99:52644 [13/Oct/2025:14:12:20.869] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/789/789 201 480 - - ---- 60/6/5/5/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:12:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1122: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:12:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:12:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:12:21 standalone.localdomain systemd[1]: tmp-crun.ajCXT7.mount: Deactivated successfully.
Oct 13 14:12:21 standalone.localdomain podman[160880]: 2025-10-13 14:12:21.831087163 +0000 UTC m=+0.086096812 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, vcs-type=git, version=17.1.9, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, build-date=2025-07-21T15:58:55, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=cinder_api_cron, description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-cinder-api, com.redhat.component=openstack-cinder-api-container, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, batch=17.1_20250721.1)
Oct 13 14:12:21 standalone.localdomain podman[160880]: 2025-10-13 14:12:21.835917851 +0000 UTC m=+0.090927450 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, io.buildah.version=1.33.12, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=cinder_api_cron, com.redhat.component=openstack-cinder-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, config_id=tripleo_step4, build-date=2025-07-21T15:58:55, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, name=rhosp17/openstack-cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1)
Oct 13 14:12:21 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:12:21 standalone.localdomain podman[160881]: 2025-10-13 14:12:21.887000539 +0000 UTC m=+0.137589533 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, container_name=cinder_scheduler, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, vendor=Red Hat, Inc., io.buildah.version=1.33.12, name=rhosp17/openstack-cinder-scheduler, com.redhat.component=openstack-cinder-scheduler-container, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, batch=17.1_20250721.1, build-date=2025-07-21T16:10:12, vcs-type=git)
Oct 13 14:12:21 standalone.localdomain podman[160879]: 2025-10-13 14:12:21.928734669 +0000 UTC m=+0.187879625 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T13:27:15, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, version=17.1.9, com.redhat.component=openstack-iscsid-container, distribution-scope=public)
Oct 13 14:12:21 standalone.localdomain podman[160879]: 2025-10-13 14:12:21.934454334 +0000 UTC m=+0.193599240 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, release=1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, container_name=iscsid, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-iscsid-container)
Oct 13 14:12:21 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:12:21 standalone.localdomain podman[160881]: 2025-10-13 14:12:21.949922819 +0000 UTC m=+0.200511823 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-cinder-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:10:12, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-scheduler-container, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, container_name=cinder_scheduler, distribution-scope=public, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, vendor=Red Hat, Inc.)
Oct 13 14:12:21 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:12:22 standalone.localdomain haproxy[70940]: 192.168.122.99:52656 [13/Oct/2025:14:12:20.920] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1098/1098 201 1020 - - ---- 60/6/5/5/0 0/0 "POST /v3/users/5de097abad124c08a8d74282f061da3d/application_credentials HTTP/1.1"
Oct 13 14:12:22 standalone.localdomain haproxy[70940]: 192.168.122.99:52672 [13/Oct/2025:14:12:21.177] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/941/941 200 4075 - - ---- 60/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:22 standalone.localdomain haproxy[70940]: 192.168.122.99:52682 [13/Oct/2025:14:12:21.202] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/946/946 200 4075 - - ---- 61/7/6/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:22 standalone.localdomain haproxy[70940]: 172.21.0.2:42186 [13/Oct/2025:14:12:21.512] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/955/955 201 8305 - - ---- 61/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:22 standalone.localdomain haproxy[70940]: 192.168.122.99:52694 [13/Oct/2025:14:12:21.549] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1012/1012 200 245 - - ---- 60/7/6/6/0 0/0 "GET /v3/domains/config/ldap/suffix/default HTTP/1.1"
Oct 13 14:12:22 standalone.localdomain haproxy[70940]: 192.168.122.99:52710 [13/Oct/2025:14:12:21.613] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1036/1036 204 165 - - ---- 60/7/6/6/0 0/0 "DELETE /v3/groups/565b6f9688f1414498e7d7c16c7b0c08/users/00515c3559934eb5861b350991c52f53 HTTP/1.1"
Oct 13 14:12:22 standalone.localdomain haproxy[70940]: 192.168.122.99:52726 [13/Oct/2025:14:12:21.661] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1066/1066 201 480 - - ---- 60/7/6/6/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:12:22 standalone.localdomain ceph-mon[29756]: pgmap v1122: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:22 standalone.localdomain haproxy[70940]: 192.168.122.99:52742 [13/Oct/2025:14:12:22.020] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/798/798 204 165 - - ---- 60/7/6/6/0 0/0 "DELETE /v3/users/5de097abad124c08a8d74282f061da3d/application_credentials/d44fee4be99c46d7b2ca3f3074526015 HTTP/1.1"
Oct 13 14:12:22 standalone.localdomain haproxy[70940]: 192.168.122.99:52758 [13/Oct/2025:14:12:22.119] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/806/806 204 165 - - ---- 60/7/6/6/0 0/0 "PUT /v3/projects/e19449e443e14a8b861fc92b2d1714d7/users/0b3c8f3c6e62469c8e5b71c548cb8980/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:23 standalone.localdomain haproxy[70940]: 192.168.122.99:52764 [13/Oct/2025:14:12:22.151] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/861/861 204 165 - - ---- 59/7/6/6/0 0/0 "PUT /v3/projects/a23b4d807f024441940b662138ce53ea/users/5fee77256fad4ce08187671918ae31bb/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:12:23
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', 'manila_data', 'manila_metadata', 'backups', 'vms', 'images', '.mgr']
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:12:23 standalone.localdomain haproxy[70940]: 192.168.122.99:52772 [13/Oct/2025:14:12:22.469] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/720/720 201 1041 - - ---- 59/6/5/5/0 0/0 "POST /v3/OS-TRUST/trusts HTTP/1.1"
Oct 13 14:12:23 standalone.localdomain haproxy[70940]: 192.168.122.99:52778 [13/Oct/2025:14:12:22.565] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/692/692 200 236 - - ---- 59/6/5/5/0 0/0 "GET /v3/domains/config/ldap/query_scope/default HTTP/1.1"
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:12:23 standalone.localdomain haproxy[70940]: 192.168.122.99:52786 [13/Oct/2025:14:12:22.656] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/652/652 204 165 - - ---- 59/6/5/5/0 0/0 "PUT /v3/system/groups/565b6f9688f1414498e7d7c16c7b0c08/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:23 standalone.localdomain haproxy[70940]: 192.168.122.99:52802 [13/Oct/2025:14:12:22.730] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/627/627 201 481 - - ---- 59/6/5/5/0 0/0 "POST /v3/services HTTP/1.1"
Oct 13 14:12:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1123: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:23 standalone.localdomain haproxy[70940]: 192.168.122.99:52812 [13/Oct/2025:14:12:22.822] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/874/874 201 994 - - ---- 59/6/5/5/0 0/0 "POST /v3/users/5de097abad124c08a8d74282f061da3d/application_credentials HTTP/1.1"
Oct 13 14:12:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:12:23 standalone.localdomain haproxy[70940]: 192.168.122.99:52814 [13/Oct/2025:14:12:22.928] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/835/835 200 4075 - - ---- 59/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:23 standalone.localdomain systemd[1]: tmp-crun.QGSDwA.mount: Deactivated successfully.
Oct 13 14:12:23 standalone.localdomain podman[160958]: 2025-10-13 14:12:23.841585146 +0000 UTC m=+0.110659866 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, maintainer=OpenStack TripleO Team, container_name=keystone, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-keystone-container, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, tcib_managed=true, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, release=1, io.buildah.version=1.33.12)
Oct 13 14:12:24 standalone.localdomain haproxy[70940]: 172.21.0.2:42200 [13/Oct/2025:14:12:23.021] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1089/1089 201 8669 - - ---- 59/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:24 standalone.localdomain haproxy[70940]: 192.168.122.99:52830 [13/Oct/2025:14:12:23.192] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/994/994 200 764 - - ---- 59/7/6/6/0 0/0 "GET /v3/OS-TRUST/trusts/ HTTP/1.1"
Oct 13 14:12:24 standalone.localdomain haproxy[70940]: 192.168.122.99:52844 [13/Oct/2025:14:12:23.259] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/963/963 200 230 - - ---- 59/7/6/6/0 0/0 "GET /v3/domains/config/ldap/page_size/default HTTP/1.1"
Oct 13 14:12:24 standalone.localdomain haproxy[70940]: 192.168.122.99:52858 [13/Oct/2025:14:12:23.310] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/956/956 200 578 - - ---- 59/7/6/6/0 0/0 "GET /v3/system/groups/565b6f9688f1414498e7d7c16c7b0c08/roles HTTP/1.1"
Oct 13 14:12:24 standalone.localdomain haproxy[70940]: 192.168.122.99:52866 [13/Oct/2025:14:12:23.358] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/934/934 200 3948 - - ---- 60/8/7/6/0 0/0 "GET /v3/services HTTP/1.1"
Oct 13 14:12:24 standalone.localdomain haproxy[70940]: 192.168.122.99:52876 [13/Oct/2025:14:12:23.697] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/895/895 201 996 - - ---- 60/8/6/6/0 0/0 "POST /v3/users/5de097abad124c08a8d74282f061da3d/application_credentials HTTP/1.1"
Oct 13 14:12:24 standalone.localdomain haproxy[70940]: 192.168.122.99:52892 [13/Oct/2025:14:12:23.766] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/849/849 204 165 - - ---- 59/7/6/6/0 0/0 "PUT /v3/projects/e19449e443e14a8b861fc92b2d1714d7/users/0b3c8f3c6e62469c8e5b71c548cb8980/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:24 standalone.localdomain podman[160958]: 2025-10-13 14:12:24.648971626 +0000 UTC m=+0.918046326 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, container_name=keystone, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, build-date=2025-07-21T13:27:18, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, vcs-type=git, com.redhat.component=openstack-keystone-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:12:24 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:12:24 standalone.localdomain haproxy[70940]: 192.168.122.99:52894 [13/Oct/2025:14:12:24.115] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/555/555 201 483 - - ---- 59/6/5/5/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:12:24 standalone.localdomain haproxy[70940]: 192.168.122.99:52910 [13/Oct/2025:14:12:24.188] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/581/581 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/OS-TRUST/trusts/7dc97903f71d441fa54af49feadd9a93 HTTP/1.1"
Oct 13 14:12:24 standalone.localdomain ceph-mon[29756]: pgmap v1123: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:24 standalone.localdomain haproxy[70940]: 192.168.122.99:52912 [13/Oct/2025:14:12:24.223] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/614/614 200 248 - - ---- 59/6/5/5/0 0/0 "GET /v3/domains/config/ldap/alias_dereferencing/default HTTP/1.1"
Oct 13 14:12:24 standalone.localdomain haproxy[70940]: 192.168.122.99:52918 [13/Oct/2025:14:12:24.268] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/628/628 204 165 - - ---- 59/6/5/5/0 0/0 "HEAD /v3/system/groups/565b6f9688f1414498e7d7c16c7b0c08/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:12:24 standalone.localdomain haproxy[70940]: 192.168.122.99:52924 [13/Oct/2025:14:12:24.294] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/651/651 200 617 - - ---- 59/6/5/5/0 0/0 "GET /v3/services?type=tempest-ServicesTestJSON-Type-959586780 HTTP/1.1"
Oct 13 14:12:24 standalone.localdomain haproxy[70940]: 192.168.122.99:52926 [13/Oct/2025:14:12:24.596] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/402/402 200 1703 - - ---- 59/6/5/5/0 0/0 "GET /v3/users/5de097abad124c08a8d74282f061da3d/application_credentials HTTP/1.1"
Oct 13 14:12:25 standalone.localdomain haproxy[70940]: 172.21.0.2:42212 [13/Oct/2025:14:12:24.623] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/688/688 201 8615 - - ---- 58/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:25 standalone.localdomain runuser[161003]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:12:25 standalone.localdomain haproxy[70940]: 192.168.122.99:52934 [13/Oct/2025:14:12:24.672] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/996/996 201 555 - - ---- 58/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1124: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:25 standalone.localdomain haproxy[70940]: 192.168.122.99:52950 [13/Oct/2025:14:12:24.773] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/929/929 404 320 - - ---- 58/7/6/6/0 0/0 "GET /v3/OS-TRUST/trusts/7dc97903f71d441fa54af49feadd9a93 HTTP/1.1"
Oct 13 14:12:25 standalone.localdomain haproxy[70940]: 192.168.122.99:52956 [13/Oct/2025:14:12:24.839] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/891/891 200 235 - - ---- 58/7/6/6/0 0/0 "GET /v3/domains/config/ldap/debug_level/default HTTP/1.1"
Oct 13 14:12:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:12:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:12:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:12:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:12:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:12:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:12:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:12:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:12:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:12:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:12:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:12:25 standalone.localdomain haproxy[70940]: 192.168.122.99:52964 [13/Oct/2025:14:12:24.900] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/873/873 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/system/groups/565b6f9688f1414498e7d7c16c7b0c08/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:12:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:12:25 standalone.localdomain haproxy[70940]: 192.168.122.99:52978 [13/Oct/2025:14:12:24.949] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/864/864 200 617 - - ---- 58/7/6/6/0 0/0 "GET /v3/services?type=tempest-ServicesTestJSON-Type-656955742 HTTP/1.1"
Oct 13 14:12:25 standalone.localdomain haproxy[70940]: 192.168.122.99:52980 [13/Oct/2025:14:12:25.001] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/850/850 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/users/5de097abad124c08a8d74282f061da3d/application_credentials/f685dc9ea7af4fb9894c4c4fc648213a HTTP/1.1"
Oct 13 14:12:25 standalone.localdomain systemd[1]: tmp-crun.GYNsC9.mount: Deactivated successfully.
Oct 13 14:12:25 standalone.localdomain podman[161055]: 2025-10-13 14:12:25.895621944 +0000 UTC m=+0.143248916 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, container_name=heat_engine, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, build-date=2025-07-21T15:44:11, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-engine-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., 
vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-heat-engine, tcib_managed=true, architecture=x86_64)
Oct 13 14:12:25 standalone.localdomain ceph-mon[29756]: pgmap v1124: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:25 standalone.localdomain haproxy[70940]: 192.168.122.99:52982 [13/Oct/2025:14:12:25.314] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/612/612 201 578 - - ---- 58/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:26 standalone.localdomain podman[161061]: 2025-10-13 14:12:25.989551056 +0000 UTC m=+0.237948191 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, vcs-type=git, build-date=2025-07-21T16:02:54, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, name=rhosp17/openstack-nova-scheduler, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, release=1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, version=17.1.9, 
com.redhat.component=openstack-nova-scheduler-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=4f7cb55437a2b42333072591935a511528e79935, container_name=nova_scheduler, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true)
Oct 13 14:12:26 standalone.localdomain podman[161111]: 2025-10-13 14:12:26.052678542 +0000 UTC m=+0.255991294 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:44:17, distribution-scope=public, name=rhosp17/openstack-nova-conductor, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-nova-conductor-container, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_conductor, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-conductor, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:12:26 standalone.localdomain podman[161073]: 2025-10-13 14:12:26.033092211 +0000 UTC m=+0.250304780 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, summary=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.component=openstack-cinder-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, container_name=cinder_api, name=rhosp17/openstack-cinder-api, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T15:58:55, description=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:12:26 standalone.localdomain podman[161086]: 2025-10-13 14:12:26.096767115 +0000 UTC m=+0.324512877 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, summary=Red Hat OpenStack Platform 17.1 horizon, container_name=horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, build-date=2025-07-21T13:58:15, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
com.redhat.component=openstack-horizon-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-horizon, config_id=tripleo_step3, io.buildah.version=1.33.12, release=1, distribution-scope=public, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64)
Oct 13 14:12:26 standalone.localdomain podman[161111]: 2025-10-13 14:12:26.109823586 +0000 UTC m=+0.313136318 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, container_name=nova_conductor, name=rhosp17/openstack-nova-conductor, tcib_managed=true, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-conductor, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, build-date=2025-07-21T15:44:17, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, com.redhat.component=openstack-nova-conductor-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, description=Red Hat OpenStack Platform 17.1 nova-conductor, config_id=tripleo_step4)
Oct 13 14:12:26 standalone.localdomain podman[161101]: 2025-10-13 14:12:25.963768195 +0000 UTC m=+0.179998414 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, batch=17.1_20250721.1, tcib_managed=true, release=1, container_name=neutron_api, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-server, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, 
build-date=2025-07-21T15:44:03, description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1)
Oct 13 14:12:26 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:12:26 standalone.localdomain podman[161061]: 2025-10-13 14:12:26.12495878 +0000 UTC m=+0.373355895 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=nova_scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-nova-scheduler-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T16:02:54, description=Red Hat OpenStack Platform 17.1 nova-scheduler, name=rhosp17/openstack-nova-scheduler)
Oct 13 14:12:26 standalone.localdomain podman[161086]: 2025-10-13 14:12:26.131853442 +0000 UTC m=+0.359599224 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, io.buildah.version=1.33.12, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-horizon-container, 
maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, container_name=horizon, name=rhosp17/openstack-horizon, batch=17.1_20250721.1, build-date=2025-07-21T13:58:15, release=1, vendor=Red Hat, Inc., config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, version=17.1.9)
Oct 13 14:12:26 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:12:26 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:12:26 standalone.localdomain podman[161075]: 2025-10-13 14:12:25.998362796 +0000 UTC m=+0.225349334 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-api, release=1, description=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, architecture=x86_64, version=17.1.9, com.redhat.component=openstack-manila-api-container, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, build-date=2025-07-21T16:06:43, container_name=manila_api_cron, name=rhosp17/openstack-manila-api, vendor=Red Hat, Inc., batch=17.1_20250721.1)
Oct 13 14:12:26 standalone.localdomain podman[161073]: 2025-10-13 14:12:26.162924725 +0000 UTC m=+0.380137304 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-cinder-api, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cinder-api, build-date=2025-07-21T15:58:55, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-cinder-api-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_api, release=1, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, vcs-type=git)
Oct 13 14:12:26 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:12:26 standalone.localdomain podman[161081]: 2025-10-13 14:12:26.200293051 +0000 UTC m=+0.419668026 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, container_name=heat_api_cron, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, build-date=2025-07-21T15:56:26, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:12:26 standalone.localdomain podman[161095]: 2025-10-13 14:12:25.873138404 +0000 UTC m=+0.093230241 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T15:56:26, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, name=rhosp17/openstack-heat-api, container_name=heat_api, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-heat-api-container)
Oct 13 14:12:26 standalone.localdomain runuser[161003]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:12:26 standalone.localdomain podman[161116]: 2025-10-13 14:12:26.241249048 +0000 UTC m=+0.457375164 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat 
OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4)
Oct 13 14:12:26 standalone.localdomain podman[161101]: 2025-10-13 14:12:26.251554864 +0000 UTC m=+0.467785113 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, build-date=2025-07-21T15:44:03, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-server, release=1, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-neutron-server, version=17.1.9, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, config_id=tripleo_step4, tcib_managed=true, container_name=neutron_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:12:26 standalone.localdomain haproxy[70940]: 192.168.122.99:52990 [13/Oct/2025:14:12:25.670] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/595/595 201 531 - - ---- 58/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:26 standalone.localdomain podman[161095]: 2025-10-13 14:12:26.267926286 +0000 UTC m=+0.488018163 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, container_name=heat_api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-07-21T15:56:26, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 14:12:26 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:12:26 standalone.localdomain podman[161081]: 2025-10-13 14:12:26.28661545 +0000 UTC m=+0.505990465 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, container_name=heat_api_cron, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, vcs-type=git, 
build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-container, distribution-scope=public, release=1, name=rhosp17/openstack-heat-api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:12:26 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:12:26 standalone.localdomain podman[161116]: 2025-10-13 14:12:26.30388741 +0000 UTC m=+0.520013526 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-cron, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, release=1, summary=Red Hat OpenStack Platform 17.1 cron, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:12:26 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:12:26 standalone.localdomain podman[161048]: 2025-10-13 14:12:26.022519748 +0000 UTC m=+0.284164100 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, build-date=2025-07-21T14:49:55, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, container_name=heat_api_cfn, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, distribution-scope=public, name=rhosp17/openstack-heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-cfn-container, tcib_managed=true)
Oct 13 14:12:26 standalone.localdomain podman[161075]: 2025-10-13 14:12:26.330189746 +0000 UTC m=+0.557176324 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, build-date=2025-07-21T16:06:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', 
'/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, container_name=manila_api_cron, managed_by=tripleo_ansible, name=rhosp17/openstack-manila-api, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-manila-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:12:26 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:12:26 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:12:26 standalone.localdomain haproxy[70940]: 192.168.122.99:53004 [13/Oct/2025:14:12:25.704] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/654/654 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/roles/10cce3b904f140a0aeb00bf9f3ad55ac HTTP/1.1"
Oct 13 14:12:26 standalone.localdomain podman[161055]: 2025-10-13 14:12:26.366953815 +0000 UTC m=+0.614580797 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-engine-container, container_name=heat_engine, name=rhosp17/openstack-heat-engine, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, release=1, distribution-scope=public, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team)
Oct 13 14:12:26 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:12:26 standalone.localdomain podman[161049]: 2025-10-13 14:12:26.077140493 +0000 UTC m=+0.332989907 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, build-date=2025-07-21T15:56:28, description=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.component=openstack-manila-scheduler-container, distribution-scope=public, name=rhosp17/openstack-manila-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/manila:/var/log/manila:z']}, container_name=manila_scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1)
Oct 13 14:12:26 standalone.localdomain podman[161049]: 2025-10-13 14:12:26.412612136 +0000 UTC m=+0.668461570 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, io.openshift.expose-services=, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, vendor=Red Hat, Inc., com.redhat.component=openstack-manila-scheduler-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, build-date=2025-07-21T15:56:28, distribution-scope=public, name=rhosp17/openstack-manila-scheduler, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 manila-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, release=1, version=17.1.9, container_name=manila_scheduler)
Oct 13 14:12:26 standalone.localdomain runuser[161342]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:12:26 standalone.localdomain haproxy[70940]: 192.168.122.99:53016 [13/Oct/2025:14:12:25.732] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/684/684 200 239 - - ---- 59/8/7/6/0 0/0 "GET /v3/domains/config/ldap/chase_referrals/default HTTP/1.1"
Oct 13 14:12:26 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:12:26 standalone.localdomain podman[161048]: 2025-10-13 14:12:26.463051003 +0000 UTC m=+0.724695395 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, release=1, config_id=tripleo_step4, build-date=2025-07-21T14:49:55, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=heat_api_cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-cfn-container)
Oct 13 14:12:26 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:12:26 standalone.localdomain haproxy[70940]: 192.168.122.99:53024 [13/Oct/2025:14:12:25.776] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/706/706 204 165 - - ---- 58/7/6/6/0 0/0 "PUT /v3/domains/c7ca48b48fe14afc95f21d1df9f56183/users/00515c3559934eb5861b350991c52f53/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:26 standalone.localdomain podman[161062]: 2025-10-13 14:12:26.46489861 +0000 UTC m=+0.702138753 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-memcached, container_name=memcached, tcib_managed=true, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-memcached-container, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, release=1)
Oct 13 14:12:26 standalone.localdomain haproxy[70940]: 192.168.122.99:53034 [13/Oct/2025:14:12:25.815] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/716/716 200 618 - - ---- 58/7/6/6/0 0/0 "GET /v3/services?type=tempest-ServicesTestJSON-Type-399592971 HTTP/1.1"
Oct 13 14:12:26 standalone.localdomain podman[161062]: 2025-10-13 14:12:26.548853946 +0000 UTC m=+0.786094109 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, release=1, vcs-type=git, container_name=memcached, tcib_managed=true, version=17.1.9, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:43, com.redhat.component=openstack-memcached-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached)
Oct 13 14:12:26 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:12:26 standalone.localdomain haproxy[70940]: 192.168.122.99:53044 [13/Oct/2025:14:12:25.854] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/737/737 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/users/5de097abad124c08a8d74282f061da3d/application_credentials/90cf7269fff0405ba105b4245a2fd887 HTTP/1.1"
Oct 13 14:12:26 standalone.localdomain haproxy[70940]: 192.168.122.99:53058 [13/Oct/2025:14:12:25.928] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/960/960 201 499 - - ---- 58/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:27 standalone.localdomain haproxy[70940]: 192.168.122.99:53068 [13/Oct/2025:14:12:26.270] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/745/745 200 550 - - ---- 58/7/6/6/0 0/0 "GET /v3/users/55bc2bdd41ac4ba4802cbc83685a49fe HTTP/1.1"
Oct 13 14:12:27 standalone.localdomain haproxy[70940]: 192.168.122.99:53078 [13/Oct/2025:14:12:26.360] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/743/743 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/roles/fa4dbfc9ce7f4152baf782ed2970e410 HTTP/1.1"
Oct 13 14:12:27 standalone.localdomain runuser[161342]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:12:27 standalone.localdomain haproxy[70940]: 192.168.122.99:53092 [13/Oct/2025:14:12:26.417] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/723/723 200 236 - - ---- 58/7/6/6/0 0/0 "GET /v3/domains/config/ldap/user_tree_dn/default HTTP/1.1"
Oct 13 14:12:27 standalone.localdomain runuser[161416]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:12:27 standalone.localdomain haproxy[70940]: 192.168.122.99:53104 [13/Oct/2025:14:12:26.484] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/698/698 200 611 - - ---- 59/8/7/6/0 0/0 "GET /v3/domains/c7ca48b48fe14afc95f21d1df9f56183/users/00515c3559934eb5861b350991c52f53/roles HTTP/1.1"
Oct 13 14:12:27 standalone.localdomain haproxy[70940]: 192.168.122.99:53108 [13/Oct/2025:14:12:26.532] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/695/695 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/services/ed36cb831cd5421aaed67b6ea7668d5f HTTP/1.1"
Oct 13 14:12:27 standalone.localdomain haproxy[70940]: 192.168.122.99:53114 [13/Oct/2025:14:12:26.594] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/952/952 201 995 - - ---- 58/7/6/6/0 0/0 "POST /v3/users/5de097abad124c08a8d74282f061da3d/application_credentials HTTP/1.1"
Oct 13 14:12:27 standalone.localdomain haproxy[70940]: 192.168.122.99:53118 [13/Oct/2025:14:12:26.891] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/756/756 200 3603 - - ---- 58/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1125: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:27 standalone.localdomain haproxy[70940]: 192.168.122.99:53128 [13/Oct/2025:14:12:27.019] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/683/683 200 687 - - ---- 58/7/6/6/0 0/0 "GET /v3/users?domain_id=88aff2dc361a4176bdc24f5876ac8206 HTTP/1.1"
Oct 13 14:12:27 standalone.localdomain runuser[161416]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:12:27 standalone.localdomain haproxy[70940]: 192.168.122.99:53144 [13/Oct/2025:14:12:27.105] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/699/699 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/users/6c4a2a05ad284cc390f87dd761410ab1 HTTP/1.1"
Oct 13 14:12:27 standalone.localdomain haproxy[70940]: 192.168.122.99:53160 [13/Oct/2025:14:12:27.142] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/708/708 200 235 - - ---- 58/7/6/6/0 0/0 "GET /v3/domains/config/ldap/user_filter/default HTTP/1.1"
Oct 13 14:12:27 standalone.localdomain haproxy[70940]: 192.168.122.99:53170 [13/Oct/2025:14:12:27.184] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/717/717 204 165 - - ---- 58/7/6/6/0 0/0 "HEAD /v3/domains/c7ca48b48fe14afc95f21d1df9f56183/users/00515c3559934eb5861b350991c52f53/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:27 standalone.localdomain haproxy[70940]: 192.168.122.99:53186 [13/Oct/2025:14:12:27.229] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/734/734 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/services/ca1ed5c32df344bc9cead8c93aabe33b HTTP/1.1"
Oct 13 14:12:28 standalone.localdomain haproxy[70940]: 192.168.122.99:53198 [13/Oct/2025:14:12:27.548] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/740/740 201 996 - - ---- 58/7/6/6/0 0/0 "POST /v3/users/5de097abad124c08a8d74282f061da3d/application_credentials HTTP/1.1"
Oct 13 14:12:28 standalone.localdomain haproxy[70940]: 192.168.122.99:53204 [13/Oct/2025:14:12:27.649] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/720/720 204 165 - - ---- 58/7/6/6/0 0/0 "PUT /v3/projects/c1d2a15ead2d4ed583ce1c00d7ed3f50/users/e34036a84ff54672b5084cf1843c365d/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:12:28 standalone.localdomain haproxy[70940]: 192.168.122.99:53218 [13/Oct/2025:14:12:27.706] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/722/722 200 9869 - - ---- 58/7/6/6/0 0/0 "GET /v3/users HTTP/1.1"
Oct 13 14:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0043440946049692905 of space, bias 1.0, pg target 0.4344094604969291 quantized to 32 (current 32)
Oct 13 14:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:12:28 standalone.localdomain haproxy[70940]: 192.168.122.99:53228 [13/Oct/2025:14:12:27.806] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/717/717 204 165 - - ---- 59/8/7/6/0 0/0 "DELETE /v3/projects/ff59c404f4e34798b46c8a4591a56f7e HTTP/1.1"
Oct 13 14:12:28 standalone.localdomain haproxy[70940]: 192.168.122.99:53242 [13/Oct/2025:14:12:27.852] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/753/753 200 251 - - ---- 59/8/7/6/0 0/0 "GET /v3/domains/config/ldap/user_objectclass/default HTTP/1.1"
Oct 13 14:12:28 standalone.localdomain haproxy[70940]: 192.168.122.99:53250 [13/Oct/2025:14:12:27.903] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/796/796 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/domains/c7ca48b48fe14afc95f21d1df9f56183/users/00515c3559934eb5861b350991c52f53/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:12:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:12:28 standalone.localdomain ceph-mon[29756]: pgmap v1125: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:12:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:12:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:12:28 standalone.localdomain haproxy[70940]: 192.168.122.99:53256 [13/Oct/2025:14:12:27.965] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/820/820 204 165 - - ---- 59/8/7/7/0 0/0 "DELETE /v3/services/2bd719ef043743b8937d7836fa3e94cf HTTP/1.1"
Oct 13 14:12:28 standalone.localdomain systemd[1]: tmp-crun.rCmCjc.mount: Deactivated successfully.
Oct 13 14:12:28 standalone.localdomain podman[161666]: 2025-10-13 14:12:28.851550823 +0000 UTC m=+0.095006276 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-type=git, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, container_name=swift_object_server, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, 
managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 14:12:28 standalone.localdomain haproxy[70940]: 192.168.122.99:53272 [13/Oct/2025:14:12:28.290] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/571/571 200 1089 - - ---- 58/7/6/6/0 0/0 "GET /v3/users/5de097abad124c08a8d74282f061da3d/application_credentials?name=tempest-application_credential-1211096726 HTTP/1.1"
Oct 13 14:12:28 standalone.localdomain podman[161669]: 2025-10-13 14:12:28.901110083 +0000 UTC m=+0.137725666 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.buildah.version=1.33.12, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, 
build-date=2025-07-21T16:11:22, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, version=17.1.9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:12:28 standalone.localdomain haproxy[70940]: 192.168.122.99:53284 [13/Oct/2025:14:12:28.371] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/540/540 200 3603 - - ---- 58/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:28 standalone.localdomain systemd[1]: tmp-crun.3Ei7dy.mount: Deactivated successfully.
Oct 13 14:12:28 standalone.localdomain podman[161668]: 2025-10-13 14:12:28.957868744 +0000 UTC m=+0.201014927 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, build-date=2025-07-21T15:54:32, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, version=17.1.9)
Oct 13 14:12:29 standalone.localdomain haproxy[70940]: 192.168.122.99:53298 [13/Oct/2025:14:12:28.431] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/580/580 200 678 - - ---- 58/7/6/6/0 0/0 "GET /v3/users?name=tempest-test_user-1103188674 HTTP/1.1"
Oct 13 14:12:29 standalone.localdomain podman[161667]: 2025-10-13 14:12:29.025534301 +0000 UTC m=+0.271927814 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, distribution-scope=public, managed_by=tripleo_ansible, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37)
Oct 13 14:12:29 standalone.localdomain podman[161666]: 2025-10-13 14:12:29.035861387 +0000 UTC m=+0.279316840 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, release=1, build-date=2025-07-21T14:56:28, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-object, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 14:12:29 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:12:29 standalone.localdomain podman[161675]: 2025-10-13 14:12:29.081837328 +0000 UTC m=+0.317067988 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, name=rhosp17/openstack-nova-novncproxy, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, tcib_managed=true, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, architecture=x86_64, com.redhat.component=openstack-nova-novncproxy-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:24:10, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_vnc_proxy, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 14:12:29 standalone.localdomain podman[161669]: 2025-10-13 14:12:29.093795475 +0000 UTC m=+0.330411088 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, container_name=swift_account_server, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-swift-account, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:12:29 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:12:29 standalone.localdomain haproxy[70940]: 192.168.122.99:53304 [13/Oct/2025:14:12:28.527] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/619/619 201 536 - - ---- 58/7/6/6/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:29 standalone.localdomain podman[161668]: 2025-10-13 14:12:29.15753324 +0000 UTC m=+0.400679443 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, maintainer=OpenStack TripleO Team, container_name=swift_container_server, name=rhosp17/openstack-swift-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git)
Oct 13 14:12:29 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:12:29 standalone.localdomain haproxy[70940]: 192.168.122.99:53320 [13/Oct/2025:14:12:28.607] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/588/588 200 241 - - ---- 58/7/6/6/0 0/0 "GET /v3/domains/config/ldap/user_id_attribute/default HTTP/1.1"
Oct 13 14:12:29 standalone.localdomain haproxy[70940]: 192.168.122.99:53324 [13/Oct/2025:14:12:28.703] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/544/544 204 165 - - ---- 58/7/6/6/0 0/0 "PUT /v3/projects/fe3dffb53f0b414a8e095b67f132e54d/users/00515c3559934eb5861b350991c52f53/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:29 standalone.localdomain haproxy[70940]: 192.168.122.99:53336 [13/Oct/2025:14:12:28.788] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/552/552 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/users/0112c550304d4b87bacf25897e05bc39 HTTP/1.1"
Oct 13 14:12:29 standalone.localdomain podman[161675]: 2025-10-13 14:12:29.364876752 +0000 UTC m=+0.600107402 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_vnc_proxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, release=1, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-nova-novncproxy-container, version=17.1.9, build-date=2025-07-21T15:24:10, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64)
Oct 13 14:12:29 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:12:29 standalone.localdomain podman[161667]: 2025-10-13 14:12:29.385648259 +0000 UTC m=+0.632041812 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, container_name=nova_migration_target, distribution-scope=public, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1)
Oct 13 14:12:29 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:12:29 standalone.localdomain haproxy[70940]: 192.168.122.99:53348 [13/Oct/2025:14:12:28.864] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/560/560 204 165 - - ---- 59/8/7/6/0 0/0 "DELETE /v3/users/5de097abad124c08a8d74282f061da3d/application_credentials/52ced692d0f546e489941355805b84b1 HTTP/1.1"
Oct 13 14:12:29 standalone.localdomain haproxy[70940]: 192.168.122.99:53360 [13/Oct/2025:14:12:28.913] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/615/615 204 165 - - ---- 58/7/6/6/0 0/0 "PUT /v3/projects/c1d2a15ead2d4ed583ce1c00d7ed3f50/users/e34036a84ff54672b5084cf1843c365d/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:12:29 standalone.localdomain haproxy[70940]: 192.168.122.99:53362 [13/Oct/2025:14:12:29.015] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/579/579 200 634 - - ---- 58/7/6/6/0 0/0 "GET /v3/users?enabled=False HTTP/1.1"
Oct 13 14:12:29 standalone.localdomain podman[161837]: 2025-10-13 14:12:29.603912765 +0000 UTC m=+0.100961058 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-manila-share-container, summary=Red Hat OpenStack Platform 17.1 manila-share, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, version=17.1.9, build-date=2025-07-21T15:22:36, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-manila-share, tcib_managed=true, batch=17.1_20250721.1)
Oct 13 14:12:29 standalone.localdomain podman[161837]: 2025-10-13 14:12:29.632001307 +0000 UTC m=+0.129049550 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, vendor=Red Hat, Inc., build-date=2025-07-21T15:22:36, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-manila-share, summary=Red Hat OpenStack Platform 17.1 manila-share, description=Red Hat OpenStack Platform 17.1 manila-share, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, batch=17.1_20250721.1, com.redhat.component=openstack-manila-share-container, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true)
Oct 13 14:12:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1126: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:29 standalone.localdomain haproxy[70940]: 192.168.122.99:53364 [13/Oct/2025:14:12:29.148] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/736/736 201 617 - - ---- 58/7/6/6/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.921095) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364749921129, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2382, "num_deletes": 501, "total_data_size": 1973991, "memory_usage": 2021072, "flush_reason": "Manual Compaction"}
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 13 14:12:29 standalone.localdomain haproxy[70940]: 192.168.122.99:53372 [13/Oct/2025:14:12:29.197] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/730/730 200 243 - - ---- 58/7/6/6/0 0/0 "GET /v3/domains/config/ldap/user_name_attribute/default HTTP/1.1"
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364749932018, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1733645, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24872, "largest_seqno": 27253, "table_properties": {"data_size": 1725375, "index_size": 4461, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 21669, "raw_average_key_size": 19, "raw_value_size": 1705943, "raw_average_value_size": 1552, "num_data_blocks": 203, "num_entries": 1099, "num_filter_entries": 1099, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760364559, "oldest_key_time": 1760364559, "file_creation_time": 1760364749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 10980 microseconds, and 5330 cpu microseconds.
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.932073) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1733645 bytes OK
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.932094) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.934020) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.934044) EVENT_LOG_v1 {"time_micros": 1760364749934036, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.934068) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1962900, prev total WAL file size 1962900, number of live WAL files 2.
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.935109) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1693KB)], [56(5727KB)]
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364749935178, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 7598155, "oldest_snapshot_seqno": -1}
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 3911 keys, 4728982 bytes, temperature: kUnknown
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364749956108, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 4728982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4704034, "index_size": 14100, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 94277, "raw_average_key_size": 24, "raw_value_size": 4634438, "raw_average_value_size": 1184, "num_data_blocks": 601, "num_entries": 3911, "num_filter_entries": 3911, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760364749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.956315) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 4728982 bytes
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.957945) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 361.9 rd, 225.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 5.6 +0.0 blob) out(4.5 +0.0 blob), read-write-amplify(7.1) write-amplify(2.7) OK, records in: 4912, records dropped: 1001 output_compression: NoCompression
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.957960) EVENT_LOG_v1 {"time_micros": 1760364749957954, "job": 30, "event": "compaction_finished", "compaction_time_micros": 20995, "compaction_time_cpu_micros": 11870, "output_level": 6, "num_output_files": 1, "total_output_size": 4728982, "num_input_records": 4912, "num_output_records": 3911, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364749958159, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364749958529, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.935003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.958569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.958575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.958578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.958581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:12:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:29.958584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:12:29 standalone.localdomain haproxy[70940]: 192.168.122.99:53374 [13/Oct/2025:14:12:29.248] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/722/722 200 612 - - ---- 58/7/6/6/0 0/0 "GET /v3/projects/fe3dffb53f0b414a8e095b67f132e54d/users/00515c3559934eb5861b350991c52f53/roles HTTP/1.1"
Oct 13 14:12:30 standalone.localdomain haproxy[70940]: 192.168.122.99:53390 [13/Oct/2025:14:12:29.341] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/709/709 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/users/2ce10303f8f140a1bc69a24310c37e0a HTTP/1.1"
Oct 13 14:12:30 standalone.localdomain haproxy[70940]: 192.168.122.99:53398 [13/Oct/2025:14:12:29.426] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/712/712 204 165 - - ---- 58/6/5/5/0 0/0 "DELETE /v3/users/5de097abad124c08a8d74282f061da3d/application_credentials/0d8e242997e9423b877d6414588673ab HTTP/1.1"
Oct 13 14:12:30 standalone.localdomain haproxy[70940]: 192.168.122.99:53408 [13/Oct/2025:14:12:29.530] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/700/700 200 3603 - - ---- 58/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:30 standalone.localdomain haproxy[70940]: 192.168.122.99:57208 [13/Oct/2025:14:12:29.600] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/814/814 204 165 - - ---- 58/6/5/5/0 0/0 "DELETE /v3/users/46cd3059e2884c7c8ff6951aaa75ecfc HTTP/1.1"
Oct 13 14:12:30 standalone.localdomain haproxy[70940]: 192.168.122.99:57212 [13/Oct/2025:14:12:29.888] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/588/588 201 451 - - ---- 58/6/5/5/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:30 standalone.localdomain haproxy[70940]: 192.168.122.99:57226 [13/Oct/2025:14:12:29.930] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/598/598 200 245 - - ---- 58/6/5/5/0 0/0 "GET /v3/domains/config/ldap/user_mail_attribute/default HTTP/1.1"
Oct 13 14:12:30 standalone.localdomain haproxy[70940]: 192.168.122.99:57236 [13/Oct/2025:14:12:29.974] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/616/616 204 165 - - ---- 58/6/5/5/0 0/0 "HEAD /v3/projects/fe3dffb53f0b414a8e095b67f132e54d/users/00515c3559934eb5861b350991c52f53/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:30 standalone.localdomain ceph-mon[29756]: pgmap v1126: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:30 standalone.localdomain haproxy[70940]: 172.21.0.2:39778 [13/Oct/2025:14:12:30.051] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/831/831 201 8100 - - ---- 58/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:30 standalone.localdomain haproxy[70940]: 192.168.122.99:57244 [13/Oct/2025:14:12:30.148] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/792/792 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/users/5de097abad124c08a8d74282f061da3d HTTP/1.1"
Oct 13 14:12:30 standalone.localdomain haproxy[70940]: 192.168.122.99:57246 [13/Oct/2025:14:12:30.233] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/758/758 204 165 - - ---- 59/5/4/4/0 0/0 "PUT /v3/projects/c1d2a15ead2d4ed583ce1c00d7ed3f50/users/e34036a84ff54672b5084cf1843c365d/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:31 standalone.localdomain haproxy[70940]: 192.168.122.99:57262 [13/Oct/2025:14:12:30.416] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/668/668 204 165 - - ---- 59/4/3/3/0 0/0 "DELETE /v3/users/55bc2bdd41ac4ba4802cbc83685a49fe HTTP/1.1"
Oct 13 14:12:31 standalone.localdomain haproxy[70940]: 192.168.122.99:57272 [13/Oct/2025:14:12:30.477] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/660/660 201 453 - - ---- 59/4/3/3/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:31 standalone.localdomain haproxy[70940]: 192.168.122.99:57280 [13/Oct/2025:14:12:30.530] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/655/655 200 259 - - ---- 59/4/3/3/0 0/0 "GET /v3/domains/config/ldap/user_description_attribute/default HTTP/1.1"
Oct 13 14:12:31 standalone.localdomain haproxy[70940]: 192.168.122.99:57288 [13/Oct/2025:14:12:30.594] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/651/651 204 165 - - ---- 59/4/3/3/0 0/0 "DELETE /v3/projects/fe3dffb53f0b414a8e095b67f132e54d/users/00515c3559934eb5861b350991c52f53/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:31 standalone.localdomain haproxy[70940]: 172.17.0.2:37106 [13/Oct/2025:14:12:30.891] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/450/450 200 8095 - - ---- 59/3/2/2/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:31 standalone.localdomain haproxy[70940]: 172.21.0.2:46944 [13/Oct/2025:14:12:30.887] neutron neutron/standalone.internalapi.localdomain 0/0/0/632/632 200 2976 - - ---- 59/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=f1aab92925d047599fbe5e2ebedfc878&name=default HTTP/1.1"
Oct 13 14:12:31 standalone.localdomain haproxy[70940]: 172.21.0.2:39780 [13/Oct/2025:14:12:30.943] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/665/665 201 8100 - - ---- 60/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1127: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:31 standalone.localdomain haproxy[70940]: 172.21.0.2:46960 [13/Oct/2025:14:12:31.521] neutron neutron/standalone.internalapi.localdomain 0/0/0/166/166 204 152 - - ---- 60/2/1/1/0 0/0 "DELETE /v2.0/security-groups/f60d0922-0639-421e-adcf-bf51ddf106b1 HTTP/1.1"
Oct 13 14:12:31 standalone.localdomain haproxy[70940]: 172.21.0.2:39784 [13/Oct/2025:14:12:31.001] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/923/923 201 8176 - - ---- 59/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:32 standalone.localdomain haproxy[70940]: 192.168.122.99:57290 [13/Oct/2025:14:12:31.085] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/917/917 200 479 - - ---- 59/6/5/5/0 0/0 "PATCH /v3/domains/88aff2dc361a4176bdc24f5876ac8206 HTTP/1.1"
Oct 13 14:12:32 standalone.localdomain haproxy[70940]: 192.168.122.99:57306 [13/Oct/2025:14:12:31.139] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/988/988 204 165 - - ---- 59/6/5/5/0 0/0 "PUT /v3/projects/ee48481d21ec43539581baba0711956a/users/85fd5d3ecb02402a8f54af8cc5220167/roles/0ef1be7853d54dbf91960cec9015d2dc HTTP/1.1"
Oct 13 14:12:32 standalone.localdomain haproxy[70940]: 192.168.122.99:57314 [13/Oct/2025:14:12:31.189] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1030/1030 200 253 - - ---- 59/6/5/5/0 0/0 "GET /v3/domains/config/ldap/user_pass_attribute/default HTTP/1.1"
Oct 13 14:12:32 standalone.localdomain haproxy[70940]: 192.168.122.99:57320 [13/Oct/2025:14:12:31.253] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1050/1050 204 165 - - ---- 59/6/5/5/0 0/0 "PUT /v3/system/users/00515c3559934eb5861b350991c52f53/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:32 standalone.localdomain haproxy[70940]: 172.17.0.2:37106 [13/Oct/2025:14:12:31.618] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/772/773 200 8095 - - ---- 59/1/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:32 standalone.localdomain haproxy[70940]: 192.168.122.99:57322 [13/Oct/2025:14:12:31.688] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/1/757/758 204 165 - - ---- 60/6/5/5/0 0/0 "DELETE /v3/projects/f1aab92925d047599fbe5e2ebedfc878 HTTP/1.1"
Oct 13 14:12:32 standalone.localdomain haproxy[70940]: 192.168.122.99:57336 [13/Oct/2025:14:12:31.928] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/566/566 201 482 - - ---- 59/5/4/4/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:12:32 standalone.localdomain haproxy[70940]: 192.168.122.99:57346 [13/Oct/2025:14:12:32.004] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/644/644 204 165 - - ---- 59/5/4/4/0 0/0 "DELETE /v3/domains/88aff2dc361a4176bdc24f5876ac8206 HTTP/1.1"
Oct 13 14:12:32 standalone.localdomain haproxy[70940]: 192.168.122.99:57356 [13/Oct/2025:14:12:32.129] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/575/575 204 165 - - ---- 59/5/4/4/0 0/0 "PUT /v3/projects/ee48481d21ec43539581baba0711956a/users/85fd5d3ecb02402a8f54af8cc5220167/roles/8aa4a53c4d724802bca0e40728ca9a9a HTTP/1.1"
Oct 13 14:12:32 standalone.localdomain haproxy[70940]: 172.21.0.2:46964 [13/Oct/2025:14:12:31.609] neutron neutron/standalone.internalapi.localdomain 0/0/0/1107/1107 200 2976 - - ---- 59/2/1/1/0 0/0 "GET /v2.0/security-groups?tenant_id=625d325fc7e34c579a4ac6c402228c21&name=default HTTP/1.1"
Oct 13 14:12:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:12:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:12:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:12:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:12:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:12:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:12:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:12:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:12:32 standalone.localdomain haproxy[70940]: 192.168.122.99:57368 [13/Oct/2025:14:12:32.223] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/535/535 200 251 - - ---- 59/5/4/4/0 0/0 "GET /v3/domains/config/ldap/user_enabled_attribute/default HTTP/1.1"
Oct 13 14:12:32 standalone.localdomain ceph-mon[29756]: pgmap v1127: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:32 standalone.localdomain haproxy[70940]: 172.21.0.2:46968 [13/Oct/2025:14:12:32.447] neutron neutron/standalone.internalapi.localdomain 0/0/0/376/376 200 2976 - - ---- 59/2/1/1/0 0/0 "GET /v2.0/security-groups?tenant_id=6558c1761bba47bf904e47a566fb0dc1&name=default HTTP/1.1"
Oct 13 14:12:32 standalone.localdomain haproxy[70940]: 192.168.122.99:57374 [13/Oct/2025:14:12:32.308] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/547/547 200 577 - - ---- 59/5/4/4/0 0/0 "GET /v3/system/users/00515c3559934eb5861b350991c52f53/roles HTTP/1.1"
Oct 13 14:12:32 standalone.localdomain systemd[1]: tmp-crun.9gpxIq.mount: Deactivated successfully.
Oct 13 14:12:32 standalone.localdomain podman[161936]: 2025-10-13 14:12:32.90198742 +0000 UTC m=+0.164976022 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, name=rhosp17/openstack-swift-proxy-server, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, release=1, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, container_name=swift_proxy, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, com.redhat.component=openstack-swift-proxy-server-container, io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:12:32 standalone.localdomain podman[161938]: 2025-10-13 14:12:32.917536518 +0000 UTC m=+0.170172492 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T13:58:20, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_internal)
Oct 13 14:12:32 standalone.localdomain podman[161935]: 2025-10-13 14:12:32.871542286 +0000 UTC m=+0.136316342 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, release=1, config_id=tripleo_step4, com.redhat.component=openstack-placement-api-container, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, name=rhosp17/openstack-placement-api, container_name=placement_api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, 
build-date=2025-07-21T13:58:12, description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git)
Oct 13 14:12:32 standalone.localdomain haproxy[70940]: 192.168.122.99:57378 [13/Oct/2025:14:12:32.497] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/445/445 201 610 - - ---- 60/6/4/4/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:32 standalone.localdomain podman[161935]: 2025-10-13 14:12:32.964092147 +0000 UTC m=+0.228866203 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, io.openshift.expose-services=, name=rhosp17/openstack-placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, release=1, io.buildah.version=1.33.12, com.redhat.component=openstack-placement-api-container, config_id=tripleo_step4, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:12, managed_by=tripleo_ansible, container_name=placement_api, description=Red Hat OpenStack Platform 17.1 placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 placement-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:12:32 standalone.localdomain podman[161964]: 2025-10-13 14:12:32.974584498 +0000 UTC m=+0.182925683 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, 
tcib_managed=true, version=17.1.9, com.redhat.component=openstack-glance-api-container, maintainer=OpenStack TripleO Team, container_name=glance_api, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, release=1)
Oct 13 14:12:32 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:12:33 standalone.localdomain podman[161939]: 2025-10-13 14:12:32.848671345 +0000 UTC m=+0.111099849 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:12:33 standalone.localdomain podman[161934]: 2025-10-13 14:12:33.021510768 +0000 UTC m=+0.270936714 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, com.redhat.component=openstack-glance-api-container, distribution-scope=public, io.buildah.version=1.33.12, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, 
container_name=glance_api_cron, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible)
Oct 13 14:12:33 standalone.localdomain haproxy[70940]: 192.168.122.99:57380 [13/Oct/2025:14:12:32.651] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/386/386 204 165 - - ---- 59/5/4/4/0 0/0 "DELETE /v3/users/de844f7880684129a21bbf09ac52abb2 HTTP/1.1"
Oct 13 14:12:33 standalone.localdomain podman[161961]: 2025-10-13 14:12:33.055523051 +0000 UTC m=+0.303069309 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:12:33 standalone.localdomain podman[161936]: 2025-10-13 14:12:33.080743635 +0000 UTC m=+0.343732257 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.component=openstack-swift-proxy-server-container, version=17.1.9, container_name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1, build-date=2025-07-21T14:48:37, architecture=x86_64, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:12:33 standalone.localdomain haproxy[70940]: 172.21.0.2:46984 [13/Oct/2025:14:12:32.719] neutron neutron/standalone.internalapi.localdomain 0/0/0/364/364 204 152 - - ---- 59/2/1/1/0 0/0 "DELETE /v2.0/security-groups/97c74cd3-e9b3-4f1a-9e4b-3f4acd9dba84 HTTP/1.1"
Oct 13 14:12:33 standalone.localdomain podman[161937]: 2025-10-13 14:12:33.102473652 +0000 UTC m=+0.361450231 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.expose-services=, com.redhat.component=openstack-nova-api-container, distribution-scope=public, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_metadata, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T16:05:11, batch=17.1_20250721.1, name=rhosp17/openstack-nova-api, release=1, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:12:33 standalone.localdomain haproxy[70940]: 172.21.0.2:46994 [13/Oct/2025:14:12:32.826] neutron neutron/standalone.internalapi.localdomain 0/0/0/290/290 204 152 - - ---- 59/1/0/0/0 0/0 "DELETE /v2.0/security-groups/24d40b91-58da-426d-b871-85b971727bf8 HTTP/1.1"
Oct 13 14:12:33 standalone.localdomain haproxy[70940]: 192.168.122.99:57390 [13/Oct/2025:14:12:32.707] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/413/413 200 8474 - - ---- 59/7/6/6/0 0/0 "GET /v3/users HTTP/1.1"
Oct 13 14:12:33 standalone.localdomain podman[161938]: 2025-10-13 14:12:33.129753599 +0000 UTC m=+0.382389563 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_internal, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1)
Oct 13 14:12:33 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:12:33 standalone.localdomain podman[161961]: 2025-10-13 14:12:33.153740754 +0000 UTC m=+0.401287022 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Oct 13 14:12:33 standalone.localdomain haproxy[70940]: 192.168.122.99:57396 [13/Oct/2025:14:12:32.763] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/400/400 200 244 - - ---- 59/6/5/5/0 0/0 "GET /v3/domains/config/ldap/user_enabled_invert/default HTTP/1.1"
Oct 13 14:12:33 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:12:33 standalone.localdomain podman[161939]: 2025-10-13 14:12:33.180895098 +0000 UTC m=+0.443323612 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T16:28:53, io.openshift.expose-services=)
Oct 13 14:12:33 standalone.localdomain podman[161964]: 2025-10-13 14:12:33.19074667 +0000 UTC m=+0.399087845 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=glance_api, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, 
io.buildah.version=1.33.12, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible)
Oct 13 14:12:33 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:12:33 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:12:33 standalone.localdomain podman[161937]: 2025-10-13 14:12:33.205529164 +0000 UTC m=+0.464505763 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, release=1, vcs-type=git, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, build-date=2025-07-21T16:05:11, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, container_name=nova_metadata, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, 
batch=17.1_20250721.1, name=rhosp17/openstack-nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4)
Oct 13 14:12:33 standalone.localdomain haproxy[70940]: 192.168.122.99:57412 [13/Oct/2025:14:12:32.858] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/348/348 204 165 - - ---- 59/6/5/5/0 0/0 "HEAD /v3/system/users/00515c3559934eb5861b350991c52f53/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:33 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:12:33 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:12:33 standalone.localdomain haproxy[70940]: 192.168.122.99:57418 [13/Oct/2025:14:12:32.942] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/2/304/306 201 508 - - ---- 59/6/5/5/0 0/0 "POST /v3/groups HTTP/1.1"
Oct 13 14:12:33 standalone.localdomain podman[161934]: 2025-10-13 14:12:33.28233296 +0000 UTC m=+0.531758896 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, version=17.1.9, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, name=rhosp17/openstack-glance-api, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:12:33 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:12:33 standalone.localdomain haproxy[70940]: 192.168.122.99:57430 [13/Oct/2025:14:12:33.039] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/347/347 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/users/5fee77256fad4ce08187671918ae31bb HTTP/1.1"
Oct 13 14:12:33 standalone.localdomain haproxy[70940]: 192.168.122.99:57432 [13/Oct/2025:14:12:33.085] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/412/412 204 165 - - ---- 59/5/4/4/0 0/0 "DELETE /v3/projects/625d325fc7e34c579a4ac6c402228c21 HTTP/1.1"
Oct 13 14:12:33 standalone.localdomain haproxy[70940]: 192.168.122.99:57446 [13/Oct/2025:14:12:33.117] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/484/484 204 165 - - ---- 58/4/3/3/0 0/0 "DELETE /v3/projects/6558c1761bba47bf904e47a566fb0dc1 HTTP/1.1"
Oct 13 14:12:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1128: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:34 standalone.localdomain haproxy[70940]: 172.21.0.2:39798 [13/Oct/2025:14:12:33.121] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/891/891 201 8139 - - ---- 58/4/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:34 standalone.localdomain haproxy[70940]: 192.168.122.99:57452 [13/Oct/2025:14:12:33.164] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/887/887 200 238 - - ---- 58/3/2/2/0 0/0 "GET /v3/domains/config/ldap/user_enabled_mask/default HTTP/1.1"
Oct 13 14:12:34 standalone.localdomain haproxy[70940]: 192.168.122.99:57464 [13/Oct/2025:14:12:33.208] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/884/884 204 165 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/system/users/00515c3559934eb5861b350991c52f53/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:34 standalone.localdomain haproxy[70940]: 192.168.122.99:57472 [13/Oct/2025:14:12:33.251] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1130/1130 201 652 - - ---- 59/4/3/3/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:34 standalone.localdomain haproxy[70940]: 172.21.0.2:39808 [13/Oct/2025:14:12:33.390] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1325/1325 201 8100 - - ---- 58/4/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: pgmap v1128: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.919299) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364754919367, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 296, "num_deletes": 256, "total_data_size": 49462, "memory_usage": 55752, "flush_reason": "Manual Compaction"}
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364754924629, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 49342, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27254, "largest_seqno": 27549, "table_properties": {"data_size": 47433, "index_size": 150, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4589, "raw_average_key_size": 16, "raw_value_size": 43632, "raw_average_value_size": 157, "num_data_blocks": 7, "num_entries": 277, "num_filter_entries": 277, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760364750, "oldest_key_time": 1760364750, "file_creation_time": 1760364754, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 5364 microseconds, and 1016 cpu microseconds.
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.924673) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 49342 bytes OK
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.924694) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.927008) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.927030) EVENT_LOG_v1 {"time_micros": 1760364754927023, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.927084) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 47278, prev total WAL file size 47767, number of live WAL files 2.
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.929533) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(48KB)], [59(4618KB)]
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364754929621, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 4778324, "oldest_snapshot_seqno": -1}
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 3669 keys, 4645177 bytes, temperature: kUnknown
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364754955310, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 4645177, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4621270, "index_size": 13635, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9221, "raw_key_size": 90442, "raw_average_key_size": 24, "raw_value_size": 4555377, "raw_average_value_size": 1241, "num_data_blocks": 576, "num_entries": 3669, "num_filter_entries": 3669, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760364754, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.955600) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 4645177 bytes
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.957615) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.3 rd, 180.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 4.5 +0.0 blob) out(4.4 +0.0 blob), read-write-amplify(191.0) write-amplify(94.1) OK, records in: 4188, records dropped: 519 output_compression: NoCompression
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.957664) EVENT_LOG_v1 {"time_micros": 1760364754957645, "job": 32, "event": "compaction_finished", "compaction_time_micros": 25782, "compaction_time_cpu_micros": 15852, "output_level": 6, "num_output_files": 1, "total_output_size": 4645177, "num_input_records": 4188, "num_output_records": 3669, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364754957887, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364754958540, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.929331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.958589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.958595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.958598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.958601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:12:34 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:12:34.958604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:12:34 standalone.localdomain haproxy[70940]: 172.21.0.2:39816 [13/Oct/2025:14:12:33.605] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1374/1374 201 8100 - - ---- 58/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:35 standalone.localdomain haproxy[70940]: 172.21.0.2:39822 [13/Oct/2025:14:12:34.020] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1227/1227 201 8139 - - ---- 58/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:35 standalone.localdomain haproxy[70940]: 192.168.122.99:57484 [13/Oct/2025:14:12:34.054] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1205/1205 200 246 - - ---- 58/4/3/3/0 0/0 "GET /v3/domains/config/ldap/user_enabled_default/default HTTP/1.1"
Oct 13 14:12:35 standalone.localdomain haproxy[70940]: 192.168.122.99:57496 [13/Oct/2025:14:12:34.097] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1180/1180 201 477 - - ---- 58/4/3/3/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:35 standalone.localdomain haproxy[70940]: 192.168.122.99:57512 [13/Oct/2025:14:12:34.386] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/914/914 201 446 - - ---- 58/4/3/3/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:35 standalone.localdomain haproxy[70940]: 172.17.0.2:37106 [13/Oct/2025:14:12:34.724] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/632/632 200 8095 - - ---- 58/2/1/1/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:35 standalone.localdomain haproxy[70940]: 172.21.0.2:46996 [13/Oct/2025:14:12:34.719] neutron neutron/standalone.internalapi.localdomain 0/0/0/817/817 200 2976 - - ---- 58/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=5128a58785f94eca8da2f446be10caf7&name=default HTTP/1.1"
Oct 13 14:12:35 standalone.localdomain haproxy[70940]: 172.21.0.2:39836 [13/Oct/2025:14:12:34.985] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/688/688 201 8100 - - ---- 58/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1129: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:35 standalone.localdomain haproxy[70940]: 172.21.0.2:47004 [13/Oct/2025:14:12:35.539] neutron neutron/standalone.internalapi.localdomain 0/0/0/161/161 204 152 - - ---- 58/1/0/0/0 0/0 "DELETE /v2.0/security-groups/5fffcfc2-47c8-4c54-a121-9e5c22a3a302 HTTP/1.1"
Oct 13 14:12:35 standalone.localdomain haproxy[70940]: 192.168.122.99:57522 [13/Oct/2025:14:12:35.250] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/479/479 201 1041 - - ---- 58/6/5/5/0 0/0 "POST /v3/OS-TRUST/trusts HTTP/1.1"
Oct 13 14:12:35 standalone.localdomain haproxy[70940]: 192.168.122.99:57534 [13/Oct/2025:14:12:35.261] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/479/479 200 263 - - ---- 58/6/5/5/0 0/0 "GET /v3/domains/config/ldap/user_attribute_ignore/default HTTP/1.1"
Oct 13 14:12:35 standalone.localdomain haproxy[70940]: 192.168.122.99:57542 [13/Oct/2025:14:12:35.279] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/474/474 201 477 - - ---- 58/6/5/5/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:35 standalone.localdomain haproxy[70940]: 192.168.122.99:57546 [13/Oct/2025:14:12:35.303] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/469/469 204 165 - - ---- 58/6/5/5/0 0/0 "PUT /v3/OS-INHERIT/projects/b11c1c6866a544b5bb839d591f0d9f80/groups/7f8a94d09d5443b3b86b3ce61f4b1eb6/roles/4895aeb2d9c34b65b1b196886b092d1c/inherited_to_projects HTTP/1.1"
Oct 13 14:12:35 standalone.localdomain haproxy[70940]: 192.168.122.99:57562 [13/Oct/2025:14:12:35.675] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/140/140 200 510 - - ---- 58/6/5/5/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:12:35 standalone.localdomain haproxy[70940]: 192.168.122.99:57566 [13/Oct/2025:14:12:35.703] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/157/157 204 165 - - ---- 58/6/5/5/0 0/0 "DELETE /v3/projects/5128a58785f94eca8da2f446be10caf7 HTTP/1.1"
Oct 13 14:12:35 standalone.localdomain ceph-mon[29756]: pgmap v1129: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:35 standalone.localdomain haproxy[70940]: 192.168.122.99:57568 [13/Oct/2025:14:12:35.733] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/231/231 200 813 - - ---- 58/5/4/4/0 0/0 "GET /v3/OS-TRUST/trusts/?trustor_user_id=85fd5d3ecb02402a8f54af8cc5220167 HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 172.21.0.2:47016 [13/Oct/2025:14:12:35.863] neutron neutron/standalone.internalapi.localdomain 0/0/0/161/161 200 2976 - - ---- 58/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=a23b4d807f024441940b662138ce53ea&name=default HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 192.168.122.99:57578 [13/Oct/2025:14:12:35.743] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/301/301 200 257 - - ---- 58/5/4/4/0 0/0 "GET /v3/domains/config/ldap/user_default_project_id_attribute/default HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 192.168.122.99:57582 [13/Oct/2025:14:12:35.756] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/342/342 201 738 - - ---- 59/6/5/4/0 0/0 "PUT /v3/roles/040c149e9e914c1fbf38f69fad3b2dcf/implies/68d6367058624737a941e7f24c17bdc4 HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 192.168.122.99:57588 [13/Oct/2025:14:12:35.776] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/371/371 204 165 - - ---- 58/5/4/4/0 0/0 "HEAD /v3/OS-INHERIT/projects/b11c1c6866a544b5bb839d591f0d9f80/groups/7f8a94d09d5443b3b86b3ce61f4b1eb6/roles/4895aeb2d9c34b65b1b196886b092d1c/inherited_to_projects HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 192.168.122.99:57596 [13/Oct/2025:14:12:35.818] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/379/379 201 574 - - ---- 58/5/4/4/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 192.168.122.99:57608 [13/Oct/2025:14:12:35.968] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/285/285 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/OS-TRUST/trusts/b35e27dfd4c644e39dddc6bf8718499d HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 172.21.0.2:47026 [13/Oct/2025:14:12:36.027] neutron neutron/standalone.internalapi.localdomain 0/0/0/263/263 204 152 - - ---- 58/1/0/0/0 0/0 "DELETE /v2.0/security-groups/b7d46597-cd05-40ab-a7b5-8786efa64f08 HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 192.168.122.99:57618 [13/Oct/2025:14:12:36.047] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/250/250 200 247 - - ---- 58/6/5/5/0 0/0 "GET /v3/domains/config/ldap/user_enabled_emulation/default HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 192.168.122.99:57634 [13/Oct/2025:14:12:36.101] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/243/243 201 483 - - ---- 58/6/5/5/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 192.168.122.99:57648 [13/Oct/2025:14:12:36.149] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/247/247 204 165 - - ---- 58/6/5/5/0 0/0 "DELETE /v3/OS-INHERIT/projects/b11c1c6866a544b5bb839d591f0d9f80/groups/7f8a94d09d5443b3b86b3ce61f4b1eb6/roles/4895aeb2d9c34b65b1b196886b092d1c/inherited_to_projects HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 192.168.122.99:57656 [13/Oct/2025:14:12:36.200] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/484/484 201 498 - - ---- 58/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:12:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 192.168.122.99:57664 [13/Oct/2025:14:12:36.255] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/479/479 404 320 - - ---- 58/6/5/5/0 0/0 "GET /v3/OS-TRUST/trusts/b35e27dfd4c644e39dddc6bf8718499d HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 192.168.122.99:57672 [13/Oct/2025:14:12:36.294] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/500/500 204 165 - - ---- 58/6/5/5/0 0/0 "DELETE /v3/projects/a23b4d807f024441940b662138ce53ea HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain systemd[1]: tmp-crun.L8VMMq.mount: Deactivated successfully.
Oct 13 14:12:36 standalone.localdomain podman[162152]: 2025-10-13 14:12:36.834276504 +0000 UTC m=+0.095334406 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, distribution-scope=public, build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-keystone, tcib_managed=true, io.buildah.version=1.33.12, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-keystone-container, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., container_name=keystone_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9)
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 192.168.122.99:57676 [13/Oct/2025:14:12:36.299] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/544/544 200 249 - - ---- 58/5/4/4/0 0/0 "GET /v3/domains/config/ldap/user_enabled_emulation_dn/default HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain systemd[1]: tmp-crun.f5rP6r.mount: Deactivated successfully.
Oct 13 14:12:36 standalone.localdomain podman[162152]: 2025-10-13 14:12:36.876913052 +0000 UTC m=+0.137970924 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone_cron, vcs-type=git, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:18, description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, summary=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step3, distribution-scope=public, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone)
Oct 13 14:12:36 standalone.localdomain podman[162153]: 2025-10-13 14:12:36.884546417 +0000 UTC m=+0.142930167 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, container_name=nova_api, release=1, vcs-type=git, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_id=tripleo_step4, name=rhosp17/openstack-nova-api, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, build-date=2025-07-21T16:05:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:12:36 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 192.168.122.99:57678 [13/Oct/2025:14:12:36.347] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/551/551 201 477 - - ---- 58/5/4/4/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:36 standalone.localdomain podman[162153]: 2025-10-13 14:12:36.929679991 +0000 UTC m=+0.188063751 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-api-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, vcs-type=git, architecture=x86_64, distribution-scope=public, version=17.1.9, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, managed_by=tripleo_ansible, container_name=nova_api)
Oct 13 14:12:36 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:12:36 standalone.localdomain haproxy[70940]: 192.168.122.99:57694 [13/Oct/2025:14:12:36.399] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/579/579 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/roles/4895aeb2d9c34b65b1b196886b092d1c HTTP/1.1"
Oct 13 14:12:37 standalone.localdomain haproxy[70940]: 192.168.122.99:57700 [13/Oct/2025:14:12:36.689] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/353/353 200 4075 - - ---- 58/5/4/4/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:37 standalone.localdomain haproxy[70940]: 192.168.122.99:57714 [13/Oct/2025:14:12:36.739] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/376/376 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/roles/8aa4a53c4d724802bca0e40728ca9a9a HTTP/1.1"
Oct 13 14:12:37 standalone.localdomain haproxy[70940]: 172.21.0.2:39838 [13/Oct/2025:14:12:36.797] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/628/628 201 8100 - - ---- 58/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:37 standalone.localdomain haproxy[70940]: 192.168.122.99:57730 [13/Oct/2025:14:12:36.847] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/659/659 200 264 - - ---- 58/5/4/4/0 0/0 "GET /v3/domains/config/ldap/user_enabled_emulation_use_group_config/default HTTP/1.1"
Oct 13 14:12:37 standalone.localdomain haproxy[70940]: 192.168.122.99:57742 [13/Oct/2025:14:12:36.901] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/684/684 201 738 - - ---- 58/5/4/4/0 0/0 "PUT /v3/roles/040c149e9e914c1fbf38f69fad3b2dcf/implies/207de820e422450f885bb1a89e1d9dae HTTP/1.1"
Oct 13 14:12:37 standalone.localdomain haproxy[70940]: 192.168.122.99:57748 [13/Oct/2025:14:12:36.984] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/664/664 201 446 - - ---- 58/5/4/4/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1130: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:37 standalone.localdomain haproxy[70940]: 192.168.122.99:57760 [13/Oct/2025:14:12:37.046] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/708/708 204 165 - - ---- 58/5/4/4/0 0/0 "PUT /v3/projects/dcda7d2fbed640718072d8d15b65fd30/users/7b2e404fade44d6bad896a8467fcf68e/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:37 standalone.localdomain haproxy[70940]: 192.168.122.99:57776 [13/Oct/2025:14:12:37.117] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/711/711 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/roles/0ef1be7853d54dbf91960cec9015d2dc HTTP/1.1"
Oct 13 14:12:38 standalone.localdomain haproxy[70940]: 172.21.0.2:39844 [13/Oct/2025:14:12:37.432] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/701/701 201 8100 - - ---- 59/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:38 standalone.localdomain haproxy[70940]: 192.168.122.99:57786 [13/Oct/2025:14:12:37.508] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/673/673 200 255 - - ---- 58/6/5/5/0 0/0 "GET /v3/domains/config/ldap/user_additional_attribute_mapping/default HTTP/1.1"
Oct 13 14:12:38 standalone.localdomain haproxy[70940]: 192.168.122.99:57794 [13/Oct/2025:14:12:37.586] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/653/653 201 733 - - ---- 58/6/5/5/0 0/0 "PUT /v3/roles/040c149e9e914c1fbf38f69fad3b2dcf/implies/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:38 standalone.localdomain haproxy[70940]: 192.168.122.99:57800 [13/Oct/2025:14:12:37.649] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/654/654 204 165 - - ---- 58/6/5/5/0 0/0 "PUT /v3/OS-INHERIT/projects/b11c1c6866a544b5bb839d591f0d9f80/users/ff3d5d8eaf8e411a8039bab9bfc480b9/roles/20c72d93270b4bec87e7b92e3e157cf9/inherited_to_projects HTTP/1.1"
Oct 13 14:12:38 standalone.localdomain haproxy[70940]: 192.168.122.99:57816 [13/Oct/2025:14:12:37.759] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/596/596 200 3833 - - ---- 58/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:38 standalone.localdomain runuser[162353]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:12:38 standalone.localdomain haproxy[70940]: 192.168.122.99:57826 [13/Oct/2025:14:12:37.832] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/655/655 204 165 - - ---- 58/6/5/5/0 0/0 "DELETE /v3/users/85fd5d3ecb02402a8f54af8cc5220167 HTTP/1.1"
Oct 13 14:12:38 standalone.localdomain haproxy[70940]: 192.168.122.99:57828 [13/Oct/2025:14:12:38.134] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/415/415 200 510 - - ---- 58/6/5/5/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:12:38 standalone.localdomain haproxy[70940]: 192.168.122.99:57842 [13/Oct/2025:14:12:38.184] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/411/411 200 237 - - ---- 58/6/5/5/0 0/0 "GET /v3/domains/config/ldap/group_tree_dn/default HTTP/1.1"
Oct 13 14:12:38 standalone.localdomain haproxy[70940]: 192.168.122.99:57856 [13/Oct/2025:14:12:38.241] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/415/415 403 325 - - ---- 58/6/5/5/0 0/0 "PUT /v3/roles/ddf8a916dd4e483ba79315e6ae209170/implies/040c149e9e914c1fbf38f69fad3b2dcf HTTP/1.1"
Oct 13 14:12:38 standalone.localdomain haproxy[70940]: 192.168.122.99:57868 [13/Oct/2025:14:12:38.305] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/395/395 204 165 - - ---- 58/6/5/5/0 0/0 "HEAD /v3/OS-INHERIT/projects/b11c1c6866a544b5bb839d591f0d9f80/users/ff3d5d8eaf8e411a8039bab9bfc480b9/roles/20c72d93270b4bec87e7b92e3e157cf9/inherited_to_projects HTTP/1.1"
Oct 13 14:12:38 standalone.localdomain haproxy[70940]: 192.168.122.99:57884 [13/Oct/2025:14:12:38.358] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/364/364 204 165 - - ---- 58/6/5/5/0 0/0 "PUT /v3/projects/dcda7d2fbed640718072d8d15b65fd30/users/7b2e404fade44d6bad896a8467fcf68e/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:12:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:12:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:12:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:12:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:12:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:12:38 standalone.localdomain ceph-mon[29756]: pgmap v1130: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:38 standalone.localdomain systemd[1]: tmp-crun.mrY2xr.mount: Deactivated successfully.
Oct 13 14:12:38 standalone.localdomain haproxy[70940]: 192.168.122.99:57892 [13/Oct/2025:14:12:38.490] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/335/335 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/projects/ee48481d21ec43539581baba0711956a HTTP/1.1"
Oct 13 14:12:38 standalone.localdomain podman[162442]: 2025-10-13 14:12:38.82728711 +0000 UTC m=+0.085389121 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, build-date=2025-07-21T16:03:34, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-neutron-sriov-agent-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=neutron_sriov_agent, config_id=tripleo_step4, release=1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64)
Oct 13 14:12:38 standalone.localdomain podman[162441]: 2025-10-13 14:12:38.858830428 +0000 UTC m=+0.120974022 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, container_name=barbican_keystone_listener, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, build-date=2025-07-21T16:18:19, config_id=tripleo_step3, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-barbican-keystone-listener-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:12:38 standalone.localdomain haproxy[70940]: 192.168.122.99:57896 [13/Oct/2025:14:12:38.554] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/334/334 201 594 - - ---- 58/5/4/4/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:38 standalone.localdomain podman[162460]: 2025-10-13 14:12:38.914578579 +0000 UTC m=+0.169532543 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-api-container, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, name=rhosp17/openstack-nova-api, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, build-date=2025-07-21T16:05:11, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, container_name=nova_api_cron, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 14:12:38 standalone.localdomain podman[162440]: 2025-10-13 14:12:38.925588736 +0000 UTC m=+0.190035041 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 barbican-api, vcs-type=git, version=17.1.9, container_name=barbican_api, batch=17.1_20250721.1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-api, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, build-date=2025-07-21T15:22:44, com.redhat.component=openstack-barbican-api-container, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, distribution-scope=public)
Oct 13 14:12:38 standalone.localdomain haproxy[70940]: 192.168.122.99:57906 [13/Oct/2025:14:12:38.598] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/331/331 200 236 - - ---- 58/5/4/4/0 0/0 "GET /v3/domains/config/ldap/group_filter/default HTTP/1.1"
Oct 13 14:12:38 standalone.localdomain podman[162460]: 2025-10-13 14:12:38.945339132 +0000 UTC m=+0.200293086 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-api-container, tcib_managed=true, build-date=2025-07-21T16:05:11, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=nova_api_cron, version=17.1.9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f)
Oct 13 14:12:38 standalone.localdomain podman[162439]: 2025-10-13 14:12:38.972544406 +0000 UTC m=+0.233730121 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, container_name=neutron_dhcp, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-dhcp-agent, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-dhcp-agent-container, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:54, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible)
Oct 13 14:12:38 standalone.localdomain haproxy[70940]: 192.168.122.99:57912 [13/Oct/2025:14:12:38.658] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/338/338 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/roles/040c149e9e914c1fbf38f69fad3b2dcf/implies/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:39 standalone.localdomain podman[162447]: 2025-10-13 14:12:39.025251283 +0000 UTC m=+0.282630272 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, 
tcib_managed=true, io.openshift.expose-services=, build-date=2025-07-21T15:36:22, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-barbican-worker-container, vendor=Red Hat, Inc., name=rhosp17/openstack-barbican-worker, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_worker)
Oct 13 14:12:39 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:12:39 standalone.localdomain podman[162441]: 2025-10-13 14:12:39.058024499 +0000 UTC m=+0.320168093 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, release=1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, 
com.redhat.component=openstack-barbican-keystone-listener-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, build-date=2025-07-21T16:18:19, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, container_name=barbican_keystone_listener)
Oct 13 14:12:39 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:12:39 standalone.localdomain haproxy[70940]: 192.168.122.99:57926 [13/Oct/2025:14:12:38.704] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/388/388 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/OS-INHERIT/projects/b11c1c6866a544b5bb839d591f0d9f80/users/ff3d5d8eaf8e411a8039bab9bfc480b9/roles/20c72d93270b4bec87e7b92e3e157cf9/inherited_to_projects HTTP/1.1"
Oct 13 14:12:39 standalone.localdomain podman[162555]: 2025-10-13 14:12:39.09749651 +0000 UTC m=+0.117061352 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, build-date=2025-07-21T16:18:24, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, tcib_managed=true, com.redhat.component=openstack-cinder-backup-container, description=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, name=rhosp17/openstack-cinder-backup, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-backup, batch=17.1_20250721.1)
Oct 13 14:12:39 standalone.localdomain podman[162439]: 2025-10-13 14:12:39.102851275 +0000 UTC m=+0.364036880 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, batch=17.1_20250721.1, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, name=rhosp17/openstack-neutron-dhcp-agent, 
release=1, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, version=17.1.9, container_name=neutron_dhcp)
Oct 13 14:12:39 standalone.localdomain podman[162442]: 2025-10-13 14:12:39.10303408 +0000 UTC m=+0.361136091 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, com.redhat.component=openstack-neutron-sriov-agent-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, build-date=2025-07-21T16:03:34, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, 
architecture=x86_64, container_name=neutron_sriov_agent, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:12:39 standalone.localdomain podman[162440]: 2025-10-13 14:12:39.101223044 +0000 UTC m=+0.365669329 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, batch=17.1_20250721.1, container_name=barbican_api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:22:44, com.redhat.component=openstack-barbican-api-container, 
io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, description=Red Hat OpenStack Platform 17.1 barbican-api, release=1, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc.)
Oct 13 14:12:39 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:12:39 standalone.localdomain podman[162447]: 2025-10-13 14:12:39.149715653 +0000 UTC m=+0.407094632 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 barbican-worker, io.buildah.version=1.33.12, architecture=x86_64, container_name=barbican_worker, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 barbican-worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, build-date=2025-07-21T15:36:22, io.openshift.expose-services=, release=1, name=rhosp17/openstack-barbican-worker, managed_by=tripleo_ansible, com.redhat.component=openstack-barbican-worker-container, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4)
Oct 13 14:12:39 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:12:39 standalone.localdomain podman[162555]: 2025-10-13 14:12:39.181112236 +0000 UTC m=+0.200677108 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vendor=Red Hat, Inc., release=1, version=17.1.9, com.redhat.component=openstack-cinder-backup-container, description=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cinder-backup, io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, build-date=2025-07-21T16:18:24, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-backup, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup)
Oct 13 14:12:39 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:12:39 standalone.localdomain runuser[162353]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:12:39 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:12:39 standalone.localdomain runuser[162669]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:12:39 standalone.localdomain haproxy[70940]: 172.21.0.2:39852 [13/Oct/2025:14:12:38.728] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/700/700 201 8112 - - ---- 58/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:39 standalone.localdomain haproxy[70940]: 192.168.122.99:57932 [13/Oct/2025:14:12:38.829] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/668/668 201 536 - - ---- 58/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1131: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:39 standalone.localdomain haproxy[70940]: 192.168.122.99:57944 [13/Oct/2025:14:12:38.890] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/913/913 201 508 - - ---- 58/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:39 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:12:39 standalone.localdomain haproxy[70940]: 192.168.122.99:57952 [13/Oct/2025:14:12:38.932] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/923/923 200 251 - - ---- 58/6/5/5/0 0/0 "GET /v3/domains/config/ldap/group_objectclass/default HTTP/1.1"
Oct 13 14:12:39 standalone.localdomain recover_tripleo_nova_virtqemud[162756]: 93291
Oct 13 14:12:39 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:12:39 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:12:39 standalone.localdomain haproxy[70940]: 192.168.122.99:57962 [13/Oct/2025:14:12:39.000] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/911/911 204 165 - - ---- 58/6/5/5/0 0/0 "DELETE /v3/roles/040c149e9e914c1fbf38f69fad3b2dcf/implies/207de820e422450f885bb1a89e1d9dae HTTP/1.1"
Oct 13 14:12:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:12:39 standalone.localdomain runuser[162669]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:12:39 standalone.localdomain haproxy[70940]: 192.168.122.99:57966 [13/Oct/2025:14:12:39.093] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/885/885 204 165 - - ---- 58/6/5/5/0 0/0 "DELETE /v3/roles/20c72d93270b4bec87e7b92e3e157cf9 HTTP/1.1"
Oct 13 14:12:39 standalone.localdomain runuser[162766]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:12:40 standalone.localdomain haproxy[70940]: 192.168.122.99:57968 [13/Oct/2025:14:12:39.431] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/604/604 201 574 - - ---- 58/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:40 standalone.localdomain haproxy[70940]: 192.168.122.99:57984 [13/Oct/2025:14:12:39.498] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/835/835 201 623 - - ---- 58/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:40 standalone.localdomain haproxy[70940]: 192.168.122.99:40434 [13/Oct/2025:14:12:39.804] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/576/576 200 3603 - - ---- 58/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:40 standalone.localdomain haproxy[70940]: 192.168.122.99:40438 [13/Oct/2025:14:12:39.858] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/572/572 200 242 - - ---- 58/6/5/5/0 0/0 "GET /v3/domains/config/ldap/group_id_attribute/default HTTP/1.1"
Oct 13 14:12:40 standalone.localdomain haproxy[70940]: 192.168.122.99:40444 [13/Oct/2025:14:12:39.920] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/586/586 204 165 - - ---- 58/6/5/5/0 0/0 "DELETE /v3/roles/207de820e422450f885bb1a89e1d9dae HTTP/1.1"
Oct 13 14:12:40 standalone.localdomain haproxy[70940]: 192.168.122.99:40450 [13/Oct/2025:14:12:39.981] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/580/580 201 447 - - ---- 58/6/5/5/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:40 standalone.localdomain ceph-mon[29756]: pgmap v1131: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:40 standalone.localdomain runuser[162766]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:12:40 standalone.localdomain haproxy[70940]: 192.168.122.99:40462 [13/Oct/2025:14:12:40.037] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/886/886 201 497 - - ---- 58/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40470 [13/Oct/2025:14:12:40.335] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/699/699 201 451 - - ---- 58/6/5/5/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40476 [13/Oct/2025:14:12:40.382] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/703/703 204 165 - - ---- 58/6/5/5/0 0/0 "PUT /v3/projects/dfc1dd5f33a448e09863a9e23b14834f/users/86861b7f6977497ea2f5bc5f76ef2c37/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40488 [13/Oct/2025:14:12:40.433] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/697/697 200 244 - - ---- 58/6/5/5/0 0/0 "GET /v3/domains/config/ldap/group_name_attribute/default HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40494 [13/Oct/2025:14:12:40.507] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/698/698 200 479 - - ---- 58/6/5/5/0 0/0 "PATCH /v3/domains/c3c76986b3844c18ac786ca55bb2a30c HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40502 [13/Oct/2025:14:12:40.564] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/693/693 204 165 - - ---- 58/6/5/5/0 0/0 "PUT /v3/OS-INHERIT/domains/91c7459d11e443cab3351ad0c2d8dcc3/groups/7f8a94d09d5443b3b86b3ce61f4b1eb6/roles/29a100cba4d3401381f2c91ac678d9d8/inherited_to_projects HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40510 [13/Oct/2025:14:12:40.927] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/380/380 200 4069 - - ---- 58/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40524 [13/Oct/2025:14:12:41.036] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/322/322 201 453 - - ---- 58/6/5/5/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40526 [13/Oct/2025:14:12:41.087] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/291/291 200 4306 - - ---- 58/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40538 [13/Oct/2025:14:12:41.133] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/292/292 200 246 - - ---- 58/6/5/5/0 0/0 "GET /v3/domains/config/ldap/group_members_are_ids/default HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40548 [13/Oct/2025:14:12:41.208] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/338/338 204 165 - - ---- 58/6/5/5/0 0/0 "DELETE /v3/domains/c3c76986b3844c18ac786ca55bb2a30c HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40560 [13/Oct/2025:14:12:41.260] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/335/335 200 650 - - ---- 58/6/5/5/0 0/0 "GET /v3/OS-INHERIT/domains/91c7459d11e443cab3351ad0c2d8dcc3/groups/7f8a94d09d5443b3b86b3ce61f4b1eb6/roles/inherited_to_projects HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40576 [13/Oct/2025:14:12:41.310] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/343/343 204 165 - - ---- 58/6/5/5/0 0/0 "PUT /v3/projects/c68bdcc67389489eaf1560e1592ad4eb/users/1626c6cbb1ad405886368172699487d1/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1132: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40584 [13/Oct/2025:14:12:41.362] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/351/351 204 165 - - ---- 58/6/5/5/0 0/0 "PUT /v3/projects/00b56f438d36464889d9e22ce509d45a/users/88c2c2b03e474d15a7f3116ca4e98823/roles/47c2d2d0c2e74a21a819b98618dc5a90 HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40594 [13/Oct/2025:14:12:41.381] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/380/380 204 165 - - ---- 58/6/5/5/0 0/0 "PUT /v3/projects/dfc1dd5f33a448e09863a9e23b14834f/users/86861b7f6977497ea2f5bc5f76ef2c37/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40600 [13/Oct/2025:14:12:41.428] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/385/385 200 250 - - ---- 58/5/4/4/0 0/0 "GET /v3/domains/config/ldap/group_member_attribute/default HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40610 [13/Oct/2025:14:12:41.549] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/316/316 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/roles/040c149e9e914c1fbf38f69fad3b2dcf/implies/68d6367058624737a941e7f24c17bdc4 HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40616 [13/Oct/2025:14:12:41.599] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/310/310 204 165 - - ---- 58/5/4/4/0 0/0 "HEAD /v3/OS-INHERIT/domains/91c7459d11e443cab3351ad0c2d8dcc3/groups/7f8a94d09d5443b3b86b3ce61f4b1eb6/roles/29a100cba4d3401381f2c91ac678d9d8/inherited_to_projects HTTP/1.1"
Oct 13 14:12:41 standalone.localdomain haproxy[70940]: 192.168.122.99:40620 [13/Oct/2025:14:12:41.656] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/298/298 200 4306 - - ---- 59/6/5/4/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:42 standalone.localdomain haproxy[70940]: 192.168.122.99:40628 [13/Oct/2025:14:12:41.716] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/292/292 204 165 - - ---- 58/5/4/4/0 0/0 "PUT /v3/projects/00b56f438d36464889d9e22ce509d45a/users/88c2c2b03e474d15a7f3116ca4e98823/roles/cab33031ea804577aa06042398ed163c HTTP/1.1"
Oct 13 14:12:42 standalone.localdomain haproxy[70940]: 172.21.0.2:55640 [13/Oct/2025:14:12:41.768] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/559/559 201 8132 - - ---- 58/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:42 standalone.localdomain haproxy[70940]: 192.168.122.99:40642 [13/Oct/2025:14:12:41.817] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/557/557 200 253 - - ---- 58/6/5/5/0 0/0 "GET /v3/domains/config/ldap/group_desc_attribute/default HTTP/1.1"
Oct 13 14:12:42 standalone.localdomain haproxy[70940]: 192.168.122.99:40648 [13/Oct/2025:14:12:41.868] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/565/565 204 165 - - ---- 58/6/5/5/0 0/0 "DELETE /v3/roles/68d6367058624737a941e7f24c17bdc4 HTTP/1.1"
Oct 13 14:12:42 standalone.localdomain haproxy[70940]: 192.168.122.99:40660 [13/Oct/2025:14:12:41.913] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/580/580 204 165 - - ---- 58/6/5/5/0 0/0 "DELETE /v3/OS-INHERIT/domains/91c7459d11e443cab3351ad0c2d8dcc3/groups/7f8a94d09d5443b3b86b3ce61f4b1eb6/roles/29a100cba4d3401381f2c91ac678d9d8/inherited_to_projects HTTP/1.1"
Oct 13 14:12:42 standalone.localdomain haproxy[70940]: 192.168.122.99:40670 [13/Oct/2025:14:12:41.959] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/638/638 204 165 - - ---- 58/6/5/5/0 0/0 "PUT /v3/projects/c68bdcc67389489eaf1560e1592ad4eb/users/1626c6cbb1ad405886368172699487d1/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:12:42 standalone.localdomain haproxy[70940]: 192.168.122.99:40676 [13/Oct/2025:14:12:42.010] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/650/650 200 9491 - - ---- 58/6/5/5/0 0/0 "GET /v3/users HTTP/1.1"
Oct 13 14:12:42 standalone.localdomain haproxy[70940]: 192.168.122.99:40684 [13/Oct/2025:14:12:42.330] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/382/382 201 594 - - ---- 58/5/4/4/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:42 standalone.localdomain haproxy[70940]: 192.168.122.99:40700 [13/Oct/2025:14:12:42.376] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/388/388 200 244 - - ---- 58/5/4/4/0 0/0 "GET /v3/domains/config/ldap/group_attribute_ignore/default HTTP/1.1"
Oct 13 14:12:42 standalone.localdomain ceph-mon[29756]: pgmap v1132: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:42 standalone.localdomain haproxy[70940]: 192.168.122.99:40712 [13/Oct/2025:14:12:42.436] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/393/393 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/roles/040c149e9e914c1fbf38f69fad3b2dcf HTTP/1.1"
Oct 13 14:12:42 standalone.localdomain haproxy[70940]: 192.168.122.99:40728 [13/Oct/2025:14:12:42.495] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/387/387 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/roles/29a100cba4d3401381f2c91ac678d9d8 HTTP/1.1"
Oct 13 14:12:42 standalone.localdomain haproxy[70940]: 192.168.122.99:40730 [13/Oct/2025:14:12:42.600] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/326/326 200 4075 - - ---- 58/5/4/4/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:43 standalone.localdomain haproxy[70940]: 172.21.0.2:55648 [13/Oct/2025:14:12:42.664] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/588/588 201 8141 - - ---- 58/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:43 standalone.localdomain haproxy[70940]: 192.168.122.99:40736 [13/Oct/2025:14:12:42.715] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/844/844 201 507 - - ---- 59/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:43 standalone.localdomain haproxy[70940]: 192.168.122.99:40748 [13/Oct/2025:14:12:42.766] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/850/850 200 256 - - ---- 58/5/4/4/0 0/0 "GET /v3/domains/config/ldap/group_additional_attribute_mapping/default HTTP/1.1"
Oct 13 14:12:43 standalone.localdomain haproxy[70940]: 192.168.122.99:40762 [13/Oct/2025:14:12:42.832] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/852/852 201 728 - - ---- 58/5/4/4/0 0/0 "PUT /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/8404b71aa506493ba9615df19ffdd490 HTTP/1.1"
Oct 13 14:12:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1133: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:43 standalone.localdomain haproxy[70940]: 192.168.122.99:40778 [13/Oct/2025:14:12:42.885] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/849/849 201 446 - - ---- 58/5/4/4/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:43 standalone.localdomain haproxy[70940]: 192.168.122.99:40790 [13/Oct/2025:14:12:42.929] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/855/855 204 165 - - ---- 58/5/4/4/0 0/0 "PUT /v3/projects/c68bdcc67389489eaf1560e1592ad4eb/users/1626c6cbb1ad405886368172699487d1/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:44 standalone.localdomain haproxy[70940]: 172.21.0.2:55664 [13/Oct/2025:14:12:43.255] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/830/830 201 8141 - - ---- 58/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:44 standalone.localdomain haproxy[70940]: 192.168.122.99:40796 [13/Oct/2025:14:12:43.561] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/578/578 200 4305 - - ---- 58/5/4/4/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:44 standalone.localdomain haproxy[70940]: 192.168.122.99:40812 [13/Oct/2025:14:12:43.619] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/563/563 200 238 - - ---- 59/6/4/4/0 0/0 "GET /v3/domains/config/ldap/tls_cacertfile/default HTTP/1.1"
Oct 13 14:12:44 standalone.localdomain haproxy[70940]: 192.168.122.99:40828 [13/Oct/2025:14:12:43.687] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/584/584 204 165 - - ---- 58/5/4/4/0 0/0 "HEAD /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/8404b71aa506493ba9615df19ffdd490 HTTP/1.1"
Oct 13 14:12:44 standalone.localdomain haproxy[70940]: 192.168.122.99:40840 [13/Oct/2025:14:12:43.736] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/625/625 204 165 - - ---- 58/5/4/4/0 0/0 "PUT /v3/OS-INHERIT/domains/91c7459d11e443cab3351ad0c2d8dcc3/users/ff3d5d8eaf8e411a8039bab9bfc480b9/roles/a6194228ed704a778ca86838711577e1/inherited_to_projects HTTP/1.1"
Oct 13 14:12:44 standalone.localdomain haproxy[70940]: 172.21.0.2:55672 [13/Oct/2025:14:12:43.788] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/909/909 201 8172 - - ---- 58/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:12:44 standalone.localdomain ceph-mon[29756]: pgmap v1133: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:44 standalone.localdomain haproxy[70940]: 192.168.122.99:40842 [13/Oct/2025:14:12:44.088] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/717/717 201 1066 - - ---- 58/6/5/5/0 0/0 "POST /v3/OS-TRUST/trusts HTTP/1.1"
Oct 13 14:12:44 standalone.localdomain podman[162866]: 2025-10-13 14:12:44.809659611 +0000 UTC m=+0.073787394 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, managed_by=tripleo_ansible, 
build-date=2025-07-21T13:30:04, container_name=ovn_cluster_northd, name=rhosp17/openstack-ovn-northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.component=openstack-ovn-northd-container, summary=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, batch=17.1_20250721.1, config_id=ovn_cluster_northd, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12)
Oct 13 14:12:44 standalone.localdomain podman[162866]: 2025-10-13 14:12:44.822782634 +0000 UTC m=+0.086910407 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, container_name=ovn_cluster_northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, distribution-scope=public, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, build-date=2025-07-21T13:30:04, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, config_id=ovn_cluster_northd, com.redhat.component=openstack-ovn-northd-container, vcs-type=git, name=rhosp17/openstack-ovn-northd)
Oct 13 14:12:44 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:12:44 standalone.localdomain haproxy[70940]: 192.168.122.99:40856 [13/Oct/2025:14:12:44.141] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/735/735 204 165 - - ---- 58/6/5/5/0 0/0 "PUT /v3/projects/b8ff58559ddc448185bdca26cc022f4f/users/5c62c2625aaa401aa23fe742117ddcaa/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:12:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:12:44 standalone.localdomain haproxy[70940]: 192.168.122.99:40858 [13/Oct/2025:14:12:44.183] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/781/781 200 237 - - ---- 58/6/5/5/0 0/0 "GET /v3/domains/config/ldap/tls_cacertdir/default HTTP/1.1"
Oct 13 14:12:45 standalone.localdomain haproxy[70940]: 192.168.122.99:40870 [13/Oct/2025:14:12:44.274] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/770/770 200 723 - - ---- 58/6/5/5/0 0/0 "GET /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/8404b71aa506493ba9615df19ffdd490 HTTP/1.1"
Oct 13 14:12:45 standalone.localdomain haproxy[70940]: 192.168.122.99:40878 [13/Oct/2025:14:12:44.364] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/732/732 200 648 - - ---- 58/6/5/5/0 0/0 "GET /v3/OS-INHERIT/domains/91c7459d11e443cab3351ad0c2d8dcc3/users/ff3d5d8eaf8e411a8039bab9bfc480b9/roles/inherited_to_projects HTTP/1.1"
Oct 13 14:12:45 standalone.localdomain haproxy[70940]: 192.168.122.99:40884 [13/Oct/2025:14:12:44.700] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/473/473 201 567 - - ---- 58/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:45 standalone.localdomain haproxy[70940]: 192.168.122.99:40888 [13/Oct/2025:14:12:44.808] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/440/440 200 1061 - - ---- 58/6/5/5/0 0/0 "GET /v3/OS-TRUST/trusts/85b177291b3b40feaf3dba7657d15578 HTTP/1.1"
Oct 13 14:12:45 standalone.localdomain haproxy[70940]: 192.168.122.99:40890 [13/Oct/2025:14:12:44.879] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/434/434 200 4305 - - ---- 60/8/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:45 standalone.localdomain haproxy[70940]: 192.168.122.99:40892 [13/Oct/2025:14:12:44.966] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/393/393 200 232 - - ---- 57/6/5/5/0 0/0 "GET /v3/domains/config/ldap/use_tls/default HTTP/1.1"
Oct 13 14:12:45 standalone.localdomain haproxy[70940]: 192.168.122.99:40902 [13/Oct/2025:14:12:45.045] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/373/373 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/8404b71aa506493ba9615df19ffdd490 HTTP/1.1"
Oct 13 14:12:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1134: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:12:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:12:45 standalone.localdomain haproxy[70940]: 192.168.122.99:40910 [13/Oct/2025:14:12:45.097] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/707/707 204 165 - - ---- 57/6/5/5/0 0/0 "HEAD /v3/OS-INHERIT/domains/91c7459d11e443cab3351ad0c2d8dcc3/users/ff3d5d8eaf8e411a8039bab9bfc480b9/roles/a6194228ed704a778ca86838711577e1/inherited_to_projects HTTP/1.1"
Oct 13 14:12:45 standalone.localdomain podman[162893]: 2025-10-13 14:12:45.851734783 +0000 UTC m=+0.111955556 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1, managed_by=tripleo_ansible, container_name=nova_compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:12:45 standalone.localdomain podman[162893]: 2025-10-13 14:12:45.884288672 +0000 UTC m=+0.144509465 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Oct 13 14:12:45 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:12:45 standalone.localdomain systemd[1]: tmp-crun.kgDZ4r.mount: Deactivated successfully.
Oct 13 14:12:45 standalone.localdomain podman[162894]: 2025-10-13 14:12:45.964281505 +0000 UTC m=+0.221042422 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, architecture=x86_64, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-mariadb-container, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, vendor=Red Hat, Inc., release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, tcib_managed=true, batch=17.1_20250721.1, 
build-date=2025-07-21T12:58:45, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, container_name=clustercheck, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:12:46 standalone.localdomain ceph-mon[29756]: pgmap v1134: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:46 standalone.localdomain podman[162894]: 2025-10-13 14:12:46.078932793 +0000 UTC m=+0.335693710 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.component=openstack-mariadb-container, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, container_name=clustercheck, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, 
io.openshift.expose-services=, build-date=2025-07-21T12:58:45, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, distribution-scope=public)
Oct 13 14:12:46 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:40920 [13/Oct/2025:14:12:45.175] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1024/1024 201 580 - - ---- 57/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:40928 [13/Oct/2025:14:12:45.250] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1000/1000 200 573 - - ---- 57/6/5/5/0 0/0 "GET /v3/OS-TRUST/trusts/85b177291b3b40feaf3dba7657d15578/roles HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:40932 [13/Oct/2025:14:12:45.315] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/985/985 204 165 - - ---- 57/6/5/5/0 0/0 "PUT /v3/projects/b8ff58559ddc448185bdca26cc022f4f/users/5c62c2625aaa401aa23fe742117ddcaa/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:40940 [13/Oct/2025:14:12:45.361] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/975/975 200 240 - - ---- 57/6/5/5/0 0/0 "GET /v3/domains/config/ldap/tls_req_cert/default HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:40950 [13/Oct/2025:14:12:45.421] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/959/959 404 346 - - ---- 57/6/5/5/0 0/0 "GET /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/8404b71aa506493ba9615df19ffdd490 HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:40954 [13/Oct/2025:14:12:45.807] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/631/631 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/OS-INHERIT/domains/91c7459d11e443cab3351ad0c2d8dcc3/users/ff3d5d8eaf8e411a8039bab9bfc480b9/roles/a6194228ed704a778ca86838711577e1/inherited_to_projects HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:40958 [13/Oct/2025:14:12:46.202] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/288/288 200 575 - - ---- 58/7/5/5/0 0/0 "GET /v3/users/b31200d1027246c29ebc4b93437a8cbb HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:40970 [13/Oct/2025:14:12:46.254] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/284/284 200 446 - - ---- 57/6/5/5/0 0/0 "GET /v3/OS-TRUST/trusts/85b177291b3b40feaf3dba7657d15578/roles/47c2d2d0c2e74a21a819b98618dc5a90 HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:40978 [13/Oct/2025:14:12:46.301] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/279/279 200 4305 - - ---- 57/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:40988 [13/Oct/2025:14:12:46.337] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/281/281 200 232 - - ---- 57/6/5/5/0 0/0 "GET /v3/domains/config/ldap/use_pool/default HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:41004 [13/Oct/2025:14:12:46.382] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/282/282 404 346 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/8404b71aa506493ba9615df19ffdd490 HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:41006 [13/Oct/2025:14:12:46.441] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/280/280 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/roles/a6194228ed704a778ca86838711577e1 HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:41008 [13/Oct/2025:14:12:46.492] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/302/302 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/users/b31200d1027246c29ebc4b93437a8cbb HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:41020 [13/Oct/2025:14:12:46.543] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/305/305 200 202 - - ---- 57/6/5/5/0 0/0 "HEAD /v3/OS-TRUST/trusts/85b177291b3b40feaf3dba7657d15578/roles/47c2d2d0c2e74a21a819b98618dc5a90 HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:41028 [13/Oct/2025:14:12:46.584] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/310/310 204 165 - - ---- 57/6/5/5/0 0/0 "PUT /v3/projects/b8ff58559ddc448185bdca26cc022f4f/users/5c62c2625aaa401aa23fe742117ddcaa/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:41032 [13/Oct/2025:14:12:46.622] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/315/315 200 231 - - ---- 57/5/4/4/0 0/0 "GET /v3/domains/config/ldap/pool_size/default HTTP/1.1"
Oct 13 14:12:46 standalone.localdomain haproxy[70940]: 192.168.122.99:41036 [13/Oct/2025:14:12:46.670] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/315/315 201 728 - - ---- 57/5/4/4/0 0/0 "PUT /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/8404b71aa506493ba9615df19ffdd490 HTTP/1.1"
Oct 13 14:12:47 standalone.localdomain haproxy[70940]: 192.168.122.99:41042 [13/Oct/2025:14:12:46.727] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/315/315 201 446 - - ---- 57/5/4/4/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:47 standalone.localdomain haproxy[70940]: 192.168.122.99:41056 [13/Oct/2025:14:12:46.799] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/343/343 204 165 - - ---- 58/6/4/4/0 0/0 "DELETE /v3/projects/f24da31dd72b44e5a1465ac4f40e1de4 HTTP/1.1"
Oct 13 14:12:47 standalone.localdomain haproxy[70940]: 192.168.122.99:41060 [13/Oct/2025:14:12:46.851] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/340/340 404 319 - - ---- 57/5/4/4/0 0/0 "GET /v3/OS-TRUST/trusts/85b177291b3b40feaf3dba7657d15578/roles/cab33031ea804577aa06042398ed163c HTTP/1.1"
Oct 13 14:12:47 standalone.localdomain haproxy[70940]: 172.21.0.2:55688 [13/Oct/2025:14:12:46.901] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/618/618 201 8192 - - ---- 57/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:47 standalone.localdomain haproxy[70940]: 192.168.122.99:41068 [13/Oct/2025:14:12:46.939] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/628/628 200 235 - - ---- 57/6/5/5/0 0/0 "GET /v3/domains/config/ldap/pool_retry_max/default HTTP/1.1"
Oct 13 14:12:47 standalone.localdomain haproxy[70940]: 192.168.122.99:41078 [13/Oct/2025:14:12:46.988] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/636/636 201 727 - - ---- 57/6/5/5/0 0/0 "PUT /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/a8c80f0b51b545afbbc544ab0e449dce HTTP/1.1"
Oct 13 14:12:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1135: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:47 standalone.localdomain haproxy[70940]: 192.168.122.99:41082 [13/Oct/2025:14:12:47.043] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/667/667 201 616 - - ---- 57/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:47 standalone.localdomain haproxy[70940]: 192.168.122.99:41084 [13/Oct/2025:14:12:47.145] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/651/651 201 565 - - ---- 57/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:47 standalone.localdomain haproxy[70940]: 192.168.122.99:41094 [13/Oct/2025:14:12:47.192] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/694/694 404 209 - - ---- 57/6/5/5/0 0/0 "HEAD /v3/OS-TRUST/trusts/85b177291b3b40feaf3dba7657d15578/roles/cab33031ea804577aa06042398ed163c HTTP/1.1"
Oct 13 14:12:47 standalone.localdomain haproxy[70940]: 192.168.122.99:41100 [13/Oct/2025:14:12:47.521] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/439/439 201 567 - - ---- 57/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:48 standalone.localdomain haproxy[70940]: 192.168.122.99:41106 [13/Oct/2025:14:12:47.571] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/451/451 200 239 - - ---- 57/6/5/5/0 0/0 "GET /v3/domains/config/ldap/pool_retry_delay/default HTTP/1.1"
Oct 13 14:12:48 standalone.localdomain haproxy[70940]: 192.168.122.99:41120 [13/Oct/2025:14:12:47.627] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/470/470 201 727 - - ---- 57/6/5/5/0 0/0 "PUT /v3/roles/a8c80f0b51b545afbbc544ab0e449dce/implies/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:48 standalone.localdomain haproxy[70940]: 192.168.122.99:41126 [13/Oct/2025:14:12:47.712] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/441/441 204 165 - - ---- 57/6/5/5/0 0/0 "PUT /v3/OS-INHERIT/domains/91c7459d11e443cab3351ad0c2d8dcc3/users/ff3d5d8eaf8e411a8039bab9bfc480b9/roles/69d078ebcf8542708bfd697be9a2344d/inherited_to_projects HTTP/1.1"
Oct 13 14:12:48 standalone.localdomain haproxy[70940]: 192.168.122.99:41132 [13/Oct/2025:14:12:47.799] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/722/722 201 621 - - ---- 57/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:48 standalone.localdomain haproxy[70940]: 192.168.122.99:41140 [13/Oct/2025:14:12:47.890] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/702/702 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/OS-TRUST/trusts/85b177291b3b40feaf3dba7657d15578 HTTP/1.1"
Oct 13 14:12:48 standalone.localdomain haproxy[70940]: 192.168.122.99:41156 [13/Oct/2025:14:12:47.963] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/714/714 201 312 - - ---- 57/6/5/5/0 0/0 "PUT /v3/projects/73cfc65a7cc8449698b5209fa92edeac/tags/tempest-tag-1172067207 HTTP/1.1"
Oct 13 14:12:48 standalone.localdomain haproxy[70940]: 192.168.122.99:41172 [13/Oct/2025:14:12:48.024] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/700/700 200 245 - - ---- 57/6/5/5/0 0/0 "GET /v3/domains/config/ldap/pool_connection_timeout/default HTTP/1.1"
Oct 13 14:12:48 standalone.localdomain ceph-mon[29756]: pgmap v1135: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:48 standalone.localdomain haproxy[70940]: 192.168.122.99:41184 [13/Oct/2025:14:12:48.100] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/674/674 200 1809 - - ---- 57/6/5/5/0 0/0 "GET /v3/role_inferences HTTP/1.1"
Oct 13 14:12:48 standalone.localdomain haproxy[70940]: 192.168.122.99:41200 [13/Oct/2025:14:12:48.155] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/687/687 200 850 - - ---- 57/6/5/5/0 0/0 "GET /v3/role_assignments?scope.project.id=b11c1c6866a544b5bb839d591f0d9f80&user.id=ff3d5d8eaf8e411a8039bab9bfc480b9&effective HTTP/1.1"
Oct 13 14:12:48 standalone.localdomain haproxy[70940]: 192.168.122.99:41202 [13/Oct/2025:14:12:48.523] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/339/339 201 446 - - ---- 57/6/5/5/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:48 standalone.localdomain haproxy[70940]: 192.168.122.99:41218 [13/Oct/2025:14:12:48.595] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/296/296 404 320 - - ---- 57/6/5/5/0 0/0 "GET /v3/OS-TRUST/trusts/85b177291b3b40feaf3dba7657d15578 HTTP/1.1"
Oct 13 14:12:48 standalone.localdomain haproxy[70940]: 192.168.122.99:41228 [13/Oct/2025:14:12:48.679] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/235/235 204 165 - - ---- 57/6/5/5/0 0/0 "GET /v3/projects/73cfc65a7cc8449698b5209fa92edeac/tags/tempest-tag-1172067207 HTTP/1.1"
Oct 13 14:12:48 standalone.localdomain haproxy[70940]: 192.168.122.99:41240 [13/Oct/2025:14:12:48.728] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/196/196 200 247 - - ---- 57/6/5/5/0 0/0 "GET /v3/domains/config/ldap/pool_connection_lifetime/default HTTP/1.1"
Oct 13 14:12:48 standalone.localdomain haproxy[70940]: 192.168.122.99:41242 [13/Oct/2025:14:12:48.776] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/162/162 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/roles/a8c80f0b51b545afbbc544ab0e449dce/implies/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41254 [13/Oct/2025:14:12:48.844] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/162/162 200 850 - - ---- 57/6/5/5/0 0/0 "GET /v3/role_assignments?scope.project.id=9eec79f8c06e44b4aae93792f6ebef01&user.id=ff3d5d8eaf8e411a8039bab9bfc480b9&effective HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41260 [13/Oct/2025:14:12:48.865] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/191/191 200 616 - - ---- 57/6/5/5/0 0/0 "GET /v3/users/7635d9f723cd4e8591f5fc90d4c5145b HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41272 [13/Oct/2025:14:12:48.895] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/227/227 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/roles/cab33031ea804577aa06042398ed163c HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41274 [13/Oct/2025:14:12:48.918] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/280/280 200 344 - - ---- 57/6/5/5/0 0/0 "PUT /v3/projects/73cfc65a7cc8449698b5209fa92edeac/tags HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41286 [13/Oct/2025:14:12:48.926] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/321/321 200 237 - - ---- 57/6/5/5/0 0/0 "GET /v3/domains/config/ldap/use_auth_pool/default HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41298 [13/Oct/2025:14:12:48.940] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/364/364 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/a8c80f0b51b545afbbc544ab0e449dce HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41304 [13/Oct/2025:14:12:49.008] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/348/348 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/OS-INHERIT/domains/91c7459d11e443cab3351ad0c2d8dcc3/users/ff3d5d8eaf8e411a8039bab9bfc480b9/roles/69d078ebcf8542708bfd697be9a2344d/inherited_to_projects HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41306 [13/Oct/2025:14:12:49.057] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/334/334 200 441 - - ---- 57/6/5/5/0 0/0 "GET /v3/roles/bafab5c6a3314fd5a256d8b665e21592 HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41314 [13/Oct/2025:14:12:49.125] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/323/323 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/roles/47c2d2d0c2e74a21a819b98618dc5a90 HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41316 [13/Oct/2025:14:12:49.201] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/300/300 200 344 - - ---- 57/6/5/5/0 0/0 "GET /v3/projects/73cfc65a7cc8449698b5209fa92edeac/tags HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41326 [13/Oct/2025:14:12:49.248] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/295/295 200 237 - - ---- 58/7/6/5/0 0/0 "GET /v3/domains/config/ldap/auth_pool_size/default HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41336 [13/Oct/2025:14:12:49.305] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/288/288 204 165 - - ---- 58/7/6/6/0 0/0 "DELETE /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/8404b71aa506493ba9615df19ffdd490 HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41352 [13/Oct/2025:14:12:49.359] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/307/307 200 430 - - ---- 57/6/5/5/0 0/0 "GET /v3/role_assignments?scope.project.id=b11c1c6866a544b5bb839d591f0d9f80&user.id=ff3d5d8eaf8e411a8039bab9bfc480b9&effective HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1136: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41364 [13/Oct/2025:14:12:49.393] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/327/327 201 566 - - ---- 57/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41368 [13/Oct/2025:14:12:49.451] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/347/347 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/users/88c2c2b03e474d15a7f3116ca4e98823 HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41374 [13/Oct/2025:14:12:49.503] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/397/397 204 165 - - ---- 58/7/6/5/0 0/0 "DELETE /v3/projects/73cfc65a7cc8449698b5209fa92edeac/tags/tempest-tag-1084861882 HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:41388 [13/Oct/2025:14:12:49.547] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/394/394 200 251 - - ---- 57/6/5/5/0 0/0 "GET /v3/domains/config/ldap/auth_pool_connection_lifetime/default HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain haproxy[70940]: 192.168.122.99:56304 [13/Oct/2025:14:12:49.600] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/386/386 200 4063 - - ---- 57/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:12:50 standalone.localdomain haproxy[70940]: 192.168.122.99:56308 [13/Oct/2025:14:12:49.669] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/385/385 200 430 - - ---- 57/6/5/5/0 0/0 "GET /v3/role_assignments?scope.project.id=9eec79f8c06e44b4aae93792f6ebef01&user.id=ff3d5d8eaf8e411a8039bab9bfc480b9&effective HTTP/1.1"
Oct 13 14:12:50 standalone.localdomain haproxy[70940]: 192.168.122.99:56322 [13/Oct/2025:14:12:49.723] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/373/373 200 561 - - ---- 58/7/6/5/0 0/0 "GET /v3/projects/12a64a108ce2446da9121bee46e681f7 HTTP/1.1"
Oct 13 14:12:50 standalone.localdomain haproxy[70940]: 192.168.122.99:56326 [13/Oct/2025:14:12:49.800] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/383/383 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/projects/00b56f438d36464889d9e22ce509d45a HTTP/1.1"
Oct 13 14:12:50 standalone.localdomain haproxy[70940]: 192.168.122.99:56340 [13/Oct/2025:14:12:49.903] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/343/343 404 316 - - ---- 57/6/5/5/0 0/0 "GET /v3/projects/73cfc65a7cc8449698b5209fa92edeac/tags/tempest-tag-1084861882 HTTP/1.1"
Oct 13 14:12:50 standalone.localdomain haproxy[70940]: 192.168.122.99:56356 [13/Oct/2025:14:12:49.946] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/350/350 404 321 - - ---- 57/6/5/5/0 0/0 "PATCH /v3/domains/513e43506f734dc9a938f4e6ada3ea99 HTTP/1.1"
Oct 13 14:12:50 standalone.localdomain haproxy[70940]: 192.168.122.99:56368 [13/Oct/2025:14:12:49.992] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/349/349 201 442 - - ---- 57/6/5/5/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:50 standalone.localdomain haproxy[70940]: 192.168.122.99:56380 [13/Oct/2025:14:12:50.057] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/360/360 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/projects/9eec79f8c06e44b4aae93792f6ebef01 HTTP/1.1"
Oct 13 14:12:50 standalone.localdomain haproxy[70940]: 192.168.122.99:56394 [13/Oct/2025:14:12:50.097] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/365/365 204 165 - - ---- 57/6/5/5/0 0/0 "PUT /v3/projects/12a64a108ce2446da9121bee46e681f7/users/7635d9f723cd4e8591f5fc90d4c5145b/roles/bafab5c6a3314fd5a256d8b665e21592 HTTP/1.1"
Oct 13 14:12:50 standalone.localdomain haproxy[70940]: 192.168.122.99:56406 [13/Oct/2025:14:12:50.188] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/324/324 201 536 - - ---- 57/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:50 standalone.localdomain haproxy[70940]: 192.168.122.99:56414 [13/Oct/2025:14:12:50.249] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/325/325 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/projects/73cfc65a7cc8449698b5209fa92edeac/tags HTTP/1.1"
Oct 13 14:12:50 standalone.localdomain haproxy[70940]: 192.168.122.99:56424 [13/Oct/2025:14:12:50.299] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/323/323 404 321 - - ---- 57/6/5/5/0 0/0 "PATCH /v3/domains/9416b299d2f647e8802249a3f6e22a0b HTTP/1.1"
Oct 13 14:12:50 standalone.localdomain haproxy[70940]: 192.168.122.99:56440 [13/Oct/2025:14:12:50.344] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/330/330 200 440 - - ---- 57/6/5/5/0 0/0 "PATCH /v3/roles/60d61b070085457c9597fd522874d32a HTTP/1.1"
Oct 13 14:12:50 standalone.localdomain haproxy[70940]: 192.168.122.99:56442 [13/Oct/2025:14:12:50.421] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/306/306 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/roles/69d078ebcf8542708bfd697be9a2344d HTTP/1.1"
Oct 13 14:12:50 standalone.localdomain ceph-mon[29756]: pgmap v1136: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:50 standalone.localdomain haproxy[70940]: 192.168.122.99:56456 [13/Oct/2025:14:12:50.464] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/310/310 201 566 - - ---- 57/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:51 standalone.localdomain haproxy[70940]: 192.168.122.99:56458 [13/Oct/2025:14:12:50.517] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/564/564 201 623 - - ---- 57/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:51 standalone.localdomain haproxy[70940]: 192.168.122.99:56470 [13/Oct/2025:14:12:50.578] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/561/561 200 267 - - ---- 57/6/5/5/0 0/0 "GET /v3/projects/73cfc65a7cc8449698b5209fa92edeac/tags HTTP/1.1"
Oct 13 14:12:51 standalone.localdomain haproxy[70940]: 192.168.122.99:56484 [13/Oct/2025:14:12:50.624] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/571/571 404 321 - - ---- 57/6/5/5/0 0/0 "PATCH /v3/domains/6431372d1658415b971f4d9c966eebff HTTP/1.1"
Oct 13 14:12:51 standalone.localdomain haproxy[70940]: 192.168.122.99:56492 [13/Oct/2025:14:12:50.676] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/566/566 200 440 - - ---- 57/6/5/5/0 0/0 "GET /v3/roles/60d61b070085457c9597fd522874d32a HTTP/1.1"
Oct 13 14:12:51 standalone.localdomain haproxy[70940]: 192.168.122.99:56498 [13/Oct/2025:14:12:50.731] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/561/561 201 447 - - ---- 57/6/5/5/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:51 standalone.localdomain haproxy[70940]: 192.168.122.99:56510 [13/Oct/2025:14:12:50.777] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/560/560 200 561 - - ---- 57/6/5/5/0 0/0 "GET /v3/projects/c972bf712e614c2a9c40898a334705f0 HTTP/1.1"
Oct 13 14:12:51 standalone.localdomain haproxy[70940]: 192.168.122.99:56520 [13/Oct/2025:14:12:51.083] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/278/278 201 450 - - ---- 57/6/5/5/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:51 standalone.localdomain sudo[163263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:12:51 standalone.localdomain sudo[163263]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:12:51 standalone.localdomain sudo[163263]: pam_unix(sudo:session): session closed for user root
Oct 13 14:12:51 standalone.localdomain haproxy[70940]: 192.168.122.99:56524 [13/Oct/2025:14:12:51.141] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/282/282 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/projects/73cfc65a7cc8449698b5209fa92edeac HTTP/1.1"
Oct 13 14:12:51 standalone.localdomain sudo[163278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:12:51 standalone.localdomain sudo[163278]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:12:51 standalone.localdomain runuser[163297]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:12:51 standalone.localdomain haproxy[70940]: 192.168.122.99:56536 [13/Oct/2025:14:12:51.197] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/335/335 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/users/9e54162d92344631bb5a200d471462a9 HTTP/1.1"
Oct 13 14:12:51 standalone.localdomain haproxy[70940]: 192.168.122.99:56544 [13/Oct/2025:14:12:51.245] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/343/343 200 4527 - - ---- 57/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:12:51 standalone.localdomain haproxy[70940]: 192.168.122.99:56546 [13/Oct/2025:14:12:51.294] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/361/361 201 616 - - ---- 58/7/6/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1137: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:51 standalone.localdomain haproxy[70940]: 192.168.122.99:56556 [13/Oct/2025:14:12:51.338] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/364/364 204 165 - - ---- 57/6/5/5/0 0/0 "PUT /v3/projects/c972bf712e614c2a9c40898a334705f0/users/7635d9f723cd4e8591f5fc90d4c5145b/roles/bafab5c6a3314fd5a256d8b665e21592 HTTP/1.1"
Oct 13 14:12:51 standalone.localdomain haproxy[70940]: 192.168.122.99:56570 [13/Oct/2025:14:12:51.365] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/387/387 201 454 - - ---- 57/6/5/5/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:12:51 standalone.localdomain haproxy[70940]: 192.168.122.99:56582 [13/Oct/2025:14:12:51.427] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/439/439 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/users/86861b7f6977497ea2f5bc5f76ef2c37 HTTP/1.1"
Oct 13 14:12:51 standalone.localdomain haproxy[70940]: 192.168.122.99:56584 [13/Oct/2025:14:12:51.535] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/413/413 204 165 - - ---- 57/6/5/5/0 0/0 "DELETE /v3/users/1ef4cd5ff6bf4e7a8a8c47ff5bae60fb HTTP/1.1"
Oct 13 14:12:52 standalone.localdomain haproxy[70940]: 192.168.122.99:56586 [13/Oct/2025:14:12:51.591] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/421/421 204 165 - - ---- 57/5/4/4/0 0/0 "DELETE /v3/roles/60d61b070085457c9597fd522874d32a HTTP/1.1"
Oct 13 14:12:52 standalone.localdomain sudo[163278]: pam_unix(sudo:session): session closed for user root
Oct 13 14:12:52 standalone.localdomain haproxy[70940]: 192.168.122.99:56590 [13/Oct/2025:14:12:51.657] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/413/413 204 165 - - ---- 57/5/4/4/0 0/0 "PUT /v3/OS-INHERIT/projects/b11c1c6866a544b5bb839d591f0d9f80/users/ff3d5d8eaf8e411a8039bab9bfc480b9/roles/25012deabffe443492ba94423c244148/inherited_to_projects HTTP/1.1"
Oct 13 14:12:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:12:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:12:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:12:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:12:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:12:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:12:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:12:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:12:52 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 12976568-d26d-4b07-9b43-24abc3870b92 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:12:52 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 12976568-d26d-4b07-9b43-24abc3870b92 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:12:52 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 12976568-d26d-4b07-9b43-24abc3870b92 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:12:52 standalone.localdomain haproxy[70940]: 192.168.122.99:56600 [13/Oct/2025:14:12:51.704] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/442/442 200 1044 - - ---- 57/5/4/4/0 0/0 "GET /v3/users/7635d9f723cd4e8591f5fc90d4c5145b/projects HTTP/1.1"
Oct 13 14:12:52 standalone.localdomain sudo[163377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:12:52 standalone.localdomain runuser[163297]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:12:52 standalone.localdomain sudo[163377]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:12:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:12:52 standalone.localdomain sudo[163377]: pam_unix(sudo:session): session closed for user root
Oct 13 14:12:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:12:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:12:52 standalone.localdomain haproxy[70940]: 192.168.122.99:56604 [13/Oct/2025:14:12:51.754] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/465/465 204 165 - - ---- 57/5/4/4/0 0/0 "PUT /v3/projects/3f394795664c480f9c752554fc575f44/users/26ae00f4529e4d50b86ed2eee44b4485/roles/5a21574795494b5eaabccbe739a19656 HTTP/1.1"
Oct 13 14:12:52 standalone.localdomain podman[163401]: 2025-10-13 14:12:52.275687931 +0000 UTC m=+0.055291868 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, com.redhat.component=openstack-cinder-scheduler-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, build-date=2025-07-21T16:10:12, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=cinder_scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64)
Oct 13 14:12:52 standalone.localdomain podman[163401]: 2025-10-13 14:12:52.298850341 +0000 UTC m=+0.078454278 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, vcs-type=git, build-date=2025-07-21T16:10:12, distribution-scope=public, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=cinder_scheduler, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, architecture=x86_64, 
name=rhosp17/openstack-cinder-scheduler, com.redhat.component=openstack-cinder-scheduler-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.expose-services=)
Oct 13 14:12:52 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:12:52 standalone.localdomain haproxy[70940]: 192.168.122.99:56610 [13/Oct/2025:14:12:51.868] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/451/451 204 165 - - ---- 57/5/4/4/0 0/0 "DELETE /v3/users/5c62c2625aaa401aa23fe742117ddcaa HTTP/1.1"
Oct 13 14:12:52 standalone.localdomain podman[163396]: 2025-10-13 14:12:52.335680312 +0000 UTC m=+0.119342423 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, build-date=2025-07-21T15:58:55, container_name=cinder_api_cron, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, summary=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, release=1, description=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-cinder-api-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64)
Oct 13 14:12:52 standalone.localdomain runuser[163464]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:12:52 standalone.localdomain podman[163396]: 2025-10-13 14:12:52.373723369 +0000 UTC m=+0.157385480 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, container_name=cinder_api_cron, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cinder-api, release=1, description=Red Hat OpenStack Platform 17.1 cinder-api, build-date=2025-07-21T15:58:55)
Oct 13 14:12:52 standalone.localdomain systemd[1]: tmp-crun.fGUikS.mount: Deactivated successfully.
Oct 13 14:12:52 standalone.localdomain podman[163394]: 2025-10-13 14:12:52.389166463 +0000 UTC m=+0.173155884 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, architecture=x86_64, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step3, 
vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9)
Oct 13 14:12:52 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:12:52 standalone.localdomain podman[163394]: 2025-10-13 14:12:52.401738198 +0000 UTC m=+0.185727609 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, tcib_managed=true, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:12:52 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:12:52 standalone.localdomain haproxy[70940]: 172.21.0.2:36674 [13/Oct/2025:14:12:51.951] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/711/711 201 8100 - - ---- 58/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:52 standalone.localdomain haproxy[70940]: 192.168.122.99:56616 [13/Oct/2025:14:12:52.018] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/729/729 201 728 - - ---- 58/4/3/3/0 0/0 "PUT /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/8404b71aa506493ba9615df19ffdd490 HTTP/1.1"
Oct 13 14:12:52 standalone.localdomain ceph-mon[29756]: pgmap v1137: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:52 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:12:52 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:12:52 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:12:52 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:12:52 standalone.localdomain haproxy[70940]: 192.168.122.99:56628 [13/Oct/2025:14:12:52.072] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/760/760 200 851 - - ---- 58/4/3/3/0 0/0 "GET /v3/role_assignments?scope.project.id=275a7b86bfdc49739cbb7db8cfcbc214&user.id=ff3d5d8eaf8e411a8039bab9bfc480b9&effective HTTP/1.1"
Oct 13 14:12:52 standalone.localdomain haproxy[70940]: 192.168.122.99:56642 [13/Oct/2025:14:12:52.149] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/777/777 204 165 - - ---- 58/4/3/3/0 0/0 "DELETE /v3/projects/c972bf712e614c2a9c40898a334705f0 HTTP/1.1"
Oct 13 14:12:52 standalone.localdomain runuser[163464]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:12:52 standalone.localdomain runuser[163525]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:12:53 standalone.localdomain haproxy[70940]: 192.168.122.99:56654 [13/Oct/2025:14:12:52.221] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/800/800 204 165 - - ---- 58/4/3/3/0 0/0 "PUT /v3/projects/3f394795664c480f9c752554fc575f44/users/26ae00f4529e4d50b86ed2eee44b4485/roles/37693efafb6845dcb535241785a23173 HTTP/1.1"
Oct 13 14:12:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:12:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:12:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:12:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:12:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:12:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:12:53 standalone.localdomain haproxy[70940]: 172.21.0.2:36684 [13/Oct/2025:14:12:52.321] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/993/993 201 8100 - - ---- 58/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:53 standalone.localdomain haproxy[70940]: 172.17.0.2:60222 [13/Oct/2025:14:12:52.673] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/693/693 200 8095 - - ---- 59/2/1/1/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:53 standalone.localdomain haproxy[70940]: 192.168.122.99:56666 [13/Oct/2025:14:12:52.751] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/674/674 201 727 - - ---- 59/4/3/3/0 0/0 "PUT /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/a8c80f0b51b545afbbc544ab0e449dce HTTP/1.1"
Oct 13 14:12:53 standalone.localdomain haproxy[70940]: 192.168.122.99:56678 [13/Oct/2025:14:12:52.836] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/653/653 204 165 - - ---- 59/4/3/3/0 0/0 "DELETE /v3/OS-INHERIT/projects/b11c1c6866a544b5bb839d591f0d9f80/users/ff3d5d8eaf8e411a8039bab9bfc480b9/roles/25012deabffe443492ba94423c244148/inherited_to_projects HTTP/1.1"
Oct 13 14:12:53 standalone.localdomain haproxy[70940]: 172.21.0.2:56460 [13/Oct/2025:14:12:52.666] neutron neutron/standalone.internalapi.localdomain 0/0/0/874/874 200 2976 - - ---- 59/2/1/1/0 0/0 "GET /v2.0/security-groups?tenant_id=ffa315e21ceb48feb0b76550c3510add&name=default HTTP/1.1"
Oct 13 14:12:53 standalone.localdomain haproxy[70940]: 192.168.122.99:56692 [13/Oct/2025:14:12:52.928] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/653/653 204 165 - - ---- 59/4/3/3/0 0/0 "DELETE /v3/projects/12a64a108ce2446da9121bee46e681f7 HTTP/1.1"
Oct 13 14:12:53 standalone.localdomain haproxy[70940]: 192.168.122.99:56696 [13/Oct/2025:14:12:53.023] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/622/622 200 9019 - - ---- 59/4/3/3/0 0/0 "GET /v3/users HTTP/1.1"
Oct 13 14:12:53 standalone.localdomain runuser[163525]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:12:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1138: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:53 standalone.localdomain haproxy[70940]: 172.17.0.2:60232 [13/Oct/2025:14:12:53.324] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/408/408 200 8095 - - ---- 59/3/1/1/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:53 standalone.localdomain haproxy[70940]: 172.21.0.2:56464 [13/Oct/2025:14:12:53.541] neutron neutron/standalone.internalapi.localdomain 0/0/0/227/227 204 152 - - ---- 59/2/1/1/0 0/0 "DELETE /v2.0/security-groups/97cf547a-af01-4311-88e4-ad19358bb612 HTTP/1.1"
Oct 13 14:12:53 standalone.localdomain haproxy[70940]: 192.168.122.99:56708 [13/Oct/2025:14:12:53.427] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/359/359 201 727 - - ---- 60/5/4/3/0 0/0 "PUT /v3/roles/a8c80f0b51b545afbbc544ab0e449dce/implies/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:53 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:12:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:12:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:12:53 standalone.localdomain haproxy[70940]: 192.168.122.99:56714 [13/Oct/2025:14:12:53.492] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/359/359 200 430 - - ---- 59/4/3/3/0 0/0 "GET /v3/role_assignments?scope.project.id=275a7b86bfdc49739cbb7db8cfcbc214&user.id=ff3d5d8eaf8e411a8039bab9bfc480b9&effective HTTP/1.1"
Oct 13 14:12:53 standalone.localdomain haproxy[70940]: 192.168.122.99:56720 [13/Oct/2025:14:12:53.583] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/322/322 204 165 - - ---- 59/4/3/3/0 0/0 "DELETE /v3/roles/bafab5c6a3314fd5a256d8b665e21592 HTTP/1.1"
Oct 13 14:12:53 standalone.localdomain haproxy[70940]: 172.21.0.2:56462 [13/Oct/2025:14:12:53.316] neutron neutron/standalone.internalapi.localdomain 0/0/0/595/595 200 2976 - - ---- 59/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=dfc1dd5f33a448e09863a9e23b14834f&name=default HTTP/1.1"
Oct 13 14:12:54 standalone.localdomain haproxy[70940]: 172.21.0.2:56472 [13/Oct/2025:14:12:53.913] neutron neutron/standalone.internalapi.localdomain 0/0/0/163/163 204 152 - - ---- 59/1/0/0/0 0/0 "DELETE /v2.0/security-groups/0a2bbdaa-1e69-4d55-8b9f-cc47cd058ab2 HTTP/1.1"
Oct 13 14:12:54 standalone.localdomain haproxy[70940]: 172.21.0.2:36690 [13/Oct/2025:14:12:53.648] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/574/574 201 8141 - - ---- 59/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:54 standalone.localdomain haproxy[70940]: 192.168.122.99:56726 [13/Oct/2025:14:12:53.770] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/534/534 204 165 - - ---- 59/5/4/4/0 0/0 "DELETE /v3/projects/ffa315e21ceb48feb0b76550c3510add HTTP/1.1"
Oct 13 14:12:54 standalone.localdomain haproxy[70940]: 192.168.122.99:56728 [13/Oct/2025:14:12:53.788] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/559/559 200 691 - - ---- 59/4/3/3/0 0/0 "GET /v3/roles/a8c80f0b51b545afbbc544ab0e449dce/implies HTTP/1.1"
Oct 13 14:12:54 standalone.localdomain haproxy[70940]: 192.168.122.99:56740 [13/Oct/2025:14:12:53.854] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/576/576 204 165 - - ---- 59/4/3/3/0 0/0 "DELETE /v3/projects/275a7b86bfdc49739cbb7db8cfcbc214 HTTP/1.1"
Oct 13 14:12:54 standalone.localdomain haproxy[70940]: 172.21.0.2:56482 [13/Oct/2025:14:12:54.306] neutron neutron/standalone.internalapi.localdomain 0/0/0/204/204 200 2976 - - ---- 59/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=56870b58ae40491a95a78e2c7a0320d2&name=default HTTP/1.1"
Oct 13 14:12:54 standalone.localdomain haproxy[70940]: 192.168.122.99:56748 [13/Oct/2025:14:12:53.908] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/610/610 204 165 - - ---- 59/4/3/3/0 0/0 "DELETE /v3/users/7635d9f723cd4e8591f5fc90d4c5145b HTTP/1.1"
Oct 13 14:12:54 standalone.localdomain haproxy[70940]: 192.168.122.99:56752 [13/Oct/2025:14:12:54.078] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/520/520 204 165 - - ---- 60/4/3/3/0 0/0 "DELETE /v3/projects/dfc1dd5f33a448e09863a9e23b14834f HTTP/1.1"
Oct 13 14:12:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:12:54 standalone.localdomain haproxy[70940]: 172.21.0.2:56498 [13/Oct/2025:14:12:54.513] neutron neutron/standalone.internalapi.localdomain 0/0/0/273/273 204 152 - - ---- 59/2/1/1/0 0/0 "DELETE /v2.0/security-groups/959f788d-fa01-49a9-b230-a007c654f66f HTTP/1.1"
Oct 13 14:12:54 standalone.localdomain haproxy[70940]: 172.21.0.2:56512 [13/Oct/2025:14:12:54.601] neutron neutron/standalone.internalapi.localdomain 0/0/0/197/197 200 2976 - - ---- 59/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=b8ff58559ddc448185bdca26cc022f4f&name=default HTTP/1.1"
Oct 13 14:12:54 standalone.localdomain systemd[1]: tmp-crun.BmHyxX.mount: Deactivated successfully.
Oct 13 14:12:54 standalone.localdomain podman[163600]: 2025-10-13 14:12:54.83235665 +0000 UTC m=+0.093441958 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-keystone, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.buildah.version=1.33.12, container_name=keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', 
'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18)
Oct 13 14:12:54 standalone.localdomain ceph-mon[29756]: pgmap v1138: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:12:54 standalone.localdomain haproxy[70940]: 172.21.0.2:36692 [13/Oct/2025:14:12:54.227] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/668/668 201 8141 - - ---- 59/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:54 standalone.localdomain haproxy[70940]: 192.168.122.99:56756 [13/Oct/2025:14:12:54.348] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/591/591 200 862 - - ---- 59/5/4/4/0 0/0 "GET /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies HTTP/1.1"
Oct 13 14:12:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:12:54 standalone.localdomain haproxy[70940]: 172.21.0.2:56524 [13/Oct/2025:14:12:54.801] neutron neutron/standalone.internalapi.localdomain 0/0/0/196/196 204 152 - - ---- 59/1/0/0/0 0/0 "DELETE /v2.0/security-groups/397d07d1-d113-4e60-8012-15023aa4d684 HTTP/1.1"
Oct 13 14:12:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56764 [13/Oct/2025:14:12:54.434] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/567/567 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/roles/25012deabffe443492ba94423c244148 HTTP/1.1"
Oct 13 14:12:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56768 [13/Oct/2025:14:12:54.519] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/556/556 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/projects/9c519f9115db47c78a9df5d5d8e7d28f HTTP/1.1"
Oct 13 14:12:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56780 [13/Oct/2025:14:12:54.789] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/369/369 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/projects/56870b58ae40491a95a78e2c7a0320d2 HTTP/1.1"
Oct 13 14:12:55 standalone.localdomain podman[163600]: 2025-10-13 14:12:55.1628655 +0000 UTC m=+0.423950848 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, container_name=keystone, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, name=rhosp17/openstack-keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.component=openstack-keystone-container, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, 
io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, version=17.1.9, release=1)
Oct 13 14:12:55 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:12:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56790 [13/Oct/2025:14:12:54.898] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/345/345 400 438 - - ---- 59/5/4/4/0 0/0 "POST /v3/OS-TRUST/trusts HTTP/1.1"
Oct 13 14:12:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56792 [13/Oct/2025:14:12:54.944] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/346/346 204 165 - - ---- 59/5/4/4/0 0/0 "DELETE /v3/roles/a8c80f0b51b545afbbc544ab0e449dce/implies/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56806 [13/Oct/2025:14:12:55.001] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/357/357 204 165 - - ---- 59/5/4/4/0 0/0 "DELETE /v3/projects/b8ff58559ddc448185bdca26cc022f4f HTTP/1.1"
Oct 13 14:12:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56812 [13/Oct/2025:14:12:55.004] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/422/422 204 165 - - ---- 59/4/3/3/0 0/0 "DELETE /v3/users/ff3d5d8eaf8e411a8039bab9bfc480b9 HTTP/1.1"
Oct 13 14:12:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1139: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:55 standalone.localdomain haproxy[70940]: 192.168.122.99:56826 [13/Oct/2025:14:12:55.077] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/633/633 201 471 - - ---- 59/4/3/3/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:56 standalone.localdomain ceph-mon[29756]: pgmap v1139: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:56 standalone.localdomain haproxy[70940]: 172.21.0.2:36706 [13/Oct/2025:14:12:55.160] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/889/889 201 8100 - - ---- 59/4/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:56 standalone.localdomain haproxy[70940]: 192.168.122.99:56832 [13/Oct/2025:14:12:55.246] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/879/879 204 165 - - ---- 59/4/3/3/0 0/0 "DELETE /v3/roles/37693efafb6845dcb535241785a23173 HTTP/1.1"
Oct 13 14:12:56 standalone.localdomain haproxy[70940]: 192.168.122.99:56840 [13/Oct/2025:14:12:55.292] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/903/903 204 165 - - ---- 59/4/3/3/0 0/0 "DELETE /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/a8c80f0b51b545afbbc544ab0e449dce HTTP/1.1"
Oct 13 14:12:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:12:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:12:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:12:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:12:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:12:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:12:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:12:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:12:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:12:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:12:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:12:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:12:56 standalone.localdomain haproxy[70940]: 172.21.0.2:36712 [13/Oct/2025:14:12:55.361] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1181/1181 201 8100 - - ---- 59/4/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:12:56 standalone.localdomain podman[163681]: 2025-10-13 14:12:56.572694924 +0000 UTC m=+0.124161870 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, tcib_managed=true, version=17.1.9, distribution-scope=public, release=1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, name=rhosp17/openstack-nova-scheduler, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T16:02:54, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-scheduler, container_name=nova_scheduler, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-scheduler-container, architecture=x86_64)
Oct 13 14:12:56 standalone.localdomain podman[163693]: 2025-10-13 14:12:56.599965351 +0000 UTC m=+0.138102369 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, container_name=heat_api_cron, distribution-scope=public, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., 
name=rhosp17/openstack-heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:12:56 standalone.localdomain haproxy[70940]: 192.168.122.99:56848 [13/Oct/2025:14:12:55.428] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1236/1236 204 165 - - ---- 59/4/3/3/0 0/0 "DELETE /v3/groups/7f8a94d09d5443b3b86b3ce61f4b1eb6 HTTP/1.1"
Oct 13 14:12:56 standalone.localdomain podman[163690]: 2025-10-13 14:12:56.626615538 +0000 UTC m=+0.168004155 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T16:06:43, vcs-type=git, name=rhosp17/openstack-manila-api, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=manila_api_cron, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-manila-api-container)
Oct 13 14:12:56 standalone.localdomain podman[163693]: 2025-10-13 14:12:56.682723649 +0000 UTC m=+0.220860687 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, container_name=heat_api_cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 
17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-heat-api-container, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T15:56:26)
Oct 13 14:12:56 standalone.localdomain podman[163680]: 2025-10-13 14:12:56.638543184 +0000 UTC m=+0.193673443 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, name=rhosp17/openstack-heat-engine, release=1, container_name=heat_engine, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-heat-engine-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, build-date=2025-07-21T15:44:11, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, 
managed_by=tripleo_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 14:12:56 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:12:56 standalone.localdomain podman[163714]: 2025-10-13 14:12:56.691427796 +0000 UTC m=+0.218466023 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=neutron_api, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:44:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, vcs-type=git, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, release=1, com.redhat.component=openstack-neutron-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:12:56 standalone.localdomain podman[163707]: 2025-10-13 14:12:56.731883518 +0000 UTC m=+0.254123148 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, version=17.1.9, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, name=rhosp17/openstack-heat-api, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp 
osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1)
Oct 13 14:12:56 standalone.localdomain podman[163718]: 2025-10-13 14:12:56.747571549 +0000 UTC m=+0.266778546 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, release=1, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack 
osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:17, container_name=nova_conductor, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-nova-conductor-container, distribution-scope=public)
Oct 13 14:12:56 standalone.localdomain podman[163722]: 2025-10-13 14:12:56.779917492 +0000 UTC m=+0.287903595 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, release=1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 
cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, version=17.1.9)
Oct 13 14:12:56 standalone.localdomain podman[163697]: 2025-10-13 14:12:56.796104778 +0000 UTC m=+0.323718743 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-horizon, summary=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, com.redhat.component=openstack-horizon-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T13:58:15, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 horizon, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=horizon, release=1, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, vendor=Red Hat, Inc.)
Oct 13 14:12:56 standalone.localdomain podman[163707]: 2025-10-13 14:12:56.809807219 +0000 UTC m=+0.332046879 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, version=17.1.9, com.redhat.component=openstack-heat-api-container, build-date=2025-07-21T15:56:26, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:12:56 standalone.localdomain podman[163722]: 2025-10-13 14:12:56.815843714 +0000 UTC m=+0.323829817 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Oct 13 14:12:56 standalone.localdomain podman[163680]: 2025-10-13 14:12:56.821650572 +0000 UTC m=+0.376780831 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, name=rhosp17/openstack-heat-engine, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-engine-container, description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.buildah.version=1.33.12, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, container_name=heat_engine, build-date=2025-07-21T15:44:11, distribution-scope=public, release=1, version=17.1.9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:12:56 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:12:56 standalone.localdomain podman[163816]: 2025-10-13 14:12:56.830866365 +0000 UTC m=+0.227324066 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, build-date=2025-07-21T12:58:43, container_name=memcached, release=1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.component=openstack-memcached-container, config_id=tripleo_step1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, name=rhosp17/openstack-memcached, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', 
'/var/log/containers/memcached:/var/log/memcached:rw']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc.)
Oct 13 14:12:56 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:12:56 standalone.localdomain podman[163697]: 2025-10-13 14:12:56.850372373 +0000 UTC m=+0.377986338 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, description=Red Hat OpenStack Platform 17.1 horizon, distribution-scope=public, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T13:58:15, tcib_managed=true, com.redhat.component=openstack-horizon-container, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 horizon, container_name=horizon, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, 
vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.buildah.version=1.33.12, name=rhosp17/openstack-horizon)
Oct 13 14:12:56 standalone.localdomain podman[163681]: 2025-10-13 14:12:56.858691018 +0000 UTC m=+0.410157974 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, name=rhosp17/openstack-nova-scheduler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, config_id=tripleo_step4, build-date=2025-07-21T16:02:54, description=Red Hat OpenStack Platform 17.1 nova-scheduler, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, io.buildah.version=1.33.12, release=1, 
container_name=nova_scheduler, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-scheduler-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:12:56 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:12:56 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:12:56 standalone.localdomain podman[163738]: 2025-10-13 14:12:56.66968421 +0000 UTC m=+0.176366822 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, build-date=2025-07-21T14:49:55, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, name=rhosp17/openstack-heat-api-cfn, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, tcib_managed=true, container_name=heat_api_cfn, vcs-type=git)
Oct 13 14:12:56 standalone.localdomain podman[163682]: 2025-10-13 14:12:56.675520369 +0000 UTC m=+0.216211744 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, release=1, vcs-type=git, com.redhat.component=openstack-cinder-api-container, build-date=2025-07-21T15:58:55, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=cinder_api, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, managed_by=tripleo_ansible, name=rhosp17/openstack-cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, 
summary=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:12:56 standalone.localdomain podman[163816]: 2025-10-13 14:12:56.901623775 +0000 UTC m=+0.298081456 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, distribution-scope=public, build-date=2025-07-21T12:58:43, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, 
name=rhosp17/openstack-memcached, summary=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, container_name=memcached, architecture=x86_64, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vendor=Red Hat, Inc.)
Oct 13 14:12:56 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:12:56 standalone.localdomain podman[163718]: 2025-10-13 14:12:56.926644343 +0000 UTC m=+0.445851370 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, distribution-scope=public, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, container_name=nova_conductor, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:17, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, com.redhat.component=openstack-nova-conductor-container, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:12:56 standalone.localdomain podman[163714]: 2025-10-13 14:12:56.927032265 +0000 UTC m=+0.454070482 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T15:44:03, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, container_name=neutron_api, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-server-container)
Oct 13 14:12:56 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:12:56 standalone.localdomain podman[163743]: 2025-10-13 14:12:56.943631784 +0000 UTC m=+0.448433829 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=manila_scheduler, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T15:56:28, name=rhosp17/openstack-manila-scheduler, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, release=1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, maintainer=OpenStack TripleO Team, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-manila-scheduler-container)
Oct 13 14:12:56 standalone.localdomain podman[163738]: 2025-10-13 14:12:56.957300613 +0000 UTC m=+0.463983235 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-cfn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, container_name=heat_api_cfn, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 heat-api-cfn, architecture=x86_64, build-date=2025-07-21T14:49:55, config_id=tripleo_step4, name=rhosp17/openstack-heat-api-cfn, distribution-scope=public, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12)
Oct 13 14:12:56 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:12:56 standalone.localdomain podman[163743]: 2025-10-13 14:12:56.973663956 +0000 UTC m=+0.478465981 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, vcs-type=git, build-date=2025-07-21T15:56:28, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-manila-scheduler, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 manila-scheduler, release=1, io.buildah.version=1.33.12, 
com.redhat.component=openstack-manila-scheduler-container, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, container_name=manila_scheduler, managed_by=tripleo_ansible)
Oct 13 14:12:56 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:12:57 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:12:57 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:12:57 standalone.localdomain podman[163682]: 2025-10-13 14:12:57.06312114 +0000 UTC m=+0.603812515 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T15:58:55, description=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, com.redhat.component=openstack-cinder-api-container, container_name=cinder_api, 
io.buildah.version=1.33.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, release=1)
Oct 13 14:12:57 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:12:57 standalone.localdomain podman[163690]: 2025-10-13 14:12:57.114216698 +0000 UTC m=+0.655605335 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-manila-api, build-date=2025-07-21T16:06:43, com.redhat.component=openstack-manila-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, container_name=manila_api_cron, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12)
Oct 13 14:12:57 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:12:57 standalone.localdomain haproxy[70940]: 192.168.122.99:56852 [13/Oct/2025:14:12:55.712] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1618/1618 204 165 - - ---- 59/4/3/3/0 0/0 "POST /v3/users/255c8db4f9254449af546fea08b927e9/password HTTP/1.1"
Oct 13 14:12:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:12:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2331522127' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:12:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:12:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2331522127' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:12:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2331522127' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:12:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2331522127' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:12:57 standalone.localdomain systemd[1]: tmp-crun.aZg5iz.mount: Deactivated successfully.
Oct 13 14:12:57 standalone.localdomain haproxy[70940]: 172.21.0.2:36726 [13/Oct/2025:14:12:56.054] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1605/1605 201 8100 - - ---- 58/4/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1140: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:57 standalone.localdomain haproxy[70940]: 192.168.122.99:56866 [13/Oct/2025:14:12:56.128] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1591/1591 204 165 - - ---- 58/4/3/3/0 0/0 "DELETE /v3/roles/5a21574795494b5eaabccbe739a19656 HTTP/1.1"
Oct 13 14:12:57 standalone.localdomain haproxy[70940]: 192.168.122.99:56878 [13/Oct/2025:14:12:56.196] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1590/1590 204 165 - - ---- 58/4/3/3/0 0/0 "DELETE /v3/roles/7c0bffc586494bf494dbaf620e203c6b/implies/8404b71aa506493ba9615df19ffdd490 HTTP/1.1"
Oct 13 14:12:58 standalone.localdomain haproxy[70940]: 172.21.0.2:36730 [13/Oct/2025:14:12:56.549] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1566/1566 201 8100 - - ---- 58/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56894 [13/Oct/2025:14:12:56.667] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1545/1545 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/projects/b11c1c6866a544b5bb839d591f0d9f80 HTTP/1.1"
Oct 13 14:12:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56902 [13/Oct/2025:14:12:57.662] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/606/606 200 510 - - ---- 58/5/4/4/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:12:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56906 [13/Oct/2025:14:12:57.721] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/635/635 204 165 - - ---- 59/5/4/4/0 0/0 "DELETE /v3/users/26ae00f4529e4d50b86ed2eee44b4485 HTTP/1.1"
Oct 13 14:12:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56914 [13/Oct/2025:14:12:57.793] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/626/626 404 321 - - ---- 59/5/4/4/0 0/0 "PATCH /v3/domains/c3c76986b3844c18ac786ca55bb2a30c HTTP/1.1"
Oct 13 14:12:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56918 [13/Oct/2025:14:12:58.119] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/363/363 200 510 - - ---- 59/5/4/4/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:12:58 standalone.localdomain ceph-mon[29756]: pgmap v1140: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56932 [13/Oct/2025:14:12:58.215] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/423/423 200 478 - - ---- 59/5/4/4/0 0/0 "PATCH /v3/domains/91c7459d11e443cab3351ad0c2d8dcc3 HTTP/1.1"
Oct 13 14:12:58 standalone.localdomain haproxy[70940]: 192.168.122.99:56940 [13/Oct/2025:14:12:58.271] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/440/440 201 574 - - ---- 59/5/4/4/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:58 standalone.localdomain haproxy[70940]: 172.21.0.2:36744 [13/Oct/2025:14:12:58.333] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/651/651 201 713 - - ---- 60/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56946 [13/Oct/2025:14:12:58.358] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/721/721 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/projects/3f394795664c480f9c752554fc575f44 HTTP/1.1"
Oct 13 14:12:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56958 [13/Oct/2025:14:12:58.423] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/741/741 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/users/00515c3559934eb5861b350991c52f53 HTTP/1.1"
Oct 13 14:12:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56964 [13/Oct/2025:14:12:58.486] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/730/730 201 572 - - ---- 59/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56966 [13/Oct/2025:14:12:58.640] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/737/737 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/domains/91c7459d11e443cab3351ad0c2d8dcc3 HTTP/1.1"
Oct 13 14:12:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1141: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:12:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56980 [13/Oct/2025:14:12:58.716] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/983/983 201 498 - - ---- 59/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:12:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:12:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:12:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:12:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:12:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:12:59 standalone.localdomain haproxy[70940]: 192.168.122.99:56990 [13/Oct/2025:14:12:58.988] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/775/775 200 708 - - ---- 59/6/5/5/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:12:59 standalone.localdomain haproxy[70940]: 192.168.122.99:57002 [13/Oct/2025:14:12:59.080] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/729/729 201 535 - - ---- 59/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:12:59 standalone.localdomain systemd[1]: tmp-crun.UNst7a.mount: Deactivated successfully.
Oct 13 14:12:59 standalone.localdomain podman[164264]: 2025-10-13 14:12:59.852598572 +0000 UTC m=+0.103580649 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_account_server, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, release=1, tcib_managed=true, architecture=x86_64, version=17.1.9, config_id=tripleo_step4)
Oct 13 14:12:59 standalone.localdomain podman[164256]: 2025-10-13 14:12:59.871535883 +0000 UTC m=+0.134130516 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:12:59 standalone.localdomain podman[164254]: 2025-10-13 14:12:59.846623099 +0000 UTC m=+0.108627874 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, tcib_managed=true, container_name=swift_object_server, description=Red Hat 
OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, name=rhosp17/openstack-swift-object, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, config_id=tripleo_step4)
Oct 13 14:12:59 standalone.localdomain haproxy[70940]: 192.168.122.99:57014 [13/Oct/2025:14:12:59.166] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/744/744 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/roles/ddf8a916dd4e483ba79315e6ae209170 HTTP/1.1"
Oct 13 14:12:59 standalone.localdomain podman[164277]: 2025-10-13 14:12:59.959532693 +0000 UTC m=+0.212724508 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, container_name=nova_vnc_proxy, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-nova-novncproxy-container, version=17.1.9, build-date=2025-07-21T15:24:10, release=1, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.expose-services=, name=rhosp17/openstack-nova-novncproxy, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:12:59 standalone.localdomain podman[164261]: 2025-10-13 14:12:59.831961888 +0000 UTC m=+0.089282639 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, 
managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team)
Oct 13 14:12:59 standalone.localdomain podman[164261]: 2025-10-13 14:12:59.996787315 +0000 UTC m=+0.254108096 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.buildah.version=1.33.12, build-date=2025-07-21T15:54:32, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=swift_container_server, architecture=x86_64, name=rhosp17/openstack-swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.component=openstack-swift-container-container, release=1)
Oct 13 14:12:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:13:00 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:13:00 standalone.localdomain podman[164264]: 2025-10-13 14:13:00.047694927 +0000 UTC m=+0.298676964 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T16:11:22, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 14:13:00 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:13:00 standalone.localdomain podman[164254]: 2025-10-13 14:13:00.076747139 +0000 UTC m=+0.338751894 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, container_name=swift_object_server, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack 
Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:13:00 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:13:00 standalone.localdomain podman[164256]: 2025-10-13 14:13:00.223846572 +0000 UTC m=+0.486441195 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, container_name=nova_migration_target, 
batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:13:00 standalone.localdomain haproxy[70940]: 192.168.122.99:57030 [13/Oct/2025:14:12:59.218] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1008/1008 201 497 - - ---- 59/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:00 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:13:00 standalone.localdomain podman[164277]: 2025-10-13 14:13:00.25276894 +0000 UTC m=+0.505960735 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-nova-novncproxy-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_vnc_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-novncproxy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T15:24:10, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1)
Oct 13 14:13:00 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:13:00 standalone.localdomain haproxy[70940]: 192.168.122.99:57040 [13/Oct/2025:14:12:59.379] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/901/901 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/users/0b3c8f3c6e62469c8e5b71c548cb8980 HTTP/1.1"
Oct 13 14:13:00 standalone.localdomain haproxy[70940]: 192.168.122.99:53124 [13/Oct/2025:14:12:59.701] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/619/619 200 3377 - - ---- 59/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:00 standalone.localdomain haproxy[70940]: 192.168.122.99:53132 [13/Oct/2025:14:12:59.764] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/626/626 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/users/255c8db4f9254449af546fea08b927e9 HTTP/1.1"
Oct 13 14:13:00 standalone.localdomain haproxy[70940]: 192.168.122.99:53144 [13/Oct/2025:14:12:59.811] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/873/873 201 623 - - ---- 59/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:00 standalone.localdomain haproxy[70940]: 192.168.122.99:53158 [13/Oct/2025:14:12:59.913] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/838/838 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/groups/565b6f9688f1414498e7d7c16c7b0c08 HTTP/1.1"
Oct 13 14:13:00 standalone.localdomain ceph-mon[29756]: pgmap v1141: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:00 standalone.localdomain haproxy[70940]: 192.168.122.99:53174 [13/Oct/2025:14:13:00.230] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/572/572 200 3377 - - ---- 59/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:00 standalone.localdomain haproxy[70940]: 192.168.122.99:53188 [13/Oct/2025:14:13:00.282] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/584/584 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/users/e34036a84ff54672b5084cf1843c365d HTTP/1.1"
Oct 13 14:13:00 standalone.localdomain haproxy[70940]: 192.168.122.99:53196 [13/Oct/2025:14:13:00.324] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/606/606 204 165 - - ---- 59/5/4/4/0 0/0 "PUT /v3/projects/0db46bf3fd934836abb02ee9f8de2578/users/cc2d6285528f4b38bad58cbf1ac674f9/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:01 standalone.localdomain haproxy[70940]: 192.168.122.99:53208 [13/Oct/2025:14:13:00.393] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/826/826 201 574 - - ---- 59/5/4/4/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:01 standalone.localdomain haproxy[70940]: 192.168.122.99:53222 [13/Oct/2025:14:13:00.687] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/588/588 201 451 - - ---- 59/5/4/4/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:13:01 standalone.localdomain haproxy[70940]: 192.168.122.99:53224 [13/Oct/2025:14:13:00.756] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/606/606 204 165 - - ---- 59/5/4/4/0 0/0 "DELETE /v3/projects/fe3dffb53f0b414a8e095b67f132e54d HTTP/1.1"
Oct 13 14:13:01 standalone.localdomain haproxy[70940]: 192.168.122.99:53230 [13/Oct/2025:14:13:00.805] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/633/633 204 165 - - ---- 59/5/4/4/0 0/0 "PUT /v3/projects/23ea26702be64e6d8033dfa665660ddc/users/a5b53084cc03456fa695326524b16650/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1142: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:01 standalone.localdomain haproxy[70940]: 172.21.0.2:53638 [13/Oct/2025:14:13:00.870] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/924/924 201 8100 - - ---- 59/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:01 standalone.localdomain podman[164435]: 2025-10-13 14:13:01.813563625 +0000 UTC m=+0.077136618 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vcs-type=git, version=17.1.9, name=rhosp17/openstack-mariadb, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=)
Oct 13 14:13:01 standalone.localdomain haproxy[70940]: 192.168.122.99:53244 [13/Oct/2025:14:13:00.933] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/892/892 200 3612 - - ---- 59/5/4/4/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:01 standalone.localdomain podman[164435]: 2025-10-13 14:13:01.848788585 +0000 UTC m=+0.112361528 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, distribution-scope=public, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, vendor=Red Hat, Inc., release=1)
Oct 13 14:13:01 standalone.localdomain haproxy[70940]: 192.168.122.99:53252 [13/Oct/2025:14:13:01.220] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/666/666 201 566 - - ---- 59/5/4/4/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:01 standalone.localdomain haproxy[70940]: 192.168.122.99:53254 [13/Oct/2025:14:13:01.278] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/658/658 201 454 - - ---- 59/5/4/4/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:13:01 standalone.localdomain haproxy[70940]: 192.168.122.99:53268 [13/Oct/2025:14:13:01.364] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/634/634 200 480 - - ---- 59/5/4/4/0 0/0 "PATCH /v3/domains/c7ca48b48fe14afc95f21d1df9f56183 HTTP/1.1"
Oct 13 14:13:02 standalone.localdomain haproxy[70940]: 192.168.122.99:53280 [13/Oct/2025:14:13:01.441] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/604/604 200 3850 - - ---- 59/5/4/4/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:02 standalone.localdomain haproxy[70940]: 172.17.0.2:60232 [13/Oct/2025:14:13:01.800] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/295/295 200 8095 - - ---- 59/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:02 standalone.localdomain haproxy[70940]: 192.168.122.99:53294 [13/Oct/2025:14:13:01.827] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/287/287 204 165 - - ---- 59/5/4/4/0 0/0 "PUT /v3/projects/0db46bf3fd934836abb02ee9f8de2578/users/cc2d6285528f4b38bad58cbf1ac674f9/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:02 standalone.localdomain podman[164469]: 2025-10-13 14:13:02.160435157 +0000 UTC m=+0.064452888 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-haproxy-container, summary=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-haproxy, vcs-type=git)
Oct 13 14:13:02 standalone.localdomain podman[164469]: 2025-10-13 14:13:02.193836802 +0000 UTC m=+0.097854573 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, build-date=2025-07-21T13:08:11, description=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., release=1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-haproxy-container)
Oct 13 14:13:02 standalone.localdomain podman[164501]: 2025-10-13 14:13:02.28473026 +0000 UTC m=+0.070593426 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, build-date=2025-07-21T13:08:05, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-rabbitmq-container, release=1, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=)
Oct 13 14:13:02 standalone.localdomain podman[164501]: 2025-10-13 14:13:02.316758723 +0000 UTC m=+0.102621879 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, build-date=2025-07-21T13:08:05, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.component=openstack-rabbitmq-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-rabbitmq, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 14:13:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50562 [13/Oct/2025:14:13:01.795] neutron neutron/standalone.internalapi.localdomain 0/0/0/552/552 200 2976 - - ---- 59/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=c1d2a15ead2d4ed583ce1c00d7ed3f50&name=default HTTP/1.1"
Oct 13 14:13:02 standalone.localdomain haproxy[70940]: 192.168.122.99:53308 [13/Oct/2025:14:13:01.889] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/556/556 200 724 - - ---- 59/4/3/3/0 0/0 "PATCH /v3/users/84c2721bf3ed4260a64c6fa87e65635f HTTP/1.1"
Oct 13 14:13:02 standalone.localdomain haproxy[70940]: 192.168.122.99:53324 [13/Oct/2025:14:13:01.939] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/574/574 204 165 - - ---- 59/4/3/3/0 0/0 "PUT /v3/projects/448d9dfe86114259bd765db35152d7a8/users/eafc7c80cbfb41a68cd61ab91843310f/roles/9e61132ed0fc4d18be51e4b49f4a188d HTTP/1.1"
Oct 13 14:13:02 standalone.localdomain haproxy[70940]: 172.21.0.2:50568 [13/Oct/2025:14:13:02.349] neutron neutron/standalone.internalapi.localdomain 0/0/0/168/168 204 152 - - ---- 59/1/0/0/0 0/0 "DELETE /v2.0/security-groups/8e3fed9f-30f0-4584-b1a8-e11f8dd1158c HTTP/1.1"
Oct 13 14:13:02 standalone.localdomain haproxy[70940]: 192.168.122.99:53340 [13/Oct/2025:14:13:02.002] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/616/616 204 165 - - ---- 59/5/4/4/0 0/0 "DELETE /v3/domains/c7ca48b48fe14afc95f21d1df9f56183 HTTP/1.1"
Oct 13 14:13:02 standalone.localdomain haproxy[70940]: 192.168.122.99:53342 [13/Oct/2025:14:13:02.047] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/620/620 204 165 - - ---- 59/5/4/4/0 0/0 "PUT /v3/projects/23ea26702be64e6d8033dfa665660ddc/users/a5b53084cc03456fa695326524b16650/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:02 standalone.localdomain ceph-mon[29756]: pgmap v1142: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:02 standalone.localdomain haproxy[70940]: 172.21.0.2:53652 [13/Oct/2025:14:13:02.119] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/856/856 201 8112 - - ---- 59/4/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:03 standalone.localdomain haproxy[70940]: 192.168.122.99:53352 [13/Oct/2025:14:13:02.446] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/580/580 200 591 - - ---- 59/5/4/4/0 0/0 "GET /v3/users/84c2721bf3ed4260a64c6fa87e65635f HTTP/1.1"
Oct 13 14:13:03 standalone.localdomain haproxy[70940]: 192.168.122.99:53364 [13/Oct/2025:14:13:02.514] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/563/563 204 165 - - ---- 59/5/4/4/0 0/0 "PUT /v3/projects/448d9dfe86114259bd765db35152d7a8/users/eafc7c80cbfb41a68cd61ab91843310f/roles/843ad9a4cd8d4bc6831b9196fe3254c4 HTTP/1.1"
Oct 13 14:13:03 standalone.localdomain haproxy[70940]: 192.168.122.99:53376 [13/Oct/2025:14:13:02.520] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/638/638 204 165 - - ---- 59/5/4/4/0 0/0 "DELETE /v3/projects/c1d2a15ead2d4ed583ce1c00d7ed3f50 HTTP/1.1"
Oct 13 14:13:03 standalone.localdomain haproxy[70940]: 192.168.122.99:53388 [13/Oct/2025:14:13:02.620] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/599/599 204 165 - - ---- 59/4/3/3/0 0/0 "DELETE /v3/roles/a8c80f0b51b545afbbc544ab0e449dce HTTP/1.1"
Oct 13 14:13:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50570 [13/Oct/2025:14:13:03.160] neutron neutron/standalone.internalapi.localdomain 0/0/0/179/179 200 2976 - - ---- 60/2/1/0/0 0/0 "GET /v2.0/security-groups?tenant_id=e19449e443e14a8b861fc92b2d1714d7&name=default HTTP/1.1"
Oct 13 14:13:03 standalone.localdomain haproxy[70940]: 172.21.0.2:50580 [13/Oct/2025:14:13:03.342] neutron neutron/standalone.internalapi.localdomain 0/0/0/215/215 204 152 - - ---- 58/1/0/0/0 0/0 "DELETE /v2.0/security-groups/1eac973c-7bc1-4334-be93-d493dafefd44 HTTP/1.1"
Oct 13 14:13:03 standalone.localdomain haproxy[70940]: 172.21.0.2:53654 [13/Oct/2025:14:13:02.676] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/908/908 201 8110 - - ---- 58/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:03 standalone.localdomain haproxy[70940]: 192.168.122.99:53394 [13/Oct/2025:14:13:02.978] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/681/681 201 572 - - ---- 58/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1143: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:13:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:13:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:13:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:13:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:13:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:13:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:13:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:13:03 standalone.localdomain haproxy[70940]: 192.168.122.99:53396 [13/Oct/2025:14:13:03.028] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/762/762 204 165 - - ---- 58/6/5/5/0 0/0 "DELETE /v3/projects/11aac55e869d42f0b2f2d6a5f4b221b2 HTTP/1.1"
Oct 13 14:13:03 standalone.localdomain podman[164555]: 2025-10-13 14:13:03.840541462 +0000 UTC m=+0.097737469 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, release=1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vcs-type=git, version=17.1.9, com.redhat.component=openstack-nova-api-container, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, build-date=2025-07-21T16:05:11, name=rhosp17/openstack-nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_metadata, summary=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:13:03 standalone.localdomain podman[164577]: 2025-10-13 14:13:03.898598654 +0000 UTC m=+0.146421753 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, 
name=rhosp17/openstack-glance-api, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:13:03 standalone.localdomain haproxy[70940]: 192.168.122.99:53400 [13/Oct/2025:14:13:03.078] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/828/828 200 8121 - - ---- 58/6/5/5/0 0/0 "GET /v3/users HTTP/1.1"
Oct 13 14:13:03 standalone.localdomain podman[164553]: 2025-10-13 14:13:03.874595168 +0000 UTC m=+0.139043307 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 placement-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, version=17.1.9, release=1, 
vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, build-date=2025-07-21T13:58:12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-placement-api-container, container_name=placement_api, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:13:03 standalone.localdomain podman[164552]: 2025-10-13 14:13:03.943915724 +0000 UTC m=+0.207289201 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, managed_by=tripleo_ansible, container_name=glance_api_cron, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 
17.1 glance-api, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, version=17.1.9)
Oct 13 14:13:03 standalone.localdomain podman[164552]: 2025-10-13 14:13:03.952765445 +0000 UTC m=+0.216138912 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, build-date=2025-07-21T13:58:20, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', 
'/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, version=17.1.9, name=rhosp17/openstack-glance-api, container_name=glance_api_cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4)
Oct 13 14:13:03 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:13:03 standalone.localdomain podman[164553]: 2025-10-13 14:13:03.964119074 +0000 UTC m=+0.228567193 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, name=rhosp17/openstack-placement-api, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T13:58:12, summary=Red Hat OpenStack Platform 17.1 placement-api, managed_by=tripleo_ansible, com.redhat.component=openstack-placement-api-container, 
io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=placement_api, release=1, batch=17.1_20250721.1)
Oct 13 14:13:03 standalone.localdomain podman[164555]: 2025-10-13 14:13:03.978842216 +0000 UTC m=+0.236038203 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, container_name=nova_metadata, tcib_managed=true, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1, vcs-type=git, build-date=2025-07-21T16:05:11, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-api-container, description=Red Hat OpenStack Platform 17.1 nova-api, 
batch=17.1_20250721.1, name=rhosp17/openstack-nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:13:03 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:13:03 standalone.localdomain haproxy[70940]: 192.168.122.99:53416 [13/Oct/2025:14:13:03.222] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/773/773 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/roles/8404b71aa506493ba9615df19ffdd490 HTTP/1.1"
Oct 13 14:13:04 standalone.localdomain podman[164572]: 2025-10-13 14:13:04.00734024 +0000 UTC m=+0.247077671 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, config_id=tripleo_step4, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3)
Oct 13 14:13:04 standalone.localdomain podman[164554]: 2025-10-13 14:13:03.81437048 +0000 UTC m=+0.078139849 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, name=rhosp17/openstack-swift-proxy-server, 
com.redhat.component=openstack-swift-proxy-server-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 14:13:04 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:13:04 standalone.localdomain podman[164554]: 2025-10-13 14:13:04.052105804 +0000 UTC m=+0.315875223 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-swift-proxy-server-container, release=1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, version=17.1.9, container_name=swift_proxy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:13:04 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:13:04 standalone.localdomain podman[164577]: 2025-10-13 14:13:04.103782019 +0000 UTC m=+0.351605098 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, container_name=glance_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, tcib_managed=true, com.redhat.component=openstack-glance-api-container, version=17.1.9, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, release=1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:13:04 standalone.localdomain haproxy[70940]: 192.168.122.99:53422 [13/Oct/2025:14:13:03.561] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/544/544 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/projects/e19449e443e14a8b861fc92b2d1714d7 HTTP/1.1"
Oct 13 14:13:04 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:13:04 standalone.localdomain podman[164572]: 2025-10-13 14:13:04.15496375 +0000 UTC m=+0.394701171 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:13:04 standalone.localdomain podman[164556]: 2025-10-13 14:13:04.113277731 +0000 UTC m=+0.355373794 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, release=1, distribution-scope=public, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_internal, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-glance-api-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4)
Oct 13 14:13:04 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:13:04 standalone.localdomain haproxy[70940]: 192.168.122.99:53424 [13/Oct/2025:14:13:03.586] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/599/599 201 572 - - ---- 58/4/3/3/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:04 standalone.localdomain podman[164573]: 2025-10-13 14:13:04.166765042 +0000 UTC m=+0.412420264 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20250721.1, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., release=1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=)
Oct 13 14:13:04 standalone.localdomain podman[164573]: 2025-10-13 14:13:04.245499637 +0000 UTC m=+0.491154779 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9)
Oct 13 14:13:04 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:13:04 standalone.localdomain runuser[164748]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:13:04 standalone.localdomain podman[164556]: 2025-10-13 14:13:04.31405411 +0000 UTC m=+0.556150153 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, architecture=x86_64, container_name=glance_api_internal, summary=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api)
Oct 13 14:13:04 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:13:04 standalone.localdomain haproxy[70940]: 192.168.122.99:53432 [13/Oct/2025:14:13:03.662] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/825/825 201 496 - - ---- 58/4/3/3/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:04 standalone.localdomain haproxy[70940]: 192.168.122.99:53446 [13/Oct/2025:14:13:03.792] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/817/817 204 165 - - ---- 58/4/3/3/0 0/0 "DELETE /v3/users/84c2721bf3ed4260a64c6fa87e65635f HTTP/1.1"
Oct 13 14:13:04 standalone.localdomain ceph-mon[29756]: pgmap v1143: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:04 standalone.localdomain systemd[1]: tmp-crun.H6Ct58.mount: Deactivated successfully.
Oct 13 14:13:04 standalone.localdomain runuser[164748]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:13:04 standalone.localdomain haproxy[70940]: 172.21.0.2:53656 [13/Oct/2025:14:13:03.909] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1031/1031 201 8141 - - ---- 58/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:13:05 standalone.localdomain haproxy[70940]: 192.168.122.99:53452 [13/Oct/2025:14:13:03.997] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1017/1017 204 165 - - ---- 58/4/3/3/0 0/0 "DELETE /v3/roles/7c0bffc586494bf494dbaf620e203c6b HTTP/1.1"
Oct 13 14:13:05 standalone.localdomain runuser[164818]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:13:05 standalone.localdomain haproxy[70940]: 172.21.0.2:53666 [13/Oct/2025:14:13:04.108] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1199/1199 201 8100 - - ---- 58/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:05 standalone.localdomain haproxy[70940]: 192.168.122.99:53466 [13/Oct/2025:14:13:04.188] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1395/1395 201 496 - - ---- 58/4/3/3/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:05 standalone.localdomain haproxy[70940]: 192.168.122.99:53468 [13/Oct/2025:14:13:04.490] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1113/1113 200 3173 - - ---- 58/4/3/3/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:05 standalone.localdomain haproxy[70940]: 192.168.122.99:53474 [13/Oct/2025:14:13:04.612] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1064/1064 204 165 - - ---- 58/4/3/3/0 0/0 "DELETE /v3/users/7b2e404fade44d6bad896a8467fcf68e HTTP/1.1"
Oct 13 14:13:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1144: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:05 standalone.localdomain systemd[1]: tmp-crun.63wfgZ.mount: Deactivated successfully.
Oct 13 14:13:05 standalone.localdomain runuser[164818]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:13:05 standalone.localdomain podman[164878]: 2025-10-13 14:13:05.761916131 +0000 UTC m=+0.100342530 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, com.redhat.component=openstack-cinder-volume-container, maintainer=OpenStack TripleO Team, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, vcs-type=git, name=rhosp17/openstack-cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, architecture=x86_64, version=17.1.9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cinder-volume, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, description=Red Hat OpenStack Platform 17.1 cinder-volume, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, build-date=2025-07-21T16:13:39, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume)
Oct 13 14:13:05 standalone.localdomain runuser[164902]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:13:05 standalone.localdomain podman[164878]: 2025-10-13 14:13:05.794218482 +0000 UTC m=+0.132644851 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-cinder-volume-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-volume, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cinder-volume, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, build-date=2025-07-21T16:13:39)
Oct 13 14:13:06 standalone.localdomain haproxy[70940]: 172.21.0.2:53674 [13/Oct/2025:14:13:04.945] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1066/1066 201 8141 - - ---- 58/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:06 standalone.localdomain ceph-mon[29756]: pgmap v1144: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:06 standalone.localdomain haproxy[70940]: 192.168.122.99:53476 [13/Oct/2025:14:13:05.016] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1123/1123 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/users/159085cea54044308c908b757d445f19 HTTP/1.1"
Oct 13 14:13:06 standalone.localdomain runuser[164902]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:13:06 standalone.localdomain haproxy[70940]: 172.21.0.2:53688 [13/Oct/2025:14:13:05.316] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1174/1174 201 8100 - - ---- 58/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:06 standalone.localdomain haproxy[70940]: 192.168.122.99:53492 [13/Oct/2025:14:13:05.586] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/947/947 200 3173 - - ---- 58/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:06 standalone.localdomain haproxy[70940]: 192.168.122.99:53506 [13/Oct/2025:14:13:05.607] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/982/982 204 165 - - ---- 58/6/5/5/0 0/0 "PUT /v3/projects/e9b4092f9c68402dbd339a5c7b99b996/users/23873ac56c1040e19f266ae10f5be6d5/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:13:06 standalone.localdomain haproxy[70940]: 192.168.122.99:53518 [13/Oct/2025:14:13:05.680] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1085/1085 204 165 - - ---- 58/6/5/5/0 0/0 "DELETE /v3/users/1626c6cbb1ad405886368172699487d1 HTTP/1.1"
Oct 13 14:13:06 standalone.localdomain haproxy[70940]: 192.168.122.99:53520 [13/Oct/2025:14:13:06.013] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/866/866 201 1041 - - ---- 58/5/4/4/0 0/0 "POST /v3/OS-TRUST/trusts HTTP/1.1"
Oct 13 14:13:06 standalone.localdomain haproxy[70940]: 192.168.122.99:53528 [13/Oct/2025:14:13:06.143] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/831/831 204 165 - - ---- 58/5/4/4/0 0/0 "DELETE /v3/users/98bc3d66902a4dfe97d2f6a840be3dbc HTTP/1.1"
Oct 13 14:13:07 standalone.localdomain haproxy[70940]: 192.168.122.99:53534 [13/Oct/2025:14:13:06.493] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/530/530 200 510 - - ---- 58/4/3/3/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:13:07 standalone.localdomain haproxy[70940]: 192.168.122.99:53550 [13/Oct/2025:14:13:06.535] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/519/519 204 165 - - ---- 58/4/3/3/0 0/0 "PUT /v3/projects/86d48840c9974dd0a0de4d8dc2baa0ab/users/39258bf6ee73411fab99bdb73989b40e/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:13:07 standalone.localdomain haproxy[70940]: 192.168.122.99:53554 [13/Oct/2025:14:13:06.591] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/507/507 200 3173 - - ---- 58/4/3/3/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:07 standalone.localdomain haproxy[70940]: 172.21.0.2:53696 [13/Oct/2025:14:13:06.767] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/645/645 201 8100 - - ---- 58/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:07 standalone.localdomain haproxy[70940]: 192.168.122.99:53568 [13/Oct/2025:14:13:06.881] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/572/572 200 1036 - - ---- 59/5/3/3/0 0/0 "GET /v3/OS-TRUST/trusts/bd3d3ad72eab4eafa50d29bade9eb73e HTTP/1.1"
Oct 13 14:13:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1145: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:07 standalone.localdomain haproxy[70940]: 172.21.0.2:53710 [13/Oct/2025:14:13:06.976] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/740/740 201 8100 - - ---- 58/2/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:13:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:13:07 standalone.localdomain haproxy[70940]: 192.168.122.99:53570 [13/Oct/2025:14:13:07.026] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/721/721 201 576 - - ---- 59/4/3/3/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:07 standalone.localdomain haproxy[70940]: 192.168.122.99:53580 [13/Oct/2025:14:13:07.056] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/748/748 200 3173 - - ---- 59/4/3/3/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:07 standalone.localdomain podman[164988]: 2025-10-13 14:13:07.822640495 +0000 UTC m=+0.082161393 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, build-date=2025-07-21T16:05:11, name=rhosp17/openstack-nova-api, architecture=x86_64, config_id=tripleo_step4, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.component=openstack-nova-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=nova_api, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:13:07 standalone.localdomain haproxy[70940]: 192.168.122.99:53588 [13/Oct/2025:14:13:07.099] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/731/731 204 165 - - ---- 59/4/3/3/0 0/0 "PUT /v3/projects/e9b4092f9c68402dbd339a5c7b99b996/users/23873ac56c1040e19f266ae10f5be6d5/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:13:07 standalone.localdomain podman[164988]: 2025-10-13 14:13:07.854990597 +0000 UTC m=+0.114511485 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, io.openshift.expose-services=, build-date=2025-07-21T16:05:11, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-nova-api, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, batch=17.1_20250721.1, container_name=nova_api, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Oct 13 14:13:07 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:13:07 standalone.localdomain podman[164987]: 2025-10-13 14:13:07.802785146 +0000 UTC m=+0.068786782 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=, com.redhat.component=openstack-keystone-container, name=rhosp17/openstack-keystone, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=keystone_cron)
Oct 13 14:13:07 standalone.localdomain haproxy[70940]: 172.17.0.2:60232 [13/Oct/2025:14:13:07.416] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/502/502 200 8095 - - ---- 59/2/1/1/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:07 standalone.localdomain podman[164987]: 2025-10-13 14:13:07.938034325 +0000 UTC m=+0.204035961 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, container_name=keystone_cron, io.openshift.expose-services=, com.redhat.component=openstack-keystone-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, name=rhosp17/openstack-keystone, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:13:07 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:13:07 standalone.localdomain haproxy[70940]: 192.168.122.99:53598 [13/Oct/2025:14:13:07.455] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/502/502 200 573 - - ---- 59/4/3/3/0 0/0 "GET /v3/OS-TRUST/trusts/bd3d3ad72eab4eafa50d29bade9eb73e/roles HTTP/1.1"
Oct 13 14:13:07 standalone.localdomain haproxy[70940]: 172.17.0.2:46012 [13/Oct/2025:14:13:07.724] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/248/248 200 8095 - - ---- 59/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 172.21.0.2:50594 [13/Oct/2025:14:13:07.414] neutron neutron/standalone.internalapi.localdomain 0/0/0/700/700 200 2976 - - ---- 59/2/1/1/0 0/0 "GET /v2.0/security-groups?tenant_id=dcda7d2fbed640718072d8d15b65fd30&name=default HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 172.21.0.2:50604 [13/Oct/2025:14:13:07.718] neutron neutron/standalone.internalapi.localdomain 0/0/0/535/535 200 2976 - - ---- 59/2/1/1/0 0/0 "GET /v2.0/security-groups?tenant_id=87a1a19f89ac46ca90a271c04ae6c45f&name=default HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 192.168.122.99:53612 [13/Oct/2025:14:13:07.749] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/521/521 201 499 - - ---- 59/4/3/3/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 192.168.122.99:53622 [13/Oct/2025:14:13:07.807] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/482/482 204 165 - - ---- 59/4/3/3/0 0/0 "PUT /v3/projects/86d48840c9974dd0a0de4d8dc2baa0ab/users/39258bf6ee73411fab99bdb73989b40e/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 192.168.122.99:53624 [13/Oct/2025:14:13:07.833] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/511/511 200 3173 - - ---- 59/4/3/3/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 192.168.122.99:53634 [13/Oct/2025:14:13:07.959] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/442/442 200 446 - - ---- 59/4/3/3/0 0/0 "GET /v3/OS-TRUST/trusts/bd3d3ad72eab4eafa50d29bade9eb73e/roles/9e61132ed0fc4d18be51e4b49f4a188d HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 192.168.122.99:53644 [13/Oct/2025:14:13:08.271] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/161/161 200 3173 - - ---- 59/4/3/3/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 192.168.122.99:53646 [13/Oct/2025:14:13:08.292] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/168/168 200 3173 - - ---- 59/4/3/3/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 172.21.0.2:50616 [13/Oct/2025:14:13:08.115] neutron neutron/standalone.internalapi.localdomain 0/0/0/364/364 204 152 - - ---- 59/2/1/1/0 0/0 "DELETE /v2.0/security-groups/951118ca-62d6-43da-afbb-6346c441a9e3 HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 192.168.122.99:53652 [13/Oct/2025:14:13:08.345] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/140/140 204 165 - - ---- 59/5/4/4/0 0/0 "PUT /v3/projects/e9b4092f9c68402dbd339a5c7b99b996/users/23873ac56c1040e19f266ae10f5be6d5/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 172.21.0.2:50626 [13/Oct/2025:14:13:08.255] neutron neutron/standalone.internalapi.localdomain 0/0/0/267/267 204 152 - - ---- 59/1/0/0/0 0/0 "DELETE /v2.0/security-groups/4d74ca51-63c2-45c6-917c-9ab347534d8d HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 192.168.122.99:53664 [13/Oct/2025:14:13:08.404] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/146/146 200 202 - - ---- 59/5/4/4/0 0/0 "HEAD /v3/OS-TRUST/trusts/bd3d3ad72eab4eafa50d29bade9eb73e/roles/9e61132ed0fc4d18be51e4b49f4a188d HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 192.168.122.99:53666 [13/Oct/2025:14:13:08.436] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/176/176 204 165 - - ---- 59/5/4/4/0 0/0 "PUT /v3/projects/ee6436030a1c49299e0ed6eb085e006a/users/ee83bd271a9a4691912928400a45698e/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 192.168.122.99:53680 [13/Oct/2025:14:13:08.464] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/205/205 204 165 - - ---- 59/5/4/4/0 0/0 "PUT /v3/projects/86d48840c9974dd0a0de4d8dc2baa0ab/users/39258bf6ee73411fab99bdb73989b40e/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain ceph-mon[29756]: pgmap v1145: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 192.168.122.99:53694 [13/Oct/2025:14:13:08.482] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/311/311 204 165 - - ---- 59/4/3/3/0 0/0 "DELETE /v3/projects/dcda7d2fbed640718072d8d15b65fd30 HTTP/1.1"
Oct 13 14:13:08 standalone.localdomain haproxy[70940]: 172.21.0.2:50630 [13/Oct/2025:14:13:08.795] neutron neutron/standalone.internalapi.localdomain 0/0/0/122/122 200 2976 - - ---- 59/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=c68bdcc67389489eaf1560e1592ad4eb&name=default HTTP/1.1"
Oct 13 14:13:09 standalone.localdomain haproxy[70940]: 172.21.0.2:50634 [13/Oct/2025:14:13:08.920] neutron neutron/standalone.internalapi.localdomain 0/0/0/205/205 204 152 - - ---- 59/1/0/0/0 0/0 "DELETE /v2.0/security-groups/8ee9cb23-563a-4038-9f6f-eaaf19468f32 HTTP/1.1"
Oct 13 14:13:09 standalone.localdomain haproxy[70940]: 172.21.0.2:53718 [13/Oct/2025:14:13:08.490] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/659/659 201 8170 - - ---- 59/4/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:09 standalone.localdomain haproxy[70940]: 192.168.122.99:53702 [13/Oct/2025:14:13:08.524] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/708/708 204 165 - - ---- 59/5/4/4/0 0/0 "DELETE /v3/projects/87a1a19f89ac46ca90a271c04ae6c45f HTTP/1.1"
Oct 13 14:13:09 standalone.localdomain haproxy[70940]: 192.168.122.99:53718 [13/Oct/2025:14:13:08.554] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/726/726 404 319 - - ---- 59/4/3/3/0 0/0 "GET /v3/OS-TRUST/trusts/bd3d3ad72eab4eafa50d29bade9eb73e/roles/843ad9a4cd8d4bc6831b9196fe3254c4 HTTP/1.1"
Oct 13 14:13:09 standalone.localdomain haproxy[70940]: 192.168.122.99:53734 [13/Oct/2025:14:13:08.615] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/721/721 200 3173 - - ---- 60/5/4/4/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:09 standalone.localdomain haproxy[70940]: 172.21.0.2:50650 [13/Oct/2025:14:13:09.235] neutron neutron/standalone.internalapi.localdomain 0/0/0/137/137 200 2976 - - ---- 59/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=83a3230fcc4b4fffb8daa3ac839cad44&name=default HTTP/1.1"
Oct 13 14:13:09 standalone.localdomain haproxy[70940]: 172.21.0.2:50654 [13/Oct/2025:14:13:09.376] neutron neutron/standalone.internalapi.localdomain 0/0/0/158/158 204 152 - - ---- 59/1/0/0/0 0/0 "DELETE /v2.0/security-groups/e1e11d3c-e4ee-4480-b218-09c07593888c HTTP/1.1"
Oct 13 14:13:09 standalone.localdomain haproxy[70940]: 172.21.0.2:53720 [13/Oct/2025:14:13:08.682] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/972/972 201 8170 - - ---- 59/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1146: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:09 standalone.localdomain haproxy[70940]: 192.168.122.99:53750 [13/Oct/2025:14:13:09.128] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/583/583 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/projects/c68bdcc67389489eaf1560e1592ad4eb HTTP/1.1"
Oct 13 14:13:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:13:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:13:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:13:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:13:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:13:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:13:09 standalone.localdomain haproxy[70940]: 192.168.122.99:53754 [13/Oct/2025:14:13:09.150] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/679/679 201 483 - - ---- 59/5/4/4/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:13:09 standalone.localdomain podman[165260]: 2025-10-13 14:13:09.84725065 +0000 UTC m=+0.099334229 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, container_name=neutron_sriov_agent, tcib_managed=true, name=rhosp17/openstack-neutron-sriov-agent, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, release=1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, build-date=2025-07-21T16:03:34, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, batch=17.1_20250721.1)
Oct 13 14:13:09 standalone.localdomain haproxy[70940]: 192.168.122.99:53768 [13/Oct/2025:14:13:09.283] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/606/606 404 209 - - ---- 59/5/4/4/0 0/0 "HEAD /v3/OS-TRUST/trusts/bd3d3ad72eab4eafa50d29bade9eb73e/roles/843ad9a4cd8d4bc6831b9196fe3254c4 HTTP/1.1"
Oct 13 14:13:09 standalone.localdomain podman[165257]: 2025-10-13 14:13:09.890669442 +0000 UTC m=+0.147459636 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, name=rhosp17/openstack-barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, com.redhat.component=openstack-barbican-api-container, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T15:22:44, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 barbican-api, container_name=barbican_api, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:13:09 standalone.localdomain podman[165270]: 2025-10-13 14:13:09.897669077 +0000 UTC m=+0.144088882 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, com.redhat.component=openstack-barbican-worker-container, io.buildah.version=1.33.12, container_name=barbican_worker, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-barbican-worker, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T15:36:22, managed_by=tripleo_ansible, version=17.1.9)
Oct 13 14:13:09 standalone.localdomain podman[165257]: 2025-10-13 14:13:09.914042689 +0000 UTC m=+0.170832893 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, name=rhosp17/openstack-barbican-api, com.redhat.component=openstack-barbican-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, vendor=Red Hat, Inc., build-date=2025-07-21T15:22:44, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, 
description=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, container_name=barbican_api, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:13:09 standalone.localdomain podman[165260]: 2025-10-13 14:13:09.925998576 +0000 UTC m=+0.178082165 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, vcs-type=git, config_id=tripleo_step4, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-sriov-agent-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, managed_by=tripleo_ansible, build-date=2025-07-21T16:03:34, release=1)
Oct 13 14:13:09 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:13:09 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:13:09 standalone.localdomain podman[165270]: 2025-10-13 14:13:09.945424762 +0000 UTC m=+0.191844567 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, com.redhat.component=openstack-barbican-worker-container, batch=17.1_20250721.1, tcib_managed=true, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, name=rhosp17/openstack-barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:36:22, container_name=barbican_worker)
Oct 13 14:13:09 standalone.localdomain haproxy[70940]: 192.168.122.99:53776 [13/Oct/2025:14:13:09.339] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/615/615 204 165 - - ---- 59/5/4/4/0 0/0 "PUT /v3/projects/ee6436030a1c49299e0ed6eb085e006a/users/ee83bd271a9a4691912928400a45698e/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:09 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:13:09 standalone.localdomain podman[165258]: 2025-10-13 14:13:09.997733986 +0000 UTC m=+0.252510087 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-barbican-keystone-listener, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_keystone_listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, batch=17.1_20250721.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-barbican-keystone-listener-container, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T16:18:19, release=1)
Oct 13 14:13:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:13:10 standalone.localdomain haproxy[70940]: 192.168.122.99:53784 [13/Oct/2025:14:13:09.539] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/511/511 204 165 - - ---- 59/4/3/3/0 0/0 "DELETE /v3/projects/83a3230fcc4b4fffb8daa3ac839cad44 HTTP/1.1"
Oct 13 14:13:10 standalone.localdomain podman[165256]: 2025-10-13 14:13:10.061679569 +0000 UTC m=+0.321454904 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-type=git, release=1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-dhcp-agent-container, container_name=neutron_dhcp, tcib_managed=true, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:54, distribution-scope=public, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4)
Oct 13 14:13:10 standalone.localdomain podman[165258]: 2025-10-13 14:13:10.077749252 +0000 UTC m=+0.332525353 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, com.redhat.component=openstack-barbican-keystone-listener-container, distribution-scope=public, vcs-type=git, build-date=2025-07-21T16:18:19, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
barbican-keystone-listener, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-barbican-keystone-listener, architecture=x86_64, managed_by=tripleo_ansible, container_name=barbican_keystone_listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, release=1, version=17.1.9, vendor=Red Hat, Inc.)
Oct 13 14:13:10 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:13:10 standalone.localdomain haproxy[70940]: 192.168.122.99:50270 [13/Oct/2025:14:13:09.657] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/465/465 201 431 - - ---- 59/3/2/2/0 0/0 "POST /v3/regions HTTP/1.1"
Oct 13 14:13:10 standalone.localdomain podman[165256]: 2025-10-13 14:13:10.155382344 +0000 UTC m=+0.415157609 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, build-date=2025-07-21T16:28:54, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_dhcp, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, architecture=x86_64)
Oct 13 14:13:10 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:13:10 standalone.localdomain podman[165281]: 2025-10-13 14:13:10.167364101 +0000 UTC m=+0.403973275 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-nova-api-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, io.buildah.version=1.33.12, 
maintainer=OpenStack TripleO Team, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, name=rhosp17/openstack-nova-api, container_name=nova_api_cron, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1)
Oct 13 14:13:10 standalone.localdomain podman[165281]: 2025-10-13 14:13:10.175728328 +0000 UTC m=+0.412337482 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-nova-api, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, container_name=nova_api_cron, build-date=2025-07-21T16:05:11, vcs-type=git, version=17.1.9, release=1, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-nova-api-container)
Oct 13 14:13:10 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:13:10 standalone.localdomain haproxy[70940]: 172.21.0.2:44690 [13/Oct/2025:14:13:09.716] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/715/715 201 8100 - - ---- 59/5/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:10 standalone.localdomain haproxy[70940]: 192.168.122.99:50286 [13/Oct/2025:14:13:09.832] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/653/653 201 484 - - ---- 59/3/2/2/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:13:10 standalone.localdomain haproxy[70940]: 192.168.122.99:50300 [13/Oct/2025:14:13:09.893] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/674/674 204 165 - - ---- 59/3/2/2/0 0/0 "DELETE /v3/OS-TRUST/trusts/bd3d3ad72eab4eafa50d29bade9eb73e HTTP/1.1"
Oct 13 14:13:10 standalone.localdomain ceph-mon[29756]: pgmap v1146: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:10 standalone.localdomain haproxy[70940]: 172.21.0.2:44702 [13/Oct/2025:14:13:09.962] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/972/972 201 8114 - - ---- 59/5/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:11 standalone.localdomain haproxy[70940]: 172.21.0.2:44710 [13/Oct/2025:14:13:10.053] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1217/1217 201 8100 - - ---- 59/4/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:11 standalone.localdomain haproxy[70940]: 192.168.122.99:50314 [13/Oct/2025:14:13:10.123] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1203/1203 201 430 - - ---- 59/4/3/3/0 0/0 "POST /v3/regions HTTP/1.1"
Oct 13 14:13:11 standalone.localdomain haproxy[70940]: 172.21.0.2:44722 [13/Oct/2025:14:13:10.437] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1209/1209 201 8100 - - ---- 59/4/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1147: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:11 standalone.localdomain haproxy[70940]: 192.168.122.99:50324 [13/Oct/2025:14:13:10.489] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1212/1212 201 484 - - ---- 59/5/4/4/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:13:11 standalone.localdomain haproxy[70940]: 192.168.122.99:50334 [13/Oct/2025:14:13:10.569] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1183/1183 404 320 - - ---- 59/5/4/4/0 0/0 "GET /v3/OS-TRUST/trusts/bd3d3ad72eab4eafa50d29bade9eb73e HTTP/1.1"
Oct 13 14:13:11 standalone.localdomain haproxy[70940]: 192.168.122.99:50338 [13/Oct/2025:14:13:10.938] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/871/871 201 576 - - ---- 59/5/4/4/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:12 standalone.localdomain haproxy[70940]: 172.21.0.2:44726 [13/Oct/2025:14:13:11.276] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/893/893 201 8100 - - ---- 59/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:12 standalone.localdomain haproxy[70940]: 192.168.122.99:50342 [13/Oct/2025:14:13:11.331] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/916/916 201 438 - - ---- 59/6/5/5/0 0/0 "PUT /v3/regions/b190335d-e397-4226-9129-b6c5f4d9eae9 HTTP/1.1"
Oct 13 14:13:12 standalone.localdomain haproxy[70940]: 192.168.122.99:50358 [13/Oct/2025:14:13:11.649] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/637/637 200 510 - - ---- 59/6/5/5/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:13:12 standalone.localdomain haproxy[70940]: 192.168.122.99:50364 [13/Oct/2025:14:13:11.705] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/654/654 201 486 - - ---- 59/6/5/5/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:13:12 standalone.localdomain haproxy[70940]: 192.168.122.99:50372 [13/Oct/2025:14:13:11.755] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/690/690 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/roles/843ad9a4cd8d4bc6831b9196fe3254c4 HTTP/1.1"
Oct 13 14:13:12 standalone.localdomain haproxy[70940]: 192.168.122.99:50388 [13/Oct/2025:14:13:11.812] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/926/926 201 498 - - ---- 59/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:12 standalone.localdomain haproxy[70940]: 192.168.122.99:50400 [13/Oct/2025:14:13:12.173] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/584/584 200 510 - - ---- 59/6/5/5/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:13:12 standalone.localdomain ceph-mon[29756]: pgmap v1147: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:12 standalone.localdomain haproxy[70940]: 192.168.122.99:50410 [13/Oct/2025:14:13:12.249] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/578/578 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/regions/b190335d-e397-4226-9129-b6c5f4d9eae9 HTTP/1.1"
Oct 13 14:13:12 standalone.localdomain haproxy[70940]: 192.168.122.99:50420 [13/Oct/2025:14:13:12.288] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/582/582 201 586 - - ---- 59/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:13 standalone.localdomain haproxy[70940]: 192.168.122.99:50432 [13/Oct/2025:14:13:12.362] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/652/652 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/domains/0aafd1f7793e4bbba547dc5775dddeed HTTP/1.1"
Oct 13 14:13:13 standalone.localdomain haproxy[70940]: 192.168.122.99:50440 [13/Oct/2025:14:13:12.447] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/633/633 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/roles/9e61132ed0fc4d18be51e4b49f4a188d HTTP/1.1"
Oct 13 14:13:13 standalone.localdomain haproxy[70940]: 192.168.122.99:50446 [13/Oct/2025:14:13:12.740] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/387/387 200 2700 - - ---- 59/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:13 standalone.localdomain haproxy[70940]: 192.168.122.99:50462 [13/Oct/2025:14:13:12.761] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/392/392 201 582 - - ---- 59/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:13 standalone.localdomain haproxy[70940]: 192.168.122.99:50478 [13/Oct/2025:14:13:12.831] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/378/378 201 460 - - ---- 59/6/5/5/0 0/0 "POST /v3/regions HTTP/1.1"
Oct 13 14:13:13 standalone.localdomain haproxy[70940]: 192.168.122.99:50492 [13/Oct/2025:14:13:12.874] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/639/639 201 504 - - ---- 59/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:13 standalone.localdomain haproxy[70940]: 192.168.122.99:50496 [13/Oct/2025:14:13:13.017] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/550/550 201 456 - - ---- 59/6/5/5/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:13:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1148: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:13 standalone.localdomain haproxy[70940]: 192.168.122.99:50498 [13/Oct/2025:14:13:13.082] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/614/614 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/users/eafc7c80cbfb41a68cd61ab91843310f HTTP/1.1"
Oct 13 14:13:13 standalone.localdomain haproxy[70940]: 192.168.122.99:50502 [13/Oct/2025:14:13:13.129] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/639/639 204 165 - - ---- 59/6/5/5/0 0/0 "PUT /v3/projects/3453039293644746ac78be4362a9ea9a/users/2e1247f88b0b4980b39cc8a7908f725d/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50508 [13/Oct/2025:14:13:13.154] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/922/922 201 502 - - ---- 59/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50510 [13/Oct/2025:14:13:13.212] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/923/923 200 456 - - ---- 59/6/5/5/0 0/0 "PATCH /v3/regions/50f99afd969a4361aaac0c352c9b9506 HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50516 [13/Oct/2025:14:13:13.514] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/644/644 200 2700 - - ---- 59/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50520 [13/Oct/2025:14:13:13.571] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/664/664 200 452 - - ---- 59/6/5/5/0 0/0 "PATCH /v3/domains/30fe6d51deef4c7e9486aaf0cab2db00 HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50522 [13/Oct/2025:14:13:13.701] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/626/626 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/projects/448d9dfe86114259bd765db35152d7a8 HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50534 [13/Oct/2025:14:13:13.771] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/602/602 200 2700 - - ---- 59/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50538 [13/Oct/2025:14:13:14.079] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/316/316 200 2700 - - ---- 59/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50548 [13/Oct/2025:14:13:14.137] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/307/307 200 456 - - ---- 59/6/5/5/0 0/0 "GET /v3/regions/50f99afd969a4361aaac0c352c9b9506 HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50550 [13/Oct/2025:14:13:14.160] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/311/311 204 165 - - ---- 60/7/6/5/0 0/0 "PUT /v3/projects/ab1bf4c522974944b2026bc1ff7400d5/users/a9f2e9a17ec2466ba462c960fa3f48f0/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50566 [13/Oct/2025:14:13:14.237] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/339/339 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/domains/30fe6d51deef4c7e9486aaf0cab2db00 HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50570 [13/Oct/2025:14:13:14.330] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/298/298 201 535 - - ---- 59/6/5/5/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50578 [13/Oct/2025:14:13:14.375] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/301/301 204 165 - - ---- 59/6/5/5/0 0/0 "PUT /v3/projects/3453039293644746ac78be4362a9ea9a/users/2e1247f88b0b4980b39cc8a7908f725d/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50580 [13/Oct/2025:14:13:14.398] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/333/333 204 165 - - ---- 59/6/5/5/0 0/0 "PUT /v3/projects/0acf3492e7ab46898cc6824b613279dd/users/e8cb134530974897ae3e0d0564acde20/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:13:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1861744921' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50586 [13/Oct/2025:14:13:14.448] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/328/328 204 165 - - ---- 59/6/5/5/0 0/0 "DELETE /v3/regions/50f99afd969a4361aaac0c352c9b9506 HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain ceph-mon[29756]: pgmap v1148: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:14 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1861744921' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50590 [13/Oct/2025:14:13:14.475] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/379/379 200 2700 - - ---- 59/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:14 standalone.localdomain haproxy[70940]: 192.168.122.99:50606 [13/Oct/2025:14:13:14.579] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/354/354 201 485 - - ---- 60/7/6/5/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:13:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:13:15 standalone.localdomain haproxy[70940]: 172.17.0.2:37440 [13/Oct/2025:14:13:14.975] placement placement/standalone.internalapi.localdomain 0/0/0/45/45 200 298 - - ---- 60/1/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/allocations HTTP/1.1"
Oct 13 14:13:15 standalone.localdomain haproxy[70940]: 192.168.122.99:50614 [13/Oct/2025:14:13:14.631] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/620/620 201 620 - - ---- 60/6/5/5/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:15 standalone.localdomain haproxy[70940]: 192.168.122.99:50622 [13/Oct/2025:14:13:14.679] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/618/618 200 2700 - - ---- 60/6/5/5/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:15 standalone.localdomain haproxy[70940]: 192.168.122.99:50630 [13/Oct/2025:14:13:14.733] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/584/584 200 2700 - - ---- 61/7/6/6/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:15 standalone.localdomain haproxy[70940]: 192.168.122.99:50642 [13/Oct/2025:14:13:14.779] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/586/586 200 871 - - ---- 60/6/5/5/0 0/0 "GET /v3/regions HTTP/1.1"
Oct 13 14:13:15 standalone.localdomain haproxy[70940]: 192.168.122.99:50654 [13/Oct/2025:14:13:14.859] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/525/525 204 165 - - ---- 60/6/5/5/0 0/0 "PUT /v3/projects/ab1bf4c522974944b2026bc1ff7400d5/users/a9f2e9a17ec2466ba462c960fa3f48f0/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:15 standalone.localdomain haproxy[70940]: 192.168.122.99:50666 [13/Oct/2025:14:13:14.936] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/522/522 200 481 - - ---- 60/5/4/4/0 0/0 "PATCH /v3/domains/01baaa451bc04269aa0670755ed7ea3a HTTP/1.1"
Oct 13 14:13:15 standalone.localdomain haproxy[70940]: 192.168.122.99:50670 [13/Oct/2025:14:13:15.252] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/255/255 201 451 - - ---- 60/5/4/4/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:13:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:13:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1427319367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:13:15 standalone.localdomain haproxy[70940]: 192.168.122.99:50682 [13/Oct/2025:14:13:15.299] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/265/265 204 165 - - ---- 60/5/4/4/0 0/0 "PUT /v3/projects/3453039293644746ac78be4362a9ea9a/users/2e1247f88b0b4980b39cc8a7908f725d/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:15 standalone.localdomain haproxy[70940]: 192.168.122.99:50684 [13/Oct/2025:14:13:15.318] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/298/298 204 165 - - ---- 60/4/3/3/0 0/0 "PUT /v3/projects/0acf3492e7ab46898cc6824b613279dd/users/e8cb134530974897ae3e0d0564acde20/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:15 standalone.localdomain haproxy[70940]: 192.168.122.99:50700 [13/Oct/2025:14:13:15.368] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/307/307 404 321 - - ---- 60/3/2/2/0 0/0 "DELETE /v3/regions/50f99afd969a4361aaac0c352c9b9506 HTTP/1.1"
Oct 13 14:13:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1149: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:13:15 standalone.localdomain systemd[1]: tmp-crun.AGhae1.mount: Deactivated successfully.
Oct 13 14:13:15 standalone.localdomain podman[165525]: 2025-10-13 14:13:15.840012918 +0000 UTC m=+0.100637988 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, batch=17.1_20250721.1, build-date=2025-07-21T13:30:04, com.redhat.component=openstack-ovn-northd-container, config_id=ovn_cluster_northd, version=17.1.9, name=rhosp17/openstack-ovn-northd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_cluster_northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, release=1)
Oct 13 14:13:15 standalone.localdomain podman[165525]: 2025-10-13 14:13:15.851261263 +0000 UTC m=+0.111886333 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, config_id=ovn_cluster_northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, container_name=ovn_cluster_northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, build-date=2025-07-21T13:30:04, version=17.1.9, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-northd, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-northd-container, vcs-type=git, name=rhosp17/openstack-ovn-northd, release=1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:13:15 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:13:15 standalone.localdomain haproxy[70940]: 172.21.0.2:44742 [13/Oct/2025:14:13:15.391] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/594/594 201 8124 - - ---- 60/5/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:16 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1427319367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:13:16 standalone.localdomain ceph-mon[29756]: pgmap v1149: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:16 standalone.localdomain haproxy[70940]: 192.168.122.99:50710 [13/Oct/2025:14:13:15.459] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/571/571 200 481 - - ---- 60/4/3/3/0 0/0 "GET /v3/domains/01baaa451bc04269aa0670755ed7ea3a HTTP/1.1"
Oct 13 14:13:16 standalone.localdomain haproxy[70940]: 192.168.122.99:50722 [13/Oct/2025:14:13:15.510] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/562/562 201 453 - - ---- 60/4/3/3/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:13:16 standalone.localdomain haproxy[70940]: 172.21.0.2:44748 [13/Oct/2025:14:13:15.569] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/811/811 201 8174 - - ---- 60/4/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:13:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:13:16 standalone.localdomain podman[165552]: 2025-10-13 14:13:16.526439998 +0000 UTC m=+0.062116946 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, config_id=tripleo_step2, container_name=clustercheck, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, build-date=2025-07-21T12:58:45, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, name=rhosp17/openstack-mariadb, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, release=1, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:13:16 standalone.localdomain podman[165552]: 2025-10-13 14:13:16.577943768 +0000 UTC m=+0.113620766 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, container_name=clustercheck, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, config_id=tripleo_step2, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, tcib_managed=true, com.redhat.component=openstack-mariadb-container)
Oct 13 14:13:16 standalone.localdomain podman[165551]: 2025-10-13 14:13:16.59822385 +0000 UTC m=+0.133002891 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, release=1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_id=tripleo_step5, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 13 14:13:16 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:13:16 standalone.localdomain podman[165551]: 2025-10-13 14:13:16.66698027 +0000 UTC m=+0.201759301 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_compute, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:13:16 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:13:16 standalone.localdomain haproxy[70940]: 172.21.0.2:44756 [13/Oct/2025:14:13:15.626] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/1067/1067 201 8120 - - ---- 60/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:16 standalone.localdomain haproxy[70940]: 192.168.122.99:50736 [13/Oct/2025:14:13:15.681] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/1025/1025 200 871 - - ---- 60/6/5/5/0 0/0 "GET /v3/regions HTTP/1.1"
Oct 13 14:13:16 standalone.localdomain haproxy[70940]: 192.168.122.99:50746 [13/Oct/2025:14:13:15.990] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/727/727 200 401 - - ---- 60/6/5/5/0 0/0 "GET /v3/domains/default HTTP/1.1"
Oct 13 14:13:16 standalone.localdomain haproxy[70940]: 192.168.122.99:50754 [13/Oct/2025:14:13:16.032] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/771/771 204 165 - - ---- 60/6/5/5/0 0/0 "DELETE /v3/domains/01baaa451bc04269aa0670755ed7ea3a HTTP/1.1"
Oct 13 14:13:16 standalone.localdomain haproxy[70940]: 192.168.122.99:50768 [13/Oct/2025:14:13:16.073] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/795/795 204 165 - - ---- 60/6/5/5/0 0/0 "PUT /v3/projects/91568481fa6247f5bbf32305ec123acc/users/797aff902b984fdf868da89ef4daa365/roles/f9623d2c3b2f4e46bf55f4aaa37c1ebd HTTP/1.1"
Oct 13 14:13:16 standalone.localdomain haproxy[70940]: 192.168.122.99:50782 [13/Oct/2025:14:13:16.383] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/537/537 201 437 - - ---- 60/6/5/5/0 0/0 "POST /v3/policies HTTP/1.1"
Oct 13 14:13:16 standalone.localdomain haproxy[70940]: 192.168.122.99:50784 [13/Oct/2025:14:13:16.699] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/259/259 200 7316 - - ---- 60/6/5/5/0 0/0 "GET /v3/auth/catalog HTTP/1.1"
Oct 13 14:13:17 standalone.localdomain haproxy[70940]: 192.168.122.99:50796 [13/Oct/2025:14:13:16.710] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/298/298 201 461 - - ---- 60/6/5/5/0 0/0 "POST /v3/regions HTTP/1.1"
Oct 13 14:13:17 standalone.localdomain runuser[165619]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:13:17 standalone.localdomain haproxy[70940]: 192.168.122.99:50804 [13/Oct/2025:14:13:16.727] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/384/384 204 165 - - ---- 60/6/5/5/0 0/0 "DELETE /v3/users/a9f2e9a17ec2466ba462c960fa3f48f0 HTTP/1.1"
Oct 13 14:13:17 standalone.localdomain haproxy[70940]: 192.168.122.99:50806 [13/Oct/2025:14:13:16.805] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/359/359 200 1518 - - ---- 60/5/4/4/0 0/0 "GET /v3/domains HTTP/1.1"
Oct 13 14:13:17 standalone.localdomain haproxy[70940]: 192.168.122.99:50820 [13/Oct/2025:14:13:16.871] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/354/354 204 165 - - ---- 60/5/4/4/0 0/0 "PUT /v3/projects/91568481fa6247f5bbf32305ec123acc/users/797aff902b984fdf868da89ef4daa365/roles/8fa374141fe044298638dc94bb9c591f HTTP/1.1"
Oct 13 14:13:17 standalone.localdomain haproxy[70940]: 192.168.122.99:50824 [13/Oct/2025:14:13:16.922] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/365/365 200 439 - - ---- 60/5/4/4/0 0/0 "PATCH /v3/policies/758934981f74427896ba7ceb943e358f HTTP/1.1"
Oct 13 14:13:17 standalone.localdomain haproxy[70940]: 192.168.122.99:50838 [13/Oct/2025:14:13:16.966] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/417/417 204 165 - - ---- 60/5/4/4/0 0/0 "DELETE /v3/users/e8cb134530974897ae3e0d0564acde20 HTTP/1.1"
Oct 13 14:13:17 standalone.localdomain haproxy[70940]: 192.168.122.99:50840 [13/Oct/2025:14:13:17.010] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/431/431 200 602 - - ---- 60/4/3/3/0 0/0 "GET /v3/regions?parent_region_id=6b784131fc65467cb7ae6b5b8af3735e HTTP/1.1"
Oct 13 14:13:17 standalone.localdomain runuser[165619]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:13:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1150: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:17 standalone.localdomain runuser[165688]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:13:17 standalone.localdomain haproxy[70940]: 172.21.0.2:44764 [13/Oct/2025:14:13:17.113] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/665/665 201 8100 - - ---- 60/4/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:17 standalone.localdomain haproxy[70940]: 192.168.122.99:50850 [13/Oct/2025:14:13:17.165] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/663/663 404 321 - - ---- 60/4/3/3/0 0/0 "PATCH /v3/domains/01baaa451bc04269aa0670755ed7ea3a HTTP/1.1"
Oct 13 14:13:17 standalone.localdomain haproxy[70940]: 192.168.122.99:50852 [13/Oct/2025:14:13:17.227] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/652/652 200 7737 - - ---- 60/4/3/3/0 0/0 "GET /v3/users HTTP/1.1"
Oct 13 14:13:17 standalone.localdomain haproxy[70940]: 192.168.122.99:50860 [13/Oct/2025:14:13:17.290] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/633/633 200 439 - - ---- 59/3/2/2/0 0/0 "GET /v3/policies/758934981f74427896ba7ceb943e358f HTTP/1.1"
Oct 13 14:13:18 standalone.localdomain haproxy[70940]: 172.21.0.2:44776 [13/Oct/2025:14:13:17.387] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/803/803 201 8100 - - ---- 59/3/2/2/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:18 standalone.localdomain haproxy[70940]: 192.168.122.99:50868 [13/Oct/2025:14:13:17.443] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/771/771 204 165 - - ---- 60/3/2/2/0 0/0 "DELETE /v3/regions/852cbeaeb1f04a07ab46ff7549efd66e HTTP/1.1"
Oct 13 14:13:18 standalone.localdomain haproxy[70940]: 172.17.0.2:46012 [13/Oct/2025:14:13:17.790] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/517/517 200 8095 - - ---- 60/3/2/2/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:18 standalone.localdomain haproxy[70940]: 192.168.122.99:50884 [13/Oct/2025:14:13:17.830] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/505/505 201 484 - - ---- 61/4/3/2/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:13:18 standalone.localdomain runuser[165688]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:13:18 standalone.localdomain runuser[165832]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:13:18 standalone.localdomain haproxy[70940]: 172.21.0.2:43768 [13/Oct/2025:14:13:17.782] neutron neutron/standalone.internalapi.localdomain 0/0/0/668/668 200 2976 - - ---- 61/3/2/1/0 0/0 "GET /v2.0/security-groups?tenant_id=ab1bf4c522974944b2026bc1ff7400d5&name=default HTTP/1.1"
Oct 13 14:13:18 standalone.localdomain haproxy[70940]: 172.21.0.2:43788 [13/Oct/2025:14:13:18.452] neutron neutron/standalone.internalapi.localdomain 0/0/0/153/153 204 152 - - ---- 60/2/1/1/0 0/0 "DELETE /v2.0/security-groups/364aa64d-2e4e-498e-9cb7-ce190df492c3 HTTP/1.1"
Oct 13 14:13:18 standalone.localdomain haproxy[70940]: 172.21.0.2:44784 [13/Oct/2025:14:13:17.883] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/774/774 201 8139 - - ---- 60/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:18 standalone.localdomain haproxy[70940]: 192.168.122.99:50888 [13/Oct/2025:14:13:17.927] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/781/781 204 165 - - ---- 60/4/3/3/0 0/0 "DELETE /v3/policies/758934981f74427896ba7ceb943e358f HTTP/1.1"
Oct 13 14:13:18 standalone.localdomain ceph-mon[29756]: pgmap v1150: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:18 standalone.localdomain haproxy[70940]: 172.17.0.2:58178 [13/Oct/2025:14:13:18.196] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/588/588 200 8095 - - ---- 60/3/1/1/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:18 standalone.localdomain haproxy[70940]: 192.168.122.99:50894 [13/Oct/2025:14:13:18.219] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/624/624 204 165 - - ---- 60/4/3/3/0 0/0 "DELETE /v3/regions/40b01d20f4f34f25a07a121df28736ca HTTP/1.1"
Oct 13 14:13:18 standalone.localdomain haproxy[70940]: 172.21.0.2:43780 [13/Oct/2025:14:13:18.192] neutron neutron/standalone.internalapi.localdomain 0/0/0/734/734 200 2976 - - ---- 60/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=0acf3492e7ab46898cc6824b613279dd&name=default HTTP/1.1"
Oct 13 14:13:19 standalone.localdomain haproxy[70940]: 172.21.0.2:43790 [13/Oct/2025:14:13:18.929] neutron neutron/standalone.internalapi.localdomain 0/0/0/182/182 204 152 - - ---- 60/1/0/0/0 0/0 "DELETE /v2.0/security-groups/aca67fdd-3452-47e5-a342-489e71f78c05 HTTP/1.1"
Oct 13 14:13:19 standalone.localdomain haproxy[70940]: 192.168.122.99:50902 [13/Oct/2025:14:13:18.336] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/809/809 201 555 - - ---- 60/5/4/4/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:19 standalone.localdomain haproxy[70940]: 192.168.122.99:50912 [13/Oct/2025:14:13:18.607] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/596/596 204 165 - - ---- 60/5/4/4/0 0/0 "DELETE /v3/projects/ab1bf4c522974944b2026bc1ff7400d5 HTTP/1.1"
Oct 13 14:13:19 standalone.localdomain runuser[165832]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:13:19 standalone.localdomain haproxy[70940]: 172.21.0.2:44786 [13/Oct/2025:14:13:18.661] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/839/839 201 8139 - - ---- 59/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:19 standalone.localdomain haproxy[70940]: 192.168.122.99:50916 [13/Oct/2025:14:13:18.712] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/833/833 201 436 - - ---- 59/5/4/4/0 0/0 "POST /v3/policies HTTP/1.1"
Oct 13 14:13:19 standalone.localdomain haproxy[70940]: 192.168.122.99:50926 [13/Oct/2025:14:13:18.846] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/756/756 204 165 - - ---- 59/5/4/4/0 0/0 "DELETE /v3/regions/6b784131fc65467cb7ae6b5b8af3735e HTTP/1.1"
Oct 13 14:13:19 standalone.localdomain haproxy[70940]: 192.168.122.99:50936 [13/Oct/2025:14:13:19.113] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/580/580 204 165 - - ---- 59/5/4/4/0 0/0 "DELETE /v3/projects/0acf3492e7ab46898cc6824b613279dd HTTP/1.1"
Oct 13 14:13:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1151: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:19 standalone.localdomain haproxy[70940]: 192.168.122.99:50952 [13/Oct/2025:14:13:19.146] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/606/606 201 531 - - ---- 58/4/3/3/0 0/0 "POST /v3/groups HTTP/1.1"
Oct 13 14:13:19 standalone.localdomain haproxy[70940]: 192.168.122.99:50968 [13/Oct/2025:14:13:19.503] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/348/348 201 1042 - - ---- 58/4/3/3/0 0/0 "POST /v3/OS-TRUST/trusts HTTP/1.1"
Oct 13 14:13:19 standalone.localdomain haproxy[70940]: 192.168.122.99:50984 [13/Oct/2025:14:13:19.547] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/358/358 201 435 - - ---- 58/4/3/3/0 0/0 "POST /v3/policies HTTP/1.1"
Oct 13 14:13:19 standalone.localdomain haproxy[70940]: 192.168.122.99:40718 [13/Oct/2025:14:13:19.604] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/388/388 204 165 - - ---- 58/4/3/3/0 0/0 "DELETE /v3/users/a5b53084cc03456fa695326524b16650 HTTP/1.1"
Oct 13 14:13:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:13:20 standalone.localdomain haproxy[70940]: 192.168.122.99:40720 [13/Oct/2025:14:13:19.755] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/307/307 200 480 - - ---- 58/4/3/3/0 0/0 "PATCH /v3/domains/9a67d73e67c24e7cb1e259f3597936ea HTTP/1.1"
Oct 13 14:13:20 standalone.localdomain haproxy[70940]: 192.168.122.99:40738 [13/Oct/2025:14:13:19.854] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/260/260 200 1037 - - ---- 58/4/3/3/0 0/0 "GET /v3/OS-TRUST/trusts/a8124b5c179f43488dadf8587213edc5 HTTP/1.1"
Oct 13 14:13:20 standalone.localdomain haproxy[70940]: 192.168.122.99:40740 [13/Oct/2025:14:13:19.908] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/251/251 201 437 - - ---- 58/4/3/3/0 0/0 "POST /v3/policies HTTP/1.1"
Oct 13 14:13:20 standalone.localdomain haproxy[70940]: 192.168.122.99:40742 [13/Oct/2025:14:13:19.995] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/247/247 204 165 - - ---- 58/4/3/3/0 0/0 "DELETE /v3/users/39258bf6ee73411fab99bdb73989b40e HTTP/1.1"
Oct 13 14:13:20 standalone.localdomain haproxy[70940]: 192.168.122.99:40756 [13/Oct/2025:14:13:20.063] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/376/376 204 165 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/domains/9a67d73e67c24e7cb1e259f3597936ea HTTP/1.1"
Oct 13 14:13:20 standalone.localdomain haproxy[70940]: 192.168.122.99:40764 [13/Oct/2025:14:13:20.115] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/387/387 200 573 - - ---- 58/3/2/2/0 0/0 "GET /v3/OS-TRUST/trusts/a8124b5c179f43488dadf8587213edc5/roles HTTP/1.1"
Oct 13 14:13:20 standalone.localdomain haproxy[70940]: 192.168.122.99:40774 [13/Oct/2025:14:13:20.160] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/386/386 200 965 - - ---- 58/3/2/2/0 0/0 "GET /v3/policies HTTP/1.1"
Oct 13 14:13:20 standalone.localdomain ceph-mon[29756]: pgmap v1151: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:20 standalone.localdomain haproxy[70940]: 172.21.0.2:51860 [13/Oct/2025:14:13:20.244] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/606/606 201 8100 - - ---- 58/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:20 standalone.localdomain haproxy[70940]: 192.168.122.99:40788 [13/Oct/2025:14:13:20.444] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/469/469 404 321 - - ---- 58/3/2/2/0 0/0 "GET /v3/domains/9a67d73e67c24e7cb1e259f3597936ea HTTP/1.1"
Oct 13 14:13:20 standalone.localdomain haproxy[70940]: 192.168.122.99:40802 [13/Oct/2025:14:13:20.506] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/425/425 200 446 - - ---- 58/3/2/2/0 0/0 "GET /v3/OS-TRUST/trusts/a8124b5c179f43488dadf8587213edc5/roles/f9623d2c3b2f4e46bf55f4aaa37c1ebd HTTP/1.1"
Oct 13 14:13:20 standalone.localdomain haproxy[70940]: 192.168.122.99:40810 [13/Oct/2025:14:13:20.549] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/397/397 204 165 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/policies/613822f4e6fc4fb99efa19e1384f82a1 HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 172.17.0.2:58178 [13/Oct/2025:14:13:20.857] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/159/159 200 8095 - - ---- 58/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40826 [13/Oct/2025:14:13:20.914] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/134/134 404 319 - - ---- 58/3/2/2/0 0/0 "GET /v3/users/0c89aca230fa47938741295bb8f39abb HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40836 [13/Oct/2025:14:13:20.933] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/137/137 200 202 - - ---- 58/3/2/2/0 0/0 "HEAD /v3/OS-TRUST/trusts/a8124b5c179f43488dadf8587213edc5/roles/f9623d2c3b2f4e46bf55f4aaa37c1ebd HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40848 [13/Oct/2025:14:13:20.949] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/140/140 204 165 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/policies/f936a8f8f07540d794cab45906c86bb1 HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40864 [13/Oct/2025:14:13:21.050] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/64/64 404 320 - - ---- 58/3/2/2/0 0/0 "GET /v3/groups/7f7661bc562a4196a651d11385023117 HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40876 [13/Oct/2025:14:13:21.071] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/72/72 404 319 - - ---- 58/3/2/2/0 0/0 "GET /v3/OS-TRUST/trusts/a8124b5c179f43488dadf8587213edc5/roles/8fa374141fe044298638dc94bb9c591f HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 172.21.0.2:52940 [13/Oct/2025:14:13:20.853] neutron neutron/standalone.internalapi.localdomain 0/0/0/300/300 200 2976 - - ---- 58/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=86d48840c9974dd0a0de4d8dc2baa0ab&name=default HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40882 [13/Oct/2025:14:13:21.090] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/73/73 204 165 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/policies/a31f995e26b44365beffb3b5581d9895 HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40886 [13/Oct/2025:14:13:21.116] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/66/66 404 320 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/groups/7f7661bc562a4196a651d11385023117 HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40900 [13/Oct/2025:14:13:21.146] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/60/60 404 209 - - ---- 58/3/2/2/0 0/0 "HEAD /v3/OS-TRUST/trusts/a8124b5c179f43488dadf8587213edc5/roles/8fa374141fe044298638dc94bb9c591f HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40912 [13/Oct/2025:14:13:21.166] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/105/105 204 165 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/users/ee83bd271a9a4691912928400a45698e HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40924 [13/Oct/2025:14:13:21.185] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/144/144 404 319 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/users/0c89aca230fa47938741295bb8f39abb HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 172.21.0.2:52942 [13/Oct/2025:14:13:21.154] neutron neutron/standalone.internalapi.localdomain 0/0/0/177/177 204 152 - - ---- 58/1/0/0/0 0/0 "DELETE /v2.0/security-groups/1495a870-5e79-4355-81a7-ddefdbc046e5 HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40940 [13/Oct/2025:14:13:21.208] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/175/175 204 165 - - ---- 58/4/3/3/0 0/0 "DELETE /v3/OS-TRUST/trusts/a8124b5c179f43488dadf8587213edc5 HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40946 [13/Oct/2025:14:13:21.275] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/182/182 204 165 - - ---- 58/4/3/3/0 0/0 "DELETE /v3/users/2e1247f88b0b4980b39cc8a7908f725d HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40962 [13/Oct/2025:14:13:21.331] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/172/172 404 321 - - ---- 58/3/2/2/0 0/0 "PATCH /v3/domains/9a67d73e67c24e7cb1e259f3597936ea HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40972 [13/Oct/2025:14:13:21.333] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/243/243 204 165 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/projects/86d48840c9974dd0a0de4d8dc2baa0ab HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40982 [13/Oct/2025:14:13:21.386] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/231/231 404 320 - - ---- 58/2/1/1/0 0/0 "GET /v3/OS-TRUST/trusts/a8124b5c179f43488dadf8587213edc5 HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1152: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 172.21.0.2:52956 [13/Oct/2025:14:13:21.579] neutron neutron/standalone.internalapi.localdomain 0/0/0/160/160 200 2976 - - ---- 58/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=23ea26702be64e6d8033dfa665660ddc&name=default HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 172.21.0.2:51864 [13/Oct/2025:14:13:21.461] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/439/439 201 8100 - - ---- 58/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 192.168.122.99:40994 [13/Oct/2025:14:13:21.506] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/458/458 200 1518 - - ---- 58/2/1/1/0 0/0 "GET /v3/domains HTTP/1.1"
Oct 13 14:13:21 standalone.localdomain haproxy[70940]: 172.21.0.2:52972 [13/Oct/2025:14:13:21.741] neutron neutron/standalone.internalapi.localdomain 0/0/0/250/250 204 152 - - ---- 58/2/1/1/0 0/0 "DELETE /v2.0/security-groups/0e641c8e-f7e0-42f1-905a-d85ec18718ca HTTP/1.1"
Oct 13 14:13:22 standalone.localdomain haproxy[70940]: 192.168.122.99:41002 [13/Oct/2025:14:13:21.620] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/418/418 204 165 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/roles/8fa374141fe044298638dc94bb9c591f HTTP/1.1"
Oct 13 14:13:22 standalone.localdomain haproxy[70940]: 172.17.0.2:58178 [13/Oct/2025:14:13:21.911] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/203/203 200 8095 - - ---- 58/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:22 standalone.localdomain haproxy[70940]: 192.168.122.99:41012 [13/Oct/2025:14:13:21.967] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/227/227 200 1265 - - ---- 58/3/2/2/0 0/0 "GET /v3/domains?enabled=True HTTP/1.1"
Oct 13 14:13:22 standalone.localdomain haproxy[70940]: 172.21.0.2:52978 [13/Oct/2025:14:13:21.906] neutron neutron/standalone.internalapi.localdomain 0/0/0/334/334 200 2976 - - ---- 58/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=3453039293644746ac78be4362a9ea9a&name=default HTTP/1.1"
Oct 13 14:13:22 standalone.localdomain haproxy[70940]: 192.168.122.99:41026 [13/Oct/2025:14:13:21.995] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/300/300 204 165 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/projects/23ea26702be64e6d8033dfa665660ddc HTTP/1.1"
Oct 13 14:13:22 standalone.localdomain haproxy[70940]: 192.168.122.99:41032 [13/Oct/2025:14:13:22.039] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/321/321 204 165 - - ---- 58/2/1/1/0 0/0 "DELETE /v3/roles/f9623d2c3b2f4e46bf55f4aaa37c1ebd HTTP/1.1"
Oct 13 14:13:22 standalone.localdomain haproxy[70940]: 192.168.122.99:41042 [13/Oct/2025:14:13:22.199] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/207/207 200 610 - - ---- 58/2/1/1/0 0/0 "GET /v3/domains?name=tempest-test_domain-1775515638 HTTP/1.1"
Oct 13 14:13:22 standalone.localdomain haproxy[70940]: 172.21.0.2:52982 [13/Oct/2025:14:13:22.242] neutron neutron/standalone.internalapi.localdomain 0/0/0/231/231 204 152 - - ---- 58/1/0/0/0 0/0 "DELETE /v2.0/security-groups/150911b5-6648-412d-b77f-897a6881e511 HTTP/1.1"
Oct 13 14:13:22 standalone.localdomain haproxy[70940]: 172.21.0.2:51872 [13/Oct/2025:14:13:22.298] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/403/403 201 8100 - - ---- 58/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:13:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:13:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:13:22 standalone.localdomain ceph-mon[29756]: pgmap v1152: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:22 standalone.localdomain haproxy[70940]: 192.168.122.99:41048 [13/Oct/2025:14:13:22.363] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/467/467 204 165 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/users/797aff902b984fdf868da89ef4daa365 HTTP/1.1"
Oct 13 14:13:22 standalone.localdomain podman[166099]: 2025-10-13 14:13:22.848641324 +0000 UTC m=+0.098971968 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, com.redhat.component=openstack-cinder-api-container, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, config_id=tripleo_step4, build-date=2025-07-21T15:58:55, container_name=cinder_api_cron, version=17.1.9, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-api, architecture=x86_64, release=1)
Oct 13 14:13:22 standalone.localdomain systemd[1]: tmp-crun.Hcfpq0.mount: Deactivated successfully.
Oct 13 14:13:22 standalone.localdomain haproxy[70940]: 192.168.122.99:41062 [13/Oct/2025:14:13:22.409] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/488/488 404 321 - - ---- 58/3/2/2/0 0/0 "PATCH /v3/domains/9a67d73e67c24e7cb1e259f3597936ea HTTP/1.1"
Oct 13 14:13:22 standalone.localdomain podman[166100]: 2025-10-13 14:13:22.903819891 +0000 UTC m=+0.153595858 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, container_name=cinder_scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, batch=17.1_20250721.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.buildah.version=1.33.12, com.redhat.component=openstack-cinder-scheduler-container, managed_by=tripleo_ansible, name=rhosp17/openstack-cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, release=1, build-date=2025-07-21T16:10:12)
Oct 13 14:13:22 standalone.localdomain podman[166099]: 2025-10-13 14:13:22.911896058 +0000 UTC m=+0.162226652 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, com.redhat.component=openstack-cinder-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, container_name=cinder_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vcs-type=git, name=rhosp17/openstack-cinder-api, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:58:55, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, release=1)
Oct 13 14:13:22 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:13:22 standalone.localdomain podman[166100]: 2025-10-13 14:13:22.934793149 +0000 UTC m=+0.184569126 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, distribution-scope=public, com.redhat.component=openstack-cinder-scheduler-container, config_id=tripleo_step4, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, architecture=x86_64, container_name=cinder_scheduler, version=17.1.9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:10:12, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f)
Oct 13 14:13:22 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:13:22 standalone.localdomain haproxy[70940]: 192.168.122.99:41072 [13/Oct/2025:14:13:22.476] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/513/513 204 165 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/projects/3453039293644746ac78be4362a9ea9a HTTP/1.1"
Oct 13 14:13:23 standalone.localdomain podman[166098]: 2025-10-13 14:13:22.997535898 +0000 UTC m=+0.247614235 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, container_name=iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2)
Oct 13 14:13:23 standalone.localdomain podman[166098]: 2025-10-13 14:13:23.011946169 +0000 UTC m=+0.262024506 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1)
Oct 13 14:13:23 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:13:23
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'images', 'volumes', 'backups', '.mgr', 'vms', 'manila_data']
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:13:23 standalone.localdomain haproxy[70940]: 172.21.0.2:52984 [13/Oct/2025:14:13:22.992] neutron neutron/standalone.internalapi.localdomain 0/0/0/147/147 200 2976 - - ---- 58/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=ee6436030a1c49299e0ed6eb085e006a&name=default HTTP/1.1"
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:13:23 standalone.localdomain haproxy[70940]: 172.21.0.2:51878 [13/Oct/2025:14:13:22.712] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/607/607 201 8100 - - ---- 58/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:23 standalone.localdomain haproxy[70940]: 172.21.0.2:52988 [13/Oct/2025:14:13:23.141] neutron neutron/standalone.internalapi.localdomain 0/0/0/203/203 204 152 - - ---- 58/1/0/0/0 0/0 "DELETE /v2.0/security-groups/4b18ce42-e544-41c3-a3aa-7f320be4797b HTTP/1.1"
Oct 13 14:13:23 standalone.localdomain haproxy[70940]: 192.168.122.99:41078 [13/Oct/2025:14:13:22.833] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/568/568 204 165 - - ---- 58/4/3/3/0 0/0 "DELETE /v3/projects/91568481fa6247f5bbf32305ec123acc HTTP/1.1"
Oct 13 14:13:23 standalone.localdomain haproxy[70940]: 192.168.122.99:41084 [13/Oct/2025:14:13:22.900] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/560/560 200 479 - - ---- 58/4/3/3/0 0/0 "PATCH /v3/domains/c646e88f424c47dbbaa64d8fa8feff02 HTTP/1.1"
Oct 13 14:13:23 standalone.localdomain haproxy[70940]: 192.168.122.99:41100 [13/Oct/2025:14:13:23.322] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/183/183 200 510 - - ---- 58/4/3/3/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:13:23 standalone.localdomain haproxy[70940]: 192.168.122.99:41112 [13/Oct/2025:14:13:23.346] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/209/209 204 165 - - ---- 58/4/3/3/0 0/0 "DELETE /v3/projects/ee6436030a1c49299e0ed6eb085e006a HTTP/1.1"
Oct 13 14:13:23 standalone.localdomain haproxy[70940]: 192.168.122.99:41122 [13/Oct/2025:14:13:23.405] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/232/232 204 165 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/users/037469faa2bc4eecba568af2e037728d HTTP/1.1"
Oct 13 14:13:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1153: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:23 standalone.localdomain haproxy[70940]: 192.168.122.99:41124 [13/Oct/2025:14:13:23.464] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/273/273 204 165 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/domains/c646e88f424c47dbbaa64d8fa8feff02 HTTP/1.1"
Oct 13 14:13:23 standalone.localdomain haproxy[70940]: 192.168.122.99:41136 [13/Oct/2025:14:13:23.508] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/275/275 201 578 - - ---- 58/3/2/2/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:23 standalone.localdomain systemd[1]: tmp-crun.B3xDb1.mount: Deactivated successfully.
Oct 13 14:13:24 standalone.localdomain haproxy[70940]: 172.21.0.2:51892 [13/Oct/2025:14:13:23.557] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/587/587 201 8100 - - ---- 58/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:24 standalone.localdomain haproxy[70940]: 192.168.122.99:41152 [13/Oct/2025:14:13:23.639] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/564/564 204 165 - - ---- 58/3/2/2/0 0/0 "DELETE /v3/users/fcea1cdffee440ed9d551e550ea18ad2 HTTP/1.1"
Oct 13 14:13:24 standalone.localdomain haproxy[70940]: 192.168.122.99:41156 [13/Oct/2025:14:13:23.740] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/542/542 200 480 - - ---- 58/2/1/1/0 0/0 "PATCH /v3/domains/4b021fb034a047b791dcfffc6384bc46 HTTP/1.1"
Oct 13 14:13:24 standalone.localdomain haproxy[70940]: 192.168.122.99:41160 [13/Oct/2025:14:13:23.787] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/789/789 201 500 - - ---- 58/2/1/1/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:24 standalone.localdomain ceph-mon[29756]: pgmap v1153: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:24 standalone.localdomain haproxy[70940]: 172.21.0.2:51896 [13/Oct/2025:14:13:24.151] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/696/696 201 8100 - - ---- 58/4/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:13:25 standalone.localdomain haproxy[70940]: 172.21.0.2:51912 [13/Oct/2025:14:13:24.205] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/909/909 201 8100 - - ---- 57/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:25 standalone.localdomain haproxy[70940]: 192.168.122.99:41164 [13/Oct/2025:14:13:24.284] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/965/965 204 165 - - ---- 57/3/2/2/0 0/0 "DELETE /v3/domains/4b021fb034a047b791dcfffc6384bc46 HTTP/1.1"
Oct 13 14:13:25 standalone.localdomain haproxy[70940]: 192.168.122.99:41168 [13/Oct/2025:14:13:24.577] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/721/721 200 2700 - - ---- 57/3/2/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:25 standalone.localdomain haproxy[70940]: 192.168.122.99:41180 [13/Oct/2025:14:13:24.850] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/474/474 200 510 - - ---- 57/3/2/2/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:13:25 standalone.localdomain haproxy[70940]: 172.17.0.2:58178 [13/Oct/2025:14:13:25.122] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/270/270 200 8095 - - ---- 57/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:25 standalone.localdomain haproxy[70940]: 192.168.122.99:41196 [13/Oct/2025:14:13:25.252] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/218/218 200 479 - - ---- 57/3/2/2/0 0/0 "PATCH /v3/domains/683d92cd722a41d98f6fb1f1bb199988 HTTP/1.1"
Oct 13 14:13:25 standalone.localdomain haproxy[70940]: 192.168.122.99:41198 [13/Oct/2025:14:13:25.301] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/243/243 204 165 - - ---- 58/4/3/3/0 0/0 "PUT /v3/projects/b635c4f5aeb949ecb0e5b21a3fb6ef6c/users/18921f2767b84a7087ae952ec5e055a3/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:25 standalone.localdomain haproxy[70940]: 172.21.0.2:52992 [13/Oct/2025:14:13:25.116] neutron neutron/standalone.internalapi.localdomain 0/0/0/444/444 200 2976 - - ---- 57/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=1ac812309ced40e6ae6c399baafa16c0&name=default HTTP/1.1"
Oct 13 14:13:25 standalone.localdomain haproxy[70940]: 192.168.122.99:41204 [13/Oct/2025:14:13:25.326] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/300/300 201 574 - - ---- 57/3/2/2/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1154: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:13:25 standalone.localdomain systemd[1]: tmp-crun.s6bvOj.mount: Deactivated successfully.
Oct 13 14:13:25 standalone.localdomain haproxy[70940]: 192.168.122.99:41210 [13/Oct/2025:14:13:25.473] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/340/340 204 165 - - ---- 57/3/2/2/0 0/0 "DELETE /v3/domains/683d92cd722a41d98f6fb1f1bb199988 HTTP/1.1"
Oct 13 14:13:25 standalone.localdomain podman[166188]: 2025-10-13 14:13:25.820746973 +0000 UTC m=+0.090051786 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.component=openstack-keystone-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, release=1, container_name=keystone, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-keystone, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git)
Oct 13 14:13:25 standalone.localdomain haproxy[70940]: 172.21.0.2:52996 [13/Oct/2025:14:13:25.563] neutron neutron/standalone.internalapi.localdomain 0/0/0/284/284 204 152 - - ---- 57/1/0/0/0 0/0 "DELETE /v2.0/security-groups/fa93a96a-d136-47a4-9361-d2e435289f97 HTTP/1.1"
Oct 13 14:13:25 standalone.localdomain haproxy[70940]: 192.168.122.99:41212 [13/Oct/2025:14:13:25.546] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/323/323 200 2700 - - ---- 57/4/3/3/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:26 standalone.localdomain ceph-mon[29756]: pgmap v1154: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:26 standalone.localdomain haproxy[70940]: 192.168.122.99:41222 [13/Oct/2025:14:13:25.630] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/512/512 201 498 - - ---- 57/4/3/3/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:26 standalone.localdomain haproxy[70940]: 192.168.122.99:41236 [13/Oct/2025:14:13:25.816] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/384/384 204 165 - - ---- 57/4/3/3/0 0/0 "DELETE /v3/users/cc2d6285528f4b38bad58cbf1ac674f9 HTTP/1.1"
Oct 13 14:13:26 standalone.localdomain podman[166188]: 2025-10-13 14:13:26.2329458 +0000 UTC m=+0.502250633 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, name=rhosp17/openstack-keystone, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, io.buildah.version=1.33.12, com.redhat.component=openstack-keystone-container, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, container_name=keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:13:26 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:13:26 standalone.localdomain haproxy[70940]: 192.168.122.99:41248 [13/Oct/2025:14:13:25.851] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/482/482 204 165 - - ---- 57/4/3/3/0 0/0 "DELETE /v3/projects/1ac812309ced40e6ae6c399baafa16c0 HTTP/1.1"
Oct 13 14:13:26 standalone.localdomain haproxy[70940]: 192.168.122.99:41262 [13/Oct/2025:14:13:25.872] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/531/531 204 165 - - ---- 57/3/2/2/0 0/0 "PUT /v3/projects/b635c4f5aeb949ecb0e5b21a3fb6ef6c/users/18921f2767b84a7087ae952ec5e055a3/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:26 standalone.localdomain haproxy[70940]: 192.168.122.99:41264 [13/Oct/2025:14:13:26.145] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/346/346 200 2700 - - ---- 57/2/1/1/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:26 standalone.localdomain haproxy[70940]: 172.21.0.2:53002 [13/Oct/2025:14:13:26.335] neutron neutron/standalone.internalapi.localdomain 0/0/0/210/210 200 2976 - - ---- 57/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=1057aedfc5ed4ffeb2b2beab0ef4a1d1&name=default HTTP/1.1"
Oct 13 14:13:26 standalone.localdomain haproxy[70940]: 192.168.122.99:41270 [13/Oct/2025:14:13:26.205] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/357/357 204 165 - - ---- 57/2/1/1/0 0/0 "DELETE /v3/users/23873ac56c1040e19f266ae10f5be6d5 HTTP/1.1"
Oct 13 14:13:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:13:26 standalone.localdomain haproxy[70940]: 172.21.0.2:53006 [13/Oct/2025:14:13:26.548] neutron neutron/standalone.internalapi.localdomain 0/0/0/189/189 204 152 - - ---- 57/1/0/0/0 0/0 "DELETE /v2.0/security-groups/e6860f7a-b609-4e0e-80c5-eb4533bfe4a3 HTTP/1.1"
Oct 13 14:13:26 standalone.localdomain systemd[1]: tmp-crun.EtqKyQ.mount: Deactivated successfully.
Oct 13 14:13:26 standalone.localdomain podman[166221]: 2025-10-13 14:13:26.838853441 +0000 UTC m=+0.108165170 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, release=1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-heat-api-container, summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, version=17.1.9, container_name=heat_api_cron, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T15:56:26)
Oct 13 14:13:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:13:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:13:26 standalone.localdomain podman[166221]: 2025-10-13 14:13:26.875095819 +0000 UTC m=+0.144407608 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1)
Oct 13 14:13:26 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:13:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:13:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:13:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:13:26 standalone.localdomain haproxy[70940]: 172.21.0.2:51914 [13/Oct/2025:14:13:26.411] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/526/526 201 8116 - - ---- 57/4/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:26 standalone.localdomain systemd[1]: tmp-crun.nquJWd.mount: Deactivated successfully.
Oct 13 14:13:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:13:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:13:26 standalone.localdomain podman[166262]: 2025-10-13 14:13:26.999171454 +0000 UTC m=+0.073958253 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.component=openstack-memcached-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, name=rhosp17/openstack-memcached, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:43, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached)
Oct 13 14:13:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:13:27 standalone.localdomain podman[166240]: 2025-10-13 14:13:26.975627294 +0000 UTC m=+0.107869921 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, release=1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true)
Oct 13 14:13:27 standalone.localdomain haproxy[70940]: 192.168.122.99:41276 [13/Oct/2025:14:13:26.494] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/533/533 204 165 - - ---- 57/3/2/2/0 0/0 "PUT /v3/projects/e69ac02d79d34d9c957819d3dcce8263/users/037f61f895d14dfc8bcda35abd24d0fc/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:13:27 standalone.localdomain podman[166257]: 2025-10-13 14:13:27.039422775 +0000 UTC m=+0.119757304 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, version=17.1.9, vcs-type=git, build-date=2025-07-21T16:02:54, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, name=rhosp17/openstack-nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1, io.openshift.tags=rhosp osp openstack osp-17.1, 
container_name=nova_scheduler, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, io.openshift.expose-services=, io.buildah.version=1.33.12)
Oct 13 14:13:27 standalone.localdomain podman[166241]: 2025-10-13 14:13:27.007361124 +0000 UTC m=+0.133475443 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, release=1, summary=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, build-date=2025-07-21T15:44:11, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=heat_engine, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-engine, com.redhat.component=openstack-heat-engine-container, version=17.1.9, io.openshift.expose-services=)
Oct 13 14:13:27 standalone.localdomain podman[166257]: 2025-10-13 14:13:27.061757578 +0000 UTC m=+0.142092147 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, name=rhosp17/openstack-nova-scheduler, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, version=17.1.9, build-date=2025-07-21T16:02:54, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-type=git, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, container_name=nova_scheduler, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, managed_by=tripleo_ansible, vendor=Red 
Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.expose-services=, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true)
Oct 13 14:13:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:13:27 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:13:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:13:27 standalone.localdomain podman[166241]: 2025-10-13 14:13:27.089803866 +0000 UTC m=+0.215918185 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:44:11, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=heat_engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-heat-engine, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-engine-container, description=Red Hat OpenStack Platform 17.1 heat-engine, 
maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, release=1, version=17.1.9, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3)
Oct 13 14:13:27 standalone.localdomain podman[166258]: 2025-10-13 14:13:27.103536905 +0000 UTC m=+0.181892434 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, release=1, build-date=2025-07-21T13:58:15, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, summary=Red Hat OpenStack Platform 17.1 horizon, version=17.1.9, description=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, com.redhat.component=openstack-horizon-container, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, distribution-scope=public, container_name=horizon, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=, config_id=tripleo_step3)
Oct 13 14:13:27 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:13:27 standalone.localdomain podman[166240]: 2025-10-13 14:13:27.112207701 +0000 UTC m=+0.244450338 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron)
Oct 13 14:13:27 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:13:27 standalone.localdomain podman[166258]: 2025-10-13 14:13:27.123702853 +0000 UTC m=+0.202058362 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, architecture=x86_64, container_name=horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:15, batch=17.1_20250721.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 horizon, release=1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, com.redhat.component=openstack-horizon-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team)
Oct 13 14:13:27 standalone.localdomain podman[166311]: 2025-10-13 14:13:27.16352093 +0000 UTC m=+0.171331301 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, architecture=x86_64, container_name=heat_api_cfn, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T14:49:55, io.k8s.description=Red Hat OpenStack Platform 17.1 
heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-cfn-container, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:13:27 standalone.localdomain podman[166379]: 2025-10-13 14:13:27.182648276 +0000 UTC m=+0.105786437 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, container_name=nova_conductor, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-nova-conductor, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, tcib_managed=true, com.redhat.component=openstack-nova-conductor-container, description=Red Hat OpenStack Platform 17.1 nova-conductor, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T15:44:17, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-conductor, architecture=x86_64, distribution-scope=public)
Oct 13 14:13:27 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:13:27 standalone.localdomain podman[166262]: 2025-10-13 14:13:27.193126985 +0000 UTC m=+0.267913754 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:43, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-memcached, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, com.redhat.component=openstack-memcached-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 memcached, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, architecture=x86_64, vcs-type=git, container_name=memcached, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9)
Oct 13 14:13:27 standalone.localdomain podman[166301]: 2025-10-13 14:13:27.148406938 +0000 UTC m=+0.169377001 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vcs-type=git, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, container_name=neutron_api, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T15:44:03, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-neutron-server, com.redhat.component=openstack-neutron-server-container, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d)
Oct 13 14:13:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:13:27 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:13:27 standalone.localdomain podman[166379]: 2025-10-13 14:13:27.20373545 +0000 UTC m=+0.126873621 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vendor=Red Hat, Inc., container_name=nova_conductor, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-nova-conductor-container, vcs-type=git, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, summary=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.buildah.version=1.33.12, build-date=2025-07-21T15:44:17, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, release=1, maintainer=OpenStack TripleO Team)
Oct 13 14:13:27 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:13:27 standalone.localdomain podman[166385]: 2025-10-13 14:13:27.243752314 +0000 UTC m=+0.159453167 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, container_name=cinder_api, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T15:58:55, version=17.1.9, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-cinder-api-container, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Oct 13 14:13:27 standalone.localdomain podman[166454]: 2025-10-13 14:13:27.261622041 +0000 UTC m=+0.053657642 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, com.redhat.component=openstack-manila-api-container, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=manila_api_cron, summary=Red Hat OpenStack Platform 17.1 manila-api, architecture=x86_64, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43)
Oct 13 14:13:27 standalone.localdomain podman[166357]: 2025-10-13 14:13:27.290788302 +0000 UTC m=+0.251920275 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, version=17.1.9, name=rhosp17/openstack-heat-api, io.buildah.version=1.33.12, config_id=tripleo_step4, release=1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-api-container, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:13:27 standalone.localdomain podman[166311]: 2025-10-13 14:13:27.297627102 +0000 UTC m=+0.305437462 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, io.openshift.expose-services=, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:49:55, com.redhat.component=openstack-heat-api-cfn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, managed_by=tripleo_ansible, container_name=heat_api_cfn, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, architecture=x86_64, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:13:27 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:13:27 standalone.localdomain haproxy[70940]: 172.21.0.2:51928 [13/Oct/2025:14:13:26.566] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/767/767 201 8100 - - ---- 57/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:27 standalone.localdomain podman[166357]: 2025-10-13 14:13:27.335908802 +0000 UTC m=+0.297040785 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.openshift.expose-services=, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=1, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=heat_api, summary=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530)
Oct 13 14:13:27 standalone.localdomain podman[166454]: 2025-10-13 14:13:27.348007603 +0000 UTC m=+0.140043234 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=manila_api_cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, release=1, description=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., build-date=2025-07-21T16:06:43, config_id=tripleo_step4, version=17.1.9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-manila-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, distribution-scope=public, com.redhat.component=openstack-manila-api-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, architecture=x86_64)
Oct 13 14:13:27 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:13:27 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:13:27 standalone.localdomain podman[166301]: 2025-10-13 14:13:27.372245494 +0000 UTC m=+0.393215567 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, version=17.1.9, container_name=neutron_api, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:44:03, name=rhosp17/openstack-neutron-server, com.redhat.component=openstack-neutron-server-container, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:13:27 standalone.localdomain podman[166385]: 2025-10-13 14:13:27.378889077 +0000 UTC m=+0.294589970 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, com.redhat.component=openstack-cinder-api-container, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, container_name=cinder_api, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, build-date=2025-07-21T15:58:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:13:27 standalone.localdomain haproxy[70940]: 192.168.122.99:41288 [13/Oct/2025:14:13:26.740] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/649/649 204 165 - - ---- 57/3/2/2/0 0/0 "DELETE /v3/projects/1057aedfc5ed4ffeb2b2beab0ef4a1d1 HTTP/1.1"
Oct 13 14:13:27 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:13:27 standalone.localdomain podman[166348]: 2025-10-13 14:13:27.343102913 +0000 UTC m=+0.309540859 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-manila-scheduler-container, container_name=manila_scheduler, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, config_id=tripleo_step4, name=rhosp17/openstack-manila-scheduler, build-date=2025-07-21T15:56:28, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:13:27 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:13:27 standalone.localdomain podman[166348]: 2025-10-13 14:13:27.427011969 +0000 UTC m=+0.393450005 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, version=17.1.9, vcs-type=git, distribution-scope=public, build-date=2025-07-21T15:56:28, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, name=rhosp17/openstack-manila-scheduler, com.redhat.component=openstack-manila-scheduler-container, io.openshift.expose-services=, container_name=manila_scheduler, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible)
Oct 13 14:13:27 standalone.localdomain haproxy[70940]: 192.168.122.99:41302 [13/Oct/2025:14:13:26.940] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/494/494 201 574 - - ---- 58/3/1/1/0 0/0 "POST /v3/users/18921f2767b84a7087ae952ec5e055a3/credentials/OS-EC2 HTTP/1.1"
Oct 13 14:13:27 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:13:27 standalone.localdomain haproxy[70940]: 192.168.122.99:41314 [13/Oct/2025:14:13:27.032] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/446/446 200 2700 - - ---- 57/2/1/1/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:27 standalone.localdomain haproxy[70940]: 172.17.0.2:58178 [13/Oct/2025:14:13:27.340] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/189/189 200 8095 - - ---- 57/3/1/1/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:27 standalone.localdomain haproxy[70940]: 172.21.0.2:53022 [13/Oct/2025:14:13:27.335] neutron neutron/standalone.internalapi.localdomain 0/0/0/309/309 200 2976 - - ---- 57/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=0db46bf3fd934836abb02ee9f8de2578&name=default HTTP/1.1"
Oct 13 14:13:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1155: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:27 standalone.localdomain haproxy[70940]: 172.21.0.2:53034 [13/Oct/2025:14:13:27.648] neutron neutron/standalone.internalapi.localdomain 0/0/0/181/181 204 152 - - ---- 57/1/0/0/0 0/0 "DELETE /v2.0/security-groups/cd9423e0-8b9f-41ae-b88c-e1f5febb480e HTTP/1.1"
Oct 13 14:13:27 standalone.localdomain haproxy[70940]: 172.21.0.2:51938 [13/Oct/2025:14:13:27.393] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/448/448 201 8100 - - ---- 57/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:27 standalone.localdomain haproxy[70940]: 192.168.122.99:41328 [13/Oct/2025:14:13:27.437] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/423/423 204 165 - - ---- 57/3/2/2/0 0/0 "DELETE /v3/users/18921f2767b84a7087ae952ec5e055a3/credentials/OS-EC2/486f91477bb3486390ca5c4ed1cd1ac2 HTTP/1.1"
Oct 13 14:13:27 standalone.localdomain haproxy[70940]: 192.168.122.99:41338 [13/Oct/2025:14:13:27.481] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/419/419 204 165 - - ---- 57/3/2/2/0 0/0 "PUT /v3/projects/e69ac02d79d34d9c957819d3dcce8263/users/037f61f895d14dfc8bcda35abd24d0fc/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:28 standalone.localdomain haproxy[70940]: 192.168.122.99:41342 [13/Oct/2025:14:13:27.833] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/193/193 204 165 - - ---- 57/2/1/1/0 0/0 "DELETE /v3/projects/0db46bf3fd934836abb02ee9f8de2578 HTTP/1.1"
Oct 13 14:13:28 standalone.localdomain haproxy[70940]: 172.21.0.2:53048 [13/Oct/2025:14:13:28.028] neutron neutron/standalone.internalapi.localdomain 0/0/0/141/141 200 2976 - - ---- 57/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=e9b4092f9c68402dbd339a5c7b99b996&name=default HTTP/1.1"
Oct 13 14:13:28 standalone.localdomain haproxy[70940]: 172.21.0.2:51944 [13/Oct/2025:14:13:27.850] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/474/474 201 8100 - - ---- 56/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:28 standalone.localdomain haproxy[70940]: 172.21.0.2:53060 [13/Oct/2025:14:13:28.172] neutron neutron/standalone.internalapi.localdomain 0/0/0/187/187 204 152 - - ---- 56/1/0/0/0 0/0 "DELETE /v2.0/security-groups/f6c2a636-a169-4b21-8034-00cdc3a1f29a HTTP/1.1"
Oct 13 14:13:28 standalone.localdomain haproxy[70940]: 192.168.122.99:41358 [13/Oct/2025:14:13:27.868] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/499/499 201 574 - - ---- 57/4/3/3/0 0/0 "POST /v3/users/18921f2767b84a7087ae952ec5e055a3/credentials/OS-EC2 HTTP/1.1"
Oct 13 14:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0043440946049692905 of space, bias 1.0, pg target 0.4344094604969291 quantized to 32 (current 32)
Oct 13 14:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:13:28 standalone.localdomain haproxy[70940]: 172.21.0.2:51954 [13/Oct/2025:14:13:27.907] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/769/769 201 8112 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:28 standalone.localdomain haproxy[70940]: 192.168.122.99:41376 [13/Oct/2025:14:13:28.325] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/367/367 200 510 - - ---- 56/4/3/3/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:13:28 standalone.localdomain haproxy[70940]: 192.168.122.99:41378 [13/Oct/2025:14:13:28.362] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/383/383 204 165 - - ---- 56/4/3/3/0 0/0 "DELETE /v3/projects/e9b4092f9c68402dbd339a5c7b99b996 HTTP/1.1"
Oct 13 14:13:28 standalone.localdomain ceph-mon[29756]: pgmap v1155: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:28 standalone.localdomain haproxy[70940]: 192.168.122.99:41384 [13/Oct/2025:14:13:28.369] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/430/430 204 165 - - ---- 56/3/2/2/0 0/0 "DELETE /v3/users/18921f2767b84a7087ae952ec5e055a3/credentials/OS-EC2/db0675df804b4bb7b2ed631fc9497d69 HTTP/1.1"
Oct 13 14:13:28 standalone.localdomain haproxy[70940]: 192.168.122.99:41400 [13/Oct/2025:14:13:28.679] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/180/180 201 576 - - ---- 56/3/2/2/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:28 standalone.localdomain haproxy[70940]: 192.168.122.99:41416 [13/Oct/2025:14:13:28.695] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/233/233 201 588 - - ---- 56/3/2/2/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:29 standalone.localdomain haproxy[70940]: 172.21.0.2:51958 [13/Oct/2025:14:13:28.747] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/538/538 201 8100 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:29 standalone.localdomain haproxy[70940]: 192.168.122.99:41420 [13/Oct/2025:14:13:28.801] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/562/562 404 357 - - ---- 56/3/2/2/0 0/0 "GET /v3/users/18921f2767b84a7087ae952ec5e055a3/credentials/OS-EC2/db0675df804b4bb7b2ed631fc9497d69 HTTP/1.1"
Oct 13 14:13:29 standalone.localdomain haproxy[70940]: 192.168.122.99:41430 [13/Oct/2025:14:13:28.863] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/769/769 201 498 - - ---- 56/3/2/2/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1156: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:29 standalone.localdomain systemd[1]: tmp-crun.ff5Pts.mount: Deactivated successfully.
Oct 13 14:13:29 standalone.localdomain podman[166760]: 2025-10-13 14:13:29.786889643 +0000 UTC m=+0.116228975 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-manila-share, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-manila-share-container, build-date=2025-07-21T15:22:36, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 manila-share, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-share, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share)
Oct 13 14:13:29 standalone.localdomain podman[166760]: 2025-10-13 14:13:29.819935704 +0000 UTC m=+0.149275076 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 manila-share, architecture=x86_64, vcs-type=git, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, com.redhat.component=openstack-manila-share-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, release=1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-share, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-manila-share, batch=17.1_20250721.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:22:36, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1)
Oct 13 14:13:29 standalone.localdomain runuser[166794]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:13:29 standalone.localdomain haproxy[70940]: 192.168.122.99:41432 [13/Oct/2025:14:13:28.931] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/963/963 201 505 - - ---- 56/3/2/2/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:13:30 standalone.localdomain haproxy[70940]: 172.21.0.2:51968 [13/Oct/2025:14:13:29.295] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/930/930 201 8100 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:30 standalone.localdomain haproxy[70940]: 192.168.122.99:41438 [13/Oct/2025:14:13:29.369] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/882/882 201 574 - - ---- 56/4/3/3/0 0/0 "POST /v3/users/18921f2767b84a7087ae952ec5e055a3/credentials/OS-EC2 HTTP/1.1"
Oct 13 14:13:30 standalone.localdomain haproxy[70940]: 192.168.122.99:39990 [13/Oct/2025:14:13:29.634] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/634/634 200 2700 - - ---- 56/4/3/3/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:30 standalone.localdomain haproxy[70940]: 192.168.122.99:40006 [13/Oct/2025:14:13:29.897] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/386/386 200 2700 - - ---- 56/4/3/3/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:30 standalone.localdomain haproxy[70940]: 192.168.122.99:40008 [13/Oct/2025:14:13:30.227] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/80/80 200 510 - - ---- 56/4/3/3/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:13:30 standalone.localdomain haproxy[70940]: 192.168.122.99:40022 [13/Oct/2025:14:13:30.254] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/72/72 201 574 - - ---- 56/4/3/3/0 0/0 "POST /v3/users/18921f2767b84a7087ae952ec5e055a3/credentials/OS-EC2 HTTP/1.1"
Oct 13 14:13:30 standalone.localdomain haproxy[70940]: 192.168.122.99:40034 [13/Oct/2025:14:13:30.270] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/90/90 204 165 - - ---- 56/4/3/3/0 0/0 "PUT /v3/projects/15118c8f5a1d495f8612885771630022/users/05e9ae5467c44eac882b10216b1f471a/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:13:30 standalone.localdomain haproxy[70940]: 192.168.122.99:40050 [13/Oct/2025:14:13:30.286] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/173/173 204 165 - - ---- 56/4/3/3/0 0/0 "PUT /v3/projects/9487b400e57348799c6fb50b62fc3011/users/ede8ad203e344ebca8f01cd9d5095847/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:30 standalone.localdomain haproxy[70940]: 192.168.122.99:40060 [13/Oct/2025:14:13:30.311] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/232/232 201 572 - - ---- 56/4/3/3/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:30 standalone.localdomain haproxy[70940]: 192.168.122.99:40064 [13/Oct/2025:14:13:30.329] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/284/284 200 1067 - - ---- 56/4/3/3/0 0/0 "GET /v3/users/18921f2767b84a7087ae952ec5e055a3/credentials/OS-EC2 HTTP/1.1"
Oct 13 14:13:30 standalone.localdomain runuser[166794]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:13:30 standalone.localdomain haproxy[70940]: 192.168.122.99:40068 [13/Oct/2025:14:13:30.365] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/311/311 200 2700 - - ---- 56/4/3/3/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:30 standalone.localdomain haproxy[70940]: 192.168.122.99:40070 [13/Oct/2025:14:13:30.462] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/234/234 200 2700 - - ---- 56/4/3/3/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:13:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:13:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:13:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:13:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:13:30 standalone.localdomain ceph-mon[29756]: pgmap v1156: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:30 standalone.localdomain runuser[166936]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:13:30 standalone.localdomain podman[166906]: 2025-10-13 14:13:30.822338952 +0000 UTC m=+0.080716730 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, version=17.1.9, distribution-scope=public, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git)
Oct 13 14:13:30 standalone.localdomain systemd[1]: tmp-crun.E0zzca.mount: Deactivated successfully.
Oct 13 14:13:30 standalone.localdomain podman[166907]: 2025-10-13 14:13:30.846415378 +0000 UTC m=+0.096323707 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:13:30 standalone.localdomain podman[166913]: 2025-10-13 14:13:30.895442447 +0000 UTC m=+0.143326374 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, io.openshift.expose-services=, container_name=nova_vnc_proxy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git, distribution-scope=public, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T15:24:10, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, tcib_managed=true, com.redhat.component=openstack-nova-novncproxy-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:13:30 standalone.localdomain podman[166904]: 2025-10-13 14:13:30.907421994 +0000 UTC m=+0.164109770 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, version=17.1.9, tcib_managed=true, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, com.redhat.component=openstack-swift-object-container, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 14:13:30 standalone.localdomain podman[166905]: 2025-10-13 14:13:30.940497145 +0000 UTC m=+0.200798222 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=nova_migration_target, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, release=1)
Oct 13 14:13:30 standalone.localdomain haproxy[70940]: 192.168.122.99:40084 [13/Oct/2025:14:13:30.545] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/417/417 201 497 - - ---- 56/4/3/3/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:30 standalone.localdomain haproxy[70940]: 192.168.122.99:40096 [13/Oct/2025:14:13:30.616] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/363/363 204 165 - - ---- 56/4/3/3/0 0/0 "DELETE /v3/users/18921f2767b84a7087ae952ec5e055a3/credentials/OS-EC2/e1ff6b2a837b479c8e1febf5b420e3b5 HTTP/1.1"
Oct 13 14:13:31 standalone.localdomain haproxy[70940]: 192.168.122.99:40102 [13/Oct/2025:14:13:30.679] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/322/322 204 165 - - ---- 56/4/3/3/0 0/0 "PUT /v3/projects/15118c8f5a1d495f8612885771630022/users/05e9ae5467c44eac882b10216b1f471a/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:13:31 standalone.localdomain podman[166907]: 2025-10-13 14:13:31.036076378 +0000 UTC m=+0.285984697 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, com.redhat.component=openstack-swift-account-container, distribution-scope=public, name=rhosp17/openstack-swift-account, tcib_managed=true, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git)
Oct 13 14:13:31 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:13:31 standalone.localdomain haproxy[70940]: 192.168.122.99:40108 [13/Oct/2025:14:13:30.698] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/353/353 204 165 - - ---- 56/4/3/3/0 0/0 "PUT /v3/projects/9487b400e57348799c6fb50b62fc3011/users/ede8ad203e344ebca8f01cd9d5095847/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:31 standalone.localdomain podman[166906]: 2025-10-13 14:13:31.090482313 +0000 UTC m=+0.348860091 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, container_name=swift_container_server, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, architecture=x86_64, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:13:31 standalone.localdomain haproxy[70940]: 192.168.122.99:40122 [13/Oct/2025:14:13:30.968] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/125/125 200 2700 - - ---- 56/3/2/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:31 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:13:31 standalone.localdomain haproxy[70940]: 192.168.122.99:40132 [13/Oct/2025:14:13:30.982] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/156/156 204 165 - - ---- 56/3/2/2/0 0/0 "DELETE /v3/users/18921f2767b84a7087ae952ec5e055a3/credentials/OS-EC2/bd023e43eb8a46f0bd0a3759335859b4 HTTP/1.1"
Oct 13 14:13:31 standalone.localdomain podman[166904]: 2025-10-13 14:13:31.1498977 +0000 UTC m=+0.406585486 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, distribution-scope=public, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-swift-object-container, name=rhosp17/openstack-swift-object, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=swift_object_server, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:13:31 standalone.localdomain haproxy[70940]: 192.168.122.99:40140 [13/Oct/2025:14:13:31.002] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/152/152 200 2700 - - ---- 56/3/2/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:31 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:13:31 standalone.localdomain podman[166913]: 2025-10-13 14:13:31.173927844 +0000 UTC m=+0.421811771 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, maintainer=OpenStack TripleO Team, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, name=rhosp17/openstack-nova-novncproxy, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.9, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:24:10, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, container_name=nova_vnc_proxy, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-nova-novncproxy-container)
Oct 13 14:13:31 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:13:31 standalone.localdomain podman[166905]: 2025-10-13 14:13:31.263651098 +0000 UTC m=+0.523952235 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, managed_by=tripleo_ansible, container_name=nova_migration_target, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, distribution-scope=public, name=rhosp17/openstack-nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Oct 13 14:13:31 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:13:31 standalone.localdomain runuser[166936]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:13:31 standalone.localdomain runuser[167088]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:13:31 standalone.localdomain haproxy[70940]: 172.21.0.2:37088 [13/Oct/2025:14:13:31.055] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/407/407 201 8126 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:31 standalone.localdomain haproxy[70940]: 192.168.122.99:40154 [13/Oct/2025:14:13:31.095] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/398/398 204 165 - - ---- 56/4/3/3/0 0/0 "PUT /v3/projects/a53f0865fedd4deb93471b0af0b5bff0/users/89bd8032744245d1a05c082e6dd45241/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:31 standalone.localdomain haproxy[70940]: 192.168.122.99:40170 [13/Oct/2025:14:13:31.144] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/443/443 201 574 - - ---- 56/4/3/3/0 0/0 "POST /v3/users/18921f2767b84a7087ae952ec5e055a3/credentials/OS-EC2 HTTP/1.1"
Oct 13 14:13:31 standalone.localdomain haproxy[70940]: 192.168.122.99:40178 [13/Oct/2025:14:13:31.156] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/493/493 204 165 - - ---- 56/4/3/3/0 0/0 "PUT /v3/projects/15118c8f5a1d495f8612885771630022/users/05e9ae5467c44eac882b10216b1f471a/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:31 standalone.localdomain haproxy[70940]: 192.168.122.99:40186 [13/Oct/2025:14:13:31.466] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/230/230 201 588 - - ---- 56/3/2/2/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1157: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:31 standalone.localdomain haproxy[70940]: 192.168.122.99:40188 [13/Oct/2025:14:13:31.497] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/240/240 200 2700 - - ---- 56/3/2/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:31 standalone.localdomain haproxy[70940]: 192.168.122.99:40198 [13/Oct/2025:14:13:31.589] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/187/187 200 569 - - ---- 56/3/2/2/0 0/0 "GET /v3/users/18921f2767b84a7087ae952ec5e055a3/credentials/OS-EC2/958ad32809bd471d8cc58d24aff741a2 HTTP/1.1"
Oct 13 14:13:32 standalone.localdomain haproxy[70940]: 172.21.0.2:37092 [13/Oct/2025:14:13:31.654] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/513/513 201 8174 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:32 standalone.localdomain runuser[167088]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:13:32 standalone.localdomain haproxy[70940]: 192.168.122.99:40204 [13/Oct/2025:14:13:31.699] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/770/770 201 505 - - ---- 56/4/3/3/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:32 standalone.localdomain haproxy[70940]: 192.168.122.99:40206 [13/Oct/2025:14:13:31.739] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/754/754 204 165 - - ---- 56/4/3/3/0 0/0 "PUT /v3/projects/a53f0865fedd4deb93471b0af0b5bff0/users/89bd8032744245d1a05c082e6dd45241/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:32 standalone.localdomain haproxy[70940]: 192.168.122.99:40208 [13/Oct/2025:14:13:31.779] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/758/758 204 165 - - ---- 56/3/2/2/0 0/0 "DELETE /v3/users/18921f2767b84a7087ae952ec5e055a3/credentials/OS-EC2/958ad32809bd471d8cc58d24aff741a2 HTTP/1.1"
Oct 13 14:13:32 standalone.localdomain haproxy[70940]: 192.168.122.99:40222 [13/Oct/2025:14:13:32.169] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/421/421 201 576 - - ---- 56/3/2/2/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:32 standalone.localdomain haproxy[70940]: 192.168.122.99:40232 [13/Oct/2025:14:13:32.474] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/164/164 200 2700 - - ---- 57/4/3/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:32 standalone.localdomain ceph-mon[29756]: pgmap v1157: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:32 standalone.localdomain haproxy[70940]: 172.21.0.2:37098 [13/Oct/2025:14:13:32.501] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/438/438 201 8110 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:32 standalone.localdomain haproxy[70940]: 192.168.122.99:40238 [13/Oct/2025:14:13:32.546] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/448/448 204 165 - - ---- 56/4/3/3/0 0/0 "DELETE /v3/users/18921f2767b84a7087ae952ec5e055a3 HTTP/1.1"
Oct 13 14:13:33 standalone.localdomain haproxy[70940]: 192.168.122.99:40254 [13/Oct/2025:14:13:32.593] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/686/686 201 499 - - ---- 56/3/2/2/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:33 standalone.localdomain haproxy[70940]: 192.168.122.99:40256 [13/Oct/2025:14:13:32.641] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/666/666 204 165 - - ---- 56/3/2/2/0 0/0 "PUT /v3/projects/7d2ce07158c4432a8f445f957002124c/users/8b4a2e6087cd4138baff96ba37f9e784/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:33 standalone.localdomain haproxy[70940]: 192.168.122.99:40270 [13/Oct/2025:14:13:32.942] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/417/417 201 574 - - ---- 56/3/2/2/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:33 standalone.localdomain haproxy[70940]: 172.21.0.2:37110 [13/Oct/2025:14:13:32.998] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/682/682 201 8100 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:33 standalone.localdomain haproxy[70940]: 192.168.122.99:40276 [13/Oct/2025:14:13:33.281] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/410/410 200 2700 - - ---- 56/3/2/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1158: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:33 standalone.localdomain haproxy[70940]: 192.168.122.99:40278 [13/Oct/2025:14:13:33.311] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/393/393 200 2700 - - ---- 56/3/2/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:33 standalone.localdomain haproxy[70940]: 192.168.122.99:40290 [13/Oct/2025:14:13:33.361] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/599/599 201 497 - - ---- 56/3/2/2/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:34 standalone.localdomain haproxy[70940]: 172.17.0.2:58178 [13/Oct/2025:14:13:33.688] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/334/334 200 8095 - - ---- 56/1/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:34 standalone.localdomain haproxy[70940]: 192.168.122.99:40304 [13/Oct/2025:14:13:33.695] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/352/352 204 165 - - ---- 56/3/2/2/0 0/0 "PUT /v3/projects/2020480fe5ca43d6bd1d2085962e64a8/users/b282f3c5f206443cb703aa872eb7a686/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:34 standalone.localdomain haproxy[70940]: 192.168.122.99:40306 [13/Oct/2025:14:13:33.709] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/380/380 204 165 - - ---- 56/3/2/2/0 0/0 "PUT /v3/projects/7d2ce07158c4432a8f445f957002124c/users/8b4a2e6087cd4138baff96ba37f9e784/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:34 standalone.localdomain haproxy[70940]: 192.168.122.99:40322 [13/Oct/2025:14:13:33.964] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/165/165 200 2700 - - ---- 56/2/1/1/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:34 standalone.localdomain haproxy[70940]: 192.168.122.99:40338 [13/Oct/2025:14:13:34.049] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/95/95 200 2700 - - ---- 57/3/1/1/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:34 standalone.localdomain haproxy[70940]: 172.21.0.2:42754 [13/Oct/2025:14:13:33.682] neutron neutron/standalone.internalapi.localdomain 0/0/0/502/502 200 2976 - - ---- 57/2/1/0/0 0/0 "GET /v2.0/security-groups?tenant_id=b635c4f5aeb949ecb0e5b21a3fb6ef6c&name=default HTTP/1.1"
Oct 13 14:13:34 standalone.localdomain haproxy[70940]: 172.21.0.2:42758 [13/Oct/2025:14:13:34.187] neutron neutron/standalone.internalapi.localdomain 0/0/0/180/180 204 152 - - ---- 56/1/0/0/0 0/0 "DELETE /v2.0/security-groups/cb2be9ac-cbc0-4b56-8294-959ec1bbaca9 HTTP/1.1"
Oct 13 14:13:34 standalone.localdomain haproxy[70940]: 172.21.0.2:37116 [13/Oct/2025:14:13:34.100] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/364/364 201 8126 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:34 standalone.localdomain haproxy[70940]: 192.168.122.99:40342 [13/Oct/2025:14:13:34.130] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/372/372 204 165 - - ---- 56/4/3/3/0 0/0 "PUT /v3/projects/ffa4b90840e744128b3b10658bf5c402/users/75e5418128e9480bb151a9bd982fa780/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:13:34 standalone.localdomain haproxy[70940]: 192.168.122.99:40346 [13/Oct/2025:14:13:34.147] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/421/421 204 165 - - ---- 56/4/3/3/0 0/0 "PUT /v3/projects/2020480fe5ca43d6bd1d2085962e64a8/users/b282f3c5f206443cb703aa872eb7a686/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:34 standalone.localdomain haproxy[70940]: 192.168.122.99:40356 [13/Oct/2025:14:13:34.370] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/308/308 204 165 - - ---- 56/3/2/2/0 0/0 "DELETE /v3/projects/b635c4f5aeb949ecb0e5b21a3fb6ef6c HTTP/1.1"
Oct 13 14:13:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:13:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:13:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:13:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:13:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:13:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:13:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:13:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:13:34 standalone.localdomain ceph-mon[29756]: pgmap v1158: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:34 standalone.localdomain haproxy[70940]: 192.168.122.99:40360 [13/Oct/2025:14:13:34.468] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/350/350 200 719 - - ---- 55/2/1/1/0 0/0 "GET /v3/users/ede8ad203e344ebca8f01cd9d5095847/projects HTTP/1.1"
Oct 13 14:13:34 standalone.localdomain systemd[1]: tmp-crun.JvSTlw.mount: Deactivated successfully.
Oct 13 14:13:34 standalone.localdomain podman[167175]: 2025-10-13 14:13:34.856268105 +0000 UTC m=+0.097282326 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-api, vcs-type=git, vendor=Red Hat, Inc., container_name=nova_metadata, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:05:11, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-api-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_id=tripleo_step4)
Oct 13 14:13:34 standalone.localdomain podman[167175]: 2025-10-13 14:13:34.878145155 +0000 UTC m=+0.119159366 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, architecture=x86_64, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=nova_metadata, name=rhosp17/openstack-nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, build-date=2025-07-21T16:05:11, vcs-type=git, com.redhat.component=openstack-nova-api-container, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:13:34 standalone.localdomain haproxy[70940]: 192.168.122.99:40376 [13/Oct/2025:14:13:34.505] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/382/382 200 2700 - - ---- 55/1/0/0/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:34 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:13:34 standalone.localdomain podman[167174]: 2025-10-13 14:13:34.914532478 +0000 UTC m=+0.157458768 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=swift_proxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9)
Oct 13 14:13:35 standalone.localdomain podman[167177]: 2025-10-13 14:13:34.965879958 +0000 UTC m=+0.195079608 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64)
Oct 13 14:13:35 standalone.localdomain podman[167192]: 2025-10-13 14:13:35.017464525 +0000 UTC m=+0.239839216 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=glance_api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, tcib_managed=true, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.component=openstack-glance-api-container, managed_by=tripleo_ansible, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:13:35 standalone.localdomain podman[167177]: 2025-10-13 14:13:35.022470218 +0000 UTC m=+0.251669818 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12)
Oct 13 14:13:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:13:35 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:13:35 standalone.localdomain podman[167176]: 2025-10-13 14:13:34.834701655 +0000 UTC m=+0.080969937 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, release=1, com.redhat.component=openstack-glance-api-container, io.openshift.expose-services=, vcs-type=git, container_name=glance_api_internal, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:13:35 standalone.localdomain podman[167178]: 2025-10-13 14:13:35.073162669 +0000 UTC m=+0.306512515 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team)
Oct 13 14:13:35 standalone.localdomain podman[167176]: 2025-10-13 14:13:35.07614434 +0000 UTC m=+0.322412652 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, container_name=glance_api_internal, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, release=1, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:13:35 standalone.localdomain podman[167174]: 2025-10-13 14:13:35.08496363 +0000 UTC m=+0.327889910 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, distribution-scope=public, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, tcib_managed=true, com.redhat.component=openstack-swift-proxy-server-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, container_name=swift_proxy, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37)
Oct 13 14:13:35 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:13:35 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:13:35 standalone.localdomain podman[167178]: 2025-10-13 14:13:35.117558577 +0000 UTC m=+0.350908423 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.9, container_name=ovn_controller, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git)
Oct 13 14:13:35 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:13:35 standalone.localdomain podman[167173]: 2025-10-13 14:13:34.939100488 +0000 UTC m=+0.180834871 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-placement-api-container, config_id=tripleo_step4, container_name=placement_api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, description=Red Hat OpenStack Platform 17.1 placement-api, name=rhosp17/openstack-placement-api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:13:35 standalone.localdomain podman[167172]: 2025-10-13 14:13:35.118535526 +0000 UTC m=+0.355471932 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, release=1, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, version=17.1.9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, tcib_managed=true, container_name=glance_api_cron, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:13:35 standalone.localdomain podman[167173]: 2025-10-13 14:13:35.176804668 +0000 UTC m=+0.418539041 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, config_id=tripleo_step4, release=1, container_name=placement_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T13:58:12, summary=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.component=openstack-placement-api-container, io.buildah.version=1.33.12, name=rhosp17/openstack-placement-api, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 placement-api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:13:35 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:13:35 standalone.localdomain haproxy[70940]: 172.21.0.2:37124 [13/Oct/2025:14:13:34.578] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/619/619 201 8114 - - ---- 55/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:35 standalone.localdomain podman[167172]: 2025-10-13 14:13:35.202883607 +0000 UTC m=+0.439820013 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, summary=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, release=1, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, container_name=glance_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.component=openstack-glance-api-container)
Oct 13 14:13:35 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:13:35 standalone.localdomain podman[167192]: 2025-10-13 14:13:35.233812413 +0000 UTC m=+0.456187144 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, distribution-scope=public)
Oct 13 14:13:35 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:13:35 standalone.localdomain haproxy[70940]: 172.21.0.2:37136 [13/Oct/2025:14:13:34.827] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/649/649 201 8126 - - ---- 55/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:35 standalone.localdomain haproxy[70940]: 192.168.122.99:40382 [13/Oct/2025:14:13:34.890] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/605/605 204 165 - - ---- 55/2/1/1/0 0/0 "PUT /v3/projects/ffa4b90840e744128b3b10658bf5c402/users/75e5418128e9480bb151a9bd982fa780/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:13:35 standalone.localdomain haproxy[70940]: 192.168.122.99:40394 [13/Oct/2025:14:13:35.200] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/347/347 201 483 - - ---- 56/3/1/1/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:13:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1159: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:35 standalone.localdomain systemd[1]: tmp-crun.Blkc7n.mount: Deactivated successfully.
Oct 13 14:13:35 standalone.localdomain haproxy[70940]: 172.21.0.2:37152 [13/Oct/2025:14:13:35.479] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/440/440 401 377 - - ---- 55/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:35 standalone.localdomain haproxy[70940]: 192.168.122.99:40404 [13/Oct/2025:14:13:35.498] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/486/486 200 2700 - - ---- 55/3/2/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:36 standalone.localdomain haproxy[70940]: 192.168.122.99:40406 [13/Oct/2025:14:13:35.549] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/494/494 201 446 - - ---- 55/3/2/2/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:13:36 standalone.localdomain ceph-mon[29756]: pgmap v1159: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:36 standalone.localdomain haproxy[70940]: 192.168.122.99:40412 [13/Oct/2025:14:13:35.923] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/184/184 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/users/ede8ad203e344ebca8f01cd9d5095847 HTTP/1.1"
Oct 13 14:13:36 standalone.localdomain haproxy[70940]: 192.168.122.99:40424 [13/Oct/2025:14:13:35.988] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/179/179 204 165 - - ---- 55/3/2/2/0 0/0 "PUT /v3/projects/ffa4b90840e744128b3b10658bf5c402/users/75e5418128e9480bb151a9bd982fa780/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:36 standalone.localdomain haproxy[70940]: 192.168.122.99:40438 [13/Oct/2025:14:13:36.047] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/181/181 204 165 - - ---- 55/2/1/1/0 0/0 "PUT /v3/domains/a65b1497674944abb43e937c51b04615/users/b282f3c5f206443cb703aa872eb7a686/roles/0c921a8b5f044d898bfd3adce8994bcf HTTP/1.1"
Oct 13 14:13:36 standalone.localdomain haproxy[70940]: 192.168.122.99:40442 [13/Oct/2025:14:13:36.108] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/201/201 204 165 - - ---- 55/2/1/1/0 0/0 "DELETE /v3/users/8b4a2e6087cd4138baff96ba37f9e784 HTTP/1.1"
Oct 13 14:13:36 standalone.localdomain haproxy[70940]: 172.21.0.2:37164 [13/Oct/2025:14:13:36.176] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/455/455 201 8172 - - ---- 55/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:36 standalone.localdomain haproxy[70940]: 192.168.122.99:40454 [13/Oct/2025:14:13:36.230] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/453/453 201 484 - - ---- 55/2/1/1/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:13:36 standalone.localdomain haproxy[70940]: 172.21.0.2:37172 [13/Oct/2025:14:13:36.312] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/658/658 201 8100 - - ---- 55/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:37 standalone.localdomain haproxy[70940]: 192.168.122.99:40464 [13/Oct/2025:14:13:36.635] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/431/431 201 567 - - ---- 55/2/1/1/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:37 standalone.localdomain haproxy[70940]: 192.168.122.99:40466 [13/Oct/2025:14:13:36.686] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/459/459 201 446 - - ---- 55/2/1/1/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:13:37 standalone.localdomain haproxy[70940]: 172.17.0.2:58178 [13/Oct/2025:14:13:36.976] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/258/258 200 8095 - - ---- 55/1/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:37 standalone.localdomain haproxy[70940]: 172.21.0.2:42770 [13/Oct/2025:14:13:36.973] neutron neutron/standalone.internalapi.localdomain 0/0/0/390/390 200 2976 - - ---- 55/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=7d2ce07158c4432a8f445f957002124c&name=default HTTP/1.1"
Oct 13 14:13:37 standalone.localdomain haproxy[70940]: 172.21.0.2:42782 [13/Oct/2025:14:13:37.364] neutron neutron/standalone.internalapi.localdomain 0/0/0/176/176 204 152 - - ---- 55/1/0/0/0 0/0 "DELETE /v2.0/security-groups/d4006c0d-62fd-4fb9-a639-1c3845c29f2e HTTP/1.1"
Oct 13 14:13:37 standalone.localdomain haproxy[70940]: 192.168.122.99:40482 [13/Oct/2025:14:13:37.070] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/493/493 201 617 - - ---- 55/3/2/2/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:37 standalone.localdomain haproxy[70940]: 192.168.122.99:40492 [13/Oct/2025:14:13:37.148] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/461/461 201 533 - - ---- 55/3/2/2/0 0/0 "POST /v3/groups HTTP/1.1"
Oct 13 14:13:37 standalone.localdomain haproxy[70940]: 192.168.122.99:40508 [13/Oct/2025:14:13:37.544] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/158/158 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/projects/7d2ce07158c4432a8f445f957002124c HTTP/1.1"
Oct 13 14:13:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1160: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:37 standalone.localdomain haproxy[70940]: 192.168.122.99:40512 [13/Oct/2025:14:13:37.566] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/193/193 200 612 - - ---- 55/2/1/1/0 0/0 "GET /v3/users/9b87ea9862b9499ba835763880e6b59d HTTP/1.1"
Oct 13 14:13:37 standalone.localdomain haproxy[70940]: 192.168.122.99:40514 [13/Oct/2025:14:13:37.611] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/216/216 204 165 - - ---- 55/2/1/1/0 0/0 "PUT /v3/groups/ce359d4dcf2d4a0d981b46f0b9b5a8ce/users/b282f3c5f206443cb703aa872eb7a686 HTTP/1.1"
Oct 13 14:13:37 standalone.localdomain haproxy[70940]: 172.21.0.2:42794 [13/Oct/2025:14:13:37.705] neutron neutron/standalone.internalapi.localdomain 0/0/0/148/148 200 2976 - - ---- 55/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=9487b400e57348799c6fb50b62fc3011&name=default HTTP/1.1"
Oct 13 14:13:37 standalone.localdomain haproxy[70940]: 192.168.122.99:40528 [13/Oct/2025:14:13:37.762] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/145/145 204 165 - - ---- 55/2/1/1/0 0/0 "DELETE /v3/users/9b87ea9862b9499ba835763880e6b59d HTTP/1.1"
Oct 13 14:13:37 standalone.localdomain haproxy[70940]: 192.168.122.99:40544 [13/Oct/2025:14:13:37.831] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/133/133 204 165 - - ---- 55/2/1/1/0 0/0 "PUT /v3/domains/08e9f8d706a9418e9e4c8d469ece6a72/groups/ce359d4dcf2d4a0d981b46f0b9b5a8ce/roles/67205be61fef4b4c823c78f0ea64cc55 HTTP/1.1"
Oct 13 14:13:38 standalone.localdomain haproxy[70940]: 172.21.0.2:42796 [13/Oct/2025:14:13:37.857] neutron neutron/standalone.internalapi.localdomain 0/0/0/179/179 204 152 - - ---- 55/1/0/0/0 0/0 "DELETE /v2.0/security-groups/9000c51e-d45e-4acc-bd80-d37fd428ad31 HTTP/1.1"
Oct 13 14:13:38 standalone.localdomain haproxy[70940]: 192.168.122.99:40550 [13/Oct/2025:14:13:37.910] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/164/164 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/projects/e6e35dcce9364bab9190da56d9ddf9b4 HTTP/1.1"
Oct 13 14:13:38 standalone.localdomain haproxy[70940]: 192.168.122.99:40564 [13/Oct/2025:14:13:37.967] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/154/154 201 483 - - ---- 55/3/2/2/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:13:38 standalone.localdomain haproxy[70940]: 192.168.122.99:40580 [13/Oct/2025:14:13:38.039] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/157/157 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/projects/9487b400e57348799c6fb50b62fc3011 HTTP/1.1"
Oct 13 14:13:38 standalone.localdomain haproxy[70940]: 192.168.122.99:40588 [13/Oct/2025:14:13:38.079] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/161/161 201 555 - - ---- 55/2/1/1/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:38 standalone.localdomain haproxy[70940]: 192.168.122.99:40590 [13/Oct/2025:14:13:38.123] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/157/157 201 447 - - ---- 55/2/1/1/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:13:38 standalone.localdomain haproxy[70940]: 172.21.0.2:37180 [13/Oct/2025:14:13:38.201] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/373/373 201 8100 - - ---- 55/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:38 standalone.localdomain haproxy[70940]: 192.168.122.99:40600 [13/Oct/2025:14:13:38.243] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/380/380 200 2160 - - ---- 55/2/1/1/0 0/0 "GET /v3/projects?is_domain=True HTTP/1.1"
Oct 13 14:13:38 standalone.localdomain haproxy[70940]: 192.168.122.99:40614 [13/Oct/2025:14:13:38.284] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/378/378 204 165 - - ---- 55/2/1/1/0 0/0 "PUT /v3/domains/31e4532ac0b04a8094e4a0f3f8b77049/users/b282f3c5f206443cb703aa872eb7a686/roles/209cf90d907045108b50dac792d83db5 HTTP/1.1"
Oct 13 14:13:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:13:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:13:38 standalone.localdomain ceph-mon[29756]: pgmap v1160: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:38 standalone.localdomain podman[167518]: 2025-10-13 14:13:38.83465084 +0000 UTC m=+0.073475068 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, version=17.1.9, com.redhat.component=openstack-nova-api-container, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, container_name=nova_api, build-date=2025-07-21T16:05:11, description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:13:38 standalone.localdomain podman[167511]: 2025-10-13 14:13:38.892983784 +0000 UTC m=+0.133337929 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, com.redhat.component=openstack-keystone-container, container_name=keystone_cron, distribution-scope=public, release=1, version=17.1.9, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-07-21T13:27:18, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1)
Oct 13 14:13:38 standalone.localdomain podman[167518]: 2025-10-13 14:13:38.897978457 +0000 UTC m=+0.136802735 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, name=rhosp17/openstack-nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, 
com.redhat.component=openstack-nova-api-container, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_api, version=17.1.9, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., vcs-type=git)
Oct 13 14:13:38 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:13:38 standalone.localdomain podman[167511]: 2025-10-13 14:13:38.927539521 +0000 UTC m=+0.167893696 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, com.redhat.component=openstack-keystone-container, container_name=keystone_cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, name=rhosp17/openstack-keystone, vcs-type=git, distribution-scope=public, release=1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:13:38 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:13:39 standalone.localdomain haproxy[70940]: 172.21.0.2:37188 [13/Oct/2025:14:13:38.581] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/426/426 201 8100 - - ---- 55/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:39 standalone.localdomain haproxy[70940]: 192.168.122.99:40630 [13/Oct/2025:14:13:38.627] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/424/424 200 1795 - - ---- 55/3/2/2/0 0/0 "GET /v3/domains HTTP/1.1"
Oct 13 14:13:39 standalone.localdomain haproxy[70940]: 192.168.122.99:40646 [13/Oct/2025:14:13:38.666] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/451/451 201 484 - - ---- 55/3/2/2/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:13:39 standalone.localdomain haproxy[70940]: 192.168.122.99:40658 [13/Oct/2025:14:13:39.013] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/148/148 200 510 - - ---- 55/3/2/2/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:13:39 standalone.localdomain haproxy[70940]: 192.168.122.99:40662 [13/Oct/2025:14:13:39.054] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/171/171 200 564 - - ---- 55/3/2/2/0 0/0 "PATCH /v3/projects/2d02143ef0224ddf982ed03fa35a676b HTTP/1.1"
Oct 13 14:13:39 standalone.localdomain haproxy[70940]: 192.168.122.99:40678 [13/Oct/2025:14:13:39.119] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/154/154 201 447 - - ---- 55/3/2/2/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:13:39 standalone.localdomain podman[167613]: 2025-10-13 14:13:39.298737124 +0000 UTC m=+0.085554908 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, tcib_managed=true, com.redhat.component=openstack-cinder-backup-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cinder-backup, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:18:24, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, release=1, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, summary=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-backup, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup)
Oct 13 14:13:39 standalone.localdomain podman[167613]: 2025-10-13 14:13:39.328576456 +0000 UTC m=+0.115394220 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, build-date=2025-07-21T16:18:24, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cinder-backup, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, release=1, description=Red Hat OpenStack Platform 17.1 cinder-backup, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, name=rhosp17/openstack-cinder-backup, com.redhat.component=openstack-cinder-backup-container, vcs-type=git)
Oct 13 14:13:39 standalone.localdomain haproxy[70940]: 192.168.122.99:40690 [13/Oct/2025:14:13:39.165] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/174/174 201 566 - - ---- 55/3/2/2/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:39 standalone.localdomain haproxy[70940]: 192.168.122.99:40704 [13/Oct/2025:14:13:39.228] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/231/231 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/projects/2d02143ef0224ddf982ed03fa35a676b HTTP/1.1"
Oct 13 14:13:39 standalone.localdomain haproxy[70940]: 192.168.122.99:40718 [13/Oct/2025:14:13:39.275] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/242/242 201 533 - - ---- 55/3/2/2/0 0/0 "POST /v3/groups HTTP/1.1"
Oct 13 14:13:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1161: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:39 standalone.localdomain systemd[1]: tmp-crun.f7NAdA.mount: Deactivated successfully.
Oct 13 14:13:39 standalone.localdomain haproxy[70940]: 192.168.122.99:40720 [13/Oct/2025:14:13:39.342] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/504/504 201 494 - - ---- 55/3/2/2/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:39 standalone.localdomain haproxy[70940]: 192.168.122.99:40738 [13/Oct/2025:14:13:39.464] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/488/488 201 567 - - ---- 55/3/2/2/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:13:40 standalone.localdomain haproxy[70940]: 192.168.122.99:40750 [13/Oct/2025:14:13:39.521] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/533/533 204 165 - - ---- 55/3/2/2/0 0/0 "PUT /v3/groups/18da9bd0f74c436386591cbb148645cd/users/b282f3c5f206443cb703aa872eb7a686 HTTP/1.1"
Oct 13 14:13:40 standalone.localdomain haproxy[70940]: 192.168.122.99:60960 [13/Oct/2025:14:13:39.848] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/262/262 200 3622 - - ---- 55/3/2/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:40 standalone.localdomain haproxy[70940]: 192.168.122.99:60966 [13/Oct/2025:14:13:39.955] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/195/195 200 562 - - ---- 55/3/2/2/0 0/0 "GET /v3/projects/6e78c7e433f0454caf44f7a3e513b3a5 HTTP/1.1"
Oct 13 14:13:40 standalone.localdomain haproxy[70940]: 192.168.122.99:60968 [13/Oct/2025:14:13:40.057] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/143/143 204 165 - - ---- 55/3/2/2/0 0/0 "PUT /v3/domains/f7138ecb57ad4ce3a9b78235d8beebb2/groups/18da9bd0f74c436386591cbb148645cd/roles/9e778ac89a4a4f80935fca41f5d272fc HTTP/1.1"
Oct 13 14:13:40 standalone.localdomain haproxy[70940]: 192.168.122.99:60984 [13/Oct/2025:14:13:40.113] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/151/151 204 165 - - ---- 55/3/2/2/0 0/0 "PUT /v3/projects/c2a70aa156b74a3eb29039512dad8fbc/users/d613f33253694a1c89dfdac1f11ff238/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:40 standalone.localdomain haproxy[70940]: 192.168.122.99:60994 [13/Oct/2025:14:13:40.152] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/239/239 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/projects/6e78c7e433f0454caf44f7a3e513b3a5 HTTP/1.1"
Oct 13 14:13:40 standalone.localdomain haproxy[70940]: 192.168.122.99:32768 [13/Oct/2025:14:13:40.204] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/268/268 200 1377 - - ---- 55/3/2/2/0 0/0 "GET /v3/auth/domains HTTP/1.1"
Oct 13 14:13:40 standalone.localdomain haproxy[70940]: 192.168.122.99:32780 [13/Oct/2025:14:13:40.268] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/252/252 200 3622 - - ---- 55/3/2/2/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:40 standalone.localdomain haproxy[70940]: 192.168.122.99:32782 [13/Oct/2025:14:13:40.396] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/204/204 201 566 - - ---- 55/3/2/2/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:40 standalone.localdomain haproxy[70940]: 192.168.122.99:32786 [13/Oct/2025:14:13:40.477] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/181/181 204 165 - - ---- 56/4/3/2/0 0/0 "DELETE /v3/domains/f7138ecb57ad4ce3a9b78235d8beebb2/groups/18da9bd0f74c436386591cbb148645cd/roles/9e778ac89a4a4f80935fca41f5d272fc HTTP/1.1"
Oct 13 14:13:40 standalone.localdomain haproxy[70940]: 192.168.122.99:32796 [13/Oct/2025:14:13:40.524] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/184/184 204 165 - - ---- 56/3/2/2/0 0/0 "PUT /v3/projects/c2a70aa156b74a3eb29039512dad8fbc/users/d613f33253694a1c89dfdac1f11ff238/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:13:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:13:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:13:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:13:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:13:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:13:40 standalone.localdomain haproxy[70940]: 192.168.122.99:32802 [13/Oct/2025:14:13:40.603] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/160/160 200 561 - - ---- 55/2/1/1/0 0/0 "GET /v3/projects/b4adc5944ac341fbb088ad37c5480c7a HTTP/1.1"
Oct 13 14:13:40 standalone.localdomain ceph-mon[29756]: pgmap v1161: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:40 standalone.localdomain haproxy[70940]: 192.168.122.99:32808 [13/Oct/2025:14:13:40.662] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/156/156 204 165 - - ---- 55/2/1/1/0 0/0 "DELETE /v3/groups/18da9bd0f74c436386591cbb148645cd/users/b282f3c5f206443cb703aa872eb7a686 HTTP/1.1"
Oct 13 14:13:40 standalone.localdomain podman[167740]: 2025-10-13 14:13:40.831693498 +0000 UTC m=+0.098298188 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=barbican_api, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, version=17.1.9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, com.redhat.component=openstack-barbican-api-container, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:22:44, summary=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api)
Oct 13 14:13:40 standalone.localdomain podman[167752]: 2025-10-13 14:13:40.897978665 +0000 UTC m=+0.148509563 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, com.redhat.component=openstack-barbican-worker-container, description=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, build-date=2025-07-21T15:36:22, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, config_id=tripleo_step3, name=rhosp17/openstack-barbican-worker, container_name=barbican_worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, release=1, batch=17.1_20250721.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:13:40 standalone.localdomain podman[167752]: 2025-10-13 14:13:40.917230643 +0000 UTC m=+0.167761551 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, build-date=2025-07-21T15:36:22, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-worker, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-barbican-worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, release=1, io.buildah.version=1.33.12, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, container_name=barbican_worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-barbican-worker-container, distribution-scope=public)
Oct 13 14:13:40 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:13:40 standalone.localdomain podman[167742]: 2025-10-13 14:13:40.935590845 +0000 UTC m=+0.183055599 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, release=1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, build-date=2025-07-21T16:03:34, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, container_name=neutron_sriov_agent, com.redhat.component=openstack-neutron-sriov-agent-container, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']})
Oct 13 14:13:40 standalone.localdomain podman[167742]: 2025-10-13 14:13:40.995432615 +0000 UTC m=+0.242897369 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-sriov-agent-container, name=rhosp17/openstack-neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, architecture=x86_64, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vcs-type=git, build-date=2025-07-21T16:03:34, container_name=neutron_sriov_agent, release=1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.openshift.expose-services=, version=17.1.9)
Oct 13 14:13:41 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:13:41 standalone.localdomain podman[167758]: 2025-10-13 14:13:41.010266129 +0000 UTC m=+0.260144177 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, vcs-type=git, version=17.1.9, build-date=2025-07-21T16:05:11, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, container_name=nova_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=)
Oct 13 14:13:41 standalone.localdomain podman[167740]: 2025-10-13 14:13:41.014135767 +0000 UTC m=+0.280740457 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.openshift.expose-services=, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, container_name=barbican_api, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, com.redhat.component=openstack-barbican-api-container, 
managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 barbican-api, vcs-type=git)
Oct 13 14:13:41 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:13:41 standalone.localdomain podman[167741]: 2025-10-13 14:13:40.873758084 +0000 UTC m=+0.136175766 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, config_id=tripleo_step3, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-barbican-keystone-listener, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, 
batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T16:18:19, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-barbican-keystone-listener-container, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, container_name=barbican_keystone_listener, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:13:41 standalone.localdomain podman[167741]: 2025-10-13 14:13:41.065850698 +0000 UTC m=+0.328268360 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vcs-type=git, container_name=barbican_keystone_listener, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_id=tripleo_step3, 
batch=17.1_20250721.1, com.redhat.component=openstack-barbican-keystone-listener-container, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:18:19, version=17.1.9, name=rhosp17/openstack-barbican-keystone-listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c)
Oct 13 14:13:41 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:13:41 standalone.localdomain podman[167739]: 2025-10-13 14:13:40.983662055 +0000 UTC m=+0.250945185 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-neutron-dhcp-agent-container, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, summary=Red Hat OpenStack Platform 17.1 
neutron-dhcp-agent, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T16:28:54, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, managed_by=tripleo_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4)
Oct 13 14:13:41 standalone.localdomain podman[167739]: 2025-10-13 14:13:41.117107417 +0000 UTC m=+0.384390627 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, name=rhosp17/openstack-neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, com.redhat.component=openstack-neutron-dhcp-agent-container, tcib_managed=true, build-date=2025-07-21T16:28:54, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, container_name=neutron_dhcp, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team)
Oct 13 14:13:41 standalone.localdomain podman[167758]: 2025-10-13 14:13:41.171687686 +0000 UTC m=+0.421565704 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, build-date=2025-07-21T16:05:11, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., architecture=x86_64, 
batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, container_name=nova_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-api-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:13:41 standalone.localdomain haproxy[70940]: 172.21.0.2:36964 [13/Oct/2025:14:13:40.718] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/455/455 201 8104 - - ---- 55/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:41 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:13:41 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:13:41 standalone.localdomain haproxy[70940]: 192.168.122.99:32810 [13/Oct/2025:14:13:40.766] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/472/472 204 165 - - ---- 55/2/1/1/0 0/0 "DELETE /v3/projects/b4adc5944ac341fbb088ad37c5480c7a HTTP/1.1"
Oct 13 14:13:41 standalone.localdomain haproxy[70940]: 192.168.122.99:32816 [13/Oct/2025:14:13:40.820] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/487/487 204 165 - - ---- 55/2/1/1/0 0/0 "DELETE /v3/groups/18da9bd0f74c436386591cbb148645cd HTTP/1.1"
Oct 13 14:13:41 standalone.localdomain haproxy[70940]: 172.21.0.2:36976 [13/Oct/2025:14:13:41.179] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/414/414 201 736 - - ---- 55/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:41 standalone.localdomain haproxy[70940]: 192.168.122.99:32828 [13/Oct/2025:14:13:41.243] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/405/405 201 553 - - ---- 55/2/1/1/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1162: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:41 standalone.localdomain haproxy[70940]: 192.168.122.99:32834 [13/Oct/2025:14:13:41.309] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/398/398 204 165 - - ---- 55/2/1/1/0 0/0 "DELETE /v3/roles/9e778ac89a4a4f80935fca41f5d272fc HTTP/1.1"
Oct 13 14:13:41 standalone.localdomain haproxy[70940]: 172.21.0.2:36988 [13/Oct/2025:14:13:41.598] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/371/371 201 736 - - ---- 55/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:42 standalone.localdomain haproxy[70940]: 192.168.122.99:32840 [13/Oct/2025:14:13:41.653] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/365/365 200 548 - - ---- 55/3/2/2/0 0/0 "GET /v3/projects/40e2d8ad82c74c88bd0d8a186148d47f HTTP/1.1"
Oct 13 14:13:42 standalone.localdomain haproxy[70940]: 192.168.122.99:32844 [13/Oct/2025:14:13:41.710] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/378/378 200 480 - - ---- 55/3/2/2/0 0/0 "PATCH /v3/domains/f7138ecb57ad4ce3a9b78235d8beebb2 HTTP/1.1"
Oct 13 14:13:42 standalone.localdomain haproxy[70940]: 192.168.122.99:32846 [13/Oct/2025:14:13:41.972] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/162/162 200 379 - - ---- 55/3/2/2/0 0/0 "HEAD /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:42 standalone.localdomain haproxy[70940]: 192.168.122.99:32854 [13/Oct/2025:14:13:42.020] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/221/221 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/projects/40e2d8ad82c74c88bd0d8a186148d47f HTTP/1.1"
Oct 13 14:13:42 standalone.localdomain haproxy[70940]: 192.168.122.99:32858 [13/Oct/2025:14:13:42.090] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/277/277 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/domains/f7138ecb57ad4ce3a9b78235d8beebb2 HTTP/1.1"
Oct 13 14:13:42 standalone.localdomain haproxy[70940]: 192.168.122.99:32870 [13/Oct/2025:14:13:42.135] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/271/271 200 731 - - ---- 55/3/2/2/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:42 standalone.localdomain haproxy[70940]: 192.168.122.99:32886 [13/Oct/2025:14:13:42.246] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/201/201 201 483 - - ---- 55/3/2/2/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:13:42 standalone.localdomain haproxy[70940]: 192.168.122.99:32890 [13/Oct/2025:14:13:42.369] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/125/125 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/domains/31e4532ac0b04a8094e4a0f3f8b77049/users/b282f3c5f206443cb703aa872eb7a686/roles/209cf90d907045108b50dac792d83db5 HTTP/1.1"
Oct 13 14:13:42 standalone.localdomain haproxy[70940]: 192.168.122.99:32904 [13/Oct/2025:14:13:42.408] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/129/129 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:42 standalone.localdomain haproxy[70940]: 192.168.122.99:32920 [13/Oct/2025:14:13:42.451] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/138/138 201 610 - - ---- 55/3/2/2/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:42 standalone.localdomain haproxy[70940]: 192.168.122.99:32932 [13/Oct/2025:14:13:42.497] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/146/146 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/roles/209cf90d907045108b50dac792d83db5 HTTP/1.1"
Oct 13 14:13:42 standalone.localdomain haproxy[70940]: 192.168.122.99:32948 [13/Oct/2025:14:13:42.542] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/145/145 404 208 - - ---- 55/3/2/2/0 0/0 "HEAD /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:42 standalone.localdomain haproxy[70940]: 192.168.122.99:32952 [13/Oct/2025:14:13:42.592] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/133/133 200 605 - - ---- 55/2/1/1/0 0/0 "GET /v3/projects/4d80395912e24c12bf0983d2f5fe1622 HTTP/1.1"
Oct 13 14:13:42 standalone.localdomain runuser[167900]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:13:42 standalone.localdomain haproxy[70940]: 192.168.122.99:32966 [13/Oct/2025:14:13:42.646] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/139/139 200 479 - - ---- 55/2/1/1/0 0/0 "PATCH /v3/domains/31e4532ac0b04a8094e4a0f3f8b77049 HTTP/1.1"
Oct 13 14:13:42 standalone.localdomain ceph-mon[29756]: pgmap v1162: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:43 standalone.localdomain haproxy[70940]: 172.21.0.2:37004 [13/Oct/2025:14:13:42.689] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/355/355 201 736 - - ---- 55/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:43 standalone.localdomain haproxy[70940]: 192.168.122.99:32976 [13/Oct/2025:14:13:42.729] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/396/396 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/projects/4d80395912e24c12bf0983d2f5fe1622 HTTP/1.1"
Oct 13 14:13:43 standalone.localdomain haproxy[70940]: 192.168.122.99:32992 [13/Oct/2025:14:13:42.787] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/440/440 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/domains/31e4532ac0b04a8094e4a0f3f8b77049 HTTP/1.1"
Oct 13 14:13:43 standalone.localdomain haproxy[70940]: 192.168.122.99:33006 [13/Oct/2025:14:13:43.045] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/225/225 200 731 - - ---- 55/3/2/2/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:43 standalone.localdomain haproxy[70940]: 192.168.122.99:33020 [13/Oct/2025:14:13:43.126] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/205/205 200 479 - - ---- 55/3/2/2/0 0/0 "PATCH /v3/domains/53bd2f8b37a54b388249ebbb2719f60e HTTP/1.1"
Oct 13 14:13:43 standalone.localdomain haproxy[70940]: 192.168.122.99:33034 [13/Oct/2025:14:13:43.230] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/149/149 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/domains/08e9f8d706a9418e9e4c8d469ece6a72/groups/ce359d4dcf2d4a0d981b46f0b9b5a8ce/roles/67205be61fef4b4c823c78f0ea64cc55 HTTP/1.1"
Oct 13 14:13:43 standalone.localdomain haproxy[70940]: 192.168.122.99:33050 [13/Oct/2025:14:13:43.274] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/153/153 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:43 standalone.localdomain runuser[167900]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:13:43 standalone.localdomain haproxy[70940]: 192.168.122.99:33064 [13/Oct/2025:14:13:43.334] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/202/202 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/domains/53bd2f8b37a54b388249ebbb2719f60e HTTP/1.1"
Oct 13 14:13:43 standalone.localdomain haproxy[70940]: 192.168.122.99:33068 [13/Oct/2025:14:13:43.382] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/200/200 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/groups/ce359d4dcf2d4a0d981b46f0b9b5a8ce/users/b282f3c5f206443cb703aa872eb7a686 HTTP/1.1"
Oct 13 14:13:43 standalone.localdomain haproxy[70940]: 192.168.122.99:33080 [13/Oct/2025:14:13:43.431] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/185/185 404 288 - - ---- 55/3/2/2/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:43 standalone.localdomain runuser[167971]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:13:43 standalone.localdomain haproxy[70940]: 192.168.122.99:33086 [13/Oct/2025:14:13:43.540] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/113/113 201 483 - - ---- 55/3/2/2/0 0/0 "POST /v3/domains HTTP/1.1"
Oct 13 14:13:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1163: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:43 standalone.localdomain haproxy[70940]: 192.168.122.99:33102 [13/Oct/2025:14:13:43.585] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/122/122 204 165 - - ---- 56/4/3/3/0 0/0 "DELETE /v3/groups/ce359d4dcf2d4a0d981b46f0b9b5a8ce HTTP/1.1"
Oct 13 14:13:43 standalone.localdomain haproxy[70940]: 192.168.122.99:33108 [13/Oct/2025:14:13:43.623] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/181/181 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/users/d613f33253694a1c89dfdac1f11ff238 HTTP/1.1"
Oct 13 14:13:43 standalone.localdomain haproxy[70940]: 192.168.122.99:33118 [13/Oct/2025:14:13:43.657] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/193/193 201 616 - - ---- 55/2/1/1/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:43 standalone.localdomain haproxy[70940]: 192.168.122.99:33128 [13/Oct/2025:14:13:43.710] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/204/204 204 165 - - ---- 55/2/1/1/0 0/0 "DELETE /v3/roles/67205be61fef4b4c823c78f0ea64cc55 HTTP/1.1"
Oct 13 14:13:44 standalone.localdomain haproxy[70940]: 172.21.0.2:37020 [13/Oct/2025:14:13:43.806] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/437/437 201 8100 - - ---- 55/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:44 standalone.localdomain runuser[167971]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:13:44 standalone.localdomain haproxy[70940]: 192.168.122.99:33132 [13/Oct/2025:14:13:43.852] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/457/457 201 610 - - ---- 55/2/1/1/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:44 standalone.localdomain runuser[168025]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:13:44 standalone.localdomain haproxy[70940]: 192.168.122.99:33136 [13/Oct/2025:14:13:43.918] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/447/447 200 480 - - ---- 55/2/1/1/0 0/0 "PATCH /v3/domains/08e9f8d706a9418e9e4c8d469ece6a72 HTTP/1.1"
Oct 13 14:13:44 standalone.localdomain haproxy[70940]: 172.17.0.2:58178 [13/Oct/2025:14:13:44.254] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/193/193 200 8095 - - ---- 55/1/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:44 standalone.localdomain haproxy[70940]: 192.168.122.99:33152 [13/Oct/2025:14:13:44.312] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/294/294 204 165 - - ---- 55/2/1/1/0 0/0 "DELETE /v3/projects/091c983d13e94189a57ae1cc0eda95b4 HTTP/1.1"
Oct 13 14:13:44 standalone.localdomain haproxy[70940]: 172.21.0.2:51884 [13/Oct/2025:14:13:44.248] neutron neutron/standalone.internalapi.localdomain 0/0/0/372/372 200 2976 - - ---- 56/2/1/0/0 0/0 "GET /v2.0/security-groups?tenant_id=c2a70aa156b74a3eb29039512dad8fbc&name=default HTTP/1.1"
Oct 13 14:13:44 standalone.localdomain haproxy[70940]: 192.168.122.99:33154 [13/Oct/2025:14:13:44.367] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/366/366 204 165 - - ---- 55/2/1/1/0 0/0 "DELETE /v3/domains/08e9f8d706a9418e9e4c8d469ece6a72 HTTP/1.1"
Oct 13 14:13:44 standalone.localdomain ceph-mon[29756]: pgmap v1163: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:44 standalone.localdomain haproxy[70940]: 172.21.0.2:51890 [13/Oct/2025:14:13:44.622] neutron neutron/standalone.internalapi.localdomain 0/0/0/171/171 204 152 - - ---- 55/1/0/0/0 0/0 "DELETE /v2.0/security-groups/192c05fb-c8a7-4a99-8c6f-0390f0851419 HTTP/1.1"
Oct 13 14:13:44 standalone.localdomain haproxy[70940]: 192.168.122.99:33168 [13/Oct/2025:14:13:44.610] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/232/232 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/projects/ebdb2f6f960c4ad4af1092abce700c85 HTTP/1.1"
Oct 13 14:13:44 standalone.localdomain haproxy[70940]: 192.168.122.99:33176 [13/Oct/2025:14:13:44.737] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/170/170 204 165 - - ---- 56/4/3/2/0 0/0 "DELETE /v3/domains/a65b1497674944abb43e937c51b04615/users/b282f3c5f206443cb703aa872eb7a686/roles/0c921a8b5f044d898bfd3adce8994bcf HTTP/1.1"
Oct 13 14:13:44 standalone.localdomain haproxy[70940]: 192.168.122.99:33178 [13/Oct/2025:14:13:44.796] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/188/188 204 165 - - ---- 55/3/2/2/0 0/0 "DELETE /v3/projects/c2a70aa156b74a3eb29039512dad8fbc HTTP/1.1"
Oct 13 14:13:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:13:45 standalone.localdomain runuser[168025]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:13:45 standalone.localdomain haproxy[70940]: 192.168.122.99:33190 [13/Oct/2025:14:13:44.845] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/219/219 200 479 - - ---- 54/2/1/1/0 0/0 "PATCH /v3/domains/2c5a080a9996487789b68f13cdf9dd0c HTTP/1.1"
Oct 13 14:13:45 standalone.localdomain haproxy[70940]: 192.168.122.99:33198 [13/Oct/2025:14:13:44.908] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/212/212 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/roles/0c921a8b5f044d898bfd3adce8994bcf HTTP/1.1"
Oct 13 14:13:45 standalone.localdomain haproxy[70940]: 192.168.122.99:33210 [13/Oct/2025:14:13:45.065] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/149/149 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/domains/2c5a080a9996487789b68f13cdf9dd0c HTTP/1.1"
Oct 13 14:13:45 standalone.localdomain haproxy[70940]: 192.168.122.99:33212 [13/Oct/2025:14:13:45.122] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/175/175 200 479 - - ---- 54/2/1/1/0 0/0 "PATCH /v3/domains/a65b1497674944abb43e937c51b04615 HTTP/1.1"
Oct 13 14:13:45 standalone.localdomain haproxy[70940]: 192.168.122.99:33220 [13/Oct/2025:14:13:45.216] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/142/142 201 585 - - ---- 54/2/1/1/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:45 standalone.localdomain haproxy[70940]: 192.168.122.99:33222 [13/Oct/2025:14:13:45.300] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/190/190 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/domains/a65b1497674944abb43e937c51b04615 HTTP/1.1"
Oct 13 14:13:45 standalone.localdomain haproxy[70940]: 192.168.122.99:33234 [13/Oct/2025:14:13:45.361] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/176/176 200 580 - - ---- 54/2/1/1/0 0/0 "GET /v3/projects/0c4c9bab02a14c64a8657550d97a111c HTTP/1.1"
Oct 13 14:13:45 standalone.localdomain haproxy[70940]: 192.168.122.99:33238 [13/Oct/2025:14:13:45.494] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/92/92 200 2700 - - ---- 54/2/1/1/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:45 standalone.localdomain haproxy[70940]: 192.168.122.99:33254 [13/Oct/2025:14:13:45.539] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/68/68 200 3740 - - ---- 54/2/1/1/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:13:45 standalone.localdomain haproxy[70940]: 192.168.122.99:33256 [13/Oct/2025:14:13:45.588] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/39/39 204 165 - - ---- 54/2/1/1/0 0/0 "PUT /v3/projects/e69ac02d79d34d9c957819d3dcce8263/users/05e9ae5467c44eac882b10216b1f471a/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:13:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1164: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:45 standalone.localdomain haproxy[70940]: 192.168.122.99:33266 [13/Oct/2025:14:13:45.610] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/98/98 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/projects/0c4c9bab02a14c64a8657550d97a111c HTTP/1.1"
Oct 13 14:13:45 standalone.localdomain haproxy[70940]: 192.168.122.99:33268 [13/Oct/2025:14:13:45.630] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/148/148 200 1028 - - ---- 54/2/1/1/0 0/0 "GET /v3/auth/projects HTTP/1.1"
Oct 13 14:13:45 standalone.localdomain haproxy[70940]: 192.168.122.99:33280 [13/Oct/2025:14:13:45.712] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/144/144 201 553 - - ---- 54/2/1/1/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:45 standalone.localdomain haproxy[70940]: 192.168.122.99:33294 [13/Oct/2025:14:13:45.781] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/159/159 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/projects/e69ac02d79d34d9c957819d3dcce8263/users/05e9ae5467c44eac882b10216b1f471a/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:13:46 standalone.localdomain haproxy[70940]: 192.168.122.99:33310 [13/Oct/2025:14:13:45.859] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/144/144 200 563 - - ---- 54/2/1/1/0 0/0 "PATCH /v3/projects/21ff9d1546c64e788f0131265958a0a3 HTTP/1.1"
Oct 13 14:13:46 standalone.localdomain ceph-mon[29756]: pgmap v1164: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:46 standalone.localdomain haproxy[70940]: 192.168.122.99:33316 [13/Oct/2025:14:13:45.944] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/382/382 201 530 - - ---- 54/2/1/1/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:46 standalone.localdomain haproxy[70940]: 192.168.122.99:33328 [13/Oct/2025:14:13:46.004] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/340/340 200 550 - - ---- 54/2/1/1/0 0/0 "GET /v3/projects/21ff9d1546c64e788f0131265958a0a3 HTTP/1.1"
Oct 13 14:13:46 standalone.localdomain haproxy[70940]: 192.168.122.99:33342 [13/Oct/2025:14:13:46.329] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/41/41 201 570 - - ---- 54/2/1/1/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:46 standalone.localdomain haproxy[70940]: 192.168.122.99:33358 [13/Oct/2025:14:13:46.347] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/103/103 204 165 - - ---- 55/3/2/1/0 0/0 "DELETE /v3/projects/21ff9d1546c64e788f0131265958a0a3 HTTP/1.1"
Oct 13 14:13:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:13:46 standalone.localdomain haproxy[70940]: 192.168.122.99:33372 [13/Oct/2025:14:13:46.374] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/140/140 201 569 - - ---- 54/2/1/1/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:46 standalone.localdomain podman[168099]: 2025-10-13 14:13:46.554428482 +0000 UTC m=+0.069170787 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, io.openshift.expose-services=, release=1, com.redhat.component=openstack-ovn-northd-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_cluster_northd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, 
distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, architecture=x86_64, config_id=ovn_cluster_northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vendor=Red Hat, Inc.)
Oct 13 14:13:46 standalone.localdomain podman[168099]: 2025-10-13 14:13:46.56876775 +0000 UTC m=+0.083510045 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:30:04, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-northd, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-northd-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_cluster_northd, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, config_id=ovn_cluster_northd, vcs-type=git, name=rhosp17/openstack-ovn-northd, distribution-scope=public)
Oct 13 14:13:46 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:13:46 standalone.localdomain haproxy[70940]: 192.168.122.99:33382 [13/Oct/2025:14:13:46.454] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/126/126 201 566 - - ---- 54/2/1/1/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:46 standalone.localdomain haproxy[70940]: 192.168.122.99:33386 [13/Oct/2025:14:13:46.517] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/140/140 201 447 - - ---- 54/2/1/1/0 0/0 "POST /v3/roles HTTP/1.1"
Oct 13 14:13:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:13:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:13:46 standalone.localdomain haproxy[70940]: 192.168.122.99:33394 [13/Oct/2025:14:13:46.583] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/164/164 200 573 - - ---- 54/2/1/1/0 0/0 "PATCH /v3/projects/2b9a1b1c6a1044c091807e43f2439b56 HTTP/1.1"
Oct 13 14:13:46 standalone.localdomain haproxy[70940]: 192.168.122.99:33408 [13/Oct/2025:14:13:46.661] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/125/125 204 165 - - ---- 54/2/1/1/0 0/0 "PUT /v3/projects/92af55bb2a0d49b881c82859840814af/users/cb794496675244d2a418f5df6e510de1/roles/78943ac09d144499927deb11fcdf9ecf HTTP/1.1"
Oct 13 14:13:46 standalone.localdomain podman[168128]: 2025-10-13 14:13:46.822251722 +0000 UTC m=+0.088091964 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, com.redhat.component=openstack-mariadb-container, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, container_name=clustercheck, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-mariadb, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45)
Oct 13 14:13:46 standalone.localdomain haproxy[70940]: 192.168.122.99:33424 [13/Oct/2025:14:13:46.751] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/86/86 200 560 - - ---- 54/2/1/1/0 0/0 "GET /v3/projects/2b9a1b1c6a1044c091807e43f2439b56 HTTP/1.1"
Oct 13 14:13:46 standalone.localdomain podman[168127]: 2025-10-13 14:13:46.870310262 +0000 UTC m=+0.136939088 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-type=git, architecture=x86_64, config_id=tripleo_step5, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Oct 13 14:13:46 standalone.localdomain haproxy[70940]: 192.168.122.99:33432 [13/Oct/2025:14:13:46.789] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/107/107 204 165 - - ---- 54/2/1/1/0 0/0 "PUT /v3/projects/363db56c0e0f4920ae200004344cf7f3/users/cb794496675244d2a418f5df6e510de1/roles/78943ac09d144499927deb11fcdf9ecf HTTP/1.1"
Oct 13 14:13:46 standalone.localdomain podman[168128]: 2025-10-13 14:13:46.906886642 +0000 UTC m=+0.172726874 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, description=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, batch=17.1_20250721.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, name=rhosp17/openstack-mariadb, config_id=tripleo_step2, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, vendor=Red Hat, Inc., container_name=clustercheck, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-mariadb-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1)
Oct 13 14:13:46 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:13:46 standalone.localdomain podman[168127]: 2025-10-13 14:13:46.939416006 +0000 UTC m=+0.206044782 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, release=1, container_name=nova_compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:13:46 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:13:46 standalone.localdomain haproxy[70940]: 192.168.122.99:33434 [13/Oct/2025:14:13:46.841] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/155/155 204 165 - - ---- 54/1/0/0/0 0/0 "DELETE /v3/projects/2b9a1b1c6a1044c091807e43f2439b56 HTTP/1.1"
Oct 13 14:13:47 standalone.localdomain haproxy[70940]: 172.21.0.2:37026 [13/Oct/2025:14:13:46.898] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/370/370 201 719 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:47 standalone.localdomain haproxy[70940]: 192.168.122.99:33444 [13/Oct/2025:14:13:47.002] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/360/360 201 561 - - ---- 54/1/0/0/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:47 standalone.localdomain haproxy[70940]: 172.21.0.2:37034 [13/Oct/2025:14:13:47.271] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/152/152 201 8107 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:47 standalone.localdomain haproxy[70940]: 192.168.122.99:33446 [13/Oct/2025:14:13:47.366] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/133/133 200 570 - - ---- 54/2/1/1/0 0/0 "PATCH /v3/projects/6451267160f6408cae5a9857bc9654b2 HTTP/1.1"
Oct 13 14:13:47 standalone.localdomain haproxy[70940]: 192.168.122.99:33460 [13/Oct/2025:14:13:47.426] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/136/136 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:47 standalone.localdomain haproxy[70940]: 192.168.122.99:33464 [13/Oct/2025:14:13:47.503] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/84/84 200 557 - - ---- 54/1/0/0/0 0/0 "GET /v3/projects/6451267160f6408cae5a9857bc9654b2 HTTP/1.1"
Oct 13 14:13:47 standalone.localdomain haproxy[70940]: 172.21.0.2:37042 [13/Oct/2025:14:13:47.566] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/75/75 201 8107 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1165: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:47 standalone.localdomain haproxy[70940]: 192.168.122.99:33470 [13/Oct/2025:14:13:47.591] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/123/123 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/projects/6451267160f6408cae5a9857bc9654b2 HTTP/1.1"
Oct 13 14:13:47 standalone.localdomain haproxy[70940]: 192.168.122.99:33482 [13/Oct/2025:14:13:47.646] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/149/149 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/roles/78943ac09d144499927deb11fcdf9ecf HTTP/1.1"
Oct 13 14:13:47 standalone.localdomain haproxy[70940]: 192.168.122.99:33486 [13/Oct/2025:14:13:47.718] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/142/142 404 321 - - ---- 54/2/1/1/0 0/0 "PATCH /v3/domains/2c5a080a9996487789b68f13cdf9dd0c HTTP/1.1"
Oct 13 14:13:47 standalone.localdomain haproxy[70940]: 192.168.122.99:33498 [13/Oct/2025:14:13:47.799] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/178/178 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/projects/363db56c0e0f4920ae200004344cf7f3 HTTP/1.1"
Oct 13 14:13:48 standalone.localdomain haproxy[70940]: 192.168.122.99:33504 [13/Oct/2025:14:13:47.863] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/184/184 404 321 - - ---- 54/2/1/1/0 0/0 "PATCH /v3/domains/53bd2f8b37a54b388249ebbb2719f60e HTTP/1.1"
Oct 13 14:13:48 standalone.localdomain haproxy[70940]: 192.168.122.99:33520 [13/Oct/2025:14:13:47.981] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/121/121 404 322 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/projects/363db56c0e0f4920ae200004344cf7f3 HTTP/1.1"
Oct 13 14:13:48 standalone.localdomain haproxy[70940]: 192.168.122.99:33524 [13/Oct/2025:14:13:48.050] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/139/139 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/users/89bd8032744245d1a05c082e6dd45241 HTTP/1.1"
Oct 13 14:13:48 standalone.localdomain haproxy[70940]: 192.168.122.99:33540 [13/Oct/2025:14:13:48.105] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/173/173 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/projects/92af55bb2a0d49b881c82859840814af HTTP/1.1"
Oct 13 14:13:48 standalone.localdomain haproxy[70940]: 192.168.122.99:33548 [13/Oct/2025:14:13:48.193] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/175/175 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/users/75e5418128e9480bb151a9bd982fa780 HTTP/1.1"
Oct 13 14:13:48 standalone.localdomain haproxy[70940]: 192.168.122.99:33562 [13/Oct/2025:14:13:48.282] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/196/196 204 165 - - ---- 54/1/0/0/0 0/0 "DELETE /v3/users/cb794496675244d2a418f5df6e510de1 HTTP/1.1"
Oct 13 14:13:48 standalone.localdomain ceph-mon[29756]: pgmap v1165: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:48 standalone.localdomain haproxy[70940]: 172.21.0.2:37056 [13/Oct/2025:14:13:48.371] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/469/469 201 8100 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:48 standalone.localdomain haproxy[70940]: 192.168.122.99:33570 [13/Oct/2025:14:13:48.485] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/435/435 404 321 - - ---- 54/1/0/0/0 0/0 "PATCH /v3/domains/f7138ecb57ad4ce3a9b78235d8beebb2 HTTP/1.1"
Oct 13 14:13:48 standalone.localdomain haproxy[70940]: 172.17.0.2:58178 [13/Oct/2025:14:13:48.850] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/118/118 200 8095 - - ---- 54/1/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:48 standalone.localdomain haproxy[70940]: 192.168.122.99:33578 [13/Oct/2025:14:13:48.923] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/61/61 404 321 - - ---- 54/1/0/0/0 0/0 "PATCH /v3/domains/31e4532ac0b04a8094e4a0f3f8b77049 HTTP/1.1"
Oct 13 14:13:49 standalone.localdomain haproxy[70940]: 192.168.122.99:33584 [13/Oct/2025:14:13:48.988] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/19/19 404 321 - - ---- 54/1/0/0/0 0/0 "PATCH /v3/domains/08e9f8d706a9418e9e4c8d469ece6a72 HTTP/1.1"
Oct 13 14:13:49 standalone.localdomain haproxy[70940]: 192.168.122.99:33600 [13/Oct/2025:14:13:49.009] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/17/17 404 321 - - ---- 54/1/0/0/0 0/0 "PATCH /v3/domains/a65b1497674944abb43e937c51b04615 HTTP/1.1"
Oct 13 14:13:49 standalone.localdomain haproxy[70940]: 192.168.122.99:33610 [13/Oct/2025:14:13:49.030] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/98/98 204 165 - - ---- 54/1/0/0/0 0/0 "DELETE /v3/users/037f61f895d14dfc8bcda35abd24d0fc HTTP/1.1"
Oct 13 14:13:49 standalone.localdomain haproxy[70940]: 172.21.0.2:51896 [13/Oct/2025:14:13:48.844] neutron neutron/standalone.internalapi.localdomain 0/0/0/312/312 200 2976 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=ffa4b90840e744128b3b10658bf5c402&name=default HTTP/1.1"
Oct 13 14:13:49 standalone.localdomain haproxy[70940]: 192.168.122.99:33620 [13/Oct/2025:14:13:49.131] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/139/139 204 165 - - ---- 54/1/0/0/0 0/0 "DELETE /v3/users/05e9ae5467c44eac882b10216b1f471a HTTP/1.1"
Oct 13 14:13:49 standalone.localdomain haproxy[70940]: 172.21.0.2:51898 [13/Oct/2025:14:13:49.158] neutron neutron/standalone.internalapi.localdomain 0/0/0/185/185 204 152 - - ---- 54/1/0/0/0 0/0 "DELETE /v2.0/security-groups/8df36501-8bc1-4471-b5bb-9dd1ad61a6f5 HTTP/1.1"
Oct 13 14:13:49 standalone.localdomain haproxy[70940]: 192.168.122.99:33632 [13/Oct/2025:14:13:49.273] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/132/132 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/users/b282f3c5f206443cb703aa872eb7a686 HTTP/1.1"
Oct 13 14:13:49 standalone.localdomain haproxy[70940]: 192.168.122.99:33648 [13/Oct/2025:14:13:49.346] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/150/150 204 165 - - ---- 54/1/0/0/0 0/0 "DELETE /v3/projects/ffa4b90840e744128b3b10658bf5c402 HTTP/1.1"
Oct 13 14:13:49 standalone.localdomain haproxy[70940]: 172.21.0.2:51906 [13/Oct/2025:14:13:49.498] neutron neutron/standalone.internalapi.localdomain 0/0/0/105/105 200 2976 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=a53f0865fedd4deb93471b0af0b5bff0&name=default HTTP/1.1"
Oct 13 14:13:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1166: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:49 standalone.localdomain haproxy[70940]: 172.21.0.2:52074 [13/Oct/2025:14:13:49.606] neutron neutron/standalone.internalapi.localdomain 0/0/0/201/201 204 152 - - ---- 54/1/0/0/0 0/0 "DELETE /v2.0/security-groups/b2b0878b-0526-407f-abff-41cf91f4b354 HTTP/1.1"
Oct 13 14:13:49 standalone.localdomain haproxy[70940]: 172.21.0.2:37058 [13/Oct/2025:14:13:49.408] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/422/422 201 8100 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:49 standalone.localdomain haproxy[70940]: 192.168.122.99:46520 [13/Oct/2025:14:13:49.811] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/92/92 204 165 - - ---- 54/1/0/0/0 0/0 "DELETE /v3/projects/a53f0865fedd4deb93471b0af0b5bff0 HTTP/1.1"
Oct 13 14:13:50 standalone.localdomain haproxy[70940]: 172.17.0.2:58178 [13/Oct/2025:14:13:49.836] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/165/165 200 8095 - - ---- 54/2/1/1/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:13:50 standalone.localdomain haproxy[70940]: 172.21.0.2:52082 [13/Oct/2025:14:13:49.832] neutron neutron/standalone.internalapi.localdomain 0/0/0/339/339 200 2976 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=15118c8f5a1d495f8612885771630022&name=default HTTP/1.1"
Oct 13 14:13:50 standalone.localdomain haproxy[70940]: 172.21.0.2:46400 [13/Oct/2025:14:13:49.906] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/372/372 201 8100 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:50 standalone.localdomain haproxy[70940]: 172.21.0.2:52084 [13/Oct/2025:14:13:50.173] neutron neutron/standalone.internalapi.localdomain 0/0/0/155/155 204 152 - - ---- 54/1/0/0/0 0/0 "DELETE /v2.0/security-groups/c28a8db8-973f-42a2-a0e7-aa6cd5a5fe96 HTTP/1.1"
Oct 13 14:13:50 standalone.localdomain haproxy[70940]: 172.21.0.2:46404 [13/Oct/2025:14:13:50.283] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/299/299 201 8100 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:50 standalone.localdomain haproxy[70940]: 192.168.122.99:46526 [13/Oct/2025:14:13:50.332] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/356/356 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/projects/15118c8f5a1d495f8612885771630022 HTTP/1.1"
Oct 13 14:13:50 standalone.localdomain haproxy[70940]: 192.168.122.99:46542 [13/Oct/2025:14:13:50.586] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/174/174 200 510 - - ---- 54/1/0/0/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:13:50 standalone.localdomain ceph-mon[29756]: pgmap v1166: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:50 standalone.localdomain haproxy[70940]: 192.168.122.99:46558 [13/Oct/2025:14:13:50.765] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/30/30 201 602 - - ---- 54/1/0/0/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:50 standalone.localdomain haproxy[70940]: 172.21.0.2:52096 [13/Oct/2025:14:13:50.691] neutron neutron/standalone.internalapi.localdomain 0/0/0/174/174 200 2976 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=2020480fe5ca43d6bd1d2085962e64a8&name=default HTTP/1.1"
Oct 13 14:13:51 standalone.localdomain haproxy[70940]: 172.21.0.2:52112 [13/Oct/2025:14:13:50.868] neutron neutron/standalone.internalapi.localdomain 0/0/0/256/256 204 152 - - ---- 54/1/0/0/0 0/0 "DELETE /v2.0/security-groups/e7a262e1-e118-45f0-a3d4-4471cd867c59 HTTP/1.1"
Oct 13 14:13:51 standalone.localdomain haproxy[70940]: 192.168.122.99:46570 [13/Oct/2025:14:13:50.798] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/365/365 201 512 - - ---- 54/2/1/1/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:51 standalone.localdomain haproxy[70940]: 192.168.122.99:46584 [13/Oct/2025:14:13:51.126] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/169/169 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/projects/2020480fe5ca43d6bd1d2085962e64a8 HTTP/1.1"
Oct 13 14:13:51 standalone.localdomain haproxy[70940]: 192.168.122.99:46586 [13/Oct/2025:14:13:51.165] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/204/204 200 2700 - - ---- 54/1/0/0/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:51 standalone.localdomain haproxy[70940]: 192.168.122.99:46600 [13/Oct/2025:14:13:51.373] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/38/38 204 165 - - ---- 54/1/0/0/0 0/0 "PUT /v3/projects/f14b211757c149a891bdfcd939743db2/users/5211fe019dc24d3e81d7e5bd2bbcfd1d/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:51 standalone.localdomain haproxy[70940]: 172.21.0.2:52120 [13/Oct/2025:14:13:51.298] neutron neutron/standalone.internalapi.localdomain 0/0/0/132/132 200 2976 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=e69ac02d79d34d9c957819d3dcce8263&name=default HTTP/1.1"
Oct 13 14:13:51 standalone.localdomain haproxy[70940]: 192.168.122.99:46614 [13/Oct/2025:14:13:51.414] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/53/53 200 2700 - - ---- 54/1/0/0/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:51 standalone.localdomain haproxy[70940]: 192.168.122.99:46630 [13/Oct/2025:14:13:51.470] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/20/20 204 165 - - ---- 54/1/0/0/0 0/0 "PUT /v3/projects/f14b211757c149a891bdfcd939743db2/users/5211fe019dc24d3e81d7e5bd2bbcfd1d/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:51 standalone.localdomain haproxy[70940]: 172.21.0.2:52124 [13/Oct/2025:14:13:51.432] neutron neutron/standalone.internalapi.localdomain 0/0/0/234/234 204 152 - - ---- 54/1/0/0/0 0/0 "DELETE /v2.0/security-groups/d840fdbb-54e6-4fe4-a41e-61c898a03a1f HTTP/1.1"
Oct 13 14:13:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1167: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:51 standalone.localdomain haproxy[70940]: 172.21.0.2:46416 [13/Oct/2025:14:13:51.497] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/383/383 201 8140 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:51 standalone.localdomain haproxy[70940]: 192.168.122.99:46636 [13/Oct/2025:14:13:51.669] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/328/328 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/projects/e69ac02d79d34d9c957819d3dcce8263 HTTP/1.1"
Oct 13 14:13:52 standalone.localdomain haproxy[70940]: 192.168.122.99:46652 [13/Oct/2025:14:13:51.884] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/185/185 201 604 - - ---- 54/1/0/0/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:52 standalone.localdomain sudo[168490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:13:52 standalone.localdomain sudo[168490]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:13:52 standalone.localdomain sudo[168490]: pam_unix(sudo:session): session closed for user root
Oct 13 14:13:52 standalone.localdomain sudo[168505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:13:52 standalone.localdomain sudo[168505]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:13:52 standalone.localdomain haproxy[70940]: 172.21.0.2:46430 [13/Oct/2025:14:13:52.001] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/372/372 201 8100 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:52 standalone.localdomain haproxy[70940]: 192.168.122.99:46662 [13/Oct/2025:14:13:52.072] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/567/567 201 512 - - ---- 54/1/0/0/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:52 standalone.localdomain ceph-mon[29756]: pgmap v1167: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:52 standalone.localdomain haproxy[70940]: 172.21.0.2:46438 [13/Oct/2025:14:13:52.380] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/530/530 201 8100 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:52 standalone.localdomain sudo[168505]: pam_unix(sudo:session): session closed for user root
Oct 13 14:13:52 standalone.localdomain haproxy[70940]: 192.168.122.99:46668 [13/Oct/2025:14:13:52.643] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/291/291 200 2700 - - ---- 54/2/1/1/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:52 standalone.localdomain haproxy[70940]: 192.168.122.99:46676 [13/Oct/2025:14:13:52.925] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/33/33 200 510 - - ---- 54/2/1/1/0 0/0 "GET /v3/domains?name=Default HTTP/1.1"
Oct 13 14:13:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:13:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:13:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:13:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:13:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:13:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:13:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:13:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:13:52 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 4b0f3722-ad39-44e9-885c-3b4042dcbe0b (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:13:52 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 4b0f3722-ad39-44e9-885c-3b4042dcbe0b (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:13:52 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 4b0f3722-ad39-44e9-885c-3b4042dcbe0b (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:13:53 standalone.localdomain haproxy[70940]: 192.168.122.99:46692 [13/Oct/2025:14:13:52.937] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/64/64 204 165 - - ---- 54/2/1/1/0 0/0 "PUT /v3/projects/f9aa987f8a72457cb115a055e6facabd/users/94bd3e704390469ba4edf0e85c413985/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:13:53 standalone.localdomain sudo[168561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:13:53 standalone.localdomain haproxy[70940]: 192.168.122.99:46704 [13/Oct/2025:14:13:52.960] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/108/108 201 576 - - ---- 54/2/1/1/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:53 standalone.localdomain sudo[168561]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:13:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:13:53 standalone.localdomain sudo[168561]: pam_unix(sudo:session): session closed for user root
Oct 13 14:13:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:13:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:13:53 standalone.localdomain podman[168577]: 2025-10-13 14:13:53.139273493 +0000 UTC m=+0.057337885 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, container_name=iscsid, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container)
Oct 13 14:13:53 standalone.localdomain haproxy[70940]: 192.168.122.99:46706 [13/Oct/2025:14:13:53.003] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/153/153 200 2700 - - ---- 54/2/1/1/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:53 standalone.localdomain podman[168577]: 2025-10-13 14:13:53.186083265 +0000 UTC m=+0.104147747 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=iscsid, release=1)
Oct 13 14:13:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:13:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:13:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:13:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:13:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:13:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:13:53 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:13:53 standalone.localdomain podman[168585]: 2025-10-13 14:13:53.190532821 +0000 UTC m=+0.095222533 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T16:10:12, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, container_name=cinder_scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-cinder-scheduler-container, tcib_managed=true, name=rhosp17/openstack-cinder-scheduler)
Oct 13 14:13:53 standalone.localdomain systemd[1]: tmp-crun.dwuTN5.mount: Deactivated successfully.
Oct 13 14:13:53 standalone.localdomain podman[168585]: 2025-10-13 14:13:53.269781945 +0000 UTC m=+0.174471667 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, container_name=cinder_scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, build-date=2025-07-21T16:10:12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step4, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, name=rhosp17/openstack-cinder-scheduler, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-cinder-scheduler-container, release=1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, version=17.1.9)
Oct 13 14:13:53 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:13:53 standalone.localdomain podman[168579]: 2025-10-13 14:13:53.272340423 +0000 UTC m=+0.185976730 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., build-date=2025-07-21T15:58:55, com.redhat.component=openstack-cinder-api-container, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-cinder-api, distribution-scope=public, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, container_name=cinder_api_cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, summary=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git)
Oct 13 14:13:53 standalone.localdomain podman[168579]: 2025-10-13 14:13:53.354908498 +0000 UTC m=+0.268544815 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, architecture=x86_64, container_name=cinder_api_cron, name=rhosp17/openstack-cinder-api, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T15:58:55, tcib_managed=true, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, com.redhat.component=openstack-cinder-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1)
Oct 13 14:13:53 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:13:53 standalone.localdomain haproxy[70940]: 192.168.122.99:46718 [13/Oct/2025:14:13:53.071] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/385/385 201 499 - - ---- 54/2/1/1/0 0/0 "POST /v3/users HTTP/1.1"
Oct 13 14:13:53 standalone.localdomain haproxy[70940]: 192.168.122.99:46722 [13/Oct/2025:14:13:53.160] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/317/317 204 165 - - ---- 54/2/1/1/0 0/0 "PUT /v3/projects/f9aa987f8a72457cb115a055e6facabd/users/94bd3e704390469ba4edf0e85c413985/roles/c256a819446e40c281f7e7aa56296425 HTTP/1.1"
Oct 13 14:13:53 standalone.localdomain haproxy[70940]: 192.168.122.99:46736 [13/Oct/2025:14:13:53.460] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/81/81 200 2700 - - ---- 54/2/1/1/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:53 standalone.localdomain haproxy[70940]: 192.168.122.99:46750 [13/Oct/2025:14:13:53.481] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/83/83 200 2700 - - ---- 54/2/1/1/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:53 standalone.localdomain haproxy[70940]: 192.168.122.99:46762 [13/Oct/2025:14:13:53.547] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/67/67 204 165 - - ---- 54/2/1/1/0 0/0 "PUT /v3/projects/11f2aabe1649428091077f6ea50b313c/users/94126c22656742f4978722fc8afc86d7/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:53 standalone.localdomain haproxy[70940]: 192.168.122.99:46768 [13/Oct/2025:14:13:53.569] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/101/101 204 165 - - ---- 54/2/1/1/0 0/0 "PUT /v3/projects/f9aa987f8a72457cb115a055e6facabd/users/94bd3e704390469ba4edf0e85c413985/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1168: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:53 standalone.localdomain haproxy[70940]: 192.168.122.99:46784 [13/Oct/2025:14:13:53.616] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/95/95 200 2700 - - ---- 55/2/0/0/0 0/0 "GET /v3/roles HTTP/1.1"
Oct 13 14:13:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:13:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:13:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:13:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:13:53 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:13:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:13:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 172.21.0.2:46444 [13/Oct/2025:14:13:53.678] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/388/388 201 8202 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 192.168.122.99:46786 [13/Oct/2025:14:13:53.712] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/4/373/377 204 165 - - ---- 54/2/1/1/0 0/0 "PUT /v3/projects/11f2aabe1649428091077f6ea50b313c/users/94126c22656742f4978722fc8afc86d7/roles/98963189b9624f4a8f70329b16b1abb6 HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 192.168.122.99:46800 [13/Oct/2025:14:13:54.071] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/82/82 403 345 - - ---- 54/1/0/0/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 172.21.0.2:46458 [13/Oct/2025:14:13:54.097] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/430/430 201 8114 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 192.168.122.99:46812 [13/Oct/2025:14:13:54.157] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/430/430 400 494 - - ---- 54/2/1/1/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 192.168.122.99:46814 [13/Oct/2025:14:13:54.531] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/67/67 200 457 - - ---- 54/2/1/1/0 0/0 "GET /v3 HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 192.168.122.99:46818 [13/Oct/2025:14:13:54.594] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/19/19 400 623 - - ---- 54/2/1/1/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 192.168.122.99:46832 [13/Oct/2025:14:13:54.603] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/20/20 200 457 - - ---- 54/2/1/1/0 0/0 "GET /v3 HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 192.168.122.99:46840 [13/Oct/2025:14:13:54.620] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/26/26 404 322 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/projects/d038248729ea4e899fb9d577264d4b8d HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 192.168.122.99:46856 [13/Oct/2025:14:13:54.627] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/28/28 200 457 - - ---- 54/2/1/1/0 0/0 "GET /v3 HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 192.168.122.99:46870 [13/Oct/2025:14:13:54.650] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/17/17 403 344 - - ---- 54/2/1/1/0 0/0 "GET /v3/projects HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 192.168.122.99:46872 [13/Oct/2025:14:13:54.660] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/9/9 300 525 - - ---- 53/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 192.168.122.99:46878 [13/Oct/2025:14:13:54.672] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/30/30 201 564 - - ---- 54/2/1/1/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 192.168.122.99:46886 [13/Oct/2025:14:13:54.672] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/32/32 300 525 - - ---- 54/2/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 192.168.122.99:46898 [13/Oct/2025:14:13:54.704] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/55/55 409 492 - - ---- 54/2/1/1/0 0/0 "POST /v3/projects HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 192.168.122.99:46914 [13/Oct/2025:14:13:54.710] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/132/132 204 165 - - ---- 54/2/1/1/0 0/0 "DELETE /v3/users/94126c22656742f4978722fc8afc86d7 HTTP/1.1"
Oct 13 14:13:54 standalone.localdomain ceph-mon[29756]: pgmap v1168: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:13:54 standalone.localdomain haproxy[70940]: 192.168.122.99:46924 [13/Oct/2025:14:13:54.761] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/191/191 204 165 - - ---- 54/1/0/0/0 0/0 "DELETE /v3/projects/2fc5da71fef04c37b7b62aad4b66c81a HTTP/1.1"
Oct 13 14:13:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:13:55 standalone.localdomain haproxy[70940]: 172.21.0.2:46466 [13/Oct/2025:14:13:54.844] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/410/410 201 8100 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:55 standalone.localdomain haproxy[70940]: 192.168.122.99:46932 [13/Oct/2025:14:13:54.958] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/353/353 204 165 - - ---- 54/1/0/0/0 0/0 "DELETE /v3/users/5211fe019dc24d3e81d7e5bd2bbcfd1d HTTP/1.1"
Oct 13 14:13:55 standalone.localdomain haproxy[70940]: 172.17.0.2:58178 [13/Oct/2025:14:13:55.263] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/132/132 200 8095 - - ---- 54/1/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:55 standalone.localdomain haproxy[70940]: 192.168.122.99:46938 [13/Oct/2025:14:13:55.313] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/153/153 204 165 - - ---- 54/1/0/0/0 0/0 "DELETE /v3/users/94bd3e704390469ba4edf0e85c413985 HTTP/1.1"
Oct 13 14:13:55 standalone.localdomain haproxy[70940]: 172.21.0.2:52138 [13/Oct/2025:14:13:55.257] neutron neutron/standalone.internalapi.localdomain 0/0/0/304/304 200 2976 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=11f2aabe1649428091077f6ea50b313c&name=default HTTP/1.1"
Oct 13 14:13:55 standalone.localdomain runuser[168696]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:13:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1169: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:55 standalone.localdomain haproxy[70940]: 172.21.0.2:52146 [13/Oct/2025:14:13:55.564] neutron neutron/standalone.internalapi.localdomain 0/0/0/168/168 204 152 - - ---- 54/1/0/0/0 0/0 "DELETE /v2.0/security-groups/16c094be-9d07-4e3a-bae7-85be1ec26736 HTTP/1.1"
Oct 13 14:13:55 standalone.localdomain haproxy[70940]: 172.21.0.2:46480 [13/Oct/2025:14:13:55.469] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/375/375 201 8100 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:55 standalone.localdomain haproxy[70940]: 192.168.122.99:46950 [13/Oct/2025:14:13:55.735] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/174/174 204 165 - - ---- 54/1/0/0/0 0/0 "DELETE /v3/projects/11f2aabe1649428091077f6ea50b313c HTTP/1.1"
Oct 13 14:13:55 standalone.localdomain haproxy[70940]: 172.17.0.2:58178 [13/Oct/2025:14:13:55.854] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/135/135 200 8095 - - ---- 53/1/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:13:56 standalone.localdomain ceph-mon[29756]: pgmap v1169: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:56 standalone.localdomain haproxy[70940]: 172.21.0.2:52156 [13/Oct/2025:14:13:55.847] neutron neutron/standalone.internalapi.localdomain 0/0/0/315/315 200 2976 - - ---- 53/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=f9aa987f8a72457cb115a055e6facabd&name=default HTTP/1.1"
Oct 13 14:13:56 standalone.localdomain runuser[168696]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:13:56 standalone.localdomain haproxy[70940]: 172.21.0.2:52166 [13/Oct/2025:14:13:56.164] neutron neutron/standalone.internalapi.localdomain 0/0/0/176/176 204 152 - - ---- 53/1/0/0/0 0/0 "DELETE /v2.0/security-groups/b5f2faed-e632-4c38-992f-35e760a31cfb HTTP/1.1"
Oct 13 14:13:56 standalone.localdomain runuser[168765]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:13:56 standalone.localdomain haproxy[70940]: 192.168.122.99:46962 [13/Oct/2025:14:13:56.343] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/102/102 204 165 - - ---- 53/1/0/0/0 0/0 "DELETE /v3/projects/f9aa987f8a72457cb115a055e6facabd HTTP/1.1"
Oct 13 14:13:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:13:56 standalone.localdomain systemd[1]: tmp-crun.ZDmWyy.mount: Deactivated successfully.
Oct 13 14:13:56 standalone.localdomain podman[168810]: 2025-10-13 14:13:56.586957687 +0000 UTC m=+0.088831178 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, com.redhat.component=openstack-keystone-container, container_name=keystone, io.buildah.version=1.33.12, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T13:27:18)
Oct 13 14:13:56 standalone.localdomain podman[168810]: 2025-10-13 14:13:56.613971744 +0000 UTC m=+0.115845215 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step3, name=rhosp17/openstack-keystone, com.redhat.component=openstack-keystone-container, build-date=2025-07-21T13:27:18, container_name=keystone, 
io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, tcib_managed=true)
Oct 13 14:13:56 standalone.localdomain haproxy[70940]: 172.21.0.2:52170 [13/Oct/2025:14:13:56.448] neutron neutron/standalone.internalapi.localdomain 0/0/0/173/173 200 2976 - - ---- 53/1/0/0/0 0/0 "GET /v2.0/security-groups?tenant_id=f14b211757c149a891bdfcd939743db2&name=default HTTP/1.1"
Oct 13 14:13:56 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:13:56 standalone.localdomain haproxy[70940]: 172.21.0.2:52184 [13/Oct/2025:14:13:56.626] neutron neutron/standalone.internalapi.localdomain 0/0/0/166/166 204 152 - - ---- 53/1/0/0/0 0/0 "DELETE /v2.0/security-groups/545f036b-0aab-4a7c-a983-fc367c8ddc36 HTTP/1.1"
Oct 13 14:13:56 standalone.localdomain haproxy[70940]: 192.168.122.99:46966 [13/Oct/2025:14:13:56.795] keystone_admin keystone_admin/standalone.internalapi.localdomain 0/0/0/97/97 204 165 - - ---- 53/1/0/0/0 0/0 "DELETE /v3/projects/f14b211757c149a891bdfcd939743db2 HTTP/1.1"
Oct 13 14:13:57 standalone.localdomain runuser[168765]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:13:57 standalone.localdomain runuser[168853]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:13:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:13:57 standalone.localdomain sudo[154129]: pam_unix(sudo:session): session closed for user root
Oct 13 14:13:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:13:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:13:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:13:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:13:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:13:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:13:57 standalone.localdomain podman[168901]: 2025-10-13 14:13:57.281880461 +0000 UTC m=+0.118510436 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, build-date=2025-07-21T16:02:54, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.expose-services=, 
tcib_managed=true, container_name=nova_scheduler, release=1, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-nova-scheduler-container, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 14:13:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:13:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:13:57 standalone.localdomain podman[168902]: 2025-10-13 14:13:57.227604561 +0000 UTC m=+0.061221404 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, name=rhosp17/openstack-heat-api, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, io.openshift.expose-services=, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, 
vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public)
Oct 13 14:13:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:13:57 standalone.localdomain podman[168901]: 2025-10-13 14:13:57.394915007 +0000 UTC m=+0.231545002 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-scheduler-container, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, container_name=nova_scheduler, build-date=2025-07-21T16:02:54, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, version=17.1.9, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-scheduler, name=rhosp17/openstack-nova-scheduler, distribution-scope=public, io.openshift.expose-services=, release=1)
Oct 13 14:13:57 standalone.localdomain sudo[169082]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wywwwxyqbzudhxmwlmdpuqllofssvkhd ; /usr/bin/python3
Oct 13 14:13:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:13:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:13:57 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:13:57 standalone.localdomain sudo[169082]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:13:57 standalone.localdomain podman[168905]: 2025-10-13 14:13:57.442568385 +0000 UTC m=+0.273091523 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, 
distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, io.buildah.version=1.33.12)
Oct 13 14:13:57 standalone.localdomain podman[168951]: 2025-10-13 14:13:57.34692829 +0000 UTC m=+0.114775822 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, name=rhosp17/openstack-horizon, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T13:58:15, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-horizon-container, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 14:13:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:13:57 standalone.localdomain podman[168902]: 2025-10-13 14:13:57.46398197 +0000 UTC m=+0.297598833 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, name=rhosp17/openstack-heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, release=1, 
vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-container, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26)
Oct 13 14:13:57 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:13:57 standalone.localdomain podman[168905]: 2025-10-13 14:13:57.475967506 +0000 UTC m=+0.306490624 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, distribution-scope=public, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=logrotate_crond, release=1, 
version=17.1.9, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=)
Oct 13 14:13:57 standalone.localdomain podman[168951]: 2025-10-13 14:13:57.483827927 +0000 UTC m=+0.251675469 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-horizon-container, 
distribution-scope=public, description=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:15, container_name=horizon)
Oct 13 14:13:57 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:13:57 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:13:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:13:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2024230499' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:13:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:13:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2024230499' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:13:57 standalone.localdomain podman[169135]: 2025-10-13 14:13:57.534537798 +0000 UTC m=+0.068293460 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, com.redhat.component=openstack-manila-scheduler-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, container_name=manila_scheduler, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T15:56:28, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 
manila-scheduler, name=rhosp17/openstack-manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, tcib_managed=true)
Oct 13 14:13:57 standalone.localdomain podman[168952]: 2025-10-13 14:13:57.534657061 +0000 UTC m=+0.300032097 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-nova-conductor-container, tcib_managed=true, container_name=nova_conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, name=rhosp17/openstack-nova-conductor, 
vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-conductor, build-date=2025-07-21T15:44:17, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:13:57 standalone.localdomain podman[168900]: 2025-10-13 14:13:57.257845905 +0000 UTC m=+0.094450099 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, build-date=2025-07-21T15:44:11, com.redhat.component=openstack-heat-engine-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, 
container_name=heat_engine, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4)
Oct 13 14:13:57 standalone.localdomain podman[168950]: 2025-10-13 14:13:57.415559199 +0000 UTC m=+0.184864785 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:43, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, batch=17.1_20250721.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, tcib_managed=true, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, container_name=memcached)
Oct 13 14:13:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2024230499' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:13:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2024230499' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:13:57 standalone.localdomain python3[169113]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/ci-framework-data/tests/pre-adoption-tempest//dpa_tempest_workspace/tempest_run_output.txt follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 14:13:57 standalone.localdomain sudo[169082]: pam_unix(sudo:session): session closed for user root
Oct 13 14:13:57 standalone.localdomain podman[169099]: 2025-10-13 14:13:57.622024783 +0000 UTC m=+0.183633547 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T15:58:55, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vcs-type=git, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, container_name=cinder_api, io.buildah.version=1.33.12, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, release=1, architecture=x86_64, distribution-scope=public, version=17.1.9, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team)
Oct 13 14:13:57 standalone.localdomain podman[169048]: 2025-10-13 14:13:57.583741713 +0000 UTC m=+0.200357649 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., container_name=heat_api_cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, 
io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T14:49:55, version=17.1.9, com.redhat.component=openstack-heat-api-cfn-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:13:57 standalone.localdomain podman[169135]: 2025-10-13 14:13:57.638854648 +0000 UTC m=+0.172610350 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, com.redhat.component=openstack-manila-scheduler-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, architecture=x86_64, config_id=tripleo_step4, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, release=1, name=rhosp17/openstack-manila-scheduler, vendor=Red Hat, Inc., batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-scheduler, container_name=manila_scheduler, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:56:28)
Oct 13 14:13:57 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:13:57 standalone.localdomain podman[169048]: 2025-10-13 14:13:57.667857185 +0000 UTC m=+0.284473151 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-cfn-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, container_name=heat_api_cfn, io.openshift.expose-services=, release=1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T14:49:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:13:57 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:13:57 standalone.localdomain podman[169054]: 2025-10-13 14:13:57.678717008 +0000 UTC m=+0.291012572 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-manila-api, release=1, build-date=2025-07-21T16:06:43, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-manila-api-container, summary=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, vcs-type=git, description=Red Hat OpenStack Platform 17.1 manila-api, config_id=tripleo_step4, container_name=manila_api_cron, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api)
Oct 13 14:13:57 standalone.localdomain podman[169099]: 2025-10-13 14:13:57.684144354 +0000 UTC m=+0.245753118 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, description=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, name=rhosp17/openstack-cinder-api, com.redhat.component=openstack-cinder-api-container, container_name=cinder_api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, tcib_managed=true, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T15:58:55, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, release=1, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:13:57 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:13:57 standalone.localdomain podman[168900]: 2025-10-13 14:13:57.701515145 +0000 UTC m=+0.538119359 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, container_name=heat_engine, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-heat-engine, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, com.redhat.component=openstack-heat-engine-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:13:57 standalone.localdomain podman[169050]: 2025-10-13 14:13:57.548430623 +0000 UTC m=+0.158337994 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, vcs-type=git, 
tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-container, release=1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1)
Oct 13 14:13:57 standalone.localdomain podman[168950]: 2025-10-13 14:13:57.704462475 +0000 UTC m=+0.473768091 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, com.redhat.component=openstack-memcached-container, version=17.1.9, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, container_name=memcached, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, build-date=2025-07-21T12:58:43, 
tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, summary=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:13:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1170: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:57 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:13:57 standalone.localdomain podman[168952]: 2025-10-13 14:13:57.720176485 +0000 UTC m=+0.485551511 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, release=1, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, container_name=nova_conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-conductor, 
com.redhat.component=openstack-nova-conductor-container, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20250721.1, config_id=tripleo_step4, build-date=2025-07-21T15:44:17, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:13:57 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:13:57 standalone.localdomain podman[169050]: 2025-10-13 14:13:57.734100191 +0000 UTC m=+0.344007562 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red 
Hat OpenStack Platform 17.1 heat-api, release=1, tcib_managed=true, version=17.1.9, container_name=heat_api, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git)
Oct 13 14:13:57 standalone.localdomain podman[169054]: 2025-10-13 14:13:57.741155367 +0000 UTC m=+0.353450941 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 
manila-api, build-date=2025-07-21T16:06:43, io.buildah.version=1.33.12, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-manila-api-container, managed_by=tripleo_ansible, release=1, version=17.1.9, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, vcs-type=git, container_name=manila_api_cron, maintainer=OpenStack TripleO Team)
Oct 13 14:13:57 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:13:57 standalone.localdomain podman[169101]: 2025-10-13 14:13:57.601423183 +0000 UTC m=+0.170301559 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, release=1, architecture=x86_64, container_name=neutron_api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-server, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, build-date=2025-07-21T15:44:03, version=17.1.9, com.redhat.component=openstack-neutron-server-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=)
Oct 13 14:13:57 standalone.localdomain sudo[169317]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wwjlrqcyukjrpiejgkymdccdvgznfmmv ; /usr/bin/python3
Oct 13 14:13:57 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:13:57 standalone.localdomain sudo[169317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:13:57 standalone.localdomain podman[169101]: 2025-10-13 14:13:57.787731622 +0000 UTC m=+0.356609998 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-server-container, description=Red Hat OpenStack Platform 17.1 neutron-server, container_name=neutron_api, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T15:44:03, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, release=1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-server, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true)
Oct 13 14:13:57 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:13:57 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:13:57 standalone.localdomain runuser[168853]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:13:57 standalone.localdomain python3[169321]: ansible-ansible.legacy.copy Invoked with dest=/home/zuul/ci-framework-data/tests/pre-adoption-tempest//dpa_tempest_workspace/tempest_run_output.txt src=/home/zuul/.ansible/tmp/ansible-tmp-1760364837.320606-182-279840584543482/source _original_basename=tmp3aimxrdq follow=False checksum=d038465291091449281c9ea7e0d009b1d5dca527 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:13:57 standalone.localdomain sudo[169317]: pam_unix(sudo:session): session closed for user root
Oct 13 14:13:58 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=63811 DF PROTO=TCP SPT=45138 DPT=19885 SEQ=2588821215 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0959E2C90000000001030307) 
Oct 13 14:13:58 standalone.localdomain sudo[169375]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lajezklxtmaudkvjqlnqcihgnvozjsuc ; /usr/bin/python3
Oct 13 14:13:58 standalone.localdomain sudo[169375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:13:58 standalone.localdomain python3[169434]: ansible-ansible.legacy.command Invoked with _raw_params=OS_CLOUD=standalone openstack subnet delete f7c97b4e-a144-4065-a133-87f2c9e328f5 _uses_shell=True zuul_log_id=fa163ec2-ffbe-5cff-7786-000000000040-1-controller zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:13:58 standalone.localdomain ceph-mon[29756]: pgmap v1170: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:59 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=63812 DF PROTO=TCP SPT=45138 DPT=19885 SEQ=2588821215 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0959E6F10000000001030307) 
Oct 13 14:13:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1171: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:13:59 standalone.localdomain haproxy[70940]: 172.21.0.2:49644 [13/Oct/2025:14:13:59.992] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:14:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:14:00 standalone.localdomain haproxy[70940]: 172.21.0.2:49644 [13/Oct/2025:14:13:59.998] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/351/351 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:14:00 standalone.localdomain haproxy[70940]: 172.17.0.2:58178 [13/Oct/2025:14:14:00.376] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/66/66 200 8095 - - ---- 54/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:14:00 standalone.localdomain haproxy[70940]: 172.21.0.2:50250 [13/Oct/2025:14:14:00.370] neutron neutron/standalone.internalapi.localdomain 0/0/0/143/143 200 827 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/subnets/f7c97b4e-a144-4065-a133-87f2c9e328f5 HTTP/1.1"
Oct 13 14:14:00 standalone.localdomain haproxy[70940]: 172.21.0.2:50250 [13/Oct/2025:14:14:00.518] neutron neutron/standalone.internalapi.localdomain 0/0/0/182/182 204 152 - - ---- 54/1/0/0/0 0/0 "DELETE /v2.0/subnets/f7c97b4e-a144-4065-a133-87f2c9e328f5 HTTP/1.1"
Oct 13 14:14:00 standalone.localdomain ceph-mon[29756]: pgmap v1171: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:00 standalone.localdomain sudo[169375]: pam_unix(sudo:session): session closed for user root
Oct 13 14:14:01 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=63813 DF PROTO=TCP SPT=45138 DPT=19885 SEQ=2588821215 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0959EEF10000000001030307) 
Oct 13 14:14:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1172: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:14:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:14:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:14:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:14:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:14:01 standalone.localdomain podman[169658]: 2025-10-13 14:14:01.8502792 +0000 UTC m=+0.102472866 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, container_name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, tcib_managed=true, 
io.openshift.expose-services=, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible)
Oct 13 14:14:01 standalone.localdomain podman[169661]: 2025-10-13 14:14:01.896707519 +0000 UTC m=+0.141012523 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, name=rhosp17/openstack-swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:14:01 standalone.localdomain systemd[1]: tmp-crun.48tIOo.mount: Deactivated successfully.
Oct 13 14:14:01 standalone.localdomain podman[169726]: 2025-10-13 14:14:01.955140276 +0000 UTC m=+0.077955845 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, build-date=2025-07-21T12:58:45, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, com.redhat.component=openstack-mariadb-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 13 14:14:01 standalone.localdomain podman[169660]: 2025-10-13 14:14:01.975554682 +0000 UTC m=+0.222741514 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-container, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:14:02 standalone.localdomain podman[169672]: 2025-10-13 14:14:02.060747217 +0000 UTC m=+0.299337586 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, name=rhosp17/openstack-nova-novncproxy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, container_name=nova_vnc_proxy, build-date=2025-07-21T15:24:10, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, 
tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-nova-novncproxy-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1)
Oct 13 14:14:02 standalone.localdomain podman[169658]: 2025-10-13 14:14:02.071941339 +0000 UTC m=+0.324134975 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, batch=17.1_20250721.1, container_name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public)
Oct 13 14:14:02 standalone.localdomain podman[169726]: 2025-10-13 14:14:02.089832116 +0000 UTC m=+0.212647675 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, com.redhat.component=openstack-mariadb-container, name=rhosp17/openstack-mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public)
Oct 13 14:14:02 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:14:02 standalone.localdomain podman[169659]: 2025-10-13 14:14:02.010627954 +0000 UTC m=+0.263469369 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, distribution-scope=public)
Oct 13 14:14:02 standalone.localdomain podman[169661]: 2025-10-13 14:14:02.142150246 +0000 UTC m=+0.386455300 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, release=1, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 14:14:02 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:14:02 standalone.localdomain podman[169660]: 2025-10-13 14:14:02.166593074 +0000 UTC m=+0.413779946 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, config_id=tripleo_step4, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-container, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:14:02 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:14:02 standalone.localdomain podman[169824]: 2025-10-13 14:14:02.317163829 +0000 UTC m=+0.090899851 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, version=17.1.9, com.redhat.component=openstack-haproxy-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, name=rhosp17/openstack-haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:08:11, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:14:02 standalone.localdomain podman[169824]: 2025-10-13 14:14:02.324778652 +0000 UTC m=+0.098514634 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, com.redhat.component=openstack-haproxy-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:08:11, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-haproxy, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, release=1, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50)
Oct 13 14:14:02 standalone.localdomain podman[169672]: 2025-10-13 14:14:02.363981951 +0000 UTC m=+0.602572270 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.expose-services=, release=1, com.redhat.component=openstack-nova-novncproxy-container, io.buildah.version=1.33.12, container_name=nova_vnc_proxy, build-date=2025-07-21T15:24:10, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-novncproxy)
Oct 13 14:14:02 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:14:02 standalone.localdomain podman[169659]: 2025-10-13 14:14:02.40775688 +0000 UTC m=+0.660598275 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Oct 13 14:14:02 standalone.localdomain podman[169852]: 2025-10-13 14:14:02.41430073 +0000 UTC m=+0.063043249 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, build-date=2025-07-21T13:08:05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rabbitmq-container, name=rhosp17/openstack-rabbitmq, io.buildah.version=1.33.12, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1)
Oct 13 14:14:02 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:14:02 standalone.localdomain podman[169852]: 2025-10-13 14:14:02.442588075 +0000 UTC m=+0.091330584 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, name=rhosp17/openstack-rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, build-date=2025-07-21T13:08:05, tcib_managed=true, com.redhat.component=openstack-rabbitmq-container, distribution-scope=public)
Oct 13 14:14:02 standalone.localdomain ceph-mon[29756]: pgmap v1172: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:03 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=40118 DF PROTO=TCP SPT=49690 DPT=19885 SEQ=1521241476 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0959F69A0000000001030307) 
Oct 13 14:14:03 standalone.localdomain sudo[169916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mnjajumoewqdaltrhjvoadttpulbrtjz ; /usr/bin/python3
Oct 13 14:14:03 standalone.localdomain sudo[169916]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:14:03 standalone.localdomain python3[169918]: ansible-ansible.legacy.command Invoked with _raw_params=OS_CLOUD=standalone openstack network delete b5c071a4-7b94-4335-b0f2-67d1f6b021bf _uses_shell=True zuul_log_id=fa163ec2-ffbe-5cff-7786-000000000041-1-controller zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:14:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1173: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:04 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=40119 DF PROTO=TCP SPT=49690 DPT=19885 SEQ=1521241476 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A0959FAB10000000001030307) 
Oct 13 14:14:04 standalone.localdomain ceph-mon[29756]: pgmap v1173: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:05 standalone.localdomain haproxy[70940]: 172.21.0.2:49656 [13/Oct/2025:14:14:05.020] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:14:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:14:05 standalone.localdomain haproxy[70940]: 172.21.0.2:49656 [13/Oct/2025:14:14:05.026] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/308/308 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:14:05 standalone.localdomain haproxy[70940]: 172.17.0.2:58178 [13/Oct/2025:14:14:05.369] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/38/38 200 8095 - - ---- 54/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:14:05 standalone.localdomain haproxy[70940]: 172.21.0.2:50256 [13/Oct/2025:14:14:05.366] neutron neutron/standalone.internalapi.localdomain 0/0/0/105/105 200 891 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/networks/b5c071a4-7b94-4335-b0f2-67d1f6b021bf HTTP/1.1"
Oct 13 14:14:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1174: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:14:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:14:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:14:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:14:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:14:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:14:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:14:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:14:05 standalone.localdomain systemd[1]: tmp-crun.giUef7.mount: Deactivated successfully.
Oct 13 14:14:05 standalone.localdomain podman[169973]: 2025-10-13 14:14:05.858872109 +0000 UTC m=+0.099411902 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, distribution-scope=public, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=ovn_controller, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44)
Oct 13 14:14:05 standalone.localdomain podman[169978]: 2025-10-13 14:14:05.921539515 +0000 UTC m=+0.149619126 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9)
Oct 13 14:14:05 standalone.localdomain podman[169973]: 2025-10-13 14:14:05.938318269 +0000 UTC m=+0.178858062 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, container_name=ovn_controller, managed_by=tripleo_ansible)
Oct 13 14:14:05 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:14:05 standalone.localdomain podman[169953]: 2025-10-13 14:14:05.984848982 +0000 UTC m=+0.221447464 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, container_name=nova_metadata, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, com.redhat.component=openstack-nova-api-container, version=17.1.9)
Oct 13 14:14:05 standalone.localdomain podman[169949]: 2025-10-13 14:14:05.940552547 +0000 UTC m=+0.194401167 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=swift_proxy, tcib_managed=true, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, name=rhosp17/openstack-swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:14:06 standalone.localdomain podman[169963]: 2025-10-13 14:14:05.899669376 +0000 UTC m=+0.146765619 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, container_name=glance_api_internal, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, 
io.openshift.expose-services=, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, tcib_managed=true, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, name=rhosp17/openstack-glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20)
Oct 13 14:14:06 standalone.localdomain haproxy[70940]: 172.17.0.2:36142 [13/Oct/2025:14:14:06.039] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/1/2/3 300 515 - - ---- 56/4/1/1/0 0/0 "GET / HTTP/1.1"
Oct 13 14:14:06 standalone.localdomain haproxy[70940]: 172.17.0.2:36158 [13/Oct/2025:14:14:06.040] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/1/3/4 300 515 - - ---- 56/4/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:14:06 standalone.localdomain podman[169969]: 2025-10-13 14:14:06.044738853 +0000 UTC m=+0.282843191 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, container_name=ovn_metadata_agent, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1)
Oct 13 14:14:06 standalone.localdomain ceph-mon[29756]: pgmap v1174: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:06 standalone.localdomain podman[169948]: 2025-10-13 14:14:05.865976286 +0000 UTC m=+0.124742446 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, config_id=tripleo_step4, version=17.1.9, batch=17.1_20250721.1, release=1, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, com.redhat.component=openstack-placement-api-container, container_name=placement_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 
placement-api, build-date=2025-07-21T13:58:12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 placement-api, name=rhosp17/openstack-placement-api, architecture=x86_64)
Oct 13 14:14:06 standalone.localdomain podman[169953]: 2025-10-13 14:14:06.0734206 +0000 UTC m=+0.310019132 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, com.redhat.component=openstack-nova-api-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, build-date=2025-07-21T16:05:11, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack 
Platform 17.1 nova-api, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=nova_metadata)
Oct 13 14:14:06 standalone.localdomain haproxy[70940]: 172.21.0.2:50256 [13/Oct/2025:14:14:05.476] neutron neutron/standalone.internalapi.localdomain 0/0/0/600/600 204 152 - - ---- 56/1/0/0/0 0/0 "DELETE /v2.0/networks/b5c071a4-7b94-4335-b0f2-67d1f6b021bf HTTP/1.1"
Oct 13 14:14:06 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:14:06 standalone.localdomain podman[169947]: 2025-10-13 14:14:05.993684442 +0000 UTC m=+0.253296738 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_id=tripleo_step4, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, container_name=glance_api_cron, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, version=17.1.9)
Oct 13 14:14:06 standalone.localdomain podman[169948]: 2025-10-13 14:14:06.096830367 +0000 UTC m=+0.355596527 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vendor=Red Hat, Inc., container_name=placement_api, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, managed_by=tripleo_ansible, com.redhat.component=openstack-placement-api-container, batch=17.1_20250721.1, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 
placement-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, version=17.1.9, name=rhosp17/openstack-placement-api, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true)
Oct 13 14:14:06 standalone.localdomain podman[169963]: 2025-10-13 14:14:06.107839904 +0000 UTC m=+0.354936187 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, release=1, container_name=glance_api_internal, name=rhosp17/openstack-glance-api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:14:06 standalone.localdomain podman[169947]: 2025-10-13 14:14:06.124573415 +0000 UTC m=+0.384185701 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api_cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, name=rhosp17/openstack-glance-api, tcib_managed=true, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:14:06 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:14:06 standalone.localdomain podman[169969]: 2025-10-13 14:14:06.142890985 +0000 UTC m=+0.380995293 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, batch=17.1_20250721.1, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Oct 13 14:14:06 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:14:06 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:14:06 standalone.localdomain podman[169978]: 2025-10-13 14:14:06.170482339 +0000 UTC m=+0.398561940 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, tcib_managed=true, io.buildah.version=1.33.12, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, maintainer=OpenStack TripleO Team, summary=Red 
Hat OpenStack Platform 17.1 glance-api, container_name=glance_api, description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, name=rhosp17/openstack-glance-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:14:06 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:14:06 standalone.localdomain podman[169949]: 2025-10-13 14:14:06.185372364 +0000 UTC m=+0.439220954 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, release=1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-swift-proxy-server-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, architecture=x86_64, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git)
Oct 13 14:14:06 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:14:06 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:14:06 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=40120 DF PROTO=TCP SPT=49690 DPT=19885 SEQ=1521241476 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A095A02B10000000001030307) 
Oct 13 14:14:06 standalone.localdomain podman[170030]: 2025-10-13 14:14:06.301542088 +0000 UTC m=+0.458638889 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cinder-volume, build-date=2025-07-21T16:13:39, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-cinder-volume-container, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, name=rhosp17/openstack-cinder-volume, tcib_managed=true, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-volume, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9)
Oct 13 14:14:06 standalone.localdomain sudo[169916]: pam_unix(sudo:session): session closed for user root
Oct 13 14:14:06 standalone.localdomain podman[170030]: 2025-10-13 14:14:06.331502263 +0000 UTC m=+0.488599064 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cinder-volume, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, name=rhosp17/openstack-cinder-volume, build-date=2025-07-21T16:13:39, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-type=git, com.redhat.component=openstack-cinder-volume-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-volume, tcib_managed=true)
Oct 13 14:14:06 standalone.localdomain haproxy[70940]: 172.17.0.2:36142 [13/Oct/2025:14:14:06.045] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/362/362 201 8164 - - ---- 54/3/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:14:06 standalone.localdomain haproxy[70940]: 172.17.0.2:36158 [13/Oct/2025:14:14:06.046] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/634/634 201 8164 - - ---- 56/4/1/1/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:14:06 standalone.localdomain haproxy[70940]: 172.17.0.2:36174 [13/Oct/2025:14:14:06.419] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/355/355 200 8159 - - ---- 57/4/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:14:06 standalone.localdomain haproxy[70940]: 172.21.0.2:51992 [13/Oct/2025:14:14:06.414] placement placement/standalone.internalapi.localdomain 0/0/0/372/372 404 462 - - ---- 57/2/1/1/0 0/0 "GET /placement/resource_providers/c16bb402-f0b8-4820-bf57-79131a639f2f/aggregates HTTP/1.1"
Oct 13 14:14:06 standalone.localdomain haproxy[70940]: 172.17.0.2:36174 [13/Oct/2025:14:14:06.791] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/37/37 200 8159 - - ---- 57/4/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:14:06 standalone.localdomain haproxy[70940]: 172.21.0.2:52004 [13/Oct/2025:14:14:06.687] placement placement/standalone.internalapi.localdomain 0/0/0/150/150 404 462 - - ---- 57/2/1/1/0 0/0 "GET /placement/resource_providers/c16bb402-f0b8-4820-bf57-79131a639f2f/aggregates HTTP/1.1"
Oct 13 14:14:06 standalone.localdomain haproxy[70940]: 172.21.0.2:51992 [13/Oct/2025:14:14:06.793] placement placement/standalone.internalapi.localdomain 0/0/0/52/52 404 498 - - ---- 57/2/1/1/0 0/0 "DELETE /placement/resource_providers/c16bb402-f0b8-4820-bf57-79131a639f2f HTTP/1.1"
Oct 13 14:14:06 standalone.localdomain haproxy[70940]: 172.21.0.2:52004 [13/Oct/2025:14:14:06.841] placement placement/standalone.internalapi.localdomain 0/0/0/12/12 404 498 - - ---- 57/2/0/0/0 0/0 "DELETE /placement/resource_providers/c16bb402-f0b8-4820-bf57-79131a639f2f HTTP/1.1"
Oct 13 14:14:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1175: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:08 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=58749 DF PROTO=TCP SPT=49694 DPT=19885 SEQ=1895575811 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A095A0A630000000001030307) 
Oct 13 14:14:08 standalone.localdomain sudo[170286]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oyalwuyrzlogzqlhkcjkntbgxsugxswi ; /usr/bin/python3
Oct 13 14:14:08 standalone.localdomain sudo[170286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:14:08 standalone.localdomain runuser[170293]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:14:08 standalone.localdomain python3[170288]: ansible-ansible.legacy.command Invoked with _raw_params=echo 'Check if the pre adoption tempest tests passed' _uses_shell=True zuul_log_id=fa163ec2-ffbe-5cff-7786-000000000042-1-controller zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:14:08 standalone.localdomain sudo[170286]: pam_unix(sudo:session): session closed for user root
Oct 13 14:14:08 standalone.localdomain ceph-mon[29756]: pgmap v1175: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:09 standalone.localdomain runuser[170293]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:14:09 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=58750 DF PROTO=TCP SPT=49694 DPT=19885 SEQ=1895575811 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A095A0E710000000001030307) 
Oct 13 14:14:09 standalone.localdomain runuser[170458]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:14:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1176: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:14:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:14:09 standalone.localdomain podman[170503]: 2025-10-13 14:14:09.842804734 +0000 UTC m=+0.105246651 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, 
container_name=keystone_cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, build-date=2025-07-21T13:27:18)
Oct 13 14:14:09 standalone.localdomain podman[170504]: 2025-10-13 14:14:09.872870894 +0000 UTC m=+0.134567787 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.buildah.version=1.33.12, build-date=2025-07-21T16:05:11, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, name=rhosp17/openstack-nova-api, config_id=tripleo_step4, release=1, 
tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., container_name=nova_api)
Oct 13 14:14:09 standalone.localdomain podman[170503]: 2025-10-13 14:14:09.902130308 +0000 UTC m=+0.164572185 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, 
container_name=keystone_cron, description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, build-date=2025-07-21T13:27:18, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:14:09 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:14:09 standalone.localdomain podman[170504]: 2025-10-13 14:14:09.955742768 +0000 UTC m=+0.217439721 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, name=rhosp17/openstack-nova-api, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, com.redhat.component=openstack-nova-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.buildah.version=1.33.12)
Oct 13 14:14:09 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:14:10 standalone.localdomain runuser[170458]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:14:10 standalone.localdomain runuser[170600]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:14:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:14:10 standalone.localdomain runuser[170600]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:14:10 standalone.localdomain ceph-mon[29756]: pgmap v1176: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:11 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=58751 DF PROTO=TCP SPT=49694 DPT=19885 SEQ=1895575811 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A095A16710000000001030307) 
Oct 13 14:14:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1177: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:14:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:14:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:14:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:14:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:14:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:14:11 standalone.localdomain podman[170718]: 2025-10-13 14:14:11.870439907 +0000 UTC m=+0.115998839 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-sriov-agent-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-neutron-sriov-agent, config_id=tripleo_step4, 
batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T16:03:34, container_name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, vendor=Red Hat, Inc.)
Oct 13 14:14:11 standalone.localdomain podman[170716]: 2025-10-13 14:14:11.841247164 +0000 UTC m=+0.098358469 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, com.redhat.component=openstack-barbican-api-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, description=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, batch=17.1_20250721.1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', 
'/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, container_name=barbican_api, io.buildah.version=1.33.12, architecture=x86_64, release=1)
Oct 13 14:14:11 standalone.localdomain podman[170715]: 2025-10-13 14:14:11.904060165 +0000 UTC m=+0.161821490 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-neutron-dhcp-agent-container, container_name=neutron_dhcp, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-neutron-dhcp-agent, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, build-date=2025-07-21T16:28:54, distribution-scope=public, maintainer=OpenStack TripleO Team)
Oct 13 14:14:11 standalone.localdomain podman[170716]: 2025-10-13 14:14:11.921312383 +0000 UTC m=+0.178423678 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, vcs-type=git, config_id=tripleo_step3, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, container_name=barbican_api, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, build-date=2025-07-21T15:22:44, com.redhat.component=openstack-barbican-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:14:11 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:14:11 standalone.localdomain podman[170718]: 2025-10-13 14:14:11.956844689 +0000 UTC m=+0.202403691 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, managed_by=tripleo_ansible, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-sriov-agent, tcib_managed=true, container_name=neutron_sriov_agent, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T16:03:34, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/sys/class/net:/sys/class/net:rw']}, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:14:11 standalone.localdomain podman[170717]: 2025-10-13 14:14:11.977025257 +0000 UTC m=+0.228450008 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, name=rhosp17/openstack-barbican-keystone-listener, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, com.redhat.component=openstack-barbican-keystone-listener-container, container_name=barbican_keystone_listener, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, build-date=2025-07-21T16:18:19, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:14:11 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:14:12 standalone.localdomain podman[170717]: 2025-10-13 14:14:12.006352623 +0000 UTC m=+0.257777384 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, com.redhat.component=openstack-barbican-keystone-listener-container, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12, version=17.1.9, vcs-type=git, build-date=2025-07-21T16:18:19, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, container_name=barbican_keystone_listener, architecture=x86_64, io.openshift.expose-services=)
Oct 13 14:14:12 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:14:12 standalone.localdomain podman[170730]: 2025-10-13 14:14:12.082202753 +0000 UTC m=+0.322090882 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, build-date=2025-07-21T15:36:22, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_worker, name=rhosp17/openstack-barbican-worker, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.component=openstack-barbican-worker-container, description=Red Hat OpenStack Platform 17.1 barbican-worker, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64)
Oct 13 14:14:12 standalone.localdomain podman[170715]: 2025-10-13 14:14:12.111660635 +0000 UTC m=+0.369421990 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, build-date=2025-07-21T16:28:54, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-dhcp-agent-container, config_id=tripleo_step4, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-dhcp-agent, version=17.1.9, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1)
Oct 13 14:14:12 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:14:12 standalone.localdomain podman[170736]: 2025-10-13 14:14:12.085639508 +0000 UTC m=+0.324881237 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T16:05:11, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-api-container, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, name=rhosp17/openstack-nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=nova_api_cron, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:14:12 standalone.localdomain podman[170730]: 2025-10-13 14:14:12.14485797 +0000 UTC m=+0.384746029 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, distribution-scope=public, name=rhosp17/openstack-barbican-worker, release=1, summary=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, build-date=2025-07-21T15:36:22, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-barbican-worker-container, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 
barbican-worker, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_worker, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:14:12 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:14:12 standalone.localdomain podman[170736]: 2025-10-13 14:14:12.174127745 +0000 UTC m=+0.413369414 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, io.buildah.version=1.33.12, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T16:05:11, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.component=openstack-nova-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=nova_api_cron)
Oct 13 14:14:12 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:14:12 standalone.localdomain ceph-mon[29756]: pgmap v1177: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:13 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=8852 DF PROTO=TCP SPT=43182 DPT=19885 SEQ=2864528905 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A095A1E310000000001030307) 
Oct 13 14:14:13 standalone.localdomain sudo[170886]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gfhkynouxzlzflxlgpwwebnxqzxardvp ; /usr/bin/python3
Oct 13 14:14:13 standalone.localdomain sudo[170886]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 14:14:13 standalone.localdomain python3[170888]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable rhceph-7-tools-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms _uses_shell=True zuul_log_id=fa163ec2-ffbe-5cff-7786-000000000043-1-controller zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:14:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1178: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:14 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=8853 DF PROTO=TCP SPT=43182 DPT=19885 SEQ=2864528905 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A095A22310000000001030307) 
Oct 13 14:14:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:14:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/385332166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:14:14 standalone.localdomain ceph-mon[29756]: pgmap v1178: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:14 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/385332166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:14:15 standalone.localdomain haproxy[70940]: 172.17.0.2:36174 [13/Oct/2025:14:14:14.958] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/66/66 200 8101 - - ---- 58/4/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:14:15 standalone.localdomain haproxy[70940]: 172.17.0.2:45654 [13/Oct/2025:14:14:14.954] placement placement/standalone.internalapi.localdomain 0/0/0/83/83 200 298 - - ---- 58/3/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/allocations HTTP/1.1"
Oct 13 14:14:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:14:15 standalone.localdomain haproxy[70940]: 172.17.0.2:45654 [13/Oct/2025:14:14:15.044] placement placement/standalone.internalapi.localdomain 0/0/0/17/17 200 1117 - - ---- 58/3/0/0/0 0/0 "GET /placement/resource_providers?in_tree=e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 HTTP/1.1"
Oct 13 14:14:15 standalone.localdomain haproxy[70940]: 172.17.0.2:45654 [13/Oct/2025:14:14:15.066] placement placement/standalone.internalapi.localdomain 0/0/0/19/19 200 640 - - ---- 58/3/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/inventories HTTP/1.1"
Oct 13 14:14:15 standalone.localdomain haproxy[70940]: 172.17.0.2:45654 [13/Oct/2025:14:14:15.090] placement placement/standalone.internalapi.localdomain 0/0/0/15/15 200 361 - - ---- 58/3/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/aggregates HTTP/1.1"
Oct 13 14:14:15 standalone.localdomain haproxy[70940]: 172.17.0.2:45654 [13/Oct/2025:14:14:15.109] placement placement/standalone.internalapi.localdomain 0/0/0/24/24 200 1719 - - ---- 58/3/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/traits HTTP/1.1"
Oct 13 14:14:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:14:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1018737493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:14:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1179: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:16 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1018737493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:14:16 standalone.localdomain ceph-mon[29756]: pgmap v1179: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:16 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=8854 DF PROTO=TCP SPT=43182 DPT=19885 SEQ=2864528905 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A095A2A310000000001030307) 
Oct 13 14:14:16 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:14:16 standalone.localdomain recover_tripleo_nova_virtqemud[171079]: 93291
Oct 13 14:14:16 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:14:16 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:14:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:14:16 standalone.localdomain podman[171080]: 2025-10-13 14:14:16.814722323 +0000 UTC m=+0.080522974 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, distribution-scope=public, release=1, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, container_name=ovn_cluster_northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, com.redhat.component=openstack-ovn-northd-container, name=rhosp17/openstack-ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, 
build-date=2025-07-21T13:30:04, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=ovn_cluster_northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1)
Oct 13 14:14:16 standalone.localdomain podman[171080]: 2025-10-13 14:14:16.85193054 +0000 UTC m=+0.117731241 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, container_name=ovn_cluster_northd, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, config_id=ovn_cluster_northd, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, maintainer=OpenStack 
TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-northd-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-type=git, build-date=2025-07-21T13:30:04)
Oct 13 14:14:16 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:14:16 standalone.localdomain rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 14:14:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1180: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:14:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:14:17 standalone.localdomain systemd[1]: tmp-crun.oEWs54.mount: Deactivated successfully.
Oct 13 14:14:17 standalone.localdomain podman[171112]: 2025-10-13 14:14:17.84461094 +0000 UTC m=+0.111159520 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, release=1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, version=17.1.9, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, vcs-type=git, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12)
Oct 13 14:14:17 standalone.localdomain podman[171113]: 2025-10-13 14:14:17.86162271 +0000 UTC m=+0.127465118 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, build-date=2025-07-21T12:58:45, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, config_id=tripleo_step2, com.redhat.component=openstack-mariadb-container, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, distribution-scope=public, 
container_name=clustercheck, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 14:14:17 standalone.localdomain podman[171112]: 2025-10-13 14:14:17.87794232 +0000 UTC m=+0.144490860 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37)
Oct 13 14:14:17 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:14:17 standalone.localdomain podman[171113]: 2025-10-13 14:14:17.932414786 +0000 UTC m=+0.198257224 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-mariadb-container, build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, container_name=clustercheck, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, 
vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible)
Oct 13 14:14:17 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:14:18 standalone.localdomain ceph-mon[29756]: pgmap v1180: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1181: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:14:20 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=8855 DF PROTO=TCP SPT=43182 DPT=19885 SEQ=2864528905 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A095A39F10000000001030307) 
Oct 13 14:14:20 standalone.localdomain ceph-mon[29756]: pgmap v1181: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:21 standalone.localdomain runuser[171525]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:14:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1182: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:21 standalone.localdomain runuser[171525]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:14:21 standalone.localdomain ceph-mon[29756]: pgmap v1182: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:22 standalone.localdomain runuser[171595]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:14:22 standalone.localdomain sudo[170886]: pam_unix(sudo:session): session closed for user root
Oct 13 14:14:22 standalone.localdomain runuser[171595]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:14:22 standalone.localdomain runuser[171657]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:14:23
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', '.mgr', 'vms', 'images', 'manila_data', 'backups', 'manila_metadata']
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:14:23 standalone.localdomain runuser[171657]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:14:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1183: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:14:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:14:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:14:23 standalone.localdomain podman[171727]: 2025-10-13 14:14:23.818828366 +0000 UTC m=+0.074045696 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, release=1, batch=17.1_20250721.1, architecture=x86_64, com.redhat.component=openstack-cinder-scheduler-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, 
vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, config_id=tripleo_step4, container_name=cinder_scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, version=17.1.9, build-date=2025-07-21T16:10:12, distribution-scope=public, name=rhosp17/openstack-cinder-scheduler, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team)
Oct 13 14:14:23 standalone.localdomain podman[171726]: 2025-10-13 14:14:23.890135447 +0000 UTC m=+0.145304035 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-api, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-cinder-api-container, description=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T15:58:55, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, container_name=cinder_api_cron, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']})
Oct 13 14:14:23 standalone.localdomain podman[171727]: 2025-10-13 14:14:23.899749961 +0000 UTC m=+0.154967341 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, name=rhosp17/openstack-cinder-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=cinder_scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-cinder-scheduler-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:10:12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, vcs-type=git, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:14:23 standalone.localdomain podman[171726]: 2025-10-13 14:14:23.90725358 +0000 UTC m=+0.162422128 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, build-date=2025-07-21T15:58:55, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, container_name=cinder_api_cron, name=rhosp17/openstack-cinder-api, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-cinder-api-container, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, release=1, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible)
Oct 13 14:14:23 standalone.localdomain podman[171725]: 2025-10-13 14:14:23.924057074 +0000 UTC m=+0.180466890 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, container_name=iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, 
distribution-scope=public, managed_by=tripleo_ansible, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container)
Oct 13 14:14:23 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:14:23 standalone.localdomain podman[171725]: 2025-10-13 14:14:23.93993223 +0000 UTC m=+0.196342086 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20250721.1, 
io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=iscsid, version=17.1.9, config_id=tripleo_step3)
Oct 13 14:14:23 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:14:23 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:14:24 standalone.localdomain ceph-mon[29756]: pgmap v1183: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:14:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1184: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:26 standalone.localdomain ceph-mon[29756]: pgmap v1184: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:14:26 standalone.localdomain podman[171812]: 2025-10-13 14:14:26.823196612 +0000 UTC m=+0.085826405 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-keystone, config_id=tripleo_step3, com.redhat.component=openstack-keystone-container, build-date=2025-07-21T13:27:18, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, 
vcs-ref=0693142a4093f932157b8019660e85aa608befc8, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, tcib_managed=true, container_name=keystone, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1)
Oct 13 14:14:26 standalone.localdomain podman[171812]: 2025-10-13 14:14:26.861811213 +0000 UTC m=+0.124440986 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, 
build-date=2025-07-21T13:27:18, batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, container_name=keystone)
Oct 13 14:14:26 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:14:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1185: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:14:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:14:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:14:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:14:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:14:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:14:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:14:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:14:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:14:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:14:27 standalone.localdomain podman[171867]: 2025-10-13 14:14:27.861138917 +0000 UTC m=+0.104233149 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, architecture=x86_64, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, description=Red Hat OpenStack Platform 17.1 horizon, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=horizon, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:58:15, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, name=rhosp17/openstack-horizon, vendor=Red Hat, Inc., vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, summary=Red Hat OpenStack Platform 17.1 horizon, config_id=tripleo_step3, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-horizon-container)
Oct 13 14:14:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:14:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:14:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:14:27 standalone.localdomain podman[171859]: 2025-10-13 14:14:27.862697214 +0000 UTC m=+0.109572312 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, com.redhat.component=openstack-cinder-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cinder-api, build-date=2025-07-21T15:58:55, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, 
name=rhosp17/openstack-cinder-api, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., vcs-type=git)
Oct 13 14:14:27 standalone.localdomain podman[171848]: 2025-10-13 14:14:27.916542202 +0000 UTC m=+0.174722366 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, container_name=manila_scheduler, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-manila-scheduler-container, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, release=1, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 manila-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 manila-scheduler, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-manila-scheduler, architecture=x86_64)
Oct 13 14:14:27 standalone.localdomain podman[171859]: 2025-10-13 14:14:27.946678362 +0000 UTC m=+0.193553460 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, architecture=x86_64, com.redhat.component=openstack-cinder-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-api, build-date=2025-07-21T15:58:55, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, release=1, container_name=cinder_api, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:14:27 standalone.localdomain systemd[1]: tmp-crun.V57xGa.mount: Deactivated successfully.
Oct 13 14:14:27 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:14:27 standalone.localdomain podman[171976]: 2025-10-13 14:14:27.961871318 +0000 UTC m=+0.079770392 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, version=17.1.9, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=heat_engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:44:11, com.redhat.component=openstack-heat-engine-container, name=rhosp17/openstack-heat-engine)
Oct 13 14:14:27 standalone.localdomain podman[171867]: 2025-10-13 14:14:27.968041196 +0000 UTC m=+0.211135428 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:58:15, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-horizon-container, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 
17.1 horizon, config_id=tripleo_step3, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-horizon, container_name=horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.openshift.expose-services=, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 horizon, vendor=Red Hat, Inc.)
Oct 13 14:14:27 standalone.localdomain podman[171848]: 2025-10-13 14:14:27.985708566 +0000 UTC m=+0.243888740 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, com.redhat.component=openstack-manila-scheduler-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T15:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-manila-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/manila:/var/log/manila:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, batch=17.1_20250721.1, container_name=manila_scheduler, release=1, architecture=x86_64, io.openshift.expose-services=)
Oct 13 14:14:28 standalone.localdomain podman[171982]: 2025-10-13 14:14:28.005812191 +0000 UTC m=+0.118360841 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, batch=17.1_20250721.1, release=1, build-date=2025-07-21T15:44:03, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-server, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-server, container_name=neutron_api, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, 
managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, vcs-type=git)
Oct 13 14:14:28 standalone.localdomain podman[171879]: 2025-10-13 14:14:28.015793827 +0000 UTC m=+0.244544180 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, distribution-scope=public, managed_by=tripleo_ansible, container_name=memcached, io.buildah.version=1.33.12, name=rhosp17/openstack-memcached, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T12:58:43, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 memcached, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:14:28 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:14:28 standalone.localdomain podman[171850]: 2025-10-13 14:14:28.061961498 +0000 UTC m=+0.313527409 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, name=rhosp17/openstack-nova-scheduler, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-scheduler, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=4f7cb55437a2b42333072591935a511528e79935, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:02:54, io.k8s.description=Red Hat OpenStack Platform 
17.1 nova-scheduler, container_name=nova_scheduler, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-nova-scheduler-container, managed_by=tripleo_ansible)
Oct 13 14:14:28 standalone.localdomain podman[171885]: 2025-10-13 14:14:28.017262781 +0000 UTC m=+0.244378095 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, vcs-type=git, container_name=heat_api, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., 
com.redhat.component=openstack-heat-api-container, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T15:56:26)
Oct 13 14:14:28 standalone.localdomain podman[171976]: 2025-10-13 14:14:28.086476238 +0000 UTC m=+0.204375352 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, container_name=heat_engine, com.redhat.component=openstack-heat-engine-container, config_id=tripleo_step4)
Oct 13 14:14:28 standalone.localdomain podman[171872]: 2025-10-13 14:14:28.114874966 +0000 UTC m=+0.342424183 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4)
Oct 13 14:14:28 standalone.localdomain podman[171850]: 2025-10-13 14:14:28.139661695 +0000 UTC m=+0.391227596 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, name=rhosp17/openstack-nova-scheduler, build-date=2025-07-21T16:02:54, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-scheduler, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, 
com.redhat.component=openstack-nova-scheduler-container, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, container_name=nova_scheduler, io.buildah.version=1.33.12)
Oct 13 14:14:28 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:14:28 standalone.localdomain podman[171860]: 2025-10-13 14:14:28.187430326 +0000 UTC m=+0.425048600 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, name=rhosp17/openstack-heat-api, config_id=tripleo_step4, distribution-scope=public, container_name=heat_api_cron, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, build-date=2025-07-21T15:56:26, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, 
vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-container)
Oct 13 14:14:28 standalone.localdomain podman[171885]: 2025-10-13 14:14:28.196258165 +0000 UTC m=+0.423373549 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, version=17.1.9, architecture=x86_64)
Oct 13 14:14:28 standalone.localdomain podman[171982]: 2025-10-13 14:14:28.207782319 +0000 UTC m=+0.320331009 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, architecture=x86_64, container_name=neutron_api, name=rhosp17/openstack-neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-server-container, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T15:44:03, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:14:28 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:14:28 standalone.localdomain podman[171979]: 2025-10-13 14:14:28.214942067 +0000 UTC m=+0.329209939 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, build-date=2025-07-21T16:06:43, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-api, container_name=manila_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., name=rhosp17/openstack-manila-api, com.redhat.component=openstack-manila-api-container, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, release=1, summary=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.openshift.expose-services=)
Oct 13 14:14:28 standalone.localdomain podman[171860]: 2025-10-13 14:14:28.225195751 +0000 UTC m=+0.462814075 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T15:56:26, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, 
version=17.1.9, managed_by=tripleo_ansible, container_name=heat_api_cron, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, name=rhosp17/openstack-heat-api, architecture=x86_64)
Oct 13 14:14:28 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:14:28 standalone.localdomain podman[171879]: 2025-10-13 14:14:28.239851719 +0000 UTC m=+0.468602152 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, maintainer=OpenStack TripleO Team, container_name=memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, name=rhosp17/openstack-memcached, vcs-type=git, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.component=openstack-memcached-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:14:28 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:14:28 standalone.localdomain podman[171872]: 2025-10-13 14:14:28.249650308 +0000 UTC m=+0.477199595 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, container_name=logrotate_crond, io.buildah.version=1.33.12, summary=Red Hat OpenStack 
Platform 17.1 cron, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52)
Oct 13 14:14:28 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:14:28 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:14:28 standalone.localdomain podman[171847]: 2025-10-13 14:14:28.327425207 +0000 UTC m=+0.588624103 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, container_name=heat_api_cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, batch=17.1_20250721.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, com.redhat.component=openstack-heat-api-cfn-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api-cfn, build-date=2025-07-21T14:49:55, io.openshift.expose-services=, version=17.1.9)
Oct 13 14:14:28 standalone.localdomain podman[171893]: 2025-10-13 14:14:28.387549797 +0000 UTC m=+0.602785168 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, config_id=tripleo_step4, release=1, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T15:44:17, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, container_name=nova_conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-conductor, io.buildah.version=1.33.12, vcs-type=git, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-nova-conductor-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible)
Oct 13 14:14:28 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:14:28 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:14:28 standalone.localdomain podman[171893]: 2025-10-13 14:14:28.431143039 +0000 UTC m=+0.646378370 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, container_name=nova_conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-conductor-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-conductor, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-07-21T15:44:17, vendor=Red Hat, Inc., 
name=rhosp17/openstack-nova-conductor, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, release=1, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:14:28 standalone.localdomain podman[171847]: 2025-10-13 14:14:28.440902548 +0000 UTC m=+0.702101464 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, tcib_managed=true, build-date=2025-07-21T14:49:55, distribution-scope=public, name=rhosp17/openstack-heat-api-cfn, version=17.1.9, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=heat_api_cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc.)
Oct 13 14:14:28 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:14:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:14:28 standalone.localdomain podman[171979]: 2025-10-13 14:14:28.455279408 +0000 UTC m=+0.569547290 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, summary=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:06:43, container_name=manila_api_cron, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, com.redhat.component=openstack-manila-api-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, architecture=x86_64, name=rhosp17/openstack-manila-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1)
Oct 13 14:14:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:14:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:14:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:14:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:14:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:14:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:14:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:14:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0043440946049692905 of space, bias 1.0, pg target 0.4344094604969291 quantized to 32 (current 32)
Oct 13 14:14:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:14:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:14:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:14:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:14:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:14:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:14:28 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:14:28 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:14:28 standalone.localdomain ceph-mon[29756]: pgmap v1185: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1186: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:29 standalone.localdomain podman[172345]: 2025-10-13 14:14:29.928099083 +0000 UTC m=+0.068211898 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, io.openshift.expose-services=, build-date=2025-07-21T15:22:36, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 manila-share, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, name=rhosp17/openstack-manila-share, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-manila-share-container, distribution-scope=public)
Oct 13 14:14:29 standalone.localdomain podman[172345]: 2025-10-13 14:14:29.956879713 +0000 UTC m=+0.096992518 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-share, io.buildah.version=1.33.12, name=rhosp17/openstack-manila-share, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T15:22:36, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, release=1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-manila-share-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-share, maintainer=OpenStack TripleO Team, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74)
Oct 13 14:14:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:14:30 standalone.localdomain ceph-mon[29756]: pgmap v1186: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1187: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:14:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:14:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:14:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:14:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:14:32 standalone.localdomain ceph-mon[29756]: pgmap v1187: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:32 standalone.localdomain systemd[1]: tmp-crun.HO2JKO.mount: Deactivated successfully.
Oct 13 14:14:32 standalone.localdomain podman[172493]: 2025-10-13 14:14:32.873585807 +0000 UTC m=+0.105611271 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, container_name=nova_vnc_proxy, vcs-type=git, name=rhosp17/openstack-nova-novncproxy, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, com.redhat.component=openstack-nova-novncproxy-container, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, 
tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T15:24:10, io.openshift.expose-services=, config_id=tripleo_step4)
Oct 13 14:14:32 standalone.localdomain podman[172475]: 2025-10-13 14:14:32.878386584 +0000 UTC m=+0.131570945 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, version=17.1.9, container_name=nova_migration_target, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=)
Oct 13 14:14:32 standalone.localdomain podman[172476]: 2025-10-13 14:14:32.944217337 +0000 UTC m=+0.192308932 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-container-container, summary=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, 
Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, container_name=swift_container_server, release=1, batch=17.1_20250721.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:14:32 standalone.localdomain podman[172487]: 2025-10-13 14:14:32.826302371 +0000 UTC m=+0.071217229 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:14:33 standalone.localdomain podman[172487]: 2025-10-13 14:14:33.009727731 +0000 UTC m=+0.254642589 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, release=1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp 
osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, batch=17.1_20250721.1)
Oct 13 14:14:33 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:14:33 standalone.localdomain podman[172474]: 2025-10-13 14:14:32.985581692 +0000 UTC m=+0.240237549 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, vcs-type=git, container_name=swift_object_server, distribution-scope=public, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, 
batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:14:33 standalone.localdomain podman[172493]: 2025-10-13 14:14:33.15783842 +0000 UTC m=+0.389863874 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, com.redhat.component=openstack-nova-novncproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, container_name=nova_vnc_proxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, build-date=2025-07-21T15:24:10, io.buildah.version=1.33.12, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-nova-novncproxy, release=1, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:14:33 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:14:33 standalone.localdomain podman[172476]: 2025-10-13 14:14:33.176246533 +0000 UTC m=+0.424338128 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, container_name=swift_container_server, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, 
Inc., batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:14:33 standalone.localdomain podman[172475]: 2025-10-13 14:14:33.187735235 +0000 UTC m=+0.440919586 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, release=1, version=17.1.9, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public)
Oct 13 14:14:33 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:14:33 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:14:33 standalone.localdomain podman[172474]: 2025-10-13 14:14:33.211332307 +0000 UTC m=+0.465988184 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, release=1, container_name=swift_object_server, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, distribution-scope=public, maintainer=OpenStack TripleO Team)
Oct 13 14:14:33 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:14:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1188: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:33 standalone.localdomain runuser[172630]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:14:34 standalone.localdomain runuser[172630]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:14:34 standalone.localdomain ceph-mon[29756]: pgmap v1188: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:34 standalone.localdomain runuser[172695]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:14:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:14:35 standalone.localdomain runuser[172695]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:14:35 standalone.localdomain runuser[172753]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:14:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1189: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:36 standalone.localdomain ceph-mon[29756]: pgmap v1189: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:36 standalone.localdomain runuser[172753]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:14:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:14:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:14:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:14:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:14:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:14:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:14:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:14:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:14:36 standalone.localdomain systemd[1]: tmp-crun.VmBp75.mount: Deactivated successfully.
Oct 13 14:14:36 standalone.localdomain podman[172832]: 2025-10-13 14:14:36.699286712 +0000 UTC m=+0.134504495 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ovn_controller)
Oct 13 14:14:36 standalone.localdomain podman[172820]: 2025-10-13 14:14:36.673559564 +0000 UTC m=+0.125514549 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, name=rhosp17/openstack-placement-api, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12, config_id=tripleo_step4, com.redhat.component=openstack-placement-api-container, release=1, version=17.1.9, container_name=placement_api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Oct 13 14:14:36 standalone.localdomain podman[172832]: 2025-10-13 14:14:36.729769624 +0000 UTC m=+0.164987447 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T13:28:44, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1)
Oct 13 14:14:36 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:14:36 standalone.localdomain podman[172820]: 2025-10-13 14:14:36.756855902 +0000 UTC m=+0.208810887 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, version=17.1.9, container_name=placement_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, description=Red Hat OpenStack Platform 17.1 placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, com.redhat.component=openstack-placement-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-placement-api, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T13:58:12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 placement-api)
Oct 13 14:14:36 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:14:36 standalone.localdomain podman[172821]: 2025-10-13 14:14:36.772214752 +0000 UTC m=+0.216258485 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 14:14:36 standalone.localdomain podman[172819]: 2025-10-13 14:14:36.732404404 +0000 UTC m=+0.183774111 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=glance_api_cron, release=1, config_id=tripleo_step4, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1)
Oct 13 14:14:36 standalone.localdomain podman[172819]: 2025-10-13 14:14:36.815725592 +0000 UTC m=+0.267095299 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T13:58:20, vcs-type=git, container_name=glance_api_cron, com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 14:14:36 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:14:36 standalone.localdomain podman[172824]: 2025-10-13 14:14:36.864133764 +0000 UTC m=+0.300601896 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, managed_by=tripleo_ansible, release=1, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public)
Oct 13 14:14:36 standalone.localdomain podman[172837]: 2025-10-13 14:14:36.817771365 +0000 UTC m=+0.252107811 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, version=17.1.9, container_name=glance_api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, name=rhosp17/openstack-glance-api, release=1, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:20, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']})
Oct 13 14:14:36 standalone.localdomain podman[172824]: 2025-10-13 14:14:36.915717001 +0000 UTC m=+0.352185123 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, version=17.1.9, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent)
Oct 13 14:14:36 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:14:36 standalone.localdomain podman[172822]: 2025-10-13 14:14:36.922774016 +0000 UTC m=+0.360617209 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., build-date=2025-07-21T16:05:11, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, architecture=x86_64, container_name=nova_metadata, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vcs-type=git)
Oct 13 14:14:36 standalone.localdomain podman[172823]: 2025-10-13 14:14:36.996744189 +0000 UTC m=+0.435682006 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, version=17.1.9, build-date=2025-07-21T13:58:20, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:14:37 standalone.localdomain podman[172821]: 2025-10-13 14:14:37.003779714 +0000 UTC m=+0.447823457 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, container_name=swift_proxy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, architecture=x86_64, name=rhosp17/openstack-swift-proxy-server)
Oct 13 14:14:37 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:14:37 standalone.localdomain podman[172822]: 2025-10-13 14:14:37.061239552 +0000 UTC m=+0.499082755 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_metadata, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, release=1, build-date=2025-07-21T16:05:11, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-api)
Oct 13 14:14:37 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:14:37 standalone.localdomain podman[172837]: 2025-10-13 14:14:37.112076876 +0000 UTC m=+0.546413402 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, com.redhat.component=openstack-glance-api-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, tcib_managed=true, architecture=x86_64, 
config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, version=17.1.9, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, release=1, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:14:37 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:14:37 standalone.localdomain podman[172823]: 2025-10-13 14:14:37.23386166 +0000 UTC m=+0.672799427 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step4, container_name=glance_api_internal, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T13:58:20, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:14:37 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:14:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1190: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:38 standalone.localdomain ceph-mon[29756]: pgmap v1190: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:39 standalone.localdomain podman[173145]: 2025-10-13 14:14:39.477661424 +0000 UTC m=+0.112435340 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-backup, io.buildah.version=1.33.12, com.redhat.component=openstack-cinder-backup-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, build-date=2025-07-21T16:18:24, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, name=rhosp17/openstack-cinder-backup, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc.)
Oct 13 14:14:39 standalone.localdomain podman[173145]: 2025-10-13 14:14:39.480802751 +0000 UTC m=+0.115576687 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:18:24, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, com.redhat.component=openstack-cinder-backup-container, description=Red Hat OpenStack Platform 17.1 cinder-backup, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cinder-backup, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-backup, version=17.1.9)
Oct 13 14:14:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1191: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:14:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:14:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:14:40 standalone.localdomain ceph-mon[29756]: pgmap v1191: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:40 standalone.localdomain podman[173339]: 2025-10-13 14:14:40.814095138 +0000 UTC m=+0.081184454 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, container_name=keystone_cron, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, build-date=2025-07-21T13:27:18, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1)
Oct 13 14:14:40 standalone.localdomain podman[173339]: 2025-10-13 14:14:40.853079139 +0000 UTC m=+0.120168495 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, name=rhosp17/openstack-keystone, release=1, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:18, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone_cron, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, distribution-scope=public)
Oct 13 14:14:40 standalone.localdomain podman[173340]: 2025-10-13 14:14:40.865874441 +0000 UTC m=+0.131590456 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T16:05:11, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:14:40 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:14:40 standalone.localdomain podman[173340]: 2025-10-13 14:14:40.941020669 +0000 UTC m=+0.206736704 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, build-date=2025-07-21T16:05:11, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., container_name=nova_api, summary=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, release=1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f)
Oct 13 14:14:40 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:14:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1192: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:14:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:14:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:14:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:14:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:14:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:14:42 standalone.localdomain ceph-mon[29756]: pgmap v1192: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:42 standalone.localdomain podman[173423]: 2025-10-13 14:14:42.857189403 +0000 UTC m=+0.100002640 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, build-date=2025-07-21T16:05:11, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, container_name=nova_api_cron, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, version=17.1.9, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., 
name=rhosp17/openstack-nova-api, batch=17.1_20250721.1, com.redhat.component=openstack-nova-api-container, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64)
Oct 13 14:14:42 standalone.localdomain podman[173423]: 2025-10-13 14:14:42.863125454 +0000 UTC m=+0.105938711 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, summary=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-nova-api-container, distribution-scope=public, release=1, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T16:05:11, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, architecture=x86_64)
Oct 13 14:14:42 standalone.localdomain systemd[1]: tmp-crun.PynaTz.mount: Deactivated successfully.
Oct 13 14:14:42 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:14:42 standalone.localdomain systemd[1]: tmp-crun.UxHS5c.mount: Deactivated successfully.
Oct 13 14:14:42 standalone.localdomain podman[173404]: 2025-10-13 14:14:42.886001894 +0000 UTC m=+0.146695567 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, name=rhosp17/openstack-barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, vcs-type=git, com.redhat.component=openstack-barbican-api-container, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 barbican-api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, release=1, build-date=2025-07-21T15:22:44, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', 
'/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, container_name=barbican_api, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:14:42 standalone.localdomain podman[173409]: 2025-10-13 14:14:42.947467344 +0000 UTC m=+0.196428179 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, version=17.1.9, vcs-type=git, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, container_name=neutron_sriov_agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, distribution-scope=public, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, build-date=2025-07-21T16:03:34)
Oct 13 14:14:42 standalone.localdomain podman[173405]: 2025-10-13 14:14:42.983607609 +0000 UTC m=+0.242669553 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, io.openshift.tags=rhosp osp openstack osp-17.1, 
container_name=barbican_keystone_listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-keystone-listener-container, name=rhosp17/openstack-barbican-keystone-listener, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, managed_by=tripleo_ansible, config_id=tripleo_step3, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T16:18:19, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true)
Oct 13 14:14:42 standalone.localdomain podman[173403]: 2025-10-13 14:14:42.993895794 +0000 UTC m=+0.257785505 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, architecture=x86_64, build-date=2025-07-21T16:28:54, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, com.redhat.component=openstack-neutron-dhcp-agent-container, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, container_name=neutron_dhcp, vendor=Red Hat, Inc.)
Oct 13 14:14:43 standalone.localdomain podman[173404]: 2025-10-13 14:14:43.015589317 +0000 UTC m=+0.276282970 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-barbican-api-container, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-barbican-api, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, build-date=2025-07-21T15:22:44, description=Red Hat OpenStack Platform 17.1 barbican-api, container_name=barbican_api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, vcs-type=git, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, release=1)
Oct 13 14:14:43 standalone.localdomain podman[173405]: 2025-10-13 14:14:43.015936847 +0000 UTC m=+0.274998821 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, build-date=2025-07-21T16:18:19, com.redhat.component=openstack-barbican-keystone-listener-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, tcib_managed=true, name=rhosp17/openstack-barbican-keystone-listener, release=1, io.buildah.version=1.33.12, container_name=barbican_keystone_listener, config_id=tripleo_step3, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:14:43 standalone.localdomain podman[173417]: 2025-10-13 14:14:42.919668883 +0000 UTC m=+0.165207963 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, name=rhosp17/openstack-barbican-worker, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, config_id=tripleo_step3, build-date=2025-07-21T15:36:22, container_name=barbican_worker, description=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-worker-container, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:14:43 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:14:43 standalone.localdomain podman[173409]: 2025-10-13 14:14:43.033919708 +0000 UTC m=+0.282880563 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T16:03:34, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, 
distribution-scope=public, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_sriov_agent, release=1, io.buildah.version=1.33.12, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969)
Oct 13 14:14:43 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:14:43 standalone.localdomain podman[173417]: 2025-10-13 14:14:43.055073734 +0000 UTC m=+0.300612814 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, io.openshift.expose-services=, name=rhosp17/openstack-barbican-worker, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-barbican-worker-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.buildah.version=1.33.12, build-date=2025-07-21T15:36:22, container_name=barbican_worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 barbican-worker, maintainer=OpenStack TripleO 
Team, summary=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, version=17.1.9, tcib_managed=true, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1)
Oct 13 14:14:43 standalone.localdomain podman[173403]: 2025-10-13 14:14:43.06700769 +0000 UTC m=+0.330897371 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:54, container_name=neutron_dhcp, managed_by=tripleo_ansible, release=1, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Oct 13 14:14:43 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:14:43 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:14:43 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:14:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1193: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:44 standalone.localdomain ceph-mon[29756]: pgmap v1193: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:14:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1194: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:46 standalone.localdomain ceph-mon[29756]: pgmap v1194: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:46 standalone.localdomain runuser[173579]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:14:47 standalone.localdomain runuser[173579]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:14:47 standalone.localdomain runuser[173648]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:14:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1195: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:14:47 standalone.localdomain podman[173693]: 2025-10-13 14:14:47.833214281 +0000 UTC m=+0.090234421 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, tcib_managed=true, vcs-type=git, container_name=ovn_cluster_northd, name=rhosp17/openstack-ovn-northd, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, release=1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., build-date=2025-07-21T13:30:04, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red 
Hat OpenStack Platform 17.1 ovn-northd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, config_id=ovn_cluster_northd, com.redhat.component=openstack-ovn-northd-container, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:14:47 standalone.localdomain podman[173693]: 2025-10-13 14:14:47.847896099 +0000 UTC m=+0.104916199 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, name=rhosp17/openstack-ovn-northd, com.redhat.component=openstack-ovn-northd-container, description=Red Hat OpenStack Platform 17.1 ovn-northd, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=ovn_cluster_northd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, container_name=ovn_cluster_northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, release=1)
Oct 13 14:14:47 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:14:48 standalone.localdomain runuser[173648]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:14:48 standalone.localdomain runuser[173729]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:14:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:14:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:14:48 standalone.localdomain ceph-mon[29756]: pgmap v1195: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:48 standalone.localdomain podman[173858]: 2025-10-13 14:14:48.821594069 +0000 UTC m=+0.086834736 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, distribution-scope=public, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-nova-compute)
Oct 13 14:14:48 standalone.localdomain podman[173859]: 2025-10-13 14:14:48.867813152 +0000 UTC m=+0.127961143 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, vcs-type=git, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.buildah.version=1.33.12, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, description=Red Hat OpenStack Platform 17.1 mariadb, 
io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, container_name=clustercheck, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45)
Oct 13 14:14:48 standalone.localdomain podman[173858]: 2025-10-13 14:14:48.873782376 +0000 UTC m=+0.139023043 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, managed_by=tripleo_ansible, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1, build-date=2025-07-21T14:48:37, version=17.1.9, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64)
Oct 13 14:14:48 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:14:48 standalone.localdomain podman[173859]: 2025-10-13 14:14:48.936778782 +0000 UTC m=+0.196926713 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, release=1, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, container_name=clustercheck, io.buildah.version=1.33.12, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public)
Oct 13 14:14:48 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:14:48 standalone.localdomain runuser[173729]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:14:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1196: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:14:50 standalone.localdomain ceph-mon[29756]: pgmap v1196: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1197: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:52 standalone.localdomain ceph-mon[29756]: pgmap v1197: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:53 standalone.localdomain sudo[174142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:14:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:14:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:14:53 standalone.localdomain sudo[174142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:14:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:14:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:14:53 standalone.localdomain sudo[174142]: pam_unix(sudo:session): session closed for user root
Oct 13 14:14:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:14:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:14:53 standalone.localdomain sudo[174157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:14:53 standalone.localdomain sudo[174157]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:14:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1198: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:53 standalone.localdomain sudo[174157]: pam_unix(sudo:session): session closed for user root
Oct 13 14:14:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:14:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:14:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:14:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:14:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:14:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:14:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:14:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:14:54 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev aa3b7959-a78f-43f9-9776-32f0213217fc (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:14:54 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev aa3b7959-a78f-43f9-9776-32f0213217fc (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:14:54 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event aa3b7959-a78f-43f9-9776-32f0213217fc (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:14:54 standalone.localdomain sudo[174213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:14:54 standalone.localdomain sudo[174213]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:14:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:14:54 standalone.localdomain sudo[174213]: pam_unix(sudo:session): session closed for user root
Oct 13 14:14:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:14:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:14:54 standalone.localdomain systemd[1]: tmp-crun.ntxjqW.mount: Deactivated successfully.
Oct 13 14:14:54 standalone.localdomain podman[174228]: 2025-10-13 14:14:54.234632152 +0000 UTC m=+0.107887240 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack 
Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, release=1, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12)
Oct 13 14:14:54 standalone.localdomain podman[174230]: 2025-10-13 14:14:54.487515516 +0000 UTC m=+0.352236493 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, build-date=2025-07-21T16:10:12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=cinder_scheduler, name=rhosp17/openstack-cinder-scheduler, release=1, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-cinder-scheduler-container)
Oct 13 14:14:54 standalone.localdomain podman[174230]: 2025-10-13 14:14:54.581189791 +0000 UTC m=+0.445910768 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, container_name=cinder_scheduler, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-cinder-scheduler-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_id=tripleo_step4, name=rhosp17/openstack-cinder-scheduler, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, build-date=2025-07-21T16:10:12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, 
maintainer=OpenStack TripleO Team, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, vendor=Red Hat, Inc., batch=17.1_20250721.1)
Oct 13 14:14:54 standalone.localdomain podman[174228]: 2025-10-13 14:14:54.694452315 +0000 UTC m=+0.567707433 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, container_name=iscsid, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Oct 13 14:14:54 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:14:54 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:14:54 standalone.localdomain podman[174229]: 2025-10-13 14:14:54.338789797 +0000 UTC m=+0.208748444 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, container_name=cinder_api_cron, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, name=rhosp17/openstack-cinder-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-07-21T15:58:55, tcib_managed=true, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-cinder-api-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:14:54 standalone.localdomain ceph-mon[29756]: pgmap v1198: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:14:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:14:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:14:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:14:54 standalone.localdomain podman[174229]: 2025-10-13 14:14:54.827880336 +0000 UTC m=+0.697838973 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, description=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, com.redhat.component=openstack-cinder-api-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:58:55, architecture=x86_64, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, container_name=cinder_api_cron, io.buildah.version=1.33.12, name=rhosp17/openstack-cinder-api, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, 
io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:14:54 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:14:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:14:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1199: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:56 standalone.localdomain ceph-mon[29756]: pgmap v1199: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:14:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/4236962658' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:14:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:14:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/4236962658' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:14:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4236962658' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:14:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4236962658' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:14:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:14:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1200: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:57 standalone.localdomain podman[174388]: 2025-10-13 14:14:57.821359828 +0000 UTC m=+0.082080272 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, io.openshift.expose-services=, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:18, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, container_name=keystone, managed_by=tripleo_ansible, 
description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-keystone-container, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, vcs-type=git)
Oct 13 14:14:57 standalone.localdomain podman[174388]: 2025-10-13 14:14:57.859736041 +0000 UTC m=+0.120456465 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=keystone, version=17.1.9, build-date=2025-07-21T13:27:18, tcib_managed=true, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, release=1, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', 
'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:14:57 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:14:58 standalone.localdomain ceph-mon[29756]: pgmap v1200: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:14:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:14:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:14:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:14:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:14:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:14:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:14:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:14:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:14:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:14:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:14:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:14:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:14:58 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:14:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:14:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:14:58 standalone.localdomain podman[174507]: 2025-10-13 14:14:58.929318924 +0000 UTC m=+0.175955233 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.openshift.expose-services=, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, container_name=heat_engine, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, 
distribution-scope=public, com.redhat.component=openstack-heat-engine-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, build-date=2025-07-21T15:44:11, summary=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:14:58 standalone.localdomain podman[174507]: 2025-10-13 14:14:58.973695501 +0000 UTC m=+0.220331840 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, 
container_name=heat_engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-heat-engine-container)
Oct 13 14:14:58 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:14:58 standalone.localdomain podman[174514]: 2025-10-13 14:14:58.98085924 +0000 UTC m=+0.236522935 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, config_id=tripleo_step1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, release=1, architecture=x86_64, com.redhat.component=openstack-memcached-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, 
build-date=2025-07-21T12:58:43, batch=17.1_20250721.1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, container_name=memcached, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git)
Oct 13 14:14:59 standalone.localdomain podman[174512]: 2025-10-13 14:14:58.855991711 +0000 UTC m=+0.112011487 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-scheduler, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, container_name=nova_scheduler, io.buildah.version=1.33.12, build-date=2025-07-21T16:02:54, tcib_managed=true)
Oct 13 14:14:59 standalone.localdomain podman[174521]: 2025-10-13 14:14:58.883480992 +0000 UTC m=+0.119401313 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, io.openshift.expose-services=, container_name=manila_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, build-date=2025-07-21T16:06:43, version=17.1.9, description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, managed_by=tripleo_ansible, tcib_managed=true, 
distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, com.redhat.component=openstack-manila-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1)
Oct 13 14:14:59 standalone.localdomain podman[174529]: 2025-10-13 14:14:59.011783655 +0000 UTC m=+0.251905624 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, release=1, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, 
config_id=tripleo_step3, container_name=horizon, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=, build-date=2025-07-21T13:58:15, com.redhat.component=openstack-horizon-container)
Oct 13 14:14:59 standalone.localdomain podman[174514]: 2025-10-13 14:14:59.071799591 +0000 UTC m=+0.327463276 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, container_name=memcached, com.redhat.component=openstack-memcached-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, name=rhosp17/openstack-memcached, vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, config_id=tripleo_step1, 
architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:43, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:14:59 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:14:59 standalone.localdomain podman[174535]: 2025-10-13 14:14:58.908288321 +0000 UTC m=+0.153815966 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-heat-api, tcib_managed=true, release=1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible)
Oct 13 14:14:59 standalone.localdomain podman[174506]: 2025-10-13 14:14:58.958770245 +0000 UTC m=+0.223529568 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, version=17.1.9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.component=openstack-manila-scheduler-container, container_name=manila_scheduler, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, 
config_id=tripleo_step4, architecture=x86_64, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, release=1, name=rhosp17/openstack-manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, vendor=Red Hat, Inc., build-date=2025-07-21T15:56:28, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:14:59 standalone.localdomain podman[174521]: 2025-10-13 14:14:59.115091525 +0000 UTC m=+0.351011856 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, summary=Red Hat OpenStack Platform 17.1 manila-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, name=rhosp17/openstack-manila-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-07-21T16:06:43, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-manila-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, description=Red Hat 
OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, architecture=x86_64, release=1, container_name=manila_api_cron, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team)
Oct 13 14:14:59 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:14:59 standalone.localdomain podman[174505]: 2025-10-13 14:14:59.123942396 +0000 UTC m=+0.393383912 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, managed_by=tripleo_ansible, container_name=heat_api_cfn, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-heat-api-cfn, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, distribution-scope=public, version=17.1.9, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1)
Oct 13 14:14:59 standalone.localdomain podman[174536]: 2025-10-13 14:14:59.082614572 +0000 UTC m=+0.320028139 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, container_name=neutron_api, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-neutron-server-container, description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, 
tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, build-date=2025-07-21T15:44:03)
Oct 13 14:14:59 standalone.localdomain podman[174512]: 2025-10-13 14:14:59.139001117 +0000 UTC m=+0.395020923 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.component=openstack-nova-scheduler-container, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, tcib_managed=true, name=rhosp17/openstack-nova-scheduler, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_scheduler, build-date=2025-07-21T16:02:54, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:14:59 standalone.localdomain podman[174522]: 2025-10-13 14:14:58.989792964 +0000 UTC m=+0.228663615 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, com.redhat.component=openstack-heat-api-container, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cron, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T15:56:26, 
com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, config_id=tripleo_step4, release=1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:14:59 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:14:59 standalone.localdomain podman[174529]: 2025-10-13 14:14:59.150470437 +0000 UTC m=+0.390592456 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, config_id=tripleo_step3, container_name=horizon, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-horizon-container, description=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:15, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, tcib_managed=true, version=17.1.9, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5)
Oct 13 14:14:59 standalone.localdomain podman[174505]: 2025-10-13 14:14:59.159854504 +0000 UTC m=+0.429296030 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, version=17.1.9, io.openshift.expose-services=, container_name=heat_api_cfn, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:14:59 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:14:59 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:14:59 standalone.localdomain podman[174522]: 2025-10-13 14:14:59.178970979 +0000 UTC m=+0.417841660 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, container_name=heat_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, build-date=2025-07-21T15:56:26)
Oct 13 14:14:59 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:14:59 standalone.localdomain podman[174559]: 2025-10-13 14:14:59.207584484 +0000 UTC m=+0.434068527 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, tcib_managed=true, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, release=1, io.buildah.version=1.33.12, config_id=tripleo_step4)
Oct 13 14:14:59 standalone.localdomain podman[174559]: 2025-10-13 14:14:59.214024351 +0000 UTC m=+0.440508364 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T13:07:52, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, container_name=logrotate_crond, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1)
Oct 13 14:14:59 standalone.localdomain podman[174520]: 2025-10-13 14:14:59.219593242 +0000 UTC m=+0.460815815 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, io.buildah.version=1.33.12, version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, build-date=2025-07-21T15:58:55, description=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, container_name=cinder_api, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-cinder-api-container, batch=17.1_20250721.1)
Oct 13 14:14:59 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:14:59 standalone.localdomain podman[174506]: 2025-10-13 14:14:59.242373448 +0000 UTC m=+0.507132821 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, name=rhosp17/openstack-manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=manila_scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-manila-scheduler-container, config_id=tripleo_step4, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, vendor=Red Hat, Inc., managed_by=tripleo_ansible, 
build-date=2025-07-21T15:56:28, batch=17.1_20250721.1, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 manila-scheduler)
Oct 13 14:14:59 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:14:59 standalone.localdomain podman[174536]: 2025-10-13 14:14:59.264831255 +0000 UTC m=+0.502244862 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-07-21T15:44:03, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack 
Platform 17.1 neutron-server, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-server, container_name=neutron_api, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public)
Oct 13 14:14:59 standalone.localdomain podman[174520]: 2025-10-13 14:14:59.271632033 +0000 UTC m=+0.512854626 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, build-date=2025-07-21T15:58:55, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, release=1, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, 
com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.component=openstack-cinder-api-container, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, container_name=cinder_api, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:14:59 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:14:59 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:14:59 standalone.localdomain podman[174535]: 2025-10-13 14:14:59.293533683 +0000 UTC m=+0.539061348 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, summary=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-heat-api, tcib_managed=true, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., 
io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-heat-api-container)
Oct 13 14:14:59 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:14:59 standalone.localdomain podman[174557]: 2025-10-13 14:14:59.380942116 +0000 UTC m=+0.618444416 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, batch=17.1_20250721.1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, com.redhat.component=openstack-nova-conductor-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, container_name=nova_conductor, build-date=2025-07-21T15:44:17, version=17.1.9, name=rhosp17/openstack-nova-conductor, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-conductor, description=Red Hat 
OpenStack Platform 17.1 nova-conductor, vcs-type=git, managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team)
Oct 13 14:14:59 standalone.localdomain podman[174557]: 2025-10-13 14:14:59.410996055 +0000 UTC m=+0.648498355 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, com.redhat.component=openstack-nova-conductor-container, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-conductor, container_name=nova_conductor, description=Red Hat OpenStack Platform 17.1 nova-conductor, build-date=2025-07-21T15:44:17, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, name=rhosp17/openstack-nova-conductor, vendor=Red Hat, Inc., release=1, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, version=17.1.9)
Oct 13 14:14:59 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:14:59 standalone.localdomain runuser[174859]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:14:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1201: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:14:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:15:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:15:00 standalone.localdomain runuser[174859]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:15:00 standalone.localdomain runuser[175012]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:15:00 standalone.localdomain ceph-mon[29756]: pgmap v1201: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:01 standalone.localdomain runuser[175012]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:15:01 standalone.localdomain runuser[175115]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:15:01 standalone.localdomain runuser[175115]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:15:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1202: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:01 standalone.localdomain ceph-mon[29756]: pgmap v1202: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:02 standalone.localdomain podman[175184]: 2025-10-13 14:15:02.217893322 +0000 UTC m=+0.092835900 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, tcib_managed=true, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb)
Oct 13 14:15:02 standalone.localdomain podman[175184]: 2025-10-13 14:15:02.253878083 +0000 UTC m=+0.128820641 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.component=openstack-mariadb-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45, name=rhosp17/openstack-mariadb, distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:15:02 standalone.localdomain podman[175218]: 2025-10-13 14:15:02.446752382 +0000 UTC m=+0.088965993 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-haproxy-container, release=1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T13:08:11, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, name=rhosp17/openstack-haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., batch=17.1_20250721.1)
Oct 13 14:15:02 standalone.localdomain podman[175218]: 2025-10-13 14:15:02.480923326 +0000 UTC m=+0.123136927 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, com.redhat.component=openstack-haproxy-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.buildah.version=1.33.12, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T13:08:11, description=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:15:02 standalone.localdomain podman[175246]: 2025-10-13 14:15:02.539614231 +0000 UTC m=+0.062224683 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, summary=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, tcib_managed=true, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-rabbitmq, release=1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, build-date=2025-07-21T13:08:05, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1)
Oct 13 14:15:02 standalone.localdomain podman[175246]: 2025-10-13 14:15:02.62326752 +0000 UTC m=+0.145877872 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, build-date=2025-07-21T13:08:05, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.component=openstack-rabbitmq-container, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rabbitmq, tcib_managed=true, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 14:15:03 standalone.localdomain sshd[175297]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:03 standalone.localdomain sshd[175297]: Accepted publickey for root from 192.168.122.11 port 33856 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:15:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:15:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:15:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:15:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:15:03 standalone.localdomain systemd[1]: Created slice User Slice of UID 0.
Oct 13 14:15:03 standalone.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 13 14:15:03 standalone.localdomain systemd-logind[45629]: New session 46 of user root.
Oct 13 14:15:03 standalone.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 13 14:15:03 standalone.localdomain systemd[1]: Starting User Manager for UID 0...
Oct 13 14:15:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1203: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:03 standalone.localdomain systemd[175349]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:03 standalone.localdomain podman[175305]: 2025-10-13 14:15:03.772391954 +0000 UTC m=+0.112817071 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, container_name=nova_migration_target, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team)
Oct 13 14:15:03 standalone.localdomain podman[175304]: 2025-10-13 14:15:03.814392848 +0000 UTC m=+0.154954310 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, config_id=tripleo_step4, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-object-container, version=17.1.9, container_name=swift_object_server, io.buildah.version=1.33.12)
Oct 13 14:15:03 standalone.localdomain podman[175307]: 2025-10-13 14:15:03.824462946 +0000 UTC m=+0.153041351 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, distribution-scope=public, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, name=rhosp17/openstack-swift-account, tcib_managed=true, io.openshift.expose-services=, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, com.redhat.component=openstack-swift-account-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:15:03 standalone.localdomain podman[175306]: 2025-10-13 14:15:03.874662731 +0000 UTC m=+0.208552849 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, build-date=2025-07-21T15:54:32, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, vendor=Red Hat, Inc., container_name=swift_container_server, io.openshift.expose-services=, config_id=tripleo_step4)
Oct 13 14:15:03 standalone.localdomain podman[175320]: 2025-10-13 14:15:03.926675472 +0000 UTC m=+0.241776815 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T15:24:10, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, container_name=nova_vnc_proxy, name=rhosp17/openstack-nova-novncproxy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-nova-novncproxy-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy)
Oct 13 14:15:03 standalone.localdomain systemd[175349]: Queued start job for default target Main User Target.
Oct 13 14:15:03 standalone.localdomain systemd[175349]: Created slice User Application Slice.
Oct 13 14:15:03 standalone.localdomain systemd[175349]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 13 14:15:03 standalone.localdomain systemd[175349]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 14:15:03 standalone.localdomain systemd[175349]: Reached target Paths.
Oct 13 14:15:03 standalone.localdomain systemd[175349]: Reached target Timers.
Oct 13 14:15:03 standalone.localdomain systemd[175349]: Starting D-Bus User Message Bus Socket...
Oct 13 14:15:03 standalone.localdomain systemd[175349]: Starting Create User's Volatile Files and Directories...
Oct 13 14:15:03 standalone.localdomain systemd[175349]: Finished Create User's Volatile Files and Directories.
Oct 13 14:15:03 standalone.localdomain systemd[175349]: Listening on D-Bus User Message Bus Socket.
Oct 13 14:15:03 standalone.localdomain systemd[175349]: Reached target Sockets.
Oct 13 14:15:03 standalone.localdomain systemd[175349]: Reached target Basic System.
Oct 13 14:15:03 standalone.localdomain systemd[175349]: Reached target Main User Target.
Oct 13 14:15:03 standalone.localdomain systemd[175349]: Startup finished in 215ms.
Oct 13 14:15:03 standalone.localdomain systemd[1]: Started User Manager for UID 0.
Oct 13 14:15:03 standalone.localdomain systemd[1]: Started Session 46 of User root.
Oct 13 14:15:03 standalone.localdomain sshd[175297]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:04 standalone.localdomain podman[175304]: 2025-10-13 14:15:04.009768564 +0000 UTC m=+0.350330036 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, name=rhosp17/openstack-swift-object, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, version=17.1.9, vcs-type=git, release=1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 14:15:04 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:15:04 standalone.localdomain podman[175307]: 2025-10-13 14:15:04.028676541 +0000 UTC m=+0.357254956 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_account_server, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, tcib_managed=true)
Oct 13 14:15:04 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:15:04 standalone.localdomain podman[175306]: 2025-10-13 14:15:04.091572136 +0000 UTC m=+0.425462274 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, architecture=x86_64, version=17.1.9, build-date=2025-07-21T15:54:32, container_name=swift_container_server, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1)
Oct 13 14:15:04 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:15:04 standalone.localdomain podman[175305]: 2025-10-13 14:15:04.124910015 +0000 UTC m=+0.465335142 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, distribution-scope=public, vcs-type=git, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.buildah.version=1.33.12)
Oct 13 14:15:04 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:15:04 standalone.localdomain podman[175320]: 2025-10-13 14:15:04.234781355 +0000 UTC m=+0.549882708 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, container_name=nova_vnc_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, com.redhat.component=openstack-nova-novncproxy-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git, version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:24:10, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1, io.openshift.expose-services=, architecture=x86_64)
Oct 13 14:15:04 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:15:04 standalone.localdomain ceph-mon[29756]: pgmap v1203: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:15:05 standalone.localdomain haproxy[70940]: 172.21.0.2:58174 [13/Oct/2025:14:15:05.544] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/5/5 300 515 - - ---- 52/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1204: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:05 standalone.localdomain haproxy[70940]: 172.21.0.2:58174 [13/Oct/2025:14:15:05.551] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/306/306 201 8100 - - ---- 52/1/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:05 standalone.localdomain haproxy[70940]: 172.21.0.2:40010 [13/Oct/2025:14:15:05.879] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/4/4 300 1507 - - ---- 53/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:05 standalone.localdomain haproxy[70940]: 172.17.0.2:45074 [13/Oct/2025:14:15:05.905] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 54/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:06 standalone.localdomain ceph-mon[29756]: pgmap v1204: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:06 standalone.localdomain haproxy[70940]: 172.17.0.2:45074 [13/Oct/2025:14:15:05.912] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/377/377 201 8103 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:06 standalone.localdomain haproxy[70940]: 172.17.0.2:45074 [13/Oct/2025:14:15:06.291] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/40/40 200 8095 - - ---- 54/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:06 standalone.localdomain haproxy[70940]: 172.21.0.2:40010 [13/Oct/2025:14:15:05.898] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/442/442 404 340 - - ---- 54/1/0/0/0 0/0 "GET /v2/images/cirros HTTP/1.1"
Oct 13 14:15:06 standalone.localdomain haproxy[70940]: 172.21.0.2:40010 [13/Oct/2025:14:15:06.343] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/14/14 200 254 - - ---- 54/1/0/0/0 0/0 "GET /v2/images?name=cirros HTTP/1.1"
Oct 13 14:15:06 standalone.localdomain haproxy[70940]: 172.21.0.2:40010 [13/Oct/2025:14:15:06.362] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/12/12 200 257 - - ---- 54/1/0/0/0 0/0 "GET /v2/images?os_hidden=True HTTP/1.1"
Oct 13 14:15:06 standalone.localdomain podman[175494]: 2025-10-13 14:15:06.469244495 +0000 UTC m=+0.105345744 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:13:39, summary=Red Hat OpenStack Platform 17.1 cinder-volume, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-type=git, com.redhat.component=openstack-cinder-volume-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cinder-volume, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, name=rhosp17/openstack-cinder-volume, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, version=17.1.9, release=1, vendor=Red Hat, Inc., tcib_managed=true)
Oct 13 14:15:06 standalone.localdomain podman[175494]: 2025-10-13 14:15:06.503044958 +0000 UTC m=+0.139146167 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, build-date=2025-07-21T16:13:39, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-volume, batch=17.1_20250721.1, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cinder-volume, name=rhosp17/openstack-cinder-volume, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, io.buildah.version=1.33.12, com.redhat.component=openstack-cinder-volume-container)
Oct 13 14:15:06 standalone.localdomain sshd[175435]: Received disconnect from 192.168.122.11 port 33856:11: disconnected by user
Oct 13 14:15:06 standalone.localdomain sshd[175435]: Disconnected from user root 192.168.122.11 port 33856
Oct 13 14:15:06 standalone.localdomain sshd[175297]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:06 standalone.localdomain systemd-logind[45629]: Session 46 logged out. Waiting for processes to exit.
Oct 13 14:15:06 standalone.localdomain systemd[1]: session-46.scope: Deactivated successfully.
Oct 13 14:15:06 standalone.localdomain systemd[1]: session-46.scope: Consumed 1.804s CPU time.
Oct 13 14:15:06 standalone.localdomain systemd-logind[45629]: Removed session 46.
Oct 13 14:15:06 standalone.localdomain sshd[175534]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:06 standalone.localdomain sshd[175534]: Accepted publickey for root from 192.168.122.11 port 33860 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:15:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:15:06 standalone.localdomain systemd-logind[45629]: New session 48 of user root.
Oct 13 14:15:06 standalone.localdomain systemd[1]: Started Session 48 of User root.
Oct 13 14:15:06 standalone.localdomain sshd[175534]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:06 standalone.localdomain podman[175537]: 2025-10-13 14:15:06.86345548 +0000 UTC m=+0.068771593 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, release=1, summary=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.component=openstack-placement-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, container_name=placement_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, 
architecture=x86_64, description=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-placement-api, io.buildah.version=1.33.12, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, tcib_managed=true)
Oct 13 14:15:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:15:06 standalone.localdomain podman[175537]: 2025-10-13 14:15:06.922916459 +0000 UTC m=+0.128232542 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-placement-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, summary=Red Hat OpenStack Platform 17.1 placement-api, description=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, release=1, version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-placement-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', 
'/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, container_name=placement_api, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67)
Oct 13 14:15:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:15:06 standalone.localdomain podman[175586]: 2025-10-13 14:15:06.957625491 +0000 UTC m=+0.073066916 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, release=1, build-date=2025-07-21T13:58:20, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.buildah.version=1.33.12, container_name=glance_api_cron, config_id=tripleo_step4)
Oct 13 14:15:06 standalone.localdomain podman[175538]: 2025-10-13 14:15:06.92652678 +0000 UTC m=+0.128244934 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1)
Oct 13 14:15:06 standalone.localdomain podman[175586]: 2025-10-13 14:15:06.993770166 +0000 UTC m=+0.109211591 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:20, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron)
Oct 13 14:15:06 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:15:07 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:15:07 standalone.localdomain podman[175538]: 2025-10-13 14:15:07.060863108 +0000 UTC m=+0.262581282 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, distribution-scope=public, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, batch=17.1_20250721.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4)
Oct 13 14:15:07 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:15:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:15:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:15:07 standalone.localdomain podman[175615]: 2025-10-13 14:15:07.061360384 +0000 UTC m=+0.101356071 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:15:07 standalone.localdomain podman[175642]: 2025-10-13 14:15:07.180526308 +0000 UTC m=+0.061231653 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=nova_metadata, summary=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true)
Oct 13 14:15:07 standalone.localdomain podman[175615]: 2025-10-13 14:15:07.206128541 +0000 UTC m=+0.246124218 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public)
Oct 13 14:15:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:15:07 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:15:07 standalone.localdomain podman[175642]: 2025-10-13 14:15:07.240201213 +0000 UTC m=+0.120906518 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, build-date=2025-07-21T16:05:11, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_metadata, version=17.1.9, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-nova-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1)
Oct 13 14:15:07 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:15:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:15:07 standalone.localdomain podman[175689]: 2025-10-13 14:15:07.33885734 +0000 UTC m=+0.064568935 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, container_name=glance_api_internal, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-glance-api)
Oct 13 14:15:07 standalone.localdomain podman[175683]: 2025-10-13 14:15:07.391344766 +0000 UTC m=+0.155440136 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20, container_name=glance_api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Oct 13 14:15:07 standalone.localdomain podman[175641]: 2025-10-13 14:15:07.311553085 +0000 UTC m=+0.193077836 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, container_name=swift_proxy, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server)
Oct 13 14:15:07 standalone.localdomain systemd[1]: tmp-crun.z9uwKl.mount: Deactivated successfully.
Oct 13 14:15:07 standalone.localdomain podman[175641]: 2025-10-13 14:15:07.507788237 +0000 UTC m=+0.389312948 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server, container_name=swift_proxy)
Oct 13 14:15:07 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:15:07 standalone.localdomain podman[175689]: 2025-10-13 14:15:07.541827138 +0000 UTC m=+0.267538703 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_id=tripleo_step4, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, distribution-scope=public, com.redhat.component=openstack-glance-api-container, container_name=glance_api_internal, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, release=1, tcib_managed=true, io.openshift.expose-services=)
Oct 13 14:15:07 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:15:07 standalone.localdomain podman[175683]: 2025-10-13 14:15:07.598021206 +0000 UTC m=+0.362116656 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=glance_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, release=1, com.redhat.component=openstack-glance-api-container, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']})
Oct 13 14:15:07 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:15:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1205: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:08 standalone.localdomain haproxy[70940]: 172.21.0.2:58186 [13/Oct/2025:14:15:08.588] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:08 standalone.localdomain ceph-mon[29756]: pgmap v1205: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:08 standalone.localdomain haproxy[70940]: 172.21.0.2:58186 [13/Oct/2025:14:15:08.594] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/291/291 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:08 standalone.localdomain haproxy[70940]: 172.21.0.2:40022 [13/Oct/2025:14:15:08.898] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/3/3 300 1507 - - ---- 54/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:08 standalone.localdomain haproxy[70940]: 172.17.0.2:45074 [13/Oct/2025:14:15:08.913] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/21/21 200 8095 - - ---- 54/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:08 standalone.localdomain haproxy[70940]: 172.21.0.2:40022 [13/Oct/2025:14:15:08.909] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/49/49 201 1083 - - ---- 54/1/0/0/0 0/0 "POST /v2/images HTTP/1.1"
Oct 13 14:15:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1206: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:15:10 standalone.localdomain ceph-mon[29756]: pgmap v1206: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:15:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:15:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1207: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e52 do_prune osdmap full prune enabled
Oct 13 14:15:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e53 e53: 1 total, 1 up, 1 in
Oct 13 14:15:11 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e53: 1 total, 1 up, 1 in
Oct 13 14:15:11 standalone.localdomain systemd[1]: tmp-crun.JHyMpD.mount: Deactivated successfully.
Oct 13 14:15:11 standalone.localdomain systemd[1]: tmp-crun.SDcpiH.mount: Deactivated successfully.
Oct 13 14:15:11 standalone.localdomain podman[176097]: 2025-10-13 14:15:11.896811531 +0000 UTC m=+0.154708662 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, container_name=nova_api, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, name=rhosp17/openstack-nova-api, managed_by=tripleo_ansible, release=1, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:15:11 standalone.localdomain podman[176096]: 2025-10-13 14:15:11.86017121 +0000 UTC m=+0.120562558 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_id=tripleo_step3, container_name=keystone_cron, io.openshift.expose-services=, build-date=2025-07-21T13:27:18, description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-keystone-container)
Oct 13 14:15:11 standalone.localdomain podman[176097]: 2025-10-13 14:15:11.938233158 +0000 UTC m=+0.196130269 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=nova_api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-nova-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T16:05:11, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f)
Oct 13 14:15:11 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:15:11 standalone.localdomain haproxy[70940]: 172.21.0.2:40022 [13/Oct/2025:14:15:08.963] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/3016/3016 204 188 - - ---- 54/1/0/0/0 0/0 "PUT /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b/file HTTP/1.1"
Oct 13 14:15:11 standalone.localdomain podman[176096]: 2025-10-13 14:15:11.995381175 +0000 UTC m=+0.255772553 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:18, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, container_name=keystone_cron, distribution-scope=public, release=1, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:15:12 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:15:12 standalone.localdomain sshd[175549]: Received disconnect from 192.168.122.11 port 33860:11: disconnected by user
Oct 13 14:15:12 standalone.localdomain sshd[175549]: Disconnected from user root 192.168.122.11 port 33860
Oct 13 14:15:12 standalone.localdomain sshd[175534]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:12 standalone.localdomain systemd[1]: session-48.scope: Deactivated successfully.
Oct 13 14:15:12 standalone.localdomain systemd[1]: session-48.scope: Consumed 5.032s CPU time.
Oct 13 14:15:12 standalone.localdomain systemd-logind[45629]: Session 48 logged out. Waiting for processes to exit.
Oct 13 14:15:12 standalone.localdomain systemd-logind[45629]: Removed session 48.
Oct 13 14:15:12 standalone.localdomain sshd[176167]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:12 standalone.localdomain runuser[176173]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:15:12 standalone.localdomain sshd[176167]: Accepted publickey for root from 192.168.122.11 port 56422 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:12 standalone.localdomain systemd-logind[45629]: New session 49 of user root.
Oct 13 14:15:12 standalone.localdomain systemd[1]: Started Session 49 of User root.
Oct 13 14:15:12 standalone.localdomain sshd[176167]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:12 standalone.localdomain ceph-mon[29756]: pgmap v1207: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:12 standalone.localdomain ceph-mon[29756]: osdmap e53: 1 total, 1 up, 1 in
Oct 13 14:15:12 standalone.localdomain runuser[176173]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:15:13 standalone.localdomain runuser[176261]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:15:13 standalone.localdomain runuser[176261]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:15:13 standalone.localdomain runuser[176316]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:15:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:15:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:15:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:15:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1209: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:15:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:15:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:15:13 standalone.localdomain systemd[149397]: Created slice User Background Tasks Slice.
Oct 13 14:15:13 standalone.localdomain systemd[149397]: Starting Cleanup of User's Temporary Files and Directories...
Oct 13 14:15:13 standalone.localdomain podman[176332]: 2025-10-13 14:15:13.826674064 +0000 UTC m=+0.078397989 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-barbican-api, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-api-container, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T15:22:44, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, container_name=barbican_api, distribution-scope=public)
Oct 13 14:15:13 standalone.localdomain systemd[149397]: Finished Cleanup of User's Temporary Files and Directories.
Oct 13 14:15:13 standalone.localdomain podman[176331]: 2025-10-13 14:15:13.874250579 +0000 UTC m=+0.136165676 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, com.redhat.component=openstack-neutron-dhcp-agent-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, build-date=2025-07-21T16:28:54, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, container_name=neutron_dhcp, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-dhcp-agent, vendor=Red Hat, Inc.)
Oct 13 14:15:13 standalone.localdomain podman[176332]: 2025-10-13 14:15:13.88177944 +0000 UTC m=+0.133503345 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, description=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, release=1, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T15:22:44, container_name=barbican_api, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-api-container)
Oct 13 14:15:13 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:15:13 standalone.localdomain podman[176331]: 2025-10-13 14:15:13.923845076 +0000 UTC m=+0.185760153 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, container_name=neutron_dhcp, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:54, tcib_managed=true, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-dhcp-agent, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Oct 13 14:15:13 standalone.localdomain podman[176335]: 2025-10-13 14:15:13.931442408 +0000 UTC m=+0.185488144 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:36:22, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, 
config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-barbican-worker, distribution-scope=public, com.redhat.component=openstack-barbican-worker-container, container_name=barbican_worker, tcib_managed=true, release=1)
Oct 13 14:15:13 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:15:13 standalone.localdomain podman[176335]: 2025-10-13 14:15:13.962000443 +0000 UTC m=+0.216046199 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, distribution-scope=public, build-date=2025-07-21T15:36:22, vendor=Red Hat, Inc., config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, com.redhat.component=openstack-barbican-worker-container, container_name=barbican_worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 barbican-worker, managed_by=tripleo_ansible)
Oct 13 14:15:13 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:15:13 standalone.localdomain podman[176334]: 2025-10-13 14:15:13.976707422 +0000 UTC m=+0.232033917 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T16:03:34, name=rhosp17/openstack-neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, io.openshift.expose-services=, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 neutron-sriov-agent, tcib_managed=true, vcs-type=git, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1)
Oct 13 14:15:14 standalone.localdomain podman[176333]: 2025-10-13 14:15:14.03154912 +0000 UTC m=+0.287817844 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-barbican-keystone-listener-container, build-date=2025-07-21T16:18:19, container_name=barbican_keystone_listener, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, name=rhosp17/openstack-barbican-keystone-listener, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12)
Oct 13 14:15:14 standalone.localdomain podman[176334]: 2025-10-13 14:15:14.049672544 +0000 UTC m=+0.304998979 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, container_name=neutron_sriov_agent, batch=17.1_20250721.1, build-date=2025-07-21T16:03:34, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, release=1, com.redhat.component=openstack-neutron-sriov-agent-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, version=17.1.9, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:15:14 standalone.localdomain podman[176333]: 2025-10-13 14:15:14.060084852 +0000 UTC m=+0.316353586 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.component=openstack-barbican-keystone-listener-container, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, container_name=barbican_keystone_listener, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, 
distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T16:18:19, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1)
Oct 13 14:15:14 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:15:14 standalone.localdomain haproxy[70940]: 172.21.0.2:46134 [13/Oct/2025:14:15:14.069] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:14 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:15:14 standalone.localdomain podman[176344]: 2025-10-13 14:15:14.137766928 +0000 UTC m=+0.384962135 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-nova-api, architecture=x86_64, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api_cron, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, build-date=2025-07-21T16:05:11, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:15:14 standalone.localdomain podman[176344]: 2025-10-13 14:15:14.178975259 +0000 UTC m=+0.426170476 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, container_name=nova_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container)
Oct 13 14:15:14 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:15:14 standalone.localdomain haproxy[70940]: 172.21.0.2:46134 [13/Oct/2025:14:15:14.074] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/303/303 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:14 standalone.localdomain haproxy[70940]: 172.21.0.2:41372 [13/Oct/2025:14:15:14.396] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/3/3 200 747 - - ---- 54/1/0/0/0 0/0 "GET /v2.1 HTTP/1.1"
Oct 13 14:15:14 standalone.localdomain haproxy[70940]: 172.17.0.2:51160 [13/Oct/2025:14:15:14.406] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/41/41 200 8095 - - ---- 55/3/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:14 standalone.localdomain haproxy[70940]: 172.21.0.2:41372 [13/Oct/2025:14:15:14.401] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/57/57 404 467 - - ---- 55/1/0/0/0 0/0 "GET /v2.1/flavors/m1.small HTTP/1.1"
Oct 13 14:15:14 standalone.localdomain haproxy[70940]: 172.21.0.2:41372 [13/Oct/2025:14:15:14.462] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/14/14 200 1406 - - ---- 55/1/0/0/0 0/0 "GET /v2.1/flavors/detail?is_public=None HTTP/1.1"
Oct 13 14:15:14 standalone.localdomain runuser[176316]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:15:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:15:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3696441001' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:15:14 standalone.localdomain sshd[176219]: Received disconnect from 192.168.122.11 port 56422:11: disconnected by user
Oct 13 14:15:14 standalone.localdomain sshd[176219]: Disconnected from user root 192.168.122.11 port 56422
Oct 13 14:15:14 standalone.localdomain sshd[176167]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:14 standalone.localdomain systemd[1]: session-49.scope: Deactivated successfully.
Oct 13 14:15:14 standalone.localdomain systemd[1]: session-49.scope: Consumed 1.885s CPU time.
Oct 13 14:15:14 standalone.localdomain systemd-logind[45629]: Session 49 logged out. Waiting for processes to exit.
Oct 13 14:15:14 standalone.localdomain systemd-logind[45629]: Removed session 49.
Oct 13 14:15:14 standalone.localdomain sshd[176549]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:14 standalone.localdomain ceph-mon[29756]: pgmap v1209: 177 pgs: 177 active+clean; 32 MiB data, 62 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:14 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3696441001' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:15:14 standalone.localdomain sshd[176549]: Accepted publickey for root from 192.168.122.11 port 56434 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:14 standalone.localdomain systemd-logind[45629]: New session 50 of user root.
Oct 13 14:15:14 standalone.localdomain systemd[1]: Started Session 50 of User root.
Oct 13 14:15:14 standalone.localdomain sshd[176549]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:14 standalone.localdomain haproxy[70940]: 172.17.0.2:35914 [13/Oct/2025:14:15:14.946] placement placement/standalone.internalapi.localdomain 0/0/0/18/18 200 298 - - ---- 54/1/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/allocations HTTP/1.1"
Oct 13 14:15:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:15:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:15:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2189142373' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:15:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1210: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 13 KiB/s rd, 2.1 MiB/s wr, 19 op/s
Oct 13 14:15:16 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2189142373' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:15:16 standalone.localdomain ceph-mon[29756]: pgmap v1210: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 13 KiB/s rd, 2.1 MiB/s wr, 19 op/s
Oct 13 14:15:16 standalone.localdomain haproxy[70940]: 172.21.0.2:46138 [13/Oct/2025:14:15:16.482] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 55/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:16 standalone.localdomain haproxy[70940]: 172.21.0.2:46138 [13/Oct/2025:14:15:16.487] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/305/305 201 8100 - - ---- 55/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:16 standalone.localdomain haproxy[70940]: 172.21.0.2:41386 [13/Oct/2025:14:15:16.806] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/4/4 200 747 - - ---- 56/1/0/0/0 0/0 "GET /v2.1 HTTP/1.1"
Oct 13 14:15:16 standalone.localdomain haproxy[70940]: 172.17.0.2:51160 [13/Oct/2025:14:15:16.817] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/27/27 200 8095 - - ---- 56/3/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:16 standalone.localdomain haproxy[70940]: 172.21.0.2:41386 [13/Oct/2025:14:15:16.813] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/57/57 200 865 - - ---- 56/1/0/0/0 0/0 "POST /v2.1/flavors HTTP/1.1"
Oct 13 14:15:16 standalone.localdomain account-server[114566]: 172.20.0.100 - - [13/Oct/2025:14:15:16 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txf43456e7cc01464985bb7-0068ed0974" "proxy-server 2" 0.0013 "-" 22 -
Oct 13 14:15:16 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txf43456e7cc01464985bb7-0068ed0974)
Oct 13 14:15:17 standalone.localdomain sshd[176561]: Received disconnect from 192.168.122.11 port 56434:11: disconnected by user
Oct 13 14:15:17 standalone.localdomain sshd[176561]: Disconnected from user root 192.168.122.11 port 56434
Oct 13 14:15:17 standalone.localdomain sshd[176549]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:17 standalone.localdomain systemd[1]: session-50.scope: Deactivated successfully.
Oct 13 14:15:17 standalone.localdomain systemd[1]: session-50.scope: Consumed 1.842s CPU time.
Oct 13 14:15:17 standalone.localdomain systemd-logind[45629]: Session 50 logged out. Waiting for processes to exit.
Oct 13 14:15:17 standalone.localdomain systemd-logind[45629]: Removed session 50.
Oct 13 14:15:17 standalone.localdomain sshd[176625]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:17 standalone.localdomain sshd[176625]: Accepted publickey for root from 192.168.122.11 port 56438 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:17 standalone.localdomain systemd-logind[45629]: New session 51 of user root.
Oct 13 14:15:17 standalone.localdomain systemd[1]: Started Session 51 of User root.
Oct 13 14:15:17 standalone.localdomain sshd[176625]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1211: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 13 KiB/s rd, 2.1 MiB/s wr, 19 op/s
Oct 13 14:15:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:15:18 standalone.localdomain ceph-mon[29756]: pgmap v1211: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 13 KiB/s rd, 2.1 MiB/s wr, 19 op/s
Oct 13 14:15:18 standalone.localdomain systemd[1]: tmp-crun.vtJNoc.mount: Deactivated successfully.
Oct 13 14:15:18 standalone.localdomain podman[176660]: 2025-10-13 14:15:18.838285599 +0000 UTC m=+0.089453227 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, container_name=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, config_id=ovn_cluster_northd, name=rhosp17/openstack-ovn-northd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, release=1, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:30:04, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-ovn-northd-container)
Oct 13 14:15:18 standalone.localdomain podman[176660]: 2025-10-13 14:15:18.87689353 +0000 UTC m=+0.128061178 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, build-date=2025-07-21T13:30:04, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, tcib_managed=true, container_name=ovn_cluster_northd, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-northd, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.component=openstack-ovn-northd-container, managed_by=tripleo_ansible, architecture=x86_64, config_id=ovn_cluster_northd)
Oct 13 14:15:18 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:15:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:15:18 standalone.localdomain podman[176762]: 2025-10-13 14:15:18.984201592 +0000 UTC m=+0.071284092 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=nova_compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:15:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:15:19 standalone.localdomain haproxy[70940]: 172.21.0.2:46140 [13/Oct/2025:14:15:19.022] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/5/5 300 515 - - ---- 54/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:19 standalone.localdomain podman[176762]: 2025-10-13 14:15:19.04986057 +0000 UTC m=+0.136943040 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, tcib_managed=true, release=1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=)
Oct 13 14:15:19 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:15:19 standalone.localdomain podman[176789]: 2025-10-13 14:15:19.14273513 +0000 UTC m=+0.133926866 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=clustercheck, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, config_id=tripleo_step2, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:45, distribution-scope=public, batch=17.1_20250721.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 14:15:19 standalone.localdomain podman[176789]: 2025-10-13 14:15:19.219635172 +0000 UTC m=+0.210826978 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, architecture=x86_64, io.buildah.version=1.33.12, release=1, tcib_managed=true, container_name=clustercheck, vcs-type=git, name=rhosp17/openstack-mariadb, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:45, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step2, io.openshift.expose-services=)
Oct 13 14:15:19 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:15:19 standalone.localdomain haproxy[70940]: 172.21.0.2:46140 [13/Oct/2025:14:15:19.030] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/314/314 201 8100 - - ---- 54/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:19 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:15:19.384] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/29/29 200 8095 - - ---- 56/3/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:19 standalone.localdomain haproxy[70940]: 172.21.0.2:39892 [13/Oct/2025:14:15:19.376] neutron neutron/standalone.internalapi.localdomain 0/0/0/56/56 404 290 - - ---- 56/1/0/0/0 0/0 "GET /v2.0/networks/private HTTP/1.1"
Oct 13 14:15:19 standalone.localdomain haproxy[70940]: 172.21.0.2:39892 [13/Oct/2025:14:15:19.435] neutron neutron/standalone.internalapi.localdomain 0/0/0/17/17 200 188 - - ---- 56/1/0/0/0 0/0 "GET /v2.0/networks?name=private HTTP/1.1"
Oct 13 14:15:19 standalone.localdomain sshd[176628]: Received disconnect from 192.168.122.11 port 56438:11: disconnected by user
Oct 13 14:15:19 standalone.localdomain sshd[176628]: Disconnected from user root 192.168.122.11 port 56438
Oct 13 14:15:19 standalone.localdomain sshd[176625]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:19 standalone.localdomain systemd[1]: session-51.scope: Deactivated successfully.
Oct 13 14:15:19 standalone.localdomain systemd[1]: session-51.scope: Consumed 1.940s CPU time.
Oct 13 14:15:19 standalone.localdomain systemd-logind[45629]: Session 51 logged out. Waiting for processes to exit.
Oct 13 14:15:19 standalone.localdomain systemd-logind[45629]: Removed session 51.
Oct 13 14:15:19 standalone.localdomain sshd[176875]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1212: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 13 KiB/s rd, 2.1 MiB/s wr, 19 op/s
Oct 13 14:15:19 standalone.localdomain sshd[176875]: Accepted publickey for root from 192.168.122.11 port 36136 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:19 standalone.localdomain systemd-logind[45629]: New session 52 of user root.
Oct 13 14:15:19 standalone.localdomain systemd[1]: Started Session 52 of User root.
Oct 13 14:15:19 standalone.localdomain sshd[176875]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:15:20 standalone.localdomain ceph-mon[29756]: pgmap v1212: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 13 KiB/s rd, 2.1 MiB/s wr, 19 op/s
Oct 13 14:15:21 standalone.localdomain haproxy[70940]: 172.21.0.2:37890 [13/Oct/2025:14:15:21.485] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 55/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1213: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 13 KiB/s rd, 2.1 MiB/s wr, 19 op/s
Oct 13 14:15:21 standalone.localdomain haproxy[70940]: 172.21.0.2:37890 [13/Oct/2025:14:15:21.491] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/304/304 201 8100 - - ---- 55/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:21 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:15:21.833] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/23/23 200 8095 - - ---- 56/3/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:22 standalone.localdomain sshd[149431]: Received disconnect from 38.102.83.114 port 39250:11: disconnected by user
Oct 13 14:15:22 standalone.localdomain sshd[149431]: Disconnected from user zuul 38.102.83.114 port 39250
Oct 13 14:15:22 standalone.localdomain sshd[149393]: pam_unix(sshd:session): session closed for user zuul
Oct 13 14:15:22 standalone.localdomain systemd[1]: session-44.scope: Deactivated successfully.
Oct 13 14:15:22 standalone.localdomain systemd[1]: session-44.scope: Consumed 52.299s CPU time.
Oct 13 14:15:22 standalone.localdomain systemd-logind[45629]: Session 44 logged out. Waiting for processes to exit.
Oct 13 14:15:22 standalone.localdomain systemd-logind[45629]: Removed session 44.
Oct 13 14:15:22 standalone.localdomain haproxy[70940]: 172.21.0.2:43306 [13/Oct/2025:14:15:21.829] neutron neutron/standalone.internalapi.localdomain 0/0/0/835/835 201 878 - - ---- 56/1/0/0/0 0/0 "POST /v2.0/networks HTTP/1.1"
Oct 13 14:15:22 standalone.localdomain ceph-mon[29756]: pgmap v1213: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 13 KiB/s rd, 2.1 MiB/s wr, 19 op/s
Oct 13 14:15:22 standalone.localdomain sshd[176931]: Received disconnect from 192.168.122.11 port 36136:11: disconnected by user
Oct 13 14:15:22 standalone.localdomain sshd[176931]: Disconnected from user root 192.168.122.11 port 36136
Oct 13 14:15:22 standalone.localdomain sshd[176875]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:22 standalone.localdomain systemd[1]: session-52.scope: Deactivated successfully.
Oct 13 14:15:22 standalone.localdomain systemd[1]: session-52.scope: Consumed 1.890s CPU time.
Oct 13 14:15:22 standalone.localdomain systemd-logind[45629]: Session 52 logged out. Waiting for processes to exit.
Oct 13 14:15:22 standalone.localdomain systemd-logind[45629]: Removed session 52.
Oct 13 14:15:22 standalone.localdomain sshd[177063]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:23 standalone.localdomain sshd[177063]: Accepted publickey for root from 192.168.122.11 port 36152 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:23 standalone.localdomain systemd-logind[45629]: New session 53 of user root.
Oct 13 14:15:23 standalone.localdomain systemd[1]: Started Session 53 of User root.
Oct 13 14:15:23 standalone.localdomain sshd[177063]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:15:23
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'manila_data', '.mgr', 'images', 'vms', 'backups', 'volumes']
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:15:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1214: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Oct 13 14:15:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:15:24 standalone.localdomain haproxy[70940]: 172.21.0.2:37892 [13/Oct/2025:14:15:24.721] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 55/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:24 standalone.localdomain ceph-mon[29756]: pgmap v1214: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Oct 13 14:15:24 standalone.localdomain podman[177108]: 2025-10-13 14:15:24.841897164 +0000 UTC m=+0.106960012 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, name=rhosp17/openstack-cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T16:10:12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, 
managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-cinder-scheduler-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, container_name=cinder_scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f)
Oct 13 14:15:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:15:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:15:24 standalone.localdomain podman[177127]: 2025-10-13 14:15:24.918972081 +0000 UTC m=+0.066932188 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, 
build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, version=17.1.9)
Oct 13 14:15:24 standalone.localdomain podman[177108]: 2025-10-13 14:15:24.939457258 +0000 UTC m=+0.204520096 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, tcib_managed=true, io.openshift.expose-services=, container_name=cinder_scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, name=rhosp17/openstack-cinder-scheduler, build-date=2025-07-21T16:10:12, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, version=17.1.9, com.redhat.component=openstack-cinder-scheduler-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']})
Oct 13 14:15:24 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:15:24 standalone.localdomain podman[177127]: 2025-10-13 14:15:24.953685793 +0000 UTC m=+0.101645890 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:15:24 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:15:25 standalone.localdomain podman[177138]: 2025-10-13 14:15:25.040083235 +0000 UTC m=+0.172630341 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:58:55, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-cinder-api, com.redhat.component=openstack-cinder-api-container, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.openshift.expose-services=, release=1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, container_name=cinder_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:15:25 standalone.localdomain haproxy[70940]: 172.21.0.2:37892 [13/Oct/2025:14:15:24.727] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/329/329 201 8100 - - ---- 54/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:25 standalone.localdomain podman[177138]: 2025-10-13 14:15:25.071654381 +0000 UTC m=+0.204201467 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, com.redhat.component=openstack-cinder-api-container, distribution-scope=public, name=rhosp17/openstack-cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vcs-type=git, build-date=2025-07-21T15:58:55, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=cinder_api_cron, summary=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack 
Platform 17.1 cinder-api, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct 13 14:15:25 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:15:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:15:25 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:15:25.093] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/39/39 200 8095 - - ---- 55/3/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:25 standalone.localdomain haproxy[70940]: 172.21.0.2:43308 [13/Oct/2025:14:15:25.085] neutron neutron/standalone.internalapi.localdomain 0/0/0/92/92 404 289 - - ---- 55/1/0/0/0 0/0 "GET /v2.0/subnets/priv_sub HTTP/1.1"
Oct 13 14:15:25 standalone.localdomain haproxy[70940]: 172.21.0.2:43308 [13/Oct/2025:14:15:25.181] neutron neutron/standalone.internalapi.localdomain 0/0/0/37/37 200 187 - - ---- 55/1/0/0/0 0/0 "GET /v2.0/subnets?name=priv_sub HTTP/1.1"
Oct 13 14:15:25 standalone.localdomain runuser[177185]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:15:25 standalone.localdomain sshd[177074]: Received disconnect from 192.168.122.11 port 36152:11: disconnected by user
Oct 13 14:15:25 standalone.localdomain sshd[177074]: Disconnected from user root 192.168.122.11 port 36152
Oct 13 14:15:25 standalone.localdomain sshd[177063]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:25 standalone.localdomain systemd[1]: session-53.scope: Deactivated successfully.
Oct 13 14:15:25 standalone.localdomain systemd[1]: session-53.scope: Consumed 1.895s CPU time.
Oct 13 14:15:25 standalone.localdomain systemd-logind[45629]: Session 53 logged out. Waiting for processes to exit.
Oct 13 14:15:25 standalone.localdomain systemd-logind[45629]: Removed session 53.
Oct 13 14:15:25 standalone.localdomain sshd[177230]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:25 standalone.localdomain sshd[177230]: Accepted publickey for root from 192.168.122.11 port 36154 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:25 standalone.localdomain systemd-logind[45629]: New session 54 of user root.
Oct 13 14:15:25 standalone.localdomain systemd[1]: Started Session 54 of User root.
Oct 13 14:15:25 standalone.localdomain sshd[177230]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1215: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 15 op/s
Oct 13 14:15:25 standalone.localdomain systemd[1]: tmp-crun.8BUnlJ.mount: Deactivated successfully.
Oct 13 14:15:25 standalone.localdomain runuser[177185]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:15:26 standalone.localdomain ceph-mon[29756]: pgmap v1215: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 15 op/s
Oct 13 14:15:26 standalone.localdomain runuser[177270]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:15:26 standalone.localdomain runuser[177270]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:15:26 standalone.localdomain runuser[177327]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:15:27 standalone.localdomain haproxy[70940]: 172.21.0.2:37904 [13/Oct/2025:14:15:27.309] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:27 standalone.localdomain haproxy[70940]: 172.21.0.2:37904 [13/Oct/2025:14:15:27.314] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/268/268 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:27 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:15:27.617] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/21/21 200 8095 - - ---- 54/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:27 standalone.localdomain runuser[177327]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:15:27 standalone.localdomain haproxy[70940]: 172.21.0.2:43310 [13/Oct/2025:14:15:27.611] neutron neutron/standalone.internalapi.localdomain 0/0/0/47/47 404 290 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/networks/private HTTP/1.1"
Oct 13 14:15:27 standalone.localdomain haproxy[70940]: 172.21.0.2:43310 [13/Oct/2025:14:15:27.660] neutron neutron/standalone.internalapi.localdomain 0/0/0/46/46 200 863 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/networks?name=private HTTP/1.1"
Oct 13 14:15:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1216: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:15:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:15:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:15:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:15:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:15:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:15:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:15:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:15:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:15:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:15:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:15:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:15:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:15:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:15:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:15:28 standalone.localdomain haproxy[70940]: 172.21.0.2:43310 [13/Oct/2025:14:15:27.710] neutron neutron/standalone.internalapi.localdomain 0/0/0/913/913 201 812 - - ---- 54/1/0/0/0 0/0 "POST /v2.0/subnets HTTP/1.1"
Oct 13 14:15:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:15:28 standalone.localdomain ceph-mon[29756]: pgmap v1216: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:28 standalone.localdomain systemd[1]: tmp-crun.GqR6c4.mount: Deactivated successfully.
Oct 13 14:15:28 standalone.localdomain podman[177411]: 2025-10-13 14:15:28.87233832 +0000 UTC m=+0.119226588 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone, summary=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, release=1, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-keystone-container, vcs-type=git, config_id=tripleo_step3, batch=17.1_20250721.1)
Oct 13 14:15:28 standalone.localdomain sshd[177233]: Received disconnect from 192.168.122.11 port 36154:11: disconnected by user
Oct 13 14:15:28 standalone.localdomain sshd[177233]: Disconnected from user root 192.168.122.11 port 36154
Oct 13 14:15:28 standalone.localdomain podman[177411]: 2025-10-13 14:15:28.906532786 +0000 UTC m=+0.153421084 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T13:27:18, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, com.redhat.component=openstack-keystone-container, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, container_name=keystone, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, release=1, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64)
Oct 13 14:15:28 standalone.localdomain sshd[177230]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:28 standalone.localdomain systemd[1]: session-54.scope: Deactivated successfully.
Oct 13 14:15:28 standalone.localdomain systemd[1]: session-54.scope: Consumed 1.893s CPU time.
Oct 13 14:15:28 standalone.localdomain systemd-logind[45629]: Session 54 logged out. Waiting for processes to exit.
Oct 13 14:15:28 standalone.localdomain systemd-logind[45629]: Removed session 54.
Oct 13 14:15:28 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:15:28 standalone.localdomain sshd[177519]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:29 standalone.localdomain sshd[177519]: Accepted publickey for root from 192.168.122.11 port 36168 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:15:29 standalone.localdomain systemd-logind[45629]: New session 55 of user root.
Oct 13 14:15:29 standalone.localdomain systemd[1]: Started Session 55 of User root.
Oct 13 14:15:29 standalone.localdomain sshd[177519]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:29 standalone.localdomain podman[177522]: 2025-10-13 14:15:29.161652589 +0000 UTC m=+0.097019169 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, architecture=x86_64, container_name=heat_engine, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-engine, vcs-type=git, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-engine-container, description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, batch=17.1_20250721.1, version=17.1.9)
Oct 13 14:15:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:15:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:15:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:15:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:15:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:15:29 standalone.localdomain podman[177522]: 2025-10-13 14:15:29.197994887 +0000 UTC m=+0.133361437 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, architecture=x86_64, container_name=heat_engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.component=openstack-heat-engine-container, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:44:11)
Oct 13 14:15:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:15:29 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:15:29 standalone.localdomain systemd[1]: tmp-crun.0XZE81.mount: Deactivated successfully.
Oct 13 14:15:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:15:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:15:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:15:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:15:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:15:29 standalone.localdomain podman[177558]: 2025-10-13 14:15:29.293721193 +0000 UTC m=+0.116096003 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., build-date=2025-07-21T16:02:54, com.redhat.component=openstack-nova-scheduler-container, io.buildah.version=1.33.12, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-scheduler, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, container_name=nova_scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, name=rhosp17/openstack-nova-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler)
Oct 13 14:15:29 standalone.localdomain podman[177575]: 2025-10-13 14:15:29.362071386 +0000 UTC m=+0.168768224 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, container_name=horizon, release=1, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3, name=rhosp17/openstack-horizon, description=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:58:15, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, summary=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, com.redhat.component=openstack-horizon-container)
Oct 13 14:15:29 standalone.localdomain podman[177560]: 2025-10-13 14:15:29.397654061 +0000 UTC m=+0.210926021 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, managed_by=tripleo_ansible, release=1, tcib_managed=true, com.redhat.component=openstack-memcached-container, container_name=memcached, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', 
'/var/log/containers/memcached:/var/log/memcached:rw']}, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T12:58:43, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step1, vcs-type=git, maintainer=OpenStack TripleO Team)
Oct 13 14:15:29 standalone.localdomain podman[177560]: 2025-10-13 14:15:29.436742734 +0000 UTC m=+0.250014694 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T12:58:43, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-memcached-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', 
'/var/log/containers/memcached:/var/log/memcached:rw']}, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe)
Oct 13 14:15:29 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:15:29 standalone.localdomain podman[177574]: 2025-10-13 14:15:29.447870166 +0000 UTC m=+0.259749504 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, container_name=heat_api_cfn, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T14:49:55, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-api-cfn, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-cfn-container, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, config_id=tripleo_step4)
Oct 13 14:15:29 standalone.localdomain podman[177638]: 2025-10-13 14:15:29.395619928 +0000 UTC m=+0.082863570 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, build-date=2025-07-21T15:58:55, com.redhat.component=openstack-cinder-api-container, description=Red Hat OpenStack Platform 17.1 cinder-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=cinder_api, name=rhosp17/openstack-cinder-api, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 
17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc.)
Oct 13 14:15:29 standalone.localdomain kernel: device tap9a2e0de2-08 entered promiscuous mode
Oct 13 14:15:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:15:29 standalone.localdomain NetworkManager[5962]: <info>  [1760364929.4614] manager: (tap9a2e0de2-08): new Generic device (/org/freedesktop/NetworkManager/Devices/13)
Oct 13 14:15:29 standalone.localdomain systemd-udevd[177768]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 14:15:29 standalone.localdomain podman[177599]: 2025-10-13 14:15:29.413800618 +0000 UTC m=+0.204570976 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, release=1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, distribution-scope=public, container_name=heat_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1)
Oct 13 14:15:29 standalone.localdomain podman[177575]: 2025-10-13 14:15:29.494035647 +0000 UTC m=+0.300732515 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, com.redhat.component=openstack-horizon-container, io.buildah.version=1.33.12, container_name=horizon, summary=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, name=rhosp17/openstack-horizon, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step3, distribution-scope=public, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, build-date=2025-07-21T13:58:15, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible)
Oct 13 14:15:29 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:15:29 standalone.localdomain podman[177645]: 2025-10-13 14:15:29.514681452 +0000 UTC m=+0.199045166 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, name=rhosp17/openstack-cron, tcib_managed=true)
Oct 13 14:15:29 standalone.localdomain podman[177645]: 2025-10-13 14:15:29.520213862 +0000 UTC m=+0.204577586 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, tcib_managed=true, vendor=Red Hat, 
Inc., name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1)
Oct 13 14:15:29 standalone.localdomain podman[177558]: 2025-10-13 14:15:29.527789966 +0000 UTC m=+0.350164676 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, container_name=nova_scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, vendor=Red Hat, Inc., build-date=2025-07-21T16:02:54, description=Red Hat OpenStack Platform 17.1 nova-scheduler, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-scheduler)
Oct 13 14:15:29 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:15:29 standalone.localdomain podman[177599]: 2025-10-13 14:15:29.550124833 +0000 UTC m=+0.340895221 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cron, summary=Red Hat OpenStack Platform 17.1 heat-api, release=1, tcib_managed=true, build-date=2025-07-21T15:56:26, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1)
Oct 13 14:15:29 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:15:29 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:15:29 standalone.localdomain podman[177574]: 2025-10-13 14:15:29.578168896 +0000 UTC m=+0.390048214 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, config_id=tripleo_step4, container_name=heat_api_cfn, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-cfn-container, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:49:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, release=1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99)
Oct 13 14:15:29 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:15:29 standalone.localdomain podman[177657]: 2025-10-13 14:15:29.593033743 +0000 UTC m=+0.273383034 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, build-date=2025-07-21T15:56:26, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64)
Oct 13 14:15:29 standalone.localdomain podman[177636]: 2025-10-13 14:15:29.615815194 +0000 UTC m=+0.305830562 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, description=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, com.redhat.component=openstack-manila-scheduler-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, managed_by=tripleo_ansible, vcs-type=git, container_name=manila_scheduler, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-manila-scheduler, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, build-date=2025-07-21T15:56:28)
Oct 13 14:15:29 standalone.localdomain podman[177657]: 2025-10-13 14:15:29.647121027 +0000 UTC m=+0.327470358 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:56:26, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-container, release=1, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-api, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, io.buildah.version=1.33.12, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=)
Oct 13 14:15:29 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:15:29 standalone.localdomain podman[177766]: 2025-10-13 14:15:29.660933862 +0000 UTC m=+0.190803502 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, com.redhat.component=openstack-nova-conductor-container, config_id=tripleo_step4, architecture=x86_64, container_name=nova_conductor, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-conductor, version=17.1.9, build-date=2025-07-21T15:44:17, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, batch=17.1_20250721.1, name=rhosp17/openstack-nova-conductor, tcib_managed=true, vcs-type=git)
Oct 13 14:15:29 standalone.localdomain podman[177561]: 2025-10-13 14:15:29.328744951 +0000 UTC m=+0.132090976 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, build-date=2025-07-21T16:06:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, com.redhat.component=openstack-manila-api-container, description=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, release=1, name=rhosp17/openstack-manila-api, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, container_name=manila_api_cron, io.openshift.expose-services=)
Oct 13 14:15:29 standalone.localdomain podman[177638]: 2025-10-13 14:15:29.689204652 +0000 UTC m=+0.376448344 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, build-date=2025-07-21T15:58:55, container_name=cinder_api, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cinder-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-api)
Oct 13 14:15:29 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:15:29 standalone.localdomain podman[177561]: 2025-10-13 14:15:29.710000362 +0000 UTC m=+0.513346417 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, container_name=manila_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, vcs-type=git, description=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, name=rhosp17/openstack-manila-api, architecture=x86_64, com.redhat.component=openstack-manila-api-container, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:15:29 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:15:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1217: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:29 standalone.localdomain podman[177636]: 2025-10-13 14:15:29.764768977 +0000 UTC m=+0.454784335 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, container_name=manila_scheduler, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, com.redhat.component=openstack-manila-scheduler-container, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, build-date=2025-07-21T15:56:28, release=1, description=Red Hat OpenStack Platform 17.1 manila-scheduler, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-manila-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible)
Oct 13 14:15:29 standalone.localdomain podman[177640]: 2025-10-13 14:15:29.772643639 +0000 UTC m=+0.457489087 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, batch=17.1_20250721.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, container_name=neutron_api, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T15:44:03, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:15:29 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:15:29 standalone.localdomain podman[177766]: 2025-10-13 14:15:29.792095688 +0000 UTC m=+0.321965308 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_conductor, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, build-date=2025-07-21T15:44:17, name=rhosp17/openstack-nova-conductor, version=17.1.9, tcib_managed=true, com.redhat.component=openstack-nova-conductor-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 14:15:29 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:15:29 standalone.localdomain podman[177640]: 2025-10-13 14:15:29.962897974 +0000 UTC m=+0.647743412 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, container_name=neutron_api, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, name=rhosp17/openstack-neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-neutron-server-container, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:44:03, vcs-type=git)
Oct 13 14:15:29 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:15:30 standalone.localdomain podman[177998]: 2025-10-13 14:15:30.053256834 +0000 UTC m=+0.066025772 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, com.redhat.component=openstack-manila-share-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-share, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:22:36, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-manila-share, batch=17.1_20250721.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share)
Oct 13 14:15:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:15:30 standalone.localdomain podman[178033]: 2025-10-13 14:15:30.132258485 +0000 UTC m=+0.073145252 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, com.redhat.component=openstack-manila-share-container, vendor=Red Hat, Inc., build-date=2025-07-21T15:22:36, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-manila-share, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-share, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 manila-share, version=17.1.9, distribution-scope=public)
Oct 13 14:15:30 standalone.localdomain podman[177998]: 2025-10-13 14:15:30.135731022 +0000 UTC m=+0.148499990 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, summary=Red Hat OpenStack Platform 17.1 manila-share, release=1, build-date=2025-07-21T15:22:36, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, name=rhosp17/openstack-manila-share, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-manila-share-container, description=Red Hat OpenStack Platform 17.1 manila-share, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git)
Oct 13 14:15:30 standalone.localdomain podman[178048]: 2025-10-13 14:15:30.15874085 +0000 UTC m=+0.066718274 container create c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, release=1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:54, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-dhcp-agent-container, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1)
Oct 13 14:15:30 standalone.localdomain systemd[1]: Started libpod-conmon-c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb.scope.
Oct 13 14:15:30 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:15:30 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3d4bbb5323787d2179afa6229aa20b6d84f11955342adb310c1b60e2d4dfdb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 14:15:30 standalone.localdomain podman[178048]: 2025-10-13 14:15:30.118515572 +0000 UTC m=+0.026493016 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1
Oct 13 14:15:30 standalone.localdomain podman[178048]: 2025-10-13 14:15:30.218917611 +0000 UTC m=+0.126895035 container init c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T16:28:54, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container)
Oct 13 14:15:30 standalone.localdomain podman[178048]: 2025-10-13 14:15:30.225471454 +0000 UTC m=+0.133448908 container start c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-neutron-dhcp-agent-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64)
Oct 13 14:15:30 standalone.localdomain dnsmasq[178073]: started, version 2.85 cachesize 150
Oct 13 14:15:30 standalone.localdomain dnsmasq[178073]: DNS service limited to local subnets
Oct 13 14:15:30 standalone.localdomain dnsmasq[178073]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 14:15:30 standalone.localdomain dnsmasq[178073]: warning: no upstream servers configured
Oct 13 14:15:30 standalone.localdomain dnsmasq-dhcp[178073]: DHCP, static leases only on 192.168.0.0, lease time 1d
Oct 13 14:15:30 standalone.localdomain dnsmasq[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/addn_hosts - 0 addresses
Oct 13 14:15:30 standalone.localdomain dnsmasq-dhcp[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/host
Oct 13 14:15:30 standalone.localdomain dnsmasq-dhcp[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/opts
Oct 13 14:15:30 standalone.localdomain ceph-mon[29756]: pgmap v1217: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:30 standalone.localdomain haproxy[70940]: 172.21.0.2:38318 [13/Oct/2025:14:15:30.837] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:31 standalone.localdomain haproxy[70940]: 172.21.0.2:38318 [13/Oct/2025:14:15:30.843] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/290/290 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:31 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:15:31.163] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/29/29 200 8095 - - ---- 54/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:31 standalone.localdomain haproxy[70940]: 172.21.0.2:48250 [13/Oct/2025:14:15:31.159] neutron neutron/standalone.internalapi.localdomain 0/0/0/49/49 404 289 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/networks/public HTTP/1.1"
Oct 13 14:15:31 standalone.localdomain haproxy[70940]: 172.21.0.2:48250 [13/Oct/2025:14:15:31.211] neutron neutron/standalone.internalapi.localdomain 0/0/0/13/13 200 188 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/networks?name=public HTTP/1.1"
Oct 13 14:15:31 standalone.localdomain sshd[177523]: Received disconnect from 192.168.122.11 port 36168:11: disconnected by user
Oct 13 14:15:31 standalone.localdomain sshd[177523]: Disconnected from user root 192.168.122.11 port 36168
Oct 13 14:15:31 standalone.localdomain sshd[177519]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:31 standalone.localdomain systemd[1]: session-55.scope: Deactivated successfully.
Oct 13 14:15:31 standalone.localdomain systemd[1]: session-55.scope: Consumed 1.962s CPU time.
Oct 13 14:15:31 standalone.localdomain systemd-logind[45629]: Session 55 logged out. Waiting for processes to exit.
Oct 13 14:15:31 standalone.localdomain systemd-logind[45629]: Removed session 55.
Oct 13 14:15:31 standalone.localdomain sshd[178175]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:31 standalone.localdomain sshd[178175]: Accepted publickey for root from 192.168.122.11 port 39240 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:31 standalone.localdomain systemd-logind[45629]: New session 56 of user root.
Oct 13 14:15:31 standalone.localdomain systemd[1]: Started Session 56 of User root.
Oct 13 14:15:31 standalone.localdomain sshd[178175]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1218: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:32 standalone.localdomain systemd[1]: Stopping User Manager for UID 1000...
Oct 13 14:15:32 standalone.localdomain systemd[149397]: Activating special unit Exit the Session...
Oct 13 14:15:32 standalone.localdomain systemd[149397]: Removed slice User Background Tasks Slice.
Oct 13 14:15:32 standalone.localdomain systemd[149397]: Stopped target Main User Target.
Oct 13 14:15:32 standalone.localdomain systemd[149397]: Stopped target Basic System.
Oct 13 14:15:32 standalone.localdomain systemd[149397]: Stopped target Paths.
Oct 13 14:15:32 standalone.localdomain systemd[149397]: Stopped target Sockets.
Oct 13 14:15:32 standalone.localdomain systemd[149397]: Stopped target Timers.
Oct 13 14:15:32 standalone.localdomain systemd[149397]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 13 14:15:32 standalone.localdomain systemd[149397]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 14:15:32 standalone.localdomain systemd[149397]: Closed D-Bus User Message Bus Socket.
Oct 13 14:15:32 standalone.localdomain systemd[149397]: Stopped Create User's Volatile Files and Directories.
Oct 13 14:15:32 standalone.localdomain systemd[149397]: Removed slice User Application Slice.
Oct 13 14:15:32 standalone.localdomain systemd[149397]: Reached target Shutdown.
Oct 13 14:15:32 standalone.localdomain systemd[149397]: Finished Exit the Session.
Oct 13 14:15:32 standalone.localdomain systemd[149397]: Reached target Exit the Session.
Oct 13 14:15:32 standalone.localdomain systemd[1]: user@1000.service: Deactivated successfully.
Oct 13 14:15:32 standalone.localdomain systemd[1]: Stopped User Manager for UID 1000.
Oct 13 14:15:32 standalone.localdomain systemd[1]: user@1000.service: Consumed 2.480s CPU time, read 24.0K from disk, written 7.0K to disk.
Oct 13 14:15:32 standalone.localdomain ceph-mon[29756]: pgmap v1218: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:32 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 14:15:32 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 14:15:33 standalone.localdomain haproxy[70940]: 172.21.0.2:38322 [13/Oct/2025:14:15:33.370] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:33 standalone.localdomain haproxy[70940]: 172.21.0.2:38322 [13/Oct/2025:14:15:33.375] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/303/303 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:33 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:15:33.715] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/23/23 200 8095 - - ---- 54/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1219: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:34 standalone.localdomain haproxy[70940]: 172.21.0.2:48264 [13/Oct/2025:14:15:33.709] neutron neutron/standalone.internalapi.localdomain 0/0/0/680/680 201 882 - - ---- 54/1/0/0/0 0/0 "POST /v2.0/networks HTTP/1.1"
Oct 13 14:15:34 standalone.localdomain sshd[178178]: Received disconnect from 192.168.122.11 port 39240:11: disconnected by user
Oct 13 14:15:34 standalone.localdomain sshd[178178]: Disconnected from user root 192.168.122.11 port 39240
Oct 13 14:15:34 standalone.localdomain sshd[178175]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:34 standalone.localdomain systemd[1]: session-56.scope: Deactivated successfully.
Oct 13 14:15:34 standalone.localdomain systemd[1]: session-56.scope: Consumed 1.964s CPU time.
Oct 13 14:15:34 standalone.localdomain systemd-logind[45629]: Session 56 logged out. Waiting for processes to exit.
Oct 13 14:15:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:15:34 standalone.localdomain sshd[178230]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:15:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:15:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:15:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:15:34 standalone.localdomain systemd-logind[45629]: Removed session 56.
Oct 13 14:15:34 standalone.localdomain podman[178233]: 2025-10-13 14:15:34.777855413 +0000 UTC m=+0.084033307 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T15:54:32, distribution-scope=public, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, summary=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, tcib_managed=true, io.openshift.expose-services=)
Oct 13 14:15:34 standalone.localdomain sshd[178230]: Accepted publickey for root from 192.168.122.11 port 39242 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:34 standalone.localdomain systemd-logind[45629]: New session 57 of user root.
Oct 13 14:15:34 standalone.localdomain systemd[1]: Started Session 57 of User root.
Oct 13 14:15:34 standalone.localdomain sshd[178230]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:34 standalone.localdomain ceph-mon[29756]: pgmap v1219: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:34 standalone.localdomain podman[178232]: 2025-10-13 14:15:34.842706218 +0000 UTC m=+0.148169390 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:15:34 standalone.localdomain podman[178231]: 2025-10-13 14:15:34.930337844 +0000 UTC m=+0.223980193 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, container_name=swift_object_server, name=rhosp17/openstack-swift-object, architecture=x86_64, build-date=2025-07-21T14:56:28, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:15:34 standalone.localdomain podman[178234]: 2025-10-13 14:15:34.885711821 +0000 UTC m=+0.186098437 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., container_name=swift_account_server, version=17.1.9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-account, managed_by=tripleo_ansible, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, batch=17.1_20250721.1)
Oct 13 14:15:34 standalone.localdomain podman[178233]: 2025-10-13 14:15:34.97080943 +0000 UTC m=+0.276987344 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, com.redhat.component=openstack-swift-container-container, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc.)
Oct 13 14:15:34 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:15:34 standalone.localdomain podman[178236]: 2025-10-13 14:15:34.938099523 +0000 UTC m=+0.244291578 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, container_name=nova_vnc_proxy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, build-date=2025-07-21T15:24:10, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, config_id=tripleo_step4, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, tcib_managed=true, com.redhat.component=openstack-nova-novncproxy-container, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64)
Oct 13 14:15:35 standalone.localdomain podman[178234]: 2025-10-13 14:15:35.121865207 +0000 UTC m=+0.422251783 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:15:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:15:35 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:15:35 standalone.localdomain podman[178231]: 2025-10-13 14:15:35.147984671 +0000 UTC m=+0.441627040 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=swift_object_server, tcib_managed=true, architecture=x86_64, version=17.1.9)
Oct 13 14:15:35 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:15:35 standalone.localdomain podman[178236]: 2025-10-13 14:15:35.23178831 +0000 UTC m=+0.537980375 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=nova_vnc_proxy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-novncproxy-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, build-date=2025-07-21T15:24:10, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.tags=rhosp osp openstack osp-17.1, 
maintainer=OpenStack TripleO Team, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.expose-services=)
Oct 13 14:15:35 standalone.localdomain podman[178232]: 2025-10-13 14:15:35.24282726 +0000 UTC m=+0.548290412 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-type=git, tcib_managed=true, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, managed_by=tripleo_ansible)
Oct 13 14:15:35 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:15:35 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:15:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1220: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:36 standalone.localdomain ceph-mon[29756]: pgmap v1220: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:36 standalone.localdomain haproxy[70940]: 172.21.0.2:38328 [13/Oct/2025:14:15:36.587] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/5/5 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:36 standalone.localdomain haproxy[70940]: 172.21.0.2:38328 [13/Oct/2025:14:15:36.594] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/321/321 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:36 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:15:36.948] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/37/37 200 8095 - - ---- 54/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:37 standalone.localdomain haproxy[70940]: 172.21.0.2:48270 [13/Oct/2025:14:15:36.944] neutron neutron/standalone.internalapi.localdomain 0/0/0/69/69 404 294 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/subnets/public_subnet HTTP/1.1"
Oct 13 14:15:37 standalone.localdomain haproxy[70940]: 172.21.0.2:48270 [13/Oct/2025:14:15:37.016] neutron neutron/standalone.internalapi.localdomain 0/0/0/24/24 200 187 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/subnets?name=public_subnet HTTP/1.1"
Oct 13 14:15:37 standalone.localdomain sshd[178296]: Received disconnect from 192.168.122.11 port 39242:11: disconnected by user
Oct 13 14:15:37 standalone.localdomain sshd[178296]: Disconnected from user root 192.168.122.11 port 39242
Oct 13 14:15:37 standalone.localdomain sshd[178230]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:37 standalone.localdomain systemd[1]: session-57.scope: Deactivated successfully.
Oct 13 14:15:37 standalone.localdomain systemd[1]: session-57.scope: Consumed 2.008s CPU time.
Oct 13 14:15:37 standalone.localdomain systemd-logind[45629]: Session 57 logged out. Waiting for processes to exit.
Oct 13 14:15:37 standalone.localdomain sshd[178402]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:15:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:15:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:15:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:15:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:15:37 standalone.localdomain systemd-logind[45629]: Removed session 57.
Oct 13 14:15:37 standalone.localdomain sshd[178402]: Accepted publickey for root from 192.168.122.11 port 39244 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:37 standalone.localdomain systemd-logind[45629]: New session 58 of user root.
Oct 13 14:15:37 standalone.localdomain systemd[1]: Started Session 58 of User root.
Oct 13 14:15:37 standalone.localdomain sshd[178402]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:37 standalone.localdomain podman[178404]: 2025-10-13 14:15:37.428296508 +0000 UTC m=+0.072748130 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, architecture=x86_64, build-date=2025-07-21T13:58:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, container_name=glance_api_cron, io.buildah.version=1.33.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git)
Oct 13 14:15:37 standalone.localdomain podman[178418]: 2025-10-13 14:15:37.486571431 +0000 UTC m=+0.124945796 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, vcs-type=git, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:15:37 standalone.localdomain podman[178406]: 2025-10-13 14:15:37.460033384 +0000 UTC m=+0.097621335 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T16:05:11, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-nova-api-container, version=17.1.9, batch=17.1_20250721.1, 
description=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=nova_metadata, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:15:37 standalone.localdomain podman[178404]: 2025-10-13 14:15:37.513375925 +0000 UTC m=+0.157827537 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, name=rhosp17/openstack-glance-api, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, vcs-type=git, version=17.1.9, architecture=x86_64, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vendor=Red Hat, Inc.)
Oct 13 14:15:37 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:15:37 standalone.localdomain podman[178405]: 2025-10-13 14:15:37.529597635 +0000 UTC m=+0.174213142 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, summary=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.component=openstack-placement-api-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, container_name=placement_api, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-placement-api, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, vcs-type=git)
Oct 13 14:15:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:15:37 standalone.localdomain podman[178406]: 2025-10-13 14:15:37.542849623 +0000 UTC m=+0.180437564 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, name=rhosp17/openstack-nova-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, release=1, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, container_name=nova_metadata, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 14:15:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:15:37 standalone.localdomain podman[178418]: 2025-10-13 14:15:37.566814979 +0000 UTC m=+0.205189344 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, release=1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9)
Oct 13 14:15:37 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:15:37 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:15:37 standalone.localdomain podman[178419]: 2025-10-13 14:15:37.63571995 +0000 UTC m=+0.269384110 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ovn_controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44)
Oct 13 14:15:37 standalone.localdomain podman[178536]: 2025-10-13 14:15:37.659212743 +0000 UTC m=+0.083218831 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, managed_by=tripleo_ansible, container_name=glance_api_internal, release=1, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api)
Oct 13 14:15:37 standalone.localdomain podman[178405]: 2025-10-13 14:15:37.659817591 +0000 UTC m=+0.304433098 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, name=rhosp17/openstack-placement-api, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12, com.redhat.component=openstack-placement-api-container, io.openshift.expose-services=, container_name=placement_api, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', 
'/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 placement-api)
Oct 13 14:15:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:15:37 standalone.localdomain podman[178510]: 2025-10-13 14:15:37.695093507 +0000 UTC m=+0.148653485 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, build-date=2025-07-21T14:48:37, tcib_managed=true, container_name=swift_proxy, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:15:37 standalone.localdomain podman[178419]: 2025-10-13 14:15:37.722923083 +0000 UTC m=+0.356587293 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container)
Oct 13 14:15:37 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:15:37 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:15:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1221: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:37 standalone.localdomain podman[178574]: 2025-10-13 14:15:37.793785063 +0000 UTC m=+0.111976756 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-glance-api, vcs-type=git, release=1, config_id=tripleo_step4, version=17.1.9, io.openshift.expose-services=, com.redhat.component=openstack-glance-api-container, container_name=glance_api)
Oct 13 14:15:37 standalone.localdomain podman[178536]: 2025-10-13 14:15:37.868970897 +0000 UTC m=+0.292976985 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc., 
build-date=2025-07-21T13:58:20, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=glance_api_internal)
Oct 13 14:15:37 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:15:37 standalone.localdomain podman[178510]: 2025-10-13 14:15:37.889752937 +0000 UTC m=+0.343312895 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.component=openstack-swift-proxy-server-container, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, name=rhosp17/openstack-swift-proxy-server, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, release=1, vendor=Red Hat, Inc.)
Oct 13 14:15:37 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:15:37 standalone.localdomain podman[178574]: 2025-10-13 14:15:37.979093425 +0000 UTC m=+0.297285128 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 glance-api, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, version=17.1.9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, container_name=glance_api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1)
Oct 13 14:15:37 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:15:38 standalone.localdomain runuser[178622]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:15:38 standalone.localdomain ceph-mon[29756]: pgmap v1221: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:38 standalone.localdomain runuser[178622]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:15:39 standalone.localdomain runuser[178774]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:15:39 standalone.localdomain haproxy[70940]: 172.21.0.2:38334 [13/Oct/2025:14:15:39.205] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:39 standalone.localdomain haproxy[70940]: 172.21.0.2:38334 [13/Oct/2025:14:15:39.211] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/276/276 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:39 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:15:39.528] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/22/22 200 8095 - - ---- 54/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:39 standalone.localdomain haproxy[70940]: 172.21.0.2:48276 [13/Oct/2025:14:15:39.520] neutron neutron/standalone.internalapi.localdomain 0/0/0/62/62 404 289 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/networks/public HTTP/1.1"
Oct 13 14:15:39 standalone.localdomain runuser[178774]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:15:39 standalone.localdomain podman[178832]: 2025-10-13 14:15:39.622568416 +0000 UTC m=+0.102275507 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-backup, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, com.redhat.component=openstack-cinder-backup-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-backup, build-date=2025-07-21T16:18:24, distribution-scope=public, name=rhosp17/openstack-cinder-backup, vcs-type=git, release=1, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef)
Oct 13 14:15:39 standalone.localdomain haproxy[70940]: 172.21.0.2:48276 [13/Oct/2025:14:15:39.586] neutron neutron/standalone.internalapi.localdomain 0/0/0/58/58 200 886 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/networks?name=public HTTP/1.1"
Oct 13 14:15:39 standalone.localdomain runuser[178863]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:15:39 standalone.localdomain podman[178832]: 2025-10-13 14:15:39.654792658 +0000 UTC m=+0.134499719 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, summary=Red Hat OpenStack Platform 17.1 cinder-backup, io.buildah.version=1.33.12, name=rhosp17/openstack-cinder-backup, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T16:18:24, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, com.redhat.component=openstack-cinder-backup-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-backup, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, release=1)
Oct 13 14:15:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1222: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:15:40 standalone.localdomain runuser[178863]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:15:40 standalone.localdomain kernel: device tap88d4c439-3c entered promiscuous mode
Oct 13 14:15:40 standalone.localdomain NetworkManager[5962]: <info>  [1760364940.6655] manager: (tap88d4c439-3c): new Generic device (/org/freedesktop/NetworkManager/Devices/14)
Oct 13 14:15:40 standalone.localdomain systemd-udevd[179089]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 14:15:40 standalone.localdomain ceph-mon[29756]: pgmap v1222: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:41 standalone.localdomain podman[179204]: 2025-10-13 14:15:41.332428639 +0000 UTC m=+0.065816916 container create 8344d37429b41c94b4bf4785944166a74975bf8ee81075e6b155aa6b933000ee (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:54, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:15:41 standalone.localdomain systemd[1]: Started libpod-conmon-8344d37429b41c94b4bf4785944166a74975bf8ee81075e6b155aa6b933000ee.scope.
Oct 13 14:15:41 standalone.localdomain podman[179204]: 2025-10-13 14:15:41.294477382 +0000 UTC m=+0.027865659 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1
Oct 13 14:15:41 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:15:41 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de3b1bdb5a8693c0cf197a01129c6d27b663ac340b2bc6737270378a06d0e1e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 14:15:41 standalone.localdomain podman[179204]: 2025-10-13 14:15:41.413865356 +0000 UTC m=+0.147253623 container init 8344d37429b41c94b4bf4785944166a74975bf8ee81075e6b155aa6b933000ee (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-dhcp-agent, tcib_managed=true, build-date=2025-07-21T16:28:54, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.openshift.expose-services=, io.buildah.version=1.33.12, batch=17.1_20250721.1, release=1, version=17.1.9)
Oct 13 14:15:41 standalone.localdomain podman[179204]: 2025-10-13 14:15:41.423951915 +0000 UTC m=+0.157340232 container start 8344d37429b41c94b4bf4785944166a74975bf8ee81075e6b155aa6b933000ee (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, version=17.1.9, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, build-date=2025-07-21T16:28:54, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, tcib_managed=true, name=rhosp17/openstack-neutron-dhcp-agent, batch=17.1_20250721.1)
Oct 13 14:15:41 standalone.localdomain dnsmasq[179222]: started, version 2.85 cachesize 150
Oct 13 14:15:41 standalone.localdomain dnsmasq[179222]: DNS service limited to local subnets
Oct 13 14:15:41 standalone.localdomain dnsmasq[179222]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 14:15:41 standalone.localdomain dnsmasq[179222]: warning: no upstream servers configured
Oct 13 14:15:41 standalone.localdomain dnsmasq-dhcp[179222]: DHCP, static leases only on 192.168.122.0, lease time 1d
Oct 13 14:15:41 standalone.localdomain dnsmasq[179222]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 0 addresses
Oct 13 14:15:41 standalone.localdomain dnsmasq-dhcp[179222]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 14:15:41 standalone.localdomain dnsmasq-dhcp[179222]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 14:15:41 standalone.localdomain haproxy[70940]: 172.21.0.2:48276 [13/Oct/2025:14:15:39.652] neutron neutron/standalone.internalapi.localdomain 0/0/0/1997/1997 201 827 - - ---- 54/1/0/0/0 0/0 "POST /v2.0/subnets HTTP/1.1"
Oct 13 14:15:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1223: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:41 standalone.localdomain sshd[178482]: Received disconnect from 192.168.122.11 port 39244:11: disconnected by user
Oct 13 14:15:41 standalone.localdomain sshd[178482]: Disconnected from user root 192.168.122.11 port 39244
Oct 13 14:15:41 standalone.localdomain sshd[178402]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:41 standalone.localdomain systemd[1]: session-58.scope: Deactivated successfully.
Oct 13 14:15:41 standalone.localdomain systemd[1]: session-58.scope: Consumed 1.990s CPU time.
Oct 13 14:15:41 standalone.localdomain systemd-logind[45629]: Session 58 logged out. Waiting for processes to exit.
Oct 13 14:15:41 standalone.localdomain systemd-logind[45629]: Removed session 58.
Oct 13 14:15:41 standalone.localdomain sshd[179233]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:42 standalone.localdomain sshd[179233]: Accepted publickey for root from 192.168.122.11 port 58984 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:15:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:15:42 standalone.localdomain systemd-logind[45629]: New session 59 of user root.
Oct 13 14:15:42 standalone.localdomain systemd[1]: Started Session 59 of User root.
Oct 13 14:15:42 standalone.localdomain sshd[179233]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:42 standalone.localdomain podman[179236]: 2025-10-13 14:15:42.214018166 +0000 UTC m=+0.091359761 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, com.redhat.component=openstack-keystone-container, build-date=2025-07-21T13:27:18, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, io.openshift.expose-services=, name=rhosp17/openstack-keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:15:42 standalone.localdomain podman[179236]: 2025-10-13 14:15:42.22586332 +0000 UTC m=+0.103204905 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:18, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, container_name=keystone_cron, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, name=rhosp17/openstack-keystone, description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1)
Oct 13 14:15:42 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:15:42 standalone.localdomain podman[179237]: 2025-10-13 14:15:42.271375792 +0000 UTC m=+0.148529212 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, summary=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_api, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, 
io.openshift.expose-services=, name=rhosp17/openstack-nova-api, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-api-container, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:15:42 standalone.localdomain podman[179237]: 2025-10-13 14:15:42.307743531 +0000 UTC m=+0.184896951 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api, description=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, architecture=x86_64, name=rhosp17/openstack-nova-api, version=17.1.9)
Oct 13 14:15:42 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:15:42 standalone.localdomain ceph-mon[29756]: pgmap v1223: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1224: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:43 standalone.localdomain haproxy[70940]: 172.21.0.2:38652 [13/Oct/2025:14:15:43.740] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/5/5 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:44 standalone.localdomain haproxy[70940]: 172.21.0.2:38652 [13/Oct/2025:14:15:43.746] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/377/377 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:44 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:15:44.161] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/38/38 200 8095 - - ---- 54/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:44 standalone.localdomain haproxy[70940]: 172.21.0.2:57080 [13/Oct/2025:14:15:44.153] neutron neutron/standalone.internalapi.localdomain 0/0/0/111/111 404 291 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/routers/priv_router HTTP/1.1"
Oct 13 14:15:44 standalone.localdomain haproxy[70940]: 172.21.0.2:57080 [13/Oct/2025:14:15:44.267] neutron neutron/standalone.internalapi.localdomain 0/0/0/47/47 200 188 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/routers?name=priv_router HTTP/1.1"
Oct 13 14:15:44 standalone.localdomain sshd[179248]: Received disconnect from 192.168.122.11 port 58984:11: disconnected by user
Oct 13 14:15:44 standalone.localdomain sshd[179248]: Disconnected from user root 192.168.122.11 port 58984
Oct 13 14:15:44 standalone.localdomain sshd[179233]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:44 standalone.localdomain systemd[1]: session-59.scope: Deactivated successfully.
Oct 13 14:15:44 standalone.localdomain systemd[1]: session-59.scope: Consumed 1.860s CPU time.
Oct 13 14:15:44 standalone.localdomain systemd-logind[45629]: Session 59 logged out. Waiting for processes to exit.
Oct 13 14:15:44 standalone.localdomain sshd[179327]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:15:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:15:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:15:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:15:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:15:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:15:44 standalone.localdomain systemd-logind[45629]: Removed session 59.
Oct 13 14:15:44 standalone.localdomain podman[179331]: 2025-10-13 14:15:44.678258642 +0000 UTC m=+0.061715720 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_keystone_listener, com.redhat.component=openstack-barbican-keystone-listener-container, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, build-date=2025-07-21T16:18:19, io.buildah.version=1.33.12, release=1, architecture=x86_64, tcib_managed=true, config_id=tripleo_step3, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, version=17.1.9)
Oct 13 14:15:44 standalone.localdomain podman[179331]: 2025-10-13 14:15:44.698858336 +0000 UTC m=+0.082315394 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, maintainer=OpenStack TripleO Team, container_name=barbican_keystone_listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, build-date=2025-07-21T16:18:19, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.component=openstack-barbican-keystone-listener-container, name=rhosp17/openstack-barbican-keystone-listener, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, release=1, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c)
Oct 13 14:15:44 standalone.localdomain sshd[179327]: Accepted publickey for root from 192.168.122.11 port 58990 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:44 standalone.localdomain systemd-logind[45629]: New session 60 of user root.
Oct 13 14:15:44 standalone.localdomain systemd[1]: Started Session 60 of User root.
Oct 13 14:15:44 standalone.localdomain systemd[1]: tmp-crun.TPhGhZ.mount: Deactivated successfully.
Oct 13 14:15:44 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:15:44 standalone.localdomain sshd[179327]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:44 standalone.localdomain podman[179329]: 2025-10-13 14:15:44.741774906 +0000 UTC m=+0.128991660 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-dhcp-agent, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, com.redhat.component=openstack-neutron-dhcp-agent-container, tcib_managed=true, build-date=2025-07-21T16:28:54, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64)
Oct 13 14:15:44 standalone.localdomain podman[179343]: 2025-10-13 14:15:44.801318799 +0000 UTC m=+0.179155744 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, vcs-type=git, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-worker, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_worker, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, build-date=2025-07-21T15:36:22, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, 
batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-worker-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:15:44 standalone.localdomain ceph-mon[29756]: pgmap v1224: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:44 standalone.localdomain podman[179349]: 2025-10-13 14:15:44.848361847 +0000 UTC m=+0.222440876 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, container_name=nova_api_cron, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-nova-api-container, managed_by=tripleo_ansible, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, name=rhosp17/openstack-nova-api, batch=17.1_20250721.1)
Oct 13 14:15:44 standalone.localdomain podman[179349]: 2025-10-13 14:15:44.857753135 +0000 UTC m=+0.231832164 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T16:05:11, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=)
Oct 13 14:15:44 standalone.localdomain podman[179329]: 2025-10-13 14:15:44.877975837 +0000 UTC m=+0.265192651 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, version=17.1.9, container_name=neutron_dhcp, build-date=2025-07-21T16:28:54, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:15:44 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:15:44 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:15:44 standalone.localdomain podman[179332]: 2025-10-13 14:15:44.948503497 +0000 UTC m=+0.327845108 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_sriov_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, version=17.1.9, com.redhat.component=openstack-neutron-sriov-agent-container, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-neutron-sriov-agent, release=1, build-date=2025-07-21T16:03:34)
Oct 13 14:15:44 standalone.localdomain podman[179343]: 2025-10-13 14:15:44.97879422 +0000 UTC m=+0.356631165 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-barbican-worker, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, container_name=barbican_worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.component=openstack-barbican-worker-container, release=1, version=17.1.9, build-date=2025-07-21T15:36:22)
Oct 13 14:15:44 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:15:44 standalone.localdomain podman[179332]: 2025-10-13 14:15:44.994978908 +0000 UTC m=+0.374320559 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, com.redhat.component=openstack-neutron-sriov-agent-container, release=1, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, build-date=2025-07-21T16:03:34, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1)
Oct 13 14:15:45 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:15:45 standalone.localdomain podman[179330]: 2025-10-13 14:15:45.051624451 +0000 UTC m=+0.438835185 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, name=rhosp17/openstack-barbican-api, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 barbican-api, container_name=barbican_api, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, config_id=tripleo_step3, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-barbican-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-api, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed)
Oct 13 14:15:45 standalone.localdomain podman[179330]: 2025-10-13 14:15:45.106259312 +0000 UTC m=+0.493470046 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 barbican-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-barbican-api-container, io.openshift.expose-services=, architecture=x86_64, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=barbican_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, vcs-type=git, build-date=2025-07-21T15:22:44, tcib_managed=true, name=rhosp17/openstack-barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed)
Oct 13 14:15:45 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:15:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:15:45 standalone.localdomain systemd[1]: tmp-crun.Xo6dB7.mount: Deactivated successfully.
Oct 13 14:15:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1225: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:46 standalone.localdomain ceph-mon[29756]: pgmap v1225: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:46 standalone.localdomain haproxy[70940]: 172.21.0.2:38654 [13/Oct/2025:14:15:46.422] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/5/5 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:46 standalone.localdomain haproxy[70940]: 172.21.0.2:38654 [13/Oct/2025:14:15:46.428] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/345/345 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:46 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:15:46.795] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/38/38 200 8095 - - ---- 54/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:46 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 14:15:46 standalone.localdomain object-server[179509]: Object update sweep starting on /srv/node/d1 (pid: 14)
Oct 13 14:15:46 standalone.localdomain object-server[179509]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 14)
Oct 13 14:15:46 standalone.localdomain object-server[179509]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 14:15:46 standalone.localdomain object-server[114601]: Object update sweep completed: 0.06s
Oct 13 14:15:47 standalone.localdomain haproxy[70940]: 172.21.0.2:57084 [13/Oct/2025:14:15:46.789] neutron neutron/standalone.internalapi.localdomain 0/0/0/262/262 201 634 - - ---- 54/1/0/0/0 0/0 "POST /v2.0/routers HTTP/1.1"
Oct 13 14:15:47 standalone.localdomain sshd[179413]: Received disconnect from 192.168.122.11 port 58990:11: disconnected by user
Oct 13 14:15:47 standalone.localdomain sshd[179413]: Disconnected from user root 192.168.122.11 port 58990
Oct 13 14:15:47 standalone.localdomain sshd[179327]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:47 standalone.localdomain systemd[1]: session-60.scope: Deactivated successfully.
Oct 13 14:15:47 standalone.localdomain systemd[1]: session-60.scope: Consumed 1.899s CPU time.
Oct 13 14:15:47 standalone.localdomain systemd-logind[45629]: Session 60 logged out. Waiting for processes to exit.
Oct 13 14:15:47 standalone.localdomain systemd-logind[45629]: Removed session 60.
Oct 13 14:15:47 standalone.localdomain sshd[179512]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:47 standalone.localdomain sshd[179512]: Accepted publickey for root from 192.168.122.11 port 59006 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:47 standalone.localdomain systemd-logind[45629]: New session 61 of user root.
Oct 13 14:15:47 standalone.localdomain systemd[1]: Started Session 61 of User root.
Oct 13 14:15:47 standalone.localdomain sshd[179512]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1226: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.807689) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364947807750, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2072, "num_deletes": 251, "total_data_size": 1937987, "memory_usage": 1979056, "flush_reason": "Manual Compaction"}
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364947821050, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1879669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27550, "largest_seqno": 29621, "table_properties": {"data_size": 1871551, "index_size": 4885, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16842, "raw_average_key_size": 19, "raw_value_size": 1854989, "raw_average_value_size": 2187, "num_data_blocks": 222, "num_entries": 848, "num_filter_entries": 848, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760364754, "oldest_key_time": 1760364754, "file_creation_time": 1760364947, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 13418 microseconds, and 5612 cpu microseconds.
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.821106) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1879669 bytes OK
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.821133) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.823458) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.823483) EVENT_LOG_v1 {"time_micros": 1760364947823476, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.823533) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1929124, prev total WAL file size 1929124, number of live WAL files 2.
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.824317) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1835KB)], [62(4536KB)]
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364947824378, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 6524846, "oldest_snapshot_seqno": -1}
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 3995 keys, 5490793 bytes, temperature: kUnknown
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364947849291, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 5490793, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5463357, "index_size": 16332, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97700, "raw_average_key_size": 24, "raw_value_size": 5390337, "raw_average_value_size": 1349, "num_data_blocks": 688, "num_entries": 3995, "num_filter_entries": 3995, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760364947, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.849610) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 5490793 bytes
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.851388) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 261.0 rd, 219.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 4.4 +0.0 blob) out(5.2 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 4517, records dropped: 522 output_compression: NoCompression
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.851420) EVENT_LOG_v1 {"time_micros": 1760364947851404, "job": 34, "event": "compaction_finished", "compaction_time_micros": 25000, "compaction_time_cpu_micros": 14559, "output_level": 6, "num_output_files": 1, "total_output_size": 5490793, "num_input_records": 4517, "num_output_records": 3995, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364947851850, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760364947852444, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.824229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.852511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.852515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.852517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.852518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:15:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:15:47.852520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:15:48 standalone.localdomain ceph-mon[29756]: pgmap v1226: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:49 standalone.localdomain haproxy[70940]: 172.21.0.2:38666 [13/Oct/2025:14:15:49.093] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:49 standalone.localdomain haproxy[70940]: 172.21.0.2:38666 [13/Oct/2025:14:15:49.098] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/276/276 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:49 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:15:49.407] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/24/24 200 8095 - - ---- 54/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:49 standalone.localdomain haproxy[70940]: 172.21.0.2:57094 [13/Oct/2025:14:15:49.401] neutron neutron/standalone.internalapi.localdomain 0/0/0/71/71 404 289 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/subnets/priv_sub HTTP/1.1"
Oct 13 14:15:49 standalone.localdomain haproxy[70940]: 172.21.0.2:57094 [13/Oct/2025:14:15:49.475] neutron neutron/standalone.internalapi.localdomain 0/0/0/49/49 200 810 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/subnets?name=priv_sub HTTP/1.1"
Oct 13 14:15:49 standalone.localdomain haproxy[70940]: 172.21.0.2:57094 [13/Oct/2025:14:15:49.531] neutron neutron/standalone.internalapi.localdomain 0/0/0/41/41 404 291 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/routers/priv_router HTTP/1.1"
Oct 13 14:15:49 standalone.localdomain haproxy[70940]: 172.21.0.2:57094 [13/Oct/2025:14:15:49.576] neutron neutron/standalone.internalapi.localdomain 0/0/0/69/69 200 632 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/routers?name=priv_router HTTP/1.1"
Oct 13 14:15:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:15:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:15:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:15:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1227: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:49 standalone.localdomain systemd[1]: tmp-crun.yS2AiZ.mount: Deactivated successfully.
Oct 13 14:15:49 standalone.localdomain podman[179638]: 2025-10-13 14:15:49.832558653 +0000 UTC m=+0.091915140 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-northd, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, name=rhosp17/openstack-ovn-northd, com.redhat.component=openstack-ovn-northd-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:30:04, container_name=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, config_id=ovn_cluster_northd)
Oct 13 14:15:49 standalone.localdomain podman[179638]: 2025-10-13 14:15:49.845766528 +0000 UTC m=+0.105123025 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.component=openstack-ovn-northd-container, config_id=ovn_cluster_northd, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, container_name=ovn_cluster_northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-ovn-northd, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:30:04, summary=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:15:49 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:15:49 standalone.localdomain podman[179639]: 2025-10-13 14:15:49.934592492 +0000 UTC m=+0.188674416 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=clustercheck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-mariadb, managed_by=tripleo_ansible, com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, 
distribution-scope=public, io.openshift.expose-services=, release=1, vcs-type=git, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:15:49 standalone.localdomain podman[179639]: 2025-10-13 14:15:49.989292446 +0000 UTC m=+0.243374410 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, release=1, build-date=2025-07-21T12:58:45, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, name=rhosp17/openstack-mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., container_name=clustercheck, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']})
Oct 13 14:15:50 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:15:50 standalone.localdomain podman[179637]: 2025-10-13 14:15:49.992804183 +0000 UTC m=+0.251800129 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, container_name=nova_compute, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:15:50 standalone.localdomain podman[179637]: 2025-10-13 14:15:50.072729132 +0000 UTC m=+0.331725138 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.buildah.version=1.33.12, version=17.1.9, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:15:50 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:15:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:15:50 standalone.localdomain dnsmasq[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/addn_hosts - 1 addresses
Oct 13 14:15:50 standalone.localdomain dnsmasq-dhcp[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/host
Oct 13 14:15:50 standalone.localdomain podman[179883]: 2025-10-13 14:15:50.450906829 +0000 UTC m=+0.039706063 container kill c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, version=17.1.9, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T16:28:54, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vcs-type=git, com.redhat.component=openstack-neutron-dhcp-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, release=1)
Oct 13 14:15:50 standalone.localdomain dnsmasq-dhcp[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/opts
Oct 13 14:15:50 standalone.localdomain ceph-mon[29756]: pgmap v1227: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:50 standalone.localdomain runuser[179958]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:15:51 standalone.localdomain runuser[179958]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:15:51 standalone.localdomain runuser[180039]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:15:51 standalone.localdomain dnsmasq[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/addn_hosts - 1 addresses
Oct 13 14:15:51 standalone.localdomain dnsmasq-dhcp[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/host
Oct 13 14:15:51 standalone.localdomain podman[180038]: 2025-10-13 14:15:51.732224116 +0000 UTC m=+0.060645187 container kill c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vcs-type=git, com.redhat.component=openstack-neutron-dhcp-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:54, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:15:51 standalone.localdomain dnsmasq-dhcp[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/opts
Oct 13 14:15:51 standalone.localdomain systemd[1]: tmp-crun.Tx2Zht.mount: Deactivated successfully.
Oct 13 14:15:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1228: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:51 standalone.localdomain haproxy[70940]: 172.21.0.2:57094 [13/Oct/2025:14:15:49.648] neutron neutron/standalone.internalapi.localdomain 0/0/0/2335/2335 200 483 - - ---- 54/1/0/0/0 0/0 "PUT /v2.0/routers/b5cb6ecf-35bc-4222-ae29-d5d413db942b/add_router_interface HTTP/1.1"
Oct 13 14:15:52 standalone.localdomain sshd[179515]: Received disconnect from 192.168.122.11 port 59006:11: disconnected by user
Oct 13 14:15:52 standalone.localdomain sshd[179515]: Disconnected from user root 192.168.122.11 port 59006
Oct 13 14:15:52 standalone.localdomain sshd[179512]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:52 standalone.localdomain systemd[1]: session-61.scope: Deactivated successfully.
Oct 13 14:15:52 standalone.localdomain systemd[1]: session-61.scope: Consumed 1.857s CPU time.
Oct 13 14:15:52 standalone.localdomain systemd-logind[45629]: Session 61 logged out. Waiting for processes to exit.
Oct 13 14:15:52 standalone.localdomain systemd-logind[45629]: Removed session 61.
Oct 13 14:15:52 standalone.localdomain sshd[180117]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:52 standalone.localdomain runuser[180039]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:15:52 standalone.localdomain runuser[180126]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:15:52 standalone.localdomain sshd[180117]: Accepted publickey for root from 192.168.122.11 port 36838 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:52 standalone.localdomain systemd-logind[45629]: New session 62 of user root.
Oct 13 14:15:52 standalone.localdomain systemd[1]: Started Session 62 of User root.
Oct 13 14:15:52 standalone.localdomain sshd[180117]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:52 standalone.localdomain ceph-mon[29756]: pgmap v1228: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:53 standalone.localdomain runuser[180126]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:15:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:15:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:15:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:15:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:15:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:15:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:15:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1229: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:54 standalone.localdomain sudo[180225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:15:54 standalone.localdomain sudo[180225]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:15:54 standalone.localdomain sudo[180225]: pam_unix(sudo:session): session closed for user root
Oct 13 14:15:54 standalone.localdomain haproxy[70940]: 172.21.0.2:50554 [13/Oct/2025:14:15:54.220] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/5/5 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:54 standalone.localdomain sudo[180242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 14:15:54 standalone.localdomain sudo[180242]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:15:54 standalone.localdomain haproxy[70940]: 172.21.0.2:50554 [13/Oct/2025:14:15:54.227] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/328/328 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:54 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:15:54.581] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/22/22 200 8095 - - ---- 54/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:15:54 standalone.localdomain haproxy[70940]: 172.21.0.2:56640 [13/Oct/2025:14:15:54.577] neutron neutron/standalone.internalapi.localdomain 0/0/0/68/68 404 291 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/routers/priv_router HTTP/1.1"
Oct 13 14:15:54 standalone.localdomain haproxy[70940]: 172.21.0.2:56640 [13/Oct/2025:14:15:54.648] neutron neutron/standalone.internalapi.localdomain 0/0/0/78/78 200 632 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/routers?name=priv_router HTTP/1.1"
Oct 13 14:15:54 standalone.localdomain haproxy[70940]: 172.21.0.2:56640 [13/Oct/2025:14:15:54.729] neutron neutron/standalone.internalapi.localdomain 0/0/0/12/12 404 289 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/networks/public HTTP/1.1"
Oct 13 14:15:54 standalone.localdomain haproxy[70940]: 172.21.0.2:56640 [13/Oct/2025:14:15:54.743] neutron neutron/standalone.internalapi.localdomain 0/0/0/64/64 200 924 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/networks?name=public HTTP/1.1"
Oct 13 14:15:54 standalone.localdomain ceph-mon[29756]: pgmap v1229: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:15:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:15:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:15:55 standalone.localdomain podman[180337]: 2025-10-13 14:15:55.092563695 +0000 UTC m=+0.080116266 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, version=7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=553, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, name=rhceph)
Oct 13 14:15:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:15:55 standalone.localdomain podman[180356]: 2025-10-13 14:15:55.158605937 +0000 UTC m=+0.061909595 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, version=17.1.9, build-date=2025-07-21T13:27:15, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step3, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, 
io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:15:55 standalone.localdomain podman[180356]: 2025-10-13 14:15:55.16874124 +0000 UTC m=+0.072044928 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20250721.1, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, distribution-scope=public, config_id=tripleo_step3, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9)
Oct 13 14:15:55 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:15:55 standalone.localdomain podman[180337]: 2025-10-13 14:15:55.221362618 +0000 UTC m=+0.208915239 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhceph, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, release=553, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 14:15:55 standalone.localdomain systemd[1]: tmp-crun.apfGe9.mount: Deactivated successfully.
Oct 13 14:15:55 standalone.localdomain podman[180358]: 2025-10-13 14:15:55.234394529 +0000 UTC m=+0.135497990 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, name=rhosp17/openstack-cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, build-date=2025-07-21T15:58:55, com.redhat.component=openstack-cinder-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, container_name=cinder_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1)
Oct 13 14:15:55 standalone.localdomain podman[180358]: 2025-10-13 14:15:55.270872262 +0000 UTC m=+0.171975723 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, build-date=2025-07-21T15:58:55, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, com.redhat.component=openstack-cinder-api-container, name=rhosp17/openstack-cinder-api, container_name=cinder_api_cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, 
description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, version=17.1.9)
Oct 13 14:15:55 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:15:55 standalone.localdomain podman[180357]: 2025-10-13 14:15:55.283360756 +0000 UTC m=+0.184612051 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, container_name=cinder_scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-scheduler, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-cinder-scheduler-container, version=17.1.9, build-date=2025-07-21T16:10:12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, distribution-scope=public)
Oct 13 14:15:55 standalone.localdomain podman[180357]: 2025-10-13 14:15:55.328562067 +0000 UTC m=+0.229813352 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, vendor=Red Hat, Inc., container_name=cinder_scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:10:12, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-cinder-scheduler-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cinder-scheduler, batch=17.1_20250721.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=)
Oct 13 14:15:55 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:15:55 standalone.localdomain sudo[180242]: pam_unix(sudo:session): session closed for user root
Oct 13 14:15:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:15:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:15:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:15:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:15:55 standalone.localdomain sudo[180499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:15:55 standalone.localdomain sudo[180499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:15:55 standalone.localdomain sudo[180499]: pam_unix(sudo:session): session closed for user root
Oct 13 14:15:55 standalone.localdomain sudo[180514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:15:55 standalone.localdomain sudo[180514]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:15:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1230: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:55 standalone.localdomain dnsmasq[179222]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 14:15:55 standalone.localdomain podman[180584]: 2025-10-13 14:15:55.897778942 +0000 UTC m=+0.059890444 container kill 8344d37429b41c94b4bf4785944166a74975bf8ee81075e6b155aa6b933000ee (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-type=git, version=17.1.9, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-dhcp-agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T16:28:54, com.redhat.component=openstack-neutron-dhcp-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true)
Oct 13 14:15:55 standalone.localdomain dnsmasq-dhcp[179222]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 14:15:55 standalone.localdomain dnsmasq-dhcp[179222]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 14:15:56 standalone.localdomain sudo[180514]: pam_unix(sudo:session): session closed for user root
Oct 13 14:15:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:15:56 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:15:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:15:56 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:15:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:15:56 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:15:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:15:56 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:15:56 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 3af1cac5-b7ab-4abc-8bbb-7eef148e2da1 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:15:56 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 3af1cac5-b7ab-4abc-8bbb-7eef148e2da1 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:15:56 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 3af1cac5-b7ab-4abc-8bbb-7eef148e2da1 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:15:56 standalone.localdomain sudo[180646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:15:56 standalone.localdomain sudo[180646]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:15:56 standalone.localdomain sudo[180646]: pam_unix(sudo:session): session closed for user root
Oct 13 14:15:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:15:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:15:56 standalone.localdomain ceph-mon[29756]: pgmap v1230: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:15:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:15:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:15:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:15:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:15:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/826765395' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:15:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:15:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/826765395' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:15:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/826765395' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:15:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/826765395' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:15:57 standalone.localdomain haproxy[70940]: 172.21.0.2:56640 [13/Oct/2025:14:15:54.814] neutron neutron/standalone.internalapi.localdomain 0/0/0/2874/2874 200 812 - - ---- 54/1/0/0/0 0/0 "PUT /v2.0/routers/b5cb6ecf-35bc-4222-ae29-d5d413db942b HTTP/1.1"
Oct 13 14:15:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1231: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:57 standalone.localdomain sshd[180174]: Received disconnect from 192.168.122.11 port 36838:11: disconnected by user
Oct 13 14:15:57 standalone.localdomain sshd[180174]: Disconnected from user root 192.168.122.11 port 36838
Oct 13 14:15:57 standalone.localdomain sshd[180117]: pam_unix(sshd:session): session closed for user root
Oct 13 14:15:57 standalone.localdomain sshd[180701]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:15:57 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:15:57 standalone.localdomain systemd[1]: session-62.scope: Deactivated successfully.
Oct 13 14:15:57 standalone.localdomain systemd[1]: session-62.scope: Consumed 2.090s CPU time.
Oct 13 14:15:57 standalone.localdomain systemd-logind[45629]: Session 62 logged out. Waiting for processes to exit.
Oct 13 14:15:57 standalone.localdomain systemd-logind[45629]: Removed session 62.
Oct 13 14:15:58 standalone.localdomain recover_tripleo_nova_virtqemud[180704]: 93291
Oct 13 14:15:58 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:15:58 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:15:58 standalone.localdomain sshd[180701]: Accepted publickey for root from 192.168.122.11 port 36854 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:15:58 standalone.localdomain systemd-logind[45629]: New session 63 of user root.
Oct 13 14:15:58 standalone.localdomain systemd[1]: Started Session 63 of User root.
Oct 13 14:15:58 standalone.localdomain sshd[180701]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:15:58 standalone.localdomain ceph-mon[29756]: pgmap v1231: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:58 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:15:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:15:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:15:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:15:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:15:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:15:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:15:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:15:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:15:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:15:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1232: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:15:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:15:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:15:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:15:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:15:59 standalone.localdomain systemd[1]: tmp-crun.pEt6Tl.mount: Deactivated successfully.
Oct 13 14:15:59 standalone.localdomain haproxy[70940]: 172.21.0.2:51292 [13/Oct/2025:14:15:59.860] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/11/11 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:15:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:15:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:15:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:15:59 standalone.localdomain podman[180855]: 2025-10-13 14:15:59.918367637 +0000 UTC m=+0.145978582 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, 
tcib_managed=true, container_name=heat_api, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12)
Oct 13 14:15:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:16:00 standalone.localdomain podman[180829]: 2025-10-13 14:15:59.957295705 +0000 UTC m=+0.208466486 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-scheduler-container, container_name=nova_scheduler, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, build-date=2025-07-21T16:02:54, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-scheduler, release=1, vcs-type=git, name=rhosp17/openstack-nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, 
io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 14:16:00 standalone.localdomain podman[180877]: 2025-10-13 14:16:00.011367809 +0000 UTC m=+0.230499524 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-manila-api-container, tcib_managed=true, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, description=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, container_name=manila_api_cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:16:00 standalone.localdomain podman[180826]: 2025-10-13 14:15:59.933854533 +0000 UTC m=+0.189727248 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, version=17.1.9, container_name=heat_api_cfn, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T14:49:55, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, config_id=tripleo_step4, com.redhat.component=openstack-heat-api-cfn-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api-cfn)
Oct 13 14:16:00 standalone.localdomain podman[180828]: 2025-10-13 14:16:00.024229405 +0000 UTC m=+0.282024109 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, com.redhat.component=openstack-heat-engine-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, release=1, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, container_name=heat_engine, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 
17.1 heat-engine, batch=17.1_20250721.1, build-date=2025-07-21T15:44:11, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vendor=Red Hat, Inc.)
Oct 13 14:16:00 standalone.localdomain podman[180956]: 2025-10-13 14:16:00.076575765 +0000 UTC m=+0.187594633 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_conductor, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-conductor, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-nova-conductor-container, name=rhosp17/openstack-nova-conductor, build-date=2025-07-21T15:44:17, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.expose-services=, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, vcs-type=git, architecture=x86_64)
Oct 13 14:16:00 standalone.localdomain podman[180844]: 2025-10-13 14:15:59.88336307 +0000 UTC m=+0.121944683 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, container_name=cinder_api, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:58:55, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-cinder-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, name=rhosp17/openstack-cinder-api, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1)
Oct 13 14:16:00 standalone.localdomain podman[180826]: 2025-10-13 14:16:00.113167032 +0000 UTC m=+0.369039757 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-heat-api-cfn-container, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, version=17.1.9, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, build-date=2025-07-21T14:49:55, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:16:00 standalone.localdomain podman[181069]: 2025-10-13 14:16:00.12352832 +0000 UTC m=+0.118547719 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_id=tripleo_step4, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-neutron-server-container, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, 
maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, build-date=2025-07-21T15:44:03, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20250721.1, release=1, tcib_managed=true, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, container_name=neutron_api)
Oct 13 14:16:00 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain podman[180860]: 2025-10-13 14:15:59.974630818 +0000 UTC m=+0.204072000 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, version=17.1.9, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, 
batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container)
Oct 13 14:16:00 standalone.localdomain podman[180855]: 2025-10-13 14:16:00.136878501 +0000 UTC m=+0.364489446 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=heat_api, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T15:56:26, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-heat-api-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1)
Oct 13 14:16:00 standalone.localdomain podman[180851]: 2025-10-13 14:16:00.041129895 +0000 UTC m=+0.283999690 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, build-date=2025-07-21T15:56:26, distribution-scope=public, vcs-type=git, container_name=heat_api_cron, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, release=1)
Oct 13 14:16:00 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain podman[180829]: 2025-10-13 14:16:00.145973471 +0000 UTC m=+0.397144212 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, com.redhat.component=openstack-nova-scheduler-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, build-date=2025-07-21T16:02:54, architecture=x86_64, io.openshift.expose-services=, summary=Red 
Hat OpenStack Platform 17.1 nova-scheduler, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-scheduler, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, container_name=nova_scheduler, version=17.1.9, maintainer=OpenStack TripleO Team)
Oct 13 14:16:00 standalone.localdomain podman[180956]: 2025-10-13 14:16:00.150967985 +0000 UTC m=+0.261986863 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, release=1, com.redhat.component=openstack-nova-conductor-container, build-date=2025-07-21T15:44:17, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, name=rhosp17/openstack-nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, architecture=x86_64, vcs-type=git, container_name=nova_conductor, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible)
Oct 13 14:16:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:16:00 standalone.localdomain podman[180877]: 2025-10-13 14:16:00.155911387 +0000 UTC m=+0.375043112 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, container_name=manila_api_cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:06:43, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-manila-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:16:00 standalone.localdomain haproxy[70940]: 172.21.0.2:51292 [13/Oct/2025:14:15:59.873] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/283/283 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:00 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain podman[180860]: 2025-10-13 14:16:00.160775917 +0000 UTC m=+0.390217059 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.component=openstack-cron-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp 
openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, name=rhosp17/openstack-cron)
Oct 13 14:16:00 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain podman[180851]: 2025-10-13 14:16:00.173642352 +0000 UTC m=+0.416512127 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, name=rhosp17/openstack-heat-api, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_id=tripleo_step4, container_name=heat_api_cron, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T15:56:26)
Oct 13 14:16:00 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:16:00.192] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/22/22 200 8095 - - ---- 54/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:00 standalone.localdomain podman[180844]: 2025-10-13 14:16:00.224798576 +0000 UTC m=+0.463380169 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T15:58:55, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, distribution-scope=public, 
maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=cinder_api, io.buildah.version=1.33.12)
Oct 13 14:16:00 standalone.localdomain podman[180827]: 2025-10-13 14:16:00.231563055 +0000 UTC m=+0.489810713 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, config_id=tripleo_step3, tcib_managed=true, container_name=keystone, batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:18, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, release=1, name=rhosp17/openstack-keystone, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:16:00 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain haproxy[70940]: 172.21.0.2:36304 [13/Oct/2025:14:16:00.172] neutron neutron/standalone.internalapi.localdomain 0/0/0/86/86 404 303 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/floatingips/192.168.122.20 HTTP/1.1"
Oct 13 14:16:00 standalone.localdomain podman[180853]: 2025-10-13 14:16:00.285571956 +0000 UTC m=+0.520680932 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T13:58:15, batch=17.1_20250721.1, container_name=horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, com.redhat.component=openstack-horizon-container, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 horizon, distribution-scope=public, tcib_managed=true, architecture=x86_64, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 horizon, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-type=git)
Oct 13 14:16:00 standalone.localdomain podman[180827]: 2025-10-13 14:16:00.297838803 +0000 UTC m=+0.556086431 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, build-date=2025-07-21T13:27:18, batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, version=17.1.9, vcs-type=git, architecture=x86_64, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=keystone, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:16:00 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain podman[180954]: 2025-10-13 14:16:00.002211687 +0000 UTC m=+0.115078432 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, io.buildah.version=1.33.12, build-date=2025-07-21T15:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 manila-scheduler, distribution-scope=public, batch=17.1_20250721.1, config_id=tripleo_step4, release=1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, container_name=manila_scheduler, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, tcib_managed=true, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/manila:/var/log/manila:z']}, name=rhosp17/openstack-manila-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-manila-scheduler-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 manila-scheduler)
Oct 13 14:16:00 standalone.localdomain podman[180828]: 2025-10-13 14:16:00.309834763 +0000 UTC m=+0.567629487 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-engine-container, container_name=heat_engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, io.openshift.expose-services=, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, name=rhosp17/openstack-heat-engine, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-engine, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20250721.1)
Oct 13 14:16:00 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain haproxy[70940]: 172.21.0.2:36304 [13/Oct/2025:14:16:00.260] neutron neutron/standalone.internalapi.localdomain 0/0/0/74/74 200 192 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/floatingips HTTP/1.1"
Oct 13 14:16:00 standalone.localdomain podman[181069]: 2025-10-13 14:16:00.33766944 +0000 UTC m=+0.332688829 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, summary=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-07-21T15:44:03, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, vcs-type=git, name=rhosp17/openstack-neutron-server, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, container_name=neutron_api, distribution-scope=public, io.buildah.version=1.33.12, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-neutron-server-container, managed_by=tripleo_ansible)
Oct 13 14:16:00 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain podman[180830]: 2025-10-13 14:16:00.056194618 +0000 UTC m=+0.297531066 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-memcached-container, summary=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, release=1, version=17.1.9, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-memcached, build-date=2025-07-21T12:58:43, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, distribution-scope=public)
Oct 13 14:16:00 standalone.localdomain podman[180853]: 2025-10-13 14:16:00.364530156 +0000 UTC m=+0.599639112 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 horizon, tcib_managed=true, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-horizon, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', 
'/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, distribution-scope=public, build-date=2025-07-21T13:58:15, summary=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, vcs-type=git, container_name=horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-horizon-container)
Oct 13 14:16:00 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain podman[180954]: 2025-10-13 14:16:00.390241357 +0000 UTC m=+0.503108102 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, com.redhat.component=openstack-manila-scheduler-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-manila-scheduler, container_name=manila_scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, 
description=Red Hat OpenStack Platform 17.1 manila-scheduler, release=1, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, build-date=2025-07-21T15:56:28, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:16:00 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain podman[180830]: 2025-10-13 14:16:00.442309769 +0000 UTC m=+0.683646257 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, container_name=memcached, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 memcached, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-memcached, tcib_managed=true, vcs-type=git, build-date=2025-07-21T12:58:43, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.9, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.buildah.version=1.33.12, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:16:00 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain sshd[180714]: Received disconnect from 192.168.122.11 port 36854:11: disconnected by user
Oct 13 14:16:00 standalone.localdomain sshd[180714]: Disconnected from user root 192.168.122.11 port 36854
Oct 13 14:16:00 standalone.localdomain sshd[180701]: pam_unix(sshd:session): session closed for user root
Oct 13 14:16:00 standalone.localdomain systemd[1]: session-63.scope: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain systemd[1]: session-63.scope: Consumed 1.923s CPU time.
Oct 13 14:16:00 standalone.localdomain systemd-logind[45629]: Session 63 logged out. Waiting for processes to exit.
Oct 13 14:16:00 standalone.localdomain systemd-logind[45629]: Removed session 63.
Oct 13 14:16:00 standalone.localdomain sshd[181287]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:00 standalone.localdomain sshd[181287]: Accepted publickey for root from 192.168.122.11 port 43598 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:16:00 standalone.localdomain systemd-logind[45629]: New session 64 of user root.
Oct 13 14:16:00 standalone.localdomain systemd[1]: Started Session 64 of User root.
Oct 13 14:16:00 standalone.localdomain sshd[181287]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:00 standalone.localdomain systemd[1]: tmp-crun.Hs6r9i.mount: Deactivated successfully.
Oct 13 14:16:00 standalone.localdomain ceph-mon[29756]: pgmap v1232: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:16:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1233: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:16:01 standalone.localdomain ceph-mon[29756]: pgmap v1233: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:16:02 standalone.localdomain haproxy[70940]: 172.21.0.2:51294 [13/Oct/2025:14:16:02.339] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:02 standalone.localdomain podman[181374]: 2025-10-13 14:16:02.375135903 +0000 UTC m=+0.080986643 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vcs-type=git, name=rhosp17/openstack-mariadb, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T12:58:45, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-mariadb-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:16:02 standalone.localdomain podman[181374]: 2025-10-13 14:16:02.378340812 +0000 UTC m=+0.084191582 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, distribution-scope=public, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, release=1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git)
Oct 13 14:16:02 standalone.localdomain podman[181408]: 2025-10-13 14:16:02.633100211 +0000 UTC m=+0.106903801 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11, description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, com.redhat.component=openstack-haproxy-container, release=1, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, name=rhosp17/openstack-haproxy, tcib_managed=true, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:16:02 standalone.localdomain haproxy[70940]: 172.21.0.2:51294 [13/Oct/2025:14:16:02.346] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/288/288 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:02 standalone.localdomain podman[181408]: 2025-10-13 14:16:02.661870696 +0000 UTC m=+0.135674266 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T13:08:11, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, vcs-type=git, release=1, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, com.redhat.component=openstack-haproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true)
Oct 13 14:16:02 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:16:02.661] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/30/30 200 8095 - - ---- 54/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:02 standalone.localdomain haproxy[70940]: 172.21.0.2:36314 [13/Oct/2025:14:16:02.656] neutron neutron/standalone.internalapi.localdomain 0/0/0/51/51 404 289 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/networks/public HTTP/1.1"
Oct 13 14:16:02 standalone.localdomain haproxy[70940]: 172.21.0.2:36314 [13/Oct/2025:14:16:02.710] neutron neutron/standalone.internalapi.localdomain 0/0/0/62/62 200 924 - - ---- 54/1/0/0/0 0/0 "GET /v2.0/networks?name=public HTTP/1.1"
Oct 13 14:16:02 standalone.localdomain podman[181434]: 2025-10-13 14:16:02.772879282 +0000 UTC m=+0.098676007 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, name=rhosp17/openstack-rabbitmq, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct 13 14:16:02 standalone.localdomain podman[181434]: 2025-10-13 14:16:02.802232515 +0000 UTC m=+0.128029210 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-rabbitmq-container, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:08:05, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 14:16:03 standalone.localdomain runuser[181483]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:16:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1234: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:16:03 standalone.localdomain haproxy[70940]: 172.21.0.2:36314 [13/Oct/2025:14:16:02.775] neutron neutron/standalone.internalapi.localdomain 0/0/0/1014/1014 201 756 - - ---- 54/1/0/0/0 0/0 "POST /v2.0/floatingips HTTP/1.1"
Oct 13 14:16:04 standalone.localdomain sshd[181290]: Received disconnect from 192.168.122.11 port 43598:11: disconnected by user
Oct 13 14:16:04 standalone.localdomain sshd[181290]: Disconnected from user root 192.168.122.11 port 43598
Oct 13 14:16:04 standalone.localdomain sshd[181287]: pam_unix(sshd:session): session closed for user root
Oct 13 14:16:04 standalone.localdomain systemd[1]: session-64.scope: Deactivated successfully.
Oct 13 14:16:04 standalone.localdomain systemd[1]: session-64.scope: Consumed 1.826s CPU time.
Oct 13 14:16:04 standalone.localdomain systemd-logind[45629]: Session 64 logged out. Waiting for processes to exit.
Oct 13 14:16:04 standalone.localdomain systemd-logind[45629]: Removed session 64.
Oct 13 14:16:04 standalone.localdomain sshd[181530]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:04 standalone.localdomain sshd[181530]: Accepted publickey for root from 192.168.122.11 port 43600 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:16:04 standalone.localdomain systemd-logind[45629]: New session 65 of user root.
Oct 13 14:16:04 standalone.localdomain systemd[1]: Started Session 65 of User root.
Oct 13 14:16:04 standalone.localdomain sshd[181530]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:04 standalone.localdomain runuser[181483]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:16:04 standalone.localdomain runuser[181571]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:16:04 standalone.localdomain ceph-mon[29756]: pgmap v1234: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:16:05 standalone.localdomain runuser[181571]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:16:05 standalone.localdomain runuser[181627]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:16:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:16:05 standalone.localdomain sshd[181686]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:16:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:16:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:16:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:16:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:16:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1235: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:16:05 standalone.localdomain podman[181695]: 2025-10-13 14:16:05.840195325 +0000 UTC m=+0.099237105 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, build-date=2025-07-21T16:11:22, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team)
Oct 13 14:16:05 standalone.localdomain runuser[181627]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:16:05 standalone.localdomain podman[181694]: 2025-10-13 14:16:05.856750744 +0000 UTC m=+0.102676900 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, release=1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9)
Oct 13 14:16:05 standalone.localdomain podman[181692]: 2025-10-13 14:16:05.922992742 +0000 UTC m=+0.181170025 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, com.redhat.component=openstack-swift-object-container, managed_by=tripleo_ansible, container_name=swift_object_server, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object)
Oct 13 14:16:05 standalone.localdomain podman[181693]: 2025-10-13 14:16:05.898769587 +0000 UTC m=+0.154033680 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 13 14:16:05 standalone.localdomain podman[181696]: 2025-10-13 14:16:05.98661122 +0000 UTC m=+0.245221756 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.expose-services=, release=1, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, com.redhat.component=openstack-nova-novncproxy-container, container_name=nova_vnc_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, name=rhosp17/openstack-nova-novncproxy, build-date=2025-07-21T15:24:10, io.buildah.version=1.33.12, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 14:16:06 standalone.localdomain podman[181695]: 2025-10-13 14:16:06.009096592 +0000 UTC m=+0.268138372 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, architecture=x86_64, version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=swift_account_server, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:16:06 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:16:06 standalone.localdomain podman[181694]: 2025-10-13 14:16:06.050870717 +0000 UTC m=+0.296796883 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, build-date=2025-07-21T15:54:32, 
managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server)
Oct 13 14:16:06 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:16:06 standalone.localdomain haproxy[70940]: 172.21.0.2:51306 [13/Oct/2025:14:16:06.089] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 53/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:06 standalone.localdomain podman[181692]: 2025-10-13 14:16:06.108534792 +0000 UTC m=+0.366693494 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, distribution-scope=public, container_name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:16:06 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:16:06 standalone.localdomain ceph-mon[29756]: pgmap v1235: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:16:06 standalone.localdomain podman[181696]: 2025-10-13 14:16:06.232744174 +0000 UTC m=+0.491354700 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-novncproxy-container, container_name=nova_vnc_proxy, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, distribution-scope=public, build-date=2025-07-21T15:24:10, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, architecture=x86_64, name=rhosp17/openstack-nova-novncproxy, io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-novncproxy, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, managed_by=tripleo_ansible)
Oct 13 14:16:06 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:16:06 standalone.localdomain podman[181693]: 2025-10-13 14:16:06.292928825 +0000 UTC m=+0.548192978 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20250721.1, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible)
Oct 13 14:16:06 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:16:06 standalone.localdomain haproxy[70940]: 172.21.0.2:51306 [13/Oct/2025:14:16:06.095] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/270/270 201 8100 - - ---- 53/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:06 standalone.localdomain sshd[181686]: Received disconnect from 80.94.93.119 port 32572:11:  [preauth]
Oct 13 14:16:06 standalone.localdomain sshd[181686]: Disconnected from authenticating user root 80.94.93.119 port 32572 [preauth]
Oct 13 14:16:06 standalone.localdomain haproxy[70940]: 172.17.0.2:49732 [13/Oct/2025:14:16:06.432] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/43/43 200 8095 - - ---- 55/3/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:06 standalone.localdomain haproxy[70940]: 172.21.0.2:39480 [13/Oct/2025:14:16:06.425] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/58/58 404 463 - - ---- 55/1/0/0/0 0/0 "GET /v2.1/servers/test HTTP/1.1"
Oct 13 14:16:06 standalone.localdomain haproxy[70940]: 172.21.0.2:39480 [13/Oct/2025:14:16:06.489] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/96/96 200 377 - - ---- 55/1/0/0/0 0/0 "GET /v2.1/servers?name=test HTTP/1.1"
Oct 13 14:16:06 standalone.localdomain podman[181841]: 2025-10-13 14:16:06.629283286 +0000 UTC m=+0.085218024 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, name=rhosp17/openstack-cinder-volume, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-volume, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.buildah.version=1.33.12, com.redhat.component=openstack-cinder-volume-container, build-date=2025-07-21T16:13:39, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cinder-volume, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public)
Oct 13 14:16:06 standalone.localdomain podman[181841]: 2025-10-13 14:16:06.711954409 +0000 UTC m=+0.167889137 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, release=1, build-date=2025-07-21T16:13:39, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, com.redhat.component=openstack-cinder-volume-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-volume, summary=Red Hat OpenStack Platform 17.1 cinder-volume, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, maintainer=OpenStack TripleO Team)
Oct 13 14:16:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 14:16:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 2400.0 total, 600.0 interval
                                                        Cumulative writes: 6624 writes, 29K keys, 6624 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                        Cumulative WAL: 6624 writes, 6624 syncs, 1.00 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1560 writes, 7047 keys, 1560 commit groups, 1.0 writes per commit group, ingest: 5.89 MB, 0.01 MB/s
                                                        Interval WAL: 1560 writes, 1560 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    115.1      0.14              0.07        17    0.008       0      0       0.0       0.0
                                                          L6      1/0    5.24 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.7    209.3    177.1      0.34              0.19        16    0.021     57K   8959       0.0       0.0
                                                         Sum      1/0    5.24 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.7    148.1    159.0      0.48              0.26        33    0.015     57K   8959       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    167.9    172.2      0.15              0.08         8    0.018     17K   2562       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    209.3    177.1      0.34              0.19        16    0.021     57K   8959       0.0       0.0
                                                        High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    117.2      0.14              0.07        16    0.009       0      0       0.0       0.0
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 2400.0 total, 600.0 interval
                                                        Flush(GB): cumulative 0.016, interval 0.005
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.07 GB write, 0.03 MB/s write, 0.07 GB read, 0.03 MB/s read, 0.5 seconds
                                                        Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.04 MB/s read, 0.1 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x557b7fb5b350#2 capacity: 308.00 MB usage: 10.11 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.00017 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(933,9.70 MB,3.14838%) FilterBlock(34,163.17 KB,0.0517362%) IndexBlock(34,259.67 KB,0.0823331%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
Oct 13 14:16:06 standalone.localdomain systemd[1]: tmp-crun.KbUlor.mount: Deactivated successfully.
Oct 13 14:16:06 standalone.localdomain sshd[181540]: Received disconnect from 192.168.122.11 port 43600:11: disconnected by user
Oct 13 14:16:06 standalone.localdomain sshd[181540]: Disconnected from user root 192.168.122.11 port 43600
Oct 13 14:16:06 standalone.localdomain sshd[181530]: pam_unix(sshd:session): session closed for user root
Oct 13 14:16:06 standalone.localdomain systemd[1]: session-65.scope: Deactivated successfully.
Oct 13 14:16:06 standalone.localdomain systemd[1]: session-65.scope: Consumed 2.171s CPU time.
Oct 13 14:16:06 standalone.localdomain systemd-logind[45629]: Session 65 logged out. Waiting for processes to exit.
Oct 13 14:16:06 standalone.localdomain systemd-logind[45629]: Removed session 65.
Oct 13 14:16:06 standalone.localdomain sshd[181874]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:07 standalone.localdomain sshd[181874]: Accepted publickey for root from 192.168.122.11 port 43606 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:16:07 standalone.localdomain systemd-logind[45629]: New session 66 of user root.
Oct 13 14:16:07 standalone.localdomain systemd[1]: Started Session 66 of User root.
Oct 13 14:16:07 standalone.localdomain sshd[181874]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:16:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:16:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:16:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1236: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:16:07 standalone.localdomain podman[181901]: 2025-10-13 14:16:07.78637316 +0000 UTC m=+0.058488571 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=glance_api_cron, 
maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9)
Oct 13 14:16:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:16:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:16:07 standalone.localdomain podman[181901]: 2025-10-13 14:16:07.829741154 +0000 UTC m=+0.101856565 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, tcib_managed=true, name=rhosp17/openstack-glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, com.redhat.component=openstack-glance-api-container, 
io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, vendor=Red Hat, Inc., container_name=glance_api_cron, distribution-scope=public, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:16:07 standalone.localdomain podman[181902]: 2025-10-13 14:16:07.847995596 +0000 UTC m=+0.120109547 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-api-container, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, 
distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, release=1, container_name=nova_metadata, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:16:07 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:16:07 standalone.localdomain podman[181902]: 2025-10-13 14:16:07.87674318 +0000 UTC m=+0.148857141 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-api-container, container_name=nova_metadata, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-nova-api, version=17.1.9, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f)
Oct 13 14:16:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:16:07 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:16:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:16:07 standalone.localdomain podman[181943]: 2025-10-13 14:16:07.884925692 +0000 UTC m=+0.069360215 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T13:28:44, distribution-scope=public, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller)
Oct 13 14:16:07 standalone.localdomain podman[181903]: 2025-10-13 14:16:07.990917504 +0000 UTC m=+0.260794047 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vendor=Red Hat, Inc.)
Oct 13 14:16:08 standalone.localdomain podman[181942]: 2025-10-13 14:16:07.951536162 +0000 UTC m=+0.141219567 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, build-date=2025-07-21T13:58:12, batch=17.1_20250721.1, name=rhosp17/openstack-placement-api, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, com.redhat.component=openstack-placement-api-container, config_id=tripleo_step4, container_name=placement_api, summary=Red Hat OpenStack Platform 17.1 
placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true)
Oct 13 14:16:08 standalone.localdomain podman[181988]: 2025-10-13 14:16:08.044766741 +0000 UTC m=+0.140244247 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, com.redhat.component=openstack-glance-api-container, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:16:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:16:08 standalone.localdomain podman[181998]: 2025-10-13 14:16:08.061335231 +0000 UTC m=+0.144343984 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, maintainer=OpenStack TripleO Team, 
config_id=tripleo_step4, build-date=2025-07-21T14:48:37, container_name=swift_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:16:08 standalone.localdomain podman[181943]: 2025-10-13 14:16:08.070543084 +0000 UTC m=+0.254977617 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1)
Oct 13 14:16:08 standalone.localdomain podman[181942]: 2025-10-13 14:16:08.081537142 +0000 UTC m=+0.271220547 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 placement-api, release=1, container_name=placement_api, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:58:12, batch=17.1_20250721.1, 
distribution-scope=public, name=rhosp17/openstack-placement-api, com.redhat.component=openstack-placement-api-container, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, description=Red Hat OpenStack Platform 17.1 placement-api)
Oct 13 14:16:08 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:16:08 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:16:08 standalone.localdomain podman[182054]: 2025-10-13 14:16:08.147803001 +0000 UTC m=+0.083901182 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, container_name=glance_api, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, build-date=2025-07-21T13:58:20, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, release=1, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team)
Oct 13 14:16:08 standalone.localdomain podman[181903]: 2025-10-13 14:16:08.172766659 +0000 UTC m=+0.442643182 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Oct 13 14:16:08 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:16:08 standalone.localdomain podman[181988]: 2025-10-13 14:16:08.249738668 +0000 UTC m=+0.345216124 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, release=1, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, architecture=x86_64, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20)
Oct 13 14:16:08 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:16:08 standalone.localdomain podman[181998]: 2025-10-13 14:16:08.320393591 +0000 UTC m=+0.403402334 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, container_name=swift_proxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, 
io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_id=tripleo_step4, release=1, architecture=x86_64, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, distribution-scope=public)
Oct 13 14:16:08 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:16:08 standalone.localdomain podman[182054]: 2025-10-13 14:16:08.368753169 +0000 UTC m=+0.304851340 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-glance-api, release=1, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-glance-api-container, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, managed_by=tripleo_ansible, 
distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, container_name=glance_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T13:58:20, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:16:08 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:16:08 standalone.localdomain haproxy[70940]: 172.21.0.2:51318 [13/Oct/2025:14:16:08.780] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/5/5 300 515 - - ---- 54/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:08 standalone.localdomain ceph-mon[29756]: pgmap v1236: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:16:08 standalone.localdomain systemd[1]: tmp-crun.rmSvBf.mount: Deactivated successfully.
Oct 13 14:16:09 standalone.localdomain haproxy[70940]: 172.21.0.2:51318 [13/Oct/2025:14:16:08.788] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/272/272 201 8100 - - ---- 54/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:09 standalone.localdomain haproxy[70940]: 172.21.0.2:39716 [13/Oct/2025:14:16:09.148] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/3/3 300 1507 - - ---- 55/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:09 standalone.localdomain haproxy[70940]: 172.17.0.2:49740 [13/Oct/2025:14:16:09.162] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/32/32 200 8095 - - ---- 56/4/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:09 standalone.localdomain haproxy[70940]: 172.21.0.2:39716 [13/Oct/2025:14:16:09.156] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/47/47 404 340 - - ---- 56/1/0/0/0 0/0 "GET /v2/images/cirros HTTP/1.1"
Oct 13 14:16:09 standalone.localdomain haproxy[70940]: 172.21.0.2:39716 [13/Oct/2025:14:16:09.209] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/18/18 200 1199 - - ---- 56/1/0/0/0 0/0 "GET /v2/images?name=cirros HTTP/1.1"
Oct 13 14:16:09 standalone.localdomain haproxy[70940]: 172.21.0.2:39492 [13/Oct/2025:14:16:09.235] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/11/11 404 465 - - ---- 57/1/0/0/0 0/0 "GET /v2.1/flavors/m1.small HTTP/1.1"
Oct 13 14:16:09 standalone.localdomain haproxy[70940]: 172.21.0.2:39492 [13/Oct/2025:14:16:09.249] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/11/11 200 1215 - - ---- 57/1/0/0/0 0/0 "GET /v2.1/flavors HTTP/1.1"
Oct 13 14:16:09 standalone.localdomain haproxy[70940]: 172.21.0.2:39492 [13/Oct/2025:14:16:09.264] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/11/11 200 823 - - ---- 57/1/0/0/0 0/0 "GET /v2.1/flavors/993bc811-5d85-498c-8b2f-935e295a6567 HTTP/1.1"
Oct 13 14:16:09 standalone.localdomain haproxy[70940]: 172.21.0.2:36318 [13/Oct/2025:14:16:09.286] neutron neutron/standalone.internalapi.localdomain 0/0/0/29/29 404 290 - - ---- 58/1/0/0/0 0/0 "GET /v2.0/networks/private HTTP/1.1"
Oct 13 14:16:09 standalone.localdomain haproxy[70940]: 172.21.0.2:36318 [13/Oct/2025:14:16:09.318] neutron neutron/standalone.internalapi.localdomain 0/0/0/94/94 200 901 - - ---- 58/1/0/0/0 0/0 "GET /v2.0/networks?name=private HTTP/1.1"
Oct 13 14:16:09 standalone.localdomain haproxy[70940]: 172.17.0.2:49658 [13/Oct/2025:14:16:09.450] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/4/4 300 1507 - - ---- 59/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:09 standalone.localdomain haproxy[70940]: 172.17.0.2:49752 [13/Oct/2025:14:16:09.459] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 60/5/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:09 standalone.localdomain haproxy[70940]: 172.17.0.2:49752 [13/Oct/2025:14:16:09.466] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/274/274 201 8164 - - ---- 60/5/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1237: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:16:09 standalone.localdomain haproxy[70940]: 172.17.0.2:43936 [13/Oct/2025:14:16:09.754] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 61/6/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:43936 [13/Oct/2025:14:16:09.758] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/268/268 201 8103 - - ---- 61/6/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:43936 [13/Oct/2025:14:16:10.031] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/21/21 200 8159 - - ---- 61/6/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:49658 [13/Oct/2025:14:16:09.741] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/403/403 200 1388 - - ---- 62/1/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:49658 [13/Oct/2025:14:16:10.146] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/5/5 200 6258 - - ---- 62/1/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:47242 [13/Oct/2025:14:16:10.156] neutron neutron/standalone.internalapi.localdomain 0/0/0/84/84 200 901 - - ---- 63/2/0/0/0 0/0 "GET /v2.0/networks?id=0c455abd-28d4-47e7-a254-e50de0526def HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:47242 [13/Oct/2025:14:16:10.242] neutron neutron/standalone.internalapi.localdomain 0/0/0/9/9 200 361 - - ---- 63/2/0/0/0 0/0 "GET /v2.0/quotas/e44641a80bcb466cb3dd688e48b72d8e HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:47242 [13/Oct/2025:14:16:10.253] neutron neutron/standalone.internalapi.localdomain 0/0/0/53/53 200 461 - - ---- 63/2/0/0/0 0/0 "GET /v2.0/ports?tenant_id=e44641a80bcb466cb3dd688e48b72d8e&fields=id HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:43938 [13/Oct/2025:14:16:10.315] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/293/293 201 8104 - - ---- 64/7/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:16:10.618] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/18/18 200 8099 - - ---- 64/7/0/0/0 0/0 "GET /v3/auth/tokens?allow_expired=1 HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:47242 [13/Oct/2025:14:16:10.613] neutron neutron/standalone.internalapi.localdomain 0/0/0/26/26 200 16918 - - ---- 64/2/0/0/0 0/0 "GET /v2.0/extensions HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:47242 [13/Oct/2025:14:16:10.643] neutron neutron/standalone.internalapi.localdomain 0/0/0/54/54 200 187 - - ---- 64/2/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=segments HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:47242 [13/Oct/2025:14:16:10.700] neutron neutron/standalone.internalapi.localdomain 0/0/0/81/81 200 252 - - ---- 64/2/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=provider%3Aphysical_network&fields=provider%3Anetwork_type HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain ceph-mon[29756]: pgmap v1237: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.21.0.2:39492 [13/Oct/2025:14:16:09.415] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/1432/1432 202 821 - - ---- 64/1/0/0/0 0/0 "POST /v2.1/servers HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:46110 [13/Oct/2025:14:16:10.882] placement placement/standalone.internalapi.localdomain 0/0/0/3/3 200 381 - - ---- 65/1/0/0/0 0/0 "GET /placement HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:47242 [13/Oct/2025:14:16:10.868] neutron neutron/standalone.internalapi.localdomain 0/0/0/39/39 200 185 - - ---- 66/2/0/0/0 0/0 "GET /v2.0/ports?device_id=54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:43940 [13/Oct/2025:14:16:10.891] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/27/27 200 8101 - - ---- 66/8/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.21.0.2:39492 [13/Oct/2025:14:16:10.850] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/69/69 200 1698 - - ---- 66/1/0/0/0 0/0 "GET /v2.1/servers/54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:46110 [13/Oct/2025:14:16:10.887] placement placement/standalone.internalapi.localdomain 0/0/0/64/64 200 2247 - - ---- 66/1/0/0/0 0/0 "GET /placement/allocation_candidates?limit=1000&resources=DISK_GB%3A2%2CMEMORY_MB%3A512%2CVCPU%3A1&root_required=COMPUTE_IMAGE_TYPE_QCOW2%2C%21COMPUTE_STATUS_DISABLED HTTP/1.1"
Oct 13 14:16:10 standalone.localdomain haproxy[70940]: 172.17.0.2:46126 [13/Oct/2025:14:16:10.972] placement placement/standalone.internalapi.localdomain 0/0/0/2/2 200 381 - - ---- 67/2/0/0/0 0/0 "GET /placement HTTP/1.1"
Oct 13 14:16:11 standalone.localdomain haproxy[70940]: 172.17.0.2:43940 [13/Oct/2025:14:16:10.978] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/22/22 200 8101 - - ---- 67/8/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:11 standalone.localdomain haproxy[70940]: 172.17.0.2:46126 [13/Oct/2025:14:16:10.975] placement placement/standalone.internalapi.localdomain 0/0/0/38/38 200 327 - - ---- 67/2/0/0/0 0/0 "GET /placement/allocations/54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:11 standalone.localdomain haproxy[70940]: 172.17.0.2:46126 [13/Oct/2025:14:16:11.015] placement placement/standalone.internalapi.localdomain 0/0/0/64/64 204 209 - - ---- 67/2/0/0/0 0/0 "PUT /placement/allocations/54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:11 standalone.localdomain haproxy[70940]: 172.17.0.2:46138 [13/Oct/2025:14:16:11.367] placement placement/standalone.internalapi.localdomain 0/0/0/8/8 200 566 - - ---- 70/3/0/0/0 0/0 "GET /placement/allocations/54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1238: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:16:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:16:11 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/540359458' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:16:12 standalone.localdomain haproxy[70940]: 172.17.0.2:43948 [13/Oct/2025:14:16:12.050] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 71/9/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:12 standalone.localdomain haproxy[70940]: 172.17.0.2:43948 [13/Oct/2025:14:16:12.056] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/279/279 201 8164 - - ---- 71/9/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:12 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:16:12.346] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/38/38 200 8159 - - ---- 73/9/1/1/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:12 standalone.localdomain haproxy[70940]: 172.17.0.2:43936 [13/Oct/2025:14:16:12.348] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/71/71 200 8159 - - ---- 73/9/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:12 standalone.localdomain haproxy[70940]: 172.17.0.2:45284 [13/Oct/2025:14:16:12.341] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/107/107 200 1388 - - ---- 73/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:16:12 standalone.localdomain haproxy[70940]: 172.17.0.2:45284 [13/Oct/2025:14:16:12.450] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/6/6 200 6258 - - ---- 73/2/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:16:12 standalone.localdomain haproxy[70940]: 172.17.0.2:45284 [13/Oct/2025:14:16:12.462] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/19/19 200 1388 - - ---- 73/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:16:12 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:12.341] neutron neutron/standalone.internalapi.localdomain 0/0/0/152/152 200 901 - - ---- 73/3/0/0/0 0/0 "GET /v2.0/networks?id=0c455abd-28d4-47e7-a254-e50de0526def HTTP/1.1"
Oct 13 14:16:12 standalone.localdomain haproxy[70940]: 172.17.0.2:45284 [13/Oct/2025:14:16:12.484] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/122/179 200 21692646 - - ---- 73/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b/file HTTP/1.1"
Oct 13 14:16:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:16:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:16:12 standalone.localdomain haproxy[70940]: 172.17.0.2:45284 [13/Oct/2025:14:16:12.735] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/25/25 200 1388 - - ---- 73/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:16:12 standalone.localdomain haproxy[70940]: 172.17.0.2:45284 [13/Oct/2025:14:16:12.762] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/5/5 200 6258 - - ---- 73/2/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:16:12 standalone.localdomain systemd[1]: tmp-crun.u7YnIX.mount: Deactivated successfully.
Oct 13 14:16:12 standalone.localdomain podman[182526]: 2025-10-13 14:16:12.802432646 +0000 UTC m=+0.065611060 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T16:05:11, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vcs-type=git, config_id=tripleo_step4, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, architecture=x86_64, container_name=nova_api)
Oct 13 14:16:12 standalone.localdomain ceph-mon[29756]: pgmap v1238: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:16:12 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/540359458' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:16:12 standalone.localdomain podman[182526]: 2025-10-13 14:16:12.830946773 +0000 UTC m=+0.094125157 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, io.openshift.expose-services=, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, name=rhosp17/openstack-nova-api, batch=17.1_20250721.1, container_name=nova_api, com.redhat.component=openstack-nova-api-container, build-date=2025-07-21T16:05:11, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 14:16:12 standalone.localdomain systemd[1]: tmp-crun.t4EdXr.mount: Deactivated successfully.
Oct 13 14:16:12 standalone.localdomain podman[182525]: 2025-10-13 14:16:12.851663011 +0000 UTC m=+0.115157794 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=keystone_cron, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_id=tripleo_step3, io.buildah.version=1.33.12, name=rhosp17/openstack-keystone, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, version=17.1.9, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:16:12 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:16:12 standalone.localdomain podman[182525]: 2025-10-13 14:16:12.863843186 +0000 UTC m=+0.127337969 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.buildah.version=1.33.12, release=1, architecture=x86_64, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, container_name=keystone_cron, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., 
version=17.1.9, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, name=rhosp17/openstack-keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1)
Oct 13 14:16:12 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:16:13 standalone.localdomain dnsmasq[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/addn_hosts - 2 addresses
Oct 13 14:16:13 standalone.localdomain dnsmasq-dhcp[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/host
Oct 13 14:16:13 standalone.localdomain podman[182600]: 2025-10-13 14:16:13.066273934 +0000 UTC m=+0.035466362 container kill c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T16:28:54, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, name=rhosp17/openstack-neutron-dhcp-agent, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 13 14:16:13 standalone.localdomain dnsmasq-dhcp[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/opts
Oct 13 14:16:13 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:12.523] neutron neutron/standalone.internalapi.localdomain 0/0/0/584/584 201 1263 - - ---- 73/3/0/0/0 0/0 "POST /v2.0/ports HTTP/1.1"
Oct 13 14:16:13 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:13.147] neutron neutron/standalone.internalapi.localdomain 0/0/0/8/8 200 16918 - - ---- 73/3/0/0/0 0/0 "GET /v2.0/extensions HTTP/1.1"
Oct 13 14:16:13 standalone.localdomain haproxy[70940]: 172.17.0.2:43948 [13/Oct/2025:14:16:13.159] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/281/281 201 8104 - - ---- 73/9/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:13 standalone.localdomain haproxy[70940]: 172.17.0.2:51164 [13/Oct/2025:14:16:13.451] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/34/34 200 8099 - - ---- 73/9/0/0/0 0/0 "GET /v3/auth/tokens?allow_expired=1 HTTP/1.1"
Oct 13 14:16:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1239: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:16:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e53 do_prune osdmap full prune enabled
Oct 13 14:16:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e54 e54: 1 total, 1 up, 1 in
Oct 13 14:16:13 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e54: 1 total, 1 up, 1 in
Oct 13 14:16:14 standalone.localdomain dnsmasq[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/addn_hosts - 2 addresses
Oct 13 14:16:14 standalone.localdomain podman[182686]: 2025-10-13 14:16:14.121609758 +0000 UTC m=+0.050811915 container kill c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-dhcp-agent-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:16:14 standalone.localdomain dnsmasq-dhcp[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/host
Oct 13 14:16:14 standalone.localdomain dnsmasq-dhcp[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/opts
Oct 13 14:16:14 standalone.localdomain systemd[1]: tmp-crun.2E2p2b.mount: Deactivated successfully.
Oct 13 14:16:14 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:13.444] neutron neutron/standalone.internalapi.localdomain 0/0/0/918/918 200 1327 - - ---- 73/3/0/0/0 0/0 "PUT /v2.0/ports/8a49767c-fb09-4185-95da-4261d8043fad HTTP/1.1"
Oct 13 14:16:14 standalone.localdomain haproxy[70940]: 172.17.0.2:43950 [13/Oct/2025:14:16:14.366] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 74/10/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:14 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:14.381] neutron neutron/standalone.internalapi.localdomain 0/0/0/74/74 200 1330 - - ---- 74/3/0/0/0 0/0 "GET /v2.0/ports?tenant_id=e44641a80bcb466cb3dd688e48b72d8e&device_id=54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:14 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:14.462] neutron neutron/standalone.internalapi.localdomain 0/0/0/40/40 200 192 - - ---- 74/3/0/0/0 0/0 "GET /v2.0/floatingips?fixed_ip_address=192.168.0.238&port_id=8a49767c-fb09-4185-95da-4261d8043fad HTTP/1.1"
Oct 13 14:16:14 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:14.505] neutron neutron/standalone.internalapi.localdomain 0/0/0/47/47 200 810 - - ---- 74/3/0/0/0 0/0 "GET /v2.0/subnets?id=e1cc60b1-626d-46b0-8458-47640298fa14 HTTP/1.1"
Oct 13 14:16:14 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:14.555] neutron neutron/standalone.internalapi.localdomain 0/0/0/57/57 200 1352 - - ---- 74/3/0/0/0 0/0 "GET /v2.0/ports?network_id=0c455abd-28d4-47e7-a254-e50de0526def&device_owner=network%3Adhcp HTTP/1.1"
Oct 13 14:16:14 standalone.localdomain haproxy[70940]: 172.17.0.2:43950 [13/Oct/2025:14:16:14.374] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/293/293 201 8164 - - ---- 74/10/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:14 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:14.615] neutron neutron/standalone.internalapi.localdomain 0/0/0/62/62 200 187 - - ---- 74/3/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=segments HTTP/1.1"
Oct 13 14:16:14 standalone.localdomain haproxy[70940]: 172.17.0.2:49732 [13/Oct/2025:14:16:14.685] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/25/25 200 8159 - - ---- 75/10/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:14 standalone.localdomain haproxy[70940]: 172.17.0.2:43562 [13/Oct/2025:14:16:14.681] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/61/61 200 542 - - ---- 75/2/0/0/0 0/0 "POST /v2.1/os-server-external-events HTTP/1.1"
Oct 13 14:16:14 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:14.680] neutron neutron/standalone.internalapi.localdomain 0/0/0/68/68 200 252 - - ---- 75/3/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=provider%3Aphysical_network&fields=provider%3Anetwork_type HTTP/1.1"
Oct 13 14:16:14 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:14.774] neutron neutron/standalone.internalapi.localdomain 0/0/0/48/48 200 1330 - - ---- 75/3/0/0/0 0/0 "GET /v2.0/ports?tenant_id=e44641a80bcb466cb3dd688e48b72d8e&device_id=54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:14 standalone.localdomain ceph-mon[29756]: pgmap v1239: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail
Oct 13 14:16:14 standalone.localdomain ceph-mon[29756]: osdmap e54: 1 total, 1 up, 1 in
Oct 13 14:16:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e54 do_prune osdmap full prune enabled
Oct 13 14:16:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e55 e55: 1 total, 1 up, 1 in
Oct 13 14:16:14 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e55: 1 total, 1 up, 1 in
Oct 13 14:16:14 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:14.826] neutron neutron/standalone.internalapi.localdomain 0/0/0/66/66 200 901 - - ---- 75/3/0/0/0 0/0 "GET /v2.0/networks?id=0c455abd-28d4-47e7-a254-e50de0526def HTTP/1.1"
Oct 13 14:16:14 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:14.895] neutron neutron/standalone.internalapi.localdomain 0/0/0/40/40 200 192 - - ---- 75/3/0/0/0 0/0 "GET /v2.0/floatingips?fixed_ip_address=192.168.0.238&port_id=8a49767c-fb09-4185-95da-4261d8043fad HTTP/1.1"
Oct 13 14:16:14 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:14.939] neutron neutron/standalone.internalapi.localdomain 0/0/0/44/44 200 810 - - ---- 75/3/0/0/0 0/0 "GET /v2.0/subnets?id=e1cc60b1-626d-46b0-8458-47640298fa14 HTTP/1.1"
Oct 13 14:16:15 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:14.989] neutron neutron/standalone.internalapi.localdomain 0/0/0/65/65 200 1352 - - ---- 75/3/0/0/0 0/0 "GET /v2.0/ports?network_id=0c455abd-28d4-47e7-a254-e50de0526def&device_owner=network%3Adhcp HTTP/1.1"
Oct 13 14:16:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:16:15 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:15.058] neutron neutron/standalone.internalapi.localdomain 0/0/0/136/136 200 187 - - ---- 75/3/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=segments HTTP/1.1"
Oct 13 14:16:15 standalone.localdomain haproxy[70940]: 172.17.0.2:47258 [13/Oct/2025:14:16:15.216] neutron neutron/standalone.internalapi.localdomain 0/0/0/76/76 200 252 - - ---- 75/3/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=provider%3Aphysical_network&fields=provider%3Anetwork_type HTTP/1.1"
Oct 13 14:16:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:16:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1533876605' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:16:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:16:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:16:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:16:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:16:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:16:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:16:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1242: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 2.6 MiB/s rd, 255 B/s wr, 10 op/s
Oct 13 14:16:15 standalone.localdomain ceph-mon[29756]: osdmap e55: 1 total, 1 up, 1 in
Oct 13 14:16:15 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1533876605' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:16:15 standalone.localdomain podman[182903]: 2025-10-13 14:16:15.881954044 +0000 UTC m=+0.111090299 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T16:03:34, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-sriov-agent, release=1)
Oct 13 14:16:15 standalone.localdomain podman[182904]: 2025-10-13 14:16:15.924607777 +0000 UTC m=+0.070803980 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, com.redhat.component=openstack-barbican-worker-container, io.openshift.expose-services=, name=rhosp17/openstack-barbican-worker, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9, build-date=2025-07-21T15:36:22, container_name=barbican_worker, managed_by=tripleo_ansible, 
tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, distribution-scope=public)
Oct 13 14:16:15 standalone.localdomain podman[182904]: 2025-10-13 14:16:15.969748686 +0000 UTC m=+0.115944869 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=barbican_worker, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-worker, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, batch=17.1_20250721.1, com.redhat.component=openstack-barbican-worker-container, config_id=tripleo_step3, distribution-scope=public, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, vendor=Red Hat, Inc., build-date=2025-07-21T15:36:22, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, name=rhosp17/openstack-barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:16:15 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:16:16 standalone.localdomain podman[182903]: 2025-10-13 14:16:16.00726494 +0000 UTC m=+0.236401175 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T16:03:34, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vcs-type=git, release=1, 
maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-sriov-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:16:16 standalone.localdomain podman[182900]: 2025-10-13 14:16:16.014685829 +0000 UTC m=+0.274721745 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, batch=17.1_20250721.1, build-date=2025-07-21T16:28:54, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=neutron_dhcp, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true)
Oct 13 14:16:16 standalone.localdomain haproxy[70940]: 172.17.0.2:47242 [13/Oct/2025:14:16:15.973] neutron neutron/standalone.internalapi.localdomain 0/0/0/71/71 200 1330 - - ---- 75/3/0/0/0 0/0 "GET /v2.0/ports?device_id=54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:16 standalone.localdomain podman[182900]: 2025-10-13 14:16:16.052724739 +0000 UTC m=+0.312760655 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-dhcp-agent, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, build-date=2025-07-21T16:28:54, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, com.redhat.component=openstack-neutron-dhcp-agent-container, config_id=tripleo_step4, container_name=neutron_dhcp, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=)
Oct 13 14:16:16 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:16:16 standalone.localdomain podman[182902]: 2025-10-13 14:16:16.072685663 +0000 UTC m=+0.330621595 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, batch=17.1_20250721.1, config_id=tripleo_step3, vcs-type=git, container_name=barbican_keystone_listener, vendor=Red Hat, Inc., 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T16:18:19, version=17.1.9, com.redhat.component=openstack-barbican-keystone-listener-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:16:16 standalone.localdomain podman[182914]: 2025-10-13 14:16:16.088017085 +0000 UTC m=+0.232392142 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vcs-type=git, name=rhosp17/openstack-nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-api-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, build-date=2025-07-21T16:05:11, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-api, release=1, batch=17.1_20250721.1, container_name=nova_api_cron, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public)
Oct 13 14:16:16 standalone.localdomain haproxy[70940]: 172.17.0.2:47242 [13/Oct/2025:14:16:16.048] neutron neutron/standalone.internalapi.localdomain 0/0/0/41/41 200 261 - - ---- 75/3/0/0/0 0/0 "GET /v2.0/security-groups?id=abececb5-f6f2-4dbd-993f-ddc54effe614&fields=id&fields=name HTTP/1.1"
Oct 13 14:16:16 standalone.localdomain podman[182901]: 2025-10-13 14:16:15.991611019 +0000 UTC m=+0.249319403 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, release=1, build-date=2025-07-21T15:22:44, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-api-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, tcib_managed=true, container_name=barbican_api, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1)
Oct 13 14:16:16 standalone.localdomain podman[182902]: 2025-10-13 14:16:16.098717834 +0000 UTC m=+0.356653776 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-barbican-keystone-listener-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, name=rhosp17/openstack-barbican-keystone-listener, container_name=barbican_keystone_listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, build-date=2025-07-21T16:18:19, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1)
Oct 13 14:16:16 standalone.localdomain haproxy[70940]: 172.21.0.2:39492 [13/Oct/2025:14:16:15.927] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/175/175 200 1855 - - ---- 75/2/0/0/0 0/0 "GET /v2.1/servers/54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:16 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:16:16 standalone.localdomain podman[182901]: 2025-10-13 14:16:16.124741105 +0000 UTC m=+0.382449469 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T15:22:44, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, container_name=barbican_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
barbican-api, name=rhosp17/openstack-barbican-api, release=1, io.openshift.expose-services=, config_id=tripleo_step3, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-barbican-api-container, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:16:16 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:16:16 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:16:16 standalone.localdomain podman[182914]: 2025-10-13 14:16:16.17527128 +0000 UTC m=+0.319646357 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_api_cron, tcib_managed=true, com.redhat.component=openstack-nova-api-container, distribution-scope=public, version=17.1.9, build-date=2025-07-21T16:05:11, io.openshift.expose-services=, name=rhosp17/openstack-nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, 
io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, release=1)
Oct 13 14:16:16 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:16:16 standalone.localdomain haproxy[70940]: 172.17.0.2:46138 [13/Oct/2025:14:16:16.202] placement placement/standalone.internalapi.localdomain 0/0/0/25/25 200 397 - - ---- 75/3/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/allocations HTTP/1.1"
Oct 13 14:16:16 standalone.localdomain runuser[183118]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:16:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:16:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1747321257' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:16:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 13 14:16:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/690031091' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 14:16:16 standalone.localdomain ceph-mon[29756]: pgmap v1242: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 2.6 MiB/s rd, 255 B/s wr, 10 op/s
Oct 13 14:16:16 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1747321257' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:16:16 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/690031091' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 14:16:16 standalone.localdomain systemd[1]: tmp-crun.ta9HGh.mount: Deactivated successfully.
Oct 13 14:16:17 standalone.localdomain runuser[183118]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:16:17 standalone.localdomain runuser[183222]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:16:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 13 14:16:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1637676307' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 14:16:17 standalone.localdomain sudo[183269]:     nova : PWD=/ ; USER=root ; COMMAND=/usr/bin/nova-rootwrap /etc/nova/rootwrap.conf privsep-helper --config-file /usr/share/nova/nova-dist.conf --config-file /etc/nova/nova.conf --privsep_context vif_plug_ovs.privsep.vif_plug --privsep_sock_path /tmp/tmpjgba0caa/privsep.sock
Oct 13 14:16:17 standalone.localdomain systemd[1]: Started Session 25 of User root.
Oct 13 14:16:17 standalone.localdomain sudo[183269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42436)
Oct 13 14:16:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1243: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 2.6 MiB/s rd, 255 B/s wr, 10 op/s
Oct 13 14:16:17 standalone.localdomain runuser[183222]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:16:17 standalone.localdomain runuser[183289]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:16:17 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1637676307' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 14:16:18 standalone.localdomain sudo[183269]: pam_unix(sudo:session): session closed for user root
Oct 13 14:16:18 standalone.localdomain kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 13 14:16:18 standalone.localdomain kernel: device tap8a49767c-fb entered promiscuous mode
Oct 13 14:16:18 standalone.localdomain NetworkManager[5962]: <info>  [1760364978.5422] manager: (tap8a49767c-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/15)
Oct 13 14:16:18 standalone.localdomain systemd-udevd[183369]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 14:16:18 standalone.localdomain NetworkManager[5962]: <info>  [1760364978.5631] device (tap8a49767c-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Oct 13 14:16:18 standalone.localdomain NetworkManager[5962]: <info>  [1760364978.5637] device (tap8a49767c-fb): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Oct 13 14:16:18 standalone.localdomain systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 13 14:16:18 standalone.localdomain systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 13 14:16:18 standalone.localdomain systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 13 14:16:18 standalone.localdomain systemd-machined[183383]: New machine qemu-1-instance-00000001.
Oct 13 14:16:18 standalone.localdomain systemd-udevd[183366]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 14:16:18 standalone.localdomain NetworkManager[5962]: <info>  [1760364978.6449] manager: (tap0c455abd-20): new Veth device (/org/freedesktop/NetworkManager/Devices/16)
Oct 13 14:16:18 standalone.localdomain systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct 13 14:16:18 standalone.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap0c455abd-21: link becomes ready
Oct 13 14:16:18 standalone.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap0c455abd-20: link becomes ready
Oct 13 14:16:18 standalone.localdomain NetworkManager[5962]: <info>  [1760364978.6849] device (tap0c455abd-20): carrier: link connected
Oct 13 14:16:18 standalone.localdomain runuser[183289]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:16:18 standalone.localdomain haproxy[70940]: 172.17.0.2:43562 [13/Oct/2025:14:16:18.730] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/22/22 200 546 - - ---- 75/2/0/0/0 0/0 "POST /v2.1/os-server-external-events HTTP/1.1"
Oct 13 14:16:18 standalone.localdomain kernel: device tap0c455abd-20 entered promiscuous mode
Oct 13 14:16:18 standalone.localdomain ceph-mon[29756]: pgmap v1243: 177 pgs: 177 active+clean; 52 MiB data, 83 MiB used, 6.9 GiB / 7.0 GiB avail; 2.6 MiB/s rd, 255 B/s wr, 10 op/s
Oct 13 14:16:19 standalone.localdomain kernel: IPv6: tap0c455abd-21: IPv6 duplicate address fe80::a9fe:a9fe used by fa:16:3e:c5:d6:aa detected!
Oct 13 14:16:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1244: 177 pgs: 177 active+clean; 101 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 76 op/s
Oct 13 14:16:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:16:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e55 do_prune osdmap full prune enabled
Oct 13 14:16:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e56 e56: 1 total, 1 up, 1 in
Oct 13 14:16:20 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e56: 1 total, 1 up, 1 in
Oct 13 14:16:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:16:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:16:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:16:20 standalone.localdomain systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 13 14:16:20 standalone.localdomain podman[183601]: 2025-10-13 14:16:20.378558627 +0000 UTC m=+0.140562897 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, distribution-scope=public, release=1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, version=17.1.9)
Oct 13 14:16:20 standalone.localdomain podman[183602]: 2025-10-13 14:16:20.342768296 +0000 UTC m=+0.106683494 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=ovn_cluster_northd, name=rhosp17/openstack-ovn-northd, build-date=2025-07-21T13:30:04, distribution-scope=public, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, container_name=ovn_cluster_northd, 
com.redhat.component=openstack-ovn-northd-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:16:20 standalone.localdomain podman[183601]: 2025-10-13 14:16:20.423000444 +0000 UTC m=+0.185004714 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64)
Oct 13 14:16:20 standalone.localdomain podman[183602]: 2025-10-13 14:16:20.425934904 +0000 UTC m=+0.189850152 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, container_name=ovn_cluster_northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, release=1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.component=openstack-ovn-northd-container, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-ovn-northd, build-date=2025-07-21T13:30:04, config_id=ovn_cluster_northd, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-northd, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1)
Oct 13 14:16:20 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:16:20 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:16:20 standalone.localdomain podman[183603]: 2025-10-13 14:16:20.568677307 +0000 UTC m=+0.322836905 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, container_name=clustercheck, vcs-type=git, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-mariadb, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 mariadb, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, config_id=tripleo_step2, io.openshift.expose-services=, release=1, tcib_managed=true)
Oct 13 14:16:20 standalone.localdomain podman[183603]: 2025-10-13 14:16:20.629296272 +0000 UTC m=+0.383455910 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, container_name=clustercheck, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, name=rhosp17/openstack-mariadb, 
release=1, vendor=Red Hat, Inc., vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:16:20 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:16:20 standalone.localdomain systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 13 14:16:20 standalone.localdomain haproxy[70940]: 172.17.0.2:43562 [13/Oct/2025:14:16:20.800] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/34/34 200 546 - - ---- 68/2/0/0/0 0/0 "POST /v2.1/os-server-external-events HTTP/1.1"
Oct 13 14:16:20 standalone.localdomain ceph-mon[29756]: pgmap v1244: 177 pgs: 177 active+clean; 101 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 76 op/s
Oct 13 14:16:20 standalone.localdomain ceph-mon[29756]: osdmap e56: 1 total, 1 up, 1 in
Oct 13 14:16:20 standalone.localdomain systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 13 14:16:20 standalone.localdomain systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 13 14:16:21 standalone.localdomain haproxy[70940]: 172.17.0.2:47242 [13/Oct/2025:14:16:21.189] neutron neutron/standalone.internalapi.localdomain 0/0/0/60/60 200 1332 - - ---- 65/2/0/0/0 0/0 "GET /v2.0/ports?device_id=54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:21 standalone.localdomain haproxy[70940]: 172.17.0.2:47242 [13/Oct/2025:14:16:21.254] neutron neutron/standalone.internalapi.localdomain 0/0/0/45/45 200 261 - - ---- 65/2/0/0/0 0/0 "GET /v2.0/security-groups?id=abececb5-f6f2-4dbd-993f-ddc54effe614&fields=id&fields=name HTTP/1.1"
Oct 13 14:16:21 standalone.localdomain haproxy[70940]: 172.21.0.2:39492 [13/Oct/2025:14:16:21.113] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/198/198 200 2000 - - ---- 65/2/0/0/0 0/0 "GET /v2.1/servers/54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:21 standalone.localdomain systemd[1]: tmp-crun.eAXogf.mount: Deactivated successfully.
Oct 13 14:16:21 standalone.localdomain haproxy[70940]: 172.17.0.2:47242 [13/Oct/2025:14:16:21.391] neutron neutron/standalone.internalapi.localdomain 0/0/0/55/55 200 1332 - - ---- 65/2/0/0/0 0/0 "GET /v2.0/ports?device_id=54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:21 standalone.localdomain haproxy[70940]: 172.17.0.2:47242 [13/Oct/2025:14:16:21.451] neutron neutron/standalone.internalapi.localdomain 0/0/0/47/47 200 261 - - ---- 65/2/0/0/0 0/0 "GET /v2.0/security-groups?id=abececb5-f6f2-4dbd-993f-ddc54effe614&fields=id&fields=name HTTP/1.1"
Oct 13 14:16:21 standalone.localdomain haproxy[70940]: 172.21.0.2:39492 [13/Oct/2025:14:16:21.317] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/192/192 200 2000 - - ---- 65/2/0/0/0 0/0 "GET /v2.1/servers/54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:21 standalone.localdomain haproxy[70940]: 172.21.0.2:32772 [13/Oct/2025:14:16:21.517] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/23/23 200 1117 - - ---- 66/1/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:16:21 standalone.localdomain haproxy[70940]: 172.21.0.2:39492 [13/Oct/2025:14:16:21.544] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/19/19 200 823 - - ---- 66/2/0/0/0 0/0 "GET /v2.1/flavors/993bc811-5d85-498c-8b2f-935e295a6567 HTTP/1.1"
Oct 13 14:16:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1246: 177 pgs: 177 active+clean; 101 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 77 op/s
Oct 13 14:16:21 standalone.localdomain sshd[181877]: Received disconnect from 192.168.122.11 port 43606:11: disconnected by user
Oct 13 14:16:21 standalone.localdomain sshd[181877]: Disconnected from user root 192.168.122.11 port 43606
Oct 13 14:16:21 standalone.localdomain sshd[181874]: pam_unix(sshd:session): session closed for user root
Oct 13 14:16:21 standalone.localdomain systemd[1]: session-66.scope: Deactivated successfully.
Oct 13 14:16:21 standalone.localdomain systemd[1]: session-66.scope: Consumed 2.154s CPU time.
Oct 13 14:16:21 standalone.localdomain systemd-logind[45629]: Session 66 logged out. Waiting for processes to exit.
Oct 13 14:16:21 standalone.localdomain systemd-logind[45629]: Removed session 66.
Oct 13 14:16:21 standalone.localdomain sshd[183845]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:21 standalone.localdomain ceph-mon[29756]: pgmap v1246: 177 pgs: 177 active+clean; 101 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 77 op/s
Oct 13 14:16:22 standalone.localdomain sshd[183845]: Accepted publickey for root from 192.168.122.11 port 33026 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:16:22 standalone.localdomain systemd-logind[45629]: New session 67 of user root.
Oct 13 14:16:22 standalone.localdomain systemd[1]: Started Session 67 of User root.
Oct 13 14:16:22 standalone.localdomain sshd[183845]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:22 standalone.localdomain setroubleshoot[183607]: SELinux is preventing /usr/libexec/qemu-kvm from read access on the file max_map_count. For complete SELinux messages run: sealert -l 7501d5cd-9d8a-4973-bb60-e83685a385f7
Oct 13 14:16:22 standalone.localdomain setroubleshoot[183607]: SELinux is preventing /usr/libexec/qemu-kvm from read access on the file max_map_count.
                                                               
                                                               *****  Plugin qemu_file_image (98.8 confidence) suggests   *******************
                                                               
                                                               If max_map_count is a virtualization target
                                                               Then you need to change the label on max_map_count'
                                                               Do
                                                               # semanage fcontext -a -t virt_image_t 'max_map_count'
                                                               # restorecon -v 'max_map_count'
                                                               
                                                               *****  Plugin catchall (2.13 confidence) suggests   **************************
                                                               
                                                               If you believe that qemu-kvm should be allowed read access on the max_map_count file by default.
                                                               Then you should report this as a bug.
                                                               You can generate a local policy module to allow this access.
                                                               Do
                                                               allow this access for now by executing:
                                                               # ausearch -c 'qemu-kvm' --raw | audit2allow -M my-qemukvm
                                                               # semodule -X 300 -i my-qemukvm.pp
                                                               
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:16:23
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'images', 'backups', 'manila_data', 'manila_metadata', '.mgr', 'volumes']
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:16:23 standalone.localdomain haproxy[70940]: 172.21.0.2:36226 [13/Oct/2025:14:16:23.722] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/9/9 300 515 - - ---- 61/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1247: 177 pgs: 177 active+clean; 101 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 41 KiB/s rd, 2.4 MiB/s wr, 59 op/s
Oct 13 14:16:24 standalone.localdomain haproxy[70940]: 172.21.0.2:36226 [13/Oct/2025:14:16:23.733] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/341/341 201 8100 - - ---- 61/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:24 standalone.localdomain haproxy[70940]: 172.17.0.2:50712 [13/Oct/2025:14:16:24.125] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/49/49 200 8095 - - ---- 63/4/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:24 standalone.localdomain haproxy[70940]: 172.21.0.2:37996 [13/Oct/2025:14:16:24.121] neutron neutron/standalone.internalapi.localdomain 0/0/0/93/93 404 303 - - ---- 63/3/0/0/0 0/0 "GET /v2.0/floatingips/192.168.122.20 HTTP/1.1"
Oct 13 14:16:24 standalone.localdomain haproxy[70940]: 172.21.0.2:37996 [13/Oct/2025:14:16:24.216] neutron neutron/standalone.internalapi.localdomain 0/0/0/40/40 200 754 - - ---- 63/3/0/0/0 0/0 "GET /v2.0/floatingips HTTP/1.1"
Oct 13 14:16:24 standalone.localdomain haproxy[70940]: 172.21.0.2:60034 [13/Oct/2025:14:16:24.261] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/10/10 404 463 - - ---- 64/2/0/0/0 0/0 "GET /v2.1/servers/test HTTP/1.1"
Oct 13 14:16:24 standalone.localdomain haproxy[70940]: 172.21.0.2:60034 [13/Oct/2025:14:16:24.276] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/128/128 200 652 - - ---- 64/2/0/0/0 0/0 "GET /v2.1/servers?name=test HTTP/1.1"
Oct 13 14:16:24 standalone.localdomain haproxy[70940]: 172.17.0.2:47242 [13/Oct/2025:14:16:24.448] neutron neutron/standalone.internalapi.localdomain 0/0/0/53/53 200 1332 - - ---- 64/3/0/0/0 0/0 "GET /v2.0/ports?device_id=54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:24 standalone.localdomain haproxy[70940]: 172.17.0.2:47242 [13/Oct/2025:14:16:24.505] neutron neutron/standalone.internalapi.localdomain 0/0/0/45/45 200 261 - - ---- 64/3/0/0/0 0/0 "GET /v2.0/security-groups?id=abececb5-f6f2-4dbd-993f-ddc54effe614&fields=id&fields=name HTTP/1.1"
Oct 13 14:16:24 standalone.localdomain haproxy[70940]: 172.21.0.2:60034 [13/Oct/2025:14:16:24.408] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/151/151 200 2000 - - ---- 64/2/0/0/0 0/0 "GET /v2.1/servers/54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:24 standalone.localdomain haproxy[70940]: 172.21.0.2:37996 [13/Oct/2025:14:16:24.563] neutron neutron/standalone.internalapi.localdomain 0/0/0/129/129 200 1332 - - ---- 63/3/0/0/0 0/0 "GET /v2.0/ports?device_id=54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:24 standalone.localdomain ceph-mon[29756]: pgmap v1247: 177 pgs: 177 active+clean; 101 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 41 KiB/s rd, 2.4 MiB/s wr, 59 op/s
Oct 13 14:16:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:16:25 standalone.localdomain haproxy[70940]: 172.21.0.2:37996 [13/Oct/2025:14:16:24.697] neutron neutron/standalone.internalapi.localdomain 0/0/0/954/954 200 1057 - - ---- 61/2/0/0/0 0/0 "PUT /v2.0/floatingips/e9cf0655-ef05-453c-845c-882a11b9270c HTTP/1.1"
Oct 13 14:16:25 standalone.localdomain haproxy[70940]: 172.17.0.2:43562 [13/Oct/2025:14:16:25.685] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/23/23 200 542 - - ---- 61/2/0/0/0 0/0 "POST /v2.1/os-server-external-events HTTP/1.1"
Oct 13 14:16:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:16:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:16:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:16:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1248: 177 pgs: 177 active+clean; 101 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 130 op/s
Oct 13 14:16:25 standalone.localdomain podman[183910]: 2025-10-13 14:16:25.810615954 +0000 UTC m=+0.072291705 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, container_name=iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, vcs-type=git, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.33.12)
Oct 13 14:16:25 standalone.localdomain podman[183910]: 2025-10-13 14:16:25.843982841 +0000 UTC m=+0.105658582 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, release=1, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.33.12, container_name=iscsid)
Oct 13 14:16:25 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:16:25 standalone.localdomain podman[183911]: 2025-10-13 14:16:25.861388506 +0000 UTC m=+0.122410677 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, build-date=2025-07-21T15:58:55, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-api, batch=17.1_20250721.1, container_name=cinder_api_cron, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1, 
distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1)
Oct 13 14:16:25 standalone.localdomain haproxy[70940]: 172.17.0.2:51376 [13/Oct/2025:14:16:25.727] neutron neutron/standalone.internalapi.localdomain 0/0/0/156/156 200 1332 - - ---- 59/2/0/0/0 0/0 "GET /v2.0/ports?tenant_id=e44641a80bcb466cb3dd688e48b72d8e&device_id=54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:16:25 standalone.localdomain podman[183911]: 2025-10-13 14:16:25.894893447 +0000 UTC m=+0.155915618 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, build-date=2025-07-21T15:58:55, com.redhat.component=openstack-cinder-api-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, container_name=cinder_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, version=17.1.9)
Oct 13 14:16:25 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:16:25 standalone.localdomain podman[183912]: 2025-10-13 14:16:25.913559671 +0000 UTC m=+0.175194051 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, container_name=cinder_scheduler, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-cinder-scheduler-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, 
architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, name=rhosp17/openstack-cinder-scheduler, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T16:10:12, io.openshift.expose-services=)
Oct 13 14:16:25 standalone.localdomain podman[183912]: 2025-10-13 14:16:25.933811655 +0000 UTC m=+0.195446075 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, build-date=2025-07-21T16:10:12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cinder-scheduler, release=1, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, 
com.redhat.component=openstack-cinder-scheduler-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=cinder_scheduler, io.buildah.version=1.33.12, managed_by=tripleo_ansible)
Oct 13 14:16:25 standalone.localdomain sshd[183848]: Received disconnect from 192.168.122.11 port 33026:11: disconnected by user
Oct 13 14:16:25 standalone.localdomain sshd[183848]: Disconnected from user root 192.168.122.11 port 33026
Oct 13 14:16:25 standalone.localdomain sshd[183845]: pam_unix(sshd:session): session closed for user root
Oct 13 14:16:25 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:16:25 standalone.localdomain systemd[1]: session-67.scope: Deactivated successfully.
Oct 13 14:16:25 standalone.localdomain systemd[1]: session-67.scope: Consumed 1.986s CPU time.
Oct 13 14:16:25 standalone.localdomain systemd-logind[45629]: Session 67 logged out. Waiting for processes to exit.
Oct 13 14:16:25 standalone.localdomain systemd-logind[45629]: Removed session 67.
Oct 13 14:16:25 standalone.localdomain sshd[183969]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:26 standalone.localdomain account-server[114571]: 172.20.0.100 - - [13/Oct/2025:14:16:26 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx345fa197191f42f29ad61-0068ed09ba" "proxy-server 2" 0.0006 "-" 23 -
Oct 13 14:16:26 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx345fa197191f42f29ad61-0068ed09ba)
Oct 13 14:16:26 standalone.localdomain sshd[183969]: Accepted publickey for root from 192.168.122.11 port 33038 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:16:26 standalone.localdomain haproxy[70940]: 172.17.0.2:51376 [13/Oct/2025:14:16:25.887] neutron neutron/standalone.internalapi.localdomain 0/0/0/195/195 200 901 - - ---- 59/2/0/0/0 0/0 "GET /v2.0/networks?id=0c455abd-28d4-47e7-a254-e50de0526def HTTP/1.1"
Oct 13 14:16:26 standalone.localdomain systemd-logind[45629]: New session 68 of user root.
Oct 13 14:16:26 standalone.localdomain systemd[1]: Started Session 68 of User root.
Oct 13 14:16:26 standalone.localdomain sshd[183969]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:26 standalone.localdomain ceph-mon[29756]: pgmap v1248: 177 pgs: 177 active+clean; 101 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 130 op/s
Oct 13 14:16:26 standalone.localdomain haproxy[70940]: 172.17.0.2:51376 [13/Oct/2025:14:16:26.093] neutron neutron/standalone.internalapi.localdomain 0/0/0/164/164 200 1062 - - ---- 58/2/0/0/0 0/0 "GET /v2.0/floatingips?fixed_ip_address=192.168.0.238&port_id=8a49767c-fb09-4185-95da-4261d8043fad HTTP/1.1"
Oct 13 14:16:26 standalone.localdomain haproxy[70940]: 172.17.0.2:51376 [13/Oct/2025:14:16:26.264] neutron neutron/standalone.internalapi.localdomain 0/0/0/63/63 200 810 - - ---- 58/2/0/0/0 0/0 "GET /v2.0/subnets?id=e1cc60b1-626d-46b0-8458-47640298fa14 HTTP/1.1"
Oct 13 14:16:26 standalone.localdomain haproxy[70940]: 172.17.0.2:51376 [13/Oct/2025:14:16:26.330] neutron neutron/standalone.internalapi.localdomain 0/0/0/92/92 200 1352 - - ---- 58/2/0/0/0 0/0 "GET /v2.0/ports?network_id=0c455abd-28d4-47e7-a254-e50de0526def&device_owner=network%3Adhcp HTTP/1.1"
Oct 13 14:16:26 standalone.localdomain haproxy[70940]: 172.17.0.2:51376 [13/Oct/2025:14:16:26.426] neutron neutron/standalone.internalapi.localdomain 0/0/0/256/256 200 187 - - ---- 58/2/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=segments HTTP/1.1"
Oct 13 14:16:26 standalone.localdomain haproxy[70940]: 172.17.0.2:51376 [13/Oct/2025:14:16:26.686] neutron neutron/standalone.internalapi.localdomain 0/0/0/112/112 200 252 - - ---- 58/2/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=provider%3Aphysical_network&fields=provider%3Anetwork_type HTTP/1.1"
Oct 13 14:16:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1249: 177 pgs: 177 active+clean; 101 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 130 op/s
Oct 13 14:16:27 standalone.localdomain haproxy[70940]: 172.21.0.2:36236 [13/Oct/2025:14:16:27.888] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 59/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:28 standalone.localdomain haproxy[70940]: 172.21.0.2:36236 [13/Oct/2025:14:16:27.894] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/283/283 201 8100 - - ---- 59/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:28 standalone.localdomain haproxy[70940]: 172.17.0.2:50712 [13/Oct/2025:14:16:28.207] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/19/19 200 8095 - - ---- 60/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:28 standalone.localdomain haproxy[70940]: 172.21.0.2:38000 [13/Oct/2025:14:16:28.202] neutron neutron/standalone.internalapi.localdomain 0/0/0/58/58 200 201 - - ---- 60/3/0/0/0 0/0 "GET /v2.0/security-group-rules?direction=ingress&protocol=icmp HTTP/1.1"
Oct 13 14:16:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:16:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:16:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:16:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:16:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0030098408710217757 of space, bias 1.0, pg target 0.30098408710217756 quantized to 32 (current 32)
Oct 13 14:16:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:16:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:16:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:16:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:16:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:16:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:16:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:16:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:16:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:16:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:16:28 standalone.localdomain sshd[183972]: Received disconnect from 192.168.122.11 port 33038:11: disconnected by user
Oct 13 14:16:28 standalone.localdomain sshd[183972]: Disconnected from user root 192.168.122.11 port 33038
Oct 13 14:16:28 standalone.localdomain sshd[183969]: pam_unix(sshd:session): session closed for user root
Oct 13 14:16:28 standalone.localdomain systemd[1]: session-68.scope: Deactivated successfully.
Oct 13 14:16:28 standalone.localdomain systemd[1]: session-68.scope: Consumed 2.028s CPU time.
Oct 13 14:16:28 standalone.localdomain systemd-logind[45629]: Session 68 logged out. Waiting for processes to exit.
Oct 13 14:16:28 standalone.localdomain systemd-logind[45629]: Removed session 68.
Oct 13 14:16:28 standalone.localdomain sshd[184014]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:28 standalone.localdomain sshd[184014]: Accepted publickey for root from 192.168.122.11 port 33050 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:16:28 standalone.localdomain systemd-logind[45629]: New session 69 of user root.
Oct 13 14:16:28 standalone.localdomain systemd[1]: Started Session 69 of User root.
Oct 13 14:16:28 standalone.localdomain sshd[184014]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:28 standalone.localdomain ceph-mon[29756]: pgmap v1249: 177 pgs: 177 active+clean; 101 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 130 op/s
Oct 13 14:16:29 standalone.localdomain runuser[184125]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:16:29 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 13 14:16:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1250: 177 pgs: 177 active+clean; 105 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 2.4 MiB/s rd, 503 KiB/s wr, 94 op/s
Oct 13 14:16:29 standalone.localdomain runuser[184125]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:16:30 standalone.localdomain ceph-mon[29756]: pgmap v1250: 177 pgs: 177 active+clean; 105 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 2.4 MiB/s rd, 503 KiB/s wr, 94 op/s
Oct 13 14:16:30 standalone.localdomain runuser[184213]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:16:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:16:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:16:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:16:30 standalone.localdomain podman[184258]: 2025-10-13 14:16:30.302842291 +0000 UTC m=+0.122142198 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, com.redhat.component=openstack-manila-share-container, summary=Red Hat OpenStack Platform 17.1 manila-share, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, release=1, description=Red Hat OpenStack Platform 17.1 manila-share, build-date=2025-07-21T15:22:36, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:16:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:16:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:16:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:16:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:16:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:16:30 standalone.localdomain podman[184258]: 2025-10-13 14:16:30.334146214 +0000 UTC m=+0.153446111 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-share, distribution-scope=public, release=1, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-manila-share, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T15:22:36, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-manila-share-container, description=Red Hat OpenStack Platform 17.1 manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74)
Oct 13 14:16:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:16:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:16:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:16:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:16:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:16:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:16:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:16:30 standalone.localdomain systemd[1]: tmp-crun.HD1VeM.mount: Deactivated successfully.
Oct 13 14:16:30 standalone.localdomain podman[184438]: 2025-10-13 14:16:30.619588518 +0000 UTC m=+0.152117772 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, description=Red Hat OpenStack Platform 17.1 horizon, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step3, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', 
'/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, maintainer=OpenStack TripleO Team, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 horizon, version=17.1.9, com.redhat.component=openstack-horizon-container, build-date=2025-07-21T13:58:15, container_name=horizon, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, architecture=x86_64, io.openshift.expose-services=)
Oct 13 14:16:30 standalone.localdomain podman[184304]: 2025-10-13 14:16:30.524535963 +0000 UTC m=+0.208778325 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, com.redhat.component=openstack-heat-api-cfn-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=heat_api_cfn, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, architecture=x86_64, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, version=17.1.9)
Oct 13 14:16:30 standalone.localdomain podman[184328]: 2025-10-13 14:16:30.579092932 +0000 UTC m=+0.226341816 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-api, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, container_name=manila_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.component=openstack-manila-api-container, name=rhosp17/openstack-manila-api)
Oct 13 14:16:30 standalone.localdomain podman[184334]: 2025-10-13 14:16:30.44447987 +0000 UTC m=+0.089563977 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, 
architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team)
Oct 13 14:16:30 standalone.localdomain podman[184304]: 2025-10-13 14:16:30.655760301 +0000 UTC m=+0.340002663 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=heat_api_cfn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, 
maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T14:49:55, batch=17.1_20250721.1)
Oct 13 14:16:30 standalone.localdomain podman[184328]: 2025-10-13 14:16:30.661238219 +0000 UTC m=+0.308487133 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, tcib_managed=true, build-date=2025-07-21T16:06:43, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 manila-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, container_name=manila_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, com.redhat.component=openstack-manila-api-container, description=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api)
Oct 13 14:16:30 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:16:30 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:16:30 standalone.localdomain podman[184334]: 2025-10-13 14:16:30.679972456 +0000 UTC m=+0.325056583 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron)
Oct 13 14:16:30 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:16:30 standalone.localdomain haproxy[70940]: 172.21.0.2:52646 [13/Oct/2025:14:16:30.700] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 59/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:30 standalone.localdomain podman[184306]: 2025-10-13 14:16:30.66350609 +0000 UTC m=+0.343741569 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, com.redhat.component=openstack-cinder-api-container, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_api, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, version=17.1.9, name=rhosp17/openstack-cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, build-date=2025-07-21T15:58:55, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:16:30 standalone.localdomain podman[184345]: 2025-10-13 14:16:30.722177825 +0000 UTC m=+0.354102478 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, com.redhat.component=openstack-heat-engine-container, summary=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, build-date=2025-07-21T15:44:11, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-heat-engine, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.9)
Oct 13 14:16:30 standalone.localdomain podman[184306]: 2025-10-13 14:16:30.741978424 +0000 UTC m=+0.422213883 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, build-date=2025-07-21T15:58:55, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, release=1, container_name=cinder_api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-api, com.redhat.component=openstack-cinder-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public)
Oct 13 14:16:30 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:16:30 standalone.localdomain podman[184442]: 2025-10-13 14:16:30.766127728 +0000 UTC m=+0.297251019 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T12:58:43, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-memcached, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-type=git, release=1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, version=17.1.9, com.redhat.component=openstack-memcached-container, config_id=tripleo_step1, vendor=Red Hat, Inc.)
Oct 13 14:16:30 standalone.localdomain podman[184330]: 2025-10-13 14:16:30.822998757 +0000 UTC m=+0.470008633 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, 
version=17.1.9, container_name=heat_api_cron, distribution-scope=public, vcs-type=git, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1)
Oct 13 14:16:30 standalone.localdomain podman[184331]: 2025-10-13 14:16:30.830590831 +0000 UTC m=+0.491087683 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:16:30 standalone.localdomain podman[184330]: 2025-10-13 14:16:30.833786309 +0000 UTC m=+0.480796195 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, distribution-scope=public, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, 
io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, build-date=2025-07-21T15:56:26, io.buildah.version=1.33.12, container_name=heat_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:16:30 standalone.localdomain podman[184438]: 2025-10-13 14:16:30.838690589 +0000 UTC m=+0.371219893 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, summary=Red Hat OpenStack Platform 17.1 horizon, version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, release=1, tcib_managed=true, vendor=Red Hat, Inc., 
io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=horizon, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step3, com.redhat.component=openstack-horizon-container, build-date=2025-07-21T13:58:15, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-horizon)
Oct 13 14:16:30 standalone.localdomain podman[184345]: 2025-10-13 14:16:30.851019159 +0000 UTC m=+0.482943822 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, batch=17.1_20250721.1, release=1, distribution-scope=public, name=rhosp17/openstack-heat-engine, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, com.redhat.component=openstack-heat-engine-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T15:44:11, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, container_name=heat_engine)
Oct 13 14:16:30 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:16:30 standalone.localdomain podman[184439]: 2025-10-13 14:16:30.85362291 +0000 UTC m=+0.386188555 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, architecture=x86_64, tcib_managed=true, container_name=neutron_api, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:03, com.redhat.component=openstack-neutron-server-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d)
Oct 13 14:16:30 standalone.localdomain podman[184442]: 2025-10-13 14:16:30.857796798 +0000 UTC m=+0.388920109 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., container_name=memcached, name=rhosp17/openstack-memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, config_id=tripleo_step1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 memcached, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T12:58:43, com.redhat.component=openstack-memcached-container, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.33.12)
Oct 13 14:16:30 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:16:30 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:16:30 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:16:30 standalone.localdomain runuser[184213]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:16:30 standalone.localdomain runuser[184704]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:16:30 standalone.localdomain podman[184434]: 2025-10-13 14:16:30.969587978 +0000 UTC m=+0.502162073 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, com.redhat.component=openstack-manila-scheduler-container, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, name=rhosp17/openstack-manila-scheduler, release=1, managed_by=tripleo_ansible, build-date=2025-07-21T15:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, container_name=manila_scheduler, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:16:30 standalone.localdomain podman[184305]: 2025-10-13 14:16:30.974894911 +0000 UTC m=+0.665764667 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-scheduler-container, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, batch=17.1_20250721.1, container_name=nova_scheduler, tcib_managed=true, version=17.1.9, build-date=2025-07-21T16:02:54, name=rhosp17/openstack-nova-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-scheduler, 
io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, vendor=Red Hat, Inc., vcs-ref=4f7cb55437a2b42333072591935a511528e79935)
Oct 13 14:16:30 standalone.localdomain podman[184340]: 2025-10-13 14:16:30.989553982 +0000 UTC m=+0.636232618 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, 
summary=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, io.openshift.expose-services=, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, name=rhosp17/openstack-keystone, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone, version=17.1.9, io.buildah.version=1.33.12, config_id=tripleo_step3)
Oct 13 14:16:30 standalone.localdomain podman[184331]: 2025-10-13 14:16:30.993459273 +0000 UTC m=+0.653956145 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, io.openshift.expose-services=, 
com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, release=1)
Oct 13 14:16:31 standalone.localdomain podman[184305]: 2025-10-13 14:16:31.00215994 +0000 UTC m=+0.693029706 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-scheduler, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T16:02:54, distribution-scope=public, com.redhat.component=openstack-nova-scheduler-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-scheduler, container_name=nova_scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, config_id=tripleo_step4, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4f7cb55437a2b42333072591935a511528e79935)
Oct 13 14:16:31 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:16:31 standalone.localdomain haproxy[70940]: 172.21.0.2:52646 [13/Oct/2025:14:16:30.706] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/305/305 201 8100 - - ---- 59/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:31 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:16:31 standalone.localdomain podman[184439]: 2025-10-13 14:16:31.028879542 +0000 UTC m=+0.561445187 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, tcib_managed=true, com.redhat.component=openstack-neutron-server-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, build-date=2025-07-21T15:44:03, description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-neutron-server, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, 
vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, container_name=neutron_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d)
Oct 13 14:16:31 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:16:31 standalone.localdomain podman[184434]: 2025-10-13 14:16:31.061179966 +0000 UTC m=+0.593754061 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, com.redhat.component=openstack-manila-scheduler-container, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, batch=17.1_20250721.1, container_name=manila_scheduler, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:28, io.openshift.expose-services=, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-scheduler, name=rhosp17/openstack-manila-scheduler, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vcs-type=git, version=17.1.9)
Oct 13 14:16:31 standalone.localdomain haproxy[70940]: 172.21.0.2:52646 [13/Oct/2025:14:16:31.035] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/39/39 404 294 - - ---- 59/2/0/0/0 0/0 "GET /v3/projects/admin HTTP/1.1"
Oct 13 14:16:31 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:16:31 standalone.localdomain podman[184333]: 2025-10-13 14:16:31.081889783 +0000 UTC m=+0.727656191 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, container_name=nova_conductor, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T15:44:17, summary=Red Hat OpenStack Platform 17.1 nova-conductor, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-conductor, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-nova-conductor-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, 
name=rhosp17/openstack-nova-conductor, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, batch=17.1_20250721.1)
Oct 13 14:16:31 standalone.localdomain podman[184333]: 2025-10-13 14:16:31.104841219 +0000 UTC m=+0.750607617 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, description=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-conductor-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, 
architecture=x86_64, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, name=rhosp17/openstack-nova-conductor, build-date=2025-07-21T15:44:17, container_name=nova_conductor, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git)
Oct 13 14:16:31 standalone.localdomain haproxy[70940]: 172.21.0.2:52646 [13/Oct/2025:14:16:31.080] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/28/28 200 644 - - ---- 59/2/0/0/0 0/0 "GET /v3/projects?name=admin HTTP/1.1"
Oct 13 14:16:31 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:16:31 standalone.localdomain podman[184340]: 2025-10-13 14:16:31.133920334 +0000 UTC m=+0.780598980 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, build-date=2025-07-21T13:27:18, name=rhosp17/openstack-keystone, com.redhat.component=openstack-keystone-container, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, container_name=keystone, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 13 14:16:31 standalone.localdomain haproxy[70940]: 172.17.0.2:50712 [13/Oct/2025:14:16:31.118] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/21/21 200 8095 - - ---- 60/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:31 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:16:31 standalone.localdomain haproxy[70940]: 172.21.0.2:42604 [13/Oct/2025:14:16:31.112] neutron neutron/standalone.internalapi.localdomain 0/0/0/61/61 200 365 - - ---- 60/3/0/0/0 0/0 "GET /v2.0/security-groups?fields=id&fields=name&fields=description&fields=project_id&fields=tags&project_id=e44641a80bcb466cb3dd688e48b72d8e&tenant_id=e44641a80bcb466cb3dd688e48b72d8e HTTP/1.1"
Oct 13 14:16:31 standalone.localdomain systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 13 14:16:31 standalone.localdomain systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.165s CPU time.
Oct 13 14:16:31 standalone.localdomain sshd[184017]: Received disconnect from 192.168.122.11 port 33050:11: disconnected by user
Oct 13 14:16:31 standalone.localdomain sshd[184017]: Disconnected from user root 192.168.122.11 port 33050
Oct 13 14:16:31 standalone.localdomain sshd[184014]: pam_unix(sshd:session): session closed for user root
Oct 13 14:16:31 standalone.localdomain systemd[1]: session-69.scope: Deactivated successfully.
Oct 13 14:16:31 standalone.localdomain systemd[1]: session-69.scope: Consumed 2.206s CPU time.
Oct 13 14:16:31 standalone.localdomain systemd-logind[45629]: Session 69 logged out. Waiting for processes to exit.
Oct 13 14:16:31 standalone.localdomain systemd-logind[45629]: Removed session 69.
Oct 13 14:16:31 standalone.localdomain sshd[184852]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:31 standalone.localdomain sshd[184852]: Accepted publickey for root from 192.168.122.11 port 44218 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:16:31 standalone.localdomain systemd-logind[45629]: New session 70 of user root.
Oct 13 14:16:31 standalone.localdomain systemd[1]: Started Session 70 of User root.
Oct 13 14:16:31 standalone.localdomain sshd[184852]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:31 standalone.localdomain runuser[184704]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:16:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1251: 177 pgs: 177 active+clean; 105 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 2.0 MiB/s rd, 434 KiB/s wr, 81 op/s
Oct 13 14:16:32 standalone.localdomain systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 13 14:16:32 standalone.localdomain ceph-mon[29756]: pgmap v1251: 177 pgs: 177 active+clean; 105 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 2.0 MiB/s rd, 434 KiB/s wr, 81 op/s
Oct 13 14:16:33 standalone.localdomain haproxy[70940]: 172.21.0.2:52656 [13/Oct/2025:14:16:33.321] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 59/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:33 standalone.localdomain haproxy[70940]: 172.21.0.2:52656 [13/Oct/2025:14:16:33.326] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/288/288 201 8100 - - ---- 59/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:33 standalone.localdomain haproxy[70940]: 172.17.0.2:50712 [13/Oct/2025:14:16:33.645] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/27/27 200 8095 - - ---- 60/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:33 standalone.localdomain haproxy[70940]: 172.21.0.2:42620 [13/Oct/2025:14:16:33.638] neutron neutron/standalone.internalapi.localdomain 0/0/0/63/63 200 2965 - - ---- 60/3/0/0/0 0/0 "GET /v2.0/security-groups/abececb5-f6f2-4dbd-993f-ddc54effe614 HTTP/1.1"
Oct 13 14:16:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1252: 177 pgs: 177 active+clean; 105 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 2.0 MiB/s rd, 419 KiB/s wr, 79 op/s
Oct 13 14:16:33 standalone.localdomain haproxy[70940]: 172.21.0.2:42620 [13/Oct/2025:14:16:33.705] neutron neutron/standalone.internalapi.localdomain 0/0/0/275/275 201 763 - - ---- 60/3/0/0/0 0/0 "POST /v2.0/security-group-rules HTTP/1.1"
Oct 13 14:16:34 standalone.localdomain sshd[184864]: Received disconnect from 192.168.122.11 port 44218:11: disconnected by user
Oct 13 14:16:34 standalone.localdomain sshd[184864]: Disconnected from user root 192.168.122.11 port 44218
Oct 13 14:16:34 standalone.localdomain sshd[184852]: pam_unix(sshd:session): session closed for user root
Oct 13 14:16:34 standalone.localdomain systemd[1]: session-70.scope: Deactivated successfully.
Oct 13 14:16:34 standalone.localdomain systemd[1]: session-70.scope: Consumed 1.945s CPU time.
Oct 13 14:16:34 standalone.localdomain systemd-logind[45629]: Session 70 logged out. Waiting for processes to exit.
Oct 13 14:16:34 standalone.localdomain systemd-logind[45629]: Removed session 70.
Oct 13 14:16:34 standalone.localdomain sshd[184918]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:34 standalone.localdomain sshd[184918]: Accepted publickey for root from 192.168.122.11 port 44224 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:16:34 standalone.localdomain systemd-logind[45629]: New session 71 of user root.
Oct 13 14:16:34 standalone.localdomain systemd[1]: Started Session 71 of User root.
Oct 13 14:16:34 standalone.localdomain sshd[184918]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:34 standalone.localdomain ceph-mon[29756]: pgmap v1252: 177 pgs: 177 active+clean; 105 MiB data, 122 MiB used, 6.9 GiB / 7.0 GiB avail; 2.0 MiB/s rd, 419 KiB/s wr, 79 op/s
Oct 13 14:16:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:16:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1253: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Oct 13 14:16:36 standalone.localdomain haproxy[70940]: 172.21.0.2:52666 [13/Oct/2025:14:16:36.183] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 57/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:36 standalone.localdomain ceph-mon[29756]: pgmap v1253: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Oct 13 14:16:36 standalone.localdomain haproxy[70940]: 172.21.0.2:52666 [13/Oct/2025:14:16:36.189] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/287/287 201 8100 - - ---- 57/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:36 standalone.localdomain haproxy[70940]: 172.17.0.2:50712 [13/Oct/2025:14:16:36.512] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/36/36 200 8095 - - ---- 58/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:36 standalone.localdomain haproxy[70940]: 172.21.0.2:42628 [13/Oct/2025:14:16:36.506] neutron neutron/standalone.internalapi.localdomain 0/0/0/74/74 200 201 - - ---- 58/2/0/0/0 0/0 "GET /v2.0/security-group-rules?direction=ingress&protocol=tcp HTTP/1.1"
Oct 13 14:16:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:16:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:16:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:16:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:16:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:16:36 standalone.localdomain podman[184966]: 2025-10-13 14:16:36.747255091 +0000 UTC m=+0.065697303 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, distribution-scope=public, io.buildah.version=1.33.12, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:16:36 standalone.localdomain podman[184965]: 2025-10-13 14:16:36.801726606 +0000 UTC m=+0.122115849 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, config_id=tripleo_step4, container_name=nova_migration_target, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:16:36 standalone.localdomain podman[184964]: 2025-10-13 14:16:36.866209051 +0000 UTC m=+0.186085857 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-object, release=1, com.redhat.component=openstack-swift-object-container, vcs-type=git, batch=17.1_20250721.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, container_name=swift_object_server, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:16:36 standalone.localdomain podman[184967]: 2025-10-13 14:16:36.777569273 +0000 UTC m=+0.093914121 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, build-date=2025-07-21T16:11:22, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, architecture=x86_64)
Oct 13 14:16:36 standalone.localdomain sshd[184921]: Received disconnect from 192.168.122.11 port 44224:11: disconnected by user
Oct 13 14:16:36 standalone.localdomain sshd[184921]: Disconnected from user root 192.168.122.11 port 44224
Oct 13 14:16:36 standalone.localdomain sshd[184918]: pam_unix(sshd:session): session closed for user root
Oct 13 14:16:36 standalone.localdomain systemd[1]: session-71.scope: Deactivated successfully.
Oct 13 14:16:36 standalone.localdomain systemd[1]: session-71.scope: Consumed 2.044s CPU time.
Oct 13 14:16:36 standalone.localdomain systemd-logind[45629]: Session 71 logged out. Waiting for processes to exit.
Oct 13 14:16:36 standalone.localdomain systemd-logind[45629]: Removed session 71.
Oct 13 14:16:36 standalone.localdomain sshd[185066]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:36 standalone.localdomain podman[184972]: 2025-10-13 14:16:36.929081605 +0000 UTC m=+0.239406547 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:24:10, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-nova-novncproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-nova-novncproxy, config_id=tripleo_step4, container_name=nova_vnc_proxy, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9)
Oct 13 14:16:36 standalone.localdomain podman[184967]: 2025-10-13 14:16:36.959771539 +0000 UTC m=+0.276116347 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, tcib_managed=true, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-swift-account-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, version=17.1.9, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:16:36 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:16:36 standalone.localdomain podman[184966]: 2025-10-13 14:16:36.972872863 +0000 UTC m=+0.291315075 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, name=rhosp17/openstack-swift-container, distribution-scope=public, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, tcib_managed=true)
Oct 13 14:16:36 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:16:37 standalone.localdomain sshd[185066]: Accepted publickey for root from 192.168.122.11 port 44226 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:16:37 standalone.localdomain systemd-logind[45629]: New session 72 of user root.
Oct 13 14:16:37 standalone.localdomain systemd[1]: Started Session 72 of User root.
Oct 13 14:16:37 standalone.localdomain sshd[185066]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:37 standalone.localdomain podman[184964]: 2025-10-13 14:16:37.043762404 +0000 UTC m=+0.363639230 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, container_name=swift_object_server, config_id=tripleo_step4, name=rhosp17/openstack-swift-object)
Oct 13 14:16:37 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:16:37 standalone.localdomain podman[184965]: 2025-10-13 14:16:37.160905208 +0000 UTC m=+0.481294461 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:16:37 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:16:37 standalone.localdomain podman[184972]: 2025-10-13 14:16:37.194851823 +0000 UTC m=+0.505176775 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=nova_vnc_proxy, io.buildah.version=1.33.12, release=1, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T15:24:10, com.redhat.component=openstack-nova-novncproxy-container, name=rhosp17/openstack-nova-novncproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4)
Oct 13 14:16:37 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:16:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1254: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 94 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Oct 13 14:16:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:16:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:16:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:16:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:16:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:16:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:16:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:16:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:16:38 standalone.localdomain haproxy[70940]: 172.21.0.2:52676 [13/Oct/2025:14:16:38.766] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 56/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:38 standalone.localdomain podman[185140]: 2025-10-13 14:16:38.828685537 +0000 UTC m=+0.088795413 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.expose-services=, release=1, build-date=2025-07-21T14:48:37, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, architecture=x86_64, tcib_managed=true)
Oct 13 14:16:38 standalone.localdomain ceph-mon[29756]: pgmap v1254: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 94 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Oct 13 14:16:38 standalone.localdomain podman[185144]: 2025-10-13 14:16:38.878126228 +0000 UTC m=+0.132533579 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, container_name=nova_metadata, com.redhat.component=openstack-nova-api-container, managed_by=tripleo_ansible, release=1, version=17.1.9, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, build-date=2025-07-21T16:05:11, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:16:38 standalone.localdomain podman[185138]: 2025-10-13 14:16:38.967736625 +0000 UTC m=+0.229546104 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']})
Oct 13 14:16:39 standalone.localdomain podman[185138]: 2025-10-13 14:16:39.003863557 +0000 UTC m=+0.265673046 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, version=17.1.9, name=rhosp17/openstack-glance-api, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, 
batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, container_name=glance_api_cron, description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:16:39 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:16:39 standalone.localdomain podman[185175]: 2025-10-13 14:16:39.016910769 +0000 UTC m=+0.250978244 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, container_name=glance_api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:20, architecture=x86_64, vcs-type=git, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, batch=17.1_20250721.1)
Oct 13 14:16:39 standalone.localdomain podman[185157]: 2025-10-13 14:16:38.917803859 +0000 UTC m=+0.161594843 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_internal, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible)
Oct 13 14:16:39 standalone.localdomain podman[185139]: 2025-10-13 14:16:39.074616544 +0000 UTC m=+0.337867187 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, container_name=placement_api, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, com.redhat.component=openstack-placement-api-container, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, name=rhosp17/openstack-placement-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 placement-api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:16:39 standalone.localdomain haproxy[70940]: 172.21.0.2:52676 [13/Oct/2025:14:16:38.772] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/308/308 201 8100 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:39 standalone.localdomain podman[185157]: 2025-10-13 14:16:39.079962789 +0000 UTC m=+0.323753743 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, tcib_managed=true, vcs-type=git, container_name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, batch=17.1_20250721.1, release=1, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']})
Oct 13 14:16:39 standalone.localdomain podman[185144]: 2025-10-13 14:16:39.08973234 +0000 UTC m=+0.344139701 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, description=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, io.openshift.expose-services=, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, container_name=nova_metadata, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, release=1, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-nova-api-container, vendor=Red Hat, Inc.)
Oct 13 14:16:39 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:16:39 standalone.localdomain podman[185164]: 2025-10-13 14:16:38.940010442 +0000 UTC m=+0.167659400 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, container_name=ovn_controller, release=1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245)
Oct 13 14:16:39 standalone.localdomain haproxy[70940]: 172.21.0.2:52676 [13/Oct/2025:14:16:39.095] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/21/21 404 294 - - ---- 56/2/0/0/0 0/0 "GET /v3/projects/admin HTTP/1.1"
Oct 13 14:16:39 standalone.localdomain podman[185164]: 2025-10-13 14:16:39.121875308 +0000 UTC m=+0.349524256 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9)
Oct 13 14:16:39 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:16:39 standalone.localdomain haproxy[70940]: 172.21.0.2:52676 [13/Oct/2025:14:16:39.120] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/18/18 200 644 - - ---- 56/2/0/0/0 0/0 "GET /v3/projects?name=admin HTTP/1.1"
Oct 13 14:16:39 standalone.localdomain podman[185139]: 2025-10-13 14:16:39.139903114 +0000 UTC m=+0.403153727 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, summary=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12, architecture=x86_64, name=rhosp17/openstack-placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, container_name=placement_api, description=Red Hat OpenStack Platform 17.1 placement-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-placement-api-container)
Oct 13 14:16:39 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:16:39 standalone.localdomain haproxy[70940]: 172.17.0.2:50712 [13/Oct/2025:14:16:39.148] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/26/26 200 8095 - - ---- 57/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:39 standalone.localdomain haproxy[70940]: 172.21.0.2:42638 [13/Oct/2025:14:16:39.142] neutron neutron/standalone.internalapi.localdomain 0/0/0/77/77 200 365 - - ---- 57/1/0/0/0 0/0 "GET /v2.0/security-groups?fields=id&fields=name&fields=description&fields=project_id&fields=tags&project_id=e44641a80bcb466cb3dd688e48b72d8e&tenant_id=e44641a80bcb466cb3dd688e48b72d8e HTTP/1.1"
Oct 13 14:16:39 standalone.localdomain podman[185163]: 2025-10-13 14:16:39.221994569 +0000 UTC m=+0.464242466 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, build-date=2025-07-21T16:28:53, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64)
Oct 13 14:16:39 standalone.localdomain podman[185175]: 2025-10-13 14:16:39.23892372 +0000 UTC m=+0.472991215 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-glance-api-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, container_name=glance_api, release=1, tcib_managed=true, version=17.1.9, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4)
Oct 13 14:16:39 standalone.localdomain podman[185140]: 2025-10-13 14:16:39.251794437 +0000 UTC m=+0.511904363 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, distribution-scope=public, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server, 
maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, release=1, com.redhat.component=openstack-swift-proxy-server-container, tcib_managed=true, version=17.1.9, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:16:39 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:16:39 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:16:39 standalone.localdomain podman[185163]: 2025-10-13 14:16:39.282929425 +0000 UTC m=+0.525177312 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, container_name=ovn_metadata_agent, distribution-scope=public)
Oct 13 14:16:39 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:16:39 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:16:39 standalone.localdomain sshd[185102]: Received disconnect from 192.168.122.11 port 44226:11: disconnected by user
Oct 13 14:16:39 standalone.localdomain sshd[185102]: Disconnected from user root 192.168.122.11 port 44226
Oct 13 14:16:39 standalone.localdomain sshd[185066]: pam_unix(sshd:session): session closed for user root
Oct 13 14:16:39 standalone.localdomain systemd[1]: session-72.scope: Deactivated successfully.
Oct 13 14:16:39 standalone.localdomain systemd[1]: session-72.scope: Consumed 1.985s CPU time.
Oct 13 14:16:39 standalone.localdomain systemd-logind[45629]: Session 72 logged out. Waiting for processes to exit.
Oct 13 14:16:39 standalone.localdomain systemd-logind[45629]: Removed session 72.
Oct 13 14:16:39 standalone.localdomain sshd[185415]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:39 standalone.localdomain sshd[185415]: Accepted publickey for root from 192.168.122.11 port 44234 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:16:39 standalone.localdomain systemd-logind[45629]: New session 73 of user root.
Oct 13 14:16:39 standalone.localdomain systemd[1]: Started Session 73 of User root.
Oct 13 14:16:39 standalone.localdomain sshd[185415]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:39 standalone.localdomain podman[185421]: 2025-10-13 14:16:39.763931035 +0000 UTC m=+0.060593585 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, com.redhat.component=openstack-cinder-backup-container, version=17.1.9, build-date=2025-07-21T16:18:24, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-backup, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-cinder-backup)
Oct 13 14:16:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1255: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 94 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Oct 13 14:16:39 standalone.localdomain podman[185421]: 2025-10-13 14:16:39.794835246 +0000 UTC m=+0.091497826 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-backup, version=17.1.9, build-date=2025-07-21T16:18:24, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, description=Red Hat OpenStack Platform 17.1 cinder-backup, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, summary=Red Hat OpenStack Platform 17.1 cinder-backup, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-cinder-backup-container, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:16:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:16:40 standalone.localdomain ceph-mon[29756]: pgmap v1255: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 94 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Oct 13 14:16:41 standalone.localdomain haproxy[70940]: 172.21.0.2:53044 [13/Oct/2025:14:16:41.408] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 56/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:41 standalone.localdomain haproxy[70940]: 172.21.0.2:53044 [13/Oct/2025:14:16:41.414] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/274/274 201 8100 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:41 standalone.localdomain haproxy[70940]: 172.17.0.2:50712 [13/Oct/2025:14:16:41.713] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/20/20 200 8095 - - ---- 57/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1256: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 55 KiB/s rd, 1.7 MiB/s wr, 23 op/s
Oct 13 14:16:41 standalone.localdomain haproxy[70940]: 172.21.0.2:51186 [13/Oct/2025:14:16:41.708] neutron neutron/standalone.internalapi.localdomain 0/0/0/60/60 200 3563 - - ---- 57/1/0/0/0 0/0 "GET /v2.0/security-groups/abececb5-f6f2-4dbd-993f-ddc54effe614 HTTP/1.1"
Oct 13 14:16:41 standalone.localdomain haproxy[70940]: 172.21.0.2:51186 [13/Oct/2025:14:16:41.772] neutron neutron/standalone.internalapi.localdomain 0/0/0/196/196 201 758 - - ---- 57/1/0/0/0 0/0 "POST /v2.0/security-group-rules HTTP/1.1"
Oct 13 14:16:42 standalone.localdomain sshd[185422]: Received disconnect from 192.168.122.11 port 44234:11: disconnected by user
Oct 13 14:16:42 standalone.localdomain sshd[185422]: Disconnected from user root 192.168.122.11 port 44234
Oct 13 14:16:42 standalone.localdomain sshd[185415]: pam_unix(sshd:session): session closed for user root
Oct 13 14:16:42 standalone.localdomain systemd[1]: session-73.scope: Deactivated successfully.
Oct 13 14:16:42 standalone.localdomain systemd[1]: session-73.scope: Consumed 1.921s CPU time.
Oct 13 14:16:42 standalone.localdomain systemd-logind[45629]: Session 73 logged out. Waiting for processes to exit.
Oct 13 14:16:42 standalone.localdomain systemd-logind[45629]: Removed session 73.
Oct 13 14:16:42 standalone.localdomain runuser[185682]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:16:42 standalone.localdomain sshd[185699]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:42 standalone.localdomain sshd[185699]: Accepted publickey for root from 192.168.122.11 port 40044 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:16:42 standalone.localdomain systemd-logind[45629]: New session 74 of user root.
Oct 13 14:16:42 standalone.localdomain systemd[1]: Started Session 74 of User root.
Oct 13 14:16:42 standalone.localdomain sshd[185699]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:42 standalone.localdomain ceph-mon[29756]: pgmap v1256: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 55 KiB/s rd, 1.7 MiB/s wr, 23 op/s
Oct 13 14:16:42 standalone.localdomain runuser[185682]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:16:43 standalone.localdomain runuser[185760]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:16:43 standalone.localdomain runuser[185760]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:16:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:16:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:16:43 standalone.localdomain runuser[185826]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:16:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1257: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 55 KiB/s rd, 1.7 MiB/s wr, 23 op/s
Oct 13 14:16:43 standalone.localdomain podman[185833]: 2025-10-13 14:16:43.794150517 +0000 UTC m=+0.061719871 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, build-date=2025-07-21T16:05:11, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-api, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, com.redhat.component=openstack-nova-api-container, distribution-scope=public, tcib_managed=true, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., container_name=nova_api)
Oct 13 14:16:43 standalone.localdomain podman[185830]: 2025-10-13 14:16:43.874839311 +0000 UTC m=+0.143043914 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, config_id=tripleo_step3, com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, container_name=keystone_cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, version=17.1.9, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:16:43 standalone.localdomain podman[185833]: 2025-10-13 14:16:43.878873275 +0000 UTC m=+0.146442709 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-api, tcib_managed=true, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, container_name=nova_api, version=17.1.9, batch=17.1_20250721.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-api-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:05:11, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:16:43 standalone.localdomain podman[185830]: 2025-10-13 14:16:43.885913222 +0000 UTC m=+0.154117845 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, container_name=keystone_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, 
version=17.1.9, com.redhat.component=openstack-keystone-container, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-keystone)
Oct 13 14:16:43 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:16:43 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:16:44 standalone.localdomain haproxy[70940]: 172.21.0.2:53052 [13/Oct/2025:14:16:44.389] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/6/6 300 515 - - ---- 56/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:44 standalone.localdomain runuser[185826]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:16:44 standalone.localdomain haproxy[70940]: 172.21.0.2:53052 [13/Oct/2025:14:16:44.398] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/298/298 201 8100 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:44 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:16:44.728] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/94/94 200 8095 - - ---- 58/3/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:44 standalone.localdomain haproxy[70940]: 172.21.0.2:49118 [13/Oct/2025:14:16:44.723] cinder cinder/standalone.internalapi.localdomain 0/0/0/132/132 404 385 - - ---- 58/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/disk HTTP/1.1"
Oct 13 14:16:44 standalone.localdomain ceph-mon[29756]: pgmap v1257: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 55 KiB/s rd, 1.7 MiB/s wr, 23 op/s
Oct 13 14:16:44 standalone.localdomain haproxy[70940]: 172.21.0.2:49118 [13/Oct/2025:14:16:44.859] cinder cinder/standalone.internalapi.localdomain 0/0/0/24/24 200 316 - - ---- 58/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/detail?all_tenants=1&name=disk HTTP/1.1"
Oct 13 14:16:45 standalone.localdomain sshd[185730]: Received disconnect from 192.168.122.11 port 40044:11: disconnected by user
Oct 13 14:16:45 standalone.localdomain sshd[185730]: Disconnected from user root 192.168.122.11 port 40044
Oct 13 14:16:45 standalone.localdomain sshd[185699]: pam_unix(sshd:session): session closed for user root
Oct 13 14:16:45 standalone.localdomain systemd[1]: session-74.scope: Deactivated successfully.
Oct 13 14:16:45 standalone.localdomain systemd[1]: session-74.scope: Consumed 2.135s CPU time.
Oct 13 14:16:45 standalone.localdomain systemd-logind[45629]: Session 74 logged out. Waiting for processes to exit.
Oct 13 14:16:45 standalone.localdomain systemd-logind[45629]: Removed session 74.
Oct 13 14:16:45 standalone.localdomain sshd[185946]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:16:45 standalone.localdomain sshd[185946]: Accepted publickey for root from 192.168.122.11 port 40054 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:16:45 standalone.localdomain systemd-logind[45629]: New session 75 of user root.
Oct 13 14:16:45 standalone.localdomain systemd[1]: Started Session 75 of User root.
Oct 13 14:16:45 standalone.localdomain sshd[185946]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1258: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 55 KiB/s rd, 1.7 MiB/s wr, 23 op/s
Oct 13 14:16:46 standalone.localdomain ceph-mon[29756]: pgmap v1258: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 55 KiB/s rd, 1.7 MiB/s wr, 23 op/s
Oct 13 14:16:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:16:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:16:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:16:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:16:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:16:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:16:46 standalone.localdomain podman[185982]: 2025-10-13 14:16:46.759387409 +0000 UTC m=+0.078004812 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, com.redhat.component=openstack-neutron-dhcp-agent-container, container_name=neutron_dhcp, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:54, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-dhcp-agent, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git)
Oct 13 14:16:46 standalone.localdomain podman[185983]: 2025-10-13 14:16:46.808133618 +0000 UTC m=+0.126201084 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, container_name=barbican_api, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, config_id=tripleo_step3, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, vcs-type=git, distribution-scope=public, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-api-container, name=rhosp17/openstack-barbican-api, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 13 14:16:46 standalone.localdomain podman[185982]: 2025-10-13 14:16:46.816743193 +0000 UTC m=+0.135360596 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T16:28:54, io.openshift.expose-services=, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-neutron-dhcp-agent-container, container_name=neutron_dhcp, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-dhcp-agent)
Oct 13 14:16:46 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:16:46 standalone.localdomain podman[185983]: 2025-10-13 14:16:46.834750747 +0000 UTC m=+0.152818213 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, build-date=2025-07-21T15:22:44, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, com.redhat.component=openstack-barbican-api-container, name=rhosp17/openstack-barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=barbican_api, 
managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64)
Oct 13 14:16:46 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:16:46 standalone.localdomain podman[186001]: 2025-10-13 14:16:46.882850108 +0000 UTC m=+0.186924393 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, build-date=2025-07-21T16:05:11, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-nova-api, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api_cron)
Oct 13 14:16:46 standalone.localdomain podman[185984]: 2025-10-13 14:16:46.920729123 +0000 UTC m=+0.228757349 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, version=17.1.9, build-date=2025-07-21T16:18:19, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, 
io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git, container_name=barbican_keystone_listener, release=1, tcib_managed=true, com.redhat.component=openstack-barbican-keystone-listener-container, maintainer=OpenStack TripleO Team)
Oct 13 14:16:46 standalone.localdomain podman[186001]: 2025-10-13 14:16:46.921776496 +0000 UTC m=+0.225850781 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, build-date=2025-07-21T16:05:11, config_id=tripleo_step4, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-nova-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api_cron, batch=17.1_20250721.1, com.redhat.component=openstack-nova-api-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:16:46 standalone.localdomain podman[185984]: 2025-10-13 14:16:46.94080645 +0000 UTC m=+0.248834666 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, container_name=barbican_keystone_listener, release=1, build-date=2025-07-21T16:18:19, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, 
config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, version=17.1.9, com.redhat.component=openstack-barbican-keystone-listener-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-barbican-keystone-listener, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, maintainer=OpenStack TripleO Team)
Oct 13 14:16:46 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:16:46 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:16:47 standalone.localdomain podman[185996]: 2025-10-13 14:16:47.022232136 +0000 UTC m=+0.328248271 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-barbican-worker, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-type=git, container_name=barbican_worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 
barbican-worker, com.redhat.component=openstack-barbican-worker-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T15:36:22, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 barbican-worker, batch=17.1_20250721.1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4)
Oct 13 14:16:47 standalone.localdomain podman[185996]: 2025-10-13 14:16:47.046181203 +0000 UTC m=+0.352197358 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, vcs-type=git, name=rhosp17/openstack-barbican-worker, com.redhat.component=openstack-barbican-worker-container, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, maintainer=OpenStack TripleO Team, container_name=barbican_worker, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, build-date=2025-07-21T15:36:22, config_id=tripleo_step3, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:16:47 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:16:47 standalone.localdomain podman[185985]: 2025-10-13 14:16:46.973837947 +0000 UTC m=+0.271584778 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, build-date=2025-07-21T16:03:34, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, release=1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-neutron-sriov-agent-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public)
Oct 13 14:16:47 standalone.localdomain podman[185985]: 2025-10-13 14:16:47.107862131 +0000 UTC m=+0.405608922 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, build-date=2025-07-21T16:03:34, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, name=rhosp17/openstack-neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-neutron-sriov-agent-container)
Oct 13 14:16:47 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:16:47 standalone.localdomain haproxy[70940]: 172.21.0.2:53058 [13/Oct/2025:14:16:47.134] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 57/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:47 standalone.localdomain haproxy[70940]: 172.21.0.2:53058 [13/Oct/2025:14:16:47.141] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/295/295 201 8100 - - ---- 57/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:47 standalone.localdomain haproxy[70940]: 172.21.0.2:52646 [13/Oct/2025:14:16:47.489] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/4/4 300 1507 - - ---- 58/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:47 standalone.localdomain haproxy[70940]: 172.17.0.2:40288 [13/Oct/2025:14:16:47.508] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/24/24 200 8095 - - ---- 59/4/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:47 standalone.localdomain haproxy[70940]: 172.21.0.2:52646 [13/Oct/2025:14:16:47.501] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/42/42 404 340 - - ---- 59/1/0/0/0 0/0 "GET /v2/images/cirros HTTP/1.1"
Oct 13 14:16:47 standalone.localdomain haproxy[70940]: 172.21.0.2:52646 [13/Oct/2025:14:16:47.547] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/22/22 200 1199 - - ---- 59/1/0/0/0 0/0 "GET /v2/images?name=cirros HTTP/1.1"
Oct 13 14:16:47 standalone.localdomain haproxy[70940]: 172.17.0.2:40290 [13/Oct/2025:14:16:47.607] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2/2 300 515 - - ---- 61/5/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:47 standalone.localdomain systemd[1]: tmp-crun.bwJQqG.mount: Deactivated successfully.
Oct 13 14:16:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1259: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 682 B/s rd, 25 KiB/s wr, 0 op/s
Oct 13 14:16:47 standalone.localdomain haproxy[70940]: 172.17.0.2:40290 [13/Oct/2025:14:16:47.611] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/266/266 201 8166 - - ---- 61/5/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:47 standalone.localdomain haproxy[70940]: 172.17.0.2:40302 [13/Oct/2025:14:16:47.888] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/26/26 200 8161 - - ---- 63/6/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:47 standalone.localdomain haproxy[70940]: 172.17.0.2:33388 [13/Oct/2025:14:16:47.881] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/1/54/55 200 1388 - - ---- 63/1/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:16:47 standalone.localdomain haproxy[70940]: 172.17.0.2:33388 [13/Oct/2025:14:16:47.939] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/4/4 200 6258 - - ---- 63/1/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:16:47 standalone.localdomain haproxy[70940]: 172.17.0.2:33388 [13/Oct/2025:14:16:47.946] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/3/3 200 6258 - - ---- 63/1/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:16:47 standalone.localdomain haproxy[70940]: 172.17.0.2:33388 [13/Oct/2025:14:16:47.952] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/19/19 200 1388 - - ---- 63/1/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:16:47 standalone.localdomain haproxy[70940]: 172.17.0.2:33388 [13/Oct/2025:14:16:47.975] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/3/3 200 6258 - - ---- 63/1/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:16:47 standalone.localdomain haproxy[70940]: 172.17.0.2:33388 [13/Oct/2025:14:16:47.980] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/3/3 200 6258 - - ---- 63/1/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.17.0.2:33388 [13/Oct/2025:14:16:48.063] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/19/19 200 1388 - - ---- 63/1/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.17.0.2:33388 [13/Oct/2025:14:16:48.083] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/4/4 200 6258 - - ---- 63/1/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.17.0.2:33388 [13/Oct/2025:14:16:48.091] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/3/3 200 6258 - - ---- 63/1/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.21.0.2:49128 [13/Oct/2025:14:16:47.574] cinder cinder/standalone.internalapi.localdomain 0/0/0/693/693 202 1102 - - ---- 63/1/0/0/0 0/0 "POST /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes HTTP/1.1"
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.17.0.100:53779 [13/Oct/2025:14:05:16.007] mysql mysql/standalone.internalapi.localdomain 1/0/692269 137597 -- 63/54/53/53/0 0/0
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.17.0.100:54399 [13/Oct/2025:14:08:05.175] mysql mysql/standalone.internalapi.localdomain 1/0/523203 124133 -- 63/54/53/53/0 0/0
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.17.0.100:38477 [13/Oct/2025:14:16:48.379] mysql mysql/standalone.internalapi.localdomain 1/0/26 7877 -- 63/54/53/53/0 0/0
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.17.0.2:40312 [13/Oct/2025:14:16:48.473] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 61/6/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:48 standalone.localdomain sshd[185957]: Received disconnect from 192.168.122.11 port 40054:11: disconnected by user
Oct 13 14:16:48 standalone.localdomain sshd[185957]: Disconnected from user root 192.168.122.11 port 40054
Oct 13 14:16:48 standalone.localdomain sshd[185946]: pam_unix(sshd:session): session closed for user root
Oct 13 14:16:48 standalone.localdomain systemd[1]: session-75.scope: Deactivated successfully.
Oct 13 14:16:48 standalone.localdomain systemd[1]: session-75.scope: Consumed 2.154s CPU time.
Oct 13 14:16:48 standalone.localdomain systemd-logind[45629]: Session 75 logged out. Waiting for processes to exit.
Oct 13 14:16:48 standalone.localdomain systemd-logind[45629]: Removed session 75.
Oct 13 14:16:48 standalone.localdomain sshd[186151]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:48 standalone.localdomain sshd[186151]: Accepted publickey for root from 192.168.122.11 port 40056 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:16:48 standalone.localdomain systemd-logind[45629]: New session 76 of user root.
Oct 13 14:16:48 standalone.localdomain systemd[1]: Started Session 76 of User root.
Oct 13 14:16:48 standalone.localdomain sshd[186151]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:48 standalone.localdomain ceph-mon[29756]: pgmap v1259: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 682 B/s rd, 25 KiB/s wr, 0 op/s
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.17.0.2:40312 [13/Oct/2025:14:16:48.480] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/366/366 201 8166 - - ---- 61/6/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.17.0.2:40302 [13/Oct/2025:14:16:48.853] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/27/27 200 8161 - - ---- 62/6/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.17.0.2:33404 [13/Oct/2025:14:16:48.849] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/47/47 200 1388 - - ---- 62/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.17.0.2:33404 [13/Oct/2025:14:16:48.898] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/3/3 200 6258 - - ---- 62/2/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.17.0.2:33404 [13/Oct/2025:14:16:48.904] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/18/18 200 1388 - - ---- 62/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.17.0.2:33404 [13/Oct/2025:14:16:48.924] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/5/5 200 6258 - - ---- 62/2/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.17.0.2:33404 [13/Oct/2025:14:16:48.931] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/3/3 200 6258 - - ---- 62/2/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:16:48 standalone.localdomain haproxy[70940]: 172.17.0.2:33404 [13/Oct/2025:14:16:48.964] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/17/17 200 1388 - - ---- 62/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:16:49 standalone.localdomain haproxy[70940]: 172.17.0.2:33404 [13/Oct/2025:14:16:48.984] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/110/186 200 21692646 - - ---- 62/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b/file HTTP/1.1"
Oct 13 14:16:49 standalone.localdomain haproxy[70940]: 172.17.0.2:33404 [13/Oct/2025:14:16:49.199] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/24/24 200 1388 - - ---- 62/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:16:49 standalone.localdomain haproxy[70940]: 172.17.0.2:33404 [13/Oct/2025:14:16:49.224] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/5/5 200 6258 - - ---- 62/2/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:16:49 standalone.localdomain sudo[186234]:     root : PWD=/ ; USER=root ; COMMAND=/usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf privsep-helper --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --privsep_context cinder.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpf3kz1bm0/privsep.sock
Oct 13 14:16:49 standalone.localdomain systemd[1]: Started Session c15 of User root.
Oct 13 14:16:49 standalone.localdomain sudo[186234]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1260: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 1.7 MiB/s rd, 26 KiB/s wr, 7 op/s
Oct 13 14:16:50 standalone.localdomain sudo[186234]: pam_unix(sudo:session): session closed for user root
Oct 13 14:16:50 standalone.localdomain haproxy[70940]: 172.17.0.2:33404 [13/Oct/2025:14:16:50.130] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/27/27 200 1388 - - ---- 62/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:16:50 standalone.localdomain haproxy[70940]: 172.17.0.2:33404 [13/Oct/2025:14:16:50.158] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/5/5 200 6258 - - ---- 62/2/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:16:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:16:50 standalone.localdomain haproxy[70940]: 172.21.0.2:33996 [13/Oct/2025:14:16:50.467] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2/2 300 515 - - ---- 63/7/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:16:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:16:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:16:50 standalone.localdomain haproxy[70940]: 172.21.0.2:33996 [13/Oct/2025:14:16:50.472] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/302/302 201 8100 - - ---- 63/7/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:50 standalone.localdomain podman[186482]: 2025-10-13 14:16:50.790407885 +0000 UTC m=+0.059209814 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., release=1, tcib_managed=true, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12)
Oct 13 14:16:50 standalone.localdomain podman[186482]: 2025-10-13 14:16:50.815881519 +0000 UTC m=+0.084683458 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:16:50 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:16:50 standalone.localdomain ceph-mon[29756]: pgmap v1260: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 1.7 MiB/s rd, 26 KiB/s wr, 7 op/s
Oct 13 14:16:50 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:16:50.804] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/34/34 200 8095 - - ---- 64/7/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:50 standalone.localdomain haproxy[70940]: 172.21.0.2:38638 [13/Oct/2025:14:16:50.801] cinder cinder/standalone.internalapi.localdomain 0/0/0/59/59 404 385 - - ---- 64/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/disk HTTP/1.1"
Oct 13 14:16:50 standalone.localdomain haproxy[70940]: 172.21.0.2:38638 [13/Oct/2025:14:16:50.862] cinder cinder/standalone.internalapi.localdomain 0/0/0/31/31 200 1394 - - ---- 64/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/detail?all_tenants=1&name=disk HTTP/1.1"
Oct 13 14:16:50 standalone.localdomain podman[186493]: 2025-10-13 14:16:50.894949522 +0000 UTC m=+0.160088287 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, config_id=tripleo_step2, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, release=1, managed_by=tripleo_ansible, container_name=clustercheck, tcib_managed=true, architecture=x86_64, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9)
Oct 13 14:16:50 standalone.localdomain podman[186488]: 2025-10-13 14:16:50.958070984 +0000 UTC m=+0.225758298 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, config_id=ovn_cluster_northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-northd, version=17.1.9, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.component=openstack-ovn-northd-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, vcs-type=git, io.openshift.expose-services=, build-date=2025-07-21T13:30:04, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=ovn_cluster_northd, io.buildah.version=1.33.12)
Oct 13 14:16:50 standalone.localdomain podman[186488]: 2025-10-13 14:16:50.972009242 +0000 UTC m=+0.239696536 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-northd, version=17.1.9, config_id=ovn_cluster_northd, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ovn-northd-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, container_name=ovn_cluster_northd, release=1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, architecture=x86_64)
Oct 13 14:16:50 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:16:50 standalone.localdomain podman[186493]: 2025-10-13 14:16:50.988050226 +0000 UTC m=+0.253189021 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=clustercheck, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, release=1, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, build-date=2025-07-21T12:58:45, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 14:16:51 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:16:51 standalone.localdomain sshd[186154]: Received disconnect from 192.168.122.11 port 40056:11: disconnected by user
Oct 13 14:16:51 standalone.localdomain sshd[186154]: Disconnected from user root 192.168.122.11 port 40056
Oct 13 14:16:51 standalone.localdomain sshd[186151]: pam_unix(sshd:session): session closed for user root
Oct 13 14:16:51 standalone.localdomain systemd[1]: session-76.scope: Deactivated successfully.
Oct 13 14:16:51 standalone.localdomain systemd[1]: session-76.scope: Consumed 2.007s CPU time.
Oct 13 14:16:51 standalone.localdomain systemd-logind[45629]: Session 76 logged out. Waiting for processes to exit.
Oct 13 14:16:51 standalone.localdomain systemd-logind[45629]: Removed session 76.
Oct 13 14:16:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e56 do_prune osdmap full prune enabled
Oct 13 14:16:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e57 e57: 1 total, 1 up, 1 in
Oct 13 14:16:51 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e57: 1 total, 1 up, 1 in
Oct 13 14:16:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1262: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 2.1 MiB/s rd, 12 KiB/s wr, 8 op/s
Oct 13 14:16:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e57 do_prune osdmap full prune enabled
Oct 13 14:16:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e58 e58: 1 total, 1 up, 1 in
Oct 13 14:16:52 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e58: 1 total, 1 up, 1 in
Oct 13 14:16:52 standalone.localdomain ceph-mon[29756]: osdmap e57: 1 total, 1 up, 1 in
Oct 13 14:16:52 standalone.localdomain ceph-mon[29756]: pgmap v1262: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 2.1 MiB/s rd, 12 KiB/s wr, 8 op/s
Oct 13 14:16:52 standalone.localdomain haproxy[70940]: 172.17.0.2:33404 [13/Oct/2025:14:16:52.299] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/19/19 200 1388 - - ---- 61/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:16:52 standalone.localdomain haproxy[70940]: 172.17.0.2:33404 [13/Oct/2025:14:16:52.319] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/4/4 200 6258 - - ---- 61/2/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:16:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:16:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:16:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:16:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:16:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:16:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:16:53 standalone.localdomain ceph-mon[29756]: osdmap e58: 1 total, 1 up, 1 in
Oct 13 14:16:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1264: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 2.6 MiB/s rd, 1.5 KiB/s wr, 10 op/s
Oct 13 14:16:54 standalone.localdomain ceph-mon[29756]: pgmap v1264: 177 pgs: 177 active+clean; 122 MiB data, 165 MiB used, 6.8 GiB / 7.0 GiB avail; 2.6 MiB/s rd, 1.5 KiB/s wr, 10 op/s
Oct 13 14:16:55 standalone.localdomain runuser[186740]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:16:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:16:55 standalone.localdomain snmpd[110332]: empty variable list in _query
Oct 13 14:16:55 standalone.localdomain snmpd[110332]: empty variable list in _query
Oct 13 14:16:55 standalone.localdomain runuser[186740]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:16:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1265: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 62 op/s
Oct 13 14:16:55 standalone.localdomain runuser[186809]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:16:56 standalone.localdomain sshd[186890]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:56 standalone.localdomain ceph-mon[29756]: pgmap v1265: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 62 op/s
Oct 13 14:16:56 standalone.localdomain sshd[186890]: Accepted publickey for root from 192.168.122.11 port 54980 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:16:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:16:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:16:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:16:56 standalone.localdomain systemd-logind[45629]: New session 77 of user root.
Oct 13 14:16:56 standalone.localdomain systemd[1]: Started Session 77 of User root.
Oct 13 14:16:56 standalone.localdomain sshd[186890]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:56 standalone.localdomain sudo[186920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:16:56 standalone.localdomain sudo[186920]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:16:56 standalone.localdomain sudo[186920]: pam_unix(sudo:session): session closed for user root
Oct 13 14:16:56 standalone.localdomain podman[186895]: 2025-10-13 14:16:56.445543245 +0000 UTC m=+0.102509244 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, name=rhosp17/openstack-cinder-scheduler, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-type=git, com.redhat.component=openstack-cinder-scheduler-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T16:10:12, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:16:56 standalone.localdomain sudo[186959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:16:56 standalone.localdomain sudo[186959]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:16:56 standalone.localdomain podman[186893]: 2025-10-13 14:16:56.409461465 +0000 UTC m=+0.074193403 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15)
Oct 13 14:16:56 standalone.localdomain podman[186893]: 2025-10-13 14:16:56.488250059 +0000 UTC m=+0.152982027 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 
17.1 iscsid, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:16:56 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:16:56 standalone.localdomain podman[186894]: 2025-10-13 14:16:56.520647607 +0000 UTC m=+0.182065854 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, container_name=cinder_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T15:58:55, config_id=tripleo_step4, version=17.1.9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.component=openstack-cinder-api-container, managed_by=tripleo_ansible, architecture=x86_64, 
batch=17.1_20250721.1, name=rhosp17/openstack-cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 13 14:16:56 standalone.localdomain podman[186894]: 2025-10-13 14:16:56.529260262 +0000 UTC m=+0.190678529 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, name=rhosp17/openstack-cinder-api, com.redhat.component=openstack-cinder-api-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T15:58:55, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, architecture=x86_64, container_name=cinder_api_cron)
Oct 13 14:16:56 standalone.localdomain podman[186895]: 2025-10-13 14:16:56.538429783 +0000 UTC m=+0.195395782 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-cinder-scheduler-container, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, vendor=Red Hat, Inc., architecture=x86_64, container_name=cinder_scheduler, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-cinder-scheduler, 
batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, build-date=2025-07-21T16:10:12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1)
Oct 13 14:16:56 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:16:56 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:16:56 standalone.localdomain runuser[186809]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:16:56 standalone.localdomain runuser[187011]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:16:57 standalone.localdomain sudo[186959]: pam_unix(sudo:session): session closed for user root
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:16:57 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 0f87750d-9433-4bd1-a81a-438ef2d174f9 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:16:57 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 0f87750d-9433-4bd1-a81a-438ef2d174f9 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:16:57 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 0f87750d-9433-4bd1-a81a-438ef2d174f9 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:16:57 standalone.localdomain sudo[187092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:16:57 standalone.localdomain sudo[187092]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:16:57 standalone.localdomain sudo[187092]: pam_unix(sudo:session): session closed for user root
Oct 13 14:16:57 standalone.localdomain runuser[187011]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1080770853' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:16:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1080770853' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:16:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1266: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 32 KiB/s rd, 2.7 MiB/s wr, 51 op/s
Oct 13 14:16:58 standalone.localdomain haproxy[70940]: 172.21.0.2:34008 [13/Oct/2025:14:16:58.059] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 60/4/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:16:58 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1080770853' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:16:58 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1080770853' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:16:58 standalone.localdomain ceph-mon[29756]: pgmap v1266: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 32 KiB/s rd, 2.7 MiB/s wr, 51 op/s
Oct 13 14:16:58 standalone.localdomain haproxy[70940]: 172.21.0.2:34008 [13/Oct/2025:14:16:58.064] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/274/274 201 8100 - - ---- 59/4/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:58 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:16:58.376] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/27/27 200 8095 - - ---- 60/4/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:16:58 standalone.localdomain haproxy[70940]: 172.21.0.2:38648 [13/Oct/2025:14:16:58.369] cinder cinder/standalone.internalapi.localdomain 0/0/0/67/67 404 385 - - ---- 60/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/disk HTTP/1.1"
Oct 13 14:16:58 standalone.localdomain haproxy[70940]: 172.21.0.2:38648 [13/Oct/2025:14:16:58.439] cinder cinder/standalone.internalapi.localdomain 0/0/0/46/46 200 1753 - - ---- 60/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/detail?all_tenants=1&name=disk HTTP/1.1"
Oct 13 14:16:58 standalone.localdomain sshd[186915]: Received disconnect from 192.168.122.11 port 54980:11: disconnected by user
Oct 13 14:16:58 standalone.localdomain sshd[186915]: Disconnected from user root 192.168.122.11 port 54980
Oct 13 14:16:58 standalone.localdomain sshd[186890]: pam_unix(sshd:session): session closed for user root
Oct 13 14:16:58 standalone.localdomain systemd[1]: session-77.scope: Deactivated successfully.
Oct 13 14:16:58 standalone.localdomain systemd[1]: session-77.scope: Consumed 1.936s CPU time.
Oct 13 14:16:58 standalone.localdomain systemd-logind[45629]: Session 77 logged out. Waiting for processes to exit.
Oct 13 14:16:58 standalone.localdomain systemd-logind[45629]: Removed session 77.
Oct 13 14:16:58 standalone.localdomain sshd[187174]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:16:58 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:16:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:16:58 standalone.localdomain sshd[187174]: Accepted publickey for root from 192.168.122.11 port 54986 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:16:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:16:58 standalone.localdomain systemd-logind[45629]: New session 78 of user root.
Oct 13 14:16:58 standalone.localdomain systemd[1]: Started Session 78 of User root.
Oct 13 14:16:58 standalone.localdomain sshd[187174]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:16:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1267: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 30 KiB/s rd, 2.5 MiB/s wr, 48 op/s
Oct 13 14:16:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:17:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:17:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e58 do_prune osdmap full prune enabled
Oct 13 14:17:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e59 e59: 1 total, 1 up, 1 in
Oct 13 14:17:00 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e59: 1 total, 1 up, 1 in
Oct 13 14:17:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:17:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:17:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:17:00 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:17:00 standalone.localdomain recover_tripleo_nova_virtqemud[187360]: 93291
Oct 13 14:17:00 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:17:00 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:17:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:17:00 standalone.localdomain haproxy[70940]: 172.21.0.2:34616 [13/Oct/2025:14:17:00.907] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 57/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:17:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:17:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:17:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:17:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:17:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:17:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:17:01 standalone.localdomain ceph-mon[29756]: pgmap v1267: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 30 KiB/s rd, 2.5 MiB/s wr, 48 op/s
Oct 13 14:17:01 standalone.localdomain ceph-mon[29756]: osdmap e59: 1 total, 1 up, 1 in
Oct 13 14:17:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:17:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:17:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:17:01 standalone.localdomain haproxy[70940]: 172.21.0.2:34616 [13/Oct/2025:14:17:00.914] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/287/287 201 8100 - - ---- 57/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:01 standalone.localdomain systemd[1]: tmp-crun.pnYkMi.mount: Deactivated successfully.
Oct 13 14:17:01 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:17:01.243] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/37/37 200 8095 - - ---- 58/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:01 standalone.localdomain haproxy[70940]: 172.21.0.2:44470 [13/Oct/2025:14:17:01.236] cinder cinder/standalone.internalapi.localdomain 0/0/0/70/70 404 391 - - ---- 58/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/snapshots/snapshot HTTP/1.1"
Oct 13 14:17:01 standalone.localdomain haproxy[70940]: 172.21.0.2:44470 [13/Oct/2025:14:17:01.311] cinder cinder/standalone.internalapi.localdomain 0/0/0/22/22 200 318 - - ---- 58/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/snapshots/detail?all_tenants=1&name=snapshot HTTP/1.1"
Oct 13 14:17:01 standalone.localdomain podman[187337]: 2025-10-13 14:17:01.181474833 +0000 UTC m=+0.437444902 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, 
architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:17:01 standalone.localdomain sshd[187177]: Received disconnect from 192.168.122.11 port 54986:11: disconnected by user
Oct 13 14:17:01 standalone.localdomain sshd[187177]: Disconnected from user root 192.168.122.11 port 54986
Oct 13 14:17:01 standalone.localdomain sshd[187174]: pam_unix(sshd:session): session closed for user root
Oct 13 14:17:01 standalone.localdomain systemd[1]: session-78.scope: Deactivated successfully.
Oct 13 14:17:01 standalone.localdomain systemd[1]: session-78.scope: Consumed 2.200s CPU time.
Oct 13 14:17:01 standalone.localdomain systemd-logind[45629]: Session 78 logged out. Waiting for processes to exit.
Oct 13 14:17:01 standalone.localdomain systemd-logind[45629]: Removed session 78.
Oct 13 14:17:01 standalone.localdomain sshd[187673]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:17:01 standalone.localdomain podman[187484]: 2025-10-13 14:17:01.270527063 +0000 UTC m=+0.246647981 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T15:44:11, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-engine-container, name=rhosp17/openstack-heat-engine, container_name=heat_engine)
Oct 13 14:17:01 standalone.localdomain podman[187335]: 2025-10-13 14:17:01.034140409 +0000 UTC m=+0.290031865 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, release=1, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, container_name=manila_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 manila-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-manila-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, summary=Red 
Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-manila-api-container, build-date=2025-07-21T16:06:43)
Oct 13 14:17:01 standalone.localdomain sshd[187673]: Accepted publickey for root from 192.168.122.11 port 59022 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:17:01 standalone.localdomain systemd-logind[45629]: New session 79 of user root.
Oct 13 14:17:01 standalone.localdomain systemd[1]: Started Session 79 of User root.
Oct 13 14:17:01 standalone.localdomain sshd[187673]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1269: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 27 KiB/s rd, 2.3 MiB/s wr, 43 op/s
Oct 13 14:17:01 standalone.localdomain podman[187516]: 2025-10-13 14:17:01.741155665 +0000 UTC m=+0.672712862 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, batch=17.1_20250721.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-api-container, release=1, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack 
TripleO Team, name=rhosp17/openstack-heat-api, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4)
Oct 13 14:17:01 standalone.localdomain podman[187517]: 2025-10-13 14:17:01.606074657 +0000 UTC m=+0.537372886 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-server-container, name=rhosp17/openstack-neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.k8s.description=Red Hat OpenStack 
Platform 17.1 neutron-server, io.openshift.expose-services=, container_name=neutron_api, architecture=x86_64, build-date=2025-07-21T15:44:03, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, release=1)
Oct 13 14:17:01 standalone.localdomain podman[187492]: 2025-10-13 14:17:01.806627829 +0000 UTC m=+0.764046982 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, io.buildah.version=1.33.12, release=1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, description=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=, 
managed_by=tripleo_ansible, com.redhat.component=openstack-horizon-container, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, build-date=2025-07-21T13:58:15, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-horizon, distribution-scope=public, batch=17.1_20250721.1, container_name=horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, summary=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 14:17:02 standalone.localdomain ceph-mon[29756]: pgmap v1269: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 27 KiB/s rd, 2.3 MiB/s wr, 43 op/s
Oct 13 14:17:02 standalone.localdomain podman[187337]: 2025-10-13 14:17:02.100540913 +0000 UTC m=+1.356511012 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, architecture=x86_64, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.component=openstack-cron-container, io.openshift.expose-services=)
Oct 13 14:17:02 standalone.localdomain podman[187516]: 2025-10-13 14:17:02.116815434 +0000 UTC m=+1.048372641 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, release=1, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T15:56:26, vcs-type=git, container_name=heat_api, name=rhosp17/openstack-heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=)
Oct 13 14:17:02 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:17:02 standalone.localdomain podman[187333]: 2025-10-13 14:17:01.426608215 +0000 UTC m=+0.682633675 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, name=rhosp17/openstack-heat-api-cfn, tcib_managed=true, architecture=x86_64, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-cfn-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T14:49:55, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cfn, io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4)
Oct 13 14:17:02 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:17:02 standalone.localdomain podman[187484]: 2025-10-13 14:17:02.16904352 +0000 UTC m=+1.145164458 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, com.redhat.component=openstack-heat-engine-container, container_name=heat_engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, build-date=2025-07-21T15:44:11, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, name=rhosp17/openstack-heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 13 14:17:02 standalone.localdomain podman[187335]: 2025-10-13 14:17:02.181747352 +0000 UTC m=+1.437638828 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, architecture=x86_64, container_name=manila_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-manila-api-container, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, build-date=2025-07-21T16:06:43, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, name=rhosp17/openstack-manila-api, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api)
Oct 13 14:17:02 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:17:02 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:17:02 standalone.localdomain podman[187492]: 2025-10-13 14:17:02.212660623 +0000 UTC m=+1.170079796 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, summary=Red Hat OpenStack Platform 17.1 horizon, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 horizon, build-date=2025-07-21T13:58:15, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-horizon, release=1, container_name=horizon, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-horizon-container, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, config_id=tripleo_step3)
Oct 13 14:17:02 standalone.localdomain podman[187333]: 2025-10-13 14:17:02.220132433 +0000 UTC m=+1.476157813 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, io.openshift.expose-services=, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, release=1, tcib_managed=true, architecture=x86_64, container_name=heat_api_cfn, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T14:49:55, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:17:02 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:17:02 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:17:02 standalone.localdomain podman[187570]: 2025-10-13 14:17:02.10435561 +0000 UTC m=+0.908166066 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, name=rhosp17/openstack-nova-conductor, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, build-date=2025-07-21T15:44:17, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.component=openstack-nova-conductor-container, container_name=nova_conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1)
Oct 13 14:17:02 standalone.localdomain podman[187452]: 2025-10-13 14:17:02.233280997 +0000 UTC m=+1.394713027 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, tcib_managed=true, com.redhat.component=openstack-cinder-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-api, release=1, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T15:58:55, architecture=x86_64, container_name=cinder_api, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat 
OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:17:02 standalone.localdomain podman[187567]: 2025-10-13 14:17:01.876941282 +0000 UTC m=+0.688100724 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, release=1, container_name=keystone, io.buildah.version=1.33.12, name=rhosp17/openstack-keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T13:27:18, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, com.redhat.component=openstack-keystone-container, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64)
Oct 13 14:17:02 standalone.localdomain podman[187490]: 2025-10-13 14:17:02.287775914 +0000 UTC m=+1.252168531 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, config_id=tripleo_step1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, build-date=2025-07-21T12:58:43, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, distribution-scope=public, container_name=memcached, name=rhosp17/openstack-memcached, description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-memcached-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, io.openshift.expose-services=)
Oct 13 14:17:02 standalone.localdomain podman[187512]: 2025-10-13 14:17:02.296687408 +0000 UTC m=+1.227601654 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, architecture=x86_64, distribution-scope=public, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.expose-services=, name=rhosp17/openstack-nova-scheduler, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 
nova-scheduler, managed_by=tripleo_ansible, container_name=nova_scheduler, build-date=2025-07-21T16:02:54, description=Red Hat OpenStack Platform 17.1 nova-scheduler, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-scheduler-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:17:02 standalone.localdomain podman[187452]: 2025-10-13 14:17:02.317299972 +0000 UTC m=+1.478731982 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:58:55, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, container_name=cinder_api, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-api, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, 
com.redhat.component=openstack-cinder-api-container, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vcs-type=git, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-cinder-api, distribution-scope=public)
Oct 13 14:17:02 standalone.localdomain podman[187569]: 2025-10-13 14:17:02.022628625 +0000 UTC m=+0.828531465 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:28, name=rhosp17/openstack-manila-scheduler, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, 
com.redhat.component=openstack-manila-scheduler-container, version=17.1.9, container_name=manila_scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d)
Oct 13 14:17:02 standalone.localdomain podman[187490]: 2025-10-13 14:17:02.333822551 +0000 UTC m=+1.298215168 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, managed_by=tripleo_ansible, config_id=tripleo_step1, architecture=x86_64, build-date=2025-07-21T12:58:43, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, batch=17.1_20250721.1, com.redhat.component=openstack-memcached-container, container_name=memcached, tcib_managed=true, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached)
Oct 13 14:17:02 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:17:02 standalone.localdomain podman[187570]: 2025-10-13 14:17:02.341884029 +0000 UTC m=+1.145694535 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, name=rhosp17/openstack-nova-conductor, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-conductor-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T15:44:17, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, config_id=tripleo_step4, container_name=nova_conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, release=1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d)
Oct 13 14:17:02 standalone.localdomain podman[187512]: 2025-10-13 14:17:02.352457694 +0000 UTC m=+1.283371950 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, com.redhat.component=openstack-nova-scheduler-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1, vcs-type=git, container_name=nova_scheduler, vendor=Red Hat, Inc., 
build-date=2025-07-21T16:02:54, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-scheduler, version=17.1.9)
Oct 13 14:17:02 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:17:02 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:17:02 standalone.localdomain podman[187569]: 2025-10-13 14:17:02.362308027 +0000 UTC m=+1.168210907 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, container_name=manila_scheduler, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 manila-scheduler, version=17.1.9, build-date=2025-07-21T15:56:28, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.component=openstack-manila-scheduler-container, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-manila-scheduler, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:17:02 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:17:02 standalone.localdomain podman[187517]: 2025-10-13 14:17:02.372718038 +0000 UTC m=+1.304016207 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, distribution-scope=public, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-server, version=17.1.9, io.openshift.expose-services=, container_name=neutron_api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, config_id=tripleo_step4, architecture=x86_64, build-date=2025-07-21T15:44:03, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:17:02 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:17:02 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:17:02 standalone.localdomain podman[187491]: 2025-10-13 14:17:02.298128972 +0000 UTC m=+1.255548145 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, container_name=heat_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, version=17.1.9, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:56:26, name=rhosp17/openstack-heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, com.redhat.component=openstack-heat-api-container, maintainer=OpenStack TripleO Team)
Oct 13 14:17:02 standalone.localdomain podman[187567]: 2025-10-13 14:17:02.417844726 +0000 UTC m=+1.229004178 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, batch=17.1_20250721.1, release=1, build-date=2025-07-21T13:27:18, io.openshift.expose-services=, name=rhosp17/openstack-keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, 
container_name=keystone, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-keystone-container)
Oct 13 14:17:02 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:17:02 standalone.localdomain podman[187491]: 2025-10-13 14:17:02.435695115 +0000 UTC m=+1.393114278 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, release=1, container_name=heat_api_cron, name=rhosp17/openstack-heat-api, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=)
Oct 13 14:17:02 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:17:02 standalone.localdomain podman[187839]: 2025-10-13 14:17:02.476425489 +0000 UTC m=+0.067563080 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T12:58:45, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, release=1, com.redhat.component=openstack-mariadb-container, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:17:02 standalone.localdomain podman[187839]: 2025-10-13 14:17:02.508879558 +0000 UTC m=+0.100017129 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-mariadb-container, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:17:02 standalone.localdomain podman[187883]: 2025-10-13 14:17:02.770308662 +0000 UTC m=+0.066711704 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, com.redhat.component=openstack-haproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:08:11, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git)
Oct 13 14:17:02 standalone.localdomain podman[187883]: 2025-10-13 14:17:02.798793238 +0000 UTC m=+0.095196270 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, com.redhat.component=openstack-haproxy-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, build-date=2025-07-21T13:08:11, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=)
Oct 13 14:17:02 standalone.localdomain podman[187917]: 2025-10-13 14:17:02.900735754 +0000 UTC m=+0.063268117 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, maintainer=OpenStack TripleO Team, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-rabbitmq, version=17.1.9, com.redhat.component=openstack-rabbitmq-container, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:17:02 standalone.localdomain podman[187917]: 2025-10-13 14:17:02.928756137 +0000 UTC m=+0.091288470 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, build-date=2025-07-21T13:08:05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, release=1)
Oct 13 14:17:03 standalone.localdomain haproxy[70940]: 172.21.0.2:34630 [13/Oct/2025:14:17:03.483] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 56/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:03 standalone.localdomain haproxy[70940]: 172.21.0.2:34630 [13/Oct/2025:14:17:03.489] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/269/269 201 8100 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1270: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 26 KiB/s rd, 2.2 MiB/s wr, 41 op/s
Oct 13 14:17:03 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:17:03.815] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/86/86 200 8095 - - ---- 57/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:03 standalone.localdomain haproxy[70940]: 172.21.0.2:44480 [13/Oct/2025:14:17:03.806] cinder cinder/standalone.internalapi.localdomain 0/0/0/128/128 404 385 - - ---- 57/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/disk HTTP/1.1"
Oct 13 14:17:03 standalone.localdomain haproxy[70940]: 172.21.0.2:44480 [13/Oct/2025:14:17:03.938] cinder cinder/standalone.internalapi.localdomain 0/0/0/43/43 200 1753 - - ---- 57/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/detail?all_tenants=1&name=disk HTTP/1.1"
Oct 13 14:17:04 standalone.localdomain haproxy[70940]: 172.21.0.2:44480 [13/Oct/2025:14:17:03.986] cinder cinder/standalone.internalapi.localdomain 0/0/0/109/109 202 575 - - ---- 57/1/0/0/0 0/0 "POST /v3/e44641a80bcb466cb3dd688e48b72d8e/snapshots HTTP/1.1"
Oct 13 14:17:04 standalone.localdomain sshd[187694]: Received disconnect from 192.168.122.11 port 59022:11: disconnected by user
Oct 13 14:17:04 standalone.localdomain sshd[187694]: Disconnected from user root 192.168.122.11 port 59022
Oct 13 14:17:04 standalone.localdomain sshd[187673]: pam_unix(sshd:session): session closed for user root
Oct 13 14:17:04 standalone.localdomain systemd[1]: session-79.scope: Deactivated successfully.
Oct 13 14:17:04 standalone.localdomain systemd[1]: session-79.scope: Consumed 1.961s CPU time.
Oct 13 14:17:04 standalone.localdomain systemd-logind[45629]: Session 79 logged out. Waiting for processes to exit.
Oct 13 14:17:04 standalone.localdomain systemd-logind[45629]: Removed session 79.
Oct 13 14:17:04 standalone.localdomain sshd[187985]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:17:04 standalone.localdomain sshd[187985]: Accepted publickey for root from 192.168.122.11 port 59038 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:17:04 standalone.localdomain systemd-logind[45629]: New session 80 of user root.
Oct 13 14:17:04 standalone.localdomain systemd[1]: Started Session 80 of User root.
Oct 13 14:17:04 standalone.localdomain sshd[187985]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e59 do_prune osdmap full prune enabled
Oct 13 14:17:04 standalone.localdomain ceph-mon[29756]: pgmap v1270: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 26 KiB/s rd, 2.2 MiB/s wr, 41 op/s
Oct 13 14:17:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e60 e60: 1 total, 1 up, 1 in
Oct 13 14:17:04 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e60: 1 total, 1 up, 1 in
Oct 13 14:17:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:17:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1272: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 3.5 KiB/s rd, 511 B/s wr, 4 op/s
Oct 13 14:17:05 standalone.localdomain ceph-mon[29756]: osdmap e60: 1 total, 1 up, 1 in
Oct 13 14:17:06 standalone.localdomain haproxy[70940]: 172.21.0.2:34642 [13/Oct/2025:14:17:06.200] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 56/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:06 standalone.localdomain haproxy[70940]: 172.21.0.2:34642 [13/Oct/2025:14:17:06.205] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/314/314 201 8100 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:06 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:17:06.548] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/24/24 200 8095 - - ---- 57/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:06 standalone.localdomain haproxy[70940]: 172.21.0.2:44482 [13/Oct/2025:14:17:06.543] cinder cinder/standalone.internalapi.localdomain 0/0/0/44/44 404 391 - - ---- 57/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/snapshots/snapshot HTTP/1.1"
Oct 13 14:17:06 standalone.localdomain haproxy[70940]: 172.21.0.2:44482 [13/Oct/2025:14:17:06.589] cinder cinder/standalone.internalapi.localdomain 0/0/0/13/13 200 731 - - ---- 57/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/snapshots/detail?all_tenants=1&name=snapshot HTTP/1.1"
Oct 13 14:17:06 standalone.localdomain sshd[187988]: Received disconnect from 192.168.122.11 port 59038:11: disconnected by user
Oct 13 14:17:06 standalone.localdomain sshd[187988]: Disconnected from user root 192.168.122.11 port 59038
Oct 13 14:17:06 standalone.localdomain systemd[1]: tmp-crun.jbFfPR.mount: Deactivated successfully.
Oct 13 14:17:06 standalone.localdomain sshd[187985]: pam_unix(sshd:session): session closed for user root
Oct 13 14:17:06 standalone.localdomain systemd[1]: session-80.scope: Deactivated successfully.
Oct 13 14:17:06 standalone.localdomain systemd[1]: session-80.scope: Consumed 1.930s CPU time.
Oct 13 14:17:06 standalone.localdomain systemd-logind[45629]: Session 80 logged out. Waiting for processes to exit.
Oct 13 14:17:06 standalone.localdomain podman[188033]: 2025-10-13 14:17:06.862879442 +0000 UTC m=+0.106608121 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, version=17.1.9, com.redhat.component=openstack-cinder-volume-container, name=rhosp17/openstack-cinder-volume, io.openshift.expose-services=, release=1, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cinder-volume, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:13:39, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-volume)
Oct 13 14:17:06 standalone.localdomain systemd-logind[45629]: Removed session 80.
Oct 13 14:17:06 standalone.localdomain sshd[188059]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:17:06 standalone.localdomain podman[188033]: 2025-10-13 14:17:06.895968641 +0000 UTC m=+0.139697340 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, name=rhosp17/openstack-cinder-volume, com.redhat.component=openstack-cinder-volume-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:13:39, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-volume, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-volume, release=1)
Oct 13 14:17:06 standalone.localdomain ceph-mon[29756]: pgmap v1272: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 3.5 KiB/s rd, 511 B/s wr, 4 op/s
Oct 13 14:17:06 standalone.localdomain sshd[188059]: Accepted publickey for root from 192.168.122.11 port 59050 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:17:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:17:06 standalone.localdomain systemd-logind[45629]: New session 81 of user root.
Oct 13 14:17:06 standalone.localdomain systemd[1]: Started Session 81 of User root.
Oct 13 14:17:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:17:07 standalone.localdomain sshd[188059]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:17:07 standalone.localdomain podman[188073]: 2025-10-13 14:17:07.054665083 +0000 UTC m=+0.057765158 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-account-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, release=1, build-date=2025-07-21T16:11:22, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true)
Oct 13 14:17:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:17:07 standalone.localdomain podman[188122]: 2025-10-13 14:17:07.206591009 +0000 UTC m=+0.096271444 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, release=1, config_id=tripleo_step4, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, container_name=swift_object_server)
Oct 13 14:17:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:17:07 standalone.localdomain podman[188150]: 2025-10-13 14:17:07.290696397 +0000 UTC m=+0.078370913 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git)
Oct 13 14:17:07 standalone.localdomain podman[188073]: 2025-10-13 14:17:07.303122259 +0000 UTC m=+0.306222344 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, release=1, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, version=17.1.9, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account)
Oct 13 14:17:07 standalone.localdomain podman[188079]: 2025-10-13 14:17:07.112257136 +0000 UTC m=+0.105111006 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-swift-container-container, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 14:17:07 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:17:07 standalone.localdomain podman[188079]: 2025-10-13 14:17:07.329755448 +0000 UTC m=+0.322609298 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-container-container, distribution-scope=public, container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, name=rhosp17/openstack-swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:17:07 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:17:07 standalone.localdomain podman[188171]: 2025-10-13 14:17:07.383381729 +0000 UTC m=+0.117073844 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, release=1, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, 
name=rhosp17/openstack-nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, architecture=x86_64, build-date=2025-07-21T15:24:10, com.redhat.component=openstack-nova-novncproxy-container, container_name=nova_vnc_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible)
Oct 13 14:17:07 standalone.localdomain podman[188122]: 2025-10-13 14:17:07.437205705 +0000 UTC m=+0.326886120 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, batch=17.1_20250721.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, vcs-type=git, container_name=swift_object_server, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64)
Oct 13 14:17:07 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:17:07 standalone.localdomain podman[188150]: 2025-10-13 14:17:07.637823467 +0000 UTC m=+0.425497973 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, architecture=x86_64, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, 
tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:17:07 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:17:07 standalone.localdomain podman[188171]: 2025-10-13 14:17:07.6908751 +0000 UTC m=+0.424567205 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-novncproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, build-date=2025-07-21T15:24:10, release=1, container_name=nova_vnc_proxy, tcib_managed=true, version=17.1.9, name=rhosp17/openstack-nova-novncproxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 
17.1 nova-novncproxy, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy)
Oct 13 14:17:07 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:17:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1273: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 3.5 KiB/s rd, 511 B/s wr, 4 op/s
Oct 13 14:17:07 standalone.localdomain runuser[188225]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:17:07 standalone.localdomain ceph-mon[29756]: pgmap v1273: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 3.5 KiB/s rd, 511 B/s wr, 4 op/s
Oct 13 14:17:08 standalone.localdomain runuser[188225]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:17:08 standalone.localdomain runuser[188288]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:17:08 standalone.localdomain haproxy[70940]: 172.21.0.2:34650 [13/Oct/2025:14:17:08.716] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 56/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 14:17:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 2400.1 total, 600.0 interval
                                                        Cumulative writes: 5858 writes, 25K keys, 5858 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                        Cumulative WAL: 5858 writes, 973 syncs, 6.02 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1072 writes, 4369 keys, 1072 commit groups, 1.0 writes per commit group, ingest: 4.33 MB, 0.01 MB/s
                                                        Interval WAL: 1072 writes, 352 syncs, 3.05 writes per sync, written: 0.00 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 13 14:17:08 standalone.localdomain haproxy[70940]: 172.21.0.2:34650 [13/Oct/2025:14:17:08.722] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/278/278 201 8100 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:09 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:17:09.052] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/33/33 200 8095 - - ---- 57/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:09 standalone.localdomain haproxy[70940]: 172.21.0.2:44496 [13/Oct/2025:14:17:09.048] cinder cinder/standalone.internalapi.localdomain 0/0/0/63/63 404 385 - - ---- 57/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/disk HTTP/1.1"
Oct 13 14:17:09 standalone.localdomain runuser[188288]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:17:09 standalone.localdomain haproxy[70940]: 172.21.0.2:44496 [13/Oct/2025:14:17:09.115] cinder cinder/standalone.internalapi.localdomain 0/0/0/35/35 200 1753 - - ---- 57/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/detail?all_tenants=1&name=disk HTTP/1.1"
Oct 13 14:17:09 standalone.localdomain runuser[188357]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:17:09 standalone.localdomain sshd[188080]: Received disconnect from 192.168.122.11 port 59050:11: disconnected by user
Oct 13 14:17:09 standalone.localdomain sshd[188080]: Disconnected from user root 192.168.122.11 port 59050
Oct 13 14:17:09 standalone.localdomain sshd[188059]: pam_unix(sshd:session): session closed for user root
Oct 13 14:17:09 standalone.localdomain systemd[1]: session-81.scope: Deactivated successfully.
Oct 13 14:17:09 standalone.localdomain systemd[1]: session-81.scope: Consumed 1.963s CPU time.
Oct 13 14:17:09 standalone.localdomain systemd-logind[45629]: Session 81 logged out. Waiting for processes to exit.
Oct 13 14:17:09 standalone.localdomain sshd[188407]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:17:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:17:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:17:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:17:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:17:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:17:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:17:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:17:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:17:09 standalone.localdomain systemd-logind[45629]: Removed session 81.
Oct 13 14:17:09 standalone.localdomain sshd[188407]: Accepted publickey for root from 192.168.122.11 port 59052 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:17:09 standalone.localdomain systemd-logind[45629]: New session 82 of user root.
Oct 13 14:17:09 standalone.localdomain systemd[1]: Started Session 82 of User root.
Oct 13 14:17:09 standalone.localdomain sshd[188407]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:09 standalone.localdomain podman[188424]: 2025-10-13 14:17:09.628567623 +0000 UTC m=+0.190758590 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-proxy-server, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, release=1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, version=17.1.9, container_name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:17:09 standalone.localdomain systemd[1]: tmp-crun.2qBF0c.mount: Deactivated successfully.
Oct 13 14:17:09 standalone.localdomain podman[188451]: 2025-10-13 14:17:09.704016825 +0000 UTC m=+0.250871411 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, architecture=x86_64, container_name=ovn_controller, io.openshift.expose-services=, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container)
Oct 13 14:17:09 standalone.localdomain podman[188421]: 2025-10-13 14:17:09.746279835 +0000 UTC m=+0.313278901 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, build-date=2025-07-21T13:58:12, name=rhosp17/openstack-placement-api, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 placement-api, container_name=placement_api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, tcib_managed=true, vcs-type=git, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, 
com.redhat.component=openstack-placement-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12)
Oct 13 14:17:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1274: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 6.5 KiB/s rd, 1.4 KiB/s wr, 8 op/s
Oct 13 14:17:09 standalone.localdomain podman[188415]: 2025-10-13 14:17:09.679715767 +0000 UTC m=+0.242339707 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, com.redhat.component=openstack-glance-api-container, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, build-date=2025-07-21T13:58:20, vcs-type=git, distribution-scope=public, tcib_managed=true, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, summary=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, io.buildah.version=1.33.12, container_name=glance_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', 
'/var/lib/glance:/var/lib/glance:shared']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:17:09 standalone.localdomain podman[188451]: 2025-10-13 14:17:09.793411036 +0000 UTC m=+0.340265592 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, distribution-scope=public, release=1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T13:28:44, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9)
Oct 13 14:17:09 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:17:09 standalone.localdomain podman[188415]: 2025-10-13 14:17:09.82280272 +0000 UTC m=+0.385426670 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, config_id=tripleo_step4, com.redhat.component=openstack-glance-api-container, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, version=17.1.9)
Oct 13 14:17:09 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:17:09 standalone.localdomain podman[188427]: 2025-10-13 14:17:09.833651454 +0000 UTC m=+0.392376405 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, com.redhat.component=openstack-nova-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., container_name=nova_metadata, io.buildah.version=1.33.12, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-api, build-date=2025-07-21T16:05:11, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, description=Red Hat OpenStack Platform 
17.1 nova-api, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, tcib_managed=true, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:17:09 standalone.localdomain podman[188435]: 2025-10-13 14:17:09.742229441 +0000 UTC m=+0.300635442 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, architecture=x86_64, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, container_name=glance_api_internal, managed_by=tripleo_ansible, release=1)
Oct 13 14:17:09 standalone.localdomain podman[188445]: 2025-10-13 14:17:09.746939505 +0000 UTC m=+0.295678648 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1)
Oct 13 14:17:09 standalone.localdomain podman[188424]: 2025-10-13 14:17:09.850737039 +0000 UTC m=+0.412927986 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, distribution-scope=public, com.redhat.component=openstack-swift-proxy-server-container, name=rhosp17/openstack-swift-proxy-server, tcib_managed=true, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, container_name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
swift-proxy-server, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, release=1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9)
Oct 13 14:17:09 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:17:09 standalone.localdomain podman[188445]: 2025-10-13 14:17:09.879805454 +0000 UTC m=+0.428544587 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:17:09 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:17:09 standalone.localdomain podman[188421]: 2025-10-13 14:17:09.911401036 +0000 UTC m=+0.478400122 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, architecture=x86_64, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., container_name=placement_api, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, 
com.redhat.component=openstack-placement-api-container, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2025-07-21T13:58:12, name=rhosp17/openstack-placement-api, summary=Red Hat OpenStack Platform 17.1 placement-api, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67)
Oct 13 14:17:09 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:17:09 standalone.localdomain podman[188427]: 2025-10-13 14:17:09.967295876 +0000 UTC m=+0.526020837 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-nova-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, vendor=Red Hat, Inc., container_name=nova_metadata, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, 
batch=17.1_20250721.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, build-date=2025-07-21T16:05:11, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:17:09 standalone.localdomain runuser[188357]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:17:10 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:17:10 standalone.localdomain podman[188435]: 2025-10-13 14:17:10.095647856 +0000 UTC m=+0.654053877 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, container_name=glance_api_internal, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.buildah.version=1.33.12, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:17:10 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:17:10 standalone.localdomain podman[188460]: 2025-10-13 14:17:09.765699043 +0000 UTC m=+0.312080753 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, 
distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, container_name=glance_api, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1)
Oct 13 14:17:10 standalone.localdomain podman[188460]: 2025-10-13 14:17:10.213824632 +0000 UTC m=+0.760206352 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api, distribution-scope=public, release=1)
Oct 13 14:17:10 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:17:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:17:10 standalone.localdomain ceph-mon[29756]: pgmap v1274: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 6.5 KiB/s rd, 1.4 KiB/s wr, 8 op/s
Oct 13 14:17:11 standalone.localdomain haproxy[70940]: 172.21.0.2:45258 [13/Oct/2025:14:17:11.264] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 56/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:11 standalone.localdomain haproxy[70940]: 172.21.0.2:45258 [13/Oct/2025:14:17:11.270] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/287/287 201 8100 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:11 standalone.localdomain haproxy[70940]: 172.17.0.2:42734 [13/Oct/2025:14:17:11.607] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/45/45 200 8095 - - ---- 58/3/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54540 [13/Oct/2025:14:17:11.603] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/55/55 404 463 - - ---- 58/1/0/0/0 0/0 "GET /v2.1/servers/test HTTP/1.1"
Oct 13 14:17:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54540 [13/Oct/2025:14:17:11.661] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/103/103 200 652 - - ---- 58/1/0/0/0 0/0 "GET /v2.1/servers?name=test HTTP/1.1"
Oct 13 14:17:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1275: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 6.1 KiB/s rd, 1.3 KiB/s wr, 8 op/s
Oct 13 14:17:11 standalone.localdomain haproxy[70940]: 172.17.0.2:52882 [13/Oct/2025:14:17:11.826] neutron neutron/standalone.internalapi.localdomain 0/0/0/89/89 200 1332 - - ---- 59/1/0/0/0 0/0 "GET /v2.0/ports?device_id=54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:17:11 standalone.localdomain haproxy[70940]: 172.17.0.2:52882 [13/Oct/2025:14:17:11.919] neutron neutron/standalone.internalapi.localdomain 0/0/0/35/35 200 261 - - ---- 59/1/0/0/0 0/0 "GET /v2.0/security-groups?id=abececb5-f6f2-4dbd-993f-ddc54effe614&fields=id&fields=name HTTP/1.1"
Oct 13 14:17:11 standalone.localdomain haproxy[70940]: 172.21.0.2:54540 [13/Oct/2025:14:17:11.770] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/195/195 200 2121 - - ---- 59/1/0/0/0 0/0 "GET /v2.1/servers/54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:17:12 standalone.localdomain haproxy[70940]: 172.21.0.2:53738 [13/Oct/2025:14:17:11.969] cinder cinder/standalone.internalapi.localdomain 0/0/0/47/47 404 385 - - ---- 60/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/disk HTTP/1.1"
Oct 13 14:17:12 standalone.localdomain haproxy[70940]: 172.21.0.2:53738 [13/Oct/2025:14:17:12.018] cinder cinder/standalone.internalapi.localdomain 0/0/0/42/42 200 1753 - - ---- 60/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/detail?all_tenants=1&name=disk HTTP/1.1"
Oct 13 14:17:12 standalone.localdomain ceph-mon[29756]: pgmap v1275: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 6.1 KiB/s rd, 1.3 KiB/s wr, 8 op/s
Oct 13 14:17:12 standalone.localdomain haproxy[70940]: 172.21.0.2:53750 [13/Oct/2025:14:17:12.143] cinder cinder/standalone.internalapi.localdomain 0/0/0/55/55 200 1750 - - ---- 61/2/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/0f364d66-0f07-4584-a6a7-fbb971c71200 HTTP/1.1"
Oct 13 14:17:12 standalone.localdomain haproxy[70940]: 172.21.0.2:53750 [13/Oct/2025:14:17:12.319] cinder cinder/standalone.internalapi.localdomain 0/0/0/4/4 300 942 - - ---- 61/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:12 standalone.localdomain haproxy[70940]: 172.21.0.2:53750 [13/Oct/2025:14:17:12.330] cinder cinder/standalone.internalapi.localdomain 0/0/0/75/75 200 576 - - ---- 61/2/0/0/0 0/0 "POST /v3/e44641a80bcb466cb3dd688e48b72d8e/attachments HTTP/1.1"
Oct 13 14:17:12 standalone.localdomain haproxy[70940]: 172.21.0.2:54540 [13/Oct/2025:14:17:12.064] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/365/365 200 557 - - ---- 61/1/0/0/0 0/0 "POST /v2.1/servers/54a46fec-332e-42f9-83ed-88e763d13f63/os-volume_attachments HTTP/1.1"
Oct 13 14:17:12 standalone.localdomain haproxy[70940]: 172.17.0.2:34294 [13/Oct/2025:14:17:12.449] cinder cinder/standalone.internalapi.localdomain 0/0/0/3/3 300 942 - - ---- 62/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:12 standalone.localdomain haproxy[70940]: 172.17.0.2:34294 [13/Oct/2025:14:17:12.464] cinder cinder/standalone.internalapi.localdomain 0/0/0/36/36 200 1870 - - ---- 62/3/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/0f364d66-0f07-4584-a6a7-fbb971c71200 HTTP/1.1"
Oct 13 14:17:12 standalone.localdomain sudo[188912]:     nova : PWD=/ ; USER=root ; COMMAND=/usr/bin/nova-rootwrap /etc/nova/rootwrap.conf privsep-helper --config-file /usr/share/nova/nova-dist.conf --config-file /etc/nova/nova.conf --privsep_context os_brick.privileged.default --privsep_sock_path /tmp/tmp0nqg7nx5/privsep.sock
Oct 13 14:17:12 standalone.localdomain systemd-logind[45629]: Existing logind session ID 25 used by new audit session, ignoring.
Oct 13 14:17:12 standalone.localdomain systemd[1]: Started Session c16 of User root.
Oct 13 14:17:12 standalone.localdomain sudo[188912]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42436)
Oct 13 14:17:12 standalone.localdomain sshd[188567]: Received disconnect from 192.168.122.11 port 59052:11: disconnected by user
Oct 13 14:17:12 standalone.localdomain sshd[188567]: Disconnected from user root 192.168.122.11 port 59052
Oct 13 14:17:12 standalone.localdomain sshd[188407]: pam_unix(sshd:session): session closed for user root
Oct 13 14:17:12 standalone.localdomain systemd[1]: session-82.scope: Deactivated successfully.
Oct 13 14:17:12 standalone.localdomain systemd[1]: session-82.scope: Consumed 1.997s CPU time.
Oct 13 14:17:12 standalone.localdomain systemd-logind[45629]: Session 82 logged out. Waiting for processes to exit.
Oct 13 14:17:12 standalone.localdomain systemd-logind[45629]: Removed session 82.
Oct 13 14:17:12 standalone.localdomain sshd[188915]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:17:12 standalone.localdomain sshd[188915]: Accepted publickey for root from 192.168.122.11 port 43770 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:17:12 standalone.localdomain systemd-logind[45629]: New session 83 of user root.
Oct 13 14:17:12 standalone.localdomain systemd[1]: Started Session 83 of User root.
Oct 13 14:17:12 standalone.localdomain sshd[188915]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:13 standalone.localdomain sudo[188912]: pam_unix(sudo:session): session closed for user root
Oct 13 14:17:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1276: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 6.1 KiB/s rd, 1.3 KiB/s wr, 8 op/s
Oct 13 14:17:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 13 14:17:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1647659311' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 14:17:14 standalone.localdomain haproxy[70940]: 172.17.0.2:34294 [13/Oct/2025:14:17:13.459] cinder cinder/standalone.internalapi.localdomain 0/0/0/658/658 200 1073 - - ---- 59/2/0/0/0 0/0 "PUT /v3/e44641a80bcb466cb3dd688e48b72d8e/attachments/a59f404c-c2bc-4084-9b37-4a4c77ab7cd0 HTTP/1.1"
Oct 13 14:17:14 standalone.localdomain haproxy[70940]: 172.17.0.2:34294 [13/Oct/2025:14:17:14.314] cinder cinder/standalone.internalapi.localdomain 0/0/0/43/43 204 266 - - ---- 59/2/0/0/0 0/0 "POST /v3/e44641a80bcb466cb3dd688e48b72d8e/attachments/a59f404c-c2bc-4084-9b37-4a4c77ab7cd0/action HTTP/1.1"
Oct 13 14:17:14 standalone.localdomain haproxy[70940]: 172.21.0.2:45270 [13/Oct/2025:14:17:14.624] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 60/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:17:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:17:14 standalone.localdomain podman[189009]: 2025-10-13 14:17:14.813182877 +0000 UTC m=+0.083649486 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, container_name=keystone_cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, com.redhat.component=openstack-keystone-container, build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO 
Team, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, name=rhosp17/openstack-keystone, tcib_managed=true, release=1, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:17:14 standalone.localdomain ceph-mon[29756]: pgmap v1276: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 6.1 KiB/s rd, 1.3 KiB/s wr, 8 op/s
Oct 13 14:17:14 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1647659311' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 14:17:14 standalone.localdomain podman[189009]: 2025-10-13 14:17:14.863764193 +0000 UTC m=+0.134230762 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, name=rhosp17/openstack-keystone, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:27:18, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-keystone-container, managed_by=tripleo_ansible, container_name=keystone_cron, config_id=tripleo_step3, architecture=x86_64, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:17:14 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:17:14 standalone.localdomain haproxy[70940]: 172.21.0.2:45270 [13/Oct/2025:14:17:14.630] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/307/307 201 8100 - - ---- 60/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:14 standalone.localdomain systemd[1]: tmp-crun.EAVMon.mount: Deactivated successfully.
Oct 13 14:17:14 standalone.localdomain podman[189010]: 2025-10-13 14:17:14.965276936 +0000 UTC m=+0.235997012 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, build-date=2025-07-21T16:05:11, description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-nova-api-container, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, container_name=nova_api, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, release=1)
Oct 13 14:17:15 standalone.localdomain podman[189010]: 2025-10-13 14:17:15.008909129 +0000 UTC m=+0.279629265 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, architecture=x86_64, build-date=2025-07-21T16:05:11, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-nova-api-container, description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=nova_api, distribution-scope=public, 
name=rhosp17/openstack-nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, tcib_managed=true)
Oct 13 14:17:15 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:17:14.981] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/28/28 200 8095 - - ---- 61/3/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:15 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:17:15 standalone.localdomain haproxy[70940]: 172.21.0.2:53764 [13/Oct/2025:14:17:14.978] cinder cinder/standalone.internalapi.localdomain 0/0/0/65/65 404 392 - - ---- 61/3/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/boot-volume HTTP/1.1"
Oct 13 14:17:15 standalone.localdomain haproxy[70940]: 172.21.0.2:53764 [13/Oct/2025:14:17:15.045] cinder cinder/standalone.internalapi.localdomain 0/0/0/39/39 200 316 - - ---- 61/3/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/detail?all_tenants=1&name=boot-volume HTTP/1.1"
Oct 13 14:17:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:17:15 standalone.localdomain haproxy[70940]: 172.17.0.2:52894 [13/Oct/2025:14:17:15.248] neutron neutron/standalone.internalapi.localdomain 0/0/0/48/48 200 254 - - ---- 60/2/0/0/0 0/0 "GET /v2.0/ports?device_id=54a46fec-332e-42f9-83ed-88e763d13f63&fields=binding%3Ahost_id&fields=binding%3Avif_type HTTP/1.1"
Oct 13 14:17:15 standalone.localdomain sshd[188918]: Received disconnect from 192.168.122.11 port 43770:11: disconnected by user
Oct 13 14:17:15 standalone.localdomain sshd[188918]: Disconnected from user root 192.168.122.11 port 43770
Oct 13 14:17:15 standalone.localdomain sshd[188915]: pam_unix(sshd:session): session closed for user root
Oct 13 14:17:15 standalone.localdomain systemd[1]: session-83.scope: Deactivated successfully.
Oct 13 14:17:15 standalone.localdomain systemd[1]: session-83.scope: Consumed 1.985s CPU time.
Oct 13 14:17:15 standalone.localdomain systemd-logind[45629]: Session 83 logged out. Waiting for processes to exit.
Oct 13 14:17:15 standalone.localdomain systemd-logind[45629]: Removed session 83.
Oct 13 14:17:15 standalone.localdomain sshd[189064]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:17:15 standalone.localdomain haproxy[70940]: 172.17.0.2:52894 [13/Oct/2025:14:17:15.347] neutron neutron/standalone.internalapi.localdomain 0/0/0/73/73 200 1332 - - ---- 60/2/0/0/0 0/0 "GET /v2.0/ports?tenant_id=e44641a80bcb466cb3dd688e48b72d8e&device_id=54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:17:15 standalone.localdomain haproxy[70940]: 172.17.0.2:52894 [13/Oct/2025:14:17:15.433] neutron neutron/standalone.internalapi.localdomain 0/0/0/57/57 200 901 - - ---- 60/2/0/0/0 0/0 "GET /v2.0/networks?id=0c455abd-28d4-47e7-a254-e50de0526def HTTP/1.1"
Oct 13 14:17:15 standalone.localdomain sshd[189064]: Accepted publickey for root from 192.168.122.11 port 43786 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:17:15 standalone.localdomain systemd-logind[45629]: New session 84 of user root.
Oct 13 14:17:15 standalone.localdomain systemd[1]: Started Session 84 of User root.
Oct 13 14:17:15 standalone.localdomain sshd[189064]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:15 standalone.localdomain haproxy[70940]: 172.17.0.2:52894 [13/Oct/2025:14:17:15.493] neutron neutron/standalone.internalapi.localdomain 0/0/0/56/56 200 1062 - - ---- 60/2/0/0/0 0/0 "GET /v2.0/floatingips?fixed_ip_address=192.168.0.238&port_id=8a49767c-fb09-4185-95da-4261d8043fad HTTP/1.1"
Oct 13 14:17:15 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Check health
Oct 13 14:17:15 standalone.localdomain haproxy[70940]: 172.17.0.2:52894 [13/Oct/2025:14:17:15.555] neutron neutron/standalone.internalapi.localdomain 0/0/0/32/32 200 810 - - ---- 60/2/0/0/0 0/0 "GET /v2.0/subnets?id=e1cc60b1-626d-46b0-8458-47640298fa14 HTTP/1.1"
Oct 13 14:17:15 standalone.localdomain haproxy[70940]: 172.17.0.2:52894 [13/Oct/2025:14:17:15.592] neutron neutron/standalone.internalapi.localdomain 0/0/0/48/48 200 1352 - - ---- 60/2/0/0/0 0/0 "GET /v2.0/ports?network_id=0c455abd-28d4-47e7-a254-e50de0526def&device_owner=network%3Adhcp HTTP/1.1"
Oct 13 14:17:15 standalone.localdomain haproxy[70940]: 172.17.0.2:52894 [13/Oct/2025:14:17:15.645] neutron neutron/standalone.internalapi.localdomain 0/0/0/67/67 200 187 - - ---- 60/2/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=segments HTTP/1.1"
Oct 13 14:17:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1277: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 9.5 KiB/s rd, 847 B/s wr, 8 op/s
Oct 13 14:17:15 standalone.localdomain haproxy[70940]: 172.17.0.2:52894 [13/Oct/2025:14:17:15.718] neutron neutron/standalone.internalapi.localdomain 0/0/0/67/67 200 252 - - ---- 60/2/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=provider%3Aphysical_network&fields=provider%3Anetwork_type HTTP/1.1"
Oct 13 14:17:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:17:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/253913538' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:17:16 standalone.localdomain ceph-mon[29756]: pgmap v1277: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 9.5 KiB/s rd, 847 B/s wr, 8 op/s
Oct 13 14:17:16 standalone.localdomain haproxy[70940]: 172.17.0.2:39616 [13/Oct/2025:14:17:16.698] placement placement/standalone.internalapi.localdomain 0/0/0/23/23 200 397 - - ---- 61/1/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/allocations HTTP/1.1"
Oct 13 14:17:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:17:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3494441767' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:17:17 standalone.localdomain haproxy[70940]: 172.21.0.2:45276 [13/Oct/2025:14:17:17.346] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 62/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:17 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/253913538' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:17:17 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3494441767' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:17:17 standalone.localdomain haproxy[70940]: 172.21.0.2:45276 [13/Oct/2025:14:17:17.351] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/300/300 201 8100 - - ---- 62/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:17 standalone.localdomain haproxy[70940]: 172.21.0.2:51616 [13/Oct/2025:14:17:17.680] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/2/2 300 1507 - - ---- 63/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:17:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:17:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:17:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:17:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:17:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:17:17 standalone.localdomain haproxy[70940]: 172.17.0.2:42744 [13/Oct/2025:14:17:17.698] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/52/52 200 8095 - - ---- 64/4/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:17 standalone.localdomain haproxy[70940]: 172.21.0.2:51616 [13/Oct/2025:14:17:17.690] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/73/73 404 340 - - ---- 64/1/0/0/0 0/0 "GET /v2/images/cirros HTTP/1.1"
Oct 13 14:17:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1278: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 8.6 KiB/s rd, 767 B/s wr, 7 op/s
Oct 13 14:17:17 standalone.localdomain haproxy[70940]: 172.21.0.2:51616 [13/Oct/2025:14:17:17.767] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/37/37 200 1199 - - ---- 64/1/0/0/0 0/0 "GET /v2/images?name=cirros HTTP/1.1"
Oct 13 14:17:17 standalone.localdomain haproxy[70940]: 172.17.0.2:32864 [13/Oct/2025:14:17:17.859] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/24/24 200 1388 - - ---- 66/1/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:17:17 standalone.localdomain haproxy[70940]: 172.17.0.2:32864 [13/Oct/2025:14:17:17.884] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/8/8 200 6258 - - ---- 66/1/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:17:17 standalone.localdomain haproxy[70940]: 172.17.0.2:32864 [13/Oct/2025:14:17:17.897] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/8/8 200 6258 - - ---- 66/1/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:17:17 standalone.localdomain haproxy[70940]: 172.17.0.2:32864 [13/Oct/2025:14:17:17.908] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/42/42 200 1388 - - ---- 66/1/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:17:17 standalone.localdomain haproxy[70940]: 172.17.0.2:32864 [13/Oct/2025:14:17:17.952] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/6/6 200 6258 - - ---- 66/1/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:17:17 standalone.localdomain haproxy[70940]: 172.17.0.2:32864 [13/Oct/2025:14:17:17.960] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/5/5 200 6258 - - ---- 66/1/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.17.0.2:32864 [13/Oct/2025:14:17:17.986] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/30/30 200 1388 - - ---- 66/1/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.17.0.2:32864 [13/Oct/2025:14:17:18.016] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/7/7 200 6258 - - ---- 66/1/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:17:18 standalone.localdomain systemd[1]: tmp-crun.9jo024.mount: Deactivated successfully.
Oct 13 14:17:18 standalone.localdomain podman[189156]: 2025-10-13 14:17:18.129710858 +0000 UTC m=+0.390324952 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_id=tripleo_step3, container_name=barbican_worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, name=rhosp17/openstack-barbican-worker, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-worker, io.buildah.version=1.33.12, build-date=2025-07-21T15:36:22, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-worker, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-barbican-worker-container, version=17.1.9, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, vendor=Red Hat, Inc., tcib_managed=true, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.21.0.2:53766 [13/Oct/2025:14:17:17.811] cinder cinder/standalone.internalapi.localdomain 0/0/0/404/404 202 1109 - - ---- 66/3/0/0/0 0/0 "POST /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes HTTP/1.1"
Oct 13 14:17:18 standalone.localdomain podman[189156]: 2025-10-13 14:17:18.225867556 +0000 UTC m=+0.486481660 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, name=rhosp17/openstack-barbican-worker, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9, com.redhat.component=openstack-barbican-worker-container, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, 
build-date=2025-07-21T15:36:22, distribution-scope=public, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=barbican_worker, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.17.0.100:49035 [13/Oct/2025:14:16:48.294] mysql mysql/standalone.internalapi.localdomain 1/0/29939 24544 -- 66/54/53/53/0 0/0
Oct 13 14:17:18 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:17:18 standalone.localdomain podman[189154]: 2025-10-13 14:17:18.246528762 +0000 UTC m=+0.512640616 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-07-21T16:18:19, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, name=rhosp17/openstack-barbican-keystone-listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, version=17.1.9, com.redhat.component=openstack-barbican-keystone-listener-container, container_name=barbican_keystone_listener, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vendor=Red Hat, Inc., config_id=tripleo_step3)
Oct 13 14:17:18 standalone.localdomain podman[189153]: 2025-10-13 14:17:18.051612095 +0000 UTC m=+0.316891032 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=barbican_api, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-barbican-api, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-barbican-api-container, release=1, summary=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, batch=17.1_20250721.1)
Oct 13 14:17:18 standalone.localdomain podman[189155]: 2025-10-13 14:17:18.164716275 +0000 UTC m=+0.427432643 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-neutron-sriov-agent-container, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T16:03:34, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, release=1, container_name=neutron_sriov_agent, name=rhosp17/openstack-neutron-sriov-agent)
Oct 13 14:17:18 standalone.localdomain podman[189169]: 2025-10-13 14:17:18.187053712 +0000 UTC m=+0.438852175 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, name=rhosp17/openstack-nova-api, tcib_managed=true, build-date=2025-07-21T16:05:11, release=1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, container_name=nova_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, com.redhat.component=openstack-nova-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, managed_by=tripleo_ansible)
Oct 13 14:17:18 standalone.localdomain podman[189155]: 2025-10-13 14:17:18.303517926 +0000 UTC m=+0.566234284 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, vcs-type=git, build-date=2025-07-21T16:03:34, maintainer=OpenStack TripleO Team, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, container_name=neutron_sriov_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-sriov-agent-container, 
name=rhosp17/openstack-neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, release=1, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible)
Oct 13 14:17:18 standalone.localdomain podman[189169]: 2025-10-13 14:17:18.318161176 +0000 UTC m=+0.569959679 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, architecture=x86_64, com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:05:11, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-api, container_name=nova_api_cron, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1)
Oct 13 14:17:18 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:17:18 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:17:18 standalone.localdomain podman[189153]: 2025-10-13 14:17:18.342481345 +0000 UTC m=+0.607760312 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-api, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=barbican_api, release=1, build-date=2025-07-21T15:22:44, com.redhat.component=openstack-barbican-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, distribution-scope=public, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat 
OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3)
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.17.0.100:55637 [13/Oct/2025:14:16:48.445] mysql mysql/standalone.internalapi.localdomain 1/0/29907 112414 -- 66/54/53/53/0 0/0
Oct 13 14:17:18 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:17:18 standalone.localdomain podman[189154]: 2025-10-13 14:17:18.371176298 +0000 UTC m=+0.637288102 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, version=17.1.9, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:18:19, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, architecture=x86_64, name=rhosp17/openstack-barbican-keystone-listener, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, com.redhat.component=openstack-barbican-keystone-listener-container, container_name=barbican_keystone_listener, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, distribution-scope=public)
Oct 13 14:17:18 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.17.0.2:32880 [13/Oct/2025:14:17:18.398] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/26/26 200 1388 - - ---- 64/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:17:18 standalone.localdomain ceph-mon[29756]: pgmap v1278: 177 pgs: 177 active+clean; 169 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 8.6 KiB/s rd, 767 B/s wr, 7 op/s
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.17.0.2:32880 [13/Oct/2025:14:17:18.426] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/4/4 200 6258 - - ---- 64/2/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.17.0.2:32880 [13/Oct/2025:14:17:18.434] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/16/16 200 1388 - - ---- 64/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.17.0.2:32880 [13/Oct/2025:14:17:18.452] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/3/3 200 6258 - - ---- 64/2/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.17.0.2:32880 [13/Oct/2025:14:17:18.459] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/2/2 200 6258 - - ---- 64/2/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:17:18 standalone.localdomain podman[189152]: 2025-10-13 14:17:18.476907351 +0000 UTC m=+0.743662644 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.component=openstack-neutron-dhcp-agent-container, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, container_name=neutron_dhcp, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T16:28:54, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, release=1)
Oct 13 14:17:18 standalone.localdomain podman[189152]: 2025-10-13 14:17:18.527837088 +0000 UTC m=+0.794592351 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, release=1, tcib_managed=true, vendor=Red Hat, Inc., container_name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-dhcp-agent, architecture=x86_64, com.redhat.component=openstack-neutron-dhcp-agent-container, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.17.0.2:32880 [13/Oct/2025:14:17:18.505] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/26/26 200 1388 - - ---- 64/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:17:18 standalone.localdomain sshd[189067]: Received disconnect from 192.168.122.11 port 43786:11: disconnected by user
Oct 13 14:17:18 standalone.localdomain sshd[189067]: Disconnected from user root 192.168.122.11 port 43786
Oct 13 14:17:18 standalone.localdomain sshd[189064]: pam_unix(sshd:session): session closed for user root
Oct 13 14:17:18 standalone.localdomain systemd[1]: session-84.scope: Deactivated successfully.
Oct 13 14:17:18 standalone.localdomain systemd[1]: session-84.scope: Consumed 2.081s CPU time.
Oct 13 14:17:18 standalone.localdomain systemd-logind[45629]: Session 84 logged out. Waiting for processes to exit.
Oct 13 14:17:18 standalone.localdomain systemd-logind[45629]: Removed session 84.
Oct 13 14:17:18 standalone.localdomain sshd[189312]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:17:18 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:17:18 standalone.localdomain sshd[189312]: Accepted publickey for root from 192.168.122.11 port 43800 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:17:18 standalone.localdomain systemd-logind[45629]: New session 85 of user root.
Oct 13 14:17:18 standalone.localdomain systemd[1]: Started Session 85 of User root.
Oct 13 14:17:18 standalone.localdomain sshd[189312]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.17.0.2:32880 [13/Oct/2025:14:17:18.532] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/205/271 200 21692646 - - ---- 64/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b/file HTTP/1.1"
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.17.0.2:32880 [13/Oct/2025:14:17:18.837] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/20/20 200 1388 - - ---- 64/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.17.0.2:32880 [13/Oct/2025:14:17:18.859] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/5/5 200 6258 - - ---- 64/2/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.17.0.2:32880 [13/Oct/2025:14:17:18.938] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/15/15 200 1388 - - ---- 64/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:17:18 standalone.localdomain haproxy[70940]: 172.17.0.2:32880 [13/Oct/2025:14:17:18.956] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/3/3 200 6258 - - ---- 64/2/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:17:19 standalone.localdomain systemd[1]: tmp-crun.gXXZZ3.mount: Deactivated successfully.
Oct 13 14:17:19 standalone.localdomain haproxy[70940]: 172.17.0.2:32880 [13/Oct/2025:14:17:19.247] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/25/25 200 1388 - - ---- 64/2/0/0/0 0/0 "GET /v2/images/9de40855-8304-4f15-8bc5-8a8a2b61b79b HTTP/1.1"
Oct 13 14:17:19 standalone.localdomain haproxy[70940]: 172.17.0.2:32880 [13/Oct/2025:14:17:19.273] glance_api_internal glance_api_internal/standalone.internalapi.localdomain 0/0/0/6/6 200 6258 - - ---- 64/2/0/0/0 0/0 "GET /v2/schemas/image HTTP/1.1"
Oct 13 14:17:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1279: 177 pgs: 177 active+clean; 173 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 1.8 MiB/s rd, 58 KiB/s wr, 27 op/s
Oct 13 14:17:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:17:20 standalone.localdomain haproxy[70940]: 172.21.0.2:58262 [13/Oct/2025:14:17:20.530] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/5/5 300 515 - - ---- 65/4/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:20 standalone.localdomain runuser[189590]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:17:20 standalone.localdomain ceph-mon[29756]: pgmap v1279: 177 pgs: 177 active+clean; 173 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 1.8 MiB/s rd, 58 KiB/s wr, 27 op/s
Oct 13 14:17:20 standalone.localdomain haproxy[70940]: 172.21.0.2:58262 [13/Oct/2025:14:17:20.538] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/355/355 201 8100 - - ---- 65/4/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:20 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:17:20.919] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/22/22 200 8095 - - ---- 66/4/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:20 standalone.localdomain haproxy[70940]: 172.21.0.2:52130 [13/Oct/2025:14:17:20.916] cinder cinder/standalone.internalapi.localdomain 0/0/0/52/52 404 392 - - ---- 66/3/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/boot-volume HTTP/1.1"
Oct 13 14:17:21 standalone.localdomain haproxy[70940]: 172.21.0.2:52130 [13/Oct/2025:14:17:20.970] cinder cinder/standalone.internalapi.localdomain 0/0/0/36/36 200 1404 - - ---- 66/3/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/detail?all_tenants=1&name=boot-volume HTTP/1.1"
Oct 13 14:17:21 standalone.localdomain sshd[189367]: Received disconnect from 192.168.122.11 port 43800:11: disconnected by user
Oct 13 14:17:21 standalone.localdomain sshd[189367]: Disconnected from user root 192.168.122.11 port 43800
Oct 13 14:17:21 standalone.localdomain sshd[189312]: pam_unix(sshd:session): session closed for user root
Oct 13 14:17:21 standalone.localdomain systemd[1]: session-85.scope: Deactivated successfully.
Oct 13 14:17:21 standalone.localdomain systemd[1]: session-85.scope: Consumed 2.002s CPU time.
Oct 13 14:17:21 standalone.localdomain systemd-logind[45629]: Session 85 logged out. Waiting for processes to exit.
Oct 13 14:17:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:17:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:17:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:17:21 standalone.localdomain systemd-logind[45629]: Removed session 85.
Oct 13 14:17:21 standalone.localdomain runuser[189590]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:17:21 standalone.localdomain podman[189784]: 2025-10-13 14:17:21.349914615 +0000 UTC m=+0.073702049 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, container_name=ovn_cluster_northd, build-date=2025-07-21T13:30:04, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-northd-container, description=Red Hat OpenStack Platform 17.1 ovn-northd, name=rhosp17/openstack-ovn-northd, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-northd, config_id=ovn_cluster_northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:17:21 standalone.localdomain podman[189782]: 2025-10-13 14:17:21.424630974 +0000 UTC m=+0.148025946 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step5, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute)
Oct 13 14:17:21 standalone.localdomain podman[189784]: 2025-10-13 14:17:21.435828949 +0000 UTC m=+0.159616393 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-type=git, build-date=2025-07-21T13:30:04, release=1, version=17.1.9, name=rhosp17/openstack-ovn-northd, com.redhat.component=openstack-ovn-northd-container, container_name=ovn_cluster_northd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, 
managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, vendor=Red Hat, Inc., config_id=ovn_cluster_northd, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:17:21 standalone.localdomain podman[189786]: 2025-10-13 14:17:21.394251199 +0000 UTC m=+0.118507937 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, architecture=x86_64, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, com.redhat.component=openstack-mariadb-container, container_name=clustercheck, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', 
'/var/lib/mysql:/var/lib/mysql']}, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, release=1, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T12:58:45)
Oct 13 14:17:21 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:17:21 standalone.localdomain podman[189782]: 2025-10-13 14:17:21.461829618 +0000 UTC m=+0.185224510 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, version=17.1.9, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:17:21 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:17:21 standalone.localdomain podman[189786]: 2025-10-13 14:17:21.477840421 +0000 UTC m=+0.202097129 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, tcib_managed=true, container_name=clustercheck, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-mariadb, build-date=2025-07-21T12:58:45, config_id=tripleo_step2, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, batch=17.1_20250721.1, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:17:21 standalone.localdomain runuser[189918]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:17:21 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:17:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1280: 177 pgs: 177 active+clean; 173 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 1.8 MiB/s rd, 57 KiB/s wr, 23 op/s
Oct 13 14:17:22 standalone.localdomain runuser[189918]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:17:22 standalone.localdomain runuser[189972]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:17:22 standalone.localdomain systemd[1]: tmp-crun.llMVjY.mount: Deactivated successfully.
Oct 13 14:17:22 standalone.localdomain runuser[189972]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:17:22 standalone.localdomain ceph-mon[29756]: pgmap v1280: 177 pgs: 177 active+clean; 173 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 1.8 MiB/s rd, 57 KiB/s wr, 23 op/s
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:17:23
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'vms', 'images', 'volumes', 'manila_data', '.mgr', 'manila_metadata']
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:17:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1281: 177 pgs: 177 active+clean; 173 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 1.8 MiB/s rd, 57 KiB/s wr, 23 op/s
Oct 13 14:17:24 standalone.localdomain ceph-mon[29756]: pgmap v1281: 177 pgs: 177 active+clean; 173 MiB data, 187 MiB used, 6.8 GiB / 7.0 GiB avail; 1.8 MiB/s rd, 57 KiB/s wr, 23 op/s
Oct 13 14:17:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:17:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1282: 177 pgs: 177 active+clean; 227 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 73 op/s
Oct 13 14:17:26 standalone.localdomain sshd[190064]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:17:26 standalone.localdomain sshd[190064]: Accepted publickey for root from 192.168.122.11 port 49710 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:17:26 standalone.localdomain systemd-logind[45629]: New session 86 of user root.
Oct 13 14:17:26 standalone.localdomain systemd[1]: Started Session 86 of User root.
Oct 13 14:17:26 standalone.localdomain ceph-mon[29756]: pgmap v1282: 177 pgs: 177 active+clean; 227 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 73 op/s
Oct 13 14:17:26 standalone.localdomain sshd[190064]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:17:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:17:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:17:26 standalone.localdomain podman[190090]: 2025-10-13 14:17:26.807802827 +0000 UTC m=+0.080220968 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, build-date=2025-07-21T15:58:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-cinder-api-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_api_cron, description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, tcib_managed=true, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:17:26 standalone.localdomain podman[190090]: 2025-10-13 14:17:26.841819445 +0000 UTC m=+0.114237576 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cinder-api, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-cinder-api-container, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, build-date=2025-07-21T15:58:55, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_api_cron, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-cinder-api, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']})
Oct 13 14:17:26 standalone.localdomain systemd[1]: tmp-crun.Ivj0YA.mount: Deactivated successfully.
Oct 13 14:17:26 standalone.localdomain podman[190091]: 2025-10-13 14:17:26.867749173 +0000 UTC m=+0.135538153 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-cinder-scheduler-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, release=1, build-date=2025-07-21T16:10:12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, container_name=cinder_scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.buildah.version=1.33.12, name=rhosp17/openstack-cinder-scheduler, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1)
Oct 13 14:17:26 standalone.localdomain podman[190089]: 2025-10-13 14:17:26.929464301 +0000 UTC m=+0.202262324 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, 
batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1, distribution-scope=public)
Oct 13 14:17:26 standalone.localdomain systemd[1]: tmp-crun.k01ZKv.mount: Deactivated successfully.
Oct 13 14:17:26 standalone.localdomain podman[190089]: 2025-10-13 14:17:26.939479489 +0000 UTC m=+0.212277502 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, 
name=rhosp17/openstack-iscsid, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1)
Oct 13 14:17:26 standalone.localdomain podman[190091]: 2025-10-13 14:17:26.941939365 +0000 UTC m=+0.209728375 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, name=rhosp17/openstack-cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, container_name=cinder_scheduler, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, vendor=Red Hat, Inc., 
config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-scheduler-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:10:12, batch=17.1_20250721.1)
Oct 13 14:17:26 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:17:27 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:17:27 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:17:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1283: 177 pgs: 177 active+clean; 227 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 69 op/s
Oct 13 14:17:28 standalone.localdomain haproxy[70940]: 172.21.0.2:58266 [13/Oct/2025:14:17:28.180] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 57/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:17:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:17:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:17:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:17:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006617287828029034 of space, bias 1.0, pg target 0.6617287828029034 quantized to 32 (current 32)
Oct 13 14:17:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:17:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.006013138609715243 of space, bias 1.0, pg target 0.6013138609715243 quantized to 32 (current 32)
Oct 13 14:17:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:17:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:17:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:17:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:17:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:17:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:17:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:17:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:17:28 standalone.localdomain haproxy[70940]: 172.21.0.2:58266 [13/Oct/2025:14:17:28.185] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/328/328 201 8100 - - ---- 57/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:28 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:17:28.568] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/30/30 200 8095 - - ---- 58/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:28 standalone.localdomain haproxy[70940]: 172.21.0.2:52136 [13/Oct/2025:14:17:28.562] cinder cinder/standalone.internalapi.localdomain 0/0/0/70/70 404 392 - - ---- 58/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/boot-volume HTTP/1.1"
Oct 13 14:17:28 standalone.localdomain haproxy[70940]: 172.21.0.2:52136 [13/Oct/2025:14:17:28.638] cinder cinder/standalone.internalapi.localdomain 0/0/0/44/44 200 1760 - - ---- 58/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/detail?all_tenants=1&name=boot-volume HTTP/1.1"
Oct 13 14:17:28 standalone.localdomain ceph-mon[29756]: pgmap v1283: 177 pgs: 177 active+clean; 227 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 69 op/s
Oct 13 14:17:28 standalone.localdomain sshd[190075]: Received disconnect from 192.168.122.11 port 49710:11: disconnected by user
Oct 13 14:17:28 standalone.localdomain sshd[190075]: Disconnected from user root 192.168.122.11 port 49710
Oct 13 14:17:28 standalone.localdomain sshd[190064]: pam_unix(sshd:session): session closed for user root
Oct 13 14:17:28 standalone.localdomain systemd[1]: session-86.scope: Deactivated successfully.
Oct 13 14:17:28 standalone.localdomain systemd[1]: session-86.scope: Consumed 1.998s CPU time.
Oct 13 14:17:28 standalone.localdomain systemd-logind[45629]: Session 86 logged out. Waiting for processes to exit.
Oct 13 14:17:28 standalone.localdomain systemd-logind[45629]: Removed session 86.
Oct 13 14:17:28 standalone.localdomain sshd[190176]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:17:29 standalone.localdomain sshd[190176]: Accepted publickey for root from 192.168.122.11 port 49718 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:17:29 standalone.localdomain systemd-logind[45629]: New session 87 of user root.
Oct 13 14:17:29 standalone.localdomain systemd[1]: Started Session 87 of User root.
Oct 13 14:17:29 standalone.localdomain sshd[190176]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1284: 177 pgs: 177 active+clean; 227 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 70 op/s
Oct 13 14:17:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:17:30 standalone.localdomain podman[190291]: 2025-10-13 14:17:30.47147019 +0000 UTC m=+0.072412489 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, build-date=2025-07-21T15:22:36, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, com.redhat.component=openstack-manila-share-container, version=17.1.9, batch=17.1_20250721.1, release=1, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-manila-share, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, vendor=Red Hat, Inc.)
Oct 13 14:17:30 standalone.localdomain podman[190291]: 2025-10-13 14:17:30.503882338 +0000 UTC m=+0.104824687 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-manila-share-container, name=rhosp17/openstack-manila-share, summary=Red Hat OpenStack Platform 17.1 manila-share, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, build-date=2025-07-21T15:22:36, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, release=1)
Oct 13 14:17:30 standalone.localdomain haproxy[70940]: 172.21.0.2:58696 [13/Oct/2025:14:17:30.776] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 56/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:30 standalone.localdomain ceph-mon[29756]: pgmap v1284: 177 pgs: 177 active+clean; 227 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 70 op/s
Oct 13 14:17:31 standalone.localdomain haproxy[70940]: 172.21.0.2:58696 [13/Oct/2025:14:17:30.780] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/333/333 201 8100 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:31 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:17:31.156] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/31/31 200 8095 - - ---- 57/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:31 standalone.localdomain haproxy[70940]: 172.21.0.2:32842 [13/Oct/2025:14:17:31.151] cinder cinder/standalone.internalapi.localdomain 0/0/0/67/67 404 392 - - ---- 57/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/boot-volume HTTP/1.1"
Oct 13 14:17:31 standalone.localdomain haproxy[70940]: 172.21.0.2:32842 [13/Oct/2025:14:17:31.222] cinder cinder/standalone.internalapi.localdomain 0/0/0/30/30 200 1760 - - ---- 57/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/detail?all_tenants=1&name=boot-volume HTTP/1.1"
Oct 13 14:17:31 standalone.localdomain sshd[190179]: Received disconnect from 192.168.122.11 port 49718:11: disconnected by user
Oct 13 14:17:31 standalone.localdomain sshd[190179]: Disconnected from user root 192.168.122.11 port 49718
Oct 13 14:17:31 standalone.localdomain sshd[190176]: pam_unix(sshd:session): session closed for user root
Oct 13 14:17:31 standalone.localdomain systemd[1]: session-87.scope: Deactivated successfully.
Oct 13 14:17:31 standalone.localdomain systemd[1]: session-87.scope: Consumed 1.891s CPU time.
Oct 13 14:17:31 standalone.localdomain systemd-logind[45629]: Session 87 logged out. Waiting for processes to exit.
Oct 13 14:17:31 standalone.localdomain systemd-logind[45629]: Removed session 87.
Oct 13 14:17:31 standalone.localdomain sshd[190519]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:17:31 standalone.localdomain sshd[190519]: Accepted publickey for root from 192.168.122.11 port 46702 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:17:31 standalone.localdomain systemd-logind[45629]: New session 88 of user root.
Oct 13 14:17:31 standalone.localdomain systemd[1]: Started Session 88 of User root.
Oct 13 14:17:31 standalone.localdomain sshd[190519]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1285: 177 pgs: 177 active+clean; 227 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Oct 13 14:17:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:17:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:17:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:17:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:17:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:17:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:17:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:17:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:17:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:17:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:17:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:17:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:17:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:17:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:17:32 standalone.localdomain systemd[1]: tmp-crun.9emuz3.mount: Deactivated successfully.
Oct 13 14:17:32 standalone.localdomain podman[190548]: 2025-10-13 14:17:32.867094314 +0000 UTC m=+0.123360116 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, name=rhosp17/openstack-manila-scheduler, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=manila_scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, build-date=2025-07-21T15:56:28, description=Red Hat OpenStack Platform 17.1 manila-scheduler, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, com.redhat.component=openstack-manila-scheduler-container, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64)
Oct 13 14:17:32 standalone.localdomain ceph-mon[29756]: pgmap v1285: 177 pgs: 177 active+clean; 227 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Oct 13 14:17:32 standalone.localdomain podman[190577]: 2025-10-13 14:17:32.951908195 +0000 UTC m=+0.186099998 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, name=rhosp17/openstack-horizon, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=horizon, description=Red Hat OpenStack Platform 17.1 horizon, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-horizon-container, config_id=tripleo_step3, tcib_managed=true, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, architecture=x86_64, distribution-scope=public, build-date=2025-07-21T13:58:15, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, vendor=Red Hat, Inc.)
Oct 13 14:17:32 standalone.localdomain podman[190547]: 2025-10-13 14:17:32.96019552 +0000 UTC m=+0.218172434 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, container_name=keystone, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, distribution-scope=public, build-date=2025-07-21T13:27:18, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, 
vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container)
Oct 13 14:17:32 standalone.localdomain podman[190570]: 2025-10-13 14:17:32.970760424 +0000 UTC m=+0.207883427 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, version=17.1.9, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack 
osp-17.1, release=1, container_name=heat_api_cron, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container)
Oct 13 14:17:32 standalone.localdomain podman[190570]: 2025-10-13 14:17:32.97873508 +0000 UTC m=+0.215858083 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, container_name=heat_api_cron, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T15:56:26, 
config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9)
Oct 13 14:17:32 standalone.localdomain podman[190589]: 2025-10-13 14:17:32.927758332 +0000 UTC m=+0.154958919 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-conductor, build-date=2025-07-21T15:44:17, com.redhat.component=openstack-nova-conductor-container, description=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-type=git, release=1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, summary=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, container_name=nova_conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.buildah.version=1.33.12)
Oct 13 14:17:32 standalone.localdomain podman[190584]: 2025-10-13 14:17:32.933751186 +0000 UTC m=+0.165928267 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, build-date=2025-07-21T15:44:03, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-server, 
io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, container_name=neutron_api, version=17.1.9, com.redhat.component=openstack-neutron-server-container, distribution-scope=public, managed_by=tripleo_ansible)
Oct 13 14:17:32 standalone.localdomain podman[190547]: 2025-10-13 14:17:32.990796112 +0000 UTC m=+0.248773026 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, com.redhat.component=openstack-keystone-container, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, build-date=2025-07-21T13:27:18, batch=17.1_20250721.1, container_name=keystone, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, 
managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:17:32 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:17:33 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:17:33 standalone.localdomain podman[190577]: 2025-10-13 14:17:33.014310945 +0000 UTC m=+0.248502758 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 horizon, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, build-date=2025-07-21T13:58:15, summary=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, container_name=horizon, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, distribution-scope=public, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, com.redhat.component=openstack-horizon-container, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-horizon)
Oct 13 14:17:33 standalone.localdomain podman[190548]: 2025-10-13 14:17:33.022869598 +0000 UTC m=+0.279135400 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, container_name=manila_scheduler, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.component=openstack-manila-scheduler-container, name=rhosp17/openstack-manila-scheduler, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, build-date=2025-07-21T15:56:28)
Oct 13 14:17:33 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:17:33 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:17:33 standalone.localdomain podman[190546]: 2025-10-13 14:17:33.05573576 +0000 UTC m=+0.320341079 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, vcs-type=git, container_name=heat_api_cfn, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T14:49:55, managed_by=tripleo_ansible, release=1, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-api-cfn-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, name=rhosp17/openstack-heat-api-cfn, version=17.1.9)
Oct 13 14:17:33 standalone.localdomain podman[190549]: 2025-10-13 14:17:33.073801285 +0000 UTC m=+0.325755204 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-heat-engine-container, config_id=tripleo_step4, name=rhosp17/openstack-heat-engine, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, build-date=2025-07-21T15:44:11, tcib_managed=true, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=heat_engine, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:17:33 standalone.localdomain podman[190583]: 2025-10-13 14:17:33.151377053 +0000 UTC m=+0.377157237 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, vcs-type=git, version=17.1.9, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-container, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, 
name=rhosp17/openstack-heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, batch=17.1_20250721.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, container_name=heat_api)
Oct 13 14:17:33 standalone.localdomain podman[190549]: 2025-10-13 14:17:33.15681063 +0000 UTC m=+0.408764569 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-heat-engine-container, name=rhosp17/openstack-heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, build-date=2025-07-21T15:44:11, version=17.1.9, container_name=heat_engine, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.33.12)
Oct 13 14:17:33 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:17:33 standalone.localdomain podman[190595]: 2025-10-13 14:17:33.170070938 +0000 UTC m=+0.397045208 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, version=17.1.9, vendor=Red Hat, Inc., container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, 
distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c)
Oct 13 14:17:33 standalone.localdomain podman[190595]: 2025-10-13 14:17:33.178244929 +0000 UTC m=+0.405219209 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, release=1, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64)
Oct 13 14:17:33 standalone.localdomain podman[190546]: 2025-10-13 14:17:33.190100254 +0000 UTC m=+0.454705603 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, build-date=2025-07-21T14:49:55, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-heat-api-cfn-container, 
vcs-type=git, container_name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, batch=17.1_20250721.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:17:33 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:17:33 standalone.localdomain podman[190584]: 2025-10-13 14:17:33.211370578 +0000 UTC m=+0.443547729 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, container_name=neutron_api, release=1, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, name=rhosp17/openstack-neutron-server, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-07-21T15:44:03, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-neutron-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:17:33 standalone.localdomain podman[190553]: 2025-10-13 14:17:33.119959956 +0000 UTC m=+0.350396053 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-scheduler-container, container_name=nova_scheduler, 
tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, build-date=2025-07-21T16:02:54, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible)
Oct 13 14:17:33 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:17:33 standalone.localdomain podman[190583]: 2025-10-13 14:17:33.232147568 +0000 UTC m=+0.457927772 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, version=17.1.9, com.redhat.component=openstack-heat-api-container, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, container_name=heat_api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-heat-api, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:17:33 standalone.localdomain podman[190553]: 2025-10-13 14:17:33.257393185 +0000 UTC m=+0.487829292 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, batch=17.1_20250721.1, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, release=1, build-date=2025-07-21T16:02:54, vcs-type=git, name=rhosp17/openstack-nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, 
tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, architecture=x86_64, container_name=nova_scheduler, managed_by=tripleo_ansible, vcs-ref=4f7cb55437a2b42333072591935a511528e79935)
Oct 13 14:17:33 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:17:33 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:17:33 standalone.localdomain podman[190567]: 2025-10-13 14:17:33.289043598 +0000 UTC m=+0.532438404 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., build-date=2025-07-21T16:06:43, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=manila_api_cron, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api, com.redhat.component=openstack-manila-api-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api)
Oct 13 14:17:33 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:17:33 standalone.localdomain podman[190555]: 2025-10-13 14:17:33.039123959 +0000 UTC m=+0.288753976 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, config_id=tripleo_step1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-memcached-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, 
container_name=memcached, managed_by=tripleo_ansible, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, version=17.1.9, build-date=2025-07-21T12:58:43)
Oct 13 14:17:33 standalone.localdomain podman[190567]: 2025-10-13 14:17:33.299519601 +0000 UTC m=+0.542914397 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, tcib_managed=true, batch=17.1_20250721.1, container_name=manila_api_cron, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, description=Red Hat 
OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, com.redhat.component=openstack-manila-api-container, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9)
Oct 13 14:17:33 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:17:33 standalone.localdomain podman[190589]: 2025-10-13 14:17:33.319553497 +0000 UTC m=+0.546754144 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-conductor-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, vendor=Red Hat, Inc., container_name=nova_conductor, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.buildah.version=1.33.12, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO 
Team, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-conductor, version=17.1.9, build-date=2025-07-21T15:44:17, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1)
Oct 13 14:17:33 standalone.localdomain podman[190555]: 2025-10-13 14:17:33.327957436 +0000 UTC m=+0.577587413 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-memcached-container, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, description=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T12:58:43, container_name=memcached, io.buildah.version=1.33.12, name=rhosp17/openstack-memcached)
Oct 13 14:17:33 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:17:33 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:17:33 standalone.localdomain podman[190556]: 2025-10-13 14:17:33.365099269 +0000 UTC m=+0.595307319 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, name=rhosp17/openstack-cinder-api, build-date=2025-07-21T15:58:55, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, architecture=x86_64, io.buildah.version=1.33.12, container_name=cinder_api, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-cinder-api-container, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']})
Oct 13 14:17:33 standalone.localdomain runuser[190874]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:17:33 standalone.localdomain podman[190556]: 2025-10-13 14:17:33.410060422 +0000 UTC m=+0.640268532 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, container_name=cinder_api, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-api-container, io.openshift.expose-services=, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T15:58:55, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, name=rhosp17/openstack-cinder-api, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git)
Oct 13 14:17:33 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:17:33 standalone.localdomain haproxy[70940]: 172.21.0.2:58712 [13/Oct/2025:14:17:33.726] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 56/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1286: 177 pgs: 177 active+clean; 227 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Oct 13 14:17:33 standalone.localdomain systemd[1]: tmp-crun.iE6REQ.mount: Deactivated successfully.
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.21.0.2:58712 [13/Oct/2025:14:17:33.733] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/293/293 201 8100 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain runuser[190874]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.21.0.2:50210 [13/Oct/2025:14:17:34.105] glance_api glance_api/standalone.internalapi.localdomain 0/0/0/2/2 300 1507 - - ---- 57/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:17:34.117] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/20/20 200 8095 - - ---- 58/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.21.0.2:32856 [13/Oct/2025:14:17:34.113] cinder cinder/standalone.internalapi.localdomain 0/0/0/44/44 404 392 - - ---- 58/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/boot-volume HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.21.0.2:32856 [13/Oct/2025:14:17:34.159] cinder cinder/standalone.internalapi.localdomain 0/0/0/28/28 200 1760 - - ---- 58/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/detail?all_tenants=1&name=boot-volume HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.21.0.2:33358 [13/Oct/2025:14:17:34.191] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/18/18 404 465 - - ---- 59/1/0/0/0 0/0 "GET /v2.1/flavors/m1.small HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.21.0.2:33358 [13/Oct/2025:14:17:34.211] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/22/22 200 1215 - - ---- 59/1/0/0/0 0/0 "GET /v2.1/flavors HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.21.0.2:33358 [13/Oct/2025:14:17:34.237] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/18/18 200 823 - - ---- 59/1/0/0/0 0/0 "GET /v2.1/flavors/993bc811-5d85-498c-8b2f-935e295a6567 HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain runuser[190953]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.21.0.2:48324 [13/Oct/2025:14:17:34.264] neutron neutron/standalone.internalapi.localdomain 0/0/0/29/29 404 290 - - ---- 60/1/0/0/0 0/0 "GET /v2.0/networks/private HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.21.0.2:48324 [13/Oct/2025:14:17:34.297] neutron neutron/standalone.internalapi.localdomain 0/0/0/110/110 200 901 - - ---- 60/1/0/0/0 0/0 "GET /v2.0/networks?name=private HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.21.0.2:32868 [13/Oct/2025:14:17:34.432] cinder cinder/standalone.internalapi.localdomain 0/0/0/64/64 200 1757 - - ---- 61/2/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/c028ef23-0c2a-4a52-9222-65435c33ad7b HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.17.0.2:44800 [13/Oct/2025:14:17:34.501] neutron neutron/standalone.internalapi.localdomain 0/0/0/98/98 200 901 - - ---- 62/2/0/0/0 0/0 "GET /v2.0/networks?id=0c455abd-28d4-47e7-a254-e50de0526def HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.17.0.2:44800 [13/Oct/2025:14:17:34.605] neutron neutron/standalone.internalapi.localdomain 0/0/0/8/8 200 361 - - ---- 62/2/0/0/0 0/0 "GET /v2.0/quotas/e44641a80bcb466cb3dd688e48b72d8e HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.17.0.2:44800 [13/Oct/2025:14:17:34.616] neutron neutron/standalone.internalapi.localdomain 0/0/0/43/43 200 507 - - ---- 62/2/0/0/0 0/0 "GET /v2.0/ports?tenant_id=e44641a80bcb466cb3dd688e48b72d8e&fields=id HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.17.0.2:44800 [13/Oct/2025:14:17:34.663] neutron neutron/standalone.internalapi.localdomain 0/0/0/60/60 200 187 - - ---- 62/2/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=segments HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.17.0.2:44800 [13/Oct/2025:14:17:34.725] neutron neutron/standalone.internalapi.localdomain 0/0/0/67/67 200 252 - - ---- 62/2/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=provider%3Aphysical_network&fields=provider%3Anetwork_type HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.21.0.2:32868 [13/Oct/2025:14:17:34.828] cinder cinder/standalone.internalapi.localdomain 0/0/0/43/43 200 1757 - - ---- 62/2/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/c028ef23-0c2a-4a52-9222-65435c33ad7b HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain haproxy[70940]: 172.21.0.2:32868 [13/Oct/2025:14:17:34.876] cinder cinder/standalone.internalapi.localdomain 0/0/0/3/3 300 942 - - ---- 62/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:34 standalone.localdomain runuser[190953]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:17:34 standalone.localdomain runuser[191015]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:17:35 standalone.localdomain ceph-mon[29756]: pgmap v1286: 177 pgs: 177 active+clean; 227 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Oct 13 14:17:35 standalone.localdomain haproxy[70940]: 172.21.0.2:32868 [13/Oct/2025:14:17:34.882] cinder cinder/standalone.internalapi.localdomain 0/0/0/216/216 200 576 - - ---- 62/2/0/0/0 0/0 "POST /v3/e44641a80bcb466cb3dd688e48b72d8e/attachments HTTP/1.1"
Oct 13 14:17:35 standalone.localdomain haproxy[70940]: 172.21.0.2:33358 [13/Oct/2025:14:17:34.412] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/729/729 202 821 - - ---- 62/1/0/0/0 0/0 "POST /v2.1/servers HTTP/1.1"
Oct 13 14:17:35 standalone.localdomain haproxy[70940]: 172.17.0.2:44800 [13/Oct/2025:14:17:35.163] neutron neutron/standalone.internalapi.localdomain 0/0/0/28/28 200 185 - - ---- 63/2/0/0/0 0/0 "GET /v2.0/ports?device_id=8f68d5aa-abc4-451d-89d2-f5342b71831c HTTP/1.1"
Oct 13 14:17:35 standalone.localdomain haproxy[70940]: 172.21.0.2:33358 [13/Oct/2025:14:17:35.144] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/53/53 200 1550 - - ---- 63/1/0/0/0 0/0 "GET /v2.1/servers/8f68d5aa-abc4-451d-89d2-f5342b71831c HTTP/1.1"
Oct 13 14:17:35 standalone.localdomain haproxy[70940]: 172.17.0.2:49734 [13/Oct/2025:14:17:35.162] placement placement/standalone.internalapi.localdomain 0/0/0/67/67 200 2249 - - ---- 63/1/0/0/0 0/0 "GET /placement/allocation_candidates?limit=1000&resources=DISK_GB%3A1%2CMEMORY_MB%3A512%2CVCPU%3A1&root_required=%21COMPUTE_STATUS_DISABLED HTTP/1.1"
Oct 13 14:17:35 standalone.localdomain haproxy[70940]: 172.17.0.2:49750 [13/Oct/2025:14:17:35.247] placement placement/standalone.internalapi.localdomain 0/0/0/8/8 200 327 - - ---- 64/2/0/0/0 0/0 "GET /placement/allocations/8f68d5aa-abc4-451d-89d2-f5342b71831c HTTP/1.1"
Oct 13 14:17:35 standalone.localdomain haproxy[70940]: 172.17.0.2:49750 [13/Oct/2025:14:17:35.257] placement placement/standalone.internalapi.localdomain 0/0/0/33/33 204 209 - - ---- 64/2/0/0/0 0/0 "PUT /placement/allocations/8f68d5aa-abc4-451d-89d2-f5342b71831c HTTP/1.1"
Oct 13 14:17:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:17:35 standalone.localdomain haproxy[70940]: 172.17.0.2:49756 [13/Oct/2025:14:17:35.472] placement placement/standalone.internalapi.localdomain 0/0/0/13/13 200 566 - - ---- 65/3/0/0/0 0/0 "GET /placement/allocations/8f68d5aa-abc4-451d-89d2-f5342b71831c HTTP/1.1"
Oct 13 14:17:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1287: 177 pgs: 177 active+clean; 227 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Oct 13 14:17:35 standalone.localdomain runuser[191015]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:17:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:17:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3831693806' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:17:36 standalone.localdomain haproxy[70940]: 172.17.0.2:58156 [13/Oct/2025:14:17:36.220] cinder cinder/standalone.internalapi.localdomain 0/0/0/4/4 300 942 - - ---- 67/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:36 standalone.localdomain haproxy[70940]: 172.17.0.2:44806 [13/Oct/2025:14:17:36.118] neutron neutron/standalone.internalapi.localdomain 0/0/0/124/124 200 901 - - ---- 67/3/0/0/0 0/0 "GET /v2.0/networks?id=0c455abd-28d4-47e7-a254-e50de0526def HTTP/1.1"
Oct 13 14:17:36 standalone.localdomain haproxy[70940]: 172.17.0.2:58156 [13/Oct/2025:14:17:36.229] cinder cinder/standalone.internalapi.localdomain 0/0/0/49/49 200 1877 - - ---- 67/3/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/c028ef23-0c2a-4a52-9222-65435c33ad7b HTTP/1.1"
Oct 13 14:17:36 standalone.localdomain ceph-mon[29756]: pgmap v1287: 177 pgs: 177 active+clean; 227 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Oct 13 14:17:36 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3831693806' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:17:36 standalone.localdomain dnsmasq[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/addn_hosts - 3 addresses
Oct 13 14:17:36 standalone.localdomain dnsmasq-dhcp[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/host
Oct 13 14:17:36 standalone.localdomain dnsmasq-dhcp[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/opts
Oct 13 14:17:36 standalone.localdomain podman[191155]: 2025-10-13 14:17:36.811753434 +0000 UTC m=+0.068043455 container kill c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, build-date=2025-07-21T16:28:54, version=17.1.9, name=rhosp17/openstack-neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container)
Oct 13 14:17:36 standalone.localdomain systemd[1]: tmp-crun.B3sqxw.mount: Deactivated successfully.
Oct 13 14:17:36 standalone.localdomain haproxy[70940]: 172.17.0.2:44806 [13/Oct/2025:14:17:36.248] neutron neutron/standalone.internalapi.localdomain 0/0/0/574/574 201 1263 - - ---- 67/3/0/0/0 0/0 "POST /v2.0/ports HTTP/1.1"
Oct 13 14:17:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 13 14:17:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3354409420' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 14:17:36 standalone.localdomain haproxy[70940]: 172.17.0.2:58156 [13/Oct/2025:14:17:36.323] cinder cinder/standalone.internalapi.localdomain 0/0/0/657/657 200 1073 - - ---- 67/3/0/0/0 0/0 "PUT /v3/e44641a80bcb466cb3dd688e48b72d8e/attachments/1e258e22-c3f1-4019-a0eb-b18008ad094e HTTP/1.1"
Oct 13 14:17:37 standalone.localdomain haproxy[70940]: 172.17.0.2:58156 [13/Oct/2025:14:17:36.999] cinder cinder/standalone.internalapi.localdomain 0/0/0/56/56 204 266 - - ---- 67/3/0/0/0 0/0 "POST /v3/e44641a80bcb466cb3dd688e48b72d8e/attachments/1e258e22-c3f1-4019-a0eb-b18008ad094e/action HTTP/1.1"
Oct 13 14:17:37 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3354409420' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 14:17:37 standalone.localdomain dnsmasq[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/addn_hosts - 3 addresses
Oct 13 14:17:37 standalone.localdomain dnsmasq-dhcp[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/host
Oct 13 14:17:37 standalone.localdomain dnsmasq-dhcp[178073]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/opts
Oct 13 14:17:37 standalone.localdomain podman[191292]: 2025-10-13 14:17:37.447554747 +0000 UTC m=+0.036834954 container kill c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, io.openshift.expose-services=, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, distribution-scope=public, vendor=Red Hat, Inc.)
Oct 13 14:17:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:17:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:17:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:17:37 standalone.localdomain podman[191307]: 2025-10-13 14:17:37.557755409 +0000 UTC m=+0.085393589 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=swift_account_server, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, vcs-type=git, build-date=2025-07-21T16:11:22)
Oct 13 14:17:37 standalone.localdomain podman[191306]: 2025-10-13 14:17:37.65010291 +0000 UTC m=+0.182367972 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, build-date=2025-07-21T15:54:32, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905)
Oct 13 14:17:37 standalone.localdomain haproxy[70940]: 172.17.0.2:44806 [13/Oct/2025:14:17:36.827] neutron neutron/standalone.internalapi.localdomain 0/0/0/841/841 200 1345 - - ---- 67/3/0/0/0 0/0 "PUT /v2.0/ports/da3e5a61-7adb-481b-b7f5-703c3939cde2 HTTP/1.1"
Oct 13 14:17:37 standalone.localdomain podman[191309]: 2025-10-13 14:17:37.626556005 +0000 UTC m=+0.151690298 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, container_name=swift_object_server, io.buildah.version=1.33.12, distribution-scope=public, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:17:37 standalone.localdomain haproxy[70940]: 172.17.0.2:46454 [13/Oct/2025:14:17:37.671] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/30/30 200 542 - - ---- 68/2/0/0/0 0/0 "POST /v2.1/os-server-external-events HTTP/1.1"
Oct 13 14:17:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:17:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:17:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1288: 177 pgs: 177 active+clean; 227 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 12 KiB/s wr, 0 op/s
Oct 13 14:17:37 standalone.localdomain haproxy[70940]: 172.17.0.2:44806 [13/Oct/2025:14:17:37.720] neutron neutron/standalone.internalapi.localdomain 0/0/0/85/85 200 1348 - - ---- 68/3/0/0/0 0/0 "GET /v2.0/ports?tenant_id=e44641a80bcb466cb3dd688e48b72d8e&device_id=8f68d5aa-abc4-451d-89d2-f5342b71831c HTTP/1.1"
Oct 13 14:17:37 standalone.localdomain podman[191391]: 2025-10-13 14:17:37.855408818 +0000 UTC m=+0.092271991 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-novncproxy-container, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, release=1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, version=17.1.9, io.openshift.expose-services=, container_name=nova_vnc_proxy, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:24:10, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 14:17:37 standalone.localdomain haproxy[70940]: 172.17.0.2:44806 [13/Oct/2025:14:17:37.811] neutron neutron/standalone.internalapi.localdomain 0/0/0/67/67 200 924 - - ---- 68/3/0/0/0 0/0 "GET /v2.0/networks?tenant_id=e44641a80bcb466cb3dd688e48b72d8e&shared=False HTTP/1.1"
Oct 13 14:17:37 standalone.localdomain podman[191307]: 2025-10-13 14:17:37.879744697 +0000 UTC m=+0.407382887 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, version=17.1.9, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, release=1, tcib_managed=true, container_name=swift_account_server, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22)
Oct 13 14:17:37 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:17:37 standalone.localdomain podman[191390]: 2025-10-13 14:17:37.897112641 +0000 UTC m=+0.128509146 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, name=rhosp17/openstack-nova-compute, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:17:37 standalone.localdomain podman[191309]: 2025-10-13 14:17:37.925814404 +0000 UTC m=+0.450948617 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, build-date=2025-07-21T14:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, container_name=swift_object_server, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, version=17.1.9)
Oct 13 14:17:37 standalone.localdomain podman[191306]: 2025-10-13 14:17:37.933730128 +0000 UTC m=+0.465995190 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, build-date=2025-07-21T15:54:32, name=rhosp17/openstack-swift-container, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-container-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:17:37 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:17:37 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:17:38 standalone.localdomain haproxy[70940]: 172.17.0.2:44806 [13/Oct/2025:14:17:37.896] neutron neutron/standalone.internalapi.localdomain 0/0/0/133/133 200 901 - - ---- 68/3/0/0/0 0/0 "GET /v2.0/networks?shared=True HTTP/1.1"
Oct 13 14:17:38 standalone.localdomain haproxy[70940]: 172.17.0.2:44806 [13/Oct/2025:14:17:38.052] neutron neutron/standalone.internalapi.localdomain 0/0/0/53/53 200 1348 - - ---- 68/3/0/0/0 0/0 "GET /v2.0/ports?tenant_id=e44641a80bcb466cb3dd688e48b72d8e&device_id=8f68d5aa-abc4-451d-89d2-f5342b71831c HTTP/1.1"
Oct 13 14:17:38 standalone.localdomain haproxy[70940]: 172.17.0.2:44806 [13/Oct/2025:14:17:38.111] neutron neutron/standalone.internalapi.localdomain 0/0/0/48/48 200 192 - - ---- 68/3/0/0/0 0/0 "GET /v2.0/floatingips?fixed_ip_address=192.168.0.237&port_id=da3e5a61-7adb-481b-b7f5-703c3939cde2 HTTP/1.1"
Oct 13 14:17:38 standalone.localdomain podman[191391]: 2025-10-13 14:17:38.178913202 +0000 UTC m=+0.415776455 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, distribution-scope=public, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_vnc_proxy, tcib_managed=true, release=1, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, architecture=x86_64, io.openshift.expose-services=, 
name=rhosp17/openstack-nova-novncproxy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, build-date=2025-07-21T15:24:10, com.redhat.component=openstack-nova-novncproxy-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git)
Oct 13 14:17:38 standalone.localdomain haproxy[70940]: 172.17.0.2:44806 [13/Oct/2025:14:17:38.165] neutron neutron/standalone.internalapi.localdomain 0/0/0/28/28 200 810 - - ---- 68/3/0/0/0 0/0 "GET /v2.0/subnets?id=e1cc60b1-626d-46b0-8458-47640298fa14 HTTP/1.1"
Oct 13 14:17:38 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:17:38 standalone.localdomain podman[191390]: 2025-10-13 14:17:38.225906608 +0000 UTC m=+0.457303123 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, architecture=x86_64)
Oct 13 14:17:38 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:17:38 standalone.localdomain haproxy[70940]: 172.17.0.2:44806 [13/Oct/2025:14:17:38.199] neutron neutron/standalone.internalapi.localdomain 0/0/0/51/51 200 1352 - - ---- 68/3/0/0/0 0/0 "GET /v2.0/ports?network_id=0c455abd-28d4-47e7-a254-e50de0526def&device_owner=network%3Adhcp HTTP/1.1"
Oct 13 14:17:38 standalone.localdomain haproxy[70940]: 172.17.0.2:44806 [13/Oct/2025:14:17:38.255] neutron neutron/standalone.internalapi.localdomain 0/0/0/63/63 200 187 - - ---- 68/3/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=segments HTTP/1.1"
Oct 13 14:17:38 standalone.localdomain haproxy[70940]: 172.17.0.2:44806 [13/Oct/2025:14:17:38.325] neutron neutron/standalone.internalapi.localdomain 0/0/0/87/87 200 252 - - ---- 68/3/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=provider%3Aphysical_network&fields=provider%3Anetwork_type HTTP/1.1"
Oct 13 14:17:38 standalone.localdomain ceph-mon[29756]: pgmap v1288: 177 pgs: 177 active+clean; 227 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 12 KiB/s wr, 0 op/s
Oct 13 14:17:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 13 14:17:38 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3839180126' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 14:17:39 standalone.localdomain kernel: device tapda3e5a61-7a entered promiscuous mode
Oct 13 14:17:39 standalone.localdomain NetworkManager[5962]: <info>  [1760365059.0509] manager: (tapda3e5a61-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/17)
Oct 13 14:17:39 standalone.localdomain systemd-udevd[191540]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 14:17:39 standalone.localdomain NetworkManager[5962]: <info>  [1760365059.0746] device (tapda3e5a61-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Oct 13 14:17:39 standalone.localdomain NetworkManager[5962]: <info>  [1760365059.0751] device (tapda3e5a61-7a): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Oct 13 14:17:39 standalone.localdomain systemd-machined[183383]: New machine qemu-2-instance-00000002.
Oct 13 14:17:39 standalone.localdomain kernel: IPv4: martian source 169.254.169.254 from 169.254.169.254, on dev tap9a2e0de2-08
Oct 13 14:17:39 standalone.localdomain kernel: ll header: 00000000: ff ff ff ff ff ff fa 16 3e e5 ca 6c 08 06
Oct 13 14:17:39 standalone.localdomain systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Oct 13 14:17:39 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3839180126' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 14:17:39 standalone.localdomain systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 13 14:17:39 standalone.localdomain haproxy[70940]: 172.17.0.2:46454 [13/Oct/2025:14:17:39.706] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/26/26 200 717 - - ---- 68/2/0/0/0 0/0 "POST /v2.1/os-server-external-events HTTP/1.1"
Oct 13 14:17:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1289: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 13 KiB/s rd, 13 KiB/s wr, 17 op/s
Oct 13 14:17:39 standalone.localdomain systemd[1]: tmp-crun.Bs5jry.mount: Deactivated successfully.
Oct 13 14:17:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:17:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:17:39 standalone.localdomain podman[191698]: 2025-10-13 14:17:39.946816092 +0000 UTC m=+0.094500129 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, summary=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-backup, architecture=x86_64, com.redhat.component=openstack-cinder-backup-container, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, build-date=2025-07-21T16:18:24, release=1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cinder-backup, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, batch=17.1_20250721.1)
Oct 13 14:17:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:17:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:17:40 standalone.localdomain podman[191698]: 2025-10-13 14:17:39.999883095 +0000 UTC m=+0.147567112 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T16:18:24, name=rhosp17/openstack-cinder-backup, release=1, com.redhat.component=openstack-cinder-backup-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-backup, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cinder-backup, distribution-scope=public)
Oct 13 14:17:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:17:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:17:40 standalone.localdomain podman[191718]: 2025-10-13 14:17:40.13457743 +0000 UTC m=+0.175543413 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=)
Oct 13 14:17:40 standalone.localdomain podman[191716]: 2025-10-13 14:17:40.099731447 +0000 UTC m=+0.146795578 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_cron, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:17:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:17:40 standalone.localdomain systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 13 14:17:40 standalone.localdomain podman[191718]: 2025-10-13 14:17:40.222977329 +0000 UTC m=+0.263943332 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:17:40 standalone.localdomain podman[191716]: 2025-10-13 14:17:40.237248608 +0000 UTC m=+0.284312759 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, 
io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:58:20, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:17:40 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:17:40 standalone.localdomain podman[191769]: 2025-10-13 14:17:40.246245296 +0000 UTC m=+0.147088477 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.component=openstack-placement-api-container, config_id=tripleo_step4, name=rhosp17/openstack-placement-api, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, release=1, summary=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, architecture=x86_64, tcib_managed=true, container_name=placement_api, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, description=Red Hat OpenStack Platform 17.1 placement-api)
Oct 13 14:17:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:17:40 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:17:40 standalone.localdomain podman[191717]: 2025-10-13 14:17:40.21681806 +0000 UTC m=+0.255944576 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, release=1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.component=openstack-swift-proxy-server-container, name=rhosp17/openstack-swift-proxy-server, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, container_name=swift_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:17:40 standalone.localdomain haproxy[70940]: 172.17.0.2:44800 [13/Oct/2025:14:17:40.307] neutron neutron/standalone.internalapi.localdomain 0/0/0/74/74 200 1350 - - ---- 68/3/0/0/0 0/0 "GET /v2.0/ports?device_id=8f68d5aa-abc4-451d-89d2-f5342b71831c HTTP/1.1"
Oct 13 14:17:40 standalone.localdomain podman[191807]: 2025-10-13 14:17:40.391598178 +0000 UTC m=+0.173255502 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container, container_name=glance_api_internal, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:20, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, distribution-scope=public, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:17:40 standalone.localdomain systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service.
Oct 13 14:17:40 standalone.localdomain podman[191720]: 2025-10-13 14:17:40.354222298 +0000 UTC m=+0.370416859 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-type=git, release=1, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1)
Oct 13 14:17:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:17:40 standalone.localdomain podman[191832]: 2025-10-13 14:17:40.412289725 +0000 UTC m=+0.144832568 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, container_name=glance_api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-glance-api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, version=17.1.9)
Oct 13 14:17:40 standalone.localdomain podman[191769]: 2025-10-13 14:17:40.42483031 +0000 UTC m=+0.325673501 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, maintainer=OpenStack TripleO Team, container_name=placement_api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:12, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, name=rhosp17/openstack-placement-api, com.redhat.component=openstack-placement-api-container, description=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, vcs-type=git, version=17.1.9)
Oct 13 14:17:40 standalone.localdomain haproxy[70940]: 172.17.0.2:44800 [13/Oct/2025:14:17:40.388] neutron neutron/standalone.internalapi.localdomain 0/0/0/41/41 200 261 - - ---- 68/3/0/0/0 0/0 "GET /v2.0/security-groups?id=abececb5-f6f2-4dbd-993f-ddc54effe614&fields=id&fields=name HTTP/1.1"
Oct 13 14:17:40 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:17:40 standalone.localdomain podman[191720]: 2025-10-13 14:17:40.436743978 +0000 UTC m=+0.452938549 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1)
Oct 13 14:17:40 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:17:40 standalone.localdomain ceph-mon[29756]: pgmap v1289: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 13 KiB/s rd, 13 KiB/s wr, 17 op/s
Oct 13 14:17:40 standalone.localdomain haproxy[70940]: 172.21.0.2:33358 [13/Oct/2025:14:17:40.206] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/249/249 200 1898 - - ---- 68/2/0/0/0 0/0 "GET /v2.1/servers/8f68d5aa-abc4-451d-89d2-f5342b71831c HTTP/1.1"
Oct 13 14:17:40 standalone.localdomain podman[191772]: 2025-10-13 14:17:40.323521633 +0000 UTC m=+0.222045483 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, container_name=nova_metadata, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, com.redhat.component=openstack-nova-api-container, release=1, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1)
Oct 13 14:17:40 standalone.localdomain podman[191717]: 2025-10-13 14:17:40.487257642 +0000 UTC m=+0.526384128 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, container_name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, name=rhosp17/openstack-swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, distribution-scope=public, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, build-date=2025-07-21T14:48:37)
Oct 13 14:17:40 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:17:40 standalone.localdomain podman[191772]: 2025-10-13 14:17:40.558813363 +0000 UTC m=+0.457337223 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, name=rhosp17/openstack-nova-api, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-api-container, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=nova_metadata, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, distribution-scope=public)
Oct 13 14:17:40 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:17:40 standalone.localdomain haproxy[70940]: 172.17.0.2:44800 [13/Oct/2025:14:17:40.513] neutron neutron/standalone.internalapi.localdomain 0/0/0/89/89 200 1350 - - ---- 68/3/0/0/0 0/0 "GET /v2.0/ports?device_id=8f68d5aa-abc4-451d-89d2-f5342b71831c HTTP/1.1"
Oct 13 14:17:40 standalone.localdomain podman[191807]: 2025-10-13 14:17:40.628037604 +0000 UTC m=+0.409694928 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:58:20, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 14:17:40 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:17:40 standalone.localdomain haproxy[70940]: 172.17.0.2:44800 [13/Oct/2025:14:17:40.606] neutron neutron/standalone.internalapi.localdomain 0/0/0/59/59 200 261 - - ---- 68/3/0/0/0 0/0 "GET /v2.0/security-groups?id=abececb5-f6f2-4dbd-993f-ddc54effe614&fields=id&fields=name HTTP/1.1"
Oct 13 14:17:40 standalone.localdomain haproxy[70940]: 172.21.0.2:33358 [13/Oct/2025:14:17:40.459] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/217/217 200 1898 - - ---- 68/2/0/0/0 0/0 "GET /v2.1/servers/8f68d5aa-abc4-451d-89d2-f5342b71831c HTTP/1.1"
Oct 13 14:17:40 standalone.localdomain podman[191832]: 2025-10-13 14:17:40.677998971 +0000 UTC m=+0.410541844 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, name=rhosp17/openstack-glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:17:40 standalone.localdomain haproxy[70940]: 172.21.0.2:33358 [13/Oct/2025:14:17:40.680] nova_osapi nova_osapi/standalone.internalapi.localdomain 0/0/0/18/18 200 823 - - ---- 68/2/0/0/0 0/0 "GET /v2.1/flavors/993bc811-5d85-498c-8b2f-935e295a6567 HTTP/1.1"
Oct 13 14:17:40 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:17:40 standalone.localdomain systemd[1]: tmp-crun.He2Cip.mount: Deactivated successfully.
Oct 13 14:17:40 standalone.localdomain sshd[190522]: Received disconnect from 192.168.122.11 port 46702:11: disconnected by user
Oct 13 14:17:40 standalone.localdomain sshd[190522]: Disconnected from user root 192.168.122.11 port 46702
Oct 13 14:17:40 standalone.localdomain sshd[190519]: pam_unix(sshd:session): session closed for user root
Oct 13 14:17:40 standalone.localdomain systemd[1]: session-88.scope: Deactivated successfully.
Oct 13 14:17:40 standalone.localdomain systemd[1]: session-88.scope: Consumed 2.354s CPU time.
Oct 13 14:17:40 standalone.localdomain systemd-logind[45629]: Session 88 logged out. Waiting for processes to exit.
Oct 13 14:17:40 standalone.localdomain systemd-logind[45629]: Removed session 88.
Oct 13 14:17:40 standalone.localdomain sshd[191967]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:17:41 standalone.localdomain sshd[191967]: Accepted publickey for root from 192.168.122.11 port 43274 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:17:41 standalone.localdomain systemd-logind[45629]: New session 89 of user root.
Oct 13 14:17:41 standalone.localdomain systemd[1]: Started Session 89 of User root.
Oct 13 14:17:41 standalone.localdomain sshd[191967]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:41 standalone.localdomain setroubleshoot[191641]: SELinux is preventing /usr/libexec/qemu-kvm from read access on the file max_map_count. For complete SELinux messages run: sealert -l 96b0e6ea-7da0-46eb-9903-9b19ca29d85a
Oct 13 14:17:41 standalone.localdomain setroubleshoot[191641]: SELinux is preventing /usr/libexec/qemu-kvm from read access on the file max_map_count.
                                                               
                                                               *****  Plugin qemu_file_image (98.8 confidence) suggests   *******************
                                                               
                                                               If max_map_count is a virtualization target
                                                               Then you need to change the label on max_map_count'
                                                               Do
                                                               # semanage fcontext -a -t virt_image_t 'max_map_count'
                                                               # restorecon -v 'max_map_count'
                                                               
                                                               *****  Plugin catchall (2.13 confidence) suggests   **************************
                                                               
                                                               If you believe that qemu-kvm should be allowed read access on the max_map_count file by default.
                                                               Then you should report this as a bug.
                                                               You can generate a local policy module to allow this access.
                                                               Do
                                                               allow this access for now by executing:
                                                               # ausearch -c 'qemu-kvm' --raw | audit2allow -M my-qemukvm
                                                               # semodule -X 300 -i my-qemukvm.pp
                                                               
Oct 13 14:17:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1290: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 13 KiB/s rd, 1.8 KiB/s wr, 17 op/s
Oct 13 14:17:42 standalone.localdomain ceph-mon[29756]: pgmap v1290: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 13 KiB/s rd, 1.8 KiB/s wr, 17 op/s
Oct 13 14:17:43 standalone.localdomain haproxy[70940]: 172.21.0.2:57276 [13/Oct/2025:14:17:43.147] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 64/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:43 standalone.localdomain haproxy[70940]: 172.21.0.2:57276 [13/Oct/2025:14:17:43.154] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/296/296 201 8100 - - ---- 64/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:43 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:17:43.483] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/44/44 200 8095 - - ---- 65/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:43 standalone.localdomain haproxy[70940]: 172.21.0.2:59612 [13/Oct/2025:14:17:43.479] cinder cinder/standalone.internalapi.localdomain 0/0/0/87/87 404 387 - - ---- 65/3/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/backups/backup HTTP/1.1"
Oct 13 14:17:43 standalone.localdomain haproxy[70940]: 172.21.0.2:59612 [13/Oct/2025:14:17:43.570] cinder cinder/standalone.internalapi.localdomain 0/0/0/14/14 200 316 - - ---- 65/3/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/backups/detail?all_tenants=1&name=backup HTTP/1.1"
Oct 13 14:17:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1291: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 13 KiB/s rd, 1.8 KiB/s wr, 17 op/s
Oct 13 14:17:43 standalone.localdomain sshd[192011]: Received disconnect from 192.168.122.11 port 43274:11: disconnected by user
Oct 13 14:17:43 standalone.localdomain sshd[192011]: Disconnected from user root 192.168.122.11 port 43274
Oct 13 14:17:43 standalone.localdomain sshd[191967]: pam_unix(sshd:session): session closed for user root
Oct 13 14:17:43 standalone.localdomain systemd[1]: session-89.scope: Deactivated successfully.
Oct 13 14:17:43 standalone.localdomain systemd[1]: session-89.scope: Consumed 2.239s CPU time.
Oct 13 14:17:43 standalone.localdomain sshd[192158]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:17:43 standalone.localdomain systemd-logind[45629]: Session 89 logged out. Waiting for processes to exit.
Oct 13 14:17:43 standalone.localdomain systemd-logind[45629]: Removed session 89.
Oct 13 14:17:43 standalone.localdomain sshd[192158]: Accepted publickey for root from 192.168.122.11 port 43288 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:17:44 standalone.localdomain systemd-logind[45629]: New session 90 of user root.
Oct 13 14:17:44 standalone.localdomain systemd[1]: Started Session 90 of User root.
Oct 13 14:17:44 standalone.localdomain sshd[192158]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:44 standalone.localdomain ceph-mon[29756]: pgmap v1291: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 13 KiB/s rd, 1.8 KiB/s wr, 17 op/s
Oct 13 14:17:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:17:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:17:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:17:45 standalone.localdomain haproxy[70940]: 172.21.0.2:57288 [13/Oct/2025:14:17:45.767] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 60/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1292: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 1.8 KiB/s wr, 81 op/s
Oct 13 14:17:45 standalone.localdomain podman[192194]: 2025-10-13 14:17:45.832754636 +0000 UTC m=+0.074137353 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, vcs-type=git, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T16:05:11, summary=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, container_name=nova_api)
Oct 13 14:17:45 standalone.localdomain systemd[1]: tmp-crun.52FGgY.mount: Deactivated successfully.
Oct 13 14:17:45 standalone.localdomain podman[192193]: 2025-10-13 14:17:45.860657794 +0000 UTC m=+0.115862797 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, container_name=keystone_cron, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-keystone, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:17:45 standalone.localdomain podman[192193]: 2025-10-13 14:17:45.893187435 +0000 UTC m=+0.148392468 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., config_id=tripleo_step3, version=17.1.9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, 
com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, distribution-scope=public, release=1, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=keystone_cron)
Oct 13 14:17:45 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:17:45 standalone.localdomain podman[192194]: 2025-10-13 14:17:45.915596424 +0000 UTC m=+0.156979131 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T16:05:11, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-api, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-api, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, container_name=nova_api, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible)
Oct 13 14:17:45 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:17:46 standalone.localdomain haproxy[70940]: 172.21.0.2:57288 [13/Oct/2025:14:17:45.773] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/365/365 201 8100 - - ---- 60/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:46 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:17:46.182] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/31/31 200 8095 - - ---- 61/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:46 standalone.localdomain haproxy[70940]: 172.21.0.2:59618 [13/Oct/2025:14:17:46.172] cinder cinder/standalone.internalapi.localdomain 0/0/0/91/91 404 385 - - ---- 61/2/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/disk HTTP/1.1"
Oct 13 14:17:46 standalone.localdomain haproxy[70940]: 172.21.0.2:59618 [13/Oct/2025:14:17:46.269] cinder cinder/standalone.internalapi.localdomain 0/0/0/50/50 200 2065 - - ---- 61/2/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/volumes/detail?all_tenants=1&name=disk HTTP/1.1"
Oct 13 14:17:46 standalone.localdomain runuser[192250]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:17:46 standalone.localdomain ceph-mon[29756]: pgmap v1292: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 1.8 KiB/s wr, 81 op/s
Oct 13 14:17:46 standalone.localdomain haproxy[70940]: 172.21.0.2:59618 [13/Oct/2025:14:17:46.324] cinder cinder/standalone.internalapi.localdomain 0/0/0/116/116 202 660 - - ---- 61/2/0/0/0 0/0 "POST /v3/e44641a80bcb466cb3dd688e48b72d8e/backups HTTP/1.1"
Oct 13 14:17:46 standalone.localdomain haproxy[70940]: 172.17.0.100:39841 [13/Oct/2025:14:07:38.913] mysql mysql/standalone.internalapi.localdomain 1/0/607615 137428 -- 61/54/53/53/0 0/0
Oct 13 14:17:46 standalone.localdomain sudo[192297]:     root : PWD=/ ; USER=root ; COMMAND=/usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf privsep-helper --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --privsep_context os_brick.privileged.default --privsep_sock_path /tmp/tmpgua6qdo4/privsep.sock
Oct 13 14:17:46 standalone.localdomain systemd[1]: Started Session c17 of User root.
Oct 13 14:17:46 standalone.localdomain sudo[192297]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:46 standalone.localdomain sshd[192161]: Received disconnect from 192.168.122.11 port 43288:11: disconnected by user
Oct 13 14:17:46 standalone.localdomain sshd[192161]: Disconnected from user root 192.168.122.11 port 43288
Oct 13 14:17:46 standalone.localdomain sshd[192158]: pam_unix(sshd:session): session closed for user root
Oct 13 14:17:46 standalone.localdomain systemd[1]: session-90.scope: Deactivated successfully.
Oct 13 14:17:46 standalone.localdomain systemd[1]: session-90.scope: Consumed 1.905s CPU time.
Oct 13 14:17:46 standalone.localdomain systemd-logind[45629]: Session 90 logged out. Waiting for processes to exit.
Oct 13 14:17:46 standalone.localdomain systemd-logind[45629]: Removed session 90.
Oct 13 14:17:46 standalone.localdomain sshd[192299]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:17:46 standalone.localdomain sshd[192299]: Accepted publickey for root from 192.168.122.11 port 43302 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:17:46 standalone.localdomain systemd-logind[45629]: New session 91 of user root.
Oct 13 14:17:46 standalone.localdomain systemd[1]: Started Session 91 of User root.
Oct 13 14:17:46 standalone.localdomain sshd[192299]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:47 standalone.localdomain runuser[192250]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:17:47 standalone.localdomain sudo[192297]: pam_unix(sudo:session): session closed for user root
Oct 13 14:17:47 standalone.localdomain runuser[192343]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:17:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1293: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 1.8 KiB/s wr, 81 op/s
Oct 13 14:17:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 13 14:17:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/422325698' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 14:17:47 standalone.localdomain runuser[192343]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:17:47 standalone.localdomain runuser[192426]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:17:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:17:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:17:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:17:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:17:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:17:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:17:48 standalone.localdomain haproxy[70940]: 172.21.0.2:57294 [13/Oct/2025:14:17:48.778] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 58/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:48 standalone.localdomain systemd[1]: tmp-crun.1Z5Qed.mount: Deactivated successfully.
Oct 13 14:17:48 standalone.localdomain ceph-mon[29756]: pgmap v1293: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 1.8 KiB/s wr, 81 op/s
Oct 13 14:17:48 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/422325698' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 14:17:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e60 do_prune osdmap full prune enabled
Oct 13 14:17:48 standalone.localdomain runuser[192426]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:17:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e61 e61: 1 total, 1 up, 1 in
Oct 13 14:17:48 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e61: 1 total, 1 up, 1 in
Oct 13 14:17:48 standalone.localdomain systemd[1]: tmp-crun.ajCfwU.mount: Deactivated successfully.
Oct 13 14:17:48 standalone.localdomain podman[192535]: 2025-10-13 14:17:48.850911785 +0000 UTC m=+0.102034210 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-barbican-worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, container_name=barbican_worker, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T15:36:22, 
config_id=tripleo_step3, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, com.redhat.component=openstack-barbican-worker-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1)
Oct 13 14:17:48 standalone.localdomain podman[192535]: 2025-10-13 14:17:48.930113312 +0000 UTC m=+0.181235727 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, description=Red Hat OpenStack Platform 17.1 barbican-worker, name=rhosp17/openstack-barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, com.redhat.component=openstack-barbican-worker-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, build-date=2025-07-21T15:36:22, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, container_name=barbican_worker, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, tcib_managed=true, io.openshift.expose-services=)
Oct 13 14:17:48 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:17:48 standalone.localdomain podman[192542]: 2025-10-13 14:17:48.944785894 +0000 UTC m=+0.188057328 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-nova-api, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-nova-api-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, container_name=nova_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, io.openshift.expose-services=, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, build-date=2025-07-21T16:05:11, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:17:48 standalone.localdomain podman[192531]: 2025-10-13 14:17:48.898594082 +0000 UTC m=+0.133550610 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-dhcp-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-dhcp-agent, release=1, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:54, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=neutron_dhcp, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9)
Oct 13 14:17:48 standalone.localdomain podman[192542]: 2025-10-13 14:17:48.974282451 +0000 UTC m=+0.217553905 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, batch=17.1_20250721.1, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, build-date=2025-07-21T16:05:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-api, tcib_managed=true, managed_by=tripleo_ansible, release=1, vcs-type=git, com.redhat.component=openstack-nova-api-container, container_name=nova_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-api, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, summary=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:17:48 standalone.localdomain podman[192533]: 2025-10-13 14:17:48.974340373 +0000 UTC m=+0.232220176 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, managed_by=tripleo_ansible, build-date=2025-07-21T16:18:19, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-barbican-keystone-listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, 
io.openshift.expose-services=, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_keystone_listener, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vendor=Red Hat, Inc., com.redhat.component=openstack-barbican-keystone-listener-container, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1)
Oct 13 14:17:48 standalone.localdomain podman[192531]: 2025-10-13 14:17:48.978096058 +0000 UTC m=+0.213052586 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.9, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-dhcp-agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-dhcp-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T16:28:54, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, name=rhosp17/openstack-neutron-dhcp-agent, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9)
Oct 13 14:17:48 standalone.localdomain podman[192532]: 2025-10-13 14:17:48.990264163 +0000 UTC m=+0.242863234 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, name=rhosp17/openstack-barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, build-date=2025-07-21T15:22:44, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 barbican-api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_api, description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, config_id=tripleo_step3, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-barbican-api-container, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:17:49 standalone.localdomain podman[192533]: 2025-10-13 14:17:49.001820359 +0000 UTC m=+0.259700172 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, config_id=tripleo_step3, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, 
build-date=2025-07-21T16:18:19, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_keystone_listener, architecture=x86_64, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, version=17.1.9, name=rhosp17/openstack-barbican-keystone-listener, distribution-scope=public, com.redhat.component=openstack-barbican-keystone-listener-container, batch=17.1_20250721.1)
Oct 13 14:17:49 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:17:49 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:17:49 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:17:49 standalone.localdomain podman[192532]: 2025-10-13 14:17:49.052288292 +0000 UTC m=+0.304887353 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, distribution-scope=public, container_name=barbican_api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, config_id=tripleo_step3, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, io.buildah.version=1.33.12, 
vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-barbican-api, release=1, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.component=openstack-barbican-api-container, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:17:49 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:17:49 standalone.localdomain haproxy[70940]: 172.21.0.2:57294 [13/Oct/2025:14:17:48.784] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/342/342 201 8100 - - ---- 58/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:49 standalone.localdomain podman[192534]: 2025-10-13 14:17:49.16468218 +0000 UTC m=+0.411310377 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:03:34, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-sriov-agent-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, config_id=tripleo_step4, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 14:17:49 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:17:49.154] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/24/24 200 8095 - - ---- 59/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:49 standalone.localdomain haproxy[70940]: 172.21.0.2:59634 [13/Oct/2025:14:17:49.149] cinder cinder/standalone.internalapi.localdomain 0/0/0/42/42 404 387 - - ---- 59/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/backups/backup HTTP/1.1"
Oct 13 14:17:49 standalone.localdomain haproxy[70940]: 172.21.0.2:59634 [13/Oct/2025:14:17:49.196] cinder cinder/standalone.internalapi.localdomain 0/0/0/16/16 200 1069 - - ---- 59/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/backups/detail?all_tenants=1&name=backup HTTP/1.1"
Oct 13 14:17:49 standalone.localdomain podman[192534]: 2025-10-13 14:17:49.22903492 +0000 UTC m=+0.475663107 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-sriov-agent-container, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, tcib_managed=true, build-date=2025-07-21T16:03:34, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=neutron_sriov_agent, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 14:17:49 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:17:49 standalone.localdomain sshd[192303]: Received disconnect from 192.168.122.11 port 43302:11: disconnected by user
Oct 13 14:17:49 standalone.localdomain sshd[192303]: Disconnected from user root 192.168.122.11 port 43302
Oct 13 14:17:49 standalone.localdomain sshd[192299]: pam_unix(sshd:session): session closed for user root
Oct 13 14:17:49 standalone.localdomain systemd[1]: session-91.scope: Deactivated successfully.
Oct 13 14:17:49 standalone.localdomain systemd[1]: session-91.scope: Consumed 2.174s CPU time.
Oct 13 14:17:49 standalone.localdomain systemd-logind[45629]: Session 91 logged out. Waiting for processes to exit.
Oct 13 14:17:49 standalone.localdomain systemd-logind[45629]: Removed session 91.
Oct 13 14:17:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1295: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 2.4 MiB/s rd, 102 B/s wr, 89 op/s
Oct 13 14:17:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e61 do_prune osdmap full prune enabled
Oct 13 14:17:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e62 e62: 1 total, 1 up, 1 in
Oct 13 14:17:49 standalone.localdomain ceph-mon[29756]: osdmap e61: 1 total, 1 up, 1 in
Oct 13 14:17:49 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e62: 1 total, 1 up, 1 in
Oct 13 14:17:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:17:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e62 do_prune osdmap full prune enabled
Oct 13 14:17:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e63 e63: 1 total, 1 up, 1 in
Oct 13 14:17:50 standalone.localdomain systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully.
Oct 13 14:17:50 standalone.localdomain systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Consumed 1.221s CPU time.
Oct 13 14:17:50 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e63: 1 total, 1 up, 1 in
Oct 13 14:17:50 standalone.localdomain ceph-mon[29756]: pgmap v1295: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 2.4 MiB/s rd, 102 B/s wr, 89 op/s
Oct 13 14:17:50 standalone.localdomain ceph-mon[29756]: osdmap e62: 1 total, 1 up, 1 in
Oct 13 14:17:51 standalone.localdomain systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 13 14:17:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:17:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:17:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:17:51 standalone.localdomain systemd[1]: tmp-crun.lZH3XF.mount: Deactivated successfully.
Oct 13 14:17:51 standalone.localdomain podman[192992]: 2025-10-13 14:17:51.778188779 +0000 UTC m=+0.082466968 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, architecture=x86_64, release=1, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-mariadb-container, container_name=clustercheck, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step2, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, name=rhosp17/openstack-mariadb, vendor=Red Hat, Inc.)
Oct 13 14:17:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1298: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 70 KiB/s rd, 170 B/s wr, 19 op/s
Oct 13 14:17:51 standalone.localdomain podman[192992]: 2025-10-13 14:17:51.831035516 +0000 UTC m=+0.135313655 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, build-date=2025-07-21T12:58:45, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, vendor=Red 
Hat, Inc., container_name=clustercheck, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:17:51 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:17:51 standalone.localdomain podman[192990]: 2025-10-13 14:17:51.81201439 +0000 UTC m=+0.118091115 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, container_name=nova_compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1)
Oct 13 14:17:51 standalone.localdomain podman[192991]: 2025-10-13 14:17:51.832999196 +0000 UTC m=+0.136657836 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, com.redhat.component=openstack-ovn-northd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, config_id=ovn_cluster_northd, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, description=Red Hat OpenStack Platform 17.1 ovn-northd, container_name=ovn_cluster_northd, build-date=2025-07-21T13:30:04, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, 
managed_by=tripleo_ansible, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-northd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-northd, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:17:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e63 do_prune osdmap full prune enabled
Oct 13 14:17:51 standalone.localdomain podman[192990]: 2025-10-13 14:17:51.898925064 +0000 UTC m=+0.205001829 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, container_name=nova_compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:17:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e64 e64: 1 total, 1 up, 1 in
Oct 13 14:17:51 standalone.localdomain ceph-mon[29756]: osdmap e63: 1 total, 1 up, 1 in
Oct 13 14:17:51 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:17:51 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e64: 1 total, 1 up, 1 in
Oct 13 14:17:51 standalone.localdomain podman[192991]: 2025-10-13 14:17:51.914152343 +0000 UTC m=+0.217810953 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, com.redhat.component=openstack-ovn-northd-container, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-northd, container_name=ovn_cluster_northd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=ovn_cluster_northd, build-date=2025-07-21T13:30:04, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', 
'/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-ovn-northd, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, version=17.1.9, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team)
Oct 13 14:17:51 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:17:52 standalone.localdomain ceph-mon[29756]: pgmap v1298: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 70 KiB/s rd, 170 B/s wr, 19 op/s
Oct 13 14:17:52 standalone.localdomain ceph-mon[29756]: osdmap e64: 1 total, 1 up, 1 in
Oct 13 14:17:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:17:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:17:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:17:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:17:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:17:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:17:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1300: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 86 KiB/s rd, 207 B/s wr, 24 op/s
Oct 13 14:17:54 standalone.localdomain sshd[193116]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:17:54 standalone.localdomain sshd[193116]: Accepted publickey for root from 192.168.122.11 port 55902 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:17:54 standalone.localdomain systemd-logind[45629]: New session 92 of user root.
Oct 13 14:17:54 standalone.localdomain systemd[1]: Started Session 92 of User root.
Oct 13 14:17:54 standalone.localdomain sshd[193116]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:54 standalone.localdomain ceph-mon[29756]: pgmap v1300: 177 pgs: 177 active+clean; 229 MiB data, 209 MiB used, 6.8 GiB / 7.0 GiB avail; 86 KiB/s rd, 207 B/s wr, 24 op/s
Oct 13 14:17:55 standalone.localdomain snmpd[110332]: empty variable list in _query
Oct 13 14:17:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:17:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1301: 177 pgs: 177 active+clean; 297 MiB data, 255 MiB used, 6.7 GiB / 7.0 GiB avail; 3.7 MiB/s rd, 7.8 MiB/s wr, 115 op/s
Oct 13 14:17:56 standalone.localdomain ceph-mon[29756]: pgmap v1301: 177 pgs: 177 active+clean; 297 MiB data, 255 MiB used, 6.7 GiB / 7.0 GiB avail; 3.7 MiB/s rd, 7.8 MiB/s wr, 115 op/s
Oct 13 14:17:56 standalone.localdomain haproxy[70940]: 172.21.0.2:49646 [13/Oct/2025:14:17:56.484] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 56/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:56 standalone.localdomain haproxy[70940]: 172.21.0.2:49646 [13/Oct/2025:14:17:56.491] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/294/294 201 8100 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:56 standalone.localdomain haproxy[70940]: 172.17.0.2:40280 [13/Oct/2025:14:17:56.815] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/22/22 200 8095 - - ---- 57/2/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:17:56 standalone.localdomain haproxy[70940]: 172.21.0.2:47128 [13/Oct/2025:14:17:56.811] cinder cinder/standalone.internalapi.localdomain 0/0/0/39/39 404 387 - - ---- 57/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/backups/backup HTTP/1.1"
Oct 13 14:17:56 standalone.localdomain haproxy[70940]: 172.21.0.2:47128 [13/Oct/2025:14:17:56.852] cinder cinder/standalone.internalapi.localdomain 0/0/0/16/16 200 1070 - - ---- 57/1/0/0/0 0/0 "GET /v3/e44641a80bcb466cb3dd688e48b72d8e/backups/detail?all_tenants=1&name=backup HTTP/1.1"
Oct 13 14:17:57 standalone.localdomain sshd[193119]: Received disconnect from 192.168.122.11 port 55902:11: disconnected by user
Oct 13 14:17:57 standalone.localdomain sshd[193119]: Disconnected from user root 192.168.122.11 port 55902
Oct 13 14:17:57 standalone.localdomain sshd[193116]: pam_unix(sshd:session): session closed for user root
Oct 13 14:17:57 standalone.localdomain systemd[1]: session-92.scope: Deactivated successfully.
Oct 13 14:17:57 standalone.localdomain systemd[1]: session-92.scope: Consumed 2.032s CPU time.
Oct 13 14:17:57 standalone.localdomain systemd-logind[45629]: Session 92 logged out. Waiting for processes to exit.
Oct 13 14:17:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:17:57 standalone.localdomain sudo[193197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:17:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:17:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:17:57 standalone.localdomain systemd-logind[45629]: Removed session 92.
Oct 13 14:17:57 standalone.localdomain sudo[193197]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:17:57 standalone.localdomain sudo[193197]: pam_unix(sudo:session): session closed for user root
Oct 13 14:17:57 standalone.localdomain sudo[193248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Oct 13 14:17:57 standalone.localdomain sudo[193248]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:17:57 standalone.localdomain systemd[1]: tmp-crun.YARqHm.mount: Deactivated successfully.
Oct 13 14:17:57 standalone.localdomain podman[193217]: 2025-10-13 14:17:57.282609304 +0000 UTC m=+0.092722314 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, build-date=2025-07-21T16:10:12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, container_name=cinder_scheduler, config_id=tripleo_step4, name=rhosp17/openstack-cinder-scheduler, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack 
osp-17.1, release=1, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-cinder-scheduler-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1)
Oct 13 14:17:57 standalone.localdomain systemd[1]: tmp-crun.kHiHWl.mount: Deactivated successfully.
Oct 13 14:17:57 standalone.localdomain podman[193211]: 2025-10-13 14:17:57.395430985 +0000 UTC m=+0.205739291 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, 
config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, architecture=x86_64)
Oct 13 14:17:57 standalone.localdomain podman[193214]: 2025-10-13 14:17:57.356021312 +0000 UTC m=+0.166871595 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=cinder_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-cinder-api-container, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:58:55, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, distribution-scope=public, release=1, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-api)
Oct 13 14:17:57 standalone.localdomain podman[193211]: 2025-10-13 14:17:57.407781236 +0000 UTC m=+0.218089552 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, container_name=iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack 
TripleO Team, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=)
Oct 13 14:17:57 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:17:57 standalone.localdomain podman[193217]: 2025-10-13 14:17:57.468100571 +0000 UTC m=+0.278213661 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, container_name=cinder_scheduler, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, vcs-type=git, distribution-scope=public, build-date=2025-07-21T16:10:12, name=rhosp17/openstack-cinder-scheduler, com.redhat.component=openstack-cinder-scheduler-container, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, summary=Red 
Hat OpenStack Platform 17.1 cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4)
Oct 13 14:17:57 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:17:57 standalone.localdomain podman[193214]: 2025-10-13 14:17:57.492880704 +0000 UTC m=+0.303731007 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, io.openshift.expose-services=, version=17.1.9, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:58:55, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, config_id=tripleo_step4, com.redhat.component=openstack-cinder-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, container_name=cinder_api_cron, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., release=1)
Oct 13 14:17:57 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:17:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:17:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2887649855' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:17:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:17:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2887649855' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:17:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2887649855' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:17:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2887649855' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:17:57 standalone.localdomain sudo[193248]: pam_unix(sudo:session): session closed for user root
Oct 13 14:17:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:17:57 standalone.localdomain sshd[193343]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:17:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:17:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:17:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:17:57 standalone.localdomain sudo[193345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:17:57 standalone.localdomain sudo[193345]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:17:57 standalone.localdomain sudo[193345]: pam_unix(sudo:session): session closed for user root
Oct 13 14:17:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1302: 177 pgs: 177 active+clean; 297 MiB data, 255 MiB used, 6.7 GiB / 7.0 GiB avail; 2.8 MiB/s rd, 5.9 MiB/s wr, 87 op/s
Oct 13 14:17:57 standalone.localdomain sshd[193343]: Accepted publickey for root from 192.168.122.11 port 55908 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:17:57 standalone.localdomain systemd-logind[45629]: New session 93 of user root.
Oct 13 14:17:57 standalone.localdomain systemd[1]: Started Session 93 of User root.
Oct 13 14:17:57 standalone.localdomain sshd[193343]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:17:57 standalone.localdomain sudo[193360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:17:57 standalone.localdomain sudo[193360]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:17:58 standalone.localdomain sudo[193360]: pam_unix(sudo:session): session closed for user root
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:17:58 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 30b93714-638e-406d-8b76-a3482e5470d4 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:17:58 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 30b93714-638e-406d-8b76-a3482e5470d4 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:17:58 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 30b93714-638e-406d-8b76-a3482e5470d4 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:17:58 standalone.localdomain sudo[193433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:17:58 standalone.localdomain sudo[193433]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:17:58 standalone.localdomain sudo[193433]: pam_unix(sudo:session): session closed for user root
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: pgmap v1302: 177 pgs: 177 active+clean; 297 MiB data, 255 MiB used, 6.7 GiB / 7.0 GiB avail; 2.8 MiB/s rd, 5.9 MiB/s wr, 87 op/s
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:17:58 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:17:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:17:59 standalone.localdomain runuser[193461]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:17:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1303: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 2.5 MiB/s rd, 5.2 MiB/s wr, 78 op/s
Oct 13 14:17:59 standalone.localdomain haproxy[70940]: 172.21.0.2:60810 [13/Oct/2025:14:17:59.795] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/4/4 300 515 - - ---- 56/2/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:17:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:18:00 standalone.localdomain haproxy[70940]: 172.21.0.2:60810 [13/Oct/2025:14:17:59.801] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/289/289 201 8100 - - ---- 56/2/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:18:00 standalone.localdomain haproxy[70940]: 172.21.0.2:38882 [13/Oct/2025:14:18:00.105] barbican barbican/standalone.internalapi.localdomain 0/0/0/3/3 300 487 - - ---- 57/1/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:18:00 standalone.localdomain haproxy[70940]: 172.17.0.2:57210 [13/Oct/2025:14:18:00.120] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/3/3 300 515 - - ---- 58/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:18:00 standalone.localdomain runuser[193461]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:18:00 standalone.localdomain runuser[193618]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:18:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:18:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e64 do_prune osdmap full prune enabled
Oct 13 14:18:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 e65: 1 total, 1 up, 1 in
Oct 13 14:18:00 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e65: 1 total, 1 up, 1 in
Oct 13 14:18:00 standalone.localdomain haproxy[70940]: 172.17.0.2:57210 [13/Oct/2025:14:18:00.126] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/372/372 201 8105 - - ---- 58/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:18:00 standalone.localdomain haproxy[70940]: 172.17.0.2:57210 [13/Oct/2025:14:18:00.504] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/21/21 200 8095 - - ---- 58/3/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:18:00 standalone.localdomain haproxy[70940]: 172.21.0.2:38882 [13/Oct/2025:14:18:00.110] barbican barbican/standalone.internalapi.localdomain 0/0/0/542/542 201 356 - - ---- 58/1/0/0/0 0/0 "POST /v1/secrets/ HTTP/1.1"
Oct 13 14:18:00 standalone.localdomain runuser[193618]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:18:00 standalone.localdomain runuser[193716]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:18:00 standalone.localdomain ceph-mon[29756]: pgmap v1303: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 2.5 MiB/s rd, 5.2 MiB/s wr, 78 op/s
Oct 13 14:18:00 standalone.localdomain ceph-mon[29756]: osdmap e65: 1 total, 1 up, 1 in
Oct 13 14:18:00 standalone.localdomain sshd[193375]: Received disconnect from 192.168.122.11 port 55908:11: disconnected by user
Oct 13 14:18:00 standalone.localdomain sshd[193375]: Disconnected from user root 192.168.122.11 port 55908
Oct 13 14:18:00 standalone.localdomain sshd[193343]: pam_unix(sshd:session): session closed for user root
Oct 13 14:18:00 standalone.localdomain systemd[1]: session-93.scope: Deactivated successfully.
Oct 13 14:18:00 standalone.localdomain systemd[1]: session-93.scope: Consumed 2.185s CPU time.
Oct 13 14:18:00 standalone.localdomain systemd-logind[45629]: Session 93 logged out. Waiting for processes to exit.
Oct 13 14:18:00 standalone.localdomain systemd-logind[45629]: Removed session 93.
Oct 13 14:18:01 standalone.localdomain sshd[193804]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:18:01 standalone.localdomain sshd[193804]: Accepted publickey for root from 192.168.122.11 port 43816 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:18:01 standalone.localdomain systemd-logind[45629]: New session 94 of user root.
Oct 13 14:18:01 standalone.localdomain systemd[1]: Started Session 94 of User root.
Oct 13 14:18:01 standalone.localdomain sshd[193804]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:18:01 standalone.localdomain runuser[193716]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:18:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1305: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 2.3 MiB/s rd, 4.7 MiB/s wr, 70 op/s
Oct 13 14:18:01 standalone.localdomain ceph-mon[29756]: pgmap v1305: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 2.3 MiB/s rd, 4.7 MiB/s wr, 70 op/s
Oct 13 14:18:02 standalone.localdomain systemd[1]: tmp-crun.QEGS29.mount: Deactivated successfully.
Oct 13 14:18:02 standalone.localdomain podman[193947]: 2025-10-13 14:18:02.624889108 +0000 UTC m=+0.083917053 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, io.buildah.version=1.33.12, name=rhosp17/openstack-mariadb, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-mariadb-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:18:02 standalone.localdomain podman[193947]: 2025-10-13 14:18:02.656871392 +0000 UTC m=+0.115899307 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, version=17.1.9, build-date=2025-07-21T12:58:45, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb, release=1, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:18:02 standalone.localdomain podman[193979]: 2025-10-13 14:18:02.916909363 +0000 UTC m=+0.089313949 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, name=rhosp17/openstack-haproxy, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=1, io.openshift.expose-services=, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:08:11, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-haproxy-container, io.buildah.version=1.33.12)
Oct 13 14:18:02 standalone.localdomain podman[193979]: 2025-10-13 14:18:02.945139792 +0000 UTC m=+0.117544398 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.buildah.version=1.33.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T13:08:11, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-haproxy, maintainer=OpenStack TripleO Team, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, com.redhat.component=openstack-haproxy-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 haproxy, release=1, vcs-type=git, version=17.1.9)
Oct 13 14:18:03 standalone.localdomain podman[194011]: 2025-10-13 14:18:03.014666142 +0000 UTC m=+0.060314987 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.openshift.expose-services=, com.redhat.component=openstack-rabbitmq-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, distribution-scope=public, build-date=2025-07-21T13:08:05, release=1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git)
Oct 13 14:18:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:18:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:18:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:18:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:18:03 standalone.localdomain podman[194058]: 2025-10-13 14:18:03.123032926 +0000 UTC m=+0.065928220 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, container_name=horizon, io.buildah.version=1.33.12, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-horizon, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 horizon, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:58:15, com.redhat.component=openstack-horizon-container, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 horizon, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 14:18:03 standalone.localdomain podman[194032]: 2025-10-13 14:18:03.098014236 +0000 UTC m=+0.068238981 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, io.openshift.expose-services=, release=1, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:18:03 standalone.localdomain podman[194030]: 2025-10-13 14:18:03.150820411 +0000 UTC m=+0.127230546 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-rabbitmq-container, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 14:18:03 standalone.localdomain podman[194011]: 2025-10-13 14:18:03.154374961 +0000 UTC m=+0.200023806 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, com.redhat.component=openstack-rabbitmq-container, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vcs-type=git, version=17.1.9, name=rhosp17/openstack-rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, build-date=2025-07-21T13:08:05, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1)
Oct 13 14:18:03 standalone.localdomain podman[194032]: 2025-10-13 14:18:03.233096963 +0000 UTC m=+0.203321708 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, container_name=heat_api_cron, batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:56:26, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-heat-api, release=1, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible)
Oct 13 14:18:03 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:18:03 standalone.localdomain podman[194085]: 2025-10-13 14:18:03.247703982 +0000 UTC m=+0.124928874 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, container_name=manila_scheduler, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 manila-scheduler, vendor=Red Hat, Inc., name=rhosp17/openstack-manila-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.buildah.version=1.33.12, build-date=2025-07-21T15:56:28, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, com.redhat.component=openstack-manila-scheduler-container)
Oct 13 14:18:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:18:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:18:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:18:03 standalone.localdomain podman[194058]: 2025-10-13 14:18:03.266869642 +0000 UTC m=+0.209764956 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, com.redhat.component=openstack-horizon-container, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-horizon, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=horizon, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, build-date=2025-07-21T13:58:15, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git)
Oct 13 14:18:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:18:03 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:18:03 standalone.localdomain podman[194085]: 2025-10-13 14:18:03.282836634 +0000 UTC m=+0.160061516 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, architecture=x86_64, config_id=tripleo_step4, container_name=manila_scheduler, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, com.redhat.component=openstack-manila-scheduler-container, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, name=rhosp17/openstack-manila-scheduler, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, build-date=2025-07-21T15:56:28, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12)
Oct 13 14:18:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:18:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:18:03 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:18:03 standalone.localdomain haproxy[70940]: 172.21.0.2:60816 [13/Oct/2025:14:18:03.353] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/2/2 300 515 - - ---- 57/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:18:03 standalone.localdomain podman[194031]: 2025-10-13 14:18:03.218647668 +0000 UTC m=+0.186280613 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, tcib_managed=true, version=17.1.9, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-keystone, build-date=2025-07-21T13:27:18, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=keystone, io.buildah.version=1.33.12, release=1, vcs-type=git)
Oct 13 14:18:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:18:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:18:03 standalone.localdomain podman[194135]: 2025-10-13 14:18:03.392381824 +0000 UTC m=+0.126628857 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-engine-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, release=1, version=17.1.9, build-date=2025-07-21T15:44:11, config_id=tripleo_step4, container_name=heat_engine, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12)
Oct 13 14:18:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:18:03 standalone.localdomain podman[194150]: 2025-10-13 14:18:03.345079079 +0000 UTC m=+0.068083076 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, container_name=heat_api, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, build-date=2025-07-21T15:56:26, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:18:03 standalone.localdomain podman[194135]: 2025-10-13 14:18:03.442814146 +0000 UTC m=+0.177061189 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, container_name=heat_engine, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, com.redhat.component=openstack-heat-engine-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, build-date=2025-07-21T15:44:11, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, architecture=x86_64, release=1)
Oct 13 14:18:03 standalone.localdomain podman[194134]: 2025-10-13 14:18:03.47219564 +0000 UTC m=+0.206459003 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:49:55, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, name=rhosp17/openstack-heat-api-cfn, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-api-cfn-container, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, release=1, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=heat_api_cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:18:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:18:03 standalone.localdomain podman[194137]: 2025-10-13 14:18:03.481562128 +0000 UTC m=+0.209313301 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, com.redhat.component=openstack-neutron-server-container, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, build-date=2025-07-21T15:44:03, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, container_name=neutron_api, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:18:03 standalone.localdomain podman[194134]: 2025-10-13 14:18:03.503757742 +0000 UTC m=+0.238021105 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, config_id=tripleo_step4, name=rhosp17/openstack-heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-cfn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cfn, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1)
Oct 13 14:18:03 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:18:03 standalone.localdomain podman[194243]: 2025-10-13 14:18:03.504918447 +0000 UTC m=+0.107521339 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step1, com.redhat.component=openstack-memcached-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, container_name=memcached, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, managed_by=tripleo_ansible, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:43, description=Red Hat OpenStack Platform 17.1 memcached, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1)
Oct 13 14:18:03 standalone.localdomain podman[194245]: 2025-10-13 14:18:03.577614404 +0000 UTC m=+0.174023696 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-conductor, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vendor=Red Hat, Inc., container_name=nova_conductor, name=rhosp17/openstack-nova-conductor, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-nova-conductor-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, release=1, build-date=2025-07-21T15:44:17, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:18:03 standalone.localdomain podman[194174]: 2025-10-13 14:18:03.371553763 +0000 UTC m=+0.065328651 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64)
Oct 13 14:18:03 standalone.localdomain podman[194290]: 2025-10-13 14:18:03.557276768 +0000 UTC m=+0.072706538 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, build-date=2025-07-21T15:58:55, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-cinder-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=cinder_api, name=rhosp17/openstack-cinder-api, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cinder-api, release=1, version=17.1.9, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b)
Oct 13 14:18:03 standalone.localdomain podman[194245]: 2025-10-13 14:18:03.627299023 +0000 UTC m=+0.223708335 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, tcib_managed=true, container_name=nova_conductor, release=1, build-date=2025-07-21T15:44:17, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, name=rhosp17/openstack-nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.component=openstack-nova-conductor-container, description=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, batch=17.1_20250721.1)
Oct 13 14:18:03 standalone.localdomain podman[194150]: 2025-10-13 14:18:03.63889989 +0000 UTC m=+0.361903877 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api, container_name=heat_api, distribution-scope=public, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, tcib_managed=true)
Oct 13 14:18:03 standalone.localdomain podman[194137]: 2025-10-13 14:18:03.64183861 +0000 UTC m=+0.369589813 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-server, release=1, name=rhosp17/openstack-neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, container_name=neutron_api, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:44:03, com.redhat.component=openstack-neutron-server-container, batch=17.1_20250721.1)
Oct 13 14:18:03 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:18:03 standalone.localdomain podman[194290]: 2025-10-13 14:18:03.652662413 +0000 UTC m=+0.168092193 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, vendor=Red Hat, Inc., com.redhat.component=openstack-cinder-api-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, build-date=2025-07-21T15:58:55, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, container_name=cinder_api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, 
maintainer=OpenStack TripleO Team, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:18:03 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:18:03 standalone.localdomain podman[194244]: 2025-10-13 14:18:03.618559174 +0000 UTC m=+0.203842483 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-manila-api-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, container_name=manila_api_cron, name=rhosp17/openstack-manila-api, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, tcib_managed=true, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, batch=17.1_20250721.1, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team)
Oct 13 14:18:03 standalone.localdomain haproxy[70940]: 172.21.0.2:60816 [13/Oct/2025:14:18:03.357] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/329/329 201 8100 - - ---- 57/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:18:03 standalone.localdomain podman[194243]: 2025-10-13 14:18:03.692416716 +0000 UTC m=+0.295019608 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, batch=17.1_20250721.1, build-date=2025-07-21T12:58:43, name=rhosp17/openstack-memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, summary=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-memcached-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, distribution-scope=public, vcs-type=git, release=1, io.openshift.expose-services=, architecture=x86_64, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:18:03 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:18:03 standalone.localdomain podman[194174]: 2025-10-13 14:18:03.711765912 +0000 UTC m=+0.405540820 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, 
io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, tcib_managed=true, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1)
Oct 13 14:18:03 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:18:03 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:18:03 standalone.localdomain podman[194244]: 2025-10-13 14:18:03.800524803 +0000 UTC m=+0.385808132 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, release=1, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, version=17.1.9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 manila-api, description=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.component=openstack-manila-api-container, container_name=manila_api_cron, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-manila-api, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, maintainer=OpenStack TripleO 
Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1)
Oct 13 14:18:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1306: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 69 op/s
Oct 13 14:18:03 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:18:03 standalone.localdomain podman[194031]: 2025-10-13 14:18:03.859023532 +0000 UTC m=+0.826656497 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-keystone, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=0693142a4093f932157b8019660e85aa608befc8, build-date=2025-07-21T13:27:18, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', 
'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, container_name=keystone, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-keystone-container, vcs-type=git, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=)
Oct 13 14:18:03 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:18:03 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:18:03 standalone.localdomain podman[194173]: 2025-10-13 14:18:03.680999145 +0000 UTC m=+0.376393652 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, release=1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, com.redhat.component=openstack-nova-scheduler-container, io.openshift.expose-services=, build-date=2025-07-21T16:02:54, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, managed_by=tripleo_ansible, container_name=nova_scheduler, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, name=rhosp17/openstack-nova-scheduler, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-type=git)
Oct 13 14:18:03 standalone.localdomain sshd[193838]: Received disconnect from 192.168.122.11 port 43816:11: disconnected by user
Oct 13 14:18:03 standalone.localdomain sshd[193838]: Disconnected from user root 192.168.122.11 port 43816
Oct 13 14:18:03 standalone.localdomain sshd[193804]: pam_unix(sshd:session): session closed for user root
Oct 13 14:18:03 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:18:03 standalone.localdomain systemd[1]: session-94.scope: Deactivated successfully.
Oct 13 14:18:03 standalone.localdomain systemd[1]: session-94.scope: Consumed 2.119s CPU time.
Oct 13 14:18:03 standalone.localdomain systemd-logind[45629]: Session 94 logged out. Waiting for processes to exit.
Oct 13 14:18:03 standalone.localdomain systemd-logind[45629]: Removed session 94.
Oct 13 14:18:04 standalone.localdomain podman[194173]: 2025-10-13 14:18:04.013092303 +0000 UTC m=+0.708486790 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, container_name=nova_scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-scheduler, architecture=x86_64, build-date=2025-07-21T16:02:54, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, 
vcs-ref=4f7cb55437a2b42333072591935a511528e79935, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, version=17.1.9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler)
Oct 13 14:18:04 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:18:04 standalone.localdomain sshd[194386]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:18:04 standalone.localdomain sshd[194386]: Accepted publickey for root from 192.168.122.11 port 43818 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:18:04 standalone.localdomain systemd-logind[45629]: New session 95 of user root.
Oct 13 14:18:04 standalone.localdomain systemd[1]: Started Session 95 of User root.
Oct 13 14:18:04 standalone.localdomain sshd[194386]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:18:04 standalone.localdomain ceph-mon[29756]: pgmap v1306: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 69 op/s
Oct 13 14:18:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:18:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1307: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 818 B/s rd, 30 KiB/s wr, 1 op/s
Oct 13 14:18:06 standalone.localdomain haproxy[70940]: 172.21.0.2:60826 [13/Oct/2025:14:18:06.269] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/5/5 300 515 - - ---- 57/3/0/0/0 0/0 "GET / HTTP/1.1"
Oct 13 14:18:06 standalone.localdomain ceph-mon[29756]: pgmap v1307: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 818 B/s rd, 30 KiB/s wr, 1 op/s
Oct 13 14:18:06 standalone.localdomain haproxy[70940]: 172.21.0.2:60826 [13/Oct/2025:14:18:06.276] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/301/301 201 8100 - - ---- 57/3/0/0/0 0/0 "POST /v3/auth/tokens HTTP/1.1"
Oct 13 14:18:06 standalone.localdomain haproxy[70940]: 172.21.0.2:60826 [13/Oct/2025:14:18:06.587] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/37/37 404 291 - - ---- 57/3/0/0/0 0/0 "GET /v3/users/admin HTTP/1.1"
Oct 13 14:18:06 standalone.localdomain haproxy[70940]: 172.21.0.2:60826 [13/Oct/2025:14:18:06.628] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/32/32 200 572 - - ---- 57/3/0/0/0 0/0 "GET /v3/users?name=admin HTTP/1.1"
Oct 13 14:18:06 standalone.localdomain haproxy[70940]: 172.21.0.2:60826 [13/Oct/2025:14:18:06.663] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/21/21 201 458 - - ---- 57/3/0/0/0 0/0 "POST /v3/credentials HTTP/1.1"
Oct 13 14:18:06 standalone.localdomain sshd[194397]: Received disconnect from 192.168.122.11 port 43818:11: disconnected by user
Oct 13 14:18:06 standalone.localdomain sshd[194397]: Disconnected from user root 192.168.122.11 port 43818
Oct 13 14:18:06 standalone.localdomain sshd[194386]: pam_unix(sshd:session): session closed for user root
Oct 13 14:18:06 standalone.localdomain systemd[1]: session-95.scope: Deactivated successfully.
Oct 13 14:18:06 standalone.localdomain systemd[1]: session-95.scope: Consumed 2.136s CPU time.
Oct 13 14:18:06 standalone.localdomain systemd-logind[45629]: Session 95 logged out. Waiting for processes to exit.
Oct 13 14:18:06 standalone.localdomain systemd-logind[45629]: Removed session 95.
Oct 13 14:18:07 standalone.localdomain podman[194442]: 2025-10-13 14:18:07.014697984 +0000 UTC m=+0.075640388 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-cinder-volume-container, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T16:13:39, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-volume, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-volume)
Oct 13 14:18:07 standalone.localdomain podman[194442]: 2025-10-13 14:18:07.043328285 +0000 UTC m=+0.104270649 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, name=rhosp17/openstack-cinder-volume, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-volume, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-volume-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, io.buildah.version=1.33.12, build-date=2025-07-21T16:13:39, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, summary=Red Hat OpenStack Platform 17.1 cinder-volume, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, release=1)
Oct 13 14:18:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1308: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 818 B/s rd, 30 KiB/s wr, 1 op/s
Oct 13 14:18:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:18:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:18:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:18:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:18:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:18:08 standalone.localdomain podman[194493]: 2025-10-13 14:18:08.796855292 +0000 UTC m=+0.058328636 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, build-date=2025-07-21T16:11:22, distribution-scope=public, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, com.redhat.component=openstack-swift-account-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9)
Oct 13 14:18:08 standalone.localdomain podman[194490]: 2025-10-13 14:18:08.85172258 +0000 UTC m=+0.119577301 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-type=git, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, release=1, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Oct 13 14:18:08 standalone.localdomain ceph-mon[29756]: pgmap v1308: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 818 B/s rd, 30 KiB/s wr, 1 op/s
Oct 13 14:18:08 standalone.localdomain podman[194503]: 2025-10-13 14:18:08.907267809 +0000 UTC m=+0.165357749 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git, container_name=nova_vnc_proxy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, com.redhat.component=openstack-nova-novncproxy-container, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, build-date=2025-07-21T15:24:10, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, name=rhosp17/openstack-nova-novncproxy, architecture=x86_64, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, vendor=Red Hat, Inc., tcib_managed=true)
Oct 13 14:18:08 standalone.localdomain podman[194489]: 2025-10-13 14:18:08.956824614 +0000 UTC m=+0.224898001 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, distribution-scope=public, release=1, name=rhosp17/openstack-swift-object, batch=17.1_20250721.1, architecture=x86_64, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-object-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 
17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T14:56:28, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team)
Oct 13 14:18:08 standalone.localdomain podman[194493]: 2025-10-13 14:18:08.980935346 +0000 UTC m=+0.242408680 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, 
distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, name=rhosp17/openstack-swift-account, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible)
Oct 13 14:18:08 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:18:09 standalone.localdomain podman[194491]: 2025-10-13 14:18:09.055623514 +0000 UTC m=+0.320162403 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, container_name=swift_container_server, version=17.1.9, architecture=x86_64, release=1, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true)
Oct 13 14:18:09 standalone.localdomain podman[194489]: 2025-10-13 14:18:09.162019707 +0000 UTC m=+0.430093094 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, vcs-type=git, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 
17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:18:09 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:18:09 standalone.localdomain podman[194490]: 2025-10-13 14:18:09.181441535 +0000 UTC m=+0.449296256 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-type=git, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, 
managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute)
Oct 13 14:18:09 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:18:09 standalone.localdomain podman[194503]: 2025-10-13 14:18:09.211843741 +0000 UTC m=+0.469933671 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, build-date=2025-07-21T15:24:10, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-nova-novncproxy, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, container_name=nova_vnc_proxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-nova-novncproxy-container, vendor=Red Hat, Inc.)
Oct 13 14:18:09 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:18:09 standalone.localdomain podman[194491]: 2025-10-13 14:18:09.255958418 +0000 UTC m=+0.520497327 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vendor=Red Hat, Inc., container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, 
com.redhat.component=openstack-swift-container-container, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, vcs-ref=6c85b2809937c86a19237cd4252af09396645905)
Oct 13 14:18:09 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:18:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1309: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 12 KiB/s wr, 0 op/s
Oct 13 14:18:10 standalone.localdomain ceph-mon[29756]: pgmap v1309: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 12 KiB/s wr, 0 op/s
Oct 13 14:18:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:18:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:18:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:18:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:18:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:18:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:18:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:18:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:18:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:18:10 standalone.localdomain podman[194763]: 2025-10-13 14:18:10.963995286 +0000 UTC m=+0.226386397 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, version=17.1.9, 
com.redhat.component=openstack-swift-proxy-server-container, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, container_name=swift_proxy, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37)
Oct 13 14:18:10 standalone.localdomain podman[194768]: 2025-10-13 14:18:10.935592222 +0000 UTC m=+0.191283188 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, 
version=17.1.9, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:18:10 standalone.localdomain systemd[1]: tmp-crun.BMPf17.mount: Deactivated successfully.
Oct 13 14:18:11 standalone.localdomain podman[194768]: 2025-10-13 14:18:11.001571372 +0000 UTC m=+0.257262328 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=ovn_metadata_agent)
Oct 13 14:18:11 standalone.localdomain podman[194784]: 2025-10-13 14:18:11.030617335 +0000 UTC m=+0.284170635 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, release=1, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9)
Oct 13 14:18:11 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:18:11 standalone.localdomain podman[194784]: 2025-10-13 14:18:11.078950643 +0000 UTC m=+0.332503963 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, release=1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git)
Oct 13 14:18:11 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:18:11 standalone.localdomain podman[194764]: 2025-10-13 14:18:11.090670244 +0000 UTC m=+0.332818402 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, name=rhosp17/openstack-nova-api, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, container_name=nova_metadata, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, com.redhat.component=openstack-nova-api-container, io.openshift.expose-services=, release=1, tcib_managed=true, build-date=2025-07-21T16:05:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:18:11 standalone.localdomain podman[194765]: 2025-10-13 14:18:11.002367056 +0000 UTC m=+0.260944940 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, config_id=tripleo_step4, container_name=glance_api_internal, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-glance-api-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, release=1, architecture=x86_64)
Oct 13 14:18:11 standalone.localdomain podman[194791]: 2025-10-13 14:18:11.145076407 +0000 UTC m=+0.391995033 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, distribution-scope=public, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=glance_api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, release=1)
Oct 13 14:18:11 standalone.localdomain podman[194765]: 2025-10-13 14:18:11.242249718 +0000 UTC m=+0.500827592 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, container_name=glance_api_internal, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-glance-api, tcib_managed=true)
Oct 13 14:18:11 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:18:11 standalone.localdomain podman[194761]: 2025-10-13 14:18:11.261768828 +0000 UTC m=+0.526305626 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-glance-api-container, build-date=2025-07-21T13:58:20, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., container_name=glance_api_cron, name=rhosp17/openstack-glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:18:11 standalone.localdomain podman[194761]: 2025-10-13 14:18:11.27189663 +0000 UTC m=+0.536433408 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-glance-api-container, container_name=glance_api_cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:18:11 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:18:11 standalone.localdomain podman[194763]: 2025-10-13 14:18:11.285425496 +0000 UTC m=+0.547816597 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vcs-type=git, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_proxy, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.component=openstack-swift-proxy-server-container)
Oct 13 14:18:11 standalone.localdomain podman[194764]: 2025-10-13 14:18:11.285707875 +0000 UTC m=+0.527856023 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, version=17.1.9, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, release=1, com.redhat.component=openstack-nova-api-container, description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_metadata, distribution-scope=public, vcs-type=git, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, build-date=2025-07-21T16:05:11, name=rhosp17/openstack-nova-api, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:18:11 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:18:11 standalone.localdomain podman[194762]: 2025-10-13 14:18:11.236674386 +0000 UTC m=+0.501776881 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.component=openstack-placement-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:58:12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-placement-api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=placement_api, architecture=x86_64)
Oct 13 14:18:11 standalone.localdomain podman[194762]: 2025-10-13 14:18:11.36742645 +0000 UTC m=+0.632528915 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 placement-api, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, io.buildah.version=1.33.12, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vendor=Red Hat, Inc., container_name=placement_api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, summary=Red Hat OpenStack Platform 17.1 placement-api, name=rhosp17/openstack-placement-api, com.redhat.component=openstack-placement-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T13:58:12)
Oct 13 14:18:11 standalone.localdomain podman[194791]: 2025-10-13 14:18:11.376500439 +0000 UTC m=+0.623419055 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, architecture=x86_64, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, container_name=glance_api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, release=1, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:18:11 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:18:11 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:18:11 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:18:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1310: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s wr, 0 op/s
Oct 13 14:18:12 standalone.localdomain runuser[195101]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:18:12 standalone.localdomain runuser[195101]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:18:12 standalone.localdomain ceph-mon[29756]: pgmap v1310: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s wr, 0 op/s
Oct 13 14:18:13 standalone.localdomain runuser[195170]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:18:13 standalone.localdomain runuser[195170]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:18:13 standalone.localdomain runuser[195234]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:18:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1311: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 9.7 KiB/s wr, 0 op/s
Oct 13 14:18:13 standalone.localdomain ceph-mon[29756]: pgmap v1311: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 9.7 KiB/s wr, 0 op/s
Oct 13 14:18:14 standalone.localdomain runuser[195234]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:18:15 standalone.localdomain haproxy[70940]: 172.17.0.2:59026 [13/Oct/2025:14:18:15.247] neutron neutron/standalone.internalapi.localdomain 0/0/0/121/121 200 254 - - ---- 55/1/0/0/0 0/0 "GET /v2.0/ports?device_id=54a46fec-332e-42f9-83ed-88e763d13f63&fields=binding%3Ahost_id&fields=binding%3Avif_type HTTP/1.1"
Oct 13 14:18:15 standalone.localdomain haproxy[70940]: 172.17.0.2:59026 [13/Oct/2025:14:18:15.409] neutron neutron/standalone.internalapi.localdomain 0/0/0/49/49 200 1332 - - ---- 55/1/0/0/0 0/0 "GET /v2.0/ports?tenant_id=e44641a80bcb466cb3dd688e48b72d8e&device_id=54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:18:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:18:15 standalone.localdomain haproxy[70940]: 172.17.0.2:59026 [13/Oct/2025:14:18:15.476] neutron neutron/standalone.internalapi.localdomain 0/0/0/91/91 200 901 - - ---- 55/1/0/0/0 0/0 "GET /v2.0/networks?id=0c455abd-28d4-47e7-a254-e50de0526def HTTP/1.1"
Oct 13 14:18:15 standalone.localdomain haproxy[70940]: 172.17.0.2:59026 [13/Oct/2025:14:18:15.573] neutron neutron/standalone.internalapi.localdomain 0/0/0/66/66 200 1062 - - ---- 55/1/0/0/0 0/0 "GET /v2.0/floatingips?fixed_ip_address=192.168.0.238&port_id=8a49767c-fb09-4185-95da-4261d8043fad HTTP/1.1"
Oct 13 14:18:15 standalone.localdomain haproxy[70940]: 172.17.0.2:59026 [13/Oct/2025:14:18:15.643] neutron neutron/standalone.internalapi.localdomain 0/0/0/33/33 200 810 - - ---- 55/1/0/0/0 0/0 "GET /v2.0/subnets?id=e1cc60b1-626d-46b0-8458-47640298fa14 HTTP/1.1"
Oct 13 14:18:15 standalone.localdomain haproxy[70940]: 172.17.0.2:59026 [13/Oct/2025:14:18:15.678] neutron neutron/standalone.internalapi.localdomain 0/0/0/81/81 200 1352 - - ---- 55/1/0/0/0 0/0 "GET /v2.0/ports?network_id=0c455abd-28d4-47e7-a254-e50de0526def&device_owner=network%3Adhcp HTTP/1.1"
Oct 13 14:18:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1312: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 9.7 KiB/s wr, 0 op/s
Oct 13 14:18:15 standalone.localdomain haproxy[70940]: 172.17.0.2:59026 [13/Oct/2025:14:18:15.766] neutron neutron/standalone.internalapi.localdomain 0/0/0/73/73 200 187 - - ---- 55/1/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=segments HTTP/1.1"
Oct 13 14:18:15 standalone.localdomain haproxy[70940]: 172.17.0.2:59026 [13/Oct/2025:14:18:15.845] neutron neutron/standalone.internalapi.localdomain 0/0/0/76/76 200 252 - - ---- 55/1/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=provider%3Aphysical_network&fields=provider%3Anetwork_type HTTP/1.1"
Oct 13 14:18:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:18:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2115866227' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:18:16 standalone.localdomain ceph-mon[29756]: pgmap v1312: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 9.7 KiB/s wr, 0 op/s
Oct 13 14:18:16 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2115866227' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:18:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:18:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:18:16 standalone.localdomain haproxy[70940]: 172.17.0.2:37114 [13/Oct/2025:14:18:16.732] placement placement/standalone.internalapi.localdomain 0/0/0/20/20 200 497 - - ---- 56/1/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/allocations HTTP/1.1"
Oct 13 14:18:16 standalone.localdomain systemd[1]: tmp-crun.Wv8bt1.mount: Deactivated successfully.
Oct 13 14:18:16 standalone.localdomain podman[195339]: 2025-10-13 14:18:16.82554635 +0000 UTC m=+0.087757241 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-keystone-container, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-keystone, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, build-date=2025-07-21T13:27:18, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=keystone_cron, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64)
Oct 13 14:18:16 standalone.localdomain podman[195339]: 2025-10-13 14:18:16.871955349 +0000 UTC m=+0.134166260 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, version=17.1.9, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, name=rhosp17/openstack-keystone, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=keystone_cron)
Oct 13 14:18:16 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:18:16 standalone.localdomain podman[195340]: 2025-10-13 14:18:16.916668644 +0000 UTC m=+0.178526604 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-nova-api, version=17.1.9, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_api, description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_id=tripleo_step4, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 
17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, build-date=2025-07-21T16:05:11, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team)
Oct 13 14:18:16 standalone.localdomain podman[195340]: 2025-10-13 14:18:16.971897773 +0000 UTC m=+0.233755753 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, summary=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-api, release=1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-api-container, container_name=nova_api, 
vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1)
Oct 13 14:18:16 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:18:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:18:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2966865833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:18:17 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2966865833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:18:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1313: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 1023 B/s wr, 0 op/s
Oct 13 14:18:18 standalone.localdomain ceph-mon[29756]: pgmap v1313: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 1023 B/s wr, 0 op/s
Oct 13 14:18:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:18:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:18:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:18:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:18:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:18:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:18:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1314: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 1023 B/s wr, 0 op/s
Oct 13 14:18:19 standalone.localdomain podman[195438]: 2025-10-13 14:18:19.84398424 +0000 UTC m=+0.088530515 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_keystone_listener, config_id=tripleo_step3, distribution-scope=public, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-barbican-keystone-listener, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-barbican-keystone-listener-container, release=1, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, build-date=2025-07-21T16:18:19, vendor=Red Hat, Inc.)
Oct 13 14:18:19 standalone.localdomain podman[195431]: 2025-10-13 14:18:19.82513031 +0000 UTC m=+0.087288867 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-dhcp-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:54, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-dhcp-agent, release=1, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, vendor=Red Hat, Inc., architecture=x86_64)
Oct 13 14:18:19 standalone.localdomain podman[195438]: 2025-10-13 14:18:19.887971863 +0000 UTC m=+0.132518148 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, build-date=2025-07-21T16:18:19, io.openshift.expose-services=, name=rhosp17/openstack-barbican-keystone-listener, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_keystone_listener, distribution-scope=public, com.redhat.component=openstack-barbican-keystone-listener-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:18:19 standalone.localdomain systemd[1]: tmp-crun.aEKRvu.mount: Deactivated successfully.
Oct 13 14:18:19 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:18:19 standalone.localdomain podman[195453]: 2025-10-13 14:18:19.950940641 +0000 UTC m=+0.187115959 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T16:05:11, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, 
vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-api-container, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:18:19 standalone.localdomain podman[195445]: 2025-10-13 14:18:19.917521383 +0000 UTC m=+0.156904999 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, container_name=barbican_worker, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 barbican-worker, build-date=2025-07-21T15:36:22, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-worker, 
com.redhat.component=openstack-barbican-worker-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, release=1, tcib_managed=true)
Oct 13 14:18:20 standalone.localdomain podman[195445]: 2025-10-13 14:18:20.002864278 +0000 UTC m=+0.242247894 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, vcs-type=git, build-date=2025-07-21T15:36:22, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-barbican-worker, com.redhat.component=openstack-barbican-worker-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, container_name=barbican_worker, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 barbican-worker, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, release=1, config_id=tripleo_step3, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:18:20 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:18:20 standalone.localdomain podman[195439]: 2025-10-13 14:18:20.019256663 +0000 UTC m=+0.257098222 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, config_id=tripleo_step4, container_name=neutron_sriov_agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, 
com.redhat.component=openstack-neutron-sriov-agent-container, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1)
Oct 13 14:18:20 standalone.localdomain podman[195453]: 2025-10-13 14:18:20.038064232 +0000 UTC m=+0.274239540 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=nova_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, version=17.1.9, com.redhat.component=openstack-nova-api-container, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, 
architecture=x86_64, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T16:05:11)
Oct 13 14:18:20 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:18:20 standalone.localdomain podman[195431]: 2025-10-13 14:18:20.061111061 +0000 UTC m=+0.323269618 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=neutron_dhcp, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, distribution-scope=public, vcs-type=git, build-date=2025-07-21T16:28:54, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-dhcp-agent, config_id=tripleo_step4, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=)
Oct 13 14:18:20 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:18:20 standalone.localdomain podman[195439]: 2025-10-13 14:18:20.092157616 +0000 UTC m=+0.329999175 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.buildah.version=1.33.12, container_name=neutron_sriov_agent, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, name=rhosp17/openstack-neutron-sriov-agent, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-sriov-agent-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 13 14:18:20 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:18:20 standalone.localdomain podman[195432]: 2025-10-13 14:18:20.107131576 +0000 UTC m=+0.362756432 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.openshift.expose-services=, name=rhosp17/openstack-barbican-api, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-api, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 barbican-api, vendor=Red Hat, Inc., container_name=barbican_api, release=1, batch=17.1_20250721.1, build-date=2025-07-21T15:22:44, com.redhat.component=openstack-barbican-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12)
Oct 13 14:18:20 standalone.localdomain podman[195432]: 2025-10-13 14:18:20.132863419 +0000 UTC m=+0.388488295 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, container_name=barbican_api, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, build-date=2025-07-21T15:22:44, com.redhat.component=openstack-barbican-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, 
config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-api, tcib_managed=true, io.openshift.expose-services=, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1)
Oct 13 14:18:20 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:18:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:18:20 standalone.localdomain ceph-mon[29756]: pgmap v1314: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 1023 B/s wr, 0 op/s
Oct 13 14:18:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1315: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:18:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:18:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:18:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:18:22 standalone.localdomain podman[195849]: 2025-10-13 14:18:22.822967585 +0000 UTC m=+0.079538928 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, com.redhat.component=openstack-ovn-northd-container, name=rhosp17/openstack-ovn-northd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, config_id=ovn_cluster_northd, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, distribution-scope=public, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, maintainer=OpenStack TripleO Team, container_name=ovn_cluster_northd, vendor=Red Hat, Inc., build-date=2025-07-21T13:30:04, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:18:22 standalone.localdomain podman[195849]: 2025-10-13 14:18:22.866515224 +0000 UTC m=+0.123086567 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, com.redhat.component=openstack-ovn-northd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, build-date=2025-07-21T13:30:04, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, release=1, managed_by=tripleo_ansible, vcs-type=git, container_name=ovn_cluster_northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-northd, version=17.1.9, name=rhosp17/openstack-ovn-northd, 
vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=ovn_cluster_northd, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, architecture=x86_64)
Oct 13 14:18:22 standalone.localdomain systemd[1]: tmp-crun.LnsrfU.mount: Deactivated successfully.
Oct 13 14:18:22 standalone.localdomain podman[195845]: 2025-10-13 14:18:22.876156901 +0000 UTC m=+0.132489727 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, release=1, container_name=nova_compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, version=17.1.9)
Oct 13 14:18:22 standalone.localdomain ceph-mon[29756]: pgmap v1315: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:18:22 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:18:22 standalone.localdomain podman[195845]: 2025-10-13 14:18:22.9040659 +0000 UTC m=+0.160398706 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T14:48:37, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team)
Oct 13 14:18:22 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:18:22 standalone.localdomain podman[195851]: 2025-10-13 14:18:22.920126375 +0000 UTC m=+0.172275973 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, version=17.1.9, container_name=clustercheck, config_id=tripleo_step2, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, maintainer=OpenStack TripleO 
Team, build-date=2025-07-21T12:58:45, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb)
Oct 13 14:18:22 standalone.localdomain podman[195851]: 2025-10-13 14:18:22.963642473 +0000 UTC m=+0.215792061 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step2, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, container_name=clustercheck, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, vendor=Red Hat, Inc., release=1)
Oct 13 14:18:22 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:18:23
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr', 'images', 'volumes', 'manila_data', 'backups', 'manila_metadata', 'vms']
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:18:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1316: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:18:24 standalone.localdomain ceph-mon[29756]: pgmap v1316: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:18:25 standalone.localdomain runuser[195960]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:18:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:18:25 standalone.localdomain runuser[195960]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:18:25 standalone.localdomain runuser[196021]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:18:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1317: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 13 14:18:26 standalone.localdomain runuser[196021]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:18:26 standalone.localdomain runuser[196083]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:18:26 standalone.localdomain ceph-mon[29756]: pgmap v1317: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 13 14:18:27 standalone.localdomain runuser[196083]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:18:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:18:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:18:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:18:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1318: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 13 14:18:27 standalone.localdomain podman[196150]: 2025-10-13 14:18:27.834474711 +0000 UTC m=+0.093447727 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, container_name=cinder_api_cron, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-api-container, config_id=tripleo_step4, build-date=2025-07-21T15:58:55, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, release=1)
Oct 13 14:18:27 standalone.localdomain podman[196150]: 2025-10-13 14:18:27.94035099 +0000 UTC m=+0.199324096 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, name=rhosp17/openstack-cinder-api, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, container_name=cinder_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, build-date=2025-07-21T15:58:55, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, tcib_managed=true, com.redhat.component=openstack-cinder-api-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1)
Oct 13 14:18:27 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:18:28 standalone.localdomain podman[196151]: 2025-10-13 14:18:28.025417577 +0000 UTC m=+0.279097720 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, name=rhosp17/openstack-cinder-scheduler, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, build-date=2025-07-21T16:10:12, io.buildah.version=1.33.12, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-scheduler-container)
Oct 13 14:18:28 standalone.localdomain podman[196149]: 2025-10-13 14:18:27.944197697 +0000 UTC m=+0.202359737 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, config_id=tripleo_step3, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-iscsid)
Oct 13 14:18:28 standalone.localdomain podman[196149]: 2025-10-13 14:18:28.078093958 +0000 UTC m=+0.336256008 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:27:15, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1)
Oct 13 14:18:28 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:18:28 standalone.localdomain podman[196151]: 2025-10-13 14:18:28.098242188 +0000 UTC m=+0.351922331 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T16:10:12, release=1, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-scheduler-container, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, name=rhosp17/openstack-cinder-scheduler, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, container_name=cinder_scheduler, config_id=tripleo_step4, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team)
Oct 13 14:18:28 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:18:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:18:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:18:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:18:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:18:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:18:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:18:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009512623883305416 of space, bias 1.0, pg target 0.9512623883305417 quantized to 32 (current 32)
Oct 13 14:18:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:18:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:18:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:18:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:18:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:18:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:18:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:18:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:18:28 standalone.localdomain ceph-mon[29756]: pgmap v1318: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 13 14:18:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1319: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 13 14:18:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:18:30 standalone.localdomain podman[196320]: 2025-10-13 14:18:30.648402517 +0000 UTC m=+0.101286017 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 manila-share, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-share, build-date=2025-07-21T15:22:36, com.redhat.component=openstack-manila-share-container, release=1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, name=rhosp17/openstack-manila-share, vendor=Red Hat, Inc.)
Oct 13 14:18:30 standalone.localdomain podman[196320]: 2025-10-13 14:18:30.6799986 +0000 UTC m=+0.132882040 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, tcib_managed=true, com.redhat.component=openstack-manila-share-container, release=1, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-manila-share, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T15:22:36, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 manila-share)
Oct 13 14:18:30 standalone.localdomain ceph-mon[29756]: pgmap v1319: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 13 14:18:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1320: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 13 14:18:32 standalone.localdomain ceph-mon[29756]: pgmap v1320: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 13 14:18:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:18:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:18:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:18:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:18:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:18:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:18:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:18:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:18:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:18:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1321: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 13 14:18:33 standalone.localdomain systemd[1]: tmp-crun.3sVfHx.mount: Deactivated successfully.
Oct 13 14:18:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:18:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:18:33 standalone.localdomain podman[196579]: 2025-10-13 14:18:33.880047937 +0000 UTC m=+0.112246335 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-cinder-api, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, architecture=x86_64, container_name=cinder_api, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, build-date=2025-07-21T15:58:55, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']})
Oct 13 14:18:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:18:33 standalone.localdomain podman[196566]: 2025-10-13 14:18:33.951372611 +0000 UTC m=+0.198334783 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-horizon-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, config_id=tripleo_step3, name=rhosp17/openstack-horizon, architecture=x86_64, build-date=2025-07-21T13:58:15, summary=Red Hat OpenStack Platform 17.1 horizon, container_name=horizon, io.openshift.expose-services=, release=1)
Oct 13 14:18:33 standalone.localdomain podman[196574]: 2025-10-13 14:18:33.906183811 +0000 UTC m=+0.141246797 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, version=17.1.9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.component=openstack-nova-conductor-container, io.openshift.expose-services=, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-conductor, build-date=2025-07-21T15:44:17, 
com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, container_name=nova_conductor, vcs-type=git, release=1, batch=17.1_20250721.1)
Oct 13 14:18:33 standalone.localdomain podman[196574]: 2025-10-13 14:18:33.992416254 +0000 UTC m=+0.227479230 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-conductor, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-conductor-container, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, 
container_name=nova_conductor, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-conductor, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T15:44:17, managed_by=tripleo_ansible)
Oct 13 14:18:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:18:34 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:18:34 standalone.localdomain podman[196581]: 2025-10-13 14:18:34.009794259 +0000 UTC m=+0.242890654 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c)
Oct 13 14:18:34 standalone.localdomain podman[196581]: 2025-10-13 14:18:34.015534725 +0000 UTC m=+0.248631140 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, distribution-scope=public, name=rhosp17/openstack-cron, 
summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, release=1, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=)
Oct 13 14:18:34 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:18:34 standalone.localdomain podman[196566]: 2025-10-13 14:18:34.035220851 +0000 UTC m=+0.282183083 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 horizon, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, container_name=horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, vcs-type=git, 
build-date=2025-07-21T13:58:15, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, com.redhat.component=openstack-horizon-container, name=rhosp17/openstack-horizon, version=17.1.9, architecture=x86_64, distribution-scope=public, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 horizon)
Oct 13 14:18:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:18:34 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:18:34 standalone.localdomain podman[196654]: 2025-10-13 14:18:34.106573527 +0000 UTC m=+0.223135977 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, distribution-scope=public, release=1, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:18:34 standalone.localdomain podman[196750]: 2025-10-13 14:18:34.086343685 +0000 UTC m=+0.078812317 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, version=17.1.9, container_name=heat_api_cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api-cfn, release=1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T14:49:55, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-cfn-container)
Oct 13 14:18:34 standalone.localdomain podman[196554]: 2025-10-13 14:18:33.934862083 +0000 UTC m=+0.189542503 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_engine, architecture=x86_64, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, 
description=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, com.redhat.component=openstack-heat-engine-container, name=rhosp17/openstack-heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git)
Oct 13 14:18:34 standalone.localdomain podman[196654]: 2025-10-13 14:18:34.139756328 +0000 UTC m=+0.256318788 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, vcs-type=git, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, container_name=heat_api, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12, distribution-scope=public, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, 
io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:18:34 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:18:34 standalone.localdomain podman[196556]: 2025-10-13 14:18:34.21197654 +0000 UTC m=+0.451844294 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, container_name=heat_api_cron, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api, com.redhat.component=openstack-heat-api-container, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12)
Oct 13 14:18:34 standalone.localdomain podman[196750]: 2025-10-13 14:18:34.216763567 +0000 UTC m=+0.209232219 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_id=tripleo_step4, build-date=2025-07-21T14:49:55, io.buildah.version=1.33.12, io.openshift.expose-services=, 
io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, container_name=heat_api_cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, name=rhosp17/openstack-heat-api-cfn, release=1, com.redhat.component=openstack-heat-api-cfn-container, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 14:18:34 standalone.localdomain podman[196659]: 2025-10-13 14:18:34.230415738 +0000 UTC m=+0.344697268 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-manila-api-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-manila-api, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', 
'/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-07-21T16:06:43, summary=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, container_name=manila_api_cron, distribution-scope=public)
Oct 13 14:18:34 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:18:34 standalone.localdomain podman[196556]: 2025-10-13 14:18:34.245325636 +0000 UTC m=+0.485193390 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, summary=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-container, vcs-type=git, managed_by=tripleo_ansible, release=1, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api, config_id=tripleo_step4, container_name=heat_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:18:34 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:18:34 standalone.localdomain podman[196555]: 2025-10-13 14:18:34.056164086 +0000 UTC m=+0.312942071 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, container_name=memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, distribution-scope=public, name=rhosp17/openstack-memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, tcib_managed=true, vcs-type=git, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:43, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-memcached-container, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:18:34 standalone.localdomain podman[196659]: 2025-10-13 14:18:34.275925598 +0000 UTC m=+0.390206948 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.buildah.version=1.33.12, config_id=tripleo_step4, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-manila-api-container, container_name=manila_api_cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, name=rhosp17/openstack-manila-api, version=17.1.9, vcs-type=git)
Oct 13 14:18:34 standalone.localdomain podman[196553]: 2025-10-13 14:18:33.879414397 +0000 UTC m=+0.139008548 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, io.openshift.expose-services=, container_name=manila_scheduler, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, name=rhosp17/openstack-manila-scheduler, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 manila-scheduler, release=1, 
batch=17.1_20250721.1, build-date=2025-07-21T15:56:28, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, com.redhat.component=openstack-manila-scheduler-container, managed_by=tripleo_ansible)
Oct 13 14:18:34 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:18:34 standalone.localdomain podman[196555]: 2025-10-13 14:18:34.296847242 +0000 UTC m=+0.553625217 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:43, summary=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, config_id=tripleo_step1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 
memcached, name=rhosp17/openstack-memcached, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-memcached-container, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:18:34 standalone.localdomain podman[196553]: 2025-10-13 14:18:34.311025458 +0000 UTC m=+0.570619569 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, description=Red Hat OpenStack Platform 17.1 manila-scheduler, build-date=2025-07-21T15:56:28, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true, container_name=manila_scheduler, name=rhosp17/openstack-manila-scheduler, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, com.redhat.component=openstack-manila-scheduler-container, config_id=tripleo_step4, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, maintainer=OpenStack TripleO Team, architecture=x86_64)
Oct 13 14:18:34 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:18:34 standalone.localdomain podman[196579]: 2025-10-13 14:18:34.32118238 +0000 UTC m=+0.553380778 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, release=1, name=rhosp17/openstack-cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, config_id=tripleo_step4, build-date=2025-07-21T15:58:55, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, version=17.1.9, container_name=cinder_api, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.component=openstack-cinder-api-container, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:18:34 standalone.localdomain podman[196554]: 2025-10-13 14:18:34.321569673 +0000 UTC m=+0.576250153 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=heat_engine, architecture=x86_64, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, build-date=2025-07-21T15:44:11, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-heat-engine-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, name=rhosp17/openstack-heat-engine, release=1, version=17.1.9)
Oct 13 14:18:34 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:18:34 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:18:34 standalone.localdomain podman[196784]: 2025-10-13 14:18:34.380823146 +0000 UTC m=+0.329064577 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=nova_scheduler, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, batch=17.1_20250721.1, build-date=2025-07-21T16:02:54, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, tcib_managed=true, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, com.redhat.component=openstack-nova-scheduler-container, io.buildah.version=1.33.12, vcs-type=git, release=1)
Oct 13 14:18:34 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:18:34 standalone.localdomain podman[196720]: 2025-10-13 14:18:34.196659319 +0000 UTC m=+0.240695377 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-keystone, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T13:27:18, version=17.1.9, config_id=tripleo_step3, container_name=keystone, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:18:34 standalone.localdomain podman[196573]: 2025-10-13 14:18:34.420893699 +0000 UTC m=+0.655131120 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-server, release=1, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, tcib_managed=true, container_name=neutron_api, com.redhat.component=openstack-neutron-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T15:44:03, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_step4, distribution-scope=public)
Oct 13 14:18:34 standalone.localdomain podman[196720]: 2025-10-13 14:18:34.432936689 +0000 UTC m=+0.476972727 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, name=rhosp17/openstack-keystone, vcs-type=git, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:18, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, com.redhat.component=openstack-keystone-container, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:18:34 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:18:34 standalone.localdomain podman[196784]: 2025-10-13 14:18:34.484195347 +0000 UTC m=+0.432436758 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, build-date=2025-07-21T16:02:54, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, com.redhat.component=openstack-nova-scheduler-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, managed_by=tripleo_ansible, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-scheduler, release=1, name=rhosp17/openstack-nova-scheduler, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, version=17.1.9, tcib_managed=true, container_name=nova_scheduler, distribution-scope=public, config_id=tripleo_step4, vcs-type=git)
Oct 13 14:18:34 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:18:34 standalone.localdomain podman[196573]: 2025-10-13 14:18:34.566919352 +0000 UTC m=+0.801156803 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, release=1, build-date=2025-07-21T15:44:03, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, name=rhosp17/openstack-neutron-server, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, container_name=neutron_api, maintainer=OpenStack 
TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-neutron-server-container)
Oct 13 14:18:34 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:18:34 standalone.localdomain ceph-mon[29756]: pgmap v1321: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.495228) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365115495337, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2059, "num_deletes": 250, "total_data_size": 1897357, "memory_usage": 1939960, "flush_reason": "Manual Compaction"}
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365115503088, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1200552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29622, "largest_seqno": 31680, "table_properties": {"data_size": 1194241, "index_size": 3270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16496, "raw_average_key_size": 20, "raw_value_size": 1180041, "raw_average_value_size": 1491, "num_data_blocks": 149, "num_entries": 791, "num_filter_entries": 791, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760364948, "oldest_key_time": 1760364948, "file_creation_time": 1760365115, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 7891 microseconds, and 3900 cpu microseconds.
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.503132) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1200552 bytes OK
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.503160) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.505170) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.505184) EVENT_LOG_v1 {"time_micros": 1760365115505179, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.505203) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1888502, prev total WAL file size 1888991, number of live WAL files 2.
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.505747) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303031' seq:72057594037927935, type:22 .. '6D6772737461740031323532' seq:0, type:0; will stop at (end)
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1172KB)], [65(5362KB)]
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365115505782, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 6691345, "oldest_snapshot_seqno": -1}
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 4341 keys, 5251632 bytes, temperature: kUnknown
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365115525876, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 5251632, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5224180, "index_size": 15516, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 105052, "raw_average_key_size": 24, "raw_value_size": 5147260, "raw_average_value_size": 1185, "num_data_blocks": 661, "num_entries": 4341, "num_filter_entries": 4341, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760365115, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.526064) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 5251632 bytes
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.527199) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 332.0 rd, 260.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 5.2 +0.0 blob) out(5.0 +0.0 blob), read-write-amplify(9.9) write-amplify(4.4) OK, records in: 4786, records dropped: 445 output_compression: NoCompression
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.527215) EVENT_LOG_v1 {"time_micros": 1760365115527207, "job": 36, "event": "compaction_finished", "compaction_time_micros": 20156, "compaction_time_cpu_micros": 9604, "output_level": 6, "num_output_files": 1, "total_output_size": 5251632, "num_input_records": 4786, "num_output_records": 4341, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365115527393, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365115527823, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.505655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.527909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.527917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.527921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.527926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:18:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:35.527930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:18:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1322: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 13 14:18:36 standalone.localdomain ceph-mon[29756]: pgmap v1322: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct 13 14:18:37 standalone.localdomain runuser[196920]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:18:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1323: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:18:38 standalone.localdomain runuser[196920]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:18:38 standalone.localdomain runuser[196989]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:18:38 standalone.localdomain ceph-mon[29756]: pgmap v1323: 177 pgs: 177 active+clean; 297 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:18:39 standalone.localdomain runuser[196989]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:18:39 standalone.localdomain runuser[197051]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:18:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:18:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:18:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:18:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:18:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:18:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1324: 177 pgs: 177 active+clean; 308 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 68 KiB/s rd, 94 KiB/s wr, 22 op/s
Oct 13 14:18:39 standalone.localdomain systemd[1]: tmp-crun.uchSq3.mount: Deactivated successfully.
Oct 13 14:18:39 standalone.localdomain podman[197105]: 2025-10-13 14:18:39.840214293 +0000 UTC m=+0.097866971 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, release=1, version=17.1.9, config_id=tripleo_step4, container_name=swift_account_server, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account)
Oct 13 14:18:39 standalone.localdomain podman[197104]: 2025-10-13 14:18:39.882676281 +0000 UTC m=+0.143577080 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, build-date=2025-07-21T15:54:32, architecture=x86_64, distribution-scope=public, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-container-container, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 14:18:39 standalone.localdomain runuser[197051]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:18:39 standalone.localdomain podman[197111]: 2025-10-13 14:18:39.943850752 +0000 UTC m=+0.194122003 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-nova-novncproxy-container, release=1, container_name=nova_vnc_proxy, tcib_managed=true, architecture=x86_64, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-07-21T15:24:10, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, batch=17.1_20250721.1, name=rhosp17/openstack-nova-novncproxy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-novncproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy)
Oct 13 14:18:39 standalone.localdomain podman[197102]: 2025-10-13 14:18:39.925529769 +0000 UTC m=+0.189554034 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, build-date=2025-07-21T14:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=swift_object_server, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:18:40 standalone.localdomain podman[197103]: 2025-10-13 14:18:39.981660276 +0000 UTC m=+0.241813751 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12)
Oct 13 14:18:40 standalone.localdomain podman[197105]: 2025-10-13 14:18:40.058786819 +0000 UTC m=+0.316439517 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-account, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, build-date=2025-07-21T16:11:22, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, config_id=tripleo_step4, container_name=swift_account_server, architecture=x86_64)
Oct 13 14:18:40 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:18:40 standalone.localdomain podman[197104]: 2025-10-13 14:18:40.081639283 +0000 UTC m=+0.342540082 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, build-date=2025-07-21T15:54:32, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, name=rhosp17/openstack-swift-container, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public)
Oct 13 14:18:40 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:18:40 standalone.localdomain podman[197102]: 2025-10-13 14:18:40.124822121 +0000 UTC m=+0.388846406 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, architecture=x86_64, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9)
Oct 13 14:18:40 standalone.localdomain podman[197279]: 2025-10-13 14:18:40.136201351 +0000 UTC m=+0.069218211 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, name=rhosp17/openstack-cinder-backup, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T16:18:24, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, com.redhat.component=openstack-cinder-backup-container, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-backup)
Oct 13 14:18:40 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:18:40 standalone.localdomain podman[197279]: 2025-10-13 14:18:40.165731991 +0000 UTC m=+0.098748851 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cinder-backup, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-cinder-backup-container, version=17.1.9, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-cinder-backup, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.expose-services=, build-date=2025-07-21T16:18:24, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup)
Oct 13 14:18:40 standalone.localdomain podman[197111]: 2025-10-13 14:18:40.227509381 +0000 UTC m=+0.477780652 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, config_id=tripleo_step4, release=1, name=rhosp17/openstack-nova-novncproxy, architecture=x86_64, build-date=2025-07-21T15:24:10, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, com.redhat.component=openstack-nova-novncproxy-container, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, container_name=nova_vnc_proxy, managed_by=tripleo_ansible)
Oct 13 14:18:40 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:18:40 standalone.localdomain podman[197103]: 2025-10-13 14:18:40.378376294 +0000 UTC m=+0.638529769 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, tcib_managed=true, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T14:48:37)
Oct 13 14:18:40 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:18:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:18:40 standalone.localdomain ceph-mon[29756]: pgmap v1324: 177 pgs: 177 active+clean; 308 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 68 KiB/s rd, 94 KiB/s wr, 22 op/s
Oct 13 14:18:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:18:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:18:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:18:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:18:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:18:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:18:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:18:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:18:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1325: 177 pgs: 177 active+clean; 308 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 68 KiB/s rd, 94 KiB/s wr, 22 op/s
Oct 13 14:18:42 standalone.localdomain podman[197459]: 2025-10-13 14:18:41.923204989 +0000 UTC m=+0.172471889 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, release=1, distribution-scope=public, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=glance_api_internal, tcib_managed=true)
Oct 13 14:18:42 standalone.localdomain podman[197489]: 2025-10-13 14:18:42.040055495 +0000 UTC m=+0.269714811 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, distribution-scope=public, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api, vendor=Red Hat, Inc.)
Oct 13 14:18:42 standalone.localdomain ceph-mon[29756]: pgmap v1325: 177 pgs: 177 active+clean; 308 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 68 KiB/s rd, 94 KiB/s wr, 22 op/s
Oct 13 14:18:42 standalone.localdomain podman[197459]: 2025-10-13 14:18:42.33760258 +0000 UTC m=+0.586869490 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9, build-date=2025-07-21T13:58:20, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:18:42 standalone.localdomain podman[197449]: 2025-10-13 14:18:42.274725256 +0000 UTC m=+0.518866648 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, container_name=nova_metadata, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, name=rhosp17/openstack-nova-api, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, build-date=2025-07-21T16:05:11, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:18:42 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:18:42 standalone.localdomain podman[197465]: 2025-10-13 14:18:42.361612159 +0000 UTC m=+0.600350214 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:18:42 standalone.localdomain podman[197448]: 2025-10-13 14:18:42.46662709 +0000 UTC m=+0.723794712 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.expose-services=, release=1, architecture=x86_64, name=rhosp17/openstack-swift-proxy-server, vendor=Red Hat, Inc., container_name=swift_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T14:48:37, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-proxy-server-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9)
Oct 13 14:18:42 standalone.localdomain podman[197489]: 2025-10-13 14:18:42.481959452 +0000 UTC m=+0.711618748 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20)
Oct 13 14:18:42 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:18:42 standalone.localdomain podman[197447]: 2025-10-13 14:18:42.335708352 +0000 UTC m=+0.595616288 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, container_name=placement_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-placement-api, tcib_managed=true, com.redhat.component=openstack-placement-api-container, release=1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 placement-api, batch=17.1_20250721.1)
Oct 13 14:18:42 standalone.localdomain podman[197465]: 2025-10-13 14:18:42.547774757 +0000 UTC m=+0.786512602 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, container_name=ovn_metadata_agent)
Oct 13 14:18:42 standalone.localdomain podman[197447]: 2025-10-13 14:18:42.577004416 +0000 UTC m=+0.836912352 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, com.redhat.component=openstack-placement-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, config_id=tripleo_step4, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 placement-api, version=17.1.9, container_name=placement_api, build-date=2025-07-21T13:58:12, architecture=x86_64, release=1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-placement-api, tcib_managed=true, maintainer=OpenStack TripleO Team)
Oct 13 14:18:42 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:18:42 standalone.localdomain podman[197483]: 2025-10-13 14:18:42.671914667 +0000 UTC m=+0.905659779 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, build-date=2025-07-21T13:28:44, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git)
Oct 13 14:18:42 standalone.localdomain podman[197449]: 2025-10-13 14:18:42.697156064 +0000 UTC m=+0.941297376 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_metadata, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, release=1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible)
Oct 13 14:18:42 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:18:42 standalone.localdomain podman[197448]: 2025-10-13 14:18:42.721795872 +0000 UTC m=+0.978963474 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server, tcib_managed=true, io.buildah.version=1.33.12, container_name=swift_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, version=17.1.9, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, com.redhat.component=openstack-swift-proxy-server-container, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, batch=17.1_20250721.1)
Oct 13 14:18:42 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:18:42 standalone.localdomain podman[197483]: 2025-10-13 14:18:42.777013641 +0000 UTC m=+1.010758773 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, batch=17.1_20250721.1, distribution-scope=public, release=1, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git)
Oct 13 14:18:42 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:18:42 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:18:42 standalone.localdomain podman[197446]: 2025-10-13 14:18:42.599589431 +0000 UTC m=+0.861024065 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, vendor=Red Hat, Inc., container_name=glance_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, 
io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, com.redhat.component=openstack-glance-api-container, name=rhosp17/openstack-glance-api, io.openshift.expose-services=)
Oct 13 14:18:42 standalone.localdomain podman[197446]: 2025-10-13 14:18:42.949050355 +0000 UTC m=+1.210485049 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_cron, distribution-scope=public, config_id=tripleo_step4, 
io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:18:43 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:18:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1326: 177 pgs: 177 active+clean; 308 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 68 KiB/s rd, 94 KiB/s wr, 22 op/s
Oct 13 14:18:44 standalone.localdomain ceph-mon[29756]: pgmap v1326: 177 pgs: 177 active+clean; 308 MiB data, 256 MiB used, 6.7 GiB / 7.0 GiB avail; 68 KiB/s rd, 94 KiB/s wr, 22 op/s
Oct 13 14:18:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:18:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1327: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 68 KiB/s rd, 106 KiB/s wr, 22 op/s
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: pgmap v1327: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 68 KiB/s rd, 106 KiB/s wr, 22 op/s
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.517695) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365126517740, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 363, "num_deletes": 251, "total_data_size": 122357, "memory_usage": 129768, "flush_reason": "Manual Compaction"}
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365126520889, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 120336, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31681, "largest_seqno": 32043, "table_properties": {"data_size": 118202, "index_size": 311, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5479, "raw_average_key_size": 18, "raw_value_size": 113911, "raw_average_value_size": 384, "num_data_blocks": 14, "num_entries": 296, "num_filter_entries": 296, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760365115, "oldest_key_time": 1760365115, "file_creation_time": 1760365126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 3247 microseconds, and 1086 cpu microseconds.
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.520941) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 120336 bytes OK
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.520961) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.522234) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.522248) EVENT_LOG_v1 {"time_micros": 1760365126522244, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.522262) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 119925, prev total WAL file size 119925, number of live WAL files 2.
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.522899) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(117KB)], [68(5128KB)]
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365126522932, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 5371968, "oldest_snapshot_seqno": -1}
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 4123 keys, 4375537 bytes, temperature: kUnknown
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365126543290, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 4375537, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4350596, "index_size": 13517, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 101200, "raw_average_key_size": 24, "raw_value_size": 4278598, "raw_average_value_size": 1037, "num_data_blocks": 569, "num_entries": 4123, "num_filter_entries": 4123, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760365126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.543657) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 4375537 bytes
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.545322) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 262.7 rd, 213.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 5.0 +0.0 blob) out(4.2 +0.0 blob), read-write-amplify(81.0) write-amplify(36.4) OK, records in: 4637, records dropped: 514 output_compression: NoCompression
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.545352) EVENT_LOG_v1 {"time_micros": 1760365126545338, "job": 38, "event": "compaction_finished", "compaction_time_micros": 20452, "compaction_time_cpu_micros": 10700, "output_level": 6, "num_output_files": 1, "total_output_size": 4375537, "num_input_records": 4637, "num_output_records": 4123, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365126545561, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365126546207, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.522847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.546311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.546318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.546321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.546324) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:18:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:18:46.546327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:18:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:18:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:18:47 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:18:47 standalone.localdomain recover_tripleo_nova_virtqemud[197794]: 93291
Oct 13 14:18:47 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:18:47 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:18:47 standalone.localdomain podman[197786]: 2025-10-13 14:18:47.80936839 +0000 UTC m=+0.073912245 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-keystone-container, container_name=keystone_cron, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, 
build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, batch=17.1_20250721.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=)
Oct 13 14:18:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1328: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 68 KiB/s rd, 106 KiB/s wr, 22 op/s
Oct 13 14:18:47 standalone.localdomain podman[197786]: 2025-10-13 14:18:47.84319077 +0000 UTC m=+0.107734605 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-keystone, container_name=keystone_cron, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, 
release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., com.redhat.component=openstack-keystone-container, io.openshift.expose-services=, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_id=tripleo_step3, build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9)
Oct 13 14:18:47 standalone.localdomain podman[197787]: 2025-10-13 14:18:47.868902761 +0000 UTC m=+0.125031247 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, build-date=2025-07-21T16:05:11, maintainer=OpenStack TripleO Team, version=17.1.9)
Oct 13 14:18:47 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:18:47 standalone.localdomain podman[197787]: 2025-10-13 14:18:47.950452002 +0000 UTC m=+0.206580508 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, build-date=2025-07-21T16:05:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, com.redhat.component=openstack-nova-api-container, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red 
Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, container_name=nova_api, io.buildah.version=1.33.12, release=1)
Oct 13 14:18:47 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:18:48 standalone.localdomain ceph-mon[29756]: pgmap v1328: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 68 KiB/s rd, 106 KiB/s wr, 22 op/s
Oct 13 14:18:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1329: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 68 KiB/s rd, 106 KiB/s wr, 22 op/s
Oct 13 14:18:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:18:50 standalone.localdomain runuser[197945]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:18:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:18:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:18:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:18:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:18:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:18:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:18:50 standalone.localdomain systemd[1]: tmp-crun.UNKswW.mount: Deactivated successfully.
Oct 13 14:18:50 standalone.localdomain podman[197992]: 2025-10-13 14:18:50.858320958 +0000 UTC m=+0.104812057 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, build-date=2025-07-21T16:18:19, release=1, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
barbican-keystone-listener, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=barbican_keystone_listener, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-barbican-keystone-listener, vcs-type=git, com.redhat.component=openstack-barbican-keystone-listener-container, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:18:50 standalone.localdomain podman[198005]: 2025-10-13 14:18:50.874128534 +0000 UTC m=+0.104065873 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-api-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-api, 
vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, distribution-scope=public, config_id=tripleo_step4, container_name=nova_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., release=1)
Oct 13 14:18:50 standalone.localdomain podman[197990]: 2025-10-13 14:18:50.823577148 +0000 UTC m=+0.087385879 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-dhcp-agent, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.buildah.version=1.33.12, container_name=neutron_dhcp, build-date=2025-07-21T16:28:54, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:18:50 standalone.localdomain podman[197992]: 2025-10-13 14:18:50.880172079 +0000 UTC m=+0.126663178 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, distribution-scope=public, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, name=rhosp17/openstack-barbican-keystone-listener, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, build-date=2025-07-21T16:18:19, com.redhat.component=openstack-barbican-keystone-listener-container, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, container_name=barbican_keystone_listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, tcib_managed=true, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Oct 13 14:18:50 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:18:50 standalone.localdomain ceph-mon[29756]: pgmap v1329: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 68 KiB/s rd, 106 KiB/s wr, 22 op/s
Oct 13 14:18:50 standalone.localdomain podman[198003]: 2025-10-13 14:18:50.928040122 +0000 UTC m=+0.171187848 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-neutron-sriov-agent-container, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:03:34, name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, tcib_managed=true)
Oct 13 14:18:50 standalone.localdomain podman[197991]: 2025-10-13 14:18:50.981126846 +0000 UTC m=+0.244897286 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, description=Red Hat OpenStack Platform 17.1 barbican-api, container_name=barbican_api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:22:44, summary=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.component=openstack-barbican-api-container, vendor=Red Hat, Inc., vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-barbican-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:18:51 standalone.localdomain podman[198003]: 2025-10-13 14:18:50.999852152 +0000 UTC m=+0.242999888 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, com.redhat.component=openstack-neutron-sriov-agent-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, build-date=2025-07-21T16:03:34, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:18:51 standalone.localdomain podman[197990]: 2025-10-13 14:18:51.007480257 +0000 UTC m=+0.271288998 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=neutron_dhcp, config_id=tripleo_step4, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-dhcp-agent-container, release=1, build-date=2025-07-21T16:28:54, vendor=Red Hat, Inc., io.buildah.version=1.33.12)
Oct 13 14:18:51 standalone.localdomain podman[198005]: 2025-10-13 14:18:51.007916121 +0000 UTC m=+0.237853480 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, tcib_managed=true, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api_cron, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, name=rhosp17/openstack-nova-api, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, architecture=x86_64)
Oct 13 14:18:51 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:18:51 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:18:51 standalone.localdomain podman[197991]: 2025-10-13 14:18:51.053358989 +0000 UTC m=+0.317129439 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, description=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12, com.redhat.component=openstack-barbican-api-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-barbican-api, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, config_id=tripleo_step3, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', 
'/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=barbican_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:18:51 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:18:51 standalone.localdomain podman[198004]: 2025-10-13 14:18:51.05729226 +0000 UTC m=+0.299687452 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-barbican-worker, com.redhat.component=openstack-barbican-worker-container, description=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, io.buildah.version=1.33.12, 
vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T15:36:22, config_id=tripleo_step3, container_name=barbican_worker, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:18:51 standalone.localdomain podman[198004]: 2025-10-13 14:18:51.14211281 +0000 UTC m=+0.384507982 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, managed_by=tripleo_ansible, container_name=barbican_worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, 
vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 barbican-worker, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T15:36:22, name=rhosp17/openstack-barbican-worker, com.redhat.component=openstack-barbican-worker-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, maintainer=OpenStack TripleO Team, release=1)
Oct 13 14:18:51 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:18:51 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:18:51 standalone.localdomain runuser[197945]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:18:51 standalone.localdomain runuser[198179]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:18:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1330: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 12 KiB/s wr, 0 op/s
Oct 13 14:18:52 standalone.localdomain runuser[198179]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:18:52 standalone.localdomain runuser[198376]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:18:52 standalone.localdomain runuser[198376]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:18:52 standalone.localdomain ceph-mon[29756]: pgmap v1330: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 12 KiB/s wr, 0 op/s
Oct 13 14:18:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:18:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:18:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:18:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:18:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:18:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:18:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:18:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:18:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:18:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1331: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 12 KiB/s wr, 0 op/s
Oct 13 14:18:53 standalone.localdomain podman[198452]: 2025-10-13 14:18:53.818085121 +0000 UTC m=+0.085720129 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1, version=17.1.9, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step5, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:18:53 standalone.localdomain podman[198453]: 2025-10-13 14:18:53.861442466 +0000 UTC m=+0.129205128 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, name=rhosp17/openstack-ovn-northd, com.redhat.component=openstack-ovn-northd-container, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, config_id=ovn_cluster_northd, managed_by=tripleo_ansible, version=17.1.9, container_name=ovn_cluster_northd, distribution-scope=public, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:30:04, vendor=Red Hat, Inc.)
Oct 13 14:18:53 standalone.localdomain podman[198452]: 2025-10-13 14:18:53.871916118 +0000 UTC m=+0.139551106 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, config_id=tripleo_step5, container_name=nova_compute)
Oct 13 14:18:53 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:18:53 standalone.localdomain podman[198453]: 2025-10-13 14:18:53.928538561 +0000 UTC m=+0.196301243 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.openshift.expose-services=, release=1, container_name=ovn_cluster_northd, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-northd-container, config_id=ovn_cluster_northd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T13:30:04, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:18:53 standalone.localdomain podman[198454]: 2025-10-13 14:18:53.972517973 +0000 UTC m=+0.235943721 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-mariadb-container, container_name=clustercheck, config_id=tripleo_step2, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, vcs-type=git, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, build-date=2025-07-21T12:58:45, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1)
Oct 13 14:18:53 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:18:54 standalone.localdomain podman[198454]: 2025-10-13 14:18:54.019918982 +0000 UTC m=+0.283344720 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, com.redhat.component=openstack-mariadb-container, container_name=clustercheck, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:18:54 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:18:54 standalone.localdomain ceph-mon[29756]: pgmap v1331: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 12 KiB/s wr, 0 op/s
Oct 13 14:18:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:18:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1332: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 12 KiB/s wr, 0 op/s
Oct 13 14:18:56 standalone.localdomain ceph-mon[29756]: pgmap v1332: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 12 KiB/s wr, 0 op/s
Oct 13 14:18:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:18:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1661921203' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:18:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:18:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1661921203' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:18:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1661921203' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:18:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1661921203' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:18:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1333: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:18:58 standalone.localdomain ceph-mon[29756]: pgmap v1333: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:18:58 standalone.localdomain sudo[198642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:18:58 standalone.localdomain sudo[198642]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:18:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:18:58 standalone.localdomain sudo[198642]: pam_unix(sudo:session): session closed for user root
Oct 13 14:18:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:18:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:18:58 standalone.localdomain sudo[198659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:18:58 standalone.localdomain sudo[198659]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:18:58 standalone.localdomain systemd[1]: tmp-crun.gckS4N.mount: Deactivated successfully.
Oct 13 14:18:58 standalone.localdomain podman[198657]: 2025-10-13 14:18:58.811966066 +0000 UTC m=+0.091004552 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2025-07-21T13:27:15, config_id=tripleo_step3, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid)
Oct 13 14:18:58 standalone.localdomain podman[198657]: 2025-10-13 14:18:58.841630228 +0000 UTC m=+0.120668694 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, tcib_managed=true, distribution-scope=public, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, container_name=iscsid, release=1, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible)
Oct 13 14:18:58 standalone.localdomain podman[198660]: 2025-10-13 14:18:58.850093069 +0000 UTC m=+0.127561556 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-cinder-scheduler-container, container_name=cinder_scheduler, config_id=tripleo_step4, release=1, name=rhosp17/openstack-cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, build-date=2025-07-21T16:10:12, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler)
Oct 13 14:18:58 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:18:58 standalone.localdomain podman[198658]: 2025-10-13 14:18:58.897729734 +0000 UTC m=+0.175289104 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, architecture=x86_64, release=1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, build-date=2025-07-21T15:58:55, com.redhat.component=openstack-cinder-api-container, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, container_name=cinder_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, name=rhosp17/openstack-cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:18:58 standalone.localdomain podman[198660]: 2025-10-13 14:18:58.922859498 +0000 UTC m=+0.200327985 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, config_id=tripleo_step4, build-date=2025-07-21T16:10:12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=cinder_scheduler, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-cinder-scheduler-container, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, batch=17.1_20250721.1, tcib_managed=true)
Oct 13 14:18:58 standalone.localdomain podman[198658]: 2025-10-13 14:18:58.932948499 +0000 UTC m=+0.210507869 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, container_name=cinder_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T15:58:55)
Oct 13 14:18:58 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:18:58 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:18:59 standalone.localdomain sudo[198659]: pam_unix(sudo:session): session closed for user root
Oct 13 14:18:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:18:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:18:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:18:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:18:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:18:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:18:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:18:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:18:59 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 8f0e20e5-c7a8-43d8-8060-4acf294085a0 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:18:59 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 8f0e20e5-c7a8-43d8-8060-4acf294085a0 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:18:59 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 8f0e20e5-c7a8-43d8-8060-4acf294085a0 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:18:59 standalone.localdomain sudo[198767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:18:59 standalone.localdomain sudo[198767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:18:59 standalone.localdomain sudo[198767]: pam_unix(sudo:session): session closed for user root
Oct 13 14:18:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:18:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:18:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:18:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:18:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1334: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:19:00 standalone.localdomain ceph-mon[29756]: pgmap v1334: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1335: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:02 standalone.localdomain podman[199067]: 2025-10-13 14:19:02.772747361 +0000 UTC m=+0.071811791 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, build-date=2025-07-21T12:58:45, name=rhosp17/openstack-mariadb, release=1, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:19:02 standalone.localdomain podman[199067]: 2025-10-13 14:19:02.805072475 +0000 UTC m=+0.104136955 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-mariadb-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, release=1, architecture=x86_64, vcs-type=git, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 14:19:02 standalone.localdomain ceph-mon[29756]: pgmap v1335: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:03 standalone.localdomain podman[199107]: 2025-10-13 14:19:03.040675915 +0000 UTC m=+0.063213226 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-haproxy-container, description=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:08:11, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, io.openshift.expose-services=, version=17.1.9, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50)
Oct 13 14:19:03 standalone.localdomain podman[199107]: 2025-10-13 14:19:03.075008321 +0000 UTC m=+0.097545622 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, com.redhat.component=openstack-haproxy-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T13:08:11, name=rhosp17/openstack-haproxy, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 14:19:03 standalone.localdomain podman[199141]: 2025-10-13 14:19:03.274847831 +0000 UTC m=+0.092416645 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T13:08:05, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, vcs-type=git, release=1, vendor=Red Hat, Inc.)
Oct 13 14:19:03 standalone.localdomain podman[199141]: 2025-10-13 14:19:03.310020492 +0000 UTC m=+0.127589286 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, version=17.1.9, com.redhat.component=openstack-rabbitmq-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, build-date=2025-07-21T13:08:05)
Oct 13 14:19:03 standalone.localdomain runuser[199176]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:19:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1336: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:03 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:19:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:19:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:19:04 standalone.localdomain runuser[199176]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:19:04 standalone.localdomain runuser[199247]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:19:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:19:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:19:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:19:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:19:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:19:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:19:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:19:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:19:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:19:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:19:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:19:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:19:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:19:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:19:04 standalone.localdomain podman[199293]: 2025-10-13 14:19:04.8734467 +0000 UTC m=+0.128975080 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-keystone, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-keystone-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, container_name=keystone, version=17.1.9, description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, summary=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3)
Oct 13 14:19:04 standalone.localdomain ceph-mon[29756]: pgmap v1336: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:04 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:19:04 standalone.localdomain podman[199317]: 2025-10-13 14:19:04.909008084 +0000 UTC m=+0.151474812 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, summary=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=memcached, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, config_id=tripleo_step1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, version=17.1.9, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.component=openstack-memcached-container, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, build-date=2025-07-21T12:58:43)
Oct 13 14:19:04 standalone.localdomain runuser[199247]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:19:04 standalone.localdomain podman[199304]: 2025-10-13 14:19:04.966667399 +0000 UTC m=+0.178931337 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-type=git, container_name=nova_scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-scheduler, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, build-date=2025-07-21T16:02:54)
Oct 13 14:19:04 standalone.localdomain runuser[199500]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:19:04 standalone.localdomain podman[199352]: 2025-10-13 14:19:04.979797732 +0000 UTC m=+0.189061188 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, name=rhosp17/openstack-horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, version=17.1.9, vcs-type=git, build-date=2025-07-21T13:58:15, com.redhat.component=openstack-horizon-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 horizon, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, container_name=horizon, config_id=tripleo_step3, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20250721.1)
Oct 13 14:19:04 standalone.localdomain podman[199304]: 2025-10-13 14:19:04.987006875 +0000 UTC m=+0.199270813 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, managed_by=tripleo_ansible, build-date=2025-07-21T16:02:54, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, tcib_managed=true, com.redhat.component=openstack-nova-scheduler-container, container_name=nova_scheduler, name=rhosp17/openstack-nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, io.buildah.version=1.33.12)
Oct 13 14:19:04 standalone.localdomain podman[199317]: 2025-10-13 14:19:04.997455996 +0000 UTC m=+0.239922724 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, summary=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, container_name=memcached, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:43, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.openshift.expose-services=, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step1)
Oct 13 14:19:04 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:19:05 standalone.localdomain podman[199375]: 2025-10-13 14:19:04.957684743 +0000 UTC m=+0.159636223 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, 
container_name=logrotate_crond, version=17.1.9, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-cron)
Oct 13 14:19:05 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:19:05 standalone.localdomain podman[199352]: 2025-10-13 14:19:05.014854581 +0000 UTC m=+0.224118027 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, container_name=horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, summary=Red Hat OpenStack Platform 17.1 horizon, version=17.1.9, build-date=2025-07-21T13:58:15, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-horizon-container, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, architecture=x86_64, name=rhosp17/openstack-horizon, io.buildah.version=1.33.12)
Oct 13 14:19:05 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:19:05 standalone.localdomain podman[199335]: 2025-10-13 14:19:04.929710162 +0000 UTC m=+0.156558699 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, name=rhosp17/openstack-manila-api, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=manila_api_cron, com.redhat.component=openstack-manila-api-container, summary=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, tcib_managed=true, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, build-date=2025-07-21T16:06:43, managed_by=tripleo_ansible, release=1)
Oct 13 14:19:05 standalone.localdomain podman[199337]: 2025-10-13 14:19:05.087790836 +0000 UTC m=+0.308152213 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, 
vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20250721.1, container_name=heat_api_cron, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-container)
Oct 13 14:19:05 standalone.localdomain podman[199293]: 2025-10-13 14:19:05.090448028 +0000 UTC m=+0.345976428 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, build-date=2025-07-21T13:27:18, vendor=Red Hat, Inc., release=1, io.buildah.version=1.33.12, container_name=keystone, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-keystone-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, name=rhosp17/openstack-keystone, config_id=tripleo_step3, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:19:05 standalone.localdomain podman[199375]: 2025-10-13 14:19:05.092976825 +0000 UTC m=+0.294928325 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, container_name=logrotate_crond, vcs-type=git, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, name=rhosp17/openstack-cron, distribution-scope=public, build-date=2025-07-21T13:07:52)
Oct 13 14:19:05 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:19:05 standalone.localdomain podman[199360]: 2025-10-13 14:19:05.171646636 +0000 UTC m=+0.372790682 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, name=rhosp17/openstack-neutron-server, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:44:03, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, com.redhat.component=openstack-neutron-server-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., 
container_name=neutron_api, config_id=tripleo_step4, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public)
Oct 13 14:19:05 standalone.localdomain podman[199337]: 2025-10-13 14:19:05.172223444 +0000 UTC m=+0.392584811 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, version=17.1.9, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, 
name=rhosp17/openstack-heat-api, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, container_name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:19:05 standalone.localdomain podman[199295]: 2025-10-13 14:19:05.182448279 +0000 UTC m=+0.429789857 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, summary=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-engine-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, name=rhosp17/openstack-heat-engine, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T15:44:11, 
architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_engine)
Oct 13 14:19:05 standalone.localdomain podman[199292]: 2025-10-13 14:19:05.147665848 +0000 UTC m=+0.403410734 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, version=17.1.9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, com.redhat.component=openstack-heat-api-cfn-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api-cfn, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vendor=Red Hat, Inc., container_name=heat_api_cfn, build-date=2025-07-21T14:49:55, managed_by=tripleo_ansible)
Oct 13 14:19:05 standalone.localdomain podman[199295]: 2025-10-13 14:19:05.209806251 +0000 UTC m=+0.457147839 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=heat_engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, architecture=x86_64, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, distribution-scope=public, version=17.1.9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-engine, com.redhat.component=openstack-heat-engine-container, io.buildah.version=1.33.12, build-date=2025-07-21T15:44:11, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, release=1)
Oct 13 14:19:05 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:19:05 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:19:05 standalone.localdomain podman[199335]: 2025-10-13 14:19:05.264038699 +0000 UTC m=+0.490887286 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-manila-api-container, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, build-date=2025-07-21T16:06:43, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 manila-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, architecture=x86_64, container_name=manila_api_cron, managed_by=tripleo_ansible, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, version=17.1.9)
Oct 13 14:19:05 standalone.localdomain podman[199355]: 2025-10-13 14:19:05.061834207 +0000 UTC m=+0.271502475 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, 
io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, vcs-type=git, release=1, build-date=2025-07-21T15:56:26, architecture=x86_64, container_name=heat_api, com.redhat.component=openstack-heat-api-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible)
Oct 13 14:19:05 standalone.localdomain podman[199361]: 2025-10-13 14:19:05.222925514 +0000 UTC m=+0.424674318 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, container_name=nova_conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, build-date=2025-07-21T15:44:17, com.redhat.component=openstack-nova-conductor-container, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-conductor, 
config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.expose-services=, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, release=1, vcs-type=git)
Oct 13 14:19:05 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:19:05 standalone.localdomain podman[199294]: 2025-10-13 14:19:05.064302583 +0000 UTC m=+0.318273504 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=manila_scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, description=Red Hat OpenStack Platform 17.1 manila-scheduler, build-date=2025-07-21T15:56:28, vcs-type=git, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, name=rhosp17/openstack-manila-scheduler, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-manila-scheduler-container, io.buildah.version=1.33.12, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/manila:/var/log/manila:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:19:05 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:19:05 standalone.localdomain podman[199360]: 2025-10-13 14:19:05.336095567 +0000 UTC m=+0.537239673 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-neutron-server-container, distribution-scope=public, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, container_name=neutron_api, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, build-date=2025-07-21T15:44:03, io.openshift.expose-services=, name=rhosp17/openstack-neutron-server, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-server, release=1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:19:05 standalone.localdomain podman[199294]: 2025-10-13 14:19:05.349365744 +0000 UTC m=+0.603336675 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, name=rhosp17/openstack-manila-scheduler, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, container_name=manila_scheduler, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, 
version=17.1.9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:28, vcs-type=git, com.redhat.component=openstack-manila-scheduler-container)
Oct 13 14:19:05 standalone.localdomain podman[199361]: 2025-10-13 14:19:05.357208926 +0000 UTC m=+0.558957720 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, description=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, architecture=x86_64, name=rhosp17/openstack-nova-conductor, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, version=17.1.9, container_name=nova_conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1, vcs-type=git, build-date=2025-07-21T15:44:17, com.redhat.component=openstack-nova-conductor-container, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:19:05 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:19:05 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:19:05 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:19:05 standalone.localdomain podman[199292]: 2025-10-13 14:19:05.388358004 +0000 UTC m=+0.644102900 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, build-date=2025-07-21T14:49:55, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-heat-api-cfn-container, batch=17.1_20250721.1, 
io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, container_name=heat_api_cfn)
Oct 13 14:19:05 standalone.localdomain podman[199329]: 2025-10-13 14:19:05.394775942 +0000 UTC m=+0.620588187 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-cinder-api-container, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, tcib_managed=true, config_id=tripleo_step4, container_name=cinder_api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-api, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, release=1, build-date=2025-07-21T15:58:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, distribution-scope=public, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:19:05 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:19:05 standalone.localdomain podman[199355]: 2025-10-13 14:19:05.401818368 +0000 UTC m=+0.611486636 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, container_name=heat_api, com.redhat.component=openstack-heat-api-container)
Oct 13 14:19:05 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:19:05 standalone.localdomain podman[199329]: 2025-10-13 14:19:05.425829488 +0000 UTC m=+0.651641723 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:58:55, architecture=x86_64, com.redhat.component=openstack-cinder-api-container, container_name=cinder_api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9)
Oct 13 14:19:05 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:19:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:19:05 standalone.localdomain runuser[199500]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:19:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1337: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:06 standalone.localdomain ceph-mon[29756]: pgmap v1337: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:07 standalone.localdomain podman[199704]: 2025-10-13 14:19:07.158923765 +0000 UTC m=+0.079145926 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cinder-volume, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, build-date=2025-07-21T16:13:39, name=rhosp17/openstack-cinder-volume, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-cinder-volume-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1)
Oct 13 14:19:07 standalone.localdomain podman[199704]: 2025-10-13 14:19:07.191847059 +0000 UTC m=+0.112069230 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, build-date=2025-07-21T16:13:39, name=rhosp17/openstack-cinder-volume, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, release=1, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-volume, description=Red Hat OpenStack Platform 17.1 cinder-volume, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-cinder-volume-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, version=17.1.9, distribution-scope=public, io.openshift.expose-services=)
Oct 13 14:19:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1338: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:08 standalone.localdomain ceph-mon[29756]: pgmap v1338: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1339: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s wr, 1 op/s
Oct 13 14:19:10 standalone.localdomain ceph-mon[29756]: pgmap v1339: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s wr, 1 op/s
Oct 13 14:19:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:19:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:19:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:19:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:19:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:19:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:19:10 standalone.localdomain podman[199841]: 2025-10-13 14:19:10.886020611 +0000 UTC m=+0.150822832 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, name=rhosp17/openstack-swift-object, release=1, container_name=swift_object_server, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-swift-object-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, distribution-scope=public)
Oct 13 14:19:11 standalone.localdomain podman[199856]: 2025-10-13 14:19:11.122800116 +0000 UTC m=+0.370794010 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, container_name=nova_vnc_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, com.redhat.component=openstack-nova-novncproxy-container, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, distribution-scope=public, vcs-type=git, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T15:24:10)
Oct 13 14:19:11 standalone.localdomain podman[199844]: 2025-10-13 14:19:10.986991418 +0000 UTC m=+0.238132739 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.buildah.version=1.33.12, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, build-date=2025-07-21T15:54:32, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:19:11 standalone.localdomain podman[199842]: 2025-10-13 14:19:11.240033054 +0000 UTC m=+0.498766139 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, release=1, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, version=17.1.9, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, 
name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 13 14:19:11 standalone.localdomain podman[199844]: 2025-10-13 14:19:11.324684338 +0000 UTC m=+0.575825629 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=swift_container_server, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:19:11 standalone.localdomain podman[199856]: 2025-10-13 14:19:11.493581946 +0000 UTC m=+0.741575860 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, build-date=2025-07-21T15:24:10, com.redhat.component=openstack-nova-novncproxy-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=nova_vnc_proxy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git, config_id=tripleo_step4)
Oct 13 14:19:11 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:19:11 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:19:11 standalone.localdomain podman[199841]: 2025-10-13 14:19:11.75372493 +0000 UTC m=+1.018527141 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, tcib_managed=true, container_name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, build-date=2025-07-21T14:56:28, distribution-scope=public, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, batch=17.1_20250721.1)
Oct 13 14:19:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1340: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s wr, 1 op/s
Oct 13 14:19:11 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:19:11 standalone.localdomain podman[199849]: 2025-10-13 14:19:11.449723996 +0000 UTC m=+0.699459324 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, managed_by=tripleo_ansible, release=1, name=rhosp17/openstack-swift-account, version=17.1.9, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., io.openshift.expose-services=, 
container_name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 14:19:12 standalone.localdomain podman[199849]: 2025-10-13 14:19:12.019171698 +0000 UTC m=+1.268907066 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, container_name=swift_account_server, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container, build-date=2025-07-21T16:11:22, name=rhosp17/openstack-swift-account, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, summary=Red Hat OpenStack 
Platform 17.1 swift-account, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1)
Oct 13 14:19:12 standalone.localdomain ceph-mon[29756]: pgmap v1340: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s wr, 1 op/s
Oct 13 14:19:12 standalone.localdomain podman[199842]: 2025-10-13 14:19:12.175080496 +0000 UTC m=+1.433813571 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=1, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 14:19:12 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:19:12 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:19:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:19:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:19:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:19:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:19:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:19:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:19:12 standalone.localdomain podman[200166]: 2025-10-13 14:19:12.802192952 +0000 UTC m=+0.066170047 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, com.redhat.component=openstack-glance-api-container, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 14:19:12 standalone.localdomain podman[200164]: 2025-10-13 14:19:12.861196208 +0000 UTC m=+0.129570628 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, container_name=placement_api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.component=openstack-placement-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, release=1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, batch=17.1_20250721.1, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 placement-api, description=Red Hat OpenStack Platform 17.1 placement-api, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-placement-api, 
tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:12, maintainer=OpenStack TripleO Team)
Oct 13 14:19:12 standalone.localdomain systemd[1]: tmp-crun.pNasUl.mount: Deactivated successfully.
Oct 13 14:19:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:19:12 standalone.localdomain podman[200217]: 2025-10-13 14:19:12.934537544 +0000 UTC m=+0.103827486 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, container_name=ovn_controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245)
Oct 13 14:19:12 standalone.localdomain podman[200165]: 2025-10-13 14:19:12.839671885 +0000 UTC m=+0.103879707 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_internal, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.component=openstack-glance-api-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:19:12 standalone.localdomain podman[200217]: 2025-10-13 14:19:12.981911992 +0000 UTC m=+0.151201924 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, container_name=ovn_controller, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 13 14:19:12 standalone.localdomain podman[200164]: 2025-10-13 14:19:12.992733245 +0000 UTC m=+0.261107695 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, build-date=2025-07-21T13:58:12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, container_name=placement_api, name=rhosp17/openstack-placement-api, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.component=openstack-placement-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 14:19:12 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:19:13 standalone.localdomain podman[200166]: 2025-10-13 14:19:13.007464748 +0000 UTC m=+0.271441823 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, name=rhosp17/openstack-glance-api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=glance_api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team)
Oct 13 14:19:13 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:19:13 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:19:13 standalone.localdomain podman[200172]: 2025-10-13 14:19:13.078788632 +0000 UTC m=+0.335087441 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_metadata, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, release=1, tcib_managed=true, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:19:13 standalone.localdomain podman[200165]: 2025-10-13 14:19:13.102474052 +0000 UTC m=+0.366681874 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api_internal, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:19:13 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:19:13 standalone.localdomain podman[200172]: 2025-10-13 14:19:13.117218165 +0000 UTC m=+0.373517004 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, architecture=x86_64, io.buildah.version=1.33.12, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_metadata, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, release=1)
Oct 13 14:19:13 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:19:13 standalone.localdomain podman[200216]: 2025-10-13 14:19:13.002308919 +0000 UTC m=+0.176044758 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, build-date=2025-07-21T14:48:37, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, container_name=swift_proxy, config_id=tripleo_step4, version=17.1.9, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.component=openstack-swift-proxy-server-container)
Oct 13 14:19:13 standalone.localdomain podman[200264]: 2025-10-13 14:19:13.05531152 +0000 UTC m=+0.116599729 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, distribution-scope=public, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:19:13 standalone.localdomain podman[200264]: 2025-10-13 14:19:13.188808248 +0000 UTC m=+0.250096457 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64)
Oct 13 14:19:13 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:19:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:19:13 standalone.localdomain podman[200216]: 2025-10-13 14:19:13.230808481 +0000 UTC m=+0.404544300 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-swift-proxy-server, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, release=1, build-date=2025-07-21T14:48:37)
Oct 13 14:19:13 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:19:13 standalone.localdomain podman[200340]: 2025-10-13 14:19:13.278800727 +0000 UTC m=+0.058177791 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, com.redhat.component=openstack-glance-api-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:19:13 standalone.localdomain podman[200340]: 2025-10-13 14:19:13.290709764 +0000 UTC m=+0.070086818 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, name=rhosp17/openstack-glance-api, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-glance-api-container)
Oct 13 14:19:13 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:19:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1341: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s wr, 1 op/s
Oct 13 14:19:13 standalone.localdomain systemd[1]: tmp-crun.6TpD82.mount: Deactivated successfully.
Oct 13 14:19:14 standalone.localdomain ceph-mon[29756]: pgmap v1341: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s wr, 1 op/s
Oct 13 14:19:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:19:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:19:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/745876583' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:19:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1342: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s wr, 1 op/s
Oct 13 14:19:16 standalone.localdomain haproxy[70940]: 172.17.0.2:37818 [13/Oct/2025:14:19:16.141] keystone_public keystone_public/standalone.internalapi.localdomain 0/0/0/32/32 200 8101 - - ---- 56/1/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:19:16 standalone.localdomain runuser[200411]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:19:16 standalone.localdomain haproxy[70940]: 172.17.0.2:49752 [13/Oct/2025:14:19:16.135] placement placement/standalone.internalapi.localdomain 0/0/0/57/57 200 497 - - ---- 56/1/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/allocations HTTP/1.1"
Oct 13 14:19:16 standalone.localdomain haproxy[70940]: 172.17.0.2:49752 [13/Oct/2025:14:19:16.198] placement placement/standalone.internalapi.localdomain 0/0/0/11/11 200 1117 - - ---- 56/1/0/0/0 0/0 "GET /placement/resource_providers?in_tree=e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 HTTP/1.1"
Oct 13 14:19:16 standalone.localdomain haproxy[70940]: 172.17.0.2:49752 [13/Oct/2025:14:19:16.214] placement placement/standalone.internalapi.localdomain 0/0/0/8/8 200 640 - - ---- 56/1/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/inventories HTTP/1.1"
Oct 13 14:19:16 standalone.localdomain haproxy[70940]: 172.17.0.2:49752 [13/Oct/2025:14:19:16.225] placement placement/standalone.internalapi.localdomain 0/0/0/7/7 200 361 - - ---- 56/1/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/aggregates HTTP/1.1"
Oct 13 14:19:16 standalone.localdomain haproxy[70940]: 172.17.0.2:49752 [13/Oct/2025:14:19:16.236] placement placement/standalone.internalapi.localdomain 0/0/0/10/10 200 1719 - - ---- 56/1/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/traits HTTP/1.1"
Oct 13 14:19:16 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/745876583' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:19:16 standalone.localdomain ceph-mon[29756]: pgmap v1342: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s wr, 1 op/s
Oct 13 14:19:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:19:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/357815293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:19:16 standalone.localdomain runuser[200411]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:19:16 standalone.localdomain runuser[200494]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:19:17 standalone.localdomain runuser[200494]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:19:17 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/357815293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:19:17 standalone.localdomain runuser[200556]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:19:17 standalone.localdomain haproxy[70940]: 172.17.0.2:34778 [13/Oct/2025:14:19:17.740] neutron neutron/standalone.internalapi.localdomain 0/0/0/72/72 200 254 - - ---- 57/1/0/0/0 0/0 "GET /v2.0/ports?device_id=8f68d5aa-abc4-451d-89d2-f5342b71831c&fields=binding%3Ahost_id&fields=binding%3Avif_type HTTP/1.1"
Oct 13 14:19:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1343: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s wr, 1 op/s
Oct 13 14:19:17 standalone.localdomain haproxy[70940]: 172.17.0.2:34778 [13/Oct/2025:14:19:17.824] neutron neutron/standalone.internalapi.localdomain 0/0/0/52/52 200 1350 - - ---- 57/1/0/0/0 0/0 "GET /v2.0/ports?tenant_id=e44641a80bcb466cb3dd688e48b72d8e&device_id=8f68d5aa-abc4-451d-89d2-f5342b71831c HTTP/1.1"
Oct 13 14:19:17 standalone.localdomain haproxy[70940]: 172.17.0.2:34778 [13/Oct/2025:14:19:17.888] neutron neutron/standalone.internalapi.localdomain 0/0/0/71/71 200 901 - - ---- 57/1/0/0/0 0/0 "GET /v2.0/networks?id=0c455abd-28d4-47e7-a254-e50de0526def HTTP/1.1"
Oct 13 14:19:18 standalone.localdomain haproxy[70940]: 172.17.0.2:34778 [13/Oct/2025:14:19:17.964] neutron neutron/standalone.internalapi.localdomain 0/0/0/42/42 200 192 - - ---- 57/1/0/0/0 0/0 "GET /v2.0/floatingips?fixed_ip_address=192.168.0.237&port_id=da3e5a61-7adb-481b-b7f5-703c3939cde2 HTTP/1.1"
Oct 13 14:19:18 standalone.localdomain haproxy[70940]: 172.17.0.2:34778 [13/Oct/2025:14:19:18.010] neutron neutron/standalone.internalapi.localdomain 0/0/0/42/42 200 810 - - ---- 57/1/0/0/0 0/0 "GET /v2.0/subnets?id=e1cc60b1-626d-46b0-8458-47640298fa14 HTTP/1.1"
Oct 13 14:19:18 standalone.localdomain haproxy[70940]: 172.17.0.2:34778 [13/Oct/2025:14:19:18.055] neutron neutron/standalone.internalapi.localdomain 0/0/0/85/85 200 1352 - - ---- 57/1/0/0/0 0/0 "GET /v2.0/ports?network_id=0c455abd-28d4-47e7-a254-e50de0526def&device_owner=network%3Adhcp HTTP/1.1"
Oct 13 14:19:18 standalone.localdomain haproxy[70940]: 172.17.0.2:34778 [13/Oct/2025:14:19:18.145] neutron neutron/standalone.internalapi.localdomain 0/0/0/76/76 200 187 - - ---- 57/1/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=segments HTTP/1.1"
Oct 13 14:19:18 standalone.localdomain runuser[200556]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:19:18 standalone.localdomain haproxy[70940]: 172.17.0.2:34778 [13/Oct/2025:14:19:18.224] neutron neutron/standalone.internalapi.localdomain 0/0/0/68/68 200 252 - - ---- 57/1/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=provider%3Aphysical_network&fields=provider%3Anetwork_type HTTP/1.1"
Oct 13 14:19:18 standalone.localdomain ceph-mon[29756]: pgmap v1343: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s wr, 1 op/s
Oct 13 14:19:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:19:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:19:18 standalone.localdomain podman[200622]: 2025-10-13 14:19:18.811823441 +0000 UTC m=+0.076137224 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, release=1, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=keystone_cron, io.openshift.expose-services=, name=rhosp17/openstack-keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T13:27:18)
Oct 13 14:19:18 standalone.localdomain podman[200622]: 2025-10-13 14:19:18.82284202 +0000 UTC m=+0.087155783 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, name=rhosp17/openstack-keystone, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-keystone-container, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T13:27:18, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, container_name=keystone_cron)
Oct 13 14:19:18 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:19:18 standalone.localdomain podman[200623]: 2025-10-13 14:19:18.864360658 +0000 UTC m=+0.127519616 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, release=1, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, vcs-type=git, name=rhosp17/openstack-nova-api, distribution-scope=public, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack 
Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, batch=17.1_20250721.1)
Oct 13 14:19:18 standalone.localdomain podman[200623]: 2025-10-13 14:19:18.895817665 +0000 UTC m=+0.158976583 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, com.redhat.component=openstack-nova-api-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:05:11, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, name=rhosp17/openstack-nova-api, 
com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-api, release=1, container_name=nova_api)
Oct 13 14:19:18 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:19:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1344: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s wr, 1 op/s
Oct 13 14:19:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:19:20 standalone.localdomain ceph-mon[29756]: pgmap v1344: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s wr, 1 op/s
Oct 13 14:19:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:19:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:19:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:19:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:19:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:19:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:19:21 standalone.localdomain systemd[1]: tmp-crun.veNt4s.mount: Deactivated successfully.
Oct 13 14:19:21 standalone.localdomain podman[200859]: 2025-10-13 14:19:21.808449409 +0000 UTC m=+0.078263139 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, version=17.1.9, container_name=neutron_dhcp, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-dhcp-agent-container, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-dhcp-agent, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, build-date=2025-07-21T16:28:54)
Oct 13 14:19:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1345: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:21 standalone.localdomain podman[200860]: 2025-10-13 14:19:21.864456932 +0000 UTC m=+0.134279033 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, config_id=tripleo_step3, container_name=barbican_api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, build-date=2025-07-21T15:22:44, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 barbican-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-barbican-api-container, name=rhosp17/openstack-barbican-api, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vcs-type=git, architecture=x86_64, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:19:21 standalone.localdomain podman[200859]: 2025-10-13 14:19:21.872770188 +0000 UTC m=+0.142583918 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-dhcp-agent-container, container_name=neutron_dhcp, build-date=2025-07-21T16:28:54, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-dhcp-agent, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:19:21 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:19:21 standalone.localdomain podman[200860]: 2025-10-13 14:19:21.922901531 +0000 UTC m=+0.192723632 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, version=17.1.9, release=1, description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, com.redhat.component=openstack-barbican-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=barbican_api, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 barbican-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T15:22:44, name=rhosp17/openstack-barbican-api, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:19:21 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:19:21 standalone.localdomain podman[200863]: 2025-10-13 14:19:21.96219003 +0000 UTC m=+0.224736617 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, container_name=barbican_worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, com.redhat.component=openstack-barbican-worker-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 barbican-worker, 
io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-worker, name=rhosp17/openstack-barbican-worker, version=17.1.9, vendor=Red Hat, Inc., vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, batch=17.1_20250721.1, vcs-type=git, config_id=tripleo_step3, build-date=2025-07-21T15:36:22, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:19:22 standalone.localdomain podman[200861]: 2025-10-13 14:19:22.010273209 +0000 UTC m=+0.280087519 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-barbican-keystone-listener, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, version=17.1.9, architecture=x86_64, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:18:19, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 
barbican-keystone-listener, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.component=openstack-barbican-keystone-listener-container, container_name=barbican_keystone_listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team)
Oct 13 14:19:22 standalone.localdomain podman[200862]: 2025-10-13 14:19:22.121885634 +0000 UTC m=+0.385656629 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-neutron-sriov-agent-container, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, vcs-type=git, build-date=2025-07-21T16:03:34, container_name=neutron_sriov_agent, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack 
Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-sriov-agent, tcib_managed=true, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1)
Oct 13 14:19:22 standalone.localdomain podman[200861]: 2025-10-13 14:19:22.15395998 +0000 UTC m=+0.423774230 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T16:18:19, release=1, name=rhosp17/openstack-barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_keystone_listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-barbican-keystone-listener-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:19:22 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:19:22 standalone.localdomain podman[200874]: 2025-10-13 14:19:22.170705035 +0000 UTC m=+0.426506954 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-api-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.33.12, 
build-date=2025-07-21T16:05:11, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-api, vendor=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, version=17.1.9, architecture=x86_64, tcib_managed=true)
Oct 13 14:19:22 standalone.localdomain podman[200862]: 2025-10-13 14:19:22.181042254 +0000 UTC m=+0.444813299 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, com.redhat.component=openstack-neutron-sriov-agent-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, container_name=neutron_sriov_agent, version=17.1.9, config_id=tripleo_step4, release=1, managed_by=tripleo_ansible)
Oct 13 14:19:22 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:19:22 standalone.localdomain podman[200863]: 2025-10-13 14:19:22.191863667 +0000 UTC m=+0.454410244 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-barbican-worker-container, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, architecture=x86_64, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, 
container_name=barbican_worker, build-date=2025-07-21T15:36:22, name=rhosp17/openstack-barbican-worker, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git)
Oct 13 14:19:22 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:19:22 standalone.localdomain podman[200874]: 2025-10-13 14:19:22.20986599 +0000 UTC m=+0.465667929 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.expose-services=, tcib_managed=true, release=1, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-api, distribution-scope=public, container_name=nova_api_cron, vendor=Red Hat, Inc., vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 14:19:22 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:19:22 standalone.localdomain systemd[1]: tmp-crun.cj5IfZ.mount: Deactivated successfully.
Oct 13 14:19:22 standalone.localdomain ceph-mon[29756]: pgmap v1345: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:19:23
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'vms', 'manila_metadata', 'volumes', '.mgr', 'images', 'manila_data']
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:19:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1346: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:19:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:19:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:19:24 standalone.localdomain podman[201123]: 2025-10-13 14:19:24.806669515 +0000 UTC m=+0.076322339 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, release=1, version=17.1.9, distribution-scope=public, build-date=2025-07-21T14:48:37, container_name=nova_compute, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:19:24 standalone.localdomain podman[201130]: 2025-10-13 14:19:24.855671143 +0000 UTC m=+0.117198707 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, container_name=clustercheck, summary=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, distribution-scope=public, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, release=1, 
io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, io.openshift.expose-services=)
Oct 13 14:19:24 standalone.localdomain podman[201130]: 2025-10-13 14:19:24.906856309 +0000 UTC m=+0.168383863 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step2, batch=17.1_20250721.1, summary=Red Hat 
OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, container_name=clustercheck, release=1)
Oct 13 14:19:24 standalone.localdomain ceph-mon[29756]: pgmap v1346: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:24 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:19:24 standalone.localdomain podman[201123]: 2025-10-13 14:19:24.933914271 +0000 UTC m=+0.203567155 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12)
Oct 13 14:19:24 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:19:24 standalone.localdomain podman[201124]: 2025-10-13 14:19:24.9085539 +0000 UTC m=+0.174185460 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, version=17.1.9, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=ovn_cluster_northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vendor=Red Hat, Inc., release=1, com.redhat.component=openstack-ovn-northd-container, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, container_name=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team)
Oct 13 14:19:24 standalone.localdomain podman[201124]: 2025-10-13 14:19:24.993977519 +0000 UTC m=+0.259609099 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-ovn-northd-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=ovn_cluster_northd, managed_by=tripleo_ansible, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-northd, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=ovn_cluster_northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:19:25 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:19:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:19:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1347: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:27 standalone.localdomain ceph-mon[29756]: pgmap v1347: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1348: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:28 standalone.localdomain ceph-mon[29756]: pgmap v1348: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:19:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:19:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:19:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:19:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:19:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:19:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:19:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:19:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:19:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:19:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:19:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:19:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:19:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:19:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:19:28 standalone.localdomain runuser[201242]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:19:29 standalone.localdomain runuser[201242]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:19:29 standalone.localdomain runuser[201311]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:19:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:19:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:19:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:19:29 standalone.localdomain systemd[1]: tmp-crun.WR5Vs3.mount: Deactivated successfully.
Oct 13 14:19:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1349: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:29 standalone.localdomain podman[201358]: 2025-10-13 14:19:29.837163259 +0000 UTC m=+0.090755215 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.9, name=rhosp17/openstack-cinder-scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, com.redhat.component=openstack-cinder-scheduler-container, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, container_name=cinder_scheduler, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, architecture=x86_64, build-date=2025-07-21T16:10:12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:19:29 standalone.localdomain podman[201356]: 2025-10-13 14:19:29.853062417 +0000 UTC m=+0.106281841 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:19:29 standalone.localdomain podman[201356]: 2025-10-13 14:19:29.872760403 +0000 UTC m=+0.125979827 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1, version=17.1.9, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, container_name=iscsid, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:19:29 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:19:29 standalone.localdomain podman[201358]: 2025-10-13 14:19:29.890222911 +0000 UTC m=+0.143814917 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, name=rhosp17/openstack-cinder-scheduler, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-scheduler-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, distribution-scope=public, build-date=2025-07-21T16:10:12, version=17.1.9, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=cinder_scheduler, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 13 14:19:29 standalone.localdomain podman[201357]: 2025-10-13 14:19:29.8924743 +0000 UTC m=+0.150212513 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, com.redhat.component=openstack-cinder-api-container, tcib_managed=true, vcs-type=git, version=17.1.9, config_id=tripleo_step4, build-date=2025-07-21T15:58:55, name=rhosp17/openstack-cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, container_name=cinder_api_cron, description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1)
Oct 13 14:19:29 standalone.localdomain podman[201357]: 2025-10-13 14:19:29.898975711 +0000 UTC m=+0.156713934 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-cinder-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, name=rhosp17/openstack-cinder-api, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, release=1, config_id=tripleo_step4, batch=17.1_20250721.1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, build-date=2025-07-21T15:58:55, container_name=cinder_api_cron, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 14:19:29 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:19:29 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:19:30 standalone.localdomain runuser[201311]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:19:30 standalone.localdomain runuser[201464]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:19:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:19:30 standalone.localdomain podman[201565]: 2025-10-13 14:19:30.78747399 +0000 UTC m=+0.068866681 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-share, build-date=2025-07-21T15:22:36, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-manila-share-container, name=rhosp17/openstack-manila-share, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 manila-share, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, maintainer=OpenStack TripleO Team, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, release=1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share)
Oct 13 14:19:30 standalone.localdomain podman[201565]: 2025-10-13 14:19:30.815605626 +0000 UTC m=+0.096998297 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 manila-share, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, summary=Red Hat OpenStack Platform 17.1 manila-share, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, com.redhat.component=openstack-manila-share-container, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, vendor=Red Hat, Inc., name=rhosp17/openstack-manila-share, tcib_managed=true, release=1, version=17.1.9, build-date=2025-07-21T15:22:36, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git)
Oct 13 14:19:30 standalone.localdomain ceph-mon[29756]: pgmap v1349: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:31 standalone.localdomain runuser[201464]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:19:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1350: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:32 standalone.localdomain ceph-mon[29756]: pgmap v1350: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1351: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:34 standalone.localdomain ceph-mon[29756]: pgmap v1351: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:19:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:19:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:19:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:19:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:19:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:19:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:19:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:19:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:19:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:19:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:19:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:19:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:19:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:19:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:19:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1352: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:35 standalone.localdomain systemd[1]: tmp-crun.CRdFfM.mount: Deactivated successfully.
Oct 13 14:19:35 standalone.localdomain podman[201898]: 2025-10-13 14:19:35.921222258 +0000 UTC m=+0.081787907 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, container_name=neutron_api, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-neutron-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, release=1, tcib_managed=true, build-date=2025-07-21T15:44:03)
Oct 13 14:19:35 standalone.localdomain podman[201853]: 2025-10-13 14:19:35.886609363 +0000 UTC m=+0.104747444 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cinder-api, com.redhat.component=openstack-cinder-api-container, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T15:58:55, container_name=cinder_api, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, version=17.1.9)
Oct 13 14:19:35 standalone.localdomain podman[201841]: 2025-10-13 14:19:35.998426094 +0000 UTC m=+0.221996422 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, release=1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, com.redhat.component=openstack-memcached-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, config_id=tripleo_step1, name=rhosp17/openstack-memcached, version=17.1.9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, build-date=2025-07-21T12:58:43, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vcs-type=git, distribution-scope=public)
Oct 13 14:19:36 standalone.localdomain podman[201826]: 2025-10-13 14:19:35.969451323 +0000 UTC m=+0.202866844 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-keystone-container, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:27:18, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., container_name=keystone, io.openshift.expose-services=)
Oct 13 14:19:36 standalone.localdomain podman[201853]: 2025-10-13 14:19:36.024900538 +0000 UTC m=+0.243038619 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:58:55, distribution-scope=public, tcib_managed=true, container_name=cinder_api, version=17.1.9, name=rhosp17/openstack-cinder-api, com.redhat.component=openstack-cinder-api-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, summary=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git)
Oct 13 14:19:36 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:19:36 standalone.localdomain podman[201903]: 2025-10-13 14:19:35.94142335 +0000 UTC m=+0.103159595 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, name=rhosp17/openstack-nova-conductor, build-date=2025-07-21T15:44:17, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-nova-conductor-container, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, 
io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=nova_conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1)
Oct 13 14:19:36 standalone.localdomain podman[201841]: 2025-10-13 14:19:36.098773512 +0000 UTC m=+0.322343840 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, name=rhosp17/openstack-memcached, vcs-type=git, com.redhat.component=openstack-memcached-container, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', 
'/var/log/containers/memcached:/var/log/memcached:rw']}, tcib_managed=true, container_name=memcached, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:19:36 standalone.localdomain podman[201827]: 2025-10-13 14:19:35.977886482 +0000 UTC m=+0.216817343 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-manila-scheduler, release=1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 manila-scheduler, build-date=2025-07-21T15:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, container_name=manila_scheduler, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-manila-scheduler-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, version=17.1.9, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler)
Oct 13 14:19:36 standalone.localdomain podman[201878]: 2025-10-13 14:19:36.141675172 +0000 UTC m=+0.342049296 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, container_name=horizon, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, release=1, tcib_managed=true, name=rhosp17/openstack-horizon, vendor=Red Hat, Inc., vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, build-date=2025-07-21T13:58:15, com.redhat.component=openstack-horizon-container, summary=Red Hat OpenStack Platform 17.1 horizon, description=Red Hat OpenStack Platform 17.1 horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']})
Oct 13 14:19:36 standalone.localdomain podman[201909]: 2025-10-13 14:19:36.109077978 +0000 UTC m=+0.258056191 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, tcib_managed=true, build-date=2025-07-21T13:07:52, 
version=17.1.9, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c)
Oct 13 14:19:36 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:19:36 standalone.localdomain podman[201827]: 2025-10-13 14:19:36.16697789 +0000 UTC m=+0.405908771 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 manila-scheduler, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-manila-scheduler-container, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, 
summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-07-21T15:56:28, container_name=manila_scheduler, architecture=x86_64, name=rhosp17/openstack-manila-scheduler)
Oct 13 14:19:36 standalone.localdomain podman[201898]: 2025-10-13 14:19:36.175308127 +0000 UTC m=+0.335873846 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-server, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, container_name=neutron_api, batch=17.1_20250721.1, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, build-date=2025-07-21T15:44:03, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, com.redhat.component=openstack-neutron-server-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, release=1)
Oct 13 14:19:36 standalone.localdomain podman[201903]: 2025-10-13 14:19:36.175822572 +0000 UTC m=+0.337558827 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-nova-conductor-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, config_id=tripleo_step4, version=17.1.9, name=rhosp17/openstack-nova-conductor, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, batch=17.1_20250721.1, 
build-date=2025-07-21T15:44:17, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, container_name=nova_conductor, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d)
Oct 13 14:19:36 standalone.localdomain podman[201862]: 2025-10-13 14:19:36.034414552 +0000 UTC m=+0.251218522 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9, build-date=2025-07-21T16:06:43, description=Red Hat OpenStack Platform 17.1 manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, name=rhosp17/openstack-manila-api, architecture=x86_64, com.redhat.component=openstack-manila-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=manila_api_cron, release=1, tcib_managed=true)
Oct 13 14:19:36 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:19:36 standalone.localdomain podman[201909]: 2025-10-13 14:19:36.193003431 +0000 UTC m=+0.341981644 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, 
build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, distribution-scope=public, vcs-type=git, tcib_managed=true)
Oct 13 14:19:36 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:19:36 standalone.localdomain podman[201862]: 2025-10-13 14:19:36.22092059 +0000 UTC m=+0.437724620 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T16:06:43, io.openshift.expose-services=, architecture=x86_64, container_name=manila_api_cron, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, name=rhosp17/openstack-manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, 
config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-manila-api-container, distribution-scope=public, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 manila-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git)
Oct 13 14:19:36 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:19:36 standalone.localdomain podman[201882]: 2025-10-13 14:19:36.083175502 +0000 UTC m=+0.247220728 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, name=rhosp17/openstack-heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, release=1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-heat-api-container, tcib_managed=true, vcs-type=git, build-date=2025-07-21T15:56:26, container_name=heat_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, 
summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12)
Oct 13 14:19:36 standalone.localdomain podman[201867]: 2025-10-13 14:19:36.241020509 +0000 UTC m=+0.411368529 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, container_name=heat_api_cron, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-heat-api-container, vcs-type=git, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 heat-api, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:19:36 standalone.localdomain podman[201833]: 2025-10-13 14:19:36.190030749 +0000 UTC m=+0.423425420 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, container_name=heat_engine, com.redhat.component=openstack-heat-engine-container, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, version=17.1.9, vcs-type=git, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, 
maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, managed_by=tripleo_ansible, release=1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3)
Oct 13 14:19:36 standalone.localdomain podman[201878]: 2025-10-13 14:19:36.250862862 +0000 UTC m=+0.451236986 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, config_id=tripleo_step3, release=1, tcib_managed=true, version=17.1.9, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, summary=Red Hat OpenStack Platform 17.1 horizon, container_name=horizon, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 horizon, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-horizon-container, architecture=x86_64, 
build-date=2025-07-21T13:58:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, distribution-scope=public, name=rhosp17/openstack-horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1)
Oct 13 14:19:36 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:19:36 standalone.localdomain podman[201867]: 2025-10-13 14:19:36.301569512 +0000 UTC m=+0.471917552 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, container_name=heat_api_cron, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1)
Oct 13 14:19:36 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:19:36 standalone.localdomain podman[201826]: 2025-10-13 14:19:36.311296732 +0000 UTC m=+0.544712273 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, name=rhosp17/openstack-keystone, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, container_name=keystone, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team)
Oct 13 14:19:36 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:19:36 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:19:36 standalone.localdomain podman[201833]: 2025-10-13 14:19:36.376183148 +0000 UTC m=+0.609577829 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=heat_engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, architecture=x86_64, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:19:36 standalone.localdomain podman[201825]: 2025-10-13 14:19:36.393970585 +0000 UTC m=+0.644260755 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, name=rhosp17/openstack-heat-api-cfn, container_name=heat_api_cfn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:49:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, release=1)
Oct 13 14:19:36 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:19:36 standalone.localdomain podman[201882]: 2025-10-13 14:19:36.41722402 +0000 UTC m=+0.581269266 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:56:26, version=17.1.9, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, container_name=heat_api, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true)
Oct 13 14:19:36 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:19:36 standalone.localdomain podman[201834]: 2025-10-13 14:19:36.498405648 +0000 UTC m=+0.721774179 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:02:54, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-scheduler-container, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=nova_scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-nova-scheduler, release=1)
Oct 13 14:19:36 standalone.localdomain podman[201834]: 2025-10-13 14:19:36.519211259 +0000 UTC m=+0.742579810 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-scheduler-container, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-scheduler, vcs-type=git, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, build-date=2025-07-21T16:02:54, description=Red Hat OpenStack Platform 17.1 nova-scheduler, release=1, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.buildah.version=1.33.12, version=17.1.9, 
io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true)
Oct 13 14:19:36 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:19:36 standalone.localdomain podman[201825]: 2025-10-13 14:19:36.577658297 +0000 UTC m=+0.827948437 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, managed_by=tripleo_ansible, container_name=heat_api_cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, release=1, com.redhat.component=openstack-heat-api-cfn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO 
Team, version=17.1.9, distribution-scope=public, build-date=2025-07-21T14:49:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true)
Oct 13 14:19:36 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:19:36 standalone.localdomain ceph-mon[29756]: pgmap v1352: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:36 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:19:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1353: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:38 standalone.localdomain ceph-mon[29756]: pgmap v1353: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1354: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:40 standalone.localdomain podman[202180]: 2025-10-13 14:19:40.310155297 +0000 UTC m=+0.099644647 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T16:18:24, vendor=Red Hat, Inc., com.redhat.component=openstack-cinder-backup-container, summary=Red Hat OpenStack Platform 17.1 cinder-backup, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-backup, version=17.1.9, release=1, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, name=rhosp17/openstack-cinder-backup)
Oct 13 14:19:40 standalone.localdomain podman[202180]: 2025-10-13 14:19:40.342847003 +0000 UTC m=+0.132336273 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, com.redhat.component=openstack-cinder-backup-container, description=Red Hat OpenStack Platform 17.1 cinder-backup, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, version=17.1.9, name=rhosp17/openstack-cinder-backup, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T16:18:24, summary=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 14:19:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:19:40 standalone.localdomain ceph-mon[29756]: pgmap v1354: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:41 standalone.localdomain runuser[202353]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:19:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:19:41 standalone.localdomain podman[202374]: 2025-10-13 14:19:41.836271097 +0000 UTC m=+0.100871585 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-container-container, tcib_managed=true, build-date=2025-07-21T15:54:32, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git)
Oct 13 14:19:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1355: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:19:41 standalone.localdomain systemd[1]: tmp-crun.GcwWUE.mount: Deactivated successfully.
Oct 13 14:19:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:19:41 standalone.localdomain podman[202460]: 2025-10-13 14:19:41.992273797 +0000 UTC m=+0.122137449 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, tcib_managed=true, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, release=1, build-date=2025-07-21T15:24:10, com.redhat.component=openstack-nova-novncproxy-container, container_name=nova_vnc_proxy, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64)
Oct 13 14:19:42 standalone.localdomain podman[202374]: 2025-10-13 14:19:42.015835072 +0000 UTC m=+0.280435550 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 14:19:42 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:19:42 standalone.localdomain podman[202483]: 2025-10-13 14:19:42.090979064 +0000 UTC m=+0.093357764 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, vcs-type=git, build-date=2025-07-21T14:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, container_name=swift_object_server, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:19:42 standalone.localdomain podman[202483]: 2025-10-13 14:19:42.281910919 +0000 UTC m=+0.284289719 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, container_name=swift_object_server, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-swift-object-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, build-date=2025-07-21T14:56:28, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d)
Oct 13 14:19:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:19:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:19:42 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:19:42 standalone.localdomain podman[202460]: 2025-10-13 14:19:42.312998735 +0000 UTC m=+0.442862387 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, com.redhat.component=openstack-nova-novncproxy-container, name=rhosp17/openstack-nova-novncproxy, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:24:10, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_vnc_proxy, tcib_managed=true)
Oct 13 14:19:42 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:19:42 standalone.localdomain podman[202560]: 2025-10-13 14:19:42.359393134 +0000 UTC m=+0.058552673 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:19:42 standalone.localdomain podman[202561]: 2025-10-13 14:19:42.405564324 +0000 UTC m=+0.094559281 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=swift_account_server, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, architecture=x86_64)
Oct 13 14:19:42 standalone.localdomain runuser[202353]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:19:42 standalone.localdomain podman[202561]: 2025-10-13 14:19:42.636769109 +0000 UTC m=+0.325764056 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, tcib_managed=true, config_id=tripleo_step4, container_name=swift_account_server, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-account, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:19:42 standalone.localdomain runuser[202679]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:19:42 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:19:42 standalone.localdomain podman[202560]: 2025-10-13 14:19:42.727681545 +0000 UTC m=+0.426841144 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, tcib_managed=true, container_name=nova_migration_target)
Oct 13 14:19:42 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:19:42 standalone.localdomain ceph-mon[29756]: pgmap v1355: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:43 standalone.localdomain runuser[202679]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:19:43 standalone.localdomain runuser[202742]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:19:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:19:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:19:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:19:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:19:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:19:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:19:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:19:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:19:43 standalone.localdomain podman[202801]: 2025-10-13 14:19:43.823112622 +0000 UTC m=+0.088976249 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, name=rhosp17/openstack-swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=swift_proxy, managed_by=tripleo_ansible, version=17.1.9, release=1, batch=17.1_20250721.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:19:43 standalone.localdomain podman[202800]: 2025-10-13 14:19:43.82985702 +0000 UTC m=+0.097204642 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, vendor=Red Hat, Inc., com.redhat.component=openstack-placement-api-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 placement-api, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, name=rhosp17/openstack-placement-api, container_name=placement_api, vcs-type=git, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, build-date=2025-07-21T13:58:12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, release=1, summary=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, config_id=tripleo_step4, distribution-scope=public)
Oct 13 14:19:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1356: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:43 standalone.localdomain systemd[1]: tmp-crun.g1iEj7.mount: Deactivated successfully.
Oct 13 14:19:43 standalone.localdomain podman[202811]: 2025-10-13 14:19:43.910524392 +0000 UTC m=+0.171384185 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.33.12, config_id=tripleo_step4, build-date=2025-07-21T13:58:20, batch=17.1_20250721.1, container_name=glance_api_internal, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:19:43 standalone.localdomain podman[202818]: 2025-10-13 14:19:43.9468547 +0000 UTC m=+0.204389720 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, version=17.1.9, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1)
Oct 13 14:19:43 standalone.localdomain podman[202799]: 2025-10-13 14:19:43.864695022 +0000 UTC m=+0.132977013 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, 
io.buildah.version=1.33.12, com.redhat.component=openstack-glance-api-container, container_name=glance_api_cron, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:19:44 standalone.localdomain podman[202799]: 2025-10-13 14:19:44.002930285 +0000 UTC m=+0.271212276 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, batch=17.1_20250721.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4)
Oct 13 14:19:44 standalone.localdomain podman[202802]: 2025-10-13 14:19:44.0131369 +0000 UTC m=+0.276097487 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, io.openshift.expose-services=, container_name=nova_metadata, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-api, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-api, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:05:11)
Oct 13 14:19:44 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:19:44 standalone.localdomain podman[202801]: 2025-10-13 14:19:44.029793632 +0000 UTC m=+0.295657289 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_proxy, distribution-scope=public, 
vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:19:44 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:19:44 standalone.localdomain podman[202818]: 2025-10-13 14:19:44.05409059 +0000 UTC m=+0.311625630 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=)
Oct 13 14:19:44 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:19:44 standalone.localdomain podman[202800]: 2025-10-13 14:19:44.063682755 +0000 UTC m=+0.331030377 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, container_name=placement_api, io.buildah.version=1.33.12, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 placement-api, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-placement-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, 
build-date=2025-07-21T13:58:12, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, summary=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64)
Oct 13 14:19:44 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:19:44 standalone.localdomain runuser[202742]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:19:44 standalone.localdomain podman[202802]: 2025-10-13 14:19:44.107864435 +0000 UTC m=+0.370825032 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-api-container, description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, version=17.1.9, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, architecture=x86_64, container_name=nova_metadata)
Oct 13 14:19:44 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:19:44 standalone.localdomain podman[202828]: 2025-10-13 14:19:44.151360573 +0000 UTC m=+0.402513036 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, architecture=x86_64, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp 
openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=glance_api, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1)
Oct 13 14:19:44 standalone.localdomain podman[202811]: 2025-10-13 14:19:44.157870304 +0000 UTC m=+0.418730067 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, config_id=tripleo_step4, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, 
com.redhat.component=openstack-glance-api-container, vcs-type=git, vendor=Red Hat, Inc., release=1, tcib_managed=true, io.openshift.expose-services=, container_name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1)
Oct 13 14:19:44 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:19:44 standalone.localdomain podman[202823]: 2025-10-13 14:19:44.29814248 +0000 UTC m=+0.550524002 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, container_name=ovn_controller, version=17.1.9, build-date=2025-07-21T13:28:44, architecture=x86_64, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1)
Oct 13 14:19:44 standalone.localdomain podman[202828]: 2025-10-13 14:19:44.323903642 +0000 UTC m=+0.575056125 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, com.redhat.component=openstack-glance-api-container, tcib_managed=true, container_name=glance_api, architecture=x86_64, name=rhosp17/openstack-glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:19:44 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:19:44 standalone.localdomain podman[202823]: 2025-10-13 14:19:44.378900455 +0000 UTC m=+0.631281957 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, config_id=tripleo_step4, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, version=17.1.9, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git)
Oct 13 14:19:44 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:19:44 standalone.localdomain ceph-mon[29756]: pgmap v1356: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:19:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1357: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:46 standalone.localdomain ceph-mon[29756]: pgmap v1357: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1358: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:48 standalone.localdomain ceph-mon[29756]: pgmap v1358: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:19:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:19:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1359: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:49 standalone.localdomain podman[203034]: 2025-10-13 14:19:49.849461967 +0000 UTC m=+0.098743179 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, name=rhosp17/openstack-keystone, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=keystone_cron, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:18, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', 
'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.component=openstack-keystone-container, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64)
Oct 13 14:19:49 standalone.localdomain podman[203035]: 2025-10-13 14:19:49.915454918 +0000 UTC m=+0.170186658 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, com.redhat.component=openstack-nova-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, 
container_name=nova_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, name=rhosp17/openstack-nova-api, release=1, build-date=2025-07-21T16:05:11)
Oct 13 14:19:49 standalone.localdomain podman[203034]: 2025-10-13 14:19:49.936404632 +0000 UTC m=+0.185685844 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, container_name=keystone_cron, release=1, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
com.redhat.component=openstack-keystone-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:19:49 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:19:49 standalone.localdomain podman[203035]: 2025-10-13 14:19:49.965183179 +0000 UTC m=+0.219914939 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, name=rhosp17/openstack-nova-api, architecture=x86_64, container_name=nova_api, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, release=1, version=17.1.9, build-date=2025-07-21T16:05:11, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-nova-api-container)
Oct 13 14:19:49 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:19:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:19:50 standalone.localdomain ceph-mon[29756]: pgmap v1359: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1360: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:19:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:19:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:19:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:19:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:19:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:19:52 standalone.localdomain podman[203366]: 2025-10-13 14:19:52.843207542 +0000 UTC m=+0.092131538 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:36:22, com.redhat.component=openstack-barbican-worker-container, io.openshift.expose-services=, container_name=barbican_worker, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, name=rhosp17/openstack-barbican-worker, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:19:52 standalone.localdomain podman[203360]: 2025-10-13 14:19:52.851522871 +0000 UTC m=+0.091555410 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=neutron_dhcp, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:54, 
config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-dhcp-agent, vcs-type=git, version=17.1.9, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.expose-services=, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1)
Oct 13 14:19:52 standalone.localdomain podman[203360]: 2025-10-13 14:19:52.915036528 +0000 UTC m=+0.155069067 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, com.redhat.component=openstack-neutron-dhcp-agent-container, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, container_name=neutron_dhcp, summary=Red Hat OpenStack Platform 17.1 
neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T16:28:54, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-dhcp-agent, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1)
Oct 13 14:19:52 standalone.localdomain podman[203365]: 2025-10-13 14:19:52.903956093 +0000 UTC m=+0.148352607 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, name=rhosp17/openstack-neutron-sriov-agent, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, com.redhat.component=openstack-neutron-sriov-agent-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, 
build-date=2025-07-21T16:03:34, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, architecture=x86_64, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true)
Oct 13 14:19:52 standalone.localdomain ceph-mon[29756]: pgmap v1360: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:52 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:19:52 standalone.localdomain podman[203371]: 2025-10-13 14:19:52.951479352 +0000 UTC m=+0.189442166 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, vcs-type=git, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-nova-api, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, build-date=2025-07-21T16:05:11, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat 
OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, architecture=x86_64, container_name=nova_api_cron, release=1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, summary=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:19:52 standalone.localdomain podman[203371]: 2025-10-13 14:19:52.961955067 +0000 UTC m=+0.199917911 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, com.redhat.component=openstack-nova-api-container, container_name=nova_api_cron, io.buildah.version=1.33.12, release=1, distribution-scope=public, description=Red Hat 
OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-api, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 14:19:53 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:19:53 standalone.localdomain podman[203363]: 2025-10-13 14:19:53.046617882 +0000 UTC m=+0.299385736 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, container_name=barbican_api, vcs-type=git, com.redhat.component=openstack-barbican-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, 
batch=17.1_20250721.1, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 barbican-api, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, build-date=2025-07-21T15:22:44)
Oct 13 14:19:53 standalone.localdomain podman[203365]: 2025-10-13 14:19:53.047603032 +0000 UTC m=+0.291999546 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, release=1, container_name=neutron_sriov_agent, 
io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-sriov-agent, io.openshift.expose-services=, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:19:53 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:19:53 standalone.localdomain podman[203366]: 2025-10-13 14:19:53.078536114 +0000 UTC m=+0.327460109 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T15:36:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, architecture=x86_64, container_name=barbican_worker, version=17.1.9, com.redhat.component=openstack-barbican-worker-container, 
release=1, description=Red Hat OpenStack Platform 17.1 barbican-worker, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible)
Oct 13 14:19:53 standalone.localdomain podman[203363]: 2025-10-13 14:19:53.129163469 +0000 UTC m=+0.381931303 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-barbican-api, vcs-type=git, com.redhat.component=openstack-barbican-api-container, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T15:22:44, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, container_name=barbican_api, description=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64)
Oct 13 14:19:53 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:19:53 standalone.localdomain podman[203364]: 2025-10-13 14:19:53.099618641 +0000 UTC m=+0.350787786 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T16:18:19, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, name=rhosp17/openstack-barbican-keystone-listener, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_keystone_listener, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-barbican-keystone-listener-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1)
Oct 13 14:19:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:19:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:19:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:19:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:19:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:19:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:19:53 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:19:53 standalone.localdomain podman[203364]: 2025-10-13 14:19:53.235590101 +0000 UTC m=+0.486759236 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-barbican-keystone-listener, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T16:18:19, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, container_name=barbican_keystone_listener, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.component=openstack-barbican-keystone-listener-container)
Oct 13 14:19:53 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:19:53 standalone.localdomain systemd[1]: tmp-crun.oyfjr8.mount: Deactivated successfully.
Oct 13 14:19:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1361: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:54 standalone.localdomain runuser[203524]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:19:54 standalone.localdomain ceph-mon[29756]: pgmap v1361: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:55 standalone.localdomain runuser[203524]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:19:55 standalone.localdomain runuser[203593]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:19:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:19:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:19:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:19:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:19:55 standalone.localdomain podman[203638]: 2025-10-13 14:19:55.795275812 +0000 UTC m=+0.060916517 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, vcs-type=git, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1, io.openshift.expose-services=)
Oct 13 14:19:55 standalone.localdomain podman[203638]: 2025-10-13 14:19:55.82381511 +0000 UTC m=+0.089455805 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, vendor=Red Hat, Inc., tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:19:55 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:19:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1362: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:55 standalone.localdomain systemd[1]: tmp-crun.f8eR2W.mount: Deactivated successfully.
Oct 13 14:19:55 standalone.localdomain podman[203639]: 2025-10-13 14:19:55.908848945 +0000 UTC m=+0.173154579 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-northd, com.redhat.component=openstack-ovn-northd-container, container_name=ovn_cluster_northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, 
config_id=ovn_cluster_northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-type=git, build-date=2025-07-21T13:30:04, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-northd, version=17.1.9, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4)
Oct 13 14:19:55 standalone.localdomain podman[203640]: 2025-10-13 14:19:55.879248304 +0000 UTC m=+0.141954708 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, config_id=tripleo_step2, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, com.redhat.component=openstack-mariadb-container, build-date=2025-07-21T12:58:45, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=clustercheck)
Oct 13 14:19:55 standalone.localdomain podman[203640]: 2025-10-13 14:19:55.943832594 +0000 UTC m=+0.206538998 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-mariadb-container, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-mariadb, release=1, config_id=tripleo_step2, architecture=x86_64, container_name=clustercheck, managed_by=tripleo_ansible, 
batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public)
Oct 13 14:19:55 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:19:55 standalone.localdomain podman[203639]: 2025-10-13 14:19:55.997761621 +0000 UTC m=+0.262067265 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, build-date=2025-07-21T13:30:04, release=1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-northd-container, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., config_id=ovn_cluster_northd, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, container_name=ovn_cluster_northd, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-northd, name=rhosp17/openstack-ovn-northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:19:56 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:19:56 standalone.localdomain runuser[203593]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:19:56 standalone.localdomain runuser[203734]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:19:56 standalone.localdomain ceph-mon[29756]: pgmap v1362: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:56 standalone.localdomain runuser[203734]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:19:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:19:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3993604687' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:19:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:19:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3993604687' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:19:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3993604687' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:19:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3993604687' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:19:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1363: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:58 standalone.localdomain ceph-mon[29756]: pgmap v1363: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:19:59 standalone.localdomain sudo[203882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:19:59 standalone.localdomain sudo[203882]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:19:59 standalone.localdomain sudo[203882]: pam_unix(sudo:session): session closed for user root
Oct 13 14:19:59 standalone.localdomain sudo[203897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:19:59 standalone.localdomain sudo[203897]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:19:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1364: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:00 standalone.localdomain sudo[203897]: pam_unix(sudo:session): session closed for user root
Oct 13 14:20:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:20:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:20:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:20:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:20:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:20:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:20:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:20:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:20:00 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 8872969f-a688-46d2-b2d1-0e958a4b9445 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:20:00 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 8872969f-a688-46d2-b2d1-0e958a4b9445 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:20:00 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 8872969f-a688-46d2-b2d1-0e958a4b9445 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:20:00 standalone.localdomain sudo[203953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:20:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:20:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:20:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:20:00 standalone.localdomain sudo[203953]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:20:00 standalone.localdomain sudo[203953]: pam_unix(sudo:session): session closed for user root
Oct 13 14:20:00 standalone.localdomain podman[203969]: 2025-10-13 14:20:00.377580942 +0000 UTC m=+0.074068805 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, name=rhosp17/openstack-cinder-scheduler, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.component=openstack-cinder-scheduler-container, tcib_managed=true, container_name=cinder_scheduler, build-date=2025-07-21T16:10:12, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, release=1, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler)
Oct 13 14:20:00 standalone.localdomain podman[203969]: 2025-10-13 14:20:00.425445772 +0000 UTC m=+0.121933615 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.component=openstack-cinder-scheduler-container, vendor=Red Hat, Inc., container_name=cinder_scheduler, name=rhosp17/openstack-cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, architecture=x86_64, build-date=2025-07-21T16:10:12, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']})
Oct 13 14:20:00 standalone.localdomain systemd[1]: tmp-crun.4c4Mpu.mount: Deactivated successfully.
Oct 13 14:20:00 standalone.localdomain podman[203968]: 2025-10-13 14:20:00.436710053 +0000 UTC m=+0.134915900 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, container_name=cinder_api_cron, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, build-date=2025-07-21T15:58:55, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, architecture=x86_64, release=1, com.redhat.component=openstack-cinder-api-container, io.openshift.expose-services=, name=rhosp17/openstack-cinder-api, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, tcib_managed=true, distribution-scope=public)
Oct 13 14:20:00 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:20:00 standalone.localdomain podman[203968]: 2025-10-13 14:20:00.446762855 +0000 UTC m=+0.144968692 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-07-21T15:58:55, name=rhosp17/openstack-cinder-api, vendor=Red Hat, Inc., container_name=cinder_api_cron, description=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, com.redhat.component=openstack-cinder-api-container, vcs-type=git, architecture=x86_64)
Oct 13 14:20:00 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:20:00 standalone.localdomain podman[203967]: 2025-10-13 14:20:00.539908123 +0000 UTC m=+0.238100619 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, release=1, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public)
Oct 13 14:20:00 standalone.localdomain podman[203967]: 2025-10-13 14:20:00.585007947 +0000 UTC m=+0.283200443 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, container_name=iscsid, distribution-scope=public, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64)
Oct 13 14:20:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:20:00 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:20:00 standalone.localdomain ceph-mon[29756]: pgmap v1364: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:20:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:20:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:20:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:20:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1365: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:02 standalone.localdomain podman[204303]: 2025-10-13 14:20:02.917155287 +0000 UTC m=+0.073885659 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:20:02 standalone.localdomain ceph-mon[29756]: pgmap v1365: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:02 standalone.localdomain podman[204303]: 2025-10-13 14:20:02.94678908 +0000 UTC m=+0.103519452 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, name=rhosp17/openstack-mariadb, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, release=1, distribution-scope=public, com.redhat.component=openstack-mariadb-container, summary=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:20:03 standalone.localdomain systemd[1]: tmp-crun.lafrJC.mount: Deactivated successfully.
Oct 13 14:20:03 standalone.localdomain podman[204338]: 2025-10-13 14:20:03.157865347 +0000 UTC m=+0.052706641 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, tcib_managed=true, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, summary=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, build-date=2025-07-21T13:08:11, description=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-haproxy-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1)
Oct 13 14:20:03 standalone.localdomain podman[204338]: 2025-10-13 14:20:03.188858781 +0000 UTC m=+0.083700065 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.component=openstack-haproxy-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, release=1, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:08:11, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-type=git)
Oct 13 14:20:03 standalone.localdomain podman[204379]: 2025-10-13 14:20:03.441815291 +0000 UTC m=+0.079938737 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-rabbitmq-container, io.buildah.version=1.33.12, name=rhosp17/openstack-rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:20:03 standalone.localdomain podman[204379]: 2025-10-13 14:20:03.47486887 +0000 UTC m=+0.112992276 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.buildah.version=1.33.12, build-date=2025-07-21T13:08:05, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, release=1, com.redhat.component=openstack-rabbitmq-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:20:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1366: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:03 standalone.localdomain systemd[175349]: Created slice User Background Tasks Slice.
Oct 13 14:20:03 standalone.localdomain systemd[1]: tmp-crun.NvLhwe.mount: Deactivated successfully.
Oct 13 14:20:03 standalone.localdomain systemd[175349]: Starting Cleanup of User's Temporary Files and Directories...
Oct 13 14:20:03 standalone.localdomain systemd[175349]: Finished Cleanup of User's Temporary Files and Directories.
Oct 13 14:20:03 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:20:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:20:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:20:04 standalone.localdomain ceph-mon[29756]: pgmap v1366: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:04 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:20:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:20:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1367: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:06 standalone.localdomain ceph-mon[29756]: pgmap v1367: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:20:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:20:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:20:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:20:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:20:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:20:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:20:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:20:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:20:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:20:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:20:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:20:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:20:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:20:06 standalone.localdomain podman[204439]: 2025-10-13 14:20:06.867553798 +0000 UTC m=+0.115835355 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-heat-engine, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, build-date=2025-07-21T15:44:11, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-heat-engine-container, description=Red Hat OpenStack Platform 17.1 heat-engine, 
io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:20:06 standalone.localdomain podman[204492]: 2025-10-13 14:20:06.875859426 +0000 UTC m=+0.096093810 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, container_name=neutron_api, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-neutron-server-container, distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, tcib_managed=true, build-date=2025-07-21T15:44:03, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-server, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:20:06 standalone.localdomain podman[204443]: 2025-10-13 14:20:06.912032932 +0000 UTC m=+0.159353199 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, com.redhat.component=openstack-nova-scheduler-container, description=Red Hat OpenStack Platform 17.1 nova-scheduler, managed_by=tripleo_ansible, vcs-type=git, container_name=nova_scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, build-date=2025-07-21T16:02:54, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, maintainer=OpenStack TripleO Team)
Oct 13 14:20:06 standalone.localdomain podman[204439]: 2025-10-13 14:20:06.915459848 +0000 UTC m=+0.163741395 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-engine, build-date=2025-07-21T15:44:11, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, container_name=heat_engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, version=17.1.9, com.redhat.component=openstack-heat-engine-container, description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc.)
Oct 13 14:20:06 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:20:06 standalone.localdomain podman[204478]: 2025-10-13 14:20:06.952745869 +0000 UTC m=+0.183123829 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.buildah.version=1.33.12, distribution-scope=public, name=rhosp17/openstack-heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26)
Oct 13 14:20:06 standalone.localdomain podman[204443]: 2025-10-13 14:20:06.956683011 +0000 UTC m=+0.204003278 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.expose-services=, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, batch=17.1_20250721.1, name=rhosp17/openstack-nova-scheduler, maintainer=OpenStack TripleO Team, container_name=nova_scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-scheduler-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, config_id=tripleo_step4, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T16:02:54, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:20:06 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:20:06 standalone.localdomain podman[204483]: 2025-10-13 14:20:06.998920215 +0000 UTC m=+0.228131138 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 horizon, config_id=tripleo_step3, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:15, com.redhat.component=openstack-horizon-container, name=rhosp17/openstack-horizon, container_name=horizon, distribution-scope=public)
Oct 13 14:20:07 standalone.localdomain podman[204438]: 2025-10-13 14:20:06.908540503 +0000 UTC m=+0.163867460 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-manila-scheduler, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-manila-scheduler-container, tcib_managed=true, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, config_id=tripleo_step4, release=1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 manila-scheduler, build-date=2025-07-21T15:56:28, managed_by=tripleo_ansible, container_name=manila_scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:20:07 standalone.localdomain podman[204490]: 2025-10-13 14:20:06.959146808 +0000 UTC m=+0.186148043 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, container_name=heat_api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-container, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:20:07 standalone.localdomain podman[204478]: 2025-10-13 14:20:07.015867862 +0000 UTC m=+0.246245802 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, name=rhosp17/openstack-heat-api, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, distribution-scope=public, container_name=heat_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-heat-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1)
Oct 13 14:20:07 standalone.localdomain podman[204436]: 2025-10-13 14:20:07.020887039 +0000 UTC m=+0.275260805 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, build-date=2025-07-21T14:49:55, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-cfn-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, container_name=heat_api_cfn, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, config_id=tripleo_step4, release=1, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:20:07 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:20:07 standalone.localdomain podman[204490]: 2025-10-13 14:20:07.043857453 +0000 UTC m=+0.270858708 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, name=rhosp17/openstack-heat-api, architecture=x86_64)
Oct 13 14:20:07 standalone.localdomain podman[204492]: 2025-10-13 14:20:07.050340235 +0000 UTC m=+0.270574629 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-server, container_name=neutron_api, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T15:44:03, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-server-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server)
Oct 13 14:20:07 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:20:07 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:20:07 standalone.localdomain podman[204438]: 2025-10-13 14:20:07.090731322 +0000 UTC m=+0.346058259 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, release=1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, container_name=manila_scheduler, version=17.1.9, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.component=openstack-manila-scheduler-container, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-manila-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 manila-scheduler)
Oct 13 14:20:07 standalone.localdomain podman[204436]: 2025-10-13 14:20:07.096933745 +0000 UTC m=+0.351307531 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T14:49:55, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-heat-api-cfn-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cfn, architecture=x86_64, name=rhosp17/openstack-heat-api-cfn)
Oct 13 14:20:07 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:20:07 standalone.localdomain podman[204437]: 2025-10-13 14:20:07.110295751 +0000 UTC m=+0.362657315 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, distribution-scope=public, architecture=x86_64, container_name=keystone, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, name=rhosp17/openstack-keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, com.redhat.component=openstack-keystone-container, version=17.1.9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, io.openshift.expose-services=)
Oct 13 14:20:07 standalone.localdomain podman[204461]: 2025-10-13 14:20:07.061395169 +0000 UTC m=+0.288599371 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, com.redhat.component=openstack-cinder-api-container, description=Red Hat OpenStack Platform 17.1 cinder-api, build-date=2025-07-21T15:58:55, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_api, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, tcib_managed=true, name=rhosp17/openstack-cinder-api, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:20:07 standalone.localdomain podman[204437]: 2025-10-13 14:20:07.139751777 +0000 UTC m=+0.392113341 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, name=rhosp17/openstack-keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.component=openstack-keystone-container, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:18, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, 
config_id=tripleo_step3, managed_by=tripleo_ansible, version=17.1.9, container_name=keystone, description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, distribution-scope=public)
Oct 13 14:20:07 standalone.localdomain podman[204503]: 2025-10-13 14:20:07.093582271 +0000 UTC m=+0.311636717 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, release=1, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, architecture=x86_64, vendor=Red Hat, Inc.)
Oct 13 14:20:07 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:20:07 standalone.localdomain podman[204451]: 2025-10-13 14:20:07.153511185 +0000 UTC m=+0.393814704 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:43, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-memcached-container, io.buildah.version=1.33.12, tcib_managed=true, release=1, managed_by=tripleo_ansible, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, 
vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, container_name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:20:07 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:20:07 standalone.localdomain podman[204503]: 2025-10-13 14:20:07.17487577 +0000 UTC m=+0.392930196 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-cron, io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, container_name=logrotate_crond, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step4)
Oct 13 14:20:07 standalone.localdomain podman[204451]: 2025-10-13 14:20:07.175371195 +0000 UTC m=+0.415674714 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:43, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_id=tripleo_step1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-memcached-container, description=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vendor=Red Hat, Inc., tcib_managed=true)
Oct 13 14:20:07 standalone.localdomain podman[204483]: 2025-10-13 14:20:07.184737586 +0000 UTC m=+0.413948509 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, name=rhosp17/openstack-horizon, summary=Red Hat OpenStack Platform 17.1 horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 horizon, build-date=2025-07-21T13:58:15, com.redhat.component=openstack-horizon-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, container_name=horizon, io.openshift.expose-services=, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, architecture=x86_64, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, version=17.1.9, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:20:07 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:20:07 standalone.localdomain podman[204461]: 2025-10-13 14:20:07.195943945 +0000 UTC m=+0.423148157 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, summary=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, build-date=2025-07-21T15:58:55, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, name=rhosp17/openstack-cinder-api, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-cinder-api-container, distribution-scope=public, version=17.1.9)
Oct 13 14:20:07 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:20:07 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:20:07 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:20:07 standalone.localdomain podman[204470]: 2025-10-13 14:20:07.261754493 +0000 UTC m=+0.497720387 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, description=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, build-date=2025-07-21T16:06:43, container_name=manila_api_cron, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-manila-api, version=17.1.9, distribution-scope=public, batch=17.1_20250721.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, release=1, io.openshift.expose-services=, com.redhat.component=openstack-manila-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1)
Oct 13 14:20:07 standalone.localdomain podman[204470]: 2025-10-13 14:20:07.265994334 +0000 UTC m=+0.501960218 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, architecture=x86_64, container_name=manila_api_cron, summary=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, build-date=2025-07-21T16:06:43, com.redhat.component=openstack-manila-api-container, name=rhosp17/openstack-manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, 
tcib_managed=true, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public)
Oct 13 14:20:07 standalone.localdomain podman[204493]: 2025-10-13 14:20:07.275561863 +0000 UTC m=+0.498706468 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, build-date=2025-07-21T15:44:17, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, version=17.1.9, container_name=nova_conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, 
release=1, tcib_managed=true, com.redhat.component=openstack-nova-conductor-container, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:20:07 standalone.localdomain runuser[204767]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:20:07 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:20:07 standalone.localdomain podman[204746]: 2025-10-13 14:20:07.305270776 +0000 UTC m=+0.062044661 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, build-date=2025-07-21T16:13:39, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, name=rhosp17/openstack-cinder-volume, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cinder-volume, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-cinder-volume-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, summary=Red Hat OpenStack Platform 17.1 cinder-volume)
Oct 13 14:20:07 standalone.localdomain podman[204493]: 2025-10-13 14:20:07.327013403 +0000 UTC m=+0.550158008 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-conductor-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, tcib_managed=true, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 nova-conductor, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-conductor, 
build-date=2025-07-21T15:44:17, config_id=tripleo_step4, container_name=nova_conductor, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:20:07 standalone.localdomain podman[204746]: 2025-10-13 14:20:07.33784925 +0000 UTC m=+0.094623125 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, description=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, build-date=2025-07-21T16:13:39, release=1, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cinder-volume, com.redhat.component=openstack-cinder-volume-container, architecture=x86_64, io.openshift.expose-services=, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-cinder-volume, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume)
Oct 13 14:20:07 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:20:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1368: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:07 standalone.localdomain runuser[204767]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:20:08 standalone.localdomain runuser[204871]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:20:08 standalone.localdomain runuser[204871]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:20:08 standalone.localdomain runuser[204933]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:20:08 standalone.localdomain ceph-mon[29756]: pgmap v1368: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:09 standalone.localdomain runuser[204933]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:20:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1369: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:20:10 standalone.localdomain ceph-mon[29756]: pgmap v1369: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1370: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:20:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:20:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:20:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:20:12 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:20:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:20:12 standalone.localdomain recover_tripleo_nova_virtqemud[205253]: 93291
Oct 13 14:20:12 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:20:12 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:20:13 standalone.localdomain ceph-mon[29756]: pgmap v1370: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:13 standalone.localdomain podman[205231]: 2025-10-13 14:20:13.277663638 +0000 UTC m=+0.524553122 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-nova-novncproxy, io.buildah.version=1.33.12, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, container_name=nova_vnc_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, build-date=2025-07-21T15:24:10, 
com.redhat.component=openstack-nova-novncproxy-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.openshift.expose-services=)
Oct 13 14:20:13 standalone.localdomain podman[205228]: 2025-10-13 14:20:13.350472793 +0000 UTC m=+0.603963872 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, config_id=tripleo_step4, container_name=swift_object_server, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, name=rhosp17/openstack-swift-object, vendor=Red 
Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true)
Oct 13 14:20:13 standalone.localdomain podman[205229]: 2025-10-13 14:20:13.168739538 +0000 UTC m=+0.420582446 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, version=17.1.9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, summary=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, architecture=x86_64, config_id=tripleo_step4, 
managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git)
Oct 13 14:20:13 standalone.localdomain podman[205242]: 2025-10-13 14:20:13.1006321 +0000 UTC m=+0.342788396 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, release=1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target)
Oct 13 14:20:13 standalone.localdomain podman[205231]: 2025-10-13 14:20:13.757393884 +0000 UTC m=+1.004283388 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-novncproxy-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:24:10, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_id=tripleo_step4, name=rhosp17/openstack-nova-novncproxy, batch=17.1_20250721.1, container_name=nova_vnc_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, tcib_managed=true, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git)
Oct 13 14:20:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1371: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:13 standalone.localdomain podman[205228]: 2025-10-13 14:20:13.848613042 +0000 UTC m=+1.102104081 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 swift-object, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:20:13 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:20:13 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:20:13 standalone.localdomain podman[205229]: 2025-10-13 14:20:13.934109532 +0000 UTC m=+1.185952440 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., container_name=swift_container_server, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack 
TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 14:20:13 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:20:13 standalone.localdomain podman[205242]: 2025-10-13 14:20:13.986752119 +0000 UTC m=+1.228908545 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, 
container_name=nova_migration_target, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:20:14 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:20:14 standalone.localdomain podman[205230]: 2025-10-13 14:20:14.006730562 +0000 UTC m=+1.254463642 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, 
com.redhat.component=openstack-swift-account-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, name=rhosp17/openstack-swift-account, io.openshift.expose-services=, container_name=swift_account_server, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, build-date=2025-07-21T16:11:22)
Oct 13 14:20:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:20:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:20:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:20:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:20:14 standalone.localdomain ceph-mon[29756]: pgmap v1371: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:20:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:20:14 standalone.localdomain podman[205423]: 2025-10-13 14:20:14.2583476 +0000 UTC m=+0.138949384 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, vcs-type=git, container_name=swift_proxy, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 14:20:14 standalone.localdomain podman[205230]: 2025-10-13 14:20:14.285851316 +0000 UTC m=+1.533584406 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, build-date=2025-07-21T16:11:22, distribution-scope=public, architecture=x86_64, container_name=swift_account_server, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-account, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container)
Oct 13 14:20:14 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:20:14 standalone.localdomain podman[205422]: 2025-10-13 14:20:14.306987314 +0000 UTC m=+0.186923867 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:58:12, summary=Red Hat OpenStack Platform 17.1 placement-api, config_id=tripleo_step4, io.openshift.expose-services=, release=1, com.redhat.component=openstack-placement-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, 
name=rhosp17/openstack-placement-api, architecture=x86_64, vcs-type=git, container_name=placement_api, io.buildah.version=1.33.12, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, description=Red Hat OpenStack Platform 17.1 placement-api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:20:14 standalone.localdomain podman[205421]: 2025-10-13 14:20:14.209832931 +0000 UTC m=+0.089645141 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, summary=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, release=1, architecture=x86_64, config_id=tripleo_step4, container_name=glance_api_cron, io.openshift.expose-services=, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, name=rhosp17/openstack-glance-api, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 
17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Oct 13 14:20:14 standalone.localdomain podman[205422]: 2025-10-13 14:20:14.348302919 +0000 UTC m=+0.228239462 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, config_id=tripleo_step4, name=rhosp17/openstack-placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 placement-api, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-placement-api-container, container_name=placement_api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, release=1, build-date=2025-07-21T13:58:12, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, 
summary=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 14:20:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:20:14 standalone.localdomain podman[205421]: 2025-10-13 14:20:14.363401319 +0000 UTC m=+0.243213529 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, com.redhat.component=openstack-glance-api-container, tcib_managed=true, container_name=glance_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=)
Oct 13 14:20:14 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:20:14 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:20:14 standalone.localdomain podman[205527]: 2025-10-13 14:20:14.451086817 +0000 UTC m=+0.079135373 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, name=rhosp17/openstack-glance-api, maintainer=OpenStack TripleO Team, container_name=glance_api, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64)
Oct 13 14:20:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:20:14 standalone.localdomain podman[205423]: 2025-10-13 14:20:14.504631113 +0000 UTC m=+0.385232877 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, name=rhosp17/openstack-swift-proxy-server, container_name=swift_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, build-date=2025-07-21T14:48:37, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, 
com.redhat.component=openstack-swift-proxy-server-container, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 14:20:14 standalone.localdomain podman[205424]: 2025-10-13 14:20:14.502555099 +0000 UTC m=+0.380706696 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, 
vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:20:14 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:20:14 standalone.localdomain podman[205424]: 2025-10-13 14:20:14.603878541 +0000 UTC m=+0.482030138 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, container_name=ovn_metadata_agent, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Oct 13 14:20:14 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:20:14 standalone.localdomain podman[205557]: 2025-10-13 14:20:14.691993712 +0000 UTC m=+0.213115282 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ovn_controller, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, release=1, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245)
Oct 13 14:20:14 standalone.localdomain podman[205557]: 2025-10-13 14:20:14.709730955 +0000 UTC m=+0.230852535 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, build-date=2025-07-21T13:28:44, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ovn-controller, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245)
Oct 13 14:20:14 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:20:14 standalone.localdomain podman[205527]: 2025-10-13 14:20:14.76069302 +0000 UTC m=+0.388741626 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, batch=17.1_20250721.1, container_name=glance_api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, com.redhat.component=openstack-glance-api-container, distribution-scope=public, build-date=2025-07-21T13:58:20, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:20:14 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:20:14 standalone.localdomain podman[205474]: 2025-10-13 14:20:14.614405959 +0000 UTC m=+0.376655141 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=glance_api_internal, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vendor=Red Hat, Inc.)
Oct 13 14:20:14 standalone.localdomain podman[205474]: 2025-10-13 14:20:14.829046506 +0000 UTC m=+0.591295668 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release=1, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_internal, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, vcs-type=git)
Oct 13 14:20:14 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:20:14 standalone.localdomain podman[205473]: 2025-10-13 14:20:14.5688361 +0000 UTC m=+0.335124987 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T16:05:11, maintainer=OpenStack TripleO Team, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_metadata, version=17.1.9, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-api, com.redhat.component=openstack-nova-api-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:20:14 standalone.localdomain podman[205473]: 2025-10-13 14:20:14.90405734 +0000 UTC m=+0.670346227 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, tcib_managed=true, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, release=1, build-date=2025-07-21T16:05:11, name=rhosp17/openstack-nova-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_metadata, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, 
managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-api-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:20:14 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:20:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:20:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1372: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:16 standalone.localdomain ceph-mon[29756]: pgmap v1372: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:20:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1372679479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:20:17 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1372679479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:20:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1373: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:18 standalone.localdomain haproxy[70940]: 172.17.0.2:59560 [13/Oct/2025:14:20:18.150] placement placement/standalone.internalapi.localdomain 0/0/1/23/24 200 497 - - ---- 55/1/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/allocations HTTP/1.1"
Oct 13 14:20:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:20:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2602579018' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:20:18 standalone.localdomain ceph-mon[29756]: pgmap v1373: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2602579018' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:20:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1374: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:19 standalone.localdomain haproxy[70940]: 172.17.0.2:52276 [13/Oct/2025:14:20:19.718] neutron neutron/standalone.internalapi.localdomain 0/0/0/136/136 200 254 - - ---- 56/1/0/0/0 0/0 "GET /v2.0/ports?device_id=54a46fec-332e-42f9-83ed-88e763d13f63&fields=binding%3Ahost_id&fields=binding%3Avif_type HTTP/1.1"
Oct 13 14:20:19 standalone.localdomain runuser[205712]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:20:19 standalone.localdomain haproxy[70940]: 172.17.0.2:52276 [13/Oct/2025:14:20:19.919] neutron neutron/standalone.internalapi.localdomain 0/0/0/58/58 200 1332 - - ---- 56/1/0/0/0 0/0 "GET /v2.0/ports?tenant_id=e44641a80bcb466cb3dd688e48b72d8e&device_id=54a46fec-332e-42f9-83ed-88e763d13f63 HTTP/1.1"
Oct 13 14:20:20 standalone.localdomain haproxy[70940]: 172.17.0.2:52276 [13/Oct/2025:14:20:19.991] neutron neutron/standalone.internalapi.localdomain 0/0/0/87/87 200 901 - - ---- 56/1/0/0/0 0/0 "GET /v2.0/networks?id=0c455abd-28d4-47e7-a254-e50de0526def HTTP/1.1"
Oct 13 14:20:20 standalone.localdomain haproxy[70940]: 172.17.0.2:52276 [13/Oct/2025:14:20:20.080] neutron neutron/standalone.internalapi.localdomain 0/0/0/46/46 200 1062 - - ---- 56/1/0/0/0 0/0 "GET /v2.0/floatingips?fixed_ip_address=192.168.0.238&port_id=8a49767c-fb09-4185-95da-4261d8043fad HTTP/1.1"
Oct 13 14:20:20 standalone.localdomain haproxy[70940]: 172.17.0.2:52276 [13/Oct/2025:14:20:20.129] neutron neutron/standalone.internalapi.localdomain 0/0/0/29/29 200 810 - - ---- 56/1/0/0/0 0/0 "GET /v2.0/subnets?id=e1cc60b1-626d-46b0-8458-47640298fa14 HTTP/1.1"
Oct 13 14:20:20 standalone.localdomain haproxy[70940]: 172.17.0.2:52276 [13/Oct/2025:14:20:20.162] neutron neutron/standalone.internalapi.localdomain 0/0/0/43/43 200 1352 - - ---- 56/1/0/0/0 0/0 "GET /v2.0/ports?network_id=0c455abd-28d4-47e7-a254-e50de0526def&device_owner=network%3Adhcp HTTP/1.1"
Oct 13 14:20:20 standalone.localdomain haproxy[70940]: 172.17.0.2:52276 [13/Oct/2025:14:20:20.209] neutron neutron/standalone.internalapi.localdomain 0/0/0/62/62 200 187 - - ---- 56/1/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=segments HTTP/1.1"
Oct 13 14:20:20 standalone.localdomain haproxy[70940]: 172.17.0.2:52276 [13/Oct/2025:14:20:20.274] neutron neutron/standalone.internalapi.localdomain 0/0/0/68/68 200 252 - - ---- 56/1/0/0/0 0/0 "GET /v2.0/networks/0c455abd-28d4-47e7-a254-e50de0526def?fields=provider%3Aphysical_network&fields=provider%3Anetwork_type HTTP/1.1"
Oct 13 14:20:20 standalone.localdomain runuser[205712]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:20:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:20:20 standalone.localdomain runuser[205854]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:20:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:20:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:20:20 standalone.localdomain podman[205881]: 2025-10-13 14:20:20.798377572 +0000 UTC m=+0.063300730 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, release=1, name=rhosp17/openstack-nova-api, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., build-date=2025-07-21T16:05:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-api-container, 
vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team)
Oct 13 14:20:20 standalone.localdomain systemd[1]: tmp-crun.gPkcjQ.mount: Deactivated successfully.
Oct 13 14:20:20 standalone.localdomain podman[205878]: 2025-10-13 14:20:20.848852093 +0000 UTC m=+0.109946252 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, vendor=Red Hat, Inc., container_name=keystone_cron, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, name=rhosp17/openstack-keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 
keystone, distribution-scope=public, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, com.redhat.component=openstack-keystone-container, build-date=2025-07-21T13:27:18, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, release=1)
Oct 13 14:20:20 standalone.localdomain ceph-mon[29756]: pgmap v1374: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:20 standalone.localdomain podman[205878]: 2025-10-13 14:20:20.861807866 +0000 UTC m=+0.122902025 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, container_name=keystone_cron, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, name=rhosp17/openstack-keystone, vendor=Red Hat, Inc., release=1, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, version=17.1.9, build-date=2025-07-21T13:27:18, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, vcs-type=git)
Oct 13 14:20:20 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:20:20 standalone.localdomain podman[205881]: 2025-10-13 14:20:20.952073095 +0000 UTC m=+0.216996323 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, architecture=x86_64, build-date=2025-07-21T16:05:11, container_name=nova_api, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-api, 
com.redhat.component=openstack-nova-api-container, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, managed_by=tripleo_ansible, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1)
Oct 13 14:20:20 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:20:21 standalone.localdomain runuser[205854]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:20:21 standalone.localdomain runuser[205961]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:20:21 standalone.localdomain sshd[206049]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:20:21 standalone.localdomain sshd[206049]: Accepted publickey for root from 192.168.122.11 port 37856 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:20:21 standalone.localdomain systemd-logind[45629]: New session 96 of user root.
Oct 13 14:20:21 standalone.localdomain systemd[1]: Started Session 96 of User root.
Oct 13 14:20:21 standalone.localdomain sshd[206049]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:20:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1375: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:22 standalone.localdomain runuser[205961]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:20:22 standalone.localdomain ceph-mon[29756]: pgmap v1375: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:23 standalone.localdomain podman[206352]: 
Oct 13 14:20:23 standalone.localdomain podman[206352]: 2025-10-13 14:20:23.124402763 +0000 UTC m=+0.059997007 container create 057857bd9ea14514e81d6eb93421ebcdd39b6b9ceddd5622af211996958af33c (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:c25fa4419cdc17fed403399149562b7a8aa53b14f4082888ea13258ff48ac8e8, name=vigorous_saha, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, RELEASE=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:20:23
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr', 'vms', 'images', 'backups', 'volumes', 'manila_metadata', 'manila_data']
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:20:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:20:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:20:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:20:23 standalone.localdomain systemd[1]: Started libpod-conmon-057857bd9ea14514e81d6eb93421ebcdd39b6b9ceddd5622af211996958af33c.scope.
Oct 13 14:20:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:20:23 standalone.localdomain podman[206352]: 2025-10-13 14:20:23.091304013 +0000 UTC m=+0.026898267 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:c25fa4419cdc17fed403399149562b7a8aa53b14f4082888ea13258ff48ac8e8
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:20:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:20:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:20:23 standalone.localdomain podman[206352]: 2025-10-13 14:20:23.22425168 +0000 UTC m=+0.159845914 container init 057857bd9ea14514e81d6eb93421ebcdd39b6b9ceddd5622af211996958af33c (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:c25fa4419cdc17fed403399149562b7a8aa53b14f4082888ea13258ff48ac8e8, name=vigorous_saha, build-date=2025-09-24T08:57:55, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, release=553, vcs-type=git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc.)
Oct 13 14:20:23 standalone.localdomain vigorous_saha[206386]: 167 167
Oct 13 14:20:23 standalone.localdomain systemd[1]: libpod-057857bd9ea14514e81d6eb93421ebcdd39b6b9ceddd5622af211996958af33c.scope: Deactivated successfully.
Oct 13 14:20:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:20:23 standalone.localdomain podman[206352]: 2025-10-13 14:20:23.287409954 +0000 UTC m=+0.223004198 container start 057857bd9ea14514e81d6eb93421ebcdd39b6b9ceddd5622af211996958af33c (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:c25fa4419cdc17fed403399149562b7a8aa53b14f4082888ea13258ff48ac8e8, name=vigorous_saha, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.buildah.version=1.33.12, version=7, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 14:20:23 standalone.localdomain podman[206352]: 2025-10-13 14:20:23.288158598 +0000 UTC m=+0.223752872 container attach 057857bd9ea14514e81d6eb93421ebcdd39b6b9ceddd5622af211996958af33c (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:c25fa4419cdc17fed403399149562b7a8aa53b14f4082888ea13258ff48ac8e8, name=vigorous_saha, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, version=7, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, release=553, architecture=x86_64, ceph=True, io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, name=rhceph, GIT_CLEAN=True, vendor=Red Hat, Inc.)
Oct 13 14:20:23 standalone.localdomain podman[206352]: 2025-10-13 14:20:23.290031646 +0000 UTC m=+0.225625900 container died 057857bd9ea14514e81d6eb93421ebcdd39b6b9ceddd5622af211996958af33c (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:c25fa4419cdc17fed403399149562b7a8aa53b14f4082888ea13258ff48ac8e8, name=vigorous_saha, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, release=553, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:20:23 standalone.localdomain podman[206428]: 2025-10-13 14:20:23.338417322 +0000 UTC m=+0.085166281 container remove 057857bd9ea14514e81d6eb93421ebcdd39b6b9ceddd5622af211996958af33c (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:c25fa4419cdc17fed403399149562b7a8aa53b14f4082888ea13258ff48ac8e8, name=vigorous_saha, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, name=rhceph, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.tags=rhceph ceph)
Oct 13 14:20:23 standalone.localdomain systemd[1]: libpod-conmon-057857bd9ea14514e81d6eb93421ebcdd39b6b9ceddd5622af211996958af33c.scope: Deactivated successfully.
Oct 13 14:20:23 standalone.localdomain podman[206368]: 2025-10-13 14:20:23.366095283 +0000 UTC m=+0.200386616 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, name=rhosp17/openstack-neutron-sriov-agent, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, com.redhat.component=openstack-neutron-sriov-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T16:03:34, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, release=1, container_name=neutron_sriov_agent, vcs-type=git, batch=17.1_20250721.1)
Oct 13 14:20:23 standalone.localdomain podman[206407]: 2025-10-13 14:20:23.29272427 +0000 UTC m=+0.076278844 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, build-date=2025-07-21T15:22:44, com.redhat.component=openstack-barbican-api-container, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 barbican-api, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vcs-type=git, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, tcib_managed=true, container_name=barbican_api, name=rhosp17/openstack-barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:20:23 standalone.localdomain podman[206367]: 2025-10-13 14:20:23.240630489 +0000 UTC m=+0.076199761 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vendor=Red Hat, Inc., container_name=neutron_dhcp, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:54, distribution-scope=public, tcib_managed=true)
Oct 13 14:20:23 standalone.localdomain podman[206447]: 2025-10-13 14:20:23.39523529 +0000 UTC m=+0.124549376 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, release=1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, com.redhat.component=openstack-barbican-keystone-listener-container, container_name=barbican_keystone_listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-type=git, tcib_managed=true, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step3, build-date=2025-07-21T16:18:19, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:20:23 standalone.localdomain podman[206368]: 2025-10-13 14:20:23.452124039 +0000 UTC m=+0.286415332 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T16:03:34, distribution-scope=public, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-sriov-agent, tcib_managed=true, vcs-type=git, container_name=neutron_sriov_agent)
Oct 13 14:20:23 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:20:23 standalone.localdomain podman[206385]: 2025-10-13 14:20:23.268567459 +0000 UTC m=+0.089733703 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, tcib_managed=true, version=17.1.9, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.component=openstack-barbican-worker-container, name=rhosp17/openstack-barbican-worker, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, release=1, summary=Red Hat OpenStack Platform 17.1 barbican-worker, container_name=barbican_worker, 
io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, build-date=2025-07-21T15:36:22)
Oct 13 14:20:23 standalone.localdomain podman[206407]: 2025-10-13 14:20:23.475817607 +0000 UTC m=+0.259372271 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:22:44, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-barbican-api, config_id=tripleo_step3, io.buildah.version=1.33.12, container_name=barbican_api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, 
com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public, com.redhat.component=openstack-barbican-api-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, release=1, summary=Red Hat OpenStack Platform 17.1 barbican-api, description=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:20:23 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:20:23 standalone.localdomain podman[206369]: 2025-10-13 14:20:23.445503004 +0000 UTC m=+0.276443863 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, summary=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-api, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=nova_api_cron, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public)
Oct 13 14:20:23 standalone.localdomain podman[206385]: 2025-10-13 14:20:23.504825339 +0000 UTC m=+0.325991593 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, release=1, distribution-scope=public, config_id=tripleo_step3, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 barbican-worker, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_worker, 
io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-barbican-worker, architecture=x86_64, build-date=2025-07-21T15:36:22, batch=17.1_20250721.1, com.redhat.component=openstack-barbican-worker-container, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker)
Oct 13 14:20:23 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:20:23 standalone.localdomain podman[206367]: 2025-10-13 14:20:23.527445163 +0000 UTC m=+0.363014425 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-neutron-dhcp-agent-container, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, release=1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, container_name=neutron_dhcp, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, name=rhosp17/openstack-neutron-dhcp-agent, build-date=2025-07-21T16:28:54, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:20:23 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:20:23 standalone.localdomain podman[206447]: 2025-10-13 14:20:23.578517432 +0000 UTC m=+0.307831538 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, release=1, batch=17.1_20250721.1, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, build-date=2025-07-21T16:18:19, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, com.redhat.component=openstack-barbican-keystone-listener-container, container_name=barbican_keystone_listener, config_id=tripleo_step3, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:20:23 standalone.localdomain podman[206369]: 2025-10-13 14:20:23.579245115 +0000 UTC m=+0.410185974 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, name=rhosp17/openstack-nova-api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:20:23 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:20:23 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:20:23 standalone.localdomain podman[206528]: 2025-10-13 14:20:23.687283826 +0000 UTC m=+0.045601850 container create e0654c30a31f193d13d7a6bf602b048b141c6dc9c08cc375e1982165f5c51cf9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:c25fa4419cdc17fed403399149562b7a8aa53b14f4082888ea13258ff48ac8e8, name=bold_herschel, version=7, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=)
Oct 13 14:20:23 standalone.localdomain systemd[1]: Started libpod-conmon-e0654c30a31f193d13d7a6bf602b048b141c6dc9c08cc375e1982165f5c51cf9.scope.
Oct 13 14:20:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:20:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91223b46daa0ab86de1ea72ea53f2207c263ecdfee785699a104287df09ffb84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 13 14:20:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91223b46daa0ab86de1ea72ea53f2207c263ecdfee785699a104287df09ffb84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 14:20:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91223b46daa0ab86de1ea72ea53f2207c263ecdfee785699a104287df09ffb84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:20:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91223b46daa0ab86de1ea72ea53f2207c263ecdfee785699a104287df09ffb84/merged/etc/ceph/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 13 14:20:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91223b46daa0ab86de1ea72ea53f2207c263ecdfee785699a104287df09ffb84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 14:20:23 standalone.localdomain podman[206528]: 2025-10-13 14:20:23.745712054 +0000 UTC m=+0.104030088 container init e0654c30a31f193d13d7a6bf602b048b141c6dc9c08cc375e1982165f5c51cf9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:c25fa4419cdc17fed403399149562b7a8aa53b14f4082888ea13258ff48ac8e8, name=bold_herschel, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, name=rhceph, RELEASE=main, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True)
Oct 13 14:20:23 standalone.localdomain podman[206528]: 2025-10-13 14:20:23.752816425 +0000 UTC m=+0.111134439 container start e0654c30a31f193d13d7a6bf602b048b141c6dc9c08cc375e1982165f5c51cf9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:c25fa4419cdc17fed403399149562b7a8aa53b14f4082888ea13258ff48ac8e8, name=bold_herschel, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, build-date=2025-09-24T08:57:55, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, release=553, io.buildah.version=1.33.12)
Oct 13 14:20:23 standalone.localdomain podman[206528]: 2025-10-13 14:20:23.753017622 +0000 UTC m=+0.111335666 container attach e0654c30a31f193d13d7a6bf602b048b141c6dc9c08cc375e1982165f5c51cf9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:c25fa4419cdc17fed403399149562b7a8aa53b14f4082888ea13258ff48ac8e8, name=bold_herschel, RELEASE=main, version=7, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, name=rhceph, vcs-type=git, GIT_BRANCH=main, ceph=True, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Oct 13 14:20:23 standalone.localdomain podman[206528]: 2025-10-13 14:20:23.669840363 +0000 UTC m=+0.028158397 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:c25fa4419cdc17fed403399149562b7a8aa53b14f4082888ea13258ff48ac8e8
Oct 13 14:20:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1376: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-31da6a28fefee46b699c13fcfecb35e0949e938140788dc7dd5106d38ac51f07-merged.mount: Deactivated successfully.
Oct 13 14:20:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.openstack", "caps": ["mgr", "allow *", "mon", "allow r, profile rbd", "osd", "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images, profile rbd pool=backups, allow rw pool manila_data"]} v 0)
Oct 13 14:20:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/93770960' entity='client.admin' cmd={"prefix": "auth caps", "entity": "client.openstack", "caps": ["mgr", "allow *", "mon", "allow r, profile rbd", "osd", "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images, profile rbd pool=backups, allow rw pool manila_data"]} : dispatch
Oct 13 14:20:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='client.? 172.18.0.100:0/93770960' entity='client.admin' cmd='[{"prefix": "auth caps", "entity": "client.openstack", "caps": ["mgr", "allow *", "mon", "allow r, profile rbd", "osd", "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images, profile rbd pool=backups, allow rw pool manila_data"]}]': finished
Oct 13 14:20:24 standalone.localdomain bold_herschel[206550]: updated caps for client.openstack
Oct 13 14:20:24 standalone.localdomain systemd[1]: libpod-e0654c30a31f193d13d7a6bf602b048b141c6dc9c08cc375e1982165f5c51cf9.scope: Deactivated successfully.
Oct 13 14:20:24 standalone.localdomain podman[206528]: 2025-10-13 14:20:24.211204607 +0000 UTC m=+0.569522651 container died e0654c30a31f193d13d7a6bf602b048b141c6dc9c08cc375e1982165f5c51cf9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:c25fa4419cdc17fed403399149562b7a8aa53b14f4082888ea13258ff48ac8e8, name=bold_herschel, RELEASE=main, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.component=rhceph-container, architecture=x86_64, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 14:20:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-91223b46daa0ab86de1ea72ea53f2207c263ecdfee785699a104287df09ffb84-merged.mount: Deactivated successfully.
Oct 13 14:20:24 standalone.localdomain podman[206576]: 2025-10-13 14:20:24.282915488 +0000 UTC m=+0.065046404 container remove e0654c30a31f193d13d7a6bf602b048b141c6dc9c08cc375e1982165f5c51cf9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9@sha256:c25fa4419cdc17fed403399149562b7a8aa53b14f4082888ea13258ff48ac8e8, name=bold_herschel, vcs-type=git, distribution-scope=public, version=7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=553, vendor=Red Hat, Inc., io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 14:20:24 standalone.localdomain systemd[1]: libpod-conmon-e0654c30a31f193d13d7a6bf602b048b141c6dc9c08cc375e1982165f5c51cf9.scope: Deactivated successfully.
Oct 13 14:20:24 standalone.localdomain sshd[206060]: Received disconnect from 192.168.122.11 port 37856:11: disconnected by user
Oct 13 14:20:24 standalone.localdomain sshd[206060]: Disconnected from user root 192.168.122.11 port 37856
Oct 13 14:20:24 standalone.localdomain sshd[206049]: pam_unix(sshd:session): session closed for user root
Oct 13 14:20:24 standalone.localdomain systemd-logind[45629]: Session 96 logged out. Waiting for processes to exit.
Oct 13 14:20:24 standalone.localdomain systemd[1]: session-96.scope: Deactivated successfully.
Oct 13 14:20:24 standalone.localdomain systemd[1]: session-96.scope: Consumed 1.907s CPU time.
Oct 13 14:20:24 standalone.localdomain systemd-logind[45629]: Removed session 96.
Oct 13 14:20:24 standalone.localdomain sshd[206592]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:20:24 standalone.localdomain sshd[206592]: Accepted publickey for root from 192.168.122.11 port 37862 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:20:24 standalone.localdomain systemd-logind[45629]: New session 97 of user root.
Oct 13 14:20:24 standalone.localdomain systemd[1]: Started Session 97 of User root.
Oct 13 14:20:24 standalone.localdomain sshd[206592]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:20:24 standalone.localdomain sshd[206603]: Received disconnect from 192.168.122.11 port 37862:11: disconnected by user
Oct 13 14:20:24 standalone.localdomain sshd[206603]: Disconnected from user root 192.168.122.11 port 37862
Oct 13 14:20:24 standalone.localdomain sshd[206592]: pam_unix(sshd:session): session closed for user root
Oct 13 14:20:24 standalone.localdomain systemd[1]: session-97.scope: Deactivated successfully.
Oct 13 14:20:24 standalone.localdomain systemd-logind[45629]: Session 97 logged out. Waiting for processes to exit.
Oct 13 14:20:24 standalone.localdomain systemd-logind[45629]: Removed session 97.
Oct 13 14:20:24 standalone.localdomain sshd[206619]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:20:24 standalone.localdomain sshd[206619]: Accepted publickey for root from 192.168.122.11 port 37876 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:20:24 standalone.localdomain systemd-logind[45629]: New session 98 of user root.
Oct 13 14:20:24 standalone.localdomain systemd[1]: Started Session 98 of User root.
Oct 13 14:20:24 standalone.localdomain ceph-mon[29756]: pgmap v1376: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:24 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/93770960' entity='client.admin' cmd={"prefix": "auth caps", "entity": "client.openstack", "caps": ["mgr", "allow *", "mon", "allow r, profile rbd", "osd", "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images, profile rbd pool=backups, allow rw pool manila_data"]} : dispatch
Oct 13 14:20:24 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/93770960' entity='client.admin' cmd='[{"prefix": "auth caps", "entity": "client.openstack", "caps": ["mgr", "allow *", "mon", "allow r, profile rbd", "osd", "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images, profile rbd pool=backups, allow rw pool manila_data"]}]': finished
Oct 13 14:20:24 standalone.localdomain sshd[206619]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:20:25 standalone.localdomain sshd[206622]: Received disconnect from 192.168.122.11 port 37876:11: disconnected by user
Oct 13 14:20:25 standalone.localdomain sshd[206622]: Disconnected from user root 192.168.122.11 port 37876
Oct 13 14:20:25 standalone.localdomain sshd[206619]: pam_unix(sshd:session): session closed for user root
Oct 13 14:20:25 standalone.localdomain systemd[1]: session-98.scope: Deactivated successfully.
Oct 13 14:20:25 standalone.localdomain systemd-logind[45629]: Session 98 logged out. Waiting for processes to exit.
Oct 13 14:20:25 standalone.localdomain systemd-logind[45629]: Removed session 98.
Oct 13 14:20:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:20:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1377: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:26 standalone.localdomain ceph-mon[29756]: pgmap v1377: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:20:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:20:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:20:26 standalone.localdomain podman[206655]: 2025-10-13 14:20:26.790629852 +0000 UTC m=+0.054057013 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, container_name=ovn_cluster_northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, name=rhosp17/openstack-ovn-northd, version=17.1.9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T13:30:04, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=ovn_cluster_northd, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-ovn-northd-container)
Oct 13 14:20:26 standalone.localdomain podman[206655]: 2025-10-13 14:20:26.797036461 +0000 UTC m=+0.060463622 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, architecture=x86_64, com.redhat.component=openstack-ovn-northd-container, config_id=ovn_cluster_northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, container_name=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, build-date=2025-07-21T13:30:04, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:20:26 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:20:26 standalone.localdomain podman[206654]: 2025-10-13 14:20:26.849370739 +0000 UTC m=+0.112822241 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=nova_compute, tcib_managed=true, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_id=tripleo_step5, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, version=17.1.9, batch=17.1_20250721.1)
Oct 13 14:20:26 standalone.localdomain podman[206656]: 2025-10-13 14:20:26.941276049 +0000 UTC m=+0.198328232 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T12:58:45, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, container_name=clustercheck, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 14:20:26 standalone.localdomain podman[206654]: 2025-10-13 14:20:26.95288715 +0000 UTC m=+0.216338672 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, release=1)
Oct 13 14:20:26 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:20:26 standalone.localdomain podman[206656]: 2025-10-13 14:20:26.997959352 +0000 UTC m=+0.255011545 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, container_name=clustercheck, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-mariadb-container, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:20:27 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:20:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1378: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:20:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:20:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:20:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:20:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:20:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:20:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:20:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:20:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:20:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:20:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:20:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:20:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:20:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:20:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:20:28 standalone.localdomain ceph-mon[29756]: pgmap v1378: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1379: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:30 standalone.localdomain account-server[114563]: 172.20.0.100 - - [13/Oct/2025:14:20:30 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx11ec4f02bfb24aaaaee06-0068ed0aae" "proxy-server 2" 0.0018 "-" 21 -
Oct 13 14:20:30 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx11ec4f02bfb24aaaaee06-0068ed0aae)
Oct 13 14:20:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:20:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:20:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:20:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:20:30 standalone.localdomain podman[206822]: 2025-10-13 14:20:30.810505224 +0000 UTC m=+0.074086946 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-cinder-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, release=1, build-date=2025-07-21T15:58:55, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, summary=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-api, 
tcib_managed=true, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=cinder_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1)
Oct 13 14:20:30 standalone.localdomain podman[206813]: 2025-10-13 14:20:30.860937833 +0000 UTC m=+0.126168457 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, tcib_managed=true, version=17.1.9, container_name=iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, 
distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:20:30 standalone.localdomain podman[206813]: 2025-10-13 14:20:30.867867698 +0000 UTC m=+0.133098282 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, 
config_id=tripleo_step3, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1)
Oct 13 14:20:30 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:20:30 standalone.localdomain podman[206902]: 2025-10-13 14:20:30.90650315 +0000 UTC m=+0.059719019 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, build-date=2025-07-21T15:22:36, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-manila-share-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-manila-share, summary=Red Hat OpenStack Platform 17.1 manila-share, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Oct 13 14:20:30 standalone.localdomain ceph-mon[29756]: pgmap v1379: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:30 standalone.localdomain podman[206823]: 2025-10-13 14:20:30.830029011 +0000 UTC m=+0.082578051 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-cinder-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.component=openstack-cinder-scheduler-container, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, build-date=2025-07-21T16:10:12, config_id=tripleo_step4, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-type=git, container_name=cinder_scheduler, io.buildah.version=1.33.12)
Oct 13 14:20:30 standalone.localdomain podman[206823]: 2025-10-13 14:20:30.981529465 +0000 UTC m=+0.234078505 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, release=1, container_name=cinder_scheduler, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.component=openstack-cinder-scheduler-container, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-type=git, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', 
'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, build-date=2025-07-21T16:10:12, name=rhosp17/openstack-cinder-scheduler)
Oct 13 14:20:30 standalone.localdomain podman[206822]: 2025-10-13 14:20:30.996112948 +0000 UTC m=+0.259694680 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, distribution-scope=public, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=cinder_api_cron, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, name=rhosp17/openstack-cinder-api, build-date=2025-07-21T15:58:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-cinder-api-container, summary=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:20:30 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:20:31 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:20:31 standalone.localdomain podman[206902]: 2025-10-13 14:20:31.035149823 +0000 UTC m=+0.188365692 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, build-date=2025-07-21T15:22:36, summary=Red Hat OpenStack Platform 17.1 manila-share, description=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, vendor=Red Hat, Inc., com.redhat.component=openstack-manila-share-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, io.buildah.version=1.33.12, release=1, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-manila-share, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1)
Oct 13 14:20:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1380: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:32 standalone.localdomain runuser[207076]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:20:32 standalone.localdomain ceph-mon[29756]: pgmap v1380: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:33 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 14:20:33 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 14:20:33 standalone.localdomain runuser[207076]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:20:33 standalone.localdomain runuser[207200]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:20:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1381: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:34 standalone.localdomain runuser[207200]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:20:34 standalone.localdomain runuser[207262]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:20:34 standalone.localdomain runuser[207262]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:20:34 standalone.localdomain ceph-mon[29756]: pgmap v1381: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:20:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1382: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:36 standalone.localdomain ceph-mon[29756]: pgmap v1382: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:20:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:20:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:20:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:20:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:20:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:20:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:20:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:20:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:20:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:20:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:20:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:20:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:20:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:20:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1383: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:37 standalone.localdomain systemd[1]: tmp-crun.tMcWgA.mount: Deactivated successfully.
Oct 13 14:20:37 standalone.localdomain podman[207348]: 2025-10-13 14:20:37.901613492 +0000 UTC m=+0.145451607 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, name=rhosp17/openstack-nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-scheduler-container, release=1, container_name=nova_scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, build-date=2025-07-21T16:02:54, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12)
Oct 13 14:20:37 standalone.localdomain podman[207366]: 2025-10-13 14:20:37.915223775 +0000 UTC m=+0.136963772 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, description=Red Hat OpenStack Platform 17.1 manila-api, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, name=rhosp17/openstack-manila-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, container_name=manila_api_cron, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.component=openstack-manila-api-container)
Oct 13 14:20:37 standalone.localdomain podman[207412]: 2025-10-13 14:20:37.872923479 +0000 UTC m=+0.088119912 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, version=17.1.9, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, distribution-scope=public, container_name=logrotate_crond, summary=Red 
Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible)
Oct 13 14:20:37 standalone.localdomain podman[207346]: 2025-10-13 14:20:37.960653719 +0000 UTC m=+0.214692681 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, name=rhosp17/openstack-manila-scheduler, distribution-scope=public, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, build-date=2025-07-21T15:56:28, description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, 
container_name=manila_scheduler, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-manila-scheduler-container, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1)
Oct 13 14:20:37 standalone.localdomain podman[207348]: 2025-10-13 14:20:37.97128163 +0000 UTC m=+0.215119745 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-nova-scheduler-container, container_name=nova_scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T16:02:54, io.openshift.expose-services=, name=rhosp17/openstack-nova-scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, batch=17.1_20250721.1, config_id=tripleo_step4)
Oct 13 14:20:37 standalone.localdomain podman[207374]: 2025-10-13 14:20:37.977937466 +0000 UTC m=+0.210103768 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, build-date=2025-07-21T15:56:26, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1)
Oct 13 14:20:37 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:20:38 standalone.localdomain podman[207347]: 2025-10-13 14:20:38.023419662 +0000 UTC m=+0.261005471 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, container_name=heat_engine, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, build-date=2025-07-21T15:44:11, config_id=tripleo_step4, com.redhat.component=openstack-heat-engine-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, release=1)
Oct 13 14:20:38 standalone.localdomain podman[207371]: 2025-10-13 14:20:38.03492125 +0000 UTC m=+0.268634619 container health_status 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.buildah.version=1.33.12, com.redhat.component=openstack-horizon-container, name=rhosp17/openstack-horizon, summary=Red Hat OpenStack Platform 17.1 horizon, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, description=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, build-date=2025-07-21T13:58:15, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, container_name=horizon, architecture=x86_64, distribution-scope=public)
Oct 13 14:20:38 standalone.localdomain podman[207346]: 2025-10-13 14:20:38.061704343 +0000 UTC m=+0.315743335 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, config_id=tripleo_step4, com.redhat.component=openstack-manila-scheduler-container, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, container_name=manila_scheduler, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, build-date=2025-07-21T15:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-manila-scheduler)
Oct 13 14:20:38 standalone.localdomain podman[207381]: 2025-10-13 14:20:38.070180256 +0000 UTC m=+0.297012172 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, vendor=Red Hat, Inc., container_name=neutron_api, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T15:44:03, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, tcib_managed=true, com.redhat.component=openstack-neutron-server-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:20:38 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:20:38 standalone.localdomain podman[207367]: 2025-10-13 14:20:38.077825714 +0000 UTC m=+0.311042048 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api_cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, release=1, batch=17.1_20250721.1, name=rhosp17/openstack-heat-api)
Oct 13 14:20:38 standalone.localdomain podman[207367]: 2025-10-13 14:20:38.083958425 +0000 UTC m=+0.317174769 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, distribution-scope=public, version=17.1.9, build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-container, name=rhosp17/openstack-heat-api, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20250721.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:20:38 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:20:38 standalone.localdomain podman[207347]: 2025-10-13 14:20:38.097099304 +0000 UTC m=+0.334685103 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, com.redhat.component=openstack-heat-engine-container, container_name=heat_engine, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, build-date=2025-07-21T15:44:11, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-engine, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9)
Oct 13 14:20:38 standalone.localdomain podman[207412]: 2025-10-13 14:20:38.105194946 +0000 UTC m=+0.320391369 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, vcs-type=git, container_name=logrotate_crond, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, release=1, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c)
Oct 13 14:20:38 standalone.localdomain podman[207374]: 2025-10-13 14:20:38.111946226 +0000 UTC m=+0.344112518 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, build-date=2025-07-21T15:56:26, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, tcib_managed=true, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, container_name=heat_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api)
Oct 13 14:20:38 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:20:38 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:20:38 standalone.localdomain podman[207345]: 2025-10-13 14:20:38.119856492 +0000 UTC m=+0.374400690 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, architecture=x86_64, container_name=keystone, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step3, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:18, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:20:38 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:20:38 standalone.localdomain podman[207345]: 2025-10-13 14:20:38.143905481 +0000 UTC m=+0.398449709 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, config_id=tripleo_step3, vendor=Red Hat, Inc., version=17.1.9, release=1, com.redhat.component=openstack-keystone-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, build-date=2025-07-21T13:27:18, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, maintainer=OpenStack TripleO Team)
Oct 13 14:20:38 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:20:38 standalone.localdomain podman[207371]: 2025-10-13 14:20:38.166194764 +0000 UTC m=+0.399908153 container exec_died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, container_name=horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, description=Red Hat OpenStack Platform 17.1 horizon, io.openshift.expose-services=, com.redhat.component=openstack-horizon-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, version=17.1.9, build-date=2025-07-21T13:58:15, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-horizon, summary=Red Hat OpenStack Platform 17.1 horizon, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, vendor=Red Hat, Inc.)
Oct 13 14:20:38 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Deactivated successfully.
Oct 13 14:20:38 standalone.localdomain podman[207344]: 2025-10-13 14:20:38.182315615 +0000 UTC m=+0.436476160 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=heat_api_cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T14:49:55, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, name=rhosp17/openstack-heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-cfn-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 14:20:38 standalone.localdomain podman[207366]: 2025-10-13 14:20:38.202785353 +0000 UTC m=+0.424525330 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T16:06:43, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, container_name=manila_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-manila-api-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 manila-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 manila-api, release=1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, 
io.openshift.expose-services=, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, name=rhosp17/openstack-manila-api, batch=17.1_20250721.1, tcib_managed=true)
Oct 13 14:20:38 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:20:38 standalone.localdomain podman[207349]: 2025-10-13 14:20:38.014715111 +0000 UTC m=+0.237618074 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, container_name=memcached, architecture=x86_64, config_id=tripleo_step1, build-date=2025-07-21T12:58:43, com.redhat.component=openstack-memcached-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe)
Oct 13 14:20:38 standalone.localdomain podman[207381]: 2025-10-13 14:20:38.22520655 +0000 UTC m=+0.452038456 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, io.buildah.version=1.33.12, container_name=neutron_api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:44:03)
Oct 13 14:20:38 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:20:38 standalone.localdomain podman[207349]: 2025-10-13 14:20:38.25090805 +0000 UTC m=+0.473811033 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_id=tripleo_step1, release=1, com.redhat.component=openstack-memcached-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:43)
Oct 13 14:20:38 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:20:38 standalone.localdomain podman[207365]: 2025-10-13 14:20:38.286542959 +0000 UTC m=+0.523685935 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vendor=Red Hat, Inc., release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, container_name=cinder_api, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.component=openstack-cinder-api-container, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-type=git, 
distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T15:58:55, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:20:38 standalone.localdomain podman[207393]: 2025-10-13 14:20:38.054265012 +0000 UTC m=+0.269487826 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.component=openstack-nova-conductor-container, container_name=nova_conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, vcs-type=git, version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-nova-conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:44:17)
Oct 13 14:20:38 standalone.localdomain podman[207393]: 2025-10-13 14:20:38.339105413 +0000 UTC m=+0.554328217 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, name=rhosp17/openstack-nova-conductor, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, container_name=nova_conductor, com.redhat.component=openstack-nova-conductor-container, description=Red Hat OpenStack Platform 17.1 nova-conductor, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:44:17, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, vendor=Red 
Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:20:38 standalone.localdomain podman[207365]: 2025-10-13 14:20:38.346141533 +0000 UTC m=+0.583284519 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, release=1, architecture=x86_64, build-date=2025-07-21T15:58:55, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, managed_by=tripleo_ansible, tcib_managed=true, container_name=cinder_api, io.openshift.expose-services=, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, com.redhat.component=openstack-cinder-api-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']})
Oct 13 14:20:38 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:20:38 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:20:38 standalone.localdomain podman[207344]: 2025-10-13 14:20:38.358403894 +0000 UTC m=+0.612564449 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=heat_api_cfn, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-heat-api-cfn-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, build-date=2025-07-21T14:49:55, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99)
Oct 13 14:20:38 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:20:38 standalone.localdomain ceph-mon[29756]: pgmap v1383: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1384: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:40 standalone.localdomain podman[207705]: 2025-10-13 14:20:40.477209548 +0000 UTC m=+0.090752395 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-cinder-backup, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-backup-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T16:18:24, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, io.buildah.version=1.33.12)
Oct 13 14:20:40 standalone.localdomain podman[207705]: 2025-10-13 14:20:40.486102975 +0000 UTC m=+0.099645812 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, vendor=Red Hat, Inc., build-date=2025-07-21T16:18:24, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-backup, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-backup, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-backup-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-backup)
Oct 13 14:20:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:20:40 standalone.localdomain ceph-mon[29756]: pgmap v1384: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1385: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:42 standalone.localdomain ceph-mon[29756]: pgmap v1385: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:43 standalone.localdomain sshd[208017]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:20:43 standalone.localdomain sshd[208017]: Accepted publickey for root from 192.168.122.11 port 41768 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:20:43 standalone.localdomain systemd-logind[45629]: New session 99 of user root.
Oct 13 14:20:43 standalone.localdomain systemd[1]: Started Session 99 of User root.
Oct 13 14:20:43 standalone.localdomain sshd[208017]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:20:43 standalone.localdomain sudo[208023]:     root : PWD=/root ; USER=root ; COMMAND=/bin/podman exec -it nova_api nova-manage cell_v2 list_cells
Oct 13 14:20:43 standalone.localdomain sudo[208023]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:20:43 standalone.localdomain podman[208037]: 2025-10-13 14:20:43.822685047 +0000 UTC m=+0.102117439 container exec 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, summary=Red Hat 
OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-nova-api)
Oct 13 14:20:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1386: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:20:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:20:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:20:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:20:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:20:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:20:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:20:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:20:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:20:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:20:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:20:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:20:44 standalone.localdomain ceph-mon[29756]: pgmap v1386: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:20:45 standalone.localdomain podman[208086]: 2025-10-13 14:20:44.886751674 +0000 UTC m=+0.140270895 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, container_name=swift_container_server, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, tcib_managed=true, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 
swift-container, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 14:20:45 standalone.localdomain podman[208079]: 2025-10-13 14:20:45.048591149 +0000 UTC m=+0.294748422 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, managed_by=tripleo_ansible, release=1, container_name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team)
Oct 13 14:20:45 standalone.localdomain podman[208103]: 2025-10-13 14:20:45.001916297 +0000 UTC m=+0.242806756 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, 
com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9)
Oct 13 14:20:45 standalone.localdomain podman[208086]: 2025-10-13 14:20:45.164103623 +0000 UTC m=+0.417622864 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., release=1, distribution-scope=public, build-date=2025-07-21T15:54:32, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, container_name=swift_container_server, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 14:20:45 standalone.localdomain podman[208066]: 2025-10-13 14:20:45.131182579 +0000 UTC m=+0.395049243 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.component=openstack-swift-object-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat 
OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, container_name=swift_object_server, batch=17.1_20250721.1, version=17.1.9, io.openshift.expose-services=)
Oct 13 14:20:45 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:20:45 standalone.localdomain podman[208067]: 2025-10-13 14:20:45.147019552 +0000 UTC m=+0.409007957 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, name=rhosp17/openstack-placement-api, description=Red Hat OpenStack Platform 17.1 placement-api, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, container_name=placement_api, summary=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, version=17.1.9, com.redhat.component=openstack-placement-api-container, 
release=1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:12, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64)
Oct 13 14:20:45 standalone.localdomain podman[208065]: 2025-10-13 14:20:45.167828009 +0000 UTC m=+0.434582962 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-glance-api, container_name=glance_api_cron, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, build-date=2025-07-21T13:58:20, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', 
'/var/lib/glance:/var/lib/glance:shared']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public)
Oct 13 14:20:45 standalone.localdomain podman[208067]: 2025-10-13 14:20:45.23729402 +0000 UTC m=+0.499282445 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, release=1, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 placement-api, description=Red Hat OpenStack Platform 17.1 placement-api, name=rhosp17/openstack-placement-api, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., container_name=placement_api, com.redhat.component=openstack-placement-api-container, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:58:12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, io.k8s.description=Red Hat 
OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1)
Oct 13 14:20:45 standalone.localdomain podman[208092]: 2025-10-13 14:20:45.100637358 +0000 UTC m=+0.343443587 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:20:45 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:20:45 standalone.localdomain podman[208073]: 2025-10-13 14:20:45.264441155 +0000 UTC m=+0.521710464 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, container_name=nova_migration_target, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:20:45 standalone.localdomain podman[208092]: 2025-10-13 14:20:45.286913724 +0000 UTC m=+0.529719963 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:20:45 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:20:45 standalone.localdomain podman[208103]: 2025-10-13 14:20:45.305982597 +0000 UTC m=+0.546872906 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, container_name=swift_account_server, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, distribution-scope=public, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:20:45 standalone.localdomain podman[208206]: 2025-10-13 14:20:45.313659186 +0000 UTC m=+0.294088231 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, container_name=nova_metadata, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, build-date=2025-07-21T16:05:11, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, com.redhat.component=openstack-nova-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4)
Oct 13 14:20:45 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:20:45 standalone.localdomain podman[208104]: 2025-10-13 14:20:45.187641035 +0000 UTC m=+0.426914553 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-novncproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, config_id=tripleo_step4, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=nova_vnc_proxy, com.redhat.component=openstack-nova-novncproxy-container, distribution-scope=public, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T15:24:10, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy)
Oct 13 14:20:45 standalone.localdomain podman[208173]: 2025-10-13 14:20:45.246649121 +0000 UTC m=+0.340309689 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, version=17.1.9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, container_name=glance_api, batch=17.1_20250721.1)
Oct 13 14:20:45 standalone.localdomain podman[208206]: 2025-10-13 14:20:45.351899296 +0000 UTC m=+0.332328311 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhosp17/openstack-nova-api, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T16:05:11, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-api-container, managed_by=tripleo_ansible, release=1, 
version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.expose-services=, container_name=nova_metadata, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:20:45 standalone.localdomain podman[208079]: 2025-10-13 14:20:45.360033469 +0000 UTC m=+0.606190742 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, com.redhat.component=openstack-swift-proxy-server-container, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 
swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, architecture=x86_64, container_name=swift_proxy, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 14:20:45 standalone.localdomain podman[208066]: 2025-10-13 14:20:45.366388977 +0000 UTC m=+0.630255631 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, container_name=swift_object_server, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc.)
Oct 13 14:20:45 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:20:45 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:20:45 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:20:45 standalone.localdomain podman[208114]: 2025-10-13 14:20:45.445989063 +0000 UTC m=+0.681280648 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, version=17.1.9, release=1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller)
Oct 13 14:20:45 standalone.localdomain podman[208065]: 2025-10-13 14:20:45.465271893 +0000 UTC m=+0.732026856 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_cron, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, release=1, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T13:58:20, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']})
Oct 13 14:20:45 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:20:45 standalone.localdomain podman[208114]: 2025-10-13 14:20:45.491736917 +0000 UTC m=+0.727028482 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, build-date=2025-07-21T13:28:44, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_controller, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Oct 13 14:20:45 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:20:45 standalone.localdomain runuser[208369]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:20:45 standalone.localdomain podman[208173]: 2025-10-13 14:20:45.547237003 +0000 UTC m=+0.640897571 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, architecture=x86_64, container_name=glance_api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:20:45 standalone.localdomain podman[208172]: 2025-10-13 14:20:45.546572563 +0000 UTC m=+0.644001109 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_internal, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, version=17.1.9)
Oct 13 14:20:45 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:20:45 standalone.localdomain podman[208104]: 2025-10-13 14:20:45.564305925 +0000 UTC m=+0.803579493 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, container_name=nova_vnc_proxy, io.buildah.version=1.33.12, build-date=2025-07-21T15:24:10, com.redhat.component=openstack-nova-novncproxy-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_id=tripleo_step4, 
vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, tcib_managed=true)
Oct 13 14:20:45 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:20:45 standalone.localdomain podman[208073]: 2025-10-13 14:20:45.669221199 +0000 UTC m=+0.926490508 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true)
Oct 13 14:20:45 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:20:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:20:45 standalone.localdomain podman[208172]: 2025-10-13 14:20:45.761314464 +0000 UTC m=+0.858743030 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_internal, vcs-type=git, com.redhat.component=openstack-glance-api-container, build-date=2025-07-21T13:58:20, io.buildah.version=1.33.12, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:20:45 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:20:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1387: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:45 standalone.localdomain systemd[1]: tmp-crun.cjbSTV.mount: Deactivated successfully.
Oct 13 14:20:46 standalone.localdomain runuser[208369]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:20:46 standalone.localdomain runuser[208452]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:20:46 standalone.localdomain haproxy[70940]: 172.17.0.100:47427 [13/Oct/2025:14:20:46.085] mysql mysql/standalone.internalapi.localdomain 1/0/509 3529 -- 55/55/54/54/0 0/0
Oct 13 14:20:46 standalone.localdomain ceph-mon[29756]: pgmap v1387: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:46 standalone.localdomain podman[208037]: 2025-10-13 14:20:46.766823229 +0000 UTC m=+3.046255371 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:05:11, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 
17.1 nova-api, container_name=nova_api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-api)
Oct 13 14:20:46 standalone.localdomain sudo[208023]: pam_unix(sudo:session): session closed for user root
Oct 13 14:20:46 standalone.localdomain sshd[208022]: Received disconnect from 192.168.122.11 port 41768:11: disconnected by user
Oct 13 14:20:46 standalone.localdomain sshd[208022]: Disconnected from user root 192.168.122.11 port 41768
Oct 13 14:20:46 standalone.localdomain sshd[208017]: pam_unix(sshd:session): session closed for user root
Oct 13 14:20:46 standalone.localdomain systemd[1]: session-99.scope: Deactivated successfully.
Oct 13 14:20:46 standalone.localdomain systemd-logind[45629]: Session 99 logged out. Waiting for processes to exit.
Oct 13 14:20:46 standalone.localdomain systemd-logind[45629]: Removed session 99.
Oct 13 14:20:46 standalone.localdomain runuser[208452]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:20:46 standalone.localdomain runuser[208517]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:20:47 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 14:20:47 standalone.localdomain object-server[208564]: Object update sweep starting on /srv/node/d1 (pid: 15)
Oct 13 14:20:47 standalone.localdomain object-server[208564]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 15)
Oct 13 14:20:47 standalone.localdomain object-server[208564]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 14:20:47 standalone.localdomain object-server[114601]: Object update sweep completed: 0.08s
Oct 13 14:20:47 standalone.localdomain runuser[208517]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:20:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1388: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:48 standalone.localdomain ceph-mon[29756]: pgmap v1388: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1389: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:20:50 standalone.localdomain ceph-mon[29756]: pgmap v1389: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:20:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:20:51 standalone.localdomain podman[208739]: 2025-10-13 14:20:51.824796069 +0000 UTC m=+0.086779651 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, com.redhat.component=openstack-keystone-container, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, tcib_managed=true, maintainer=OpenStack 
TripleO Team, name=rhosp17/openstack-keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, container_name=keystone_cron, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, version=17.1.9)
Oct 13 14:20:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1390: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:51 standalone.localdomain podman[208739]: 2025-10-13 14:20:51.861817872 +0000 UTC m=+0.123801424 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-keystone-container, io.buildah.version=1.33.12, container_name=keystone_cron, summary=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:27:18, version=17.1.9, release=1, config_id=tripleo_step3, name=rhosp17/openstack-keystone)
Oct 13 14:20:51 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:20:51 standalone.localdomain systemd[1]: tmp-crun.YOKgSF.mount: Deactivated successfully.
Oct 13 14:20:51 standalone.localdomain podman[208740]: 2025-10-13 14:20:51.995049617 +0000 UTC m=+0.256115809 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, container_name=nova_api, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-api-container, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, build-date=2025-07-21T16:05:11, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step4, name=rhosp17/openstack-nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:20:52 standalone.localdomain podman[208740]: 2025-10-13 14:20:52.024876925 +0000 UTC m=+0.285943067 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, name=rhosp17/openstack-nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:05:11, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-nova-api-container, maintainer=OpenStack TripleO Team)
Oct 13 14:20:52 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:20:52 standalone.localdomain ceph-mon[29756]: pgmap v1390: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:20:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:20:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:20:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:20:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:20:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:20:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:20:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:20:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:20:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:20:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:20:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:20:53 standalone.localdomain systemd[1]: tmp-crun.uxahNy.mount: Deactivated successfully.
Oct 13 14:20:53 standalone.localdomain podman[208964]: 2025-10-13 14:20:53.845204682 +0000 UTC m=+0.084859622 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, name=rhosp17/openstack-nova-api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=nova_api_cron, summary=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T16:05:11, release=1, 
vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-nova-api-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=)
Oct 13 14:20:53 standalone.localdomain podman[208939]: 2025-10-13 14:20:53.857675509 +0000 UTC m=+0.119360535 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:54, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, release=1, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=neutron_dhcp, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, name=rhosp17/openstack-neutron-dhcp-agent)
Oct 13 14:20:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1391: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:53 standalone.localdomain podman[208940]: 2025-10-13 14:20:53.818565042 +0000 UTC m=+0.078861474 container health_status 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, health_status=healthy, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-barbican-api, vcs-type=git, build-date=2025-07-21T15:22:44, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, container_name=barbican_api, release=1, managed_by=tripleo_ansible, 
com.redhat.component=openstack-barbican-api-container, batch=17.1_20250721.1, config_id=tripleo_step3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12)
Oct 13 14:20:53 standalone.localdomain podman[208964]: 2025-10-13 14:20:53.876682091 +0000 UTC m=+0.116337021 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, name=rhosp17/openstack-nova-api, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, 
maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, distribution-scope=public)
Oct 13 14:20:53 standalone.localdomain podman[208958]: 2025-10-13 14:20:53.886770404 +0000 UTC m=+0.126372573 container health_status 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, health_status=healthy, release=1, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 barbican-worker, io.buildah.version=1.33.12, com.redhat.component=openstack-barbican-worker-container, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=barbican_worker, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T15:36:22, tcib_managed=true, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', 
'/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, maintainer=OpenStack TripleO Team)
Oct 13 14:20:53 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:20:53 standalone.localdomain podman[208940]: 2025-10-13 14:20:53.903319679 +0000 UTC m=+0.163616101 container exec_died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 barbican-api, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, build-date=2025-07-21T15:22:44, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-barbican-api-container, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', 
'/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, container_name=barbican_api, name=rhosp17/openstack-barbican-api, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64)
Oct 13 14:20:53 standalone.localdomain podman[208953]: 2025-10-13 14:20:53.920926028 +0000 UTC m=+0.168947998 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, tcib_managed=true, name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-type=git, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T16:03:34, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:20:53 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Deactivated successfully.
Oct 13 14:20:53 standalone.localdomain podman[208953]: 2025-10-13 14:20:53.963937726 +0000 UTC m=+0.211959706 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vendor=Red Hat, Inc., vcs-type=git, container_name=neutron_sriov_agent, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, com.redhat.component=openstack-neutron-sriov-agent-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T16:03:34, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, version=17.1.9)
Oct 13 14:20:53 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:20:53 standalone.localdomain podman[208947]: 2025-10-13 14:20:53.973338628 +0000 UTC m=+0.225526998 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, vcs-type=git, version=17.1.9, tcib_managed=true, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, batch=17.1_20250721.1, release=1, build-date=2025-07-21T16:18:19, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, com.redhat.component=openstack-barbican-keystone-listener-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=barbican_keystone_listener)
Oct 13 14:20:53 standalone.localdomain podman[208939]: 2025-10-13 14:20:53.988045996 +0000 UTC m=+0.249731032 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-dhcp-agent-container, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 
neutron-dhcp-agent, maintainer=OpenStack TripleO Team, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, build-date=2025-07-21T16:28:54, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, name=rhosp17/openstack-neutron-dhcp-agent, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vendor=Red Hat, Inc., container_name=neutron_dhcp)
Oct 13 14:20:53 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:20:54 standalone.localdomain podman[208958]: 2025-10-13 14:20:54.004095295 +0000 UTC m=+0.243697474 container exec_died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T15:36:22, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-barbican-worker-container, container_name=barbican_worker, description=Red Hat OpenStack Platform 17.1 barbican-worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, tcib_managed=true, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-barbican-worker, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 barbican-worker, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:20:54 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Deactivated successfully.
Oct 13 14:20:54 standalone.localdomain podman[208947]: 2025-10-13 14:20:54.04185492 +0000 UTC m=+0.294043290 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T16:18:19, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=barbican_keystone_listener, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-barbican-keystone-listener-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, tcib_managed=true)
Oct 13 14:20:54 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:20:54 standalone.localdomain ceph-mon[29756]: pgmap v1391: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:20:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1392: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:56 standalone.localdomain ceph-mon[29756]: pgmap v1392: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:20:57 standalone.localdomain podman[209136]: 2025-10-13 14:20:57.009032509 +0000 UTC m=+0.082219019 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, release=1, config_id=ovn_cluster_northd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-northd-container, vendor=Red Hat, Inc., container_name=ovn_cluster_northd, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-07-21T13:30:04, description=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, name=rhosp17/openstack-ovn-northd, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true)
Oct 13 14:20:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:20:57 standalone.localdomain podman[209136]: 2025-10-13 14:20:57.022901261 +0000 UTC m=+0.096087751 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, container_name=ovn_cluster_northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, 
io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=openstack-ovn-northd-container, summary=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, name=rhosp17/openstack-ovn-northd, managed_by=tripleo_ansible, build-date=2025-07-21T13:30:04, config_id=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:20:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:20:57 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:20:57 standalone.localdomain podman[209155]: 2025-10-13 14:20:57.089256955 +0000 UTC m=+0.060098651 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team)
Oct 13 14:20:57 standalone.localdomain podman[209155]: 2025-10-13 14:20:57.117038009 +0000 UTC m=+0.087879705 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:20:57 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:20:57 standalone.localdomain podman[209156]: 2025-10-13 14:20:57.195430798 +0000 UTC m=+0.162976492 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:45, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step2, vcs-type=git, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, container_name=clustercheck, 
com.redhat.component=openstack-mariadb-container, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-mariadb)
Oct 13 14:20:57 standalone.localdomain podman[209156]: 2025-10-13 14:20:57.235798714 +0000 UTC m=+0.203344418 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, distribution-scope=public, com.redhat.component=openstack-mariadb-container, batch=17.1_20250721.1, name=rhosp17/openstack-mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, release=1, architecture=x86_64, container_name=clustercheck, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, 
io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step2, io.openshift.expose-services=)
Oct 13 14:20:57 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:20:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:20:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1468703068' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:20:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:20:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1468703068' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:20:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1468703068' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:20:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1468703068' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:20:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1393: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:58 standalone.localdomain runuser[209263]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:20:58 standalone.localdomain ceph-mon[29756]: pgmap v1393: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:20:58 standalone.localdomain runuser[209263]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:20:58 standalone.localdomain runuser[209332]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:20:59 standalone.localdomain runuser[209332]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:20:59 standalone.localdomain runuser[209394]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:20:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1394: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:00 standalone.localdomain runuser[209394]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:21:00 standalone.localdomain sshd[209453]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:00 standalone.localdomain sudo[209451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:21:00 standalone.localdomain sudo[209451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:21:00 standalone.localdomain sudo[209451]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:00 standalone.localdomain sudo[209469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:21:00 standalone.localdomain sudo[209469]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:21:00 standalone.localdomain sshd[209453]: Accepted publickey for root from 192.168.122.11 port 34728 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:00 standalone.localdomain systemd-logind[45629]: New session 100 of user root.
Oct 13 14:21:00 standalone.localdomain systemd[1]: Started Session 100 of User root.
Oct 13 14:21:00 standalone.localdomain sshd[209453]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:00 standalone.localdomain sudo[209494]:     root : PWD=/root ; USER=root ; COMMAND=/bin/podman exec -it nova_conductor nova-manage cell_v2 list_cells
Oct 13 14:21:00 standalone.localdomain sudo[209494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:00 standalone.localdomain systemd[1]: tmp-crun.Kp4i8X.mount: Deactivated successfully.
Oct 13 14:21:00 standalone.localdomain podman[209508]: 2025-10-13 14:21:00.697149558 +0000 UTC m=+0.123344048 container exec ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, build-date=2025-07-21T15:44:17, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-nova-conductor, com.redhat.component=openstack-nova-conductor-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-conductor, version=17.1.9, config_id=tripleo_step4, container_name=nova_conductor, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:21:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:21:00 standalone.localdomain ceph-mon[29756]: pgmap v1394: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:01 standalone.localdomain sudo[209469]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:21:01 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:21:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:21:01 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:21:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:21:01 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:21:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:21:01 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:21:01 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev d02c2bf3-a6a1-4797-8247-4f1d2652f4aa (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:21:01 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev d02c2bf3-a6a1-4797-8247-4f1d2652f4aa (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:21:01 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event d02c2bf3-a6a1-4797-8247-4f1d2652f4aa (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:21:01 standalone.localdomain sudo[209641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:21:01 standalone.localdomain sudo[209641]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:21:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:21:01 standalone.localdomain sudo[209641]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:21:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:21:01 standalone.localdomain systemd[1]: tmp-crun.Ou0Xb0.mount: Deactivated successfully.
Oct 13 14:21:01 standalone.localdomain podman[209655]: 2025-10-13 14:21:01.330918928 +0000 UTC m=+0.060978739 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, release=1, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:21:01 standalone.localdomain podman[209655]: 2025-10-13 14:21:01.338889546 +0000 UTC m=+0.068949367 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat 
OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, version=17.1.9, tcib_managed=true)
Oct 13 14:21:01 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:21:01 standalone.localdomain podman[209657]: 2025-10-13 14:21:01.384598718 +0000 UTC m=+0.110760788 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, com.redhat.component=openstack-cinder-api-container, build-date=2025-07-21T15:58:55, architecture=x86_64, container_name=cinder_api_cron, description=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1)
Oct 13 14:21:01 standalone.localdomain podman[209657]: 2025-10-13 14:21:01.396134557 +0000 UTC m=+0.122296637 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, name=rhosp17/openstack-cinder-api, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=cinder_api_cron, release=1, com.redhat.component=openstack-cinder-api-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, build-date=2025-07-21T15:58:55, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:21:01 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:21:01 standalone.localdomain podman[209658]: 2025-10-13 14:21:01.443418208 +0000 UTC m=+0.171169327 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, name=rhosp17/openstack-cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_scheduler, build-date=2025-07-21T16:10:12, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, distribution-scope=public, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.expose-services=, com.redhat.component=openstack-cinder-scheduler-container, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler)
Oct 13 14:21:01 standalone.localdomain podman[209658]: 2025-10-13 14:21:01.460146858 +0000 UTC m=+0.187897987 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, io.openshift.expose-services=, container_name=cinder_scheduler, build-date=2025-07-21T16:10:12, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-scheduler, managed_by=tripleo_ansible, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, version=17.1.9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 
cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-scheduler-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1)
Oct 13 14:21:01 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:21:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1395: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:01 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:21:01 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:21:01 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:21:01 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:21:02 standalone.localdomain haproxy[70940]: 172.17.0.100:47871 [13/Oct/2025:14:21:02.448] mysql mysql/standalone.internalapi.localdomain 1/0/450 3529 -- 55/55/54/54/0 0/0
Oct 13 14:21:02 standalone.localdomain ceph-mon[29756]: pgmap v1395: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:03 standalone.localdomain podman[209857]: 2025-10-13 14:21:03.028273038 +0000 UTC m=+0.054553909 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:21:03 standalone.localdomain podman[209857]: 2025-10-13 14:21:03.057003332 +0000 UTC m=+0.083284223 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 14:21:03 standalone.localdomain podman[209508]: 2025-10-13 14:21:03.098779272 +0000 UTC m=+2.524973762 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, release=1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, container_name=nova_conductor, config_id=tripleo_step4, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T15:44:17, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-conductor, com.redhat.component=openstack-nova-conductor-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, description=Red Hat OpenStack 
Platform 17.1 nova-conductor, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:21:03 standalone.localdomain sudo[209494]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:03 standalone.localdomain sshd[209493]: Received disconnect from 192.168.122.11 port 34728:11: disconnected by user
Oct 13 14:21:03 standalone.localdomain sshd[209493]: Disconnected from user root 192.168.122.11 port 34728
Oct 13 14:21:03 standalone.localdomain sshd[209453]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:03 standalone.localdomain systemd[1]: session-100.scope: Deactivated successfully.
Oct 13 14:21:03 standalone.localdomain systemd-logind[45629]: Session 100 logged out. Waiting for processes to exit.
Oct 13 14:21:03 standalone.localdomain systemd-logind[45629]: Removed session 100.
Oct 13 14:21:03 standalone.localdomain podman[209902]: 2025-10-13 14:21:03.294204822 +0000 UTC m=+0.075870502 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-haproxy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-haproxy-container, description=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git)
Oct 13 14:21:03 standalone.localdomain podman[209902]: 2025-10-13 14:21:03.325740683 +0000 UTC m=+0.107406373 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, description=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T13:08:11, com.redhat.component=openstack-haproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-haproxy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:21:03 standalone.localdomain sshd[209987]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:03 standalone.localdomain sshd[209987]: Accepted publickey for root from 192.168.122.11 port 34732 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:03 standalone.localdomain podman[209994]: 2025-10-13 14:21:03.580325604 +0000 UTC m=+0.071434644 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, description=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, name=rhosp17/openstack-rabbitmq, release=1, version=17.1.9, com.redhat.component=openstack-rabbitmq-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12)
Oct 13 14:21:03 standalone.localdomain systemd-logind[45629]: New session 101 of user root.
Oct 13 14:21:03 standalone.localdomain systemd[1]: Started Session 101 of User root.
Oct 13 14:21:03 standalone.localdomain sshd[209987]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:03 standalone.localdomain podman[209994]: 2025-10-13 14:21:03.608069508 +0000 UTC m=+0.099178538 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, version=17.1.9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rabbitmq, build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, distribution-scope=public, vcs-type=git, tcib_managed=true)
Oct 13 14:21:03 standalone.localdomain sudo[210034]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_aodh_api.service
Oct 13 14:21:03 standalone.localdomain sudo[210034]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:03 standalone.localdomain sudo[210034]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:03 standalone.localdomain sshd[210031]: Received disconnect from 192.168.122.11 port 34732:11: disconnected by user
Oct 13 14:21:03 standalone.localdomain sshd[210031]: Disconnected from user root 192.168.122.11 port 34732
Oct 13 14:21:03 standalone.localdomain sshd[209987]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:03 standalone.localdomain systemd[1]: session-101.scope: Deactivated successfully.
Oct 13 14:21:03 standalone.localdomain systemd-logind[45629]: Session 101 logged out. Waiting for processes to exit.
Oct 13 14:21:03 standalone.localdomain systemd-logind[45629]: Removed session 101.
Oct 13 14:21:03 standalone.localdomain sshd[210049]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:03 standalone.localdomain sshd[210049]: Accepted publickey for root from 192.168.122.11 port 34744 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:03 standalone.localdomain systemd-logind[45629]: New session 102 of user root.
Oct 13 14:21:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1396: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:03 standalone.localdomain systemd[1]: Started Session 102 of User root.
Oct 13 14:21:03 standalone.localdomain sshd[210049]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:03 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:21:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:21:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:21:03 standalone.localdomain sudo[210053]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_aodh_api_cron.service
Oct 13 14:21:03 standalone.localdomain sudo[210053]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:04 standalone.localdomain sudo[210053]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:04 standalone.localdomain sshd[210052]: Received disconnect from 192.168.122.11 port 34744:11: disconnected by user
Oct 13 14:21:04 standalone.localdomain sshd[210052]: Disconnected from user root 192.168.122.11 port 34744
Oct 13 14:21:04 standalone.localdomain sshd[210049]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:04 standalone.localdomain systemd[1]: session-102.scope: Deactivated successfully.
Oct 13 14:21:04 standalone.localdomain systemd-logind[45629]: Session 102 logged out. Waiting for processes to exit.
Oct 13 14:21:04 standalone.localdomain systemd-logind[45629]: Removed session 102.
Oct 13 14:21:04 standalone.localdomain sshd[210068]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:04 standalone.localdomain sshd[210068]: Accepted publickey for root from 192.168.122.11 port 34746 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:04 standalone.localdomain systemd-logind[45629]: New session 103 of user root.
Oct 13 14:21:04 standalone.localdomain systemd[1]: Started Session 103 of User root.
Oct 13 14:21:04 standalone.localdomain sshd[210068]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:04 standalone.localdomain sudo[210072]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_aodh_evaluator.service
Oct 13 14:21:04 standalone.localdomain sudo[210072]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:04 standalone.localdomain sudo[210072]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:04 standalone.localdomain sshd[210071]: Received disconnect from 192.168.122.11 port 34746:11: disconnected by user
Oct 13 14:21:04 standalone.localdomain sshd[210071]: Disconnected from user root 192.168.122.11 port 34746
Oct 13 14:21:04 standalone.localdomain sshd[210068]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:04 standalone.localdomain systemd[1]: session-103.scope: Deactivated successfully.
Oct 13 14:21:04 standalone.localdomain systemd-logind[45629]: Session 103 logged out. Waiting for processes to exit.
Oct 13 14:21:04 standalone.localdomain systemd-logind[45629]: Removed session 103.
Oct 13 14:21:04 standalone.localdomain sshd[210087]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:04 standalone.localdomain sshd[210087]: Accepted publickey for root from 192.168.122.11 port 34758 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:04 standalone.localdomain systemd-logind[45629]: New session 104 of user root.
Oct 13 14:21:04 standalone.localdomain systemd[1]: Started Session 104 of User root.
Oct 13 14:21:04 standalone.localdomain sshd[210087]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:04 standalone.localdomain sudo[210091]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_aodh_listener.service
Oct 13 14:21:04 standalone.localdomain sudo[210091]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:04 standalone.localdomain sudo[210091]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:04 standalone.localdomain sshd[210090]: Received disconnect from 192.168.122.11 port 34758:11: disconnected by user
Oct 13 14:21:04 standalone.localdomain sshd[210090]: Disconnected from user root 192.168.122.11 port 34758
Oct 13 14:21:04 standalone.localdomain sshd[210087]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:04 standalone.localdomain systemd[1]: session-104.scope: Deactivated successfully.
Oct 13 14:21:04 standalone.localdomain systemd-logind[45629]: Session 104 logged out. Waiting for processes to exit.
Oct 13 14:21:04 standalone.localdomain systemd-logind[45629]: Removed session 104.
Oct 13 14:21:04 standalone.localdomain sshd[210114]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:04 standalone.localdomain sshd[210114]: Accepted publickey for root from 192.168.122.11 port 34768 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:04 standalone.localdomain systemd-logind[45629]: New session 105 of user root.
Oct 13 14:21:04 standalone.localdomain systemd[1]: Started Session 105 of User root.
Oct 13 14:21:04 standalone.localdomain sshd[210114]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:04 standalone.localdomain sudo[210118]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_aodh_notifier.service
Oct 13 14:21:04 standalone.localdomain sudo[210118]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:04 standalone.localdomain sudo[210118]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:04 standalone.localdomain sshd[210117]: Received disconnect from 192.168.122.11 port 34768:11: disconnected by user
Oct 13 14:21:04 standalone.localdomain sshd[210117]: Disconnected from user root 192.168.122.11 port 34768
Oct 13 14:21:04 standalone.localdomain sshd[210114]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:04 standalone.localdomain systemd[1]: session-105.scope: Deactivated successfully.
Oct 13 14:21:04 standalone.localdomain systemd-logind[45629]: Session 105 logged out. Waiting for processes to exit.
Oct 13 14:21:04 standalone.localdomain systemd-logind[45629]: Removed session 105.
Oct 13 14:21:04 standalone.localdomain sshd[210133]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:04 standalone.localdomain sshd[210133]: Accepted publickey for root from 192.168.122.11 port 34784 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:04 standalone.localdomain systemd-logind[45629]: New session 106 of user root.
Oct 13 14:21:04 standalone.localdomain systemd[1]: Started Session 106 of User root.
Oct 13 14:21:04 standalone.localdomain sshd[210133]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:04 standalone.localdomain ceph-mon[29756]: pgmap v1396: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:04 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:21:05 standalone.localdomain sudo[210137]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ceilometer_agent_central.service
Oct 13 14:21:05 standalone.localdomain sudo[210137]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:05 standalone.localdomain sudo[210137]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:05 standalone.localdomain sshd[210136]: Received disconnect from 192.168.122.11 port 34784:11: disconnected by user
Oct 13 14:21:05 standalone.localdomain sshd[210136]: Disconnected from user root 192.168.122.11 port 34784
Oct 13 14:21:05 standalone.localdomain sshd[210133]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:05 standalone.localdomain systemd[1]: session-106.scope: Deactivated successfully.
Oct 13 14:21:05 standalone.localdomain systemd-logind[45629]: Session 106 logged out. Waiting for processes to exit.
Oct 13 14:21:05 standalone.localdomain systemd-logind[45629]: Removed session 106.
Oct 13 14:21:05 standalone.localdomain sshd[210152]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:05 standalone.localdomain sshd[210152]: Accepted publickey for root from 192.168.122.11 port 34794 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:05 standalone.localdomain systemd-logind[45629]: New session 107 of user root.
Oct 13 14:21:05 standalone.localdomain systemd[1]: Started Session 107 of User root.
Oct 13 14:21:05 standalone.localdomain sshd[210152]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:05 standalone.localdomain sudo[210156]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ceilometer_agent_notification.service
Oct 13 14:21:05 standalone.localdomain sudo[210156]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:05 standalone.localdomain sudo[210156]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:05 standalone.localdomain sshd[210155]: Received disconnect from 192.168.122.11 port 34794:11: disconnected by user
Oct 13 14:21:05 standalone.localdomain sshd[210155]: Disconnected from user root 192.168.122.11 port 34794
Oct 13 14:21:05 standalone.localdomain sshd[210152]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:05 standalone.localdomain systemd[1]: session-107.scope: Deactivated successfully.
Oct 13 14:21:05 standalone.localdomain systemd-logind[45629]: Session 107 logged out. Waiting for processes to exit.
Oct 13 14:21:05 standalone.localdomain systemd-logind[45629]: Removed session 107.
Oct 13 14:21:05 standalone.localdomain sshd[210171]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:05 standalone.localdomain sshd[210171]: Accepted publickey for root from 192.168.122.11 port 34802 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:05 standalone.localdomain systemd-logind[45629]: New session 108 of user root.
Oct 13 14:21:05 standalone.localdomain systemd[1]: Started Session 108 of User root.
Oct 13 14:21:05 standalone.localdomain sshd[210171]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:05 standalone.localdomain sudo[210175]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_octavia_api.service
Oct 13 14:21:05 standalone.localdomain sudo[210175]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:05 standalone.localdomain sudo[210175]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:05 standalone.localdomain sshd[210174]: Received disconnect from 192.168.122.11 port 34802:11: disconnected by user
Oct 13 14:21:05 standalone.localdomain sshd[210174]: Disconnected from user root 192.168.122.11 port 34802
Oct 13 14:21:05 standalone.localdomain sshd[210171]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:05 standalone.localdomain systemd[1]: session-108.scope: Deactivated successfully.
Oct 13 14:21:05 standalone.localdomain systemd-logind[45629]: Session 108 logged out. Waiting for processes to exit.
Oct 13 14:21:05 standalone.localdomain systemd-logind[45629]: Removed session 108.
Oct 13 14:21:05 standalone.localdomain sshd[210198]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:21:05 standalone.localdomain sshd[210198]: Accepted publickey for root from 192.168.122.11 port 34812 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:05 standalone.localdomain systemd-logind[45629]: New session 109 of user root.
Oct 13 14:21:05 standalone.localdomain systemd[1]: Started Session 109 of User root.
Oct 13 14:21:05 standalone.localdomain sshd[210198]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:05 standalone.localdomain sudo[210202]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_octavia_health_manager.service
Oct 13 14:21:05 standalone.localdomain sudo[210202]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:05 standalone.localdomain sudo[210202]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:05 standalone.localdomain sshd[210201]: Received disconnect from 192.168.122.11 port 34812:11: disconnected by user
Oct 13 14:21:05 standalone.localdomain sshd[210201]: Disconnected from user root 192.168.122.11 port 34812
Oct 13 14:21:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1397: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:05 standalone.localdomain sshd[210198]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:05 standalone.localdomain systemd[1]: session-109.scope: Deactivated successfully.
Oct 13 14:21:05 standalone.localdomain systemd-logind[45629]: Session 109 logged out. Waiting for processes to exit.
Oct 13 14:21:05 standalone.localdomain systemd-logind[45629]: Removed session 109.
Oct 13 14:21:05 standalone.localdomain sshd[210217]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:06 standalone.localdomain sshd[210217]: Accepted publickey for root from 192.168.122.11 port 34826 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:06 standalone.localdomain systemd-logind[45629]: New session 110 of user root.
Oct 13 14:21:06 standalone.localdomain systemd[1]: Started Session 110 of User root.
Oct 13 14:21:06 standalone.localdomain sshd[210217]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:06 standalone.localdomain sudo[210221]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_octavia_rsyslog.service
Oct 13 14:21:06 standalone.localdomain sudo[210221]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:06 standalone.localdomain sudo[210221]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:06 standalone.localdomain sshd[210220]: Received disconnect from 192.168.122.11 port 34826:11: disconnected by user
Oct 13 14:21:06 standalone.localdomain sshd[210220]: Disconnected from user root 192.168.122.11 port 34826
Oct 13 14:21:06 standalone.localdomain sshd[210217]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:06 standalone.localdomain systemd[1]: session-110.scope: Deactivated successfully.
Oct 13 14:21:06 standalone.localdomain systemd-logind[45629]: Session 110 logged out. Waiting for processes to exit.
Oct 13 14:21:06 standalone.localdomain systemd-logind[45629]: Removed session 110.
Oct 13 14:21:06 standalone.localdomain sshd[210236]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:06 standalone.localdomain sshd[210236]: Accepted publickey for root from 192.168.122.11 port 34840 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:06 standalone.localdomain systemd-logind[45629]: New session 111 of user root.
Oct 13 14:21:06 standalone.localdomain systemd[1]: Started Session 111 of User root.
Oct 13 14:21:06 standalone.localdomain sshd[210236]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:06 standalone.localdomain sudo[210240]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_octavia_driver_agent.service
Oct 13 14:21:06 standalone.localdomain sudo[210240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:06 standalone.localdomain sudo[210240]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:06 standalone.localdomain sshd[210239]: Received disconnect from 192.168.122.11 port 34840:11: disconnected by user
Oct 13 14:21:06 standalone.localdomain sshd[210239]: Disconnected from user root 192.168.122.11 port 34840
Oct 13 14:21:06 standalone.localdomain sshd[210236]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:06 standalone.localdomain systemd[1]: session-111.scope: Deactivated successfully.
Oct 13 14:21:06 standalone.localdomain systemd-logind[45629]: Session 111 logged out. Waiting for processes to exit.
Oct 13 14:21:06 standalone.localdomain systemd-logind[45629]: Removed session 111.
Oct 13 14:21:06 standalone.localdomain sshd[210255]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:06 standalone.localdomain sshd[210255]: Accepted publickey for root from 192.168.122.11 port 34848 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:06 standalone.localdomain systemd-logind[45629]: New session 112 of user root.
Oct 13 14:21:06 standalone.localdomain systemd[1]: Started Session 112 of User root.
Oct 13 14:21:06 standalone.localdomain sshd[210255]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:06 standalone.localdomain sudo[210267]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_octavia_housekeeping.service
Oct 13 14:21:06 standalone.localdomain sudo[210267]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:06 standalone.localdomain sudo[210267]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:06 standalone.localdomain sshd[210258]: Received disconnect from 192.168.122.11 port 34848:11: disconnected by user
Oct 13 14:21:06 standalone.localdomain sshd[210258]: Disconnected from user root 192.168.122.11 port 34848
Oct 13 14:21:06 standalone.localdomain sshd[210255]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:06 standalone.localdomain systemd[1]: session-112.scope: Deactivated successfully.
Oct 13 14:21:06 standalone.localdomain systemd-logind[45629]: Session 112 logged out. Waiting for processes to exit.
Oct 13 14:21:06 standalone.localdomain systemd-logind[45629]: Removed session 112.
Oct 13 14:21:06 standalone.localdomain sshd[210282]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:06 standalone.localdomain ceph-mon[29756]: pgmap v1397: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:06 standalone.localdomain sshd[210282]: Accepted publickey for root from 192.168.122.11 port 34850 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:06 standalone.localdomain systemd-logind[45629]: New session 113 of user root.
Oct 13 14:21:06 standalone.localdomain systemd[1]: Started Session 113 of User root.
Oct 13 14:21:06 standalone.localdomain sshd[210282]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:06 standalone.localdomain sudo[210286]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_octavia_worker.service
Oct 13 14:21:06 standalone.localdomain sudo[210286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:06 standalone.localdomain sudo[210286]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:06 standalone.localdomain sshd[210285]: Received disconnect from 192.168.122.11 port 34850:11: disconnected by user
Oct 13 14:21:06 standalone.localdomain sshd[210285]: Disconnected from user root 192.168.122.11 port 34850
Oct 13 14:21:06 standalone.localdomain sshd[210282]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:06 standalone.localdomain systemd[1]: session-113.scope: Deactivated successfully.
Oct 13 14:21:06 standalone.localdomain systemd-logind[45629]: Session 113 logged out. Waiting for processes to exit.
Oct 13 14:21:06 standalone.localdomain systemd-logind[45629]: Removed session 113.
Oct 13 14:21:06 standalone.localdomain sshd[210301]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:07 standalone.localdomain sshd[210301]: Accepted publickey for root from 192.168.122.11 port 34854 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:07 standalone.localdomain systemd-logind[45629]: New session 114 of user root.
Oct 13 14:21:07 standalone.localdomain systemd[1]: Started Session 114 of User root.
Oct 13 14:21:07 standalone.localdomain sshd[210301]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:07 standalone.localdomain sudo[210305]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_horizon.service
Oct 13 14:21:07 standalone.localdomain sudo[210305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:07 standalone.localdomain sudo[210305]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:07 standalone.localdomain sshd[210304]: Received disconnect from 192.168.122.11 port 34854:11: disconnected by user
Oct 13 14:21:07 standalone.localdomain sshd[210304]: Disconnected from user root 192.168.122.11 port 34854
Oct 13 14:21:07 standalone.localdomain sshd[210301]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:07 standalone.localdomain systemd[1]: session-114.scope: Deactivated successfully.
Oct 13 14:21:07 standalone.localdomain systemd-logind[45629]: Session 114 logged out. Waiting for processes to exit.
Oct 13 14:21:07 standalone.localdomain systemd-logind[45629]: Removed session 114.
Oct 13 14:21:07 standalone.localdomain sshd[210320]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:07 standalone.localdomain sshd[210320]: Accepted publickey for root from 192.168.122.11 port 34860 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:07 standalone.localdomain systemd-logind[45629]: New session 115 of user root.
Oct 13 14:21:07 standalone.localdomain systemd[1]: Started Session 115 of User root.
Oct 13 14:21:07 standalone.localdomain sshd[210320]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:07 standalone.localdomain sudo[210326]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_horizon.service
Oct 13 14:21:07 standalone.localdomain sudo[210326]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:07 standalone.localdomain systemd[1]: Stopping horizon container...
Oct 13 14:21:07 standalone.localdomain podman[210331]: 2025-10-13 14:21:07.446822154 +0000 UTC m=+0.067694927 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-volume, summary=Red Hat OpenStack Platform 17.1 cinder-volume, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cinder-volume, tcib_managed=true, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, build-date=2025-07-21T16:13:39, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, release=1, com.redhat.component=openstack-cinder-volume-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume)
Oct 13 14:21:07 standalone.localdomain podman[210331]: 2025-10-13 14:21:07.530335883 +0000 UTC m=+0.151208656 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, com.redhat.component=openstack-cinder-volume-container, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T16:13:39, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-type=git, name=rhosp17/openstack-cinder-volume, summary=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, vendor=Red Hat, Inc., tcib_managed=true)
Oct 13 14:21:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1398: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:21:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:21:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:21:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:21:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:21:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:21:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:21:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:21:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:21:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:21:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:21:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:21:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:21:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:21:08 standalone.localdomain ceph-mon[29756]: pgmap v1398: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:08 standalone.localdomain podman[210429]: 2025-10-13 14:21:08.874278776 +0000 UTC m=+0.117011270 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, com.redhat.component=openstack-memcached-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T12:58:43, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, config_id=tripleo_step1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=memcached)
Oct 13 14:21:08 standalone.localdomain podman[210405]: 2025-10-13 14:21:08.959526909 +0000 UTC m=+0.213949498 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, container_name=heat_api_cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, com.redhat.component=openstack-heat-api-cfn-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, description=Red 
Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, version=17.1.9, tcib_managed=true, name=rhosp17/openstack-heat-api-cfn, build-date=2025-07-21T14:49:55, distribution-scope=public, io.openshift.expose-services=)
Oct 13 14:21:08 standalone.localdomain podman[210409]: 2025-10-13 14:21:08.912982191 +0000 UTC m=+0.158631586 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T16:02:54, name=rhosp17/openstack-nova-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-scheduler-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, container_name=nova_scheduler, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=)
Oct 13 14:21:08 standalone.localdomain podman[210406]: 2025-10-13 14:21:08.972257175 +0000 UTC m=+0.226100315 container health_status 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, health_status=healthy, release=1, com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, container_name=keystone)
Oct 13 14:21:08 standalone.localdomain podman[210405]: 2025-10-13 14:21:08.98687508 +0000 UTC m=+0.241297679 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, build-date=2025-07-21T14:49:55, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-cfn-container, release=1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, version=17.1.9, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-heat-api-cfn, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=heat_api_cfn, batch=17.1_20250721.1)
Oct 13 14:21:08 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:21:09 standalone.localdomain podman[210438]: 2025-10-13 14:21:09.023210291 +0000 UTC m=+0.251315401 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, distribution-scope=public, container_name=manila_api_cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 manila-api, name=rhosp17/openstack-manila-api, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, release=1, vendor=Red Hat, Inc., 
io.buildah.version=1.33.12, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, summary=Red Hat OpenStack Platform 17.1 manila-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, com.redhat.component=openstack-manila-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1)
Oct 13 14:21:09 standalone.localdomain podman[210438]: 2025-10-13 14:21:09.028522995 +0000 UTC m=+0.256628085 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=manila_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, name=rhosp17/openstack-manila-api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., 
config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 manila-api, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T16:06:43, io.buildah.version=1.33.12, com.redhat.component=openstack-manila-api-container, release=1)
Oct 13 14:21:09 standalone.localdomain podman[210467]: 2025-10-13 14:21:08.933441578 +0000 UTC m=+0.145924122 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-server-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, tcib_managed=true, name=rhosp17/openstack-neutron-server, release=1, build-date=2025-07-21T15:44:03, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-server, 
vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=neutron_api)
Oct 13 14:21:09 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:21:09 standalone.localdomain podman[210409]: 2025-10-13 14:21:09.048734575 +0000 UTC m=+0.294383960 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:02:54, com.redhat.component=openstack-nova-scheduler-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=nova_scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, batch=17.1_20250721.1, name=rhosp17/openstack-nova-scheduler, 
distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:21:09 standalone.localdomain podman[210479]: 2025-10-13 14:21:08.949774815 +0000 UTC m=+0.156014694 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, container_name=logrotate_crond, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git)
Oct 13 14:21:09 standalone.localdomain podman[210429]: 2025-10-13 14:21:09.062616287 +0000 UTC m=+0.305348761 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-memcached, com.redhat.component=openstack-memcached-container, description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:43, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, release=1, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, 
vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, summary=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, container_name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, version=17.1.9)
Oct 13 14:21:09 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:21:09 standalone.localdomain podman[210435]: 2025-10-13 14:21:09.073185355 +0000 UTC m=+0.285645328 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., container_name=cinder_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, architecture=x86_64, release=1, com.redhat.component=openstack-cinder-api-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, io.openshift.tags=rhosp 
osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, build-date=2025-07-21T15:58:55, description=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:21:09 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:21:09 standalone.localdomain podman[210479]: 2025-10-13 14:21:09.091820265 +0000 UTC m=+0.298060134 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., release=1, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, vcs-type=git, container_name=logrotate_crond, distribution-scope=public, tcib_managed=true, 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Oct 13 14:21:09 standalone.localdomain podman[210435]: 2025-10-13 14:21:09.096846532 +0000 UTC m=+0.309306495 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, build-date=2025-07-21T15:58:55, managed_by=tripleo_ansible, vcs-type=git, container_name=cinder_api, io.buildah.version=1.33.12, architecture=x86_64, name=rhosp17/openstack-cinder-api, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., 
distribution-scope=public, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, com.redhat.component=openstack-cinder-api-container)
Oct 13 14:21:09 standalone.localdomain podman[210459]: 2025-10-13 14:21:09.103376455 +0000 UTC m=+0.313349911 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=heat_api, build-date=2025-07-21T15:56:26, vcs-type=git, name=rhosp17/openstack-heat-api, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=)
Oct 13 14:21:09 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:21:09 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:21:09 standalone.localdomain podman[210406]: 2025-10-13 14:21:09.130617532 +0000 UTC m=+0.384460672 container exec_died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, container_name=keystone, release=1, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_id=tripleo_step3, build-date=2025-07-21T13:27:18, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-keystone-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', 
'/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, tcib_managed=true, vcs-type=git)
Oct 13 14:21:09 standalone.localdomain podman[210459]: 2025-10-13 14:21:09.130823079 +0000 UTC m=+0.340796535 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 heat-api, container_name=heat_api, io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:21:09 standalone.localdomain podman[210467]: 2025-10-13 14:21:09.136353041 +0000 UTC m=+0.348835585 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, build-date=2025-07-21T15:44:03, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, release=1, container_name=neutron_api, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-neutron-server-container)
Oct 13 14:21:09 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Deactivated successfully.
Oct 13 14:21:09 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:21:09 standalone.localdomain podman[210408]: 2025-10-13 14:21:09.173973821 +0000 UTC m=+0.424018763 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T15:44:11, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, container_name=heat_engine, description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-engine, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, 
batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9)
Oct 13 14:21:09 standalone.localdomain podman[210446]: 2025-10-13 14:21:09.030058874 +0000 UTC m=+0.252802987 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, io.openshift.expose-services=, container_name=heat_api_cron, name=rhosp17/openstack-heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, build-date=2025-07-21T15:56:26, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack 
TripleO Team, com.redhat.component=openstack-heat-api-container, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:21:09 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:21:09 standalone.localdomain podman[210452]: Error: container 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 is not running
Oct 13 14:21:09 standalone.localdomain podman[210407]: 2025-10-13 14:21:08.898777949 +0000 UTC m=+0.151660960 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, com.redhat.component=openstack-manila-scheduler-container, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-manila-scheduler, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 manila-scheduler, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T15:56:28, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, io.openshift.expose-services=, container_name=manila_scheduler, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, maintainer=OpenStack TripleO Team)
Oct 13 14:21:09 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Main process exited, code=exited, status=125/n/a
Oct 13 14:21:09 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Failed with result 'exit-code'.
Oct 13 14:21:09 standalone.localdomain podman[210407]: 2025-10-13 14:21:09.23723524 +0000 UTC m=+0.490118261 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, name=rhosp17/openstack-manila-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-manila-scheduler-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, io.openshift.expose-services=, 
container_name=manila_scheduler, release=1, managed_by=tripleo_ansible, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 manila-scheduler, build-date=2025-07-21T15:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler)
Oct 13 14:21:09 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:21:09 standalone.localdomain podman[210446]: 2025-10-13 14:21:09.265982204 +0000 UTC m=+0.488726327 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cron, build-date=2025-07-21T15:56:26, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, version=17.1.9, name=rhosp17/openstack-heat-api, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, com.redhat.component=openstack-heat-api-container, maintainer=OpenStack TripleO Team)
Oct 13 14:21:09 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:21:09 standalone.localdomain podman[210472]: 2025-10-13 14:21:09.282646302 +0000 UTC m=+0.495926091 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-type=git, io.openshift.expose-services=, build-date=2025-07-21T15:44:17, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_conductor, name=rhosp17/openstack-nova-conductor, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, com.redhat.component=openstack-nova-conductor-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-conductor, release=1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, version=17.1.9)
Oct 13 14:21:09 standalone.localdomain podman[210408]: 2025-10-13 14:21:09.30441322 +0000 UTC m=+0.554458202 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, version=17.1.9, release=1, container_name=heat_engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, name=rhosp17/openstack-heat-engine, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:11, 
batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Oct 13 14:21:09 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:21:09 standalone.localdomain podman[210472]: 2025-10-13 14:21:09.358731479 +0000 UTC m=+0.572011268 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-conductor, com.redhat.component=openstack-nova-conductor-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-type=git, build-date=2025-07-21T15:44:17, maintainer=OpenStack TripleO Team, 
tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_conductor)
Oct 13 14:21:09 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:21:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1399: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:10 standalone.localdomain systemd[1]: libpod-808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.scope: Deactivated successfully.
Oct 13 14:21:10 standalone.localdomain systemd[1]: libpod-808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.scope: Consumed 15.856s CPU time.
Oct 13 14:21:10 standalone.localdomain podman[210353]: 2025-10-13 14:21:10.53702082 +0000 UTC m=+3.096292176 container died 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, description=Red Hat OpenStack Platform 17.1 horizon, container_name=horizon, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, release=1, com.redhat.component=openstack-horizon-container, name=rhosp17/openstack-horizon, summary=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:15, vendor=Red Hat, Inc., config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, managed_by=tripleo_ansible)
Oct 13 14:21:10 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.timer: Deactivated successfully.
Oct 13 14:21:10 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.
Oct 13 14:21:10 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Failed to open /run/systemd/transient/808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: No such file or directory
Oct 13 14:21:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6-userdata-shm.mount: Deactivated successfully.
Oct 13 14:21:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bca11670fe9d9ec31418d826d4a147639a712e2ddff8e509e9c02ae8cdba4233-merged.mount: Deactivated successfully.
Oct 13 14:21:10 standalone.localdomain podman[210353]: 2025-10-13 14:21:10.620127876 +0000 UTC m=+3.179399232 container cleanup 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, config_id=tripleo_step3, container_name=horizon, build-date=2025-07-21T13:58:15, name=rhosp17/openstack-horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 horizon, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 horizon, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, release=1, com.redhat.component=openstack-horizon-container, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:21:10 standalone.localdomain podman[210353]: horizon
Oct 13 14:21:10 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.timer: Failed to open /run/systemd/transient/808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.timer: No such file or directory
Oct 13 14:21:10 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Failed to open /run/systemd/transient/808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: No such file or directory
Oct 13 14:21:10 standalone.localdomain podman[210722]: 2025-10-13 14:21:10.628248349 +0000 UTC m=+0.076607385 container cleanup 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, summary=Red Hat OpenStack Platform 17.1 horizon, version=17.1.9, description=Red Hat OpenStack Platform 17.1 horizon, build-date=2025-07-21T13:58:15, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-horizon, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, com.redhat.component=openstack-horizon-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, container_name=horizon, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, release=1)
Oct 13 14:21:10 standalone.localdomain systemd[1]: libpod-conmon-808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.scope: Deactivated successfully.
Oct 13 14:21:10 standalone.localdomain runuser[210748]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:21:10 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.timer: Failed to open /run/systemd/transient/808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.timer: No such file or directory
Oct 13 14:21:10 standalone.localdomain systemd[1]: 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: Failed to open /run/systemd/transient/808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6.service: No such file or directory
Oct 13 14:21:10 standalone.localdomain podman[210734]: 2025-10-13 14:21:10.710420105 +0000 UTC m=+0.056582891 container cleanup 808e608697bddd0a3bd53410b01d1448384171bf381a40b5ad1d3080aa8a98c6 (image=registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1, name=horizon, com.redhat.component=openstack-horizon-container, name=rhosp17/openstack-horizon, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ad1f6dbe9b4f6499ef64ed90ffb3ceeed90078a5, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:15, config_data={'environment': {'ENABLE_DESIGNATE': 'yes', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_OCTAVIA': 'yes', 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'eff67e8d67d5f186cef6e48df141386b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-horizon:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/horizon:/var/lib/kolla/config_files/src:ro', '/var/log/containers/horizon:/var/log/horizon:z', '/var/log/containers/httpd/horizon:/var/log/httpd:z', '/var/tmp/horizon:/var/tmp:z', '/var/www:/var/www:ro']}, tcib_managed=true, config_id=tripleo_step3, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 horizon, container_name=horizon, io.k8s.description=Red Hat OpenStack Platform 17.1 horizon, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 horizon, summary=Red Hat OpenStack Platform 17.1 horizon, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-horizon/images/17.1.9-1)
Oct 13 14:21:10 standalone.localdomain podman[210734]: horizon
Oct 13 14:21:10 standalone.localdomain systemd[1]: tripleo_horizon.service: Deactivated successfully.
Oct 13 14:21:10 standalone.localdomain systemd[1]: Stopped horizon container.
Oct 13 14:21:10 standalone.localdomain sudo[210326]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:21:10 standalone.localdomain sshd[210323]: Received disconnect from 192.168.122.11 port 34860:11: disconnected by user
Oct 13 14:21:10 standalone.localdomain sshd[210323]: Disconnected from user root 192.168.122.11 port 34860
Oct 13 14:21:10 standalone.localdomain sshd[210320]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:10 standalone.localdomain systemd[1]: session-115.scope: Deactivated successfully.
Oct 13 14:21:10 standalone.localdomain systemd-logind[45629]: Session 115 logged out. Waiting for processes to exit.
Oct 13 14:21:10 standalone.localdomain systemd-logind[45629]: Removed session 115.
Oct 13 14:21:10 standalone.localdomain sshd[210802]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:10 standalone.localdomain sshd[210802]: Accepted publickey for root from 192.168.122.11 port 59652 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:10 standalone.localdomain systemd-logind[45629]: New session 116 of user root.
Oct 13 14:21:10 standalone.localdomain systemd[1]: Started Session 116 of User root.
Oct 13 14:21:10 standalone.localdomain sshd[210802]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:10 standalone.localdomain ceph-mon[29756]: pgmap v1399: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:10 standalone.localdomain sudo[210816]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_keystone.service
Oct 13 14:21:10 standalone.localdomain sudo[210816]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:10 standalone.localdomain sudo[210816]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:10 standalone.localdomain sshd[210805]: Received disconnect from 192.168.122.11 port 59652:11: disconnected by user
Oct 13 14:21:10 standalone.localdomain sshd[210805]: Disconnected from user root 192.168.122.11 port 59652
Oct 13 14:21:10 standalone.localdomain sshd[210802]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:10 standalone.localdomain systemd[1]: session-116.scope: Deactivated successfully.
Oct 13 14:21:11 standalone.localdomain systemd-logind[45629]: Session 116 logged out. Waiting for processes to exit.
Oct 13 14:21:11 standalone.localdomain systemd-logind[45629]: Removed session 116.
Oct 13 14:21:11 standalone.localdomain sshd[210903]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:11 standalone.localdomain sshd[210903]: Accepted publickey for root from 192.168.122.11 port 59658 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:11 standalone.localdomain systemd-logind[45629]: New session 117 of user root.
Oct 13 14:21:11 standalone.localdomain systemd[1]: Started Session 117 of User root.
Oct 13 14:21:11 standalone.localdomain sshd[210903]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:11 standalone.localdomain sudo[210907]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_keystone.service
Oct 13 14:21:11 standalone.localdomain sudo[210907]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:11 standalone.localdomain systemd[1]: Stopping keystone container...
Oct 13 14:21:11 standalone.localdomain runuser[210748]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:21:11 standalone.localdomain runuser[210951]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:21:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1400: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:12 standalone.localdomain runuser[210951]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:21:12 standalone.localdomain runuser[211054]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:21:12 standalone.localdomain runuser[211054]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:21:12 standalone.localdomain ceph-mon[29756]: pgmap v1400: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1401: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:14 standalone.localdomain haproxy[70940]: 172.17.0.100:56273 [13/Oct/2025:14:05:26.162] mysql mysql/standalone.internalapi.localdomain 1/0/948243 26247738 -- 54/54/53/53/0 0/0
Oct 13 14:21:14 standalone.localdomain systemd[1]: libpod-0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.scope: Deactivated successfully.
Oct 13 14:21:14 standalone.localdomain systemd[1]: libpod-0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.scope: Consumed 3min 42.265s CPU time.
Oct 13 14:21:14 standalone.localdomain podman[210926]: 2025-10-13 14:21:14.415609246 +0000 UTC m=+3.167177462 container died 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:18, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, name=rhosp17/openstack-keystone, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-keystone-container, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step3)
Oct 13 14:21:14 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.timer: Deactivated successfully.
Oct 13 14:21:14 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.
Oct 13 14:21:14 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Failed to open /run/systemd/transient/0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: No such file or directory
Oct 13 14:21:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77-userdata-shm.mount: Deactivated successfully.
Oct 13 14:21:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0152fbf47c47b98476307a2f85450ead7caad437035e3d8d6c9790cae3a7e742-merged.mount: Deactivated successfully.
Oct 13 14:21:14 standalone.localdomain podman[210926]: 2025-10-13 14:21:14.468322606 +0000 UTC m=+3.219890802 container cleanup 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:18, version=17.1.9, tcib_managed=true, container_name=keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, vcs-type=git, config_id=tripleo_step3, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=)
Oct 13 14:21:14 standalone.localdomain podman[210926]: keystone
Oct 13 14:21:14 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.timer: Failed to open /run/systemd/transient/0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.timer: No such file or directory
Oct 13 14:21:14 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Failed to open /run/systemd/transient/0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: No such file or directory
Oct 13 14:21:14 standalone.localdomain podman[211265]: 2025-10-13 14:21:14.489018251 +0000 UTC m=+0.071351772 container cleanup 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_id=tripleo_step3, release=1, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, container_name=keystone, io.buildah.version=1.33.12)
Oct 13 14:21:14 standalone.localdomain systemd[1]: libpod-conmon-0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.scope: Deactivated successfully.
Oct 13 14:21:14 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.timer: Failed to open /run/systemd/transient/0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.timer: No such file or directory
Oct 13 14:21:14 standalone.localdomain systemd[1]: 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: Failed to open /run/systemd/transient/0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77.service: No such file or directory
Oct 13 14:21:14 standalone.localdomain podman[211281]: 2025-10-13 14:21:14.559240855 +0000 UTC m=+0.046847498 container cleanup 0f2d5d36203e0a68127ffa856cab6aa1750cdf6483a5330bf81a323eebc31b77 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone, build-date=2025-07-21T13:27:18, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, container_name=keystone, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/etc/openldap:/etc/openldap:ro', '/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, com.redhat.component=openstack-keystone-container, managed_by=tripleo_ansible)
Oct 13 14:21:14 standalone.localdomain podman[211281]: keystone
Oct 13 14:21:14 standalone.localdomain systemd[1]: tripleo_keystone.service: Deactivated successfully.
Oct 13 14:21:14 standalone.localdomain systemd[1]: Stopped keystone container.
Oct 13 14:21:14 standalone.localdomain sudo[210907]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:14 standalone.localdomain sshd[210906]: Received disconnect from 192.168.122.11 port 59658:11: disconnected by user
Oct 13 14:21:14 standalone.localdomain sshd[210906]: Disconnected from user root 192.168.122.11 port 59658
Oct 13 14:21:14 standalone.localdomain sshd[210903]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:14 standalone.localdomain sshd[211294]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:14 standalone.localdomain systemd[1]: session-117.scope: Deactivated successfully.
Oct 13 14:21:14 standalone.localdomain systemd-logind[45629]: Session 117 logged out. Waiting for processes to exit.
Oct 13 14:21:14 standalone.localdomain systemd-logind[45629]: Removed session 117.
Oct 13 14:21:14 standalone.localdomain sshd[211294]: Accepted publickey for root from 192.168.122.11 port 59664 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:14 standalone.localdomain systemd-logind[45629]: New session 118 of user root.
Oct 13 14:21:14 standalone.localdomain systemd[1]: Started Session 118 of User root.
Oct 13 14:21:14 standalone.localdomain sshd[211294]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:14 standalone.localdomain sudo[211306]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_barbican_api.service
Oct 13 14:21:14 standalone.localdomain sudo[211306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:14 standalone.localdomain sudo[211306]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:14 standalone.localdomain sshd[211304]: Received disconnect from 192.168.122.11 port 59664:11: disconnected by user
Oct 13 14:21:14 standalone.localdomain sshd[211304]: Disconnected from user root 192.168.122.11 port 59664
Oct 13 14:21:14 standalone.localdomain sshd[211294]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:14 standalone.localdomain systemd[1]: session-118.scope: Deactivated successfully.
Oct 13 14:21:14 standalone.localdomain systemd-logind[45629]: Session 118 logged out. Waiting for processes to exit.
Oct 13 14:21:14 standalone.localdomain systemd-logind[45629]: Removed session 118.
Oct 13 14:21:14 standalone.localdomain sshd[211321]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:14 standalone.localdomain ceph-mon[29756]: pgmap v1401: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:14 standalone.localdomain sshd[211321]: Accepted publickey for root from 192.168.122.11 port 59676 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:14 standalone.localdomain systemd-logind[45629]: New session 119 of user root.
Oct 13 14:21:14 standalone.localdomain systemd[1]: Started Session 119 of User root.
Oct 13 14:21:15 standalone.localdomain sshd[211321]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:15 standalone.localdomain sudo[211325]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_barbican_api.service
Oct 13 14:21:15 standalone.localdomain sudo[211325]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:15 standalone.localdomain systemd[1]: Stopping barbican_api container...
Oct 13 14:21:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:21:15 standalone.localdomain podman[211360]: 2025-10-13 14:21:15.285393918 +0000 UTC m=+0.055567260 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, name=rhosp17/openstack-swift-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=swift_container_server)
Oct 13 14:21:15 standalone.localdomain podman[211360]: 2025-10-13 14:21:15.442774485 +0000 UTC m=+0.212947837 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, name=rhosp17/openstack-swift-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=swift_container_server)
Oct 13 14:21:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:21:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:21:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:21:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:21:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:21:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:21:15 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:21:15 standalone.localdomain systemd[1]: tmp-crun.pKxJ8n.mount: Deactivated successfully.
Oct 13 14:21:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:21:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:21:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:21:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:21:15 standalone.localdomain podman[211398]: 2025-10-13 14:21:15.618721849 +0000 UTC m=+0.125478675 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, build-date=2025-07-21T16:11:22, summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, version=17.1.9, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account)
Oct 13 14:21:15 standalone.localdomain podman[211470]: 2025-10-13 14:21:15.653590254 +0000 UTC m=+0.067556633 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, version=17.1.9, container_name=ovn_controller, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public)
Oct 13 14:21:15 standalone.localdomain podman[211506]: 2025-10-13 14:21:15.679210742 +0000 UTC m=+0.066845551 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, build-date=2025-07-21T15:24:10, name=rhosp17/openstack-nova-novncproxy, release=1, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-nova-novncproxy-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vendor=Red Hat, Inc., vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, architecture=x86_64, container_name=nova_vnc_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:21:15 standalone.localdomain podman[211389]: 2025-10-13 14:21:15.585290659 +0000 UTC m=+0.115437392 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 placement-api, config_id=tripleo_step4, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, name=rhosp17/openstack-placement-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-placement-api-container, 
container_name=placement_api, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api)
Oct 13 14:21:15 standalone.localdomain podman[211388]: 2025-10-13 14:21:15.661392217 +0000 UTC m=+0.177984429 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, summary=Red Hat OpenStack Platform 17.1 swift-object, 
com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28)
Oct 13 14:21:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:21:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:21:15 standalone.localdomain podman[211392]: 2025-10-13 14:21:15.730143836 +0000 UTC m=+0.249312098 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Oct 13 14:21:15 standalone.localdomain podman[211470]: 2025-10-13 14:21:15.748876028 +0000 UTC m=+0.162842407 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=)
Oct 13 14:21:15 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:21:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:21:15 standalone.localdomain podman[211391]: 2025-10-13 14:21:15.588567021 +0000 UTC m=+0.115281078 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, vendor=Red Hat, Inc., version=17.1.9, container_name=nova_metadata, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, name=rhosp17/openstack-nova-api, release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, io.openshift.expose-services=, build-date=2025-07-21T16:05:11, description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-api-container, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:21:15 standalone.localdomain podman[211391]: 2025-10-13 14:21:15.82893205 +0000 UTC m=+0.355646127 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-nova-api, com.redhat.component=openstack-nova-api-container, version=17.1.9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_metadata, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:05:11, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f)
Oct 13 14:21:15 standalone.localdomain podman[211398]: 2025-10-13 14:21:15.838832857 +0000 UTC m=+0.345589673 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, container_name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public)
Oct 13 14:21:15 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:21:15 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:21:15 standalone.localdomain podman[211388]: 2025-10-13 14:21:15.856862289 +0000 UTC m=+0.373454481 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-type=git, io.buildah.version=1.33.12, release=1, architecture=x86_64, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server)
Oct 13 14:21:15 standalone.localdomain podman[211390]: 2025-10-13 14:21:15.761211403 +0000 UTC m=+0.290008335 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, version=17.1.9, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container, architecture=x86_64, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server)
Oct 13 14:21:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1402: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:15 standalone.localdomain podman[211389]: 2025-10-13 14:21:15.869788241 +0000 UTC m=+0.399934964 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-placement-api, vcs-type=git, com.redhat.component=openstack-placement-api-container, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 placement-api, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:12, container_name=placement_api, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:21:15 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:21:15 standalone.localdomain podman[211392]: 2025-10-13 14:21:15.880258506 +0000 UTC m=+0.399426758 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Oct 13 14:21:15 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:21:15 standalone.localdomain podman[211499]: 2025-10-13 14:21:15.839574311 +0000 UTC m=+0.233963901 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, name=rhosp17/openstack-glance-api, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, release=1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, 
architecture=x86_64, distribution-scope=public, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:21:15 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:21:15 standalone.localdomain podman[211609]: 2025-10-13 14:21:15.928537219 +0000 UTC m=+0.130375158 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, container_name=glance_api_internal, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-glance-api-container, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, tcib_managed=true, architecture=x86_64, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, name=rhosp17/openstack-glance-api)
Oct 13 14:21:15 standalone.localdomain podman[211469]: 2025-10-13 14:21:15.882228488 +0000 UTC m=+0.294175224 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, version=17.1.9, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, release=1, tcib_managed=true, name=rhosp17/openstack-glance-api, container_name=glance_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, build-date=2025-07-21T13:58:20, 
description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 14:21:15 standalone.localdomain podman[211390]: 2025-10-13 14:21:15.987061349 +0000 UTC m=+0.515858291 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_id=tripleo_step4, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, 
batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, container_name=swift_proxy, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-swift-proxy-server, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container)
Oct 13 14:21:16 standalone.localdomain podman[211578]: 2025-10-13 14:21:16.005100361 +0000 UTC m=+0.270107896 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:21:16 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:21:16 standalone.localdomain podman[211469]: 2025-10-13 14:21:16.015887807 +0000 UTC m=+0.427834543 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, distribution-scope=public, io.buildah.version=1.33.12, release=1, architecture=x86_64, container_name=glance_api_cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat 
OpenStack Platform 17.1 glance-api, version=17.1.9, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team)
Oct 13 14:21:16 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:21:16 standalone.localdomain podman[211506]: 2025-10-13 14:21:16.029008124 +0000 UTC m=+0.416642933 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, version=17.1.9, com.redhat.component=openstack-nova-novncproxy-container, vcs-type=git, container_name=nova_vnc_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:24:10, name=rhosp17/openstack-nova-novncproxy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, batch=17.1_20250721.1, release=1)
Oct 13 14:21:16 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:21:16 standalone.localdomain podman[211499]: 2025-10-13 14:21:16.059794832 +0000 UTC m=+0.454184452 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, container_name=glance_api, name=rhosp17/openstack-glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, 
version=17.1.9, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:21:16 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:21:16 standalone.localdomain podman[211609]: 2025-10-13 14:21:16.139180653 +0000 UTC m=+0.341018622 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, build-date=2025-07-21T13:58:20, 
io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-glance-api-container, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_internal, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, managed_by=tripleo_ansible, release=1)
Oct 13 14:21:16 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:21:16 standalone.localdomain podman[211578]: 2025-10-13 14:21:16.363336807 +0000 UTC m=+0.628344362 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, container_name=nova_migration_target, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute)
Oct 13 14:21:16 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:21:16 standalone.localdomain ceph-mon[29756]: pgmap v1402: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:21:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/41133300' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:21:17 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/41133300' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:21:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1403: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:18 standalone.localdomain haproxy[70940]: 172.17.0.2:35700 [13/Oct/2025:14:21:18.009] placement placement/standalone.internalapi.localdomain 0/0/0/14/14 200 497 - - ---- 54/1/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/allocations HTTP/1.1"
Oct 13 14:21:18 standalone.localdomain haproxy[70940]: 172.17.0.100:39499 [13/Oct/2025:14:01:44.171] mysql mysql/standalone.internalapi.localdomain 1/0/1174116 3362 -- 54/53/52/52/0 0/0
Oct 13 14:21:18 standalone.localdomain haproxy[70940]: 172.17.0.100:52033 [13/Oct/2025:14:01:28.123] mysql mysql/standalone.internalapi.localdomain 1/0/1190163 3362 -- 53/52/51/51/0 0/0
Oct 13 14:21:18 standalone.localdomain haproxy[70940]: 172.17.0.100:44697 [13/Oct/2025:14:01:31.302] mysql mysql/standalone.internalapi.localdomain 1/0/1186991 3362 -- 52/51/50/50/0 0/0
Oct 13 14:21:18 standalone.localdomain haproxy[70940]: 172.17.0.100:52483 [13/Oct/2025:14:01:24.629] mysql mysql/standalone.internalapi.localdomain 1/0/1193664 3362 -- 51/50/49/49/0 0/0
Oct 13 14:21:18 standalone.localdomain haproxy[70940]: 172.17.0.100:49241 [13/Oct/2025:14:01:34.523] mysql mysql/standalone.internalapi.localdomain 1/0/1183776 3362 -- 50/49/48/48/0 0/0
Oct 13 14:21:18 standalone.localdomain haproxy[70940]: 172.17.0.100:37869 [13/Oct/2025:14:01:40.969] mysql mysql/standalone.internalapi.localdomain 1/0/1177331 10457 -- 49/48/47/47/0 0/0
Oct 13 14:21:18 standalone.localdomain haproxy[70940]: 172.17.0.100:50361 [13/Oct/2025:14:01:37.693] mysql mysql/standalone.internalapi.localdomain 1/0/1180608 3362 -- 48/47/46/46/0 0/0
Oct 13 14:21:18 standalone.localdomain haproxy[70940]: 172.17.0.100:33109 [13/Oct/2025:14:01:24.890] mysql mysql/standalone.internalapi.localdomain 1/0/1193414 3362 -- 47/46/45/45/0 0/0
Oct 13 14:21:18 standalone.localdomain systemd[1]: libpod-491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.scope: Deactivated successfully.
Oct 13 14:21:18 standalone.localdomain systemd[1]: libpod-491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.scope: Consumed 14.389s CPU time.
Oct 13 14:21:18 standalone.localdomain podman[211340]: 2025-10-13 14:21:18.305947658 +0000 UTC m=+3.180147036 container stop 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, architecture=x86_64, com.redhat.component=openstack-barbican-api-container, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, summary=Red Hat OpenStack Platform 17.1 barbican-api, config_id=tripleo_step3, distribution-scope=public, name=rhosp17/openstack-barbican-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 barbican-api, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, vcs-type=git, tcib_managed=true, build-date=2025-07-21T15:22:44, container_name=barbican_api, release=1)
Oct 13 14:21:18 standalone.localdomain podman[211340]: 2025-10-13 14:21:18.335740435 +0000 UTC m=+3.209939763 container died 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, name=rhosp17/openstack-barbican-api, vendor=Red Hat, Inc., build-date=2025-07-21T15:22:44, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 barbican-api, summary=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, tcib_managed=true, config_id=tripleo_step3, release=1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-barbican-api-container)
Oct 13 14:21:18 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.timer: Deactivated successfully.
Oct 13 14:21:18 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.
Oct 13 14:21:18 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Failed to open /run/systemd/transient/491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: No such file or directory
Oct 13 14:21:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d-userdata-shm.mount: Deactivated successfully.
Oct 13 14:21:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1c3ceeb611a5bf472a6616e0e2799f4dfa33cb00f5de706c2cb6f3f3948731f3-merged.mount: Deactivated successfully.
Oct 13 14:21:18 standalone.localdomain podman[211340]: 2025-10-13 14:21:18.382338684 +0000 UTC m=+3.256538012 container cleanup 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, container_name=barbican_api, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step3, name=rhosp17/openstack-barbican-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, release=1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 barbican-api, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:22:44, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-barbican-api-container, io.buildah.version=1.33.12, distribution-scope=public)
Oct 13 14:21:18 standalone.localdomain podman[211340]: barbican_api
Oct 13 14:21:18 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.timer: Failed to open /run/systemd/transient/491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.timer: No such file or directory
Oct 13 14:21:18 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Failed to open /run/systemd/transient/491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: No such file or directory
Oct 13 14:21:18 standalone.localdomain podman[211746]: 2025-10-13 14:21:18.391442058 +0000 UTC m=+0.072252429 container cleanup 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, build-date=2025-07-21T15:22:44, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api, release=1, name=rhosp17/openstack-barbican-api, batch=17.1_20250721.1, container_name=barbican_api, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-barbican-api-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 barbican-api, description=Red Hat OpenStack Platform 17.1 barbican-api, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:21:18 standalone.localdomain systemd[1]: libpod-conmon-491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.scope: Deactivated successfully.
Oct 13 14:21:18 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.timer: Failed to open /run/systemd/transient/491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.timer: No such file or directory
Oct 13 14:21:18 standalone.localdomain systemd[1]: 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: Failed to open /run/systemd/transient/491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d.service: No such file or directory
Oct 13 14:21:18 standalone.localdomain podman[211762]: 2025-10-13 14:21:18.481704236 +0000 UTC m=+0.057835200 container cleanup 491ca0980aede11d3169d1e26d7ffa25010d2ddc010b36417f83ee93a14dd17d (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1, name=barbican_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-api/images/17.1.9-1, container_name=barbican_api, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=ae70795be2aca1561a50310ee27fa2892365b5ed, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-api, name=rhosp17/openstack-barbican-api, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-api, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 5, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 barbican-api, managed_by=tripleo_ansible, com.redhat.component=openstack-barbican-api-container, build-date=2025-07-21T15:22:44, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-api)
Oct 13 14:21:18 standalone.localdomain podman[211762]: barbican_api
Oct 13 14:21:18 standalone.localdomain systemd[1]: tripleo_barbican_api.service: Deactivated successfully.
Oct 13 14:21:18 standalone.localdomain systemd[1]: Stopped barbican_api container.
Oct 13 14:21:18 standalone.localdomain sudo[211325]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:18 standalone.localdomain sshd[211324]: Received disconnect from 192.168.122.11 port 59676:11: disconnected by user
Oct 13 14:21:18 standalone.localdomain sshd[211324]: Disconnected from user root 192.168.122.11 port 59676
Oct 13 14:21:18 standalone.localdomain sshd[211321]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:18 standalone.localdomain systemd[1]: session-119.scope: Deactivated successfully.
Oct 13 14:21:18 standalone.localdomain systemd-logind[45629]: Session 119 logged out. Waiting for processes to exit.
Oct 13 14:21:18 standalone.localdomain systemd-logind[45629]: Removed session 119.
Oct 13 14:21:18 standalone.localdomain sshd[211774]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:21:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2829259868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:21:18 standalone.localdomain haproxy[70940]: Server horizon/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 14:21:18 standalone.localdomain haproxy[70940]: proxy horizon has no server available!
Oct 13 14:21:18 standalone.localdomain sshd[211774]: Accepted publickey for root from 192.168.122.11 port 59686 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:18 standalone.localdomain systemd-logind[45629]: New session 120 of user root.
Oct 13 14:21:18 standalone.localdomain systemd[1]: Started Session 120 of User root.
Oct 13 14:21:18 standalone.localdomain sshd[211774]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:18 standalone.localdomain ceph-mon[29756]: pgmap v1403: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2829259868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:21:18 standalone.localdomain sudo[211780]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_barbican_worker.service
Oct 13 14:21:18 standalone.localdomain sudo[211780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:18 standalone.localdomain sudo[211780]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:18 standalone.localdomain sshd[211779]: Received disconnect from 192.168.122.11 port 59686:11: disconnected by user
Oct 13 14:21:18 standalone.localdomain sshd[211779]: Disconnected from user root 192.168.122.11 port 59686
Oct 13 14:21:18 standalone.localdomain sshd[211774]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:18 standalone.localdomain systemd-logind[45629]: Session 120 logged out. Waiting for processes to exit.
Oct 13 14:21:18 standalone.localdomain systemd[1]: session-120.scope: Deactivated successfully.
Oct 13 14:21:18 standalone.localdomain systemd-logind[45629]: Removed session 120.
Oct 13 14:21:18 standalone.localdomain sshd[211795]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:18 standalone.localdomain sshd[211795]: Accepted publickey for root from 192.168.122.11 port 59702 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:18 standalone.localdomain systemd-logind[45629]: New session 121 of user root.
Oct 13 14:21:18 standalone.localdomain systemd[1]: Started Session 121 of User root.
Oct 13 14:21:18 standalone.localdomain sshd[211795]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:19 standalone.localdomain sudo[211807]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_barbican_worker.service
Oct 13 14:21:19 standalone.localdomain sudo[211807]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:19 standalone.localdomain systemd[1]: Stopping barbican_worker container...
Oct 13 14:21:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1404: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:21:20 standalone.localdomain ceph-mon[29756]: pgmap v1404: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1405: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:21:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:21:22 standalone.localdomain systemd[1]: tmp-crun.gyrgak.mount: Deactivated successfully.
Oct 13 14:21:22 standalone.localdomain podman[211982]: 2025-10-13 14:21:22.304213017 +0000 UTC m=+0.064302312 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-nova-api-container, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, maintainer=OpenStack TripleO Team, version=17.1.9)
Oct 13 14:21:22 standalone.localdomain podman[211981]: 2025-10-13 14:21:22.358757065 +0000 UTC m=+0.121127280 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, container_name=keystone_cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-keystone, com.redhat.component=openstack-keystone-container, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:18, batch=17.1_20250721.1)
Oct 13 14:21:22 standalone.localdomain podman[211981]: 2025-10-13 14:21:22.371802071 +0000 UTC m=+0.134172286 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, batch=17.1_20250721.1, io.openshift.expose-services=, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 keystone, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, config_id=tripleo_step3, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:18, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, com.redhat.component=openstack-keystone-container, container_name=keystone_cron, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:21:22 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:21:22 standalone.localdomain haproxy[70940]: Server keystone_public/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 14:21:22 standalone.localdomain haproxy[70940]: proxy keystone_public has no server available!
Oct 13 14:21:22 standalone.localdomain haproxy[70940]: Server keystone_admin/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 14:21:22 standalone.localdomain haproxy[70940]: proxy keystone_admin has no server available!
Oct 13 14:21:22 standalone.localdomain podman[211982]: 2025-10-13 14:21:22.412845967 +0000 UTC m=+0.172935282 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T16:05:11, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, com.redhat.component=openstack-nova-api-container, container_name=nova_api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:21:22 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:21:22 standalone.localdomain ceph-mon[29756]: pgmap v1405: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:21:23
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr', 'vms', 'manila_metadata', 'volumes', 'backups', 'manila_data', 'images']
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:21:23 standalone.localdomain runuser[212123]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:21:23 standalone.localdomain haproxy[70940]: 172.17.0.2:41262 [13/Oct/2025:14:21:20.627] keystone_public keystone_public/<NOSRV> 0/0/-1/-1/3001 503 217 - - SC-- 48/1/0/0/+3 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:21:23 standalone.localdomain haproxy[70940]: 172.17.0.2:49380 [13/Oct/2025:14:21:20.621] neutron neutron/standalone.internalapi.localdomain 0/0/0/3016/3016 503 409 - - ---- 48/1/0/0/0 0/0 "GET /v2.0/ports?device_id=8f68d5aa-abc4-451d-89d2-f5342b71831c&fields=binding%3Ahost_id&fields=binding%3Avif_type HTTP/1.1"
Oct 13 14:21:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1406: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:23 standalone.localdomain runuser[212123]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:21:24 standalone.localdomain ceph-mgr[29999]: client.0 ms_handle_reset on v2:172.18.0.100:6800/1677275897
Oct 13 14:21:24 standalone.localdomain runuser[212247]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:21:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:21:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:21:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:21:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:21:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:21:24 standalone.localdomain podman[212292]: 2025-10-13 14:21:24.554899824 +0000 UTC m=+0.072270159 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, container_name=neutron_dhcp, name=rhosp17/openstack-neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, version=17.1.9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, release=1, distribution-scope=public, com.redhat.component=openstack-neutron-dhcp-agent-container, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:21:24 standalone.localdomain podman[212293]: 2025-10-13 14:21:24.604130835 +0000 UTC m=+0.122021206 container health_status 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, health_status=healthy, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:18:19, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, com.redhat.component=openstack-barbican-keystone-listener-container, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, architecture=x86_64, release=1, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_id=tripleo_step3, container_name=barbican_keystone_listener, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:21:24 standalone.localdomain podman[212293]: 2025-10-13 14:21:24.624289783 +0000 UTC m=+0.142180134 container exec_died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, version=17.1.9, config_id=tripleo_step3, tcib_managed=true, name=rhosp17/openstack-barbican-keystone-listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, release=1, container_name=barbican_keystone_listener, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T16:18:19, com.redhat.component=openstack-barbican-keystone-listener-container, architecture=x86_64)
Oct 13 14:21:24 standalone.localdomain podman[212292]: 2025-10-13 14:21:24.629832076 +0000 UTC m=+0.147202401 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, release=1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:54, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, name=rhosp17/openstack-neutron-dhcp-agent, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_dhcp, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9)
Oct 13 14:21:24 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Deactivated successfully.
Oct 13 14:21:24 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:21:24 standalone.localdomain podman[212296]: Error: container 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 is not running
Oct 13 14:21:24 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Main process exited, code=exited, status=125/n/a
Oct 13 14:21:24 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Failed with result 'exit-code'.
Oct 13 14:21:24 standalone.localdomain podman[212294]: 2025-10-13 14:21:24.756460786 +0000 UTC m=+0.267209006 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_sriov_agent, name=rhosp17/openstack-neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, release=1, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T16:03:34, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-sriov-agent-container, io.buildah.version=1.33.12)
Oct 13 14:21:24 standalone.localdomain podman[212294]: 2025-10-13 14:21:24.804595293 +0000 UTC m=+0.315343533 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.openshift.expose-services=, name=rhosp17/openstack-neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, release=1, config_id=tripleo_step4, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T16:03:34, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Oct 13 14:21:24 standalone.localdomain podman[212305]: 2025-10-13 14:21:24.813743957 +0000 UTC m=+0.322158034 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, release=1, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T16:05:11, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-api-container, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, name=rhosp17/openstack-nova-api, container_name=nova_api_cron, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:21:24 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:21:24 standalone.localdomain podman[212305]: 2025-10-13 14:21:24.824837003 +0000 UTC m=+0.333251080 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api_cron, description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, 
config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-nova-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T16:05:11, summary=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, batch=17.1_20250721.1)
Oct 13 14:21:24 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:21:24 standalone.localdomain runuser[212247]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:21:24 standalone.localdomain runuser[212401]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:21:24 standalone.localdomain ceph-mon[29756]: pgmap v1406: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:25 standalone.localdomain systemd[1]: tmp-crun.7TLnQv.mount: Deactivated successfully.
Oct 13 14:21:25 standalone.localdomain runuser[212401]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:21:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:21:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1407: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:26 standalone.localdomain haproxy[70940]: Server barbican/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 14:21:26 standalone.localdomain haproxy[70940]: proxy barbican has no server available!
Oct 13 14:21:26 standalone.localdomain ceph-mon[29756]: pgmap v1407: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:21:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:21:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:21:27 standalone.localdomain podman[212483]: 2025-10-13 14:21:27.365064628 +0000 UTC m=+0.125413443 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20250721.1, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 13 14:21:27 standalone.localdomain podman[212484]: 2025-10-13 14:21:27.335827139 +0000 UTC m=+0.096431852 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-northd-container, vcs-type=git, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 ovn-northd, batch=17.1_20250721.1, build-date=2025-07-21T13:30:04, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, config_id=ovn_cluster_northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-northd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_cluster_northd, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64)
Oct 13 14:21:27 standalone.localdomain systemd[1]: tmp-crun.uJe8j7.mount: Deactivated successfully.
Oct 13 14:21:27 standalone.localdomain podman[212510]: 2025-10-13 14:21:27.402454191 +0000 UTC m=+0.062649130 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T12:58:45, container_name=clustercheck, version=17.1.9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.component=openstack-mariadb-container, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=)
Oct 13 14:21:27 standalone.localdomain podman[212483]: 2025-10-13 14:21:27.413893747 +0000 UTC m=+0.174242552 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step5, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, batch=17.1_20250721.1, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vcs-type=git, container_name=nova_compute)
Oct 13 14:21:27 standalone.localdomain podman[212484]: 2025-10-13 14:21:27.42073679 +0000 UTC m=+0.181341493 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, container_name=ovn_cluster_northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:30:04, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=ovn_cluster_northd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-northd-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, name=rhosp17/openstack-ovn-northd, io.buildah.version=1.33.12)
Oct 13 14:21:27 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:21:27 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:21:27 standalone.localdomain podman[212510]: 2025-10-13 14:21:27.453105907 +0000 UTC m=+0.113300856 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, container_name=clustercheck, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:45, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, release=1, 
vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., config_id=tripleo_step2, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team)
Oct 13 14:21:27 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:21:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1408: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:21:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:21:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:21:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:21:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:21:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:21:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:21:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:21:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:21:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:21:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:21:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:21:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:21:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:21:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:21:28 standalone.localdomain ceph-mon[29756]: pgmap v1408: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1409: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:21:30 standalone.localdomain ceph-mon[29756]: pgmap v1409: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:31 standalone.localdomain podman[212628]: 2025-10-13 14:21:31.156521472 +0000 UTC m=+0.087682698 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, com.redhat.component=openstack-manila-share-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, name=rhosp17/openstack-manila-share, architecture=x86_64, tcib_managed=true, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, distribution-scope=public, release=1, build-date=2025-07-21T15:22:36, description=Red Hat OpenStack Platform 17.1 manila-share, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, summary=Red Hat OpenStack Platform 17.1 manila-share, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:21:31 standalone.localdomain podman[212628]: 2025-10-13 14:21:31.185815654 +0000 UTC m=+0.116976810 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:22:36, com.redhat.component=openstack-manila-share-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 manila-share, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 manila-share, distribution-scope=public, name=rhosp17/openstack-manila-share, release=1, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share)
Oct 13 14:21:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:21:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:21:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:21:31 standalone.localdomain podman[212715]: 2025-10-13 14:21:31.575394375 +0000 UTC m=+0.085512832 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:21:31 standalone.localdomain podman[212715]: 2025-10-13 14:21:31.609983161 +0000 UTC m=+0.120101608 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
distribution-scope=public, name=rhosp17/openstack-iscsid, vcs-type=git, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3)
Oct 13 14:21:31 standalone.localdomain podman[212717]: 2025-10-13 14:21:31.617926809 +0000 UTC m=+0.124946979 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, container_name=cinder_scheduler, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, managed_by=tripleo_ansible, build-date=2025-07-21T16:10:12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, distribution-scope=public, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, name=rhosp17/openstack-cinder-scheduler, tcib_managed=true, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, com.redhat.component=openstack-cinder-scheduler-container)
Oct 13 14:21:31 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:21:31 standalone.localdomain podman[212717]: 2025-10-13 14:21:31.64787026 +0000 UTC m=+0.154890400 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, container_name=cinder_scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-cinder-scheduler-container, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T16:10:12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true)
Oct 13 14:21:31 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:21:31 standalone.localdomain podman[212716]: 2025-10-13 14:21:31.743445514 +0000 UTC m=+0.251969161 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cinder-api, release=1, vcs-type=git, version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-cinder-api-container, vendor=Red Hat, Inc., tcib_managed=true, container_name=cinder_api_cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, build-date=2025-07-21T15:58:55, io.buildah.version=1.33.12)
Oct 13 14:21:31 standalone.localdomain podman[212716]: 2025-10-13 14:21:31.777089121 +0000 UTC m=+0.285612768 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, name=rhosp17/openstack-cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, distribution-scope=public, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, build-date=2025-07-21T15:58:55, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_api_cron, managed_by=tripleo_ansible, io.buildah.version=1.33.12, 
release=1, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.component=openstack-cinder-api-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, maintainer=OpenStack TripleO Team)
Oct 13 14:21:31 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:21:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1410: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:32 standalone.localdomain ceph-mon[29756]: pgmap v1410: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1411: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:34 standalone.localdomain ceph-mon[29756]: pgmap v1411: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:21:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1412: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:36 standalone.localdomain runuser[212991]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:21:36 standalone.localdomain runuser[212991]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:21:36 standalone.localdomain ceph-mon[29756]: pgmap v1412: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:36 standalone.localdomain runuser[213060]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:21:37 standalone.localdomain runuser[213060]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:21:37 standalone.localdomain runuser[213122]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:21:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1413: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:38 standalone.localdomain runuser[213122]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:21:38 standalone.localdomain ceph-mon[29756]: pgmap v1413: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:21:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:21:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:21:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:21:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:21:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:21:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:21:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:21:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:21:39 standalone.localdomain podman[213195]: 2025-10-13 14:21:39.356376808 +0000 UTC m=+0.113240784 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, name=rhosp17/openstack-cinder-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-api, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, container_name=cinder_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, 
batch=17.1_20250721.1, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T15:58:55, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-api-container, io.openshift.expose-services=, version=17.1.9)
Oct 13 14:21:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:21:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:21:39 standalone.localdomain podman[213195]: 2025-10-13 14:21:39.388786407 +0000 UTC m=+0.145650363 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, container_name=cinder_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, build-date=2025-07-21T15:58:55, summary=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-api-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-api, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']})
Oct 13 14:21:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:21:39 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:21:39 standalone.localdomain podman[213325]: 2025-10-13 14:21:39.499924535 +0000 UTC m=+0.092138418 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:17, com.redhat.component=openstack-nova-conductor-container, release=1, summary=Red Hat OpenStack Platform 17.1 nova-conductor, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, container_name=nova_conductor, architecture=x86_64, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, name=rhosp17/openstack-nova-conductor, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:21:39 standalone.localdomain podman[213214]: 2025-10-13 14:21:39.452065026 +0000 UTC m=+0.198247820 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-server-container, io.buildah.version=1.33.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, managed_by=tripleo_ansible, 
tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vcs-type=git, distribution-scope=public, release=1, build-date=2025-07-21T15:44:03, container_name=neutron_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:21:39 standalone.localdomain podman[213325]: 2025-10-13 14:21:39.530731343 +0000 UTC m=+0.122945226 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, maintainer=OpenStack TripleO Team, container_name=nova_conductor, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, batch=17.1_20250721.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, 
vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, build-date=2025-07-21T15:44:17, distribution-scope=public, name=rhosp17/openstack-nova-conductor, com.redhat.component=openstack-nova-conductor-container, io.buildah.version=1.33.12)
Oct 13 14:21:39 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:21:39 standalone.localdomain podman[213216]: 2025-10-13 14:21:39.404284149 +0000 UTC m=+0.150883875 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, release=1, distribution-scope=public, name=rhosp17/openstack-cron, vcs-type=git, io.buildah.version=1.33.12, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=logrotate_crond, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1)
Oct 13 14:21:39 standalone.localdomain podman[213216]: 2025-10-13 14:21:39.587789318 +0000 UTC m=+0.334388984 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, container_name=logrotate_crond, release=1, name=rhosp17/openstack-cron, config_id=tripleo_step4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red 
Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 14:21:39 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:21:39 standalone.localdomain podman[213190]: 2025-10-13 14:21:39.600846315 +0000 UTC m=+0.358280839 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, container_name=memcached, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-memcached, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.component=openstack-memcached-container, build-date=2025-07-21T12:58:43, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 
memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=)
Oct 13 14:21:39 standalone.localdomain podman[213214]: 2025-10-13 14:21:39.653652648 +0000 UTC m=+0.399835512 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, config_id=tripleo_step4, build-date=2025-07-21T15:44:03, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-server-container, summary=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, batch=17.1_20250721.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, container_name=neutron_api, description=Red Hat OpenStack Platform 17.1 neutron-server, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:21:39 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:21:39 standalone.localdomain podman[213295]: 2025-10-13 14:21:39.513874589 +0000 UTC m=+0.135781636 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, com.redhat.component=openstack-heat-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, architecture=x86_64, io.openshift.expose-services=, container_name=heat_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4)
Oct 13 14:21:39 standalone.localdomain podman[213188]: 2025-10-13 14:21:39.588776459 +0000 UTC m=+0.354147420 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, build-date=2025-07-21T14:49:55, com.redhat.component=openstack-heat-api-cfn-container, container_name=heat_api_cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 
heat-api-cfn, managed_by=tripleo_ansible, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-heat-api-cfn)
Oct 13 14:21:39 standalone.localdomain podman[213295]: 2025-10-13 14:21:39.70099538 +0000 UTC m=+0.322902417 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, container_name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, build-date=2025-07-21T15:56:26, tcib_managed=true, version=17.1.9, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, com.redhat.component=openstack-heat-api-container)
Oct 13 14:21:39 standalone.localdomain podman[213189]: 2025-10-13 14:21:39.656329861 +0000 UTC m=+0.414528328 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, name=rhosp17/openstack-nova-scheduler, distribution-scope=public, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, build-date=2025-07-21T16:02:54, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-scheduler, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, 
architecture=x86_64, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=nova_scheduler)
Oct 13 14:21:39 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:21:39 standalone.localdomain podman[213190]: 2025-10-13 14:21:39.735225016 +0000 UTC m=+0.492659550 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, name=rhosp17/openstack-memcached, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, config_id=tripleo_step1, 
managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T12:58:43, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:21:39 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:21:39 standalone.localdomain podman[213198]: 2025-10-13 14:21:39.737067942 +0000 UTC m=+0.479954533 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, name=rhosp17/openstack-heat-api, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, container_name=heat_api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, vcs-type=git, io.openshift.expose-services=)
Oct 13 14:21:39 standalone.localdomain podman[213225]: 2025-10-13 14:21:39.79772669 +0000 UTC m=+0.533542741 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, name=rhosp17/openstack-manila-scheduler, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-manila-scheduler-container, release=1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, description=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, 
container_name=manila_scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:21:39 standalone.localdomain podman[213289]: 2025-10-13 14:21:39.701666052 +0000 UTC m=+0.328428191 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, managed_by=tripleo_ansible, container_name=heat_engine, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-heat-engine, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, 
vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-engine-container)
Oct 13 14:21:39 standalone.localdomain podman[213225]: 2025-10-13 14:21:39.816247117 +0000 UTC m=+0.552063198 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, container_name=manila_scheduler, description=Red Hat OpenStack Platform 17.1 manila-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, 
batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, com.redhat.component=openstack-manila-scheduler-container, version=17.1.9, name=rhosp17/openstack-manila-scheduler, release=1, build-date=2025-07-21T15:56:28, io.buildah.version=1.33.12, managed_by=tripleo_ansible)
Oct 13 14:21:39 standalone.localdomain podman[213188]: 2025-10-13 14:21:39.826413173 +0000 UTC m=+0.591784104 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:49:55, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-heat-api-cfn-container, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api-cfn, architecture=x86_64, container_name=heat_api_cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:21:39 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:21:39 standalone.localdomain podman[213289]: 2025-10-13 14:21:39.834987619 +0000 UTC m=+0.461749808 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-engine-container, version=17.1.9, container_name=heat_engine, architecture=x86_64, build-date=2025-07-21T15:44:11, tcib_managed=true, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, config_id=tripleo_step4, summary=Red Hat OpenStack 
Platform 17.1 heat-engine, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, io.buildah.version=1.33.12)
Oct 13 14:21:39 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:21:39 standalone.localdomain podman[213189]: 2025-10-13 14:21:39.839219571 +0000 UTC m=+0.597418048 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-scheduler-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-nova-scheduler, io.buildah.version=1.33.12, release=1, description=Red Hat OpenStack Platform 17.1 nova-scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, version=17.1.9, vcs-type=git, container_name=nova_scheduler, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, build-date=2025-07-21T16:02:54)
Oct 13 14:21:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1414: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:39 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:21:39 standalone.localdomain podman[213198]: 2025-10-13 14:21:39.971860708 +0000 UTC m=+0.714747389 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api, build-date=2025-07-21T15:56:26, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:21:39 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:21:40 standalone.localdomain podman[213197]: 2025-10-13 14:21:39.754199085 +0000 UTC m=+0.504635771 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, release=1, com.redhat.component=openstack-manila-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, container_name=manila_api_cron, name=rhosp17/openstack-manila-api, batch=17.1_20250721.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']})
Oct 13 14:21:40 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:21:40 standalone.localdomain ceph-mon[29756]: pgmap v1414: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:40 standalone.localdomain podman[213197]: 2025-10-13 14:21:40.26375639 +0000 UTC m=+1.014193136 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=manila_api_cron, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, com.redhat.component=openstack-manila-api-container, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, 
description=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:06:43, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, vcs-type=git, name=rhosp17/openstack-manila-api, summary=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true)
Oct 13 14:21:40 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:21:40 standalone.localdomain podman[213476]: 2025-10-13 14:21:40.681656503 +0000 UTC m=+0.160754374 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T16:18:24, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cinder-backup, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cinder-backup, version=17.1.9, release=1, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-backup, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-type=git, com.redhat.component=openstack-cinder-backup-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1)
Oct 13 14:21:40 standalone.localdomain podman[213494]: 2025-10-13 14:21:40.733586928 +0000 UTC m=+0.061866916 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:18:24, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-backup, description=Red Hat OpenStack Platform 17.1 cinder-backup, vendor=Red Hat, Inc., release=1, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-backup, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, com.redhat.component=openstack-cinder-backup-container, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, architecture=x86_64)
Oct 13 14:21:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:21:40 standalone.localdomain podman[213476]: 2025-10-13 14:21:40.748919575 +0000 UTC m=+0.228017446 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, tcib_managed=true, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-backup, com.redhat.component=openstack-cinder-backup-container, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-cinder-backup, release=1, build-date=2025-07-21T16:18:24, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-backup, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup)
Oct 13 14:21:40 standalone.localdomain haproxy[70940]: 172.17.0.100:41093 [13/Oct/2025:14:01:25.629] mysql mysql/standalone.internalapi.localdomain 1/0/1215294 2204 -- 45/45/44/44/0 0/0
Oct 13 14:21:40 standalone.localdomain systemd[1]: libpod-8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.scope: Deactivated successfully.
Oct 13 14:21:40 standalone.localdomain systemd[1]: libpod-8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.scope: Consumed 2.428s CPU time.
Oct 13 14:21:40 standalone.localdomain podman[211822]: 2025-10-13 14:21:40.96570781 +0000 UTC m=+21.841684250 container died 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, architecture=x86_64, distribution-scope=public, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=barbican_worker, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T15:36:22, com.redhat.component=openstack-barbican-worker-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9, name=rhosp17/openstack-barbican-worker, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:21:41 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.timer: Deactivated successfully.
Oct 13 14:21:41 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.
Oct 13 14:21:41 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Failed to open /run/systemd/transient/8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: No such file or directory
Oct 13 14:21:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366-userdata-shm.mount: Deactivated successfully.
Oct 13 14:21:41 standalone.localdomain podman[211822]: 2025-10-13 14:21:41.159244192 +0000 UTC m=+22.035220602 container cleanup 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, config_id=tripleo_step3, build-date=2025-07-21T15:36:22, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-worker, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 barbican-worker, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, release=1, vcs-type=git, com.redhat.component=openstack-barbican-worker-container, name=rhosp17/openstack-barbican-worker, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 barbican-worker, io.openshift.expose-services=, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, container_name=barbican_worker, managed_by=tripleo_ansible)
Oct 13 14:21:41 standalone.localdomain podman[211822]: barbican_worker
Oct 13 14:21:41 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.timer: Failed to open /run/systemd/transient/8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.timer: No such file or directory
Oct 13 14:21:41 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Failed to open /run/systemd/transient/8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: No such file or directory
Oct 13 14:21:41 standalone.localdomain podman[213505]: 2025-10-13 14:21:41.196284154 +0000 UTC m=+0.223324310 container cleanup 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, batch=17.1_20250721.1, build-date=2025-07-21T15:36:22, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-barbican-worker, com.redhat.component=openstack-barbican-worker-container, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.33.12, container_name=barbican_worker, summary=Red Hat OpenStack Platform 17.1 barbican-worker, maintainer=OpenStack TripleO 
Team, tcib_managed=true, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-worker, architecture=x86_64, release=1, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, io.openshift.expose-services=)
Oct 13 14:21:41 standalone.localdomain systemd[1]: libpod-conmon-8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.scope: Deactivated successfully.
Oct 13 14:21:41 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.timer: Failed to open /run/systemd/transient/8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.timer: No such file or directory
Oct 13 14:21:41 standalone.localdomain systemd[1]: 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: Failed to open /run/systemd/transient/8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366.service: No such file or directory
Oct 13 14:21:41 standalone.localdomain podman[213603]: 2025-10-13 14:21:41.302622722 +0000 UTC m=+0.076382177 container cleanup 8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366 (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1, name=barbican_worker, description=Red Hat OpenStack Platform 17.1 barbican-worker, version=17.1.9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 barbican-worker, distribution-scope=public, vcs-ref=03a25abea8e7d92103fead83e0e18ebfb3f777c4, container_name=barbican_worker, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-worker/images/17.1.9-1, build-date=2025-07-21T15:36:22, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1, architecture=x86_64, com.redhat.component=openstack-barbican-worker-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-worker:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 7, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_worker.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, 
io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-worker, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-worker, io.openshift.expose-services=, name=rhosp17/openstack-barbican-worker, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, managed_by=tripleo_ansible)
Oct 13 14:21:41 standalone.localdomain podman[213603]: barbican_worker
Oct 13 14:21:41 standalone.localdomain systemd[1]: tripleo_barbican_worker.service: Deactivated successfully.
Oct 13 14:21:41 standalone.localdomain systemd[1]: Stopped barbican_worker container.
Oct 13 14:21:41 standalone.localdomain sudo[211807]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:41 standalone.localdomain sshd[211806]: Received disconnect from 192.168.122.11 port 59702:11: disconnected by user
Oct 13 14:21:41 standalone.localdomain sshd[211806]: Disconnected from user root 192.168.122.11 port 59702
Oct 13 14:21:41 standalone.localdomain sshd[211795]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:41 standalone.localdomain systemd[1]: session-121.scope: Deactivated successfully.
Oct 13 14:21:41 standalone.localdomain systemd-logind[45629]: Session 121 logged out. Waiting for processes to exit.
Oct 13 14:21:41 standalone.localdomain systemd-logind[45629]: Removed session 121.
Oct 13 14:21:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4dfe70c52a10e8bd718ed4752cbab0ea6264e7dcefe217a06d9a919012c408d9-merged.mount: Deactivated successfully.
Oct 13 14:21:41 standalone.localdomain sshd[213616]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:41 standalone.localdomain sshd[213616]: Accepted publickey for root from 192.168.122.11 port 43714 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:41 standalone.localdomain systemd-logind[45629]: New session 122 of user root.
Oct 13 14:21:41 standalone.localdomain systemd[1]: Started Session 122 of User root.
Oct 13 14:21:41 standalone.localdomain sshd[213616]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:41 standalone.localdomain sudo[213628]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_barbican_keystone_listener.service
Oct 13 14:21:41 standalone.localdomain sudo[213628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:41 standalone.localdomain sudo[213628]: pam_unix(sudo:session): session closed for user root
Oct 13 14:21:41 standalone.localdomain sshd[213627]: Received disconnect from 192.168.122.11 port 43714:11: disconnected by user
Oct 13 14:21:41 standalone.localdomain sshd[213627]: Disconnected from user root 192.168.122.11 port 43714
Oct 13 14:21:41 standalone.localdomain sshd[213616]: pam_unix(sshd:session): session closed for user root
Oct 13 14:21:41 standalone.localdomain systemd[1]: session-122.scope: Deactivated successfully.
Oct 13 14:21:41 standalone.localdomain systemd-logind[45629]: Session 122 logged out. Waiting for processes to exit.
Oct 13 14:21:41 standalone.localdomain systemd-logind[45629]: Removed session 122.
Oct 13 14:21:41 standalone.localdomain sshd[213643]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:21:41 standalone.localdomain sshd[213643]: Accepted publickey for root from 192.168.122.11 port 43720 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:21:41 standalone.localdomain systemd-logind[45629]: New session 123 of user root.
Oct 13 14:21:41 standalone.localdomain systemd[1]: Started Session 123 of User root.
Oct 13 14:21:41 standalone.localdomain sshd[213643]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1415: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:41 standalone.localdomain sudo[213647]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_barbican_keystone_listener.service
Oct 13 14:21:41 standalone.localdomain sudo[213647]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:21:41 standalone.localdomain systemd[1]: Stopping barbican_keystone_listener container...
Oct 13 14:21:42 standalone.localdomain systemd[1]: tmp-crun.qdQgFo.mount: Deactivated successfully.
Oct 13 14:21:42 standalone.localdomain ceph-mon[29756]: pgmap v1415: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1416: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:44 standalone.localdomain ceph-mon[29756]: pgmap v1416: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:21:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:21:45 standalone.localdomain podman[213887]: 2025-10-13 14:21:45.813180681 +0000 UTC m=+0.083115836 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vcs-type=git, container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, distribution-scope=public, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 swift-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, build-date=2025-07-21T15:54:32, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:21:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:21:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1417: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:45 standalone.localdomain podman[213907]: 2025-10-13 14:21:45.964224351 +0000 UTC m=+0.126576599 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true)
Oct 13 14:21:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:21:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:21:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:21:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:21:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:21:46 standalone.localdomain podman[213907]: 2025-10-13 14:21:46.025772266 +0000 UTC m=+0.188124474 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller)
Oct 13 14:21:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:21:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:21:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:21:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:21:46 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:21:46 standalone.localdomain podman[213940]: 2025-10-13 14:21:46.063637444 +0000 UTC m=+0.068769321 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, container_name=ovn_metadata_agent, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1)
Oct 13 14:21:46 standalone.localdomain podman[213944]: 2025-10-13 14:21:46.132090494 +0000 UTC m=+0.132251476 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-swift-account-container, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat 
OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:11:22, name=rhosp17/openstack-swift-account, config_id=tripleo_step4, version=17.1.9)
Oct 13 14:21:46 standalone.localdomain podman[213932]: 2025-10-13 14:21:46.169856399 +0000 UTC m=+0.176615856 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, build-date=2025-07-21T13:58:12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, 
name=rhosp17/openstack-placement-api, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 placement-api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.component=openstack-placement-api-container, container_name=placement_api)
Oct 13 14:21:46 standalone.localdomain podman[213940]: 2025-10-13 14:21:46.180621683 +0000 UTC m=+0.185753560 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
distribution-scope=public, version=17.1.9, architecture=x86_64, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3)
Oct 13 14:21:46 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:21:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:21:46 standalone.localdomain podman[214000]: 2025-10-13 14:21:46.153553662 +0000 UTC m=+0.077184873 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., release=1, tcib_managed=true, container_name=glance_api_cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:21:46 standalone.localdomain podman[213934]: 2025-10-13 14:21:46.099329694 +0000 UTC m=+0.100710754 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-nova-api, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, release=1, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-nova-api-container, container_name=nova_metadata, architecture=x86_64)
Oct 13 14:21:46 standalone.localdomain podman[213931]: 2025-10-13 14:21:46.270313194 +0000 UTC m=+0.286750543 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, version=17.1.9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, container_name=swift_object_server, distribution-scope=public, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, release=1, build-date=2025-07-21T14:56:28)
Oct 13 14:21:46 standalone.localdomain podman[214001]: 2025-10-13 14:21:46.219604146 +0000 UTC m=+0.139674896 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, container_name=swift_proxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:21:46 standalone.localdomain podman[213934]: 2025-10-13 14:21:46.281215583 +0000 UTC m=+0.282596633 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, release=1, description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., vcs-type=git, container_name=nova_metadata, name=rhosp17/openstack-nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, io.openshift.expose-services=, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.component=openstack-nova-api-container, managed_by=tripleo_ansible)
Oct 13 14:21:46 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:21:46 standalone.localdomain podman[213932]: 2025-10-13 14:21:46.296052686 +0000 UTC m=+0.302812173 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, vendor=Red Hat, Inc., name=rhosp17/openstack-placement-api, architecture=x86_64, container_name=placement_api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, build-date=2025-07-21T13:58:12, maintainer=OpenStack TripleO Team, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 placement-api, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.component=openstack-placement-api-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1)
Oct 13 14:21:46 standalone.localdomain podman[214000]: 2025-10-13 14:21:46.304398145 +0000 UTC m=+0.228029386 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, container_name=glance_api_cron, release=1, name=rhosp17/openstack-glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:21:46 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:21:46 standalone.localdomain podman[214003]: 2025-10-13 14:21:46.320248748 +0000 UTC m=+0.232939649 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, name=rhosp17/openstack-nova-novncproxy, batch=17.1_20250721.1, com.redhat.component=openstack-nova-novncproxy-container, tcib_managed=true, config_id=tripleo_step4, container_name=nova_vnc_proxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, managed_by=tripleo_ansible, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, version=17.1.9, build-date=2025-07-21T15:24:10, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, architecture=x86_64, io.buildah.version=1.33.12, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy)
Oct 13 14:21:46 standalone.localdomain podman[213944]: 2025-10-13 14:21:46.337805185 +0000 UTC m=+0.337966197 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, name=rhosp17/openstack-swift-account, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12)
Oct 13 14:21:46 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:21:46 standalone.localdomain podman[214011]: 2025-10-13 14:21:46.378682506 +0000 UTC m=+0.289677093 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, version=17.1.9, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, batch=17.1_20250721.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git)
Oct 13 14:21:46 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:21:46 standalone.localdomain podman[214001]: 2025-10-13 14:21:46.412612502 +0000 UTC m=+0.332683252 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, container_name=swift_proxy, distribution-scope=public, tcib_managed=true, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, name=rhosp17/openstack-swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37)
Oct 13 14:21:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:21:46 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:21:46 standalone.localdomain podman[213887]: 2025-10-13 14:21:46.45178647 +0000 UTC m=+0.721721625 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, architecture=x86_64, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:21:46 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:21:46 standalone.localdomain podman[213931]: 2025-10-13 14:21:46.482995121 +0000 UTC m=+0.499432480 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, release=1, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, name=rhosp17/openstack-swift-object)
Oct 13 14:21:46 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:21:46 standalone.localdomain podman[214095]: 2025-10-13 14:21:46.530349185 +0000 UTC m=+0.316342834 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:21:46 standalone.localdomain podman[214011]: 2025-10-13 14:21:46.563868758 +0000 UTC m=+0.474863375 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, name=rhosp17/openstack-glance-api, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, com.redhat.component=openstack-glance-api-container, container_name=glance_api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:21:46 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:21:46 standalone.localdomain podman[214003]: 2025-10-13 14:21:46.584869111 +0000 UTC m=+0.497560012 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.buildah.version=1.33.12, release=1, version=17.1.9, build-date=2025-07-21T15:24:10, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-novncproxy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-nova-novncproxy-container, container_name=nova_vnc_proxy, io.openshift.expose-services=, name=rhosp17/openstack-nova-novncproxy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:21:46 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:21:46 standalone.localdomain podman[214167]: 2025-10-13 14:21:46.643759784 +0000 UTC m=+0.215551398 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target)
Oct 13 14:21:46 standalone.localdomain podman[214095]: 2025-10-13 14:21:46.739022648 +0000 UTC m=+0.525016327 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.openshift.expose-services=, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, architecture=x86_64, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, container_name=glance_api_internal, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-glance-api, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:20)
Oct 13 14:21:46 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:21:46 standalone.localdomain ceph-mon[29756]: pgmap v1417: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:47 standalone.localdomain podman[214167]: 2025-10-13 14:21:47.004660963 +0000 UTC m=+0.576452617 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Oct 13 14:21:47 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:21:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1418: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:48 standalone.localdomain runuser[214236]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:21:48 standalone.localdomain ceph-mon[29756]: pgmap v1418: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:49 standalone.localdomain runuser[214236]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:21:49 standalone.localdomain runuser[214297]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:21:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1419: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:50 standalone.localdomain runuser[214297]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:21:50 standalone.localdomain runuser[214359]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:21:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:21:50 standalone.localdomain runuser[214359]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:21:50 standalone.localdomain ceph-mon[29756]: pgmap v1419: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:51 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:21:51 standalone.localdomain recover_tripleo_nova_virtqemud[214508]: 93291
Oct 13 14:21:51 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:21:51 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:21:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1420: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:21:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:21:52 standalone.localdomain systemd[1]: tmp-crun.nxSjb4.mount: Deactivated successfully.
Oct 13 14:21:52 standalone.localdomain podman[214606]: 2025-10-13 14:21:52.827073357 +0000 UTC m=+0.089042121 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-keystone, version=17.1.9, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=keystone_cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, batch=17.1_20250721.1, architecture=x86_64, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:21:52 standalone.localdomain podman[214606]: 2025-10-13 14:21:52.838326197 +0000 UTC m=+0.100294971 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, container_name=keystone_cron, architecture=x86_64, version=17.1.9, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.buildah.version=1.33.12, name=rhosp17/openstack-keystone, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, tcib_managed=true, build-date=2025-07-21T13:27:18, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Oct 13 14:21:52 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:21:52 standalone.localdomain podman[214607]: 2025-10-13 14:21:52.94769172 +0000 UTC m=+0.207385393 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, release=1, name=rhosp17/openstack-nova-api, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, architecture=x86_64, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12)
Oct 13 14:21:52 standalone.localdomain ceph-mon[29756]: pgmap v1420: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:52 standalone.localdomain podman[214607]: 2025-10-13 14:21:52.98914683 +0000 UTC m=+0.248840443 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, release=1, container_name=nova_api, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, architecture=x86_64, tcib_managed=true, vcs-type=git)
Oct 13 14:21:53 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:21:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:21:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:21:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:21:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:21:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:21:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:21:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1421: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:21:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:21:54 standalone.localdomain podman[214756]: Error: container 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb is not running
Oct 13 14:21:54 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Main process exited, code=exited, status=125/n/a
Oct 13 14:21:54 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Failed with result 'exit-code'.
Oct 13 14:21:54 standalone.localdomain podman[214755]: 2025-10-13 14:21:54.794343805 +0000 UTC m=+0.063309040 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, vcs-type=git, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, container_name=neutron_dhcp, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-dhcp-agent, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, version=17.1.9)
Oct 13 14:21:54 standalone.localdomain podman[214755]: 2025-10-13 14:21:54.854836548 +0000 UTC m=+0.123801773 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, distribution-scope=public, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, build-date=2025-07-21T16:28:54, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=neutron_dhcp, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, io.openshift.expose-services=, version=17.1.9)
Oct 13 14:21:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:21:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:21:54 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:21:54 standalone.localdomain podman[214799]: 2025-10-13 14:21:54.945030654 +0000 UTC m=+0.064418775 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, container_name=nova_api_cron, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-api, build-date=2025-07-21T16:05:11, description=Red Hat OpenStack Platform 17.1 nova-api, release=1, com.redhat.component=openstack-nova-api-container, distribution-scope=public, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:21:54 standalone.localdomain podman[214799]: 2025-10-13 14:21:54.954117816 +0000 UTC m=+0.073505937 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T16:05:11, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-api, release=1, vcs-type=git, com.redhat.component=openstack-nova-api-container, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, 
summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-api, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, vendor=Red Hat, Inc.)
Oct 13 14:21:54 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:21:54 standalone.localdomain ceph-mon[29756]: pgmap v1421: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:55 standalone.localdomain podman[214798]: 2025-10-13 14:21:55.000212171 +0000 UTC m=+0.120435488 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-sriov-agent, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T16:03:34, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, architecture=x86_64, container_name=neutron_sriov_agent, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-sriov-agent-container, distribution-scope=public, batch=17.1_20250721.1)
Oct 13 14:21:55 standalone.localdomain podman[214798]: 2025-10-13 14:21:55.055835322 +0000 UTC m=+0.176058619 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, version=17.1.9, release=1, distribution-scope=public, name=rhosp17/openstack-neutron-sriov-agent, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, batch=17.1_20250721.1, container_name=neutron_sriov_agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']})
Oct 13 14:21:55 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:21:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:21:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1422: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:56 standalone.localdomain ceph-mon[29756]: pgmap v1422: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:21:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:21:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:21:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:21:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3536314737' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:21:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:21:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3536314737' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:21:57 standalone.localdomain systemd[1]: tmp-crun.ukzliX.mount: Deactivated successfully.
Oct 13 14:21:57 standalone.localdomain podman[214892]: 2025-10-13 14:21:57.634510483 +0000 UTC m=+0.146085526 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git)
Oct 13 14:21:57 standalone.localdomain podman[214913]: 2025-10-13 14:21:57.592793155 +0000 UTC m=+0.092695885 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, com.redhat.component=openstack-mariadb-container, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:45, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, name=rhosp17/openstack-mariadb, config_id=tripleo_step2, container_name=clustercheck, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']})
Oct 13 14:21:57 standalone.localdomain podman[214900]: 2025-10-13 14:21:57.610554198 +0000 UTC m=+0.113599396 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, architecture=x86_64, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_cluster_northd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-northd, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=ovn_cluster_northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-ovn-northd-container, version=17.1.9, build-date=2025-07-21T13:30:04)
Oct 13 14:21:57 standalone.localdomain podman[214913]: 2025-10-13 14:21:57.676832779 +0000 UTC m=+0.176735489 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, container_name=clustercheck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, name=rhosp17/openstack-mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, distribution-scope=public, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, 
version=17.1.9, managed_by=tripleo_ansible, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container)
Oct 13 14:21:57 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:21:57 standalone.localdomain podman[214900]: 2025-10-13 14:21:57.693802597 +0000 UTC m=+0.196847795 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.buildah.version=1.33.12, build-date=2025-07-21T13:30:04, architecture=x86_64, config_id=ovn_cluster_northd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-northd, distribution-scope=public, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, version=17.1.9, release=1, container_name=ovn_cluster_northd, io.openshift.expose-services=, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-northd, 
io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.component=openstack-ovn-northd-container, summary=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:21:57 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:21:57 standalone.localdomain podman[214892]: 2025-10-13 14:21:57.71285992 +0000 UTC m=+0.224434983 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T14:48:37, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Oct 13 14:21:57 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:21:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3536314737' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:21:57 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3536314737' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:21:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1423: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:58 standalone.localdomain ceph-mon[29756]: pgmap v1423: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:21:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1424: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: pgmap v1424: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:00.951266) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365320951292, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2364, "num_deletes": 506, "total_data_size": 1950435, "memory_usage": 2000608, "flush_reason": "Manual Compaction"}
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365320968714, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1890392, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32044, "largest_seqno": 34407, "table_properties": {"data_size": 1881654, "index_size": 4865, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 21221, "raw_average_key_size": 18, "raw_value_size": 1861668, "raw_average_value_size": 1666, "num_data_blocks": 221, "num_entries": 1117, "num_filter_entries": 1117, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760365127, "oldest_key_time": 1760365127, "file_creation_time": 1760365320, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 17496 microseconds, and 4537 cpu microseconds.
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:00.968753) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1890392 bytes OK
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:00.968779) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:00.973889) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:00.973923) EVENT_LOG_v1 {"time_micros": 1760365320973916, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:00.973944) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1939405, prev total WAL file size 1939405, number of live WAL files 2.
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:00.974503) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1846KB)], [71(4272KB)]
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365320974532, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 6265929, "oldest_snapshot_seqno": -1}
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 4208 keys, 5154751 bytes, temperature: kUnknown
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365320997583, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 5154751, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5127354, "index_size": 15781, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 104626, "raw_average_key_size": 24, "raw_value_size": 5051887, "raw_average_value_size": 1200, "num_data_blocks": 659, "num_entries": 4208, "num_filter_entries": 4208, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760365320, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:22:00 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:22:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:00.997821) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 5154751 bytes
Oct 13 14:22:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:01.000216) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 271.1 rd, 223.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 4.2 +0.0 blob) out(4.9 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5240, records dropped: 1032 output_compression: NoCompression
Oct 13 14:22:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:01.000233) EVENT_LOG_v1 {"time_micros": 1760365321000225, "job": 40, "event": "compaction_finished", "compaction_time_micros": 23117, "compaction_time_cpu_micros": 9210, "output_level": 6, "num_output_files": 1, "total_output_size": 5154751, "num_input_records": 5240, "num_output_records": 4208, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:22:01 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:22:01 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365321000454, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct 13 14:22:01 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:22:01 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365321000762, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct 13 14:22:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:00.974395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:22:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:01.000804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:22:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:01.000808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:22:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:01.000809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:22:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:01.000811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:22:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:22:01.000813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:22:01 standalone.localdomain sudo[215047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:22:01 standalone.localdomain sudo[215047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:22:01 standalone.localdomain sudo[215047]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:01 standalone.localdomain sudo[215131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:22:01 standalone.localdomain runuser[215148]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:22:01 standalone.localdomain sudo[215131]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:22:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:22:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:22:01 standalone.localdomain systemd[1]: tmp-crun.bc24hm.mount: Deactivated successfully.
Oct 13 14:22:01 standalone.localdomain podman[215210]: 2025-10-13 14:22:01.817619343 +0000 UTC m=+0.075192471 container health_status e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, health_status=healthy, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, version=17.1.9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.buildah.version=1.33.12, container_name=cinder_scheduler, com.redhat.component=openstack-cinder-scheduler-container, build-date=2025-07-21T16:10:12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-scheduler, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team)
Oct 13 14:22:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:22:01 standalone.localdomain podman[215210]: 2025-10-13 14:22:01.874513624 +0000 UTC m=+0.132086792 container exec_died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, com.redhat.component=openstack-cinder-scheduler-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=cinder_scheduler, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, version=17.1.9, 
build-date=2025-07-21T16:10:12, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-scheduler)
Oct 13 14:22:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1425: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:01 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Deactivated successfully.
Oct 13 14:22:01 standalone.localdomain podman[215209]: 2025-10-13 14:22:01.87729245 +0000 UTC m=+0.135211638 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, tcib_managed=true, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, version=17.1.9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-iscsid)
Oct 13 14:22:01 standalone.localdomain podman[215209]: 2025-10-13 14:22:01.961134399 +0000 UTC m=+0.219053607 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-07-21T13:27:15, vcs-type=git, com.redhat.component=openstack-iscsid-container, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, 
config_id=tripleo_step3, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:22:01 standalone.localdomain sudo[215131]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:01 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:22:01 standalone.localdomain podman[215242]: 2025-10-13 14:22:01.978920122 +0000 UTC m=+0.146199330 container health_status bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, health_status=healthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-api, io.buildah.version=1.33.12, tcib_managed=true, container_name=cinder_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, build-date=2025-07-21T15:58:55, version=17.1.9, com.redhat.component=openstack-cinder-api-container, description=Red Hat OpenStack Platform 17.1 cinder-api, release=1, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:22:01 standalone.localdomain podman[215242]: 2025-10-13 14:22:01.98495824 +0000 UTC m=+0.152237438 container exec_died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, container_name=cinder_api_cron, version=17.1.9, vcs-type=git, io.openshift.expose-services=, build-date=2025-07-21T15:58:55, config_id=tripleo_step4, release=1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, 
name=rhosp17/openstack-cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, com.redhat.component=openstack-cinder-api-container, maintainer=OpenStack TripleO Team)
Oct 13 14:22:01 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Deactivated successfully.
Oct 13 14:22:02 standalone.localdomain runuser[215148]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:22:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:22:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:22:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:22:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:22:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:22:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:22:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:22:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:22:02 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 3b924700-7038-4546-8158-124c081d453f (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:22:02 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 3b924700-7038-4546-8158-124c081d453f (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:22:02 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 3b924700-7038-4546-8158-124c081d453f (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:22:02 standalone.localdomain sudo[215350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:22:02 standalone.localdomain sudo[215350]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:22:02 standalone.localdomain sudo[215350]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:02 standalone.localdomain runuser[215365]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:22:02 standalone.localdomain runuser[215365]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:22:02 standalone.localdomain runuser[215460]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:22:02 standalone.localdomain ceph-mon[29756]: pgmap v1425: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:22:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:22:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:22:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:22:03 standalone.localdomain podman[215559]: 2025-10-13 14:22:03.163397845 +0000 UTC m=+0.077481942 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, build-date=2025-07-21T12:58:45, release=1, name=rhosp17/openstack-mariadb, architecture=x86_64, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:22:03 standalone.localdomain podman[215559]: 2025-10-13 14:22:03.193811242 +0000 UTC m=+0.107895339 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, build-date=2025-07-21T12:58:45, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:22:03 standalone.localdomain runuser[215460]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:22:03 standalone.localdomain podman[215600]: 2025-10-13 14:22:03.418933246 +0000 UTC m=+0.064150307 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, com.redhat.component=openstack-haproxy-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:08:11, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, architecture=x86_64, distribution-scope=public, release=1, name=rhosp17/openstack-haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:22:03 standalone.localdomain podman[215600]: 2025-10-13 14:22:03.453514681 +0000 UTC m=+0.098731722 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vcs-type=git, com.redhat.component=openstack-haproxy-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, name=rhosp17/openstack-haproxy, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:08:11, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64)
Oct 13 14:22:03 standalone.localdomain podman[215638]: 2025-10-13 14:22:03.69486764 +0000 UTC m=+0.055646761 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, build-date=2025-07-21T13:08:05, summary=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, name=rhosp17/openstack-rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:22:03 standalone.localdomain podman[215638]: 2025-10-13 14:22:03.725918188 +0000 UTC m=+0.086697299 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vendor=Red Hat, Inc., build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, vcs-type=git, com.redhat.component=openstack-rabbitmq-container, summary=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 14:22:03 standalone.localdomain systemd[1]: tmp-crun.mgZ15x.mount: Deactivated successfully.
Oct 13 14:22:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1426: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:03 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:22:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:22:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:22:05 standalone.localdomain ceph-mon[29756]: pgmap v1426: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:05 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:22:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:22:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1427: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:06 standalone.localdomain ceph-mon[29756]: pgmap v1427: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:07 standalone.localdomain podman[215757]: 2025-10-13 14:22:07.650620457 +0000 UTC m=+0.078840213 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, release=1, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, io.buildah.version=1.33.12, com.redhat.component=openstack-cinder-volume-container, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cinder-volume, batch=17.1_20250721.1, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:13:39, description=Red Hat OpenStack Platform 17.1 cinder-volume, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, name=rhosp17/openstack-cinder-volume, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:22:07 standalone.localdomain podman[215757]: 2025-10-13 14:22:07.685845784 +0000 UTC m=+0.114065530 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, name=rhosp17/openstack-cinder-volume, description=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cinder-volume, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:13:39, com.redhat.component=openstack-cinder-volume-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:22:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1428: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:09 standalone.localdomain ceph-mon[29756]: pgmap v1428: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:22:09 standalone.localdomain podman[215804]: 2025-10-13 14:22:09.566730464 +0000 UTC m=+0.081687932 container health_status 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-api, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, description=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cinder-api, version=17.1.9, build-date=2025-07-21T15:58:55, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, com.redhat.component=openstack-cinder-api-container, container_name=cinder_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:22:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:22:09 standalone.localdomain podman[215829]: 2025-10-13 14:22:09.666706415 +0000 UTC m=+0.071851196 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, name=rhosp17/openstack-nova-conductor, build-date=2025-07-21T15:44:17, com.redhat.component=openstack-nova-conductor-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack 
osp-17.1, distribution-scope=public, vendor=Red Hat, Inc., container_name=nova_conductor, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible)
Oct 13 14:22:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:22:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:22:09 standalone.localdomain podman[215829]: 2025-10-13 14:22:09.746134467 +0000 UTC m=+0.151279238 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, container_name=nova_conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, architecture=x86_64, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, description=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-conductor, build-date=2025-07-21T15:44:17, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team, 
config_id=tripleo_step4, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, release=1, com.redhat.component=openstack-nova-conductor-container, io.buildah.version=1.33.12, distribution-scope=public)
Oct 13 14:22:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:22:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:22:09 standalone.localdomain podman[215851]: 2025-10-13 14:22:09.771843627 +0000 UTC m=+0.083836040 container health_status 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-neutron-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, release=1, com.redhat.component=openstack-neutron-server-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, container_name=neutron_api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, 
vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T15:44:03, summary=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:22:09 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:22:09 standalone.localdomain podman[215882]: 2025-10-13 14:22:09.830380638 +0000 UTC m=+0.050062539 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T12:58:43, tcib_managed=true, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, name=rhosp17/openstack-memcached, release=1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, container_name=memcached, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-memcached-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, config_id=tripleo_step1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1)
Oct 13 14:22:09 standalone.localdomain podman[215804]: 2025-10-13 14:22:09.838885382 +0000 UTC m=+0.353842790 container exec_died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, io.buildah.version=1.33.12, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-cinder-api-container, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=cinder_api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:58:55, managed_by=tripleo_ansible)
Oct 13 14:22:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:22:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:22:09 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Deactivated successfully.
Oct 13 14:22:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1429: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:09 standalone.localdomain podman[215881]: 2025-10-13 14:22:09.939559395 +0000 UTC m=+0.166038458 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, tcib_managed=true, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, name=rhosp17/openstack-heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-api-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api_cron, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:22:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:22:09 standalone.localdomain podman[215850]: 2025-10-13 14:22:09.817301161 +0000 UTC m=+0.128500700 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, release=1, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:22:09 standalone.localdomain podman[215851]: 2025-10-13 14:22:09.970450636 +0000 UTC m=+0.282443079 container exec_died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-server, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=neutron_api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-server-container, description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, config_id=tripleo_step4, build-date=2025-07-21T15:44:03, io.openshift.expose-services=, architecture=x86_64)
Oct 13 14:22:10 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Deactivated successfully.
Oct 13 14:22:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:22:10 standalone.localdomain ceph-mon[29756]: pgmap v1429: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:10 standalone.localdomain haproxy[70940]: 172.17.0.100:52641 [13/Oct/2025:14:01:24.923] mysql mysql/standalone.internalapi.localdomain 1/0/1245278 82332 -- 44/44/43/43/0 0/0
Oct 13 14:22:10 standalone.localdomain podman[215881]: 2025-10-13 14:22:10.24281854 +0000 UTC m=+0.469297613 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, build-date=2025-07-21T15:56:26, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, container_name=heat_api_cron, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:22:10 standalone.localdomain systemd[1]: libpod-5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.scope: Deactivated successfully.
Oct 13 14:22:10 standalone.localdomain systemd[1]: libpod-5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.scope: Consumed 5.932s CPU time.
Oct 13 14:22:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:22:10 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:22:10 standalone.localdomain podman[215990]: 2025-10-13 14:22:10.44627303 +0000 UTC m=+0.167466771 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, 
distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:22:10 standalone.localdomain podman[215882]: 2025-10-13 14:22:10.458085488 +0000 UTC m=+0.677767409 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 memcached, release=1, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-memcached-container, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, config_id=tripleo_step1, container_name=memcached, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, batch=17.1_20250721.1, version=17.1.9, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:22:10 standalone.localdomain podman[215850]: 2025-10-13 14:22:10.458576803 +0000 UTC m=+0.769776412 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, release=1, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git)
Oct 13 14:22:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:22:10 standalone.localdomain podman[215990]: 2025-10-13 14:22:10.482140726 +0000 UTC m=+0.203334567 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, name=rhosp17/openstack-heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, container_name=heat_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, release=1, config_id=tripleo_step4, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, com.redhat.component=openstack-heat-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-07-21T15:56:26, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:22:10 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:22:10 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:22:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:22:11 standalone.localdomain podman[216027]: 2025-10-13 14:22:10.70628333 +0000 UTC m=+0.218573932 container health_status 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-manila-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, build-date=2025-07-21T16:06:43, description=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-manila-api, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, config_id=tripleo_step4, container_name=manila_api_cron)
Oct 13 14:22:11 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:22:11 standalone.localdomain podman[213696]: 2025-10-13 14:22:11.120946721 +0000 UTC m=+29.165299022 container died 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, config_id=tripleo_step3, io.buildah.version=1.33.12, build-date=2025-07-21T16:18:19, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, tcib_managed=true, container_name=barbican_keystone_listener, distribution-scope=public, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, 
vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-barbican-keystone-listener-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, name=rhosp17/openstack-barbican-keystone-listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:22:11 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.timer: Deactivated successfully.
Oct 13 14:22:11 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.
Oct 13 14:22:11 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Failed to open /run/systemd/transient/5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: No such file or directory
Oct 13 14:22:11 standalone.localdomain podman[215959]: 2025-10-13 14:22:11.206947367 +0000 UTC m=+1.232231730 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, batch=17.1_20250721.1, com.redhat.component=openstack-nova-scheduler-container, maintainer=OpenStack TripleO Team, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, vcs-type=git, version=17.1.9, distribution-scope=public, 
name=rhosp17/openstack-nova-scheduler, build-date=2025-07-21T16:02:54, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-scheduler)
Oct 13 14:22:11 standalone.localdomain podman[215977]: 2025-10-13 14:22:10.893731992 +0000 UTC m=+0.749895443 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T15:44:11, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-heat-engine-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, release=1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vcs-type=git)
Oct 13 14:22:11 standalone.localdomain podman[213696]: 2025-10-13 14:22:11.403397519 +0000 UTC m=+29.447749770 container cleanup 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, config_id=tripleo_step3, container_name=barbican_keystone_listener, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T16:18:19, architecture=x86_64, com.redhat.component=openstack-barbican-keystone-listener-container, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, name=rhosp17/openstack-barbican-keystone-listener, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:22:11 standalone.localdomain podman[213696]: barbican_keystone_listener
Oct 13 14:22:11 standalone.localdomain podman[215959]: 2025-10-13 14:22:11.409071426 +0000 UTC m=+1.434355749 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-scheduler, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, release=1, io.buildah.version=1.33.12, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:02:54, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, container_name=nova_scheduler, name=rhosp17/openstack-nova-scheduler, com.redhat.component=openstack-nova-scheduler-container, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:22:11 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.timer: Failed to open /run/systemd/transient/5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.timer: No such file or directory
Oct 13 14:22:11 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Failed to open /run/systemd/transient/5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: No such file or directory
Oct 13 14:22:11 standalone.localdomain podman[215977]: 2025-10-13 14:22:11.449351979 +0000 UTC m=+1.305515430 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, com.redhat.component=openstack-heat-engine-container, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T15:44:11, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, vendor=Red Hat, Inc., config_id=tripleo_step4)
Oct 13 14:22:11 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:22:11 standalone.localdomain podman[216027]: 2025-10-13 14:22:11.501294645 +0000 UTC m=+1.013585187 container exec_died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-manila-api, com.redhat.component=openstack-manila-api-container, description=Red Hat OpenStack Platform 17.1 manila-api, distribution-scope=public, build-date=2025-07-21T16:06:43, release=1, summary=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=manila_api_cron, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api)
Oct 13 14:22:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f6698bdef4914a669560655838bf20b3925b44012adb424488f64497bc052998-merged.mount: Deactivated successfully.
Oct 13 14:22:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb-userdata-shm.mount: Deactivated successfully.
Oct 13 14:22:11 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Deactivated successfully.
Oct 13 14:22:11 standalone.localdomain podman[215996]: 2025-10-13 14:22:11.61682188 +0000 UTC m=+1.324495381 container cleanup 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-barbican-keystone-listener, build-date=2025-07-21T16:18:19, config_id=tripleo_step3, container_name=barbican_keystone_listener, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, com.redhat.component=openstack-barbican-keystone-listener-container, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, managed_by=tripleo_ansible, io.buildah.version=1.33.12)
Oct 13 14:22:11 standalone.localdomain systemd[1]: libpod-conmon-5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.scope: Deactivated successfully.
Oct 13 14:22:11 standalone.localdomain podman[215930]: 2025-10-13 14:22:11.6457273 +0000 UTC m=+1.774998038 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, tcib_managed=true, container_name=heat_api_cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-api-cfn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, 
io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, distribution-scope=public, release=1)
Oct 13 14:22:11 standalone.localdomain podman[215930]: 2025-10-13 14:22:11.695934962 +0000 UTC m=+1.825205690 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, build-date=2025-07-21T14:49:55, com.redhat.component=openstack-heat-api-cfn-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-heat-api-cfn, container_name=heat_api_cfn, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 14:22:11 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.timer: Failed to open /run/systemd/transient/5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.timer: No such file or directory
Oct 13 14:22:11 standalone.localdomain systemd[1]: 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: Failed to open /run/systemd/transient/5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb.service: No such file or directory
Oct 13 14:22:11 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:22:11 standalone.localdomain podman[215931]: 2025-10-13 14:22:11.460764045 +0000 UTC m=+1.588373081 container health_status 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-manila-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-manila-scheduler-container, config_id=tripleo_step4, release=1, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, build-date=2025-07-21T15:56:28, container_name=manila_scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, description=Red Hat OpenStack Platform 17.1 manila-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']})
Oct 13 14:22:11 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:22:11 standalone.localdomain podman[215931]: 2025-10-13 14:22:11.861808512 +0000 UTC m=+1.989417538 container exec_died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vcs-type=git, architecture=x86_64, container_name=manila_scheduler, description=Red Hat OpenStack Platform 17.1 manila-scheduler, build-date=2025-07-21T15:56:28, io.openshift.expose-services=, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, name=rhosp17/openstack-manila-scheduler, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, 
com.redhat.component=openstack-manila-scheduler-container, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler)
Oct 13 14:22:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1430: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:11 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Deactivated successfully.
Oct 13 14:22:11 standalone.localdomain podman[216184]: 2025-10-13 14:22:11.967574212 +0000 UTC m=+0.317824239 container cleanup 5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb (image=registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1, name=barbican_keystone_listener, name=rhosp17/openstack-barbican-keystone-listener, io.k8s.description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, vcs-ref=498699af89fe63bf70d22fa12e0bb862dd12874c, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd98175232adee9e03624d31ed0b22944'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-barbican-keystone-listener:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 6, 'user': 'barbican', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/barbican:/var/log/barbican:z', '/var/log/containers/httpd/barbican-api:/var/log/httpd:z', '/var/lib/kolla/config_files/barbican_keystone_listener.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/barbican:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, 
com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 barbican-keystone-listener, build-date=2025-07-21T16:18:19, com.redhat.component=openstack-barbican-keystone-listener-container, container_name=barbican_keystone_listener, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-barbican-keystone-listener/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 barbican-keystone-listener)
Oct 13 14:22:11 standalone.localdomain podman[216184]: barbican_keystone_listener
Oct 13 14:22:11 standalone.localdomain systemd[1]: tripleo_barbican_keystone_listener.service: Deactivated successfully.
Oct 13 14:22:11 standalone.localdomain systemd[1]: Stopped barbican_keystone_listener container.
Oct 13 14:22:11 standalone.localdomain sudo[213647]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:11 standalone.localdomain sshd[213646]: Received disconnect from 192.168.122.11 port 43720:11: disconnected by user
Oct 13 14:22:11 standalone.localdomain sshd[213646]: Disconnected from user root 192.168.122.11 port 43720
Oct 13 14:22:11 standalone.localdomain systemd-logind[45629]: Session 123 logged out. Waiting for processes to exit.
Oct 13 14:22:11 standalone.localdomain sshd[213643]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:11 standalone.localdomain systemd[1]: session-123.scope: Deactivated successfully.
Oct 13 14:22:12 standalone.localdomain systemd-logind[45629]: Removed session 123.
Oct 13 14:22:12 standalone.localdomain sshd[216208]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:12 standalone.localdomain sshd[216208]: Accepted publickey for root from 192.168.122.11 port 44086 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:12 standalone.localdomain systemd-logind[45629]: New session 124 of user root.
Oct 13 14:22:12 standalone.localdomain systemd[1]: Started Session 124 of User root.
Oct 13 14:22:12 standalone.localdomain sshd[216208]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:12 standalone.localdomain sudo[216261]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_cinder_api.service
Oct 13 14:22:12 standalone.localdomain sudo[216261]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:12 standalone.localdomain sudo[216261]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:12 standalone.localdomain sshd[216252]: Received disconnect from 192.168.122.11 port 44086:11: disconnected by user
Oct 13 14:22:12 standalone.localdomain sshd[216252]: Disconnected from user root 192.168.122.11 port 44086
Oct 13 14:22:12 standalone.localdomain sshd[216208]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:12 standalone.localdomain systemd[1]: session-124.scope: Deactivated successfully.
Oct 13 14:22:12 standalone.localdomain systemd-logind[45629]: Session 124 logged out. Waiting for processes to exit.
Oct 13 14:22:12 standalone.localdomain systemd-logind[45629]: Removed session 124.
Oct 13 14:22:12 standalone.localdomain sshd[216276]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:12 standalone.localdomain sshd[216276]: Accepted publickey for root from 192.168.122.11 port 44090 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:12 standalone.localdomain systemd-logind[45629]: New session 125 of user root.
Oct 13 14:22:12 standalone.localdomain systemd[1]: Started Session 125 of User root.
Oct 13 14:22:12 standalone.localdomain sshd[216276]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:12 standalone.localdomain sudo[216280]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_cinder_api.service
Oct 13 14:22:12 standalone.localdomain sudo[216280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:12 standalone.localdomain systemd[1]: Stopping cinder_api container...
Oct 13 14:22:13 standalone.localdomain ceph-mon[29756]: pgmap v1430: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1431: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:13 standalone.localdomain runuser[216405]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:22:14 standalone.localdomain ceph-mon[29756]: pgmap v1431: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:14 standalone.localdomain runuser[216405]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:22:14 standalone.localdomain runuser[216527]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:22:15 standalone.localdomain runuser[216527]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:22:15 standalone.localdomain runuser[216589]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:22:15 standalone.localdomain haproxy[70940]: 172.17.0.100:37171 [13/Oct/2025:14:07:03.298] mysql mysql/standalone.internalapi.localdomain 1/0/912413 737159 -- 43/43/42/42/0 0/0
Oct 13 14:22:15 standalone.localdomain systemd[1]: libpod-4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.scope: Deactivated successfully.
Oct 13 14:22:15 standalone.localdomain systemd[1]: libpod-4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.scope: Consumed 7.782s CPU time.
Oct 13 14:22:15 standalone.localdomain podman[216295]: 2025-10-13 14:22:15.727770866 +0000 UTC m=+3.199643093 container died 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, name=rhosp17/openstack-cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.component=openstack-cinder-api-container, description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, config_id=tripleo_step4, container_name=cinder_api, managed_by=tripleo_ansible, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, build-date=2025-07-21T15:58:55, version=17.1.9, release=1, distribution-scope=public, architecture=x86_64, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 14:22:15 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.timer: Deactivated successfully.
Oct 13 14:22:15 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.
Oct 13 14:22:15 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Failed to open /run/systemd/transient/4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: No such file or directory
Oct 13 14:22:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e-userdata-shm.mount: Deactivated successfully.
Oct 13 14:22:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f573d32cfb4fe0600026989e0f3eb3e23bdbbcdc54637c567c3b456cea4c706a-merged.mount: Deactivated successfully.
Oct 13 14:22:15 standalone.localdomain podman[216295]: 2025-10-13 14:22:15.784506951 +0000 UTC m=+3.256379178 container cleanup 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, version=17.1.9, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, container_name=cinder_api, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T15:58:55, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, 
com.redhat.component=openstack-cinder-api-container, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:22:15 standalone.localdomain podman[216295]: cinder_api
Oct 13 14:22:15 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.timer: Failed to open /run/systemd/transient/4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.timer: No such file or directory
Oct 13 14:22:15 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Failed to open /run/systemd/transient/4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: No such file or directory
Oct 13 14:22:15 standalone.localdomain podman[216636]: 2025-10-13 14:22:15.826110165 +0000 UTC m=+0.099082604 container cleanup 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, version=17.1.9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-api, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-cinder-api, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, build-date=2025-07-21T15:58:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, container_name=cinder_api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Oct 13 14:22:15 standalone.localdomain systemd[1]: libpod-conmon-4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.scope: Deactivated successfully.
Oct 13 14:22:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1432: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:22:15 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.timer: Failed to open /run/systemd/transient/4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.timer: No such file or directory
Oct 13 14:22:15 standalone.localdomain systemd[1]: 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: Failed to open /run/systemd/transient/4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e.service: No such file or directory
Oct 13 14:22:15 standalone.localdomain podman[216655]: 2025-10-13 14:22:15.903000437 +0000 UTC m=+0.049393757 container cleanup 4a1f59362a03c2025b18481002d99d2a9ee6ac2e486d884b625039e6afa2e92e (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 cinder-api, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-cinder-api-container, tcib_managed=true, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, config_id=tripleo_step4, container_name=cinder_api, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-api, build-date=2025-07-21T15:58:55, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 13 14:22:15 standalone.localdomain podman[216655]: cinder_api
Oct 13 14:22:15 standalone.localdomain systemd[1]: tripleo_cinder_api.service: Deactivated successfully.
Oct 13 14:22:15 standalone.localdomain systemd[1]: Stopped cinder_api container.
Oct 13 14:22:15 standalone.localdomain sudo[216280]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:15 standalone.localdomain sshd[216279]: Received disconnect from 192.168.122.11 port 44090:11: disconnected by user
Oct 13 14:22:15 standalone.localdomain sshd[216279]: Disconnected from user root 192.168.122.11 port 44090
Oct 13 14:22:15 standalone.localdomain sshd[216276]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:15 standalone.localdomain systemd[1]: session-125.scope: Deactivated successfully.
Oct 13 14:22:15 standalone.localdomain systemd-logind[45629]: Session 125 logged out. Waiting for processes to exit.
Oct 13 14:22:15 standalone.localdomain systemd-logind[45629]: Removed session 125.
Oct 13 14:22:15 standalone.localdomain sshd[216669]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:15 standalone.localdomain runuser[216589]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:22:16 standalone.localdomain sshd[216669]: Accepted publickey for root from 192.168.122.11 port 44096 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:16 standalone.localdomain systemd-logind[45629]: New session 126 of user root.
Oct 13 14:22:16 standalone.localdomain systemd[1]: Started Session 126 of User root.
Oct 13 14:22:16 standalone.localdomain sshd[216669]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:16 standalone.localdomain sudo[216674]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_cinder_api_cron.service
Oct 13 14:22:16 standalone.localdomain sudo[216674]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:22:16 standalone.localdomain sudo[216674]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:16 standalone.localdomain sshd[216673]: Received disconnect from 192.168.122.11 port 44096:11: disconnected by user
Oct 13 14:22:16 standalone.localdomain sshd[216673]: Disconnected from user root 192.168.122.11 port 44096
Oct 13 14:22:16 standalone.localdomain sshd[216669]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:16 standalone.localdomain systemd[1]: session-126.scope: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain systemd-logind[45629]: Session 126 logged out. Waiting for processes to exit.
Oct 13 14:22:16 standalone.localdomain systemd-logind[45629]: Removed session 126.
Oct 13 14:22:16 standalone.localdomain sshd[216696]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:16 standalone.localdomain podman[216689]: 2025-10-13 14:22:16.243724588 +0000 UTC m=+0.062065341 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T13:28:44, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, container_name=ovn_controller, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true)
Oct 13 14:22:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:22:16 standalone.localdomain podman[216689]: 2025-10-13 14:22:16.289840163 +0000 UTC m=+0.108180926 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ovn_controller, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container)
Oct 13 14:22:16 standalone.localdomain sshd[216696]: Accepted publickey for root from 192.168.122.11 port 44110 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:16 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:22:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:22:16 standalone.localdomain systemd-logind[45629]: New session 127 of user root.
Oct 13 14:22:16 standalone.localdomain systemd[1]: Started Session 127 of User root.
Oct 13 14:22:16 standalone.localdomain sshd[216696]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:16 standalone.localdomain podman[216716]: 2025-10-13 14:22:16.342841602 +0000 UTC m=+0.070992169 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, distribution-scope=public, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible)
Oct 13 14:22:16 standalone.localdomain podman[216716]: 2025-10-13 14:22:16.386190601 +0000 UTC m=+0.114341178 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9)
Oct 13 14:22:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:22:16 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:22:16 standalone.localdomain sudo[216777]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_cinder_api_cron.service
Oct 13 14:22:16 standalone.localdomain sudo[216777]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:22:16 standalone.localdomain systemd[1]: Stopping cinder_api_cron container...
Oct 13 14:22:16 standalone.localdomain podman[216738]: 2025-10-13 14:22:16.474645103 +0000 UTC m=+0.140258095 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, com.redhat.component=openstack-glance-api-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, io.buildah.version=1.33.12, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-glance-api, release=1, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, managed_by=tripleo_ansible, container_name=glance_api_cron, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, summary=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:22:16 standalone.localdomain podman[216738]: 2025-10-13 14:22:16.482949501 +0000 UTC m=+0.148562473 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, container_name=glance_api_cron, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, summary=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:22:16 standalone.localdomain podman[216728]: 2025-10-13 14:22:16.391197317 +0000 UTC m=+0.072197067 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-nova-api-container, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T16:05:11, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, container_name=nova_metadata, summary=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:22:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:22:16 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:22:16 standalone.localdomain crond[112288]: (CRON) INFO (Shutting down)
Oct 13 14:22:16 standalone.localdomain systemd[1]: libpod-bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.scope: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain podman[216830]: 2025-10-13 14:22:16.542159753 +0000 UTC m=+0.068737629 container died bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T15:58:55, description=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cinder-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-api, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-cinder-api-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, version=17.1.9, container_name=cinder_api_cron)
Oct 13 14:22:16 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.timer: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.
Oct 13 14:22:16 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Failed to open /run/systemd/transient/bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: No such file or directory
Oct 13 14:22:16 standalone.localdomain podman[216728]: 2025-10-13 14:22:16.572679673 +0000 UTC m=+0.253679393 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-07-21T16:05:11, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_metadata, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-nova-api-container, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:22:16 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:22:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:22:16 standalone.localdomain podman[216853]: 2025-10-13 14:22:16.609547581 +0000 UTC m=+0.107463505 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, name=rhosp17/openstack-swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-container-container, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 14:22:16 standalone.localdomain podman[216813]: 2025-10-13 14:22:16.611014776 +0000 UTC m=+0.175485401 container health_status 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, config_id=tripleo_step4, release=1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, build-date=2025-07-21T13:58:12, container_name=placement_api, managed_by=tripleo_ansible, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, com.redhat.component=openstack-placement-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-placement-api, summary=Red Hat OpenStack Platform 17.1 placement-api, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:22:16 standalone.localdomain podman[216855]: 2025-10-13 14:22:16.674713128 +0000 UTC m=+0.168446582 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9)
Oct 13 14:22:16 standalone.localdomain podman[216781]: 2025-10-13 14:22:16.724671003 +0000 UTC m=+0.320952728 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, name=rhosp17/openstack-swift-account, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-account, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:22:16 standalone.localdomain podman[216821]: 2025-10-13 14:22:16.629357547 +0000 UTC m=+0.170145505 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, com.redhat.component=openstack-swift-proxy-server-container, container_name=swift_proxy)
Oct 13 14:22:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d88352a2ff0b111ac4658de741be23293b9e0b9247be5bcd90b986f8e074eeae-merged.mount: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192-userdata-shm.mount: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain podman[216935]: 2025-10-13 14:22:16.781993686 +0000 UTC m=+0.125614230 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, container_name=nova_vnc_proxy, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, build-date=2025-07-21T15:24:10, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, distribution-scope=public, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-novncproxy-container, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:22:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:22:16 standalone.localdomain podman[216830]: 2025-10-13 14:22:16.792708439 +0000 UTC m=+0.319286305 container cleanup bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, name=rhosp17/openstack-cinder-api, summary=Red Hat OpenStack Platform 17.1 cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=cinder_api_cron, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, version=17.1.9, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-cinder-api-container, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T15:58:55, io.buildah.version=1.33.12, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc.)
Oct 13 14:22:16 standalone.localdomain podman[216830]: cinder_api_cron
Oct 13 14:22:16 standalone.localdomain podman[216813]: 2025-10-13 14:22:16.802253406 +0000 UTC m=+0.366724031 container exec_died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, tcib_managed=true, com.redhat.component=openstack-placement-api-container, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 placement-api, container_name=placement_api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=1, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, 
name=rhosp17/openstack-placement-api, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T13:58:12, description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:22:16 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.timer: Failed to open /run/systemd/transient/bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.timer: No such file or directory
Oct 13 14:22:16 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Failed to open /run/systemd/transient/bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: No such file or directory
Oct 13 14:22:16 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain podman[216877]: 2025-10-13 14:22:16.858667331 +0000 UTC m=+0.311252565 container cleanup bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, name=rhosp17/openstack-cinder-api, description=Red Hat OpenStack Platform 17.1 cinder-api, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-cinder-api-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., container_name=cinder_api_cron, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, build-date=2025-07-21T15:58:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cinder-api)
Oct 13 14:22:16 standalone.localdomain podman[216893]: 2025-10-13 14:22:16.875408302 +0000 UTC m=+0.277821055 container health_status c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, health_status=healthy, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, batch=17.1_20250721.1, name=rhosp17/openstack-glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-glance-api-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, container_name=glance_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, architecture=x86_64, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:22:16 standalone.localdomain systemd[1]: libpod-conmon-bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.scope: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain podman[216853]: 2025-10-13 14:22:16.893778784 +0000 UTC m=+0.391694688 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, name=rhosp17/openstack-swift-container, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack 
Platform 17.1 swift-container, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step4)
Oct 13 14:22:16 standalone.localdomain podman[216781]: 2025-10-13 14:22:16.906088566 +0000 UTC m=+0.502370281 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, architecture=x86_64, tcib_managed=true, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, description=Red Hat OpenStack Platform 17.1 swift-account, release=1, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12)
Oct 13 14:22:16 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.timer: Failed to open /run/systemd/transient/bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.timer: No such file or directory
Oct 13 14:22:16 standalone.localdomain systemd[1]: bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: Failed to open /run/systemd/transient/bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192.service: No such file or directory
Oct 13 14:22:16 standalone.localdomain podman[217010]: 2025-10-13 14:22:16.943611654 +0000 UTC m=+0.055063444 container cleanup bcf2058ea487002fddeecb66c0377138043f7cadd509353c689b33f56a297192 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1, name=cinder_api_cron, architecture=x86_64, config_id=tripleo_step4, container_name=cinder_api_cron, description=Red Hat OpenStack Platform 17.1 cinder-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-api, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=9ec4c911b0dceec2244c71f081a5b32931afa57b, summary=Red Hat OpenStack Platform 17.1 cinder-api, vcs-type=git, batch=17.1_20250721.1, release=1, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-api, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-api, build-date=2025-07-21T15:58:55, com.redhat.component=openstack-cinder-api-container, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron cinder'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/log/containers/httpd/cinder-api:/var/log/httpd:z']})
Oct 13 14:22:16 standalone.localdomain podman[216821]: 2025-10-13 14:22:16.945004667 +0000 UTC m=+0.485792595 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-swift-proxy-server, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.component=openstack-swift-proxy-server-container, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_proxy, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, release=1, vendor=Red Hat, Inc.)
Oct 13 14:22:16 standalone.localdomain podman[217010]: cinder_api_cron
Oct 13 14:22:16 standalone.localdomain ceph-mon[29756]: pgmap v1432: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:16 standalone.localdomain systemd[1]: tripleo_cinder_api_cron.service: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain systemd[1]: Stopped cinder_api_cron container.
Oct 13 14:22:16 standalone.localdomain sudo[216777]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:16 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain sshd[216755]: Received disconnect from 192.168.122.11 port 44110:11: disconnected by user
Oct 13 14:22:16 standalone.localdomain sshd[216755]: Disconnected from user root 192.168.122.11 port 44110
Oct 13 14:22:16 standalone.localdomain sshd[216696]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:16 standalone.localdomain systemd[1]: session-127.scope: Deactivated successfully.
Oct 13 14:22:16 standalone.localdomain systemd-logind[45629]: Session 127 logged out. Waiting for processes to exit.
Oct 13 14:22:16 standalone.localdomain systemd-logind[45629]: Removed session 127.
Oct 13 14:22:16 standalone.localdomain sshd[217033]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:16 standalone.localdomain podman[216855]: 2025-10-13 14:22:16.995707725 +0000 UTC m=+0.489441189 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, release=1, 
build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-swift-object-container, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:22:17 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:22:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:22:17 standalone.localdomain podman[216983]: 2025-10-13 14:22:17.081982569 +0000 UTC m=+0.286563926 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, release=1, build-date=2025-07-21T13:58:20, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=glance_api_internal, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:22:17 standalone.localdomain sshd[217033]: Accepted publickey for root from 192.168.122.11 port 44114 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:17 standalone.localdomain systemd-logind[45629]: New session 128 of user root.
Oct 13 14:22:17 standalone.localdomain podman[216893]: 2025-10-13 14:22:17.107089381 +0000 UTC m=+0.509502104 container exec_died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, vcs-type=git, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, version=17.1.9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=glance_api, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4)
Oct 13 14:22:17 standalone.localdomain systemd[1]: Started Session 128 of User root.
Oct 13 14:22:17 standalone.localdomain podman[216935]: 2025-10-13 14:22:17.120758656 +0000 UTC m=+0.464379210 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.buildah.version=1.33.12, build-date=2025-07-21T15:24:10, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-nova-novncproxy, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, container_name=nova_vnc_proxy, description=Red 
Hat OpenStack Platform 17.1 nova-novncproxy, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-novncproxy-container)
Oct 13 14:22:17 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Deactivated successfully.
Oct 13 14:22:17 standalone.localdomain sshd[217033]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:17 standalone.localdomain podman[217044]: 2025-10-13 14:22:17.158157709 +0000 UTC m=+0.072028441 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, vcs-type=git, version=17.1.9, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, config_id=tripleo_step4, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:22:17 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:22:17 standalone.localdomain sudo[217075]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_cinder_scheduler.service
Oct 13 14:22:17 standalone.localdomain sudo[217075]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:17 standalone.localdomain sudo[217075]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:17 standalone.localdomain sshd[217064]: Received disconnect from 192.168.122.11 port 44114:11: disconnected by user
Oct 13 14:22:17 standalone.localdomain sshd[217064]: Disconnected from user root 192.168.122.11 port 44114
Oct 13 14:22:17 standalone.localdomain sshd[217033]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:17 standalone.localdomain systemd[1]: session-128.scope: Deactivated successfully.
Oct 13 14:22:17 standalone.localdomain systemd-logind[45629]: Session 128 logged out. Waiting for processes to exit.
Oct 13 14:22:17 standalone.localdomain systemd-logind[45629]: Removed session 128.
Oct 13 14:22:17 standalone.localdomain sshd[217092]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:17 standalone.localdomain podman[216983]: 2025-10-13 14:22:17.285812821 +0000 UTC m=+0.490394188 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, vcs-type=git, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:22:17 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:22:17 standalone.localdomain sshd[217092]: Accepted publickey for root from 192.168.122.11 port 44116 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:17 standalone.localdomain systemd-logind[45629]: New session 129 of user root.
Oct 13 14:22:17 standalone.localdomain systemd[1]: Started Session 129 of User root.
Oct 13 14:22:17 standalone.localdomain sshd[217092]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:17 standalone.localdomain sudo[217106]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_cinder_scheduler.service
Oct 13 14:22:17 standalone.localdomain podman[217044]: 2025-10-13 14:22:17.478872518 +0000 UTC m=+0.392743240 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, release=1, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:22:17 standalone.localdomain sudo[217106]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:17 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:22:17 standalone.localdomain systemd[1]: Stopping cinder_scheduler container...
Oct 13 14:22:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1433: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:22:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3558181110' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:22:18 standalone.localdomain ceph-mon[29756]: pgmap v1433: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3558181110' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:22:18 standalone.localdomain haproxy[70940]: 172.17.0.2:48148 [13/Oct/2025:14:22:18.973] placement placement/standalone.internalapi.localdomain 0/0/0/11/11 200 497 - - ---- 43/1/0/0/0 0/0 "GET /placement/resource_providers/e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269/allocations HTTP/1.1"
Oct 13 14:22:19 standalone.localdomain haproxy[70940]: 172.17.0.100:57749 [13/Oct/2025:14:17:18.252] mysql mysql/standalone.internalapi.localdomain 1/0/301009 88053 -- 43/42/41/41/0 0/0
Oct 13 14:22:19 standalone.localdomain systemd[1]: libpod-e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.scope: Deactivated successfully.
Oct 13 14:22:19 standalone.localdomain systemd[1]: libpod-e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.scope: Consumed 4.205s CPU time.
Oct 13 14:22:19 standalone.localdomain podman[217121]: 2025-10-13 14:22:19.365379013 +0000 UTC m=+1.844085956 container died e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.33.12, build-date=2025-07-21T16:10:12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cinder-scheduler, version=17.1.9, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, io.openshift.expose-services=, architecture=x86_64, container_name=cinder_scheduler, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, vcs-type=git, 
com.redhat.component=openstack-cinder-scheduler-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cinder-scheduler, tcib_managed=true, release=1, distribution-scope=public)
Oct 13 14:22:19 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.timer: Deactivated successfully.
Oct 13 14:22:19 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.
Oct 13 14:22:19 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Failed to open /run/systemd/transient/e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: No such file or directory
Oct 13 14:22:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36-userdata-shm.mount: Deactivated successfully.
Oct 13 14:22:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7e167f4177388f76c1ad6441c898b0a79fb6ad068ac7f87433fc33b2068b41b1-merged.mount: Deactivated successfully.
Oct 13 14:22:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 14:22:19 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2844561545' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:22:19 standalone.localdomain podman[217121]: 2025-10-13 14:22:19.440434549 +0000 UTC m=+1.919141452 container cleanup e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, build-date=2025-07-21T16:10:12, com.redhat.component=openstack-cinder-scheduler-container, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=cinder_scheduler, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, batch=17.1_20250721.1, name=rhosp17/openstack-cinder-scheduler, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, release=1, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:22:19 standalone.localdomain podman[217121]: cinder_scheduler
Oct 13 14:22:19 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.timer: Failed to open /run/systemd/transient/e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.timer: No such file or directory
Oct 13 14:22:19 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Failed to open /run/systemd/transient/e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: No such file or directory
Oct 13 14:22:19 standalone.localdomain podman[217184]: 2025-10-13 14:22:19.454754884 +0000 UTC m=+0.082945471 container cleanup e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, build-date=2025-07-21T16:10:12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, container_name=cinder_scheduler, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-cinder-scheduler-container, summary=Red Hat OpenStack Platform 17.1 
cinder-scheduler, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.expose-services=, name=rhosp17/openstack-cinder-scheduler, release=1, vendor=Red Hat, Inc., vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f)
Oct 13 14:22:19 standalone.localdomain systemd[1]: libpod-conmon-e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.scope: Deactivated successfully.
Oct 13 14:22:19 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.timer: Failed to open /run/systemd/transient/e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.timer: No such file or directory
Oct 13 14:22:19 standalone.localdomain systemd[1]: e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: Failed to open /run/systemd/transient/e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36.service: No such file or directory
Oct 13 14:22:19 standalone.localdomain podman[217209]: 2025-10-13 14:22:19.538026615 +0000 UTC m=+0.048368545 container cleanup e09e15fd89b5359684c8245e90c864d5bf67c3a5ade6dd5607b57820fb9bbb36 (image=registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1, name=cinder_scheduler, com.redhat.component=openstack-cinder-scheduler-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cinder-scheduler, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bf2556c9454e19b68e3daf4f11f84a81'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cinder-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/cinder:/var/lib/kolla/config_files/src:ro', '/var/log/containers/cinder:/var/log/cinder:z', '/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-scheduler, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cinder-scheduler, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-cinder-scheduler, tcib_managed=true, config_id=tripleo_step4, build-date=2025-07-21T16:10:12, vcs-ref=7b7413c44031c9981e6a5da4ae1e648d171f240f, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, 
batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-scheduler/images/17.1.9-1, release=1, container_name=cinder_scheduler, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-scheduler)
Oct 13 14:22:19 standalone.localdomain podman[217209]: cinder_scheduler
Oct 13 14:22:19 standalone.localdomain systemd[1]: tripleo_cinder_scheduler.service: Deactivated successfully.
Oct 13 14:22:19 standalone.localdomain systemd[1]: Stopped cinder_scheduler container.
Oct 13 14:22:19 standalone.localdomain sudo[217106]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:19 standalone.localdomain sshd[217105]: Received disconnect from 192.168.122.11 port 44116:11: disconnected by user
Oct 13 14:22:19 standalone.localdomain sshd[217105]: Disconnected from user root 192.168.122.11 port 44116
Oct 13 14:22:19 standalone.localdomain sshd[217092]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:19 standalone.localdomain systemd[1]: session-129.scope: Deactivated successfully.
Oct 13 14:22:19 standalone.localdomain sshd[217222]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:19 standalone.localdomain systemd-logind[45629]: Session 129 logged out. Waiting for processes to exit.
Oct 13 14:22:19 standalone.localdomain systemd-logind[45629]: Removed session 129.
Oct 13 14:22:19 standalone.localdomain sshd[217222]: Accepted publickey for root from 192.168.122.11 port 44124 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:19 standalone.localdomain systemd-logind[45629]: New session 130 of user root.
Oct 13 14:22:19 standalone.localdomain systemd[1]: Started Session 130 of User root.
Oct 13 14:22:19 standalone.localdomain sshd[217222]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:19 standalone.localdomain sudo[217226]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_cinder_volume.service
Oct 13 14:22:19 standalone.localdomain sudo[217226]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:19 standalone.localdomain sudo[217226]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:19 standalone.localdomain sshd[217225]: Received disconnect from 192.168.122.11 port 44124:11: disconnected by user
Oct 13 14:22:19 standalone.localdomain sshd[217225]: Disconnected from user root 192.168.122.11 port 44124
Oct 13 14:22:19 standalone.localdomain sshd[217222]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:19 standalone.localdomain systemd[1]: session-130.scope: Deactivated successfully.
Oct 13 14:22:19 standalone.localdomain systemd-logind[45629]: Session 130 logged out. Waiting for processes to exit.
Oct 13 14:22:19 standalone.localdomain systemd-logind[45629]: Removed session 130.
Oct 13 14:22:19 standalone.localdomain sshd[217241]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1434: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:19 standalone.localdomain sshd[217241]: Accepted publickey for root from 192.168.122.11 port 52006 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:19 standalone.localdomain systemd-logind[45629]: New session 131 of user root.
Oct 13 14:22:20 standalone.localdomain systemd[1]: Started Session 131 of User root.
Oct 13 14:22:20 standalone.localdomain sshd[217241]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:20 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2844561545' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 14:22:20 standalone.localdomain sudo[217245]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_cinder_backup.service
Oct 13 14:22:20 standalone.localdomain sudo[217245]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:20 standalone.localdomain sudo[217245]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:20 standalone.localdomain sshd[217244]: Received disconnect from 192.168.122.11 port 52006:11: disconnected by user
Oct 13 14:22:20 standalone.localdomain sshd[217244]: Disconnected from user root 192.168.122.11 port 52006
Oct 13 14:22:20 standalone.localdomain sshd[217241]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:20 standalone.localdomain systemd[1]: session-131.scope: Deactivated successfully.
Oct 13 14:22:20 standalone.localdomain systemd-logind[45629]: Session 131 logged out. Waiting for processes to exit.
Oct 13 14:22:20 standalone.localdomain systemd-logind[45629]: Removed session 131.
Oct 13 14:22:20 standalone.localdomain sshd[217260]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:20 standalone.localdomain sshd[217260]: Accepted publickey for root from 192.168.122.11 port 52020 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:20 standalone.localdomain systemd-logind[45629]: New session 132 of user root.
Oct 13 14:22:20 standalone.localdomain systemd[1]: Started Session 132 of User root.
Oct 13 14:22:20 standalone.localdomain sshd[217260]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:20 standalone.localdomain sudo[217264]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_collectd.service
Oct 13 14:22:20 standalone.localdomain sudo[217264]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:20 standalone.localdomain sudo[217264]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:20 standalone.localdomain sshd[217263]: Received disconnect from 192.168.122.11 port 52020:11: disconnected by user
Oct 13 14:22:20 standalone.localdomain sshd[217263]: Disconnected from user root 192.168.122.11 port 52020
Oct 13 14:22:20 standalone.localdomain sshd[217260]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:20 standalone.localdomain systemd[1]: session-132.scope: Deactivated successfully.
Oct 13 14:22:20 standalone.localdomain systemd-logind[45629]: Session 132 logged out. Waiting for processes to exit.
Oct 13 14:22:20 standalone.localdomain systemd-logind[45629]: Removed session 132.
Oct 13 14:22:20 standalone.localdomain sshd[217279]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:20 standalone.localdomain sshd[217279]: Accepted publickey for root from 192.168.122.11 port 52034 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:20 standalone.localdomain systemd-logind[45629]: New session 133 of user root.
Oct 13 14:22:20 standalone.localdomain systemd[1]: Started Session 133 of User root.
Oct 13 14:22:20 standalone.localdomain sshd[217279]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:20 standalone.localdomain sudo[217291]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_glance_api.service
Oct 13 14:22:20 standalone.localdomain sudo[217291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:20 standalone.localdomain sudo[217291]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:20 standalone.localdomain sshd[217290]: Received disconnect from 192.168.122.11 port 52034:11: disconnected by user
Oct 13 14:22:20 standalone.localdomain sshd[217290]: Disconnected from user root 192.168.122.11 port 52034
Oct 13 14:22:20 standalone.localdomain sshd[217279]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:20 standalone.localdomain systemd[1]: session-133.scope: Deactivated successfully.
Oct 13 14:22:20 standalone.localdomain systemd-logind[45629]: Session 133 logged out. Waiting for processes to exit.
Oct 13 14:22:20 standalone.localdomain systemd-logind[45629]: Removed session 133.
Oct 13 14:22:20 standalone.localdomain sshd[217307]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:20 standalone.localdomain sshd[217307]: Accepted publickey for root from 192.168.122.11 port 52050 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:20 standalone.localdomain systemd-logind[45629]: New session 134 of user root.
Oct 13 14:22:20 standalone.localdomain systemd[1]: Started Session 134 of User root.
Oct 13 14:22:20 standalone.localdomain sshd[217307]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:20 standalone.localdomain sudo[217311]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_glance_api.service
Oct 13 14:22:20 standalone.localdomain sudo[217311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:22:20 standalone.localdomain systemd[1]: Stopping glance_api container...
Oct 13 14:22:21 standalone.localdomain haproxy[70940]: 172.17.0.100:38317 [13/Oct/2025:14:10:51.120] mysql mysql/standalone.internalapi.localdomain 1/0/689908 159376 -- 42/41/40/40/0 0/0
Oct 13 14:22:21 standalone.localdomain ceph-mon[29756]: pgmap v1434: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:21 standalone.localdomain systemd[1]: libpod-c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.scope: Deactivated successfully.
Oct 13 14:22:21 standalone.localdomain systemd[1]: libpod-c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.scope: Consumed 20.504s CPU time.
Oct 13 14:22:21 standalone.localdomain podman[217326]: 2025-10-13 14:22:21.316535611 +0000 UTC m=+0.377822106 container died c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, name=rhosp17/openstack-glance-api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api, architecture=x86_64, com.redhat.component=openstack-glance-api-container)
Oct 13 14:22:21 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.timer: Deactivated successfully.
Oct 13 14:22:21 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.
Oct 13 14:22:21 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Failed to open /run/systemd/transient/c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: No such file or directory
Oct 13 14:22:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a-userdata-shm.mount: Deactivated successfully.
Oct 13 14:22:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b6dc651db1dd688436f3130013ee90dcee561a6fcd9f4951c6044c10caca8f67-merged.mount: Deactivated successfully.
Oct 13 14:22:21 standalone.localdomain podman[217326]: 2025-10-13 14:22:21.360629022 +0000 UTC m=+0.421915487 container cleanup c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api, version=17.1.9, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T13:58:20, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:22:21 standalone.localdomain podman[217326]: glance_api
Oct 13 14:22:21 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.timer: Failed to open /run/systemd/transient/c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.timer: No such file or directory
Oct 13 14:22:21 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Failed to open /run/systemd/transient/c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: No such file or directory
Oct 13 14:22:21 standalone.localdomain podman[217340]: 2025-10-13 14:22:21.422249979 +0000 UTC m=+0.092517919 container cleanup c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-glance-api-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:22:21 standalone.localdomain systemd[1]: libpod-conmon-c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.scope: Deactivated successfully.
Oct 13 14:22:21 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.timer: Failed to open /run/systemd/transient/c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.timer: No such file or directory
Oct 13 14:22:21 standalone.localdomain systemd[1]: c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: Failed to open /run/systemd/transient/c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a.service: No such file or directory
Oct 13 14:22:21 standalone.localdomain podman[217434]: 2025-10-13 14:22:21.501744193 +0000 UTC m=+0.049780110 container cleanup c3668dc48d54f4f351e3dc3e70584c26b311b31464eaaa985f0c036ac5a7715a (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:20, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, summary=Red Hat OpenStack 
Platform 17.1 glance-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-glance-api-container, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 13 14:22:21 standalone.localdomain podman[217434]: glance_api
Oct 13 14:22:21 standalone.localdomain systemd[1]: tripleo_glance_api.service: Deactivated successfully.
Oct 13 14:22:21 standalone.localdomain systemd[1]: Stopped glance_api container.
Oct 13 14:22:21 standalone.localdomain sudo[217311]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:21 standalone.localdomain sshd[217310]: Received disconnect from 192.168.122.11 port 52050:11: disconnected by user
Oct 13 14:22:21 standalone.localdomain sshd[217310]: Disconnected from user root 192.168.122.11 port 52050
Oct 13 14:22:21 standalone.localdomain sshd[217307]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:21 standalone.localdomain systemd[1]: session-134.scope: Deactivated successfully.
Oct 13 14:22:21 standalone.localdomain systemd-logind[45629]: Session 134 logged out. Waiting for processes to exit.
Oct 13 14:22:21 standalone.localdomain systemd-logind[45629]: Removed session 134.
Oct 13 14:22:21 standalone.localdomain sshd[217457]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:21 standalone.localdomain sshd[217457]: Accepted publickey for root from 192.168.122.11 port 52058 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:21 standalone.localdomain systemd-logind[45629]: New session 135 of user root.
Oct 13 14:22:21 standalone.localdomain systemd[1]: Started Session 135 of User root.
Oct 13 14:22:21 standalone.localdomain sshd[217457]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:21 standalone.localdomain sudo[217461]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_gnocchi_api.service
Oct 13 14:22:21 standalone.localdomain sudo[217461]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:21 standalone.localdomain sudo[217461]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:21 standalone.localdomain sshd[217460]: Received disconnect from 192.168.122.11 port 52058:11: disconnected by user
Oct 13 14:22:21 standalone.localdomain sshd[217460]: Disconnected from user root 192.168.122.11 port 52058
Oct 13 14:22:21 standalone.localdomain sshd[217457]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:21 standalone.localdomain systemd[1]: session-135.scope: Deactivated successfully.
Oct 13 14:22:21 standalone.localdomain systemd-logind[45629]: Session 135 logged out. Waiting for processes to exit.
Oct 13 14:22:21 standalone.localdomain systemd-logind[45629]: Removed session 135.
Oct 13 14:22:21 standalone.localdomain sshd[217476]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:21 standalone.localdomain sshd[217476]: Accepted publickey for root from 192.168.122.11 port 52070 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1435: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:21 standalone.localdomain systemd-logind[45629]: New session 136 of user root.
Oct 13 14:22:21 standalone.localdomain systemd[1]: Started Session 136 of User root.
Oct 13 14:22:21 standalone.localdomain sshd[217476]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:22 standalone.localdomain sudo[217480]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_gnocchi_metricd.service
Oct 13 14:22:22 standalone.localdomain sudo[217480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:22 standalone.localdomain sudo[217480]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:22 standalone.localdomain sshd[217479]: Received disconnect from 192.168.122.11 port 52070:11: disconnected by user
Oct 13 14:22:22 standalone.localdomain ceph-mon[29756]: pgmap v1435: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:22 standalone.localdomain sshd[217479]: Disconnected from user root 192.168.122.11 port 52070
Oct 13 14:22:22 standalone.localdomain sshd[217476]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:22 standalone.localdomain systemd[1]: session-136.scope: Deactivated successfully.
Oct 13 14:22:22 standalone.localdomain systemd-logind[45629]: Session 136 logged out. Waiting for processes to exit.
Oct 13 14:22:22 standalone.localdomain systemd-logind[45629]: Removed session 136.
Oct 13 14:22:22 standalone.localdomain sshd[217495]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:22 standalone.localdomain sshd[217495]: Accepted publickey for root from 192.168.122.11 port 52076 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:22 standalone.localdomain systemd-logind[45629]: New session 137 of user root.
Oct 13 14:22:22 standalone.localdomain systemd[1]: Started Session 137 of User root.
Oct 13 14:22:22 standalone.localdomain sshd[217495]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:22 standalone.localdomain haproxy[70940]: 172.17.0.2:46082 [13/Oct/2025:14:22:22.269] keystone_public keystone_public/<NOSRV> 0/-1/-1/-1/0 503 217 - - SC-- 43/1/0/0/0 0/0 "GET /v3/auth/tokens HTTP/1.1"
Oct 13 14:22:22 standalone.localdomain haproxy[70940]: 172.17.0.2:55418 [13/Oct/2025:14:22:22.260] neutron neutron/standalone.internalapi.localdomain 0/0/0/11/11 503 409 - - ---- 43/1/0/0/0 0/0 "GET /v2.0/ports?device_id=54a46fec-332e-42f9-83ed-88e763d13f63&fields=binding%3Ahost_id&fields=binding%3Avif_type HTTP/1.1"
Oct 13 14:22:22 standalone.localdomain sudo[217540]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_gnocchi_statsd.service
Oct 13 14:22:22 standalone.localdomain sudo[217540]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:22 standalone.localdomain sudo[217540]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:22 standalone.localdomain sshd[217539]: Received disconnect from 192.168.122.11 port 52076:11: disconnected by user
Oct 13 14:22:22 standalone.localdomain sshd[217539]: Disconnected from user root 192.168.122.11 port 52076
Oct 13 14:22:22 standalone.localdomain sshd[217495]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:22 standalone.localdomain systemd[1]: session-137.scope: Deactivated successfully.
Oct 13 14:22:22 standalone.localdomain systemd-logind[45629]: Session 137 logged out. Waiting for processes to exit.
Oct 13 14:22:22 standalone.localdomain systemd-logind[45629]: Removed session 137.
Oct 13 14:22:22 standalone.localdomain sshd[217555]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:22 standalone.localdomain sshd[217555]: Accepted publickey for root from 192.168.122.11 port 52080 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:22 standalone.localdomain systemd-logind[45629]: New session 138 of user root.
Oct 13 14:22:22 standalone.localdomain systemd[1]: Started Session 138 of User root.
Oct 13 14:22:22 standalone.localdomain sshd[217555]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:22 standalone.localdomain sudo[217567]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_manila_api.service
Oct 13 14:22:22 standalone.localdomain sudo[217567]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:22 standalone.localdomain sudo[217567]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:22 standalone.localdomain sshd[217558]: Received disconnect from 192.168.122.11 port 52080:11: disconnected by user
Oct 13 14:22:22 standalone.localdomain sshd[217558]: Disconnected from user root 192.168.122.11 port 52080
Oct 13 14:22:22 standalone.localdomain sshd[217555]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:22 standalone.localdomain systemd[1]: session-138.scope: Deactivated successfully.
Oct 13 14:22:22 standalone.localdomain systemd-logind[45629]: Session 138 logged out. Waiting for processes to exit.
Oct 13 14:22:22 standalone.localdomain systemd-logind[45629]: Removed session 138.
Oct 13 14:22:22 standalone.localdomain sshd[217582]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:22 standalone.localdomain sshd[217582]: Accepted publickey for root from 192.168.122.11 port 52088 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:22 standalone.localdomain systemd-logind[45629]: New session 139 of user root.
Oct 13 14:22:22 standalone.localdomain systemd[1]: Started Session 139 of User root.
Oct 13 14:22:22 standalone.localdomain sshd[217582]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:22 standalone.localdomain sudo[217627]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_manila_api.service
Oct 13 14:22:22 standalone.localdomain sudo[217627]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:22 standalone.localdomain systemd[1]: Stopping manila_api container...
Oct 13 14:22:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:22:23 standalone.localdomain systemd[1]: tmp-crun.7QBfIk.mount: Deactivated successfully.
Oct 13 14:22:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:22:23 standalone.localdomain podman[217643]: 2025-10-13 14:22:23.056576779 +0000 UTC m=+0.179581588 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, release=1, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-keystone-container, vcs-type=git, build-date=2025-07-21T13:27:18, distribution-scope=public, container_name=keystone_cron, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:22:23 standalone.localdomain podman[217643]: 2025-10-13 14:22:23.065861258 +0000 UTC m=+0.188866027 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T13:27:18, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, 
batch=17.1_20250721.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-keystone, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-keystone-container, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step3)
Oct 13 14:22:23 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:22:23
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'manila_metadata', '.mgr', 'manila_data', 'vms', 'images', 'volumes']
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:22:23 standalone.localdomain podman[217677]: 2025-10-13 14:22:23.1568636 +0000 UTC m=+0.098288229 container health_status 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, health_status=healthy, build-date=2025-07-21T16:05:11, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.component=openstack-nova-api-container, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, container_name=nova_api, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-api, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 
nova-api, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:22:23 standalone.localdomain podman[217677]: 2025-10-13 14:22:23.288869757 +0000 UTC m=+0.230294346 container exec_died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, build-date=2025-07-21T16:05:11, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-api, batch=17.1_20250721.1, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, 
managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, container_name=nova_api, summary=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:22:23 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Deactivated successfully.
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:22:23 standalone.localdomain haproxy[70940]: Server cinder/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 14:22:23 standalone.localdomain haproxy[70940]: proxy cinder has no server available!
Oct 13 14:22:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1436: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:23 standalone.localdomain systemd[1]: tmp-crun.H6BmWq.mount: Deactivated successfully.
Oct 13 14:22:24 standalone.localdomain ceph-mon[29756]: pgmap v1436: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:22:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:22:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:22:25 standalone.localdomain systemd[1]: tmp-crun.vhUiad.mount: Deactivated successfully.
Oct 13 14:22:25 standalone.localdomain podman[217820]: 2025-10-13 14:22:25.294054445 +0000 UTC m=+0.055869899 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, distribution-scope=public, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, com.redhat.component=openstack-neutron-sriov-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T16:03:34, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, 
config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 14:22:25 standalone.localdomain podman[217820]: 2025-10-13 14:22:25.36683833 +0000 UTC m=+0.128653764 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, 
com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent, io.openshift.expose-services=, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T16:03:34, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:22:25 standalone.localdomain systemd[1]: tmp-crun.d7DVz6.mount: Deactivated successfully.
Oct 13 14:22:25 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:22:25 standalone.localdomain podman[217819]: 2025-10-13 14:22:25.393071336 +0000 UTC m=+0.157203152 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', 
'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-neutron-dhcp-agent, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:54, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, tcib_managed=true, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-dhcp-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team)
Oct 13 14:22:25 standalone.localdomain podman[217821]: 2025-10-13 14:22:25.494308795 +0000 UTC m=+0.246047946 container health_status 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-07-21T16:05:11, com.redhat.component=openstack-nova-api-container, vcs-type=git, release=1, container_name=nova_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, 
io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-api, config_id=tripleo_step4)
Oct 13 14:22:25 standalone.localdomain podman[217821]: 2025-10-13 14:22:25.507401473 +0000 UTC m=+0.259140714 container exec_died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.component=openstack-nova-api-container, config_id=tripleo_step4, name=rhosp17/openstack-nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, container_name=nova_api_cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, build-date=2025-07-21T16:05:11)
Oct 13 14:22:25 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Deactivated successfully.
Oct 13 14:22:25 standalone.localdomain podman[217819]: 2025-10-13 14:22:25.566917375 +0000 UTC m=+0.331049241 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-dhcp-agent, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., release=1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=neutron_dhcp, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container, architecture=x86_64)
Oct 13 14:22:25 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:22:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1437: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:22:26 standalone.localdomain systemd[1]: libpod-2f21788ca9e8bc9b7b15f5240d61deed89691ca825526b0f4d0e5b27742d7038.scope: Deactivated successfully.
Oct 13 14:22:26 standalone.localdomain systemd[1]: libpod-2f21788ca9e8bc9b7b15f5240d61deed89691ca825526b0f4d0e5b27742d7038.scope: Consumed 7.094s CPU time.
Oct 13 14:22:26 standalone.localdomain podman[217642]: 2025-10-13 14:22:26.039910661 +0000 UTC m=+3.162253010 container died 2f21788ca9e8bc9b7b15f5240d61deed89691ca825526b0f4d0e5b27742d7038 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, config_id=tripleo_step4, architecture=x86_64, container_name=manila_api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-manila-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-manila-api, build-date=2025-07-21T16:06:43, description=Red Hat OpenStack Platform 17.1 manila-api, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']})
Oct 13 14:22:26 standalone.localdomain podman[217642]: 2025-10-13 14:22:26.075179798 +0000 UTC m=+3.197522097 container cleanup 2f21788ca9e8bc9b7b15f5240d61deed89691ca825526b0f4d0e5b27742d7038 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=manila_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:06:43, description=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, summary=Red Hat OpenStack Platform 17.1 manila-api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-manila-api-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1)
Oct 13 14:22:26 standalone.localdomain podman[217642]: manila_api
Oct 13 14:22:26 standalone.localdomain podman[217892]: 2025-10-13 14:22:26.09710634 +0000 UTC m=+0.051919226 container cleanup 2f21788ca9e8bc9b7b15f5240d61deed89691ca825526b0f4d0e5b27742d7038 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api, maintainer=OpenStack TripleO Team, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, architecture=x86_64, container_name=manila_api, summary=Red Hat OpenStack Platform 17.1 manila-api, vendor=Red Hat, Inc., build-date=2025-07-21T16:06:43, io.openshift.expose-services=, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, config_id=tripleo_step4, name=rhosp17/openstack-manila-api, com.redhat.component=openstack-manila-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, tcib_managed=true, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api)
Oct 13 14:22:26 standalone.localdomain systemd[1]: libpod-conmon-2f21788ca9e8bc9b7b15f5240d61deed89691ca825526b0f4d0e5b27742d7038.scope: Deactivated successfully.
Oct 13 14:22:26 standalone.localdomain podman[217906]: 2025-10-13 14:22:26.184660214 +0000 UTC m=+0.054697982 container cleanup 2f21788ca9e8bc9b7b15f5240d61deed89691ca825526b0f4d0e5b27742d7038 (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api, container_name=manila_api, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, release=1, summary=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, tcib_managed=true, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 manila-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, build-date=2025-07-21T16:06:43, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-manila-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-manila-api, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:22:26 standalone.localdomain podman[217906]: manila_api
Oct 13 14:22:26 standalone.localdomain systemd[1]: tripleo_manila_api.service: Deactivated successfully.
Oct 13 14:22:26 standalone.localdomain systemd[1]: Stopped manila_api container.
Oct 13 14:22:26 standalone.localdomain sudo[217627]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:26 standalone.localdomain sshd[217626]: Received disconnect from 192.168.122.11 port 52088:11: disconnected by user
Oct 13 14:22:26 standalone.localdomain sshd[217626]: Disconnected from user root 192.168.122.11 port 52088
Oct 13 14:22:26 standalone.localdomain sshd[217582]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:26 standalone.localdomain systemd[1]: session-139.scope: Deactivated successfully.
Oct 13 14:22:26 standalone.localdomain systemd-logind[45629]: Session 139 logged out. Waiting for processes to exit.
Oct 13 14:22:26 standalone.localdomain systemd-logind[45629]: Removed session 139.
Oct 13 14:22:26 standalone.localdomain sshd[217917]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1d229f95c042aa50f015ecf04790d97311b8723b668a1b17e0a931684b436bc8-merged.mount: Deactivated successfully.
Oct 13 14:22:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f21788ca9e8bc9b7b15f5240d61deed89691ca825526b0f4d0e5b27742d7038-userdata-shm.mount: Deactivated successfully.
Oct 13 14:22:26 standalone.localdomain sshd[217917]: Accepted publickey for root from 192.168.122.11 port 52102 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:26 standalone.localdomain systemd-logind[45629]: New session 140 of user root.
Oct 13 14:22:26 standalone.localdomain systemd[1]: Started Session 140 of User root.
Oct 13 14:22:26 standalone.localdomain sshd[217917]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:26 standalone.localdomain sudo[217921]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_manila_api_cron.service
Oct 13 14:22:26 standalone.localdomain sudo[217921]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:26 standalone.localdomain sudo[217921]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:26 standalone.localdomain sshd[217920]: Received disconnect from 192.168.122.11 port 52102:11: disconnected by user
Oct 13 14:22:26 standalone.localdomain sshd[217920]: Disconnected from user root 192.168.122.11 port 52102
Oct 13 14:22:26 standalone.localdomain sshd[217917]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:26 standalone.localdomain systemd[1]: session-140.scope: Deactivated successfully.
Oct 13 14:22:26 standalone.localdomain systemd-logind[45629]: Session 140 logged out. Waiting for processes to exit.
Oct 13 14:22:26 standalone.localdomain systemd-logind[45629]: Removed session 140.
Oct 13 14:22:26 standalone.localdomain sshd[217936]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:26 standalone.localdomain runuser[217942]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:22:26 standalone.localdomain sshd[217936]: Accepted publickey for root from 192.168.122.11 port 52108 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:26 standalone.localdomain systemd-logind[45629]: New session 141 of user root.
Oct 13 14:22:26 standalone.localdomain systemd[1]: Started Session 141 of User root.
Oct 13 14:22:26 standalone.localdomain sshd[217936]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:26 standalone.localdomain sudo[217997]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_manila_api_cron.service
Oct 13 14:22:26 standalone.localdomain sudo[217997]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:26 standalone.localdomain systemd[1]: Stopping manila_api_cron container...
Oct 13 14:22:26 standalone.localdomain systemd[1]: tmp-crun.XRuG4j.mount: Deactivated successfully.
Oct 13 14:22:26 standalone.localdomain crond[113315]: (CRON) INFO (Shutting down)
Oct 13 14:22:26 standalone.localdomain systemd[1]: libpod-7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.scope: Deactivated successfully.
Oct 13 14:22:26 standalone.localdomain podman[218012]: 2025-10-13 14:22:26.833157661 +0000 UTC m=+0.083144498 container died 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=, name=rhosp17/openstack-manila-api, release=1, com.redhat.component=openstack-manila-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, build-date=2025-07-21T16:06:43, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, container_name=manila_api_cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:22:26 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.timer: Deactivated successfully.
Oct 13 14:22:26 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.
Oct 13 14:22:26 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Failed to open /run/systemd/transient/7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: No such file or directory
Oct 13 14:22:26 standalone.localdomain podman[218012]: 2025-10-13 14:22:26.865087864 +0000 UTC m=+0.115074681 container cleanup 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-manila-api, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 manila-api, vcs-type=git, com.redhat.component=openstack-manila-api-container, architecture=x86_64, container_name=manila_api_cron, description=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, release=1, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, build-date=2025-07-21T16:06:43, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1)
Oct 13 14:22:26 standalone.localdomain podman[218012]: manila_api_cron
Oct 13 14:22:26 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.timer: Failed to open /run/systemd/transient/7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.timer: No such file or directory
Oct 13 14:22:26 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Failed to open /run/systemd/transient/7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: No such file or directory
Oct 13 14:22:26 standalone.localdomain podman[218026]: 2025-10-13 14:22:26.909107224 +0000 UTC m=+0.063010051 container cleanup 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, architecture=x86_64, build-date=2025-07-21T16:06:43, description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 manila-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, container_name=manila_api_cron, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, name=rhosp17/openstack-manila-api, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, com.redhat.component=openstack-manila-api-container, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:22:26 standalone.localdomain systemd[1]: libpod-conmon-7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.scope: Deactivated successfully.
Oct 13 14:22:26 standalone.localdomain ceph-mon[29756]: pgmap v1437: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:26 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.timer: Failed to open /run/systemd/transient/7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.timer: No such file or directory
Oct 13 14:22:26 standalone.localdomain systemd[1]: 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: Failed to open /run/systemd/transient/7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f.service: No such file or directory
Oct 13 14:22:26 standalone.localdomain podman[218041]: 2025-10-13 14:22:26.998455694 +0000 UTC m=+0.061077451 container cleanup 7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f (image=registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1, name=manila_api_cron, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, batch=17.1_20250721.1, name=rhosp17/openstack-manila-api, config_id=tripleo_step4, build-date=2025-07-21T16:06:43, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron manila'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z', '/var/log/containers/httpd/manila-api:/var/log/httpd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-api, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 manila-api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=31c319d283854993bff29633e1e018a905c8b3a8, distribution-scope=public, com.redhat.component=openstack-manila-api-container, container_name=manila_api_cron, description=Red Hat OpenStack Platform 17.1 manila-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 manila-api, io.openshift.expose-services=)
Oct 13 14:22:26 standalone.localdomain podman[218041]: manila_api_cron
Oct 13 14:22:27 standalone.localdomain systemd[1]: tripleo_manila_api_cron.service: Deactivated successfully.
Oct 13 14:22:27 standalone.localdomain systemd[1]: Stopped manila_api_cron container.
Oct 13 14:22:27 standalone.localdomain sudo[217997]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:27 standalone.localdomain sshd[217996]: Received disconnect from 192.168.122.11 port 52108:11: disconnected by user
Oct 13 14:22:27 standalone.localdomain sshd[217996]: Disconnected from user root 192.168.122.11 port 52108
Oct 13 14:22:27 standalone.localdomain sshd[217936]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:27 standalone.localdomain systemd[1]: session-141.scope: Deactivated successfully.
Oct 13 14:22:27 standalone.localdomain systemd-logind[45629]: Session 141 logged out. Waiting for processes to exit.
Oct 13 14:22:27 standalone.localdomain systemd-logind[45629]: Removed session 141.
Oct 13 14:22:27 standalone.localdomain sshd[218054]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:27 standalone.localdomain sshd[218054]: Accepted publickey for root from 192.168.122.11 port 52114 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:27 standalone.localdomain systemd-logind[45629]: New session 142 of user root.
Oct 13 14:22:27 standalone.localdomain systemd[1]: Started Session 142 of User root.
Oct 13 14:22:27 standalone.localdomain sshd[218054]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:27 standalone.localdomain runuser[217942]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:22:27 standalone.localdomain sudo[218074]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_manila_scheduler.service
Oct 13 14:22:27 standalone.localdomain sudo[218074]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ea91c370b256a874ce5d07e2995f6a00f5191e3d387d95202833d1f28499d361-merged.mount: Deactivated successfully.
Oct 13 14:22:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ca43aba12eb6d31130b11d6d00d36d4f5d2e1bf793d6b2ad3541c950646a01f-userdata-shm.mount: Deactivated successfully.
Oct 13 14:22:27 standalone.localdomain sudo[218074]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:27 standalone.localdomain sshd[218064]: Received disconnect from 192.168.122.11 port 52114:11: disconnected by user
Oct 13 14:22:27 standalone.localdomain sshd[218064]: Disconnected from user root 192.168.122.11 port 52114
Oct 13 14:22:27 standalone.localdomain sshd[218054]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:27 standalone.localdomain systemd[1]: session-142.scope: Deactivated successfully.
Oct 13 14:22:27 standalone.localdomain systemd-logind[45629]: Session 142 logged out. Waiting for processes to exit.
Oct 13 14:22:27 standalone.localdomain systemd-logind[45629]: Removed session 142.
Oct 13 14:22:27 standalone.localdomain sshd[218089]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:27 standalone.localdomain runuser[218091]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:22:27 standalone.localdomain sshd[218089]: Accepted publickey for root from 192.168.122.11 port 52130 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:27 standalone.localdomain systemd-logind[45629]: New session 143 of user root.
Oct 13 14:22:27 standalone.localdomain systemd[1]: Started Session 143 of User root.
Oct 13 14:22:27 standalone.localdomain sshd[218089]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:27 standalone.localdomain sudo[218138]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_manila_scheduler.service
Oct 13 14:22:27 standalone.localdomain sudo[218138]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:27 standalone.localdomain systemd[1]: Stopping manila_scheduler container...
Oct 13 14:22:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:22:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:22:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:22:27 standalone.localdomain podman[218174]: 2025-10-13 14:22:27.852552738 +0000 UTC m=+0.117143796 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-northd, com.redhat.component=openstack-ovn-northd-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-northd, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, build-date=2025-07-21T13:30:04, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, container_name=ovn_cluster_northd, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=ovn_cluster_northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1)
Oct 13 14:22:27 standalone.localdomain podman[218174]: 2025-10-13 14:22:27.867869365 +0000 UTC m=+0.132460453 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-ovn-northd-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, description=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, build-date=2025-07-21T13:30:04, container_name=ovn_cluster_northd, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ovn-northd, vcs-type=git, config_id=ovn_cluster_northd, distribution-scope=public, io.openshift.expose-services=)
Oct 13 14:22:27 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:22:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1438: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:27 standalone.localdomain podman[218175]: 2025-10-13 14:22:27.806405292 +0000 UTC m=+0.071158075 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, config_id=tripleo_step2, name=rhosp17/openstack-mariadb, managed_by=tripleo_ansible, container_name=clustercheck, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 mariadb, release=1, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:22:27 standalone.localdomain runuser[218091]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:22:27 standalone.localdomain podman[218203]: 2025-10-13 14:22:27.928282715 +0000 UTC m=+0.099379364 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, release=1, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute)
Oct 13 14:22:27 standalone.localdomain runuser[218250]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:22:27 standalone.localdomain podman[218175]: 2025-10-13 14:22:27.943979722 +0000 UTC m=+0.208732535 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, config_id=tripleo_step2, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, name=rhosp17/openstack-mariadb, container_name=clustercheck, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true)
Oct 13 14:22:27 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:22:28 standalone.localdomain podman[218203]: 2025-10-13 14:22:28.011974519 +0000 UTC m=+0.183071198 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step5, distribution-scope=public, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, vendor=Red Hat, Inc.)
Oct 13 14:22:28 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:22:28 standalone.localdomain haproxy[70940]: 172.17.0.100:50221 [13/Oct/2025:14:05:18.354] mysql mysql/standalone.internalapi.localdomain 1/0/1030154 381176 -- 43/40/39/39/0 0/0
Oct 13 14:22:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:22:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:22:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:22:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:22:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:22:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:22:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:22:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:22:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:22:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:22:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:22:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:22:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:22:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:22:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:22:28 standalone.localdomain systemd[1]: libpod-194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.scope: Deactivated successfully.
Oct 13 14:22:28 standalone.localdomain systemd[1]: libpod-194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.scope: Consumed 3.796s CPU time.
Oct 13 14:22:28 standalone.localdomain podman[218157]: 2025-10-13 14:22:28.562665772 +0000 UTC m=+0.978755653 container died 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-manila-scheduler, io.openshift.expose-services=, com.redhat.component=openstack-manila-scheduler-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T15:56:28, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, container_name=manila_scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, 
distribution-scope=public, release=1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 manila-scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, architecture=x86_64, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 14:22:28 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.timer: Deactivated successfully.
Oct 13 14:22:28 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.
Oct 13 14:22:28 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Failed to open /run/systemd/transient/194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: No such file or directory
Oct 13 14:22:28 standalone.localdomain runuser[218250]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:22:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d-userdata-shm.mount: Deactivated successfully.
Oct 13 14:22:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1d1deaee7c92302295cfa6e4899573241006c9f9f1593a37994b270ad876377d-merged.mount: Deactivated successfully.
Oct 13 14:22:28 standalone.localdomain podman[218157]: 2025-10-13 14:22:28.905538711 +0000 UTC m=+1.321628532 container cleanup 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, com.redhat.component=openstack-manila-scheduler-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=manila_scheduler, version=17.1.9, build-date=2025-07-21T15:56:28, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-manila-scheduler, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, release=1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 manila-scheduler, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, distribution-scope=public)
Oct 13 14:22:28 standalone.localdomain podman[218157]: manila_scheduler
Oct 13 14:22:28 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.timer: Failed to open /run/systemd/transient/194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.timer: No such file or directory
Oct 13 14:22:28 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Failed to open /run/systemd/transient/194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: No such file or directory
Oct 13 14:22:28 standalone.localdomain podman[218312]: 2025-10-13 14:22:28.918283726 +0000 UTC m=+0.346247303 container cleanup 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, distribution-scope=public, name=rhosp17/openstack-manila-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 manila-scheduler, container_name=manila_scheduler, maintainer=OpenStack TripleO Team, version=17.1.9, release=1, build-date=2025-07-21T15:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, com.redhat.component=openstack-manila-scheduler-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, architecture=x86_64, 
managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:22:28 standalone.localdomain systemd[1]: libpod-conmon-194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.scope: Deactivated successfully.
Oct 13 14:22:29 standalone.localdomain ceph-mon[29756]: pgmap v1438: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:29 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.timer: Failed to open /run/systemd/transient/194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.timer: No such file or directory
Oct 13 14:22:29 standalone.localdomain systemd[1]: 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: Failed to open /run/systemd/transient/194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d.service: No such file or directory
Oct 13 14:22:29 standalone.localdomain podman[218339]: 2025-10-13 14:22:29.040634554 +0000 UTC m=+0.087622487 container cleanup 194f87124fad25e4cb17f51f33588db6767f6802c971844efb4c2d676033e57d (image=registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1, name=manila_scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-scheduler, version=17.1.9, build-date=2025-07-21T15:56:28, tcib_managed=true, com.redhat.component=openstack-manila-scheduler-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-manila-scheduler, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 manila-scheduler, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6f421a64683e0927a1c7cc208f5b5307'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-manila-scheduler:17.1', 'net': 'host', 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/manila_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/manila:/var/lib/kolla/config_files/src:ro', '/var/log/containers/manila:/var/log/manila:z']}, vcs-ref=97f6cfd0a24f98c54f6849ead1c565302f21296d, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 manila-scheduler, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-scheduler/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 manila-scheduler, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=manila_scheduler)
Oct 13 14:22:29 standalone.localdomain podman[218339]: manila_scheduler
Oct 13 14:22:29 standalone.localdomain systemd[1]: tripleo_manila_scheduler.service: Deactivated successfully.
Oct 13 14:22:29 standalone.localdomain systemd[1]: Stopped manila_scheduler container.
Oct 13 14:22:29 standalone.localdomain sudo[218138]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:29 standalone.localdomain sshd[218137]: Received disconnect from 192.168.122.11 port 52130:11: disconnected by user
Oct 13 14:22:29 standalone.localdomain sshd[218137]: Disconnected from user root 192.168.122.11 port 52130
Oct 13 14:22:29 standalone.localdomain sshd[218089]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:29 standalone.localdomain systemd[1]: session-143.scope: Deactivated successfully.
Oct 13 14:22:29 standalone.localdomain systemd-logind[45629]: Session 143 logged out. Waiting for processes to exit.
Oct 13 14:22:29 standalone.localdomain systemd-logind[45629]: Removed session 143.
Oct 13 14:22:29 standalone.localdomain sshd[218354]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:29 standalone.localdomain sshd[218354]: Accepted publickey for root from 192.168.122.11 port 52140 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:29 standalone.localdomain systemd-logind[45629]: New session 144 of user root.
Oct 13 14:22:29 standalone.localdomain systemd[1]: Started Session 144 of User root.
Oct 13 14:22:29 standalone.localdomain sshd[218354]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:29 standalone.localdomain sudo[218358]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_neutron_api.service
Oct 13 14:22:29 standalone.localdomain sudo[218358]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:29 standalone.localdomain sudo[218358]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:29 standalone.localdomain sshd[218357]: Received disconnect from 192.168.122.11 port 52140:11: disconnected by user
Oct 13 14:22:29 standalone.localdomain sshd[218357]: Disconnected from user root 192.168.122.11 port 52140
Oct 13 14:22:29 standalone.localdomain sshd[218354]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:29 standalone.localdomain systemd[1]: session-144.scope: Deactivated successfully.
Oct 13 14:22:29 standalone.localdomain systemd-logind[45629]: Session 144 logged out. Waiting for processes to exit.
Oct 13 14:22:29 standalone.localdomain systemd-logind[45629]: Removed session 144.
Oct 13 14:22:29 standalone.localdomain sshd[218373]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:29 standalone.localdomain sshd[218373]: Accepted publickey for root from 192.168.122.11 port 52148 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:29 standalone.localdomain systemd-logind[45629]: New session 145 of user root.
Oct 13 14:22:29 standalone.localdomain systemd[1]: Started Session 145 of User root.
Oct 13 14:22:29 standalone.localdomain sshd[218373]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:29 standalone.localdomain sudo[218377]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_neutron_api.service
Oct 13 14:22:29 standalone.localdomain sudo[218377]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:29 standalone.localdomain haproxy[70940]: Server glance_api/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 14:22:29 standalone.localdomain haproxy[70940]: proxy glance_api has no server available!
Oct 13 14:22:29 standalone.localdomain systemd[1]: Stopping neutron_api container...
Oct 13 14:22:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1439: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:30 standalone.localdomain ceph-mon[29756]: pgmap v1439: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:30 standalone.localdomain haproxy[70940]: 172.17.0.100:47837 [13/Oct/2025:14:05:32.624] mysql mysql/standalone.internalapi.localdomain 1/0/1017553 119907 -- 41/39/38/38/0 0/0
Oct 13 14:22:30 standalone.localdomain haproxy[70940]: 172.17.0.100:56057 [13/Oct/2025:14:05:22.615] mysql mysql/standalone.internalapi.localdomain 1/0/1027564 157907 -- 40/38/37/37/0 0/0
Oct 13 14:22:30 standalone.localdomain haproxy[70940]: 172.17.0.100:42817 [13/Oct/2025:14:07:22.603] mysql mysql/standalone.internalapi.localdomain 1/0/907579 41361 -- 39/37/36/36/0 0/0
Oct 13 14:22:30 standalone.localdomain haproxy[70940]: 172.17.0.100:37003 [13/Oct/2025:14:11:35.159] mysql mysql/standalone.internalapi.localdomain 1/0/655037 2427301 -- 38/36/35/35/0 0/0
Oct 13 14:22:30 standalone.localdomain haproxy[70940]: 172.17.0.100:53323 [13/Oct/2025:14:11:34.988] mysql mysql/standalone.internalapi.localdomain 1/0/655210 2722846 -- 37/35/34/34/0 0/0
Oct 13 14:22:30 standalone.localdomain haproxy[70940]: 172.17.0.100:46077 [13/Oct/2025:14:11:35.174] mysql mysql/standalone.internalapi.localdomain 1/0/655022 2701039 -- 37/35/33/33/0 0/0
Oct 13 14:22:30 standalone.localdomain haproxy[70940]: 172.17.0.100:33899 [13/Oct/2025:14:11:33.412] mysql mysql/standalone.internalapi.localdomain 1/0/656785 3041610 -- 37/35/32/32/0 0/0
Oct 13 14:22:30 standalone.localdomain haproxy[70940]: 172.17.0.100:51829 [13/Oct/2025:14:11:33.562] mysql mysql/standalone.internalapi.localdomain 1/0/656636 2808382 -- 33/32/31/31/0 0/0
Oct 13 14:22:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:22:31 standalone.localdomain systemd[1]: tmp-crun.r4nfDM.mount: Deactivated successfully.
Oct 13 14:22:31 standalone.localdomain podman[218424]: 2025-10-13 14:22:31.307693159 +0000 UTC m=+0.079705980 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, summary=Red Hat OpenStack Platform 17.1 manila-share, vcs-type=git, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-manila-share, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T15:22:36, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-manila-share-container, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, release=1, description=Red Hat OpenStack Platform 17.1 manila-share, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, vendor=Red Hat, Inc.)
Oct 13 14:22:31 standalone.localdomain podman[218424]: 2025-10-13 14:22:31.342064378 +0000 UTC m=+0.114077239 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 manila-share, distribution-scope=public, name=rhosp17/openstack-manila-share, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, build-date=2025-07-21T15:22:36, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, com.redhat.component=openstack-manila-share-container, summary=Red Hat OpenStack Platform 17.1 manila-share, maintainer=OpenStack TripleO Team)
Oct 13 14:22:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1440: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:22:32 standalone.localdomain podman[218568]: 2025-10-13 14:22:32.311331635 +0000 UTC m=+0.079867605 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, name=rhosp17/openstack-iscsid)
Oct 13 14:22:32 standalone.localdomain podman[218568]: 2025-10-13 14:22:32.320053118 +0000 UTC m=+0.088589108 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, distribution-scope=public, batch=17.1_20250721.1, tcib_managed=true)
Oct 13 14:22:32 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:22:32 standalone.localdomain ceph-mon[29756]: pgmap v1440: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1441: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:34 standalone.localdomain haproxy[70940]: Server manila/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 14:22:34 standalone.localdomain haproxy[70940]: proxy manila has no server available!
Oct 13 14:22:34 standalone.localdomain ceph-mon[29756]: pgmap v1441: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1442: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:22:36 standalone.localdomain ceph-mon[29756]: pgmap v1442: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:37 standalone.localdomain haproxy[70940]: 172.17.0.100:33781 [13/Oct/2025:14:05:32.168] mysql mysql/standalone.internalapi.localdomain 1/0/1025499 10869 -- 31/31/30/30/0 0/0
Oct 13 14:22:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1443: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:38 standalone.localdomain haproxy[70940]: 172.17.0.100:50099 [13/Oct/2025:14:05:32.176] mysql mysql/standalone.internalapi.localdomain 1/0/1025853 641474 -- 30/30/29/29/0 0/0
Oct 13 14:22:38 standalone.localdomain haproxy[70940]: 172.17.0.100:58565 [13/Oct/2025:14:05:22.559] mysql mysql/standalone.internalapi.localdomain 1/0/1035471 654436 -- 29/29/28/28/0 0/0
Oct 13 14:22:38 standalone.localdomain systemd[1]: libpod-9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.scope: Deactivated successfully.
Oct 13 14:22:38 standalone.localdomain systemd[1]: libpod-9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.scope: Consumed 1min 13.745s CPU time.
Oct 13 14:22:38 standalone.localdomain podman[218392]: 2025-10-13 14:22:38.08068464 +0000 UTC m=+8.415213597 container stop 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:03, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, name=rhosp17/openstack-neutron-server, config_id=tripleo_step4, version=17.1.9, container_name=neutron_api, io.openshift.expose-services=, com.redhat.component=openstack-neutron-server-container, vcs-type=git)
Oct 13 14:22:38 standalone.localdomain ceph-mon[29756]: pgmap v1443: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:38 standalone.localdomain podman[218392]: 2025-10-13 14:22:38.188829015 +0000 UTC m=+8.523357982 container died 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=neutron_api, summary=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-neutron-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, name=rhosp17/openstack-neutron-server, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:44:03, io.buildah.version=1.33.12, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:22:38 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.timer: Deactivated successfully.
Oct 13 14:22:38 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.
Oct 13 14:22:38 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Failed to open /run/systemd/transient/9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: No such file or directory
Oct 13 14:22:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f-userdata-shm.mount: Deactivated successfully.
Oct 13 14:22:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4eb8db4c42546afae0a3aee10f13d090e0d588e1e1565732d0102851e5608eab-merged.mount: Deactivated successfully.
Oct 13 14:22:38 standalone.localdomain podman[218392]: 2025-10-13 14:22:38.259013409 +0000 UTC m=+8.593542356 container cleanup 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, summary=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, version=17.1.9, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:44:03, com.redhat.component=openstack-neutron-server-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., container_name=neutron_api, distribution-scope=public)
Oct 13 14:22:38 standalone.localdomain podman[218392]: neutron_api
Oct 13 14:22:38 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.timer: Failed to open /run/systemd/transient/9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.timer: No such file or directory
Oct 13 14:22:38 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Failed to open /run/systemd/transient/9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: No such file or directory
Oct 13 14:22:38 standalone.localdomain podman[218790]: 2025-10-13 14:22:38.271069284 +0000 UTC m=+0.176873694 container cleanup 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, build-date=2025-07-21T15:44:03, io.openshift.expose-services=, config_id=tripleo_step4, container_name=neutron_api, name=rhosp17/openstack-neutron-server, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-server-container, distribution-scope=public, release=1, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.9, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server)
Oct 13 14:22:38 standalone.localdomain systemd[1]: libpod-conmon-9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.scope: Deactivated successfully.
Oct 13 14:22:38 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.timer: Failed to open /run/systemd/transient/9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.timer: No such file or directory
Oct 13 14:22:38 standalone.localdomain systemd[1]: 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: Failed to open /run/systemd/transient/9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f.service: No such file or directory
Oct 13 14:22:38 standalone.localdomain podman[218803]: 2025-10-13 14:22:38.356330866 +0000 UTC m=+0.050516043 container cleanup 9a8449908eed7919bee96ffb3f6b15979e348c2330d309679f481a11092e631f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=neutron_api, io.openshift.expose-services=, name=rhosp17/openstack-neutron-server, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-neutron-server-container, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/log/containers/httpd/neutron-api:/var/log/httpd:z', '/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=a2a5d3babd6b02c0b20df6d01cd606fef9bdf69d, build-date=2025-07-21T15:44:03, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-server/images/17.1.9-1, release=1, vendor=Red Hat, Inc., container_name=neutron_api, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12)
Oct 13 14:22:38 standalone.localdomain podman[218803]: neutron_api
Oct 13 14:22:38 standalone.localdomain systemd[1]: tripleo_neutron_api.service: Deactivated successfully.
Oct 13 14:22:38 standalone.localdomain systemd[1]: Stopped neutron_api container.
Oct 13 14:22:38 standalone.localdomain sudo[218377]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:38 standalone.localdomain sshd[218376]: Received disconnect from 192.168.122.11 port 52148:11: disconnected by user
Oct 13 14:22:38 standalone.localdomain sshd[218376]: Disconnected from user root 192.168.122.11 port 52148
Oct 13 14:22:38 standalone.localdomain sshd[218373]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:38 standalone.localdomain systemd[1]: session-145.scope: Deactivated successfully.
Oct 13 14:22:38 standalone.localdomain systemd-logind[45629]: Session 145 logged out. Waiting for processes to exit.
Oct 13 14:22:38 standalone.localdomain systemd-logind[45629]: Removed session 145.
Oct 13 14:22:38 standalone.localdomain sshd[218816]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:38 standalone.localdomain sshd[218816]: Accepted publickey for root from 192.168.122.11 port 40058 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:38 standalone.localdomain systemd-logind[45629]: New session 146 of user root.
Oct 13 14:22:38 standalone.localdomain systemd[1]: Started Session 146 of User root.
Oct 13 14:22:38 standalone.localdomain sshd[218816]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:38 standalone.localdomain sudo[218820]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_placement_api.service
Oct 13 14:22:38 standalone.localdomain sudo[218820]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:38 standalone.localdomain sudo[218820]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:38 standalone.localdomain sshd[218819]: Received disconnect from 192.168.122.11 port 40058:11: disconnected by user
Oct 13 14:22:38 standalone.localdomain sshd[218819]: Disconnected from user root 192.168.122.11 port 40058
Oct 13 14:22:38 standalone.localdomain sshd[218816]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:38 standalone.localdomain systemd[1]: session-146.scope: Deactivated successfully.
Oct 13 14:22:38 standalone.localdomain systemd-logind[45629]: Session 146 logged out. Waiting for processes to exit.
Oct 13 14:22:38 standalone.localdomain systemd-logind[45629]: Removed session 146.
Oct 13 14:22:38 standalone.localdomain sshd[218835]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:38 standalone.localdomain sshd[218835]: Accepted publickey for root from 192.168.122.11 port 40062 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:38 standalone.localdomain systemd-logind[45629]: New session 147 of user root.
Oct 13 14:22:38 standalone.localdomain systemd[1]: Started Session 147 of User root.
Oct 13 14:22:38 standalone.localdomain sshd[218835]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:38 standalone.localdomain sudo[218839]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_placement_api.service
Oct 13 14:22:38 standalone.localdomain sudo[218839]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:38 standalone.localdomain systemd[1]: Stopping placement_api container...
Oct 13 14:22:39 standalone.localdomain runuser[218881]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:22:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1444: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:39 standalone.localdomain runuser[218881]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:22:40 standalone.localdomain runuser[218950]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:22:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:22:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:22:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:22:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:22:40 standalone.localdomain runuser[218950]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:22:40 standalone.localdomain runuser[219047]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:22:40 standalone.localdomain podman[219001]: 2025-10-13 14:22:40.814679424 +0000 UTC m=+0.077830133 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, vcs-type=git, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, version=17.1.9, name=rhosp17/openstack-heat-api, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, container_name=heat_api, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_id=tripleo_step4, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:22:40 standalone.localdomain podman[219000]: 2025-10-13 14:22:40.865561547 +0000 UTC m=+0.128091347 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, build-date=2025-07-21T15:56:26, container_name=heat_api_cron, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:22:40 standalone.localdomain podman[219000]: 2025-10-13 14:22:40.872162972 +0000 UTC m=+0.134692762 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, container_name=heat_api_cron, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:22:40 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:22:40 standalone.localdomain systemd[1]: tmp-crun.iamHkV.mount: Deactivated successfully.
Oct 13 14:22:40 standalone.localdomain podman[219001]: 2025-10-13 14:22:40.923609843 +0000 UTC m=+0.186760522 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, 
container_name=heat_api, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, maintainer=OpenStack TripleO Team)
Oct 13 14:22:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:22:40 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:22:40 standalone.localdomain ceph-mon[29756]: pgmap v1444: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:40 standalone.localdomain podman[219003]: 2025-10-13 14:22:40.924749878 +0000 UTC m=+0.181910891 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, container_name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
io.openshift.expose-services=, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:22:41 standalone.localdomain podman[219003]: 2025-10-13 14:22:41.009193375 +0000 UTC m=+0.266354378 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20250721.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Oct 13 14:22:41 standalone.localdomain podman[219122]: 2025-10-13 14:22:41.012886761 +0000 UTC m=+0.085754849 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-cinder-backup-container, description=Red Hat OpenStack Platform 17.1 cinder-backup, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, name=rhosp17/openstack-cinder-backup, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cinder-backup, tcib_managed=true, build-date=2025-07-21T16:18:24, version=17.1.9, io.openshift.expose-services=, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef)
Oct 13 14:22:41 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:22:41 standalone.localdomain podman[219002]: 2025-10-13 14:22:40.987991396 +0000 UTC m=+0.246545932 container health_status ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, health_status=healthy, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-conductor, release=1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:44:17, container_name=nova_conductor, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, batch=17.1_20250721.1, com.redhat.component=openstack-nova-conductor-container, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-type=git, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:22:41 standalone.localdomain podman[219002]: 2025-10-13 14:22:41.073764654 +0000 UTC m=+0.332319120 container exec_died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T15:44:17, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, name=rhosp17/openstack-nova-conductor, summary=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, com.redhat.component=openstack-nova-conductor-container, 
vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_conductor, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor)
Oct 13 14:22:41 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Deactivated successfully.
Oct 13 14:22:41 standalone.localdomain podman[219161]: 2025-10-13 14:22:41.091657372 +0000 UTC m=+0.071974371 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, release=1, description=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-type=git, name=rhosp17/openstack-cinder-backup, com.redhat.component=openstack-cinder-backup-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cinder-backup, tcib_managed=true, build-date=2025-07-21T16:18:24, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:22:41 standalone.localdomain podman[219122]: 2025-10-13 14:22:41.096362917 +0000 UTC m=+0.169231025 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, tcib_managed=true, vcs-type=git, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, summary=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-backup, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:18:24, io.openshift.expose-services=, com.redhat.component=openstack-cinder-backup-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cinder-backup, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1)
Oct 13 14:22:41 standalone.localdomain runuser[219047]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:22:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:22:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:22:41 standalone.localdomain podman[219265]: 2025-10-13 14:22:41.791846607 +0000 UTC m=+0.059817593 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-engine-container, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, container_name=heat_engine, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red 
Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, build-date=2025-07-21T15:44:11, batch=17.1_20250721.1)
Oct 13 14:22:41 standalone.localdomain podman[219265]: 2025-10-13 14:22:41.816892526 +0000 UTC m=+0.084863542 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-heat-engine, build-date=2025-07-21T15:44:11, container_name=heat_engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, 
io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-heat-engine-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:22:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:22:41 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:22:41 standalone.localdomain systemd[1]: tmp-crun.oLsW9M.mount: Deactivated successfully.
Oct 13 14:22:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1445: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:41 standalone.localdomain podman[219266]: 2025-10-13 14:22:41.900632872 +0000 UTC m=+0.169000910 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, tcib_managed=true, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-memcached-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, batch=17.1_20250721.1, distribution-scope=public, 
container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, config_id=tripleo_step1, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:43)
Oct 13 14:22:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:22:41 standalone.localdomain podman[219266]: 2025-10-13 14:22:41.975222482 +0000 UTC m=+0.243590540 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, name=rhosp17/openstack-memcached, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T12:58:43, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.component=openstack-memcached-container, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, container_name=memcached, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true)
Oct 13 14:22:41 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:22:41 standalone.localdomain podman[219322]: 2025-10-13 14:22:41.98831988 +0000 UTC m=+0.068132221 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, name=rhosp17/openstack-heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, container_name=heat_api_cfn, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T14:49:55, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9)
Oct 13 14:22:42 standalone.localdomain podman[219322]: 2025-10-13 14:22:42.016792796 +0000 UTC m=+0.096605137 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, com.redhat.component=openstack-heat-api-cfn-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, io.openshift.expose-services=, container_name=heat_api_cfn, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, 
name=rhosp17/openstack-heat-api-cfn, build-date=2025-07-21T14:49:55, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9)
Oct 13 14:22:42 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:22:42 standalone.localdomain haproxy[70940]: 172.17.0.100:45641 [13/Oct/2025:14:05:23.214] mysql mysql/standalone.internalapi.localdomain 1/0/1038856 150751 -- 28/28/27/27/0 0/0
Oct 13 14:22:42 standalone.localdomain systemd[1]: libpod-53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.scope: Deactivated successfully.
Oct 13 14:22:42 standalone.localdomain systemd[1]: libpod-53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.scope: Consumed 3.307s CPU time.
Oct 13 14:22:42 standalone.localdomain podman[219301]: 2025-10-13 14:22:41.968967308 +0000 UTC m=+0.135877009 container health_status 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, version=17.1.9, container_name=nova_scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-nova-scheduler-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, 
build-date=2025-07-21T16:02:54, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-scheduler)
Oct 13 14:22:42 standalone.localdomain podman[218862]: 2025-10-13 14:22:42.07350453 +0000 UTC m=+3.109048104 container died 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=placement_api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, name=rhosp17/openstack-placement-api, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:12, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-placement-api-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 placement-api, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, release=1)
Oct 13 14:22:42 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.timer: Deactivated successfully.
Oct 13 14:22:42 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.
Oct 13 14:22:42 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Failed to open /run/systemd/transient/53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: No such file or directory
Oct 13 14:22:42 standalone.localdomain podman[219301]: 2025-10-13 14:22:42.11946351 +0000 UTC m=+0.286373291 container exec_died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, build-date=2025-07-21T16:02:54, name=rhosp17/openstack-nova-scheduler, description=Red Hat OpenStack Platform 17.1 nova-scheduler, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-nova-scheduler-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, container_name=nova_scheduler)
Oct 13 14:22:42 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Deactivated successfully.
Oct 13 14:22:42 standalone.localdomain podman[218862]: 2025-10-13 14:22:42.197790287 +0000 UTC m=+3.233333881 container cleanup 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-placement-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, build-date=2025-07-21T13:58:12, container_name=placement_api, summary=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 placement-api, release=1, config_id=tripleo_step4, 
io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-placement-api-container, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, distribution-scope=public, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67)
Oct 13 14:22:42 standalone.localdomain podman[218862]: placement_api
Oct 13 14:22:42 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.timer: Failed to open /run/systemd/transient/53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.timer: No such file or directory
Oct 13 14:22:42 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Failed to open /run/systemd/transient/53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: No such file or directory
Oct 13 14:22:42 standalone.localdomain podman[219373]: 2025-10-13 14:22:42.21396338 +0000 UTC m=+0.091377084 container cleanup 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, summary=Red Hat OpenStack Platform 17.1 placement-api, batch=17.1_20250721.1, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-placement-api, version=17.1.9, tcib_managed=true, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, container_name=placement_api, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-07-21T13:58:12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, architecture=x86_64, com.redhat.component=openstack-placement-api-container, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1)
Oct 13 14:22:42 standalone.localdomain systemd[1]: libpod-conmon-53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.scope: Deactivated successfully.
Oct 13 14:22:42 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.timer: Failed to open /run/systemd/transient/53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.timer: No such file or directory
Oct 13 14:22:42 standalone.localdomain systemd[1]: 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: Failed to open /run/systemd/transient/53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b.service: No such file or directory
Oct 13 14:22:42 standalone.localdomain podman[219387]: 2025-10-13 14:22:42.304565969 +0000 UTC m=+0.064101905 container cleanup 53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b (image=registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1, name=placement_api, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 placement-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 placement-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 placement-api, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-placement-api/images/17.1.9-1, vcs-ref=9aee5996bfe1f2bd5ce7b36f6b870ad402bd0b67, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2c5d4de4e0570c35257b2db1f73cb503'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-placement-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/placement:/var/log/placement:z', '/var/log/containers/httpd/placement:/var/log/httpd:z', '/var/lib/kolla/config_files/placement_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/placement:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20250721.1, 
build-date=2025-07-21T13:58:12, description=Red Hat OpenStack Platform 17.1 placement-api, name=rhosp17/openstack-placement-api, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-placement-api-container, container_name=placement_api)
Oct 13 14:22:42 standalone.localdomain podman[219387]: placement_api
Oct 13 14:22:42 standalone.localdomain systemd[1]: tripleo_placement_api.service: Deactivated successfully.
Oct 13 14:22:42 standalone.localdomain systemd[1]: Stopped placement_api container.
Oct 13 14:22:42 standalone.localdomain sudo[218839]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:42 standalone.localdomain sshd[218838]: Received disconnect from 192.168.122.11 port 40062:11: disconnected by user
Oct 13 14:22:42 standalone.localdomain sshd[218838]: Disconnected from user root 192.168.122.11 port 40062
Oct 13 14:22:42 standalone.localdomain sshd[218835]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:42 standalone.localdomain systemd[1]: session-147.scope: Deactivated successfully.
Oct 13 14:22:42 standalone.localdomain systemd-logind[45629]: Session 147 logged out. Waiting for processes to exit.
Oct 13 14:22:42 standalone.localdomain systemd-logind[45629]: Removed session 147.
Oct 13 14:22:42 standalone.localdomain sshd[219440]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:42 standalone.localdomain sshd[219440]: Accepted publickey for root from 192.168.122.11 port 59002 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:42 standalone.localdomain systemd-logind[45629]: New session 148 of user root.
Oct 13 14:22:42 standalone.localdomain systemd[1]: Started Session 148 of User root.
Oct 13 14:22:42 standalone.localdomain sshd[219440]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:42 standalone.localdomain sudo[219444]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_nova_api_cron.service
Oct 13 14:22:42 standalone.localdomain sudo[219444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:42 standalone.localdomain sudo[219444]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:42 standalone.localdomain sshd[219443]: Received disconnect from 192.168.122.11 port 59002:11: disconnected by user
Oct 13 14:22:42 standalone.localdomain sshd[219443]: Disconnected from user root 192.168.122.11 port 59002
Oct 13 14:22:42 standalone.localdomain sshd[219440]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:42 standalone.localdomain systemd[1]: session-148.scope: Deactivated successfully.
Oct 13 14:22:42 standalone.localdomain systemd-logind[45629]: Session 148 logged out. Waiting for processes to exit.
Oct 13 14:22:42 standalone.localdomain systemd-logind[45629]: Removed session 148.
Oct 13 14:22:42 standalone.localdomain sshd[219459]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:42 standalone.localdomain sshd[219459]: Accepted publickey for root from 192.168.122.11 port 59018 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:42 standalone.localdomain systemd-logind[45629]: New session 149 of user root.
Oct 13 14:22:42 standalone.localdomain systemd[1]: Started Session 149 of User root.
Oct 13 14:22:42 standalone.localdomain sshd[219459]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:42 standalone.localdomain systemd[1]: tmp-crun.rzdh2K.mount: Deactivated successfully.
Oct 13 14:22:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e410b4438f0c300953cb1d70d8d493e57beab26e7850ef1c9e5f40f4d7c5012-merged.mount: Deactivated successfully.
Oct 13 14:22:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53e69b353525932e0bd0772e0e101a5a80e3e2e8b0937f35f3385dbb8192042b-userdata-shm.mount: Deactivated successfully.
Oct 13 14:22:42 standalone.localdomain sudo[219498]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_nova_api_cron.service
Oct 13 14:22:42 standalone.localdomain sudo[219498]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:42 standalone.localdomain systemd[1]: Stopping nova_api_cron container...
Oct 13 14:22:42 standalone.localdomain crond[116760]: (CRON) INFO (Shutting down)
Oct 13 14:22:42 standalone.localdomain systemd[1]: tmp-crun.m1RZwv.mount: Deactivated successfully.
Oct 13 14:22:42 standalone.localdomain systemd[1]: libpod-9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.scope: Deactivated successfully.
Oct 13 14:22:42 standalone.localdomain ceph-mon[29756]: pgmap v1445: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:42 standalone.localdomain podman[219519]: 2025-10-13 14:22:42.986539678 +0000 UTC m=+0.091333183 container died 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-api, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, version=17.1.9, name=rhosp17/openstack-nova-api, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T16:05:11, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:22:43 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.timer: Deactivated successfully.
Oct 13 14:22:43 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.
Oct 13 14:22:43 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Failed to open /run/systemd/transient/9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: No such file or directory
Oct 13 14:22:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327-userdata-shm.mount: Deactivated successfully.
Oct 13 14:22:43 standalone.localdomain podman[219519]: 2025-10-13 14:22:43.038444073 +0000 UTC m=+0.143237518 container cleanup 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, maintainer=OpenStack TripleO Team, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vcs-type=git, build-date=2025-07-21T16:05:11, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, com.redhat.component=openstack-nova-api-container, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, container_name=nova_api_cron, description=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, name=rhosp17/openstack-nova-api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.openshift.expose-services=)
Oct 13 14:22:43 standalone.localdomain podman[219519]: nova_api_cron
Oct 13 14:22:43 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.timer: Failed to open /run/systemd/transient/9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.timer: No such file or directory
Oct 13 14:22:43 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Failed to open /run/systemd/transient/9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: No such file or directory
Oct 13 14:22:43 standalone.localdomain podman[219537]: 2025-10-13 14:22:43.057248967 +0000 UTC m=+0.067104269 container cleanup 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-api-container, description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, tcib_managed=true, container_name=nova_api_cron, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, build-date=2025-07-21T16:05:11, 
release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.buildah.version=1.33.12)
Oct 13 14:22:43 standalone.localdomain systemd[1]: libpod-conmon-9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.scope: Deactivated successfully.
Oct 13 14:22:43 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.timer: Failed to open /run/systemd/transient/9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.timer: No such file or directory
Oct 13 14:22:43 standalone.localdomain systemd[1]: 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: Failed to open /run/systemd/transient/9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327.service: No such file or directory
Oct 13 14:22:43 standalone.localdomain podman[219555]: 2025-10-13 14:22:43.155586958 +0000 UTC m=+0.072327022 container cleanup 9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api_cron, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, summary=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-api, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api_cron, release=1, com.redhat.component=openstack-nova-api-container, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, build-date=2025-07-21T16:05:11, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron nova'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:22:43 standalone.localdomain podman[219555]: nova_api_cron
Oct 13 14:22:43 standalone.localdomain systemd[1]: tripleo_nova_api_cron.service: Deactivated successfully.
Oct 13 14:22:43 standalone.localdomain systemd[1]: Stopped nova_api_cron container.
Oct 13 14:22:43 standalone.localdomain sudo[219498]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:43 standalone.localdomain sshd[219462]: Received disconnect from 192.168.122.11 port 59018:11: disconnected by user
Oct 13 14:22:43 standalone.localdomain sshd[219462]: Disconnected from user root 192.168.122.11 port 59018
Oct 13 14:22:43 standalone.localdomain sshd[219459]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:43 standalone.localdomain systemd[1]: session-149.scope: Deactivated successfully.
Oct 13 14:22:43 standalone.localdomain systemd-logind[45629]: Session 149 logged out. Waiting for processes to exit.
Oct 13 14:22:43 standalone.localdomain systemd-logind[45629]: Removed session 149.
Oct 13 14:22:43 standalone.localdomain sshd[219568]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:43 standalone.localdomain sshd[219568]: Accepted publickey for root from 192.168.122.11 port 59020 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:43 standalone.localdomain systemd-logind[45629]: New session 150 of user root.
Oct 13 14:22:43 standalone.localdomain systemd[1]: Started Session 150 of User root.
Oct 13 14:22:43 standalone.localdomain sshd[219568]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:43 standalone.localdomain sudo[219613]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_nova_api.service
Oct 13 14:22:43 standalone.localdomain sudo[219613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:43 standalone.localdomain sudo[219613]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:43 standalone.localdomain sshd[219612]: Received disconnect from 192.168.122.11 port 59020:11: disconnected by user
Oct 13 14:22:43 standalone.localdomain sshd[219612]: Disconnected from user root 192.168.122.11 port 59020
Oct 13 14:22:43 standalone.localdomain sshd[219568]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:43 standalone.localdomain systemd[1]: session-150.scope: Deactivated successfully.
Oct 13 14:22:43 standalone.localdomain systemd-logind[45629]: Session 150 logged out. Waiting for processes to exit.
Oct 13 14:22:43 standalone.localdomain systemd-logind[45629]: Removed session 150.
Oct 13 14:22:43 standalone.localdomain sshd[219628]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:43 standalone.localdomain sshd[219628]: Accepted publickey for root from 192.168.122.11 port 59030 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:43 standalone.localdomain systemd-logind[45629]: New session 151 of user root.
Oct 13 14:22:43 standalone.localdomain systemd[1]: Started Session 151 of User root.
Oct 13 14:22:43 standalone.localdomain sshd[219628]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:43 standalone.localdomain sudo[219634]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_nova_api.service
Oct 13 14:22:43 standalone.localdomain sudo[219634]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:43 standalone.localdomain systemd[1]: Stopping nova_api container...
Oct 13 14:22:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-954d2e0c0f5814816175699e6bf4259e781f0fc918aa8a9354f4e7f06e62c6b8-merged.mount: Deactivated successfully.
Oct 13 14:22:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1446: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:45 standalone.localdomain ceph-mon[29756]: pgmap v1446: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1447: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:22:46 standalone.localdomain haproxy[70940]: Server neutron/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 14:22:46 standalone.localdomain haproxy[70940]: proxy neutron has no server available!
Oct 13 14:22:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:22:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:22:46 standalone.localdomain podman[219741]: 2025-10-13 14:22:46.550749592 +0000 UTC m=+0.064856929 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:22:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:22:46 standalone.localdomain systemd[1]: tmp-crun.1ATXvw.mount: Deactivated successfully.
Oct 13 14:22:46 standalone.localdomain podman[219742]: 2025-10-13 14:22:46.630535515 +0000 UTC m=+0.145098526 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, build-date=2025-07-21T13:28:44, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:22:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:22:46 standalone.localdomain podman[219742]: 2025-10-13 14:22:46.650822656 +0000 UTC m=+0.165385657 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44)
Oct 13 14:22:46 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:22:46 standalone.localdomain podman[219741]: 2025-10-13 14:22:46.669376963 +0000 UTC m=+0.183484320 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, container_name=ovn_metadata_agent, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1)
Oct 13 14:22:46 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:22:46 standalone.localdomain podman[219770]: 2025-10-13 14:22:46.656583745 +0000 UTC m=+0.087844064 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, container_name=glance_api_cron, name=rhosp17/openstack-glance-api, release=1, architecture=x86_64, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:22:46 standalone.localdomain podman[219801]: 2025-10-13 14:22:46.720786933 +0000 UTC m=+0.063425344 container health_status a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, health_status=healthy, com.redhat.component=openstack-nova-api-container, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_metadata, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-api, distribution-scope=public, maintainer=OpenStack TripleO Team, 
io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, release=1, version=17.1.9, vendor=Red Hat, Inc., vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.buildah.version=1.33.12, config_id=tripleo_step4, build-date=2025-07-21T16:05:11)
Oct 13 14:22:46 standalone.localdomain podman[219770]: 2025-10-13 14:22:46.741824018 +0000 UTC m=+0.173084317 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, version=17.1.9, config_id=tripleo_step4, container_name=glance_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, io.buildah.version=1.33.12, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api)
Oct 13 14:22:46 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:22:46 standalone.localdomain podman[219801]: 2025-10-13 14:22:46.778918212 +0000 UTC m=+0.121556653 container exec_died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-nova-api-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, container_name=nova_metadata, description=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, build-date=2025-07-21T16:05:11, vendor=Red Hat, Inc.)
Oct 13 14:22:46 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Deactivated successfully.
Oct 13 14:22:46 standalone.localdomain haproxy[70940]: 172.17.0.100:52885 [13/Oct/2025:14:05:25.532] mysql mysql/standalone.internalapi.localdomain 1/0/1041316 33166 -- 27/27/26/26/0 0/0
Oct 13 14:22:46 standalone.localdomain haproxy[70940]: 172.17.0.100:50967 [13/Oct/2025:14:09:11.849] mysql mysql/standalone.internalapi.localdomain 1/0/815000 51011 -- 27/27/24/24/0 0/0
Oct 13 14:22:46 standalone.localdomain haproxy[70940]: 172.17.0.100:54013 [13/Oct/2025:14:05:25.534] mysql mysql/standalone.internalapi.localdomain 1/0/1041315 267554 -- 27/27/23/23/0 0/0
Oct 13 14:22:46 standalone.localdomain haproxy[70940]: 172.17.0.100:33501 [13/Oct/2025:14:05:25.436] mysql mysql/standalone.internalapi.localdomain 1/0/1041412 60024 -- 27/27/25/25/0 0/0
Oct 13 14:22:46 standalone.localdomain haproxy[70940]: 172.17.0.100:39159 [13/Oct/2025:14:05:25.563] mysql mysql/standalone.internalapi.localdomain 1/0/1041285 11635 -- 23/23/22/22/0 0/0
Oct 13 14:22:46 standalone.localdomain systemd[1]: libpod-934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.scope: Deactivated successfully.
Oct 13 14:22:46 standalone.localdomain systemd[1]: libpod-934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.scope: Consumed 12.692s CPU time.
Oct 13 14:22:46 standalone.localdomain podman[219649]: 2025-10-13 14:22:46.85023399 +0000 UTC m=+3.093306924 container stop 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, com.redhat.component=openstack-nova-api-container, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, container_name=nova_api, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, build-date=2025-07-21T16:05:11, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api)
Oct 13 14:22:46 standalone.localdomain podman[219649]: 2025-10-13 14:22:46.889720388 +0000 UTC m=+3.132793322 container died 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, build-date=2025-07-21T16:05:11, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api, name=rhosp17/openstack-nova-api, tcib_managed=true, vcs-type=git, version=17.1.9, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, com.redhat.component=openstack-nova-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:22:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:22:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:22:46 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.timer: Deactivated successfully.
Oct 13 14:22:46 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.
Oct 13 14:22:46 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Failed to open /run/systemd/transient/934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: No such file or directory
Oct 13 14:22:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:22:47 standalone.localdomain podman[219841]: 2025-10-13 14:22:47.032381407 +0000 UTC m=+0.075562092 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, managed_by=tripleo_ansible, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, tcib_managed=true, name=rhosp17/openstack-swift-account, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_account_server, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9)
Oct 13 14:22:47 standalone.localdomain podman[219649]: 2025-10-13 14:22:47.078206204 +0000 UTC m=+3.321279118 container cleanup 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, release=1, com.redhat.component=openstack-nova-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api, name=rhosp17/openstack-nova-api, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, build-date=2025-07-21T16:05:11, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:22:47 standalone.localdomain podman[219649]: nova_api
Oct 13 14:22:47 standalone.localdomain ceph-mon[29756]: pgmap v1447: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:22:47 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.timer: Failed to open /run/systemd/transient/934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.timer: No such file or directory
Oct 13 14:22:47 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Failed to open /run/systemd/transient/934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: No such file or directory
Oct 13 14:22:47 standalone.localdomain podman[219828]: 2025-10-13 14:22:47.099469195 +0000 UTC m=+0.235561510 container cleanup 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, distribution-scope=public, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-api-container, container_name=nova_api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-api, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T16:05:11, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-nova-api, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Oct 13 14:22:47 standalone.localdomain systemd[1]: libpod-conmon-934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.scope: Deactivated successfully.
Oct 13 14:22:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:22:47 standalone.localdomain podman[219886]: 2025-10-13 14:22:47.212413199 +0000 UTC m=+0.111876912 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, container_name=swift_object_server, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:22:47 standalone.localdomain podman[219841]: 2025-10-13 14:22:47.297475986 +0000 UTC m=+0.340656661 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, build-date=2025-07-21T16:11:22, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:22:47 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:22:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:22:47 standalone.localdomain podman[219861]: 2025-10-13 14:22:47.164362754 +0000 UTC m=+0.168730041 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, vcs-type=git, container_name=swift_proxy, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, name=rhosp17/openstack-swift-proxy-server, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:22:47 standalone.localdomain podman[219840]: 2025-10-13 14:22:47.270165536 +0000 UTC m=+0.316213880 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, managed_by=tripleo_ansible, version=17.1.9, container_name=swift_container_server, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:22:47 standalone.localdomain podman[219840]: 2025-10-13 14:22:47.437979977 +0000 UTC m=+0.484028351 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.component=openstack-swift-container-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, distribution-scope=public, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, release=1, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64)
Oct 13 14:22:47 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:22:47 standalone.localdomain podman[219861]: 2025-10-13 14:22:47.47824948 +0000 UTC m=+0.482616787 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-swift-proxy-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_proxy, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1, name=rhosp17/openstack-swift-proxy-server, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37)
Oct 13 14:22:47 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:22:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:22:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-006dca8b53e2bed08474999b261511727460c03a977d9613a5d06166fa6108c2-merged.mount: Deactivated successfully.
Oct 13 14:22:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87-userdata-shm.mount: Deactivated successfully.
Oct 13 14:22:47 standalone.localdomain podman[219937]: 2025-10-13 14:22:47.56441107 +0000 UTC m=+0.322650379 container health_status e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, health_status=healthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T15:24:10, com.redhat.component=openstack-nova-novncproxy-container, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, container_name=nova_vnc_proxy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, name=rhosp17/openstack-nova-novncproxy, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, version=17.1.9)
Oct 13 14:22:47 standalone.localdomain podman[219886]: 2025-10-13 14:22:47.580686217 +0000 UTC m=+0.480149910 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, distribution-scope=public, container_name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git)
Oct 13 14:22:47 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:22:47 standalone.localdomain podman[219990]: 2025-10-13 14:22:47.656737673 +0000 UTC m=+0.137442397 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, 
summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64)
Oct 13 14:22:47 standalone.localdomain systemd[1]: tmp-crun.hXZaqo.mount: Deactivated successfully.
Oct 13 14:22:47 standalone.localdomain podman[219971]: 2025-10-13 14:22:47.714746558 +0000 UTC m=+0.359298639 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, container_name=glance_api_internal, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, release=1, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', 
'/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4)
Oct 13 14:22:47 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.timer: Failed to open /run/systemd/transient/934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.timer: No such file or directory
Oct 13 14:22:47 standalone.localdomain systemd[1]: 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: Failed to open /run/systemd/transient/934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87.service: No such file or directory
Oct 13 14:22:47 standalone.localdomain podman[219893]: 2025-10-13 14:22:47.756178048 +0000 UTC m=+0.625975738 container cleanup 934172e8df82c41ad363d712cdfa3031653ae272a6d25153669b76f2d5778e87 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_api, release=1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-api:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-api, com.redhat.component=openstack-nova-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, container_name=nova_api, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, version=17.1.9, config_id=tripleo_step4, build-date=2025-07-21T16:05:11)
Oct 13 14:22:47 standalone.localdomain podman[219893]: nova_api
Oct 13 14:22:47 standalone.localdomain systemd[1]: tripleo_nova_api.service: Deactivated successfully.
Oct 13 14:22:47 standalone.localdomain systemd[1]: Stopped nova_api container.
Oct 13 14:22:47 standalone.localdomain sudo[219634]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:47 standalone.localdomain sshd[219633]: Received disconnect from 192.168.122.11 port 59030:11: disconnected by user
Oct 13 14:22:47 standalone.localdomain sshd[219633]: Disconnected from user root 192.168.122.11 port 59030
Oct 13 14:22:47 standalone.localdomain sshd[219628]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:47 standalone.localdomain systemd[1]: session-151.scope: Deactivated successfully.
Oct 13 14:22:47 standalone.localdomain systemd-logind[45629]: Session 151 logged out. Waiting for processes to exit.
Oct 13 14:22:47 standalone.localdomain systemd-logind[45629]: Removed session 151.
Oct 13 14:22:47 standalone.localdomain sshd[220036]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:47 standalone.localdomain podman[219937]: 2025-10-13 14:22:47.895995208 +0000 UTC m=+0.654234507 container exec_died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-novncproxy-container, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, container_name=nova_vnc_proxy, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, build-date=2025-07-21T15:24:10, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, 
vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-nova-novncproxy, release=1, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 14:22:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1448: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:47 standalone.localdomain sshd[220036]: Accepted publickey for root from 192.168.122.11 port 59036 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:47 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Deactivated successfully.
Oct 13 14:22:47 standalone.localdomain podman[219971]: 2025-10-13 14:22:47.920592682 +0000 UTC m=+0.565144813 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T13:58:20, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api_internal, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-glance-api-container)
Oct 13 14:22:47 standalone.localdomain systemd-logind[45629]: New session 152 of user root.
Oct 13 14:22:47 standalone.localdomain systemd[1]: Started Session 152 of User root.
Oct 13 14:22:47 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:22:47 standalone.localdomain sshd[220036]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:48 standalone.localdomain sudo[220046]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_nova_conductor.service
Oct 13 14:22:48 standalone.localdomain sudo[220046]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:48 standalone.localdomain podman[219990]: 2025-10-13 14:22:48.0619141 +0000 UTC m=+0.542618794 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Oct 13 14:22:48 standalone.localdomain sudo[220046]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:48 standalone.localdomain sshd[220045]: Received disconnect from 192.168.122.11 port 59036:11: disconnected by user
Oct 13 14:22:48 standalone.localdomain sshd[220045]: Disconnected from user root 192.168.122.11 port 59036
Oct 13 14:22:48 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:22:48 standalone.localdomain sshd[220036]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:48 standalone.localdomain systemd[1]: session-152.scope: Deactivated successfully.
Oct 13 14:22:48 standalone.localdomain systemd-logind[45629]: Session 152 logged out. Waiting for processes to exit.
Oct 13 14:22:48 standalone.localdomain ceph-mon[29756]: pgmap v1448: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:48 standalone.localdomain systemd-logind[45629]: Removed session 152.
Oct 13 14:22:48 standalone.localdomain sshd[220061]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:48 standalone.localdomain sshd[220061]: Accepted publickey for root from 192.168.122.11 port 59042 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:48 standalone.localdomain systemd-logind[45629]: New session 153 of user root.
Oct 13 14:22:48 standalone.localdomain systemd[1]: Started Session 153 of User root.
Oct 13 14:22:48 standalone.localdomain sshd[220061]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:48 standalone.localdomain sudo[220074]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_nova_conductor.service
Oct 13 14:22:48 standalone.localdomain sudo[220074]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:48 standalone.localdomain systemd[1]: Stopping nova_conductor container...
Oct 13 14:22:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1449: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:50 standalone.localdomain haproxy[70940]: Server placement/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 14:22:50 standalone.localdomain haproxy[70940]: proxy placement has no server available!
Oct 13 14:22:50 standalone.localdomain haproxy[70940]: 172.17.0.100:39977 [13/Oct/2025:14:09:14.356] mysql mysql/standalone.internalapi.localdomain 1/0/816539 592829 -- 22/22/21/21/0 0/0
Oct 13 14:22:50 standalone.localdomain haproxy[70940]: 172.17.0.100:48415 [13/Oct/2025:14:16:11.194] mysql mysql/standalone.internalapi.localdomain 1/0/399698 2606 -- 22/22/20/20/0 0/0
Oct 13 14:22:50 standalone.localdomain haproxy[70940]: 172.17.0.100:50975 [13/Oct/2025:14:05:19.009] mysql mysql/standalone.internalapi.localdomain 1/0/1051886 2402 -- 20/20/19/19/0 0/0
Oct 13 14:22:50 standalone.localdomain haproxy[70940]: 172.17.0.100:59209 [13/Oct/2025:14:16:11.130] mysql mysql/standalone.internalapi.localdomain 1/0/399763 7797 -- 19/19/18/18/0 0/0
Oct 13 14:22:50 standalone.localdomain haproxy[70940]: 172.17.0.100:34887 [13/Oct/2025:14:05:19.200] mysql mysql/standalone.internalapi.localdomain 1/0/1051695 764339 -- 18/18/17/17/0 0/0
Oct 13 14:22:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:22:50 standalone.localdomain ceph-mon[29756]: pgmap v1449: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:50 standalone.localdomain haproxy[70940]: 172.17.0.100:32889 [13/Oct/2025:14:05:18.878] mysql mysql/standalone.internalapi.localdomain 1/0/1052107 37240 -- 17/17/16/16/0 0/0
Oct 13 14:22:50 standalone.localdomain systemd[1]: libpod-ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.scope: Deactivated successfully.
Oct 13 14:22:50 standalone.localdomain systemd[1]: libpod-ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.scope: Consumed 10.450s CPU time.
Oct 13 14:22:50 standalone.localdomain podman[220089]: 2025-10-13 14:22:50.992271113 +0000 UTC m=+2.580930862 container died ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, name=rhosp17/openstack-nova-conductor, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, build-date=2025-07-21T15:44:17, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, com.redhat.component=openstack-nova-conductor-container, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-conductor, managed_by=tripleo_ansible, container_name=nova_conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:22:51 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.timer: Deactivated successfully.
Oct 13 14:22:51 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.
Oct 13 14:22:51 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Failed to open /run/systemd/transient/ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: No such file or directory
Oct 13 14:22:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288-userdata-shm.mount: Deactivated successfully.
Oct 13 14:22:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3b9ec403efa2eb805f6ccd9ed5698840a2ea6e37a8c8a3241e70550634ef8a13-merged.mount: Deactivated successfully.
Oct 13 14:22:51 standalone.localdomain podman[220089]: 2025-10-13 14:22:51.064499511 +0000 UTC m=+2.653159230 container cleanup ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, version=17.1.9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 nova-conductor, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, container_name=nova_conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:17, name=rhosp17/openstack-nova-conductor, 
vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, io.openshift.expose-services=, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-conductor, vcs-type=git, com.redhat.component=openstack-nova-conductor-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:22:51 standalone.localdomain podman[220089]: nova_conductor
Oct 13 14:22:51 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.timer: Failed to open /run/systemd/transient/ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.timer: No such file or directory
Oct 13 14:22:51 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Failed to open /run/systemd/transient/ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: No such file or directory
Oct 13 14:22:51 standalone.localdomain podman[220120]: 2025-10-13 14:22:51.083986167 +0000 UTC m=+0.080187846 container cleanup ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, managed_by=tripleo_ansible, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, name=rhosp17/openstack-nova-conductor, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, tcib_managed=true, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, version=17.1.9, 
container_name=nova_conductor, build-date=2025-07-21T15:44:17, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-conductor, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-conductor-container)
Oct 13 14:22:51 standalone.localdomain systemd[1]: libpod-conmon-ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.scope: Deactivated successfully.
Oct 13 14:22:51 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.timer: Failed to open /run/systemd/transient/ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.timer: No such file or directory
Oct 13 14:22:51 standalone.localdomain systemd[1]: ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: Failed to open /run/systemd/transient/ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288.service: No such file or directory
Oct 13 14:22:51 standalone.localdomain podman[220134]: 2025-10-13 14:22:51.168278689 +0000 UTC m=+0.051257586 container cleanup ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1, name=nova_conductor, container_name=nova_conductor, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-conductor, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, vcs-ref=b07d6825a5bf5ae5fdc7c696b8d7d801d21ea10d, description=Red Hat OpenStack Platform 17.1 nova-conductor, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-conductor:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:44:17, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-conductor, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-conductor/images/17.1.9-1, vcs-type=git, 
version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-conductor, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-conductor, com.redhat.component=openstack-nova-conductor-container)
Oct 13 14:22:51 standalone.localdomain podman[220134]: nova_conductor
Oct 13 14:22:51 standalone.localdomain systemd[1]: tripleo_nova_conductor.service: Deactivated successfully.
Oct 13 14:22:51 standalone.localdomain systemd[1]: Stopped nova_conductor container.
Oct 13 14:22:51 standalone.localdomain sudo[220074]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:51 standalone.localdomain sshd[220073]: Received disconnect from 192.168.122.11 port 59042:11: disconnected by user
Oct 13 14:22:51 standalone.localdomain sshd[220073]: Disconnected from user root 192.168.122.11 port 59042
Oct 13 14:22:51 standalone.localdomain sshd[220061]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:51 standalone.localdomain systemd[1]: session-153.scope: Deactivated successfully.
Oct 13 14:22:51 standalone.localdomain systemd-logind[45629]: Session 153 logged out. Waiting for processes to exit.
Oct 13 14:22:51 standalone.localdomain systemd-logind[45629]: Removed session 153.
Oct 13 14:22:51 standalone.localdomain sshd[220147]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:51 standalone.localdomain sshd[220147]: Accepted publickey for root from 192.168.122.11 port 36316 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:51 standalone.localdomain systemd-logind[45629]: New session 154 of user root.
Oct 13 14:22:51 standalone.localdomain systemd[1]: Started Session 154 of User root.
Oct 13 14:22:51 standalone.localdomain sshd[220147]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:51 standalone.localdomain sudo[220159]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_nova_metadata.service
Oct 13 14:22:51 standalone.localdomain sudo[220159]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:51 standalone.localdomain sudo[220159]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:51 standalone.localdomain sshd[220158]: Received disconnect from 192.168.122.11 port 36316:11: disconnected by user
Oct 13 14:22:51 standalone.localdomain sshd[220158]: Disconnected from user root 192.168.122.11 port 36316
Oct 13 14:22:51 standalone.localdomain sshd[220147]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:51 standalone.localdomain systemd[1]: session-154.scope: Deactivated successfully.
Oct 13 14:22:51 standalone.localdomain systemd-logind[45629]: Session 154 logged out. Waiting for processes to exit.
Oct 13 14:22:51 standalone.localdomain systemd-logind[45629]: Removed session 154.
Oct 13 14:22:51 standalone.localdomain sshd[220174]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:51 standalone.localdomain sshd[220174]: Accepted publickey for root from 192.168.122.11 port 36320 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:51 standalone.localdomain systemd-logind[45629]: New session 155 of user root.
Oct 13 14:22:51 standalone.localdomain systemd[1]: Started Session 155 of User root.
Oct 13 14:22:51 standalone.localdomain sshd[220174]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:51 standalone.localdomain sudo[220260]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_nova_metadata.service
Oct 13 14:22:51 standalone.localdomain sudo[220260]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:51 standalone.localdomain systemd[1]: Stopping nova_metadata container...
Oct 13 14:22:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1450: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:52 standalone.localdomain runuser[220294]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:22:52 standalone.localdomain runuser[220294]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:22:52 standalone.localdomain ceph-mon[29756]: pgmap v1450: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:53 standalone.localdomain runuser[220445]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:22:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:22:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:22:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:22:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:22:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:22:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:22:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:22:53 standalone.localdomain runuser[220445]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:22:53 standalone.localdomain runuser[220556]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:22:53 standalone.localdomain systemd[1]: tmp-crun.U9FNQz.mount: Deactivated successfully.
Oct 13 14:22:53 standalone.localdomain podman[220548]: 2025-10-13 14:22:53.803746588 +0000 UTC m=+0.071346262 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, container_name=keystone_cron, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, vcs-type=git, 
vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, release=1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team)
Oct 13 14:22:53 standalone.localdomain podman[220548]: 2025-10-13 14:22:53.842812312 +0000 UTC m=+0.110411996 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, vcs-type=git, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 keystone, 
com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-keystone-container, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, batch=17.1_20250721.1, container_name=keystone_cron, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:22:53 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:22:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1451: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:54 standalone.localdomain runuser[220556]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:22:54 standalone.localdomain haproxy[70940]: Server nova_osapi/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 14:22:54 standalone.localdomain haproxy[70940]: proxy nova_osapi has no server available!
Oct 13 14:22:54 standalone.localdomain haproxy[70940]: 172.17.0.100:52745 [13/Oct/2025:14:05:26.855] mysql mysql/standalone.internalapi.localdomain 1/0/1048018 3529 -- 16/16/15/15/0 0/0
Oct 13 14:22:54 standalone.localdomain haproxy[70940]: 172.17.0.100:44745 [13/Oct/2025:14:05:26.961] mysql mysql/standalone.internalapi.localdomain 1/0/1047913 2402 -- 16/16/14/14/0 0/0
Oct 13 14:22:54 standalone.localdomain haproxy[70940]: 172.17.0.100:33175 [13/Oct/2025:14:05:26.960] mysql mysql/standalone.internalapi.localdomain 1/0/1047914 2414 -- 15/15/13/13/0 0/0
Oct 13 14:22:54 standalone.localdomain haproxy[70940]: 172.17.0.100:48919 [13/Oct/2025:14:05:26.978] mysql mysql/standalone.internalapi.localdomain 1/0/1047896 3663 -- 14/14/12/12/0 0/0
Oct 13 14:22:54 standalone.localdomain systemd[1]: libpod-a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.scope: Deactivated successfully.
Oct 13 14:22:54 standalone.localdomain systemd[1]: libpod-a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.scope: Consumed 4.468s CPU time.
Oct 13 14:22:54 standalone.localdomain podman[220275]: 2025-10-13 14:22:54.877862857 +0000 UTC m=+3.101008924 container died a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, release=1, architecture=x86_64, io.openshift.expose-services=, container_name=nova_metadata, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-nova-api, summary=Red Hat OpenStack Platform 17.1 nova-api, version=17.1.9, build-date=2025-07-21T16:05:11, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', 
'/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:22:54 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.timer: Deactivated successfully.
Oct 13 14:22:54 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.
Oct 13 14:22:54 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Failed to open /run/systemd/transient/a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: No such file or directory
Oct 13 14:22:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513-userdata-shm.mount: Deactivated successfully.
Oct 13 14:22:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a87f962b5d36e9727ad9caa16c1660fa93b6a87798b880bf239662f5eb69d8c8-merged.mount: Deactivated successfully.
Oct 13 14:22:54 standalone.localdomain podman[220275]: 2025-10-13 14:22:54.930424562 +0000 UTC m=+3.153570599 container cleanup a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, name=rhosp17/openstack-nova-api, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-nova-api-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, build-date=2025-07-21T16:05:11, io.buildah.version=1.33.12, version=17.1.9, 
container_name=nova_metadata, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, batch=17.1_20250721.1)
Oct 13 14:22:54 standalone.localdomain podman[220275]: nova_metadata
Oct 13 14:22:54 standalone.localdomain ceph-mon[29756]: pgmap v1451: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:54 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.timer: Failed to open /run/systemd/transient/a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.timer: No such file or directory
Oct 13 14:22:54 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Failed to open /run/systemd/transient/a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: No such file or directory
Oct 13 14:22:54 standalone.localdomain podman[220688]: 2025-10-13 14:22:54.99047028 +0000 UTC m=+0.097513415 container cleanup a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-api, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-api, com.redhat.component=openstack-nova-api-container, description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T16:05:11, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=nova_metadata, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:22:55 standalone.localdomain systemd[1]: libpod-conmon-a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.scope: Deactivated successfully.
Oct 13 14:22:55 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.timer: Failed to open /run/systemd/transient/a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.timer: No such file or directory
Oct 13 14:22:55 standalone.localdomain systemd[1]: a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: Failed to open /run/systemd/transient/a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513.service: No such file or directory
Oct 13 14:22:55 standalone.localdomain podman[220702]: 2025-10-13 14:22:55.095020533 +0000 UTC m=+0.072792196 container cleanup a2a40976f61a708b7659e2823d532174351a556b4b0588155ccb9589c2809513 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1, name=nova_metadata, architecture=x86_64, release=1, com.redhat.component=openstack-nova-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-ref=1bf0113009f3d1746f0030e40da5d2674940fc4f, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:05:11, config_id=tripleo_step4, container_name=nova_metadata, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-api, description=Red Hat OpenStack Platform 17.1 nova-api, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '512448c809be25559cd4e0a76027bdf9'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-api:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova:z', '/var/log/containers/httpd/nova-metadata:/var/log/httpd:z', '/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_metadata:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack 
osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-api/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible)
Oct 13 14:22:55 standalone.localdomain podman[220702]: nova_metadata
Oct 13 14:22:55 standalone.localdomain systemd[1]: tripleo_nova_metadata.service: Deactivated successfully.
Oct 13 14:22:55 standalone.localdomain systemd[1]: Stopped nova_metadata container.
Oct 13 14:22:55 standalone.localdomain sudo[220260]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:55 standalone.localdomain sshd[220250]: Received disconnect from 192.168.122.11 port 36320:11: disconnected by user
Oct 13 14:22:55 standalone.localdomain sshd[220250]: Disconnected from user root 192.168.122.11 port 36320
Oct 13 14:22:55 standalone.localdomain sshd[220174]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:55 standalone.localdomain systemd[1]: session-155.scope: Deactivated successfully.
Oct 13 14:22:55 standalone.localdomain sshd[220715]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:55 standalone.localdomain systemd-logind[45629]: Session 155 logged out. Waiting for processes to exit.
Oct 13 14:22:55 standalone.localdomain systemd-logind[45629]: Removed session 155.
Oct 13 14:22:55 standalone.localdomain sshd[220715]: Accepted publickey for root from 192.168.122.11 port 36322 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:55 standalone.localdomain systemd-logind[45629]: New session 156 of user root.
Oct 13 14:22:55 standalone.localdomain systemd[1]: Started Session 156 of User root.
Oct 13 14:22:55 standalone.localdomain sshd[220715]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:55 standalone.localdomain sudo[220719]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_nova_scheduler.service
Oct 13 14:22:55 standalone.localdomain sudo[220719]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:55 standalone.localdomain sudo[220719]: pam_unix(sudo:session): session closed for user root
Oct 13 14:22:55 standalone.localdomain sshd[220718]: Received disconnect from 192.168.122.11 port 36322:11: disconnected by user
Oct 13 14:22:55 standalone.localdomain sshd[220718]: Disconnected from user root 192.168.122.11 port 36322
Oct 13 14:22:55 standalone.localdomain sshd[220715]: pam_unix(sshd:session): session closed for user root
Oct 13 14:22:55 standalone.localdomain systemd[1]: session-156.scope: Deactivated successfully.
Oct 13 14:22:55 standalone.localdomain systemd-logind[45629]: Session 156 logged out. Waiting for processes to exit.
Oct 13 14:22:55 standalone.localdomain systemd-logind[45629]: Removed session 156.
Oct 13 14:22:55 standalone.localdomain sshd[220742]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:22:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:22:55 standalone.localdomain podman[220743]: 2025-10-13 14:22:55.512091129 +0000 UTC m=+0.084548821 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, name=rhosp17/openstack-neutron-sriov-agent, config_id=tripleo_step4, container_name=neutron_sriov_agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T16:03:34, com.redhat.component=openstack-neutron-sriov-agent-container)
Oct 13 14:22:55 standalone.localdomain sshd[220742]: Accepted publickey for root from 192.168.122.11 port 36338 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:22:55 standalone.localdomain systemd-logind[45629]: New session 157 of user root.
Oct 13 14:22:55 standalone.localdomain systemd[1]: Started Session 157 of User root.
Oct 13 14:22:55 standalone.localdomain sshd[220742]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:55 standalone.localdomain podman[220743]: 2025-10-13 14:22:55.579994713 +0000 UTC m=+0.152452415 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, com.redhat.component=openstack-neutron-sriov-agent-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, 
name=rhosp17/openstack-neutron-sriov-agent, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, version=17.1.9, io.openshift.expose-services=, container_name=neutron_sriov_agent, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:03:34)
Oct 13 14:22:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:22:55 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:22:55 standalone.localdomain sudo[220771]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_nova_scheduler.service
Oct 13 14:22:55 standalone.localdomain sudo[220771]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:22:55 standalone.localdomain systemd[1]: Stopping nova_scheduler container...
Oct 13 14:22:55 standalone.localdomain podman[220785]: 2025-10-13 14:22:55.760749676 +0000 UTC m=+0.082426995 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.expose-services=, 
com.redhat.component=openstack-neutron-dhcp-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:54, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-dhcp-agent, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_dhcp)
Oct 13 14:22:55 standalone.localdomain podman[220785]: 2025-10-13 14:22:55.812096564 +0000 UTC m=+0.133773943 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-dhcp-agent-container, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:54, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=neutron_dhcp)
Oct 13 14:22:55 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:22:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1452: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:22:56 standalone.localdomain ceph-mon[29756]: pgmap v1452: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:22:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1111517119' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:22:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:22:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1111517119' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:22:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1453: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:58 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1111517119' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:22:58 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1111517119' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:22:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:22:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:22:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:22:58 standalone.localdomain podman[220908]: 2025-10-13 14:22:58.299721152 +0000 UTC m=+0.060407060 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, maintainer=OpenStack TripleO Team, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:45, config_id=tripleo_step2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, name=rhosp17/openstack-mariadb, com.redhat.component=openstack-mariadb-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=clustercheck, io.buildah.version=1.33.12)
Oct 13 14:22:58 standalone.localdomain podman[220906]: 2025-10-13 14:22:58.33887755 +0000 UTC m=+0.102199180 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, container_name=nova_compute, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, tcib_managed=true)
Oct 13 14:22:58 standalone.localdomain podman[220908]: 2025-10-13 14:22:58.348743198 +0000 UTC m=+0.109429106 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, version=17.1.9, com.redhat.component=openstack-mariadb-container, release=1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., container_name=clustercheck, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step2, tcib_managed=true, 
build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 14:22:58 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:22:58 standalone.localdomain podman[220906]: 2025-10-13 14:22:58.368793261 +0000 UTC m=+0.132114831 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container)
Oct 13 14:22:58 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:22:58 standalone.localdomain podman[220907]: 2025-10-13 14:22:58.437599293 +0000 UTC m=+0.196537507 container health_status 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, tcib_managed=true, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-northd, release=1, container_name=ovn_cluster_northd, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, config_id=ovn_cluster_northd, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:30:04, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-ovn-northd-container, name=rhosp17/openstack-ovn-northd)
Oct 13 14:22:58 standalone.localdomain podman[220907]: 2025-10-13 14:22:58.447362776 +0000 UTC m=+0.206300940 container exec_died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ovn_cluster_northd, name=rhosp17/openstack-ovn-northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.openshift.expose-services=, release=1, com.redhat.component=openstack-ovn-northd-container, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, tcib_managed=true, config_id=ovn_cluster_northd, build-date=2025-07-21T13:30:04, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-northd, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:22:58 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Deactivated successfully.
Oct 13 14:22:59 standalone.localdomain ceph-mon[29756]: pgmap v1453: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:22:59 standalone.localdomain systemd[1]: tmp-crun.F9XK5w.mount: Deactivated successfully.
Oct 13 14:22:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1454: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:00 standalone.localdomain ceph-mon[29756]: pgmap v1454: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:23:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1455: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:02 standalone.localdomain sudo[221105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:23:02 standalone.localdomain sudo[221105]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:23:02 standalone.localdomain sudo[221105]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:02 standalone.localdomain sudo[221120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:23:02 standalone.localdomain sudo[221120]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:23:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:23:02 standalone.localdomain podman[221176]: 2025-10-13 14:23:02.561028016 +0000 UTC m=+0.077431130 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack 
osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid)
Oct 13 14:23:02 standalone.localdomain podman[221176]: 2025-10-13 14:23:02.573122262 +0000 UTC m=+0.089525376 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, architecture=x86_64, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, 
io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Oct 13 14:23:02 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:23:02 standalone.localdomain sudo[221120]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:02 standalone.localdomain haproxy[70940]: Server nova_metadata/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 14:23:02 standalone.localdomain haproxy[70940]: proxy nova_metadata has no server available!
Oct 13 14:23:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:23:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:23:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:23:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:23:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:23:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:23:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:23:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:23:02 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 8e392da9-454b-4192-8e6b-e12722e9d075 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:23:02 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 8e392da9-454b-4192-8e6b-e12722e9d075 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:23:02 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 8e392da9-454b-4192-8e6b-e12722e9d075 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:23:02 standalone.localdomain ceph-mon[29756]: pgmap v1455: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:23:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:23:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:23:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:23:03 standalone.localdomain sudo[221275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:23:03 standalone.localdomain sudo[221275]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:23:03 standalone.localdomain sudo[221275]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:03 standalone.localdomain podman[221293]: 2025-10-13 14:23:03.313838678 +0000 UTC m=+0.070004649 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, build-date=2025-07-21T12:58:45, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 14:23:03 standalone.localdomain podman[221293]: 2025-10-13 14:23:03.347083452 +0000 UTC m=+0.103249443 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.component=openstack-mariadb-container, architecture=x86_64, build-date=2025-07-21T12:58:45, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:23:03 standalone.localdomain podman[221368]: 2025-10-13 14:23:03.566067906 +0000 UTC m=+0.075670156 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, com.redhat.component=openstack-haproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:08:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., release=1)
Oct 13 14:23:03 standalone.localdomain podman[221368]: 2025-10-13 14:23:03.598946629 +0000 UTC m=+0.108548769 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, release=1, name=rhosp17/openstack-haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, com.redhat.component=openstack-haproxy-container, io.openshift.expose-services=, build-date=2025-07-21T13:08:11)
Oct 13 14:23:03 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:23:03 standalone.localdomain recover_tripleo_nova_virtqemud[221410]: 93291
Oct 13 14:23:03 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:23:03 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:23:03 standalone.localdomain podman[221414]: 2025-10-13 14:23:03.847374689 +0000 UTC m=+0.080117944 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, batch=17.1_20250721.1, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, build-date=2025-07-21T13:08:05, tcib_managed=true, com.redhat.component=openstack-rabbitmq-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 14:23:03 standalone.localdomain podman[221414]: 2025-10-13 14:23:03.877078892 +0000 UTC m=+0.109822117 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, name=rhosp17/openstack-rabbitmq, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, batch=17.1_20250721.1, com.redhat.component=openstack-rabbitmq-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1)
Oct 13 14:23:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1456: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:04 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:23:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:23:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:23:05 standalone.localdomain ceph-mon[29756]: pgmap v1456: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:05 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:23:05 standalone.localdomain runuser[221510]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:23:05 standalone.localdomain haproxy[70940]: 172.17.0.100:55295 [13/Oct/2025:14:05:20.028] mysql mysql/standalone.internalapi.localdomain 1/0/1065370 2414 -- 12/12/11/11/0 0/0
Oct 13 14:23:05 standalone.localdomain haproxy[70940]: 172.17.0.100:46543 [13/Oct/2025:14:11:13.671] mysql mysql/standalone.internalapi.localdomain 1/0/711914 18340 -- 11/11/10/10/0 0/0
Oct 13 14:23:05 standalone.localdomain haproxy[70940]: 172.17.0.100:46193 [13/Oct/2025:14:05:20.202] mysql mysql/standalone.internalapi.localdomain 1/0/1065382 151065 -- 10/10/9/9/0 0/0
Oct 13 14:23:05 standalone.localdomain haproxy[70940]: 172.17.0.100:55435 [13/Oct/2025:14:05:19.707] mysql mysql/standalone.internalapi.localdomain 1/0/1065878 4798 -- 9/9/8/8/0 0/0
Oct 13 14:23:05 standalone.localdomain runuser[221510]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:23:05 standalone.localdomain haproxy[70940]: 172.17.0.100:33923 [13/Oct/2025:14:05:19.593] mysql mysql/standalone.internalapi.localdomain 1/0/1066091 8068 -- 8/8/7/7/0 0/0
Oct 13 14:23:05 standalone.localdomain systemd[1]: libpod-37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.scope: Deactivated successfully.
Oct 13 14:23:05 standalone.localdomain systemd[1]: libpod-37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.scope: Consumed 3.928s CPU time.
Oct 13 14:23:05 standalone.localdomain podman[220792]: 2025-10-13 14:23:05.692218777 +0000 UTC m=+9.994538073 container died 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, vcs-type=git, name=rhosp17/openstack-nova-scheduler, distribution-scope=public, io.buildah.version=1.33.12, container_name=nova_scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, version=17.1.9, build-date=2025-07-21T16:02:54, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-nova-scheduler-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-scheduler)
Oct 13 14:23:05 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.timer: Deactivated successfully.
Oct 13 14:23:05 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.
Oct 13 14:23:05 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Failed to open /run/systemd/transient/37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: No such file or directory
Oct 13 14:23:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-9405f1b2f2f7c524365dd7da82f79d342db2340e35c55f8c8aa59d7a473d1f4f-merged.mount: Deactivated successfully.
Oct 13 14:23:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa-userdata-shm.mount: Deactivated successfully.
Oct 13 14:23:05 standalone.localdomain podman[220792]: 2025-10-13 14:23:05.733851863 +0000 UTC m=+10.036171159 container cleanup 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, architecture=x86_64, com.redhat.component=openstack-nova-scheduler-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, name=rhosp17/openstack-nova-scheduler, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T16:02:54, container_name=nova_scheduler, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:23:05 standalone.localdomain podman[220792]: nova_scheduler
Oct 13 14:23:05 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.timer: Failed to open /run/systemd/transient/37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.timer: No such file or directory
Oct 13 14:23:05 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Failed to open /run/systemd/transient/37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: No such file or directory
Oct 13 14:23:05 standalone.localdomain podman[221579]: 2025-10-13 14:23:05.766382355 +0000 UTC m=+0.062788095 container cleanup 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, release=1, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, tcib_managed=true, com.redhat.component=openstack-nova-scheduler-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, container_name=nova_scheduler, name=rhosp17/openstack-nova-scheduler, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, build-date=2025-07-21T16:02:54, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1)
Oct 13 14:23:05 standalone.localdomain systemd[1]: libpod-conmon-37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.scope: Deactivated successfully.
Oct 13 14:23:05 standalone.localdomain runuser[221603]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:23:05 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.timer: Failed to open /run/systemd/transient/37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.timer: No such file or directory
Oct 13 14:23:05 standalone.localdomain systemd[1]: 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: Failed to open /run/systemd/transient/37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa.service: No such file or directory
Oct 13 14:23:05 standalone.localdomain podman[221592]: 2025-10-13 14:23:05.887944858 +0000 UTC m=+0.090104035 container cleanup 37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1, name=nova_scheduler, build-date=2025-07-21T16:02:54, io.openshift.expose-services=, container_name=nova_scheduler, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-scheduler, name=rhosp17/openstack-nova-scheduler, tcib_managed=true, com.redhat.component=openstack-nova-scheduler-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-scheduler, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4f7cb55437a2b42333072591935a511528e79935, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-scheduler, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-scheduler/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-scheduler:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-scheduler)
Oct 13 14:23:05 standalone.localdomain podman[221592]: nova_scheduler
Oct 13 14:23:05 standalone.localdomain systemd[1]: tripleo_nova_scheduler.service: Deactivated successfully.
Oct 13 14:23:05 standalone.localdomain systemd[1]: Stopped nova_scheduler container.
Oct 13 14:23:05 standalone.localdomain sudo[220771]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:05 standalone.localdomain sshd[220769]: Received disconnect from 192.168.122.11 port 36338:11: disconnected by user
Oct 13 14:23:05 standalone.localdomain sshd[220769]: Disconnected from user root 192.168.122.11 port 36338
Oct 13 14:23:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1457: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:05 standalone.localdomain sshd[220742]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:05 standalone.localdomain systemd[1]: session-157.scope: Deactivated successfully.
Oct 13 14:23:05 standalone.localdomain systemd-logind[45629]: Session 157 logged out. Waiting for processes to exit.
Oct 13 14:23:05 standalone.localdomain systemd-logind[45629]: Removed session 157.
Oct 13 14:23:05 standalone.localdomain sshd[221620]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:23:06 standalone.localdomain sshd[221620]: Accepted publickey for root from 192.168.122.11 port 45070 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:06 standalone.localdomain systemd-logind[45629]: New session 158 of user root.
Oct 13 14:23:06 standalone.localdomain systemd[1]: Started Session 158 of User root.
Oct 13 14:23:06 standalone.localdomain sshd[221620]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:06 standalone.localdomain sudo[221652]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_nova_vnc_proxy.service
Oct 13 14:23:06 standalone.localdomain sudo[221652]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:06 standalone.localdomain sudo[221652]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:06 standalone.localdomain sshd[221651]: Received disconnect from 192.168.122.11 port 45070:11: disconnected by user
Oct 13 14:23:06 standalone.localdomain sshd[221651]: Disconnected from user root 192.168.122.11 port 45070
Oct 13 14:23:06 standalone.localdomain sshd[221620]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:06 standalone.localdomain systemd-logind[45629]: Session 158 logged out. Waiting for processes to exit.
Oct 13 14:23:06 standalone.localdomain systemd[1]: session-158.scope: Deactivated successfully.
Oct 13 14:23:06 standalone.localdomain systemd-logind[45629]: Removed session 158.
Oct 13 14:23:06 standalone.localdomain sshd[221667]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:06 standalone.localdomain sshd[221667]: Accepted publickey for root from 192.168.122.11 port 45074 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:06 standalone.localdomain systemd-logind[45629]: New session 159 of user root.
Oct 13 14:23:06 standalone.localdomain systemd[1]: Started Session 159 of User root.
Oct 13 14:23:06 standalone.localdomain sshd[221667]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:06 standalone.localdomain sudo[221671]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_nova_vnc_proxy.service
Oct 13 14:23:06 standalone.localdomain sudo[221671]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:06 standalone.localdomain systemd[1]: Stopping nova_vnc_proxy container...
Oct 13 14:23:06 standalone.localdomain runuser[221603]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:23:06 standalone.localdomain runuser[221706]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:23:06 standalone.localdomain systemd[1]: libpod-e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.scope: Deactivated successfully.
Oct 13 14:23:06 standalone.localdomain systemd[1]: libpod-e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.scope: Consumed 11.931s CPU time.
Oct 13 14:23:06 standalone.localdomain podman[221690]: 2025-10-13 14:23:06.876298338 +0000 UTC m=+0.415949573 container died e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, vcs-type=git, container_name=nova_vnc_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, release=1, version=17.1.9, build-date=2025-07-21T15:24:10, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, com.redhat.component=openstack-nova-novncproxy-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.buildah.version=1.33.12)
Oct 13 14:23:06 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.timer: Deactivated successfully.
Oct 13 14:23:06 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.
Oct 13 14:23:06 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Failed to open /run/systemd/transient/e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: No such file or directory
Oct 13 14:23:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26-userdata-shm.mount: Deactivated successfully.
Oct 13 14:23:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8675278b750d990aa4a2dbfe907b259973251cdfa372b24fe7e026c128a60e7a-merged.mount: Deactivated successfully.
Oct 13 14:23:06 standalone.localdomain podman[221690]: 2025-10-13 14:23:06.926524331 +0000 UTC m=+0.466175476 container cleanup e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, com.redhat.component=openstack-nova-novncproxy-container, io.openshift.expose-services=, name=rhosp17/openstack-nova-novncproxy, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, version=17.1.9, vcs-type=git, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.buildah.version=1.33.12, distribution-scope=public, container_name=nova_vnc_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, config_id=tripleo_step4, build-date=2025-07-21T15:24:10, managed_by=tripleo_ansible)
Oct 13 14:23:06 standalone.localdomain podman[221690]: nova_vnc_proxy
Oct 13 14:23:06 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.timer: Failed to open /run/systemd/transient/e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.timer: No such file or directory
Oct 13 14:23:06 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Failed to open /run/systemd/transient/e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: No such file or directory
Oct 13 14:23:06 standalone.localdomain podman[221764]: 2025-10-13 14:23:06.958482715 +0000 UTC m=+0.077117700 container cleanup e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1, build-date=2025-07-21T15:24:10, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.buildah.version=1.33.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-nova-novncproxy, io.openshift.expose-services=, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, com.redhat.component=openstack-nova-novncproxy-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_vnc_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy)
Oct 13 14:23:06 standalone.localdomain systemd[1]: libpod-conmon-e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.scope: Deactivated successfully.
Oct 13 14:23:07 standalone.localdomain ceph-mon[29756]: pgmap v1457: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:07 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.timer: Failed to open /run/systemd/transient/e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.timer: No such file or directory
Oct 13 14:23:07 standalone.localdomain systemd[1]: e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: Failed to open /run/systemd/transient/e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26.service: No such file or directory
Oct 13 14:23:07 standalone.localdomain podman[221778]: 2025-10-13 14:23:07.049725684 +0000 UTC m=+0.058293875 container cleanup e2e195aea4c93cf2e3ddab7fb7ed9f2a6decaf5eba130a7318116d45d98d3a26 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1, name=nova_vnc_proxy, config_id=tripleo_step4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-novncproxy, version=17.1.9, release=1, distribution-scope=public, container_name=nova_vnc_proxy, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-novncproxy, description=Red Hat OpenStack Platform 17.1 nova-novncproxy, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-novncproxy-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff6c887813d25a6bfef54f8920eca651'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-novncproxy:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, build-date=2025-07-21T15:24:10, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-novncproxy, vcs-ref=e99c43a66afc0e37332388ec1b0622fff217c96f, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-novncproxy, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-novncproxy/images/17.1.9-1)
Oct 13 14:23:07 standalone.localdomain podman[221778]: nova_vnc_proxy
Oct 13 14:23:07 standalone.localdomain systemd[1]: tripleo_nova_vnc_proxy.service: Deactivated successfully.
Oct 13 14:23:07 standalone.localdomain systemd[1]: Stopped nova_vnc_proxy container.
Oct 13 14:23:07 standalone.localdomain sudo[221671]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:07 standalone.localdomain sshd[221670]: Received disconnect from 192.168.122.11 port 45074:11: disconnected by user
Oct 13 14:23:07 standalone.localdomain sshd[221670]: Disconnected from user root 192.168.122.11 port 45074
Oct 13 14:23:07 standalone.localdomain sshd[221667]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:07 standalone.localdomain systemd[1]: session-159.scope: Deactivated successfully.
Oct 13 14:23:07 standalone.localdomain systemd-logind[45629]: Session 159 logged out. Waiting for processes to exit.
Oct 13 14:23:07 standalone.localdomain systemd-logind[45629]: Removed session 159.
Oct 13 14:23:07 standalone.localdomain sshd[221793]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:07 standalone.localdomain sshd[221793]: Accepted publickey for root from 192.168.122.11 port 45090 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:07 standalone.localdomain systemd-logind[45629]: New session 160 of user root.
Oct 13 14:23:07 standalone.localdomain systemd[1]: Started Session 160 of User root.
Oct 13 14:23:07 standalone.localdomain sshd[221793]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:07 standalone.localdomain runuser[221706]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:23:07 standalone.localdomain sudo[221803]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_aodh_api.service
Oct 13 14:23:07 standalone.localdomain sudo[221803]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:07 standalone.localdomain sudo[221803]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:07 standalone.localdomain sshd[221802]: Received disconnect from 192.168.122.11 port 45090:11: disconnected by user
Oct 13 14:23:07 standalone.localdomain sshd[221802]: Disconnected from user root 192.168.122.11 port 45090
Oct 13 14:23:07 standalone.localdomain sshd[221793]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:07 standalone.localdomain systemd[1]: session-160.scope: Deactivated successfully.
Oct 13 14:23:07 standalone.localdomain systemd-logind[45629]: Session 160 logged out. Waiting for processes to exit.
Oct 13 14:23:07 standalone.localdomain systemd-logind[45629]: Removed session 160.
Oct 13 14:23:07 standalone.localdomain sshd[221819]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:07 standalone.localdomain sshd[221819]: Accepted publickey for root from 192.168.122.11 port 45096 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:07 standalone.localdomain systemd-logind[45629]: New session 161 of user root.
Oct 13 14:23:07 standalone.localdomain systemd[1]: Started Session 161 of User root.
Oct 13 14:23:07 standalone.localdomain sshd[221819]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:07 standalone.localdomain sudo[221831]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_aodh_api_cron.service
Oct 13 14:23:07 standalone.localdomain sudo[221831]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:07 standalone.localdomain sudo[221831]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:07 standalone.localdomain sshd[221823]: Received disconnect from 192.168.122.11 port 45096:11: disconnected by user
Oct 13 14:23:07 standalone.localdomain sshd[221823]: Disconnected from user root 192.168.122.11 port 45096
Oct 13 14:23:07 standalone.localdomain sshd[221819]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:07 standalone.localdomain systemd[1]: session-161.scope: Deactivated successfully.
Oct 13 14:23:07 standalone.localdomain systemd-logind[45629]: Session 161 logged out. Waiting for processes to exit.
Oct 13 14:23:07 standalone.localdomain systemd-logind[45629]: Removed session 161.
Oct 13 14:23:07 standalone.localdomain sshd[221848]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:07 standalone.localdomain podman[221852]: 2025-10-13 14:23:07.807427129 +0000 UTC m=+0.080807195 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, release=1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cinder-volume, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cinder-volume, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T16:13:39, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, version=17.1.9, com.redhat.component=openstack-cinder-volume-container, name=rhosp17/openstack-cinder-volume)
Oct 13 14:23:07 standalone.localdomain sshd[221848]: Accepted publickey for root from 192.168.122.11 port 45104 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:07 standalone.localdomain systemd-logind[45629]: New session 162 of user root.
Oct 13 14:23:07 standalone.localdomain podman[221852]: 2025-10-13 14:23:07.839956581 +0000 UTC m=+0.113336667 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, vendor=Red Hat, Inc., com.redhat.component=openstack-cinder-volume-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cinder-volume, tcib_managed=true, name=rhosp17/openstack-cinder-volume, summary=Red Hat OpenStack Platform 17.1 cinder-volume, build-date=2025-07-21T16:13:39, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:23:07 standalone.localdomain systemd[1]: Started Session 162 of User root.
Oct 13 14:23:07 standalone.localdomain sshd[221848]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1458: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:07 standalone.localdomain sudo[221886]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_aodh_evaluator.service
Oct 13 14:23:07 standalone.localdomain sudo[221886]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:07 standalone.localdomain sudo[221886]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:07 standalone.localdomain sshd[221883]: Received disconnect from 192.168.122.11 port 45104:11: disconnected by user
Oct 13 14:23:07 standalone.localdomain sshd[221883]: Disconnected from user root 192.168.122.11 port 45104
Oct 13 14:23:07 standalone.localdomain sshd[221848]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:07 standalone.localdomain systemd[1]: session-162.scope: Deactivated successfully.
Oct 13 14:23:07 standalone.localdomain systemd-logind[45629]: Session 162 logged out. Waiting for processes to exit.
Oct 13 14:23:07 standalone.localdomain systemd-logind[45629]: Removed session 162.
Oct 13 14:23:08 standalone.localdomain sshd[221901]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:08 standalone.localdomain ceph-mon[29756]: pgmap v1458: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:08 standalone.localdomain sshd[221901]: Accepted publickey for root from 192.168.122.11 port 45118 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:08 standalone.localdomain systemd-logind[45629]: New session 163 of user root.
Oct 13 14:23:08 standalone.localdomain systemd[1]: Started Session 163 of User root.
Oct 13 14:23:08 standalone.localdomain sshd[221901]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:08 standalone.localdomain sudo[221905]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_aodh_listener.service
Oct 13 14:23:08 standalone.localdomain sudo[221905]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:08 standalone.localdomain sudo[221905]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:08 standalone.localdomain sshd[221904]: Received disconnect from 192.168.122.11 port 45118:11: disconnected by user
Oct 13 14:23:08 standalone.localdomain sshd[221904]: Disconnected from user root 192.168.122.11 port 45118
Oct 13 14:23:08 standalone.localdomain sshd[221901]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:08 standalone.localdomain systemd[1]: session-163.scope: Deactivated successfully.
Oct 13 14:23:08 standalone.localdomain systemd-logind[45629]: Session 163 logged out. Waiting for processes to exit.
Oct 13 14:23:08 standalone.localdomain systemd-logind[45629]: Removed session 163.
Oct 13 14:23:08 standalone.localdomain sshd[221920]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:08 standalone.localdomain sshd[221920]: Accepted publickey for root from 192.168.122.11 port 45132 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:08 standalone.localdomain systemd-logind[45629]: New session 164 of user root.
Oct 13 14:23:08 standalone.localdomain systemd[1]: Started Session 164 of User root.
Oct 13 14:23:08 standalone.localdomain sshd[221920]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:08 standalone.localdomain sudo[221924]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_aodh_notifier.service
Oct 13 14:23:08 standalone.localdomain sudo[221924]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:08 standalone.localdomain sudo[221924]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:08 standalone.localdomain sshd[221923]: Received disconnect from 192.168.122.11 port 45132:11: disconnected by user
Oct 13 14:23:08 standalone.localdomain sshd[221923]: Disconnected from user root 192.168.122.11 port 45132
Oct 13 14:23:08 standalone.localdomain sshd[221920]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:08 standalone.localdomain systemd[1]: session-164.scope: Deactivated successfully.
Oct 13 14:23:08 standalone.localdomain systemd-logind[45629]: Session 164 logged out. Waiting for processes to exit.
Oct 13 14:23:08 standalone.localdomain systemd-logind[45629]: Removed session 164.
Oct 13 14:23:08 standalone.localdomain sshd[221939]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:08 standalone.localdomain sshd[221939]: Accepted publickey for root from 192.168.122.11 port 45136 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:08 standalone.localdomain systemd-logind[45629]: New session 165 of user root.
Oct 13 14:23:08 standalone.localdomain systemd[1]: Started Session 165 of User root.
Oct 13 14:23:08 standalone.localdomain sshd[221939]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:08 standalone.localdomain sudo[221951]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ceilometer_agent_central.service
Oct 13 14:23:08 standalone.localdomain sudo[221951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:08 standalone.localdomain sudo[221951]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:08 standalone.localdomain sshd[221950]: Received disconnect from 192.168.122.11 port 45136:11: disconnected by user
Oct 13 14:23:08 standalone.localdomain sshd[221950]: Disconnected from user root 192.168.122.11 port 45136
Oct 13 14:23:08 standalone.localdomain sshd[221939]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:08 standalone.localdomain systemd[1]: session-165.scope: Deactivated successfully.
Oct 13 14:23:08 standalone.localdomain systemd-logind[45629]: Session 165 logged out. Waiting for processes to exit.
Oct 13 14:23:08 standalone.localdomain systemd-logind[45629]: Removed session 165.
Oct 13 14:23:08 standalone.localdomain sshd[221966]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:08 standalone.localdomain sshd[221966]: Accepted publickey for root from 192.168.122.11 port 45138 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:08 standalone.localdomain systemd-logind[45629]: New session 166 of user root.
Oct 13 14:23:08 standalone.localdomain systemd[1]: Started Session 166 of User root.
Oct 13 14:23:08 standalone.localdomain sshd[221966]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:09 standalone.localdomain sudo[221970]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ceilometer_agent_compute.service
Oct 13 14:23:09 standalone.localdomain sudo[221970]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:09 standalone.localdomain sudo[221970]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:09 standalone.localdomain sshd[221969]: Received disconnect from 192.168.122.11 port 45138:11: disconnected by user
Oct 13 14:23:09 standalone.localdomain sshd[221969]: Disconnected from user root 192.168.122.11 port 45138
Oct 13 14:23:09 standalone.localdomain sshd[221966]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:09 standalone.localdomain systemd[1]: session-166.scope: Deactivated successfully.
Oct 13 14:23:09 standalone.localdomain systemd-logind[45629]: Session 166 logged out. Waiting for processes to exit.
Oct 13 14:23:09 standalone.localdomain systemd-logind[45629]: Removed session 166.
Oct 13 14:23:09 standalone.localdomain sshd[221985]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:09 standalone.localdomain sshd[221985]: Accepted publickey for root from 192.168.122.11 port 45150 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:09 standalone.localdomain systemd-logind[45629]: New session 167 of user root.
Oct 13 14:23:09 standalone.localdomain systemd[1]: Started Session 167 of User root.
Oct 13 14:23:09 standalone.localdomain sshd[221985]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:09 standalone.localdomain sudo[221989]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ceilometer_agent_ipmi.service
Oct 13 14:23:09 standalone.localdomain sudo[221989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:09 standalone.localdomain sudo[221989]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:09 standalone.localdomain sshd[221988]: Received disconnect from 192.168.122.11 port 45150:11: disconnected by user
Oct 13 14:23:09 standalone.localdomain sshd[221988]: Disconnected from user root 192.168.122.11 port 45150
Oct 13 14:23:09 standalone.localdomain sshd[221985]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:09 standalone.localdomain systemd[1]: session-167.scope: Deactivated successfully.
Oct 13 14:23:09 standalone.localdomain systemd-logind[45629]: Session 167 logged out. Waiting for processes to exit.
Oct 13 14:23:09 standalone.localdomain systemd-logind[45629]: Removed session 167.
Oct 13 14:23:09 standalone.localdomain sshd[222004]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:09 standalone.localdomain sshd[222004]: Accepted publickey for root from 192.168.122.11 port 45154 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:09 standalone.localdomain systemd-logind[45629]: New session 168 of user root.
Oct 13 14:23:09 standalone.localdomain systemd[1]: Started Session 168 of User root.
Oct 13 14:23:09 standalone.localdomain sshd[222004]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:09 standalone.localdomain sudo[222008]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ceilometer_agent_notification.service
Oct 13 14:23:09 standalone.localdomain sudo[222008]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:09 standalone.localdomain sudo[222008]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:09 standalone.localdomain sshd[222007]: Received disconnect from 192.168.122.11 port 45154:11: disconnected by user
Oct 13 14:23:09 standalone.localdomain sshd[222007]: Disconnected from user root 192.168.122.11 port 45154
Oct 13 14:23:09 standalone.localdomain sshd[222004]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:09 standalone.localdomain systemd[1]: session-168.scope: Deactivated successfully.
Oct 13 14:23:09 standalone.localdomain systemd-logind[45629]: Session 168 logged out. Waiting for processes to exit.
Oct 13 14:23:09 standalone.localdomain systemd-logind[45629]: Removed session 168.
Oct 13 14:23:09 standalone.localdomain sshd[222030]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:09 standalone.localdomain sshd[222030]: Accepted publickey for root from 192.168.122.11 port 53408 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:09 standalone.localdomain systemd-logind[45629]: New session 169 of user root.
Oct 13 14:23:09 standalone.localdomain systemd[1]: Started Session 169 of User root.
Oct 13 14:23:09 standalone.localdomain sshd[222030]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1459: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:09 standalone.localdomain sudo[222035]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ovn_cluster_northd.service
Oct 13 14:23:10 standalone.localdomain sudo[222035]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:10 standalone.localdomain sudo[222035]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:10 standalone.localdomain sshd[222034]: Received disconnect from 192.168.122.11 port 53408:11: disconnected by user
Oct 13 14:23:10 standalone.localdomain sshd[222034]: Disconnected from user root 192.168.122.11 port 53408
Oct 13 14:23:10 standalone.localdomain sshd[222030]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:10 standalone.localdomain systemd[1]: session-169.scope: Deactivated successfully.
Oct 13 14:23:10 standalone.localdomain systemd-logind[45629]: Session 169 logged out. Waiting for processes to exit.
Oct 13 14:23:10 standalone.localdomain systemd-logind[45629]: Removed session 169.
Oct 13 14:23:10 standalone.localdomain sshd[222050]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:10 standalone.localdomain sshd[222050]: Accepted publickey for root from 192.168.122.11 port 53420 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:10 standalone.localdomain systemd-logind[45629]: New session 170 of user root.
Oct 13 14:23:10 standalone.localdomain systemd[1]: Started Session 170 of User root.
Oct 13 14:23:10 standalone.localdomain sshd[222050]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:10 standalone.localdomain sudo[222054]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_ovn_cluster_northd.service
Oct 13 14:23:10 standalone.localdomain sudo[222054]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:10 standalone.localdomain systemd[1]: Stopping ovn_cluster_northd container...
Oct 13 14:23:10 standalone.localdomain systemd[1]: tmp-crun.9DsJhq.mount: Deactivated successfully.
Oct 13 14:23:10 standalone.localdomain systemd[1]: libpod-89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.scope: Deactivated successfully.
Oct 13 14:23:10 standalone.localdomain systemd[1]: libpod-89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.scope: Consumed 1.435s CPU time.
Oct 13 14:23:10 standalone.localdomain podman[222069]: 2025-10-13 14:23:10.407735834 +0000 UTC m=+0.080740033 container died 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, summary=Red Hat OpenStack Platform 17.1 ovn-northd, container_name=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-northd-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, build-date=2025-07-21T13:30:04, name=rhosp17/openstack-ovn-northd, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, config_id=ovn_cluster_northd, batch=17.1_20250721.1)
Oct 13 14:23:10 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.timer: Deactivated successfully.
Oct 13 14:23:10 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.
Oct 13 14:23:10 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Failed to open /run/systemd/transient/89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: No such file or directory
Oct 13 14:23:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71-userdata-shm.mount: Deactivated successfully.
Oct 13 14:23:10 standalone.localdomain podman[222069]: 2025-10-13 14:23:10.502351947 +0000 UTC m=+0.175356146 container cleanup 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, build-date=2025-07-21T13:30:04, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-northd, vcs-type=git, vendor=Red Hat, Inc., container_name=ovn_cluster_northd, io.buildah.version=1.33.12, distribution-scope=public, name=rhosp17/openstack-ovn-northd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, tcib_managed=true, config_id=ovn_cluster_northd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-northd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd)
Oct 13 14:23:10 standalone.localdomain podman[222069]: ovn_cluster_northd
Oct 13 14:23:10 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.timer: Failed to open /run/systemd/transient/89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.timer: No such file or directory
Oct 13 14:23:10 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Failed to open /run/systemd/transient/89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: No such file or directory
Oct 13 14:23:10 standalone.localdomain podman[222083]: 2025-10-13 14:23:10.516353823 +0000 UTC m=+0.104266555 container cleanup 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, com.redhat.component=openstack-ovn-northd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, config_id=ovn_cluster_northd, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-northd, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, build-date=2025-07-21T13:30:04, summary=Red Hat OpenStack Platform 17.1 ovn-northd, vendor=Red Hat, Inc., container_name=ovn_cluster_northd, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, name=rhosp17/openstack-ovn-northd, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vcs-type=git)
Oct 13 14:23:10 standalone.localdomain systemd[1]: libpod-conmon-89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.scope: Deactivated successfully.
Oct 13 14:23:10 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.timer: Failed to open /run/systemd/transient/89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.timer: No such file or directory
Oct 13 14:23:10 standalone.localdomain systemd[1]: 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: Failed to open /run/systemd/transient/89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71.service: No such file or directory
Oct 13 14:23:10 standalone.localdomain podman[222098]: 2025-10-13 14:23:10.619874894 +0000 UTC m=+0.066768019 container cleanup 89e8344c4e856e180ec8440a3eec6ba629ba75e9c9dbfbd3c4dd930228ed6c71 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1, name=ovn_cluster_northd, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_cluster_northd, description=Red Hat OpenStack Platform 17.1 ovn-northd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-northd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-northd/images/17.1.9-1, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-northd, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-northd, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-northd-container, version=17.1.9, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-northd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_northd.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, config_id=ovn_cluster_northd, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-northd, vcs-type=git, vcs-ref=147a71e47b30e4fbe53497346242978639852ad4, build-date=2025-07-21T13:30:04)
Oct 13 14:23:10 standalone.localdomain podman[222098]: ovn_cluster_northd
Oct 13 14:23:10 standalone.localdomain systemd[1]: tripleo_ovn_cluster_northd.service: Deactivated successfully.
Oct 13 14:23:10 standalone.localdomain systemd[1]: Stopped ovn_cluster_northd container.
Oct 13 14:23:10 standalone.localdomain sudo[222054]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:10 standalone.localdomain sshd[222053]: Received disconnect from 192.168.122.11 port 53420:11: disconnected by user
Oct 13 14:23:10 standalone.localdomain sshd[222053]: Disconnected from user root 192.168.122.11 port 53420
Oct 13 14:23:10 standalone.localdomain sshd[222050]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:10 standalone.localdomain systemd[1]: session-170.scope: Deactivated successfully.
Oct 13 14:23:10 standalone.localdomain systemd-logind[45629]: Session 170 logged out. Waiting for processes to exit.
Oct 13 14:23:10 standalone.localdomain systemd-logind[45629]: Removed session 170.
Oct 13 14:23:10 standalone.localdomain sshd[222111]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:10 standalone.localdomain sshd[222111]: Accepted publickey for root from 192.168.122.11 port 53428 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:10 standalone.localdomain systemd-logind[45629]: New session 171 of user root.
Oct 13 14:23:10 standalone.localdomain systemd[1]: Started Session 171 of User root.
Oct 13 14:23:10 standalone.localdomain sshd[222111]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:10 standalone.localdomain sudo[222123]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ironic_neutron_agent.service
Oct 13 14:23:10 standalone.localdomain sudo[222123]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:23:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:23:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:23:10 standalone.localdomain sudo[222123]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:10 standalone.localdomain ceph-mon[29756]: pgmap v1459: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:10 standalone.localdomain sshd[222122]: Received disconnect from 192.168.122.11 port 53428:11: disconnected by user
Oct 13 14:23:10 standalone.localdomain sshd[222122]: Disconnected from user root 192.168.122.11 port 53428
Oct 13 14:23:10 standalone.localdomain sshd[222111]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:10 standalone.localdomain systemd[1]: session-171.scope: Deactivated successfully.
Oct 13 14:23:10 standalone.localdomain systemd-logind[45629]: Session 171 logged out. Waiting for processes to exit.
Oct 13 14:23:10 standalone.localdomain systemd-logind[45629]: Removed session 171.
Oct 13 14:23:10 standalone.localdomain sshd[222150]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:11 standalone.localdomain sshd[222150]: Accepted publickey for root from 192.168.122.11 port 53434 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:11 standalone.localdomain podman[222137]: 2025-10-13 14:23:11.120305534 +0000 UTC m=+0.151584387 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, release=1, description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api_cron)
Oct 13 14:23:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:23:11 standalone.localdomain systemd-logind[45629]: New session 172 of user root.
Oct 13 14:23:11 standalone.localdomain systemd[1]: Started Session 172 of User root.
Oct 13 14:23:11 standalone.localdomain sshd[222150]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:11 standalone.localdomain podman[222137]: 2025-10-13 14:23:11.159966908 +0000 UTC m=+0.191245771 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, tcib_managed=true, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-container, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git)
Oct 13 14:23:11 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:23:11 standalone.localdomain podman[222139]: 2025-10-13 14:23:11.211360247 +0000 UTC m=+0.238824942 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, name=rhosp17/openstack-heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, container_name=heat_api, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 14:23:11 standalone.localdomain sudo[222184]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ironic_api.service
Oct 13 14:23:11 standalone.localdomain podman[222139]: 2025-10-13 14:23:11.242584979 +0000 UTC m=+0.270049704 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, container_name=heat_api, config_id=tripleo_step4, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-api, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=)
Oct 13 14:23:11 standalone.localdomain sudo[222184]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:11 standalone.localdomain sudo[222184]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:11 standalone.localdomain sshd[222178]: Received disconnect from 192.168.122.11 port 53434:11: disconnected by user
Oct 13 14:23:11 standalone.localdomain sshd[222178]: Disconnected from user root 192.168.122.11 port 53434
Oct 13 14:23:11 standalone.localdomain sshd[222150]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:11 standalone.localdomain systemd[1]: session-172.scope: Deactivated successfully.
Oct 13 14:23:11 standalone.localdomain podman[222172]: 2025-10-13 14:23:11.267154662 +0000 UTC m=+0.129152499 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-cron, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible)
Oct 13 14:23:11 standalone.localdomain systemd-logind[45629]: Session 172 logged out. Waiting for processes to exit.
Oct 13 14:23:11 standalone.localdomain systemd-logind[45629]: Removed session 172.
Oct 13 14:23:11 standalone.localdomain podman[222172]: 2025-10-13 14:23:11.27926715 +0000 UTC m=+0.141265017 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, container_name=logrotate_crond, distribution-scope=public, release=1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron)
Oct 13 14:23:11 standalone.localdomain sshd[222220]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:11 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:23:11 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:23:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3942391ca9f80237581af7e7b18037e35031ab92d48c420734c7963817de1c30-merged.mount: Deactivated successfully.
Oct 13 14:23:11 standalone.localdomain sshd[222220]: Accepted publickey for root from 192.168.122.11 port 53438 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:11 standalone.localdomain systemd-logind[45629]: New session 173 of user root.
Oct 13 14:23:11 standalone.localdomain systemd[1]: Started Session 173 of User root.
Oct 13 14:23:11 standalone.localdomain sshd[222220]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:11 standalone.localdomain sudo[222225]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ironic_inspector.service
Oct 13 14:23:11 standalone.localdomain sudo[222225]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:11 standalone.localdomain sudo[222225]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:11 standalone.localdomain sshd[222224]: Received disconnect from 192.168.122.11 port 53438:11: disconnected by user
Oct 13 14:23:11 standalone.localdomain sshd[222224]: Disconnected from user root 192.168.122.11 port 53438
Oct 13 14:23:11 standalone.localdomain sshd[222220]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:11 standalone.localdomain systemd[1]: session-173.scope: Deactivated successfully.
Oct 13 14:23:11 standalone.localdomain systemd-logind[45629]: Session 173 logged out. Waiting for processes to exit.
Oct 13 14:23:11 standalone.localdomain systemd-logind[45629]: Removed session 173.
Oct 13 14:23:11 standalone.localdomain sshd[222240]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:11 standalone.localdomain sshd[222240]: Accepted publickey for root from 192.168.122.11 port 53442 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:11 standalone.localdomain systemd-logind[45629]: New session 174 of user root.
Oct 13 14:23:11 standalone.localdomain systemd[1]: Started Session 174 of User root.
Oct 13 14:23:11 standalone.localdomain sshd[222240]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:11 standalone.localdomain sudo[222318]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ironic_conductor.service
Oct 13 14:23:11 standalone.localdomain sudo[222318]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:11 standalone.localdomain sudo[222318]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:11 standalone.localdomain sshd[222247]: Received disconnect from 192.168.122.11 port 53442:11: disconnected by user
Oct 13 14:23:11 standalone.localdomain sshd[222247]: Disconnected from user root 192.168.122.11 port 53442
Oct 13 14:23:11 standalone.localdomain sshd[222240]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:23:11 standalone.localdomain systemd[1]: session-174.scope: Deactivated successfully.
Oct 13 14:23:11 standalone.localdomain systemd-logind[45629]: Session 174 logged out. Waiting for processes to exit.
Oct 13 14:23:11 standalone.localdomain systemd-logind[45629]: Removed session 174.
Oct 13 14:23:11 standalone.localdomain sshd[222350]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1460: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:11 standalone.localdomain podman[222349]: 2025-10-13 14:23:11.940032909 +0000 UTC m=+0.076495772 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, container_name=heat_engine, description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:44:11, com.redhat.component=openstack-heat-engine-container, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., release=1, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1)
Oct 13 14:23:11 standalone.localdomain podman[222349]: 2025-10-13 14:23:11.986821904 +0000 UTC m=+0.123284747 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, version=17.1.9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, distribution-scope=public, com.redhat.component=openstack-heat-engine-container, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, build-date=2025-07-21T15:44:11)
Oct 13 14:23:11 standalone.localdomain sshd[222350]: Accepted publickey for root from 192.168.122.11 port 53446 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:12 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:23:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:23:12 standalone.localdomain systemd-logind[45629]: New session 175 of user root.
Oct 13 14:23:12 standalone.localdomain systemd[1]: Started Session 175 of User root.
Oct 13 14:23:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:23:12 standalone.localdomain sshd[222350]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:12 standalone.localdomain podman[222376]: 2025-10-13 14:23:12.125730756 +0000 UTC m=+0.095008497 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, tcib_managed=true, container_name=memcached, io.buildah.version=1.33.12, vcs-type=git, release=1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, architecture=x86_64, com.redhat.component=openstack-memcached-container, build-date=2025-07-21T12:58:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, config_id=tripleo_step1)
Oct 13 14:23:12 standalone.localdomain sudo[222398]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ironic_inspector_dnsmasq.service
Oct 13 14:23:12 standalone.localdomain sudo[222398]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:12 standalone.localdomain sudo[222398]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:12 standalone.localdomain sshd[222383]: Received disconnect from 192.168.122.11 port 53446:11: disconnected by user
Oct 13 14:23:12 standalone.localdomain sshd[222383]: Disconnected from user root 192.168.122.11 port 53446
Oct 13 14:23:12 standalone.localdomain sshd[222350]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:12 standalone.localdomain systemd[1]: session-175.scope: Deactivated successfully.
Oct 13 14:23:12 standalone.localdomain systemd-logind[45629]: Session 175 logged out. Waiting for processes to exit.
Oct 13 14:23:12 standalone.localdomain systemd-logind[45629]: Removed session 175.
Oct 13 14:23:12 standalone.localdomain podman[222376]: 2025-10-13 14:23:12.174911406 +0000 UTC m=+0.144189127 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, name=rhosp17/openstack-memcached, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, release=1, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, config_id=tripleo_step1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:23:12 standalone.localdomain sshd[222427]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:12 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:23:12 standalone.localdomain podman[222380]: 2025-10-13 14:23:12.180373866 +0000 UTC m=+0.136213699 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-07-21T14:49:55, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vcs-type=git, container_name=heat_api_cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, com.redhat.component=openstack-heat-api-cfn-container, version=17.1.9, name=rhosp17/openstack-heat-api-cfn, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:23:12 standalone.localdomain podman[222380]: 2025-10-13 14:23:12.261190821 +0000 UTC m=+0.217030674 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, tcib_managed=true, name=rhosp17/openstack-heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-07-21T14:49:55, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, container_name=heat_api_cfn, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:23:12 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:23:12 standalone.localdomain sshd[222427]: Accepted publickey for root from 192.168.122.11 port 53462 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:12 standalone.localdomain systemd-logind[45629]: New session 176 of user root.
Oct 13 14:23:12 standalone.localdomain systemd[1]: Started Session 176 of User root.
Oct 13 14:23:12 standalone.localdomain sshd[222427]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:12 standalone.localdomain sudo[222447]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ironic_pxe_http.service
Oct 13 14:23:12 standalone.localdomain sudo[222447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:12 standalone.localdomain sudo[222447]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:12 standalone.localdomain sshd[222446]: Received disconnect from 192.168.122.11 port 53462:11: disconnected by user
Oct 13 14:23:12 standalone.localdomain sshd[222446]: Disconnected from user root 192.168.122.11 port 53462
Oct 13 14:23:12 standalone.localdomain sshd[222427]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:12 standalone.localdomain systemd[1]: session-176.scope: Deactivated successfully.
Oct 13 14:23:12 standalone.localdomain systemd-logind[45629]: Session 176 logged out. Waiting for processes to exit.
Oct 13 14:23:12 standalone.localdomain systemd-logind[45629]: Removed session 176.
Oct 13 14:23:12 standalone.localdomain sshd[222464]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:12 standalone.localdomain sshd[222464]: Accepted publickey for root from 192.168.122.11 port 53474 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:12 standalone.localdomain systemd-logind[45629]: New session 177 of user root.
Oct 13 14:23:12 standalone.localdomain systemd[1]: Started Session 177 of User root.
Oct 13 14:23:12 standalone.localdomain sshd[222464]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:12 standalone.localdomain sudo[222507]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ironic_pxe_tftp.service
Oct 13 14:23:12 standalone.localdomain sudo[222507]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:12 standalone.localdomain sudo[222507]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:12 standalone.localdomain sshd[222506]: Received disconnect from 192.168.122.11 port 53474:11: disconnected by user
Oct 13 14:23:12 standalone.localdomain sshd[222506]: Disconnected from user root 192.168.122.11 port 53474
Oct 13 14:23:12 standalone.localdomain sshd[222464]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:12 standalone.localdomain systemd[1]: session-177.scope: Deactivated successfully.
Oct 13 14:23:12 standalone.localdomain systemd-logind[45629]: Session 177 logged out. Waiting for processes to exit.
Oct 13 14:23:12 standalone.localdomain systemd-logind[45629]: Removed session 177.
Oct 13 14:23:12 standalone.localdomain sshd[222522]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:12 standalone.localdomain sshd[222522]: Accepted publickey for root from 192.168.122.11 port 53486 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:12 standalone.localdomain systemd-logind[45629]: New session 178 of user root.
Oct 13 14:23:12 standalone.localdomain systemd[1]: Started Session 178 of User root.
Oct 13 14:23:12 standalone.localdomain sshd[222522]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:12 standalone.localdomain sshd[222533]: Received disconnect from 192.168.122.11 port 53486:11: disconnected by user
Oct 13 14:23:12 standalone.localdomain sshd[222533]: Disconnected from user root 192.168.122.11 port 53486
Oct 13 14:23:12 standalone.localdomain sshd[222522]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:12 standalone.localdomain systemd[1]: session-178.scope: Deactivated successfully.
Oct 13 14:23:12 standalone.localdomain systemd-logind[45629]: Session 178 logged out. Waiting for processes to exit.
Oct 13 14:23:12 standalone.localdomain systemd-logind[45629]: Removed session 178.
Oct 13 14:23:12 standalone.localdomain ceph-mon[29756]: pgmap v1460: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:12 standalone.localdomain sshd[222576]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:13 standalone.localdomain sshd[222576]: Accepted publickey for root from 192.168.122.11 port 53494 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:13 standalone.localdomain systemd-logind[45629]: New session 179 of user root.
Oct 13 14:23:13 standalone.localdomain systemd[1]: Started Session 179 of User root.
Oct 13 14:23:13 standalone.localdomain sshd[222576]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:13 standalone.localdomain sshd[222591]: Received disconnect from 192.168.122.11 port 53494:11: disconnected by user
Oct 13 14:23:13 standalone.localdomain sshd[222591]: Disconnected from user root 192.168.122.11 port 53494
Oct 13 14:23:13 standalone.localdomain sshd[222576]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:13 standalone.localdomain systemd-logind[45629]: Session 179 logged out. Waiting for processes to exit.
Oct 13 14:23:13 standalone.localdomain systemd[1]: session-179.scope: Deactivated successfully.
Oct 13 14:23:13 standalone.localdomain systemd-logind[45629]: Removed session 179.
Oct 13 14:23:13 standalone.localdomain sshd[222605]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:13 standalone.localdomain sshd[222605]: Accepted publickey for root from 192.168.122.11 port 53498 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:13 standalone.localdomain systemd-logind[45629]: New session 180 of user root.
Oct 13 14:23:13 standalone.localdomain systemd[1]: Started Session 180 of User root.
Oct 13 14:23:13 standalone.localdomain sshd[222605]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:13 standalone.localdomain sshd[222608]: Received disconnect from 192.168.122.11 port 53498:11: disconnected by user
Oct 13 14:23:13 standalone.localdomain sshd[222608]: Disconnected from user root 192.168.122.11 port 53498
Oct 13 14:23:13 standalone.localdomain sshd[222605]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:13 standalone.localdomain systemd-logind[45629]: Session 180 logged out. Waiting for processes to exit.
Oct 13 14:23:13 standalone.localdomain systemd[1]: session-180.scope: Deactivated successfully.
Oct 13 14:23:13 standalone.localdomain systemd-logind[45629]: Removed session 180.
Oct 13 14:23:13 standalone.localdomain sshd[222656]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:13 standalone.localdomain sshd[222656]: Accepted publickey for root from 192.168.122.11 port 53508 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:13 standalone.localdomain systemd-logind[45629]: New session 181 of user root.
Oct 13 14:23:13 standalone.localdomain systemd[1]: Started Session 181 of User root.
Oct 13 14:23:13 standalone.localdomain sshd[222656]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:13 standalone.localdomain sshd[222668]: Received disconnect from 192.168.122.11 port 53508:11: disconnected by user
Oct 13 14:23:13 standalone.localdomain sshd[222668]: Disconnected from user root 192.168.122.11 port 53508
Oct 13 14:23:13 standalone.localdomain sshd[222656]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:13 standalone.localdomain systemd[1]: session-181.scope: Deactivated successfully.
Oct 13 14:23:13 standalone.localdomain systemd-logind[45629]: Session 181 logged out. Waiting for processes to exit.
Oct 13 14:23:13 standalone.localdomain systemd-logind[45629]: Removed session 181.
Oct 13 14:23:13 standalone.localdomain sshd[222682]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:13 standalone.localdomain sshd[222682]: Accepted publickey for root from 192.168.122.11 port 53516 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:13 standalone.localdomain systemd-logind[45629]: New session 182 of user root.
Oct 13 14:23:13 standalone.localdomain systemd[1]: Started Session 182 of User root.
Oct 13 14:23:13 standalone.localdomain sshd[222682]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1461: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:13 standalone.localdomain sshd[222693]: Received disconnect from 192.168.122.11 port 53516:11: disconnected by user
Oct 13 14:23:13 standalone.localdomain sshd[222693]: Disconnected from user root 192.168.122.11 port 53516
Oct 13 14:23:13 standalone.localdomain sshd[222682]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:13 standalone.localdomain systemd[1]: session-182.scope: Deactivated successfully.
Oct 13 14:23:13 standalone.localdomain systemd-logind[45629]: Session 182 logged out. Waiting for processes to exit.
Oct 13 14:23:13 standalone.localdomain systemd-logind[45629]: Removed session 182.
Oct 13 14:23:13 standalone.localdomain sshd[222707]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:14 standalone.localdomain sshd[222707]: Accepted publickey for root from 192.168.122.11 port 53524 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:14 standalone.localdomain systemd-logind[45629]: New session 183 of user root.
Oct 13 14:23:14 standalone.localdomain systemd[1]: Started Session 183 of User root.
Oct 13 14:23:14 standalone.localdomain sshd[222707]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:14 standalone.localdomain sshd[222710]: Received disconnect from 192.168.122.11 port 53524:11: disconnected by user
Oct 13 14:23:14 standalone.localdomain sshd[222710]: Disconnected from user root 192.168.122.11 port 53524
Oct 13 14:23:14 standalone.localdomain sshd[222707]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:14 standalone.localdomain systemd[1]: session-183.scope: Deactivated successfully.
Oct 13 14:23:14 standalone.localdomain systemd-logind[45629]: Session 183 logged out. Waiting for processes to exit.
Oct 13 14:23:14 standalone.localdomain systemd-logind[45629]: Removed session 183.
Oct 13 14:23:14 standalone.localdomain sshd[222724]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:14 standalone.localdomain sshd[222724]: Accepted publickey for root from 192.168.122.11 port 53532 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:14 standalone.localdomain systemd-logind[45629]: New session 184 of user root.
Oct 13 14:23:14 standalone.localdomain systemd[1]: Started Session 184 of User root.
Oct 13 14:23:14 standalone.localdomain sshd[222724]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:14 standalone.localdomain sshd[222727]: Received disconnect from 192.168.122.11 port 53532:11: disconnected by user
Oct 13 14:23:14 standalone.localdomain sshd[222727]: Disconnected from user root 192.168.122.11 port 53532
Oct 13 14:23:14 standalone.localdomain sshd[222724]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:14 standalone.localdomain systemd[1]: session-184.scope: Deactivated successfully.
Oct 13 14:23:14 standalone.localdomain systemd-logind[45629]: Session 184 logged out. Waiting for processes to exit.
Oct 13 14:23:14 standalone.localdomain systemd-logind[45629]: Removed session 184.
Oct 13 14:23:14 standalone.localdomain sshd[222741]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:14 standalone.localdomain sshd[222741]: Accepted publickey for root from 192.168.122.11 port 53538 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:14 standalone.localdomain systemd-logind[45629]: New session 185 of user root.
Oct 13 14:23:14 standalone.localdomain systemd[1]: Started Session 185 of User root.
Oct 13 14:23:14 standalone.localdomain sshd[222741]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:14 standalone.localdomain sshd[222797]: Received disconnect from 192.168.122.11 port 53538:11: disconnected by user
Oct 13 14:23:14 standalone.localdomain sshd[222797]: Disconnected from user root 192.168.122.11 port 53538
Oct 13 14:23:14 standalone.localdomain sshd[222741]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:14 standalone.localdomain systemd[1]: session-185.scope: Deactivated successfully.
Oct 13 14:23:14 standalone.localdomain systemd-logind[45629]: Session 185 logged out. Waiting for processes to exit.
Oct 13 14:23:14 standalone.localdomain systemd-logind[45629]: Removed session 185.
Oct 13 14:23:14 standalone.localdomain sshd[222811]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:14 standalone.localdomain sshd[222811]: Accepted publickey for root from 192.168.122.11 port 53540 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:14 standalone.localdomain systemd-logind[45629]: New session 186 of user root.
Oct 13 14:23:14 standalone.localdomain systemd[1]: Started Session 186 of User root.
Oct 13 14:23:14 standalone.localdomain sshd[222811]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:14 standalone.localdomain ceph-mon[29756]: pgmap v1461: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:15 standalone.localdomain sshd[222822]: Received disconnect from 192.168.122.11 port 53540:11: disconnected by user
Oct 13 14:23:15 standalone.localdomain sshd[222822]: Disconnected from user root 192.168.122.11 port 53540
Oct 13 14:23:15 standalone.localdomain sshd[222811]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:15 standalone.localdomain systemd[1]: session-186.scope: Deactivated successfully.
Oct 13 14:23:15 standalone.localdomain systemd-logind[45629]: Session 186 logged out. Waiting for processes to exit.
Oct 13 14:23:15 standalone.localdomain systemd-logind[45629]: Removed session 186.
Oct 13 14:23:15 standalone.localdomain sshd[222836]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:15 standalone.localdomain sshd[222836]: Accepted publickey for root from 192.168.122.11 port 53554 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:15 standalone.localdomain systemd-logind[45629]: New session 187 of user root.
Oct 13 14:23:15 standalone.localdomain systemd[1]: Started Session 187 of User root.
Oct 13 14:23:15 standalone.localdomain sshd[222836]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:15 standalone.localdomain sshd[222839]: Received disconnect from 192.168.122.11 port 53554:11: disconnected by user
Oct 13 14:23:15 standalone.localdomain sshd[222839]: Disconnected from user root 192.168.122.11 port 53554
Oct 13 14:23:15 standalone.localdomain sshd[222836]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:15 standalone.localdomain systemd[1]: session-187.scope: Deactivated successfully.
Oct 13 14:23:15 standalone.localdomain systemd-logind[45629]: Session 187 logged out. Waiting for processes to exit.
Oct 13 14:23:15 standalone.localdomain systemd-logind[45629]: Removed session 187.
Oct 13 14:23:15 standalone.localdomain sshd[222853]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:15 standalone.localdomain sshd[222853]: Accepted publickey for root from 192.168.122.11 port 53562 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:15 standalone.localdomain systemd-logind[45629]: New session 188 of user root.
Oct 13 14:23:15 standalone.localdomain systemd[1]: Started Session 188 of User root.
Oct 13 14:23:15 standalone.localdomain sshd[222853]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:15 standalone.localdomain sshd[222856]: Received disconnect from 192.168.122.11 port 53562:11: disconnected by user
Oct 13 14:23:15 standalone.localdomain sshd[222856]: Disconnected from user root 192.168.122.11 port 53562
Oct 13 14:23:15 standalone.localdomain sshd[222853]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:15 standalone.localdomain systemd[1]: session-188.scope: Deactivated successfully.
Oct 13 14:23:15 standalone.localdomain systemd-logind[45629]: Session 188 logged out. Waiting for processes to exit.
Oct 13 14:23:15 standalone.localdomain systemd-logind[45629]: Removed session 188.
Oct 13 14:23:15 standalone.localdomain sshd[222870]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:15 standalone.localdomain sshd[222870]: Accepted publickey for root from 192.168.122.11 port 53568 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:15 standalone.localdomain systemd-logind[45629]: New session 189 of user root.
Oct 13 14:23:15 standalone.localdomain systemd[1]: Started Session 189 of User root.
Oct 13 14:23:15 standalone.localdomain sshd[222870]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:15 standalone.localdomain sshd[222873]: Received disconnect from 192.168.122.11 port 53568:11: disconnected by user
Oct 13 14:23:15 standalone.localdomain sshd[222873]: Disconnected from user root 192.168.122.11 port 53568
Oct 13 14:23:15 standalone.localdomain sshd[222870]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:15 standalone.localdomain systemd[1]: session-189.scope: Deactivated successfully.
Oct 13 14:23:15 standalone.localdomain systemd-logind[45629]: Session 189 logged out. Waiting for processes to exit.
Oct 13 14:23:15 standalone.localdomain systemd-logind[45629]: Removed session 189.
Oct 13 14:23:15 standalone.localdomain sshd[222887]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:15 standalone.localdomain sshd[222887]: Accepted publickey for root from 192.168.122.11 port 53578 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:15 standalone.localdomain systemd-logind[45629]: New session 190 of user root.
Oct 13 14:23:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1462: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:15 standalone.localdomain systemd[1]: Started Session 190 of User root.
Oct 13 14:23:15 standalone.localdomain haproxy[70940]: Server nova_novncproxy/standalone.internalapi.localdomain is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 14:23:15 standalone.localdomain haproxy[70940]: proxy nova_novncproxy has no server available!
Oct 13 14:23:15 standalone.localdomain sshd[222887]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:23:16 standalone.localdomain sshd[222897]: Received disconnect from 192.168.122.11 port 53578:11: disconnected by user
Oct 13 14:23:16 standalone.localdomain sshd[222897]: Disconnected from user root 192.168.122.11 port 53578
Oct 13 14:23:16 standalone.localdomain sshd[222887]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:16 standalone.localdomain systemd[1]: session-190.scope: Deactivated successfully.
Oct 13 14:23:16 standalone.localdomain systemd-logind[45629]: Session 190 logged out. Waiting for processes to exit.
Oct 13 14:23:16 standalone.localdomain systemd-logind[45629]: Removed session 190.
Oct 13 14:23:16 standalone.localdomain sshd[222912]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:16 standalone.localdomain sshd[222912]: Accepted publickey for root from 192.168.122.11 port 53590 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:16 standalone.localdomain systemd-logind[45629]: New session 191 of user root.
Oct 13 14:23:16 standalone.localdomain systemd[1]: Started Session 191 of User root.
Oct 13 14:23:16 standalone.localdomain sshd[222912]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:16 standalone.localdomain sshd[222915]: Received disconnect from 192.168.122.11 port 53590:11: disconnected by user
Oct 13 14:23:16 standalone.localdomain sshd[222915]: Disconnected from user root 192.168.122.11 port 53590
Oct 13 14:23:16 standalone.localdomain sshd[222912]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:16 standalone.localdomain systemd[1]: session-191.scope: Deactivated successfully.
Oct 13 14:23:16 standalone.localdomain systemd-logind[45629]: Session 191 logged out. Waiting for processes to exit.
Oct 13 14:23:16 standalone.localdomain systemd-logind[45629]: Removed session 191.
Oct 13 14:23:16 standalone.localdomain sshd[222929]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:16 standalone.localdomain sshd[222929]: Accepted publickey for root from 192.168.122.11 port 53602 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:16 standalone.localdomain systemd-logind[45629]: New session 192 of user root.
Oct 13 14:23:16 standalone.localdomain systemd[1]: Started Session 192 of User root.
Oct 13 14:23:16 standalone.localdomain sshd[222929]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:16 standalone.localdomain sshd[222932]: Received disconnect from 192.168.122.11 port 53602:11: disconnected by user
Oct 13 14:23:16 standalone.localdomain sshd[222932]: Disconnected from user root 192.168.122.11 port 53602
Oct 13 14:23:16 standalone.localdomain sshd[222929]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:16 standalone.localdomain systemd[1]: session-192.scope: Deactivated successfully.
Oct 13 14:23:16 standalone.localdomain systemd-logind[45629]: Session 192 logged out. Waiting for processes to exit.
Oct 13 14:23:16 standalone.localdomain systemd-logind[45629]: Removed session 192.
Oct 13 14:23:16 standalone.localdomain sshd[222946]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:16 standalone.localdomain sshd[222946]: Accepted publickey for root from 192.168.122.11 port 53604 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:16 standalone.localdomain systemd-logind[45629]: New session 193 of user root.
Oct 13 14:23:16 standalone.localdomain systemd[1]: Started Session 193 of User root.
Oct 13 14:23:16 standalone.localdomain sshd[222946]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:23:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:23:16 standalone.localdomain sshd[222949]: Received disconnect from 192.168.122.11 port 53604:11: disconnected by user
Oct 13 14:23:16 standalone.localdomain sshd[222949]: Disconnected from user root 192.168.122.11 port 53604
Oct 13 14:23:16 standalone.localdomain sshd[222946]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:16 standalone.localdomain systemd[1]: session-193.scope: Deactivated successfully.
Oct 13 14:23:16 standalone.localdomain systemd-logind[45629]: Session 193 logged out. Waiting for processes to exit.
Oct 13 14:23:16 standalone.localdomain systemd-logind[45629]: Removed session 193.
Oct 13 14:23:16 standalone.localdomain sshd[222981]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:16 standalone.localdomain podman[222964]: 2025-10-13 14:23:16.843657327 +0000 UTC m=+0.114065950 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9)
Oct 13 14:23:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:23:16 standalone.localdomain sshd[222981]: Accepted publickey for root from 192.168.122.11 port 53618 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:16 standalone.localdomain systemd-logind[45629]: New session 194 of user root.
Oct 13 14:23:16 standalone.localdomain systemd[1]: Started Session 194 of User root.
Oct 13 14:23:16 standalone.localdomain sshd[222981]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:16 standalone.localdomain podman[222961]: 2025-10-13 14:23:16.910680832 +0000 UTC m=+0.180585990 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, release=1, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=)
Oct 13 14:23:16 standalone.localdomain podman[222964]: 2025-10-13 14:23:16.935847435 +0000 UTC m=+0.206256028 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64)
Oct 13 14:23:16 standalone.localdomain podman[222961]: 2025-10-13 14:23:16.955846787 +0000 UTC m=+0.225751955 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, tcib_managed=true)
Oct 13 14:23:16 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:23:16 standalone.localdomain sshd[223010]: Received disconnect from 192.168.122.11 port 53618:11: disconnected by user
Oct 13 14:23:16 standalone.localdomain sshd[223010]: Disconnected from user root 192.168.122.11 port 53618
Oct 13 14:23:16 standalone.localdomain sshd[222981]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:16 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:23:16 standalone.localdomain systemd[1]: session-194.scope: Deactivated successfully.
Oct 13 14:23:16 standalone.localdomain systemd-logind[45629]: Session 194 logged out. Waiting for processes to exit.
Oct 13 14:23:16 standalone.localdomain systemd-logind[45629]: Removed session 194.
Oct 13 14:23:17 standalone.localdomain sshd[223043]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:17 standalone.localdomain ceph-mon[29756]: pgmap v1462: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:17 standalone.localdomain podman[222996]: 2025-10-13 14:23:17.036212948 +0000 UTC m=+0.172741696 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, release=1, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=glance_api_cron, description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:23:17 standalone.localdomain podman[222996]: 2025-10-13 14:23:17.072893079 +0000 UTC m=+0.209421837 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, tcib_managed=true, container_name=glance_api_cron, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, architecture=x86_64, distribution-scope=public)
Oct 13 14:23:17 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:23:17 standalone.localdomain sshd[223043]: Accepted publickey for root from 192.168.122.11 port 53624 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:17 standalone.localdomain systemd-logind[45629]: New session 195 of user root.
Oct 13 14:23:17 standalone.localdomain systemd[1]: Started Session 195 of User root.
Oct 13 14:23:17 standalone.localdomain sshd[223043]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:17 standalone.localdomain sshd[223056]: Received disconnect from 192.168.122.11 port 53624:11: disconnected by user
Oct 13 14:23:17 standalone.localdomain sshd[223056]: Disconnected from user root 192.168.122.11 port 53624
Oct 13 14:23:17 standalone.localdomain sshd[223043]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:17 standalone.localdomain systemd-logind[45629]: Session 195 logged out. Waiting for processes to exit.
Oct 13 14:23:17 standalone.localdomain systemd[1]: session-195.scope: Deactivated successfully.
Oct 13 14:23:17 standalone.localdomain systemd-logind[45629]: Removed session 195.
Oct 13 14:23:17 standalone.localdomain sshd[223070]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:17 standalone.localdomain sshd[223070]: Accepted publickey for root from 192.168.122.11 port 53640 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:23:17 standalone.localdomain systemd-logind[45629]: New session 196 of user root.
Oct 13 14:23:17 standalone.localdomain systemd[1]: Started Session 196 of User root.
Oct 13 14:23:17 standalone.localdomain sshd[223070]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:17 standalone.localdomain podman[223073]: 2025-10-13 14:23:17.441450526 +0000 UTC m=+0.066851551 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:23:17 standalone.localdomain sshd[223079]: Received disconnect from 192.168.122.11 port 53640:11: disconnected by user
Oct 13 14:23:17 standalone.localdomain sshd[223079]: Disconnected from user root 192.168.122.11 port 53640
Oct 13 14:23:17 standalone.localdomain sshd[223070]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:23:17 standalone.localdomain systemd[1]: session-196.scope: Deactivated successfully.
Oct 13 14:23:17 standalone.localdomain systemd-logind[45629]: Session 196 logged out. Waiting for processes to exit.
Oct 13 14:23:17 standalone.localdomain systemd-logind[45629]: Removed session 196.
Oct 13 14:23:17 standalone.localdomain sshd[223115]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:17 standalone.localdomain podman[223109]: 2025-10-13 14:23:17.55436983 +0000 UTC m=+0.075457189 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9)
Oct 13 14:23:17 standalone.localdomain sshd[223115]: Accepted publickey for root from 192.168.122.11 port 53648 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:23:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:23:17 standalone.localdomain systemd[1]: Started Session 197 of User root.
Oct 13 14:23:17 standalone.localdomain systemd-logind[45629]: New session 197 of user root.
Oct 13 14:23:17 standalone.localdomain sshd[223115]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:17 standalone.localdomain podman[223073]: 2025-10-13 14:23:17.62700305 +0000 UTC m=+0.252404075 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, version=17.1.9, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, release=1, architecture=x86_64, distribution-scope=public, container_name=swift_account_server, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 14:23:17 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:23:17 standalone.localdomain podman[223139]: 2025-10-13 14:23:17.678720668 +0000 UTC m=+0.062139614 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, name=rhosp17/openstack-swift-object, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, release=1, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:23:17 standalone.localdomain sshd[223155]: Received disconnect from 192.168.122.11 port 53648:11: disconnected by user
Oct 13 14:23:17 standalone.localdomain sshd[223155]: Disconnected from user root 192.168.122.11 port 53648
Oct 13 14:23:17 standalone.localdomain sshd[223115]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:17 standalone.localdomain systemd[1]: session-197.scope: Deactivated successfully.
Oct 13 14:23:17 standalone.localdomain systemd-logind[45629]: Session 197 logged out. Waiting for processes to exit.
Oct 13 14:23:17 standalone.localdomain systemd-logind[45629]: Removed session 197.
Oct 13 14:23:17 standalone.localdomain sshd[223196]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:17 standalone.localdomain podman[223141]: 2025-10-13 14:23:17.731685496 +0000 UTC m=+0.114160473 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-proxy-server, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, container_name=swift_proxy, release=1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, version=17.1.9, io.openshift.expose-services=, com.redhat.component=openstack-swift-proxy-server-container, build-date=2025-07-21T14:48:37)
Oct 13 14:23:17 standalone.localdomain podman[223109]: 2025-10-13 14:23:17.75011737 +0000 UTC m=+0.271204729 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, vcs-type=git, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc.)
Oct 13 14:23:17 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:23:17 standalone.localdomain sshd[223196]: Accepted publickey for root from 192.168.122.11 port 53654 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:17 standalone.localdomain systemd-logind[45629]: New session 198 of user root.
Oct 13 14:23:17 standalone.localdomain systemd[1]: Started Session 198 of User root.
Oct 13 14:23:17 standalone.localdomain sshd[223196]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:17 standalone.localdomain podman[223139]: 2025-10-13 14:23:17.883891012 +0000 UTC m=+0.267309948 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T14:56:28, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, name=rhosp17/openstack-swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, container_name=swift_object_server, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, release=1, vcs-type=git)
Oct 13 14:23:17 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:23:17 standalone.localdomain runuser[223223]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:23:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1463: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:17 standalone.localdomain podman[223141]: 2025-10-13 14:23:17.930061619 +0000 UTC m=+0.312536656 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, name=rhosp17/openstack-swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-swift-proxy-server-container, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, container_name=swift_proxy)
Oct 13 14:23:17 standalone.localdomain sshd[223216]: Received disconnect from 192.168.122.11 port 53654:11: disconnected by user
Oct 13 14:23:17 standalone.localdomain sshd[223216]: Disconnected from user root 192.168.122.11 port 53654
Oct 13 14:23:17 standalone.localdomain sshd[223196]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:23:17 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:23:17 standalone.localdomain systemd[1]: session-198.scope: Deactivated successfully.
Oct 13 14:23:17 standalone.localdomain systemd-logind[45629]: Session 198 logged out. Waiting for processes to exit.
Oct 13 14:23:17 standalone.localdomain systemd-logind[45629]: Removed session 198.
Oct 13 14:23:17 standalone.localdomain sshd[223292]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:18 standalone.localdomain podman[223258]: 2025-10-13 14:23:18.030971778 +0000 UTC m=+0.069939787 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, name=rhosp17/openstack-glance-api, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:23:18 standalone.localdomain sshd[223292]: Accepted publickey for root from 192.168.122.11 port 53660 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:23:18 standalone.localdomain systemd-logind[45629]: New session 199 of user root.
Oct 13 14:23:18 standalone.localdomain systemd[1]: Started Session 199 of User root.
Oct 13 14:23:18 standalone.localdomain sshd[223292]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:18 standalone.localdomain podman[223313]: 2025-10-13 14:23:18.154173752 +0000 UTC m=+0.050035658 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, release=1, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target)
Oct 13 14:23:18 standalone.localdomain sshd[223319]: Received disconnect from 192.168.122.11 port 53660:11: disconnected by user
Oct 13 14:23:18 standalone.localdomain sshd[223319]: Disconnected from user root 192.168.122.11 port 53660
Oct 13 14:23:18 standalone.localdomain sshd[223292]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:18 standalone.localdomain systemd[1]: session-199.scope: Deactivated successfully.
Oct 13 14:23:18 standalone.localdomain systemd-logind[45629]: Session 199 logged out. Waiting for processes to exit.
Oct 13 14:23:18 standalone.localdomain systemd-logind[45629]: Removed session 199.
Oct 13 14:23:18 standalone.localdomain sshd[223350]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:18 standalone.localdomain podman[223258]: 2025-10-13 14:23:18.244873683 +0000 UTC m=+0.283841692 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal)
Oct 13 14:23:18 standalone.localdomain sshd[223350]: Accepted publickey for root from 192.168.122.11 port 53668 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:18 standalone.localdomain systemd-logind[45629]: New session 200 of user root.
Oct 13 14:23:18 standalone.localdomain systemd[1]: Started Session 200 of User root.
Oct 13 14:23:18 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:23:18 standalone.localdomain sshd[223350]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:18 standalone.localdomain sshd[223355]: Received disconnect from 192.168.122.11 port 53668:11: disconnected by user
Oct 13 14:23:18 standalone.localdomain sshd[223355]: Disconnected from user root 192.168.122.11 port 53668
Oct 13 14:23:18 standalone.localdomain sshd[223350]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:18 standalone.localdomain systemd[1]: session-200.scope: Deactivated successfully.
Oct 13 14:23:18 standalone.localdomain systemd-logind[45629]: Session 200 logged out. Waiting for processes to exit.
Oct 13 14:23:18 standalone.localdomain systemd-logind[45629]: Removed session 200.
Oct 13 14:23:18 standalone.localdomain sshd[223369]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:18 standalone.localdomain podman[223313]: 2025-10-13 14:23:18.482118885 +0000 UTC m=+0.377980791 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, distribution-scope=public, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:23:18 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:23:18 standalone.localdomain runuser[223223]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:23:18 standalone.localdomain sshd[223369]: Accepted publickey for root from 192.168.122.11 port 53674 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:18 standalone.localdomain systemd-logind[45629]: New session 201 of user root.
Oct 13 14:23:18 standalone.localdomain systemd[1]: Started Session 201 of User root.
Oct 13 14:23:18 standalone.localdomain sshd[223369]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:18 standalone.localdomain sshd[223388]: Received disconnect from 192.168.122.11 port 53674:11: disconnected by user
Oct 13 14:23:18 standalone.localdomain sshd[223388]: Disconnected from user root 192.168.122.11 port 53674
Oct 13 14:23:18 standalone.localdomain sshd[223369]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:18 standalone.localdomain systemd[1]: session-201.scope: Deactivated successfully.
Oct 13 14:23:18 standalone.localdomain systemd-logind[45629]: Session 201 logged out. Waiting for processes to exit.
Oct 13 14:23:18 standalone.localdomain systemd-logind[45629]: Removed session 201.
Oct 13 14:23:18 standalone.localdomain sshd[223406]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:18 standalone.localdomain runuser[223402]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:23:18 standalone.localdomain sshd[223406]: Accepted publickey for root from 192.168.122.11 port 53688 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:18 standalone.localdomain systemd-logind[45629]: New session 202 of user root.
Oct 13 14:23:18 standalone.localdomain systemd[1]: Started Session 202 of User root.
Oct 13 14:23:18 standalone.localdomain sshd[223406]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:18 standalone.localdomain sshd[223450]: Received disconnect from 192.168.122.11 port 53688:11: disconnected by user
Oct 13 14:23:18 standalone.localdomain sshd[223450]: Disconnected from user root 192.168.122.11 port 53688
Oct 13 14:23:18 standalone.localdomain sshd[223406]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:18 standalone.localdomain systemd-logind[45629]: Session 202 logged out. Waiting for processes to exit.
Oct 13 14:23:18 standalone.localdomain systemd[1]: session-202.scope: Deactivated successfully.
Oct 13 14:23:18 standalone.localdomain systemd-logind[45629]: Removed session 202.
Oct 13 14:23:18 standalone.localdomain sshd[223464]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:19 standalone.localdomain ceph-mon[29756]: pgmap v1463: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:19 standalone.localdomain sshd[223464]: Accepted publickey for root from 192.168.122.11 port 53696 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:19 standalone.localdomain systemd-logind[45629]: New session 203 of user root.
Oct 13 14:23:19 standalone.localdomain systemd[1]: Started Session 203 of User root.
Oct 13 14:23:19 standalone.localdomain sshd[223464]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:19 standalone.localdomain sshd[223475]: Received disconnect from 192.168.122.11 port 53696:11: disconnected by user
Oct 13 14:23:19 standalone.localdomain sshd[223475]: Disconnected from user root 192.168.122.11 port 53696
Oct 13 14:23:19 standalone.localdomain sshd[223464]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:19 standalone.localdomain systemd[1]: session-203.scope: Deactivated successfully.
Oct 13 14:23:19 standalone.localdomain systemd-logind[45629]: Session 203 logged out. Waiting for processes to exit.
Oct 13 14:23:19 standalone.localdomain systemd-logind[45629]: Removed session 203.
Oct 13 14:23:19 standalone.localdomain sshd[223489]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:19 standalone.localdomain sshd[223489]: Accepted publickey for root from 192.168.122.11 port 53698 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:19 standalone.localdomain systemd-logind[45629]: New session 204 of user root.
Oct 13 14:23:19 standalone.localdomain systemd[1]: Started Session 204 of User root.
Oct 13 14:23:19 standalone.localdomain sshd[223489]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:19 standalone.localdomain runuser[223402]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:23:19 standalone.localdomain runuser[223502]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:23:19 standalone.localdomain sshd[223499]: Received disconnect from 192.168.122.11 port 53698:11: disconnected by user
Oct 13 14:23:19 standalone.localdomain sshd[223499]: Disconnected from user root 192.168.122.11 port 53698
Oct 13 14:23:19 standalone.localdomain sshd[223489]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:19 standalone.localdomain systemd[1]: session-204.scope: Deactivated successfully.
Oct 13 14:23:19 standalone.localdomain systemd-logind[45629]: Session 204 logged out. Waiting for processes to exit.
Oct 13 14:23:19 standalone.localdomain systemd-logind[45629]: Removed session 204.
Oct 13 14:23:19 standalone.localdomain sshd[223534]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:19 standalone.localdomain sshd[223534]: Accepted publickey for root from 192.168.122.11 port 53700 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:19 standalone.localdomain systemd-logind[45629]: New session 205 of user root.
Oct 13 14:23:19 standalone.localdomain systemd[1]: Started Session 205 of User root.
Oct 13 14:23:19 standalone.localdomain sshd[223534]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:19 standalone.localdomain sshd[223565]: Received disconnect from 192.168.122.11 port 53700:11: disconnected by user
Oct 13 14:23:19 standalone.localdomain sshd[223565]: Disconnected from user root 192.168.122.11 port 53700
Oct 13 14:23:19 standalone.localdomain sshd[223534]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:19 standalone.localdomain systemd-logind[45629]: Session 205 logged out. Waiting for processes to exit.
Oct 13 14:23:19 standalone.localdomain systemd[1]: session-205.scope: Deactivated successfully.
Oct 13 14:23:19 standalone.localdomain systemd-logind[45629]: Removed session 205.
Oct 13 14:23:19 standalone.localdomain sshd[223579]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:19 standalone.localdomain sshd[223579]: Accepted publickey for root from 192.168.122.11 port 40834 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:19 standalone.localdomain systemd-logind[45629]: New session 206 of user root.
Oct 13 14:23:19 standalone.localdomain systemd[1]: Started Session 206 of User root.
Oct 13 14:23:19 standalone.localdomain sshd[223579]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1464: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:19 standalone.localdomain sshd[223582]: Received disconnect from 192.168.122.11 port 40834:11: disconnected by user
Oct 13 14:23:19 standalone.localdomain sshd[223582]: Disconnected from user root 192.168.122.11 port 40834
Oct 13 14:23:19 standalone.localdomain sshd[223579]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:19 standalone.localdomain systemd[1]: session-206.scope: Deactivated successfully.
Oct 13 14:23:19 standalone.localdomain systemd-logind[45629]: Session 206 logged out. Waiting for processes to exit.
Oct 13 14:23:19 standalone.localdomain systemd-logind[45629]: Removed session 206.
Oct 13 14:23:19 standalone.localdomain sshd[223600]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:20 standalone.localdomain sshd[223600]: Accepted publickey for root from 192.168.122.11 port 40836 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:20 standalone.localdomain systemd-logind[45629]: New session 207 of user root.
Oct 13 14:23:20 standalone.localdomain systemd[1]: Started Session 207 of User root.
Oct 13 14:23:20 standalone.localdomain sshd[223600]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:20 standalone.localdomain runuser[223502]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:23:20 standalone.localdomain sshd[223616]: Received disconnect from 192.168.122.11 port 40836:11: disconnected by user
Oct 13 14:23:20 standalone.localdomain sshd[223616]: Disconnected from user root 192.168.122.11 port 40836
Oct 13 14:23:20 standalone.localdomain sshd[223600]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:20 standalone.localdomain systemd[1]: session-207.scope: Deactivated successfully.
Oct 13 14:23:20 standalone.localdomain systemd-logind[45629]: Session 207 logged out. Waiting for processes to exit.
Oct 13 14:23:20 standalone.localdomain systemd-logind[45629]: Removed session 207.
Oct 13 14:23:20 standalone.localdomain sshd[223631]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:20 standalone.localdomain sshd[223631]: Accepted publickey for root from 192.168.122.11 port 40838 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:20 standalone.localdomain systemd-logind[45629]: New session 208 of user root.
Oct 13 14:23:20 standalone.localdomain systemd[1]: Started Session 208 of User root.
Oct 13 14:23:20 standalone.localdomain sshd[223631]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:20 standalone.localdomain sshd[223634]: Received disconnect from 192.168.122.11 port 40838:11: disconnected by user
Oct 13 14:23:20 standalone.localdomain sshd[223634]: Disconnected from user root 192.168.122.11 port 40838
Oct 13 14:23:20 standalone.localdomain sshd[223631]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:20 standalone.localdomain systemd[1]: session-208.scope: Deactivated successfully.
Oct 13 14:23:20 standalone.localdomain systemd-logind[45629]: Session 208 logged out. Waiting for processes to exit.
Oct 13 14:23:20 standalone.localdomain systemd-logind[45629]: Removed session 208.
Oct 13 14:23:20 standalone.localdomain sshd[223649]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:20 standalone.localdomain sshd[223649]: Accepted publickey for root from 192.168.122.11 port 40842 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:20 standalone.localdomain systemd-logind[45629]: New session 209 of user root.
Oct 13 14:23:20 standalone.localdomain systemd[1]: Started Session 209 of User root.
Oct 13 14:23:20 standalone.localdomain sshd[223649]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:20 standalone.localdomain sshd[223652]: Received disconnect from 192.168.122.11 port 40842:11: disconnected by user
Oct 13 14:23:20 standalone.localdomain sshd[223652]: Disconnected from user root 192.168.122.11 port 40842
Oct 13 14:23:20 standalone.localdomain sshd[223649]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:20 standalone.localdomain systemd[1]: session-209.scope: Deactivated successfully.
Oct 13 14:23:20 standalone.localdomain systemd-logind[45629]: Session 209 logged out. Waiting for processes to exit.
Oct 13 14:23:20 standalone.localdomain systemd-logind[45629]: Removed session 209.
Oct 13 14:23:20 standalone.localdomain sshd[223666]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:20 standalone.localdomain sshd[223666]: Accepted publickey for root from 192.168.122.11 port 40858 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:20 standalone.localdomain systemd-logind[45629]: New session 210 of user root.
Oct 13 14:23:20 standalone.localdomain systemd[1]: Started Session 210 of User root.
Oct 13 14:23:20 standalone.localdomain sshd[223666]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:23:20 standalone.localdomain sshd[223669]: Received disconnect from 192.168.122.11 port 40858:11: disconnected by user
Oct 13 14:23:20 standalone.localdomain sshd[223669]: Disconnected from user root 192.168.122.11 port 40858
Oct 13 14:23:20 standalone.localdomain sshd[223666]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:20 standalone.localdomain systemd[1]: session-210.scope: Deactivated successfully.
Oct 13 14:23:20 standalone.localdomain systemd-logind[45629]: Session 210 logged out. Waiting for processes to exit.
Oct 13 14:23:20 standalone.localdomain systemd-logind[45629]: Removed session 210.
Oct 13 14:23:21 standalone.localdomain sshd[223683]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:21 standalone.localdomain ceph-mon[29756]: pgmap v1464: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:21 standalone.localdomain sshd[223683]: Accepted publickey for root from 192.168.122.11 port 40874 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:21 standalone.localdomain systemd-logind[45629]: New session 211 of user root.
Oct 13 14:23:21 standalone.localdomain systemd[1]: Started Session 211 of User root.
Oct 13 14:23:21 standalone.localdomain sshd[223683]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:21 standalone.localdomain sshd[223694]: Received disconnect from 192.168.122.11 port 40874:11: disconnected by user
Oct 13 14:23:21 standalone.localdomain sshd[223694]: Disconnected from user root 192.168.122.11 port 40874
Oct 13 14:23:21 standalone.localdomain sshd[223683]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:21 standalone.localdomain systemd[1]: session-211.scope: Deactivated successfully.
Oct 13 14:23:21 standalone.localdomain systemd-logind[45629]: Session 211 logged out. Waiting for processes to exit.
Oct 13 14:23:21 standalone.localdomain systemd-logind[45629]: Removed session 211.
Oct 13 14:23:21 standalone.localdomain sshd[223708]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:21 standalone.localdomain sshd[223708]: Accepted publickey for root from 192.168.122.11 port 40878 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:21 standalone.localdomain systemd-logind[45629]: New session 212 of user root.
Oct 13 14:23:21 standalone.localdomain systemd[1]: Started Session 212 of User root.
Oct 13 14:23:21 standalone.localdomain sshd[223708]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:21 standalone.localdomain sshd[223711]: Received disconnect from 192.168.122.11 port 40878:11: disconnected by user
Oct 13 14:23:21 standalone.localdomain sshd[223711]: Disconnected from user root 192.168.122.11 port 40878
Oct 13 14:23:21 standalone.localdomain sshd[223708]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:21 standalone.localdomain systemd[1]: session-212.scope: Deactivated successfully.
Oct 13 14:23:21 standalone.localdomain systemd-logind[45629]: Session 212 logged out. Waiting for processes to exit.
Oct 13 14:23:21 standalone.localdomain systemd-logind[45629]: Removed session 212.
Oct 13 14:23:21 standalone.localdomain sshd[223725]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:21 standalone.localdomain sshd[223725]: Accepted publickey for root from 192.168.122.11 port 40894 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:21 standalone.localdomain systemd-logind[45629]: New session 213 of user root.
Oct 13 14:23:21 standalone.localdomain systemd[1]: Started Session 213 of User root.
Oct 13 14:23:21 standalone.localdomain sshd[223725]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:21 standalone.localdomain sshd[223728]: Received disconnect from 192.168.122.11 port 40894:11: disconnected by user
Oct 13 14:23:21 standalone.localdomain sshd[223728]: Disconnected from user root 192.168.122.11 port 40894
Oct 13 14:23:21 standalone.localdomain sshd[223725]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:21 standalone.localdomain systemd[1]: session-213.scope: Deactivated successfully.
Oct 13 14:23:21 standalone.localdomain systemd-logind[45629]: Session 213 logged out. Waiting for processes to exit.
Oct 13 14:23:21 standalone.localdomain systemd-logind[45629]: Removed session 213.
Oct 13 14:23:21 standalone.localdomain sshd[223810]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1465: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:21 standalone.localdomain sshd[223810]: Accepted publickey for root from 192.168.122.11 port 40898 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:21 standalone.localdomain systemd-logind[45629]: New session 214 of user root.
Oct 13 14:23:21 standalone.localdomain systemd[1]: Started Session 214 of User root.
Oct 13 14:23:21 standalone.localdomain sshd[223810]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:22 standalone.localdomain sshd[223827]: Received disconnect from 192.168.122.11 port 40898:11: disconnected by user
Oct 13 14:23:22 standalone.localdomain sshd[223827]: Disconnected from user root 192.168.122.11 port 40898
Oct 13 14:23:22 standalone.localdomain sshd[223810]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:22 standalone.localdomain systemd[1]: session-214.scope: Deactivated successfully.
Oct 13 14:23:22 standalone.localdomain systemd-logind[45629]: Session 214 logged out. Waiting for processes to exit.
Oct 13 14:23:22 standalone.localdomain systemd-logind[45629]: Removed session 214.
Oct 13 14:23:22 standalone.localdomain sshd[223849]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:22 standalone.localdomain sshd[223849]: Accepted publickey for root from 192.168.122.11 port 40908 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:22 standalone.localdomain systemd-logind[45629]: New session 215 of user root.
Oct 13 14:23:22 standalone.localdomain systemd[1]: Started Session 215 of User root.
Oct 13 14:23:22 standalone.localdomain sshd[223849]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:22 standalone.localdomain sshd[223852]: Received disconnect from 192.168.122.11 port 40908:11: disconnected by user
Oct 13 14:23:22 standalone.localdomain sshd[223852]: Disconnected from user root 192.168.122.11 port 40908
Oct 13 14:23:22 standalone.localdomain sshd[223849]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:22 standalone.localdomain systemd[1]: session-215.scope: Deactivated successfully.
Oct 13 14:23:22 standalone.localdomain systemd-logind[45629]: Session 215 logged out. Waiting for processes to exit.
Oct 13 14:23:22 standalone.localdomain systemd-logind[45629]: Removed session 215.
Oct 13 14:23:22 standalone.localdomain sshd[223866]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:22 standalone.localdomain sshd[223866]: Accepted publickey for root from 192.168.122.11 port 40916 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:22 standalone.localdomain systemd-logind[45629]: New session 216 of user root.
Oct 13 14:23:22 standalone.localdomain systemd[1]: Started Session 216 of User root.
Oct 13 14:23:22 standalone.localdomain sshd[223866]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:22 standalone.localdomain sshd[223877]: Received disconnect from 192.168.122.11 port 40916:11: disconnected by user
Oct 13 14:23:22 standalone.localdomain sshd[223877]: Disconnected from user root 192.168.122.11 port 40916
Oct 13 14:23:22 standalone.localdomain sshd[223866]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:22 standalone.localdomain systemd[1]: session-216.scope: Deactivated successfully.
Oct 13 14:23:22 standalone.localdomain systemd-logind[45629]: Session 216 logged out. Waiting for processes to exit.
Oct 13 14:23:22 standalone.localdomain systemd-logind[45629]: Removed session 216.
Oct 13 14:23:22 standalone.localdomain sshd[223924]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:22 standalone.localdomain sshd[223924]: Accepted publickey for root from 192.168.122.11 port 40922 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:22 standalone.localdomain systemd-logind[45629]: New session 217 of user root.
Oct 13 14:23:22 standalone.localdomain systemd[1]: Started Session 217 of User root.
Oct 13 14:23:22 standalone.localdomain sshd[223924]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:22 standalone.localdomain sshd[223927]: Received disconnect from 192.168.122.11 port 40922:11: disconnected by user
Oct 13 14:23:22 standalone.localdomain sshd[223927]: Disconnected from user root 192.168.122.11 port 40922
Oct 13 14:23:22 standalone.localdomain sshd[223924]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:22 standalone.localdomain systemd[1]: session-217.scope: Deactivated successfully.
Oct 13 14:23:22 standalone.localdomain systemd-logind[45629]: Session 217 logged out. Waiting for processes to exit.
Oct 13 14:23:22 standalone.localdomain systemd-logind[45629]: Removed session 217.
Oct 13 14:23:22 standalone.localdomain sshd[223941]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:23 standalone.localdomain sshd[223941]: Accepted publickey for root from 192.168.122.11 port 40924 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:23 standalone.localdomain ceph-mon[29756]: pgmap v1465: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:23 standalone.localdomain systemd-logind[45629]: New session 218 of user root.
Oct 13 14:23:23 standalone.localdomain systemd[1]: Started Session 218 of User root.
Oct 13 14:23:23 standalone.localdomain sshd[223941]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:23:23
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'manila_metadata', '.mgr', 'images', 'backups', 'volumes', 'manila_data']
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:23:23 standalone.localdomain sshd[223984]: Received disconnect from 192.168.122.11 port 40924:11: disconnected by user
Oct 13 14:23:23 standalone.localdomain sshd[223984]: Disconnected from user root 192.168.122.11 port 40924
Oct 13 14:23:23 standalone.localdomain sshd[223941]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:23 standalone.localdomain systemd[1]: session-218.scope: Deactivated successfully.
Oct 13 14:23:23 standalone.localdomain systemd-logind[45629]: Session 218 logged out. Waiting for processes to exit.
Oct 13 14:23:23 standalone.localdomain systemd-logind[45629]: Removed session 218.
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:23:23 standalone.localdomain sshd[224007]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:23:23 standalone.localdomain sshd[224007]: Accepted publickey for root from 192.168.122.11 port 40932 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:23 standalone.localdomain systemd-logind[45629]: New session 219 of user root.
Oct 13 14:23:23 standalone.localdomain systemd[1]: Started Session 219 of User root.
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:23:23 standalone.localdomain sshd[224007]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:23 standalone.localdomain sshd[224010]: Received disconnect from 192.168.122.11 port 40932:11: disconnected by user
Oct 13 14:23:23 standalone.localdomain sshd[224010]: Disconnected from user root 192.168.122.11 port 40932
Oct 13 14:23:23 standalone.localdomain sshd[224007]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:23 standalone.localdomain systemd[1]: session-219.scope: Deactivated successfully.
Oct 13 14:23:23 standalone.localdomain systemd-logind[45629]: Session 219 logged out. Waiting for processes to exit.
Oct 13 14:23:23 standalone.localdomain systemd-logind[45629]: Removed session 219.
Oct 13 14:23:23 standalone.localdomain sshd[224024]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:23 standalone.localdomain sshd[224024]: Accepted publickey for root from 192.168.122.11 port 40936 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:23 standalone.localdomain systemd-logind[45629]: New session 220 of user root.
Oct 13 14:23:23 standalone.localdomain systemd[1]: Started Session 220 of User root.
Oct 13 14:23:23 standalone.localdomain sshd[224024]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:23 standalone.localdomain sshd[224068]: Received disconnect from 192.168.122.11 port 40936:11: disconnected by user
Oct 13 14:23:23 standalone.localdomain sshd[224068]: Disconnected from user root 192.168.122.11 port 40936
Oct 13 14:23:23 standalone.localdomain sshd[224024]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:23 standalone.localdomain systemd[1]: session-220.scope: Deactivated successfully.
Oct 13 14:23:23 standalone.localdomain systemd-logind[45629]: Session 220 logged out. Waiting for processes to exit.
Oct 13 14:23:23 standalone.localdomain systemd-logind[45629]: Removed session 220.
Oct 13 14:23:23 standalone.localdomain sshd[224084]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:23 standalone.localdomain sshd[224084]: Accepted publickey for root from 192.168.122.11 port 40946 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:23 standalone.localdomain systemd-logind[45629]: New session 221 of user root.
Oct 13 14:23:23 standalone.localdomain systemd[1]: Started Session 221 of User root.
Oct 13 14:23:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:23:23 standalone.localdomain sshd[224084]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1466: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:23 standalone.localdomain sshd[224088]: Received disconnect from 192.168.122.11 port 40946:11: disconnected by user
Oct 13 14:23:23 standalone.localdomain sshd[224088]: Disconnected from user root 192.168.122.11 port 40946
Oct 13 14:23:23 standalone.localdomain sshd[224084]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:23 standalone.localdomain systemd[1]: tmp-crun.hUS6IJ.mount: Deactivated successfully.
Oct 13 14:23:23 standalone.localdomain systemd[1]: session-221.scope: Deactivated successfully.
Oct 13 14:23:23 standalone.localdomain systemd-logind[45629]: Session 221 logged out. Waiting for processes to exit.
Oct 13 14:23:23 standalone.localdomain systemd-logind[45629]: Removed session 221.
Oct 13 14:23:23 standalone.localdomain sshd[224117]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:23 standalone.localdomain podman[224087]: 2025-10-13 14:23:23.986117353 +0000 UTC m=+0.108692993 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, vcs-type=git, config_id=tripleo_step3, version=17.1.9, name=rhosp17/openstack-keystone, batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, 
vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, distribution-scope=public, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.expose-services=, architecture=x86_64)
Oct 13 14:23:24 standalone.localdomain podman[224087]: 2025-10-13 14:23:24.026887851 +0000 UTC m=+0.149463471 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, name=rhosp17/openstack-keystone, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, container_name=keystone_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, build-date=2025-07-21T13:27:18, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, tcib_managed=true)
Oct 13 14:23:24 standalone.localdomain ceph-mon[29756]: pgmap v1466: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:24 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:23:24 standalone.localdomain sshd[224117]: Accepted publickey for root from 192.168.122.11 port 40948 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:24 standalone.localdomain systemd-logind[45629]: New session 222 of user root.
Oct 13 14:23:24 standalone.localdomain systemd[1]: Started Session 222 of User root.
Oct 13 14:23:24 standalone.localdomain sshd[224117]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:24 standalone.localdomain sshd[224130]: Received disconnect from 192.168.122.11 port 40948:11: disconnected by user
Oct 13 14:23:24 standalone.localdomain sshd[224130]: Disconnected from user root 192.168.122.11 port 40948
Oct 13 14:23:24 standalone.localdomain sshd[224117]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:24 standalone.localdomain systemd[1]: session-222.scope: Deactivated successfully.
Oct 13 14:23:24 standalone.localdomain systemd-logind[45629]: Session 222 logged out. Waiting for processes to exit.
Oct 13 14:23:24 standalone.localdomain systemd-logind[45629]: Removed session 222.
Oct 13 14:23:24 standalone.localdomain sshd[224145]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:24 standalone.localdomain sshd[224145]: Accepted publickey for root from 192.168.122.11 port 40954 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:24 standalone.localdomain systemd-logind[45629]: New session 223 of user root.
Oct 13 14:23:24 standalone.localdomain systemd[1]: Started Session 223 of User root.
Oct 13 14:23:24 standalone.localdomain sshd[224145]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:24 standalone.localdomain sshd[224148]: Received disconnect from 192.168.122.11 port 40954:11: disconnected by user
Oct 13 14:23:24 standalone.localdomain sshd[224148]: Disconnected from user root 192.168.122.11 port 40954
Oct 13 14:23:24 standalone.localdomain sshd[224145]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:24 standalone.localdomain systemd[1]: session-223.scope: Deactivated successfully.
Oct 13 14:23:24 standalone.localdomain systemd-logind[45629]: Session 223 logged out. Waiting for processes to exit.
Oct 13 14:23:24 standalone.localdomain systemd-logind[45629]: Removed session 223.
Oct 13 14:23:24 standalone.localdomain sshd[224162]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:24 standalone.localdomain sshd[224162]: Accepted publickey for root from 192.168.122.11 port 40966 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:24 standalone.localdomain systemd-logind[45629]: New session 224 of user root.
Oct 13 14:23:24 standalone.localdomain systemd[1]: Started Session 224 of User root.
Oct 13 14:23:24 standalone.localdomain sshd[224162]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:24 standalone.localdomain sshd[224165]: Received disconnect from 192.168.122.11 port 40966:11: disconnected by user
Oct 13 14:23:24 standalone.localdomain sshd[224165]: Disconnected from user root 192.168.122.11 port 40966
Oct 13 14:23:24 standalone.localdomain sshd[224162]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:24 standalone.localdomain systemd[1]: session-224.scope: Deactivated successfully.
Oct 13 14:23:24 standalone.localdomain systemd-logind[45629]: Session 224 logged out. Waiting for processes to exit.
Oct 13 14:23:24 standalone.localdomain systemd-logind[45629]: Removed session 224.
Oct 13 14:23:24 standalone.localdomain sshd[224229]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:24 standalone.localdomain sshd[224229]: Accepted publickey for root from 192.168.122.11 port 40970 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:24 standalone.localdomain systemd-logind[45629]: New session 225 of user root.
Oct 13 14:23:24 standalone.localdomain systemd[1]: Started Session 225 of User root.
Oct 13 14:23:24 standalone.localdomain sshd[224229]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:25 standalone.localdomain sshd[224235]: Received disconnect from 192.168.122.11 port 40970:11: disconnected by user
Oct 13 14:23:25 standalone.localdomain sshd[224235]: Disconnected from user root 192.168.122.11 port 40970
Oct 13 14:23:25 standalone.localdomain sshd[224229]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:25 standalone.localdomain systemd[1]: session-225.scope: Deactivated successfully.
Oct 13 14:23:25 standalone.localdomain systemd-logind[45629]: Session 225 logged out. Waiting for processes to exit.
Oct 13 14:23:25 standalone.localdomain systemd-logind[45629]: Removed session 225.
Oct 13 14:23:25 standalone.localdomain sshd[224249]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:25 standalone.localdomain sshd[224249]: Accepted publickey for root from 192.168.122.11 port 40974 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:25 standalone.localdomain systemd-logind[45629]: New session 226 of user root.
Oct 13 14:23:25 standalone.localdomain systemd[1]: Started Session 226 of User root.
Oct 13 14:23:25 standalone.localdomain sshd[224249]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:25 standalone.localdomain sshd[224260]: Received disconnect from 192.168.122.11 port 40974:11: disconnected by user
Oct 13 14:23:25 standalone.localdomain sshd[224260]: Disconnected from user root 192.168.122.11 port 40974
Oct 13 14:23:25 standalone.localdomain sshd[224249]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:25 standalone.localdomain systemd[1]: session-226.scope: Deactivated successfully.
Oct 13 14:23:25 standalone.localdomain systemd-logind[45629]: Session 226 logged out. Waiting for processes to exit.
Oct 13 14:23:25 standalone.localdomain systemd-logind[45629]: Removed session 226.
Oct 13 14:23:25 standalone.localdomain sshd[224274]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:25 standalone.localdomain sshd[224274]: Accepted publickey for root from 192.168.122.11 port 40982 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:25 standalone.localdomain systemd-logind[45629]: New session 227 of user root.
Oct 13 14:23:25 standalone.localdomain systemd[1]: Started Session 227 of User root.
Oct 13 14:23:25 standalone.localdomain sshd[224274]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:25 standalone.localdomain sshd[224277]: Received disconnect from 192.168.122.11 port 40982:11: disconnected by user
Oct 13 14:23:25 standalone.localdomain sshd[224277]: Disconnected from user root 192.168.122.11 port 40982
Oct 13 14:23:25 standalone.localdomain sshd[224274]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:25 standalone.localdomain systemd[1]: session-227.scope: Deactivated successfully.
Oct 13 14:23:25 standalone.localdomain systemd-logind[45629]: Session 227 logged out. Waiting for processes to exit.
Oct 13 14:23:25 standalone.localdomain systemd-logind[45629]: Removed session 227.
Oct 13 14:23:25 standalone.localdomain sshd[224291]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:23:25 standalone.localdomain sshd[224291]: Accepted publickey for root from 192.168.122.11 port 40998 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:25 standalone.localdomain systemd-logind[45629]: New session 228 of user root.
Oct 13 14:23:25 standalone.localdomain systemd[1]: Started Session 228 of User root.
Oct 13 14:23:25 standalone.localdomain sshd[224291]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:25 standalone.localdomain systemd[1]: tmp-crun.4s19Ub.mount: Deactivated successfully.
Oct 13 14:23:25 standalone.localdomain podman[224293]: 2025-10-13 14:23:25.827627449 +0000 UTC m=+0.093077997 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, release=1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=neutron_sriov_agent, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-neutron-sriov-agent, build-date=2025-07-21T16:03:34, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:23:25 standalone.localdomain podman[224293]: 2025-10-13 14:23:25.884024013 +0000 UTC m=+0.149474541 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, vcs-type=git, container_name=neutron_sriov_agent, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, build-date=2025-07-21T16:03:34, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, 
maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, name=rhosp17/openstack-neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container)
Oct 13 14:23:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:23:25 standalone.localdomain sshd[224310]: Received disconnect from 192.168.122.11 port 40998:11: disconnected by user
Oct 13 14:23:25 standalone.localdomain sshd[224310]: Disconnected from user root 192.168.122.11 port 40998
Oct 13 14:23:25 standalone.localdomain sshd[224291]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1467: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:25 standalone.localdomain systemd[1]: session-228.scope: Deactivated successfully.
Oct 13 14:23:25 standalone.localdomain systemd-logind[45629]: Session 228 logged out. Waiting for processes to exit.
Oct 13 14:23:25 standalone.localdomain systemd-logind[45629]: Removed session 228.
Oct 13 14:23:25 standalone.localdomain sshd[224343]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:25 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:23:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:23:26 standalone.localdomain sshd[224343]: Accepted publickey for root from 192.168.122.11 port 41004 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:26 standalone.localdomain podman[224332]: 2025-10-13 14:23:26.064361294 +0000 UTC m=+0.155437457 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, build-date=2025-07-21T16:28:54, container_name=neutron_dhcp, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:23:26 standalone.localdomain systemd-logind[45629]: New session 229 of user root.
Oct 13 14:23:26 standalone.localdomain systemd[1]: Started Session 229 of User root.
Oct 13 14:23:26 standalone.localdomain sshd[224343]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:26 standalone.localdomain podman[224332]: 2025-10-13 14:23:26.152113055 +0000 UTC m=+0.243189218 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, container_name=neutron_dhcp, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', 
'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, config_id=tripleo_step4, version=17.1.9, vcs-type=git, build-date=2025-07-21T16:28:54, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, release=1, com.redhat.component=openstack-neutron-dhcp-agent-container, maintainer=OpenStack TripleO Team, distribution-scope=public)
Oct 13 14:23:26 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:23:26 standalone.localdomain sshd[224356]: Received disconnect from 192.168.122.11 port 41004:11: disconnected by user
Oct 13 14:23:26 standalone.localdomain sshd[224356]: Disconnected from user root 192.168.122.11 port 41004
Oct 13 14:23:26 standalone.localdomain systemd[1]: session-229.scope: Deactivated successfully.
Oct 13 14:23:26 standalone.localdomain sshd[224343]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:26 standalone.localdomain systemd-logind[45629]: Session 229 logged out. Waiting for processes to exit.
Oct 13 14:23:26 standalone.localdomain systemd-logind[45629]: Removed session 229.
Oct 13 14:23:26 standalone.localdomain sshd[224381]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:26 standalone.localdomain sshd[224381]: Accepted publickey for root from 192.168.122.11 port 41012 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:26 standalone.localdomain systemd-logind[45629]: New session 230 of user root.
Oct 13 14:23:26 standalone.localdomain systemd[1]: Started Session 230 of User root.
Oct 13 14:23:26 standalone.localdomain sshd[224381]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:26 standalone.localdomain sshd[224384]: Received disconnect from 192.168.122.11 port 41012:11: disconnected by user
Oct 13 14:23:26 standalone.localdomain sshd[224384]: Disconnected from user root 192.168.122.11 port 41012
Oct 13 14:23:26 standalone.localdomain sshd[224381]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:26 standalone.localdomain systemd[1]: session-230.scope: Deactivated successfully.
Oct 13 14:23:26 standalone.localdomain systemd-logind[45629]: Session 230 logged out. Waiting for processes to exit.
Oct 13 14:23:26 standalone.localdomain systemd-logind[45629]: Removed session 230.
Oct 13 14:23:26 standalone.localdomain sshd[224398]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:26 standalone.localdomain sshd[224398]: Accepted publickey for root from 192.168.122.11 port 41020 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:26 standalone.localdomain systemd-logind[45629]: New session 231 of user root.
Oct 13 14:23:26 standalone.localdomain systemd[1]: Started Session 231 of User root.
Oct 13 14:23:26 standalone.localdomain sshd[224398]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:26 standalone.localdomain sshd[224401]: Received disconnect from 192.168.122.11 port 41020:11: disconnected by user
Oct 13 14:23:26 standalone.localdomain sshd[224401]: Disconnected from user root 192.168.122.11 port 41020
Oct 13 14:23:26 standalone.localdomain sshd[224398]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:26 standalone.localdomain systemd[1]: session-231.scope: Deactivated successfully.
Oct 13 14:23:26 standalone.localdomain systemd-logind[45629]: Session 231 logged out. Waiting for processes to exit.
Oct 13 14:23:26 standalone.localdomain systemd-logind[45629]: Removed session 231.
Oct 13 14:23:26 standalone.localdomain sshd[224415]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:26 standalone.localdomain sshd[224415]: Accepted publickey for root from 192.168.122.11 port 41026 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:26 standalone.localdomain systemd-logind[45629]: New session 232 of user root.
Oct 13 14:23:26 standalone.localdomain systemd[1]: Started Session 232 of User root.
Oct 13 14:23:26 standalone.localdomain sshd[224415]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:26 standalone.localdomain sshd[224418]: Received disconnect from 192.168.122.11 port 41026:11: disconnected by user
Oct 13 14:23:26 standalone.localdomain sshd[224418]: Disconnected from user root 192.168.122.11 port 41026
Oct 13 14:23:26 standalone.localdomain sshd[224415]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:26 standalone.localdomain systemd[1]: session-232.scope: Deactivated successfully.
Oct 13 14:23:26 standalone.localdomain systemd-logind[45629]: Session 232 logged out. Waiting for processes to exit.
Oct 13 14:23:26 standalone.localdomain systemd-logind[45629]: Removed session 232.
Oct 13 14:23:26 standalone.localdomain sshd[224432]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:27 standalone.localdomain ceph-mon[29756]: pgmap v1467: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:27 standalone.localdomain sshd[224432]: Accepted publickey for root from 192.168.122.11 port 41038 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:27 standalone.localdomain systemd-logind[45629]: New session 233 of user root.
Oct 13 14:23:27 standalone.localdomain systemd[1]: Started Session 233 of User root.
Oct 13 14:23:27 standalone.localdomain sshd[224432]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:27 standalone.localdomain sshd[224435]: Received disconnect from 192.168.122.11 port 41038:11: disconnected by user
Oct 13 14:23:27 standalone.localdomain sshd[224435]: Disconnected from user root 192.168.122.11 port 41038
Oct 13 14:23:27 standalone.localdomain sshd[224432]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:27 standalone.localdomain systemd[1]: session-233.scope: Deactivated successfully.
Oct 13 14:23:27 standalone.localdomain systemd-logind[45629]: Session 233 logged out. Waiting for processes to exit.
Oct 13 14:23:27 standalone.localdomain systemd-logind[45629]: Removed session 233.
Oct 13 14:23:27 standalone.localdomain sshd[224453]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:27 standalone.localdomain sshd[224453]: Accepted publickey for root from 192.168.122.11 port 41052 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:27 standalone.localdomain systemd-logind[45629]: New session 234 of user root.
Oct 13 14:23:27 standalone.localdomain systemd[1]: Started Session 234 of User root.
Oct 13 14:23:27 standalone.localdomain sshd[224453]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:27 standalone.localdomain sudo[224461]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource config openstack-cinder-volume
Oct 13 14:23:27 standalone.localdomain sudo[224461]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1468: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:28 standalone.localdomain sudo[224461]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:28 standalone.localdomain sshd[224460]: Received disconnect from 192.168.122.11 port 41052:11: disconnected by user
Oct 13 14:23:28 standalone.localdomain sshd[224460]: Disconnected from user root 192.168.122.11 port 41052
Oct 13 14:23:28 standalone.localdomain sshd[224453]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:28 standalone.localdomain systemd[1]: session-234.scope: Deactivated successfully.
Oct 13 14:23:28 standalone.localdomain systemd-logind[45629]: Session 234 logged out. Waiting for processes to exit.
Oct 13 14:23:28 standalone.localdomain systemd-logind[45629]: Removed session 234.
Oct 13 14:23:28 standalone.localdomain sshd[224477]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:28 standalone.localdomain sshd[224477]: Accepted publickey for root from 192.168.122.11 port 41058 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:28 standalone.localdomain systemd-logind[45629]: New session 235 of user root.
Oct 13 14:23:28 standalone.localdomain systemd[1]: Started Session 235 of User root.
Oct 13 14:23:28 standalone.localdomain sshd[224477]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:28 standalone.localdomain sudo[224488]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource disable openstack-cinder-volume
Oct 13 14:23:28 standalone.localdomain sudo[224488]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:23:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:23:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:23:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:23:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:23:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:23:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:23:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:23:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:23:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:23:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:23:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:23:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:23:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:23:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:23:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:23:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:23:28 standalone.localdomain podman[224507]: 2025-10-13 14:23:28.797998897 +0000 UTC m=+0.064927701 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, container_name=clustercheck, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_id=tripleo_step2, name=rhosp17/openstack-mariadb, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:23:28 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:23:28 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       openstack-cinder-volume-podman-0     (                             standalone )  due to node availability
Oct 13 14:23:28 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 63, saving inputs in /var/lib/pacemaker/pengine/pe-input-63.bz2
Oct 13 14:23:28 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation openstack-cinder-volume-podman-0_stop_0 locally on standalone
Oct 13 14:23:28 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for openstack-cinder-volume-podman-0 on standalone
Oct 13 14:23:28 standalone.localdomain podman[224507]: 2025-10-13 14:23:28.84535544 +0000 UTC m=+0.112284274 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, version=17.1.9, config_id=tripleo_step2, vcs-type=git, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, container_name=clustercheck, name=rhosp17/openstack-mariadb, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64)
Oct 13 14:23:28 standalone.localdomain systemd[1]: tmp-crun.XLR6WA.mount: Deactivated successfully.
Oct 13 14:23:28 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:23:28 standalone.localdomain podman[224506]: 2025-10-13 14:23:28.866450357 +0000 UTC m=+0.133077722 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=nova_compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:23:28 standalone.localdomain sudo[224488]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:28 standalone.localdomain sshd[224480]: Received disconnect from 192.168.122.11 port 41058:11: disconnected by user
Oct 13 14:23:28 standalone.localdomain sshd[224480]: Disconnected from user root 192.168.122.11 port 41058
Oct 13 14:23:28 standalone.localdomain sshd[224477]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:28 standalone.localdomain podman[224506]: 2025-10-13 14:23:28.89581478 +0000 UTC m=+0.162442145 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, container_name=nova_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.buildah.version=1.33.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:23:28 standalone.localdomain systemd[1]: session-235.scope: Deactivated successfully.
Oct 13 14:23:28 standalone.localdomain systemd-logind[45629]: Session 235 logged out. Waiting for processes to exit.
Oct 13 14:23:28 standalone.localdomain systemd-logind[45629]: Removed session 235.
Oct 13 14:23:28 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:23:28 standalone.localdomain sshd[224589]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:28 standalone.localdomain podman[224563]: 2025-10-13 14:23:28.971723922 +0000 UTC m=+0.111097627 container exec a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-volume, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, batch=17.1_20250721.1, build-date=2025-07-21T16:13:39, name=rhosp17/openstack-cinder-volume, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-cinder-volume-container, vendor=Red Hat, Inc., release=1, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:23:29 standalone.localdomain podman[224563]: 2025-10-13 14:23:29.000435726 +0000 UTC m=+0.139809461 container exec_died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-volume, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-volume, batch=17.1_20250721.1, build-date=2025-07-21T16:13:39, description=Red Hat OpenStack Platform 17.1 cinder-volume, com.redhat.component=openstack-cinder-volume-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, tcib_managed=true, vcs-type=git)
Oct 13 14:23:29 standalone.localdomain sshd[224589]: Accepted publickey for root from 192.168.122.11 port 41060 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:29 standalone.localdomain systemd-logind[45629]: New session 236 of user root.
Oct 13 14:23:29 standalone.localdomain ceph-mon[29756]: pgmap v1468: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:29 standalone.localdomain systemd[1]: Started Session 236 of User root.
Oct 13 14:23:29 standalone.localdomain sshd[224589]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:29 standalone.localdomain sudo[224625]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource config openstack-cinder-backup
Oct 13 14:23:29 standalone.localdomain sudo[224625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:29 standalone.localdomain sudo[224625]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:29 standalone.localdomain sshd[224620]: Received disconnect from 192.168.122.11 port 41060:11: disconnected by user
Oct 13 14:23:29 standalone.localdomain sshd[224620]: Disconnected from user root 192.168.122.11 port 41060
Oct 13 14:23:29 standalone.localdomain sshd[224589]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:29 standalone.localdomain systemd[1]: session-236.scope: Deactivated successfully.
Oct 13 14:23:29 standalone.localdomain systemd-logind[45629]: Session 236 logged out. Waiting for processes to exit.
Oct 13 14:23:29 standalone.localdomain systemd-logind[45629]: Removed session 236.
Oct 13 14:23:29 standalone.localdomain sshd[224649]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:29 standalone.localdomain sshd[224649]: Accepted publickey for root from 192.168.122.11 port 58392 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:29 standalone.localdomain systemd-logind[45629]: New session 237 of user root.
Oct 13 14:23:29 standalone.localdomain systemd[1]: Started Session 237 of User root.
Oct 13 14:23:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1469: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:29 standalone.localdomain sshd[224649]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:30 standalone.localdomain sudo[224653]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource disable openstack-cinder-backup
Oct 13 14:23:30 standalone.localdomain sudo[224653]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:30 standalone.localdomain ceph-mon[29756]: pgmap v1469: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:30 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 63 aborted by openstack-cinder-backup-meta_attributes-target-role doing create target-role=Stopped: Configuration change
Oct 13 14:23:30 standalone.localdomain sudo[224653]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:30 standalone.localdomain sshd[224652]: Received disconnect from 192.168.122.11 port 58392:11: disconnected by user
Oct 13 14:23:30 standalone.localdomain sshd[224652]: Disconnected from user root 192.168.122.11 port 58392
Oct 13 14:23:30 standalone.localdomain sshd[224649]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:30 standalone.localdomain systemd[1]: session-237.scope: Deactivated successfully.
Oct 13 14:23:30 standalone.localdomain systemd-logind[45629]: Session 237 logged out. Waiting for processes to exit.
Oct 13 14:23:30 standalone.localdomain systemd-logind[45629]: Removed session 237.
Oct 13 14:23:30 standalone.localdomain sshd[224682]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:30 standalone.localdomain runuser[224688]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:23:30 standalone.localdomain sshd[224682]: Accepted publickey for root from 192.168.122.11 port 58406 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:30 standalone.localdomain systemd-logind[45629]: New session 238 of user root.
Oct 13 14:23:30 standalone.localdomain systemd[1]: Started Session 238 of User root.
Oct 13 14:23:30 standalone.localdomain sshd[224682]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:30 standalone.localdomain sudo[224735]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource config openstack-manila-share
Oct 13 14:23:30 standalone.localdomain sudo[224735]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:23:31 standalone.localdomain runuser[224688]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:23:31 standalone.localdomain podman[224777]: 2025-10-13 14:23:31.456678627 +0000 UTC m=+0.068893574 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, version=17.1.9, architecture=x86_64, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, build-date=2025-07-21T15:22:36, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, distribution-scope=public, com.redhat.component=openstack-manila-share-container, description=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-manila-share, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 manila-share, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, release=1, batch=17.1_20250721.1)
Oct 13 14:23:31 standalone.localdomain podman[224777]: 2025-10-13 14:23:31.486223257 +0000 UTC m=+0.098438214 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, architecture=x86_64, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-manila-share-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, vendor=Red Hat, Inc., build-date=2025-07-21T15:22:36, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-manila-share, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:23:31 standalone.localdomain sudo[224735]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:31 standalone.localdomain sshd[224734]: Received disconnect from 192.168.122.11 port 58406:11: disconnected by user
Oct 13 14:23:31 standalone.localdomain sshd[224734]: Disconnected from user root 192.168.122.11 port 58406
Oct 13 14:23:31 standalone.localdomain sshd[224682]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:31 standalone.localdomain systemd[1]: session-238.scope: Deactivated successfully.
Oct 13 14:23:31 standalone.localdomain systemd-logind[45629]: Session 238 logged out. Waiting for processes to exit.
Oct 13 14:23:31 standalone.localdomain systemd-logind[45629]: Removed session 238.
Oct 13 14:23:31 standalone.localdomain runuser[224808]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:23:31 standalone.localdomain sshd[224816]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:31 standalone.localdomain sshd[224816]: Accepted publickey for root from 192.168.122.11 port 58422 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:31 standalone.localdomain systemd-logind[45629]: New session 239 of user root.
Oct 13 14:23:31 standalone.localdomain systemd[1]: Started Session 239 of User root.
Oct 13 14:23:31 standalone.localdomain sshd[224816]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:31 standalone.localdomain sudo[224858]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource disable openstack-manila-share
Oct 13 14:23:31 standalone.localdomain sudo[224858]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1470: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:32 standalone.localdomain runuser[224808]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:23:32 standalone.localdomain runuser[224964]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:23:32 standalone.localdomain ceph-mon[29756]: pgmap v1470: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:32 standalone.localdomain sudo[224858]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:32 standalone.localdomain sshd[224857]: Received disconnect from 192.168.122.11 port 58422:11: disconnected by user
Oct 13 14:23:32 standalone.localdomain sshd[224857]: Disconnected from user root 192.168.122.11 port 58422
Oct 13 14:23:32 standalone.localdomain sshd[224816]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:32 standalone.localdomain systemd[1]: session-239.scope: Deactivated successfully.
Oct 13 14:23:32 standalone.localdomain systemd-logind[45629]: Session 239 logged out. Waiting for processes to exit.
Oct 13 14:23:32 standalone.localdomain systemd-logind[45629]: Removed session 239.
Oct 13 14:23:32 standalone.localdomain sshd[225025]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:32 standalone.localdomain sshd[225025]: Accepted publickey for root from 192.168.122.11 port 58438 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:32 standalone.localdomain systemd-logind[45629]: New session 240 of user root.
Oct 13 14:23:32 standalone.localdomain systemd[1]: Started Session 240 of User root.
Oct 13 14:23:32 standalone.localdomain sshd[225025]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:32 standalone.localdomain sudo[225029]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource config openstack-cinder-volume
Oct 13 14:23:32 standalone.localdomain sudo[225029]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:23:32 standalone.localdomain sshd[225100]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:32 standalone.localdomain systemd[1]: tmp-crun.2PLlwW.mount: Deactivated successfully.
Oct 13 14:23:32 standalone.localdomain podman[225089]: 2025-10-13 14:23:32.738708196 +0000 UTC m=+0.083284763 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1, tcib_managed=true, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12)
Oct 13 14:23:32 standalone.localdomain podman[225089]: 2025-10-13 14:23:32.771834087 +0000 UTC m=+0.116410624 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, 
version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct 13 14:23:32 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:23:32 standalone.localdomain runuser[224964]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:23:33 standalone.localdomain sudo[225029]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:33 standalone.localdomain sshd[225028]: Received disconnect from 192.168.122.11 port 58438:11: disconnected by user
Oct 13 14:23:33 standalone.localdomain sshd[225028]: Disconnected from user root 192.168.122.11 port 58438
Oct 13 14:23:33 standalone.localdomain sshd[225025]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:33 standalone.localdomain systemd[1]: session-240.scope: Deactivated successfully.
Oct 13 14:23:33 standalone.localdomain systemd-logind[45629]: Session 240 logged out. Waiting for processes to exit.
Oct 13 14:23:33 standalone.localdomain systemd-logind[45629]: Removed session 240.
Oct 13 14:23:33 standalone.localdomain sshd[225160]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:33 standalone.localdomain sshd[225160]: Accepted publickey for root from 192.168.122.11 port 58444 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:33 standalone.localdomain systemd-logind[45629]: New session 241 of user root.
Oct 13 14:23:33 standalone.localdomain systemd[1]: Started Session 241 of User root.
Oct 13 14:23:33 standalone.localdomain sshd[225160]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:33 standalone.localdomain sshd[225100]: Received disconnect from 80.94.93.176 port 45246:11:  [preauth]
Oct 13 14:23:33 standalone.localdomain sshd[225100]: Disconnected from authenticating user root 80.94.93.176 port 45246 [preauth]
Oct 13 14:23:33 standalone.localdomain sudo[225172]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource status openstack-cinder-volume
Oct 13 14:23:33 standalone.localdomain sudo[225172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1471: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:34 standalone.localdomain sudo[225172]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:34 standalone.localdomain sshd[225171]: Received disconnect from 192.168.122.11 port 58444:11: disconnected by user
Oct 13 14:23:34 standalone.localdomain sshd[225171]: Disconnected from user root 192.168.122.11 port 58444
Oct 13 14:23:34 standalone.localdomain sshd[225160]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:34 standalone.localdomain systemd[1]: session-241.scope: Deactivated successfully.
Oct 13 14:23:34 standalone.localdomain systemd-logind[45629]: Session 241 logged out. Waiting for processes to exit.
Oct 13 14:23:34 standalone.localdomain systemd-logind[45629]: Removed session 241.
Oct 13 14:23:34 standalone.localdomain sshd[225231]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:34 standalone.localdomain sshd[225231]: Accepted publickey for root from 192.168.122.11 port 58448 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:34 standalone.localdomain systemd-logind[45629]: New session 242 of user root.
Oct 13 14:23:34 standalone.localdomain systemd[1]: Started Session 242 of User root.
Oct 13 14:23:34 standalone.localdomain sshd[225231]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:34 standalone.localdomain sudo[225235]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource config openstack-cinder-backup
Oct 13 14:23:34 standalone.localdomain sudo[225235]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:34 standalone.localdomain sudo[225235]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:34 standalone.localdomain sshd[225234]: Received disconnect from 192.168.122.11 port 58448:11: disconnected by user
Oct 13 14:23:34 standalone.localdomain sshd[225234]: Disconnected from user root 192.168.122.11 port 58448
Oct 13 14:23:34 standalone.localdomain sshd[225231]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:34 standalone.localdomain systemd-logind[45629]: Session 242 logged out. Waiting for processes to exit.
Oct 13 14:23:34 standalone.localdomain systemd[1]: session-242.scope: Deactivated successfully.
Oct 13 14:23:34 standalone.localdomain systemd-logind[45629]: Removed session 242.
Oct 13 14:23:34 standalone.localdomain sshd[225312]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:34 standalone.localdomain sshd[225312]: Accepted publickey for root from 192.168.122.11 port 58454 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:34 standalone.localdomain systemd-logind[45629]: New session 243 of user root.
Oct 13 14:23:34 standalone.localdomain ceph-mon[29756]: pgmap v1471: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:34 standalone.localdomain systemd[1]: Started Session 243 of User root.
Oct 13 14:23:35 standalone.localdomain sshd[225312]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:35 standalone.localdomain sudo[225316]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource status openstack-cinder-backup
Oct 13 14:23:35 standalone.localdomain sudo[225316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:35 standalone.localdomain sudo[225316]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:35 standalone.localdomain sshd[225315]: Received disconnect from 192.168.122.11 port 58454:11: disconnected by user
Oct 13 14:23:35 standalone.localdomain sshd[225315]: Disconnected from user root 192.168.122.11 port 58454
Oct 13 14:23:35 standalone.localdomain sshd[225312]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:35 standalone.localdomain systemd[1]: session-243.scope: Deactivated successfully.
Oct 13 14:23:35 standalone.localdomain systemd-logind[45629]: Session 243 logged out. Waiting for processes to exit.
Oct 13 14:23:35 standalone.localdomain systemd-logind[45629]: Removed session 243.
Oct 13 14:23:35 standalone.localdomain sshd[225340]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:35 standalone.localdomain sshd[225340]: Accepted publickey for root from 192.168.122.11 port 58466 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:35 standalone.localdomain systemd-logind[45629]: New session 244 of user root.
Oct 13 14:23:35 standalone.localdomain systemd[1]: Started Session 244 of User root.
Oct 13 14:23:35 standalone.localdomain sshd[225340]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:35 standalone.localdomain sudo[225344]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource config openstack-manila-share
Oct 13 14:23:35 standalone.localdomain sudo[225344]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1472: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:23:36 standalone.localdomain sudo[225344]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:36 standalone.localdomain sshd[225343]: Received disconnect from 192.168.122.11 port 58466:11: disconnected by user
Oct 13 14:23:36 standalone.localdomain sshd[225343]: Disconnected from user root 192.168.122.11 port 58466
Oct 13 14:23:36 standalone.localdomain sshd[225340]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:36 standalone.localdomain systemd[1]: session-244.scope: Deactivated successfully.
Oct 13 14:23:36 standalone.localdomain systemd-logind[45629]: Session 244 logged out. Waiting for processes to exit.
Oct 13 14:23:36 standalone.localdomain systemd-logind[45629]: Removed session 244.
Oct 13 14:23:36 standalone.localdomain sshd[225368]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:23:36 standalone.localdomain sshd[225368]: Accepted publickey for root from 192.168.122.11 port 58476 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:23:36 standalone.localdomain systemd-logind[45629]: New session 245 of user root.
Oct 13 14:23:36 standalone.localdomain systemd[1]: Started Session 245 of User root.
Oct 13 14:23:36 standalone.localdomain sshd[225368]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:36 standalone.localdomain sudo[225372]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource status openstack-manila-share
Oct 13 14:23:36 standalone.localdomain sudo[225372]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:23:37 standalone.localdomain ceph-mon[29756]: pgmap v1472: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:37 standalone.localdomain sudo[225372]: pam_unix(sudo:session): session closed for user root
Oct 13 14:23:37 standalone.localdomain sshd[225371]: Received disconnect from 192.168.122.11 port 58476:11: disconnected by user
Oct 13 14:23:37 standalone.localdomain sshd[225371]: Disconnected from user root 192.168.122.11 port 58476
Oct 13 14:23:37 standalone.localdomain sshd[225368]: pam_unix(sshd:session): session closed for user root
Oct 13 14:23:37 standalone.localdomain systemd[1]: session-245.scope: Deactivated successfully.
Oct 13 14:23:37 standalone.localdomain systemd-logind[45629]: Session 245 logged out. Waiting for processes to exit.
Oct 13 14:23:37 standalone.localdomain systemd-logind[45629]: Removed session 245.
Oct 13 14:23:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1473: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:38 standalone.localdomain ceph-mon[29756]: pgmap v1473: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1474: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:40 standalone.localdomain ceph-mon[29756]: pgmap v1474: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:23:41 standalone.localdomain systemd[1]: tmp-crun.6uGnK8.mount: Deactivated successfully.
Oct 13 14:23:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:23:41 standalone.localdomain podman[225423]: 2025-10-13 14:23:41.263109509 +0000 UTC m=+0.136025914 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, name=rhosp17/openstack-cinder-backup, release=1, architecture=x86_64, build-date=2025-07-21T16:18:24, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cinder-backup, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-backup-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1)
Oct 13 14:23:41 standalone.localdomain podman[225423]: 2025-10-13 14:23:41.297953962 +0000 UTC m=+0.170870337 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, com.redhat.component=openstack-cinder-backup-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cinder-backup, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, name=rhosp17/openstack-cinder-backup, release=1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-backup, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, build-date=2025-07-21T16:18:24)
Oct 13 14:23:41 standalone.localdomain podman[225441]: 2025-10-13 14:23:41.341395114 +0000 UTC m=+0.067775779 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=heat_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, name=rhosp17/openstack-heat-api, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container)
Oct 13 14:23:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:23:41 standalone.localdomain podman[225441]: 2025-10-13 14:23:41.349846437 +0000 UTC m=+0.076227142 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, version=17.1.9, container_name=heat_api_cron, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1)
Oct 13 14:23:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:23:41 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:23:41 standalone.localdomain podman[225473]: 2025-10-13 14:23:41.431403355 +0000 UTC m=+0.073353573 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.33.12, container_name=logrotate_crond, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:23:41 standalone.localdomain podman[225473]: 2025-10-13 14:23:41.442228752 +0000 UTC m=+0.084178970 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1)
Oct 13 14:23:41 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:23:41 standalone.localdomain podman[225472]: 2025-10-13 14:23:41.521746375 +0000 UTC m=+0.165859891 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-heat-api-container, release=1, version=17.1.9, tcib_managed=true, config_id=tripleo_step4, build-date=2025-07-21T15:56:26, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, distribution-scope=public)
Oct 13 14:23:41 standalone.localdomain podman[225472]: 2025-10-13 14:23:41.555877417 +0000 UTC m=+0.199990893 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, com.redhat.component=openstack-heat-api-container, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T15:56:26, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, release=1, tcib_managed=true)
Oct 13 14:23:41 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:23:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1475: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:23:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:23:42 standalone.localdomain systemd[1]: tmp-crun.aOWJQ8.mount: Deactivated successfully.
Oct 13 14:23:42 standalone.localdomain podman[225605]: 2025-10-13 14:23:42.314037406 +0000 UTC m=+0.077999958 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, com.redhat.component=openstack-heat-engine-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-engine, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, container_name=heat_engine, distribution-scope=public, batch=17.1_20250721.1, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.12)
Oct 13 14:23:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:23:42 standalone.localdomain podman[225606]: 2025-10-13 14:23:42.389468833 +0000 UTC m=+0.143824775 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, release=1, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-memcached-container, description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=memcached, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, vcs-type=git)
Oct 13 14:23:42 standalone.localdomain podman[225605]: 2025-10-13 14:23:42.408465245 +0000 UTC m=+0.172427837 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T15:44:11, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, config_id=tripleo_step4, batch=17.1_20250721.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vcs-type=git, com.redhat.component=openstack-heat-engine-container, container_name=heat_engine, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1)
Oct 13 14:23:42 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:23:42 standalone.localdomain podman[225606]: 2025-10-13 14:23:42.461320729 +0000 UTC m=+0.215676661 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, vendor=Red Hat, Inc., tcib_managed=true, container_name=memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, config_id=tripleo_step1, build-date=2025-07-21T12:58:43, summary=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, version=17.1.9, com.redhat.component=openstack-memcached-container)
Oct 13 14:23:42 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:23:42 standalone.localdomain podman[225635]: 2025-10-13 14:23:42.564462698 +0000 UTC m=+0.231376810 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, container_name=heat_api_cfn, com.redhat.component=openstack-heat-api-cfn-container, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, 
name=rhosp17/openstack-heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, build-date=2025-07-21T14:49:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:23:42 standalone.localdomain podman[225635]: 2025-10-13 14:23:42.605046091 +0000 UTC m=+0.271960193 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, container_name=heat_api_cfn, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-cfn-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, 
name=rhosp17/openstack-heat-api-cfn, release=1, tcib_managed=true, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, build-date=2025-07-21T14:49:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:23:42 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:23:43 standalone.localdomain ceph-mon[29756]: pgmap v1475: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:43 standalone.localdomain runuser[225772]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:23:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1476: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:44 standalone.localdomain runuser[225772]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:23:44 standalone.localdomain runuser[225884]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:23:44 standalone.localdomain runuser[225884]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:23:44 standalone.localdomain runuser[225999]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:23:45 standalone.localdomain ceph-mon[29756]: pgmap v1476: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:45 standalone.localdomain runuser[225999]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:23:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1477: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:23:47 standalone.localdomain ceph-mon[29756]: pgmap v1477: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:23:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:23:47 standalone.localdomain podman[226073]: 2025-10-13 14:23:47.132248127 +0000 UTC m=+0.065465947 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, distribution-scope=public)
Oct 13 14:23:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:23:47 standalone.localdomain systemd[1]: tmp-crun.MczxwY.mount: Deactivated successfully.
Oct 13 14:23:47 standalone.localdomain podman[226074]: 2025-10-13 14:23:47.166657428 +0000 UTC m=+0.092783868 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, release=1, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_controller)
Oct 13 14:23:47 standalone.localdomain podman[226073]: 2025-10-13 14:23:47.183639926 +0000 UTC m=+0.116857736 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:23:47 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:23:47 standalone.localdomain podman[226106]: 2025-10-13 14:23:47.231229807 +0000 UTC m=+0.062978700 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, com.redhat.component=openstack-glance-api-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', 
'/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=glance_api_cron, build-date=2025-07-21T13:58:20)
Oct 13 14:23:47 standalone.localdomain podman[226074]: 2025-10-13 14:23:47.261999244 +0000 UTC m=+0.188125744 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, build-date=2025-07-21T13:28:44)
Oct 13 14:23:47 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:23:47 standalone.localdomain podman[226106]: 2025-10-13 14:23:47.313887299 +0000 UTC m=+0.145636212 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, container_name=glance_api_cron, release=1, vcs-type=git, com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:20)
Oct 13 14:23:47 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:23:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:23:47 standalone.localdomain podman[226147]: 2025-10-13 14:23:47.836756726 +0000 UTC m=+0.102470758 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, name=rhosp17/openstack-swift-account, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-account, 
com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, vcs-type=git, container_name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, release=1)
Oct 13 14:23:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1478: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:23:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:23:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:23:48 standalone.localdomain ceph-mon[29756]: pgmap v1478: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:48 standalone.localdomain podman[226147]: 2025-10-13 14:23:48.05508869 +0000 UTC m=+0.320802732 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, container_name=swift_account_server, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, build-date=2025-07-21T16:11:22, 
com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:23:48 standalone.localdomain podman[226172]: 2025-10-13 14:23:48.06441082 +0000 UTC m=+0.083729106 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.buildah.version=1.33.12, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, vcs-type=git, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-proxy-server, container_name=swift_proxy)
Oct 13 14:23:48 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:23:48 standalone.localdomain podman[226171]: 2025-10-13 14:23:48.129085902 +0000 UTC m=+0.148014456 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, version=17.1.9, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, name=rhosp17/openstack-swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', 
'/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, container_name=swift_object_server, build-date=2025-07-21T14:56:28, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible)
Oct 13 14:23:48 standalone.localdomain systemd[1]: tmp-crun.9EGwhK.mount: Deactivated successfully.
Oct 13 14:23:48 standalone.localdomain podman[226173]: 2025-10-13 14:23:48.192534356 +0000 UTC m=+0.206341740 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, config_id=tripleo_step4, build-date=2025-07-21T15:54:32, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, 
vendor=Red Hat, Inc., release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-container, distribution-scope=public)
Oct 13 14:23:48 standalone.localdomain podman[226172]: 2025-10-13 14:23:48.263657759 +0000 UTC m=+0.282976065 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_id=tripleo_step4, com.redhat.component=openstack-swift-proxy-server-container, architecture=x86_64, vcs-type=git, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, name=rhosp17/openstack-swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, container_name=swift_proxy, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:23:48 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:23:48 standalone.localdomain podman[226173]: 2025-10-13 14:23:48.356834028 +0000 UTC m=+0.370641482 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.component=openstack-swift-container-container, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, container_name=swift_container_server, name=rhosp17/openstack-swift-container)
Oct 13 14:23:48 standalone.localdomain podman[226171]: 2025-10-13 14:23:48.37005537 +0000 UTC m=+0.388984034 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, container_name=swift_object_server, name=rhosp17/openstack-swift-object, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, version=17.1.9, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc.)
Oct 13 14:23:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:23:48 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:23:48 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:23:48 standalone.localdomain podman[226252]: 2025-10-13 14:23:48.438722156 +0000 UTC m=+0.058481881 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, 
build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 13 14:23:48 standalone.localdomain podman[226252]: 2025-10-13 14:23:48.605038611 +0000 UTC m=+0.224798336 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, container_name=glance_api_internal, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 glance-api, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:23:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:23:48 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:23:48 standalone.localdomain podman[226277]: 2025-10-13 14:23:48.682128499 +0000 UTC m=+0.057250252 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1, version=17.1.9, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, vendor=Red Hat, Inc.)
Oct 13 14:23:49 standalone.localdomain podman[226277]: 2025-10-13 14:23:49.075058874 +0000 UTC m=+0.450180577 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, vcs-type=git)
Oct 13 14:23:49 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:23:49 standalone.localdomain systemd[1]: tmp-crun.syhT8a.mount: Deactivated successfully.
Oct 13 14:23:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1479: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:50 standalone.localdomain haproxy[70940]: 172.17.0.100:59719 [13/Oct/2025:14:17:18.370] mysql mysql/standalone.internalapi.localdomain 1/0/392042 200217 -- 7/7/6/6/0 0/0
Oct 13 14:23:50 standalone.localdomain systemd[1]: session-c15.scope: Deactivated successfully.
Oct 13 14:23:50 standalone.localdomain systemd[1]: libpod-a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1.scope: Deactivated successfully.
Oct 13 14:23:50 standalone.localdomain systemd[1]: libpod-a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1.scope: Consumed 37.581s CPU time.
Oct 13 14:23:50 standalone.localdomain podman[224610]: 2025-10-13 14:23:50.479187302 +0000 UTC m=+21.454171003 container died a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, description=Red Hat OpenStack Platform 17.1 cinder-volume, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-cinder-volume-container, version=17.1.9, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-volume, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T16:13:39, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cinder-volume, vcs-type=git)
Oct 13 14:23:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-88091550f23654f6033122dfc572214e3b2bb01634a4ff86212dbe1ecacd4fa1-merged.mount: Deactivated successfully.
Oct 13 14:23:50 standalone.localdomain podman[224610]: 2025-10-13 14:23:50.534475402 +0000 UTC m=+21.509459003 container cleanup a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 cinder-volume, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, architecture=x86_64, com.redhat.component=openstack-cinder-volume-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cinder-volume, name=rhosp17/openstack-cinder-volume, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:13:39, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:23:50 standalone.localdomain podman[226317]: 2025-10-13 14:23:50.535865475 +0000 UTC m=+0.052756742 container cleanup a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, com.redhat.component=openstack-cinder-volume-container, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, release=1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cinder-volume, build-date=2025-07-21T16:13:39, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cinder-volume, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-cinder-volume, architecture=x86_64, distribution-scope=public)
Oct 13 14:23:50 standalone.localdomain podman(openstack-cinder-volume-podman-0)[226335]: INFO: openstack-cinder-volume-podman-0
Oct 13 14:23:50 standalone.localdomain podman(openstack-cinder-volume-podman-0)[226356]: NOTICE: Cleaning up inactive container, openstack-cinder-volume-podman-0.
Oct 13 14:23:50 standalone.localdomain podman[226360]: 2025-10-13 14:23:50.688603127 +0000 UTC m=+0.066760767 container remove a1e8490897f87a9405f80b64b39929fcd3a015d8c68c20baebbb009d9c093bb1 (image=cluster.common.tag/cinder-volume:pcmklatest, name=openstack-cinder-volume-podman-0, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-volume, tcib_managed=true, build-date=2025-07-21T16:13:39, description=Red Hat OpenStack Platform 17.1 cinder-volume, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-volume, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cinder-volume, vcs-ref=af3b5586fa00fe6777665adc3f875bc7677ec35f, vcs-type=git, distribution-scope=public, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-volume, com.redhat.component=openstack-cinder-volume-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-volume/images/17.1.9-1)
Oct 13 14:23:50 standalone.localdomain podman(openstack-cinder-volume-podman-0)[226377]: INFO: openstack-cinder-volume-podman-0
Oct 13 14:23:50 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for openstack-cinder-volume-podman-0 on standalone: ok
Oct 13 14:23:50 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 63 (Complete=3, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-63.bz2): Complete
Oct 13 14:23:50 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       openstack-cinder-backup-podman-0     (                             standalone )  due to node availability
Oct 13 14:23:50 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       openstack-manila-share-podman-0      (                             standalone )  due to node availability
Oct 13 14:23:50 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 64, saving inputs in /var/lib/pacemaker/pengine/pe-input-64.bz2
Oct 13 14:23:50 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation openstack-cinder-backup-podman-0_stop_0 locally on standalone
Oct 13 14:23:50 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for openstack-cinder-backup-podman-0 on standalone
Oct 13 14:23:50 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation openstack-manila-share-podman-0_stop_0 locally on standalone
Oct 13 14:23:50 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for openstack-manila-share-podman-0 on standalone
Oct 13 14:23:50 standalone.localdomain podman[226394]: 2025-10-13 14:23:50.865054217 +0000 UTC m=+0.074938262 container exec 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-backup-container, distribution-scope=public, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T16:18:24, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cinder-backup, name=rhosp17/openstack-cinder-backup, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, tcib_managed=true)
Oct 13 14:23:50 standalone.localdomain podman[226393]: 2025-10-13 14:23:50.917677575 +0000 UTC m=+0.127570230 container exec cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, summary=Red Hat OpenStack Platform 17.1 manila-share, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-manila-share, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, version=17.1.9, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:22:36, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 manila-share, com.redhat.component=openstack-manila-share-container, io.buildah.version=1.33.12)
Oct 13 14:23:50 standalone.localdomain podman[226393]: 2025-10-13 14:23:50.945991665 +0000 UTC m=+0.155884310 container exec_died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, io.buildah.version=1.33.12, release=1, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-manila-share-container, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T15:22:36, name=rhosp17/openstack-manila-share, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1)
Oct 13 14:23:50 standalone.localdomain ceph-mon[29756]: pgmap v1479: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:23:51 standalone.localdomain podman[226394]: 2025-10-13 14:23:51.051096016 +0000 UTC m=+0.260980081 container exec_died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, description=Red Hat OpenStack Platform 17.1 cinder-backup, architecture=x86_64, release=1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, maintainer=OpenStack TripleO Team, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cinder-backup, build-date=2025-07-21T16:18:24, vendor=Red Hat, Inc., name=rhosp17/openstack-cinder-backup, tcib_managed=true, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-cinder-backup-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1)
Oct 13 14:23:51 standalone.localdomain systemd[1]: tmp-crun.lJMEua.mount: Deactivated successfully.
Oct 13 14:23:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1480: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:53 standalone.localdomain ceph-mon[29756]: pgmap v1480: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:23:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:23:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:23:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:23:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:23:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:23:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1481: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:54 standalone.localdomain haproxy[70940]: 172.17.0.100:48647 [13/Oct/2025:14:17:47.996] mysql mysql/standalone.internalapi.localdomain 1/0/366058 106659 -- 6/6/5/5/0 0/0
Oct 13 14:23:54 standalone.localdomain haproxy[70940]: 172.17.0.100:52813 [13/Oct/2025:14:07:38.867] mysql mysql/standalone.internalapi.localdomain 1/0/975189 3463 -- 5/5/4/4/0 0/0
Oct 13 14:23:54 standalone.localdomain systemd[1]: libpod-4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1.scope: Deactivated successfully.
Oct 13 14:23:54 standalone.localdomain systemd[1]: libpod-4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1.scope: Consumed 15.441s CPU time.
Oct 13 14:23:54 standalone.localdomain ceph-mon[29756]: pgmap v1481: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:54 standalone.localdomain podman[226471]: 2025-10-13 14:23:54.112833587 +0000 UTC m=+3.040663827 container died 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cinder-backup, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-cinder-backup-container, release=1, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, build-date=2025-07-21T16:18:24, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cinder-backup, summary=Red Hat OpenStack Platform 17.1 cinder-backup, vendor=Red Hat, Inc.)
Oct 13 14:23:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:23:54 standalone.localdomain systemd[1]: session-c17.scope: Deactivated successfully.
Oct 13 14:23:54 standalone.localdomain systemd[1]: tmp-crun.HoTy8o.mount: Deactivated successfully.
Oct 13 14:23:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3d6a2ce9ac4bdda31bae9808821f8518f622a69e5dcca675d77c846d63c16ec0-merged.mount: Deactivated successfully.
Oct 13 14:23:54 standalone.localdomain podman[226471]: 2025-10-13 14:23:54.240243721 +0000 UTC m=+3.168073921 container cleanup 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cinder-backup, vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, tcib_managed=true, build-date=2025-07-21T16:18:24, io.buildah.version=1.33.12, com.redhat.component=openstack-cinder-backup-container, architecture=x86_64, name=rhosp17/openstack-cinder-backup)
Oct 13 14:23:54 standalone.localdomain podman[226716]: 2025-10-13 14:23:54.249995415 +0000 UTC m=+0.128925383 container cleanup 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, description=Red Hat OpenStack Platform 17.1 cinder-backup, vendor=Red Hat, Inc., vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, summary=Red Hat OpenStack Platform 17.1 cinder-backup, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cinder-backup-container, name=rhosp17/openstack-cinder-backup, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, io.buildah.version=1.33.12, build-date=2025-07-21T16:18:24)
Oct 13 14:23:54 standalone.localdomain podman(openstack-cinder-backup-podman-0)[226754]: INFO: openstack-cinder-backup-podman-0
Oct 13 14:23:54 standalone.localdomain podman[226717]: 2025-10-13 14:23:54.256250879 +0000 UTC m=+0.132366149 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, batch=17.1_20250721.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, container_name=keystone_cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, com.redhat.component=openstack-keystone-container, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-keystone, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:23:54 standalone.localdomain podman[226717]: 2025-10-13 14:23:54.291783425 +0000 UTC m=+0.167898635 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, version=17.1.9, name=rhosp17/openstack-keystone, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, container_name=keystone_cron, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', 
'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1)
Oct 13 14:23:54 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:23:54 standalone.localdomain podman(openstack-cinder-backup-podman-0)[226777]: NOTICE: Cleaning up inactive container, openstack-cinder-backup-podman-0.
Oct 13 14:23:54 standalone.localdomain podman[226781]: 2025-10-13 14:23:54.38708399 +0000 UTC m=+0.056338534 container remove 4dd392bfafaafb4053923e65af2fd187c0e3e02838f937e20ecf87c5711e52c1 (image=cluster.common.tag/cinder-backup:pcmklatest, name=openstack-cinder-backup-podman-0, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=ed21240e516258fd858a12143f4b670f212cffef, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cinder-backup, com.redhat.component=openstack-cinder-backup-container, release=1, description=Red Hat OpenStack Platform 17.1 cinder-backup, summary=Red Hat OpenStack Platform 17.1 cinder-backup, io.k8s.description=Red Hat OpenStack Platform 17.1 cinder-backup, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cinder-backup/images/17.1.9-1, name=rhosp17/openstack-cinder-backup, build-date=2025-07-21T16:18:24, distribution-scope=public)
Oct 13 14:23:54 standalone.localdomain podman(openstack-cinder-backup-podman-0)[226799]: INFO: openstack-cinder-backup-podman-0
Oct 13 14:23:54 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for openstack-cinder-backup-podman-0 on standalone: ok
Oct 13 14:23:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1482: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:23:56 standalone.localdomain runuser[226875]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:23:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:23:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:23:56 standalone.localdomain runuser[226875]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:23:56 standalone.localdomain systemd[1]: tmp-crun.CiXmWR.mount: Deactivated successfully.
Oct 13 14:23:56 standalone.localdomain podman[226923]: 2025-10-13 14:23:56.845100367 +0000 UTC m=+0.096621607 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, io.buildah.version=1.33.12, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, build-date=2025-07-21T16:28:54, container_name=neutron_dhcp, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step4)
Oct 13 14:23:56 standalone.localdomain podman[226925]: 2025-10-13 14:23:56.878652861 +0000 UTC m=+0.130057247 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T16:03:34, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, container_name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, release=1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-neutron-sriov-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:23:56 standalone.localdomain podman[226923]: 2025-10-13 14:23:56.883190342 +0000 UTC m=+0.134711612 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, config_id=tripleo_step4, version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.openshift.tags=rhosp osp openstack 
osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, build-date=2025-07-21T16:28:54, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, batch=17.1_20250721.1, container_name=neutron_dhcp, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-dhcp-agent-container, name=rhosp17/openstack-neutron-dhcp-agent, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12)
Oct 13 14:23:56 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:23:56 standalone.localdomain podman[226925]: 2025-10-13 14:23:56.910425569 +0000 UTC m=+0.161829955 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-sriov-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, name=rhosp17/openstack-neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, build-date=2025-07-21T16:03:34, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']})
Oct 13 14:23:56 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:23:56 standalone.localdomain runuser[226991]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:23:57 standalone.localdomain ceph-mon[29756]: pgmap v1482: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:57 standalone.localdomain runuser[226991]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:23:57 standalone.localdomain runuser[227081]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:23:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1483: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:58 standalone.localdomain ceph-mon[29756]: pgmap v1483: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:58 standalone.localdomain runuser[227081]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:23:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:23:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:23:59 standalone.localdomain podman[227156]: 2025-10-13 14:23:59.815368732 +0000 UTC m=+0.074133458 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, distribution-scope=public, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step2, io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, release=1, description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.component=openstack-mariadb-container, container_name=clustercheck, architecture=x86_64, build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, io.buildah.version=1.33.12)
Oct 13 14:23:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1484: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:23:59 standalone.localdomain systemd[1]: tmp-crun.8ny9Pp.mount: Deactivated successfully.
Oct 13 14:23:59 standalone.localdomain podman[227155]: 2025-10-13 14:23:59.933156556 +0000 UTC m=+0.193009605 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step5, vcs-type=git, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:23:59 standalone.localdomain podman[227156]: 2025-10-13 14:23:59.94288841 +0000 UTC m=+0.201653146 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, vcs-type=git, build-date=2025-07-21T12:58:45, container_name=clustercheck, io.openshift.expose-services=, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, name=rhosp17/openstack-mariadb, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, release=1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-mariadb-container, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1)
Oct 13 14:23:59 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:24:00 standalone.localdomain podman[227155]: 2025-10-13 14:24:00.003603629 +0000 UTC m=+0.263456648 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, container_name=nova_compute, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:24:00 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:24:00 standalone.localdomain ceph-mon[29756]: pgmap v1484: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:24:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1485: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:03 standalone.localdomain ceph-mon[29756]: pgmap v1485: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:03 standalone.localdomain sudo[227367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:24:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:24:03 standalone.localdomain sudo[227367]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:24:03 standalone.localdomain sudo[227367]: pam_unix(sudo:session): session closed for user root
Oct 13 14:24:03 standalone.localdomain sudo[227396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:24:03 standalone.localdomain sudo[227396]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:24:03 standalone.localdomain podman[227389]: 2025-10-13 14:24:03.19230688 +0000 UTC m=+0.073393134 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-07-21T13:27:15, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, 
distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true)
Oct 13 14:24:03 standalone.localdomain podman[227389]: 2025-10-13 14:24:03.200205776 +0000 UTC m=+0.081292050 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, name=rhosp17/openstack-iscsid, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, architecture=x86_64, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3)
Oct 13 14:24:03 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:24:03 standalone.localdomain systemd[1]: tmp-crun.12M95b.mount: Deactivated successfully.
Oct 13 14:24:03 standalone.localdomain podman[227467]: 2025-10-13 14:24:03.589686094 +0000 UTC m=+0.122697729 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T12:58:45, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container)
Oct 13 14:24:03 standalone.localdomain podman[227467]: 2025-10-13 14:24:03.61978033 +0000 UTC m=+0.152791995 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, architecture=x86_64, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-mariadb-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 14:24:03 standalone.localdomain podman[227510]: 2025-10-13 14:24:03.742064325 +0000 UTC m=+0.102099718 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, com.redhat.component=openstack-haproxy-container, name=rhosp17/openstack-haproxy, build-date=2025-07-21T13:08:11, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-type=git, version=17.1.9)
Oct 13 14:24:03 standalone.localdomain podman[227510]: 2025-10-13 14:24:03.77085587 +0000 UTC m=+0.130891273 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-haproxy-container, name=rhosp17/openstack-haproxy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, build-date=2025-07-21T13:08:11, tcib_managed=true, batch=17.1_20250721.1)
Oct 13 14:24:03 standalone.localdomain sudo[227396]: pam_unix(sudo:session): session closed for user root
Oct 13 14:24:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:24:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:24:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:24:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:24:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:24:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:24:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:24:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:24:03 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 60496064-d4cd-4200-96a1-ceae931d2653 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:24:03 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 60496064-d4cd-4200-96a1-ceae931d2653 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:24:03 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 60496064-d4cd-4200-96a1-ceae931d2653 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:24:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1486: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:03 standalone.localdomain sudo[227603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:24:03 standalone.localdomain sudo[227603]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:24:03 standalone.localdomain sudo[227603]: pam_unix(sudo:session): session closed for user root
Oct 13 14:24:04 standalone.localdomain podman[227602]: 2025-10-13 14:24:04.000506115 +0000 UTC m=+0.080164274 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, vendor=Red Hat, Inc., name=rhosp17/openstack-rabbitmq, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, description=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rabbitmq-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:24:04 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:24:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:24:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:24:04 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:24:04 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:24:04 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:24:04 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:24:04 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:24:04 standalone.localdomain podman[227602]: 2025-10-13 14:24:04.032053788 +0000 UTC m=+0.111711947 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, build-date=2025-07-21T13:08:05, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-rabbitmq-container, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, name=rhosp17/openstack-rabbitmq, release=1)
Oct 13 14:24:05 standalone.localdomain ceph-mon[29756]: pgmap v1486: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1487: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:24:07 standalone.localdomain ceph-mon[29756]: pgmap v1487: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1488: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:08 standalone.localdomain ceph-mon[29756]: pgmap v1488: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:08 standalone.localdomain runuser[227745]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:24:09 standalone.localdomain runuser[227745]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:24:09 standalone.localdomain runuser[227814]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:24:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1489: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:10 standalone.localdomain runuser[227814]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:24:10 standalone.localdomain runuser[227868]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:24:10 standalone.localdomain runuser[227868]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:24:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:24:11 standalone.localdomain ceph-mon[29756]: pgmap v1489: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:24:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:24:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:24:11 standalone.localdomain podman[227944]: 2025-10-13 14:24:11.803606916 +0000 UTC m=+0.071873969 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, architecture=x86_64, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible)
Oct 13 14:24:11 standalone.localdomain podman[227944]: 2025-10-13 14:24:11.810063914 +0000 UTC m=+0.078330967 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-type=git, container_name=logrotate_crond, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52)
Oct 13 14:24:11 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:24:11 standalone.localdomain podman[227942]: 2025-10-13 14:24:11.85780221 +0000 UTC m=+0.125810735 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-heat-api-container, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, version=17.1.9, name=rhosp17/openstack-heat-api, io.buildah.version=1.33.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, build-date=2025-07-21T15:56:26, managed_by=tripleo_ansible, 
release=1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=heat_api_cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, batch=17.1_20250721.1)
Oct 13 14:24:11 standalone.localdomain podman[227942]: 2025-10-13 14:24:11.865099624 +0000 UTC m=+0.133108139 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, name=rhosp17/openstack-heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, container_name=heat_api_cron, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:24:11 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:24:11 standalone.localdomain podman[227943]: 2025-10-13 14:24:11.923544159 +0000 UTC m=+0.190935355 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-heat-api, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-container, 
io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., version=17.1.9, container_name=heat_api)
Oct 13 14:24:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1490: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:11 standalone.localdomain podman[227943]: 2025-10-13 14:24:11.952056335 +0000 UTC m=+0.219447481 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, distribution-scope=public, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:56:26, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:24:11 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:24:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:24:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:24:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:24:12 standalone.localdomain systemd[1]: tmp-crun.1FcTrO.mount: Deactivated successfully.
Oct 13 14:24:12 standalone.localdomain podman[228097]: 2025-10-13 14:24:12.811024513 +0000 UTC m=+0.076023165 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, managed_by=tripleo_ansible, container_name=heat_engine, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, com.redhat.component=openstack-heat-engine-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, release=1, build-date=2025-07-21T15:44:11, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1)
Oct 13 14:24:12 standalone.localdomain podman[228097]: 2025-10-13 14:24:12.826072176 +0000 UTC m=+0.091070758 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, com.redhat.component=openstack-heat-engine-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, build-date=2025-07-21T15:44:11, summary=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, container_name=heat_engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:24:12 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:24:12 standalone.localdomain podman[228098]: 2025-10-13 14:24:12.880885129 +0000 UTC m=+0.141315311 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-memcached, summary=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, version=17.1.9, description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, 
release=1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, batch=17.1_20250721.1, container_name=memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container)
Oct 13 14:24:12 standalone.localdomain podman[228096]: 2025-10-13 14:24:12.923667143 +0000 UTC m=+0.190715599 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, container_name=heat_api_cfn, distribution-scope=public, release=1, 
build-date=2025-07-21T14:49:55, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-cfn-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, vcs-type=git, name=rhosp17/openstack-heat-api-cfn)
Oct 13 14:24:12 standalone.localdomain podman[228096]: 2025-10-13 14:24:12.945943627 +0000 UTC m=+0.212992093 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=heat_api_cfn, tcib_managed=true, name=rhosp17/openstack-heat-api-cfn, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-cfn-container, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
heat-api-cfn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T14:49:55, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, release=1)
Oct 13 14:24:12 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:24:12 standalone.localdomain podman[228098]: 2025-10-13 14:24:12.999173252 +0000 UTC m=+0.259603464 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:43, description=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, name=rhosp17/openstack-memcached, summary=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=memcached, config_id=tripleo_step1, 
maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-memcached-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, architecture=x86_64, version=17.1.9)
Oct 13 14:24:13 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:24:13 standalone.localdomain ceph-mon[29756]: pgmap v1490: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:13 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:24:13 standalone.localdomain recover_tripleo_nova_virtqemud[228264]: 93291
Oct 13 14:24:13 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:24:13 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:24:13 standalone.localdomain systemd[1]: tmp-crun.0G1M84.mount: Deactivated successfully.
Oct 13 14:24:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1491: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:14 standalone.localdomain haproxy[70940]: 172.17.0.100:42445 [13/Oct/2025:14:08:29.044] mysql mysql/standalone.internalapi.localdomain 1/0/945204 491161 -- 4/4/3/3/0 0/0
Oct 13 14:24:14 standalone.localdomain systemd[1]: libpod-cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84.scope: Deactivated successfully.
Oct 13 14:24:14 standalone.localdomain systemd[1]: libpod-cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84.scope: Consumed 35.100s CPU time.
Oct 13 14:24:14 standalone.localdomain podman[226456]: 2025-10-13 14:24:14.281150551 +0000 UTC m=+23.311715797 container died cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 manila-share, summary=Red Hat OpenStack Platform 17.1 manila-share, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, name=rhosp17/openstack-manila-share, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T15:22:36, com.redhat.component=openstack-manila-share-container, version=17.1.9, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, distribution-scope=public)
Oct 13 14:24:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-9fab759af8d1b0f9e496bf7fab8e111ee85c632828a3c4090f86bd90b5fee65d-merged.mount: Deactivated successfully.
Oct 13 14:24:14 standalone.localdomain podman[226456]: 2025-10-13 14:24:14.320924982 +0000 UTC m=+23.351490218 container cleanup cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, version=17.1.9, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-manila-share, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, build-date=2025-07-21T15:22:36, description=Red Hat OpenStack Platform 17.1 manila-share, summary=Red Hat OpenStack Platform 17.1 manila-share, com.redhat.component=openstack-manila-share-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, distribution-scope=public, io.openshift.expose-services=)
Oct 13 14:24:14 standalone.localdomain podman(openstack-manila-share-podman-0)[228325]: INFO: openstack-manila-share-podman-0
Oct 13 14:24:14 standalone.localdomain podman[228306]: 2025-10-13 14:24:14.341752572 +0000 UTC m=+0.058175157 container cleanup cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, io.buildah.version=1.33.12, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74, build-date=2025-07-21T15:22:36, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, version=17.1.9, description=Red Hat OpenStack Platform 17.1 manila-share, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-manila-share-container, summary=Red Hat OpenStack Platform 17.1 manila-share, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, name=rhosp17/openstack-manila-share)
Oct 13 14:24:14 standalone.localdomain podman(openstack-manila-share-podman-0)[228351]: NOTICE: Cleaning up inactive container, openstack-manila-share-podman-0.
Oct 13 14:24:14 standalone.localdomain podman[228355]: 2025-10-13 14:24:14.495796763 +0000 UTC m=+0.076423819 container remove cc29ed68483392cb2cdc01cab113ca642e612b6aa6231b8b1bcd77e6152b9a84 (image=cluster.common.tag/manila-share:pcmklatest, name=openstack-manila-share-podman-0, release=1, version=17.1.9, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 manila-share, name=rhosp17/openstack-manila-share, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-manila-share/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 manila-share, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-manila-share-container, description=Red Hat OpenStack Platform 17.1 manila-share, summary=Red Hat OpenStack Platform 17.1 manila-share, vcs-type=git, build-date=2025-07-21T15:22:36, io.buildah.version=1.33.12, vcs-ref=f80893301ee085e56ff39a8511f93439ec603a74)
Oct 13 14:24:14 standalone.localdomain podman(openstack-manila-share-podman-0)[228372]: INFO: openstack-manila-share-podman-0
Oct 13 14:24:14 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for openstack-manila-share-podman-0 on standalone: ok
Oct 13 14:24:14 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 64 (Complete=6, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-64.bz2): Complete
Oct 13 14:24:14 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:24:15 standalone.localdomain ceph-mon[29756]: pgmap v1491: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1492: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:24:17 standalone.localdomain ceph-mon[29756]: pgmap v1492: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:24:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:24:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:24:17 standalone.localdomain podman[228453]: 2025-10-13 14:24:17.836624377 +0000 UTC m=+0.087348743 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git)
Oct 13 14:24:17 standalone.localdomain podman[228452]: 2025-10-13 14:24:17.885736296 +0000 UTC m=+0.138076961 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=glance_api_cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']})
Oct 13 14:24:17 standalone.localdomain podman[228452]: 2025-10-13 14:24:17.890936596 +0000 UTC m=+0.143277261 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, vendor=Red Hat, Inc., container_name=glance_api_cron, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, 
build-date=2025-07-21T13:58:20, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, release=1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9)
Oct 13 14:24:17 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:24:17 standalone.localdomain podman[228453]: 2025-10-13 14:24:17.910359742 +0000 UTC m=+0.161084108 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Oct 13 14:24:17 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:24:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1493: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:17 standalone.localdomain podman[228454]: 2025-10-13 14:24:17.992331539 +0000 UTC m=+0.237373681 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, release=1, tcib_managed=true, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4)
Oct 13 14:24:18 standalone.localdomain podman[228454]: 2025-10-13 14:24:18.017805311 +0000 UTC m=+0.262847463 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, name=rhosp17/openstack-ovn-controller, release=1)
Oct 13 14:24:18 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:24:18 standalone.localdomain ceph-mon[29756]: pgmap v1493: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:24:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:24:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:24:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:24:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:24:18 standalone.localdomain podman[228526]: 2025-10-13 14:24:18.82334497 +0000 UTC m=+0.089565361 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.buildah.version=1.33.12, container_name=swift_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-swift-proxy-server-container, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat 
OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, distribution-scope=public)
Oct 13 14:24:18 standalone.localdomain podman[228538]: 2025-10-13 14:24:18.873641534 +0000 UTC m=+0.129178028 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, build-date=2025-07-21T16:11:22, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, version=17.1.9, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, release=1, container_name=swift_account_server, name=rhosp17/openstack-swift-account, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public)
Oct 13 14:24:18 standalone.localdomain podman[228525]: 2025-10-13 14:24:18.849000018 +0000 UTC m=+0.120016027 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, build-date=2025-07-21T14:56:28, managed_by=tripleo_ansible, release=1, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, 
container_name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d)
Oct 13 14:24:18 standalone.localdomain podman[228527]: 2025-10-13 14:24:18.907529825 +0000 UTC m=+0.172871750 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:24:18 standalone.localdomain podman[228533]: 2025-10-13 14:24:18.803634574 +0000 UTC m=+0.065184163 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:54:32, io.openshift.expose-services=, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container, description=Red Hat OpenStack Platform 17.1 
swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 14:24:19 standalone.localdomain podman[228533]: 2025-10-13 14:24:19.00374681 +0000 UTC m=+0.265296389 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, name=rhosp17/openstack-swift-container, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12, version=17.1.9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, container_name=swift_container_server, vendor=Red 
Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:24:19 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:24:19 standalone.localdomain podman[228526]: 2025-10-13 14:24:19.020746972 +0000 UTC m=+0.286967373 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-proxy-server)
Oct 13 14:24:19 standalone.localdomain podman[228538]: 2025-10-13 14:24:19.032928046 +0000 UTC m=+0.288464640 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, container_name=swift_account_server, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, release=1, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-swift-account-container, 
tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, name=rhosp17/openstack-swift-account)
Oct 13 14:24:19 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:24:19 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:24:19 standalone.localdomain podman[228525]: 2025-10-13 14:24:19.071581443 +0000 UTC m=+0.342597472 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, com.redhat.component=openstack-swift-object-container, vcs-type=git, 
config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, release=1)
Oct 13 14:24:19 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:24:19 standalone.localdomain podman[228527]: 2025-10-13 14:24:19.110808437 +0000 UTC m=+0.376150382 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, distribution-scope=public, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-glance-api, release=1)
Oct 13 14:24:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:24:19 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:24:19 standalone.localdomain podman[228657]: 2025-10-13 14:24:19.224616053 +0000 UTC m=+0.082401902 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=)
Oct 13 14:24:19 standalone.localdomain podman[228657]: 2025-10-13 14:24:19.579965026 +0000 UTC m=+0.437750785 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, container_name=nova_migration_target, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 13 14:24:19 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:24:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1494: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:20 standalone.localdomain ceph-mon[29756]: pgmap v1494: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:24:21 standalone.localdomain runuser[228701]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:24:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1495: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:22 standalone.localdomain runuser[228701]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:24:22 standalone.localdomain runuser[228852]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:24:22 standalone.localdomain account-server[114571]: 172.20.0.100 - - [13/Oct/2025:14:24:22 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txb75ed3feb2344a91987d4-0068ed0b96" "proxy-server 2" 0.0004 "-" 23 -
Oct 13 14:24:22 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txb75ed3feb2344a91987d4-0068ed0b96)
Oct 13 14:24:22 standalone.localdomain runuser[228852]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:24:22 standalone.localdomain runuser[228925]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:24:23 standalone.localdomain ceph-mon[29756]: pgmap v1495: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:24:23
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['images', 'vms', 'backups', 'manila_data', 'volumes', '.mgr', 'manila_metadata']
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:24:23 standalone.localdomain runuser[228925]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:24:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1496: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:24:24 standalone.localdomain podman[229113]: 2025-10-13 14:24:24.816560032 +0000 UTC m=+0.081317979 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, 
io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, container_name=keystone_cron, version=17.1.9, build-date=2025-07-21T13:27:18, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vendor=Red Hat, Inc.)
Oct 13 14:24:24 standalone.localdomain podman[229113]: 2025-10-13 14:24:24.851142083 +0000 UTC m=+0.115900040 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, release=1, name=rhosp17/openstack-keystone, build-date=2025-07-21T13:27:18, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, container_name=keystone_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, 
com.redhat.component=openstack-keystone-container, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:24:24 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:24:25 standalone.localdomain ceph-mon[29756]: pgmap v1496: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1497: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:24:27 standalone.localdomain ceph-mon[29756]: pgmap v1497: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:24:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:24:27 standalone.localdomain podman[229202]: 2025-10-13 14:24:27.163393144 +0000 UTC m=+0.056942451 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, container_name=neutron_dhcp, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T16:28:54, com.redhat.component=openstack-neutron-dhcp-agent-container)
Oct 13 14:24:27 standalone.localdomain podman[229202]: 2025-10-13 14:24:27.206835388 +0000 UTC m=+0.100384705 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat 
OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, distribution-scope=public, build-date=2025-07-21T16:28:54, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, container_name=neutron_dhcp, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.component=openstack-neutron-dhcp-agent-container, release=1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-neutron-dhcp-agent)
Oct 13 14:24:27 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:24:27 standalone.localdomain systemd[1]: tmp-crun.V9Gfrq.mount: Deactivated successfully.
Oct 13 14:24:27 standalone.localdomain podman[229203]: 2025-10-13 14:24:27.244608128 +0000 UTC m=+0.134531612 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, distribution-scope=public, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-sriov-agent, tcib_managed=true, vcs-type=git, version=17.1.9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.openshift.expose-services=, release=1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34)
Oct 13 14:24:27 standalone.localdomain podman[229203]: 2025-10-13 14:24:27.287125604 +0000 UTC m=+0.177049138 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:03:34, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent, distribution-scope=public, version=17.1.9, release=1)
Oct 13 14:24:27 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:24:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1498: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:24:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:24:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:24:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:24:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:24:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:24:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:24:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:24:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:24:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:24:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:24:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:24:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:24:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:24:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:24:29 standalone.localdomain ceph-mon[29756]: pgmap v1498: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1499: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:30 standalone.localdomain ceph-mon[29756]: pgmap v1499: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:24:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:24:30 standalone.localdomain podman[229283]: 2025-10-13 14:24:30.803551023 +0000 UTC m=+0.072881880 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, release=1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_id=tripleo_step5)
Oct 13 14:24:30 standalone.localdomain podman[229283]: 2025-10-13 14:24:30.826738464 +0000 UTC m=+0.096069321 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=nova_compute, version=17.1.9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:24:30 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:24:30 standalone.localdomain podman[229284]: 2025-10-13 14:24:30.840768585 +0000 UTC m=+0.109060790 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, container_name=clustercheck, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, name=rhosp17/openstack-mariadb, io.buildah.version=1.33.12, com.redhat.component=openstack-mariadb-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64)
Oct 13 14:24:30 standalone.localdomain podman[229284]: 2025-10-13 14:24:30.939855989 +0000 UTC m=+0.208148204 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, container_name=clustercheck, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, com.redhat.component=openstack-mariadb-container)
Oct 13 14:24:30 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:24:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:24:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1500: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:33 standalone.localdomain ceph-mon[29756]: pgmap v1500: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:24:33 standalone.localdomain systemd[1]: tmp-crun.8y1CXP.mount: Deactivated successfully.
Oct 13 14:24:33 standalone.localdomain podman[229527]: 2025-10-13 14:24:33.803667416 +0000 UTC m=+0.072117696 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1)
Oct 13 14:24:33 standalone.localdomain podman[229527]: 2025-10-13 14:24:33.837025671 +0000 UTC m=+0.105475941 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.9, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:24:33 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:24:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1501: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:34 standalone.localdomain runuser[229600]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:24:34 standalone.localdomain runuser[229600]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:24:34 standalone.localdomain runuser[229669]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:24:35 standalone.localdomain ceph-mon[29756]: pgmap v1501: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:35 standalone.localdomain runuser[229669]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:24:35 standalone.localdomain runuser[229776]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:24:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1502: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:24:36 standalone.localdomain runuser[229776]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:24:37 standalone.localdomain ceph-mon[29756]: pgmap v1502: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1503: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:38 standalone.localdomain ceph-mon[29756]: pgmap v1503: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1504: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:40 standalone.localdomain ceph-mon[29756]: pgmap v1504: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:24:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1505: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:42 standalone.localdomain sshd[229890]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:24:42 standalone.localdomain sshd[229890]: Accepted publickey for root from 192.168.122.11 port 39626 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:24:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:24:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:24:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:24:42 standalone.localdomain systemd-logind[45629]: New session 246 of user root.
Oct 13 14:24:42 standalone.localdomain systemd[1]: Started Session 246 of User root.
Oct 13 14:24:42 standalone.localdomain sshd[229890]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:42 standalone.localdomain podman[229893]: 2025-10-13 14:24:42.378288344 +0000 UTC m=+0.087174808 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, version=17.1.9, container_name=heat_api_cron)
Oct 13 14:24:42 standalone.localdomain podman[229893]: 2025-10-13 14:24:42.41168065 +0000 UTC m=+0.120567104 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, com.redhat.component=openstack-heat-api-container, build-date=2025-07-21T15:56:26)
Oct 13 14:24:42 standalone.localdomain sudo[230031]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ovn_cluster_northd.service
Oct 13 14:24:42 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:24:42 standalone.localdomain podman[229894]: 2025-10-13 14:24:42.428816896 +0000 UTC m=+0.137525064 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, container_name=heat_api, vcs-type=git, build-date=2025-07-21T15:56:26, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, tcib_managed=true, release=1, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:24:42 standalone.localdomain sudo[230031]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:42 standalone.localdomain sudo[230031]: pam_unix(sudo:session): session closed for user root
Oct 13 14:24:42 standalone.localdomain sshd[229933]: Received disconnect from 192.168.122.11 port 39626:11: disconnected by user
Oct 13 14:24:42 standalone.localdomain sshd[229933]: Disconnected from user root 192.168.122.11 port 39626
Oct 13 14:24:42 standalone.localdomain sshd[229890]: pam_unix(sshd:session): session closed for user root
Oct 13 14:24:42 standalone.localdomain systemd[1]: session-246.scope: Deactivated successfully.
Oct 13 14:24:42 standalone.localdomain systemd-logind[45629]: Session 246 logged out. Waiting for processes to exit.
Oct 13 14:24:42 standalone.localdomain systemd-logind[45629]: Removed session 246.
Oct 13 14:24:42 standalone.localdomain podman[229894]: 2025-10-13 14:24:42.4621659 +0000 UTC m=+0.170874108 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-api, com.redhat.component=openstack-heat-api-container, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, 
distribution-scope=public, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:24:42 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:24:42 standalone.localdomain podman[229895]: 2025-10-13 14:24:42.478813532 +0000 UTC m=+0.187863871 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, 
Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron)
Oct 13 14:24:42 standalone.localdomain podman[229895]: 2025-10-13 14:24:42.493866724 +0000 UTC m=+0.202917093 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, config_id=tripleo_step4, release=1, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, 
build-date=2025-07-21T13:07:52, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron)
Oct 13 14:24:42 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:24:42 standalone.localdomain sshd[230054]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:24:42 standalone.localdomain sshd[230054]: Accepted publickey for root from 192.168.122.11 port 39642 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:24:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:24:42 standalone.localdomain systemd-logind[45629]: New session 247 of user root.
Oct 13 14:24:42 standalone.localdomain systemd[1]: Started Session 247 of User root.
Oct 13 14:24:42 standalone.localdomain sshd[230054]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:24:43 standalone.localdomain podman[230057]: 2025-10-13 14:24:43.005055873 +0000 UTC m=+0.088615233 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, name=rhosp17/openstack-heat-engine, vendor=Red Hat, Inc., container_name=heat_engine, tcib_managed=true, build-date=2025-07-21T15:44:11, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-heat-engine-container, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, release=1, io.buildah.version=1.33.12, version=17.1.9)
Oct 13 14:24:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:24:43 standalone.localdomain sudo[230074]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/nft add rule inet filter INPUT ip saddr 172.17.1.0/24 tcp dport 6641 ct state new counter accept
Oct 13 14:24:43 standalone.localdomain sudo[230074]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:43 standalone.localdomain ceph-mon[29756]: pgmap v1505: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:43 standalone.localdomain podman[230057]: 2025-10-13 14:24:43.060887696 +0000 UTC m=+0.144447076 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, build-date=2025-07-21T15:44:11, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, name=rhosp17/openstack-heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, container_name=heat_engine, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-engine, release=1, version=17.1.9, batch=17.1_20250721.1)
Oct 13 14:24:43 standalone.localdomain sudo[230074]: pam_unix(sudo:session): session closed for user root
Oct 13 14:24:43 standalone.localdomain sshd[230064]: Received disconnect from 192.168.122.11 port 39642:11: disconnected by user
Oct 13 14:24:43 standalone.localdomain sshd[230064]: Disconnected from user root 192.168.122.11 port 39642
Oct 13 14:24:43 standalone.localdomain sshd[230054]: pam_unix(sshd:session): session closed for user root
Oct 13 14:24:43 standalone.localdomain systemd[1]: session-247.scope: Deactivated successfully.
Oct 13 14:24:43 standalone.localdomain systemd-logind[45629]: Session 247 logged out. Waiting for processes to exit.
Oct 13 14:24:43 standalone.localdomain systemd-logind[45629]: Removed session 247.
Oct 13 14:24:43 standalone.localdomain sshd[230168]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:24:43 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:24:43 standalone.localdomain podman[230092]: 2025-10-13 14:24:43.14761713 +0000 UTC m=+0.131987424 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, config_id=tripleo_step4, name=rhosp17/openstack-heat-api-cfn, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, com.redhat.component=openstack-heat-api-cfn-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cfn, io.openshift.tags=rhosp osp openstack osp-17.1, 
vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, maintainer=OpenStack TripleO Team)
Oct 13 14:24:43 standalone.localdomain podman[230138]: 2025-10-13 14:24:43.198607266 +0000 UTC m=+0.142298961 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-memcached, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-type=git, container_name=memcached, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:43, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, summary=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack 
Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, config_id=tripleo_step1, distribution-scope=public, tcib_managed=true)
Oct 13 14:24:43 standalone.localdomain sshd[230168]: Accepted publickey for root from 192.168.122.11 port 39654 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:24:43 standalone.localdomain podman[230092]: 2025-10-13 14:24:43.213803673 +0000 UTC m=+0.198173997 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, container_name=heat_api_cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, release=1, build-date=2025-07-21T14:49:55, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true)
Oct 13 14:24:43 standalone.localdomain systemd-logind[45629]: New session 248 of user root.
Oct 13 14:24:43 standalone.localdomain systemd[1]: Started Session 248 of User root.
Oct 13 14:24:43 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:24:43 standalone.localdomain sshd[230168]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:43 standalone.localdomain podman[230138]: 2025-10-13 14:24:43.275977362 +0000 UTC m=+0.219669047 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, com.redhat.component=openstack-memcached-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, config_id=tripleo_step1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., container_name=memcached, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, release=1, build-date=2025-07-21T12:58:43, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', 
'/var/log/containers/memcached:/var/log/memcached:rw']}, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-type=git, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public)
Oct 13 14:24:43 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:24:43 standalone.localdomain sudo[230200]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/nft add rule inet filter INPUT ip saddr 172.17.1.0/24 tcp dport 6642 ct state new counter accept
Oct 13 14:24:43 standalone.localdomain sudo[230200]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:43 standalone.localdomain sudo[230200]: pam_unix(sudo:session): session closed for user root
Oct 13 14:24:43 standalone.localdomain sshd[230197]: Received disconnect from 192.168.122.11 port 39654:11: disconnected by user
Oct 13 14:24:43 standalone.localdomain sshd[230197]: Disconnected from user root 192.168.122.11 port 39654
Oct 13 14:24:43 standalone.localdomain sshd[230168]: pam_unix(sshd:session): session closed for user root
Oct 13 14:24:43 standalone.localdomain systemd[1]: session-248.scope: Deactivated successfully.
Oct 13 14:24:43 standalone.localdomain systemd-logind[45629]: Session 248 logged out. Waiting for processes to exit.
Oct 13 14:24:43 standalone.localdomain systemd-logind[45629]: Removed session 248.
Oct 13 14:24:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1506: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:44 standalone.localdomain ceph-mon[29756]: pgmap v1506: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1507: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:24:47 standalone.localdomain runuser[230380]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:24:47 standalone.localdomain ceph-mon[29756]: pgmap v1507: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:47 standalone.localdomain runuser[230380]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:24:47 standalone.localdomain runuser[230449]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:24:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1508: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:48 standalone.localdomain ceph-mon[29756]: pgmap v1508: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:48 standalone.localdomain runuser[230449]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:24:48 standalone.localdomain runuser[230511]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:24:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:24:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:24:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:24:48 standalone.localdomain systemd[1]: tmp-crun.xZs1AX.mount: Deactivated successfully.
Oct 13 14:24:48 standalone.localdomain podman[230559]: 2025-10-13 14:24:48.808876913 +0000 UTC m=+0.074986963 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team)
Oct 13 14:24:48 standalone.localdomain podman[230560]: 2025-10-13 14:24:48.854397182 +0000 UTC m=+0.115740146 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:24:48 standalone.localdomain podman[230559]: 2025-10-13 14:24:48.860885431 +0000 UTC m=+0.126995481 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, container_name=ovn_metadata_agent, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64)
Oct 13 14:24:48 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:24:48 standalone.localdomain podman[230558]: 2025-10-13 14:24:48.912170945 +0000 UTC m=+0.178234284 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, io.buildah.version=1.33.12, container_name=glance_api_cron, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, vcs-type=git, config_id=tripleo_step4)
Oct 13 14:24:48 standalone.localdomain podman[230558]: 2025-10-13 14:24:48.920436969 +0000 UTC m=+0.186500288 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, 
summary=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, container_name=glance_api_cron, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64)
Oct 13 14:24:48 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:24:48 standalone.localdomain podman[230560]: 2025-10-13 14:24:48.944263512 +0000 UTC m=+0.205606476 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, version=17.1.9, container_name=ovn_controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245)
Oct 13 14:24:48 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:24:49 standalone.localdomain runuser[230511]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:24:49 standalone.localdomain sshd[230640]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:24:49 standalone.localdomain sshd[230640]: Accepted publickey for root from 192.168.122.11 port 39664 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:24:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:24:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:24:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:24:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:24:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:24:49 standalone.localdomain systemd-logind[45629]: New session 249 of user root.
Oct 13 14:24:49 standalone.localdomain systemd[1]: Started Session 249 of User root.
Oct 13 14:24:49 standalone.localdomain sshd[230640]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:49 standalone.localdomain podman[230652]: 2025-10-13 14:24:49.618980351 +0000 UTC m=+0.087503668 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=swift_container_server, distribution-scope=public, release=1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-container-container, version=17.1.9, name=rhosp17/openstack-swift-container, vcs-type=git, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:24:49 standalone.localdomain sudo[230721]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ovn_cluster_north_db_server.service
Oct 13 14:24:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:24:49 standalone.localdomain sudo[230721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:49 standalone.localdomain sudo[230721]: pam_unix(sudo:session): session closed for user root
Oct 13 14:24:49 standalone.localdomain sudo[230742]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_ovn_cluster_north_db_server.service
Oct 13 14:24:49 standalone.localdomain sudo[230742]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:49 standalone.localdomain systemd[1]: Stopping ovn_cluster_north_db_server container...
Oct 13 14:24:49 standalone.localdomain podman[230645]: 2025-10-13 14:24:49.70555382 +0000 UTC m=+0.178757140 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=swift_proxy, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, 
description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Oct 13 14:24:49 standalone.localdomain podman[230644]: 2025-10-13 14:24:49.781331107 +0000 UTC m=+0.262507012 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_object_server, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, io.openshift.expose-services=, build-date=2025-07-21T14:56:28, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, version=17.1.9)
Oct 13 14:24:49 standalone.localdomain podman[230646]: 2025-10-13 14:24:49.761616862 +0000 UTC m=+0.234986117 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, release=1, config_id=tripleo_step4, container_name=glance_api_internal, batch=17.1_20250721.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, distribution-scope=public)
Oct 13 14:24:49 standalone.localdomain systemd[1]: tmp-crun.rsmqgb.mount: Deactivated successfully.
Oct 13 14:24:49 standalone.localdomain podman[230652]: 2025-10-13 14:24:49.855831165 +0000 UTC m=+0.324354452 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, distribution-scope=public, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack 
osp-17.1, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:24:49 standalone.localdomain systemd[1]: libpod-ab93f5a8aebab950d7042eb12f896fc198b839db33e9cc399846b7838f5955fd.scope: Deactivated successfully.
Oct 13 14:24:49 standalone.localdomain systemd[1]: libpod-ab93f5a8aebab950d7042eb12f896fc198b839db33e9cc399846b7838f5955fd.scope: Consumed 1.237s CPU time.
Oct 13 14:24:49 standalone.localdomain podman[230755]: 2025-10-13 14:24:49.865643927 +0000 UTC m=+0.103340994 container died ab93f5a8aebab950d7042eb12f896fc198b839db33e9cc399846b7838f5955fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1, name=ovn_cluster_north_db_server, build-date=2025-07-21T13:58:29, container_name=ovn_cluster_north_db_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-nb-db-server/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, tcib_managed=true, com.redhat.component=openstack-ovn-nb-db-server-container, description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=32be821f5e6e2eafd9d374c1cb4e3391f317d8a0, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-ovn-nb-db-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_north_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, architecture=x86_64, config_id=ovn_cluster_north_db_server, managed_by=tripleo_ansible)
Oct 13 14:24:49 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:24:49 standalone.localdomain podman[230725]: 2025-10-13 14:24:49.909614577 +0000 UTC m=+0.229182189 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, version=17.1.9, vcs-type=git, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1)
Oct 13 14:24:49 standalone.localdomain podman[230663]: 2025-10-13 14:24:49.833275463 +0000 UTC m=+0.296188818 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-swift-account-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account)
Oct 13 14:24:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1509: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:49 standalone.localdomain podman[230755]: 2025-10-13 14:24:49.955021631 +0000 UTC m=+0.192718668 container cleanup ab93f5a8aebab950d7042eb12f896fc198b839db33e9cc399846b7838f5955fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1, name=ovn_cluster_north_db_server, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_north_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, config_id=ovn_cluster_north_db_server, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ovn-nb-db-server-container, build-date=2025-07-21T13:58:29, container_name=ovn_cluster_north_db_server, vcs-ref=32be821f5e6e2eafd9d374c1cb4e3391f317d8a0, summary=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-nb-db-server, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-nb-db-server/images/17.1.9-1)
Oct 13 14:24:49 standalone.localdomain podman[230755]: ovn_cluster_north_db_server
Oct 13 14:24:49 standalone.localdomain podman[230646]: 2025-10-13 14:24:49.978293637 +0000 UTC m=+0.451662872 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_internal, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, build-date=2025-07-21T13:58:20, vcs-type=git, com.redhat.component=openstack-glance-api-container, batch=17.1_20250721.1, name=rhosp17/openstack-glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, release=1, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true)
Oct 13 14:24:49 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:24:49 standalone.localdomain podman[230797]: 2025-10-13 14:24:49.993112841 +0000 UTC m=+0.115544989 container cleanup ab93f5a8aebab950d7042eb12f896fc198b839db33e9cc399846b7838f5955fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1, name=ovn_cluster_north_db_server, com.redhat.component=openstack-ovn-nb-db-server-container, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_cluster_north_db_server, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:29, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-nb-db-server/images/17.1.9-1, config_id=ovn_cluster_north_db_server, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=32be821f5e6e2eafd9d374c1cb4e3391f317d8a0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_north_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, name=rhosp17/openstack-ovn-nb-db-server, summary=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public)
Oct 13 14:24:50 standalone.localdomain systemd[1]: libpod-conmon-ab93f5a8aebab950d7042eb12f896fc198b839db33e9cc399846b7838f5955fd.scope: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain podman[230644]: 2025-10-13 14:24:50.025873068 +0000 UTC m=+0.507048983 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, container_name=swift_object_server, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T14:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true)
Oct 13 14:24:50 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain podman[230837]: 2025-10-13 14:24:50.168121146 +0000 UTC m=+0.144853489 container cleanup ab93f5a8aebab950d7042eb12f896fc198b839db33e9cc399846b7838f5955fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1, name=ovn_cluster_north_db_server, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-nb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_north_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, release=1, vcs-ref=32be821f5e6e2eafd9d374c1cb4e3391f317d8a0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ovn-nb-db-server-container, description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-nb-db-server, io.buildah.version=1.33.12, container_name=ovn_cluster_north_db_server, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-nb-db-server/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-nb-db-server, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:29, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=ovn_cluster_north_db_server, summary=Red Hat OpenStack Platform 17.1 ovn-nb-db-server)
Oct 13 14:24:50 standalone.localdomain podman[230837]: ovn_cluster_north_db_server
Oct 13 14:24:50 standalone.localdomain podman[230663]: 2025-10-13 14:24:50.16955679 +0000 UTC m=+0.632470165 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, version=17.1.9, architecture=x86_64)
Oct 13 14:24:50 standalone.localdomain systemd[1]: tripleo_ovn_cluster_north_db_server.service: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain systemd[1]: Stopped ovn_cluster_north_db_server container.
Oct 13 14:24:50 standalone.localdomain sudo[230742]: pam_unix(sudo:session): session closed for user root
Oct 13 14:24:50 standalone.localdomain sshd[230690]: Received disconnect from 192.168.122.11 port 39664:11: disconnected by user
Oct 13 14:24:50 standalone.localdomain sshd[230690]: Disconnected from user root 192.168.122.11 port 39664
Oct 13 14:24:50 standalone.localdomain sshd[230640]: pam_unix(sshd:session): session closed for user root
Oct 13 14:24:50 standalone.localdomain sshd[230860]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:24:50 standalone.localdomain systemd[1]: session-249.scope: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain systemd-logind[45629]: Session 249 logged out. Waiting for processes to exit.
Oct 13 14:24:50 standalone.localdomain systemd-logind[45629]: Removed session 249.
Oct 13 14:24:50 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain podman[230645]: 2025-10-13 14:24:50.241881461 +0000 UTC m=+0.715084811 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, com.redhat.component=openstack-swift-proxy-server-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-proxy-server, container_name=swift_proxy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:24:50 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain podman[230725]: 2025-10-13 14:24:50.280854238 +0000 UTC m=+0.600421850 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:24:50 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain sshd[230860]: Accepted publickey for root from 192.168.122.11 port 48556 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:24:50 standalone.localdomain systemd-logind[45629]: New session 250 of user root.
Oct 13 14:24:50 standalone.localdomain systemd[1]: Started Session 250 of User root.
Oct 13 14:24:50 standalone.localdomain sshd[230860]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:50 standalone.localdomain sudo[230881]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl is-active tripleo_ovn_cluster_south_db_server.service
Oct 13 14:24:50 standalone.localdomain sudo[230881]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:50 standalone.localdomain sudo[230881]: pam_unix(sudo:session): session closed for user root
Oct 13 14:24:50 standalone.localdomain sudo[230884]:     root : PWD=/root ; USER=root ; COMMAND=/bin/systemctl stop tripleo_ovn_cluster_south_db_server.service
Oct 13 14:24:50 standalone.localdomain sudo[230884]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:50 standalone.localdomain systemd[1]: Stopping ovn_cluster_south_db_server container...
Oct 13 14:24:50 standalone.localdomain systemd[1]: libpod-7272b751557c8cdc0da43a14f05ee355557fd0fa740aa3b5d15804858f1764dc.scope: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain systemd[1]: libpod-7272b751557c8cdc0da43a14f05ee355557fd0fa740aa3b5d15804858f1764dc.scope: Consumed 1.163s CPU time.
Oct 13 14:24:50 standalone.localdomain podman[230887]: 2025-10-13 14:24:50.625983917 +0000 UTC m=+0.067936227 container died 7272b751557c8cdc0da43a14f05ee355557fd0fa740aa3b5d15804858f1764dc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1, name=ovn_cluster_south_db_server, description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, summary=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, vcs-ref=adefe2c1281b2ff32c3fa30eaf67641e1f998cc3, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_south_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, config_id=ovn_cluster_south_db_server, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-sb-db-server/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-ovn-sb-db-server, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-sb-db-server-container, io.openshift.expose-services=, container_name=ovn_cluster_south_db_server, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T13:30:03, vcs-type=git, managed_by=tripleo_ansible)
Oct 13 14:24:50 standalone.localdomain podman[230887]: 2025-10-13 14:24:50.73709937 +0000 UTC m=+0.179051680 container cleanup 7272b751557c8cdc0da43a14f05ee355557fd0fa740aa3b5d15804858f1764dc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1, name=ovn_cluster_south_db_server, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, container_name=ovn_cluster_south_db_server, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, summary=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_south_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, vcs-ref=adefe2c1281b2ff32c3fa30eaf67641e1f998cc3, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-sb-db-server/images/17.1.9-1, build-date=2025-07-21T13:30:03, com.redhat.component=openstack-ovn-sb-db-server-container, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, distribution-scope=public, config_id=ovn_cluster_south_db_server, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-sb-db-server, io.buildah.version=1.33.12, version=17.1.9)
Oct 13 14:24:50 standalone.localdomain podman[230887]: ovn_cluster_south_db_server
Oct 13 14:24:50 standalone.localdomain podman[230901]: 2025-10-13 14:24:50.778890623 +0000 UTC m=+0.146212612 container cleanup 7272b751557c8cdc0da43a14f05ee355557fd0fa740aa3b5d15804858f1764dc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1, name=ovn_cluster_south_db_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_south_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, config_id=ovn_cluster_south_db_server, container_name=ovn_cluster_south_db_server, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, batch=17.1_20250721.1, build-date=2025-07-21T13:30:03, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-sb-db-server/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, summary=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, vcs-ref=adefe2c1281b2ff32c3fa30eaf67641e1f998cc3, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-sb-db-server, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ovn-sb-db-server-container, managed_by=tripleo_ansible, distribution-scope=public)
Oct 13 14:24:50 standalone.localdomain systemd[1]: tmp-crun.OTcbPv.mount: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-99acd49219591d82b934db38de143aa2ac465a793b0260c168f14dd18945007e-merged.mount: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7272b751557c8cdc0da43a14f05ee355557fd0fa740aa3b5d15804858f1764dc-userdata-shm.mount: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-07eab6a202bcefe1fc265b9ba1883e76a885192cf5ee2f262b48971b016e8214-merged.mount: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab93f5a8aebab950d7042eb12f896fc198b839db33e9cc399846b7838f5955fd-userdata-shm.mount: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain systemd[1]: libpod-conmon-7272b751557c8cdc0da43a14f05ee355557fd0fa740aa3b5d15804858f1764dc.scope: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain podman[230915]: 2025-10-13 14:24:50.86544801 +0000 UTC m=+0.046106336 container cleanup 7272b751557c8cdc0da43a14f05ee355557fd0fa740aa3b5d15804858f1764dc (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1, name=ovn_cluster_south_db_server, batch=17.1_20250721.1, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=adefe2c1281b2ff32c3fa30eaf67641e1f998cc3, build-date=2025-07-21T13:30:03, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-sb-db-server:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ovn_cluster_south_db_server.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/var/lib/openvswitch/ovn:/var/lib/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/run/openvswitch:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/lib/openvswitch/ovn:/var/lib/ovn:shared,z', '/var/lib/openvswitch/ovn:/etc/openvswitch:shared,z', '/var/lib/openvswitch/ovn:/etc/ovn:shared,z', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/ovn:z', '/var/lib/config-data/ansible-generated/ovn:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, 
name=rhosp17/openstack-ovn-sb-db-server, summary=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-sb-db-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=ovn_cluster_south_db_server, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=ovn_cluster_south_db_server, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-sb-db-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-sb-db-server/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1)
Oct 13 14:24:50 standalone.localdomain podman[230915]: ovn_cluster_south_db_server
Oct 13 14:24:50 standalone.localdomain systemd[1]: tripleo_ovn_cluster_south_db_server.service: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain systemd[1]: Stopped ovn_cluster_south_db_server container.
Oct 13 14:24:50 standalone.localdomain sudo[230884]: pam_unix(sudo:session): session closed for user root
Oct 13 14:24:50 standalone.localdomain sshd[230867]: Received disconnect from 192.168.122.11 port 48556:11: disconnected by user
Oct 13 14:24:50 standalone.localdomain sshd[230867]: Disconnected from user root 192.168.122.11 port 48556
Oct 13 14:24:50 standalone.localdomain sshd[230860]: pam_unix(sshd:session): session closed for user root
Oct 13 14:24:50 standalone.localdomain systemd[1]: session-250.scope: Deactivated successfully.
Oct 13 14:24:50 standalone.localdomain systemd-logind[45629]: Session 250 logged out. Waiting for processes to exit.
Oct 13 14:24:50 standalone.localdomain systemd-logind[45629]: Removed session 250.
Oct 13 14:24:51 standalone.localdomain ceph-mon[29756]: pgmap v1509: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:24:51 standalone.localdomain sshd[230926]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:24:51 standalone.localdomain sshd[230926]: Accepted publickey for root from 192.168.122.11 port 48558 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:24:51 standalone.localdomain systemd-logind[45629]: New session 251 of user root.
Oct 13 14:24:51 standalone.localdomain systemd[1]: Started Session 251 of User root.
Oct 13 14:24:51 standalone.localdomain sshd[230926]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:51 standalone.localdomain sudo[230938]:     root : PWD=/root ; USER=root ; COMMAND=/bin/cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/credential-keys/0
Oct 13 14:24:51 standalone.localdomain sudo[230938]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:51 standalone.localdomain sudo[230938]: pam_unix(sudo:session): session closed for user root
Oct 13 14:24:51 standalone.localdomain sshd[230937]: Received disconnect from 192.168.122.11 port 48558:11: disconnected by user
Oct 13 14:24:51 standalone.localdomain sshd[230937]: Disconnected from user root 192.168.122.11 port 48558
Oct 13 14:24:51 standalone.localdomain sshd[230926]: pam_unix(sshd:session): session closed for user root
Oct 13 14:24:51 standalone.localdomain systemd[1]: session-251.scope: Deactivated successfully.
Oct 13 14:24:51 standalone.localdomain systemd-logind[45629]: Session 251 logged out. Waiting for processes to exit.
Oct 13 14:24:51 standalone.localdomain systemd-logind[45629]: Removed session 251.
Oct 13 14:24:51 standalone.localdomain sshd[230953]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:24:51 standalone.localdomain sshd[230953]: Accepted publickey for root from 192.168.122.11 port 48568 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:24:51 standalone.localdomain systemd-logind[45629]: New session 252 of user root.
Oct 13 14:24:51 standalone.localdomain systemd[1]: Started Session 252 of User root.
Oct 13 14:24:51 standalone.localdomain sshd[230953]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:51 standalone.localdomain sudo[230957]:     root : PWD=/root ; USER=root ; COMMAND=/bin/cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/credential-keys/1
Oct 13 14:24:51 standalone.localdomain sudo[230957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:51 standalone.localdomain sudo[230957]: pam_unix(sudo:session): session closed for user root
Oct 13 14:24:51 standalone.localdomain sshd[230956]: Received disconnect from 192.168.122.11 port 48568:11: disconnected by user
Oct 13 14:24:51 standalone.localdomain sshd[230956]: Disconnected from user root 192.168.122.11 port 48568
Oct 13 14:24:51 standalone.localdomain sshd[230953]: pam_unix(sshd:session): session closed for user root
Oct 13 14:24:51 standalone.localdomain systemd[1]: session-252.scope: Deactivated successfully.
Oct 13 14:24:51 standalone.localdomain systemd-logind[45629]: Session 252 logged out. Waiting for processes to exit.
Oct 13 14:24:51 standalone.localdomain systemd-logind[45629]: Removed session 252.
Oct 13 14:24:51 standalone.localdomain sshd[230972]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:24:51 standalone.localdomain sshd[230972]: Accepted publickey for root from 192.168.122.11 port 48570 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:24:51 standalone.localdomain systemd-logind[45629]: New session 253 of user root.
Oct 13 14:24:51 standalone.localdomain systemd[1]: Started Session 253 of User root.
Oct 13 14:24:51 standalone.localdomain sshd[230972]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:51 standalone.localdomain sudo[230976]:     root : PWD=/root ; USER=root ; COMMAND=/bin/cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/fernet-keys/0
Oct 13 14:24:51 standalone.localdomain sudo[230976]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:51 standalone.localdomain sudo[230976]: pam_unix(sudo:session): session closed for user root
Oct 13 14:24:51 standalone.localdomain sshd[230975]: Received disconnect from 192.168.122.11 port 48570:11: disconnected by user
Oct 13 14:24:51 standalone.localdomain sshd[230975]: Disconnected from user root 192.168.122.11 port 48570
Oct 13 14:24:51 standalone.localdomain sshd[230972]: pam_unix(sshd:session): session closed for user root
Oct 13 14:24:51 standalone.localdomain systemd[1]: session-253.scope: Deactivated successfully.
Oct 13 14:24:51 standalone.localdomain systemd-logind[45629]: Session 253 logged out. Waiting for processes to exit.
Oct 13 14:24:51 standalone.localdomain systemd-logind[45629]: Removed session 253.
Oct 13 14:24:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1510: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:51 standalone.localdomain sshd[230991]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:24:52 standalone.localdomain sshd[230991]: Accepted publickey for root from 192.168.122.11 port 48586 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:24:52 standalone.localdomain systemd-logind[45629]: New session 254 of user root.
Oct 13 14:24:52 standalone.localdomain systemd[1]: Started Session 254 of User root.
Oct 13 14:24:52 standalone.localdomain sshd[230991]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:52 standalone.localdomain sudo[230995]:     root : PWD=/root ; USER=root ; COMMAND=/bin/cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/fernet-keys/1
Oct 13 14:24:52 standalone.localdomain sudo[230995]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:24:52 standalone.localdomain sudo[230995]: pam_unix(sudo:session): session closed for user root
Oct 13 14:24:52 standalone.localdomain sshd[230994]: Received disconnect from 192.168.122.11 port 48586:11: disconnected by user
Oct 13 14:24:52 standalone.localdomain sshd[230994]: Disconnected from user root 192.168.122.11 port 48586
Oct 13 14:24:52 standalone.localdomain sshd[230991]: pam_unix(sshd:session): session closed for user root
Oct 13 14:24:52 standalone.localdomain systemd-logind[45629]: Session 254 logged out. Waiting for processes to exit.
Oct 13 14:24:52 standalone.localdomain systemd[1]: session-254.scope: Deactivated successfully.
Oct 13 14:24:52 standalone.localdomain systemd-logind[45629]: Removed session 254.
Oct 13 14:24:53 standalone.localdomain ceph-mon[29756]: pgmap v1510: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:24:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:24:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:24:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:24:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:24:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:24:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1511: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:55 standalone.localdomain ceph-mon[29756]: pgmap v1511: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:24:55 standalone.localdomain systemd[1]: tmp-crun.AFR5zZ.mount: Deactivated successfully.
Oct 13 14:24:55 standalone.localdomain podman[231302]: 2025-10-13 14:24:55.823374777 +0000 UTC m=+0.087403835 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, name=rhosp17/openstack-keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, 
tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, container_name=keystone_cron, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step3, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible)
Oct 13 14:24:55 standalone.localdomain podman[231302]: 2025-10-13 14:24:55.862965733 +0000 UTC m=+0.126994861 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-keystone-container, container_name=keystone_cron, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, build-date=2025-07-21T13:27:18, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 keystone, name=rhosp17/openstack-keystone, description=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, vcs-type=git, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, io.openshift.expose-services=)
Oct 13 14:24:55 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:24:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1512: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:24:57 standalone.localdomain ceph-mon[29756]: pgmap v1512: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:24:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:24:57 standalone.localdomain podman[231373]: 2025-10-13 14:24:57.787692391 +0000 UTC m=+0.055035670 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=neutron_sriov_agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, build-date=2025-07-21T16:03:34, io.openshift.expose-services=, name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 14:24:57 standalone.localdomain podman[231372]: 2025-10-13 14:24:57.854869664 +0000 UTC m=+0.124348079 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:54, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-dhcp-agent, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-dhcp-agent-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, container_name=neutron_dhcp)
Oct 13 14:24:57 standalone.localdomain podman[231373]: 2025-10-13 14:24:57.885117603 +0000 UTC m=+0.152461042 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-sriov-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, 
description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, build-date=2025-07-21T16:03:34, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=neutron_sriov_agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:24:57 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:24:57 standalone.localdomain podman[231372]: 2025-10-13 14:24:57.940150424 +0000 UTC m=+0.209628919 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, config_id=tripleo_step4, release=1, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-dhcp-agent, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, managed_by=tripleo_ansible, container_name=neutron_dhcp, build-date=2025-07-21T16:28:54, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1)
Oct 13 14:24:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1513: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:57 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:24:58 standalone.localdomain ceph-mon[29756]: pgmap v1513: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:24:59 standalone.localdomain runuser[231440]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:24:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1514: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:00 standalone.localdomain account-server[114571]: 172.20.0.100 - - [13/Oct/2025:14:25:00 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx38c7dfd482f24f14b9ca3-0068ed0bbc" "proxy-server 2" 0.0006 "-" 23 -
Oct 13 14:25:00 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx38c7dfd482f24f14b9ca3-0068ed0bbc)
Oct 13 14:25:00 standalone.localdomain runuser[231440]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:25:00 standalone.localdomain runuser[231509]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:25:01 standalone.localdomain ceph-mon[29756]: pgmap v1514: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:25:01 standalone.localdomain runuser[231509]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:25:01 standalone.localdomain runuser[231563]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:25:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:25:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:25:01 standalone.localdomain podman[231619]: 2025-10-13 14:25:01.804845446 +0000 UTC m=+0.072569248 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, container_name=clustercheck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, name=rhosp17/openstack-mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:45)
Oct 13 14:25:01 standalone.localdomain systemd[1]: tmp-crun.xiVg3l.mount: Deactivated successfully.
Oct 13 14:25:01 standalone.localdomain podman[231618]: 2025-10-13 14:25:01.845698211 +0000 UTC m=+0.116498149 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., container_name=nova_compute)
Oct 13 14:25:01 standalone.localdomain podman[231618]: 2025-10-13 14:25:01.868806331 +0000 UTC m=+0.139606269 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, container_name=nova_compute, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:25:01 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:25:01 standalone.localdomain podman[231619]: 2025-10-13 14:25:01.926487453 +0000 UTC m=+0.194211285 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, build-date=2025-07-21T12:58:45, container_name=clustercheck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, architecture=x86_64, config_id=tripleo_step2, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, com.redhat.component=openstack-mariadb-container, release=1)
Oct 13 14:25:01 standalone.localdomain runuser[231563]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:25:01 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:25:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1515: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:02 standalone.localdomain ceph-mon[29756]: pgmap v1515: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:03 standalone.localdomain podman[231875]: 2025-10-13 14:25:03.762542997 +0000 UTC m=+0.103132348 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, release=1, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45)
Oct 13 14:25:03 standalone.localdomain podman[231875]: 2025-10-13 14:25:03.850592902 +0000 UTC m=+0.191182283 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, build-date=2025-07-21T12:58:45, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:25:03 standalone.localdomain podman[231911]: 2025-10-13 14:25:03.911671487 +0000 UTC m=+0.106208122 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, description=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-haproxy-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, name=rhosp17/openstack-haproxy, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1)
Oct 13 14:25:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:25:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1516: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:04 standalone.localdomain sudo[231953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:25:04 standalone.localdomain sudo[231953]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:25:04 standalone.localdomain sudo[231953]: pam_unix(sudo:session): session closed for user root
Oct 13 14:25:04 standalone.localdomain sudo[231989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:25:04 standalone.localdomain sudo[231989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:25:04 standalone.localdomain podman[231930]: 2025-10-13 14:25:04.145620122 +0000 UTC m=+0.212710123 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T13:08:11, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-haproxy-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:25:04 standalone.localdomain ceph-mon[29756]: pgmap v1516: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:04 standalone.localdomain podman[231911]: 2025-10-13 14:25:04.294339589 +0000 UTC m=+0.488876264 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T13:08:11, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, com.redhat.component=openstack-haproxy-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-haproxy, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:25:04 standalone.localdomain podman[231931]: 2025-10-13 14:25:04.270861638 +0000 UTC m=+0.335760612 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, version=17.1.9, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1, tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=)
Oct 13 14:25:04 standalone.localdomain podman[231931]: 2025-10-13 14:25:04.475033719 +0000 UTC m=+0.539932663 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, release=1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20250721.1)
Oct 13 14:25:04 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:25:04 standalone.localdomain podman[231985]: 2025-10-13 14:25:04.618409611 +0000 UTC m=+0.555607774 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.component=openstack-rabbitmq-container, name=rhosp17/openstack-rabbitmq, build-date=2025-07-21T13:08:05, summary=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, batch=17.1_20250721.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vcs-type=git)
Oct 13 14:25:04 standalone.localdomain podman[231985]: 2025-10-13 14:25:04.719179516 +0000 UTC m=+0.656377649 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.buildah.version=1.33.12, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, build-date=2025-07-21T13:08:05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhosp17/openstack-rabbitmq, version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-rabbitmq-container, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1)
Oct 13 14:25:04 standalone.localdomain sudo[231989]: pam_unix(sudo:session): session closed for user root
Oct 13 14:25:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:25:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:25:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:25:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:25:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:25:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:25:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:25:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:25:04 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev f2bafeee-d0af-48ff-b229-6bc9ad37d5ed (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:25:04 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev f2bafeee-d0af-48ff-b229-6bc9ad37d5ed (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:25:04 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event f2bafeee-d0af-48ff-b229-6bc9ad37d5ed (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:25:04 standalone.localdomain sudo[232106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:25:04 standalone.localdomain sudo[232106]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:25:04 standalone.localdomain sudo[232106]: pam_unix(sudo:session): session closed for user root
Oct 13 14:25:05 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:25:05 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:25:05 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:25:05 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:25:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1517: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:25:06 standalone.localdomain ceph-mon[29756]: pgmap v1517: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1518: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:09 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:25:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:25:09 standalone.localdomain ceph-mon[29756]: pgmap v1518: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:25:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1519: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:10 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:25:10 standalone.localdomain ceph-mon[29756]: pgmap v1519: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:25:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1520: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:12 standalone.localdomain ceph-mon[29756]: pgmap v1520: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:12 standalone.localdomain runuser[232248]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:25:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:25:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:25:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:25:12 standalone.localdomain systemd[1]: tmp-crun.LqTnXC.mount: Deactivated successfully.
Oct 13 14:25:12 standalone.localdomain podman[232362]: 2025-10-13 14:25:12.824672815 +0000 UTC m=+0.080795212 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, com.redhat.component=openstack-heat-api-container, description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1, vcs-type=git, version=17.1.9, container_name=heat_api)
Oct 13 14:25:12 standalone.localdomain podman[232361]: 2025-10-13 14:25:12.868207082 +0000 UTC m=+0.125312169 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-container, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, container_name=heat_api_cron)
Oct 13 14:25:12 standalone.localdomain podman[232361]: 2025-10-13 14:25:12.898605016 +0000 UTC m=+0.155710083 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, container_name=heat_api_cron, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:25:12 standalone.localdomain podman[232362]: 2025-10-13 14:25:12.901309238 +0000 UTC m=+0.157431655 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=heat_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true)
Oct 13 14:25:12 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:25:12 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:25:13 standalone.localdomain podman[232365]: 2025-10-13 14:25:13.025103101 +0000 UTC m=+0.272927543 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Oct 13 14:25:13 standalone.localdomain podman[232365]: 2025-10-13 14:25:13.034023495 +0000 UTC m=+0.281847927 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:25:13 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:25:13 standalone.localdomain runuser[232248]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:25:13 standalone.localdomain runuser[232489]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:25:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:25:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:25:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:25:13 standalone.localdomain podman[232575]: 2025-10-13 14:25:13.800919726 +0000 UTC m=+0.070287059 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, container_name=heat_api_cfn, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-heat-api-cfn-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, build-date=2025-07-21T14:49:55, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api-cfn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:25:13 standalone.localdomain podman[232577]: 2025-10-13 14:25:13.860779484 +0000 UTC m=+0.123878155 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, release=1, description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhosp17/openstack-memcached, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, config_id=tripleo_step1, managed_by=tripleo_ansible, io.openshift.expose-services=)
Oct 13 14:25:13 standalone.localdomain podman[232577]: 2025-10-13 14:25:13.875446994 +0000 UTC m=+0.138545645 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, build-date=2025-07-21T12:58:43, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, container_name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-memcached-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:25:13 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:25:13 standalone.localdomain podman[232575]: 2025-10-13 14:25:13.886908596 +0000 UTC m=+0.156275919 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, build-date=2025-07-21T14:49:55, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, container_name=heat_api_cfn, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, managed_by=tripleo_ansible)
Oct 13 14:25:13 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:25:13 standalone.localdomain runuser[232489]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:25:13 standalone.localdomain runuser[232651]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:25:13 standalone.localdomain systemd[1]: tmp-crun.dg2QW9.mount: Deactivated successfully.
Oct 13 14:25:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1521: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:13 standalone.localdomain podman[232576]: 2025-10-13 14:25:13.953611085 +0000 UTC m=+0.219588744 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, build-date=2025-07-21T15:44:11, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-heat-engine-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, distribution-scope=public, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vendor=Red Hat, Inc., container_name=heat_engine, description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:25:13 standalone.localdomain podman[232576]: 2025-10-13 14:25:13.970819793 +0000 UTC m=+0.236797462 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-engine-container, summary=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., container_name=heat_engine, vcs-type=git, release=1)
Oct 13 14:25:13 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:25:14 standalone.localdomain runuser[232651]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:25:15 standalone.localdomain ceph-mon[29756]: pgmap v1521: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1522: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:25:17 standalone.localdomain ceph-mon[29756]: pgmap v1522: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1523: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:18 standalone.localdomain ceph-mon[29756]: pgmap v1523: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:25:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:25:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:25:19 standalone.localdomain podman[232856]: 2025-10-13 14:25:19.805248929 +0000 UTC m=+0.070692401 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T16:28:53, version=17.1.9)
Oct 13 14:25:19 standalone.localdomain podman[232856]: 2025-10-13 14:25:19.857712191 +0000 UTC m=+0.123155653 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1, io.openshift.expose-services=, batch=17.1_20250721.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git)
Oct 13 14:25:19 standalone.localdomain podman[232855]: 2025-10-13 14:25:19.861303881 +0000 UTC m=+0.126743663 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, release=1, 
vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T13:58:20)
Oct 13 14:25:19 standalone.localdomain podman[232855]: 2025-10-13 14:25:19.869519783 +0000 UTC m=+0.134959585 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, container_name=glance_api_cron, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, com.redhat.component=openstack-glance-api-container, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', 
'/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:25:19 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:25:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:25:19 standalone.localdomain podman[232857]: 2025-10-13 14:25:19.911394329 +0000 UTC m=+0.170066884 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:25:19 standalone.localdomain podman[232857]: 2025-10-13 14:25:19.931805276 +0000 UTC m=+0.190477831 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1)
Oct 13 14:25:19 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Deactivated successfully.
Oct 13 14:25:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1524: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:19 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:25:20 standalone.localdomain podman[232916]: 2025-10-13 14:25:20.015848157 +0000 UTC m=+0.112756494 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step4, build-date=2025-07-21T15:54:32, tcib_managed=true, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, 
distribution-scope=public, vcs-type=git, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=swift_container_server, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.022382) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365520022442, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2108, "num_deletes": 251, "total_data_size": 2025128, "memory_usage": 2067680, "flush_reason": "Manual Compaction"}
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365520030990, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 1955310, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34408, "largest_seqno": 36515, "table_properties": {"data_size": 1947099, "index_size": 4978, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16902, "raw_average_key_size": 19, "raw_value_size": 1930392, "raw_average_value_size": 2260, "num_data_blocks": 227, "num_entries": 854, "num_filter_entries": 854, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760365321, "oldest_key_time": 1760365321, "file_creation_time": 1760365520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 8658 microseconds, and 3531 cpu microseconds.
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.031040) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 1955310 bytes OK
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.031076) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.032403) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.032419) EVENT_LOG_v1 {"time_micros": 1760365520032414, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.032439) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2016145, prev total WAL file size 2016145, number of live WAL files 2.
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.032998) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(1909KB)], [74(5033KB)]
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365520033026, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 7110061, "oldest_snapshot_seqno": -1}
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 4544 keys, 6080056 bytes, temperature: kUnknown
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365520058362, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 6080056, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6049049, "index_size": 18558, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 112052, "raw_average_key_size": 24, "raw_value_size": 5966237, "raw_average_value_size": 1312, "num_data_blocks": 774, "num_entries": 4544, "num_filter_entries": 4544, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760365520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.058653) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 6080056 bytes
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.060293) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 279.1 rd, 238.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 4.9 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 5062, records dropped: 518 output_compression: NoCompression
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.060311) EVENT_LOG_v1 {"time_micros": 1760365520060302, "job": 42, "event": "compaction_finished", "compaction_time_micros": 25473, "compaction_time_cpu_micros": 10947, "output_level": 6, "num_output_files": 1, "total_output_size": 6080056, "num_input_records": 5062, "num_output_records": 4544, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365520060619, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365520061030, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.032891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.061143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.061150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.061152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.061153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:25:20 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:25:20.061155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:25:20 standalone.localdomain podman[232916]: 2025-10-13 14:25:20.193799122 +0000 UTC m=+0.290707479 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container, vcs-type=git, build-date=2025-07-21T15:54:32, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_container_server, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, version=17.1.9)
Oct 13 14:25:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:25:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:25:20 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:25:20 standalone.localdomain podman[232958]: 2025-10-13 14:25:20.299619491 +0000 UTC m=+0.074416826 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, container_name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, summary=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', 
'/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, distribution-scope=public, name=rhosp17/openstack-glance-api)
Oct 13 14:25:20 standalone.localdomain podman[232957]: 2025-10-13 14:25:20.279621957 +0000 UTC m=+0.061727217 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, distribution-scope=public, release=1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git)
Oct 13 14:25:20 standalone.localdomain podman[232957]: 2025-10-13 14:25:20.485877331 +0000 UTC m=+0.267982591 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, tcib_managed=true, io.buildah.version=1.33.12, container_name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, architecture=x86_64)
Oct 13 14:25:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:25:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:25:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:25:20 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:25:20 standalone.localdomain podman[232958]: 2025-10-13 14:25:20.515732509 +0000 UTC m=+0.290529684 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, name=rhosp17/openstack-glance-api, config_id=tripleo_step4, version=17.1.9, container_name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:25:20 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:25:20 standalone.localdomain podman[233010]: 2025-10-13 14:25:20.563079632 +0000 UTC m=+0.051113551 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, 
distribution-scope=public, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, maintainer=OpenStack TripleO Team)
Oct 13 14:25:20 standalone.localdomain podman[233011]: 2025-10-13 14:25:20.656622585 +0000 UTC m=+0.135190742 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, name=rhosp17/openstack-swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.expose-services=, com.redhat.component=openstack-swift-proxy-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_proxy, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:25:20 standalone.localdomain podman[233013]: 2025-10-13 14:25:20.632682669 +0000 UTC m=+0.114391024 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, name=rhosp17/openstack-swift-account, container_name=swift_account_server, io.openshift.expose-services=, maintainer=OpenStack TripleO 
Team, distribution-scope=public, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc.)
Oct 13 14:25:20 standalone.localdomain podman[233013]: 2025-10-13 14:25:20.843833344 +0000 UTC m=+0.325541679 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-account-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, release=1)
Oct 13 14:25:20 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:25:20 standalone.localdomain podman[233011]: 2025-10-13 14:25:20.869921035 +0000 UTC m=+0.348489192 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.expose-services=, com.redhat.component=openstack-swift-proxy-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-swift-proxy-server, container_name=swift_proxy, vendor=Red Hat, Inc., io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T14:48:37, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:25:20 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:25:20 standalone.localdomain podman[233010]: 2025-10-13 14:25:20.908961164 +0000 UTC m=+0.396995103 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, build-date=2025-07-21T14:48:37, distribution-scope=public, container_name=nova_migration_target, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:25:20 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:25:21 standalone.localdomain ceph-mon[29756]: pgmap v1524: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:25:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1525: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:22 standalone.localdomain ceph-mon[29756]: pgmap v1525: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:25:23
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'manila_metadata', 'vms', 'manila_data', '.mgr', 'volumes', 'images']
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:25:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1526: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:25 standalone.localdomain ceph-mon[29756]: pgmap v1526: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:25 standalone.localdomain runuser[233338]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:25:25 standalone.localdomain runuser[233338]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:25:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1527: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:25 standalone.localdomain runuser[233452]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:25:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:25:26 standalone.localdomain ceph-mon[29756]: pgmap v1527: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:26 standalone.localdomain runuser[233452]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:25:26 standalone.localdomain runuser[233514]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:25:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:25:26 standalone.localdomain podman[233561]: 2025-10-13 14:25:26.841240744 +0000 UTC m=+0.096701631 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.buildah.version=1.33.12, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=keystone_cron, summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-keystone, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, 
config_id=tripleo_step3, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-keystone-container)
Oct 13 14:25:26 standalone.localdomain podman[233561]: 2025-10-13 14:25:26.854913713 +0000 UTC m=+0.110374620 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, com.redhat.component=openstack-keystone-container, version=17.1.9, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, container_name=keystone_cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-keystone, 
io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64)
Oct 13 14:25:26 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:25:27 standalone.localdomain runuser[233514]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:25:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1528: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:25:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:25:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:25:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:25:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:25:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:25:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:25:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:25:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:25:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:25:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:25:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:25:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:25:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:25:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:25:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:25:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:25:28 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:25:28 standalone.localdomain recover_tripleo_nova_virtqemud[233620]: 93291
Oct 13 14:25:28 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:25:28 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:25:28 standalone.localdomain podman[233608]: 2025-10-13 14:25:28.823824178 +0000 UTC m=+0.085559698 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, version=17.1.9, distribution-scope=public, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-sriov-agent-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, config_id=tripleo_step4, io.openshift.expose-services=, container_name=neutron_sriov_agent, name=rhosp17/openstack-neutron-sriov-agent, build-date=2025-07-21T16:03:34)
Oct 13 14:25:28 standalone.localdomain podman[233608]: 2025-10-13 14:25:28.865291432 +0000 UTC m=+0.127026922 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, release=1, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-sriov-agent, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-sriov-agent-container, architecture=x86_64, distribution-scope=public, build-date=2025-07-21T16:03:34, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, vendor=Red Hat, Inc.)
Oct 13 14:25:28 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:25:28 standalone.localdomain podman[233607]: 2025-10-13 14:25:28.876410173 +0000 UTC m=+0.138604727 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, version=17.1.9, build-date=2025-07-21T16:28:54, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-neutron-dhcp-agent, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-dhcp-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, container_name=neutron_dhcp, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:25:28 standalone.localdomain podman[233607]: 2025-10-13 14:25:28.962972042 +0000 UTC m=+0.225166586 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=neutron_dhcp, 
com.redhat.component=openstack-neutron-dhcp-agent-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, build-date=2025-07-21T16:28:54, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 14:25:28 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:25:29 standalone.localdomain ceph-mon[29756]: pgmap v1528: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1529: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:31 standalone.localdomain ceph-mon[29756]: pgmap v1529: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:25:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1530: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:25:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:25:32 standalone.localdomain podman[233771]: 2025-10-13 14:25:32.831882943 +0000 UTC m=+0.092089649 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, config_id=tripleo_step5, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:25:32 standalone.localdomain podman[233771]: 2025-10-13 14:25:32.861202903 +0000 UTC m=+0.121409589 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.buildah.version=1.33.12, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:25:32 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:25:32 standalone.localdomain podman[233772]: 2025-10-13 14:25:32.883747036 +0000 UTC m=+0.140177327 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, vcs-type=git, name=rhosp17/openstack-mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, config_id=tripleo_step2, io.buildah.version=1.33.12, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, release=1, tcib_managed=true, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp 
openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, container_name=clustercheck, build-date=2025-07-21T12:58:45, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:25:32 standalone.localdomain podman[233772]: 2025-10-13 14:25:32.933895345 +0000 UTC m=+0.190325586 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, managed_by=tripleo_ansible, container_name=clustercheck, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, name=rhosp17/openstack-mariadb, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb, 
build-date=2025-07-21T12:58:45, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, com.redhat.component=openstack-mariadb-container)
Oct 13 14:25:32 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:25:33 standalone.localdomain ceph-mon[29756]: pgmap v1530: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:33 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 14:25:33 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 14:25:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1531: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:34 standalone.localdomain ceph-mon[29756]: pgmap v1531: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:25:34 standalone.localdomain podman[233978]: 2025-10-13 14:25:34.823067722 +0000 UTC m=+0.081480674 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:25:34 standalone.localdomain podman[233978]: 2025-10-13 14:25:34.856989353 +0000 UTC m=+0.115402345 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, release=1, com.redhat.component=openstack-iscsid-container, 
vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, tcib_managed=true)
Oct 13 14:25:34 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:25:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1532: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:25:37 standalone.localdomain ceph-mon[29756]: pgmap v1532: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:37 standalone.localdomain runuser[234078]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:25:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1533: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:38 standalone.localdomain ceph-mon[29756]: pgmap v1533: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:38 standalone.localdomain runuser[234078]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:25:38 standalone.localdomain runuser[234147]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:25:39 standalone.localdomain runuser[234147]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:25:39 standalone.localdomain runuser[234209]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:25:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1534: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:39 standalone.localdomain runuser[234209]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:25:41 standalone.localdomain ceph-mon[29756]: pgmap v1534: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:25:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1535: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:42 standalone.localdomain ceph-mon[29756]: pgmap v1535: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:25:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:25:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:25:43 standalone.localdomain podman[234423]: 2025-10-13 14:25:43.781802635 +0000 UTC m=+0.052052690 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, version=17.1.9, build-date=2025-07-21T15:56:26, distribution-scope=public, name=rhosp17/openstack-heat-api, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:25:43 standalone.localdomain podman[234423]: 2025-10-13 14:25:43.811083214 +0000 UTC m=+0.081333249 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, version=17.1.9, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack 
Platform 17.1 heat-api, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api, release=1, architecture=x86_64, io.buildah.version=1.33.12, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:25:43 standalone.localdomain podman[234422]: 2025-10-13 14:25:43.837079942 +0000 UTC m=+0.110531475 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, release=1, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, 
com.redhat.component=openstack-heat-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, container_name=heat_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9)
Oct 13 14:25:43 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:25:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:25:43 standalone.localdomain podman[234422]: 2025-10-13 14:25:43.916846632 +0000 UTC m=+0.190298175 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-heat-api, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, container_name=heat_api_cron, io.openshift.expose-services=, release=1)
Oct 13 14:25:43 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:25:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:25:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1536: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:43 standalone.localdomain podman[234522]: 2025-10-13 14:25:43.965142165 +0000 UTC m=+0.057179836 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, io.openshift.expose-services=, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, release=1, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-memcached-container, name=rhosp17/openstack-memcached, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, 
config_id=tripleo_step1, container_name=memcached, build-date=2025-07-21T12:58:43, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true)
Oct 13 14:25:44 standalone.localdomain podman[234522]: 2025-10-13 14:25:44.08609429 +0000 UTC m=+0.178131941 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, build-date=2025-07-21T12:58:43, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, release=1, config_id=tripleo_step1, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-memcached-container, architecture=x86_64, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.buildah.version=1.33.12)
Oct 13 14:25:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:25:44 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:25:44 standalone.localdomain podman[234424]: 2025-10-13 14:25:44.11280712 +0000 UTC m=+0.380545037 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, batch=17.1_20250721.1, 
managed_by=tripleo_ansible, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:25:44 standalone.localdomain podman[234424]: 2025-10-13 14:25:44.122841158 +0000 UTC m=+0.390579065 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, container_name=logrotate_crond, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12)
Oct 13 14:25:44 standalone.localdomain ceph-mon[29756]: pgmap v1536: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:44 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:25:44 standalone.localdomain podman[234535]: 2025-10-13 14:25:44.175692941 +0000 UTC m=+0.213285751 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=heat_api_cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-cfn-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:25:44 standalone.localdomain podman[234562]: 2025-10-13 14:25:44.243574077 +0000 UTC m=+0.124267418 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, com.redhat.component=openstack-heat-engine-container, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, container_name=heat_engine, distribution-scope=public, build-date=2025-07-21T15:44:11, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=)
Oct 13 14:25:44 standalone.localdomain podman[234535]: 2025-10-13 14:25:44.262971641 +0000 UTC m=+0.300564461 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2025-07-21T14:49:55, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-api-cfn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, version=17.1.9, tcib_managed=true, release=1, vcs-type=git, 
managed_by=tripleo_ansible, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 14:25:44 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:25:44 standalone.localdomain podman[234562]: 2025-10-13 14:25:44.325927145 +0000 UTC m=+0.206620416 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, com.redhat.component=openstack-heat-engine-container, name=rhosp17/openstack-heat-engine, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T15:44:11, container_name=heat_engine, tcib_managed=true, 
io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:25:44 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:25:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1537: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:25:47 standalone.localdomain ceph-mon[29756]: pgmap v1537: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:47 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 14:25:47 standalone.localdomain object-server[234719]: Object update sweep starting on /srv/node/d1 (pid: 16)
Oct 13 14:25:47 standalone.localdomain object-server[234719]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 16)
Oct 13 14:25:47 standalone.localdomain object-server[234719]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 14:25:47 standalone.localdomain object-server[114601]: Object update sweep completed: 0.08s
Oct 13 14:25:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1538: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:48 standalone.localdomain ceph-mon[29756]: pgmap v1538: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1539: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:50 standalone.localdomain runuser[234756]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:25:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:25:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:25:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:25:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:25:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:25:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:25:50 standalone.localdomain podman[234803]: 2025-10-13 14:25:50.840310001 +0000 UTC m=+0.097599527 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, container_name=glance_api_internal, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true)
Oct 13 14:25:50 standalone.localdomain podman[234804]: 2025-10-13 14:25:50.822136903 +0000 UTC m=+0.079566025 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, release=1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, name=rhosp17/openstack-swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team)
Oct 13 14:25:50 standalone.localdomain podman[234802]: 2025-10-13 14:25:50.886553892 +0000 UTC m=+0.149332907 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, 
container_name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:25:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:25:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:25:50 standalone.localdomain podman[234815]: 2025-10-13 14:25:50.936129174 +0000 UTC m=+0.189913573 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53)
Oct 13 14:25:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:25:50 standalone.localdomain podman[234801]: 2025-10-13 14:25:50.989586426 +0000 UTC m=+0.252210496 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, build-date=2025-07-21T13:58:20, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-glance-api-container, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, container_name=glance_api_cron, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, maintainer=OpenStack TripleO Team)
Oct 13 14:25:51 standalone.localdomain podman[234823]: 2025-10-13 14:25:50.952567498 +0000 UTC m=+0.199285370 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, vcs-type=git, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:25:51 standalone.localdomain ceph-mon[29756]: pgmap v1539: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:51 standalone.localdomain podman[234823]: 2025-10-13 14:25:51.03336979 +0000 UTC m=+0.280087652 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, build-date=2025-07-21T13:28:44, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.9, architecture=x86_64)
Oct 13 14:25:51 standalone.localdomain podman[234823]: unhealthy
Oct 13 14:25:51 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:25:51 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:25:51 standalone.localdomain podman[234946]: 2025-10-13 14:25:51.048919258 +0000 UTC m=+0.059024454 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1)
Oct 13 14:25:51 standalone.localdomain podman[234803]: 2025-10-13 14:25:51.052832808 +0000 UTC m=+0.310122364 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, container_name=glance_api_internal, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api, version=17.1.9, build-date=2025-07-21T13:58:20, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible)
Oct 13 14:25:51 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:25:51 standalone.localdomain podman[234804]: 2025-10-13 14:25:51.065003022 +0000 UTC m=+0.322432124 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, name=rhosp17/openstack-swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=swift_container_server, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1)
Oct 13 14:25:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:25:51 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:25:51 standalone.localdomain podman[234895]: 2025-10-13 14:25:51.101783681 +0000 UTC m=+0.195919447 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, config_id=tripleo_step4, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, release=1, vcs-type=git, tcib_managed=true)
Oct 13 14:25:51 standalone.localdomain podman[234802]: 2025-10-13 14:25:51.117778932 +0000 UTC m=+0.380557907 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-swift-object-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, build-date=2025-07-21T14:56:28, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:25:51 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:25:51 standalone.localdomain podman[234894]: 2025-10-13 14:25:51.209809388 +0000 UTC m=+0.305421690 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, container_name=swift_proxy, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:25:51 standalone.localdomain podman[234815]: 2025-10-13 14:25:51.226105179 +0000 UTC m=+0.479889598 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, container_name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 14:25:51 standalone.localdomain podman[234801]: 2025-10-13 14:25:51.226582673 +0000 UTC m=+0.489206733 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:58:20, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, batch=17.1_20250721.1, 
io.openshift.expose-services=, name=rhosp17/openstack-glance-api, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=glance_api_cron, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container)
Oct 13 14:25:51 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:25:51 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Deactivated successfully.
Oct 13 14:25:51 standalone.localdomain podman[234895]: 2025-10-13 14:25:51.333796517 +0000 UTC m=+0.427932263 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., version=17.1.9, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=swift_account_server, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, name=rhosp17/openstack-swift-account)
Oct 13 14:25:51 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:25:51 standalone.localdomain runuser[234756]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:25:51 standalone.localdomain podman[234946]: 2025-10-13 14:25:51.394218592 +0000 UTC m=+0.404323818 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, container_name=nova_migration_target, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:25:51 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:25:51 standalone.localdomain podman[234894]: 2025-10-13 14:25:51.452911734 +0000 UTC m=+0.548524046 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, container_name=swift_proxy, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:25:51 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:25:51 standalone.localdomain runuser[235040]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:25:51 standalone.localdomain systemd[1]: tmp-crun.pjSyl2.mount: Deactivated successfully.
Oct 13 14:25:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1540: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:52 standalone.localdomain ceph-mon[29756]: pgmap v1540: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:52 standalone.localdomain runuser[235040]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:25:52 standalone.localdomain runuser[235102]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:25:52 standalone.localdomain runuser[235102]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:25:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:25:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:25:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:25:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:25:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:25:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:25:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1541: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:55 standalone.localdomain ceph-mon[29756]: pgmap v1541: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1542: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:25:57 standalone.localdomain ceph-mon[29756]: pgmap v1542: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:25:57 standalone.localdomain podman[235460]: 2025-10-13 14:25:57.263538849 +0000 UTC m=+0.058309562 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, container_name=keystone_cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, io.buildah.version=1.33.12, name=rhosp17/openstack-keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_id=tripleo_step3, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:25:57 standalone.localdomain podman[235460]: 2025-10-13 14:25:57.271807573 +0000 UTC m=+0.066578296 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-keystone, vcs-type=git, config_id=tripleo_step3, container_name=keystone_cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, release=1, batch=17.1_20250721.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:25:57 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:25:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1543: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:59 standalone.localdomain ceph-mon[29756]: pgmap v1543: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:25:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:25:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:25:59 standalone.localdomain podman[235530]: 2025-10-13 14:25:59.801625992 +0000 UTC m=+0.066530605 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-neutron-dhcp-agent, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-dhcp-agent-container, architecture=x86_64, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, release=1, batch=17.1_20250721.1, build-date=2025-07-21T16:28:54, version=17.1.9, container_name=neutron_dhcp, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1)
Oct 13 14:25:59 standalone.localdomain podman[235530]: 2025-10-13 14:25:59.896430223 +0000 UTC m=+0.161334866 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, com.redhat.component=openstack-neutron-dhcp-agent-container, container_name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, build-date=2025-07-21T16:28:54, name=rhosp17/openstack-neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4)
Oct 13 14:25:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1544: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:00 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:26:00 standalone.localdomain podman[235531]: 2025-10-13 14:25:59.904421199 +0000 UTC m=+0.165368849 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, name=rhosp17/openstack-neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, container_name=neutron_sriov_agent, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-sriov-agent-container, build-date=2025-07-21T16:03:34, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:26:00 standalone.localdomain podman[235531]: 2025-10-13 14:26:00.044952725 +0000 UTC m=+0.305900375 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, managed_by=tripleo_ansible, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, 
com.redhat.component=openstack-neutron-sriov-agent-container, config_id=tripleo_step4, io.buildah.version=1.33.12, distribution-scope=public, name=rhosp17/openstack-neutron-sriov-agent, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1)
Oct 13 14:26:00 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:26:00 standalone.localdomain ceph-mon[29756]: pgmap v1544: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:26:01 standalone.localdomain anacron[91273]: Job `cron.weekly' started
Oct 13 14:26:01 standalone.localdomain anacron[91273]: Job `cron.weekly' terminated
Oct 13 14:26:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1545: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:02 standalone.localdomain ceph-mon[29756]: pgmap v1545: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:03 standalone.localdomain runuser[235699]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:26:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:26:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:26:03 standalone.localdomain systemd[1]: tmp-crun.mtwYyW.mount: Deactivated successfully.
Oct 13 14:26:03 standalone.localdomain podman[235786]: 2025-10-13 14:26:03.805464927 +0000 UTC m=+0.066611685 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, description=Red Hat OpenStack Platform 17.1 mariadb, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, vcs-type=git, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, tcib_managed=true, 
vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, version=17.1.9, build-date=2025-07-21T12:58:45, container_name=clustercheck)
Oct 13 14:26:03 standalone.localdomain podman[235786]: 2025-10-13 14:26:03.844073063 +0000 UTC m=+0.105219801 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, version=17.1.9, release=1, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, io.openshift.expose-services=, 
maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:45, container_name=clustercheck, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:26:03 standalone.localdomain systemd[1]: tmp-crun.P4zgbx.mount: Deactivated successfully.
Oct 13 14:26:03 standalone.localdomain podman[235785]: 2025-10-13 14:26:03.857549697 +0000 UTC m=+0.120168651 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-nova-compute-container, version=17.1.9, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Oct 13 14:26:03 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:26:03 standalone.localdomain podman[235785]: 2025-10-13 14:26:03.905921463 +0000 UTC m=+0.168540417 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:26:03 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:26:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1546: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:04 standalone.localdomain podman[235899]: 2025-10-13 14:26:04.100582481 +0000 UTC m=+0.056752423 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, build-date=2025-07-21T12:58:45, version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, release=1, com.redhat.component=openstack-mariadb-container, architecture=x86_64, name=rhosp17/openstack-mariadb, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team)
Oct 13 14:26:04 standalone.localdomain podman[235899]: 2025-10-13 14:26:04.128998894 +0000 UTC m=+0.085168836 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, release=1, batch=17.1_20250721.1, name=rhosp17/openstack-mariadb, build-date=2025-07-21T12:58:45, version=17.1.9, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:26:04 standalone.localdomain runuser[235699]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:26:04 standalone.localdomain runuser[235946]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:26:04 standalone.localdomain podman[236035]: 2025-10-13 14:26:04.507241909 +0000 UTC m=+0.062016165 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-haproxy-container, description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, release=1, tcib_managed=true, name=rhosp17/openstack-haproxy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.buildah.version=1.33.12, architecture=x86_64, version=17.1.9, build-date=2025-07-21T13:08:11, batch=17.1_20250721.1)
Oct 13 14:26:04 standalone.localdomain podman[236035]: 2025-10-13 14:26:04.510084077 +0000 UTC m=+0.064858363 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, build-date=2025-07-21T13:08:11, com.redhat.component=openstack-haproxy-container, release=1, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:26:04 standalone.localdomain podman[236077]: 2025-10-13 14:26:04.953970058 +0000 UTC m=+0.079198203 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, summary=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, build-date=2025-07-21T13:08:05, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-rabbitmq-container, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 14:26:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:26:04 standalone.localdomain podman[236077]: 2025-10-13 14:26:04.991058678 +0000 UTC m=+0.116286843 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, tcib_managed=true, build-date=2025-07-21T13:08:05, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-rabbitmq-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team)
Oct 13 14:26:05 standalone.localdomain runuser[235946]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:26:05 standalone.localdomain sudo[236118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:26:05 standalone.localdomain sudo[236118]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:26:05 standalone.localdomain sudo[236118]: pam_unix(sudo:session): session closed for user root
Oct 13 14:26:05 standalone.localdomain runuser[236140]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:26:05 standalone.localdomain ceph-mon[29756]: pgmap v1546: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:05 standalone.localdomain sudo[236151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 14:26:05 standalone.localdomain sudo[236151]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:26:05 standalone.localdomain podman[236101]: 2025-10-13 14:26:05.150725591 +0000 UTC m=+0.178844063 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git)
Oct 13 14:26:05 standalone.localdomain podman[236101]: 2025-10-13 14:26:05.18260166 +0000 UTC m=+0.210720132 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-07-21T13:27:15, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, 
config_id=tripleo_step3, tcib_managed=true, com.redhat.component=openstack-iscsid-container, container_name=iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.openshift.expose-services=)
Oct 13 14:26:05 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:26:05 standalone.localdomain runuser[236140]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:26:05 standalone.localdomain podman[236294]: 2025-10-13 14:26:05.880928575 +0000 UTC m=+0.072567440 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.openshift.expose-services=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, name=rhceph, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64)
Oct 13 14:26:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1547: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:06 standalone.localdomain podman[236294]: 2025-10-13 14:26:06.003966284 +0000 UTC m=+0.195605169 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, ceph=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, CEPH_POINT_RELEASE=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, RELEASE=main, vendor=Red Hat, Inc., release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 14:26:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:26:06 standalone.localdomain sudo[236151]: pam_unix(sudo:session): session closed for user root
Oct 13 14:26:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:26:06 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:26:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:26:06 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:26:06 standalone.localdomain sudo[236459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:26:06 standalone.localdomain sudo[236459]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:26:06 standalone.localdomain sudo[236459]: pam_unix(sudo:session): session closed for user root
Oct 13 14:26:06 standalone.localdomain sudo[236474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:26:06 standalone.localdomain sudo[236474]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:26:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 14:26:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 3000.0 total, 600.0 interval
                                                        Cumulative writes: 8200 writes, 36K keys, 8200 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                        Cumulative WAL: 8200 writes, 8200 syncs, 1.00 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1576 writes, 7104 keys, 1576 commit groups, 1.0 writes per commit group, ingest: 5.88 MB, 0.01 MB/s
                                                        Interval WAL: 1576 writes, 1576 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    118.6      0.18              0.08        21    0.008       0      0       0.0       0.0
                                                          L6      1/0    5.80 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.8    222.3    186.7      0.43              0.23        20    0.021     77K    11K       0.0       0.0
                                                         Sum      1/0    5.80 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.8    157.2    166.7      0.61              0.31        41    0.015     77K    11K       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0    191.8    196.2      0.13              0.05         8    0.016     19K   2509       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    222.3    186.7      0.43              0.23        20    0.021     77K    11K       0.0       0.0
                                                        High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    120.4      0.18              0.08        20    0.009       0      0       0.0       0.0
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 3000.0 total, 600.0 interval
                                                        Flush(GB): cumulative 0.021, interval 0.005
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.10 GB write, 0.03 MB/s write, 0.09 GB read, 0.03 MB/s read, 0.6 seconds
                                                        Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.04 MB/s read, 0.1 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x557b7fb5b350#2 capacity: 308.00 MB usage: 14.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000275 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(1313,13.53 MB,4.3931%) FilterBlock(42,219.55 KB,0.0696108%) IndexBlock(42,341.86 KB,0.108392%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
Oct 13 14:26:07 standalone.localdomain ceph-mon[29756]: pgmap v1547: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:07 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:26:07 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:26:07 standalone.localdomain sudo[236474]: pam_unix(sudo:session): session closed for user root
Oct 13 14:26:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:26:07 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:26:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:26:07 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:26:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:26:07 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:26:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:26:07 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:26:07 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev e84fc3cc-f4fb-4843-a5b2-3ca1103b7bc4 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:26:07 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev e84fc3cc-f4fb-4843-a5b2-3ca1103b7bc4 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:26:07 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event e84fc3cc-f4fb-4843-a5b2-3ca1103b7bc4 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:26:07 standalone.localdomain sudo[236530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:26:07 standalone.localdomain sudo[236530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:26:07 standalone.localdomain sudo[236530]: pam_unix(sudo:session): session closed for user root
Oct 13 14:26:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1548: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:08 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:26:08 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:26:08 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:26:08 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:26:08 standalone.localdomain ceph-mon[29756]: pgmap v1548: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:09 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:26:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:26:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:26:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1549: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:10 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:26:10 standalone.localdomain ceph-mon[29756]: pgmap v1549: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:26:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1550: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:12 standalone.localdomain ceph-mon[29756]: pgmap v1550: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1551: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:26:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:26:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:26:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:26:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:26:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:26:14 standalone.localdomain podman[236809]: 2025-10-13 14:26:14.835259844 +0000 UTC m=+0.099847288 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, build-date=2025-07-21T15:44:11, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-engine-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, container_name=heat_engine, vcs-type=git)
Oct 13 14:26:14 standalone.localdomain podman[236809]: 2025-10-13 14:26:14.880324158 +0000 UTC m=+0.144911612 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, batch=17.1_20250721.1, com.redhat.component=openstack-heat-engine-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-engine, release=1, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, container_name=heat_engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, build-date=2025-07-21T15:44:11, name=rhosp17/openstack-heat-engine, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:26:14 standalone.localdomain systemd[1]: tmp-crun.2eWsYq.mount: Deactivated successfully.
Oct 13 14:26:14 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:26:14 standalone.localdomain podman[236810]: 2025-10-13 14:26:14.987063656 +0000 UTC m=+0.248439581 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_id=tripleo_step1, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 memcached, release=1, io.openshift.expose-services=, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=memcached, com.redhat.component=openstack-memcached-container, name=rhosp17/openstack-memcached, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:43, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:26:15 standalone.localdomain ceph-mon[29756]: pgmap v1551: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:15 standalone.localdomain podman[236810]: 2025-10-13 14:26:15.036851595 +0000 UTC m=+0.298227500 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, vcs-type=git, build-date=2025-07-21T12:58:43, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, version=17.1.9, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, batch=17.1_20250721.1, container_name=memcached, io.buildah.version=1.33.12, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:26:15 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:26:15 standalone.localdomain podman[236828]: 2025-10-13 14:26:15.03928581 +0000 UTC m=+0.291308517 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, 
io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1)
Oct 13 14:26:15 standalone.localdomain podman[236828]: 2025-10-13 14:26:15.132534473 +0000 UTC m=+0.384557210 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, 
com.redhat.component=openstack-cron-container, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1, batch=17.1_20250721.1)
Oct 13 14:26:15 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:26:15 standalone.localdomain podman[236808]: 2025-10-13 14:26:15.223131135 +0000 UTC m=+0.487155582 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, com.redhat.component=openstack-heat-api-cfn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, version=17.1.9, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T14:49:55, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 heat-api-cfn, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible)
Oct 13 14:26:15 standalone.localdomain podman[236808]: 2025-10-13 14:26:15.248893276 +0000 UTC m=+0.512917733 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, com.redhat.component=openstack-heat-api-cfn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T14:49:55, container_name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible)
Oct 13 14:26:15 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:26:15 standalone.localdomain podman[236811]: 2025-10-13 14:26:15.332157993 +0000 UTC m=+0.590637650 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, container_name=heat_api_cron, version=17.1.9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:26:15 standalone.localdomain podman[236816]: 2025-10-13 14:26:14.892670446 +0000 UTC m=+0.145065906 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 heat-api, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, distribution-scope=public, container_name=heat_api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 
heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530)
Oct 13 14:26:15 standalone.localdomain podman[236811]: 2025-10-13 14:26:15.371062438 +0000 UTC m=+0.629542095 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, summary=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.buildah.version=1.33.12, container_name=heat_api_cron, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., 
io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-heat-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=)
Oct 13 14:26:15 standalone.localdomain podman[236816]: 2025-10-13 14:26:15.387935416 +0000 UTC m=+0.640330886 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, container_name=heat_api, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, release=1, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:26:15 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:26:15 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:26:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1552: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:26:16 standalone.localdomain ceph-mon[29756]: pgmap v1552: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:16 standalone.localdomain runuser[237018]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:26:17 standalone.localdomain runuser[237018]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:26:17 standalone.localdomain runuser[237087]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:26:17 standalone.localdomain runuser[237087]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:26:17 standalone.localdomain runuser[237141]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:26:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1553: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:18 standalone.localdomain runuser[237141]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:26:19 standalone.localdomain ceph-mon[29756]: pgmap v1553: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1554: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:20 standalone.localdomain ceph-mon[29756]: pgmap v1554: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:26:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:26:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:26:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:26:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:26:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:26:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:26:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:26:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:26:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:26:21 standalone.localdomain systemd[1]: tmp-crun.JimcDG.mount: Deactivated successfully.
Oct 13 14:26:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1555: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:22 standalone.localdomain podman[237234]: 2025-10-13 14:26:21.980469721 +0000 UTC m=+0.245030945 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-swift-proxy-server, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, container_name=swift_proxy, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:26:22 standalone.localdomain podman[237231]: 2025-10-13 14:26:22.04456862 +0000 UTC m=+0.314848239 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:26:22 standalone.localdomain podman[237231]: 2025-10-13 14:26:22.08298562 +0000 UTC m=+0.353265209 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.component=openstack-glance-api-container, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, name=rhosp17/openstack-glance-api, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 glance-api, 
description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, io.buildah.version=1.33.12, container_name=glance_api_cron, managed_by=tripleo_ansible, config_id=tripleo_step4)
Oct 13 14:26:22 standalone.localdomain podman[237257]: 2025-10-13 14:26:22.095165364 +0000 UTC m=+0.346391718 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, container_name=ovn_metadata_agent, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Oct 13 14:26:22 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:26:22 standalone.localdomain podman[237245]: 2025-10-13 14:26:21.842719171 +0000 UTC m=+0.103394856 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:20, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, container_name=glance_api_internal, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, architecture=x86_64)
Oct 13 14:26:22 standalone.localdomain podman[237262]: 2025-10-13 14:26:22.002140817 +0000 UTC m=+0.250224265 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, config_id=tripleo_step4)
Oct 13 14:26:22 standalone.localdomain podman[237245]: 2025-10-13 14:26:22.133049617 +0000 UTC m=+0.393725342 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-glance-api-container, build-date=2025-07-21T13:58:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, io.buildah.version=1.33.12, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, container_name=glance_api_internal, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:26:22 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:26:22 standalone.localdomain ceph-mon[29756]: pgmap v1555: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:22 standalone.localdomain podman[237257]: 2025-10-13 14:26:22.160745068 +0000 UTC m=+0.411971432 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12)
Oct 13 14:26:22 standalone.localdomain podman[237257]: unhealthy
Oct 13 14:26:22 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:26:22 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:26:22 standalone.localdomain podman[237265]: 2025-10-13 14:26:22.140544417 +0000 UTC m=+0.380218497 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, vcs-type=git, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, managed_by=tripleo_ansible, container_name=swift_account_server, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, 
distribution-scope=public, release=1, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account)
Oct 13 14:26:22 standalone.localdomain podman[237232]: 2025-10-13 14:26:22.250254737 +0000 UTC m=+0.519734872 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, build-date=2025-07-21T14:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12)
Oct 13 14:26:22 standalone.localdomain podman[237234]: 2025-10-13 14:26:22.279075902 +0000 UTC m=+0.543637136 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, name=rhosp17/openstack-swift-proxy-server, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-type=git, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, 
com.redhat.component=openstack-swift-proxy-server-container, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy)
Oct 13 14:26:22 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:26:22 standalone.localdomain podman[237262]: 2025-10-13 14:26:22.35128974 +0000 UTC m=+0.599373218 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, build-date=2025-07-21T13:28:44, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245)
Oct 13 14:26:22 standalone.localdomain podman[237262]: unhealthy
Oct 13 14:26:22 standalone.localdomain podman[237265]: 2025-10-13 14:26:22.363898696 +0000 UTC m=+0.603572756 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, name=rhosp17/openstack-swift-account, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO 
Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 14:26:22 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:26:22 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:26:22 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:26:22 standalone.localdomain podman[237233]: 2025-10-13 14:26:22.436129214 +0000 UTC m=+0.701864215 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.9, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, build-date=2025-07-21T14:48:37)
Oct 13 14:26:22 standalone.localdomain podman[237232]: 2025-10-13 14:26:22.465874108 +0000 UTC m=+0.735354243 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-object-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, container_name=swift_object_server, name=rhosp17/openstack-swift-object, tcib_managed=true)
Oct 13 14:26:22 standalone.localdomain podman[237252]: 2025-10-13 14:26:22.495566959 +0000 UTC m=+0.742972896 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vcs-type=git, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=, build-date=2025-07-21T15:54:32)
Oct 13 14:26:22 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:26:22 standalone.localdomain podman[237252]: 2025-10-13 14:26:22.677213918 +0000 UTC m=+0.924619865 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, release=1, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, com.redhat.component=openstack-swift-container-container, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12)
Oct 13 14:26:22 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:26:22 standalone.localdomain podman[237233]: 2025-10-13 14:26:22.797060988 +0000 UTC m=+1.062795959 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9)
Oct 13 14:26:22 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:26:23
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'manila_metadata', 'images', 'manila_data', 'volumes', '.mgr', 'vms']
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:26:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1556: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:25 standalone.localdomain ceph-mon[29756]: pgmap v1556: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1557: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:26:26 standalone.localdomain ceph-mon[29756]: pgmap v1557: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:26:27 standalone.localdomain podman[237748]: 2025-10-13 14:26:27.80000315 +0000 UTC m=+0.069036151 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, com.redhat.component=openstack-keystone-container, version=17.1.9)
Oct 13 14:26:27 standalone.localdomain podman[237748]: 2025-10-13 14:26:27.811930176 +0000 UTC m=+0.080963197 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, release=1, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', 
'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=keystone_cron, name=rhosp17/openstack-keystone, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc.)
Oct 13 14:26:27 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:26:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1558: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:26:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:26:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:26:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:26:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:26:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:26:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:26:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:26:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:26:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:26:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:26:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:26:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:26:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:26:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:26:29 standalone.localdomain ceph-mon[29756]: pgmap v1558: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:29 standalone.localdomain runuser[237780]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:26:29 standalone.localdomain runuser[237780]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:26:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1559: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:29 standalone.localdomain runuser[237849]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:26:30 standalone.localdomain ceph-mon[29756]: pgmap v1559: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:30 standalone.localdomain runuser[237849]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:26:30 standalone.localdomain runuser[237911]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:26:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:26:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:26:30 standalone.localdomain systemd[1]: tmp-crun.2MkkQi.mount: Deactivated successfully.
Oct 13 14:26:30 standalone.localdomain podman[237959]: 2025-10-13 14:26:30.801482404 +0000 UTC m=+0.070860507 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T16:03:34, com.redhat.component=openstack-neutron-sriov-agent-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, container_name=neutron_sriov_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:26:30 standalone.localdomain podman[237958]: 2025-10-13 14:26:30.844938229 +0000 UTC m=+0.116166389 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:26:30 standalone.localdomain podman[237959]: 2025-10-13 14:26:30.862126527 +0000 UTC m=+0.131504610 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, version=17.1.9, release=1, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:03:34, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, name=rhosp17/openstack-neutron-sriov-agent, distribution-scope=public, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=neutron_sriov_agent, vcs-type=git)
Oct 13 14:26:30 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:26:30 standalone.localdomain podman[237958]: 2025-10-13 14:26:30.909222903 +0000 UTC m=+0.180451083 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-neutron-dhcp-agent, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:54, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=neutron_dhcp, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9)
Oct 13 14:26:30 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:26:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:26:31 standalone.localdomain runuser[237911]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:26:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1560: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:32 standalone.localdomain ceph-mon[29756]: pgmap v1560: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1561: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:26:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:26:34 standalone.localdomain podman[238257]: 2025-10-13 14:26:34.782390116 +0000 UTC m=+0.056999891 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, container_name=nova_compute, release=1, managed_by=tripleo_ansible, batch=17.1_20250721.1, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 14:26:34 standalone.localdomain systemd[1]: tmp-crun.M80c3u.mount: Deactivated successfully.
Oct 13 14:26:34 standalone.localdomain podman[238257]: 2025-10-13 14:26:34.879901001 +0000 UTC m=+0.154510776 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, batch=17.1_20250721.1, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, release=1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:26:34 standalone.localdomain podman[238258]: 2025-10-13 14:26:34.886782132 +0000 UTC m=+0.156670012 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, name=rhosp17/openstack-mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, container_name=clustercheck, batch=17.1_20250721.1, distribution-scope=public, 
io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true)
Oct 13 14:26:34 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:26:34 standalone.localdomain podman[238258]: 2025-10-13 14:26:34.932899339 +0000 UTC m=+0.202787619 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=clustercheck, io.buildah.version=1.33.12, com.redhat.component=openstack-mariadb-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', 
'/var/lib/mysql:/var/lib/mysql']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, architecture=x86_64, config_id=tripleo_step2, batch=17.1_20250721.1, distribution-scope=public)
Oct 13 14:26:34 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:26:35 standalone.localdomain ceph-mon[29756]: pgmap v1561: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:26:35 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:26:35 standalone.localdomain recover_tripleo_nova_virtqemud[238332]: 93291
Oct 13 14:26:35 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:26:35 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:26:35 standalone.localdomain podman[238330]: 2025-10-13 14:26:35.809580891 +0000 UTC m=+0.071733145 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, name=rhosp17/openstack-iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, build-date=2025-07-21T13:27:15)
Oct 13 14:26:35 standalone.localdomain podman[238330]: 2025-10-13 14:26:35.817542665 +0000 UTC m=+0.079694919 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Oct 13 14:26:35 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:26:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1562: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:26:37 standalone.localdomain ceph-mon[29756]: pgmap v1562: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1563: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:38 standalone.localdomain ceph-mon[29756]: pgmap v1563: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1564: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:41 standalone.localdomain ceph-mon[29756]: pgmap v1564: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:26:41 standalone.localdomain runuser[238456]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:26:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1565: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:42 standalone.localdomain ceph-mon[29756]: pgmap v1565: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:42 standalone.localdomain runuser[238456]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:26:42 standalone.localdomain runuser[238525]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:26:43 standalone.localdomain runuser[238525]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:26:43 standalone.localdomain runuser[238661]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:26:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1566: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:44 standalone.localdomain runuser[238661]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:26:45 standalone.localdomain ceph-mon[29756]: pgmap v1566: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:26:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:26:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:26:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:26:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:26:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:26:45 standalone.localdomain podman[238868]: 2025-10-13 14:26:45.828766009 +0000 UTC m=+0.091235552 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-heat-api-cfn, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, managed_by=tripleo_ansible, release=1, container_name=heat_api_cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, 
com.redhat.component=openstack-heat-api-cfn-container, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, build-date=2025-07-21T14:49:55)
Oct 13 14:26:45 standalone.localdomain podman[238870]: 2025-10-13 14:26:45.87727607 +0000 UTC m=+0.131748457 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vcs-type=git, version=17.1.9, release=1, name=rhosp17/openstack-memcached, com.redhat.component=openstack-memcached-container, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
memcached, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=memcached, build-date=2025-07-21T12:58:43, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public)
Oct 13 14:26:45 standalone.localdomain podman[238868]: 2025-10-13 14:26:45.88282393 +0000 UTC m=+0.145293463 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cfn, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-heat-api-cfn-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T14:49:55, name=rhosp17/openstack-heat-api-cfn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.9, description=Red Hat 
OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:26:45 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:26:45 standalone.localdomain podman[238876]: 2025-10-13 14:26:45.929183813 +0000 UTC m=+0.179365379 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, distribution-scope=public, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, release=1, vcs-type=git, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, name=rhosp17/openstack-heat-api)
Oct 13 14:26:45 standalone.localdomain podman[238876]: 2025-10-13 14:26:45.939785379 +0000 UTC m=+0.189966975 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, container_name=heat_api_cron, io.buildah.version=1.33.12, build-date=2025-07-21T15:56:26, 
tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, release=1, config_id=tripleo_step4, io.openshift.expose-services=)
Oct 13 14:26:45 standalone.localdomain podman[238869]: 2025-10-13 14:26:45.974063122 +0000 UTC m=+0.234123811 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, com.redhat.component=openstack-heat-engine-container, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, build-date=2025-07-21T15:44:11, container_name=heat_engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public)
Oct 13 14:26:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1567: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:45 standalone.localdomain podman[238869]: 2025-10-13 14:26:45.995305714 +0000 UTC m=+0.255366393 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, architecture=x86_64, container_name=heat_engine, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:44:11, com.redhat.component=openstack-heat-engine-container)
Oct 13 14:26:46 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:26:46 standalone.localdomain podman[238882]: 2025-10-13 14:26:45.856214422 +0000 UTC m=+0.099381492 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, release=1, com.redhat.component=openstack-heat-api-container, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git)
Oct 13 14:26:46 standalone.localdomain podman[238882]: 2025-10-13 14:26:46.035734575 +0000 UTC m=+0.278901665 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, release=1, 
vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, vcs-type=git)
Oct 13 14:26:46 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:26:46 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:26:46 standalone.localdomain podman[238870]: 2025-10-13 14:26:46.152327236 +0000 UTC m=+0.406799633 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, vcs-type=git, version=17.1.9, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, distribution-scope=public, config_id=tripleo_step1, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, maintainer=OpenStack TripleO Team, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, batch=17.1_20250721.1, name=rhosp17/openstack-memcached, io.openshift.expose-services=, io.buildah.version=1.33.12, container_name=memcached)
Oct 13 14:26:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:26:46 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:26:46 standalone.localdomain podman[238887]: 2025-10-13 14:26:46.238758961 +0000 UTC m=+0.483975454 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, architecture=x86_64, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:26:46 standalone.localdomain podman[238887]: 2025-10-13 14:26:46.248313944 +0000 UTC m=+0.493530447 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, release=1, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.component=openstack-cron-container, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9)
Oct 13 14:26:46 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:26:47 standalone.localdomain ceph-mon[29756]: pgmap v1567: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1568: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:49 standalone.localdomain ceph-mon[29756]: pgmap v1568: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1569: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:50 standalone.localdomain ceph-mon[29756]: pgmap v1569: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:26:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1570: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:52 standalone.localdomain ceph-mon[29756]: pgmap v1570: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:26:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:26:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:26:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:26:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:26:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:26:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:26:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:26:52 standalone.localdomain podman[239124]: 2025-10-13 14:26:52.835724625 +0000 UTC m=+0.084865968 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:20, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, managed_by=tripleo_ansible, container_name=glance_api_internal, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., release=1, architecture=x86_64)
Oct 13 14:26:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:26:52 standalone.localdomain podman[239139]: 2025-10-13 14:26:52.892404415 +0000 UTC m=+0.133343466 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, container_name=ovn_controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:26:52 standalone.localdomain podman[239116]: 2025-10-13 14:26:52.994267613 +0000 UTC m=+0.255136116 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, config_id=tripleo_step4, release=1, 
maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, distribution-scope=public, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20)
Oct 13 14:26:53 standalone.localdomain podman[239132]: 2025-10-13 14:26:53.008988125 +0000 UTC m=+0.246447659 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:26:53 standalone.localdomain podman[239153]: 2025-10-13 14:26:52.969625116 +0000 UTC m=+0.200854488 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, architecture=x86_64, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible)
Oct 13 14:26:53 standalone.localdomain podman[239139]: 2025-10-13 14:26:53.039083299 +0000 UTC m=+0.280022330 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, version=17.1.9, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true)
Oct 13 14:26:53 standalone.localdomain podman[239139]: unhealthy
Oct 13 14:26:53 standalone.localdomain podman[239124]: 2025-10-13 14:26:53.048985104 +0000 UTC m=+0.298126497 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, build-date=2025-07-21T13:58:20, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-glance-api, container_name=glance_api_internal, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, version=17.1.9)
Oct 13 14:26:53 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:26:53 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:26:53 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:26:53 standalone.localdomain podman[239132]: 2025-10-13 14:26:53.105742797 +0000 UTC m=+0.343202341 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20250721.1, vendor=Red Hat, Inc., container_name=ovn_metadata_agent)
Oct 13 14:26:53 standalone.localdomain podman[239132]: unhealthy
Oct 13 14:26:53 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:26:53 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:26:53 standalone.localdomain podman[239118]: 2025-10-13 14:26:53.12439321 +0000 UTC m=+0.379598539 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, name=rhosp17/openstack-swift-proxy-server, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc.)
Oct 13 14:26:53 standalone.localdomain podman[239116]: 2025-10-13 14:26:53.135017375 +0000 UTC m=+0.395885878 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, container_name=glance_api_cron, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-glance-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, build-date=2025-07-21T13:58:20, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc.)
Oct 13 14:26:53 standalone.localdomain podman[239208]: 2025-10-13 14:26:52.93588856 +0000 UTC m=+0.084354641 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 
nova-compute, release=1, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=nova_migration_target, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:26:53 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:26:53 standalone.localdomain podman[239143]: 2025-10-13 14:26:53.055477813 +0000 UTC m=+0.289708447 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-swift-account-container, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 14:26:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:26:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:26:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:26:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:26:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:26:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:26:53 standalone.localdomain podman[239153]: 2025-10-13 14:26:53.202904081 +0000 UTC m=+0.434133463 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, build-date=2025-07-21T15:54:32, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1)
Oct 13 14:26:53 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:26:53 standalone.localdomain podman[239143]: 2025-10-13 14:26:53.228447425 +0000 UTC m=+0.462678069 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, build-date=2025-07-21T16:11:22, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, 
distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-account-container, name=rhosp17/openstack-swift-account, container_name=swift_account_server, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:26:53 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:26:53 standalone.localdomain podman[239117]: 2025-10-13 14:26:53.234841521 +0000 UTC m=+0.490468023 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-swift-object-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, container_name=swift_object_server, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-object, 
build-date=2025-07-21T14:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, architecture=x86_64)
Oct 13 14:26:53 standalone.localdomain podman[239208]: 2025-10-13 14:26:53.309832215 +0000 UTC m=+0.458298296 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, 
config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=)
Oct 13 14:26:53 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:26:53 standalone.localdomain podman[239118]: 2025-10-13 14:26:53.323862395 +0000 UTC m=+0.579067734 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vcs-type=git, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, name=rhosp17/openstack-swift-proxy-server, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, io.buildah.version=1.33.12, container_name=swift_proxy, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.component=openstack-swift-proxy-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vendor=Red Hat, Inc.)
Oct 13 14:26:53 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:26:53 standalone.localdomain podman[239117]: 2025-10-13 14:26:53.421930937 +0000 UTC m=+0.677557459 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-swift-object-container, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, container_name=swift_object_server, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:56:28, summary=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4)
Oct 13 14:26:53 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:26:53 standalone.localdomain systemd[1]: tmp-crun.2WJnQD.mount: Deactivated successfully.
Oct 13 14:26:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1571: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:54 standalone.localdomain runuser[239537]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:26:55 standalone.localdomain ceph-mon[29756]: pgmap v1571: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:55 standalone.localdomain runuser[239537]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:26:55 standalone.localdomain runuser[239617]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:26:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1572: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:56 standalone.localdomain runuser[239617]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:26:56 standalone.localdomain runuser[239679]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.171164) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365616171208, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1136, "num_deletes": 251, "total_data_size": 964763, "memory_usage": 985352, "flush_reason": "Manual Compaction"}
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365616178393, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 602930, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36516, "largest_seqno": 37651, "table_properties": {"data_size": 599059, "index_size": 1536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10351, "raw_average_key_size": 20, "raw_value_size": 590544, "raw_average_value_size": 1174, "num_data_blocks": 70, "num_entries": 503, "num_filter_entries": 503, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760365521, "oldest_key_time": 1760365521, "file_creation_time": 1760365616, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 7302 microseconds, and 2680 cpu microseconds.
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.178453) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 602930 bytes OK
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.178520) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.180463) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.180500) EVENT_LOG_v1 {"time_micros": 1760365616180477, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.180520) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 959440, prev total WAL file size 959929, number of live WAL files 2.
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.181067) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323531' seq:72057594037927935, type:22 .. '6D6772737461740031353033' seq:0, type:0; will stop at (end)
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(588KB)], [77(5937KB)]
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365616181115, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 6682986, "oldest_snapshot_seqno": -1}
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 4569 keys, 4946140 bytes, temperature: kUnknown
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365616207378, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 4946140, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4918580, "index_size": 15048, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11461, "raw_key_size": 112627, "raw_average_key_size": 24, "raw_value_size": 4838881, "raw_average_value_size": 1059, "num_data_blocks": 631, "num_entries": 4569, "num_filter_entries": 4569, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760365616, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.207583) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 4946140 bytes
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.209278) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 253.9 rd, 187.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 5.8 +0.0 blob) out(4.7 +0.0 blob), read-write-amplify(19.3) write-amplify(8.2) OK, records in: 5047, records dropped: 478 output_compression: NoCompression
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.209298) EVENT_LOG_v1 {"time_micros": 1760365616209288, "job": 44, "event": "compaction_finished", "compaction_time_micros": 26318, "compaction_time_cpu_micros": 13085, "output_level": 6, "num_output_files": 1, "total_output_size": 4946140, "num_input_records": 5047, "num_output_records": 4569, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365616209462, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365616209969, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.180976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.210039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.210044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.210046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.210048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:26:56 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:26:56.210050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:26:56 standalone.localdomain runuser[239679]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:26:57 standalone.localdomain ceph-mon[29756]: pgmap v1572: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1573: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:26:58 standalone.localdomain podman[239842]: 2025-10-13 14:26:58.814703989 +0000 UTC m=+0.076899473 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, batch=17.1_20250721.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, release=1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_id=tripleo_step3, com.redhat.component=openstack-keystone-container, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-keystone, build-date=2025-07-21T13:27:18, container_name=keystone_cron)
Oct 13 14:26:58 standalone.localdomain podman[239842]: 2025-10-13 14:26:58.850605842 +0000 UTC m=+0.112801336 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, release=1, version=17.1.9, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-keystone, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, container_name=keystone_cron, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:26:58 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:26:59 standalone.localdomain ceph-mon[29756]: pgmap v1573: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:26:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1574: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:00 standalone.localdomain ceph-mon[29756]: pgmap v1574: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:27:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:27:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:27:01 standalone.localdomain podman[239885]: 2025-10-13 14:27:01.797384057 +0000 UTC m=+0.062934434 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, architecture=x86_64, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vcs-type=git, container_name=neutron_dhcp, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, build-date=2025-07-21T16:28:54, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-dhcp-agent-container, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Oct 13 14:27:01 standalone.localdomain podman[239886]: 2025-10-13 14:27:01.85673891 +0000 UTC m=+0.119117780 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T16:03:34, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-neutron-sriov-agent, version=17.1.9, io.buildah.version=1.33.12, container_name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:27:01 standalone.localdomain podman[239885]: 2025-10-13 14:27:01.887467263 +0000 UTC m=+0.153017630 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, name=rhosp17/openstack-neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, container_name=neutron_dhcp, tcib_managed=true, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.component=openstack-neutron-dhcp-agent-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, managed_by=tripleo_ansible)
Oct 13 14:27:01 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:27:01 standalone.localdomain podman[239886]: 2025-10-13 14:27:01.915560546 +0000 UTC m=+0.177939416 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20250721.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, 
name=rhosp17/openstack-neutron-sriov-agent, container_name=neutron_sriov_agent, release=1, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, architecture=x86_64, version=17.1.9)
Oct 13 14:27:01 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:27:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1575: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:02 standalone.localdomain ceph-mon[29756]: pgmap v1575: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1576: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:04 standalone.localdomain ceph-mon[29756]: pgmap v1576: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:04 standalone.localdomain podman[240084]: 2025-10-13 14:27:04.294852924 +0000 UTC m=+0.123755161 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, com.redhat.component=openstack-mariadb-container, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, release=1, name=rhosp17/openstack-mariadb, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45)
Oct 13 14:27:04 standalone.localdomain podman[240084]: 2025-10-13 14:27:04.340826125 +0000 UTC m=+0.169728372 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, name=rhosp17/openstack-mariadb, build-date=2025-07-21T12:58:45, version=17.1.9, com.redhat.component=openstack-mariadb-container, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:27:04 standalone.localdomain podman[240158]: 2025-10-13 14:27:04.686516362 +0000 UTC m=+0.142226219 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.component=openstack-haproxy-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, name=rhosp17/openstack-haproxy, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:27:04 standalone.localdomain podman[240158]: 2025-10-13 14:27:04.761813474 +0000 UTC m=+0.217523331 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T13:08:11, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-haproxy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-haproxy-container, release=1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50)
Oct 13 14:27:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:27:05 standalone.localdomain podman[240239]: 2025-10-13 14:27:05.138049838 +0000 UTC m=+0.093620115 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, release=1, version=17.1.9, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-rabbitmq-container, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-rabbitmq, io.buildah.version=1.33.12, build-date=2025-07-21T13:08:05)
Oct 13 14:27:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:27:05 standalone.localdomain podman[240239]: 2025-10-13 14:27:05.29832054 +0000 UTC m=+0.253890817 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, name=rhosp17/openstack-rabbitmq, io.buildah.version=1.33.12, com.redhat.component=openstack-rabbitmq-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, release=1, architecture=x86_64)
Oct 13 14:27:05 standalone.localdomain podman[240257]: 2025-10-13 14:27:05.272137156 +0000 UTC m=+0.127493437 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, release=1, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute)
Oct 13 14:27:05 standalone.localdomain podman[240257]: 2025-10-13 14:27:05.900245865 +0000 UTC m=+0.755602166 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true)
Oct 13 14:27:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:27:05 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:27:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1577: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:06 standalone.localdomain podman[240258]: 2025-10-13 14:27:05.909615172 +0000 UTC m=+0.764025883 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, config_id=tripleo_step2, container_name=clustercheck, name=rhosp17/openstack-mariadb)
Oct 13 14:27:06 standalone.localdomain podman[240258]: 2025-10-13 14:27:06.114119872 +0000 UTC m=+0.968530603 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, description=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-mariadb-container, config_id=tripleo_step2, release=1, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, container_name=clustercheck, name=rhosp17/openstack-mariadb, build-date=2025-07-21T12:58:45, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1)
Oct 13 14:27:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:27:06 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:27:06 standalone.localdomain ceph-mon[29756]: pgmap v1577: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:06 standalone.localdomain podman[240320]: 2025-10-13 14:27:06.546560174 +0000 UTC m=+0.610246943 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:27:06 standalone.localdomain podman[240320]: 2025-10-13 14:27:06.613658654 +0000 UTC m=+0.677345453 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, 
architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:27:06 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:27:07 standalone.localdomain runuser[240427]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:27:07 standalone.localdomain sudo[240417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:27:07 standalone.localdomain sudo[240417]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:27:07 standalone.localdomain sudo[240417]: pam_unix(sudo:session): session closed for user root
Oct 13 14:27:07 standalone.localdomain sudo[240452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:27:07 standalone.localdomain sudo[240452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:27:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1578: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:07 standalone.localdomain sudo[240452]: pam_unix(sudo:session): session closed for user root
Oct 13 14:27:08 standalone.localdomain runuser[240427]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:27:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:27:08 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:27:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:27:08 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:27:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:27:08 standalone.localdomain runuser[240552]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:27:08 standalone.localdomain account-server[114555]: 172.20.0.100 - - [13/Oct/2025:14:27:08 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx28c01d51d8bd4112abb75-0068ed0c3c" "proxy-server 2" 0.0007 "-" 20 -
Oct 13 14:27:08 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx28c01d51d8bd4112abb75-0068ed0c3c)
Oct 13 14:27:08 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:27:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:27:08 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:27:08 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev dcd8fbea-8cc8-40bf-94ba-6f46f4e7ae3f (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:27:08 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev dcd8fbea-8cc8-40bf-94ba-6f46f4e7ae3f (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:27:08 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event dcd8fbea-8cc8-40bf-94ba-6f46f4e7ae3f (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:27:08 standalone.localdomain sudo[240597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:27:08 standalone.localdomain sudo[240597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:27:08 standalone.localdomain sudo[240597]: pam_unix(sudo:session): session closed for user root
Oct 13 14:27:08 standalone.localdomain ceph-mon[29756]: pgmap v1578: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:08 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:27:08 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:27:08 standalone.localdomain runuser[240552]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:27:08 standalone.localdomain runuser[240621]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:27:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 14:27:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 3000.1 total, 600.0 interval
                                                        Cumulative writes: 6951 writes, 30K keys, 6951 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                                        Cumulative WAL: 6951 writes, 1290 syncs, 5.39 writes per sync, written: 0.03 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1093 writes, 4895 keys, 1093 commit groups, 1.0 writes per commit group, ingest: 6.14 MB, 0.01 MB/s
                                                        Interval WAL: 1093 writes, 317 syncs, 3.45 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 13 14:27:09 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:27:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:27:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:27:09 standalone.localdomain runuser[240621]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:27:09 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:27:09 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:27:09 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:27:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1579: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:10 standalone.localdomain ceph-mon[29756]: pgmap v1579: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:27:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1580: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:12 standalone.localdomain ceph-mon[29756]: pgmap v1580: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1581: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:15 standalone.localdomain ceph-mon[29756]: pgmap v1581: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:15 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Check health
Oct 13 14:27:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1582: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:27:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:27:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:27:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:27:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:27:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:27:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:27:16 standalone.localdomain podman[240950]: 2025-10-13 14:27:16.874731464 +0000 UTC m=+0.125951989 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, container_name=heat_api_cfn, build-date=2025-07-21T14:49:55, com.redhat.component=openstack-heat-api-cfn-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, distribution-scope=public, version=17.1.9, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, name=rhosp17/openstack-heat-api-cfn, tcib_managed=true)
Oct 13 14:27:16 standalone.localdomain podman[240950]: 2025-10-13 14:27:16.898794552 +0000 UTC m=+0.150015097 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:49:55, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, release=1, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-cfn-container, summary=Red 
Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, container_name=heat_api_cfn, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:27:16 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:27:16 standalone.localdomain podman[240952]: 2025-10-13 14:27:16.825601195 +0000 UTC m=+0.078448041 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, vcs-type=git, release=1, version=17.1.9, name=rhosp17/openstack-memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_id=tripleo_step1, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vendor=Red Hat, Inc., 
com.redhat.component=openstack-memcached-container, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:43, summary=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:27:16 standalone.localdomain podman[240951]: 2025-10-13 14:27:16.938694187 +0000 UTC m=+0.194953617 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T15:44:11, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, 
architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-heat-engine-container, batch=17.1_20250721.1, container_name=heat_engine)
Oct 13 14:27:16 standalone.localdomain podman[240976]: 2025-10-13 14:27:16.978564562 +0000 UTC m=+0.217316675 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, release=1, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52)
Oct 13 14:27:16 standalone.localdomain podman[240976]: 2025-10-13 14:27:16.984311329 +0000 UTC m=+0.223063432 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., container_name=logrotate_crond, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20250721.1, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, architecture=x86_64, vcs-type=git)
Oct 13 14:27:16 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:27:17 standalone.localdomain podman[240970]: 2025-10-13 14:27:16.85212206 +0000 UTC m=+0.092430040 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, 
com.redhat.component=openstack-heat-api-container, architecture=x86_64, container_name=heat_api, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:27:17 standalone.localdomain podman[240970]: 2025-10-13 14:27:17.033383995 +0000 UTC m=+0.273692005 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T15:56:26, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, 
vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, tcib_managed=true)
Oct 13 14:27:17 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:27:17 standalone.localdomain podman[240958]: 2025-10-13 14:27:17.044543988 +0000 UTC m=+0.291120681 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, build-date=2025-07-21T15:56:26, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-api, 
maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, com.redhat.component=openstack-heat-api-container, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4)
Oct 13 14:27:17 standalone.localdomain podman[240958]: 2025-10-13 14:27:17.051981477 +0000 UTC m=+0.298558170 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, summary=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, config_id=tripleo_step4, container_name=heat_api_cron, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:27:17 standalone.localdomain podman[240952]: 2025-10-13 14:27:17.058724404 +0000 UTC m=+0.311571250 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, tcib_managed=true, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack 
TripleO Team, config_id=tripleo_step1, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached)
Oct 13 14:27:17 standalone.localdomain podman[240951]: 2025-10-13 14:27:17.059035273 +0000 UTC m=+0.315294703 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, container_name=heat_engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:11, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, name=rhosp17/openstack-heat-engine, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, release=1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-engine-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:27:17 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:27:17 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:27:17 standalone.localdomain ceph-mon[29756]: pgmap v1582: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:17 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:27:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1583: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:19 standalone.localdomain ceph-mon[29756]: pgmap v1583: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1584: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:20 standalone.localdomain runuser[241173]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:27:20 standalone.localdomain ceph-mon[29756]: pgmap v1584: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:20 standalone.localdomain runuser[241173]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:27:20 standalone.localdomain runuser[241242]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:27:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:27:21 standalone.localdomain runuser[241242]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:27:21 standalone.localdomain runuser[241304]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:27:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1585: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:22 standalone.localdomain runuser[241304]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:27:22 standalone.localdomain ceph-mon[29756]: pgmap v1585: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:27:23
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'volumes', 'backups', 'manila_metadata', 'manila_data', '.mgr', 'images']
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:27:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:27:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:27:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:27:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:27:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:27:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:27:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:27:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:27:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:27:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1586: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:24 standalone.localdomain podman[241464]: 2025-10-13 14:27:24.107306725 +0000 UTC m=+0.355519358 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat 
OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-glance-api-container, name=rhosp17/openstack-glance-api, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, container_name=glance_api_internal)
Oct 13 14:27:24 standalone.localdomain podman[241462]: 2025-10-13 14:27:23.960847517 +0000 UTC m=+0.215753596 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vcs-type=git)
Oct 13 14:27:24 standalone.localdomain ceph-mon[29756]: pgmap v1586: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:24 standalone.localdomain systemd[1]: tmp-crun.moLvP1.mount: Deactivated successfully.
Oct 13 14:27:24 standalone.localdomain podman[241463]: 2025-10-13 14:27:24.207557414 +0000 UTC m=+0.463119643 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, container_name=swift_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, version=17.1.9, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-proxy-server-container, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4)
Oct 13 14:27:24 standalone.localdomain podman[241461]: 2025-10-13 14:27:24.280814693 +0000 UTC m=+0.539074055 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, tcib_managed=true, build-date=2025-07-21T14:56:28, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, vendor=Red Hat, Inc., 
io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, architecture=x86_64, name=rhosp17/openstack-swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, container_name=swift_object_server)
Oct 13 14:27:24 standalone.localdomain podman[241464]: 2025-10-13 14:27:24.301809509 +0000 UTC m=+0.550022162 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:20, version=17.1.9, release=1, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, architecture=x86_64, container_name=glance_api_internal)
Oct 13 14:27:24 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:27:24 standalone.localdomain podman[241462]: 2025-10-13 14:27:24.415743178 +0000 UTC m=+0.670649237 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 13 14:27:24 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:27:24 standalone.localdomain podman[241463]: 2025-10-13 14:27:24.56756072 +0000 UTC m=+0.823122929 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_proxy, distribution-scope=public, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.component=openstack-swift-proxy-server-container, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1)
Oct 13 14:27:24 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:27:24 standalone.localdomain podman[241461]: 2025-10-13 14:27:24.619052571 +0000 UTC m=+0.877311973 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, container_name=swift_object_server, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack 
Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc.)
Oct 13 14:27:24 standalone.localdomain podman[241499]: 2025-10-13 14:27:24.316557271 +0000 UTC m=+0.542304034 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.expose-services=, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, container_name=swift_account_server, release=1, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 
swift-account, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container)
Oct 13 14:27:24 standalone.localdomain podman[241499]: 2025-10-13 14:27:24.659292456 +0000 UTC m=+0.885039219 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T16:11:22, release=1, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, config_id=tripleo_step4, io.openshift.expose-services=, container_name=swift_account_server)
Oct 13 14:27:24 standalone.localdomain podman[241495]: 2025-10-13 14:27:24.673633528 +0000 UTC m=+0.912174164 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12)
Oct 13 14:27:24 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:27:24 standalone.localdomain podman[241495]: 2025-10-13 14:27:24.866724898 +0000 UTC m=+1.105265554 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, distribution-scope=public, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4)
Oct 13 14:27:24 standalone.localdomain podman[241495]: unhealthy
Oct 13 14:27:24 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:27:24 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:27:24 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:27:24 standalone.localdomain podman[241460]: 2025-10-13 14:27:24.878806878 +0000 UTC m=+1.130674754 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=glance_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, build-date=2025-07-21T13:58:20, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, release=1, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-glance-api-container)
Oct 13 14:27:25 standalone.localdomain podman[241460]: 2025-10-13 14:27:25.026265266 +0000 UTC m=+1.278133162 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, vendor=Red Hat, Inc., io.buildah.version=1.33.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git)
Oct 13 14:27:25 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:27:25 standalone.localdomain podman[241487]: 2025-10-13 14:27:25.17193364 +0000 UTC m=+1.400093557 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, tcib_managed=true, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.33.12, version=17.1.9, vcs-type=git, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Oct 13 14:27:25 standalone.localdomain podman[241487]: 2025-10-13 14:27:25.282334941 +0000 UTC m=+1.510494878 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true)
Oct 13 14:27:25 standalone.localdomain podman[241487]: unhealthy
Oct 13 14:27:25 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:27:25 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:27:25 standalone.localdomain podman[241480]: 2025-10-13 14:27:24.739716386 +0000 UTC m=+0.984499534 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, vendor=Red Hat, Inc., vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:27:25 standalone.localdomain podman[241480]: 2025-10-13 14:27:25.379378711 +0000 UTC m=+1.624161929 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, name=rhosp17/openstack-swift-container, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, tcib_managed=true, managed_by=tripleo_ansible, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:27:25 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:27:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1587: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:26 standalone.localdomain ceph-mon[29756]: pgmap v1587: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:27:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1588: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:27:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:27:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:27:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:27:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:27:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:27:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:27:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:27:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:27:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:27:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:27:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:27:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:27:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:27:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:27:29 standalone.localdomain ceph-mon[29756]: pgmap v1588: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:27:29 standalone.localdomain podman[241896]: 2025-10-13 14:27:29.87853821 +0000 UTC m=+0.140856707 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, batch=17.1_20250721.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', 
'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-keystone, container_name=keystone_cron, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:18, version=17.1.9, release=1, vcs-type=git)
Oct 13 14:27:29 standalone.localdomain podman[241896]: 2025-10-13 14:27:29.923880582 +0000 UTC m=+0.186199089 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, tcib_managed=true, build-date=2025-07-21T13:27:18, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-keystone, com.redhat.component=openstack-keystone-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, container_name=keystone_cron, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, release=1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:27:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1589: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:30 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:27:30 standalone.localdomain ceph-mon[29756]: pgmap v1589: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:27:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1590: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:32 standalone.localdomain ceph-mon[29756]: pgmap v1590: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:27:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:27:32 standalone.localdomain runuser[241944]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:27:32 standalone.localdomain podman[241950]: 2025-10-13 14:27:32.83560616 +0000 UTC m=+0.092265615 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-dhcp-agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=neutron_dhcp, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, release=1, com.redhat.component=openstack-neutron-dhcp-agent-container, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, build-date=2025-07-21T16:28:54)
Oct 13 14:27:32 standalone.localdomain systemd[1]: tmp-crun.QLTKkd.mount: Deactivated successfully.
Oct 13 14:27:32 standalone.localdomain podman[241953]: 2025-10-13 14:27:32.911575883 +0000 UTC m=+0.153948909 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-neutron-sriov-agent-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, container_name=neutron_sriov_agent, 
io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, build-date=2025-07-21T16:03:34, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1)
Oct 13 14:27:32 standalone.localdomain podman[241950]: 2025-10-13 14:27:32.993819239 +0000 UTC m=+0.250478694 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, tcib_managed=true, vcs-type=git, architecture=x86_64, version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack 
Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_dhcp, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-neutron-dhcp-agent-container, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-dhcp-agent, distribution-scope=public, build-date=2025-07-21T16:28:54)
Oct 13 14:27:33 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:27:33 standalone.localdomain podman[241953]: 2025-10-13 14:27:33.121873151 +0000 UTC m=+0.364246217 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-sriov-agent-container, managed_by=tripleo_ansible, build-date=2025-07-21T16:03:34, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, container_name=neutron_sriov_agent, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/sys/class/net:/sys/class/net:rw']}, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-type=git, release=1)
Oct 13 14:27:33 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:27:33 standalone.localdomain runuser[241944]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:27:33 standalone.localdomain runuser[242135]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:27:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1591: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:34 standalone.localdomain runuser[242135]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:27:34 standalone.localdomain runuser[242240]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:27:34 standalone.localdomain runuser[242240]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:27:35 standalone.localdomain ceph-mon[29756]: pgmap v1591: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1592: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:27:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:27:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:27:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:27:36 standalone.localdomain podman[242405]: 2025-10-13 14:27:36.798101318 +0000 UTC m=+0.056779586 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, release=1, tcib_managed=true, architecture=x86_64, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 14:27:36 standalone.localdomain podman[242405]: 2025-10-13 14:27:36.830695718 +0000 UTC m=+0.089373976 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, config_id=tripleo_step5, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:27:36 standalone.localdomain podman[242404]: 2025-10-13 14:27:36.842579343 +0000 UTC m=+0.101930951 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
config_id=tripleo_step3, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, com.redhat.component=openstack-iscsid-container, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, vcs-type=git, release=1)
Oct 13 14:27:36 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:27:36 standalone.localdomain podman[242404]: 2025-10-13 14:27:36.850928959 +0000 UTC m=+0.110280557 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, version=17.1.9, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T13:27:15, 
com.redhat.component=openstack-iscsid-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1)
Oct 13 14:27:36 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:27:36 standalone.localdomain podman[242406]: 2025-10-13 14:27:36.899575354 +0000 UTC m=+0.156103415 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, container_name=clustercheck, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64)
Oct 13 14:27:36 standalone.localdomain podman[242406]: 2025-10-13 14:27:36.93791093 +0000 UTC m=+0.194439001 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=clustercheck, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:27:36 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:27:37 standalone.localdomain ceph-mon[29756]: pgmap v1592: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1593: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:38 standalone.localdomain ceph-mon[29756]: pgmap v1593: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1594: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:41 standalone.localdomain ceph-mon[29756]: pgmap v1594: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:27:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1595: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:42 standalone.localdomain ceph-mon[29756]: pgmap v1595: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1596: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:45 standalone.localdomain ceph-mon[29756]: pgmap v1596: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:45 standalone.localdomain runuser[242813]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:27:45 standalone.localdomain sshd[242858]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:27:45 standalone.localdomain sshd[242858]: Accepted publickey for root from 192.168.122.11 port 60960 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:27:45 standalone.localdomain systemd-logind[45629]: New session 255 of user root.
Oct 13 14:27:45 standalone.localdomain systemd[1]: Started Session 255 of User root.
Oct 13 14:27:45 standalone.localdomain sshd[242858]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:27:45 standalone.localdomain sudo[242866]:     root : PWD=/root ; USER=root ; COMMAND=/bin/python3 -c import configparser; c = configparser.ConfigParser(); c.read('/var/lib/config-data/puppet-generated/barbican/etc/barbican/barbican.conf'); print(c['simple_crypto_plugin']['kek'])
Oct 13 14:27:45 standalone.localdomain sudo[242866]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:27:45 standalone.localdomain sudo[242866]: pam_unix(sudo:session): session closed for user root
Oct 13 14:27:45 standalone.localdomain sshd[242861]: Received disconnect from 192.168.122.11 port 60960:11: disconnected by user
Oct 13 14:27:45 standalone.localdomain sshd[242861]: Disconnected from user root 192.168.122.11 port 60960
Oct 13 14:27:45 standalone.localdomain sshd[242858]: pam_unix(sshd:session): session closed for user root
Oct 13 14:27:45 standalone.localdomain systemd[1]: session-255.scope: Deactivated successfully.
Oct 13 14:27:45 standalone.localdomain systemd-logind[45629]: Session 255 logged out. Waiting for processes to exit.
Oct 13 14:27:45 standalone.localdomain systemd-logind[45629]: Removed session 255.
Oct 13 14:27:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1597: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:46 standalone.localdomain runuser[242813]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:27:46 standalone.localdomain runuser[242901]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:27:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:27:46 standalone.localdomain runuser[242901]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:27:46 standalone.localdomain runuser[242955]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:27:47 standalone.localdomain ceph-mon[29756]: pgmap v1597: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:27:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:27:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:27:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:27:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:27:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:27:47 standalone.localdomain podman[243064]: 2025-10-13 14:27:47.396844843 +0000 UTC m=+0.077005397 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, name=rhosp17/openstack-heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, version=17.1.9, vcs-type=git, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, 
com.redhat.component=openstack-heat-engine-container, container_name=heat_engine, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:27:47 standalone.localdomain podman[243064]: 2025-10-13 14:27:47.415561208 +0000 UTC m=+0.095721782 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, build-date=2025-07-21T15:44:11, release=1, summary=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, name=rhosp17/openstack-heat-engine, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-heat-engine-container, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:27:47 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:27:47 standalone.localdomain systemd[1]: tmp-crun.oQlChP.mount: Deactivated successfully.
Oct 13 14:27:47 standalone.localdomain podman[243063]: 2025-10-13 14:27:47.464592863 +0000 UTC m=+0.148352117 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step4, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat 
OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cfn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, com.redhat.component=openstack-heat-api-cfn-container)
Oct 13 14:27:47 standalone.localdomain podman[243063]: 2025-10-13 14:27:47.495910905 +0000 UTC m=+0.179670209 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vendor=Red Hat, Inc., tcib_managed=true, release=1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, build-date=2025-07-21T14:49:55, com.redhat.component=openstack-heat-api-cfn-container, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=heat_api_cfn, architecture=x86_64)
Oct 13 14:27:47 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:27:47 standalone.localdomain podman[243067]: 2025-10-13 14:27:47.528770124 +0000 UTC m=+0.205201632 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, container_name=heat_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, com.redhat.component=openstack-heat-api-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, managed_by=tripleo_ansible)
Oct 13 14:27:47 standalone.localdomain podman[243093]: 2025-10-13 14:27:47.538868745 +0000 UTC m=+0.196850547 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, name=rhosp17/openstack-cron, version=17.1.9, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, container_name=logrotate_crond, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.)
Oct 13 14:27:47 standalone.localdomain podman[243067]: 2025-10-13 14:27:47.566057559 +0000 UTC m=+0.242489067 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, container_name=heat_api_cron, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, distribution-scope=public, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26)
Oct 13 14:27:47 standalone.localdomain podman[243093]: 2025-10-13 14:27:47.575904712 +0000 UTC m=+0.233886494 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
version=17.1.9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, release=1, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, tcib_managed=true)
Oct 13 14:27:47 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:27:47 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:27:47 standalone.localdomain runuser[242955]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:27:47 standalone.localdomain podman[243065]: 2025-10-13 14:27:47.662971836 +0000 UTC m=+0.330284144 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.component=openstack-memcached-container, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, io.buildah.version=1.33.12, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, summary=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, config_id=tripleo_step1, name=rhosp17/openstack-memcached, container_name=memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:43)
Oct 13 14:27:47 standalone.localdomain podman[243073]: 2025-10-13 14:27:47.644196049 +0000 UTC m=+0.319097181 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64)
Oct 13 14:27:47 standalone.localdomain podman[243065]: 2025-10-13 14:27:47.707903785 +0000 UTC m=+0.375216133 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, vendor=Red Hat, Inc., name=rhosp17/openstack-memcached, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step1, vcs-type=git, container_name=memcached, description=Red Hat OpenStack 
Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.component=openstack-memcached-container, version=17.1.9, release=1, io.openshift.expose-services=)
Oct 13 14:27:47 standalone.localdomain podman[243073]: 2025-10-13 14:27:47.727854198 +0000 UTC m=+0.402755360 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, name=rhosp17/openstack-heat-api, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, 
managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, container_name=heat_api, release=1)
Oct 13 14:27:47 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:27:47 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:27:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1598: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:49 standalone.localdomain ceph-mon[29756]: pgmap v1598: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1599: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:50 standalone.localdomain ceph-mon[29756]: pgmap v1599: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:27:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1600: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:52 standalone.localdomain ceph-mon[29756]: pgmap v1600: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:27:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:27:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:27:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:27:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:27:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:27:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1601: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:27:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:27:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:27:54 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:27:54 standalone.localdomain recover_tripleo_nova_virtqemud[243454]: 93291
Oct 13 14:27:54 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:27:54 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:27:54 standalone.localdomain podman[243436]: 2025-10-13 14:27:54.790587084 +0000 UTC m=+0.062552111 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, name=rhosp17/openstack-swift-proxy-server, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, io.buildah.version=1.33.12)
Oct 13 14:27:54 standalone.localdomain systemd[1]: tmp-crun.scUQoR.mount: Deactivated successfully.
Oct 13 14:27:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:27:54 standalone.localdomain podman[243435]: 2025-10-13 14:27:54.913027295 +0000 UTC m=+0.185329583 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.component=openstack-nova-compute-container)
Oct 13 14:27:54 standalone.localdomain podman[243493]: 2025-10-13 14:27:54.956741608 +0000 UTC m=+0.074337365 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, name=rhosp17/openstack-swift-account, version=17.1.9, container_name=swift_account_server, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1, com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:27:54 standalone.localdomain podman[243437]: 2025-10-13 14:27:54.86824626 +0000 UTC m=+0.138254447 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, container_name=glance_api_internal, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-glance-api-container, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64)
Oct 13 14:27:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:27:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:27:55 standalone.localdomain podman[243436]: 2025-10-13 14:27:55.023171067 +0000 UTC m=+0.295136114 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, version=17.1.9, com.redhat.component=openstack-swift-proxy-server-container, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, name=rhosp17/openstack-swift-proxy-server, release=1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Oct 13 14:27:55 standalone.localdomain ceph-mon[29756]: pgmap v1601: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:27:55 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:27:55 standalone.localdomain podman[243437]: 2025-10-13 14:27:55.107809597 +0000 UTC m=+0.377817704 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=glance_api_internal, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., release=1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:27:55 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:27:55 standalone.localdomain podman[243493]: 2025-10-13 14:27:55.21439633 +0000 UTC m=+0.331992107 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, name=rhosp17/openstack-swift-account, container_name=swift_account_server, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 14:27:55 standalone.localdomain podman[243529]: 2025-10-13 14:27:55.19387724 +0000 UTC m=+0.198865979 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, version=17.1.9, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step4)
Oct 13 14:27:55 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:27:55 standalone.localdomain podman[243435]: 2025-10-13 14:27:55.257856594 +0000 UTC m=+0.530158892 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, architecture=x86_64, vendor=Red Hat, Inc., container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:27:55 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:27:55 standalone.localdomain podman[243528]: 2025-10-13 14:27:55.091065803 +0000 UTC m=+0.095920788 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., container_name=swift_object_server, release=1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, vcs-type=git, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9)
Oct 13 14:27:55 standalone.localdomain podman[243607]: 2025-10-13 14:27:55.227654687 +0000 UTC m=+0.129869019 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=glance_api_cron, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-glance-api-container, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, name=rhosp17/openstack-glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:27:55 standalone.localdomain podman[243607]: 2025-10-13 14:27:55.36713645 +0000 UTC m=+0.269350772 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, release=1, container_name=glance_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:27:55 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:27:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:27:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:27:55 standalone.localdomain podman[243529]: 2025-10-13 14:27:55.434042375 +0000 UTC m=+0.439031134 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team)
Oct 13 14:27:55 standalone.localdomain podman[243529]: unhealthy
Oct 13 14:27:55 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:27:55 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:27:55 standalone.localdomain podman[243528]: 2025-10-13 14:27:55.485841215 +0000 UTC m=+0.490696200 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, vendor=Red Hat, Inc., release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, com.redhat.component=openstack-swift-object-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:27:55 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:27:55 standalone.localdomain podman[243652]: 2025-10-13 14:27:55.54622687 +0000 UTC m=+0.111751553 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, com.redhat.component=openstack-swift-container-container, release=1, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T15:54:32, name=rhosp17/openstack-swift-container, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, 
batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc.)
Oct 13 14:27:55 standalone.localdomain podman[243653]: 2025-10-13 14:27:55.608448181 +0000 UTC m=+0.171221409 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:27:55 standalone.localdomain podman[243653]: 2025-10-13 14:27:55.620872353 +0000 UTC m=+0.183645601 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, 
architecture=x86_64, config_id=tripleo_step4, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:27:55 standalone.localdomain podman[243653]: unhealthy
Oct 13 14:27:55 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:27:55 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:27:55 standalone.localdomain podman[243652]: 2025-10-13 14:27:55.773980494 +0000 UTC m=+0.339505197 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, io.buildah.version=1.33.12, container_name=swift_container_server, version=17.1.9, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:27:55 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:27:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1602: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:27:57 standalone.localdomain ceph-mon[29756]: pgmap v1602: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1603: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:58 standalone.localdomain runuser[243815]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:27:58 standalone.localdomain runuser[243815]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:27:59 standalone.localdomain runuser[243876]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:27:59 standalone.localdomain ceph-mon[29756]: pgmap v1603: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:27:59 standalone.localdomain runuser[243876]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:27:59 standalone.localdomain runuser[243938]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:28:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1604: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:00 standalone.localdomain runuser[243938]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:28:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:28:00 standalone.localdomain podman[244004]: 2025-10-13 14:28:00.824446034 +0000 UTC m=+0.083422953 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, architecture=x86_64, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=keystone_cron, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, config_id=tripleo_step3, build-date=2025-07-21T13:27:18, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1)
Oct 13 14:28:00 standalone.localdomain podman[244004]: 2025-10-13 14:28:00.830926053 +0000 UTC m=+0.089902972 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-type=git, container_name=keystone_cron, com.redhat.component=openstack-keystone-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:28:00 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:28:01 standalone.localdomain ceph-mon[29756]: pgmap v1604: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:28:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1605: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:02 standalone.localdomain ceph-mon[29756]: pgmap v1605: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:28:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:28:03 standalone.localdomain podman[244132]: 2025-10-13 14:28:03.794795671 +0000 UTC m=+0.059150767 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, com.redhat.component=openstack-neutron-sriov-agent-container, tcib_managed=true, vcs-type=git, architecture=x86_64, build-date=2025-07-21T16:03:34, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, distribution-scope=public, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, io.buildah.version=1.33.12)
Oct 13 14:28:03 standalone.localdomain podman[244132]: 2025-10-13 14:28:03.863834982 +0000 UTC m=+0.128190148 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, com.redhat.component=openstack-neutron-sriov-agent-container, release=1, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, build-date=2025-07-21T16:03:34, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, container_name=neutron_sriov_agent, 
name=rhosp17/openstack-neutron-sriov-agent, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.buildah.version=1.33.12)
Oct 13 14:28:03 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:28:03 standalone.localdomain podman[244131]: 2025-10-13 14:28:03.865999808 +0000 UTC m=+0.130194069 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, config_id=tripleo_step4, build-date=2025-07-21T16:28:54, com.redhat.component=openstack-neutron-dhcp-agent-container, container_name=neutron_dhcp, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, name=rhosp17/openstack-neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git)
Oct 13 14:28:03 standalone.localdomain podman[244131]: 2025-10-13 14:28:03.9474753 +0000 UTC m=+0.211669551 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, container_name=neutron_dhcp, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-neutron-dhcp-agent-container, distribution-scope=public, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, version=17.1.9, config_id=tripleo_step4, build-date=2025-07-21T16:28:54, name=rhosp17/openstack-neutron-dhcp-agent, io.openshift.expose-services=, managed_by=tripleo_ansible)
Oct 13 14:28:03 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:28:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1606: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:04 standalone.localdomain podman[244233]: 2025-10-13 14:28:04.538515101 +0000 UTC m=+0.091704627 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, build-date=2025-07-21T12:58:45, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:28:04 standalone.localdomain podman[244233]: 2025-10-13 14:28:04.57200939 +0000 UTC m=+0.125198896 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:45, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:28:04 standalone.localdomain podman[244307]: 2025-10-13 14:28:04.951017599 +0000 UTC m=+0.076710956 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, com.redhat.component=openstack-haproxy-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, tcib_managed=true, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, architecture=x86_64, build-date=2025-07-21T13:08:11, description=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:28:04 standalone.localdomain podman[244307]: 2025-10-13 14:28:04.982813046 +0000 UTC m=+0.108506393 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, summary=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, distribution-scope=public, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T13:08:11, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-haproxy-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 13 14:28:05 standalone.localdomain ceph-mon[29756]: pgmap v1606: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:05 standalone.localdomain podman[244390]: 2025-10-13 14:28:05.761522271 +0000 UTC m=+0.084672092 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, architecture=x86_64, version=17.1.9, com.redhat.component=openstack-rabbitmq-container, release=1, vcs-type=git, build-date=2025-07-21T13:08:05, io.openshift.expose-services=)
Oct 13 14:28:05 standalone.localdomain podman[244390]: 2025-10-13 14:28:05.797904869 +0000 UTC m=+0.121054710 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, io.openshift.expose-services=, com.redhat.component=openstack-rabbitmq-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, name=rhosp17/openstack-rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, build-date=2025-07-21T13:08:05, vcs-type=git)
Oct 13 14:28:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1607: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.354919) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365686354968, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 903, "num_deletes": 255, "total_data_size": 695759, "memory_usage": 713080, "flush_reason": "Manual Compaction"}
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365686363523, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 682945, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37652, "largest_seqno": 38554, "table_properties": {"data_size": 678925, "index_size": 1749, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8895, "raw_average_key_size": 18, "raw_value_size": 670680, "raw_average_value_size": 1417, "num_data_blocks": 80, "num_entries": 473, "num_filter_entries": 473, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760365616, "oldest_key_time": 1760365616, "file_creation_time": 1760365686, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 8635 microseconds, and 2119 cpu microseconds.
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.363553) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 682945 bytes OK
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.363578) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.366571) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.366617) EVENT_LOG_v1 {"time_micros": 1760365686366607, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.366640) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 691295, prev total WAL file size 691784, number of live WAL files 2.
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.368019) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303033' seq:72057594037927935, type:22 .. '6C6F676D0031323534' seq:0, type:0; will stop at (end)
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(666KB)], [80(4830KB)]
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365686368080, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 5629085, "oldest_snapshot_seqno": -1}
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 4516 keys, 5533569 bytes, temperature: kUnknown
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365686395903, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 5533569, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5504761, "index_size": 16423, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 112363, "raw_average_key_size": 24, "raw_value_size": 5424327, "raw_average_value_size": 1201, "num_data_blocks": 689, "num_entries": 4516, "num_filter_entries": 4516, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760365686, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.396284) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 5533569 bytes
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.398933) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.5 rd, 198.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 4.7 +0.0 blob) out(5.3 +0.0 blob), read-write-amplify(16.3) write-amplify(8.1) OK, records in: 5042, records dropped: 526 output_compression: NoCompression
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.398974) EVENT_LOG_v1 {"time_micros": 1760365686398958, "job": 46, "event": "compaction_finished", "compaction_time_micros": 27941, "compaction_time_cpu_micros": 11999, "output_level": 6, "num_output_files": 1, "total_output_size": 5533569, "num_input_records": 5042, "num_output_records": 4516, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365686399200, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365686399599, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.367906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.399647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.399652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.399653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.399655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:28:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:06.399656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:28:07 standalone.localdomain ceph-mon[29756]: pgmap v1607: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:28:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:28:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:28:07 standalone.localdomain podman[244451]: 2025-10-13 14:28:07.421200509 +0000 UTC m=+0.086001513 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9)
Oct 13 14:28:07 standalone.localdomain podman[244451]: 2025-10-13 14:28:07.47301743 +0000 UTC m=+0.137818424 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, version=17.1.9, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-type=git, config_id=tripleo_step5)
Oct 13 14:28:07 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:28:07 standalone.localdomain podman[244453]: 2025-10-13 14:28:07.561176448 +0000 UTC m=+0.218369608 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.buildah.version=1.33.12, com.redhat.component=openstack-mariadb-container, distribution-scope=public, build-date=2025-07-21T12:58:45, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, container_name=clustercheck, name=rhosp17/openstack-mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, vendor=Red Hat, Inc., maintainer=OpenStack 
TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:28:07 standalone.localdomain systemd[1]: tmp-crun.xL4iUo.mount: Deactivated successfully.
Oct 13 14:28:07 standalone.localdomain podman[244445]: 2025-10-13 14:28:07.67131501 +0000 UTC m=+0.333276376 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:28:07 standalone.localdomain podman[244445]: 2025-10-13 14:28:07.718242771 +0000 UTC m=+0.380204167 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:28:07 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:28:07 standalone.localdomain podman[244453]: 2025-10-13 14:28:07.734658085 +0000 UTC m=+0.391851235 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:45, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, com.redhat.component=openstack-mariadb-container, config_id=tripleo_step2, container_name=clustercheck)
Oct 13 14:28:07 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:28:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1608: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:08 standalone.localdomain ceph-mon[29756]: pgmap v1608: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:08 standalone.localdomain sudo[244585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:28:08 standalone.localdomain sudo[244585]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:28:08 standalone.localdomain sudo[244585]: pam_unix(sudo:session): session closed for user root
Oct 13 14:28:08 standalone.localdomain sudo[244600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Oct 13 14:28:08 standalone.localdomain sudo[244600]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:28:08 standalone.localdomain sudo[244600]: pam_unix(sudo:session): session closed for user root
Oct 13 14:28:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:28:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:28:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:28:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:28:09 standalone.localdomain sudo[244637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:28:09 standalone.localdomain sudo[244637]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:28:09 standalone.localdomain sudo[244637]: pam_unix(sudo:session): session closed for user root
Oct 13 14:28:09 standalone.localdomain sudo[244652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:28:09 standalone.localdomain sudo[244652]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:28:09 standalone.localdomain sudo[244652]: pam_unix(sudo:session): session closed for user root
Oct 13 14:28:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:28:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:28:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:28:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:28:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:28:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:28:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:28:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:28:09 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 7e738ecc-6848-45d0-b008-ede01d135712 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:28:09 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 7e738ecc-6848-45d0-b008-ede01d135712 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:28:09 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 7e738ecc-6848-45d0-b008-ede01d135712 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:28:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1609: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:10 standalone.localdomain sudo[244707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:28:10 standalone.localdomain sudo[244707]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:28:10 standalone.localdomain sudo[244707]: pam_unix(sudo:session): session closed for user root
Oct 13 14:28:10 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:28:10 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:28:10 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:28:10 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:28:10 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:28:10 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:28:10 standalone.localdomain runuser[244734]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:28:11 standalone.localdomain ceph-mon[29756]: pgmap v1609: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:28:11 standalone.localdomain runuser[244734]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:28:11 standalone.localdomain runuser[244803]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:28:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1610: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:12 standalone.localdomain runuser[244803]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:28:12 standalone.localdomain runuser[244857]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:28:12 standalone.localdomain ceph-mon[29756]: pgmap v1610: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:12 standalone.localdomain runuser[244857]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:28:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1611: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:14 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:28:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:28:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:28:15 standalone.localdomain ceph-mon[29756]: pgmap v1611: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:15 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:28:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1612: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:16 standalone.localdomain ceph-mon[29756]: pgmap v1612: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:28:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:28:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:28:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:28:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:28:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:28:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:28:17 standalone.localdomain systemd[1]: tmp-crun.WWdA8t.mount: Deactivated successfully.
Oct 13 14:28:17 standalone.localdomain podman[245223]: 2025-10-13 14:28:17.813716792 +0000 UTC m=+0.084794226 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=heat_api_cfn, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., build-date=2025-07-21T14:49:55, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1)
Oct 13 14:28:17 standalone.localdomain podman[245226]: 2025-10-13 14:28:17.879594164 +0000 UTC m=+0.143676633 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, container_name=logrotate_crond, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git)
Oct 13 14:28:17 standalone.localdomain podman[245223]: 2025-10-13 14:28:17.898812705 +0000 UTC m=+0.169890099 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, com.redhat.component=openstack-heat-api-cfn-container, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, container_name=heat_api_cfn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T14:49:55, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:28:17 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:28:17 standalone.localdomain podman[245224]: 2025-10-13 14:28:17.860930072 +0000 UTC m=+0.128284391 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:44:11, com.redhat.component=openstack-heat-engine-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1)
Oct 13 14:28:17 standalone.localdomain podman[245226]: 2025-10-13 14:28:17.941474144 +0000 UTC m=+0.205556633 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible)
Oct 13 14:28:17 standalone.localdomain podman[245225]: 2025-10-13 14:28:17.951764651 +0000 UTC m=+0.219206283 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, summary=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-heat-api-container, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, distribution-scope=public, container_name=heat_api_cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:28:17 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:28:17 standalone.localdomain podman[245275]: 2025-10-13 14:28:17.915025193 +0000 UTC m=+0.080872885 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, build-date=2025-07-21T12:58:43, com.redhat.component=openstack-memcached-container, container_name=memcached, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step1, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, name=rhosp17/openstack-memcached, architecture=x86_64, release=1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:28:17 standalone.localdomain podman[245224]: 2025-10-13 14:28:17.99600462 +0000 UTC m=+0.263358879 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, release=1, com.redhat.component=openstack-heat-engine-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_engine, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, build-date=2025-07-21T15:44:11, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=)
Oct 13 14:28:18 standalone.localdomain podman[245277]: 2025-10-13 14:28:18.007301126 +0000 UTC m=+0.167370031 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, build-date=2025-07-21T15:56:26, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, container_name=heat_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, batch=17.1_20250721.1)
Oct 13 14:28:18 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:28:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1613: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:18 standalone.localdomain podman[245225]: 2025-10-13 14:28:18.018209802 +0000 UTC m=+0.285651434 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, version=17.1.9, container_name=heat_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, summary=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, architecture=x86_64, vcs-type=git, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Oct 13 14:28:18 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:28:18 standalone.localdomain podman[245277]: 2025-10-13 14:28:18.038770443 +0000 UTC m=+0.198839348 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, build-date=2025-07-21T15:56:26, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, 
io.openshift.expose-services=, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=heat_api, distribution-scope=public)
Oct 13 14:28:18 standalone.localdomain podman[245275]: 2025-10-13 14:28:18.049403459 +0000 UTC m=+0.215251161 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.component=openstack-memcached-container, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vcs-type=git, build-date=2025-07-21T12:58:43, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-memcached, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']})
Oct 13 14:28:18 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:28:18 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:28:19 standalone.localdomain ceph-mon[29756]: pgmap v1613: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1614: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:21 standalone.localdomain ceph-mon[29756]: pgmap v1614: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:28:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1615: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:22 standalone.localdomain ceph-mon[29756]: pgmap v1615: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:28:23
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr', 'manila_data', 'backups', 'manila_metadata', 'vms', 'images', 'volumes']
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:28:23 standalone.localdomain runuser[245400]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:28:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1616: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:24 standalone.localdomain runuser[245400]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:28:24 standalone.localdomain runuser[245594]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:28:25 standalone.localdomain ceph-mon[29756]: pgmap v1616: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:25 standalone.localdomain runuser[245594]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:28:25 standalone.localdomain runuser[245697]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:28:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:28:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:28:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:28:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:28:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:28:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:28:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:28:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:28:25 standalone.localdomain runuser[245697]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:28:25 standalone.localdomain podman[245795]: 2025-10-13 14:28:25.851703617 +0000 UTC m=+0.097134214 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, name=rhosp17/openstack-swift-object, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, com.redhat.component=openstack-swift-object-container, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc.)
Oct 13 14:28:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:28:25 standalone.localdomain podman[245796]: 2025-10-13 14:28:25.825557525 +0000 UTC m=+0.083758594 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, release=1, build-date=2025-07-21T14:48:37, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:28:25 standalone.localdomain podman[245912]: 2025-10-13 14:28:25.98269605 +0000 UTC m=+0.083316330 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, 
vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-swift-container-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true)
Oct 13 14:28:25 standalone.localdomain podman[245794]: 2025-10-13 14:28:25.886867168 +0000 UTC m=+0.147236623 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, com.redhat.component=openstack-glance-api-container, container_name=glance_api_cron, description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T13:58:20, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-glance-api)
Oct 13 14:28:25 standalone.localdomain podman[245815]: 2025-10-13 14:28:25.940039161 +0000 UTC m=+0.182086723 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, release=1, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, build-date=2025-07-21T13:28:44, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container)
Oct 13 14:28:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1617: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:26 standalone.localdomain podman[245822]: 2025-10-13 14:28:25.963062277 +0000 UTC m=+0.201942792 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, build-date=2025-07-21T16:11:22, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, com.redhat.component=openstack-swift-account-container, vendor=Red Hat, Inc., architecture=x86_64, container_name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, 
io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 14:28:26 standalone.localdomain podman[245815]: 2025-10-13 14:28:26.023890605 +0000 UTC m=+0.265938157 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-type=git, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:28:26 standalone.localdomain podman[245815]: unhealthy
Oct 13 14:28:26 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:28:26 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:28:26 standalone.localdomain podman[245795]: 2025-10-13 14:28:26.039162274 +0000 UTC m=+0.284592861 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, vendor=Red Hat, Inc., container_name=swift_object_server, distribution-scope=public, io.buildah.version=1.33.12, 
com.redhat.component=openstack-swift-object-container, batch=17.1_20250721.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, build-date=2025-07-21T14:56:28, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:28:26 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:28:26 standalone.localdomain podman[245797]: 2025-10-13 14:28:26.08496143 +0000 UTC m=+0.335273516 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, architecture=x86_64, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, release=1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4)
Oct 13 14:28:26 standalone.localdomain podman[245804]: 2025-10-13 14:28:25.914020091 +0000 UTC m=+0.156034392 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-glance-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, container_name=glance_api_internal, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, architecture=x86_64, build-date=2025-07-21T13:58:20, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., release=1, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:28:26 standalone.localdomain podman[245822]: 2025-10-13 14:28:26.142952612 +0000 UTC m=+0.381833137 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., architecture=x86_64, container_name=swift_account_server, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, 
build-date=2025-07-21T16:11:22, distribution-scope=public, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, version=17.1.9, vcs-type=git, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:28:26 standalone.localdomain podman[245804]: 2025-10-13 14:28:26.149769521 +0000 UTC m=+0.391783812 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, version=17.1.9, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T13:58:20, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, io.openshift.expose-services=, 
summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=glance_api_internal, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, release=1, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api)
Oct 13 14:28:26 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:28:26 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:28:26 standalone.localdomain podman[245810]: 2025-10-13 14:28:26.118261953 +0000 UTC m=+0.362742600 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, architecture=x86_64, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3)
Oct 13 14:28:26 standalone.localdomain podman[245796]: 2025-10-13 14:28:26.181764104 +0000 UTC m=+0.439965163 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12)
Oct 13 14:28:26 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:28:26 standalone.localdomain podman[245810]: 2025-10-13 14:28:26.20085833 +0000 UTC m=+0.445338977 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20250721.1, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1)
Oct 13 14:28:26 standalone.localdomain podman[245810]: unhealthy
Oct 13 14:28:26 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:28:26 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:28:26 standalone.localdomain podman[245912]: 2025-10-13 14:28:26.221987809 +0000 UTC m=+0.322608119 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, release=1, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
swift-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:28:26 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:28:26 standalone.localdomain podman[245797]: 2025-10-13 14:28:26.255741685 +0000 UTC m=+0.506053781 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, release=1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, 
name=rhosp17/openstack-swift-proxy-server, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-proxy-server-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 14:28:26 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:28:26 standalone.localdomain podman[245794]: 2025-10-13 14:28:26.277024519 +0000 UTC m=+0.537393954 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-glance-api-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, container_name=glance_api_cron, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64)
Oct 13 14:28:26 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:28:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:28:27 standalone.localdomain ceph-mon[29756]: pgmap v1617: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1618: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:28:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:28:29 standalone.localdomain ceph-mon[29756]: pgmap v1618: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1619: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:31 standalone.localdomain ceph-mon[29756]: pgmap v1619: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:28:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:28:31 standalone.localdomain podman[246099]: 2025-10-13 14:28:31.83210749 +0000 UTC m=+0.096931057 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, build-date=2025-07-21T13:27:18, description=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, batch=17.1_20250721.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-keystone-container, 
io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, name=rhosp17/openstack-keystone, architecture=x86_64, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:28:31 standalone.localdomain podman[246099]: 2025-10-13 14:28:31.843508562 +0000 UTC m=+0.108332109 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, release=1, name=rhosp17/openstack-keystone, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, com.redhat.component=openstack-keystone-container, summary=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, container_name=keystone_cron, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:28:31 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:28:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1620: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:32 standalone.localdomain ceph-mon[29756]: pgmap v1620: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1621: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:28:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:28:34 standalone.localdomain podman[246286]: 2025-10-13 14:28:34.831465462 +0000 UTC m=+0.092521210 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-sriov-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, 
config_id=tripleo_step4, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-neutron-sriov-agent, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, build-date=2025-07-21T16:03:34, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, distribution-scope=public)
Oct 13 14:28:34 standalone.localdomain podman[246286]: 2025-10-13 14:28:34.900942679 +0000 UTC m=+0.161998417 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, build-date=2025-07-21T16:03:34, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, release=1, io.openshift.expose-services=, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:28:34 standalone.localdomain podman[246283]: 2025-10-13 14:28:34.908768151 +0000 UTC m=+0.177154755 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, container_name=neutron_dhcp, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, name=rhosp17/openstack-neutron-dhcp-agent, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-dhcp-agent-container, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:54, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git)
Oct 13 14:28:34 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:28:34 standalone.localdomain podman[246283]: 2025-10-13 14:28:34.984960726 +0000 UTC m=+0.253347360 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, container_name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.buildah.version=1.33.12, version=17.1.9, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, build-date=2025-07-21T16:28:54, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9)
Oct 13 14:28:34 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:28:35 standalone.localdomain ceph-mon[29756]: pgmap v1621: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1622: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:28:36 standalone.localdomain runuser[246417]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.624862) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365716624907, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 570, "num_deletes": 251, "total_data_size": 366816, "memory_usage": 377352, "flush_reason": "Manual Compaction"}
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365716729749, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 360772, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38555, "largest_seqno": 39124, "table_properties": {"data_size": 357827, "index_size": 930, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7094, "raw_average_key_size": 19, "raw_value_size": 351829, "raw_average_value_size": 956, "num_data_blocks": 41, "num_entries": 368, "num_filter_entries": 368, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760365686, "oldest_key_time": 1760365686, "file_creation_time": 1760365716, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 104931 microseconds, and 1374 cpu microseconds.
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.729794) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 360772 bytes OK
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.729815) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.733120) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.733136) EVENT_LOG_v1 {"time_micros": 1760365716733131, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.733153) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 363607, prev total WAL file size 363607, number of live WAL files 2.
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.733737) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(352KB)], [83(5403KB)]
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365716733771, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 5894341, "oldest_snapshot_seqno": -1}
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 4365 keys, 4891209 bytes, temperature: kUnknown
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365716793392, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 4891209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4864177, "index_size": 15032, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10949, "raw_key_size": 109871, "raw_average_key_size": 25, "raw_value_size": 4787094, "raw_average_value_size": 1096, "num_data_blocks": 624, "num_entries": 4365, "num_filter_entries": 4365, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760365716, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.793715) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 4891209 bytes
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.795267) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 98.7 rd, 81.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 5.3 +0.0 blob) out(4.7 +0.0 blob), read-write-amplify(29.9) write-amplify(13.6) OK, records in: 4884, records dropped: 519 output_compression: NoCompression
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.795294) EVENT_LOG_v1 {"time_micros": 1760365716795281, "job": 48, "event": "compaction_finished", "compaction_time_micros": 59745, "compaction_time_cpu_micros": 10148, "output_level": 6, "num_output_files": 1, "total_output_size": 4891209, "num_input_records": 4884, "num_output_records": 4365, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365716795523, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365716796237, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.733650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.796312) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.796318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.796320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.796321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:28:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:28:36.796323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:28:37 standalone.localdomain runuser[246417]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:28:37 standalone.localdomain ceph-mon[29756]: pgmap v1622: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:37 standalone.localdomain runuser[246486]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:28:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:28:37 standalone.localdomain podman[246584]: 2025-10-13 14:28:37.7921067 +0000 UTC m=+0.065653739 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=nova_compute, distribution-scope=public, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Oct 13 14:28:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:28:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:28:37 standalone.localdomain podman[246584]: 2025-10-13 14:28:37.847050928 +0000 UTC m=+0.120597997 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 13 14:28:37 standalone.localdomain podman[246608]: 2025-10-13 14:28:37.865229119 +0000 UTC m=+0.055338930 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-iscsid-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., container_name=iscsid, release=1, build-date=2025-07-21T13:27:15, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, 
vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-iscsid, architecture=x86_64, io.buildah.version=1.33.12)
Oct 13 14:28:37 standalone.localdomain runuser[246486]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:28:37 standalone.localdomain runuser[246648]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:28:37 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:28:37 standalone.localdomain podman[246608]: 2025-10-13 14:28:37.906776623 +0000 UTC m=+0.096886464 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, 
io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=iscsid)
Oct 13 14:28:37 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:28:37 standalone.localdomain podman[246609]: 2025-10-13 14:28:37.987647033 +0000 UTC m=+0.172755570 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, distribution-scope=public, name=rhosp17/openstack-mariadb, vcs-type=git, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, 
container_name=clustercheck, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, architecture=x86_64)
Oct 13 14:28:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1623: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:38 standalone.localdomain podman[246609]: 2025-10-13 14:28:38.067296104 +0000 UTC m=+0.252404631 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 mariadb, container_name=clustercheck, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, build-date=2025-07-21T12:58:45, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, 
config_id=tripleo_step2, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git)
Oct 13 14:28:38 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:28:38 standalone.localdomain runuser[246648]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:28:39 standalone.localdomain ceph-mon[29756]: pgmap v1623: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1624: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:41 standalone.localdomain ceph-mon[29756]: pgmap v1624: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:28:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1625: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:42 standalone.localdomain ceph-mon[29756]: pgmap v1625: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1626: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:45 standalone.localdomain ceph-mon[29756]: pgmap v1626: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1627: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:28:47 standalone.localdomain ceph-mon[29756]: pgmap v1627: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1628: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:48 standalone.localdomain ceph-mon[29756]: pgmap v1628: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:28:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:28:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:28:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:28:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:28:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:28:48 standalone.localdomain podman[247087]: 2025-10-13 14:28:48.829966695 +0000 UTC m=+0.094305574 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-heat-api, container_name=heat_api_cron, release=1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, com.redhat.component=openstack-heat-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:28:48 standalone.localdomain podman[247094]: 2025-10-13 14:28:48.86537622 +0000 UTC m=+0.121991670 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron)
Oct 13 14:28:48 standalone.localdomain podman[247088]: 2025-10-13 14:28:48.883000554 +0000 UTC m=+0.138474819 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:56:26, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, container_name=heat_api, distribution-scope=public, io.buildah.version=1.33.12, release=1, tcib_managed=true, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, 
io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, managed_by=tripleo_ansible, batch=17.1_20250721.1)
Oct 13 14:28:48 standalone.localdomain podman[247088]: 2025-10-13 14:28:48.931887336 +0000 UTC m=+0.187361671 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, name=rhosp17/openstack-heat-api, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T15:56:26, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, container_name=heat_api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat 
OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-api-container, version=17.1.9, vcs-type=git)
Oct 13 14:28:48 standalone.localdomain podman[247086]: 2025-10-13 14:28:48.941598205 +0000 UTC m=+0.205496321 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, release=1, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step1, container_name=memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, version=17.1.9, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, tcib_managed=true, build-date=2025-07-21T12:58:43, com.redhat.component=openstack-memcached-container, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64)
Oct 13 14:28:48 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:28:48 standalone.localdomain podman[247086]: 2025-10-13 14:28:48.964823594 +0000 UTC m=+0.228721710 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, tcib_managed=true, build-date=2025-07-21T12:58:43, com.redhat.component=openstack-memcached-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, managed_by=tripleo_ansible, release=1, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, config_id=tripleo_step1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:28:48 standalone.localdomain podman[247087]: 2025-10-13 14:28:48.971007694 +0000 UTC m=+0.235346593 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, vcs-type=git, build-date=2025-07-21T15:56:26, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:28:48 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:28:48 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:28:49 standalone.localdomain podman[247085]: 2025-10-13 14:28:48.976320979 +0000 UTC m=+0.242131644 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, config_id=tripleo_step4, container_name=heat_engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, com.redhat.component=openstack-heat-engine-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-engine, batch=17.1_20250721.1, build-date=2025-07-21T15:44:11, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible)
Oct 13 14:28:49 standalone.localdomain podman[247094]: 2025-10-13 14:28:49.045439034 +0000 UTC m=+0.302054544 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step4, container_name=logrotate_crond, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron)
Oct 13 14:28:49 standalone.localdomain podman[247085]: 2025-10-13 14:28:49.058001053 +0000 UTC m=+0.323811768 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, release=1, container_name=heat_engine, name=rhosp17/openstack-heat-engine, build-date=2025-07-21T15:44:11, io.buildah.version=1.33.12, 
vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=)
Oct 13 14:28:49 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:28:49 standalone.localdomain podman[247084]: 2025-10-13 14:28:48.917544212 +0000 UTC m=+0.184965716 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, container_name=heat_api_cfn, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T14:49:55, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-heat-api-cfn-container, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99)
Oct 13 14:28:49 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:28:49 standalone.localdomain podman[247084]: 2025-10-13 14:28:49.100897818 +0000 UTC m=+0.368319422 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, build-date=2025-07-21T14:49:55, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=)
Oct 13 14:28:49 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:28:49 standalone.localdomain runuser[247227]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:28:49 standalone.localdomain runuser[247227]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:28:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1629: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:50 standalone.localdomain runuser[247296]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:28:50 standalone.localdomain runuser[247296]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:28:50 standalone.localdomain runuser[247358]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:28:51 standalone.localdomain ceph-mon[29756]: pgmap v1629: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:51 standalone.localdomain runuser[247358]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:28:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:28:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1630: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:52 standalone.localdomain ceph-mon[29756]: pgmap v1630: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:28:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:28:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:28:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:28:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:28:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:28:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1631: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:55 standalone.localdomain ceph-mon[29756]: pgmap v1631: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:56 standalone.localdomain account-server[114563]: 172.20.0.100 - - [13/Oct/2025:14:28:56 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txa2e54fe16d154503af072-0068ed0ca8" "proxy-server 2" 0.0007 "-" 21 -
Oct 13 14:28:56 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txa2e54fe16d154503af072-0068ed0ca8)
Oct 13 14:28:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1632: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:28:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:28:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:28:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:28:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:28:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:28:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:28:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:28:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:28:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:28:56 standalone.localdomain podman[247674]: 2025-10-13 14:28:56.842019988 +0000 UTC m=+0.093468139 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, com.redhat.component=openstack-swift-proxy-server-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:28:56 standalone.localdomain podman[247697]: 2025-10-13 14:28:56.894545791 +0000 UTC m=+0.137337164 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, tcib_managed=true, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team)
Oct 13 14:28:56 standalone.localdomain podman[247672]: 2025-10-13 14:28:56.892471597 +0000 UTC m=+0.148311913 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, release=1, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:28:56 standalone.localdomain podman[247697]: 2025-10-13 14:28:56.906369257 +0000 UTC m=+0.149160640 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Oct 13 14:28:56 standalone.localdomain podman[247702]: 2025-10-13 14:28:56.983624154 +0000 UTC m=+0.217444610 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_id=tripleo_step4, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1)
Oct 13 14:28:57 standalone.localdomain podman[247681]: 2025-10-13 14:28:56.954655109 +0000 UTC m=+0.203422157 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, name=rhosp17/openstack-glance-api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-glance-api-container)
Oct 13 14:28:57 standalone.localdomain podman[247697]: unhealthy
Oct 13 14:28:57 standalone.localdomain podman[247692]: 2025-10-13 14:28:57.014186929 +0000 UTC m=+0.257662663 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, release=1, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, tcib_managed=true, version=17.1.9, distribution-scope=public, container_name=swift_container_server, build-date=2025-07-21T15:54:32, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:28:57 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:28:57 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:28:57 standalone.localdomain podman[247674]: 2025-10-13 14:28:57.076835014 +0000 UTC m=+0.328283185 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=)
Oct 13 14:28:57 standalone.localdomain ceph-mon[29756]: pgmap v1632: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:57 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:28:57 standalone.localdomain podman[247671]: 2025-10-13 14:28:56.995582804 +0000 UTC m=+0.255426744 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T13:58:20, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, container_name=glance_api_cron, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, release=1, config_id=tripleo_step4, batch=17.1_20250721.1)
Oct 13 14:28:57 standalone.localdomain podman[247702]: 2025-10-13 14:28:57.115809569 +0000 UTC m=+0.349630005 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, name=rhosp17/openstack-ovn-controller, architecture=x86_64, version=17.1.9, vcs-type=git, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:28:57 standalone.localdomain podman[247702]: unhealthy
Oct 13 14:28:57 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:28:57 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:28:57 standalone.localdomain podman[247671]: 2025-10-13 14:28:57.128841882 +0000 UTC m=+0.388685822 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, 
maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, container_name=glance_api_cron, build-date=2025-07-21T13:58:20)
Oct 13 14:28:57 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:28:57 standalone.localdomain podman[247673]: 2025-10-13 14:28:57.149656735 +0000 UTC m=+0.405006916 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, 
config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=nova_migration_target)
Oct 13 14:28:57 standalone.localdomain podman[247681]: 2025-10-13 14:28:57.154876656 +0000 UTC m=+0.403643694 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, container_name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, build-date=2025-07-21T13:58:20, io.buildah.version=1.33.12, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=)
Oct 13 14:28:57 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:28:57 standalone.localdomain podman[247714]: 2025-10-13 14:28:57.078749824 +0000 UTC m=+0.300841897 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, 
maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-account-container, distribution-scope=public, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:28:57 standalone.localdomain podman[247672]: 2025-10-13 14:28:57.18701085 +0000 UTC m=+0.442851156 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, version=17.1.9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-swift-object, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28)
Oct 13 14:28:57 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:28:57 standalone.localdomain podman[247714]: 2025-10-13 14:28:57.264795672 +0000 UTC m=+0.486887735 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, distribution-scope=public, build-date=2025-07-21T16:11:22, batch=17.1_20250721.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, io.k8s.description=Red 
Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container, maintainer=OpenStack TripleO Team)
Oct 13 14:28:57 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:28:57 standalone.localdomain podman[247692]: 2025-10-13 14:28:57.315130568 +0000 UTC m=+0.558606302 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct 13 14:28:57 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:28:57 standalone.localdomain podman[247673]: 2025-10-13 14:28:57.52391916 +0000 UTC m=+0.779269371 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4)
Oct 13 14:28:57 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:28:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1633: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:28:59 standalone.localdomain ceph-mon[29756]: pgmap v1633: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1634: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:01 standalone.localdomain ceph-mon[29756]: pgmap v1634: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:29:01 standalone.localdomain runuser[248008]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:29:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1635: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:02 standalone.localdomain ceph-mon[29756]: pgmap v1635: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:29:02 standalone.localdomain runuser[248008]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:29:02 standalone.localdomain podman[248066]: 2025-10-13 14:29:02.787825491 +0000 UTC m=+0.052858934 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, release=1, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step3, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:18, container_name=keystone_cron, name=rhosp17/openstack-keystone, vendor=Red Hat, Inc., com.redhat.component=openstack-keystone-container, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:29:02 standalone.localdomain podman[248066]: 2025-10-13 14:29:02.820887933 +0000 UTC m=+0.085921366 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, container_name=keystone_cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, release=1, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, build-date=2025-07-21T13:27:18, name=rhosp17/openstack-keystone, managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.component=openstack-keystone-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64, version=17.1.9, distribution-scope=public, batch=17.1_20250721.1)
Oct 13 14:29:02 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:29:02 standalone.localdomain runuser[248096]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:29:03 standalone.localdomain runuser[248096]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:29:03 standalone.localdomain runuser[248150]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:29:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1636: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:04 standalone.localdomain runuser[248150]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:29:04 standalone.localdomain podman[248344]: 2025-10-13 14:29:04.664833053 +0000 UTC m=+0.065882117 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, name=rhosp17/openstack-mariadb, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 14:29:04 standalone.localdomain podman[248344]: 2025-10-13 14:29:04.696911584 +0000 UTC m=+0.097960658 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T12:58:45, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, name=rhosp17/openstack-mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, maintainer=OpenStack TripleO Team, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:29:05 standalone.localdomain ceph-mon[29756]: pgmap v1636: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:05 standalone.localdomain podman[248426]: 2025-10-13 14:29:05.141856073 +0000 UTC m=+0.073743879 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, tcib_managed=true, name=rhosp17/openstack-haproxy, io.buildah.version=1.33.12, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, release=1, batch=17.1_20250721.1, build-date=2025-07-21T13:08:11, com.redhat.component=openstack-haproxy-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:29:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:29:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:29:05 standalone.localdomain podman[248426]: 2025-10-13 14:29:05.155941609 +0000 UTC m=+0.087829425 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, build-date=2025-07-21T13:08:11, batch=17.1_20250721.1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-haproxy, com.redhat.component=openstack-haproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:29:05 standalone.localdomain systemd[1]: tmp-crun.hqDlRZ.mount: Deactivated successfully.
Oct 13 14:29:05 standalone.localdomain podman[248445]: 2025-10-13 14:29:05.242471253 +0000 UTC m=+0.080243671 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, container_name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-dhcp-agent-container, release=1, vcs-type=git, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, batch=17.1_20250721.1, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:54)
Oct 13 14:29:05 standalone.localdomain podman[248445]: 2025-10-13 14:29:05.287917447 +0000 UTC m=+0.125689905 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, batch=17.1_20250721.1, container_name=neutron_dhcp, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, architecture=x86_64, build-date=2025-07-21T16:28:54, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:29:05 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:29:05 standalone.localdomain podman[248446]: 2025-10-13 14:29:05.349012785 +0000 UTC m=+0.184954776 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, build-date=2025-07-21T16:03:34, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.openshift.expose-services=, container_name=neutron_sriov_agent, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-sriov-agent, version=17.1.9, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-sriov-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:29:05 standalone.localdomain podman[248446]: 2025-10-13 14:29:05.390030553 +0000 UTC m=+0.225972624 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, container_name=neutron_sriov_agent, io.openshift.expose-services=, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, build-date=2025-07-21T16:03:34, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-neutron-sriov-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']})
Oct 13 14:29:05 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:29:05 standalone.localdomain systemd[1]: tmp-crun.zsu2lW.mount: Deactivated successfully.
Oct 13 14:29:05 standalone.localdomain podman[248557]: 2025-10-13 14:29:05.932069902 +0000 UTC m=+0.080204379 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, architecture=x86_64, com.redhat.component=openstack-rabbitmq-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-rabbitmq, release=1, io.buildah.version=1.33.12, build-date=2025-07-21T13:08:05, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 14:29:05 standalone.localdomain podman[248557]: 2025-10-13 14:29:05.9649889 +0000 UTC m=+0.113123367 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, release=1, description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rabbitmq-container, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, name=rhosp17/openstack-rabbitmq, build-date=2025-07-21T13:08:05, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 14:29:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1637: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:29:07 standalone.localdomain ceph-mon[29756]: pgmap v1637: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1638: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:29:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:29:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:29:08 standalone.localdomain podman[248659]: 2025-10-13 14:29:08.796470838 +0000 UTC m=+0.063910396 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, release=1, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, container_name=clustercheck, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container)
Oct 13 14:29:08 standalone.localdomain podman[248659]: 2025-10-13 14:29:08.855576754 +0000 UTC m=+0.123016302 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, io.openshift.expose-services=, release=1, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:45, description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.buildah.version=1.33.12, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, container_name=clustercheck, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-mariadb-container, managed_by=tripleo_ansible)
Oct 13 14:29:08 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:29:08 standalone.localdomain podman[248657]: 2025-10-13 14:29:08.903323079 +0000 UTC m=+0.172660136 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, release=1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:29:08 standalone.localdomain podman[248657]: 2025-10-13 14:29:08.91274636 +0000 UTC m=+0.182083417 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible)
Oct 13 14:29:08 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:29:08 standalone.localdomain podman[248658]: 2025-10-13 14:29:08.856453491 +0000 UTC m=+0.121350591 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, vcs-type=git, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, container_name=nova_compute)
Oct 13 14:29:08 standalone.localdomain podman[248658]: 2025-10-13 14:29:08.995059985 +0000 UTC m=+0.259957075 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, version=17.1.9, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:29:09 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:29:09 standalone.localdomain ceph-mon[29756]: pgmap v1638: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1639: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:10 standalone.localdomain sudo[248757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:29:10 standalone.localdomain sudo[248757]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:29:10 standalone.localdomain sudo[248757]: pam_unix(sudo:session): session closed for user root
Oct 13 14:29:10 standalone.localdomain ceph-mon[29756]: pgmap v1639: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:10 standalone.localdomain sudo[248772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:29:10 standalone.localdomain sudo[248772]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:29:10 standalone.localdomain sudo[248772]: pam_unix(sudo:session): session closed for user root
Oct 13 14:29:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:29:10 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:29:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:29:10 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:29:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:29:10 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:29:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:29:10 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:29:10 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 9c5b66f2-0b02-49f6-9238-7c044913b8ce (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:29:10 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 9c5b66f2-0b02-49f6-9238-7c044913b8ce (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:29:10 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 9c5b66f2-0b02-49f6-9238-7c044913b8ce (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:29:10 standalone.localdomain sudo[248819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:29:10 standalone.localdomain sudo[248819]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:29:10 standalone.localdomain sudo[248819]: pam_unix(sudo:session): session closed for user root
Oct 13 14:29:11 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:29:11 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:29:11 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:29:11 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:29:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:29:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1640: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:12 standalone.localdomain ceph-mon[29756]: pgmap v1640: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1641: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:14 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:29:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:29:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:29:14 standalone.localdomain runuser[248995]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:29:15 standalone.localdomain ceph-mon[29756]: pgmap v1641: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:15 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:29:15 standalone.localdomain runuser[248995]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:29:15 standalone.localdomain runuser[249105]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:29:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1642: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:16 standalone.localdomain runuser[249105]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:29:16 standalone.localdomain runuser[249208]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:29:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:29:16 standalone.localdomain runuser[249208]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:29:17 standalone.localdomain ceph-mon[29756]: pgmap v1642: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1643: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:18 standalone.localdomain ceph-mon[29756]: pgmap v1643: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:29:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:29:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:29:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:29:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:29:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:29:19 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:29:19 standalone.localdomain recover_tripleo_nova_virtqemud[249394]: 93291
Oct 13 14:29:19 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:29:19 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:29:19 standalone.localdomain systemd[1]: tmp-crun.IMCbUl.mount: Deactivated successfully.
Oct 13 14:29:19 standalone.localdomain podman[249350]: 2025-10-13 14:29:19.829848057 +0000 UTC m=+0.094375877 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, name=rhosp17/openstack-heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, release=1, build-date=2025-07-21T15:44:11, summary=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, description=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, 
com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=heat_engine, batch=17.1_20250721.1, architecture=x86_64)
Oct 13 14:29:19 standalone.localdomain podman[249349]: 2025-10-13 14:29:19.869317586 +0000 UTC m=+0.133334270 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, com.redhat.component=openstack-heat-api-cfn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, build-date=2025-07-21T14:49:55, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-heat-api-cfn, tcib_managed=true, container_name=heat_api_cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:29:19 standalone.localdomain podman[249350]: 2025-10-13 14:29:19.897719375 +0000 UTC m=+0.162247215 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, name=rhosp17/openstack-heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=heat_engine, release=1, architecture=x86_64, build-date=2025-07-21T15:44:11, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, 
config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-heat-engine-container)
Oct 13 14:29:19 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:29:19 standalone.localdomain podman[249363]: 2025-10-13 14:29:19.88979973 +0000 UTC m=+0.142479404 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, release=1, tcib_managed=true, config_id=tripleo_step4, version=17.1.9, name=rhosp17/openstack-heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=)
Oct 13 14:29:19 standalone.localdomain podman[249349]: 2025-10-13 14:29:19.955909382 +0000 UTC m=+0.219926066 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, container_name=heat_api_cfn, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-cfn-container, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-heat-api-cfn, build-date=2025-07-21T14:49:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public)
Oct 13 14:29:19 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:29:19 standalone.localdomain podman[249351]: 2025-10-13 14:29:19.974202828 +0000 UTC m=+0.234658342 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T12:58:43, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp 
osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-memcached, com.redhat.component=openstack-memcached-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, container_name=memcached)
Oct 13 14:29:19 standalone.localdomain podman[249363]: 2025-10-13 14:29:19.9749086 +0000 UTC m=+0.227588304 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, 
version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=heat_api, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, summary=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64)
Oct 13 14:29:19 standalone.localdomain podman[249370]: 2025-10-13 14:29:19.944468299 +0000 UTC m=+0.195163431 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron)
Oct 13 14:29:20 standalone.localdomain podman[249351]: 2025-10-13 14:29:20.022708717 +0000 UTC m=+0.283164211 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., release=1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, container_name=memcached, description=Red 
Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, architecture=x86_64, com.redhat.component=openstack-memcached-container, version=17.1.9, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:29:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1644: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:20 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:29:20 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:29:20 standalone.localdomain podman[249355]: 2025-10-13 14:29:20.127800785 +0000 UTC m=+0.384816573 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-heat-api, com.redhat.component=openstack-heat-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cron, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1)
Oct 13 14:29:20 standalone.localdomain podman[249355]: 2025-10-13 14:29:20.142311093 +0000 UTC m=+0.399326921 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, tcib_managed=true, vcs-type=git, summary=Red Hat 
OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, container_name=heat_api_cron, io.buildah.version=1.33.12)
Oct 13 14:29:20 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:29:20 standalone.localdomain podman[249370]: 2025-10-13 14:29:20.180285116 +0000 UTC m=+0.430980228 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-cron-container, version=17.1.9, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, architecture=x86_64, vcs-type=git)
Oct 13 14:29:20 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:29:21 standalone.localdomain ceph-mon[29756]: pgmap v1644: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:29:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1645: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:22 standalone.localdomain ceph-mon[29756]: pgmap v1645: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:29:23
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_data', 'images', 'manila_metadata', 'vms', '.mgr', 'backups', 'volumes']
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:29:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1646: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:25 standalone.localdomain ceph-mon[29756]: pgmap v1646: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1647: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:29:27 standalone.localdomain ceph-mon[29756]: pgmap v1647: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:27 standalone.localdomain runuser[249766]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:29:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:29:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:29:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:29:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:29:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:29:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:29:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:29:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:29:27 standalone.localdomain podman[249813]: 2025-10-13 14:29:27.491254715 +0000 UTC m=+0.069487248 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat 
OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, io.buildah.version=1.33.12, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, batch=17.1_20250721.1, container_name=swift_proxy, release=1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 14:29:27 standalone.localdomain podman[249818]: 2025-10-13 14:29:27.524237125 +0000 UTC m=+0.091922412 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, release=1, tcib_managed=true, container_name=swift_container_server, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32)
Oct 13 14:29:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:29:27 standalone.localdomain systemd[1]: tmp-crun.g5AWi0.mount: Deactivated successfully.
Oct 13 14:29:27 standalone.localdomain podman[249852]: 2025-10-13 14:29:27.587617723 +0000 UTC m=+0.144852467 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, version=17.1.9, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, distribution-scope=public, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account)
Oct 13 14:29:27 standalone.localdomain podman[249825]: 2025-10-13 14:29:27.621049876 +0000 UTC m=+0.190573840 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:29:27 standalone.localdomain podman[249825]: 2025-10-13 14:29:27.636767042 +0000 UTC m=+0.206290986 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, release=1, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Oct 13 14:29:27 standalone.localdomain podman[249825]: unhealthy
Oct 13 14:29:27 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:29:27 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:29:27 standalone.localdomain podman[249813]: 2025-10-13 14:29:27.685732445 +0000 UTC m=+0.263964998 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_proxy, config_id=tripleo_step4, name=rhosp17/openstack-swift-proxy-server, build-date=2025-07-21T14:48:37, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, com.redhat.component=openstack-swift-proxy-server-container, description=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:29:27 standalone.localdomain podman[249811]: 2025-10-13 14:29:27.643951054 +0000 UTC m=+0.224772818 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, container_name=glance_api_cron, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T13:58:20, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', 
'/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:29:27 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:29:27 standalone.localdomain podman[249811]: 2025-10-13 14:29:27.727738743 +0000 UTC m=+0.308560537 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-glance-api, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_cron, release=1, version=17.1.9)
Oct 13 14:29:27 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:29:27 standalone.localdomain podman[249812]: 2025-10-13 14:29:27.689268914 +0000 UTC m=+0.264401832 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, container_name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, build-date=2025-07-21T14:56:28, vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red 
Hat OpenStack Platform 17.1 swift-object, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64)
Oct 13 14:29:27 standalone.localdomain podman[249814]: 2025-10-13 14:29:27.744214982 +0000 UTC m=+0.316040707 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, container_name=glance_api_internal, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, 
com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-07-21T13:58:20, release=1)
Oct 13 14:29:27 standalone.localdomain podman[249818]: 2025-10-13 14:29:27.748752982 +0000 UTC m=+0.316438289 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, build-date=2025-07-21T15:54:32, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 
swift-container, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:29:27 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:29:27 standalone.localdomain podman[249834]: 2025-10-13 14:29:27.784462596 +0000 UTC m=+0.347292423 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T13:28:44, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct 13 14:29:27 standalone.localdomain podman[249922]: 2025-10-13 14:29:27.820351945 +0000 UTC m=+0.222420495 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git)
Oct 13 14:29:27 standalone.localdomain podman[249852]: 2025-10-13 14:29:27.837731712 +0000 UTC m=+0.394966416 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, release=1, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-account-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, 
container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, name=rhosp17/openstack-swift-account)
Oct 13 14:29:27 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:29:27 standalone.localdomain podman[249834]: 2025-10-13 14:29:27.85708661 +0000 UTC m=+0.419916427 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, version=17.1.9, architecture=x86_64, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:29:27 standalone.localdomain podman[249834]: unhealthy
Oct 13 14:29:27 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:29:27 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:29:27 standalone.localdomain podman[249812]: 2025-10-13 14:29:27.878863993 +0000 UTC m=+0.453996901 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, architecture=x86_64, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, tcib_managed=true, config_id=tripleo_step4, container_name=swift_object_server, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, io.openshift.expose-services=)
Oct 13 14:29:27 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:29:27 standalone.localdomain runuser[249766]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:29:27 standalone.localdomain podman[249814]: 2025-10-13 14:29:27.927741833 +0000 UTC m=+0.499567548 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, version=17.1.9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_internal, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, release=1)
Oct 13 14:29:27 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1648: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:28 standalone.localdomain runuser[250036]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:29:28 standalone.localdomain podman[249922]: 2025-10-13 14:29:28.16161479 +0000 UTC m=+0.563683350 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true)
Oct 13 14:29:28 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:29:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:29:28 standalone.localdomain runuser[250036]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:29:28 standalone.localdomain runuser[250146]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:29:29 standalone.localdomain ceph-mon[29756]: pgmap v1648: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:29 standalone.localdomain runuser[250146]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:29:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1649: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:31 standalone.localdomain ceph-mon[29756]: pgmap v1649: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:29:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1650: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:32 standalone.localdomain ceph-mon[29756]: pgmap v1650: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:29:33 standalone.localdomain systemd[1]: tmp-crun.tQo8FW.mount: Deactivated successfully.
Oct 13 14:29:33 standalone.localdomain podman[250244]: 2025-10-13 14:29:33.838023739 +0000 UTC m=+0.103789848 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-keystone-container, container_name=keystone_cron, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, name=rhosp17/openstack-keystone, release=1, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc.)
Oct 13 14:29:33 standalone.localdomain podman[250244]: 2025-10-13 14:29:33.873798985 +0000 UTC m=+0.139565134 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, vendor=Red Hat, Inc., name=rhosp17/openstack-keystone, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, container_name=keystone_cron, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.component=openstack-keystone-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, maintainer=OpenStack TripleO Team)
Oct 13 14:29:33 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:29:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1651: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:35 standalone.localdomain ceph-mon[29756]: pgmap v1651: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:29:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:29:35 standalone.localdomain podman[250487]: 2025-10-13 14:29:35.805265249 +0000 UTC m=+0.073504052 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-sriov-agent-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, version=17.1.9, build-date=2025-07-21T16:03:34, name=rhosp17/openstack-neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, vendor=Red Hat, Inc., container_name=neutron_sriov_agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public)
Oct 13 14:29:35 standalone.localdomain podman[250486]: 2025-10-13 14:29:35.857563935 +0000 UTC m=+0.126707787 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-dhcp-agent-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-dhcp-agent, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T16:28:54, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, batch=17.1_20250721.1)
Oct 13 14:29:35 standalone.localdomain podman[250487]: 2025-10-13 14:29:35.881477134 +0000 UTC m=+0.149715977 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:03:34, name=rhosp17/openstack-neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, 
container_name=neutron_sriov_agent, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 14:29:35 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:29:35 standalone.localdomain podman[250486]: 2025-10-13 14:29:35.901839443 +0000 UTC m=+0.170983305 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_dhcp, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-dhcp-agent, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, release=1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-neutron-dhcp-agent-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9)
Oct 13 14:29:35 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:29:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1652: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:29:37 standalone.localdomain ceph-mon[29756]: pgmap v1652: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1653: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:39 standalone.localdomain ceph-mon[29756]: pgmap v1653: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:29:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:29:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:29:39 standalone.localdomain podman[250621]: 2025-10-13 14:29:39.827325516 +0000 UTC m=+0.081428067 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=clustercheck, vcs-type=git, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, release=1, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-mariadb, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red 
Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, distribution-scope=public)
Oct 13 14:29:39 standalone.localdomain podman[250620]: 2025-10-13 14:29:39.88472268 +0000 UTC m=+0.143284879 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=nova_compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container)
Oct 13 14:29:39 standalone.localdomain podman[250621]: 2025-10-13 14:29:39.895807832 +0000 UTC m=+0.149910413 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, build-date=2025-07-21T12:58:45, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, container_name=clustercheck, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., name=rhosp17/openstack-mariadb, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step2, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, release=1, com.redhat.component=openstack-mariadb-container, vcs-type=git, managed_by=tripleo_ansible, 
architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true)
Oct 13 14:29:39 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:29:39 standalone.localdomain podman[250620]: 2025-10-13 14:29:39.916519603 +0000 UTC m=+0.175081742 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, release=1, tcib_managed=true)
Oct 13 14:29:39 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:29:39 standalone.localdomain podman[250619]: 2025-10-13 14:29:39.986557757 +0000 UTC m=+0.244856798 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.openshift.expose-services=)
Oct 13 14:29:40 standalone.localdomain podman[250619]: 2025-10-13 14:29:40.000118605 +0000 UTC m=+0.258417606 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, architecture=x86_64, version=17.1.9, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2)
Oct 13 14:29:40 standalone.localdomain runuser[250703]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:29:40 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:29:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1654: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:40 standalone.localdomain runuser[250703]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:29:40 standalone.localdomain runuser[250775]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:29:41 standalone.localdomain ceph-mon[29756]: pgmap v1654: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:41 standalone.localdomain runuser[250775]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:29:41 standalone.localdomain runuser[250829]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:29:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:29:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1655: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:42 standalone.localdomain runuser[250829]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:29:42 standalone.localdomain ceph-mon[29756]: pgmap v1655: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1656: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:45 standalone.localdomain ceph-mon[29756]: pgmap v1656: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1657: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:29:47 standalone.localdomain ceph-mon[29756]: pgmap v1657: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1658: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:49 standalone.localdomain ceph-mon[29756]: pgmap v1658: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1659: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:50 standalone.localdomain ceph-mon[29756]: pgmap v1659: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:29:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:29:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:29:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:29:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:29:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:29:50 standalone.localdomain podman[251227]: 2025-10-13 14:29:50.82946641 +0000 UTC m=+0.081917182 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-heat-api-cfn-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=heat_api_cfn, batch=17.1_20250721.1, architecture=x86_64, release=1, io.openshift.expose-services=, name=rhosp17/openstack-heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T14:49:55, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:29:50 standalone.localdomain podman[251252]: 2025-10-13 14:29:50.855740382 +0000 UTC m=+0.083062397 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, version=17.1.9, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, tcib_managed=true, batch=17.1_20250721.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64)
Oct 13 14:29:50 standalone.localdomain systemd[1]: tmp-crun.PjCSsL.mount: Deactivated successfully.
Oct 13 14:29:50 standalone.localdomain podman[251229]: 2025-10-13 14:29:50.902615741 +0000 UTC m=+0.147658595 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-memcached-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., config_id=tripleo_step1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, summary=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, build-date=2025-07-21T12:58:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.openshift.expose-services=)
Oct 13 14:29:50 standalone.localdomain podman[251246]: 2025-10-13 14:29:50.922271618 +0000 UTC m=+0.153906527 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, version=17.1.9, container_name=heat_api, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, build-date=2025-07-21T15:56:26, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, release=1, config_id=tripleo_step4)
Oct 13 14:29:50 standalone.localdomain podman[251229]: 2025-10-13 14:29:50.926709475 +0000 UTC m=+0.171752309 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, name=rhosp17/openstack-memcached, build-date=2025-07-21T12:58:43, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, com.redhat.component=openstack-memcached-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, version=17.1.9, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, release=1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64)
Oct 13 14:29:50 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:29:50 standalone.localdomain podman[251252]: 2025-10-13 14:29:50.940947855 +0000 UTC m=+0.168269880 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, config_id=tripleo_step4, release=1, maintainer=OpenStack TripleO Team)
Oct 13 14:29:50 standalone.localdomain podman[251246]: 2025-10-13 14:29:50.955800794 +0000 UTC m=+0.187435733 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, name=rhosp17/openstack-heat-api, batch=17.1_20250721.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc.)
Oct 13 14:29:50 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:29:50 standalone.localdomain podman[251227]: 2025-10-13 14:29:50.964814132 +0000 UTC m=+0.217264904 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1, name=rhosp17/openstack-heat-api-cfn, container_name=heat_api_cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, 
com.redhat.component=openstack-heat-api-cfn-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1)
Oct 13 14:29:50 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:29:50 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:29:50 standalone.localdomain podman[251235]: 2025-10-13 14:29:50.942344188 +0000 UTC m=+0.182499420 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T15:56:26, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cron, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:29:51 standalone.localdomain podman[251235]: 2025-10-13 14:29:51.028961265 +0000 UTC m=+0.269116477 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, container_name=heat_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, 
maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, build-date=2025-07-21T15:56:26, release=1, com.redhat.component=openstack-heat-api-container, architecture=x86_64, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc.)
Oct 13 14:29:51 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:29:51 standalone.localdomain podman[251228]: 2025-10-13 14:29:51.043348239 +0000 UTC m=+0.291255521 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, build-date=2025-07-21T15:44:11, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=heat_engine, io.buildah.version=1.33.12, 
com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-engine-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:29:51 standalone.localdomain podman[251228]: 2025-10-13 14:29:51.073836511 +0000 UTC m=+0.321743823 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, 
name=rhosp17/openstack-heat-engine, build-date=2025-07-21T15:44:11, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, container_name=heat_engine, tcib_managed=true, com.redhat.component=openstack-heat-engine-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:29:51 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:29:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:29:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1660: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:52 standalone.localdomain ceph-mon[29756]: pgmap v1660: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:52 standalone.localdomain runuser[251376]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:29:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:29:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:29:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:29:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:29:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:29:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:29:53 standalone.localdomain runuser[251376]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:29:53 standalone.localdomain runuser[251445]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:29:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1661: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:54 standalone.localdomain runuser[251445]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:29:54 standalone.localdomain runuser[251509]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:29:54 standalone.localdomain runuser[251509]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:29:55 standalone.localdomain ceph-mon[29756]: pgmap v1661: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1662: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:29:57 standalone.localdomain ceph-mon[29756]: pgmap v1662: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:29:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:29:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:29:57 standalone.localdomain podman[251797]: 2025-10-13 14:29:57.80553523 +0000 UTC m=+0.076672640 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, tcib_managed=true, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step4, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:29:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:29:57 standalone.localdomain systemd[1]: tmp-crun.1QVct0.mount: Deactivated successfully.
Oct 13 14:29:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:29:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:29:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:29:57 standalone.localdomain podman[251796]: 2025-10-13 14:29:57.923114213 +0000 UTC m=+0.194249513 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-swift-proxy-server-container, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, version=17.1.9, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, container_name=swift_proxy)
Oct 13 14:29:57 standalone.localdomain podman[251823]: 2025-10-13 14:29:57.876448102 +0000 UTC m=+0.065192006 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, name=rhosp17/openstack-glance-api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:20, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, version=17.1.9, tcib_managed=true, io.openshift.expose-services=)
Oct 13 14:29:57 standalone.localdomain podman[251825]: 2025-10-13 14:29:57.930128011 +0000 UTC m=+0.116925375 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, container_name=swift_container_server, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, release=1, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-swift-container-container, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, name=rhosp17/openstack-swift-container, tcib_managed=true)
Oct 13 14:29:57 standalone.localdomain podman[251823]: 2025-10-13 14:29:57.958786076 +0000 UTC m=+0.147529990 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, distribution-scope=public, architecture=x86_64, version=17.1.9, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api_cron, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, build-date=2025-07-21T13:58:20, batch=17.1_20250721.1, release=1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, com.redhat.component=openstack-glance-api-container, maintainer=OpenStack TripleO Team, tcib_managed=true)
Oct 13 14:29:57 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:29:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:29:57 standalone.localdomain podman[251863]: 2025-10-13 14:29:57.981441206 +0000 UTC m=+0.079425275 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, version=17.1.9, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
name=rhosp17/openstack-swift-account, container_name=swift_account_server, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:29:58 standalone.localdomain podman[251797]: 2025-10-13 14:29:58.004715885 +0000 UTC m=+0.275853285 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:53, tcib_managed=true, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 14:29:58 standalone.localdomain podman[251797]: unhealthy
Oct 13 14:29:58 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:29:58 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:29:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1663: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:29:58 standalone.localdomain podman[251917]: 2025-10-13 14:29:58.112260009 +0000 UTC m=+0.125089737 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_internal, distribution-scope=public, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', 
'/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=)
Oct 13 14:29:58 standalone.localdomain podman[251862]: 2025-10-13 14:29:58.021546535 +0000 UTC m=+0.125457567 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4)
Oct 13 14:29:58 standalone.localdomain podman[251796]: 2025-10-13 14:29:58.134858577 +0000 UTC m=+0.405993847 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, container_name=swift_proxy, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, version=17.1.9, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.buildah.version=1.33.12)
Oct 13 14:29:58 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:29:58 standalone.localdomain podman[251825]: 2025-10-13 14:29:58.155779813 +0000 UTC m=+0.342577157 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-swift-container, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, 
io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1)
Oct 13 14:29:58 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:29:58 standalone.localdomain podman[251863]: 2025-10-13 14:29:58.173754089 +0000 UTC m=+0.271738158 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, version=17.1.9, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible)
Oct 13 14:29:58 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:29:58 standalone.localdomain podman[251867]: 2025-10-13 14:29:58.084548272 +0000 UTC m=+0.180361384 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T14:56:28, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, vcs-type=git, version=17.1.9, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12)
Oct 13 14:29:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:29:58 standalone.localdomain podman[251862]: 2025-10-13 14:29:58.20678267 +0000 UTC m=+0.310693702 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.33.12, version=17.1.9, container_name=ovn_controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc.)
Oct 13 14:29:58 standalone.localdomain podman[251862]: unhealthy
Oct 13 14:29:58 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:29:58 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:29:58 standalone.localdomain podman[251979]: 2025-10-13 14:29:58.249738267 +0000 UTC m=+0.054771934 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, name=rhosp17/openstack-nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:29:58 standalone.localdomain podman[251867]: 2025-10-13 14:29:58.273889743 +0000 UTC m=+0.369702825 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, build-date=2025-07-21T14:56:28, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, container_name=swift_object_server, io.buildah.version=1.33.12, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, com.redhat.component=openstack-swift-object-container, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:29:58 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:29:58 standalone.localdomain podman[251917]: 2025-10-13 14:29:58.297297196 +0000 UTC m=+0.310126884 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, name=rhosp17/openstack-glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, container_name=glance_api_internal, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, com.redhat.component=openstack-glance-api-container)
Oct 13 14:29:58 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:29:58 standalone.localdomain podman[251979]: 2025-10-13 14:29:58.629844653 +0000 UTC m=+0.434878310 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T14:48:37)
Oct 13 14:29:58 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:29:59 standalone.localdomain ceph-mon[29756]: pgmap v1663: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1664: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:01 standalone.localdomain ceph-mon[29756]: pgmap v1664: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:30:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1665: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:02 standalone.localdomain ceph-mon[29756]: pgmap v1665: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1666: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:30:04 standalone.localdomain podman[252232]: 2025-10-13 14:30:04.794802596 +0000 UTC m=+0.059781908 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-mariadb, distribution-scope=public, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-mariadb-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=)
Oct 13 14:30:04 standalone.localdomain podman[252260]: 2025-10-13 14:30:04.862753405 +0000 UTC m=+0.060771228 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container)
Oct 13 14:30:04 standalone.localdomain systemd[1]: tmp-crun.IHO3Og.mount: Deactivated successfully.
Oct 13 14:30:04 standalone.localdomain podman[252230]: 2025-10-13 14:30:04.903864166 +0000 UTC m=+0.168618011 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-keystone-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, build-date=2025-07-21T13:27:18, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, container_name=keystone_cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1)
Oct 13 14:30:04 standalone.localdomain podman[252230]: 2025-10-13 14:30:04.946959258 +0000 UTC m=+0.211713113 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:18, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, com.redhat.component=openstack-keystone-container, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, container_name=keystone_cron, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9)
Oct 13 14:30:04 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:30:04 standalone.localdomain podman[252232]: 2025-10-13 14:30:04.969642459 +0000 UTC m=+0.234621801 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, summary=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-mariadb-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64)
Oct 13 14:30:05 standalone.localdomain ceph-mon[29756]: pgmap v1666: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:05 standalone.localdomain podman[252333]: 2025-10-13 14:30:05.242858941 +0000 UTC m=+0.059020614 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, release=1, build-date=2025-07-21T13:08:11, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-haproxy-container, description=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:30:05 standalone.localdomain podman[252333]: 2025-10-13 14:30:05.273813238 +0000 UTC m=+0.089974911 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, distribution-scope=public, version=17.1.9, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-haproxy-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, build-date=2025-07-21T13:08:11, name=rhosp17/openstack-haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:30:05 standalone.localdomain runuser[252409]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:30:05 standalone.localdomain runuser[252409]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:30:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1667: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:06 standalone.localdomain podman[252514]: 2025-10-13 14:30:06.090543476 +0000 UTC m=+0.095542583 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, name=rhosp17/openstack-rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, build-date=2025-07-21T13:08:05, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-rabbitmq-container, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true)
Oct 13 14:30:06 standalone.localdomain podman[252514]: 2025-10-13 14:30:06.099703089 +0000 UTC m=+0.104702196 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, build-date=2025-07-21T13:08:05, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-rabbitmq-container, summary=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-rabbitmq, io.buildah.version=1.33.12, version=17.1.9, release=1, vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b)
Oct 13 14:30:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:30:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:30:06 standalone.localdomain runuser[252558]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:30:06 standalone.localdomain podman[252542]: 2025-10-13 14:30:06.168887647 +0000 UTC m=+0.061486911 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.component=openstack-neutron-dhcp-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, build-date=2025-07-21T16:28:54, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., container_name=neutron_dhcp)
Oct 13 14:30:06 standalone.localdomain podman[252543]: 2025-10-13 14:30:06.184683865 +0000 UTC m=+0.070530891 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, version=17.1.9, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T16:03:34, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, name=rhosp17/openstack-neutron-sriov-agent, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:30:06 standalone.localdomain podman[252543]: 2025-10-13 14:30:06.24696087 +0000 UTC m=+0.132807916 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, release=1, container_name=neutron_sriov_agent, com.redhat.component=openstack-neutron-sriov-agent-container, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, distribution-scope=public, name=rhosp17/openstack-neutron-sriov-agent, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T16:03:34, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:30:06 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:30:06 standalone.localdomain podman[252542]: 2025-10-13 14:30:06.297292715 +0000 UTC m=+0.189891999 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, build-date=2025-07-21T16:28:54, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-dhcp-agent-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_dhcp, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-dhcp-agent, batch=17.1_20250721.1)
Oct 13 14:30:06 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:30:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:30:06 standalone.localdomain runuser[252558]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:30:06 standalone.localdomain runuser[252653]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:30:07 standalone.localdomain ceph-mon[29756]: pgmap v1667: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:07 standalone.localdomain runuser[252653]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:30:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1668: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:09 standalone.localdomain ceph-mon[29756]: pgmap v1668: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1669: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:30:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:30:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:30:10 standalone.localdomain podman[252797]: 2025-10-13 14:30:10.816820364 +0000 UTC m=+0.076652240 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=iscsid, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, tcib_managed=true, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:30:10 standalone.localdomain podman[252797]: 2025-10-13 14:30:10.857869862 +0000 UTC m=+0.117701718 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, distribution-scope=public, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, architecture=x86_64, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, vcs-type=git, config_id=tripleo_step3)
Oct 13 14:30:10 standalone.localdomain podman[252799]: 2025-10-13 14:30:10.870637236 +0000 UTC m=+0.127281933 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T12:58:45, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., name=rhosp17/openstack-mariadb, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-mariadb-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, container_name=clustercheck, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public)
Oct 13 14:30:10 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:30:10 standalone.localdomain podman[252798]: 2025-10-13 14:30:10.844942983 +0000 UTC m=+0.102037005 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, tcib_managed=true, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, release=1, config_id=tripleo_step5, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:30:10 standalone.localdomain sudo[252857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:30:10 standalone.localdomain podman[252798]: 2025-10-13 14:30:10.929260388 +0000 UTC m=+0.186354430 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=)
Oct 13 14:30:10 standalone.localdomain sudo[252857]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:30:10 standalone.localdomain sudo[252857]: pam_unix(sudo:session): session closed for user root
Oct 13 14:30:10 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:30:10 standalone.localdomain podman[252799]: 2025-10-13 14:30:10.956856481 +0000 UTC m=+0.213501108 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, distribution-scope=public, release=1, container_name=clustercheck, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, config_id=tripleo_step2)
Oct 13 14:30:10 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:30:10 standalone.localdomain sudo[252896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:30:11 standalone.localdomain sudo[252896]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:30:11 standalone.localdomain ceph-mon[29756]: pgmap v1669: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:30:11 standalone.localdomain sudo[252896]: pam_unix(sudo:session): session closed for user root
Oct 13 14:30:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:30:11 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:30:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:30:11 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:30:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:30:11 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:30:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:30:11 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:30:11 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 1d5e86bd-3ece-4b61-bfe4-fa2256b3171b (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:30:11 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 1d5e86bd-3ece-4b61-bfe4-fa2256b3171b (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:30:11 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 1d5e86bd-3ece-4b61-bfe4-fa2256b3171b (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:30:11 standalone.localdomain sudo[252951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:30:11 standalone.localdomain sudo[252951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:30:11 standalone.localdomain sudo[252951]: pam_unix(sudo:session): session closed for user root
Oct 13 14:30:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1670: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:12 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:30:12 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:30:12 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:30:12 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:30:12 standalone.localdomain ceph-mon[29756]: pgmap v1670: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1671: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:14 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:30:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:30:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:30:15 standalone.localdomain ceph-mon[29756]: pgmap v1671: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:15 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:30:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1672: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:30:17 standalone.localdomain ceph-mon[29756]: pgmap v1672: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1673: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:18 standalone.localdomain runuser[253229]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:30:18 standalone.localdomain runuser[253229]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:30:18 standalone.localdomain runuser[253351]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:30:19 standalone.localdomain ceph-mon[29756]: pgmap v1673: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:19 standalone.localdomain runuser[253351]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:30:19 standalone.localdomain runuser[253413]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:30:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1674: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:20 standalone.localdomain ceph-mon[29756]: pgmap v1674: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:20 standalone.localdomain runuser[253413]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:30:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:30:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:30:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:30:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:30:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:30:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:30:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:30:21 standalone.localdomain podman[253489]: 2025-10-13 14:30:21.889361432 +0000 UTC m=+0.143249408 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-memcached-container, io.openshift.expose-services=, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, batch=17.1_20250721.1, container_name=memcached, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step1, name=rhosp17/openstack-memcached, release=1, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:43, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9)
Oct 13 14:30:21 standalone.localdomain podman[253488]: 2025-10-13 14:30:21.8475613 +0000 UTC m=+0.104923513 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-heat-engine-container, name=rhosp17/openstack-heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=heat_engine, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T15:44:11, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, vendor=Red Hat, Inc., release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:30:21 standalone.localdomain podman[253489]: 2025-10-13 14:30:21.909775343 +0000 UTC m=+0.163663319 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, com.redhat.component=openstack-memcached-container, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, name=rhosp17/openstack-memcached, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']})
Oct 13 14:30:21 standalone.localdomain podman[253488]: 2025-10-13 14:30:21.927940464 +0000 UTC m=+0.185302647 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-engine-container, name=rhosp17/openstack-heat-engine, tcib_managed=true, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, container_name=heat_engine, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:30:21 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:30:22 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:30:22 standalone.localdomain podman[253507]: 2025-10-13 14:30:21.980502739 +0000 UTC m=+0.224102697 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=logrotate_crond, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1)
Oct 13 14:30:22 standalone.localdomain podman[253495]: 2025-10-13 14:30:22.001416724 +0000 UTC m=+0.253597177 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, build-date=2025-07-21T15:56:26, architecture=x86_64, name=rhosp17/openstack-heat-api, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=heat_api_cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:30:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1675: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:22 standalone.localdomain podman[253507]: 2025-10-13 14:30:22.079879159 +0000 UTC m=+0.323479107 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, distribution-scope=public, version=17.1.9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Oct 13 14:30:22 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:30:22 standalone.localdomain podman[253501]: 2025-10-13 14:30:22.186817514 +0000 UTC m=+0.435502069 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_id=tripleo_step4, release=1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api)
Oct 13 14:30:22 standalone.localdomain podman[253501]: 2025-10-13 14:30:22.220068501 +0000 UTC m=+0.468753016 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api, tcib_managed=true, container_name=heat_api, build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, summary=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4)
Oct 13 14:30:22 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:30:22 standalone.localdomain podman[253487]: 2025-10-13 14:30:22.237597643 +0000 UTC m=+0.498138434 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-cfn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T14:49:55, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., container_name=heat_api_cfn, distribution-scope=public, release=1, architecture=x86_64)
Oct 13 14:30:22 standalone.localdomain podman[253487]: 2025-10-13 14:30:22.338955065 +0000 UTC m=+0.599495846 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, version=17.1.9, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, com.redhat.component=openstack-heat-api-cfn-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, release=1, build-date=2025-07-21T14:49:55, container_name=heat_api_cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:30:22 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:30:22 standalone.localdomain podman[253495]: 2025-10-13 14:30:22.390378234 +0000 UTC m=+0.642558647 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, container_name=heat_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9)
Oct 13 14:30:22 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:30:22 standalone.localdomain ceph-mon[29756]: pgmap v1675: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:22 standalone.localdomain systemd[1]: tmp-crun.L3pKuJ.mount: Deactivated successfully.
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:30:23
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['images', 'manila_data', 'volumes', 'backups', '.mgr', 'manila_metadata', 'vms']
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:30:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1676: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:25 standalone.localdomain ceph-mon[29756]: pgmap v1676: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1677: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:26 standalone.localdomain ceph-mon[29756]: pgmap v1677: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1678: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:30:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:30:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:30:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:30:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:30:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:30:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:30:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:30:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:30:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:30:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:30:28 standalone.localdomain systemd[1]: tmp-crun.s5rIVO.mount: Deactivated successfully.
Oct 13 14:30:28 standalone.localdomain podman[253943]: 2025-10-13 14:30:28.831210805 +0000 UTC m=+0.091859649 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, release=1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1)
Oct 13 14:30:28 standalone.localdomain podman[253944]: 2025-10-13 14:30:28.88314712 +0000 UTC m=+0.142290338 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, version=17.1.9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, container_name=swift_proxy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:30:28 standalone.localdomain podman[253960]: 2025-10-13 14:30:28.943615469 +0000 UTC m=+0.190459546 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container, 
io.buildah.version=1.33.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, container_name=swift_container_server, version=17.1.9, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-swift-container-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:30:29 standalone.localdomain podman[253952]: 2025-10-13 14:30:29.006648267 +0000 UTC m=+0.251527084 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, release=1, architecture=x86_64, container_name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:30:29 standalone.localdomain podman[253972]: 2025-10-13 14:30:28.96694322 +0000 UTC m=+0.205067318 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, vcs-type=git, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, version=17.1.9)
Oct 13 14:30:29 standalone.localdomain podman[253942]: 2025-10-13 14:30:28.867394773 +0000 UTC m=+0.130993128 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T14:56:28, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, container_name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack 
Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, architecture=x86_64)
Oct 13 14:30:29 standalone.localdomain podman[253978]: 2025-10-13 14:30:29.022697423 +0000 UTC m=+0.260815861 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, distribution-scope=public, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc.)
Oct 13 14:30:29 standalone.localdomain podman[253941]: 2025-10-13 14:30:28.890024292 +0000 UTC m=+0.153395740 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', 
'/var/lib/glance:/var/lib/glance:shared']}, vendor=Red Hat, Inc., vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, build-date=2025-07-21T13:58:20, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, container_name=glance_api_cron, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, version=17.1.9)
Oct 13 14:30:29 standalone.localdomain podman[253969]: 2025-10-13 14:30:28.995534033 +0000 UTC m=+0.237832720 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:30:29 standalone.localdomain podman[253944]: 2025-10-13 14:30:29.062675188 +0000 UTC m=+0.321818436 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, container_name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, release=1)
Oct 13 14:30:29 standalone.localdomain podman[253942]: 2025-10-13 14:30:29.070419237 +0000 UTC m=+0.334017612 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, 
io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T14:56:28, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, container_name=swift_object_server, vcs-type=git)
Oct 13 14:30:29 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:30:29 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:30:29 standalone.localdomain podman[253972]: 2025-10-13 14:30:29.098183355 +0000 UTC m=+0.336307443 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:28:44, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:30:29 standalone.localdomain podman[253972]: unhealthy
Oct 13 14:30:29 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:30:29 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:30:29 standalone.localdomain ceph-mon[29756]: pgmap v1678: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:29 standalone.localdomain podman[253941]: 2025-10-13 14:30:29.122939311 +0000 UTC m=+0.386310779 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_cron, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, 
name=rhosp17/openstack-glance-api, release=1, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:30:29 standalone.localdomain podman[253960]: 2025-10-13 14:30:29.130137952 +0000 UTC m=+0.376982039 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T15:54:32, version=17.1.9, managed_by=tripleo_ansible, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 14:30:29 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:30:29 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:30:29 standalone.localdomain podman[253952]: 2025-10-13 14:30:29.188748284 +0000 UTC m=+0.433627081 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, summary=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, com.redhat.component=openstack-glance-api-container, container_name=glance_api_internal, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 14:30:29 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:30:29 standalone.localdomain podman[253969]: 2025-10-13 14:30:29.225348014 +0000 UTC m=+0.467646701 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Oct 13 14:30:29 standalone.localdomain podman[253969]: unhealthy
Oct 13 14:30:29 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:30:29 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:30:29 standalone.localdomain podman[253978]: 2025-10-13 14:30:29.283805831 +0000 UTC m=+0.521924249 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, name=rhosp17/openstack-swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, 
com.redhat.component=openstack-swift-account-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, container_name=swift_account_server, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible)
Oct 13 14:30:29 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:30:29 standalone.localdomain podman[253943]: 2025-10-13 14:30:29.336859271 +0000 UTC m=+0.597508115 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team)
Oct 13 14:30:29 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:30:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1679: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:30 standalone.localdomain runuser[254159]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:30:31 standalone.localdomain ceph-mon[29756]: pgmap v1679: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:30:31 standalone.localdomain runuser[254159]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:30:31 standalone.localdomain runuser[254228]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:30:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1680: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:32 standalone.localdomain runuser[254228]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:30:32 standalone.localdomain runuser[254282]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:30:32 standalone.localdomain ceph-mon[29756]: pgmap v1680: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:33 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 14:30:33 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 14:30:33 standalone.localdomain runuser[254282]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:30:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1681: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:35 standalone.localdomain ceph-mon[29756]: pgmap v1681: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:30:35 standalone.localdomain podman[254530]: 2025-10-13 14:30:35.819492432 +0000 UTC m=+0.075465193 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, release=1, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', 
'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-keystone-container, container_name=keystone_cron, batch=17.1_20250721.1, name=rhosp17/openstack-keystone, io.openshift.expose-services=, distribution-scope=public, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:30:35 standalone.localdomain podman[254530]: 2025-10-13 14:30:35.853524353 +0000 UTC m=+0.109497104 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-keystone, summary=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.component=openstack-keystone-container, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, managed_by=tripleo_ansible, version=17.1.9, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, release=1, build-date=2025-07-21T13:27:18, container_name=keystone_cron, batch=17.1_20250721.1)
Oct 13 14:30:35 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:30:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1682: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:30:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:30:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:30:36 standalone.localdomain systemd[1]: tmp-crun.uPv8hE.mount: Deactivated successfully.
Oct 13 14:30:36 standalone.localdomain podman[254599]: 2025-10-13 14:30:36.819015309 +0000 UTC m=+0.088638721 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, name=rhosp17/openstack-neutron-sriov-agent, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.openshift.expose-services=, release=1, version=17.1.9, com.redhat.component=openstack-neutron-sriov-agent-container, description=Red Hat 
OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, build-date=2025-07-21T16:03:34, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1)
Oct 13 14:30:36 standalone.localdomain systemd[1]: tmp-crun.jtD8Wh.mount: Deactivated successfully.
Oct 13 14:30:36 standalone.localdomain podman[254598]: 2025-10-13 14:30:36.899395822 +0000 UTC m=+0.169369085 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, release=1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:54, vcs-type=git, config_id=tripleo_step4, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container)
Oct 13 14:30:36 standalone.localdomain podman[254599]: 2025-10-13 14:30:36.92230074 +0000 UTC m=+0.191924132 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, build-date=2025-07-21T16:03:34, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-sriov-agent-container, name=rhosp17/openstack-neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, architecture=x86_64, io.openshift.expose-services=, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, vcs-type=git, container_name=neutron_sriov_agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9)
Oct 13 14:30:36 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:30:36 standalone.localdomain podman[254598]: 2025-10-13 14:30:36.962934886 +0000 UTC m=+0.232908159 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, com.redhat.component=openstack-neutron-dhcp-agent-container, architecture=x86_64, build-date=2025-07-21T16:28:54, vcs-type=git, name=rhosp17/openstack-neutron-dhcp-agent, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1)
Oct 13 14:30:36 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:30:37 standalone.localdomain ceph-mon[29756]: pgmap v1682: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1683: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:38 standalone.localdomain ceph-mon[29756]: pgmap v1683: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1684: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:41 standalone.localdomain ceph-mon[29756]: pgmap v1684: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:30:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:30:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:30:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:30:41 standalone.localdomain podman[254740]: 2025-10-13 14:30:41.802819804 +0000 UTC m=+0.068220919 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container)
Oct 13 14:30:41 standalone.localdomain podman[254740]: 2025-10-13 14:30:41.820202061 +0000 UTC m=+0.085603156 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T14:48:37, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=nova_compute, maintainer=OpenStack TripleO Team)
Oct 13 14:30:41 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:30:41 standalone.localdomain podman[254741]: 2025-10-13 14:30:41.858802784 +0000 UTC m=+0.124943442 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.9, name=rhosp17/openstack-mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=clustercheck, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45, config_id=tripleo_step2, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']})
Oct 13 14:30:41 standalone.localdomain podman[254739]: 2025-10-13 14:30:41.907910831 +0000 UTC m=+0.176599358 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, architecture=x86_64, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp 
openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, container_name=iscsid)
Oct 13 14:30:41 standalone.localdomain podman[254739]: 2025-10-13 14:30:41.917862529 +0000 UTC m=+0.186551046 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, release=1, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git)
Oct 13 14:30:41 standalone.localdomain podman[254741]: 2025-10-13 14:30:41.926258369 +0000 UTC m=+0.192399027 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, summary=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, version=17.1.9, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-mariadb, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, distribution-scope=public, release=1, batch=17.1_20250721.1, container_name=clustercheck, 
vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team)
Oct 13 14:30:41 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:30:41 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:30:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1685: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:42 standalone.localdomain ceph-mon[29756]: pgmap v1685: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:43 standalone.localdomain runuser[254842]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:30:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1686: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:44 standalone.localdomain runuser[254842]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:30:44 standalone.localdomain runuser[254995]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:30:45 standalone.localdomain ceph-mon[29756]: pgmap v1686: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:45 standalone.localdomain runuser[254995]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:30:45 standalone.localdomain runuser[255098]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:30:45 standalone.localdomain runuser[255098]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:30:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1687: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:46 standalone.localdomain ceph-mon[29756]: pgmap v1687: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:30:47 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 14:30:47 standalone.localdomain object-server[255254]: Object update sweep starting on /srv/node/d1 (pid: 17)
Oct 13 14:30:47 standalone.localdomain object-server[255254]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 17)
Oct 13 14:30:47 standalone.localdomain object-server[255254]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 14:30:47 standalone.localdomain object-server[114601]: Object update sweep completed: 0.09s
Oct 13 14:30:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1688: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:49 standalone.localdomain ceph-mon[29756]: pgmap v1688: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1689: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:51 standalone.localdomain ceph-mon[29756]: pgmap v1689: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:30:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1690: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:52 standalone.localdomain ceph-mon[29756]: pgmap v1690: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:30:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:30:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:30:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:30:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:30:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:30:52 standalone.localdomain systemd[1]: tmp-crun.AT7XLO.mount: Deactivated successfully.
Oct 13 14:30:52 standalone.localdomain podman[255354]: 2025-10-13 14:30:52.872839554 +0000 UTC m=+0.125888161 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:56:26, container_name=heat_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, 
name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:30:52 standalone.localdomain podman[255348]: 2025-10-13 14:30:52.834699286 +0000 UTC m=+0.097630958 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api-cfn, architecture=x86_64, com.redhat.component=openstack-heat-api-cfn-container, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, build-date=2025-07-21T14:49:55, container_name=heat_api_cfn, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, version=17.1.9, maintainer=OpenStack TripleO Team)
Oct 13 14:30:52 standalone.localdomain podman[255350]: 2025-10-13 14:30:52.853695443 +0000 UTC m=+0.113815508 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_id=tripleo_step1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, container_name=memcached, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, com.redhat.component=openstack-memcached-container, batch=17.1_20250721.1, build-date=2025-07-21T12:58:43, io.openshift.expose-services=, release=1, architecture=x86_64, tcib_managed=true)
Oct 13 14:30:52 standalone.localdomain podman[255364]: 2025-10-13 14:30:52.910691784 +0000 UTC m=+0.163582126 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git)
Oct 13 14:30:52 standalone.localdomain podman[255348]: 2025-10-13 14:30:52.916875365 +0000 UTC m=+0.179807007 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api-cfn, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-cfn-container, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, container_name=heat_api_cfn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, maintainer=OpenStack TripleO Team)
Oct 13 14:30:52 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:30:52 standalone.localdomain podman[255349]: 2025-10-13 14:30:52.953654951 +0000 UTC m=+0.214745336 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, release=1, architecture=x86_64, build-date=2025-07-21T15:44:11, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-engine-container, io.openshift.expose-services=, name=rhosp17/openstack-heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_engine, vendor=Red Hat, Inc., vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, distribution-scope=public)
Oct 13 14:30:52 standalone.localdomain podman[255354]: 2025-10-13 14:30:52.959021827 +0000 UTC m=+0.212070444 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T15:56:26, container_name=heat_api_cron, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible)
Oct 13 14:30:52 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:30:53 standalone.localdomain podman[255349]: 2025-10-13 14:30:53.000829199 +0000 UTC m=+0.261919584 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=heat_engine, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, version=17.1.9, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, vcs-type=git, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-engine-container, description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=)
Oct 13 14:30:53 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:30:53 standalone.localdomain podman[255368]: 2025-10-13 14:30:53.019583379 +0000 UTC m=+0.268168248 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, architecture=x86_64, distribution-scope=public, container_name=logrotate_crond, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=)
Oct 13 14:30:53 standalone.localdomain podman[255368]: 2025-10-13 14:30:53.029688491 +0000 UTC m=+0.278273390 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, version=17.1.9, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-cron)
Oct 13 14:30:53 standalone.localdomain podman[255350]: 2025-10-13 14:30:53.039710071 +0000 UTC m=+0.299830136 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:43, vcs-type=git, architecture=x86_64, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, name=rhosp17/openstack-memcached, summary=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe)
Oct 13 14:30:53 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:30:53 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:30:53 standalone.localdomain podman[255364]: 2025-10-13 14:30:53.092548264 +0000 UTC m=+0.345438746 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=heat_api, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T15:56:26, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, release=1, config_id=tripleo_step4, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-heat-api)
Oct 13 14:30:53 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:30:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:30:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:30:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:30:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:30:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:30:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:30:53 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:30:53 standalone.localdomain recover_tripleo_nova_virtqemud[255495]: 93291
Oct 13 14:30:53 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:30:53 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:30:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1691: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:55 standalone.localdomain ceph-mon[29756]: pgmap v1691: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1692: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:56 standalone.localdomain runuser[255731]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:30:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:30:56 standalone.localdomain runuser[255731]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:30:57 standalone.localdomain runuser[255792]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:30:57 standalone.localdomain ceph-mon[29756]: pgmap v1692: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:57 standalone.localdomain runuser[255792]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:30:57 standalone.localdomain runuser[255854]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:30:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1693: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:58 standalone.localdomain runuser[255854]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:30:59 standalone.localdomain ceph-mon[29756]: pgmap v1693: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:30:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:30:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:30:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:30:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:30:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:30:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:30:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:30:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:30:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:30:59 standalone.localdomain podman[256037]: 2025-10-13 14:30:59.862745421 +0000 UTC m=+0.100596219 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, distribution-scope=public, vcs-type=git, container_name=glance_api_internal, version=17.1.9, release=1, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-glance-api-container)
Oct 13 14:30:59 standalone.localdomain podman[256025]: 2025-10-13 14:30:59.839597895 +0000 UTC m=+0.083838412 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, com.redhat.component=openstack-swift-proxy-server-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, name=rhosp17/openstack-swift-proxy-server, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, managed_by=tripleo_ansible, container_name=swift_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, architecture=x86_64, release=1)
Oct 13 14:30:59 standalone.localdomain podman[256047]: 2025-10-13 14:30:59.953163705 +0000 UTC m=+0.183724809 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:30:59 standalone.localdomain podman[256047]: 2025-10-13 14:30:59.958564362 +0000 UTC m=+0.189125466 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public)
Oct 13 14:30:59 standalone.localdomain podman[256047]: unhealthy
Oct 13 14:30:59 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:30:59 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:30:59 standalone.localdomain podman[256065]: 2025-10-13 14:30:59.924141427 +0000 UTC m=+0.144739362 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-account, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, config_id=tripleo_step4, release=1, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-account-container)
Oct 13 14:30:59 standalone.localdomain podman[256018]: 2025-10-13 14:30:59.897915997 +0000 UTC m=+0.146231519 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, io.buildah.version=1.33.12, 
version=17.1.9, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible)
Oct 13 14:31:00 standalone.localdomain podman[256017]: 2025-10-13 14:31:00.045527919 +0000 UTC m=+0.297152823 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T13:58:20, container_name=glance_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, architecture=x86_64, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Oct 13 14:31:00 standalone.localdomain podman[256043]: 2025-10-13 14:31:00.056781507 +0000 UTC m=+0.291176029 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, container_name=swift_container_server, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, build-date=2025-07-21T15:54:32, 
vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1)
Oct 13 14:31:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1694: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:00 standalone.localdomain podman[256025]: 2025-10-13 14:31:00.067456587 +0000 UTC m=+0.311697084 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-proxy-server-container, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, 
build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, release=1, container_name=swift_proxy, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:31:00 standalone.localdomain podman[256019]: 2025-10-13 14:30:59.819866515 +0000 UTC m=+0.068943030 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, vendor=Red Hat, Inc., container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4)
Oct 13 14:31:00 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:31:00 standalone.localdomain podman[256017]: 2025-10-13 14:31:00.083021457 +0000 UTC m=+0.334646341 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T13:58:20, container_name=glance_api_cron, io.openshift.expose-services=, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12)
Oct 13 14:31:00 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:31:00 standalone.localdomain podman[256018]: 2025-10-13 14:31:00.103954365 +0000 UTC m=+0.352269857 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, version=17.1.9, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=swift_object_server, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-swift-object, architecture=x86_64)
Oct 13 14:31:00 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:31:00 standalone.localdomain podman[256065]: 2025-10-13 14:31:00.141795573 +0000 UTC m=+0.362393488 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, tcib_managed=true)
Oct 13 14:31:00 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:31:00 standalone.localdomain podman[256019]: 2025-10-13 14:31:00.158813189 +0000 UTC m=+0.407889724 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, 
io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.)
Oct 13 14:31:00 standalone.localdomain podman[256059]: 2025-10-13 14:31:00.157716426 +0000 UTC m=+0.381542651 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container)
Oct 13 14:31:00 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:31:00 standalone.localdomain ceph-mon[29756]: pgmap v1694: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:00 standalone.localdomain podman[256059]: 2025-10-13 14:31:00.237739208 +0000 UTC m=+0.461565423 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container)
Oct 13 14:31:00 standalone.localdomain podman[256059]: unhealthy
Oct 13 14:31:00 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:31:00 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:31:00 standalone.localdomain podman[256043]: 2025-10-13 14:31:00.259788159 +0000 UTC m=+0.494182651 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vendor=Red Hat, Inc., container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, release=1)
Oct 13 14:31:00 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:31:00 standalone.localdomain podman[256037]: 2025-10-13 14:31:00.311078215 +0000 UTC m=+0.548929023 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, build-date=2025-07-21T13:58:20, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_internal, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:31:00 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:31:00 standalone.localdomain systemd[1]: tmp-crun.8KWger.mount: Deactivated successfully.
Oct 13 14:31:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:31:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1695: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:02 standalone.localdomain ceph-mon[29756]: pgmap v1695: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1696: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:05 standalone.localdomain podman[256349]: 2025-10-13 14:31:05.073760848 +0000 UTC m=+0.069281512 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-mariadb-container, build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:31:05 standalone.localdomain podman[256349]: 2025-10-13 14:31:05.10489179 +0000 UTC m=+0.100412444 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:45, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:31:05 standalone.localdomain ceph-mon[29756]: pgmap v1696: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:05 standalone.localdomain podman[256422]: 2025-10-13 14:31:05.387562835 +0000 UTC m=+0.080503429 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, build-date=2025-07-21T13:08:11, description=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.component=openstack-haproxy-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50)
Oct 13 14:31:05 standalone.localdomain podman[256422]: 2025-10-13 14:31:05.421889336 +0000 UTC m=+0.114829920 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, com.redhat.component=openstack-haproxy-container, name=rhosp17/openstack-haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, build-date=2025-07-21T13:08:11, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64)
Oct 13 14:31:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1697: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:06 standalone.localdomain systemd[1]: tmp-crun.atkDCH.mount: Deactivated successfully.
Oct 13 14:31:06 standalone.localdomain podman[256507]: 2025-10-13 14:31:06.224709355 +0000 UTC m=+0.077603950 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-rabbitmq, build-date=2025-07-21T13:08:05, com.redhat.component=openstack-rabbitmq-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, release=1)
Oct 13 14:31:06 standalone.localdomain podman[256507]: 2025-10-13 14:31:06.235808758 +0000 UTC m=+0.088703353 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, name=rhosp17/openstack-rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, build-date=2025-07-21T13:08:05, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, architecture=x86_64, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9)
Oct 13 14:31:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:31:06 standalone.localdomain podman[256564]: 2025-10-13 14:31:06.371844611 +0000 UTC m=+0.125181960 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, tcib_managed=true, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-keystone-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-keystone, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, container_name=keystone_cron, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64)
Oct 13 14:31:06 standalone.localdomain podman[256564]: 2025-10-13 14:31:06.378775725 +0000 UTC m=+0.132113064 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, description=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, container_name=keystone_cron, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-keystone, version=17.1.9, 
batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T13:27:18, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git)
Oct 13 14:31:06 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:31:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:31:07 standalone.localdomain ceph-mon[29756]: pgmap v1697: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:31:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:31:07 standalone.localdomain podman[256613]: 2025-10-13 14:31:07.571736169 +0000 UTC m=+0.069046434 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, release=1, name=rhosp17/openstack-neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:03:34, config_id=tripleo_step4, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=neutron_sriov_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.expose-services=, distribution-scope=public)
Oct 13 14:31:07 standalone.localdomain podman[256612]: 2025-10-13 14:31:07.626243194 +0000 UTC m=+0.122229619 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-dhcp-agent-container, batch=17.1_20250721.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9)
Oct 13 14:31:07 standalone.localdomain podman[256613]: 2025-10-13 14:31:07.633938142 +0000 UTC m=+0.131248397 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=neutron_sriov_agent, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.buildah.version=1.33.12, build-date=2025-07-21T16:03:34, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-sriov-agent, maintainer=OpenStack TripleO Team)
Oct 13 14:31:07 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:31:07 standalone.localdomain podman[256612]: 2025-10-13 14:31:07.661769052 +0000 UTC m=+0.157755287 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, release=1, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T16:28:54, container_name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=)
Oct 13 14:31:07 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:31:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1698: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:08 standalone.localdomain runuser[256696]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:31:09 standalone.localdomain ceph-mon[29756]: pgmap v1698: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:09 standalone.localdomain runuser[256696]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:31:09 standalone.localdomain runuser[256793]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:31:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1699: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:10 standalone.localdomain runuser[256793]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:31:10 standalone.localdomain runuser[256847]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:31:11 standalone.localdomain ceph-mon[29756]: pgmap v1699: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:11 standalone.localdomain runuser[256847]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:31:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:31:11 standalone.localdomain sudo[256921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:31:11 standalone.localdomain sudo[256921]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:31:11 standalone.localdomain sudo[256921]: pam_unix(sudo:session): session closed for user root
Oct 13 14:31:11 standalone.localdomain sudo[256936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:31:11 standalone.localdomain sudo[256936]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:31:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:31:11 standalone.localdomain systemd[1]: tmp-crun.17235x.mount: Deactivated successfully.
Oct 13 14:31:11 standalone.localdomain podman[256951]: 2025-10-13 14:31:11.993764906 +0000 UTC m=+0.097607008 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T14:48:37, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:31:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:31:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:31:12 standalone.localdomain podman[256951]: 2025-10-13 14:31:12.020329006 +0000 UTC m=+0.124171098 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, vcs-type=git)
Oct 13 14:31:12 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:31:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1700: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:12 standalone.localdomain systemd[1]: tmp-crun.ZviEBj.mount: Deactivated successfully.
Oct 13 14:31:12 standalone.localdomain podman[256976]: 2025-10-13 14:31:12.094686654 +0000 UTC m=+0.075248956 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, container_name=clustercheck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 mariadb, release=1, description=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, config_id=tripleo_step2, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', 
'/var/lib/mysql:/var/lib/mysql']}, io.buildah.version=1.33.12, name=rhosp17/openstack-mariadb, build-date=2025-07-21T12:58:45, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:31:12 standalone.localdomain podman[256975]: 2025-10-13 14:31:12.134394901 +0000 UTC m=+0.115866751 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-07-21T13:27:15, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1)
Oct 13 14:31:12 standalone.localdomain podman[256976]: 2025-10-13 14:31:12.13986191 +0000 UTC m=+0.120424192 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 mariadb, release=1, com.redhat.component=openstack-mariadb-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=clustercheck, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, distribution-scope=public, 
io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, build-date=2025-07-21T12:58:45, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:31:12 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:31:12 standalone.localdomain podman[256975]: 2025-10-13 14:31:12.193225399 +0000 UTC m=+0.174697229 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, release=1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:31:12 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:31:12 standalone.localdomain sudo[256936]: pam_unix(sudo:session): session closed for user root
Oct 13 14:31:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:31:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:31:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:31:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:31:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:31:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:31:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:31:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:31:12 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev ba80a8d0-01f5-4961-ab1a-ee5c4564da1e (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:31:12 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev ba80a8d0-01f5-4961-ab1a-ee5c4564da1e (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:31:12 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event ba80a8d0-01f5-4961-ab1a-ee5c4564da1e (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:31:12 standalone.localdomain sudo[257076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:31:12 standalone.localdomain sudo[257076]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:31:12 standalone.localdomain sudo[257076]: pam_unix(sudo:session): session closed for user root
Oct 13 14:31:12 standalone.localdomain ceph-mon[29756]: pgmap v1700: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:12 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:31:12 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:31:12 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:31:12 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:31:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1701: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:14 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:31:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:31:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:31:15 standalone.localdomain ceph-mon[29756]: pgmap v1701: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:15 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:31:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1702: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:31:17 standalone.localdomain ceph-mon[29756]: pgmap v1702: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1703: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:19 standalone.localdomain ceph-mon[29756]: pgmap v1703: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:19 standalone.localdomain sshd[257403]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:31:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1704: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:20 standalone.localdomain sshd[257403]: Received disconnect from 193.46.255.244 port 45300:11:  [preauth]
Oct 13 14:31:20 standalone.localdomain sshd[257403]: Disconnected from authenticating user root 193.46.255.244 port 45300 [preauth]
Oct 13 14:31:21 standalone.localdomain ceph-mon[29756]: pgmap v1704: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:31:21 standalone.localdomain runuser[257433]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:31:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1705: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:22 standalone.localdomain runuser[257433]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:31:22 standalone.localdomain runuser[257494]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:31:22 standalone.localdomain ceph-mon[29756]: pgmap v1705: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:31:23
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', '.mgr', 'backups', 'vms', 'images', 'manila_data', 'volumes']
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:31:23 standalone.localdomain runuser[257494]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:31:23 standalone.localdomain runuser[257556]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:31:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:31:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:31:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:31:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:31:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:31:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:31:23 standalone.localdomain podman[257606]: 2025-10-13 14:31:23.813287025 +0000 UTC m=+0.073100200 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, version=17.1.9, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-api, 
description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, com.redhat.component=openstack-heat-api-container, container_name=heat_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, build-date=2025-07-21T15:56:26)
Oct 13 14:31:23 standalone.localdomain podman[257606]: 2025-10-13 14:31:23.847890134 +0000 UTC m=+0.107703319 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, com.redhat.component=openstack-heat-api-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, vcs-type=git, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:31:23 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:31:23 standalone.localdomain podman[257605]: 2025-10-13 14:31:23.918684762 +0000 UTC m=+0.180314204 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.buildah.version=1.33.12, container_name=memcached, batch=17.1_20250721.1, config_id=tripleo_step1, com.redhat.component=openstack-memcached-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:43, name=rhosp17/openstack-memcached, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, 
version=17.1.9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe)
Oct 13 14:31:24 standalone.localdomain runuser[257556]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:31:24 standalone.localdomain podman[257608]: 2025-10-13 14:31:24.019398854 +0000 UTC m=+0.273508613 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, container_name=logrotate_crond, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, 
managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public)
Oct 13 14:31:24 standalone.localdomain podman[257607]: 2025-10-13 14:31:23.972310409 +0000 UTC m=+0.229154522 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-heat-api, distribution-scope=public, build-date=2025-07-21T15:56:26, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, release=1, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, 
Inc., container_name=heat_api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9)
Oct 13 14:31:24 standalone.localdomain podman[257604]: 2025-10-13 14:31:24.03057476 +0000 UTC m=+0.288355992 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, name=rhosp17/openstack-heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4, container_name=heat_engine, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-engine-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, io.k8s.description=Red 
Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, build-date=2025-07-21T15:44:11, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, version=17.1.9)
Oct 13 14:31:24 standalone.localdomain podman[257604]: 2025-10-13 14:31:24.045332146 +0000 UTC m=+0.303113368 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, version=17.1.9, config_id=tripleo_step4, release=1, build-date=2025-07-21T15:44:11, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-heat-engine, container_name=heat_engine, com.redhat.component=openstack-heat-engine-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1)
Oct 13 14:31:24 standalone.localdomain podman[257607]: 2025-10-13 14:31:24.053917531 +0000 UTC m=+0.310761634 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, name=rhosp17/openstack-heat-api, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, io.buildah.version=1.33.12, config_id=tripleo_step4, container_name=heat_api, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, managed_by=tripleo_ansible, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-container, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:31:24 standalone.localdomain podman[257603]: 2025-10-13 14:31:23.950186055 +0000 UTC m=+0.211592970 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-api-cfn, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-heat-api-cfn-container, config_id=tripleo_step4, container_name=heat_api_cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, build-date=2025-07-21T14:49:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, distribution-scope=public, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:31:24 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:31:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1706: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:24 standalone.localdomain podman[257608]: 2025-10-13 14:31:24.077019364 +0000 UTC m=+0.331129123 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, 
name=rhosp17/openstack-cron, release=1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 14:31:24 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:31:24 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:31:24 standalone.localdomain podman[257605]: 2025-10-13 14:31:24.099235321 +0000 UTC m=+0.360864783 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vcs-type=git, container_name=memcached, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, build-date=2025-07-21T12:58:43, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, com.redhat.component=openstack-memcached-container, config_id=tripleo_step1, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible)
Oct 13 14:31:24 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:31:24 standalone.localdomain podman[257603]: 2025-10-13 14:31:24.136970668 +0000 UTC m=+0.398377603 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, batch=17.1_20250721.1, vcs-type=git, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, architecture=x86_64, build-date=2025-07-21T14:49:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cfn, config_id=tripleo_step4, 
name=rhosp17/openstack-heat-api-cfn, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-heat-api-cfn-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:31:24 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:31:25 standalone.localdomain ceph-mon[29756]: pgmap v1706: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1707: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:31:27 standalone.localdomain ceph-mon[29756]: pgmap v1707: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1708: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:31:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:31:29 standalone.localdomain ceph-mon[29756]: pgmap v1708: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1709: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:30 standalone.localdomain ceph-mon[29756]: pgmap v1709: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:31:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:31:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:31:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:31:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:31:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:31:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:31:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:31:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:31:30 standalone.localdomain systemd[1]: tmp-crun.HeDmvt.mount: Deactivated successfully.
Oct 13 14:31:30 standalone.localdomain podman[258070]: 2025-10-13 14:31:30.853543378 +0000 UTC m=+0.106201792 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, 
container_name=swift_proxy, io.openshift.expose-services=, com.redhat.component=openstack-swift-proxy-server-container, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, managed_by=tripleo_ansible)
Oct 13 14:31:30 standalone.localdomain podman[258082]: 2025-10-13 14:31:30.916791283 +0000 UTC m=+0.154546296 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:31:30 standalone.localdomain podman[258068]: 2025-10-13 14:31:30.871594887 +0000 UTC m=+0.136768368 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, release=1, tcib_managed=true, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 14:31:30 standalone.localdomain podman[258067]: 2025-10-13 14:31:30.993533554 +0000 UTC m=+0.255188336 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=glance_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, architecture=x86_64)
Oct 13 14:31:31 standalone.localdomain podman[258069]: 2025-10-13 14:31:31.002543212 +0000 UTC m=+0.261232253 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public)
Oct 13 14:31:31 standalone.localdomain podman[258089]: 2025-10-13 14:31:31.046592744 +0000 UTC m=+0.295509572 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, release=1, container_name=ovn_metadata_agent, version=17.1.9, build-date=2025-07-21T16:28:53)
Oct 13 14:31:31 standalone.localdomain podman[258070]: 2025-10-13 14:31:31.112226183 +0000 UTC m=+0.364884597 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.buildah.version=1.33.12, vcs-type=git, container_name=swift_proxy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, batch=17.1_20250721.1, release=1, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 14:31:31 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:31:31 standalone.localdomain podman[258067]: 2025-10-13 14:31:31.183807954 +0000 UTC m=+0.445462726 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, architecture=x86_64, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, container_name=glance_api_cron, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:31:31 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:31:31 standalone.localdomain podman[258079]: 2025-10-13 14:31:30.964478487 +0000 UTC m=+0.210736603 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=glance_api_internal, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, summary=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']})
Oct 13 14:31:31 standalone.localdomain podman[258096]: 2025-10-13 14:31:30.970717849 +0000 UTC m=+0.205916004 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, container_name=swift_account_server, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, build-date=2025-07-21T16:11:22)
Oct 13 14:31:31 standalone.localdomain podman[258068]: 2025-10-13 14:31:31.237840334 +0000 UTC m=+0.503013785 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, distribution-scope=public, build-date=2025-07-21T14:56:28, container_name=swift_object_server, com.redhat.component=openstack-swift-object-container, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64)
Oct 13 14:31:31 standalone.localdomain podman[258079]: 2025-10-13 14:31:31.249761443 +0000 UTC m=+0.496019569 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, build-date=2025-07-21T13:58:20, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api, distribution-scope=public, release=1, batch=17.1_20250721.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, container_name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:31:31 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:31:31 standalone.localdomain podman[258096]: 2025-10-13 14:31:31.270215244 +0000 UTC m=+0.505413249 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, config_id=tripleo_step4, container_name=swift_account_server, version=17.1.9, name=rhosp17/openstack-swift-account, com.redhat.component=openstack-swift-account-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public)
Oct 13 14:31:31 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:31:31 standalone.localdomain podman[258082]: 2025-10-13 14:31:31.279858152 +0000 UTC m=+0.517613165 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, container_name=swift_container_server, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, io.openshift.expose-services=, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:31:31 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:31:31 standalone.localdomain podman[258090]: 2025-10-13 14:31:31.328396823 +0000 UTC m=+0.573030719 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team)
Oct 13 14:31:31 standalone.localdomain podman[258069]: 2025-10-13 14:31:31.34289292 +0000 UTC m=+0.601581951 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., container_name=nova_migration_target, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, 
com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git)
Oct 13 14:31:31 standalone.localdomain podman[258090]: 2025-10-13 14:31:31.353852039 +0000 UTC m=+0.598485905 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.9, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true)
Oct 13 14:31:31 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:31:31 standalone.localdomain podman[258090]: unhealthy
Oct 13 14:31:31 standalone.localdomain podman[258089]: 2025-10-13 14:31:31.431224079 +0000 UTC m=+0.680140887 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, 
io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=)
Oct 13 14:31:31 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:31:31 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:31:31 standalone.localdomain podman[258089]: unhealthy
Oct 13 14:31:31 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:31:31 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:31:31 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:31:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:31:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1710: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:32 standalone.localdomain ceph-mon[29756]: pgmap v1710: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1711: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:34 standalone.localdomain runuser[258313]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:31:35 standalone.localdomain ceph-mon[29756]: pgmap v1711: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:35 standalone.localdomain runuser[258313]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:31:35 standalone.localdomain runuser[258464]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:31:35 standalone.localdomain runuser[258464]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:31:35 standalone.localdomain runuser[258600]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:31:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1712: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:36 standalone.localdomain runuser[258600]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:31:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:31:36 standalone.localdomain systemd[1]: tmp-crun.Bmsb5A.mount: Deactivated successfully.
Oct 13 14:31:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:31:36 standalone.localdomain podman[258706]: 2025-10-13 14:31:36.791186959 +0000 UTC m=+0.063815363 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, architecture=x86_64, container_name=keystone_cron, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, vcs-type=git, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.component=openstack-keystone-container, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, 
io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, version=17.1.9, managed_by=tripleo_ansible)
Oct 13 14:31:36 standalone.localdomain podman[258706]: 2025-10-13 14:31:36.825886781 +0000 UTC m=+0.098515215 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, name=rhosp17/openstack-keystone, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-keystone-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., version=17.1.9, container_name=keystone_cron, managed_by=tripleo_ansible)
Oct 13 14:31:36 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:31:37 standalone.localdomain ceph-mon[29756]: pgmap v1712: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:31:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:31:37 standalone.localdomain systemd[1]: tmp-crun.YhVgDQ.mount: Deactivated successfully.
Oct 13 14:31:37 standalone.localdomain podman[258734]: 2025-10-13 14:31:37.827802932 +0000 UTC m=+0.095237294 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-dhcp-agent, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=neutron_dhcp, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:54, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, release=1, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.expose-services=)
Oct 13 14:31:37 standalone.localdomain podman[258735]: 2025-10-13 14:31:37.927254735 +0000 UTC m=+0.186864795 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, build-date=2025-07-21T16:03:34, name=rhosp17/openstack-neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-neutron-sriov-agent-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, release=1, container_name=neutron_sriov_agent, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:31:37 standalone.localdomain systemd[1]: tmp-crun.WNkTIN.mount: Deactivated successfully.
Oct 13 14:31:37 standalone.localdomain podman[258734]: 2025-10-13 14:31:37.957548911 +0000 UTC m=+0.224983273 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-dhcp-agent, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:54, release=1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=neutron_dhcp, vcs-type=git, config_id=tripleo_step4)
Oct 13 14:31:37 standalone.localdomain podman[258735]: 2025-10-13 14:31:37.978148957 +0000 UTC m=+0.237759087 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, distribution-scope=public, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.openshift.expose-services=, name=rhosp17/openstack-neutron-sriov-agent, build-date=2025-07-21T16:03:34, com.redhat.component=openstack-neutron-sriov-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 14:31:37 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:31:38 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:31:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1713: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:39 standalone.localdomain ceph-mon[29756]: pgmap v1713: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1714: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:41 standalone.localdomain ceph-mon[29756]: pgmap v1714: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:31:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1715: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:31:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:31:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:31:42 standalone.localdomain podman[258875]: 2025-10-13 14:31:42.793783407 +0000 UTC m=+0.063556395 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-07-21T13:27:15, vcs-type=git, name=rhosp17/openstack-iscsid, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, tcib_managed=true, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:31:42 standalone.localdomain ceph-mon[29756]: pgmap v1715: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:42 standalone.localdomain podman[258875]: 2025-10-13 14:31:42.826618702 +0000 UTC m=+0.096391680 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, name=rhosp17/openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red 
Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12)
Oct 13 14:31:42 standalone.localdomain systemd[1]: tmp-crun.nSynVH.mount: Deactivated successfully.
Oct 13 14:31:42 standalone.localdomain podman[258876]: 2025-10-13 14:31:42.848977613 +0000 UTC m=+0.114938063 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20250721.1, container_name=nova_compute, io.buildah.version=1.33.12, config_id=tripleo_step5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Oct 13 14:31:42 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:31:42 standalone.localdomain systemd[1]: tmp-crun.Bag9br.mount: Deactivated successfully.
Oct 13 14:31:42 standalone.localdomain podman[258877]: 2025-10-13 14:31:42.892699384 +0000 UTC m=+0.157155957 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, container_name=clustercheck, description=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack 
Platform 17.1 mariadb, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 14:31:42 standalone.localdomain podman[258877]: 2025-10-13 14:31:42.934778754 +0000 UTC m=+0.199235317 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, release=1, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, vendor=Red Hat, Inc., container_name=clustercheck, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp 
openstack osp-17.1, build-date=2025-07-21T12:58:45, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 14:31:42 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:31:42 standalone.localdomain podman[258876]: 2025-10-13 14:31:42.996248794 +0000 UTC m=+0.262209244 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:31:43 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:31:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1716: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:45 standalone.localdomain ceph-mon[29756]: pgmap v1716: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1717: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:31:47 standalone.localdomain ceph-mon[29756]: pgmap v1717: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:47 standalone.localdomain runuser[259201]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:31:47 standalone.localdomain runuser[259201]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:31:47 standalone.localdomain runuser[259270]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:31:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1718: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:48 standalone.localdomain ceph-mon[29756]: pgmap v1718: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:48 standalone.localdomain runuser[259270]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:31:48 standalone.localdomain runuser[259332]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:31:49 standalone.localdomain runuser[259332]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:31:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1719: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:50 standalone.localdomain ceph-mon[29756]: pgmap v1719: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:31:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1720: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:52 standalone.localdomain ceph-mon[29756]: pgmap v1720: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:31:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:31:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:31:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:31:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:31:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:31:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1721: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:54 standalone.localdomain ceph-mon[29756]: pgmap v1721: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:31:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:31:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:31:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:31:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:31:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:31:54 standalone.localdomain podman[259495]: 2025-10-13 14:31:54.895190516 +0000 UTC m=+0.158692614 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T12:58:43, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, config_id=tripleo_step1, name=rhosp17/openstack-memcached, description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, release=1, container_name=memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.component=openstack-memcached-container, vendor=Red Hat, Inc.)
Oct 13 14:31:55 standalone.localdomain podman[259493]: 2025-10-13 14:31:55.044079367 +0000 UTC m=+0.307751651 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, com.redhat.component=openstack-heat-api-cfn-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-api-cfn, vcs-type=git, version=17.1.9, build-date=2025-07-21T14:49:55, container_name=heat_api_cfn, config_id=tripleo_step4, 
vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12)
Oct 13 14:31:55 standalone.localdomain systemd[1]: tmp-crun.OXyMXs.mount: Deactivated successfully.
Oct 13 14:31:55 standalone.localdomain podman[259520]: 2025-10-13 14:31:55.202977198 +0000 UTC m=+0.449178871 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., release=1, container_name=logrotate_crond, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vcs-type=git, version=17.1.9, distribution-scope=public, build-date=2025-07-21T13:07:52, batch=17.1_20250721.1, io.openshift.expose-services=, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team)
Oct 13 14:31:55 standalone.localdomain podman[259493]: 2025-10-13 14:31:55.220304303 +0000 UTC m=+0.483976607 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, release=1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T14:49:55, distribution-scope=public, version=17.1.9, container_name=heat_api_cfn, io.openshift.tags=rhosp osp 
openstack osp-17.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4)
Oct 13 14:31:55 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:31:55 standalone.localdomain podman[259520]: 2025-10-13 14:31:55.445530403 +0000 UTC m=+0.691732096 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, release=1, tcib_managed=true, container_name=logrotate_crond, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.9, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public)
Oct 13 14:31:55 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:31:55 standalone.localdomain podman[259494]: 2025-10-13 14:31:55.594860658 +0000 UTC m=+0.858818941 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, release=1, vcs-type=git, build-date=2025-07-21T15:44:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-engine, tcib_managed=true, container_name=heat_engine, batch=17.1_20250721.1, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, 
version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, summary=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container)
Oct 13 14:31:55 standalone.localdomain podman[259494]: 2025-10-13 14:31:55.774202359 +0000 UTC m=+1.038160572 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, release=1, tcib_managed=true, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-engine, container_name=heat_engine, vcs-type=git, build-date=2025-07-21T15:44:11, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, com.redhat.component=openstack-heat-engine-container, version=17.1.9)
Oct 13 14:31:55 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:31:55 standalone.localdomain podman[259495]: 2025-10-13 14:31:55.871502557 +0000 UTC m=+1.135004635 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T12:58:43, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.component=openstack-memcached-container, name=rhosp17/openstack-memcached, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step1, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, container_name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 
memcached, io.openshift.expose-services=, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1)
Oct 13 14:31:56 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:31:56 standalone.localdomain podman[259496]: 2025-10-13 14:31:55.797678185 +0000 UTC m=+1.058094267 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, name=rhosp17/openstack-heat-api, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1)
Oct 13 14:31:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1722: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:56 standalone.localdomain podman[259496]: 2025-10-13 14:31:56.12110488 +0000 UTC m=+1.381520932 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-heat-api, com.redhat.component=openstack-heat-api-container, version=17.1.9, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:31:56 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:31:56 standalone.localdomain ceph-mon[29756]: pgmap v1722: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:56 standalone.localdomain podman[259506]: 2025-10-13 14:31:56.362572311 +0000 UTC m=+1.610982363 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, release=1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, container_name=heat_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, 
io.openshift.expose-services=, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., build-date=2025-07-21T15:56:26, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:31:56 standalone.localdomain podman[259506]: 2025-10-13 14:31:56.443937286 +0000 UTC m=+1.692347318 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, 
vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, container_name=heat_api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:31:56 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:31:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:31:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1723: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: pgmap v1723: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.320165) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365918320249, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 2104, "num_deletes": 251, "total_data_size": 1986604, "memory_usage": 2029200, "flush_reason": "Manual Compaction"}
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365918382608, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 1927421, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39125, "largest_seqno": 41228, "table_properties": {"data_size": 1919189, "index_size": 4999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16738, "raw_average_key_size": 19, "raw_value_size": 1902530, "raw_average_value_size": 2246, "num_data_blocks": 227, "num_entries": 847, "num_filter_entries": 847, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760365717, "oldest_key_time": 1760365717, "file_creation_time": 1760365918, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 62459 microseconds, and 3828 cpu microseconds.
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.382650) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 1927421 bytes OK
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.382677) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.391245) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.391270) EVENT_LOG_v1 {"time_micros": 1760365918391264, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.391287) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1977649, prev total WAL file size 1977649, number of live WAL files 2.
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.392935) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(1882KB)], [86(4776KB)]
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365918392961, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 6818630, "oldest_snapshot_seqno": -1}
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 4694 keys, 5787547 bytes, temperature: kUnknown
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365918443115, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 5787547, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5756864, "index_size": 17850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 117133, "raw_average_key_size": 24, "raw_value_size": 5672496, "raw_average_value_size": 1208, "num_data_blocks": 742, "num_entries": 4694, "num_filter_entries": 4694, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760365918, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.443323) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 5787547 bytes
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.457285) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.8 rd, 115.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 4.7 +0.0 blob) out(5.5 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 5212, records dropped: 518 output_compression: NoCompression
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.457308) EVENT_LOG_v1 {"time_micros": 1760365918457298, "job": 50, "event": "compaction_finished", "compaction_time_micros": 50229, "compaction_time_cpu_micros": 10582, "output_level": 6, "num_output_files": 1, "total_output_size": 5787547, "num_input_records": 5212, "num_output_records": 4694, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365918457572, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760365918458004, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.392853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.458078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.458084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.458087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.458090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:31:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:31:58.458093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:31:59 standalone.localdomain runuser[259967]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:32:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1724: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:00 standalone.localdomain ceph-mon[29756]: pgmap v1724: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:00 standalone.localdomain runuser[259967]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:32:00 standalone.localdomain runuser[260036]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:32:01 standalone.localdomain runuser[260036]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:32:01 standalone.localdomain runuser[260090]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:32:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:32:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:32:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:32:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:32:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:32:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:32:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:32:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:32:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:32:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:32:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1725: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:02 standalone.localdomain podman[260167]: 2025-10-13 14:32:02.03571185 +0000 UTC m=+0.282406478 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1, container_name=swift_container_server, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:32:02 standalone.localdomain runuser[260090]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:32:02 standalone.localdomain podman[260146]: 2025-10-13 14:32:02.098979635 +0000 UTC m=+0.355598409 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, tcib_managed=true, container_name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, release=1, com.redhat.component=openstack-swift-object-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12)
Oct 13 14:32:02 standalone.localdomain podman[260167]: 2025-10-13 14:32:02.26805585 +0000 UTC m=+0.514750458 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, vcs-type=git, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, container_name=swift_container_server, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:32:02 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:32:02 standalone.localdomain podman[260146]: 2025-10-13 14:32:02.583011192 +0000 UTC m=+0.839629986 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1, build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, batch=17.1_20250721.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_object_server, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:32:02 standalone.localdomain podman[260145]: 2025-10-13 14:32:02.259921418 +0000 UTC m=+0.516703297 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, container_name=glance_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, com.redhat.component=openstack-glance-api-container, release=1)
Oct 13 14:32:02 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:32:02 standalone.localdomain podman[260172]: 2025-10-13 14:32:02.857780933 +0000 UTC m=+1.100003302 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, managed_by=tripleo_ansible)
Oct 13 14:32:02 standalone.localdomain podman[260192]: 2025-10-13 14:32:02.471149386 +0000 UTC m=+0.699713853 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, version=17.1.9, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, release=1, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:32:02 standalone.localdomain podman[260145]: 2025-10-13 14:32:02.96187727 +0000 UTC m=+1.218659209 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, release=1, architecture=x86_64, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, container_name=glance_api_cron, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 14:32:03 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:32:03 standalone.localdomain podman[260153]: 2025-10-13 14:32:02.407436207 +0000 UTC m=+0.652395171 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, architecture=x86_64, build-date=2025-07-21T13:58:20, io.buildah.version=1.33.12, release=1, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git)
Oct 13 14:32:03 standalone.localdomain podman[260147]: 2025-10-13 14:32:02.558591988 +0000 UTC m=+0.813800239 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:32:03 standalone.localdomain ceph-mon[29756]: pgmap v1725: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:03 standalone.localdomain podman[260186]: 2025-10-13 14:32:02.726073783 +0000 UTC m=+0.959761570 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1)
Oct 13 14:32:03 standalone.localdomain podman[260192]: 2025-10-13 14:32:03.131309475 +0000 UTC m=+1.359874002 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, com.redhat.component=openstack-swift-account-container, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12)
Oct 13 14:32:03 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:32:03 standalone.localdomain podman[260148]: 2025-10-13 14:32:03.230093997 +0000 UTC m=+1.477455255 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., release=1, distribution-scope=public, container_name=swift_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, config_id=tripleo_step4)
Oct 13 14:32:03 standalone.localdomain podman[260153]: 2025-10-13 14:32:03.245533035 +0000 UTC m=+1.490492009 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, summary=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:32:03 standalone.localdomain podman[260147]: 2025-10-13 14:32:03.248517067 +0000 UTC m=+1.503725358 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, architecture=x86_64, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:32:03 standalone.localdomain podman[260172]: 2025-10-13 14:32:03.248085664 +0000 UTC m=+1.490308053 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, vendor=Red Hat, Inc.)
Oct 13 14:32:03 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:32:03 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:32:03 standalone.localdomain podman[260186]: 2025-10-13 14:32:03.406403196 +0000 UTC m=+1.640091002 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container)
Oct 13 14:32:03 standalone.localdomain podman[260186]: unhealthy
Oct 13 14:32:03 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:32:03 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:32:03 standalone.localdomain podman[260148]: 2025-10-13 14:32:03.451882492 +0000 UTC m=+1.699243790 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, name=rhosp17/openstack-swift-proxy-server, batch=17.1_20250721.1, container_name=swift_proxy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git)
Oct 13 14:32:03 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:32:03 standalone.localdomain podman[260172]: unhealthy
Oct 13 14:32:03 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:32:03 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:32:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1726: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:05 standalone.localdomain ceph-mon[29756]: pgmap v1726: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:05 standalone.localdomain podman[260438]: 2025-10-13 14:32:05.223994942 +0000 UTC m=+0.085266705 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, com.redhat.component=openstack-mariadb-container, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, build-date=2025-07-21T12:58:45, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, release=1, maintainer=OpenStack TripleO Team)
Oct 13 14:32:05 standalone.localdomain podman[260438]: 2025-10-13 14:32:05.253963969 +0000 UTC m=+0.115235712 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, version=17.1.9, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, build-date=2025-07-21T12:58:45, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:32:05 standalone.localdomain podman[260485]: 2025-10-13 14:32:05.565632809 +0000 UTC m=+0.108600037 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, tcib_managed=true, release=1, vcs-type=git, com.redhat.component=openstack-haproxy-container, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-haproxy, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:08:11, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9)
Oct 13 14:32:05 standalone.localdomain podman[260485]: 2025-10-13 14:32:05.598959609 +0000 UTC m=+0.141926807 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, release=1, vcs-type=git, com.redhat.component=openstack-haproxy-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-haproxy, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:08:11, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1)
Oct 13 14:32:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1727: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:06 standalone.localdomain ceph-mon[29756]: pgmap v1727: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:06 standalone.localdomain podman[260639]: 2025-10-13 14:32:06.485893677 +0000 UTC m=+0.209460713 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, build-date=2025-07-21T13:08:05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, batch=17.1_20250721.1, release=1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container)
Oct 13 14:32:06 standalone.localdomain podman[260639]: 2025-10-13 14:32:06.623440418 +0000 UTC m=+0.347007454 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T13:08:05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, name=rhosp17/openstack-rabbitmq, batch=17.1_20250721.1, version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public)
Oct 13 14:32:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:32:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:32:07 standalone.localdomain podman[260718]: 2025-10-13 14:32:07.618895389 +0000 UTC m=+0.066894768 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, config_id=tripleo_step3, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=keystone_cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, name=rhosp17/openstack-keystone, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T13:27:18, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:32:07 standalone.localdomain podman[260718]: 2025-10-13 14:32:07.630884179 +0000 UTC m=+0.078883578 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-keystone, container_name=keystone_cron, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 
keystone, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, version=17.1.9, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 14:32:07 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:32:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1728: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:08 standalone.localdomain ceph-mon[29756]: pgmap v1728: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:32:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:32:08 standalone.localdomain systemd[1]: tmp-crun.TBEJaY.mount: Deactivated successfully.
Oct 13 14:32:08 standalone.localdomain podman[260746]: 2025-10-13 14:32:08.856969967 +0000 UTC m=+0.122868358 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, build-date=2025-07-21T16:28:54, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=neutron_dhcp, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9)
Oct 13 14:32:08 standalone.localdomain podman[260747]: 2025-10-13 14:32:08.832730088 +0000 UTC m=+0.095742639 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, build-date=2025-07-21T16:03:34, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-sriov-agent-container, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_sriov_agent, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4)
Oct 13 14:32:08 standalone.localdomain podman[260747]: 2025-10-13 14:32:08.91305386 +0000 UTC m=+0.176066421 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, tcib_managed=true, release=1, com.redhat.component=openstack-neutron-sriov-agent-container, build-date=2025-07-21T16:03:34, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, container_name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, distribution-scope=public, maintainer=OpenStack TripleO Team)
Oct 13 14:32:08 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:32:08 standalone.localdomain podman[260746]: 2025-10-13 14:32:08.972298321 +0000 UTC m=+0.238196712 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.component=openstack-neutron-dhcp-agent-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T16:28:54, container_name=neutron_dhcp, name=rhosp17/openstack-neutron-dhcp-agent, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, tcib_managed=true)
Oct 13 14:32:09 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:32:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1729: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:11 standalone.localdomain ceph-mon[29756]: pgmap v1729: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:32:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1730: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:12 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:32:12 standalone.localdomain recover_tripleo_nova_virtqemud[260883]: 93291
Oct 13 14:32:12 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:32:12 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:32:12 standalone.localdomain sudo[260884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:32:12 standalone.localdomain sudo[260884]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:32:12 standalone.localdomain sudo[260884]: pam_unix(sudo:session): session closed for user root
Oct 13 14:32:12 standalone.localdomain sudo[260899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:32:12 standalone.localdomain sudo[260899]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:32:12 standalone.localdomain runuser[260918]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:32:13 standalone.localdomain ceph-mon[29756]: pgmap v1730: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:13 standalone.localdomain sudo[260899]: pam_unix(sudo:session): session closed for user root
Oct 13 14:32:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:32:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:32:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:32:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:32:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:32:13 standalone.localdomain runuser[260918]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:32:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:32:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:32:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:32:13 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 6a9a7378-a99a-4331-9b46-7f53f0f24bb8 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:32:13 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 6a9a7378-a99a-4331-9b46-7f53f0f24bb8 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:32:13 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 6a9a7378-a99a-4331-9b46-7f53f0f24bb8 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:32:13 standalone.localdomain sudo[261020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:32:13 standalone.localdomain sudo[261020]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:32:13 standalone.localdomain sudo[261020]: pam_unix(sudo:session): session closed for user root
Oct 13 14:32:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:32:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:32:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:32:13 standalone.localdomain runuser[261066]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:32:13 standalone.localdomain podman[261036]: 2025-10-13 14:32:13.511612782 +0000 UTC m=+0.071154631 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=nova_compute, tcib_managed=true, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:32:13 standalone.localdomain podman[261036]: 2025-10-13 14:32:13.557597392 +0000 UTC m=+0.117139241 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-type=git, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 13 14:32:13 standalone.localdomain systemd[1]: tmp-crun.6hC4FT.mount: Deactivated successfully.
Oct 13 14:32:13 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:32:13 standalone.localdomain podman[261037]: 2025-10-13 14:32:13.575873928 +0000 UTC m=+0.133636181 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, build-date=2025-07-21T12:58:45, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=clustercheck, managed_by=tripleo_ansible, com.redhat.component=openstack-mariadb-container, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, summary=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9)
Oct 13 14:32:13 standalone.localdomain podman[261035]: 2025-10-13 14:32:13.610164247 +0000 UTC m=+0.169479309 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.9, com.redhat.component=openstack-iscsid-container, architecture=x86_64, config_id=tripleo_step3, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid)
Oct 13 14:32:13 standalone.localdomain podman[261035]: 2025-10-13 14:32:13.617762491 +0000 UTC m=+0.177077543 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:32:13 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:32:13 standalone.localdomain podman[261037]: 2025-10-13 14:32:13.673169153 +0000 UTC m=+0.230931386 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, tcib_managed=true, batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, build-date=2025-07-21T12:58:45, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=clustercheck, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.buildah.version=1.33.12, vcs-type=git, name=rhosp17/openstack-mariadb, version=17.1.9, com.redhat.component=openstack-mariadb-container, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, managed_by=tripleo_ansible)
Oct 13 14:32:13 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:32:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:32:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:32:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:32:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:32:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1731: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:14 standalone.localdomain runuser[261066]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:32:14 standalone.localdomain runuser[261178]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:32:14 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:32:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:32:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:32:14 standalone.localdomain runuser[261178]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:32:15 standalone.localdomain ceph-mon[29756]: pgmap v1731: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:15 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:32:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1732: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:32:17 standalone.localdomain ceph-mon[29756]: pgmap v1732: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1733: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:19 standalone.localdomain ceph-mon[29756]: pgmap v1733: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1734: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:20 standalone.localdomain ceph-mon[29756]: pgmap v1734: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:32:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1735: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:23 standalone.localdomain ceph-mon[29756]: pgmap v1735: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:32:23
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_data', 'vms', '.mgr', 'volumes', 'manila_metadata', 'backups', 'images']
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:32:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1736: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:25 standalone.localdomain ceph-mon[29756]: pgmap v1736: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:25 standalone.localdomain runuser[261629]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:32:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:32:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:32:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:32:25 standalone.localdomain podman[261756]: 2025-10-13 14:32:25.924596369 +0000 UTC m=+0.184285266 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api-cfn, build-date=2025-07-21T14:49:55, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-cfn-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, release=1, version=17.1.9, container_name=heat_api_cfn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, config_id=tripleo_step4, vcs-type=git)
Oct 13 14:32:25 standalone.localdomain podman[261757]: 2025-10-13 14:32:25.882300962 +0000 UTC m=+0.142163094 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-type=git, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible)
Oct 13 14:32:25 standalone.localdomain podman[261757]: 2025-10-13 14:32:25.968198086 +0000 UTC m=+0.228060198 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, vcs-type=git)
Oct 13 14:32:26 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:32:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1737: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:26 standalone.localdomain runuser[261629]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:32:26 standalone.localdomain podman[261756]: 2025-10-13 14:32:26.183162879 +0000 UTC m=+0.442851756 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, container_name=heat_api_cfn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-heat-api-cfn, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-heat-api-cfn-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, build-date=2025-07-21T14:49:55, config_id=tripleo_step4, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:32:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:32:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:32:26 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:32:26 standalone.localdomain runuser[261913]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:32:26 standalone.localdomain ceph-mon[29756]: pgmap v1737: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:26 standalone.localdomain podman[261887]: 2025-10-13 14:32:26.332314178 +0000 UTC m=+0.114997175 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_id=tripleo_step1, managed_by=tripleo_ansible, version=17.1.9, container_name=memcached, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, summary=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe)
Oct 13 14:32:26 standalone.localdomain podman[261787]: 2025-10-13 14:32:26.197860203 +0000 UTC m=+0.291123097 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., architecture=x86_64, container_name=heat_engine, description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, build-date=2025-07-21T15:44:11, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, release=1, config_id=tripleo_step4, tcib_managed=true, 
io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-heat-engine-container, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:32:26 standalone.localdomain podman[261887]: 2025-10-13 14:32:26.401582978 +0000 UTC m=+0.184266075 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, summary=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, tcib_managed=true, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, release=1, container_name=memcached, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, config_id=tripleo_step1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-memcached-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, build-date=2025-07-21T12:58:43)
Oct 13 14:32:26 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:32:26 standalone.localdomain podman[261896]: 2025-10-13 14:32:26.402236628 +0000 UTC m=+0.173907334 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api, distribution-scope=public, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, tcib_managed=true, release=1, container_name=heat_api_cron, summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc.)
Oct 13 14:32:26 standalone.localdomain podman[261787]: 2025-10-13 14:32:26.478415252 +0000 UTC m=+0.571678176 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, build-date=2025-07-21T15:44:11, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, com.redhat.component=openstack-heat-engine-container, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-engine, vendor=Red Hat, Inc., container_name=heat_engine, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, vcs-type=git)
Oct 13 14:32:26 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:32:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:32:26 standalone.localdomain podman[261896]: 2025-10-13 14:32:26.608245425 +0000 UTC m=+0.379916121 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cron, distribution-scope=public, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12, build-date=2025-07-21T15:56:26, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api)
Oct 13 14:32:26 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:32:26 standalone.localdomain podman[261979]: 2025-10-13 14:32:26.689042451 +0000 UTC m=+0.088794185 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:56:26, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, 
com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1)
Oct 13 14:32:26 standalone.localdomain podman[261979]: 2025-10-13 14:32:26.716923962 +0000 UTC m=+0.116675746 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, container_name=heat_api, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12, config_id=tripleo_step4)
Oct 13 14:32:26 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:32:26 standalone.localdomain runuser[261913]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:32:26 standalone.localdomain runuser[262056]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:32:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:32:27 standalone.localdomain runuser[262056]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1738: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:32:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:32:29 standalone.localdomain ceph-mon[29756]: pgmap v1738: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1739: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:31 standalone.localdomain ceph-mon[29756]: pgmap v1739: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:32:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1740: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:32:32 standalone.localdomain systemd[1]: tmp-crun.SWduZ2.mount: Deactivated successfully.
Oct 13 14:32:32 standalone.localdomain podman[262215]: 2025-10-13 14:32:32.824271609 +0000 UTC m=+0.093119199 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, version=17.1.9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, build-date=2025-07-21T15:54:32, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:32:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:32:32 standalone.localdomain podman[262238]: 2025-10-13 14:32:32.908680128 +0000 UTC m=+0.062816673 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, 
vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 14:32:33 standalone.localdomain podman[262215]: 2025-10-13 14:32:33.008857433 +0000 UTC m=+0.277705033 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, 
description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-container-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, container_name=swift_container_server)
Oct 13 14:32:33 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:32:33 standalone.localdomain ceph-mon[29756]: pgmap v1740: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:33 standalone.localdomain podman[262238]: 2025-10-13 14:32:33.098860555 +0000 UTC m=+0.252997070 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack 
Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 14:32:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:32:33 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:32:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:32:33 standalone.localdomain podman[262284]: 2025-10-13 14:32:33.336762585 +0000 UTC m=+0.077741082 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.component=openstack-swift-account-container, maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1)
Oct 13 14:32:33 standalone.localdomain podman[262272]: 2025-10-13 14:32:33.305640563 +0000 UTC m=+0.192206349 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-glance-api, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_cron, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, release=1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-glance-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1)
Oct 13 14:32:33 standalone.localdomain podman[262272]: 2025-10-13 14:32:33.390919099 +0000 UTC m=+0.277484875 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, architecture=x86_64, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']})
Oct 13 14:32:33 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:32:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:32:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:32:33 standalone.localdomain podman[262323]: 2025-10-13 14:32:33.51066579 +0000 UTC m=+0.085052840 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:32:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:32:33 standalone.localdomain podman[262324]: 2025-10-13 14:32:33.556133575 +0000 UTC m=+0.128257495 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, io.openshift.expose-services=, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, distribution-scope=public, io.buildah.version=1.33.12, container_name=glance_api_internal, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container)
Oct 13 14:32:33 standalone.localdomain podman[262284]: 2025-10-13 14:32:33.584962065 +0000 UTC m=+0.325940552 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, release=1, name=rhosp17/openstack-swift-account, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, 
architecture=x86_64, config_id=tripleo_step4, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 13 14:32:33 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:32:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:32:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:32:33 standalone.localdomain podman[262380]: 2025-10-13 14:32:33.682553431 +0000 UTC m=+0.064327549 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, architecture=x86_64, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, batch=17.1_20250721.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, container_name=swift_proxy, io.openshift.expose-services=, com.redhat.component=openstack-swift-proxy-server-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible)
Oct 13 14:32:33 standalone.localdomain podman[262360]: 2025-10-13 14:32:33.654117372 +0000 UTC m=+0.120450553 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-07-21T13:28:44, container_name=ovn_controller, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, release=1)
Oct 13 14:32:33 standalone.localdomain podman[262360]: 2025-10-13 14:32:33.745067812 +0000 UTC m=+0.211400993 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-07-21T13:28:44, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, container_name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.33.12)
Oct 13 14:32:33 standalone.localdomain podman[262360]: unhealthy
Oct 13 14:32:33 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:32:33 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:32:33 standalone.localdomain podman[262324]: 2025-10-13 14:32:33.77734615 +0000 UTC m=+0.349470040 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, name=rhosp17/openstack-glance-api, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T13:58:20, version=17.1.9, container_name=glance_api_internal, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64)
Oct 13 14:32:33 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:32:33 standalone.localdomain podman[262405]: 2025-10-13 14:32:33.755451744 +0000 UTC m=+0.070779688 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4)
Oct 13 14:32:33 standalone.localdomain podman[262405]: 2025-10-13 14:32:33.845019701 +0000 UTC m=+0.160347645 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, release=1)
Oct 13 14:32:33 standalone.localdomain podman[262405]: unhealthy
Oct 13 14:32:33 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:32:33 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:32:33 standalone.localdomain podman[262323]: 2025-10-13 14:32:33.884890213 +0000 UTC m=+0.459277283 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:32:33 standalone.localdomain podman[262380]: 2025-10-13 14:32:33.911906948 +0000 UTC m=+0.293681046 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, version=17.1.9, architecture=x86_64, 
summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1)
Oct 13 14:32:33 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:32:33 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:32:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1741: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:35 standalone.localdomain ceph-mon[29756]: pgmap v1741: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1742: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:36 standalone.localdomain ceph-mon[29756]: pgmap v1742: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:32:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:32:37 standalone.localdomain podman[262676]: 2025-10-13 14:32:37.826614988 +0000 UTC m=+0.089305421 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, com.redhat.component=openstack-keystone-container, release=1, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, name=rhosp17/openstack-keystone, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T13:27:18, vendor=Red Hat, Inc., vcs-ref=0693142a4093f932157b8019660e85aa608befc8, container_name=keystone_cron)
Oct 13 14:32:37 standalone.localdomain podman[262676]: 2025-10-13 14:32:37.83444612 +0000 UTC m=+0.097136553 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-keystone-container, name=rhosp17/openstack-keystone, version=17.1.9, container_name=keystone_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64, tcib_managed=true, vcs-type=git, release=1, vendor=Red Hat, Inc.)
Oct 13 14:32:37 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:32:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1743: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:38 standalone.localdomain runuser[262700]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:32:38 standalone.localdomain ceph-mon[29756]: pgmap v1743: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:38 standalone.localdomain haproxy[70940]: 172.17.0.5:53672 [13/Oct/2025:14:32:38.474] mysql mysql/standalone.internalapi.localdomain 1/0/1 183 -- 5/5/4/4/0 0/0
Oct 13 14:32:38 standalone.localdomain haproxy[70940]: 172.17.0.5:53688 [13/Oct/2025:14:32:38.474] mysql mysql/standalone.internalapi.localdomain 1/0/10 183 -- 4/4/3/3/0 0/0
Oct 13 14:32:38 standalone.localdomain runuser[262700]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:32:38 standalone.localdomain runuser[262769]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:32:39 standalone.localdomain runuser[262769]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:32:39 standalone.localdomain runuser[262831]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:32:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:32:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:32:39 standalone.localdomain systemd[1]: tmp-crun.Hu7aMO.mount: Deactivated successfully.
Oct 13 14:32:39 standalone.localdomain podman[262878]: 2025-10-13 14:32:39.809537393 +0000 UTC m=+0.078157467 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, build-date=2025-07-21T16:28:54, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-dhcp-agent, container_name=neutron_dhcp, release=1)
Oct 13 14:32:39 standalone.localdomain podman[262878]: 2025-10-13 14:32:39.861055924 +0000 UTC m=+0.129675998 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, name=rhosp17/openstack-neutron-dhcp-agent, container_name=neutron_dhcp, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, build-date=2025-07-21T16:28:54, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, io.buildah.version=1.33.12)
Oct 13 14:32:39 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:32:39 standalone.localdomain podman[262879]: 2025-10-13 14:32:39.903350872 +0000 UTC m=+0.166665591 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-neutron-sriov-agent-container, io.buildah.version=1.33.12, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=neutron_sriov_agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T16:03:34, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1)
Oct 13 14:32:39 standalone.localdomain podman[262879]: 2025-10-13 14:32:39.969119194 +0000 UTC m=+0.232433903 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:03:34, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, tcib_managed=true, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:32:39 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:32:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1744: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:40 standalone.localdomain runuser[262831]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:32:40 standalone.localdomain ceph-mon[29756]: pgmap v1744: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:32:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1745: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:43 standalone.localdomain ceph-mon[29756]: pgmap v1745: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:32:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:32:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:32:43 standalone.localdomain systemd[1]: tmp-crun.8GTdHb.mount: Deactivated successfully.
Oct 13 14:32:43 standalone.localdomain podman[263023]: 2025-10-13 14:32:43.831757085 +0000 UTC m=+0.087252227 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, container_name=clustercheck, managed_by=tripleo_ansible, vcs-type=git, release=1, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, version=17.1.9)
Oct 13 14:32:43 standalone.localdomain podman[263022]: 2025-10-13 14:32:43.919611599 +0000 UTC m=+0.177055092 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:32:43 standalone.localdomain podman[263023]: 2025-10-13 14:32:43.939482743 +0000 UTC m=+0.194977905 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, config_id=tripleo_step2, batch=17.1_20250721.1, io.buildah.version=1.33.12, distribution-scope=public, vcs-type=git, container_name=clustercheck, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, tcib_managed=true, managed_by=tripleo_ansible, 
maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, architecture=x86_64, com.redhat.component=openstack-mariadb-container)
Oct 13 14:32:43 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:32:44 standalone.localdomain podman[263021]: 2025-10-13 14:32:44.025074878 +0000 UTC m=+0.284184153 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, release=1, distribution-scope=public, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1)
Oct 13 14:32:44 standalone.localdomain podman[263022]: 2025-10-13 14:32:44.048514982 +0000 UTC m=+0.305958495 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, container_name=nova_compute, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, version=17.1.9, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:32:44 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:32:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1746: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:44 standalone.localdomain podman[263021]: 2025-10-13 14:32:44.12026248 +0000 UTC m=+0.379371765 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64)
Oct 13 14:32:44 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:32:44 standalone.localdomain ceph-mon[29756]: pgmap v1746: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1747: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:32:47 standalone.localdomain ceph-mon[29756]: pgmap v1747: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1748: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:48 standalone.localdomain ceph-mon[29756]: pgmap v1748: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1749: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:50 standalone.localdomain runuser[263421]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:32:51 standalone.localdomain ceph-mon[29756]: pgmap v1749: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:51 standalone.localdomain runuser[263421]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:32:51 standalone.localdomain runuser[263482]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:32:52 standalone.localdomain runuser[263482]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:32:52 standalone.localdomain runuser[263544]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:32:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1750: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:32:52 standalone.localdomain haproxy[70940]: 172.17.0.5:55552 [13/Oct/2025:14:32:52.250] mysql mysql/standalone.internalapi.localdomain 1/0/2 183 -- 4/4/3/3/0 0/0
Oct 13 14:32:52 standalone.localdomain haproxy[70940]: 172.17.0.5:55562 [13/Oct/2025:14:32:52.698] mysql mysql/standalone.internalapi.localdomain 1/0/1 183 -- 4/4/3/3/0 0/0
Oct 13 14:32:52 standalone.localdomain runuser[263544]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:32:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:32:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:32:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:32:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:32:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:32:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:32:53 standalone.localdomain ceph-mon[29756]: pgmap v1750: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1751: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:54 standalone.localdomain ceph-mon[29756]: pgmap v1751: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1752: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:32:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:32:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:32:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:32:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:32:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:32:56 standalone.localdomain podman[263810]: 2025-10-13 14:32:56.871703714 +0000 UTC m=+0.127991179 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, version=17.1.9, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, container_name=heat_api_cron, 
io.buildah.version=1.33.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api)
Oct 13 14:32:56 standalone.localdomain podman[263810]: 2025-10-13 14:32:56.880728199 +0000 UTC m=+0.137015654 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=heat_api_cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:56:26, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, 
version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, vendor=Red Hat, Inc., release=1, com.redhat.component=openstack-heat-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public)
Oct 13 14:32:56 standalone.localdomain podman[263805]: 2025-10-13 14:32:56.913388046 +0000 UTC m=+0.175439276 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-engine-container, tcib_managed=true, name=rhosp17/openstack-heat-engine, 
release=1, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, architecture=x86_64, distribution-scope=public, vcs-type=git)
Oct 13 14:32:56 standalone.localdomain podman[263805]: 2025-10-13 14:32:56.932791808 +0000 UTC m=+0.194843068 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:44:11, release=1, batch=17.1_20250721.1, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-engine, 
config_id=tripleo_step4, com.redhat.component=openstack-heat-engine-container, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3)
Oct 13 14:32:56 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:32:56 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:32:56 standalone.localdomain podman[263803]: 2025-10-13 14:32:56.838779818 +0000 UTC m=+0.098487727 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, build-date=2025-07-21T14:49:55, com.redhat.component=openstack-heat-api-cfn-container, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, container_name=heat_api_cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, architecture=x86_64, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99)
Oct 13 14:32:57 standalone.localdomain podman[263803]: 2025-10-13 14:32:57.026851269 +0000 UTC m=+0.286559148 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vcs-type=git, com.redhat.component=openstack-heat-api-cfn-container, version=17.1.9, config_id=tripleo_step4, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, build-date=2025-07-21T14:49:55, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, tcib_managed=true, container_name=heat_api_cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:32:57 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:32:57 standalone.localdomain podman[263833]: 2025-10-13 14:32:56.99708637 +0000 UTC m=+0.241699488 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, release=1, vcs-type=git, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:56:26, io.buildah.version=1.33.12, config_id=tripleo_step4, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, container_name=heat_api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=)
Oct 13 14:32:57 standalone.localdomain podman[263833]: 2025-10-13 14:32:57.083180399 +0000 UTC m=+0.327793517 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, com.redhat.component=openstack-heat-api-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T15:56:26, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, release=1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530)
Oct 13 14:32:57 standalone.localdomain podman[263822]: 2025-10-13 14:32:57.046623783 +0000 UTC m=+0.297821642 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 13 14:32:57 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:32:57 standalone.localdomain podman[263822]: 2025-10-13 14:32:57.127586494 +0000 UTC m=+0.378784323 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, vcs-type=git, release=1, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c)
Oct 13 14:32:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:32:57 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:32:57 standalone.localdomain podman[263809]: 2025-10-13 14:32:57.1723153 +0000 UTC m=+0.432784203 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, io.buildah.version=1.33.12, com.redhat.component=openstack-memcached-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, build-date=2025-07-21T12:58:43, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, release=1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step1, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team)
Oct 13 14:32:57 standalone.localdomain ceph-mon[29756]: pgmap v1752: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:57 standalone.localdomain podman[263809]: 2025-10-13 14:32:57.192423203 +0000 UTC m=+0.452892096 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=tripleo_step1, build-date=2025-07-21T12:58:43, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:32:57 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:32:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1753: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:58 standalone.localdomain ceph-mon[29756]: pgmap v1753: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:32:59 standalone.localdomain haproxy[70940]: 172.17.0.5:38182 [13/Oct/2025:14:32:59.417] mysql mysql/standalone.internalapi.localdomain 1/0/1 183 -- 4/4/3/3/0 0/0
Oct 13 14:33:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1754: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:00 standalone.localdomain ceph-mon[29756]: pgmap v1754: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:01 standalone.localdomain haproxy[70940]: 172.17.0.5:40278 [13/Oct/2025:14:33:01.559] mysql mysql/standalone.internalapi.localdomain 1/0/4 183 -- 4/4/3/3/0 0/0
Oct 13 14:33:01 standalone.localdomain haproxy[70940]: 172.17.0.5:40290 [13/Oct/2025:14:33:01.988] mysql mysql/standalone.internalapi.localdomain 1/0/2 183 -- 4/4/3/3/0 0/0
Oct 13 14:33:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1755: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:33:03 standalone.localdomain ceph-mon[29756]: pgmap v1755: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:03 standalone.localdomain runuser[264131]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:33:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:33:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:33:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:33:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:33:03 standalone.localdomain systemd[1]: tmp-crun.wtIpRc.mount: Deactivated successfully.
Oct 13 14:33:03 standalone.localdomain podman[264176]: 2025-10-13 14:33:03.809080424 +0000 UTC m=+0.081001504 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, build-date=2025-07-21T13:58:20, vendor=Red Hat, Inc., batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, com.redhat.component=openstack-glance-api-container, name=rhosp17/openstack-glance-api, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1, architecture=x86_64)
Oct 13 14:33:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:33:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:33:03 standalone.localdomain systemd[1]: tmp-crun.2Jsw3m.mount: Deactivated successfully.
Oct 13 14:33:03 standalone.localdomain podman[264176]: 2025-10-13 14:33:03.853544901 +0000 UTC m=+0.125465941 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, io.buildah.version=1.33.12, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-glance-api-container, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-07-21T13:58:20, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git)
Oct 13 14:33:03 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:33:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:33:03 standalone.localdomain podman[264177]: 2025-10-13 14:33:03.858631397 +0000 UTC m=+0.129938999 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, version=17.1.9, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T14:56:28, architecture=x86_64, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4)
Oct 13 14:33:03 standalone.localdomain podman[264226]: 2025-10-13 14:33:03.916830963 +0000 UTC m=+0.085827971 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, build-date=2025-07-21T13:58:20, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, container_name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, release=1, version=17.1.9, io.openshift.expose-services=)
Oct 13 14:33:03 standalone.localdomain podman[264227]: 2025-10-13 14:33:03.960046762 +0000 UTC m=+0.125931275 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:33:03 standalone.localdomain podman[264178]: 2025-10-13 14:33:03.912767109 +0000 UTC m=+0.180560783 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.component=openstack-swift-container-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T15:54:32, version=17.1.9)
Oct 13 14:33:03 standalone.localdomain runuser[264131]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:33:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:33:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:33:04 standalone.localdomain podman[264265]: 2025-10-13 14:33:04.030573925 +0000 UTC m=+0.140386977 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Oct 13 14:33:04 standalone.localdomain podman[264177]: 2025-10-13 14:33:04.057204738 +0000 UTC m=+0.328512360 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, release=1, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, container_name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, 
build-date=2025-07-21T14:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true)
Oct 13 14:33:04 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:33:04 standalone.localdomain podman[264311]: 2025-10-13 14:33:04.07266594 +0000 UTC m=+0.077788006 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, 
vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37)
Oct 13 14:33:04 standalone.localdomain podman[264265]: 2025-10-13 14:33:04.091593268 +0000 UTC m=+0.201406330 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, version=17.1.9, build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3)
Oct 13 14:33:04 standalone.localdomain podman[264265]: unhealthy
Oct 13 14:33:04 standalone.localdomain podman[264227]: 2025-10-13 14:33:04.102104129 +0000 UTC m=+0.267988662 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public)
Oct 13 14:33:04 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:33:04 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:33:04 standalone.localdomain podman[264227]: unhealthy
Oct 13 14:33:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1756: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:04 standalone.localdomain podman[264226]: 2025-10-13 14:33:04.117974913 +0000 UTC m=+0.286971911 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, io.buildah.version=1.33.12, release=1, container_name=glance_api_internal, com.redhat.component=openstack-glance-api-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api)
Oct 13 14:33:04 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:33:04 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:33:04 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:33:04 standalone.localdomain runuser[264380]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:33:04 standalone.localdomain podman[264178]: 2025-10-13 14:33:04.148072012 +0000 UTC m=+0.415865686 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, release=1, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32)
Oct 13 14:33:04 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:33:04 standalone.localdomain podman[264179]: 2025-10-13 14:33:04.109186614 +0000 UTC m=+0.375977957 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-swift-account-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, container_name=swift_account_server)
Oct 13 14:33:04 standalone.localdomain ceph-mon[29756]: pgmap v1756: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:04 standalone.localdomain podman[264317]: 2025-10-13 14:33:04.241231505 +0000 UTC m=+0.244120383 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.component=openstack-swift-proxy-server-container, release=1, 
vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy)
Oct 13 14:33:04 standalone.localdomain podman[264179]: 2025-10-13 14:33:04.315266046 +0000 UTC m=+0.582057389 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, architecture=x86_64, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:33:04 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:33:04 standalone.localdomain podman[264311]: 2025-10-13 14:33:04.425878332 +0000 UTC m=+0.431000418 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.)
Oct 13 14:33:04 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:33:04 standalone.localdomain podman[264317]: 2025-10-13 14:33:04.440815928 +0000 UTC m=+0.443704826 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, managed_by=tripleo_ansible, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, name=rhosp17/openstack-swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, container_name=swift_proxy, release=1, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, tcib_managed=true)
Oct 13 14:33:04 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:33:04 standalone.localdomain runuser[264380]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:33:04 standalone.localdomain runuser[264462]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:33:05 standalone.localdomain runuser[264462]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:33:05 standalone.localdomain podman[264554]: 2025-10-13 14:33:05.502561379 +0000 UTC m=+0.215868402 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.buildah.version=1.33.12, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, vcs-type=git, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, version=17.1.9)
Oct 13 14:33:05 standalone.localdomain podman[264590]: 2025-10-13 14:33:05.593823554 +0000 UTC m=+0.074800675 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., name=rhosp17/openstack-mariadb, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.buildah.version=1.33.12, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, io.openshift.expose-services=, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45)
Oct 13 14:33:05 standalone.localdomain podman[264554]: 2025-10-13 14:33:05.64017796 +0000 UTC m=+0.353484993 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, distribution-scope=public, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:33:05 standalone.localdomain podman[264605]: 2025-10-13 14:33:05.794998845 +0000 UTC m=+0.161485260 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, batch=17.1_20250721.1, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T13:08:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., name=rhosp17/openstack-haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-haproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, release=1, description=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:33:05 standalone.localdomain podman[264605]: 2025-10-13 14:33:05.865012373 +0000 UTC m=+0.231498828 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, version=17.1.9, name=rhosp17/openstack-haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, com.redhat.component=openstack-haproxy-container, description=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, build-date=2025-07-21T13:08:11, tcib_managed=true)
Oct 13 14:33:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1757: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:06 standalone.localdomain podman[264769]: 2025-10-13 14:33:06.784167891 +0000 UTC m=+0.053616258 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-rabbitmq-container, release=1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, build-date=2025-07-21T13:08:05)
Oct 13 14:33:06 standalone.localdomain podman[264769]: 2025-10-13 14:33:06.816269071 +0000 UTC m=+0.085717448 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-rabbitmq-container, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., release=1, name=rhosp17/openstack-rabbitmq, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 rabbitmq)
Oct 13 14:33:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:33:07 standalone.localdomain ceph-mon[29756]: pgmap v1757: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1758: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:08 standalone.localdomain ceph-mon[29756]: pgmap v1758: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:33:08 standalone.localdomain podman[264856]: 2025-10-13 14:33:08.858600025 +0000 UTC m=+0.126738971 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, container_name=keystone_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, name=rhosp17/openstack-keystone, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:18, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, architecture=x86_64)
Oct 13 14:33:08 standalone.localdomain podman[264856]: 2025-10-13 14:33:08.899663048 +0000 UTC m=+0.167801984 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, tcib_managed=true, build-date=2025-07-21T13:27:18, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, container_name=keystone_cron, distribution-scope=public, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:33:09 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:33:09 standalone.localdomain haproxy[70940]: 172.17.0.5:42940 [13/Oct/2025:14:33:09.326] mysql mysql/standalone.internalapi.localdomain 1/0/1 183 -- 4/4/3/3/0 0/0
Oct 13 14:33:09 standalone.localdomain haproxy[70940]: 172.17.0.5:42942 [13/Oct/2025:14:33:09.529] mysql mysql/standalone.internalapi.localdomain 1/0/2 183 -- 4/4/3/3/0 0/0
Oct 13 14:33:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1759: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:10 standalone.localdomain haproxy[70940]: 172.17.0.5:55828 [13/Oct/2025:14:33:10.181] mysql mysql/standalone.internalapi.localdomain 1/0/1 183 -- 5/5/4/4/0 0/0
Oct 13 14:33:10 standalone.localdomain haproxy[70940]: 172.17.0.5:55840 [13/Oct/2025:14:33:10.182] mysql mysql/standalone.internalapi.localdomain 1/0/2 183 -- 4/4/3/3/0 0/0
Oct 13 14:33:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:33:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:33:10 standalone.localdomain systemd[1]: tmp-crun.ffXhJr.mount: Deactivated successfully.
Oct 13 14:33:10 standalone.localdomain podman[264944]: 2025-10-13 14:33:10.802658889 +0000 UTC m=+0.069536704 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, container_name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, config_id=tripleo_step4, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, name=rhosp17/openstack-neutron-dhcp-agent, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-dhcp-agent-container, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, tcib_managed=true)
Oct 13 14:33:10 standalone.localdomain podman[264945]: 2025-10-13 14:33:10.854904143 +0000 UTC m=+0.120394986 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, container_name=neutron_sriov_agent, tcib_managed=true, release=1, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, 
io.buildah.version=1.33.12, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, com.redhat.component=openstack-neutron-sriov-agent-container, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4)
Oct 13 14:33:10 standalone.localdomain podman[264944]: 2025-10-13 14:33:10.876503343 +0000 UTC m=+0.143381168 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, version=17.1.9, container_name=neutron_dhcp, name=rhosp17/openstack-neutron-dhcp-agent, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T16:28:54, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, maintainer=OpenStack TripleO Team)
Oct 13 14:33:10 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:33:10 standalone.localdomain podman[264945]: 2025-10-13 14:33:10.899245167 +0000 UTC m=+0.164736010 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, batch=17.1_20250721.1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vcs-type=git, build-date=2025-07-21T16:03:34, config_id=tripleo_step4, name=rhosp17/openstack-neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/sys/class/net:/sys/class/net:rw']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:33:10 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:33:11 standalone.localdomain ceph-mon[29756]: pgmap v1759: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:11 standalone.localdomain systemd[1]: tmp-crun.6oNQqM.mount: Deactivated successfully.
Oct 13 14:33:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1760: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:33:13 standalone.localdomain ceph-mon[29756]: pgmap v1760: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:13 standalone.localdomain sudo[265016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:33:13 standalone.localdomain sudo[265016]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:33:13 standalone.localdomain sudo[265016]: pam_unix(sudo:session): session closed for user root
Oct 13 14:33:13 standalone.localdomain sudo[265031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:33:13 standalone.localdomain sudo[265031]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:33:14 standalone.localdomain sudo[265031]: pam_unix(sudo:session): session closed for user root
Oct 13 14:33:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1761: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:33:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:33:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:33:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:33:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:33:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:33:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:33:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:33:14 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 0bde9245-46d4-4ecd-996a-83a997a9b570 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:33:14 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 0bde9245-46d4-4ecd-996a-83a997a9b570 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:33:14 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 0bde9245-46d4-4ecd-996a-83a997a9b570 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:33:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:33:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:33:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:33:14 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:33:14 standalone.localdomain sudo[265088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:33:14 standalone.localdomain sudo[265088]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:33:14 standalone.localdomain sudo[265088]: pam_unix(sudo:session): session closed for user root
Oct 13 14:33:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:33:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:33:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:33:14 standalone.localdomain systemd[1]: tmp-crun.fMSIkn.mount: Deactivated successfully.
Oct 13 14:33:14 standalone.localdomain podman[265105]: 2025-10-13 14:33:14.300468502 +0000 UTC m=+0.063911522 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, distribution-scope=public, container_name=clustercheck, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T12:58:45, name=rhosp17/openstack-mariadb, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step2, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=)
Oct 13 14:33:14 standalone.localdomain podman[265103]: 2025-10-13 14:33:14.349672104 +0000 UTC m=+0.114029112 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, architecture=x86_64, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:33:14 standalone.localdomain podman[265103]: 2025-10-13 14:33:14.368618592 +0000 UTC m=+0.132975590 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, config_id=tripleo_step3, 
com.redhat.component=openstack-iscsid-container, container_name=iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:33:14 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:33:14 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:33:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:33:14 standalone.localdomain podman[265104]: 2025-10-13 14:33:14.404009143 +0000 UTC m=+0.168159665 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute)
Oct 13 14:33:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:33:14 standalone.localdomain podman[265105]: 2025-10-13 14:33:14.423187208 +0000 UTC m=+0.186630238 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, container_name=clustercheck, name=rhosp17/openstack-mariadb, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step2, version=17.1.9, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, batch=17.1_20250721.1, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']})
Oct 13 14:33:14 standalone.localdomain podman[265104]: 2025-10-13 14:33:14.430872023 +0000 UTC m=+0.195022545 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step5, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git)
Oct 13 14:33:14 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:33:14 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:33:15 standalone.localdomain ceph-mon[29756]: pgmap v1761: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:15 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:33:16 standalone.localdomain runuser[265319]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:33:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1762: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:16 standalone.localdomain ceph-mon[29756]: pgmap v1762: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:16 standalone.localdomain runuser[265319]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:33:16 standalone.localdomain runuser[265429]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:33:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:33:17 standalone.localdomain runuser[265429]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:33:17 standalone.localdomain runuser[265532]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:33:18 standalone.localdomain runuser[265532]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:33:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1763: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:19 standalone.localdomain ceph-mon[29756]: pgmap v1763: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1764: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:20 standalone.localdomain ceph-mon[29756]: pgmap v1764: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1765: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:33:23
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['images', 'manila_metadata', 'backups', 'volumes', 'vms', 'manila_data', '.mgr']
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:33:23 standalone.localdomain ceph-mon[29756]: pgmap v1765: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:33:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1766: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:25 standalone.localdomain ceph-mon[29756]: pgmap v1766: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1767: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:33:27 standalone.localdomain ceph-mon[29756]: pgmap v1767: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:33:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:33:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:33:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:33:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:33:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:33:27 standalone.localdomain systemd[1]: tmp-crun.nUXyau.mount: Deactivated successfully.
Oct 13 14:33:27 standalone.localdomain podman[265931]: 2025-10-13 14:33:27.765304197 +0000 UTC m=+0.124850183 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:44:11, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-engine-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_engine, batch=17.1_20250721.1, tcib_managed=true)
Oct 13 14:33:27 standalone.localdomain podman[265939]: 2025-10-13 14:33:27.811843057 +0000 UTC m=+0.158850381 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, container_name=heat_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., release=1, tcib_managed=true, config_id=tripleo_step4, version=17.1.9)
Oct 13 14:33:27 standalone.localdomain podman[265931]: 2025-10-13 14:33:27.83813829 +0000 UTC m=+0.197684286 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, com.redhat.component=openstack-heat-engine-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, release=1, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, 
version=17.1.9, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-engine, batch=17.1_20250721.1, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_engine, distribution-scope=public)
Oct 13 14:33:27 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:33:27 standalone.localdomain podman[265939]: 2025-10-13 14:33:27.885153154 +0000 UTC m=+0.232160528 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, release=1, architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, 
name=rhosp17/openstack-heat-api, config_id=tripleo_step4, io.buildah.version=1.33.12, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:33:27 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:33:27 standalone.localdomain podman[265937]: 2025-10-13 14:33:27.963524467 +0000 UTC m=+0.320666280 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, architecture=x86_64, container_name=heat_api_cron, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_id=tripleo_step4, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:33:28 standalone.localdomain podman[265930]: 2025-10-13 14:33:28.026219171 +0000 UTC m=+0.390141191 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., build-date=2025-07-21T14:49:55, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack 
osp-17.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, batch=17.1_20250721.1, container_name=heat_api_cfn, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-api-cfn-container, io.buildah.version=1.33.12, release=1)
Oct 13 14:33:28 standalone.localdomain podman[265932]: 2025-10-13 14:33:28.088935105 +0000 UTC m=+0.441999603 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, version=17.1.9, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, name=rhosp17/openstack-memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, build-date=2025-07-21T12:58:43, distribution-scope=public, container_name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, 
release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1)
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1768: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:28 standalone.localdomain podman[265946]: 2025-10-13 14:33:28.137537188 +0000 UTC m=+0.480297622 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-cron, container_name=logrotate_crond, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Oct 13 14:33:28 standalone.localdomain podman[265946]: 2025-10-13 14:33:28.15001224 +0000 UTC m=+0.492772674 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp 
openstack osp-17.1, name=rhosp17/openstack-cron, release=1, build-date=2025-07-21T13:07:52, architecture=x86_64, container_name=logrotate_crond, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public)
Oct 13 14:33:28 standalone.localdomain podman[265937]: 2025-10-13 14:33:28.155824597 +0000 UTC m=+0.512966450 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, release=1, version=17.1.9, build-date=2025-07-21T15:56:26, vcs-type=git, container_name=heat_api_cron, 
io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12, architecture=x86_64)
Oct 13 14:33:28 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:33:28 standalone.localdomain podman[265932]: 2025-10-13 14:33:28.18965915 +0000 UTC m=+0.542723668 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, version=17.1.9, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, com.redhat.component=openstack-memcached-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 memcached, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, build-date=2025-07-21T12:58:43, distribution-scope=public, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git)
Oct 13 14:33:28 standalone.localdomain ceph-mon[29756]: pgmap v1768: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:28 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:33:28 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:33:28 standalone.localdomain podman[265930]: 2025-10-13 14:33:28.311641724 +0000 UTC m=+0.675563724 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, container_name=heat_api_cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, batch=17.1_20250721.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api-cfn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T14:49:55, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-api-cfn-container)
Oct 13 14:33:28 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:33:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:33:28 standalone.localdomain runuser[266077]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:33:29 standalone.localdomain runuser[266077]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:33:29 standalone.localdomain runuser[266146]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:33:30 standalone.localdomain runuser[266146]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:33:30 standalone.localdomain runuser[266200]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:33:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1769: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:30 standalone.localdomain runuser[266200]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:33:31 standalone.localdomain ceph-mon[29756]: pgmap v1769: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1770: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:33:33 standalone.localdomain ceph-mon[29756]: pgmap v1770: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1771: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:34 standalone.localdomain ceph-mon[29756]: pgmap v1771: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:33:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:33:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:33:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:33:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:33:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:33:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:33:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:33:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:33:34 standalone.localdomain podman[266369]: 2025-10-13 14:33:34.820792231 +0000 UTC m=+0.076698312 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., container_name=ovn_metadata_agent)
Oct 13 14:33:34 standalone.localdomain podman[266369]: 2025-10-13 14:33:34.830430596 +0000 UTC m=+0.086336687 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4)
Oct 13 14:33:34 standalone.localdomain podman[266369]: unhealthy
Oct 13 14:33:34 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:33:34 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:33:34 standalone.localdomain podman[266355]: 2025-10-13 14:33:34.869913911 +0000 UTC m=+0.132344921 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, 
Inc., build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, tcib_managed=true, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9)
Oct 13 14:33:35 standalone.localdomain systemd[1]: tmp-crun.J3SVy5.mount: Deactivated successfully.
Oct 13 14:33:35 standalone.localdomain podman[266375]: 2025-10-13 14:33:35.076211468 +0000 UTC m=+0.325467946 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Oct 13 14:33:35 standalone.localdomain podman[266385]: 2025-10-13 14:33:35.037382963 +0000 UTC m=+0.289189639 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, release=1, version=17.1.9, architecture=x86_64, container_name=swift_account_server, config_id=tripleo_step4, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git)
Oct 13 14:33:35 standalone.localdomain podman[266385]: 2025-10-13 14:33:35.248219359 +0000 UTC m=+0.500026025 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, build-date=2025-07-21T16:11:22, batch=17.1_20250721.1, architecture=x86_64, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:33:35 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:33:35 standalone.localdomain podman[266356]: 2025-10-13 14:33:35.344654043 +0000 UTC m=+0.609753565 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, architecture=x86_64, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:33:35 standalone.localdomain podman[266355]: 2025-10-13 14:33:35.415684341 +0000 UTC m=+0.678115361 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, 
version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:33:35 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:33:35 standalone.localdomain podman[266353]: 2025-10-13 14:33:35.148609758 +0000 UTC m=+0.420670472 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, container_name=glance_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, release=1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']})
Oct 13 14:33:35 standalone.localdomain podman[266356]: 2025-10-13 14:33:35.56404961 +0000 UTC m=+0.829149052 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, container_name=swift_proxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true)
Oct 13 14:33:35 standalone.localdomain podman[266375]: 2025-10-13 14:33:35.620160532 +0000 UTC m=+0.869416870 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4)
Oct 13 14:33:35 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:33:35 standalone.localdomain podman[266375]: unhealthy
Oct 13 14:33:35 standalone.localdomain podman[266353]: 2025-10-13 14:33:35.631553731 +0000 UTC m=+0.903614465 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
glance-api, vcs-type=git, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-glance-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api)
Oct 13 14:33:35 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:33:35 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:33:35 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:33:35 standalone.localdomain podman[266368]: 2025-10-13 14:33:35.689250822 +0000 UTC m=+0.946698740 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, container_name=swift_container_server, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905)
Oct 13 14:33:35 standalone.localdomain systemd[1]: tmp-crun.rPHqWw.mount: Deactivated successfully.
Oct 13 14:33:35 standalone.localdomain podman[266354]: 2025-10-13 14:33:35.836854498 +0000 UTC m=+1.107232320 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, 
description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, distribution-scope=public, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 14:33:35 standalone.localdomain podman[266362]: 2025-10-13 14:33:35.811401051 +0000 UTC m=+1.072717697 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, config_id=tripleo_step4, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-glance-api-container, io.openshift.expose-services=, container_name=glance_api_internal, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:33:35 standalone.localdomain podman[266368]: 2025-10-13 14:33:35.89885649 +0000 UTC m=+1.156304428 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.expose-services=, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, name=rhosp17/openstack-swift-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.component=openstack-swift-container-container, description=Red Hat 
OpenStack Platform 17.1 swift-container, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32)
Oct 13 14:33:35 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:33:35 standalone.localdomain podman[266362]: 2025-10-13 14:33:35.992591381 +0000 UTC m=+1.253907967 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, architecture=x86_64, release=1, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, tcib_managed=true, io.openshift.expose-services=, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, container_name=glance_api_internal, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:33:36 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:33:36 standalone.localdomain podman[266354]: 2025-10-13 14:33:36.027114526 +0000 UTC m=+1.297492338 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, distribution-scope=public, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28)
Oct 13 14:33:36 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:33:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1772: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:36 standalone.localdomain ceph-mon[29756]: pgmap v1772: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:33:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1773: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:39 standalone.localdomain ceph-mon[29756]: pgmap v1773: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:33:39 standalone.localdomain podman[266812]: 2025-10-13 14:33:39.786893127 +0000 UTC m=+0.060864960 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-keystone, version=17.1.9, description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, container_name=keystone_cron, io.buildah.version=1.33.12, com.redhat.component=openstack-keystone-container, distribution-scope=public, batch=17.1_20250721.1)
Oct 13 14:33:39 standalone.localdomain podman[266812]: 2025-10-13 14:33:39.800947786 +0000 UTC m=+0.074919589 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, container_name=keystone_cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, release=1, com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T13:27:18)
Oct 13 14:33:39 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:33:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1774: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:40 standalone.localdomain ceph-mon[29756]: pgmap v1774: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:41 standalone.localdomain runuser[266896]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:33:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:33:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:33:41 standalone.localdomain systemd[1]: tmp-crun.WLxnc6.mount: Deactivated successfully.
Oct 13 14:33:41 standalone.localdomain podman[266950]: 2025-10-13 14:33:41.813207251 +0000 UTC m=+0.078494617 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, vendor=Red Hat, Inc., version=17.1.9, container_name=neutron_sriov_agent, release=1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, com.redhat.component=openstack-neutron-sriov-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-sriov-agent, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:03:34, distribution-scope=public)
Oct 13 14:33:41 standalone.localdomain podman[266950]: 2025-10-13 14:33:41.862813336 +0000 UTC m=+0.128100682 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, com.redhat.component=openstack-neutron-sriov-agent-container, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-neutron-sriov-agent, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, version=17.1.9, build-date=2025-07-21T16:03:34, distribution-scope=public, container_name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Oct 13 14:33:41 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:33:41 standalone.localdomain runuser[266896]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:33:41 standalone.localdomain podman[266949]: 2025-10-13 14:33:41.814505631 +0000 UTC m=+0.083966084 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, build-date=2025-07-21T16:28:54, name=rhosp17/openstack-neutron-dhcp-agent, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, container_name=neutron_dhcp, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-dhcp-agent-container)
Oct 13 14:33:41 standalone.localdomain podman[266949]: 2025-10-13 14:33:41.946949984 +0000 UTC m=+0.216410447 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, container_name=neutron_dhcp, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-dhcp-agent-container, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, tcib_managed=true, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, build-date=2025-07-21T16:28:54, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, release=1)
Oct 13 14:33:41 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:33:42 standalone.localdomain runuser[267009]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:33:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1775: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:33:42 standalone.localdomain runuser[267009]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:33:42 standalone.localdomain runuser[267071]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:33:43 standalone.localdomain ceph-mon[29756]: pgmap v1775: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:43 standalone.localdomain runuser[267071]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:33:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1776: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:44 standalone.localdomain ceph-mon[29756]: pgmap v1776: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:33:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:33:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:33:44 standalone.localdomain podman[267149]: 2025-10-13 14:33:44.830316431 +0000 UTC m=+0.082447938 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, config_id=tripleo_step2, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, release=1, container_name=clustercheck, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc.)
Oct 13 14:33:44 standalone.localdomain systemd[1]: tmp-crun.6iYuX9.mount: Deactivated successfully.
Oct 13 14:33:44 standalone.localdomain podman[267148]: 2025-10-13 14:33:44.812734804 +0000 UTC m=+0.070044559 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step5, version=17.1.9, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:33:44 standalone.localdomain podman[267149]: 2025-10-13 14:33:44.886937819 +0000 UTC m=+0.139069346 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, com.redhat.component=openstack-mariadb-container, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, container_name=clustercheck, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:45, config_id=tripleo_step2, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, release=1, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, name=rhosp17/openstack-mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:33:44 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:33:44 standalone.localdomain podman[267148]: 2025-10-13 14:33:44.907874969 +0000 UTC m=+0.165184714 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute)
Oct 13 14:33:44 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:33:44 standalone.localdomain podman[267147]: 2025-10-13 14:33:44.872157018 +0000 UTC m=+0.129070681 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, architecture=x86_64, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, container_name=iscsid, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:33:44 standalone.localdomain podman[267147]: 2025-10-13 14:33:44.956887804 +0000 UTC m=+0.213801447 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, container_name=iscsid, config_id=tripleo_step3, io.buildah.version=1.33.12)
Oct 13 14:33:44 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:33:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1777: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:47 standalone.localdomain account-server[114555]: 172.20.0.100 - - [13/Oct/2025:14:33:47 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txff55371d2b5a466898476-0068ed0dcb" "proxy-server 2" 0.0006 "-" 20 -
Oct 13 14:33:47 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txff55371d2b5a466898476-0068ed0dcb)
Oct 13 14:33:47 standalone.localdomain ceph-mon[29756]: pgmap v1777: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:33:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1778: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:48 standalone.localdomain ceph-mon[29756]: pgmap v1778: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1779: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:51 standalone.localdomain ceph-mon[29756]: pgmap v1779: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1780: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:33:53 standalone.localdomain ceph-mon[29756]: pgmap v1780: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:33:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:33:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:33:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:33:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:33:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:33:53 standalone.localdomain runuser[267564]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:33:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1781: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:54 standalone.localdomain ceph-mon[29756]: pgmap v1781: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:54 standalone.localdomain runuser[267564]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:33:54 standalone.localdomain runuser[267627]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:33:55 standalone.localdomain runuser[267627]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:33:55 standalone.localdomain runuser[267689]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:33:55 standalone.localdomain runuser[267689]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:33:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1782: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:57 standalone.localdomain ceph-mon[29756]: pgmap v1782: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:33:57 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:33:57 standalone.localdomain recover_tripleo_nova_virtqemud[267969]: 93291
Oct 13 14:33:57 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:33:57 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:33:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1783: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:58 standalone.localdomain ceph-mon[29756]: pgmap v1783: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:33:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:33:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:33:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:33:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:33:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:33:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:33:58 standalone.localdomain podman[267979]: 2025-10-13 14:33:58.839964795 +0000 UTC m=+0.090401020 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-heat-engine-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, container_name=heat_engine, name=rhosp17/openstack-heat-engine, tcib_managed=true, build-date=2025-07-21T15:44:11, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, description=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, version=17.1.9)
Oct 13 14:33:58 standalone.localdomain podman[267994]: 2025-10-13 14:33:58.894045785 +0000 UTC m=+0.133465094 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, container_name=heat_api, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, 
vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api)
Oct 13 14:33:58 standalone.localdomain podman[267980]: 2025-10-13 14:33:58.909216979 +0000 UTC m=+0.151629739 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, com.redhat.component=openstack-memcached-container, config_id=tripleo_step1, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, version=17.1.9, release=1, architecture=x86_64, name=rhosp17/openstack-memcached, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, managed_by=tripleo_ansible, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:43, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:33:58 standalone.localdomain podman[267996]: 2025-10-13 14:33:58.866270689 +0000 UTC m=+0.098237281 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, build-date=2025-07-21T13:07:52, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 
cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-cron, tcib_managed=true)
Oct 13 14:33:58 standalone.localdomain podman[267994]: 2025-10-13 14:33:58.925760404 +0000 UTC m=+0.165179733 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, name=rhosp17/openstack-heat-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, build-date=2025-07-21T15:56:26, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:33:58 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:33:58 standalone.localdomain podman[267979]: 2025-10-13 14:33:58.944752794 +0000 UTC m=+0.195189019 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, release=1, version=17.1.9, tcib_managed=true, container_name=heat_engine, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1)
Oct 13 14:33:58 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:33:58 standalone.localdomain podman[267996]: 2025-10-13 14:33:58.99671232 +0000 UTC m=+0.228678912 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, tcib_managed=true, container_name=logrotate_crond, config_id=tripleo_step4, name=rhosp17/openstack-cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Oct 13 14:33:59 standalone.localdomain podman[267978]: 2025-10-13 14:33:59.011590454 +0000 UTC m=+0.261737351 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, io.openshift.expose-services=, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=heat_api_cfn, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-cfn-container, name=rhosp17/openstack-heat-api-cfn, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, release=1)
Oct 13 14:33:59 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:33:59 standalone.localdomain podman[267981]: 2025-10-13 14:33:59.058870377 +0000 UTC m=+0.298400399 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, container_name=heat_api_cron, summary=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, release=1, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64)
Oct 13 14:33:59 standalone.localdomain podman[267981]: 2025-10-13 14:33:59.065626834 +0000 UTC m=+0.305156776 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, container_name=heat_api_cron, config_id=tripleo_step4, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:33:59 standalone.localdomain podman[267980]: 2025-10-13 14:33:59.230668442 +0000 UTC m=+0.473081152 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, build-date=2025-07-21T12:58:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-memcached-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1)
Oct 13 14:33:59 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:33:59 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:33:59 standalone.localdomain podman[267978]: 2025-10-13 14:33:59.243370049 +0000 UTC m=+0.493516916 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, container_name=heat_api_cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-heat-api-cfn-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, build-date=2025-07-21T14:49:55, config_id=tripleo_step4, managed_by=tripleo_ansible)
Oct 13 14:33:59 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:33:59 standalone.localdomain systemd[1]: tmp-crun.37Q9WG.mount: Deactivated successfully.
Oct 13 14:34:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1784: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:01 standalone.localdomain ceph-mon[29756]: pgmap v1784: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1785: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:34:03 standalone.localdomain ceph-mon[29756]: pgmap v1785: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1786: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:05 standalone.localdomain ceph-mon[29756]: pgmap v1786: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:34:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:34:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:34:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:34:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:34:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:34:05 standalone.localdomain podman[268300]: 2025-10-13 14:34:05.813193609 +0000 UTC m=+0.104230673 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, architecture=x86_64, name=rhosp17/openstack-mariadb, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, maintainer=OpenStack TripleO Team, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 14:34:05 standalone.localdomain podman[268311]: 2025-10-13 14:34:05.83419875 +0000 UTC m=+0.090317698 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_proxy, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, release=1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:34:05 standalone.localdomain podman[268322]: 2025-10-13 14:34:05.869494007 +0000 UTC m=+0.113099093 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, config_id=tripleo_step4, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, version=17.1.9)
Oct 13 14:34:05 standalone.localdomain podman[268322]: 2025-10-13 14:34:05.882080622 +0000 UTC m=+0.125685728 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:34:05 standalone.localdomain podman[268337]: 2025-10-13 14:34:05.883475644 +0000 UTC m=+0.125038667 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T16:11:22, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, name=rhosp17/openstack-swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, container_name=swift_account_server, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, release=1, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:34:05 standalone.localdomain podman[268380]: 2025-10-13 14:34:05.949860491 +0000 UTC m=+0.128930887 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, name=rhosp17/openstack-mariadb, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, batch=17.1_20250721.1, release=1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:34:05 standalone.localdomain podman[268310]: 2025-10-13 14:34:05.907457306 +0000 UTC m=+0.169156794 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, release=1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:34:05 standalone.localdomain podman[268306]: 2025-10-13 14:34:05.974801802 +0000 UTC m=+0.241317177 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, version=17.1.9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, container_name=glance_api_cron, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=)
Oct 13 14:34:05 standalone.localdomain podman[268306]: 2025-10-13 14:34:05.98097889 +0000 UTC m=+0.247494285 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, io.buildah.version=1.33.12, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.component=openstack-glance-api-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, version=17.1.9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, architecture=x86_64)
Oct 13 14:34:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:34:05 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:34:05 standalone.localdomain podman[268322]: unhealthy
Oct 13 14:34:06 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:34:06 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:34:06 standalone.localdomain podman[268311]: 2025-10-13 14:34:06.02188858 +0000 UTC m=+0.278007448 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-proxy-server-container, container_name=swift_proxy, architecture=x86_64, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, release=1)
Oct 13 14:34:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:34:06 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:34:06 standalone.localdomain podman[268300]: 2025-10-13 14:34:06.058726954 +0000 UTC m=+0.349764038 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, name=rhosp17/openstack-mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12)
Oct 13 14:34:06 standalone.localdomain podman[268433]: 2025-10-13 14:34:06.07006606 +0000 UTC m=+0.131866096 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:08:11, com.redhat.component=openstack-haproxy-container, distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, description=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, name=rhosp17/openstack-haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git)
Oct 13 14:34:06 standalone.localdomain podman[268316]: 2025-10-13 14:34:06.025333175 +0000 UTC m=+0.280056430 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, batch=17.1_20250721.1, container_name=ovn_metadata_agent)
Oct 13 14:34:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:34:06 standalone.localdomain podman[268454]: 2025-10-13 14:34:06.128643148 +0000 UTC m=+0.127026328 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, 
distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, container_name=swift_container_server, build-date=2025-07-21T15:54:32, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905)
Oct 13 14:34:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1787: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:06 standalone.localdomain podman[268471]: 2025-10-13 14:34:06.14672774 +0000 UTC m=+0.113644670 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, container_name=glance_api_internal, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.component=openstack-glance-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1)
Oct 13 14:34:06 standalone.localdomain podman[268433]: 2025-10-13 14:34:06.154815847 +0000 UTC m=+0.216615893 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, release=1, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:08:11, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-haproxy-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, summary=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Oct 13 14:34:06 standalone.localdomain podman[268337]: 2025-10-13 14:34:06.206598538 +0000 UTC m=+0.448161581 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, container_name=swift_account_server, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:34:06 standalone.localdomain ceph-mon[29756]: pgmap v1787: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:06 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:34:06 standalone.localdomain podman[268310]: 2025-10-13 14:34:06.290763307 +0000 UTC m=+0.552462815 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., 
architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9)
Oct 13 14:34:06 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:34:06 standalone.localdomain podman[268316]: 2025-10-13 14:34:06.309778528 +0000 UTC m=+0.564501773 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, architecture=x86_64)
Oct 13 14:34:06 standalone.localdomain podman[268316]: unhealthy
Oct 13 14:34:06 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:34:06 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:34:06 standalone.localdomain podman[268471]: 2025-10-13 14:34:06.329288834 +0000 UTC m=+0.296205784 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, architecture=x86_64, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 
glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, distribution-scope=public, vendor=Red Hat, Inc., container_name=glance_api_internal, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, config_id=tripleo_step4)
Oct 13 14:34:06 standalone.localdomain podman[268454]: 2025-10-13 14:34:06.33474572 +0000 UTC m=+0.333128900 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, version=17.1.9, distribution-scope=public)
Oct 13 14:34:06 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:34:06 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:34:06 standalone.localdomain podman[268579]: 2025-10-13 14:34:06.393684599 +0000 UTC m=+0.268833227 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, release=1, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, 
architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server)
Oct 13 14:34:06 standalone.localdomain runuser[268701]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:34:06 standalone.localdomain podman[268579]: 2025-10-13 14:34:06.595694106 +0000 UTC m=+0.470842754 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, vcs-type=git, container_name=swift_object_server, config_id=tripleo_step4, release=1, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, build-date=2025-07-21T14:56:28)
Oct 13 14:34:06 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:34:06 standalone.localdomain podman[268750]: 2025-10-13 14:34:06.91885973 +0000 UTC m=+0.055976750 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, build-date=2025-07-21T13:08:05, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., release=1, com.redhat.component=openstack-rabbitmq-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, name=rhosp17/openstack-rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:34:06 standalone.localdomain podman[268750]: 2025-10-13 14:34:06.95224893 +0000 UTC m=+0.089366000 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:08:05, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, com.redhat.component=openstack-rabbitmq-container, architecture=x86_64, io.buildah.version=1.33.12)
Oct 13 14:34:07 standalone.localdomain runuser[268701]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:34:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:34:07 standalone.localdomain runuser[268846]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:34:07 standalone.localdomain runuser[268846]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:34:07 standalone.localdomain runuser[268900]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:34:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1788: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:08 standalone.localdomain ceph-mon[29756]: pgmap v1788: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:08 standalone.localdomain runuser[268900]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:34:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1789: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:34:10 standalone.localdomain systemd[1]: tmp-crun.gUUqjh.mount: Deactivated successfully.
Oct 13 14:34:10 standalone.localdomain podman[269035]: 2025-10-13 14:34:10.795002533 +0000 UTC m=+0.069843913 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, config_id=tripleo_step3, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T13:27:18, batch=17.1_20250721.1, container_name=keystone_cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-keystone-container, version=17.1.9, name=rhosp17/openstack-keystone, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:34:10 standalone.localdomain podman[269035]: 2025-10-13 14:34:10.807924737 +0000 UTC m=+0.082766097 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, container_name=keystone_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, 
io.openshift.expose-services=, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, release=1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, build-date=2025-07-21T13:27:18, distribution-scope=public)
Oct 13 14:34:10 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:34:11 standalone.localdomain ceph-mon[29756]: pgmap v1789: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1790: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:34:12 standalone.localdomain ceph-mon[29756]: pgmap v1790: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:34:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:34:12 standalone.localdomain podman[269071]: 2025-10-13 14:34:12.797357146 +0000 UTC m=+0.062347884 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, container_name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., release=1, tcib_managed=true, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:54, io.buildah.version=1.33.12)
Oct 13 14:34:12 standalone.localdomain podman[269071]: 2025-10-13 14:34:12.847454346 +0000 UTC m=+0.112445094 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, build-date=2025-07-21T16:28:54, managed_by=tripleo_ansible, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, distribution-scope=public, tcib_managed=true, container_name=neutron_dhcp)
Oct 13 14:34:12 standalone.localdomain systemd[1]: tmp-crun.w9grIm.mount: Deactivated successfully.
Oct 13 14:34:12 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:34:12 standalone.localdomain podman[269072]: 2025-10-13 14:34:12.862032212 +0000 UTC m=+0.129223107 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, tcib_managed=true, release=1, name=rhosp17/openstack-neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, config_id=tripleo_step4, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T16:03:34, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, container_name=neutron_sriov_agent, com.redhat.component=openstack-neutron-sriov-agent-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1)
Oct 13 14:34:12 standalone.localdomain podman[269072]: 2025-10-13 14:34:12.921021022 +0000 UTC m=+0.188211907 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:03:34, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-neutron-sriov-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/sys/class/net:/sys/class/net:rw']}, container_name=neutron_sriov_agent, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:34:12 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:34:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1791: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:14 standalone.localdomain sudo[269128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:34:14 standalone.localdomain sudo[269128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:34:14 standalone.localdomain sudo[269128]: pam_unix(sudo:session): session closed for user root
Oct 13 14:34:14 standalone.localdomain ceph-mon[29756]: pgmap v1791: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:14 standalone.localdomain sudo[269143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:34:14 standalone.localdomain sudo[269143]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:34:14 standalone.localdomain sudo[269143]: pam_unix(sudo:session): session closed for user root
Oct 13 14:34:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:34:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:34:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:34:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:34:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:34:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:34:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:34:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:34:15 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev f40e7b70-6f82-455a-a4ca-83a52e7363ae (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:34:15 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev f40e7b70-6f82-455a-a4ca-83a52e7363ae (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:34:15 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event f40e7b70-6f82-455a-a4ca-83a52e7363ae (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:34:15 standalone.localdomain sudo[269197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:34:15 standalone.localdomain sudo[269197]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:34:15 standalone.localdomain sudo[269197]: pam_unix(sudo:session): session closed for user root
Oct 13 14:34:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:34:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:34:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:34:15 standalone.localdomain systemd[1]: tmp-crun.FkUpKQ.mount: Deactivated successfully.
Oct 13 14:34:15 standalone.localdomain podman[269214]: 2025-10-13 14:34:15.237219235 +0000 UTC m=+0.068756109 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., 
io.openshift.expose-services=, release=1, config_id=tripleo_step2, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, container_name=clustercheck, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:34:15 standalone.localdomain podman[269213]: 2025-10-13 14:34:15.305769238 +0000 UTC m=+0.141976595 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, build-date=2025-07-21T14:48:37, container_name=nova_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, version=17.1.9)
Oct 13 14:34:15 standalone.localdomain podman[269214]: 2025-10-13 14:34:15.326050077 +0000 UTC m=+0.157586961 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, vcs-type=git, description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, container_name=clustercheck, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, config_id=tripleo_step2, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-mariadb-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, name=rhosp17/openstack-mariadb, architecture=x86_64, build-date=2025-07-21T12:58:45, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:34:15 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:34:15 standalone.localdomain podman[269212]: 2025-10-13 14:34:15.290710838 +0000 UTC m=+0.127921165 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:15, 
distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true)
Oct 13 14:34:15 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:34:15 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:34:15 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:34:15 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:34:15 standalone.localdomain podman[269212]: 2025-10-13 14:34:15.375790456 +0000 UTC m=+0.213000773 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 iscsid, release=1, config_id=tripleo_step3, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, maintainer=OpenStack TripleO Team)
Oct 13 14:34:15 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:34:15 standalone.localdomain podman[269213]: 2025-10-13 14:34:15.427773452 +0000 UTC m=+0.263980809 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37)
Oct 13 14:34:15 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:34:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1792: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:16 standalone.localdomain ceph-mon[29756]: pgmap v1792: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:34:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1793: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:18 standalone.localdomain ceph-mon[29756]: pgmap v1793: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:18 standalone.localdomain runuser[269538]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:34:19 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:34:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:34:19 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:34:19 standalone.localdomain runuser[269538]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:34:19 standalone.localdomain runuser[269607]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:34:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1794: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:20 standalone.localdomain runuser[269607]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:34:20 standalone.localdomain runuser[269661]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:34:20 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:34:20 standalone.localdomain ceph-mon[29756]: pgmap v1794: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:20 standalone.localdomain runuser[269661]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:34:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1795: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:34:23
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['images', 'vms', 'manila_data', 'backups', 'volumes', '.mgr', 'manila_metadata']
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:34:23 standalone.localdomain ceph-mon[29756]: pgmap v1795: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:34:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1796: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:25 standalone.localdomain ceph-mon[29756]: pgmap v1796: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1797: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:26 standalone.localdomain ceph-mon[29756]: pgmap v1797: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1798: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:28 standalone.localdomain ceph-mon[29756]: pgmap v1798: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:34:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:34:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:34:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:34:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:34:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:34:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:34:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:34:29 standalone.localdomain systemd[1]: tmp-crun.MtXR4J.mount: Deactivated successfully.
Oct 13 14:34:29 standalone.localdomain podman[270080]: 2025-10-13 14:34:29.858973576 +0000 UTC m=+0.096927620 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, name=rhosp17/openstack-cron, tcib_managed=true)
Oct 13 14:34:29 standalone.localdomain podman[270060]: 2025-10-13 14:34:29.923413523 +0000 UTC m=+0.175814928 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, container_name=heat_engine, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-engine, vcs-type=git, com.redhat.component=openstack-heat-engine-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T15:44:11, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team)
Oct 13 14:34:29 standalone.localdomain podman[270080]: 2025-10-13 14:34:29.938272497 +0000 UTC m=+0.176226561 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, com.redhat.component=openstack-cron-container, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, name=rhosp17/openstack-cron, distribution-scope=public, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:34:29 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:34:29 standalone.localdomain podman[270067]: 2025-10-13 14:34:29.988014005 +0000 UTC m=+0.233109726 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, release=1, config_id=tripleo_step4, name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, maintainer=OpenStack TripleO Team, container_name=heat_api_cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:34:29 standalone.localdomain podman[270074]: 2025-10-13 14:34:29.941828605 +0000 UTC m=+0.184309116 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, release=1, build-date=2025-07-21T15:56:26, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, name=rhosp17/openstack-heat-api, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, vendor=Red Hat, Inc., vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-heat-api-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:34:29 standalone.localdomain podman[270067]: 2025-10-13 14:34:29.998874627 +0000 UTC m=+0.243970318 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, name=rhosp17/openstack-heat-api, distribution-scope=public, build-date=2025-07-21T15:56:26, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-container, io.openshift.expose-services=, release=1, vcs-type=git, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, config_id=tripleo_step4, container_name=heat_api_cron, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64)
Oct 13 14:34:30 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:34:30 standalone.localdomain podman[270066]: 2025-10-13 14:34:30.047399648 +0000 UTC m=+0.296779691 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, com.redhat.component=openstack-memcached-container, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:43, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, description=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, name=rhosp17/openstack-memcached, summary=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, release=1, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, container_name=memcached)
Oct 13 14:34:30 standalone.localdomain podman[270060]: 2025-10-13 14:34:30.11624453 +0000 UTC m=+0.368645955 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-engine, architecture=x86_64, container_name=heat_engine, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-heat-engine-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T15:44:11, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc.)
Oct 13 14:34:30 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:34:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1799: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:30 standalone.localdomain podman[270066]: 2025-10-13 14:34:30.167872505 +0000 UTC m=+0.417252548 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, config_id=tripleo_step1, name=rhosp17/openstack-memcached, com.redhat.component=openstack-memcached-container, tcib_managed=true, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, build-date=2025-07-21T12:58:43, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, managed_by=tripleo_ansible)
Oct 13 14:34:30 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:34:30 standalone.localdomain podman[270059]: 2025-10-13 14:34:30.122756378 +0000 UTC m=+0.380353122 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, version=17.1.9, config_id=tripleo_step4, release=1, vcs-type=git, architecture=x86_64, build-date=2025-07-21T14:49:55, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-cfn-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, maintainer=OpenStack TripleO Team, 
vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, container_name=heat_api_cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public)
Oct 13 14:34:30 standalone.localdomain podman[270059]: 2025-10-13 14:34:30.257862862 +0000 UTC m=+0.515459566 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, distribution-scope=public, managed_by=tripleo_ansible, container_name=heat_api_cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T14:49:55, batch=17.1_20250721.1, release=1, version=17.1.9, config_id=tripleo_step4, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api-cfn, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:34:30 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:34:30 standalone.localdomain podman[270074]: 2025-10-13 14:34:30.278015857 +0000 UTC m=+0.520496368 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 heat-api, architecture=x86_64, managed_by=tripleo_ansible, container_name=heat_api, build-date=2025-07-21T15:56:26, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, release=1)
Oct 13 14:34:30 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:34:31 standalone.localdomain ceph-mon[29756]: pgmap v1799: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:31 standalone.localdomain runuser[270263]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:34:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1800: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:32 standalone.localdomain runuser[270263]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:34:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:34:32 standalone.localdomain runuser[270332]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:34:32 standalone.localdomain ceph-mon[29756]: pgmap v1800: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:32 standalone.localdomain runuser[270332]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:34:32 standalone.localdomain runuser[270394]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:34:33 standalone.localdomain runuser[270394]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:34:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1801: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:35 standalone.localdomain ceph-mon[29756]: pgmap v1801: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1802: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:34:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:34:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:34:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:34:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:34:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:34:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:34:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:34:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:34:36 standalone.localdomain systemd[1]: tmp-crun.RDFwu7.mount: Deactivated successfully.
Oct 13 14:34:36 standalone.localdomain podman[270643]: 2025-10-13 14:34:36.868463616 +0000 UTC m=+0.114129595 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, container_name=swift_object_server, name=rhosp17/openstack-swift-object, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, 
summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9)
Oct 13 14:34:36 standalone.localdomain podman[270644]: 2025-10-13 14:34:36.908368384 +0000 UTC m=+0.152649170 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, 
managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:34:36 standalone.localdomain podman[270682]: 2025-10-13 14:34:36.947924181 +0000 UTC m=+0.168199075 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, architecture=x86_64, distribution-scope=public, release=1, version=17.1.9, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:34:36 standalone.localdomain podman[270642]: 2025-10-13 14:34:36.971680917 +0000 UTC m=+0.221760580 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, container_name=glance_api_cron, architecture=x86_64, name=rhosp17/openstack-glance-api, release=1, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, 
tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20)
Oct 13 14:34:36 standalone.localdomain podman[270658]: 2025-10-13 14:34:36.921399152 +0000 UTC m=+0.158016595 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:34:37 standalone.localdomain podman[270675]: 2025-10-13 14:34:37.019209298 +0000 UTC m=+0.245505466 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=ovn_controller, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44)
Oct 13 14:34:37 standalone.localdomain podman[270642]: 2025-10-13 14:34:37.02879958 +0000 UTC m=+0.278879233 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, build-date=2025-07-21T13:58:20, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, release=1, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=glance_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', 
'/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12)
Oct 13 14:34:37 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:34:37 standalone.localdomain podman[270650]: 2025-10-13 14:34:37.077629661 +0000 UTC m=+0.306851678 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, io.openshift.expose-services=, build-date=2025-07-21T13:58:20, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, batch=17.1_20250721.1, architecture=x86_64, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/glance:/var/lib/glance:shared']}, version=17.1.9, container_name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, maintainer=OpenStack TripleO Team)
Oct 13 14:34:37 standalone.localdomain podman[270675]: 2025-10-13 14:34:37.080662784 +0000 UTC m=+0.306958992 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, vcs-type=git, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12)
Oct 13 14:34:37 standalone.localdomain podman[270675]: unhealthy
Oct 13 14:34:37 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:34:37 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:34:37 standalone.localdomain podman[270652]: 2025-10-13 14:34:36.844622198 +0000 UTC m=+0.084894342 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vcs-type=git, container_name=swift_container_server, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, config_id=tripleo_step4, architecture=x86_64, version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, io.buildah.version=1.33.12)
Oct 13 14:34:37 standalone.localdomain podman[270643]: 2025-10-13 14:34:37.104960105 +0000 UTC m=+0.350626174 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, release=1, distribution-scope=public, container_name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.buildah.version=1.33.12, 
batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, build-date=2025-07-21T14:56:28)
Oct 13 14:34:37 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:34:37 standalone.localdomain podman[270652]: 2025-10-13 14:34:37.130421262 +0000 UTC m=+0.370693406 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, tcib_managed=true, com.redhat.component=openstack-swift-container-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 
17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, container_name=swift_container_server, distribution-scope=public, name=rhosp17/openstack-swift-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905)
Oct 13 14:34:37 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:34:37 standalone.localdomain podman[270645]: 2025-10-13 14:34:37.170418124 +0000 UTC m=+0.411103631 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-proxy-server, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, container_name=swift_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, release=1, architecture=x86_64, batch=17.1_20250721.1)
Oct 13 14:34:37 standalone.localdomain podman[270658]: 2025-10-13 14:34:37.206866546 +0000 UTC m=+0.443483989 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.33.12, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, container_name=ovn_metadata_agent, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, architecture=x86_64, build-date=2025-07-21T16:28:53)
Oct 13 14:34:37 standalone.localdomain ceph-mon[29756]: pgmap v1802: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:37 standalone.localdomain podman[270658]: unhealthy
Oct 13 14:34:37 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:34:37 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:34:37 standalone.localdomain podman[270682]: 2025-10-13 14:34:37.230883389 +0000 UTC m=+0.451158293 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, build-date=2025-07-21T16:11:22, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, vcs-type=git, container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, name=rhosp17/openstack-swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, distribution-scope=public, 
summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:34:37 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:34:37 standalone.localdomain podman[270644]: 2025-10-13 14:34:37.251739536 +0000 UTC m=+0.496020312 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, 
config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 13 14:34:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:34:37 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:34:37 standalone.localdomain podman[270650]: 2025-10-13 14:34:37.306937141 +0000 UTC m=+0.536159148 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, container_name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, com.redhat.component=openstack-glance-api-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-glance-api, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:34:37 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:34:37 standalone.localdomain podman[270645]: 2025-10-13 14:34:37.38490406 +0000 UTC m=+0.625589607 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-proxy-server, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-proxy-server-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, version=17.1.9, container_name=swift_proxy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, release=1)
Oct 13 14:34:37 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:34:37 standalone.localdomain systemd[1]: tmp-crun.gS9qCF.mount: Deactivated successfully.
Oct 13 14:34:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1803: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:38 standalone.localdomain ceph-mon[29756]: pgmap v1803: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1804: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:40 standalone.localdomain ceph-mon[29756]: pgmap v1804: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:34:41 standalone.localdomain podman[270980]: 2025-10-13 14:34:41.817335556 +0000 UTC m=+0.090557916 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, version=17.1.9, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 keystone, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, name=rhosp17/openstack-keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone_cron, io.buildah.version=1.33.12)
Oct 13 14:34:41 standalone.localdomain podman[270980]: 2025-10-13 14:34:41.856098049 +0000 UTC m=+0.129320389 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step3, tcib_managed=true, release=1, build-date=2025-07-21T13:27:18, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, com.redhat.component=openstack-keystone-container, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, description=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone)
Oct 13 14:34:41 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:34:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1805: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:34:43 standalone.localdomain ceph-mon[29756]: pgmap v1805: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:34:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:34:43 standalone.localdomain systemd[1]: tmp-crun.DXC3CV.mount: Deactivated successfully.
Oct 13 14:34:43 standalone.localdomain podman[271015]: 2025-10-13 14:34:43.826466706 +0000 UTC m=+0.093062451 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, version=17.1.9, build-date=2025-07-21T16:28:54, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, container_name=neutron_dhcp, managed_by=tripleo_ansible, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 14:34:43 standalone.localdomain podman[271015]: 2025-10-13 14:34:43.870083017 +0000 UTC m=+0.136678812 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, vcs-type=git, architecture=x86_64, version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, 
io.openshift.expose-services=, com.redhat.component=openstack-neutron-dhcp-agent-container, name=rhosp17/openstack-neutron-dhcp-agent, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, build-date=2025-07-21T16:28:54, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, batch=17.1_20250721.1, container_name=neutron_dhcp, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc.)
Oct 13 14:34:43 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:34:43 standalone.localdomain podman[271016]: 2025-10-13 14:34:43.925557211 +0000 UTC m=+0.187328340 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T16:03:34, name=rhosp17/openstack-neutron-sriov-agent, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-neutron-sriov-agent-container, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, architecture=x86_64)
Oct 13 14:34:43 standalone.localdomain podman[271016]: 2025-10-13 14:34:43.967947444 +0000 UTC m=+0.229718573 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-sriov-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, build-date=2025-07-21T16:03:34, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, container_name=neutron_sriov_agent, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, name=rhosp17/openstack-neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible)
Oct 13 14:34:43 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:34:44 standalone.localdomain runuser[271075]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:34:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1806: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:44 standalone.localdomain ceph-mon[29756]: pgmap v1806: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:44 standalone.localdomain runuser[271075]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:34:44 standalone.localdomain runuser[271138]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:34:45 standalone.localdomain runuser[271138]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:34:45 standalone.localdomain runuser[271200]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:34:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:34:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:34:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:34:45 standalone.localdomain systemd[1]: tmp-crun.Q4qH1O.mount: Deactivated successfully.
Oct 13 14:34:45 standalone.localdomain podman[271248]: 2025-10-13 14:34:45.802610949 +0000 UTC m=+0.067896493 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 13 14:34:45 standalone.localdomain podman[271249]: 2025-10-13 14:34:45.83209952 +0000 UTC m=+0.090166534 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=clustercheck, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, com.redhat.component=openstack-mariadb-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, 
config_id=tripleo_step2, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git)
Oct 13 14:34:45 standalone.localdomain podman[271248]: 2025-10-13 14:34:45.890880414 +0000 UTC m=+0.156165978 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team)
Oct 13 14:34:45 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:34:45 standalone.localdomain podman[271247]: 2025-10-13 14:34:45.858055072 +0000 UTC m=+0.121966455 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20250721.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, 
container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.openshift.expose-services=)
Oct 13 14:34:45 standalone.localdomain podman[271249]: 2025-10-13 14:34:45.915937909 +0000 UTC m=+0.174004953 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, architecture=x86_64, container_name=clustercheck, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, 
distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9)
Oct 13 14:34:45 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:34:45 standalone.localdomain podman[271247]: 2025-10-13 14:34:45.940919572 +0000 UTC m=+0.204830975 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, container_name=iscsid, version=17.1.9, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, batch=17.1_20250721.1, config_id=tripleo_step3, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 14:34:45 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:34:46 standalone.localdomain runuser[271200]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:34:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1807: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:47 standalone.localdomain ceph-mon[29756]: pgmap v1807: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:34:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1808: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:48 standalone.localdomain ceph-mon[29756]: pgmap v1808: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1809: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:51 standalone.localdomain ceph-mon[29756]: pgmap v1809: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1810: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:52 standalone.localdomain ceph-mon[29756]: pgmap v1810: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:34:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:34:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:34:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:34:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:34:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:34:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:34:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1811: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:55 standalone.localdomain ceph-mon[29756]: pgmap v1811: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1812: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:56 standalone.localdomain runuser[271817]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:34:57 standalone.localdomain ceph-mon[29756]: pgmap v1812: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:34:57 standalone.localdomain runuser[271817]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:34:57 standalone.localdomain runuser[271968]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:34:58 standalone.localdomain runuser[271968]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:34:58 standalone.localdomain runuser[272022]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:34:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1813: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:58 standalone.localdomain ceph-mon[29756]: pgmap v1813: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:34:58 standalone.localdomain runuser[272022]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:35:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1814: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:35:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:35:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:35:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:35:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:35:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:35:00 standalone.localdomain podman[272143]: 2025-10-13 14:35:00.807030892 +0000 UTC m=+0.065166840 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T12:58:43, description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-memcached-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, container_name=memcached, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:35:00 standalone.localdomain podman[272143]: 2025-10-13 14:35:00.829814507 +0000 UTC m=+0.087950485 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, build-date=2025-07-21T12:58:43, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-memcached-container, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-memcached, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, release=1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe)
Oct 13 14:35:00 standalone.localdomain podman[272149]: 2025-10-13 14:35:00.846587929 +0000 UTC m=+0.100429997 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1)
Oct 13 14:35:00 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:35:00 standalone.localdomain podman[272149]: 2025-10-13 14:35:00.859319128 +0000 UTC m=+0.113161196 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-heat-api-container, distribution-scope=public, container_name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, 
maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, name=rhosp17/openstack-heat-api, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64)
Oct 13 14:35:00 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:35:00 standalone.localdomain podman[272142]: 2025-10-13 14:35:00.926044955 +0000 UTC m=+0.185376849 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-engine, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, build-date=2025-07-21T15:44:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, container_name=heat_engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vendor=Red Hat, Inc.)
Oct 13 14:35:00 standalone.localdomain podman[272141]: 2025-10-13 14:35:00.976409373 +0000 UTC m=+0.237896593 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cfn, distribution-scope=public, architecture=x86_64, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api-cfn, 
build-date=2025-07-21T14:49:55, com.redhat.component=openstack-heat-api-cfn-container, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn)
Oct 13 14:35:01 standalone.localdomain podman[272141]: 2025-10-13 14:35:01.0051462 +0000 UTC m=+0.266633450 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, name=rhosp17/openstack-heat-api-cfn, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, managed_by=tripleo_ansible, container_name=heat_api_cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-api-cfn-container, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, 
maintainer=OpenStack TripleO Team, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T14:49:55, config_id=tripleo_step4)
Oct 13 14:35:01 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:35:01 standalone.localdomain podman[272164]: 2025-10-13 14:35:00.904413465 +0000 UTC m=+0.148837075 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.component=openstack-cron-container, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52)
Oct 13 14:35:01 standalone.localdomain podman[272142]: 2025-10-13 14:35:01.059974694 +0000 UTC m=+0.319306608 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4, com.redhat.component=openstack-heat-engine-container, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_engine, distribution-scope=public, release=1, build-date=2025-07-21T15:44:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-engine, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:35:01 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:35:01 standalone.localdomain podman[272164]: 2025-10-13 14:35:01.089086042 +0000 UTC m=+0.333509642 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, build-date=2025-07-21T13:07:52, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, 
description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, tcib_managed=true)
Oct 13 14:35:01 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:35:01 standalone.localdomain podman[272159]: 2025-10-13 14:35:00.954398561 +0000 UTC m=+0.197027567 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, tcib_managed=true, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-heat-api-container, version=17.1.9, container_name=heat_api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, 
config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc.)
Oct 13 14:35:01 standalone.localdomain podman[272159]: 2025-10-13 14:35:01.142017648 +0000 UTC m=+0.384646724 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-api, version=17.1.9, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, container_name=heat_api, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, build-date=2025-07-21T15:56:26, summary=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:35:01 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:35:01 standalone.localdomain ceph-mon[29756]: pgmap v1814: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1815: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:02 standalone.localdomain ceph-mon[29756]: pgmap v1815: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:35:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1816: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:05 standalone.localdomain sshd[272368]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:35:05 standalone.localdomain sshd[272368]: Accepted publickey for root from 192.168.122.11 port 57396 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:35:05 standalone.localdomain systemd-logind[45629]: New session 256 of user root.
Oct 13 14:35:05 standalone.localdomain systemd[1]: Started Session 256 of User root.
Oct 13 14:35:05 standalone.localdomain sshd[272368]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:35:05 standalone.localdomain ceph-mon[29756]: pgmap v1816: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:05 standalone.localdomain sudo[272372]:     root : PWD=/root ; USER=root ; COMMAND=/bin/python3 -c import configparser; c = configparser.ConfigParser(); c.read('/var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf'); print(c['DEFAULT']['default_volume_type'])
Oct 13 14:35:05 standalone.localdomain sudo[272372]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:35:05 standalone.localdomain sudo[272372]: pam_unix(sudo:session): session closed for user root
Oct 13 14:35:05 standalone.localdomain sshd[272371]: Received disconnect from 192.168.122.11 port 57396:11: disconnected by user
Oct 13 14:35:05 standalone.localdomain sshd[272371]: Disconnected from user root 192.168.122.11 port 57396
Oct 13 14:35:05 standalone.localdomain sshd[272368]: pam_unix(sshd:session): session closed for user root
Oct 13 14:35:05 standalone.localdomain systemd[1]: session-256.scope: Deactivated successfully.
Oct 13 14:35:05 standalone.localdomain systemd-logind[45629]: Session 256 logged out. Waiting for processes to exit.
Oct 13 14:35:05 standalone.localdomain systemd-logind[45629]: Removed session 256.
Oct 13 14:35:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1817: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:06 standalone.localdomain podman[272439]: 2025-10-13 14:35:06.176625092 +0000 UTC m=+0.082767938 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 mariadb, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, architecture=x86_64, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-mariadb-container, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 14:35:06 standalone.localdomain podman[272439]: 2025-10-13 14:35:06.204450232 +0000 UTC m=+0.110593078 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:45, description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.component=openstack-mariadb-container, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-mariadb, io.buildah.version=1.33.12)
Oct 13 14:35:06 standalone.localdomain ceph-mon[29756]: pgmap v1817: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:06 standalone.localdomain systemd[1]: tmp-crun.ORaYTQ.mount: Deactivated successfully.
Oct 13 14:35:06 standalone.localdomain podman[272466]: 2025-10-13 14:35:06.387090307 +0000 UTC m=+0.189305009 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, build-date=2025-07-21T13:08:11, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, com.redhat.component=openstack-haproxy-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, release=1, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:35:06 standalone.localdomain podman[272466]: 2025-10-13 14:35:06.477747474 +0000 UTC m=+0.279962246 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, summary=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-haproxy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:08:11, com.redhat.component=openstack-haproxy-container, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vendor=Red Hat, Inc., architecture=x86_64, release=1, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:35:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:35:07 standalone.localdomain podman[272637]: 2025-10-13 14:35:07.074579854 +0000 UTC m=+0.089945668 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, io.buildah.version=1.33.12, name=rhosp17/openstack-rabbitmq, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-rabbitmq-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:35:07 standalone.localdomain podman[272637]: 2025-10-13 14:35:07.1068967 +0000 UTC m=+0.122262524 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, name=rhosp17/openstack-rabbitmq, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-rabbitmq-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, version=17.1.9)
Oct 13 14:35:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:35:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:35:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:35:07 standalone.localdomain podman[272679]: 2025-10-13 14:35:07.239791916 +0000 UTC m=+0.075812835 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.9, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc.)
Oct 13 14:35:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:35:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:35:07 standalone.localdomain podman[272655]: 2025-10-13 14:35:07.264528051 +0000 UTC m=+0.180207741 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, version=17.1.9, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, tcib_managed=true, config_id=tripleo_step4, container_name=glance_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:35:07 standalone.localdomain podman[272655]: 2025-10-13 14:35:07.269924886 +0000 UTC m=+0.185604566 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, container_name=glance_api_cron, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, name=rhosp17/openstack-glance-api, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, vcs-type=git, build-date=2025-07-21T13:58:20, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:35:07 standalone.localdomain podman[272681]: 2025-10-13 14:35:07.276775256 +0000 UTC m=+0.115529529 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-container-container, distribution-scope=public, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, container_name=swift_container_server)
Oct 13 14:35:07 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:35:07 standalone.localdomain podman[272679]: 2025-10-13 14:35:07.284328496 +0000 UTC m=+0.120349395 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, version=17.1.9, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, container_name=ovn_controller, io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, tcib_managed=true)
Oct 13 14:35:07 standalone.localdomain podman[272679]: unhealthy
Oct 13 14:35:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:35:07 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:35:07 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:35:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:35:07 standalone.localdomain podman[272720]: 2025-10-13 14:35:07.320741817 +0000 UTC m=+0.065164830 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public)
Oct 13 14:35:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:35:07 standalone.localdomain podman[272757]: 2025-10-13 14:35:07.359119469 +0000 UTC m=+0.057526387 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20250721.1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, distribution-scope=public, version=17.1.9, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:35:07 standalone.localdomain podman[272720]: 2025-10-13 14:35:07.359786889 +0000 UTC m=+0.104209902 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible)
Oct 13 14:35:07 standalone.localdomain podman[272720]: unhealthy
Oct 13 14:35:07 standalone.localdomain podman[272678]: 2025-10-13 14:35:07.36897843 +0000 UTC m=+0.212966022 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, config_id=tripleo_step4, name=rhosp17/openstack-swift-object, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, release=1, build-date=2025-07-21T14:56:28, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, container_name=swift_object_server, vendor=Red Hat, Inc.)
Oct 13 14:35:07 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:35:07 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:35:07 standalone.localdomain podman[272778]: 2025-10-13 14:35:07.423973368 +0000 UTC m=+0.072354439 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack 
Platform 17.1 glance-api, vcs-type=git, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., container_name=glance_api_internal, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, architecture=x86_64)
Oct 13 14:35:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:35:07 standalone.localdomain podman[272721]: 2025-10-13 14:35:07.481760893 +0000 UTC m=+0.224062131 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, 
description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=)
Oct 13 14:35:07 standalone.localdomain podman[272681]: 2025-10-13 14:35:07.51377512 +0000 UTC m=+0.352529373 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, batch=17.1_20250721.1, container_name=swift_container_server, com.redhat.component=openstack-swift-container-container, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-swift-container, 
vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:35:07 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:35:07 standalone.localdomain podman[272833]: 2025-10-13 14:35:07.579794095 +0000 UTC m=+0.096525678 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-swift-proxy-server, distribution-scope=public, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-proxy-server-container, architecture=x86_64, io.buildah.version=1.33.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T14:48:37, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:35:07 standalone.localdomain podman[272678]: 2025-10-13 14:35:07.611596406 +0000 UTC m=+0.455584038 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, release=1, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container)
Oct 13 14:35:07 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:35:07 standalone.localdomain podman[272778]: 2025-10-13 14:35:07.645795319 +0000 UTC m=+0.294176380 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-glance-api-container, container_name=glance_api_internal, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, release=1, maintainer=OpenStack TripleO Team)
Oct 13 14:35:07 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:35:07 standalone.localdomain podman[272721]: 2025-10-13 14:35:07.657186207 +0000 UTC m=+0.399487405 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible)
Oct 13 14:35:07 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:35:07 standalone.localdomain podman[272757]: 2025-10-13 14:35:07.710457794 +0000 UTC m=+0.408864672 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, 
config_id=tripleo_step4, vcs-type=git, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12)
Oct 13 14:35:07 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:35:07 standalone.localdomain podman[272833]: 2025-10-13 14:35:07.772731545 +0000 UTC m=+0.289463128 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, architecture=x86_64, tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=swift_proxy, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-proxy-server-container, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-proxy-server, release=1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, version=17.1.9)
Oct 13 14:35:07 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:35:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1818: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:08 standalone.localdomain ceph-mon[29756]: pgmap v1818: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:09 standalone.localdomain runuser[272935]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:35:10 standalone.localdomain runuser[272935]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:35:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1819: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:10 standalone.localdomain runuser[273004]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:35:10 standalone.localdomain runuser[273004]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:35:10 standalone.localdomain runuser[273066]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:35:11 standalone.localdomain ceph-mon[29756]: pgmap v1819: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:11 standalone.localdomain runuser[273066]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:35:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1820: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:12 standalone.localdomain ceph-mon[29756]: pgmap v1820: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:35:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:35:12 standalone.localdomain podman[273185]: 2025-10-13 14:35:12.36243608 +0000 UTC m=+0.111987450 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, build-date=2025-07-21T13:27:18, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, release=1, name=rhosp17/openstack-keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, version=17.1.9, com.redhat.component=openstack-keystone-container, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=keystone_cron, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:35:12 standalone.localdomain podman[273185]: 2025-10-13 14:35:12.370794976 +0000 UTC m=+0.120346356 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, com.redhat.component=openstack-keystone-container, vendor=Red Hat, Inc., container_name=keystone_cron, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-keystone, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone, release=1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:35:12 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1129638757' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1129638757' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3102097061' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3102097061' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1129638757' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1129638757' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3102097061' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3102097061' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3083869769' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:35:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3083869769' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:35:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1821: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:14 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3083869769' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:35:14 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3083869769' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:35:14 standalone.localdomain ceph-mon[29756]: pgmap v1821: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:35:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:35:14 standalone.localdomain podman[273223]: 2025-10-13 14:35:14.855371738 +0000 UTC m=+0.114395833 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T16:03:34, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:35:14 standalone.localdomain podman[273223]: 2025-10-13 14:35:14.90586652 +0000 UTC m=+0.164890595 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_sriov_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T16:03:34, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/sys/class/net:/sys/class/net:rw']}, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1)
Oct 13 14:35:14 standalone.localdomain podman[273222]: 2025-10-13 14:35:14.924565901 +0000 UTC m=+0.187769173 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, container_name=neutron_dhcp, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-dhcp-agent, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, config_id=tripleo_step4, distribution-scope=public)
Oct 13 14:35:14 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:35:14 standalone.localdomain podman[273222]: 2025-10-13 14:35:14.963815939 +0000 UTC m=+0.227019241 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=neutron_dhcp, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, version=17.1.9, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:54)
Oct 13 14:35:14 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:35:15 standalone.localdomain sudo[273279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:35:15 standalone.localdomain sudo[273279]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:35:15 standalone.localdomain sudo[273279]: pam_unix(sudo:session): session closed for user root
Oct 13 14:35:15 standalone.localdomain sudo[273294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:35:15 standalone.localdomain sudo[273294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:35:15 standalone.localdomain sudo[273294]: pam_unix(sudo:session): session closed for user root
Oct 13 14:35:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:35:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:35:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:35:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:35:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:35:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:35:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:35:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:35:15 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 4258d232-c75a-4761-b2b8-11b3f0d53610 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:35:15 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 4258d232-c75a-4761-b2b8-11b3f0d53610 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:35:15 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 4258d232-c75a-4761-b2b8-11b3f0d53610 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:35:16 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:35:16 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:35:16 standalone.localdomain sudo[273349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:35:16 standalone.localdomain sudo[273349]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:35:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:35:16 standalone.localdomain sudo[273349]: pam_unix(sudo:session): session closed for user root
Oct 13 14:35:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:35:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:35:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1822: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:16 standalone.localdomain systemd[1]: tmp-crun.JcHEOP.mount: Deactivated successfully.
Oct 13 14:35:16 standalone.localdomain podman[273406]: 2025-10-13 14:35:16.142961993 +0000 UTC m=+0.066674646 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, com.redhat.component=openstack-mariadb-container, distribution-scope=public, build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=clustercheck, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step2, io.buildah.version=1.33.12)
Oct 13 14:35:16 standalone.localdomain podman[273406]: 2025-10-13 14:35:16.205694898 +0000 UTC m=+0.129407551 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.component=openstack-mariadb-container, summary=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, config_id=tripleo_step2, io.buildah.version=1.33.12, release=1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., version=17.1.9, 
architecture=x86_64, tcib_managed=true, vcs-type=git, container_name=clustercheck, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, build-date=2025-07-21T12:58:45, maintainer=OpenStack TripleO Team)
Oct 13 14:35:16 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:35:16 standalone.localdomain podman[273405]: 2025-10-13 14:35:16.177789936 +0000 UTC m=+0.099227420 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, version=17.1.9, io.buildah.version=1.33.12, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute)
Oct 13 14:35:16 standalone.localdomain podman[273404]: 2025-10-13 14:35:16.196386904 +0000 UTC m=+0.124997307 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, container_name=iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, release=1, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-07-21T13:27:15, vendor=Red Hat, Inc.)
Oct 13 14:35:16 standalone.localdomain podman[273404]: 2025-10-13 14:35:16.292961651 +0000 UTC m=+0.221572084 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:35:16 standalone.localdomain podman[273405]: 2025-10-13 14:35:16.311694604 +0000 UTC m=+0.233132088 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step5, distribution-scope=public, name=rhosp17/openstack-nova-compute, release=1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:35:16 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:35:16 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: pgmap v1822: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:17 standalone.localdomain systemd[1]: tmp-crun.ilyE8n.mount: Deactivated successfully.
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.325220) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366117325283, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 2100, "num_deletes": 250, "total_data_size": 1963358, "memory_usage": 2006192, "flush_reason": "Manual Compaction"}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366117332948, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 1142839, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41229, "largest_seqno": 43328, "table_properties": {"data_size": 1137062, "index_size": 2801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15518, "raw_average_key_size": 20, "raw_value_size": 1123846, "raw_average_value_size": 1480, "num_data_blocks": 131, "num_entries": 759, "num_filter_entries": 759, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760365920, "oldest_key_time": 1760365920, "file_creation_time": 1760366117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 7822 microseconds, and 3745 cpu microseconds.
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.333022) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 1142839 bytes OK
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.333069) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.335374) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.335407) EVENT_LOG_v1 {"time_micros": 1760366117335398, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.335434) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 1954408, prev total WAL file size 1966060, number of live WAL files 2.
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.336326) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353032' seq:72057594037927935, type:22 .. '6D6772737461740031373533' seq:0, type:0; will stop at (end)
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(1116KB)], [89(5651KB)]
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366117336370, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 6930386, "oldest_snapshot_seqno": -1}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 5030 keys, 5629726 bytes, temperature: kUnknown
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366117401370, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 5629726, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5599087, "index_size": 16974, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 124069, "raw_average_key_size": 24, "raw_value_size": 5511017, "raw_average_value_size": 1095, "num_data_blocks": 714, "num_entries": 5030, "num_filter_entries": 5030, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760366117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.401813) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 5629726 bytes
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.417735) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.4 rd, 86.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 5.5 +0.0 blob) out(5.4 +0.0 blob), read-write-amplify(11.0) write-amplify(4.9) OK, records in: 5453, records dropped: 423 output_compression: NoCompression
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.417804) EVENT_LOG_v1 {"time_micros": 1760366117417778, "job": 52, "event": "compaction_finished", "compaction_time_micros": 65164, "compaction_time_cpu_micros": 21456, "output_level": 6, "num_output_files": 1, "total_output_size": 5629726, "num_input_records": 5453, "num_output_records": 5030, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366117418312, "job": 52, "event": "table_file_deletion", "file_number": 91}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366117419575, "job": 52, "event": "table_file_deletion", "file_number": 89}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.336218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.419633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.419642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.419646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.419649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.419653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.420549) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366117420705, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 261, "num_deletes": 251, "total_data_size": 13706, "memory_usage": 20040, "flush_reason": "Manual Compaction"}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366117436240, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 13715, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43329, "largest_seqno": 43589, "table_properties": {"data_size": 11911, "index_size": 50, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4752, "raw_average_key_size": 18, "raw_value_size": 8431, "raw_average_value_size": 32, "num_data_blocks": 2, "num_entries": 261, "num_filter_entries": 261, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760366117, "oldest_key_time": 1760366117, "file_creation_time": 1760366117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 15740 microseconds, and 1278 cpu microseconds.
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.436307) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 13715 bytes OK
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.436339) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.448287) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.448357) EVENT_LOG_v1 {"time_micros": 1760366117448343, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.448392) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 11652, prev total WAL file size 11652, number of live WAL files 2.
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.449371) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(13KB)], [92(5497KB)]
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366117449515, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 5643441, "oldest_snapshot_seqno": -1}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 4780 keys, 4618847 bytes, temperature: kUnknown
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366117479471, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 4618847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4591170, "index_size": 14653, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 119569, "raw_average_key_size": 25, "raw_value_size": 4508697, "raw_average_value_size": 943, "num_data_blocks": 608, "num_entries": 4780, "num_filter_entries": 4780, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760366117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.479907) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 4618847 bytes
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.481852) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.4 rd, 153.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 5.4 +0.0 blob) out(4.4 +0.0 blob), read-write-amplify(748.3) write-amplify(336.8) OK, records in: 5291, records dropped: 511 output_compression: NoCompression
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.481873) EVENT_LOG_v1 {"time_micros": 1760366117481863, "job": 54, "event": "compaction_finished", "compaction_time_micros": 30120, "compaction_time_cpu_micros": 20879, "output_level": 6, "num_output_files": 1, "total_output_size": 4618847, "num_input_records": 5291, "num_output_records": 4780, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366117482007, "job": 54, "event": "table_file_deletion", "file_number": 94}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366117482549, "job": 54, "event": "table_file_deletion", "file_number": 92}
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.449187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.482672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.482690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.482692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.482693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:35:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:35:17.482698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:35:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1823: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:18 standalone.localdomain ceph-mon[29756]: pgmap v1823: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:19 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:35:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:35:19 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:35:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1824: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:20 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:35:20 standalone.localdomain ceph-mon[29756]: pgmap v1824: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1825: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:22 standalone.localdomain runuser[273760]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:35:22 standalone.localdomain ceph-mon[29756]: pgmap v1825: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:35:22 standalone.localdomain runuser[273760]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:35:23 standalone.localdomain runuser[273821]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:35:23
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'images', 'vms', 'manila_metadata', '.mgr', 'volumes', 'manila_data']
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:35:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:35:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2953394957' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:35:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:35:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2953394957' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:35:23 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2953394957' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:35:23 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2953394957' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:35:23 standalone.localdomain runuser[273821]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:35:23 standalone.localdomain runuser[273883]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:35:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1826: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:24 standalone.localdomain runuser[273883]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:35:24 standalone.localdomain ceph-mon[29756]: pgmap v1826: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1827: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:27 standalone.localdomain ceph-mon[29756]: pgmap v1827: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:35:27 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:35:27 standalone.localdomain recover_tripleo_nova_virtqemud[274181]: 93291
Oct 13 14:35:27 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:35:27 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1828: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:28 standalone.localdomain ceph-mon[29756]: pgmap v1828: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:35:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:35:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1829: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:31 standalone.localdomain ceph-mon[29756]: pgmap v1829: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:35:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:35:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:35:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:35:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:35:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:35:31 standalone.localdomain systemd[1]: tmp-crun.rb14SO.mount: Deactivated successfully.
Oct 13 14:35:31 standalone.localdomain podman[274267]: 2025-10-13 14:35:31.859295536 +0000 UTC m=+0.113418304 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, build-date=2025-07-21T14:49:55, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, version=17.1.9, name=rhosp17/openstack-heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=heat_api_cfn, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-heat-api-cfn-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:35:31 standalone.localdomain podman[274267]: 2025-10-13 14:35:31.894933614 +0000 UTC m=+0.149056382 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, build-date=2025-07-21T14:49:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, name=rhosp17/openstack-heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-cfn-container, config_id=tripleo_step4, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cfn, 
vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vcs-type=git, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64)
Oct 13 14:35:31 standalone.localdomain podman[274270]: 2025-10-13 14:35:31.905468075 +0000 UTC m=+0.150030681 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, build-date=2025-07-21T15:56:26, description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, container_name=heat_api_cron, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, 
name=rhosp17/openstack-heat-api, vcs-type=git, managed_by=tripleo_ansible, release=1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:35:31 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:35:31 standalone.localdomain podman[274270]: 2025-10-13 14:35:31.911132738 +0000 UTC m=+0.155695354 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, com.redhat.component=openstack-heat-api-container, version=17.1.9, build-date=2025-07-21T15:56:26, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, container_name=heat_api_cron, name=rhosp17/openstack-heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 13 14:35:31 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:35:31 standalone.localdomain podman[274268]: 2025-10-13 14:35:31.9449508 +0000 UTC m=+0.199353966 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, container_name=heat_engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, build-date=2025-07-21T15:44:11, name=rhosp17/openstack-heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, 
version=17.1.9, release=1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-heat-engine-container)
Oct 13 14:35:31 standalone.localdomain podman[274269]: 2025-10-13 14:35:31.856380327 +0000 UTC m=+0.111212386 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:43, config_id=tripleo_step1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, name=rhosp17/openstack-memcached, vcs-type=git, version=17.1.9, container_name=memcached, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:35:31 standalone.localdomain podman[274286]: 2025-10-13 14:35:31.874803788 +0000 UTC m=+0.112578127 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-api-container, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, managed_by=tripleo_ansible, version=17.1.9, container_name=heat_api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:56:26, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:35:31 standalone.localdomain podman[274268]: 2025-10-13 14:35:31.995856855 +0000 UTC m=+0.250260001 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, 
batch=17.1_20250721.1, name=rhosp17/openstack-heat-engine, build-date=2025-07-21T15:44:11, vcs-type=git, com.redhat.component=openstack-heat-engine-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:35:32 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:35:32 standalone.localdomain podman[274286]: 2025-10-13 14:35:32.010814031 +0000 UTC m=+0.248588360 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, summary=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-heat-api, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530)
Oct 13 14:35:32 standalone.localdomain podman[274288]: 2025-10-13 14:35:31.977933597 +0000 UTC m=+0.205582266 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, version=17.1.9, architecture=x86_64, release=1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52)
Oct 13 14:35:32 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:35:32 standalone.localdomain podman[274269]: 2025-10-13 14:35:32.038893778 +0000 UTC m=+0.293725847 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, com.redhat.component=openstack-memcached-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, build-date=2025-07-21T12:58:43, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, container_name=memcached, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, 
vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, name=rhosp17/openstack-memcached, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:35:32 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:35:32 standalone.localdomain podman[274288]: 2025-10-13 14:35:32.061752596 +0000 UTC m=+0.289401265 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, io.openshift.expose-services=, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, config_id=tripleo_step4, 
com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2025-07-21T13:07:52, container_name=logrotate_crond)
Oct 13 14:35:32 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:35:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1830: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:32 standalone.localdomain ceph-mon[29756]: pgmap v1830: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:35:33 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 14:35:33 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 14:35:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1831: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:34 standalone.localdomain runuser[274433]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:35:35 standalone.localdomain ceph-mon[29756]: pgmap v1831: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:35 standalone.localdomain runuser[274433]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:35:35 standalone.localdomain runuser[274502]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:35:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1832: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:36 standalone.localdomain ceph-mon[29756]: pgmap v1832: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:36 standalone.localdomain runuser[274502]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:35:36 standalone.localdomain runuser[274597]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:35:37 standalone.localdomain runuser[274597]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:35:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:35:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:35:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:35:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:35:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:35:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:35:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:35:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:35:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:35:37 standalone.localdomain systemd[1]: tmp-crun.pk32tR.mount: Deactivated successfully.
Oct 13 14:35:37 standalone.localdomain podman[274836]: 2025-10-13 14:35:37.821928099 +0000 UTC m=+0.084171100 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, 
com.redhat.component=openstack-swift-object-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, tcib_managed=true, container_name=swift_object_server, io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T14:56:28)
Oct 13 14:35:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:35:37 standalone.localdomain podman[274849]: 2025-10-13 14:35:37.844463827 +0000 UTC m=+0.088821812 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1)
Oct 13 14:35:37 standalone.localdomain podman[274864]: 2025-10-13 14:35:37.891802582 +0000 UTC m=+0.128776392 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, 
io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, config_id=tripleo_step4, release=1, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible)
Oct 13 14:35:37 standalone.localdomain podman[274848]: 2025-10-13 14:35:37.928707029 +0000 UTC m=+0.181890314 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, managed_by=tripleo_ansible, 
name=rhosp17/openstack-swift-container, architecture=x86_64, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-container-container, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:35:37 standalone.localdomain podman[274849]: 2025-10-13 14:35:37.938709324 +0000 UTC m=+0.183067309 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent)
Oct 13 14:35:37 standalone.localdomain podman[274849]: unhealthy
Oct 13 14:35:37 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:35:37 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:35:37 standalone.localdomain podman[274929]: 2025-10-13 14:35:37.907879223 +0000 UTC m=+0.060585650 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-proxy-server, container_name=swift_proxy, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, build-date=2025-07-21T14:48:37, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, 
com.redhat.component=openstack-swift-proxy-server-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:35:37 standalone.localdomain podman[274835]: 2025-10-13 14:35:37.97657385 +0000 UTC m=+0.243279937 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-glance-api-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, config_id=tripleo_step4, container_name=glance_api_cron, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']})
Oct 13 14:35:38 standalone.localdomain podman[274835]: 2025-10-13 14:35:38.015785897 +0000 UTC m=+0.282491994 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, com.redhat.component=openstack-glance-api-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, batch=17.1_20250721.1, 
tcib_managed=true, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=glance_api_cron, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, release=1, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:35:38 standalone.localdomain podman[274884]: 2025-10-13 14:35:38.022651587 +0000 UTC m=+0.257513612 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:35:38 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:35:38 standalone.localdomain podman[274836]: 2025-10-13 14:35:38.054967493 +0000 UTC m=+0.317210514 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, batch=17.1_20250721.1, container_name=swift_object_server, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-swift-object-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, release=1, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d)
Oct 13 14:35:38 standalone.localdomain podman[274864]: 2025-10-13 14:35:38.075704506 +0000 UTC m=+0.312678306 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=swift_account_server, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-swift-account, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-swift-account-container, architecture=x86_64, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 14:35:38 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:35:38 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:35:38 standalone.localdomain podman[274856]: 2025-10-13 14:35:38.077028196 +0000 UTC m=+0.321207735 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, batch=17.1_20250721.1, vcs-type=git, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64)
Oct 13 14:35:38 standalone.localdomain podman[274848]: 2025-10-13 14:35:38.133772769 +0000 UTC m=+0.386956054 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.buildah.version=1.33.12, container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, release=1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-swift-container)
Oct 13 14:35:38 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:35:38 standalone.localdomain podman[274856]: 2025-10-13 14:35:38.156593336 +0000 UTC m=+0.400772885 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245)
Oct 13 14:35:38 standalone.localdomain podman[274856]: unhealthy
Oct 13 14:35:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1833: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:38 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:35:38 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:35:38 standalone.localdomain podman[274842]: 2025-10-13 14:35:38.231279485 +0000 UTC m=+0.486760539 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, build-date=2025-07-21T13:58:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_internal, batch=17.1_20250721.1, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, architecture=x86_64)
Oct 13 14:35:38 standalone.localdomain podman[274929]: 2025-10-13 14:35:38.26023806 +0000 UTC m=+0.412944497 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, architecture=x86_64, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-proxy-server, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, container_name=swift_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.component=openstack-swift-proxy-server-container)
Oct 13 14:35:38 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:35:38 standalone.localdomain podman[274884]: 2025-10-13 14:35:38.339582881 +0000 UTC m=+0.574444906 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12)
Oct 13 14:35:38 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:35:38 standalone.localdomain ceph-mon[29756]: pgmap v1833: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:38 standalone.localdomain podman[274842]: 2025-10-13 14:35:38.431996482 +0000 UTC m=+0.687477546 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, com.redhat.component=openstack-glance-api-container, container_name=glance_api_internal, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, name=rhosp17/openstack-glance-api, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:35:38 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:35:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1834: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:41 standalone.localdomain ceph-mon[29756]: pgmap v1834: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1835: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:42 standalone.localdomain ceph-mon[29756]: pgmap v1835: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:35:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:35:42 standalone.localdomain systemd[1]: tmp-crun.ckfUSL.mount: Deactivated successfully.
Oct 13 14:35:42 standalone.localdomain podman[275137]: 2025-10-13 14:35:42.79612408 +0000 UTC m=+0.065899873 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, container_name=keystone_cron, release=1, name=rhosp17/openstack-keystone, com.redhat.component=openstack-keystone-container, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, build-date=2025-07-21T13:27:18, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:35:42 standalone.localdomain podman[275137]: 2025-10-13 14:35:42.833898333 +0000 UTC m=+0.103674136 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, summary=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone_cron, io.openshift.expose-services=, build-date=2025-07-21T13:27:18, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:35:42 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:35:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1836: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:45 standalone.localdomain ceph-mon[29756]: pgmap v1836: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:35:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:35:45 standalone.localdomain podman[275182]: 2025-10-13 14:35:45.80566695 +0000 UTC m=+0.072835494 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.openshift.expose-services=, build-date=2025-07-21T16:28:54, com.redhat.component=openstack-neutron-dhcp-agent-container, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git)
Oct 13 14:35:45 standalone.localdomain podman[275183]: 2025-10-13 14:35:45.867883769 +0000 UTC m=+0.129877585 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, build-date=2025-07-21T16:03:34, com.redhat.component=openstack-neutron-sriov-agent-container, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, batch=17.1_20250721.1, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 14:35:45 standalone.localdomain podman[275182]: 2025-10-13 14:35:45.921302599 +0000 UTC m=+0.188471133 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., container_name=neutron_dhcp, distribution-scope=public, version=17.1.9, build-date=2025-07-21T16:28:54, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9)
Oct 13 14:35:45 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:35:45 standalone.localdomain podman[275183]: 2025-10-13 14:35:45.977092333 +0000 UTC m=+0.239086159 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., build-date=2025-07-21T16:03:34, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, version=17.1.9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, container_name=neutron_sriov_agent, com.redhat.component=openstack-neutron-sriov-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-sriov-agent)
Oct 13 14:35:45 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:35:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1837: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:46 standalone.localdomain ceph-mon[29756]: pgmap v1837: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:35:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:35:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:35:46 standalone.localdomain podman[275362]: 2025-10-13 14:35:46.799436886 +0000 UTC m=+0.065901584 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container)
Oct 13 14:35:46 standalone.localdomain systemd[1]: tmp-crun.eB4IxD.mount: Deactivated successfully.
Oct 13 14:35:46 standalone.localdomain podman[275363]: 2025-10-13 14:35:46.841272353 +0000 UTC m=+0.101915733 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step5, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute)
Oct 13 14:35:46 standalone.localdomain podman[275364]: 2025-10-13 14:35:46.854195887 +0000 UTC m=+0.115629931 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, release=1, architecture=x86_64, com.redhat.component=openstack-mariadb-container, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, container_name=clustercheck, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:45, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 14:35:46 standalone.localdomain podman[275362]: 2025-10-13 14:35:46.860726447 +0000 UTC m=+0.127191135 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.component=openstack-iscsid-container, version=17.1.9, container_name=iscsid, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, managed_by=tripleo_ansible, release=1, tcib_managed=true)
Oct 13 14:35:46 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:35:46 standalone.localdomain podman[275363]: 2025-10-13 14:35:46.891016631 +0000 UTC m=+0.151660061 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1, vendor=Red Hat, Inc., container_name=nova_compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, version=17.1.9)
Oct 13 14:35:46 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:35:46 standalone.localdomain podman[275364]: 2025-10-13 14:35:46.920854692 +0000 UTC m=+0.182288676 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T12:58:45, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, container_name=clustercheck)
Oct 13 14:35:46 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:35:47 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 14:35:47 standalone.localdomain object-server[275487]: Object update sweep starting on /srv/node/d1 (pid: 18)
Oct 13 14:35:47 standalone.localdomain object-server[275487]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 18)
Oct 13 14:35:47 standalone.localdomain object-server[275487]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 14:35:47 standalone.localdomain object-server[114601]: Object update sweep completed: 0.07s
Oct 13 14:35:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:35:47 standalone.localdomain runuser[275499]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:35:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1838: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:48 standalone.localdomain runuser[275499]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:35:48 standalone.localdomain ceph-mon[29756]: pgmap v1838: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:48 standalone.localdomain runuser[275602]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:35:48 standalone.localdomain runuser[275602]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:35:48 standalone.localdomain runuser[275664]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:35:49 standalone.localdomain runuser[275664]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:35:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1839: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:50 standalone.localdomain ceph-mon[29756]: pgmap v1839: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1840: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:52 standalone.localdomain ceph-mon[29756]: pgmap v1840: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:35:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:35:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:35:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:35:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:35:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:35:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:35:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1841: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:55 standalone.localdomain ceph-mon[29756]: pgmap v1841: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1842: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:56 standalone.localdomain ceph-mon[29756]: pgmap v1842: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:35:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1843: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:35:58 standalone.localdomain ceph-mon[29756]: pgmap v1843: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1844: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:00 standalone.localdomain runuser[276110]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:36:00 standalone.localdomain ceph-mon[29756]: pgmap v1844: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:00 standalone.localdomain runuser[276110]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:36:01 standalone.localdomain runuser[276179]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:36:01 standalone.localdomain runuser[276179]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:36:01 standalone.localdomain runuser[276286]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:36:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1845: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:02 standalone.localdomain ceph-mon[29756]: pgmap v1845: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:36:02 standalone.localdomain runuser[276286]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:36:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:36:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:36:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:36:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:36:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:36:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:36:02 standalone.localdomain podman[276352]: 2025-10-13 14:36:02.855736147 +0000 UTC m=+0.104157441 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, container_name=heat_engine, description=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-engine-container, build-date=2025-07-21T15:44:11, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Oct 13 14:36:02 standalone.localdomain podman[276353]: 2025-10-13 14:36:02.896703858 +0000 UTC m=+0.137793958 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, build-date=2025-07-21T12:58:43, description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, container_name=memcached, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, vcs-type=git, tcib_managed=true, architecture=x86_64, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.buildah.version=1.33.12)
Oct 13 14:36:02 standalone.localdomain podman[276353]: 2025-10-13 14:36:02.91677507 +0000 UTC m=+0.157865190 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, io.openshift.expose-services=, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, build-date=2025-07-21T12:58:43, name=rhosp17/openstack-memcached, release=1, com.redhat.component=openstack-memcached-container, config_id=tripleo_step1, container_name=memcached, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9)
Oct 13 14:36:02 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:36:02 standalone.localdomain podman[276364]: 2025-10-13 14:36:02.95967116 +0000 UTC m=+0.194085846 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-container, container_name=heat_api, io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, release=1, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:36:02 standalone.localdomain podman[276364]: 2025-10-13 14:36:02.998369291 +0000 UTC m=+0.232783997 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, name=rhosp17/openstack-heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red 
Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1)
Oct 13 14:36:03 standalone.localdomain podman[276358]: 2025-10-13 14:36:03.008960414 +0000 UTC m=+0.242901936 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T15:56:26, release=1, container_name=heat_api_cron, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, summary=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-container, name=rhosp17/openstack-heat-api)
Oct 13 14:36:03 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:36:03 standalone.localdomain podman[276358]: 2025-10-13 14:36:03.021864709 +0000 UTC m=+0.255806241 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, managed_by=tripleo_ansible, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, 
vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, container_name=heat_api_cron)
Oct 13 14:36:03 standalone.localdomain podman[276352]: 2025-10-13 14:36:03.032265365 +0000 UTC m=+0.280686669 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, container_name=heat_engine, description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, build-date=2025-07-21T15:44:11, release=1, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-engine, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, version=17.1.9, 
io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, io.openshift.expose-services=, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3)
Oct 13 14:36:03 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:36:03 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:36:03 standalone.localdomain podman[276371]: 2025-10-13 14:36:03.0792679 +0000 UTC m=+0.300281417 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, container_name=logrotate_crond, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
architecture=x86_64, build-date=2025-07-21T13:07:52, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1)
Oct 13 14:36:03 standalone.localdomain podman[276371]: 2025-10-13 14:36:03.095992181 +0000 UTC m=+0.317005708 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, 
container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1)
Oct 13 14:36:03 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:36:03 standalone.localdomain podman[276351]: 2025-10-13 14:36:03.164986307 +0000 UTC m=+0.416999800 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T14:49:55, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, com.redhat.component=openstack-heat-api-cfn-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api_cfn, 
name=rhosp17/openstack-heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.openshift.expose-services=, version=17.1.9, release=1, io.buildah.version=1.33.12, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:36:03 standalone.localdomain podman[276351]: 2025-10-13 14:36:03.200114109 +0000 UTC m=+0.452127602 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, build-date=2025-07-21T14:49:55, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, container_name=heat_api_cfn, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, com.redhat.component=openstack-heat-api-cfn-container, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, batch=17.1_20250721.1)
Oct 13 14:36:03 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:36:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1846: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:04 standalone.localdomain ceph-mon[29756]: pgmap v1846: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1847: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:06 standalone.localdomain ceph-mon[29756]: pgmap v1847: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:06 standalone.localdomain systemd[1]: tmp-crun.AEW4TI.mount: Deactivated successfully.
Oct 13 14:36:06 standalone.localdomain podman[276524]: 2025-10-13 14:36:06.348003182 +0000 UTC m=+0.087604515 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-mariadb, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, build-date=2025-07-21T12:58:45, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:36:06 standalone.localdomain podman[276524]: 2025-10-13 14:36:06.382899627 +0000 UTC m=+0.122500920 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vcs-type=git, version=17.1.9, com.redhat.component=openstack-mariadb-container, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-mariadb, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, release=1, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 13 14:36:06 standalone.localdomain systemd[1]: tmp-crun.lK5Wa0.mount: Deactivated successfully.
Oct 13 14:36:06 standalone.localdomain podman[276599]: 2025-10-13 14:36:06.592257558 +0000 UTC m=+0.082501259 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-haproxy, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-haproxy-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T13:08:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, architecture=x86_64)
Oct 13 14:36:06 standalone.localdomain podman[276599]: 2025-10-13 14:36:06.621538532 +0000 UTC m=+0.111782203 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, release=1, name=rhosp17/openstack-haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, description=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-haproxy-container, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 14:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 14:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 3600.0 total, 600.0 interval
                                                        Cumulative writes: 9712 writes, 43K keys, 9712 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                                        Cumulative WAL: 9712 writes, 9712 syncs, 1.00 writes per sync, written: 0.03 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1512 writes, 7095 keys, 1512 commit groups, 1.0 writes per commit group, ingest: 5.73 MB, 0.01 MB/s
                                                        Interval WAL: 1512 writes, 1512 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     66.6      0.38              0.09        27    0.014       0      0       0.0       0.0
                                                          L6      1/0    4.40 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    190.6    159.8      0.69              0.32        26    0.027    108K    14K       0.0       0.0
                                                         Sum      1/0    4.40 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.3    122.3    126.4      1.07              0.41        53    0.020    108K    14K       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.6     76.9     73.9      0.47              0.10        12    0.039     30K   2975       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    190.6    159.8      0.69              0.32        26    0.027    108K    14K       0.0       0.0
                                                        High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     67.0      0.38              0.09        26    0.015       0      0       0.0       0.0
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 3600.0 total, 600.0 interval
                                                        Flush(GB): cumulative 0.025, interval 0.004
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.13 GB write, 0.04 MB/s write, 0.13 GB read, 0.04 MB/s read, 1.1 seconds
                                                        Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.5 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x557b7fb5b350#2 capacity: 308.00 MB usage: 18.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000256 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(1718,17.85 MB,5.79654%) FilterBlock(54,301.55 KB,0.0956102%) IndexBlock(54,462.86 KB,0.146757%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
Oct 13 14:36:07 standalone.localdomain podman[276736]: 2025-10-13 14:36:07.296441494 +0000 UTC m=+0.149728912 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, name=rhosp17/openstack-rabbitmq, build-date=2025-07-21T13:08:05, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-rabbitmq-container, batch=17.1_20250721.1)
Oct 13 14:36:07 standalone.localdomain podman[276736]: 2025-10-13 14:36:07.331443852 +0000 UTC m=+0.184731270 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T13:08:05, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.component=openstack-rabbitmq-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, vendor=Red Hat, Inc.)
Oct 13 14:36:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:36:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1848: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:08 standalone.localdomain ceph-mon[29756]: pgmap v1848: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:36:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:36:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:36:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:36:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:36:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:36:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:36:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:36:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:36:08 standalone.localdomain systemd[1]: tmp-crun.hzWsOv.mount: Deactivated successfully.
Oct 13 14:36:08 standalone.localdomain podman[276845]: 2025-10-13 14:36:08.865655645 +0000 UTC m=+0.126275805 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., 
config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true)
Oct 13 14:36:08 standalone.localdomain systemd[1]: tmp-crun.60nPW8.mount: Deactivated successfully.
Oct 13 14:36:08 standalone.localdomain podman[276844]: 2025-10-13 14:36:08.907340448 +0000 UTC m=+0.171468976 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, distribution-scope=public, container_name=swift_object_server, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container)
Oct 13 14:36:08 standalone.localdomain podman[276864]: 2025-10-13 14:36:08.960600363 +0000 UTC m=+0.190925918 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, tcib_managed=true, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
version=17.1.9, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, container_name=ovn_metadata_agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:36:08 standalone.localdomain podman[276864]: 2025-10-13 14:36:08.967581846 +0000 UTC m=+0.197907411 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, build-date=2025-07-21T16:28:53, maintainer=OpenStack TripleO Team, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64)
Oct 13 14:36:08 standalone.localdomain podman[276864]: unhealthy
Oct 13 14:36:08 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:36:08 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:36:08 standalone.localdomain podman[276858]: 2025-10-13 14:36:08.945980416 +0000 UTC m=+0.189030920 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, container_name=swift_container_server, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:36:09 standalone.localdomain podman[276869]: 2025-10-13 14:36:09.008614579 +0000 UTC m=+0.248680342 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, build-date=2025-07-21T13:28:44, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9)
Oct 13 14:36:09 standalone.localdomain podman[276869]: 2025-10-13 14:36:09.048810466 +0000 UTC m=+0.288876219 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64)
Oct 13 14:36:09 standalone.localdomain podman[276869]: unhealthy
Oct 13 14:36:09 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:36:09 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:36:09 standalone.localdomain podman[276844]: 2025-10-13 14:36:09.092056236 +0000 UTC m=+0.356184774 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.component=openstack-swift-object-container, name=rhosp17/openstack-swift-object, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, batch=17.1_20250721.1, io.openshift.tags=rhosp osp 
openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4)
Oct 13 14:36:09 standalone.localdomain podman[276849]: 2025-10-13 14:36:09.053316794 +0000 UTC m=+0.304530528 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, version=17.1.9)
Oct 13 14:36:09 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:36:09 standalone.localdomain podman[276858]: 2025-10-13 14:36:09.118793502 +0000 UTC m=+0.361844026 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, com.redhat.component=openstack-swift-container-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=swift_container_server, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team)
Oct 13 14:36:09 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:36:09 standalone.localdomain podman[276874]: 2025-10-13 14:36:09.108029703 +0000 UTC m=+0.342921768 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, version=17.1.9, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, 
managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, tcib_managed=true)
Oct 13 14:36:09 standalone.localdomain podman[276843]: 2025-10-13 14:36:09.16722986 +0000 UTC m=+0.431131450 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, container_name=glance_api_cron, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-glance-api-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, name=rhosp17/openstack-glance-api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64)
Oct 13 14:36:09 standalone.localdomain podman[276843]: 2025-10-13 14:36:09.177662859 +0000 UTC m=+0.441564439 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api_cron, vcs-type=git, com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:36:09 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:36:09 standalone.localdomain podman[276846]: 2025-10-13 14:36:09.212517814 +0000 UTC m=+0.468996788 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-proxy-server, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, 
com.redhat.component=openstack-swift-proxy-server-container, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, batch=17.1_20250721.1, release=1, container_name=swift_proxy, distribution-scope=public, io.openshift.expose-services=, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77)
Oct 13 14:36:09 standalone.localdomain podman[276845]: 2025-10-13 14:36:09.225969623 +0000 UTC m=+0.486589793 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20250721.1, release=1, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, config_id=tripleo_step4, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Oct 13 14:36:09 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:36:09 standalone.localdomain podman[276849]: 2025-10-13 14:36:09.256814545 +0000 UTC m=+0.508028269 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, tcib_managed=true, config_id=tripleo_step4)
Oct 13 14:36:09 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:36:09 standalone.localdomain podman[276874]: 2025-10-13 14:36:09.306514212 +0000 UTC m=+0.541406277 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, tcib_managed=true, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, container_name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1)
Oct 13 14:36:09 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:36:09 standalone.localdomain podman[276846]: 2025-10-13 14:36:09.423349749 +0000 UTC m=+0.679828703 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, version=17.1.9, com.redhat.component=openstack-swift-proxy-server-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack 
Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, architecture=x86_64, config_id=tripleo_step4, release=1, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, container_name=swift_proxy)
Oct 13 14:36:09 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:36:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1849: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:11 standalone.localdomain ceph-mon[29756]: pgmap v1849: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1850: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.421839) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366172421876, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 754, "num_deletes": 256, "total_data_size": 565120, "memory_usage": 579808, "flush_reason": "Manual Compaction"}
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366172428330, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 555557, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43590, "largest_seqno": 44343, "table_properties": {"data_size": 551990, "index_size": 1424, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7925, "raw_average_key_size": 18, "raw_value_size": 544729, "raw_average_value_size": 1275, "num_data_blocks": 65, "num_entries": 427, "num_filter_entries": 427, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760366118, "oldest_key_time": 1760366118, "file_creation_time": 1760366172, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 6542 microseconds, and 2446 cpu microseconds.
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.428377) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 555557 bytes OK
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.428401) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.432147) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.432181) EVENT_LOG_v1 {"time_micros": 1760366172432174, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.432204) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 561218, prev total WAL file size 561707, number of live WAL files 2.
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.432777) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323533' seq:72057594037927935, type:22 .. '6C6F676D0031353035' seq:0, type:0; will stop at (end)
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(542KB)], [95(4510KB)]
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366172432815, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 5174404, "oldest_snapshot_seqno": -1}
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 4680 keys, 5072304 bytes, temperature: kUnknown
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366172456763, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 5072304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5043866, "index_size": 15670, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11717, "raw_key_size": 118352, "raw_average_key_size": 25, "raw_value_size": 4961664, "raw_average_value_size": 1060, "num_data_blocks": 650, "num_entries": 4680, "num_filter_entries": 4680, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760366172, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.457023) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 5072304 bytes
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.458731) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 215.3 rd, 211.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 4.4 +0.0 blob) out(4.8 +0.0 blob), read-write-amplify(18.4) write-amplify(9.1) OK, records in: 5207, records dropped: 527 output_compression: NoCompression
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.458756) EVENT_LOG_v1 {"time_micros": 1760366172458745, "job": 56, "event": "compaction_finished", "compaction_time_micros": 24037, "compaction_time_cpu_micros": 12778, "output_level": 6, "num_output_files": 1, "total_output_size": 5072304, "num_input_records": 5207, "num_output_records": 4680, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366172458968, "job": 56, "event": "table_file_deletion", "file_number": 97}
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366172459409, "job": 56, "event": "table_file_deletion", "file_number": 95}
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.432690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.459524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.459541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.459543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.459545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:36:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:36:12.459547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:36:13 standalone.localdomain runuser[277145]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:36:13 standalone.localdomain ceph-mon[29756]: pgmap v1850: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:36:13 standalone.localdomain runuser[277145]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:36:13 standalone.localdomain podman[277203]: 2025-10-13 14:36:13.793879252 +0000 UTC m=+0.066157341 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, com.redhat.component=openstack-keystone-container, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, tcib_managed=true, distribution-scope=public, container_name=keystone_cron, config_id=tripleo_step3)
Oct 13 14:36:13 standalone.localdomain podman[277203]: 2025-10-13 14:36:13.832890093 +0000 UTC m=+0.105168182 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, vcs-type=git, container_name=keystone_cron, version=17.1.9, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, release=1, tcib_managed=true)
Oct 13 14:36:13 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:36:13 standalone.localdomain runuser[277233]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:36:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1851: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:14 standalone.localdomain runuser[277233]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:36:14 standalone.localdomain runuser[277297]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:36:15 standalone.localdomain runuser[277297]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:36:15 standalone.localdomain ceph-mon[29756]: pgmap v1851: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:16 standalone.localdomain sudo[277363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:36:16 standalone.localdomain sudo[277363]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:36:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:36:16 standalone.localdomain sudo[277363]: pam_unix(sudo:session): session closed for user root
Oct 13 14:36:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1852: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:36:16 standalone.localdomain sudo[277380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 14:36:16 standalone.localdomain sudo[277380]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:36:16 standalone.localdomain podman[277377]: 2025-10-13 14:36:16.280071765 +0000 UTC m=+0.091971738 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-dhcp-agent-container, build-date=2025-07-21T16:28:54, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, container_name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:36:16 standalone.localdomain ceph-mon[29756]: pgmap v1852: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:16 standalone.localdomain podman[277377]: 2025-10-13 14:36:16.343355717 +0000 UTC m=+0.155255690 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_id=tripleo_step4, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T16:28:54, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, release=1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, container_name=neutron_dhcp)
Oct 13 14:36:16 standalone.localdomain podman[277379]: 2025-10-13 14:36:16.387628868 +0000 UTC m=+0.197172631 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T16:03:34, io.openshift.expose-services=, batch=17.1_20250721.1, 
vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, container_name=neutron_sriov_agent, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-sriov-agent-container, tcib_managed=true, name=rhosp17/openstack-neutron-sriov-agent, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:36:16 standalone.localdomain systemd[1]: tmp-crun.YPA31p.mount: Deactivated successfully.
Oct 13 14:36:16 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:36:16 standalone.localdomain podman[277379]: 2025-10-13 14:36:16.458872423 +0000 UTC m=+0.268416015 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-sriov-agent-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=neutron_sriov_agent, build-date=2025-07-21T16:03:34, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vcs-type=git, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-sriov-agent, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']})
Oct 13 14:36:16 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:36:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:36:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:36:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:36:17 standalone.localdomain podman[277616]: 2025-10-13 14:36:17.087977677 +0000 UTC m=+0.144082470 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=nova_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, tcib_managed=true, batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9)
Oct 13 14:36:17 standalone.localdomain podman[277617]: 2025-10-13 14:36:17.03795846 +0000 UTC m=+0.089165853 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, name=rhosp17/openstack-mariadb, container_name=clustercheck, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, 
config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:36:17 standalone.localdomain podman[277616]: 2025-10-13 14:36:17.118045945 +0000 UTC m=+0.174150748 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.33.12, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, release=1, tcib_managed=true, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:36:17 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:36:17 standalone.localdomain podman[277617]: 2025-10-13 14:36:17.170336191 +0000 UTC m=+0.221543544 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step2, container_name=clustercheck, version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, 
build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:36:17 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:36:17 standalone.localdomain podman[277615]: 2025-10-13 14:36:17.219932895 +0000 UTC m=+0.276774920 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-iscsid-container, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 
iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=)
Oct 13 14:36:17 standalone.localdomain podman[277615]: 2025-10-13 14:36:17.230810697 +0000 UTC m=+0.287652742 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, container_name=iscsid)
Oct 13 14:36:17 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:36:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:36:17 standalone.localdomain systemd[1]: tmp-crun.PExtMQ.mount: Deactivated successfully.
Oct 13 14:36:17 standalone.localdomain podman[277777]: 2025-10-13 14:36:17.435535797 +0000 UTC m=+0.095111895 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, ceph=True, GIT_CLEAN=True, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc.)
Oct 13 14:36:17 standalone.localdomain podman[277777]: 2025-10-13 14:36:17.567828355 +0000 UTC m=+0.227404513 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, name=rhceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.component=rhceph-container, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, release=553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git)
Oct 13 14:36:18 standalone.localdomain sudo[277380]: pam_unix(sudo:session): session closed for user root
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:36:18 standalone.localdomain sudo[277928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:36:18 standalone.localdomain sudo[277928]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:36:18 standalone.localdomain sudo[277928]: pam_unix(sudo:session): session closed for user root
Oct 13 14:36:18 standalone.localdomain sudo[277943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:36:18 standalone.localdomain sudo[277943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:36:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1853: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3824528611' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3824528611' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:36:18 standalone.localdomain sudo[277943]: pam_unix(sudo:session): session closed for user root
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:36:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:36:18 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev b65babca-74e4-4a31-a5ce-9c5a0d890dfd (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:36:18 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev b65babca-74e4-4a31-a5ce-9c5a0d890dfd (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:36:18 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event b65babca-74e4-4a31-a5ce-9c5a0d890dfd (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:36:18 standalone.localdomain sudo[277997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:36:18 standalone.localdomain sudo[277997]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:36:18 standalone.localdomain sudo[277997]: pam_unix(sudo:session): session closed for user root
Oct 13 14:36:19 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:36:19 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:36:19 standalone.localdomain ceph-mon[29756]: pgmap v1853: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3824528611' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:36:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3824528611' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:36:19 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:36:19 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:36:19 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:36:19 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:36:19 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:36:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:36:19 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:36:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1854: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:20 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:36:20 standalone.localdomain ceph-mon[29756]: pgmap v1854: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1855: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:36:23
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'manila_data', 'backups', '.mgr', 'images', 'volumes', 'vms']
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:36:23 standalone.localdomain ceph-mon[29756]: pgmap v1855: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:36:24 standalone.localdomain ceph-mgr[29999]: client.0 ms_handle_reset on v2:172.18.0.100:6800/1677275897
Oct 13 14:36:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1856: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:25 standalone.localdomain ceph-mon[29756]: pgmap v1856: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:25 standalone.localdomain runuser[278127]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:36:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1857: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:26 standalone.localdomain runuser[278127]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:36:26 standalone.localdomain runuser[278237]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:36:27 standalone.localdomain runuser[278237]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:36:27 standalone.localdomain runuser[278373]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:36:27 standalone.localdomain ceph-mon[29756]: pgmap v1857: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:36:27 standalone.localdomain runuser[278373]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1858: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:28 standalone.localdomain ceph-mon[29756]: pgmap v1858: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:36:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:36:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1859: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:31 standalone.localdomain ceph-mon[29756]: pgmap v1859: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1860: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:36:33 standalone.localdomain ceph-mon[29756]: pgmap v1860: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:36:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:36:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:36:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:36:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:36:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:36:33 standalone.localdomain podman[278636]: 2025-10-13 14:36:33.832033053 +0000 UTC m=+0.088242635 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api, container_name=heat_api, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-heat-api-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, version=17.1.9, vendor=Red Hat, Inc.)
Oct 13 14:36:33 standalone.localdomain podman[278636]: 2025-10-13 14:36:33.858870512 +0000 UTC m=+0.115080094 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=heat_api, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, architecture=x86_64)
Oct 13 14:36:33 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:36:33 standalone.localdomain podman[278626]: 2025-10-13 14:36:33.961838795 +0000 UTC m=+0.215749576 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, container_name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:56:26, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, architecture=x86_64)
Oct 13 14:36:34 standalone.localdomain podman[278622]: 2025-10-13 14:36:34.025744986 +0000 UTC m=+0.292819799 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, vcs-type=git, com.redhat.component=openstack-heat-api-cfn-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api_cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, build-date=2025-07-21T14:49:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, release=1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible)
Oct 13 14:36:34 standalone.localdomain podman[278622]: 2025-10-13 14:36:34.067885923 +0000 UTC m=+0.334960746 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=heat_api_cfn, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T14:49:55, release=1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1)
Oct 13 14:36:34 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:36:34 standalone.localdomain podman[278626]: 2025-10-13 14:36:34.102453408 +0000 UTC m=+0.356364259 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cron, io.openshift.expose-services=, build-date=2025-07-21T15:56:26, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, release=1, name=rhosp17/openstack-heat-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:36:34 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:36:34 standalone.localdomain podman[278623]: 2025-10-13 14:36:33.934831632 +0000 UTC m=+0.200525903 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-heat-engine-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=heat_engine, io.openshift.expose-services=, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, build-date=2025-07-21T15:44:11)
Oct 13 14:36:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1861: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:34 standalone.localdomain podman[278637]: 2025-10-13 14:36:34.263645739 +0000 UTC m=+0.505377138 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, release=1)
Oct 13 14:36:34 standalone.localdomain podman[278624]: 2025-10-13 14:36:34.223612457 +0000 UTC m=+0.481912972 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, build-date=2025-07-21T12:58:43, com.redhat.component=openstack-memcached-container, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, container_name=memcached, name=rhosp17/openstack-memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, config_id=tripleo_step1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, 
vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.33.12)
Oct 13 14:36:34 standalone.localdomain podman[278623]: 2025-10-13 14:36:34.278752859 +0000 UTC m=+0.544447140 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, release=1, distribution-scope=public, version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-heat-engine-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, name=rhosp17/openstack-heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, build-date=2025-07-21T15:44:11)
Oct 13 14:36:34 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:36:34 standalone.localdomain podman[278637]: 2025-10-13 14:36:34.301135743 +0000 UTC m=+0.542867112 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, release=1, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52)
Oct 13 14:36:34 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:36:34 standalone.localdomain podman[278624]: 2025-10-13 14:36:34.354800812 +0000 UTC m=+0.613101327 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, tcib_managed=true, build-date=2025-07-21T12:58:43, version=17.1.9, config_id=tripleo_step1, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.component=openstack-memcached-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-memcached, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, description=Red Hat OpenStack Platform 17.1 memcached, 
io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=memcached)
Oct 13 14:36:34 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:36:35 standalone.localdomain ceph-mon[29756]: pgmap v1861: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1862: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:37 standalone.localdomain ceph-mon[29756]: pgmap v1862: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:36:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1863: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:38 standalone.localdomain ceph-mon[29756]: pgmap v1863: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:38 standalone.localdomain runuser[279002]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:36:39 standalone.localdomain runuser[279002]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:36:39 standalone.localdomain runuser[279071]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:36:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:36:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:36:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:36:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:36:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:36:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:36:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:36:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:36:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:36:39 standalone.localdomain podman[279117]: 2025-10-13 14:36:39.830186201 +0000 UTC m=+0.095475985 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, vendor=Red Hat, Inc., container_name=swift_object_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, 
summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-object-container, distribution-scope=public, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:36:39 standalone.localdomain podman[279152]: 2025-10-13 14:36:39.84457765 +0000 UTC m=+0.083591533 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, distribution-scope=public)
Oct 13 14:36:39 standalone.localdomain podman[279152]: 2025-10-13 14:36:39.880744054 +0000 UTC m=+0.119757957 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, version=17.1.9, config_id=tripleo_step4, container_name=ovn_controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 14:36:39 standalone.localdomain podman[279152]: unhealthy
Oct 13 14:36:39 standalone.localdomain systemd[1]: tmp-crun.SY4qiR.mount: Deactivated successfully.
Oct 13 14:36:39 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:36:39 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:36:39 standalone.localdomain podman[279121]: 2025-10-13 14:36:39.903509499 +0000 UTC m=+0.159259743 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, release=1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack 
osp-17.1, architecture=x86_64, name=rhosp17/openstack-swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T14:48:37, container_name=swift_proxy, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:36:39 standalone.localdomain podman[279116]: 2025-10-13 14:36:39.93794596 +0000 UTC m=+0.201197572 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, container_name=glance_api_cron, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, release=1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, 
com.redhat.component=openstack-glance-api-container, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:36:39 standalone.localdomain podman[279116]: 2025-10-13 14:36:39.947808492 +0000 UTC m=+0.211060104 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_cron, distribution-scope=public, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:36:39 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:36:39 standalone.localdomain podman[279118]: 2025-10-13 14:36:39.988040559 +0000 UTC m=+0.241765480 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, release=1, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-07-21T14:48:37)
Oct 13 14:36:40 standalone.localdomain podman[279131]: 2025-10-13 14:36:40.003317346 +0000 UTC m=+0.253939093 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=glance_api_internal, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:36:40 standalone.localdomain podman[279117]: 2025-10-13 14:36:40.051055013 +0000 UTC m=+0.316344827 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.component=openstack-swift-object-container, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, architecture=x86_64, summary=Red 
Hat OpenStack Platform 17.1 swift-object, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, container_name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:36:40 standalone.localdomain podman[279153]: 2025-10-13 14:36:39.965225103 +0000 UTC m=+0.203274445 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, name=rhosp17/openstack-swift-account, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, container_name=swift_account_server, tcib_managed=true, release=1)
Oct 13 14:36:40 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:36:40 standalone.localdomain podman[279141]: 2025-10-13 14:36:40.039179411 +0000 UTC m=+0.282899537 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team)
Oct 13 14:36:40 standalone.localdomain podman[279132]: 2025-10-13 14:36:40.060673307 +0000 UTC m=+0.298238875 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, name=rhosp17/openstack-swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T15:54:32, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:36:40 standalone.localdomain podman[279141]: 2025-10-13 14:36:40.116662716 +0000 UTC m=+0.360382852 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, release=1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, vcs-type=git, version=17.1.9)
Oct 13 14:36:40 standalone.localdomain podman[279141]: unhealthy
Oct 13 14:36:40 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:36:40 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:36:40 standalone.localdomain podman[279153]: 2025-10-13 14:36:40.161790663 +0000 UTC m=+0.399839985 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, name=rhosp17/openstack-swift-account, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.expose-services=, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-swift-account-container, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=swift_account_server, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:36:40 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:36:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1864: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:40 standalone.localdomain runuser[279071]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:36:40 standalone.localdomain runuser[279330]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:36:40 standalone.localdomain podman[279121]: 2025-10-13 14:36:40.217513564 +0000 UTC m=+0.473263788 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-proxy-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-proxy-server, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, version=17.1.9, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77)
Oct 13 14:36:40 standalone.localdomain podman[279131]: 2025-10-13 14:36:40.225912741 +0000 UTC m=+0.476534488 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, managed_by=tripleo_ansible, release=1, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-glance-api-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-glance-api, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T13:58:20, container_name=glance_api_internal, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:36:40 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:36:40 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:36:40 standalone.localdomain podman[279132]: 2025-10-13 14:36:40.268867743 +0000 UTC m=+0.506433351 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, architecture=x86_64, name=rhosp17/openstack-swift-container, vcs-type=git, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4)
Oct 13 14:36:40 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:36:40 standalone.localdomain podman[279118]: 2025-10-13 14:36:40.370933408 +0000 UTC m=+0.624658349 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, release=1, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, version=17.1.9, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container)
Oct 13 14:36:40 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:36:40 standalone.localdomain runuser[279330]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:36:41 standalone.localdomain ceph-mon[29756]: pgmap v1864: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1865: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:36:43 standalone.localdomain ceph-mon[29756]: pgmap v1865: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1866: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:36:44 standalone.localdomain podman[279483]: 2025-10-13 14:36:44.809382645 +0000 UTC m=+0.079434076 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, build-date=2025-07-21T13:27:18, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-keystone-container, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, name=rhosp17/openstack-keystone, container_name=keystone_cron, architecture=x86_64, batch=17.1_20250721.1)
Oct 13 14:36:44 standalone.localdomain podman[279483]: 2025-10-13 14:36:44.84688101 +0000 UTC m=+0.116932401 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, managed_by=tripleo_ansible, 
name=rhosp17/openstack-keystone, container_name=keystone_cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, build-date=2025-07-21T13:27:18, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3)
Oct 13 14:36:44 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:36:45 standalone.localdomain ceph-mon[29756]: pgmap v1866: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1867: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:36:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:36:46 standalone.localdomain podman[279559]: 2025-10-13 14:36:46.793977737 +0000 UTC m=+0.062253402 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, distribution-scope=public, container_name=neutron_dhcp, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T16:28:54, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-dhcp-agent-container, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-dhcp-agent, io.openshift.expose-services=)
Oct 13 14:36:46 standalone.localdomain podman[279560]: 2025-10-13 14:36:46.853957677 +0000 UTC m=+0.118745115 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, distribution-scope=public, version=17.1.9, build-date=2025-07-21T16:03:34, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, 
com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-sriov-agent-container, io.buildah.version=1.33.12, container_name=neutron_sriov_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1)
Oct 13 14:36:46 standalone.localdomain podman[279560]: 2025-10-13 14:36:46.898311731 +0000 UTC m=+0.163099219 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-neutron-sriov-agent-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, 
vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, architecture=x86_64, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, build-date=2025-07-21T16:03:34, vendor=Red Hat, Inc.)
Oct 13 14:36:46 standalone.localdomain podman[279559]: 2025-10-13 14:36:46.90778493 +0000 UTC m=+0.176060605 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:54, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, container_name=neutron_dhcp, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-dhcp-agent-container, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., release=1, version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1)
Oct 13 14:36:46 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:36:46 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:36:47 standalone.localdomain ceph-mon[29756]: pgmap v1867: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:36:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:36:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:36:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:36:47 standalone.localdomain podman[279737]: 2025-10-13 14:36:47.811247529 +0000 UTC m=+0.076929049 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, tcib_managed=true, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, container_name=iscsid, release=1, com.redhat.component=openstack-iscsid-container)
Oct 13 14:36:47 standalone.localdomain podman[279738]: 2025-10-13 14:36:47.873477498 +0000 UTC m=+0.131768643 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, build-date=2025-07-21T14:48:37, version=17.1.9, config_id=tripleo_step5, container_name=nova_compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute)
Oct 13 14:36:47 standalone.localdomain podman[279737]: 2025-10-13 14:36:47.898117601 +0000 UTC m=+0.163799091 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.33.12, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.expose-services=, build-date=2025-07-21T13:27:15, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, architecture=x86_64, container_name=iscsid, distribution-scope=public, version=17.1.9, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:36:47 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:36:47 standalone.localdomain podman[279744]: 2025-10-13 14:36:47.924581568 +0000 UTC m=+0.181542872 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=clustercheck, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, config_id=tripleo_step2, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, release=1, vcs-type=git, com.redhat.component=openstack-mariadb-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 14:36:47 standalone.localdomain podman[279738]: 2025-10-13 14:36:47.949916582 +0000 UTC m=+0.208207727 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:36:47 standalone.localdomain podman[279744]: 2025-10-13 14:36:47.968892521 +0000 UTC m=+0.225853825 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:45, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, distribution-scope=public, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-mariadb-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=clustercheck, name=rhosp17/openstack-mariadb, version=17.1.9, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, 
vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:36:47 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:36:48 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:36:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1868: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:48 standalone.localdomain ceph-mon[29756]: pgmap v1868: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:48 standalone.localdomain systemd[1]: tmp-crun.Lcp5sy.mount: Deactivated successfully.
Oct 13 14:36:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1869: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:51 standalone.localdomain ceph-mon[29756]: pgmap v1869: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:51 standalone.localdomain runuser[279897]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:36:52 standalone.localdomain runuser[279897]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:36:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1870: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:52 standalone.localdomain runuser[280019]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:36:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:36:52 standalone.localdomain runuser[280019]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:36:52 standalone.localdomain runuser[280073]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:36:53 standalone.localdomain ceph-mon[29756]: pgmap v1870: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:53 standalone.localdomain runuser[280073]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:36:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1871: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:55 standalone.localdomain ceph-mon[29756]: pgmap v1871: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1872: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:57 standalone.localdomain ceph-mon[29756]: pgmap v1872: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:36:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1873: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:36:58 standalone.localdomain ceph-mon[29756]: pgmap v1873: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1874: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:01 standalone.localdomain ceph-mon[29756]: pgmap v1874: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1875: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:37:03 standalone.localdomain ceph-mon[29756]: pgmap v1875: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:04 standalone.localdomain runuser[280519]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:37:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1876: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:04 standalone.localdomain runuser[280519]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:37:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:37:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:37:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:37:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:37:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:37:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:37:04 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:37:04 standalone.localdomain recover_tripleo_nova_virtqemud[280611]: 93291
Oct 13 14:37:04 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:37:04 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:37:04 standalone.localdomain runuser[280653]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:37:04 standalone.localdomain podman[280592]: 2025-10-13 14:37:04.845521865 +0000 UTC m=+0.104126511 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, build-date=2025-07-21T12:58:43, vendor=Red Hat, Inc., name=rhosp17/openstack-memcached, vcs-type=git, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=memcached, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-memcached-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1)
Oct 13 14:37:04 standalone.localdomain podman[280592]: 2025-10-13 14:37:04.890887909 +0000 UTC m=+0.149492555 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vcs-type=git, container_name=memcached, build-date=2025-07-21T12:58:43, distribution-scope=public, com.redhat.component=openstack-memcached-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 memcached, release=1, batch=17.1_20250721.1)
Oct 13 14:37:04 standalone.localdomain podman[280594]: 2025-10-13 14:37:04.902169594 +0000 UTC m=+0.151074443 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, container_name=heat_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, tcib_managed=true, build-date=2025-07-21T15:56:26, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, config_id=tripleo_step4, com.redhat.component=openstack-heat-api-container, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64)
Oct 13 14:37:04 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:37:04 standalone.localdomain podman[280593]: 2025-10-13 14:37:04.875756617 +0000 UTC m=+0.127787611 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, container_name=heat_api_cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public)
Oct 13 14:37:04 standalone.localdomain podman[280595]: 2025-10-13 14:37:04.936663966 +0000 UTC m=+0.184472082 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-07-21T13:07:52, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, release=1, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1)
Oct 13 14:37:04 standalone.localdomain podman[280594]: 2025-10-13 14:37:04.955839272 +0000 UTC m=+0.204744131 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, container_name=heat_api, summary=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, tcib_managed=true, build-date=2025-07-21T15:56:26, distribution-scope=public, name=rhosp17/openstack-heat-api, com.redhat.component=openstack-heat-api-container, release=1)
Oct 13 14:37:04 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:37:04 standalone.localdomain podman[280595]: 2025-10-13 14:37:04.97477547 +0000 UTC m=+0.222583576 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, container_name=logrotate_crond, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c)
Oct 13 14:37:04 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:37:05 standalone.localdomain podman[280591]: 2025-10-13 14:37:05.026296922 +0000 UTC m=+0.283857895 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-engine, release=1, architecture=x86_64, vcs-type=git, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_engine, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, build-date=2025-07-21T15:44:11, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4, com.redhat.component=openstack-heat-engine-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:37:05 standalone.localdomain podman[280593]: 2025-10-13 14:37:05.060687072 +0000 UTC m=+0.312718146 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, batch=17.1_20250721.1, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_id=tripleo_step4, name=rhosp17/openstack-heat-api, tcib_managed=true, version=17.1.9, build-date=2025-07-21T15:56:26, container_name=heat_api_cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-container, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1)
Oct 13 14:37:05 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:37:05 standalone.localdomain podman[280590]: 2025-10-13 14:37:05.027796599 +0000 UTC m=+0.286550699 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, container_name=heat_api_cfn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, architecture=x86_64, build-date=2025-07-21T14:49:55, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-cfn-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:37:05 standalone.localdomain podman[280590]: 2025-10-13 14:37:05.107188802 +0000 UTC m=+0.365942872 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, version=17.1.9, maintainer=OpenStack TripleO Team, container_name=heat_api_cfn, com.redhat.component=openstack-heat-api-cfn-container, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T14:49:55, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:37:05 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:37:05 standalone.localdomain podman[280591]: 2025-10-13 14:37:05.130423031 +0000 UTC m=+0.387984044 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:11, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-engine-container, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-heat-engine, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_engine, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:37:05 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:37:05 standalone.localdomain ceph-mon[29756]: pgmap v1876: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:05 standalone.localdomain runuser[280653]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:37:05 standalone.localdomain runuser[280791]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:37:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1877: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:06 standalone.localdomain runuser[280791]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:37:06 standalone.localdomain podman[280859]: 2025-10-13 14:37:06.495255584 +0000 UTC m=+0.073078903 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, version=17.1.9, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-mariadb-container, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, release=1)
Oct 13 14:37:06 standalone.localdomain podman[280859]: 2025-10-13 14:37:06.527879089 +0000 UTC m=+0.105702358 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, build-date=2025-07-21T12:58:45, architecture=x86_64, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:37:06 standalone.localdomain podman[280894]: 2025-10-13 14:37:06.746620917 +0000 UTC m=+0.087083670 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T13:08:11, description=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, architecture=x86_64, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, com.redhat.component=openstack-haproxy-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 haproxy)
Oct 13 14:37:06 standalone.localdomain podman[280894]: 2025-10-13 14:37:06.780900553 +0000 UTC m=+0.121363276 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, vcs-type=git, name=rhosp17/openstack-haproxy, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-haproxy-container, build-date=2025-07-21T13:08:11, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:37:07 standalone.localdomain ceph-mon[29756]: pgmap v1877: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:07 standalone.localdomain systemd[1]: tmp-crun.T4OPQw.mount: Deactivated successfully.
Oct 13 14:37:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:37:07 standalone.localdomain podman[281049]: 2025-10-13 14:37:07.467107064 +0000 UTC m=+0.097689397 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, batch=17.1_20250721.1, com.redhat.component=openstack-rabbitmq-container, summary=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, distribution-scope=public, release=1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-rabbitmq, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git)
Oct 13 14:37:07 standalone.localdomain systemd-journald[48591]: Data hash table of /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal has a fill level at 75.0 (53723 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation.
Oct 13 14:37:07 standalone.localdomain systemd-journald[48591]: /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 13 14:37:07 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 14:37:07 standalone.localdomain podman[281049]: 2025-10-13 14:37:07.473791061 +0000 UTC m=+0.104373334 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-rabbitmq, build-date=2025-07-21T13:08:05, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-rabbitmq-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1)
Oct 13 14:37:07 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 14:37:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1878: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:08 standalone.localdomain ceph-mon[29756]: pgmap v1878: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 14:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 3600.1 total, 600.0 interval
                                                        Cumulative writes: 6951 writes, 30K keys, 6951 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                                        Cumulative WAL: 6951 writes, 1290 syncs, 5.39 writes per sync, written: 0.03 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                        Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 13 14:37:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1879: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:37:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:37:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:37:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:37:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:37:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:37:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:37:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:37:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:37:10 standalone.localdomain systemd[1]: tmp-crun.EEX730.mount: Deactivated successfully.
Oct 13 14:37:10 standalone.localdomain podman[281203]: 2025-10-13 14:37:10.883401442 +0000 UTC m=+0.125713561 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, container_name=nova_migration_target, io.openshift.expose-services=, 
io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public)
Oct 13 14:37:10 standalone.localdomain podman[281246]: 2025-10-13 14:37:10.893697649 +0000 UTC m=+0.098816863 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-account, 
managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:37:10 standalone.localdomain podman[281228]: 2025-10-13 14:37:10.915860542 +0000 UTC m=+0.132972855 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T16:28:53, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:37:10 standalone.localdomain podman[281228]: 2025-10-13 14:37:10.959747942 +0000 UTC m=+0.176860215 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, 
architecture=x86_64, container_name=ovn_metadata_agent, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:37:10 standalone.localdomain podman[281197]: 2025-10-13 14:37:10.980250304 +0000 UTC m=+0.222289815 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 
swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, managed_by=tripleo_ansible, config_id=tripleo_step4)
Oct 13 14:37:10 standalone.localdomain podman[281228]: unhealthy
Oct 13 14:37:10 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:37:10 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:37:11 standalone.localdomain podman[281215]: 2025-10-13 14:37:11.047152102 +0000 UTC m=+0.266389041 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, com.redhat.component=openstack-glance-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, version=17.1.9, container_name=glance_api_internal, summary=Red Hat OpenStack Platform 17.1 glance-api, release=1)
Oct 13 14:37:11 standalone.localdomain podman[281196]: 2025-10-13 14:37:10.902090908 +0000 UTC m=+0.151770584 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, build-date=2025-07-21T13:58:20, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_cron, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp 
openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, com.redhat.component=openstack-glance-api-container, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12)
Oct 13 14:37:11 standalone.localdomain podman[281204]: 2025-10-13 14:37:11.07144032 +0000 UTC m=+0.302368659 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container, release=1, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, config_id=tripleo_step4, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.tags=rhosp 
osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, distribution-scope=public, container_name=swift_proxy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, name=rhosp17/openstack-swift-proxy-server, io.buildah.version=1.33.12)
Oct 13 14:37:11 standalone.localdomain podman[281196]: 2025-10-13 14:37:11.088817905 +0000 UTC m=+0.338497581 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, summary=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-glance-api-container, container_name=glance_api_cron, build-date=2025-07-21T13:58:20, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9)
Oct 13 14:37:11 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:37:11 standalone.localdomain podman[281236]: 2025-10-13 14:37:11.118448568 +0000 UTC m=+0.327603296 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, container_name=ovn_controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Oct 13 14:37:11 standalone.localdomain podman[281236]: 2025-10-13 14:37:11.160833122 +0000 UTC m=+0.369987890 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, container_name=ovn_controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, release=1)
Oct 13 14:37:11 standalone.localdomain podman[281236]: unhealthy
Oct 13 14:37:11 standalone.localdomain podman[281216]: 2025-10-13 14:37:11.171993326 +0000 UTC m=+0.397991433 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-container, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4)
Oct 13 14:37:11 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:37:11 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:37:11 standalone.localdomain podman[281197]: 2025-10-13 14:37:11.191408254 +0000 UTC m=+0.433447775 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, container_name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, 
io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team)
Oct 13 14:37:11 standalone.localdomain podman[281246]: 2025-10-13 14:37:11.197208362 +0000 UTC m=+0.402327656 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, com.redhat.component=openstack-swift-account-container, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, vcs-type=git, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T16:11:22, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:37:11 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:37:11 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:37:11 standalone.localdomain ceph-mon[29756]: pgmap v1879: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:11 standalone.localdomain podman[281203]: 2025-10-13 14:37:11.266873677 +0000 UTC m=+0.509185806 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, distribution-scope=public, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Oct 13 14:37:11 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:37:11 standalone.localdomain podman[281204]: 2025-10-13 14:37:11.317198736 +0000 UTC m=+0.548127075 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, com.redhat.component=openstack-swift-proxy-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, release=1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77)
Oct 13 14:37:11 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:37:11 standalone.localdomain podman[281216]: 2025-10-13 14:37:11.340964207 +0000 UTC m=+0.566962354 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, name=rhosp17/openstack-swift-container, tcib_managed=true, com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., release=1, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, build-date=2025-07-21T15:54:32, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 14:37:11 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:37:11 standalone.localdomain podman[281215]: 2025-10-13 14:37:11.371753875 +0000 UTC m=+0.590990804 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, container_name=glance_api_internal, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-glance-api-container)
Oct 13 14:37:11 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:37:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1880: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:37:13 standalone.localdomain ceph-mon[29756]: pgmap v1880: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1881: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:15 standalone.localdomain ceph-mon[29756]: pgmap v1881: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:15 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Check health
Oct 13 14:37:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:37:15 standalone.localdomain podman[281500]: 2025-10-13 14:37:15.811681226 +0000 UTC m=+0.075653590 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, release=1, distribution-scope=public, container_name=keystone_cron, description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, 
architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, com.redhat.component=openstack-keystone-container, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone)
Oct 13 14:37:15 standalone.localdomain podman[281500]: 2025-10-13 14:37:15.847008644 +0000 UTC m=+0.110980978 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, name=rhosp17/openstack-keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=keystone_cron, description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, build-date=2025-07-21T13:27:18)
Oct 13 14:37:15 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:37:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1882: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:16 standalone.localdomain runuser[281572]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:37:17 standalone.localdomain ceph-mon[29756]: pgmap v1882: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:17 standalone.localdomain runuser[281572]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:37:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:37:17 standalone.localdomain runuser[281722]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:37:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:37:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:37:17 standalone.localdomain systemd[1]: tmp-crun.67xU1a.mount: Deactivated successfully.
Oct 13 14:37:17 standalone.localdomain podman[281810]: 2025-10-13 14:37:17.84404643 +0000 UTC m=+0.108766980 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, build-date=2025-07-21T16:03:34, name=rhosp17/openstack-neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=neutron_sriov_agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969)
Oct 13 14:37:17 standalone.localdomain podman[281810]: 2025-10-13 14:37:17.888789817 +0000 UTC m=+0.153510377 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, release=1, config_id=tripleo_step4, name=rhosp17/openstack-neutron-sriov-agent, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.openshift.expose-services=, build-date=2025-07-21T16:03:34, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:37:17 standalone.localdomain podman[281809]: 2025-10-13 14:37:17.901656413 +0000 UTC m=+0.168970893 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, name=rhosp17/openstack-neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, version=17.1.9, container_name=neutron_dhcp, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', 
'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T16:28:54, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-dhcp-agent-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-type=git)
Oct 13 14:37:17 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:37:17 standalone.localdomain podman[281809]: 2025-10-13 14:37:17.927759837 +0000 UTC m=+0.195074317 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., config_id=tripleo_step4, 
container_name=neutron_dhcp, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, tcib_managed=true, version=17.1.9, architecture=x86_64, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T16:28:54, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:37:17 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:37:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:37:18 standalone.localdomain podman[281861]: 2025-10-13 14:37:18.005249832 +0000 UTC m=+0.056215332 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.33.12, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step3, container_name=iscsid, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Oct 13 14:37:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:37:18 standalone.localdomain podman[281861]: 2025-10-13 14:37:18.03994699 +0000 UTC m=+0.090912480 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, container_name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:15, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Oct 13 14:37:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:37:18 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:37:18 standalone.localdomain podman[281880]: 2025-10-13 14:37:18.080954612 +0000 UTC m=+0.054397175 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, release=1, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, name=rhosp17/openstack-mariadb, io.openshift.expose-services=, container_name=clustercheck, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, distribution-scope=public, config_id=tripleo_step2, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:37:18 standalone.localdomain podman[281892]: 2025-10-13 14:37:18.122855422 +0000 UTC m=+0.053590740 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, io.openshift.expose-services=, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:37:18 standalone.localdomain podman[281880]: 2025-10-13 14:37:18.158038816 +0000 UTC m=+0.131481369 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=clustercheck, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, release=1, config_id=tripleo_step2, io.openshift.expose-services=, 
io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9)
Oct 13 14:37:18 standalone.localdomain runuser[281722]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:37:18 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:37:18 standalone.localdomain podman[281892]: 2025-10-13 14:37:18.173764109 +0000 UTC m=+0.104499447 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, container_name=nova_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.33.12)
Oct 13 14:37:18 standalone.localdomain runuser[281952]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:37:18 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:37:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1883: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:18 standalone.localdomain ceph-mon[29756]: pgmap v1883: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:37:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/912592943' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:37:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:37:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/912592943' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:37:18 standalone.localdomain runuser[281952]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:37:18 standalone.localdomain sudo[282057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:37:18 standalone.localdomain sudo[282057]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:37:18 standalone.localdomain sudo[282057]: pam_unix(sudo:session): session closed for user root
Oct 13 14:37:18 standalone.localdomain sudo[282073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:37:18 standalone.localdomain sudo[282073]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:37:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/912592943' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:37:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/912592943' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:37:19 standalone.localdomain sudo[282073]: pam_unix(sudo:session): session closed for user root
Oct 13 14:37:19 standalone.localdomain sudo[282121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:37:19 standalone.localdomain sudo[282121]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:37:19 standalone.localdomain sudo[282121]: pam_unix(sudo:session): session closed for user root
Oct 13 14:37:19 standalone.localdomain sudo[282136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Oct 13 14:37:19 standalone.localdomain sudo[282136]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:37:20 standalone.localdomain sudo[282136]: pam_unix(sudo:session): session closed for user root
Oct 13 14:37:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:37:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:37:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:37:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:37:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:37:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:37:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:37:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:37:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:37:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:37:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:37:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:37:20 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 7b565d57-7ef0-46ae-aed4-14bd8eaf2c7e (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:37:20 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 7b565d57-7ef0-46ae-aed4-14bd8eaf2c7e (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:37:20 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 7b565d57-7ef0-46ae-aed4-14bd8eaf2c7e (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:37:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1884: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:20 standalone.localdomain sudo[282179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:37:20 standalone.localdomain sudo[282179]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:37:20 standalone.localdomain sudo[282179]: pam_unix(sudo:session): session closed for user root
Oct 13 14:37:21 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:37:21 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:37:21 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:37:21 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:37:21 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:37:21 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:37:21 standalone.localdomain ceph-mon[29756]: pgmap v1884: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1885: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:37:23
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'volumes', 'manila_data', 'vms', 'manila_metadata', 'images', '.mgr']
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:37:23 standalone.localdomain ceph-mon[29756]: pgmap v1885: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:37:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1886: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:24 standalone.localdomain ceph-mon[29756]: pgmap v1886: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:24 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:37:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:37:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:37:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:37:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1887: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:26 standalone.localdomain ceph-mon[29756]: pgmap v1887: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1888: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:28 standalone.localdomain ceph-mon[29756]: pgmap v1888: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:37:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:37:29 standalone.localdomain runuser[282530]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:37:30 standalone.localdomain runuser[282530]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:37:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1889: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:30 standalone.localdomain runuser[282599]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:37:30 standalone.localdomain runuser[282599]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:37:30 standalone.localdomain runuser[282653]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:37:31 standalone.localdomain ceph-mon[29756]: pgmap v1889: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:31 standalone.localdomain runuser[282653]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:37:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1890: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:37:33 standalone.localdomain ceph-mon[29756]: pgmap v1890: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1891: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:35 standalone.localdomain ceph-mon[29756]: pgmap v1891: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:37:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:37:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:37:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:37:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:37:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:37:35 standalone.localdomain podman[282815]: 2025-10-13 14:37:35.81764108 +0000 UTC m=+0.068735827 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, config_id=tripleo_step4, release=1, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-container, container_name=heat_api_cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64)
Oct 13 14:37:35 standalone.localdomain podman[282815]: 2025-10-13 14:37:35.826914035 +0000 UTC m=+0.078008782 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, release=1, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:37:35 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:37:35 standalone.localdomain podman[282807]: 2025-10-13 14:37:35.872204769 +0000 UTC m=+0.131802338 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, com.redhat.component=openstack-heat-engine-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:11, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, release=1, distribution-scope=public, container_name=heat_engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, vcs-type=git, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:37:35 standalone.localdomain podman[282808]: 2025-10-13 14:37:35.919449524 +0000 UTC m=+0.175101242 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, com.redhat.component=openstack-memcached-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, container_name=memcached, build-date=2025-07-21T12:58:43, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.buildah.version=1.33.12, name=rhosp17/openstack-memcached, summary=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:37:35 standalone.localdomain podman[282808]: 2025-10-13 14:37:35.942776492 +0000 UTC m=+0.198428220 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, config_id=tripleo_step1, container_name=memcached, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 memcached, release=1, build-date=2025-07-21T12:58:43, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-memcached-container, name=rhosp17/openstack-memcached, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.buildah.version=1.33.12, tcib_managed=true)
Oct 13 14:37:35 standalone.localdomain podman[282825]: 2025-10-13 14:37:35.969168185 +0000 UTC m=+0.216857088 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step4, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:37:35 standalone.localdomain podman[282807]: 2025-10-13 14:37:35.970183155 +0000 UTC m=+0.229780734 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, com.redhat.component=openstack-heat-engine-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, version=17.1.9, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, batch=17.1_20250721.1, name=rhosp17/openstack-heat-engine, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, container_name=heat_engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:37:36 standalone.localdomain podman[282825]: 2025-10-13 14:37:36.001995574 +0000 UTC m=+0.249684427 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Oct 13 14:37:36 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:37:36 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:37:36 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:37:36 standalone.localdomain podman[282806]: 2025-10-13 14:37:36.072271238 +0000 UTC m=+0.332659561 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-heat-api-cfn, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, container_name=heat_api_cfn, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, com.redhat.component=openstack-heat-api-cfn-container)
Oct 13 14:37:36 standalone.localdomain podman[282806]: 2025-10-13 14:37:36.097780084 +0000 UTC m=+0.358168397 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, build-date=2025-07-21T14:49:55, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
heat-api-cfn, release=1, container_name=heat_api_cfn, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-cfn-container, io.openshift.expose-services=, name=rhosp17/openstack-heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, distribution-scope=public)
Oct 13 14:37:36 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:37:36 standalone.localdomain podman[282820]: 2025-10-13 14:37:36.201192776 +0000 UTC m=+0.447966950 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, name=rhosp17/openstack-heat-api, release=1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:56:26, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=heat_api, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4)
Oct 13 14:37:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1892: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:36 standalone.localdomain podman[282820]: 2025-10-13 14:37:36.227735524 +0000 UTC m=+0.474509688 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-container, release=1, description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, io.openshift.expose-services=, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc.)
Oct 13 14:37:36 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:37:36 standalone.localdomain ceph-mon[29756]: pgmap v1892: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:37:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1893: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:38 standalone.localdomain ceph-mon[29756]: pgmap v1893: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1894: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:41 standalone.localdomain ceph-mon[29756]: pgmap v1894: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:37:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:37:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:37:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:37:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:37:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:37:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:37:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:37:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:37:41 standalone.localdomain systemd[1]: tmp-crun.VCGT6y.mount: Deactivated successfully.
Oct 13 14:37:41 standalone.localdomain podman[283218]: 2025-10-13 14:37:41.899329299 +0000 UTC m=+0.139021961 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, config_id=tripleo_step4, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, name=rhosp17/openstack-swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 
17.1 swift-container, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12)
Oct 13 14:37:41 standalone.localdomain podman[283208]: 2025-10-13 14:37:41.913534856 +0000 UTC m=+0.158421478 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, build-date=2025-07-21T14:48:37, container_name=swift_proxy, 
io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.expose-services=, name=rhosp17/openstack-swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:37:41 standalone.localdomain podman[283226]: 2025-10-13 14:37:41.867461547 +0000 UTC m=+0.099392960 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, 
container_name=ovn_metadata_agent, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1)
Oct 13 14:37:41 standalone.localdomain podman[283228]: 2025-10-13 14:37:41.884244604 +0000 UTC m=+0.105565071 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:37:41 standalone.localdomain podman[283200]: 2025-10-13 14:37:41.965430103 +0000 UTC m=+0.224289615 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, release=1, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, 
name=rhosp17/openstack-glance-api, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, com.redhat.component=openstack-glance-api-container, distribution-scope=public, container_name=glance_api_cron, vcs-type=git, managed_by=tripleo_ansible)
Oct 13 14:37:41 standalone.localdomain podman[283201]: 2025-10-13 14:37:41.976226575 +0000 UTC m=+0.229060042 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, distribution-scope=public, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, release=1, tcib_managed=true, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T14:56:28, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64)
Oct 13 14:37:41 standalone.localdomain podman[283200]: 2025-10-13 14:37:41.999503642 +0000 UTC m=+0.258363154 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container, distribution-scope=public, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, tcib_managed=true, version=17.1.9, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api)
Oct 13 14:37:42 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:37:42 standalone.localdomain podman[283228]: 2025-10-13 14:37:42.021744076 +0000 UTC m=+0.243064503 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, build-date=2025-07-21T13:28:44, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, vcs-type=git)
Oct 13 14:37:42 standalone.localdomain podman[283228]: unhealthy
Oct 13 14:37:42 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:37:42 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:37:42 standalone.localdomain podman[283237]: 2025-10-13 14:37:42.035550311 +0000 UTC m=+0.250865803 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-account, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, maintainer=OpenStack TripleO Team, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T16:11:22, release=1, managed_by=tripleo_ansible, vcs-type=git)
Oct 13 14:37:42 standalone.localdomain podman[283202]: 2025-10-13 14:37:42.03874579 +0000 UTC m=+0.286054487 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, version=17.1.9, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:37:42 standalone.localdomain podman[283213]: 2025-10-13 14:37:42.075120049 +0000 UTC m=+0.308765595 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, batch=17.1_20250721.1, 
com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=)
Oct 13 14:37:42 standalone.localdomain podman[283218]: 2025-10-13 14:37:42.092795554 +0000 UTC m=+0.332488256 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, release=1, com.redhat.component=openstack-swift-container-container, distribution-scope=public, 
name=rhosp17/openstack-swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server)
Oct 13 14:37:42 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:37:42 standalone.localdomain podman[283226]: 2025-10-13 14:37:42.105217716 +0000 UTC m=+0.337149119 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, architecture=x86_64, version=17.1.9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:37:42 standalone.localdomain podman[283226]: unhealthy
Oct 13 14:37:42 standalone.localdomain podman[283208]: 2025-10-13 14:37:42.120523777 +0000 UTC m=+0.365410379 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, 
distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, release=1, com.redhat.component=openstack-swift-proxy-server-container)
Oct 13 14:37:42 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:37:42 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:37:42 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:37:42 standalone.localdomain runuser[283402]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:37:42 standalone.localdomain podman[283201]: 2025-10-13 14:37:42.162786289 +0000 UTC m=+0.415619726 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, container_name=swift_object_server, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, release=1, tcib_managed=true, name=rhosp17/openstack-swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 14:37:42 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:37:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1895: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:42 standalone.localdomain podman[283237]: 2025-10-13 14:37:42.222794056 +0000 UTC m=+0.438109528 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, release=1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, com.redhat.component=openstack-swift-account-container, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:37:42 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:37:42 standalone.localdomain podman[283213]: 2025-10-13 14:37:42.269796413 +0000 UTC m=+0.503441979 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, version=17.1.9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=glance_api_internal, distribution-scope=public)
Oct 13 14:37:42 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:37:42 standalone.localdomain podman[283202]: 2025-10-13 14:37:42.380117749 +0000 UTC m=+0.627426466 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, tcib_managed=true)
Oct 13 14:37:42 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:37:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:37:42 standalone.localdomain runuser[283402]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:37:42 standalone.localdomain runuser[283535]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:37:43 standalone.localdomain ceph-mon[29756]: pgmap v1895: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:43 standalone.localdomain runuser[283535]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:37:43 standalone.localdomain runuser[283597]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:37:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1896: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:44 standalone.localdomain runuser[283597]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:37:45 standalone.localdomain ceph-mon[29756]: pgmap v1896: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1897: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:46 standalone.localdomain ceph-mon[29756]: pgmap v1897: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:37:46 standalone.localdomain systemd[1]: tmp-crun.BDdRCV.mount: Deactivated successfully.
Oct 13 14:37:46 standalone.localdomain podman[283681]: 2025-10-13 14:37:46.822683289 +0000 UTC m=+0.086354079 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:18, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, release=1, description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-keystone, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, batch=17.1_20250721.1)
Oct 13 14:37:46 standalone.localdomain podman[283681]: 2025-10-13 14:37:46.83699742 +0000 UTC m=+0.100668250 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, tcib_managed=true, container_name=keystone_cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, version=17.1.9, config_id=tripleo_step3, name=rhosp17/openstack-keystone, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:18, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', 
'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., release=1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, architecture=x86_64)
Oct 13 14:37:46 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:37:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:37:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1898: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:48 standalone.localdomain ceph-mon[29756]: pgmap v1898: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:37:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:37:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:37:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:37:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:37:48 standalone.localdomain podman[283931]: 2025-10-13 14:37:48.824813943 +0000 UTC m=+0.080381615 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, build-date=2025-07-21T12:58:45, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.component=openstack-mariadb-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=clustercheck, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-mariadb, release=1)
Oct 13 14:37:48 standalone.localdomain podman[283931]: 2025-10-13 14:37:48.869585752 +0000 UTC m=+0.125153424 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, config_id=tripleo_step2, io.openshift.expose-services=, release=1, container_name=clustercheck, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:45, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, 
vcs-type=git, version=17.1.9, name=rhosp17/openstack-mariadb, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:37:48 standalone.localdomain systemd[1]: tmp-crun.bmm1Dq.mount: Deactivated successfully.
Oct 13 14:37:48 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:37:48 standalone.localdomain podman[283924]: 2025-10-13 14:37:48.884901753 +0000 UTC m=+0.145823240 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container)
Oct 13 14:37:48 standalone.localdomain podman[283924]: 2025-10-13 14:37:48.909382207 +0000 UTC m=+0.170303664 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute)
Oct 13 14:37:48 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:37:48 standalone.localdomain podman[283923]: 2025-10-13 14:37:48.918397174 +0000 UTC m=+0.179026112 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, distribution-scope=public, com.redhat.component=openstack-iscsid-container, release=1, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, 
batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step3, vendor=Red Hat, Inc., container_name=iscsid, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64)
Oct 13 14:37:48 standalone.localdomain podman[283922]: 2025-10-13 14:37:48.982357243 +0000 UTC m=+0.249166772 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T16:28:54, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-dhcp-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, container_name=neutron_dhcp, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-dhcp-agent, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:37:49 standalone.localdomain podman[283922]: 2025-10-13 14:37:49.025663276 +0000 UTC m=+0.292472775 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, version=17.1.9, config_id=tripleo_step4, batch=17.1_20250721.1, managed_by=tripleo_ansible, container_name=neutron_dhcp, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, distribution-scope=public, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, release=1, com.redhat.component=openstack-neutron-dhcp-agent-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-dhcp-agent, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64)
Oct 13 14:37:49 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:37:49 standalone.localdomain podman[283923]: 2025-10-13 14:37:49.104671559 +0000 UTC m=+0.365300537 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, 
build-date=2025-07-21T13:27:15, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Oct 13 14:37:49 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:37:49 standalone.localdomain podman[283927]: 2025-10-13 14:37:49.029739912 +0000 UTC m=+0.281691893 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-neutron-sriov-agent-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, build-date=2025-07-21T16:03:34, container_name=neutron_sriov_agent, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, batch=17.1_20250721.1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, version=17.1.9)
Oct 13 14:37:49 standalone.localdomain podman[283927]: 2025-10-13 14:37:49.218150061 +0000 UTC m=+0.470102062 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T16:03:34, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=neutron_sriov_agent, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, name=rhosp17/openstack-neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969)
Oct 13 14:37:49 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:37:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1899: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:51 standalone.localdomain ceph-mon[29756]: pgmap v1899: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1900: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:37:53 standalone.localdomain ceph-mon[29756]: pgmap v1900: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1901: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:54 standalone.localdomain ceph-mon[29756]: pgmap v1901: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:54 standalone.localdomain runuser[284157]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:37:55 standalone.localdomain runuser[284157]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:37:55 standalone.localdomain runuser[284226]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:37:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1902: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:56 standalone.localdomain runuser[284226]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:37:56 standalone.localdomain runuser[284280]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:37:57 standalone.localdomain runuser[284280]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:37:57 standalone.localdomain ceph-mon[29756]: pgmap v1902: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:37:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1903: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:37:58 standalone.localdomain ceph-mon[29756]: pgmap v1903: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1904: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:01 standalone.localdomain ceph-mon[29756]: pgmap v1904: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1905: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:38:03 standalone.localdomain ceph-mon[29756]: pgmap v1905: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1906: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:04 standalone.localdomain ceph-mon[29756]: pgmap v1906: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1907: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:06 standalone.localdomain podman[284717]: 2025-10-13 14:38:06.624694156 +0000 UTC m=+0.062977120 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, name=rhosp17/openstack-mariadb, release=1, com.redhat.component=openstack-mariadb-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:45, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 14:38:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:38:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:38:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:38:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:38:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:38:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:38:06 standalone.localdomain podman[284717]: 2025-10-13 14:38:06.6637944 +0000 UTC m=+0.102077374 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, maintainer=OpenStack TripleO Team)
Oct 13 14:38:06 standalone.localdomain podman[284742]: 2025-10-13 14:38:06.733567507 +0000 UTC m=+0.087500735 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, com.redhat.component=openstack-heat-engine-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, release=1, description=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9)
Oct 13 14:38:06 standalone.localdomain systemd[1]: tmp-crun.hHJNZt.mount: Deactivated successfully.
Oct 13 14:38:06 standalone.localdomain podman[284736]: 2025-10-13 14:38:06.780145721 +0000 UTC m=+0.136368330 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, container_name=heat_api_cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, build-date=2025-07-21T14:49:55, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-cfn-container, name=rhosp17/openstack-heat-api-cfn, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct 13 14:38:06 standalone.localdomain podman[284755]: 2025-10-13 14:38:06.794623846 +0000 UTC m=+0.139294028 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, batch=17.1_20250721.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-heat-api-container, vcs-type=git, build-date=2025-07-21T15:56:26, release=1, config_id=tripleo_step4, container_name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9)
Oct 13 14:38:06 standalone.localdomain podman[284760]: 2025-10-13 14:38:06.866260622 +0000 UTC m=+0.207124007 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, managed_by=tripleo_ansible, container_name=logrotate_crond, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git)
Oct 13 14:38:06 standalone.localdomain podman[284760]: 2025-10-13 14:38:06.875017191 +0000 UTC m=+0.215880596 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:07:52, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond)
Oct 13 14:38:06 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:38:06 standalone.localdomain podman[284742]: 2025-10-13 14:38:06.927369603 +0000 UTC m=+0.281302861 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, name=rhosp17/openstack-heat-engine, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, container_name=heat_engine, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-engine-container, build-date=2025-07-21T15:44:11, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, release=1)
Oct 13 14:38:06 standalone.localdomain podman[284755]: 2025-10-13 14:38:06.927803137 +0000 UTC m=+0.272473309 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, container_name=heat_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, com.redhat.component=openstack-heat-api-container, version=17.1.9, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api, vcs-type=git)
Oct 13 14:38:06 standalone.localdomain podman[284850]: 2025-10-13 14:38:06.928669653 +0000 UTC m=+0.116307671 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, release=1, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T13:08:11, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-haproxy, com.redhat.component=openstack-haproxy-container, description=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:38:06 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:38:06 standalone.localdomain podman[284743]: 2025-10-13 14:38:06.84896318 +0000 UTC m=+0.196210371 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, vcs-type=git, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, description=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, build-date=2025-07-21T12:58:43, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step1, container_name=memcached, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-memcached-container, io.buildah.version=1.33.12)
Oct 13 14:38:06 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:38:07 standalone.localdomain podman[284756]: 2025-10-13 14:38:07.02863097 +0000 UTC m=+0.370637360 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-heat-api-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4, container_name=heat_api, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26)
Oct 13 14:38:07 standalone.localdomain podman[284756]: 2025-10-13 14:38:07.06206597 +0000 UTC m=+0.404072380 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, release=1, container_name=heat_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible)
Oct 13 14:38:07 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:38:07 standalone.localdomain podman[284736]: 2025-10-13 14:38:07.079559408 +0000 UTC m=+0.435782077 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, container_name=heat_api_cfn, version=17.1.9, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, release=1, com.redhat.component=openstack-heat-api-cfn-container, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git)
Oct 13 14:38:07 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:38:07 standalone.localdomain podman[284896]: 2025-10-13 14:38:07.168685022 +0000 UTC m=+0.232438777 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, distribution-scope=public, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, build-date=2025-07-21T13:08:11, batch=17.1_20250721.1, com.redhat.component=openstack-haproxy-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:38:07 standalone.localdomain podman[284850]: 2025-10-13 14:38:07.172635873 +0000 UTC m=+0.360273921 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-haproxy, tcib_managed=true, build-date=2025-07-21T13:08:11, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-haproxy-container, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:38:07 standalone.localdomain podman[284743]: 2025-10-13 14:38:07.182970531 +0000 UTC m=+0.530217772 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step1, container_name=memcached, release=1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, version=17.1.9, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:43, vcs-type=git, com.redhat.component=openstack-memcached-container, summary=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:38:07 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:38:07 standalone.localdomain ceph-mon[29756]: pgmap v1907: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:38:07 standalone.localdomain podman[285051]: 2025-10-13 14:38:07.59515607 +0000 UTC m=+0.078217789 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-rabbitmq, build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, distribution-scope=public, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, version=17.1.9)
Oct 13 14:38:07 standalone.localdomain podman[285051]: 2025-10-13 14:38:07.626984399 +0000 UTC m=+0.110046148 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.buildah.version=1.33.12, name=rhosp17/openstack-rabbitmq, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., com.redhat.component=openstack-rabbitmq-container, io.openshift.expose-services=, build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, release=1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1)
Oct 13 14:38:07 standalone.localdomain runuser[285086]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:38:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1908: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:08 standalone.localdomain runuser[285086]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:38:08 standalone.localdomain ceph-mon[29756]: pgmap v1908: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:08 standalone.localdomain runuser[285237]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:38:09 standalone.localdomain runuser[285237]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:38:09 standalone.localdomain runuser[285299]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:38:09 standalone.localdomain runuser[285299]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:38:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1909: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:11 standalone.localdomain ceph-mon[29756]: pgmap v1909: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1910: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:38:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:38:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:38:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:38:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:38:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:38:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:38:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:38:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:38:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:38:12 standalone.localdomain podman[285382]: 2025-10-13 14:38:12.694575378 +0000 UTC m=+0.089286269 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, name=rhosp17/openstack-swift-object, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-swift-object-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, config_id=tripleo_step4, container_name=swift_object_server, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28)
Oct 13 14:38:12 standalone.localdomain podman[285424]: 2025-10-13 14:38:12.747937851 +0000 UTC m=+0.118949192 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.buildah.version=1.33.12, build-date=2025-07-21T16:11:22, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, release=1, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, io.openshift.expose-services=)
Oct 13 14:38:12 standalone.localdomain podman[285396]: 2025-10-13 14:38:12.806586777 +0000 UTC m=+0.186610645 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, name=rhosp17/openstack-swift-container, architecture=x86_64)
Oct 13 14:38:12 standalone.localdomain podman[285395]: 2025-10-13 14:38:12.860074183 +0000 UTC m=+0.243452854 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9)
Oct 13 14:38:12 standalone.localdomain podman[285406]: 2025-10-13 14:38:12.908169205 +0000 UTC m=+0.286201913 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Oct 13 14:38:12 standalone.localdomain podman[285424]: 2025-10-13 14:38:12.929106779 +0000 UTC m=+0.300118110 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=swift_account_server, vcs-type=git, tcib_managed=true, architecture=x86_64, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, distribution-scope=public, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1)
Oct 13 14:38:12 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:38:12 standalone.localdomain podman[285406]: 2025-10-13 14:38:12.950291561 +0000 UTC m=+0.328324279 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.9, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 13 14:38:12 standalone.localdomain podman[285406]: unhealthy
Oct 13 14:38:12 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:38:12 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:38:12 standalone.localdomain podman[285382]: 2025-10-13 14:38:12.989830818 +0000 UTC m=+0.384541719 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, name=rhosp17/openstack-swift-object, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, com.redhat.component=openstack-swift-object-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, container_name=swift_object_server, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T14:56:28, release=1)
Oct 13 14:38:13 standalone.localdomain podman[285381]: 2025-10-13 14:38:13.003697825 +0000 UTC m=+0.399657184 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, release=1, com.redhat.component=openstack-glance-api-container, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:38:13 standalone.localdomain podman[285396]: 2025-10-13 14:38:13.011762303 +0000 UTC m=+0.391786161 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, build-date=2025-07-21T15:54:32, name=rhosp17/openstack-swift-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:38:13 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:38:13 standalone.localdomain podman[285417]: 2025-10-13 14:38:13.021007258 +0000 UTC m=+0.394674221 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:38:13 standalone.localdomain podman[285381]: 2025-10-13 14:38:13.035228416 +0000 UTC m=+0.431187795 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, 
io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:38:13 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:38:13 standalone.localdomain podman[285417]: 2025-10-13 14:38:13.055989724 +0000 UTC m=+0.429656677 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.33.12, release=1)
Oct 13 14:38:13 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:38:13 standalone.localdomain podman[285417]: unhealthy
Oct 13 14:38:13 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:38:13 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:38:13 standalone.localdomain podman[285389]: 2025-10-13 14:38:13.106380816 +0000 UTC m=+0.493701649 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-swift-proxy-server, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_proxy, 
maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:38:13 standalone.localdomain podman[285395]: 2025-10-13 14:38:13.136316697 +0000 UTC m=+0.519695358 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 glance-api, release=1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, container_name=glance_api_internal)
Oct 13 14:38:13 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:38:13 standalone.localdomain podman[285383]: 2025-10-13 14:38:13.212328677 +0000 UTC m=+0.597974678 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:38:13 standalone.localdomain podman[285389]: 2025-10-13 14:38:13.271685704 +0000 UTC m=+0.659006547 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-swift-proxy-server, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:38:13 standalone.localdomain ceph-mon[29756]: pgmap v1910: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:13 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:38:13 standalone.localdomain podman[285383]: 2025-10-13 14:38:13.525954962 +0000 UTC m=+0.911600963 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, batch=17.1_20250721.1)
Oct 13 14:38:13 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:38:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1911: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:15 standalone.localdomain ceph-mon[29756]: pgmap v1911: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1912: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:16 standalone.localdomain ceph-mon[29756]: pgmap v1912: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:38:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:38:17 standalone.localdomain podman[285814]: 2025-10-13 14:38:17.804285505 +0000 UTC m=+0.067493648 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, description=Red Hat OpenStack Platform 17.1 keystone, release=1, tcib_managed=true, container_name=keystone_cron, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, build-date=2025-07-21T13:27:18, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-keystone-container, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, batch=17.1_20250721.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:38:17 standalone.localdomain podman[285814]: 2025-10-13 14:38:17.811948392 +0000 UTC m=+0.075156545 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, version=17.1.9, description=Red Hat OpenStack Platform 17.1 keystone, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-keystone, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, build-date=2025-07-21T13:27:18, container_name=keystone_cron)
Oct 13 14:38:17 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:38:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1913: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:38:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3453084511' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:38:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:38:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3453084511' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:38:18 standalone.localdomain ceph-mon[29756]: pgmap v1913: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3453084511' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:38:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3453084511' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:38:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:38:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:38:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:38:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:38:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:38:19 standalone.localdomain podman[285958]: 2025-10-13 14:38:19.826947441 +0000 UTC m=+0.071796511 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, container_name=clustercheck, 
com.redhat.component=openstack-mariadb-container, name=rhosp17/openstack-mariadb, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T12:58:45)
Oct 13 14:38:19 standalone.localdomain podman[285932]: 2025-10-13 14:38:19.798764293 +0000 UTC m=+0.063520575 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 13 14:38:19 standalone.localdomain podman[285931]: 2025-10-13 14:38:19.857735889 +0000 UTC m=+0.125301818 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, container_name=neutron_dhcp, build-date=2025-07-21T16:28:54, com.redhat.component=openstack-neutron-dhcp-agent-container, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9)
Oct 13 14:38:19 standalone.localdomain podman[285932]: 2025-10-13 14:38:19.886437503 +0000 UTC m=+0.151193765 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, container_name=iscsid, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:38:19 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:38:19 standalone.localdomain podman[285931]: 2025-10-13 14:38:19.909902845 +0000 UTC m=+0.177468784 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-dhcp-agent, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-dhcp-agent-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:54, release=1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, batch=17.1_20250721.1, container_name=neutron_dhcp, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:38:19 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:38:19 standalone.localdomain podman[285958]: 2025-10-13 14:38:19.962450833 +0000 UTC m=+0.207299883 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, tcib_managed=true, build-date=2025-07-21T12:58:45, config_id=tripleo_step2, name=rhosp17/openstack-mariadb, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, managed_by=tripleo_ansible, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, container_name=clustercheck, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, release=1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git)
Oct 13 14:38:20 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:38:20 standalone.localdomain systemd[1]: tmp-crun.JpvgFP.mount: Deactivated successfully.
Oct 13 14:38:20 standalone.localdomain podman[285938]: 2025-10-13 14:38:20.120766086 +0000 UTC m=+0.380926328 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, container_name=nova_compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git)
Oct 13 14:38:20 standalone.localdomain podman[285944]: 2025-10-13 14:38:20.092020141 +0000 UTC m=+0.344135135 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-sriov-agent, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, version=17.1.9, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, container_name=neutron_sriov_agent, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-sriov-agent-container, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public)
Oct 13 14:38:20 standalone.localdomain podman[285938]: 2025-10-13 14:38:20.172074136 +0000 UTC m=+0.432234378 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, architecture=x86_64, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1)
Oct 13 14:38:20 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:38:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1914: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:20 standalone.localdomain podman[285944]: 2025-10-13 14:38:20.228216304 +0000 UTC m=+0.480331298 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, release=1, config_id=tripleo_step4, version=17.1.9, build-date=2025-07-21T16:03:34, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_sriov_agent, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-sriov-agent-container, distribution-scope=public, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']})
Oct 13 14:38:20 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:38:20 standalone.localdomain sudo[286067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:38:20 standalone.localdomain sudo[286067]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:38:20 standalone.localdomain sudo[286067]: pam_unix(sudo:session): session closed for user root
Oct 13 14:38:20 standalone.localdomain sudo[286084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Oct 13 14:38:20 standalone.localdomain sudo[286084]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:38:20 standalone.localdomain runuser[286103]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:38:20 standalone.localdomain sudo[286084]: pam_unix(sudo:session): session closed for user root
Oct 13 14:38:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:38:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:38:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:38:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:38:20 standalone.localdomain sudo[286168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:38:20 standalone.localdomain sudo[286168]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:38:20 standalone.localdomain sudo[286168]: pam_unix(sudo:session): session closed for user root
Oct 13 14:38:20 standalone.localdomain sudo[286183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:38:20 standalone.localdomain sudo[286183]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:38:21 standalone.localdomain runuser[286103]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:38:21 standalone.localdomain ceph-mon[29756]: pgmap v1914: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:21 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:38:21 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:38:21 standalone.localdomain runuser[286238]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:38:21 standalone.localdomain sudo[286183]: pam_unix(sudo:session): session closed for user root
Oct 13 14:38:21 standalone.localdomain sudo[286299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:38:21 standalone.localdomain sudo[286299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:38:21 standalone.localdomain sudo[286299]: pam_unix(sudo:session): session closed for user root
Oct 13 14:38:21 standalone.localdomain sudo[286314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -- inventory --format=json-pretty --filter-for-batch
Oct 13 14:38:21 standalone.localdomain sudo[286314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:38:21 standalone.localdomain runuser[286238]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:38:21 standalone.localdomain runuser[286338]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:38:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1915: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:22 standalone.localdomain podman[286433]: 
Oct 13 14:38:22 standalone.localdomain podman[286433]: 2025-10-13 14:38:22.263133376 +0000 UTC m=+0.045010756 container create ac9ab2fd545f5ecf556aa37d98a3f8de3f378bcffddec8cd91b1692ae76af179 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_meninsky, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, RELEASE=main, ceph=True, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.buildah.version=1.33.12, GIT_BRANCH=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph)
Oct 13 14:38:22 standalone.localdomain systemd[1]: Started libpod-conmon-ac9ab2fd545f5ecf556aa37d98a3f8de3f378bcffddec8cd91b1692ae76af179.scope.
Oct 13 14:38:22 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:38:22 standalone.localdomain podman[286433]: 2025-10-13 14:38:22.334028338 +0000 UTC m=+0.115905778 container init ac9ab2fd545f5ecf556aa37d98a3f8de3f378bcffddec8cd91b1692ae76af179 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_meninsky, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, release=553, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_BRANCH=main, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7)
Oct 13 14:38:22 standalone.localdomain podman[286433]: 2025-10-13 14:38:22.244580905 +0000 UTC m=+0.026458315 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 14:38:22 standalone.localdomain podman[286433]: 2025-10-13 14:38:22.346158643 +0000 UTC m=+0.128036023 container start ac9ab2fd545f5ecf556aa37d98a3f8de3f378bcffddec8cd91b1692ae76af179 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_meninsky, io.openshift.expose-services=, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 14:38:22 standalone.localdomain podman[286433]: 2025-10-13 14:38:22.347903486 +0000 UTC m=+0.129780886 container attach ac9ab2fd545f5ecf556aa37d98a3f8de3f378bcffddec8cd91b1692ae76af179 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_meninsky, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, distribution-scope=public, com.redhat.component=rhceph-container, release=553, vcs-type=git)
Oct 13 14:38:22 standalone.localdomain focused_meninsky[286449]: 167 167
Oct 13 14:38:22 standalone.localdomain podman[286433]: 2025-10-13 14:38:22.353591711 +0000 UTC m=+0.135469111 container died ac9ab2fd545f5ecf556aa37d98a3f8de3f378bcffddec8cd91b1692ae76af179 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_meninsky, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, name=rhceph, GIT_BRANCH=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=553, version=7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=)
Oct 13 14:38:22 standalone.localdomain systemd[1]: libpod-ac9ab2fd545f5ecf556aa37d98a3f8de3f378bcffddec8cd91b1692ae76af179.scope: Deactivated successfully.
Oct 13 14:38:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-89d7b47f386a1243c2373999ae3a953b6f578955e3e8ff514cf261454068bbbb-merged.mount: Deactivated successfully.
Oct 13 14:38:22 standalone.localdomain podman[286454]: 2025-10-13 14:38:22.435025217 +0000 UTC m=+0.072187302 container remove ac9ab2fd545f5ecf556aa37d98a3f8de3f378bcffddec8cd91b1692ae76af179 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_meninsky, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, RELEASE=main, description=Red Hat Ceph Storage 7)
Oct 13 14:38:22 standalone.localdomain systemd[1]: libpod-conmon-ac9ab2fd545f5ecf556aa37d98a3f8de3f378bcffddec8cd91b1692ae76af179.scope: Deactivated successfully.
Oct 13 14:38:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:38:22 standalone.localdomain podman[286483]: 
Oct 13 14:38:22 standalone.localdomain podman[286483]: 2025-10-13 14:38:22.604635109 +0000 UTC m=+0.062867157 container create e20a51d6138046add082190e09fe5b6a12e79b98c397f52e8207dbf14b33970c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_hamilton, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, ceph=True, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, RELEASE=main, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55)
Oct 13 14:38:22 standalone.localdomain runuser[286338]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:38:22 standalone.localdomain systemd[1]: Started libpod-conmon-e20a51d6138046add082190e09fe5b6a12e79b98c397f52e8207dbf14b33970c.scope.
Oct 13 14:38:22 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 14:38:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd8c93eec9b43e81b94b87456c38b899abaf8ddaefeb02757f49fe78c80663b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 13 14:38:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd8c93eec9b43e81b94b87456c38b899abaf8ddaefeb02757f49fe78c80663b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 14:38:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd8c93eec9b43e81b94b87456c38b899abaf8ddaefeb02757f49fe78c80663b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 14:38:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd8c93eec9b43e81b94b87456c38b899abaf8ddaefeb02757f49fe78c80663b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 14:38:22 standalone.localdomain podman[286483]: 2025-10-13 14:38:22.669188176 +0000 UTC m=+0.127420224 container init e20a51d6138046add082190e09fe5b6a12e79b98c397f52e8207dbf14b33970c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_hamilton, com.redhat.component=rhceph-container, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, release=553, GIT_BRANCH=main, version=7, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:38:22 standalone.localdomain podman[286483]: 2025-10-13 14:38:22.676694308 +0000 UTC m=+0.134926446 container start e20a51d6138046add082190e09fe5b6a12e79b98c397f52e8207dbf14b33970c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_hamilton, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, architecture=x86_64, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 14:38:22 standalone.localdomain podman[286483]: 2025-10-13 14:38:22.676906154 +0000 UTC m=+0.135138202 container attach e20a51d6138046add082190e09fe5b6a12e79b98c397f52e8207dbf14b33970c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_hamilton, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_BRANCH=main, io.openshift.expose-services=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.33.12, version=7)
Oct 13 14:38:22 standalone.localdomain podman[286483]: 2025-10-13 14:38:22.583645452 +0000 UTC m=+0.041877530 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:38:23
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr', 'manila_metadata', 'manila_data', 'images', 'backups', 'volumes', 'vms']
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:38:23 standalone.localdomain ceph-mon[29756]: pgmap v1915: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]: [
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:     {
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:         "available": false,
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:         "ceph_device": false,
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:         "lsm_data": {},
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:         "lvs": [],
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:         "path": "/dev/sr0",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:         "rejected_reasons": [
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "Has a FileSystem",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "Insufficient space (<5GB)"
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:         ],
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:         "sys_api": {
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "actuators": null,
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "device_nodes": "sr0",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "human_readable_size": "482.00 KB",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "id_bus": "ata",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "model": "QEMU DVD-ROM",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "nr_requests": "2",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "partitions": {},
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "path": "/dev/sr0",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "removable": "1",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "rev": "2.5+",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "ro": "0",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "rotational": "1",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "sas_address": "",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "sas_device_handle": "",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "scheduler_mode": "mq-deadline",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "sectors": 0,
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "sectorsize": "2048",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "size": 493568.0,
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "support_discard": "0",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "type": "disk",
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:             "vendor": "QEMU"
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:         }
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]:     }
Oct 13 14:38:23 standalone.localdomain pensive_hamilton[286503]: ]
Oct 13 14:38:23 standalone.localdomain systemd[1]: libpod-e20a51d6138046add082190e09fe5b6a12e79b98c397f52e8207dbf14b33970c.scope: Deactivated successfully.
Oct 13 14:38:23 standalone.localdomain podman[286483]: 2025-10-13 14:38:23.737346918 +0000 UTC m=+1.195578976 container died e20a51d6138046add082190e09fe5b6a12e79b98c397f52e8207dbf14b33970c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_hamilton, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, version=7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, ceph=True, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12)
Oct 13 14:38:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4dd8c93eec9b43e81b94b87456c38b899abaf8ddaefeb02757f49fe78c80663b-merged.mount: Deactivated successfully.
Oct 13 14:38:23 standalone.localdomain podman[288543]: 2025-10-13 14:38:23.808956793 +0000 UTC m=+0.065015962 container remove e20a51d6138046add082190e09fe5b6a12e79b98c397f52e8207dbf14b33970c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_hamilton, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.buildah.version=1.33.12, architecture=x86_64, ceph=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public)
Oct 13 14:38:23 standalone.localdomain systemd[1]: libpod-conmon-e20a51d6138046add082190e09fe5b6a12e79b98c397f52e8207dbf14b33970c.scope: Deactivated successfully.
Oct 13 14:38:23 standalone.localdomain sudo[286314]: pam_unix(sudo:session): session closed for user root
Oct 13 14:38:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:38:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:38:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:38:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:38:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:38:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:38:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:38:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:38:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:38:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:38:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:38:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 67653e31-496a-4d5b-bb2b-25af319ceb0a (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 67653e31-496a-4d5b-bb2b-25af319ceb0a (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:38:23 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 67653e31-496a-4d5b-bb2b-25af319ceb0a (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:38:23 standalone.localdomain sudo[288557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:38:23 standalone.localdomain sudo[288557]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:38:23 standalone.localdomain sudo[288557]: pam_unix(sudo:session): session closed for user root
Oct 13 14:38:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1916: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:24 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:38:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:38:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:38:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:38:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:38:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:38:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:38:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:38:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:38:24 standalone.localdomain ceph-mon[29756]: pgmap v1916: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:24 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:38:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1917: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:27 standalone.localdomain ceph-mon[29756]: pgmap v1917: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1918: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:28 standalone.localdomain ceph-mon[29756]: pgmap v1918: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:38:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:38:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1919: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.282180) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366310282235, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1615, "num_deletes": 251, "total_data_size": 1550068, "memory_usage": 1581744, "flush_reason": "Manual Compaction"}
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366310290643, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 1477788, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44344, "largest_seqno": 45958, "table_properties": {"data_size": 1471164, "index_size": 3776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13975, "raw_average_key_size": 19, "raw_value_size": 1457671, "raw_average_value_size": 2067, "num_data_blocks": 171, "num_entries": 705, "num_filter_entries": 705, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760366172, "oldest_key_time": 1760366172, "file_creation_time": 1760366310, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 8724 microseconds, and 3531 cpu microseconds.
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.290887) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 1477788 bytes OK
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.291046) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.293361) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.293387) EVENT_LOG_v1 {"time_micros": 1760366310293381, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.293408) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 1542904, prev total WAL file size 1542904, number of live WAL files 2.
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.295164) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(1443KB)], [98(4953KB)]
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366310295239, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 6550092, "oldest_snapshot_seqno": -1}
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 4865 keys, 5530420 bytes, temperature: kUnknown
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366310334828, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 5530420, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5499977, "index_size": 17162, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 122717, "raw_average_key_size": 25, "raw_value_size": 5413777, "raw_average_value_size": 1112, "num_data_blocks": 711, "num_entries": 4865, "num_filter_entries": 4865, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760366310, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.335665) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 5530420 bytes
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.343918) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.8 rd, 137.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 4.8 +0.0 blob) out(5.3 +0.0 blob), read-write-amplify(8.2) write-amplify(3.7) OK, records in: 5385, records dropped: 520 output_compression: NoCompression
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.343960) EVENT_LOG_v1 {"time_micros": 1760366310343944, "job": 58, "event": "compaction_finished", "compaction_time_micros": 40246, "compaction_time_cpu_micros": 23839, "output_level": 6, "num_output_files": 1, "total_output_size": 5530420, "num_input_records": 5385, "num_output_records": 4865, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366310344323, "job": 58, "event": "table_file_deletion", "file_number": 100}
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366310344824, "job": 58, "event": "table_file_deletion", "file_number": 98}
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.294999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.344940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.344946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.344948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.344950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:38:30 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:38:30.344952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:38:31 standalone.localdomain ceph-mon[29756]: pgmap v1919: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1920: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:32 standalone.localdomain account-server[114563]: 172.20.0.100 - - [13/Oct/2025:14:38:32 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx9957ed2b117f4b9f9d828-0068ed0ee8" "proxy-server 2" 0.0009 "-" 21 -
Oct 13 14:38:32 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx9957ed2b117f4b9f9d828-0068ed0ee8)
Oct 13 14:38:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:38:33 standalone.localdomain runuser[288908]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:38:33 standalone.localdomain ceph-mon[29756]: pgmap v1920: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:33 standalone.localdomain runuser[288908]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:38:34 standalone.localdomain runuser[288977]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:38:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1921: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:34 standalone.localdomain runuser[288977]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:38:34 standalone.localdomain runuser[289041]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:38:35 standalone.localdomain ceph-mon[29756]: pgmap v1921: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:35 standalone.localdomain runuser[289041]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:38:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1922: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:36 standalone.localdomain ceph-mon[29756]: pgmap v1922: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:38:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:38:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:38:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:38:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:38:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:38:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:38:37 standalone.localdomain systemd[1]: tmp-crun.vQMujv.mount: Deactivated successfully.
Oct 13 14:38:37 standalone.localdomain podman[289220]: 2025-10-13 14:38:37.927190121 +0000 UTC m=+0.179891800 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.9, 
io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:38:37 standalone.localdomain podman[289220]: 2025-10-13 14:38:37.96584492 +0000 UTC m=+0.218546599 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:38:37 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:38:37 standalone.localdomain podman[289205]: 2025-10-13 14:38:37.982657728 +0000 UTC m=+0.247043066 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, container_name=heat_api_cfn, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 heat-api-cfn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, release=1, build-date=2025-07-21T14:49:55, name=rhosp17/openstack-heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:38:37 standalone.localdomain podman[289206]: 2025-10-13 14:38:37.843698521 +0000 UTC m=+0.108353257 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, description=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, container_name=heat_engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T15:44:11, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:38:38 standalone.localdomain podman[289205]: 2025-10-13 14:38:38.015074776 +0000 UTC m=+0.279460144 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=heat_api_cfn, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-api-cfn-container, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 heat-api-cfn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, build-date=2025-07-21T14:49:55, release=1, name=rhosp17/openstack-heat-api-cfn, vcs-type=git)
Oct 13 14:38:38 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:38:38 standalone.localdomain podman[289219]: 2025-10-13 14:38:38.024539477 +0000 UTC m=+0.278500914 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, container_name=heat_api, release=1, com.redhat.component=openstack-heat-api-container, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-api, vcs-type=git, tcib_managed=true, build-date=2025-07-21T15:56:26, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, vendor=Red Hat, Inc.)
Oct 13 14:38:38 standalone.localdomain podman[289206]: 2025-10-13 14:38:38.089817217 +0000 UTC m=+0.354471943 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, version=17.1.9, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, com.redhat.component=openstack-heat-engine-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat 
OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, release=1)
Oct 13 14:38:38 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:38:38 standalone.localdomain podman[289207]: 2025-10-13 14:38:37.898776865 +0000 UTC m=+0.156634383 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, name=rhosp17/openstack-memcached, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, com.redhat.component=openstack-memcached-container, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, container_name=memcached, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, version=17.1.9, distribution-scope=public)
Oct 13 14:38:38 standalone.localdomain podman[289219]: 2025-10-13 14:38:38.10777237 +0000 UTC m=+0.361733827 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, vcs-type=git, version=17.1.9, build-date=2025-07-21T15:56:26, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, 
description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, vendor=Red Hat, Inc., container_name=heat_api, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:38:38 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:38:38 standalone.localdomain podman[289207]: 2025-10-13 14:38:38.135813983 +0000 UTC m=+0.393671501 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, tcib_managed=true, name=rhosp17/openstack-memcached, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-memcached-container, description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, container_name=memcached, batch=17.1_20250721.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, release=1, build-date=2025-07-21T12:58:43)
Oct 13 14:38:38 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:38:38 standalone.localdomain podman[289208]: 2025-10-13 14:38:38.178663352 +0000 UTC m=+0.432544677 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.component=openstack-heat-api-container, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:38:38 standalone.localdomain podman[289208]: 2025-10-13 14:38:38.185822502 +0000 UTC m=+0.439703817 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, container_name=heat_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., distribution-scope=public, release=1, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, 
io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, name=rhosp17/openstack-heat-api)
Oct 13 14:38:38 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:38:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1923: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:38 standalone.localdomain ceph-mon[29756]: pgmap v1923: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1924: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:41 standalone.localdomain ceph-mon[29756]: pgmap v1924: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1925: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:38:43 standalone.localdomain ceph-mon[29756]: pgmap v1925: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:38:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:38:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:38:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:38:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:38:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:38:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:38:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:38:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:38:43 standalone.localdomain podman[289567]: 2025-10-13 14:38:43.866172806 +0000 UTC m=+0.126493936 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.buildah.version=1.33.12, config_id=tripleo_step4, container_name=glance_api_cron, name=rhosp17/openstack-glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, release=1, distribution-scope=public, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:38:43 standalone.localdomain podman[289575]: 2025-10-13 14:38:43.92868215 +0000 UTC m=+0.163481314 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, version=17.1.9, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, container_name=swift_proxy, description=Red Hat OpenStack 
Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-proxy-server-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:38:43 standalone.localdomain podman[289592]: 2025-10-13 14:38:43.977636816 +0000 UTC m=+0.199633006 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, architecture=x86_64, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git)
Oct 13 14:38:44 standalone.localdomain podman[289574]: 2025-10-13 14:38:44.023271212 +0000 UTC m=+0.255427045 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, 
batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public)
Oct 13 14:38:44 standalone.localdomain podman[289592]: 2025-10-13 14:38:44.044571457 +0000 UTC m=+0.266567637 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:38:44 standalone.localdomain podman[289592]: unhealthy
Oct 13 14:38:44 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:38:44 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:38:44 standalone.localdomain podman[289575]: 2025-10-13 14:38:44.133095703 +0000 UTC m=+0.367894917 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, distribution-scope=public, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-proxy-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, 
io.openshift.expose-services=, io.buildah.version=1.33.12, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=swift_proxy, name=rhosp17/openstack-swift-proxy-server, managed_by=tripleo_ansible, release=1, config_id=tripleo_step4)
Oct 13 14:38:44 standalone.localdomain podman[289581]: 2025-10-13 14:38:44.14862857 +0000 UTC m=+0.377296205 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, com.redhat.component=openstack-glance-api-container, container_name=glance_api_internal, name=rhosp17/openstack-glance-api, release=1, io.buildah.version=1.33.12, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.expose-services=, vcs-type=git, version=17.1.9)
Oct 13 14:38:44 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:38:44 standalone.localdomain podman[289584]: 2025-10-13 14:38:44.187662462 +0000 UTC m=+0.404383829 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:38:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1926: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:44 standalone.localdomain podman[289584]: 2025-10-13 14:38:44.262441044 +0000 UTC m=+0.479162411 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, architecture=x86_64, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9)
Oct 13 14:38:44 standalone.localdomain podman[289584]: unhealthy
Oct 13 14:38:44 standalone.localdomain podman[289597]: 2025-10-13 14:38:44.280933404 +0000 UTC m=+0.500687815 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, release=1, version=17.1.9, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, 
tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible)
Oct 13 14:38:44 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:38:44 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:38:44 standalone.localdomain podman[289568]: 2025-10-13 14:38:44.333040947 +0000 UTC m=+0.574423944 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, release=1, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., distribution-scope=public, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:38:44 standalone.localdomain podman[289582]: 2025-10-13 14:38:44.252976163 +0000 UTC m=+0.482897967 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, build-date=2025-07-21T15:54:32, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, container_name=swift_container_server)
Oct 13 14:38:44 standalone.localdomain podman[289581]: 2025-10-13 14:38:44.388642119 +0000 UTC m=+0.617309794 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_internal, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 
17.1 glance-api, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, vcs-type=git, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:38:44 standalone.localdomain podman[289567]: 2025-10-13 14:38:44.40589917 +0000 UTC m=+0.666220220 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., release=1, container_name=glance_api_cron, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-glance-api-container, vcs-type=git, architecture=x86_64, build-date=2025-07-21T13:58:20, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', 
'/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:38:44 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:38:44 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:38:44 standalone.localdomain podman[289582]: 2025-10-13 14:38:44.457820149 +0000 UTC m=+0.687741943 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, container_name=swift_container_server, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:38:44 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:38:44 standalone.localdomain podman[289597]: 2025-10-13 14:38:44.496295153 +0000 UTC m=+0.716049584 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:38:44 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:38:44 standalone.localdomain podman[289574]: 2025-10-13 14:38:44.509681885 +0000 UTC m=+0.741837748 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:38:44 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:38:44 standalone.localdomain podman[289568]: 2025-10-13 14:38:44.587168301 +0000 UTC m=+0.828551308 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, release=1, vcs-type=git, container_name=swift_object_server, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, com.redhat.component=openstack-swift-object-container, version=17.1.9, config_id=tripleo_step4, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, name=rhosp17/openstack-swift-object)
Oct 13 14:38:44 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:38:44 standalone.localdomain systemd[1]: tmp-crun.ghiwxr.mount: Deactivated successfully.
Oct 13 14:38:45 standalone.localdomain ceph-mon[29756]: pgmap v1926: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:46 standalone.localdomain runuser[289809]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:38:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1927: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:46 standalone.localdomain ceph-mon[29756]: pgmap v1927: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:46 standalone.localdomain runuser[289809]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:38:46 standalone.localdomain runuser[289878]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:38:47 standalone.localdomain runuser[289878]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:38:47 standalone.localdomain runuser[289973]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:38:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:38:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:38:47 standalone.localdomain systemd[1]: Starting dnf makecache...
Oct 13 14:38:48 standalone.localdomain podman[290110]: 2025-10-13 14:38:48.030749277 +0000 UTC m=+0.088004850 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, build-date=2025-07-21T13:27:18, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, container_name=keystone_cron, description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, com.redhat.component=openstack-keystone-container, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-keystone, summary=Red Hat OpenStack Platform 17.1 keystone, release=1, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:38:48 standalone.localdomain podman[290110]: 2025-10-13 14:38:48.071847262 +0000 UTC m=+0.129102795 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, version=17.1.9, tcib_managed=true, container_name=keystone_cron, release=1, 
com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.buildah.version=1.33.12, name=rhosp17/openstack-keystone, vendor=Red Hat, Inc., com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, build-date=2025-07-21T13:27:18, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=)
Oct 13 14:38:48 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:38:48 standalone.localdomain runuser[289973]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:38:48 standalone.localdomain dnf[290112]: Updating Subscription Management repositories.
Oct 13 14:38:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1928: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:48 standalone.localdomain ceph-mon[29756]: pgmap v1928: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:38:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:38:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:38:50 standalone.localdomain podman[290239]: 2025-10-13 14:38:50.062009247 +0000 UTC m=+0.081753698 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, name=rhosp17/openstack-neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, 
vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-dhcp-agent-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, config_id=tripleo_step4, container_name=neutron_dhcp, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:54, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true)
Oct 13 14:38:50 standalone.localdomain systemd[1]: tmp-crun.KhYglU.mount: Deactivated successfully.
Oct 13 14:38:50 standalone.localdomain podman[290269]: 2025-10-13 14:38:50.15143465 +0000 UTC m=+0.084157172 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.buildah.version=1.33.12, container_name=clustercheck, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git, release=1, build-date=2025-07-21T12:58:45, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-mariadb-container, name=rhosp17/openstack-mariadb, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:38:50 standalone.localdomain podman[290269]: 2025-10-13 14:38:50.192956548 +0000 UTC m=+0.125679090 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, distribution-scope=public, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.expose-services=, release=1, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-mariadb, container_name=clustercheck, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64)
Oct 13 14:38:50 standalone.localdomain dnf[290112]: Metadata cache refreshed recently.
Oct 13 14:38:50 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:38:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1929: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:38:50 standalone.localdomain podman[290240]: 2025-10-13 14:38:50.254893855 +0000 UTC m=+0.273732168 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, config_id=tripleo_step3, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vcs-type=git)
Oct 13 14:38:50 standalone.localdomain podman[290239]: 2025-10-13 14:38:50.259716174 +0000 UTC m=+0.279460645 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, container_name=neutron_dhcp, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, build-date=2025-07-21T16:28:54, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-dhcp-agent-container, managed_by=tripleo_ansible, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-dhcp-agent, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4)
Oct 13 14:38:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:38:50 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:38:50 standalone.localdomain podman[290332]: 2025-10-13 14:38:50.335001931 +0000 UTC m=+0.054705325 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-sriov-agent-container, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, architecture=x86_64, build-date=2025-07-21T16:03:34, name=rhosp17/openstack-neutron-sriov-agent, managed_by=tripleo_ansible, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent)
Oct 13 14:38:50 standalone.localdomain podman[290240]: 2025-10-13 14:38:50.36227535 +0000 UTC m=+0.381113663 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, config_id=tripleo_step3, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, tcib_managed=true, vcs-type=git)
Oct 13 14:38:50 standalone.localdomain systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 13 14:38:50 standalone.localdomain systemd[1]: Finished dnf makecache.
Oct 13 14:38:50 standalone.localdomain systemd[1]: dnf-makecache.service: Consumed 2.325s CPU time.
Oct 13 14:38:50 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:38:50 standalone.localdomain podman[290332]: 2025-10-13 14:38:50.444262935 +0000 UTC m=+0.163966349 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, release=1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, name=rhosp17/openstack-neutron-sriov-agent, container_name=neutron_sriov_agent, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T16:03:34, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-neutron-sriov-agent-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:38:50 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:38:50 standalone.localdomain podman[290314]: 2025-10-13 14:38:50.648248113 +0000 UTC m=+0.410460196 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, container_name=nova_compute, release=1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20250721.1, vcs-type=git)
Oct 13 14:38:50 standalone.localdomain podman[290314]: 2025-10-13 14:38:50.681441265 +0000 UTC m=+0.443653358 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, name=rhosp17/openstack-nova-compute, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:38:50 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:38:51 standalone.localdomain ceph-mon[29756]: pgmap v1929: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1930: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:52 standalone.localdomain ceph-mon[29756]: pgmap v1930: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:38:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1931: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:55 standalone.localdomain ceph-mon[29756]: pgmap v1931: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1932: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:56 standalone.localdomain ceph-mon[29756]: pgmap v1932: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:38:57 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:38:57 standalone.localdomain recover_tripleo_nova_virtqemud[290607]: 93291
Oct 13 14:38:57 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:38:57 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:38:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1933: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:58 standalone.localdomain ceph-mon[29756]: pgmap v1933: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:38:58 standalone.localdomain runuser[290657]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:38:59 standalone.localdomain runuser[290657]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:38:59 standalone.localdomain runuser[290767]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:39:00 standalone.localdomain runuser[290767]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:39:00 standalone.localdomain runuser[290865]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:39:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1934: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:00 standalone.localdomain ceph-mon[29756]: pgmap v1934: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:00 standalone.localdomain runuser[290865]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:39:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1935: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:39:03 standalone.localdomain ceph-mon[29756]: pgmap v1935: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1936: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:05 standalone.localdomain ceph-mon[29756]: pgmap v1936: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1937: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:06 standalone.localdomain podman[291029]: 2025-10-13 14:39:06.774614982 +0000 UTC m=+0.068282902 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, version=17.1.9, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, build-date=2025-07-21T12:58:45, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:39:06 standalone.localdomain podman[291029]: 2025-10-13 14:39:06.803876073 +0000 UTC m=+0.097543923 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, name=rhosp17/openstack-mariadb, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, vcs-type=git, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:39:07 standalone.localdomain podman[291070]: 2025-10-13 14:39:07.303734201 +0000 UTC m=+0.090780216 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-haproxy, com.redhat.component=openstack-haproxy-container, distribution-scope=public, tcib_managed=true, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vendor=Red Hat, Inc., batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, release=1)
Oct 13 14:39:07 standalone.localdomain ceph-mon[29756]: pgmap v1937: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:07 standalone.localdomain podman[291070]: 2025-10-13 14:39:07.336975214 +0000 UTC m=+0.124021139 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, build-date=2025-07-21T13:08:11, com.redhat.component=openstack-haproxy-container, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, release=1, version=17.1.9, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, vcs-type=git, io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 haproxy, summary=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Oct 13 14:39:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:39:07 standalone.localdomain podman[291145]: 2025-10-13 14:39:07.718423646 +0000 UTC m=+0.061976619 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, vendor=Red Hat, Inc., build-date=2025-07-21T13:08:05, io.openshift.expose-services=, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, name=rhosp17/openstack-rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, com.redhat.component=openstack-rabbitmq-container)
Oct 13 14:39:07 standalone.localdomain podman[291145]: 2025-10-13 14:39:07.746578273 +0000 UTC m=+0.090131196 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, tcib_managed=true, name=rhosp17/openstack-rabbitmq, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-rabbitmq-container, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=)
Oct 13 14:39:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1938: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:08 standalone.localdomain ceph-mon[29756]: pgmap v1938: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:39:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:39:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:39:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:39:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:39:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:39:08 standalone.localdomain podman[291307]: 2025-10-13 14:39:08.801860129 +0000 UTC m=+0.069170031 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, build-date=2025-07-21T14:49:55, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-cfn-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=heat_api_cfn, vcs-type=git, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:39:08 standalone.localdomain systemd[1]: tmp-crun.V0PNLH.mount: Deactivated successfully.
Oct 13 14:39:08 standalone.localdomain podman[291307]: 2025-10-13 14:39:08.854501728 +0000 UTC m=+0.121811610 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, container_name=heat_api_cfn, build-date=2025-07-21T14:49:55, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-heat-api-cfn-container, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 
heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1)
Oct 13 14:39:08 standalone.localdomain podman[291310]: 2025-10-13 14:39:08.858260755 +0000 UTC m=+0.119978755 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, name=rhosp17/openstack-heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, io.buildah.version=1.33.12, config_id=tripleo_step4, release=1, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:39:08 standalone.localdomain podman[291327]: 2025-10-13 14:39:08.917143437 +0000 UTC m=+0.173663657 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=logrotate_crond, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12)
Oct 13 14:39:08 standalone.localdomain podman[291327]: 2025-10-13 14:39:08.924864085 +0000 UTC m=+0.181384315 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., release=1, version=17.1.9, name=rhosp17/openstack-cron, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, vcs-type=git, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, container_name=logrotate_crond)
Oct 13 14:39:08 standalone.localdomain podman[291310]: 2025-10-13 14:39:08.94289241 +0000 UTC m=+0.204610410 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-heat-api-container, vendor=Red Hat, 
Inc., maintainer=OpenStack TripleO Team, container_name=heat_api_cron, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530)
Oct 13 14:39:08 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:39:08 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:39:08 standalone.localdomain podman[291309]: 2025-10-13 14:39:08.967189888 +0000 UTC m=+0.231457147 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, container_name=memcached, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, build-date=2025-07-21T12:58:43, io.openshift.expose-services=, name=rhosp17/openstack-memcached, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.component=openstack-memcached-container, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe)
Oct 13 14:39:08 standalone.localdomain podman[291309]: 2025-10-13 14:39:08.992625741 +0000 UTC m=+0.256893010 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, distribution-scope=public, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, batch=17.1_20250721.1, container_name=memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T12:58:43, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack 
Platform 17.1 memcached, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, name=rhosp17/openstack-memcached, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1)
Oct 13 14:39:09 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:39:09 standalone.localdomain podman[291308]: 2025-10-13 14:39:09.00821207 +0000 UTC m=+0.275539043 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-engine-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:11, container_name=heat_engine, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-heat-engine, release=1, tcib_managed=true, vcs-type=git, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:39:09 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:39:09 standalone.localdomain podman[291321]: 2025-10-13 14:39:09.067151764 +0000 UTC m=+0.325600683 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, container_name=heat_api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4)
Oct 13 14:39:09 standalone.localdomain podman[291308]: 2025-10-13 14:39:09.081276769 +0000 UTC m=+0.348603822 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, container_name=heat_engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, release=1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, architecture=x86_64, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T15:44:11, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:39:09 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:39:09 standalone.localdomain podman[291321]: 2025-10-13 14:39:09.101922536 +0000 UTC m=+0.360371435 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, container_name=heat_api, distribution-scope=public, version=17.1.9, build-date=2025-07-21T15:56:26, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, tcib_managed=true, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:39:09 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:39:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1939: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:11 standalone.localdomain ceph-mon[29756]: pgmap v1939: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:11 standalone.localdomain runuser[291515]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:39:11 standalone.localdomain runuser[291515]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:39:12 standalone.localdomain runuser[291576]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:39:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1940: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:39:12 standalone.localdomain runuser[291576]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:39:12 standalone.localdomain runuser[291638]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:39:13 standalone.localdomain ceph-mon[29756]: pgmap v1940: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:13 standalone.localdomain runuser[291638]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:39:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1941: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:14 standalone.localdomain ceph-mon[29756]: pgmap v1941: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:14 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:39:14 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 65, saving inputs in /var/lib/pacemaker/pengine/pe-input-65.bz2
Oct 13 14:39:14 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 65 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-65.bz2): Complete
Oct 13 14:39:14 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:39:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:39:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:39:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:39:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:39:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:39:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:39:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:39:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:39:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:39:14 standalone.localdomain podman[291767]: 2025-10-13 14:39:14.857393922 +0000 UTC m=+0.115934669 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, container_name=glance_api_cron, io.openshift.expose-services=, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, release=1, architecture=x86_64, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git)
Oct 13 14:39:14 standalone.localdomain systemd[1]: tmp-crun.MmTppO.mount: Deactivated successfully.
Oct 13 14:39:14 standalone.localdomain podman[291769]: 2025-10-13 14:39:14.871322341 +0000 UTC m=+0.121284974 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc.)
Oct 13 14:39:14 standalone.localdomain podman[291783]: 2025-10-13 14:39:14.911391435 +0000 UTC m=+0.148284486 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, maintainer=OpenStack TripleO Team)
Oct 13 14:39:14 standalone.localdomain podman[291770]: 2025-10-13 14:39:14.924261391 +0000 UTC m=+0.165082573 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, name=rhosp17/openstack-swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, 
build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-swift-proxy-server-container, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:39:14 standalone.localdomain podman[291767]: 2025-10-13 14:39:14.944607157 +0000 UTC m=+0.203147924 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, name=rhosp17/openstack-glance-api, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, release=1, container_name=glance_api_cron, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, build-date=2025-07-21T13:58:20)
Oct 13 14:39:14 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:39:15 standalone.localdomain podman[291789]: 2025-10-13 14:39:15.018608065 +0000 UTC m=+0.253679730 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, distribution-scope=public, version=17.1.9, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3)
Oct 13 14:39:15 standalone.localdomain podman[291789]: 2025-10-13 14:39:15.032848294 +0000 UTC m=+0.267919939 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:39:15 standalone.localdomain podman[291789]: unhealthy
Oct 13 14:39:15 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:39:15 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:39:15 standalone.localdomain podman[291794]: 2025-10-13 14:39:15.069458301 +0000 UTC m=+0.296595132 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, distribution-scope=public, release=1, com.redhat.component=openstack-swift-account-container, build-date=2025-07-21T16:11:22, container_name=swift_account_server, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack 
Platform 17.1 swift-account, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64)
Oct 13 14:39:15 standalone.localdomain podman[291783]: 2025-10-13 14:39:15.108251405 +0000 UTC m=+0.345144456 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, 
config_id=tripleo_step4, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:39:15 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:39:15 standalone.localdomain podman[291770]: 2025-10-13 14:39:15.13440306 +0000 UTC m=+0.375224222 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.component=openstack-swift-proxy-server-container, io.buildah.version=1.33.12, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, managed_by=tripleo_ansible, container_name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4)
Oct 13 14:39:15 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:39:15 standalone.localdomain podman[291793]: 2025-10-13 14:39:15.173793472 +0000 UTC m=+0.405017028 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T13:28:44, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, version=17.1.9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:39:15 standalone.localdomain podman[291780]: 2025-10-13 14:39:15.181859161 +0000 UTC m=+0.418619958 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, managed_by=tripleo_ansible, container_name=glance_api_internal, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, release=1, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true)
Oct 13 14:39:15 standalone.localdomain podman[291793]: 2025-10-13 14:39:15.233115618 +0000 UTC m=+0.464339184 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, version=17.1.9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:39:15 standalone.localdomain podman[291793]: unhealthy
Oct 13 14:39:15 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:39:15 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:39:15 standalone.localdomain podman[291769]: 2025-10-13 14:39:15.257042905 +0000 UTC m=+0.507005558 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1)
Oct 13 14:39:15 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:39:15 standalone.localdomain podman[291794]: 2025-10-13 14:39:15.281854919 +0000 UTC m=+0.508991760 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, build-date=2025-07-21T16:11:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, name=rhosp17/openstack-swift-account, container_name=swift_account_server, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible)
Oct 13 14:39:15 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:39:15 standalone.localdomain podman[291768]: 2025-10-13 14:39:15.235213903 +0000 UTC m=+0.493973757 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-swift-object-container, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, name=rhosp17/openstack-swift-object, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc.)
Oct 13 14:39:15 standalone.localdomain podman[291780]: 2025-10-13 14:39:15.385857971 +0000 UTC m=+0.622618768 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, release=1, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, name=rhosp17/openstack-glance-api, container_name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, distribution-scope=public, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, version=17.1.9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1)
Oct 13 14:39:15 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:39:15 standalone.localdomain podman[291768]: 2025-10-13 14:39:15.428463282 +0000 UTC m=+0.687223216 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_object_server, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-swift-object-container)
Oct 13 14:39:15 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:39:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1942: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:16 standalone.localdomain sshd[291983]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:39:16 standalone.localdomain sshd[291983]: Received disconnect from 193.46.255.103 port 64544:11:  [preauth]
Oct 13 14:39:16 standalone.localdomain sshd[291983]: Disconnected from authenticating user root 193.46.255.103 port 64544 [preauth]
Oct 13 14:39:17 standalone.localdomain ceph-mon[29756]: pgmap v1942: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:39:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1943: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:39:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2432996724' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:39:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:39:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2432996724' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:39:18 standalone.localdomain ceph-mon[29756]: pgmap v1943: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2432996724' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:39:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2432996724' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:39:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:39:18 standalone.localdomain podman[292173]: 2025-10-13 14:39:18.796647959 +0000 UTC m=+0.070616974 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, architecture=x86_64, container_name=keystone_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-keystone-container, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:39:18 standalone.localdomain podman[292173]: 2025-10-13 14:39:18.807936878 +0000 UTC m=+0.081905843 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=keystone_cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, batch=17.1_20250721.1, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:39:18 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:39:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1944: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:39:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:39:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:39:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:39:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:39:20 standalone.localdomain podman[292250]: 2025-10-13 14:39:20.829715405 +0000 UTC m=+0.083762129 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, architecture=x86_64, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, container_name=neutron_dhcp, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-dhcp-agent)
Oct 13 14:39:20 standalone.localdomain systemd[1]: tmp-crun.QIeFRB.mount: Deactivated successfully.
Oct 13 14:39:20 standalone.localdomain podman[292252]: 2025-10-13 14:39:20.842647634 +0000 UTC m=+0.094903163 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, release=1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, name=rhosp17/openstack-neutron-sriov-agent, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-sriov-agent-container, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, container_name=neutron_sriov_agent, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team)
Oct 13 14:39:20 standalone.localdomain podman[292252]: 2025-10-13 14:39:20.884048779 +0000 UTC m=+0.136304328 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-neutron-sriov-agent-container, name=rhosp17/openstack-neutron-sriov-agent, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:03:34, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, tcib_managed=true, container_name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:39:20 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:39:20 standalone.localdomain podman[292266]: 2025-10-13 14:39:20.929640262 +0000 UTC m=+0.174607006 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, name=rhosp17/openstack-nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Oct 13 14:39:20 standalone.localdomain podman[292253]: 2025-10-13 14:39:20.88668665 +0000 UTC m=+0.133053948 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, container_name=clustercheck, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 
17.1 mariadb, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-mariadb-container, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172)
Oct 13 14:39:20 standalone.localdomain podman[292250]: 2025-10-13 14:39:20.949250366 +0000 UTC m=+0.203297100 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, batch=17.1_20250721.1, container_name=neutron_dhcp, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, build-date=2025-07-21T16:28:54, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, tcib_managed=true, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, com.redhat.component=openstack-neutron-dhcp-agent-container, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public)
Oct 13 14:39:20 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:39:20 standalone.localdomain podman[292253]: 2025-10-13 14:39:20.970801769 +0000 UTC m=+0.217169027 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, config_id=tripleo_step2, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:45, distribution-scope=public, name=rhosp17/openstack-mariadb, container_name=clustercheck, description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:39:20 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:39:20 standalone.localdomain podman[292251]: 2025-10-13 14:39:20.983680055 +0000 UTC m=+0.239381920 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, container_name=iscsid, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.buildah.version=1.33.12, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Oct 13 14:39:20 standalone.localdomain podman[292251]: 2025-10-13 14:39:20.992822646 +0000 UTC m=+0.248524531 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-iscsid, version=17.1.9, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, build-date=2025-07-21T13:27:15, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:39:21 standalone.localdomain podman[292266]: 2025-10-13 14:39:21.005426605 +0000 UTC m=+0.250393329 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, name=rhosp17/openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:39:21 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:39:21 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:39:21 standalone.localdomain ceph-mon[29756]: pgmap v1944: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1945: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:39:23
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['images', 'vms', 'manila_data', 'volumes', 'backups', 'manila_metadata', '.mgr']
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:39:23 standalone.localdomain ceph-mon[29756]: pgmap v1945: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:39:24 standalone.localdomain sudo[292453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:39:24 standalone.localdomain sudo[292453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:39:24 standalone.localdomain sudo[292453]: pam_unix(sudo:session): session closed for user root
Oct 13 14:39:24 standalone.localdomain runuser[292472]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:39:24 standalone.localdomain sudo[292474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:39:24 standalone.localdomain sudo[292474]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:39:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1946: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:24 standalone.localdomain runuser[292472]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:39:24 standalone.localdomain sudo[292474]: pam_unix(sudo:session): session closed for user root
Oct 13 14:39:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:39:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:39:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:39:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:39:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:39:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:39:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:39:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:39:24 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 6bc2eb1e-3b53-4aba-a858-7bdca4639668 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:39:24 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 6bc2eb1e-3b53-4aba-a858-7bdca4639668 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:39:24 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 6bc2eb1e-3b53-4aba-a858-7bdca4639668 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:39:24 standalone.localdomain runuser[292591]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:39:24 standalone.localdomain sudo[292599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:39:24 standalone.localdomain sudo[292599]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:39:24 standalone.localdomain sudo[292599]: pam_unix(sudo:session): session closed for user root
Oct 13 14:39:25 standalone.localdomain ceph-mon[29756]: pgmap v1946: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:39:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:39:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:39:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:39:25 standalone.localdomain runuser[292591]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:39:25 standalone.localdomain runuser[292660]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:39:26 standalone.localdomain runuser[292660]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:39:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1947: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:26 standalone.localdomain ceph-mon[29756]: pgmap v1947: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1948: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:39:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:39:28 standalone.localdomain ceph-mon[29756]: pgmap v1948: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:39:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:39:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:39:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1949: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:39:30 standalone.localdomain ceph-mon[29756]: pgmap v1949: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1950: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:39:33 standalone.localdomain ceph-mon[29756]: pgmap v1950: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1951: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:35 standalone.localdomain ceph-mon[29756]: pgmap v1951: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1952: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:36 standalone.localdomain runuser[293070]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:39:37 standalone.localdomain runuser[293070]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:39:37 standalone.localdomain ceph-mon[29756]: pgmap v1952: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:37 standalone.localdomain runuser[293139]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:39:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:39:38 standalone.localdomain runuser[293139]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:39:38 standalone.localdomain runuser[293283]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:39:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1953: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:38 standalone.localdomain ceph-mon[29756]: pgmap v1953: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:38 standalone.localdomain runuser[293283]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:39:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:39:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:39:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:39:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:39:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:39:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:39:39 standalone.localdomain systemd[1]: tmp-crun.erfz5Y.mount: Deactivated successfully.
Oct 13 14:39:39 standalone.localdomain podman[293474]: 2025-10-13 14:39:39.851673399 +0000 UTC m=+0.109079199 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vcs-type=git, com.redhat.component=openstack-memcached-container, description=Red Hat OpenStack Platform 17.1 memcached, release=1, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T12:58:43, vendor=Red Hat, Inc., name=rhosp17/openstack-memcached, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, summary=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, architecture=x86_64, container_name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:39:39 standalone.localdomain podman[293498]: 2025-10-13 14:39:39.88808953 +0000 UTC m=+0.132435228 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.9)
Oct 13 14:39:39 standalone.localdomain podman[293474]: 2025-10-13 14:39:39.897057076 +0000 UTC m=+0.154462866 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, config_id=tripleo_step1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, container_name=memcached, io.openshift.expose-services=, com.redhat.component=openstack-memcached-container, description=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, build-date=2025-07-21T12:58:43, name=rhosp17/openstack-memcached, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, 
version=17.1.9, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, release=1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:39:39 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:39:39 standalone.localdomain podman[293485]: 2025-10-13 14:39:39.942382602 +0000 UTC m=+0.196462479 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, vcs-type=git, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., build-date=2025-07-21T15:56:26, config_id=tripleo_step4)
Oct 13 14:39:39 standalone.localdomain podman[293489]: 2025-10-13 14:39:39.972099866 +0000 UTC m=+0.222791099 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, container_name=heat_api, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4)
Oct 13 14:39:40 standalone.localdomain podman[293498]: 2025-10-13 14:39:40.000982236 +0000 UTC m=+0.245327914 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Oct 13 14:39:40 standalone.localdomain podman[293485]: 2025-10-13 14:39:40.001717248 +0000 UTC m=+0.255797125 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1, vcs-type=git, batch=17.1_20250721.1)
Oct 13 14:39:40 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:39:40 standalone.localdomain podman[293472]: 2025-10-13 14:39:39.820831589 +0000 UTC m=+0.088009560 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=heat_api_cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-cfn-container, distribution-scope=public, build-date=2025-07-21T14:49:55, name=rhosp17/openstack-heat-api-cfn, version=17.1.9, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, 
io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:39:40 standalone.localdomain podman[293473]: 2025-10-13 14:39:40.076231512 +0000 UTC m=+0.343058172 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-engine-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4, name=rhosp17/openstack-heat-engine, distribution-scope=public, version=17.1.9, container_name=heat_engine, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11)
Oct 13 14:39:40 standalone.localdomain podman[293473]: 2025-10-13 14:39:40.098441895 +0000 UTC m=+0.365268575 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=heat_engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, name=rhosp17/openstack-heat-engine, architecture=x86_64, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-engine-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:39:40 standalone.localdomain podman[293472]: 2025-10-13 14:39:40.104962706 +0000 UTC m=+0.372140657 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=heat_api_cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, build-date=2025-07-21T14:49:55, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-heat-api-cfn, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, managed_by=tripleo_ansible, 
summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public)
Oct 13 14:39:40 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:39:40 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:39:40 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:39:40 standalone.localdomain podman[293489]: 2025-10-13 14:39:40.206538163 +0000 UTC m=+0.457229436 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, container_name=heat_api, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, version=17.1.9, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, 
vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, build-date=2025-07-21T15:56:26, batch=17.1_20250721.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1)
Oct 13 14:39:40 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:39:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1954: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:41 standalone.localdomain ceph-mon[29756]: pgmap v1954: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1955: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:39:43 standalone.localdomain ceph-mon[29756]: pgmap v1955: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1956: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:45 standalone.localdomain ceph-mon[29756]: pgmap v1956: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:39:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:39:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:39:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:39:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:39:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:39:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:39:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:39:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:39:45 standalone.localdomain podman[293743]: 2025-10-13 14:39:45.845783062 +0000 UTC m=+0.084617576 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, release=1, version=17.1.9, config_id=tripleo_step4, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public)
Oct 13 14:39:45 standalone.localdomain podman[293709]: 2025-10-13 14:39:45.825818317 +0000 UTC m=+0.091207989 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Oct 13 14:39:45 standalone.localdomain podman[293708]: 2025-10-13 14:39:45.91071061 +0000 UTC m=+0.176234216 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, vcs-type=git, release=1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, container_name=glance_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.33.12)
Oct 13 14:39:45 standalone.localdomain podman[293708]: 2025-10-13 14:39:45.920972126 +0000 UTC m=+0.186495772 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-glance-api-container, version=17.1.9, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=glance_api_cron, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:39:45 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:39:45 standalone.localdomain podman[293724]: 2025-10-13 14:39:45.989288959 +0000 UTC m=+0.241306789 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, tcib_managed=true, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, name=rhosp17/openstack-swift-container, version=17.1.9, batch=17.1_20250721.1, distribution-scope=public)
Oct 13 14:39:46 standalone.localdomain podman[293709]: 2025-10-13 14:39:46.00781602 +0000 UTC m=+0.273205692 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=swift_object_server, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, release=1, architecture=x86_64, config_id=tripleo_step4)
Oct 13 14:39:46 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:39:46 standalone.localdomain podman[293722]: 2025-10-13 14:39:45.9675284 +0000 UTC m=+0.207664435 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, container_name=glance_api_internal, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:39:46 standalone.localdomain podman[293735]: 2025-10-13 14:39:46.065792225 +0000 UTC m=+0.309009154 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, version=17.1.9, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, release=1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64)
Oct 13 14:39:46 standalone.localdomain podman[293711]: 2025-10-13 14:39:46.085361977 +0000 UTC m=+0.330238517 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, container_name=swift_proxy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, release=1, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9)
Oct 13 14:39:46 standalone.localdomain podman[293734]: 2025-10-13 14:39:45.891595782 +0000 UTC m=+0.135158292 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, release=1, distribution-scope=public)
Oct 13 14:39:46 standalone.localdomain podman[293743]: 2025-10-13 14:39:46.113150333 +0000 UTC m=+0.351984817 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, distribution-scope=public, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22)
Oct 13 14:39:46 standalone.localdomain podman[293734]: 2025-10-13 14:39:46.121849581 +0000 UTC m=+0.365412091 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 13 14:39:46 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:39:46 standalone.localdomain podman[293734]: unhealthy
Oct 13 14:39:46 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:39:46 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:39:46 standalone.localdomain podman[293710]: 2025-10-13 14:39:46.010114351 +0000 UTC m=+0.268243759 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=)
Oct 13 14:39:46 standalone.localdomain podman[293722]: 2025-10-13 14:39:46.189386099 +0000 UTC m=+0.429522124 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.component=openstack-glance-api-container, container_name=glance_api_internal, name=rhosp17/openstack-glance-api, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1)
Oct 13 14:39:46 standalone.localdomain podman[293735]: 2025-10-13 14:39:46.195798617 +0000 UTC m=+0.439015546 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.9, tcib_managed=true, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public)
Oct 13 14:39:46 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:39:46 standalone.localdomain podman[293735]: unhealthy
Oct 13 14:39:46 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:39:46 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:39:46 standalone.localdomain podman[293724]: 2025-10-13 14:39:46.21181616 +0000 UTC m=+0.463833980 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, tcib_managed=true, com.redhat.component=openstack-swift-container-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, name=rhosp17/openstack-swift-container, vendor=Red Hat, Inc., release=1, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9)
Oct 13 14:39:46 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:39:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1957: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:46 standalone.localdomain podman[293711]: 2025-10-13 14:39:46.293499734 +0000 UTC m=+0.538376274 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, name=rhosp17/openstack-swift-proxy-server, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, 
config_id=tripleo_step4, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, container_name=swift_proxy)
Oct 13 14:39:46 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:39:46 standalone.localdomain ceph-mon[29756]: pgmap v1957: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:46 standalone.localdomain podman[293710]: 2025-10-13 14:39:46.391540052 +0000 UTC m=+0.649669480 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., container_name=nova_migration_target, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4)
Oct 13 14:39:46 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:39:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:39:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1958: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:48 standalone.localdomain ceph-mon[29756]: pgmap v1958: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:49 standalone.localdomain runuser[294156]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:39:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:39:49 standalone.localdomain podman[294201]: 2025-10-13 14:39:49.797625445 +0000 UTC m=+0.061746521 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-keystone-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:18, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-keystone, 
vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.expose-services=, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=keystone_cron, version=17.1.9)
Oct 13 14:39:49 standalone.localdomain podman[294201]: 2025-10-13 14:39:49.803816726 +0000 UTC m=+0.067937752 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, version=17.1.9, release=1, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, config_id=tripleo_step3, 
io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-keystone, build-date=2025-07-21T13:27:18, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone_cron, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:39:49 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:39:49 standalone.localdomain runuser[294156]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:39:50 standalone.localdomain runuser[294236]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:39:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1959: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:50 standalone.localdomain runuser[294236]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:39:50 standalone.localdomain runuser[294298]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:39:51 standalone.localdomain ceph-mon[29756]: pgmap v1959: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:51 standalone.localdomain runuser[294298]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:39:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:39:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:39:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:39:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:39:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:39:51 standalone.localdomain podman[294380]: 2025-10-13 14:39:51.812232723 +0000 UTC m=+0.064983841 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, batch=17.1_20250721.1, build-date=2025-07-21T16:03:34, com.redhat.component=openstack-neutron-sriov-agent-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, io.openshift.tags=rhosp osp openstack 
osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, architecture=x86_64, container_name=neutron_sriov_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-neutron-sriov-agent, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:39:51 standalone.localdomain podman[294380]: 2025-10-13 14:39:51.85077667 +0000 UTC m=+0.103527788 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T16:03:34, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, 
vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, container_name=neutron_sriov_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, architecture=x86_64, com.redhat.component=openstack-neutron-sriov-agent-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:39:51 standalone.localdomain podman[294364]: 2025-10-13 14:39:51.857520977 +0000 UTC m=+0.127299219 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, build-date=2025-07-21T16:28:54, version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, container_name=neutron_dhcp, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=)
Oct 13 14:39:51 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:39:51 standalone.localdomain podman[294371]: 2025-10-13 14:39:51.910683173 +0000 UTC m=+0.165068862 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, architecture=x86_64, build-date=2025-07-21T14:48:37, config_id=tripleo_step5, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, tcib_managed=true)
Oct 13 14:39:51 standalone.localdomain podman[294365]: 2025-10-13 14:39:51.959354881 +0000 UTC m=+0.220546229 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, container_name=iscsid, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:39:51 standalone.localdomain podman[294371]: 2025-10-13 14:39:51.959781735 +0000 UTC m=+0.214167414 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., container_name=nova_compute, vcs-type=git, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step5, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:39:52 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:39:52 standalone.localdomain podman[294365]: 2025-10-13 14:39:52.041613594 +0000 UTC m=+0.302804962 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., container_name=iscsid, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp17/openstack-iscsid)
Oct 13 14:39:52 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:39:52 standalone.localdomain podman[294381]: 2025-10-13 14:39:52.013503759 +0000 UTC m=+0.263208254 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, name=rhosp17/openstack-mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=clustercheck, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T12:58:45, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 14:39:52 standalone.localdomain podman[294364]: 2025-10-13 14:39:52.096784182 +0000 UTC m=+0.366562434 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, release=1, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-neutron-dhcp-agent-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-dhcp-agent, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, container_name=neutron_dhcp)
Oct 13 14:39:52 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:39:52 standalone.localdomain podman[294381]: 2025-10-13 14:39:52.148527205 +0000 UTC m=+0.398231700 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, summary=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, distribution-scope=public, tcib_managed=true, container_name=clustercheck, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, config_id=tripleo_step2, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:39:52 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:39:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1960: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:39:53 standalone.localdomain ceph-mon[29756]: pgmap v1960: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1961: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:55 standalone.localdomain ceph-mon[29756]: pgmap v1961: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1962: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:56 standalone.localdomain ceph-mon[29756]: pgmap v1962: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:39:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1963: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:39:58 standalone.localdomain ceph-mon[29756]: pgmap v1963: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1964: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:01 standalone.localdomain ceph-mon[29756]: pgmap v1964: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:02 standalone.localdomain runuser[294875]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:40:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1965: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:02 standalone.localdomain runuser[294875]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:40:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:40:02 standalone.localdomain runuser[294944]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:40:03 standalone.localdomain ceph-mon[29756]: pgmap v1965: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:03 standalone.localdomain runuser[294944]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:40:03 standalone.localdomain runuser[294998]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:40:04 standalone.localdomain runuser[294998]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:40:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1966: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:05 standalone.localdomain ceph-mon[29756]: pgmap v1966: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1967: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:06 standalone.localdomain ceph-mon[29756]: pgmap v1967: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:06 standalone.localdomain podman[295146]: 2025-10-13 14:40:06.931464524 +0000 UTC m=+0.075984660 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, summary=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, build-date=2025-07-21T12:58:45, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:40:06 standalone.localdomain podman[295146]: 2025-10-13 14:40:06.965064399 +0000 UTC m=+0.109584615 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T12:58:45)
Oct 13 14:40:07 standalone.localdomain podman[295180]: 2025-10-13 14:40:07.441520906 +0000 UTC m=+0.069573854 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 haproxy, release=1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, build-date=2025-07-21T13:08:11, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-haproxy-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, name=rhosp17/openstack-haproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:40:07 standalone.localdomain podman[295180]: 2025-10-13 14:40:07.475070558 +0000 UTC m=+0.103123556 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.openshift.expose-services=, name=rhosp17/openstack-haproxy, build-date=2025-07-21T13:08:11, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.component=openstack-haproxy-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, batch=17.1_20250721.1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50)
Oct 13 14:40:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:40:07 standalone.localdomain podman[295220]: 2025-10-13 14:40:07.874879246 +0000 UTC m=+0.090915140 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, com.redhat.component=openstack-rabbitmq-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-rabbitmq, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, description=Red Hat OpenStack Platform 17.1 rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, build-date=2025-07-21T13:08:05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true)
Oct 13 14:40:07 standalone.localdomain podman[295220]: 2025-10-13 14:40:07.906981904 +0000 UTC m=+0.123017808 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T13:08:05, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-rabbitmq, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, release=1)
Oct 13 14:40:07 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:40:07 standalone.localdomain recover_tripleo_nova_virtqemud[295292]: 93291
Oct 13 14:40:07 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:40:07 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:40:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1968: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:08 standalone.localdomain ceph-mon[29756]: pgmap v1968: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1969: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:40:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:40:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:40:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:40:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:40:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:40:10 standalone.localdomain podman[295497]: 2025-10-13 14:40:10.834505615 +0000 UTC m=+0.078455136 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, tcib_managed=true, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, vcs-type=git)
Oct 13 14:40:10 standalone.localdomain podman[295481]: 2025-10-13 14:40:10.884888906 +0000 UTC m=+0.139536437 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T14:49:55, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, container_name=heat_api_cfn, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public)
Oct 13 14:40:10 standalone.localdomain podman[295481]: 2025-10-13 14:40:10.920726379 +0000 UTC m=+0.175373870 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, com.redhat.component=openstack-heat-api-cfn-container, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, name=rhosp17/openstack-heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T14:49:55, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cfn)
Oct 13 14:40:10 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:40:10 standalone.localdomain podman[295497]: 2025-10-13 14:40:10.963007371 +0000 UTC m=+0.206956902 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-heat-api-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, build-date=2025-07-21T15:56:26, vendor=Red Hat, Inc., container_name=heat_api, release=1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4)
Oct 13 14:40:10 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:40:11 standalone.localdomain podman[295503]: 2025-10-13 14:40:11.007213682 +0000 UTC m=+0.241131924 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, 
build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-cron, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container)
Oct 13 14:40:11 standalone.localdomain podman[295483]: 2025-10-13 14:40:11.044302193 +0000 UTC m=+0.292776553 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.buildah.version=1.33.12, container_name=memcached, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, release=1, build-date=2025-07-21T12:58:43, description=Red Hat OpenStack Platform 17.1 
memcached, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, com.redhat.component=openstack-memcached-container, vcs-type=git)
Oct 13 14:40:11 standalone.localdomain podman[295484]: 2025-10-13 14:40:11.100689039 +0000 UTC m=+0.344840496 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=heat_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, release=1, architecture=x86_64, name=rhosp17/openstack-heat-api, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, summary=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.buildah.version=1.33.12, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=)
Oct 13 14:40:11 standalone.localdomain podman[295484]: 2025-10-13 14:40:11.116030372 +0000 UTC m=+0.360181859 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, vcs-type=git, release=1, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, container_name=heat_api_cron, 
managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., build-date=2025-07-21T15:56:26, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530)
Oct 13 14:40:11 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:40:11 standalone.localdomain podman[295482]: 2025-10-13 14:40:11.14881061 +0000 UTC m=+0.403606325 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, tcib_managed=true, container_name=heat_engine, release=1, distribution-scope=public, name=rhosp17/openstack-heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-heat-engine-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:44:11, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 
heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3)
Oct 13 14:40:11 standalone.localdomain podman[295483]: 2025-10-13 14:40:11.171038115 +0000 UTC m=+0.419512475 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, com.redhat.component=openstack-memcached-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=memcached, config_id=tripleo_step1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, architecture=x86_64, vcs-type=git, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, release=1, description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc.)
Oct 13 14:40:11 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:40:11 standalone.localdomain podman[295482]: 2025-10-13 14:40:11.195996113 +0000 UTC m=+0.450791818 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, build-date=2025-07-21T15:44:11, config_id=tripleo_step4, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, com.redhat.component=openstack-heat-engine-container, io.openshift.expose-services=, container_name=heat_engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-type=git, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:40:11 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:40:11 standalone.localdomain podman[295503]: 2025-10-13 14:40:11.221968122 +0000 UTC m=+0.455886344 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, batch=17.1_20250721.1)
Oct 13 14:40:11 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:40:11 standalone.localdomain ceph-mon[29756]: pgmap v1969: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:11 standalone.localdomain account-server[114171]: Devices pass completed: 0.00s
Oct 13 14:40:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1970: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:40:13 standalone.localdomain ceph-mon[29756]: pgmap v1970: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1971: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:14 standalone.localdomain ceph-mon[29756]: pgmap v1971: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:14 standalone.localdomain runuser[295703]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:40:15 standalone.localdomain runuser[295703]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:40:15 standalone.localdomain runuser[295772]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:40:16 standalone.localdomain runuser[295772]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:40:16 standalone.localdomain runuser[295834]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:40:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1972: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:40:16 standalone.localdomain runuser[295834]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:40:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:40:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:40:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:40:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:40:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:40:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:40:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:40:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:40:16 standalone.localdomain systemd[1]: tmp-crun.Sf7j0V.mount: Deactivated successfully.
Oct 13 14:40:16 standalone.localdomain podman[295893]: 2025-10-13 14:40:16.858469517 +0000 UTC m=+0.106759788 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=nova_migration_target, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Oct 13 14:40:16 standalone.localdomain podman[295890]: 2025-10-13 14:40:16.828372191 +0000 UTC m=+0.090223459 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, 
container_name=glance_api_cron, name=rhosp17/openstack-glance-api, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:20, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.openshift.expose-services=)
Oct 13 14:40:16 standalone.localdomain podman[295908]: 2025-10-13 14:40:16.893131014 +0000 UTC m=+0.143200079 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-container-container, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public, vendor=Red 
Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, io.openshift.expose-services=, batch=17.1_20250721.1)
Oct 13 14:40:16 standalone.localdomain podman[295916]: 2025-10-13 14:40:16.927500692 +0000 UTC m=+0.173910805 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, batch=17.1_20250721.1, config_id=tripleo_step4, release=1, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Oct 13 14:40:16 standalone.localdomain podman[295935]: 2025-10-13 14:40:16.958024881 +0000 UTC m=+0.199601285 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, architecture=x86_64, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, release=1, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public)
Oct 13 14:40:16 standalone.localdomain podman[295890]: 2025-10-13 14:40:16.963105938 +0000 UTC m=+0.224957216 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, architecture=x86_64, io.k8s.description=Red Hat 
OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-glance-api, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, container_name=glance_api_cron)
Oct 13 14:40:16 standalone.localdomain podman[295935]: 2025-10-13 14:40:16.970942349 +0000 UTC m=+0.212518753 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, container_name=ovn_controller, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:40:16 standalone.localdomain podman[295935]: unhealthy
Oct 13 14:40:16 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:40:16 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:40:16 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:40:17 standalone.localdomain podman[295894]: 2025-10-13 14:40:16.868471835 +0000 UTC m=+0.123968707 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-proxy-server, release=1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, architecture=x86_64, container_name=swift_proxy, vcs-type=git, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-swift-proxy-server-container)
Oct 13 14:40:17 standalone.localdomain podman[295906]: 2025-10-13 14:40:17.031578115 +0000 UTC m=+0.284588231 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, com.redhat.component=openstack-glance-api-container, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, container_name=glance_api_internal, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, 
release=1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, name=rhosp17/openstack-glance-api, vcs-type=git, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, build-date=2025-07-21T13:58:20, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc.)
Oct 13 14:40:17 standalone.localdomain podman[295891]: 2025-10-13 14:40:17.039581032 +0000 UTC m=+0.300531252 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, release=1, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, version=17.1.9, container_name=swift_object_server, distribution-scope=public, io.openshift.expose-services=, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:40:17 standalone.localdomain podman[295946]: 2025-10-13 14:40:17.09439234 +0000 UTC m=+0.328375240 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, version=17.1.9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, build-date=2025-07-21T16:11:22, distribution-scope=public, com.redhat.component=openstack-swift-account-container, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, container_name=swift_account_server, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step4)
Oct 13 14:40:17 standalone.localdomain podman[295916]: 2025-10-13 14:40:17.128350915 +0000 UTC m=+0.374761018 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=ovn_metadata_agent, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.9, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Oct 13 14:40:17 standalone.localdomain podman[295916]: unhealthy
Oct 13 14:40:17 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:40:17 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:40:17 standalone.localdomain podman[295894]: 2025-10-13 14:40:17.17074989 +0000 UTC m=+0.426246792 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, version=17.1.9, distribution-scope=public, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, container_name=swift_proxy, release=1, vcs-type=git, batch=17.1_20250721.1)
Oct 13 14:40:17 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:40:17 standalone.localdomain podman[295891]: 2025-10-13 14:40:17.202012702 +0000 UTC m=+0.462962932 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-type=git, container_name=swift_object_server, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red 
Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:40:17 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:40:17 standalone.localdomain podman[295908]: 2025-10-13 14:40:17.221001647 +0000 UTC m=+0.471070722 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, release=1, tcib_managed=true, name=rhosp17/openstack-swift-container, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git)
Oct 13 14:40:17 standalone.localdomain podman[295893]: 2025-10-13 14:40:17.22401042 +0000 UTC m=+0.472300711 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, distribution-scope=public, release=1, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:40:17 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:40:17 standalone.localdomain podman[295906]: 2025-10-13 14:40:17.252011072 +0000 UTC m=+0.505021198 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.openshift.expose-services=, release=1, build-date=2025-07-21T13:58:20, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, 
io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=glance_api_internal, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1)
Oct 13 14:40:17 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:40:17 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:40:17 standalone.localdomain podman[295946]: 2025-10-13 14:40:17.312831904 +0000 UTC m=+0.546814844 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, vcs-type=git, build-date=2025-07-21T16:11:22, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible)
Oct 13 14:40:17 standalone.localdomain ceph-mon[29756]: pgmap v1972: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:17 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:40:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:40:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1973: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:40:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1258286677' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:40:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:40:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1258286677' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:40:18 standalone.localdomain ceph-mon[29756]: pgmap v1973: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1258286677' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:40:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1258286677' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:40:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1974: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:40:20 standalone.localdomain podman[296339]: 2025-10-13 14:40:20.795194695 +0000 UTC m=+0.061370520 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, release=1, com.redhat.component=openstack-keystone-container, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=keystone_cron, build-date=2025-07-21T13:27:18, version=17.1.9, 
io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone)
Oct 13 14:40:20 standalone.localdomain podman[296339]: 2025-10-13 14:40:20.801648473 +0000 UTC m=+0.067824318 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, release=1, architecture=x86_64, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-keystone, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-keystone-container, container_name=keystone_cron, description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:18, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:40:20 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:40:21 standalone.localdomain ceph-mon[29756]: pgmap v1974: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1975: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:22 standalone.localdomain ceph-mon[29756]: pgmap v1975: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:40:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:40:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:40:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:40:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:40:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:40:22 standalone.localdomain podman[296389]: 2025-10-13 14:40:22.813840226 +0000 UTC m=+0.067966503 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=clustercheck, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, tcib_managed=true, build-date=2025-07-21T12:58:45, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, release=1, io.buildah.version=1.33.12)
Oct 13 14:40:22 standalone.localdomain podman[296375]: 2025-10-13 14:40:22.795125741 +0000 UTC m=+0.061199625 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container, build-date=2025-07-21T16:28:54, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=neutron_dhcp, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-dhcp-agent, release=1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, batch=17.1_20250721.1)
Oct 13 14:40:22 standalone.localdomain podman[296389]: 2025-10-13 14:40:22.865797946 +0000 UTC m=+0.119924243 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, release=1, batch=17.1_20250721.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
container_name=clustercheck, vcs-type=git, com.redhat.component=openstack-mariadb-container, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, build-date=2025-07-21T12:58:45, io.openshift.expose-services=)
Oct 13 14:40:22 standalone.localdomain podman[296375]: 2025-10-13 14:40:22.879838899 +0000 UTC m=+0.145912763 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-dhcp-agent, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-dhcp-agent-container, build-date=2025-07-21T16:28:54, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, version=17.1.9, container_name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1)
Oct 13 14:40:22 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:40:22 standalone.localdomain podman[296376]: 2025-10-13 14:40:22.850204226 +0000 UTC m=+0.112879606 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, container_name=iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:40:22 standalone.localdomain podman[296377]: 2025-10-13 14:40:22.913704321 +0000 UTC m=+0.171825760 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, container_name=nova_compute, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5)
Oct 13 14:40:22 standalone.localdomain podman[296383]: 2025-10-13 14:40:22.92504543 +0000 UTC m=+0.177901987 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-sriov-agent-container, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, version=17.1.9, vcs-type=git, container_name=neutron_sriov_agent, maintainer=OpenStack TripleO Team, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, tcib_managed=true, build-date=2025-07-21T16:03:34, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, name=rhosp17/openstack-neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:40:22 standalone.localdomain podman[296377]: 2025-10-13 14:40:22.930788677 +0000 UTC m=+0.188910116 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., config_id=tripleo_step5, version=17.1.9, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, release=1, container_name=nova_compute)
Oct 13 14:40:22 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:40:22 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:40:22 standalone.localdomain podman[296383]: 2025-10-13 14:40:22.961802901 +0000 UTC m=+0.214659448 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, container_name=neutron_sriov_agent, distribution-scope=public, build-date=2025-07-21T16:03:34, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, com.redhat.component=openstack-neutron-sriov-agent-container, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4)
Oct 13 14:40:22 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:40:22 standalone.localdomain podman[296376]: 2025-10-13 14:40:22.983935993 +0000 UTC m=+0.246611343 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, tcib_managed=true, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, 
name=rhosp17/openstack-iscsid, io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1)
Oct 13 14:40:22 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:40:23
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr', 'images', 'volumes', 'backups', 'manila_metadata', 'manila_data', 'vms']
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:40:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1976: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:24 standalone.localdomain sudo[296576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:40:24 standalone.localdomain sudo[296576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:40:24 standalone.localdomain sudo[296576]: pam_unix(sudo:session): session closed for user root
Oct 13 14:40:24 standalone.localdomain sudo[296591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:40:24 standalone.localdomain sudo[296591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:40:25 standalone.localdomain ceph-mon[29756]: pgmap v1976: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:25 standalone.localdomain sudo[296591]: pam_unix(sudo:session): session closed for user root
Oct 13 14:40:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:40:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:40:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:40:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:40:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:40:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:40:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:40:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:40:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev b006af4c-9f2b-4f32-b610-a0591b486d87 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:40:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev b006af4c-9f2b-4f32-b610-a0591b486d87 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:40:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event b006af4c-9f2b-4f32-b610-a0591b486d87 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:40:25 standalone.localdomain sudo[296646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:40:25 standalone.localdomain sudo[296646]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:40:25 standalone.localdomain sudo[296646]: pam_unix(sudo:session): session closed for user root
Oct 13 14:40:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1977: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:40:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:40:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:40:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:40:27 standalone.localdomain runuser[296681]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:40:27 standalone.localdomain ceph-mon[29756]: pgmap v1977: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:40:27 standalone.localdomain runuser[296681]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:40:28 standalone.localdomain runuser[296791]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1978: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:40:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:40:28 standalone.localdomain ceph-mon[29756]: pgmap v1978: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:28 standalone.localdomain runuser[296791]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:40:28 standalone.localdomain runuser[296968]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:40:29 standalone.localdomain runuser[296968]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:40:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:40:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:40:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:40:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1979: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:40:30 standalone.localdomain ceph-mon[29756]: pgmap v1979: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1980: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:40:33 standalone.localdomain ceph-mon[29756]: pgmap v1980: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:33 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 14:40:33 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 14:40:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1981: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:35 standalone.localdomain ceph-mon[29756]: pgmap v1981: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1982: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:36 standalone.localdomain ceph-mon[29756]: pgmap v1982: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:40:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1983: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:38 standalone.localdomain ceph-mon[29756]: pgmap v1983: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:40 standalone.localdomain runuser[297419]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:40:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1984: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:40 standalone.localdomain runuser[297419]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:40:40 standalone.localdomain runuser[297488]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:40:41 standalone.localdomain ceph-mon[29756]: pgmap v1984: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:41 standalone.localdomain runuser[297488]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:40:41 standalone.localdomain runuser[297550]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:40:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:40:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:40:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:40:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:40:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:40:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:40:41 standalone.localdomain podman[297598]: 2025-10-13 14:40:41.82715681 +0000 UTC m=+0.087222616 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, release=1, tcib_managed=true, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, com.redhat.component=openstack-heat-engine-container, container_name=heat_engine, name=rhosp17/openstack-heat-engine, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, 
architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:40:41 standalone.localdomain podman[297598]: 2025-10-13 14:40:41.850993404 +0000 UTC m=+0.111059280 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, release=1, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-heat-engine-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, 
maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, io.buildah.version=1.33.12, build-date=2025-07-21T15:44:11, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, version=17.1.9, vcs-type=git)
Oct 13 14:40:41 standalone.localdomain podman[297597]: 2025-10-13 14:40:41.807618808 +0000 UTC m=+0.071908054 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, container_name=heat_api_cfn, managed_by=tripleo_ansible, build-date=2025-07-21T14:49:55, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, name=rhosp17/openstack-heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-cfn-container, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:40:41 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:40:41 standalone.localdomain podman[297599]: 2025-10-13 14:40:41.867316017 +0000 UTC m=+0.131420237 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:43, summary=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, com.redhat.component=openstack-memcached-container, description=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, release=1, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=memcached, name=rhosp17/openstack-memcached, vcs-type=git, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:40:41 standalone.localdomain podman[297599]: 2025-10-13 14:40:41.884060952 +0000 UTC m=+0.148165172 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-memcached-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.openshift.expose-services=, build-date=2025-07-21T12:58:43, config_id=tripleo_step1, name=rhosp17/openstack-memcached, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, architecture=x86_64, managed_by=tripleo_ansible, container_name=memcached, tcib_managed=true, vcs-type=git)
Oct 13 14:40:41 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:40:41 standalone.localdomain podman[297597]: 2025-10-13 14:40:41.894003948 +0000 UTC m=+0.158293194 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, tcib_managed=true, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, batch=17.1_20250721.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-cfn-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, build-date=2025-07-21T14:49:55, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, 
io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, config_id=tripleo_step4, container_name=heat_api_cfn)
Oct 13 14:40:41 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:40:41 standalone.localdomain podman[297600]: 2025-10-13 14:40:41.971820763 +0000 UTC m=+0.236783599 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, container_name=heat_api_cron, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, architecture=x86_64, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack 
osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible)
Oct 13 14:40:42 standalone.localdomain podman[297602]: 2025-10-13 14:40:42.023453533 +0000 UTC m=+0.281746544 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-07-21T13:07:52, batch=17.1_20250721.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, tcib_managed=true, container_name=logrotate_crond, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Oct 13 14:40:42 standalone.localdomain podman[297600]: 2025-10-13 14:40:42.05518449 +0000 UTC m=+0.320147366 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, release=1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:56:26, description=Red 
Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, container_name=heat_api_cron, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:40:42 standalone.localdomain podman[297602]: 2025-10-13 14:40:42.058674947 +0000 UTC m=+0.316968068 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-cron, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 14:40:42 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:40:42 standalone.localdomain podman[297601]: 2025-10-13 14:40:42.077771725 +0000 UTC m=+0.335895871 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, container_name=heat_api, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 
heat-api, build-date=2025-07-21T15:56:26, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1)
Oct 13 14:40:42 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:40:42 standalone.localdomain podman[297601]: 2025-10-13 14:40:42.159007746 +0000 UTC m=+0.417131912 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api, distribution-scope=public, com.redhat.component=openstack-heat-api-container, io.buildah.version=1.33.12, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team)
Oct 13 14:40:42 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:40:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1985: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:42 standalone.localdomain runuser[297550]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:40:42 standalone.localdomain ceph-mon[29756]: pgmap v1985: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:40:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1986: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:45 standalone.localdomain ceph-mon[29756]: pgmap v1986: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1987: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:46 standalone.localdomain ceph-mon[29756]: pgmap v1987: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:47 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 14:40:47 standalone.localdomain object-server[297841]: Object update sweep starting on /srv/node/d1 (pid: 19)
Oct 13 14:40:47 standalone.localdomain object-server[297841]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 19)
Oct 13 14:40:47 standalone.localdomain object-server[297841]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 14:40:47 standalone.localdomain object-server[114601]: Object update sweep completed: 0.07s
Oct 13 14:40:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:40:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:40:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:40:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:40:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:40:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:40:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:40:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:40:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:40:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:40:47 standalone.localdomain podman[297852]: 2025-10-13 14:40:47.835911213 +0000 UTC m=+0.085356828 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true)
Oct 13 14:40:47 standalone.localdomain systemd[1]: tmp-crun.hTHSyQ.mount: Deactivated successfully.
Oct 13 14:40:47 standalone.localdomain podman[297850]: 2025-10-13 14:40:47.888531303 +0000 UTC m=+0.147924665 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, build-date=2025-07-21T13:58:20, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_cron, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-glance-api, distribution-scope=public, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, config_id=tripleo_step4)
Oct 13 14:40:47 standalone.localdomain podman[297869]: 2025-10-13 14:40:47.904061281 +0000 UTC m=+0.149745341 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, release=1, container_name=swift_container_server, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T15:54:32, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container, batch=17.1_20250721.1, 
com.redhat.component=openstack-swift-container-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 14:40:47 standalone.localdomain podman[297854]: 2025-10-13 14:40:47.946789096 +0000 UTC m=+0.198500311 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, batch=17.1_20250721.1, name=rhosp17/openstack-swift-proxy-server, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=swift_proxy, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-swift-proxy-server-container)
Oct 13 14:40:47 standalone.localdomain podman[297851]: 2025-10-13 14:40:47.923633914 +0000 UTC m=+0.183215312 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, summary=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, name=rhosp17/openstack-swift-object, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:40:47 standalone.localdomain podman[297888]: 2025-10-13 14:40:47.989623635 +0000 UTC m=+0.223598745 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, name=rhosp17/openstack-swift-account, container_name=swift_account_server, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:40:48 standalone.localdomain podman[297850]: 2025-10-13 14:40:48.023658652 +0000 UTC m=+0.283052044 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, architecture=x86_64, release=1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:20, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', 
'/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_cron, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1)
Oct 13 14:40:48 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:40:48 standalone.localdomain podman[297875]: 2025-10-13 14:40:48.043831324 +0000 UTC m=+0.286349587 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, architecture=x86_64, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible)
Oct 13 14:40:48 standalone.localdomain podman[297869]: 2025-10-13 14:40:48.097769174 +0000 UTC m=+0.343453234 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, io.buildah.version=1.33.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-container-container, version=17.1.9, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, release=1, batch=17.1_20250721.1)
Oct 13 14:40:48 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:40:48 standalone.localdomain podman[297851]: 2025-10-13 14:40:48.118861113 +0000 UTC m=+0.378442501 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, version=17.1.9, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, release=1, architecture=x86_64, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28)
Oct 13 14:40:48 standalone.localdomain podman[297875]: 2025-10-13 14:40:48.130293755 +0000 UTC m=+0.372812008 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12)
Oct 13 14:40:48 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:40:48 standalone.localdomain podman[297875]: unhealthy
Oct 13 14:40:48 standalone.localdomain podman[297868]: 2025-10-13 14:40:48.100466277 +0000 UTC m=+0.343098933 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, 
container_name=glance_api_internal, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9)
Oct 13 14:40:48 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:40:48 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:40:48 standalone.localdomain podman[297854]: 2025-10-13 14:40:48.183966538 +0000 UTC m=+0.435677793 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, com.redhat.component=openstack-swift-proxy-server-container, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_proxy, name=rhosp17/openstack-swift-proxy-server)
Oct 13 14:40:48 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:40:48 standalone.localdomain podman[297887]: 2025-10-13 14:40:48.256050206 +0000 UTC m=+0.490496460 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, vendor=Red Hat, Inc., container_name=ovn_controller, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-ovn-controller-container)
Oct 13 14:40:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1988: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:48 standalone.localdomain podman[297887]: 2025-10-13 14:40:48.268792559 +0000 UTC m=+0.503238823 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:40:48 standalone.localdomain podman[297887]: unhealthy
Oct 13 14:40:48 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:40:48 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:40:48 standalone.localdomain podman[297868]: 2025-10-13 14:40:48.285640738 +0000 UTC m=+0.528273414 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T13:58:20, container_name=glance_api_internal, io.buildah.version=1.33.12, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, name=rhosp17/openstack-glance-api)
Oct 13 14:40:48 standalone.localdomain podman[297852]: 2025-10-13 14:40:48.293965474 +0000 UTC m=+0.543411079 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, container_name=nova_migration_target, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, distribution-scope=public, 
io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc.)
Oct 13 14:40:48 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:40:48 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:40:48 standalone.localdomain podman[297888]: 2025-10-13 14:40:48.339444254 +0000 UTC m=+0.573419384 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, name=rhosp17/openstack-swift-account, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 14:40:48 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:40:48 standalone.localdomain ceph-mon[29756]: pgmap v1988: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1989: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:51 standalone.localdomain account-server[114571]: 172.20.0.100 - - [13/Oct/2025:14:40:51 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txea2439ce02a040bd8834b-0068ed0f73" "proxy-server 2" 0.0010 "-" 23 -
Oct 13 14:40:51 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txea2439ce02a040bd8834b-0068ed0f73)
Oct 13 14:40:51 standalone.localdomain ceph-mon[29756]: pgmap v1989: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:40:51 standalone.localdomain podman[298296]: 2025-10-13 14:40:51.802746768 +0000 UTC m=+0.066030533 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp17/openstack-keystone, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-keystone-container, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', 
'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1)
Oct 13 14:40:51 standalone.localdomain podman[298296]: 2025-10-13 14:40:51.83724322 +0000 UTC m=+0.100526975 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, distribution-scope=public, release=1, container_name=keystone_cron, name=rhosp17/openstack-keystone, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:18, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step3, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:40:51 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:40:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1990: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:52 standalone.localdomain ceph-mon[29756]: pgmap v1990: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:40:52 standalone.localdomain runuser[298327]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:40:53 standalone.localdomain runuser[298327]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:40:53 standalone.localdomain runuser[298396]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:40:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:40:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:40:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:40:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:40:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:40:53 standalone.localdomain podman[298443]: 2025-10-13 14:40:53.810242467 +0000 UTC m=+0.068237712 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, container_name=nova_compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, version=17.1.9, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, vcs-type=git)
Oct 13 14:40:53 standalone.localdomain systemd[1]: tmp-crun.hbMiEZ.mount: Deactivated successfully.
Oct 13 14:40:53 standalone.localdomain podman[298443]: 2025-10-13 14:40:53.86232607 +0000 UTC m=+0.120321305 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, container_name=nova_compute, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:40:53 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:40:53 standalone.localdomain podman[298441]: 2025-10-13 14:40:53.865542009 +0000 UTC m=+0.131230711 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, container_name=neutron_dhcp, architecture=x86_64, config_id=tripleo_step4, release=1, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-dhcp-agent, version=17.1.9, com.redhat.component=openstack-neutron-dhcp-agent-container, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T16:28:54, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9)
Oct 13 14:40:53 standalone.localdomain podman[298441]: 2025-10-13 14:40:53.945342685 +0000 UTC m=+0.211031397 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, vendor=Red Hat, Inc., release=1, tcib_managed=true, build-date=2025-07-21T16:28:54, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.component=openstack-neutron-dhcp-agent-container, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, container_name=neutron_dhcp, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, version=17.1.9, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 14:40:53 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:40:53 standalone.localdomain podman[298449]: 2025-10-13 14:40:53.919112378 +0000 UTC m=+0.175896916 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, build-date=2025-07-21T16:03:34, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_sriov_agent, distribution-scope=public, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-sriov-agent-container, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vendor=Red Hat, Inc., tcib_managed=true, 
description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:40:54 standalone.localdomain podman[298442]: 2025-10-13 14:40:54.018475977 +0000 UTC m=+0.278783943 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, version=17.1.9, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-iscsid, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, container_name=iscsid, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 14:40:54 standalone.localdomain podman[298442]: 2025-10-13 14:40:54.02638932 +0000 UTC m=+0.286697286 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, container_name=iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:40:54 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:40:54 standalone.localdomain podman[298450]: 2025-10-13 14:40:54.067149296 +0000 UTC m=+0.321524720 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.component=openstack-mariadb-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, config_id=tripleo_step2, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, container_name=clustercheck, io.buildah.version=1.33.12, name=rhosp17/openstack-mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, 
release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 14:40:54 standalone.localdomain podman[298449]: 2025-10-13 14:40:54.099320495 +0000 UTC m=+0.356105113 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, version=17.1.9, io.openshift.expose-services=, distribution-scope=public, release=1, name=rhosp17/openstack-neutron-sriov-agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:03:34, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=neutron_sriov_agent)
Oct 13 14:40:54 standalone.localdomain podman[298450]: 2025-10-13 14:40:54.112891873 +0000 UTC m=+0.367267237 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, release=1, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=clustercheck, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, vendor=Red Hat, Inc., 
tcib_managed=true, version=17.1.9, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb)
Oct 13 14:40:54 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:40:54 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:40:54 standalone.localdomain runuser[298396]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:40:54 standalone.localdomain runuser[298577]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:40:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1991: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:54 standalone.localdomain runuser[298577]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:40:55 standalone.localdomain ceph-mon[29756]: pgmap v1991: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1992: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:57 standalone.localdomain ceph-mon[29756]: pgmap v1992: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:40:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1993: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:40:58 standalone.localdomain ceph-mon[29756]: pgmap v1993: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1994: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:01 standalone.localdomain ceph-mon[29756]: pgmap v1994: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1995: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:41:03 standalone.localdomain ceph-mon[29756]: pgmap v1995: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1996: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:04 standalone.localdomain ceph-mon[29756]: pgmap v1996: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:05 standalone.localdomain runuser[299078]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:41:06 standalone.localdomain runuser[299078]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:41:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1997: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:06 standalone.localdomain runuser[299147]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:41:06 standalone.localdomain runuser[299147]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:41:06 standalone.localdomain runuser[299209]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:41:07 standalone.localdomain podman[299259]: 2025-10-13 14:41:07.09915312 +0000 UTC m=+0.092478218 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, release=1, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-mariadb, version=17.1.9, build-date=2025-07-21T12:58:45, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 14:41:07 standalone.localdomain podman[299259]: 2025-10-13 14:41:07.133081864 +0000 UTC m=+0.126406992 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, com.redhat.component=openstack-mariadb-container, io.openshift.expose-services=, name=rhosp17/openstack-mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, distribution-scope=public, io.buildah.version=1.33.12, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45, vendor=Red Hat, Inc., vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:41:07 standalone.localdomain ceph-mon[29756]: pgmap v1997: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:07 standalone.localdomain podman[299297]: 2025-10-13 14:41:07.589217326 +0000 UTC m=+0.079614672 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, name=rhosp17/openstack-haproxy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, distribution-scope=public, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T13:08:11, description=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-haproxy-container)
Oct 13 14:41:07 standalone.localdomain podman[299318]: 2025-10-13 14:41:07.672630483 +0000 UTC m=+0.069316774 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-haproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.openshift.expose-services=, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, vcs-type=git, name=rhosp17/openstack-haproxy, tcib_managed=true, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T13:08:11, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 14:41:07 standalone.localdomain podman[299297]: 2025-10-13 14:41:07.676195583 +0000 UTC m=+0.166592919 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, distribution-scope=public, name=rhosp17/openstack-haproxy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, build-date=2025-07-21T13:08:11, com.redhat.component=openstack-haproxy-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50)
Oct 13 14:41:07 standalone.localdomain runuser[299209]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:41:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:41:08 standalone.localdomain podman[299346]: 2025-10-13 14:41:08.01101079 +0000 UTC m=+0.069045246 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, name=rhosp17/openstack-rabbitmq, com.redhat.component=openstack-rabbitmq-container, io.openshift.expose-services=, release=1, tcib_managed=true, vcs-type=git, build-date=2025-07-21T13:08:05, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, distribution-scope=public, architecture=x86_64, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b)
Oct 13 14:41:08 standalone.localdomain podman[299346]: 2025-10-13 14:41:08.039458045 +0000 UTC m=+0.097492501 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 rabbitmq, build-date=2025-07-21T13:08:05, architecture=x86_64, release=1, tcib_managed=true, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-rabbitmq-container, io.openshift.expose-services=, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, name=rhosp17/openstack-rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:41:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1998: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:08 standalone.localdomain ceph-mon[29756]: pgmap v1998: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v1999: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:11 standalone.localdomain ceph-mon[29756]: pgmap v1999: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2000: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:41:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:41:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:41:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:41:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:41:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:41:12 standalone.localdomain ceph-mon[29756]: pgmap v2000: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:12 standalone.localdomain systemd[1]: tmp-crun.nXw71J.mount: Deactivated successfully.
Oct 13 14:41:12 standalone.localdomain podman[299617]: 2025-10-13 14:41:12.386714042 +0000 UTC m=+0.078829798 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-memcached-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:43, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, container_name=memcached, distribution-scope=public, 
config_id=tripleo_step1, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1)
Oct 13 14:41:12 standalone.localdomain systemd[1]: tmp-crun.iatmEE.mount: Deactivated successfully.
Oct 13 14:41:12 standalone.localdomain podman[299615]: 2025-10-13 14:41:12.442261912 +0000 UTC m=+0.135886475 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, com.redhat.component=openstack-heat-api-cfn-container, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, release=1, container_name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, build-date=2025-07-21T14:49:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 14:41:12 standalone.localdomain podman[299617]: 2025-10-13 14:41:12.454454807 +0000 UTC m=+0.146570583 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, com.redhat.component=openstack-memcached-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T12:58:43, container_name=memcached, distribution-scope=public, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, name=rhosp17/openstack-memcached, tcib_managed=true, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 memcached, release=1, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']})
Oct 13 14:41:12 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:41:12 standalone.localdomain podman[299618]: 2025-10-13 14:41:12.492416876 +0000 UTC m=+0.181087406 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, release=1, com.redhat.component=openstack-heat-api-container, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=heat_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:41:12 standalone.localdomain podman[299615]: 2025-10-13 14:41:12.497044369 +0000 UTC m=+0.190668912 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, container_name=heat_api_cfn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, 
config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-heat-api-cfn-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:49:55, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api-cfn)
Oct 13 14:41:12 standalone.localdomain podman[299616]: 2025-10-13 14:41:12.456079667 +0000 UTC m=+0.149704930 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T15:44:11, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-engine-container, distribution-scope=public, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, architecture=x86_64, name=rhosp17/openstack-heat-engine, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=heat_engine, release=1, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine)
Oct 13 14:41:12 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:41:12 standalone.localdomain podman[299618]: 2025-10-13 14:41:12.525442203 +0000 UTC m=+0.214112723 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, config_id=tripleo_step4, container_name=heat_api_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, 
vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:41:12 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:41:12 standalone.localdomain podman[299616]: 2025-10-13 14:41:12.539532036 +0000 UTC m=+0.233157319 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T15:44:11, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 heat-engine, version=17.1.9, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, tcib_managed=true, batch=17.1_20250721.1, container_name=heat_engine, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 14:41:12 standalone.localdomain podman[299622]: 2025-10-13 14:41:12.550108291 +0000 UTC m=+0.236017456 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., release=1, container_name=heat_api, version=17.1.9, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, 
vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, build-date=2025-07-21T15:56:26)
Oct 13 14:41:12 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:41:12 standalone.localdomain podman[299622]: 2025-10-13 14:41:12.584004116 +0000 UTC m=+0.269913361 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
heat-api, vcs-type=git, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, version=17.1.9, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-heat-api)
Oct 13 14:41:12 standalone.localdomain podman[299635]: 2025-10-13 14:41:12.59226435 +0000 UTC m=+0.275852273 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, release=1, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T13:07:52, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-cron, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, config_id=tripleo_step4)
Oct 13 14:41:12 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:41:12 standalone.localdomain podman[299635]: 2025-10-13 14:41:12.603779825 +0000 UTC m=+0.287367718 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step4, build-date=2025-07-21T13:07:52, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, name=rhosp17/openstack-cron, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, architecture=x86_64, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:41:12 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:41:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:41:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2001: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:15 standalone.localdomain ceph-mon[29756]: pgmap v2001: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2002: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:16 standalone.localdomain ceph-mon[29756]: pgmap v2002: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:41:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:41:18 standalone.localdomain podman[299852]: 2025-10-13 14:41:18.147567096 +0000 UTC m=+0.073253477 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=glance_api_cron, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, release=1, io.openshift.expose-services=, com.redhat.component=openstack-glance-api-container, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 14:41:18 standalone.localdomain podman[299852]: 2025-10-13 14:41:18.159523194 +0000 UTC m=+0.085209575 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, build-date=2025-07-21T13:58:20, maintainer=OpenStack TripleO Team, 
container_name=glance_api_cron, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12)
Oct 13 14:41:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:41:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:41:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:41:18 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:41:18 standalone.localdomain systemd[1]: tmp-crun.47sXxJ.mount: Deactivated successfully.
Oct 13 14:41:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2003: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:41:18 standalone.localdomain runuser[299957]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:41:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:41:18 standalone.localdomain podman[299882]: 2025-10-13 14:41:18.319257101 +0000 UTC m=+0.143954833 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:41:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:41:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:41:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:41:18 standalone.localdomain podman[299882]: 2025-10-13 14:41:18.379829975 +0000 UTC m=+0.204527737 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9)
Oct 13 14:41:18 standalone.localdomain podman[299882]: unhealthy
Oct 13 14:41:18 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:41:18 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:41:18 standalone.localdomain podman[300069]: 2025-10-13 14:41:18.449382196 +0000 UTC m=+0.061674379 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-account, release=1, architecture=x86_64, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T16:11:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:41:18 standalone.localdomain podman[299972]: 2025-10-13 14:41:18.477663567 +0000 UTC m=+0.185390408 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, container_name=swift_proxy, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, 
description=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, name=rhosp17/openstack-swift-proxy-server, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 14:41:18 standalone.localdomain podman[299876]: 2025-10-13 14:41:18.278526267 +0000 UTC m=+0.107819120 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-container, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, summary=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, 
io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:41:18 standalone.localdomain podman[299875]: 2025-10-13 14:41:18.386032467 +0000 UTC m=+0.219200770 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, container_name=swift_object_server, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, io.buildah.version=1.33.12, version=17.1.9, vcs-type=git, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-swift-object)
Oct 13 14:41:18 standalone.localdomain podman[299876]: 2025-10-13 14:41:18.511520099 +0000 UTC m=+0.340812932 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, version=17.1.9, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T15:54:32, summary=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:41:18 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:41:18 standalone.localdomain podman[300023]: 2025-10-13 14:41:18.520192577 +0000 UTC m=+0.182105007 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., container_name=nova_migration_target, distribution-scope=public, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64)
Oct 13 14:41:18 standalone.localdomain podman[300041]: 2025-10-13 14:41:18.427756572 +0000 UTC m=+0.084950287 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, container_name=glance_api_internal, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T13:58:20, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, 
description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, config_id=tripleo_step4, name=rhosp17/openstack-glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible)
Oct 13 14:41:18 standalone.localdomain podman[299980]: 2025-10-13 14:41:18.563350865 +0000 UTC m=+0.267721153 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=ovn_controller, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1)
Oct 13 14:41:18 standalone.localdomain podman[299980]: 2025-10-13 14:41:18.576950374 +0000 UTC m=+0.281320662 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true)
Oct 13 14:41:18 standalone.localdomain podman[299980]: unhealthy
Oct 13 14:41:18 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:41:18 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:41:18 standalone.localdomain podman[299875]: 2025-10-13 14:41:18.626769388 +0000 UTC m=+0.459937711 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, container_name=swift_object_server, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, distribution-scope=public, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-swift-object)
Oct 13 14:41:18 standalone.localdomain podman[300069]: 2025-10-13 14:41:18.646789754 +0000 UTC m=+0.259081937 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-account, 
vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 14:41:18 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:41:18 standalone.localdomain podman[299972]: 2025-10-13 14:41:18.665849771 +0000 UTC m=+0.373576602 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-swift-proxy-server, release=1, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, version=17.1.9, container_name=swift_proxy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:41:18 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:41:18 standalone.localdomain podman[300041]: 2025-10-13 14:41:18.677121498 +0000 UTC m=+0.334315193 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, name=rhosp17/openstack-glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, container_name=glance_api_internal, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, version=17.1.9, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, architecture=x86_64)
Oct 13 14:41:18 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:41:18 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:41:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:41:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2296884229' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:41:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:41:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2296884229' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:41:18 standalone.localdomain ceph-mon[29756]: pgmap v2003: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:18 standalone.localdomain podman[300023]: 2025-10-13 14:41:18.850820005 +0000 UTC m=+0.512732475 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20250721.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, architecture=x86_64)
Oct 13 14:41:18 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:41:18 standalone.localdomain runuser[299957]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:41:19 standalone.localdomain runuser[300290]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:41:19 standalone.localdomain runuser[300290]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:41:19 standalone.localdomain runuser[300393]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:41:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2296884229' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:41:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2296884229' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:41:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2004: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:20 standalone.localdomain runuser[300393]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:41:20 standalone.localdomain ceph-mon[29756]: pgmap v2004: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2005: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:41:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:41:22 standalone.localdomain podman[300475]: 2025-10-13 14:41:22.806844028 +0000 UTC m=+0.073455262 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, container_name=keystone_cron, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, distribution-scope=public, build-date=2025-07-21T13:27:18, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-keystone-container, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, 
tcib_managed=true, release=1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, name=rhosp17/openstack-keystone, summary=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:41:22 standalone.localdomain podman[300475]: 2025-10-13 14:41:22.8406874 +0000 UTC m=+0.107298704 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-keystone-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, release=1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, build-date=2025-07-21T13:27:18, container_name=keystone_cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, summary=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:41:22 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:41:23
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'manila_data', 'vms', '.mgr', 'images', 'manila_metadata', 'volumes']
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:41:23 standalone.localdomain ceph-mon[29756]: pgmap v2005: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:41:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2006: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:24 standalone.localdomain ceph-mon[29756]: pgmap v2006: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:41:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:41:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:41:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:41:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:41:24 standalone.localdomain systemd[1]: tmp-crun.fLLzy4.mount: Deactivated successfully.
Oct 13 14:41:24 standalone.localdomain podman[300568]: 2025-10-13 14:41:24.813552773 +0000 UTC m=+0.077614320 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, name=rhosp17/openstack-iscsid, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Oct 13 14:41:24 standalone.localdomain podman[300586]: 2025-10-13 14:41:24.81636775 +0000 UTC m=+0.067199120 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, vcs-type=git, com.redhat.component=openstack-mariadb-container, build-date=2025-07-21T12:58:45, container_name=clustercheck, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', 
'/var/lib/mysql:/var/lib/mysql']}, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:41:24 standalone.localdomain podman[300568]: 2025-10-13 14:41:24.82381979 +0000 UTC m=+0.087881367 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, container_name=iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T13:27:15, architecture=x86_64)
Oct 13 14:41:24 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:41:24 standalone.localdomain podman[300570]: 2025-10-13 14:41:24.856158075 +0000 UTC m=+0.112042641 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-sriov-agent, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, build-date=2025-07-21T16:03:34, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, com.redhat.component=openstack-neutron-sriov-agent-container, vendor=Red Hat, Inc., config_id=tripleo_step4)
Oct 13 14:41:24 standalone.localdomain podman[300586]: 2025-10-13 14:41:24.897824137 +0000 UTC m=+0.148655527 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, container_name=clustercheck, config_id=tripleo_step2, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, tcib_managed=true, vendor=Red Hat, Inc., release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp 
openstack osp-17.1, com.redhat.component=openstack-mariadb-container, name=rhosp17/openstack-mariadb, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:45, io.openshift.expose-services=)
Oct 13 14:41:24 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:41:24 standalone.localdomain podman[300569]: 2025-10-13 14:41:24.913420248 +0000 UTC m=+0.174041849 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:41:24 standalone.localdomain podman[300570]: 2025-10-13 14:41:24.919874236 +0000 UTC m=+0.175758792 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, architecture=x86_64, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/sys/class/net:/sys/class/net:rw']}, com.redhat.component=openstack-neutron-sriov-agent-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-sriov-agent, batch=17.1_20250721.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=)
Oct 13 14:41:24 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:41:24 standalone.localdomain podman[300569]: 2025-10-13 14:41:24.9349556 +0000 UTC m=+0.195577221 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:41:24 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:41:24 standalone.localdomain podman[300567]: 2025-10-13 14:41:24.962377494 +0000 UTC m=+0.225792662 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-dhcp-agent-container, container_name=neutron_dhcp, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-07-21T16:28:54, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, release=1, io.openshift.expose-services=, name=rhosp17/openstack-neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:41:25 standalone.localdomain podman[300567]: 2025-10-13 14:41:25.001733986 +0000 UTC m=+0.265149153 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., release=1, name=rhosp17/openstack-neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:54)
Oct 13 14:41:25 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:41:25 standalone.localdomain sudo[300706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:41:25 standalone.localdomain sudo[300706]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:41:25 standalone.localdomain sudo[300706]: pam_unix(sudo:session): session closed for user root
Oct 13 14:41:25 standalone.localdomain sudo[300721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:41:25 standalone.localdomain sudo[300721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:41:25 standalone.localdomain systemd[1]: tmp-crun.U0f0jd.mount: Deactivated successfully.
Oct 13 14:41:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2007: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:26 standalone.localdomain sudo[300721]: pam_unix(sudo:session): session closed for user root
Oct 13 14:41:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:41:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:41:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:41:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:41:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:41:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:41:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:41:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:41:26 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 1410f223-3947-44bf-b2d3-08db24896914 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:41:26 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 1410f223-3947-44bf-b2d3-08db24896914 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:41:26 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 1410f223-3947-44bf-b2d3-08db24896914 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:41:26 standalone.localdomain sudo[300776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:41:26 standalone.localdomain sudo[300776]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:41:26 standalone.localdomain sudo[300776]: pam_unix(sudo:session): session closed for user root
Oct 13 14:41:27 standalone.localdomain ceph-mon[29756]: pgmap v2007: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:41:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:41:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:41:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:41:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2008: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:41:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:41:28 standalone.localdomain ceph-mon[29756]: pgmap v2008: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:41:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:41:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:41:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2009: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:41:30 standalone.localdomain ceph-mon[29756]: pgmap v2009: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:30 standalone.localdomain runuser[301032]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:41:31 standalone.localdomain runuser[301032]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:41:31 standalone.localdomain runuser[301101]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:41:32 standalone.localdomain runuser[301101]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:41:32 standalone.localdomain runuser[301155]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:41:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2010: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:32 standalone.localdomain ceph-mon[29756]: pgmap v2010: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:41:32 standalone.localdomain runuser[301155]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:41:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2011: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:34 standalone.localdomain ceph-mon[29756]: pgmap v2011: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2012: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:36 standalone.localdomain ceph-mon[29756]: pgmap v2012: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:41:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2013: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:38 standalone.localdomain ceph-mon[29756]: pgmap v2013: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2014: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:41 standalone.localdomain ceph-mon[29756]: pgmap v2014: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2015: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:42 standalone.localdomain ceph-mon[29756]: pgmap v2015: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:41:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:41:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:41:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:41:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:41:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:41:42 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:41:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:41:42 standalone.localdomain recover_tripleo_nova_virtqemud[301606]: 93291
Oct 13 14:41:42 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:41:42 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:41:42 standalone.localdomain podman[301574]: 2025-10-13 14:41:42.832537877 +0000 UTC m=+0.073978558 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, build-date=2025-07-21T15:56:26, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, com.redhat.component=openstack-heat-api-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, version=17.1.9)
Oct 13 14:41:42 standalone.localdomain podman[301574]: 2025-10-13 14:41:42.837347795 +0000 UTC m=+0.078788466 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, description=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, release=1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-heat-api, build-date=2025-07-21T15:56:26, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, container_name=heat_api_cron, vcs-type=git)
Oct 13 14:41:42 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:41:42 standalone.localdomain systemd[1]: tmp-crun.meOrCb.mount: Deactivated successfully.
Oct 13 14:41:42 standalone.localdomain podman[301575]: 2025-10-13 14:41:42.89304235 +0000 UTC m=+0.132822020 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, release=1, distribution-scope=public, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:41:42 standalone.localdomain podman[301575]: 2025-10-13 14:41:42.931812104 +0000 UTC m=+0.171591784 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-container, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api)
Oct 13 14:41:42 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:41:42 standalone.localdomain podman[301585]: 2025-10-13 14:41:42.98141984 +0000 UTC m=+0.213894885 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T13:07:52, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 14:41:42 standalone.localdomain podman[301561]: 2025-10-13 14:41:42.935859518 +0000 UTC m=+0.192270630 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, build-date=2025-07-21T14:49:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-heat-api-cfn-container, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', 
'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, container_name=heat_api_cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:41:43 standalone.localdomain podman[301561]: 2025-10-13 14:41:43.019798122 +0000 UTC m=+0.276209234 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api-cfn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, container_name=heat_api_cfn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 
heat-api-cfn, architecture=x86_64, com.redhat.component=openstack-heat-api-cfn-container, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_id=tripleo_step4, build-date=2025-07-21T14:49:55, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1)
Oct 13 14:41:43 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:41:43 standalone.localdomain podman[301562]: 2025-10-13 14:41:43.105406297 +0000 UTC m=+0.358434705 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, vcs-type=git, com.redhat.component=openstack-heat-engine-container, config_id=tripleo_step4, container_name=heat_engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, name=rhosp17/openstack-heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public)
Oct 13 14:41:43 standalone.localdomain podman[301562]: 2025-10-13 14:41:43.16461432 +0000 UTC m=+0.417642728 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, name=rhosp17/openstack-heat-engine, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, container_name=heat_engine, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T15:44:11, vcs-type=git, io.buildah.version=1.33.12, config_id=tripleo_step4)
Oct 13 14:41:43 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:41:43 standalone.localdomain podman[301585]: 2025-10-13 14:41:43.273607915 +0000 UTC m=+0.506083030 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, distribution-scope=public, container_name=logrotate_crond, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, build-date=2025-07-21T13:07:52)
Oct 13 14:41:43 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:41:43 standalone.localdomain podman[301568]: 2025-10-13 14:41:43.36600633 +0000 UTC m=+0.610697451 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-memcached-container, version=17.1.9, config_id=tripleo_step1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., container_name=memcached, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, build-date=2025-07-21T12:58:43, name=rhosp17/openstack-memcached, summary=Red Hat OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, 
io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vcs-type=git, description=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, managed_by=tripleo_ansible)
Oct 13 14:41:43 standalone.localdomain podman[301568]: 2025-10-13 14:41:43.390997579 +0000 UTC m=+0.635688710 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, container_name=memcached, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-memcached, version=17.1.9, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, build-date=2025-07-21T12:58:43, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step1, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, distribution-scope=public)
Oct 13 14:41:43 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:41:43 standalone.localdomain runuser[301700]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:41:44 standalone.localdomain runuser[301700]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:41:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2016: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:44 standalone.localdomain runuser[301769]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:41:44 standalone.localdomain ceph-mon[29756]: pgmap v2016: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:44 standalone.localdomain runuser[301769]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:41:44 standalone.localdomain runuser[301886]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:41:45 standalone.localdomain runuser[301886]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:41:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2017: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:47 standalone.localdomain ceph-mon[29756]: pgmap v2017: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:41:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2018: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:41:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:41:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:41:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:41:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:41:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:41:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:41:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:41:48 standalone.localdomain ceph-mon[29756]: pgmap v2018: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:48 standalone.localdomain systemd[1]: tmp-crun.5wffOe.mount: Deactivated successfully.
Oct 13 14:41:48 standalone.localdomain podman[302059]: 2025-10-13 14:41:48.859787772 +0000 UTC m=+0.090120155 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, release=1, version=17.1.9, com.redhat.component=openstack-swift-account-container, name=rhosp17/openstack-swift-account, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, build-date=2025-07-21T16:11:22, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, container_name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:41:48 standalone.localdomain podman[302044]: 2025-10-13 14:41:48.897581405 +0000 UTC m=+0.134231063 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, tcib_managed=true, name=rhosp17/openstack-swift-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, io.openshift.expose-services=)
Oct 13 14:41:48 standalone.localdomain podman[302045]: 2025-10-13 14:41:48.910686719 +0000 UTC m=+0.146409048 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1)
Oct 13 14:41:48 standalone.localdomain podman[302027]: 2025-10-13 14:41:48.867321823 +0000 UTC m=+0.125701329 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, container_name=swift_object_server, version=17.1.9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:41:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:41:48 standalone.localdomain podman[302028]: 2025-10-13 14:41:48.975214445 +0000 UTC m=+0.215536026 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, name=rhosp17/openstack-swift-proxy-server, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-swift-proxy-server-container, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:41:48 standalone.localdomain podman[302055]: 2025-10-13 14:41:48.982989725 +0000 UTC m=+0.214664670 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245)
Oct 13 14:41:48 standalone.localdomain podman[302034]: 2025-10-13 14:41:48.839945601 +0000 UTC m=+0.090874339 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, summary=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, container_name=glance_api_internal, release=1, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 14:41:48 standalone.localdomain podman[302055]: 2025-10-13 14:41:48.996746588 +0000 UTC m=+0.228421553 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_id=tripleo_step4)
Oct 13 14:41:49 standalone.localdomain podman[302055]: unhealthy
Oct 13 14:41:49 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:41:49 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:41:49 standalone.localdomain podman[302034]: 2025-10-13 14:41:49.030795666 +0000 UTC m=+0.281724404 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.openshift.expose-services=, build-date=2025-07-21T13:58:20, version=17.1.9, com.redhat.component=openstack-glance-api-container, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, summary=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step4, distribution-scope=public, 
vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_internal, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Oct 13 14:41:49 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:41:49 standalone.localdomain podman[302027]: 2025-10-13 14:41:49.041231138 +0000 UTC m=+0.299610634 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, version=17.1.9, vcs-type=git, tcib_managed=true, vendor=Red Hat, 
Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, distribution-scope=public, name=rhosp17/openstack-swift-object, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 14:41:49 standalone.localdomain podman[302045]: 2025-10-13 14:41:49.047972105 +0000 UTC m=+0.283694434 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true)
Oct 13 14:41:49 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:41:49 standalone.localdomain podman[302045]: unhealthy
Oct 13 14:41:49 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:41:49 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:41:49 standalone.localdomain podman[302020]: 2025-10-13 14:41:49.126185033 +0000 UTC m=+0.387562611 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vendor=Red Hat, Inc., architecture=x86_64, container_name=glance_api_cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, release=1, name=rhosp17/openstack-glance-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team)
Oct 13 14:41:49 standalone.localdomain podman[302059]: 2025-10-13 14:41:49.135163999 +0000 UTC m=+0.365496462 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., container_name=swift_account_server, release=1, com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', 
'/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:41:49 standalone.localdomain podman[302044]: 2025-10-13 14:41:49.142908298 +0000 UTC m=+0.379557986 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, vcs-type=git, name=rhosp17/openstack-swift-container, io.buildah.version=1.33.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, container_name=swift_container_server, build-date=2025-07-21T15:54:32, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:41:49 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:41:49 standalone.localdomain podman[302028]: 2025-10-13 14:41:49.158730324 +0000 UTC m=+0.399051935 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, tcib_managed=true, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, name=rhosp17/openstack-swift-proxy-server, build-date=2025-07-21T14:48:37, version=17.1.9, architecture=x86_64, 
vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=swift_proxy, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1)
Oct 13 14:41:49 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:41:49 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:41:49 standalone.localdomain podman[302020]: 2025-10-13 14:41:49.187124749 +0000 UTC m=+0.448502357 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.9, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, tcib_managed=true)
Oct 13 14:41:49 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:41:49 standalone.localdomain podman[302231]: 2025-10-13 14:41:49.110147439 +0000 UTC m=+0.170736737 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:41:49 standalone.localdomain podman[302231]: 2025-10-13 14:41:49.491054635 +0000 UTC m=+0.551643923 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=nova_migration_target, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, 
io.buildah.version=1.33.12, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1)
Oct 13 14:41:49 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:41:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2019: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:51 standalone.localdomain ceph-mon[29756]: pgmap v2019: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2020: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.379992) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366512380025, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 2109, "num_deletes": 251, "total_data_size": 1991935, "memory_usage": 2032304, "flush_reason": "Manual Compaction"}
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366512392308, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 1932759, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45959, "largest_seqno": 48067, "table_properties": {"data_size": 1924524, "index_size": 5002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16871, "raw_average_key_size": 19, "raw_value_size": 1907808, "raw_average_value_size": 2239, "num_data_blocks": 227, "num_entries": 852, "num_filter_entries": 852, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760366311, "oldest_key_time": 1760366311, "file_creation_time": 1760366512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 12385 microseconds, and 4106 cpu microseconds.
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.392365) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 1932759 bytes OK
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.392394) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.397030) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.397045) EVENT_LOG_v1 {"time_micros": 1760366512397040, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.397062) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1982945, prev total WAL file size 1982945, number of live WAL files 2.
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.397628) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(1887KB)], [101(5400KB)]
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366512397692, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 7463179, "oldest_snapshot_seqno": -1}
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 5199 keys, 6454757 bytes, temperature: kUnknown
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366512434181, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 6454757, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6420612, "index_size": 20032, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 130112, "raw_average_key_size": 25, "raw_value_size": 6326944, "raw_average_value_size": 1216, "num_data_blocks": 830, "num_entries": 5199, "num_filter_entries": 5199, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760366512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.434443) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 6454757 bytes
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.435986) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.2 rd, 176.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 5.3 +0.0 blob) out(6.2 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 5717, records dropped: 518 output_compression: NoCompression
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.436007) EVENT_LOG_v1 {"time_micros": 1760366512435998, "job": 60, "event": "compaction_finished", "compaction_time_micros": 36556, "compaction_time_cpu_micros": 15057, "output_level": 6, "num_output_files": 1, "total_output_size": 6454757, "num_input_records": 5717, "num_output_records": 5199, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366512436295, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366512436821, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.397519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.436928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.436936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.436939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.436941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:41:52.436944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:41:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:41:53 standalone.localdomain ceph-mon[29756]: pgmap v2020: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:41:53 standalone.localdomain podman[302419]: 2025-10-13 14:41:53.809175645 +0000 UTC m=+0.072863614 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:18, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=keystone_cron, name=rhosp17/openstack-keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, 
batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 keystone, release=1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1)
Oct 13 14:41:53 standalone.localdomain podman[302419]: 2025-10-13 14:41:53.845860224 +0000 UTC m=+0.109548213 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, name=rhosp17/openstack-keystone, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=keystone_cron, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.description=Red Hat OpenStack Platform 
17.1 keystone, io.openshift.expose-services=, io.buildah.version=1.33.12, config_id=tripleo_step3, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 keystone, release=1)
Oct 13 14:41:53 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:41:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2021: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:54 standalone.localdomain ceph-mon[29756]: pgmap v2021: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:41:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:41:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:41:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:41:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:41:55 standalone.localdomain podman[302511]: 2025-10-13 14:41:55.833431479 +0000 UTC m=+0.068690295 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=iscsid, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, name=rhosp17/openstack-iscsid, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Oct 13 14:41:55 standalone.localdomain podman[302511]: 2025-10-13 14:41:55.841610751 +0000 UTC m=+0.076869517 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, tcib_managed=true, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, distribution-scope=public, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, container_name=iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, 
vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container)
Oct 13 14:41:55 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:41:55 standalone.localdomain podman[302513]: 2025-10-13 14:41:55.885470811 +0000 UTC m=+0.116355793 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, build-date=2025-07-21T16:03:34, release=1, version=17.1.9, config_id=tripleo_step4, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_sriov_agent, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1)
Oct 13 14:41:55 standalone.localdomain podman[302512]: 2025-10-13 14:41:55.940601798 +0000 UTC m=+0.171703117 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, release=1, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, container_name=nova_compute, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, version=17.1.9)
Oct 13 14:41:55 standalone.localdomain podman[302513]: 2025-10-13 14:41:55.956816587 +0000 UTC m=+0.187701559 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=neutron_sriov_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, 
vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-sriov-agent-container, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, build-date=2025-07-21T16:03:34, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:41:55 standalone.localdomain podman[302512]: 2025-10-13 14:41:55.963809533 +0000 UTC m=+0.194910832 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, release=1, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:41:55 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:41:55 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:41:55 standalone.localdomain podman[302524]: 2025-10-13 14:41:55.998429488 +0000 UTC m=+0.225154451 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, container_name=clustercheck, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, distribution-scope=public, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, com.redhat.component=openstack-mariadb-container, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, release=1, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T12:58:45, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., 
maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1)
Oct 13 14:41:56 standalone.localdomain podman[302524]: 2025-10-13 14:41:56.037807581 +0000 UTC m=+0.264532544 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, build-date=2025-07-21T12:58:45, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=1, config_id=tripleo_step2, architecture=x86_64, version=17.1.9, vcs-type=git, container_name=clustercheck, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', 
'/var/lib/mysql:/var/lib/mysql']}, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:41:56 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:41:56 standalone.localdomain podman[302510]: 2025-10-13 14:41:56.039285786 +0000 UTC m=+0.275200313 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, tcib_managed=true, container_name=neutron_dhcp, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T16:28:54, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, release=1)
Oct 13 14:41:56 standalone.localdomain podman[302510]: 2025-10-13 14:41:56.11998182 +0000 UTC m=+0.355896337 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, container_name=neutron_dhcp, version=17.1.9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-dhcp-agent, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:54, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, config_id=tripleo_step4, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container)
Oct 13 14:41:56 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:41:56 standalone.localdomain runuser[302647]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:41:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2022: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:56 standalone.localdomain ceph-mon[29756]: pgmap v2022: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:56 standalone.localdomain runuser[302647]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:41:57 standalone.localdomain runuser[302716]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:41:57 standalone.localdomain runuser[302716]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:41:57 standalone.localdomain runuser[302770]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:41:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:41:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2023: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:41:58 standalone.localdomain runuser[302770]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:41:58 standalone.localdomain ceph-mon[29756]: pgmap v2023: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2024: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:00 standalone.localdomain ceph-mon[29756]: pgmap v2024: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2025: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:02 standalone.localdomain ceph-mon[29756]: pgmap v2025: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:42:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2026: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:05 standalone.localdomain ceph-mon[29756]: pgmap v2026: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2027: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:06 standalone.localdomain ceph-mon[29756]: pgmap v2027: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:07 standalone.localdomain podman[303207]: 2025-10-13 14:42:07.249707127 +0000 UTC m=+0.076494896 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, batch=17.1_20250721.1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, summary=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, version=17.1.9, build-date=2025-07-21T12:58:45, io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:42:07 standalone.localdomain podman[303207]: 2025-10-13 14:42:07.281128135 +0000 UTC m=+0.107915894 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, distribution-scope=public, version=17.1.9, description=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, release=1, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:45, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1)
Oct 13 14:42:07 standalone.localdomain podman[303240]: 2025-10-13 14:42:07.799802562 +0000 UTC m=+0.087810605 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, com.redhat.component=openstack-haproxy-container, build-date=2025-07-21T13:08:11, version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, name=rhosp17/openstack-haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 14:42:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:42:07 standalone.localdomain podman[303240]: 2025-10-13 14:42:07.831849008 +0000 UTC m=+0.119857031 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, version=17.1.9, com.redhat.component=openstack-haproxy-container, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 haproxy, build-date=2025-07-21T13:08:11, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, description=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, name=rhosp17/openstack-haproxy, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Oct 13 14:42:08 standalone.localdomain podman[303273]: 2025-10-13 14:42:08.14378306 +0000 UTC m=+0.069137339 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, build-date=2025-07-21T13:08:05, name=rhosp17/openstack-rabbitmq, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-rabbitmq-container)
Oct 13 14:42:08 standalone.localdomain podman[303273]: 2025-10-13 14:42:08.17691907 +0000 UTC m=+0.102273329 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, tcib_managed=true, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.component=openstack-rabbitmq-container, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:42:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2028: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:08 standalone.localdomain ceph-mon[29756]: pgmap v2028: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:08 standalone.localdomain runuser[303439]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:42:09 standalone.localdomain runuser[303439]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:42:09 standalone.localdomain runuser[303549]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:42:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2029: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:10 standalone.localdomain runuser[303549]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:42:10 standalone.localdomain runuser[303652]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:42:10 standalone.localdomain ceph-mon[29756]: pgmap v2029: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:10 standalone.localdomain runuser[303652]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:42:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2030: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:12 standalone.localdomain ceph-mon[29756]: pgmap v2030: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:42:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:42:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:42:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:42:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:42:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:42:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:42:13 standalone.localdomain podman[303734]: 2025-10-13 14:42:13.796363538 +0000 UTC m=+0.063391352 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, release=1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, 
container_name=heat_api_cfn, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-heat-api-cfn, build-date=2025-07-21T14:49:55, com.redhat.component=openstack-heat-api-cfn-container, io.buildah.version=1.33.12)
Oct 13 14:42:13 standalone.localdomain podman[303744]: 2025-10-13 14:42:13.810772821 +0000 UTC m=+0.067507719 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, version=17.1.9, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-07-21T13:07:52, tcib_managed=true)
Oct 13 14:42:13 standalone.localdomain podman[303734]: 2025-10-13 14:42:13.819881492 +0000 UTC m=+0.086909306 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, build-date=2025-07-21T14:49:55, release=1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, container_name=heat_api_cfn, io.buildah.version=1.33.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-cfn-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:42:13 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:42:13 standalone.localdomain systemd[1]: tmp-crun.tWQYZ8.mount: Deactivated successfully.
Oct 13 14:42:13 standalone.localdomain podman[303744]: 2025-10-13 14:42:13.870949744 +0000 UTC m=+0.127684652 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.9, build-date=2025-07-21T13:07:52, release=1, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, distribution-scope=public)
Oct 13 14:42:13 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:42:13 standalone.localdomain podman[303738]: 2025-10-13 14:42:13.91631105 +0000 UTC m=+0.174203313 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20250721.1, container_name=heat_api, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-heat-api, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 14:42:13 standalone.localdomain podman[303735]: 2025-10-13 14:42:13.971738277 +0000 UTC m=+0.234615484 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:44:11, config_id=tripleo_step4, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 heat-engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, vcs-type=git, io.k8s.description=Red Hat OpenStack 
Platform 17.1 heat-engine, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, container_name=heat_engine, com.redhat.component=openstack-heat-engine-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9)
Oct 13 14:42:13 standalone.localdomain podman[303736]: 2025-10-13 14:42:13.87438222 +0000 UTC m=+0.137607377 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, build-date=2025-07-21T12:58:43, batch=17.1_20250721.1, name=rhosp17/openstack-memcached, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, release=1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-memcached-container, config_id=tripleo_step1, container_name=memcached, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', 
'/var/log/containers/memcached:/var/log/memcached:rw']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct 13 14:42:13 standalone.localdomain podman[303737]: 2025-10-13 14:42:13.946297554 +0000 UTC m=+0.208381967 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cron, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api)
Oct 13 14:42:14 standalone.localdomain podman[303738]: 2025-10-13 14:42:14.003401861 +0000 UTC m=+0.261294124 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, build-date=2025-07-21T15:56:26, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, 
vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, container_name=heat_api, version=17.1.9, distribution-scope=public)
Oct 13 14:42:14 standalone.localdomain podman[303736]: 2025-10-13 14:42:14.011084918 +0000 UTC m=+0.274310055 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., config_id=tripleo_step1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, build-date=2025-07-21T12:58:43, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, name=rhosp17/openstack-memcached, vcs-type=git, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, managed_by=tripleo_ansible, 
vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, container_name=memcached, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:42:14 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:42:14 standalone.localdomain podman[303737]: 2025-10-13 14:42:14.028952748 +0000 UTC m=+0.291037191 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, name=rhosp17/openstack-heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, 
vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron)
Oct 13 14:42:14 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:42:14 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:42:14 standalone.localdomain podman[303735]: 2025-10-13 14:42:14.06602073 +0000 UTC m=+0.328897917 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vendor=Red Hat, Inc., name=rhosp17/openstack-heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 
17.1 heat-engine, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:44:11, distribution-scope=public, release=1, com.redhat.component=openstack-heat-engine-container)
Oct 13 14:42:14 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:42:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2031: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:14 standalone.localdomain ceph-mon[29756]: pgmap v2031: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Oct 13 14:42:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.34:0/169224357' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Oct 13 14:42:14 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.14658 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 13 14:42:14 standalone.localdomain ceph-mgr[29999]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 13 14:42:14 standalone.localdomain ceph-mgr[29999]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 13 14:42:15 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.34:0/169224357' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Oct 13 14:42:15 standalone.localdomain ceph-mon[29756]: from='client.14658 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 13 14:42:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2032: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:16 standalone.localdomain ceph-mon[29756]: pgmap v2032: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:42:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2033: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:18 standalone.localdomain ceph-mon[29756]: pgmap v2033: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:42:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:42:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:42:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:42:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:42:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:42:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:42:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:42:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:42:19 standalone.localdomain systemd[1]: tmp-crun.ZSaADw.mount: Deactivated successfully.
Oct 13 14:42:19 standalone.localdomain podman[304157]: 2025-10-13 14:42:19.921144384 +0000 UTC m=+0.145521361 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, 
architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=swift_container_server, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:42:19 standalone.localdomain podman[304135]: 2025-10-13 14:42:19.847732273 +0000 UTC m=+0.102679211 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_cron, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', 
'/var/lib/glance:/var/lib/glance:shared']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:42:19 standalone.localdomain podman[304136]: 2025-10-13 14:42:19.903563642 +0000 UTC m=+0.154092625 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=swift_object_server, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d)
Oct 13 14:42:19 standalone.localdomain podman[304135]: 2025-10-13 14:42:19.977433407 +0000 UTC m=+0.232380345 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, 
managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, container_name=glance_api_cron, io.openshift.expose-services=, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:42:19 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:42:20 standalone.localdomain podman[304143]: 2025-10-13 14:42:19.951046624 +0000 UTC m=+0.189433052 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server, tcib_managed=true, container_name=swift_proxy, version=17.1.9, release=1, com.redhat.component=openstack-swift-proxy-server-container, vendor=Red Hat, Inc., vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.buildah.version=1.33.12, batch=17.1_20250721.1, architecture=x86_64)
Oct 13 14:42:20 standalone.localdomain podman[304160]: 2025-10-13 14:42:19.880554144 +0000 UTC m=+0.109985117 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.33.12, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.)
Oct 13 14:42:20 standalone.localdomain podman[304160]: 2025-10-13 14:42:20.061115523 +0000 UTC m=+0.290546496 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Oct 13 14:42:20 standalone.localdomain podman[304160]: unhealthy
Oct 13 14:42:20 standalone.localdomain podman[304165]: 2025-10-13 14:42:20.067942803 +0000 UTC m=+0.295707415 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, container_name=swift_account_server, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:42:20 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:42:20 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:42:20 standalone.localdomain podman[304149]: 2025-10-13 14:42:20.045243424 +0000 UTC m=+0.277834364 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, name=rhosp17/openstack-glance-api, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-glance-api-container, 
container_name=glance_api_internal, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, build-date=2025-07-21T13:58:20, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9)
Oct 13 14:42:20 standalone.localdomain podman[304157]: 2025-10-13 14:42:20.100519396 +0000 UTC m=+0.324896383 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-swift-container-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:42:20 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:42:20 standalone.localdomain podman[304137]: 2025-10-13 14:42:20.115729024 +0000 UTC m=+0.360900482 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, 
managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, architecture=x86_64, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:42:20 standalone.localdomain podman[304143]: 2025-10-13 14:42:20.123230995 +0000 UTC m=+0.361617413 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, version=17.1.9, container_name=swift_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, release=1, batch=17.1_20250721.1)
Oct 13 14:42:20 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:42:20 standalone.localdomain podman[304164]: 2025-10-13 14:42:20.045579294 +0000 UTC m=+0.273943314 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-ovn-controller-container, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:42:20 standalone.localdomain podman[304136]: 2025-10-13 14:42:20.168681814 +0000 UTC m=+0.419210787 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, 
distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, version=17.1.9, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true)
Oct 13 14:42:20 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:42:20 standalone.localdomain podman[304164]: 2025-10-13 14:42:20.18318244 +0000 UTC m=+0.411546490 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, release=1, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., container_name=ovn_controller, io.openshift.expose-services=)
Oct 13 14:42:20 standalone.localdomain podman[304164]: unhealthy
Oct 13 14:42:20 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:42:20 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:42:20 standalone.localdomain podman[304149]: 2025-10-13 14:42:20.226031669 +0000 UTC m=+0.458622599 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.component=openstack-glance-api-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., version=17.1.9, container_name=glance_api_internal, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, summary=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']})
Oct 13 14:42:20 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:42:20 standalone.localdomain podman[304165]: 2025-10-13 14:42:20.266595908 +0000 UTC m=+0.494360530 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, build-date=2025-07-21T16:11:22, vcs-type=git, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=swift_account_server, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1)
Oct 13 14:42:20 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:42:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2034: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:20 standalone.localdomain ceph-mon[29756]: pgmap v2034: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:20 standalone.localdomain podman[304137]: 2025-10-13 14:42:20.466067018 +0000 UTC m=+0.711238526 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, 
name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, tcib_managed=true, container_name=nova_migration_target, architecture=x86_64, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc.)
Oct 13 14:42:20 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:42:21 standalone.localdomain runuser[304408]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:42:22 standalone.localdomain runuser[304408]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:42:22 standalone.localdomain runuser[304469]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:42:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2035: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:22 standalone.localdomain ceph-mon[29756]: pgmap v2035: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:42:22 standalone.localdomain runuser[304469]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:42:23 standalone.localdomain runuser[304531]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:42:23
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'backups', 'volumes', '.mgr', 'vms', 'images', 'manila_data']
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:42:23 standalone.localdomain runuser[304531]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:42:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2036: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:24 standalone.localdomain ceph-mon[29756]: pgmap v2036: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:42:24 standalone.localdomain podman[304607]: 2025-10-13 14:42:24.812373186 +0000 UTC m=+0.067084516 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, container_name=keystone_cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, 
io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, version=17.1.9, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 keystone, com.redhat.component=openstack-keystone-container, maintainer=OpenStack TripleO Team, vcs-ref=0693142a4093f932157b8019660e85aa608befc8)
Oct 13 14:42:24 standalone.localdomain podman[304607]: 2025-10-13 14:42:24.822236169 +0000 UTC m=+0.076947559 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, architecture=x86_64, com.redhat.component=openstack-keystone-container, io.openshift.expose-services=, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, summary=Red Hat 
OpenStack Platform 17.1 keystone, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, version=17.1.9, container_name=keystone_cron, name=rhosp17/openstack-keystone, distribution-scope=public)
Oct 13 14:42:24 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:42:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2037: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:26 standalone.localdomain ceph-mon[29756]: pgmap v2037: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:26 standalone.localdomain sudo[304687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:42:26 standalone.localdomain sudo[304687]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:42:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:42:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:42:26 standalone.localdomain sudo[304687]: pam_unix(sudo:session): session closed for user root
Oct 13 14:42:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:42:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:42:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:42:26 standalone.localdomain sudo[304735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:42:26 standalone.localdomain sudo[304735]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:42:26 standalone.localdomain podman[304711]: 2025-10-13 14:42:26.627636867 +0000 UTC m=+0.066940701 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, distribution-scope=public, build-date=2025-07-21T13:27:15, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, 
name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3)
Oct 13 14:42:26 standalone.localdomain podman[304713]: 2025-10-13 14:42:26.639131421 +0000 UTC m=+0.072672228 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, tcib_managed=true, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, release=1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, container_name=neutron_sriov_agent, build-date=2025-07-21T16:03:34, vendor=Red Hat, Inc., io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-neutron-sriov-agent-container)
Oct 13 14:42:26 standalone.localdomain podman[304711]: 2025-10-13 14:42:26.661907622 +0000 UTC m=+0.101211436 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=iscsid, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.9, build-date=2025-07-21T13:27:15, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:42:26 standalone.localdomain podman[304724]: 2025-10-13 14:42:26.693761212 +0000 UTC m=+0.124722980 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git, config_id=tripleo_step2, container_name=clustercheck, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-mariadb-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']})
Oct 13 14:42:26 standalone.localdomain podman[304713]: 2025-10-13 14:42:26.698003373 +0000 UTC m=+0.131544230 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, distribution-scope=public, com.redhat.component=openstack-neutron-sriov-agent-container, name=rhosp17/openstack-neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T16:03:34, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-type=git, container_name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, batch=17.1_20250721.1)
Oct 13 14:42:26 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Deactivated successfully.
Oct 13 14:42:26 standalone.localdomain podman[304724]: 2025-10-13 14:42:26.730563975 +0000 UTC m=+0.161525753 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, architecture=x86_64, name=rhosp17/openstack-mariadb, release=1, tcib_managed=true, com.redhat.component=openstack-mariadb-container, container_name=clustercheck, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step2, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:42:26 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Deactivated successfully.
Oct 13 14:42:26 standalone.localdomain podman[304710]: 2025-10-13 14:42:26.775036414 +0000 UTC m=+0.215676350 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, container_name=neutron_dhcp, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-dhcp-agent-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 14:42:26 standalone.localdomain podman[304712]: 2025-10-13 14:42:26.731432513 +0000 UTC m=+0.168107236 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step5, io.buildah.version=1.33.12, container_name=nova_compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 13 14:42:26 standalone.localdomain podman[304710]: 2025-10-13 14:42:26.807905426 +0000 UTC m=+0.248545412 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T16:28:54, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-type=git, release=1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-dhcp-agent-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=neutron_dhcp, name=rhosp17/openstack-neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_id=tripleo_step4, version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Oct 13 14:42:26 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Deactivated successfully.
Oct 13 14:42:26 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:42:26 standalone.localdomain podman[304712]: 2025-10-13 14:42:26.915031964 +0000 UTC m=+0.351706737 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:42:26 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Deactivated successfully.
Oct 13 14:42:27 standalone.localdomain sudo[304735]: pam_unix(sudo:session): session closed for user root
Oct 13 14:42:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:42:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:42:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:42:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:42:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:42:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:42:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:42:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:42:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 83c331dc-1c55-49f1-b5b7-f7b789a56901 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:42:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 83c331dc-1c55-49f1-b5b7-f7b789a56901 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:42:27 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 83c331dc-1c55-49f1-b5b7-f7b789a56901 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:42:27 standalone.localdomain sudo[304884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:42:27 standalone.localdomain sudo[304884]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:42:27 standalone.localdomain sudo[304884]: pam_unix(sudo:session): session closed for user root
Oct 13 14:42:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:42:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:42:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:42:27 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:42:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2038: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:42:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:42:28 standalone.localdomain ceph-mon[29756]: pgmap v2038: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:42:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:42:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:42:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2039: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:42:30 standalone.localdomain ceph-mon[29756]: pgmap v2039: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2040: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:32 standalone.localdomain ceph-mon[29756]: pgmap v2040: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:42:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2041: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:34 standalone.localdomain runuser[305164]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:42:34 standalone.localdomain ceph-mon[29756]: pgmap v2041: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:34 standalone.localdomain runuser[305164]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:42:35 standalone.localdomain runuser[305235]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:42:35 standalone.localdomain runuser[305235]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:42:35 standalone.localdomain runuser[305345]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:42:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2042: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:36 standalone.localdomain ceph-mon[29756]: pgmap v2042: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:36 standalone.localdomain runuser[305345]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:42:37 standalone.localdomain sshd[305416]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:42:37 standalone.localdomain sshd[305416]: Accepted publickey for root from 192.168.122.11 port 39758 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:42:37 standalone.localdomain systemd-logind[45629]: New session 257 of user root.
Oct 13 14:42:37 standalone.localdomain systemd[1]: Started Session 257 of User root.
Oct 13 14:42:37 standalone.localdomain sshd[305416]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:42:37 standalone.localdomain sudo[305428]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource config galera-bundle
Oct 13 14:42:37 standalone.localdomain sudo[305428]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2043: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:38 standalone.localdomain sudo[305428]: pam_unix(sudo:session): session closed for user root
Oct 13 14:42:38 standalone.localdomain sshd[305427]: Received disconnect from 192.168.122.11 port 39758:11: disconnected by user
Oct 13 14:42:38 standalone.localdomain sshd[305427]: Disconnected from user root 192.168.122.11 port 39758
Oct 13 14:42:38 standalone.localdomain sshd[305416]: pam_unix(sshd:session): session closed for user root
Oct 13 14:42:38 standalone.localdomain systemd[1]: session-257.scope: Deactivated successfully.
Oct 13 14:42:38 standalone.localdomain systemd-logind[45629]: Session 257 logged out. Waiting for processes to exit.
Oct 13 14:42:38 standalone.localdomain systemd-logind[45629]: Removed session 257.
Oct 13 14:42:38 standalone.localdomain sshd[305444]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:42:38 standalone.localdomain sshd[305444]: Accepted publickey for root from 192.168.122.11 port 39770 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:42:38 standalone.localdomain systemd-logind[45629]: New session 258 of user root.
Oct 13 14:42:38 standalone.localdomain systemd[1]: Started Session 258 of User root.
Oct 13 14:42:38 standalone.localdomain sshd[305444]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:38 standalone.localdomain sudo[305477]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource disable galera-bundle
Oct 13 14:42:38 standalone.localdomain sudo[305477]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:38 standalone.localdomain ceph-mon[29756]: pgmap v2043: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:39 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:42:39 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       galera-bundle-podman-0               (                             standalone )  due to node availability
Oct 13 14:42:39 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       galera-bundle-0                      (                             standalone )  due to node availability
Oct 13 14:42:39 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       galera:0                             (               Promoted galera-bundle-0 )  due to node availability
Oct 13 14:42:39 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 66, saving inputs in /var/lib/pacemaker/pengine/pe-input-66.bz2
Oct 13 14:42:39 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating demote operation galera_demote_0 locally on galera-bundle-0
Oct 13 14:42:39 standalone.localdomain sudo[305477]: pam_unix(sudo:session): session closed for user root
Oct 13 14:42:39 standalone.localdomain sshd[305447]: Received disconnect from 192.168.122.11 port 39770:11: disconnected by user
Oct 13 14:42:39 standalone.localdomain sshd[305447]: Disconnected from user root 192.168.122.11 port 39770
Oct 13 14:42:39 standalone.localdomain sshd[305444]: pam_unix(sshd:session): session closed for user root
Oct 13 14:42:39 standalone.localdomain systemd[1]: session-258.scope: Deactivated successfully.
Oct 13 14:42:39 standalone.localdomain systemd-logind[45629]: Session 258 logged out. Waiting for processes to exit.
Oct 13 14:42:39 standalone.localdomain systemd-logind[45629]: Removed session 258.
Oct 13 14:42:39 standalone.localdomain sshd[305600]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:42:39 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of demote operation for galera on galera-bundle-0
Oct 13 14:42:39 standalone.localdomain sshd[305600]: Accepted publickey for root from 192.168.122.11 port 39774 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:42:39 standalone.localdomain systemd-logind[45629]: New session 259 of user root.
Oct 13 14:42:39 standalone.localdomain systemd[1]: Started Session 259 of User root.
Oct 13 14:42:39 standalone.localdomain sshd[305600]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:39 standalone.localdomain sudo[305692]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource config haproxy-bundle
Oct 13 14:42:39 standalone.localdomain sudo[305692]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:40 standalone.localdomain sudo[305692]: pam_unix(sudo:session): session closed for user root
Oct 13 14:42:40 standalone.localdomain sshd[305681]: Received disconnect from 192.168.122.11 port 39774:11: disconnected by user
Oct 13 14:42:40 standalone.localdomain sshd[305681]: Disconnected from user root 192.168.122.11 port 39774
Oct 13 14:42:40 standalone.localdomain sshd[305600]: pam_unix(sshd:session): session closed for user root
Oct 13 14:42:40 standalone.localdomain systemd[1]: session-259.scope: Deactivated successfully.
Oct 13 14:42:40 standalone.localdomain systemd-logind[45629]: Session 259 logged out. Waiting for processes to exit.
Oct 13 14:42:40 standalone.localdomain systemd-logind[45629]: Removed session 259.
Oct 13 14:42:40 standalone.localdomain sshd[305756]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:42:40 standalone.localdomain sshd[305756]: Accepted publickey for root from 192.168.122.11 port 47876 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:42:40 standalone.localdomain systemd-logind[45629]: New session 260 of user root.
Oct 13 14:42:40 standalone.localdomain systemd[1]: Started Session 260 of User root.
Oct 13 14:42:40 standalone.localdomain sshd[305756]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2044: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:40 standalone.localdomain sudo[305760]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource disable haproxy-bundle
Oct 13 14:42:40 standalone.localdomain sudo[305760]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:40 standalone.localdomain ceph-mon[29756]: pgmap v2044: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:40 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 66 aborted by haproxy-bundle-meta_attributes-target-role doing create target-role=Stopped: Configuration change
Oct 13 14:42:41 standalone.localdomain sudo[305760]: pam_unix(sudo:session): session closed for user root
Oct 13 14:42:41 standalone.localdomain sshd[305759]: Received disconnect from 192.168.122.11 port 47876:11: disconnected by user
Oct 13 14:42:41 standalone.localdomain sshd[305759]: Disconnected from user root 192.168.122.11 port 47876
Oct 13 14:42:41 standalone.localdomain sshd[305756]: pam_unix(sshd:session): session closed for user root
Oct 13 14:42:41 standalone.localdomain systemd[1]: session-260.scope: Deactivated successfully.
Oct 13 14:42:41 standalone.localdomain systemd-logind[45629]: Session 260 logged out. Waiting for processes to exit.
Oct 13 14:42:41 standalone.localdomain systemd-logind[45629]: Removed session 260.
Oct 13 14:42:41 standalone.localdomain sshd[305796]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:42:41 standalone.localdomain sshd[305796]: Accepted publickey for root from 192.168.122.11 port 47880 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:42:41 standalone.localdomain systemd-logind[45629]: New session 261 of user root.
Oct 13 14:42:41 standalone.localdomain systemd[1]: Started Session 261 of User root.
Oct 13 14:42:41 standalone.localdomain sshd[305796]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:41 standalone.localdomain sudo[305800]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource config rabbitmq-bundle
Oct 13 14:42:41 standalone.localdomain sudo[305800]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:41 standalone.localdomain sudo[305800]: pam_unix(sudo:session): session closed for user root
Oct 13 14:42:41 standalone.localdomain sshd[305799]: Received disconnect from 192.168.122.11 port 47880:11: disconnected by user
Oct 13 14:42:41 standalone.localdomain sshd[305799]: Disconnected from user root 192.168.122.11 port 47880
Oct 13 14:42:41 standalone.localdomain sshd[305796]: pam_unix(sshd:session): session closed for user root
Oct 13 14:42:41 standalone.localdomain systemd[1]: session-261.scope: Deactivated successfully.
Oct 13 14:42:41 standalone.localdomain systemd-logind[45629]: Session 261 logged out. Waiting for processes to exit.
Oct 13 14:42:41 standalone.localdomain systemd-logind[45629]: Removed session 261.
Oct 13 14:42:41 standalone.localdomain sshd[305826]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:42:41 standalone.localdomain sshd[305826]: Accepted publickey for root from 192.168.122.11 port 47886 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:42:41 standalone.localdomain systemd-logind[45629]: New session 262 of user root.
Oct 13 14:42:41 standalone.localdomain systemd[1]: Started Session 262 of User root.
Oct 13 14:42:41 standalone.localdomain sshd[305826]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:42 standalone.localdomain sudo[305830]:     root : PWD=/root ; USER=root ; COMMAND=/sbin/pcs resource disable rabbitmq-bundle
Oct 13 14:42:42 standalone.localdomain sudo[305830]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2045: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:42 standalone.localdomain ceph-mon[29756]: pgmap v2045: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:42 standalone.localdomain runuser[74850]: pam_unix(runuser-l:session): session closed for user mysql
Oct 13 14:42:42 standalone.localdomain galera(galera)[305852]: INFO: MySQL stopped
Oct 13 14:42:42 standalone.localdomain haproxy[70940]: 172.17.0.100:55747 [13/Oct/2025:14:16:10.094] mysql mysql/standalone.internalapi.localdomain 1/0/1592503 156716 D- 3/3/2/2/0 0/0
Oct 13 14:42:42 standalone.localdomain haproxy[70940]: Backup Server mysql/standalone.internalapi.localdomain is DOWN, reason: Layer7 wrong status, code: 503, info: "Service Unavailable", check duration: 13ms. 0 active and 0 backup servers left. 3 sessions active, 0 requeued, 0 remaining in queue.
Oct 13 14:42:42 standalone.localdomain haproxy[70940]: proxy mysql has no server available!
Oct 13 14:42:42 standalone.localdomain haproxy[70940]: 172.17.0.100:38639 [13/Oct/2025:14:05:17.746] mysql mysql/standalone.internalapi.localdomain 1/0/2244855 38062 D- 2/2/1/1/0 0/0
Oct 13 14:42:42 standalone.localdomain haproxy[70940]: 172.17.0.100:54303 [13/Oct/2025:14:05:17.685] mysql mysql/standalone.internalapi.localdomain 1/0/2244916 8011 D- 2/2/0/0/0 0/0
Oct 13 14:42:42 standalone.localdomain sudo[305830]: pam_unix(sudo:session): session closed for user root
Oct 13 14:42:42 standalone.localdomain sshd[305829]: Received disconnect from 192.168.122.11 port 47886:11: disconnected by user
Oct 13 14:42:42 standalone.localdomain sshd[305829]: Disconnected from user root 192.168.122.11 port 47886
Oct 13 14:42:42 standalone.localdomain sshd[305826]: pam_unix(sshd:session): session closed for user root
Oct 13 14:42:42 standalone.localdomain systemd[1]: session-262.scope: Deactivated successfully.
Oct 13 14:42:42 standalone.localdomain systemd-logind[45629]: Session 262 logged out. Waiting for processes to exit.
Oct 13 14:42:42 standalone.localdomain systemd-logind[45629]: Removed session 262.
Oct 13 14:42:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:42:43 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting master-galera[standalone]: 100 -> (unset)
Oct 13 14:42:43 standalone.localdomain galera(galera)[305887]: INFO: attempting to read safe_to_bootstrap flag from /var/lib/mysql/grastate.dat
Oct 13 14:42:43 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting galera-safe-to-bootstrap[standalone]: (unset) -> 1
Oct 13 14:42:44 standalone.localdomain galera(galera)[305897]: INFO: attempting to detect last commit version by reading /var/lib/mysql/grastate.dat
Oct 13 14:42:44 standalone.localdomain galera(galera)[305903]: INFO: Last commit version found:  5707
Oct 13 14:42:44 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting galera-last-committed[standalone]: (unset) -> 5707
Oct 13 14:42:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2046: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:44 standalone.localdomain pacemaker-controld[57911]:  notice: Result of demote operation for galera on galera-bundle-0: ok
Oct 13 14:42:44 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 66 (Complete=7, Pending=0, Fired=0, Skipped=1, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-input-66.bz2): Stopped
Oct 13 14:42:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       ip-192.168.122.99                    (                             standalone )  due to node availability
Oct 13 14:42:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       ip-172.21.0.2                        (                             standalone )  due to node availability
Oct 13 14:42:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       ip-172.17.0.2                        (                             standalone )  due to node availability
Oct 13 14:42:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       ip-172.18.0.2                        (                             standalone )  due to node availability
Oct 13 14:42:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       ip-172.20.0.2                        (                             standalone )  due to node availability
Oct 13 14:42:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       haproxy-bundle-podman-0              (                             standalone )  due to node availability
Oct 13 14:42:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       galera-bundle-podman-0               (                             standalone )  due to node availability
Oct 13 14:42:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       galera-bundle-0                      (                             standalone )  due to node availability
Oct 13 14:42:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       galera:0                             (             Unpromoted galera-bundle-0 )  due to node availability
Oct 13 14:42:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       rabbitmq-bundle-podman-0             (                             standalone )  due to node availability
Oct 13 14:42:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       rabbitmq-bundle-0                    (                             standalone )  due to node availability
Oct 13 14:42:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       rabbitmq:0                           (                      rabbitmq-bundle-0 )  due to node availability
Oct 13 14:42:44 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 67, saving inputs in /var/lib/pacemaker/pengine/pe-input-67.bz2
Oct 13 14:42:44 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation haproxy-bundle-podman-0_stop_0 locally on standalone
Oct 13 14:42:44 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for haproxy-bundle-podman-0 on standalone
Oct 13 14:42:44 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating notify operation rabbitmq_pre_notify_stop_0 locally on rabbitmq-bundle-0
Oct 13 14:42:44 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of notify operation for rabbitmq on rabbitmq-bundle-0
Oct 13 14:42:44 standalone.localdomain ceph-mon[29756]: pgmap v2046: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:44 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation galera_stop_0 locally on galera-bundle-0
Oct 13 14:42:44 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for galera on galera-bundle-0
Oct 13 14:42:44 standalone.localdomain pacemaker-controld[57911]:  notice: Result of notify operation for rabbitmq on rabbitmq-bundle-0: ok
Oct 13 14:42:44 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation rabbitmq_stop_0 locally on rabbitmq-bundle-0
Oct 13 14:42:44 standalone.localdomain systemd[1]: tmp-crun.558jca.mount: Deactivated successfully.
Oct 13 14:42:44 standalone.localdomain podman[305909]: 2025-10-13 14:42:44.505027615 +0000 UTC m=+0.076677421 container exec 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 haproxy, maintainer=OpenStack TripleO Team, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 haproxy, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-haproxy-container, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, build-date=2025-07-21T13:08:11, io.openshift.expose-services=, name=rhosp17/openstack-haproxy)
Oct 13 14:42:44 standalone.localdomain podman[305909]: 2025-10-13 14:42:44.517753668 +0000 UTC m=+0.089403444 container exec_died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-haproxy-container, vendor=Red Hat, Inc., release=1, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, build-date=2025-07-21T13:08:11, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:42:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:42:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:42:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:42:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:42:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:42:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:42:44 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for rabbitmq on rabbitmq-bundle-0
Oct 13 14:42:44 standalone.localdomain galera(galera)[306048]: INFO: MySQL is not running
Oct 13 14:42:44 standalone.localdomain podman[305962]: 2025-10-13 14:42:44.609514451 +0000 UTC m=+0.077368302 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, version=17.1.9, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, build-date=2025-07-21T12:58:43, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, com.redhat.component=openstack-memcached-container, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vcs-type=git, config_id=tripleo_step1, container_name=memcached, managed_by=tripleo_ansible, architecture=x86_64)
Oct 13 14:42:44 standalone.localdomain podman[305951]: 2025-10-13 14:42:44.626929448 +0000 UTC m=+0.085859144 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-api-cfn, vendor=Red Hat, Inc., container_name=heat_api_cfn, config_id=tripleo_step4, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-heat-api-cfn-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, 
build-date=2025-07-21T14:49:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:42:44 standalone.localdomain systemd[1]: libpod-60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb.scope: Deactivated successfully.
Oct 13 14:42:44 standalone.localdomain systemd[1]: libpod-60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb.scope: Consumed 26.629s CPU time.
Oct 13 14:42:44 standalone.localdomain podman[305950]: 2025-10-13 14:42:44.642227069 +0000 UTC m=+0.118178959 container died 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T13:08:11, com.redhat.component=openstack-haproxy-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, io.openshift.expose-services=, name=rhosp17/openstack-haproxy, release=1, summary=Red Hat OpenStack Platform 17.1 haproxy, description=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 14:42:44 standalone.localdomain runuser[306091]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:42:44 standalone.localdomain podman[306017]: 2025-10-13 14:42:44.671143709 +0000 UTC m=+0.100681400 container cleanup 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 haproxy, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, com.redhat.component=openstack-haproxy-container, architecture=x86_64, name=rhosp17/openstack-haproxy, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, io.buildah.version=1.33.12, build-date=2025-07-21T13:08:11, distribution-scope=public, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, vendor=Red Hat, Inc.)
Oct 13 14:42:44 standalone.localdomain podman(haproxy-bundle-podman-0)[306138]: INFO: haproxy-bundle-podman-0
Oct 13 14:42:44 standalone.localdomain podman[305953]: 2025-10-13 14:42:44.722780239 +0000 UTC m=+0.192415095 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, release=1, container_name=heat_engine, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-heat-engine-container, batch=17.1_20250721.1, build-date=2025-07-21T15:44:11, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, description=Red 
Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3)
Oct 13 14:42:44 standalone.localdomain podman[306112]: 2025-10-13 14:42:44.73094033 +0000 UTC m=+0.084784281 container cleanup 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-haproxy-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, architecture=x86_64, name=rhosp17/openstack-haproxy, release=1, summary=Red Hat OpenStack Platform 17.1 haproxy, version=17.1.9, build-date=2025-07-21T13:08:11, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 haproxy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:42:44 standalone.localdomain podman[305953]: 2025-10-13 14:42:44.741470145 +0000 UTC m=+0.211105001 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-engine, description=Red Hat OpenStack Platform 17.1 heat-engine, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-heat-engine-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-engine, 
io.buildah.version=1.33.12, vcs-type=git, container_name=heat_engine, release=1, batch=17.1_20250721.1, build-date=2025-07-21T15:44:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3)
Oct 13 14:42:44 standalone.localdomain podman[305962]: 2025-10-13 14:42:44.751810492 +0000 UTC m=+0.219664313 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, name=rhosp17/openstack-memcached, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, version=17.1.9, tcib_managed=true, build-date=2025-07-21T12:58:43, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-memcached-container, description=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.buildah.version=1.33.12)
Oct 13 14:42:44 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Deactivated successfully.
Oct 13 14:42:44 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:42:44 standalone.localdomain podman[305963]: 2025-10-13 14:42:44.819495716 +0000 UTC m=+0.283964373 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, build-date=2025-07-21T15:56:26, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api, container_name=heat_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, release=1, name=rhosp17/openstack-heat-api, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container)
Oct 13 14:42:44 standalone.localdomain podman[305969]: 2025-10-13 14:42:44.824461739 +0000 UTC m=+0.284368355 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, tcib_managed=true, batch=17.1_20250721.1, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, 
vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-container, maintainer=OpenStack TripleO Team, container_name=heat_api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git)
Oct 13 14:42:44 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting galera-safe-to-bootstrap[standalone]: 1 -> (unset)
Oct 13 14:42:44 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 67 aborted by deletion of nvpair[@id='status-1-galera-safe-to-bootstrap']: Transient attribute change
Oct 13 14:42:44 standalone.localdomain podman(haproxy-bundle-podman-0)[306227]: NOTICE: Cleaning up inactive container, haproxy-bundle-podman-0.
Oct 13 14:42:44 standalone.localdomain podman[305951]: 2025-10-13 14:42:44.856004249 +0000 UTC m=+0.314933925 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=heat_api_cfn, vcs-type=git, com.redhat.component=openstack-heat-api-cfn-container, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, build-date=2025-07-21T14:49:55, release=1, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:42:44 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:42:44 standalone.localdomain podman[305963]: 2025-10-13 14:42:44.87711903 +0000 UTC m=+0.341587707 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., 
com.redhat.component=openstack-heat-api-container, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, version=17.1.9, container_name=heat_api_cron, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:42:44 standalone.localdomain podman[305980]: 2025-10-13 14:42:44.783607242 +0000 UTC m=+0.234006885 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git)
Oct 13 14:42:44 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:42:44 standalone.localdomain podman[305980]: 2025-10-13 14:42:44.912124867 +0000 UTC m=+0.362524500 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, vcs-type=git, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=logrotate_crond, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4)
Oct 13 14:42:44 standalone.localdomain podman[306231]: 2025-10-13 14:42:44.92713992 +0000 UTC m=+0.064188057 container remove 60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb (image=cluster.common.tag/haproxy:pcmklatest, name=haproxy-bundle-podman-0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 haproxy, vcs-ref=c3dd74fd14112210ad2ff230577dea7bb5deaa50, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-haproxy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 haproxy, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 haproxy, com.redhat.component=openstack-haproxy-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 haproxy, vcs-type=git, build-date=2025-07-21T13:08:11, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-haproxy/images/17.1.9-1)
Oct 13 14:42:44 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:42:44 standalone.localdomain podman(haproxy-bundle-podman-0)[306250]: INFO: haproxy-bundle-podman-0
Oct 13 14:42:44 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for haproxy-bundle-podman-0 on standalone: ok
Oct 13 14:42:44 standalone.localdomain podman[305969]: 2025-10-13 14:42:44.958556877 +0000 UTC m=+0.418463493 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4)
Oct 13 14:42:44 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:42:45 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting galera-last-committed[standalone]: 5707 -> (unset)
Oct 13 14:42:45 standalone.localdomain runuser[306091]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:42:45 standalone.localdomain runuser[306267]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:42:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-148ce7eb9cddf6791c4b1db7affe1d541b049874ebcbdb399317e20ea109207b-merged.mount: Deactivated successfully.
Oct 13 14:42:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-60d8dfd99fc89af807f21e7c942480683af07ec1bb726797c1f9e4390b9765fb-userdata-shm.mount: Deactivated successfully.
Oct 13 14:42:45 standalone.localdomain runuser[306267]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:42:45 standalone.localdomain runuser[306321]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:42:45 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for galera on galera-bundle-0: ok
Oct 13 14:42:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2047: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:46 standalone.localdomain ceph-mon[29756]: pgmap v2047: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:46 standalone.localdomain runuser[306321]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:42:46 standalone.localdomain runuser[306373]: pam_unix(runuser:session): session opened for user rabbitmq(uid=42439) by (uid=0)
Oct 13 14:42:47 standalone.localdomain runuser[306373]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:42:47 standalone.localdomain pacemaker-attrd[57909]:  notice: Setting rmq-node-attr-rabbitmq[standalone]: rabbit@standalone.internalapi.localdomain -> (unset)
Oct 13 14:42:47 standalone.localdomain runuser[76964]: pam_unix(runuser:session): session closed for user rabbitmq
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for rabbitmq on rabbitmq-bundle-0: ok
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 67 (Complete=16, Pending=0, Fired=0, Skipped=7, Incomplete=11, Source=/var/lib/pacemaker/pengine/pe-input-67.bz2): Stopped
Oct 13 14:42:47 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       ip-192.168.122.99                    (                             standalone )  due to node availability
Oct 13 14:42:47 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       ip-172.21.0.2                        (                             standalone )  due to node availability
Oct 13 14:42:47 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       ip-172.17.0.2                        (                             standalone )  due to node availability
Oct 13 14:42:47 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       ip-172.18.0.2                        (                             standalone )  due to node availability
Oct 13 14:42:47 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       ip-172.20.0.2                        (                             standalone )  due to node availability
Oct 13 14:42:47 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       galera-bundle-podman-0               (                             standalone )  due to node availability
Oct 13 14:42:47 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       galera-bundle-0                      (                             standalone )  due to node availability
Oct 13 14:42:47 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       rabbitmq-bundle-podman-0             (                             standalone )  due to node availability
Oct 13 14:42:47 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Actions: Stop       rabbitmq-bundle-0                    (                             standalone )  due to node availability
Oct 13 14:42:47 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 68, saving inputs in /var/lib/pacemaker/pengine/pe-input-68.bz2
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation ip-192.168.122.99_stop_0 locally on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for ip-192.168.122.99 on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation ip-172.21.0.2_stop_0 locally on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for ip-172.21.0.2 on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation ip-172.17.0.2_stop_0 locally on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for ip-172.17.0.2 on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation ip-172.18.0.2_stop_0 locally on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for ip-172.18.0.2 on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation ip-172.20.0.2_stop_0 locally on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for ip-172.20.0.2 on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation galera-bundle-0_stop_0 locally on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for galera-bundle-0 on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation rabbitmq-bundle-0_stop_0 locally on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for rabbitmq-bundle-0 on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Node galera-bundle-0 state is now lost
Oct 13 14:42:47 standalone.localdomain pacemaker-remoted[73556]:  notice: Cleaning up after remote client pacemaker-remote-standalone:3123 disconnected
Oct 13 14:42:47 standalone.localdomain pacemaker-attrd[57909]:  notice: Removing all galera-bundle-0 attributes for peer standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for galera-bundle-0 on standalone: ok
Oct 13 14:42:47 standalone.localdomain pacemaker-remoted[76362]:  notice: Cleaning up after remote client pacemaker-remote-standalone:3122 disconnected
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Node rabbitmq-bundle-0 state is now lost
Oct 13 14:42:47 standalone.localdomain pacemaker-attrd[57909]:  notice: Removing all rabbitmq-bundle-0 attributes for peer standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for rabbitmq-bundle-0 on standalone: ok
Oct 13 14:42:47 standalone.localdomain pacemaker-fenced[57907]:  notice: Node galera-bundle-0 state is now lost
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation galera-bundle-podman-0_stop_0 locally on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for galera-bundle-podman-0 on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-fenced[57907]:  notice: Node rabbitmq-bundle-0 state is now lost
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Initiating stop operation rabbitmq-bundle-podman-0_stop_0 locally on standalone
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Requesting local execution of stop operation for rabbitmq-bundle-podman-0 on standalone
Oct 13 14:42:47 standalone.localdomain IPaddr2(ip-172.21.0.2)[306660]: INFO: IP status = ok, IP_CIP=
Oct 13 14:42:47 standalone.localdomain IPaddr2(ip-172.20.0.2)[306667]: INFO: IP status = ok, IP_CIP=
Oct 13 14:42:47 standalone.localdomain IPaddr2(ip-172.17.0.2)[306673]: INFO: IP status = ok, IP_CIP=
Oct 13 14:42:47 standalone.localdomain IPaddr2(ip-192.168.122.99)[306674]: INFO: IP status = ok, IP_CIP=
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for ip-172.20.0.2 on standalone: ok
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for ip-172.21.0.2 on standalone: ok
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for ip-172.17.0.2 on standalone: ok
Oct 13 14:42:47 standalone.localdomain IPaddr2(ip-172.18.0.2)[306689]: INFO: IP status = ok, IP_CIP=
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for ip-192.168.122.99 on standalone: ok
Oct 13 14:42:47 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for ip-172.18.0.2 on standalone: ok
Oct 13 14:42:47 standalone.localdomain systemd[1]: tmp-crun.6OXJxI.mount: Deactivated successfully.
Oct 13 14:42:47 standalone.localdomain podman[306532]: 2025-10-13 14:42:47.777656679 +0000 UTC m=+0.069917223 container exec f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, version=17.1.9, release=1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T13:08:05, name=rhosp17/openstack-rabbitmq, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rabbitmq, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-rabbitmq-container)
Oct 13 14:42:47 standalone.localdomain podman[306532]: 2025-10-13 14:42:47.806256371 +0000 UTC m=+0.098516935 container exec_died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-rabbitmq-container, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:08:05, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1)
Oct 13 14:42:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:42:47 standalone.localdomain podman[306488]: 2025-10-13 14:42:47.846359145 +0000 UTC m=+0.146714767 container exec cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, vcs-type=git, release=1, architecture=x86_64)
Oct 13 14:42:47 standalone.localdomain podman[306488]: 2025-10-13 14:42:47.926857603 +0000 UTC m=+0.227213235 container exec_died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, release=1, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-mariadb-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:45, name=rhosp17/openstack-mariadb, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true)
Oct 13 14:42:47 standalone.localdomain pacemaker-remoted[76362]:  notice: Caught 'Terminated' signal
Oct 13 14:42:47 standalone.localdomain systemd[1]: libpod-f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd.scope: Deactivated successfully.
Oct 13 14:42:47 standalone.localdomain systemd[1]: libpod-f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd.scope: Consumed 8min 38.234s CPU time.
Oct 13 14:42:47 standalone.localdomain podman[306730]: 2025-10-13 14:42:47.94594387 +0000 UTC m=+0.067038514 container died f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rabbitmq-container, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, build-date=2025-07-21T13:08:05, description=Red Hat OpenStack Platform 17.1 rabbitmq, vcs-type=git, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq)
Oct 13 14:42:48 standalone.localdomain podman[306730]: 2025-10-13 14:42:48.018114812 +0000 UTC m=+0.139209396 container cleanup f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, build-date=2025-07-21T13:08:05, release=1, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-rabbitmq-container, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-rabbitmq, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1)
Oct 13 14:42:48 standalone.localdomain podman[306750]: 2025-10-13 14:42:48.020121924 +0000 UTC m=+0.069581333 container cleanup f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, name=rhosp17/openstack-rabbitmq, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, com.redhat.component=openstack-rabbitmq-container, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T13:08:05, release=1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, summary=Red Hat OpenStack Platform 17.1 rabbitmq, tcib_managed=true, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, batch=17.1_20250721.1, distribution-scope=public)
Oct 13 14:42:48 standalone.localdomain podman(rabbitmq-bundle-podman-0)[306778]: INFO: rabbitmq-bundle-podman-0
Oct 13 14:42:48 standalone.localdomain pacemaker-remoted[73556]:  notice: Caught 'Terminated' signal
Oct 13 14:42:48 standalone.localdomain systemd[1]: libpod-cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c.scope: Deactivated successfully.
Oct 13 14:42:48 standalone.localdomain systemd[1]: libpod-cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c.scope: Consumed 1min 16.399s CPU time.
Oct 13 14:42:48 standalone.localdomain podman[306753]: 2025-10-13 14:42:48.057822944 +0000 UTC m=+0.103111964 container died cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, io.buildah.version=1.33.12, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, com.redhat.component=openstack-mariadb-container, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-mariadb, batch=17.1_20250721.1, build-date=2025-07-21T12:58:45, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public)
Oct 13 14:42:48 standalone.localdomain podman(rabbitmq-bundle-podman-0)[306816]: NOTICE: Cleaning up inactive container, rabbitmq-bundle-podman-0.
Oct 13 14:42:48 standalone.localdomain podman[306753]: 2025-10-13 14:42:48.137739385 +0000 UTC m=+0.183028385 container cleanup cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, build-date=2025-07-21T12:58:45, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, com.redhat.component=openstack-mariadb-container, architecture=x86_64, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1)
Oct 13 14:42:48 standalone.localdomain podman[306792]: 2025-10-13 14:42:48.14180933 +0000 UTC m=+0.076089904 container cleanup cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, vendor=Red Hat, Inc., com.redhat.component=openstack-mariadb-container, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, vcs-type=git, build-date=2025-07-21T12:58:45, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, release=1)
Oct 13 14:42:48 standalone.localdomain podman(galera-bundle-podman-0)[306837]: INFO: galera-bundle-podman-0
Oct 13 14:42:48 standalone.localdomain podman[306821]: 2025-10-13 14:42:48.172853855 +0000 UTC m=+0.060684438 container remove f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd (image=cluster.common.tag/rabbitmq:pcmklatest, name=rabbitmq-bundle-podman-0, build-date=2025-07-21T13:08:05, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-rabbitmq/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-rabbitmq-container, io.buildah.version=1.33.12, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rabbitmq, io.k8s.description=Red Hat OpenStack Platform 17.1 rabbitmq, description=Red Hat OpenStack Platform 17.1 rabbitmq, name=rhosp17/openstack-rabbitmq, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 rabbitmq, batch=17.1_20250721.1, vendor=Red Hat, Inc., release=1, tcib_managed=true, vcs-ref=b30dae4cd2f9a99467af45e3d985bc71318ed98b, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:42:48 standalone.localdomain podman(rabbitmq-bundle-podman-0)[306852]: INFO: rabbitmq-bundle-podman-0
Oct 13 14:42:48 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for rabbitmq-bundle-podman-0 on standalone: ok
Oct 13 14:42:48 standalone.localdomain podman(galera-bundle-podman-0)[306865]: NOTICE: Cleaning up inactive container, galera-bundle-podman-0.
Oct 13 14:42:48 standalone.localdomain podman[306869]: 2025-10-13 14:42:48.294016066 +0000 UTC m=+0.056669247 container remove cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c (image=cluster.common.tag/mariadb:pcmklatest, name=galera-bundle-podman-0, description=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45, tcib_managed=true, release=1, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.component=openstack-mariadb-container, summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc.)
Oct 13 14:42:48 standalone.localdomain podman(galera-bundle-podman-0)[306886]: INFO: galera-bundle-podman-0
Oct 13 14:42:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2048: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:48 standalone.localdomain pacemaker-controld[57911]:  notice: Result of stop operation for galera-bundle-podman-0 on standalone: ok
Oct 13 14:42:48 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 68 (Complete=13, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-68.bz2): Complete
Oct 13 14:42:48 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:42:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-308f052ed4aa72a8b162d1a4c4bd3dde09a0351b4551b7d966ee67d9a4fe7a7b-merged.mount: Deactivated successfully.
Oct 13 14:42:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f2531572ca73b82b118823abb93d38532c4daaab742ca4d5a8c7d5442a1d55bd-userdata-shm.mount: Deactivated successfully.
Oct 13 14:42:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d576005e5a03efdde860f6e828ca520a2e06f29826e732542b52aab0e5905866-merged.mount: Deactivated successfully.
Oct 13 14:42:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cdfc066fd56329cf21ad91b756ff604a0a362c3d0d99b5fb2d3ee0010d01b52c-userdata-shm.mount: Deactivated successfully.
Oct 13 14:42:48 standalone.localdomain ceph-mon[29756]: pgmap v2048: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2049: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:50 standalone.localdomain ceph-mon[29756]: pgmap v2049: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:42:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:42:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:42:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:42:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:42:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:42:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:42:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:42:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:42:50 standalone.localdomain systemd[1]: tmp-crun.I3p0XV.mount: Deactivated successfully.
Oct 13 14:42:50 standalone.localdomain podman[306899]: 2025-10-13 14:42:50.835299206 +0000 UTC m=+0.083138611 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, version=17.1.9, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, distribution-scope=public, name=rhosp17/openstack-swift-container, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, release=1, io.buildah.version=1.33.12)
Oct 13 14:42:50 standalone.localdomain podman[306892]: 2025-10-13 14:42:50.896345816 +0000 UTC m=+0.145421158 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, build-date=2025-07-21T14:48:37, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, container_name=swift_proxy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:42:50 standalone.localdomain podman[306900]: 2025-10-13 14:42:50.852147995 +0000 UTC m=+0.098042209 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team)
Oct 13 14:42:50 standalone.localdomain podman[306913]: 2025-10-13 14:42:50.872754349 +0000 UTC m=+0.116345362 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, tcib_managed=true, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1)
Oct 13 14:42:50 standalone.localdomain podman[306900]: 2025-10-13 14:42:50.931735934 +0000 UTC m=+0.177630178 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible)
Oct 13 14:42:50 standalone.localdomain podman[306900]: unhealthy
Oct 13 14:42:50 standalone.localdomain podman[306915]: 2025-10-13 14:42:50.939852874 +0000 UTC m=+0.181841009 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=swift_account_server, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, build-date=2025-07-21T16:11:22, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, maintainer=OpenStack TripleO Team)
Oct 13 14:42:50 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:42:50 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:42:50 standalone.localdomain podman[306913]: 2025-10-13 14:42:50.951867454 +0000 UTC m=+0.195458467 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, release=1, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container)
Oct 13 14:42:50 standalone.localdomain podman[306913]: unhealthy
Oct 13 14:42:50 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:42:50 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:42:50 standalone.localdomain podman[306891]: 2025-10-13 14:42:50.995876759 +0000 UTC m=+0.249241424 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, container_name=nova_migration_target, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Oct 13 14:42:51 standalone.localdomain podman[306890]: 2025-10-13 14:42:51.017582008 +0000 UTC m=+0.270444197 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vcs-type=git)
Oct 13 14:42:51 standalone.localdomain podman[306889]: 2025-10-13 14:42:51.061774408 +0000 UTC m=+0.317207147 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, version=17.1.9)
Oct 13 14:42:51 standalone.localdomain podman[306899]: 2025-10-13 14:42:51.078656068 +0000 UTC m=+0.326495473 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 
swift-container, build-date=2025-07-21T15:54:32, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, release=1)
Oct 13 14:42:51 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:42:51 standalone.localdomain podman[306889]: 2025-10-13 14:42:51.091358028 +0000 UTC m=+0.346790767 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, version=17.1.9, release=1, description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, container_name=glance_api_cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 14:42:51 standalone.localdomain podman[306893]: 2025-10-13 14:42:51.108034572 +0000 UTC m=+0.355194726 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., release=1, io.openshift.expose-services=, 
io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, tcib_managed=true, name=rhosp17/openstack-glance-api, container_name=glance_api_internal, description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-glance-api-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:42:51 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:42:51 standalone.localdomain podman[306892]: 2025-10-13 14:42:51.11383351 +0000 UTC m=+0.362908872 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, container_name=swift_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_id=tripleo_step4, io.buildah.version=1.33.12, summary=Red Hat OpenStack 
Platform 17.1 swift-proxy-server, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.expose-services=, name=rhosp17/openstack-swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, architecture=x86_64, build-date=2025-07-21T14:48:37)
Oct 13 14:42:51 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:42:51 standalone.localdomain podman[306915]: 2025-10-13 14:42:51.137065616 +0000 UTC m=+0.379053761 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, build-date=2025-07-21T16:11:22, version=17.1.9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, architecture=x86_64, io.k8s.description=Red 
Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:42:51 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:42:51 standalone.localdomain podman[306890]: 2025-10-13 14:42:51.193897245 +0000 UTC m=+0.446759474 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, config_id=tripleo_step4, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team)
Oct 13 14:42:51 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:42:51 standalone.localdomain podman[306893]: 2025-10-13 14:42:51.287516437 +0000 UTC m=+0.534676591 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, container_name=glance_api_internal, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, com.redhat.component=openstack-glance-api-container, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, batch=17.1_20250721.1, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1)
Oct 13 14:42:51 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:42:51 standalone.localdomain podman[306891]: 2025-10-13 14:42:51.348405192 +0000 UTC m=+0.601769877 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, tcib_managed=true, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc.)
Oct 13 14:42:51 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:42:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2050: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:52 standalone.localdomain ceph-mon[29756]: pgmap v2050: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:42:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2051: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:54 standalone.localdomain ceph-mon[29756]: pgmap v2051: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:42:55 standalone.localdomain systemd[1]: tmp-crun.X6z2Ic.mount: Deactivated successfully.
Oct 13 14:42:55 standalone.localdomain podman[307101]: 2025-10-13 14:42:55.817782337 +0000 UTC m=+0.083515391 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, version=17.1.9, description=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, 
build-date=2025-07-21T13:27:18, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, name=rhosp17/openstack-keystone, container_name=keystone_cron, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-keystone-container, vcs-type=git)
Oct 13 14:42:55 standalone.localdomain podman[307101]: 2025-10-13 14:42:55.826031191 +0000 UTC m=+0.091764225 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-keystone-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, build-date=2025-07-21T13:27:18, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-keystone, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, io.openshift.expose-services=)
Oct 13 14:42:55 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:42:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2052: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:56 standalone.localdomain ceph-mon[29756]: pgmap v2052: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:42:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:42:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:42:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:42:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:42:57 standalone.localdomain systemd[1]: tmp-crun.yegyry.mount: Deactivated successfully.
Oct 13 14:42:57 standalone.localdomain podman[307121]: 2025-10-13 14:42:57.807065197 +0000 UTC m=+0.074245637 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, version=17.1.9, build-date=2025-07-21T13:27:15, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1, container_name=iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2)
Oct 13 14:42:57 standalone.localdomain podman[307121]: 2025-10-13 14:42:57.814809955 +0000 UTC m=+0.081990395 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, config_id=tripleo_step3, vcs-type=git, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2)
Oct 13 14:42:57 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:42:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:42:57 standalone.localdomain podman[307133]: 2025-10-13 14:42:57.855833748 +0000 UTC m=+0.113249477 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, config_id=tripleo_step4, com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, release=1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, build-date=2025-07-21T16:03:34, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-sriov-agent, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:42:57 standalone.localdomain podman[307133]: 2025-10-13 14:42:57.863434272 +0000 UTC m=+0.120849981 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, name=rhosp17/openstack-neutron-sriov-agent, version=17.1.9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, build-date=2025-07-21T16:03:34, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:42:57 standalone.localdomain podman[307133]: unhealthy
Oct 13 14:42:57 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:42:57 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Failed with result 'exit-code'.
Oct 13 14:42:57 standalone.localdomain podman[307127]: 2025-10-13 14:42:57.82405798 +0000 UTC m=+0.082629415 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, config_id=tripleo_step5, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, architecture=x86_64, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, container_name=nova_compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:42:57 standalone.localdomain podman[307127]: 2025-10-13 14:42:57.903404663 +0000 UTC m=+0.161976098 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9)
Oct 13 14:42:57 standalone.localdomain podman[307127]: unhealthy
Oct 13 14:42:57 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:42:57 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed with result 'exit-code'.
Oct 13 14:42:57 standalone.localdomain podman[307134]: 2025-10-13 14:42:57.913476593 +0000 UTC m=+0.169424047 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=clustercheck, summary=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, com.redhat.component=openstack-mariadb-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, version=17.1.9, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 mariadb, build-date=2025-07-21T12:58:45, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-mariadb, batch=17.1_20250721.1, release=1, tcib_managed=true)
Oct 13 14:42:57 standalone.localdomain podman[307134]: 2025-10-13 14:42:57.953259348 +0000 UTC m=+0.209206832 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-mariadb-container, tcib_managed=true, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, release=1, vcs-type=git, build-date=2025-07-21T12:58:45, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, container_name=clustercheck, name=rhosp17/openstack-mariadb, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step2, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']})
Oct 13 14:42:57 standalone.localdomain podman[307134]: unhealthy
Oct 13 14:42:57 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:42:57 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Failed with result 'exit-code'.
Oct 13 14:42:58 standalone.localdomain podman[307120]: 2025-10-13 14:42:57.95920525 +0000 UTC m=+0.230183766 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T16:28:54, com.redhat.component=openstack-neutron-dhcp-agent-container, version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=neutron_dhcp, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, name=rhosp17/openstack-neutron-dhcp-agent, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team)
Oct 13 14:42:58 standalone.localdomain podman[307120]: 2025-10-13 14:42:58.039299696 +0000 UTC m=+0.310278202 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T16:28:54, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-neutron-dhcp-agent, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, tcib_managed=true, release=1)
Oct 13 14:42:58 standalone.localdomain podman[307120]: unhealthy
Oct 13 14:42:58 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:42:58 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Failed with result 'exit-code'.
Oct 13 14:42:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2053: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:58 standalone.localdomain sshd[307241]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:42:58 standalone.localdomain sshd[307241]: Accepted publickey for root from 192.168.122.11 port 57062 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:42:58 standalone.localdomain systemd-logind[45629]: New session 263 of user root.
Oct 13 14:42:58 standalone.localdomain systemd[1]: Started Session 263 of User root.
Oct 13 14:42:58 standalone.localdomain sshd[307241]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:58 standalone.localdomain sshd[307244]: Received disconnect from 192.168.122.11 port 57062:11: disconnected by user
Oct 13 14:42:58 standalone.localdomain sshd[307244]: Disconnected from user root 192.168.122.11 port 57062
Oct 13 14:42:58 standalone.localdomain sshd[307241]: pam_unix(sshd:session): session closed for user root
Oct 13 14:42:58 standalone.localdomain systemd-logind[45629]: Session 263 logged out. Waiting for processes to exit.
Oct 13 14:42:58 standalone.localdomain systemd[1]: session-263.scope: Deactivated successfully.
Oct 13 14:42:58 standalone.localdomain systemd-logind[45629]: Removed session 263.
Oct 13 14:42:58 standalone.localdomain sshd[307258]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:42:58 standalone.localdomain sshd[307258]: Accepted publickey for root from 192.168.122.11 port 57068 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:42:58 standalone.localdomain systemd-logind[45629]: New session 264 of user root.
Oct 13 14:42:58 standalone.localdomain systemd[1]: Started Session 264 of User root.
Oct 13 14:42:58 standalone.localdomain sshd[307258]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:58 standalone.localdomain ceph-mon[29756]: pgmap v2053: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:42:59 standalone.localdomain sshd[307261]: Received disconnect from 192.168.122.11 port 57068:11: disconnected by user
Oct 13 14:42:59 standalone.localdomain sshd[307261]: Disconnected from user root 192.168.122.11 port 57068
Oct 13 14:42:59 standalone.localdomain sshd[307258]: pam_unix(sshd:session): session closed for user root
Oct 13 14:42:59 standalone.localdomain systemd[1]: session-264.scope: Deactivated successfully.
Oct 13 14:42:59 standalone.localdomain systemd-logind[45629]: Session 264 logged out. Waiting for processes to exit.
Oct 13 14:42:59 standalone.localdomain systemd-logind[45629]: Removed session 264.
Oct 13 14:42:59 standalone.localdomain sshd[307287]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:42:59 standalone.localdomain sshd[307287]: Accepted publickey for root from 192.168.122.11 port 57076 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:42:59 standalone.localdomain systemd-logind[45629]: New session 265 of user root.
Oct 13 14:42:59 standalone.localdomain systemd[1]: Started Session 265 of User root.
Oct 13 14:42:59 standalone.localdomain sshd[307287]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:59 standalone.localdomain sshd[307290]: Received disconnect from 192.168.122.11 port 57076:11: disconnected by user
Oct 13 14:42:59 standalone.localdomain sshd[307290]: Disconnected from user root 192.168.122.11 port 57076
Oct 13 14:42:59 standalone.localdomain sshd[307287]: pam_unix(sshd:session): session closed for user root
Oct 13 14:42:59 standalone.localdomain systemd[1]: session-265.scope: Deactivated successfully.
Oct 13 14:42:59 standalone.localdomain systemd-logind[45629]: Session 265 logged out. Waiting for processes to exit.
Oct 13 14:42:59 standalone.localdomain systemd-logind[45629]: Removed session 265.
Oct 13 14:42:59 standalone.localdomain sshd[307304]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:42:59 standalone.localdomain sshd[307304]: Accepted publickey for root from 192.168.122.11 port 57078 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:42:59 standalone.localdomain systemd-logind[45629]: New session 266 of user root.
Oct 13 14:42:59 standalone.localdomain systemd[1]: Started Session 266 of User root.
Oct 13 14:42:59 standalone.localdomain sshd[307304]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:59 standalone.localdomain sshd[307307]: Received disconnect from 192.168.122.11 port 57078:11: disconnected by user
Oct 13 14:42:59 standalone.localdomain sshd[307307]: Disconnected from user root 192.168.122.11 port 57078
Oct 13 14:42:59 standalone.localdomain sshd[307304]: pam_unix(sshd:session): session closed for user root
Oct 13 14:42:59 standalone.localdomain systemd[1]: session-266.scope: Deactivated successfully.
Oct 13 14:42:59 standalone.localdomain systemd-logind[45629]: Session 266 logged out. Waiting for processes to exit.
Oct 13 14:42:59 standalone.localdomain systemd-logind[45629]: Removed session 266.
Oct 13 14:42:59 standalone.localdomain sshd[307321]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:42:59 standalone.localdomain sshd[307321]: Accepted publickey for root from 192.168.122.11 port 57080 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:42:59 standalone.localdomain systemd-logind[45629]: New session 267 of user root.
Oct 13 14:42:59 standalone.localdomain systemd[1]: Started Session 267 of User root.
Oct 13 14:42:59 standalone.localdomain sshd[307321]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:42:59 standalone.localdomain sshd[307324]: Received disconnect from 192.168.122.11 port 57080:11: disconnected by user
Oct 13 14:42:59 standalone.localdomain sshd[307324]: Disconnected from user root 192.168.122.11 port 57080
Oct 13 14:42:59 standalone.localdomain sshd[307321]: pam_unix(sshd:session): session closed for user root
Oct 13 14:42:59 standalone.localdomain systemd[1]: session-267.scope: Deactivated successfully.
Oct 13 14:42:59 standalone.localdomain systemd-logind[45629]: Session 267 logged out. Waiting for processes to exit.
Oct 13 14:42:59 standalone.localdomain systemd-logind[45629]: Removed session 267.
Oct 13 14:42:59 standalone.localdomain sshd[307338]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:42:59 standalone.localdomain sshd[307338]: Accepted publickey for root from 192.168.122.11 port 40958 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:42:59 standalone.localdomain systemd-logind[45629]: New session 268 of user root.
Oct 13 14:42:59 standalone.localdomain systemd[1]: Started Session 268 of User root.
Oct 13 14:42:59 standalone.localdomain sshd[307338]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:43:00 standalone.localdomain sshd[307341]: Received disconnect from 192.168.122.11 port 40958:11: disconnected by user
Oct 13 14:43:00 standalone.localdomain sshd[307341]: Disconnected from user root 192.168.122.11 port 40958
Oct 13 14:43:00 standalone.localdomain sshd[307338]: pam_unix(sshd:session): session closed for user root
Oct 13 14:43:00 standalone.localdomain systemd[1]: session-268.scope: Deactivated successfully.
Oct 13 14:43:00 standalone.localdomain systemd-logind[45629]: Session 268 logged out. Waiting for processes to exit.
Oct 13 14:43:00 standalone.localdomain systemd-logind[45629]: Removed session 268.
Oct 13 14:43:00 standalone.localdomain sshd[307355]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:43:00 standalone.localdomain sshd[307355]: Accepted publickey for root from 192.168.122.11 port 40972 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:43:00 standalone.localdomain systemd-logind[45629]: New session 269 of user root.
Oct 13 14:43:00 standalone.localdomain systemd[1]: Started Session 269 of User root.
Oct 13 14:43:00 standalone.localdomain sshd[307355]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:43:00 standalone.localdomain sshd[307358]: Received disconnect from 192.168.122.11 port 40972:11: disconnected by user
Oct 13 14:43:00 standalone.localdomain sshd[307358]: Disconnected from user root 192.168.122.11 port 40972
Oct 13 14:43:00 standalone.localdomain sshd[307355]: pam_unix(sshd:session): session closed for user root
Oct 13 14:43:00 standalone.localdomain systemd[1]: session-269.scope: Deactivated successfully.
Oct 13 14:43:00 standalone.localdomain systemd-logind[45629]: Session 269 logged out. Waiting for processes to exit.
Oct 13 14:43:00 standalone.localdomain systemd-logind[45629]: Removed session 269.
Oct 13 14:43:00 standalone.localdomain sshd[307372]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:43:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2054: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:00 standalone.localdomain sshd[307372]: Accepted publickey for root from 192.168.122.11 port 40982 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:43:00 standalone.localdomain systemd-logind[45629]: New session 270 of user root.
Oct 13 14:43:00 standalone.localdomain systemd[1]: Started Session 270 of User root.
Oct 13 14:43:00 standalone.localdomain sshd[307372]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:43:00 standalone.localdomain ceph-mon[29756]: pgmap v2054: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:00 standalone.localdomain sshd[307375]: Received disconnect from 192.168.122.11 port 40982:11: disconnected by user
Oct 13 14:43:00 standalone.localdomain sshd[307375]: Disconnected from user root 192.168.122.11 port 40982
Oct 13 14:43:00 standalone.localdomain sshd[307372]: pam_unix(sshd:session): session closed for user root
Oct 13 14:43:00 standalone.localdomain systemd[1]: session-270.scope: Deactivated successfully.
Oct 13 14:43:00 standalone.localdomain systemd-logind[45629]: Session 270 logged out. Waiting for processes to exit.
Oct 13 14:43:00 standalone.localdomain systemd-logind[45629]: Removed session 270.
Oct 13 14:43:00 standalone.localdomain sshd[307389]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:43:00 standalone.localdomain sshd[307389]: Accepted publickey for root from 192.168.122.11 port 40990 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:43:00 standalone.localdomain systemd-logind[45629]: New session 271 of user root.
Oct 13 14:43:00 standalone.localdomain systemd[1]: Started Session 271 of User root.
Oct 13 14:43:00 standalone.localdomain sshd[307389]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:43:00 standalone.localdomain sshd[307392]: Received disconnect from 192.168.122.11 port 40990:11: disconnected by user
Oct 13 14:43:00 standalone.localdomain sshd[307392]: Disconnected from user root 192.168.122.11 port 40990
Oct 13 14:43:00 standalone.localdomain sshd[307389]: pam_unix(sshd:session): session closed for user root
Oct 13 14:43:00 standalone.localdomain systemd-logind[45629]: Session 271 logged out. Waiting for processes to exit.
Oct 13 14:43:00 standalone.localdomain systemd[1]: session-271.scope: Deactivated successfully.
Oct 13 14:43:00 standalone.localdomain systemd-logind[45629]: Removed session 271.
Oct 13 14:43:00 standalone.localdomain sshd[307406]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:43:00 standalone.localdomain sshd[307406]: Accepted publickey for root from 192.168.122.11 port 41006 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:43:00 standalone.localdomain systemd-logind[45629]: New session 272 of user root.
Oct 13 14:43:00 standalone.localdomain systemd[1]: Started Session 272 of User root.
Oct 13 14:43:00 standalone.localdomain sshd[307406]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:43:00 standalone.localdomain sshd[307409]: Received disconnect from 192.168.122.11 port 41006:11: disconnected by user
Oct 13 14:43:00 standalone.localdomain sshd[307409]: Disconnected from user root 192.168.122.11 port 41006
Oct 13 14:43:00 standalone.localdomain sshd[307406]: pam_unix(sshd:session): session closed for user root
Oct 13 14:43:00 standalone.localdomain systemd[1]: session-272.scope: Deactivated successfully.
Oct 13 14:43:00 standalone.localdomain systemd-logind[45629]: Session 272 logged out. Waiting for processes to exit.
Oct 13 14:43:00 standalone.localdomain systemd-logind[45629]: Removed session 272.
Oct 13 14:43:00 standalone.localdomain sshd[307423]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:43:01 standalone.localdomain sshd[307423]: Accepted publickey for root from 192.168.122.11 port 41010 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:43:01 standalone.localdomain systemd-logind[45629]: New session 273 of user root.
Oct 13 14:43:01 standalone.localdomain systemd[1]: Started Session 273 of User root.
Oct 13 14:43:01 standalone.localdomain sshd[307423]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:43:01 standalone.localdomain sshd[307426]: Received disconnect from 192.168.122.11 port 41010:11: disconnected by user
Oct 13 14:43:01 standalone.localdomain sshd[307426]: Disconnected from user root 192.168.122.11 port 41010
Oct 13 14:43:01 standalone.localdomain sshd[307423]: pam_unix(sshd:session): session closed for user root
Oct 13 14:43:01 standalone.localdomain systemd[1]: session-273.scope: Deactivated successfully.
Oct 13 14:43:01 standalone.localdomain systemd-logind[45629]: Session 273 logged out. Waiting for processes to exit.
Oct 13 14:43:01 standalone.localdomain systemd-logind[45629]: Removed session 273.
Oct 13 14:43:01 standalone.localdomain sshd[307440]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:43:01 standalone.localdomain sshd[307440]: Accepted publickey for root from 192.168.122.11 port 41016 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:43:01 standalone.localdomain systemd-logind[45629]: New session 274 of user root.
Oct 13 14:43:01 standalone.localdomain systemd[1]: Started Session 274 of User root.
Oct 13 14:43:01 standalone.localdomain sshd[307440]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:43:01 standalone.localdomain sshd[307443]: Received disconnect from 192.168.122.11 port 41016:11: disconnected by user
Oct 13 14:43:01 standalone.localdomain sshd[307443]: Disconnected from user root 192.168.122.11 port 41016
Oct 13 14:43:01 standalone.localdomain sshd[307440]: pam_unix(sshd:session): session closed for user root
Oct 13 14:43:01 standalone.localdomain systemd[1]: session-274.scope: Deactivated successfully.
Oct 13 14:43:01 standalone.localdomain systemd-logind[45629]: Session 274 logged out. Waiting for processes to exit.
Oct 13 14:43:01 standalone.localdomain systemd-logind[45629]: Removed session 274.
Oct 13 14:43:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2055: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:02 standalone.localdomain ceph-mon[29756]: pgmap v2055: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:43:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2056: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:04 standalone.localdomain ceph-mon[29756]: pgmap v2056: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2057: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:06 standalone.localdomain ceph-mon[29756]: pgmap v2057: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:43:08 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:43:08 standalone.localdomain recover_tripleo_nova_virtqemud[307458]: 93291
Oct 13 14:43:08 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:43:08 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:43:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2058: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:08 standalone.localdomain ceph-mon[29756]: pgmap v2058: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2059: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:10 standalone.localdomain ceph-mon[29756]: pgmap v2059: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2060: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:12 standalone.localdomain ceph-mon[29756]: pgmap v2060: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:43:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2061: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:14 standalone.localdomain ceph-mon[29756]: pgmap v2061: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:43:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:43:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:43:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:43:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:43:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:43:15 standalone.localdomain podman[307462]: 2025-10-13 14:43:15.837293938 +0000 UTC m=+0.088979750 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, container_name=heat_api_cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, name=rhosp17/openstack-heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-heat-api-container, 
architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 14:43:15 standalone.localdomain systemd[1]: tmp-crun.uhl9jn.mount: Deactivated successfully.
Oct 13 14:43:15 standalone.localdomain podman[307461]: 2025-10-13 14:43:15.873821143 +0000 UTC m=+0.131099687 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, description=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step1, container_name=memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, architecture=x86_64, distribution-scope=public, 
com.redhat.component=openstack-memcached-container, name=rhosp17/openstack-memcached, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T12:58:43, vendor=Red Hat, Inc.)
Oct 13 14:43:15 standalone.localdomain podman[307462]: 2025-10-13 14:43:15.876842706 +0000 UTC m=+0.128528498 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, name=rhosp17/openstack-heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:56:26, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-container, managed_by=tripleo_ansible, container_name=heat_api_cron, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:43:15 standalone.localdomain podman[307460]: 2025-10-13 14:43:15.885557084 +0000 UTC m=+0.142679183 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-engine, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, 
vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, container_name=heat_engine, build-date=2025-07-21T15:44:11)
Oct 13 14:43:15 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:43:15 standalone.localdomain podman[307461]: 2025-10-13 14:43:15.89776867 +0000 UTC m=+0.155047224 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=memcached, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git, build-date=2025-07-21T12:58:43, release=1, com.redhat.component=openstack-memcached-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, tcib_managed=true, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached)
Oct 13 14:43:15 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:43:15 standalone.localdomain podman[307477]: 2025-10-13 14:43:15.939857735 +0000 UTC m=+0.184099477 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, container_name=heat_api, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-container, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-heat-api, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:56:26, description=Red Hat OpenStack Platform 17.1 heat-api, 
vcs-type=git, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, distribution-scope=public, architecture=x86_64, release=1, tcib_managed=true)
Oct 13 14:43:15 standalone.localdomain podman[307460]: 2025-10-13 14:43:15.94879916 +0000 UTC m=+0.205921279 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=heat_engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, name=rhosp17/openstack-heat-engine, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, com.redhat.component=openstack-heat-engine-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=)
Oct 13 14:43:15 standalone.localdomain podman[307460]: unhealthy
Oct 13 14:43:15 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:43:15 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Failed with result 'exit-code'.
Oct 13 14:43:15 standalone.localdomain podman[307477]: 2025-10-13 14:43:15.999823462 +0000 UTC m=+0.244065204 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-07-21T15:56:26, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack 
Platform 17.1 heat-api, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, container_name=heat_api)
Oct 13 14:43:16 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:43:16 standalone.localdomain podman[307485]: 2025-10-13 14:43:16.017822155 +0000 UTC m=+0.244743594 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, release=1, version=17.1.9, distribution-scope=public, build-date=2025-07-21T13:07:52, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1, 
batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, com.redhat.component=openstack-cron-container)
Oct 13 14:43:16 standalone.localdomain podman[307485]: 2025-10-13 14:43:16.028920127 +0000 UTC m=+0.255841576 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, name=rhosp17/openstack-cron, release=1, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, 
io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true)
Oct 13 14:43:16 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:43:16 standalone.localdomain podman[307459]: 2025-10-13 14:43:16.076604445 +0000 UTC m=+0.335439327 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T14:49:55, vendor=Red Hat, Inc., vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, 
release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, name=rhosp17/openstack-heat-api-cfn, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, container_name=heat_api_cfn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:43:16 standalone.localdomain podman[307459]: 2025-10-13 14:43:16.10403558 +0000 UTC m=+0.362870542 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T14:49:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, 
distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-heat-api-cfn-container, container_name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, io.openshift.expose-services=, managed_by=tripleo_ansible)
Oct 13 14:43:16 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:43:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2062: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:16 standalone.localdomain ceph-mon[29756]: pgmap v2062: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:43:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2063: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:43:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/33182379' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:43:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:43:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/33182379' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:43:18 standalone.localdomain ceph-mon[29756]: pgmap v2063: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/33182379' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:43:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/33182379' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:43:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2064: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:20 standalone.localdomain ceph-mon[29756]: pgmap v2064: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:43:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:43:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:43:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:43:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:43:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:43:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:43:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:43:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:43:21 standalone.localdomain podman[307596]: 2025-10-13 14:43:21.816555634 +0000 UTC m=+0.075993511 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64)
Oct 13 14:43:21 standalone.localdomain podman[307595]: 2025-10-13 14:43:21.848697224 +0000 UTC m=+0.108938005 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, version=17.1.9, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, release=1, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:43:21 standalone.localdomain podman[307608]: 2025-10-13 14:43:21.915755407 +0000 UTC m=+0.162188293 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1)
Oct 13 14:43:21 standalone.localdomain podman[307631]: 2025-10-13 14:43:21.878106749 +0000 UTC m=+0.117376935 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-type=git, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, container_name=swift_account_server, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T16:11:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container)
Oct 13 14:43:21 standalone.localdomain podman[307594]: 2025-10-13 14:43:21.887696893 +0000 UTC m=+0.151925407 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, container_name=glance_api_cron, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container, distribution-scope=public, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 14:43:21 standalone.localdomain podman[307604]: 2025-10-13 14:43:21.972583157 +0000 UTC m=+0.227513064 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, build-date=2025-07-21T15:54:32, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-container-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:43:21 standalone.localdomain podman[307597]: 2025-10-13 14:43:21.991729407 +0000 UTC m=+0.248962005 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, container_name=swift_proxy, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container, config_id=tripleo_step4, release=1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-swift-proxy-server, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:43:22 standalone.localdomain podman[307630]: 2025-10-13 14:43:22.002777047 +0000 UTC m=+0.244559570 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, version=17.1.9)
Oct 13 14:43:22 standalone.localdomain podman[307608]: 2025-10-13 14:43:22.007051008 +0000 UTC m=+0.253483854 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3)
Oct 13 14:43:22 standalone.localdomain podman[307608]: unhealthy
Oct 13 14:43:22 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:43:22 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:43:22 standalone.localdomain podman[307595]: 2025-10-13 14:43:22.021813162 +0000 UTC m=+0.282053943 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, architecture=x86_64, container_name=swift_object_server, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, name=rhosp17/openstack-swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:43:22 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:43:22 standalone.localdomain podman[307630]: 2025-10-13 14:43:22.040793437 +0000 UTC m=+0.282575970 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, build-date=2025-07-21T13:28:44, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 14:43:22 standalone.localdomain podman[307630]: unhealthy
Oct 13 14:43:22 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:43:22 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:43:22 standalone.localdomain podman[307631]: 2025-10-13 14:43:22.081771048 +0000 UTC m=+0.321041234 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, name=rhosp17/openstack-swift-account, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, release=1, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4)
Oct 13 14:43:22 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:43:22 standalone.localdomain podman[307594]: 2025-10-13 14:43:22.1233937 +0000 UTC m=+0.387622214 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_cron, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, 
config_id=tripleo_step4, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:43:22 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:43:22 standalone.localdomain podman[307598]: 2025-10-13 14:43:22.09222819 +0000 UTC m=+0.347615212 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.component=openstack-glance-api-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, version=17.1.9, container_name=glance_api_internal, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, name=rhosp17/openstack-glance-api, build-date=2025-07-21T13:58:20, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64)
Oct 13 14:43:22 standalone.localdomain podman[307596]: 2025-10-13 14:43:22.150951287 +0000 UTC m=+0.410389154 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, 
io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, distribution-scope=public)
Oct 13 14:43:22 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:43:22 standalone.localdomain podman[307604]: 2025-10-13 14:43:22.17279187 +0000 UTC m=+0.427721807 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 
swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, container_name=swift_container_server, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:43:22 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:43:22 standalone.localdomain podman[307597]: 2025-10-13 14:43:22.20007358 +0000 UTC m=+0.457306268 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., container_name=swift_proxy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 14:43:22 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:43:22 standalone.localdomain podman[307598]: 2025-10-13 14:43:22.278089452 +0000 UTC m=+0.533476484 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, build-date=2025-07-21T13:58:20, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, version=17.1.9, release=1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_internal, name=rhosp17/openstack-glance-api, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container)
Oct 13 14:43:22 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:43:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2065: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:22 standalone.localdomain ceph-mon[29756]: pgmap v2065: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:43:23
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', '.mgr', 'manila_data', 'images', 'vms', 'backups', 'manila_metadata']
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:43:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2066: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:24 standalone.localdomain ceph-mon[29756]: pgmap v2066: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2067: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:26 standalone.localdomain ceph-mon[29756]: pgmap v2067: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:43:26 standalone.localdomain podman[307804]: 2025-10-13 14:43:26.800281722 +0000 UTC m=+0.065526397 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-keystone-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, container_name=keystone_cron, vendor=Red Hat, Inc., config_id=tripleo_step3, build-date=2025-07-21T13:27:18, description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, release=1, vcs-type=git, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone)
Oct 13 14:43:26 standalone.localdomain podman[307804]: 2025-10-13 14:43:26.86776442 +0000 UTC m=+0.133009085 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, vcs-type=git, build-date=2025-07-21T13:27:18, description=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., container_name=keystone_cron, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, version=17.1.9, config_id=tripleo_step3, com.redhat.component=openstack-keystone-container, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, architecture=x86_64)
Oct 13 14:43:26 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:43:27 standalone.localdomain sudo[307823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:43:27 standalone.localdomain sudo[307823]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:43:27 standalone.localdomain sudo[307823]: pam_unix(sudo:session): session closed for user root
Oct 13 14:43:27 standalone.localdomain sudo[307838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:43:27 standalone.localdomain sudo[307838]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:43:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:43:28 standalone.localdomain sudo[307838]: pam_unix(sudo:session): session closed for user root
Oct 13 14:43:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:43:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:43:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:43:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:43:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:43:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:43:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:43:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 1cc33938-b91a-44a9-91e8-71c448e9b3ef (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 1cc33938-b91a-44a9-91e8-71c448e9b3ef (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 1cc33938-b91a-44a9-91e8-71c448e9b3ef (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:43:28 standalone.localdomain sudo[307885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:43:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:43:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:43:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:43:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:43:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:43:28 standalone.localdomain sudo[307885]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:43:28 standalone.localdomain sudo[307885]: pam_unix(sudo:session): session closed for user root
Oct 13 14:43:28 standalone.localdomain podman[307901]: 2025-10-13 14:43:28.289352322 +0000 UTC m=+0.082251703 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=nova_compute, config_id=tripleo_step5, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute)
Oct 13 14:43:28 standalone.localdomain podman[307901]: 2025-10-13 14:43:28.300589378 +0000 UTC m=+0.093488749 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, tcib_managed=true, batch=17.1_20250721.1, architecture=x86_64, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:43:28 standalone.localdomain podman[307901]: unhealthy
Oct 13 14:43:28 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:43:28 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed with result 'exit-code'.
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2068: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:28 standalone.localdomain podman[307899]: 2025-10-13 14:43:28.338890357 +0000 UTC m=+0.132586383 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, build-date=2025-07-21T16:28:54, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO 
Team, container_name=neutron_dhcp, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-dhcp-agent-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.buildah.version=1.33.12)
Oct 13 14:43:28 standalone.localdomain podman[307899]: 2025-10-13 14:43:28.346613025 +0000 UTC m=+0.140309051 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-dhcp-agent-container, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=neutron_dhcp, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:43:28 standalone.localdomain podman[307899]: unhealthy
Oct 13 14:43:28 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:43:28 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Failed with result 'exit-code'.
Oct 13 14:43:28 standalone.localdomain podman[307912]: 2025-10-13 14:43:28.397596234 +0000 UTC m=+0.179009652 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, config_id=tripleo_step2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, 
summary=Red Hat OpenStack Platform 17.1 mariadb, vendor=Red Hat, Inc., name=rhosp17/openstack-mariadb, architecture=x86_64, batch=17.1_20250721.1, container_name=clustercheck, description=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:43:28 standalone.localdomain podman[307902]: 2025-10-13 14:43:28.443393994 +0000 UTC m=+0.229302980 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.buildah.version=1.33.12, io.openshift.expose-services=, container_name=neutron_sriov_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-neutron-sriov-agent-container, build-date=2025-07-21T16:03:34, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, distribution-scope=public, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:43:28 standalone.localdomain podman[307900]: 2025-10-13 14:43:28.496605922 +0000 UTC m=+0.287385178 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.33.12, version=17.1.9, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, 
vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T13:27:15, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, container_name=iscsid)
Oct 13 14:43:28 standalone.localdomain podman[307900]: 2025-10-13 14:43:28.50301707 +0000 UTC m=+0.293796346 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, build-date=2025-07-21T13:27:15, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, tcib_managed=true, 
maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:43:28 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:43:28 standalone.localdomain podman[307912]: 2025-10-13 14:43:28.529301048 +0000 UTC m=+0.310714506 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-mariadb-container, name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 mariadb, batch=17.1_20250721.1, container_name=clustercheck, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T12:58:45, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, config_id=tripleo_step2)
Oct 13 14:43:28 standalone.localdomain podman[307912]: unhealthy
Oct 13 14:43:28 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:43:28 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Failed with result 'exit-code'.
Oct 13 14:43:28 standalone.localdomain podman[307902]: 2025-10-13 14:43:28.582685962 +0000 UTC m=+0.368594938 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, build-date=2025-07-21T16:03:34, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, tcib_managed=true, name=rhosp17/openstack-neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, release=1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-neutron-sriov-agent-container)
Oct 13 14:43:28 standalone.localdomain podman[307902]: unhealthy
Oct 13 14:43:28 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:43:28 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Failed with result 'exit-code'.
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:43:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:43:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:43:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:43:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:43:28 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:43:28 standalone.localdomain ceph-mon[29756]: pgmap v2068: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:43:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:43:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:43:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2069: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:30 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:43:30 standalone.localdomain ceph-mon[29756]: pgmap v2069: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2070: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:32 standalone.localdomain ceph-mon[29756]: pgmap v2070: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:43:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2071: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:34 standalone.localdomain ceph-mon[29756]: pgmap v2071: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2072: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:36 standalone.localdomain ceph-mon[29756]: pgmap v2072: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.873809) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366617873857, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 1235, "num_deletes": 250, "total_data_size": 1057255, "memory_usage": 1086680, "flush_reason": "Manual Compaction"}
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366617880404, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 649979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48068, "largest_seqno": 49302, "table_properties": {"data_size": 645970, "index_size": 1610, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10884, "raw_average_key_size": 20, "raw_value_size": 637053, "raw_average_value_size": 1197, "num_data_blocks": 75, "num_entries": 532, "num_filter_entries": 532, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760366512, "oldest_key_time": 1760366512, "file_creation_time": 1760366617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 6638 microseconds, and 2950 cpu microseconds.
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.880441) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 649979 bytes OK
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.880467) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.883563) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.883616) EVENT_LOG_v1 {"time_micros": 1760366617883604, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.883640) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 1051552, prev total WAL file size 1052041, number of live WAL files 2.
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.884530) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373532' seq:72057594037927935, type:22 .. '6D6772737461740032303033' seq:0, type:0; will stop at (end)
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(634KB)], [104(6303KB)]
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366617884601, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 7104736, "oldest_snapshot_seqno": -1}
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 5261 keys, 5419463 bytes, temperature: kUnknown
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366617908998, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 5419463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5388461, "index_size": 16761, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 131474, "raw_average_key_size": 24, "raw_value_size": 5297158, "raw_average_value_size": 1006, "num_data_blocks": 698, "num_entries": 5261, "num_filter_entries": 5261, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760366617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.909304) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 5419463 bytes
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.911458) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 290.3 rd, 221.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 6.2 +0.0 blob) out(5.2 +0.0 blob), read-write-amplify(19.3) write-amplify(8.3) OK, records in: 5731, records dropped: 470 output_compression: NoCompression
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.911579) EVENT_LOG_v1 {"time_micros": 1760366617911562, "job": 62, "event": "compaction_finished", "compaction_time_micros": 24475, "compaction_time_cpu_micros": 14143, "output_level": 6, "num_output_files": 1, "total_output_size": 5419463, "num_input_records": 5731, "num_output_records": 5261, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366617911936, "job": 62, "event": "table_file_deletion", "file_number": 106}
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366617913002, "job": 62, "event": "table_file_deletion", "file_number": 104}
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.884373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.913118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.913125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.913128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.913131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:43:37 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:43:37.913134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:43:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2073: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:38 standalone.localdomain ceph-mon[29756]: pgmap v2073: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2074: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:40 standalone.localdomain ceph-mon[29756]: pgmap v2074: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2075: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:42 standalone.localdomain ceph-mon[29756]: pgmap v2075: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:43:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2076: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:44 standalone.localdomain ceph-mon[29756]: pgmap v2076: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2077: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:46 standalone.localdomain ceph-mon[29756]: pgmap v2077: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:43:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:43:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:43:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:43:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:43:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:43:46 standalone.localdomain systemd[1]: tmp-crun.mobh6S.mount: Deactivated successfully.
Oct 13 14:43:46 standalone.localdomain podman[308022]: 2025-10-13 14:43:46.829665805 +0000 UTC m=+0.083825712 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, config_id=tripleo_step4, container_name=heat_api_cron, distribution-scope=public, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T15:56:26, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-heat-api-container)
Oct 13 14:43:46 standalone.localdomain podman[308029]: 2025-10-13 14:43:46.843583843 +0000 UTC m=+0.091624072 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, architecture=x86_64, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git)
Oct 13 14:43:46 standalone.localdomain podman[308022]: 2025-10-13 14:43:46.868060507 +0000 UTC m=+0.122220374 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, batch=17.1_20250721.1, container_name=heat_api_cron, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-07-21T15:56:26, io.buildah.version=1.33.12, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:43:46 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:43:46 standalone.localdomain podman[308029]: 2025-10-13 14:43:46.878973682 +0000 UTC m=+0.127013891 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=logrotate_crond, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team)
Oct 13 14:43:46 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:43:46 standalone.localdomain podman[308023]: 2025-10-13 14:43:46.88538509 +0000 UTC m=+0.138273978 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_api, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api, release=1, build-date=2025-07-21T15:56:26, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, version=17.1.9, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1)
Oct 13 14:43:46 standalone.localdomain podman[308016]: 2025-10-13 14:43:46.955944472 +0000 UTC m=+0.216254928 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, com.redhat.component=openstack-heat-engine-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-engine, build-date=2025-07-21T15:44:11, maintainer=OpenStack TripleO Team, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, container_name=heat_engine)
Oct 13 14:43:46 standalone.localdomain podman[308023]: 2025-10-13 14:43:46.971833151 +0000 UTC m=+0.224722009 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, release=1, build-date=2025-07-21T15:56:26, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-heat-api-container, container_name=heat_api, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-heat-api, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:43:46 standalone.localdomain podman[308015]: 2025-10-13 14:43:46.929431715 +0000 UTC m=+0.197746278 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, name=rhosp17/openstack-heat-api-cfn, batch=17.1_20250721.1, build-date=2025-07-21T14:49:55, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cfn, config_id=tripleo_step4, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-heat-api-cfn-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn)
Oct 13 14:43:46 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:43:46 standalone.localdomain podman[308016]: 2025-10-13 14:43:46.997016496 +0000 UTC m=+0.257327012 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=heat_engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-engine, com.redhat.component=openstack-heat-engine-container, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, build-date=2025-07-21T15:44:11, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, distribution-scope=public)
Oct 13 14:43:47 standalone.localdomain podman[308016]: unhealthy
Oct 13 14:43:47 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:43:47 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Failed with result 'exit-code'.
Oct 13 14:43:47 standalone.localdomain podman[308017]: 2025-10-13 14:43:47.012164662 +0000 UTC m=+0.268390303 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, name=rhosp17/openstack-memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, config_id=tripleo_step1, build-date=2025-07-21T12:58:43, description=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-memcached-container, distribution-scope=public, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:43:47 standalone.localdomain podman[308017]: 2025-10-13 14:43:47.043821217 +0000 UTC m=+0.300046848 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, vcs-type=git, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, name=rhosp17/openstack-memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, release=1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T12:58:43, distribution-scope=public, managed_by=tripleo_ansible)
Oct 13 14:43:47 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:43:47 standalone.localdomain podman[308015]: 2025-10-13 14:43:47.06405981 +0000 UTC m=+0.332374383 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cfn, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-cfn-container, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, vcs-type=git, config_id=tripleo_step4)
Oct 13 14:43:47 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:43:47 standalone.localdomain systemd[1]: tmp-crun.LXq9Za.mount: Deactivated successfully.
Oct 13 14:43:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:43:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2078: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:48 standalone.localdomain ceph-mon[29756]: pgmap v2078: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2079: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:50 standalone.localdomain ceph-mon[29756]: pgmap v2079: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2080: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:52 standalone.localdomain ceph-mon[29756]: pgmap v2080: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:43:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:43:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:43:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:43:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:43:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:43:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:43:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:43:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:43:52 standalone.localdomain podman[308181]: 2025-10-13 14:43:52.842985117 +0000 UTC m=+0.086213054 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=ovn_metadata_agent, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:43:52 standalone.localdomain podman[308181]: 2025-10-13 14:43:52.854313795 +0000 UTC m=+0.097541752 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, distribution-scope=public, container_name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible)
Oct 13 14:43:52 standalone.localdomain podman[308181]: unhealthy
Oct 13 14:43:52 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:43:52 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:43:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:43:52 standalone.localdomain systemd[1]: tmp-crun.T0DwSI.mount: Deactivated successfully.
Oct 13 14:43:52 standalone.localdomain podman[308153]: 2025-10-13 14:43:52.892236963 +0000 UTC m=+0.155034203 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, vendor=Red Hat, Inc., container_name=swift_object_server, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, architecture=x86_64, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, com.redhat.component=openstack-swift-object-container, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:43:52 standalone.localdomain podman[308162]: 2025-10-13 14:43:52.897274799 +0000 UTC m=+0.151030711 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, batch=17.1_20250721.1, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, container_name=glance_api_internal, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, description=Red Hat OpenStack Platform 17.1 glance-api, name=rhosp17/openstack-glance-api, com.redhat.component=openstack-glance-api-container)
Oct 13 14:43:52 standalone.localdomain podman[308185]: 2025-10-13 14:43:52.938159457 +0000 UTC m=+0.176197776 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20250721.1, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, tcib_managed=true)
Oct 13 14:43:52 standalone.localdomain podman[308173]: 2025-10-13 14:43:52.948368521 +0000 UTC m=+0.192796066 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, version=17.1.9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 14:43:52 standalone.localdomain podman[308156]: 2025-10-13 14:43:52.940507739 +0000 UTC m=+0.201816374 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:48:37, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, architecture=x86_64, container_name=swift_proxy, tcib_managed=true)
Oct 13 14:43:52 standalone.localdomain podman[308152]: 2025-10-13 14:43:52.994406858 +0000 UTC m=+0.260981505 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 glance-api, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, batch=17.1_20250721.1, release=1, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, container_name=glance_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc.)
Oct 13 14:43:53 standalone.localdomain podman[308185]: 2025-10-13 14:43:53.000723903 +0000 UTC m=+0.238762262 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T13:28:44, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, version=17.1.9, distribution-scope=public, container_name=ovn_controller, batch=17.1_20250721.1)
Oct 13 14:43:53 standalone.localdomain podman[308185]: unhealthy
Oct 13 14:43:53 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:43:53 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:43:53 standalone.localdomain podman[308186]: 2025-10-13 14:43:53.044640934 +0000 UTC m=+0.278940667 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, vcs-type=git, config_id=tripleo_step4)
Oct 13 14:43:53 standalone.localdomain podman[308152]: 2025-10-13 14:43:53.054467197 +0000 UTC m=+0.321041854 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T13:58:20, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, batch=17.1_20250721.1, io.openshift.tags=rhosp osp 
openstack osp-17.1, name=rhosp17/openstack-glance-api, version=17.1.9, config_id=tripleo_step4, container_name=glance_api_cron, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:43:53 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:43:53 standalone.localdomain podman[308154]: 2025-10-13 14:43:53.091356713 +0000 UTC m=+0.355351020 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-compute, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:43:53 standalone.localdomain podman[308153]: 2025-10-13 14:43:53.09417212 +0000 UTC m=+0.356969380 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, container_name=swift_object_server, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, batch=17.1_20250721.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 14:43:53 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:43:53 standalone.localdomain podman[308162]: 2025-10-13 14:43:53.120457448 +0000 UTC m=+0.374213350 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, name=rhosp17/openstack-glance-api, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, container_name=glance_api_internal, io.buildah.version=1.33.12, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', 
'/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, summary=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Oct 13 14:43:53 standalone.localdomain podman[308156]: 2025-10-13 14:43:53.129817517 +0000 UTC m=+0.391126152 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, architecture=x86_64, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-proxy-server-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, container_name=swift_proxy, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12)
Oct 13 14:43:53 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:43:53 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:43:53 standalone.localdomain podman[308173]: 2025-10-13 14:43:53.175472332 +0000 UTC m=+0.419899867 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, container_name=swift_container_server, tcib_managed=true, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, 
distribution-scope=public, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, release=1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc.)
Oct 13 14:43:53 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:43:53 standalone.localdomain podman[308186]: 2025-10-13 14:43:53.229735302 +0000 UTC m=+0.464035035 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, com.redhat.component=openstack-swift-account-container, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64)
Oct 13 14:43:53 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:43:53 standalone.localdomain podman[308154]: 2025-10-13 14:43:53.415808231 +0000 UTC m=+0.679802528 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, architecture=x86_64, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:43:53 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:43:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2081: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:54 standalone.localdomain sshd[308361]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:43:54 standalone.localdomain ceph-mon[29756]: pgmap v2081: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:54 standalone.localdomain sshd[308361]: Accepted publickey for root from 192.168.122.30 port 39448 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:43:54 standalone.localdomain systemd-logind[45629]: New session 275 of user root.
Oct 13 14:43:54 standalone.localdomain systemd[1]: Started Session 275 of User root.
Oct 13 14:43:54 standalone.localdomain sshd[308361]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:43:55 standalone.localdomain python3.9[308454]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:43:56 standalone.localdomain python3.9[308546]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"
                                                           _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:43:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2082: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:56 standalone.localdomain ceph-mon[29756]: pgmap v2082: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:56 standalone.localdomain python3.9[308637]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:43:57 standalone.localdomain python3.9[308729]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"
                                                           _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:43:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:43:57 standalone.localdomain podman[308804]: 2025-10-13 14:43:57.831553774 +0000 UTC m=+0.094736627 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-keystone, architecture=x86_64, config_id=tripleo_step3, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, container_name=keystone_cron, distribution-scope=public, 
summary=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-keystone-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, io.openshift.expose-services=)
Oct 13 14:43:57 standalone.localdomain podman[308804]: 2025-10-13 14:43:57.841093338 +0000 UTC m=+0.104276241 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, container_name=keystone_cron, name=rhosp17/openstack-keystone, managed_by=tripleo_ansible, com.redhat.component=openstack-keystone-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 keystone, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, build-date=2025-07-21T13:27:18, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1)
Oct 13 14:43:57 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:43:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:43:57 standalone.localdomain python3.9[308826]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:43:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2083: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:43:58 standalone.localdomain python3.9[308932]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 13 14:43:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:43:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:43:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:43:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:43:58 standalone.localdomain podman[308946]: 2025-10-13 14:43:58.831098274 +0000 UTC m=+0.078157417 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=healthy, batch=17.1_20250721.1, release=1, build-date=2025-07-21T16:03:34, io.openshift.expose-services=, container_name=neutron_sriov_agent, config_id=tripleo_step4, name=rhosp17/openstack-neutron-sriov-agent, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, architecture=x86_64, com.redhat.component=openstack-neutron-sriov-agent-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, managed_by=tripleo_ansible, 
distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:43:58 standalone.localdomain podman[308946]: 2025-10-13 14:43:58.841768833 +0000 UTC m=+0.088827976 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-neutron-sriov-agent, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:43:58 standalone.localdomain podman[308934]: 2025-10-13 14:43:58.81571836 +0000 UTC m=+0.078334561 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, container_name=iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-07-21T13:27:15, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, version=17.1.9, tcib_managed=true)
Oct 13 14:43:58 standalone.localdomain podman[308933]: 2025-10-13 14:43:58.872995324 +0000 UTC m=+0.137080961 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, container_name=neutron_dhcp, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-neutron-dhcp-agent, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-dhcp-agent-container, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, release=1, build-date=2025-07-21T16:28:54, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9)
Oct 13 14:43:58 standalone.localdomain podman[308933]: 2025-10-13 14:43:58.884736205 +0000 UTC m=+0.148821842 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, 
vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-neutron-dhcp-agent-container, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T16:28:54, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-dhcp-agent, distribution-scope=public, container_name=neutron_dhcp, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team)
Oct 13 14:43:58 standalone.localdomain ceph-mon[29756]: pgmap v2083: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:43:58 standalone.localdomain podman[308933]: unhealthy
Oct 13 14:43:58 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:43:58 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Failed with result 'exit-code'.
Oct 13 14:43:58 standalone.localdomain podman[308946]: unhealthy
Oct 13 14:43:58 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:43:58 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Failed with result 'exit-code'.
Oct 13 14:43:58 standalone.localdomain podman[308935]: 2025-10-13 14:43:58.924737646 +0000 UTC m=+0.175056950 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37)
Oct 13 14:43:58 standalone.localdomain podman[308934]: 2025-10-13 14:43:58.946185827 +0000 UTC m=+0.208802048 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, distribution-scope=public, batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, 
vcs-type=git, managed_by=tripleo_ansible, container_name=iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true)
Oct 13 14:43:58 standalone.localdomain podman[308935]: 2025-10-13 14:43:58.962589902 +0000 UTC m=+0.212909215 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, container_name=nova_compute, config_id=tripleo_step5, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:43:58 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:43:58 standalone.localdomain podman[308935]: unhealthy
Oct 13 14:43:58 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:43:58 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed with result 'exit-code'.
Oct 13 14:43:59 standalone.localdomain podman[308947]: 2025-10-13 14:43:59.036877039 +0000 UTC m=+0.280539717 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, tcib_managed=true, name=rhosp17/openstack-mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, config_id=tripleo_step2, vcs-type=git, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.openshift.expose-services=, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., 
version=17.1.9, container_name=clustercheck, build-date=2025-07-21T12:58:45, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible)
Oct 13 14:43:59 standalone.localdomain podman[308947]: 2025-10-13 14:43:59.088653483 +0000 UTC m=+0.332316181 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-mariadb, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, build-date=2025-07-21T12:58:45, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, container_name=clustercheck, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64)
Oct 13 14:43:59 standalone.localdomain podman[308947]: unhealthy
Oct 13 14:43:59 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:43:59 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Failed with result 'exit-code'.
Oct 13 14:43:59 standalone.localdomain python3.9[309140]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:44:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2084: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:00 standalone.localdomain python3.9[309232]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 13 14:44:00 standalone.localdomain ceph-mon[29756]: pgmap v2084: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:01 standalone.localdomain python3.9[309322]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 14:44:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2085: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:02 standalone.localdomain python3.9[309370]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 14:44:02 standalone.localdomain ceph-mon[29756]: pgmap v2085: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:44:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2086: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:04 standalone.localdomain ceph-mon[29756]: pgmap v2086: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:05 standalone.localdomain sshd[308361]: pam_unix(sshd:session): session closed for user root
Oct 13 14:44:05 standalone.localdomain systemd[1]: session-275.scope: Deactivated successfully.
Oct 13 14:44:05 standalone.localdomain systemd[1]: session-275.scope: Consumed 7.367s CPU time.
Oct 13 14:44:05 standalone.localdomain systemd-logind[45629]: Session 275 logged out. Waiting for processes to exit.
Oct 13 14:44:05 standalone.localdomain systemd-logind[45629]: Removed session 275.
Oct 13 14:44:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2087: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:06 standalone.localdomain ceph-mon[29756]: pgmap v2087: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23059 DF PROTO=TCP SPT=48312 DPT=9100 SEQ=46755921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9836040000000001030307) 
Oct 13 14:44:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:44:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2088: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23060 DF PROTO=TCP SPT=48312 DPT=9100 SEQ=46755921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9839F60000000001030307) 
Oct 13 14:44:08 standalone.localdomain ceph-mon[29756]: pgmap v2088: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2089: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:10 standalone.localdomain ceph-mon[29756]: pgmap v2089: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23061 DF PROTO=TCP SPT=48312 DPT=9100 SEQ=46755921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9841F60000000001030307) 
Oct 13 14:44:11 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44928 DF PROTO=TCP SPT=35680 DPT=9882 SEQ=1031060374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9846560000000001030307) 
Oct 13 14:44:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2090: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:12 standalone.localdomain ceph-mon[29756]: pgmap v2090: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:12 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44929 DF PROTO=TCP SPT=35680 DPT=9882 SEQ=1031060374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C984A760000000001030307) 
Oct 13 14:44:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:44:12 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24029 DF PROTO=TCP SPT=55134 DPT=9105 SEQ=3092038196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C984BC60000000001030307) 
Oct 13 14:44:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24030 DF PROTO=TCP SPT=55134 DPT=9105 SEQ=3092038196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C984FB60000000001030307) 
Oct 13 14:44:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2091: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:14 standalone.localdomain ceph-mon[29756]: pgmap v2091: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23062 DF PROTO=TCP SPT=48312 DPT=9100 SEQ=46755921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9851B60000000001030307) 
Oct 13 14:44:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44930 DF PROTO=TCP SPT=35680 DPT=9882 SEQ=1031060374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9852760000000001030307) 
Oct 13 14:44:16 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24031 DF PROTO=TCP SPT=55134 DPT=9105 SEQ=3092038196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9857B60000000001030307) 
Oct 13 14:44:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2092: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:16 standalone.localdomain ceph-mon[29756]: pgmap v2092: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:44:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:44:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:44:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:44:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:44:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:44:17 standalone.localdomain sshd[309411]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:44:17 standalone.localdomain podman[309389]: 2025-10-13 14:44:17.831230624 +0000 UTC m=+0.080726205 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T12:58:43, io.openshift.expose-services=, container_name=memcached, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 memcached, config_id=tripleo_step1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, architecture=x86_64, com.redhat.component=openstack-memcached-container, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:44:17 standalone.localdomain systemd[1]: tmp-crun.R9dYXL.mount: Deactivated successfully.
Oct 13 14:44:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:44:17 standalone.localdomain sshd[309411]: Accepted publickey for root from 192.168.122.30 port 40074 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:44:17 standalone.localdomain podman[309390]: 2025-10-13 14:44:17.901852759 +0000 UTC m=+0.149329799 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-container, container_name=heat_api_cron)
Oct 13 14:44:17 standalone.localdomain systemd-logind[45629]: New session 276 of user root.
Oct 13 14:44:17 standalone.localdomain systemd[1]: Started Session 276 of User root.
Oct 13 14:44:17 standalone.localdomain sshd[309411]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:44:17 standalone.localdomain podman[309391]: 2025-10-13 14:44:17.882553965 +0000 UTC m=+0.128844078 container health_status 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, build-date=2025-07-21T15:56:26, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, container_name=heat_api, summary=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:44:17 standalone.localdomain podman[309387]: 2025-10-13 14:44:17.947595387 +0000 UTC m=+0.200129422 container health_status 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, build-date=2025-07-21T14:49:55, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, name=rhosp17/openstack-heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, batch=17.1_20250721.1, tcib_managed=true, com.redhat.component=openstack-heat-api-cfn-container, container_name=heat_api_cfn, maintainer=OpenStack TripleO Team, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, release=1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12)
Oct 13 14:44:17 standalone.localdomain podman[309390]: 2025-10-13 14:44:17.964616581 +0000 UTC m=+0.212093621 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, name=rhosp17/openstack-heat-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, build-date=2025-07-21T15:56:26, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-heat-api-container, container_name=heat_api_cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, version=17.1.9, vcs-type=git, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530)
Oct 13 14:44:17 standalone.localdomain podman[309391]: 2025-10-13 14:44:17.964880779 +0000 UTC m=+0.211170942 container exec_died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, summary=Red Hat OpenStack Platform 17.1 heat-api, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, container_name=heat_api, version=17.1.9, build-date=2025-07-21T15:56:26, com.redhat.component=openstack-heat-api-container, architecture=x86_64, release=1, io.buildah.version=1.33.12, distribution-scope=public, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, config_id=tripleo_step4, name=rhosp17/openstack-heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:44:17 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Deactivated successfully.
Oct 13 14:44:18 standalone.localdomain podman[309387]: 2025-10-13 14:44:18.006937403 +0000 UTC m=+0.259471458 container exec_died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, name=rhosp17/openstack-heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container, config_id=tripleo_step4, build-date=2025-07-21T14:49:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-type=git, version=17.1.9, container_name=heat_api_cfn, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, batch=17.1_20250721.1)
Oct 13 14:44:18 standalone.localdomain podman[309389]: 2025-10-13 14:44:18.015036203 +0000 UTC m=+0.264531844 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, name=rhosp17/openstack-memcached, com.redhat.component=openstack-memcached-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=memcached, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, config_id=tripleo_step1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, architecture=x86_64, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:44:18 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Deactivated successfully.
Oct 13 14:44:18 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:44:18 standalone.localdomain podman[309401]: 2025-10-13 14:44:18.099713969 +0000 UTC m=+0.339578694 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T13:07:52, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, release=1, container_name=logrotate_crond, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c)
Oct 13 14:44:18 standalone.localdomain podman[309401]: 2025-10-13 14:44:18.113770012 +0000 UTC m=+0.353634747 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_id=tripleo_step4, release=1, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.)
Oct 13 14:44:18 standalone.localdomain podman[309388]: 2025-10-13 14:44:17.862439355 +0000 UTC m=+0.111585845 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=healthy, description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, build-date=2025-07-21T15:44:11, container_name=heat_engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-engine, config_id=tripleo_step4, version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-engine, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-engine-container, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:44:18 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:44:18 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:44:18 standalone.localdomain podman[309388]: 2025-10-13 14:44:18.149733349 +0000 UTC m=+0.398879809 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, build-date=2025-07-21T15:44:11, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, release=1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 heat-engine, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-engine-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, 
name=rhosp17/openstack-heat-engine, architecture=x86_64, io.openshift.expose-services=, container_name=heat_engine, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:44:18 standalone.localdomain podman[309388]: unhealthy
Oct 13 14:44:18 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:44:18 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Failed with result 'exit-code'.
Oct 13 14:44:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2093: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:44:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1718557397' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:44:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:44:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1718557397' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:44:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44931 DF PROTO=TCP SPT=35680 DPT=9882 SEQ=1031060374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9862360000000001030307) 
Oct 13 14:44:18 standalone.localdomain systemd[1]: tmp-crun.a2HCAX.mount: Deactivated successfully.
Oct 13 14:44:18 standalone.localdomain ceph-mon[29756]: pgmap v2093: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1718557397' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:44:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1718557397' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:44:18 standalone.localdomain python3.9[309616]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 14:44:18 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:44:19 standalone.localdomain systemd-rc-local-generator[309637]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:44:19 standalone.localdomain systemd-sysv-generator[309643]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:44:19 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24032 DF PROTO=TCP SPT=55134 DPT=9105 SEQ=3092038196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9867760000000001030307) 
Oct 13 14:44:20 standalone.localdomain python3.9[309742]: ansible-ansible.builtin.service_facts Invoked
Oct 13 14:44:20 standalone.localdomain network[309759]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 14:44:20 standalone.localdomain network[309760]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:44:20 standalone.localdomain network[309761]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 14:44:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2094: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:20 standalone.localdomain ceph-mon[29756]: pgmap v2094: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2095: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: pgmap v2095: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:22 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.899395) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366662899444, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 669, "num_deletes": 257, "total_data_size": 432948, "memory_usage": 445672, "flush_reason": "Manual Compaction"}
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366662903446, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 424980, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49303, "largest_seqno": 49971, "table_properties": {"data_size": 421757, "index_size": 1144, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7314, "raw_average_key_size": 18, "raw_value_size": 415229, "raw_average_value_size": 1043, "num_data_blocks": 52, "num_entries": 398, "num_filter_entries": 398, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760366617, "oldest_key_time": 1760366617, "file_creation_time": 1760366662, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 4170 microseconds, and 1322 cpu microseconds.
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.903570) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 424980 bytes OK
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.903604) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.906132) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.906150) EVENT_LOG_v1 {"time_micros": 1760366662906145, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.906170) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 429352, prev total WAL file size 429841, number of live WAL files 2.
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.907670) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353034' seq:72057594037927935, type:22 .. '6C6F676D0031373537' seq:0, type:0; will stop at (end)
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(415KB)], [107(5292KB)]
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366662907734, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 5844443, "oldest_snapshot_seqno": -1}
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 5132 keys, 5762311 bytes, temperature: kUnknown
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366662929358, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 5762311, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5730833, "index_size": 17557, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12869, "raw_key_size": 129682, "raw_average_key_size": 25, "raw_value_size": 5640468, "raw_average_value_size": 1099, "num_data_blocks": 730, "num_entries": 5132, "num_filter_entries": 5132, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760366662, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.929599) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 5762311 bytes
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.931100) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 269.6 rd, 265.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 5.2 +0.0 blob) out(5.5 +0.0 blob), read-write-amplify(27.3) write-amplify(13.6) OK, records in: 5659, records dropped: 527 output_compression: NoCompression
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.931121) EVENT_LOG_v1 {"time_micros": 1760366662931112, "job": 64, "event": "compaction_finished", "compaction_time_micros": 21679, "compaction_time_cpu_micros": 10341, "output_level": 6, "num_output_files": 1, "total_output_size": 5762311, "num_input_records": 5659, "num_output_records": 5132, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366662931270, "job": 64, "event": "table_file_deletion", "file_number": 109}
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366662931706, "job": 64, "event": "table_file_deletion", "file_number": 107}
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.907574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.931768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.931774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.931776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.931777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:44:22 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:44:22.931778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:44:22 standalone.localdomain systemd[1]: tmp-crun.0XI1Pd.mount: Deactivated successfully.
Oct 13 14:44:22 standalone.localdomain podman[309828]: 2025-10-13 14:44:22.957840571 +0000 UTC m=+0.064252930 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true)
Oct 13 14:44:22 standalone.localdomain podman[309828]: 2025-10-13 14:44:22.972805112 +0000 UTC m=+0.079217461 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.33.12)
Oct 13 14:44:22 standalone.localdomain podman[309828]: unhealthy
Oct 13 14:44:23 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:44:23 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:44:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:44:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:44:23 standalone.localdomain podman[309859]: 2025-10-13 14:44:23.090751653 +0000 UTC m=+0.058938676 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:44:23 standalone.localdomain podman[309859]: 2025-10-13 14:44:23.099987397 +0000 UTC m=+0.068174430 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_controller, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, tcib_managed=true)
Oct 13 14:44:23 standalone.localdomain podman[309859]: unhealthy
Oct 13 14:44:23 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:44:23 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:44:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:44:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:44:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:44:23 standalone.localdomain podman[309879]: 2025-10-13 14:44:23.154792934 +0000 UTC m=+0.063496716 container health_status 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, health_status=healthy, vcs-type=git, release=1, com.redhat.component=openstack-glance-api-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 glance-api, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, distribution-scope=public, name=rhosp17/openstack-glance-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.k8s.description=Red Hat OpenStack 
Platform 17.1 glance-api, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T13:58:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=glance_api_cron, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team)
Oct 13 14:44:23 standalone.localdomain podman[309879]: 2025-10-13 14:44:23.190605506 +0000 UTC m=+0.099309298 container exec_died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, name=rhosp17/openstack-glance-api, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-type=git, release=1, 
io.buildah.version=1.33.12, build-date=2025-07-21T13:58:20, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=glance_api_cron, summary=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.9)
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:44:23
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'manila_metadata', '.mgr', 'manila_data', 'volumes', 'images', 'vms']
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:44:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:44:23 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Deactivated successfully.
Oct 13 14:44:23 standalone.localdomain podman[309895]: 2025-10-13 14:44:23.218240597 +0000 UTC m=+0.088141574 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vcs-type=git, distribution-scope=public, version=17.1.9, container_name=swift_object_server, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, tcib_managed=true, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 13 14:44:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:44:23 standalone.localdomain podman[309941]: 2025-10-13 14:44:23.290541603 +0000 UTC m=+0.065799307 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, name=rhosp17/openstack-swift-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-swift-container-container, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_container_server, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:44:23 standalone.localdomain podman[309909]: 2025-10-13 14:44:23.267162103 +0000 UTC m=+0.116236929 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_proxy, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1)
Oct 13 14:44:23 standalone.localdomain podman[309964]: 2025-10-13 14:44:23.39051646 +0000 UTC m=+0.123033678 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, build-date=2025-07-21T16:11:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, name=rhosp17/openstack-swift-account, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:44:23 standalone.localdomain podman[309895]: 2025-10-13 14:44:23.412819667 +0000 UTC m=+0.282720654 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, build-date=2025-07-21T14:56:28, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, config_id=tripleo_step4, container_name=swift_object_server, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:44:23 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:44:23 standalone.localdomain podman[309905]: 2025-10-13 14:44:23.373572469 +0000 UTC m=+0.227581867 container health_status ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, health_status=healthy, tcib_managed=true, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 glance-api, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-glance-api, vendor=Red Hat, Inc., container_name=glance_api_internal, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', 
'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T13:58:20, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, release=1)
Oct 13 14:44:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:44:23 standalone.localdomain podman[309909]: 2025-10-13 14:44:23.475604319 +0000 UTC m=+0.324679155 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, container_name=swift_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, io.buildah.version=1.33.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-proxy-server-container, version=17.1.9, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vcs-type=git, release=1)
Oct 13 14:44:23 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:44:23 standalone.localdomain podman[309941]: 2025-10-13 14:44:23.493734838 +0000 UTC m=+0.268992522 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, com.redhat.component=openstack-swift-container-container, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, build-date=2025-07-21T15:54:32, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-swift-container, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4)
Oct 13 14:44:23 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:44:23 standalone.localdomain podman[310036]: 2025-10-13 14:44:23.529735557 +0000 UTC m=+0.074397212 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:44:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35416 DF PROTO=TCP SPT=35128 DPT=9102 SEQ=1608128948 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C98751E0000000001030307) 
Oct 13 14:44:23 standalone.localdomain podman[309905]: 2025-10-13 14:44:23.577029091 +0000 UTC m=+0.431038499 container exec_died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:58:20, description=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, name=rhosp17/openstack-glance-api, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
'/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_internal, release=1)
Oct 13 14:44:23 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Deactivated successfully.
Oct 13 14:44:23 standalone.localdomain podman[309964]: 2025-10-13 14:44:23.629988602 +0000 UTC m=+0.362505820 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vcs-type=git, com.redhat.component=openstack-swift-account-container, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 
swift-account, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, architecture=x86_64, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:44:23 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:44:23 standalone.localdomain podman[310036]: 2025-10-13 14:44:23.838864882 +0000 UTC m=+0.383526567 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, vcs-type=git, container_name=nova_migration_target, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 
nova-compute, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 13 14:44:23 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:44:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2096: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:24 standalone.localdomain python3.9[310176]: ansible-ansible.builtin.service_facts Invoked
Oct 13 14:44:24 standalone.localdomain network[310193]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 14:44:24 standalone.localdomain network[310194]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:44:24 standalone.localdomain network[310195]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 14:44:24 standalone.localdomain ceph-mon[29756]: pgmap v2096: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35417 DF PROTO=TCP SPT=35128 DPT=9102 SEQ=1608128948 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9879370000000001030307) 
Oct 13 14:44:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16386 DF PROTO=TCP SPT=40506 DPT=9101 SEQ=2732666616 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C987CAD0000000001030307) 
Oct 13 14:44:25 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2097: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:26 standalone.localdomain ceph-mon[29756]: pgmap v2097: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16387 DF PROTO=TCP SPT=40506 DPT=9101 SEQ=2732666616 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9880B60000000001030307) 
Oct 13 14:44:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35418 DF PROTO=TCP SPT=35128 DPT=9102 SEQ=1608128948 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9881360000000001030307) 
Oct 13 14:44:27 standalone.localdomain python3.9[310400]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_barbican_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:44:27 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:44:27 standalone.localdomain systemd-sysv-generator[310432]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:44:27 standalone.localdomain systemd-rc-local-generator[310425]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:44:27 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:44:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:44:28 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:44:28 standalone.localdomain recover_tripleo_nova_virtqemud[310444]: 93291
Oct 13 14:44:28 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:44:28 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:44:28 standalone.localdomain podman[310439]: 2025-10-13 14:44:28.096940352 +0000 UTC m=+0.061221996 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, container_name=keystone_cron, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', 
'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.openshift.expose-services=, name=rhosp17/openstack-keystone, summary=Red Hat OpenStack Platform 17.1 keystone, build-date=2025-07-21T13:27:18, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 14:44:28 standalone.localdomain podman[310439]: 2025-10-13 14:44:28.108859629 +0000 UTC m=+0.073141303 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=keystone_cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=openstack-keystone-container, release=1, batch=17.1_20250721.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 
keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, version=17.1.9, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone)
Oct 13 14:44:28 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:44:28 standalone.localdomain sudo[310525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:44:28 standalone.localdomain sudo[310525]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:44:28 standalone.localdomain sudo[310525]: pam_unix(sudo:session): session closed for user root
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2098: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:28 standalone.localdomain sudo[310565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:44:28 standalone.localdomain sudo[310565]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:44:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16388 DF PROTO=TCP SPT=40506 DPT=9101 SEQ=2732666616 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9888B60000000001030307) 
Oct 13 14:44:28 standalone.localdomain python3.9[310563]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_barbican_keystone_listener.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:44:28 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:44:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:44:28 standalone.localdomain systemd-sysv-generator[310626]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:44:28 standalone.localdomain systemd-rc-local-generator[310623]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:44:28 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:28 standalone.localdomain ceph-mon[29756]: pgmap v2098: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:28 standalone.localdomain sudo[310565]: pam_unix(sudo:session): session closed for user root
Oct 13 14:44:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:44:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:44:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:44:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:44:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:44:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:44:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:44:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:44:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 4708ee3f-e691-4a76-b953-507979f2e288 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:44:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 4708ee3f-e691-4a76-b953-507979f2e288 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:44:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 4708ee3f-e691-4a76-b953-507979f2e288 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:44:29 standalone.localdomain sudo[310649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:44:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:44:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:44:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:44:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:44:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:44:29 standalone.localdomain sudo[310649]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:44:29 standalone.localdomain sudo[310649]: pam_unix(sudo:session): session closed for user root
Oct 13 14:44:29 standalone.localdomain podman[310666]: 2025-10-13 14:44:29.21565298 +0000 UTC m=+0.081867231 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=unhealthy, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, vcs-type=git, container_name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, build-date=2025-07-21T16:03:34, com.redhat.component=openstack-neutron-sriov-agent-container, tcib_managed=true, 
io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1)
Oct 13 14:44:29 standalone.localdomain podman[310664]: 2025-10-13 14:44:29.20199635 +0000 UTC m=+0.070886293 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 14:44:29 standalone.localdomain podman[310665]: 2025-10-13 14:44:29.267534928 +0000 UTC m=+0.132352886 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, io.buildah.version=1.33.12, architecture=x86_64, version=17.1.9, container_name=nova_compute, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.)
Oct 13 14:44:29 standalone.localdomain podman[310666]: 2025-10-13 14:44:29.273530222 +0000 UTC m=+0.139744483 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, container_name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, build-date=2025-07-21T16:03:34, com.redhat.component=openstack-neutron-sriov-agent-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, batch=17.1_20250721.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, version=17.1.9, io.buildah.version=1.33.12, distribution-scope=public)
Oct 13 14:44:29 standalone.localdomain podman[310665]: 2025-10-13 14:44:29.278830965 +0000 UTC m=+0.143648913 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.buildah.version=1.33.12, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:44:29 standalone.localdomain podman[310665]: unhealthy
Oct 13 14:44:29 standalone.localdomain podman[310664]: 2025-10-13 14:44:29.286829102 +0000 UTC m=+0.155719035 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, maintainer=OpenStack TripleO Team, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Oct 13 14:44:29 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:44:29 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed with result 'exit-code'.
Oct 13 14:44:29 standalone.localdomain podman[310663]: 2025-10-13 14:44:29.242742034 +0000 UTC m=+0.113402742 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, build-date=2025-07-21T16:28:54, name=rhosp17/openstack-neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, release=1, com.redhat.component=openstack-neutron-dhcp-agent-container, container_name=neutron_dhcp, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9)
Oct 13 14:44:29 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:44:29 standalone.localdomain podman[310663]: 2025-10-13 14:44:29.322899581 +0000 UTC m=+0.193560319 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, container_name=neutron_dhcp, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T16:28:54, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.component=openstack-neutron-dhcp-agent-container, config_id=tripleo_step4, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:44:29 standalone.localdomain podman[310663]: unhealthy
Oct 13 14:44:29 standalone.localdomain podman[310676]: 2025-10-13 14:44:29.337036808 +0000 UTC m=+0.188351730 container health_status cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, health_status=unhealthy, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, description=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-mariadb-container, container_name=clustercheck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, name=rhosp17/openstack-mariadb, architecture=x86_64, config_id=tripleo_step2, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T12:58:45, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, 
batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 mariadb)
Oct 13 14:44:29 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:44:29 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Failed with result 'exit-code'.
Oct 13 14:44:29 standalone.localdomain podman[310666]: unhealthy
Oct 13 14:44:29 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:44:29 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Failed with result 'exit-code'.
Oct 13 14:44:29 standalone.localdomain podman[310676]: 2025-10-13 14:44:29.395764384 +0000 UTC m=+0.247079306 container exec_died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, config_id=tripleo_step2, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20250721.1, container_name=clustercheck, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, 
version=17.1.9, build-date=2025-07-21T12:58:45, com.redhat.component=openstack-mariadb-container, distribution-scope=public, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, release=1, architecture=x86_64)
Oct 13 14:44:29 standalone.localdomain podman[310676]: unhealthy
Oct 13 14:44:29 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:44:29 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Failed with result 'exit-code'.
Oct 13 14:44:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:44:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:44:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:44:29 standalone.localdomain python3.9[310869]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_barbican_worker.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:44:29 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:44:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:44:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:44:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:44:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:44:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:44:29 standalone.localdomain systemd-sysv-generator[310899]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:44:29 standalone.localdomain systemd-rc-local-generator[310895]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:44:30 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2099: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35419 DF PROTO=TCP SPT=35128 DPT=9102 SEQ=1608128948 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9890F70000000001030307) 
Oct 13 14:44:30 standalone.localdomain ceph-mon[29756]: pgmap v2099: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:30 standalone.localdomain python3.9[310996]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_cinder_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:44:31 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:44:31 standalone.localdomain systemd-rc-local-generator[311027]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:44:31 standalone.localdomain systemd-sysv-generator[311030]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:44:31 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:32 standalone.localdomain python3.9[311125]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_cinder_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:44:32 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:44:32 standalone.localdomain systemd-sysv-generator[311159]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:44:32 standalone.localdomain systemd-rc-local-generator[311153]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:44:32 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2100: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:32 standalone.localdomain ceph-mon[29756]: pgmap v2100: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16389 DF PROTO=TCP SPT=40506 DPT=9101 SEQ=2732666616 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9898760000000001030307) 
Oct 13 14:44:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:44:33 standalone.localdomain python3.9[311254]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_cinder_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:44:33 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:44:33 standalone.localdomain systemd-sysv-generator[311281]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:44:33 standalone.localdomain systemd-rc-local-generator[311277]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:44:33 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:34 standalone.localdomain python3.9[311381]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_clustercheck.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:44:34 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:44:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2101: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:34 standalone.localdomain ceph-mon[29756]: pgmap v2101: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:34 standalone.localdomain systemd-rc-local-generator[311411]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:44:34 standalone.localdomain systemd-sysv-generator[311414]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:44:34 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:34 standalone.localdomain systemd[1]: Stopping clustercheck container...
Oct 13 14:44:34 standalone.localdomain systemd[1]: libpod-cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.scope: Deactivated successfully.
Oct 13 14:44:34 standalone.localdomain systemd[1]: libpod-cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.scope: Consumed 1min 8.778s CPU time.
Oct 13 14:44:34 standalone.localdomain podman[311422]: 2025-10-13 14:44:34.845950133 +0000 UTC m=+0.078152687 container stop cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, container_name=clustercheck, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, managed_by=tripleo_ansible, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, io.buildah.version=1.33.12, com.redhat.component=openstack-mariadb-container, architecture=x86_64, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, name=rhosp17/openstack-mariadb, vcs-type=git, build-date=2025-07-21T12:58:45)
Oct 13 14:44:34 standalone.localdomain podman[311422]: 2025-10-13 14:44:34.902901706 +0000 UTC m=+0.135104320 container died cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, batch=17.1_20250721.1, distribution-scope=public, com.redhat.component=openstack-mariadb-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T12:58:45, summary=Red Hat OpenStack Platform 17.1 mariadb, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, config_id=tripleo_step2, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, container_name=clustercheck, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, tcib_managed=true, 
vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, description=Red Hat OpenStack Platform 17.1 mariadb, name=rhosp17/openstack-mariadb, vcs-type=git, release=1)
Oct 13 14:44:34 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.timer: Deactivated successfully.
Oct 13 14:44:34 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.
Oct 13 14:44:34 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Failed to open /run/systemd/transient/cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: No such file or directory
Oct 13 14:44:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-03412483bd4ca7b5539c903f8c964f9a8806e7da0b632626e9e28c6f9211cb67-merged.mount: Deactivated successfully.
Oct 13 14:44:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab-userdata-shm.mount: Deactivated successfully.
Oct 13 14:44:34 standalone.localdomain podman[311422]: 2025-10-13 14:44:34.955134364 +0000 UTC m=+0.187336908 container cleanup cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, tcib_managed=true, distribution-scope=public, build-date=2025-07-21T12:58:45, batch=17.1_20250721.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, version=17.1.9, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-mariadb, summary=Red Hat OpenStack Platform 17.1 mariadb, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, com.redhat.component=openstack-mariadb-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=clustercheck, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git, config_id=tripleo_step2, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']})
Oct 13 14:44:34 standalone.localdomain podman[311422]: clustercheck
Oct 13 14:44:34 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.timer: Failed to open /run/systemd/transient/cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.timer: No such file or directory
Oct 13 14:44:34 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Failed to open /run/systemd/transient/cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: No such file or directory
Oct 13 14:44:34 standalone.localdomain podman[311436]: 2025-10-13 14:44:34.966756682 +0000 UTC m=+0.103643512 container cleanup cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, managed_by=tripleo_ansible, name=rhosp17/openstack-mariadb, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 mariadb, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-mariadb-container, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, container_name=clustercheck, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T12:58:45, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, architecture=x86_64, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, config_id=tripleo_step2, batch=17.1_20250721.1)
Oct 13 14:44:34 standalone.localdomain systemd[1]: libpod-conmon-cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.scope: Deactivated successfully.
Oct 13 14:44:35 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.timer: Failed to open /run/systemd/transient/cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.timer: No such file or directory
Oct 13 14:44:35 standalone.localdomain systemd[1]: cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: Failed to open /run/systemd/transient/cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab.service: No such file or directory
Oct 13 14:44:35 standalone.localdomain podman[311450]: 2025-10-13 14:44:35.04272452 +0000 UTC m=+0.048836814 container cleanup cbcf1fdb4c951a3424440903fadb68dcf036cc1cd97146ca9ae8e1d330724aab (image=registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1, name=clustercheck, batch=17.1_20250721.1, com.redhat.component=openstack-mariadb-container, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 mariadb, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-mariadb/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T12:58:45, name=rhosp17/openstack-mariadb, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 mariadb, io.k8s.description=Red Hat OpenStack Platform 17.1 mariadb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a7af8ae2ee3733ec3f843ca782a7d238'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-mariadb:17.1', 'net': 'host', 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/clustercheck:/var/lib/kolla/config_files/src:ro', '/var/lib/mysql:/var/lib/mysql']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=da2cb5ba4cc0b38a4a0c84aa2adf09772ed77172, container_name=clustercheck, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 mariadb, release=1, vendor=Red Hat, Inc., architecture=x86_64)
Oct 13 14:44:35 standalone.localdomain podman[311450]: clustercheck
Oct 13 14:44:35 standalone.localdomain systemd[1]: tripleo_clustercheck.service: Deactivated successfully.
Oct 13 14:44:35 standalone.localdomain systemd[1]: Stopped clustercheck container.
Oct 13 14:44:35 standalone.localdomain python3.9[311552]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_glance_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:44:35 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:44:35 standalone.localdomain systemd-rc-local-generator[311583]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:44:35 standalone.localdomain systemd-sysv-generator[311586]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:44:35 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2102: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:36 standalone.localdomain ceph-mon[29756]: pgmap v2102: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:36 standalone.localdomain python3.9[311681]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_glance_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:44:36 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:44:36 standalone.localdomain systemd-sysv-generator[311710]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:44:36 standalone.localdomain systemd-rc-local-generator[311707]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:44:37 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:37 standalone.localdomain systemd[1]: Stopping glance_api_cron container...
Oct 13 14:44:37 standalone.localdomain crond[115642]: (CRON) INFO (Shutting down)
Oct 13 14:44:37 standalone.localdomain systemd[1]: libpod-21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.scope: Deactivated successfully.
Oct 13 14:44:37 standalone.localdomain podman[311721]: 2025-10-13 14:44:37.341283769 +0000 UTC m=+0.073481943 container died 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.component=openstack-glance-api-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, container_name=glance_api_cron, summary=Red Hat OpenStack Platform 17.1 glance-api, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 glance-api, version=17.1.9, name=rhosp17/openstack-glance-api, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-type=git, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075)
Oct 13 14:44:37 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.timer: Deactivated successfully.
Oct 13 14:44:37 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.
Oct 13 14:44:37 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Failed to open /run/systemd/transient/21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: No such file or directory
Oct 13 14:44:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b-userdata-shm.mount: Deactivated successfully.
Oct 13 14:44:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-98ccaca48a76da707f9da9049cdbb0b9663c20b38f1eca84b44474ecd1a130ba-merged.mount: Deactivated successfully.
Oct 13 14:44:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57030 DF PROTO=TCP SPT=60312 DPT=9100 SEQ=3224231792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C98AB340000000001030307) 
Oct 13 14:44:37 standalone.localdomain podman[311721]: 2025-10-13 14:44:37.39167452 +0000 UTC m=+0.123872674 container cleanup 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, vcs-type=git, version=17.1.9, container_name=glance_api_cron, name=rhosp17/openstack-glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.component=openstack-glance-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T13:58:20, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:44:37 standalone.localdomain podman[311721]: glance_api_cron
Oct 13 14:44:37 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.timer: Failed to open /run/systemd/transient/21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.timer: No such file or directory
Oct 13 14:44:37 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Failed to open /run/systemd/transient/21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: No such file or directory
Oct 13 14:44:37 standalone.localdomain podman[311734]: 2025-10-13 14:44:37.452560704 +0000 UTC m=+0.101977830 container cleanup 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, build-date=2025-07-21T13:58:20, com.redhat.component=openstack-glance-api-container, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, summary=Red Hat OpenStack Platform 17.1 glance-api, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, batch=17.1_20250721.1, container_name=glance_api_cron, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64)
Oct 13 14:44:37 standalone.localdomain systemd[1]: libpod-conmon-21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.scope: Deactivated successfully.
Oct 13 14:44:37 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.timer: Failed to open /run/systemd/transient/21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.timer: No such file or directory
Oct 13 14:44:37 standalone.localdomain systemd[1]: 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: Failed to open /run/systemd/transient/21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b.service: No such file or directory
Oct 13 14:44:37 standalone.localdomain podman[311751]: 2025-10-13 14:44:37.528333858 +0000 UTC m=+0.045352108 container cleanup 21601552355c15a590700b540d94db98677be7a00b909b98c31da8f99ffbbb2b (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_cron, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-glance-api-container, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '20647e333af6a74e07ef3325107e31dd'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron glance'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api_cron.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api:/var/lib/kolla/config_files/src:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:44:37 standalone.localdomain podman[311751]: glance_api_cron
Oct 13 14:44:37 standalone.localdomain systemd[1]: tripleo_glance_api_cron.service: Deactivated successfully.
Oct 13 14:44:37 standalone.localdomain systemd[1]: Stopped glance_api_cron container.
Oct 13 14:44:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:44:38 standalone.localdomain python3.9[311853]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_glance_api_internal.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:44:38 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:44:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2103: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:38 standalone.localdomain systemd-sysv-generator[311883]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:44:38 standalone.localdomain systemd-rc-local-generator[311880]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:44:38 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57031 DF PROTO=TCP SPT=60312 DPT=9100 SEQ=3224231792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C98AF360000000001030307) 
Oct 13 14:44:38 standalone.localdomain systemd[1]: Stopping glance_api_internal container...
Oct 13 14:44:38 standalone.localdomain account-server[114555]: 172.20.0.100 - - [13/Oct/2025:14:44:38 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx8be4cf3cc4f74ad6af1b2-0068ed1056" "proxy-server 2" 0.0008 "-" 20 -
Oct 13 14:44:38 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx8be4cf3cc4f74ad6af1b2-0068ed1056)
Oct 13 14:44:38 standalone.localdomain ceph-mon[29756]: pgmap v2103: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:39 standalone.localdomain systemd[1]: libpod-ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.scope: Deactivated successfully.
Oct 13 14:44:39 standalone.localdomain systemd[1]: libpod-ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.scope: Consumed 43.731s CPU time.
Oct 13 14:44:39 standalone.localdomain podman[311893]: 2025-10-13 14:44:39.029429596 +0000 UTC m=+0.376179731 container died ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 glance-api, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 glance-api, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-glance-api-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-07-21T13:58:20, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, container_name=glance_api_internal, name=rhosp17/openstack-glance-api, tcib_managed=true)
Oct 13 14:44:39 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.timer: Deactivated successfully.
Oct 13 14:44:39 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.
Oct 13 14:44:39 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Failed to open /run/systemd/transient/ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: No such file or directory
Oct 13 14:44:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8-userdata-shm.mount: Deactivated successfully.
Oct 13 14:44:39 standalone.localdomain podman[311893]: 2025-10-13 14:44:39.105505869 +0000 UTC m=+0.452255944 container cleanup ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, architecture=x86_64, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, summary=Red Hat OpenStack Platform 17.1 glance-api, build-date=2025-07-21T13:58:20, tcib_managed=true, container_name=glance_api_internal, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 glance-api, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api)
Oct 13 14:44:39 standalone.localdomain podman[311893]: glance_api_internal
Oct 13 14:44:39 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.timer: Failed to open /run/systemd/transient/ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.timer: No such file or directory
Oct 13 14:44:39 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Failed to open /run/systemd/transient/ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: No such file or directory
Oct 13 14:44:39 standalone.localdomain podman[311907]: 2025-10-13 14:44:39.113923088 +0000 UTC m=+0.073463093 container cleanup ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, summary=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-glance-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T13:58:20, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 glance-api, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, distribution-scope=public, tcib_managed=true, container_name=glance_api_internal)
Oct 13 14:44:39 standalone.localdomain systemd[1]: libpod-conmon-ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.scope: Deactivated successfully.
Oct 13 14:44:39 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.timer: Failed to open /run/systemd/transient/ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.timer: No such file or directory
Oct 13 14:44:39 standalone.localdomain systemd[1]: ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: Failed to open /run/systemd/transient/ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8.service: No such file or directory
Oct 13 14:44:39 standalone.localdomain podman[311920]: 2025-10-13 14:44:39.182785327 +0000 UTC m=+0.041009933 container cleanup ac2673720260fa86a2378399338b38a30859a408dcf0d6ddf495ddbfb303e4f8 (image=registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1, name=glance_api_internal, version=17.1.9, name=rhosp17/openstack-glance-api, description=Red Hat OpenStack Platform 17.1 glance-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 glance-api, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:58:20, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=2f84cc38566abd7704c6fde71a06783a20987075, container_name=glance_api_internal, com.redhat.component=openstack-glance-api-container, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 glance-api, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-glance-api/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2f2f858c5aeedbc0383732bfeeefbe74-9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-glance-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/glance:/var/log/glance:z', '/var/log/containers/httpd/glance:/var/log/httpd:z', '/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', '/var/lib/config-data/puppet-generated/glance_api_internal:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/glance:/var/lib/glance:shared']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 glance-api, release=1)
Oct 13 14:44:39 standalone.localdomain podman[311920]: glance_api_internal
Oct 13 14:44:39 standalone.localdomain systemd[1]: tripleo_glance_api_internal.service: Deactivated successfully.
Oct 13 14:44:39 standalone.localdomain systemd[1]: Stopped glance_api_internal container.
Oct 13 14:44:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-172e54c564f32af96811ba336bd9fe2affd9d6300f093bed683876643f1afee7-merged.mount: Deactivated successfully.
Oct 13 14:44:39 standalone.localdomain python3.9[312022]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_heat_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:44:39 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:44:39 standalone.localdomain systemd-rc-local-generator[312050]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:44:39 standalone.localdomain systemd-sysv-generator[312053]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:44:40 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:40 standalone.localdomain systemd[1]: Stopping heat_api container...
Oct 13 14:44:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2104: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:40 standalone.localdomain ceph-mon[29756]: pgmap v2104: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57032 DF PROTO=TCP SPT=60312 DPT=9100 SEQ=3224231792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C98B7360000000001030307) 
Oct 13 14:44:41 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48847 DF PROTO=TCP SPT=58838 DPT=9882 SEQ=3753496823 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C98BB870000000001030307) 
Oct 13 14:44:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2105: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:42 standalone.localdomain ceph-mon[29756]: pgmap v2105: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:42 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48848 DF PROTO=TCP SPT=58838 DPT=9882 SEQ=3753496823 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C98BF760000000001030307) 
Oct 13 14:44:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:44:42 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54664 DF PROTO=TCP SPT=50832 DPT=9105 SEQ=1021467030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C98C0F70000000001030307) 
Oct 13 14:44:43 standalone.localdomain systemd[1]: libpod-9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.scope: Deactivated successfully.
Oct 13 14:44:43 standalone.localdomain systemd[1]: libpod-9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.scope: Consumed 7.303s CPU time.
Oct 13 14:44:43 standalone.localdomain podman[312063]: 2025-10-13 14:44:43.520898193 +0000 UTC m=+3.189471937 container died 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, release=1, distribution-scope=public, com.redhat.component=openstack-heat-api-container, container_name=heat_api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, name=rhosp17/openstack-heat-api, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, 
maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26)
Oct 13 14:44:43 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.timer: Deactivated successfully.
Oct 13 14:44:43 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.
Oct 13 14:44:43 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Failed to open /run/systemd/transient/9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: No such file or directory
Oct 13 14:44:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27-userdata-shm.mount: Deactivated successfully.
Oct 13 14:44:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bf9420cd69b63e4cf6dac0289efb2e21fb7a99dd6512be09e26d2ba629cde543-merged.mount: Deactivated successfully.
Oct 13 14:44:43 standalone.localdomain podman[312063]: 2025-10-13 14:44:43.571010146 +0000 UTC m=+3.239583790 container cleanup 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, vcs-type=git, build-date=2025-07-21T15:56:26, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-heat-api, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api, summary=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:44:43 standalone.localdomain podman[312063]: heat_api
Oct 13 14:44:43 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.timer: Failed to open /run/systemd/transient/9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.timer: No such file or directory
Oct 13 14:44:43 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Failed to open /run/systemd/transient/9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: No such file or directory
Oct 13 14:44:43 standalone.localdomain podman[312078]: 2025-10-13 14:44:43.588552856 +0000 UTC m=+0.060828294 container cleanup 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, container_name=heat_api, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-heat-api, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:56:26, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-heat-api-container, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, release=1, version=17.1.9)
Oct 13 14:44:43 standalone.localdomain systemd[1]: libpod-conmon-9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.scope: Deactivated successfully.
Oct 13 14:44:43 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.timer: Failed to open /run/systemd/transient/9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.timer: No such file or directory
Oct 13 14:44:43 standalone.localdomain systemd[1]: 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: Failed to open /run/systemd/transient/9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27.service: No such file or directory
Oct 13 14:44:43 standalone.localdomain podman[312092]: 2025-10-13 14:44:43.674572014 +0000 UTC m=+0.056708037 container cleanup 9a77b715ac7964a589827e2a1ec94c7ec67b3d823af86c0fef72dbec4887aa27 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api, summary=Red Hat OpenStack Platform 17.1 heat-api, config_id=tripleo_step4, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-api, description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, release=1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:56:26, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, com.redhat.component=openstack-heat-api-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat 
OpenStack Platform 17.1 heat-api, container_name=heat_api, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, batch=17.1_20250721.1, io.openshift.expose-services=)
Oct 13 14:44:43 standalone.localdomain podman[312092]: heat_api
Oct 13 14:44:43 standalone.localdomain systemd[1]: tripleo_heat_api.service: Deactivated successfully.
Oct 13 14:44:43 standalone.localdomain systemd[1]: Stopped heat_api container.
Oct 13 14:44:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54665 DF PROTO=TCP SPT=50832 DPT=9105 SEQ=1021467030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C98C4F60000000001030307) 
Oct 13 14:44:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2106: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:44 standalone.localdomain python3.9[312194]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_heat_api_cfn.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:44:44 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:44:44 standalone.localdomain ceph-mon[29756]: pgmap v2106: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:44 standalone.localdomain systemd-rc-local-generator[312221]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:44:44 standalone.localdomain systemd-sysv-generator[312224]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:44:44 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:44 standalone.localdomain systemd[1]: Stopping heat_api_cfn container...
Oct 13 14:44:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2107: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:46 standalone.localdomain ceph-mon[29756]: pgmap v2107: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:44:47 standalone.localdomain systemd[1]: libpod-0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.scope: Deactivated successfully.
Oct 13 14:44:47 standalone.localdomain systemd[1]: libpod-0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.scope: Consumed 7.165s CPU time.
Oct 13 14:44:47 standalone.localdomain podman[312235]: 2025-10-13 14:44:47.962029209 +0000 UTC m=+3.104482330 container died 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-heat-api-cfn-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, version=17.1.9, tcib_managed=true, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, maintainer=OpenStack TripleO 
Team, io.buildah.version=1.33.12, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, config_id=tripleo_step4, name=rhosp17/openstack-heat-api-cfn, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T14:49:55, vendor=Red Hat, Inc., batch=17.1_20250721.1)
Oct 13 14:44:47 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.timer: Deactivated successfully.
Oct 13 14:44:47 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.
Oct 13 14:44:47 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Failed to open /run/systemd/transient/0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: No such file or directory
Oct 13 14:44:47 standalone.localdomain systemd[1]: tmp-crun.Btt1H8.mount: Deactivated successfully.
Oct 13 14:44:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f-userdata-shm.mount: Deactivated successfully.
Oct 13 14:44:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f8c62a6a2c8d47bc643de7e472c16fea3d5facb0eb72e0880a216e4f3b3830d9-merged.mount: Deactivated successfully.
Oct 13 14:44:48 standalone.localdomain podman[312235]: 2025-10-13 14:44:48.020330394 +0000 UTC m=+3.162783495 container cleanup 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, tcib_managed=true, name=rhosp17/openstack-heat-api-cfn, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T14:49:55, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cfn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, 
maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, com.redhat.component=openstack-heat-api-cfn-container)
Oct 13 14:44:48 standalone.localdomain podman[312235]: heat_api_cfn
Oct 13 14:44:48 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.timer: Failed to open /run/systemd/transient/0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.timer: No such file or directory
Oct 13 14:44:48 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Failed to open /run/systemd/transient/0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: No such file or directory
Oct 13 14:44:48 standalone.localdomain podman[312250]: 2025-10-13 14:44:48.029824456 +0000 UTC m=+0.062572647 container cleanup 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, 
io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.component=openstack-heat-api-cfn-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=heat_api_cfn, build-date=2025-07-21T14:49:55, name=rhosp17/openstack-heat-api-cfn, version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step4)
Oct 13 14:44:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:44:48 standalone.localdomain systemd[1]: libpod-conmon-0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.scope: Deactivated successfully.
Oct 13 14:44:48 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.timer: Failed to open /run/systemd/transient/0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.timer: No such file or directory
Oct 13 14:44:48 standalone.localdomain systemd[1]: 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: Failed to open /run/systemd/transient/0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f.service: No such file or directory
Oct 13 14:44:48 standalone.localdomain podman[312265]: 2025-10-13 14:44:48.091728781 +0000 UTC m=+0.040278320 container cleanup 0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1, name=heat_api_cfn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api-cfn/images/17.1.9-1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api-cfn, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 heat-api-cfn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api-cfn, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:49:55, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-heat-api-cfn, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-cfn-container, container_name=heat_api_cfn, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8a8bc26dd2600d9ce5af7a68a05ecc83'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', 
'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api_cfn:/var/lib/kolla/config_files/src:ro']}, distribution-scope=public, io.openshift.expose-services=, vcs-ref=4e7292fd77cd8185d9d69d0a3a4974e7b22feb99)
Oct 13 14:44:48 standalone.localdomain podman[312265]: heat_api_cfn
Oct 13 14:44:48 standalone.localdomain systemd[1]: tripleo_heat_api_cfn.service: Deactivated successfully.
Oct 13 14:44:48 standalone.localdomain systemd[1]: Stopped heat_api_cfn container.
Oct 13 14:44:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:44:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:44:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:44:48 standalone.localdomain podman[312264]: 2025-10-13 14:44:48.182533857 +0000 UTC m=+0.133295495 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, com.redhat.component=openstack-memcached-container, name=rhosp17/openstack-memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vendor=Red Hat, Inc., vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=memcached, description=Red Hat OpenStack Platform 17.1 memcached, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, config_id=tripleo_step1, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:44:48 standalone.localdomain podman[312264]: 2025-10-13 14:44:48.232004959 +0000 UTC m=+0.182766607 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, distribution-scope=public, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, config_id=tripleo_step1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, com.redhat.component=openstack-memcached-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible)
Oct 13 14:44:48 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:44:48 standalone.localdomain podman[312306]: 2025-10-13 14:44:48.235994633 +0000 UTC m=+0.057209273 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-07-21T13:07:52, vendor=Red Hat, Inc., vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:44:48 standalone.localdomain podman[312305]: 2025-10-13 14:44:48.338973563 +0000 UTC m=+0.162242495 container health_status 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, com.redhat.component=openstack-heat-api-container, vendor=Red Hat, Inc., release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-heat-api, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, build-date=2025-07-21T15:56:26, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cron, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:44:48 standalone.localdomain podman[312305]: 2025-10-13 14:44:48.347747673 +0000 UTC m=+0.171016615 container exec_died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, summary=Red Hat OpenStack Platform 17.1 heat-api, vendor=Red Hat, Inc., release=1, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:56:26, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, container_name=heat_api_cron, name=rhosp17/openstack-heat-api, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, com.redhat.component=openstack-heat-api-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 heat-api)
Oct 13 14:44:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2108: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:48 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Deactivated successfully.
Oct 13 14:44:48 standalone.localdomain podman[312306]: 2025-10-13 14:44:48.369975547 +0000 UTC m=+0.191190177 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, build-date=2025-07-21T13:07:52, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c)
Oct 13 14:44:48 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:44:48 standalone.localdomain podman[312307]: 2025-10-13 14:44:48.289540721 +0000 UTC m=+0.107784810 container health_status 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, health_status=unhealthy, vcs-type=git, config_id=tripleo_step4, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, version=17.1.9, container_name=heat_engine, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, batch=17.1_20250721.1, com.redhat.component=openstack-heat-engine-container, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, release=1, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible)
Oct 13 14:44:48 standalone.localdomain podman[312307]: 2025-10-13 14:44:48.421830763 +0000 UTC m=+0.240074842 container exec_died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, description=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, build-date=2025-07-21T15:44:11, config_id=tripleo_step4, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, name=rhosp17/openstack-heat-engine, com.redhat.component=openstack-heat-engine-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, release=1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=heat_engine, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, io.openshift.expose-services=)
Oct 13 14:44:48 standalone.localdomain podman[312307]: unhealthy
Oct 13 14:44:48 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:44:48 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Failed with result 'exit-code'.
Oct 13 14:44:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48850 DF PROTO=TCP SPT=58838 DPT=9882 SEQ=3753496823 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C98D7360000000001030307) 
Oct 13 14:44:48 standalone.localdomain python3.9[312445]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_heat_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:44:48 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:44:48 standalone.localdomain systemd-rc-local-generator[312471]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:44:48 standalone.localdomain systemd-sysv-generator[312476]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:44:48 standalone.localdomain ceph-mon[29756]: pgmap v2108: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:48 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:49 standalone.localdomain systemd[1]: Stopping heat_api_cron container...
Oct 13 14:44:49 standalone.localdomain crond[113056]: (CRON) INFO (Shutting down)
Oct 13 14:44:49 standalone.localdomain systemd[1]: libpod-8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.scope: Deactivated successfully.
Oct 13 14:44:49 standalone.localdomain podman[312487]: 2025-10-13 14:44:49.292823496 +0000 UTC m=+0.055762428 container died 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, managed_by=tripleo_ansible, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 heat-api, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, container_name=heat_api_cron, tcib_managed=true, name=rhosp17/openstack-heat-api, io.buildah.version=1.33.12)
Oct 13 14:44:49 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.timer: Deactivated successfully.
Oct 13 14:44:49 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.
Oct 13 14:44:49 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Failed to open /run/systemd/transient/8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: No such file or directory
Oct 13 14:44:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c-userdata-shm.mount: Deactivated successfully.
Oct 13 14:44:49 standalone.localdomain podman[312487]: 2025-10-13 14:44:49.346708254 +0000 UTC m=+0.109647166 container cleanup 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, release=1, container_name=heat_api_cron, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, build-date=2025-07-21T15:56:26, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, io.openshift.expose-services=, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-api-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']})
Oct 13 14:44:49 standalone.localdomain podman[312487]: heat_api_cron
Oct 13 14:44:49 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.timer: Failed to open /run/systemd/transient/8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.timer: No such file or directory
Oct 13 14:44:49 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Failed to open /run/systemd/transient/8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: No such file or directory
Oct 13 14:44:49 standalone.localdomain podman[312501]: 2025-10-13 14:44:49.355463794 +0000 UTC m=+0.053078935 container cleanup 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, release=1, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, build-date=2025-07-21T15:56:26, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 heat-api, tcib_managed=true, container_name=heat_api_cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-api, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-api, version=17.1.9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, com.redhat.component=openstack-heat-api-container)
Oct 13 14:44:49 standalone.localdomain systemd[1]: libpod-conmon-8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.scope: Deactivated successfully.
Oct 13 14:44:49 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.timer: Failed to open /run/systemd/transient/8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.timer: No such file or directory
Oct 13 14:44:49 standalone.localdomain systemd[1]: 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: Failed to open /run/systemd/transient/8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c.service: No such file or directory
Oct 13 14:44:49 standalone.localdomain podman[312515]: 2025-10-13 14:44:49.422429845 +0000 UTC m=+0.035505013 container cleanup 8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c (image=registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1, name=heat_api_cron, batch=17.1_20250721.1, name=rhosp17/openstack-heat-api, summary=Red Hat OpenStack Platform 17.1 heat-api, release=1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, build-date=2025-07-21T15:56:26, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '171dfb6084155b43ebab325c6674ca84'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron heat'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/log/containers/httpd/heat-api:/var/log/httpd:z', '/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat_api:/var/lib/kolla/config_files/src:ro']}, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-api/images/17.1.9-1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-api, vcs-ref=50ad31a3907524465c6c121b4205bf5b43d43530, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=heat_api_cron, description=Red Hat OpenStack Platform 17.1 heat-api, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-api, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-heat-api-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 14:44:49 standalone.localdomain podman[312515]: heat_api_cron
Oct 13 14:44:49 standalone.localdomain systemd[1]: tripleo_heat_api_cron.service: Deactivated successfully.
Oct 13 14:44:49 standalone.localdomain systemd[1]: Stopped heat_api_cron container.
Oct 13 14:44:49 standalone.localdomain python3.9[312617]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_heat_engine.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:44:50 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:44:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54667 DF PROTO=TCP SPT=50832 DPT=9105 SEQ=1021467030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C98DCB60000000001030307) 
Oct 13 14:44:50 standalone.localdomain systemd-rc-local-generator[312641]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:44:50 standalone.localdomain systemd-sysv-generator[312645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:44:50 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:44:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2109: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f103f3389f1c0c290f1c4785293a9b4615c276e3f45d72abe9bd5ad8e441a786-merged.mount: Deactivated successfully.
Oct 13 14:44:50 standalone.localdomain ceph-mon[29756]: pgmap v2109: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:50 standalone.localdomain systemd[1]: Stopping heat_engine container...
Oct 13 14:44:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2110: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:52 standalone.localdomain ceph-mon[29756]: pgmap v2110: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:44:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:44:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:44:53 standalone.localdomain podman[312673]: 2025-10-13 14:44:53.285874756 +0000 UTC m=+0.055468757 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, version=17.1.9, config_id=tripleo_step4, summary=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, distribution-scope=public, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:44:53 standalone.localdomain systemd[1]: tmp-crun.YnLFOv.mount: Deactivated successfully.
Oct 13 14:44:53 standalone.localdomain podman[312674]: 2025-10-13 14:44:53.303031955 +0000 UTC m=+0.067768938 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller)
Oct 13 14:44:53 standalone.localdomain podman[312674]: 2025-10-13 14:44:53.344961136 +0000 UTC m=+0.109698119 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, build-date=2025-07-21T13:28:44, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, config_id=tripleo_step4)
Oct 13 14:44:53 standalone.localdomain podman[312674]: unhealthy
Oct 13 14:44:53 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:44:53 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:44:53 standalone.localdomain podman[312673]: 2025-10-13 14:44:53.358821212 +0000 UTC m=+0.128415173 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, tcib_managed=true, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:44:53 standalone.localdomain podman[312673]: unhealthy
Oct 13 14:44:53 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:44:53 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:44:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60964 DF PROTO=TCP SPT=42712 DPT=9102 SEQ=3675466944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C98EA4F0000000001030307) 
Oct 13 14:44:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:44:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:44:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:44:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:44:53 standalone.localdomain podman[312714]: 2025-10-13 14:44:53.793882796 +0000 UTC m=+0.064919129 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_object_server, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, batch=17.1_20250721.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct 13 14:44:53 standalone.localdomain podman[312715]: 2025-10-13 14:44:53.857874306 +0000 UTC m=+0.126285379 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, name=rhosp17/openstack-swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., release=1, config_id=tripleo_step4, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-proxy-server-container, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 14:44:53 standalone.localdomain podman[312719]: 2025-10-13 14:44:53.872656681 +0000 UTC m=+0.134196662 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, container_name=swift_container_server, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:54:32, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true)
Oct 13 14:44:53 standalone.localdomain podman[312727]: 2025-10-13 14:44:53.911144156 +0000 UTC m=+0.168695925 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.component=openstack-swift-account-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., tcib_managed=true)
Oct 13 14:44:53 standalone.localdomain podman[312714]: 2025-10-13 14:44:53.959766712 +0000 UTC m=+0.230803045 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, io.buildah.version=1.33.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, maintainer=OpenStack 
TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=)
Oct 13 14:44:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:44:53 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:44:54 standalone.localdomain podman[312809]: 2025-10-13 14:44:54.041446367 +0000 UTC m=+0.054529260 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1)
Oct 13 14:44:54 standalone.localdomain podman[312719]: 2025-10-13 14:44:54.056728598 +0000 UTC m=+0.318268529 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, container_name=swift_container_server, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32)
Oct 13 14:44:54 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:44:54 standalone.localdomain podman[312727]: 2025-10-13 14:44:54.082754938 +0000 UTC m=+0.340306707 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, container_name=swift_account_server, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, release=1, tcib_managed=true, com.redhat.component=openstack-swift-account-container, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:44:54 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:44:54 standalone.localdomain podman[312715]: 2025-10-13 14:44:54.09485326 +0000 UTC m=+0.363264353 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, container_name=swift_proxy, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, release=1)
Oct 13 14:44:54 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:44:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2111: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:54 standalone.localdomain podman[312809]: 2025-10-13 14:44:54.417923837 +0000 UTC m=+0.431006720 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, io.buildah.version=1.33.12, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container)
Oct 13 14:44:54 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:44:54 standalone.localdomain ceph-mon[29756]: pgmap v2111: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48796 DF PROTO=TCP SPT=33220 DPT=9101 SEQ=1473857208 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C98F1DD0000000001030307) 
Oct 13 14:44:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2112: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:56 standalone.localdomain ceph-mon[29756]: pgmap v2112: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:44:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:44:58 standalone.localdomain podman[312841]: 2025-10-13 14:44:58.308639797 +0000 UTC m=+0.082601543 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, io.openshift.expose-services=, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, description=Red Hat OpenStack Platform 17.1 keystone, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, 
managed_by=tripleo_ansible, container_name=keystone_cron, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 keystone, distribution-scope=public)
Oct 13 14:44:58 standalone.localdomain podman[312841]: 2025-10-13 14:44:58.315587782 +0000 UTC m=+0.089549508 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 keystone, release=1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, 
io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-keystone-container, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, config_id=tripleo_step3, name=rhosp17/openstack-keystone, summary=Red Hat OpenStack Platform 17.1 keystone, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:18, container_name=keystone_cron, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team)
Oct 13 14:44:58 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:44:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2113: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48798 DF PROTO=TCP SPT=33220 DPT=9101 SEQ=1473857208 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C98FDF60000000001030307) 
Oct 13 14:44:58 standalone.localdomain ceph-mon[29756]: pgmap v2113: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:44:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:44:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:44:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:44:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:44:59 standalone.localdomain podman[312860]: 2025-10-13 14:44:59.787323617 +0000 UTC m=+0.051696592 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, release=1, build-date=2025-07-21T16:28:54, com.redhat.component=openstack-neutron-dhcp-agent-container, version=17.1.9, config_id=tripleo_step4, container_name=neutron_dhcp, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12)
Oct 13 14:44:59 standalone.localdomain systemd[1]: tmp-crun.gqqse6.mount: Deactivated successfully.
Oct 13 14:44:59 standalone.localdomain podman[312861]: 2025-10-13 14:44:59.810642615 +0000 UTC m=+0.071946546 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, distribution-scope=public, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team)
Oct 13 14:44:59 standalone.localdomain podman[312861]: 2025-10-13 14:44:59.8182991 +0000 UTC m=+0.079603031 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., config_id=tripleo_step3, tcib_managed=true, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.9, release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, container_name=iscsid, batch=17.1_20250721.1, io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-iscsid)
Oct 13 14:44:59 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:44:59 standalone.localdomain podman[312860]: 2025-10-13 14:44:59.849628575 +0000 UTC m=+0.114001570 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, container_name=neutron_dhcp, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, tcib_managed=true, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, build-date=2025-07-21T16:28:54, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-dhcp-agent-container, batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:44:59 standalone.localdomain podman[312860]: unhealthy
Oct 13 14:44:59 standalone.localdomain podman[312862]: 2025-10-13 14:44:59.85596277 +0000 UTC m=+0.112999400 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.9)
Oct 13 14:44:59 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:44:59 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Failed with result 'exit-code'.
Oct 13 14:44:59 standalone.localdomain podman[312862]: 2025-10-13 14:44:59.869519258 +0000 UTC m=+0.126555918 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-07-21T14:48:37, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-type=git, container_name=nova_compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, config_id=tripleo_step5, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct 13 14:44:59 standalone.localdomain podman[312862]: unhealthy
Oct 13 14:44:59 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:44:59 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed with result 'exit-code'.
Oct 13 14:44:59 standalone.localdomain podman[312868]: 2025-10-13 14:44:59.820050625 +0000 UTC m=+0.074026101 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=unhealthy, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, tcib_managed=true, version=17.1.9, container_name=neutron_sriov_agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, name=rhosp17/openstack-neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team)
Oct 13 14:44:59 standalone.localdomain podman[312868]: 2025-10-13 14:44:59.950235572 +0000 UTC m=+0.204211048 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, com.redhat.component=openstack-neutron-sriov-agent-container, build-date=2025-07-21T16:03:34, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, tcib_managed=true, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, container_name=neutron_sriov_agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, name=rhosp17/openstack-neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1)
Oct 13 14:44:59 standalone.localdomain podman[312868]: unhealthy
Oct 13 14:44:59 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:44:59 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Failed with result 'exit-code'.
Oct 13 14:45:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2114: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:00 standalone.localdomain ceph-mon[29756]: pgmap v2114: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2115: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:02 standalone.localdomain ceph-mon[29756]: pgmap v2115: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48799 DF PROTO=TCP SPT=33220 DPT=9101 SEQ=1473857208 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C990DB60000000001030307) 
Oct 13 14:45:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:45:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2116: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:04 standalone.localdomain ceph-mon[29756]: pgmap v2116: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2117: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:06 standalone.localdomain ceph-mon[29756]: pgmap v2117: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13281 DF PROTO=TCP SPT=51736 DPT=9100 SEQ=2021195915 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9920650000000001030307) 
Oct 13 14:45:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:45:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2118: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13282 DF PROTO=TCP SPT=51736 DPT=9100 SEQ=2021195915 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9924760000000001030307) 
Oct 13 14:45:08 standalone.localdomain ceph-mon[29756]: pgmap v2118: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2119: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: pgmap v2119: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13283 DF PROTO=TCP SPT=51736 DPT=9100 SEQ=2021195915 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C992C760000000001030307) 
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.505877) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366710505949, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 707, "num_deletes": 251, "total_data_size": 494687, "memory_usage": 508536, "flush_reason": "Manual Compaction"}
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366710514788, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 485568, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49972, "largest_seqno": 50678, "table_properties": {"data_size": 482273, "index_size": 1216, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7814, "raw_average_key_size": 19, "raw_value_size": 475462, "raw_average_value_size": 1165, "num_data_blocks": 56, "num_entries": 408, "num_filter_entries": 408, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760366662, "oldest_key_time": 1760366662, "file_creation_time": 1760366710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 8966 microseconds, and 1950 cpu microseconds.
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.514842) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 485568 bytes OK
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.514874) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.516539) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.516561) EVENT_LOG_v1 {"time_micros": 1760366710516554, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.516581) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 490975, prev total WAL file size 490975, number of live WAL files 2.
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.517319) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(474KB)], [110(5627KB)]
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366710517389, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 6247879, "oldest_snapshot_seqno": -1}
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 5023 keys, 5170017 bytes, temperature: kUnknown
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366710540855, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 5170017, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5140011, "index_size": 16342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 128044, "raw_average_key_size": 25, "raw_value_size": 5052256, "raw_average_value_size": 1005, "num_data_blocks": 674, "num_entries": 5023, "num_filter_entries": 5023, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760366710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.541139) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 5170017 bytes
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.542279) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 266.8 rd, 220.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 5.5 +0.0 blob) out(4.9 +0.0 blob), read-write-amplify(23.5) write-amplify(10.6) OK, records in: 5540, records dropped: 517 output_compression: NoCompression
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.542302) EVENT_LOG_v1 {"time_micros": 1760366710542291, "job": 66, "event": "compaction_finished", "compaction_time_micros": 23415, "compaction_time_cpu_micros": 12135, "output_level": 6, "num_output_files": 1, "total_output_size": 5170017, "num_input_records": 5540, "num_output_records": 5023, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366710542478, "job": 66, "event": "table_file_deletion", "file_number": 112}
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366710543188, "job": 66, "event": "table_file_deletion", "file_number": 110}
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.517177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.543271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.543292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.543294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.543296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:45:10 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:45:10.543298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:45:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2120: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:12 standalone.localdomain ceph-mon[29756]: pgmap v2120: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:45:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26912 DF PROTO=TCP SPT=59694 DPT=9105 SEQ=171293171 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C993A360000000001030307) 
Oct 13 14:45:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2121: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:14 standalone.localdomain ceph-mon[29756]: pgmap v2121: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2122: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:16 standalone.localdomain ceph-mon[29756]: pgmap v2122: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:45:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:45:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2123: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:18 standalone.localdomain podman[312940]: 2025-10-13 14:45:18.406907749 +0000 UTC m=+0.065305712 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-memcached-container, name=rhosp17/openstack-memcached, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., distribution-scope=public, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, maintainer=OpenStack TripleO Team, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, container_name=memcached, io.openshift.expose-services=, config_id=tripleo_step1, io.buildah.version=1.33.12)
Oct 13 14:45:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:45:18 standalone.localdomain podman[312940]: 2025-10-13 14:45:18.456550337 +0000 UTC m=+0.114948300 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-memcached-container, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 memcached, io.buildah.version=1.33.12, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=memcached, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 memcached, name=rhosp17/openstack-memcached, build-date=2025-07-21T12:58:43, config_id=tripleo_step1)
Oct 13 14:45:18 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:45:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:45:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:45:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4120534798' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:45:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:45:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4120534798' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:45:18 standalone.localdomain podman[312976]: Error: container 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 is not running
Oct 13 14:45:18 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Main process exited, code=exited, status=125/n/a
Oct 13 14:45:18 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Failed with result 'exit-code'.
Oct 13 14:45:18 standalone.localdomain podman[312964]: 2025-10-13 14:45:18.557323409 +0000 UTC m=+0.124675028 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.buildah.version=1.33.12, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T13:07:52, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, vcs-type=git)
Oct 13 14:45:18 standalone.localdomain podman[312964]: 2025-10-13 14:45:18.601307734 +0000 UTC m=+0.168659363 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-07-21T13:07:52, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron)
Oct 13 14:45:18 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:45:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46359 DF PROTO=TCP SPT=42746 DPT=9882 SEQ=3490908168 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C994C760000000001030307) 
Oct 13 14:45:18 standalone.localdomain account-server[114571]: 172.20.0.100 - - [13/Oct/2025:14:45:18 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txc39f389cc57d46c98f303-0068ed107e" "proxy-server 2" 0.0007 "-" 23 -
Oct 13 14:45:18 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txc39f389cc57d46c98f303-0068ed107e)
Oct 13 14:45:18 standalone.localdomain ceph-mon[29756]: pgmap v2123: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/4120534798' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:45:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/4120534798' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:45:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26914 DF PROTO=TCP SPT=59694 DPT=9105 SEQ=171293171 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9951F60000000001030307) 
Oct 13 14:45:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2124: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:20 standalone.localdomain ceph-mon[29756]: pgmap v2124: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2125: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:22 standalone.localdomain ceph-mon[29756]: pgmap v2125: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:45:23
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', '.mgr', 'volumes', 'images', 'backups', 'manila_data', 'manila_metadata']
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:45:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9941 DF PROTO=TCP SPT=54160 DPT=9102 SEQ=1864936603 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C995F7E0000000001030307) 
Oct 13 14:45:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:45:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:45:23 standalone.localdomain podman[312994]: 2025-10-13 14:45:23.815165827 +0000 UTC m=+0.077709503 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.33.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, container_name=ovn_metadata_agent, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc.)
Oct 13 14:45:23 standalone.localdomain podman[312994]: 2025-10-13 14:45:23.824451653 +0000 UTC m=+0.086995309 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Oct 13 14:45:23 standalone.localdomain podman[312994]: unhealthy
Oct 13 14:45:23 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:45:23 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:45:23 standalone.localdomain podman[312995]: 2025-10-13 14:45:23.866749495 +0000 UTC m=+0.127707673 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.9)
Oct 13 14:45:23 standalone.localdomain podman[312995]: 2025-10-13 14:45:23.876789724 +0000 UTC m=+0.137747902 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container)
Oct 13 14:45:23 standalone.localdomain podman[312995]: unhealthy
Oct 13 14:45:23 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:45:23 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:45:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2126: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:24 standalone.localdomain ceph-mon[29756]: pgmap v2126: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:45:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:45:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:45:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:45:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:45:24 standalone.localdomain systemd[1]: tmp-crun.TuWzDG.mount: Deactivated successfully.
Oct 13 14:45:24 standalone.localdomain podman[313035]: 2025-10-13 14:45:24.81611008 +0000 UTC m=+0.077243179 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, container_name=nova_migration_target, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T14:48:37, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:45:24 standalone.localdomain podman[313043]: 2025-10-13 14:45:24.865046636 +0000 UTC m=+0.122793491 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, version=17.1.9, container_name=swift_account_server, build-date=2025-07-21T16:11:22, name=rhosp17/openstack-swift-account, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, release=1)
Oct 13 14:45:24 standalone.localdomain podman[313034]: 2025-10-13 14:45:24.797030423 +0000 UTC m=+0.066304133 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, name=rhosp17/openstack-swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, 
config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, distribution-scope=public, version=17.1.9, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Oct 13 14:45:24 standalone.localdomain podman[313042]: 2025-10-13 14:45:24.909512845 +0000 UTC m=+0.168758625 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, 
container_name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-container-container, summary=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 14:45:24 standalone.localdomain podman[313036]: 2025-10-13 14:45:24.920793242 +0000 UTC m=+0.181929991 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-swift-proxy-server, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, release=1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, build-date=2025-07-21T14:48:37, container_name=swift_proxy, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container)
Oct 13 14:45:24 standalone.localdomain podman[313034]: 2025-10-13 14:45:24.978763377 +0000 UTC m=+0.248037067 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, batch=17.1_20250721.1, container_name=swift_object_server, com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:28, summary=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, vendor=Red Hat, Inc.)
Oct 13 14:45:24 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:45:25 standalone.localdomain podman[313043]: 2025-10-13 14:45:25.050820665 +0000 UTC m=+0.308567520 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, name=rhosp17/openstack-swift-account, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, release=1, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server)
Oct 13 14:45:25 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:45:25 standalone.localdomain podman[313036]: 2025-10-13 14:45:25.101787525 +0000 UTC m=+0.362924274 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vcs-type=git, version=17.1.9, container_name=swift_proxy, name=rhosp17/openstack-swift-proxy-server, release=1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:45:25 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:45:25 standalone.localdomain podman[313042]: 2025-10-13 14:45:25.161370749 +0000 UTC m=+0.420616519 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-swift-container, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, io.buildah.version=1.33.12)
Oct 13 14:45:25 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:45:25 standalone.localdomain podman[313035]: 2025-10-13 14:45:25.22963277 +0000 UTC m=+0.490765859 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, distribution-scope=public, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-nova-compute-container, 
io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, build-date=2025-07-21T14:48:37, io.buildah.version=1.33.12, container_name=nova_migration_target, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:45:25 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:45:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29917 DF PROTO=TCP SPT=42832 DPT=9101 SEQ=1649298431 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C99670E0000000001030307) 
Oct 13 14:45:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2127: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:26 standalone.localdomain ceph-mon[29756]: pgmap v2127: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2128: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29919 DF PROTO=TCP SPT=42832 DPT=9101 SEQ=1649298431 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9973360000000001030307) 
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:45:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:45:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:45:28 standalone.localdomain systemd[1]: tmp-crun.XsvsaA.mount: Deactivated successfully.
Oct 13 14:45:28 standalone.localdomain podman[313161]: 2025-10-13 14:45:28.802854649 +0000 UTC m=+0.071157782 container health_status 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, health_status=healthy, vcs-type=git, build-date=2025-07-21T13:27:18, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-keystone, architecture=x86_64, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, vendor=Red Hat, Inc., release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:45:28 standalone.localdomain podman[313161]: 2025-10-13 14:45:28.842297443 +0000 UTC m=+0.110600686 container exec_died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, distribution-scope=public, release=1, build-date=2025-07-21T13:27:18, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, tcib_managed=true, config_id=tripleo_step3, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vcs-type=git, name=rhosp17/openstack-keystone, container_name=keystone_cron, summary=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1)
Oct 13 14:45:28 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Deactivated successfully.
Oct 13 14:45:28 standalone.localdomain ceph-mon[29756]: pgmap v2128: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:29 standalone.localdomain sudo[313180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:45:29 standalone.localdomain sudo[313180]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:45:29 standalone.localdomain sudo[313180]: pam_unix(sudo:session): session closed for user root
Oct 13 14:45:29 standalone.localdomain sudo[313195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:45:29 standalone.localdomain sudo[313195]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:45:29 standalone.localdomain sudo[313195]: pam_unix(sudo:session): session closed for user root
Oct 13 14:45:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:45:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:45:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:45:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:45:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:45:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:45:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:45:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:45:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 92afc2f8-61a7-4cf2-9df4-d65e9cea9d64 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:45:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 92afc2f8-61a7-4cf2-9df4-d65e9cea9d64 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:45:29 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 92afc2f8-61a7-4cf2-9df4-d65e9cea9d64 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:45:29 standalone.localdomain sudo[313243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:45:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:45:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:45:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:45:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:45:29 standalone.localdomain sudo[313243]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:45:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:45:29 standalone.localdomain sudo[313243]: pam_unix(sudo:session): session closed for user root
Oct 13 14:45:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:45:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:45:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:45:30 standalone.localdomain systemd[1]: tmp-crun.FWD7CT.mount: Deactivated successfully.
Oct 13 14:45:30 standalone.localdomain podman[313259]: 2025-10-13 14:45:30.091369844 +0000 UTC m=+0.080083426 container health_status 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, build-date=2025-07-21T13:27:15, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1)
Oct 13 14:45:30 standalone.localdomain podman[313259]: 2025-10-13 14:45:30.125846305 +0000 UTC m=+0.114559907 container exec_died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:27:15, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, 
managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Oct 13 14:45:30 standalone.localdomain podman[313260]: 2025-10-13 14:45:30.132911713 +0000 UTC m=+0.119855730 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc.)
Oct 13 14:45:30 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Deactivated successfully.
Oct 13 14:45:30 standalone.localdomain podman[313260]: 2025-10-13 14:45:30.174802452 +0000 UTC m=+0.161746459 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, build-date=2025-07-21T14:48:37, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_id=tripleo_step5, io.buildah.version=1.33.12, container_name=nova_compute, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, batch=17.1_20250721.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:45:30 standalone.localdomain podman[313260]: unhealthy
Oct 13 14:45:30 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:45:30 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed with result 'exit-code'.
Oct 13 14:45:30 standalone.localdomain podman[313261]: 2025-10-13 14:45:30.195227841 +0000 UTC m=+0.178437264 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=unhealthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.33.12, container_name=neutron_sriov_agent, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-neutron-sriov-agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, distribution-scope=public, com.redhat.component=openstack-neutron-sriov-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, version=17.1.9, release=1)
Oct 13 14:45:30 standalone.localdomain podman[313261]: 2025-10-13 14:45:30.233903052 +0000 UTC m=+0.217112475 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:03:34, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-neutron-sriov-agent, distribution-scope=public, vcs-type=git, release=1, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., batch=17.1_20250721.1, container_name=neutron_sriov_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, com.redhat.component=openstack-neutron-sriov-agent-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:45:30 standalone.localdomain podman[313261]: unhealthy
Oct 13 14:45:30 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:45:30 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Failed with result 'exit-code'.
Oct 13 14:45:30 standalone.localdomain podman[313258]: 2025-10-13 14:45:30.237105341 +0000 UTC m=+0.226753432 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., container_name=neutron_dhcp, distribution-scope=public, version=17.1.9, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-dhcp-agent-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, name=rhosp17/openstack-neutron-dhcp-agent, release=1, build-date=2025-07-21T16:28:54, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:45:30 standalone.localdomain podman[313258]: 2025-10-13 14:45:30.319826557 +0000 UTC m=+0.309474618 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vcs-type=git, architecture=x86_64, container_name=neutron_dhcp, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20250721.1)
Oct 13 14:45:30 standalone.localdomain podman[313258]: unhealthy
Oct 13 14:45:30 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:45:30 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Failed with result 'exit-code'.
Oct 13 14:45:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2129: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:30 standalone.localdomain ceph-mon[29756]: pgmap v2129: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:31 standalone.localdomain systemd[1]: tmp-crun.1lor6u.mount: Deactivated successfully.
Oct 13 14:45:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2130: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:32 standalone.localdomain ceph-mon[29756]: pgmap v2130: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29920 DF PROTO=TCP SPT=42832 DPT=9101 SEQ=1649298431 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9982F60000000001030307) 
Oct 13 14:45:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:45:33 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 14:45:33 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 14:45:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2131: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:34 standalone.localdomain ceph-mon[29756]: pgmap v2131: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:34 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:45:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:45:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:45:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:45:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2132: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:36 standalone.localdomain ceph-mon[29756]: pgmap v2132: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35446 DF PROTO=TCP SPT=47508 DPT=9100 SEQ=1534479620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9995950000000001030307) 
Oct 13 14:45:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:45:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2133: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35447 DF PROTO=TCP SPT=47508 DPT=9100 SEQ=1534479620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9999B60000000001030307) 
Oct 13 14:45:38 standalone.localdomain ceph-mon[29756]: pgmap v2133: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2134: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:40 standalone.localdomain ceph-mon[29756]: pgmap v2134: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35448 DF PROTO=TCP SPT=47508 DPT=9100 SEQ=1534479620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C99A1B60000000001030307) 
Oct 13 14:45:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2135: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:42 standalone.localdomain ceph-mon[29756]: pgmap v2135: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:45:44 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26422 DF PROTO=TCP SPT=42768 DPT=9105 SEQ=671613053 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C99AF770000000001030307) 
Oct 13 14:45:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2136: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:44 standalone.localdomain ceph-mon[29756]: pgmap v2136: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2137: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:46 standalone.localdomain ceph-mon[29756]: pgmap v2137: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:47 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 14:45:47 standalone.localdomain object-server[313338]: Object update sweep starting on /srv/node/d1 (pid: 20)
Oct 13 14:45:47 standalone.localdomain object-server[313338]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 20)
Oct 13 14:45:47 standalone.localdomain object-server[313338]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 14:45:47 standalone.localdomain object-server[114601]: Object update sweep completed: 0.08s
Oct 13 14:45:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:45:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2138: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47959 DF PROTO=TCP SPT=56142 DPT=9882 SEQ=915678175 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C99C1B60000000001030307) 
Oct 13 14:45:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:45:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:45:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:45:48 standalone.localdomain podman[313339]: Error: container 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 is not running
Oct 13 14:45:48 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Main process exited, code=exited, status=125/n/a
Oct 13 14:45:48 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Failed with result 'exit-code'.
Oct 13 14:45:48 standalone.localdomain podman[313340]: 2025-10-13 14:45:48.87374264 +0000 UTC m=+0.134350008 container health_status 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, health_status=healthy, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-memcached, com.redhat.component=openstack-memcached-container, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 memcached, vendor=Red Hat, Inc., 
summary=Red Hat OpenStack Platform 17.1 memcached, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, release=1, vcs-type=git, container_name=memcached, version=17.1.9, build-date=2025-07-21T12:58:43)
Oct 13 14:45:48 standalone.localdomain podman[313340]: 2025-10-13 14:45:48.901235991 +0000 UTC m=+0.161843339 container exec_died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, container_name=memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, release=1, name=rhosp17/openstack-memcached, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step1, distribution-scope=public, vcs-type=git, 
io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, com.redhat.component=openstack-memcached-container, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached)
Oct 13 14:45:48 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Deactivated successfully.
Oct 13 14:45:48 standalone.localdomain podman[313341]: 2025-10-13 14:45:48.908656601 +0000 UTC m=+0.166549385 container health_status f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, release=1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:07:52, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Oct 13 14:45:48 standalone.localdomain ceph-mon[29756]: pgmap v2138: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:48 standalone.localdomain podman[313341]: 2025-10-13 14:45:48.993898768 +0000 UTC m=+0.251791552 container exec_died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-07-21T13:07:52, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, version=17.1.9, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=logrotate_crond, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, vcs-type=git, config_id=tripleo_step4)
Oct 13 14:45:49 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Deactivated successfully.
Oct 13 14:45:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26424 DF PROTO=TCP SPT=42768 DPT=9105 SEQ=671613053 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C99C7360000000001030307) 
Oct 13 14:45:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2139: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:50 standalone.localdomain ceph-mon[29756]: pgmap v2139: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:50 standalone.localdomain podman[312658]: time="2025-10-13T14:45:50Z" level=warning msg="StopSignal SIGTERM failed to stop container heat_engine in 60 seconds, resorting to SIGKILL"
Oct 13 14:45:50 standalone.localdomain systemd[1]: libpod-1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.scope: Deactivated successfully.
Oct 13 14:45:50 standalone.localdomain systemd[1]: libpod-1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.scope: Consumed 4.666s CPU time.
Oct 13 14:45:50 standalone.localdomain podman[312658]: 2025-10-13 14:45:50.569137021 +0000 UTC m=+60.073502089 container died 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, release=1, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, container_name=heat_engine, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, vcs-type=git, build-date=2025-07-21T15:44:11, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 heat-engine, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 heat-engine, name=rhosp17/openstack-heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64)
Oct 13 14:45:50 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.timer: Deactivated successfully.
Oct 13 14:45:50 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.
Oct 13 14:45:50 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Failed to open /run/systemd/transient/1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: No such file or directory
Oct 13 14:45:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351-userdata-shm.mount: Deactivated successfully.
Oct 13 14:45:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-072572511f39c4d4ecb97b214b4148a1acd564ad346e67866a611cfc25e867ff-merged.mount: Deactivated successfully.
Oct 13 14:45:50 standalone.localdomain podman[312658]: 2025-10-13 14:45:50.671774647 +0000 UTC m=+60.176139685 container cleanup 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T15:44:11, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, description=Red Hat OpenStack Platform 17.1 heat-engine, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 heat-engine, com.redhat.component=openstack-heat-engine-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, managed_by=tripleo_ansible, vcs-type=git, container_name=heat_engine, architecture=x86_64, io.buildah.version=1.33.12)
Oct 13 14:45:50 standalone.localdomain podman[312658]: heat_engine
Oct 13 14:45:50 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.timer: Failed to open /run/systemd/transient/1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.timer: No such file or directory
Oct 13 14:45:50 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Failed to open /run/systemd/transient/1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: No such file or directory
Oct 13 14:45:50 standalone.localdomain podman[313393]: 2025-10-13 14:45:50.683717126 +0000 UTC m=+0.103372529 container cleanup 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, container_name=heat_engine, io.buildah.version=1.33.12, com.redhat.component=openstack-heat-engine-container, name=rhosp17/openstack-heat-engine, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 heat-engine, build-date=2025-07-21T15:44:11, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 heat-engine, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, managed_by=tripleo_ansible)
Oct 13 14:45:50 standalone.localdomain systemd[1]: libpod-conmon-1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.scope: Deactivated successfully.
Oct 13 14:45:50 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.timer: Failed to open /run/systemd/transient/1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.timer: No such file or directory
Oct 13 14:45:50 standalone.localdomain systemd[1]: 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: Failed to open /run/systemd/transient/1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351.service: No such file or directory
Oct 13 14:45:50 standalone.localdomain podman[313405]: 2025-10-13 14:45:50.785917289 +0000 UTC m=+0.062765293 container cleanup 1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351 (image=registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1, name=heat_engine, description=Red Hat OpenStack Platform 17.1 heat-engine, version=17.1.9, name=rhosp17/openstack-heat-engine, com.redhat.component=openstack-heat-engine-container, container_name=heat_engine, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 heat-engine, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-07-21T15:44:11, io.k8s.display-name=Red Hat OpenStack Platform 17.1 heat-engine, summary=Red Hat OpenStack Platform 17.1 heat-engine, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-heat-engine/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02af0e891420e5c6be1d463bd6e7979c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'stop_grace_period': 60, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/heat:/var/log/heat:z', '/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/heat:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, release=1, maintainer=OpenStack TripleO Team, vcs-ref=1db7dea96b5696c7450d431083ec6c52145a5da3, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 14:45:50 standalone.localdomain podman[313405]: heat_engine
Oct 13 14:45:50 standalone.localdomain systemd[1]: tripleo_heat_engine.service: Deactivated successfully.
Oct 13 14:45:50 standalone.localdomain systemd[1]: Stopped heat_engine container.
Oct 13 14:45:50 standalone.localdomain systemd[1]: tripleo_heat_engine.service: Consumed 1.184s CPU time, no IO.
Oct 13 14:45:51 standalone.localdomain python3.9[313507]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_horizon.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:45:51 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:45:51 standalone.localdomain systemd-rc-local-generator[313535]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:45:51 standalone.localdomain systemd-sysv-generator[313540]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:45:51 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:45:51 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:45:51 standalone.localdomain recover_tripleo_nova_virtqemud[313547]: 93291
Oct 13 14:45:51 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:45:51 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:45:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2140: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:52 standalone.localdomain ceph-mon[29756]: pgmap v2140: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:52 standalone.localdomain python3.9[313637]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:45:52 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:45:52 standalone.localdomain systemd-rc-local-generator[313665]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:45:52 standalone.localdomain systemd-sysv-generator[313669]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:45:52 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:45:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:45:53 standalone.localdomain systemd[1]: Stopping iscsid container...
Oct 13 14:45:53 standalone.localdomain systemd[1]: libpod-0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.scope: Deactivated successfully.
Oct 13 14:45:53 standalone.localdomain podman[313678]: 2025-10-13 14:45:53.141379365 +0000 UTC m=+0.084507316 container stop 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.buildah.version=1.33.12, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=iscsid, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T13:27:15, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:45:53 standalone.localdomain podman[313678]: 2025-10-13 14:45:53.174701796 +0000 UTC m=+0.117829737 container died 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:45:53 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.timer: Deactivated successfully.
Oct 13 14:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:45:53 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.
Oct 13 14:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:45:53 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Failed to open /run/systemd/transient/0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: No such file or directory
Oct 13 14:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:45:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217-userdata-shm.mount: Deactivated successfully.
Oct 13 14:45:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ec6b0f95b63c63d227c33e5b75d478d141a6629833a5ec16c3909f4639560a9e-merged.mount: Deactivated successfully.
Oct 13 14:45:53 standalone.localdomain podman[313678]: 2025-10-13 14:45:53.300658364 +0000 UTC m=+0.243786275 container cleanup 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.9, build-date=2025-07-21T13:27:15, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.33.12, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-iscsid-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:45:53 standalone.localdomain podman[313678]: iscsid
Oct 13 14:45:53 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.timer: Failed to open /run/systemd/transient/0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.timer: No such file or directory
Oct 13 14:45:53 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Failed to open /run/systemd/transient/0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: No such file or directory
Oct 13 14:45:53 standalone.localdomain podman[313691]: 2025-10-13 14:45:53.313558032 +0000 UTC m=+0.159309620 container cleanup 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-07-21T13:27:15, architecture=x86_64)
Oct 13 14:45:53 standalone.localdomain systemd[1]: libpod-conmon-0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.scope: Deactivated successfully.
Oct 13 14:45:53 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.timer: Failed to open /run/systemd/transient/0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.timer: No such file or directory
Oct 13 14:45:53 standalone.localdomain systemd[1]: 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: Failed to open /run/systemd/transient/0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217.service: No such file or directory
Oct 13 14:45:53 standalone.localdomain podman[313705]: 2025-10-13 14:45:53.392259287 +0000 UTC m=+0.045623932 container cleanup 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, architecture=x86_64, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, batch=17.1_20250721.1, container_name=iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, build-date=2025-07-21T13:27:15, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9)
Oct 13 14:45:53 standalone.localdomain podman[313705]: iscsid
Oct 13 14:45:53 standalone.localdomain systemd[1]: tripleo_iscsid.service: Deactivated successfully.
Oct 13 14:45:53 standalone.localdomain systemd[1]: Stopped iscsid container.
Oct 13 14:45:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24781 DF PROTO=TCP SPT=49312 DPT=9102 SEQ=1250749879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C99D4AE0000000001030307) 
Oct 13 14:45:54 standalone.localdomain python3.9[313807]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_keystone.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:45:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:45:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:45:54 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:45:54 standalone.localdomain podman[313810]: 2025-10-13 14:45:54.168803006 +0000 UTC m=+0.076913680 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, tcib_managed=true, container_name=ovn_controller, release=1, name=rhosp17/openstack-ovn-controller, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:45:54 standalone.localdomain podman[313810]: 2025-10-13 14:45:54.207871225 +0000 UTC m=+0.115981899 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, container_name=ovn_controller, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1)
Oct 13 14:45:54 standalone.localdomain podman[313810]: unhealthy
Oct 13 14:45:54 standalone.localdomain systemd-rc-local-generator[313869]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:45:54 standalone.localdomain systemd-sysv-generator[313872]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:45:54 standalone.localdomain podman[313809]: 2025-10-13 14:45:54.213750468 +0000 UTC m=+0.125460683 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.33.12, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3)
Oct 13 14:45:54 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:45:54 standalone.localdomain podman[313809]: 2025-10-13 14:45:54.293966919 +0000 UTC m=+0.205677124 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Oct 13 14:45:54 standalone.localdomain podman[313809]: unhealthy
Oct 13 14:45:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2141: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:54 standalone.localdomain ceph-mon[29756]: pgmap v2141: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:54 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:45:54 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:45:54 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:45:54 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:45:55 standalone.localdomain python3.9[313975]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_keystone_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:45:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:45:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:45:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:45:55 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:45:55 standalone.localdomain podman[313977]: 2025-10-13 14:45:55.191613856 +0000 UTC m=+0.058334616 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, version=17.1.9, release=1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp 
openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 14:45:55 standalone.localdomain podman[314001]: 2025-10-13 14:45:55.252741687 +0000 UTC m=+0.081369369 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, container_name=swift_proxy, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, release=1, architecture=x86_64, 
io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 14:45:55 standalone.localdomain podman[313978]: 2025-10-13 14:45:55.305673595 +0000 UTC m=+0.170205357 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-swift-account, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:45:55 standalone.localdomain systemd-sysv-generator[314074]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:45:55 standalone.localdomain systemd-rc-local-generator[314068]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:45:55 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:45:55 standalone.localdomain podman[313977]: 2025-10-13 14:45:55.412939835 +0000 UTC m=+0.279660595 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, version=17.1.9, tcib_managed=true, release=1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, container_name=swift_object_server, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:45:55 standalone.localdomain podman[314001]: 2025-10-13 14:45:55.444906904 +0000 UTC m=+0.273534596 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, tcib_managed=true, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 
swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-proxy-server, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=swift_proxy)
Oct 13 14:45:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60609 DF PROTO=TCP SPT=41458 DPT=9101 SEQ=2068824541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C99DC3E0000000001030307) 
Oct 13 14:45:55 standalone.localdomain podman[313978]: 2025-10-13 14:45:55.491664021 +0000 UTC m=+0.356195763 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, managed_by=tripleo_ansible, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-swift-account, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat 
OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9)
Oct 13 14:45:55 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:45:55 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:45:55 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:45:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:45:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:45:55 standalone.localdomain systemd[1]: Stopping keystone_cron container...
Oct 13 14:45:55 standalone.localdomain podman[314101]: 2025-10-13 14:45:55.725572708 +0000 UTC m=+0.065660742 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, build-date=2025-07-21T15:54:32, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
vcs-ref=6c85b2809937c86a19237cd4252af09396645905, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-container-container, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.33.12, batch=17.1_20250721.1)
Oct 13 14:45:55 standalone.localdomain crond[93302]: (CRON) INFO (Shutting down)
Oct 13 14:45:55 standalone.localdomain systemd[1]: libpod-2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.scope: Deactivated successfully.
Oct 13 14:45:55 standalone.localdomain systemd[1]: libpod-2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.scope: Consumed 1.076s CPU time.
Oct 13 14:45:55 standalone.localdomain podman[314102]: 2025-10-13 14:45:55.774472072 +0000 UTC m=+0.113911166 container died 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, maintainer=OpenStack TripleO Team, build-date=2025-07-21T13:27:18, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, vendor=Red Hat, Inc., name=rhosp17/openstack-keystone, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', 
'/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, config_id=tripleo_step3, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 keystone, release=1)
Oct 13 14:45:55 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.timer: Deactivated successfully.
Oct 13 14:45:55 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.
Oct 13 14:45:55 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Failed to open /run/systemd/transient/2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: No such file or directory
Oct 13 14:45:55 standalone.localdomain podman[314100]: 2025-10-13 14:45:55.830586698 +0000 UTC m=+0.170625251 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, release=1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, 
build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., container_name=nova_migration_target)
Oct 13 14:45:55 standalone.localdomain podman[314102]: 2025-10-13 14:45:55.88430365 +0000 UTC m=+0.223742734 container cleanup 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, description=Red Hat OpenStack Platform 17.1 keystone, name=rhosp17/openstack-keystone, config_id=tripleo_step3, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 keystone, container_name=keystone_cron, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, 
com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, com.redhat.component=openstack-keystone-container, io.openshift.expose-services=, release=1, build-date=2025-07-21T13:27:18, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:45:55 standalone.localdomain podman[314102]: keystone_cron
Oct 13 14:45:55 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.timer: Failed to open /run/systemd/transient/2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.timer: No such file or directory
Oct 13 14:45:55 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Failed to open /run/systemd/transient/2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: No such file or directory
Oct 13 14:45:55 standalone.localdomain podman[314142]: 2025-10-13 14:45:55.893727401 +0000 UTC m=+0.106773794 container cleanup 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, build-date=2025-07-21T13:27:18, com.redhat.component=openstack-keystone-container, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-keystone, io.buildah.version=1.33.12, container_name=keystone_cron, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, architecture=x86_64, io.openshift.expose-services=)
Oct 13 14:45:55 standalone.localdomain systemd[1]: libpod-conmon-2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.scope: Deactivated successfully.
Oct 13 14:45:55 standalone.localdomain podman[314101]: 2025-10-13 14:45:55.933931906 +0000 UTC m=+0.274019940 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=swift_container_server, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, build-date=2025-07-21T15:54:32, release=1)
Oct 13 14:45:55 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:45:55 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.timer: Failed to open /run/systemd/transient/2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.timer: No such file or directory
Oct 13 14:45:55 standalone.localdomain systemd[1]: 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: Failed to open /run/systemd/transient/2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5.service: No such file or directory
Oct 13 14:45:55 standalone.localdomain podman[314177]: 2025-10-13 14:45:55.972981804 +0000 UTC m=+0.045101647 container cleanup 2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5 (image=registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1, name=keystone_cron, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-keystone/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 keystone, io.openshift.expose-services=, container_name=keystone_cron, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=0693142a4093f932157b8019660e85aa608befc8, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T13:27:18, config_id=tripleo_step3, name=rhosp17/openstack-keystone, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 keystone, summary=Red Hat OpenStack Platform 17.1 keystone, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-keystone-container, description=Red Hat OpenStack Platform 17.1 keystone, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '32ceb64403625ed4f04d4f0bcdc85988'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron keystone'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 4, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/keystone:/var/log/keystone:z', '/var/log/containers/httpd/keystone:/var/log/httpd:z', '/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/keystone:/var/lib/kolla/config_files/src:ro']}, io.buildah.version=1.33.12, vcs-type=git)
Oct 13 14:45:55 standalone.localdomain podman[314177]: keystone_cron
Oct 13 14:45:55 standalone.localdomain systemd[1]: tripleo_keystone_cron.service: Deactivated successfully.
Oct 13 14:45:55 standalone.localdomain systemd[1]: Stopped keystone_cron container.
Oct 13 14:45:56 standalone.localdomain podman[314100]: 2025-10-13 14:45:56.146813543 +0000 UTC m=+0.486852066 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, batch=17.1_20250721.1, 
com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 13 14:45:56 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:45:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2142: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:56 standalone.localdomain ceph-mon[29756]: pgmap v2142: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-44e224b1120f02376fec61e07ffc17d3c98868215655f2d726d537b67be2bd79-merged.mount: Deactivated successfully.
Oct 13 14:45:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2eba2a62d33627682399b51285fa4289533d4c59540204226d553818106e27f5-userdata-shm.mount: Deactivated successfully.
Oct 13 14:45:56 standalone.localdomain python3.9[314281]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:45:56 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:45:56 standalone.localdomain systemd-rc-local-generator[314310]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:45:56 standalone.localdomain systemd-sysv-generator[314314]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:45:56 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:45:57 standalone.localdomain systemd[1]: Stopping logrotate_crond container...
Oct 13 14:45:57 standalone.localdomain crond[112897]: (CRON) INFO (Shutting down)
Oct 13 14:45:57 standalone.localdomain systemd[1]: libpod-f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.scope: Deactivated successfully.
Oct 13 14:45:57 standalone.localdomain podman[314322]: 2025-10-13 14:45:57.229146404 +0000 UTC m=+0.062440283 container died f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vcs-type=git, build-date=2025-07-21T13:07:52, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.9, io.buildah.version=1.33.12, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1, io.openshift.expose-services=, name=rhosp17/openstack-cron, 
com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c)
Oct 13 14:45:57 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.timer: Deactivated successfully.
Oct 13 14:45:57 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.
Oct 13 14:45:57 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Failed to open /run/systemd/transient/f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: No such file or directory
Oct 13 14:45:57 standalone.localdomain podman[314322]: 2025-10-13 14:45:57.328642092 +0000 UTC m=+0.161935921 container cleanup f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, build-date=2025-07-21T13:07:52, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, batch=17.1_20250721.1, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 cron)
Oct 13 14:45:57 standalone.localdomain podman[314322]: logrotate_crond
Oct 13 14:45:57 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.timer: Failed to open /run/systemd/transient/f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.timer: No such file or directory
Oct 13 14:45:57 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Failed to open /run/systemd/transient/f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: No such file or directory
Oct 13 14:45:57 standalone.localdomain podman[314334]: 2025-10-13 14:45:57.345453252 +0000 UTC m=+0.105740712 container cleanup f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-cron-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
io.openshift.expose-services=, build-date=2025-07-21T13:07:52, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, release=1, name=rhosp17/openstack-cron, vendor=Red Hat, Inc.)
Oct 13 14:45:57 standalone.localdomain systemd[1]: libpod-conmon-f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.scope: Deactivated successfully.
Oct 13 14:45:57 standalone.localdomain podman[314364]: error opening file `/run/crun/f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b/status`: No such file or directory
Oct 13 14:45:57 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.timer: Failed to open /run/systemd/transient/f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.timer: No such file or directory
Oct 13 14:45:57 standalone.localdomain systemd[1]: f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: Failed to open /run/systemd/transient/f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b.service: No such file or directory
Oct 13 14:45:57 standalone.localdomain podman[314352]: 2025-10-13 14:45:57.425627984 +0000 UTC m=+0.053963731 container cleanup f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:07:52, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-cron/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=1cbdeb2f9fe67da66c8007dc1c7f4220cefddf6c, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, batch=17.1_20250721.1, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step4)
Oct 13 14:45:57 standalone.localdomain podman[314352]: logrotate_crond
Oct 13 14:45:57 standalone.localdomain systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully.
Oct 13 14:45:57 standalone.localdomain systemd[1]: Stopped logrotate_crond container.
Oct 13 14:45:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-22c9c267afdf127d08eca445cb214129244d6909cac33b961d5f917bf640f921-merged.mount: Deactivated successfully.
Oct 13 14:45:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f08a487a214ada7dd09b8461cc0a6a12df02dda6e499ef66192cc44c7f99ac8b-userdata-shm.mount: Deactivated successfully.
Oct 13 14:45:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:45:58 standalone.localdomain python3.9[314456]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_manila_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:45:58 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:45:58 standalone.localdomain systemd-sysv-generator[314487]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:45:58 standalone.localdomain systemd-rc-local-generator[314484]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:45:58 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:45:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2143: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60611 DF PROTO=TCP SPT=41458 DPT=9101 SEQ=2068824541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C99E8360000000001030307) 
Oct 13 14:45:59 standalone.localdomain ceph-mon[29756]: pgmap v2143: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:45:59 standalone.localdomain python3.9[314583]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_manila_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:45:59 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:45:59 standalone.localdomain systemd-rc-local-generator[314608]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:45:59 standalone.localdomain systemd-sysv-generator[314615]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:45:59 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:46:00 standalone.localdomain python3.9[314711]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_manila_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:46:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:46:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:46:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:46:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2144: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:00 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:46:00 standalone.localdomain podman[314713]: 2025-10-13 14:46:00.406722609 +0000 UTC m=+0.065928781 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step5, container_name=nova_compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true)
Oct 13 14:46:00 standalone.localdomain podman[314713]: 2025-10-13 14:46:00.422021573 +0000 UTC m=+0.081227735 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git)
Oct 13 14:46:00 standalone.localdomain podman[314713]: unhealthy
Oct 13 14:46:00 standalone.localdomain podman[314714]: 2025-10-13 14:46:00.474871238 +0000 UTC m=+0.132488471 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, container_name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, 
vendor=Red Hat, Inc., build-date=2025-07-21T16:03:34, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, release=1)
Oct 13 14:46:00 standalone.localdomain ceph-mon[29756]: pgmap v2144: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:00 standalone.localdomain systemd-rc-local-generator[314797]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:46:00 standalone.localdomain podman[314714]: 2025-10-13 14:46:00.517194938 +0000 UTC m=+0.174812161 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, name=rhosp17/openstack-neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, batch=17.1_20250721.1, release=1, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T16:03:34, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:46:00 standalone.localdomain podman[314714]: unhealthy
Oct 13 14:46:00 standalone.localdomain systemd-sysv-generator[314802]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:46:00 standalone.localdomain podman[314715]: 2025-10-13 14:46:00.526440923 +0000 UTC m=+0.180172695 container health_status 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, health_status=unhealthy, release=1, version=17.1.9, build-date=2025-07-21T16:28:54, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_dhcp, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12)
Oct 13 14:46:00 standalone.localdomain podman[314715]: 2025-10-13 14:46:00.544962097 +0000 UTC m=+0.198693879 container exec_died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', 
'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-dhcp-agent-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, container_name=neutron_dhcp, tcib_managed=true, build-date=2025-07-21T16:28:54, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1)
Oct 13 14:46:00 standalone.localdomain podman[314715]: unhealthy
Oct 13 14:46:00 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:46:00 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:46:00 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed with result 'exit-code'.
Oct 13 14:46:00 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:46:00 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Failed with result 'exit-code'.
Oct 13 14:46:00 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:46:00 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Failed with result 'exit-code'.
Oct 13 14:46:01 standalone.localdomain anacron[91273]: Job `cron.monthly' started
Oct 13 14:46:01 standalone.localdomain anacron[91273]: Job `cron.monthly' terminated
Oct 13 14:46:01 standalone.localdomain anacron[91273]: Normal exit (3 jobs run)
Oct 13 14:46:01 standalone.localdomain python3.9[314902]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_memcached.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:46:01 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:46:01 standalone.localdomain systemd-rc-local-generator[314930]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:46:01 standalone.localdomain systemd-sysv-generator[314935]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:46:01 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:46:01 standalone.localdomain systemd[1]: Stopping memcached container...
Oct 13 14:46:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2145: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:02 standalone.localdomain ceph-mon[29756]: pgmap v2145: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60612 DF PROTO=TCP SPT=41458 DPT=9101 SEQ=2068824541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C99F7F70000000001030307) 
Oct 13 14:46:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:46:03 standalone.localdomain systemd[1]: libpod-3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.scope: Deactivated successfully.
Oct 13 14:46:03 standalone.localdomain systemd[1]: libpod-3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.scope: Consumed 10.124s CPU time.
Oct 13 14:46:03 standalone.localdomain podman[314944]: 2025-10-13 14:46:03.423782577 +0000 UTC m=+1.536317430 container died 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 memcached, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, architecture=x86_64, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, build-date=2025-07-21T12:58:43, 
io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, name=rhosp17/openstack-memcached, com.redhat.component=openstack-memcached-container)
Oct 13 14:46:03 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.timer: Deactivated successfully.
Oct 13 14:46:03 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.
Oct 13 14:46:03 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Failed to open /run/systemd/transient/3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: No such file or directory
Oct 13 14:46:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840-userdata-shm.mount: Deactivated successfully.
Oct 13 14:46:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-43593073e6f183b06ef562b34ef0cb60361ac1108d165f04c57da6bff77c59d1-merged.mount: Deactivated successfully.
Oct 13 14:46:03 standalone.localdomain podman[314944]: 2025-10-13 14:46:03.466284032 +0000 UTC m=+1.578818875 container cleanup 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, build-date=2025-07-21T12:58:43, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, summary=Red Hat OpenStack Platform 17.1 memcached, description=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-memcached-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, container_name=memcached, io.openshift.expose-services=, name=rhosp17/openstack-memcached, vcs-type=git, architecture=x86_64, config_id=tripleo_step1)
Oct 13 14:46:03 standalone.localdomain podman[314944]: memcached
Oct 13 14:46:03 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.timer: Failed to open /run/systemd/transient/3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.timer: No such file or directory
Oct 13 14:46:03 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Failed to open /run/systemd/transient/3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: No such file or directory
Oct 13 14:46:03 standalone.localdomain podman[314957]: 2025-10-13 14:46:03.497376154 +0000 UTC m=+0.061528975 container cleanup 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-memcached-container, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, container_name=memcached, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', '/var/log/containers/memcached:/var/log/memcached:rw']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T12:58:43, config_id=tripleo_step1, name=rhosp17/openstack-memcached, summary=Red Hat OpenStack Platform 
17.1 memcached, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 memcached, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:46:03 standalone.localdomain systemd[1]: libpod-conmon-3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.scope: Deactivated successfully.
Oct 13 14:46:03 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.timer: Failed to open /run/systemd/transient/3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.timer: No such file or directory
Oct 13 14:46:03 standalone.localdomain systemd[1]: 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: Failed to open /run/systemd/transient/3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840.service: No such file or directory
Oct 13 14:46:03 standalone.localdomain podman[314973]: 2025-10-13 14:46:03.574511721 +0000 UTC m=+0.048737138 container cleanup 3e3747b9a545d13ea0781dbac2005925029a3563506ec95b6dbe9ae590f9a840 (image=registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1, name=memcached, vcs-type=git, description=Red Hat OpenStack Platform 17.1 memcached, managed_by=tripleo_ansible, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 memcached, container_name=memcached, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=c5579e43aadfadb0a3c02e9b1c0ac35d1b75fcbe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 memcached, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-memcached/images/17.1.9-1, build-date=2025-07-21T12:58:43, name=rhosp17/openstack-memcached, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'fe36aaf6c21495a0eb63602becb8c29a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-memcached:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 0, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/memcached.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/memcached:/var/lib/kolla/config_files/src:rw,z', 
'/var/log/containers/memcached:/var/log/memcached:rw']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-memcached-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 memcached, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:46:03 standalone.localdomain podman[314973]: memcached
Oct 13 14:46:03 standalone.localdomain systemd[1]: tripleo_memcached.service: Deactivated successfully.
Oct 13 14:46:03 standalone.localdomain systemd[1]: Stopped memcached container.
Oct 13 14:46:04 standalone.localdomain python3.9[315076]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:46:04 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:46:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2146: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:04 standalone.localdomain systemd-rc-local-generator[315105]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:46:04 standalone.localdomain systemd-sysv-generator[315108]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:46:04 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:46:04 standalone.localdomain ceph-mon[29756]: pgmap v2146: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:05 standalone.localdomain python3.9[315204]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:46:05 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:46:05 standalone.localdomain systemd-sysv-generator[315232]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:46:05 standalone.localdomain systemd-rc-local-generator[315228]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:46:05 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:46:05 standalone.localdomain systemd[1]: Stopping neutron_dhcp container...
Oct 13 14:46:05 standalone.localdomain sudo[153426]: pam_unix(sudo:session): session closed for user root
Oct 13 14:46:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2147: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:06 standalone.localdomain ceph-mon[29756]: pgmap v2147: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 14:46:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 4200.0 total, 600.0 interval
                                                        Cumulative writes: 11K writes, 51K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s
                                                        Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.04 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1526 writes, 7166 keys, 1526 commit groups, 1.0 writes per commit group, ingest: 5.83 MB, 0.01 MB/s
                                                        Interval WAL: 1526 writes, 1526 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     71.5      0.43              0.11        33    0.013       0      0       0.0       0.0
                                                          L6      1/0    4.93 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   4.6    195.4    165.2      0.86              0.41        32    0.027    141K    17K       0.0       0.0
                                                         Sum      1/0    4.93 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   5.6    130.0    133.8      1.29              0.52        65    0.020    141K    17K       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.0    168.0    170.5      0.22              0.10        12    0.018     33K   3079       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    195.4    165.2      0.86              0.41        32    0.027    141K    17K       0.0       0.0
                                                        High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     71.9      0.43              0.11        32    0.013       0      0       0.0       0.0
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 4200.0 total, 600.0 interval
                                                        Flush(GB): cumulative 0.030, interval 0.005
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.17 GB write, 0.04 MB/s write, 0.16 GB read, 0.04 MB/s read, 1.3 seconds
                                                        Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.2 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x557b7fb5b350#2 capacity: 308.00 MB usage: 23.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.00028 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(2137,22.24 MB,7.21987%) FilterBlock(66,386.92 KB,0.12268%) IndexBlock(66,590.42 KB,0.187203%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
Oct 13 14:46:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31547 DF PROTO=TCP SPT=51962 DPT=9100 SEQ=2584942283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9A0AC40000000001030307) 
Oct 13 14:46:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:46:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2148: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31548 DF PROTO=TCP SPT=51962 DPT=9100 SEQ=2584942283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9A0EB60000000001030307) 
Oct 13 14:46:09 standalone.localdomain ceph-mon[29756]: pgmap v2148: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2149: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31549 DF PROTO=TCP SPT=51962 DPT=9100 SEQ=2584942283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9A16B60000000001030307) 
Oct 13 14:46:10 standalone.localdomain ceph-mon[29756]: pgmap v2149: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2150: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:12 standalone.localdomain ceph-mon[29756]: pgmap v2150: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:46:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2685 DF PROTO=TCP SPT=58660 DPT=9105 SEQ=3404011109 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9A24760000000001030307) 
Oct 13 14:46:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2151: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:14 standalone.localdomain ceph-mon[29756]: pgmap v2151: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2152: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:16 standalone.localdomain ceph-mon[29756]: pgmap v2152: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:46:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2153: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:46:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/683249636' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:46:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:46:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/683249636' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:46:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56325 DF PROTO=TCP SPT=50360 DPT=9882 SEQ=2562507331 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9A36F70000000001030307) 
Oct 13 14:46:19 standalone.localdomain ceph-mon[29756]: pgmap v2153: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/683249636' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:46:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/683249636' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:46:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2687 DF PROTO=TCP SPT=58660 DPT=9105 SEQ=3404011109 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9A3C360000000001030307) 
Oct 13 14:46:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2154: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:20 standalone.localdomain ceph-mon[29756]: pgmap v2154: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2155: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:22 standalone.localdomain ceph-mon[29756]: pgmap v2155: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:46:23
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'manila_metadata', 'backups', 'volumes', 'manila_data', '.mgr', 'images']
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:46:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60684 DF PROTO=TCP SPT=42402 DPT=9102 SEQ=3404909136 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9A49E00000000001030307) 
Oct 13 14:46:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2156: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:24 standalone.localdomain ceph-mon[29756]: pgmap v2156: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:46:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:46:24 standalone.localdomain podman[315259]: 2025-10-13 14:46:24.814258106 +0000 UTC m=+0.077351644 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, 
batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:46:24 standalone.localdomain podman[315260]: 2025-10-13 14:46:24.869088653 +0000 UTC m=+0.131725707 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Oct 13 14:46:24 standalone.localdomain podman[315260]: 2025-10-13 14:46:24.879477174 +0000 UTC m=+0.142114248 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, build-date=2025-07-21T13:28:44, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:46:24 standalone.localdomain podman[315260]: unhealthy
Oct 13 14:46:24 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:46:24 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:46:24 standalone.localdomain podman[315259]: 2025-10-13 14:46:24.901764494 +0000 UTC m=+0.164857992 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, batch=17.1_20250721.1, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, release=1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T16:28:53)
Oct 13 14:46:24 standalone.localdomain podman[315259]: unhealthy
Oct 13 14:46:24 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:46:24 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:46:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19993 DF PROTO=TCP SPT=39070 DPT=9101 SEQ=1889289050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9A516E0000000001030307) 
Oct 13 14:46:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:46:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:46:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:46:25 standalone.localdomain podman[315300]: 2025-10-13 14:46:25.786324425 +0000 UTC m=+0.055694974 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, name=rhosp17/openstack-swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, build-date=2025-07-21T14:48:37, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_proxy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, description=Red Hat OpenStack 
Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-proxy-server-container, architecture=x86_64, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_id=tripleo_step4, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, tcib_managed=true)
Oct 13 14:46:25 standalone.localdomain podman[315304]: 2025-10-13 14:46:25.847586601 +0000 UTC m=+0.111368717 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, build-date=2025-07-21T16:11:22, release=1, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:46:25 standalone.localdomain podman[315299]: 2025-10-13 14:46:25.906621678 +0000 UTC m=+0.177446873 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, tcib_managed=true, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-object, 
com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-swift-object-container, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, managed_by=tripleo_ansible, distribution-scope=public)
Oct 13 14:46:25 standalone.localdomain podman[315300]: 2025-10-13 14:46:25.989099849 +0000 UTC m=+0.258470368 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, container_name=swift_proxy, tcib_managed=true, build-date=2025-07-21T14:48:37, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-proxy-server, distribution-scope=public, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 14:46:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:46:26 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:46:26 standalone.localdomain podman[315304]: 2025-10-13 14:46:26.029459109 +0000 UTC m=+0.293241255 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-swift-account, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T16:11:22, release=1, 
vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 14:46:26 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:46:26 standalone.localdomain podman[315377]: 2025-10-13 14:46:26.076537485 +0000 UTC m=+0.061992749 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, container_name=swift_container_server, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, release=1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905)
Oct 13 14:46:26 standalone.localdomain podman[315299]: 2025-10-13 14:46:26.12840591 +0000 UTC m=+0.399231115 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, tcib_managed=true, build-date=2025-07-21T14:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=swift_object_server, name=rhosp17/openstack-swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:46:26 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:46:26 standalone.localdomain podman[315377]: 2025-10-13 14:46:26.251028975 +0000 UTC m=+0.236484249 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, build-date=2025-07-21T15:54:32, version=17.1.9, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, release=1, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-swift-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, maintainer=OpenStack TripleO Team)
Oct 13 14:46:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:46:26 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:46:26 standalone.localdomain podman[315409]: 2025-10-13 14:46:26.326895032 +0000 UTC m=+0.053058883 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_id=tripleo_step4, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc.)
Oct 13 14:46:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2157: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:26 standalone.localdomain ceph-mon[29756]: pgmap v2157: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:26 standalone.localdomain podman[315409]: 2025-10-13 14:46:26.661076273 +0000 UTC m=+0.387240114 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-07-21T14:48:37, vcs-type=git, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=nova_migration_target, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:46:26 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:46:26 standalone.localdomain systemd[1]: tmp-crun.3hASY9.mount: Deactivated successfully.
Oct 13 14:46:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2158: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19995 DF PROTO=TCP SPT=39070 DPT=9101 SEQ=1889289050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9A5D760000000001030307) 
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:46:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:46:29 standalone.localdomain ceph-mon[29756]: pgmap v2158: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:30 standalone.localdomain sudo[315433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:46:30 standalone.localdomain sudo[315433]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:46:30 standalone.localdomain sudo[315433]: pam_unix(sudo:session): session closed for user root
Oct 13 14:46:30 standalone.localdomain sudo[315448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 14:46:30 standalone.localdomain sudo[315448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:46:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2159: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:30 standalone.localdomain ceph-mon[29756]: pgmap v2159: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:46:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:46:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:46:30 standalone.localdomain podman[315511]: 2025-10-13 14:46:30.922737802 +0000 UTC m=+0.078368836 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.expose-services=, tcib_managed=true, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, build-date=2025-07-21T16:03:34, 
container_name=neutron_sriov_agent, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-sriov-agent-container, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.9, name=rhosp17/openstack-neutron-sriov-agent, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent)
Oct 13 14:46:30 standalone.localdomain podman[315511]: 2025-10-13 14:46:30.930588365 +0000 UTC m=+0.086219399 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-neutron-sriov-agent-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, architecture=x86_64)
Oct 13 14:46:30 standalone.localdomain podman[315511]: unhealthy
Oct 13 14:46:30 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:46:30 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Failed with result 'exit-code'.
Oct 13 14:46:30 standalone.localdomain podman[315510]: 2025-10-13 14:46:30.904670613 +0000 UTC m=+0.069748459 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_compute, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, build-date=2025-07-21T14:48:37, distribution-scope=public, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step5)
Oct 13 14:46:30 standalone.localdomain podman[315509]: Error: container 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 is not running
Oct 13 14:46:30 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Main process exited, code=exited, status=125/n/a
Oct 13 14:46:30 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Failed with result 'exit-code'.
Oct 13 14:46:31 standalone.localdomain podman[315510]: 2025-10-13 14:46:31.037140162 +0000 UTC m=+0.202218048 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, version=17.1.9)
Oct 13 14:46:31 standalone.localdomain podman[315510]: unhealthy
Oct 13 14:46:31 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:46:31 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed with result 'exit-code'.
Oct 13 14:46:31 standalone.localdomain podman[315582]: 2025-10-13 14:46:31.099888354 +0000 UTC m=+0.066101357 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, name=rhceph, vendor=Red Hat, Inc., ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git)
Oct 13 14:46:31 standalone.localdomain podman[315582]: 2025-10-13 14:46:31.203003414 +0000 UTC m=+0.169216417 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, release=553, CEPH_POINT_RELEASE=, GIT_BRANCH=main)
Oct 13 14:46:31 standalone.localdomain sudo[315448]: pam_unix(sudo:session): session closed for user root
Oct 13 14:46:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:46:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:46:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:46:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:46:31 standalone.localdomain sudo[315690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:46:31 standalone.localdomain sudo[315690]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:46:31 standalone.localdomain sudo[315690]: pam_unix(sudo:session): session closed for user root
Oct 13 14:46:31 standalone.localdomain sudo[315705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:46:31 standalone.localdomain sudo[315705]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:46:32 standalone.localdomain sudo[315705]: pam_unix(sudo:session): session closed for user root
Oct 13 14:46:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:46:32 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:46:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:46:32 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:46:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:46:32 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:46:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:46:32 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:46:32 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev b69295e3-1a5b-46ee-a153-8501658858de (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:46:32 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev b69295e3-1a5b-46ee-a153-8501658858de (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:46:32 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event b69295e3-1a5b-46ee-a153-8501658858de (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:46:32 standalone.localdomain sudo[315751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:46:32 standalone.localdomain sudo[315751]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:46:32 standalone.localdomain sudo[315751]: pam_unix(sudo:session): session closed for user root
Oct 13 14:46:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2160: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:32 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:46:32 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:46:32 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:46:32 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:46:32 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:46:32 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:46:32 standalone.localdomain ceph-mon[29756]: pgmap v2160: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19996 DF PROTO=TCP SPT=39070 DPT=9101 SEQ=1889289050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9A6D370000000001030307) 
Oct 13 14:46:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:46:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2161: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:34 standalone.localdomain ceph-mon[29756]: pgmap v2161: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:34 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:46:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:46:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:46:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:46:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2162: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:36 standalone.localdomain ceph-mon[29756]: pgmap v2162: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1581 DF PROTO=TCP SPT=36508 DPT=9100 SEQ=297515325 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9A7FF40000000001030307) 
Oct 13 14:46:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:46:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2163: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1582 DF PROTO=TCP SPT=36508 DPT=9100 SEQ=297515325 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9A83F60000000001030307) 
Oct 13 14:46:39 standalone.localdomain ceph-mon[29756]: pgmap v2163: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2164: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1583 DF PROTO=TCP SPT=36508 DPT=9100 SEQ=297515325 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9A8BF70000000001030307) 
Oct 13 14:46:40 standalone.localdomain ceph-mon[29756]: pgmap v2164: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2165: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:42 standalone.localdomain ceph-mon[29756]: pgmap v2165: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:46:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64788 DF PROTO=TCP SPT=38842 DPT=9105 SEQ=408484850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9A99B60000000001030307) 
Oct 13 14:46:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2166: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:44 standalone.localdomain ceph-mon[29756]: pgmap v2166: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2167: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:46 standalone.localdomain ceph-mon[29756]: pgmap v2167: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:47 standalone.localdomain podman[315245]: time="2025-10-13T14:46:47Z" level=warning msg="StopSignal SIGTERM failed to stop container neutron_dhcp in 42 seconds, resorting to SIGKILL"
Oct 13 14:46:47 standalone.localdomain podman[315245]: 2025-10-13 14:46:47.812430424 +0000 UTC m=+42.062983765 container died 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, release=1, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, version=17.1.9, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=neutron_dhcp, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, tcib_managed=true, build-date=2025-07-21T16:28:54, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 14:46:47 standalone.localdomain systemd[1]: libpod-054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.scope: Deactivated successfully.
Oct 13 14:46:47 standalone.localdomain systemd[1]: libpod-054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.scope: Consumed 40.642s CPU time.
Oct 13 14:46:47 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.timer: Deactivated successfully.
Oct 13 14:46:47 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.
Oct 13 14:46:47 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Failed to open /run/systemd/transient/054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: No such file or directory
Oct 13 14:46:47 standalone.localdomain podman[315245]: 2025-10-13 14:46:47.879902821 +0000 UTC m=+42.130456142 container cleanup 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, managed_by=tripleo_ansible, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=neutron_dhcp, maintainer=OpenStack TripleO Team, version=17.1.9, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.buildah.version=1.33.12, release=1, build-date=2025-07-21T16:28:54, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1)
Oct 13 14:46:47 standalone.localdomain podman[315245]: neutron_dhcp
Oct 13 14:46:47 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.timer: Failed to open /run/systemd/transient/054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.timer: No such file or directory
Oct 13 14:46:47 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Failed to open /run/systemd/transient/054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: No such file or directory
Oct 13 14:46:47 standalone.localdomain podman[315767]: 2025-10-13 14:46:47.89827873 +0000 UTC m=+0.077504829 container cleanup 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, container_name=neutron_dhcp, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T16:28:54, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', 
'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, distribution-scope=public, name=rhosp17/openstack-neutron-dhcp-agent, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc.)
Oct 13 14:46:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:46:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2168: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59842 DF PROTO=TCP SPT=50688 DPT=9882 SEQ=3927873933 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9AABF60000000001030307) 
Oct 13 14:46:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c400a0b1febfb43b3487c71ecbd5a108092b42e80cd739860482d3019963fe9e-merged.mount: Deactivated successfully.
Oct 13 14:46:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2-userdata-shm.mount: Deactivated successfully.
Oct 13 14:46:49 standalone.localdomain ceph-mon[29756]: pgmap v2168: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64790 DF PROTO=TCP SPT=38842 DPT=9105 SEQ=408484850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9AB1760000000001030307) 
Oct 13 14:46:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2169: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:50 standalone.localdomain ceph-mon[29756]: pgmap v2169: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2170: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:52 standalone.localdomain ceph-mon[29756]: pgmap v2170: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:46:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62606 DF PROTO=TCP SPT=42794 DPT=9102 SEQ=269633244 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9ABF0E0000000001030307) 
Oct 13 14:46:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2171: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:54 standalone.localdomain ceph-mon[29756]: pgmap v2171: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:46:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:46:55 standalone.localdomain podman[315784]: 2025-10-13 14:46:55.05135005 +0000 UTC m=+0.065561339 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, container_name=ovn_metadata_agent)
Oct 13 14:46:55 standalone.localdomain podman[315784]: 2025-10-13 14:46:55.070571775 +0000 UTC m=+0.084783044 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, build-date=2025-07-21T16:28:53, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, managed_by=tripleo_ansible)
Oct 13 14:46:55 standalone.localdomain podman[315784]: unhealthy
Oct 13 14:46:55 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:46:55 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:46:55 standalone.localdomain podman[315785]: 2025-10-13 14:46:55.164307236 +0000 UTC m=+0.172450878 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:46:55 standalone.localdomain podman[315785]: 2025-10-13 14:46:55.175005787 +0000 UTC m=+0.183149409 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-ovn-controller, release=1, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:46:55 standalone.localdomain podman[315785]: unhealthy
Oct 13 14:46:55 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:46:55 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:46:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20161 DF PROTO=TCP SPT=51318 DPT=9101 SEQ=3029406464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9AC69E0000000001030307) 
Oct 13 14:46:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2172: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:56 standalone.localdomain ceph-mon[29756]: pgmap v2172: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:46:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:46:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:46:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:46:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:46:56 standalone.localdomain podman[315826]: 2025-10-13 14:46:56.830703759 +0000 UTC m=+0.090351357 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_proxy, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-proxy-server, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-proxy-server-container, config_id=tripleo_step4, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37)
Oct 13 14:46:56 standalone.localdomain podman[315825]: 2025-10-13 14:46:56.853337139 +0000 UTC m=+0.115649720 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, release=1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, container_name=nova_migration_target, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, version=17.1.9, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=)
Oct 13 14:46:56 standalone.localdomain podman[315827]: 2025-10-13 14:46:56.806653734 +0000 UTC m=+0.067140638 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, name=rhosp17/openstack-swift-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_id=tripleo_step4, build-date=2025-07-21T15:54:32, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, batch=17.1_20250721.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, com.redhat.component=openstack-swift-container-container)
Oct 13 14:46:56 standalone.localdomain podman[315824]: 2025-10-13 14:46:56.919665551 +0000 UTC m=+0.187622857 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, container_name=swift_object_server, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, 
maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:46:56 standalone.localdomain podman[315833]: 2025-10-13 14:46:56.983786145 +0000 UTC m=+0.240148631 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, version=17.1.9, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=swift_account_server, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T16:11:22)
Oct 13 14:46:57 standalone.localdomain podman[315827]: 2025-10-13 14:46:57.011979328 +0000 UTC m=+0.272466262 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-swift-container-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1)
Oct 13 14:46:57 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:46:57 standalone.localdomain podman[315826]: 2025-10-13 14:46:57.032993368 +0000 UTC m=+0.292640966 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T14:48:37, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-proxy-server-container, container_name=swift_proxy, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, 
io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, name=rhosp17/openstack-swift-proxy-server, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server)
Oct 13 14:46:57 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:46:57 standalone.localdomain podman[315824]: 2025-10-13 14:46:57.11996899 +0000 UTC m=+0.387926316 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, 
com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, version=17.1.9, distribution-scope=public, release=1, architecture=x86_64)
Oct 13 14:46:57 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:46:57 standalone.localdomain podman[315833]: 2025-10-13 14:46:57.156436728 +0000 UTC m=+0.412799214 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-account-container, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_account_server, io.buildah.version=1.33.12, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, name=rhosp17/openstack-swift-account, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git)
Oct 13 14:46:57 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:46:57 standalone.localdomain podman[315825]: 2025-10-13 14:46:57.20690981 +0000 UTC m=+0.469222361 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.9, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack 
TripleO Team, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, build-date=2025-07-21T14:48:37, name=rhosp17/openstack-nova-compute, release=1, container_name=nova_migration_target, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:46:57 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:46:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:46:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2173: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20163 DF PROTO=TCP SPT=51318 DPT=9101 SEQ=3029406464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9AD2B60000000001030307) 
Oct 13 14:46:58 standalone.localdomain sshd[315955]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:46:59 standalone.localdomain ceph-mon[29756]: pgmap v2173: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:46:59 standalone.localdomain sshd[315955]: Received disconnect from 80.94.93.233 port 13132:11:  [preauth]
Oct 13 14:46:59 standalone.localdomain sshd[315955]: Disconnected from authenticating user root 80.94.93.233 port 13132 [preauth]
Oct 13 14:47:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2174: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:00 standalone.localdomain ceph-mon[29756]: pgmap v2174: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:47:01 standalone.localdomain podman[315957]: 2025-10-13 14:47:01.07143173 +0000 UTC m=+0.090754339 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vcs-type=git, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, build-date=2025-07-21T16:03:34, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']})
Oct 13 14:47:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:47:01 standalone.localdomain podman[315957]: 2025-10-13 14:47:01.085261678 +0000 UTC m=+0.104584267 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, container_name=neutron_sriov_agent, build-date=2025-07-21T16:03:34, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, description=Red Hat OpenStack Platform 
17.1 neutron-sriov-agent, distribution-scope=public, name=rhosp17/openstack-neutron-sriov-agent, architecture=x86_64, com.redhat.component=openstack-neutron-sriov-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1)
Oct 13 14:47:01 standalone.localdomain podman[315957]: unhealthy
Oct 13 14:47:01 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:47:01 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Failed with result 'exit-code'.
Oct 13 14:47:01 standalone.localdomain systemd[1]: tmp-crun.c6s3jT.mount: Deactivated successfully.
Oct 13 14:47:01 standalone.localdomain podman[315977]: 2025-10-13 14:47:01.19065989 +0000 UTC m=+0.090372458 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, batch=17.1_20250721.1, release=1, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, config_id=tripleo_step5, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:48:37, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team)
Oct 13 14:47:01 standalone.localdomain podman[315977]: 2025-10-13 14:47:01.210277497 +0000 UTC m=+0.109990125 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, container_name=nova_compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 13 14:47:01 standalone.localdomain podman[315977]: unhealthy
Oct 13 14:47:01 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:47:01 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed with result 'exit-code'.
Oct 13 14:47:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2175: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:02 standalone.localdomain ceph-mon[29756]: pgmap v2175: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20164 DF PROTO=TCP SPT=51318 DPT=9101 SEQ=3029406464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9AE2760000000001030307) 
Oct 13 14:47:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:47:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2176: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:04 standalone.localdomain ceph-mon[29756]: pgmap v2176: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2177: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:06 standalone.localdomain ceph-mon[29756]: pgmap v2177: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:06 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:47:06 standalone.localdomain recover_tripleo_nova_virtqemud[316002]: 93291
Oct 13 14:47:06 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:47:06 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:47:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17611 DF PROTO=TCP SPT=49030 DPT=9100 SEQ=3197947604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9AF5250000000001030307) 
Oct 13 14:47:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:47:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2178: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17612 DF PROTO=TCP SPT=49030 DPT=9100 SEQ=3197947604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9AF9360000000001030307) 
Oct 13 14:47:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 14:47:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 4200.1 total, 600.0 interval
                                                        Cumulative writes: 6951 writes, 30K keys, 6951 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                                        Cumulative WAL: 6951 writes, 1290 syncs, 5.39 writes per sync, written: 0.03 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                        Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 13 14:47:09 standalone.localdomain ceph-mon[29756]: pgmap v2178: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2179: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17613 DF PROTO=TCP SPT=49030 DPT=9100 SEQ=3197947604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9B01360000000001030307) 
Oct 13 14:47:10 standalone.localdomain ceph-mon[29756]: pgmap v2179: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2180: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:12 standalone.localdomain ceph-mon[29756]: pgmap v2180: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:47:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33005 DF PROTO=TCP SPT=41716 DPT=9105 SEQ=4020166358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9B0EF60000000001030307) 
Oct 13 14:47:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2181: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:14 standalone.localdomain ceph-mon[29756]: pgmap v2181: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:15 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Check health
Oct 13 14:47:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2182: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:16 standalone.localdomain account-server[114566]: 172.20.0.100 - - [13/Oct/2025:14:47:16 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx32f46c554ea94ddcb4f7f-0068ed10f4" "proxy-server 2" 0.0004 "-" 22 -
Oct 13 14:47:16 standalone.localdomain swift[114487]: Error setting value in memcached: standalone.internalapi.localdomain:11211:  (txn: tx32f46c554ea94ddcb4f7f-0068ed10f4)
Oct 13 14:47:16 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx32f46c554ea94ddcb4f7f-0068ed10f4)
Oct 13 14:47:16 standalone.localdomain ceph-mon[29756]: pgmap v2182: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:47:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2183: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:47:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2214855144' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:47:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:47:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2214855144' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:47:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52971 DF PROTO=TCP SPT=40932 DPT=9882 SEQ=2667621125 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9B21360000000001030307) 
Oct 13 14:47:19 standalone.localdomain ceph-mon[29756]: pgmap v2183: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2214855144' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:47:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2214855144' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:47:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33007 DF PROTO=TCP SPT=41716 DPT=9105 SEQ=4020166358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9B26B60000000001030307) 
Oct 13 14:47:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2184: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:20 standalone.localdomain ceph-mon[29756]: pgmap v2184: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2185: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:22 standalone.localdomain ceph-mon[29756]: pgmap v2185: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:47:23
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'volumes', 'vms', '.mgr', 'backups', 'manila_data', 'images']
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:47:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28030 DF PROTO=TCP SPT=57656 DPT=9102 SEQ=1659544596 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9B343F0000000001030307) 
Oct 13 14:47:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2186: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:24 standalone.localdomain ceph-mon[29756]: pgmap v2186: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:47:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:47:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56946 DF PROTO=TCP SPT=33532 DPT=9101 SEQ=3750728654 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9B3BCE0000000001030307) 
Oct 13 14:47:25 standalone.localdomain podman[316003]: 2025-10-13 14:47:25.596775341 +0000 UTC m=+0.116792445 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, release=1, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=)
Oct 13 14:47:25 standalone.localdomain systemd[1]: tmp-crun.ZPKh9G.mount: Deactivated successfully.
Oct 13 14:47:25 standalone.localdomain podman[316004]: 2025-10-13 14:47:25.621458425 +0000 UTC m=+0.136635199 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_id=tripleo_step4, container_name=ovn_controller, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team)
Oct 13 14:47:25 standalone.localdomain podman[316004]: 2025-10-13 14:47:25.632201407 +0000 UTC m=+0.147378191 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ovn_controller, version=17.1.9, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:47:25 standalone.localdomain podman[316004]: unhealthy
Oct 13 14:47:25 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:47:25 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:47:25 standalone.localdomain podman[316003]: 2025-10-13 14:47:25.646852211 +0000 UTC m=+0.166869325 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Oct 13 14:47:25 standalone.localdomain podman[316003]: unhealthy
Oct 13 14:47:25 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:47:25 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:47:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2187: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:26 standalone.localdomain ceph-mon[29756]: pgmap v2187: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:47:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:47:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:47:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:47:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:47:27 standalone.localdomain systemd[1]: tmp-crun.G16MXC.mount: Deactivated successfully.
Oct 13 14:47:27 standalone.localdomain podman[316043]: 2025-10-13 14:47:27.832703647 +0000 UTC m=+0.091012666 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, build-date=2025-07-21T14:48:37, container_name=nova_migration_target, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute)
Oct 13 14:47:27 standalone.localdomain podman[316042]: 2025-10-13 14:47:27.875805992 +0000 UTC m=+0.134585346 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T14:56:28, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, container_name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:47:27 standalone.localdomain podman[316060]: 2025-10-13 14:47:27.950688558 +0000 UTC m=+0.188297697 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, container_name=swift_account_server, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, tcib_managed=true, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, version=17.1.9, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 14:47:28 standalone.localdomain podman[316054]: 2025-10-13 14:47:28.000652874 +0000 UTC m=+0.238852051 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, version=17.1.9, container_name=swift_container_server, build-date=2025-07-21T15:54:32, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, tcib_managed=true)
Oct 13 14:47:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:47:28 standalone.localdomain podman[316044]: 2025-10-13 14:47:28.043711357 +0000 UTC m=+0.293604656 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, batch=17.1_20250721.1, name=rhosp17/openstack-swift-proxy-server, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=swift_proxy, io.buildah.version=1.33.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container)
Oct 13 14:47:28 standalone.localdomain podman[316042]: 2025-10-13 14:47:28.092894529 +0000 UTC m=+0.351673873 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:47:28 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:47:28 standalone.localdomain podman[316060]: 2025-10-13 14:47:28.172789872 +0000 UTC m=+0.410399031 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, version=17.1.9, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, build-date=2025-07-21T16:11:22, managed_by=tripleo_ansible, container_name=swift_account_server, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64)
Oct 13 14:47:28 standalone.localdomain podman[316043]: 2025-10-13 14:47:28.182877443 +0000 UTC m=+0.441186362 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20250721.1)
Oct 13 14:47:28 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:47:28 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:47:28 standalone.localdomain podman[316044]: 2025-10-13 14:47:28.220741205 +0000 UTC m=+0.470634504 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, name=rhosp17/openstack-swift-proxy-server, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, distribution-scope=public, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, container_name=swift_proxy, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4)
Oct 13 14:47:28 standalone.localdomain podman[316054]: 2025-10-13 14:47:28.227848555 +0000 UTC m=+0.466047692 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_container_server, io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, summary=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-container-container, architecture=x86_64, io.buildah.version=1.33.12, config_id=tripleo_step4, vcs-type=git, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 14:47:28 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:47:28 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2188: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56948 DF PROTO=TCP SPT=33532 DPT=9101 SEQ=3750728654 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9B47B60000000001030307) 
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:47:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:47:28 standalone.localdomain systemd[1]: tmp-crun.VVLC1f.mount: Deactivated successfully.
Oct 13 14:47:29 standalone.localdomain ceph-mon[29756]: pgmap v2188: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2189: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:30 standalone.localdomain ceph-mon[29756]: pgmap v2189: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:47:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:47:31 standalone.localdomain podman[316175]: 2025-10-13 14:47:31.551778949 +0000 UTC m=+0.071448542 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=nova_compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1, tcib_managed=true, config_id=tripleo_step5, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:47:31 standalone.localdomain podman[316175]: 2025-10-13 14:47:31.600821626 +0000 UTC m=+0.120491129 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, version=17.1.9, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_compute, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Oct 13 14:47:31 standalone.localdomain podman[316176]: 2025-10-13 14:47:31.600161856 +0000 UTC m=+0.115873126 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=unhealthy, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, container_name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, vcs-type=git, tcib_managed=true, io.buildah.version=1.33.12, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T16:03:34, com.redhat.component=openstack-neutron-sriov-agent-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:47:31 standalone.localdomain podman[316175]: unhealthy
Oct 13 14:47:31 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:47:31 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed with result 'exit-code'.
Oct 13 14:47:31 standalone.localdomain podman[316176]: 2025-10-13 14:47:31.685035872 +0000 UTC m=+0.200747142 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, release=1, com.redhat.component=openstack-neutron-sriov-agent-container, container_name=neutron_sriov_agent, name=rhosp17/openstack-neutron-sriov-agent, build-date=2025-07-21T16:03:34, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.9, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, architecture=x86_64, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']})
Oct 13 14:47:31 standalone.localdomain podman[316176]: unhealthy
Oct 13 14:47:31 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:47:31 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Failed with result 'exit-code'.
Oct 13 14:47:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2190: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:32 standalone.localdomain sudo[316219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:47:32 standalone.localdomain sudo[316219]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:47:32 standalone.localdomain sudo[316219]: pam_unix(sudo:session): session closed for user root
Oct 13 14:47:32 standalone.localdomain ceph-mon[29756]: pgmap v2190: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:32 standalone.localdomain sudo[316234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:47:32 standalone.localdomain sudo[316234]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:47:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56949 DF PROTO=TCP SPT=33532 DPT=9101 SEQ=3750728654 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9B57760000000001030307) 
Oct 13 14:47:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:47:33 standalone.localdomain sudo[316234]: pam_unix(sudo:session): session closed for user root
Oct 13 14:47:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:47:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:47:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:47:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:47:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:47:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:47:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:47:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:47:33 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev eee60962-e7da-475b-9810-d9f2f7d58774 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:47:33 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev eee60962-e7da-475b-9810-d9f2f7d58774 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:47:33 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event eee60962-e7da-475b-9810-d9f2f7d58774 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:47:33 standalone.localdomain sudo[316282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:47:33 standalone.localdomain sudo[316282]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:47:33 standalone.localdomain sudo[316282]: pam_unix(sudo:session): session closed for user root
Oct 13 14:47:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:47:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:47:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:47:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:47:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2191: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:34 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:47:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:47:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:47:35 standalone.localdomain ceph-mon[29756]: pgmap v2191: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:47:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2192: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:36 standalone.localdomain ceph-mon[29756]: pgmap v2192: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43384 DF PROTO=TCP SPT=53490 DPT=9100 SEQ=2997181464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9B6A550000000001030307) 
Oct 13 14:47:37 standalone.localdomain sshd[316297]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:47:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:47:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2193: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43385 DF PROTO=TCP SPT=53490 DPT=9100 SEQ=2997181464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9B6E770000000001030307) 
Oct 13 14:47:38 standalone.localdomain sshd[316297]: Connection closed by authenticating user root 78.128.112.74 port 46416 [preauth]
Oct 13 14:47:39 standalone.localdomain ceph-mon[29756]: pgmap v2193: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2194: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43386 DF PROTO=TCP SPT=53490 DPT=9100 SEQ=2997181464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9B76760000000001030307) 
Oct 13 14:47:40 standalone.localdomain ceph-mon[29756]: pgmap v2194: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2195: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:42 standalone.localdomain ceph-mon[29756]: pgmap v2195: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:47:44 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52831 DF PROTO=TCP SPT=44408 DPT=9105 SEQ=1424500128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9B84360000000001030307) 
Oct 13 14:47:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2196: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:44 standalone.localdomain ceph-mon[29756]: pgmap v2196: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2197: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:46 standalone.localdomain ceph-mon[29756]: pgmap v2197: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:47:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2198: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52785 DF PROTO=TCP SPT=51804 DPT=9882 SEQ=3526559794 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9B96760000000001030307) 
Oct 13 14:47:49 standalone.localdomain ceph-mon[29756]: pgmap v2198: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52833 DF PROTO=TCP SPT=44408 DPT=9105 SEQ=1424500128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9B9BF60000000001030307) 
Oct 13 14:47:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2199: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:50 standalone.localdomain ceph-mon[29756]: pgmap v2199: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2200: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:52 standalone.localdomain ceph-mon[29756]: pgmap v2200: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:47:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4164 DF PROTO=TCP SPT=45762 DPT=9102 SEQ=3678158615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9BA96E0000000001030307) 
Oct 13 14:47:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2201: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:54 standalone.localdomain ceph-mon[29756]: pgmap v2201: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3470 DF PROTO=TCP SPT=36332 DPT=9101 SEQ=3633155644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9BB0FE0000000001030307) 
Oct 13 14:47:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:47:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:47:55 standalone.localdomain systemd[1]: tmp-crun.IPpKFE.mount: Deactivated successfully.
Oct 13 14:47:55 standalone.localdomain podman[316300]: 2025-10-13 14:47:55.804414085 +0000 UTC m=+0.061423922 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245)
Oct 13 14:47:55 standalone.localdomain podman[316300]: 2025-10-13 14:47:55.840641756 +0000 UTC m=+0.097651593 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, release=1, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ovn_controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.component=openstack-ovn-controller-container)
Oct 13 14:47:55 standalone.localdomain podman[316300]: unhealthy
Oct 13 14:47:55 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:47:55 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:47:55 standalone.localdomain podman[316299]: 2025-10-13 14:47:55.852663638 +0000 UTC m=+0.113464672 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, release=1, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64)
Oct 13 14:47:55 standalone.localdomain podman[316299]: 2025-10-13 14:47:55.89086071 +0000 UTC m=+0.151661714 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, container_name=ovn_metadata_agent, build-date=2025-07-21T16:28:53, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:47:55 standalone.localdomain podman[316299]: unhealthy
Oct 13 14:47:55 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:47:55 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:47:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2202: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:56 standalone.localdomain ceph-mon[29756]: pgmap v2202: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:47:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2203: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:47:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:47:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:47:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:47:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:47:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:47:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3472 DF PROTO=TCP SPT=36332 DPT=9101 SEQ=3633155644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9BBCF70000000001030307) 
Oct 13 14:47:58 standalone.localdomain systemd[1]: tmp-crun.2BTQsv.mount: Deactivated successfully.
Oct 13 14:47:58 standalone.localdomain podman[316340]: 2025-10-13 14:47:58.57671437 +0000 UTC m=+0.083034031 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, container_name=swift_proxy, io.buildah.version=1.33.12, tcib_managed=true, name=rhosp17/openstack-swift-proxy-server, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:47:58 standalone.localdomain podman[316347]: 2025-10-13 14:47:58.587840524 +0000 UTC m=+0.084105233 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T16:11:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=swift_account_server, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vcs-type=git, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, 
vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9)
Oct 13 14:47:58 standalone.localdomain podman[316338]: 2025-10-13 14:47:58.622682692 +0000 UTC m=+0.136516335 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, version=17.1.9, build-date=2025-07-21T14:56:28, batch=17.1_20250721.1, container_name=swift_object_server, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git)
Oct 13 14:47:58 standalone.localdomain podman[316343]: 2025-10-13 14:47:58.673848885 +0000 UTC m=+0.179057282 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:47:58 standalone.localdomain podman[316339]: 2025-10-13 14:47:58.684080372 +0000 UTC m=+0.193613582 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, build-date=2025-07-21T14:48:37, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1)
Oct 13 14:47:58 standalone.localdomain podman[316340]: 2025-10-13 14:47:58.770770975 +0000 UTC m=+0.277090626 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.buildah.version=1.33.12, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, architecture=x86_64, vcs-type=git, build-date=2025-07-21T14:48:37, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, container_name=swift_proxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, release=1, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vendor=Red Hat, Inc.)
Oct 13 14:47:58 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:47:58 standalone.localdomain podman[316347]: 2025-10-13 14:47:58.788151402 +0000 UTC m=+0.284416141 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_id=tripleo_step4, distribution-scope=public, release=1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, container_name=swift_account_server, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=)
Oct 13 14:47:58 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:47:58 standalone.localdomain podman[316338]: 2025-10-13 14:47:58.81778931 +0000 UTC m=+0.331622973 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=swift_object_server, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 14:47:58 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:47:58 standalone.localdomain podman[316343]: 2025-10-13 14:47:58.851779541 +0000 UTC m=+0.356987908 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, name=rhosp17/openstack-swift-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, container_name=swift_container_server, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12)
Oct 13 14:47:58 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:47:59 standalone.localdomain podman[316339]: 2025-10-13 14:47:59.001930228 +0000 UTC m=+0.511463448 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, tcib_managed=true, maintainer=OpenStack TripleO Team, 
vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, config_id=tripleo_step4, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, release=1, managed_by=tripleo_ansible)
Oct 13 14:47:59 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:47:59 standalone.localdomain ceph-mon[29756]: pgmap v2203: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2204: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:00 standalone.localdomain ceph-mon[29756]: pgmap v2204: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:48:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:48:01 standalone.localdomain podman[316471]: 2025-10-13 14:48:01.814728443 +0000 UTC m=+0.072747512 container health_status 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, health_status=unhealthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=neutron_sriov_agent, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, 
managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., build-date=2025-07-21T16:03:34)
Oct 13 14:48:01 standalone.localdomain podman[316471]: 2025-10-13 14:48:01.830894653 +0000 UTC m=+0.088913692 container exec_died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:03:34, 
summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.component=openstack-neutron-sriov-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, release=1, name=rhosp17/openstack-neutron-sriov-agent, tcib_managed=true, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vendor=Red Hat, Inc., version=17.1.9, vcs-type=git, container_name=neutron_sriov_agent, distribution-scope=public)
Oct 13 14:48:01 standalone.localdomain podman[316471]: unhealthy
Oct 13 14:48:01 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:48:01 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Failed with result 'exit-code'.
Oct 13 14:48:01 standalone.localdomain podman[316470]: 2025-10-13 14:48:01.874378259 +0000 UTC m=+0.133901655 container health_status 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20250721.1, config_id=tripleo_step5, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:48:01 standalone.localdomain podman[316470]: 2025-10-13 14:48:01.914926573 +0000 UTC m=+0.174449939 container exec_died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, container_name=nova_compute, distribution-scope=public, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true)
Oct 13 14:48:01 standalone.localdomain podman[316470]: unhealthy
Oct 13 14:48:01 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:48:01 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed with result 'exit-code'.
Oct 13 14:48:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2205: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:02 standalone.localdomain ceph-mon[29756]: pgmap v2205: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3473 DF PROTO=TCP SPT=36332 DPT=9101 SEQ=3633155644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9BCCB60000000001030307) 
Oct 13 14:48:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:48:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2206: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:04 standalone.localdomain ceph-mon[29756]: pgmap v2206: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2207: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:06 standalone.localdomain ceph-mon[29756]: pgmap v2207: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55255 DF PROTO=TCP SPT=38996 DPT=9100 SEQ=3621913021 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9BDF850000000001030307) 
Oct 13 14:48:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:48:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55256 DF PROTO=TCP SPT=38996 DPT=9100 SEQ=3621913021 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9BE3760000000001030307) 
Oct 13 14:48:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2208: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:09 standalone.localdomain ceph-mon[29756]: pgmap v2208: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2209: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55257 DF PROTO=TCP SPT=38996 DPT=9100 SEQ=3621913021 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9BEB770000000001030307) 
Oct 13 14:48:10 standalone.localdomain ceph-mon[29756]: pgmap v2209: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:11 standalone.localdomain systemd[1]: tripleo_neutron_dhcp.service: State 'stop-sigterm' timed out. Killing.
Oct 13 14:48:11 standalone.localdomain systemd[1]: tripleo_neutron_dhcp.service: Killing process 116918 (conmon) with signal SIGKILL.
Oct 13 14:48:11 standalone.localdomain systemd[1]: tripleo_neutron_dhcp.service: Main process exited, code=killed, status=9/KILL
Oct 13 14:48:11 standalone.localdomain systemd[1]: libpod-conmon-054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.scope: Deactivated successfully.
Oct 13 14:48:12 standalone.localdomain podman[316526]: error opening file `/run/crun/054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2/status`: No such file or directory
Oct 13 14:48:12 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.timer: Failed to open /run/systemd/transient/054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.timer: No such file or directory
Oct 13 14:48:12 standalone.localdomain systemd[1]: 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: Failed to open /run/systemd/transient/054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2.service: No such file or directory
Oct 13 14:48:12 standalone.localdomain podman[316513]: 2025-10-13 14:48:12.055170176 +0000 UTC m=+0.072145924 container cleanup 054c4114444dd8d2df9e5b1a0cf39c7de274855d10f2d549b0d020c06d6856d2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron_dhcp, container_name=neutron_dhcp, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, build-date=2025-07-21T16:28:54, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-neutron-dhcp-agent-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-dhcp-agent, version=17.1.9)
Oct 13 14:48:12 standalone.localdomain podman[316513]: neutron_dhcp
Oct 13 14:48:12 standalone.localdomain systemd[1]: tripleo_neutron_dhcp.service: Failed with result 'timeout'.
Oct 13 14:48:12 standalone.localdomain systemd[1]: Stopped neutron_dhcp container.
Oct 13 14:48:12 standalone.localdomain systemd[1]: tripleo_neutron_dhcp.service: Consumed 1.177s CPU time, read 60.0K from disk, written 79.0M to disk.
Oct 13 14:48:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2210: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:12 standalone.localdomain ceph-mon[29756]: pgmap v2210: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:12 standalone.localdomain python3.9[316618]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:48:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:48:13 standalone.localdomain python3.9[316709]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:48:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29641 DF PROTO=TCP SPT=52954 DPT=9105 SEQ=3616152541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9BF9360000000001030307) 
Oct 13 14:48:14 standalone.localdomain python3.9[316800]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_sriov_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:48:14 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:48:14 standalone.localdomain systemd-sysv-generator[316830]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:48:14 standalone.localdomain systemd-rc-local-generator[316827]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:48:14 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:48:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2211: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:14 standalone.localdomain systemd[1]: Stopping neutron_sriov_agent container...
Oct 13 14:48:14 standalone.localdomain ceph-mon[29756]: pgmap v2211: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:14 standalone.localdomain systemd[1]: tmp-crun.BcDW3g.mount: Deactivated successfully.
Oct 13 14:48:14 standalone.localdomain systemd[1]: libpod-7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.scope: Deactivated successfully.
Oct 13 14:48:14 standalone.localdomain systemd[1]: libpod-7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.scope: Consumed 8.150s CPU time.
Oct 13 14:48:14 standalone.localdomain podman[316841]: 2025-10-13 14:48:14.582104928 +0000 UTC m=+0.083220036 container died 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, name=rhosp17/openstack-neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:03:34, container_name=neutron_sriov_agent, io.openshift.expose-services=, vcs-type=git, 
com.redhat.component=openstack-neutron-sriov-agent-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, vendor=Red Hat, Inc., tcib_managed=true)
Oct 13 14:48:14 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.timer: Deactivated successfully.
Oct 13 14:48:14 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.
Oct 13 14:48:14 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Failed to open /run/systemd/transient/7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: No such file or directory
Oct 13 14:48:14 standalone.localdomain podman[316841]: 2025-10-13 14:48:14.650850676 +0000 UTC m=+0.151965734 container cleanup 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, container_name=neutron_sriov_agent, vcs-type=git, name=rhosp17/openstack-neutron-sriov-agent, version=17.1.9, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T16:03:34, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-neutron-sriov-agent-container, managed_by=tripleo_ansible, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969)
Oct 13 14:48:14 standalone.localdomain podman[316841]: neutron_sriov_agent
Oct 13 14:48:14 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.timer: Failed to open /run/systemd/transient/7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.timer: No such file or directory
Oct 13 14:48:14 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Failed to open /run/systemd/transient/7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: No such file or directory
Oct 13 14:48:14 standalone.localdomain podman[316855]: 2025-10-13 14:48:14.664647133 +0000 UTC m=+0.070723571 container cleanup 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, version=17.1.9, com.redhat.component=openstack-neutron-sriov-agent-container, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, build-date=2025-07-21T16:03:34, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, container_name=neutron_sriov_agent, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']})
Oct 13 14:48:14 standalone.localdomain systemd[1]: libpod-conmon-7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.scope: Deactivated successfully.
Oct 13 14:48:14 standalone.localdomain podman[316885]: error opening file `/run/crun/7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d/status`: No such file or directory
Oct 13 14:48:14 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.timer: Failed to open /run/systemd/transient/7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.timer: No such file or directory
Oct 13 14:48:14 standalone.localdomain systemd[1]: 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: Failed to open /run/systemd/transient/7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d.service: No such file or directory
Oct 13 14:48:14 standalone.localdomain podman[316872]: 2025-10-13 14:48:14.745520944 +0000 UTC m=+0.055388044 container cleanup 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, name=rhosp17/openstack-neutron-sriov-agent, container_name=neutron_sriov_agent, batch=17.1_20250721.1, build-date=2025-07-21T16:03:34, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-neutron-sriov-agent-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, vendor=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible)
Oct 13 14:48:14 standalone.localdomain podman[316872]: neutron_sriov_agent
Oct 13 14:48:14 standalone.localdomain systemd[1]: tripleo_neutron_sriov_agent.service: Deactivated successfully.
Oct 13 14:48:14 standalone.localdomain systemd[1]: Stopped neutron_sriov_agent container.
Oct 13 14:48:15 standalone.localdomain python3.9[316976]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:48:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d1e79337f55b7749d10f6b1db0bdc54fe96b01e19a9e1616f21af66c8b2090d8-merged.mount: Deactivated successfully.
Oct 13 14:48:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d-userdata-shm.mount: Deactivated successfully.
Oct 13 14:48:15 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:48:15 standalone.localdomain systemd-rc-local-generator[317001]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:48:15 standalone.localdomain systemd-sysv-generator[317008]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:48:15 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:48:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2212: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:16 standalone.localdomain ceph-mon[29756]: pgmap v2212: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:16 standalone.localdomain python3.9[317104]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:48:16 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:48:16 standalone.localdomain systemd-sysv-generator[317134]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:48:16 standalone.localdomain systemd-rc-local-generator[317130]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:48:16 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:48:17 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:48:17 standalone.localdomain recover_tripleo_nova_virtqemud[317143]: 93291
Oct 13 14:48:17 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:48:17 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:48:17 standalone.localdomain python3.9[317233]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:48:17 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:48:17 standalone.localdomain systemd-sysv-generator[317261]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:48:17 standalone.localdomain systemd-rc-local-generator[317258]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:48:18 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:48:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:48:18 standalone.localdomain systemd[1]: Stopping nova_compute container...
Oct 13 14:48:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2213: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:48:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/691710667' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:48:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:48:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/691710667' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:48:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3858 DF PROTO=TCP SPT=52436 DPT=9882 SEQ=1140373605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9C0BB60000000001030307) 
Oct 13 14:48:19 standalone.localdomain ceph-mon[29756]: pgmap v2213: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/691710667' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:48:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/691710667' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:48:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29643 DF PROTO=TCP SPT=52954 DPT=9105 SEQ=3616152541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9C10F60000000001030307) 
Oct 13 14:48:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2214: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:20 standalone.localdomain ceph-mon[29756]: pgmap v2214: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2215: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:22 standalone.localdomain ceph-mon[29756]: pgmap v2215: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:48:23
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', '.mgr', 'manila_metadata', 'backups', 'manila_data', 'vms', 'images']
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:48:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1570 DF PROTO=TCP SPT=45218 DPT=9102 SEQ=785090897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9C1E9E0000000001030307) 
Oct 13 14:48:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2216: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:24 standalone.localdomain ceph-mon[29756]: pgmap v2216: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33408 DF PROTO=TCP SPT=45546 DPT=9101 SEQ=876068314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9C262E0000000001030307) 
Oct 13 14:48:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:48:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:48:26 standalone.localdomain podman[317289]: 2025-10-13 14:48:26.044570574 +0000 UTC m=+0.051732173 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1)
Oct 13 14:48:26 standalone.localdomain podman[317289]: 2025-10-13 14:48:26.053687726 +0000 UTC m=+0.060849355 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4)
Oct 13 14:48:26 standalone.localdomain podman[317289]: unhealthy
Oct 13 14:48:26 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:48:26 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:48:26 standalone.localdomain podman[317288]: 2025-10-13 14:48:26.099564375 +0000 UTC m=+0.106638411 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4)
Oct 13 14:48:26 standalone.localdomain podman[317288]: 2025-10-13 14:48:26.107889532 +0000 UTC m=+0.114963588 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, distribution-scope=public, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=)
Oct 13 14:48:26 standalone.localdomain podman[317288]: unhealthy
Oct 13 14:48:26 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:48:26 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:48:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2217: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:26 standalone.localdomain ceph-mon[29756]: pgmap v2217: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2218: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33410 DF PROTO=TCP SPT=45546 DPT=9101 SEQ=876068314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9C32370000000001030307) 
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:48:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: pgmap v2218: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.104196) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366909104266, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 2111, "num_deletes": 251, "total_data_size": 1992979, "memory_usage": 2033680, "flush_reason": "Manual Compaction"}
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366909114249, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1933901, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50679, "largest_seqno": 52789, "table_properties": {"data_size": 1925652, "index_size": 5016, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17043, "raw_average_key_size": 19, "raw_value_size": 1908860, "raw_average_value_size": 2229, "num_data_blocks": 226, "num_entries": 856, "num_filter_entries": 856, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760366712, "oldest_key_time": 1760366712, "file_creation_time": 1760366909, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 10135 microseconds, and 4177 cpu microseconds.
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.114318) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1933901 bytes OK
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.114364) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.116568) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.116601) EVENT_LOG_v1 {"time_micros": 1760366909116590, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.116631) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1983975, prev total WAL file size 1983975, number of live WAL files 2.
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.117772) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1888KB)], [113(5048KB)]
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366909117841, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 7103918, "oldest_snapshot_seqno": -1}
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 5359 keys, 6081570 bytes, temperature: kUnknown
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366909144136, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 6081570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6047918, "index_size": 19155, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 135477, "raw_average_key_size": 25, "raw_value_size": 5952793, "raw_average_value_size": 1110, "num_data_blocks": 790, "num_entries": 5359, "num_filter_entries": 5359, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760366909, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.144569) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 6081570 bytes
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.146212) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 269.0 rd, 230.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 4.9 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 5879, records dropped: 520 output_compression: NoCompression
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.146241) EVENT_LOG_v1 {"time_micros": 1760366909146228, "job": 68, "event": "compaction_finished", "compaction_time_micros": 26413, "compaction_time_cpu_micros": 14986, "output_level": 6, "num_output_files": 1, "total_output_size": 6081570, "num_input_records": 5879, "num_output_records": 5359, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366909146644, "job": 68, "event": "table_file_deletion", "file_number": 115}
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760366909147321, "job": 68, "event": "table_file_deletion", "file_number": 113}
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.117624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.147391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.147398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.147401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.147404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:48:29 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:48:29.147407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:48:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:48:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:48:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:48:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:48:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:48:29 standalone.localdomain podman[317330]: 2025-10-13 14:48:29.333581225 +0000 UTC m=+0.091169553 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, 
release=1, vcs-type=git, build-date=2025-07-21T15:54:32, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, name=rhosp17/openstack-swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:48:29 standalone.localdomain systemd[1]: tmp-crun.slxE4y.mount: Deactivated successfully.
Oct 13 14:48:29 standalone.localdomain podman[317329]: 2025-10-13 14:48:29.447977435 +0000 UTC m=+0.202323822 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-swift-proxy-server-container, vcs-type=git, batch=17.1_20250721.1, container_name=swift_proxy, build-date=2025-07-21T14:48:37, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, name=rhosp17/openstack-swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc.)
Oct 13 14:48:29 standalone.localdomain podman[317334]: 2025-10-13 14:48:29.500692696 +0000 UTC m=+0.243705902 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, container_name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9)
Oct 13 14:48:29 standalone.localdomain podman[317327]: 2025-10-13 14:48:29.546169383 +0000 UTC m=+0.309025614 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, name=rhosp17/openstack-swift-object, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, build-date=2025-07-21T14:56:28, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, container_name=swift_object_server, batch=17.1_20250721.1, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 14:48:29 standalone.localdomain podman[317328]: 2025-10-13 14:48:29.419793193 +0000 UTC m=+0.180974261 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, architecture=x86_64)
Oct 13 14:48:29 standalone.localdomain podman[317330]: 2025-10-13 14:48:29.624694663 +0000 UTC m=+0.382282971 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, vcs-type=git, summary=Red Hat 
OpenStack Platform 17.1 swift-container, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=swift_container_server, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container)
Oct 13 14:48:29 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:48:29 standalone.localdomain podman[317329]: 2025-10-13 14:48:29.662698908 +0000 UTC m=+0.417045275 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, container_name=swift_proxy, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, name=rhosp17/openstack-swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:48:37, release=1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, version=17.1.9)
Oct 13 14:48:29 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:48:29 standalone.localdomain podman[317334]: 2025-10-13 14:48:29.729432044 +0000 UTC m=+0.472445270 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, release=1, version=17.1.9, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, com.redhat.component=openstack-swift-account-container, vcs-type=git, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, batch=17.1_20250721.1, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, 
container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Oct 13 14:48:29 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:48:29 standalone.localdomain podman[317327]: 2025-10-13 14:48:29.746949745 +0000 UTC m=+0.509806046 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=swift_object_server, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, io.openshift.expose-services=, version=17.1.9)
Oct 13 14:48:29 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:48:29 standalone.localdomain podman[317328]: 2025-10-13 14:48:29.806805558 +0000 UTC m=+0.567986636 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, architecture=x86_64, container_name=nova_migration_target, version=17.1.9, config_id=tripleo_step4, release=1, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20250721.1, 
description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute)
Oct 13 14:48:29 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:48:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2219: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:30 standalone.localdomain ceph-mon[29756]: pgmap v2219: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:48:32 standalone.localdomain podman[317457]: Error: container 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c is not running
Oct 13 14:48:32 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Main process exited, code=exited, status=125/n/a
Oct 13 14:48:32 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed with result 'exit-code'.
Oct 13 14:48:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2220: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:32 standalone.localdomain ceph-mon[29756]: pgmap v2220: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33411 DF PROTO=TCP SPT=45546 DPT=9101 SEQ=876068314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9C41F70000000001030307) 
Oct 13 14:48:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:48:33 standalone.localdomain sudo[317468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:48:33 standalone.localdomain sudo[317468]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:48:33 standalone.localdomain sudo[317468]: pam_unix(sudo:session): session closed for user root
Oct 13 14:48:33 standalone.localdomain sudo[317483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Oct 13 14:48:33 standalone.localdomain sudo[317483]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:48:33 standalone.localdomain sudo[317483]: pam_unix(sudo:session): session closed for user root
Oct 13 14:48:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:48:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:48:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:48:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:48:33 standalone.localdomain sudo[317519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:48:33 standalone.localdomain sudo[317519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:48:33 standalone.localdomain sudo[317519]: pam_unix(sudo:session): session closed for user root
Oct 13 14:48:34 standalone.localdomain sudo[317534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:48:34 standalone.localdomain sudo[317534]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:48:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2221: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:34 standalone.localdomain sudo[317534]: pam_unix(sudo:session): session closed for user root
Oct 13 14:48:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:48:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:48:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:48:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:48:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:48:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:48:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:48:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:48:34 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev b7af314c-d119-4e93-a767-da9ce5ee0ab5 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:48:34 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev b7af314c-d119-4e93-a767-da9ce5ee0ab5 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:48:34 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event b7af314c-d119-4e93-a767-da9ce5ee0ab5 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:48:34 standalone.localdomain sudo[317582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:48:34 standalone.localdomain sudo[317582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:48:34 standalone.localdomain sudo[317582]: pam_unix(sudo:session): session closed for user root
Oct 13 14:48:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:48:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:48:34 standalone.localdomain ceph-mon[29756]: pgmap v2221: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:48:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:48:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:48:34 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:48:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2222: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:36 standalone.localdomain ceph-mon[29756]: pgmap v2222: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12393 DF PROTO=TCP SPT=54082 DPT=9100 SEQ=1474989572 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9C54B40000000001030307) 
Oct 13 14:48:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:48:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12394 DF PROTO=TCP SPT=54082 DPT=9100 SEQ=1474989572 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9C58B70000000001030307) 
Oct 13 14:48:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2223: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:39 standalone.localdomain ceph-mon[29756]: pgmap v2223: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:39 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:48:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:48:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:48:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2224: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12395 DF PROTO=TCP SPT=54082 DPT=9100 SEQ=1474989572 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9C60B60000000001030307) 
Oct 13 14:48:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:48:40 standalone.localdomain ceph-mon[29756]: pgmap v2224: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:42 standalone.localdomain account-server[114563]: 172.20.0.100 - - [13/Oct/2025:14:48:42 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx4a600e5187324783ad838-0068ed114a" "proxy-server 2" 0.0007 "-" 21 -
Oct 13 14:48:42 standalone.localdomain swift[114487]: Error talking to memcached: standalone.internalapi.localdomain:11211: [Errno 32] Broken pipe (txn: tx4a600e5187324783ad838-0068ed114a)
Oct 13 14:48:42 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx4a600e5187324783ad838-0068ed114a)
Oct 13 14:48:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2225: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:42 standalone.localdomain ceph-mon[29756]: pgmap v2225: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:48:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11321 DF PROTO=TCP SPT=49350 DPT=9105 SEQ=3532175740 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9C6E760000000001030307) 
Oct 13 14:48:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2226: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:44 standalone.localdomain ceph-mon[29756]: pgmap v2226: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2227: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:46 standalone.localdomain ceph-mon[29756]: pgmap v2227: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:48:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2228: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40446 DF PROTO=TCP SPT=55364 DPT=9882 SEQ=2146675126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9C80B70000000001030307) 
Oct 13 14:48:49 standalone.localdomain ceph-mon[29756]: pgmap v2228: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11323 DF PROTO=TCP SPT=49350 DPT=9105 SEQ=3532175740 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9C86360000000001030307) 
Oct 13 14:48:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2229: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:50 standalone.localdomain ceph-mon[29756]: pgmap v2229: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2230: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:52 standalone.localdomain ceph-mon[29756]: pgmap v2230: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:48:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24349 DF PROTO=TCP SPT=40796 DPT=9102 SEQ=1762974930 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9C93CE0000000001030307) 
Oct 13 14:48:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2231: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:54 standalone.localdomain ceph-mon[29756]: pgmap v2231: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24562 DF PROTO=TCP SPT=57640 DPT=9101 SEQ=3632998486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9C9B5E0000000001030307) 
Oct 13 14:48:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2232: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:48:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:48:56 standalone.localdomain ceph-mon[29756]: pgmap v2232: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:56 standalone.localdomain podman[317598]: 2025-10-13 14:48:56.574221609 +0000 UTC m=+0.084354051 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, container_name=ovn_controller, distribution-scope=public, release=1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T13:28:44, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4)
Oct 13 14:48:56 standalone.localdomain podman[317598]: 2025-10-13 14:48:56.611163452 +0000 UTC m=+0.121295874 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, name=rhosp17/openstack-ovn-controller, batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12)
Oct 13 14:48:56 standalone.localdomain podman[317598]: unhealthy
Oct 13 14:48:56 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:48:56 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:48:56 standalone.localdomain podman[317597]: 2025-10-13 14:48:56.619581013 +0000 UTC m=+0.129683144 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=ovn_metadata_agent, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3)
Oct 13 14:48:56 standalone.localdomain podman[317597]: 2025-10-13 14:48:56.705332197 +0000 UTC m=+0.215434358 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, distribution-scope=public, build-date=2025-07-21T16:28:53, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12)
Oct 13 14:48:56 standalone.localdomain podman[317597]: unhealthy
Oct 13 14:48:56 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:48:56 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:48:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:48:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2233: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24564 DF PROTO=TCP SPT=57640 DPT=9101 SEQ=3632998486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9CA7760000000001030307) 
Oct 13 14:48:59 standalone.localdomain ceph-mon[29756]: pgmap v2233: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:48:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:48:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:48:59 standalone.localdomain podman[317637]: 2025-10-13 14:48:59.802172883 +0000 UTC m=+0.070319297 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, batch=17.1_20250721.1, version=17.1.9, name=rhosp17/openstack-swift-proxy-server, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=swift_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.component=openstack-swift-proxy-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, 
vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, release=1)
Oct 13 14:48:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:48:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:48:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:48:59 standalone.localdomain podman[317670]: 2025-10-13 14:48:59.90159095 +0000 UTC m=+0.069707339 container health_status 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, container_name=nova_migration_target, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1, build-date=2025-07-21T14:48:37)
Oct 13 14:48:59 standalone.localdomain podman[317667]: 2025-10-13 14:48:59.957533981 +0000 UTC m=+0.132340557 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, container_name=swift_object_server, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, build-date=2025-07-21T14:56:28, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, release=1, distribution-scope=public, config_id=tripleo_step4)
Oct 13 14:48:59 standalone.localdomain podman[317637]: 2025-10-13 14:48:59.96881252 +0000 UTC m=+0.236958944 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, name=rhosp17/openstack-swift-proxy-server, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, com.redhat.component=openstack-swift-proxy-server-container, maintainer=OpenStack TripleO Team, vcs-type=git, release=1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:48:59 standalone.localdomain podman[317638]: 2025-10-13 14:48:59.878537155 +0000 UTC m=+0.142801479 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-swift-container-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-container, release=1, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4)
Oct 13 14:48:59 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:49:00 standalone.localdomain podman[317668]: 2025-10-13 14:49:00.016083932 +0000 UTC m=+0.184773479 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, release=1, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, container_name=swift_account_server, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T16:11:22, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:49:00 standalone.localdomain podman[317638]: 2025-10-13 14:49:00.097544013 +0000 UTC m=+0.361808327 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., 
build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:49:00 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:49:00 standalone.localdomain podman[317667]: 2025-10-13 14:49:00.146825547 +0000 UTC m=+0.321632103 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, architecture=x86_64, container_name=swift_object_server, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, version=17.1.9, com.redhat.component=openstack-swift-object-container, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public)
Oct 13 14:49:00 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:49:00 standalone.localdomain podman[317668]: 2025-10-13 14:49:00.188739354 +0000 UTC m=+0.357428891 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vcs-type=git, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-account-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, build-date=2025-07-21T16:11:22)
Oct 13 14:49:00 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:49:00 standalone.localdomain podman[317670]: 2025-10-13 14:49:00.248849885 +0000 UTC m=+0.416966294 container exec_died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Oct 13 14:49:00 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Deactivated successfully.
Oct 13 14:49:00 standalone.localdomain podman[317274]: time="2025-10-13T14:49:00Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL"
Oct 13 14:49:00 standalone.localdomain systemd[1]: session-25.scope: Deactivated successfully.
Oct 13 14:49:00 standalone.localdomain systemd[1]: session-c16.scope: Deactivated successfully.
Oct 13 14:49:00 standalone.localdomain systemd[1]: libpod-277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.scope: Deactivated successfully.
Oct 13 14:49:00 standalone.localdomain systemd[1]: libpod-277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.scope: Consumed 28.674s CPU time.
Oct 13 14:49:00 standalone.localdomain podman[317274]: 2025-10-13 14:49:00.36247031 +0000 UTC m=+42.129704320 container died 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T14:48:37, tcib_managed=true, batch=17.1_20250721.1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, config_id=tripleo_step5, release=1, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:49:00 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.timer: Deactivated successfully.
Oct 13 14:49:00 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.
Oct 13 14:49:00 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed to open /run/systemd/transient/277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: No such file or directory
Oct 13 14:49:00 standalone.localdomain podman[317274]: 2025-10-13 14:49:00.415710648 +0000 UTC m=+42.182944668 container cleanup 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-07-21T14:48:37, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team)
Oct 13 14:49:00 standalone.localdomain podman[317274]: nova_compute
Oct 13 14:49:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2234: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:00 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.timer: Failed to open /run/systemd/transient/277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.timer: No such file or directory
Oct 13 14:49:00 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed to open /run/systemd/transient/277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: No such file or directory
Oct 13 14:49:00 standalone.localdomain podman[317768]: 2025-10-13 14:49:00.448926336 +0000 UTC m=+0.074278370 container cleanup 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.9, release=1, tcib_managed=true, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T14:48:37, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, container_name=nova_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:49:00 standalone.localdomain systemd[1]: libpod-conmon-277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.scope: Deactivated successfully.
Oct 13 14:49:00 standalone.localdomain ceph-mon[29756]: pgmap v2234: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:00 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.timer: Failed to open /run/systemd/transient/277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.timer: No such file or directory
Oct 13 14:49:00 standalone.localdomain systemd[1]: 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: Failed to open /run/systemd/transient/277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c.service: No such file or directory
Oct 13 14:49:00 standalone.localdomain podman[317782]: 2025-10-13 14:49:00.537426715 +0000 UTC m=+0.063801196 container cleanup 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.9, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, container_name=nova_compute, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-07-21T14:48:37, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team)
Oct 13 14:49:00 standalone.localdomain podman[317782]: nova_compute
Oct 13 14:49:00 standalone.localdomain systemd[1]: tripleo_nova_compute.service: Deactivated successfully.
Oct 13 14:49:00 standalone.localdomain systemd[1]: Stopped nova_compute container.
Oct 13 14:49:00 standalone.localdomain systemd[1]: tripleo_nova_compute.service: Consumed 1.134s CPU time, read 4.0K from disk, written 0B to disk.
Oct 13 14:49:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fa059f493f94192e7dce82495f13d5d800e8608118e1cb05a72f419570640fd4-merged.mount: Deactivated successfully.
Oct 13 14:49:01 standalone.localdomain python3.9[317885]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:49:01 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:49:01 standalone.localdomain systemd-rc-local-generator[317914]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:49:01 standalone.localdomain systemd-sysv-generator[317918]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:49:01 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:49:02 standalone.localdomain python3.9[318013]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:49:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2235: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:02 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:49:02 standalone.localdomain ceph-mon[29756]: pgmap v2235: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:02 standalone.localdomain systemd-rc-local-generator[318043]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:49:02 standalone.localdomain systemd-sysv-generator[318047]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:49:02 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:49:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24565 DF PROTO=TCP SPT=57640 DPT=9101 SEQ=3632998486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9CB7360000000001030307) 
Oct 13 14:49:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:49:03 standalone.localdomain python3.9[318141]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:49:03 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:49:03 standalone.localdomain systemd-sysv-generator[318175]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:49:03 standalone.localdomain systemd-rc-local-generator[318170]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:49:03 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:49:04 standalone.localdomain systemd[1]: Stopping nova_migration_target container...
Oct 13 14:49:04 standalone.localdomain sshd[113856]: Received signal 15; terminating.
Oct 13 14:49:04 standalone.localdomain systemd[1]: libpod-6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.scope: Deactivated successfully.
Oct 13 14:49:04 standalone.localdomain systemd[1]: libpod-6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.scope: Consumed 27.269s CPU time.
Oct 13 14:49:04 standalone.localdomain podman[318182]: 2025-10-13 14:49:04.129306989 +0000 UTC m=+0.066000004 container died 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, release=1, version=17.1.9, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20250721.1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-compute-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.)
Oct 13 14:49:04 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.timer: Deactivated successfully.
Oct 13 14:49:04 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.
Oct 13 14:49:04 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Failed to open /run/systemd/transient/6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: No such file or directory
Oct 13 14:49:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642-userdata-shm.mount: Deactivated successfully.
Oct 13 14:49:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-36dd811e02ba985b5a336c17f03e47ad2b35e431e58c3e755073c9d6afb5f7b6-merged.mount: Deactivated successfully.
Oct 13 14:49:04 standalone.localdomain podman[318182]: 2025-10-13 14:49:04.183254358 +0000 UTC m=+0.119947303 container cleanup 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, io.openshift.expose-services=, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, release=1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, com.redhat.component=openstack-nova-compute-container, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true)
Oct 13 14:49:04 standalone.localdomain podman[318182]: nova_migration_target
Oct 13 14:49:04 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.timer: Failed to open /run/systemd/transient/6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.timer: No such file or directory
Oct 13 14:49:04 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Failed to open /run/systemd/transient/6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: No such file or directory
Oct 13 14:49:04 standalone.localdomain podman[318196]: 2025-10-13 14:49:04.19365861 +0000 UTC m=+0.059839793 container cleanup 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.buildah.version=1.33.12, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-nova-compute-container, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, build-date=2025-07-21T14:48:37, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_migration_target, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d)
Oct 13 14:49:04 standalone.localdomain systemd[1]: libpod-conmon-6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.scope: Deactivated successfully.
Oct 13 14:49:04 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.timer: Failed to open /run/systemd/transient/6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.timer: No such file or directory
Oct 13 14:49:04 standalone.localdomain systemd[1]: 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: Failed to open /run/systemd/transient/6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642.service: No such file or directory
Oct 13 14:49:04 standalone.localdomain podman[318210]: 2025-10-13 14:49:04.281019043 +0000 UTC m=+0.052109383 container cleanup 6d03a7bbde076a06480e8c554790bc49f08d8b8a7643549600906c97ab2e6642 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1, version=17.1.9, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, container_name=nova_migration_target, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:49:04 standalone.localdomain podman[318210]: nova_migration_target
Oct 13 14:49:04 standalone.localdomain systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully.
Oct 13 14:49:04 standalone.localdomain systemd[1]: Stopped nova_migration_target container.
Oct 13 14:49:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2236: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:04 standalone.localdomain ceph-mon[29756]: pgmap v2236: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:04 standalone.localdomain python3.9[318312]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:49:04 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:49:05 standalone.localdomain systemd-rc-local-generator[318342]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:49:05 standalone.localdomain systemd-sysv-generator[318345]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:49:05 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:49:06 standalone.localdomain python3.9[318440]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:49:06 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:49:06 standalone.localdomain systemd-rc-local-generator[318469]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:49:06 standalone.localdomain systemd-sysv-generator[318473]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:49:06 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:49:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2237: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:06 standalone.localdomain systemd[1]: Stopping nova_virtlogd_wrapper container...
Oct 13 14:49:06 standalone.localdomain ceph-mon[29756]: pgmap v2237: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:06 standalone.localdomain systemd[1]: libpod-a9c557e2ca2ff127a60c268328aa01d5890c24fc27957408cd4028350e5e2189.scope: Deactivated successfully.
Oct 13 14:49:06 standalone.localdomain podman[318481]: 2025-10-13 14:49:06.563853822 +0000 UTC m=+0.057319405 container died a9c557e2ca2ff127a60c268328aa01d5890c24fc27957408cd4028350e5e2189 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, distribution-scope=public, build-date=2025-07-21T14:56:59, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, container_name=nova_virtlogd_wrapper, io.openshift.expose-services=, batch=17.1_20250721.1, version=17.1.9, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 13 14:49:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9c557e2ca2ff127a60c268328aa01d5890c24fc27957408cd4028350e5e2189-userdata-shm.mount: Deactivated successfully.
Oct 13 14:49:06 standalone.localdomain podman[318481]: 2025-10-13 14:49:06.602193258 +0000 UTC m=+0.095658791 container cleanup a9c557e2ca2ff127a60c268328aa01d5890c24fc27957408cd4028350e5e2189 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, release=2, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, vcs-type=git, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, build-date=2025-07-21T14:56:59)
Oct 13 14:49:06 standalone.localdomain podman[318481]: nova_virtlogd_wrapper
Oct 13 14:49:06 standalone.localdomain podman[318493]: 2025-10-13 14:49:06.628269725 +0000 UTC m=+0.061009629 container cleanup a9c557e2ca2ff127a60c268328aa01d5890c24fc27957408cd4028350e5e2189 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=2, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, container_name=nova_virtlogd_wrapper, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, com.redhat.component=openstack-nova-libvirt-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 14:49:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45789 DF PROTO=TCP SPT=36114 DPT=9100 SEQ=933805771 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9CC9E40000000001030307) 
Oct 13 14:49:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6bf0b4ae8ca561a8c7e45cdbb30d417510f12bd335f16e8c8ff70bfec89efede-merged.mount: Deactivated successfully.
Oct 13 14:49:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:49:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45790 DF PROTO=TCP SPT=36114 DPT=9100 SEQ=933805771 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9CCDF60000000001030307) 
Oct 13 14:49:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2238: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:09 standalone.localdomain ceph-mon[29756]: pgmap v2238: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2239: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45791 DF PROTO=TCP SPT=36114 DPT=9100 SEQ=933805771 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9CD5F70000000001030307) 
Oct 13 14:49:10 standalone.localdomain ceph-mon[29756]: pgmap v2239: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2240: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:12 standalone.localdomain ceph-mon[29756]: pgmap v2240: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:49:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13476 DF PROTO=TCP SPT=45742 DPT=9105 SEQ=3834071909 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9CE3B60000000001030307) 
Oct 13 14:49:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2241: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:14 standalone.localdomain ceph-mon[29756]: pgmap v2241: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2242: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:16 standalone.localdomain ceph-mon[29756]: pgmap v2242: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:49:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2243: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:49:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1405374829' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:49:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:49:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1405374829' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:49:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30011 DF PROTO=TCP SPT=51914 DPT=9882 SEQ=1040459894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9CF5F70000000001030307) 
Oct 13 14:49:19 standalone.localdomain ceph-mon[29756]: pgmap v2243: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1405374829' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:49:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1405374829' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:49:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13478 DF PROTO=TCP SPT=45742 DPT=9105 SEQ=3834071909 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9CFB760000000001030307) 
Oct 13 14:49:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2244: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:20 standalone.localdomain ceph-mon[29756]: pgmap v2244: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2245: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:22 standalone.localdomain ceph-mon[29756]: pgmap v2245: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:49:23
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr', 'images', 'manila_metadata', 'manila_data', 'volumes', 'vms', 'backups']
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:49:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63654 DF PROTO=TCP SPT=43042 DPT=9102 SEQ=1962950864 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9D08FE0000000001030307) 
Oct 13 14:49:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2246: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:24 standalone.localdomain ceph-mon[29756]: pgmap v2246: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60103 DF PROTO=TCP SPT=51802 DPT=9101 SEQ=1979222143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9D108F0000000001030307) 
Oct 13 14:49:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2247: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:26 standalone.localdomain ceph-mon[29756]: pgmap v2247: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:49:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:49:26 standalone.localdomain podman[318508]: 2025-10-13 14:49:26.790132231 +0000 UTC m=+0.060412381 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.33.12, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vcs-type=git, tcib_managed=true)
Oct 13 14:49:26 standalone.localdomain podman[318508]: 2025-10-13 14:49:26.803792404 +0000 UTC m=+0.074072564 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, version=17.1.9, container_name=ovn_controller, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-07-21T13:28:44, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller)
Oct 13 14:49:26 standalone.localdomain podman[318508]: unhealthy
Oct 13 14:49:26 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:49:26 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:49:26 standalone.localdomain podman[318509]: 2025-10-13 14:49:26.850077865 +0000 UTC m=+0.112988526 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, vcs-type=git, architecture=x86_64)
Oct 13 14:49:26 standalone.localdomain podman[318509]: 2025-10-13 14:49:26.862945744 +0000 UTC m=+0.125856405 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, container_name=ovn_metadata_agent, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public)
Oct 13 14:49:26 standalone.localdomain podman[318509]: unhealthy
Oct 13 14:49:26 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:49:26 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:49:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2248: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60105 DF PROTO=TCP SPT=51802 DPT=9101 SEQ=1979222143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9D1CB60000000001030307) 
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:49:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:49:29 standalone.localdomain ceph-mon[29756]: pgmap v2248: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2249: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:49:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:49:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:49:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:49:30 standalone.localdomain ceph-mon[29756]: pgmap v2249: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:30 standalone.localdomain systemd[1]: tmp-crun.IWGMMz.mount: Deactivated successfully.
Oct 13 14:49:30 standalone.localdomain podman[318546]: 2025-10-13 14:49:30.547825108 +0000 UTC m=+0.072509175 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, tcib_managed=true, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, container_name=swift_object_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-swift-object-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, build-date=2025-07-21T14:56:28, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4)
Oct 13 14:49:30 standalone.localdomain systemd[1]: tmp-crun.FUVzyc.mount: Deactivated successfully.
Oct 13 14:49:30 standalone.localdomain podman[318550]: 2025-10-13 14:49:30.608749693 +0000 UTC m=+0.123071449 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-07-21T16:11:22, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.expose-services=, vcs-type=git, container_name=swift_account_server)
Oct 13 14:49:30 standalone.localdomain podman[318548]: 2025-10-13 14:49:30.579901421 +0000 UTC m=+0.100623105 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, 
container_name=swift_container_server, release=1, tcib_managed=true, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-container-container)
Oct 13 14:49:30 standalone.localdomain podman[318547]: 2025-10-13 14:49:30.583572784 +0000 UTC m=+0.102800842 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, vcs-type=git, container_name=swift_proxy, io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, 
build-date=2025-07-21T14:48:37, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container, io.buildah.version=1.33.12)
Oct 13 14:49:30 standalone.localdomain podman[318546]: 2025-10-13 14:49:30.7307874 +0000 UTC m=+0.255471487 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, name=rhosp17/openstack-swift-object, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, container_name=swift_object_server, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, 
com.redhat.component=openstack-swift-object-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, build-date=2025-07-21T14:56:28, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:49:30 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:49:30 standalone.localdomain podman[318548]: 2025-10-13 14:49:30.76085992 +0000 UTC m=+0.281581644 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, architecture=x86_64, version=17.1.9, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, container_name=swift_container_server, com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, 
build-date=2025-07-21T15:54:32, tcib_managed=true, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team)
Oct 13 14:49:30 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:49:30 standalone.localdomain podman[318547]: 2025-10-13 14:49:30.776253746 +0000 UTC m=+0.295481814 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, vendor=Red Hat, Inc., io.buildah.version=1.33.12, container_name=swift_proxy, name=rhosp17/openstack-swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, 
distribution-scope=public, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 14:49:30 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:49:30 standalone.localdomain podman[318550]: 2025-10-13 14:49:30.797662178 +0000 UTC m=+0.311983934 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, io.buildah.version=1.33.12, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, 
io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-account)
Oct 13 14:49:30 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:49:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2250: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:32 standalone.localdomain ceph-mon[29756]: pgmap v2250: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60106 DF PROTO=TCP SPT=51802 DPT=9101 SEQ=1979222143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9D2C760000000001030307) 
Oct 13 14:49:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:49:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2251: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:34 standalone.localdomain ceph-mon[29756]: pgmap v2251: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:34 standalone.localdomain sudo[318658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:49:34 standalone.localdomain sudo[318658]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:49:34 standalone.localdomain sudo[318658]: pam_unix(sudo:session): session closed for user root
Oct 13 14:49:34 standalone.localdomain sudo[318673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:49:34 standalone.localdomain sudo[318673]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:49:35 standalone.localdomain sudo[318673]: pam_unix(sudo:session): session closed for user root
Oct 13 14:49:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:49:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:49:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:49:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:49:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:49:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:49:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:49:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:49:35 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 8dcd59b7-d5dc-441d-aa40-f82a5d49fcda (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:49:35 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 8dcd59b7-d5dc-441d-aa40-f82a5d49fcda (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:49:35 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 8dcd59b7-d5dc-441d-aa40-f82a5d49fcda (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:49:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:49:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:49:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:49:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:49:35 standalone.localdomain sudo[318721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:49:35 standalone.localdomain sudo[318721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:49:35 standalone.localdomain sudo[318721]: pam_unix(sudo:session): session closed for user root
Oct 13 14:49:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2252: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:36 standalone.localdomain ceph-mon[29756]: pgmap v2252: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15588 DF PROTO=TCP SPT=49296 DPT=9100 SEQ=2887547611 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9D3F150000000001030307) 
Oct 13 14:49:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:49:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2253: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15589 DF PROTO=TCP SPT=49296 DPT=9100 SEQ=2887547611 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9D43370000000001030307) 
Oct 13 14:49:39 standalone.localdomain ceph-mon[29756]: pgmap v2253: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:39 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:49:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:49:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:49:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2254: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15590 DF PROTO=TCP SPT=49296 DPT=9100 SEQ=2887547611 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9D4B370000000001030307) 
Oct 13 14:49:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:49:40 standalone.localdomain ceph-mon[29756]: pgmap v2254: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2255: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:42 standalone.localdomain ceph-mon[29756]: pgmap v2255: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:49:44 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52593 DF PROTO=TCP SPT=49454 DPT=9105 SEQ=1961618830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9D58F60000000001030307) 
Oct 13 14:49:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2256: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:44 standalone.localdomain ceph-mon[29756]: pgmap v2256: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2257: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:46 standalone.localdomain ceph-mon[29756]: pgmap v2257: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:49:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2258: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24978 DF PROTO=TCP SPT=34488 DPT=9882 SEQ=3868899541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9D6B370000000001030307) 
Oct 13 14:49:49 standalone.localdomain ceph-mon[29756]: pgmap v2258: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52595 DF PROTO=TCP SPT=49454 DPT=9105 SEQ=1961618830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9D70B70000000001030307) 
Oct 13 14:49:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2259: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:50 standalone.localdomain ceph-mon[29756]: pgmap v2259: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:51 standalone.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Oct 13 14:49:51 standalone.localdomain recover_tripleo_nova_virtqemud[318737]: 93291
Oct 13 14:49:51 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Oct 13 14:49:51 standalone.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Oct 13 14:49:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2260: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:52 standalone.localdomain ceph-mon[29756]: pgmap v2260: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:49:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11421 DF PROTO=TCP SPT=40146 DPT=9102 SEQ=3110911066 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9D7E2F0000000001030307) 
Oct 13 14:49:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2261: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:54 standalone.localdomain ceph-mon[29756]: pgmap v2261: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62597 DF PROTO=TCP SPT=45988 DPT=9101 SEQ=1090129733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9D85BE0000000001030307) 
Oct 13 14:49:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2262: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:56 standalone.localdomain ceph-mon[29756]: pgmap v2262: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:49:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:49:57 standalone.localdomain podman[318738]: 2025-10-13 14:49:57.052294464 +0000 UTC m=+0.066124116 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, build-date=2025-07-21T16:28:53, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, 
version=17.1.9, architecture=x86_64, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:49:57 standalone.localdomain podman[318738]: 2025-10-13 14:49:57.063443399 +0000 UTC m=+0.077273081 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, vcs-type=git, distribution-scope=public)
Oct 13 14:49:57 standalone.localdomain podman[318738]: unhealthy
Oct 13 14:49:57 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:49:57 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:49:57 standalone.localdomain podman[318739]: 2025-10-13 14:49:57.109926648 +0000 UTC m=+0.115611478 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, distribution-scope=public, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Oct 13 14:49:57 standalone.localdomain podman[318739]: 2025-10-13 14:49:57.145272412 +0000 UTC m=+0.150957222 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, build-date=2025-07-21T13:28:44, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Oct 13 14:49:57 standalone.localdomain podman[318739]: unhealthy
Oct 13 14:49:57 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:49:57 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:49:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:49:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2263: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:49:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62599 DF PROTO=TCP SPT=45988 DPT=9101 SEQ=1090129733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9D91B60000000001030307) 
Oct 13 14:49:59 standalone.localdomain ceph-mon[29756]: pgmap v2263: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2264: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:00 standalone.localdomain ceph-mon[29756]: pgmap v2264: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:50:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:50:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:50:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:50:01 standalone.localdomain podman[318782]: 2025-10-13 14:50:01.289546969 +0000 UTC m=+0.051517005 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-container-container, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, architecture=x86_64, distribution-scope=public, build-date=2025-07-21T15:54:32, vcs-type=git)
Oct 13 14:50:01 standalone.localdomain podman[318780]: 2025-10-13 14:50:01.352949611 +0000 UTC m=+0.115247408 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, container_name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:50:01 standalone.localdomain podman[318781]: 2025-10-13 14:50:01.400193932 +0000 UTC m=+0.161913270 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-proxy-server-container, name=rhosp17/openstack-swift-proxy-server, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:48:37, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1)
Oct 13 14:50:01 standalone.localdomain podman[318783]: 2025-10-13 14:50:01.453087599 +0000 UTC m=+0.212508767 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=swift_account_server, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, tcib_managed=true, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:50:01 standalone.localdomain podman[318780]: 2025-10-13 14:50:01.511909349 +0000 UTC m=+0.274207136 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, vcs-type=git, release=1, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, version=17.1.9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:50:01 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:50:01 standalone.localdomain podman[318782]: 2025-10-13 14:50:01.530767203 +0000 UTC m=+0.292737219 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:50:01 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:50:01 standalone.localdomain podman[318781]: 2025-10-13 14:50:01.568837572 +0000 UTC m=+0.330556910 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, managed_by=tripleo_ansible, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-proxy-server-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, container_name=swift_proxy, io.openshift.expose-services=, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-type=git, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, batch=17.1_20250721.1, name=rhosp17/openstack-swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, architecture=x86_64)
Oct 13 14:50:01 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:50:01 standalone.localdomain podman[318783]: 2025-10-13 14:50:01.61986971 +0000 UTC m=+0.379290908 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.expose-services=, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, release=1, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, distribution-scope=public, architecture=x86_64)
Oct 13 14:50:01 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:50:02 standalone.localdomain systemd[1]: tmp-crun.zFKyuN.mount: Deactivated successfully.
Oct 13 14:50:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2265: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:02 standalone.localdomain ceph-mon[29756]: pgmap v2265: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62600 DF PROTO=TCP SPT=45988 DPT=9101 SEQ=1090129733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9DA1760000000001030307) 
Oct 13 14:50:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:50:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2266: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:04 standalone.localdomain ceph-mon[29756]: pgmap v2266: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2267: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:06 standalone.localdomain ceph-mon[29756]: pgmap v2267: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57475 DF PROTO=TCP SPT=59616 DPT=9100 SEQ=3267840015 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9DB4450000000001030307) 
Oct 13 14:50:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:50:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57476 DF PROTO=TCP SPT=59616 DPT=9100 SEQ=3267840015 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9DB8360000000001030307) 
Oct 13 14:50:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2268: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:09 standalone.localdomain ceph-mon[29756]: pgmap v2268: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57477 DF PROTO=TCP SPT=59616 DPT=9100 SEQ=3267840015 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9DC0360000000001030307) 
Oct 13 14:50:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2269: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:10 standalone.localdomain ceph-mon[29756]: pgmap v2269: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2270: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:12 standalone.localdomain ceph-mon[29756]: pgmap v2270: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:50:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5230 DF PROTO=TCP SPT=39830 DPT=9105 SEQ=2986298028 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9DCDF60000000001030307) 
Oct 13 14:50:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2271: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:14 standalone.localdomain ceph-mon[29756]: pgmap v2271: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2272: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:16 standalone.localdomain ceph-mon[29756]: pgmap v2272: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:50:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2273: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:50:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/833415466' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:50:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:50:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/833415466' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:50:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12756 DF PROTO=TCP SPT=54550 DPT=9882 SEQ=2327021420 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9DE0770000000001030307) 
Oct 13 14:50:19 standalone.localdomain ceph-mon[29756]: pgmap v2273: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/833415466' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:50:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/833415466' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:50:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5232 DF PROTO=TCP SPT=39830 DPT=9105 SEQ=2986298028 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9DE5B60000000001030307) 
Oct 13 14:50:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2274: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:20 standalone.localdomain ceph-mon[29756]: pgmap v2274: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2275: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:22 standalone.localdomain ceph-mon[29756]: pgmap v2275: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:50:23
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'volumes', '.mgr', 'vms', 'manila_data', 'images', 'backups']
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:50:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25843 DF PROTO=TCP SPT=41080 DPT=9102 SEQ=2295315279 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9DF35F0000000001030307) 
Oct 13 14:50:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2276: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:24 standalone.localdomain ceph-mon[29756]: pgmap v2276: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:24 standalone.localdomain account-server[114566]: 172.20.0.100 - - [13/Oct/2025:14:50:24 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx496e578c8d64487fbde9a-0068ed11b0" "proxy-server 2" 0.0005 "-" 22 -
Oct 13 14:50:24 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx496e578c8d64487fbde9a-0068ed11b0)
Oct 13 14:50:24 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx496e578c8d64487fbde9a-0068ed11b0)
Oct 13 14:50:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59197 DF PROTO=TCP SPT=43248 DPT=9101 SEQ=4238268292 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9DFAED0000000001030307) 
Oct 13 14:50:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2277: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:26 standalone.localdomain ceph-mon[29756]: pgmap v2277: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:50:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:50:27 standalone.localdomain podman[318891]: 2025-10-13 14:50:27.5999463 +0000 UTC m=+0.108541400 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, version=17.1.9)
Oct 13 14:50:27 standalone.localdomain podman[318891]: 2025-10-13 14:50:27.664053333 +0000 UTC m=+0.172648373 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=)
Oct 13 14:50:27 standalone.localdomain podman[318891]: unhealthy
Oct 13 14:50:27 standalone.localdomain podman[318890]: 2025-10-13 14:50:27.6743221 +0000 UTC m=+0.187067218 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.9, release=1, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=ovn_metadata_agent, vcs-type=git, build-date=2025-07-21T16:28:53, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Oct 13 14:50:27 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:50:27 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:50:27 standalone.localdomain podman[318890]: 2025-10-13 14:50:27.690194562 +0000 UTC m=+0.202939670 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, version=17.1.9, release=1, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent)
Oct 13 14:50:27 standalone.localdomain podman[318890]: unhealthy
Oct 13 14:50:27 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:50:27 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:50:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2278: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59199 DF PROTO=TCP SPT=43248 DPT=9101 SEQ=4238268292 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9E06F60000000001030307) 
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:50:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:50:29 standalone.localdomain ceph-mon[29756]: pgmap v2278: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2279: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:30 standalone.localdomain ceph-mon[29756]: pgmap v2279: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:30 standalone.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing.
Oct 13 14:50:30 standalone.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 89839 (conmon) with signal SIGKILL.
Oct 13 14:50:30 standalone.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL
Oct 13 14:50:30 standalone.localdomain systemd[1]: libpod-conmon-a9c557e2ca2ff127a60c268328aa01d5890c24fc27957408cd4028350e5e2189.scope: Deactivated successfully.
Oct 13 14:50:30 standalone.localdomain podman[318942]: error opening file `/run/crun/a9c557e2ca2ff127a60c268328aa01d5890c24fc27957408cd4028350e5e2189/status`: No such file or directory
Oct 13 14:50:30 standalone.localdomain podman[318930]: 2025-10-13 14:50:30.800002908 +0000 UTC m=+0.069730778 container cleanup a9c557e2ca2ff127a60c268328aa01d5890c24fc27957408cd4028350e5e2189 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, config_id=tripleo_step3, distribution-scope=public, build-date=2025-07-21T14:56:59, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., release=2, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, container_name=nova_virtlogd_wrapper, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible)
Oct 13 14:50:30 standalone.localdomain podman[318930]: nova_virtlogd_wrapper
Oct 13 14:50:30 standalone.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'.
Oct 13 14:50:30 standalone.localdomain systemd[1]: Stopped nova_virtlogd_wrapper container.
Oct 13 14:50:31 standalone.localdomain python3.9[319034]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:50:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:50:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:50:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:50:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:50:31 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:50:31 standalone.localdomain podman[319037]: 2025-10-13 14:50:31.738137498 +0000 UTC m=+0.105280540 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, tcib_managed=true, com.redhat.component=openstack-swift-proxy-server-container, io.buildah.version=1.33.12, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_proxy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, vendor=Red Hat, Inc.)
Oct 13 14:50:31 standalone.localdomain systemd-rc-local-generator[319126]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:50:31 standalone.localdomain systemd-sysv-generator[319133]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:50:31 standalone.localdomain podman[319065]: 2025-10-13 14:50:31.834454968 +0000 UTC m=+0.167294157 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-swift-account)
Oct 13 14:50:31 standalone.localdomain podman[319036]: 2025-10-13 14:50:31.79153589 +0000 UTC m=+0.161809168 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 
swift-object, release=1, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, version=17.1.9, io.openshift.expose-services=, managed_by=tripleo_ansible)
Oct 13 14:50:31 standalone.localdomain podman[319038]: 2025-10-13 14:50:31.852733484 +0000 UTC m=+0.214687315 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, distribution-scope=public, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, release=1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:50:31 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:50:31 standalone.localdomain podman[319037]: 2025-10-13 14:50:31.936417983 +0000 UTC m=+0.303561055 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, release=1, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.tags=rhosp osp openstack 
osp-17.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T14:48:37, container_name=swift_proxy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64)
Oct 13 14:50:31 standalone.localdomain podman[319036]: 2025-10-13 14:50:31.988858715 +0000 UTC m=+0.359132003 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, batch=17.1_20250721.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, 
com.redhat.component=openstack-swift-object-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, architecture=x86_64, version=17.1.9, vendor=Red Hat, Inc., container_name=swift_object_server)
Oct 13 14:50:32 standalone.localdomain podman[319038]: 2025-10-13 14:50:32.043882428 +0000 UTC m=+0.405836239 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-container, config_id=tripleo_step4, container_name=swift_container_server, distribution-scope=public, com.redhat.component=openstack-swift-container-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, release=1, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 14:50:32 standalone.localdomain podman[319065]: 2025-10-13 14:50:32.065970992 +0000 UTC m=+0.398810151 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, build-date=2025-07-21T16:11:22, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, managed_by=tripleo_ansible, io.buildah.version=1.33.12, 
batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, config_id=tripleo_step4, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, release=1, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public)
Oct 13 14:50:32 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:50:32 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:50:32 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:50:32 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:50:32 standalone.localdomain systemd[1]: Stopping nova_virtnodedevd container...
Oct 13 14:50:32 standalone.localdomain systemd[1]: libpod-4f84c950665978030a1f16ed1143582e5f9c249fe033ab2a2782c01be99cc5fc.scope: Deactivated successfully.
Oct 13 14:50:32 standalone.localdomain systemd[1]: libpod-4f84c950665978030a1f16ed1143582e5f9c249fe033ab2a2782c01be99cc5fc.scope: Consumed 1.182s CPU time.
Oct 13 14:50:32 standalone.localdomain podman[319180]: 2025-10-13 14:50:32.161024883 +0000 UTC m=+0.056008544 container died 4f84c950665978030a1f16ed1143582e5f9c249fe033ab2a2782c01be99cc5fc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=2, vcs-type=git, build-date=2025-07-21T14:56:59, container_name=nova_virtnodedevd, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64)
Oct 13 14:50:32 standalone.localdomain podman[319180]: 2025-10-13 14:50:32.191845637 +0000 UTC m=+0.086829328 container cleanup 4f84c950665978030a1f16ed1143582e5f9c249fe033ab2a2782c01be99cc5fc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, container_name=nova_virtnodedevd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, release=2, build-date=2025-07-21T14:56:59, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, architecture=x86_64)
Oct 13 14:50:32 standalone.localdomain podman[319180]: nova_virtnodedevd
Oct 13 14:50:32 standalone.localdomain podman[319196]: 2025-10-13 14:50:32.207861533 +0000 UTC m=+0.041474905 container cleanup 4f84c950665978030a1f16ed1143582e5f9c249fe033ab2a2782c01be99cc5fc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, architecture=x86_64, release=2, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, container_name=nova_virtnodedevd, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', 
'/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9)
Oct 13 14:50:32 standalone.localdomain systemd[1]: libpod-conmon-4f84c950665978030a1f16ed1143582e5f9c249fe033ab2a2782c01be99cc5fc.scope: Deactivated successfully.
Oct 13 14:50:32 standalone.localdomain podman[319226]: error opening file `/run/crun/4f84c950665978030a1f16ed1143582e5f9c249fe033ab2a2782c01be99cc5fc/status`: No such file or directory
Oct 13 14:50:32 standalone.localdomain podman[319213]: 2025-10-13 14:50:32.313613274 +0000 UTC m=+0.067885461 container cleanup 4f84c950665978030a1f16ed1143582e5f9c249fe033ab2a2782c01be99cc5fc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, batch=17.1_20250721.1, container_name=nova_virtnodedevd, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:59, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 13 14:50:32 standalone.localdomain podman[319213]: nova_virtnodedevd
Oct 13 14:50:32 standalone.localdomain systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully.
Oct 13 14:50:32 standalone.localdomain systemd[1]: Stopped nova_virtnodedevd container.
Oct 13 14:50:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2280: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:32 standalone.localdomain ceph-mon[29756]: pgmap v2280: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59200 DF PROTO=TCP SPT=43248 DPT=9101 SEQ=4238268292 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9E16B60000000001030307) 
Oct 13 14:50:33 standalone.localdomain python3.9[319317]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:50:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b33375115d3f8cb6947ff63348a6809984ef15de15f68be27bcaefc607b71c68-merged.mount: Deactivated successfully.
Oct 13 14:50:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f84c950665978030a1f16ed1143582e5f9c249fe033ab2a2782c01be99cc5fc-userdata-shm.mount: Deactivated successfully.
Oct 13 14:50:33 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:50:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:50:33 standalone.localdomain systemd-rc-local-generator[319344]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:50:33 standalone.localdomain systemd-sysv-generator[319350]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:50:33 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:50:33 standalone.localdomain systemd[1]: Stopping nova_virtproxyd container...
Oct 13 14:50:33 standalone.localdomain systemd[1]: libpod-082f90b100c1e61ff254c4e8202645f7e953ca071f54d740dec70e32afe4da6f.scope: Deactivated successfully.
Oct 13 14:50:33 standalone.localdomain podman[319358]: 2025-10-13 14:50:33.581634372 +0000 UTC m=+0.060219624 container died 082f90b100c1e61ff254c4e8202645f7e953ca071f54d740dec70e32afe4da6f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, tcib_managed=true, architecture=x86_64, container_name=nova_virtproxyd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_id=tripleo_step3, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:59, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public)
Oct 13 14:50:33 standalone.localdomain systemd[1]: tmp-crun.2yvF7B.mount: Deactivated successfully.
Oct 13 14:50:33 standalone.localdomain podman[319358]: 2025-10-13 14:50:33.618712399 +0000 UTC m=+0.097297701 container cleanup 082f90b100c1e61ff254c4e8202645f7e953ca071f54d740dec70e32afe4da6f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, build-date=2025-07-21T14:56:59, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=nova_virtproxyd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, release=2, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, config_id=tripleo_step3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git)
Oct 13 14:50:33 standalone.localdomain podman[319358]: nova_virtproxyd
Oct 13 14:50:33 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 14:50:33 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 14:50:33 standalone.localdomain podman[319371]: 2025-10-13 14:50:33.641729712 +0000 UTC m=+0.050309498 container cleanup 082f90b100c1e61ff254c4e8202645f7e953ca071f54d740dec70e32afe4da6f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=2, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T14:56:59, container_name=nova_virtproxyd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_id=tripleo_step3, tcib_managed=true, io.buildah.version=1.33.12, architecture=x86_64, managed_by=tripleo_ansible)
Oct 13 14:50:33 standalone.localdomain systemd[1]: libpod-conmon-082f90b100c1e61ff254c4e8202645f7e953ca071f54d740dec70e32afe4da6f.scope: Deactivated successfully.
Oct 13 14:50:33 standalone.localdomain podman[319399]: error opening file `/run/crun/082f90b100c1e61ff254c4e8202645f7e953ca071f54d740dec70e32afe4da6f/status`: No such file or directory
Oct 13 14:50:33 standalone.localdomain podman[319386]: 2025-10-13 14:50:33.71213506 +0000 UTC m=+0.039787453 container cleanup 082f90b100c1e61ff254c4e8202645f7e953ca071f54d740dec70e32afe4da6f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, release=2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, version=17.1.9, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=nova_virtproxyd, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, build-date=2025-07-21T14:56:59)
Oct 13 14:50:33 standalone.localdomain podman[319386]: nova_virtproxyd
Oct 13 14:50:33 standalone.localdomain systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully.
Oct 13 14:50:33 standalone.localdomain systemd[1]: Stopped nova_virtproxyd container.
Oct 13 14:50:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-dcff515959244485933a3af9172fbefc10c4438a8f7f3991d4ddc0d64213ed9a-merged.mount: Deactivated successfully.
Oct 13 14:50:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-082f90b100c1e61ff254c4e8202645f7e953ca071f54d740dec70e32afe4da6f-userdata-shm.mount: Deactivated successfully.
Oct 13 14:50:34 standalone.localdomain python3.9[319490]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:50:34 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:50:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2281: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:34 standalone.localdomain ceph-mon[29756]: pgmap v2281: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:34 standalone.localdomain systemd-rc-local-generator[319514]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:50:34 standalone.localdomain systemd-sysv-generator[319519]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:50:34 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:50:34 standalone.localdomain systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully.
Oct 13 14:50:34 standalone.localdomain systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m.
Oct 13 14:50:34 standalone.localdomain systemd[1]: Stopping nova_virtqemud container...
Oct 13 14:50:34 standalone.localdomain systemd[1]: libpod-1500fd74fde13a67761de0823a6a7c9804b695da50d1f8aec6313a83a310f67e.scope: Deactivated successfully.
Oct 13 14:50:34 standalone.localdomain systemd[1]: libpod-1500fd74fde13a67761de0823a6a7c9804b695da50d1f8aec6313a83a310f67e.scope: Consumed 1.904s CPU time.
Oct 13 14:50:34 standalone.localdomain podman[319531]: 2025-10-13 14:50:34.902589817 +0000 UTC m=+0.083456234 container died 1500fd74fde13a67761de0823a6a7c9804b695da50d1f8aec6313a83a310f67e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, build-date=2025-07-21T14:56:59, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9, io.buildah.version=1.33.12, container_name=nova_virtqemud, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=2, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, config_id=tripleo_step3)
Oct 13 14:50:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1500fd74fde13a67761de0823a6a7c9804b695da50d1f8aec6313a83a310f67e-userdata-shm.mount: Deactivated successfully.
Oct 13 14:50:34 standalone.localdomain podman[319531]: 2025-10-13 14:50:34.927190107 +0000 UTC m=+0.108056484 container cleanup 1500fd74fde13a67761de0823a6a7c9804b695da50d1f8aec6313a83a310f67e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, architecture=x86_64, container_name=nova_virtqemud, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=2, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.openshift.expose-services=, build-date=2025-07-21T14:56:59, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0)
Oct 13 14:50:34 standalone.localdomain podman[319531]: nova_virtqemud
Oct 13 14:50:34 standalone.localdomain podman[319546]: 2025-10-13 14:50:34.969119085 +0000 UTC m=+0.062503155 container cleanup 1500fd74fde13a67761de0823a6a7c9804b695da50d1f8aec6313a83a310f67e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, container_name=nova_virtqemud, name=rhosp17/openstack-nova-libvirt, version=17.1.9, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-type=git, release=2, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T14:56:59, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 13 14:50:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fffa5554b0cfe650195b161e4e00ab11f8ed81bad83af3fd33e2eb249df9058a-merged.mount: Deactivated successfully.
Oct 13 14:50:35 standalone.localdomain sudo[319562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:50:35 standalone.localdomain sudo[319562]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:50:35 standalone.localdomain sudo[319562]: pam_unix(sudo:session): session closed for user root
Oct 13 14:50:35 standalone.localdomain sudo[319577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:50:35 standalone.localdomain sudo[319577]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:50:36 standalone.localdomain sudo[319577]: pam_unix(sudo:session): session closed for user root
Oct 13 14:50:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:50:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:50:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:50:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:50:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:50:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:50:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:50:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:50:36 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 1ade4238-e7f7-4524-8eee-ddad560bfbe0 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:50:36 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 1ade4238-e7f7-4524-8eee-ddad560bfbe0 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:50:36 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 1ade4238-e7f7-4524-8eee-ddad560bfbe0 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:50:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:50:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:50:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:50:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:50:36 standalone.localdomain sudo[319624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:50:36 standalone.localdomain sudo[319624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:50:36 standalone.localdomain sudo[319624]: pam_unix(sudo:session): session closed for user root
Oct 13 14:50:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2282: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27700 DF PROTO=TCP SPT=47206 DPT=9100 SEQ=3213714333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9E29760000000001030307) 
Oct 13 14:50:37 standalone.localdomain ceph-mon[29756]: pgmap v2282: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:50:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27701 DF PROTO=TCP SPT=47206 DPT=9100 SEQ=3213714333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9E2D760000000001030307) 
Oct 13 14:50:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2283: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:39 standalone.localdomain ceph-mon[29756]: pgmap v2283: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:39 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:50:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:50:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:50:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27702 DF PROTO=TCP SPT=47206 DPT=9100 SEQ=3213714333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9E35760000000001030307) 
Oct 13 14:50:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2284: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:50:40 standalone.localdomain ceph-mon[29756]: pgmap v2284: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2285: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:42 standalone.localdomain ceph-mon[29756]: pgmap v2285: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:50:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49217 DF PROTO=TCP SPT=44818 DPT=9105 SEQ=657646932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9E43360000000001030307) 
Oct 13 14:50:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2286: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:44 standalone.localdomain ceph-mon[29756]: pgmap v2286: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2287: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:46 standalone.localdomain ceph-mon[29756]: pgmap v2287: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:47 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 14:50:47 standalone.localdomain object-server[319639]: Object update sweep starting on /srv/node/d1 (pid: 21)
Oct 13 14:50:47 standalone.localdomain object-server[319639]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 21)
Oct 13 14:50:47 standalone.localdomain object-server[319639]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 14:50:47 standalone.localdomain object-server[114601]: Object update sweep completed: 0.08s
Oct 13 14:50:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:50:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2288: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50913 DF PROTO=TCP SPT=47856 DPT=9882 SEQ=3529029175 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9E55760000000001030307) 
Oct 13 14:50:49 standalone.localdomain ceph-mon[29756]: pgmap v2288: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49219 DF PROTO=TCP SPT=44818 DPT=9105 SEQ=657646932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9E5AF70000000001030307) 
Oct 13 14:50:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2289: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:50 standalone.localdomain ceph-mon[29756]: pgmap v2289: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2290: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:52 standalone.localdomain ceph-mon[29756]: pgmap v2290: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:50:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39686 DF PROTO=TCP SPT=37528 DPT=9102 SEQ=2875841806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9E688F0000000001030307) 
Oct 13 14:50:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2291: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:54 standalone.localdomain ceph-mon[29756]: pgmap v2291: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56781 DF PROTO=TCP SPT=35798 DPT=9101 SEQ=2396281994 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9E701E0000000001030307) 
Oct 13 14:50:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2292: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:56 standalone.localdomain ceph-mon[29756]: pgmap v2292: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:50:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:50:57 standalone.localdomain podman[319640]: 2025-10-13 14:50:57.813755976 +0000 UTC m=+0.080413699 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:53, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:50:57 standalone.localdomain podman[319641]: 2025-10-13 14:50:57.843503967 +0000 UTC m=+0.107906880 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, build-date=2025-07-21T13:28:44, distribution-scope=public, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245)
Oct 13 14:50:57 standalone.localdomain podman[319640]: 2025-10-13 14:50:57.849355948 +0000 UTC m=+0.116013661 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, tcib_managed=true, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53)
Oct 13 14:50:57 standalone.localdomain podman[319640]: unhealthy
Oct 13 14:50:57 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:50:57 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:50:57 standalone.localdomain podman[319641]: 2025-10-13 14:50:57.885832617 +0000 UTC m=+0.150235530 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ovn_controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-07-21T13:28:44, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1)
Oct 13 14:50:57 standalone.localdomain podman[319641]: unhealthy
Oct 13 14:50:57 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:50:57 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:50:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:50:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2293: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:50:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56783 DF PROTO=TCP SPT=35798 DPT=9101 SEQ=2396281994 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9E7C370000000001030307) 
Oct 13 14:50:59 standalone.localdomain ceph-mon[29756]: pgmap v2293: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2294: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:00 standalone.localdomain ceph-mon[29756]: pgmap v2294: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2295: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:51:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:51:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:51:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:51:02 standalone.localdomain ceph-mon[29756]: pgmap v2295: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:02 standalone.localdomain podman[319679]: 2025-10-13 14:51:02.551845369 +0000 UTC m=+0.067908112 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, container_name=swift_object_server, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, name=rhosp17/openstack-swift-object, version=17.1.9, com.redhat.component=openstack-swift-object-container, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 14:51:02 standalone.localdomain podman[319680]: 2025-10-13 14:51:02.606178881 +0000 UTC m=+0.118152007 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, distribution-scope=public, container_name=swift_proxy, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.component=openstack-swift-proxy-server-container)
Oct 13 14:51:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56784 DF PROTO=TCP SPT=35798 DPT=9101 SEQ=2396281994 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9E8BF60000000001030307) 
Oct 13 14:51:02 standalone.localdomain podman[319688]: 2025-10-13 14:51:02.660399818 +0000 UTC m=+0.166397689 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T16:11:22, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=swift_account_server, vcs-type=git, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, 
com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:51:02 standalone.localdomain podman[319681]: 2025-10-13 14:51:02.718753154 +0000 UTC m=+0.227194381 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, com.redhat.component=openstack-swift-container-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container, distribution-scope=public)
Oct 13 14:51:02 standalone.localdomain podman[319679]: 2025-10-13 14:51:02.802863257 +0000 UTC m=+0.318925980 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, com.redhat.component=openstack-swift-object-container, release=1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, 
summary=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, container_name=swift_object_server, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, distribution-scope=public, maintainer=OpenStack TripleO Team)
Oct 13 14:51:02 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:51:02 standalone.localdomain podman[319680]: 2025-10-13 14:51:02.855800845 +0000 UTC m=+0.367773941 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-type=git, tcib_managed=true, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, managed_by=tripleo_ansible, release=1, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-proxy-server-container, name=rhosp17/openstack-swift-proxy-server, io.buildah.version=1.33.12, build-date=2025-07-21T14:48:37, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:51:02 standalone.localdomain podman[319688]: 2025-10-13 14:51:02.866768834 +0000 UTC m=+0.372766675 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, com.redhat.component=openstack-swift-account-container, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 
swift-account, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64)
Oct 13 14:51:02 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:51:02 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:51:02 standalone.localdomain podman[319681]: 2025-10-13 14:51:02.983824077 +0000 UTC m=+0.492265294 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:51:02 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:51:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:51:03 standalone.localdomain systemd[1]: tmp-crun.vN2IFS.mount: Deactivated successfully.
Oct 13 14:51:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2296: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:04 standalone.localdomain ceph-mon[29756]: pgmap v2296: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2297: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:06 standalone.localdomain ceph-mon[29756]: pgmap v2297: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12828 DF PROTO=TCP SPT=37772 DPT=9100 SEQ=3494909195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9E9EA50000000001030307) 
Oct 13 14:51:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:51:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12829 DF PROTO=TCP SPT=37772 DPT=9100 SEQ=3494909195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9EA2B60000000001030307) 
Oct 13 14:51:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2298: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:09 standalone.localdomain ceph-mon[29756]: pgmap v2298: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2299: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12830 DF PROTO=TCP SPT=37772 DPT=9100 SEQ=3494909195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9EAAB60000000001030307) 
Oct 13 14:51:10 standalone.localdomain ceph-mon[29756]: pgmap v2299: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2300: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:12 standalone.localdomain ceph-mon[29756]: pgmap v2300: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:51:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4136 DF PROTO=TCP SPT=38480 DPT=9105 SEQ=2736505936 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9EB8760000000001030307) 
Oct 13 14:51:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2301: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:14 standalone.localdomain ceph-mon[29756]: pgmap v2301: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2302: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:16 standalone.localdomain ceph-mon[29756]: pgmap v2302: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:51:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2303: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:51:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2261111092' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:51:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:51:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2261111092' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:51:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58019 DF PROTO=TCP SPT=39970 DPT=9882 SEQ=3044292366 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9ECAB60000000001030307) 
Oct 13 14:51:19 standalone.localdomain ceph-mon[29756]: pgmap v2303: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2261111092' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:51:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2261111092' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:51:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4138 DF PROTO=TCP SPT=38480 DPT=9105 SEQ=2736505936 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9ED0370000000001030307) 
Oct 13 14:51:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2304: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:20 standalone.localdomain ceph-mon[29756]: pgmap v2304: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2305: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:22 standalone.localdomain ceph-mon[29756]: pgmap v2305: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:51:23
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr', 'backups', 'manila_metadata', 'volumes', 'vms', 'manila_data', 'images']
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:51:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44289 DF PROTO=TCP SPT=58116 DPT=9102 SEQ=3128991049 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9EDDBE0000000001030307) 
Oct 13 14:51:24 standalone.localdomain ceph-mgr[29999]: client.0 ms_handle_reset on v2:172.18.0.100:6800/1677275897
Oct 13 14:51:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2306: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:24 standalone.localdomain ceph-mon[29756]: pgmap v2306: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43870 DF PROTO=TCP SPT=53656 DPT=9101 SEQ=3672506121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9EE54E0000000001030307) 
Oct 13 14:51:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2307: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:26 standalone.localdomain ceph-mon[29756]: pgmap v2307: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:51:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:51:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:51:28 standalone.localdomain systemd[1]: tmp-crun.WlNPIf.mount: Deactivated successfully.
Oct 13 14:51:28 standalone.localdomain podman[319786]: 2025-10-13 14:51:28.314629709 +0000 UTC m=+0.086727594 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=)
Oct 13 14:51:28 standalone.localdomain systemd[1]: tmp-crun.aDEWPp.mount: Deactivated successfully.
Oct 13 14:51:28 standalone.localdomain podman[319787]: 2025-10-13 14:51:28.355799154 +0000 UTC m=+0.125414762 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-07-21T13:28:44, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, release=1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ovn_controller, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.33.12, batch=17.1_20250721.1, architecture=x86_64)
Oct 13 14:51:28 standalone.localdomain podman[319786]: 2025-10-13 14:51:28.374048908 +0000 UTC m=+0.146146803 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, container_name=ovn_metadata_agent, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.33.12, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53)
Oct 13 14:51:28 standalone.localdomain podman[319786]: unhealthy
Oct 13 14:51:28 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:51:28 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:51:28 standalone.localdomain podman[319787]: 2025-10-13 14:51:28.393896912 +0000 UTC m=+0.163512510 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.9, com.redhat.component=openstack-ovn-controller-container, release=1, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, container_name=ovn_controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., batch=17.1_20250721.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:51:28 standalone.localdomain podman[319787]: unhealthy
Oct 13 14:51:28 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:51:28 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2308: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43872 DF PROTO=TCP SPT=53656 DPT=9101 SEQ=3672506121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9EF1770000000001030307) 
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:51:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:51:29 standalone.localdomain ceph-mon[29756]: pgmap v2308: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2309: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:30 standalone.localdomain ceph-mon[29756]: pgmap v2309: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2310: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:32 standalone.localdomain ceph-mon[29756]: pgmap v2310: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43873 DF PROTO=TCP SPT=53656 DPT=9101 SEQ=3672506121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9F01370000000001030307) 
Oct 13 14:51:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:51:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:51:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:51:33 standalone.localdomain podman[319827]: 2025-10-13 14:51:33.047595565 +0000 UTC m=+0.063628689 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, release=1, architecture=x86_64, com.redhat.component=openstack-swift-object-container, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, container_name=swift_object_server, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T14:56:28, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=)
Oct 13 14:51:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:51:33 standalone.localdomain podman[319832]: 2025-10-13 14:51:33.062395664 +0000 UTC m=+0.066923642 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-swift-account, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T16:11:22, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, release=1, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public)
Oct 13 14:51:33 standalone.localdomain podman[319828]: 2025-10-13 14:51:33.110219443 +0000 UTC m=+0.118029173 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-proxy-server, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, architecture=x86_64, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_proxy, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, release=1, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 14:51:33 standalone.localdomain podman[319873]: 2025-10-13 14:51:33.173917884 +0000 UTC m=+0.110295824 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, distribution-scope=public, build-date=2025-07-21T15:54:32, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, architecture=x86_64, 
container_name=swift_container_server, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=)
Oct 13 14:51:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:51:33 standalone.localdomain podman[319832]: 2025-10-13 14:51:33.226763509 +0000 UTC m=+0.231291467 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, container_name=swift_account_server, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., release=1)
Oct 13 14:51:33 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:51:33 standalone.localdomain podman[319827]: 2025-10-13 14:51:33.269916265 +0000 UTC m=+0.285949409 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-swift-object-container, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12)
Oct 13 14:51:33 standalone.localdomain podman[319828]: 2025-10-13 14:51:33.277863861 +0000 UTC m=+0.285673611 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, io.buildah.version=1.33.12, container_name=swift_proxy, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-proxy-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
swift-proxy-server, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-swift-proxy-server, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, build-date=2025-07-21T14:48:37)
Oct 13 14:51:33 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:51:33 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:51:33 standalone.localdomain podman[319873]: 2025-10-13 14:51:33.332832941 +0000 UTC m=+0.269210891 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-swift-container-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, release=1, distribution-scope=public, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server)
Oct 13 14:51:33 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:51:34 standalone.localdomain systemd[1]: tmp-crun.QoH7fQ.mount: Deactivated successfully.
Oct 13 14:51:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2311: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:34 standalone.localdomain ceph-mon[29756]: pgmap v2311: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:36 standalone.localdomain account-server[114571]: 172.20.0.100 - - [13/Oct/2025:14:51:36 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txaab4067e3d3f4ac1924ef-0068ed11f8" "proxy-server 2" 0.0004 "-" 23 -
Oct 13 14:51:36 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: txaab4067e3d3f4ac1924ef-0068ed11f8)
Oct 13 14:51:36 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txaab4067e3d3f4ac1924ef-0068ed11f8)
Oct 13 14:51:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2312: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:36 standalone.localdomain sudo[319933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:51:36 standalone.localdomain sudo[319933]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:51:36 standalone.localdomain sudo[319933]: pam_unix(sudo:session): session closed for user root
Oct 13 14:51:36 standalone.localdomain sudo[319948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:51:36 standalone.localdomain sudo[319948]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:51:36 standalone.localdomain ceph-mon[29756]: pgmap v2312: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:37 standalone.localdomain sudo[319948]: pam_unix(sudo:session): session closed for user root
Oct 13 14:51:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:51:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:51:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:51:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:51:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:51:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:51:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:51:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:51:37 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 1af279c3-f834-4099-8e67-baf34be9de93 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:51:37 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 1af279c3-f834-4099-8e67-baf34be9de93 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:51:37 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 1af279c3-f834-4099-8e67-baf34be9de93 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:51:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19295 DF PROTO=TCP SPT=34636 DPT=9100 SEQ=1622419761 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9F13D40000000001030307) 
Oct 13 14:51:37 standalone.localdomain sudo[319994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:51:37 standalone.localdomain sudo[319994]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:51:37 standalone.localdomain sudo[319994]: pam_unix(sudo:session): session closed for user root
Oct 13 14:51:37 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:51:37 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:51:37 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:51:37 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:51:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:51:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19296 DF PROTO=TCP SPT=34636 DPT=9100 SEQ=1622419761 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9F17F60000000001030307) 
Oct 13 14:51:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2313: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:39 standalone.localdomain ceph-mon[29756]: pgmap v2313: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:39 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:51:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:51:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:51:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2314: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19297 DF PROTO=TCP SPT=34636 DPT=9100 SEQ=1622419761 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9F1FF60000000001030307) 
Oct 13 14:51:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:51:40 standalone.localdomain ceph-mon[29756]: pgmap v2314: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2315: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:42 standalone.localdomain ceph-mon[29756]: pgmap v2315: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:51:44 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37865 DF PROTO=TCP SPT=59402 DPT=9105 SEQ=444824121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9F2DB60000000001030307) 
Oct 13 14:51:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2316: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:44 standalone.localdomain ceph-mon[29756]: pgmap v2316: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2317: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: pgmap v2317: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.571112) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367106571155, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2113, "num_deletes": 251, "total_data_size": 2037748, "memory_usage": 2080752, "flush_reason": "Manual Compaction"}
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367106580548, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 1968403, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52790, "largest_seqno": 54902, "table_properties": {"data_size": 1960076, "index_size": 5094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17135, "raw_average_key_size": 19, "raw_value_size": 1943152, "raw_average_value_size": 2259, "num_data_blocks": 230, "num_entries": 860, "num_filter_entries": 860, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760366910, "oldest_key_time": 1760366910, "file_creation_time": 1760367106, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 9490 microseconds, and 4777 cpu microseconds.
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.580593) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 1968403 bytes OK
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.580620) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.582391) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.582411) EVENT_LOG_v1 {"time_micros": 1760367106582405, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.582431) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 2028730, prev total WAL file size 2028730, number of live WAL files 2.
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.583101) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(1922KB)], [116(5939KB)]
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367106583155, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 8049973, "oldest_snapshot_seqno": -1}
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 5699 keys, 7028052 bytes, temperature: kUnknown
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367106611268, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 7028052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6990704, "index_size": 22019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 143002, "raw_average_key_size": 25, "raw_value_size": 6888085, "raw_average_value_size": 1208, "num_data_blocks": 911, "num_entries": 5699, "num_filter_entries": 5699, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760367106, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.611461) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 7028052 bytes
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.612702) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 285.9 rd, 249.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 5.8 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 6219, records dropped: 520 output_compression: NoCompression
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.612717) EVENT_LOG_v1 {"time_micros": 1760367106612709, "job": 70, "event": "compaction_finished", "compaction_time_micros": 28158, "compaction_time_cpu_micros": 15064, "output_level": 6, "num_output_files": 1, "total_output_size": 7028052, "num_input_records": 6219, "num_output_records": 5699, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367106612947, "job": 70, "event": "table_file_deletion", "file_number": 118}
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367106613435, "job": 70, "event": "table_file_deletion", "file_number": 116}
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.582983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.613538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.613544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.613547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.613550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:51:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:46.613552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:51:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:51:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2318: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41168 DF PROTO=TCP SPT=43296 DPT=9882 SEQ=3967534601 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9F3FF60000000001030307) 
Oct 13 14:51:49 standalone.localdomain ceph-mon[29756]: pgmap v2318: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37867 DF PROTO=TCP SPT=59402 DPT=9105 SEQ=444824121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9F45760000000001030307) 
Oct 13 14:51:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2319: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:50 standalone.localdomain ceph-mon[29756]: pgmap v2319: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2320: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:52 standalone.localdomain ceph-mon[29756]: pgmap v2320: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:51:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47831 DF PROTO=TCP SPT=42466 DPT=9102 SEQ=4040148214 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9F52EF0000000001030307) 
Oct 13 14:51:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2321: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:54 standalone.localdomain ceph-mon[29756]: pgmap v2321: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41462 DF PROTO=TCP SPT=49860 DPT=9101 SEQ=3621801477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9F5A7E0000000001030307) 
Oct 13 14:51:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2322: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:56 standalone.localdomain ceph-mon[29756]: pgmap v2322: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.228741) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367118228770, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 343, "num_deletes": 250, "total_data_size": 104473, "memory_usage": 111080, "flush_reason": "Manual Compaction"}
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367118238783, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 102438, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54903, "largest_seqno": 55245, "table_properties": {"data_size": 100327, "index_size": 288, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5861, "raw_average_key_size": 20, "raw_value_size": 96142, "raw_average_value_size": 331, "num_data_blocks": 13, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760367108, "oldest_key_time": 1760367108, "file_creation_time": 1760367118, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 10129 microseconds, and 931 cpu microseconds.
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.238857) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 102438 bytes OK
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.238889) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.243092) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.243124) EVENT_LOG_v1 {"time_micros": 1760367118243115, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.243148) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 102134, prev total WAL file size 102623, number of live WAL files 2.
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.246711) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303032' seq:72057594037927935, type:22 .. '6D6772737461740032323533' seq:0, type:0; will stop at (end)
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(100KB)], [119(6863KB)]
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367118246759, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 7130490, "oldest_snapshot_seqno": -1}
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 5477 keys, 5079575 bytes, temperature: kUnknown
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367118277932, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 5079575, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5048483, "index_size": 16340, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 138566, "raw_average_key_size": 25, "raw_value_size": 4954383, "raw_average_value_size": 904, "num_data_blocks": 673, "num_entries": 5477, "num_filter_entries": 5477, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760367118, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.278167) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 5079575 bytes
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.296396) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.2 rd, 162.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 6.7 +0.0 blob) out(4.8 +0.0 blob), read-write-amplify(119.2) write-amplify(49.6) OK, records in: 5989, records dropped: 512 output_compression: NoCompression
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.296459) EVENT_LOG_v1 {"time_micros": 1760367118296433, "job": 72, "event": "compaction_finished", "compaction_time_micros": 31251, "compaction_time_cpu_micros": 13537, "output_level": 6, "num_output_files": 1, "total_output_size": 5079575, "num_input_records": 5989, "num_output_records": 5477, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367118296832, "job": 72, "event": "table_file_deletion", "file_number": 121}
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367118297463, "job": 72, "event": "table_file_deletion", "file_number": 119}
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.246641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.297552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.297559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.297561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.297563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:51:58 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:51:58.297564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:51:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:51:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:51:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2323: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41464 DF PROTO=TCP SPT=49860 DPT=9101 SEQ=3621801477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9F66760000000001030307) 
Oct 13 14:51:58 standalone.localdomain systemd[1]: tmp-crun.5iZW5d.mount: Deactivated successfully.
Oct 13 14:51:58 standalone.localdomain podman[320010]: 2025-10-13 14:51:58.550073136 +0000 UTC m=+0.060342469 container health_status c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, vcs-type=git, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, distribution-scope=public, batch=17.1_20250721.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team)
Oct 13 14:51:58 standalone.localdomain podman[320010]: 2025-10-13 14:51:58.593651504 +0000 UTC m=+0.103920827 container exec_died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, build-date=2025-07-21T13:28:44, release=1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, architecture=x86_64, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public)
Oct 13 14:51:58 standalone.localdomain podman[320009]: 2025-10-13 14:51:58.596538273 +0000 UTC m=+0.106685252 container health_status bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T16:28:53, version=17.1.9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, container_name=ovn_metadata_agent)
Oct 13 14:51:58 standalone.localdomain podman[320009]: 2025-10-13 14:51:58.608437042 +0000 UTC m=+0.118584031 container exec_died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, release=1, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.33.12, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:51:58 standalone.localdomain podman[320009]: unhealthy
Oct 13 14:51:58 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:51:58 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed with result 'exit-code'.
Oct 13 14:51:58 standalone.localdomain podman[320010]: unhealthy
Oct 13 14:51:58 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 14:51:58 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed with result 'exit-code'.
Oct 13 14:51:58 standalone.localdomain systemd[1]: tripleo_nova_virtqemud.service: State 'stop-sigterm' timed out. Killing.
Oct 13 14:51:58 standalone.localdomain systemd[1]: tripleo_nova_virtqemud.service: Killing process 93280 (conmon) with signal SIGKILL.
Oct 13 14:51:58 standalone.localdomain systemd[1]: tripleo_nova_virtqemud.service: Main process exited, code=killed, status=9/KILL
Oct 13 14:51:58 standalone.localdomain systemd[1]: libpod-conmon-1500fd74fde13a67761de0823a6a7c9804b695da50d1f8aec6313a83a310f67e.scope: Deactivated successfully.
Oct 13 14:51:59 standalone.localdomain podman[320062]: error opening file `/run/crun/1500fd74fde13a67761de0823a6a7c9804b695da50d1f8aec6313a83a310f67e/status`: No such file or directory
Oct 13 14:51:59 standalone.localdomain podman[320049]: 2025-10-13 14:51:59.028787569 +0000 UTC m=+0.045238121 container cleanup 1500fd74fde13a67761de0823a6a7c9804b695da50d1f8aec6313a83a310f67e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.buildah.version=1.33.12, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-type=git, version=17.1.9, container_name=nova_virtqemud, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, managed_by=tripleo_ansible, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, distribution-scope=public, build-date=2025-07-21T14:56:59, architecture=x86_64)
Oct 13 14:51:59 standalone.localdomain podman[320049]: nova_virtqemud
Oct 13 14:51:59 standalone.localdomain systemd[1]: tripleo_nova_virtqemud.service: Failed with result 'timeout'.
Oct 13 14:51:59 standalone.localdomain systemd[1]: Stopped nova_virtqemud container.
Oct 13 14:51:59 standalone.localdomain ceph-mon[29756]: pgmap v2323: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:51:59 standalone.localdomain python3.9[320153]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:51:59 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:51:59 standalone.localdomain systemd-rc-local-generator[320180]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:51:59 standalone.localdomain systemd-sysv-generator[320183]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:51:59 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:52:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2324: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:00 standalone.localdomain ceph-mon[29756]: pgmap v2324: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:00 standalone.localdomain python3.9[320281]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:52:00 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:52:01 standalone.localdomain systemd-sysv-generator[320308]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:52:01 standalone.localdomain systemd-rc-local-generator[320305]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:52:01 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:52:01 standalone.localdomain systemd[1]: Stopping nova_virtsecretd container...
Oct 13 14:52:01 standalone.localdomain systemd[1]: libpod-75990014f13da7493fd18a3c440ac10b59a0876044c9a47df8372312afd5f1c6.scope: Deactivated successfully.
Oct 13 14:52:01 standalone.localdomain podman[320322]: 2025-10-13 14:52:01.439424162 +0000 UTC m=+0.061495464 container died 75990014f13da7493fd18a3c440ac10b59a0876044c9a47df8372312afd5f1c6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, name=rhosp17/openstack-nova-libvirt, release=2, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtsecretd, managed_by=tripleo_ansible, version=17.1.9, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, tcib_managed=true)
Oct 13 14:52:01 standalone.localdomain systemd[1]: tmp-crun.Q835lp.mount: Deactivated successfully.
Oct 13 14:52:01 standalone.localdomain podman[320322]: 2025-10-13 14:52:01.48264488 +0000 UTC m=+0.104716172 container cleanup 75990014f13da7493fd18a3c440ac10b59a0876044c9a47df8372312afd5f1c6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtsecretd, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:59, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, release=2, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, version=17.1.9, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true)
Oct 13 14:52:01 standalone.localdomain podman[320322]: nova_virtsecretd
Oct 13 14:52:01 standalone.localdomain podman[320337]: 2025-10-13 14:52:01.514647359 +0000 UTC m=+0.063481365 container cleanup 75990014f13da7493fd18a3c440ac10b59a0876044c9a47df8372312afd5f1c6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-07-21T14:56:59, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, container_name=nova_virtsecretd, version=17.1.9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, batch=17.1_20250721.1, io.openshift.expose-services=, release=2, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 13 14:52:01 standalone.localdomain systemd[1]: libpod-conmon-75990014f13da7493fd18a3c440ac10b59a0876044c9a47df8372312afd5f1c6.scope: Deactivated successfully.
Oct 13 14:52:01 standalone.localdomain podman[320366]: error opening file `/run/crun/75990014f13da7493fd18a3c440ac10b59a0876044c9a47df8372312afd5f1c6/status`: No such file or directory
Oct 13 14:52:01 standalone.localdomain podman[320354]: 2025-10-13 14:52:01.585507472 +0000 UTC m=+0.042435974 container cleanup 75990014f13da7493fd18a3c440ac10b59a0876044c9a47df8372312afd5f1c6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, container_name=nova_virtsecretd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T14:56:59, io.buildah.version=1.33.12, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, release=2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, vcs-type=git, config_id=tripleo_step3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true)
Oct 13 14:52:01 standalone.localdomain podman[320354]: nova_virtsecretd
Oct 13 14:52:01 standalone.localdomain systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully.
Oct 13 14:52:01 standalone.localdomain systemd[1]: Stopped nova_virtsecretd container.
Oct 13 14:52:02 standalone.localdomain python3.9[320457]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:52:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-cd7b2ee1594520f0295fcac9eea884c48f523498b8393234dd7cc19be148b00a-merged.mount: Deactivated successfully.
Oct 13 14:52:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-75990014f13da7493fd18a3c440ac10b59a0876044c9a47df8372312afd5f1c6-userdata-shm.mount: Deactivated successfully.
Oct 13 14:52:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2325: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:02 standalone.localdomain ceph-mon[29756]: pgmap v2325: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41465 DF PROTO=TCP SPT=49860 DPT=9101 SEQ=3621801477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9F76370000000001030307) 
Oct 13 14:52:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:52:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:52:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:52:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:52:03 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:52:03 standalone.localdomain podman[320460]: 2025-10-13 14:52:03.365011636 +0000 UTC m=+0.068064518 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.buildah.version=1.33.12, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:11:22, summary=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:52:03 standalone.localdomain systemd-rc-local-generator[320527]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:52:03 standalone.localdomain systemd-sysv-generator[320531]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:52:03 standalone.localdomain podman[320468]: 2025-10-13 14:52:03.425544449 +0000 UTC m=+0.117338282 container health_status 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, distribution-scope=public, release=1, vcs-type=git, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, config_id=tripleo_step4, container_name=swift_proxy, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, maintainer=OpenStack TripleO Team, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-proxy-server-container, io.openshift.expose-services=, name=rhosp17/openstack-swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vendor=Red Hat, Inc.)
Oct 13 14:52:03 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:52:03 standalone.localdomain podman[320466]: 2025-10-13 14:52:03.478937671 +0000 UTC m=+0.172222520 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, release=1, container_name=swift_object_server, io.openshift.tags=rhosp osp openstack 
osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, vcs-type=git, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:52:03 standalone.localdomain podman[320460]: 2025-10-13 14:52:03.537856794 +0000 UTC m=+0.240909696 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, name=rhosp17/openstack-swift-account, container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, 
managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, vcs-type=git, version=17.1.9, config_id=tripleo_step4)
Oct 13 14:52:03 standalone.localdomain podman[320468]: 2025-10-13 14:52:03.600796342 +0000 UTC m=+0.292590235 container exec_died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, com.redhat.component=openstack-swift-proxy-server-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, build-date=2025-07-21T14:48:37, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, io.openshift.expose-services=, container_name=swift_proxy, io.buildah.version=1.33.12, release=1, distribution-scope=public, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, 
vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1)
Oct 13 14:52:03 standalone.localdomain podman[320466]: 2025-10-13 14:52:03.678741584 +0000 UTC m=+0.372026443 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, 
batch=17.1_20250721.1, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, name=rhosp17/openstack-swift-object, release=1, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, io.openshift.expose-services=)
Oct 13 14:52:03 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:52:03 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Deactivated successfully.
Oct 13 14:52:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:52:03 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:52:03 standalone.localdomain systemd[1]: Stopping nova_virtstoraged container...
Oct 13 14:52:03 standalone.localdomain systemd[1]: tmp-crun.zaFEhc.mount: Deactivated successfully.
Oct 13 14:52:03 standalone.localdomain systemd[1]: libpod-dceaaeec95442e25c469c925f0684b91675a6d5970f6b543e393f9f795d2496d.scope: Deactivated successfully.
Oct 13 14:52:03 standalone.localdomain podman[320582]: 2025-10-13 14:52:03.780788221 +0000 UTC m=+0.061564346 container died dceaaeec95442e25c469c925f0684b91675a6d5970f6b543e393f9f795d2496d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, version=17.1.9, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, release=2, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, container_name=nova_virtstoraged, io.openshift.expose-services=, build-date=2025-07-21T14:56:59, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc.)
Oct 13 14:52:03 standalone.localdomain podman[320582]: 2025-10-13 14:52:03.821255433 +0000 UTC m=+0.102031568 container cleanup dceaaeec95442e25c469c925f0684b91675a6d5970f6b543e393f9f795d2496d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=2, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T14:56:59, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Oct 13 14:52:03 standalone.localdomain podman[320582]: nova_virtstoraged
Oct 13 14:52:03 standalone.localdomain podman[320581]: 2025-10-13 14:52:03.8334143 +0000 UTC m=+0.116212197 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, tcib_managed=true, batch=17.1_20250721.1, container_name=swift_container_server, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-container-container, release=1, vendor=Red Hat, Inc.)
Oct 13 14:52:03 standalone.localdomain podman[320607]: 2025-10-13 14:52:03.904724756 +0000 UTC m=+0.119260420 container cleanup dceaaeec95442e25c469c925f0684b91675a6d5970f6b543e393f9f795d2496d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtstoraged, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, release=2, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, tcib_managed=true, build-date=2025-07-21T14:56:59, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, config_id=tripleo_step3, vendor=Red Hat, Inc.)
Oct 13 14:52:03 standalone.localdomain systemd[1]: libpod-conmon-dceaaeec95442e25c469c925f0684b91675a6d5970f6b543e393f9f795d2496d.scope: Deactivated successfully.
Oct 13 14:52:03 standalone.localdomain podman[320650]: error opening file `/run/crun/dceaaeec95442e25c469c925f0684b91675a6d5970f6b543e393f9f795d2496d/status`: No such file or directory
Oct 13 14:52:03 standalone.localdomain podman[320636]: 2025-10-13 14:52:03.989767208 +0000 UTC m=+0.056827400 container cleanup dceaaeec95442e25c469c925f0684b91675a6d5970f6b543e393f9f795d2496d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, build-date=2025-07-21T14:56:59, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, name=rhosp17/openstack-nova-libvirt, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, release=2, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtstoraged, config_id=tripleo_step3, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ce329a35cfc30978bc40d323681fc5e'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team)
Oct 13 14:52:03 standalone.localdomain podman[320636]: nova_virtstoraged
Oct 13 14:52:03 standalone.localdomain systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully.
Oct 13 14:52:03 standalone.localdomain systemd[1]: Stopped nova_virtstoraged container.
Oct 13 14:52:04 standalone.localdomain podman[320581]: 2025-10-13 14:52:04.020268881 +0000 UTC m=+0.303066748 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-container-container, release=1, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, container_name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:52:04 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:52:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2326: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:04 standalone.localdomain ceph-mon[29756]: pgmap v2326: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8b72b428a811fff966752483db02baf6eee3c29de2fa12a8e31c22c4982857ec-merged.mount: Deactivated successfully.
Oct 13 14:52:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dceaaeec95442e25c469c925f0684b91675a6d5970f6b543e393f9f795d2496d-userdata-shm.mount: Deactivated successfully.
Oct 13 14:52:04 standalone.localdomain python3.9[320743]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:52:04 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:52:04 standalone.localdomain systemd-sysv-generator[320771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:52:04 standalone.localdomain systemd-rc-local-generator[320768]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:52:04 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:52:05 standalone.localdomain python3.9[320871]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_cluster_north_db_server.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:52:05 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:52:05 standalone.localdomain systemd-sysv-generator[320900]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:52:05 standalone.localdomain systemd-rc-local-generator[320897]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:52:05 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:52:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2327: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:06 standalone.localdomain ceph-mon[29756]: pgmap v2327: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:06 standalone.localdomain python3.9[320998]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_cluster_northd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:52:06 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:52:06 standalone.localdomain systemd-rc-local-generator[321027]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:52:06 standalone.localdomain systemd-sysv-generator[321030]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:52:06 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:52:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2644 DF PROTO=TCP SPT=60206 DPT=9100 SEQ=107626619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9F89050000000001030307) 
Oct 13 14:52:07 standalone.localdomain python3.9[321126]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_cluster_south_db_server.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:52:07 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:52:08 standalone.localdomain systemd-sysv-generator[321156]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:52:08 standalone.localdomain systemd-rc-local-generator[321152]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:52:08 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:52:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:52:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2645 DF PROTO=TCP SPT=60206 DPT=9100 SEQ=107626619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9F8CF70000000001030307) 
Oct 13 14:52:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2328: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:09 standalone.localdomain python3.9[321254]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:52:09 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:52:09 standalone.localdomain systemd-rc-local-generator[321279]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:52:09 standalone.localdomain systemd-sysv-generator[321284]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:52:09 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:52:09 standalone.localdomain ceph-mon[29756]: pgmap v2328: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:09 standalone.localdomain systemd[1]: Stopping ovn_controller container...
Oct 13 14:52:09 standalone.localdomain systemd[1]: libpod-c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.scope: Deactivated successfully.
Oct 13 14:52:09 standalone.localdomain systemd[1]: libpod-c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.scope: Consumed 2.187s CPU time.
Oct 13 14:52:09 standalone.localdomain podman[321295]: 2025-10-13 14:52:09.462867364 +0000 UTC m=+0.082735111 container died c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, container_name=ovn_controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1, tcib_managed=true, batch=17.1_20250721.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, architecture=x86_64, build-date=2025-07-21T13:28:44, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Oct 13 14:52:09 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.timer: Deactivated successfully.
Oct 13 14:52:09 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.
Oct 13 14:52:09 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed to open /run/systemd/transient/c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: No such file or directory
Oct 13 14:52:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80-userdata-shm.mount: Deactivated successfully.
Oct 13 14:52:09 standalone.localdomain podman[321295]: 2025-10-13 14:52:09.515822582 +0000 UTC m=+0.135690279 container cleanup c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, release=1, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=ovn_controller, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, vendor=Red Hat, Inc., config_id=tripleo_step4)
Oct 13 14:52:09 standalone.localdomain podman[321295]: ovn_controller
Oct 13 14:52:09 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.timer: Failed to open /run/systemd/transient/c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.timer: No such file or directory
Oct 13 14:52:09 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed to open /run/systemd/transient/c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: No such file or directory
Oct 13 14:52:09 standalone.localdomain podman[321308]: 2025-10-13 14:52:09.532519889 +0000 UTC m=+0.060897296 container cleanup c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-ovn-controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T13:28:44)
Oct 13 14:52:09 standalone.localdomain systemd[1]: libpod-conmon-c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.scope: Deactivated successfully.
Oct 13 14:52:09 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.timer: Failed to open /run/systemd/transient/c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.timer: No such file or directory
Oct 13 14:52:09 standalone.localdomain systemd[1]: c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: Failed to open /run/systemd/transient/c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80.service: No such file or directory
Oct 13 14:52:09 standalone.localdomain podman[321323]: 2025-10-13 14:52:09.62142325 +0000 UTC m=+0.056651894 container cleanup c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20250721.1, build-date=2025-07-21T13:28:44, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, container_name=ovn_controller, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:52:09 standalone.localdomain podman[321323]: ovn_controller
Oct 13 14:52:09 standalone.localdomain systemd[1]: tripleo_ovn_controller.service: Deactivated successfully.
Oct 13 14:52:09 standalone.localdomain systemd[1]: Stopped ovn_controller container.
Oct 13 14:52:09 standalone.localdomain systemd[1]: tripleo_ovn_controller.service: Consumed 480ms CPU time, read 40.0K from disk, written 15.8M to disk.
Oct 13 14:52:10 standalone.localdomain python3.9[321425]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:52:10 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:52:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2646 DF PROTO=TCP SPT=60206 DPT=9100 SEQ=107626619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9F94F60000000001030307) 
Oct 13 14:52:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2329: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:10 standalone.localdomain systemd-sysv-generator[321453]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:52:10 standalone.localdomain systemd-rc-local-generator[321449]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:52:10 standalone.localdomain ceph-mon[29756]: pgmap v2329: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:10 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:52:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-df2c307638fd1cbbca6e0bda1d22f05adab63cc3db84a8ec2dbb12be5e361693-merged.mount: Deactivated successfully.
Oct 13 14:52:10 standalone.localdomain systemd[1]: Stopping ovn_metadata_agent container...
Oct 13 14:52:10 standalone.localdomain systemd[1]: tmp-crun.LPcRqg.mount: Deactivated successfully.
Oct 13 14:52:11 standalone.localdomain systemd[1]: libpod-bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.scope: Deactivated successfully.
Oct 13 14:52:11 standalone.localdomain systemd[1]: libpod-bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.scope: Consumed 8.754s CPU time.
Oct 13 14:52:11 standalone.localdomain podman[321465]: 2025-10-13 14:52:11.481120566 +0000 UTC m=+0.688779315 container died bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 14:52:11 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.timer: Deactivated successfully.
Oct 13 14:52:11 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.
Oct 13 14:52:11 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed to open /run/systemd/transient/bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: No such file or directory
Oct 13 14:52:11 standalone.localdomain podman[321465]: 2025-10-13 14:52:11.54557729 +0000 UTC m=+0.753236019 container cleanup bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, build-date=2025-07-21T16:28:53, config_id=tripleo_step4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.)
Oct 13 14:52:11 standalone.localdomain podman[321465]: ovn_metadata_agent
Oct 13 14:52:11 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.timer: Failed to open /run/systemd/transient/bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.timer: No such file or directory
Oct 13 14:52:11 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed to open /run/systemd/transient/bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: No such file or directory
Oct 13 14:52:11 standalone.localdomain podman[321479]: 2025-10-13 14:52:11.582806412 +0000 UTC m=+0.097206749 container cleanup bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, vcs-type=git, architecture=x86_64, container_name=ovn_metadata_agent, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 14:52:11 standalone.localdomain systemd[1]: libpod-conmon-bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.scope: Deactivated successfully.
Oct 13 14:52:11 standalone.localdomain podman[321509]: error opening file `/run/crun/bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400/status`: No such file or directory
Oct 13 14:52:11 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.timer: Failed to open /run/systemd/transient/bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.timer: No such file or directory
Oct 13 14:52:11 standalone.localdomain systemd[1]: bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: Failed to open /run/systemd/transient/bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400.service: No such file or directory
Oct 13 14:52:11 standalone.localdomain podman[321496]: 2025-10-13 14:52:11.687563033 +0000 UTC m=+0.072043380 container cleanup bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-07-21T16:28:53, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=)
Oct 13 14:52:11 standalone.localdomain podman[321496]: ovn_metadata_agent
Oct 13 14:52:11 standalone.localdomain systemd[1]: tripleo_ovn_metadata_agent.service: Deactivated successfully.
Oct 13 14:52:11 standalone.localdomain systemd[1]: Stopped ovn_metadata_agent container.
Oct 13 14:52:11 standalone.localdomain systemd[1]: tmp-crun.m8kp5C.mount: Deactivated successfully.
Oct 13 14:52:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-82bb009abdf7407c55d58243c3d7fa70c1d552bab8d3e6d4259a3c852e69c497-merged.mount: Deactivated successfully.
Oct 13 14:52:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400-userdata-shm.mount: Deactivated successfully.
Oct 13 14:52:12 standalone.localdomain python3.9[321600]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_placement_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:52:12 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:52:12 standalone.localdomain systemd-rc-local-generator[321624]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:52:12 standalone.localdomain systemd-sysv-generator[321629]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:52:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2330: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:12 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:52:12 standalone.localdomain ceph-mon[29756]: pgmap v2330: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:52:13 standalone.localdomain python3.9[321727]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_swift_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:52:13 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:52:13 standalone.localdomain systemd-rc-local-generator[321751]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:52:13 standalone.localdomain systemd-sysv-generator[321755]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:52:13 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:52:13 standalone.localdomain systemd[1]: Stopping swift_proxy container...
Oct 13 14:52:13 standalone.localdomain systemd[1]: tmp-crun.evAujJ.mount: Deactivated successfully.
Oct 13 14:52:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55348 DF PROTO=TCP SPT=49804 DPT=9105 SEQ=3796133314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9FA2B60000000001030307) 
Oct 13 14:52:14 standalone.localdomain proxy-server[116081]: SIGTERM received (2)
Oct 13 14:52:14 standalone.localdomain proxy-server[116081]: Exited (2)
Oct 13 14:52:14 standalone.localdomain systemd[1]: libpod-84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.scope: Deactivated successfully.
Oct 13 14:52:14 standalone.localdomain systemd[1]: libpod-84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.scope: Consumed 47.331s CPU time.
Oct 13 14:52:14 standalone.localdomain podman[321767]: 2025-10-13 14:52:14.440812088 +0000 UTC m=+0.646709573 container died 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-swift-proxy-server-container, container_name=swift_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-swift-proxy-server, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, io.buildah.version=1.33.12)
Oct 13 14:52:14 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.timer: Deactivated successfully.
Oct 13 14:52:14 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.
Oct 13 14:52:14 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Failed to open /run/systemd/transient/84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: No such file or directory
Oct 13 14:52:14 standalone.localdomain systemd[1]: tmp-crun.GvBGdD.mount: Deactivated successfully.
Oct 13 14:52:14 standalone.localdomain podman[321767]: 2025-10-13 14:52:14.493120207 +0000 UTC m=+0.699017682 container cleanup 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-swift-proxy-server-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, release=1, version=17.1.9, build-date=2025-07-21T14:48:37, container_name=swift_proxy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-proxy-server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1)
Oct 13 14:52:14 standalone.localdomain podman[321767]: swift_proxy
Oct 13 14:52:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2331: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:14 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.timer: Failed to open /run/systemd/transient/84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.timer: No such file or directory
Oct 13 14:52:14 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Failed to open /run/systemd/transient/84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: No such file or directory
Oct 13 14:52:14 standalone.localdomain podman[321781]: 2025-10-13 14:52:14.526082017 +0000 UTC m=+0.083628469 container cleanup 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-proxy-server, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-proxy-server-container, vcs-type=git, container_name=swift_proxy, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, version=17.1.9, build-date=2025-07-21T14:48:37, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, distribution-scope=public, name=rhosp17/openstack-swift-proxy-server)
Oct 13 14:52:14 standalone.localdomain systemd[1]: libpod-conmon-84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.scope: Deactivated successfully.
Oct 13 14:52:14 standalone.localdomain ceph-mon[29756]: pgmap v2331: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:14 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.timer: Failed to open /run/systemd/transient/84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.timer: No such file or directory
Oct 13 14:52:14 standalone.localdomain systemd[1]: 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: Failed to open /run/systemd/transient/84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263.service: No such file or directory
Oct 13 14:52:14 standalone.localdomain podman[321797]: 2025-10-13 14:52:14.606156624 +0000 UTC m=+0.052767384 container cleanup 84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1, name=swift_proxy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-proxy-server:17.1', 'net': 'host', 'restart': 'always', 'start_order': 2, 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-proxy-server/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-proxy-server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-proxy-server, container_name=swift_proxy, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-proxy-server, batch=17.1_20250721.1, build-date=2025-07-21T14:48:37, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-swift-proxy-server-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-proxy-server, name=rhosp17/openstack-swift-proxy-server, io.openshift.expose-services=, version=17.1.9, vcs-ref=2b1b2ddf50ac77b76e4bf1d1c4aad35d9d39bb77, maintainer=OpenStack TripleO Team)
Oct 13 14:52:14 standalone.localdomain podman[321797]: swift_proxy
Oct 13 14:52:14 standalone.localdomain systemd[1]: tripleo_swift_proxy.service: Deactivated successfully.
Oct 13 14:52:14 standalone.localdomain systemd[1]: Stopped swift_proxy container.
Oct 13 14:52:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-78170edbfbc1c4123acff74d0c223c6ed428518291f85e84a08885022922a1b7-merged.mount: Deactivated successfully.
Oct 13 14:52:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-84d55bd4060b5250c3f3d78ca525a1b1e0f288aae8b9a47f22af33c3dfe75263-userdata-shm.mount: Deactivated successfully.
Oct 13 14:52:16 standalone.localdomain python3.9[321899]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_barbican_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2332: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:16 standalone.localdomain ceph-mon[29756]: pgmap v2332: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:16 standalone.localdomain python3.9[321989]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_barbican_keystone_listener.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:17 standalone.localdomain python3.9[322079]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_barbican_worker.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:17 standalone.localdomain python3.9[322169]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_cinder_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:52:18 standalone.localdomain python3.9[322259]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_cinder_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2333: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:52:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2592441897' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:52:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:52:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2592441897' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:52:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27517 DF PROTO=TCP SPT=58442 DPT=9882 SEQ=667965931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9FB5370000000001030307) 
Oct 13 14:52:18 standalone.localdomain python3.9[322349]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_cinder_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:19 standalone.localdomain ceph-mon[29756]: pgmap v2333: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2592441897' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:52:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2592441897' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:52:19 standalone.localdomain python3.9[322439]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_clustercheck.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55350 DF PROTO=TCP SPT=49804 DPT=9105 SEQ=3796133314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9FBA760000000001030307) 
Oct 13 14:52:20 standalone.localdomain python3.9[322529]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_glance_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2334: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:20 standalone.localdomain ceph-mon[29756]: pgmap v2334: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:20 standalone.localdomain python3.9[322619]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_glance_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:21 standalone.localdomain python3.9[322709]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_glance_api_internal.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:22 standalone.localdomain python3.9[322799]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_heat_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2335: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:22 standalone.localdomain ceph-mon[29756]: pgmap v2335: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:22 standalone.localdomain python3.9[322889]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_heat_api_cfn.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:23 standalone.localdomain python3.9[322979]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_heat_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:52:23
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'vms', 'manila_data', '.mgr', 'backups', 'volumes', 'images']
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:52:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:52:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28082 DF PROTO=TCP SPT=58836 DPT=9102 SEQ=4250874649 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9FC81E0000000001030307) 
Oct 13 14:52:23 standalone.localdomain python3.9[323069]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_heat_engine.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:24 standalone.localdomain python3.9[323159]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_horizon.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2336: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:24 standalone.localdomain ceph-mon[29756]: pgmap v2336: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:24 standalone.localdomain python3.9[323249]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:25 standalone.localdomain python3.9[323339]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_keystone.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21568 DF PROTO=TCP SPT=44318 DPT=9101 SEQ=2752555755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9FCFAE0000000001030307) 
Oct 13 14:52:25 standalone.localdomain python3.9[323429]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_keystone_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:26 standalone.localdomain python3.9[323519]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2337: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:26 standalone.localdomain ceph-mon[29756]: pgmap v2337: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:26 standalone.localdomain python3.9[323609]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_manila_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:27 standalone.localdomain python3.9[323699]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_manila_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:27 standalone.localdomain python3.9[323789]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_manila_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.258605) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367148258690, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 528, "num_deletes": 255, "total_data_size": 289156, "memory_usage": 300056, "flush_reason": "Manual Compaction"}
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367148271533, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 284098, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55246, "largest_seqno": 55773, "table_properties": {"data_size": 281372, "index_size": 775, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6410, "raw_average_key_size": 18, "raw_value_size": 275831, "raw_average_value_size": 781, "num_data_blocks": 35, "num_entries": 353, "num_filter_entries": 353, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760367118, "oldest_key_time": 1760367118, "file_creation_time": 1760367148, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 13012 microseconds, and 3387 cpu microseconds.
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.271611) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 284098 bytes OK
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.271649) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.279301) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.279363) EVENT_LOG_v1 {"time_micros": 1760367148279349, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.279395) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 286093, prev total WAL file size 286582, number of live WAL files 2.
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.280433) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373536' seq:72057594037927935, type:22 .. '6C6F676D0032303037' seq:0, type:0; will stop at (end)
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(277KB)], [122(4960KB)]
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367148280612, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 5363673, "oldest_snapshot_seqno": -1}
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 5307 keys, 5282167 bytes, temperature: kUnknown
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367148314330, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 5282167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5251082, "index_size": 16716, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 135938, "raw_average_key_size": 25, "raw_value_size": 5158981, "raw_average_value_size": 972, "num_data_blocks": 687, "num_entries": 5307, "num_filter_entries": 5307, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760367148, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.314634) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 5282167 bytes
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.329737) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.7 rd, 156.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 4.8 +0.0 blob) out(5.0 +0.0 blob), read-write-amplify(37.5) write-amplify(18.6) OK, records in: 5830, records dropped: 523 output_compression: NoCompression
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.329753) EVENT_LOG_v1 {"time_micros": 1760367148329745, "job": 74, "event": "compaction_finished", "compaction_time_micros": 33794, "compaction_time_cpu_micros": 10815, "output_level": 6, "num_output_files": 1, "total_output_size": 5282167, "num_input_records": 5830, "num_output_records": 5307, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367148329874, "job": 74, "event": "table_file_deletion", "file_number": 124}
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367148330182, "job": 74, "event": "table_file_deletion", "file_number": 122}
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.280133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.330293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.330302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.330304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.330305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:52:28 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:52:28.330307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:52:28 standalone.localdomain python3.9[323879]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_memcached.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2338: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21570 DF PROTO=TCP SPT=44318 DPT=9101 SEQ=2752555755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9FDBB60000000001030307) 
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:52:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:52:28 standalone.localdomain python3.9[323969]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:29 standalone.localdomain ceph-mon[29756]: pgmap v2338: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:29 standalone.localdomain python3.9[324059]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:29 standalone.localdomain python3.9[324149]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2339: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:30 standalone.localdomain python3.9[324239]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:30 standalone.localdomain ceph-mon[29756]: pgmap v2339: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:31 standalone.localdomain python3.9[324329]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_sriov_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:31 standalone.localdomain python3.9[324419]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:32 standalone.localdomain python3.9[324509]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2340: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:32 standalone.localdomain ceph-mon[29756]: pgmap v2340: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21571 DF PROTO=TCP SPT=44318 DPT=9101 SEQ=2752555755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9FEB760000000001030307) 
Oct 13 14:52:32 standalone.localdomain python3.9[324599]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:33 standalone.localdomain python3.9[324689]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:52:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:52:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:52:33 standalone.localdomain python3.9[324779]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:33 standalone.localdomain podman[324780]: 2025-10-13 14:52:33.823224895 +0000 UTC m=+0.084934408 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, release=1, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, container_name=swift_object_server, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, architecture=x86_64, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:52:33 standalone.localdomain podman[324781]: 2025-10-13 14:52:33.871705195 +0000 UTC m=+0.131784909 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, container_name=swift_account_server, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, distribution-scope=public, vcs-type=git, version=17.1.9, architecture=x86_64, build-date=2025-07-21T16:11:22, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:52:34 standalone.localdomain podman[324780]: 2025-10-13 14:52:34.09490244 +0000 UTC m=+0.356611963 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, container_name=swift_object_server, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, build-date=2025-07-21T14:56:28, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64)
Oct 13 14:52:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:52:34 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:52:34 standalone.localdomain podman[324781]: 2025-10-13 14:52:34.125994243 +0000 UTC m=+0.386073917 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-account, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T16:11:22, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git)
Oct 13 14:52:34 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:52:34 standalone.localdomain systemd[1]: tmp-crun.7fE6R8.mount: Deactivated successfully.
Oct 13 14:52:34 standalone.localdomain podman[324922]: 2025-10-13 14:52:34.184706041 +0000 UTC m=+0.064290101 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1)
Oct 13 14:52:34 standalone.localdomain python3.9[324931]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:34 standalone.localdomain podman[324922]: 2025-10-13 14:52:34.35078958 +0000 UTC m=+0.230373650 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, build-date=2025-07-21T15:54:32, container_name=swift_container_server, release=1, tcib_managed=true, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9)
Oct 13 14:52:34 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:52:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2341: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:34 standalone.localdomain ceph-mon[29756]: pgmap v2341: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:34 standalone.localdomain python3.9[325043]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:35 standalone.localdomain python3.9[325133]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:36 standalone.localdomain python3.9[325223]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2342: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:36 standalone.localdomain ceph-mon[29756]: pgmap v2342: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:36 standalone.localdomain python3.9[325313]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:37 standalone.localdomain python3.9[325403]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24114 DF PROTO=TCP SPT=41164 DPT=9100 SEQ=2880422313 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7C9FFE340000000001030307) 
Oct 13 14:52:37 standalone.localdomain sudo[325494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:52:37 standalone.localdomain sudo[325494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:52:37 standalone.localdomain sudo[325494]: pam_unix(sudo:session): session closed for user root
Oct 13 14:52:37 standalone.localdomain sudo[325509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:52:37 standalone.localdomain sudo[325509]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:52:37 standalone.localdomain python3.9[325493]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:38 standalone.localdomain sudo[325509]: pam_unix(sudo:session): session closed for user root
Oct 13 14:52:38 standalone.localdomain python3.9[325628]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:52:38 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:52:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:52:38 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:52:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:52:38 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:52:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:52:38 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:52:38 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 1541a4e6-7026-423c-997b-3c6f161d5e77 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:52:38 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 1541a4e6-7026-423c-997b-3c6f161d5e77 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:52:38 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 1541a4e6-7026-423c-997b-3c6f161d5e77 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:52:38 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:52:38 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:52:38 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:52:38 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:52:38 standalone.localdomain sudo[325646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:52:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:52:38 standalone.localdomain sudo[325646]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:52:38 standalone.localdomain sudo[325646]: pam_unix(sudo:session): session closed for user root
Oct 13 14:52:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24115 DF PROTO=TCP SPT=41164 DPT=9100 SEQ=2880422313 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA002360000000001030307) 
Oct 13 14:52:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2343: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:38 standalone.localdomain python3.9[325750]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:39 standalone.localdomain python3.9[325840]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:39 standalone.localdomain ceph-mon[29756]: pgmap v2343: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:39 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:52:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:52:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:52:39 standalone.localdomain python3.9[325930]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_cluster_north_db_server.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:40 standalone.localdomain python3.9[326020]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_cluster_northd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24116 DF PROTO=TCP SPT=41164 DPT=9100 SEQ=2880422313 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA00A360000000001030307) 
Oct 13 14:52:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2344: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:52:40 standalone.localdomain ceph-mon[29756]: pgmap v2344: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:40 standalone.localdomain python3.9[326110]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_cluster_south_db_server.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:41 standalone.localdomain python3.9[326200]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:41 standalone.localdomain python3.9[326290]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:42 standalone.localdomain python3.9[326380]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_placement_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2345: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:42 standalone.localdomain ceph-mon[29756]: pgmap v2345: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:42 standalone.localdomain python3.9[326470]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_swift_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:52:43 standalone.localdomain python3.9[326560]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_barbican_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:43 standalone.localdomain python3.9[326650]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_barbican_keystone_listener.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45503 DF PROTO=TCP SPT=36302 DPT=9105 SEQ=4050106614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA017F60000000001030307) 
Oct 13 14:52:44 standalone.localdomain python3.9[326740]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_barbican_worker.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2346: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:44 standalone.localdomain ceph-mon[29756]: pgmap v2346: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:44 standalone.localdomain python3.9[326830]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_cinder_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:45 standalone.localdomain python3.9[326920]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_cinder_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:45 standalone.localdomain python3.9[327010]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_cinder_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:46 standalone.localdomain python3.9[327100]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_clustercheck.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2347: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:46 standalone.localdomain ceph-mon[29756]: pgmap v2347: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:46 standalone.localdomain python3.9[327190]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_glance_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:47 standalone.localdomain python3.9[327280]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_glance_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:48 standalone.localdomain python3.9[327370]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_glance_api_internal.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:52:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2348: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:48 standalone.localdomain python3.9[327460]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_heat_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41495 DF PROTO=TCP SPT=35082 DPT=9882 SEQ=2602257871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA02A360000000001030307) 
Oct 13 14:52:49 standalone.localdomain python3.9[327550]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_heat_api_cfn.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:49 standalone.localdomain ceph-mon[29756]: pgmap v2348: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:49 standalone.localdomain python3.9[327640]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_heat_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45505 DF PROTO=TCP SPT=36302 DPT=9105 SEQ=4050106614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA02FB60000000001030307) 
Oct 13 14:52:50 standalone.localdomain python3.9[327730]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_heat_engine.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2349: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:50 standalone.localdomain ceph-mon[29756]: pgmap v2349: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:50 standalone.localdomain python3.9[327820]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_horizon.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:51 standalone.localdomain python3.9[327910]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:51 standalone.localdomain python3.9[328000]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_keystone.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:52 standalone.localdomain python3.9[328090]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_keystone_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2350: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:52 standalone.localdomain ceph-mon[29756]: pgmap v2350: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:52 standalone.localdomain python3.9[328180]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:52:53 standalone.localdomain python3.9[328270]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_manila_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:52:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29545 DF PROTO=TCP SPT=52872 DPT=9102 SEQ=98979719 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA03D4E0000000001030307) 
Oct 13 14:52:53 standalone.localdomain python3.9[328360]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_manila_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:54 standalone.localdomain python3.9[328450]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_manila_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2351: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:54 standalone.localdomain ceph-mon[29756]: pgmap v2351: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:54 standalone.localdomain python3.9[328540]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_memcached.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:55 standalone.localdomain sshd[328631]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:52:55 standalone.localdomain python3.9[328630]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30721 DF PROTO=TCP SPT=58368 DPT=9101 SEQ=1471416752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA044DD0000000001030307) 
Oct 13 14:52:55 standalone.localdomain sshd[328631]: Invalid user  from 134.199.194.158 port 34796
Oct 13 14:52:55 standalone.localdomain python3.9[328722]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:56 standalone.localdomain python3.9[328812]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2352: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:56 standalone.localdomain ceph-mon[29756]: pgmap v2352: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:56 standalone.localdomain python3.9[328902]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:57 standalone.localdomain python3.9[328992]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_sriov_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:58 standalone.localdomain python3.9[329082]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:52:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2353: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:58 standalone.localdomain python3.9[329172]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30723 DF PROTO=TCP SPT=58368 DPT=9101 SEQ=1471416752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA050F70000000001030307) 
Oct 13 14:52:59 standalone.localdomain python3.9[329262]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:52:59 standalone.localdomain ceph-mon[29756]: pgmap v2353: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:52:59 standalone.localdomain python3.9[329352]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:00 standalone.localdomain python3.9[329442]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2354: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:00 standalone.localdomain ceph-mon[29756]: pgmap v2354: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:00 standalone.localdomain python3.9[329532]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:01 standalone.localdomain python3.9[329622]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:01 standalone.localdomain sshd[329713]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:53:01 standalone.localdomain sshd[329713]: error: kex_exchange_identification: banner line contains invalid characters
Oct 13 14:53:01 standalone.localdomain sshd[329713]: banner exchange: Connection from 93.123.109.214 port 42414: invalid format
Oct 13 14:53:01 standalone.localdomain python3.9[329712]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:01 standalone.localdomain sshd[329714]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:53:01 standalone.localdomain sshd[329714]: error: kex_exchange_identification: client sent invalid protocol identifier "GET / HTTP/1.1"
Oct 13 14:53:01 standalone.localdomain sshd[329714]: banner exchange: Connection from 93.123.109.214 port 42416: invalid format
Oct 13 14:53:02 standalone.localdomain python3.9[329804]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2355: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:02 standalone.localdomain ceph-mon[29756]: pgmap v2355: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30724 DF PROTO=TCP SPT=58368 DPT=9101 SEQ=1471416752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA060B60000000001030307) 
Oct 13 14:53:02 standalone.localdomain python3.9[329894]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:53:03 standalone.localdomain python3.9[329984]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:03 standalone.localdomain sshd[328631]: Connection closed by invalid user  134.199.194.158 port 34796 [preauth]
Oct 13 14:53:03 standalone.localdomain python3.9[330074]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:04 standalone.localdomain python3.9[330164]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2356: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:04 standalone.localdomain ceph-mon[29756]: pgmap v2356: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:53:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:53:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:53:04 standalone.localdomain systemd[1]: tmp-crun.4HFE5Q.mount: Deactivated successfully.
Oct 13 14:53:04 standalone.localdomain podman[330256]: 2025-10-13 14:53:04.803249421 +0000 UTC m=+0.067269762 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, tcib_managed=true, architecture=x86_64, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, summary=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, container_name=swift_container_server, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:53:04 standalone.localdomain podman[330257]: 2025-10-13 14:53:04.874825515 +0000 UTC m=+0.136514804 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, io.buildah.version=1.33.12, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 14:53:04 standalone.localdomain python3.9[330254]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:04 standalone.localdomain podman[330255]: 2025-10-13 14:53:04.952735806 +0000 UTC m=+0.218442950 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, batch=17.1_20250721.1, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, release=1, architecture=x86_64, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, build-date=2025-07-21T14:56:28, version=17.1.9, maintainer=OpenStack TripleO Team)
Oct 13 14:53:05 standalone.localdomain podman[330256]: 2025-10-13 14:53:05.017793559 +0000 UTC m=+0.281813900 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, name=rhosp17/openstack-swift-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=)
Oct 13 14:53:05 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:53:05 standalone.localdomain podman[330257]: 2025-10-13 14:53:05.064253187 +0000 UTC m=+0.325942466 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, batch=17.1_20250721.1, io.openshift.expose-services=, build-date=2025-07-21T16:11:22, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, version=17.1.9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 14:53:05 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:53:05 standalone.localdomain podman[330255]: 2025-10-13 14:53:05.131732295 +0000 UTC m=+0.397439439 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, version=17.1.9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, name=rhosp17/openstack-swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:53:05 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:53:05 standalone.localdomain python3.9[330431]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:05 standalone.localdomain python3.9[330521]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_cluster_north_db_server.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2357: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:06 standalone.localdomain python3.9[330611]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_cluster_northd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:06 standalone.localdomain ceph-mon[29756]: pgmap v2357: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:07 standalone.localdomain python3.9[330701]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_cluster_south_db_server.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1285 DF PROTO=TCP SPT=56834 DPT=9100 SEQ=1231236391 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA073640000000001030307) 
Oct 13 14:53:07 standalone.localdomain python3.9[330791]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:08 standalone.localdomain python3.9[330881]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:53:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1286 DF PROTO=TCP SPT=56834 DPT=9100 SEQ=1231236391 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA077770000000001030307) 
Oct 13 14:53:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2358: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:08 standalone.localdomain python3.9[330971]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_placement_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:09 standalone.localdomain python3.9[331061]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_swift_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:53:09 standalone.localdomain ceph-mon[29756]: pgmap v2358: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:09 standalone.localdomain python3.9[331151]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                            systemctl disable --now certmonger.service
                                                            test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                          fi
                                                           _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1287 DF PROTO=TCP SPT=56834 DPT=9100 SEQ=1231236391 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA07F760000000001030307) 
Oct 13 14:53:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2359: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:10 standalone.localdomain ceph-mon[29756]: pgmap v2359: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:10 standalone.localdomain python3.9[331243]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 13 14:53:11 standalone.localdomain python3.9[331333]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 14:53:11 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:53:11 standalone.localdomain systemd-rc-local-generator[331361]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:53:11 standalone.localdomain systemd-sysv-generator[331364]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:53:11 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:53:12 standalone.localdomain python3.9[331459]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_barbican_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2360: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:12 standalone.localdomain ceph-mon[29756]: pgmap v2360: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:12 standalone.localdomain python3.9[331550]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_barbican_keystone_listener.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:53:13 standalone.localdomain python3.9[331641]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_barbican_worker.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:13 standalone.localdomain python3.9[331732]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_cinder_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21704 DF PROTO=TCP SPT=39542 DPT=9105 SEQ=4294720088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA08D360000000001030307) 
Oct 13 14:53:14 standalone.localdomain python3.9[331823]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_cinder_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2361: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:14 standalone.localdomain ceph-mon[29756]: pgmap v2361: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:14 standalone.localdomain python3.9[331914]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_cinder_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:15 standalone.localdomain python3.9[332005]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_clustercheck.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:15 standalone.localdomain python3.9[332096]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_glance_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:16 standalone.localdomain python3.9[332187]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_glance_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2362: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:16 standalone.localdomain ceph-mon[29756]: pgmap v2362: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:17 standalone.localdomain python3.9[332278]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_glance_api_internal.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:17 standalone.localdomain python3.9[332369]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_heat_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:18 standalone.localdomain python3.9[332460]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_heat_api_cfn.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:53:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2363: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:53:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4075651672' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:53:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:53:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4075651672' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:53:18 standalone.localdomain python3.9[332551]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_heat_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15702 DF PROTO=TCP SPT=40056 DPT=9882 SEQ=3763920906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA09F770000000001030307) 
Oct 13 14:53:19 standalone.localdomain python3.9[332642]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_heat_engine.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:19 standalone.localdomain ceph-mon[29756]: pgmap v2363: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/4075651672' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:53:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/4075651672' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:53:19 standalone.localdomain python3.9[332733]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_horizon.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21706 DF PROTO=TCP SPT=39542 DPT=9105 SEQ=4294720088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA0A4F60000000001030307) 
Oct 13 14:53:20 standalone.localdomain python3.9[332824]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2364: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:20 standalone.localdomain ceph-mon[29756]: pgmap v2364: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:20 standalone.localdomain python3.9[332915]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_keystone.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:21 standalone.localdomain python3.9[333006]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_keystone_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:21 standalone.localdomain python3.9[333097]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:22 standalone.localdomain python3.9[333188]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_manila_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2365: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:22 standalone.localdomain ceph-mon[29756]: pgmap v2365: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:22 standalone.localdomain python3.9[333279]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_manila_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:53:23
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', '.mgr', 'vms', 'images', 'manila_metadata', 'manila_data', 'volumes']
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:53:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:53:23 standalone.localdomain python3.9[333370]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_manila_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:53:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9139 DF PROTO=TCP SPT=35212 DPT=9102 SEQ=2759865937 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA0B27F0000000001030307) 
Oct 13 14:53:24 standalone.localdomain python3.9[333461]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_memcached.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2366: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:24 standalone.localdomain ceph-mon[29756]: pgmap v2366: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:24 standalone.localdomain python3.9[333552]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:25 standalone.localdomain python3.9[333643]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26026 DF PROTO=TCP SPT=60534 DPT=9101 SEQ=3281777214 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA0BA0D0000000001030307) 
Oct 13 14:53:25 standalone.localdomain python3.9[333734]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:26 standalone.localdomain python3.9[333825]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2367: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:26 standalone.localdomain ceph-mon[29756]: pgmap v2367: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:26 standalone.localdomain python3.9[333916]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_sriov_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:53:28 standalone.localdomain python3.9[334007]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2368: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26028 DF PROTO=TCP SPT=60534 DPT=9101 SEQ=3281777214 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA0C5F60000000001030307) 
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:53:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:53:29 standalone.localdomain python3.9[334098]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:29 standalone.localdomain ceph-mon[29756]: pgmap v2368: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:29 standalone.localdomain python3.9[334189]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:30 standalone.localdomain python3.9[334280]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2369: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:30 standalone.localdomain ceph-mon[29756]: pgmap v2369: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:30 standalone.localdomain python3.9[334371]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:31 standalone.localdomain python3.9[334462]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2370: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26029 DF PROTO=TCP SPT=60534 DPT=9101 SEQ=3281777214 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA0D5B60000000001030307) 
Oct 13 14:53:32 standalone.localdomain ceph-mon[29756]: pgmap v2370: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:32 standalone.localdomain python3.9[334553]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:53:33 standalone.localdomain python3.9[334644]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:34 standalone.localdomain python3.9[334735]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2371: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:34 standalone.localdomain python3.9[334826]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:34 standalone.localdomain ceph-mon[29756]: pgmap v2371: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:35 standalone.localdomain python3.9[334917]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:53:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:53:35 standalone.localdomain podman[334919]: 2025-10-13 14:53:35.170137255 +0000 UTC m=+0.062134083 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_container_server, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, build-date=2025-07-21T15:54:32, version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-container-container, tcib_managed=true)
Oct 13 14:53:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:53:35 standalone.localdomain systemd[1]: tmp-crun.uHbHZT.mount: Deactivated successfully.
Oct 13 14:53:35 standalone.localdomain podman[334967]: 2025-10-13 14:53:35.246271071 +0000 UTC m=+0.054991843 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, release=1, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, tcib_managed=true, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.buildah.version=1.33.12, 
com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, name=rhosp17/openstack-swift-object)
Oct 13 14:53:35 standalone.localdomain podman[334920]: 2025-10-13 14:53:35.22784558 +0000 UTC m=+0.120720416 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, distribution-scope=public, com.redhat.component=openstack-swift-account-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, release=1, tcib_managed=true, io.buildah.version=1.33.12, container_name=swift_account_server, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 14:53:35 standalone.localdomain podman[334919]: 2025-10-13 14:53:35.373977123 +0000 UTC m=+0.265973951 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1)
Oct 13 14:53:35 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:53:35 standalone.localdomain podman[334967]: 2025-10-13 14:53:35.453818193 +0000 UTC m=+0.262538995 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, distribution-scope=public)
Oct 13 14:53:35 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:53:35 standalone.localdomain podman[334920]: 2025-10-13 14:53:35.474804313 +0000 UTC m=+0.367679159 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, 
name=rhosp17/openstack-swift-account, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server)
Oct 13 14:53:35 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:53:35 standalone.localdomain python3.9[335086]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:36 standalone.localdomain python3.9[335181]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2372: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:36 standalone.localdomain ceph-mon[29756]: pgmap v2372: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:36 standalone.localdomain python3.9[335272]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:37 standalone.localdomain python3.9[335363]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43925 DF PROTO=TCP SPT=58110 DPT=9100 SEQ=3956375866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA0E8940000000001030307) 
Oct 13 14:53:37 standalone.localdomain python3.9[335454]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_cluster_north_db_server.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:53:38 standalone.localdomain python3.9[335545]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_cluster_northd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:38 standalone.localdomain sudo[335546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:53:38 standalone.localdomain sudo[335546]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:53:38 standalone.localdomain sudo[335546]: pam_unix(sudo:session): session closed for user root
Oct 13 14:53:38 standalone.localdomain sudo[335562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:53:38 standalone.localdomain sudo[335562]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:53:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43926 DF PROTO=TCP SPT=58110 DPT=9100 SEQ=3956375866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA0ECB70000000001030307) 
Oct 13 14:53:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2373: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:38 standalone.localdomain python3.9[335666]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_cluster_south_db_server.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:38 standalone.localdomain sudo[335562]: pam_unix(sudo:session): session closed for user root
Oct 13 14:53:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:53:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:53:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:53:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:53:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:53:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:53:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:53:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:53:39 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 4dbb0c8f-4ffb-4c88-831a-7ab0ef185aa4 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:53:39 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 4dbb0c8f-4ffb-4c88-831a-7ab0ef185aa4 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:53:39 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 4dbb0c8f-4ffb-4c88-831a-7ab0ef185aa4 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:53:39 standalone.localdomain sudo[335759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:53:39 standalone.localdomain sudo[335759]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:53:39 standalone.localdomain sudo[335759]: pam_unix(sudo:session): session closed for user root
Oct 13 14:53:39 standalone.localdomain python3.9[335804]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:39 standalone.localdomain ceph-mon[29756]: pgmap v2373: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:39 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:53:39 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:53:39 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:53:39 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:53:39 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:53:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:53:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:53:39 standalone.localdomain python3.9[335895]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:39 standalone.localdomain auditd[726]: Audit daemon rotating log files
Oct 13 14:53:40 standalone.localdomain python3.9[335986]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_placement_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43927 DF PROTO=TCP SPT=58110 DPT=9100 SEQ=3956375866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA0F4B70000000001030307) 
Oct 13 14:53:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2374: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:53:40 standalone.localdomain ceph-mon[29756]: pgmap v2374: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:40 standalone.localdomain python3.9[336077]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_swift_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:53:41 standalone.localdomain sshd[309411]: pam_unix(sshd:session): session closed for user root
Oct 13 14:53:41 standalone.localdomain systemd[1]: session-276.scope: Deactivated successfully.
Oct 13 14:53:41 standalone.localdomain systemd[1]: session-276.scope: Consumed 1min 36.680s CPU time.
Oct 13 14:53:41 standalone.localdomain systemd-logind[45629]: Session 276 logged out. Waiting for processes to exit.
Oct 13 14:53:41 standalone.localdomain systemd-logind[45629]: Removed session 276.
Oct 13 14:53:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2375: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:42 standalone.localdomain ceph-mon[29756]: pgmap v2375: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:53:44 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33471 DF PROTO=TCP SPT=45108 DPT=9105 SEQ=4273755351 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA102760000000001030307) 
Oct 13 14:53:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2376: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:44 standalone.localdomain ceph-mon[29756]: pgmap v2376: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2377: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:46 standalone.localdomain ceph-mon[29756]: pgmap v2377: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:53:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2378: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18723 DF PROTO=TCP SPT=60720 DPT=9882 SEQ=3204104765 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA114B60000000001030307) 
Oct 13 14:53:49 standalone.localdomain ceph-mon[29756]: pgmap v2378: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33473 DF PROTO=TCP SPT=45108 DPT=9105 SEQ=4273755351 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA11A360000000001030307) 
Oct 13 14:53:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2379: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:51 standalone.localdomain systemd[1]: Stopping User Manager for UID 0...
Oct 13 14:53:51 standalone.localdomain systemd[175349]: Activating special unit Exit the Session...
Oct 13 14:53:51 standalone.localdomain systemd[175349]: Removed slice User Background Tasks Slice.
Oct 13 14:53:51 standalone.localdomain systemd[175349]: Stopped target Main User Target.
Oct 13 14:53:51 standalone.localdomain systemd[175349]: Stopped target Basic System.
Oct 13 14:53:51 standalone.localdomain systemd[175349]: Stopped target Paths.
Oct 13 14:53:51 standalone.localdomain systemd[175349]: Stopped target Sockets.
Oct 13 14:53:51 standalone.localdomain systemd[175349]: Stopped target Timers.
Oct 13 14:53:51 standalone.localdomain systemd[175349]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 14:53:51 standalone.localdomain systemd[175349]: Closed D-Bus User Message Bus Socket.
Oct 13 14:53:51 standalone.localdomain systemd[175349]: Stopped Create User's Volatile Files and Directories.
Oct 13 14:53:51 standalone.localdomain systemd[175349]: Removed slice User Application Slice.
Oct 13 14:53:51 standalone.localdomain systemd[175349]: Reached target Shutdown.
Oct 13 14:53:51 standalone.localdomain systemd[175349]: Finished Exit the Session.
Oct 13 14:53:51 standalone.localdomain systemd[175349]: Reached target Exit the Session.
Oct 13 14:53:51 standalone.localdomain systemd[1]: user@0.service: Deactivated successfully.
Oct 13 14:53:51 standalone.localdomain systemd[1]: Stopped User Manager for UID 0.
Oct 13 14:53:51 standalone.localdomain systemd[1]: user@0.service: Consumed 8.447s CPU time, read 604.0K from disk, written 0B to disk.
Oct 13 14:53:51 standalone.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 13 14:53:51 standalone.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 13 14:53:51 standalone.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 13 14:53:51 standalone.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 13 14:53:51 standalone.localdomain systemd[1]: Removed slice User Slice of UID 0.
Oct 13 14:53:51 standalone.localdomain systemd[1]: user-0.slice: Consumed 4min 805ms CPU time.
Oct 13 14:53:51 standalone.localdomain ceph-mon[29756]: pgmap v2379: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2380: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:52 standalone.localdomain ceph-mon[29756]: pgmap v2380: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:53:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:53:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:53:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:53:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:53:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:53:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:53:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53521 DF PROTO=TCP SPT=59720 DPT=9102 SEQ=759055762 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA127AE0000000001030307) 
Oct 13 14:53:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2381: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33601 DF PROTO=TCP SPT=42054 DPT=9101 SEQ=1049817356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA12F3D0000000001030307) 
Oct 13 14:53:55 standalone.localdomain ceph-mon[29756]: pgmap v2381: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2382: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:56 standalone.localdomain ceph-mon[29756]: pgmap v2382: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:53:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2383: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:53:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33603 DF PROTO=TCP SPT=42054 DPT=9101 SEQ=1049817356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA13B360000000001030307) 
Oct 13 14:53:59 standalone.localdomain ceph-mon[29756]: pgmap v2383: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2384: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:01 standalone.localdomain ceph-mon[29756]: pgmap v2384: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2385: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33604 DF PROTO=TCP SPT=42054 DPT=9101 SEQ=1049817356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA14AF70000000001030307) 
Oct 13 14:54:02 standalone.localdomain ceph-mon[29756]: pgmap v2385: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:54:04 standalone.localdomain sshd[336095]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:54:04 standalone.localdomain sshd[336095]: Accepted publickey for root from 192.168.122.30 port 36204 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:54:04 standalone.localdomain systemd[1]: Created slice User Slice of UID 0.
Oct 13 14:54:04 standalone.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 13 14:54:04 standalone.localdomain systemd-logind[45629]: New session 277 of user root.
Oct 13 14:54:04 standalone.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 13 14:54:04 standalone.localdomain systemd[1]: Starting User Manager for UID 0...
Oct 13 14:54:04 standalone.localdomain systemd[336099]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:54:04 standalone.localdomain systemd[336099]: Queued start job for default target Main User Target.
Oct 13 14:54:04 standalone.localdomain systemd[336099]: Created slice User Application Slice.
Oct 13 14:54:04 standalone.localdomain systemd[336099]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 13 14:54:04 standalone.localdomain systemd[336099]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 14:54:04 standalone.localdomain systemd[336099]: Reached target Paths.
Oct 13 14:54:04 standalone.localdomain systemd[336099]: Reached target Timers.
Oct 13 14:54:04 standalone.localdomain systemd[336099]: Starting D-Bus User Message Bus Socket...
Oct 13 14:54:04 standalone.localdomain systemd[336099]: Starting Create User's Volatile Files and Directories...
Oct 13 14:54:04 standalone.localdomain systemd[336099]: Finished Create User's Volatile Files and Directories.
Oct 13 14:54:04 standalone.localdomain systemd[336099]: Listening on D-Bus User Message Bus Socket.
Oct 13 14:54:04 standalone.localdomain systemd[336099]: Reached target Sockets.
Oct 13 14:54:04 standalone.localdomain systemd[336099]: Reached target Basic System.
Oct 13 14:54:04 standalone.localdomain systemd[336099]: Reached target Main User Target.
Oct 13 14:54:04 standalone.localdomain systemd[336099]: Startup finished in 126ms.
Oct 13 14:54:04 standalone.localdomain systemd[1]: Started User Manager for UID 0.
Oct 13 14:54:04 standalone.localdomain systemd[1]: Started Session 277 of User root.
Oct 13 14:54:04 standalone.localdomain sshd[336095]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:54:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2386: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:04 standalone.localdomain ceph-mon[29756]: pgmap v2386: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:05 standalone.localdomain python3.9[336204]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 13 14:54:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:54:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:54:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:54:05 standalone.localdomain podman[336252]: 2025-10-13 14:54:05.809738975 +0000 UTC m=+0.069798101 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_id=tripleo_step4, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, build-date=2025-07-21T15:54:32, 
maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, release=1, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:54:05 standalone.localdomain systemd[1]: tmp-crun.IVm18A.mount: Deactivated successfully.
Oct 13 14:54:05 standalone.localdomain podman[336251]: 2025-10-13 14:54:05.859433963 +0000 UTC m=+0.118771477 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.component=openstack-swift-object-container, name=rhosp17/openstack-swift-object, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, 
config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, container_name=swift_object_server, distribution-scope=public, io.openshift.expose-services=)
Oct 13 14:54:05 standalone.localdomain podman[336253]: 2025-10-13 14:54:05.91655287 +0000 UTC m=+0.170140706 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, distribution-scope=public, architecture=x86_64, batch=17.1_20250721.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, build-date=2025-07-21T16:11:22, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git)
Oct 13 14:54:05 standalone.localdomain podman[336252]: 2025-10-13 14:54:05.996766373 +0000 UTC m=+0.256825199 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-swift-container-container, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO 
Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, container_name=swift_container_server, io.buildah.version=1.33.12, architecture=x86_64)
Oct 13 14:54:06 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:54:06 standalone.localdomain podman[336251]: 2025-10-13 14:54:06.076756687 +0000 UTC m=+0.336094181 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 14:54:06 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:54:06 standalone.localdomain podman[336253]: 2025-10-13 14:54:06.128227021 +0000 UTC m=+0.381814847 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, container_name=swift_account_server, vcs-type=git, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, tcib_managed=true, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:54:06 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:54:06 standalone.localdomain python3.9[336383]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 14:54:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2387: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:07 standalone.localdomain python3.9[336477]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                                           _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:54:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57450 DF PROTO=TCP SPT=45630 DPT=9100 SEQ=3852966764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA15DC40000000001030307) 
Oct 13 14:54:07 standalone.localdomain ceph-mon[29756]: pgmap v2387: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:07 standalone.localdomain python3.9[336570]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:54:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:54:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57451 DF PROTO=TCP SPT=45630 DPT=9100 SEQ=3852966764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA161B60000000001030307) 
Oct 13 14:54:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2388: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:08 standalone.localdomain python3.9[336660]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:54:09 standalone.localdomain python3.9[336750]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 14:54:09 standalone.localdomain python3.9[336821]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/root/.ansible/tmp/ansible-tmp-1760367248.7771459-73-132278727617209/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:54:10 standalone.localdomain ceph-mon[29756]: pgmap v2388: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57452 DF PROTO=TCP SPT=45630 DPT=9100 SEQ=3852966764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA169B60000000001030307) 
Oct 13 14:54:10 standalone.localdomain python3.9[336911]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 14:54:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2389: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:11 standalone.localdomain python3.9[337005]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 14:54:11 standalone.localdomain python3.9[337095]: ansible-ansible.builtin.service_facts Invoked
Oct 13 14:54:11 standalone.localdomain network[337112]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 14:54:11 standalone.localdomain network[337113]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:54:11 standalone.localdomain network[337114]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 14:54:12 standalone.localdomain ceph-mon[29756]: pgmap v2389: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2390: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:54:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8149 DF PROTO=TCP SPT=50930 DPT=9105 SEQ=2893555480 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA177760000000001030307) 
Oct 13 14:54:14 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:54:14 standalone.localdomain ceph-mon[29756]: pgmap v2390: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2391: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:16 standalone.localdomain ceph-mon[29756]: pgmap v2391: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2392: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:54:18 standalone.localdomain python3.9[337319]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:54:18 standalone.localdomain ceph-mon[29756]: pgmap v2392: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2393: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:54:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3641109699' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:54:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:54:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3641109699' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:54:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29793 DF PROTO=TCP SPT=39046 DPT=9882 SEQ=1832259428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA189F70000000001030307) 
Oct 13 14:54:19 standalone.localdomain python3.9[337409]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 14:54:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3641109699' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:54:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3641109699' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:54:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8151 DF PROTO=TCP SPT=50930 DPT=9105 SEQ=2893555480 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA18F360000000001030307) 
Oct 13 14:54:20 standalone.localdomain python3.9[337503]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream
                                                          set -euxo pipefail
                                                          curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                                          python3 -m venv ./venv
                                                          PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main
                                                          # This is required for FIPS enabled until trunk.rdoproject.org
                                                          # is not being served from a centos7 host, tracked by
                                                          # https://issues.redhat.com/browse/RHOSZUUL-1517
                                                          dnf -y install crypto-policies
                                                          update-crypto-policies --set FIPS:NO-ENFORCE-EMS
                                                          ./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream
                                                          
                                                          # Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible
                                                          # with rhel 9.2 openssh
                                                          dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save
                                                          # FIXME: perform dnf upgrade for other packages in EDPM ansible
                                                          # here we only ensuring that decontainerized libvirt can start
                                                          dnf -y upgrade openstack-selinux
                                                          rm -f /run/virtlogd.pid
                                                          
                                                          rm -rf repo-setup-main
                                                           _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:54:20 standalone.localdomain ceph-mon[29756]: pgmap v2393: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2394: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:22 standalone.localdomain ceph-mon[29756]: pgmap v2394: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2395: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:54:23
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_data', 'vms', 'backups', 'manila_metadata', 'images', 'volumes', '.mgr']
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:54:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:54:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:54:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16939 DF PROTO=TCP SPT=36144 DPT=9102 SEQ=1430301728 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA19CDE0000000001030307) 
Oct 13 14:54:24 standalone.localdomain ceph-mon[29756]: pgmap v2395: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2396: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23885 DF PROTO=TCP SPT=45476 DPT=9101 SEQ=969282013 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA1A46E0000000001030307) 
Oct 13 14:54:26 standalone.localdomain ceph-mon[29756]: pgmap v2396: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2397: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:54:28 standalone.localdomain ceph-mon[29756]: pgmap v2397: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2398: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23887 DF PROTO=TCP SPT=45476 DPT=9101 SEQ=969282013 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA1B0760000000001030307) 
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:54:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:54:30 standalone.localdomain ceph-mon[29756]: pgmap v2398: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2399: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:31 standalone.localdomain systemd[1]: Stopping OpenSSH server daemon...
Oct 13 14:54:31 standalone.localdomain sshd[56396]: Received signal 15; terminating.
Oct 13 14:54:31 standalone.localdomain systemd[1]: sshd.service: Deactivated successfully.
Oct 13 14:54:31 standalone.localdomain systemd[1]: Stopped OpenSSH server daemon.
Oct 13 14:54:31 standalone.localdomain systemd[1]: sshd.service: Consumed 13.523s CPU time.
Oct 13 14:54:31 standalone.localdomain systemd[1]: Stopped target sshd-keygen.target.
Oct 13 14:54:31 standalone.localdomain systemd[1]: Stopping sshd-keygen.target...
Oct 13 14:54:31 standalone.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 14:54:31 standalone.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 14:54:31 standalone.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 14:54:31 standalone.localdomain systemd[1]: Reached target sshd-keygen.target.
Oct 13 14:54:31 standalone.localdomain systemd[1]: Starting OpenSSH server daemon...
Oct 13 14:54:31 standalone.localdomain sshd[337546]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:54:31 standalone.localdomain sshd[337546]: Server listening on 0.0.0.0 port 22.
Oct 13 14:54:31 standalone.localdomain sshd[337546]: Server listening on :: port 22.
Oct 13 14:54:31 standalone.localdomain systemd[1]: Started OpenSSH server daemon.
Oct 13 14:54:31 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 14:54:31 standalone.localdomain systemd[1]: Starting man-db-cache-update.service...
Oct 13 14:54:31 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 14:54:31 standalone.localdomain ceph-mon[29756]: pgmap v2399: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:31 standalone.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 14:54:31 standalone.localdomain systemd[1]: Finished man-db-cache-update.service.
Oct 13 14:54:31 standalone.localdomain systemd[1]: run-rc4e112b0667a48ae9bb843dba296455b.service: Deactivated successfully.
Oct 13 14:54:31 standalone.localdomain systemd[1]: run-r6b81f3ea23b14225b49bfd6821947839.service: Deactivated successfully.
Oct 13 14:54:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2400: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23888 DF PROTO=TCP SPT=45476 DPT=9101 SEQ=969282013 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA1C0360000000001030307) 
Oct 13 14:54:32 standalone.localdomain systemd[1]: Stopping OpenSSH server daemon...
Oct 13 14:54:32 standalone.localdomain sshd[337546]: Received signal 15; terminating.
Oct 13 14:54:32 standalone.localdomain systemd[1]: sshd.service: Deactivated successfully.
Oct 13 14:54:32 standalone.localdomain systemd[1]: Stopped OpenSSH server daemon.
Oct 13 14:54:32 standalone.localdomain systemd[1]: Stopped target sshd-keygen.target.
Oct 13 14:54:32 standalone.localdomain systemd[1]: Stopping sshd-keygen.target...
Oct 13 14:54:32 standalone.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 14:54:32 standalone.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 14:54:32 standalone.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 14:54:32 standalone.localdomain systemd[1]: Reached target sshd-keygen.target.
Oct 13 14:54:32 standalone.localdomain systemd[1]: Starting OpenSSH server daemon...
Oct 13 14:54:32 standalone.localdomain sshd[337962]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:54:32 standalone.localdomain sshd[337962]: Server listening on 0.0.0.0 port 22.
Oct 13 14:54:32 standalone.localdomain sshd[337962]: Server listening on :: port 22.
Oct 13 14:54:32 standalone.localdomain systemd[1]: Started OpenSSH server daemon.
Oct 13 14:54:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:54:34 standalone.localdomain ceph-mon[29756]: pgmap v2400: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2401: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:36 standalone.localdomain ceph-mon[29756]: pgmap v2401: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2402: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:54:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:54:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:54:36 standalone.localdomain podman[337967]: 2025-10-13 14:54:36.813127376 +0000 UTC m=+0.067160411 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, architecture=x86_64, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., container_name=swift_object_server, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object)
Oct 13 14:54:36 standalone.localdomain podman[337969]: 2025-10-13 14:54:36.870963327 +0000 UTC m=+0.124375033 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.buildah.version=1.33.12, version=17.1.9, container_name=swift_account_server, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, build-date=2025-07-21T16:11:22, name=rhosp17/openstack-swift-account)
Oct 13 14:54:36 standalone.localdomain systemd[1]: tmp-crun.ai0qdW.mount: Deactivated successfully.
Oct 13 14:54:36 standalone.localdomain podman[337968]: 2025-10-13 14:54:36.917796577 +0000 UTC m=+0.172564184 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, batch=17.1_20250721.1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp 
osp openstack osp-17.1, vcs-type=git, build-date=2025-07-21T15:54:32, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, container_name=swift_container_server, tcib_managed=true)
Oct 13 14:54:36 standalone.localdomain podman[337967]: 2025-10-13 14:54:36.998762273 +0000 UTC m=+0.252795338 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, vcs-type=git, com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, distribution-scope=public)
Oct 13 14:54:37 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:54:37 standalone.localdomain podman[337969]: 2025-10-13 14:54:37.031788185 +0000 UTC m=+0.285199891 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, 
io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, distribution-scope=public, version=17.1.9)
Oct 13 14:54:37 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:54:37 standalone.localdomain podman[337968]: 2025-10-13 14:54:37.087969735 +0000 UTC m=+0.342737392 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, container_name=swift_container_server, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, release=1, vcs-type=git, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, name=rhosp17/openstack-swift-container, managed_by=tripleo_ansible)
Oct 13 14:54:37 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:54:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31017 DF PROTO=TCP SPT=60152 DPT=9100 SEQ=146188126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA1D2F50000000001030307) 
Oct 13 14:54:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:54:38 standalone.localdomain ceph-mon[29756]: pgmap v2402: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31018 DF PROTO=TCP SPT=60152 DPT=9100 SEQ=146188126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA1D6F60000000001030307) 
Oct 13 14:54:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2403: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:39 standalone.localdomain sudo[338052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:54:39 standalone.localdomain sudo[338052]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:54:39 standalone.localdomain sudo[338052]: pam_unix(sudo:session): session closed for user root
Oct 13 14:54:39 standalone.localdomain sudo[338067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:54:39 standalone.localdomain sudo[338067]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:54:39 standalone.localdomain sudo[338067]: pam_unix(sudo:session): session closed for user root
Oct 13 14:54:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:54:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:54:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:54:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:54:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:54:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:54:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:54:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:54:39 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev ab760b69-953d-4baa-9ddb-fc3c517384f9 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:54:39 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev ab760b69-953d-4baa-9ddb-fc3c517384f9 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:54:39 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event ab760b69-953d-4baa-9ddb-fc3c517384f9 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:54:39 standalone.localdomain sudo[338118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:54:39 standalone.localdomain sudo[338118]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:54:39 standalone.localdomain sudo[338118]: pam_unix(sudo:session): session closed for user root
Oct 13 14:54:40 standalone.localdomain ceph-mon[29756]: pgmap v2403: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:54:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:54:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:54:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:54:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31019 DF PROTO=TCP SPT=60152 DPT=9100 SEQ=146188126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA1DEF60000000001030307) 
Oct 13 14:54:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2404: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:42 standalone.localdomain ceph-mon[29756]: pgmap v2404: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2405: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:54:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43498 DF PROTO=TCP SPT=34028 DPT=9105 SEQ=1047299843 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA1ECB60000000001030307) 
Oct 13 14:54:44 standalone.localdomain sshd[338211]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:54:44 standalone.localdomain ceph-mon[29756]: pgmap v2405: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2406: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:44 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:54:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:54:44 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:54:44 standalone.localdomain sshd[338211]: Received disconnect from 193.46.255.103 port 23614:11:  [preauth]
Oct 13 14:54:44 standalone.localdomain sshd[338211]: Disconnected from authenticating user root 193.46.255.103 port 23614 [preauth]
Oct 13 14:54:45 standalone.localdomain ceph-mon[29756]: pgmap v2406: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:45 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:54:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2407: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:47 standalone.localdomain ceph-mon[29756]: pgmap v2407: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:54:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2408: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54831 DF PROTO=TCP SPT=56240 DPT=9882 SEQ=219188444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA1FEF70000000001030307) 
Oct 13 14:54:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43500 DF PROTO=TCP SPT=34028 DPT=9105 SEQ=1047299843 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA204760000000001030307) 
Oct 13 14:54:50 standalone.localdomain ceph-mon[29756]: pgmap v2408: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2409: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:52 standalone.localdomain ceph-mon[29756]: pgmap v2409: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2410: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:54:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:54:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:54:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:54:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:54:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:54:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:54:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41504 DF PROTO=TCP SPT=51418 DPT=9102 SEQ=978803835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA2120E0000000001030307) 
Oct 13 14:54:54 standalone.localdomain ceph-mon[29756]: pgmap v2410: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2411: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57292 DF PROTO=TCP SPT=60700 DPT=9101 SEQ=942781066 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA2199E0000000001030307) 
Oct 13 14:54:56 standalone.localdomain ceph-mon[29756]: pgmap v2411: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2412: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:54:58 standalone.localdomain ceph-mon[29756]: pgmap v2412: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2413: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:54:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57294 DF PROTO=TCP SPT=60700 DPT=9101 SEQ=942781066 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA225B60000000001030307) 
Oct 13 14:55:00 standalone.localdomain ceph-mon[29756]: pgmap v2413: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2414: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:02 standalone.localdomain ceph-mon[29756]: pgmap v2414: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2415: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57295 DF PROTO=TCP SPT=60700 DPT=9101 SEQ=942781066 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA235760000000001030307) 
Oct 13 14:55:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:55:04 standalone.localdomain ceph-mon[29756]: pgmap v2415: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2416: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:06 standalone.localdomain ceph-mon[29756]: pgmap v2416: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2417: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41520 DF PROTO=TCP SPT=39620 DPT=9100 SEQ=3738178232 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA248260000000001030307) 
Oct 13 14:55:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:55:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:55:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:55:07 standalone.localdomain systemd[1]: tmp-crun.lqNVyc.mount: Deactivated successfully.
Oct 13 14:55:07 standalone.localdomain podman[338236]: 2025-10-13 14:55:07.815787266 +0000 UTC m=+0.071370671 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, batch=17.1_20250721.1, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., container_name=swift_container_server, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, architecture=x86_64)
Oct 13 14:55:07 standalone.localdomain systemd[1]: tmp-crun.ovdeYn.mount: Deactivated successfully.
Oct 13 14:55:07 standalone.localdomain podman[338235]: 2025-10-13 14:55:07.880790958 +0000 UTC m=+0.136042102 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1)
Oct 13 14:55:07 standalone.localdomain podman[338237]: 2025-10-13 14:55:07.93252337 +0000 UTC m=+0.184594666 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, container_name=swift_account_server, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-account, release=1, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, 
architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true)
Oct 13 14:55:07 standalone.localdomain podman[338236]: 2025-10-13 14:55:07.993863589 +0000 UTC m=+0.249446994 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, container_name=swift_container_server, distribution-scope=public, 
io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9)
Oct 13 14:55:08 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:55:08 standalone.localdomain podman[338235]: 2025-10-13 14:55:08.111957325 +0000 UTC m=+0.367208469 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:56:28, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vcs-type=git, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, 
config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:55:08 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:55:08 standalone.localdomain podman[338237]: 2025-10-13 14:55:08.166815023 +0000 UTC m=+0.418886329 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, description=Red Hat OpenStack Platform 17.1 
swift-account, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, build-date=2025-07-21T16:11:22, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., release=1, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4)
Oct 13 14:55:08 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.378769) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367308378824, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1741, "num_deletes": 251, "total_data_size": 1611933, "memory_usage": 1650664, "flush_reason": "Manual Compaction"}
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367308389046, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 1560453, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55774, "largest_seqno": 57514, "table_properties": {"data_size": 1553532, "index_size": 4009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14463, "raw_average_key_size": 19, "raw_value_size": 1539461, "raw_average_value_size": 2097, "num_data_blocks": 182, "num_entries": 734, "num_filter_entries": 734, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760367148, "oldest_key_time": 1760367148, "file_creation_time": 1760367308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 10361 microseconds, and 6271 cpu microseconds.
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.389107) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 1560453 bytes OK
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.389159) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.392473) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.392540) EVENT_LOG_v1 {"time_micros": 1760367308392529, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.392569) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1604319, prev total WAL file size 1604319, number of live WAL files 2.
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.393456) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(1523KB)], [125(5158KB)]
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367308393622, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 6842620, "oldest_snapshot_seqno": -1}
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 5523 keys, 5827810 bytes, temperature: kUnknown
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367308427334, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 5827810, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5794365, "index_size": 18563, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 140925, "raw_average_key_size": 25, "raw_value_size": 5697486, "raw_average_value_size": 1031, "num_data_blocks": 763, "num_entries": 5523, "num_filter_entries": 5523, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760367308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.427701) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 5827810 bytes
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.429361) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.4 rd, 172.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 5.0 +0.0 blob) out(5.6 +0.0 blob), read-write-amplify(8.1) write-amplify(3.7) OK, records in: 6041, records dropped: 518 output_compression: NoCompression
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.429401) EVENT_LOG_v1 {"time_micros": 1760367308429385, "job": 76, "event": "compaction_finished", "compaction_time_micros": 33814, "compaction_time_cpu_micros": 22665, "output_level": 6, "num_output_files": 1, "total_output_size": 5827810, "num_input_records": 6041, "num_output_records": 5523, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367308429740, "job": 76, "event": "table_file_deletion", "file_number": 127}
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367308430192, "job": 76, "event": "table_file_deletion", "file_number": 125}
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.393263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.430296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.430303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.430304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.430306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:55:08.430307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:55:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41521 DF PROTO=TCP SPT=39620 DPT=9100 SEQ=3738178232 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA24C370000000001030307) 
Oct 13 14:55:08 standalone.localdomain ceph-mon[29756]: pgmap v2417: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2418: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:10 standalone.localdomain ceph-mon[29756]: pgmap v2418: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41522 DF PROTO=TCP SPT=39620 DPT=9100 SEQ=3738178232 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA254370000000001030307) 
Oct 13 14:55:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2419: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:12 standalone.localdomain ceph-mon[29756]: pgmap v2419: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2420: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:55:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12063 DF PROTO=TCP SPT=53706 DPT=9105 SEQ=1038700257 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA261F60000000001030307) 
Oct 13 14:55:14 standalone.localdomain ceph-mon[29756]: pgmap v2420: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2421: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:16 standalone.localdomain ceph-mon[29756]: pgmap v2421: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2422: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:55:18 standalone.localdomain ceph-mon[29756]: pgmap v2422: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2423: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:55:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3431191255' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:55:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:55:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3431191255' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:55:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4600 DF PROTO=TCP SPT=55764 DPT=9882 SEQ=4029093215 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA274360000000001030307) 
Oct 13 14:55:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3431191255' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:55:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3431191255' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:55:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12065 DF PROTO=TCP SPT=53706 DPT=9105 SEQ=1038700257 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA279B70000000001030307) 
Oct 13 14:55:20 standalone.localdomain ceph-mon[29756]: pgmap v2423: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2424: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:22 standalone.localdomain ceph-mon[29756]: pgmap v2424: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2425: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:55:23
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'vms', 'images', '.mgr', 'volumes', 'manila_metadata', 'manila_data']
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:55:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:55:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:55:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10051 DF PROTO=TCP SPT=51016 DPT=9102 SEQ=726734184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA2873F0000000001030307) 
Oct 13 14:55:24 standalone.localdomain ceph-mon[29756]: pgmap v2425: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2426: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55525 DF PROTO=TCP SPT=59758 DPT=9101 SEQ=957730427 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA28ECE0000000001030307) 
Oct 13 14:55:26 standalone.localdomain ceph-mon[29756]: pgmap v2426: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2427: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:27 standalone.localdomain ceph-mon[29756]: pgmap v2427: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2428: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55527 DF PROTO=TCP SPT=59758 DPT=9101 SEQ=957730427 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA29AF60000000001030307) 
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:55:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:55:30 standalone.localdomain ceph-mon[29756]: pgmap v2428: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2429: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:32 standalone.localdomain ceph-mon[29756]: pgmap v2429: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2430: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55528 DF PROTO=TCP SPT=59758 DPT=9101 SEQ=957730427 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA2AAB70000000001030307) 
Oct 13 14:55:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:55:33 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 14:55:33 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 14:55:34 standalone.localdomain ceph-mon[29756]: pgmap v2430: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2431: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:36 standalone.localdomain ceph-mon[29756]: pgmap v2431: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2432: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28293 DF PROTO=TCP SPT=33246 DPT=9100 SEQ=2622509243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA2BD550000000001030307) 
Oct 13 14:55:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:55:38 standalone.localdomain ceph-mon[29756]: pgmap v2432: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28294 DF PROTO=TCP SPT=33246 DPT=9100 SEQ=2622509243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA2C1760000000001030307) 
Oct 13 14:55:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2433: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:55:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:55:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:55:38 standalone.localdomain systemd[1]: tmp-crun.EnqCe2.mount: Deactivated successfully.
Oct 13 14:55:38 standalone.localdomain podman[338350]: 2025-10-13 14:55:38.810639364 +0000 UTC m=+0.078914484 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T14:56:28, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, version=17.1.9, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:55:38 standalone.localdomain podman[338351]: 2025-10-13 14:55:38.862518351 +0000 UTC m=+0.124435575 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, release=1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, name=rhosp17/openstack-swift-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32)
Oct 13 14:55:38 standalone.localdomain podman[338352]: 2025-10-13 14:55:38.90937474 +0000 UTC m=+0.169983573 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, tcib_managed=true, build-date=2025-07-21T16:11:22, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=swift_account_server, managed_by=tripleo_ansible, config_id=tripleo_step4)
Oct 13 14:55:39 standalone.localdomain podman[338350]: 2025-10-13 14:55:39.043960247 +0000 UTC m=+0.312235407 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-type=git, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=openstack-swift-object-container, distribution-scope=public, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12, tcib_managed=true)
Oct 13 14:55:39 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:55:39 standalone.localdomain podman[338351]: 2025-10-13 14:55:39.06794061 +0000 UTC m=+0.329857844 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container, tcib_managed=true, version=17.1.9, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 14:55:39 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:55:39 standalone.localdomain podman[338352]: 2025-10-13 14:55:39.096742411 +0000 UTC m=+0.357351214 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, container_name=swift_account_server, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4)
Oct 13 14:55:39 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:55:39 standalone.localdomain sudo[338433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:55:39 standalone.localdomain sudo[338433]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:55:39 standalone.localdomain sudo[338433]: pam_unix(sudo:session): session closed for user root
Oct 13 14:55:40 standalone.localdomain sudo[338448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:55:40 standalone.localdomain sudo[338448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:55:40 standalone.localdomain ceph-mon[29756]: pgmap v2433: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28295 DF PROTO=TCP SPT=33246 DPT=9100 SEQ=2622509243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA2C9770000000001030307) 
Oct 13 14:55:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2434: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:40 standalone.localdomain sudo[338448]: pam_unix(sudo:session): session closed for user root
Oct 13 14:55:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:55:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:55:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:55:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:55:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:55:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:55:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:55:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:55:40 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev b02d55d4-d577-4514-acc5-13d65f30db62 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:55:40 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev b02d55d4-d577-4514-acc5-13d65f30db62 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:55:40 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event b02d55d4-d577-4514-acc5-13d65f30db62 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:55:40 standalone.localdomain sudo[338500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:55:40 standalone.localdomain sudo[338500]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:55:40 standalone.localdomain sudo[338500]: pam_unix(sudo:session): session closed for user root
Oct 13 14:55:41 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:55:41 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:55:41 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:55:41 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:55:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2435: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:42 standalone.localdomain ceph-mon[29756]: pgmap v2434: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:55:44 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59588 DF PROTO=TCP SPT=49278 DPT=9105 SEQ=2639493903 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA2D7360000000001030307) 
Oct 13 14:55:44 standalone.localdomain ceph-mon[29756]: pgmap v2435: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2436: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:44 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:55:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:55:44 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:55:45 standalone.localdomain ceph-mon[29756]: pgmap v2436: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:45 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:55:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2437: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:47 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 14:55:47 standalone.localdomain object-server[338799]: Object update sweep starting on /srv/node/d1 (pid: 22)
Oct 13 14:55:47 standalone.localdomain object-server[338799]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 22)
Oct 13 14:55:47 standalone.localdomain object-server[338799]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 14:55:47 standalone.localdomain object-server[114601]: Object update sweep completed: 0.06s
Oct 13 14:55:47 standalone.localdomain ceph-mon[29756]: pgmap v2437: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:55:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2438: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10731 DF PROTO=TCP SPT=43918 DPT=9882 SEQ=2141242510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA2E9760000000001030307) 
Oct 13 14:55:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59590 DF PROTO=TCP SPT=49278 DPT=9105 SEQ=2639493903 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA2EEF70000000001030307) 
Oct 13 14:55:50 standalone.localdomain ceph-mon[29756]: pgmap v2438: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2439: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:52 standalone.localdomain ceph-mon[29756]: pgmap v2439: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2440: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:55:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:55:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:55:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:55:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:55:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:55:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:55:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15064 DF PROTO=TCP SPT=51724 DPT=9102 SEQ=2593321117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA2FC6E0000000001030307) 
Oct 13 14:55:54 standalone.localdomain ceph-mon[29756]: pgmap v2440: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2441: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25401 DF PROTO=TCP SPT=57152 DPT=9101 SEQ=2088189727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA303FD0000000001030307) 
Oct 13 14:55:56 standalone.localdomain ceph-mon[29756]: pgmap v2441: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2442: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:55:58 standalone.localdomain ceph-mon[29756]: pgmap v2442: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:55:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25403 DF PROTO=TCP SPT=57152 DPT=9101 SEQ=2088189727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA30FF60000000001030307) 
Oct 13 14:55:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2443: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:00 standalone.localdomain ceph-mon[29756]: pgmap v2443: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2444: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:02 standalone.localdomain ceph-mon[29756]: pgmap v2444: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2445: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25404 DF PROTO=TCP SPT=57152 DPT=9101 SEQ=2088189727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA31FB60000000001030307) 
Oct 13 14:56:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:56:04 standalone.localdomain ceph-mon[29756]: pgmap v2445: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2446: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:06 standalone.localdomain ceph-mon[29756]: pgmap v2446: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2447: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 14:56:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 4800.0 total, 600.0 interval
                                                        Cumulative writes: 12K writes, 57K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s
                                                        Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.04 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1512 writes, 6857 keys, 1512 commit groups, 1.0 writes per commit group, ingest: 5.75 MB, 0.01 MB/s
                                                        Interval WAL: 1512 writes, 1512 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     75.1      0.49              0.13        38    0.013       0      0       0.0       0.0
                                                          L6      1/0    5.56 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.7    198.3    167.8      1.01              0.49        37    0.027    171K    20K       0.0       0.0
                                                         Sum      1/0    5.56 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   5.7    134.1    137.8      1.50              0.61        75    0.020    171K    20K       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.0    159.2    162.3      0.21              0.10        10    0.021     29K   2593       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    198.3    167.8      1.01              0.49        37    0.027    171K    20K       0.0       0.0
                                                        High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     75.5      0.48              0.13        37    0.013       0      0       0.0       0.0
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 4800.0 total, 600.0 interval
                                                        Flush(GB): cumulative 0.036, interval 0.005
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.20 GB write, 0.04 MB/s write, 0.20 GB read, 0.04 MB/s read, 1.5 seconds
                                                        Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.2 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x557b7fb5b350#2 capacity: 308.00 MB usage: 27.76 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000442 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(2529,26.60 MB,8.6371%) FilterBlock(76,475.61 KB,0.150799%) IndexBlock(76,706.36 KB,0.223962%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
Oct 13 14:56:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18148 DF PROTO=TCP SPT=52420 DPT=9100 SEQ=1387208195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA332840000000001030307) 
Oct 13 14:56:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18149 DF PROTO=TCP SPT=52420 DPT=9100 SEQ=1387208195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA336770000000001030307) 
Oct 13 14:56:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:56:08 standalone.localdomain ceph-mon[29756]: pgmap v2447: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2448: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:08 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=22 res=1
Oct 13 14:56:09 standalone.localdomain ceph-mon[29756]: pgmap v2448: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:09 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=23 res=1
Oct 13 14:56:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:56:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:56:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:56:09 standalone.localdomain systemd[1]: tmp-crun.VuIP6t.mount: Deactivated successfully.
Oct 13 14:56:09 standalone.localdomain podman[338855]: 2025-10-13 14:56:09.816579905 +0000 UTC m=+0.076696955 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, version=17.1.9, config_id=tripleo_step4, container_name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, managed_by=tripleo_ansible, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-account, architecture=x86_64, batch=17.1_20250721.1)
Oct 13 14:56:09 standalone.localdomain podman[338854]: 2025-10-13 14:56:09.87552956 +0000 UTC m=+0.135692061 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, name=rhosp17/openstack-swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, 
io.buildah.version=1.33.12, distribution-scope=public, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T15:54:32, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=)
Oct 13 14:56:09 standalone.localdomain podman[338853]: 2025-10-13 14:56:09.930041278 +0000 UTC m=+0.189985813 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 14:56:09 standalone.localdomain podman[338855]: 2025-10-13 14:56:09.997842157 +0000 UTC m=+0.257959207 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, container_name=swift_account_server, io.openshift.expose-services=, release=1, distribution-scope=public, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 14:56:10 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:56:10 standalone.localdomain podman[338854]: 2025-10-13 14:56:10.087785512 +0000 UTC m=+0.347948003 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, batch=17.1_20250721.1, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, com.redhat.component=openstack-swift-container-container, version=17.1.9)
Oct 13 14:56:10 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:56:10 standalone.localdomain podman[338853]: 2025-10-13 14:56:10.121030761 +0000 UTC m=+0.380975276 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, release=1, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T14:56:28)
Oct 13 14:56:10 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:56:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18150 DF PROTO=TCP SPT=52420 DPT=9100 SEQ=1387208195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA33E760000000001030307) 
Oct 13 14:56:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2449: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:10 standalone.localdomain kernel: SELinux:  Converting 2962 SID table entries...
Oct 13 14:56:10 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 14:56:10 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 14:56:10 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 14:56:10 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 14:56:10 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 14:56:10 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 14:56:10 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 14:56:11 standalone.localdomain ceph-mon[29756]: pgmap v2449: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2450: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:13 standalone.localdomain python3.9[339138]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:56:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:56:13 standalone.localdomain python3.9[339228]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 14:56:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5998 DF PROTO=TCP SPT=54286 DPT=9105 SEQ=865129118 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA34C360000000001030307) 
Oct 13 14:56:14 standalone.localdomain python3.9[339299]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760367373.1581023-147-59977134028979/.source.fact _original_basename=.5k16f9m1 follow=False checksum=03aee63dcf9b49b0ac4473b2f1a1b5d3783aa639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:56:14 standalone.localdomain ceph-mon[29756]: pgmap v2450: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2451: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:14 standalone.localdomain python3.9[339389]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 14:56:15 standalone.localdomain python3.9[339485]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 14:56:16 standalone.localdomain ceph-mon[29756]: pgmap v2451: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2452: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:16 standalone.localdomain python3.9[339537]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 14:56:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:56:18 standalone.localdomain ceph-mon[29756]: pgmap v2452: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:56:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2513480824' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:56:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:56:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2513480824' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:56:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2453: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4459 DF PROTO=TCP SPT=57048 DPT=9882 SEQ=3856244462 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA35EB60000000001030307) 
Oct 13 14:56:18 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=24 res=1
Oct 13 14:56:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2513480824' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:56:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2513480824' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:56:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6000 DF PROTO=TCP SPT=54286 DPT=9105 SEQ=865129118 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA363F60000000001030307) 
Oct 13 14:56:20 standalone.localdomain ceph-mon[29756]: pgmap v2453: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2454: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:22 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:56:22 standalone.localdomain systemd-rc-local-generator[339569]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:56:22 standalone.localdomain systemd-sysv-generator[339575]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:56:22 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:56:22 standalone.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 14:56:22 standalone.localdomain ceph-mon[29756]: pgmap v2454: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2455: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:56:23
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', '.mgr', 'backups', 'manila_data', 'volumes', 'vms', 'images']
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:56:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:56:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:56:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62201 DF PROTO=TCP SPT=52494 DPT=9102 SEQ=394859141 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA3719E0000000001030307) 
Oct 13 14:56:24 standalone.localdomain python3.9[339675]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:56:24 standalone.localdomain account-server[114563]: 172.20.0.100 - - [13/Oct/2025:14:56:24 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx4c9f744165e747808a71f-0068ed1318" "proxy-server 2" 0.0007 "-" 21 -
Oct 13 14:56:24 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx4c9f744165e747808a71f-0068ed1318)
Oct 13 14:56:24 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx4c9f744165e747808a71f-0068ed1318)
Oct 13 14:56:24 standalone.localdomain ceph-mon[29756]: pgmap v2455: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2456: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22746 DF PROTO=TCP SPT=55650 DPT=9101 SEQ=362969609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA3792E0000000001030307) 
Oct 13 14:56:26 standalone.localdomain python3.9[339936]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 13 14:56:26 standalone.localdomain ceph-mon[29756]: pgmap v2456: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2457: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:27 standalone.localdomain python3.9[340026]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 13 14:56:27 standalone.localdomain ceph-mon[29756]: pgmap v2457: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:28 standalone.localdomain python3.9[340117]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:56:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:56:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22748 DF PROTO=TCP SPT=55650 DPT=9101 SEQ=362969609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA385370000000001030307) 
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2458: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:56:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:56:29 standalone.localdomain python3.9[340207]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 13 14:56:30 standalone.localdomain python3.9[340297]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 14:56:30 standalone.localdomain ceph-mon[29756]: pgmap v2458: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2459: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:30 standalone.localdomain python3.9[340387]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 14:56:31 standalone.localdomain python3.9[340458]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367390.45813-255-144002200752213/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=928926d2c02810deba82cc2155c15efe3a7bcb96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:56:32 standalone.localdomain python3.9[340548]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 13 14:56:32 standalone.localdomain ceph-mon[29756]: pgmap v2459: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2460: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22749 DF PROTO=TCP SPT=55650 DPT=9101 SEQ=362969609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA394F60000000001030307) 
Oct 13 14:56:33 standalone.localdomain python3.9[340639]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 13 14:56:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:56:33 standalone.localdomain python3.9[340730]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 13 14:56:33 standalone.localdomain groupmod[340731]: group changed in /etc/group (group hugetlbfs/985, new gid: 42477)
Oct 13 14:56:33 standalone.localdomain groupmod[340731]: group changed in /etc/passwd (group hugetlbfs/985, new gid: 42477)
Oct 13 14:56:34 standalone.localdomain ceph-mon[29756]: pgmap v2460: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2461: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:34 standalone.localdomain python3.9[340826]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 13 14:56:35 standalone.localdomain python3.9[340916]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 14:56:36 standalone.localdomain ceph-mon[29756]: pgmap v2461: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2462: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34343 DF PROTO=TCP SPT=41484 DPT=9100 SEQ=2393422320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA3A7B40000000001030307) 
Oct 13 14:56:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34344 DF PROTO=TCP SPT=41484 DPT=9100 SEQ=2393422320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA3ABB60000000001030307) 
Oct 13 14:56:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:56:38 standalone.localdomain ceph-mon[29756]: pgmap v2462: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2463: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:40 standalone.localdomain python3.9[341008]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 14:56:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34345 DF PROTO=TCP SPT=41484 DPT=9100 SEQ=2393422320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA3B3B70000000001030307) 
Oct 13 14:56:40 standalone.localdomain ceph-mon[29756]: pgmap v2463: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2464: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:56:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:56:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:56:40 standalone.localdomain systemd[1]: tmp-crun.4Z0xVs.mount: Deactivated successfully.
Oct 13 14:56:40 standalone.localdomain sudo[341132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:56:40 standalone.localdomain sudo[341132]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:56:40 standalone.localdomain sudo[341132]: pam_unix(sudo:session): session closed for user root
Oct 13 14:56:40 standalone.localdomain podman[341101]: 2025-10-13 14:56:40.859986586 +0000 UTC m=+0.125420885 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, batch=17.1_20250721.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, name=rhosp17/openstack-swift-account, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, tcib_managed=true)
Oct 13 14:56:40 standalone.localdomain podman[341100]: 2025-10-13 14:56:40.834382982 +0000 UTC m=+0.092625649 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-container, release=1, container_name=swift_container_server, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, version=17.1.9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, build-date=2025-07-21T15:54:32)
Oct 13 14:56:40 standalone.localdomain python3.9[341098]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 14:56:40 standalone.localdomain sudo[341169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 14:56:40 standalone.localdomain sudo[341169]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:56:40 standalone.localdomain podman[341099]: 2025-10-13 14:56:40.812588328 +0000 UTC m=+0.077863052 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, distribution-scope=public, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_object_server, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, tcib_managed=true, version=17.1.9)
Oct 13 14:56:40 standalone.localdomain podman[341099]: 2025-10-13 14:56:40.977553415 +0000 UTC m=+0.242828119 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, architecture=x86_64, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1, build-date=2025-07-21T14:56:28, distribution-scope=public, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 14:56:40 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:56:40 standalone.localdomain podman[341100]: 2025-10-13 14:56:40.998565015 +0000 UTC m=+0.256807662 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, container_name=swift_container_server, name=rhosp17/openstack-swift-container, config_id=tripleo_step4, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-container-container)
Oct 13 14:56:41 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:56:41 standalone.localdomain podman[341101]: 2025-10-13 14:56:41.034912621 +0000 UTC m=+0.300346940 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, config_id=tripleo_step4, tcib_managed=true, 
io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, maintainer=OpenStack TripleO Team)
Oct 13 14:56:41 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:56:41 standalone.localdomain python3.9[341278]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/root/.ansible/tmp/ansible-tmp-1760367400.4312382-336-35065725587362/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 14:56:41 standalone.localdomain podman[341399]: 2025-10-13 14:56:41.710805336 +0000 UTC m=+0.068688547 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, architecture=x86_64, RELEASE=main, GIT_CLEAN=True, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, vendor=Red Hat, Inc., release=553, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55)
Oct 13 14:56:41 standalone.localdomain podman[341399]: 2025-10-13 14:56:41.846071594 +0000 UTC m=+0.203954805 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, GIT_BRANCH=main, release=553, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.expose-services=, name=rhceph, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, RELEASE=main, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., version=7, vcs-type=git)
Oct 13 14:56:42 standalone.localdomain sudo[341169]: pam_unix(sudo:session): session closed for user root
Oct 13 14:56:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:56:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:56:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:56:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:56:42 standalone.localdomain sudo[341547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:56:42 standalone.localdomain sudo[341547]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:56:42 standalone.localdomain sudo[341547]: pam_unix(sudo:session): session closed for user root
Oct 13 14:56:42 standalone.localdomain sudo[341562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:56:42 standalone.localdomain sudo[341562]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:56:42 standalone.localdomain python3.9[341516]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 14:56:42 standalone.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 14:56:42 standalone.localdomain systemd[1]: Stopped Load Kernel Modules.
Oct 13 14:56:42 standalone.localdomain systemd[1]: Stopping Load Kernel Modules...
Oct 13 14:56:42 standalone.localdomain systemd[1]: Starting Load Kernel Modules...
Oct 13 14:56:42 standalone.localdomain systemd-modules-load[341580]: Module 'msr' is built in
Oct 13 14:56:42 standalone.localdomain systemd[1]: Finished Load Kernel Modules.
Oct 13 14:56:42 standalone.localdomain ceph-mon[29756]: pgmap v2464: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:42 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:56:42 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:56:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2465: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:42 standalone.localdomain sudo[341562]: pam_unix(sudo:session): session closed for user root
Oct 13 14:56:43 standalone.localdomain python3.9[341684]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:56:43 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 7f5dfaf3-ec1a-4ebc-89d9-1670e22045e6 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:56:43 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 7f5dfaf3-ec1a-4ebc-89d9-1670e22045e6 (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:56:43 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 7f5dfaf3-ec1a-4ebc-89d9-1670e22045e6 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:56:43 standalone.localdomain sudo[341702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:56:43 standalone.localdomain sudo[341702]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:56:43 standalone.localdomain sudo[341702]: pam_unix(sudo:session): session closed for user root
Oct 13 14:56:43 standalone.localdomain python3.9[341787]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/root/.ansible/tmp/ansible-tmp-1760367402.6105256-359-131071458883641/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.516241) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367403516281, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1158, "num_deletes": 251, "total_data_size": 951019, "memory_usage": 972952, "flush_reason": "Manual Compaction"}
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367403523205, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 933643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57515, "largest_seqno": 58672, "table_properties": {"data_size": 928739, "index_size": 2440, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 9779, "raw_average_key_size": 17, "raw_value_size": 918739, "raw_average_value_size": 1655, "num_data_blocks": 109, "num_entries": 555, "num_filter_entries": 555, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760367308, "oldest_key_time": 1760367308, "file_creation_time": 1760367403, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 7049 microseconds, and 2959 cpu microseconds.
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.523264) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 933643 bytes OK
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.523307) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.525271) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.525304) EVENT_LOG_v1 {"time_micros": 1760367403525294, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.525332) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 945614, prev total WAL file size 945614, number of live WAL files 2.
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.526435) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(911KB)], [128(5691KB)]
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367403526475, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 6761453, "oldest_snapshot_seqno": -1}
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 5558 keys, 6034077 bytes, temperature: kUnknown
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367403550239, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 6034077, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5999552, "index_size": 19515, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13957, "raw_key_size": 143321, "raw_average_key_size": 25, "raw_value_size": 5901108, "raw_average_value_size": 1061, "num_data_blocks": 783, "num_entries": 5558, "num_filter_entries": 5558, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760367403, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.550470) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 6034077 bytes
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.552057) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 285.0 rd, 254.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 5.6 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(13.7) write-amplify(6.5) OK, records in: 6078, records dropped: 520 output_compression: NoCompression
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.552077) EVENT_LOG_v1 {"time_micros": 1760367403552068, "job": 78, "event": "compaction_finished", "compaction_time_micros": 23721, "compaction_time_cpu_micros": 10690, "output_level": 6, "num_output_files": 1, "total_output_size": 6034077, "num_input_records": 6078, "num_output_records": 5558, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367403552288, "job": 78, "event": "table_file_deletion", "file_number": 130}
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367403552870, "job": 78, "event": "table_file_deletion", "file_number": 128}
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.526134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.552991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.552999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.553002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.553005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:56:43 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:56:43.553008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:56:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18329 DF PROTO=TCP SPT=34700 DPT=9105 SEQ=2971260495 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA3C1760000000001030307) 
Oct 13 14:56:44 standalone.localdomain python3.9[341877]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 14:56:44 standalone.localdomain ceph-mon[29756]: pgmap v2465: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2466: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:44 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:56:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:56:44 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:56:45 standalone.localdomain ceph-mon[29756]: pgmap v2466: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:45 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:56:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2467: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:47 standalone.localdomain ceph-mon[29756]: pgmap v2467: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:56:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2468: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3752 DF PROTO=TCP SPT=38750 DPT=9882 SEQ=3323323331 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA3D3B60000000001030307) 
Oct 13 14:56:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18331 DF PROTO=TCP SPT=34700 DPT=9105 SEQ=2971260495 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA3D9360000000001030307) 
Oct 13 14:56:50 standalone.localdomain ceph-mon[29756]: pgmap v2468: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2469: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:51 standalone.localdomain python3.9[341970]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:56:52 standalone.localdomain ceph-mon[29756]: pgmap v2469: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2470: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:52 standalone.localdomain python3.9[342062]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 13 14:56:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:56:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:56:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:56:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:56:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:56:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:56:53 standalone.localdomain python3.9[342152]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:56:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35687 DF PROTO=TCP SPT=45006 DPT=9102 SEQ=3089752422 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA3E6CE0000000001030307) 
Oct 13 14:56:54 standalone.localdomain python3.9[342242]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:56:54 standalone.localdomain ceph-mon[29756]: pgmap v2470: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:54 standalone.localdomain systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 13 14:56:54 standalone.localdomain systemd[1]: tuned.service: Deactivated successfully.
Oct 13 14:56:54 standalone.localdomain systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 13 14:56:54 standalone.localdomain systemd[1]: tuned.service: Consumed 1.426s CPU time, no IO.
Oct 13 14:56:54 standalone.localdomain systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 13 14:56:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2471: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:56:55 standalone.localdomain systemd[1]: Started Dynamic System Tuning Daemon.
Oct 13 14:56:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32429 DF PROTO=TCP SPT=54246 DPT=9101 SEQ=3438509025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA3EE5D0000000001030307) 
Oct 13 14:56:56 standalone.localdomain python3.9[342344]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 13 14:56:56 standalone.localdomain ceph-mon[29756]: pgmap v2471: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2472: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:58 standalone.localdomain python3.9[342434]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:56:58 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:56:58 standalone.localdomain systemd-sysv-generator[342464]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:56:58 standalone.localdomain systemd-rc-local-generator[342460]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:56:58 standalone.localdomain ceph-mon[29756]: pgmap v2472: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:58 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:56:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32431 DF PROTO=TCP SPT=54246 DPT=9101 SEQ=3438509025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA3FA760000000001030307) 
Oct 13 14:56:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2473: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:56:59 standalone.localdomain python3.9[342561]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 14:56:59 standalone.localdomain systemd[1]: Reloading.
Oct 13 14:56:59 standalone.localdomain systemd-rc-local-generator[342585]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 14:56:59 standalone.localdomain systemd-sysv-generator[342591]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 14:56:59 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 14:56:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:57:00 standalone.localdomain python3.9[342689]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:57:00 standalone.localdomain ceph-mon[29756]: pgmap v2473: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2474: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:00 standalone.localdomain python3.9[342780]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:57:00 standalone.localdomain kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k FS
Oct 13 14:57:01 standalone.localdomain python3.9[342871]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:57:02 standalone.localdomain ceph-mon[29756]: pgmap v2474: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2475: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32432 DF PROTO=TCP SPT=54246 DPT=9101 SEQ=3438509025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA40A360000000001030307) 
Oct 13 14:57:02 standalone.localdomain python3.9[342968]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:57:03 standalone.localdomain python3.9[343059]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 14:57:03 standalone.localdomain systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 14:57:03 standalone.localdomain systemd[1]: Stopped Apply Kernel Variables.
Oct 13 14:57:03 standalone.localdomain systemd[1]: Stopping Apply Kernel Variables...
Oct 13 14:57:03 standalone.localdomain systemd[1]: Starting Apply Kernel Variables...
Oct 13 14:57:03 standalone.localdomain systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 13 14:57:03 standalone.localdomain systemd[1]: Finished Apply Kernel Variables.
Oct 13 14:57:04 standalone.localdomain sshd[336095]: pam_unix(sshd:session): session closed for user root
Oct 13 14:57:04 standalone.localdomain systemd[1]: session-277.scope: Deactivated successfully.
Oct 13 14:57:04 standalone.localdomain systemd[1]: session-277.scope: Consumed 2min 20.644s CPU time.
Oct 13 14:57:04 standalone.localdomain systemd-logind[45629]: Session 277 logged out. Waiting for processes to exit.
Oct 13 14:57:04 standalone.localdomain systemd-logind[45629]: Removed session 277.
Oct 13 14:57:04 standalone.localdomain ceph-mon[29756]: pgmap v2475: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2476: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:57:06 standalone.localdomain ceph-mon[29756]: pgmap v2476: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2477: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60344 DF PROTO=TCP SPT=36376 DPT=9100 SEQ=1025391095 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA41CE60000000001030307) 
Oct 13 14:57:08 standalone.localdomain ceph-mon[29756]: pgmap v2477: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60345 DF PROTO=TCP SPT=36376 DPT=9100 SEQ=1025391095 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA420F60000000001030307) 
Oct 13 14:57:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2478: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 14:57:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 4800.1 total, 600.0 interval
                                                        Cumulative writes: 6951 writes, 30K keys, 6951 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                                        Cumulative WAL: 6951 writes, 1290 syncs, 5.39 writes per sync, written: 0.03 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                        Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 13 14:57:09 standalone.localdomain sshd[343079]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:57:09 standalone.localdomain sshd[343079]: Accepted publickey for root from 192.168.122.30 port 57558 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:57:09 standalone.localdomain systemd-logind[45629]: New session 279 of user root.
Oct 13 14:57:09 standalone.localdomain systemd[1]: Started Session 279 of User root.
Oct 13 14:57:09 standalone.localdomain sshd[343079]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:57:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:57:10 standalone.localdomain ceph-mon[29756]: pgmap v2478: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60346 DF PROTO=TCP SPT=36376 DPT=9100 SEQ=1025391095 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA428F60000000001030307) 
Oct 13 14:57:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2479: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:10 standalone.localdomain python3.9[343172]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 14:57:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:57:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:57:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:57:11 standalone.localdomain systemd[1]: tmp-crun.mHKWu5.mount: Deactivated successfully.
Oct 13 14:57:11 standalone.localdomain podman[343268]: 2025-10-13 14:57:11.82762487 +0000 UTC m=+0.088264814 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, vcs-type=git, distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, io.buildah.version=1.33.12, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, summary=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 14:57:11 standalone.localdomain podman[343269]: 2025-10-13 14:57:11.868292208 +0000 UTC m=+0.127780226 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-swift-account, io.buildah.version=1.33.12, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 
swift-account, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true)
Oct 13 14:57:11 standalone.localdomain podman[343267]: 2025-10-13 14:57:11.918955757 +0000 UTC m=+0.179595881 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.component=openstack-swift-object-container, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, container_name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc.)
Oct 13 14:57:11 standalone.localdomain podman[343268]: 2025-10-13 14:57:11.987697635 +0000 UTC m=+0.248337569 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.component=openstack-swift-container-container, summary=Red Hat OpenStack Platform 17.1 swift-container, release=1, config_id=tripleo_step4, container_name=swift_container_server, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, vcs-type=git)
Oct 13 14:57:11 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:57:12 standalone.localdomain python3.9[343266]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 14:57:12 standalone.localdomain podman[343269]: 2025-10-13 14:57:12.068041213 +0000 UTC m=+0.327529291 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_id=tripleo_step4, version=17.1.9, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, com.redhat.component=openstack-swift-account-container, managed_by=tripleo_ansible, container_name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1)
Oct 13 14:57:12 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:57:12 standalone.localdomain podman[343267]: 2025-10-13 14:57:12.127926517 +0000 UTC m=+0.388566661 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, architecture=x86_64, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, release=1, vendor=Red Hat, Inc., vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, tcib_managed=true)
Oct 13 14:57:12 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:57:12 standalone.localdomain ceph-mon[29756]: pgmap v2479: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2480: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:13 standalone.localdomain python3.9[343441]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                                           _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:57:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5944 DF PROTO=TCP SPT=47038 DPT=9105 SEQ=662375251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA436B60000000001030307) 
Oct 13 14:57:14 standalone.localdomain python3.9[343534]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 14:57:14 standalone.localdomain ceph-mon[29756]: pgmap v2480: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2481: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:57:15 standalone.localdomain python3.9[343628]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 14:57:15 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Check health
Oct 13 14:57:16 standalone.localdomain python3.9[343680]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 14:57:16 standalone.localdomain ceph-mon[29756]: pgmap v2481: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2482: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:18 standalone.localdomain ceph-mon[29756]: pgmap v2482: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2483: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:57:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2797804207' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:57:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:57:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2797804207' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:57:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65531 DF PROTO=TCP SPT=59224 DPT=9882 SEQ=339851634 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA448F60000000001030307) 
Oct 13 14:57:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2797804207' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:57:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2797804207' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:57:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:57:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5946 DF PROTO=TCP SPT=47038 DPT=9105 SEQ=662375251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA44E760000000001030307) 
Oct 13 14:57:20 standalone.localdomain ceph-mon[29756]: pgmap v2483: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:20 standalone.localdomain python3.9[343772]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 14:57:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2484: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:21 standalone.localdomain python3.9[343925]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:57:22 standalone.localdomain python3.9[344015]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                                           _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:57:22 standalone.localdomain ceph-mon[29756]: pgmap v2484: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2485: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:23 standalone.localdomain python3.9[344116]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:57:23
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', 'images', 'backups', 'manila_data', 'vms', '.mgr', 'manila_metadata']
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:57:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:57:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5220 DF PROTO=TCP SPT=37190 DPT=9102 SEQ=4254134908 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA45BFF0000000001030307) 
Oct 13 14:57:23 standalone.localdomain python3.9[344162]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:57:24 standalone.localdomain python3.9[344252]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 14:57:24 standalone.localdomain ceph-mon[29756]: pgmap v2485: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2486: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:24 standalone.localdomain python3.9[344323]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/root/.ansible/tmp/ansible-tmp-1760367443.8214285-121-164335584968622/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 14:57:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:57:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5272 DF PROTO=TCP SPT=33042 DPT=9101 SEQ=4059034374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA4638E0000000001030307) 
Oct 13 14:57:25 standalone.localdomain python3.9[344413]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 14:57:26 standalone.localdomain python3.9[344503]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 14:57:26 standalone.localdomain ceph-mon[29756]: pgmap v2486: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2487: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:26 standalone.localdomain python3.9[344593]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 14:57:27 standalone.localdomain python3.9[344683]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 14:57:28 standalone.localdomain python3.9[344773]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 14:57:28 standalone.localdomain ceph-mon[29756]: pgmap v2487: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5274 DF PROTO=TCP SPT=33042 DPT=9101 SEQ=4059034374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA46FB60000000001030307) 
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2488: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:28 standalone.localdomain python3.9[344865]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:57:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:57:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:57:30 standalone.localdomain ceph-mon[29756]: pgmap v2488: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2489: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:32 standalone.localdomain ceph-mon[29756]: pgmap v2489: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2490: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5275 DF PROTO=TCP SPT=33042 DPT=9101 SEQ=4059034374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA47F760000000001030307) 
Oct 13 14:57:33 standalone.localdomain python3.9[344957]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 14:57:34 standalone.localdomain ceph-mon[29756]: pgmap v2490: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2491: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:57:36 standalone.localdomain ceph-mon[29756]: pgmap v2491: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2492: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47366 DF PROTO=TCP SPT=33806 DPT=9100 SEQ=1081459683 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA492140000000001030307) 
Oct 13 14:57:37 standalone.localdomain python3.9[345049]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 14:57:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47367 DF PROTO=TCP SPT=33806 DPT=9100 SEQ=1081459683 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA496360000000001030307) 
Oct 13 14:57:38 standalone.localdomain ceph-mon[29756]: pgmap v2492: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2493: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:57:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47368 DF PROTO=TCP SPT=33806 DPT=9100 SEQ=1081459683 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA49E360000000001030307) 
Oct 13 14:57:40 standalone.localdomain ceph-mon[29756]: pgmap v2493: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2494: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:42 standalone.localdomain python3.9[345141]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 14:57:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:57:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:57:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:57:42 standalone.localdomain ceph-mon[29756]: pgmap v2494: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:42 standalone.localdomain podman[345144]: 2025-10-13 14:57:42.570415393 +0000 UTC m=+0.066730747 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, distribution-scope=public, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container)
Oct 13 14:57:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2495: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:42 standalone.localdomain systemd[1]: tmp-crun.Kh3yab.mount: Deactivated successfully.
Oct 13 14:57:42 standalone.localdomain podman[345145]: 2025-10-13 14:57:42.633648191 +0000 UTC m=+0.129665326 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.component=openstack-swift-account-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, container_name=swift_account_server, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 13 14:57:42 standalone.localdomain podman[345143]: 2025-10-13 14:57:42.689618554 +0000 UTC m=+0.185506495 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.openshift.expose-services=, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-object-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, container_name=swift_object_server, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T14:56:28, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:57:42 standalone.localdomain podman[345144]: 2025-10-13 14:57:42.748842927 +0000 UTC m=+0.245158311 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, version=17.1.9, name=rhosp17/openstack-swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-container-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server)
Oct 13 14:57:42 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:57:42 standalone.localdomain podman[345145]: 2025-10-13 14:57:42.823206049 +0000 UTC m=+0.319223144 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, build-date=2025-07-21T16:11:22, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, tcib_managed=true, container_name=swift_account_server, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container)
Oct 13 14:57:42 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:57:42 standalone.localdomain podman[345143]: 2025-10-13 14:57:42.908752968 +0000 UTC m=+0.404640899 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1, vcs-type=git, build-date=2025-07-21T14:56:28, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, architecture=x86_64, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-object-container, maintainer=OpenStack TripleO Team, version=17.1.9, container_name=swift_object_server, distribution-scope=public, config_id=tripleo_step4)
Oct 13 14:57:42 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:57:43 standalone.localdomain sudo[345222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:57:43 standalone.localdomain sudo[345222]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:57:43 standalone.localdomain sudo[345222]: pam_unix(sudo:session): session closed for user root
Oct 13 14:57:43 standalone.localdomain sudo[345237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:57:43 standalone.localdomain sudo[345237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:57:43 standalone.localdomain sudo[345237]: pam_unix(sudo:session): session closed for user root
Oct 13 14:57:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:57:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:57:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:57:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:57:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:57:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:57:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:57:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:57:43 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev fc585818-a0da-47a4-87e5-a35ca5de69fb (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:57:43 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev fc585818-a0da-47a4-87e5-a35ca5de69fb (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:57:43 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event fc585818-a0da-47a4-87e5-a35ca5de69fb (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:57:43 standalone.localdomain sudo[345284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:57:43 standalone.localdomain sudo[345284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:57:43 standalone.localdomain sudo[345284]: pam_unix(sudo:session): session closed for user root
Oct 13 14:57:44 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19250 DF PROTO=TCP SPT=36698 DPT=9105 SEQ=2567276489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA4ABF70000000001030307) 
Oct 13 14:57:44 standalone.localdomain ceph-mon[29756]: pgmap v2495: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:57:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:57:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:57:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:57:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2496: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:44 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:57:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:57:44 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:57:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:57:45 standalone.localdomain ceph-mon[29756]: pgmap v2496: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:45 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:57:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2497: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:46 standalone.localdomain python3.9[345388]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 14:57:47 standalone.localdomain ceph-mon[29756]: pgmap v2497: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:48 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 14:57:48 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 69, saving inputs in /var/lib/pacemaker/pengine/pe-input-69.bz2
Oct 13 14:57:48 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 69 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-69.bz2): Complete
Oct 13 14:57:48 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 14:57:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2498: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1162 DF PROTO=TCP SPT=40014 DPT=9882 SEQ=2602354785 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA4BE360000000001030307) 
Oct 13 14:57:49 standalone.localdomain ceph-mon[29756]: pgmap v2498: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:57:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19252 DF PROTO=TCP SPT=36698 DPT=9105 SEQ=2567276489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA4C3B70000000001030307) 
Oct 13 14:57:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2499: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:51 standalone.localdomain python3.9[345480]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 14:57:52 standalone.localdomain ceph-mon[29756]: pgmap v2499: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2500: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:57:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:57:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:57:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:57:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:57:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:57:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14253 DF PROTO=TCP SPT=52542 DPT=9102 SEQ=1197924606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA4D12F0000000001030307) 
Oct 13 14:57:54 standalone.localdomain ceph-mon[29756]: pgmap v2500: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2501: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:57:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60925 DF PROTO=TCP SPT=45608 DPT=9101 SEQ=2208986809 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA4D8BE0000000001030307) 
Oct 13 14:57:55 standalone.localdomain python3.9[345572]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 14:57:56 standalone.localdomain ceph-mon[29756]: pgmap v2501: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2502: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:58 standalone.localdomain ceph-mon[29756]: pgmap v2502: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:57:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60927 DF PROTO=TCP SPT=45608 DPT=9101 SEQ=2208986809 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA4E4B60000000001030307) 
Oct 13 14:57:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2503: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:58:00 standalone.localdomain ceph-mon[29756]: pgmap v2503: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2504: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:02 standalone.localdomain ceph-mon[29756]: pgmap v2504: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60928 DF PROTO=TCP SPT=45608 DPT=9101 SEQ=2208986809 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA4F4770000000001030307) 
Oct 13 14:58:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2505: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:04 standalone.localdomain ceph-mon[29756]: pgmap v2505: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2506: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:58:06 standalone.localdomain ceph-mon[29756]: pgmap v2506: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2507: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32035 DF PROTO=TCP SPT=41420 DPT=9100 SEQ=4141343969 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA507440000000001030307) 
Oct 13 14:58:07 standalone.localdomain account-server[114563]: 172.20.0.100 - - [13/Oct/2025:14:58:07 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx8111eea2b26e47519fe1d-0068ed137f" "proxy-server 2" 0.0009 "-" 21 -
Oct 13 14:58:07 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx8111eea2b26e47519fe1d-0068ed137f)
Oct 13 14:58:07 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx8111eea2b26e47519fe1d-0068ed137f)
Oct 13 14:58:08 standalone.localdomain ceph-mon[29756]: pgmap v2507: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32036 DF PROTO=TCP SPT=41420 DPT=9100 SEQ=4141343969 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA50B360000000001030307) 
Oct 13 14:58:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2508: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:09 standalone.localdomain python3.9[345741]: ansible-ansible.builtin.file Invoked with group=root mode=0770 owner=root path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:58:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:58:10 standalone.localdomain ceph-mon[29756]: pgmap v2508: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32037 DF PROTO=TCP SPT=41420 DPT=9100 SEQ=4141343969 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA513370000000001030307) 
Oct 13 14:58:10 standalone.localdomain python3.9[345844]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 14:58:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2509: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:11 standalone.localdomain python3.9[345915]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=root mode=0660 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367490.1109064-253-15708527131489/.source.json _original_basename=.x5yx5kl3 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:58:12 standalone.localdomain python3.9[346005]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 13 14:58:12 standalone.localdomain ceph-mon[29756]: pgmap v2509: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2510: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:58:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:58:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:58:13 standalone.localdomain systemd[1]: tmp-crun.C4yLrl.mount: Deactivated successfully.
Oct 13 14:58:13 standalone.localdomain podman[346031]: 2025-10-13 14:58:13.812390636 +0000 UTC m=+0.079533503 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.buildah.version=1.33.12, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, vcs-type=git)
Oct 13 14:58:13 standalone.localdomain podman[346030]: 2025-10-13 14:58:13.820104455 +0000 UTC m=+0.087081877 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vendor=Red Hat, Inc., version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, release=1, com.redhat.component=openstack-swift-object-container, vcs-type=git, container_name=swift_object_server)
Oct 13 14:58:13 standalone.localdomain podman[346032]: 2025-10-13 14:58:13.877630916 +0000 UTC m=+0.142946356 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 14:58:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41796 DF PROTO=TCP SPT=51774 DPT=9105 SEQ=3157027800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA520F60000000001030307) 
Oct 13 14:58:14 standalone.localdomain podman[346031]: 2025-10-13 14:58:14.016774574 +0000 UTC m=+0.283917441 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-container, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905)
Oct 13 14:58:14 standalone.localdomain podman[346030]: 2025-10-13 14:58:14.022875102 +0000 UTC m=+0.289852514 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, release=1, version=17.1.9)
Oct 13 14:58:14 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:58:14 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:58:14 standalone.localdomain podman[346032]: 2025-10-13 14:58:14.061936602 +0000 UTC m=+0.327252042 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, container_name=swift_account_server, build-date=2025-07-21T16:11:22, distribution-scope=public, name=rhosp17/openstack-swift-account, com.redhat.component=openstack-swift-account-container, release=1, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:58:14 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:58:14 standalone.localdomain ceph-mon[29756]: pgmap v2510: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2511: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:58:16 standalone.localdomain ceph-mon[29756]: pgmap v2511: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2512: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:17 standalone.localdomain podman[346017]: 2025-10-13 14:58:12.119530176 +0000 UTC m=+0.039664289 image pull  quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 13 14:58:18 standalone.localdomain ceph-mon[29756]: pgmap v2512: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:18 standalone.localdomain python3.9[346299]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 13 14:58:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:58:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3705368919' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:58:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:58:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3705368919' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:58:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2513: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31965 DF PROTO=TCP SPT=41870 DPT=9882 SEQ=852884800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA533760000000001030307) 
Oct 13 14:58:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3705368919' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:58:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3705368919' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:58:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41798 DF PROTO=TCP SPT=51774 DPT=9105 SEQ=3157027800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA538B60000000001030307) 
Oct 13 14:58:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:58:20 standalone.localdomain ceph-mon[29756]: pgmap v2513: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2514: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:22 standalone.localdomain ceph-mon[29756]: pgmap v2514: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2515: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:58:23
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', 'manila_data', 'images', 'volumes', 'manila_metadata']
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:58:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43767 DF PROTO=TCP SPT=54578 DPT=9102 SEQ=1936313059 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA5465E0000000001030307) 
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:58:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:58:24 standalone.localdomain ceph-mon[29756]: pgmap v2515: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2516: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.108222) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367505108261, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1201, "num_deletes": 251, "total_data_size": 1036323, "memory_usage": 1068080, "flush_reason": "Manual Compaction"}
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367505113439, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 1006711, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58673, "largest_seqno": 59873, "table_properties": {"data_size": 1001595, "index_size": 2588, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11121, "raw_average_key_size": 19, "raw_value_size": 991230, "raw_average_value_size": 1742, "num_data_blocks": 118, "num_entries": 569, "num_filter_entries": 569, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760367403, "oldest_key_time": 1760367403, "file_creation_time": 1760367505, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 5305 microseconds, and 2553 cpu microseconds.
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.113517) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 1006711 bytes OK
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.113541) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.114980) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.114996) EVENT_LOG_v1 {"time_micros": 1760367505114991, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.115012) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1030737, prev total WAL file size 1030737, number of live WAL files 2.
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.115443) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(983KB)], [131(5892KB)]
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367505115478, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 7040788, "oldest_snapshot_seqno": -1}
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 5609 keys, 6003411 bytes, temperature: kUnknown
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367505153330, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 6003411, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5968461, "index_size": 19812, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14085, "raw_key_size": 144966, "raw_average_key_size": 25, "raw_value_size": 5868989, "raw_average_value_size": 1046, "num_data_blocks": 790, "num_entries": 5609, "num_filter_entries": 5609, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760367505, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.153588) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 6003411 bytes
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.157734) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.6 rd, 158.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 5.8 +0.0 blob) out(5.7 +0.0 blob), read-write-amplify(13.0) write-amplify(6.0) OK, records in: 6127, records dropped: 518 output_compression: NoCompression
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.157758) EVENT_LOG_v1 {"time_micros": 1760367505157748, "job": 80, "event": "compaction_finished", "compaction_time_micros": 37926, "compaction_time_cpu_micros": 13266, "output_level": 6, "num_output_files": 1, "total_output_size": 6003411, "num_input_records": 6127, "num_output_records": 5609, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367505157953, "job": 80, "event": "table_file_deletion", "file_number": 133}
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367505158626, "job": 80, "event": "table_file_deletion", "file_number": 131}
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.115361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.158652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.158656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.158658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.158660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:58:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-14:58:25.158662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 14:58:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2020 DF PROTO=TCP SPT=50154 DPT=9101 SEQ=3575132237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA54DEE0000000001030307) 
Oct 13 14:58:26 standalone.localdomain ceph-mon[29756]: pgmap v2516: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2517: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:27 standalone.localdomain podman[346311]: 2025-10-13 14:58:18.650043388 +0000 UTC m=+0.048967417 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 14:58:28 standalone.localdomain ceph-mon[29756]: pgmap v2517: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:28 standalone.localdomain python3.9[346518]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 13 14:58:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2022 DF PROTO=TCP SPT=50154 DPT=9101 SEQ=3575132237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA559F60000000001030307) 
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2518: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:58:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:58:29 standalone.localdomain podman[346532]: 2025-10-13 14:58:28.44222787 +0000 UTC m=+0.049965818 image pull  quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Oct 13 14:58:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:58:30 standalone.localdomain ceph-mon[29756]: pgmap v2518: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2519: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:30 standalone.localdomain python3.9[346698]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 13 14:58:32 standalone.localdomain ceph-mon[29756]: pgmap v2519: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2023 DF PROTO=TCP SPT=50154 DPT=9101 SEQ=3575132237 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA569B70000000001030307) 
Oct 13 14:58:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2520: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:34 standalone.localdomain ceph-mon[29756]: pgmap v2520: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2521: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:58:35 standalone.localdomain podman[346712]: 2025-10-13 14:58:30.976759958 +0000 UTC m=+0.035911572 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 14:58:36 standalone.localdomain python3.9[347055]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 13 14:58:36 standalone.localdomain ceph-mon[29756]: pgmap v2521: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2522: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29686 DF PROTO=TCP SPT=42706 DPT=9100 SEQ=1160131062 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA57C740000000001030307) 
Oct 13 14:58:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29687 DF PROTO=TCP SPT=42706 DPT=9100 SEQ=1160131062 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA580760000000001030307) 
Oct 13 14:58:38 standalone.localdomain ceph-mon[29756]: pgmap v2522: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2523: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:39 standalone.localdomain podman[347069]: 2025-10-13 14:58:36.504854038 +0000 UTC m=+0.043017664 image pull  quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Oct 13 14:58:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:58:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29688 DF PROTO=TCP SPT=42706 DPT=9100 SEQ=1160131062 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA588760000000001030307) 
Oct 13 14:58:40 standalone.localdomain python3.9[347249]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 13 14:58:40 standalone.localdomain ceph-mon[29756]: pgmap v2523: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2524: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:42 standalone.localdomain podman[347263]: 2025-10-13 14:58:40.614610663 +0000 UTC m=+0.046284974 image pull  quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c
Oct 13 14:58:42 standalone.localdomain ceph-mon[29756]: pgmap v2524: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2525: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:42 standalone.localdomain sshd[343079]: pam_unix(sshd:session): session closed for user root
Oct 13 14:58:42 standalone.localdomain systemd[1]: session-279.scope: Deactivated successfully.
Oct 13 14:58:42 standalone.localdomain systemd[1]: session-279.scope: Consumed 1min 30.598s CPU time.
Oct 13 14:58:42 standalone.localdomain systemd-logind[45629]: Session 279 logged out. Waiting for processes to exit.
Oct 13 14:58:42 standalone.localdomain systemd-logind[45629]: Removed session 279.
Oct 13 14:58:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30929 DF PROTO=TCP SPT=58070 DPT=9105 SEQ=19987062 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA596360000000001030307) 
Oct 13 14:58:44 standalone.localdomain sudo[347377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:58:44 standalone.localdomain sudo[347377]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:58:44 standalone.localdomain sudo[347377]: pam_unix(sudo:session): session closed for user root
Oct 13 14:58:44 standalone.localdomain sudo[347392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Oct 13 14:58:44 standalone.localdomain sudo[347392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:58:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:58:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:58:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:58:44 standalone.localdomain podman[347407]: 2025-10-13 14:58:44.1766274 +0000 UTC m=+0.071622917 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, vendor=Red Hat, Inc., io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, managed_by=tripleo_ansible)
Oct 13 14:58:44 standalone.localdomain podman[347409]: 2025-10-13 14:58:44.228367143 +0000 UTC m=+0.116927341 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-account, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.expose-services=, tcib_managed=true)
Oct 13 14:58:44 standalone.localdomain podman[347408]: 2025-10-13 14:58:44.27803571 +0000 UTC m=+0.170553621 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T15:54:32, batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., container_name=swift_container_server, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container)
Oct 13 14:58:44 standalone.localdomain podman[347407]: 2025-10-13 14:58:44.375824667 +0000 UTC m=+0.270820244 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, managed_by=tripleo_ansible, release=1, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:58:44 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:58:44 standalone.localdomain podman[347409]: 2025-10-13 14:58:44.418750767 +0000 UTC m=+0.307310935 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, vcs-type=git, version=17.1.9, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 14:58:44 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:58:44 standalone.localdomain podman[347408]: 2025-10-13 14:58:44.463789051 +0000 UTC m=+0.356306982 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, name=rhosp17/openstack-swift-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T15:54:32, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 14:58:44 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:58:44 standalone.localdomain sudo[347392]: pam_unix(sudo:session): session closed for user root
Oct 13 14:58:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 14:58:44 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:58:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 14:58:44 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:58:44 standalone.localdomain sudo[347507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:58:44 standalone.localdomain sudo[347507]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:58:44 standalone.localdomain sudo[347507]: pam_unix(sudo:session): session closed for user root
Oct 13 14:58:44 standalone.localdomain ceph-mon[29756]: pgmap v2525: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:58:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:58:44 standalone.localdomain sudo[347522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:58:44 standalone.localdomain sudo[347522]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:58:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2526: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:45 standalone.localdomain sudo[347522]: pam_unix(sudo:session): session closed for user root
Oct 13 14:58:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:58:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:58:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:58:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:58:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:58:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:58:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:58:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:58:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev e1f3c63a-22a6-47f7-a480-815ae93a55fc (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:58:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev e1f3c63a-22a6-47f7-a480-815ae93a55fc (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:58:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event e1f3c63a-22a6-47f7-a480-815ae93a55fc (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:58:45 standalone.localdomain sudo[347569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:58:45 standalone.localdomain sudo[347569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:58:45 standalone.localdomain sudo[347569]: pam_unix(sudo:session): session closed for user root
Oct 13 14:58:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:58:45 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:58:45 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:58:45 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:58:45 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:58:46 standalone.localdomain ceph-mon[29756]: pgmap v2526: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2527: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:48 standalone.localdomain ceph-mon[29756]: pgmap v2527: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2528: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5567 DF PROTO=TCP SPT=53530 DPT=9882 SEQ=1641014013 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA5A8760000000001030307) 
Oct 13 14:58:49 standalone.localdomain sshd[347584]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:58:49 standalone.localdomain sshd[347584]: Accepted publickey for root from 192.168.122.30 port 44050 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:58:49 standalone.localdomain systemd-logind[45629]: New session 280 of user root.
Oct 13 14:58:49 standalone.localdomain systemd[1]: Started Session 280 of User root.
Oct 13 14:58:49 standalone.localdomain sshd[347584]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:58:49 standalone.localdomain ceph-mon[29756]: pgmap v2528: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:49 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:58:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:58:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:58:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30931 DF PROTO=TCP SPT=58070 DPT=9105 SEQ=19987062 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA5ADF70000000001030307) 
Oct 13 14:58:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:58:50 standalone.localdomain python3.9[347677]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 14:58:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2529: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:58:51 standalone.localdomain python3.9[347771]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 13 14:58:52 standalone.localdomain python3.9[347862]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 14:58:52 standalone.localdomain ceph-mon[29756]: pgmap v2529: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2530: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:58:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:58:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:58:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:58:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:58:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:58:53 standalone.localdomain python3.9[347914]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 14:58:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47524 DF PROTO=TCP SPT=43752 DPT=9102 SEQ=3603564832 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA5BB8F0000000001030307) 
Oct 13 14:58:54 standalone.localdomain ceph-mon[29756]: pgmap v2530: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2531: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:58:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11327 DF PROTO=TCP SPT=35604 DPT=9101 SEQ=455982562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA5C31E0000000001030307) 
Oct 13 14:58:56 standalone.localdomain ceph-mon[29756]: pgmap v2531: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2532: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:57 standalone.localdomain python3.9[348006]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 14:58:58 standalone.localdomain ceph-mon[29756]: pgmap v2532: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:58:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11329 DF PROTO=TCP SPT=35604 DPT=9101 SEQ=455982562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA5CF360000000001030307) 
Oct 13 14:58:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2533: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:59:00 standalone.localdomain ceph-mon[29756]: pgmap v2533: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2534: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:02 standalone.localdomain ceph-mon[29756]: pgmap v2534: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11330 DF PROTO=TCP SPT=35604 DPT=9101 SEQ=455982562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA5DEF60000000001030307) 
Oct 13 14:59:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2535: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:02 standalone.localdomain python3.9[348098]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 14:59:03 standalone.localdomain python3.9[348191]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 14:59:04 standalone.localdomain ceph-mon[29756]: pgmap v2535: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2536: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:04 standalone.localdomain python3.9[348281]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 13 14:59:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:59:06 standalone.localdomain kernel: SELinux:  Converting 2964 SID table entries...
Oct 13 14:59:06 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 14:59:06 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 14:59:06 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 14:59:06 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 14:59:06 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 14:59:06 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 14:59:06 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 14:59:06 standalone.localdomain ceph-mon[29756]: pgmap v2536: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2537: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:07 standalone.localdomain python3.9[348376]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 14:59:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48908 DF PROTO=TCP SPT=35036 DPT=9100 SEQ=3570695290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA5F1A40000000001030307) 
Oct 13 14:59:08 standalone.localdomain python3.9[348472]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 14:59:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48909 DF PROTO=TCP SPT=35036 DPT=9100 SEQ=3570695290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA5F5B70000000001030307) 
Oct 13 14:59:08 standalone.localdomain ceph-mon[29756]: pgmap v2537: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2538: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:09 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=25 res=1
Oct 13 14:59:09 standalone.localdomain systemd[336099]: Created slice User Background Tasks Slice.
Oct 13 14:59:09 standalone.localdomain systemd[336099]: Starting Cleanup of User's Temporary Files and Directories...
Oct 13 14:59:09 standalone.localdomain systemd[336099]: Finished Cleanup of User's Temporary Files and Directories.
Oct 13 14:59:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:59:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48910 DF PROTO=TCP SPT=35036 DPT=9100 SEQ=3570695290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA5FDB70000000001030307) 
Oct 13 14:59:10 standalone.localdomain ceph-mon[29756]: pgmap v2538: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2539: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:12 standalone.localdomain ceph-mon[29756]: pgmap v2539: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2540: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:12 standalone.localdomain python3.9[348565]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 14:59:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11966 DF PROTO=TCP SPT=39210 DPT=9105 SEQ=3823876305 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA60B770000000001030307) 
Oct 13 14:59:14 standalone.localdomain python3.9[348832]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 14:59:14 standalone.localdomain ceph-mon[29756]: pgmap v2540: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2541: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:59:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:59:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:59:14 standalone.localdomain podman[348906]: 2025-10-13 14:59:14.822333814 +0000 UTC m=+0.077607634 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, container_name=swift_object_server, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, io.openshift.tags=rhosp osp openstack 
osp-17.1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 14:59:14 standalone.localdomain systemd[1]: tmp-crun.un5AOF.mount: Deactivated successfully.
Oct 13 14:59:14 standalone.localdomain podman[348907]: 2025-10-13 14:59:14.885222931 +0000 UTC m=+0.136515158 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, 
distribution-scope=public, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-swift-container, release=1, version=17.1.9)
Oct 13 14:59:14 standalone.localdomain podman[348909]: 2025-10-13 14:59:14.944499306 +0000 UTC m=+0.196448743 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, build-date=2025-07-21T16:11:22, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, release=1, version=17.1.9, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:59:14 standalone.localdomain python3.9[348949]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:59:15 standalone.localdomain podman[348906]: 2025-10-13 14:59:15.048141324 +0000 UTC m=+0.303415124 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, distribution-scope=public, architecture=x86_64, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, container_name=swift_object_server, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.component=openstack-swift-object-container, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 14:59:15 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:59:15 standalone.localdomain podman[348907]: 2025-10-13 14:59:15.104872801 +0000 UTC m=+0.356165018 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, io.openshift.expose-services=, container_name=swift_container_server, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 14:59:15 standalone.localdomain podman[348909]: 2025-10-13 14:59:15.12391811 +0000 UTC m=+0.375867537 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.expose-services=, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T16:11:22, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, container_name=swift_account_server, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, com.redhat.component=openstack-swift-account-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible)
Oct 13 14:59:15 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:59:15 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:59:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:59:15 standalone.localdomain python3.9[349096]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 14:59:15 standalone.localdomain systemd[1]: tmp-crun.oBHlkz.mount: Deactivated successfully.
Oct 13 14:59:16 standalone.localdomain ceph-mon[29756]: pgmap v2541: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2542: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:18 standalone.localdomain ceph-mon[29756]: pgmap v2542: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 14:59:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3923236639' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:59:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 14:59:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3923236639' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:59:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2543: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24455 DF PROTO=TCP SPT=46712 DPT=9882 SEQ=2795448747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA61DB60000000001030307) 
Oct 13 14:59:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3923236639' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 14:59:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3923236639' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 14:59:19 standalone.localdomain python3.9[349188]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 14:59:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11968 DF PROTO=TCP SPT=39210 DPT=9105 SEQ=3823876305 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA623360000000001030307) 
Oct 13 14:59:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:59:20 standalone.localdomain ceph-mon[29756]: pgmap v2543: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2544: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:22 standalone.localdomain ceph-mon[29756]: pgmap v2544: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2545: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_14:59:23
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'manila_metadata', 'images', 'volumes', 'backups', 'manila_data', '.mgr']
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 14:59:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22441 DF PROTO=TCP SPT=50838 DPT=9102 SEQ=1298837841 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA630BF0000000001030307) 
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 14:59:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 14:59:23 standalone.localdomain ceph-mon[29756]: pgmap v2545: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:24 standalone.localdomain python3.9[349280]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 13 14:59:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2546: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:59:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18898 DF PROTO=TCP SPT=52738 DPT=9101 SEQ=515409472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA6384E0000000001030307) 
Oct 13 14:59:26 standalone.localdomain python3.9[349373]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 14:59:26 standalone.localdomain ceph-mon[29756]: pgmap v2546: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2547: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:26 standalone.localdomain python3.9[349463]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:59:27 standalone.localdomain python3.9[349553]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:59:27 standalone.localdomain python3.9[349643]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:59:28 standalone.localdomain ceph-mon[29756]: pgmap v2547: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18900 DF PROTO=TCP SPT=52738 DPT=9101 SEQ=515409472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA644360000000001030307) 
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2548: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:28 standalone.localdomain python3.9[349733]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 14:59:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 14:59:29 standalone.localdomain python3.9[349804]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/root/.ansible/tmp/ansible-tmp-1760367568.198599-201-246448382285378/.source _original_basename=.dv39vlb1 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:59:29 standalone.localdomain python3.9[349894]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:59:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:59:30 standalone.localdomain ceph-mon[29756]: pgmap v2548: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:30 standalone.localdomain python3.9[349984]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 13 14:59:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2549: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:31 standalone.localdomain python3.9[350074]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:59:31 standalone.localdomain python3.9[350164]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 14:59:32 standalone.localdomain python3.9[350237]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760367571.3130836-243-205141961434119/.source.yaml _original_basename=.xmr0nme3 follow=False checksum=05de63f55d145a347dd77aa60f75b140b54f1eb0 force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 14:59:32 standalone.localdomain ceph-mon[29756]: pgmap v2549: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18901 DF PROTO=TCP SPT=52738 DPT=9101 SEQ=515409472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA653F60000000001030307) 
Oct 13 14:59:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2550: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:32 standalone.localdomain python3.9[350329]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 13 14:59:34 standalone.localdomain ansible-async_wrapper.py[350432]: Invoked with j18501496318 300 /root/.ansible/tmp/ansible-tmp-1760367573.298286-267-33701532161508/AnsiballZ_edpm_os_net_config.py _
Oct 13 14:59:34 standalone.localdomain ansible-async_wrapper.py[350435]: Starting module and watcher
Oct 13 14:59:34 standalone.localdomain ansible-async_wrapper.py[350435]: Start watching 350436 (300)
Oct 13 14:59:34 standalone.localdomain ansible-async_wrapper.py[350436]: Start module (350436)
Oct 13 14:59:34 standalone.localdomain ansible-async_wrapper.py[350432]: Return async_wrapper task started.
Oct 13 14:59:34 standalone.localdomain python3.9[350437]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False
Oct 13 14:59:34 standalone.localdomain ceph-mon[29756]: pgmap v2550: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2551: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.0832] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=350446 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain ifdown[350447]: You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:35 standalone.localdomain systemd-journald[48591]: Field hash table of /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation.
Oct 13 14:59:35 standalone.localdomain systemd-journald[48591]: /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 13 14:59:35 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 14:59:35 standalone.localdomain ifdown[350448]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:35 standalone.localdomain ifdown[350449]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.1132] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=350456 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.1377] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=350464 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.1657] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=350473 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.2383] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=350496 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain ovs-vsctl[350501]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan20
Oct 13 14:59:35 standalone.localdomain kernel: device vlan20 left promiscuous mode
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.3163] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=350511 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain ifdown[350512]: You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:35 standalone.localdomain ifdown[350513]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:35 standalone.localdomain ifdown[350514]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.3475] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=350520 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.3759] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=350528 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.3974] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=350537 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.4490] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=350560 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain ovs-vsctl[350565]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan21
Oct 13 14:59:35 standalone.localdomain kernel: device vlan21 left promiscuous mode
Oct 13 14:59:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.5025] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=350572 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain ifdown[350573]: You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:35 standalone.localdomain ifdown[350574]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:35 standalone.localdomain ifdown[350575]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.5253] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=350581 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.5519] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=350589 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.5789] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=350598 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.6444] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=350621 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain ovs-vsctl[350626]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan23
Oct 13 14:59:35 standalone.localdomain kernel: device vlan23 left promiscuous mode
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.6997] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=350633 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain ifdown[350634]: You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:35 standalone.localdomain ifdown[350635]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:35 standalone.localdomain ifdown[350636]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.7286] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=350642 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.7627] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=350650 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.7855] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=350657 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.8134] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=350665 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain ovs-vsctl[350670]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan21
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.8587] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=350677 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain ifdown[350678]: You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:35 standalone.localdomain ifdown[350679]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:35 standalone.localdomain ifdown[350680]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.8904] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=350686 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.9227] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=350694 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.9444] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=350701 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain NetworkManager[5962]: <info>  [1760367575.9750] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=350709 uid=0 result="success"
Oct 13 14:59:35 standalone.localdomain ovs-vsctl[350714]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan23
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.0194] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=350721 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain ifdown[350722]: You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:36 standalone.localdomain ifdown[350723]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:36 standalone.localdomain ifdown[350724]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.0461] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=350730 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.0744] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=350738 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.0931] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=350747 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.1846] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=350770 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain ovs-vsctl[350775]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane eth1
Oct 13 14:59:36 standalone.localdomain kernel: device eth1 left promiscuous mode
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.2264] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=350782 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain ifdown[350783]: You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:36 standalone.localdomain ifdown[350784]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:36 standalone.localdomain ifdown[350785]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.2496] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=350791 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.2731] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=350799 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.2888] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=350806 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.3101] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=350814 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain ovs-vsctl[350819]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan20
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.3528] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=350826 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain ifdown[350827]: You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:36 standalone.localdomain ifdown[350828]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:36 standalone.localdomain ifdown[350829]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.3806] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=350835 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.4090] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=350843 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.4294] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=350852 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.4883] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=350875 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain ovs-vsctl[350880]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan22
Oct 13 14:59:36 standalone.localdomain kernel: device vlan22 left promiscuous mode
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.5560] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=350887 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain ifdown[350888]: You are using 'ifdown' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:36 standalone.localdomain ifdown[350889]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:36 standalone.localdomain ifdown[350890]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.5924] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=350896 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.6272] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=350904 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2552: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.6519] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=350913 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.7262] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=350936 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain NET[350944]: /etc/sysconfig/network-scripts/ifdown-post : updated /etc/resolv.conf
Oct 13 14:59:36 standalone.localdomain ovs-vsctl[350949]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-br br-ctlplane
Oct 13 14:59:36 standalone.localdomain kernel: device br-ctlplane left promiscuous mode
Oct 13 14:59:36 standalone.localdomain kernel: device vlan44 left promiscuous mode
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.8329] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=350967 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain ifup[350968]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:36 standalone.localdomain ifup[350969]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:36 standalone.localdomain ifup[350970]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.8590] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=350976 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain ovs-vsctl[350978]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ctlplane -- set bridge br-ctlplane other-config:mac-table-size=50000 -- set bridge br-ctlplane other-config:hwaddr=fa:16:3e:0b:50:81 -- set bridge br-ctlplane fail_mode=standalone -- del-controller br-ctlplane
Oct 13 14:59:36 standalone.localdomain kernel: device br-ctlplane entered promiscuous mode
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.8944] manager: (br-ctlplane): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Oct 13 14:59:36 standalone.localdomain systemd-udevd[350504]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.9226] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=350987 uid=0 result="success"
Oct 13 14:59:36 standalone.localdomain NetworkManager[5962]: <info>  [1760367576.9481] device (br-ctlplane): carrier: link connected
Oct 13 14:59:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2553: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:39 standalone.localdomain ansible-async_wrapper.py[350435]: 350436 still running (300)
Oct 13 14:59:39 standalone.localdomain snmpd[110332]: IfIndex of an interface changed. Such interfaces will appear multiple times in IF-MIB.
Oct 13 14:59:40 standalone.localdomain NetworkManager[5962]: <info>  [1760367580.0039] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=351016 uid=0 result="success"
Oct 13 14:59:40 standalone.localdomain NetworkManager[5962]: <info>  [1760367580.0464] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=351031 uid=0 result="success"
Oct 13 14:59:40 standalone.localdomain NET[351056]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
Oct 13 14:59:40 standalone.localdomain NetworkManager[5962]: <info>  [1760367580.1251] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=351065 uid=0 result="success"
Oct 13 14:59:40 standalone.localdomain ifup[351066]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:40 standalone.localdomain ifup[351067]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:40 standalone.localdomain ifup[351068]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:40 standalone.localdomain NetworkManager[5962]: <info>  [1760367580.1525] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=351074 uid=0 result="success"
Oct 13 14:59:40 standalone.localdomain ovs-vsctl[351077]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan21 -- add-port br-ctlplane vlan21 tag=21 -- set Interface vlan21 type=internal
Oct 13 14:59:40 standalone.localdomain kernel: device vlan21 entered promiscuous mode
Oct 13 14:59:40 standalone.localdomain NetworkManager[5962]: <info>  [1760367580.1874] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/19)
Oct 13 14:59:40 standalone.localdomain systemd-udevd[351080]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 14:59:40 standalone.localdomain NetworkManager[5962]: <info>  [1760367580.2052] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=351089 uid=0 result="success"
Oct 13 14:59:40 standalone.localdomain NetworkManager[5962]: <info>  [1760367580.2218] device (vlan21): carrier: link connected
Oct 13 14:59:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:59:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2554: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2555: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:42 standalone.localdomain ceph-mon[29756]: pgmap v2551: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:42 standalone.localdomain ceph-mon[29756]: pgmap v2552: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:42 standalone.localdomain ceph-mon[29756]: pgmap v2553: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:42 standalone.localdomain ceph-mon[29756]: pgmap v2554: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:43 standalone.localdomain NetworkManager[5962]: <info>  [1760367583.2734] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=351119 uid=0 result="success"
Oct 13 14:59:43 standalone.localdomain NetworkManager[5962]: <info>  [1760367583.3139] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=351134 uid=0 result="success"
Oct 13 14:59:43 standalone.localdomain NetworkManager[5962]: <info>  [1760367583.3670] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=351159 uid=0 result="success"
Oct 13 14:59:43 standalone.localdomain ifup[351160]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:43 standalone.localdomain ifup[351161]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:43 standalone.localdomain ifup[351162]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:43 standalone.localdomain NetworkManager[5962]: <info>  [1760367583.3903] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=351168 uid=0 result="success"
Oct 13 14:59:43 standalone.localdomain ovs-vsctl[351171]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan23 -- add-port br-ctlplane vlan23 tag=23 -- set Interface vlan23 type=internal
Oct 13 14:59:43 standalone.localdomain kernel: device vlan23 entered promiscuous mode
Oct 13 14:59:43 standalone.localdomain NetworkManager[5962]: <info>  [1760367583.4247] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct 13 14:59:43 standalone.localdomain systemd-udevd[351174]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 14:59:43 standalone.localdomain NetworkManager[5962]: <info>  [1760367583.4436] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=351183 uid=0 result="success"
Oct 13 14:59:43 standalone.localdomain NetworkManager[5962]: <info>  [1760367583.4582] device (vlan23): carrier: link connected
Oct 13 14:59:43 standalone.localdomain ceph-mon[29756]: pgmap v2555: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:44 standalone.localdomain ansible-async_wrapper.py[350435]: 350436 still running (295)
Oct 13 14:59:44 standalone.localdomain sshd[347584]: pam_unix(sshd:session): session closed for user root
Oct 13 14:59:44 standalone.localdomain systemd-logind[45629]: Session 280 logged out. Waiting for processes to exit.
Oct 13 14:59:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2556: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:45 standalone.localdomain sudo[351202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 14:59:45 standalone.localdomain sudo[351202]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:59:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 14:59:45 standalone.localdomain sudo[351202]: pam_unix(sudo:session): session closed for user root
Oct 13 14:59:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 14:59:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 14:59:45 standalone.localdomain sudo[351220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 14:59:45 standalone.localdomain sudo[351220]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:59:45 standalone.localdomain systemd[1]: tmp-crun.OD9Ysg.mount: Deactivated successfully.
Oct 13 14:59:45 standalone.localdomain podman[351218]: 2025-10-13 14:59:45.431577781 +0000 UTC m=+0.072957240 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_id=tripleo_step4, container_name=swift_container_server, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public)
Oct 13 14:59:45 standalone.localdomain podman[351217]: 2025-10-13 14:59:45.474009295 +0000 UTC m=+0.114159145 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, release=1, batch=17.1_20250721.1, version=17.1.9, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, tcib_managed=true)
Oct 13 14:59:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:59:45 standalone.localdomain podman[351219]: 2025-10-13 14:59:45.541801583 +0000 UTC m=+0.181793328 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, build-date=2025-07-21T16:11:22, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, container_name=swift_account_server, name=rhosp17/openstack-swift-account, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true)
Oct 13 14:59:45 standalone.localdomain podman[351218]: 2025-10-13 14:59:45.614000299 +0000 UTC m=+0.255379758 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.12, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible)
Oct 13 14:59:45 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 14:59:45 standalone.localdomain podman[351217]: 2025-10-13 14:59:45.650809058 +0000 UTC m=+0.290958868 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, container_name=swift_object_server, config_id=tripleo_step4, tcib_managed=true, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:56:28, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1)
Oct 13 14:59:45 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 14:59:45 standalone.localdomain podman[351219]: 2025-10-13 14:59:45.732781457 +0000 UTC m=+0.372773192 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, managed_by=tripleo_ansible, container_name=swift_account_server, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vcs-type=git, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 14:59:45 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 14:59:45 standalone.localdomain sudo[351220]: pam_unix(sudo:session): session closed for user root
Oct 13 14:59:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 14:59:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:59:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 14:59:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:59:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 14:59:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:59:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 14:59:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:59:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 533ed4ea-834c-47da-948d-7d9a052a8e9f (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:59:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 533ed4ea-834c-47da-948d-7d9a052a8e9f (Updating node-proxy deployment (+1 -> 1))
Oct 13 14:59:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 533ed4ea-834c-47da-948d-7d9a052a8e9f (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 14:59:46 standalone.localdomain sudo[351341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 14:59:46 standalone.localdomain sudo[351341]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 14:59:46 standalone.localdomain sudo[351341]: pam_unix(sudo:session): session closed for user root
Oct 13 14:59:46 standalone.localdomain ceph-mon[29756]: pgmap v2556: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 14:59:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 14:59:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:59:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 14:59:46 standalone.localdomain NetworkManager[5962]: <info>  [1760367586.5134] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=351365 uid=0 result="success"
Oct 13 14:59:46 standalone.localdomain NetworkManager[5962]: <info>  [1760367586.5466] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=351380 uid=0 result="success"
Oct 13 14:59:46 standalone.localdomain NetworkManager[5962]: <info>  [1760367586.5966] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=351405 uid=0 result="success"
Oct 13 14:59:46 standalone.localdomain ifup[351406]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:46 standalone.localdomain ifup[351407]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:46 standalone.localdomain ifup[351408]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:46 standalone.localdomain NetworkManager[5962]: <info>  [1760367586.6233] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=351414 uid=0 result="success"
Oct 13 14:59:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2557: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:46 standalone.localdomain NetworkManager[5962]: <info>  [1760367586.6664] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=351423 uid=0 result="success"
Oct 13 14:59:46 standalone.localdomain kernel: IPv4: martian source 192.168.122.100 from 192.168.122.30, on dev eth1
Oct 13 14:59:46 standalone.localdomain kernel: ll header: 00000000: fa 16 3e 0b 50 81 76 c3 7d 3e d4 25 08 00
Oct 13 14:59:46 standalone.localdomain kernel: IPv4: martian source 192.168.122.100 from 192.168.122.30, on dev eth1
Oct 13 14:59:46 standalone.localdomain kernel: ll header: 00000000: fa 16 3e 0b 50 81 76 c3 7d 3e d4 25 08 00
Oct 13 14:59:46 standalone.localdomain kernel: IPv4: martian source 192.168.122.100 from 192.168.122.10, on dev eth1
Oct 13 14:59:46 standalone.localdomain kernel: ll header: 00000000: fa 16 3e 0b 50 81 fa 16 3e 13 3c 3c 08 00
Oct 13 14:59:46 standalone.localdomain kernel: IPv4: martian source 192.168.122.100 from 192.168.122.30, on dev eth1
Oct 13 14:59:46 standalone.localdomain kernel: ll header: 00000000: fa 16 3e 0b 50 81 76 c3 7d 3e d4 25 08 00
Oct 13 14:59:46 standalone.localdomain kernel: IPv4: martian source 192.168.122.100 from 192.168.122.30, on dev eth1
Oct 13 14:59:46 standalone.localdomain kernel: ll header: 00000000: fa 16 3e 0b 50 81 76 c3 7d 3e d4 25 08 00
Oct 13 14:59:46 standalone.localdomain kernel: IPv4: martian source 192.168.122.100 from 192.168.122.10, on dev eth1
Oct 13 14:59:46 standalone.localdomain kernel: ll header: 00000000: fa 16 3e 0b 50 81 fa 16 3e 13 3c 3c 08 00
Oct 13 14:59:46 standalone.localdomain kernel: IPv4: martian source 192.168.122.100 from 192.168.122.30, on dev eth1
Oct 13 14:59:46 standalone.localdomain kernel: ll header: 00000000: fa 16 3e 0b 50 81 76 c3 7d 3e d4 25 08 00
Oct 13 14:59:46 standalone.localdomain kernel: IPv4: martian source 192.168.122.100 from 192.168.122.10, on dev eth1
Oct 13 14:59:46 standalone.localdomain kernel: ll header: 00000000: fa 16 3e 0b 50 81 fa 16 3e 13 3c 3c 08 00
Oct 13 14:59:46 standalone.localdomain kernel: IPv4: martian source 192.168.122.100 from 192.168.122.30, on dev eth1
Oct 13 14:59:46 standalone.localdomain kernel: ll header: 00000000: fa 16 3e 0b 50 81 76 c3 7d 3e d4 25 08 00
Oct 13 14:59:46 standalone.localdomain kernel: IPv4: martian source 192.168.122.100 from 192.168.122.10, on dev eth1
Oct 13 14:59:46 standalone.localdomain kernel: ll header: 00000000: fa 16 3e 0b 50 81 fa 16 3e 13 3c 3c 08 00
Oct 13 14:59:46 standalone.localdomain NetworkManager[5962]: <info>  [1760367586.6729] device (eth1): carrier: link connected
Oct 13 14:59:46 standalone.localdomain NetworkManager[5962]: <info>  [1760367586.6981] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=351432 uid=0 result="success"
Oct 13 14:59:46 standalone.localdomain ipv6_wait_tentative[351444]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Oct 13 14:59:47 standalone.localdomain ipv6_wait_tentative[351449]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Oct 13 14:59:48 standalone.localdomain ceph-mon[29756]: pgmap v2557: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2558: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:48 standalone.localdomain NetworkManager[5962]: <info>  [1760367588.7655] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=351458 uid=0 result="success"
Oct 13 14:59:48 standalone.localdomain ovs-vsctl[351473]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane eth1 -- add-port br-ctlplane eth1
Oct 13 14:59:48 standalone.localdomain kernel: device eth1 entered promiscuous mode
Oct 13 14:59:48 standalone.localdomain NetworkManager[5962]: <info>  [1760367588.8684] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=351481 uid=0 result="success"
Oct 13 14:59:48 standalone.localdomain ifup[351482]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:48 standalone.localdomain ifup[351483]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:48 standalone.localdomain ifup[351484]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:48 standalone.localdomain NetworkManager[5962]: <info>  [1760367588.8982] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ctlplane" pid=351490 uid=0 result="success"
Oct 13 14:59:48 standalone.localdomain NetworkManager[5962]: <info>  [1760367588.9364] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=351500 uid=0 result="success"
Oct 13 14:59:48 standalone.localdomain ifup[351501]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:48 standalone.localdomain ifup[351502]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:48 standalone.localdomain ifup[351503]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:48 standalone.localdomain NetworkManager[5962]: <info>  [1760367588.9631] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=351509 uid=0 result="success"
Oct 13 14:59:48 standalone.localdomain ovs-vsctl[351512]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan20 -- add-port br-ctlplane vlan20 tag=20 -- set Interface vlan20 type=internal
Oct 13 14:59:48 standalone.localdomain kernel: device vlan20 entered promiscuous mode
Oct 13 14:59:48 standalone.localdomain NetworkManager[5962]: <info>  [1760367588.9981] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Oct 13 14:59:49 standalone.localdomain systemd-udevd[351514]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 14:59:49 standalone.localdomain NetworkManager[5962]: <info>  [1760367589.0218] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=351523 uid=0 result="success"
Oct 13 14:59:49 standalone.localdomain NetworkManager[5962]: <info>  [1760367589.0393] device (vlan20): carrier: link connected
Oct 13 14:59:49 standalone.localdomain ansible-async_wrapper.py[350435]: 350436 still running (290)
Oct 13 14:59:49 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 14:59:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 14:59:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:59:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62461 DF PROTO=TCP SPT=55900 DPT=9105 SEQ=1278455490 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA698770000000001030307) 
Oct 13 14:59:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:59:50 standalone.localdomain ceph-mon[29756]: pgmap v2558: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 14:59:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2559: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:51 standalone.localdomain sshd[351543]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 14:59:51 standalone.localdomain sshd[351543]: Accepted publickey for root from 192.168.122.30 port 59614 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 14:59:51 standalone.localdomain systemd-logind[45629]: New session 281 of user root.
Oct 13 14:59:51 standalone.localdomain systemd[1]: Started Session 281 of User root.
Oct 13 14:59:51 standalone.localdomain sshd[351543]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 14:59:51 standalone.localdomain python3.9[351635]: ansible-ansible.legacy.async_status Invoked with jid=j18501496318.350432 mode=status _async_dir=/root/.ansible_async
Oct 13 14:59:52 standalone.localdomain NetworkManager[5962]: <info>  [1760367592.0857] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=351645 uid=0 result="success"
Oct 13 14:59:52 standalone.localdomain NetworkManager[5962]: <info>  [1760367592.1328] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=351660 uid=0 result="success"
Oct 13 14:59:52 standalone.localdomain NetworkManager[5962]: <info>  [1760367592.1965] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=351685 uid=0 result="success"
Oct 13 14:59:52 standalone.localdomain ifup[351686]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:52 standalone.localdomain ifup[351687]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:52 standalone.localdomain ifup[351688]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:52 standalone.localdomain NetworkManager[5962]: <info>  [1760367592.2286] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=351694 uid=0 result="success"
Oct 13 14:59:52 standalone.localdomain ovs-vsctl[351697]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan22 -- add-port br-ctlplane vlan22 tag=22 -- set Interface vlan22 type=internal
Oct 13 14:59:52 standalone.localdomain kernel: device vlan22 entered promiscuous mode
Oct 13 14:59:52 standalone.localdomain NetworkManager[5962]: <info>  [1760367592.2699] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Oct 13 14:59:52 standalone.localdomain systemd-udevd[351701]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 14:59:52 standalone.localdomain NetworkManager[5962]: <info>  [1760367592.2927] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=351709 uid=0 result="success"
Oct 13 14:59:52 standalone.localdomain NetworkManager[5962]: <info>  [1760367592.3093] device (vlan22): carrier: link connected
Oct 13 14:59:52 standalone.localdomain ceph-mon[29756]: pgmap v2559: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2560: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:59:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:59:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:59:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:59:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 14:59:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 14:59:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12503 DF PROTO=TCP SPT=51026 DPT=9102 SEQ=3512698824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA6A5EF0000000001030307) 
Oct 13 14:59:54 standalone.localdomain ansible-async_wrapper.py[350435]: 350436 still running (285)
Oct 13 14:59:54 standalone.localdomain ceph-mon[29756]: pgmap v2560: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12504 DF PROTO=TCP SPT=51026 DPT=9102 SEQ=3512698824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA6A9F70000000001030307) 
Oct 13 14:59:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2561: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:55 standalone.localdomain NetworkManager[5962]: <info>  [1760367595.3589] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=351796 uid=0 result="success"
Oct 13 14:59:55 standalone.localdomain python3.9[351786]: ansible-ansible.legacy.async_status Invoked with jid=j18501496318.350432 mode=status _async_dir=/root/.ansible_async
Oct 13 14:59:55 standalone.localdomain NetworkManager[5962]: <info>  [1760367595.3914] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=351811 uid=0 result="success"
Oct 13 14:59:55 standalone.localdomain snmpd[110332]: empty variable list in _query
Oct 13 14:59:55 standalone.localdomain snmpd[110332]: empty variable list in _query
Oct 13 14:59:55 standalone.localdomain snmpd[110332]: empty variable list in _query
Oct 13 14:59:55 standalone.localdomain snmpd[110332]: empty variable list in _query
Oct 13 14:59:55 standalone.localdomain snmpd[110332]: empty variable list in _query
Oct 13 14:59:55 standalone.localdomain NetworkManager[5962]: <info>  [1760367595.4333] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=351836 uid=0 result="success"
Oct 13 14:59:55 standalone.localdomain ifup[351837]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:55 standalone.localdomain ifup[351838]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:55 standalone.localdomain ifup[351839]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:55 standalone.localdomain NetworkManager[5962]: <info>  [1760367595.4572] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=351845 uid=0 result="success"
Oct 13 14:59:55 standalone.localdomain ovs-vsctl[351848]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan20 -- add-port br-ctlplane vlan20 tag=20 -- set Interface vlan20 type=internal
Oct 13 14:59:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59400 DF PROTO=TCP SPT=47340 DPT=9101 SEQ=277728997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA6AD7E0000000001030307) 
Oct 13 14:59:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 14:59:55 standalone.localdomain NetworkManager[5962]: <info>  [1760367595.4989] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=351855 uid=0 result="success"
Oct 13 14:59:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59401 DF PROTO=TCP SPT=47340 DPT=9101 SEQ=277728997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA6B1760000000001030307) 
Oct 13 14:59:56 standalone.localdomain NetworkManager[5962]: <info>  [1760367596.5475] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=351882 uid=0 result="success"
Oct 13 14:59:56 standalone.localdomain ceph-mon[29756]: pgmap v2561: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:56 standalone.localdomain NetworkManager[5962]: <info>  [1760367596.6045] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=351897 uid=0 result="success"
Oct 13 14:59:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12505 DF PROTO=TCP SPT=51026 DPT=9102 SEQ=3512698824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA6B1F60000000001030307) 
Oct 13 14:59:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2562: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:56 standalone.localdomain NetworkManager[5962]: <info>  [1760367596.6743] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=351922 uid=0 result="success"
Oct 13 14:59:56 standalone.localdomain ifup[351923]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:56 standalone.localdomain ifup[351924]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:56 standalone.localdomain ifup[351925]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:56 standalone.localdomain NetworkManager[5962]: <info>  [1760367596.7117] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=351931 uid=0 result="success"
Oct 13 14:59:56 standalone.localdomain ovs-vsctl[351934]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan21 -- add-port br-ctlplane vlan21 tag=21 -- set Interface vlan21 type=internal
Oct 13 14:59:56 standalone.localdomain NetworkManager[5962]: <info>  [1760367596.7661] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=351941 uid=0 result="success"
Oct 13 14:59:57 standalone.localdomain NetworkManager[5962]: <info>  [1760367597.8147] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=351969 uid=0 result="success"
Oct 13 14:59:57 standalone.localdomain NetworkManager[5962]: <info>  [1760367597.8486] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=351984 uid=0 result="success"
Oct 13 14:59:57 standalone.localdomain NetworkManager[5962]: <info>  [1760367597.8934] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=352009 uid=0 result="success"
Oct 13 14:59:57 standalone.localdomain ifup[352010]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Oct 13 14:59:57 standalone.localdomain ifup[352011]: 'network-scripts' will be removed from distribution in near future.
Oct 13 14:59:57 standalone.localdomain ifup[352012]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Oct 13 14:59:57 standalone.localdomain NetworkManager[5962]: <info>  [1760367597.9152] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=352018 uid=0 result="success"
Oct 13 14:59:57 standalone.localdomain ovs-vsctl[352021]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ctlplane vlan23 -- add-port br-ctlplane vlan23 tag=23 -- set Interface vlan23 type=internal
Oct 13 14:59:57 standalone.localdomain NetworkManager[5962]: <info>  [1760367597.9629] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=352028 uid=0 result="success"
Oct 13 14:59:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59402 DF PROTO=TCP SPT=47340 DPT=9101 SEQ=277728997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA6B9760000000001030307) 
Oct 13 14:59:58 standalone.localdomain ceph-mon[29756]: pgmap v2562: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2563: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 14:59:58 standalone.localdomain python3.9[352103]: ansible-ansible.legacy.async_status Invoked with jid=j18501496318.350432 mode=status _async_dir=/root/.ansible_async
Oct 13 14:59:59 standalone.localdomain NetworkManager[5962]: <info>  [1760367599.0079] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=352113 uid=0 result="success"
Oct 13 14:59:59 standalone.localdomain NetworkManager[5962]: <info>  [1760367599.0447] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=352128 uid=0 result="success"
Oct 13 14:59:59 standalone.localdomain ansible-async_wrapper.py[350435]: 350436 still running (280)
Oct 13 14:59:59 standalone.localdomain ansible-async_wrapper.py[350436]: Module complete (350436)
Oct 13 15:00:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:00:00 standalone.localdomain ceph-mon[29756]: pgmap v2563: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2564: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12506 DF PROTO=TCP SPT=51026 DPT=9102 SEQ=3512698824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA6C1B60000000001030307) 
Oct 13 15:00:02 standalone.localdomain python3.9[352205]: ansible-ansible.legacy.async_status Invoked with jid=j18501496318.350432 mode=status _async_dir=/root/.ansible_async
Oct 13 15:00:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59403 DF PROTO=TCP SPT=47340 DPT=9101 SEQ=277728997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA6C9360000000001030307) 
Oct 13 15:00:02 standalone.localdomain ceph-mon[29756]: pgmap v2564: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:02 standalone.localdomain python3.9[352262]: ansible-ansible.legacy.async_status Invoked with jid=j18501496318.350432 mode=cleanup _async_dir=/root/.ansible_async
Oct 13 15:00:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2565: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:03 standalone.localdomain python3.9[352352]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:00:03 standalone.localdomain python3.9[352423]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760367602.8123991-307-146809841926635/.source.returncode _original_basename=.yo0vezte follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:00:04 standalone.localdomain ansible-async_wrapper.py[350435]: Done in kid B.
Oct 13 15:00:04 standalone.localdomain python3.9[352513]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:00:04 standalone.localdomain systemd[1]: session-280.scope: Deactivated successfully.
Oct 13 15:00:04 standalone.localdomain systemd[1]: session-280.scope: Consumed 38.178s CPU time.
Oct 13 15:00:04 standalone.localdomain systemd-logind[45629]: Removed session 280.
Oct 13 15:00:04 standalone.localdomain ceph-mon[29756]: pgmap v2565: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2566: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:04 standalone.localdomain python3.9[352586]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760367603.7836914-323-133979893870787/.source.cfg _original_basename=.0asdo30k follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:00:05 standalone.localdomain sshd[351543]: pam_unix(sshd:session): session closed for user root
Oct 13 15:00:05 standalone.localdomain systemd[1]: session-281.scope: Deactivated successfully.
Oct 13 15:00:05 standalone.localdomain systemd[1]: session-281.scope: Consumed 3.272s CPU time.
Oct 13 15:00:05 standalone.localdomain systemd-logind[45629]: Session 281 logged out. Waiting for processes to exit.
Oct 13 15:00:05 standalone.localdomain systemd-logind[45629]: Removed session 281.
Oct 13 15:00:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:00:06 standalone.localdomain ceph-mon[29756]: pgmap v2566: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2567: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57527 DF PROTO=TCP SPT=33040 DPT=9100 SEQ=4020683530 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA6DC040000000001030307) 
Oct 13 15:00:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57528 DF PROTO=TCP SPT=33040 DPT=9100 SEQ=4020683530 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA6DFF70000000001030307) 
Oct 13 15:00:08 standalone.localdomain ceph-mon[29756]: pgmap v2567: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2568: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57529 DF PROTO=TCP SPT=33040 DPT=9100 SEQ=4020683530 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA6E7F60000000001030307) 
Oct 13 15:00:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:00:10 standalone.localdomain ceph-mon[29756]: pgmap v2568: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2569: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:10 standalone.localdomain sshd[352601]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:00:10 standalone.localdomain sshd[352601]: Accepted publickey for root from 192.168.122.30 port 35444 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:00:10 standalone.localdomain systemd-logind[45629]: New session 282 of user root.
Oct 13 15:00:10 standalone.localdomain systemd[1]: Started Session 282 of User root.
Oct 13 15:00:10 standalone.localdomain sshd[352601]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:00:11 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40649 DF PROTO=TCP SPT=46820 DPT=9882 SEQ=3968380569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA6EC570000000001030307) 
Oct 13 15:00:11 standalone.localdomain ceph-mon[29756]: pgmap v2569: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:11 standalone.localdomain python3.9[352694]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:00:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2570: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:12 standalone.localdomain python3.9[352788]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 15:00:13 standalone.localdomain ceph-mon[29756]: pgmap v2570: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:13 standalone.localdomain python3.9[352937]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:00:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30892 DF PROTO=TCP SPT=38650 DPT=9105 SEQ=1357694346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA6F5B60000000001030307) 
Oct 13 15:00:14 standalone.localdomain sshd[352601]: pam_unix(sshd:session): session closed for user root
Oct 13 15:00:14 standalone.localdomain systemd[1]: session-282.scope: Deactivated successfully.
Oct 13 15:00:14 standalone.localdomain systemd[1]: session-282.scope: Consumed 2.000s CPU time.
Oct 13 15:00:14 standalone.localdomain systemd-logind[45629]: Session 282 logged out. Waiting for processes to exit.
Oct 13 15:00:14 standalone.localdomain systemd-logind[45629]: Removed session 282.
Oct 13 15:00:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2571: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:00:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:00:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:00:15 standalone.localdomain systemd[1]: tmp-crun.wwArSN.mount: Deactivated successfully.
Oct 13 15:00:15 standalone.localdomain podman[352953]: 2025-10-13 15:00:15.841902205 +0000 UTC m=+0.097215531 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, release=1, config_id=tripleo_step4, version=17.1.9, name=rhosp17/openstack-swift-object, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, build-date=2025-07-21T14:56:28, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, container_name=swift_object_server, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public)
Oct 13 15:00:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:00:15 standalone.localdomain podman[352954]: 2025-10-13 15:00:15.874938487 +0000 UTC m=+0.128946514 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, version=17.1.9, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, release=1, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20250721.1, name=rhosp17/openstack-swift-container, tcib_managed=true)
Oct 13 15:00:15 standalone.localdomain podman[352988]: 2025-10-13 15:00:15.922110057 +0000 UTC m=+0.059325097 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, distribution-scope=public, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, container_name=swift_account_server, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, com.redhat.component=openstack-swift-account-container, name=rhosp17/openstack-swift-account, tcib_managed=true)
Oct 13 15:00:16 standalone.localdomain podman[352953]: 2025-10-13 15:00:16.044845517 +0000 UTC m=+0.300158863 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, batch=17.1_20250721.1, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public)
Oct 13 15:00:16 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:00:16 standalone.localdomain podman[352954]: 2025-10-13 15:00:16.073821564 +0000 UTC m=+0.327829581 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-swift-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, vcs-type=git)
Oct 13 15:00:16 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:00:16 standalone.localdomain podman[352988]: 2025-10-13 15:00:16.14379332 +0000 UTC m=+0.281008370 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, batch=17.1_20250721.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container, distribution-scope=public, version=17.1.9, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 15:00:16 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:00:16 standalone.localdomain ceph-mon[29756]: pgmap v2571: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2572: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:16 standalone.localdomain systemd[1]: tmp-crun.ymUcgM.mount: Deactivated successfully.
Oct 13 15:00:18 standalone.localdomain ceph-mon[29756]: pgmap v2572: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:00:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3075878369' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:00:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:00:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3075878369' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:00:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2573: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40652 DF PROTO=TCP SPT=46820 DPT=9882 SEQ=3968380569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA708360000000001030307) 
Oct 13 15:00:19 standalone.localdomain sshd[353037]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:00:19 standalone.localdomain sshd[353037]: Accepted publickey for root from 192.168.122.30 port 48874 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:00:19 standalone.localdomain systemd-logind[45629]: New session 283 of user root.
Oct 13 15:00:19 standalone.localdomain systemd[1]: Started Session 283 of User root.
Oct 13 15:00:19 standalone.localdomain sshd[353037]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:00:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3075878369' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:00:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3075878369' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:00:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30894 DF PROTO=TCP SPT=38650 DPT=9105 SEQ=1357694346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA70D760000000001030307) 
Oct 13 15:00:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:00:20 standalone.localdomain ceph-mon[29756]: pgmap v2573: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:20 standalone.localdomain python3.9[353130]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:00:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2574: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:21 standalone.localdomain python3.9[353224]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:00:22 standalone.localdomain python3.9[353318]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 15:00:22 standalone.localdomain ceph-mon[29756]: pgmap v2574: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2575: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:00:23
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr', 'manila_data', 'volumes', 'backups', 'vms', 'manila_metadata', 'images']
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:00:23 standalone.localdomain python3.9[353370]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 15:00:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7838 DF PROTO=TCP SPT=36480 DPT=9102 SEQ=2784960547 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA71B1E0000000001030307) 
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:00:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:00:24 standalone.localdomain ceph-mon[29756]: pgmap v2575: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2576: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9729 DF PROTO=TCP SPT=54424 DPT=9101 SEQ=2400445947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA722AE0000000001030307) 
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.516549) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367625516584, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1347, "num_deletes": 250, "total_data_size": 1183787, "memory_usage": 1210056, "flush_reason": "Manual Compaction"}
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367625521857, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 733430, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59874, "largest_seqno": 61220, "table_properties": {"data_size": 729088, "index_size": 1814, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11675, "raw_average_key_size": 20, "raw_value_size": 719469, "raw_average_value_size": 1271, "num_data_blocks": 83, "num_entries": 566, "num_filter_entries": 566, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760367505, "oldest_key_time": 1760367505, "file_creation_time": 1760367625, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 5360 microseconds, and 2233 cpu microseconds.
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.521899) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 733430 bytes OK
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.521922) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.523599) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.523611) EVENT_LOG_v1 {"time_micros": 1760367625523607, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.523623) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 1177660, prev total WAL file size 1178149, number of live WAL files 2.
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.524056) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323532' seq:72057594037927935, type:22 .. '6D6772737461740032353033' seq:0, type:0; will stop at (end)
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(716KB)], [134(5862KB)]
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367625524089, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 6736841, "oldest_snapshot_seqno": -1}
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 5708 keys, 5081739 bytes, temperature: kUnknown
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367625543887, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 5081739, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5049569, "index_size": 16840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 147085, "raw_average_key_size": 25, "raw_value_size": 4951715, "raw_average_value_size": 867, "num_data_blocks": 671, "num_entries": 5708, "num_filter_entries": 5708, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760367625, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.544209) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 5081739 bytes
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.545936) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 338.8 rd, 255.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 5.7 +0.0 blob) out(4.8 +0.0 blob), read-write-amplify(16.1) write-amplify(6.9) OK, records in: 6175, records dropped: 467 output_compression: NoCompression
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.545968) EVENT_LOG_v1 {"time_micros": 1760367625545953, "job": 82, "event": "compaction_finished", "compaction_time_micros": 19885, "compaction_time_cpu_micros": 9958, "output_level": 6, "num_output_files": 1, "total_output_size": 5081739, "num_input_records": 6175, "num_output_records": 5708, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367625546247, "job": 82, "event": "table_file_deletion", "file_number": 136}
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367625546995, "job": 82, "event": "table_file_deletion", "file_number": 134}
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.523986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.547032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.547039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.547043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.547047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:00:25 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:25.547052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:00:26 standalone.localdomain ceph-mon[29756]: pgmap v2576: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2577: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9731 DF PROTO=TCP SPT=54424 DPT=9101 SEQ=2400445947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA72EB60000000001030307) 
Oct 13 15:00:28 standalone.localdomain ceph-mon[29756]: pgmap v2577: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2578: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:00:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:00:29 standalone.localdomain python3.9[353462]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 15:00:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:00:30 standalone.localdomain ceph-mon[29756]: pgmap v2578: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2579: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:31 standalone.localdomain python3.9[353611]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:00:31 standalone.localdomain python3.9[353701]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:00:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9732 DF PROTO=TCP SPT=54424 DPT=9101 SEQ=2400445947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA73E770000000001030307) 
Oct 13 15:00:32 standalone.localdomain ceph-mon[29756]: pgmap v2579: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:32 standalone.localdomain python3.9[353802]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:00:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2580: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:32 standalone.localdomain python3.9[353848]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:00:33 standalone.localdomain python3.9[353938]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:00:33 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 15:00:33 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 15:00:33 standalone.localdomain python3.9[353984]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:00:34 standalone.localdomain ceph-mon[29756]: pgmap v2580: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:34 standalone.localdomain python3.9[354074]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:00:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2581: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:35 standalone.localdomain python3.9[354164]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.524279) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367635524319, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 352, "num_deletes": 257, "total_data_size": 106240, "memory_usage": 113944, "flush_reason": "Manual Compaction"}
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367635528523, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 104858, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61221, "largest_seqno": 61572, "table_properties": {"data_size": 102750, "index_size": 284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5298, "raw_average_key_size": 17, "raw_value_size": 98453, "raw_average_value_size": 329, "num_data_blocks": 13, "num_entries": 299, "num_filter_entries": 299, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760367625, "oldest_key_time": 1760367625, "file_creation_time": 1760367635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 4276 microseconds, and 758 cpu microseconds.
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.528553) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 104858 bytes OK
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.528570) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.529958) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.529972) EVENT_LOG_v1 {"time_micros": 1760367635529967, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.529988) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 103831, prev total WAL file size 104154, number of live WAL files 2.
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.530640) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303036' seq:72057594037927935, type:22 .. '6C6F676D0032323539' seq:0, type:0; will stop at (end)
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(102KB)], [137(4962KB)]
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367635530671, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 5186597, "oldest_snapshot_seqno": -1}
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 5480 keys, 5100396 bytes, temperature: kUnknown
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367635552321, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 5100396, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5068873, "index_size": 16705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 143277, "raw_average_key_size": 26, "raw_value_size": 4974193, "raw_average_value_size": 907, "num_data_blocks": 663, "num_entries": 5480, "num_filter_entries": 5480, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760367635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.552640) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 5100396 bytes
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.554284) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 238.7 rd, 234.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 4.8 +0.0 blob) out(4.9 +0.0 blob), read-write-amplify(98.1) write-amplify(48.6) OK, records in: 6007, records dropped: 527 output_compression: NoCompression
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.554305) EVENT_LOG_v1 {"time_micros": 1760367635554295, "job": 84, "event": "compaction_finished", "compaction_time_micros": 21731, "compaction_time_cpu_micros": 12549, "output_level": 6, "num_output_files": 1, "total_output_size": 5100396, "num_input_records": 6007, "num_output_records": 5480, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367635554435, "job": 84, "event": "table_file_deletion", "file_number": 139}
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367635555108, "job": 84, "event": "table_file_deletion", "file_number": 137}
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.530603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.555176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.555182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.555184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.555186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:00:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:00:35.555188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:00:35 standalone.localdomain python3.9[354254]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:00:36 standalone.localdomain python3.9[354344]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:00:36 standalone.localdomain ceph-mon[29756]: pgmap v2581: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2582: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:37 standalone.localdomain python3.9[354434]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 15:00:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19663 DF PROTO=TCP SPT=51204 DPT=9100 SEQ=643678425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA7513C0000000001030307) 
Oct 13 15:00:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19664 DF PROTO=TCP SPT=51204 DPT=9100 SEQ=643678425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA755360000000001030307) 
Oct 13 15:00:38 standalone.localdomain ceph-mon[29756]: pgmap v2582: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2583: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19665 DF PROTO=TCP SPT=51204 DPT=9100 SEQ=643678425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA75D360000000001030307) 
Oct 13 15:00:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:00:40 standalone.localdomain ceph-mon[29756]: pgmap v2583: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2584: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:42 standalone.localdomain python3.9[354526]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:00:42 standalone.localdomain ceph-mon[29756]: pgmap v2584: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2585: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:42 standalone.localdomain python3.9[354618]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:00:43 standalone.localdomain python3.9[354708]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:00:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44789 DF PROTO=TCP SPT=50208 DPT=9105 SEQ=864704928 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA76AF60000000001030307) 
Oct 13 15:00:44 standalone.localdomain python3.9[354798]: ansible-service_facts Invoked
Oct 13 15:00:44 standalone.localdomain network[354815]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 15:00:44 standalone.localdomain network[354816]: 'network-scripts' will be removed from distribution in near future.
Oct 13 15:00:44 standalone.localdomain network[354817]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 15:00:44 standalone.localdomain ceph-mon[29756]: pgmap v2585: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2586: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:00:46 standalone.localdomain sudo[354830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:00:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:00:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:00:46 standalone.localdomain sudo[354830]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:00:46 standalone.localdomain sudo[354830]: pam_unix(sudo:session): session closed for user root
Oct 13 15:00:46 standalone.localdomain sudo[354857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:00:46 standalone.localdomain sudo[354857]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:00:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:00:46 standalone.localdomain podman[354846]: 2025-10-13 15:00:46.203416622 +0000 UTC m=+0.105776937 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.component=openstack-swift-object-container, tcib_managed=true, architecture=x86_64, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=swift_object_server, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, release=1, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12)
Oct 13 15:00:46 standalone.localdomain podman[354888]: 2025-10-13 15:00:46.266890776 +0000 UTC m=+0.074538958 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, build-date=2025-07-21T16:11:22, 
container_name=swift_account_server, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:00:46 standalone.localdomain podman[354847]: 2025-10-13 15:00:46.229434977 +0000 UTC m=+0.127250401 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, 
build-date=2025-07-21T15:54:32, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:00:46 standalone.localdomain podman[354846]: 2025-10-13 15:00:46.388954835 +0000 UTC m=+0.291315160 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, container_name=swift_object_server, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, release=1, io.buildah.version=1.33.12, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:56:28, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d)
Oct 13 15:00:46 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:00:46 standalone.localdomain podman[354847]: 2025-10-13 15:00:46.404968531 +0000 UTC m=+0.302783975 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, distribution-scope=public, com.redhat.component=openstack-swift-container-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, architecture=x86_64, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905)
Oct 13 15:00:46 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:00:46 standalone.localdomain podman[354888]: 2025-10-13 15:00:46.44173958 +0000 UTC m=+0.249387762 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, container_name=swift_account_server, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack 
osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=)
Oct 13 15:00:46 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:00:46 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:00:46 standalone.localdomain ceph-mon[29756]: pgmap v2586: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2587: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:46 standalone.localdomain sudo[354857]: pam_unix(sudo:session): session closed for user root
Oct 13 15:00:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:00:46 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:00:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:00:46 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:00:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:00:46 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:00:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:00:46 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:00:46 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev add4c87f-721a-424c-a26e-311d29a1ec4c (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:00:46 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev add4c87f-721a-424c-a26e-311d29a1ec4c (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:00:46 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event add4c87f-721a-424c-a26e-311d29a1ec4c (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:00:46 standalone.localdomain sudo[355010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:00:46 standalone.localdomain sudo[355010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:00:46 standalone.localdomain sudo[355010]: pam_unix(sudo:session): session closed for user root
Oct 13 15:00:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:00:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:00:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:00:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:00:47 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 15:00:47 standalone.localdomain object-server[355064]: Object update sweep starting on /srv/node/d1 (pid: 23)
Oct 13 15:00:47 standalone.localdomain object-server[355064]: Object update sweep completed on /srv/node/d1 in 0.00 seconds: 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 23)
Oct 13 15:00:47 standalone.localdomain object-server[355064]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 15:00:47 standalone.localdomain object-server[114601]: Object update sweep completed: 0.09s
Oct 13 15:00:48 standalone.localdomain ceph-mon[29756]: pgmap v2587: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42135 DF PROTO=TCP SPT=37350 DPT=9882 SEQ=420586152 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA77D360000000001030307) 
Oct 13 15:00:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2588: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:49 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:00:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:00:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:00:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44791 DF PROTO=TCP SPT=50208 DPT=9105 SEQ=864704928 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA782B70000000001030307) 
Oct 13 15:00:50 standalone.localdomain python3.9[355295]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 15:00:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:00:50 standalone.localdomain ceph-mon[29756]: pgmap v2588: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:00:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2589: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:51 standalone.localdomain ceph-mon[29756]: pgmap v2589: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2590: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:00:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:00:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:00:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:00:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:00:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:00:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62662 DF PROTO=TCP SPT=55280 DPT=9102 SEQ=372231943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA790500000000001030307) 
Oct 13 15:00:53 standalone.localdomain ceph-mon[29756]: pgmap v2590: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2591: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38289 DF PROTO=TCP SPT=40214 DPT=9101 SEQ=2965061901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA797DD0000000001030307) 
Oct 13 15:00:55 standalone.localdomain python3.9[355387]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 13 15:00:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:00:56 standalone.localdomain ceph-mon[29756]: pgmap v2591: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2592: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:56 standalone.localdomain python3.9[355477]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:00:57 standalone.localdomain python3.9[355550]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760367656.3978648-220-163303817937141/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:00:58 standalone.localdomain python3.9[355642]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:00:58 standalone.localdomain ceph-mon[29756]: pgmap v2592: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38291 DF PROTO=TCP SPT=40214 DPT=9101 SEQ=2965061901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA7A3F60000000001030307) 
Oct 13 15:00:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2593: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:00:59 standalone.localdomain python3.9[355715]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760367657.9776244-235-1892681484227/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:00 standalone.localdomain python3.9[355807]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:01:00 standalone.localdomain ceph-mon[29756]: pgmap v2593: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2594: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:01 standalone.localdomain python3.9[355899]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 15:01:01 standalone.localdomain CROND[355902]: (root) CMD (run-parts /etc/cron.hourly)
Oct 13 15:01:01 standalone.localdomain run-parts[355906]: (/etc/cron.hourly) starting 0anacron
Oct 13 15:01:01 standalone.localdomain run-parts[355915]: (/etc/cron.hourly) finished 0anacron
Oct 13 15:01:01 standalone.localdomain CROND[355900]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 13 15:01:02 standalone.localdomain python3.9[355962]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:01:02 standalone.localdomain ceph-mon[29756]: pgmap v2594: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38292 DF PROTO=TCP SPT=40214 DPT=9101 SEQ=2965061901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA7B3B60000000001030307) 
Oct 13 15:01:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2595: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:03 standalone.localdomain python3.9[356054]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 15:01:04 standalone.localdomain python3.9[356106]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:01:04 standalone.localdomain chronyd[28474]: chronyd exiting
Oct 13 15:01:04 standalone.localdomain systemd[1]: Stopping NTP client/server...
Oct 13 15:01:04 standalone.localdomain systemd[1]: chronyd.service: Deactivated successfully.
Oct 13 15:01:04 standalone.localdomain systemd[1]: Stopped NTP client/server.
Oct 13 15:01:04 standalone.localdomain systemd[1]: Starting NTP client/server...
Oct 13 15:01:04 standalone.localdomain chronyd[356114]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 13 15:01:04 standalone.localdomain chronyd[356114]: Frequency -30.471 +/- 0.251 ppm read from /var/lib/chrony/drift
Oct 13 15:01:04 standalone.localdomain chronyd[356114]: Loaded seccomp filter (level 2)
Oct 13 15:01:04 standalone.localdomain systemd[1]: Started NTP client/server.
Oct 13 15:01:04 standalone.localdomain ceph-mon[29756]: pgmap v2595: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2596: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:04 standalone.localdomain sshd[353037]: pam_unix(sshd:session): session closed for user root
Oct 13 15:01:04 standalone.localdomain systemd[1]: session-283.scope: Deactivated successfully.
Oct 13 15:01:04 standalone.localdomain systemd[1]: session-283.scope: Consumed 28.442s CPU time.
Oct 13 15:01:04 standalone.localdomain systemd-logind[45629]: Session 283 logged out. Waiting for processes to exit.
Oct 13 15:01:04 standalone.localdomain systemd-logind[45629]: Removed session 283.
Oct 13 15:01:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:01:06 standalone.localdomain ceph-mon[29756]: pgmap v2596: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2597: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5577 DF PROTO=TCP SPT=41452 DPT=9100 SEQ=3218944113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA7C6650000000001030307) 
Oct 13 15:01:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5578 DF PROTO=TCP SPT=41452 DPT=9100 SEQ=3218944113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA7CA760000000001030307) 
Oct 13 15:01:08 standalone.localdomain ceph-mon[29756]: pgmap v2597: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2598: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:10 standalone.localdomain sshd[356130]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:01:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5579 DF PROTO=TCP SPT=41452 DPT=9100 SEQ=3218944113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA7D2760000000001030307) 
Oct 13 15:01:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:01:10 standalone.localdomain sshd[356130]: Accepted publickey for root from 192.168.122.30 port 52486 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:01:10 standalone.localdomain systemd-logind[45629]: New session 284 of user root.
Oct 13 15:01:10 standalone.localdomain systemd[1]: Started Session 284 of User root.
Oct 13 15:01:10 standalone.localdomain sshd[356130]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:01:10 standalone.localdomain ceph-mon[29756]: pgmap v2598: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2599: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:11 standalone.localdomain python3.9[356223]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:01:12 standalone.localdomain ceph-mon[29756]: pgmap v2599: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:12 standalone.localdomain python3.9[356317]: ansible-ansible.builtin.file Invoked with group=root mode=0770 owner=root path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2600: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:13 standalone.localdomain python3.9[356420]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:13 standalone.localdomain ceph-mon[29756]: pgmap v2600: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:13 standalone.localdomain python3.9[356466]: ansible-ansible.legacy.file Invoked with group=root mode=0660 owner=root dest=/root/.config/containers/auth.json _original_basename=.0ru00ppf recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32750 DF PROTO=TCP SPT=34772 DPT=9105 SEQ=551330212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA7E0370000000001030307) 
Oct 13 15:01:14 standalone.localdomain python3.9[356556]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2601: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:15 standalone.localdomain python3.9[356629]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760367674.0614057-61-17511999034547/.source _original_basename=.mnp9l1r1 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:01:15 standalone.localdomain python3.9[356719]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:01:16 standalone.localdomain python3.9[356809]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:16 standalone.localdomain ceph-mon[29756]: pgmap v2601: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2602: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:01:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:01:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:01:16 standalone.localdomain python3.9[356880]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760367675.8752406-85-20056938093362/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:01:16 standalone.localdomain systemd[1]: tmp-crun.qiO77R.mount: Deactivated successfully.
Oct 13 15:01:16 standalone.localdomain podman[356881]: 2025-10-13 15:01:16.840463435 +0000 UTC m=+0.100058715 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, com.redhat.component=openstack-swift-object-container, 
name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, release=1)
Oct 13 15:01:16 standalone.localdomain podman[356883]: 2025-10-13 15:01:16.907684828 +0000 UTC m=+0.159280252 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 15:01:16 standalone.localdomain podman[356882]: 2025-10-13 15:01:16.926837548 +0000 UTC m=+0.183917291 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, name=rhosp17/openstack-swift-container, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', 
'/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-container, release=1, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=swift_container_server)
Oct 13 15:01:17 standalone.localdomain podman[356881]: 2025-10-13 15:01:17.034806696 +0000 UTC m=+0.294401996 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_id=tripleo_step4, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, release=1, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:01:17 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:01:17 standalone.localdomain podman[356883]: 2025-10-13 15:01:17.093167065 +0000 UTC m=+0.344762449 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server)
Oct 13 15:01:17 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:01:17 standalone.localdomain podman[356882]: 2025-10-13 15:01:17.137814372 +0000 UTC m=+0.394894085 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, tcib_managed=true, name=rhosp17/openstack-swift-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, com.redhat.component=openstack-swift-container-container, managed_by=tripleo_ansible, release=1, 
version=17.1.9, build-date=2025-07-21T15:54:32, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4)
Oct 13 15:01:17 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:01:17 standalone.localdomain python3.9[357048]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:17 standalone.localdomain python3.9[357120]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760367676.9253383-85-2347715848665/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:01:18 standalone.localdomain python3.9[357210]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:18 standalone.localdomain ceph-mon[29756]: pgmap v2602: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:01:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1280705003' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:01:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:01:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1280705003' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:01:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54772 DF PROTO=TCP SPT=59592 DPT=9882 SEQ=958460445 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA7F2760000000001030307) 
Oct 13 15:01:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2603: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:18 standalone.localdomain python3.9[357300]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:19 standalone.localdomain python3.9[357371]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367678.5119948-122-212617436968993/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1280705003' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:01:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1280705003' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:01:19 standalone.localdomain python3.9[357461]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32752 DF PROTO=TCP SPT=34772 DPT=9105 SEQ=551330212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA7F7F60000000001030307) 
Oct 13 15:01:20 standalone.localdomain python3.9[357532]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367679.5258515-137-74008066199984/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:01:20 standalone.localdomain ceph-mon[29756]: pgmap v2603: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2604: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:21 standalone.localdomain python3.9[357622]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:01:21 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:01:21 standalone.localdomain systemd-rc-local-generator[357650]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:01:21 standalone.localdomain systemd-sysv-generator[357654]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:01:21 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:01:21 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:01:21 standalone.localdomain systemd-rc-local-generator[357686]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:01:21 standalone.localdomain systemd-sysv-generator[357690]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:01:21 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:01:22 standalone.localdomain systemd[1]: Starting EDPM Container Shutdown...
Oct 13 15:01:22 standalone.localdomain systemd[1]: Finished EDPM Container Shutdown.
Oct 13 15:01:22 standalone.localdomain ceph-mon[29756]: pgmap v2604: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:22 standalone.localdomain python3.9[357790]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2605: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:23 standalone.localdomain python3.9[357861]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367682.2058365-160-258182208721334/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:01:23
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', 'images', 'manila_metadata', '.mgr', 'vms', 'manila_data', 'backups']
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:01:23 standalone.localdomain account-server[114555]: 172.20.0.100 - - [13/Oct/2025:15:01:23 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx561d57f1ffb3473bb2442-0068ed1443" "proxy-server 2" 0.0006 "-" 20 -
Oct 13 15:01:23 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx561d57f1ffb3473bb2442-0068ed1443)
Oct 13 15:01:23 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx561d57f1ffb3473bb2442-0068ed1443)
Oct 13 15:01:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50343 DF PROTO=TCP SPT=43398 DPT=9102 SEQ=3771608903 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA8057E0000000001030307) 
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:01:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:01:23 standalone.localdomain python3.9[357951]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:24 standalone.localdomain python3.9[358022]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367683.2478588-175-229283809124672/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:24 standalone.localdomain ceph-mon[29756]: pgmap v2605: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2606: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:24 standalone.localdomain python3.9[358112]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:01:24 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:01:24 standalone.localdomain systemd-rc-local-generator[358137]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:01:24 standalone.localdomain systemd-sysv-generator[358141]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:01:25 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:01:25 standalone.localdomain systemd[1]: Starting Create netns directory...
Oct 13 15:01:25 standalone.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 15:01:25 standalone.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 15:01:25 standalone.localdomain systemd[1]: Finished Create netns directory.
Oct 13 15:01:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2446 DF PROTO=TCP SPT=42400 DPT=9101 SEQ=1229343225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA80D0E0000000001030307) 
Oct 13 15:01:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:01:26 standalone.localdomain python3.9[358243]: ansible-ansible.builtin.service_facts Invoked
Oct 13 15:01:26 standalone.localdomain network[358260]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 15:01:26 standalone.localdomain network[358261]: 'network-scripts' will be removed from distribution in near future.
Oct 13 15:01:26 standalone.localdomain network[358262]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 15:01:26 standalone.localdomain ceph-mon[29756]: pgmap v2606: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2607: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:27 standalone.localdomain ceph-mon[29756]: pgmap v2607: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:28 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:01:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2448 DF PROTO=TCP SPT=42400 DPT=9101 SEQ=1229343225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA819360000000001030307) 
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2608: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:01:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:01:29 standalone.localdomain ceph-mon[29756]: pgmap v2608: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:01:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2609: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:32 standalone.localdomain ceph-mon[29756]: pgmap v2609: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2449 DF PROTO=TCP SPT=42400 DPT=9101 SEQ=1229343225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA828F60000000001030307) 
Oct 13 15:01:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2610: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:33 standalone.localdomain python3.9[358469]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:33 standalone.localdomain python3.9[358542]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/root/.ansible/tmp/ansible-tmp-1760367693.0610797-216-206477876940396/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:34 standalone.localdomain ceph-mon[29756]: pgmap v2610: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:34 standalone.localdomain python3.9[358633]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:01:34 standalone.localdomain systemd[1]: Reloading OpenSSH server daemon...
Oct 13 15:01:34 standalone.localdomain systemd[1]: Reloaded OpenSSH server daemon.
Oct 13 15:01:34 standalone.localdomain sshd[337962]: Received SIGHUP; restarting.
Oct 13 15:01:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2611: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:34 standalone.localdomain sshd[337962]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:01:34 standalone.localdomain sshd[337962]: Server listening on 0.0.0.0 port 22.
Oct 13 15:01:34 standalone.localdomain sshd[337962]: Server listening on :: port 22.
Oct 13 15:01:35 standalone.localdomain python3.9[358727]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:01:35 standalone.localdomain python3.9[358817]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:36 standalone.localdomain python3.9[358888]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367695.4069033-247-51738227733908/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:36 standalone.localdomain ceph-mon[29756]: pgmap v2611: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2612: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:37 standalone.localdomain python3.9[358978]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 13 15:01:37 standalone.localdomain systemd[1]: Starting Time & Date Service...
Oct 13 15:01:37 standalone.localdomain systemd[1]: Started Time & Date Service.
Oct 13 15:01:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13535 DF PROTO=TCP SPT=40608 DPT=9100 SEQ=2239109232 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA83B940000000001030307) 
Oct 13 15:01:38 standalone.localdomain python3.9[359072]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13536 DF PROTO=TCP SPT=40608 DPT=9100 SEQ=2239109232 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA83FB70000000001030307) 
Oct 13 15:01:38 standalone.localdomain python3.9[359162]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:38 standalone.localdomain ceph-mon[29756]: pgmap v2612: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2613: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:39 standalone.localdomain python3.9[359233]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760367698.1471124-282-209149021941205/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:39 standalone.localdomain python3.9[359323]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:40 standalone.localdomain python3.9[359394]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760367699.1660752-297-32083086131706/.source.yaml _original_basename=.88gkgh0b follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13537 DF PROTO=TCP SPT=40608 DPT=9100 SEQ=2239109232 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA847B60000000001030307) 
Oct 13 15:01:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:01:40 standalone.localdomain ceph-mon[29756]: pgmap v2613: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:40 standalone.localdomain python3.9[359484]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2614: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:41 standalone.localdomain python3.9[359557]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367700.2211776-312-278428830450125/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:41 standalone.localdomain python3.9[359647]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:01:42 standalone.localdomain python3.9[359738]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:01:42 standalone.localdomain ceph-mon[29756]: pgmap v2614: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2615: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:43 standalone.localdomain python3[359829]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 13 15:01:43 standalone.localdomain python3.9[359919]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:44 standalone.localdomain python3.9[359990]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367703.3626373-351-155933315529488/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:44 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13538 DF PROTO=TCP SPT=40608 DPT=9100 SEQ=2239109232 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA857760000000001030307) 
Oct 13 15:01:44 standalone.localdomain ceph-mon[29756]: pgmap v2615: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2616: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:44 standalone.localdomain python3.9[360080]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:45 standalone.localdomain python3.9[360151]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367704.4304953-366-243722203940223/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.575832) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367705575867, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 914, "num_deletes": 251, "total_data_size": 695759, "memory_usage": 713160, "flush_reason": "Manual Compaction"}
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367705581956, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 682234, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61573, "largest_seqno": 62486, "table_properties": {"data_size": 678229, "index_size": 1733, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9115, "raw_average_key_size": 19, "raw_value_size": 670053, "raw_average_value_size": 1419, "num_data_blocks": 79, "num_entries": 472, "num_filter_entries": 472, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760367635, "oldest_key_time": 1760367635, "file_creation_time": 1760367705, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 6227 microseconds, and 2630 cpu microseconds.
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.582035) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 682234 bytes OK
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.582078) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.584290) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.584317) EVENT_LOG_v1 {"time_micros": 1760367705584309, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.584339) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 691270, prev total WAL file size 691270, number of live WAL files 2.
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.584893) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(666KB)], [140(4980KB)]
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367705584935, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 5782630, "oldest_snapshot_seqno": -1}
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 5435 keys, 4758293 bytes, temperature: kUnknown
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367705603999, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 4758293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4727475, "index_size": 16129, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 142940, "raw_average_key_size": 26, "raw_value_size": 4633914, "raw_average_value_size": 852, "num_data_blocks": 633, "num_entries": 5435, "num_filter_entries": 5435, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760367705, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.604315) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 4758293 bytes
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.605936) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 301.9 rd, 248.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 4.9 +0.0 blob) out(4.5 +0.0 blob), read-write-amplify(15.5) write-amplify(7.0) OK, records in: 5952, records dropped: 517 output_compression: NoCompression
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.605958) EVENT_LOG_v1 {"time_micros": 1760367705605949, "job": 86, "event": "compaction_finished", "compaction_time_micros": 19153, "compaction_time_cpu_micros": 9503, "output_level": 6, "num_output_files": 1, "total_output_size": 4758293, "num_input_records": 5952, "num_output_records": 5435, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367705606174, "job": 86, "event": "table_file_deletion", "file_number": 142}
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367705606769, "job": 86, "event": "table_file_deletion", "file_number": 140}
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.584806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.606852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.606859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.606863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.606866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:01:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:01:45.606869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:01:45 standalone.localdomain python3.9[360241]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:46 standalone.localdomain python3.9[360312]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367705.4937527-381-253885225037884/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:46 standalone.localdomain ceph-mon[29756]: pgmap v2616: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2617: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:46 standalone.localdomain sudo[360403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:01:46 standalone.localdomain sudo[360403]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:01:46 standalone.localdomain sudo[360403]: pam_unix(sudo:session): session closed for user root
Oct 13 15:01:46 standalone.localdomain sudo[360418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:01:46 standalone.localdomain sudo[360418]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:01:47 standalone.localdomain python3.9[360402]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:47 standalone.localdomain python3.9[360515]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367706.5574236-396-81264404363042/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:47 standalone.localdomain sudo[360418]: pam_unix(sudo:session): session closed for user root
Oct 13 15:01:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:01:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:01:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:01:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:01:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:01:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:01:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:01:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:01:47 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 657392aa-e604-4265-b2d4-3d376786f4e7 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:01:47 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 657392aa-e604-4265-b2d4-3d376786f4e7 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:01:47 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 657392aa-e604-4265-b2d4-3d376786f4e7 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:01:47 standalone.localdomain ceph-mon[29756]: pgmap v2617: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:01:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:01:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:01:47 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:01:47 standalone.localdomain sudo[360566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:01:47 standalone.localdomain sudo[360566]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:01:47 standalone.localdomain sudo[360566]: pam_unix(sudo:session): session closed for user root
Oct 13 15:01:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:01:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:01:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:01:47 standalone.localdomain podman[360596]: 2025-10-13 15:01:47.815055875 +0000 UTC m=+0.089414957 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.buildah.version=1.33.12, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, build-date=2025-07-21T14:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20250721.1)
Oct 13 15:01:47 standalone.localdomain podman[360597]: 2025-10-13 15:01:47.79574101 +0000 UTC m=+0.070658759 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, vcs-type=git, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905)
Oct 13 15:01:47 standalone.localdomain podman[360598]: 2025-10-13 15:01:47.863620502 +0000 UTC m=+0.131578967 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, tcib_managed=true, name=rhosp17/openstack-swift-account, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, container_name=swift_account_server, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, distribution-scope=public, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64)
Oct 13 15:01:47 standalone.localdomain podman[360596]: 2025-10-13 15:01:47.986097028 +0000 UTC m=+0.260456120 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, vcs-type=git, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, container_name=swift_object_server, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, version=17.1.9)
Oct 13 15:01:47 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:01:48 standalone.localdomain podman[360597]: 2025-10-13 15:01:48.011224732 +0000 UTC m=+0.286142501 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, release=1, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, name=rhosp17/openstack-swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server)
Oct 13 15:01:48 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:01:48 standalone.localdomain podman[360598]: 2025-10-13 15:01:48.074199973 +0000 UTC m=+0.342158448 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, vcs-type=git, version=17.1.9)
Oct 13 15:01:48 standalone.localdomain python3.9[360700]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:01:48 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:01:48 standalone.localdomain python3.9[360792]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367707.601458-411-194495777690636/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2618: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:48 standalone.localdomain systemd[1]: tmp-crun.0wZgU2.mount: Deactivated successfully.
Oct 13 15:01:49 standalone.localdomain python3.9[360882]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:49 standalone.localdomain ceph-mon[29756]: pgmap v2618: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:49 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:01:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:01:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:01:49 standalone.localdomain python3.9[360972]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:01:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:01:50 standalone.localdomain python3.9[361065]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                          include "/etc/nftables/edpm-chains.nft"
                                                          include "/etc/nftables/edpm-rules.nft"
                                                          include "/etc/nftables/edpm-jumps.nft"
                                                           path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2619: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:01:51 standalone.localdomain python3.9[361156]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=root path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:51 standalone.localdomain python3.9[361246]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=root path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:01:52 standalone.localdomain python3.9[361336]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 13 15:01:52 standalone.localdomain ceph-mon[29756]: pgmap v2619: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2620: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:52 standalone.localdomain python3.9[361427]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 13 15:01:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:01:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:01:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:01:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:01:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:01:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:01:53 standalone.localdomain sshd[356130]: pam_unix(sshd:session): session closed for user root
Oct 13 15:01:53 standalone.localdomain systemd[1]: session-284.scope: Deactivated successfully.
Oct 13 15:01:53 standalone.localdomain systemd[1]: session-284.scope: Consumed 24.966s CPU time.
Oct 13 15:01:53 standalone.localdomain systemd-logind[45629]: Session 284 logged out. Waiting for processes to exit.
Oct 13 15:01:53 standalone.localdomain systemd-logind[45629]: Removed session 284.
Oct 13 15:01:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45218 DF PROTO=TCP SPT=37850 DPT=9102 SEQ=1314867712 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA87AAF0000000001030307) 
Oct 13 15:01:54 standalone.localdomain ceph-mon[29756]: pgmap v2620: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2621: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49761 DF PROTO=TCP SPT=52966 DPT=9101 SEQ=3793661359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA8823E0000000001030307) 
Oct 13 15:01:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:01:56 standalone.localdomain ceph-mon[29756]: pgmap v2621: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2622: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:58 standalone.localdomain sshd[361443]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:01:58 standalone.localdomain ceph-mon[29756]: pgmap v2622: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:58 standalone.localdomain sshd[361443]: Accepted publickey for root from 192.168.122.30 port 35194 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:01:58 standalone.localdomain systemd-logind[45629]: New session 285 of user root.
Oct 13 15:01:58 standalone.localdomain systemd[1]: Started Session 285 of User root.
Oct 13 15:01:58 standalone.localdomain sshd[361443]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:01:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2623: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:01:59 standalone.localdomain python3.9[361536]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 13 15:02:00 standalone.localdomain python3.9[361626]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:02:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:02:00 standalone.localdomain ceph-mon[29756]: pgmap v2623: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2624: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:00 standalone.localdomain python3.9[361718]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 13 15:02:01 standalone.localdomain python3.9[361808]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.csmpnv29 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:02:02 standalone.localdomain python3.9[361881]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.csmpnv29 mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760367720.9617026-44-136094745399116/.source.csmpnv29 _original_basename=.8qnnnblk follow=False checksum=0c90883844bdc6c731b22f0c0f48f8d2ed044107 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:02:02 standalone.localdomain ceph-mon[29756]: pgmap v2624: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2625: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:03 standalone.localdomain python3.9[361971]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:02:03 standalone.localdomain python3.9[362061]: ansible-ansible.builtin.blockinfile Invoked with block=standalone.localdomain,192.168.122.100,standalone* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCNkoVwZ2oohlYvcD/qhxAyRnIat7PUr6hW4eXifpaxVFWOGSjkOVawsiECVgrbBhDIPW+M4HrGs8BsLzCO0q0HJQ9IvnS8ZQr5mRbjei4swbOm3FFwBq6RW1FXifUOD0K8Ob/JoKwFFoitFHI4w4TTVbUXlub6YgM8FJZfZVQF+OkgsOUPKravagy8zyzUWCJZq3CWyzjEMKb2CoJMV+uoryALhR6A/iAJGGAg/TtDOXCxsHfuGUPXo04ks63xpOCgGpzdQnD5c9q5EySCejiYhtsgm+CTNrcsvO9yhjXYVQ8IB15W8KmZDJNAcEe5i2bBYbTLiC0NgdyINHQ8jnb7mTztBjhdNbFYPqvgD8p8i3obhP1yvSBxk5yJ3qzht6ixTnggCh5Yev6rIcetij2XYREtvsTTCvq1To+Ke7Qem4wRnRtqwQxvH4yBUu+WrqFWJIeAH7wB4QH05ordBku7Dqss4Ak64CSJfQ+NnAEx2NaEhGfvNPOAT2GHFm8zuCU=
                                                          standalone.localdomain,192.168.122.100,standalone* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIItfyllXPnA2mglXY75TaDQc+oQ2p7yOmXrpgnYlAkIm
                                                          standalone.localdomain,192.168.122.100,standalone* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGRgz4VWqFP1i7a7km/tmigFqNlGZB7h3tW6ksNloCiHUVgCXwmEsBM9hHJUOv09oiloUA/lCqkeFgTp8x5naAM=
                                                           create=True mode=0644 path=/tmp/ansible.csmpnv29 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:02:04 standalone.localdomain python3.9[362151]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.csmpnv29' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:02:04 standalone.localdomain ceph-mon[29756]: pgmap v2625: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2626: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:05 standalone.localdomain python3.9[362243]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.csmpnv29 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:02:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:02:05 standalone.localdomain sshd[361443]: pam_unix(sshd:session): session closed for user root
Oct 13 15:02:05 standalone.localdomain systemd-logind[45629]: Session 285 logged out. Waiting for processes to exit.
Oct 13 15:02:05 standalone.localdomain systemd[1]: session-285.scope: Deactivated successfully.
Oct 13 15:02:05 standalone.localdomain systemd[1]: session-285.scope: Consumed 3.680s CPU time.
Oct 13 15:02:05 standalone.localdomain systemd-logind[45629]: Removed session 285.
Oct 13 15:02:06 standalone.localdomain ceph-mon[29756]: pgmap v2626: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2627: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:07 standalone.localdomain systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 13 15:02:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19005 DF PROTO=TCP SPT=38562 DPT=9100 SEQ=2179871649 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA8B0C40000000001030307) 
Oct 13 15:02:08 standalone.localdomain ceph-mon[29756]: pgmap v2627: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2628: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:02:10 standalone.localdomain ceph-mon[29756]: pgmap v2628: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2629: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:10 standalone.localdomain sshd[362260]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:02:10 standalone.localdomain sshd[362260]: Accepted publickey for root from 192.168.122.30 port 58852 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:02:10 standalone.localdomain systemd-logind[45629]: New session 286 of user root.
Oct 13 15:02:10 standalone.localdomain systemd[1]: Started Session 286 of User root.
Oct 13 15:02:10 standalone.localdomain sshd[362260]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:02:11 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29853 DF PROTO=TCP SPT=55810 DPT=9882 SEQ=3354693275 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA8C1170000000001030307) 
Oct 13 15:02:11 standalone.localdomain python3.9[362353]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:02:12 standalone.localdomain ceph-mon[29756]: pgmap v2629: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2630: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:12 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35144 DF PROTO=TCP SPT=48286 DPT=9105 SEQ=1908045305 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA8C6860000000001030307) 
Oct 13 15:02:13 standalone.localdomain python3.9[362447]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 13 15:02:13 standalone.localdomain python3.9[362539]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:02:14 standalone.localdomain ceph-mon[29756]: pgmap v2630: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2631: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:14 standalone.localdomain python3.9[362630]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:02:15 standalone.localdomain python3.9[362721]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:02:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:02:15 standalone.localdomain python3.9[362813]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:02:16 standalone.localdomain ceph-mon[29756]: pgmap v2631: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:16 standalone.localdomain python3.9[362906]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:02:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2632: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:16 standalone.localdomain sshd[362260]: pam_unix(sshd:session): session closed for user root
Oct 13 15:02:16 standalone.localdomain systemd[1]: session-286.scope: Deactivated successfully.
Oct 13 15:02:16 standalone.localdomain systemd[1]: session-286.scope: Consumed 3.493s CPU time.
Oct 13 15:02:16 standalone.localdomain systemd-logind[45629]: Session 286 logged out. Waiting for processes to exit.
Oct 13 15:02:16 standalone.localdomain systemd-logind[45629]: Removed session 286.
Oct 13 15:02:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:02:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3308745812' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:02:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:02:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3308745812' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:02:18 standalone.localdomain ceph-mon[29756]: pgmap v2632: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3308745812' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:02:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3308745812' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:02:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2633: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:02:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:02:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:02:18 standalone.localdomain systemd[1]: tmp-crun.XND4Jt.mount: Deactivated successfully.
Oct 13 15:02:18 standalone.localdomain podman[362922]: 2025-10-13 15:02:18.834766336 +0000 UTC m=+0.093878305 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, 
distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:02:18 standalone.localdomain podman[362921]: 2025-10-13 15:02:18.870248499 +0000 UTC m=+0.128807051 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.component=openstack-swift-object-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, batch=17.1_20250721.1, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.buildah.version=1.33.12)
Oct 13 15:02:18 standalone.localdomain podman[362923]: 2025-10-13 15:02:18.94001303 +0000 UTC m=+0.193767904 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step4, build-date=2025-07-21T16:11:22, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, com.redhat.component=openstack-swift-account-container, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
managed_by=tripleo_ansible, version=17.1.9, batch=17.1_20250721.1, container_name=swift_account_server, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:02:19 standalone.localdomain podman[362922]: 2025-10-13 15:02:19.067761599 +0000 UTC m=+0.326873468 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-container-container, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, 
io.openshift.expose-services=, build-date=2025-07-21T15:54:32, container_name=swift_container_server, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, release=1, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team)
Oct 13 15:02:19 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:02:19 standalone.localdomain podman[362921]: 2025-10-13 15:02:19.106367749 +0000 UTC m=+0.364926271 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, 
tcib_managed=true, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, architecture=x86_64, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:02:19 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:02:19 standalone.localdomain podman[362923]: 2025-10-13 15:02:19.119027149 +0000 UTC m=+0.372782003 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, com.redhat.component=openstack-swift-account-container, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, release=1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=swift_account_server, build-date=2025-07-21T16:11:22, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:02:19 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:02:19 standalone.localdomain ceph-mon[29756]: pgmap v2633: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:19 standalone.localdomain systemd[1]: tmp-crun.ne8MFy.mount: Deactivated successfully.
Oct 13 15:02:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:02:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2634: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:21 standalone.localdomain sshd[363003]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:02:21 standalone.localdomain sshd[363003]: Accepted publickey for root from 192.168.122.30 port 47900 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:02:21 standalone.localdomain systemd-logind[45629]: New session 287 of user root.
Oct 13 15:02:21 standalone.localdomain systemd[1]: Started Session 287 of User root.
Oct 13 15:02:21 standalone.localdomain sshd[363003]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:02:22 standalone.localdomain ceph-mon[29756]: pgmap v2634: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2635: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:22 standalone.localdomain python3.9[363096]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:02:23
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_data', 'vms', 'manila_metadata', '.mgr', 'backups', 'volumes', 'images']
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:02:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30700 DF PROTO=TCP SPT=49914 DPT=9102 SEQ=3238778215 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA8EFDF0000000001030307) 
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:02:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:02:23 standalone.localdomain python3.9[363190]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 15:02:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30701 DF PROTO=TCP SPT=49914 DPT=9102 SEQ=3238778215 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA8F3F70000000001030307) 
Oct 13 15:02:24 standalone.localdomain ceph-mon[29756]: pgmap v2635: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2636: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:24 standalone.localdomain python3.9[363242]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 15:02:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46552 DF PROTO=TCP SPT=42914 DPT=9101 SEQ=3239446814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA8F76E0000000001030307) 
Oct 13 15:02:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:02:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46553 DF PROTO=TCP SPT=42914 DPT=9101 SEQ=3239446814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA8FB760000000001030307) 
Oct 13 15:02:26 standalone.localdomain ceph-mon[29756]: pgmap v2636: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30702 DF PROTO=TCP SPT=49914 DPT=9102 SEQ=3238778215 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA8FBF60000000001030307) 
Oct 13 15:02:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2637: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46554 DF PROTO=TCP SPT=42914 DPT=9101 SEQ=3239446814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA903760000000001030307) 
Oct 13 15:02:28 standalone.localdomain ceph-mon[29756]: pgmap v2637: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2638: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:02:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:02:29 standalone.localdomain python3.9[363334]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:02:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:02:30 standalone.localdomain ceph-mon[29756]: pgmap v2638: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30703 DF PROTO=TCP SPT=49914 DPT=9102 SEQ=3238778215 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA90BB60000000001030307) 
Oct 13 15:02:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2639: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:32 standalone.localdomain python3.9[363425]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:02:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46555 DF PROTO=TCP SPT=42914 DPT=9101 SEQ=3239446814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA913360000000001030307) 
Oct 13 15:02:32 standalone.localdomain ceph-mon[29756]: pgmap v2639: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2640: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:32 standalone.localdomain python3.9[363515]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:02:33 standalone.localdomain python3.9[363605]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Updating Subscription Management repositories.
                                                          Core libraries or services have been updated since boot-up:
                                                            * systemd
                                                          
                                                          Reboot is required to fully utilize these updates.
                                                          More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:02:33 standalone.localdomain ceph-mon[29756]: pgmap v2640: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:34 standalone.localdomain python3.9[363695]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 13 15:02:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2641: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:35 standalone.localdomain python3.9[363785]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:02:35 standalone.localdomain python3.9[363877]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:02:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:02:35 standalone.localdomain sshd[363003]: pam_unix(sshd:session): session closed for user root
Oct 13 15:02:35 standalone.localdomain systemd[1]: session-287.scope: Deactivated successfully.
Oct 13 15:02:35 standalone.localdomain systemd[1]: session-287.scope: Consumed 10.784s CPU time.
Oct 13 15:02:35 standalone.localdomain systemd-logind[45629]: Session 287 logged out. Waiting for processes to exit.
Oct 13 15:02:35 standalone.localdomain systemd-logind[45629]: Removed session 287.
Oct 13 15:02:36 standalone.localdomain ceph-mon[29756]: pgmap v2641: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2642: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60619 DF PROTO=TCP SPT=41200 DPT=9100 SEQ=2586555219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA925F50000000001030307) 
Oct 13 15:02:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60620 DF PROTO=TCP SPT=41200 DPT=9100 SEQ=2586555219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA929F70000000001030307) 
Oct 13 15:02:38 standalone.localdomain ceph-mon[29756]: pgmap v2642: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2643: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60621 DF PROTO=TCP SPT=41200 DPT=9100 SEQ=2586555219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA931F60000000001030307) 
Oct 13 15:02:40 standalone.localdomain sshd[363893]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:02:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:02:40 standalone.localdomain ceph-mon[29756]: pgmap v2643: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2644: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:40 standalone.localdomain sshd[363895]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:02:41 standalone.localdomain sshd[363895]: Accepted publickey for root from 192.168.122.30 port 40304 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:02:41 standalone.localdomain systemd-logind[45629]: New session 288 of user root.
Oct 13 15:02:41 standalone.localdomain systemd[1]: Started Session 288 of User root.
Oct 13 15:02:41 standalone.localdomain sshd[363895]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:02:41 standalone.localdomain unix_chkpwd[363920]: password check failed for user (root)
Oct 13 15:02:41 standalone.localdomain sshd[363893]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 13 15:02:41 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50259 DF PROTO=TCP SPT=47008 DPT=9882 SEQ=1192186608 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA936460000000001030307) 
Oct 13 15:02:42 standalone.localdomain python3.9[363989]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:02:42 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50260 DF PROTO=TCP SPT=47008 DPT=9882 SEQ=1192186608 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA93A370000000001030307) 
Oct 13 15:02:42 standalone.localdomain ceph-mon[29756]: pgmap v2644: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2645: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:42 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9894 DF PROTO=TCP SPT=59568 DPT=9105 SEQ=1320034980 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA93BB70000000001030307) 
Oct 13 15:02:43 standalone.localdomain sshd[363893]: Failed password for root from 80.94.93.176 port 60056 ssh2
Oct 13 15:02:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9895 DF PROTO=TCP SPT=59568 DPT=9105 SEQ=1320034980 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA93FB60000000001030307) 
Oct 13 15:02:44 standalone.localdomain python3.9[364083]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:02:44 standalone.localdomain ceph-mon[29756]: pgmap v2645: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2646: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:44 standalone.localdomain unix_chkpwd[364174]: password check failed for user (root)
Oct 13 15:02:45 standalone.localdomain python3.9[364173]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:02:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:02:45 standalone.localdomain python3.9[364245]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367764.447027-74-93301449412801/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=928926d2c02810deba82cc2155c15efe3a7bcb96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:02:46 standalone.localdomain python3.9[364335]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:02:46 standalone.localdomain ceph-mon[29756]: pgmap v2646: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2647: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:46 standalone.localdomain sshd[363893]: Failed password for root from 80.94.93.176 port 60056 ssh2
Oct 13 15:02:46 standalone.localdomain python3.9[364425]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:02:47 standalone.localdomain python3.9[364496]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367766.5267358-98-57637929926285/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=928926d2c02810deba82cc2155c15efe3a7bcb96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:02:47 standalone.localdomain sudo[364569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:02:47 standalone.localdomain sudo[364569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:02:47 standalone.localdomain sudo[364569]: pam_unix(sudo:session): session closed for user root
Oct 13 15:02:47 standalone.localdomain sudo[364602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:02:47 standalone.localdomain sudo[364602]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:02:47 standalone.localdomain python3.9[364600]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:02:48 standalone.localdomain sudo[364602]: pam_unix(sudo:session): session closed for user root
Oct 13 15:02:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:02:48 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:02:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:02:48 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:02:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:02:48 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:02:48 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:02:48 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:02:48 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 851396cc-2a33-4473-8d8d-e3052f5815ef (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:02:48 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 851396cc-2a33-4473-8d8d-e3052f5815ef (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:02:48 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 851396cc-2a33-4473-8d8d-e3052f5815ef (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:02:48 standalone.localdomain sudo[364738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:02:48 standalone.localdomain sudo[364738]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:02:48 standalone.localdomain sudo[364738]: pam_unix(sudo:session): session closed for user root
Oct 13 15:02:48 standalone.localdomain python3.9[364733]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:02:48 standalone.localdomain ceph-mon[29756]: pgmap v2647: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:02:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:02:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:02:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:02:48 standalone.localdomain unix_chkpwd[364766]: password check failed for user (root)
Oct 13 15:02:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50262 DF PROTO=TCP SPT=47008 DPT=9882 SEQ=1192186608 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA951F60000000001030307) 
Oct 13 15:02:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2648: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:48 standalone.localdomain python3.9[364824]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367768.1378727-122-201268023728206/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=928926d2c02810deba82cc2155c15efe3a7bcb96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:02:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:02:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:02:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:02:49 standalone.localdomain podman[364908]: 2025-10-13 15:02:49.455770102 +0000 UTC m=+0.079922394 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., 
release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, version=17.1.9, com.redhat.component=openstack-swift-object-container, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:02:49 standalone.localdomain podman[364911]: 2025-10-13 15:02:49.498916023 +0000 UTC m=+0.119229857 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-account, release=1, vcs-type=git, architecture=x86_64, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true)
Oct 13 15:02:49 standalone.localdomain podman[364910]: 2025-10-13 15:02:49.513440741 +0000 UTC m=+0.135524599 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, managed_by=tripleo_ansible, release=1, version=17.1.9, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, name=rhosp17/openstack-swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, distribution-scope=public, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, architecture=x86_64)
Oct 13 15:02:49 standalone.localdomain python3.9[364932]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:02:49 standalone.localdomain podman[364908]: 2025-10-13 15:02:49.656845381 +0000 UTC m=+0.280997653 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, container_name=swift_object_server, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 15:02:49 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:02:49 standalone.localdomain podman[364911]: 2025-10-13 15:02:49.677675164 +0000 UTC m=+0.297989038 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-swift-account-container, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-account, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, container_name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, build-date=2025-07-21T16:11:22, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12)
Oct 13 15:02:49 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:02:49 standalone.localdomain podman[364910]: 2025-10-13 15:02:49.699765424 +0000 UTC m=+0.321849262 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-container, 
com.redhat.component=openstack-swift-container-container, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 15:02:49 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:02:49 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:02:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:02:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:02:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9897 DF PROTO=TCP SPT=59568 DPT=9105 SEQ=1320034980 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA957760000000001030307) 
Oct 13 15:02:50 standalone.localdomain python3.9[365085]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:02:50 standalone.localdomain sshd[363893]: Failed password for root from 80.94.93.176 port 60056 ssh2
Oct 13 15:02:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:02:50 standalone.localdomain ceph-mon[29756]: pgmap v2648: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:02:50 standalone.localdomain python3.9[365156]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367769.710604-146-134935022614950/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=928926d2c02810deba82cc2155c15efe3a7bcb96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:02:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2649: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:51 standalone.localdomain python3.9[365246]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:02:51 standalone.localdomain python3.9[365336]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:02:52 standalone.localdomain python3.9[365407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367771.3910766-170-37646205850866/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=928926d2c02810deba82cc2155c15efe3a7bcb96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:02:52 standalone.localdomain sshd[363893]: Received disconnect from 80.94.93.176 port 60056:11:  [preauth]
Oct 13 15:02:52 standalone.localdomain sshd[363893]: Disconnected from authenticating user root 80.94.93.176 port 60056 [preauth]
Oct 13 15:02:52 standalone.localdomain sshd[363893]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 13 15:02:52 standalone.localdomain sshd[365448]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:02:52 standalone.localdomain ceph-mon[29756]: pgmap v2649: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2650: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:52 standalone.localdomain python3.9[365499]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:02:53 standalone.localdomain unix_chkpwd[365590]: password check failed for user (root)
Oct 13 15:02:53 standalone.localdomain sshd[365448]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 13 15:02:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:02:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:02:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:02:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:02:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:02:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:02:53 standalone.localdomain python3.9[365589]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:02:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50980 DF PROTO=TCP SPT=39314 DPT=9102 SEQ=2656032758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA9650E0000000001030307) 
Oct 13 15:02:53 standalone.localdomain python3.9[365661]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367772.8954258-194-202231441797122/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=928926d2c02810deba82cc2155c15efe3a7bcb96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:02:54 standalone.localdomain python3.9[365751]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:02:54 standalone.localdomain ceph-mon[29756]: pgmap v2650: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2651: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:54 standalone.localdomain python3.9[365841]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:02:55 standalone.localdomain sshd[365448]: Failed password for root from 80.94.93.176 port 10776 ssh2
Oct 13 15:02:55 standalone.localdomain python3.9[365912]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367774.5095212-218-160507758412850/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=928926d2c02810deba82cc2155c15efe3a7bcb96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:02:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2066 DF PROTO=TCP SPT=44974 DPT=9101 SEQ=2794511228 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA96C9D0000000001030307) 
Oct 13 15:02:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:02:55 standalone.localdomain python3.9[366002]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:02:56 standalone.localdomain python3.9[366092]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:02:56 standalone.localdomain ceph-mon[29756]: pgmap v2651: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2652: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:56 standalone.localdomain unix_chkpwd[366164]: password check failed for user (root)
Oct 13 15:02:56 standalone.localdomain python3.9[366163]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367776.0675159-242-41302501256840/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=928926d2c02810deba82cc2155c15efe3a7bcb96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:02:57 standalone.localdomain sshd[363895]: pam_unix(sshd:session): session closed for user root
Oct 13 15:02:57 standalone.localdomain systemd-logind[45629]: Session 288 logged out. Waiting for processes to exit.
Oct 13 15:02:57 standalone.localdomain systemd[1]: session-288.scope: Deactivated successfully.
Oct 13 15:02:57 standalone.localdomain systemd[1]: session-288.scope: Consumed 10.006s CPU time.
Oct 13 15:02:57 standalone.localdomain systemd-logind[45629]: Removed session 288.
Oct 13 15:02:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2068 DF PROTO=TCP SPT=44974 DPT=9101 SEQ=2794511228 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA978B60000000001030307) 
Oct 13 15:02:58 standalone.localdomain ceph-mon[29756]: pgmap v2652: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2653: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:02:58 standalone.localdomain sshd[365448]: Failed password for root from 80.94.93.176 port 10776 ssh2
Oct 13 15:03:00 standalone.localdomain unix_chkpwd[366179]: password check failed for user (root)
Oct 13 15:03:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:03:00 standalone.localdomain ceph-mon[29756]: pgmap v2653: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2654: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:02 standalone.localdomain sshd[366180]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:03:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2069 DF PROTO=TCP SPT=44974 DPT=9101 SEQ=2794511228 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA988770000000001030307) 
Oct 13 15:03:02 standalone.localdomain sshd[365448]: Failed password for root from 80.94.93.176 port 10776 ssh2
Oct 13 15:03:02 standalone.localdomain ceph-mon[29756]: pgmap v2654: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:02 standalone.localdomain sshd[366180]: Accepted publickey for root from 192.168.122.30 port 39382 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:03:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2655: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:02 standalone.localdomain systemd-logind[45629]: New session 289 of user root.
Oct 13 15:03:02 standalone.localdomain systemd[1]: Started Session 289 of User root.
Oct 13 15:03:02 standalone.localdomain sshd[366180]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:03:03 standalone.localdomain python3.9[366273]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:03 standalone.localdomain ceph-mon[29756]: pgmap v2655: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:04 standalone.localdomain sshd[365448]: Received disconnect from 80.94.93.176 port 10776:11:  [preauth]
Oct 13 15:03:04 standalone.localdomain sshd[365448]: Disconnected from authenticating user root 80.94.93.176 port 10776 [preauth]
Oct 13 15:03:04 standalone.localdomain sshd[365448]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 13 15:03:04 standalone.localdomain python3.9[366363]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:04 standalone.localdomain sshd[366377]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:03:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2656: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:04 standalone.localdomain python3.9[366436]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/root/.ansible/tmp/ansible-tmp-1760367783.6622043-34-42699861522389/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=e2b92839f87802a8b24ea6e6780e50ee4d6ed8f3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:05 standalone.localdomain unix_chkpwd[366451]: password check failed for user (root)
Oct 13 15:03:05 standalone.localdomain sshd[366377]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 13 15:03:05 standalone.localdomain python3.9[366527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:03:05 standalone.localdomain python3.9[366598]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760367785.0732648-34-71025328380137/.source.conf _original_basename=ceph.conf follow=False checksum=caf654597c4a27b997493eb3fb702d1f39212af6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:06 standalone.localdomain sshd[366180]: pam_unix(sshd:session): session closed for user root
Oct 13 15:03:06 standalone.localdomain systemd[1]: session-289.scope: Deactivated successfully.
Oct 13 15:03:06 standalone.localdomain systemd[1]: session-289.scope: Consumed 2.082s CPU time.
Oct 13 15:03:06 standalone.localdomain systemd-logind[45629]: Session 289 logged out. Waiting for processes to exit.
Oct 13 15:03:06 standalone.localdomain systemd-logind[45629]: Removed session 289.
Oct 13 15:03:06 standalone.localdomain ceph-mon[29756]: pgmap v2656: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2657: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:07 standalone.localdomain sshd[366377]: Failed password for root from 80.94.93.176 port 13394 ssh2
Oct 13 15:03:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33806 DF PROTO=TCP SPT=57564 DPT=9100 SEQ=4063431259 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA99B240000000001030307) 
Oct 13 15:03:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33807 DF PROTO=TCP SPT=57564 DPT=9100 SEQ=4063431259 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA99F370000000001030307) 
Oct 13 15:03:08 standalone.localdomain ceph-mon[29756]: pgmap v2657: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2658: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:08 standalone.localdomain unix_chkpwd[366613]: password check failed for user (root)
Oct 13 15:03:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33808 DF PROTO=TCP SPT=57564 DPT=9100 SEQ=4063431259 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA9A7360000000001030307) 
Oct 13 15:03:10 standalone.localdomain sshd[366377]: Failed password for root from 80.94.93.176 port 13394 ssh2
Oct 13 15:03:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:03:10 standalone.localdomain ceph-mon[29756]: pgmap v2658: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2659: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:11 standalone.localdomain sshd[366614]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:03:11 standalone.localdomain sshd[366614]: Accepted publickey for root from 192.168.122.30 port 53446 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:03:11 standalone.localdomain systemd-logind[45629]: New session 290 of user root.
Oct 13 15:03:11 standalone.localdomain systemd[1]: Started Session 290 of User root.
Oct 13 15:03:11 standalone.localdomain sshd[366614]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:03:12 standalone.localdomain unix_chkpwd[366708]: password check failed for user (root)
Oct 13 15:03:12 standalone.localdomain python3.9[366707]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:03:12 standalone.localdomain ceph-mon[29756]: pgmap v2659: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2660: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:13 standalone.localdomain python3.9[366802]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:03:13 standalone.localdomain ceph-mon[29756]: pgmap v2660: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:13 standalone.localdomain chronyd[356114]: Selected source 162.159.200.123 (pool.ntp.org)
Oct 13 15:03:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65339 DF PROTO=TCP SPT=32872 DPT=9105 SEQ=457598293 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA9B4F60000000001030307) 
Oct 13 15:03:14 standalone.localdomain python3.9[366892]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:03:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2661: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:14 standalone.localdomain sshd[366377]: Failed password for root from 80.94.93.176 port 13394 ssh2
Oct 13 15:03:14 standalone.localdomain python3.9[366982]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:03:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:03:15 standalone.localdomain python3.9[367072]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 13 15:03:16 standalone.localdomain sshd[366377]: Received disconnect from 80.94.93.176 port 13394:11:  [preauth]
Oct 13 15:03:16 standalone.localdomain sshd[366377]: Disconnected from authenticating user root 80.94.93.176 port 13394 [preauth]
Oct 13 15:03:16 standalone.localdomain sshd[366377]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 13 15:03:16 standalone.localdomain python3.9[367162]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 15:03:16 standalone.localdomain ceph-mon[29756]: pgmap v2661: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2662: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:17 standalone.localdomain python3.9[367214]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 15:03:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:03:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/787236167' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:03:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:03:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/787236167' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:03:18 standalone.localdomain ceph-mon[29756]: pgmap v2662: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/787236167' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:03:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/787236167' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:03:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24272 DF PROTO=TCP SPT=38110 DPT=9882 SEQ=462054931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA9C7360000000001030307) 
Oct 13 15:03:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2663: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:19 standalone.localdomain ceph-mon[29756]: pgmap v2663: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:03:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:03:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:03:19 standalone.localdomain systemd[1]: tmp-crun.rE2ah1.mount: Deactivated successfully.
Oct 13 15:03:19 standalone.localdomain podman[367219]: 2025-10-13 15:03:19.797763012 +0000 UTC m=+0.065113888 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, batch=17.1_20250721.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, summary=Red Hat OpenStack Platform 17.1 swift-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, architecture=x86_64, com.redhat.component=openstack-swift-container-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, managed_by=tripleo_ansible)
Oct 13 15:03:19 standalone.localdomain podman[367218]: 2025-10-13 15:03:19.841840771 +0000 UTC m=+0.110486447 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 15:03:19 standalone.localdomain podman[367217]: 2025-10-13 15:03:19.91287669 +0000 UTC m=+0.181157344 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, version=17.1.9, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, com.redhat.component=openstack-swift-object-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:03:19 standalone.localdomain podman[367219]: 2025-10-13 15:03:19.959412845 +0000 UTC m=+0.226763731 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T15:54:32, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, io.buildah.version=1.33.12, version=17.1.9, container_name=swift_container_server, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:03:19 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:03:20 standalone.localdomain podman[367218]: 2025-10-13 15:03:20.033934483 +0000 UTC m=+0.302580139 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=)
Oct 13 15:03:20 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:03:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65341 DF PROTO=TCP SPT=32872 DPT=9105 SEQ=457598293 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA9CCB60000000001030307) 
Oct 13 15:03:20 standalone.localdomain podman[367217]: 2025-10-13 15:03:20.100417582 +0000 UTC m=+0.368698216 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, tcib_managed=true, batch=17.1_20250721.1, container_name=swift_object_server, vcs-type=git, release=1, build-date=2025-07-21T14:56:28, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12)
Oct 13 15:03:20 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:03:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:03:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2664: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:22 standalone.localdomain python3.9[367392]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 15:03:22 standalone.localdomain ceph-mon[29756]: pgmap v2664: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2665: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:03:23
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'manila_data', '.mgr', 'manila_metadata', 'volumes', 'backups', 'images']
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:03:23 standalone.localdomain python3[367485]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                                          rule:
                                                            proto: udp
                                                            dport: 4789
                                                        - rule_name: 119 neutron geneve networks
                                                          rule:
                                                            proto: udp
                                                            dport: 6081
                                                            state: ["UNTRACKED"]
                                                        - rule_name: 120 neutron geneve networks no conntrack
                                                          rule:
                                                            proto: udp
                                                            dport: 6081
                                                            table: raw
                                                            chain: OUTPUT
                                                            jump: NOTRACK
                                                            action: append
                                                            state: []
                                                        - rule_name: 121 neutron geneve networks no conntrack
                                                          rule:
                                                            proto: udp
                                                            dport: 6081
                                                            table: raw
                                                            chain: PREROUTING
                                                            jump: NOTRACK
                                                            action: append
                                                            state: []
                                                         dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 13 15:03:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15904 DF PROTO=TCP SPT=40130 DPT=9102 SEQ=295199366 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA9DA3E0000000001030307) 
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:03:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:03:23 standalone.localdomain python3.9[367575]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:24 standalone.localdomain ceph-mon[29756]: pgmap v2665: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2666: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:24 standalone.localdomain python3.9[367665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:25 standalone.localdomain python3.9[367711]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9986 DF PROTO=TCP SPT=56940 DPT=9101 SEQ=3916513828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA9E1CE0000000001030307) 
Oct 13 15:03:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:03:25 standalone.localdomain python3.9[367801]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:26 standalone.localdomain python3.9[367847]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.hrufu7qw recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:26 standalone.localdomain ceph-mon[29756]: pgmap v2666: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2667: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:26 standalone.localdomain python3.9[367937]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:27 standalone.localdomain python3.9[367983]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:27 standalone.localdomain ceph-mon[29756]: pgmap v2667: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:27 standalone.localdomain python3.9[368073]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:03:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9988 DF PROTO=TCP SPT=56940 DPT=9101 SEQ=3916513828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA9EDF70000000001030307) 
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2668: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:28 standalone.localdomain python3[368164]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:03:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:03:29 standalone.localdomain python3.9[368254]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:29 standalone.localdomain ceph-mon[29756]: pgmap v2668: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:30 standalone.localdomain python3.9[368327]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367808.8787873-157-103964913987483/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:03:30 standalone.localdomain python3.9[368417]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2669: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:31 standalone.localdomain python3.9[368490]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367810.1949391-172-29838514877419/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:31 standalone.localdomain python3.9[368580]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:32 standalone.localdomain python3.9[368653]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367811.2896788-187-103815557729118/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9989 DF PROTO=TCP SPT=56940 DPT=9101 SEQ=3916513828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CA9FDB60000000001030307) 
Oct 13 15:03:32 standalone.localdomain ceph-mon[29756]: pgmap v2669: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2670: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:32 standalone.localdomain python3.9[368743]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:33 standalone.localdomain python3.9[368816]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367812.2836907-202-171899525135874/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:33 standalone.localdomain python3.9[368906]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:34 standalone.localdomain python3.9[368979]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/root/.ansible/tmp/ansible-tmp-1760367813.3931043-217-177854122090721/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:34 standalone.localdomain ceph-mon[29756]: pgmap v2670: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2671: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:34 standalone.localdomain python3.9[369069]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:35 standalone.localdomain python3.9[369159]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:03:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:03:36 standalone.localdomain python3.9[369252]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                          include "/etc/nftables/edpm-chains.nft"
                                                          include "/etc/nftables/edpm-rules.nft"
                                                          include "/etc/nftables/edpm-jumps.nft"
                                                           path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:36 standalone.localdomain ceph-mon[29756]: pgmap v2671: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2672: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:36 standalone.localdomain python3.9[369342]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:03:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2140 DF PROTO=TCP SPT=46986 DPT=9100 SEQ=3540457868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAA10540000000001030307) 
Oct 13 15:03:37 standalone.localdomain python3.9[369433]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:03:37 standalone.localdomain python3.9[369525]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:03:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2141 DF PROTO=TCP SPT=46986 DPT=9100 SEQ=3540457868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAA14760000000001030307) 
Oct 13 15:03:38 standalone.localdomain python3.9[369618]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:38 standalone.localdomain ceph-mon[29756]: pgmap v2672: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2673: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:39 standalone.localdomain python3.9[369708]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:03:39 standalone.localdomain ceph-mon[29756]: pgmap v2673: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2142 DF PROTO=TCP SPT=46986 DPT=9100 SEQ=3540457868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAA1C760000000001030307) 
Oct 13 15:03:40 standalone.localdomain python3.9[369799]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=standalone.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ctlplane external_ids:ovn-chassis-mac-mappings="datacentre:0e:34:95:81:11:2b" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                                           _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:03:40 standalone.localdomain ovs-vsctl[369800]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=standalone.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ctlplane external_ids:ovn-chassis-mac-mappings=datacentre:0e:34:95:81:11:2b external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 13 15:03:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:03:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2674: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:41 standalone.localdomain python3.9[369890]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                                          ovs-vsctl show | grep -q "Manager"
                                                           _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:03:41 standalone.localdomain python3.9[369983]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:03:42 standalone.localdomain python3.9[370075]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:03:42 standalone.localdomain ceph-mon[29756]: pgmap v2674: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2675: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:42 standalone.localdomain python3.9[370165]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:43 standalone.localdomain python3.9[370211]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:03:43 standalone.localdomain python3.9[370301]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:44 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2968 DF PROTO=TCP SPT=43580 DPT=9105 SEQ=4165944826 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAA2A360000000001030307) 
Oct 13 15:03:44 standalone.localdomain python3.9[370347]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:03:44 standalone.localdomain ceph-mon[29756]: pgmap v2675: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:44 standalone.localdomain python3.9[370437]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2676: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:45 standalone.localdomain python3.9[370527]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:45 standalone.localdomain python3.9[370573]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:03:46 standalone.localdomain python3.9[370663]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:46 standalone.localdomain python3.9[370709]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:46 standalone.localdomain ceph-mon[29756]: pgmap v2676: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2677: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:47 standalone.localdomain python3.9[370799]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:03:47 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:03:47 standalone.localdomain systemd-rc-local-generator[370820]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:03:47 standalone.localdomain systemd-sysv-generator[370824]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:03:47 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:03:48 standalone.localdomain python3.9[370926]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:48 standalone.localdomain python3.9[370972]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:48 standalone.localdomain sudo[370987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:03:48 standalone.localdomain sudo[370987]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:03:48 standalone.localdomain sudo[370987]: pam_unix(sudo:session): session closed for user root
Oct 13 15:03:48 standalone.localdomain sudo[371015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:03:48 standalone.localdomain sudo[371015]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:03:48 standalone.localdomain ceph-mon[29756]: pgmap v2677: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60838 DF PROTO=TCP SPT=39908 DPT=9882 SEQ=3072253794 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAA3C770000000001030307) 
Oct 13 15:03:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2678: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:48 standalone.localdomain python3.9[371092]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:49 standalone.localdomain sudo[371015]: pam_unix(sudo:session): session closed for user root
Oct 13 15:03:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:03:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:03:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:03:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:03:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:03:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:03:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:03:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:03:49 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev b1ec356f-e6c6-4004-93e6-323345211c23 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:03:49 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev b1ec356f-e6c6-4004-93e6-323345211c23 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:03:49 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event b1ec356f-e6c6-4004-93e6-323345211c23 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:03:49 standalone.localdomain sudo[371171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:03:49 standalone.localdomain sudo[371171]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:03:49 standalone.localdomain sudo[371171]: pam_unix(sudo:session): session closed for user root
Oct 13 15:03:49 standalone.localdomain python3.9[371170]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:49 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:03:49 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:03:49 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:03:49 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:03:49 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:03:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:03:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:03:50 standalone.localdomain python3.9[371275]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:03:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:03:50 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:03:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2970 DF PROTO=TCP SPT=43580 DPT=9105 SEQ=4165944826 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAA41F60000000001030307) 
Oct 13 15:03:50 standalone.localdomain systemd-sysv-generator[371319]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:03:50 standalone.localdomain podman[371277]: 2025-10-13 15:03:50.133589781 +0000 UTC m=+0.072858757 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, release=1, description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4)
Oct 13 15:03:50 standalone.localdomain systemd-rc-local-generator[371314]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:03:50 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:03:50 standalone.localdomain podman[371277]: 2025-10-13 15:03:50.294933435 +0000 UTC m=+0.234202421 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, release=1, architecture=x86_64, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, build-date=2025-07-21T15:54:32, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 15:03:50 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:03:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:03:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:03:50 standalone.localdomain systemd[1]: Starting Create netns directory...
Oct 13 15:03:50 standalone.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 15:03:50 standalone.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 15:03:50 standalone.localdomain systemd[1]: Finished Create netns directory.
Oct 13 15:03:50 standalone.localdomain podman[371340]: 2025-10-13 15:03:50.44465852 +0000 UTC m=+0.081048849 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:03:50 standalone.localdomain podman[371341]: 2025-10-13 15:03:50.495468347 +0000 UTC m=+0.127067418 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T16:11:22, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=swift_account_server, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, distribution-scope=public, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 15:03:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:03:50 standalone.localdomain podman[371340]: 2025-10-13 15:03:50.665952632 +0000 UTC m=+0.302342981 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, batch=17.1_20250721.1, name=rhosp17/openstack-swift-object, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, managed_by=tripleo_ansible, container_name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, vcs-type=git, com.redhat.component=openstack-swift-object-container, maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:03:50 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:03:50 standalone.localdomain ceph-mon[29756]: pgmap v2678: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:03:50 standalone.localdomain podman[371341]: 2025-10-13 15:03:50.697917108 +0000 UTC m=+0.329516169 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, container_name=swift_account_server, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public)
Oct 13 15:03:50 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:03:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2679: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:51 standalone.localdomain python3.9[371488]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:03:51 standalone.localdomain ceph-mon[29756]: pgmap v2679: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:51 standalone.localdomain python3.9[371578]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:52 standalone.localdomain python3.9[371649]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=root mode=0700 owner=root setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760367831.3396127-461-172085586356942/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:03:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2680: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:52 standalone.localdomain python3.9[371739]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:03:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:03:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:03:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:03:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:03:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:03:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:03:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22036 DF PROTO=TCP SPT=60594 DPT=9102 SEQ=312835880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAA4F6F0000000001030307) 
Oct 13 15:03:53 standalone.localdomain python3.9[371829]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:03:53 standalone.localdomain ceph-mon[29756]: pgmap v2680: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:54 standalone.localdomain python3.9[371902]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/root/.ansible/tmp/ansible-tmp-1760367833.1993992-486-9975324701583/.source.json _original_basename=.jw00uw1u follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:54 standalone.localdomain python3.9[371992]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:03:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2681: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7080 DF PROTO=TCP SPT=45372 DPT=9101 SEQ=3902397553 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAA56FE0000000001030307) 
Oct 13 15:03:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:03:56 standalone.localdomain ceph-mon[29756]: pgmap v2681: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2682: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:56 standalone.localdomain python3.9[372243]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 13 15:03:57 standalone.localdomain python3.9[372333]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:03:57 standalone.localdomain ceph-mon[29756]: pgmap v2682: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:58 standalone.localdomain python3.9[372423]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 13 15:03:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7082 DF PROTO=TCP SPT=45372 DPT=9101 SEQ=3902397553 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAA62F60000000001030307) 
Oct 13 15:03:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2683: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:03:59 standalone.localdomain ceph-mon[29756]: pgmap v2683: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:04:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2684: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:01 standalone.localdomain ceph-mon[29756]: pgmap v2684: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7083 DF PROTO=TCP SPT=45372 DPT=9101 SEQ=3902397553 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAA72B70000000001030307) 
Oct 13 15:04:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2685: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:03 standalone.localdomain python3[372540]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:04:03 standalone.localdomain python3[372540]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                             {
                                                                  "Id": "96ad696e7914500f1daa441ab9a026a0f524ff8aa3b224853f7d517ebfe3b2e5",
                                                                  "Digest": "sha256:4169e723173dd3fd28a992cae946f4e95e9aa17990af7d1ed85bc86c02cc48e4",
                                                                  "RepoTags": [
                                                                       "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                                  ],
                                                                  "RepoDigests": [
                                                                       "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:4169e723173dd3fd28a992cae946f4e95e9aa17990af7d1ed85bc86c02cc48e4"
                                                                  ],
                                                                  "Parent": "",
                                                                  "Comment": "",
                                                                  "Created": "2025-10-13T06:39:10.09964603Z",
                                                                  "Config": {
                                                                       "User": "root",
                                                                       "Env": [
                                                                            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                            "LANG=en_US.UTF-8",
                                                                            "TZ=UTC",
                                                                            "container=oci"
                                                                       ],
                                                                       "Entrypoint": [
                                                                            "dumb-init",
                                                                            "--single-child",
                                                                            "--"
                                                                       ],
                                                                       "Cmd": [
                                                                            "kolla_start"
                                                                       ],
                                                                       "Labels": {
                                                                            "io.buildah.version": "1.41.3",
                                                                            "maintainer": "OpenStack Kubernetes Operator team",
                                                                            "org.label-schema.build-date": "20251009",
                                                                            "org.label-schema.license": "GPLv2",
                                                                            "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                            "org.label-schema.schema-version": "1.0",
                                                                            "org.label-schema.vendor": "CentOS",
                                                                            "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                            "tcib_managed": "true"
                                                                       },
                                                                       "StopSignal": "SIGTERM"
                                                                  },
                                                                  "Version": "",
                                                                  "Author": "",
                                                                  "Architecture": "amd64",
                                                                  "Os": "linux",
                                                                  "Size": 345595841,
                                                                  "VirtualSize": 345595841,
                                                                  "GraphDriver": {
                                                                       "Name": "overlay",
                                                                       "Data": {
                                                                            "LowerDir": "/var/lib/containers/storage/overlay/e794d9016fa7b11bc2e0ec272b767b863143d54dc9add54763809bf9da301e7e/diff:/var/lib/containers/storage/overlay/00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",
                                                                            "UpperDir": "/var/lib/containers/storage/overlay/3ea6486050b1357ab82c68b9483cc00c71ff83bf406dd5e32ebd009c718a069e/diff",
                                                                            "WorkDir": "/var/lib/containers/storage/overlay/3ea6486050b1357ab82c68b9483cc00c71ff83bf406dd5e32ebd009c718a069e/work"
                                                                       }
                                                                  },
                                                                  "RootFS": {
                                                                       "Type": "layers",
                                                                       "Layers": [
                                                                            "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",
                                                                            "sha256:b783c4d6e5ad469a2da52d1f9f5642b4a99005a13a4f8f9855f9f6ed3dcfeddf",
                                                                            "sha256:68ce5c07940208d836156eaf3891df099e5e08a3442537f87eeca9b4d2e7475a",
                                                                            "sha256:d5f8c13b780b5e07f296c61d8031b5c504ee00571ca0d3d53cd4271c36055afe"
                                                                       ]
                                                                  },
                                                                  "Labels": {
                                                                       "io.buildah.version": "1.41.3",
                                                                       "maintainer": "OpenStack Kubernetes Operator team",
                                                                       "org.label-schema.build-date": "20251009",
                                                                       "org.label-schema.license": "GPLv2",
                                                                       "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                       "org.label-schema.schema-version": "1.0",
                                                                       "org.label-schema.vendor": "CentOS",
                                                                       "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                       "tcib_managed": "true"
                                                                  },
                                                                  "Annotations": {},
                                                                  "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                  "User": "root",
                                                                  "History": [
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.867908726Z",
                                                                            "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.868015697Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251009\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:07.890794359Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.75496777Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                            "comment": "FROM quay.io/centos/centos:stream9",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.754993311Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755013771Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755032692Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755082953Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755102984Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:47.201574917Z",
                                                                            "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:23.544915378Z",
                                                                            "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.119117694Z",
                                                                            "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.552286083Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.962220801Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:28.715322167Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.14263152Z",
                                                                            "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.560856504Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.004904301Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.42712225Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.823321864Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.238615281Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.634039923Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.0188837Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.395414523Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.805928482Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.197074343Z",
                                                                            "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.605614349Z",
                                                                            "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.372128638Z",
                                                                            "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.755904629Z",
                                                                            "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:38.104256162Z",
                                                                            "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:39.755515954Z",
                                                                            "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870322882Z",
                                                                            "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870431084Z",
                                                                            "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870453635Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870467555Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:43.347797515Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:12:52.1142671Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:14:03.269920242Z",
                                                                            "created_by": "/bin/sh -c dnf -y install openvswitch openvswitch-ovn-common python3-netifaces python3-openvswitch tcpdump && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:14:05.38970852Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:38:34.302922051Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:39:10.098704077Z",
                                                                            "created_by": "/bin/sh -c dnf -y install openvswitch-ovn-host && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:39:11.38530058Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       }
                                                                  ],
                                                                  "NamesHistory": [
                                                                       "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                                  ]
                                                             }
                                                        ]
                                                        : quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 13 15:04:03 standalone.localdomain podman[372590]: 2025-10-13 15:04:03.700861701 +0000 UTC m=+0.073565948 container remove c1662789024a7e2de6781c9190b2d41f34b86c6ccbe4ef31ebd65d06f4ed6d80 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, build-date=2025-07-21T13:28:44, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-ovn-controller/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=f1f0bbd48091f4ceb6d7f5422dfd17725d070245, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible)
Oct 13 15:04:03 standalone.localdomain python3[372540]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller
Oct 13 15:04:03 standalone.localdomain podman[372603]: 
Oct 13 15:04:03 standalone.localdomain podman[372603]: 2025-10-13 15:04:03.781352393 +0000 UTC m=+0.067936495 container create fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:04:03 standalone.localdomain podman[372603]: 2025-10-13 15:04:03.740527104 +0000 UTC m=+0.027111276 image pull  quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 13 15:04:03 standalone.localdomain python3[372540]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 13 15:04:03 standalone.localdomain ceph-mon[29756]: pgmap v2685: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:04 standalone.localdomain python3.9[372730]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:04:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2686: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:05 standalone.localdomain python3.9[372822]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:04:05 standalone.localdomain python3.9[372866]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:04:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:04:05 standalone.localdomain ceph-mon[29756]: pgmap v2686: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:06 standalone.localdomain python3.9[372955]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760367845.6505554-574-108674844966993/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:04:06 standalone.localdomain python3.9[372999]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:04:06 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:04:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2687: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:06 standalone.localdomain systemd-sysv-generator[373026]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:04:06 standalone.localdomain systemd-rc-local-generator[373021]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:04:06 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:04:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38984 DF PROTO=TCP SPT=55880 DPT=9100 SEQ=2807550589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAA85850000000001030307) 
Oct 13 15:04:07 standalone.localdomain python3.9[373079]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:04:07 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:04:07 standalone.localdomain ceph-mon[29756]: pgmap v2687: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:07 standalone.localdomain systemd-sysv-generator[373113]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:04:07 standalone.localdomain systemd-rc-local-generator[373110]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:04:08 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:04:08 standalone.localdomain systemd[1]: Starting ovn_controller container...
Oct 13 15:04:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38985 DF PROTO=TCP SPT=55880 DPT=9100 SEQ=2807550589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAA89760000000001030307) 
Oct 13 15:04:08 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:04:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b66b3fa17d0c453a31e7865cc61ef4b6d8eb2c2db0119b43f3d3f9c98a7a8dd/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 15:04:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:04:08 standalone.localdomain podman[373122]: 2025-10-13 15:04:08.660278686 +0000 UTC m=+0.397620928 container init fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: + sudo -E kolla_set_configs
Oct 13 15:04:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:04:08 standalone.localdomain systemd[1]: Started Session c18 of User root.
Oct 13 15:04:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2688: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: INFO:__main__:Validating config file
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: INFO:__main__:Writing out command to execute
Oct 13 15:04:08 standalone.localdomain systemd[1]: session-c18.scope: Deactivated successfully.
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: ++ cat /run_command
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: + ARGS=
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: + sudo kolla_copy_cacerts
Oct 13 15:04:08 standalone.localdomain systemd[1]: Started Session c19 of User root.
Oct 13 15:04:08 standalone.localdomain podman[373122]: 2025-10-13 15:04:08.828183242 +0000 UTC m=+0.565525424 container start fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 13 15:04:08 standalone.localdomain edpm-start-podman-container[373122]: ovn_controller
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: + [[ ! -n '' ]]
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: + . kolla_extend_start
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\'''
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: + umask 0022
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock
Oct 13 15:04:08 standalone.localdomain systemd[1]: session-c19.scope: Deactivated successfully.
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00013|main|INFO|OVS feature set changed, force recompute.
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00019|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00020|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00021|main|INFO|OVS feature set changed, force recompute.
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00023|binding|INFO|Claiming lport cr-lrp-dc5eaba3-5a43-4142-aa7e-8a037cb22fad for this chassis.
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00024|binding|INFO|cr-lrp-dc5eaba3-5a43-4142-aa7e-8a037cb22fad: Claiming fa:16:3e:ab:3f:36 192.168.122.224/24
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00025|binding|INFO|Claiming lport b7ed1856-7267-428d-bfda-0ee53d5ea6b6 for this chassis.
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00026|binding|INFO|b7ed1856-7267-428d-bfda-0ee53d5ea6b6: Claiming unknown
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00027|binding|INFO|Claiming lport 8a49767c-fb09-4185-95da-4261d8043fad for this chassis.
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00028|binding|INFO|8a49767c-fb09-4185-95da-4261d8043fad: Claiming fa:16:3e:15:4e:c0 192.168.0.238
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00029|binding|INFO|Claiming lport 9a2e0de2-0849-4467-97f1-b171963f0a27 for this chassis.
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00030|binding|INFO|9a2e0de2-0849-4467-97f1-b171963f0a27: Claiming unknown
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00031|binding|INFO|Claiming lport da3e5a61-7adb-481b-b7f5-703c3939cde2 for this chassis.
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00032|binding|INFO|da3e5a61-7adb-481b-b7f5-703c3939cde2: Claiming fa:16:3e:6c:20:5b 192.168.0.237
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00033|binding|INFO|Claiming lport 88d4c439-3ced-4b8c-9aea-848d2cfc109e for this chassis.
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00034|binding|INFO|88d4c439-3ced-4b8c-9aea-848d2cfc109e: Claiming unknown
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00035|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00036|binding|INFO|Removing lport 88d4c439-3ced-4b8c-9aea-848d2cfc109e ovn-installed in OVS
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00037|binding|INFO|Removing lport da3e5a61-7adb-481b-b7f5-703c3939cde2 ovn-installed in OVS
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00038|binding|INFO|Removing lport 8a49767c-fb09-4185-95da-4261d8043fad ovn-installed in OVS
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00039|binding|INFO|Removing lport b7ed1856-7267-428d-bfda-0ee53d5ea6b6 ovn-installed in OVS
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00040|binding|INFO|Removing lport 9a2e0de2-0849-4467-97f1-b171963f0a27 ovn-installed in OVS
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00041|binding|INFO|Setting lport cr-lrp-dc5eaba3-5a43-4142-aa7e-8a037cb22fad up in Southbound
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 13 15:04:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:08Z|00042|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:04:08 standalone.localdomain podman[373144]: 2025-10-13 15:04:08.917123794 +0000 UTC m=+0.210035335 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:04:08 standalone.localdomain podman[373144]: 2025-10-13 15:04:08.938945767 +0000 UTC m=+0.231857308 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 13 15:04:08 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:04:09 standalone.localdomain edpm-start-podman-container[373121]: Creating additional drop-in dependency for "ovn_controller" (fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e)
Oct 13 15:04:09 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:04:09 standalone.localdomain systemd-sysv-generator[373227]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:04:09 standalone.localdomain systemd-rc-local-generator[373221]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:04:09 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:04:09 standalone.localdomain systemd[1]: Started ovn_controller container.
Oct 13 15:04:09 standalone.localdomain python3.9[373324]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:04:09 standalone.localdomain ovs-vsctl[373325]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 13 15:04:09 standalone.localdomain ceph-mon[29756]: pgmap v2688: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38986 DF PROTO=TCP SPT=55880 DPT=9100 SEQ=2807550589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAA91760000000001030307) 
Oct 13 15:04:10 standalone.localdomain python3.9[373415]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:04:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:04:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2689: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:11 standalone.localdomain python3.9[373508]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:04:11 standalone.localdomain ovs-vsctl[373509]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 13 15:04:11 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:11Z|00043|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:04:11 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:11Z|00044|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:04:11 standalone.localdomain ceph-mon[29756]: pgmap v2689: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:11 standalone.localdomain sshd[366614]: pam_unix(sshd:session): session closed for user root
Oct 13 15:04:11 standalone.localdomain systemd-logind[45629]: Session 290 logged out. Waiting for processes to exit.
Oct 13 15:04:11 standalone.localdomain systemd[1]: session-290.scope: Deactivated successfully.
Oct 13 15:04:11 standalone.localdomain systemd[1]: session-290.scope: Consumed 37.745s CPU time.
Oct 13 15:04:11 standalone.localdomain systemd-logind[45629]: Removed session 290.
Oct 13 15:04:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2690: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:13 standalone.localdomain ceph-mon[29756]: pgmap v2690: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30103 DF PROTO=TCP SPT=58660 DPT=9105 SEQ=3099141137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAA9F370000000001030307) 
Oct 13 15:04:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2691: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:04:15 standalone.localdomain ceph-mon[29756]: pgmap v2691: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2692: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:16Z|00045|binding|INFO|Setting lport 88d4c439-3ced-4b8c-9aea-848d2cfc109e ovn-installed in OVS
Oct 13 15:04:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:16Z|00046|binding|INFO|Setting lport 88d4c439-3ced-4b8c-9aea-848d2cfc109e up in Southbound
Oct 13 15:04:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:16Z|00047|binding|INFO|Setting lport da3e5a61-7adb-481b-b7f5-703c3939cde2 ovn-installed in OVS
Oct 13 15:04:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:16Z|00048|binding|INFO|Setting lport da3e5a61-7adb-481b-b7f5-703c3939cde2 up in Southbound
Oct 13 15:04:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:16Z|00049|binding|INFO|Setting lport 8a49767c-fb09-4185-95da-4261d8043fad ovn-installed in OVS
Oct 13 15:04:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:16Z|00050|binding|INFO|Setting lport 8a49767c-fb09-4185-95da-4261d8043fad up in Southbound
Oct 13 15:04:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:16Z|00051|binding|INFO|Setting lport 9a2e0de2-0849-4467-97f1-b171963f0a27 ovn-installed in OVS
Oct 13 15:04:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:16Z|00052|binding|INFO|Setting lport 9a2e0de2-0849-4467-97f1-b171963f0a27 up in Southbound
Oct 13 15:04:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:16Z|00053|binding|INFO|Setting lport b7ed1856-7267-428d-bfda-0ee53d5ea6b6 ovn-installed in OVS
Oct 13 15:04:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:16Z|00054|binding|INFO|Setting lport b7ed1856-7267-428d-bfda-0ee53d5ea6b6 up in Southbound
Oct 13 15:04:17 standalone.localdomain sshd[373524]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:04:17 standalone.localdomain sshd[373524]: Accepted publickey for root from 192.168.122.30 port 47492 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:04:17 standalone.localdomain systemd-logind[45629]: New session 291 of user root.
Oct 13 15:04:17 standalone.localdomain systemd[1]: Started Session 291 of User root.
Oct 13 15:04:17 standalone.localdomain sshd[373524]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:04:17 standalone.localdomain ceph-mon[29756]: pgmap v2692: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:18 standalone.localdomain python3.9[373617]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:04:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:04:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/925604102' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:04:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:04:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/925604102' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:04:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39535 DF PROTO=TCP SPT=58056 DPT=9882 SEQ=4103120358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAAB1B60000000001030307) 
Oct 13 15:04:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2693: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/925604102' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:04:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/925604102' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:04:19 standalone.localdomain python3.9[373711]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:19 standalone.localdomain ceph-mon[29756]: pgmap v2693: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:19 standalone.localdomain python3.9[373801]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30105 DF PROTO=TCP SPT=58660 DPT=9105 SEQ=3099141137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAAB6F60000000001030307) 
Oct 13 15:04:20 standalone.localdomain python3.9[373891]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:04:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:04:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:04:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:04:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2694: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:20 standalone.localdomain podman[373968]: 2025-10-13 15:04:20.810773412 +0000 UTC m=+0.067770659 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, name=rhosp17/openstack-swift-account, release=1, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., version=17.1.9, managed_by=tripleo_ansible, 
com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 15:04:20 standalone.localdomain systemd[1]: tmp-crun.s7Ndpm.mount: Deactivated successfully.
Oct 13 15:04:20 standalone.localdomain podman[373965]: 2025-10-13 15:04:20.864305944 +0000 UTC m=+0.127109796 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-swift-object, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, container_name=swift_object_server, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, architecture=x86_64, vcs-type=git)
Oct 13 15:04:20 standalone.localdomain systemd[1]: tmp-crun.TRmuGt.mount: Deactivated successfully.
Oct 13 15:04:20 standalone.localdomain podman[373966]: 2025-10-13 15:04:20.929597897 +0000 UTC m=+0.182925405 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_container_server, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T15:54:32, name=rhosp17/openstack-swift-container, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-container-container, summary=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1)
Oct 13 15:04:20 standalone.localdomain python3.9[374014]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:20 standalone.localdomain podman[373968]: 2025-10-13 15:04:20.987907462 +0000 UTC m=+0.244904719 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, container_name=swift_account_server, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, release=1, tcib_managed=true, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=)
Oct 13 15:04:21 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:04:21 standalone.localdomain podman[373965]: 2025-10-13 15:04:21.078905235 +0000 UTC m=+0.341709087 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=swift_object_server, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, name=rhosp17/openstack-swift-object, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d)
Oct 13 15:04:21 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:04:21 standalone.localdomain podman[373966]: 2025-10-13 15:04:21.144010204 +0000 UTC m=+0.397337702 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.component=openstack-swift-container-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, name=rhosp17/openstack-swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 
swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T15:54:32, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, container_name=swift_container_server, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team)
Oct 13 15:04:21 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:04:21 standalone.localdomain python3.9[374154]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:21 standalone.localdomain ceph-mon[29756]: pgmap v2694: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:22 standalone.localdomain python3.9[374244]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:04:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2695: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:23 standalone.localdomain python3.9[374334]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:04:23
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['images', 'volumes', 'manila_data', 'manila_metadata', 'vms', 'backups', '.mgr']
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:04:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26720 DF PROTO=TCP SPT=50388 DPT=9102 SEQ=880828901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAAC49F0000000001030307) 
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:04:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:04:23 standalone.localdomain ceph-mon[29756]: pgmap v2695: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:23 standalone.localdomain python3.9[374424]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:04:24 standalone.localdomain python3.9[374497]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760367863.3717687-86-18025591244147/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2696: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:25 standalone.localdomain python3.9[374587]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:04:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=774 DF PROTO=TCP SPT=48640 DPT=9101 SEQ=2194192681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAACC2E0000000001030307) 
Oct 13 15:04:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:04:25 standalone.localdomain python3.9[374660]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760367864.8968782-101-232877449520276/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:25 standalone.localdomain ceph-mon[29756]: pgmap v2696: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:26 standalone.localdomain python3.9[374750]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 15:04:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2697: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:27 standalone.localdomain python3.9[374803]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 15:04:27 standalone.localdomain ceph-mon[29756]: pgmap v2697: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=776 DF PROTO=TCP SPT=48640 DPT=9101 SEQ=2194192681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAAD8360000000001030307) 
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2698: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:04:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:04:29 standalone.localdomain ceph-mon[29756]: pgmap v2698: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:04:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2699: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:31 standalone.localdomain ceph-mon[29756]: pgmap v2699: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:32 standalone.localdomain python3.9[374895]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 15:04:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=777 DF PROTO=TCP SPT=48640 DPT=9101 SEQ=2194192681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAAE7F60000000001030307) 
Oct 13 15:04:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2700: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:33 standalone.localdomain python3.9[374988]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:04:33 standalone.localdomain python3.9[375059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760367872.605917-138-259627057343349/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:33 standalone.localdomain ceph-mon[29756]: pgmap v2700: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:34 standalone.localdomain python3.9[375149]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:04:34 standalone.localdomain python3.9[375220]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760367873.5871751-138-146175367371925/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2701: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:35 standalone.localdomain python3.9[375310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:04:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:04:35 standalone.localdomain ceph-mon[29756]: pgmap v2701: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:36 standalone.localdomain python3.9[375381]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760367875.2471948-182-138617507993604/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:36 standalone.localdomain python3.9[375471]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:04:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2702: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:37 standalone.localdomain python3.9[375542]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760367876.2604756-182-69649044392809/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21711 DF PROTO=TCP SPT=37860 DPT=9100 SEQ=1483107999 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAAFAB40000000001030307) 
Oct 13 15:04:37 standalone.localdomain python3.9[375632]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:04:37 standalone.localdomain ceph-mon[29756]: pgmap v2702: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:38 standalone.localdomain python3.9[375724]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21712 DF PROTO=TCP SPT=37860 DPT=9100 SEQ=1483107999 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAAFEB60000000001030307) 
Oct 13 15:04:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2703: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:39 standalone.localdomain python3.9[375814]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:04:39 standalone.localdomain python3.9[375860]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:04:39 standalone.localdomain podman[375875]: 2025-10-13 15:04:39.597778327 +0000 UTC m=+0.090237941 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:04:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:39Z|00055|memory|INFO|20340 kB peak resident set size after 30.8 seconds
Oct 13 15:04:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:39Z|00056|memory|INFO|idl-cells-OVN_Southbound:4275 idl-cells-Open_vSwitch:935 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:101 lflow-cache-entries-cache-matches:200 lflow-cache-size-KB:365 local_datapath_usage-KB:1 ofctrl_desired_flow_usage-KB:202 ofctrl_installed_flow_usage-KB:147 ofctrl_sb_flow_ref_usage-KB:85
Oct 13 15:04:39 standalone.localdomain podman[375875]: 2025-10-13 15:04:39.684780571 +0000 UTC m=+0.177240235 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 13 15:04:39 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:04:39 standalone.localdomain ceph-mon[29756]: pgmap v2703: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:40 standalone.localdomain python3.9[375975]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:04:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21713 DF PROTO=TCP SPT=37860 DPT=9100 SEQ=1483107999 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAB06B60000000001030307) 
Oct 13 15:04:40 standalone.localdomain python3.9[376021]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:04:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2704: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:41 standalone.localdomain python3.9[376111]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:04:41 standalone.localdomain ceph-mon[29756]: pgmap v2704: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:41 standalone.localdomain python3.9[376201]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:04:42 standalone.localdomain python3.9[376247]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:04:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2705: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:42 standalone.localdomain python3.9[376337]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:04:43 standalone.localdomain python3.9[376383]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:04:43 standalone.localdomain ceph-mon[29756]: pgmap v2705: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:43 standalone.localdomain python3.9[376473]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:04:43 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:04:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8195 DF PROTO=TCP SPT=40572 DPT=9105 SEQ=362620258 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAB14760000000001030307) 
Oct 13 15:04:43 standalone.localdomain systemd-rc-local-generator[376500]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:04:43 standalone.localdomain systemd-sysv-generator[376503]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:04:44 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:04:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2706: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:44 standalone.localdomain python3.9[376602]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:04:45 standalone.localdomain python3.9[376648]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:04:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:04:45 standalone.localdomain ceph-mon[29756]: pgmap v2706: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:45 standalone.localdomain python3.9[376738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:04:46 standalone.localdomain python3.9[376784]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:04:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2707: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:46 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:04:46Z|00057|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Oct 13 15:04:47 standalone.localdomain python3.9[376874]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:04:47 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:04:47 standalone.localdomain systemd-sysv-generator[376901]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:04:47 standalone.localdomain systemd-rc-local-generator[376897]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:04:47 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:04:47 standalone.localdomain account-server[114566]: 172.20.0.100 - - [13/Oct/2025:15:04:47 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx6d83d0f2561240a39405a-0068ed150f" "proxy-server 2" 0.0004 "-" 22 -
Oct 13 15:04:47 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx6d83d0f2561240a39405a-0068ed150f)
Oct 13 15:04:47 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx6d83d0f2561240a39405a-0068ed150f)
Oct 13 15:04:47 standalone.localdomain systemd[1]: Starting Create netns directory...
Oct 13 15:04:47 standalone.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 15:04:47 standalone.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 15:04:47 standalone.localdomain systemd[1]: Finished Create netns directory.
Oct 13 15:04:47 standalone.localdomain ceph-mon[29756]: pgmap v2707: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:48 standalone.localdomain python3.9[377006]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59131 DF PROTO=TCP SPT=41840 DPT=9882 SEQ=3080479453 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAB26B60000000001030307) 
Oct 13 15:04:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2708: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:48 standalone.localdomain python3.9[377096]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:04:49 standalone.localdomain python3.9[377167]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=root mode=0700 owner=root setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760367888.3626492-333-75179728179620/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:49 standalone.localdomain sudo[377168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:04:49 standalone.localdomain sudo[377168]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:04:49 standalone.localdomain sudo[377168]: pam_unix(sudo:session): session closed for user root
Oct 13 15:04:49 standalone.localdomain sudo[377197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:04:49 standalone.localdomain sudo[377197]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:04:49 standalone.localdomain ceph-mon[29756]: pgmap v2708: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:49 standalone.localdomain python3.9[377303]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:04:49 standalone.localdomain sudo[377197]: pam_unix(sudo:session): session closed for user root
Oct 13 15:04:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:04:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:04:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:04:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:04:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:04:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:04:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:04:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:04:50 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 8f475cfb-b7e4-4b9c-adca-d45c9221d948 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:04:50 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 8f475cfb-b7e4-4b9c-adca-d45c9221d948 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:04:50 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 8f475cfb-b7e4-4b9c-adca-d45c9221d948 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:04:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8197 DF PROTO=TCP SPT=40572 DPT=9105 SEQ=362620258 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAB2C360000000001030307) 
Oct 13 15:04:50 standalone.localdomain sudo[377321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:04:50 standalone.localdomain sudo[377321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:04:50 standalone.localdomain sudo[377321]: pam_unix(sudo:session): session closed for user root
Oct 13 15:04:50 standalone.localdomain python3.9[377425]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:04:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:04:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2709: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:04:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:04:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:04:50 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:04:51 standalone.localdomain python3.9[377498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/root/.ansible/tmp/ansible-tmp-1760367890.2241595-358-72102406542946/.source.json _original_basename=.38wdqlnn follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:04:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:04:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:04:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:04:51 standalone.localdomain python3.9[377588]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:04:51 standalone.localdomain podman[377591]: 2025-10-13 15:04:51.839412288 +0000 UTC m=+0.095520060 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, com.redhat.component=openstack-swift-account-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, version=17.1.9, name=rhosp17/openstack-swift-account, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server)
Oct 13 15:04:51 standalone.localdomain ceph-mon[29756]: pgmap v2709: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:51 standalone.localdomain podman[377590]: 2025-10-13 15:04:51.890804955 +0000 UTC m=+0.151479674 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, name=rhosp17/openstack-swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team)
Oct 13 15:04:51 standalone.localdomain podman[377589]: 2025-10-13 15:04:51.818478891 +0000 UTC m=+0.084643454 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, tcib_managed=true, container_name=swift_object_server, version=17.1.9, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, batch=17.1_20250721.1, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:04:52 standalone.localdomain podman[377591]: 2025-10-13 15:04:52.011793476 +0000 UTC m=+0.267901178 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, name=rhosp17/openstack-swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, io.openshift.expose-services=, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, container_name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, com.redhat.component=openstack-swift-account-container, build-date=2025-07-21T16:11:22)
Oct 13 15:04:52 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:04:52 standalone.localdomain podman[377589]: 2025-10-13 15:04:52.031781135 +0000 UTC m=+0.297945658 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1)
Oct 13 15:04:52 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:04:52 standalone.localdomain podman[377590]: 2025-10-13 15:04:52.095327446 +0000 UTC m=+0.356002195 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, container_name=swift_container_server, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-container-container, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-swift-container, version=17.1.9)
Oct 13 15:04:52 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:04:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2710: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:04:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:04:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:04:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:04:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:04:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:04:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53902 DF PROTO=TCP SPT=37626 DPT=9102 SEQ=872631330 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAB39CE0000000001030307) 
Oct 13 15:04:53 standalone.localdomain python3.9[377923]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 13 15:04:53 standalone.localdomain ceph-mon[29756]: pgmap v2710: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:54 standalone.localdomain python3.9[378013]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:04:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2711: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:54 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:04:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:04:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:04:55 standalone.localdomain python3.9[378103]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 13 15:04:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22508 DF PROTO=TCP SPT=47590 DPT=9101 SEQ=1097836618 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAB415E0000000001030307) 
Oct 13 15:04:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:04:55 standalone.localdomain ceph-mon[29756]: pgmap v2711: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:04:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2712: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:57 standalone.localdomain ceph-mon[29756]: pgmap v2712: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22510 DF PROTO=TCP SPT=47590 DPT=9101 SEQ=1097836618 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAB4D760000000001030307) 
Oct 13 15:04:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2713: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:04:59 standalone.localdomain ceph-mon[29756]: pgmap v2713: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:00 standalone.localdomain python3[378220]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:05:00 standalone.localdomain python3[378220]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                             {
                                                                  "Id": "f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7",
                                                                  "Digest": "sha256:a8a7c14e842794ab4bf0de5f556e4d6bb05e3991b23677b50171c8f9d8910fa5",
                                                                  "RepoTags": [
                                                                       "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"
                                                                  ],
                                                                  "RepoDigests": [
                                                                       "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a8a7c14e842794ab4bf0de5f556e4d6bb05e3991b23677b50171c8f9d8910fa5"
                                                                  ],
                                                                  "Parent": "",
                                                                  "Comment": "",
                                                                  "Created": "2025-10-13T06:29:28.267485583Z",
                                                                  "Config": {
                                                                       "User": "neutron",
                                                                       "Env": [
                                                                            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                            "LANG=en_US.UTF-8",
                                                                            "TZ=UTC",
                                                                            "container=oci"
                                                                       ],
                                                                       "Entrypoint": [
                                                                            "dumb-init",
                                                                            "--single-child",
                                                                            "--"
                                                                       ],
                                                                       "Cmd": [
                                                                            "kolla_start"
                                                                       ],
                                                                       "Labels": {
                                                                            "io.buildah.version": "1.41.3",
                                                                            "maintainer": "OpenStack Kubernetes Operator team",
                                                                            "org.label-schema.build-date": "20251009",
                                                                            "org.label-schema.license": "GPLv2",
                                                                            "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                            "org.label-schema.schema-version": "1.0",
                                                                            "org.label-schema.vendor": "CentOS",
                                                                            "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                            "tcib_managed": "true"
                                                                       },
                                                                       "StopSignal": "SIGTERM"
                                                                  },
                                                                  "Version": "",
                                                                  "Author": "",
                                                                  "Architecture": "amd64",
                                                                  "Os": "linux",
                                                                  "Size": 783976704,
                                                                  "VirtualSize": 783976704,
                                                                  "GraphDriver": {
                                                                       "Name": "overlay",
                                                                       "Data": {
                                                                            "LowerDir": "/var/lib/containers/storage/overlay/26243e2ef834ed1ddfa94abd8ba67d6e19ac7be9552cf98889448c1547a2c001/diff:/var/lib/containers/storage/overlay/96e9a1eb6145016f992fadcf28b3705343c009294ada58a4e5bd657e99e23074/diff:/var/lib/containers/storage/overlay/00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4/diff:/var/lib/containers/storage/overlay/00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",
                                                                            "UpperDir": "/var/lib/containers/storage/overlay/8a31aafc7f1b29fd16a307719f75e509fb0a8510c4c99fe1f9e98b69d85c353e/diff",
                                                                            "WorkDir": "/var/lib/containers/storage/overlay/8a31aafc7f1b29fd16a307719f75e509fb0a8510c4c99fe1f9e98b69d85c353e/work"
                                                                       }
                                                                  },
                                                                  "RootFS": {
                                                                       "Type": "layers",
                                                                       "Layers": [
                                                                            "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",
                                                                            "sha256:b783c4d6e5ad469a2da52d1f9f5642b4a99005a13a4f8f9855f9f6ed3dcfeddf",
                                                                            "sha256:4dd9b6336279e9bb933efd1972a9f033bb03b41daa8f211234bd85673a3f81ca",
                                                                            "sha256:2176850e44c3ecf34b145f4e4f66c2144233e08dc232d76e9b587fc651884364",
                                                                            "sha256:1e774b824d463a7308c818a32aac5f892267710e9e2ef9b1492f7d432571e1ff",
                                                                            "sha256:6f8deec2c2972111112afeee2a53e3d78f5f3f34ef05fb50205a2ac95209aa02"
                                                                       ]
                                                                  },
                                                                  "Labels": {
                                                                       "io.buildah.version": "1.41.3",
                                                                       "maintainer": "OpenStack Kubernetes Operator team",
                                                                       "org.label-schema.build-date": "20251009",
                                                                       "org.label-schema.license": "GPLv2",
                                                                       "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                       "org.label-schema.schema-version": "1.0",
                                                                       "org.label-schema.vendor": "CentOS",
                                                                       "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                       "tcib_managed": "true"
                                                                  },
                                                                  "Annotations": {},
                                                                  "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                  "User": "neutron",
                                                                  "History": [
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.867908726Z",
                                                                            "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.868015697Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251009\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:07.890794359Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.75496777Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                            "comment": "FROM quay.io/centos/centos:stream9",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.754993311Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755013771Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755032692Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755082953Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755102984Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:47.201574917Z",
                                                                            "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:23.544915378Z",
                                                                            "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.119117694Z",
                                                                            "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.552286083Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.962220801Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:28.715322167Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.14263152Z",
                                                                            "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.560856504Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.004904301Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.42712225Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.823321864Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.238615281Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.634039923Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.0188837Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.395414523Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.805928482Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.197074343Z",
                                                                            "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.605614349Z",
                                                                            "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.372128638Z",
                                                                            "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.755904629Z",
                                                                            "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:38.104256162Z",
                                                                            "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:39.755515954Z",
                                                                            "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870322882Z",
                                                                            "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870431084Z",
                                                                            "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870453635Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870467555Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:43.347797515Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:12:52.112593379Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:13:34.238039873Z",
                                                                            "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:13:37.655425275Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:16:18.853030625Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:16:19.228070681Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage neutron",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:17:47.555190736Z",
                                                                            "created_by": "/bin/sh -c dnf -y install iputils net-tools openstack-neutron openstack-neutron-rpc-server openstack-neutron-ml2 openvswitch python3-networking-baremetal python3-openvswitch python3-unbound && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:17:55.204013754Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/neutron-base/neutron_sudoers /etc/sudoers.d/neutron_sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:18:20.180748405Z",
                                                                            "created_by": "/bin/sh -c chmod 440 /etc/sudoers.d/neutron_sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:18:43.442226259Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:27:25.962751459Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:28:43.356054777Z",
                                                                            "created_by": "/bin/sh -c dnf -y install libseccomp podman && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:28:47.631745728Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:28:51.011185638Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-agent-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:29:28.265082445Z",
                                                                            "created_by": "/bin/sh -c dnf -y install python3-networking-ovn-metadata-agent && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:29:28.265136666Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER neutron",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:29:32.057158606Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       }
                                                                  ],
                                                                  "NamesHistory": [
                                                                       "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"
                                                                  ]
                                                             }
                                                        ]
                                                        : quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 15:05:00 standalone.localdomain podman[378273]: 2025-10-13 15:05:00.669441454 +0000 UTC m=+0.094229691 container remove bf0d924433075345d904694bea7d32d641d081cd5130cd019fd5566fb7b83400 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, 
version=17.1.9, build-date=2025-07-21T16:28:53, vcs-ref=6abf7c351fd73f1a4e60437aa721e00f9a9d02d3, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-metadata-agent-ovn/images/17.1.9-1, release=1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Oct 13 15:05:00 standalone.localdomain python3[378220]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent
Oct 13 15:05:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:05:00 standalone.localdomain podman[378286]: 
Oct 13 15:05:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2714: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:00 standalone.localdomain podman[378286]: 2025-10-13 15:05:00.755941893 +0000 UTC m=+0.070950005 container create 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 15:05:00 standalone.localdomain podman[378286]: 2025-10-13 15:05:00.717123211 +0000 UTC m=+0.032131363 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 15:05:00 standalone.localdomain python3[378220]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 15:05:01 standalone.localdomain python3.9[378414]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:05:01 standalone.localdomain ceph-mon[29756]: pgmap v2714: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:02 standalone.localdomain python3.9[378506]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:02 standalone.localdomain python3.9[378550]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:05:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22511 DF PROTO=TCP SPT=47590 DPT=9101 SEQ=1097836618 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAB5D360000000001030307) 
Oct 13 15:05:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2715: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.836955) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367902836999, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 2110, "num_deletes": 251, "total_data_size": 2020296, "memory_usage": 2068832, "flush_reason": "Manual Compaction"}
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367902853541, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 1950754, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62487, "largest_seqno": 64596, "table_properties": {"data_size": 1942545, "index_size": 4975, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16956, "raw_average_key_size": 19, "raw_value_size": 1925854, "raw_average_value_size": 2249, "num_data_blocks": 226, "num_entries": 856, "num_filter_entries": 856, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760367705, "oldest_key_time": 1760367705, "file_creation_time": 1760367902, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 16657 microseconds, and 4689 cpu microseconds.
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.853595) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 1950754 bytes OK
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.853632) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.862089) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.862135) EVENT_LOG_v1 {"time_micros": 1760367902862125, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.862160) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 2011299, prev total WAL file size 2011299, number of live WAL files 2.
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.862922) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(1905KB)], [143(4646KB)]
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367902862979, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 6709047, "oldest_snapshot_seqno": -1}
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 5773 keys, 5695684 bytes, temperature: kUnknown
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367902899393, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 5695684, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5661143, "index_size": 19019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 150420, "raw_average_key_size": 26, "raw_value_size": 5560110, "raw_average_value_size": 963, "num_data_blocks": 754, "num_entries": 5773, "num_filter_entries": 5773, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760367902, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.899628) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 5695684 bytes
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.906583) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.8 rd, 156.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 4.5 +0.0 blob) out(5.4 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 6291, records dropped: 518 output_compression: NoCompression
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.906627) EVENT_LOG_v1 {"time_micros": 1760367902906609, "job": 88, "event": "compaction_finished", "compaction_time_micros": 36507, "compaction_time_cpu_micros": 10984, "output_level": 6, "num_output_files": 1, "total_output_size": 5695684, "num_input_records": 6291, "num_output_records": 5773, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367902906950, "job": 88, "event": "table_file_deletion", "file_number": 145}
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760367902907477, "job": 88, "event": "table_file_deletion", "file_number": 143}
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.862800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.907638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.907644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.907646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.907648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:05:02 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:05:02.907651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:05:03 standalone.localdomain python3.9[378639]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760367902.5802233-446-186885491361260/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:03 standalone.localdomain python3.9[378683]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:05:03 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:05:03 standalone.localdomain systemd-rc-local-generator[378707]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:05:03 standalone.localdomain systemd-sysv-generator[378710]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:05:03 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:05:03 standalone.localdomain ceph-mon[29756]: pgmap v2715: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:04 standalone.localdomain python3.9[378762]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:05:04 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:05:04 standalone.localdomain systemd-rc-local-generator[378790]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:05:04 standalone.localdomain systemd-sysv-generator[378793]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:05:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2716: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:04 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:05:05 standalone.localdomain systemd[1]: Starting ovn_metadata_agent container...
Oct 13 15:05:05 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:05:05 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4dc5d4c31fd9c672bbc867a19c9a12aecefff60c4daaa5b67d56ee67e854c3/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 13 15:05:05 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4dc5d4c31fd9c672bbc867a19c9a12aecefff60c4daaa5b67d56ee67e854c3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:05:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:05:05 standalone.localdomain podman[378803]: 2025-10-13 15:05:05.136947438 +0000 UTC m=+0.117146327 container init 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: + sudo -E kolla_set_configs
Oct 13 15:05:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:05:05 standalone.localdomain podman[378803]: 2025-10-13 15:05:05.170505182 +0000 UTC m=+0.150704161 container start 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 13 15:05:05 standalone.localdomain edpm-start-podman-container[378803]: ovn_metadata_agent
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Validating config file
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Copying service configuration files
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Writing out command to execute
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/lock
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dnsmasq_wrapper
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_haproxy_wrapper
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/03f66d1294ee0eb4465c67b7f02f44b76b4fb981cae67c336f96704556f02da9
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/interface
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/leases
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/host
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/addn_hosts
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/opts
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/pid
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/interface
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/leases
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/host
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/addn_hosts
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/opts
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/pid
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/interface
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/leases
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/pid
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/lock/neutron-iptables-qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/lock/neutron-iptables-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/lock/neutron-iptables-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: ++ cat /run_command
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: + CMD=neutron-ovn-metadata-agent
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: + ARGS=
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: + sudo kolla_copy_cacerts
Oct 13 15:05:05 standalone.localdomain edpm-start-podman-container[378802]: Creating additional drop-in dependency for "ovn_metadata_agent" (0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063)
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: Running command: 'neutron-ovn-metadata-agent'
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: + [[ ! -n '' ]]
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: + . kolla_extend_start
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: + umask 0022
Oct 13 15:05:05 standalone.localdomain ovn_metadata_agent[378816]: + exec neutron-ovn-metadata-agent
Oct 13 15:05:05 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:05:05 standalone.localdomain podman[378825]: 2025-10-13 15:05:05.291803312 +0000 UTC m=+0.117960671 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:05:05 standalone.localdomain podman[378825]: 2025-10-13 15:05:05.325908422 +0000 UTC m=+0.152065812 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:05:05 standalone.localdomain systemd-rc-local-generator[378889]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:05:05 standalone.localdomain systemd-sysv-generator[378898]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:05:05 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:05:05 standalone.localdomain systemd[1]: tmp-crun.veH5OP.mount: Deactivated successfully.
Oct 13 15:05:05 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:05:05 standalone.localdomain systemd[1]: Started ovn_metadata_agent container.
Oct 13 15:05:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:05:05 standalone.localdomain ceph-mon[29756]: pgmap v2716: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:06 standalone.localdomain sshd[373524]: pam_unix(sshd:session): session closed for user root
Oct 13 15:05:06 standalone.localdomain systemd[1]: session-291.scope: Deactivated successfully.
Oct 13 15:05:06 standalone.localdomain systemd[1]: session-291.scope: Consumed 30.943s CPU time.
Oct 13 15:05:06 standalone.localdomain systemd-logind[45629]: Session 291 logged out. Waiting for processes to exit.
Oct 13 15:05:06 standalone.localdomain systemd-logind[45629]: Removed session 291.
Oct 13 15:05:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2717: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.885 378821 INFO neutron.common.config [-] Logging enabled!
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.886 378821 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.886 378821 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.886 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.886 378821 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.886 378821 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.887 378821 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.887 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.887 378821 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.887 378821 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.887 378821 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.887 378821 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.887 378821 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.887 378821 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.887 378821 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.888 378821 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.888 378821 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.888 378821 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.888 378821 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.888 378821 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.888 378821 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.888 378821 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.888 378821 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.888 378821 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.889 378821 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.889 378821 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.889 378821 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.889 378821 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.889 378821 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.889 378821 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.889 378821 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.889 378821 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.889 378821 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.889 378821 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.890 378821 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.890 378821 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.890 378821 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.890 378821 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.890 378821 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = standalone.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.890 378821 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.890 378821 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.890 378821 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.891 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.891 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.891 378821 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.891 378821 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.891 378821 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.891 378821 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.891 378821 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.891 378821 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.891 378821 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.892 378821 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.892 378821 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.892 378821 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.892 378821 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.892 378821 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.892 378821 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.892 378821 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.892 378821 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.892 378821 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.892 378821 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.893 378821 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.893 378821 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.893 378821 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.893 378821 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.893 378821 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.893 378821 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.893 378821 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.893 378821 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.894 378821 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.894 378821 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.894 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.894 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.894 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.894 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.894 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.894 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.894 378821 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.895 378821 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.895 378821 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.895 378821 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.895 378821 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.895 378821 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.895 378821 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.895 378821 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.895 378821 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.895 378821 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.895 378821 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.896 378821 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.896 378821 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.896 378821 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.896 378821 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.896 378821 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.896 378821 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.896 378821 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.896 378821 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.896 378821 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.896 378821 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.897 378821 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.897 378821 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.897 378821 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.897 378821 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.897 378821 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.897 378821 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.897 378821 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.897 378821 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.897 378821 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.898 378821 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.898 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.898 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.898 378821 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.898 378821 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.898 378821 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.898 378821 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.898 378821 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.898 378821 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.899 378821 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.899 378821 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.899 378821 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.899 378821 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.899 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.899 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.899 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.899 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.899 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.899 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.900 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.900 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.900 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.900 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.900 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.900 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.900 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.900 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.900 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.901 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.901 378821 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.901 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.901 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.901 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.901 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.901 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.901 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.901 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.902 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.902 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.902 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.902 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.902 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.902 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.902 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.902 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.902 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.903 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.903 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.903 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.903 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.903 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.903 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.903 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.903 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.903 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.904 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.904 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.904 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.904 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.904 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.904 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.904 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.904 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.904 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.905 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.905 378821 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.905 378821 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.905 378821 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.905 378821 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.905 378821 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.905 378821 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.905 378821 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.905 378821 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.906 378821 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.906 378821 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.906 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.906 378821 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.906 378821 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.906 378821 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.906 378821 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.906 378821 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.906 378821 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.907 378821 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.907 378821 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.907 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.907 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.907 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.907 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.907 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.907 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.907 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.908 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.908 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.908 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.908 378821 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.908 378821 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.908 378821 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.908 378821 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.908 378821 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.908 378821 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.908 378821 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.909 378821 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.909 378821 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.909 378821 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.909 378821 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.909 378821 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.909 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.909 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.909 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.909 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.909 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.910 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.910 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.910 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.910 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.910 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.910 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.910 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.910 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.910 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.910 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.911 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.911 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.911 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.911 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.911 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.911 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.911 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.911 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.911 378821 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.911 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.912 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.912 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.912 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.912 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.912 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.912 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.912 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.912 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.912 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.913 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.913 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.913 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.913 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.913 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.913 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.913 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.913 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.913 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.913 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.914 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.914 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.914 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.914 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.914 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.914 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.914 378821 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.914 378821 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.914 378821 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.915 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.915 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.915 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.915 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.915 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.915 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.915 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.915 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.915 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.915 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.916 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.916 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.916 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.916 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.916 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.916 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.916 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.916 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.917 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.917 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.917 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.917 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.917 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.917 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.917 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.917 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.918 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.918 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.918 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.918 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.918 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.918 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.918 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.918 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.918 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.918 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.919 378821 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.919 378821 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.927 378821 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.927 378821 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.927 378821 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.928 378821 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.928 378821 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.944 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 90b9e3f7-f5c9-4d48-9eca-66995f72d494 (UUID: 90b9e3f7-f5c9-4d48-9eca-66995f72d494) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.978 378821 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.979 378821 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.979 378821 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.979 378821 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.981 378821 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.984 378821 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected
Oct 13 15:05:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.997 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: PortBindingCreateWithChassis(events=('create',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:4e:c0 192.168.0.238'], port_security=['fa:16:3e:15:4e:c0 192.168.0.238'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.238/24', 'neutron:device_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'neutron:device_owner': 'compute:nova', 'neutron:host_id': 'standalone.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c455abd-28d4-47e7-a254-e50de0526def', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'abececb5-f6f2-4dbd-993f-ddc54effe614', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a0c27a6-9f97-46f6-a22d-9a9321701921, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=1, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=8a49767c-fb09-4185-95da-4261d8043fad) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:06.999 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: PortBindingCreateWithChassis(events=('create',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.199.3/24', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-777e62f3-0875-437c-ae6a-3924fea34ee8', 'neutron:device_owner': 'network:dhcp', 'neutron:host_id': 'standalone.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-777e62f3-0875-437c-ae6a-3924fea34ee8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c979cdad54804317be260cfeb61b08c7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca991d1d-61d5-4546-85b9-0ab092f453c3, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=1, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=b7ed1856-7267-428d-bfda-0ee53d5ea6b6) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.001 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: PortBindingCreateWithChassis(events=('create',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:20:5b 192.168.0.237'], port_security=['fa:16:3e:6c:20:5b 192.168.0.237'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.237/24', 'neutron:device_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'neutron:device_owner': 'compute:nova', 'neutron:host_id': 'standalone.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c455abd-28d4-47e7-a254-e50de0526def', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'abececb5-f6f2-4dbd-993f-ddc54effe614', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a0c27a6-9f97-46f6-a22d-9a9321701921, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=da3e5a61-7adb-481b-b7f5-703c3939cde2) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.003 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: PortBindingCreateWithChassis(events=('create',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.171/24', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-a12f1166-9c4d-4d97-bc78-657c05e7af68', 'neutron:device_owner': 'network:dhcp', 'neutron:host_id': 'standalone.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a12f1166-9c4d-4d97-bc78-657c05e7af68', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbc7de62-083b-4bda-900f-89d15ca00155, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=88d4c439-3ced-4b8c-9aea-848d2cfc109e) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.005 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: PortBindingCreateWithChassis(events=('create',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.3/24', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-0c455abd-28d4-47e7-a254-e50de0526def', 'neutron:device_owner': 'network:dhcp', 'neutron:host_id': 'standalone.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c455abd-28d4-47e7-a254-e50de0526def', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a0c27a6-9f97-46f6-a22d-9a9321701921, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=9a2e0de2-0849-4467-97f1-b171963f0a27) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.006 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '90b9e3f7-f5c9-4d48-9eca-66995f72d494'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], external_ids={'neutron:ovn-metadata-id': '1de3f89c-87f2-55a7-a39e-ba419b7e34a6', 'neutron:ovn-metadata-sb-cfg': '2'}, name=90b9e3f7-f5c9-4d48-9eca-66995f72d494, nb_cfg_timestamp=1760367856896, nb_cfg=4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.007 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 8a49767c-fb09-4185-95da-4261d8043fad in datapath 0c455abd-28d4-47e7-a254-e50de0526def bound to our chassis on insert
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.008 378821 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f7216c6daf0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.009 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.009 378821 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.010 378821 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.010 378821 INFO oslo_service.service [-] Starting 1 workers
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.014 378821 DEBUG oslo_service.service [-] Started child 378920 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.017 378920 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-173497'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.017 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 84bcd2dc-1ea4-4b7f-a1ad-ec9a7d798be1 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.018 378821 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.019 378821 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpuqohnaha/privsep.sock']
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.032 378920 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.032 378920 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.033 378920 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.035 378920 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.036 378920 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.046 378920 INFO eventlet.wsgi.server [-] (378920) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 13 15:05:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4633 DF PROTO=TCP SPT=40214 DPT=9100 SEQ=3710850581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAB6FE40000000001030307) 
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.710 378821 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.711 378821 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpuqohnaha/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.568 378925 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.575 378925 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.580 378925 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.581 378925 INFO oslo.privsep.daemon [-] privsep daemon running as pid 378925
Oct 13 15:05:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:07.714 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a1ca78-b25d-401e-a1d5-51b552c29745]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:07 standalone.localdomain ceph-mon[29756]: pgmap v2717: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:08.189 378925 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:05:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:08.189 378925 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:05:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:08.189 378925 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:05:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4634 DF PROTO=TCP SPT=40214 DPT=9100 SEQ=3710850581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAB73F60000000001030307) 
Oct 13 15:05:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:08.677 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[4c31e8a5-a476-4c1c-acc5-8bdfc9646a1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:08.679 378821 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpx0mgpjte/privsep.sock']
Oct 13 15:05:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2718: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:09.270 378821 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 13 15:05:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:09.271 378821 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpx0mgpjte/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 13 15:05:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:09.173 378936 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 13 15:05:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:09.178 378936 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 13 15:05:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:09.182 378936 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 13 15:05:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:09.182 378936 INFO oslo.privsep.daemon [-] privsep daemon running as pid 378936
Oct 13 15:05:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:09.274 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[8c961285-53a6-4cf7-8dd6-4b594bf06ddb]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:09.708 378936 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:05:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:09.708 378936 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:05:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:09.709 378936 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:05:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:05:09 standalone.localdomain podman[378941]: 2025-10-13 15:05:09.799823048 +0000 UTC m=+0.072697657 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:05:09 standalone.localdomain podman[378941]: 2025-10-13 15:05:09.864800012 +0000 UTC m=+0.137674631 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:05:09 standalone.localdomain ceph-mon[29756]: pgmap v2718: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:09 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:05:10 standalone.localdomain kernel: net_ratelimit: 8 callbacks suppressed
Oct 13 15:05:10 standalone.localdomain kernel: IPv4: martian source 169.254.169.254 from 169.254.169.254, on dev tap9a2e0de2-08
Oct 13 15:05:10 standalone.localdomain kernel: ll header: 00000000: ff ff ff ff ff ff fa 16 3e e5 ca 6c 08 06
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.218 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[7d6d93a4-8cad-4b50-85d6-65bf9e935aec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.221 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[4304c98c-864b-4750-8dbb-0ed498bd2a18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.239 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[39eaffc9-bd74-4253-abb4-96fb474b5980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.253 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ba6b07-cc6c-4c91-becf-80caea1bb949]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0c455abd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e5:ca:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 53, 'tx_packets': 63, 'rx_bytes': 3830, 'tx_bytes': 3814, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 53, 'tx_packets': 63, 'rx_bytes': 3830, 'tx_bytes': 3814, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], 
['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483664], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442179, 'reachable_time': 31105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 7, 'inoctets': 520, 'indelivers': 2, 'outforwdatagrams': 0, 'outpkts': 17, 'outoctets': 1164, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 17, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 520, 'outmcastoctets': 1164, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 2, 'inerrors': 0, 'outmsgs': 17, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378972, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.267 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[9553bfef-761c-41ad-a83a-599db5d041bf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0c455abd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442185, 'tstamp': 442185}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378973, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap0c455abd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442186, 'tstamp': 442186}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378973, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 10, 'prefixlen': 64, 'flags': 200, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::a9fe:a9fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442188, 'tstamp': 442188}], ['IFA_FLAGS', 200]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378973, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 10, 'prefixlen': 64, 'flags': 128, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:ca6c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442179, 'tstamp': 442179}], 
['IFA_FLAGS', 128]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378973, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.305 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[b38000e8-ba37-4a04-ad71-b3688ce21190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.307 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c455abd-20, bridge=br-ctlplane, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.309 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c455abd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.310 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.310 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0c455abd-20, col_values=(('external_ids', {'iface-id': '52f81088-6dd1-4b23-992e-fdefd91123e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.310 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.311 378821 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0c455abd-28d4-47e7-a254-e50de0526def.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0c455abd-28d4-47e7-a254-e50de0526def.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.313 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[0048e853-aaf1-4d43-bf94-af8749ae70f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.314 378821 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: global
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     log         /dev/log local0 debug
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     log-tag     haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     user        root
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     group       root
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     maxconn     1024
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     pidfile     /var/lib/neutron/external/pids/0c455abd-28d4-47e7-a254-e50de0526def.pid.haproxy
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     daemon
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: defaults
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     log global
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     mode http
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     option httplog
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     option dontlognull
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     option http-server-close
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     option forwardfor
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     retries                 3
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     timeout http-request    30s
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     timeout connect         30s
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     timeout client          32s
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     timeout server          32s
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     timeout http-keep-alive 30s
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: listen listener
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     bind 169.254.169.254:80
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     server metadata /var/lib/neutron/metadata_proxy
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:     http-request add-header X-OVN-Network-ID 0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.314 378821 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'env', 'PROCESS_TAG=haproxy-0c455abd-28d4-47e7-a254-e50de0526def', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0c455abd-28d4-47e7-a254-e50de0526def.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 13 15:05:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4635 DF PROTO=TCP SPT=40214 DPT=9100 SEQ=3710850581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAB7BF60000000001030307) 
Oct 13 15:05:10 standalone.localdomain podman[379003]: 
Oct 13 15:05:10 standalone.localdomain podman[379003]: 2025-10-13 15:05:10.685851553 +0000 UTC m=+0.064998516 container create 4a92922603a22fb5abd70dde94970b285cf969a1fa2d4e59c603d0d8a79c109d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:05:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:05:10 standalone.localdomain systemd[1]: Started libpod-conmon-4a92922603a22fb5abd70dde94970b285cf969a1fa2d4e59c603d0d8a79c109d.scope.
Oct 13 15:05:10 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:05:10 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/461aa997cb27c140eaa5a112d1c6dafece82fe45950c8843271a52b350b33ee2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:05:10 standalone.localdomain podman[379003]: 2025-10-13 15:05:10.650143215 +0000 UTC m=+0.029290158 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 15:05:10 standalone.localdomain podman[379003]: 2025-10-13 15:05:10.753478497 +0000 UTC m=+0.132625460 container init 4a92922603a22fb5abd70dde94970b285cf969a1fa2d4e59c603d0d8a79c109d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 15:05:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2719: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:10 standalone.localdomain podman[379003]: 2025-10-13 15:05:10.761579279 +0000 UTC m=+0.140726202 container start 4a92922603a22fb5abd70dde94970b285cf969a1fa2d4e59c603d0d8a79c109d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 13 15:05:10 standalone.localdomain neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def[379018]: [NOTICE]   (379022) : New worker (379024) forked
Oct 13 15:05:10 standalone.localdomain neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def[379018]: [NOTICE]   (379022) : Loading success.
Oct 13 15:05:10 standalone.localdomain systemd[1]: tmp-crun.Z2GRUy.mount: Deactivated successfully.
Oct 13 15:05:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:10.808 378821 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp6tbrr4m9/privsep.sock']
Oct 13 15:05:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:11.397 378821 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 13 15:05:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:11.397 378821 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp6tbrr4m9/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 13 15:05:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:11.285 379037 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 13 15:05:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:11.289 379037 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 13 15:05:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:11.291 379037 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 13 15:05:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:11.291 379037 INFO oslo.privsep.daemon [-] privsep daemon running as pid 379037
Oct 13 15:05:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:11.401 379037 DEBUG oslo.privsep.daemon [-] privsep: reply[95d7014c-49ef-4107-86c0-01f8c32e4f70]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:11 standalone.localdomain sshd[379042]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:05:11 standalone.localdomain ceph-mon[29756]: pgmap v2719: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:11 standalone.localdomain sshd[379042]: Accepted publickey for root from 192.168.122.30 port 37450 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:05:11 standalone.localdomain systemd-logind[45629]: New session 292 of user root.
Oct 13 15:05:11 standalone.localdomain systemd[1]: Started Session 292 of User root.
Oct 13 15:05:11 standalone.localdomain sshd[379042]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:05:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:11.876 379037 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:05:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:11.876 379037 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:05:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:11.876 379037 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.381 379037 DEBUG oslo.privsep.daemon [-] privsep: reply[d2bb63da-5e2d-4544-9b25-3ff9e724f8c7]: (4, ['ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68', 'qdhcp-0c455abd-28d4-47e7-a254-e50de0526def', 'qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8']) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.384 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=90b9e3f7-f5c9-4d48-9eca-66995f72d494, column=external_ids, values=({'neutron:ovn-metadata-id': '1de3f89c-87f2-55a7-a39e-ba419b7e34a6'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.384 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.385 378821 INFO neutron.agent.ovn.metadata.agent [-] Port b7ed1856-7267-428d-bfda-0ee53d5ea6b6 in datapath 777e62f3-0875-437c-ae6a-3924fea34ee8 bound to our chassis on insert
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.385 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 315d6142-e28c-4df9-ac03-3105f00b4ca8 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.386 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 777e62f3-0875-437c-ae6a-3924fea34ee8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.386 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b9e3f7-f5c9-4d48-9eca-66995f72d494, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.386 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[b986b6fa-075f-4faf-8851-2450ab9c5ae0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.387 378821 INFO neutron.agent.ovn.metadata.agent [-] Port da3e5a61-7adb-481b-b7f5-703c3939cde2 in datapath 0c455abd-28d4-47e7-a254-e50de0526def bound to our chassis on insert
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.395 378821 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.395 378821 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.395 378821 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.395 378821 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.396 378821 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.396 378821 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.396 378821 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.396 378821 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.396 378821 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.396 378821 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.396 378821 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.396 378821 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.397 378821 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.397 378821 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.397 378821 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.397 378821 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.397 378821 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.397 378821 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.397 378821 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.397 378821 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.398 378821 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.398 378821 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.398 378821 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.398 378821 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.398 378821 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.398 378821 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.398 378821 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.398 378821 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.399 378821 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.399 378821 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.399 378821 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.399 378821 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.399 378821 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.399 378821 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.399 378821 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.399 378821 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.399 378821 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.400 378821 DEBUG oslo_service.service [-] host                           = standalone.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.400 378821 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.400 378821 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.400 378821 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.400 378821 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.400 378821 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.400 378821 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.400 378821 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.400 378821 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.401 378821 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.401 378821 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.401 378821 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.401 378821 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.401 378821 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.401 378821 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.401 378821 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.401 378821 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.401 378821 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.402 378821 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.402 378821 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.402 378821 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.402 378821 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.402 378821 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.402 378821 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.402 378821 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.402 378821 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.402 378821 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.402 378821 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.403 378821 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.403 378821 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.403 378821 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.403 378821 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.403 378821 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.403 378821 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.403 378821 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.403 378821 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.404 378821 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.404 378821 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.404 378821 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.404 378821 DEBUG oslo_service.service [-] nova_metadata_protocol         = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.404 378821 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.404 378821 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.404 378821 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.405 378821 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.405 378821 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.405 378821 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.405 378821 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.405 378821 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.405 378821 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.405 378821 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.405 378821 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.406 378821 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.406 378821 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.406 378821 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.406 378821 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.406 378821 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.406 378821 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.406 378821 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.406 378821 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.406 378821 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.407 378821 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.407 378821 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.407 378821 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.407 378821 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.407 378821 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.407 378821 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.407 378821 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.407 378821 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.407 378821 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.408 378821 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.408 378821 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.408 378821 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.408 378821 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.408 378821 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.408 378821 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.408 378821 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.408 378821 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.409 378821 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.409 378821 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.409 378821 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.409 378821 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.409 378821 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.409 378821 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.409 378821 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.409 378821 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.409 378821 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.410 378821 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.410 378821 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.410 378821 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.410 378821 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.410 378821 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.410 378821 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.410 378821 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.410 378821 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.410 378821 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.411 378821 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.411 378821 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.411 378821 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.411 378821 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.411 378821 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.411 378821 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.411 378821 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.411 378821 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.411 378821 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.412 378821 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.412 378821 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.412 378821 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.412 378821 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.412 378821 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.412 378821 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.412 378821 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.412 378821 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.412 378821 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.412 378821 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.413 378821 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.413 378821 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.413 378821 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.413 378821 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.413 378821 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.413 378821 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.413 378821 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.413 378821 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.413 378821 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.413 378821 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.414 378821 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.414 378821 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.414 378821 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.414 378821 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.414 378821 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.414 378821 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.414 378821 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.414 378821 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.414 378821 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.414 378821 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.415 378821 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.415 378821 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.415 378821 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.415 378821 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.415 378821 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.415 378821 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.415 378821 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.415 378821 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.415 378821 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.415 378821 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.416 378821 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.416 378821 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.416 378821 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.416 378821 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.416 378821 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.416 378821 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.416 378821 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.416 378821 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.416 378821 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.417 378821 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.417 378821 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.417 378821 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.417 378821 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.417 378821 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.417 378821 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.417 378821 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.417 378821 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.417 378821 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.417 378821 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.418 378821 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.418 378821 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.418 378821 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.418 378821 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.418 378821 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.418 378821 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.418 378821 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.418 378821 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.418 378821 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.418 378821 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.419 378821 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.419 378821 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.419 378821 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.419 378821 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.419 378821 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.419 378821 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.419 378821 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.419 378821 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.419 378821 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.419 378821 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.420 378821 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.420 378821 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.420 378821 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.420 378821 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.420 378821 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.420 378821 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.420 378821 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.420 378821 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.420 378821 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.420 378821 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.421 378821 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.421 378821 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.421 378821 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.421 378821 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.421 378821 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.421 378821 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.421 378821 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.421 378821 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.421 378821 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.422 378821 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.422 378821 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.422 378821 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.422 378821 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.422 378821 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.422 378821 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.422 378821 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.422 378821 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.422 378821 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.423 378821 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.423 378821 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.423 378821 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.423 378821 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.423 378821 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.423 378821 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.423 378821 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.423 378821 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.423 378821 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.423 378821 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.424 378821 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.424 378821 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.424 378821 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.424 378821 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.424 378821 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.424 378821 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.424 378821 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.424 378821 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.424 378821 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.425 378821 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.425 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.425 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.425 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.425 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.425 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.425 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.426 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.426 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.426 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.426 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.426 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.426 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.426 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.426 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.426 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.427 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.427 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.427 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.427 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.427 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.427 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.427 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.427 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.427 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.427 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.428 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.428 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.428 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.428 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.428 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.428 378821 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.428 378821 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.428 378821 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.429 378821 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.429 378821 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.429 378821 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.430 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 84bcd2dc-1ea4-4b7f-a1ad-ec9a7d798be1 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.430 378821 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.438 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb1a867-a458-47b0-aa6c-b68ea3671198]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:12 standalone.localdomain kernel: IPv4: martian source 169.254.169.254 from 169.254.169.254, on dev tap9a2e0de2-08
Oct 13 15:05:12 standalone.localdomain kernel: ll header: 00000000: ff ff ff ff ff ff fa 16 3e e5 ca 6c 08 06
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.460 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[67e09dfc-c24d-4603-9f5f-b48e8a749290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.464 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d7b553-e2aa-4b5c-9117-bec97355f8b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.481 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[12abd5d5-bed6-45ca-8486-484a60120c1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.494 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[a1da54af-0bf8-4925-9dd5-219882213242]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0c455abd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e5:ca:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 53, 'tx_packets': 68, 'rx_bytes': 3830, 'tx_bytes': 4188, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 53, 'tx_packets': 68, 'rx_bytes': 3830, 'tx_bytes': 4188, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], 
['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483664], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442179, 'reachable_time': 31105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 7, 'inoctets': 520, 'indelivers': 2, 'outforwdatagrams': 0, 'outpkts': 20, 'outoctets': 1412, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 20, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 520, 'outmcastoctets': 1412, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 2, 'inerrors': 0, 'outmsgs': 20, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379141, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.509 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[dc9b67a1-b749-435b-bfa0-9b77a154f5e8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0c455abd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442185, 'tstamp': 442185}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379142, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap0c455abd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442186, 'tstamp': 442186}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379142, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.510 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c455abd-20, bridge=br-ctlplane, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.513 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c455abd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.513 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.514 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0c455abd-20, col_values=(('external_ids', {'iface-id': '52f81088-6dd1-4b23-992e-fdefd91123e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.514 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.515 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 88d4c439-3ced-4b8c-9aea-848d2cfc109e in datapath a12f1166-9c4d-4d97-bc78-657c05e7af68 bound to our chassis on insert
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.516 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 7124b88e-1f88-4251-aad9-4d43c627d111 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.516 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a12f1166-9c4d-4d97-bc78-657c05e7af68, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.516 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[61f61159-fc94-4abc-848d-ba6e05173ee0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.517 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 9a2e0de2-0849-4467-97f1-b171963f0a27 in datapath 0c455abd-28d4-47e7-a254-e50de0526def bound to our chassis on insert
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.518 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 84bcd2dc-1ea4-4b7f-a1ad-ec9a7d798be1 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.518 378821 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.531 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[75147bda-6dcb-401a-afc3-2117e8594bdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:12 standalone.localdomain kernel: IPv4: martian source 169.254.169.254 from 169.254.169.254, on dev tap9a2e0de2-08
Oct 13 15:05:12 standalone.localdomain kernel: ll header: 00000000: ff ff ff ff ff ff fa 16 3e e5 ca 6c 08 06
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.557 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[42927b84-905c-41d3-a072-55437636d538]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.560 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[82016d7e-a25f-4bbe-b645-03b82cf8e9ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.580 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[7288eadf-427d-4c88-b3a2-4d366d141bbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.592 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[1826e558-d766-45f1-8f3a-3d88b2119466]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0c455abd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e5:ca:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 53, 'tx_packets': 70, 'rx_bytes': 3830, 'tx_bytes': 4272, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 53, 'tx_packets': 70, 'rx_bytes': 3830, 'tx_bytes': 4272, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], 
['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483664], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442179, 'reachable_time': 31105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 7, 'inoctets': 520, 'indelivers': 2, 'outforwdatagrams': 0, 'outpkts': 20, 'outoctets': 1412, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 20, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 520, 'outmcastoctets': 1412, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 2, 'inerrors': 0, 'outmsgs': 20, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379148, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.601 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[79932fef-78a2-4c0c-99cf-56f068dd5853]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0c455abd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442185, 'tstamp': 442185}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379149, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap0c455abd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442186, 'tstamp': 442186}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379149, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.602 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c455abd-20, bridge=br-ctlplane, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.606 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c455abd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.606 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.606 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0c455abd-20, col_values=(('external_ids', {'iface-id': '52f81088-6dd1-4b23-992e-fdefd91123e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:05:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:05:12.607 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:05:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2720: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:12 standalone.localdomain python3.9[379140]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:05:13 standalone.localdomain ceph-mon[29756]: pgmap v2720: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48691 DF PROTO=TCP SPT=55028 DPT=9105 SEQ=1029060256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAB89B60000000001030307) 
Oct 13 15:05:14 standalone.localdomain python3.9[379243]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:05:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2721: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:14 standalone.localdomain python3.9[379346]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:05:14 standalone.localdomain systemd[1]: tmp-crun.GmRiLN.mount: Deactivated successfully.
Oct 13 15:05:14 standalone.localdomain systemd[1]: libpod-807bff0b0315d134dc602c777f0ee0244bf144919a784d6d898b4029faa66432.scope: Deactivated successfully.
Oct 13 15:05:14 standalone.localdomain systemd[1]: libpod-807bff0b0315d134dc602c777f0ee0244bf144919a784d6d898b4029faa66432.scope: Consumed 1.824s CPU time.
Oct 13 15:05:14 standalone.localdomain podman[379347]: 2025-10-13 15:05:14.845092721 +0000 UTC m=+0.072066658 container died 807bff0b0315d134dc602c777f0ee0244bf144919a784d6d898b4029faa66432 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, build-date=2025-07-21T14:56:59, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, architecture=x86_64, release=2, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 15:05:14 standalone.localdomain podman[379347]: 2025-10-13 15:05:14.888396958 +0000 UTC m=+0.115370915 container cleanup 807bff0b0315d134dc602c777f0ee0244bf144919a784d6d898b4029faa66432 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, io.openshift.tags=rhosp osp openstack osp-17.1, release=2, build-date=2025-07-21T14:56:59, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, batch=17.1_20250721.1)
Oct 13 15:05:14 standalone.localdomain podman[379362]: 2025-10-13 15:05:14.90885332 +0000 UTC m=+0.063382198 container remove 807bff0b0315d134dc602c777f0ee0244bf144919a784d6d898b4029faa66432 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, batch=17.1_20250721.1, tcib_managed=true, io.openshift.expose-services=, build-date=2025-07-21T14:56:59, maintainer=OpenStack TripleO Team, vcs-ref=809f31d3cd93a9e04341110fb85686656c754dc0, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-libvirt/images/17.1.9-2, architecture=x86_64, release=2, version=17.1.9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.33.12)
Oct 13 15:05:14 standalone.localdomain systemd[1]: libpod-conmon-807bff0b0315d134dc602c777f0ee0244bf144919a784d6d898b4029faa66432.scope: Deactivated successfully.
Oct 13 15:05:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:05:15 standalone.localdomain systemd[1]: tmp-crun.k7KRtZ.mount: Deactivated successfully.
Oct 13 15:05:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-022ad1fccb5a07d341e4b8a6e9305339a3bb07c497484ef58989394e0aab9453-merged.mount: Deactivated successfully.
Oct 13 15:05:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-807bff0b0315d134dc602c777f0ee0244bf144919a784d6d898b4029faa66432-userdata-shm.mount: Deactivated successfully.
Oct 13 15:05:15 standalone.localdomain ceph-mon[29756]: pgmap v2721: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:15 standalone.localdomain python3.9[379466]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:05:15 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:05:16 standalone.localdomain systemd-rc-local-generator[379493]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:05:16 standalone.localdomain systemd-sysv-generator[379497]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:05:16 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:05:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2722: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:16 standalone.localdomain python3.9[379592]: ansible-ansible.builtin.service_facts Invoked
Oct 13 15:05:17 standalone.localdomain network[379609]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 15:05:17 standalone.localdomain network[379610]: 'network-scripts' will be removed from distribution in near future.
Oct 13 15:05:17 standalone.localdomain network[379611]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 15:05:17 standalone.localdomain ceph-mon[29756]: pgmap v2722: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:05:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3510967160' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:05:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:05:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3510967160' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:05:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6577 DF PROTO=TCP SPT=44488 DPT=9882 SEQ=1843719995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAB9BF60000000001030307) 
Oct 13 15:05:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2723: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3510967160' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:05:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3510967160' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:05:19 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:05:19 standalone.localdomain ceph-mon[29756]: pgmap v2723: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48693 DF PROTO=TCP SPT=55028 DPT=9105 SEQ=1029060256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CABA1760000000001030307) 
Oct 13 15:05:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:05:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2724: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:21 standalone.localdomain ceph-mon[29756]: pgmap v2724: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:22 standalone.localdomain python3.9[379818]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:05:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:05:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:05:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:05:22 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:05:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2725: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:22 standalone.localdomain systemd-sysv-generator[379899]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:05:22 standalone.localdomain systemd-rc-local-generator[379896]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:05:22 standalone.localdomain podman[379821]: 2025-10-13 15:05:22.762075531 +0000 UTC m=+0.076032953 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, container_name=swift_container_server, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 15:05:22 standalone.localdomain podman[379820]: 2025-10-13 15:05:22.833846529 +0000 UTC m=+0.146557974 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, container_name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public)
Oct 13 15:05:22 standalone.localdomain podman[379822]: 2025-10-13 15:05:22.799546732 +0000 UTC m=+0.110760853 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-account, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_account_server, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-swift-account-container, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:05:22 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:05:22 standalone.localdomain podman[379821]: 2025-10-13 15:05:22.972957884 +0000 UTC m=+0.286915396 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, vcs-type=git, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:05:22 standalone.localdomain podman[379822]: 2025-10-13 15:05:22.997901376 +0000 UTC m=+0.309115467 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, version=17.1.9, container_name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct 13 15:05:23 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:05:23 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:05:23 standalone.localdomain podman[379820]: 2025-10-13 15:05:23.041807215 +0000 UTC m=+0.354518630 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-swift-object-container, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, container_name=swift_object_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, version=17.1.9, build-date=2025-07-21T14:56:28, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1)
Oct 13 15:05:23 standalone.localdomain systemd[1]: Stopped target tripleo_nova_libvirt.target.
Oct 13 15:05:23 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:05:23
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', 'manila_metadata', '.mgr', 'images', 'vms', 'manila_data', 'backups']
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:05:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11914 DF PROTO=TCP SPT=52932 DPT=9102 SEQ=4199387708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CABAEFE0000000001030307) 
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:05:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:05:23 standalone.localdomain python3.9[380031]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:05:23 standalone.localdomain ceph-mon[29756]: pgmap v2725: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:24 standalone.localdomain python3.9[380122]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:05:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2726: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:25 standalone.localdomain python3.9[380213]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:05:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3435 DF PROTO=TCP SPT=46694 DPT=9101 SEQ=365467290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CABB68F0000000001030307) 
Oct 13 15:05:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:05:25 standalone.localdomain ceph-mon[29756]: pgmap v2726: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:26 standalone.localdomain python3.9[380304]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:05:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2727: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:26 standalone.localdomain python3.9[380395]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:05:27 standalone.localdomain python3.9[380486]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:05:27 standalone.localdomain ceph-mon[29756]: pgmap v2727: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:28 standalone.localdomain python3.9[380577]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3437 DF PROTO=TCP SPT=46694 DPT=9101 SEQ=365467290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CABC2B70000000001030307) 
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2728: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:28 standalone.localdomain python3.9[380667]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:05:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:05:29 standalone.localdomain python3.9[380757]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:29 standalone.localdomain ceph-mon[29756]: pgmap v2728: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:30 standalone.localdomain python3.9[380847]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:30 standalone.localdomain python3.9[380937]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:05:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2729: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:31 standalone.localdomain python3.9[381027]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:31 standalone.localdomain python3.9[381117]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:31 standalone.localdomain ceph-mon[29756]: pgmap v2729: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:32 standalone.localdomain python3.9[381207]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3438 DF PROTO=TCP SPT=46694 DPT=9101 SEQ=365467290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CABD2770000000001030307) 
Oct 13 15:05:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2730: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:32 standalone.localdomain python3.9[381297]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:33 standalone.localdomain python3.9[381387]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:33 standalone.localdomain ceph-mon[29756]: pgmap v2730: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:33 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 15:05:33 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 15:05:34 standalone.localdomain python3.9[381477]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:34 standalone.localdomain python3.9[381567]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2731: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:35 standalone.localdomain python3.9[381657]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:05:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:05:35 standalone.localdomain python3.9[381747]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:05:35 standalone.localdomain ceph-mon[29756]: pgmap v2731: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:35 standalone.localdomain systemd[1]: tmp-crun.hRjmvH.mount: Deactivated successfully.
Oct 13 15:05:35 standalone.localdomain podman[381748]: 2025-10-13 15:05:35.82467204 +0000 UTC m=+0.089046606 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true)
Oct 13 15:05:35 standalone.localdomain podman[381748]: 2025-10-13 15:05:35.857839284 +0000 UTC m=+0.122213860 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:05:35 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:05:36 standalone.localdomain python3.9[381855]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                            systemctl disable --now certmonger.service
                                                            test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                          fi
                                                           _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:05:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2732: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:37 standalone.localdomain python3.9[381947]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 13 15:05:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8182 DF PROTO=TCP SPT=52432 DPT=9100 SEQ=768936086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CABE5140000000001030307) 
Oct 13 15:05:37 standalone.localdomain ceph-mon[29756]: pgmap v2732: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:37 standalone.localdomain python3.9[382037]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:05:37 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:05:38 standalone.localdomain systemd-sysv-generator[382066]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:05:38 standalone.localdomain systemd-rc-local-generator[382057]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:05:38 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:05:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8183 DF PROTO=TCP SPT=52432 DPT=9100 SEQ=768936086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CABE9360000000001030307) 
Oct 13 15:05:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2733: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:38 standalone.localdomain python3.9[382162]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:05:39 standalone.localdomain python3.9[382253]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:05:39 standalone.localdomain ceph-mon[29756]: pgmap v2733: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:40 standalone.localdomain python3.9[382344]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:05:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:05:40 standalone.localdomain podman[382346]: 2025-10-13 15:05:40.184630982 +0000 UTC m=+0.052977306 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 15:05:40 standalone.localdomain podman[382346]: 2025-10-13 15:05:40.216237021 +0000 UTC m=+0.084583335 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 15:05:40 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:05:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8184 DF PROTO=TCP SPT=52432 DPT=9100 SEQ=768936086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CABF1370000000001030307) 
Oct 13 15:05:40 standalone.localdomain python3.9[382459]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:05:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:05:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2734: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:41 standalone.localdomain python3.9[382550]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:05:41 standalone.localdomain python3.9[382641]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:05:41 standalone.localdomain ceph-mon[29756]: pgmap v2734: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:42 standalone.localdomain python3.9[382732]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:05:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2735: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:43 standalone.localdomain python3.9[382823]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 13 15:05:43 standalone.localdomain ceph-mon[29756]: pgmap v2735: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:43 standalone.localdomain python3.9[382914]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 13 15:05:43 standalone.localdomain groupadd[382915]: group added to /etc/group: name=libvirt, GID=42473
Oct 13 15:05:43 standalone.localdomain groupadd[382915]: group added to /etc/gshadow: name=libvirt
Oct 13 15:05:44 standalone.localdomain groupadd[382915]: new group: name=libvirt, GID=42473
Oct 13 15:05:44 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46475 DF PROTO=TCP SPT=56832 DPT=9105 SEQ=944520140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CABFEF60000000001030307) 
Oct 13 15:05:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2736: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:44 standalone.localdomain python3.9[383010]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on standalone.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 13 15:05:44 standalone.localdomain useradd[383012]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/2
Oct 13 15:05:44 standalone.localdomain systemd-journald[48591]: Field hash table of /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal has a fill level at 79.0 (263 of 333 items), suggesting rotation.
Oct 13 15:05:44 standalone.localdomain systemd-journald[48591]: /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 13 15:05:44 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 15:05:44 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 15:05:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:05:45 standalone.localdomain ceph-mon[29756]: pgmap v2736: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:45 standalone.localdomain python3.9[383109]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 15:05:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2737: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:46 standalone.localdomain python3.9[383161]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 15:05:47 standalone.localdomain ceph-mon[29756]: pgmap v2737: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:47 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 15:05:47 standalone.localdomain object-server[383164]: Object update sweep starting on /srv/node/d1 (pid: 24)
Oct 13 15:05:47 standalone.localdomain object-server[383164]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 24)
Oct 13 15:05:47 standalone.localdomain object-server[383164]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 15:05:47 standalone.localdomain object-server[114601]: Object update sweep completed: 0.07s
Oct 13 15:05:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6122 DF PROTO=TCP SPT=37256 DPT=9882 SEQ=2678556860 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAC11360000000001030307) 
Oct 13 15:05:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2738: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:49 standalone.localdomain ceph-mon[29756]: pgmap v2738: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46477 DF PROTO=TCP SPT=56832 DPT=9105 SEQ=944520140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAC16B60000000001030307) 
Oct 13 15:05:50 standalone.localdomain sudo[383165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:05:50 standalone.localdomain sudo[383165]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:05:50 standalone.localdomain sudo[383165]: pam_unix(sudo:session): session closed for user root
Oct 13 15:05:50 standalone.localdomain sudo[383180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:05:50 standalone.localdomain sudo[383180]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:05:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2739: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:05:50 standalone.localdomain sudo[383180]: pam_unix(sudo:session): session closed for user root
Oct 13 15:05:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:05:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:05:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:05:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:05:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:05:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:05:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:05:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:05:50 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev c2d28dd7-fb70-4f97-a501-2b7f3d5d5efe (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:05:50 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev c2d28dd7-fb70-4f97-a501-2b7f3d5d5efe (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:05:50 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event c2d28dd7-fb70-4f97-a501-2b7f3d5d5efe (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:05:50 standalone.localdomain sudo[383239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:05:50 standalone.localdomain sudo[383239]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:05:50 standalone.localdomain sudo[383239]: pam_unix(sudo:session): session closed for user root
Oct 13 15:05:51 standalone.localdomain ceph-mon[29756]: pgmap v2739: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:05:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:05:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:05:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:05:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2740: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:05:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:05:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:05:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:05:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:05:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:05:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4039 DF PROTO=TCP SPT=36864 DPT=9102 SEQ=1710523807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAC242F0000000001030307) 
Oct 13 15:05:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:05:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:05:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:05:53 standalone.localdomain systemd[1]: tmp-crun.Vucio7.mount: Deactivated successfully.
Oct 13 15:05:53 standalone.localdomain podman[383309]: 2025-10-13 15:05:53.823722738 +0000 UTC m=+0.090477697 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, container_name=swift_object_server, name=rhosp17/openstack-swift-object, version=17.1.9, build-date=2025-07-21T14:56:28, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, release=1, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct 13 15:05:53 standalone.localdomain ceph-mon[29756]: pgmap v2740: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:53 standalone.localdomain systemd[1]: tmp-crun.g01iQN.mount: Deactivated successfully.
Oct 13 15:05:53 standalone.localdomain podman[383311]: 2025-10-13 15:05:53.894773985 +0000 UTC m=+0.153754775 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-swift-account, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, distribution-scope=public, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack 
Platform 17.1 swift-account, version=17.1.9, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 15:05:53 standalone.localdomain podman[383310]: 2025-10-13 15:05:53.927830395 +0000 UTC m=+0.190199676 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, vcs-type=git, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:05:54 standalone.localdomain podman[383309]: 2025-10-13 15:05:54.027928135 +0000 UTC m=+0.294683084 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, build-date=2025-07-21T14:56:28, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_object_server, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.expose-services=, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, 
name=rhosp17/openstack-swift-object, release=1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Oct 13 15:05:54 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:05:54 standalone.localdomain podman[383310]: 2025-10-13 15:05:54.108781909 +0000 UTC m=+0.371151160 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
swift-container, release=1, com.redhat.component=openstack-swift-container-container, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9)
Oct 13 15:05:54 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:05:54 standalone.localdomain podman[383311]: 2025-10-13 15:05:54.143985312 +0000 UTC m=+0.402966102 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, distribution-scope=public, 
com.redhat.component=openstack-swift-account-container, managed_by=tripleo_ansible, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, architecture=x86_64, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 15:05:54 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:05:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2741: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:54 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:05:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:05:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:05:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=594 DF PROTO=TCP SPT=38852 DPT=9101 SEQ=2424171532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAC2BBE0000000001030307) 
Oct 13 15:05:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:05:55 standalone.localdomain ceph-mon[29756]: pgmap v2741: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:05:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2742: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:57 standalone.localdomain ceph-mon[29756]: pgmap v2742: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=596 DF PROTO=TCP SPT=38852 DPT=9101 SEQ=2424171532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAC37B70000000001030307) 
Oct 13 15:05:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2743: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:05:59 standalone.localdomain ceph-mon[29756]: pgmap v2743: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2744: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:06:01 standalone.localdomain ceph-mon[29756]: pgmap v2744: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=597 DF PROTO=TCP SPT=38852 DPT=9101 SEQ=2424171532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAC47760000000001030307) 
Oct 13 15:06:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2745: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:03 standalone.localdomain ceph-mon[29756]: pgmap v2745: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2746: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:06:05 standalone.localdomain ceph-mon[29756]: pgmap v2746: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:06:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 15:06:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 5400.0 total, 600.0 interval
                                                        Cumulative writes: 14K writes, 65K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s
                                                        Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.05 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1516 writes, 7122 keys, 1516 commit groups, 1.0 writes per commit group, ingest: 5.72 MB, 0.01 MB/s
                                                        Interval WAL: 1516 writes, 1516 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     78.5      0.53              0.14        44    0.012       0      0       0.0       0.0
                                                          L6      1/0    5.43 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.8    202.5    171.6      1.17              0.55        43    0.027    208K    23K       0.0       0.0
                                                         Sum      1/0    5.43 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   5.8    139.4    142.6      1.70              0.70        87    0.020    208K    23K       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.0    178.8    178.2      0.20              0.08        12    0.017     36K   3067       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    202.5    171.6      1.17              0.55        43    0.027    208K    23K       0.0       0.0
                                                        High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     78.9      0.53              0.14        43    0.012       0      0       0.0       0.0
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 5400.0 total, 600.0 interval
                                                        Flush(GB): cumulative 0.041, interval 0.005
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.24 GB write, 0.04 MB/s write, 0.23 GB read, 0.04 MB/s read, 1.7 seconds
                                                        Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.2 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x557b7fb5b350#2 capacity: 308.00 MB usage: 32.62 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000359 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(2940,31.24 MB,10.1417%) FilterBlock(88,580.86 KB,0.184171%) IndexBlock(88,837.58 KB,0.265567%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
Oct 13 15:06:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2747: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:06 standalone.localdomain podman[383417]: 2025-10-13 15:06:06.818808055 +0000 UTC m=+0.084407189 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:06:06 standalone.localdomain podman[383417]: 2025-10-13 15:06:06.827816889 +0000 UTC m=+0.093416063 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 15:06:06 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:06:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:06:06.921 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:06:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:06:06.921 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:06:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:06:06.922 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:06:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34194 DF PROTO=TCP SPT=60986 DPT=9100 SEQ=3074535488 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAC5A440000000001030307) 
Oct 13 15:06:07 standalone.localdomain ceph-mon[29756]: pgmap v2747: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34195 DF PROTO=TCP SPT=60986 DPT=9100 SEQ=3074535488 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAC5E370000000001030307) 
Oct 13 15:06:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2748: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:09 standalone.localdomain ceph-mon[29756]: pgmap v2748: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34196 DF PROTO=TCP SPT=60986 DPT=9100 SEQ=3074535488 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAC66360000000001030307) 
Oct 13 15:06:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:06:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2749: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:10 standalone.localdomain systemd[1]: tmp-crun.gqAnPE.mount: Deactivated successfully.
Oct 13 15:06:10 standalone.localdomain podman[383435]: 2025-10-13 15:06:10.805008102 +0000 UTC m=+0.069713338 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:06:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:06:10 standalone.localdomain podman[383435]: 2025-10-13 15:06:10.862876232 +0000 UTC m=+0.127581468 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:06:10 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:06:11 standalone.localdomain ceph-mon[29756]: pgmap v2749: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2750: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:13 standalone.localdomain kernel: SELinux:  Converting 2967 SID table entries...
Oct 13 15:06:13 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 15:06:13 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 15:06:13 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 15:06:13 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 15:06:13 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 15:06:13 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 15:06:13 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 15:06:13 standalone.localdomain ceph-mon[29756]: pgmap v2750: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50130 DF PROTO=TCP SPT=56622 DPT=9105 SEQ=380439975 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAC73F70000000001030307) 
Oct 13 15:06:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2751: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:06:15 standalone.localdomain ceph-mon[29756]: pgmap v2751: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2752: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:17 standalone.localdomain ceph-mon[29756]: pgmap v2752: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:06:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/829503821' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:06:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:06:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/829503821' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:06:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6587 DF PROTO=TCP SPT=52044 DPT=9882 SEQ=3380620196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAC86760000000001030307) 
Oct 13 15:06:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2753: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/829503821' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:06:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/829503821' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:06:19 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=26 res=1
Oct 13 15:06:20 standalone.localdomain ceph-mon[29756]: pgmap v2753: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50132 DF PROTO=TCP SPT=56622 DPT=9105 SEQ=380439975 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAC8BB60000000001030307) 
Oct 13 15:06:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2754: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:06:22 standalone.localdomain ceph-mon[29756]: pgmap v2754: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2755: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:06:23
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', '.mgr', 'images', 'volumes', 'manila_metadata', 'manila_data', 'backups']
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:06:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33294 DF PROTO=TCP SPT=51562 DPT=9102 SEQ=2093126002 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAC995F0000000001030307) 
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:06:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:06:24 standalone.localdomain ceph-mon[29756]: pgmap v2755: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:06:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:06:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:06:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2756: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:24 standalone.localdomain podman[384710]: 2025-10-13 15:06:24.947761357 +0000 UTC m=+0.211409288 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, container_name=swift_object_server, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-object, 
com.redhat.component=openstack-swift-object-container, tcib_managed=true, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, build-date=2025-07-21T14:56:28, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Oct 13 15:06:25 standalone.localdomain podman[384711]: 2025-10-13 15:06:25.066608867 +0000 UTC m=+0.328857348 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, vendor=Red Hat, Inc., release=1, container_name=swift_container_server, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, version=17.1.9, com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', 
'/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-container, config_id=tripleo_step4)
Oct 13 15:06:25 standalone.localdomain kernel: SELinux:  Converting 2970 SID table entries...
Oct 13 15:06:25 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 15:06:25 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 15:06:25 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 15:06:25 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 15:06:25 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 15:06:25 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 15:06:25 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 15:06:25 standalone.localdomain podman[384710]: 2025-10-13 15:06:25.119728597 +0000 UTC m=+0.383376508 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, 
batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:06:25 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:06:25 standalone.localdomain podman[384712]: 2025-10-13 15:06:25.071881372 +0000 UTC m=+0.329506357 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-type=git, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, release=1, description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, io.openshift.expose-services=, version=17.1.9, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, config_id=tripleo_step4)
Oct 13 15:06:25 standalone.localdomain podman[384712]: 2025-10-13 15:06:25.251775294 +0000 UTC m=+0.509400289 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, tcib_managed=true, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1, com.redhat.component=openstack-swift-account-container, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server)
Oct 13 15:06:25 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:06:25 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=27 res=1
Oct 13 15:06:25 standalone.localdomain podman[384711]: 2025-10-13 15:06:25.314505806 +0000 UTC m=+0.576754287 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, name=rhosp17/openstack-swift-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-swift-container-container, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public)
Oct 13 15:06:25 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:06:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56020 DF PROTO=TCP SPT=33266 DPT=9101 SEQ=385037216 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CACA0EE0000000001030307) 
Oct 13 15:06:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:06:26 standalone.localdomain ceph-mon[29756]: pgmap v2756: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2757: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:28 standalone.localdomain ceph-mon[29756]: pgmap v2757: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56022 DF PROTO=TCP SPT=33266 DPT=9101 SEQ=385037216 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CACACF60000000001030307) 
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2758: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:06:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:06:30 standalone.localdomain ceph-mon[29756]: pgmap v2758: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2759: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:06:32 standalone.localdomain ceph-mon[29756]: pgmap v2759: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56023 DF PROTO=TCP SPT=33266 DPT=9101 SEQ=385037216 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CACBCB70000000001030307) 
Oct 13 15:06:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2760: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:34 standalone.localdomain ceph-mon[29756]: pgmap v2760: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:34 standalone.localdomain kernel: SELinux:  Converting 2970 SID table entries...
Oct 13 15:06:34 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 15:06:34 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 15:06:34 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 15:06:34 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 15:06:34 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 15:06:34 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 15:06:34 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 15:06:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2761: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:06:36 standalone.localdomain ceph-mon[29756]: pgmap v2761: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2762: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45543 DF PROTO=TCP SPT=52120 DPT=9100 SEQ=3919929143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CACCF750000000001030307) 
Oct 13 15:06:37 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=28 res=1
Oct 13 15:06:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:06:37 standalone.localdomain podman[384804]: 2025-10-13 15:06:37.881520677 +0000 UTC m=+0.136753868 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:06:37 standalone.localdomain podman[384804]: 2025-10-13 15:06:37.922107809 +0000 UTC m=+0.177340950 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 13 15:06:37 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:06:38 standalone.localdomain ceph-mon[29756]: pgmap v2762: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45544 DF PROTO=TCP SPT=52120 DPT=9100 SEQ=3919929143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CACD3760000000001030307) 
Oct 13 15:06:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2763: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:40 standalone.localdomain ceph-mon[29756]: pgmap v2763: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45545 DF PROTO=TCP SPT=52120 DPT=9100 SEQ=3919929143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CACDB760000000001030307) 
Oct 13 15:06:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2764: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:06:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:06:41 standalone.localdomain systemd[1]: tmp-crun.DUTrGS.mount: Deactivated successfully.
Oct 13 15:06:41 standalone.localdomain podman[384822]: 2025-10-13 15:06:41.814047977 +0000 UTC m=+0.085160896 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Oct 13 15:06:41 standalone.localdomain podman[384822]: 2025-10-13 15:06:41.873926936 +0000 UTC m=+0.145039865 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:06:41 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:06:42 standalone.localdomain ceph-mon[29756]: pgmap v2764: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2765: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:43 standalone.localdomain kernel: SELinux:  Converting 2970 SID table entries...
Oct 13 15:06:43 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 15:06:43 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 15:06:43 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 15:06:43 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 15:06:43 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 15:06:43 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 15:06:43 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 15:06:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28418 DF PROTO=TCP SPT=45382 DPT=9105 SEQ=867703613 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CACE9360000000001030307) 
Oct 13 15:06:44 standalone.localdomain ceph-mon[29756]: pgmap v2765: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2766: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:06:46 standalone.localdomain ceph-mon[29756]: pgmap v2766: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2767: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:48 standalone.localdomain ceph-mon[29756]: pgmap v2767: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2459 DF PROTO=TCP SPT=43268 DPT=9882 SEQ=2849013606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CACFB760000000001030307) 
Oct 13 15:06:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2768: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:49 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=29 res=1
Oct 13 15:06:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28420 DF PROTO=TCP SPT=45382 DPT=9105 SEQ=867703613 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAD00F60000000001030307) 
Oct 13 15:06:50 standalone.localdomain ceph-mon[29756]: pgmap v2768: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2769: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:06:51 standalone.localdomain sudo[384858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:06:51 standalone.localdomain sudo[384858]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:06:51 standalone.localdomain sudo[384858]: pam_unix(sudo:session): session closed for user root
Oct 13 15:06:51 standalone.localdomain sudo[384876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 15:06:51 standalone.localdomain sudo[384876]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:06:51 standalone.localdomain podman[384968]: 2025-10-13 15:06:51.942215247 +0000 UTC m=+0.087987973 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, version=7, RELEASE=main, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 15:06:52 standalone.localdomain podman[384968]: 2025-10-13 15:06:52.06592724 +0000 UTC m=+0.211699966 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, CEPH_POINT_RELEASE=, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.expose-services=, name=rhceph, com.redhat.component=rhceph-container, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_BRANCH=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-type=git, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7)
Oct 13 15:06:52 standalone.localdomain ceph-mon[29756]: pgmap v2769: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:52 standalone.localdomain sudo[384876]: pam_unix(sudo:session): session closed for user root
Oct 13 15:06:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 15:06:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:06:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 15:06:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:06:52 standalone.localdomain sudo[385071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:06:52 standalone.localdomain sudo[385071]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:06:52 standalone.localdomain sudo[385071]: pam_unix(sudo:session): session closed for user root
Oct 13 15:06:52 standalone.localdomain sudo[385089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:06:52 standalone.localdomain sudo[385089]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:06:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2770: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:06:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:06:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:06:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:06:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:06:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:06:53 standalone.localdomain sudo[385089]: pam_unix(sudo:session): session closed for user root
Oct 13 15:06:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:06:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:06:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:06:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:06:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:06:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:06:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:06:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:06:53 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev c639f89a-8cff-4da4-bf1d-d03c3b638d14 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:06:53 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev c639f89a-8cff-4da4-bf1d-d03c3b638d14 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:06:53 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event c639f89a-8cff-4da4-bf1d-d03c3b638d14 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:06:53 standalone.localdomain sudo[385143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:06:53 standalone.localdomain sudo[385143]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:06:53 standalone.localdomain sudo[385143]: pam_unix(sudo:session): session closed for user root
Oct 13 15:06:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45565 DF PROTO=TCP SPT=56734 DPT=9102 SEQ=1954084645 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAD0E8E0000000001030307) 
Oct 13 15:06:53 standalone.localdomain kernel: SELinux:  Converting 2970 SID table entries...
Oct 13 15:06:53 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 15:06:53 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 15:06:53 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 15:06:53 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 15:06:53 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 15:06:53 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 15:06:53 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 15:06:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:06:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:06:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:06:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:06:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:06:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:06:54 standalone.localdomain ceph-mon[29756]: pgmap v2770: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2771: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:54 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:06:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:06:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:06:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53386 DF PROTO=TCP SPT=47904 DPT=9101 SEQ=484975649 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAD161D0000000001030307) 
Oct 13 15:06:55 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=30 res=1
Oct 13 15:06:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:06:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:06:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:06:55 standalone.localdomain systemd[1]: tmp-crun.EiMSgP.mount: Deactivated successfully.
Oct 13 15:06:55 standalone.localdomain podman[385165]: 2025-10-13 15:06:55.819953716 +0000 UTC m=+0.075118154 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, container_name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, build-date=2025-07-21T14:56:28, distribution-scope=public, io.openshift.expose-services=, release=1)
Oct 13 15:06:55 standalone.localdomain systemd[1]: tmp-crun.rMJljs.mount: Deactivated successfully.
Oct 13 15:06:55 standalone.localdomain podman[385167]: 2025-10-13 15:06:55.838276464 +0000 UTC m=+0.087398775 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, tcib_managed=true, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, release=1, vcs-type=git, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=)
Oct 13 15:06:55 standalone.localdomain ceph-mon[29756]: pgmap v2771: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:06:55 standalone.localdomain podman[385166]: 2025-10-13 15:06:55.880877457 +0000 UTC m=+0.132466355 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, io.buildah.version=1.33.12, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-swift-container-container, summary=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, tcib_managed=true, architecture=x86_64, container_name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:06:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:06:56 standalone.localdomain podman[385165]: 2025-10-13 15:06:56.004784066 +0000 UTC m=+0.259948534 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-swift-object, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:06:56 standalone.localdomain podman[385167]: 2025-10-13 15:06:56.01458128 +0000 UTC m=+0.263703621 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, name=rhosp17/openstack-swift-account, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc.)
Oct 13 15:06:56 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:06:56 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:06:56 standalone.localdomain podman[385166]: 2025-10-13 15:06:56.047788411 +0000 UTC m=+0.299377319 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.expose-services=, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_container_server, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team)
Oct 13 15:06:56 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:06:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2772: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:57 standalone.localdomain ceph-mon[29756]: pgmap v2772: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53388 DF PROTO=TCP SPT=47904 DPT=9101 SEQ=484975649 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAD22360000000001030307) 
Oct 13 15:06:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2773: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:06:59 standalone.localdomain ceph-mon[29756]: pgmap v2773: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2774: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:07:01 standalone.localdomain ceph-mon[29756]: pgmap v2774: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:02 standalone.localdomain kernel: SELinux:  Converting 2970 SID table entries...
Oct 13 15:07:02 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 15:07:02 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 15:07:02 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 15:07:02 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 15:07:02 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 15:07:02 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 15:07:02 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 15:07:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53389 DF PROTO=TCP SPT=47904 DPT=9101 SEQ=484975649 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAD31F60000000001030307) 
Oct 13 15:07:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2775: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:03 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:07:03 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=31 res=1
Oct 13 15:07:03 standalone.localdomain systemd-rc-local-generator[385283]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:07:03 standalone.localdomain systemd-sysv-generator[385286]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:07:03 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:07:03 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:07:03 standalone.localdomain systemd-rc-local-generator[385324]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:07:03 standalone.localdomain systemd-sysv-generator[385327]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:07:03 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:07:03 standalone.localdomain ceph-mon[29756]: pgmap v2775: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2776: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:07:06 standalone.localdomain ceph-mon[29756]: pgmap v2776: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2777: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:07:06.922 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:07:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:07:06.924 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:07:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:07:06.924 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:07:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29189 DF PROTO=TCP SPT=59316 DPT=9100 SEQ=2112421874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAD44A50000000001030307) 
Oct 13 15:07:08 standalone.localdomain ceph-mon[29756]: pgmap v2777: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29190 DF PROTO=TCP SPT=59316 DPT=9100 SEQ=2112421874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAD48B60000000001030307) 
Oct 13 15:07:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:07:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2778: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:08 standalone.localdomain podman[385339]: 2025-10-13 15:07:08.803214324 +0000 UTC m=+0.065600098 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:07:08 standalone.localdomain podman[385339]: 2025-10-13 15:07:08.811934945 +0000 UTC m=+0.074320749 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:07:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 15:07:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 5400.1 total, 600.0 interval
                                                        Cumulative writes: 6951 writes, 30K keys, 6951 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                                        Cumulative WAL: 6951 writes, 1290 syncs, 5.39 writes per sync, written: 0.03 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                        Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 13 15:07:08 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:07:10 standalone.localdomain ceph-mon[29756]: pgmap v2778: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29191 DF PROTO=TCP SPT=59316 DPT=9100 SEQ=2112421874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAD50B70000000001030307) 
Oct 13 15:07:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2779: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:07:12 standalone.localdomain ceph-mon[29756]: pgmap v2779: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:07:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2780: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:12 standalone.localdomain podman[385358]: 2025-10-13 15:07:12.979143963 +0000 UTC m=+0.211691496 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 13 15:07:13 standalone.localdomain podman[385358]: 2025-10-13 15:07:13.021774616 +0000 UTC m=+0.254322339 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Oct 13 15:07:13 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:07:13 standalone.localdomain kernel: SELinux:  Converting 2971 SID table entries...
Oct 13 15:07:13 standalone.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 15:07:13 standalone.localdomain kernel: SELinux:  policy capability open_perms=1
Oct 13 15:07:13 standalone.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 15:07:13 standalone.localdomain kernel: SELinux:  policy capability always_check_network=0
Oct 13 15:07:13 standalone.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 15:07:13 standalone.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 15:07:13 standalone.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 15:07:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42171 DF PROTO=TCP SPT=54174 DPT=9105 SEQ=615442074 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAD5E760000000001030307) 
Oct 13 15:07:14 standalone.localdomain groupadd[385391]: group added to /etc/group: name=clevis, GID=983
Oct 13 15:07:14 standalone.localdomain groupadd[385391]: group added to /etc/gshadow: name=clevis
Oct 13 15:07:14 standalone.localdomain groupadd[385391]: new group: name=clevis, GID=983
Oct 13 15:07:14 standalone.localdomain ceph-mon[29756]: pgmap v2780: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:14 standalone.localdomain useradd[385398]: new user: name=clevis, UID=984, GID=983, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 13 15:07:14 standalone.localdomain usermod[385408]: add 'clevis' to group 'tss'
Oct 13 15:07:14 standalone.localdomain usermod[385408]: add 'clevis' to shadow group 'tss'
Oct 13 15:07:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2781: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:15 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Check health
Oct 13 15:07:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:07:16 standalone.localdomain ceph-mon[29756]: pgmap v2781: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2782: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:17 standalone.localdomain groupadd[385430]: group added to /etc/group: name=dnsmasq, GID=982
Oct 13 15:07:17 standalone.localdomain groupadd[385430]: group added to /etc/gshadow: name=dnsmasq
Oct 13 15:07:17 standalone.localdomain groupadd[385430]: new group: name=dnsmasq, GID=982
Oct 13 15:07:17 standalone.localdomain useradd[385437]: new user: name=dnsmasq, UID=983, GID=982, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 13 15:07:17 standalone.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Oct 13 15:07:17 standalone.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=32 res=1
Oct 13 15:07:17 standalone.localdomain polkitd[1036]: Reloading rules
Oct 13 15:07:17 standalone.localdomain polkitd[1036]: Collecting garbage unconditionally...
Oct 13 15:07:17 standalone.localdomain polkitd[1036]: Loading rules from directory /etc/polkit-1/rules.d
Oct 13 15:07:17 standalone.localdomain polkitd[1036]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 13 15:07:17 standalone.localdomain polkitd[1036]: Finished loading, compiling and executing 5 rules
Oct 13 15:07:17 standalone.localdomain polkitd[1036]: Reloading rules
Oct 13 15:07:17 standalone.localdomain polkitd[1036]: Collecting garbage unconditionally...
Oct 13 15:07:17 standalone.localdomain polkitd[1036]: Loading rules from directory /etc/polkit-1/rules.d
Oct 13 15:07:17 standalone.localdomain polkitd[1036]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 13 15:07:17 standalone.localdomain polkitd[1036]: Finished loading, compiling and executing 5 rules
Oct 13 15:07:18 standalone.localdomain ceph-mon[29756]: pgmap v2782: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:07:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1219584200' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:07:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:07:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1219584200' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:07:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16897 DF PROTO=TCP SPT=43666 DPT=9882 SEQ=64671273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAD70B60000000001030307) 
Oct 13 15:07:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2783: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1219584200' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:07:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1219584200' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:07:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42173 DF PROTO=TCP SPT=54174 DPT=9105 SEQ=615442074 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAD76370000000001030307) 
Oct 13 15:07:20 standalone.localdomain ceph-mon[29756]: pgmap v2783: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2784: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:07:22 standalone.localdomain ceph-mon[29756]: pgmap v2784: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2785: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:07:23
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'manila_data', 'images', 'volumes', 'vms', '.mgr', 'backups']
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:07:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51428 DF PROTO=TCP SPT=56110 DPT=9102 SEQ=2532878533 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAD83BF0000000001030307) 
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:07:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:07:24 standalone.localdomain ceph-mon[29756]: pgmap v2785: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2786: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11961 DF PROTO=TCP SPT=50670 DPT=9101 SEQ=4152292724 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAD8B4E0000000001030307) 
Oct 13 15:07:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:07:26 standalone.localdomain ceph-mon[29756]: pgmap v2786: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:07:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:07:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:07:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2787: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:26 standalone.localdomain podman[385621]: 2025-10-13 15:07:26.830131389 +0000 UTC m=+0.078067655 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, container_name=swift_account_server, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-swift-account-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git)
Oct 13 15:07:26 standalone.localdomain podman[385619]: 2025-10-13 15:07:26.880154403 +0000 UTC m=+0.130254846 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, vcs-type=git, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28)
Oct 13 15:07:26 standalone.localdomain podman[385620]: 2025-10-13 15:07:26.935369338 +0000 UTC m=+0.185961117 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, version=17.1.9, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team)
Oct 13 15:07:27 standalone.localdomain podman[385621]: 2025-10-13 15:07:27.035224908 +0000 UTC m=+0.283161184 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-account)
Oct 13 15:07:27 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:07:27 standalone.localdomain podman[385619]: 2025-10-13 15:07:27.098920787 +0000 UTC m=+0.349021230 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1, vcs-type=git, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, tcib_managed=true)
Oct 13 15:07:27 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:07:27 standalone.localdomain podman[385620]: 2025-10-13 15:07:27.138760514 +0000 UTC m=+0.389352273 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, io.openshift.expose-services=, container_name=swift_container_server, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.component=openstack-swift-container-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4)
Oct 13 15:07:27 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:07:28 standalone.localdomain ceph-mon[29756]: pgmap v2787: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11963 DF PROTO=TCP SPT=50670 DPT=9101 SEQ=4152292724 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAD97760000000001030307) 
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2788: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:07:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:07:30 standalone.localdomain ceph-mon[29756]: pgmap v2788: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2789: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:07:32 standalone.localdomain ceph-mon[29756]: pgmap v2789: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11964 DF PROTO=TCP SPT=50670 DPT=9101 SEQ=4152292724 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CADA7360000000001030307) 
Oct 13 15:07:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2790: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:34 standalone.localdomain ceph-mon[29756]: pgmap v2790: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2791: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:07:36 standalone.localdomain ceph-mon[29756]: pgmap v2791: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2792: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6201 DF PROTO=TCP SPT=49732 DPT=9100 SEQ=1682347890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CADB9D50000000001030307) 
Oct 13 15:07:38 standalone.localdomain ceph-mon[29756]: pgmap v2792: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6202 DF PROTO=TCP SPT=49732 DPT=9100 SEQ=1682347890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CADBDF60000000001030307) 
Oct 13 15:07:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2793: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:07:39 standalone.localdomain podman[393340]: 2025-10-13 15:07:39.724719003 +0000 UTC m=+0.060466259 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:07:39 standalone.localdomain podman[393340]: 2025-10-13 15:07:39.759785566 +0000 UTC m=+0.095532802 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 15:07:39 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:07:40 standalone.localdomain ceph-mon[29756]: pgmap v2793: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6203 DF PROTO=TCP SPT=49732 DPT=9100 SEQ=1682347890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CADC5F60000000001030307) 
Oct 13 15:07:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2794: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:07:42 standalone.localdomain ceph-mon[29756]: pgmap v2794: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2795: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:07:43 standalone.localdomain podman[397205]: 2025-10-13 15:07:43.790572301 +0000 UTC m=+0.053888133 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 15:07:43 standalone.localdomain podman[397205]: 2025-10-13 15:07:43.828835229 +0000 UTC m=+0.092151071 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:07:43 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:07:44 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44062 DF PROTO=TCP SPT=35570 DPT=9105 SEQ=3529601824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CADD3B70000000001030307) 
Oct 13 15:07:44 standalone.localdomain ceph-mon[29756]: pgmap v2795: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2796: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:07:46 standalone.localdomain ceph-mon[29756]: pgmap v2796: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2797: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:48 standalone.localdomain ceph-mon[29756]: pgmap v2797: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11536 DF PROTO=TCP SPT=33150 DPT=9882 SEQ=1083645052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CADE5F60000000001030307) 
Oct 13 15:07:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2798: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44064 DF PROTO=TCP SPT=35570 DPT=9105 SEQ=3529601824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CADEB760000000001030307) 
Oct 13 15:07:50 standalone.localdomain ceph-mon[29756]: pgmap v2798: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2799: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:07:51 standalone.localdomain groupadd[402488]: group added to /etc/group: name=ceph, GID=167
Oct 13 15:07:51 standalone.localdomain groupadd[402488]: group added to /etc/gshadow: name=ceph
Oct 13 15:07:51 standalone.localdomain groupadd[402488]: new group: name=ceph, GID=167
Oct 13 15:07:51 standalone.localdomain useradd[402494]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 13 15:07:52 standalone.localdomain ceph-mon[29756]: pgmap v2799: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2800: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:07:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:07:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:07:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:07:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:07:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:07:53 standalone.localdomain sudo[402510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:07:53 standalone.localdomain sudo[402510]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:07:53 standalone.localdomain sudo[402510]: pam_unix(sudo:session): session closed for user root
Oct 13 15:07:53 standalone.localdomain sudo[402532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:07:53 standalone.localdomain sudo[402532]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:07:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24271 DF PROTO=TCP SPT=35268 DPT=9102 SEQ=2506458968 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CADF8EE0000000001030307) 
Oct 13 15:07:54 standalone.localdomain sudo[402532]: pam_unix(sudo:session): session closed for user root
Oct 13 15:07:54 standalone.localdomain sudo[403133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:07:54 standalone.localdomain sudo[403133]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:07:54 standalone.localdomain sudo[403133]: pam_unix(sudo:session): session closed for user root
Oct 13 15:07:54 standalone.localdomain sudo[403196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Oct 13 15:07:54 standalone.localdomain sudo[403196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:07:54 standalone.localdomain sudo[403196]: pam_unix(sudo:session): session closed for user root
Oct 13 15:07:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 15:07:54 standalone.localdomain ceph-mon[29756]: pgmap v2800: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:07:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 15:07:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:07:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:07:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:07:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:07:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:07:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:07:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:07:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:07:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:07:54 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev af1a5d2e-ff79-4243-bdbb-951686647d7e (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:07:54 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev af1a5d2e-ff79-4243-bdbb-951686647d7e (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:07:54 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event af1a5d2e-ff79-4243-bdbb-951686647d7e (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:07:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2801: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:54 standalone.localdomain sudo[403584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:07:54 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:07:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:07:54 standalone.localdomain sudo[403584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:07:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:07:54 standalone.localdomain sudo[403584]: pam_unix(sudo:session): session closed for user root
Oct 13 15:07:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18135 DF PROTO=TCP SPT=34042 DPT=9101 SEQ=3920654679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAE007D0000000001030307) 
Oct 13 15:07:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:07:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:07:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:07:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:07:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:07:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:07:55 standalone.localdomain ceph-mon[29756]: pgmap v2801: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:07:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:07:56 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 15:07:56 standalone.localdomain systemd[1]: Starting man-db-cache-update.service...
Oct 13 15:07:56 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:07:56 standalone.localdomain systemd-rc-local-generator[403824]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:07:56 standalone.localdomain systemd-sysv-generator[403828]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:07:56 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:07:56 standalone.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 15:07:56 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 15:07:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2802: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:07:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:07:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:07:58 standalone.localdomain podman[406075]: 2025-10-13 15:07:58.350668374 +0000 UTC m=+0.621308089 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9)
Oct 13 15:07:58 standalone.localdomain systemd[1]: tmp-crun.JWjoss.mount: Deactivated successfully.
Oct 13 15:07:58 standalone.localdomain podman[406080]: 2025-10-13 15:07:58.363903829 +0000 UTC m=+0.632043312 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-account, com.redhat.component=openstack-swift-account-container, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T16:11:22, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Oct 13 15:07:58 standalone.localdomain ceph-mon[29756]: pgmap v2802: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:07:58 standalone.localdomain podman[406070]: 2025-10-13 15:07:58.407906385 +0000 UTC m=+0.678353714 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, distribution-scope=public, name=rhosp17/openstack-swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9)
Oct 13 15:07:58 standalone.localdomain podman[406075]: 2025-10-13 15:07:58.519605878 +0000 UTC m=+0.790245593 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, managed_by=tripleo_ansible, container_name=swift_container_server, io.buildah.version=1.33.12, batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905)
Oct 13 15:07:58 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:07:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18137 DF PROTO=TCP SPT=34042 DPT=9101 SEQ=3920654679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAE0C770000000001030307) 
Oct 13 15:07:58 standalone.localdomain podman[406080]: 2025-10-13 15:07:58.583879851 +0000 UTC m=+0.852019314 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_id=tripleo_step4, container_name=swift_account_server, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, batch=17.1_20250721.1, distribution-scope=public, name=rhosp17/openstack-swift-account, build-date=2025-07-21T16:11:22, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:07:58 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:07:58 standalone.localdomain podman[406070]: 2025-10-13 15:07:58.59690155 +0000 UTC m=+0.867348859 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, version=17.1.9, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, release=1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12)
Oct 13 15:07:58 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:07:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2803: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:00 standalone.localdomain ceph-mon[29756]: pgmap v2803: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2804: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:08:01 standalone.localdomain python3.9[409875]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 15:08:01 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:08:01 standalone.localdomain systemd-rc-local-generator[409980]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:08:01 standalone.localdomain systemd-sysv-generator[409988]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:08:01 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:08:02 standalone.localdomain python3.9[410441]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 15:08:02 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:08:02 standalone.localdomain systemd-sysv-generator[410745]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:08:02 standalone.localdomain systemd-rc-local-generator[410742]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:08:02 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:08:02 standalone.localdomain ceph-mon[29756]: pgmap v2804: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18138 DF PROTO=TCP SPT=34042 DPT=9101 SEQ=3920654679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAE1C370000000001030307) 
Oct 13 15:08:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2805: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:03 standalone.localdomain python3.9[411178]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 15:08:03 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:08:03 standalone.localdomain systemd-rc-local-generator[411458]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:08:03 standalone.localdomain systemd-sysv-generator[411462]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:08:03 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:08:04 standalone.localdomain python3.9[411975]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 15:08:04 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:08:04 standalone.localdomain systemd-sysv-generator[412319]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:08:04 standalone.localdomain systemd-rc-local-generator[412316]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:08:04 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:08:04 standalone.localdomain ceph-mon[29756]: pgmap v2805: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2806: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:05 standalone.localdomain python3.9[412778]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:05 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:08:05 standalone.localdomain systemd-rc-local-generator[413057]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:08:05 standalone.localdomain systemd-sysv-generator[413062]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:08:05 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:08:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:08:06 standalone.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 15:08:06 standalone.localdomain systemd[1]: Finished man-db-cache-update.service.
Oct 13 15:08:06 standalone.localdomain systemd[1]: man-db-cache-update.service: Consumed 11.255s CPU time.
Oct 13 15:08:06 standalone.localdomain systemd[1]: run-raf78af06de374518a21e391e55a1c082.service: Deactivated successfully.
Oct 13 15:08:06 standalone.localdomain systemd[1]: run-r223e6b32bed248ed875d557239833c9b.service: Deactivated successfully.
Oct 13 15:08:06 standalone.localdomain python3.9[413416]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:06 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:08:06 standalone.localdomain ceph-mon[29756]: pgmap v2806: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:06 standalone.localdomain systemd-rc-local-generator[413476]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:08:06 standalone.localdomain systemd-sysv-generator[413479]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:08:06 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:08:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2807: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:08:06.923 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:08:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:08:06.924 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:08:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:08:06.925 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:08:07 standalone.localdomain python3.9[413592]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38007 DF PROTO=TCP SPT=59032 DPT=9100 SEQ=2620433724 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAE2F040000000001030307) 
Oct 13 15:08:07 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:08:07 standalone.localdomain systemd-sysv-generator[413625]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:08:07 standalone.localdomain systemd-rc-local-generator[413622]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:08:07 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:08:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38008 DF PROTO=TCP SPT=59032 DPT=9100 SEQ=2620433724 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAE32F60000000001030307) 
Oct 13 15:08:08 standalone.localdomain ceph-mon[29756]: pgmap v2807: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:08 standalone.localdomain python3.9[413739]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2808: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:09 standalone.localdomain python3.9[413850]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:09 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:08:09 standalone.localdomain systemd-rc-local-generator[413878]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:08:09 standalone.localdomain systemd-sysv-generator[413881]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:08:09 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:08:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38009 DF PROTO=TCP SPT=59032 DPT=9100 SEQ=2620433724 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAE3AF60000000001030307) 
Oct 13 15:08:10 standalone.localdomain ceph-mon[29756]: pgmap v2808: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:10 standalone.localdomain python3.9[413998]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 15:08:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:08:10 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:08:10 standalone.localdomain podman[414000]: 2025-10-13 15:08:10.69394596 +0000 UTC m=+0.089578126 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:08:10 standalone.localdomain podman[414000]: 2025-10-13 15:08:10.704697694 +0000 UTC m=+0.100329780 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:08:10 standalone.localdomain systemd-rc-local-generator[414043]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:08:10 standalone.localdomain systemd-sysv-generator[414048]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:08:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2809: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:10 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:08:11 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:08:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:08:11 standalone.localdomain python3.9[414165]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:12 standalone.localdomain ceph-mon[29756]: pgmap v2809: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2810: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:13 standalone.localdomain python3.9[414276]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36872 DF PROTO=TCP SPT=52312 DPT=9105 SEQ=4046077422 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAE48B60000000001030307) 
Oct 13 15:08:14 standalone.localdomain python3.9[414387]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:08:14 standalone.localdomain podman[414389]: 2025-10-13 15:08:14.370745517 +0000 UTC m=+0.059276760 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:08:14 standalone.localdomain podman[414389]: 2025-10-13 15:08:14.405243371 +0000 UTC m=+0.093774644 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 15:08:14 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:08:14 standalone.localdomain ceph-mon[29756]: pgmap v2810: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2811: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:15 standalone.localdomain python3.9[414522]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:15 standalone.localdomain sshd[414526]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:08:15 standalone.localdomain sshd[414526]: error: kex_exchange_identification: client sent invalid protocol identifier "GET / HTTP/1.1"
Oct 13 15:08:15 standalone.localdomain sshd[414526]: banner exchange: Connection from 165.154.238.248 port 57212: invalid format
Oct 13 15:08:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:08:16 standalone.localdomain ceph-mon[29756]: pgmap v2811: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:16 standalone.localdomain python3.9[414634]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2812: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:17 standalone.localdomain python3.9[414745]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:18 standalone.localdomain python3.9[414856]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:18 standalone.localdomain ceph-mon[29756]: pgmap v2812: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:08:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3175914606' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:08:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:08:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3175914606' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:08:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61394 DF PROTO=TCP SPT=34606 DPT=9882 SEQ=1174064584 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAE5B370000000001030307) 
Oct 13 15:08:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2813: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:19 standalone.localdomain python3.9[414967]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3175914606' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3175914606' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.549467) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368099549526, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 2110, "num_deletes": 251, "total_data_size": 2008911, "memory_usage": 2048432, "flush_reason": "Manual Compaction"}
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368099562180, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 1942074, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64597, "largest_seqno": 66706, "table_properties": {"data_size": 1933787, "index_size": 5053, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17076, "raw_average_key_size": 19, "raw_value_size": 1916928, "raw_average_value_size": 2234, "num_data_blocks": 228, "num_entries": 858, "num_filter_entries": 858, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760367903, "oldest_key_time": 1760367903, "file_creation_time": 1760368099, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 12822 microseconds, and 4705 cpu microseconds.
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.562259) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 1942074 bytes OK
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.562314) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.564070) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.564089) EVENT_LOG_v1 {"time_micros": 1760368099564082, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.564119) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 1999914, prev total WAL file size 1999914, number of live WAL files 2.
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.564874) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(1896KB)], [146(5562KB)]
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368099564939, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 7637758, "oldest_snapshot_seqno": -1}
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 6111 keys, 6615392 bytes, temperature: kUnknown
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368099593407, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 6615392, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6577188, "index_size": 21850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 157886, "raw_average_key_size": 25, "raw_value_size": 6468723, "raw_average_value_size": 1058, "num_data_blocks": 873, "num_entries": 6111, "num_filter_entries": 6111, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760368099, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.593713) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 6615392 bytes
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.595391) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 267.1 rd, 231.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 5.4 +0.0 blob) out(6.3 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 6631, records dropped: 520 output_compression: NoCompression
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.595430) EVENT_LOG_v1 {"time_micros": 1760368099595413, "job": 90, "event": "compaction_finished", "compaction_time_micros": 28595, "compaction_time_cpu_micros": 15166, "output_level": 6, "num_output_files": 1, "total_output_size": 6615392, "num_input_records": 6631, "num_output_records": 6111, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368099595940, "job": 90, "event": "table_file_deletion", "file_number": 148}
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368099596825, "job": 90, "event": "table_file_deletion", "file_number": 146}
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.564701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.596955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.596965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.596967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.596968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:19 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:19.596970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:19 standalone.localdomain python3.9[415078]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36874 DF PROTO=TCP SPT=52312 DPT=9105 SEQ=4046077422 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAE60760000000001030307) 
Oct 13 15:08:20 standalone.localdomain ceph-mon[29756]: pgmap v2813: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2814: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:08:21 standalone.localdomain python3.9[415189]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:22 standalone.localdomain python3.9[415300]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:22 standalone.localdomain ceph-mon[29756]: pgmap v2814: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2815: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:08:23
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'images', 'volumes', '.mgr', 'manila_metadata', 'manila_data', 'backups']
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:08:23 standalone.localdomain python3.9[415411]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15758 DF PROTO=TCP SPT=55730 DPT=9102 SEQ=3788654215 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAE6E1F0000000001030307) 
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:08:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:08:24 standalone.localdomain python3.9[415522]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:24 standalone.localdomain ceph-mon[29756]: pgmap v2815: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2816: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:24 standalone.localdomain python3.9[415633]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 15:08:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45401 DF PROTO=TCP SPT=41436 DPT=9101 SEQ=3432561289 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAE75AE0000000001030307) 
Oct 13 15:08:25 standalone.localdomain python3.9[415744]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:08:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:08:26 standalone.localdomain python3.9[415852]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:08:26 standalone.localdomain ceph-mon[29756]: pgmap v2816: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2817: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:26 standalone.localdomain python3.9[415960]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:08:27 standalone.localdomain python3.9[416068]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:08:27 standalone.localdomain python3.9[416176]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:08:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45403 DF PROTO=TCP SPT=41436 DPT=9101 SEQ=3432561289 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAE81B60000000001030307) 
Oct 13 15:08:28 standalone.localdomain python3.9[416284]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:08:28 standalone.localdomain ceph-mon[29756]: pgmap v2817: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:08:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:08:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:08:28 standalone.localdomain podman[416304]: 2025-10-13 15:08:28.809224081 +0000 UTC m=+0.064148290 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2818: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:28 standalone.localdomain podman[416302]: 2025-10-13 15:08:28.859041958 +0000 UTC m=+0.116850532 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, build-date=2025-07-21T14:56:28, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, vcs-type=git, container_name=swift_object_server, config_id=tripleo_step4)
Oct 13 15:08:28 standalone.localdomain podman[416303]: 2025-10-13 15:08:28.92509517 +0000 UTC m=+0.177810667 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vendor=Red Hat, Inc., container_name=swift_container_server, batch=17.1_20250721.1, tcib_managed=true, build-date=2025-07-21T15:54:32, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:08:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:08:29 standalone.localdomain podman[416302]: 2025-10-13 15:08:29.029173162 +0000 UTC m=+0.286981796 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, vcs-type=git, build-date=2025-07-21T14:56:28, container_name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 15:08:29 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:08:29 standalone.localdomain podman[416304]: 2025-10-13 15:08:29.043909176 +0000 UTC m=+0.298833395 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, release=1, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, version=17.1.9, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T16:11:22, container_name=swift_account_server, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container)
Oct 13 15:08:29 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:08:29 standalone.localdomain podman[416303]: 2025-10-13 15:08:29.123789723 +0000 UTC m=+0.376505240 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, build-date=2025-07-21T15:54:32, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., container_name=swift_container_server, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, version=17.1.9, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, architecture=x86_64)
Oct 13 15:08:29 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:08:29 standalone.localdomain python3.9[416474]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:30 standalone.localdomain python3.9[416563]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/root/.ansible/tmp/ansible-tmp-1760368108.781859-561-124791197487688/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:30 standalone.localdomain ceph-mon[29756]: pgmap v2818: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:30 standalone.localdomain python3.9[416671]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2819: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:08:31 standalone.localdomain python3.9[416760]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/root/.ansible/tmp/ansible-tmp-1760368110.3032553-561-241489038983304/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:31 standalone.localdomain python3.9[416868]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:32 standalone.localdomain python3.9[416956]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/root/.ansible/tmp/ansible-tmp-1760368111.400572-561-21107530946145/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45404 DF PROTO=TCP SPT=41436 DPT=9101 SEQ=3432561289 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAE91760000000001030307) 
Oct 13 15:08:32 standalone.localdomain ceph-mon[29756]: pgmap v2819: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2820: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:32 standalone.localdomain python3.9[417123]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:33 standalone.localdomain python3.9[417270]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/root/.ansible/tmp/ansible-tmp-1760368112.4074914-561-88559481449879/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:33 standalone.localdomain python3.9[417435]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:34 standalone.localdomain python3.9[417523]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/root/.ansible/tmp/ansible-tmp-1760368113.460455-561-204762399253792/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:34 standalone.localdomain ceph-mon[29756]: pgmap v2820: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2821: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:35 standalone.localdomain python3.9[417631]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:35 standalone.localdomain python3.9[417719]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/root/.ansible/tmp/ansible-tmp-1760368114.5647874-561-127370904438673/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.037627) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368116037657, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 402, "num_deletes": 256, "total_data_size": 160169, "memory_usage": 168072, "flush_reason": "Manual Compaction"}
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368116041136, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 157791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66707, "largest_seqno": 67108, "table_properties": {"data_size": 155466, "index_size": 437, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5605, "raw_average_key_size": 17, "raw_value_size": 150803, "raw_average_value_size": 480, "num_data_blocks": 20, "num_entries": 314, "num_filter_entries": 314, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760368100, "oldest_key_time": 1760368100, "file_creation_time": 1760368116, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 3574 microseconds, and 1124 cpu microseconds.
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.041181) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 157791 bytes OK
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.041217) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.043924) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.043959) EVENT_LOG_v1 {"time_micros": 1760368116043952, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.043980) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 157579, prev total WAL file size 158068, number of live WAL files 2.
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.044621) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323538' seq:72057594037927935, type:22 .. '6C6F676D0032353130' seq:0, type:0; will stop at (end)
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(154KB)], [149(6460KB)]
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368116044648, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 6773183, "oldest_snapshot_seqno": -1}
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 5900 keys, 6688843 bytes, temperature: kUnknown
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368116084024, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 6688843, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6651112, "index_size": 21889, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 154419, "raw_average_key_size": 26, "raw_value_size": 6545498, "raw_average_value_size": 1109, "num_data_blocks": 872, "num_entries": 5900, "num_filter_entries": 5900, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760368116, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.084266) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 6688843 bytes
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.086028) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.7 rd, 169.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 6.3 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(85.3) write-amplify(42.4) OK, records in: 6425, records dropped: 525 output_compression: NoCompression
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.086045) EVENT_LOG_v1 {"time_micros": 1760368116086038, "job": 92, "event": "compaction_finished", "compaction_time_micros": 39438, "compaction_time_cpu_micros": 13981, "output_level": 6, "num_output_files": 1, "total_output_size": 6688843, "num_input_records": 6425, "num_output_records": 5900, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368116086160, "job": 92, "event": "table_file_deletion", "file_number": 151}
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368116086622, "job": 92, "event": "table_file_deletion", "file_number": 149}
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.044519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.086769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.086779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.086781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.086786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:36.086794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:36 standalone.localdomain python3.9[417827]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:36 standalone.localdomain ceph-mon[29756]: pgmap v2821: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:36 standalone.localdomain python3.9[417913]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/root/.ansible/tmp/ansible-tmp-1760368115.790117-561-61751593527587/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2822: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:37 standalone.localdomain python3.9[418021]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43190 DF PROTO=TCP SPT=50910 DPT=9100 SEQ=3695098689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAEA4340000000001030307) 
Oct 13 15:08:37 standalone.localdomain python3.9[418109]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/root/.ansible/tmp/ansible-tmp-1760368116.8446157-561-37012439970923/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:38 standalone.localdomain python3.9[418217]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43191 DF PROTO=TCP SPT=50910 DPT=9100 SEQ=3695098689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAEA8360000000001030307) 
Oct 13 15:08:38 standalone.localdomain ceph-mon[29756]: pgmap v2822: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2823: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:38 standalone.localdomain python3.9[418325]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:39 standalone.localdomain python3.9[418433]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:40 standalone.localdomain python3.9[418541]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43192 DF PROTO=TCP SPT=50910 DPT=9100 SEQ=3695098689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAEB0360000000001030307) 
Oct 13 15:08:40 standalone.localdomain rhsm-service[6467]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Oct 13 15:08:40 standalone.localdomain ceph-mon[29756]: pgmap v2823: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:40 standalone.localdomain python3.9[418649]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2824: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:08:41 standalone.localdomain python3.9[418758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:08:41 standalone.localdomain python3.9[418866]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:41 standalone.localdomain systemd[1]: tmp-crun.buefyE.mount: Deactivated successfully.
Oct 13 15:08:41 standalone.localdomain podman[418867]: 2025-10-13 15:08:41.846408769 +0000 UTC m=+0.105285648 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 15:08:41 standalone.localdomain podman[418867]: 2025-10-13 15:08:41.881987442 +0000 UTC m=+0.140864291 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 15:08:41 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:08:42 standalone.localdomain ceph-mon[29756]: pgmap v2824: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:42 standalone.localdomain python3.9[418992]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2825: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:42 standalone.localdomain python3.9[419100]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:43 standalone.localdomain python3.9[419208]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13579 DF PROTO=TCP SPT=36946 DPT=9105 SEQ=3523058940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAEBDF60000000001030307) 
Oct 13 15:08:44 standalone.localdomain ceph-mon[29756]: pgmap v2825: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:44 standalone.localdomain python3.9[419316]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:44 standalone.localdomain python3.9[419424]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:08:44 standalone.localdomain podman[419425]: 2025-10-13 15:08:44.799137435 +0000 UTC m=+0.066935845 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Oct 13 15:08:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2826: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:44 standalone.localdomain podman[419425]: 2025-10-13 15:08:44.867287338 +0000 UTC m=+0.135085768 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:08:44 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:08:45 standalone.localdomain python3.9[419558]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:45 standalone.localdomain python3.9[419666]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.048478) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368126048520, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 344, "num_deletes": 251, "total_data_size": 104529, "memory_usage": 110400, "flush_reason": "Manual Compaction"}
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368126051822, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 102481, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67109, "largest_seqno": 67452, "table_properties": {"data_size": 100370, "index_size": 287, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5881, "raw_average_key_size": 20, "raw_value_size": 96176, "raw_average_value_size": 330, "num_data_blocks": 13, "num_entries": 291, "num_filter_entries": 291, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760368116, "oldest_key_time": 1760368116, "file_creation_time": 1760368126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 3372 microseconds, and 764 cpu microseconds.
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.051862) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 102481 bytes OK
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.051884) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.055513) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.055530) EVENT_LOG_v1 {"time_micros": 1760368126055525, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.055544) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 102182, prev total WAL file size 102671, number of live WAL files 2.
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.055974) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353032' seq:72057594037927935, type:22 .. '6D6772737461740032373534' seq:0, type:0; will stop at (end)
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(100KB)], [152(6532KB)]
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368126056039, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 6791324, "oldest_snapshot_seqno": -1}
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: pgmap v2826: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 5677 keys, 4732103 bytes, temperature: kUnknown
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368126086529, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 4732103, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4700684, "index_size": 16154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 149963, "raw_average_key_size": 26, "raw_value_size": 4603711, "raw_average_value_size": 810, "num_data_blocks": 633, "num_entries": 5677, "num_filter_entries": 5677, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760368126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.086797) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 4732103 bytes
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.088525) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.2 rd, 154.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 6.4 +0.0 blob) out(4.5 +0.0 blob), read-write-amplify(112.4) write-amplify(46.2) OK, records in: 6191, records dropped: 514 output_compression: NoCompression
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.088553) EVENT_LOG_v1 {"time_micros": 1760368126088541, "job": 94, "event": "compaction_finished", "compaction_time_micros": 30561, "compaction_time_cpu_micros": 20390, "output_level": 6, "num_output_files": 1, "total_output_size": 4732103, "num_input_records": 6191, "num_output_records": 5677, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368126088711, "job": 94, "event": "table_file_deletion", "file_number": 154}
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368126089569, "job": 94, "event": "table_file_deletion", "file_number": 152}
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.055869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.089673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.089679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.089680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.089682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:46 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:08:46.089684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:08:46 standalone.localdomain python3.9[419774]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2827: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:47 standalone.localdomain python3.9[419882]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:47 standalone.localdomain python3.9[419968]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368126.778777-782-74261497418279/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:48 standalone.localdomain ceph-mon[29756]: pgmap v2827: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:48 standalone.localdomain python3.9[420076]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33378 DF PROTO=TCP SPT=35370 DPT=9882 SEQ=2138959709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAED0370000000001030307) 
Oct 13 15:08:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2828: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:48 standalone.localdomain python3.9[420162]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368127.885175-782-86307210421272/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:49 standalone.localdomain python3.9[420270]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:49 standalone.localdomain python3.9[420356]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368129.0331526-782-135697856404563/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13581 DF PROTO=TCP SPT=36946 DPT=9105 SEQ=3523058940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAED5B60000000001030307) 
Oct 13 15:08:50 standalone.localdomain ceph-mon[29756]: pgmap v2828: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:50 standalone.localdomain python3.9[420464]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2829: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:08:51 standalone.localdomain python3.9[420550]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368130.1112654-782-246129509657998/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:51 standalone.localdomain python3.9[420658]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:52 standalone.localdomain ceph-mon[29756]: pgmap v2829: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:52 standalone.localdomain python3.9[420744]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368131.2854226-782-140998248922556/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2830: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:52 standalone.localdomain python3.9[420852]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:08:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:08:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:08:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:08:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:08:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:08:53 standalone.localdomain python3.9[420938]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368132.4502602-782-188317405659838/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12585 DF PROTO=TCP SPT=57862 DPT=9102 SEQ=1350600738 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAEE34E0000000001030307) 
Oct 13 15:08:54 standalone.localdomain python3.9[421046]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:54 standalone.localdomain ceph-mon[29756]: pgmap v2830: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:54 standalone.localdomain python3.9[421132]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368133.6517553-782-120301075654391/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2831: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:54 standalone.localdomain sudo[421241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:08:54 standalone.localdomain sudo[421241]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:08:54 standalone.localdomain sudo[421241]: pam_unix(sudo:session): session closed for user root
Oct 13 15:08:54 standalone.localdomain sudo[421259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Oct 13 15:08:54 standalone.localdomain sudo[421259]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:08:55 standalone.localdomain python3.9[421240]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:55 standalone.localdomain sudo[421259]: pam_unix(sudo:session): session closed for user root
Oct 13 15:08:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 15:08:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:08:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 15:08:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:08:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16077 DF PROTO=TCP SPT=40298 DPT=9101 SEQ=4082273573 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAEEADE0000000001030307) 
Oct 13 15:08:55 standalone.localdomain sudo[421385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:08:55 standalone.localdomain sudo[421385]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:08:55 standalone.localdomain sudo[421385]: pam_unix(sudo:session): session closed for user root
Oct 13 15:08:55 standalone.localdomain sudo[421403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:08:55 standalone.localdomain sudo[421403]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:08:55 standalone.localdomain python3.9[421383]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368134.6815772-782-188336283017621/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:08:56 standalone.localdomain sudo[421403]: pam_unix(sudo:session): session closed for user root
Oct 13 15:08:56 standalone.localdomain sudo[421560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:08:56 standalone.localdomain sudo[421560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:08:56 standalone.localdomain sudo[421560]: pam_unix(sudo:session): session closed for user root
Oct 13 15:08:56 standalone.localdomain python3.9[421545]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:56 standalone.localdomain sudo[421578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -- inventory --format=json-pretty --filter-for-batch
Oct 13 15:08:56 standalone.localdomain sudo[421578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:08:56 standalone.localdomain ceph-mon[29756]: pgmap v2831: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:08:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:08:56 standalone.localdomain python3.9[421681]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368135.750058-782-144330283363497/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2832: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:56 standalone.localdomain podman[421755]: 
Oct 13 15:08:56 standalone.localdomain podman[421755]: 2025-10-13 15:08:56.890263756 +0000 UTC m=+0.069653474 container create 6e72214ed42f4e88e0498a32827af0736ea4346fcf682511804fc288bea8f308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_hertz, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.openshift.expose-services=, RELEASE=main, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., release=553, GIT_CLEAN=True, distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64)
Oct 13 15:08:56 standalone.localdomain systemd[1]: Started libpod-conmon-6e72214ed42f4e88e0498a32827af0736ea4346fcf682511804fc288bea8f308.scope.
Oct 13 15:08:56 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:08:56 standalone.localdomain podman[421755]: 2025-10-13 15:08:56.957424128 +0000 UTC m=+0.136813876 container init 6e72214ed42f4e88e0498a32827af0736ea4346fcf682511804fc288bea8f308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_hertz, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, io.buildah.version=1.33.12, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Oct 13 15:08:56 standalone.localdomain podman[421755]: 2025-10-13 15:08:56.858704077 +0000 UTC m=+0.038093825 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 15:08:56 standalone.localdomain podman[421755]: 2025-10-13 15:08:56.966660484 +0000 UTC m=+0.146050212 container start 6e72214ed42f4e88e0498a32827af0736ea4346fcf682511804fc288bea8f308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_hertz, name=rhceph, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, version=7, GIT_CLEAN=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 15:08:56 standalone.localdomain podman[421755]: 2025-10-13 15:08:56.96706588 +0000 UTC m=+0.146455658 container attach 6e72214ed42f4e88e0498a32827af0736ea4346fcf682511804fc288bea8f308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_hertz, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, RELEASE=main, distribution-scope=public, ceph=True, release=553, com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 15:08:56 standalone.localdomain upbeat_hertz[421791]: 167 167
Oct 13 15:08:56 standalone.localdomain systemd[1]: libpod-6e72214ed42f4e88e0498a32827af0736ea4346fcf682511804fc288bea8f308.scope: Deactivated successfully.
Oct 13 15:08:56 standalone.localdomain podman[421755]: 2025-10-13 15:08:56.972400776 +0000 UTC m=+0.151790494 container died 6e72214ed42f4e88e0498a32827af0736ea4346fcf682511804fc288bea8f308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_hertz, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, version=7, release=553, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, distribution-scope=public)
Oct 13 15:08:57 standalone.localdomain podman[421812]: 2025-10-13 15:08:57.043447596 +0000 UTC m=+0.061029885 container remove 6e72214ed42f4e88e0498a32827af0736ea4346fcf682511804fc288bea8f308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_hertz, GIT_CLEAN=True, io.buildah.version=1.33.12, release=553, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, name=rhceph, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Oct 13 15:08:57 standalone.localdomain systemd[1]: libpod-conmon-6e72214ed42f4e88e0498a32827af0736ea4346fcf682511804fc288bea8f308.scope: Deactivated successfully.
Oct 13 15:08:57 standalone.localdomain podman[421870]: 
Oct 13 15:08:57 standalone.localdomain podman[421870]: 2025-10-13 15:08:57.23202493 +0000 UTC m=+0.083441773 container create 59b26f662a8b6a564f15d82de83a220908376391469a27b252efacc928cabd3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_zhukovsky, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, name=rhceph)
Oct 13 15:08:57 standalone.localdomain python3.9[421864]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:57 standalone.localdomain systemd[1]: Started libpod-conmon-59b26f662a8b6a564f15d82de83a220908376391469a27b252efacc928cabd3e.scope.
Oct 13 15:08:57 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:08:57 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132b0206bf6510891952121fe55c361fd546cba1c6868dcbdd6ff0cdc3413277/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 13 15:08:57 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132b0206bf6510891952121fe55c361fd546cba1c6868dcbdd6ff0cdc3413277/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 15:08:57 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132b0206bf6510891952121fe55c361fd546cba1c6868dcbdd6ff0cdc3413277/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 15:08:57 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132b0206bf6510891952121fe55c361fd546cba1c6868dcbdd6ff0cdc3413277/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 15:08:57 standalone.localdomain podman[421870]: 2025-10-13 15:08:57.303641553 +0000 UTC m=+0.155058386 container init 59b26f662a8b6a564f15d82de83a220908376391469a27b252efacc928cabd3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_zhukovsky, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., name=rhceph, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, RELEASE=main)
Oct 13 15:08:57 standalone.localdomain podman[421870]: 2025-10-13 15:08:57.205477124 +0000 UTC m=+0.056894007 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 15:08:57 standalone.localdomain podman[421870]: 2025-10-13 15:08:57.309869056 +0000 UTC m=+0.161285899 container start 59b26f662a8b6a564f15d82de83a220908376391469a27b252efacc928cabd3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_zhukovsky, release=553, build-date=2025-09-24T08:57:55, name=rhceph, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, RELEASE=main, CEPH_POINT_RELEASE=, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7)
Oct 13 15:08:57 standalone.localdomain podman[421870]: 2025-10-13 15:08:57.310069064 +0000 UTC m=+0.161485977 container attach 59b26f662a8b6a564f15d82de83a220908376391469a27b252efacc928cabd3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_zhukovsky, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, version=7, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, name=rhceph, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, RELEASE=main)
Oct 13 15:08:57 standalone.localdomain python3.9[421975]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368136.8117115-782-128590076315447/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-819c1be7e9f9fbbc458d369e18eac8e706de02259e97975d5db1fb9fb98c4206-merged.mount: Deactivated successfully.
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]: [
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:     {
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:         "available": false,
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:         "ceph_device": false,
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:         "lsm_data": {},
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:         "lvs": [],
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:         "path": "/dev/sr0",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:         "rejected_reasons": [
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "Insufficient space (<5GB)",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "Has a FileSystem"
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:         ],
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:         "sys_api": {
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "actuators": null,
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "device_nodes": "sr0",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "human_readable_size": "482.00 KB",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "id_bus": "ata",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "model": "QEMU DVD-ROM",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "nr_requests": "2",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "partitions": {},
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "path": "/dev/sr0",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "removable": "1",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "rev": "2.5+",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "ro": "0",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "rotational": "1",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "sas_address": "",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "sas_device_handle": "",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "scheduler_mode": "mq-deadline",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "sectors": 0,
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "sectorsize": "2048",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "size": 493568.0,
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "support_discard": "0",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "type": "disk",
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:             "vendor": "QEMU"
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:         }
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]:     }
Oct 13 15:08:58 standalone.localdomain naughty_zhukovsky[421885]: ]
Oct 13 15:08:58 standalone.localdomain systemd[1]: libpod-59b26f662a8b6a564f15d82de83a220908376391469a27b252efacc928cabd3e.scope: Deactivated successfully.
Oct 13 15:08:58 standalone.localdomain podman[421870]: 2025-10-13 15:08:58.030755479 +0000 UTC m=+0.882172352 container died 59b26f662a8b6a564f15d82de83a220908376391469a27b252efacc928cabd3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_zhukovsky, GIT_CLEAN=True, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.expose-services=, release=553, build-date=2025-09-24T08:57:55, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., name=rhceph, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, distribution-scope=public)
Oct 13 15:08:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-132b0206bf6510891952121fe55c361fd546cba1c6868dcbdd6ff0cdc3413277-merged.mount: Deactivated successfully.
Oct 13 15:08:58 standalone.localdomain podman[423596]: 2025-10-13 15:08:58.103281128 +0000 UTC m=+0.062554686 container remove 59b26f662a8b6a564f15d82de83a220908376391469a27b252efacc928cabd3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_zhukovsky, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, version=7, GIT_CLEAN=True, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, name=rhceph, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main)
Oct 13 15:08:58 standalone.localdomain systemd[1]: libpod-conmon-59b26f662a8b6a564f15d82de83a220908376391469a27b252efacc928cabd3e.scope: Deactivated successfully.
Oct 13 15:08:58 standalone.localdomain sudo[421578]: pam_unix(sudo:session): session closed for user root
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:08:58 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 9ae994c8-2161-4f83-8995-e850f39c3451 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:08:58 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 9ae994c8-2161-4f83-8995-e850f39c3451 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:08:58 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 9ae994c8-2161-4f83-8995-e850f39c3451 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:08:58 standalone.localdomain sudo[423662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:08:58 standalone.localdomain sudo[423662]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:08:58 standalone.localdomain sudo[423662]: pam_unix(sudo:session): session closed for user root
Oct 13 15:08:58 standalone.localdomain python3.9[423661]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: pgmap v2832: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:08:58 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:08:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16079 DF PROTO=TCP SPT=40298 DPT=9101 SEQ=4082273573 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAEF6F60000000001030307) 
Oct 13 15:08:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2833: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:08:58 standalone.localdomain python3.9[423765]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368137.8258293-782-34203191299722/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:59 standalone.localdomain python3.9[423873]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:08:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:08:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:08:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:08:59 standalone.localdomain podman[423961]: 2025-10-13 15:08:59.800123294 +0000 UTC m=+0.060085316 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, 
com.redhat.component=openstack-swift-container-container, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, batch=17.1_20250721.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, vendor=Red Hat, Inc., container_name=swift_container_server)
Oct 13 15:08:59 standalone.localdomain podman[423962]: 2025-10-13 15:08:59.85849765 +0000 UTC m=+0.116056715 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, distribution-scope=public, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, container_name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64)
Oct 13 15:08:59 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:08:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:08:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:08:59 standalone.localdomain python3.9[423959]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368139.0158463-782-272615094484645/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:08:59 standalone.localdomain podman[423960]: 2025-10-13 15:08:59.910979608 +0000 UTC m=+0.171207341 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, release=1, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, name=rhosp17/openstack-swift-object, managed_by=tripleo_ansible, version=17.1.9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git)
Oct 13 15:09:00 standalone.localdomain podman[423961]: 2025-10-13 15:09:00.035741515 +0000 UTC m=+0.295703527 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-07-21T15:54:32, release=1, vcs-type=git, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:09:00 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:09:00 standalone.localdomain podman[423962]: 2025-10-13 15:09:00.064818294 +0000 UTC m=+0.322377359 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, release=1, com.redhat.component=openstack-swift-account-container, tcib_managed=true, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, 
container_name=swift_account_server, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T16:11:22, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4)
Oct 13 15:09:00 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:09:00 standalone.localdomain podman[423960]: 2025-10-13 15:09:00.103977572 +0000 UTC m=+0.364205355 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, release=1, description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, name=rhosp17/openstack-swift-object, vcs-type=git, distribution-scope=public, container_name=swift_object_server, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible)
Oct 13 15:09:00 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:09:00 standalone.localdomain python3.9[424151]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:09:00 standalone.localdomain ceph-mon[29756]: pgmap v2833: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:09:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2834: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:00 standalone.localdomain python3.9[424237]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368140.0029964-782-278584436784355/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:09:01 standalone.localdomain python3.9[424345]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:09:02 standalone.localdomain python3.9[424431]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368141.1724932-782-138135677257644/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:02 standalone.localdomain ceph-mon[29756]: pgmap v2834: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16080 DF PROTO=TCP SPT=40298 DPT=9101 SEQ=4082273573 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAF06B60000000001030307) 
Oct 13 15:09:02 standalone.localdomain python3.9[424539]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                                          ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                                           _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:09:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2835: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:03 standalone.localdomain python3.9[424650]: ansible-ansible.legacy.command Invoked with _raw_params=restorecon -R /run/libvirt _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:09:04 standalone.localdomain python3.9[424759]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 13 15:09:04 standalone.localdomain ceph-mon[29756]: pgmap v2835: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2836: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:05 standalone.localdomain python3.9[424867]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:09:05 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:09:05 standalone.localdomain systemd-rc-local-generator[424895]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:09:05 standalone.localdomain systemd-sysv-generator[424898]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:09:05 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:09:05 standalone.localdomain systemd[1]: Starting libvirt logging daemon socket...
Oct 13 15:09:05 standalone.localdomain systemd[1]: Listening on libvirt logging daemon socket.
Oct 13 15:09:05 standalone.localdomain systemd[1]: Starting libvirt logging daemon admin socket...
Oct 13 15:09:05 standalone.localdomain systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 13 15:09:05 standalone.localdomain systemd[1]: Starting libvirt logging daemon...
Oct 13 15:09:05 standalone.localdomain systemd[1]: Started libvirt logging daemon.
Oct 13 15:09:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:09:06 standalone.localdomain ceph-mon[29756]: pgmap v2836: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2837: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:09:06.924 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:09:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:09:06.925 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:09:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:09:06.926 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:09:07 standalone.localdomain python3.9[425017]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:09:07 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:09:07 standalone.localdomain systemd-rc-local-generator[425044]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:09:07 standalone.localdomain systemd-sysv-generator[425047]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:09:07 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:09:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45717 DF PROTO=TCP SPT=35926 DPT=9100 SEQ=3886504984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAF19640000000001030307) 
Oct 13 15:09:07 standalone.localdomain systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 13 15:09:07 standalone.localdomain systemd[1]: Starting libvirt nodedev daemon socket...
Oct 13 15:09:07 standalone.localdomain systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 13 15:09:07 standalone.localdomain systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 13 15:09:07 standalone.localdomain systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 13 15:09:07 standalone.localdomain systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 13 15:09:07 standalone.localdomain systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 13 15:09:07 standalone.localdomain systemd[1]: Starting libvirt nodedev daemon...
Oct 13 15:09:07 standalone.localdomain systemd[1]: Started libvirt nodedev daemon.
Oct 13 15:09:07 standalone.localdomain systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 13 15:09:07 standalone.localdomain setroubleshoot[425054]: Deleting alert 7501d5cd-9d8a-4973-bb60-e83685a385f7, it is allowed in current policy
Oct 13 15:09:08 standalone.localdomain systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@2.service.
Oct 13 15:09:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45718 DF PROTO=TCP SPT=35926 DPT=9100 SEQ=3886504984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAF1D770000000001030307) 
Oct 13 15:09:08 standalone.localdomain ceph-mon[29756]: pgmap v2837: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2838: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:09 standalone.localdomain python3.9[425198]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:09:09 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:09:09 standalone.localdomain systemd-rc-local-generator[425225]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:09:09 standalone.localdomain systemd-sysv-generator[425229]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:09:09 standalone.localdomain setroubleshoot[425054]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 24d4d7e5-4043-43bc-89c1-7465ea0b549b
Oct 13 15:09:09 standalone.localdomain setroubleshoot[425054]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                               
                                                               *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                               
                                                               If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                               Then turn on full auditing to get path information about the offending file and generate the error again.
                                                               Do
                                                               
                                                               Turn on full auditing
                                                               # auditctl -w /etc/shadow -p w
                                                               Try to recreate AVC. Then execute
                                                               # ausearch -m avc -ts recent
                                                               If you see PATH record check ownership/permissions on file, and fix it,
                                                               otherwise report as a bugzilla.
                                                               
                                                               *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                               
                                                               If you believe that virtlogd should have the dac_read_search capability by default.
                                                               Then you should report this as a bug.
                                                               You can generate a local policy module to allow this access.
                                                               Do
                                                               allow this access for now by executing:
                                                               # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                               # semodule -X 300 -i my-virtlogd.pp
                                                               
Oct 13 15:09:09 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:09:09 standalone.localdomain systemd[1]: Starting libvirt proxy daemon socket...
Oct 13 15:09:09 standalone.localdomain systemd[1]: Listening on libvirt proxy daemon socket.
Oct 13 15:09:09 standalone.localdomain systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 13 15:09:09 standalone.localdomain systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 13 15:09:09 standalone.localdomain systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 13 15:09:09 standalone.localdomain systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 13 15:09:09 standalone.localdomain systemd[1]: Starting libvirt proxy daemon...
Oct 13 15:09:09 standalone.localdomain systemd[1]: Started libvirt proxy daemon.
Oct 13 15:09:10 standalone.localdomain python3.9[425365]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:09:10 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:09:10 standalone.localdomain systemd-rc-local-generator[425388]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:09:10 standalone.localdomain systemd-sysv-generator[425395]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:09:10 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:09:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45719 DF PROTO=TCP SPT=35926 DPT=9100 SEQ=3886504984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAF25760000000001030307) 
Oct 13 15:09:10 standalone.localdomain ceph-mon[29756]: pgmap v2838: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:10 standalone.localdomain systemd[1]: Listening on libvirt locking daemon socket.
Oct 13 15:09:10 standalone.localdomain systemd[1]: Starting libvirt QEMU daemon socket...
Oct 13 15:09:10 standalone.localdomain systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 13 15:09:10 standalone.localdomain systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 13 15:09:10 standalone.localdomain systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 13 15:09:10 standalone.localdomain systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 13 15:09:10 standalone.localdomain systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 13 15:09:10 standalone.localdomain systemd[1]: Starting libvirt QEMU daemon...
Oct 13 15:09:10 standalone.localdomain systemd[1]: Started libvirt QEMU daemon.
Oct 13 15:09:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2839: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:09:11 standalone.localdomain python3.9[425545]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:09:11 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:09:11 standalone.localdomain systemd-rc-local-generator[425580]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:09:11 standalone.localdomain systemd-sysv-generator[425583]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:09:11 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:09:12 standalone.localdomain ceph-mon[29756]: pgmap v2839: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:09:12 standalone.localdomain systemd[1]: Starting libvirt secret daemon socket...
Oct 13 15:09:12 standalone.localdomain systemd[1]: Listening on libvirt secret daemon socket.
Oct 13 15:09:12 standalone.localdomain systemd[1]: Starting libvirt secret daemon admin socket...
Oct 13 15:09:12 standalone.localdomain systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 13 15:09:12 standalone.localdomain systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 13 15:09:12 standalone.localdomain systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 13 15:09:12 standalone.localdomain systemd[1]: Starting libvirt secret daemon...
Oct 13 15:09:12 standalone.localdomain systemd[1]: Started libvirt secret daemon.
Oct 13 15:09:12 standalone.localdomain podman[425599]: 2025-10-13 15:09:12.806533699 +0000 UTC m=+0.075947250 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009)
Oct 13 15:09:12 standalone.localdomain podman[425599]: 2025-10-13 15:09:12.83569701 +0000 UTC m=+0.105110591 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 15:09:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2840: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:12 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:09:13 standalone.localdomain python3.9[425743]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28667 DF PROTO=TCP SPT=51382 DPT=9105 SEQ=390882432 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAF33360000000001030307) 
Oct 13 15:09:14 standalone.localdomain python3.9[425851]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 13 15:09:14 standalone.localdomain ceph-mon[29756]: pgmap v2840: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:14 standalone.localdomain python3.9[425959]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                                          echo ceph
                                                          awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                                           _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:09:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2841: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 2.1 KiB/s rd, 2 op/s
Oct 13 15:09:15 standalone.localdomain python3.9[426071]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 13 15:09:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:09:15 standalone.localdomain podman[426128]: 2025-10-13 15:09:15.804718097 +0000 UTC m=+0.066029287 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:09:15 standalone.localdomain podman[426128]: 2025-10-13 15:09:15.831618928 +0000 UTC m=+0.092930118 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 15:09:15 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:09:16 standalone.localdomain python3.9[426204]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:09:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:09:16 standalone.localdomain python3.9[426290]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/root/.ansible/tmp/ansible-tmp-1760368155.6416602-1077-225122950849124/.source.xml follow=False _original_basename=secret.xml.j2 checksum=8d3583ab320dd65f250ea0ca7070c25a9c58bccb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:16 standalone.localdomain ceph-mon[29756]: pgmap v2841: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 2.1 KiB/s rd, 2 op/s
Oct 13 15:09:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2842: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 2.1 KiB/s rd, 2 op/s
Oct 13 15:09:17 standalone.localdomain python3.9[426398]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 627e7f45-65aa-56de-94df-66eaee84a56e
                                                          virsh secret-define --file /tmp/secret.xml
                                                           _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:09:17 standalone.localdomain polkitd[1036]: Registered Authentication Agent for unix-process:426400:760027 (system bus name :1.2667 [/usr/bin/pkttyagent --process 426400 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8)
Oct 13 15:09:17 standalone.localdomain polkitd[1036]: Unregistered Authentication Agent for unix-process:426400:760027 (system bus name :1.2667, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8) (disconnected from bus)
Oct 13 15:09:17 standalone.localdomain polkitd[1036]: Registered Authentication Agent for unix-process:426399:760026 (system bus name :1.2668 [/usr/bin/pkttyagent --process 426399 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8)
Oct 13 15:09:17 standalone.localdomain polkitd[1036]: Unregistered Authentication Agent for unix-process:426399:760026 (system bus name :1.2668, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8) (disconnected from bus)
Oct 13 15:09:17 standalone.localdomain python3.9[426518]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:18 standalone.localdomain ceph-mon[29756]: pgmap v2842: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 2.1 KiB/s rd, 2 op/s
Oct 13 15:09:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:09:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2074695650' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:09:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:09:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2074695650' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:09:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31004 DF PROTO=TCP SPT=36266 DPT=9882 SEQ=2593810466 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAF45760000000001030307) 
Oct 13 15:09:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2843: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 2.1 KiB/s rd, 2 op/s
Oct 13 15:09:19 standalone.localdomain systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 13 15:09:19 standalone.localdomain systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@2.service: Deactivated successfully.
Oct 13 15:09:19 standalone.localdomain systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@2.service: Consumed 1.180s CPU time.
Oct 13 15:09:19 standalone.localdomain polkitd[1036]: Registered Authentication Agent for unix-process:426736:760249 (system bus name :1.2669 [/usr/bin/pkttyagent --process 426736 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8)
Oct 13 15:09:19 standalone.localdomain polkitd[1036]: Unregistered Authentication Agent for unix-process:426736:760249 (system bus name :1.2669, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8) (disconnected from bus)
Oct 13 15:09:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2074695650' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:09:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2074695650' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:09:19 standalone.localdomain python3.9[426849]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28669 DF PROTO=TCP SPT=51382 DPT=9105 SEQ=390882432 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAF4AF60000000001030307) 
Oct 13 15:09:20 standalone.localdomain python3.9[426957]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:09:20 standalone.localdomain ceph-mon[29756]: pgmap v2843: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 2.1 KiB/s rd, 2 op/s
Oct 13 15:09:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2844: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 2.1 KiB/s rd, 2 op/s
Oct 13 15:09:20 standalone.localdomain python3.9[427043]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/root/.ansible/tmp/ansible-tmp-1760368160.0052357-1132-152832983337907/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:09:21 standalone.localdomain python3.9[427151]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:22 standalone.localdomain python3.9[427259]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:09:22 standalone.localdomain ceph-mon[29756]: pgmap v2844: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 2.1 KiB/s rd, 2 op/s
Oct 13 15:09:22 standalone.localdomain python3.9[427314]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2845: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 2.1 KiB/s rd, 2 op/s
Oct 13 15:09:23 standalone.localdomain python3.9[427422]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:09:23
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'volumes', '.mgr', 'manila_data', 'backups', 'images', 'manila_metadata']
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:09:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46650 DF PROTO=TCP SPT=34974 DPT=9102 SEQ=960764526 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAF587F0000000001030307) 
Oct 13 15:09:23 standalone.localdomain python3.9[427477]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.u47ip2be recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:09:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:09:24 standalone.localdomain python3.9[427585]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:09:24 standalone.localdomain python3.9[427641]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:24 standalone.localdomain ceph-mon[29756]: pgmap v2845: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 2.1 KiB/s rd, 2 op/s
Oct 13 15:09:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2846: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 2.1 KiB/s rd, 2 op/s
Oct 13 15:09:25 standalone.localdomain python3.9[427749]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:09:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37376 DF PROTO=TCP SPT=42110 DPT=9101 SEQ=1903934101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAF600E0000000001030307) 
Oct 13 15:09:25 standalone.localdomain python3[427858]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 13 15:09:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:09:26 standalone.localdomain ceph-mon[29756]: pgmap v2846: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 2.1 KiB/s rd, 2 op/s
Oct 13 15:09:26 standalone.localdomain python3.9[427966]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:09:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2847: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:27 standalone.localdomain python3.9[428021]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:27 standalone.localdomain python3.9[428129]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:09:28 standalone.localdomain python3.9[428184]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37378 DF PROTO=TCP SPT=42110 DPT=9101 SEQ=1903934101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAF6BF60000000001030307) 
Oct 13 15:09:28 standalone.localdomain ceph-mon[29756]: pgmap v2847: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2848: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:28 standalone.localdomain python3.9[428292]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:09:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:09:29 standalone.localdomain python3.9[428347]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:29 standalone.localdomain python3.9[428455]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:09:30 standalone.localdomain python3.9[428510]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:30 standalone.localdomain ceph-mon[29756]: pgmap v2848: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:09:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:09:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:09:30 standalone.localdomain podman[428568]: 2025-10-13 15:09:30.835103606 +0000 UTC m=+0.099211612 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, version=17.1.9, io.buildah.version=1.33.12, container_name=swift_account_server, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, 
build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 15:09:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2849: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:30 standalone.localdomain systemd[1]: tmp-crun.kxgm8y.mount: Deactivated successfully.
Oct 13 15:09:30 standalone.localdomain podman[428567]: 2025-10-13 15:09:30.857616849 +0000 UTC m=+0.122813649 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vcs-type=git, container_name=swift_container_server, io.buildah.version=1.33.12, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, name=rhosp17/openstack-swift-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, batch=17.1_20250721.1)
Oct 13 15:09:30 standalone.localdomain podman[428566]: 2025-10-13 15:09:30.916384392 +0000 UTC m=+0.182391455 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, container_name=swift_object_server, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public)
Oct 13 15:09:30 standalone.localdomain podman[428568]: 2025-10-13 15:09:30.993994308 +0000 UTC m=+0.258102294 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, vcs-type=git, container_name=swift_account_server, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, 
io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, distribution-scope=public, com.redhat.component=openstack-swift-account-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 15:09:31 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:09:31 standalone.localdomain podman[428567]: 2025-10-13 15:09:31.057460211 +0000 UTC m=+0.322657061 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, build-date=2025-07-21T15:54:32, name=rhosp17/openstack-swift-container, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=swift_container_server, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-swift-container-container, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:09:31 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:09:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:09:31 standalone.localdomain python3.9[428672]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:09:31 standalone.localdomain podman[428566]: 2025-10-13 15:09:31.153921101 +0000 UTC m=+0.419928254 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, vcs-type=git, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 
swift-object, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, batch=17.1_20250721.1, container_name=swift_object_server, architecture=x86_64, io.buildah.version=1.33.12)
Oct 13 15:09:31 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:09:31 standalone.localdomain python3.9[428784]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368170.5870662-1257-175864582104977/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:32 standalone.localdomain sshd[428893]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:09:32 standalone.localdomain python3.9[428892]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37379 DF PROTO=TCP SPT=42110 DPT=9101 SEQ=1903934101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAF7BB70000000001030307) 
Oct 13 15:09:32 standalone.localdomain ceph-mon[29756]: pgmap v2849: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:32 standalone.localdomain unix_chkpwd[428983]: password check failed for user (root)
Oct 13 15:09:32 standalone.localdomain sshd[428893]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 13 15:09:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2850: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:33 standalone.localdomain python3.9[429003]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:09:33 standalone.localdomain python3.9[429114]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                          include "/etc/nftables/edpm-chains.nft"
                                                          include "/etc/nftables/edpm-rules.nft"
                                                          include "/etc/nftables/edpm-jumps.nft"
                                                           path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:34 standalone.localdomain python3.9[429222]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:09:34 standalone.localdomain ceph-mon[29756]: pgmap v2850: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:34 standalone.localdomain sshd[428893]: Failed password for root from 193.46.255.7 port 10992 ssh2
Oct 13 15:09:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2851: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:35 standalone.localdomain python3.9[429331]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:09:35 standalone.localdomain ceph-mon[29756]: pgmap v2851: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:35 standalone.localdomain python3.9[429441]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:09:36 standalone.localdomain account-server[114555]: 172.20.0.100 - - [13/Oct/2025:15:09:36 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx6c2afc55b52543c7a375b-0068ed1630" "proxy-server 2" 0.0006 "-" 20 -
Oct 13 15:09:36 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx6c2afc55b52543c7a375b-0068ed1630)
Oct 13 15:09:36 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx6c2afc55b52543c7a375b-0068ed1630)
Oct 13 15:09:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:09:36 standalone.localdomain python3.9[429552]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:36 standalone.localdomain unix_chkpwd[429553]: password check failed for user (root)
Oct 13 15:09:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2852: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:37 standalone.localdomain python3.9[429661]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:09:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45037 DF PROTO=TCP SPT=50404 DPT=9100 SEQ=3486477015 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAF8E950000000001030307) 
Oct 13 15:09:37 standalone.localdomain python3.9[429747]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760368176.631498-1329-59980248907350/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:38 standalone.localdomain ceph-mon[29756]: pgmap v2852: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:38 standalone.localdomain python3.9[429855]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:09:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45038 DF PROTO=TCP SPT=50404 DPT=9100 SEQ=3486477015 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAF92B60000000001030307) 
Oct 13 15:09:38 standalone.localdomain sshd[428893]: Failed password for root from 193.46.255.7 port 10992 ssh2
Oct 13 15:09:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2853: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:38 standalone.localdomain python3.9[429941]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760368177.8387933-1344-190468179935820/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:39 standalone.localdomain python3.9[430049]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:09:40 standalone.localdomain python3.9[430135]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760368179.1131787-1359-11368357231703/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:09:40 standalone.localdomain ceph-mon[29756]: pgmap v2853: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:40 standalone.localdomain unix_chkpwd[430153]: password check failed for user (root)
Oct 13 15:09:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45039 DF PROTO=TCP SPT=50404 DPT=9100 SEQ=3486477015 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAF9AB70000000001030307) 
Oct 13 15:09:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2854: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:40 standalone.localdomain python3.9[430244]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:09:40 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:09:41 standalone.localdomain systemd-sysv-generator[430276]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:09:41 standalone.localdomain systemd-rc-local-generator[430272]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:09:41 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:09:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:09:41 standalone.localdomain systemd[1]: Reached target edpm_libvirt.target.
Oct 13 15:09:42 standalone.localdomain python3.9[430392]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 13 15:09:42 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:09:42 standalone.localdomain ceph-mon[29756]: pgmap v2854: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:42 standalone.localdomain systemd-rc-local-generator[430422]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:09:42 standalone.localdomain systemd-sysv-generator[430425]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:09:42 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:09:42 standalone.localdomain sshd[428893]: Failed password for root from 193.46.255.7 port 10992 ssh2
Oct 13 15:09:42 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:09:42 standalone.localdomain systemd-rc-local-generator[430456]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:09:42 standalone.localdomain systemd-sysv-generator[430459]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:09:42 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:09:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2855: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:43 standalone.localdomain sshd[379042]: pam_unix(sshd:session): session closed for user root
Oct 13 15:09:43 standalone.localdomain systemd[1]: session-292.scope: Deactivated successfully.
Oct 13 15:09:43 standalone.localdomain systemd[1]: session-292.scope: Consumed 3min 34.155s CPU time.
Oct 13 15:09:43 standalone.localdomain systemd-logind[45629]: Session 292 logged out. Waiting for processes to exit.
Oct 13 15:09:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:09:43 standalone.localdomain systemd-logind[45629]: Removed session 292.
Oct 13 15:09:43 standalone.localdomain podman[430485]: 2025-10-13 15:09:43.372614135 +0000 UTC m=+0.097326062 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 15:09:43 standalone.localdomain podman[430485]: 2025-10-13 15:09:43.380032716 +0000 UTC m=+0.104744713 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:09:43 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:09:43 standalone.localdomain sshd[428893]: Received disconnect from 193.46.255.7 port 10992:11:  [preauth]
Oct 13 15:09:43 standalone.localdomain sshd[428893]: Disconnected from authenticating user root 193.46.255.7 port 10992 [preauth]
Oct 13 15:09:43 standalone.localdomain sshd[428893]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 13 15:09:44 standalone.localdomain sshd[430504]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:09:44 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48361 DF PROTO=TCP SPT=42920 DPT=9105 SEQ=2668609999 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAFA8760000000001030307) 
Oct 13 15:09:44 standalone.localdomain ceph-mon[29756]: pgmap v2855: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:44 standalone.localdomain unix_chkpwd[430506]: password check failed for user (root)
Oct 13 15:09:44 standalone.localdomain sshd[430504]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 13 15:09:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2856: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:09:46 standalone.localdomain ceph-mon[29756]: pgmap v2856: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:46 standalone.localdomain sshd[430504]: Failed password for root from 193.46.255.7 port 58142 ssh2
Oct 13 15:09:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:09:46 standalone.localdomain systemd[1]: tmp-crun.Xp6WYG.mount: Deactivated successfully.
Oct 13 15:09:46 standalone.localdomain podman[430507]: 2025-10-13 15:09:46.800732754 +0000 UTC m=+0.063004934 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:09:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2857: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:46 standalone.localdomain podman[430507]: 2025-10-13 15:09:46.852928671 +0000 UTC m=+0.115200861 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:09:46 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:09:48 standalone.localdomain ceph-mon[29756]: pgmap v2857: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:48 standalone.localdomain unix_chkpwd[430532]: password check failed for user (root)
Oct 13 15:09:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42889 DF PROTO=TCP SPT=33030 DPT=9882 SEQ=3625130162 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAFBAB60000000001030307) 
Oct 13 15:09:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2858: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:49 standalone.localdomain sshd[430533]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:09:49 standalone.localdomain sshd[430533]: Accepted publickey for root from 192.168.122.30 port 50196 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:09:49 standalone.localdomain systemd-logind[45629]: New session 293 of user root.
Oct 13 15:09:49 standalone.localdomain systemd[1]: Started Session 293 of User root.
Oct 13 15:09:49 standalone.localdomain sshd[430533]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:09:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48363 DF PROTO=TCP SPT=42920 DPT=9105 SEQ=2668609999 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAFC0360000000001030307) 
Oct 13 15:09:50 standalone.localdomain ceph-mon[29756]: pgmap v2858: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:50 standalone.localdomain python3.9[430644]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:09:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2859: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:50 standalone.localdomain sshd[430504]: Failed password for root from 193.46.255.7 port 58142 ssh2
Oct 13 15:09:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:09:51 standalone.localdomain python3.9[430756]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:09:52 standalone.localdomain unix_chkpwd[430845]: password check failed for user (root)
Oct 13 15:09:52 standalone.localdomain ceph-mon[29756]: pgmap v2859: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:52 standalone.localdomain python3.9[430865]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:09:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2860: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:53 standalone.localdomain python3.9[430973]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:09:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:09:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:09:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:09:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:09:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:09:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:09:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22978 DF PROTO=TCP SPT=55598 DPT=9102 SEQ=837832570 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAFCDAF0000000001030307) 
Oct 13 15:09:53 standalone.localdomain python3.9[431081]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 15:09:54 standalone.localdomain sshd[430504]: Failed password for root from 193.46.255.7 port 58142 ssh2
Oct 13 15:09:54 standalone.localdomain ceph-mon[29756]: pgmap v2860: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:54 standalone.localdomain python3.9[431189]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:09:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2861: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:55 standalone.localdomain python3.9[431297]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:09:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63444 DF PROTO=TCP SPT=34744 DPT=9101 SEQ=520434711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAFD53E0000000001030307) 
Oct 13 15:09:55 standalone.localdomain sshd[430504]: Received disconnect from 193.46.255.7 port 58142:11:  [preauth]
Oct 13 15:09:55 standalone.localdomain sshd[430504]: Disconnected from authenticating user root 193.46.255.7 port 58142 [preauth]
Oct 13 15:09:55 standalone.localdomain sshd[430504]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 13 15:09:55 standalone.localdomain sshd[431408]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:09:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:09:56 standalone.localdomain python3.9[431407]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:09:56 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:09:56 standalone.localdomain ceph-mon[29756]: pgmap v2861: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:56 standalone.localdomain systemd-sysv-generator[431441]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:09:56 standalone.localdomain systemd-rc-local-generator[431438]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:09:56 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:09:56 standalone.localdomain unix_chkpwd[431480]: password check failed for user (root)
Oct 13 15:09:56 standalone.localdomain sshd[431408]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.7  user=root
Oct 13 15:09:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2862: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:57 standalone.localdomain python3.9[431555]: ansible-ansible.builtin.service_facts Invoked
Oct 13 15:09:57 standalone.localdomain network[431572]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 15:09:57 standalone.localdomain network[431573]: 'network-scripts' will be removed from distribution in near future.
Oct 13 15:09:57 standalone.localdomain network[431574]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 15:09:58 standalone.localdomain ceph-mon[29756]: pgmap v2862: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:58 standalone.localdomain sudo[431589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:09:58 standalone.localdomain sudo[431589]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:09:58 standalone.localdomain sudo[431589]: pam_unix(sudo:session): session closed for user root
Oct 13 15:09:58 standalone.localdomain sudo[431610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:09:58 standalone.localdomain sudo[431610]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:09:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63446 DF PROTO=TCP SPT=34744 DPT=9101 SEQ=520434711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAFE1360000000001030307) 
Oct 13 15:09:58 standalone.localdomain sshd[431408]: Failed password for root from 193.46.255.7 port 21318 ssh2
Oct 13 15:09:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2863: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:09:58 standalone.localdomain sudo[431610]: pam_unix(sudo:session): session closed for user root
Oct 13 15:09:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:09:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:09:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:09:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:09:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:09:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:09:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:09:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:09:58 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev c4cb0ad9-579e-4144-928e-4a7bc025c136 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:09:58 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev c4cb0ad9-579e-4144-928e-4a7bc025c136 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:09:58 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event c4cb0ad9-579e-4144-928e-4a7bc025c136 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:09:59 standalone.localdomain sudo[431673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:09:59 standalone.localdomain sudo[431673]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:09:59 standalone.localdomain sudo[431673]: pam_unix(sudo:session): session closed for user root
Oct 13 15:09:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:09:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:09:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:09:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:09:59 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:09:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:09:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:10:00 standalone.localdomain ceph-mon[29756]: pgmap v2863: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:10:00 standalone.localdomain sshd[431408]: Received disconnect from 193.46.255.7 port 21318:11:  [preauth]
Oct 13 15:10:00 standalone.localdomain sshd[431408]: Disconnected from authenticating user root 193.46.255.7 port 21318 [preauth]
Oct 13 15:10:00 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:10:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2864: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:10:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:10:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:10:01 standalone.localdomain podman[431734]: 2025-10-13 15:10:01.171663082 +0000 UTC m=+0.079390312 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, version=17.1.9, com.redhat.component=openstack-swift-container-container, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, name=rhosp17/openstack-swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, architecture=x86_64)
Oct 13 15:10:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:10:01 standalone.localdomain podman[431720]: 2025-10-13 15:10:01.148394574 +0000 UTC m=+0.101602114 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, container_name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, 
io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-account-container, managed_by=tripleo_ansible, build-date=2025-07-21T16:11:22)
Oct 13 15:10:01 standalone.localdomain podman[431771]: 2025-10-13 15:10:01.264665128 +0000 UTC m=+0.068551695 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.component=openstack-swift-object-container, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=swift_object_server, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, release=1, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 15:10:01 standalone.localdomain podman[431734]: 2025-10-13 15:10:01.354785311 +0000 UTC m=+0.262512571 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, summary=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.openshift.expose-services=, release=1, container_name=swift_container_server, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-swift-container-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4)
Oct 13 15:10:01 standalone.localdomain podman[431720]: 2025-10-13 15:10:01.356778425 +0000 UTC m=+0.309985985 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, com.redhat.component=openstack-swift-account-container, distribution-scope=public, name=rhosp17/openstack-swift-account, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, 
summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, release=1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=swift_account_server)
Oct 13 15:10:01 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:10:01 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:10:01 standalone.localdomain podman[431771]: 2025-10-13 15:10:01.510882533 +0000 UTC m=+0.314769100 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, container_name=swift_object_server, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, architecture=x86_64, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:10:01 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:10:02 standalone.localdomain ceph-mon[29756]: pgmap v2864: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63447 DF PROTO=TCP SPT=34744 DPT=9101 SEQ=520434711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CAFF0F60000000001030307) 
Oct 13 15:10:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2865: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:04 standalone.localdomain python3.9[431980]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:10:04 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:10:04 standalone.localdomain ceph-mon[29756]: pgmap v2865: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:04 standalone.localdomain systemd-rc-local-generator[432010]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:10:04 standalone.localdomain systemd-sysv-generator[432013]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:10:04 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:10:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2866: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:05 standalone.localdomain python3.9[432126]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:10:06 standalone.localdomain python3.9[432236]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:10:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:10:06 standalone.localdomain ceph-mon[29756]: pgmap v2866: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2867: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:10:06.925 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:10:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:10:06.928 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:10:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:10:06.929 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:10:06 standalone.localdomain python3.9[432346]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24948 DF PROTO=TCP SPT=54978 DPT=9100 SEQ=903811198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB003C40000000001030307) 
Oct 13 15:10:07 standalone.localdomain python3.9[432454]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:10:08 standalone.localdomain python3.9[432562]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:10:08 standalone.localdomain ceph-mon[29756]: pgmap v2867: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24949 DF PROTO=TCP SPT=54978 DPT=9100 SEQ=903811198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB007B60000000001030307) 
Oct 13 15:10:08 standalone.localdomain python3.9[432617]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:10:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2868: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:09 standalone.localdomain python3.9[432725]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:10:09 standalone.localdomain python3.9[432780]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:10:10 standalone.localdomain python3.9[432888]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:10 standalone.localdomain ceph-mon[29756]: pgmap v2868: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24950 DF PROTO=TCP SPT=54978 DPT=9100 SEQ=903811198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB00FD50000000001030307) 
Oct 13 15:10:10 standalone.localdomain python3.9[432996]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:10:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2869: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:10:11 standalone.localdomain python3.9[433051]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:11 standalone.localdomain python3.9[433159]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:10:12 standalone.localdomain ceph-mon[29756]: pgmap v2869: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:12 standalone.localdomain python3.9[433214]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2870: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:13 standalone.localdomain python3.9[433322]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:10:13 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:10:13 standalone.localdomain systemd-sysv-generator[433352]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:10:13 standalone.localdomain systemd-rc-local-generator[433348]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:10:13 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:10:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:10:13 standalone.localdomain podman[433359]: 2025-10-13 15:10:13.622130868 +0000 UTC m=+0.090211016 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:10:13 standalone.localdomain podman[433359]: 2025-10-13 15:10:13.651353734 +0000 UTC m=+0.119433942 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:10:13 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:10:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29007 DF PROTO=TCP SPT=51346 DPT=9105 SEQ=668016091 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB01D760000000001030307) 
Oct 13 15:10:14 standalone.localdomain python3.9[433484]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:10:14 standalone.localdomain ceph-mon[29756]: pgmap v2870: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:14 standalone.localdomain python3.9[433539]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2871: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:15 standalone.localdomain python3.9[433647]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:10:15 standalone.localdomain python3.9[433702]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:10:16 standalone.localdomain ceph-mon[29756]: pgmap v2871: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:16 standalone.localdomain python3.9[433810]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:10:16 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:10:16 standalone.localdomain systemd-rc-local-generator[433834]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:10:16 standalone.localdomain systemd-sysv-generator[433839]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:10:16 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:10:16 standalone.localdomain systemd[1]: Starting Create netns directory...
Oct 13 15:10:16 standalone.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 15:10:16 standalone.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 15:10:16 standalone.localdomain systemd[1]: Finished Create netns directory.
Oct 13 15:10:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2872: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:17 standalone.localdomain python3.9[433960]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:10:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:10:17 standalone.localdomain podman[433978]: 2025-10-13 15:10:17.807728192 +0000 UTC m=+0.073439201 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:10:17 standalone.localdomain podman[433978]: 2025-10-13 15:10:17.869947497 +0000 UTC m=+0.135658476 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 15:10:17 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:10:18 standalone.localdomain python3.9[434093]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:10:18 standalone.localdomain ceph-mon[29756]: pgmap v2872: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:10:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1883174064' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:10:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:10:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1883174064' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:10:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36313 DF PROTO=TCP SPT=39634 DPT=9882 SEQ=3374819829 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB02FF60000000001030307) 
Oct 13 15:10:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2873: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:18 standalone.localdomain python3.9[434179]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=root mode=0700 owner=root setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368217.7965524-245-21408180167291/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:10:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1883174064' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:10:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1883174064' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:10:19 standalone.localdomain python3.9[434287]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:10:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29009 DF PROTO=TCP SPT=51346 DPT=9105 SEQ=668016091 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB035360000000001030307) 
Oct 13 15:10:20 standalone.localdomain ceph-mon[29756]: pgmap v2873: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:20 standalone.localdomain python3.9[434395]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:10:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2874: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:20 standalone.localdomain python3.9[434483]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/root/.ansible/tmp/ansible-tmp-1760368220.000138-270-16260958590081/.source.json _original_basename=.v83jpebh follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:10:21 standalone.localdomain python3.9[434591]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:22 standalone.localdomain ceph-mon[29756]: pgmap v2874: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2875: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:10:23
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', '.mgr', 'backups', 'volumes', 'manila_metadata', 'images', 'manila_data']
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:10:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17954 DF PROTO=TCP SPT=51058 DPT=9102 SEQ=15660642 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB042DF0000000001030307) 
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:10:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:10:23 standalone.localdomain python3.9[434893]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 13 15:10:24 standalone.localdomain ceph-mon[29756]: pgmap v2875: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:24 standalone.localdomain python3.9[435001]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:10:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2876: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37445 DF PROTO=TCP SPT=50296 DPT=9101 SEQ=1125665062 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB04A6E0000000001030307) 
Oct 13 15:10:25 standalone.localdomain python3.9[435109]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 13 15:10:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:10:26 standalone.localdomain ceph-mon[29756]: pgmap v2876: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2877: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:28 standalone.localdomain ceph-mon[29756]: pgmap v2877: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37447 DF PROTO=TCP SPT=50296 DPT=9101 SEQ=1125665062 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB056760000000001030307) 
Oct 13 15:10:28 standalone.localdomain sshd[435155]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2878: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:10:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:10:30 standalone.localdomain ceph-mon[29756]: pgmap v2878: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2879: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:30 standalone.localdomain python3[435247]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:10:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:10:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:10:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:10:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:10:31 standalone.localdomain podman[435278]: 2025-10-13 15:10:31.825419444 +0000 UTC m=+0.088666670 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_id=tripleo_step4, version=17.1.9, release=1, build-date=2025-07-21T16:11:22, name=rhosp17/openstack-swift-account, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:10:31 standalone.localdomain podman[435277]: 2025-10-13 15:10:31.880838115 +0000 UTC m=+0.146874407 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20250721.1, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, container_name=swift_container_server, com.redhat.component=openstack-swift-container-container)
Oct 13 15:10:31 standalone.localdomain podman[435276]: 2025-10-13 15:10:31.93375645 +0000 UTC m=+0.200432252 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, container_name=swift_object_server, 
maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, release=1, batch=17.1_20250721.1)
Oct 13 15:10:32 standalone.localdomain podman[435278]: 2025-10-13 15:10:32.085891297 +0000 UTC m=+0.349138453 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, com.redhat.component=openstack-swift-account-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, build-date=2025-07-21T16:11:22, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, version=17.1.9, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 
swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, container_name=swift_account_server, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:10:32 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:10:32 standalone.localdomain podman[435277]: 2025-10-13 15:10:32.113183801 +0000 UTC m=+0.379220113 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container)
Oct 13 15:10:32 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:10:32 standalone.localdomain podman[435276]: 2025-10-13 15:10:32.204327125 +0000 UTC m=+0.471002927 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., container_name=swift_object_server, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-object, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:10:32 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:10:32 standalone.localdomain ceph-mon[29756]: pgmap v2879: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37448 DF PROTO=TCP SPT=50296 DPT=9101 SEQ=1125665062 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB066370000000001030307) 
Oct 13 15:10:32 standalone.localdomain podman[435262]: 2025-10-13 15:10:31.047900112 +0000 UTC m=+0.050367566 image pull  quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 13 15:10:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2880: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:32 standalone.localdomain python3[435247]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                             {
                                                                  "Id": "7acf5363984cc8f102650810da36ae6f915a365c30bf42518548c6b195c5c57c",
                                                                  "Digest": "sha256:5f2ae19ef15dc3d54ad5e1703eef8fefe0d5435f620e138c261e2c61f335cf42",
                                                                  "RepoTags": [
                                                                       "quay.io/podified-antelope-centos9/openstack-iscsid:current-podified"
                                                                  ],
                                                                  "RepoDigests": [
                                                                       "quay.io/podified-antelope-centos9/openstack-iscsid@sha256:5f2ae19ef15dc3d54ad5e1703eef8fefe0d5435f620e138c261e2c61f335cf42"
                                                                  ],
                                                                  "Parent": "",
                                                                  "Comment": "",
                                                                  "Created": "2025-10-13T06:15:23.997982946Z",
                                                                  "Config": {
                                                                       "User": "root",
                                                                       "Env": [
                                                                            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                            "LANG=en_US.UTF-8",
                                                                            "TZ=UTC",
                                                                            "container=oci"
                                                                       ],
                                                                       "Entrypoint": [
                                                                            "dumb-init",
                                                                            "--single-child",
                                                                            "--"
                                                                       ],
                                                                       "Cmd": [
                                                                            "kolla_start"
                                                                       ],
                                                                       "Labels": {
                                                                            "io.buildah.version": "1.41.3",
                                                                            "maintainer": "OpenStack Kubernetes Operator team",
                                                                            "org.label-schema.build-date": "20251009",
                                                                            "org.label-schema.license": "GPLv2",
                                                                            "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                            "org.label-schema.schema-version": "1.0",
                                                                            "org.label-schema.vendor": "CentOS",
                                                                            "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                            "tcib_managed": "true"
                                                                       },
                                                                       "StopSignal": "SIGTERM"
                                                                  },
                                                                  "Version": "",
                                                                  "Author": "",
                                                                  "Architecture": "amd64",
                                                                  "Os": "linux",
                                                                  "Size": 403860617,
                                                                  "VirtualSize": 403860617,
                                                                  "GraphDriver": {
                                                                       "Name": "overlay",
                                                                       "Data": {
                                                                            "LowerDir": "/var/lib/containers/storage/overlay/00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4/diff:/var/lib/containers/storage/overlay/00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",
                                                                            "UpperDir": "/var/lib/containers/storage/overlay/91a4eecb381682fec00b796b5c8dfad5758b6624b0e7b9dfc84b2667c778f31e/diff",
                                                                            "WorkDir": "/var/lib/containers/storage/overlay/91a4eecb381682fec00b796b5c8dfad5758b6624b0e7b9dfc84b2667c778f31e/work"
                                                                       }
                                                                  },
                                                                  "RootFS": {
                                                                       "Type": "layers",
                                                                       "Layers": [
                                                                            "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",
                                                                            "sha256:b783c4d6e5ad469a2da52d1f9f5642b4a99005a13a4f8f9855f9f6ed3dcfeddf",
                                                                            "sha256:4dd9b6336279e9bb933efd1972a9f033bb03b41daa8f211234bd85673a3f81ca",
                                                                            "sha256:0ab1c826063c05055623c37c80ed0ce3e93bc2506e1bd27474790c822896beee"
                                                                       ]
                                                                  },
                                                                  "Labels": {
                                                                       "io.buildah.version": "1.41.3",
                                                                       "maintainer": "OpenStack Kubernetes Operator team",
                                                                       "org.label-schema.build-date": "20251009",
                                                                       "org.label-schema.license": "GPLv2",
                                                                       "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                       "org.label-schema.schema-version": "1.0",
                                                                       "org.label-schema.vendor": "CentOS",
                                                                       "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                       "tcib_managed": "true"
                                                                  },
                                                                  "Annotations": {},
                                                                  "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                  "User": "root",
                                                                  "History": [
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.867908726Z",
                                                                            "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.868015697Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251009\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:07.890794359Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.75496777Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                            "comment": "FROM quay.io/centos/centos:stream9",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.754993311Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755013771Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755032692Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755082953Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755102984Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:47.201574917Z",
                                                                            "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:23.544915378Z",
                                                                            "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.119117694Z",
                                                                            "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.552286083Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.962220801Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:28.715322167Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.14263152Z",
                                                                            "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.560856504Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.004904301Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.42712225Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.823321864Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.238615281Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.634039923Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.0188837Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.395414523Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.805928482Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.197074343Z",
                                                                            "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.605614349Z",
                                                                            "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.372128638Z",
                                                                            "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.755904629Z",
                                                                            "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:38.104256162Z",
                                                                            "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:39.755515954Z",
                                                                            "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870322882Z",
                                                                            "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870431084Z",
                                                                            "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870453635Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870467555Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:43.347797515Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:12:52.112593379Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:13:34.238039873Z",
                                                                            "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:13:37.655425275Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:14:28.849756008Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:15:14.186138693Z",
                                                                            "created_by": "/bin/sh -c dnf -y install iscsi-initiator-utils python3-rtslib targetcli socat && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:15:18.201887791Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/iscsid/extend_start.sh /usr/local/bin/kolla_extend_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:15:20.468735099Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_extend_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:15:23.966935506Z",
                                                                            "created_by": "/bin/sh -c rm -f /etc/iscsi/initiatorname.iscsi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:15:26.178206367Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       }
                                                                  ],
                                                                  "NamesHistory": [
                                                                       "quay.io/podified-antelope-centos9/openstack-iscsid:current-podified"
                                                                  ]
                                                             }
                                                        ]
                                                        : quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 13 15:10:33 standalone.localdomain podman[435410]: 2025-10-13 15:10:32.999873503 +0000 UTC m=+0.083147779 container remove 0d1f2273b6bdeacd92350b31e827a6f7f287024988705fd55ab294a1ec052217 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20250721.1, name=rhosp17/openstack-iscsid, container_name=iscsid, build-date=2025-07-21T13:27:15, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, vcs-ref=92ba14eeb90bb45ac0dcf02b7ce60e274a5ccbb2, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-iscsid/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git)
Oct 13 15:10:33 standalone.localdomain python3[435247]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force iscsid
Oct 13 15:10:33 standalone.localdomain podman[435424]: 
Oct 13 15:10:33 standalone.localdomain podman[435424]: 2025-10-13 15:10:33.09758236 +0000 UTC m=+0.082943943 container create 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=iscsid, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:10:33 standalone.localdomain podman[435424]: 2025-10-13 15:10:33.052586211 +0000 UTC m=+0.037947864 image pull  quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 13 15:10:33 standalone.localdomain python3[435247]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 13 15:10:33 standalone.localdomain python3.9[435570]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:10:33 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 15:10:33 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 15:10:34 standalone.localdomain ceph-mon[29756]: pgmap v2880: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:34 standalone.localdomain python3.9[435680]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2881: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:34 standalone.localdomain python3.9[435733]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:10:35 standalone.localdomain python3.9[435840]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760368234.9377275-358-25371108913284/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:36 standalone.localdomain python3.9[435893]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:10:36 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:10:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:10:36 standalone.localdomain systemd-sysv-generator[435918]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:10:36 standalone.localdomain systemd-rc-local-generator[435914]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:10:36 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:10:36 standalone.localdomain ceph-mon[29756]: pgmap v2881: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2882: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:37 standalone.localdomain python3.9[435982]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:10:37 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:10:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47731 DF PROTO=TCP SPT=34234 DPT=9100 SEQ=1301865366 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB078F50000000001030307) 
Oct 13 15:10:37 standalone.localdomain systemd-sysv-generator[436015]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:10:37 standalone.localdomain systemd-rc-local-generator[436010]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:10:37 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:10:37 standalone.localdomain systemd[1]: Starting iscsid container...
Oct 13 15:10:37 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:10:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baba114622faabd7df4acfc2ea05e1fb69108349bd698d1d0ee856afc8245d96/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 15:10:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baba114622faabd7df4acfc2ea05e1fb69108349bd698d1d0ee856afc8245d96/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 13 15:10:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baba114622faabd7df4acfc2ea05e1fb69108349bd698d1d0ee856afc8245d96/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 15:10:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:10:37 standalone.localdomain podman[436023]: 2025-10-13 15:10:37.81717941 +0000 UTC m=+0.125216178 container init 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: + sudo -E kolla_set_configs
Oct 13 15:10:37 standalone.localdomain sudo[436042]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 15:10:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:10:37 standalone.localdomain podman[436023]: 2025-10-13 15:10:37.853622065 +0000 UTC m=+0.161658843 container start 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:10:37 standalone.localdomain podman[436023]: iscsid
Oct 13 15:10:37 standalone.localdomain systemd[1]: Started iscsid container.
Oct 13 15:10:37 standalone.localdomain systemd[1]: Started Session c20 of User root.
Oct 13 15:10:37 standalone.localdomain sudo[436042]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: INFO:__main__:Validating config file
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: INFO:__main__:Writing out command to execute
Oct 13 15:10:37 standalone.localdomain sudo[436042]: pam_unix(sudo:session): session closed for user root
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: ++ cat /run_command
Oct 13 15:10:37 standalone.localdomain systemd[1]: session-c20.scope: Deactivated successfully.
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: + CMD='/usr/sbin/iscsid -f'
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: + ARGS=
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: + sudo kolla_copy_cacerts
Oct 13 15:10:37 standalone.localdomain sudo[436072]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 13 15:10:37 standalone.localdomain podman[436044]: 2025-10-13 15:10:37.954961605 +0000 UTC m=+0.097593815 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:10:37 standalone.localdomain systemd[1]: Started Session c21 of User root.
Oct 13 15:10:37 standalone.localdomain sudo[436072]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:10:37 standalone.localdomain sudo[436072]: pam_unix(sudo:session): session closed for user root
Oct 13 15:10:37 standalone.localdomain systemd[1]: session-c21.scope: Deactivated successfully.
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: + [[ ! -n '' ]]
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: + . kolla_extend_start
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: Running command: '/usr/sbin/iscsid -f'
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: + umask 0022
Oct 13 15:10:37 standalone.localdomain iscsid[436036]: + exec /usr/sbin/iscsid -f
Oct 13 15:10:38 standalone.localdomain podman[436044]: 2025-10-13 15:10:38.00500569 +0000 UTC m=+0.147637850 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:10:38 standalone.localdomain podman[436044]: unhealthy
Oct 13 15:10:38 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:10:38 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Failed with result 'exit-code'.
Oct 13 15:10:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47732 DF PROTO=TCP SPT=34234 DPT=9100 SEQ=1301865366 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB07CF60000000001030307) 
Oct 13 15:10:38 standalone.localdomain ceph-mon[29756]: pgmap v2882: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:38 standalone.localdomain python3.9[436176]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:10:38 standalone.localdomain sshd[435155]: Connection closed by 103.29.69.96 port 34864 [preauth]
Oct 13 15:10:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2883: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:39 standalone.localdomain python3.9[436284]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:40 standalone.localdomain python3.9[436392]: ansible-ansible.builtin.service_facts Invoked
Oct 13 15:10:40 standalone.localdomain network[436409]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 15:10:40 standalone.localdomain network[436410]: 'network-scripts' will be removed from distribution in near future.
Oct 13 15:10:40 standalone.localdomain network[436411]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 15:10:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47733 DF PROTO=TCP SPT=34234 DPT=9100 SEQ=1301865366 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB084F70000000001030307) 
Oct 13 15:10:40 standalone.localdomain ceph-mon[29756]: pgmap v2883: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2884: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:10:42 standalone.localdomain ceph-mon[29756]: pgmap v2884: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:42 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:10:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2885: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:10:43 standalone.localdomain podman[436517]: 2025-10-13 15:10:43.757006652 +0000 UTC m=+0.062679486 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:10:43 standalone.localdomain podman[436517]: 2025-10-13 15:10:43.765749512 +0000 UTC m=+0.071422336 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:10:43 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:10:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7619 DF PROTO=TCP SPT=58568 DPT=9105 SEQ=4286936657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB092B60000000001030307) 
Oct 13 15:10:44 standalone.localdomain ceph-mon[29756]: pgmap v2885: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2886: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:45 standalone.localdomain python3.9[436668]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 15:10:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:10:46 standalone.localdomain ceph-mon[29756]: pgmap v2886: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:46 standalone.localdomain python3.9[436776]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 13 15:10:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2887: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:47 standalone.localdomain python3.9[436888]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:10:47 standalone.localdomain python3.9[436974]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760368246.9451523-432-91278230702836/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:47 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 15:10:48 standalone.localdomain object-server[436986]: Object update sweep starting on /srv/node/d1 (pid: 25)
Oct 13 15:10:48 standalone.localdomain object-server[436986]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 25)
Oct 13 15:10:48 standalone.localdomain object-server[436986]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 15:10:48 standalone.localdomain object-server[114601]: Object update sweep completed: 0.09s
Oct 13 15:10:48 standalone.localdomain ceph-mon[29756]: pgmap v2887: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:48 standalone.localdomain python3.9[437083]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65337 DF PROTO=TCP SPT=54576 DPT=9882 SEQ=62493566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB0A4F70000000001030307) 
Oct 13 15:10:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:10:48 standalone.localdomain podman[437098]: 2025-10-13 15:10:48.802232729 +0000 UTC m=+0.069175851 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:10:48 standalone.localdomain podman[437098]: 2025-10-13 15:10:48.84097643 +0000 UTC m=+0.107919542 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:10:48 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:10:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2888: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:49 standalone.localdomain python3.9[437216]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:10:49 standalone.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 15:10:49 standalone.localdomain systemd[1]: Stopped Load Kernel Modules.
Oct 13 15:10:49 standalone.localdomain systemd[1]: Stopping Load Kernel Modules...
Oct 13 15:10:49 standalone.localdomain systemd[1]: Starting Load Kernel Modules...
Oct 13 15:10:49 standalone.localdomain systemd-modules-load[437220]: Module 'msr' is built in
Oct 13 15:10:49 standalone.localdomain systemd[1]: Finished Load Kernel Modules.
Oct 13 15:10:49 standalone.localdomain sshd[437293]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:10:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7621 DF PROTO=TCP SPT=58568 DPT=9105 SEQ=4286936657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB0AA760000000001030307) 
Oct 13 15:10:50 standalone.localdomain python3.9[437330]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:10:50 standalone.localdomain sshd[437293]: Invalid user  from 66.240.192.82 port 53061
Oct 13 15:10:50 standalone.localdomain sshd[437293]: Connection closed by invalid user  66.240.192.82 port 53061 [preauth]
Oct 13 15:10:50 standalone.localdomain ceph-mon[29756]: pgmap v2888: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2889: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:50 standalone.localdomain python3.9[437438]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:10:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:10:51 standalone.localdomain python3.9[437546]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:10:52 standalone.localdomain python3.9[437654]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:10:52 standalone.localdomain ceph-mon[29756]: pgmap v2889: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:52 standalone.localdomain python3.9[437740]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760368251.8644574-490-110445122721281/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2890: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:10:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:10:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:10:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:10:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:10:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:10:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25096 DF PROTO=TCP SPT=36510 DPT=9102 SEQ=506850957 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB0B80E0000000001030307) 
Oct 13 15:10:53 standalone.localdomain python3.9[437848]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:10:54 standalone.localdomain python3.9[437957]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:54 standalone.localdomain ceph-mon[29756]: pgmap v2890: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2891: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:55 standalone.localdomain python3.9[438065]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54989 DF PROTO=TCP SPT=55568 DPT=9101 SEQ=621206372 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB0BF9E0000000001030307) 
Oct 13 15:10:55 standalone.localdomain python3.9[438173]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:10:56 standalone.localdomain python3.9[438281]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:56 standalone.localdomain ceph-mon[29756]: pgmap v2891: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:56 standalone.localdomain account-server[114566]: 172.20.0.100 - - [13/Oct/2025:15:10:56 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx3720165115654d5b8cbb9-0068ed1680" "proxy-server 2" 0.0011 "-" 22 -
Oct 13 15:10:56 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx3720165115654d5b8cbb9-0068ed1680)
Oct 13 15:10:56 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx3720165115654d5b8cbb9-0068ed1680)
Oct 13 15:10:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2892: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:57 standalone.localdomain python3.9[438389]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:57 standalone.localdomain python3.9[438497]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:58 standalone.localdomain python3.9[438605]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54991 DF PROTO=TCP SPT=55568 DPT=9101 SEQ=621206372 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB0CBB60000000001030307) 
Oct 13 15:10:58 standalone.localdomain ceph-mon[29756]: pgmap v2892: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2893: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:10:58 standalone.localdomain python3.9[438713]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:10:59 standalone.localdomain sudo[438733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:10:59 standalone.localdomain sudo[438733]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:10:59 standalone.localdomain sudo[438733]: pam_unix(sudo:session): session closed for user root
Oct 13 15:10:59 standalone.localdomain sudo[438767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:10:59 standalone.localdomain sudo[438767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:10:59 standalone.localdomain python3.9[438859]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:10:59 standalone.localdomain sudo[438767]: pam_unix(sudo:session): session closed for user root
Oct 13 15:10:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:10:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:10:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:10:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:10:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:10:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:10:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:10:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:10:59 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev a10be584-688c-4057-9943-08f584e5dd94 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:10:59 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev a10be584-688c-4057-9943-08f584e5dd94 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:10:59 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event a10be584-688c-4057-9943-08f584e5dd94 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:10:59 standalone.localdomain sudo[438926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:10:59 standalone.localdomain sudo[438926]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:10:59 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:10:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:10:59 standalone.localdomain sudo[438926]: pam_unix(sudo:session): session closed for user root
Oct 13 15:10:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:11:00 standalone.localdomain python3.9[439018]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:11:00 standalone.localdomain ceph-mon[29756]: pgmap v2893: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:11:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:11:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:11:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:11:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:11:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2894: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:00 standalone.localdomain python3.9[439126]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:11:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:11:01 standalone.localdomain python3.9[439181]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:11:01 standalone.localdomain python3.9[439289]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:11:02 standalone.localdomain python3.9[439344]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:11:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54992 DF PROTO=TCP SPT=55568 DPT=9101 SEQ=621206372 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB0DB760000000001030307) 
Oct 13 15:11:02 standalone.localdomain ceph-mon[29756]: pgmap v2894: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:11:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:11:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:11:02 standalone.localdomain podman[439439]: 2025-10-13 15:11:02.817998762 +0000 UTC m=+0.067704932 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, version=17.1.9, batch=17.1_20250721.1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, tcib_managed=true, com.redhat.component=openstack-swift-account-container)
Oct 13 15:11:02 standalone.localdomain podman[439435]: 2025-10-13 15:11:02.871670807 +0000 UTC m=+0.122578907 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, 
architecture=x86_64, batch=17.1_20250721.1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, container_name=swift_container_server, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git)
Oct 13 15:11:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2895: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:02 standalone.localdomain systemd[1]: tmp-crun.ZlVZ26.mount: Deactivated successfully.
Oct 13 15:11:02 standalone.localdomain podman[439433]: 2025-10-13 15:11:02.94653382 +0000 UTC m=+0.197367297 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, name=rhosp17/openstack-swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=swift_object_server, com.redhat.component=openstack-swift-object-container, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git)
Oct 13 15:11:02 standalone.localdomain python3.9[439477]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:11:03 standalone.localdomain podman[439439]: 2025-10-13 15:11:03.008828317 +0000 UTC m=+0.258534417 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vcs-type=git, version=17.1.9, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, com.redhat.component=openstack-swift-account-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:11:03 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:11:03 standalone.localdomain podman[439433]: 2025-10-13 15:11:03.110098702 +0000 UTC m=+0.360932189 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, vcs-type=git, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_object_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, name=rhosp17/openstack-swift-object)
Oct 13 15:11:03 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:11:03 standalone.localdomain podman[439435]: 2025-10-13 15:11:03.161124229 +0000 UTC m=+0.412032329 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, name=rhosp17/openstack-swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, container_name=swift_container_server, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, 
tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, release=1, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, io.openshift.expose-services=)
Oct 13 15:11:03 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:11:03 standalone.localdomain python3.9[439644]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:11:04 standalone.localdomain python3.9[439699]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:11:04 standalone.localdomain ceph-mon[29756]: pgmap v2895: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:04 standalone.localdomain python3.9[439807]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:11:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2896: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:05 standalone.localdomain python3.9[439862]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:11:05 standalone.localdomain python3.9[439970]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:11:05 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:11:05 standalone.localdomain systemd-sysv-generator[440000]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:11:05 standalone.localdomain systemd-rc-local-generator[439994]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:11:06 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:11:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:11:06 standalone.localdomain ceph-mon[29756]: pgmap v2896: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2897: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:06 standalone.localdomain python3.9[440115]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:11:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:11:06.926 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:11:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:11:06.928 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:11:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:11:06.928 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:11:07 standalone.localdomain python3.9[440170]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:11:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10890 DF PROTO=TCP SPT=60118 DPT=9100 SEQ=1293039944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB0EE240000000001030307) 
Oct 13 15:11:07 standalone.localdomain systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 13 15:11:07 standalone.localdomain python3.9[440279]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:11:08 standalone.localdomain python3.9[440334]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:11:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10891 DF PROTO=TCP SPT=60118 DPT=9100 SEQ=1293039944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB0F2370000000001030307) 
Oct 13 15:11:08 standalone.localdomain ceph-mon[29756]: pgmap v2897: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:11:08 standalone.localdomain systemd[1]: tmp-crun.pHyW1O.mount: Deactivated successfully.
Oct 13 15:11:08 standalone.localdomain podman[440406]: 2025-10-13 15:11:08.834393975 +0000 UTC m=+0.089925793 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct 13 15:11:08 standalone.localdomain podman[440406]: 2025-10-13 15:11:08.842619773 +0000 UTC m=+0.098151601 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 13 15:11:08 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:11:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2898: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:09 standalone.localdomain python3.9[440462]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:11:09 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:11:09 standalone.localdomain systemd-sysv-generator[440493]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:11:09 standalone.localdomain systemd-rc-local-generator[440490]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:11:09 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:11:09 standalone.localdomain systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 13 15:11:09 standalone.localdomain systemd[1]: Starting Create netns directory...
Oct 13 15:11:09 standalone.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 15:11:09 standalone.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 15:11:09 standalone.localdomain systemd[1]: Finished Create netns directory.
Oct 13 15:11:10 standalone.localdomain python3.9[440613]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:11:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10892 DF PROTO=TCP SPT=60118 DPT=9100 SEQ=1293039944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB0FA370000000001030307) 
Oct 13 15:11:10 standalone.localdomain ceph-mon[29756]: pgmap v2898: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2899: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:10 standalone.localdomain python3.9[440721]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:11:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:11:11 standalone.localdomain python3.9[440807]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=root mode=0700 owner=root setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368270.5599337-697-193670076404729/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:11:12 standalone.localdomain python3.9[440915]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:11:12 standalone.localdomain ceph-mon[29756]: pgmap v2899: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2900: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:13 standalone.localdomain python3.9[441023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:11:13 standalone.localdomain python3.9[441109]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/root/.ansible/tmp/ansible-tmp-1760368272.8322773-722-243267685706305/.source.json _original_basename=.zs3zf5x4 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:11:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51371 DF PROTO=TCP SPT=50078 DPT=9105 SEQ=1048663508 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB107F60000000001030307) 
Oct 13 15:11:14 standalone.localdomain python3.9[441217]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:11:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:11:14 standalone.localdomain ceph-mon[29756]: pgmap v2900: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:14 standalone.localdomain podman[441235]: 2025-10-13 15:11:14.836613628 +0000 UTC m=+0.099601140 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:11:14 standalone.localdomain podman[441235]: 2025-10-13 15:11:14.86889318 +0000 UTC m=+0.131880652 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 13 15:11:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2901: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:14 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:11:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:11:16 standalone.localdomain python3.9[441537]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 13 15:11:16 standalone.localdomain ceph-mon[29756]: pgmap v2901: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2902: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:17 standalone.localdomain python3.9[441645]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:11:18 standalone.localdomain python3.9[441753]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 13 15:11:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:11:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3895347508' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:11:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:11:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3895347508' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:11:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6790 DF PROTO=TCP SPT=48246 DPT=9882 SEQ=1575759889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB11A370000000001030307) 
Oct 13 15:11:18 standalone.localdomain ceph-mon[29756]: pgmap v2902: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3895347508' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:11:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3895347508' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:11:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2903: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:19 standalone.localdomain systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 13 15:11:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:11:19 standalone.localdomain podman[441799]: 2025-10-13 15:11:19.467042037 +0000 UTC m=+0.056775501 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:11:19 standalone.localdomain podman[441799]: 2025-10-13 15:11:19.54486678 +0000 UTC m=+0.134600244 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Oct 13 15:11:19 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:11:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51373 DF PROTO=TCP SPT=50078 DPT=9105 SEQ=1048663508 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB11FB60000000001030307) 
Oct 13 15:11:20 standalone.localdomain ceph-mon[29756]: pgmap v2903: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2904: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:11:22 standalone.localdomain ceph-mon[29756]: pgmap v2904: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2905: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:11:23 standalone.localdomain python3[441914]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:11:23
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'backups', 'vms', 'images', 'manila_data', '.mgr', 'volumes']
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:11:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55903 DF PROTO=TCP SPT=34306 DPT=9102 SEQ=1605291610 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB12D3F0000000001030307) 
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:11:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:11:24 standalone.localdomain ceph-mon[29756]: pgmap v2905: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2906: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:24 standalone.localdomain podman[441928]: 2025-10-13 15:11:23.334312922 +0000 UTC m=+0.028989342 image pull  quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 13 15:11:25 standalone.localdomain podman[441979]: 
Oct 13 15:11:25 standalone.localdomain podman[441979]: 2025-10-13 15:11:25.166651064 +0000 UTC m=+0.097974465 container create 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 15:11:25 standalone.localdomain podman[441979]: 2025-10-13 15:11:25.111956143 +0000 UTC m=+0.043279584 image pull  quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 13 15:11:25 standalone.localdomain python3[441914]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 13 15:11:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55752 DF PROTO=TCP SPT=55116 DPT=9101 SEQ=843665638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB134CE0000000001030307) 
Oct 13 15:11:26 standalone.localdomain python3.9[442125]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:11:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:11:26 standalone.localdomain python3.9[442235]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:11:26 standalone.localdomain ceph-mon[29756]: pgmap v2906: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2907: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:27 standalone.localdomain python3.9[442288]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:11:27 standalone.localdomain python3.9[442395]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760368287.2229972-810-179555891156157/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:11:28 standalone.localdomain python3.9[442448]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:11:28 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:11:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55754 DF PROTO=TCP SPT=55116 DPT=9101 SEQ=843665638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB140F60000000001030307) 
Oct 13 15:11:28 standalone.localdomain systemd-rc-local-generator[442467]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:11:28 standalone.localdomain systemd-sysv-generator[442474]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:11:28 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:11:28 standalone.localdomain ceph-mon[29756]: pgmap v2907: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2908: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:11:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:11:29 standalone.localdomain python3.9[442537]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:11:29 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:11:29 standalone.localdomain ceph-mon[29756]: pgmap v2908: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:29 standalone.localdomain systemd-rc-local-generator[442563]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:11:29 standalone.localdomain systemd-sysv-generator[442569]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:11:29 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:11:30 standalone.localdomain systemd[1]: Starting multipathd container...
Oct 13 15:11:30 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:11:30 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf43edd5c47b85a1b7e6436354c1f670a82a419a2d005b6cdaa575efd538eb06/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 13 15:11:30 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf43edd5c47b85a1b7e6436354c1f670a82a419a2d005b6cdaa575efd538eb06/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 15:11:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:11:30 standalone.localdomain podman[442577]: 2025-10-13 15:11:30.261602497 +0000 UTC m=+0.129177701 container init 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: + sudo -E kolla_set_configs
Oct 13 15:11:30 standalone.localdomain sudo[442597]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 15:11:30 standalone.localdomain sudo[442597]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 15:11:30 standalone.localdomain sudo[442597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:11:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:11:30 standalone.localdomain podman[442577]: 2025-10-13 15:11:30.297079377 +0000 UTC m=+0.164654601 container start 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:11:30 standalone.localdomain podman[442577]: multipathd
Oct 13 15:11:30 standalone.localdomain systemd[1]: Started multipathd container.
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: INFO:__main__:Validating config file
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: INFO:__main__:Writing out command to execute
Oct 13 15:11:30 standalone.localdomain sudo[442597]: pam_unix(sudo:session): session closed for user root
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: ++ cat /run_command
Oct 13 15:11:30 standalone.localdomain podman[442600]: 2025-10-13 15:11:30.354345544 +0000 UTC m=+0.054365000 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: + CMD='/usr/sbin/multipathd -d'
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: + ARGS=
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: + sudo kolla_copy_cacerts
Oct 13 15:11:30 standalone.localdomain sudo[442619]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 13 15:11:30 standalone.localdomain sudo[442619]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 15:11:30 standalone.localdomain sudo[442619]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:11:30 standalone.localdomain podman[442600]: 2025-10-13 15:11:30.365227133 +0000 UTC m=+0.065246599 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 15:11:30 standalone.localdomain sudo[442619]: pam_unix(sudo:session): session closed for user root
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: + [[ ! -n '' ]]
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: + . kolla_extend_start
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: Running command: '/usr/sbin/multipathd -d'
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: + umask 0022
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: + exec /usr/sbin/multipathd -d
Oct 13 15:11:30 standalone.localdomain podman[442600]: unhealthy
Oct 13 15:11:30 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:11:30 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Failed with result 'exit-code'.
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: 7733.566244 | --------start up--------
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: 7733.566258 | read /etc/multipath.conf
Oct 13 15:11:30 standalone.localdomain multipathd[442591]: 7733.568340 | path checkers start up
Oct 13 15:11:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2909: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:30 standalone.localdomain python3.9[442737]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:11:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:11:31 standalone.localdomain python3.9[442847]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:11:32 standalone.localdomain ceph-mon[29756]: pgmap v2909: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:32 standalone.localdomain python3.9[442969]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:11:32 standalone.localdomain systemd[1]: Stopping multipathd container...
Oct 13 15:11:32 standalone.localdomain multipathd[442591]: 7735.787946 | exit (signal)
Oct 13 15:11:32 standalone.localdomain multipathd[442591]: 7735.788017 | --------shut down-------
Oct 13 15:11:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55755 DF PROTO=TCP SPT=55116 DPT=9101 SEQ=843665638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB150B60000000001030307) 
Oct 13 15:11:32 standalone.localdomain systemd[1]: libpod-87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.scope: Deactivated successfully.
Oct 13 15:11:32 standalone.localdomain podman[442973]: 2025-10-13 15:11:32.629966401 +0000 UTC m=+0.112083653 container died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible)
Oct 13 15:11:32 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.timer: Deactivated successfully.
Oct 13 15:11:32 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:11:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898-userdata-shm.mount: Deactivated successfully.
Oct 13 15:11:32 standalone.localdomain podman[442973]: 2025-10-13 15:11:32.792133896 +0000 UTC m=+0.274251128 container cleanup 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:11:32 standalone.localdomain podman[442973]: multipathd
Oct 13 15:11:32 standalone.localdomain podman[442999]: 2025-10-13 15:11:32.874613986 +0000 UTC m=+0.053451279 container cleanup 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:11:32 standalone.localdomain podman[442999]: multipathd
Oct 13 15:11:32 standalone.localdomain systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 13 15:11:32 standalone.localdomain systemd[1]: Stopped multipathd container.
Oct 13 15:11:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2910: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:32 standalone.localdomain systemd[1]: Starting multipathd container...
Oct 13 15:11:33 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:11:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:11:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf43edd5c47b85a1b7e6436354c1f670a82a419a2d005b6cdaa575efd538eb06/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 13 15:11:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf43edd5c47b85a1b7e6436354c1f670a82a419a2d005b6cdaa575efd538eb06/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 15:11:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:11:33 standalone.localdomain podman[443013]: 2025-10-13 15:11:33.057761261 +0000 UTC m=+0.145219273 container init 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: + sudo -E kolla_set_configs
Oct 13 15:11:33 standalone.localdomain sudo[443046]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 15:11:33 standalone.localdomain sudo[443046]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 15:11:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:11:33 standalone.localdomain sudo[443046]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:11:33 standalone.localdomain podman[443013]: 2025-10-13 15:11:33.092575699 +0000 UTC m=+0.180033711 container start 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:11:33 standalone.localdomain podman[443013]: multipathd
Oct 13 15:11:33 standalone.localdomain podman[443031]: 2025-10-13 15:11:33.093552071 +0000 UTC m=+0.054478673 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-account, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, version=17.1.9, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, build-date=2025-07-21T16:11:22, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true)
Oct 13 15:11:33 standalone.localdomain systemd[1]: Started multipathd container.
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: INFO:__main__:Validating config file
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: INFO:__main__:Writing out command to execute
Oct 13 15:11:33 standalone.localdomain sudo[443046]: pam_unix(sudo:session): session closed for user root
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: ++ cat /run_command
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: + CMD='/usr/sbin/multipathd -d'
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: + ARGS=
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: + sudo kolla_copy_cacerts
Oct 13 15:11:33 standalone.localdomain sudo[443068]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 13 15:11:33 standalone.localdomain sudo[443068]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 15:11:33 standalone.localdomain sudo[443068]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:11:33 standalone.localdomain sudo[443068]: pam_unix(sudo:session): session closed for user root
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: + [[ ! -n '' ]]
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: + . kolla_extend_start
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: Running command: '/usr/sbin/multipathd -d'
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: + umask 0022
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: + exec /usr/sbin/multipathd -d
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: 7736.338323 | --------start up--------
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: 7736.338341 | read /etc/multipath.conf
Oct 13 15:11:33 standalone.localdomain multipathd[443028]: 7736.341227 | path checkers start up
Oct 13 15:11:33 standalone.localdomain podman[443053]: 2025-10-13 15:11:33.170153153 +0000 UTC m=+0.076505019 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:11:33 standalone.localdomain podman[443053]: 2025-10-13 15:11:33.175253556 +0000 UTC m=+0.081605452 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:11:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:11:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:11:33 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:11:33 standalone.localdomain podman[443104]: 2025-10-13 15:11:33.246696722 +0000 UTC m=+0.053956086 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, release=1, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, io.buildah.version=1.33.12, io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9)
Oct 13 15:11:33 standalone.localdomain podman[443031]: 2025-10-13 15:11:33.296801857 +0000 UTC m=+0.257728439 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_id=tripleo_step4, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, container_name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, version=17.1.9, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 15:11:33 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:11:33 standalone.localdomain podman[443105]: 2025-10-13 15:11:33.300711369 +0000 UTC m=+0.104578928 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, distribution-scope=public, version=17.1.9, vcs-type=git, architecture=x86_64, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, release=1, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:11:33 standalone.localdomain podman[443105]: 2025-10-13 15:11:33.46897156 +0000 UTC m=+0.272839089 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, container_name=swift_container_server, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:11:33 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:11:33 standalone.localdomain podman[443104]: 2025-10-13 15:11:33.488891335 +0000 UTC m=+0.296150679 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, version=17.1.9, build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, batch=17.1_20250721.1, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, container_name=swift_object_server, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true)
Oct 13 15:11:33 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:11:33 standalone.localdomain python3.9[443251]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:11:34 standalone.localdomain ceph-mon[29756]: pgmap v2910: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:34 standalone.localdomain python3.9[443359]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 15:11:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2911: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:35 standalone.localdomain python3.9[443467]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.196050) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368295196084, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1867, "num_deletes": 251, "total_data_size": 1769015, "memory_usage": 1805680, "flush_reason": "Manual Compaction"}
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368295213652, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 1707261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67453, "largest_seqno": 69319, "table_properties": {"data_size": 1699836, "index_size": 4384, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15514, "raw_average_key_size": 19, "raw_value_size": 1684722, "raw_average_value_size": 2157, "num_data_blocks": 198, "num_entries": 781, "num_filter_entries": 781, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760368126, "oldest_key_time": 1760368126, "file_creation_time": 1760368295, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 17676 microseconds, and 3478 cpu microseconds.
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.213706) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 1707261 bytes OK
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.213742) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.215714) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.215727) EVENT_LOG_v1 {"time_micros": 1760368295215723, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.215742) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 1760927, prev total WAL file size 1760927, number of live WAL files 2.
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.216213) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(1667KB)], [155(4621KB)]
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368295216268, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 6439364, "oldest_snapshot_seqno": -1}
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 5938 keys, 5403011 bytes, temperature: kUnknown
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368295260246, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 5403011, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5368779, "index_size": 18326, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14853, "raw_key_size": 155867, "raw_average_key_size": 26, "raw_value_size": 5266051, "raw_average_value_size": 886, "num_data_blocks": 723, "num_entries": 5938, "num_filter_entries": 5938, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760368295, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.260549) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 5403011 bytes
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.262361) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.2 rd, 122.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 4.5 +0.0 blob) out(5.2 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 6458, records dropped: 520 output_compression: NoCompression
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.262389) EVENT_LOG_v1 {"time_micros": 1760368295262376, "job": 96, "event": "compaction_finished", "compaction_time_micros": 44044, "compaction_time_cpu_micros": 20289, "output_level": 6, "num_output_files": 1, "total_output_size": 5403011, "num_input_records": 6458, "num_output_records": 5938, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368295262759, "job": 96, "event": "table_file_deletion", "file_number": 157}
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368295263542, "job": 96, "event": "table_file_deletion", "file_number": 155}
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.216104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.263600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.263604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.263606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.263608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:11:35 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:11:35.263610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:11:35 standalone.localdomain python3.9[443583]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:11:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:11:36 standalone.localdomain ceph-mon[29756]: pgmap v2911: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:36 standalone.localdomain python3.9[443669]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/root/.ansible/tmp/ansible-tmp-1760368295.394995-890-275497321490480/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:11:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2912: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:37 standalone.localdomain python3.9[443777]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:11:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29436 DF PROTO=TCP SPT=48760 DPT=9100 SEQ=2305948507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB163540000000001030307) 
Oct 13 15:11:37 standalone.localdomain python3.9[443885]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:11:37 standalone.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 15:11:37 standalone.localdomain systemd[1]: Stopped Load Kernel Modules.
Oct 13 15:11:37 standalone.localdomain systemd[1]: Stopping Load Kernel Modules...
Oct 13 15:11:37 standalone.localdomain systemd[1]: Starting Load Kernel Modules...
Oct 13 15:11:37 standalone.localdomain systemd-modules-load[443889]: Module 'msr' is built in
Oct 13 15:11:37 standalone.localdomain systemd[1]: Finished Load Kernel Modules.
Oct 13 15:11:38 standalone.localdomain ceph-mon[29756]: pgmap v2912: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29437 DF PROTO=TCP SPT=48760 DPT=9100 SEQ=2305948507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB167770000000001030307) 
Oct 13 15:11:38 standalone.localdomain python3.9[443997]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 15:11:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2913: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:11:39 standalone.localdomain podman[444059]: 2025-10-13 15:11:39.808888877 +0000 UTC m=+0.062547087 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:11:39 standalone.localdomain podman[444059]: 2025-10-13 15:11:39.844932266 +0000 UTC m=+0.098590506 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, container_name=iscsid, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:11:39 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:11:40 standalone.localdomain python3.9[444058]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 15:11:40 standalone.localdomain ceph-mon[29756]: pgmap v2913: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29438 DF PROTO=TCP SPT=48760 DPT=9100 SEQ=2305948507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB16F760000000001030307) 
Oct 13 15:11:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2914: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:11:42 standalone.localdomain ceph-mon[29756]: pgmap v2914: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2915: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:44 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54168 DF PROTO=TCP SPT=43422 DPT=9105 SEQ=2565738606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB17D360000000001030307) 
Oct 13 15:11:44 standalone.localdomain ceph-mon[29756]: pgmap v2915: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:44 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:11:44 standalone.localdomain systemd-rc-local-generator[444115]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:11:44 standalone.localdomain systemd-sysv-generator[444118]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:11:44 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:11:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2916: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:11:44 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:11:45 standalone.localdomain systemd-rc-local-generator[444159]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:11:45 standalone.localdomain systemd-sysv-generator[444162]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:11:45 standalone.localdomain podman[444125]: 2025-10-13 15:11:45.030520096 +0000 UTC m=+0.118931754 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:11:45 standalone.localdomain podman[444125]: 2025-10-13 15:11:45.036102725 +0000 UTC m=+0.124514383 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:11:45 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:11:45 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:11:45 standalone.localdomain systemd-logind[45629]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 13 15:11:45 standalone.localdomain systemd-logind[45629]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 13 15:11:45 standalone.localdomain lvm[444216]: PV /dev/loop3 online, VG vg2 is complete.
Oct 13 15:11:45 standalone.localdomain lvm[444216]: VG vg2 finished
Oct 13 15:11:45 standalone.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 15:11:45 standalone.localdomain systemd[1]: Starting man-db-cache-update.service...
Oct 13 15:11:45 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:11:45 standalone.localdomain systemd-rc-local-generator[444266]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:11:45 standalone.localdomain systemd-sysv-generator[444269]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:11:45 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:11:45 standalone.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 15:11:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:11:46 standalone.localdomain ceph-mon[29756]: pgmap v2916: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:46 standalone.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 15:11:46 standalone.localdomain systemd[1]: Finished man-db-cache-update.service.
Oct 13 15:11:46 standalone.localdomain systemd[1]: run-r885b3c9011804e498905b4da29c5bc8e.service: Deactivated successfully.
Oct 13 15:11:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2917: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:47 standalone.localdomain python3.9[445517]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:11:48 standalone.localdomain ceph-mon[29756]: pgmap v2917: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:48 standalone.localdomain python3.9[445625]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:11:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11556 DF PROTO=TCP SPT=35938 DPT=9882 SEQ=2259094035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB18F760000000001030307) 
Oct 13 15:11:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2918: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:49 standalone.localdomain python3.9[445737]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:11:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:11:49 standalone.localdomain podman[445752]: 2025-10-13 15:11:49.83195879 +0000 UTC m=+0.077572224 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:11:49 standalone.localdomain podman[445752]: 2025-10-13 15:11:49.873834136 +0000 UTC m=+0.119447580 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 15:11:49 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:11:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54170 DF PROTO=TCP SPT=43422 DPT=9105 SEQ=2565738606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB194F60000000001030307) 
Oct 13 15:11:50 standalone.localdomain ceph-mon[29756]: pgmap v2918: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2919: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:50 standalone.localdomain python3.9[445870]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:11:50 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:11:51 standalone.localdomain systemd-sysv-generator[445895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:11:51 standalone.localdomain systemd-rc-local-generator[445891]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:11:51 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:11:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:11:51 standalone.localdomain python3.9[446014]: ansible-ansible.builtin.service_facts Invoked
Oct 13 15:11:52 standalone.localdomain network[446031]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 15:11:52 standalone.localdomain network[446032]: 'network-scripts' will be removed from distribution in near future.
Oct 13 15:11:52 standalone.localdomain network[446033]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 15:11:52 standalone.localdomain ceph-mon[29756]: pgmap v2919: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2920: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:11:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:11:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:11:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:11:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:11:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:11:53 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:11:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26905 DF PROTO=TCP SPT=48140 DPT=9102 SEQ=2406317646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB1A26E0000000001030307) 
Oct 13 15:11:54 standalone.localdomain ceph-mon[29756]: pgmap v2920: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2921: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65006 DF PROTO=TCP SPT=34050 DPT=9101 SEQ=2880220473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB1A9FE0000000001030307) 
Oct 13 15:11:55 standalone.localdomain python3.9[446275]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:11:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:11:56 standalone.localdomain ceph-mon[29756]: pgmap v2921: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:56 standalone.localdomain python3.9[446384]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:11:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2922: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:57 standalone.localdomain python3.9[446493]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:11:58 standalone.localdomain python3.9[446602]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:11:58 standalone.localdomain ceph-mon[29756]: pgmap v2922: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65008 DF PROTO=TCP SPT=34050 DPT=9101 SEQ=2880220473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB1B5F60000000001030307) 
Oct 13 15:11:58 standalone.localdomain python3.9[446711]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:11:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2923: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:11:59 standalone.localdomain python3.9[446820]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:11:59 standalone.localdomain sudo[446877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:11:59 standalone.localdomain sudo[446877]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:11:59 standalone.localdomain sudo[446877]: pam_unix(sudo:session): session closed for user root
Oct 13 15:12:00 standalone.localdomain sudo[446911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:12:00 standalone.localdomain sudo[446911]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:12:00 standalone.localdomain ceph-mon[29756]: pgmap v2923: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:00 standalone.localdomain python3.9[446965]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:12:00 standalone.localdomain sudo[446911]: pam_unix(sudo:session): session closed for user root
Oct 13 15:12:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:12:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:12:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:12:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:12:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:12:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:12:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:12:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:12:00 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev f911e234-01c1-48a3-be2a-a687fdd2d6cc (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:12:00 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev f911e234-01c1-48a3-be2a-a687fdd2d6cc (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:12:00 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event f911e234-01c1-48a3-be2a-a687fdd2d6cc (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:12:00 standalone.localdomain sudo[447076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:12:00 standalone.localdomain sudo[447076]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:12:00 standalone.localdomain sudo[447076]: pam_unix(sudo:session): session closed for user root
Oct 13 15:12:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2924: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:01 standalone.localdomain python3.9[447125]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:12:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:12:01 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:12:01 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:12:01 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:12:01 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:12:01 standalone.localdomain python3.9[447234]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:02 standalone.localdomain python3.9[447342]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65009 DF PROTO=TCP SPT=34050 DPT=9101 SEQ=2880220473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB1C5B60000000001030307) 
Oct 13 15:12:02 standalone.localdomain ceph-mon[29756]: pgmap v2924: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2925: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:03 standalone.localdomain python3.9[447450]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:03 standalone.localdomain python3.9[447558]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:12:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:12:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:12:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:12:03 standalone.localdomain podman[447579]: 2025-10-13 15:12:03.818197191 +0000 UTC m=+0.076754311 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, build-date=2025-07-21T16:11:22, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, summary=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, container_name=swift_account_server)
Oct 13 15:12:03 standalone.localdomain podman[447578]: 2025-10-13 15:12:03.877845457 +0000 UTC m=+0.136278724 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, release=1, architecture=x86_64, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, container_name=swift_container_server, vendor=Red Hat, Inc., 
tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, vcs-type=git, build-date=2025-07-21T15:54:32)
Oct 13 15:12:03 standalone.localdomain systemd[1]: tmp-crun.ftVm5h.mount: Deactivated successfully.
Oct 13 15:12:04 standalone.localdomain podman[447577]: 2025-10-13 15:12:04.002874631 +0000 UTC m=+0.261317998 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:12:04 standalone.localdomain podman[447577]: 2025-10-13 15:12:04.015290685 +0000 UTC m=+0.273734062 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 13 15:12:04 standalone.localdomain podman[447579]: 2025-10-13 15:12:04.032262246 +0000 UTC m=+0.290819366 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, 
build-date=2025-07-21T16:11:22, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, architecture=x86_64, container_name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:12:04 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:12:04 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:12:04 standalone.localdomain podman[447578]: 2025-10-13 15:12:04.102880303 +0000 UTC m=+0.361313570 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, vcs-type=git, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, container_name=swift_container_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1, version=17.1.9)
Oct 13 15:12:04 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:12:04 standalone.localdomain podman[447576]: 2025-10-13 15:12:03.954561887 +0000 UTC m=+0.212895141 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.description=Red Hat OpenStack 
Platform 17.1 swift-object, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, version=17.1.9, tcib_managed=true)
Oct 13 15:12:04 standalone.localdomain podman[447576]: 2025-10-13 15:12:04.185034276 +0000 UTC m=+0.443367620 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-swift-object-container, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, release=1, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, managed_by=tripleo_ansible, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, container_name=swift_object_server, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=)
Oct 13 15:12:04 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:12:04 standalone.localdomain python3.9[447757]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:04 standalone.localdomain ceph-mon[29756]: pgmap v2925: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:04 standalone.localdomain python3.9[447873]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2926: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:04 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:12:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:12:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:12:05 standalone.localdomain python3.9[447981]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:05 standalone.localdomain ceph-mon[29756]: pgmap v2926: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:05 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:12:06 standalone.localdomain python3.9[448089]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:12:06 standalone.localdomain python3.9[448197]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2927: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:12:06.927 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:12:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:12:06.928 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:12:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:12:06.929 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:12:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34367 DF PROTO=TCP SPT=45386 DPT=9100 SEQ=252945208 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB1D8840000000001030307) 
Oct 13 15:12:07 standalone.localdomain python3.9[448305]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:08 standalone.localdomain python3.9[448413]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:08 standalone.localdomain ceph-mon[29756]: pgmap v2927: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34368 DF PROTO=TCP SPT=45386 DPT=9100 SEQ=252945208 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB1DC760000000001030307) 
Oct 13 15:12:08 standalone.localdomain python3.9[448521]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2928: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:09 standalone.localdomain python3.9[448629]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:12:10 standalone.localdomain python3.9[448737]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:10 standalone.localdomain podman[448738]: 2025-10-13 15:12:10.011021344 +0000 UTC m=+0.094173687 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:12:10 standalone.localdomain podman[448738]: 2025-10-13 15:12:10.026584182 +0000 UTC m=+0.109736455 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:12:10 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:12:10 standalone.localdomain ceph-mon[29756]: pgmap v2928: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34369 DF PROTO=TCP SPT=45386 DPT=9100 SEQ=252945208 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB1E48A0000000001030307) 
Oct 13 15:12:10 standalone.localdomain python3.9[448864]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2929: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:12:11 standalone.localdomain python3.9[448972]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:11 standalone.localdomain python3.9[449080]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                            systemctl disable --now certmonger.service
                                                            test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                          fi
                                                           _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:12:12 standalone.localdomain ceph-mon[29756]: pgmap v2929: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:12 standalone.localdomain python3.9[449190]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 13 15:12:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2930: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:13 standalone.localdomain python3.9[449298]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:12:13 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:12:13 standalone.localdomain systemd-rc-local-generator[449324]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:12:13 standalone.localdomain systemd-sysv-generator[449329]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:12:13 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:12:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55192 DF PROTO=TCP SPT=36766 DPT=9105 SEQ=2530184318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB1F2370000000001030307) 
Oct 13 15:12:14 standalone.localdomain ceph-mon[29756]: pgmap v2930: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:14 standalone.localdomain python3.9[449443]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:12:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2931: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:15 standalone.localdomain python3.9[449552]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:12:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:12:15 standalone.localdomain podman[449554]: 2025-10-13 15:12:15.433940556 +0000 UTC m=+0.098995372 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Oct 13 15:12:15 standalone.localdomain podman[449554]: 2025-10-13 15:12:15.466758684 +0000 UTC m=+0.131813480 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:12:15 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:12:15 standalone.localdomain python3.9[449678]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:12:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:12:16 standalone.localdomain ceph-mon[29756]: pgmap v2931: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:16 standalone.localdomain python3.9[449787]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:12:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2932: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:17 standalone.localdomain python3.9[449896]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:12:17 standalone.localdomain python3.9[450005]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:12:18 standalone.localdomain ceph-mon[29756]: pgmap v2932: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:18 standalone.localdomain python3.9[450114]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:12:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:12:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/968856731' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:12:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:12:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/968856731' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:12:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59330 DF PROTO=TCP SPT=49752 DPT=9882 SEQ=2586419408 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB204B60000000001030307) 
Oct 13 15:12:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2933: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:19 standalone.localdomain python3.9[450223]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:12:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/968856731' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:12:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/968856731' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:12:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:12:20 standalone.localdomain podman[450242]: 2025-10-13 15:12:20.020700964 +0000 UTC m=+0.096944400 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.build-date=20251009, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 15:12:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55194 DF PROTO=TCP SPT=36766 DPT=9105 SEQ=2530184318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB209F60000000001030307) 
Oct 13 15:12:20 standalone.localdomain podman[450242]: 2025-10-13 15:12:20.065088801 +0000 UTC m=+0.141332277 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:12:20 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:12:20 standalone.localdomain ceph-mon[29756]: pgmap v2933: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:20 standalone.localdomain python3.9[450357]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2934: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:12:21 standalone.localdomain python3.9[450465]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:22 standalone.localdomain python3.9[450573]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:22 standalone.localdomain ceph-mon[29756]: pgmap v2934: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:22 standalone.localdomain python3.9[450681]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2935: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:12:23
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['images', 'vms', 'manila_metadata', 'manila_data', 'backups', '.mgr', 'volumes']
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:12:23 standalone.localdomain python3.9[450789]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36527 DF PROTO=TCP SPT=38766 DPT=9102 SEQ=3986649140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB2179F0000000001030307) 
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:12:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:12:23 standalone.localdomain python3.9[450897]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:24 standalone.localdomain ceph-mon[29756]: pgmap v2935: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:24 standalone.localdomain python3.9[451005]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2936: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:25 standalone.localdomain python3.9[451113]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48138 DF PROTO=TCP SPT=41648 DPT=9101 SEQ=2190694803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB21F2D0000000001030307) 
Oct 13 15:12:25 standalone.localdomain python3.9[451221]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:12:26 standalone.localdomain ceph-mon[29756]: pgmap v2936: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:26 standalone.localdomain python3.9[451329]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2937: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:26 standalone.localdomain python3.9[451437]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:27 standalone.localdomain python3.9[451545]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:28 standalone.localdomain ceph-mon[29756]: pgmap v2937: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48140 DF PROTO=TCP SPT=41648 DPT=9101 SEQ=2190694803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB22B360000000001030307) 
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2938: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:12:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:12:30 standalone.localdomain ceph-mon[29756]: pgmap v2938: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2939: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:12:32 standalone.localdomain ceph-mon[29756]: pgmap v2939: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48141 DF PROTO=TCP SPT=41648 DPT=9101 SEQ=2190694803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB23AF70000000001030307) 
Oct 13 15:12:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2940: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:33 standalone.localdomain python3.9[451653]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 13 15:12:34 standalone.localdomain python3.9[451762]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 13 15:12:34 standalone.localdomain ceph-mon[29756]: pgmap v2940: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:12:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:12:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:12:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:12:34 standalone.localdomain groupadd[451767]: group added to /etc/group: name=nova, GID=42436
Oct 13 15:12:34 standalone.localdomain groupadd[451767]: group added to /etc/gshadow: name=nova
Oct 13 15:12:34 standalone.localdomain groupadd[451767]: new group: name=nova, GID=42436
Oct 13 15:12:34 standalone.localdomain podman[451763]: 2025-10-13 15:12:34.435676288 +0000 UTC m=+0.075536505 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, distribution-scope=public, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, build-date=2025-07-21T14:56:28, name=rhosp17/openstack-swift-object)
Oct 13 15:12:34 standalone.localdomain systemd[1]: tmp-crun.RhyxVo.mount: Deactivated successfully.
Oct 13 15:12:34 standalone.localdomain podman[451766]: 2025-10-13 15:12:34.50379003 +0000 UTC m=+0.137948095 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, name=rhosp17/openstack-swift-account, release=1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, com.redhat.component=openstack-swift-account-container, vendor=Red Hat, Inc., container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, version=17.1.9, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:12:34 standalone.localdomain podman[451764]: 2025-10-13 15:12:34.540581207 +0000 UTC m=+0.180502236 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:12:34 standalone.localdomain podman[451764]: 2025-10-13 15:12:34.550938659 +0000 UTC m=+0.190859658 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:12:34 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:12:34 standalone.localdomain podman[451765]: 2025-10-13 15:12:34.595211872 +0000 UTC m=+0.232177362 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-container-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 15:12:34 standalone.localdomain podman[451763]: 2025-10-13 15:12:34.670109587 +0000 UTC m=+0.309969784 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-swift-object-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, container_name=swift_object_server, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object)
Oct 13 15:12:34 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:12:34 standalone.localdomain podman[451766]: 2025-10-13 15:12:34.702014687 +0000 UTC m=+0.336172782 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, release=1, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container)
Oct 13 15:12:34 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:12:34 standalone.localdomain podman[451765]: 2025-10-13 15:12:34.787754669 +0000 UTC m=+0.424720189 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, build-date=2025-07-21T15:54:32, container_name=swift_container_server, distribution-scope=public, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64)
Oct 13 15:12:34 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:12:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2941: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:35 standalone.localdomain python3.9[451977]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on standalone.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 13 15:12:35 standalone.localdomain useradd[451979]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/2
Oct 13 15:12:35 standalone.localdomain useradd[451979]: add 'nova' to group 'libvirt'
Oct 13 15:12:35 standalone.localdomain useradd[451979]: add 'nova' to shadow group 'libvirt'
Oct 13 15:12:36 standalone.localdomain sshd[452003]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:12:36 standalone.localdomain sshd[452003]: Accepted publickey for root from 192.168.122.30 port 46010 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:12:36 standalone.localdomain systemd-logind[45629]: New session 294 of user root.
Oct 13 15:12:36 standalone.localdomain systemd[1]: Started Session 294 of User root.
Oct 13 15:12:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:12:36 standalone.localdomain sshd[452003]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:12:36 standalone.localdomain ceph-mon[29756]: pgmap v2941: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:36 standalone.localdomain sshd[452006]: Received disconnect from 192.168.122.30 port 46010:11: disconnected by user
Oct 13 15:12:36 standalone.localdomain sshd[452006]: Disconnected from user root 192.168.122.30 port 46010
Oct 13 15:12:36 standalone.localdomain sshd[452003]: pam_unix(sshd:session): session closed for user root
Oct 13 15:12:36 standalone.localdomain systemd[1]: session-294.scope: Deactivated successfully.
Oct 13 15:12:36 standalone.localdomain systemd-logind[45629]: Session 294 logged out. Waiting for processes to exit.
Oct 13 15:12:36 standalone.localdomain systemd-logind[45629]: Removed session 294.
Oct 13 15:12:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2942: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:36 standalone.localdomain python3.9[452114]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:12:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2705 DF PROTO=TCP SPT=39638 DPT=9100 SEQ=2890937717 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB24DB40000000001030307) 
Oct 13 15:12:37 standalone.localdomain python3.9[452200]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368356.5082145-1527-239877186030685/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:38 standalone.localdomain python3.9[452308]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:12:38 standalone.localdomain ceph-mon[29756]: pgmap v2942: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2706 DF PROTO=TCP SPT=39638 DPT=9100 SEQ=2890937717 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB251B60000000001030307) 
Oct 13 15:12:38 standalone.localdomain python3.9[452363]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2943: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:39 standalone.localdomain python3.9[452471]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:12:39 standalone.localdomain python3.9[452557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368358.7329676-1527-64851745482591/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:40 standalone.localdomain ceph-mon[29756]: pgmap v2943: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:40 standalone.localdomain python3.9[452665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:12:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2707 DF PROTO=TCP SPT=39638 DPT=9100 SEQ=2890937717 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB259B70000000001030307) 
Oct 13 15:12:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:12:40 standalone.localdomain podman[452733]: 2025-10-13 15:12:40.804266893 +0000 UTC m=+0.070573995 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3)
Oct 13 15:12:40 standalone.localdomain podman[452733]: 2025-10-13 15:12:40.817834022 +0000 UTC m=+0.084141144 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid)
Oct 13 15:12:40 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:12:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2944: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:40 standalone.localdomain python3.9[452759]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368359.9874692-1527-237803675048229/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=f34c7eb1930a91cf1fb02b6d58032eaf6db6dbe6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:12:41 standalone.localdomain python3.9[452878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:12:42 standalone.localdomain python3.9[452964]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368361.1086907-1527-231645940189119/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:42 standalone.localdomain ceph-mon[29756]: pgmap v2944: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:42 standalone.localdomain python3.9[453072]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2945: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:43 standalone.localdomain python3.9[453180]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17510 DF PROTO=TCP SPT=38572 DPT=9105 SEQ=2991558854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB267770000000001030307) 
Oct 13 15:12:44 standalone.localdomain python3.9[453288]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:12:44 standalone.localdomain ceph-mon[29756]: pgmap v2945: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2946: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:44 standalone.localdomain python3.9[453398]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:12:45 standalone.localdomain python3.9[453506]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:12:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:12:45 standalone.localdomain podman[453509]: 2025-10-13 15:12:45.789015504 +0000 UTC m=+0.054929755 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:12:45 standalone.localdomain podman[453509]: 2025-10-13 15:12:45.79286086 +0000 UTC m=+0.058775211 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:12:45 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:12:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:12:46 standalone.localdomain python3.9[453635]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:12:46 standalone.localdomain ceph-mon[29756]: pgmap v2946: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2947: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:46 standalone.localdomain python3.9[453721]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368365.8962305-1638-220113646155184/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:47 standalone.localdomain python3.9[453829]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:12:48 standalone.localdomain python3.9[453915]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368367.0975184-1653-105787960670547/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:12:48 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 15:12:48 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 70, saving inputs in /var/lib/pacemaker/pengine/pe-input-69.bz2
Oct 13 15:12:48 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 70 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-69.bz2): Complete
Oct 13 15:12:48 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 15:12:48 standalone.localdomain ceph-mon[29756]: pgmap v2947: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33185 DF PROTO=TCP SPT=59178 DPT=9882 SEQ=2642014556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB279B70000000001030307) 
Oct 13 15:12:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2948: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:48 standalone.localdomain python3.9[454023]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 13 15:12:49 standalone.localdomain python3.9[454131]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:12:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17512 DF PROTO=TCP SPT=38572 DPT=9105 SEQ=2991558854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB27F360000000001030307) 
Oct 13 15:12:50 standalone.localdomain ceph-mon[29756]: pgmap v2948: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:50 standalone.localdomain python3[454239]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:12:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:12:50 standalone.localdomain podman[454262]: 2025-10-13 15:12:50.82244653 +0000 UTC m=+0.089935898 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:12:50 standalone.localdomain podman[454262]: 2025-10-13 15:12:50.865862527 +0000 UTC m=+0.133351885 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:12:50 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:12:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2949: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:12:52 standalone.localdomain ceph-mon[29756]: pgmap v2949: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2950: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:12:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:12:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:12:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:12:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:12:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:12:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56111 DF PROTO=TCP SPT=42190 DPT=9102 SEQ=2056434952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB28CCF0000000001030307) 
Oct 13 15:12:54 standalone.localdomain ceph-mon[29756]: pgmap v2950: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2951: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62697 DF PROTO=TCP SPT=49718 DPT=9101 SEQ=3094888596 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB2945F0000000001030307) 
Oct 13 15:12:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:12:56 standalone.localdomain ceph-mon[29756]: pgmap v2951: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2952: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:58 standalone.localdomain ceph-mon[29756]: pgmap v2952: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:12:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62699 DF PROTO=TCP SPT=49718 DPT=9101 SEQ=3094888596 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB2A0760000000001030307) 
Oct 13 15:12:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2953: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:00 standalone.localdomain ceph-mon[29756]: pgmap v2953: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:00 standalone.localdomain sudo[454773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:13:00 standalone.localdomain sudo[454773]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:13:00 standalone.localdomain sudo[454773]: pam_unix(sudo:session): session closed for user root
Oct 13 15:13:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2954: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:00 standalone.localdomain podman[454254]: 2025-10-13 15:12:50.739567715 +0000 UTC m=+0.043295004 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 13 15:13:00 standalone.localdomain sudo[454791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:13:00 standalone.localdomain sudo[454791]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:13:01 standalone.localdomain podman[454832]: 
Oct 13 15:13:01 standalone.localdomain podman[454832]: 2025-10-13 15:13:01.136990752 +0000 UTC m=+0.080059861 container create 10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute_init, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Oct 13 15:13:01 standalone.localdomain podman[454832]: 2025-10-13 15:13:01.0996477 +0000 UTC m=+0.042716819 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 13 15:13:01 standalone.localdomain python3[454239]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 13 15:13:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:13:01 standalone.localdomain sudo[454791]: pam_unix(sudo:session): session closed for user root
Oct 13 15:13:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:13:01 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:13:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:13:01 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:13:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:13:01 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:13:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:13:01 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:13:01 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 409af614-2d48-4582-86f6-2c1aef7d6805 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:13:01 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 409af614-2d48-4582-86f6-2c1aef7d6805 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:13:01 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 409af614-2d48-4582-86f6-2c1aef7d6805 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:13:01 standalone.localdomain sudo[454974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:13:01 standalone.localdomain sudo[454974]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:13:01 standalone.localdomain sudo[454974]: pam_unix(sudo:session): session closed for user root
Oct 13 15:13:01 standalone.localdomain python3.9[455028]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:13:02 standalone.localdomain ceph-mon[29756]: pgmap v2954: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:13:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:13:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:13:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:13:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62700 DF PROTO=TCP SPT=49718 DPT=9101 SEQ=3094888596 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB2B0360000000001030307) 
Oct 13 15:13:02 standalone.localdomain python3.9[455138]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 13 15:13:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2955: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:03 standalone.localdomain python3.9[455246]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:13:04 standalone.localdomain python3[455354]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:13:04 standalone.localdomain ceph-mon[29756]: pgmap v2955: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:04 standalone.localdomain python3[455354]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                             {
                                                                  "Id": "97abb4e5d6eb812c6abde306e15dbdde9dbba5ef5cd42ad11b83abc055914569",
                                                                  "Digest": "sha256:2b70db6709e60609ff94638c3a7a40f509d80d41e53eb54f065703da11802cd3",
                                                                  "RepoTags": [
                                                                       "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                  ],
                                                                  "RepoDigests": [
                                                                       "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:2b70db6709e60609ff94638c3a7a40f509d80d41e53eb54f065703da11802cd3"
                                                                  ],
                                                                  "Parent": "",
                                                                  "Comment": "",
                                                                  "Created": "2025-10-13T06:31:21.381002217Z",
                                                                  "Config": {
                                                                       "User": "nova",
                                                                       "Env": [
                                                                            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                            "LANG=en_US.UTF-8",
                                                                            "TZ=UTC",
                                                                            "container=oci"
                                                                       ],
                                                                       "Entrypoint": [
                                                                            "dumb-init",
                                                                            "--single-child",
                                                                            "--"
                                                                       ],
                                                                       "Cmd": [
                                                                            "kolla_start"
                                                                       ],
                                                                       "Labels": {
                                                                            "io.buildah.version": "1.41.3",
                                                                            "maintainer": "OpenStack Kubernetes Operator team",
                                                                            "org.label-schema.build-date": "20251009",
                                                                            "org.label-schema.license": "GPLv2",
                                                                            "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                            "org.label-schema.schema-version": "1.0",
                                                                            "org.label-schema.vendor": "CentOS",
                                                                            "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                            "tcib_managed": "true"
                                                                       },
                                                                       "StopSignal": "SIGTERM"
                                                                  },
                                                                  "Version": "",
                                                                  "Author": "",
                                                                  "Architecture": "amd64",
                                                                  "Os": "linux",
                                                                  "Size": 1207007612,
                                                                  "VirtualSize": 1207007612,
                                                                  "GraphDriver": {
                                                                       "Name": "overlay",
                                                                       "Data": {
                                                                            "LowerDir": "/var/lib/containers/storage/overlay/db748b4cdb30a617a163bf186638b12157c2da5481b73c1cb39630dc1d937c96/diff:/var/lib/containers/storage/overlay/00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4/diff:/var/lib/containers/storage/overlay/00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",
                                                                            "UpperDir": "/var/lib/containers/storage/overlay/2f869e978a3914fac885699b921afa1af4d3c121bcdff90f8b5cbe23f9a11dc5/diff",
                                                                            "WorkDir": "/var/lib/containers/storage/overlay/2f869e978a3914fac885699b921afa1af4d3c121bcdff90f8b5cbe23f9a11dc5/work"
                                                                       }
                                                                  },
                                                                  "RootFS": {
                                                                       "Type": "layers",
                                                                       "Layers": [
                                                                            "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",
                                                                            "sha256:b783c4d6e5ad469a2da52d1f9f5642b4a99005a13a4f8f9855f9f6ed3dcfeddf",
                                                                            "sha256:4dd9b6336279e9bb933efd1972a9f033bb03b41daa8f211234bd85673a3f81ca",
                                                                            "sha256:1a40b8e27c60dd8893fdbc5fa30120a44a28acf4612c80547c0553f310501dd5",
                                                                            "sha256:2a9ec46259af39bbeda42274a61412e662148c4183ea89d81b5121453e60e59e"
                                                                       ]
                                                                  },
                                                                  "Labels": {
                                                                       "io.buildah.version": "1.41.3",
                                                                       "maintainer": "OpenStack Kubernetes Operator team",
                                                                       "org.label-schema.build-date": "20251009",
                                                                       "org.label-schema.license": "GPLv2",
                                                                       "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                       "org.label-schema.schema-version": "1.0",
                                                                       "org.label-schema.vendor": "CentOS",
                                                                       "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                       "tcib_managed": "true"
                                                                  },
                                                                  "Annotations": {},
                                                                  "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                  "User": "nova",
                                                                  "History": [
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.867908726Z",
                                                                            "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.868015697Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251009\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:07.890794359Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.75496777Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                            "comment": "FROM quay.io/centos/centos:stream9",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.754993311Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755013771Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755032692Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755082953Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755102984Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:47.201574917Z",
                                                                            "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:23.544915378Z",
                                                                            "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.119117694Z",
                                                                            "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.552286083Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.962220801Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:28.715322167Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.14263152Z",
                                                                            "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.560856504Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.004904301Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.42712225Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.823321864Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.238615281Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.634039923Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.0188837Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.395414523Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.805928482Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.197074343Z",
                                                                            "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.605614349Z",
                                                                            "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.372128638Z",
                                                                            "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.755904629Z",
                                                                            "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:38.104256162Z",
                                                                            "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:39.755515954Z",
                                                                            "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870322882Z",
                                                                            "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870431084Z",
                                                                            "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870453635Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870467555Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:43.347797515Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:12:52.112593379Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:13:34.238039873Z",
                                                                            "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:13:37.655425275Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:18:11.761349972Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:18:20.180231702Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:18:44.424181775Z",
                                                                            "created_by": "/bin/sh -c mkdir -p /etc/ssh && touch /etc/ssh/ssh_known_host",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:20:01.549627401Z",
                                                                            "created_by": "/bin/sh -c dnf install -y openstack-nova-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:20:10.269093781Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:29:37.419592755Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-nova-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:31:20.482454363Z",
                                                                            "created_by": "/bin/sh -c dnf -y install e2fsprogs xfsprogs xorriso iscsi-initiator-utils nfs-utils targetcli nvme-cli device-mapper-multipath ceph-common openssh-clients openstack-nova-compute openvswitch swtpm swtpm-tools && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:31:20.985389477Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:31:21.378472405Z",
                                                                            "created_by": "/bin/sh -c rm -f /etc/machine-id",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:31:21.378530597Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER nova",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:31:28.94629191Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       }
                                                                  ],
                                                                  "NamesHistory": [
                                                                       "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                  ]
                                                             }
                                                        ]
                                                        : quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 13 15:13:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:13:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:13:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:13:04 standalone.localdomain podman[455405]: 2025-10-13 15:13:04.764666302 +0000 UTC m=+0.120465941 container remove 277ca4f02baa3059149c2d2e0bbb3e32b4dab12d1d928360a61bed8dae578b3c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-nova-compute/images/17.1.9-1, vcs-ref=5fbf038504b4f996506e416c0a4ec212fba00b4d, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1, architecture=x86_64, version=17.1.9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '9002d51dff127f237c02241c0a80d08e-5ce329a35cfc30978bc40d323681fc5e'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-07-21T14:48:37, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step5, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 15:13:04 standalone.localdomain python3[455354]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute
Oct 13 15:13:04 standalone.localdomain podman[455418]: 2025-10-13 15:13:04.798617866 +0000 UTC m=+0.088014918 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:13:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:13:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2956: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:04 standalone.localdomain podman[455447]: 2025-10-13 15:13:04.823512831 +0000 UTC m=+0.038907698 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 13 15:13:04 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:13:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:13:04 standalone.localdomain podman[455418]: 2025-10-13 15:13:04.942317652 +0000 UTC m=+0.231714704 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:13:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:13:04 standalone.localdomain podman[455447]: 
Oct 13 15:13:04 standalone.localdomain podman[455447]: 2025-10-13 15:13:04.963700797 +0000 UTC m=+0.179095644 container create d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Oct 13 15:13:04 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:13:04 standalone.localdomain python3[455354]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume 
/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Oct 13 15:13:05 standalone.localdomain podman[455465]: 2025-10-13 15:13:05.002415869 +0000 UTC m=+0.175144531 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, version=17.1.9, name=rhosp17/openstack-swift-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-swift-container-container, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, release=1)
Oct 13 15:13:05 standalone.localdomain podman[455432]: 2025-10-13 15:13:04.962673148 +0000 UTC m=+0.229841692 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, container_name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9)
Oct 13 15:13:05 standalone.localdomain podman[455419]: 2025-10-13 15:13:05.018549031 +0000 UTC m=+0.298772749 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, version=17.1.9, distribution-scope=public, release=1, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, container_name=swift_object_server, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, batch=17.1_20250721.1, io.openshift.tags=rhosp osp 
openstack osp-17.1, name=rhosp17/openstack-swift-object, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-swift-object-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:13:05 standalone.localdomain podman[455432]: 2025-10-13 15:13:05.190651124 +0000 UTC m=+0.457819658 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, build-date=2025-07-21T16:11:22, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, container_name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=)
Oct 13 15:13:05 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:13:05 standalone.localdomain podman[455465]: 2025-10-13 15:13:05.215100636 +0000 UTC m=+0.387829238 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, batch=17.1_20250721.1, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-swift-container-container, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, version=17.1.9, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T15:54:32, io.openshift.expose-services=, container_name=swift_container_server, release=1)
Oct 13 15:13:05 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:13:05 standalone.localdomain podman[455419]: 2025-10-13 15:13:05.241620857 +0000 UTC m=+0.521844575 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, release=1, architecture=x86_64, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T14:56:28, version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:13:05 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:13:05 standalone.localdomain python3.9[455664]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:13:05 standalone.localdomain ceph-mon[29756]: pgmap v2956: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:05 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:13:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:13:06 standalone.localdomain python3.9[455774]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:13:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2957: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:13:06.928 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:13:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:13:06.930 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:13:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:13:06.931 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:13:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48087 DF PROTO=TCP SPT=40298 DPT=9100 SEQ=3413293543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB2C2E40000000001030307) 
Oct 13 15:13:08 standalone.localdomain ceph-mon[29756]: pgmap v2957: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48088 DF PROTO=TCP SPT=40298 DPT=9100 SEQ=3413293543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB2C6F70000000001030307) 
Oct 13 15:13:08 standalone.localdomain python3.9[455950]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760368386.6235435-1745-180630866265450/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:13:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2958: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:09 standalone.localdomain python3.9[456003]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:13:09 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:13:09 standalone.localdomain systemd-sysv-generator[456031]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:13:09 standalone.localdomain systemd-rc-local-generator[456022]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:13:09 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:13:10 standalone.localdomain ceph-mon[29756]: pgmap v2958: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48089 DF PROTO=TCP SPT=40298 DPT=9100 SEQ=3413293543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB2CEF60000000001030307) 
Oct 13 15:13:10 standalone.localdomain python3.9[456091]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:13:10 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:13:10 standalone.localdomain systemd-rc-local-generator[456119]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:13:10 standalone.localdomain systemd-sysv-generator[456122]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:13:10 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:13:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2959: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:13:11 standalone.localdomain systemd[1]: Starting nova_compute container...
Oct 13 15:13:11 standalone.localdomain podman[456132]: 2025-10-13 15:13:11.200362803 +0000 UTC m=+0.069762744 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 13 15:13:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:13:11 standalone.localdomain podman[456132]: 2025-10-13 15:13:11.284426917 +0000 UTC m=+0.153826898 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 15:13:11 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:13:11 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 13 15:13:11 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:13:11 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 13 15:13:11 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 15:13:11 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 15:13:11 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 15:13:11 standalone.localdomain podman[456133]: 2025-10-13 15:13:11.30366617 +0000 UTC m=+0.172862375 container init d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:13:11 standalone.localdomain podman[456133]: 2025-10-13 15:13:11.30995217 +0000 UTC m=+0.179148375 container start d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:13:11 standalone.localdomain podman[456133]: nova_compute
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: + sudo -E kolla_set_configs
Oct 13 15:13:11 standalone.localdomain systemd[1]: Started nova_compute container.
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Validating config file
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Copying service configuration files
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Deleting /etc/ceph
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Creating directory /etc/ceph
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /etc/ceph
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Writing out command to execute
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: ++ cat /run_command
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: + CMD=nova-compute
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: + ARGS=
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: + sudo kolla_copy_cacerts
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: + [[ ! -n '' ]]
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: + . kolla_extend_start
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: + echo 'Running command: '\''nova-compute'\'''
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: Running command: 'nova-compute'
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: + umask 0022
Oct 13 15:13:11 standalone.localdomain nova_compute[456164]: + exec nova-compute
Oct 13 15:13:12 standalone.localdomain python3.9[456284]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:13:12 standalone.localdomain ceph-mon[29756]: pgmap v2959: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2960: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:13 standalone.localdomain python3.9[456392]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.210 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.211 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.211 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.211 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.334 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.344 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:13:13 standalone.localdomain python3.9[456504]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.878 2 INFO nova.virt.driver [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.976 2 INFO nova.compute.provider_config [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.993 2 WARNING nova.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.993 2 DEBUG oslo_concurrency.lockutils [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.993 2 DEBUG oslo_concurrency.lockutils [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.994 2 DEBUG oslo_concurrency.lockutils [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.994 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.994 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.994 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.994 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.994 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.995 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.995 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.995 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.995 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.995 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.995 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.995 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.996 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.996 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.996 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.996 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.996 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.996 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.996 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.997 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] console_host                   = standalone.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.997 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.997 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.997 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.997 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.997 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.997 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.998 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.998 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.998 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.998 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.998 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.998 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.998 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.999 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.999 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.999 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.999 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.999 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:13 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.999 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] host                           = standalone.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:13.999 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.000 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.000 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.000 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.000 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.000 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.000 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.001 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.001 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20517 DF PROTO=TCP SPT=45834 DPT=9105 SEQ=2240547494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB2DCB60000000001030307) 
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.001 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.001 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.001 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.001 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.001 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.002 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.002 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.002 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.002 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.002 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.002 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.002 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.003 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.003 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.003 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.003 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.003 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.003 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.003 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.004 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.004 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.004 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.004 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.004 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.004 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.004 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.005 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.005 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.005 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.005 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.005 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.005 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.006 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.006 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.006 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.006 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.006 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.006 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.006 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.007 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.007 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.007 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.007 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.007 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.007 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.007 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.007 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.008 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.008 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.008 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.008 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.008 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.008 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.008 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.009 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.009 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.009 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.009 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.009 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.009 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.009 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.009 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.010 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.010 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.010 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.010 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.010 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.010 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.010 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.011 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.011 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.011 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.011 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.011 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.011 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.011 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.011 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.012 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.012 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.012 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.012 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.012 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.012 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.012 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.013 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.013 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.013 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.013 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.013 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.013 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.013 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.014 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.014 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.014 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.014 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.014 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.014 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.014 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.014 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.015 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.015 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.015 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.015 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.015 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.015 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.015 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.016 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.016 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.016 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.016 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.016 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.016 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.017 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.017 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.017 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.017 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.017 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.017 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.018 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.018 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.018 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.018 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.018 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.018 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.018 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.019 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.019 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.019 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.019 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.019 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.019 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.019 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.020 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.020 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.020 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.020 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.020 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.020 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.020 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.021 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.021 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.021 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.021 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.021 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.021 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.021 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.021 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.022 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.022 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.022 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.022 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.022 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.022 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.022 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.023 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.023 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.023 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.023 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.023 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.023 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.023 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.024 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.024 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.024 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.024 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.024 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.024 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.024 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.025 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.025 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.025 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.025 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.025 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.025 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.025 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.026 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.026 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.026 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.026 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.026 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.026 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.026 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.027 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.027 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.027 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.027 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.027 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.027 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.027 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.027 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.028 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.028 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.028 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.028 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.028 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.028 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.029 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.029 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.029 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.029 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.029 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.029 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.029 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.030 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.030 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.030 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.030 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.030 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.030 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.030 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.030 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.031 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.031 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.031 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.031 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.031 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.031 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.031 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.032 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.032 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.032 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.032 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.032 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.032 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.032 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.033 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.033 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.033 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.033 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.033 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.033 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.033 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.034 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.034 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.034 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.034 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.034 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.034 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.034 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.035 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.035 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.035 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.035 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.035 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.035 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.035 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.036 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.036 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.036 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.036 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.036 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.036 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.036 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.036 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.037 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.037 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.037 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.037 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.037 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.037 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.037 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.038 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.038 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.038 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.038 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.038 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.038 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.038 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.039 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.039 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.039 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.039 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.039 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.039 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.039 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.039 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.040 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.040 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.040 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.040 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.040 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.041 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.041 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.041 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.041 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.041 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.041 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.041 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.042 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.042 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.042 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.042 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.042 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.042 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.042 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.042 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.043 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.043 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.043 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.043 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.043 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.043 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.044 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.044 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.044 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.044 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.044 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.045 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.045 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.045 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.045 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.046 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.046 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.046 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.046 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.046 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.046 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.047 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.047 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.047 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.047 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.047 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.048 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.048 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.048 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.048 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.048 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.048 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.049 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.049 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.049 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.049 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.049 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.049 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.049 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.050 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.050 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.050 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.050 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.050 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.050 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.051 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.051 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.051 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.051 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.051 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.051 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.051 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.052 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.052 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.052 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.052 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.052 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.052 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.052 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.053 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.053 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.053 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.053 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.053 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.053 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.053 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.054 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.054 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.054 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.054 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.054 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.054 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.054 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.055 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.055 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.055 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.055 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.055 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.055 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.055 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.055 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.056 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.056 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.056 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.056 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.056 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.056 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.056 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.057 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.057 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.057 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.057 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.057 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.057 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.057 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.057 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.058 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.058 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.058 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.058 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.058 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.058 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.058 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.059 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.059 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.059 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.059 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.059 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.059 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.059 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.060 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.060 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.060 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.060 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.060 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.060 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.060 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.061 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.061 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.061 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.061 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.061 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.061 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.061 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.062 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.062 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.062 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.062 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.062 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.062 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.062 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.062 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.063 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.063 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.063 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.063 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.063 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.063 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.063 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.064 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.064 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.064 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.064 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.064 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.064 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.064 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.065 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.065 2 WARNING oslo_config.cfg [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: and ``live_migration_inbound_addr`` respectively.
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: ).  Its value may be silently ignored in the future.
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.065 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.065 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.065 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.065 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.066 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.066 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.066 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.066 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.066 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.066 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.067 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.067 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.067 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.067 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.067 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.067 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.067 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.068 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.068 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.rbd_secret_uuid        = 627e7f45-65aa-56de-94df-66eaee84a56e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.068 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.068 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.068 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.068 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.068 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.069 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.069 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.069 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.069 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.069 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.069 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.069 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.070 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.070 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.070 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.070 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.070 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.070 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.071 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.071 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.071 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.071 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.071 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.071 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.071 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.071 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.072 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.072 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.072 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.072 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.072 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.072 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.072 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.073 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.073 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.073 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.073 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.073 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.073 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.073 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.074 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.074 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.074 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.074 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.074 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.074 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.074 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.075 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.075 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.075 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.075 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.075 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.075 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.075 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.075 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.076 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.076 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.076 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.076 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.076 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.076 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.076 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.077 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.077 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.077 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.077 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.077 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.077 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.077 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.078 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.078 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.078 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.078 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.078 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.078 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.078 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.079 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.079 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.079 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.079 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.079 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.079 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.079 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.079 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.080 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.080 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.080 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.080 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.080 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.080 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.080 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.081 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.081 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.081 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.081 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.081 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.081 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.081 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.081 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.082 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.082 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.082 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.082 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.082 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.082 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.082 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.083 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.083 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.083 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.083 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.083 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.083 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.083 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.084 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.084 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.084 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.084 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.084 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.084 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.084 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.085 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.085 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.085 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.085 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.085 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.085 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.085 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.086 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.086 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.086 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.086 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.086 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.086 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.086 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.087 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.087 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.087 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.087 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.087 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.087 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.087 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.088 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.088 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.088 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.088 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.088 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.088 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.089 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.089 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.089 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.089 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.089 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.089 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.090 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.090 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.090 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.090 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.090 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.090 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.091 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.091 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.091 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.091 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.091 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.092 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.092 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.092 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.092 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.092 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.093 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.093 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.093 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.093 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.093 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.093 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.093 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.094 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.094 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.094 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.094 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.094 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.094 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.095 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.095 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.095 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.095 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.095 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.095 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.096 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.096 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.096 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.096 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.096 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.097 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.097 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.097 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.097 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.097 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.097 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.098 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.098 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.098 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.098 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.098 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.098 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.099 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.099 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.099 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.099 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.099 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.099 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.099 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.100 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.100 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.100 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.100 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.100 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.100 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.101 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.101 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.101 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.101 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.101 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.101 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.102 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.102 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.102 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.102 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.102 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.103 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.103 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.103 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.103 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.103 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.104 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.104 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.104 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.104 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.104 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.104 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.104 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.105 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.105 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.105 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.105 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.105 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.105 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.106 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.106 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.106 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.106 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.106 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.106 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.106 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.107 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.107 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.107 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.107 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.107 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.107 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.107 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.108 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.108 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.108 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.108 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.108 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.108 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.108 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.109 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.109 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.109 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.109 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.109 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.109 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.110 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.110 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.110 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.110 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.110 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.110 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.111 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.111 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.111 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.111 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.111 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.111 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.112 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.112 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.112 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.112 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.112 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.112 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.112 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.113 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.113 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.113 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.113 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.113 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.113 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.114 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.114 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.114 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.114 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.114 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.114 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.114 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.115 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.115 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.115 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.115 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.115 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.115 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.115 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.116 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.116 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.116 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.116 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.116 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.117 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.117 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.117 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.117 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.117 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.117 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.118 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.118 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.118 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.118 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.118 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.118 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.118 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.119 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.119 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.119 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.119 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.119 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.119 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.119 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.120 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.120 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.120 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.120 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.120 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.120 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.120 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.121 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.121 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.121 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.121 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.121 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.121 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.122 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.122 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.122 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.122 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.122 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.122 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.122 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.122 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.123 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.123 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.123 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.123 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.123 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.123 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.124 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.124 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.124 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.124 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.124 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.124 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.125 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.125 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.125 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.125 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.125 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.125 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.125 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.126 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.126 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.126 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.126 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.127 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.127 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.127 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.127 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.127 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.127 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.128 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.128 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.128 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.128 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.128 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.128 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.128 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.128 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.129 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.129 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.129 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.129 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.129 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.129 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.129 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.130 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.130 2 DEBUG oslo_service.service [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.131 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.149 2 INFO nova.virt.node [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Determined node identity e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 from /var/lib/nova/compute_id
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.149 2 DEBUG nova.virt.libvirt.host [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.150 2 DEBUG nova.virt.libvirt.host [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.150 2 DEBUG nova.virt.libvirt.host [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.150 2 DEBUG nova.virt.libvirt.host [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.160 2 DEBUG nova.virt.libvirt.host [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb6a9fdeb50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.163 2 DEBUG nova.virt.libvirt.host [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb6a9fdeb50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.164 2 INFO nova.virt.libvirt.driver [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Connection event '1' reason 'None'
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.184 2 INFO nova.virt.libvirt.host [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Libvirt host capabilities <capabilities>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <host>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <uuid>6635e8bc-8f5a-4138-a382-e7fa61dbcec1</uuid>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <cpu>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <arch>x86_64</arch>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model>EPYC-Rome-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <vendor>AMD</vendor>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <microcode version='16777317'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <signature family='23' model='49' stepping='0'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='x2apic'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='tsc-deadline'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='osxsave'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='hypervisor'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='tsc_adjust'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='spec-ctrl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='stibp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='arch-capabilities'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='ssbd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='cmp_legacy'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='topoext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='virt-ssbd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='lbrv'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='tsc-scale'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='vmcb-clean'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='pause-filter'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='pfthreshold'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='svme-addr-chk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='rdctl-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='skip-l1dfl-vmentry'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='mds-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature name='pschange-mc-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <pages unit='KiB' size='4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <pages unit='KiB' size='2048'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <pages unit='KiB' size='1048576'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </cpu>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <power_management>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <suspend_mem/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <suspend_disk/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <suspend_hybrid/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </power_management>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <iommu support='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <migration_features>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <live/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <uri_transports>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <uri_transport>tcp</uri_transport>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <uri_transport>rdma</uri_transport>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </uri_transports>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </migration_features>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <topology>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <cells num='1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <cell id='0'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:           <memory unit='KiB'>16116612</memory>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:           <pages unit='KiB' size='4'>4029153</pages>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:           <pages unit='KiB' size='2048'>0</pages>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:           <distances>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:             <sibling id='0' value='10'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:           </distances>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:           <cpus num='8'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:           </cpus>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         </cell>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </cells>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </topology>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <cache>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </cache>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <secmodel>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model>selinux</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <doi>0</doi>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </secmodel>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <secmodel>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model>dac</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <doi>0</doi>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </secmodel>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </host>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <guest>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <os_type>hvm</os_type>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <arch name='i686'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <wordsize>32</wordsize>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <domain type='qemu'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <domain type='kvm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </arch>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <features>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <pae/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <nonpae/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <acpi default='on' toggle='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <apic default='on' toggle='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <cpuselection/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <deviceboot/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <disksnapshot default='on' toggle='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <externalSnapshot/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </features>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </guest>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <guest>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <os_type>hvm</os_type>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <arch name='x86_64'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <wordsize>64</wordsize>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <domain type='qemu'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <domain type='kvm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </arch>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <features>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <acpi default='on' toggle='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <apic default='on' toggle='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <cpuselection/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <deviceboot/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <disksnapshot default='on' toggle='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <externalSnapshot/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </features>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </guest>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: </capabilities>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.192 2 DEBUG nova.virt.libvirt.host [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.195 2 DEBUG nova.virt.libvirt.volume.mount [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.206 2 DEBUG nova.virt.libvirt.host [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: <domainCapabilities>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <path>/usr/libexec/qemu-kvm</path>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <domain>kvm</domain>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <arch>i686</arch>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <vcpu max='1024'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <iothreads supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <os supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <enum name='firmware'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <loader supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>rom</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>pflash</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='readonly'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>yes</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>no</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='secure'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>no</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </loader>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </os>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <cpu>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='host-passthrough' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='hostPassthroughMigratable'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>on</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>off</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='maximum' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='maximumMigratable'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>on</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>off</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='host-model' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <vendor>AMD</vendor>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='x2apic'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='tsc-deadline'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='hypervisor'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='tsc_adjust'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='spec-ctrl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='stibp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='arch-capabilities'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='ssbd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='cmp_legacy'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='overflow-recov'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='succor'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='ibrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='amd-ssbd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='virt-ssbd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='lbrv'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='tsc-scale'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='vmcb-clean'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='pause-filter'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='pfthreshold'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='svme-addr-chk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='rdctl-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='mds-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='pschange-mc-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='disable' name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='custom' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cooperlake'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cooperlake-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cooperlake-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Dhyana-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Genoa'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amd-psfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='auto-ibrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='stibp-always-on'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Genoa-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amd-psfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='auto-ibrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='stibp-always-on'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Milan'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Milan-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Milan-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amd-psfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='stibp-always-on'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='GraniteRapids'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='prefetchiti'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='GraniteRapids-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='prefetchiti'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='GraniteRapids-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10-128'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10-256'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10-512'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='prefetchiti'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v6'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v7'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='KnightsMill'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512er'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512pf'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='KnightsMill-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512er'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512pf'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G4-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tbm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G5-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tbm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SierraForest'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cmpccxadd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SierraForest-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cmpccxadd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='athlon'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='athlon-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='core2duo'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='core2duo-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='coreduo'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='coreduo-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='n270'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='n270-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='phenom'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='phenom-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </cpu>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <memoryBacking supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <enum name='sourceType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>file</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>anonymous</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>memfd</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </memoryBacking>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <devices>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <disk supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='diskDevice'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>disk</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>cdrom</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>floppy</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>lun</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='bus'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>fdc</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>scsi</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>usb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>sata</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-non-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </disk>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <graphics supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vnc</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>egl-headless</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>dbus</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </graphics>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <video supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='modelType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vga</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>cirrus</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>none</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>bochs</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>ramfb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </video>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <hostdev supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='mode'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>subsystem</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='startupPolicy'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>default</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>mandatory</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>requisite</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>optional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='subsysType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>usb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>pci</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>scsi</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='capsType'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='pciBackend'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </hostdev>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <rng supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-non-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendModel'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>random</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>egd</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>builtin</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </rng>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <filesystem supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='driverType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>path</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>handle</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtiofs</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </filesystem>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <tpm supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>tpm-tis</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>tpm-crb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendModel'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>emulator</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>external</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendVersion'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>2.0</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </tpm>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <redirdev supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='bus'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>usb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </redirdev>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <channel supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>pty</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>unix</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </channel>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <crypto supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>qemu</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendModel'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>builtin</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </crypto>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <interface supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>default</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>passt</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </interface>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <panic supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>isa</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>hyperv</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </panic>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </devices>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <features>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <gic supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <vmcoreinfo supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <genid supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <backingStoreInput supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <backup supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <async-teardown supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <ps2 supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <sev supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <sgx supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <hyperv supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='features'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>relaxed</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vapic</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>spinlocks</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vpindex</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>runtime</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>synic</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>stimer</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>reset</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vendor_id</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>frequencies</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>reenlightenment</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>tlbflush</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>ipi</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>avic</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>emsr_bitmap</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>xmm_input</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </hyperv>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <launchSecurity supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </features>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: </domainCapabilities>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.213 2 DEBUG nova.virt.libvirt.host [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: <domainCapabilities>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <path>/usr/libexec/qemu-kvm</path>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <domain>kvm</domain>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <arch>i686</arch>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <vcpu max='240'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <iothreads supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <os supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <enum name='firmware'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <loader supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>rom</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>pflash</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='readonly'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>yes</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>no</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='secure'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>no</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </loader>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </os>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <cpu>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='host-passthrough' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='hostPassthroughMigratable'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>on</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>off</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='maximum' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='maximumMigratable'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>on</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>off</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='host-model' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <vendor>AMD</vendor>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='x2apic'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='tsc-deadline'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='hypervisor'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='tsc_adjust'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='spec-ctrl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='stibp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='arch-capabilities'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='ssbd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='cmp_legacy'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='overflow-recov'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='succor'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='ibrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='amd-ssbd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='virt-ssbd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='lbrv'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='tsc-scale'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='vmcb-clean'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='pause-filter'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='pfthreshold'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='svme-addr-chk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='rdctl-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='mds-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='pschange-mc-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='disable' name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='custom' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cooperlake'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cooperlake-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cooperlake-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Dhyana-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Genoa'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amd-psfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='auto-ibrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='stibp-always-on'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Genoa-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amd-psfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='auto-ibrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='stibp-always-on'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Milan'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Milan-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Milan-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amd-psfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='stibp-always-on'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='GraniteRapids'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='prefetchiti'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='GraniteRapids-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='prefetchiti'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='GraniteRapids-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10-128'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10-256'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10-512'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='prefetchiti'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v6'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v7'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='KnightsMill'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512er'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512pf'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='KnightsMill-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512er'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512pf'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G4-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tbm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G5-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tbm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SierraForest'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cmpccxadd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SierraForest-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cmpccxadd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='athlon'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='athlon-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='core2duo'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='core2duo-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='coreduo'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='coreduo-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='n270'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='n270-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='phenom'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='phenom-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </cpu>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <memoryBacking supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <enum name='sourceType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>file</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>anonymous</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>memfd</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </memoryBacking>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <devices>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <disk supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='diskDevice'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>disk</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>cdrom</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>floppy</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>lun</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='bus'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>ide</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>fdc</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>scsi</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>usb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>sata</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-non-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </disk>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <graphics supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vnc</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>egl-headless</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>dbus</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </graphics>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <video supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='modelType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vga</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>cirrus</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>none</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>bochs</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>ramfb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </video>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <hostdev supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='mode'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>subsystem</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='startupPolicy'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>default</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>mandatory</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>requisite</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>optional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='subsysType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>usb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>pci</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>scsi</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='capsType'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='pciBackend'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </hostdev>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <rng supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-non-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendModel'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>random</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>egd</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>builtin</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </rng>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <filesystem supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='driverType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>path</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>handle</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtiofs</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </filesystem>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <tpm supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>tpm-tis</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>tpm-crb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendModel'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>emulator</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>external</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendVersion'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>2.0</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </tpm>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <redirdev supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='bus'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>usb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </redirdev>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <channel supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>pty</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>unix</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </channel>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <crypto supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>qemu</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendModel'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>builtin</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </crypto>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <interface supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>default</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>passt</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </interface>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <panic supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>isa</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>hyperv</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </panic>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </devices>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <features>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <gic supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <vmcoreinfo supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <genid supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <backingStoreInput supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <backup supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <async-teardown supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <ps2 supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <sev supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <sgx supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <hyperv supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='features'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>relaxed</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vapic</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>spinlocks</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vpindex</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>runtime</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>synic</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>stimer</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>reset</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vendor_id</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>frequencies</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>reenlightenment</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>tlbflush</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>ipi</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>avic</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>emsr_bitmap</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>xmm_input</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </hyperv>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <launchSecurity supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </features>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: </domainCapabilities>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.248 2 DEBUG nova.virt.libvirt.host [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.253 2 DEBUG nova.virt.libvirt.host [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: <domainCapabilities>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <path>/usr/libexec/qemu-kvm</path>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <domain>kvm</domain>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <arch>x86_64</arch>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <vcpu max='1024'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <iothreads supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <os supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <enum name='firmware'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>efi</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <loader supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>rom</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>pflash</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='readonly'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>yes</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>no</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='secure'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>yes</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>no</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </loader>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </os>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <cpu>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='host-passthrough' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='hostPassthroughMigratable'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>on</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>off</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='maximum' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='maximumMigratable'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>on</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>off</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='host-model' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <vendor>AMD</vendor>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='x2apic'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='tsc-deadline'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='hypervisor'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='tsc_adjust'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='spec-ctrl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='stibp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='arch-capabilities'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='ssbd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='cmp_legacy'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='overflow-recov'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='succor'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='ibrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='amd-ssbd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='virt-ssbd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='lbrv'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='tsc-scale'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='vmcb-clean'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='pause-filter'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='pfthreshold'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='svme-addr-chk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='rdctl-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='mds-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='pschange-mc-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='disable' name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='custom' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cooperlake'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cooperlake-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cooperlake-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Dhyana-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Genoa'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amd-psfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='auto-ibrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='stibp-always-on'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Genoa-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amd-psfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='auto-ibrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='stibp-always-on'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Milan'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Milan-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Milan-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amd-psfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='stibp-always-on'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='GraniteRapids'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='prefetchiti'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='GraniteRapids-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='prefetchiti'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='GraniteRapids-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10-128'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10-256'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10-512'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='prefetchiti'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v6'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v7'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='KnightsMill'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512er'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512pf'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='KnightsMill-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512er'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512pf'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G4-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tbm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G5-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tbm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain ceph-mon[29756]: pgmap v2960: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SierraForest'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cmpccxadd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SierraForest-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cmpccxadd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='athlon'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='athlon-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='core2duo'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='core2duo-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='coreduo'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='coreduo-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='n270'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='n270-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='phenom'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='phenom-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </cpu>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <memoryBacking supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <enum name='sourceType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>file</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>anonymous</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>memfd</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </memoryBacking>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <devices>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <disk supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='diskDevice'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>disk</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>cdrom</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>floppy</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>lun</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='bus'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>fdc</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>scsi</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>usb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>sata</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-non-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </disk>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <graphics supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vnc</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>egl-headless</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>dbus</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </graphics>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <video supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='modelType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vga</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>cirrus</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>none</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>bochs</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>ramfb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </video>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <hostdev supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='mode'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>subsystem</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='startupPolicy'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>default</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>mandatory</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>requisite</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>optional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='subsysType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>usb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>pci</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>scsi</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='capsType'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='pciBackend'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </hostdev>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <rng supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-non-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendModel'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>random</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>egd</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>builtin</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </rng>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <filesystem supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='driverType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>path</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>handle</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtiofs</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </filesystem>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <tpm supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>tpm-tis</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>tpm-crb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendModel'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>emulator</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>external</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendVersion'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>2.0</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </tpm>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <redirdev supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='bus'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>usb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </redirdev>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <channel supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>pty</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>unix</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </channel>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <crypto supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>qemu</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendModel'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>builtin</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </crypto>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <interface supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>default</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>passt</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </interface>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <panic supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>isa</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>hyperv</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </panic>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </devices>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <features>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <gic supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <vmcoreinfo supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <genid supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <backingStoreInput supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <backup supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <async-teardown supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <ps2 supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <sev supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <sgx supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <hyperv supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='features'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>relaxed</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vapic</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>spinlocks</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vpindex</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>runtime</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>synic</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>stimer</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>reset</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vendor_id</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>frequencies</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>reenlightenment</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>tlbflush</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>ipi</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>avic</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>emsr_bitmap</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>xmm_input</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </hyperv>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <launchSecurity supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </features>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: </domainCapabilities>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.320 2 DEBUG nova.virt.libvirt.host [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: <domainCapabilities>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <path>/usr/libexec/qemu-kvm</path>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <domain>kvm</domain>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <arch>x86_64</arch>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <vcpu max='240'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <iothreads supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <os supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <enum name='firmware'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <loader supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>rom</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>pflash</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='readonly'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>yes</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>no</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='secure'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>no</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </loader>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </os>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <cpu>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='host-passthrough' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='hostPassthroughMigratable'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>on</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>off</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='maximum' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='maximumMigratable'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>on</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>off</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='host-model' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <vendor>AMD</vendor>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='x2apic'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='tsc-deadline'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='hypervisor'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='tsc_adjust'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='spec-ctrl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='stibp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='arch-capabilities'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='ssbd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='cmp_legacy'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='overflow-recov'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='succor'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='ibrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='amd-ssbd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='virt-ssbd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='lbrv'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='tsc-scale'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='vmcb-clean'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='pause-filter'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='pfthreshold'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='svme-addr-chk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='rdctl-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='mds-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='require' name='pschange-mc-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <feature policy='disable' name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <mode name='custom' supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Broadwell-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cascadelake-Server-v5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cooperlake'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cooperlake-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Cooperlake-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Denverton-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Dhyana-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Genoa'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amd-psfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='auto-ibrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='stibp-always-on'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Genoa-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amd-psfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='auto-ibrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='stibp-always-on'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Milan'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Milan-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Milan-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amd-psfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='stibp-always-on'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-Rome-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='EPYC-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='GraniteRapids'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='prefetchiti'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='GraniteRapids-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='prefetchiti'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='GraniteRapids-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10-128'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10-256'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx10-512'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='prefetchiti'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Haswell-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-noTSX'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v6'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Icelake-Server-v7'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='IvyBridge-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='KnightsMill'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512er'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512pf'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='KnightsMill-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512er'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512pf'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G4-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tbm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Opteron_G5-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fma4'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tbm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xop'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SapphireRapids-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='amx-tile'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-bf16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-fp16'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bitalg'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrc'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fzrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='la57'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='taa-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xfd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SierraForest'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cmpccxadd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='SierraForest-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ifma'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cmpccxadd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fbsdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='fsrs'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ibrs-all'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mcdt-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pbrsb-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='psdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='serialize'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vaes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Client-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='hle'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='rtm'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Skylake-Server-v5'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512bw'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512cd'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512dq'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512f'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='avx512vl'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='invpcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pcid'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='pku'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='mpx'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v2'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v3'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='core-capability'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='split-lock-detect'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='Snowridge-v4'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='cldemote'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='erms'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='gfni'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdir64b'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='movdiri'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='xsaves'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='athlon'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='athlon-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='core2duo'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='core2duo-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='coreduo'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='coreduo-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='n270'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='n270-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='ss'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='phenom'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <blockers model='phenom-v1'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnow'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <feature name='3dnowext'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </blockers>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </mode>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </cpu>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <memoryBacking supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <enum name='sourceType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>file</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>anonymous</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <value>memfd</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </memoryBacking>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <devices>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <disk supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='diskDevice'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>disk</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>cdrom</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>floppy</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>lun</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='bus'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>ide</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>fdc</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>scsi</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>usb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>sata</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-non-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </disk>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <graphics supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vnc</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>egl-headless</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>dbus</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </graphics>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <video supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='modelType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vga</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>cirrus</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>none</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>bochs</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>ramfb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </video>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <hostdev supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='mode'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>subsystem</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='startupPolicy'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>default</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>mandatory</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>requisite</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>optional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='subsysType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>usb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>pci</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>scsi</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='capsType'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='pciBackend'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </hostdev>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <rng supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtio-non-transitional</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendModel'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>random</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>egd</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>builtin</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </rng>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <filesystem supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='driverType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>path</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>handle</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>virtiofs</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </filesystem>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <tpm supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>tpm-tis</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>tpm-crb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendModel'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>emulator</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>external</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendVersion'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>2.0</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </tpm>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <redirdev supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='bus'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>usb</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </redirdev>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <channel supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>pty</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>unix</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </channel>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <crypto supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='type'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>qemu</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendModel'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>builtin</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </crypto>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <interface supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='backendType'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>default</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>passt</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </interface>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <panic supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='model'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>isa</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>hyperv</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </panic>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </devices>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   <features>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <gic supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <vmcoreinfo supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <genid supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <backingStoreInput supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <backup supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <async-teardown supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <ps2 supported='yes'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <sev supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <sgx supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <hyperv supported='yes'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       <enum name='features'>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>relaxed</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vapic</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>spinlocks</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vpindex</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>runtime</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>synic</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>stimer</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>reset</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>vendor_id</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>frequencies</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>reenlightenment</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>tlbflush</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>ipi</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>avic</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>emsr_bitmap</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:         <value>xmm_input</value>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:       </enum>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     </hyperv>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:     <launchSecurity supported='no'/>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:   </features>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: </domainCapabilities>
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.380 2 DEBUG nova.virt.libvirt.host [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.380 2 INFO nova.virt.libvirt.host [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Secure Boot support detected
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.384 2 INFO nova.virt.libvirt.driver [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.401 2 DEBUG nova.virt.libvirt.driver [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.448 2 INFO nova.virt.node [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Determined node identity e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 from /var/lib/nova/compute_id
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.474 2 DEBUG nova.compute.manager [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Verified node e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 matches my host standalone.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.516 2 DEBUG nova.compute.manager [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.523 2 DEBUG nova.virt.libvirt.vif [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T14:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test',display_name='test',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=<?>,hidden=False,host='standalone.localdomain',hostname='test',id=1,image_ref='9de40855-8304-4f15-8bc5-8a8a2b61b79b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T14:16:19Z,launched_on='standalone.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=<?>,node='standalone.localdomain',numa_topology=None,old_flavor=<?>,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e44641a80bcb466cb3dd688e48b72d8e',ramdisk_id='',reservation_id='r-iqeurhsr',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata=<?>,tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T14:16:19Z,user_data=None,user_id='3d9eeef137fc40c78332936114fd7ee4',uuid=54a46fec-332e-42f9-83ed-88e763d13f63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.523 2 DEBUG nova.network.os_vif_util [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Converting VIF {"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.524 2 DEBUG nova.network.os_vif_util [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.524 2 DEBUG os_vif [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.559 2 DEBUG ovsdbapp.backend.ovs_idl [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.560 2 DEBUG ovsdbapp.backend.ovs_idl [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.560 2 DEBUG ovsdbapp.backend.ovs_idl [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] [POLLOUT] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.581 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.581 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:13:14 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:14.582 2 INFO oslo.privsep.daemon [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmps_eebot4/privsep.sock']
Oct 13 15:13:14 standalone.localdomain python3.9[456637]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 13 15:13:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2961: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:14 standalone.localdomain systemd-journald[48591]: Field hash table of /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal has a fill level at 121.6 (405 of 333 items), suggesting rotation.
Oct 13 15:13:14 standalone.localdomain systemd-journald[48591]: /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 13 15:13:14 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.198 2 INFO oslo.privsep.daemon [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Spawned new privsep daemon via rootwrap
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.088 40 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.091 40 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.093 40 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.093 40 INFO oslo.privsep.daemon [-] privsep daemon running as pid 40
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.467 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a49767c-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.467 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a49767c-fb, col_values=(('external_ids', {'iface-id': '8a49767c-fb09-4185-95da-4261d8043fad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:4e:c0', 'vm-uuid': '54a46fec-332e-42f9-83ed-88e763d13f63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.467 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.468 2 INFO os_vif [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb')
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.468 2 DEBUG nova.compute.manager [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.471 2 DEBUG nova.compute.manager [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.472 2 DEBUG nova.compute.manager [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.476 2 DEBUG nova.virt.libvirt.vif [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T14:17:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='bfv-server',display_name='bfv-server',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=<?>,hidden=False,host='standalone.localdomain',hostname='bfv-server',id=2,image_ref='',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T14:17:39Z,launched_on='standalone.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=<?>,node='standalone.localdomain',numa_topology=None,old_flavor=<?>,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e44641a80bcb466cb3dd688e48b72d8e',ramdisk_id='',reservation_id='r-n0khd6io',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata=<?>,tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T14:17:40Z,user_data=None,user_id='3d9eeef137fc40c78332936114fd7ee4',uuid=8f68d5aa-abc4-451d-89d2-f5342b71831c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.476 2 DEBUG nova.network.os_vif_util [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Converting VIF {"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.477 2 DEBUG nova.network.os_vif_util [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:20:5b,bridge_name='br-int',has_traffic_filtering=True,id=da3e5a61-7adb-481b-b7f5-703c3939cde2,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda3e5a61-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.477 2 DEBUG os_vif [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:20:5b,bridge_name='br-int',has_traffic_filtering=True,id=da3e5a61-7adb-481b-b7f5-703c3939cde2,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda3e5a61-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.481 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda3e5a61-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.481 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda3e5a61-7a, col_values=(('external_ids', {'iface-id': 'da3e5a61-7adb-481b-b7f5-703c3939cde2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:20:5b', 'vm-uuid': '8f68d5aa-abc4-451d-89d2-f5342b71831c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.481 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.481 2 INFO os_vif [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:20:5b,bridge_name='br-int',has_traffic_filtering=True,id=da3e5a61-7adb-481b-b7f5-703c3939cde2,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda3e5a61-7a')
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.482 2 DEBUG nova.compute.manager [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.484 2 DEBUG nova.compute.manager [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.484 2 INFO nova.compute.manager [None req-c386edbc-3888-4a82-b59f-865ef64df81d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 13 15:13:15 standalone.localdomain python3.9[456778]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:13:15 standalone.localdomain systemd[1]: Stopping nova_compute container...
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.768 2 DEBUG oslo_concurrency.lockutils [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.768 2 DEBUG oslo_concurrency.lockutils [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:13:15 standalone.localdomain nova_compute[456164]: 2025-10-13 15:13:15.768 2 DEBUG oslo_concurrency.lockutils [None req-a4135b7a-a95b-4bff-a460-c1f23b51ea81 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:13:16 standalone.localdomain virtqemud[425408]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 13 15:13:16 standalone.localdomain virtqemud[425408]: hostname: standalone.localdomain
Oct 13 15:13:16 standalone.localdomain virtqemud[425408]: End of file while reading data: Input/output error
Oct 13 15:13:16 standalone.localdomain systemd[1]: libpod-d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719.scope: Deactivated successfully.
Oct 13 15:13:16 standalone.localdomain systemd[1]: libpod-d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719.scope: Consumed 3.732s CPU time.
Oct 13 15:13:16 standalone.localdomain podman[456782]: 2025-10-13 15:13:16.172943452 +0000 UTC m=+0.466917438 container died d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=nova_compute, io.buildah.version=1.41.3)
Oct 13 15:13:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:13:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719-userdata-shm.mount: Deactivated successfully.
Oct 13 15:13:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc-merged.mount: Deactivated successfully.
Oct 13 15:13:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:13:16 standalone.localdomain podman[456799]: 2025-10-13 15:13:16.382395437 +0000 UTC m=+0.187260539 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:13:16 standalone.localdomain ceph-mon[29756]: pgmap v2961: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:16 standalone.localdomain podman[456799]: 2025-10-13 15:13:16.749105807 +0000 UTC m=+0.553970869 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:13:16 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:13:16 standalone.localdomain podman[456782]: 2025-10-13 15:13:16.815083281 +0000 UTC m=+1.109057247 container cleanup d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Oct 13 15:13:16 standalone.localdomain podman[456782]: nova_compute
Oct 13 15:13:16 standalone.localdomain podman[456828]: 2025-10-13 15:13:16.911868271 +0000 UTC m=+0.066658166 container cleanup d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:13:16 standalone.localdomain podman[456828]: nova_compute
Oct 13 15:13:16 standalone.localdomain systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 13 15:13:16 standalone.localdomain systemd[1]: Stopped nova_compute container.
Oct 13 15:13:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2962: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:16 standalone.localdomain systemd[1]: Starting nova_compute container...
Oct 13 15:13:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:13:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 13 15:13:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 13 15:13:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 15:13:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 15:13:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 15:13:17 standalone.localdomain podman[456841]: 2025-10-13 15:13:17.058635975 +0000 UTC m=+0.121811919 container init d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, container_name=nova_compute)
Oct 13 15:13:17 standalone.localdomain podman[456841]: 2025-10-13 15:13:17.066440839 +0000 UTC m=+0.129616783 container start d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3)
Oct 13 15:13:17 standalone.localdomain podman[456841]: nova_compute
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: + sudo -E kolla_set_configs
Oct 13 15:13:17 standalone.localdomain systemd[1]: Started nova_compute container.
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Validating config file
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Copying service configuration files
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Deleting /etc/ceph
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Creating directory /etc/ceph
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /etc/ceph
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Writing out command to execute
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: ++ cat /run_command
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: + CMD=nova-compute
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: + ARGS=
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: + sudo kolla_copy_cacerts
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: + [[ ! -n '' ]]
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: + . kolla_extend_start
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: + echo 'Running command: '\''nova-compute'\'''
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: Running command: 'nova-compute'
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: + umask 0022
Oct 13 15:13:17 standalone.localdomain nova_compute[456855]: + exec nova-compute
Oct 13 15:13:17 standalone.localdomain python3.9[456974]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 13 15:13:17 standalone.localdomain systemd[1]: Started libpod-conmon-10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d.scope.
Oct 13 15:13:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:13:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb81b6705c127d8ae6a85a0c5aa00e223afd3bd2e8fc0274e5929ac927615f25/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 13 15:13:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb81b6705c127d8ae6a85a0c5aa00e223afd3bd2e8fc0274e5929ac927615f25/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 15:13:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb81b6705c127d8ae6a85a0c5aa00e223afd3bd2e8fc0274e5929ac927615f25/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 13 15:13:18 standalone.localdomain podman[457001]: 2025-10-13 15:13:18.017331685 +0000 UTC m=+0.129180452 container init 10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:13:18 standalone.localdomain podman[457001]: 2025-10-13 15:13:18.030309177 +0000 UTC m=+0.142157944 container start 10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:13:18 standalone.localdomain python3.9[456974]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Applying nova statedir ownership
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 0:0 to 42436:42436
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 0:0 to 42436:42436
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/54a46fec-332e-42f9-83ed-88e763d13f63/
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/54a46fec-332e-42f9-83ed-88e763d13f63 already 42436:42436
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/54a46fec-332e-42f9-83ed-88e763d13f63 to system_u:object_r:container_file_t:s0
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/54a46fec-332e-42f9-83ed-88e763d13f63/console.log
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/_base already 42436:42436
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/_base to system_u:object_r:container_file_t:s0
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/468faea603e9a8e7d8ab655f127ad3cb3131d39b
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/locks already 42436:42436
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/locks to system_u:object_r:container_file_t:s0
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-468faea603e9a8e7d8ab655f127ad3cb3131d39b
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-ephemeral_1_0706d66
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/8f68d5aa-abc4-451d-89d2-f5342b71831c/
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/8f68d5aa-abc4-451d-89d2-f5342b71831c already 42436:42436
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/8f68d5aa-abc4-451d-89d2-f5342b71831c to system_u:object_r:container_file_t:s0
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/8f68d5aa-abc4-451d-89d2-f5342b71831c/console.log
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/7dbe5bae7bc27ef07490c629ec1f09edaa9e8c135ff89c3f08f1e44f39cf5928
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/4e1c8ff05ea4b4d3fd130f5de9a8e0ba3414c26740fc6ca454deea0f548914ea
Oct 13 15:13:18 standalone.localdomain nova_compute_init[457021]: INFO:nova_statedir:Nova statedir ownership complete
Oct 13 15:13:18 standalone.localdomain systemd[1]: libpod-10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d.scope: Deactivated successfully.
Oct 13 15:13:18 standalone.localdomain podman[457022]: 2025-10-13 15:13:18.10111635 +0000 UTC m=+0.051141520 container died 10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:13:18 standalone.localdomain podman[457031]: 2025-10-13 15:13:18.161452483 +0000 UTC m=+0.075086328 container cleanup 10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:13:18 standalone.localdomain systemd[1]: libpod-conmon-10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d.scope: Deactivated successfully.
Oct 13 15:13:18 standalone.localdomain ceph-mon[29756]: pgmap v2962: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:13:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1194125640' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:13:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:13:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1194125640' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:13:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46768 DF PROTO=TCP SPT=41228 DPT=9882 SEQ=3114247536 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB2EEF60000000001030307) 
Oct 13 15:13:18 standalone.localdomain sshd[430533]: pam_unix(sshd:session): session closed for user root
Oct 13 15:13:18 standalone.localdomain systemd[1]: session-293.scope: Deactivated successfully.
Oct 13 15:13:18 standalone.localdomain systemd[1]: session-293.scope: Consumed 2min 24.267s CPU time.
Oct 13 15:13:18 standalone.localdomain systemd-logind[45629]: Session 293 logged out. Waiting for processes to exit.
Oct 13 15:13:18 standalone.localdomain systemd-logind[45629]: Removed session 293.
Oct 13 15:13:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2963: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fb81b6705c127d8ae6a85a0c5aa00e223afd3bd2e8fc0274e5929ac927615f25-merged.mount: Deactivated successfully.
Oct 13 15:13:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d-userdata-shm.mount: Deactivated successfully.
Oct 13 15:13:18 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:18.937 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 13 15:13:18 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:18.937 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 13 15:13:18 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:18.937 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 13 15:13:18 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:18.937 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.065 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.081 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:13:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1194125640' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:13:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1194125640' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.840 2 INFO nova.virt.driver [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.926 2 INFO nova.compute.provider_config [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.940 2 WARNING nova.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.940 2 DEBUG oslo_concurrency.lockutils [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.940 2 DEBUG oslo_concurrency.lockutils [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.940 2 DEBUG oslo_concurrency.lockutils [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.941 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.941 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.941 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.941 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.942 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.942 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.942 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.942 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.942 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.942 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.942 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.943 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.943 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.943 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.943 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.943 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.943 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.943 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.944 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.944 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] console_host                   = standalone.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.944 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.944 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.944 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.944 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.944 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.944 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.945 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.945 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.945 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.945 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.945 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.946 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.946 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.946 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.946 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.946 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.946 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.946 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.947 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] host                           = standalone.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.947 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.947 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.947 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.947 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.947 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.947 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.948 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.948 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.948 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.948 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.948 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.948 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.948 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.948 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.949 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.949 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.949 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.949 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.949 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.949 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.949 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.949 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.950 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.950 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.950 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.950 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.950 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.950 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.950 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.951 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.951 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.951 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.951 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.951 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.951 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.951 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.952 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.952 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.952 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.952 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.952 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.952 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.952 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.952 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.953 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.953 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.953 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.953 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.953 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.953 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.953 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.953 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.954 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.954 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.954 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.954 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.954 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.954 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.954 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.954 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.955 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.955 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.955 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.955 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.955 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.955 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.955 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.955 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.956 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.956 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.956 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.956 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.956 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.956 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.956 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.956 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.957 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.957 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.957 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.957 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.957 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.957 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.957 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.958 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.958 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.958 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.958 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.958 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.958 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.958 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.958 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.959 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.959 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.959 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.959 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.959 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.959 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.959 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.959 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.960 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.960 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.960 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.960 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.960 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.960 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.960 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.960 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.961 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.961 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.961 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.961 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.961 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.961 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.961 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.962 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.962 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.962 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.962 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.962 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.962 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.962 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.962 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.963 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.963 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.963 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.963 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.963 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.963 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.963 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.964 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.964 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.964 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.964 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.964 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.964 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.964 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.965 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.965 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.965 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.965 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.965 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.965 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.965 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.966 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.966 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.966 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.966 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.966 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.966 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.967 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.967 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.967 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.967 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.967 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.967 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.967 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.968 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.968 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.968 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.968 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.968 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.968 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.968 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.968 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.969 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.969 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.969 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.969 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.969 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.969 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.969 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.970 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.970 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.970 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.970 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.970 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.970 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.970 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.971 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.971 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.971 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.971 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.971 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.971 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.971 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.971 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.972 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.972 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.972 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.972 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.972 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.972 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.972 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.973 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.973 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.973 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.973 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.973 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.973 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.973 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.974 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.974 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.974 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.974 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.974 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.974 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.974 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.974 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.975 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.975 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.975 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.975 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.975 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.975 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.975 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.976 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.976 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.976 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.976 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.976 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.976 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.976 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.976 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.977 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.977 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.977 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.977 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.977 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.977 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.977 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.978 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.978 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.978 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.978 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.978 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.978 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.978 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.979 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.979 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.979 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.979 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.979 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.979 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.979 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.980 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.980 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.980 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.980 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.980 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.980 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.980 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.981 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.981 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.981 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.981 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.981 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.981 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.981 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.982 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.982 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.982 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.982 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.982 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.982 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.982 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.982 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.983 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.983 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.983 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.983 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.983 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.983 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.984 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.984 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.984 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.984 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.984 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.984 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.984 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.985 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.985 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.985 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.985 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.985 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.985 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.985 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.985 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.986 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.986 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.986 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.986 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.986 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.986 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.986 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.987 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.987 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.987 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.987 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.987 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.987 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.987 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.987 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.988 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.988 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.988 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.988 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.988 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.988 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.988 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.989 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.989 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.989 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.989 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.989 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.989 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.989 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.990 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.990 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.990 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.990 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.990 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.990 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.991 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.991 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.991 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.991 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.991 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.991 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.991 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.992 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.992 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.992 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.992 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.992 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.992 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.992 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.992 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.993 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.993 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.993 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.993 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.993 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.993 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.993 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.993 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.994 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.994 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.994 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.994 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.994 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.994 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.994 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.995 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.995 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.995 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.995 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.995 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.996 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.996 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.996 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.996 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.996 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.996 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.996 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.997 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.997 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.997 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.997 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.997 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.997 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.997 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.997 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.998 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.998 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.998 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.998 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.998 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.998 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.998 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.999 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.999 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.999 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.999 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.999 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.999 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.999 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:19.999 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.000 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.000 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.000 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.000 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.000 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.000 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.000 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.001 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.001 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.001 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.001 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.001 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.001 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.001 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.001 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.002 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.002 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.002 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.002 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.002 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.002 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.002 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.003 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.003 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.003 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.003 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.003 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.003 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.003 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.004 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.004 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.004 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.004 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.004 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.004 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.004 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.004 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.005 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.005 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.005 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.005 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.005 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.005 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.005 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.006 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.006 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.006 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.006 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.006 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.006 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.006 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.007 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.007 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.007 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.007 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.007 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.007 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.007 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.007 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.008 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.008 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.008 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.008 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.008 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.008 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.009 2 WARNING oslo_config.cfg [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: and ``live_migration_inbound_addr`` respectively.
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: ).  Its value may be silently ignored in the future.
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.009 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.009 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.009 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.009 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.010 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.010 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.010 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.010 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.010 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.010 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.010 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.011 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.011 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.011 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.011 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.011 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.011 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.011 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.012 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.rbd_secret_uuid        = 627e7f45-65aa-56de-94df-66eaee84a56e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.012 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.012 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.012 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.012 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.012 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.012 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.013 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.013 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.013 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.013 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.013 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.013 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.014 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.014 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.014 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.014 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.014 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.014 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.014 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.015 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.015 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.015 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.015 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.015 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.015 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.015 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.015 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.016 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.016 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.016 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.016 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.016 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.016 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.017 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.017 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.017 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.017 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.017 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.017 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.017 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.017 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.018 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.018 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.018 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.018 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.018 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.018 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.018 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.018 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.019 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.019 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.019 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.019 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.019 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.019 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.019 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.020 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.020 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.020 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.020 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.020 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.020 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.020 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.021 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.021 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.021 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.021 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.021 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.021 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.021 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.022 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.022 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.022 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.022 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.022 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.022 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.022 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.023 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.023 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.023 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.023 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.023 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.023 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.023 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.024 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.024 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.024 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.024 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.024 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.024 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.024 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.025 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.025 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.025 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.025 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.025 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.025 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.025 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.025 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.026 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.026 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.026 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.026 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.026 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.026 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.027 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.027 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.027 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.027 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.027 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.027 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.027 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.027 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.028 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.028 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.028 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.028 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.028 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.028 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.028 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.029 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.029 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.029 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.029 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.029 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.029 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.030 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.030 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.030 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.030 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.030 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.030 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.030 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.031 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.031 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.031 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.031 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.031 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.031 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.031 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.031 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.032 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.032 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.032 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.032 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.032 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.033 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.033 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.033 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.033 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.033 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.033 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.033 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.034 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.034 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.034 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.034 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.034 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.034 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.034 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.035 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.035 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.035 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.035 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.035 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.035 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.035 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.036 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.036 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.036 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.036 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.036 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.037 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.037 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.037 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.037 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.037 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.037 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.038 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.038 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.038 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.038 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.038 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.039 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.039 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.039 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.039 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.039 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.040 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.040 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.040 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.040 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.040 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.041 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.041 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.041 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.041 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.041 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.042 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.042 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.042 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.042 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.043 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.043 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.043 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.043 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.043 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.044 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.044 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.044 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.044 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.044 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.045 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.045 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.045 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.045 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.045 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.046 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.046 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.046 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.046 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.046 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.046 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.047 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.047 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.047 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.047 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.047 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.047 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.048 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.048 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.048 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.048 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.049 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.049 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.049 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.049 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.049 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.049 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.049 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.050 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.050 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.050 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.050 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.050 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.050 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.050 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.051 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.051 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.051 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.051 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.051 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.051 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.051 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.052 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.052 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.052 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.052 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.052 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.052 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.053 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.053 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.053 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.053 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.053 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.053 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.053 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.053 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.054 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.054 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.054 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.054 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.054 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.054 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.055 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.055 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.055 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.055 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.055 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.055 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.055 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.056 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.056 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.056 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.056 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.056 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.056 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.056 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.057 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.057 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.057 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.057 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.057 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.057 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.057 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.058 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.058 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.058 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.058 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.058 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.058 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.058 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.058 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.059 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.059 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.059 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.059 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.059 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.059 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.059 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.060 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.060 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.060 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.060 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.060 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.060 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.060 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.061 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.061 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.061 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.061 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.061 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.061 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.061 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.062 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.062 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.062 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.062 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.062 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.062 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.062 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.062 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.063 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.063 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.063 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.063 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.063 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.063 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.063 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.064 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.064 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.064 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.064 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.064 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.064 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.064 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.064 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.065 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.065 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.065 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.065 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.065 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.065 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.065 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.066 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.066 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.066 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.066 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.066 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.066 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.066 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.067 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.067 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.067 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.067 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.067 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.067 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.067 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.068 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.068 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.068 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.068 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.068 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.068 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.068 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.068 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.069 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.069 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.069 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.069 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.069 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.069 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.069 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.070 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.070 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.070 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.070 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.070 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.070 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.070 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.071 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.071 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.071 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.071 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.071 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.071 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.071 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.072 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.072 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.072 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.072 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.072 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.072 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.072 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.072 2 DEBUG oslo_service.service [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.073 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 13 15:13:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20519 DF PROTO=TCP SPT=45834 DPT=9105 SEQ=2240547494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB2F4760000000001030307) 
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.091 2 INFO nova.virt.node [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Determined node identity e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 from /var/lib/nova/compute_id
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.092 2 DEBUG nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.093 2 DEBUG nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.093 2 DEBUG nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.093 2 DEBUG nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.104 2 DEBUG nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f82a06492b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.106 2 DEBUG nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f82a06492b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.107 2 INFO nova.virt.libvirt.driver [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Connection event '1' reason 'None'
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.112 2 INFO nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Libvirt host capabilities <capabilities>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <host>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <uuid>6635e8bc-8f5a-4138-a382-e7fa61dbcec1</uuid>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <cpu>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <arch>x86_64</arch>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model>EPYC-Rome-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <vendor>AMD</vendor>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <microcode version='16777317'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <signature family='23' model='49' stepping='0'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='x2apic'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='tsc-deadline'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='osxsave'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='hypervisor'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='tsc_adjust'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='spec-ctrl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='stibp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='arch-capabilities'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='ssbd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='cmp_legacy'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='topoext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='virt-ssbd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='lbrv'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='tsc-scale'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='vmcb-clean'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='pause-filter'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='pfthreshold'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='svme-addr-chk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='rdctl-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='skip-l1dfl-vmentry'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='mds-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature name='pschange-mc-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <pages unit='KiB' size='4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <pages unit='KiB' size='2048'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <pages unit='KiB' size='1048576'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </cpu>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <power_management>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <suspend_mem/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <suspend_disk/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <suspend_hybrid/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </power_management>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <iommu support='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <migration_features>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <live/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <uri_transports>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <uri_transport>tcp</uri_transport>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <uri_transport>rdma</uri_transport>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </uri_transports>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </migration_features>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <topology>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <cells num='1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <cell id='0'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:           <memory unit='KiB'>16116612</memory>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:           <pages unit='KiB' size='4'>4029153</pages>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:           <pages unit='KiB' size='2048'>0</pages>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:           <distances>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:             <sibling id='0' value='10'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:           </distances>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:           <cpus num='8'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:           </cpus>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         </cell>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </cells>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </topology>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <cache>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </cache>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <secmodel>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model>selinux</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <doi>0</doi>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </secmodel>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <secmodel>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model>dac</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <doi>0</doi>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </secmodel>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </host>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <guest>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <os_type>hvm</os_type>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <arch name='i686'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <wordsize>32</wordsize>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <domain type='qemu'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <domain type='kvm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </arch>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <features>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <pae/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <nonpae/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <acpi default='on' toggle='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <apic default='on' toggle='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <cpuselection/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <deviceboot/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <disksnapshot default='on' toggle='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <externalSnapshot/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </features>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </guest>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <guest>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <os_type>hvm</os_type>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <arch name='x86_64'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <wordsize>64</wordsize>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <domain type='qemu'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <domain type='kvm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </arch>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <features>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <acpi default='on' toggle='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <apic default='on' toggle='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <cpuselection/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <deviceboot/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <disksnapshot default='on' toggle='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <externalSnapshot/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </features>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </guest>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: </capabilities>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.117 2 DEBUG nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.120 2 DEBUG nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: <domainCapabilities>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <path>/usr/libexec/qemu-kvm</path>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <domain>kvm</domain>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <arch>i686</arch>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <vcpu max='240'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <iothreads supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <os supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <enum name='firmware'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <loader supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>rom</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>pflash</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='readonly'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>yes</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>no</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='secure'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>no</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </loader>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </os>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <cpu>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='host-passthrough' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='hostPassthroughMigratable'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>on</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>off</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='maximum' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='maximumMigratable'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>on</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>off</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='host-model' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <vendor>AMD</vendor>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='x2apic'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='tsc-deadline'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='hypervisor'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='tsc_adjust'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='spec-ctrl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='stibp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='arch-capabilities'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='ssbd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='cmp_legacy'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='overflow-recov'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='succor'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='ibrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='amd-ssbd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='virt-ssbd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='lbrv'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='tsc-scale'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='vmcb-clean'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='pause-filter'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='pfthreshold'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='svme-addr-chk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='rdctl-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='mds-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='pschange-mc-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='disable' name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='custom' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cooperlake'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cooperlake-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cooperlake-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Dhyana-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Genoa'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amd-psfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='auto-ibrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='stibp-always-on'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Genoa-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amd-psfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='auto-ibrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='stibp-always-on'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Milan'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Milan-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Milan-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amd-psfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='stibp-always-on'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='GraniteRapids'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='prefetchiti'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='GraniteRapids-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='prefetchiti'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='GraniteRapids-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10-128'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10-256'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10-512'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='prefetchiti'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v6'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v7'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='KnightsMill'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512er'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512pf'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='KnightsMill-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512er'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512pf'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G4-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tbm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G5-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tbm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SierraForest'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cmpccxadd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SierraForest-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cmpccxadd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='athlon'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='athlon-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='core2duo'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='core2duo-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='coreduo'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='coreduo-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='n270'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='n270-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='phenom'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='phenom-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </cpu>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <memoryBacking supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <enum name='sourceType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>file</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>anonymous</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>memfd</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </memoryBacking>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <devices>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <disk supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='diskDevice'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>disk</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>cdrom</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>floppy</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>lun</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='bus'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>ide</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>fdc</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>scsi</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>usb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>sata</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-non-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </disk>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <graphics supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vnc</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>egl-headless</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>dbus</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </graphics>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <video supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='modelType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vga</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>cirrus</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>none</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>bochs</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>ramfb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </video>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <hostdev supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='mode'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>subsystem</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='startupPolicy'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>default</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>mandatory</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>requisite</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>optional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='subsysType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>usb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>pci</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>scsi</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='capsType'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='pciBackend'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </hostdev>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <rng supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-non-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendModel'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>random</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>egd</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>builtin</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </rng>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <filesystem supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='driverType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>path</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>handle</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtiofs</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </filesystem>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <tpm supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>tpm-tis</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>tpm-crb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendModel'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>emulator</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>external</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendVersion'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>2.0</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </tpm>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <redirdev supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='bus'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>usb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </redirdev>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <channel supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>pty</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>unix</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </channel>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <crypto supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>qemu</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendModel'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>builtin</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </crypto>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <interface supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>default</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>passt</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </interface>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <panic supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>isa</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>hyperv</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </panic>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </devices>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <features>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <gic supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <vmcoreinfo supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <genid supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <backingStoreInput supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <backup supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <async-teardown supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <ps2 supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <sev supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <sgx supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <hyperv supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='features'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>relaxed</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vapic</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>spinlocks</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vpindex</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>runtime</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>synic</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>stimer</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>reset</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vendor_id</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>frequencies</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>reenlightenment</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>tlbflush</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>ipi</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>avic</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>emsr_bitmap</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>xmm_input</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </hyperv>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <launchSecurity supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </features>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: </domainCapabilities>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.123 2 DEBUG nova.virt.libvirt.volume.mount [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.131 2 DEBUG nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: <domainCapabilities>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <path>/usr/libexec/qemu-kvm</path>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <domain>kvm</domain>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <arch>i686</arch>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <vcpu max='1024'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <iothreads supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <os supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <enum name='firmware'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <loader supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>rom</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>pflash</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='readonly'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>yes</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>no</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='secure'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>no</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </loader>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </os>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <cpu>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='host-passthrough' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='hostPassthroughMigratable'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>on</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>off</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='maximum' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='maximumMigratable'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>on</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>off</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='host-model' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <vendor>AMD</vendor>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='x2apic'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='tsc-deadline'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='hypervisor'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='tsc_adjust'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='spec-ctrl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='stibp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='arch-capabilities'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='ssbd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='cmp_legacy'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='overflow-recov'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='succor'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='ibrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='amd-ssbd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='virt-ssbd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='lbrv'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='tsc-scale'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='vmcb-clean'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='pause-filter'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='pfthreshold'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='svme-addr-chk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='rdctl-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='mds-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='pschange-mc-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='disable' name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='custom' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cooperlake'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cooperlake-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cooperlake-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Dhyana-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Genoa'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amd-psfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='auto-ibrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='stibp-always-on'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Genoa-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amd-psfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='auto-ibrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='stibp-always-on'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Milan'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Milan-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Milan-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amd-psfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='stibp-always-on'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='GraniteRapids'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='prefetchiti'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='GraniteRapids-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='prefetchiti'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='GraniteRapids-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10-128'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10-256'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10-512'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='prefetchiti'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v6'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v7'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='KnightsMill'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512er'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512pf'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='KnightsMill-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512er'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512pf'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G4-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tbm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G5-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tbm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SierraForest'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cmpccxadd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SierraForest-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cmpccxadd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='athlon'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='athlon-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='core2duo'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='core2duo-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='coreduo'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='coreduo-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='n270'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='n270-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='phenom'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='phenom-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </cpu>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <memoryBacking supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <enum name='sourceType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>file</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>anonymous</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>memfd</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </memoryBacking>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <devices>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <disk supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='diskDevice'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>disk</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>cdrom</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>floppy</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>lun</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='bus'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>fdc</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>scsi</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>usb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>sata</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-non-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </disk>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <graphics supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vnc</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>egl-headless</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>dbus</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </graphics>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <video supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='modelType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vga</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>cirrus</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>none</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>bochs</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>ramfb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </video>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <hostdev supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='mode'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>subsystem</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='startupPolicy'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>default</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>mandatory</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>requisite</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>optional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='subsysType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>usb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>pci</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>scsi</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='capsType'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='pciBackend'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </hostdev>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <rng supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-non-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendModel'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>random</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>egd</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>builtin</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </rng>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <filesystem supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='driverType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>path</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>handle</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtiofs</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </filesystem>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <tpm supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>tpm-tis</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>tpm-crb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendModel'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>emulator</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>external</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendVersion'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>2.0</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </tpm>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <redirdev supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='bus'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>usb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </redirdev>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <channel supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>pty</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>unix</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </channel>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <crypto supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>qemu</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendModel'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>builtin</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </crypto>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <interface supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>default</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>passt</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </interface>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <panic supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>isa</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>hyperv</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </panic>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </devices>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <features>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <gic supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <vmcoreinfo supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <genid supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <backingStoreInput supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <backup supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <async-teardown supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <ps2 supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <sev supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <sgx supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <hyperv supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='features'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>relaxed</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vapic</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>spinlocks</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vpindex</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>runtime</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>synic</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>stimer</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>reset</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vendor_id</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>frequencies</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>reenlightenment</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>tlbflush</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>ipi</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>avic</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>emsr_bitmap</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>xmm_input</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </hyperv>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <launchSecurity supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </features>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: </domainCapabilities>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.153 2 DEBUG nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.158 2 DEBUG nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: <domainCapabilities>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <path>/usr/libexec/qemu-kvm</path>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <domain>kvm</domain>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <arch>x86_64</arch>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <vcpu max='240'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <iothreads supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <os supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <enum name='firmware'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <loader supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>rom</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>pflash</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='readonly'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>yes</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>no</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='secure'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>no</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </loader>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </os>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <cpu>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='host-passthrough' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='hostPassthroughMigratable'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>on</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>off</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='maximum' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='maximumMigratable'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>on</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>off</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='host-model' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <vendor>AMD</vendor>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='x2apic'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='tsc-deadline'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='hypervisor'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='tsc_adjust'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='spec-ctrl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='stibp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='arch-capabilities'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='ssbd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='cmp_legacy'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='overflow-recov'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='succor'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='ibrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='amd-ssbd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='virt-ssbd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='lbrv'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='tsc-scale'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='vmcb-clean'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='pause-filter'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='pfthreshold'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='svme-addr-chk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='rdctl-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='mds-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='pschange-mc-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='disable' name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='custom' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cooperlake'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cooperlake-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cooperlake-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Dhyana-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Genoa'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amd-psfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='auto-ibrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='stibp-always-on'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Genoa-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amd-psfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='auto-ibrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='stibp-always-on'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Milan'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Milan-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Milan-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amd-psfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='stibp-always-on'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='GraniteRapids'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='prefetchiti'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='GraniteRapids-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='prefetchiti'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='GraniteRapids-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10-128'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10-256'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10-512'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='prefetchiti'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v6'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v7'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='KnightsMill'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512er'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512pf'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='KnightsMill-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512er'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512pf'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G4-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tbm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G5-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tbm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SierraForest'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cmpccxadd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SierraForest-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cmpccxadd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='athlon'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='athlon-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='core2duo'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='core2duo-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='coreduo'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='coreduo-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='n270'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='n270-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='phenom'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='phenom-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </cpu>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <memoryBacking supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <enum name='sourceType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>file</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>anonymous</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>memfd</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </memoryBacking>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <devices>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <disk supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='diskDevice'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>disk</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>cdrom</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>floppy</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>lun</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='bus'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>ide</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>fdc</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>scsi</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>usb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>sata</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-non-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </disk>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <graphics supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vnc</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>egl-headless</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>dbus</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </graphics>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <video supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='modelType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vga</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>cirrus</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>none</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>bochs</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>ramfb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </video>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <hostdev supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='mode'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>subsystem</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='startupPolicy'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>default</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>mandatory</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>requisite</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>optional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='subsysType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>usb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>pci</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>scsi</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='capsType'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='pciBackend'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </hostdev>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <rng supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-non-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendModel'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>random</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>egd</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>builtin</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </rng>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <filesystem supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='driverType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>path</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>handle</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtiofs</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </filesystem>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <tpm supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>tpm-tis</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>tpm-crb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendModel'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>emulator</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>external</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendVersion'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>2.0</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </tpm>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <redirdev supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='bus'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>usb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </redirdev>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <channel supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>pty</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>unix</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </channel>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <crypto supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>qemu</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendModel'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>builtin</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </crypto>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <interface supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>default</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>passt</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </interface>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <panic supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>isa</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>hyperv</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </panic>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </devices>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <features>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <gic supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <vmcoreinfo supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <genid supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <backingStoreInput supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <backup supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <async-teardown supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <ps2 supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <sev supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <sgx supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <hyperv supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='features'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>relaxed</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vapic</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>spinlocks</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vpindex</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>runtime</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>synic</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>stimer</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>reset</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vendor_id</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>frequencies</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>reenlightenment</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>tlbflush</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>ipi</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>avic</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>emsr_bitmap</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>xmm_input</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </hyperv>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <launchSecurity supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </features>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: </domainCapabilities>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.216 2 DEBUG nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: <domainCapabilities>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <path>/usr/libexec/qemu-kvm</path>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <domain>kvm</domain>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <arch>x86_64</arch>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <vcpu max='1024'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <iothreads supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <os supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <enum name='firmware'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>efi</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <loader supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>rom</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>pflash</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='readonly'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>yes</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>no</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='secure'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>yes</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>no</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </loader>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </os>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <cpu>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='host-passthrough' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='hostPassthroughMigratable'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>on</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>off</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='maximum' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='maximumMigratable'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>on</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>off</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='host-model' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <vendor>AMD</vendor>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='x2apic'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='tsc-deadline'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='hypervisor'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='tsc_adjust'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='spec-ctrl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='stibp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='arch-capabilities'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='ssbd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='cmp_legacy'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='overflow-recov'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='succor'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='ibrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='amd-ssbd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='virt-ssbd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='lbrv'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='tsc-scale'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='vmcb-clean'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='pause-filter'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='pfthreshold'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='svme-addr-chk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='rdctl-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='mds-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='require' name='pschange-mc-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <feature policy='disable' name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <mode name='custom' supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Broadwell-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cascadelake-Server-v5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cooperlake'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cooperlake-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Cooperlake-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Denverton-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Dhyana-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Genoa'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amd-psfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='auto-ibrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='stibp-always-on'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Genoa-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amd-psfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='auto-ibrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='stibp-always-on'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Milan'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Milan-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Milan-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amd-psfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='no-nested-data-bp'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='null-sel-clr-base'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='stibp-always-on'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-Rome-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='EPYC-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='GraniteRapids'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='prefetchiti'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='GraniteRapids-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='prefetchiti'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='GraniteRapids-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10-128'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10-256'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx10-512'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='prefetchiti'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Haswell-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-noTSX'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v6'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Icelake-Server-v7'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='IvyBridge-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='KnightsMill'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512er'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512pf'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='KnightsMill-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4fmaps'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-4vnniw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512er'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512pf'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G4-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tbm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Opteron_G5-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fma4'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tbm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xop'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SapphireRapids-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='amx-tile'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-bf16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-fp16'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bitalg'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vbmi2'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrc'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fzrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='la57'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='taa-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='tsx-ldtrk'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xfd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SierraForest'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cmpccxadd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='SierraForest-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ifma'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-ne-convert'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx-vnni-int8'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='bus-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cmpccxadd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fbsdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='fsrs'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ibrs-all'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mcdt-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pbrsb-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='psdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='serialize'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vaes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='vpclmulqdq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Client-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='hle'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='rtm'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Skylake-Server-v5'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512bw'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512cd'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512dq'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512f'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='avx512vl'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='invpcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pcid'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='pku'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='mpx'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v2'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v3'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='core-capability'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='split-lock-detect'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='Snowridge-v4'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='cldemote'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='erms'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='gfni'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdir64b'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='movdiri'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='xsaves'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='athlon'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='athlon-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='core2duo'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='core2duo-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='coreduo'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='coreduo-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='n270'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='n270-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='ss'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='phenom'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <blockers model='phenom-v1'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnow'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <feature name='3dnowext'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </blockers>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </mode>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </cpu>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <memoryBacking supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <enum name='sourceType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>file</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>anonymous</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <value>memfd</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </memoryBacking>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <devices>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <disk supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='diskDevice'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>disk</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>cdrom</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>floppy</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>lun</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='bus'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>fdc</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>scsi</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>usb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>sata</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-non-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </disk>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <graphics supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vnc</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>egl-headless</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>dbus</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </graphics>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <video supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='modelType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vga</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>cirrus</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>none</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>bochs</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>ramfb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </video>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <hostdev supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='mode'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>subsystem</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='startupPolicy'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>default</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>mandatory</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>requisite</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>optional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='subsysType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>usb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>pci</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>scsi</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='capsType'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='pciBackend'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </hostdev>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <rng supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtio-non-transitional</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendModel'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>random</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>egd</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>builtin</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </rng>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <filesystem supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='driverType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>path</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>handle</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>virtiofs</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </filesystem>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <tpm supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>tpm-tis</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>tpm-crb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendModel'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>emulator</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>external</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendVersion'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>2.0</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </tpm>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <redirdev supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='bus'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>usb</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </redirdev>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <channel supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>pty</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>unix</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </channel>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <crypto supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='type'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>qemu</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendModel'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>builtin</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </crypto>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <interface supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='backendType'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>default</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>passt</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </interface>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <panic supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='model'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>isa</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>hyperv</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </panic>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </devices>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   <features>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <gic supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <vmcoreinfo supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <genid supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <backingStoreInput supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <backup supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <async-teardown supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <ps2 supported='yes'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <sev supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <sgx supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <hyperv supported='yes'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       <enum name='features'>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>relaxed</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vapic</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>spinlocks</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vpindex</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>runtime</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>synic</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>stimer</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>reset</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>vendor_id</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>frequencies</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>reenlightenment</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>tlbflush</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>ipi</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>avic</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>emsr_bitmap</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:         <value>xmm_input</value>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:       </enum>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     </hyperv>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:     <launchSecurity supported='no'/>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:   </features>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: </domainCapabilities>
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.270 2 DEBUG nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.270 2 INFO nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Secure Boot support detected
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.272 2 INFO nova.virt.libvirt.driver [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.283 2 DEBUG nova.virt.libvirt.driver [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.318 2 INFO nova.virt.node [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Determined node identity e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 from /var/lib/nova/compute_id
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.333 2 DEBUG nova.compute.manager [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Verified node e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 matches my host standalone.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.366 2 DEBUG nova.compute.manager [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.372 2 DEBUG nova.virt.libvirt.vif [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T14:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test',display_name='test',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=<?>,hidden=False,host='standalone.localdomain',hostname='test',id=1,image_ref='9de40855-8304-4f15-8bc5-8a8a2b61b79b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T14:16:19Z,launched_on='standalone.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=<?>,node='standalone.localdomain',numa_topology=None,old_flavor=<?>,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e44641a80bcb466cb3dd688e48b72d8e',ramdisk_id='',reservation_id='r-iqeurhsr',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata=<?>,tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T14:16:19Z,user_data=None,user_id='3d9eeef137fc40c78332936114fd7ee4',uuid=54a46fec-332e-42f9-83ed-88e763d13f63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.373 2 DEBUG nova.network.os_vif_util [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Converting VIF {"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.374 2 DEBUG nova.network.os_vif_util [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.374 2 DEBUG os_vif [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.400 2 DEBUG ovsdbapp.backend.ovs_idl [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.400 2 DEBUG ovsdbapp.backend.ovs_idl [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.401 2 DEBUG ovsdbapp.backend.ovs_idl [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] [POLLOUT] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.419 2 INFO oslo.privsep.daemon [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpeku8vqms/privsep.sock']
Oct 13 15:13:20 standalone.localdomain ceph-mon[29756]: pgmap v2963: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2964: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.983 2 INFO oslo.privsep.daemon [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Spawned new privsep daemon via rootwrap
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.888 40 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.892 40 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.894 40 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 13 15:13:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:20.894 40 INFO oslo.privsep.daemon [-] privsep daemon running as pid 40
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.302 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a49767c-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.302 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a49767c-fb, col_values=(('external_ids', {'iface-id': '8a49767c-fb09-4185-95da-4261d8043fad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:4e:c0', 'vm-uuid': '54a46fec-332e-42f9-83ed-88e763d13f63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.304 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.304 2 INFO os_vif [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb')
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.305 2 DEBUG nova.compute.manager [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.309 2 DEBUG nova.compute.manager [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.310 2 DEBUG nova.compute.manager [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.315 2 DEBUG nova.virt.libvirt.vif [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T14:17:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='bfv-server',display_name='bfv-server',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=<?>,hidden=False,host='standalone.localdomain',hostname='bfv-server',id=2,image_ref='',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T14:17:39Z,launched_on='standalone.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=<?>,node='standalone.localdomain',numa_topology=None,old_flavor=<?>,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e44641a80bcb466cb3dd688e48b72d8e',ramdisk_id='',reservation_id='r-n0khd6io',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata=<?>,tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T14:17:40Z,user_data=None,user_id='3d9eeef137fc40c78332936114fd7ee4',uuid=8f68d5aa-abc4-451d-89d2-f5342b71831c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.315 2 DEBUG nova.network.os_vif_util [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Converting VIF {"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.317 2 DEBUG nova.network.os_vif_util [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:20:5b,bridge_name='br-int',has_traffic_filtering=True,id=da3e5a61-7adb-481b-b7f5-703c3939cde2,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda3e5a61-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.317 2 DEBUG os_vif [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:20:5b,bridge_name='br-int',has_traffic_filtering=True,id=da3e5a61-7adb-481b-b7f5-703c3939cde2,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda3e5a61-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.319 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.319 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda3e5a61-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda3e5a61-7a, col_values=(('external_ids', {'iface-id': 'da3e5a61-7adb-481b-b7f5-703c3939cde2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:20:5b', 'vm-uuid': '8f68d5aa-abc4-451d-89d2-f5342b71831c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.324 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.325 2 INFO os_vif [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:20:5b,bridge_name='br-int',has_traffic_filtering=True,id=da3e5a61-7adb-481b-b7f5-703c3939cde2,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda3e5a61-7a')
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.325 2 DEBUG nova.compute.manager [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.328 2 DEBUG nova.compute.manager [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.329 2 INFO nova.compute.manager [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 13 15:13:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:13:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:21.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:13:21 standalone.localdomain systemd[1]: tmp-crun.3ZwHl0.mount: Deactivated successfully.
Oct 13 15:13:21 standalone.localdomain podman[457112]: 2025-10-13 15:13:21.803370351 +0000 UTC m=+0.070821325 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:13:21 standalone.localdomain podman[457112]: 2025-10-13 15:13:21.846060416 +0000 UTC m=+0.113511490 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 13 15:13:21 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.050 2 INFO nova.service [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Updating service version for nova-compute on standalone.localdomain from 57 to 66
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.088 2 DEBUG oslo_concurrency.lockutils [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.089 2 DEBUG oslo_concurrency.lockutils [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.089 2 DEBUG oslo_concurrency.lockutils [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.089 2 DEBUG nova.compute.resource_tracker [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.090 2 DEBUG oslo_concurrency.processutils [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:13:22 standalone.localdomain ceph-mon[29756]: pgmap v2964: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:13:22 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/377758263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.545 2 DEBUG oslo_concurrency.processutils [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.625 2 DEBUG nova.virt.libvirt.driver [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.625 2 DEBUG nova.virt.libvirt.driver [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.625 2 DEBUG nova.virt.libvirt.driver [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.629 2 DEBUG nova.virt.libvirt.driver [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.629 2 DEBUG nova.virt.libvirt.driver [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:13:22 standalone.localdomain systemd[1]: Starting libvirt nodedev daemon...
Oct 13 15:13:22 standalone.localdomain systemd[1]: Started libvirt nodedev daemon.
Oct 13 15:13:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2965: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.983 2 WARNING nova.virt.libvirt.driver [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.985 2 DEBUG nova.compute.resource_tracker [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=10712MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.985 2 DEBUG oslo_concurrency.lockutils [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:13:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:22.985 2 DEBUG oslo_concurrency.lockutils [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:13:23
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'volumes', 'images', 'manila_data', 'backups', '.mgr', 'manila_metadata']
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:13:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:23.458 2 DEBUG nova.compute.resource_tracker [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:13:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:23.458 2 DEBUG nova.compute.resource_tracker [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:13:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:23.459 2 DEBUG nova.compute.resource_tracker [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:13:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:23.459 2 DEBUG nova.compute.resource_tracker [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:13:23 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/377758263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:13:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42863 DF PROTO=TCP SPT=52934 DPT=9102 SEQ=2333315612 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB301FE0000000001030307) 
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:13:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:13:23 standalone.localdomain sshd[457181]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:13:23 standalone.localdomain sshd[457181]: Accepted publickey for root from 192.168.122.30 port 47334 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:13:23 standalone.localdomain systemd-logind[45629]: New session 295 of user root.
Oct 13 15:13:23 standalone.localdomain systemd[1]: Started Session 295 of User root.
Oct 13 15:13:23 standalone.localdomain sshd[457181]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:13:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:24.063 2 DEBUG nova.scheduler.client.report [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Refreshing inventories for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 13 15:13:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:24.081 2 DEBUG nova.scheduler.client.report [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Updating ProviderTree inventory for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 0, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 13 15:13:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:24.081 2 DEBUG nova.compute.provider_tree [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Updating inventory in ProviderTree for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 0, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 13 15:13:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:24.097 2 DEBUG nova.scheduler.client.report [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Refreshing aggregate associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 13 15:13:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:24.134 2 DEBUG nova.scheduler.client.report [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Refreshing trait associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, traits: COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,HW_CPU_X86_SSE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 13 15:13:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:24.188 2 DEBUG oslo_concurrency.processutils [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:13:24 standalone.localdomain ceph-mon[29756]: pgmap v2965: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:13:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2783453396' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:13:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:24.702 2 DEBUG oslo_concurrency.processutils [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:13:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:24.710 2 DEBUG nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 13 15:13:24 standalone.localdomain nova_compute[456855]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 13 15:13:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:24.710 2 INFO nova.virt.libvirt.host [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] kernel doesn't support AMD SEV
Oct 13 15:13:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:24.712 2 DEBUG nova.compute.provider_tree [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Updating inventory in ProviderTree for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 6, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 13 15:13:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:24.714 2 DEBUG nova.virt.libvirt.driver [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 13 15:13:24 standalone.localdomain python3.9[457312]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:13:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2966: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:24.997 2 DEBUG nova.scheduler.client.report [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Updated inventory for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 6, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 13 15:13:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:24.998 2 DEBUG nova.compute.provider_tree [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Updating resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 13 15:13:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:24.998 2 DEBUG nova.compute.provider_tree [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Updating inventory in ProviderTree for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 13 15:13:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:25.102 2 DEBUG nova.compute.provider_tree [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Updating resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 generation from 5 to 6 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 13 15:13:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:25.167 2 DEBUG nova.compute.resource_tracker [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:13:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:25.167 2 DEBUG oslo_concurrency.lockutils [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:13:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:25.168 2 DEBUG nova.service [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 13 15:13:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:25.251 2 DEBUG nova.service [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 13 15:13:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:25.252 2 DEBUG nova.servicegroup.drivers.db [None req-7904f362-1ad8-4f95-a54d-d9abff8b5c8f - - - - - -] DB_Driver: join new ServiceGroup member standalone.localdomain to the compute group, service = <Service: host=standalone.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 13 15:13:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:25.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4444 DF PROTO=TCP SPT=58930 DPT=9101 SEQ=760428106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB3098E0000000001030307) 
Oct 13 15:13:25 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2783453396' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:13:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:13:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:26.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:26 standalone.localdomain python3.9[457426]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:13:26 standalone.localdomain ceph-mon[29756]: pgmap v2966: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:26 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:13:26 standalone.localdomain systemd-rc-local-generator[457450]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:13:26 standalone.localdomain systemd-sysv-generator[457456]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:13:26 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:13:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2967: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:27 standalone.localdomain python3.9[457569]: ansible-ansible.builtin.service_facts Invoked
Oct 13 15:13:27 standalone.localdomain network[457586]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 15:13:27 standalone.localdomain network[457587]: 'network-scripts' will be removed from distribution in near future.
Oct 13 15:13:27 standalone.localdomain network[457588]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 15:13:28 standalone.localdomain ceph-mon[29756]: pgmap v2967: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4446 DF PROTO=TCP SPT=58930 DPT=9101 SEQ=760428106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB315B60000000001030307) 
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2968: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:13:28 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:13:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:30.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:30 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:13:30 standalone.localdomain ceph-mon[29756]: pgmap v2968: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2969: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:13:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:31.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:32 standalone.localdomain ceph-mon[29756]: pgmap v2969: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4447 DF PROTO=TCP SPT=58930 DPT=9101 SEQ=760428106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB325760000000001030307) 
Oct 13 15:13:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2970: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:34 standalone.localdomain ceph-mon[29756]: pgmap v2970: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2971: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:13:35 standalone.localdomain systemd[1]: tmp-crun.nUCYkW.mount: Deactivated successfully.
Oct 13 15:13:35 standalone.localdomain podman[457705]: 2025-10-13 15:13:35.131885281 +0000 UTC m=+0.125012231 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS)
Oct 13 15:13:35 standalone.localdomain podman[457705]: 2025-10-13 15:13:35.170341486 +0000 UTC m=+0.163468416 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:13:35 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:13:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:13:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:13:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:13:35 standalone.localdomain podman[457733]: 2025-10-13 15:13:35.303120428 +0000 UTC m=+0.079303818 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1, build-date=2025-07-21T16:11:22, architecture=x86_64, container_name=swift_account_server, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account)
Oct 13 15:13:35 standalone.localdomain podman[457750]: 2025-10-13 15:13:35.390921369 +0000 UTC m=+0.082897661 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', 
'/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:13:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:35.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:35 standalone.localdomain podman[457751]: 2025-10-13 15:13:35.366474788 +0000 UTC m=+0.058708837 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, container_name=swift_container_server, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, version=17.1.9, batch=17.1_20250721.1, 
com.redhat.component=openstack-swift-container-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905)
Oct 13 15:13:35 standalone.localdomain podman[457733]: 2025-10-13 15:13:35.492855877 +0000 UTC m=+0.269039247 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, container_name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1, build-date=2025-07-21T16:11:22, tcib_managed=true, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, name=rhosp17/openstack-swift-account, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:13:35 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:13:35 standalone.localdomain podman[457751]: 2025-10-13 15:13:35.545368604 +0000 UTC m=+0.237602643 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, tcib_managed=true, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 15:13:35 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:13:35 standalone.localdomain podman[457750]: 2025-10-13 15:13:35.559128669 +0000 UTC m=+0.251104911 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-swift-object, vcs-type=git, build-date=2025-07-21T14:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, container_name=swift_object_server, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, 
io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team)
Oct 13 15:13:35 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:13:36 standalone.localdomain systemd[1]: tmp-crun.94ehOX.mount: Deactivated successfully.
Oct 13 15:13:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:13:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:36.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:36 standalone.localdomain ceph-mon[29756]: pgmap v2971: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2972: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:37 standalone.localdomain python3.9[457930]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:13:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23337 DF PROTO=TCP SPT=49304 DPT=9100 SEQ=2898527186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB338150000000001030307) 
Oct 13 15:13:38 standalone.localdomain python3.9[458039]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:13:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23338 DF PROTO=TCP SPT=49304 DPT=9100 SEQ=2898527186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB33C360000000001030307) 
Oct 13 15:13:38 standalone.localdomain systemd-journald[48591]: Field hash table of /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal has a fill level at 78.7 (262 of 333 items), suggesting rotation.
Oct 13 15:13:38 standalone.localdomain systemd-journald[48591]: /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 13 15:13:38 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 15:13:38 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 15:13:38 standalone.localdomain ceph-mon[29756]: pgmap v2972: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:38 standalone.localdomain python3.9[458148]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:13:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2973: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:39 standalone.localdomain python3.9[458256]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                            systemctl disable --now certmonger.service
                                                            test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                          fi
                                                           _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:13:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:40.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:40 standalone.localdomain python3.9[458366]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 13 15:13:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23339 DF PROTO=TCP SPT=49304 DPT=9100 SEQ=2898527186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB344370000000001030307) 
Oct 13 15:13:40 standalone.localdomain ceph-mon[29756]: pgmap v2973: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2974: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:13:41 standalone.localdomain python3.9[458474]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:13:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:41.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:13:41 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:13:41 standalone.localdomain podman[458476]: 2025-10-13 15:13:41.522585801 +0000 UTC m=+0.095824432 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:13:41 standalone.localdomain podman[458476]: 2025-10-13 15:13:41.533255288 +0000 UTC m=+0.106493929 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 13 15:13:41 standalone.localdomain systemd-rc-local-generator[458516]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:13:41 standalone.localdomain systemd-sysv-generator[458519]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:13:41 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:13:41 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:13:42 standalone.localdomain python3.9[458636]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:13:42 standalone.localdomain ceph-mon[29756]: pgmap v2974: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2975: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:43 standalone.localdomain python3.9[458745]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:13:43 standalone.localdomain python3.9[458853]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:13:44 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47608 DF PROTO=TCP SPT=34416 DPT=9105 SEQ=1873999120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB351F60000000001030307) 
Oct 13 15:13:44 standalone.localdomain python3.9[458963]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:13:44 standalone.localdomain ceph-mon[29756]: pgmap v2975: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2976: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:45 standalone.localdomain python3.9[459049]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368424.1729407-133-232756829569812/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=b8b12bb15cd21daecd0d8671bb7ed8daad73311e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:13:45 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:45.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:46 standalone.localdomain python3.9[459157]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Oct 13 15:13:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:13:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:46.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:46 standalone.localdomain ceph-mon[29756]: pgmap v2976: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2977: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:47 standalone.localdomain python3.9[459265]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Oct 13 15:13:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:13:47 standalone.localdomain podman[459267]: 2025-10-13 15:13:47.280038868 +0000 UTC m=+0.092571579 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 15:13:47 standalone.localdomain podman[459267]: 2025-10-13 15:13:47.313010215 +0000 UTC m=+0.125542866 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:13:47 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:13:47 standalone.localdomain python3.9[459392]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 13 15:13:47 standalone.localdomain groupadd[459393]: group added to /etc/group: name=ceilometer, GID=42405
Oct 13 15:13:47 standalone.localdomain groupadd[459393]: group added to /etc/gshadow: name=ceilometer
Oct 13 15:13:47 standalone.localdomain groupadd[459393]: new group: name=ceilometer, GID=42405
Oct 13 15:13:48 standalone.localdomain python3.9[459506]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on standalone.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 13 15:13:48 standalone.localdomain ceph-mon[29756]: pgmap v2977: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:48 standalone.localdomain useradd[459508]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/2
Oct 13 15:13:48 standalone.localdomain useradd[459508]: add 'ceilometer' to group 'libvirt'
Oct 13 15:13:48 standalone.localdomain useradd[459508]: add 'ceilometer' to shadow group 'libvirt'
Oct 13 15:13:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10002 DF PROTO=TCP SPT=36576 DPT=9882 SEQ=3904650355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB364370000000001030307) 
Oct 13 15:13:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2978: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:49 standalone.localdomain python3.9[459622]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:13:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47610 DF PROTO=TCP SPT=34416 DPT=9105 SEQ=1873999120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB369B60000000001030307) 
Oct 13 15:13:50 standalone.localdomain python3.9[459708]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/root/.ansible/tmp/ansible-tmp-1760368429.4470608-201-273318756673113/.source.conf _original_basename=ceilometer.conf follow=False checksum=9ca28d1fdd2b5a404d133572760112ded80870cd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:13:50 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:50.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:50 standalone.localdomain ceph-mon[29756]: pgmap v2978: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2979: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:51 standalone.localdomain python3.9[459816]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:13:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:13:51 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:51.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:51 standalone.localdomain python3.9[459902]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/root/.ansible/tmp/ansible-tmp-1760368430.5724185-201-100826566464868/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:13:51 standalone.localdomain python3.9[460010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:13:52 standalone.localdomain python3.9[460096]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/root/.ansible/tmp/ansible-tmp-1760368431.582372-201-163040999392425/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:13:52 standalone.localdomain ceph-mon[29756]: pgmap v2979: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:13:52 standalone.localdomain podman[460114]: 2025-10-13 15:13:52.819148865 +0000 UTC m=+0.083593581 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 13 15:13:52 standalone.localdomain podman[460114]: 2025-10-13 15:13:52.849339822 +0000 UTC m=+0.113784498 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:13:52 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:13:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2980: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:53 standalone.localdomain python3.9[460229]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:13:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:13:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:13:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:13:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:13:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:13:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:13:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21348 DF PROTO=TCP SPT=46094 DPT=9102 SEQ=4279194654 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB3772F0000000001030307) 
Oct 13 15:13:53 standalone.localdomain python3.9[460337]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:13:54 standalone.localdomain python3.9[460445]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:13:54 standalone.localdomain ceph-mon[29756]: pgmap v2980: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2981: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:54 standalone.localdomain python3.9[460531]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/root/.ansible/tmp/ansible-tmp-1760368433.9584699-260-68994791885784/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:13:55 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:55.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5980 DF PROTO=TCP SPT=39294 DPT=9101 SEQ=658683423 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB37EBE0000000001030307) 
Oct 13 15:13:55 standalone.localdomain python3.9[460639]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:13:55 standalone.localdomain python3.9[460694]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:13:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:13:56 standalone.localdomain nova_compute[456855]: 2025-10-13 15:13:56.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:13:56 standalone.localdomain python3.9[460802]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:13:56 standalone.localdomain ceph-mon[29756]: pgmap v2981: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2982: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:57 standalone.localdomain python3.9[460888]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/root/.ansible/tmp/ansible-tmp-1760368436.0843594-260-217309840819653/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=d15068604cf730dd6e7b88a19d62f57d3a39f94f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:13:57 standalone.localdomain python3.9[460996]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:13:58 standalone.localdomain python3.9[461082]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/root/.ansible/tmp/ansible-tmp-1760368437.1658697-260-194208887038436/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:13:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5982 DF PROTO=TCP SPT=39294 DPT=9101 SEQ=658683423 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB38AB70000000001030307) 
Oct 13 15:13:58 standalone.localdomain ceph-mon[29756]: pgmap v2982: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2983: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:13:59 standalone.localdomain python3.9[461190]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:13:59 standalone.localdomain python3.9[461276]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/root/.ansible/tmp/ansible-tmp-1760368438.4938617-260-56220097195761/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:14:00 standalone.localdomain python3.9[461384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:14:00 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:00.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:00 standalone.localdomain python3.9[461470]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/root/.ansible/tmp/ansible-tmp-1760368439.722471-260-152771393265033/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=7e5ab36b7368c1d4a00810e02af11a7f7d7c84e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:14:00 standalone.localdomain ceph-mon[29756]: pgmap v2983: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2984: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:01 standalone.localdomain python3.9[461578]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:14:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:14:01 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:01.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:01 standalone.localdomain sudo[461665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:14:01 standalone.localdomain sudo[461665]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:14:01 standalone.localdomain sudo[461665]: pam_unix(sudo:session): session closed for user root
Oct 13 15:14:01 standalone.localdomain sudo[461683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:14:01 standalone.localdomain sudo[461683]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:14:01 standalone.localdomain python3.9[461664]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/root/.ansible/tmp/ansible-tmp-1760368440.8633828-260-144800141698836/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:14:02 standalone.localdomain sudo[461683]: pam_unix(sudo:session): session closed for user root
Oct 13 15:14:02 standalone.localdomain python3.9[461823]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:14:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:14:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:14:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:14:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:14:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:14:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:14:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:14:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:14:02 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 5ca8a225-cc72-49af-8dd0-6e053712f815 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:14:02 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 5ca8a225-cc72-49af-8dd0-6e053712f815 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:14:02 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 5ca8a225-cc72-49af-8dd0-6e053712f815 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:14:02 standalone.localdomain sudo[461841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:14:02 standalone.localdomain sudo[461841]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:14:02 standalone.localdomain sudo[461841]: pam_unix(sudo:session): session closed for user root
Oct 13 15:14:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5983 DF PROTO=TCP SPT=39294 DPT=9101 SEQ=658683423 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB39A770000000001030307) 
Oct 13 15:14:02 standalone.localdomain ceph-mon[29756]: pgmap v2984: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:14:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:14:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:14:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:14:02 standalone.localdomain python3.9[461944]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/root/.ansible/tmp/ansible-tmp-1760368441.9973109-260-99442996882631/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=0e4ea521b0035bea70b7a804346a5c89364dcbc3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:14:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2985: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:03 standalone.localdomain python3.9[462052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:14:03 standalone.localdomain python3.9[462138]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/root/.ansible/tmp/ansible-tmp-1760368443.0448322-260-75838553980916/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=b056dcaaba7624b93826bb95ee9e82f81bde6c72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:14:04 standalone.localdomain python3.9[462246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:14:04 standalone.localdomain ceph-mon[29756]: pgmap v2985: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2986: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:04 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:14:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:14:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:14:04 standalone.localdomain python3.9[462332]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/root/.ansible/tmp/ansible-tmp-1760368444.0924025-260-6742653556541/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=885ccc6f5edd8803cb385bdda5648d0b3017b4e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:14:05 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:05.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:05 standalone.localdomain python3.9[462440]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:14:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:14:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:14:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:14:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:14:05 standalone.localdomain podman[462507]: 2025-10-13 15:14:05.806745108 +0000 UTC m=+0.070584710 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, name=rhosp17/openstack-swift-object, tcib_managed=true, batch=17.1_20250721.1, container_name=swift_object_server, vcs-type=git, vendor=Red Hat, Inc., release=1, version=17.1.9, build-date=2025-07-21T14:56:28, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:14:05 standalone.localdomain podman[462509]: 2025-10-13 15:14:05.870554584 +0000 UTC m=+0.129723974 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:14:05 standalone.localdomain podman[462509]: 2025-10-13 15:14:05.877714896 +0000 UTC m=+0.136884286 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 13 15:14:05 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:14:05 standalone.localdomain podman[462512]: 2025-10-13 15:14:05.918302654 +0000 UTC m=+0.174719108 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, container_name=swift_container_server, io.buildah.version=1.33.12, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32)
Oct 13 15:14:05 standalone.localdomain ceph-mon[29756]: pgmap v2986: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:05 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:14:05 standalone.localdomain podman[462521]: 2025-10-13 15:14:05.968731478 +0000 UTC m=+0.223488331 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, release=1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, container_name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, version=17.1.9, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 15:14:05 standalone.localdomain python3.9[462562]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/root/.ansible/tmp/ansible-tmp-1760368445.080991-260-21017372939809/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:14:06 standalone.localdomain podman[462507]: 2025-10-13 15:14:06.007778147 +0000 UTC m=+0.271617739 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-object, release=1, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true)
Oct 13 15:14:06 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:14:06 standalone.localdomain podman[462521]: 2025-10-13 15:14:06.123837261 +0000 UTC m=+0.378594054 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, container_name=swift_account_server, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, name=rhosp17/openstack-swift-account, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, tcib_managed=true)
Oct 13 15:14:06 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:14:06 standalone.localdomain podman[462512]: 2025-10-13 15:14:06.140755478 +0000 UTC m=+0.397171912 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.component=openstack-swift-container-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, release=1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32)
Oct 13 15:14:06 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:14:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:14:06 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:06.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:06 standalone.localdomain python3.9[462732]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:14:06 standalone.localdomain systemd[1]: tmp-crun.gctHZ0.mount: Deactivated successfully.
Oct 13 15:14:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:14:06.930 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:14:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:14:06.930 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:14:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:14:06.931 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:14:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2987: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58266 DF PROTO=TCP SPT=35574 DPT=9100 SEQ=3303442792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB3AD450000000001030307) 
Oct 13 15:14:07 standalone.localdomain python3.9[462840]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:14:07 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:14:07 standalone.localdomain systemd-rc-local-generator[462868]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:14:07 standalone.localdomain systemd-sysv-generator[462872]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:14:07 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:14:07 standalone.localdomain systemd[1]: Listening on Podman API Socket.
Oct 13 15:14:08 standalone.localdomain ceph-mon[29756]: pgmap v2987: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58267 DF PROTO=TCP SPT=35574 DPT=9100 SEQ=3303442792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB3B1360000000001030307) 
Oct 13 15:14:08 standalone.localdomain python3.9[462988]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:14:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2988: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:09 standalone.localdomain python3.9[463074]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=root mode=0700 owner=root setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368448.1553564-432-121466745673574/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:14:09 standalone.localdomain python3.9[463127]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:14:10 standalone.localdomain python3.9[463213]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=root mode=0700 owner=root setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368448.1553564-432-121466745673574/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:14:10 standalone.localdomain ceph-mon[29756]: pgmap v2988: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:10 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:10.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58268 DF PROTO=TCP SPT=35574 DPT=9100 SEQ=3303442792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB3B9360000000001030307) 
Oct 13 15:14:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2989: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:11 standalone.localdomain python3.9[463321]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Oct 13 15:14:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:14:11 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:11.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:12 standalone.localdomain python3.9[463429]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:14:12 standalone.localdomain ceph-mon[29756]: pgmap v2989: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:14:12 standalone.localdomain podman[463485]: 2025-10-13 15:14:12.498648901 +0000 UTC m=+0.076380641 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:14:12 standalone.localdomain podman[463485]: 2025-10-13 15:14:12.513990289 +0000 UTC m=+0.091722009 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:14:12 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:14:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2990: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:13 standalone.localdomain python3[463556]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:14:13 standalone.localdomain podman[463594]: 
Oct 13 15:14:13 standalone.localdomain podman[463594]: 2025-10-13 15:14:13.451505967 +0000 UTC m=+0.075850127 container create 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:14:13 standalone.localdomain podman[463594]: 2025-10-13 15:14:13.418448481 +0000 UTC m=+0.042792671 image pull  quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Oct 13 15:14:13 standalone.localdomain python3[463556]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified kolla_start
Oct 13 15:14:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49999 DF PROTO=TCP SPT=34020 DPT=9105 SEQ=235141793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB3C6F60000000001030307) 
Oct 13 15:14:14 standalone.localdomain python3.9[463740]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:14:14 standalone.localdomain ceph-mon[29756]: pgmap v2990: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:14 standalone.localdomain python3.9[463850]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:14:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2991: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:15 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:15.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:15 standalone.localdomain python3.9[463957]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760368454.991507-496-87351239972672/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:14:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:14:16 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:16.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:16 standalone.localdomain ceph-mon[29756]: pgmap v2991: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:16 standalone.localdomain python3.9[464010]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:14:16 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:14:16 standalone.localdomain systemd-sysv-generator[464039]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:14:16 standalone.localdomain systemd-rc-local-generator[464033]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:14:16 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:14:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2992: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:17.253 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:14:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:17.292 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Triggering sync for uuid 54a46fec-332e-42f9-83ed-88e763d13f63 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 13 15:14:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:17.293 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Triggering sync for uuid 8f68d5aa-abc4-451d-89d2-f5342b71831c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 13 15:14:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:17.293 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "54a46fec-332e-42f9-83ed-88e763d13f63" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:14:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:17.293 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:14:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:17.294 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "8f68d5aa-abc4-451d-89d2-f5342b71831c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:14:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:17.294 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "8f68d5aa-abc4-451d-89d2-f5342b71831c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:14:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:17.294 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:14:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:17.374 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:14:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:17.392 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "8f68d5aa-abc4-451d-89d2-f5342b71831c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:14:17 standalone.localdomain python3.9[464099]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:14:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:14:17 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:14:17 standalone.localdomain systemd-rc-local-generator[464143]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:14:17 standalone.localdomain systemd-sysv-generator[464146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:14:17 standalone.localdomain podman[464101]: 2025-10-13 15:14:17.712563356 +0000 UTC m=+0.129684813 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:14:17 standalone.localdomain podman[464101]: 2025-10-13 15:14:17.746870002 +0000 UTC m=+0.163991409 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:14:17 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:14:17 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:14:18 standalone.localdomain systemd[1]: Starting ceilometer_agent_compute container...
Oct 13 15:14:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:14:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbdf7ace66e7ecad4d445aa70efa7de70931022df7e2f8fef952713b1a0cfe4/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct 13 15:14:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbdf7ace66e7ecad4d445aa70efa7de70931022df7e2f8fef952713b1a0cfe4/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 13 15:14:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:14:18 standalone.localdomain podman[464158]: 2025-10-13 15:14:18.170163681 +0000 UTC m=+0.136031715 container init 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: + sudo -E kolla_set_configs
Oct 13 15:14:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: sudo: unable to send audit message: Operation not permitted
Oct 13 15:14:18 standalone.localdomain sudo[464179]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 15:14:18 standalone.localdomain sudo[464179]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 15:14:18 standalone.localdomain sudo[464179]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 13 15:14:18 standalone.localdomain podman[464158]: 2025-10-13 15:14:18.205805419 +0000 UTC m=+0.171673483 container start 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 15:14:18 standalone.localdomain podman[464158]: ceilometer_agent_compute
Oct 13 15:14:18 standalone.localdomain systemd[1]: Started ceilometer_agent_compute container.
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: INFO:__main__:Validating config file
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: INFO:__main__:Copying service configuration files
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: INFO:__main__:Writing out command to execute
Oct 13 15:14:18 standalone.localdomain sudo[464179]: pam_unix(sudo:session): session closed for user root
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: ++ cat /run_command
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: + ARGS=
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: + sudo kolla_copy_cacerts
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: sudo: unable to send audit message: Operation not permitted
Oct 13 15:14:18 standalone.localdomain sudo[464211]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 13 15:14:18 standalone.localdomain sudo[464211]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 15:14:18 standalone.localdomain sudo[464211]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 13 15:14:18 standalone.localdomain sudo[464211]: pam_unix(sudo:session): session closed for user root
Oct 13 15:14:18 standalone.localdomain podman[464181]: 2025-10-13 15:14:18.295942658 +0000 UTC m=+0.093579973 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251009, managed_by=edpm_ansible)
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: + [[ ! -n '' ]]
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: + . kolla_extend_start
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: + umask 0022
Oct 13 15:14:18 standalone.localdomain ceilometer_agent_compute[464173]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Oct 13 15:14:18 standalone.localdomain podman[464181]: 2025-10-13 15:14:18.329992228 +0000 UTC m=+0.127629543 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible)
Oct 13 15:14:18 standalone.localdomain podman[464181]: unhealthy
Oct 13 15:14:18 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:14:18 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Failed with result 'exit-code'.
Oct 13 15:14:18 standalone.localdomain ceph-mon[29756]: pgmap v2992: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:14:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2058732515' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:14:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:14:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2058732515' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:14:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1925 DF PROTO=TCP SPT=57588 DPT=9882 SEQ=91149942 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB3D9760000000001030307) 
Oct 13 15:14:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2993: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:18 standalone.localdomain python3.9[464312]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:14:19 standalone.localdomain systemd[1]: Stopping ceilometer_agent_compute container...
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.074 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.074 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.074 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.074 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.074 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.075 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.075 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.075 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.075 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.075 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.075 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.075 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.075 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.075 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.075 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.075 2 DEBUG cotyledon.oslo_config_glue [-] host                           = standalone.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.076 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.076 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.076 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.076 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.076 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.076 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.076 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.076 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.076 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.076 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.076 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.076 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.077 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.077 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.077 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.077 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.077 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.077 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.077 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.077 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.077 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.077 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.077 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.078 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.078 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.078 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.078 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.078 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.078 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.078 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.078 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.078 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.078 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.078 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.079 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.079 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.079 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.079 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.079 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.079 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.079 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.079 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.079 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.079 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.079 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.080 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.080 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.080 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.080 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.080 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.080 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.080 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.080 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.080 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.080 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.080 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.080 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.081 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.081 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.081 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.081 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.081 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.081 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.081 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.081 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.081 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.081 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.081 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.082 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.082 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.082 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.082 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.082 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.082 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.082 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.082 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.082 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.082 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.082 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.082 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.083 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.083 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.083 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain systemd[1]: tmp-crun.rhGcq0.mount: Deactivated successfully.
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.083 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.083 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.083 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.083 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.083 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.083 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.083 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.083 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.084 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.084 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.084 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.084 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.084 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.084 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.084 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.084 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.084 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.084 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.084 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.085 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.085 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.085 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.085 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.085 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.085 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.085 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.085 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.085 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.085 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.085 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.086 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.086 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.086 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.086 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.086 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.086 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.086 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.086 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.086 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.086 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.086 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.086 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.087 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.087 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.087 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.087 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.087 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.087 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.087 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.087 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.087 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.087 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.087 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.087 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.088 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.088 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.088 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.088 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.088 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.088 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.088 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.088 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.088 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.088 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.088 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.088 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.089 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.103 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.104 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.105 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.171 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Oct 13 15:14:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:19.185 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:14:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:19.186 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:14:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:19.186 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:14:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:19.186 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.201 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.202 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.272 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.272 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.272 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.272 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.272 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.272 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.272 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.273 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.273 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.273 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.273 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.273 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.273 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.274 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.274 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.274 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.274 12 DEBUG cotyledon.oslo_config_glue [-] host                           = standalone.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.274 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.275 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.275 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.275 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.275 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.275 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.275 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.275 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.275 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.275 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.276 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.276 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.276 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.276 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.276 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.276 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.276 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.276 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.276 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.277 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.277 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.277 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.277 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.277 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.277 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.277 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.277 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.277 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.278 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.278 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.278 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.278 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.278 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.278 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.278 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.278 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.279 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.279 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.279 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.279 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.279 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.279 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.279 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.279 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.280 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.280 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.280 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.280 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.280 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.280 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.280 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.280 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.281 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.281 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.281 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.281 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.281 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.281 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.281 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.281 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.281 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.282 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.282 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.282 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.282 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.282 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.282 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.282 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.282 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.283 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.283 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.283 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.283 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.283 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.283 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.283 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.283 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.284 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.284 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.284 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.284 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.284 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.284 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.284 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.284 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.284 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.285 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.285 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.285 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.285 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.285 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.285 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.286 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.286 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.286 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.286 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.286 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.287 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.287 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.287 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.287 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.287 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.288 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.288 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.289 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.289 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.289 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.289 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.289 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.289 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.290 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.290 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.290 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.290 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.290 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.290 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.290 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.290 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.291 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.291 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.291 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.291 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.291 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.291 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.291 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.291 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.292 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.292 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.292 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.292 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.292 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.292 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.292 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.292 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.292 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.293 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.293 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.293 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.293 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.293 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.293 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.293 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.293 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.294 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.294 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.294 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.294 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.294 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.294 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.294 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.294 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.294 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.295 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.295 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.295 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.295 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.295 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.295 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.295 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.295 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.296 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.296 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.296 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.296 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.296 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.296 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.296 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.296 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.296 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.297 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.297 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.297 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.297 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.297 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.297 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.297 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.297 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.298 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.298 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.298 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.298 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.298 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.298 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.298 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.298 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.299 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.300 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.300 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.300 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.300 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.300 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.300 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.301 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.303 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464173]: 2025-10-13 15:14:19.313 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Oct 13 15:14:19 standalone.localdomain virtqemud[425408]: End of file while reading data: Input/output error
Oct 13 15:14:19 standalone.localdomain systemd[1]: libpod-64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.scope: Deactivated successfully.
Oct 13 15:14:19 standalone.localdomain systemd[1]: libpod-64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.scope: Consumed 1.218s CPU time.
Oct 13 15:14:19 standalone.localdomain podman[464316]: 2025-10-13 15:14:19.443284497 +0000 UTC m=+0.432601885 container died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 13 15:14:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2058732515' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:14:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2058732515' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:14:19 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.timer: Deactivated successfully.
Oct 13 15:14:19 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:14:19 standalone.localdomain podman[464316]: 2025-10-13 15:14:19.666988681 +0000 UTC m=+0.656306069 container cleanup 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 15:14:19 standalone.localdomain podman[464316]: ceilometer_agent_compute
Oct 13 15:14:19 standalone.localdomain podman[464349]: 2025-10-13 15:14:19.755115703 +0000 UTC m=+0.059256618 container cleanup 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251009, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:14:19 standalone.localdomain podman[464349]: ceilometer_agent_compute
Oct 13 15:14:19 standalone.localdomain systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Oct 13 15:14:19 standalone.localdomain systemd[1]: Stopped ceilometer_agent_compute container.
Oct 13 15:14:19 standalone.localdomain systemd[1]: Starting ceilometer_agent_compute container...
Oct 13 15:14:19 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:14:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbdf7ace66e7ecad4d445aa70efa7de70931022df7e2f8fef952713b1a0cfe4/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct 13 15:14:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbdf7ace66e7ecad4d445aa70efa7de70931022df7e2f8fef952713b1a0cfe4/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct 13 15:14:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:14:19 standalone.localdomain podman[464362]: 2025-10-13 15:14:19.91827049 +0000 UTC m=+0.122996322 container init 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464376]: + sudo -E kolla_set_configs
Oct 13 15:14:19 standalone.localdomain ceilometer_agent_compute[464376]: sudo: unable to send audit message: Operation not permitted
Oct 13 15:14:19 standalone.localdomain sudo[464383]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 15:14:19 standalone.localdomain sudo[464383]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 15:14:19 standalone.localdomain sudo[464383]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 13 15:14:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:14:19 standalone.localdomain podman[464362]: 2025-10-13 15:14:19.960626699 +0000 UTC m=+0.165352551 container start 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:14:19 standalone.localdomain podman[464362]: ceilometer_agent_compute
Oct 13 15:14:19 standalone.localdomain systemd[1]: Started ceilometer_agent_compute container.
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Validating config file
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Copying service configuration files
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: INFO:__main__:Writing out command to execute
Oct 13 15:14:20 standalone.localdomain sudo[464383]: pam_unix(sudo:session): session closed for user root
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: ++ cat /run_command
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: + ARGS=
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: + sudo kolla_copy_cacerts
Oct 13 15:14:20 standalone.localdomain podman[464385]: 2025-10-13 15:14:20.044203081 +0000 UTC m=+0.085106369 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:14:20 standalone.localdomain sudo[464405]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: sudo: unable to send audit message: Operation not permitted
Oct 13 15:14:20 standalone.localdomain sudo[464405]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 13 15:14:20 standalone.localdomain sudo[464405]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Oct 13 15:14:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50001 DF PROTO=TCP SPT=34020 DPT=9105 SEQ=235141793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB3DEB60000000001030307) 
Oct 13 15:14:20 standalone.localdomain sudo[464405]: pam_unix(sudo:session): session closed for user root
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: + [[ ! -n '' ]]
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: + . kolla_extend_start
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: + umask 0022
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Oct 13 15:14:20 standalone.localdomain podman[464385]: 2025-10-13 15:14:20.079023569 +0000 UTC m=+0.119926857 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:14:20 standalone.localdomain podman[464385]: unhealthy
Oct 13 15:14:20 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:14:20 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Failed with result 'exit-code'.
Oct 13 15:14:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:20.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:20 standalone.localdomain python3.9[464515]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:14:20 standalone.localdomain ceph-mon[29756]: pgmap v2993: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.760 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.761 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.761 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.761 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.761 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.761 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.761 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.761 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.761 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.761 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.762 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.762 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.762 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.762 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.762 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.762 2 DEBUG cotyledon.oslo_config_glue [-] host                           = standalone.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.762 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.762 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.762 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.762 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.763 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.763 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.763 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.763 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.763 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.763 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.763 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.763 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.763 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.763 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.763 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.763 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.763 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.764 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.764 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.764 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.764 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.764 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.764 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.764 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.764 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.764 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.764 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.764 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.765 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.765 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.765 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.765 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.765 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.765 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.765 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.765 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.765 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.765 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.765 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.765 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.766 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.766 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.766 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.766 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.766 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.766 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.766 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.766 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.766 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.766 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.766 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.766 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.767 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.767 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.767 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.767 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.767 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.767 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.767 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.767 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.767 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.767 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.767 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.767 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.768 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.768 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.768 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.768 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.768 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.768 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.768 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.768 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.768 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.768 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.768 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.768 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.768 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.769 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.769 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.769 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.769 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.769 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.769 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.769 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.769 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.769 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.769 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.769 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.770 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.770 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.770 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.770 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.770 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.770 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.770 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.770 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.770 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.770 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.770 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.770 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.771 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.771 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.771 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.771 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.771 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.771 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.771 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.771 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.771 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.771 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.771 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.771 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.772 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.772 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.772 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.772 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.772 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.772 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.772 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.772 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.772 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.772 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.772 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.772 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.773 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.773 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.773 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.773 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.773 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.773 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.773 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.773 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.773 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.773 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.773 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.773 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.773 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.774 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.774 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.774 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.774 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.774 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.774 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.774 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.774 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.774 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.774 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.774 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.774 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.791 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.792 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.793 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.804 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Oct 13 15:14:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2994: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.949 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.949 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.949 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.949 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.950 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.950 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.950 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.950 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.950 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.950 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.950 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.950 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.950 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.950 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.951 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.951 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.951 12 DEBUG cotyledon.oslo_config_glue [-] host                           = standalone.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.951 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.951 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.951 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.951 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.951 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.951 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.951 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.952 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.952 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.952 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.952 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.952 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.952 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.952 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.952 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.952 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.952 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.952 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.952 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.953 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.953 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.953 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.953 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.953 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.953 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.953 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.953 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.953 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.953 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.954 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.954 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.954 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.954 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.954 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.954 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.954 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.954 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.954 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.954 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.955 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.955 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.955 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.955 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.955 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.955 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.955 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.955 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.955 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.955 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.956 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.956 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.956 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.956 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.956 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.956 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.956 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.956 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.957 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.957 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.957 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.957 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.957 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.957 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.957 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.957 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.957 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.958 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.958 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.958 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.958 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.958 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.958 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.958 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.958 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.959 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.959 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.959 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.959 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.959 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.959 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.959 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.960 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.960 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.960 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.960 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.960 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.960 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.960 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.960 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.960 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.961 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.961 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.961 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.961 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.961 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.961 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.961 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.961 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.961 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.961 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.961 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.962 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.962 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.962 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.962 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.962 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.962 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.962 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.962 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.962 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.962 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.962 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.963 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.963 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.963 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.963 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.963 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.963 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.963 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.963 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.963 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.963 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.963 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.964 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.964 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.964 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.964 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.964 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.964 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.964 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.964 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.964 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.964 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.965 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.965 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.965 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.965 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.965 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.965 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.965 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.965 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.965 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.966 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.966 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.966 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.966 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.966 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.966 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.966 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.966 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.966 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.966 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.966 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.967 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.967 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.967 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.967 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.967 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.967 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.967 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.967 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.967 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.968 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.968 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.968 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.968 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.968 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.968 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.968 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.968 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.968 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.968 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.968 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.968 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.969 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.969 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.969 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.969 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.969 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.969 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.969 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.969 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.969 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.969 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.969 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.970 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.970 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.970 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.970 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.970 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.970 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.970 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.970 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.970 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.970 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.970 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.970 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.971 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.971 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.971 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.971 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.971 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.975 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Oct 13 15:14:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:20.984 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Oct 13 15:14:21 standalone.localdomain python3.9[464604]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=root mode=0700 owner=root setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368460.1636796-528-26380253377607/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:14:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.390 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5b6ce2c2280625cff9d72184537087fc2419d2206b034d898812e337b1cf178d" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519
Oct 13 15:14:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:21.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.594 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 954 Content-Type: application/json Date: Mon, 13 Oct 2025 15:14:21 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-1cb18b0f-aa4d-432b-b88e-78f917338e78 x-openstack-request-id: req-1cb18b0f-aa4d-432b-b88e-78f917338e78 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.595 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "1f304cc6-f658-4c81-8601-a769ec289d72", "name": "m1.nano", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/1f304cc6-f658-4c81-8601-a769ec289d72"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/1f304cc6-f658-4c81-8601-a769ec289d72"}]}, {"id": "993bc811-5d85-498c-8b2f-935e295a6567", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/993bc811-5d85-498c-8b2f-935e295a6567"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/993bc811-5d85-498c-8b2f-935e295a6567"}]}, {"id": "a4dc5724-1446-4a58-9cd5-9b31504ed897", "name": "m1.micro", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/a4dc5724-1446-4a58-9cd5-9b31504ed897"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/a4dc5724-1446-4a58-9cd5-9b31504ed897"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.595 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-1cb18b0f-aa4d-432b-b88e-78f917338e78 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.597 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/993bc811-5d85-498c-8b2f-935e295a6567 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5b6ce2c2280625cff9d72184537087fc2419d2206b034d898812e337b1cf178d" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.626 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 494 Content-Type: application/json Date: Mon, 13 Oct 2025 15:14:21 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-d97ac5e7-7484-4854-9339-b6b8f1f07551 x-openstack-request-id: req-d97ac5e7-7484-4854-9339-b6b8f1f07551 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.626 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "993bc811-5d85-498c-8b2f-935e295a6567", "name": "m1.small", "ram": 512, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 1, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/993bc811-5d85-498c-8b2f-935e295a6567"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/993bc811-5d85-498c-8b2f-935e295a6567"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.626 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/993bc811-5d85-498c-8b2f-935e295a6567 used request id req-d97ac5e7-7484-4854-9339-b6b8f1f07551 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.627 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.631 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.632 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.636 12 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 54a46fec-332e-42f9-83ed-88e763d13f63 / tap8a49767c-fb inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.636 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.640 12 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 8f68d5aa-abc4-451d-89d2-f5342b71831c / tapda3e5a61-7a inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.640 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '85e56eec-e6b3-4a08-8194-098d61a8b2d2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:14:21.632257', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '4bcd3c82-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.821605248, 'message_signature': '51ef7ce3b43ef55fdea027ab683f7f49f58d9f7fbce1b4f43bf3bd4698a8542a'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:14:21.632257', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4bcdc738-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.826784303, 'message_signature': 'ea555895e703c93018036b8eebf7bb5b3e1db51bb0704f08a643a0c09dcc28bb'}]}, 'timestamp': '2025-10-13 15:14:21.640956', '_unique_id': 'd781d6e286e942b59d2ecc9af84851d6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.650 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.657 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.702 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 495 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.703 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.703 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.731 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.732 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f007ef23-30db-44c7-9b31-01eb0bbf6f87', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 495, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:14:21.657908', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bd73ce6-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': 'd889f6110e71947d3154104f5b43834e409e880ed44a442b89f0e8e042a96951'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:14:21.657908', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bd75640-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': '59b9decf2d9a31fbe2d110de8d846a53b6b841d38c53f64809c810d10d05f316'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:14:21.657908', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '4bd76d92-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': '4a522ffe67e91810ff4930923fcafa04d3def2d08ad504acd909282691c8766e'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:14:21.657908', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bdbad08-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.893469018, 'message_signature': '9372bab1d26c28ed8e1779142d7672c47249cfa966843cd4ec2772da3eb294d7'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:14:21.657908', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bdbc5d6-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.893469018, 'message_signature': 'a47f0e80988d4ba012a95c4d69d8f99625edbbcc6eed01b94f29dc6b6db44047'}]}, 'timestamp': '2025-10-13 15:14:21.732765', '_unique_id': '79982fea5aef4f14b6f5f83b1f3984b4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.734 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.735 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.735 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.736 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'cdc84d57-2d38-4928-90aa-a8adba138bdb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:14:21.735823', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '4bdc5406-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.821605248, 'message_signature': 'a200d67c18c918136e24c9bec05197ee71cc5c979619e64d80872944664c8a15'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:14:21.735823', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4bdc6450-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.826784303, 'message_signature': '0dd297e8415fa89783b0ede98da28641a405f8524ad59f9c38cc99cbb76423e2'}]}, 'timestamp': '2025-10-13 15:14:21.736713', '_unique_id': '5bf0e6700e01413ca637b9559cfa5761'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.737 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.738 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.738 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.739 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '65b4ac72-eb0a-40fb-861a-8fc764fa4dad', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:14:21.738884', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '4bdccb34-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.821605248, 'message_signature': '282a72484e116cd3c5896a12c1154c5c7fd4acf6807be1925e1735db29f5d7b9'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:14:21.738884', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4bdcdc46-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.826784303, 'message_signature': 'd0ad530732d19459575963fbba3e4efbc30fa73b1d92c81310613252c80c52f1'}]}, 'timestamp': '2025-10-13 15:14:21.739755', '_unique_id': 'e1bcf127866949128bbe5d5eb5258418'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.740 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.741 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.741 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.742 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [<NovaLikeServer: test>, <NovaLikeServer: bfv-server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test>, <NovaLikeServer: bfv-server>]
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.742 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.742 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.742 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test>, <NovaLikeServer: bfv-server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test>, <NovaLikeServer: bfv-server>]
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.743 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.743 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.743 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '73bd82b3-bc83-4cda-8a8a-28ebcfc9e731', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:14:21.743152', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '4bdd71a6-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.821605248, 'message_signature': '1a70c413e8c55af6615636f84213f9019417d43e145dd9d4ea803f2d229679d1'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:14:21.743152', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4bdd8362-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.826784303, 'message_signature': '0c10ee1fefecc697819861121024b03b3347a3e27f9327769eb3448b54aa96eb'}]}, 'timestamp': '2025-10-13 15:14:21.744030', '_unique_id': '6e841c805576419cb15aa6e07305214a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.744 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.746 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.746 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 55 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.746 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '739b4cbb-2659-4e4f-9c03-2a9bf5441fb9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 55, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:14:21.746162', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '4bdde596-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.821605248, 'message_signature': 'c5504d19c82e59efd0cb73dbbe6031cbc8aa9491fa1d5ccdc96fb673e00ec31c'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:14:21.746162', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4bddf1f8-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.826784303, 'message_signature': '4ac791df9163ed99bfc1e3098d10f5388f835360badc4bcf2c6edb476dbcb9b5'}]}, 'timestamp': '2025-10-13 15:14:21.746784', '_unique_id': 'd51e068bbc0d4966951151965ce35a83'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.747 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.748 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.778 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 52.27734375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.805 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '03b8c9ea-903b-413c-84c1-12a2e0ce069a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.27734375, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:14:21.748221', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '4be2c8f4-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.966453815, 'message_signature': '5d38acef8e657b733f3f286ae4a04d1b9b34a355062f9cc2673666268afbcd12'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:14:21.748221', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '4be6fa64-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.994483459, 'message_signature': '26852bef58a35bfd80c3c3d29111634b46fb49f27263949bc874c861849744f8'}]}, 'timestamp': '2025-10-13 15:14:21.806132', '_unique_id': '3b54de2e7bcb4e63b0c9c66335e53ab9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.807 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.808 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.809 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.809 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '24df7997-c3d9-49fc-9164-f908f3186b32', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:14:21.809140', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '4be7863c-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.821605248, 'message_signature': '0cfd7f86f65185f2bce3af86259107ced928a348fa7b72102de6a1bc9e9f6e32'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:14:21.809140', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4be79dc0-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.826784303, 'message_signature': '5059874ebf9b25a4c2ec57ee4af543afcddeaf4ca63f99b248b3d957413b28df'}]}, 'timestamp': '2025-10-13 15:14:21.810357', '_unique_id': 'cbce2ed15e734df79bdbe49de6cc403d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.811 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.813 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:14:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:21.837 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:14:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:21.838 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:14:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:21.838 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:14:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:21.839 2 DEBUG nova.objects.instance [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.838 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.838 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.839 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.853 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.853 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1ba7338b-3e8f-471d-a53b-e0da0775a4a5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:14:21.813573', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bebfcf8-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7905.003036575, 'message_signature': 'a561af544beafc28c639a7c826a56daefae55876dc2368d21f3cc6e67e8b1d8f'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:14:21.813573', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bec0e14-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7905.003036575, 'message_signature': '1def8d52480ce7e1d995cc8bd7cbd03b5fd4b385753fee08308d0a6785a49e2a'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:14:21.813573', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '4bec1f8a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7905.003036575, 'message_signature': '66c2f78748a95c61713f7bdcfb2ae8a97a7fac7ebee26236391b3d1731982f14'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:14:21.813573', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bee3ed2-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7905.028965009, 'message_signature': 'f8470aaf5b6a3c41cde591d7eb218c363362e14f8572d6483fb3839f9e00dba3'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:14:21.813573', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bee544e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7905.028965009, 'message_signature': '65d43fb7921595d982a7d70c08e35438c37f024e1a5711f17200fc07cd04d27b'}]}, 'timestamp': '2025-10-13 15:14:21.854228', '_unique_id': 'b437392050304dfca8c12f4596a430a6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.855 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.856 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.857 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 4738 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.857 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 3576 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f2b8f3fe-1730-489c-ad20-7adddde65a85', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4738, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:14:21.857139', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '4beed874-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.821605248, 'message_signature': '7351471d279b3fc00cbc54157d2b395157f39d51137990a9a370989e3d094bce'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3576, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:14:21.857139', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4beeeb20-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.826784303, 'message_signature': 'fc278a2c63d914dc57a532ec030df66fa5ebbaf34f93a537e37722b995abdded'}]}, 'timestamp': '2025-10-13 15:14:21.858098', '_unique_id': '194c4947073740769e9b2fcc09350638'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.859 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.860 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.860 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 64 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.860 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 49 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a2480f57-f93c-4a7a-b60a-1604c429af81', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 64, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:14:21.860466', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '4bef5b96-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.821605248, 'message_signature': 'dd5067b1300baa7ea22b61ef6708aab8dec146ef0e064e06059ca1c023fadc4a'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 49, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:14:21.860466', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4bef6ea6-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.826784303, 'message_signature': 'c78f9801f6bddeccf535aaa18af6df1c46a7a9e138ae1975ab9b04ae677c298f'}]}, 'timestamp': '2025-10-13 15:14:21.861462', '_unique_id': '40495f4995dd44fa9f72d5b2d25a5654'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.862 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.863 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.863 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 501653653 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.864 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.864 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.865 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.865 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a81cb447-58da-4fb6-a4ae-940cc8ab077c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 501653653, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:14:21.863924', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4befe0a2-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': '7d3706a9ad4e871b223767edb577e36116dacf19d9352eff7251919e41c8312f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:14:21.863924', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4beff254-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': 'de81c4c2646693226c38bd978a860ddb3aa1d3aff927edcf738d19a66d2600a3'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:14:21.863924', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '4bf001d6-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': 'eac11ff756495ac69f408db9662aadb09136d97e651d821716765cd30f257698'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:14:21.863924', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bf01284-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.893469018, 'message_signature': '22a25d440e5ed22b534a2434ae751b3c5849a27230edbd4dee34484e9e84a0c7'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:14:21.863924', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bf022ce-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.893469018, 'message_signature': 'a76162a889750624597603d82b39c64c38a96f79d2a18fb6f9e03aa1094d8a36'}]}, 'timestamp': '2025-10-13 15:14:21.866051', '_unique_id': '2653788bf9d54df3a9553d2567645e2c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.867 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.868 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.868 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.868 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test>, <NovaLikeServer: bfv-server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test>, <NovaLikeServer: bfv-server>]
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.869 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.869 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.869 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.870 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.870 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.871 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0247ee58-4e65-48c2-b1d6-6805aedf7b4c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:14:21.869345', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bf0b572-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7905.003036575, 'message_signature': 'dd0c4400077a0ef01a8f94ca5ca6ee46b5ef2b5bc54a6dfc8eb2b91f81511e84'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:14:21.869345', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bf0c85a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7905.003036575, 'message_signature': '3402b369cbb43ae0826211cd4a67d78d6c73a12db82148dded1c7a1990d19b89'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:14:21.869345', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '4bf0db60-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7905.003036575, 'message_signature': 'adc7b0024a7779d1a394944acda320c28d31a49fb6d89f3c9302828d9ca2702c'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:14:21.869345', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bf0edd0-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7905.028965009, 'message_signature': 'c489357dc39cf1ce9cfa638955dd8f7bf8874ca3eb1e4f4ea7f3a2d6f45dfdb0'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:14:21.869345', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bf0fd98-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7905.028965009, 'message_signature': '56133fa0dd296f25546145492e3e75a70cf985f8ff047bd757026e345fc85ea9'}]}, 'timestamp': '2025-10-13 15:14:21.871749', '_unique_id': '40b2d83d051f4a1e902ff8893bbd247c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.872 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.874 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.874 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 4526 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.874 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '703d258a-9537-4756-a165-2dcf8f6ecb46', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4526, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:14:21.874162', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '4bf1707a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.821605248, 'message_signature': '1da98b16681666523d88d4b044a84c239590462208fb25f7756f718f214267ff'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:14:21.874162', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4bf18290-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.826784303, 'message_signature': 'b34feed55dcaffb0b1c345501f92dea68bc0f73df4329bcf2f1ee1275c57196f'}]}, 'timestamp': '2025-10-13 15:14:21.875075', '_unique_id': 'd9b68ffd2be54f1f8c4d0b12853ca179'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.876 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.877 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.877 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.877 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e52ba4bf-affd-44cb-88aa-466b772a8d46', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:14:21.877382', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '4bf1ef8c-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.821605248, 'message_signature': '3960f2d8b670b3fb005e875625d22ca74efde777a9cd8f1b57728b24daa127d3'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:14:21.877382', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4bf2001c-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.826784303, 'message_signature': '9c395936f4e5559ef7befbb3c6c6200df34d4d7d9fe1e23c0650a4027aeb59eb'}]}, 'timestamp': '2025-10-13 15:14:21.878329', '_unique_id': '27d43cee0cf74d55822921156e74cf3d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.879 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.880 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.880 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 857967408 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.881 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 155394833 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.881 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 63114260 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.882 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.882 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9f2a3b14-52ca-4e52-8803-996fa7125dc2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 857967408, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:14:21.880951', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bf27aba-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': '979951fe51153e10e018b2dcfd950edc6f14f7f95be95542548ca6bddefa4c14'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 155394833, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:14:21.880951', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bf28a28-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': '67f69d98fc5e36b3401663463585102feff24b5429faebcb37435b27096c95ce'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 63114260, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:14:21.880951', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '4bf29482-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': 'fca41ac83b4c5f1a0ade01337e8eba903e37cf396525395b8c7f35d5b08743e9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:14:21.880951', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bf29ea0-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.893469018, 'message_signature': '9a9629280a91e8e7530c7b7f4211b9e5324313be5fb40f87d376651f7a7f65f6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:14:21.880951', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bf2a86e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.893469018, 'message_signature': '24f07fc490e07abd63fae930fbb65f39cbea867bf16382b49c6fb9e4ecfa3ee4'}]}, 'timestamp': '2025-10-13 15:14:21.882532', '_unique_id': '3ae009118f5e45aba7fffcf273cb17ba'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.883 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.884 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.884 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.884 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.884 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.885 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '38754721-7f38-49e2-9ec9-2c4a2033551a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:14:21.884077', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bf2f134-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': 'e18bd4a11caabeed9d02ebb02d8e66ff0251c4569b08583bfc4933603f657e56'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:14:21.884077', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bf2fcb0-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': 'c95d6e90a7ef76d7173eb97890447886362cd1ced25b4bea46ac5651bf511bdf'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:14:21.884077', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '4bf30692-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': '7f68b5bdb64bfb6b2001f9334a4e5263d743e569e30f16926fe01a2f8fad9e81'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:14:21.884077', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bf31060-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.893469018, 'message_signature': 'e737227bacf8687b881d3a154efc90a69eced6c8afeb7882be8d43047cb87e3b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:14:21.884077', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bf31b5a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.893469018, 'message_signature': '4738860f78f8b674942aeee5abe85531ccb1eb618b5a99fcd121f8aea2e576e8'}]}, 'timestamp': '2025-10-13 15:14:21.885644', '_unique_id': '60b59d2e085c434a86ed840b944da5b8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.886 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.887 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.887 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.887 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [<NovaLikeServer: test>, <NovaLikeServer: bfv-server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test>, <NovaLikeServer: bfv-server>]
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.887 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.887 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 26440000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 26200000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '87dec3cb-47c7-42af-b84a-f3a13e5c5b7d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26440000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:14:21.887711', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '4bf37e6a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.966453815, 'message_signature': '78a24165694ed9f8b5f760841ab33176ba7e362df57850a1b130b0e10e2da26d'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26200000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:14:21.887711', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '4bf3890a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.994483459, 'message_signature': '2b2dc32e67a87534fcd0cfdbf33bc34fee0995db285644d9f00ac816bdd1e6b7'}]}, 'timestamp': '2025-10-13 15:14:21.888262', '_unique_id': '6e75301865ff4ec1b4c765b660cb118f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.888 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.889 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.889 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.890 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.890 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.890 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.891 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '197419f2-614d-4d43-946a-7cb3b4b438af', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:14:21.889778', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bf3cf0a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7905.003036575, 'message_signature': '384d8817c0e05d9a0de8bb208af5cde7ae3e6a5c417cc52abfa58746ef788978'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:14:21.889778', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bf3dacc-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7905.003036575, 'message_signature': '5b688ff02586a65ff64682f826ae9df15233ed9fc71a327f821d76dbaa81ebe9'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:14:21.889778', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '4bf3ea58-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7905.003036575, 'message_signature': '69250cec48de502f17ed1beca5f72252944b109d8e5648d791fafa65ca16a393'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:14:21.889778', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bf3fe9e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7905.028965009, 'message_signature': '7849b25420208cc5a188f9011d213d46028cf3992727c1b3e1e279e48467972e'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:14:21.889778', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bf40d80-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7905.028965009, 'message_signature': '87501a1afd50d58fd4dec5aa53d53968adafe0a8f1fc7f36682493f4e21dfdfb'}]}, 'timestamp': '2025-10-13 15:14:21.891718', '_unique_id': '5ac2f8164e17448f8e9945c29abb82fc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.892 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.894 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.894 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.894 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.894 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.895 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.895 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'da267a54-47a2-47b1-a036-d0de352d8f32', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:14:21.894158', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bf47d6a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': 'e3b0013f3ac9342361c25bbc222d3297f1b1998a17963c7a83670fc0970ea519'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:14:21.894158', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bf48dfa-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': '8cf55980e7f58760e60826a741d1d4d7f6a524ecbe5421e220d378c4cc2f3259'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 108, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:14:21.894158', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '4bf49b2e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': '55f8cc287d2618d607b51ff07a6508973245cbca9c973dacd83e72d7c937b94e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:14:21.894158', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bf4a9a2-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.893469018, 'message_signature': 'd19c3e9ab5124b39736018070a7933798a05f23377f3f49bf0de33a43d1c1791'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:14:21.894158', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bf4b906-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.893469018, 'message_signature': '06cb5fde6eb69c9a09e8c44756a0eaac7f26b587178e754ab4f42130d6c78d6a'}]}, 'timestamp': '2025-10-13 15:14:21.896110', '_unique_id': '13f39dd2fc15455cb877338b1d9d4d0f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.897 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.898 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.898 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.899 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.899 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 2321920 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.899 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.900 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0b7351db-7ef1-48ab-b80c-803ea5e0f4f7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:14:21.898605', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bf52a94-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': 'd1144e0fe531190155edffe23a38406273d396707f8b97ce1dd0d47082ee87c4'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:14:21.898605', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bf5399e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': '076832ad3be6da5c07ff291a6d66f8283afe931e9ca59f371cc30ee575f8cdae'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2321920, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:14:21.898605', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '4bf54902-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.847353397, 'message_signature': 'ee4e3697ad285916a8fc5197f2ed8611fdbf55e5536c8f301148b3692d87bb17'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:14:21.898605', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4bf55776-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.893469018, 'message_signature': '445c5a59b05c6a306c408233a58bb44f6438d0835c0083df91948ae84a48b313'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:14:21.898605', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4bf56536-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 7904.893469018, 'message_signature': '008df7edb157f2a94032b5c62bec8d42988f06222d05a3b3d7cc0c6ecc9e3262'}]}, 'timestamp': '2025-10-13 15:14:21.900537', '_unique_id': 'ca2d39708f714049909be051ef289ac5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:14:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:14:21.901 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:14:21 standalone.localdomain python3.9[464715]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Oct 13 15:14:22 standalone.localdomain python3.9[464823]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:14:22 standalone.localdomain ceph-mon[29756]: pgmap v2994: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2995: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:22.965 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:14:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:22.988 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:14:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:22.988 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:14:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:22.989 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:14:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:22.990 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:14:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:22.990 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:14:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:22.990 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:14:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:22.991 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:14:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:22.991 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:14:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:22.991 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:14:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:22.992 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.011 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.012 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.012 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.013 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.013 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:14:23 standalone.localdomain python3[464931]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:14:23
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', '.mgr', 'images', 'vms', 'manila_metadata', 'manila_data', 'backups']
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:14:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:14:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/687994290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.481 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:14:23 standalone.localdomain podman[464987]: 
Oct 13 15:14:23 standalone.localdomain podman[464987]: 2025-10-13 15:14:23.525232414 +0000 UTC m=+0.070549199 container create 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:14:23 standalone.localdomain podman[464987]: 2025-10-13 15:14:23.497417465 +0000 UTC m=+0.042734270 image pull  quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c
Oct 13 15:14:23 standalone.localdomain python3[464931]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Oct 13 15:14:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41392 DF PROTO=TCP SPT=55012 DPT=9102 SEQ=2075339948 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB3EC5F0000000001030307) 
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.560 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.560 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.560 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.564 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.565 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:14:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:14:23 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/687994290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:14:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.724 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.725 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=10560MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.726 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.726 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:14:23 standalone.localdomain podman[465029]: 2025-10-13 15:14:23.795220643 +0000 UTC m=+0.060187000 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.802 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.802 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.803 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.803 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:14:23 standalone.localdomain podman[465029]: 2025-10-13 15:14:23.826820653 +0000 UTC m=+0.091787050 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct 13 15:14:23 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:14:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:23.865 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:14:24 standalone.localdomain python3.9[465182]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:14:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:14:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2947480828' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:14:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:24.399 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:14:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:24.409 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:14:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:24.430 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:14:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:24.433 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:14:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:24.433 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:14:24 standalone.localdomain ceph-mon[29756]: pgmap v2995: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:24 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2947480828' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:14:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2996: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:14:25 standalone.localdomain python3.9[465294]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:14:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:25.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57242 DF PROTO=TCP SPT=51364 DPT=9101 SEQ=3049024220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB3F3EE0000000001030307) 
Oct 13 15:14:25 standalone.localdomain python3.9[465401]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760368465.1967435-581-224349291539531/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:14:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:14:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:26.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:26 standalone.localdomain python3.9[465454]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:14:26 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:14:26 standalone.localdomain systemd-rc-local-generator[465482]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:14:26 standalone.localdomain systemd-sysv-generator[465485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:14:26 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:14:26 standalone.localdomain ceph-mon[29756]: pgmap v2996: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:14:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2997: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:14:27 standalone.localdomain python3.9[465543]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:14:28 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:14:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57244 DF PROTO=TCP SPT=51364 DPT=9101 SEQ=3049024220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB3FFF60000000001030307) 
Oct 13 15:14:28 standalone.localdomain systemd-rc-local-generator[465569]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:14:28 standalone.localdomain systemd-sysv-generator[465575]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:14:28 standalone.localdomain ceph-mon[29756]: pgmap v2997: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:14:28 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:14:28 standalone.localdomain systemd[1]: Starting node_exporter container...
Oct 13 15:14:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2998: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:14:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:14:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:14:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:14:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:14:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:14:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:14:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:14:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:14:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:14:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:14:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:14:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:14:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:14:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:14:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:14:29 standalone.localdomain systemd[1]: tmp-crun.i6vGor.mount: Deactivated successfully.
Oct 13 15:14:29 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:14:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:14:29 standalone.localdomain podman[465583]: 2025-10-13 15:14:29.10228993 +0000 UTC m=+0.141419998 container init 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.114Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.114Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.114Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.114Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.114Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.114Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=arp
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=bcache
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=bonding
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=cpu
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=edac
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=filefd
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=netclass
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=netdev
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=netstat
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=nfs
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=nvme
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=softnet
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=systemd
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=xfs
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.115Z caller=node_exporter.go:117 level=info collector=zfs
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.116Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Oct 13 15:14:29 standalone.localdomain node_exporter[465598]: ts=2025-10-13T15:14:29.116Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct 13 15:14:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:14:29 standalone.localdomain podman[465583]: 2025-10-13 15:14:29.132077024 +0000 UTC m=+0.171207092 container start 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:14:29 standalone.localdomain podman[465583]: node_exporter
Oct 13 15:14:29 standalone.localdomain systemd[1]: Started node_exporter container.
Oct 13 15:14:29 standalone.localdomain podman[465607]: 2025-10-13 15:14:29.208589313 +0000 UTC m=+0.066803485 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:14:29 standalone.localdomain podman[465607]: 2025-10-13 15:14:29.243950903 +0000 UTC m=+0.102165045 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:14:29 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:14:29 standalone.localdomain python3.9[465737]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:14:29 standalone.localdomain systemd[1]: Stopping node_exporter container...
Oct 13 15:14:29 standalone.localdomain systemd[1]: libpod-4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.scope: Deactivated successfully.
Oct 13 15:14:29 standalone.localdomain podman[465741]: 2025-10-13 15:14:29.996705185 +0000 UTC m=+0.055702618 container died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:14:30 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.timer: Deactivated successfully.
Oct 13 15:14:30 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:14:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018-userdata-shm.mount: Deactivated successfully.
Oct 13 15:14:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2af6437795897a4ec838bc44d692bff2e7563aa62e5d47d672713cc15228f3d2-merged.mount: Deactivated successfully.
Oct 13 15:14:30 standalone.localdomain podman[465741]: 2025-10-13 15:14:30.066063751 +0000 UTC m=+0.125061194 container cleanup 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:14:30 standalone.localdomain podman[465741]: node_exporter
Oct 13 15:14:30 standalone.localdomain systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 13 15:14:30 standalone.localdomain podman[465767]: 2025-10-13 15:14:30.172896622 +0000 UTC m=+0.073758966 container cleanup 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:14:30 standalone.localdomain podman[465767]: node_exporter
Oct 13 15:14:30 standalone.localdomain systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Oct 13 15:14:30 standalone.localdomain systemd[1]: Stopped node_exporter container.
Oct 13 15:14:30 standalone.localdomain systemd[1]: Starting node_exporter container...
Oct 13 15:14:30 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:14:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:14:30 standalone.localdomain podman[465780]: 2025-10-13 15:14:30.32396611 +0000 UTC m=+0.121075059 container init 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.335Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.335Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.336Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.337Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.337Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.337Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.337Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.337Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=arp
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=bcache
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=bonding
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=cpu
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=edac
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=filefd
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.338Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=netclass
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=netdev
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=netstat
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=nfs
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=nvme
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=softnet
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=systemd
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=xfs
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.339Z caller=node_exporter.go:117 level=info collector=zfs
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.340Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Oct 13 15:14:30 standalone.localdomain node_exporter[465795]: ts=2025-10-13T15:14:30.340Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct 13 15:14:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:14:30 standalone.localdomain podman[465780]: 2025-10-13 15:14:30.360248919 +0000 UTC m=+0.157357868 container start 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:14:30 standalone.localdomain podman[465780]: node_exporter
Oct 13 15:14:30 standalone.localdomain systemd[1]: Started node_exporter container.
Oct 13 15:14:30 standalone.localdomain podman[465804]: 2025-10-13 15:14:30.434348352 +0000 UTC m=+0.070240744 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:14:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:30.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:30 standalone.localdomain podman[465804]: 2025-10-13 15:14:30.472892492 +0000 UTC m=+0.108784864 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:14:30 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:14:30 standalone.localdomain ceph-mon[29756]: pgmap v2998: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:14:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v2999: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:14:30 standalone.localdomain python3.9[465934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:14:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:14:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:31.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:31 standalone.localdomain python3.9[466020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=root mode=0700 owner=root setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368470.5672214-613-3366456613668/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:14:32 standalone.localdomain python3.9[466128]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Oct 13 15:14:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57245 DF PROTO=TCP SPT=51364 DPT=9101 SEQ=3049024220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB40FB60000000001030307) 
Oct 13 15:14:32 standalone.localdomain ceph-mon[29756]: pgmap v2999: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:14:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3000: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:14:33 standalone.localdomain python3.9[466236]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:14:33 standalone.localdomain python3[466344]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:14:34 standalone.localdomain ceph-mon[29756]: pgmap v3000: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:14:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3001: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:14:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:35.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:35 standalone.localdomain podman[466358]: 2025-10-13 15:14:34.044335588 +0000 UTC m=+0.051164837 image pull  quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Oct 13 15:14:36 standalone.localdomain podman[466429]: 
Oct 13 15:14:36 standalone.localdomain podman[466429]: 2025-10-13 15:14:36.085396469 +0000 UTC m=+0.089765366 container create e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:14:36 standalone.localdomain podman[466429]: 2025-10-13 15:14:36.046182179 +0000 UTC m=+0.050551166 image pull  quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Oct 13 15:14:36 standalone.localdomain python3[466344]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Oct 13 15:14:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:14:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:36.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:14:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:14:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:14:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:14:36 standalone.localdomain ceph-mon[29756]: pgmap v3001: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:14:36 standalone.localdomain systemd[1]: tmp-crun.ZavjmV.mount: Deactivated successfully.
Oct 13 15:14:36 standalone.localdomain systemd[1]: tmp-crun.J1PBj3.mount: Deactivated successfully.
Oct 13 15:14:36 standalone.localdomain podman[466576]: 2025-10-13 15:14:36.81579519 +0000 UTC m=+0.084376687 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, batch=17.1_20250721.1, io.openshift.expose-services=, container_name=swift_object_server, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, version=17.1.9, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 15:14:36 standalone.localdomain podman[466577]: 2025-10-13 15:14:36.88657301 +0000 UTC m=+0.150321006 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:14:36 standalone.localdomain podman[466577]: 2025-10-13 15:14:36.917313844 +0000 UTC m=+0.181061860 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:14:36 standalone.localdomain python3.9[466575]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:14:36 standalone.localdomain podman[466578]: 2025-10-13 15:14:36.926538533 +0000 UTC m=+0.187591534 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, batch=17.1_20250721.1, container_name=swift_container_server, vendor=Red Hat, Inc., version=17.1.9, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, vcs-type=git, release=1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container)
Oct 13 15:14:36 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:14:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3002: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:36 standalone.localdomain podman[466584]: 2025-10-13 15:14:36.85468533 +0000 UTC m=+0.108552876 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-account, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, container_name=swift_account_server, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=)
Oct 13 15:14:37 standalone.localdomain podman[466576]: 2025-10-13 15:14:37.04661924 +0000 UTC m=+0.315200677 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=swift_object_server, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Oct 13 15:14:37 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:14:37 standalone.localdomain podman[466584]: 2025-10-13 15:14:37.07116645 +0000 UTC m=+0.325033976 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-swift-account-container, distribution-scope=public, release=1, name=rhosp17/openstack-swift-account, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:14:37 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:14:37 standalone.localdomain podman[466578]: 2025-10-13 15:14:37.149832478 +0000 UTC m=+0.410885459 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, container_name=swift_container_server, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step4, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9)
Oct 13 15:14:37 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:14:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49351 DF PROTO=TCP SPT=37650 DPT=9100 SEQ=3452953994 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB422740000000001030307) 
Oct 13 15:14:37 standalone.localdomain python3.9[466780]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:14:38 standalone.localdomain python3.9[466887]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760368477.64441-666-17244303748448/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:14:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49352 DF PROTO=TCP SPT=37650 DPT=9100 SEQ=3452953994 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB426760000000001030307) 
Oct 13 15:14:38 standalone.localdomain ceph-mon[29756]: pgmap v3002: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:38 standalone.localdomain python3.9[466940]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:14:38 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:14:38 standalone.localdomain systemd-rc-local-generator[466961]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:14:38 standalone.localdomain systemd-sysv-generator[466966]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:14:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3003: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:38 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:14:39 standalone.localdomain python3.9[467028]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:14:39 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:14:39 standalone.localdomain systemd-sysv-generator[467062]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:14:39 standalone.localdomain systemd-rc-local-generator[467059]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:14:39 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:14:40 standalone.localdomain systemd[1]: Starting podman_exporter container...
Oct 13 15:14:40 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:14:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:14:40 standalone.localdomain podman[467069]: 2025-10-13 15:14:40.333056146 +0000 UTC m=+0.176905970 container init e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:14:40 standalone.localdomain podman_exporter[467084]: ts=2025-10-13T15:14:40.356Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct 13 15:14:40 standalone.localdomain podman_exporter[467084]: ts=2025-10-13T15:14:40.356Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct 13 15:14:40 standalone.localdomain podman_exporter[467084]: ts=2025-10-13T15:14:40.357Z caller=handler.go:94 level=info msg="enabled collectors"
Oct 13 15:14:40 standalone.localdomain podman_exporter[467084]: ts=2025-10-13T15:14:40.357Z caller=handler.go:105 level=info collector=container
Oct 13 15:14:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:14:40 standalone.localdomain podman[467069]: 2025-10-13 15:14:40.382831447 +0000 UTC m=+0.226681251 container start e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:14:40 standalone.localdomain podman[467069]: podman_exporter
Oct 13 15:14:40 standalone.localdomain systemd[1]: Starting Podman API Service...
Oct 13 15:14:40 standalone.localdomain systemd[1]: Started podman_exporter container.
Oct 13 15:14:40 standalone.localdomain systemd[1]: Started Podman API Service.
Oct 13 15:14:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:40.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:40 standalone.localdomain podman[467099]: time="2025-10-13T15:14:40Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct 13 15:14:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49353 DF PROTO=TCP SPT=37650 DPT=9100 SEQ=3452953994 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB42E770000000001030307) 
Oct 13 15:14:40 standalone.localdomain podman[467095]: 2025-10-13 15:14:40.486959494 +0000 UTC m=+0.098586274 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:14:40 standalone.localdomain podman[467095]: 2025-10-13 15:14:40.496332018 +0000 UTC m=+0.107958758 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:14:40 standalone.localdomain podman[467095]: unhealthy
Oct 13 15:14:40 standalone.localdomain podman[467099]: time="2025-10-13T15:14:40Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 13 15:14:40 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:14:40 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:14:40 standalone.localdomain podman[467099]: time="2025-10-13T15:14:40Z" level=info msg="Setting parallel job count to 25"
Oct 13 15:14:40 standalone.localdomain podman[467099]: time="2025-10-13T15:14:40Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct 13 15:14:40 standalone.localdomain podman[467099]: time="2025-10-13T15:14:40Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"/run/podman/podman.sock\""
Oct 13 15:14:40 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:14:40 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 13 15:14:40 standalone.localdomain podman[467099]: time="2025-10-13T15:14:40Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:14:40 standalone.localdomain ceph-mon[29756]: pgmap v3003: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3004: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:41 standalone.localdomain python3.9[467240]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:14:41 standalone.localdomain systemd[1]: Stopping podman_exporter container...
Oct 13 15:14:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:14:40 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 0 "" "Go-http-client/1.1"
Oct 13 15:14:41 standalone.localdomain systemd[1]: libpod-e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.scope: Deactivated successfully.
Oct 13 15:14:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:14:41 standalone.localdomain podman[467244]: 2025-10-13 15:14:41.392754757 +0000 UTC m=+0.102344022 container died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:14:41 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.timer: Deactivated successfully.
Oct 13 15:14:41 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:14:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:41.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:42 standalone.localdomain systemd[1]: tmp-crun.7RRkL3.mount: Deactivated successfully.
Oct 13 15:14:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:14:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52-userdata-shm.mount: Deactivated successfully.
Oct 13 15:14:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:14:42 standalone.localdomain systemd[1]: tmp-crun.S2Le9O.mount: Deactivated successfully.
Oct 13 15:14:42 standalone.localdomain podman[467272]: 2025-10-13 15:14:42.810383283 +0000 UTC m=+0.077072739 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 15:14:42 standalone.localdomain ceph-mon[29756]: pgmap v3004: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:42 standalone.localdomain podman[467272]: 2025-10-13 15:14:42.819715246 +0000 UTC m=+0.086404702 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible)
Oct 13 15:14:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3005: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:14:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:14:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-78f028c512b8ccaae72db4f3a887a529627d57fd75b2319005bdc320561533f5-merged.mount: Deactivated successfully.
Oct 13 15:14:43 standalone.localdomain podman[467244]: 2025-10-13 15:14:43.285652181 +0000 UTC m=+1.995241376 container cleanup e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:14:43 standalone.localdomain podman[467244]: podman_exporter
Oct 13 15:14:43 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:14:43 standalone.localdomain podman[467256]: 2025-10-13 15:14:43.300202748 +0000 UTC m=+1.904986505 container cleanup e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:14:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:14:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57247 DF PROTO=TCP SPT=56572 DPT=9105 SEQ=4161222564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB43C360000000001030307) 
Oct 13 15:14:44 standalone.localdomain systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 13 15:14:44 standalone.localdomain podman[467292]: 2025-10-13 15:14:44.129911794 +0000 UTC m=+0.076922225 container cleanup e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:14:44 standalone.localdomain podman[467292]: podman_exporter
Oct 13 15:14:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:14:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:14:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:14:44 standalone.localdomain systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Oct 13 15:14:44 standalone.localdomain systemd[1]: Stopped podman_exporter container.
Oct 13 15:14:44 standalone.localdomain systemd[1]: Starting podman_exporter container...
Oct 13 15:14:44 standalone.localdomain ceph-mon[29756]: pgmap v3005: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3006: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:45 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:45.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:14:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-76bdac62c12a19965f1d6e6025533ee72572af7604275a0c89647dc23f734651-merged.mount: Deactivated successfully.
Oct 13 15:14:45 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:14:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:14:45 standalone.localdomain podman[467306]: 2025-10-13 15:14:45.776287606 +0000 UTC m=+1.317516728 container init e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:14:45 standalone.localdomain podman_exporter[467321]: ts=2025-10-13T15:14:45.791Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct 13 15:14:45 standalone.localdomain podman_exporter[467321]: ts=2025-10-13T15:14:45.791Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct 13 15:14:45 standalone.localdomain podman_exporter[467321]: ts=2025-10-13T15:14:45.791Z caller=handler.go:94 level=info msg="enabled collectors"
Oct 13 15:14:45 standalone.localdomain podman_exporter[467321]: ts=2025-10-13T15:14:45.791Z caller=handler.go:105 level=info collector=container
Oct 13 15:14:45 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:14:45 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 13 15:14:45 standalone.localdomain podman[467099]: time="2025-10-13T15:14:45Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:14:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:14:45 standalone.localdomain podman[467306]: 2025-10-13 15:14:45.85963671 +0000 UTC m=+1.400865812 container start e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:14:45 standalone.localdomain podman[467306]: podman_exporter
Oct 13 15:14:45 standalone.localdomain podman[467331]: 2025-10-13 15:14:45.878285085 +0000 UTC m=+0.066806547 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:14:45 standalone.localdomain podman[467331]: 2025-10-13 15:14:45.887743391 +0000 UTC m=+0.076264843 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:14:45 standalone.localdomain podman[467331]: unhealthy
Oct 13 15:14:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:14:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:46.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:46 standalone.localdomain ceph-mon[29756]: pgmap v3006: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3007: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:14:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:14:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:14:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-26243e2ef834ed1ddfa94abd8ba67d6e19ac7be9552cf98889448c1547a2c001-merged.mount: Deactivated successfully.
Oct 13 15:14:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a31aafc7f1b29fd16a307719f75e509fb0a8510c4c99fe1f9e98b69d85c353e-merged.mount: Deactivated successfully.
Oct 13 15:14:47 standalone.localdomain systemd[1]: Started podman_exporter container.
Oct 13 15:14:47 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:14:47 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:14:48 standalone.localdomain python3.9[467461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:14:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:14:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40741 DF PROTO=TCP SPT=40300 DPT=9882 SEQ=2908108442 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB44E770000000001030307) 
Oct 13 15:14:48 standalone.localdomain ceph-mon[29756]: pgmap v3007: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:14:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:14:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3008: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:49 standalone.localdomain python3.9[467559]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=root mode=0700 owner=root setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368488.0234208-698-246568308430176/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:14:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:14:49 standalone.localdomain python3.9[467667]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Oct 13 15:14:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:14:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-96e9a1eb6145016f992fadcf28b3705343c009294ada58a4e5bd657e99e23074-merged.mount: Deactivated successfully.
Oct 13 15:14:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:14:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57249 DF PROTO=TCP SPT=56572 DPT=9105 SEQ=4161222564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB453F70000000001030307) 
Oct 13 15:14:50 standalone.localdomain python3.9[467775]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:14:50 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:50.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:14:50 standalone.localdomain podman[467793]: 2025-10-13 15:14:50.575252356 +0000 UTC m=+0.095404674 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 13 15:14:50 standalone.localdomain podman[467793]: 2025-10-13 15:14:50.608277661 +0000 UTC m=+0.128430009 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:14:50 standalone.localdomain podman[467793]: unhealthy
Oct 13 15:14:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:14:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-26243e2ef834ed1ddfa94abd8ba67d6e19ac7be9552cf98889448c1547a2c001-merged.mount: Deactivated successfully.
Oct 13 15:14:50 standalone.localdomain ceph-mon[29756]: pgmap v3008: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3009: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:51 standalone.localdomain python3[467901]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:14:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:14:51 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:51.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:14:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:14:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-96e9a1eb6145016f992fadcf28b3705343c009294ada58a4e5bd657e99e23074-merged.mount: Deactivated successfully.
Oct 13 15:14:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-76bdac62c12a19965f1d6e6025533ee72572af7604275a0c89647dc23f734651-merged.mount: Deactivated successfully.
Oct 13 15:14:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-96e9a1eb6145016f992fadcf28b3705343c009294ada58a4e5bd657e99e23074-merged.mount: Deactivated successfully.
Oct 13 15:14:51 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:14:51 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Failed with result 'exit-code'.
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: pgmap v3009: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.419688) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368492419737, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 2108, "num_deletes": 251, "total_data_size": 1945721, "memory_usage": 1991504, "flush_reason": "Manual Compaction"}
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368492431399, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 1886462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69320, "largest_seqno": 71427, "table_properties": {"data_size": 1878364, "index_size": 4864, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16912, "raw_average_key_size": 19, "raw_value_size": 1861791, "raw_average_value_size": 2180, "num_data_blocks": 221, "num_entries": 854, "num_filter_entries": 854, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760368296, "oldest_key_time": 1760368296, "file_creation_time": 1760368492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 11826 microseconds, and 6341 cpu microseconds.
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.431466) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 1886462 bytes OK
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.431536) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.433646) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.433671) EVENT_LOG_v1 {"time_micros": 1760368492433663, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.433696) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 1936738, prev total WAL file size 1936738, number of live WAL files 2.
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.434468) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(1842KB)], [158(5276KB)]
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368492434543, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 7289473, "oldest_snapshot_seqno": -1}
Oct 13 15:14:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 6274 keys, 6266736 bytes, temperature: kUnknown
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368492474461, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 6266736, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6228924, "index_size": 21010, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 163303, "raw_average_key_size": 26, "raw_value_size": 6118926, "raw_average_value_size": 975, "num_data_blocks": 835, "num_entries": 6274, "num_filter_entries": 6274, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760368492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.474779) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 6266736 bytes
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.476539) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.0 rd, 156.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 5.2 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 6792, records dropped: 518 output_compression: NoCompression
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.476565) EVENT_LOG_v1 {"time_micros": 1760368492476553, "job": 98, "event": "compaction_finished", "compaction_time_micros": 40047, "compaction_time_cpu_micros": 21517, "output_level": 6, "num_output_files": 1, "total_output_size": 6266736, "num_input_records": 6792, "num_output_records": 6274, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368492476945, "job": 98, "event": "table_file_deletion", "file_number": 160}
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368492477837, "job": 98, "event": "table_file_deletion", "file_number": 158}
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.434335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.477939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.477947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.477950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.477953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:14:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:14:52.477956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:14:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:14:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3010: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 13 15:14:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:14:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:14:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:14:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:14:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:14:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:14:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:14:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:14:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46762 DF PROTO=TCP SPT=33970 DPT=9102 SEQ=259347417 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB4618F0000000001030307) 
Oct 13 15:14:54 standalone.localdomain ceph-mon[29756]: pgmap v3010: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:14:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3011: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a31aafc7f1b29fd16a307719f75e509fb0a8510c4c99fe1f9e98b69d85c353e-merged.mount: Deactivated successfully.
Oct 13 15:14:55 standalone.localdomain podman[467462]: 2025-10-13 15:14:55.233619985 +0000 UTC m=+6.746523080 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:14:55 standalone.localdomain podman[467930]: 2025-10-13 15:14:55.298193081 +0000 UTC m=+0.558966284 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:14:55 standalone.localdomain podman[467930]: 2025-10-13 15:14:55.340406115 +0000 UTC m=+0.601179298 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Oct 13 15:14:55 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:55.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22317 DF PROTO=TCP SPT=40356 DPT=9101 SEQ=1927089292 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB4691D0000000001030307) 
Oct 13 15:14:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:14:56 standalone.localdomain ceph-mon[29756]: pgmap v3011: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:56 standalone.localdomain nova_compute[456855]: 2025-10-13 15:14:56.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:14:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3012: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-26243e2ef834ed1ddfa94abd8ba67d6e19ac7be9552cf98889448c1547a2c001-merged.mount: Deactivated successfully.
Oct 13 15:14:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:14:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a31aafc7f1b29fd16a307719f75e509fb0a8510c4c99fe1f9e98b69d85c353e-merged.mount: Deactivated successfully.
Oct 13 15:14:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:14:57 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:14:58 standalone.localdomain ceph-mon[29756]: pgmap v3012: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:14:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22319 DF PROTO=TCP SPT=40356 DPT=9101 SEQ=1927089292 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB475360000000001030307) 
Oct 13 15:14:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3013: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:00 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:00.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:00 standalone.localdomain ceph-mon[29756]: pgmap v3013: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:15:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-96e9a1eb6145016f992fadcf28b3705343c009294ada58a4e5bd657e99e23074-merged.mount: Deactivated successfully.
Oct 13 15:15:00 standalone.localdomain podman[467972]: 2025-10-13 15:15:00.829255805 +0000 UTC m=+0.089479498 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:15:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-26243e2ef834ed1ddfa94abd8ba67d6e19ac7be9552cf98889448c1547a2c001-merged.mount: Deactivated successfully.
Oct 13 15:15:00 standalone.localdomain podman[467972]: 2025-10-13 15:15:00.861753424 +0000 UTC m=+0.121977097 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:15:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:15:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3014: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:15:01 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:01.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:01 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:15:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:15:02 standalone.localdomain ceph-mon[29756]: pgmap v3014: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:02 standalone.localdomain sudo[468004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:15:02 standalone.localdomain sudo[468004]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:15:02 standalone.localdomain sudo[468004]: pam_unix(sudo:session): session closed for user root
Oct 13 15:15:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22320 DF PROTO=TCP SPT=40356 DPT=9101 SEQ=1927089292 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB484F60000000001030307) 
Oct 13 15:15:02 standalone.localdomain sudo[468022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:15:02 standalone.localdomain sudo[468022]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:15:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3015: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:15:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-96e9a1eb6145016f992fadcf28b3705343c009294ada58a4e5bd657e99e23074-merged.mount: Deactivated successfully.
Oct 13 15:15:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:15:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-96e9a1eb6145016f992fadcf28b3705343c009294ada58a4e5bd657e99e23074-merged.mount: Deactivated successfully.
Oct 13 15:15:04 standalone.localdomain sudo[468022]: pam_unix(sudo:session): session closed for user root
Oct 13 15:15:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:15:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:15:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:15:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:15:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:15:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:15:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:15:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:15:04 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 80153080-13ad-450a-8825-2982140ce610 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:15:04 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 80153080-13ad-450a-8825-2982140ce610 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:15:04 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 80153080-13ad-450a-8825-2982140ce610 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:15:04 standalone.localdomain sudo[468083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:15:04 standalone.localdomain sudo[468083]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:15:04 standalone.localdomain sudo[468083]: pam_unix(sudo:session): session closed for user root
Oct 13 15:15:04 standalone.localdomain ceph-mon[29756]: pgmap v3015: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:04 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:15:04 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:15:04 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:15:04 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:15:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:15:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:15:04 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:15:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:15:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3016: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:15:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:15:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 13 15:15:05 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:05.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:05 standalone.localdomain ceph-mon[29756]: pgmap v3016: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:05 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:15:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:15:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:15:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:15:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:15:06 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:06.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:15:06.931 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:15:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:15:06.932 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:15:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:15:06.932 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:15:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3017: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:15:07 standalone.localdomain podman[468101]: 2025-10-13 15:15:07.066751945 +0000 UTC m=+0.085294250 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:15:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:15:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:15:07 standalone.localdomain podman[468101]: 2025-10-13 15:15:07.107993371 +0000 UTC m=+0.126535666 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:15:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24373 DF PROTO=TCP SPT=42042 DPT=9100 SEQ=2308385922 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB497A40000000001030307) 
Oct 13 15:15:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a31aafc7f1b29fd16a307719f75e509fb0a8510c4c99fe1f9e98b69d85c353e-merged.mount: Deactivated successfully.
Oct 13 15:15:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:15:07 standalone.localdomain podman[467462]: 2025-10-13 15:15:07.555399238 +0000 UTC m=+19.068302353 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:15:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:15:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c400a0b1febfb43b3487c71ecbd5a108092b42e80cd739860482d3019963fe9e-merged.mount: Deactivated successfully.
Oct 13 15:15:08 standalone.localdomain ceph-mon[29756]: pgmap v3017: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24374 DF PROTO=TCP SPT=42042 DPT=9100 SEQ=2308385922 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB49BB60000000001030307) 
Oct 13 15:15:08 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:15:08 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:15:08 standalone.localdomain podman[467917]: 2025-10-13 15:14:52.685384755 +0000 UTC m=+0.041271696 image pull  quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Oct 13 15:15:08 standalone.localdomain podman[468121]: 2025-10-13 15:15:08.552446918 +0000 UTC m=+1.447357779 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, build-date=2025-07-21T16:11:22, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=swift_account_server, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, 
com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible)
Oct 13 15:15:08 standalone.localdomain podman[468120]: 2025-10-13 15:15:08.608535877 +0000 UTC m=+1.505157311 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, batch=17.1_20250721.1, release=1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, 
tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, architecture=x86_64, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:15:08 standalone.localdomain podman[468139]: 2025-10-13 15:15:08.656830632 +0000 UTC m=+1.143973101 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, build-date=2025-07-21T15:54:32, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-swift-container, distribution-scope=public, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, com.redhat.component=openstack-swift-container-container)
Oct 13 15:15:08 standalone.localdomain podman[468120]: 2025-10-13 15:15:08.784613265 +0000 UTC m=+1.681234699 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, release=1, version=17.1.9, vendor=Red Hat, Inc., vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:15:08 standalone.localdomain podman[468121]: 2025-10-13 15:15:08.806750496 +0000 UTC m=+1.701661337 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, batch=17.1_20250721.1, name=rhosp17/openstack-swift-account, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, 
container_name=swift_account_server, distribution-scope=public, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:15:08 standalone.localdomain podman[468139]: 2025-10-13 15:15:08.845261195 +0000 UTC m=+1.332403684 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, build-date=2025-07-21T15:54:32, summary=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1)
Oct 13 15:15:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3018: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:10 standalone.localdomain ceph-mon[29756]: pgmap v3018: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:10 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:10.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24375 DF PROTO=TCP SPT=42042 DPT=9100 SEQ=2308385922 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB4A3B70000000001030307) 
Oct 13 15:15:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:15:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0ab554be88d1550c55439de27cccaa841a200e2b4e9858835d0b8ff13bea3a63-merged.mount: Deactivated successfully.
Oct 13 15:15:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:15:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:15:10 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:15:10 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:15:10 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:15:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3019: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:15:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0ab554be88d1550c55439de27cccaa841a200e2b4e9858835d0b8ff13bea3a63-merged.mount: Deactivated successfully.
Oct 13 15:15:11 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:11.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:12 standalone.localdomain ceph-mon[29756]: pgmap v3019: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:15:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:15:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:15:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:15:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3020: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:13 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:15:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36147 DF PROTO=TCP SPT=39644 DPT=9105 SEQ=2424226629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB4B1760000000001030307) 
Oct 13 15:15:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:14 standalone.localdomain ceph-mon[29756]: pgmap v3020: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:15:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:15:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3021: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:15 standalone.localdomain podman[468227]: 2025-10-13 15:15:13.113429165 +0000 UTC m=+0.039870804 image pull  quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Oct 13 15:15:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:15 standalone.localdomain podman[468239]: 2025-10-13 15:15:15.068445668 +0000 UTC m=+1.583401180 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:15:15 standalone.localdomain podman[468239]: 2025-10-13 15:15:15.08039982 +0000 UTC m=+1.595355302 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:15:15 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:15.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:15:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:16 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:16 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:16 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:15:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:15:16 standalone.localdomain podman[467099]: time="2025-10-13T15:15:16Z" level=error msg="Getting root fs size for \"0629b5194676be254c3825ba35e950d563e9f9c8df2d2a87a4d094c3d31c570f\": unmounting layer e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df: replacing mount point \"/var/lib/containers/storage/overlay/e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df/merged\": device or resource busy"
Oct 13 15:15:16 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:16 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:16 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:16 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:16 standalone.localdomain ceph-mon[29756]: pgmap v3021: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:16 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:16.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:15:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3022: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0ab554be88d1550c55439de27cccaa841a200e2b4e9858835d0b8ff13bea3a63-merged.mount: Deactivated successfully.
Oct 13 15:15:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:15:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f8c62a6a2c8d47bc643de7e472c16fea3d5facb0eb72e0880a216e4f3b3830d9-merged.mount: Deactivated successfully.
Oct 13 15:15:18 standalone.localdomain podman[468258]: 2025-10-13 15:15:18.385194189 +0000 UTC m=+0.093668432 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:15:18 standalone.localdomain podman[468258]: 2025-10-13 15:15:18.420098366 +0000 UTC m=+0.128572589 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:15:18 standalone.localdomain podman[468258]: unhealthy
Oct 13 15:15:18 standalone.localdomain ceph-mon[29756]: pgmap v3022: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:15:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3299559286' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:15:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:15:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3299559286' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:15:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47374 DF PROTO=TCP SPT=55268 DPT=9882 SEQ=1749646071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB4C3B60000000001030307) 
Oct 13 15:15:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3023: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:15:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c400a0b1febfb43b3487c71ecbd5a108092b42e80cd739860482d3019963fe9e-merged.mount: Deactivated successfully.
Oct 13 15:15:19 standalone.localdomain podman[468227]: 
Oct 13 15:15:19 standalone.localdomain podman[468227]: 2025-10-13 15:15:19.439260416 +0000 UTC m=+6.365702075 container create 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Oct 13 15:15:19 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:15:19 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:15:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3299559286' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:15:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3299559286' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:15:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36149 DF PROTO=TCP SPT=39644 DPT=9105 SEQ=2424226629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB4C9370000000001030307) 
Oct 13 15:15:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:20.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:20 standalone.localdomain ceph-mon[29756]: pgmap v3023: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3024: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:15:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:15:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0ab554be88d1550c55439de27cccaa841a200e2b4e9858835d0b8ff13bea3a63-merged.mount: Deactivated successfully.
Oct 13 15:15:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:21.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:15:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:15:22 standalone.localdomain python3[467901]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Oct 13 15:15:22 standalone.localdomain podman[468281]: 2025-10-13 15:15:22.315937399 +0000 UTC m=+0.387409897 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:15:22 standalone.localdomain podman[468281]: 2025-10-13 15:15:22.323203355 +0000 UTC m=+0.394675803 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:15:22 standalone.localdomain podman[468281]: unhealthy
Oct 13 15:15:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:15:22 standalone.localdomain ceph-mon[29756]: pgmap v3024: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3025: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:15:23
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['images', 'vms', 'backups', 'volumes', 'manila_data', 'manila_metadata', '.mgr']
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:15:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44589 DF PROTO=TCP SPT=40774 DPT=9102 SEQ=3305620702 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB4D6BE0000000001030307) 
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:15:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:15:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:15:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.334 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.334 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:15:24 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:15:24 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Failed with result 'exit-code'.
Oct 13 15:15:24 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:24 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.373 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.373 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:15:24 standalone.localdomain ceph-mon[29756]: pgmap v3025: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.659 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.660 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.660 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:15:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.968 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:15:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3026: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.988 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.989 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.989 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.989 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.989 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.989 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.990 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.990 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.990 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:15:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:24.990 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.008 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.008 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.008 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.008 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.009 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:15:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:25 standalone.localdomain podman[467099]: time="2025-10-13T15:15:25Z" level=error msg="Getting root fs size for \"082f90b100c1e61ff254c4e8202645f7e953ca071f54d740dec70e32afe4da6f\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": unmounting layer d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610: replacing mount point \"/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged\": device or resource busy"
Oct 13 15:15:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:15:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2155260606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.446 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30513 DF PROTO=TCP SPT=56948 DPT=9101 SEQ=4283739604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB4DE4E0000000001030307) 
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.529 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.529 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.530 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.535 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.536 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:15:25 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2155260606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:15:25 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.702 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.703 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=10307MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.703 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.703 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.771 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.771 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.772 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.772 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:15:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:25.827 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:15:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:15:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:15:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/4184954576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:15:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:26.237 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:15:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:26.242 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:15:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:26.259 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:15:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:26.261 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:15:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:26.261 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:15:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:15:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:26.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:26 standalone.localdomain ceph-mon[29756]: pgmap v3026: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:26 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4184954576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:15:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3027: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:15:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:15:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-dcff515959244485933a3af9172fbefc10c4438a8f7f3991d4ddc0d64213ed9a-merged.mount: Deactivated successfully.
Oct 13 15:15:28 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:28 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:28 standalone.localdomain podman[468357]: 2025-10-13 15:15:28.171549632 +0000 UTC m=+0.242457518 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:15:28 standalone.localdomain podman[468357]: 2025-10-13 15:15:28.263685265 +0000 UTC m=+0.334593211 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:15:28 standalone.localdomain ceph-mon[29756]: pgmap v3027: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30515 DF PROTO=TCP SPT=56948 DPT=9101 SEQ=4283739604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB4EA760000000001030307) 
Oct 13 15:15:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:15:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3028: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:15:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:15:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:15:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:15:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:15:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:15:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:15:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:15:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:15:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:15:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:15:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:15:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:15:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:15:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:15:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:15:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:15:29 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:15:30 standalone.localdomain python3.9[468501]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:15:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:30.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:30 standalone.localdomain ceph-mon[29756]: pgmap v3028: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:30 standalone.localdomain python3.9[468611]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:15:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3029: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:15:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:15:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0ab554be88d1550c55439de27cccaa841a200e2b4e9858835d0b8ff13bea3a63-merged.mount: Deactivated successfully.
Oct 13 15:15:31 standalone.localdomain python3.9[468718]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760368530.9140651-751-166560169252244/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:15:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:31.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:31 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:31 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:31 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:15:31 standalone.localdomain podman[468756]: 2025-10-13 15:15:31.827438675 +0000 UTC m=+0.089558693 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:15:31 standalone.localdomain podman[468756]: 2025-10-13 15:15:31.865543243 +0000 UTC m=+0.127663261 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:15:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f8c62a6a2c8d47bc643de7e472c16fea3d5facb0eb72e0880a216e4f3b3830d9-merged.mount: Deactivated successfully.
Oct 13 15:15:32 standalone.localdomain python3.9[468777]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:15:32 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:15:32 standalone.localdomain systemd-rc-local-generator[468818]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:15:32 standalone.localdomain systemd-sysv-generator[468822]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:15:32 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:15:32 standalone.localdomain ceph-mon[29756]: pgmap v3029: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30516 DF PROTO=TCP SPT=56948 DPT=9101 SEQ=4283739604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB4FA360000000001030307) 
Oct 13 15:15:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3030: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:15:33 standalone.localdomain python3.9[468881]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:15:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0a0c56f56276045e0525efb95a9b4d54690d2d3f7b572370d173646d1ca4046-merged.mount: Deactivated successfully.
Oct 13 15:15:33 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:15:33 standalone.localdomain systemd-sysv-generator[468914]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:15:33 standalone.localdomain systemd-rc-local-generator[468905]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:15:33 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:15:33 standalone.localdomain systemd[1]: Starting openstack_network_exporter container...
Oct 13 15:15:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:15:34 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 15:15:34 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 15:15:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:15:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:15:34 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:15:34 standalone.localdomain ceph-mon[29756]: pgmap v3030: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:34 standalone.localdomain account-server[114563]: 172.20.0.100 - - [13/Oct/2025:15:15:34 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txf34eb4ccb8b842e0b068c-0068ed1796" "proxy-server 2" 0.0006 "-" 21 -
Oct 13 15:15:34 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: txf34eb4ccb8b842e0b068c-0068ed1796)
Oct 13 15:15:34 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txf34eb4ccb8b842e0b068c-0068ed1796)
Oct 13 15:15:34 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3031: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:15:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:15:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:35.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:15:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:15:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:15:36 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:15:36 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/845fcb3ca538feef6520aa1554a0aeaec68d09f0e6a11646d3c1cdfbc9b10ae1/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 15:15:36 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/845fcb3ca538feef6520aa1554a0aeaec68d09f0e6a11646d3c1cdfbc9b10ae1/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 13 15:15:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:15:36 standalone.localdomain podman[468922]: 2025-10-13 15:15:36.526543738 +0000 UTC m=+2.905094260 container init 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 13 15:15:36 standalone.localdomain openstack_network_exporter[468938]: INFO    15:15:36 main.go:48: registering *bridge.Collector
Oct 13 15:15:36 standalone.localdomain openstack_network_exporter[468938]: INFO    15:15:36 main.go:48: registering *coverage.Collector
Oct 13 15:15:36 standalone.localdomain openstack_network_exporter[468938]: INFO    15:15:36 main.go:48: registering *datapath.Collector
Oct 13 15:15:36 standalone.localdomain openstack_network_exporter[468938]: INFO    15:15:36 main.go:48: registering *iface.Collector
Oct 13 15:15:36 standalone.localdomain openstack_network_exporter[468938]: INFO    15:15:36 main.go:48: registering *memory.Collector
Oct 13 15:15:36 standalone.localdomain openstack_network_exporter[468938]: INFO    15:15:36 main.go:48: registering *ovnnorthd.Collector
Oct 13 15:15:36 standalone.localdomain openstack_network_exporter[468938]: INFO    15:15:36 main.go:48: registering *ovn.Collector
Oct 13 15:15:36 standalone.localdomain openstack_network_exporter[468938]: INFO    15:15:36 main.go:48: registering *ovsdbserver.Collector
Oct 13 15:15:36 standalone.localdomain openstack_network_exporter[468938]: INFO    15:15:36 main.go:48: registering *pmd_perf.Collector
Oct 13 15:15:36 standalone.localdomain openstack_network_exporter[468938]: INFO    15:15:36 main.go:48: registering *pmd_rxq.Collector
Oct 13 15:15:36 standalone.localdomain openstack_network_exporter[468938]: INFO    15:15:36 main.go:48: registering *vswitch.Collector
Oct 13 15:15:36 standalone.localdomain openstack_network_exporter[468938]: NOTICE  15:15:36 main.go:82: listening on http://:9105/metrics
Oct 13 15:15:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:15:36 standalone.localdomain podman[468922]: 2025-10-13 15:15:36.559753512 +0000 UTC m=+2.938304014 container start 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., version=9.6, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Oct 13 15:15:36 standalone.localdomain podman[468922]: openstack_network_exporter
Oct 13 15:15:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:36.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:36 standalone.localdomain ceph-mon[29756]: pgmap v3031: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:36 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3032: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:15:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5709 DF PROTO=TCP SPT=38472 DPT=9100 SEQ=1035511317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB50CD40000000001030307) 
Oct 13 15:15:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:37 standalone.localdomain systemd[1]: Started openstack_network_exporter container.
Oct 13 15:15:37 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:37 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:37 standalone.localdomain podman[468948]: 2025-10-13 15:15:37.8614412 +0000 UTC m=+1.295352890 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Oct 13 15:15:37 standalone.localdomain podman[468948]: 2025-10-13 15:15:37.884461527 +0000 UTC m=+1.318373197 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, name=ubi9-minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, version=9.6, container_name=openstack_network_exporter, 
architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 15:15:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:15:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:15:38 standalone.localdomain podman[467099]: time="2025-10-13T15:15:38Z" level=error msg="Getting root fs size for \"082f90b100c1e61ff254c4e8202645f7e953ca071f54d740dec70e32afe4da6f\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": unmounting layer 19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8: replacing mount point \"/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged\": device or resource busy"
Oct 13 15:15:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5710 DF PROTO=TCP SPT=38472 DPT=9100 SEQ=1035511317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB510F70000000001030307) 
Oct 13 15:15:38 standalone.localdomain python3.9[469076]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:15:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:15:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:15:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fc459c24a9ec3e1f3f5e0147b0f938d1986a03e01d614e68963d260ce193ad26-merged.mount: Deactivated successfully.
Oct 13 15:15:38 standalone.localdomain systemd[1]: Stopping openstack_network_exporter container...
Oct 13 15:15:38 standalone.localdomain ceph-mon[29756]: pgmap v3032: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:38 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:15:38 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:38 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:38 standalone.localdomain podman[469079]: 2025-10-13 15:15:38.683273868 +0000 UTC m=+0.128060513 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_id=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 15:15:38 standalone.localdomain podman[469078]: 2025-10-13 15:15:38.727578919 +0000 UTC m=+0.171587260 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:15:38 standalone.localdomain podman[469078]: 2025-10-13 15:15:38.764001654 +0000 UTC m=+0.208010045 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 13 15:15:38 standalone.localdomain systemd[1]: libpod-6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.scope: Deactivated successfully.
Oct 13 15:15:38 standalone.localdomain podman[469092]: 2025-10-13 15:15:38.817043238 +0000 UTC m=+0.224275452 container died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41)
Oct 13 15:15:38 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.timer: Deactivated successfully.
Oct 13 15:15:38 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:15:38 standalone.localdomain podman[469079]: 2025-10-13 15:15:38.871263199 +0000 UTC m=+0.316049834 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:15:38 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3033: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fc459c24a9ec3e1f3f5e0147b0f938d1986a03e01d614e68963d260ce193ad26-merged.mount: Deactivated successfully.
Oct 13 15:15:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d-userdata-shm.mount: Deactivated successfully.
Oct 13 15:15:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:40.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5711 DF PROTO=TCP SPT=38472 DPT=9100 SEQ=1035511317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB518F60000000001030307) 
Oct 13 15:15:40 standalone.localdomain ceph-mon[29756]: pgmap v3033: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:15:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:15:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:15:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:15:40 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3034: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-dcff515959244485933a3af9172fbefc10c4438a8f7f3991d4ddc0d64213ed9a-merged.mount: Deactivated successfully.
Oct 13 15:15:41 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:41 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:41 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:15:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:15:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:41.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:15:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:15:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:15:42 standalone.localdomain podman[469092]: 2025-10-13 15:15:42.554905857 +0000 UTC m=+3.962138081 container cleanup 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal)
Oct 13 15:15:42 standalone.localdomain podman[469092]: openstack_network_exporter
Oct 13 15:15:42 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:15:42 standalone.localdomain systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 13 15:15:42 standalone.localdomain ceph-mon[29756]: pgmap v3034: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:42 standalone.localdomain podman[469141]: 2025-10-13 15:15:42.652815688 +0000 UTC m=+1.678814934 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, container_name=swift_object_server, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-object-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, name=rhosp17/openstack-swift-object, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, build-date=2025-07-21T14:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:15:42 standalone.localdomain podman[469142]: 2025-10-13 15:15:42.617118726 +0000 UTC m=+1.640079597 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, vcs-type=git, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, 
config_id=tripleo_step4, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:15:42 standalone.localdomain podman[469171]: 2025-10-13 15:15:42.688443089 +0000 UTC m=+0.096185939 container cleanup 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, name=ubi9-minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., managed_by=edpm_ansible)
Oct 13 15:15:42 standalone.localdomain podman[469171]: openstack_network_exporter
Oct 13 15:15:42 standalone.localdomain podman[469143]: 2025-10-13 15:15:42.757010067 +0000 UTC m=+1.777447929 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-swift-account, vcs-type=git)
Oct 13 15:15:42 standalone.localdomain podman[469141]: 2025-10-13 15:15:42.841846921 +0000 UTC m=+1.867846097 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, architecture=x86_64, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, build-date=2025-07-21T14:56:28, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, summary=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 15:15:42 standalone.localdomain podman[469142]: 2025-10-13 15:15:42.895349789 +0000 UTC m=+1.918310600 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-swift-container-container, vendor=Red Hat, Inc., release=1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server)
Oct 13 15:15:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-845fcb3ca538feef6520aa1554a0aeaec68d09f0e6a11646d3c1cdfbc9b10ae1-merged.mount: Deactivated successfully.
Oct 13 15:15:42 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3035: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:42 standalone.localdomain podman[469143]: 2025-10-13 15:15:42.99483697 +0000 UTC m=+2.015274862 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, 
description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, release=1, container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:15:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fc459c24a9ec3e1f3f5e0147b0f938d1986a03e01d614e68963d260ce193ad26-merged.mount: Deactivated successfully.
Oct 13 15:15:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-87046473c186226e71335669ea31ba5572060b2d006f721960f031336019f3c2-merged.mount: Deactivated successfully.
Oct 13 15:15:43 standalone.localdomain systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Oct 13 15:15:43 standalone.localdomain systemd[1]: Stopped openstack_network_exporter container.
Oct 13 15:15:43 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:43 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:43 standalone.localdomain systemd[1]: Starting openstack_network_exporter container...
Oct 13 15:15:43 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:15:43 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:15:43 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:15:44 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45363 DF PROTO=TCP SPT=48660 DPT=9105 SEQ=3543930592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB526B70000000001030307) 
Oct 13 15:15:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:15:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:44 standalone.localdomain ceph-mon[29756]: pgmap v3035: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:44 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3036: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:15:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:15:45 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:45 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:45 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:45.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:45 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:15:45 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/845fcb3ca538feef6520aa1554a0aeaec68d09f0e6a11646d3c1cdfbc9b10ae1/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 15:15:45 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/845fcb3ca538feef6520aa1554a0aeaec68d09f0e6a11646d3c1cdfbc9b10ae1/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 13 15:15:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:15:45 standalone.localdomain podman[469233]: 2025-10-13 15:15:45.538578265 +0000 UTC m=+1.788230295 container init 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, name=ubi9-minimal)
Oct 13 15:15:45 standalone.localdomain openstack_network_exporter[469248]: INFO    15:15:45 main.go:48: registering *bridge.Collector
Oct 13 15:15:45 standalone.localdomain openstack_network_exporter[469248]: INFO    15:15:45 main.go:48: registering *coverage.Collector
Oct 13 15:15:45 standalone.localdomain podman[469233]: openstack_network_exporter
Oct 13 15:15:45 standalone.localdomain openstack_network_exporter[469248]: INFO    15:15:45 main.go:48: registering *datapath.Collector
Oct 13 15:15:45 standalone.localdomain openstack_network_exporter[469248]: INFO    15:15:45 main.go:48: registering *iface.Collector
Oct 13 15:15:45 standalone.localdomain openstack_network_exporter[469248]: INFO    15:15:45 main.go:48: registering *memory.Collector
Oct 13 15:15:45 standalone.localdomain openstack_network_exporter[469248]: INFO    15:15:45 main.go:48: registering *ovnnorthd.Collector
Oct 13 15:15:45 standalone.localdomain openstack_network_exporter[469248]: INFO    15:15:45 main.go:48: registering *ovn.Collector
Oct 13 15:15:45 standalone.localdomain openstack_network_exporter[469248]: INFO    15:15:45 main.go:48: registering *ovsdbserver.Collector
Oct 13 15:15:45 standalone.localdomain openstack_network_exporter[469248]: INFO    15:15:45 main.go:48: registering *pmd_perf.Collector
Oct 13 15:15:45 standalone.localdomain openstack_network_exporter[469248]: INFO    15:15:45 main.go:48: registering *pmd_rxq.Collector
Oct 13 15:15:45 standalone.localdomain openstack_network_exporter[469248]: INFO    15:15:45 main.go:48: registering *vswitch.Collector
Oct 13 15:15:45 standalone.localdomain openstack_network_exporter[469248]: NOTICE  15:15:45 main.go:82: listening on http://:9105/metrics
Oct 13 15:15:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:15:45 standalone.localdomain podman[469233]: 2025-10-13 15:15:45.576840917 +0000 UTC m=+1.826492947 container start 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, release=1755695350, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, config_id=edpm, container_name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Oct 13 15:15:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:15:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:15:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:46.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:46 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:46 standalone.localdomain systemd[1]: Started openstack_network_exporter container.
Oct 13 15:15:46 standalone.localdomain ceph-mon[29756]: pgmap v3036: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:46 standalone.localdomain podman[469258]: 2025-10-13 15:15:46.670720287 +0000 UTC m=+1.089243916 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 13 15:15:46 standalone.localdomain podman[469258]: 2025-10-13 15:15:46.710438735 +0000 UTC m=+1.128962334 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 15:15:46 standalone.localdomain podman[469270]: 2025-10-13 15:15:46.739671706 +0000 UTC m=+0.544487224 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct 13 15:15:46 standalone.localdomain podman[469270]: 2025-10-13 15:15:46.773051947 +0000 UTC m=+0.577867475 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, org.label-schema.build-date=20251009)
Oct 13 15:15:46 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3037: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:47 standalone.localdomain python3.9[469402]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 13 15:15:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:15:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:48 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:15:48 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:15:48 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 15:15:48 standalone.localdomain object-server[469511]: Object update sweep starting on /srv/node/d1 (pid: 26)
Oct 13 15:15:48 standalone.localdomain object-server[469511]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 26)
Oct 13 15:15:48 standalone.localdomain object-server[469511]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 15:15:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:15:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0a0c56f56276045e0525efb95a9b4d54690d2d3f7b572370d173646d1ca4046-merged.mount: Deactivated successfully.
Oct 13 15:15:48 standalone.localdomain object-server[114601]: Object update sweep completed: 0.09s
Oct 13 15:15:48 standalone.localdomain python3.9[469510]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Oct 13 15:15:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:15:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:15:48 standalone.localdomain ceph-mon[29756]: pgmap v3037: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15222 DF PROTO=TCP SPT=52948 DPT=9882 SEQ=4235938420 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB538F60000000001030307) 
Oct 13 15:15:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:15:48 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3038: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:15:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:15:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45365 DF PROTO=TCP SPT=48660 DPT=9105 SEQ=3543930592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB53E760000000001030307) 
Oct 13 15:15:50 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:50.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:50 standalone.localdomain ceph-mon[29756]: pgmap v3038: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:15:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0152fbf47c47b98476307a2f85450ead7caad437035e3d8d6c9790cae3a7e742-merged.mount: Deactivated successfully.
Oct 13 15:15:50 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3039: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:51 standalone.localdomain podman[469526]: 2025-10-13 15:15:51.088149609 +0000 UTC m=+1.348985123 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:15:51 standalone.localdomain podman[469526]: 2025-10-13 15:15:51.122054585 +0000 UTC m=+1.382890149 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:15:51 standalone.localdomain podman[469526]: unhealthy
Oct 13 15:15:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:15:51 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:51.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:15:51 standalone.localdomain python3.9[469655]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:15:52 standalone.localdomain ceph-mon[29756]: pgmap v3039: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db748b4cdb30a617a163bf186638b12157c2da5481b73c1cb39630dc1d937c96-merged.mount: Deactivated successfully.
Oct 13 15:15:52 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3040: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2f869e978a3914fac885699b921afa1af4d3c121bcdff90f8b5cbe23f9a11dc5-merged.mount: Deactivated successfully.
Oct 13 15:15:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:15:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:15:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:15:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:15:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:15:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:15:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2f869e978a3914fac885699b921afa1af4d3c121bcdff90f8b5cbe23f9a11dc5-merged.mount: Deactivated successfully.
Oct 13 15:15:53 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:15:53 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:15:53 standalone.localdomain systemd[1]: Started libpod-conmon-fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.scope.
Oct 13 15:15:53 standalone.localdomain podman[469656]: 2025-10-13 15:15:53.364762065 +0000 UTC m=+1.443033663 container exec fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller)
Oct 13 15:15:53 standalone.localdomain podman[469656]: 2025-10-13 15:15:53.397904749 +0000 UTC m=+1.476176377 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:15:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44600 DF PROTO=TCP SPT=39476 DPT=9102 SEQ=942603523 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB54BEF0000000001030307) 
Oct 13 15:15:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:15:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:15:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:15:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db748b4cdb30a617a163bf186638b12157c2da5481b73c1cb39630dc1d937c96-merged.mount: Deactivated successfully.
Oct 13 15:15:54 standalone.localdomain ceph-mon[29756]: pgmap v3040: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db748b4cdb30a617a163bf186638b12157c2da5481b73c1cb39630dc1d937c96-merged.mount: Deactivated successfully.
Oct 13 15:15:54 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3041: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:55 standalone.localdomain podman[469684]: 2025-10-13 15:15:55.022873923 +0000 UTC m=+0.587265418 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:15:55 standalone.localdomain podman[469684]: 2025-10-13 15:15:55.057217223 +0000 UTC m=+0.621608728 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 13 15:15:55 standalone.localdomain podman[469684]: unhealthy
Oct 13 15:15:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4949 DF PROTO=TCP SPT=38254 DPT=9101 SEQ=1358626076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB5537E0000000001030307) 
Oct 13 15:15:55 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:55.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:55 standalone.localdomain python3.9[469808]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:15:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:15:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:15:55 standalone.localdomain systemd[1]: libpod-conmon-fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.scope: Deactivated successfully.
Oct 13 15:15:55 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:15:55 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Failed with result 'exit-code'.
Oct 13 15:15:55 standalone.localdomain systemd[1]: Started libpod-conmon-fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.scope.
Oct 13 15:15:55 standalone.localdomain podman[469809]: 2025-10-13 15:15:55.908948653 +0000 UTC m=+0.300642562 container exec fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:15:55 standalone.localdomain podman[469809]: 2025-10-13 15:15:55.943990396 +0000 UTC m=+0.335684325 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:15:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fc459c24a9ec3e1f3f5e0147b0f938d1986a03e01d614e68963d260ce193ad26-merged.mount: Deactivated successfully.
Oct 13 15:15:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 13 15:15:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:15:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:15:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:15:56 standalone.localdomain nova_compute[456855]: 2025-10-13 15:15:56.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:15:56 standalone.localdomain ceph-mon[29756]: pgmap v3041: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:56 standalone.localdomain systemd[1]: libpod-conmon-fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.scope: Deactivated successfully.
Oct 13 15:15:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:15:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:15:56 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3042: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:57 standalone.localdomain python3.9[469946]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:15:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fc459c24a9ec3e1f3f5e0147b0f938d1986a03e01d614e68963d260ce193ad26-merged.mount: Deactivated successfully.
Oct 13 15:15:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-87046473c186226e71335669ea31ba5572060b2d006f721960f031336019f3c2-merged.mount: Deactivated successfully.
Oct 13 15:15:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-87046473c186226e71335669ea31ba5572060b2d006f721960f031336019f3c2-merged.mount: Deactivated successfully.
Oct 13 15:15:57 standalone.localdomain python3.9[470054]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Oct 13 15:15:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4951 DF PROTO=TCP SPT=38254 DPT=9101 SEQ=1358626076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB55F760000000001030307) 
Oct 13 15:15:58 standalone.localdomain ceph-mon[29756]: pgmap v3042: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2f869e978a3914fac885699b921afa1af4d3c121bcdff90f8b5cbe23f9a11dc5-merged.mount: Deactivated successfully.
Oct 13 15:15:58 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3043: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:15:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fb81b6705c127d8ae6a85a0c5aa00e223afd3bd2e8fc0274e5929ac927615f25-merged.mount: Deactivated successfully.
Oct 13 15:15:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fb81b6705c127d8ae6a85a0c5aa00e223afd3bd2e8fc0274e5929ac927615f25-merged.mount: Deactivated successfully.
Oct 13 15:15:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:15:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:15:59 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:59 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:15:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:16:00 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:00.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:00 standalone.localdomain ceph-mon[29756]: pgmap v3043: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:00 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3044: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:16:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:16:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0ab554be88d1550c55439de27cccaa841a200e2b4e9858835d0b8ff13bea3a63-merged.mount: Deactivated successfully.
Oct 13 15:16:01 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:01.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:01 standalone.localdomain podman[470066]: 2025-10-13 15:16:01.733045094 +0000 UTC m=+2.002352048 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 13 15:16:01 standalone.localdomain podman[470066]: 2025-10-13 15:16:01.81277603 +0000 UTC m=+2.082083034 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 13 15:16:02 standalone.localdomain python3.9[470199]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:16:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:16:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4952 DF PROTO=TCP SPT=38254 DPT=9101 SEQ=1358626076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB56F360000000001030307) 
Oct 13 15:16:02 standalone.localdomain ceph-mon[29756]: pgmap v3044: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:02 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3045: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:16:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:16:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:03 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:16:03 standalone.localdomain systemd[1]: Started libpod-conmon-0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.scope.
Oct 13 15:16:03 standalone.localdomain podman[470200]: 2025-10-13 15:16:03.668955361 +0000 UTC m=+1.298965522 container exec 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 13 15:16:03 standalone.localdomain podman[470200]: 2025-10-13 15:16:03.672548424 +0000 UTC m=+1.302558635 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 15:16:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:16:04 standalone.localdomain sudo[470230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:16:04 standalone.localdomain sudo[470230]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:16:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:16:04 standalone.localdomain sudo[470230]: pam_unix(sudo:session): session closed for user root
Oct 13 15:16:04 standalone.localdomain sudo[470250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:16:04 standalone.localdomain sudo[470250]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:16:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:04 standalone.localdomain podman[470248]: 2025-10-13 15:16:04.732222356 +0000 UTC m=+0.281492765 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:16:04 standalone.localdomain ceph-mon[29756]: pgmap v3045: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:04 standalone.localdomain podman[470248]: 2025-10-13 15:16:04.768831367 +0000 UTC m=+0.318101746 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:16:04 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3046: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:05 standalone.localdomain python3.9[470408]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:16:05 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:05.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:16:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0152fbf47c47b98476307a2f85450ead7caad437035e3d8d6c9790cae3a7e742-merged.mount: Deactivated successfully.
Oct 13 15:16:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:16:06 standalone.localdomain systemd[1]: libpod-conmon-0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.scope: Deactivated successfully.
Oct 13 15:16:06 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:16:06 standalone.localdomain systemd[1]: Started libpod-conmon-0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.scope.
Oct 13 15:16:06 standalone.localdomain podman[470409]: 2025-10-13 15:16:06.498733513 +0000 UTC m=+1.142028371 container exec 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible)
Oct 13 15:16:06 standalone.localdomain podman[470409]: 2025-10-13 15:16:06.527624533 +0000 UTC m=+1.170919361 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 13 15:16:06 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:06.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:16:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 15:16:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 6000.0 total, 600.0 interval
                                                        Cumulative writes: 15K writes, 71K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s
                                                        Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.05 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1532 writes, 6927 keys, 1532 commit groups, 1.0 writes per commit group, ingest: 5.78 MB, 0.01 MB/s
                                                        Interval WAL: 1532 writes, 1532 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     81.4      0.58              0.16        49    0.012       0      0       0.0       0.0
                                                          L6      1/0    5.98 MB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   4.9    199.8    169.4      1.35              0.64        48    0.028    240K    25K       0.0       0.0
                                                         Sum      1/0    5.98 MB   0.0      0.3     0.0      0.2       0.3      0.1       0.0   5.9    139.9    143.0      1.93              0.81        97    0.020    240K    25K       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.1    143.6    146.0      0.23              0.11        10    0.023     32K   2597       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Low      0/0    0.00 KB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   0.0    199.8    169.4      1.35              0.64        48    0.028    240K    25K       0.0       0.0
                                                        High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     81.7      0.58              0.16        48    0.012       0      0       0.0       0.0
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 6000.0 total, 600.0 interval
                                                        Flush(GB): cumulative 0.046, interval 0.005
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.27 GB write, 0.05 MB/s write, 0.26 GB read, 0.05 MB/s read, 1.9 seconds
                                                        Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x557b7fb5b350#2 capacity: 308.00 MB usage: 36.87 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.00039 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3333,35.28 MB,11.4534%) FilterBlock(98,669.42 KB,0.212251%) IndexBlock(98,961.58 KB,0.304883%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
Oct 13 15:16:06 standalone.localdomain ceph-mon[29756]: pgmap v3046: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:16:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:16:06.933 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:16:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:16:06.933 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:16:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:16:06.934 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:16:06 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3047: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:07 standalone.localdomain sudo[470250]: pam_unix(sudo:session): session closed for user root
Oct 13 15:16:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:16:07 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:16:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:16:07 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:16:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:16:07 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:16:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:16:07 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:16:07 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev c943816e-6b71-4f19-8932-dafbca230b24 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:16:07 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev c943816e-6b71-4f19-8932-dafbca230b24 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:16:07 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event c943816e-6b71-4f19-8932-dafbca230b24 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:16:07 standalone.localdomain sudo[470562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:16:07 standalone.localdomain sudo[470562]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:16:07 standalone.localdomain sudo[470562]: pam_unix(sudo:session): session closed for user root
Oct 13 15:16:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63371 DF PROTO=TCP SPT=59122 DPT=9100 SEQ=1313661563 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB582040000000001030307) 
Oct 13 15:16:07 standalone.localdomain python3.9[470571]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:16:07 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:16:07 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:16:07 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:16:07 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:16:08 standalone.localdomain python3.9[470691]: ansible-containers.podman.podman_container_info Invoked with name=['iscsid'] executable=podman
Oct 13 15:16:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63372 DF PROTO=TCP SPT=59122 DPT=9100 SEQ=1313661563 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB585F60000000001030307) 
Oct 13 15:16:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0ab554be88d1550c55439de27cccaa841a200e2b4e9858835d0b8ff13bea3a63-merged.mount: Deactivated successfully.
Oct 13 15:16:08 standalone.localdomain ceph-mon[29756]: pgmap v3047: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6a6ca37cfa9f0e5053b4b5f65562ad93b9cc6ac5c10b8f2ac2713ae975ff1e87-merged.mount: Deactivated successfully.
Oct 13 15:16:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db748b4cdb30a617a163bf186638b12157c2da5481b73c1cb39630dc1d937c96-merged.mount: Deactivated successfully.
Oct 13 15:16:08 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3048: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6a6ca37cfa9f0e5053b4b5f65562ad93b9cc6ac5c10b8f2ac2713ae975ff1e87-merged.mount: Deactivated successfully.
Oct 13 15:16:09 standalone.localdomain systemd[1]: libpod-conmon-0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.scope: Deactivated successfully.
Oct 13 15:16:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2f869e978a3914fac885699b921afa1af4d3c121bcdff90f8b5cbe23f9a11dc5-merged.mount: Deactivated successfully.
Oct 13 15:16:09 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:16:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:16:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:16:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:16:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:16:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:16:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63373 DF PROTO=TCP SPT=59122 DPT=9100 SEQ=1313661563 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB58DF60000000001030307) 
Oct 13 15:16:10 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:10.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:16:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db748b4cdb30a617a163bf186638b12157c2da5481b73c1cb39630dc1d937c96-merged.mount: Deactivated successfully.
Oct 13 15:16:10 standalone.localdomain ceph-mon[29756]: pgmap v3048: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:10 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:16:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db748b4cdb30a617a163bf186638b12157c2da5481b73c1cb39630dc1d937c96-merged.mount: Deactivated successfully.
Oct 13 15:16:10 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3049: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:16:11 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:11.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:16:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:16:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:16:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:16:11 standalone.localdomain podman[470705]: 2025-10-13 15:16:11.917701442 +0000 UTC m=+0.180696233 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:16:11 standalone.localdomain podman[470705]: 2025-10-13 15:16:11.946795067 +0000 UTC m=+0.209789878 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:16:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 13 15:16:12 standalone.localdomain python3.9[470829]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:16:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:16:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:16:12 standalone.localdomain ceph-mon[29756]: pgmap v3049: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:16:12 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3050: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:13 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:16:13 standalone.localdomain systemd[1]: Started libpod-conmon-2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.scope.
Oct 13 15:16:13 standalone.localdomain podman[470830]: 2025-10-13 15:16:13.10387209 +0000 UTC m=+0.501080073 container exec 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, io.buildah.version=1.41.3)
Oct 13 15:16:13 standalone.localdomain podman[470842]: 2025-10-13 15:16:13.109469022 +0000 UTC m=+0.373785715 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible)
Oct 13 15:16:13 standalone.localdomain podman[470842]: 2025-10-13 15:16:13.125361721 +0000 UTC m=+0.389678464 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 15:16:13 standalone.localdomain podman[470830]: 2025-10-13 15:16:13.137930329 +0000 UTC m=+0.535138272 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, io.buildah.version=1.41.3)
Oct 13 15:16:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 13 15:16:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:16:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:16:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:16:13 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37400 DF PROTO=TCP SPT=40326 DPT=9105 SEQ=1079250491 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB59BB70000000001030307) 
Oct 13 15:16:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:14 standalone.localdomain ceph-mon[29756]: pgmap v3050: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:16:14 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3051: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:16:15 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:16:15 standalone.localdomain podman[470875]: 2025-10-13 15:16:15.415224929 +0000 UTC m=+1.590566665 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-object, 
config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, batch=17.1_20250721.1)
Oct 13 15:16:15 standalone.localdomain podman[470877]: 2025-10-13 15:16:15.36810965 +0000 UTC m=+1.541138055 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, tcib_managed=true, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=)
Oct 13 15:16:15 standalone.localdomain podman[470876]: 2025-10-13 15:16:15.469366996 +0000 UTC m=+1.644959740 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container)
Oct 13 15:16:15 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:15.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:15 standalone.localdomain podman[470875]: 2025-10-13 15:16:15.621914801 +0000 UTC m=+1.797256537 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vcs-type=git, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, release=1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container)
Oct 13 15:16:15 standalone.localdomain podman[470877]: 2025-10-13 15:16:15.640920886 +0000 UTC m=+1.813949241 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, release=1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, batch=17.1_20250721.1, io.buildah.version=1.33.12, distribution-scope=public)
Oct 13 15:16:15 standalone.localdomain podman[470876]: 2025-10-13 15:16:15.696841567 +0000 UTC m=+1.872434331 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=swift_container_server, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
swift-container, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, release=1, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, distribution-scope=public, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 15:16:15 standalone.localdomain python3.9[471069]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:16:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:16:16 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:16.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:16 standalone.localdomain ceph-mon[29756]: pgmap v3051: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:16 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3052: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2f869e978a3914fac885699b921afa1af4d3c121bcdff90f8b5cbe23f9a11dc5-merged.mount: Deactivated successfully.
Oct 13 15:16:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fb81b6705c127d8ae6a85a0c5aa00e223afd3bd2e8fc0274e5929ac927615f25-merged.mount: Deactivated successfully.
Oct 13 15:16:17 standalone.localdomain systemd[1]: libpod-conmon-2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.scope: Deactivated successfully.
Oct 13 15:16:17 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:16:17 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:17 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:17 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:16:17 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:16:17 standalone.localdomain systemd[1]: Started libpod-conmon-2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.scope.
Oct 13 15:16:17 standalone.localdomain podman[471070]: 2025-10-13 15:16:17.520858808 +0000 UTC m=+1.539044540 container exec 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 15:16:17 standalone.localdomain podman[471070]: 2025-10-13 15:16:17.554970037 +0000 UTC m=+1.573155799 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 13 15:16:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:16:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:16:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:16:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:16:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/109364673' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:16:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:16:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/109364673' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:16:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14889 DF PROTO=TCP SPT=41140 DPT=9882 SEQ=1291647748 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB5AE360000000001030307) 
Oct 13 15:16:18 standalone.localdomain ceph-mon[29756]: pgmap v3052: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/109364673' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:16:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/109364673' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:16:18 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3053: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:16:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0ab554be88d1550c55439de27cccaa841a200e2b4e9858835d0b8ff13bea3a63-merged.mount: Deactivated successfully.
Oct 13 15:16:19 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:19 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:19 standalone.localdomain podman[471101]: 2025-10-13 15:16:19.742816016 +0000 UTC m=+1.511825952 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, container_name=openstack_network_exporter, 
io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 15:16:19 standalone.localdomain podman[471100]: 2025-10-13 15:16:19.780046123 +0000 UTC m=+1.549447402 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:16:19 standalone.localdomain podman[471101]: 2025-10-13 15:16:19.784964793 +0000 UTC m=+1.553974649 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 13 15:16:19 standalone.localdomain podman[471100]: 2025-10-13 15:16:19.842979579 +0000 UTC m=+1.612380858 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:16:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37402 DF PROTO=TCP SPT=40326 DPT=9105 SEQ=1079250491 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB5B3760000000001030307) 
Oct 13 15:16:20 standalone.localdomain python3.9[471244]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/iscsid recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:16:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:16:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:20.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:20 standalone.localdomain ceph-mon[29756]: pgmap v3053: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:20 standalone.localdomain python3.9[471352]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Oct 13 15:16:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:20.987 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:16:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:20.990 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:16:20 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3054: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:20.991 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:16:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:20.995 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:20.999 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '703a2f53-5001-473d-a657-248cefb451a7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:16:20.991306', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '92f2048a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.180562921, 'message_signature': '60a5ed0deab7fdb0b78452864d683843cb849949da1ddcc145706cf3547b429e'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:16:20.991306', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '92f287fc-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.185931277, 'message_signature': '6f1817181d609b8454123f23d809e8dad34364ca6b934417bf41d123ec7f938d'}]}, 'timestamp': '2025-10-13 15:16:21.000036', '_unique_id': '8bcce5914b8747f2a7e457f9b8e0447f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.001 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.002 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.003 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.003 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0a44e63f-8409-44f1-9eb5-4a0689dcdc5d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:16:21.003113', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '92f31370-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.180562921, 'message_signature': '1d20cded1ad3aea7d7da30f4f550f76535f29f8fd0704264e1ae5540471a4711'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:16:21.003113', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '92f3266c-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.185931277, 'message_signature': '06a281ec2caf53f4aea3e381a9908983aeaff4a1cac44afbe6d3659fc365ccd2'}]}, 'timestamp': '2025-10-13 15:16:21.004061', '_unique_id': 'a767f54b6bf6476cbb9e12cfcbd9930e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.005 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.006 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.006 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 64 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.006 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 49 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b87a7696-5677-4998-b892-2be17905cdac', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 64, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:16:21.006347', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '92f392b4-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.180562921, 'message_signature': 'fe0ed3441ad88fa1b4902d5509b8005973dd2df125c236e7b0780f30f4730410'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 49, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:16:21.006347', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '92f3a3c6-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.185931277, 'message_signature': '8e01ac787bb0cb6b4938957fd56141ce6f99135c5ecae5775a7dd924110261a7'}]}, 'timestamp': '2025-10-13 15:16:21.007266', '_unique_id': '5a9ce1b8a0ff40b0a10822e359cc965b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.008 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.009 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.009 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.009 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0e9ed9d6-13c4-4306-a3c1-ced29bd47b0e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:16:21.009480', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '92f40d20-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.180562921, 'message_signature': 'a3d4b670db5b0410b4323f9d9a009cdb7c6dafcd32efc39e32f021d2df567a3d'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:16:21.009480', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '92f41e32-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.185931277, 'message_signature': 'd3b4a2c02696ec7ab81f75fc227bd062cf7e3c265a8fc32800855ab0129daef6'}]}, 'timestamp': '2025-10-13 15:16:21.010401', '_unique_id': 'b5865f5e69454ddd83cd01dca0a6291a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.011 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.012 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.012 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 4738 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.013 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 3576 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'aa5838aa-73f5-42a9-bd4e-82fdc8a022c0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4738, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:16:21.012651', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '92f48836-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.180562921, 'message_signature': '05e46965576c9a7623dc0c48a81ff6a69f3a04f0e7a24372dd12e59941af60bc'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3576, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:16:21.012651', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '92f499c0-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.185931277, 'message_signature': 'ea7c49867e336ee1dc0a873b797fbda5a0896a52f4a8d01f0236e62f5ec113a1'}]}, 'timestamp': '2025-10-13 15:16:21.013605', '_unique_id': 'b268b43ec3664b1faf79c9e2268e0447'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.014 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.015 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.015 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.059 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 857967408 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.060 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 155394833 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.061 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 63114260 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.089 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.090 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f68a5803-68c5-4864-9e75-f76f74f3292a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 857967408, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:16:21.016060', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '92fbc7b8-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': '79583609c58f050f5235d062b2daef4032b2d6e0568c99fad83f9ff01cbc6b21'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 155394833, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:16:21.016060', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '92fbdf5a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': '6c3b492705553d1cc5cdbaca7b47d9bae42fe9fc975c98953b1e6ca206a565dd'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 63114260, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:16:21.016060', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '92fbf1d4-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': '049638f2d126a9c148e566955131a17e79612154f5e69d71fece618f2bcefe6c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:16:21.016060', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '930056e8-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.250967088, 'message_signature': '9b877ee71ed34d2a22359111fced79517c95fe8dd1d4e983111361faeb15f65a'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:16:21.016060', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '93006e76-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.250967088, 'message_signature': '2044725b154d5921c19198d54ffa33c258d804e5d54e1e3c3c90754b993a2742'}]}, 'timestamp': '2025-10-13 15:16:21.091116', '_unique_id': '6acc86a26a8247099f2957131dedbf27'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.092 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.094 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.125 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.125 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.126 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.147 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.148 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8bf01d5f-22b4-4031-88f8-495bea577488', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:16:21.094208', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9305b818-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.283480709, 'message_signature': 'd8ec1e6469023e45dbf234aaab08dd9729d7db5bad938c0996cef8f0576f74a7'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:16:21.094208', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9305ca4c-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.283480709, 'message_signature': 'd46fc87e6220e04f2a7c34451c947c05dbbf493eacd8f64f5a057cb41d37e52a'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:16:21.094208', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '9305d7b2-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.283480709, 'message_signature': 'f268334ce8ff94023b66673ee52c91a2c50341080e1f044d488fe939090ecde9'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:16:21.094208', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9309354c-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.315791553, 'message_signature': 'e525d10e3388e6b431ba7b712c2c0d0d41541b79a003c62c69931d3cbe284eb7'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:16:21.094208', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '930945aa-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.315791553, 'message_signature': 'badff98dca55481438c578c898de1ee9073b91336c48758a831372f045f63935'}]}, 'timestamp': '2025-10-13 15:16:21.148981', '_unique_id': '32cf2b74ca0f40f2a1e1cd79a22be098'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.150 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.151 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.151 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.151 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.151 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.152 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.152 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4649d280-36e5-4e84-b6a1-333c52185f7d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:16:21.151152', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9309a7a2-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': '620e6491848b63ded887d20609c8862b2f80515182e62d1e609d697dd9d908fb'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:16:21.151152', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9309b44a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': '973c21bbc326aa4ded83a3b37535b533de8147cd4aa24f99c176373e9ea25ffa'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 108, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:16:21.151152', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '9309bf9e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': 'd5e06eb19ce55aa489a4cba14f6c71634adf0020e2d19428947d115b58108907'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:16:21.151152', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9309cb06-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.250967088, 'message_signature': 'd73a203ca9b9bb84890c8fba8c34931230258b682f63189a425f4e4e2e29d2a9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:16:21.151152', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9309d7f4-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.250967088, 'message_signature': '7191229de0a89907a302648150cc4f939ec746c335fbde7c15ff08f92953112a'}]}, 'timestamp': '2025-10-13 15:16:21.152709', '_unique_id': 'f2b6abc69bc140ce9ffad5089921241f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.153 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.154 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.154 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.154 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.155 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.155 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.155 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1bbcb090-c5b4-4e9c-83a7-743f123106e8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:16:21.154501', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '930a2a4c-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.283480709, 'message_signature': '4b14621cfbb31552753671fe6f0ae5e88ccea5d88910f7c9e5d72c616040bf57'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:16:21.154501', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '930a35f0-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.283480709, 'message_signature': 'd702d78cb41019b636917a179e07575d4f25f1908e0de7f005afa255fae1d029'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:16:21.154501', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '930a4126-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.283480709, 'message_signature': '7be16cc10e981123bafc91f77eba9045109163afbff518374781400450fac31a'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:16:21.154501', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '930a4cac-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.315791553, 'message_signature': 'd533c7e63eba5bbd5b6e3c225844916fc2bde721991c0b87aff1f4b7d4e5b437'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:16:21.154501', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '930a5760-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.315791553, 'message_signature': '99544aac372e7fe90e7c913eb57d752b3bc4b65cdb0ee4fa2cf04d1d65eab189'}]}, 'timestamp': '2025-10-13 15:16:21.155970', '_unique_id': '1289e13ac01d4e12a01447eb103ba0c6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.156 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.157 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.157 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 4526 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.158 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4ea4118f-6bd8-4903-80a6-55a49439ee82', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4526, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:16:21.157858', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '930aad50-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.180562921, 'message_signature': 'e3519741d79407272de806dd912e730361ff4da3cf6676e757576ca5264184bd'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:16:21.157858', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '930ab926-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.185931277, 'message_signature': '2ed6870431cc6ddb665f2c4c5e7420259a7cc13bf4fae524311a7f64c5b4c73f'}]}, 'timestamp': '2025-10-13 15:16:21.158504', '_unique_id': 'fe3adf952d73461f9f2d3fa8d8366f02'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.159 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.160 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.160 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 55 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.160 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7071df72-a0e7-484e-9508-40ac042b888e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 55, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:16:21.160107', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '930b05e8-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.180562921, 'message_signature': '3f2141357d0528b237c7e9b3cebf6a4b87e0a7cb2ab97e30837f855d9d59b807'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:16:21.160107', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '930b1286-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.185931277, 'message_signature': '9269290abfca2d5345f3378e7727f85341cf48caae14975fb159f442a8f345eb'}]}, 'timestamp': '2025-10-13 15:16:21.160768', '_unique_id': 'c342e17a9ce64a09a545bb15b9d1e147'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.161 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.162 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.181 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 27010000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain rsyslogd[56156]: imjournal from <standalone:ceilometer_agent_compute>: begin to drop messages due to rate-limiting
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.201 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 26780000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f6806f29-9051-4aa5-a79e-42a12f65b5e2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27010000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:16:21.162239', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '930e4118-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.370125596, 'message_signature': '44fb01994f4f48faade8d6a69b89c630cf39ca011b1c7f8a170fe4d50c1f42ec'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 26780000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:16:21.162239', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '93114c1e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.390198413, 'message_signature': '93fd209b2fc76537fd26e29fb9b25357613445c8a1df47a004776f47ec570c38'}]}, 'timestamp': '2025-10-13 15:16:21.201629', '_unique_id': '23d7155633594972807e6704be370c65'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.202 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.203 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.203 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.203 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.203 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 2321920 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.204 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.204 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9fe1dd65-49f6-4aaf-9c71-37fe643bf3ab', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:16:21.203437', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9311a1c8-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': '52b3a9604c4cae756a8f55a8721ed4a0d5f70ebbe18a4f190de2c2a4f6d0af38'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:16:21.203437', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9311ac04-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': '3445c7d498f8148518e4a2b84499090b68302cfde2d0b5043d3a5177f1703749'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2321920, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:16:21.203437', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '9311b5c8-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': 'abbc2d904f78570362d6fb6064f3920fc3c2a81cdbe28132d165c28bfcac2c2c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:16:21.203437', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9311c0c2-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.250967088, 'message_signature': '95b4a2b80269bb5986b159b6daf87e5769552ae8aa405182fb35d4a226dda4d6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:16:21.203437', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9311cc2a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.250967088, 'message_signature': 'aaf7a76882305ef87661d90f18b0ea04ad88f0997abff743a2de119f456a70bf'}]}, 'timestamp': '2025-10-13 15:16:21.204816', '_unique_id': 'b1bbdf2ff5824d4689f3e47d52b012bc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.206 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.206 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.206 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.206 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.207 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.207 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1dfb9eaa-eb95-45af-b07f-d129b96bbb8e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:16:21.206240', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '93120e1a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': '1f0b8538e87cc0dc5ebddd96f47909b8011f9f92de13989881851eac8d16c485'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:16:21.206240', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '93121946-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': '2590bc01895c86e92f270070a83b6ed8f7b3ff68d79bb53563b827686f2416e5'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:16:21.206240', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '931222f6-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': '9be4644edb3b9ca9637bc4c36ea3a84b2c5548420b7a543989badedc8517c2a7'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:16:21.206240', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '93122c7e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.250967088, 'message_signature': 'cf2360000ae8d4578f5816cd7335f29796fec03f3664a5584d5eb7dff594a1e5'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:16:21.206240', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '93123606-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.250967088, 'message_signature': '0782c8ee72f9d5eb9765d36bba2f7d6d98a79623569a242759610069b8ee6ae0'}]}, 'timestamp': '2025-10-13 15:16:21.207549', '_unique_id': 'cd3947ee45be406eb82da649550d76ca'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.208 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.209 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.209 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '16929ea2-3e96-4505-9bda-a20ca7cdf3d0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:16:21.209032', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '93127ba2-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.180562921, 'message_signature': '001ffb8e7c03193b5a232440bf914a524709b88fdb063c3182bd620ee0918512'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:16:21.209032', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '931286b0-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.185931277, 'message_signature': '2620bd383da6c9dc9a6534c93bb47afe1afa4ed09b85676e2e09d1b1eb07bc65'}]}, 'timestamp': '2025-10-13 15:16:21.209637', '_unique_id': '079a34488bb9493c9bfce06ba4c9a24a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.210 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.211 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.211 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3b5513a6-658c-45a6-9bf8-e99b006fa497', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:16:21.211004', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '9312c86e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.180562921, 'message_signature': '7acfc55c5b15ef344eca62b59b577479312b2b3a4e911f2ce8d445520d0ca4b6'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:16:21.211004', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '9312d354-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.185931277, 'message_signature': '84523bda9b6986a3d3fb1dc334723d47271d7545f8f2d2aa3d41cf44e2196e12'}]}, 'timestamp': '2025-10-13 15:16:21.211588', '_unique_id': '93f25fe3cd6d4bcfbc372d2291ad6348'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 ERROR oslo_messaging.notify.messaging 
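Editor's note: the traceback above recurs at every polling interval while the AMQP broker on standalone.localdomain refuses connections; each sample batch is lost, not queued. The exception chain it records — a socket-level `ConnectionRefusedError` re-raised by kombu's `_reraise_as_library_errors` context manager as `kombu.exceptions.OperationalError` with the original preserved as `__cause__` — can be sketched with the standard library alone. This is a minimal illustration, not kombu's actual code: `OperationalError` below is a stand-in for `kombu.exceptions.OperationalError`, and port 1 on localhost is assumed closed so that `connect()` fails with `[Errno 111]`.

```python
# Sketch of the re-raise pattern visible in the traceback above.
# Assumptions: nothing listens on 127.0.0.1:1 (so connect() is refused),
# and OperationalError stands in for kombu.exceptions.OperationalError.
import socket
from contextlib import contextmanager

class OperationalError(Exception):
    """Stand-in for kombu.exceptions.OperationalError."""

@contextmanager
def reraise_as_library_errors():
    # Mirrors kombu's _reraise_as_library_errors: catch the low-level
    # socket error and re-raise a library-level error, chaining the
    # original via "from exc" so __cause__ points at it.
    try:
        yield
    except OSError as exc:  # ConnectionRefusedError is a subclass of OSError
        raise OperationalError(str(exc)) from exc

def connect_to_closed_port():
    # Connecting to a closed local port fails fast, analogous to the
    # broker being down when amqp's transport calls sock.connect().
    with socket.socket() as s:
        s.settimeout(2)
        s.connect(("127.0.0.1", 1))

try:
    with reraise_as_library_errors():
        connect_to_closed_port()
except OperationalError as exc:
    # The wrapped socket error survives as __cause__, which is why the
    # log shows both tracebacks joined by "The above exception was the
    # direct cause of the following exception".
    print(type(exc).__name__, "caused by", type(exc.__cause__).__name__)
```

The practical takeaway from the log itself is that the agent keeps polling successfully (the DEBUG `_stats_to_sample` lines) but every publish fails until the broker is reachable again.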
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.212 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.213 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 52.27734375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.213 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e81c80ab-b29e-453f-8e7e-e3abf5ce9691', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.27734375, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:16:21.213002', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '93131634-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.370125596, 'message_signature': '9589e1d4da1faa45572a4c715a391a113ae7b6945a376a74f8eb6d3b938a4358'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:16:21.213002', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '93132052-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.390198413, 'message_signature': '97c8df87e02c2685d689659727f7a8989e33a9d7fc2296368ae5dd0e61307e2b'}]}, 'timestamp': '2025-10-13 15:16:21.213550', '_unique_id': 'e8bdb3944d554929820917037fbddf65'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.214 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.215 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.215 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '94c2a39c-c122-47d3-874e-9ea0a447d9d8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:16:21.215039', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '931365f8-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.180562921, 'message_signature': '5d4847107598e95496328b940310bfe0cf937dd1a1544b998e0c57ebf28ce9aa'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:16:21.215039', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '931370e8-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.185931277, 'message_signature': '9432ab283e809fcd577f2c8f9640d5ac82cba2b57ababa696ff1380feb0aaff7'}]}, 'timestamp': '2025-10-13 15:16:21.215620', '_unique_id': '248b57381fea4868b5be41397a92d907'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.216 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.217 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.217 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.217 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 495 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.217 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.217 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.217 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.218 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e892569d-cf04-4f59-b01b-1d68c6214dda', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 495, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:16:21.217167', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9313b922-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': '8cf49658cbeb706af48af12e67dfd6987e840cb22838bb64a18141301ccbbd13'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:16:21.217167', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9313c444-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': 'a9ba5f96737e5e14d6621d02af1cf753a66b2b0117380dc82c8915535f501d5f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:16:21.217167', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '9313cdae-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': '5c98464dffd709dceaa6799857a748e9c086138c237d73b7633f959f172f1459'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:16:21.217167', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9313d696-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.250967088, 'message_signature': '7e24dd7fc9b7dfec33e218aa3ac8fbac09484ba6cd1dd0e842db0522ea54811d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:16:21.217167', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9313df7e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.250967088, 'message_signature': '740febb9b29eaaba72710be047185f687690b3be5d578c2f2f978dedff2730ba'}]}, 'timestamp': '2025-10-13 15:16:21.218415', '_unique_id': '83e88c89b57b468288c7f96f3df6d15b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.219 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 501653653 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.220 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.220 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.220 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.220 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7e56c057-79be-4c92-af6d-76f289028e69', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 501653653, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:16:21.219777', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '93141e8a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': '44768e1ca49ad7dcdda5a67f5ce818a3cd7afd7ca1149a3bb2d7dbce035f8690'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:16:21.219777', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '93142830-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': '886c3fa4ef34d2355e3e642c612f7408f416fed22c7df09d34d8c2664d2ad398'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:16:21.219777', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '931431fe-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.205317833, 'message_signature': '1373c465d4d5cd8df733b72f81f94e77a0db7ba85f90f6bdc1fe8cd36d0b1362'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:16:21.219777', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '93143c26-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.250967088, 'message_signature': '7233b69a895d24c2e574bcc79cd98d0a5159bd18f05bf443484784b29428afff'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:16:21.219777', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '93144504-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.250967088, 'message_signature': '8bd200f4972aa983a27b1f60d5cae4f95080cab61ed76a1f892a0b6d7a1f725f'}]}, 'timestamp': '2025-10-13 15:16:21.221020', '_unique_id': 'b9545a6d06a14a0e972b878be54577d9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.221 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.222 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.222 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.222 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.223 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.223 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.223 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd5d8d96b-e3f6-42d2-b585-1cee635a4e94', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:16:21.222460', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '931488ca-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.283480709, 'message_signature': '8187a8628a9bed998ec5bf0f0fef2be660f143b5758245999a19918969e9d90e'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:16:21.222460', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9314931a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.283480709, 'message_signature': '17478077115217a788c043121c57359096d88858d8c1158d8f53a46a775b2e74'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:16:21.222460', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '93149cca-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.283480709, 'message_signature': '1ba9f52400b5822fa628b831fb51a9f18db558963e31e5fdc6c7fe8b616db22b'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:16:21.222460', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9314a666-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.315791553, 'message_signature': '131bf8aaf6e377c7bf6652d851c0b4da05917215c1dfad378695e798e2d31d8d'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:16:21.222460', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9314b0f2-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8024.315791553, 'message_signature': '579deadafeb99243e8fab25ba4d8bbc90d558f5ba8915fc571326e431e28e019'}]}, 'timestamp': '2025-10-13 15:16:21.223777', '_unique_id': '2d261daa105c45d49e9d2bdec97ee347'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:16:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:16:21.224 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:16:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:16:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:16:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:16:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:21.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:21 standalone.localdomain systemd[1]: libpod-conmon-2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.scope: Deactivated successfully.
Oct 13 15:16:21 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:21 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:21 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:16:21 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:16:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:22 standalone.localdomain ceph-mon[29756]: pgmap v3054: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:22 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3055: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:16:23
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'manila_data', 'vms', 'images', 'volumes', '.mgr', 'manila_metadata']
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:16:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23311 DF PROTO=TCP SPT=45548 DPT=9102 SEQ=1382202452 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB5C11F0000000001030307) 
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:16:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:16:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:16:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:16:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b385782410aae5c113d9546c3022ac0ff632d6109f408ddd4832d9b2db2c7c16-merged.mount: Deactivated successfully.
Oct 13 15:16:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b385782410aae5c113d9546c3022ac0ff632d6109f408ddd4832d9b2db2c7c16-merged.mount: Deactivated successfully.
Oct 13 15:16:24 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:24 standalone.localdomain podman[471364]: 2025-10-13 15:16:24.114227851 +0000 UTC m=+0.377543881 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:16:24 standalone.localdomain podman[471364]: 2025-10-13 15:16:24.129003866 +0000 UTC m=+0.392319826 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:16:24 standalone.localdomain podman[471364]: unhealthy
Oct 13 15:16:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:16:24 standalone.localdomain ceph-mon[29756]: pgmap v3055: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:24 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3056: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:16:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:16:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:16:25 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:16:25 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:16:25 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15882 DF PROTO=TCP SPT=42598 DPT=9101 SEQ=4147919561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB5C8AE0000000001030307) 
Oct 13 15:16:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:25.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:25 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:25 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:25 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:25 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:25 standalone.localdomain python3.9[471494]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:16:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:16:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:16:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:25 standalone.localdomain ceph-mon[29756]: pgmap v3056: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:16:25 standalone.localdomain podman[471501]: 2025-10-13 15:16:25.96679579 +0000 UTC m=+0.058704858 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:16:26 standalone.localdomain systemd[1]: Started libpod-conmon-87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.scope.
Oct 13 15:16:26 standalone.localdomain podman[471501]: 2025-10-13 15:16:26.047581667 +0000 UTC m=+0.139490755 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:16:26 standalone.localdomain podman[471501]: unhealthy
Oct 13 15:16:26 standalone.localdomain podman[471495]: 2025-10-13 15:16:26.055556332 +0000 UTC m=+0.171001754 container exec 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:16:26 standalone.localdomain podman[471495]: 2025-10-13 15:16:26.088045812 +0000 UTC m=+0.203491214 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:16:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:26.263 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:16:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:26.263 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:16:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:26.264 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:16:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:26.264 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:16:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:16:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:26.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:26.756 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:16:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:26.757 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:16:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:26.757 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:16:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:26.757 2 DEBUG nova.objects.instance [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:16:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:26 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3057: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.147 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.168 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.168 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.169 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.169 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.170 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.170 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.171 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.171 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.172 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.172 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.195 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.196 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.197 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.197 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.198 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:16:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0ab554be88d1550c55439de27cccaa841a200e2b4e9858835d0b8ff13bea3a63-merged.mount: Deactivated successfully.
Oct 13 15:16:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6a6ca37cfa9f0e5053b4b5f65562ad93b9cc6ac5c10b8f2ac2713ae975ff1e87-merged.mount: Deactivated successfully.
Oct 13 15:16:27 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:16:27 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Failed with result 'exit-code'.
Oct 13 15:16:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:16:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/278743431' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.669 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.743 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.744 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.744 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.750 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.750 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.918 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.918 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=10261MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.919 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:16:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:27.919 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:16:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:28.011 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:16:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:28.011 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:16:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:28.012 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:16:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:28.012 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:16:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:28.072 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:16:28 standalone.localdomain python3.9[471674]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:16:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:16:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:16:28 standalone.localdomain ceph-mon[29756]: pgmap v3057: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:16:28 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/278743431' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:16:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:16:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1964614998' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:16:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:16:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:28.506 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:16:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:28.511 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:16:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:28.533 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:16:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:28.535 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:16:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:28.535 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:16:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15884 DF PROTO=TCP SPT=42598 DPT=9101 SEQ=4147919561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB5D4B70000000001030307) 
Oct 13 15:16:28 standalone.localdomain systemd[1]: libpod-conmon-87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.scope: Deactivated successfully.
Oct 13 15:16:28 standalone.localdomain systemd[1]: Started libpod-conmon-87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.scope.
Oct 13 15:16:28 standalone.localdomain podman[471676]: 2025-10-13 15:16:28.788551839 +0000 UTC m=+0.654719042 container exec 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:16:28 standalone.localdomain podman[471676]: 2025-10-13 15:16:28.820846763 +0000 UTC m=+0.687013986 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:16:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:16:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f2f7a4d09cf0fc6d635961ac1a5e419617e8dfdf8c2aec5e2a272814a4290da2-merged.mount: Deactivated successfully.
Oct 13 15:16:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:16:28 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3058: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:16:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:16:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:16:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:16:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:16:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:16:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:16:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:16:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:16:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:16:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:16:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:16:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:16:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:16:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:16:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:16:29 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1964614998' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:16:30 standalone.localdomain ceph-mon[29756]: pgmap v3058: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:16:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:30.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:30 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3059: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:16:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:16:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.467187) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368591467238, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1191, "num_deletes": 257, "total_data_size": 1005784, "memory_usage": 1029592, "flush_reason": "Manual Compaction"}
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368591474733, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 976663, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71428, "largest_seqno": 72618, "table_properties": {"data_size": 971633, "index_size": 2502, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10967, "raw_average_key_size": 19, "raw_value_size": 961301, "raw_average_value_size": 1677, "num_data_blocks": 114, "num_entries": 573, "num_filter_entries": 573, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760368493, "oldest_key_time": 1760368493, "file_creation_time": 1760368591, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 8007 microseconds, and 4252 cpu microseconds.
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.475155) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 976663 bytes OK
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.475323) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.477314) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.477336) EVENT_LOG_v1 {"time_micros": 1760368591477330, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.477358) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1000214, prev total WAL file size 1000703, number of live WAL files 2.
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.478671) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353039' seq:72057594037927935, type:22 .. '6C6F676D0032373632' seq:0, type:0; will stop at (end)
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(953KB)], [161(6119KB)]
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368591478720, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 7243399, "oldest_snapshot_seqno": -1}
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 6317 keys, 7148473 bytes, temperature: kUnknown
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368591512382, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 7148473, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7108458, "index_size": 23149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 165043, "raw_average_key_size": 26, "raw_value_size": 6995759, "raw_average_value_size": 1107, "num_data_blocks": 926, "num_entries": 6317, "num_filter_entries": 6317, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760368591, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.512647) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 7148473 bytes
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.514241) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.5 rd, 211.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 6.0 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(14.7) write-amplify(7.3) OK, records in: 6847, records dropped: 530 output_compression: NoCompression
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.514261) EVENT_LOG_v1 {"time_micros": 1760368591514252, "job": 100, "event": "compaction_finished", "compaction_time_micros": 33767, "compaction_time_cpu_micros": 16986, "output_level": 6, "num_output_files": 1, "total_output_size": 7148473, "num_input_records": 6847, "num_output_records": 6317, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368591514616, "job": 100, "event": "table_file_deletion", "file_number": 163}
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368591515284, "job": 100, "event": "table_file_deletion", "file_number": 161}
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.478548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.515445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.515452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.515456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.515459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:16:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:16:31.515462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:16:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:31.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:31 standalone.localdomain python3.9[471834]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:16:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:16:32 standalone.localdomain ceph-mon[29756]: pgmap v3059: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:16:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15885 DF PROTO=TCP SPT=42598 DPT=9101 SEQ=4147919561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB5E4760000000001030307) 
Oct 13 15:16:32 standalone.localdomain python3.9[471942]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Oct 13 15:16:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:32 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3060: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:16:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:16:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:16:33 standalone.localdomain systemd[1]: libpod-conmon-87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.scope: Deactivated successfully.
Oct 13 15:16:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:16:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:34 standalone.localdomain ceph-mon[29756]: pgmap v3060: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:16:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3061: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:16:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:16:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:35.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:16:35 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:35 standalone.localdomain podman[471954]: 2025-10-13 15:16:35.883080497 +0000 UTC m=+2.144724102 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 15:16:35 standalone.localdomain podman[471954]: 2025-10-13 15:16:35.987618814 +0000 UTC m=+2.249262429 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:16:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:16:36 standalone.localdomain ceph-mon[29756]: pgmap v3061: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:16:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:16:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:16:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:36.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3062: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:37 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:16:37 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:37 standalone.localdomain podman[471980]: 2025-10-13 15:16:37.072287239 +0000 UTC m=+0.491486348 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:16:37 standalone.localdomain podman[471980]: 2025-10-13 15:16:37.107886925 +0000 UTC m=+0.527086034 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:16:37 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26292 DF PROTO=TCP SPT=53158 DPT=9100 SEQ=922385556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB5F7340000000001030307) 
Oct 13 15:16:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:16:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:37 standalone.localdomain python3.9[472110]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:16:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:37 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:16:37 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:37 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:37 standalone.localdomain systemd[1]: Started libpod-conmon-64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.scope.
Oct 13 15:16:37 standalone.localdomain podman[472111]: 2025-10-13 15:16:37.793907189 +0000 UTC m=+0.141196727 container exec 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Oct 13 15:16:37 standalone.localdomain podman[472111]: 2025-10-13 15:16:37.826753751 +0000 UTC m=+0.174043259 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 13 15:16:38 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26293 DF PROTO=TCP SPT=53158 DPT=9100 SEQ=922385556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB5FB360000000001030307) 
Oct 13 15:16:38 standalone.localdomain ceph-mon[29756]: pgmap v3062: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3063: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:16:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fffa5554b0cfe650195b161e4e00ab11f8ed81bad83af3fd33e2eb249df9058a-merged.mount: Deactivated successfully.
Oct 13 15:16:40 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:40 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fffa5554b0cfe650195b161e4e00ab11f8ed81bad83af3fd33e2eb249df9058a-merged.mount: Deactivated successfully.
Oct 13 15:16:40 standalone.localdomain systemd[1]: libpod-conmon-64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.scope: Deactivated successfully.
Oct 13 15:16:40 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26294 DF PROTO=TCP SPT=53158 DPT=9100 SEQ=922385556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB603360000000001030307) 
Oct 13 15:16:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:40.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:40 standalone.localdomain ceph-mon[29756]: pgmap v3063: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:40 standalone.localdomain python3.9[472248]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:16:40 standalone.localdomain systemd[1]: Started libpod-conmon-64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.scope.
Oct 13 15:16:40 standalone.localdomain podman[472249]: 2025-10-13 15:16:40.94680411 +0000 UTC m=+0.086064409 container exec 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute)
Oct 13 15:16:40 standalone.localdomain podman[472249]: 2025-10-13 15:16:40.980336232 +0000 UTC m=+0.119596601 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251009, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 15:16:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3064: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:16:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:41.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:16:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b385782410aae5c113d9546c3022ac0ff632d6109f408ddd4832d9b2db2c7c16-merged.mount: Deactivated successfully.
Oct 13 15:16:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:16:42 standalone.localdomain ceph-mon[29756]: pgmap v3064: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bc487173c42dac73779451c545de56f2d82e43b8703bd3444d61c56a0ffff0f9-merged.mount: Deactivated successfully.
Oct 13 15:16:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3065: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:16:43 standalone.localdomain python3.9[472385]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:16:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b385782410aae5c113d9546c3022ac0ff632d6109f408ddd4832d9b2db2c7c16-merged.mount: Deactivated successfully.
Oct 13 15:16:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:16:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:16:43 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5693 DF PROTO=TCP SPT=46024 DPT=9105 SEQ=1714107090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB610F60000000001030307) 
Oct 13 15:16:44 standalone.localdomain python3.9[472504]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Oct 13 15:16:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:44 standalone.localdomain ceph-mon[29756]: pgmap v3065: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:16:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:16:44 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:44 standalone.localdomain systemd[1]: libpod-conmon-64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.scope: Deactivated successfully.
Oct 13 15:16:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3066: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:45 standalone.localdomain podman[472386]: 2025-10-13 15:16:45.00875531 +0000 UTC m=+1.766309244 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 13 15:16:45 standalone.localdomain podman[472386]: 2025-10-13 15:16:45.043261673 +0000 UTC m=+1.800815627 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:16:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:16:45 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:45.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:16:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:46 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:46 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:16:46 standalone.localdomain podman[472524]: 2025-10-13 15:16:46.195758915 +0000 UTC m=+0.717859816 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:16:46 standalone.localdomain podman[472524]: 2025-10-13 15:16:46.21185108 +0000 UTC m=+0.733951981 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:16:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:16:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:46 standalone.localdomain ceph-mon[29756]: pgmap v3066: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:46.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:16:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:46 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:46 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:46 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:16:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3067: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:47 standalone.localdomain python3.9[472652]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:16:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:16:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:16:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:16:47 standalone.localdomain podman[472655]: 2025-10-13 15:16:47.561013575 +0000 UTC m=+0.074678349 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, com.redhat.component=openstack-swift-container-container, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, container_name=swift_container_server, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=)
Oct 13 15:16:47 standalone.localdomain systemd[1]: tmp-crun.A1rvZd.mount: Deactivated successfully.
Oct 13 15:16:47 standalone.localdomain podman[472654]: 2025-10-13 15:16:47.580198345 +0000 UTC m=+0.094108747 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, name=rhosp17/openstack-swift-object, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, release=1, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, container_name=swift_object_server, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 15:16:47 standalone.localdomain systemd[1]: Started libpod-conmon-4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.scope.
Oct 13 15:16:47 standalone.localdomain podman[472653]: 2025-10-13 15:16:47.644940048 +0000 UTC m=+0.163256556 container exec 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:16:47 standalone.localdomain podman[472656]: 2025-10-13 15:16:47.654630137 +0000 UTC m=+0.166232828 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, build-date=2025-07-21T16:11:22, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, release=1, description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc.)
Oct 13 15:16:47 standalone.localdomain podman[472653]: 2025-10-13 15:16:47.678842971 +0000 UTC m=+0.197159459 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:16:47 standalone.localdomain podman[472655]: 2025-10-13 15:16:47.771807823 +0000 UTC m=+0.285472567 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=swift_container_server, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=)
Oct 13 15:16:47 standalone.localdomain podman[472654]: 2025-10-13 15:16:47.791101117 +0000 UTC m=+0.305011529 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_id=tripleo_step4, com.redhat.component=openstack-swift-object-container, batch=17.1_20250721.1, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T14:56:28, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:16:47 standalone.localdomain podman[472656]: 2025-10-13 15:16:47.82177856 +0000 UTC m=+0.333381261 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, name=rhosp17/openstack-swift-account, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T16:11:22, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public)
Oct 13 15:16:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:16:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f2f7a4d09cf0fc6d635961ac1a5e419617e8dfdf8c2aec5e2a272814a4290da2-merged.mount: Deactivated successfully.
Oct 13 15:16:48 standalone.localdomain ceph-mon[29756]: pgmap v3067: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:48 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18268 DF PROTO=TCP SPT=42690 DPT=9882 SEQ=3915986251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB623360000000001030307) 
Oct 13 15:16:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3068: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bc487173c42dac73779451c545de56f2d82e43b8703bd3444d61c56a0ffff0f9-merged.mount: Deactivated successfully.
Oct 13 15:16:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-63ac4d190a9bc6bdffd7c21680e10b5fe3531211b9daf7795599b16a76f460b2-merged.mount: Deactivated successfully.
Oct 13 15:16:49 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:16:49 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:16:49 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:16:49 standalone.localdomain systemd[1]: libpod-conmon-4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.scope: Deactivated successfully.
Oct 13 15:16:50 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5695 DF PROTO=TCP SPT=46024 DPT=9105 SEQ=1714107090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB628B60000000001030307) 
Oct 13 15:16:50 standalone.localdomain python3.9[472866]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:16:50 standalone.localdomain systemd[1]: Started libpod-conmon-4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.scope.
Oct 13 15:16:50 standalone.localdomain podman[472867]: 2025-10-13 15:16:50.491283464 +0000 UTC m=+0.100878896 container exec 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:16:50 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:50.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:50 standalone.localdomain podman[472867]: 2025-10-13 15:16:50.532843823 +0000 UTC m=+0.142439235 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:16:50 standalone.localdomain ceph-mon[29756]: pgmap v3068: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3069: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:16:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:16:51 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:51.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a5ee323a293e7f691e7ca7f82fe917f1283619168fc1869822e9fd3927a94077-merged.mount: Deactivated successfully.
Oct 13 15:16:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:16:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:16:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:16:52 standalone.localdomain ceph-mon[29756]: pgmap v3069: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:52 standalone.localdomain podman[472897]: 2025-10-13 15:16:52.615619908 +0000 UTC m=+0.901462987 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This base image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Oct 13 15:16:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:16:52 standalone.localdomain systemd[1]: tmp-crun.lX3ilc.mount: Deactivated successfully.
Oct 13 15:16:52 standalone.localdomain podman[472896]: 2025-10-13 15:16:52.65858702 +0000 UTC m=+0.944668526 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 15:16:52 standalone.localdomain podman[472896]: 2025-10-13 15:16:52.700130549 +0000 UTC m=+0.986211995 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 15:16:52 standalone.localdomain podman[472897]: 2025-10-13 15:16:52.753677277 +0000 UTC m=+1.039520296 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-type=git, managed_by=edpm_ansible, release=1755695350, version=9.6, name=ubi9-minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Oct 13 15:16:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3070: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:53 standalone.localdomain python3.9[473042]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:16:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:16:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:16:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:16:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:16:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:16:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:16:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6077 DF PROTO=TCP SPT=59714 DPT=9102 SEQ=2091110053 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB6364F0000000001030307) 
Oct 13 15:16:53 standalone.localdomain python3.9[473150]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Oct 13 15:16:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:16:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:16:54 standalone.localdomain ceph-mon[29756]: pgmap v3070: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:54 standalone.localdomain systemd[1]: libpod-conmon-4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.scope: Deactivated successfully.
Oct 13 15:16:54 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:16:54 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:54 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:54 standalone.localdomain podman[467099]: time="2025-10-13T15:16:54Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged: invalid argument"
Oct 13 15:16:54 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:54 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:54 standalone.localdomain podman[467099]: time="2025-10-13T15:16:54Z" level=error msg="Getting root fs size for \"1500fd74fde13a67761de0823a6a7c9804b695da50d1f8aec6313a83a310f67e\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": creating overlay mount to /var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/W4XPC5KOLLZ5MP2UTF333CCBDD:/var/lib/containers/storage/overlay/l/5PSUIGDAXQERVK6KLHOROUARFH,upperdir=/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/diff,workdir=/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/work,nodev,metacopy=on\": no such file or directory"
Oct 13 15:16:54 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:16:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3071: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:16:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:16:55 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22324 DF PROTO=TCP SPT=41658 DPT=9101 SEQ=3634040326 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB63DDE0000000001030307) 
Oct 13 15:16:55 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:55.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:16:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:16:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:16:56 standalone.localdomain ceph-mon[29756]: pgmap v3071: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:56 standalone.localdomain nova_compute[456855]: 2025-10-13 15:16:56.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:16:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3072: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:16:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fffa5554b0cfe650195b161e4e00ab11f8ed81bad83af3fd33e2eb249df9058a-merged.mount: Deactivated successfully.
Oct 13 15:16:57 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:57 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:57 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:57 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:57 standalone.localdomain podman[473166]: 2025-10-13 15:16:57.48534761 +0000 UTC m=+1.834398831 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:16:57 standalone.localdomain podman[473166]: 2025-10-13 15:16:57.496953417 +0000 UTC m=+1.846004718 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:16:57 standalone.localdomain podman[473166]: unhealthy
Oct 13 15:16:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:16:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:16:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:16:58 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22326 DF PROTO=TCP SPT=41658 DPT=9101 SEQ=3634040326 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB649F60000000001030307) 
Oct 13 15:16:58 standalone.localdomain ceph-mon[29756]: pgmap v3072: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3073: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:16:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:16:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bc487173c42dac73779451c545de56f2d82e43b8703bd3444d61c56a0ffff0f9-merged.mount: Deactivated successfully.
Oct 13 15:16:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bc487173c42dac73779451c545de56f2d82e43b8703bd3444d61c56a0ffff0f9-merged.mount: Deactivated successfully.
Oct 13 15:16:59 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:16:59 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:16:59 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:16:59 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:00 standalone.localdomain podman[473189]: 2025-10-13 15:17:00.005981101 +0000 UTC m=+2.267451480 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true)
Oct 13 15:17:00 standalone.localdomain podman[473189]: 2025-10-13 15:17:00.040902166 +0000 UTC m=+2.302372585 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:17:00 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:00.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:00 standalone.localdomain ceph-mon[29756]: pgmap v3073: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:00 standalone.localdomain python3.9[473313]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:17:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3074: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:17:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:17:01 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:01.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:17:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:01 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:17:02 standalone.localdomain systemd[1]: Started libpod-conmon-e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.scope.
Oct 13 15:17:02 standalone.localdomain podman[473314]: 2025-10-13 15:17:02.024879159 +0000 UTC m=+1.348850026 container exec e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:17:02 standalone.localdomain podman[473314]: 2025-10-13 15:17:02.057651438 +0000 UTC m=+1.381622345 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:17:02 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22327 DF PROTO=TCP SPT=41658 DPT=9101 SEQ=3634040326 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB659B60000000001030307) 
Oct 13 15:17:02 standalone.localdomain ceph-mon[29756]: pgmap v3074: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3075: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a5ee323a293e7f691e7ca7f82fe917f1283619168fc1869822e9fd3927a94077-merged.mount: Deactivated successfully.
Oct 13 15:17:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-01d7f3190520dadfdb112eff8448e727d7969c79a4854317f6519c23470ffe05-merged.mount: Deactivated successfully.
Oct 13 15:17:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:04 standalone.localdomain python3.9[473450]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:17:04 standalone.localdomain ceph-mon[29756]: pgmap v3075: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:04 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:04 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:04 standalone.localdomain systemd[1]: libpod-conmon-e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.scope: Deactivated successfully.
Oct 13 15:17:04 standalone.localdomain systemd[1]: Started libpod-conmon-e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.scope.
Oct 13 15:17:04 standalone.localdomain podman[473451]: 2025-10-13 15:17:04.793537864 +0000 UTC m=+0.137104981 container exec e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:17:04 standalone.localdomain podman[473451]: 2025-10-13 15:17:04.823690333 +0000 UTC m=+0.167257430 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:17:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3076: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:05 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:05.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:17:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-255b54765638efdff81e9220881dc259a519b83267eec04efe1f3768f49e0df8-merged.mount: Deactivated successfully.
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.491086) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368626491108, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 565, "num_deletes": 250, "total_data_size": 340823, "memory_usage": 351792, "flush_reason": "Manual Compaction"}
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368626531383, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 236019, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72619, "largest_seqno": 73183, "table_properties": {"data_size": 233449, "index_size": 618, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7019, "raw_average_key_size": 20, "raw_value_size": 228114, "raw_average_value_size": 657, "num_data_blocks": 29, "num_entries": 347, "num_filter_entries": 347, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760368591, "oldest_key_time": 1760368591, "file_creation_time": 1760368626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 40449 microseconds, and 1014 cpu microseconds.
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.531511) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 236019 bytes OK
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.531553) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.554499) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.554547) EVENT_LOG_v1 {"time_micros": 1760368626554537, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.554573) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 337650, prev total WAL file size 338139, number of live WAL files 2.
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.555619) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373533' seq:72057594037927935, type:22 .. '6D6772737461740033303034' seq:0, type:0; will stop at (end)
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(230KB)], [164(6980KB)]
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368626555862, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 7384492, "oldest_snapshot_seqno": -1}
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 6163 keys, 5431655 bytes, temperature: kUnknown
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368626592754, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 5431655, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5397095, "index_size": 18078, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 161981, "raw_average_key_size": 26, "raw_value_size": 5291449, "raw_average_value_size": 858, "num_data_blocks": 715, "num_entries": 6163, "num_filter_entries": 6163, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760368626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.592985) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 5431655 bytes
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.594071) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.8 rd, 146.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 6.8 +0.0 blob) out(5.2 +0.0 blob), read-write-amplify(54.3) write-amplify(23.0) OK, records in: 6664, records dropped: 501 output_compression: NoCompression
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.594090) EVENT_LOG_v1 {"time_micros": 1760368626594082, "job": 102, "event": "compaction_finished", "compaction_time_micros": 36966, "compaction_time_cpu_micros": 16469, "output_level": 6, "num_output_files": 1, "total_output_size": 5431655, "num_input_records": 6664, "num_output_records": 6163, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368626594220, "job": 102, "event": "table_file_deletion", "file_number": 166}
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368626594815, "job": 102, "event": "table_file_deletion", "file_number": 164}
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.555270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.594941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.594950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.594952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.594955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:17:06.594957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:17:06 standalone.localdomain ceph-mon[29756]: pgmap v3076: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:06 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:06.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:17:06.933 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:17:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:17:06.934 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:17:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:17:06.934 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:17:06 standalone.localdomain python3.9[473588]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3077: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:07 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36802 DF PROTO=TCP SPT=54640 DPT=9100 SEQ=2322230278 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB66C640000000001030307) 
Oct 13 15:17:07 standalone.localdomain sudo[473697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:17:07 standalone.localdomain sudo[473697]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:17:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:17:07 standalone.localdomain sudo[473697]: pam_unix(sudo:session): session closed for user root
Oct 13 15:17:07 standalone.localdomain sudo[473716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 15:17:07 standalone.localdomain python3.9[473696]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Oct 13 15:17:07 standalone.localdomain sudo[473716]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:17:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:17:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:17:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:17:08 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36803 DF PROTO=TCP SPT=54640 DPT=9100 SEQ=2322230278 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB670770000000001030307) 
Oct 13 15:17:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bc487173c42dac73779451c545de56f2d82e43b8703bd3444d61c56a0ffff0f9-merged.mount: Deactivated successfully.
Oct 13 15:17:08 standalone.localdomain ceph-mon[29756]: pgmap v3077: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-63ac4d190a9bc6bdffd7c21680e10b5fe3531211b9daf7795599b16a76f460b2-merged.mount: Deactivated successfully.
Oct 13 15:17:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-63ac4d190a9bc6bdffd7c21680e10b5fe3531211b9daf7795599b16a76f460b2-merged.mount: Deactivated successfully.
Oct 13 15:17:08 standalone.localdomain systemd[1]: libpod-conmon-e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.scope: Deactivated successfully.
Oct 13 15:17:08 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:08 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 15:17:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 6000.1 total, 600.0 interval
                                                        Cumulative writes: 6969 writes, 30K keys, 6969 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                                        Cumulative WAL: 6969 writes, 1298 syncs, 5.37 writes per sync, written: 0.03 GB, 0.00 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 18 writes, 62 keys, 18 commit groups, 1.0 writes per commit group, ingest: 0.01 MB, 0.00 MB/s
                                                        Interval WAL: 18 writes, 8 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                                         Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 6000.1 total, 4800.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 224.00 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000619888%) FilterBlock(3,0.33 KB,0.000143051%) IndexBlock(3,0.34 KB,0.000149863%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
                                                        
                                                        ** Compaction Stats [m-0] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [m-0] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 6000.1 total, 4800.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 224.00 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000619888%) FilterBlock(3,0.33 KB,0.000143051%) IndexBlock(3,0.34 KB,0.000149863%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [m-0] **
                                                        
                                                        ** Compaction Stats [m-1] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [m-1] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 6000.1 total, 4800.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 224.00 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000619888%) FilterBlock(3,0.33 KB,0.000143051%) IndexBlock(3,0.34 KB,0.000149863%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [m-1] **
                                                        
                                                        ** Compaction Stats [m-2] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [m-2] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 6000.1 total, 4800.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 224.00 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000619888%) FilterBlock(3,0.33 KB,0.000143051%) IndexBlock(3,0.34 KB,0.000149863%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [m-2] **
                                                        
                                                        ** Compaction Stats [p-0] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [p-0] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 6000.1 total, 4800.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 224.00 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000619888%) FilterBlock(3,0.33 KB,0.000143051%) IndexBlock(3,0.34 KB,0.000149863%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [p-0] **
                                                        
                                                        ** Compaction Stats [p-1] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [p-1] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 6000.1 total, 4800.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 224.00 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000619888%) FilterBlock(3,0.33 KB,0.000143051%) IndexBlock(3,0.34 KB,0.000149863%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [p-1] **
                                                        
                                                        ** Compaction Stats [p-2] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [p-2] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 6000.1 total, 4800.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 224.00 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000619888%) FilterBlock(3,0.33 KB,0.000143051%) IndexBlock(3,0.34 KB,0.000149863%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [p-2] **
                                                        
                                                        ** Compaction Stats [O-0] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [O-0] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 6000.1 total, 4800.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427eedd0#2 capacity: 140.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,0.000141689%) FilterBlock(1,0.11 KB,7.62939e-05%) IndexBlock(1,0.14 KB,9.80922e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [O-0] **
                                                        
                                                        ** Compaction Stats [O-1] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [O-1] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 6000.1 total, 4800.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427eedd0#2 capacity: 140.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,0.000141689%) FilterBlock(1,0.11 KB,7.62939e-05%) IndexBlock(1,0.14 KB,9.80922e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [O-1] **
                                                        
                                                        ** Compaction Stats [O-2] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                         Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [O-2] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 6000.1 total, 4800.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427eedd0#2 capacity: 140.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,0.000141689%) FilterBlock(1,0.11 KB,7.62939e-05%) IndexBlock(1,0.14 KB,9.80922e-05%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [O-2] **
                                                        
                                                        ** Compaction Stats [L] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [L] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 6000.1 total, 4800.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 224.00 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000619888%) FilterBlock(3,0.33 KB,0.000143051%) IndexBlock(3,0.34 KB,0.000149863%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [L] **
                                                        
                                                        ** Compaction Stats [P] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                        
                                                        ** Compaction Stats [P] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 6000.1 total, 4800.0 interval
                                                        Flush(GB): cumulative 0.000, interval 0.000
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x55a7427ee430#2 capacity: 224.00 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000619888%) FilterBlock(3,0.33 KB,0.000143051%) IndexBlock(3,0.34 KB,0.000149863%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [P] **
Oct 13 15:17:08 standalone.localdomain podman[473714]: 2025-10-13 15:17:08.826679227 +0000 UTC m=+1.351272160 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 13 15:17:08 standalone.localdomain podman[473752]: 2025-10-13 15:17:08.843775584 +0000 UTC m=+1.080927350 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:17:08 standalone.localdomain podman[473714]: 2025-10-13 15:17:08.868827746 +0000 UTC m=+1.393420629 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:17:08 standalone.localdomain podman[473752]: 2025-10-13 15:17:08.880752702 +0000 UTC m=+1.117904448 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:17:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3078: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:10 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36804 DF PROTO=TCP SPT=54640 DPT=9100 SEQ=2322230278 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB678770000000001030307) 
Oct 13 15:17:10 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:10.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:10 standalone.localdomain ceph-mon[29756]: pgmap v3078: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:17:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a5ee323a293e7f691e7ca7f82fe917f1283619168fc1869822e9fd3927a94077-merged.mount: Deactivated successfully.
Oct 13 15:17:10 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:17:10 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:10 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:11 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:17:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3079: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:17:11 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:11.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:17:12 standalone.localdomain ceph-mon[29756]: pgmap v3079: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:17:12 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:12 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3080: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:13 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:13 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:13 standalone.localdomain python3.9[473922]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:17:13 standalone.localdomain systemd[1]: Started libpod-conmon-6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.scope.
Oct 13 15:17:13 standalone.localdomain podman[473923]: 2025-10-13 15:17:13.675401513 +0000 UTC m=+0.095747407 container exec 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 13 15:17:13 standalone.localdomain podman[473923]: 2025-10-13 15:17:13.703538379 +0000 UTC m=+0.123884283 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a 
package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Oct 13 15:17:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:14 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13284 DF PROTO=TCP SPT=36558 DPT=9105 SEQ=2230199988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB686360000000001030307) 
Oct 13 15:17:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-255b54765638efdff81e9220881dc259a519b83267eec04efe1f3768f49e0df8-merged.mount: Deactivated successfully.
Oct 13 15:17:14 standalone.localdomain ceph-mon[29756]: pgmap v3080: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1d1deaee7c92302295cfa6e4899573241006c9f9f1593a37994b270ad876377d-merged.mount: Deactivated successfully.
Oct 13 15:17:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3081: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:15 standalone.localdomain python3.9[474059]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 15:17:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:15 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:15.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:15 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Check health
Oct 13 15:17:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:17:16 standalone.localdomain ceph-mon[29756]: pgmap v3081: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:17:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:17:16 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:16.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a5ee323a293e7f691e7ca7f82fe917f1283619168fc1869822e9fd3927a94077-merged.mount: Deactivated successfully.
Oct 13 15:17:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:17:17 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:17 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:17 standalone.localdomain systemd[1]: libpod-conmon-6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.scope: Deactivated successfully.
Oct 13 15:17:17 standalone.localdomain podman[474068]: 2025-10-13 15:17:17.051150461 +0000 UTC m=+0.307593527 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:17:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3082: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:17 standalone.localdomain systemd[1]: Started libpod-conmon-6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.scope.
Oct 13 15:17:17 standalone.localdomain podman[474060]: 2025-10-13 15:17:17.121241452 +0000 UTC m=+1.602603900 container exec 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, release=1755695350, io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 13 15:17:17 standalone.localdomain podman[474079]: 2025-10-13 15:17:17.122037417 +0000 UTC m=+0.216104001 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 15:17:17 standalone.localdomain podman[474079]: 2025-10-13 15:17:17.13278516 +0000 UTC m=+0.226851674 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:17:17 standalone.localdomain podman[474068]: 2025-10-13 15:17:17.188265381 +0000 UTC m=+0.444708507 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 15:17:17 standalone.localdomain podman[474060]: 2025-10-13 15:17:17.205007396 +0000 UTC m=+1.686369844 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350)
Oct 13 15:17:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:17:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4143881840' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:17:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:17:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:17:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4143881840' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:17:18 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13425 DF PROTO=TCP SPT=48244 DPT=9882 SEQ=614620069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB698760000000001030307) 
Oct 13 15:17:18 standalone.localdomain ceph-mon[29756]: pgmap v3082: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/4143881840' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:17:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/4143881840' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:17:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:17:18 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:17:18 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:18 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:18 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:17:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3083: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:19 standalone.localdomain python3.9[474234]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:17:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:17:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:17:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:20 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13286 DF PROTO=TCP SPT=36558 DPT=9105 SEQ=2230199988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB69DF60000000001030307) 
Oct 13 15:17:20 standalone.localdomain python3.9[474373]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:20.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:20 standalone.localdomain ceph-mon[29756]: pgmap v3083: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a5ee323a293e7f691e7ca7f82fe917f1283619168fc1869822e9fd3927a94077-merged.mount: Deactivated successfully.
Oct 13 15:17:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-01d7f3190520dadfdb112eff8448e727d7969c79a4854317f6519c23470ffe05-merged.mount: Deactivated successfully.
Oct 13 15:17:21 standalone.localdomain python3.9[474481]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3084: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:21 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:21 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:21 standalone.localdomain systemd[1]: libpod-conmon-6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.scope: Deactivated successfully.
Oct 13 15:17:21 standalone.localdomain podman[474237]: 2025-10-13 15:17:21.124718689 +0000 UTC m=+1.379404646 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, build-date=2025-07-21T16:11:22, container_name=swift_account_server, distribution-scope=public, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', 
'/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:17:21 standalone.localdomain podman[474235]: 2025-10-13 15:17:21.164702544 +0000 UTC m=+1.418537775 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.expose-services=, build-date=2025-07-21T14:56:28, release=1, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, container_name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 15:17:21 standalone.localdomain podman[474236]: 2025-10-13 15:17:21.234233329 +0000 UTC m=+1.488233565 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, io.openshift.expose-services=, description=Red Hat OpenStack 
Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, container_name=swift_container_server, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, distribution-scope=public, vcs-type=git)
Oct 13 15:17:21 standalone.localdomain podman[474235]: 2025-10-13 15:17:21.323726485 +0000 UTC m=+1.577561716 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, vendor=Red 
Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, build-date=2025-07-21T14:56:28, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:17:21 standalone.localdomain podman[474237]: 2025-10-13 15:17:21.370851924 +0000 UTC m=+1.625537891 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=swift_account_server, name=rhosp17/openstack-swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible)
Oct 13 15:17:21 standalone.localdomain podman[474236]: 2025-10-13 15:17:21.441792782 +0000 UTC m=+1.695793048 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, build-date=2025-07-21T15:54:32, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, 
distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., tcib_managed=true, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container)
Oct 13 15:17:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:17:21 standalone.localdomain python3.9[474621]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/root/.ansible/tmp/ansible-tmp-1760368640.5494275-1065-30138610350662/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:21.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:22 standalone.localdomain python3.9[474730]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:22.357 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:17:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:22.358 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:17:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:22.403 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:17:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:22.420 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:17:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:22.420 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:17:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:22.421 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:17:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:22.421 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:17:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:22.421 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:17:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:17:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-255b54765638efdff81e9220881dc259a519b83267eec04efe1f3768f49e0df8-merged.mount: Deactivated successfully.
Oct 13 15:17:22 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:17:22 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:17:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:22 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:17:22 standalone.localdomain ceph-mon[29756]: pgmap v3084: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:17:22 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3091288135' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:17:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:22.921 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:17:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:22.985 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:17:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:22.985 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:17:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:22.986 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:17:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:22.991 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:17:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:22.992 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:17:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:23 standalone.localdomain python3.9[474859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3085: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:23.217 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:17:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:23.220 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=10215MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:17:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:23.220 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:17:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:23.221 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:17:23
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'manila_data', 'manila_metadata', 'images', '.mgr', 'volumes', 'backups']
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:17:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:23.321 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:17:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:23.322 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:17:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:23.322 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:17:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:23.323 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:17:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:23.388 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:17:23 standalone.localdomain python3.9[474916]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27961 DF PROTO=TCP SPT=52648 DPT=9102 SEQ=2420090442 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB6AB7E0000000001030307) 
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:17:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:17:23 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3091288135' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:17:23 standalone.localdomain auditd[726]: Audit daemon rotating log files
Oct 13 15:17:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:17:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1965607480' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:17:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:23.827 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:17:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:23.835 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:17:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:23.857 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:17:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:23.861 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:17:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:23.861 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:17:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:17:24 standalone.localdomain python3.9[475046]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:17:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:24.549 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:17:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:24.550 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:17:24 standalone.localdomain python3.9[475101]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xz_3tvdj recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:24.706 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:17:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:24.707 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:17:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:24.707 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:17:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:17:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:17:24 standalone.localdomain ceph-mon[29756]: pgmap v3085: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:24 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1965607480' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:17:24 standalone.localdomain podman[475136]: 2025-10-13 15:17:24.846442358 +0000 UTC m=+0.107917782 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, vcs-type=git, release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, version=9.6)
Oct 13 15:17:24 standalone.localdomain podman[475136]: 2025-10-13 15:17:24.856953865 +0000 UTC m=+0.118429289 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, container_name=openstack_network_exporter, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container)
Oct 13 15:17:24 standalone.localdomain podman[475135]: 2025-10-13 15:17:24.818913109 +0000 UTC m=+0.080522957 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:17:24 standalone.localdomain podman[475135]: 2025-10-13 15:17:24.901743785 +0000 UTC m=+0.163353643 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, config_id=iscsid)
Oct 13 15:17:24 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a5ee323a293e7f691e7ca7f82fe917f1283619168fc1869822e9fd3927a94077-merged.mount: Deactivated successfully.
Oct 13 15:17:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-072572511f39c4d4ecb97b214b4148a1acd564ad346e67866a611cfc25e867ff-merged.mount: Deactivated successfully.
Oct 13 15:17:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3086: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:25 standalone.localdomain python3.9[475246]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:25.215 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:17:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:25.231 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:17:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:25.232 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:17:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:25.233 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:17:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:25.234 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:17:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:25.234 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:17:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:25.235 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:17:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:25.235 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:17:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:25.236 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:17:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:25.236 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:17:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:25 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:17:25 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:17:25 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25500 DF PROTO=TCP SPT=53938 DPT=9101 SEQ=1569907337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB6B30E0000000001030307) 
Oct 13 15:17:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:25.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:25 standalone.localdomain python3.9[475301]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:26 standalone.localdomain python3.9[475409]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:17:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:17:26 standalone.localdomain ceph-mon[29756]: pgmap v3086: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:26.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:26 standalone.localdomain python3[475518]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 13 15:17:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:17:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3087: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:17:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:17:27 standalone.localdomain python3.9[475626]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-255b54765638efdff81e9220881dc259a519b83267eec04efe1f3768f49e0df8-merged.mount: Deactivated successfully.
Oct 13 15:17:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1d1deaee7c92302295cfa6e4899573241006c9f9f1593a37994b270ad876377d-merged.mount: Deactivated successfully.
Oct 13 15:17:28 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:28 standalone.localdomain python3.9[475681]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:28 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25502 DF PROTO=TCP SPT=53938 DPT=9101 SEQ=1569907337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB6BF360000000001030307) 
Oct 13 15:17:28 standalone.localdomain ceph-mon[29756]: pgmap v3087: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:28 standalone.localdomain python3.9[475789]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:17:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3088: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:29 standalone.localdomain python3.9[475844]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:17:30 standalone.localdomain python3.9[475952]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:17:30 standalone.localdomain podman[475953]: 2025-10-13 15:17:30.045943166 +0000 UTC m=+0.075597868 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:17:30 standalone.localdomain podman[475953]: 2025-10-13 15:17:30.057409512 +0000 UTC m=+0.087064194 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:17:30 standalone.localdomain podman[475953]: unhealthy
Oct 13 15:17:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a5ee323a293e7f691e7ca7f82fe917f1283619168fc1869822e9fd3927a94077-merged.mount: Deactivated successfully.
Oct 13 15:17:30 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:30 standalone.localdomain python3.9[476028]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:30.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:30 standalone.localdomain sshd[476043]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:17:30 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:30 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:30 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:17:30 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:17:30 standalone.localdomain ceph-mon[29756]: pgmap v3088: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3089: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:31 standalone.localdomain python3.9[476138]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:17:31 standalone.localdomain unix_chkpwd[476194]: password check failed for user (root)
Oct 13 15:17:31 standalone.localdomain sshd[476043]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:17:31 standalone.localdomain python3.9[476193]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:31.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:32 standalone.localdomain python3.9[476302]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:17:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:17:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:17:32 standalone.localdomain systemd[1]: tmp-crun.z2wvJR.mount: Deactivated successfully.
Oct 13 15:17:32 standalone.localdomain podman[476305]: 2025-10-13 15:17:32.545357872 +0000 UTC m=+0.101960362 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:17:32 standalone.localdomain podman[476305]: 2025-10-13 15:17:32.561014664 +0000 UTC m=+0.117617154 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:17:32 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25503 DF PROTO=TCP SPT=53938 DPT=9101 SEQ=1569907337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB6CEF70000000001030307) 
Oct 13 15:17:32 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:17:32 standalone.localdomain ceph-mon[29756]: pgmap v3089: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:32 standalone.localdomain python3.9[476409]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/root/.ansible/tmp/ansible-tmp-1760368651.857982-1190-158843772912962/.source.nft follow=False _original_basename=ruleset.j2 checksum=953266ca5f7d82d2777a0a437bd7feceb9259ee8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3090: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ccbbc5cc888099d6bfaecb9e9bb51395da78f6c50564be8b58709cb79be8f753-merged.mount: Deactivated successfully.
Oct 13 15:17:33 standalone.localdomain python3.9[476517]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:17:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:33 standalone.localdomain sshd[476043]: Failed password for root from 193.46.255.244 port 42584 ssh2
Oct 13 15:17:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:34 standalone.localdomain python3.9[476625]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:17:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:17:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:34 standalone.localdomain podman[467099]: time="2025-10-13T15:17:34Z" level=error msg="Getting root fs size for \"1ff325f4886d50de0bf2a0b0441e3d76e5e1486b16dc00bb61c078ed5529b351\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": unmounting layer e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df: replacing mount point \"/var/lib/containers/storage/overlay/e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df/merged\": device or resource busy"
Oct 13 15:17:34 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:34 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:34 standalone.localdomain ceph-mon[29756]: pgmap v3090: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:35 standalone.localdomain python3.9[476736]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                          include "/etc/nftables/edpm-chains.nft"
                                                          include "/etc/nftables/edpm-rules.nft"
                                                          include "/etc/nftables/edpm-jumps.nft"
                                                           path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3091: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:35 standalone.localdomain unix_chkpwd[476808]: password check failed for user (root)
Oct 13 15:17:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:35.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:35 standalone.localdomain python3.9[476845]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:17:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:17:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4bbf92a4c859842b06884bea493d8bbbda1c9f40c81fd52f77228f976bab6cee-merged.mount: Deactivated successfully.
Oct 13 15:17:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4bbf92a4c859842b06884bea493d8bbbda1c9f40c81fd52f77228f976bab6cee-merged.mount: Deactivated successfully.
Oct 13 15:17:36 standalone.localdomain python3.9[476954]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:17:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a5ee323a293e7f691e7ca7f82fe917f1283619168fc1869822e9fd3927a94077-merged.mount: Deactivated successfully.
Oct 13 15:17:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-072572511f39c4d4ecb97b214b4148a1acd564ad346e67866a611cfc25e867ff-merged.mount: Deactivated successfully.
Oct 13 15:17:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:17:36 standalone.localdomain ceph-mon[29756]: pgmap v3091: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:36.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:36 standalone.localdomain python3.9[477064]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:17:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3092: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:37 standalone.localdomain sshd[476043]: Failed password for root from 193.46.255.244 port 42584 ssh2
Oct 13 15:17:37 standalone.localdomain python3.9[477175]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:17:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:17:37 standalone.localdomain sshd[457181]: pam_unix(sshd:session): session closed for user root
Oct 13 15:17:37 standalone.localdomain systemd[1]: session-295.scope: Deactivated successfully.
Oct 13 15:17:37 standalone.localdomain systemd[1]: session-295.scope: Consumed 1min 26.018s CPU time.
Oct 13 15:17:37 standalone.localdomain systemd-logind[45629]: Session 295 logged out. Waiting for processes to exit.
Oct 13 15:17:37 standalone.localdomain systemd-logind[45629]: Removed session 295.
Oct 13 15:17:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:17:38 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:38 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:17:38 standalone.localdomain ceph-mon[29756]: pgmap v3092: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3093: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:39 standalone.localdomain unix_chkpwd[477193]: password check failed for user (root)
Oct 13 15:17:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:39 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:17:39 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:40.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:40 standalone.localdomain ceph-mon[29756]: pgmap v3093: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3094: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:17:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:17:41 standalone.localdomain sshd[476043]: Failed password for root from 193.46.255.244 port 42584 ssh2
Oct 13 15:17:41 standalone.localdomain podman[477195]: 2025-10-13 15:17:41.27338996 +0000 UTC m=+0.081564808 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:17:41 standalone.localdomain podman[477194]: 2025-10-13 15:17:41.288892546 +0000 UTC m=+0.095433686 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:17:41 standalone.localdomain podman[477195]: 2025-10-13 15:17:41.309936381 +0000 UTC m=+0.118111249 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 13 15:17:41 standalone.localdomain podman[477194]: 2025-10-13 15:17:41.32386445 +0000 UTC m=+0.130405580 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:17:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:17:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:41.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:17:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-98ccaca48a76da707f9da9049cdbb0b9663c20b38f1eca84b44474ecd1a130ba-merged.mount: Deactivated successfully.
Oct 13 15:17:42 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:17:42 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:17:42 standalone.localdomain sshd[476043]: Received disconnect from 193.46.255.244 port 42584:11:  [preauth]
Oct 13 15:17:42 standalone.localdomain sshd[476043]: Disconnected from authenticating user root 193.46.255.244 port 42584 [preauth]
Oct 13 15:17:42 standalone.localdomain sshd[476043]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:17:42 standalone.localdomain ceph-mon[29756]: pgmap v3094: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:42 standalone.localdomain sshd[477236]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:17:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:17:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:17:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:17:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:17:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:17:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:17:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:17:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:17:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:17:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:17:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:17:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:17:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3095: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:17:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-91a4eecb381682fec00b796b5c8dfad5758b6624b0e7b9dfc84b2667c778f31e-merged.mount: Deactivated successfully.
Oct 13 15:17:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-91a4eecb381682fec00b796b5c8dfad5758b6624b0e7b9dfc84b2667c778f31e-merged.mount: Deactivated successfully.
Oct 13 15:17:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:17:43 standalone.localdomain unix_chkpwd[477243]: password check failed for user (root)
Oct 13 15:17:43 standalone.localdomain sshd[477236]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:17:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ccbbc5cc888099d6bfaecb9e9bb51395da78f6c50564be8b58709cb79be8f753-merged.mount: Deactivated successfully.
Oct 13 15:17:44 standalone.localdomain sshd[477244]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:17:44 standalone.localdomain sshd[477244]: Accepted publickey for root from 192.168.122.30 port 44172 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:17:44 standalone.localdomain systemd-logind[45629]: New session 296 of user root.
Oct 13 15:17:44 standalone.localdomain systemd[1]: Started Session 296 of User root.
Oct 13 15:17:44 standalone.localdomain sshd[477244]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:17:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:17:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 13 15:17:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:17:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:17:44 standalone.localdomain ceph-mon[29756]: pgmap v3095: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:45 standalone.localdomain python3.9[477355]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:17:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3096: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:45 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:45.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:17:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:45 standalone.localdomain python3.9[477463]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:17:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:45 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:45 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:45 standalone.localdomain sshd[477236]: Failed password for root from 193.46.255.244 port 28884 ssh2
Oct 13 15:17:46 standalone.localdomain python3.9[477571]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/config-data/ansible-generated/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:17:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:17:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-91a4eecb381682fec00b796b5c8dfad5758b6624b0e7b9dfc84b2667c778f31e-merged.mount: Deactivated successfully.
Oct 13 15:17:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:17:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully.
Oct 13 15:17:46 standalone.localdomain ceph-mon[29756]: pgmap v3096: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:46.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:46 standalone.localdomain python3.9[477679]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3097: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:47 standalone.localdomain unix_chkpwd[477729]: password check failed for user (root)
Oct 13 15:17:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:17:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4bbf92a4c859842b06884bea493d8bbbda1c9f40c81fd52f77228f976bab6cee-merged.mount: Deactivated successfully.
Oct 13 15:17:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:17:47 standalone.localdomain python3.9[477766]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368666.340223-48-227792979151970/.source.yaml follow=False _original_basename=neutron_sriov_agent.yaml.j2 checksum=d3942d8476d006ea81540d2a1d96dd9d67f33f5f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:17:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:17:48 standalone.localdomain python3.9[477874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:48 standalone.localdomain python3.9[477960]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368667.8409245-63-13444700712358/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:17:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:48 standalone.localdomain ceph-mon[29756]: pgmap v3097: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3098: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:49 standalone.localdomain sshd[477236]: Failed password for root from 193.46.255.244 port 28884 ssh2
Oct 13 15:17:49 standalone.localdomain python3.9[478068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:17:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:17:49 standalone.localdomain podman[478119]: 2025-10-13 15:17:49.568770523 +0000 UTC m=+0.075022822 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:17:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:17:49 standalone.localdomain podman[478119]: 2025-10-13 15:17:49.579014672 +0000 UTC m=+0.085266971 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Oct 13 15:17:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully.
Oct 13 15:17:49 standalone.localdomain podman[478118]: 2025-10-13 15:17:49.630894474 +0000 UTC m=+0.135778101 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, tcib_managed=true)
Oct 13 15:17:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e7f16864f80a2c037ac48def8b5c01d605b8cc2e9a20cb2523be725cfabe631b-merged.mount: Deactivated successfully.
Oct 13 15:17:49 standalone.localdomain podman[478118]: 2025-10-13 15:17:49.661254429 +0000 UTC m=+0.166138026 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:17:49 standalone.localdomain python3.9[478182]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368668.871139-63-58294608548057/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:17:50 standalone.localdomain python3.9[478295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:50 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:17:50 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:17:50 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:50.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:50 standalone.localdomain python3.9[478381]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368669.8770635-63-46712331798869/.source.conf follow=False _original_basename=neutron-sriov-agent.conf.j2 checksum=d8f3e392b376cde8eaa42047302590ba8bfa72ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:17:50 standalone.localdomain ceph-mon[29756]: pgmap v3098: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:51 standalone.localdomain unix_chkpwd[478399]: password check failed for user (root)
Oct 13 15:17:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3099: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:51 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:17:51 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:51.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:52 standalone.localdomain python3.9[478490]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:52 standalone.localdomain python3.9[478576]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760368671.6117861-121-195740621799115/.source.conf _original_basename=10-neutron-sriov.conf follow=False checksum=809e3daaf14459c644bf02f1580900e4bae59863 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:17:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:17:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:17:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:17:52 standalone.localdomain podman[478632]: 2025-10-13 15:17:52.826201525 +0000 UTC m=+0.087377473 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, vcs-type=git, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, release=1)
Oct 13 15:17:52 standalone.localdomain podman[478633]: 2025-10-13 15:17:52.874744777 +0000 UTC m=+0.134392720 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, tcib_managed=true, name=rhosp17/openstack-swift-container, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, io.buildah.version=1.33.12)
Oct 13 15:17:52 standalone.localdomain ceph-mon[29756]: pgmap v3099: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:52 standalone.localdomain podman[478634]: 2025-10-13 15:17:52.945527459 +0000 UTC m=+0.201530032 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-account, architecture=x86_64, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, version=17.1.9, managed_by=tripleo_ansible, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, vendor=Red Hat, Inc., container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git)
Oct 13 15:17:53 standalone.localdomain podman[478632]: 2025-10-13 15:17:53.016081046 +0000 UTC m=+0.277257014 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-swift-object-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, container_name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, distribution-scope=public, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, release=1)
Oct 13 15:17:53 standalone.localdomain python3.9[478728]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:17:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3100: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:53 standalone.localdomain podman[478633]: 2025-10-13 15:17:53.09490373 +0000 UTC m=+0.354551633 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, name=rhosp17/openstack-swift-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public)
Oct 13 15:17:53 standalone.localdomain podman[478634]: 2025-10-13 15:17:53.110094377 +0000 UTC m=+0.366096960 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, release=1, distribution-scope=public, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, container_name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9)
Oct 13 15:17:53 standalone.localdomain sshd[477236]: Failed password for root from 193.46.255.244 port 28884 ssh2
Oct 13 15:17:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:17:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:17:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:17:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:17:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:17:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:17:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4770 DF PROTO=TCP SPT=40304 DPT=9102 SEQ=3960707225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB720AF0000000001030307) 
Oct 13 15:17:53 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:17:53 standalone.localdomain python3.9[478874]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:17:53 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:17:53 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:17:53 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:17:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:17:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-98ccaca48a76da707f9da9049cdbb0b9663c20b38f1eca84b44474ecd1a130ba-merged.mount: Deactivated successfully.
Oct 13 15:17:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:17:53 standalone.localdomain ceph-mon[29756]: pgmap v3100: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:54 standalone.localdomain python3.9[478982]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:17:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:17:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4771 DF PROTO=TCP SPT=40304 DPT=9102 SEQ=3960707225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB724B60000000001030307) 
Oct 13 15:17:54 standalone.localdomain python3.9[479037]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:17:54 standalone.localdomain sshd[477236]: Received disconnect from 193.46.255.244 port 28884:11:  [preauth]
Oct 13 15:17:54 standalone.localdomain sshd[477236]: Disconnected from authenticating user root 193.46.255.244 port 28884 [preauth]
Oct 13 15:17:54 standalone.localdomain sshd[477236]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:17:54 standalone.localdomain sshd[479075]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:17:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:17:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-91a4eecb381682fec00b796b5c8dfad5758b6624b0e7b9dfc84b2667c778f31e-merged.mount: Deactivated successfully.
Oct 13 15:17:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-91a4eecb381682fec00b796b5c8dfad5758b6624b0e7b9dfc84b2667c778f31e-merged.mount: Deactivated successfully.
Oct 13 15:17:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3101: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:55 standalone.localdomain python3.9[479147]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:55 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:55.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:55 standalone.localdomain unix_chkpwd[479166]: password check failed for user (root)
Oct 13 15:17:55 standalone.localdomain sshd[479075]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:17:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:17:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:17:55 standalone.localdomain systemd[1]: tmp-crun.ZcitXv.mount: Deactivated successfully.
Oct 13 15:17:55 standalone.localdomain podman[479205]: 2025-10-13 15:17:55.859794114 +0000 UTC m=+0.117900943 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_id=edpm, io.openshift.expose-services=)
Oct 13 15:17:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:17:55 standalone.localdomain podman[479205]: 2025-10-13 15:17:55.869764034 +0000 UTC m=+0.127870893 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public)
Oct 13 15:17:55 standalone.localdomain python3.9[479203]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:17:56 standalone.localdomain ceph-mon[29756]: pgmap v3101: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:56 standalone.localdomain python3.9[479341]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:17:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4772 DF PROTO=TCP SPT=40304 DPT=9102 SEQ=3960707225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB72CB60000000001030307) 
Oct 13 15:17:56 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:17:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:17:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ccf4e134c83bd0efc99bc4dcd3651f85181d00d223a72ceb281bbba817d781e1-merged.mount: Deactivated successfully.
Oct 13 15:17:56 standalone.localdomain nova_compute[456855]: 2025-10-13 15:17:56.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:17:57 standalone.localdomain python3.9[479449]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3102: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 13 15:17:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:17:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:17:57 standalone.localdomain python3.9[479504]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:57 standalone.localdomain sshd[479075]: Failed password for root from 193.46.255.244 port 38804 ssh2
Oct 13 15:17:58 standalone.localdomain python3.9[479612]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:17:58 standalone.localdomain ceph-mon[29756]: pgmap v3102: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-91a4eecb381682fec00b796b5c8dfad5758b6624b0e7b9dfc84b2667c778f31e-merged.mount: Deactivated successfully.
Oct 13 15:17:58 standalone.localdomain python3.9[479667]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:17:58 standalone.localdomain podman[479204]: 2025-10-13 15:17:58.608627724 +0000 UTC m=+2.867361922 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct 13 15:17:58 standalone.localdomain podman[479204]: 2025-10-13 15:17:58.619147861 +0000 UTC m=+2.877882069 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 15:17:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:17:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:17:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3103: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:17:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:17:59 standalone.localdomain unix_chkpwd[479750]: password check failed for user (root)
Oct 13 15:17:59 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:17:59 standalone.localdomain python3.9[479782]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:17:59 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:17:59 standalone.localdomain systemd-rc-local-generator[479805]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:17:59 standalone.localdomain systemd-sysv-generator[479809]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:17:59 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:18:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully.
Oct 13 15:18:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:18:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:18:00 standalone.localdomain ceph-mon[29756]: pgmap v3103: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:00 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:00.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4773 DF PROTO=TCP SPT=40304 DPT=9102 SEQ=3960707225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB73C760000000001030307) 
Oct 13 15:18:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:18:00 standalone.localdomain python3.9[479928]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:18:00 standalone.localdomain podman[479929]: 2025-10-13 15:18:00.827670603 +0000 UTC m=+0.091759546 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:18:00 standalone.localdomain podman[479929]: 2025-10-13 15:18:00.837369766 +0000 UTC m=+0.101458739 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:18:00 standalone.localdomain podman[479929]: unhealthy
Oct 13 15:18:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3104: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:01 standalone.localdomain sshd[479075]: Failed password for root from 193.46.255.244 port 38804 ssh2
Oct 13 15:18:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:01 standalone.localdomain python3.9[480005]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:18:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:18:01 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:18:01 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:18:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:01 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:01.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:01 standalone.localdomain python3.9[480113]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:18:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:18:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:02 standalone.localdomain python3.9[480168]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:18:02 standalone.localdomain ceph-mon[29756]: pgmap v3104: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:18:02 standalone.localdomain unix_chkpwd[480277]: password check failed for user (root)
Oct 13 15:18:02 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:02 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:02 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:03 standalone.localdomain podman[480278]: 2025-10-13 15:18:03.047651211 +0000 UTC m=+0.064282367 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:18:03 standalone.localdomain python3.9[480276]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:18:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully.
Oct 13 15:18:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e7f16864f80a2c037ac48def8b5c01d605b8cc2e9a20cb2523be725cfabe631b-merged.mount: Deactivated successfully.
Oct 13 15:18:03 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:18:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3105: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:03 standalone.localdomain podman[480278]: 2025-10-13 15:18:03.091403139 +0000 UTC m=+0.108034285 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Oct 13 15:18:03 standalone.localdomain systemd-sysv-generator[480327]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:18:03 standalone.localdomain systemd-rc-local-generator[480324]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:18:03 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:18:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:03 standalone.localdomain systemd[1]: Starting Create netns directory...
Oct 13 15:18:03 standalone.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 15:18:03 standalone.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 15:18:03 standalone.localdomain systemd[1]: Finished Create netns directory.
Oct 13 15:18:04 standalone.localdomain python3.9[480447]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:18:04 standalone.localdomain ceph-mon[29756]: pgmap v3105: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:04 standalone.localdomain sshd[479075]: Failed password for root from 193.46.255.244 port 38804 ssh2
Oct 13 15:18:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:18:04 standalone.localdomain sshd[479075]: Received disconnect from 193.46.255.244 port 38804:11:  [preauth]
Oct 13 15:18:04 standalone.localdomain sshd[479075]: Disconnected from authenticating user root 193.46.255.244 port 38804 [preauth]
Oct 13 15:18:04 standalone.localdomain sshd[479075]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:18:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:18:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3106: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:05 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:18:05 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:05 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:05 standalone.localdomain python3.9[480555]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_sriov_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:18:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:05 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:05 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:05 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:05.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:05 standalone.localdomain python3.9[480643]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_sriov_agent.json mode=0600 src=/root/.ansible/tmp/ansible-tmp-1760368684.650505-258-98255699404042/.source.json _original_basename=.xd9h56g2 follow=False checksum=a32073fdba4733b9ffe872cfb91708eff83a585a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:18:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:06 standalone.localdomain python3.9[480751]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:18:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:18:06 standalone.localdomain ceph-mon[29756]: pgmap v3106: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:06 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:06.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:18:06.934 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:18:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:18:06.935 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:18:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:18:06.935 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:18:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3107: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:18:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4525e4f58503c299e14f2206981efdbe4549b63528f450366b37f72d8ab6a848-merged.mount: Deactivated successfully.
Oct 13 15:18:08 standalone.localdomain python3.9[481053]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_pattern=*.json debug=False
Oct 13 15:18:08 standalone.localdomain ceph-mon[29756]: pgmap v3107: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:08 standalone.localdomain python3.9[481161]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:18:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3108: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:18:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ccf4e134c83bd0efc99bc4dcd3651f85181d00d223a72ceb281bbba817d781e1-merged.mount: Deactivated successfully.
Oct 13 15:18:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ccf4e134c83bd0efc99bc4dcd3651f85181d00d223a72ceb281bbba817d781e1-merged.mount: Deactivated successfully.
Oct 13 15:18:09 standalone.localdomain python3.9[481269]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 13 15:18:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a5ee323a293e7f691e7ca7f82fe917f1283619168fc1869822e9fd3927a94077-merged.mount: Deactivated successfully.
Oct 13 15:18:10 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:10.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:10 standalone.localdomain ceph-mon[29756]: pgmap v3108: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3109: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.602353) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368691602390, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 854, "num_deletes": 251, "total_data_size": 613486, "memory_usage": 629632, "flush_reason": "Manual Compaction"}
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368691607283, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 601201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73184, "largest_seqno": 74037, "table_properties": {"data_size": 597381, "index_size": 1612, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8592, "raw_average_key_size": 19, "raw_value_size": 589662, "raw_average_value_size": 1313, "num_data_blocks": 73, "num_entries": 449, "num_filter_entries": 449, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760368626, "oldest_key_time": 1760368626, "file_creation_time": 1760368691, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 4992 microseconds, and 2128 cpu microseconds.
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.607327) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 601201 bytes OK
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.607360) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.609902) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.609922) EVENT_LOG_v1 {"time_micros": 1760368691609916, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.609941) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 609225, prev total WAL file size 609225, number of live WAL files 2.
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.610526) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(587KB)], [167(5304KB)]
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368691610730, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 6032856, "oldest_snapshot_seqno": -1}
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 6098 keys, 5005065 bytes, temperature: kUnknown
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368691635160, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 5005065, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4971325, "index_size": 17386, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 161242, "raw_average_key_size": 26, "raw_value_size": 4867126, "raw_average_value_size": 798, "num_data_blocks": 680, "num_entries": 6098, "num_filter_entries": 6098, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760368691, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.635422) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 5005065 bytes
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.637434) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 246.1 rd, 204.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 5.2 +0.0 blob) out(4.8 +0.0 blob), read-write-amplify(18.4) write-amplify(8.3) OK, records in: 6612, records dropped: 514 output_compression: NoCompression
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.637454) EVENT_LOG_v1 {"time_micros": 1760368691637445, "job": 104, "event": "compaction_finished", "compaction_time_micros": 24509, "compaction_time_cpu_micros": 13533, "output_level": 6, "num_output_files": 1, "total_output_size": 5005065, "num_input_records": 6612, "num_output_records": 6098, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368691637749, "job": 104, "event": "table_file_deletion", "file_number": 169}
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368691638314, "job": 104, "event": "table_file_deletion", "file_number": 167}
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.610374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.638438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.638444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.638446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.638448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:18:11 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:18:11.638450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:18:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:11 standalone.localdomain account-server[114563]: 172.20.0.100 - - [13/Oct/2025:15:18:11 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx24cba42adf154691ae27a-0068ed1833" "proxy-server 2" 0.0005 "-" 21 -
Oct 13 15:18:11 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx24cba42adf154691ae27a-0068ed1833)
Oct 13 15:18:11 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx24cba42adf154691ae27a-0068ed1833)
Oct 13 15:18:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:18:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:18:11 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:11.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:18:12 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:12 standalone.localdomain ceph-mon[29756]: pgmap v3109: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:18:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:18:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:18:12 standalone.localdomain podman[481284]: 2025-10-13 15:18:12.829433883 +0000 UTC m=+0.090257710 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:18:12 standalone.localdomain podman[481283]: 2025-10-13 15:18:12.910535756 +0000 UTC m=+0.171815637 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:18:12 standalone.localdomain podman[481283]: 2025-10-13 15:18:12.919817596 +0000 UTC m=+0.181097527 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:18:12 standalone.localdomain podman[481284]: 2025-10-13 15:18:12.935768506 +0000 UTC m=+0.196592343 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct 13 15:18:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:18:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:18:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:18:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:18:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:18:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:18:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:18:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:18:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:18:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:18:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3110: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:18:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:18:14 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:18:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:14 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:18:14 standalone.localdomain ceph-mon[29756]: pgmap v3110: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3111: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:15 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:15.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:18:16 standalone.localdomain ceph-mon[29756]: pgmap v3111: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:16 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:16.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3112: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a5ee323a293e7f691e7ca7f82fe917f1283619168fc1869822e9fd3927a94077-merged.mount: Deactivated successfully.
Oct 13 15:18:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c800d59e331506d01353fe0ada7b93084552dc4f0ee734e61788460834519d95-merged.mount: Deactivated successfully.
Oct 13 15:18:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c800d59e331506d01353fe0ada7b93084552dc4f0ee734e61788460834519d95-merged.mount: Deactivated successfully.
Oct 13 15:18:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:18:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4250658402' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:18:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:18:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4250658402' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:18:18 standalone.localdomain ceph-mon[29756]: pgmap v3112: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/4250658402' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:18:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/4250658402' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:18:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:18:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4525e4f58503c299e14f2206981efdbe4549b63528f450366b37f72d8ab6a848-merged.mount: Deactivated successfully.
Oct 13 15:18:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:18:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3113: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:19.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:18:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:19.091 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 13 15:18:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:19.119 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 13 15:18:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:19.120 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:18:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:19.120 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 13 15:18:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:19.140 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:18:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:20.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:20 standalone.localdomain ceph-mon[29756]: pgmap v3113: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:18:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:18:20 standalone.localdomain podman[481330]: 2025-10-13 15:18:20.792938836 +0000 UTC m=+0.065510096 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 13 15:18:20 standalone.localdomain systemd[1]: tmp-crun.24gcwJ.mount: Deactivated successfully.
Oct 13 15:18:20 standalone.localdomain podman[481331]: 2025-10-13 15:18:20.854511029 +0000 UTC m=+0.122594471 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 15:18:20 standalone.localdomain podman[481331]: 2025-10-13 15:18:20.863964881 +0000 UTC m=+0.132048373 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:18:20 standalone.localdomain podman[481330]: 2025-10-13 15:18:20.87494719 +0000 UTC m=+0.147518470 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:18:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:20.986 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:18:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:18:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:20.992 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:18:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:20.992 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:18:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:20.993 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.033 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3114: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.085 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.086 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 2321920 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.111 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.112 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '69e44d09-24b6-4d4d-a91d-6686d8549f6e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:18:20.993161', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da862862-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': '7b81db664a9cdee1d28add5d1bc4302269e80fbb63ec1f2cbc03bb1c8e7e19e1'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:18:20.993161', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da8642c0-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': 'd283f1b4340bdf8fc4fce28a0869ee4dd1b846f04cff1b0f78a9a6375236b070'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2321920, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:18:20.993161', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'da864c34-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': 'e64edb17b56225d1a7568627272edcf938bf4ab30a19b5496472ccca16d38e69'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:18:20.993161', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da8a3de4-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.275780119, 'message_signature': '5264ed3ec99c88aeaa282a260df5f027c121054f090254e05de009246b54a4bd'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:18:20.993161', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da8a4d16-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.275780119, 'message_signature': 'feaaa6481db6682afcbaa8d471bdd49bfe6e3b1f0b5014f9be5e60e728492b2a'}]}, 'timestamp': '2025-10-13 15:18:21.112812', '_unique_id': '69207016d9ad4753bc514b839938b0aa'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.114 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.115 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.140 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 52.27734375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.155 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '05b21ed5-7a0e-44c8-93ed-6e1018d4cc6c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.27734375, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:18:21.115242', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'da8ea1cc-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.32982709, 'message_signature': 'cfd651e36c20c70b58c8f5741a2ca5ea19589686521e2b62c9c8e305b323066f'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:18:21.115242', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'da90e69e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.344475293, 'message_signature': 'f96565befbbb9f1b355ee923711235e1c65537fff9843b7768a297fe9ef3c951'}]}, 'timestamp': '2025-10-13 15:18:21.156053', '_unique_id': 'abb3d7a6902e4945ad6a681b51da2288'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.156 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.157 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.157 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:18:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:21.157 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.163 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.165 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fc5f75ca-6164-4d57-b4af-4e0dc9a86356', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:18:21.157873', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'da9215a0-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.347093724, 'message_signature': 'f0da4b1555dd2c85acdd766a4ca7f003001428f76b66a6667d4167c21fdc6bc3'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:18:21.157873', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'da926dc0-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.353023287, 'message_signature': '5637584fa4f3e1cb3677a9d763e9c8247c44d332c41129ba8553c3c44bbe7b31'}]}, 'timestamp': '2025-10-13 15:18:21.166053', '_unique_id': '087899659aa44b429a689dd68b97c89f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.166 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.167 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.167 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.191 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.192 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.192 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.203 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.204 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dafbb265-3779-4a5e-bc52-3cd3ad80fc04', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:18:21.167734', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da966a24-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.3569689, 'message_signature': '869c10fe30f02546368e4f7bf32ed90ad5610c0b530b78124600b2441c84f8e2'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:18:21.167734', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da9679d8-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.3569689, 'message_signature': '14fc5994c721f658bd061bbc780b5ef3e382d2464f945f08a6189a20d0a4a44e'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:18:21.167734', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'da9687ac-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.3569689, 'message_signature': '177188ac19185b6ad8c848b1d4443dcfbbcf3ddb4574954d1639befac3203391'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:18:21.167734', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da9839b2-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.382181059, 'message_signature': 'de5dead7f9a1be58129df805b0041ee1254f7b3dbb723acf79c8b7abe0167940'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:18:21.167734', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da98448e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.382181059, 'message_signature': 'f3f57836753afdd7c2153e586981aa4d52b381a0832a3649234f0ef15b3174c0'}]}, 'timestamp': '2025-10-13 15:18:21.204332', '_unique_id': '350df03d0e7b4d6a8f04e62337dc281e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.206 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.206 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.206 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0310c145-7780-4ebb-97bc-b2102481dc28', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:18:21.206332', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'da98a4b0-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.347093724, 'message_signature': '60927abdfe1e128bc9b8e382be1767b7a1ddf8c2840a5710937b1fe5f869af38'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:18:21.206332', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'da98b7ac-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.353023287, 'message_signature': '32f85e2433af030f23df3e735ed85a642eb19ead46e5b10e7e815a6c5676618c'}]}, 'timestamp': '2025-10-13 15:18:21.207379', '_unique_id': '034111f49341418d85a7cf02d77e4fc3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.208 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.209 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.209 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.210 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3536452c-7cf4-48ff-b2f6-4ce0ae2d422e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:18:21.209892', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'da992d9a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.347093724, 'message_signature': '127f9573636b7240136ae617afe44ab1c26a2a0ece992a56119d80b93e4110c6'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:18:21.209892', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'da99437a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.353023287, 'message_signature': '25fac60bc204b067c6e16e15eddd654ea1b63f171482f8dbc6b2643c6053ce78'}]}, 'timestamp': '2025-10-13 15:18:21.210957', '_unique_id': '05f81ce0d4414fde9e9e8a6820f8d3de'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.212 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.213 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.213 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.213 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.214 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.214 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.215 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5444f2c6-6c6a-4055-8666-f90330b61fc7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:18:21.213369', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da99b63e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.3569689, 'message_signature': '0e9fc4571a606ad7b8ce0bc5d2653f973311521e4bff67e53a76c374c73e5d6f'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 
'timestamp': '2025-10-13T15:18:21.213369', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da99c7aa-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.3569689, 'message_signature': '055fcd2dfd8f2f919d009fec8cf3a4fac4114afdfb7bd89643cefd191260244f'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:18:21.213369', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 
'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'da99d934-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.3569689, 'message_signature': '2e593dc543563f2e39c48a8ad66ddf3941d071eb3a2dcc17422706a85abcfb3b'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:18:21.213369', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da99eaa0-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.382181059, 'message_signature': '24aec4cfa6030d66834b9a2947186889478854e48ede48fcdb82f0040c601437'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:18:21.213369', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': 
'8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da99fad6-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.382181059, 'message_signature': '02bd0c84a151b2a3ef87742997a3ada6c22ed31e8215962a4d50c73abb538c10'}]}, 'timestamp': '2025-10-13 15:18:21.215654', '_unique_id': '8f011dd05c804058904a7e6d7ceefe63'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.216 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.218 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.218 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 64 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.218 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 49 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '16e9ec55-4de5-4e93-935f-958dff0d3649', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 64, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:18:21.218359', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'da9a792a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.347093724, 'message_signature': 'fe2b3cd52bff628b8d7547e70182ab7e442e5fc03dce92a5a80ec68e01f748b5'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 49, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:18:21.218359', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'da9a8a5a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.353023287, 'message_signature': '07cbab7e5ceb98a07f520f84377227d66d8a1251a48dac98fd0a885212e22f0f'}]}, 'timestamp': '2025-10-13 15:18:21.219300', '_unique_id': '131c039169c84861bb3f4aa4ed3250cc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.220 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.221 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.221 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 501653653 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.222 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.222 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.222 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.222 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a1d97495-9c23-4e04-bd66-f00a06a9c4b6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 501653653, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:18:21.221700', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da9af9f4-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': '6461cbcd7c141700683698ee6eb63caac47054f543b50c229859cf3d3c983529'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:18:21.221700', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da9b085e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': '0cdc5cc1f37aea67f598ff543ee75f8ca2bdacda542e53580f19a53a38f2a094'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:18:21.221700', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'da9b138a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': '3073502b6bccd4472cdf157bb92a1b2c0f4975ceffc411dd73ce2f41d02bbf5c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:18:21.221700', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da9b1dc6-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.275780119, 'message_signature': '6b0c240806a38c4a4b1c387850292f22ab97a8c2258a8a6e35a23899eb9d591f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:18:21.221700', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da9b27c6-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.275780119, 'message_signature': 'bad24b97ede249953382d7cf1a30ac147b2b1309ecb981ebe0c2da40e6112df1'}]}, 'timestamp': '2025-10-13 15:18:21.223243', '_unique_id': 'e9b6620f9a0948b68e3e13875c8b722b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.223 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.224 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.224 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.225 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '102d2b92-6161-4383-959c-b5f8656cd2ec', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:18:21.224814', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'da9b7078-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.347093724, 'message_signature': 'd2cbfe0b95bcf45e2c4aceb0f04c55191942ac29def971cee29280bb4802bcae'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:18:21.224814', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'da9b7dca-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.353023287, 'message_signature': 'a3e3073f8652a2d21048978b6b1eb0bfe29a5088922076cefd25315bb01ebf29'}]}, 'timestamp': '2025-10-13 15:18:21.225464', '_unique_id': 'd432250343a74a98b7824cf99c6f14f7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.226 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 857967408 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.227 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 155394833 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.227 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 63114260 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.227 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.228 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ade8dd2f-7138-4e33-9d3e-d5f0d353534e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 857967408, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:18:21.226941', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da9bc546-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': 'b33b926f2648ad0de7d4c3a780bab69ecf4b0febe4d5425f7457ed8a2912a967'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 155394833, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:18:21.226941', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da9bd05e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': 'eae870582cf2ac241b42da919fe3e27b3e945b7266d9dd4c48a4e767f3173af6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 63114260, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:18:21.226941', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'da9bdbb2-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': 'fd75a46a759d90557fc83fc0f75b63b8def51f21e225eafa8f1d30e3a8ec0a23'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:18:21.226941', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da9be5e4-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.275780119, 'message_signature': '0247841019d6b66b0e859363c0d736b8fd2a91dda998b423db95e7c232e9eb90'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:18:21.226941', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da9bf124-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.275780119, 'message_signature': '7fba0db08879b06b7cc1876b43786d8bc1a3e3c6a43cfaed94922bd4e9e7452e'}]}, 'timestamp': '2025-10-13 15:18:21.228440', '_unique_id': '5e96768249094cfe96b2a3a15548dc5d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.229 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.230 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 27570000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.230 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 27350000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'db4e2537-3cd0-4a7a-b722-0842e0a1ed75', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27570000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:18:21.230088', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'da9c3e5e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.32982709, 'message_signature': '0e1be70d31457be3f04dbd60f49f29d09ccd591f0d6570fe5b0392087bfe5b95'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27350000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:18:21.230088', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'da9c49ee-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.344475293, 'message_signature': '5c0a47e53be9779ce18e0164301db882f86e330ccc74148cf2cda3161985d0a1'}]}, 'timestamp': '2025-10-13 15:18:21.230676', '_unique_id': '8dc3a8c5b8934965bbdcae3bab93ca54'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.231 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.232 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.232 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.232 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3487235d-f6e9-4527-9298-95a0af77c602', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:18:21.232131', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'da9c8e22-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.347093724, 'message_signature': '23df51c94a45c925ff6082d971ba747de10624c559ad6eecfd60a36aa62886b1'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:18:21.232131', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'da9c9aa2-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.353023287, 'message_signature': '80b66b6d89dfa20cbbf6192eeb8ca6556bc74be96edaa7c4e5fda8d0b8940a95'}]}, 'timestamp': '2025-10-13 15:18:21.232751', '_unique_id': '11791e429aaf4f8eac9895a9d550141e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.233 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.234 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.234 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 495 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.234 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.234 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.235 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.235 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '04031880-866c-4473-9b83-38dadba90601', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 495, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:18:21.234238', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da9ce052-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': '27d142ff4a015de78ca175a047f465f919b0d5f81c8f82fffe67fb8e196f6ef5'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:18:21.234238', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da9cec32-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': 'e332b8bbd22fad22e23b68db199730a028765cd854966f1148d643731e42a125'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:18:21.234238', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'da9cf66e-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': 'bcc17d4acb95f30e445b8b98ae5258296cf77a8a2c0471feb0e02f70e7789ad4'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:18:21.234238', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da9d008c-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.275780119, 'message_signature': '6cf5717a6746ad9d0e40ce880faf5e4b2722f3729233d7ff944807e1c33a2bb5'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:18:21.234238', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da9d0a78-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.275780119, 'message_signature': '2a2b12a756e12efe2f31d88046e664e60d687be88542b9cfa16e75e4392c9343'}]}, 'timestamp': '2025-10-13 15:18:21.235619', '_unique_id': 'ad1031596ca74db09953dbc166a381ee'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.236 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.237 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.237 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 4738 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.237 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 3576 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b8c8a705-02ab-4e60-8573-e716e0addad9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4738, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:18:21.237212', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'da9d55d2-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.347093724, 'message_signature': 'db41938a19377c15e1a0d87ebbe5793ef070bc2cd90bf29209a70e6f787ffe61'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3576, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:18:21.237212', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'da9d6392-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.353023287, 'message_signature': 'ea0e6f004fbfd3c2b1ba3395ac3d1275db64888b49cf17308e7a59e105a6af24'}]}, 'timestamp': '2025-10-13 15:18:21.237897', '_unique_id': '807f47913a854509a6f79f33d3af5072'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.238 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.239 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.239 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 4526 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.239 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '51cf1e07-52bb-4ce8-9e23-a4bbcb91e51d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4526, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:18:21.239478', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'da9daf0a-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.347093724, 'message_signature': '258c191037385988d4fb33d790d4ade50d95c28b37c88cd43b33aff3d9dd842c'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:18:21.239478', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'da9dbb44-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.353023287, 'message_signature': 'f70e017ff6f052562e22b04997e237554342beb2485ed744ddf174624365dad9'}]}, 'timestamp': '2025-10-13 15:18:21.240157', '_unique_id': '8230dd890e074a87a0d8920f482e6d83'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.240 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.241 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.241 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.242 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.242 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.242 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.242 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '900c9aaf-dfbd-43cc-8b3d-761fbc163655', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:18:21.241705', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da9e03f6-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': '0d119b957432ec5c81c90b2a9d37db6ba95d3e6806bc6304879344f4c4ae9d80'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:18:21.241705', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da9e0ebe-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': 'c3fc86593cb8ac0003576cb5676cb55639bef35a52599f52b747ed6d42a66a64'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:18:21.241705', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'da9e18f0-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': 'a7f7a37c5f7cdbb99279dd54bf10e8ffdd5e4c1d25772b8a1af54dac9d9dcfee'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:18:21.241705', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da9e2408-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.275780119, 'message_signature': '146e4c5439c0e42f6b7080297677cd1d0022046302104b665b39425e7d125eb9'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:18:21.241705', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da9e2dfe-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.275780119, 'message_signature': '89e4742d60bb844e9ada911987a1fb6551128e171a58761277a941243c3aa9ba'}]}, 'timestamp': '2025-10-13 15:18:21.243063', '_unique_id': '58e061c021ee4b648a61f26bf8a7cdf9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.243 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.244 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.244 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 55 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.245 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a6a92410-bb96-43cb-8920-73b875a85469', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 55, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:18:21.244796', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'da9e7d36-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.347093724, 'message_signature': '623f31e61efe4bed500fe2a8022e7287b474e3cc41c90a219c170c7390de0e05'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:18:21.244796', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'da9e895c-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.353023287, 'message_signature': 'd01fa8b4d030e77bb69463a266179a73e6c6ed9abf2b74ee919411b1040251bb'}]}, 'timestamp': '2025-10-13 15:18:21.245416', '_unique_id': '66c08414168448a9ad998c47157d9cda'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.246 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.247 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.247 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.247 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.248 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:18:21 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:21 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:21 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ee6c1bf6-a85a-42a5-835b-536d2c71337c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:18:21.246917', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da9ecfca-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.3569689, 'message_signature': 'e06687fc4704db297b9bedfac707dc906f6bcb45321de6db8cb34a7f5b709bd8'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:18:21.246917', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da9ede66-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.3569689, 'message_signature': 'c35208db7277356aaa5ed7991744628a59934d1440c654e663b90a4916f63909'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:18:21.246917', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'da9ee99c-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.3569689, 'message_signature': '4597ac6e2e9e2657ec1169d15b929537f775ed43f685d70b5be65eb302c97646'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:18:21.246917', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'da9ef3b0-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.382181059, 'message_signature': '615d70576710073eb1594799fd364ad2913cfa993632a170e07e950632a3faca'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:18:21.246917', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'da9efd9c-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.382181059, 'message_signature': 'ff21066140853e490b51487670a131f6e6fdcb5ef99b07263479c4171a52774a'}]}, 'timestamp': '2025-10-13 15:18:21.248378', '_unique_id': '9fa38d032a884f979147471cd9726b1b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.249 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.256 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.256 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.256 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.256 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '13d2a651-515e-45b2-a128-43549c501c8a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:18:21.256520', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'daa04878-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.347093724, 'message_signature': '88165b45e014cbd661331d68bc7731abf9532b4f36aef5d8690dd9e2afd54824'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:18:21.256520', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'daa05566-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.353023287, 'message_signature': '3785ad777fef60e3d822fb1c9a26397d43c8d036102418a3821e2c52af1a06a1'}]}, 'timestamp': '2025-10-13 15:18:21.257278', '_unique_id': 'aad159476fa4466f8e21ff970fd70b1b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.258 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.259 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.259 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.259 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.259 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c74212bc-d5a4-45d1-a431-ff711b885822', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:18:21.258882', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'daa0a14c-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': 'a6ee4bd3eef36187b08eee58c0a0cfdd82988a2b20d1df5e2f915e585cf50e13'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:18:21.258882', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'daa0a87c-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': 'bad9c5b583a13f06a5eb80827cb11ec39f640a09b290bf33b7605bb13be739c9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 108, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:18:21.258882', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'daa0af84-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.182371322, 'message_signature': '4515623d2de90b4d74d1b3f36dd862c1d4a42070d85b9e27a54c2f6271a54e40'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:18:21.258882', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'daa0b704-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.275780119, 'message_signature': 'fc94b9b8f5bdbec4fafc91a4bbda41681bb82fdbe8fd08db3a708ba33bf40eb6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:18:21.258882', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'daa0bdee-a847-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8144.275780119, 'message_signature': '103c33e15636da536d08239cbb66995302ee5087810c8d0f5a1705261a3e19e7'}]}, 'timestamp': '2025-10-13 15:18:21.259813', '_unique_id': '7a27559fdee34d94af13a44b0eb00291'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:18:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:18:21.260 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:18:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:18:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a5ee323a293e7f691e7ca7f82fe917f1283619168fc1869822e9fd3927a94077-merged.mount: Deactivated successfully.
Oct 13 15:18:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:21.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:22 standalone.localdomain ceph-mon[29756]: pgmap v3114: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:18:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3115: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.125 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.126 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.126 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.126 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.127 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:18:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:18:23 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:18:23
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'backups', '.mgr', 'volumes', 'vms', 'images', 'manila_data']
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:18:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42180 DF PROTO=TCP SPT=59006 DPT=9102 SEQ=2471687901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB795DE0000000001030307) 
Oct 13 15:18:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:18:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3176595081' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.581 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:18:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:18:23 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3176595081' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.678 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.679 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.679 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.682 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.682 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:18:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:18:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:18:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:18:23 standalone.localdomain podman[481386]: 2025-10-13 15:18:23.821816045 +0000 UTC m=+0.084459322 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, name=rhosp17/openstack-swift-object, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-swift-object-container, version=17.1.9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, container_name=swift_object_server, build-date=2025-07-21T14:56:28)
Oct 13 15:18:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-44e224b1120f02376fec61e07ffc17d3c98868215655f2d726d537b67be2bd79-merged.mount: Deactivated successfully.
Oct 13 15:18:23 standalone.localdomain podman[481388]: 2025-10-13 15:18:23.890086626 +0000 UTC m=+0.143467777 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1, description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, build-date=2025-07-21T16:11:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4)
Oct 13 15:18:23 standalone.localdomain podman[481387]: 2025-10-13 15:18:23.935688736 +0000 UTC m=+0.195637669 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, release=1, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, batch=17.1_20250721.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.982 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.983 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=10144MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.984 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:18:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:23.984 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:18:24 standalone.localdomain podman[481386]: 2025-10-13 15:18:24.020747585 +0000 UTC m=+0.283390812 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, io.buildah.version=1.33.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:18:24 standalone.localdomain podman[481388]: 2025-10-13 15:18:24.105815465 +0000 UTC m=+0.359196576 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, config_id=tripleo_step4, description=Red 
Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, name=rhosp17/openstack-swift-account)
Oct 13 15:18:24 standalone.localdomain podman[481387]: 2025-10-13 15:18:24.154795409 +0000 UTC m=+0.414744342 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, 
summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, com.redhat.component=openstack-swift-container-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 15:18:24 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:18:24 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:18:24 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:24 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:24 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:18:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:24.381 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:18:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:24.382 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:18:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:24.382 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:18:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:24.382 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:18:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42181 DF PROTO=TCP SPT=59006 DPT=9102 SEQ=2471687901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB799F70000000001030307) 
Oct 13 15:18:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:24.627 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Refreshing inventories for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 13 15:18:24 standalone.localdomain ceph-mon[29756]: pgmap v3115: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:24 standalone.localdomain systemd[1]: tmp-crun.BcM3b7.mount: Deactivated successfully.
Oct 13 15:18:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-aac0fb3335817ca424450144c4f202b7593ed66d9078713ba9c436319ea9e1d2-merged.mount: Deactivated successfully.
Oct 13 15:18:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:24.941 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Updating ProviderTree inventory for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 13 15:18:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:24.941 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Updating inventory in ProviderTree for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 13 15:18:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:24.959 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Refreshing aggregate associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 13 15:18:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:24.981 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Refreshing trait associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, traits: COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,HW_CPU_X86_SSE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 13 15:18:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:25 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:25 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:25 standalone.localdomain podman[467099]: time="2025-10-13T15:18:25Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged: invalid argument"
Oct 13 15:18:25 standalone.localdomain podman[467099]: time="2025-10-13T15:18:25Z" level=error msg="Getting root fs size for \"2cccb79f7615a4a0733153a9d0379cd68366ac98e392eb972d213ddff7046a93\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": creating overlay mount to /var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/5PSUIGDAXQERVK6KLHOROUARFH,upperdir=/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/diff,workdir=/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/work,nodev,metacopy=on\": no such file or directory"
Oct 13 15:18:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:25.040 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:18:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3116: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:18:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:18:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3396176253' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:18:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:25.492 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:18:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:25.499 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:18:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:25.516 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:18:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:25.517 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:18:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:25.518 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:18:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:25.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:25 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3396176253' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:18:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-aac0fb3335817ca424450144c4f202b7593ed66d9078713ba9c436319ea9e1d2-merged.mount: Deactivated successfully.
Oct 13 15:18:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ca0a15d821c5264bbbef5ba19d0556f6e72bc0e0289105b378b947c98fd86551-merged.mount: Deactivated successfully.
Oct 13 15:18:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:26.519 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:18:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:26.519 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:18:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:26.519 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:18:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:18:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42182 DF PROTO=TCP SPT=59006 DPT=9102 SEQ=2471687901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB7A1F60000000001030307) 
Oct 13 15:18:26 standalone.localdomain ceph-mon[29756]: pgmap v3116: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:18:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:26.811 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:18:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:26.812 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:18:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:26.812 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:18:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:26.812 2 DEBUG nova.objects.instance [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:18:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a5ee323a293e7f691e7ca7f82fe917f1283619168fc1869822e9fd3927a94077-merged.mount: Deactivated successfully.
Oct 13 15:18:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:18:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c800d59e331506d01353fe0ada7b93084552dc4f0ee734e61788460834519d95-merged.mount: Deactivated successfully.
Oct 13 15:18:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:26.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:26 standalone.localdomain systemd[1]: tmp-crun.pwoadz.mount: Deactivated successfully.
Oct 13 15:18:26 standalone.localdomain podman[481492]: 2025-10-13 15:18:26.959788497 +0000 UTC m=+0.094856903 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, 
vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 13 15:18:26 standalone.localdomain podman[481492]: 2025-10-13 15:18:26.995079999 +0000 UTC m=+0.130148385 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, config_id=edpm, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal)
Oct 13 15:18:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3117: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:18:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:27.162 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:18:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:27.212 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:18:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:27.212 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:18:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:27.212 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:18:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:27.213 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:18:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:27.213 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:18:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:27.213 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:18:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:27.213 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:18:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:27.214 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:18:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:27.214 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:18:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:18:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:18:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:18:28 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:18:28 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:28 standalone.localdomain ceph-mon[29756]: pgmap v3117: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:18:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:18:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3118: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:18:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:18:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:29 standalone.localdomain systemd[1]: tmp-crun.U1hnYj.mount: Deactivated successfully.
Oct 13 15:18:29 standalone.localdomain podman[481512]: 2025-10-13 15:18:29.842396205 +0000 UTC m=+0.113879390 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:18:29 standalone.localdomain podman[481512]: 2025-10-13 15:18:29.851791527 +0000 UTC m=+0.123274682 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:18:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:18:30 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:30 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:18:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:30.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42183 DF PROTO=TCP SPT=59006 DPT=9102 SEQ=2471687901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB7B1B70000000001030307) 
Oct 13 15:18:30 standalone.localdomain ceph-mon[29756]: pgmap v3118: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:18:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:18:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:31 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:31 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3119: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:18:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:18:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:18:31 standalone.localdomain podman[481529]: 2025-10-13 15:18:31.812474516 +0000 UTC m=+0.084682329 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:18:31 standalone.localdomain podman[481529]: 2025-10-13 15:18:31.819844583 +0000 UTC m=+0.092052376 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:18:31 standalone.localdomain podman[481529]: unhealthy
Oct 13 15:18:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:31.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:32 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:32 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:32 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:18:32 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:18:32 standalone.localdomain ceph-mon[29756]: pgmap v3119: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:18:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:18:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-44e224b1120f02376fec61e07ffc17d3c98868215655f2d726d537b67be2bd79-merged.mount: Deactivated successfully.
Oct 13 15:18:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3120: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:18:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-aac0fb3335817ca424450144c4f202b7593ed66d9078713ba9c436319ea9e1d2-merged.mount: Deactivated successfully.
Oct 13 15:18:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:34 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:34 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:34 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:34 standalone.localdomain ceph-mon[29756]: pgmap v3120: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:18:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:18:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1d229f95c042aa50f015ecf04790d97311b8723b668a1b17e0a931684b436bc8-merged.mount: Deactivated successfully.
Oct 13 15:18:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3121: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:18:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:35.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-aac0fb3335817ca424450144c4f202b7593ed66d9078713ba9c436319ea9e1d2-merged.mount: Deactivated successfully.
Oct 13 15:18:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:18:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ca0a15d821c5264bbbef5ba19d0556f6e72bc0e0289105b378b947c98fd86551-merged.mount: Deactivated successfully.
Oct 13 15:18:35 standalone.localdomain podman[481552]: 2025-10-13 15:18:35.790107403 +0000 UTC m=+0.090120867 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:18:35 standalone.localdomain podman[481552]: 2025-10-13 15:18:35.818952235 +0000 UTC m=+0.118965729 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:18:35 standalone.localdomain systemd[1]: tmp-crun.JvpAr2.mount: Deactivated successfully.
Oct 13 15:18:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:18:36 standalone.localdomain ceph-mon[29756]: pgmap v3121: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:18:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:36.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:18:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3122: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:18:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:18:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:18:37 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:18:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:18:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:18:38 standalone.localdomain ceph-mon[29756]: pgmap v3122: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:18:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:18:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3123: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:18:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:40 standalone.localdomain podman[467099]: time="2025-10-13T15:18:40Z" level=error msg="Getting root fs size for \"2f21788ca9e8bc9b7b15f5240d61deed89691ca825526b0f4d0e5b27742d7038\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": unmounting layer 19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8: replacing mount point \"/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged\": device or resource busy"
Oct 13 15:18:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:40.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:40 standalone.localdomain ceph-mon[29756]: pgmap v3123: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3124: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:18:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:41.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:18:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1d229f95c042aa50f015ecf04790d97311b8723b668a1b17e0a931684b436bc8-merged.mount: Deactivated successfully.
Oct 13 15:18:42 standalone.localdomain ceph-mon[29756]: pgmap v3124: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:18:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:18:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:18:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:18:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:18:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:18:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:18:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:18:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:18:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:18:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:18:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:18:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3125: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:18:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-300d4c0e2feb52eaa1178d138869677b040789f75a3bb2c582bfc2b3fc939b12-merged.mount: Deactivated successfully.
Oct 13 15:18:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:18:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:18:44 standalone.localdomain podman[481571]: 2025-10-13 15:18:44.559737203 +0000 UTC m=+0.069815249 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:18:44 standalone.localdomain podman[481571]: 2025-10-13 15:18:44.598684918 +0000 UTC m=+0.108762954 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:18:44 standalone.localdomain podman[481572]: 2025-10-13 15:18:44.611574105 +0000 UTC m=+0.119603277 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 13 15:18:44 standalone.localdomain podman[481572]: 2025-10-13 15:18:44.659744145 +0000 UTC m=+0.167773327 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct 13 15:18:44 standalone.localdomain ceph-mon[29756]: pgmap v3125: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3126: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:45 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:45.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-80fde2f6bf3f0c404df35c922c103172ab0fc1ada4c94e004c2e5ffef06cfe27-merged.mount: Deactivated successfully.
Oct 13 15:18:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:18:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:18:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:18:46 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:18:46 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:18:46 standalone.localdomain ceph-mon[29756]: pgmap v3126: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:46.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3127: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:18:48 standalone.localdomain ceph-mon[29756]: pgmap v3127: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3128: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:50 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:50.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:18:50 standalone.localdomain ceph-mon[29756]: pgmap v3128: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3129: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:18:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:18:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:18:51 standalone.localdomain podman[481618]: 2025-10-13 15:18:51.575663332 +0000 UTC m=+0.087404933 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 15:18:51 standalone.localdomain podman[481618]: 2025-10-13 15:18:51.608444535 +0000 UTC m=+0.120186166 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:18:51 standalone.localdomain podman[481619]: 2025-10-13 15:18:51.621617333 +0000 UTC m=+0.129881977 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:18:51 standalone.localdomain podman[481619]: 2025-10-13 15:18:51.632829649 +0000 UTC m=+0.141094253 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:18:51 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:51.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-80fde2f6bf3f0c404df35c922c103172ab0fc1ada4c94e004c2e5ffef06cfe27-merged.mount: Deactivated successfully.
Oct 13 15:18:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-9405f1b2f2f7c524365dd7da82f79d342db2340e35c55f8c8aa59d7a473d1f4f-merged.mount: Deactivated successfully.
Oct 13 15:18:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-9405f1b2f2f7c524365dd7da82f79d342db2340e35c55f8c8aa59d7a473d1f4f-merged.mount: Deactivated successfully.
Oct 13 15:18:52 standalone.localdomain ceph-mon[29756]: pgmap v3129: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:18:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3130: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-300d4c0e2feb52eaa1178d138869677b040789f75a3bb2c582bfc2b3fc939b12-merged.mount: Deactivated successfully.
Oct 13 15:18:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:18:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:18:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:18:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:18:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:18:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:18:53 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:18:53 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:18:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=301 DF PROTO=TCP SPT=53354 DPT=9102 SEQ=3608122411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB80B0F0000000001030307) 
Oct 13 15:18:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=302 DF PROTO=TCP SPT=53354 DPT=9102 SEQ=3608122411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB80F370000000001030307) 
Oct 13 15:18:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:18:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:18:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:18:54 standalone.localdomain systemd[1]: tmp-crun.l05KZo.mount: Deactivated successfully.
Oct 13 15:18:54 standalone.localdomain ceph-mon[29756]: pgmap v3130: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:54 standalone.localdomain podman[481667]: 2025-10-13 15:18:54.857650436 +0000 UTC m=+0.115879094 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, name=rhosp17/openstack-swift-container, version=17.1.9, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, build-date=2025-07-21T15:54:32)
Oct 13 15:18:54 standalone.localdomain podman[481668]: 2025-10-13 15:18:54.871624178 +0000 UTC m=+0.131215627 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, architecture=x86_64, container_name=swift_account_server, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:18:54 standalone.localdomain podman[481666]: 2025-10-13 15:18:54.820179297 +0000 UTC m=+0.086626278 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, build-date=2025-07-21T14:56:28, release=1, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d)
Oct 13 15:18:55 standalone.localdomain podman[481666]: 2025-10-13 15:18:55.015720842 +0000 UTC m=+0.282167833 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, release=1, batch=17.1_20250721.1)
Oct 13 15:18:55 standalone.localdomain podman[481667]: 2025-10-13 15:18:55.040794007 +0000 UTC m=+0.299022685 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, name=rhosp17/openstack-swift-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, 
version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:18:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3131: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:55 standalone.localdomain podman[481668]: 2025-10-13 15:18:55.102942738 +0000 UTC m=+0.362534187 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, tcib_managed=true, version=17.1.9, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, container_name=swift_account_server, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=1, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:18:55 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:55.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:55 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:18:55 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:18:55 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:18:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-80fde2f6bf3f0c404df35c922c103172ab0fc1ada4c94e004c2e5ffef06cfe27-merged.mount: Deactivated successfully.
Oct 13 15:18:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:18:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:18:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=303 DF PROTO=TCP SPT=53354 DPT=9102 SEQ=3608122411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB817370000000001030307) 
Oct 13 15:18:56 standalone.localdomain ceph-mon[29756]: pgmap v3131: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:56 standalone.localdomain nova_compute[456855]: 2025-10-13 15:18:56.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:18:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3132: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:18:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:18:58 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:58 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:18:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:18:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:18:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:18:58 standalone.localdomain systemd[1]: tmp-crun.OA73NO.mount: Deactivated successfully.
Oct 13 15:18:58 standalone.localdomain ceph-mon[29756]: pgmap v3132: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:58 standalone.localdomain podman[481746]: 2025-10-13 15:18:58.854593601 +0000 UTC m=+0.121431885 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6)
Oct 13 15:18:58 standalone.localdomain podman[481746]: 2025-10-13 15:18:58.868870652 +0000 UTC m=+0.135708966 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 13 15:18:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3133: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:18:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:18:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:19:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:19:00 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:00 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:00 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:19:00 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:00.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=304 DF PROTO=TCP SPT=53354 DPT=9102 SEQ=3608122411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB826F60000000001030307) 
Oct 13 15:19:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:19:00 standalone.localdomain podman[481766]: 2025-10-13 15:19:00.784761527 +0000 UTC m=+0.054330611 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid)
Oct 13 15:19:00 standalone.localdomain podman[481766]: 2025-10-13 15:19:00.799860104 +0000 UTC m=+0.069429198 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 15:19:00 standalone.localdomain ceph-mon[29756]: pgmap v3133: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:00 standalone.localdomain podman[467099]: time="2025-10-13T15:19:00Z" level=error msg="Getting root fs size for \"37aeb184e146653221fb9502ce231e52f2dea8d0233db149e0cb01daa6a343aa\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": unmounting layer 19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8: replacing mount point \"/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged\": device or resource busy"
Oct 13 15:19:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3134: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:01 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:19:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:19:01 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:01.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:19:02 standalone.localdomain systemd[1]: tmp-crun.h8FBi9.mount: Deactivated successfully.
Oct 13 15:19:02 standalone.localdomain podman[481786]: 2025-10-13 15:19:02.847404958 +0000 UTC m=+0.098172866 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:19:02 standalone.localdomain podman[481786]: 2025-10-13 15:19:02.856776907 +0000 UTC m=+0.107544825 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:19:02 standalone.localdomain podman[481786]: unhealthy
Oct 13 15:19:02 standalone.localdomain ceph-mon[29756]: pgmap v3134: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3135: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-80fde2f6bf3f0c404df35c922c103172ab0fc1ada4c94e004c2e5ffef06cfe27-merged.mount: Deactivated successfully.
Oct 13 15:19:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-9405f1b2f2f7c524365dd7da82f79d342db2340e35c55f8c8aa59d7a473d1f4f-merged.mount: Deactivated successfully.
Oct 13 15:19:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:19:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1725e6410ebd724af9a156d33baa34cd7b609f52e9eac888ddfe9ef0e5981ec4-merged.mount: Deactivated successfully.
Oct 13 15:19:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1725e6410ebd724af9a156d33baa34cd7b609f52e9eac888ddfe9ef0e5981ec4-merged.mount: Deactivated successfully.
Oct 13 15:19:04 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:19:04 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:19:04 standalone.localdomain ceph-mon[29756]: pgmap v3135: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3136: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:05 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:05.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:19:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:19:06 standalone.localdomain ceph-mon[29756]: pgmap v3136: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:19:06.935 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:19:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:19:06.935 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:19:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:19:06.936 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:19:06 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:06.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:19:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3137: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:19:07 standalone.localdomain podman[481808]: 2025-10-13 15:19:07.819639523 +0000 UTC m=+0.083067620 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute)
Oct 13 15:19:07 standalone.localdomain podman[481808]: 2025-10-13 15:19:07.853891581 +0000 UTC m=+0.117319668 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:19:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4170fc6e6613bb09a7fdf5d4d8409fa3aa7c9fd1d3bf4d0389a122bf4a320973-merged.mount: Deactivated successfully.
Oct 13 15:19:08 standalone.localdomain ceph-mon[29756]: pgmap v3137: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3138: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:19:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:19:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:19:09 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:09 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:09 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:19:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:19:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:10 standalone.localdomain podman[467099]: time="2025-10-13T15:19:10Z" level=error msg="Getting root fs size for \"3b9ee7dc83a27e63189c1fdb52d39050081ae38b49a5fb112427f00ad322f26c\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": unmounting layer d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610: replacing mount point \"/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged\": device or resource busy"
Oct 13 15:19:10 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:10.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:10 standalone.localdomain ceph-mon[29756]: pgmap v3138: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3139: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:19:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:19:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:19:11 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:11.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:12 standalone.localdomain ceph-mon[29756]: pgmap v3139: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:19:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:19:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:19:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:19:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:19:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:19:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:19:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:19:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:19:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:19:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:19:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:19:13 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:13 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3140: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4170fc6e6613bb09a7fdf5d4d8409fa3aa7c9fd1d3bf4d0389a122bf4a320973-merged.mount: Deactivated successfully.
Oct 13 15:19:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-57a101a7ef6ab2b3f3e0eb8648411f7f09d848d387bab284c11bac1038a2c94b-merged.mount: Deactivated successfully.
Oct 13 15:19:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:13 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:13 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fc459c24a9ec3e1f3f5e0147b0f938d1986a03e01d614e68963d260ce193ad26-merged.mount: Deactivated successfully.
Oct 13 15:19:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:14 standalone.localdomain ceph-mon[29756]: pgmap v3140: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3141: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:15 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:15.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fc459c24a9ec3e1f3f5e0147b0f938d1986a03e01d614e68963d260ce193ad26-merged.mount: Deactivated successfully.
Oct 13 15:19:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-43593073e6f183b06ef562b34ef0cb60361ac1108d165f04c57da6bff77c59d1-merged.mount: Deactivated successfully.
Oct 13 15:19:15 standalone.localdomain ceph-mon[29756]: pgmap v3141: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:19:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1725e6410ebd724af9a156d33baa34cd7b609f52e9eac888ddfe9ef0e5981ec4-merged.mount: Deactivated successfully.
Oct 13 15:19:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:19:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:19:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:19:16 standalone.localdomain podman[481826]: 2025-10-13 15:19:16.809208392 +0000 UTC m=+0.079854690 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:19:16 standalone.localdomain podman[481825]: 2025-10-13 15:19:16.861407584 +0000 UTC m=+0.131107453 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:19:16 standalone.localdomain podman[481826]: 2025-10-13 15:19:16.868831384 +0000 UTC m=+0.139477682 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 15:19:16 standalone.localdomain podman[481825]: 2025-10-13 15:19:16.919325305 +0000 UTC m=+0.189025184 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:19:16 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:16.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3142: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:19:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:19:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:19:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:19:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4170fc6e6613bb09a7fdf5d4d8409fa3aa7c9fd1d3bf4d0389a122bf4a320973-merged.mount: Deactivated successfully.
Oct 13 15:19:18 standalone.localdomain ceph-mon[29756]: pgmap v3142: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:19:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1092553938' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:19:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:19:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1092553938' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:19:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4170fc6e6613bb09a7fdf5d4d8409fa3aa7c9fd1d3bf4d0389a122bf4a320973-merged.mount: Deactivated successfully.
Oct 13 15:19:18 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:19:18 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:19:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3143: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1092553938' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:19:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1092553938' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:19:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:19:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:19:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:19:20 standalone.localdomain ceph-mon[29756]: pgmap v3143: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:20.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:19:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3144: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:19:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:21.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:22 standalone.localdomain ceph-mon[29756]: pgmap v3144: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3145: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.112 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.131 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.131 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.132 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.132 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.133 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:19:23
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', '.mgr', 'manila_metadata', 'vms', 'manila_data', 'backups', 'images']
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:19:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37674 DF PROTO=TCP SPT=43238 DPT=9102 SEQ=423089958 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB8803F0000000001030307) 
Oct 13 15:19:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:19:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3187601963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.632 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:19:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.723 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.724 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.724 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:19:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:19:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:19:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:19:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b48f191c76ffab71d400aed04746f4c46427f833446e2083cf65ea27103edc34-merged.mount: Deactivated successfully.
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.740 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.740 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:19:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b48f191c76ffab71d400aed04746f4c46427f833446e2083cf65ea27103edc34-merged.mount: Deactivated successfully.
Oct 13 15:19:23 standalone.localdomain podman[481897]: 2025-10-13 15:19:23.838118009 +0000 UTC m=+0.099739192 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 13 15:19:23 standalone.localdomain podman[481897]: 2025-10-13 15:19:23.848335363 +0000 UTC m=+0.109956536 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009)
Oct 13 15:19:23 standalone.localdomain podman[481896]: 2025-10-13 15:19:23.938650405 +0000 UTC m=+0.199729791 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.966 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.967 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=10069MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.967 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:19:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:23.968 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:19:23 standalone.localdomain podman[481896]: 2025-10-13 15:19:23.969103197 +0000 UTC m=+0.230182593 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 13 15:19:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:24.054 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:19:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:24.055 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:19:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:24.055 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:19:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:24.055 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:19:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:24.116 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:19:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:19:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/672262978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:19:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37675 DF PROTO=TCP SPT=43238 DPT=9102 SEQ=423089958 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB884360000000001030307) 
Oct 13 15:19:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:24.573 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:19:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:24.578 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:19:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:24.598 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:19:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:24.601 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:19:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:24.601 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:19:24 standalone.localdomain ceph-mon[29756]: pgmap v3145: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:24 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3187601963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:19:24 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/672262978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:19:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:19:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4170fc6e6613bb09a7fdf5d4d8409fa3aa7c9fd1d3bf4d0389a122bf4a320973-merged.mount: Deactivated successfully.
Oct 13 15:19:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3146: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1a6a4ee30a57a3d453735aee084adc7708560542e5e91aea887f02a6c49125a0-merged.mount: Deactivated successfully.
Oct 13 15:19:25 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:19:25 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:19:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:25.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-57a101a7ef6ab2b3f3e0eb8648411f7f09d848d387bab284c11bac1038a2c94b-merged.mount: Deactivated successfully.
Oct 13 15:19:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:19:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:19:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:19:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fc459c24a9ec3e1f3f5e0147b0f938d1986a03e01d614e68963d260ce193ad26-merged.mount: Deactivated successfully.
Oct 13 15:19:26 standalone.localdomain podman[481950]: 2025-10-13 15:19:26.076415822 +0000 UTC m=+0.069539447 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, managed_by=tripleo_ansible, release=1, container_name=swift_object_server, architecture=x86_64, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1)
Oct 13 15:19:26 standalone.localdomain podman[481951]: 2025-10-13 15:19:26.111570002 +0000 UTC m=+0.090249331 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true)
Oct 13 15:19:26 standalone.localdomain podman[481952]: 2025-10-13 15:19:26.161237202 +0000 UTC m=+0.134616274 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, container_name=swift_account_server, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, version=17.1.9, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 15:19:26 standalone.localdomain podman[481950]: 2025-10-13 15:19:26.228172436 +0000 UTC m=+0.221296111 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.openshift.expose-services=, container_name=swift_object_server, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, name=rhosp17/openstack-swift-object, vcs-type=git, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, release=1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12)
Oct 13 15:19:26 standalone.localdomain podman[481951]: 2025-10-13 15:19:26.283660699 +0000 UTC m=+0.262340048 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 swift-container, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, container_name=swift_container_server, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:19:26 standalone.localdomain podman[481952]: 2025-10-13 15:19:26.335820096 +0000 UTC m=+0.309199168 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, container_name=swift_account_server, name=rhosp17/openstack-swift-account, architecture=x86_64, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Oct 13 15:19:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:19:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:26.579 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:19:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:26.579 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:19:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:26.580 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:19:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:26.580 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:19:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:26.580 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:19:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37676 DF PROTO=TCP SPT=43238 DPT=9102 SEQ=423089958 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB88C360000000001030307) 
Oct 13 15:19:26 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:26 standalone.localdomain ceph-mon[29756]: pgmap v3146: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fc459c24a9ec3e1f3f5e0147b0f938d1986a03e01d614e68963d260ce193ad26-merged.mount: Deactivated successfully.
Oct 13 15:19:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:19:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:26 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:19:26 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:26 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:26 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:19:26 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:19:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:26.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:27.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:19:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:27.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:19:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3147: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fc459c24a9ec3e1f3f5e0147b0f938d1986a03e01d614e68963d260ce193ad26-merged.mount: Deactivated successfully.
Oct 13 15:19:28 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:28 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:28 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:28.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:19:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:28.091 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:19:28 standalone.localdomain ceph-mon[29756]: pgmap v3147: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-43593073e6f183b06ef562b34ef0cb60361ac1108d165f04c57da6bff77c59d1-merged.mount: Deactivated successfully.
Oct 13 15:19:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:28.854 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:19:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:28.854 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:19:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:28.854 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:19:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3148: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:29.223 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:19:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:29.250 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:19:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:29.251 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:19:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:19:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:19:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:19:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:19:30 standalone.localdomain podman[482033]: 2025-10-13 15:19:30.432701638 +0000 UTC m=+0.074360140 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, version=9.6, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 13 15:19:30 standalone.localdomain podman[482033]: 2025-10-13 15:19:30.477630488 +0000 UTC m=+0.119288930 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, version=9.6, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal)
Oct 13 15:19:30 standalone.localdomain ceph-mon[29756]: pgmap v3148: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37677 DF PROTO=TCP SPT=43238 DPT=9102 SEQ=423089958 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB89BF60000000001030307) 
Oct 13 15:19:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:30.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1a6a4ee30a57a3d453735aee084adc7708560542e5e91aea887f02a6c49125a0-merged.mount: Deactivated successfully.
Oct 13 15:19:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3149: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:19:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:19:31 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:19:31 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:31 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:19:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:19:31 standalone.localdomain podman[482052]: 2025-10-13 15:19:31.799206314 +0000 UTC m=+0.064549630 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:19:31 standalone.localdomain podman[482052]: 2025-10-13 15:19:31.839868308 +0000 UTC m=+0.105211594 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 15:19:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:31.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:32 standalone.localdomain ceph-mon[29756]: pgmap v3149: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:19:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:19:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:19:32 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:19:32 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:32 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3150: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:34 standalone.localdomain ceph-mon[29756]: pgmap v3150: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:19:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:19:34 standalone.localdomain podman[482071]: 2025-10-13 15:19:34.881909829 +0000 UTC m=+0.112838165 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:19:34 standalone.localdomain podman[482071]: 2025-10-13 15:19:34.892708891 +0000 UTC m=+0.123637237 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:19:34 standalone.localdomain podman[482071]: unhealthy
Oct 13 15:19:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3151: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:35 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:19:35 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:19:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:35.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:35 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:35 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:19:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b48f191c76ffab71d400aed04746f4c46427f833446e2083cf65ea27103edc34-merged.mount: Deactivated successfully.
Oct 13 15:19:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:19:36 standalone.localdomain ceph-mon[29756]: pgmap v3151: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:19:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:36.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3152: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1a6a4ee30a57a3d453735aee084adc7708560542e5e91aea887f02a6c49125a0-merged.mount: Deactivated successfully.
Oct 13 15:19:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:19:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:19:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-afea50fcd80de40189eae447b9f1e1ce11d3f127805d4e44f9b8663c38c63435-merged.mount: Deactivated successfully.
Oct 13 15:19:38 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:38 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:38 standalone.localdomain ceph-mon[29756]: pgmap v3152: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3153: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:19:39 standalone.localdomain podman[482093]: 2025-10-13 15:19:39.664962646 +0000 UTC m=+0.082907110 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 13 15:19:39 standalone.localdomain podman[482093]: 2025-10-13 15:19:39.674710804 +0000 UTC m=+0.092655288 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:19:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:19:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-35ae3a4ac122a8618eb2a766bf02e775e404790502be51124b9c0c86bae7b901-merged.mount: Deactivated successfully.
Oct 13 15:19:39 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:40 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:19:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:40.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:40 standalone.localdomain ceph-mon[29756]: pgmap v3153: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3154: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:19:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:19:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:19:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1a6a4ee30a57a3d453735aee084adc7708560542e5e91aea887f02a6c49125a0-merged.mount: Deactivated successfully.
Oct 13 15:19:41 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:41 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:41.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:42 standalone.localdomain ceph-mon[29756]: pgmap v3154: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:19:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:19:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:19:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:19:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:19:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:19:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:19:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:19:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:19:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:19:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:19:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:19:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3155: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:43 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:43 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:43 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:19:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:44 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:19:44 standalone.localdomain ceph-mon[29756]: pgmap v3155: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:19:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3156: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-35ae3a4ac122a8618eb2a766bf02e775e404790502be51124b9c0c86bae7b901-merged.mount: Deactivated successfully.
Oct 13 15:19:45 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:45.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:19:46 standalone.localdomain ceph-mon[29756]: pgmap v3156: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:19:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:19:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:19:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:46.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3157: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:19:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:19:48 standalone.localdomain ceph-mon[29756]: pgmap v3157: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-afea50fcd80de40189eae447b9f1e1ce11d3f127805d4e44f9b8663c38c63435-merged.mount: Deactivated successfully.
Oct 13 15:19:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:19:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:19:48 standalone.localdomain systemd[1]: tmp-crun.Gzawsy.mount: Deactivated successfully.
Oct 13 15:19:48 standalone.localdomain podman[482113]: 2025-10-13 15:19:48.927213352 +0000 UTC m=+0.086139573 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Oct 13 15:19:48 standalone.localdomain podman[482112]: 2025-10-13 15:19:48.962349191 +0000 UTC m=+0.123163702 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:19:48 standalone.localdomain podman[482112]: 2025-10-13 15:19:48.970797348 +0000 UTC m=+0.131611909 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:19:48 standalone.localdomain podman[482113]: 2025-10-13 15:19:48.981921959 +0000 UTC m=+0.140848180 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 15:19:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3158: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:49 standalone.localdomain systemd[1]: tmp-crun.O4YLVK.mount: Deactivated successfully.
Oct 13 15:19:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:19:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-35ae3a4ac122a8618eb2a766bf02e775e404790502be51124b9c0c86bae7b901-merged.mount: Deactivated successfully.
Oct 13 15:19:49 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:19:49 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:19:50 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:50.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:50 standalone.localdomain ceph-mon[29756]: pgmap v3158: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3159: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:19:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:19:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:19:52 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:52.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:52 standalone.localdomain ceph-mon[29756]: pgmap v3159: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3160: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:19:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:19:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:19:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:19:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:19:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:19:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:19:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:19:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40913 DF PROTO=TCP SPT=48568 DPT=9102 SEQ=1974362793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB8F56E0000000001030307) 
Oct 13 15:19:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-35ae3a4ac122a8618eb2a766bf02e775e404790502be51124b9c0c86bae7b901-merged.mount: Deactivated successfully.
Oct 13 15:19:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40914 DF PROTO=TCP SPT=48568 DPT=9102 SEQ=1974362793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB8F9760000000001030307) 
Oct 13 15:19:54 standalone.localdomain ceph-mon[29756]: pgmap v3160: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:19:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:19:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3161: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:19:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:19:55 standalone.localdomain podman[482157]: 2025-10-13 15:19:55.547765031 +0000 UTC m=+0.073121291 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:19:55 standalone.localdomain systemd[1]: tmp-crun.ypI19R.mount: Deactivated successfully.
Oct 13 15:19:55 standalone.localdomain podman[482156]: 2025-10-13 15:19:55.605846886 +0000 UTC m=+0.131526246 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:19:55 standalone.localdomain podman[482157]: 2025-10-13 15:19:55.614137578 +0000 UTC m=+0.139493818 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:19:55 standalone.localdomain podman[482156]: 2025-10-13 15:19:55.63889543 +0000 UTC m=+0.164574750 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:19:55 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:55.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:19:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:19:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40915 DF PROTO=TCP SPT=48568 DPT=9102 SEQ=1974362793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB901760000000001030307) 
Oct 13 15:19:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:19:56 standalone.localdomain ceph-mon[29756]: pgmap v3161: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:19:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:19:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:19:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:19:57 standalone.localdomain nova_compute[456855]: 2025-10-13 15:19:57.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:19:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:19:57 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:19:57 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:19:57 standalone.localdomain podman[482195]: 2025-10-13 15:19:57.108086558 +0000 UTC m=+0.127552700 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1, architecture=x86_64, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, com.redhat.component=openstack-swift-account-container, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git)
Oct 13 15:19:57 standalone.localdomain podman[482188]: 2025-10-13 15:19:57.068219769 +0000 UTC m=+0.097832502 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:19:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3162: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:57 standalone.localdomain podman[482189]: 2025-10-13 15:19:57.176346054 +0000 UTC m=+0.201681982 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, release=1, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, tcib_managed=true, io.openshift.expose-services=, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, version=17.1.9, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack 
Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:19:57 standalone.localdomain podman[482188]: 2025-10-13 15:19:57.249206695 +0000 UTC m=+0.278819478 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64)
Oct 13 15:19:57 standalone.localdomain podman[482195]: 2025-10-13 15:19:57.306819526 +0000 UTC m=+0.326285618 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, maintainer=OpenStack TripleO Team, release=1, batch=17.1_20250721.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, distribution-scope=public, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible)
Oct 13 15:19:57 standalone.localdomain podman[482189]: 2025-10-13 15:19:57.332590529 +0000 UTC m=+0.357926457 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, container_name=swift_container_server, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-container-container, batch=17.1_20250721.1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 15:19:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:19:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-35ae3a4ac122a8618eb2a766bf02e775e404790502be51124b9c0c86bae7b901-merged.mount: Deactivated successfully.
Oct 13 15:19:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-35ae3a4ac122a8618eb2a766bf02e775e404790502be51124b9c0c86bae7b901-merged.mount: Deactivated successfully.
Oct 13 15:19:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:19:58 standalone.localdomain ceph-mon[29756]: pgmap v3162: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:19:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:19:59 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:19:59 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:19:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3163: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:19:59 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:20:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:00 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:20:00 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:20:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40916 DF PROTO=TCP SPT=48568 DPT=9102 SEQ=1974362793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB911360000000001030307) 
Oct 13 15:20:00 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:00.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:20:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:00 standalone.localdomain ceph-mon[29756]: pgmap v3163: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3164: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:20:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:20:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:20:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:20:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:20:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:20:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:01 standalone.localdomain podman[482269]: 2025-10-13 15:20:01.802366551 +0000 UTC m=+0.069116555 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, architecture=x86_64, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 13 15:20:01 standalone.localdomain podman[482269]: 2025-10-13 15:20:01.816228709 +0000 UTC m=+0.082978733 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, distribution-scope=public, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 13 15:20:02 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:02.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:02 standalone.localdomain ceph-mon[29756]: pgmap v3164: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3165: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:20:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:20:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-cebc8707f4c7cba2ecd74c41ce3f00c568695e2bd83874b7af7633e3c76f6461-merged.mount: Deactivated successfully.
Oct 13 15:20:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-cebc8707f4c7cba2ecd74c41ce3f00c568695e2bd83874b7af7633e3c76f6461-merged.mount: Deactivated successfully.
Oct 13 15:20:03 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:20:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:20:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:20:03 standalone.localdomain podman[482288]: 2025-10-13 15:20:03.728818402 +0000 UTC m=+0.236064647 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:20:03 standalone.localdomain podman[482288]: 2025-10-13 15:20:03.76289364 +0000 UTC m=+0.270139855 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 15:20:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:04 standalone.localdomain ceph-mon[29756]: pgmap v3165: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3166: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:20:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:20:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:20:05 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:20:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:20:05 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:05.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:05 standalone.localdomain podman[482307]: 2025-10-13 15:20:05.790871389 +0000 UTC m=+0.064047754 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:20:05 standalone.localdomain podman[482307]: 2025-10-13 15:20:05.821201837 +0000 UTC m=+0.094378182 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:20:05 standalone.localdomain podman[482307]: unhealthy
Oct 13 15:20:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-35ae3a4ac122a8618eb2a766bf02e775e404790502be51124b9c0c86bae7b901-merged.mount: Deactivated successfully.
Oct 13 15:20:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:20:06 standalone.localdomain ceph-mon[29756]: pgmap v3166: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:20:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:20:06.936 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:20:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:20:06.937 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:20:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:20:06.938 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:20:07 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:07.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:07 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:20:07 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:20:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3167: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:20:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:08 standalone.localdomain ceph-mon[29756]: pgmap v3167: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3168: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:20:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:20:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:20:10 standalone.localdomain systemd[1]: tmp-crun.TChvRW.mount: Deactivated successfully.
Oct 13 15:20:10 standalone.localdomain podman[482327]: 2025-10-13 15:20:10.498811123 +0000 UTC m=+0.096829509 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 13 15:20:10 standalone.localdomain podman[482327]: 2025-10-13 15:20:10.504477972 +0000 UTC m=+0.102496348 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:20:10 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:10.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:10 standalone.localdomain ceph-mon[29756]: pgmap v3168: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:20:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3169: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:20:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1c3ceeb611a5bf472a6616e0e2799f4dfa33cb00f5de706c2cb6f3f3948731f3-merged.mount: Deactivated successfully.
Oct 13 15:20:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:20:11 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:20:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:20:12 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:12.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:12 standalone.localdomain ceph-mon[29756]: pgmap v3169: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:20:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:20:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:20:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:20:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:20:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:20:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:20:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:20:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:20:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:20:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:20:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:20:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3170: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:20:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:20:14 standalone.localdomain ceph-mon[29756]: pgmap v3170: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3171: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:15 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:15.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:20:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:20:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-cebc8707f4c7cba2ecd74c41ce3f00c568695e2bd83874b7af7633e3c76f6461-merged.mount: Deactivated successfully.
Oct 13 15:20:16 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:20:16 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:20:16 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:20:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:20:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:20:16 standalone.localdomain ceph-mon[29756]: pgmap v3171: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:17.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3172: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:20:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:20:17 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:20:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:20:17 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:20:17 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:20:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:20:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1236448891' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:20:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:20:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1236448891' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:20:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:18 standalone.localdomain ceph-mon[29756]: pgmap v3172: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1236448891' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:20:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1236448891' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:20:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3173: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:20:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:20:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:20:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:20:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:20 standalone.localdomain podman[482346]: 2025-10-13 15:20:20.477768827 +0000 UTC m=+0.072568193 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:20:20 standalone.localdomain podman[482346]: 2025-10-13 15:20:20.513878588 +0000 UTC m=+0.108677984 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:20:20 standalone.localdomain podman[482347]: 2025-10-13 15:20:20.519935439 +0000 UTC m=+0.114734815 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 13 15:20:20 standalone.localdomain podman[482347]: 2025-10-13 15:20:20.603970663 +0000 UTC m=+0.198770079 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:20:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:20.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:20 standalone.localdomain systemd[1]: tmp-crun.f6Zj12.mount: Deactivated successfully.
Oct 13 15:20:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:20:20 standalone.localdomain ceph-mon[29756]: pgmap v3173: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f573d32cfb4fe0600026989e0f3eb3e23bdbbcdc54637c567c3b456cea4c706a-merged.mount: Deactivated successfully.
Oct 13 15:20:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:20.987 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:20:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:20.989 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:20:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:20.989 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.021 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.022 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.022 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.043 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.044 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9207d3b4-0481-4a82-85de-ec810350601b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:20:20.989880', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '2202f882-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.179090761, 'message_signature': 'adceb4d3dd58c901542cc0470112c0d2ee9f6bb426a58342e978d94306c69781'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:20:20.989880', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '22030f0c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.179090761, 'message_signature': 'ea4e9e05a257870d516750ba75d94b01d4744957730464d6888fd061780328d5'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:20:20.989880', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '220320e6-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.179090761, 'message_signature': '6eed0cae7e38046937dc08bae5bd37fa9075b147d9c2b80ea2c26540bf8583ef'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:20:20.989880', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '22066832-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.212197027, 'message_signature': '9654e0fad3b628177c28c972c86d4a91d009eed06a21526613a050cdb35b84ca'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:20:20.989880', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '220677b4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.212197027, 'message_signature': 'd7722a0fdde8e432cc10a69cdc1799142e199641bdbd0203bb2f2e25d15e24d8'}]}, 'timestamp': '2025-10-13 15:20:21.044782', '_unique_id': 'b43268d2f75b4113801815722d5504fd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.047 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.048 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.083 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.084 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.084 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 2321920 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.113 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.114 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '77bcceb7-8520-4310-9a7c-809f88545d57', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:20:21.048952', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '220c8172-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': 'fbe47245a29501b754b99a23318d366f8f04e33064bc2ad427c1c5ffcdaca137'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:20:21.048952', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '220c9590-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': '7866b70e3e26d1d869c817f5bc8c6c894a1e602b9a2341ba27a4f6c08e4dc4ce'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2321920, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:20:21.048952', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '220ca10c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': 'e1d38d75c77763268b5e0bb16cac89db0207bb6e277600bb98de10d3c45f38e6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:20:21.048952', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '221117be-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.274417502, 'message_signature': '1a2af3499493c02e2f0897ff2cf1fde8511ee5941294cf214a24df6130af7ef4'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:20:21.048952', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '22112254-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.274417502, 'message_signature': 'bac5539c13c3ad5b28dfa81e1534d87923c98b259268b84508f7dd49ecdb4020'}]}, 'timestamp': '2025-10-13 15:20:21.114675', '_unique_id': 'd63b05ffd62e4d6e97475e409d8b4386'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.115 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.116 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:20:21 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.119 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.124 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3174: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9ac3d26d-c86b-4cda-b3b2-8e6e3bed226b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:20:21.116367', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '2211edce-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.305619357, 'message_signature': '7ba555bb7d6acdcb0ff9a2b6f88b182f965d2aad4749a52b77a2919608dfd7aa'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:20:21.116367', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '2212b4e8-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.309098857, 'message_signature': 'd537cd5e42266d6054bb7ec8906e89daaa88d6c9279a8bceab0ae7f01d69c44b'}]}, 'timestamp': '2025-10-13 15:20:21.125047', '_unique_id': '69bb080787c94f17ba56c05ae9247595'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.126 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.127 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.127 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 495 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.127 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.127 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.128 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.128 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '930020d1-e007-4a18-9aa3-8acd96df70c6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 495, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:20:21.127145', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '221315e6-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': '2fa2b41cf9216eaa94add52dd6fe926d705aafab97d0ab2da2ba8b1ce0921923'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:20:21.127145', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '221322c0-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': '0c815afcc8bd7e2edec24be9a6f499f085c502dd70601d9d7d123487c7ca8651'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:20:21.127145', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '22132e3c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': 'af7357bc9905dd99c228c4f887652d7c9e9ffa9153ee7fbe5f3a79f5c005160e'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:20:21.127145', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '221339c2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.274417502, 'message_signature': '1324bbb48ec1d412bee688719c012b8a238d136da8a62ada608b0c92fba167f2'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:20:21.127145', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '221345a2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.274417502, 'message_signature': 'b46b1e12017ce9f504122434613311b1b0c32099f3209164ec4e966321665f4d'}]}, 'timestamp': '2025-10-13 15:20:21.128690', '_unique_id': 'f54826ca45c5488c86d79d52cad30885'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.129 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.130 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.130 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 55 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.130 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8555df39-3c43-4799-b7de-2cc94f1cda4e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 55, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:20:21.130360', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '2213944e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.305619357, 'message_signature': 'b1255ba3e7fd69714c976893644d8bb0ad63e56400e547e78d58dda63b2caf80'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:20:21.130360', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '2213a132-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.309098857, 'message_signature': '7fa61441e97b592ac71384e3968e1b4b178dbc1e43675ad17ce8ca8aa8f893a0'}]}, 'timestamp': '2025-10-13 15:20:21.131040', '_unique_id': '78fa12b5e5084927875ca852583f3c4a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.131 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.132 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.132 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.132 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3ed50894-d2b8-4a13-afb7-809af70500bd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:20:21.132565', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '2213e93a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.305619357, 'message_signature': '5b69d5fc3323986a49c89ec199b4b81c7ccd3e4a2a49e59812411f82ffaf8e12'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:20:21.132565', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '2213f556-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.309098857, 'message_signature': '798fb63afc31550385826ace68e922a06fdfb9c26bcf3ce03d5c7eebe69cd914'}]}, 'timestamp': '2025-10-13 15:20:21.133193', '_unique_id': '78f98aaa80054e158971fda186576495'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.133 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.134 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.134 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.135 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.135 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.135 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.135 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f46704ca-501e-470d-87e7-081a1d708b1e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:20:21.134683', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '22143c78-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': '9d0863f8b96ddbfa787865b959bcb50dc771ebba239d1d9359c6fd4c1ff0d9b9'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:20:21.134683', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '221447fe-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': '49230730c79adfa70ab4ba02d2b51e45012de7ada732a7aad92a21912a54d73d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:20:21.134683', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '2214533e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': 'cb7ccc098a5b00361d2546acc2faaf7c4a5fe1b602426a9904170a4e283112c2'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:20:21.134683', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '22145ef6-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.274417502, 'message_signature': '96c8a5cf41ac6723935fd8578c214e88d53453128095a0e2fcbe759008e61911'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:20:21.134683', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '221469f0-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.274417502, 'message_signature': 'c040ff3d8a6665fff69fffba8abc8550ef46e3ca0008c6a491c5c45818133a72'}]}, 'timestamp': '2025-10-13 15:20:21.136170', '_unique_id': '94778ff1674543f7a56ae8d4a29eac85'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.136 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.137 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.137 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.138 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.138 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.138 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.138 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a7f112d6-ff40-4e21-93fd-4c2ea6b557b2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:20:21.137774', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '2214b48c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': 'bd84a7739b260c54e4314e76eb68227280916bfb9976a0c918557ca3c4c8dd00'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:20:21.137774', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '2214bfc2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': 'a8f99e37f5e9e9c506e627ffa181851c780a5ae6ec148243c9718ca868fb80d4'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 108, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:20:21.137774', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '2214cbac-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': '230a04df6e37ff9375be62c99c8c2c32f3b5902b6fc7158d83830201c43e4cde'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:20:21.137774', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '2214d7fa-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.274417502, 'message_signature': '7c8ab63fb9abf0f80ef2ed37cfb5c2c1fe86c32f2b7a22fc5b04f1486e1cff2f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:20:21.137774', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '2214e2f4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.274417502, 'message_signature': '377b066aeeb69640e54607dc5b14775c398f956f1929b6d95366da60681949b7'}]}, 'timestamp': '2025-10-13 15:20:21.139270', '_unique_id': '1b6ab10512174dbd809af0f3b862dc68'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.140 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.141 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 857967408 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.141 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 155394833 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.141 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 63114260 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.141 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.142 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '44ba9c22-f33b-45b7-8f72-162aba0ce559', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 857967408, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:20:21.141069', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '221535b0-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': 'c91cd8cab50ea5c5cb1136ee44a40175a40b2a0ebb7b21f4349158dea6f73234'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 155394833, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:20:21.141069', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '2215419a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': '3bbbabab408d514b1f65b50fdfd76e3ad600c20fd371648b8e90e14733cf956b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 63114260, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:20:21.141069', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '22154ce4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': '012a1eee3543fd002ebe3f012ec3819edef31aa1f99f2ca15b205bf8d8ef026b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:20:21.141069', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '22155752-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.274417502, 'message_signature': '242e8ac9218eafbd430ac22934de9cc912c444b555db908b05ba07269aedf693'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:20:21.141069', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '221561a2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.274417502, 'message_signature': 'afed51f7dd6e80f8e56c6208fd36d2a8de010fe7cd7d145d080d516a3ea8e411'}]}, 'timestamp': '2025-10-13 15:20:21.142532', '_unique_id': '12848c04508c46eb8a46ed5f2a3f3d83'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.143 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.144 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.144 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.144 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.144 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.144 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e8bef7f2-d02f-4e77-a875-798c0866f34b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:20:21.144367', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '2215b742-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.305619357, 'message_signature': '4a40d3fb3b3b1355291bab552ea0fbf9aa962190876c4ce1f097f1c7d659ef4d'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:20:21.144367', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '2215c390-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.309098857, 'message_signature': '7849685855f1b21e48490bd7bbd19d9281beef1f11552d2d14baafa8c525906d'}]}, 'timestamp': '2025-10-13 15:20:21.145030', '_unique_id': '482e22c9fb6a4eb8bd88de8fa565473e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.145 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.146 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.146 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.146 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7719313e-bac9-4241-917c-9ce52e9a9477', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:20:21.146641', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '22160ecc-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.305619357, 'message_signature': 'a7c43dfe9a8f9bb43df776740d416d3fa9683b424ccd9504e6a1df6edfe4127a'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:20:21.146641', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '22161b1a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.309098857, 'message_signature': '3c4a06663f36596a7b5f7dcefd2f1d911d7aa6ca29deacdbf68867db3241f7a0'}]}, 'timestamp': '2025-10-13 15:20:21.147269', '_unique_id': 'fdd6db1d6bac49bbb3ba91cc92d49171'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.148 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.149 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.149 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.149 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.149 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.150 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.150 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '75aca818-8a51-4ac5-b796-8a98d26a678b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:20:21.149238', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '221674d4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.179090761, 'message_signature': '261ac7e045f20746346e0f681f184ba3d41293e02e070346908b6e738af3af1a'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:20:21.149238', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '22168186-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.179090761, 'message_signature': '6fb4f3804c90f235b938b12b56928a8745738cdb371acf65d36b5600268a49b9'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:20:21.149238', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '22168c30-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.179090761, 'message_signature': '49e21afab6735d762be8c2b1f5053a65d2538e6e980ec4f23f25601a2c0773b9'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:20:21.149238', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '2216972a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.212197027, 'message_signature': '898ccaab37c3116fad700dfce4532d7c4a801bc2d0a776b839a0ecb8ee709bf2'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:20:21.149238', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '2216a2f6-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.212197027, 'message_signature': 'dcc3022442dc65a1ff0d33fcf04a6865ec186159ca0cc66145daa22de6043a30'}]}, 'timestamp': '2025-10-13 15:20:21.150736', '_unique_id': 'f0d9b68f14f24f82b957e0f4ddb7ea08'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.151 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.152 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.152 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.152 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.153 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.153 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.153 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '20a4ee3c-3d0b-42c4-ac83-e9337c304e8d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:20:21.152541', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '2216f562-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.179090761, 'message_signature': '62fd86c3130a3837606a3674820f2626da17438007e58e500fdbfc9dc064b936'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:20:21.152541', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '22170098-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.179090761, 'message_signature': '280038ed74c860fe4c889b766e593692ff3911355dfb74d0e48a69bdd964ea42'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:20:21.152541', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '22170b88-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.179090761, 'message_signature': 'ef58a9d742b2b8f3e624ea5c0cb9b3a3c1c0076908864feb16ba3f803e8d5b20'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:20:21.152541', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '22171718-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.212197027, 'message_signature': '0d88e0a02d96f7476a75105ebcb948ff59b7ef01a5554096c01cc37bc10df297'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:20:21.152541', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '221721f4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.212197027, 'message_signature': '05659edadfbfb1413d4c5f17f08911358d7390487d93801f65e4b8825193865d'}]}, 'timestamp': '2025-10-13 15:20:21.153986', '_unique_id': '49c3650f487e44058a8008d71a88af2e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.154 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.155 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.155 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 4526 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.155 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'da9b89cc-6b43-4a36-a75e-95cd98dc5f1a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4526, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:20:21.155616', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '22176dbc-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.305619357, 'message_signature': '6a6f01b74ecabc01e062a683feb0bb1d238f1b1af9d2c51c95411c6174ed37a1'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:20:21.155616', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '22177992-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.309098857, 'message_signature': '82c5eab5618931f5b5ab6fe9f620468c1cb3f1a91f6d520d931211c5e00b0fa3'}]}, 'timestamp': '2025-10-13 15:20:21.156241', '_unique_id': 'f514d77975e640a8a6360184bf12067f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.156 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.157 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.157 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.158 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3848b45e-6d52-4eda-9ef1-888de9708535', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:20:21.157810', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '2217c348-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.305619357, 'message_signature': '1a4d111736ba88581f71a4ca92dacbaf1f5fd7057e06bc9fbd5da52ed0803890'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:20:21.157810', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '2217cf3c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.309098857, 'message_signature': '3a7eeb73db1cd836d834c7d7e83d4c7f9cc35df55d77a7c11d20c0d84e65aa7c'}]}, 'timestamp': '2025-10-13 15:20:21.158432', '_unique_id': 'f107585b8f7f46b79f82a0660f2bdebc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.159 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.160 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.160 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9c1fb139-45a8-4812-8b9d-97c1582f0621', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:20:21.160020', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '2218196a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.305619357, 'message_signature': '59c68a02c60948ca7874fea669f284e215de0d9c8cf193a2c13616673b55425c'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:20:21.160020', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '22182518-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.309098857, 'message_signature': 'a3b3c9d0adf6eafa8284f5b40db73567faafd2ea59537897de5f18918c85a0af'}]}, 'timestamp': '2025-10-13 15:20:21.160646', '_unique_id': 'ff3aaf2a4c5f44f782dbfc9644e4e5d2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.161 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.162 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.180 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 28120000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.197 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 27920000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '90c2af97-084e-4b27-84f5-a555db1b011c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 28120000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:20:21.162158', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '221b3da2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.369478355, 'message_signature': '866e309a3e5333ea72f5eaf3d3fae90756c7cd59ad83ee8966244e61e4999467'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27920000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:20:21.162158', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '221deafc-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.387091631, 'message_signature': '329b95e0bfc1486a0f0f5bab2d4db6915f22cd035577b9748db4a00cf0af6f6f'}]}, 'timestamp': '2025-10-13 15:20:21.198544', '_unique_id': '112337b09be74398a2ed9dbd4fa72b9b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.200 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.200 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 501653653 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.201 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.201 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.201 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.202 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e63b1116-aa25-4779-b935-6c42358e182a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 501653653, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:20:21.200960', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '221e592e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': '27968fd47927cebdb6daeedbf65ecafcebb45df4fe66a9187bc8418ddc039872'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:20:21.200960', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '221e6482-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': '15b18246814e48bdd23453cd10350ac6c454484af32a3f1ef3a3ee127a7c5760'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:20:21.200960', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '221e6ff4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.238252419, 'message_signature': 'b3c5fd6557019a7722856fd52beb4aab0846fd23398f4608599a03de3a7761c1'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:20:21.200960', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '221e7a80-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.274417502, 'message_signature': 'cd0a1320809d6c752a6bdb633e04cd4cbe31f58468e44423f72d96e688d7ded9'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:20:21.200960', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '221e84da-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.274417502, 'message_signature': '0061f9a731544de24653270c860e9d85b7121dbb0a9331310b1a6eb0c3120d6e'}]}, 'timestamp': '2025-10-13 15:20:21.202389', '_unique_id': 'afced2bdf8b949c98c1c64353dfaa9a3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.203 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.204 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.204 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 52.27734375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.204 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'cd113c28-03ec-4a80-866d-7ce1f0734932', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.27734375, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:20:21.204119', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '221ed4b2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.369478355, 'message_signature': 'cbad69dd2a7ad29a55361fef674e594b3c030ea315c77aabacbf87ef6215b2f2'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': '2025-10-13T15:20:21.204119', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '221ee088-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.387091631, 'message_signature': 'c56777ba46e648260b7b1a04bd995c8e3c02257d061489a23e4a9bd4ae5d173e'}]}, 'timestamp': '2025-10-13 15:20:21.204736', '_unique_id': 'c4380904266e4df9b22fbadf84ebfd67'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.206 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.206 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 64 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.206 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 49 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4426ab8f-841b-4fe1-b6f5-46cbaa76b733', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 64, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:20:21.206241', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '221f273c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.305619357, 'message_signature': 'd619e7160d71766975de69bb9a21bf12d52810210290a2c503b03d29e48b670c'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 49, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:20:21.206241', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '221f33b2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.309098857, 'message_signature': 'c72a52f247e73dafe52725c411e50764e8b54619d6ae7daa13c057a14222535b'}]}, 'timestamp': '2025-10-13 15:20:21.206879', '_unique_id': '15e95410827e46e1a968372820689c64'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.207 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.208 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.208 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 4738 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.208 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 3576 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '28a52c75-dbc2-46ea-8607-ed68b9f69bad', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4738, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:20:21.208356', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '221f7a8e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.305619357, 'message_signature': '44637bffc821468b57e7e54344c8f37b0820ef0964f0fecd3f2174c5e4c01cf6'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3576, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:20:21.208356', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '221f863c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8264.309098857, 'message_signature': 'fc1799feec61b59dbda6be87336da30e0e07af4b2ef0ff85dc4810294c772239'}]}, 'timestamp': '2025-10-13 15:20:21.208990', '_unique_id': '9cae88e165b14b77973884fc3abac3bd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:20:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:20:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:20:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:20:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:22.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:22 standalone.localdomain ceph-mon[29756]: pgmap v3174: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-26243e2ef834ed1ddfa94abd8ba67d6e19ac7be9552cf98889448c1547a2c001-merged.mount: Deactivated successfully.
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3175: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a31aafc7f1b29fd16a307719f75e509fb0a8510c4c99fe1f9e98b69d85c353e-merged.mount: Deactivated successfully.
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:20:23
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', 'manila_data', 'manila_metadata', 'backups', 'images', '.mgr', 'vms']
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:20:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a31aafc7f1b29fd16a307719f75e509fb0a8510c4c99fe1f9e98b69d85c353e-merged.mount: Deactivated successfully.
Oct 13 15:20:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54242 DF PROTO=TCP SPT=58640 DPT=9102 SEQ=773010840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB96A9F0000000001030307) 
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:20:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:20:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:20:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1c3ceeb611a5bf472a6616e0e2799f4dfa33cb00f5de706c2cb6f3f3948731f3-merged.mount: Deactivated successfully.
Oct 13 15:20:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54243 DF PROTO=TCP SPT=58640 DPT=9102 SEQ=773010840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB96EB60000000001030307) 
Oct 13 15:20:24 standalone.localdomain ceph-mon[29756]: pgmap v3175: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.091 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:20:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3176: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.135 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.136 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.136 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.136 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.136 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:20:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-96e9a1eb6145016f992fadcf28b3705343c009294ada58a4e5bd657e99e23074-merged.mount: Deactivated successfully.
Oct 13 15:20:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-26243e2ef834ed1ddfa94abd8ba67d6e19ac7be9552cf98889448c1547a2c001-merged.mount: Deactivated successfully.
Oct 13 15:20:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-26243e2ef834ed1ddfa94abd8ba67d6e19ac7be9552cf98889448c1547a2c001-merged.mount: Deactivated successfully.
Oct 13 15:20:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:20:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3362967768' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.604 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.684 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.685 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.685 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.688 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.688 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.858 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.859 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9942MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.859 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.859 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:20:25 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3362967768' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.931 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.931 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.932 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.932 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:20:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:25.984 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:20:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:20:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:20:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:20:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:20:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3212600426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:20:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:26.452 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:20:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:26.457 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:20:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:26.474 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:20:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:26.475 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:20:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:26.476 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:20:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:20:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54244 DF PROTO=TCP SPT=58640 DPT=9102 SEQ=773010840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB976B60000000001030307) 
Oct 13 15:20:26 standalone.localdomain ceph-mon[29756]: pgmap v3176: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:20:26 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3212600426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:20:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:27.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3177: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:20:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:20:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:20:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:20:27 standalone.localdomain systemd[1]: tmp-crun.Gwuscb.mount: Deactivated successfully.
Oct 13 15:20:27 standalone.localdomain podman[482438]: 2025-10-13 15:20:27.277444904 +0000 UTC m=+0.096749436 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:20:27 standalone.localdomain podman[482438]: 2025-10-13 15:20:27.308107873 +0000 UTC m=+0.127412445 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 15:20:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-96e9a1eb6145016f992fadcf28b3705343c009294ada58a4e5bd657e99e23074-merged.mount: Deactivated successfully.
Oct 13 15:20:27 standalone.localdomain podman[482439]: 2025-10-13 15:20:27.356214963 +0000 UTC m=+0.173566493 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd)
Oct 13 15:20:27 standalone.localdomain podman[482439]: 2025-10-13 15:20:27.393242413 +0000 UTC m=+0.210593973 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:20:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:20:28 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:20:28 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:20:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:28.474 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:20:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:28.475 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:20:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:28.475 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:20:28 standalone.localdomain ceph-mon[29756]: pgmap v3177: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:20:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:20:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:29.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:20:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:29.091 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:20:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:29.091 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:20:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3178: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:20:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:20:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:20:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:20:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:20:29 standalone.localdomain systemd[1]: tmp-crun.LiBfqf.mount: Deactivated successfully.
Oct 13 15:20:29 standalone.localdomain podman[482476]: 2025-10-13 15:20:29.264458009 +0000 UTC m=+0.080127800 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, name=rhosp17/openstack-swift-object, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, version=17.1.9, config_id=tripleo_step4, build-date=2025-07-21T14:56:28, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d)
Oct 13 15:20:29 standalone.localdomain podman[482477]: 2025-10-13 15:20:29.31967215 +0000 UTC m=+0.129550630 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, container_name=swift_container_server, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, version=17.1.9, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:20:29 standalone.localdomain podman[482478]: 2025-10-13 15:20:29.268381951 +0000 UTC m=+0.074493343 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., container_name=swift_account_server, com.redhat.component=openstack-swift-account-container, release=1, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, 
vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, io.openshift.expose-services=)
Oct 13 15:20:29 standalone.localdomain podman[482476]: 2025-10-13 15:20:29.447820995 +0000 UTC m=+0.263490706 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:20:29 standalone.localdomain podman[482478]: 2025-10-13 15:20:29.473717312 +0000 UTC m=+0.279828694 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, version=17.1.9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, release=1, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, container_name=swift_account_server, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-account-container, distribution-scope=public, batch=17.1_20250721.1)
Oct 13 15:20:29 standalone.localdomain podman[482477]: 2025-10-13 15:20:29.501776747 +0000 UTC m=+0.311655247 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vcs-type=git, vendor=Red Hat, Inc., container_name=swift_container_server, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4)
Oct 13 15:20:29 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:20:29 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:20:29 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:20:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:29.858 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:20:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:29.858 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:20:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:29.859 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:20:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:29.859 2 DEBUG nova.objects.instance [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:20:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54245 DF PROTO=TCP SPT=58640 DPT=9102 SEQ=773010840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB986770000000001030307) 
Oct 13 15:20:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:30.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:30 standalone.localdomain ceph-mon[29756]: pgmap v3178: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:20:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:31.011 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:20:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:31.028 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:20:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:31.029 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:20:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3179: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:20:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:20:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a31aafc7f1b29fd16a307719f75e509fb0a8510c4c99fe1f9e98b69d85c353e-merged.mount: Deactivated successfully.
Oct 13 15:20:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:32.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:32 standalone.localdomain ceph-mon[29756]: pgmap v3179: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:20:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3180: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:20:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:20:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f573d32cfb4fe0600026989e0f3eb3e23bdbbcdc54637c567c3b456cea4c706a-merged.mount: Deactivated successfully.
Oct 13 15:20:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:20:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f573d32cfb4fe0600026989e0f3eb3e23bdbbcdc54637c567c3b456cea4c706a-merged.mount: Deactivated successfully.
Oct 13 15:20:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:20:33 standalone.localdomain podman[482560]: 2025-10-13 15:20:33.788680614 +0000 UTC m=+0.060823097 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, config_id=edpm, release=1755695350, vendor=Red Hat, Inc.)
Oct 13 15:20:33 standalone.localdomain podman[482560]: 2025-10-13 15:20:33.804008672 +0000 UTC m=+0.076151175 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 13 15:20:33 standalone.localdomain ceph-mon[29756]: pgmap v3180: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:20:34 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 15:20:34 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 15:20:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:20:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:20:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:20:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3181: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:20:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-26243e2ef834ed1ddfa94abd8ba67d6e19ac7be9552cf98889448c1547a2c001-merged.mount: Deactivated successfully.
Oct 13 15:20:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a31aafc7f1b29fd16a307719f75e509fb0a8510c4c99fe1f9e98b69d85c353e-merged.mount: Deactivated successfully.
Oct 13 15:20:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:20:35 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:20:35 standalone.localdomain podman[482581]: 2025-10-13 15:20:35.547631724 +0000 UTC m=+0.073683179 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_id=iscsid)
Oct 13 15:20:35 standalone.localdomain podman[482581]: 2025-10-13 15:20:35.582631235 +0000 UTC m=+0.108682670 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:20:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:35.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:36 standalone.localdomain ceph-mon[29756]: pgmap v3181: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:20:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:20:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:37.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3182: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-96e9a1eb6145016f992fadcf28b3705343c009294ada58a4e5bd657e99e23074-merged.mount: Deactivated successfully.
Oct 13 15:20:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:20:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-26243e2ef834ed1ddfa94abd8ba67d6e19ac7be9552cf98889448c1547a2c001-merged.mount: Deactivated successfully.
Oct 13 15:20:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-26243e2ef834ed1ddfa94abd8ba67d6e19ac7be9552cf98889448c1547a2c001-merged.mount: Deactivated successfully.
Oct 13 15:20:37 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:20:37 standalone.localdomain podman[482600]: 2025-10-13 15:20:37.622112312 +0000 UTC m=+0.250461410 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:20:37 standalone.localdomain podman[482600]: 2025-10-13 15:20:37.629972377 +0000 UTC m=+0.258321455 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:20:37 standalone.localdomain podman[482600]: unhealthy
Oct 13 15:20:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:38 standalone.localdomain ceph-mon[29756]: pgmap v3182: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:20:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-96e9a1eb6145016f992fadcf28b3705343c009294ada58a4e5bd657e99e23074-merged.mount: Deactivated successfully.
Oct 13 15:20:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3183: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-96e9a1eb6145016f992fadcf28b3705343c009294ada58a4e5bd657e99e23074-merged.mount: Deactivated successfully.
Oct 13 15:20:39 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:20:39 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:20:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:20:40 standalone.localdomain ceph-mon[29756]: pgmap v3183: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 13 15:20:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:20:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:40.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:20:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3184: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:20:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-61d88355902d8f002faf7767ac1094db78824bb783ddbe91278ee46e87ad271e-merged.mount: Deactivated successfully.
Oct 13 15:20:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:20:41 standalone.localdomain podman[482621]: 2025-10-13 15:20:41.620580215 +0000 UTC m=+0.061639112 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:20:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:20:41 standalone.localdomain podman[482621]: 2025-10-13 15:20:41.632163147 +0000 UTC m=+0.073222064 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:20:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:42.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:42 standalone.localdomain ceph-mon[29756]: pgmap v3184: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a31aafc7f1b29fd16a307719f75e509fb0a8510c4c99fe1f9e98b69d85c353e-merged.mount: Deactivated successfully.
Oct 13 15:20:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:20:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:20:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:20:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:20:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:20:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:20:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:20:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:20:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:20:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:20:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:20:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:20:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3185: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:20:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:20:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:20:44 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:20:44 standalone.localdomain ceph-mon[29756]: pgmap v3185: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3186: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:20:45 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:45.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:20:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:20:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:20:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:20:46 standalone.localdomain ceph-mon[29756]: pgmap v3186: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:20:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:47.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3187: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:20:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:20:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:47 standalone.localdomain podman[467099]: time="2025-10-13T15:20:47Z" level=error msg="Getting root fs size for \"4dbfeea909f06815c5fc95444b41e56458113758c4919ab38494d8ee46c1a0f9\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": unmounting layer 19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8: replacing mount point \"/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged\": device or resource busy"
Oct 13 15:20:48 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 15:20:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:48 standalone.localdomain object-server[482641]: Object update sweep starting on /srv/node/d1 (pid: 27)
Oct 13 15:20:48 standalone.localdomain object-server[482641]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 27)
Oct 13 15:20:48 standalone.localdomain object-server[482641]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 15:20:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:48 standalone.localdomain object-server[114601]: Object update sweep completed: 0.05s
Oct 13 15:20:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:48 standalone.localdomain ceph-mon[29756]: pgmap v3187: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3188: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:20:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-97bde4ffeeebee51995225d1b54e9cfe542557c553e6334a0f543d8b59656baa-merged.mount: Deactivated successfully.
Oct 13 15:20:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully.
Oct 13 15:20:50 standalone.localdomain ceph-mon[29756]: pgmap v3188: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:50 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:50.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8-merged.mount: Deactivated successfully.
Oct 13 15:20:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:20:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-61d88355902d8f002faf7767ac1094db78824bb783ddbe91278ee46e87ad271e-merged.mount: Deactivated successfully.
Oct 13 15:20:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3189: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:20:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:20:51 standalone.localdomain podman[482643]: 2025-10-13 15:20:51.255367457 +0000 UTC m=+0.088986326 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:20:51 standalone.localdomain podman[482642]: 2025-10-13 15:20:51.222618955 +0000 UTC m=+0.062823609 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:20:51 standalone.localdomain podman[482643]: 2025-10-13 15:20:51.286843448 +0000 UTC m=+0.120462297 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct 13 15:20:51 standalone.localdomain podman[482642]: 2025-10-13 15:20:51.306785359 +0000 UTC m=+0.146990013 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:20:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:20:52 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:52.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:20:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:20:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:20:52 standalone.localdomain ceph-mon[29756]: pgmap v3189: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3190: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:20:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:20:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:20:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:20:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:20:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:20:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:20:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:20:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50356 DF PROTO=TCP SPT=53256 DPT=9102 SEQ=357941554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB9DFCF0000000001030307) 
Oct 13 15:20:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:20:53 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:20:53 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:20:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50357 DF PROTO=TCP SPT=53256 DPT=9102 SEQ=357941554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB9E3F60000000001030307) 
Oct 13 15:20:54 standalone.localdomain ceph-mon[29756]: pgmap v3190: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3191: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:20:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:20:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:20:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:20:55 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:55.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:20:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:20:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50358 DF PROTO=TCP SPT=53256 DPT=9102 SEQ=357941554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB9EBF60000000001030307) 
Oct 13 15:20:56 standalone.localdomain ceph-mon[29756]: pgmap v3191: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:57 standalone.localdomain nova_compute[456855]: 2025-10-13 15:20:57.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:20:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3192: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:20:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:20:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:20:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:20:58 standalone.localdomain systemd[1]: tmp-crun.DQUqtf.mount: Deactivated successfully.
Oct 13 15:20:58 standalone.localdomain podman[482689]: 2025-10-13 15:20:58.559662288 +0000 UTC m=+0.072760909 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 15:20:58 standalone.localdomain podman[482689]: 2025-10-13 15:20:58.593844524 +0000 UTC m=+0.106943145 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 15:20:58 standalone.localdomain podman[482688]: 2025-10-13 15:20:58.602747921 +0000 UTC m=+0.116760190 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:20:58 standalone.localdomain podman[482688]: 2025-10-13 15:20:58.635885865 +0000 UTC m=+0.149898134 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:20:58 standalone.localdomain ceph-mon[29756]: pgmap v3192: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3193: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:20:59 standalone.localdomain systemd[1]: tmp-crun.p8Aslv.mount: Deactivated successfully.
Oct 13 15:20:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:20:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-97bde4ffeeebee51995225d1b54e9cfe542557c553e6334a0f543d8b59656baa-merged.mount: Deactivated successfully.
Oct 13 15:20:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:20:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:20:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:21:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:21:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b33375115d3f8cb6947ff63348a6809984ef15de15f68be27bcaefc607b71c68-merged.mount: Deactivated successfully.
Oct 13 15:21:00 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:21:00 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:21:00 standalone.localdomain podman[482724]: 2025-10-13 15:21:00.589760853 +0000 UTC m=+0.746786205 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, managed_by=tripleo_ansible, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-swift-account-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., container_name=swift_account_server, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, build-date=2025-07-21T16:11:22)
Oct 13 15:21:00 standalone.localdomain podman[482723]: 2025-10-13 15:21:00.635376525 +0000 UTC m=+0.798094614 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, managed_by=tripleo_ansible, release=1)
Oct 13 15:21:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50359 DF PROTO=TCP SPT=53256 DPT=9102 SEQ=357941554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CB9FBB70000000001030307)
Oct 13 15:21:00 standalone.localdomain podman[482722]: 2025-10-13 15:21:00.705410378 +0000 UTC m=+0.870429559 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, container_name=swift_object_server, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, architecture=x86_64, version=17.1.9, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:21:00 standalone.localdomain ceph-mon[29756]: pgmap v3193: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:00 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:00.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:00 standalone.localdomain podman[482724]: 2025-10-13 15:21:00.805785278 +0000 UTC m=+0.962810650 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, distribution-scope=public, version=17.1.9, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, vcs-type=git, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-account, tcib_managed=true)
Oct 13 15:21:00 standalone.localdomain podman[482723]: 2025-10-13 15:21:00.839731436 +0000 UTC m=+1.002449545 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64)
Oct 13 15:21:00 standalone.localdomain podman[482722]: 2025-10-13 15:21:00.931830978 +0000 UTC m=+1.096850229 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, io.openshift.expose-services=, container_name=swift_object_server, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:21:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3194: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully.
Oct 13 15:21:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully.
Oct 13 15:21:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:21:02 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:02.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:02 standalone.localdomain ceph-mon[29756]: pgmap v3194: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:21:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:21:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3195: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:21:03 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:21:03 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:21:03 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:21:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully.
Oct 13 15:21:04 standalone.localdomain ceph-mon[29756]: pgmap v3195: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3196: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:21:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:21:05 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:05.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:05 standalone.localdomain podman[482805]: 2025-10-13 15:21:05.784181373 +0000 UTC m=+0.066820184 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350)
Oct 13 15:21:05 standalone.localdomain podman[482805]: 2025-10-13 15:21:05.798807069 +0000 UTC m=+0.081445880 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc.)
Oct 13 15:21:05 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:21:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:06 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:21:06 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:21:06 standalone.localdomain ceph-mon[29756]: pgmap v3196: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:21:06.937 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:21:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:21:06.938 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:21:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:21:06.939 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:21:07 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:07.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3197: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:07 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:07 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:21:07 standalone.localdomain podman[482824]: 2025-10-13 15:21:07.799203307 +0000 UTC m=+0.066222505 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid)
Oct 13 15:21:07 standalone.localdomain podman[482824]: 2025-10-13 15:21:07.809411576 +0000 UTC m=+0.076430754 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:21:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:21:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:21:08 standalone.localdomain ceph-mon[29756]: pgmap v3197: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3198: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:21:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:21:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-85e96b15229bea211cd977b193c3b43c6267f471f74ae0329da8bebc24a16a86-merged.mount: Deactivated successfully.
Oct 13 15:21:10 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:10 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:10 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:21:10 standalone.localdomain podman[482843]: 2025-10-13 15:21:10.262290041 +0000 UTC m=+0.525615969 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:21:10 standalone.localdomain podman[482843]: 2025-10-13 15:21:10.266130571 +0000 UTC m=+0.529456529 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:21:10 standalone.localdomain podman[482843]: unhealthy
Oct 13 15:21:10 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:10.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:10 standalone.localdomain ceph-mon[29756]: pgmap v3198: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3199: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:21:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:21:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e3a2cba94cb33211ba0aeff20243c4f38acaa1c8a837e69d7f7d9fdd5f20b02b-merged.mount: Deactivated successfully.
Oct 13 15:21:12 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:12.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e3a2cba94cb33211ba0aeff20243c4f38acaa1c8a837e69d7f7d9fdd5f20b02b-merged.mount: Deactivated successfully.
Oct 13 15:21:12 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:12 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:12 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:21:12 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:21:12 standalone.localdomain ceph-mon[29756]: pgmap v3199: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:21:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:21:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:21:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:21:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:21:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:21:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:21:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:21:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:21:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:21:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:21:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:21:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3200: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:21:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:21:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:14 standalone.localdomain podman[482866]: 2025-10-13 15:21:14.7077369 +0000 UTC m=+0.065419500 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Oct 13 15:21:14 standalone.localdomain podman[482866]: 2025-10-13 15:21:14.743768983 +0000 UTC m=+0.101451563 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:21:14 standalone.localdomain ceph-mon[29756]: pgmap v3200: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:14 standalone.localdomain systemd[1]: tmp-crun.Xy6g0a.mount: Deactivated successfully.
Oct 13 15:21:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3201: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:21:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b33375115d3f8cb6947ff63348a6809984ef15de15f68be27bcaefc607b71c68-merged.mount: Deactivated successfully.
Oct 13 15:21:15 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:15.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:15 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:21:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:21:16 standalone.localdomain ceph-mon[29756]: pgmap v3201: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:17.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3202: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:21:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:21:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:21:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1813433136' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:21:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:21:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1813433136' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:21:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e3a2cba94cb33211ba0aeff20243c4f38acaa1c8a837e69d7f7d9fdd5f20b02b-merged.mount: Deactivated successfully.
Oct 13 15:21:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3347794ca7bacaae66520ae91082dfbc824007d4215bbab2b34c037a86b9c119-merged.mount: Deactivated successfully.
Oct 13 15:21:18 standalone.localdomain ceph-mon[29756]: pgmap v3202: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1813433136' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:21:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1813433136' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:21:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3203: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:21:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:21:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:21:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:20.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:20 standalone.localdomain ceph-mon[29756]: pgmap v3203: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3204: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:21:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:21:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:21:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:22.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:22 standalone.localdomain podman[467099]: time="2025-10-13T15:21:22Z" level=error msg="Getting root fs size for \"4fd593e167be6332da3b6657ab9628ff56dbf9e68b26e438f73792213003f285\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": unmounting layer e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df: replacing mount point \"/var/lib/containers/storage/overlay/e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df/merged\": device or resource busy"
Oct 13 15:21:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:22 standalone.localdomain ceph-mon[29756]: pgmap v3204: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3205: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:21:23
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'manila_data', 'backups', '.mgr', 'manila_metadata', 'volumes', 'images']
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:21:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31034 DF PROTO=TCP SPT=54430 DPT=9102 SEQ=3365876870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBA54FF0000000001030307) 
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:21:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:21:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:21:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:21:23 standalone.localdomain systemd[1]: tmp-crun.RRvZby.mount: Deactivated successfully.
Oct 13 15:21:23 standalone.localdomain podman[482886]: 2025-10-13 15:21:23.791265715 +0000 UTC m=+0.058319669 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 15:21:23 standalone.localdomain podman[482885]: 2025-10-13 15:21:23.843761372 +0000 UTC m=+0.110613440 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:21:23 standalone.localdomain podman[482886]: 2025-10-13 15:21:23.847964622 +0000 UTC m=+0.115018596 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 15:21:23 standalone.localdomain podman[482885]: 2025-10-13 15:21:23.854181966 +0000 UTC m=+0.121034024 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:21:24 standalone.localdomain ceph-mgr[29999]: client.0 ms_handle_reset on v2:172.18.0.100:6800/1677275897
Oct 13 15:21:24 standalone.localdomain systemd[1]: tmp-crun.FOd6YX.mount: Deactivated successfully.
Oct 13 15:21:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31035 DF PROTO=TCP SPT=54430 DPT=9102 SEQ=3365876870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBA58F60000000001030307) 
Oct 13 15:21:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:21:24 standalone.localdomain ceph-mon[29756]: pgmap v3205: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-56b0412472b532a34bb78587014039d8acbf511963f662c6f865f972c75cba2f-merged.mount: Deactivated successfully.
Oct 13 15:21:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:21:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:25.024 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:21:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:25.024 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:21:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-85e96b15229bea211cd977b193c3b43c6267f471f74ae0329da8bebc24a16a86-merged.mount: Deactivated successfully.
Oct 13 15:21:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:25.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:21:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3206: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:25 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:21:25 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:21:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:25.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.090 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.112 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.112 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.112 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.113 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.113 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:21:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:21:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1741305307' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.578 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:21:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31036 DF PROTO=TCP SPT=54430 DPT=9102 SEQ=3365876870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBA60F60000000001030307) 
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.648 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.648 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.649 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:21:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.654 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.654 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:21:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:21:26 standalone.localdomain ceph-mon[29756]: pgmap v3206: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:26 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1741305307' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.871 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.872 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9900MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.872 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.872 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:21:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.957 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.957 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.957 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:21:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:26.957 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:21:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:27.017 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:21:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:27.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:27 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3207: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:27 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:21:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3524148818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:21:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:27.510 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:21:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:27.517 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:21:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:27.531 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:21:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:27.533 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:21:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:27.533 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:21:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e3a2cba94cb33211ba0aeff20243c4f38acaa1c8a837e69d7f7d9fdd5f20b02b-merged.mount: Deactivated successfully.
Oct 13 15:21:27 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3524148818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:21:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:28.534 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:21:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:28.535 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:21:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:28.535 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:21:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:28 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:21:28 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:28 standalone.localdomain ceph-mon[29756]: pgmap v3207: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:21:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3208: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:29 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:29 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:29 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:29 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:30.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:21:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:30.090 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:21:30 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:30 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:30 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:30 standalone.localdomain podman[467099]: time="2025-10-13T15:21:30Z" level=error msg="Getting root fs size for \"5305f60565da922fcf4ae66b4ec1858c099967d1fa7e119077d0263e9e0a74c3\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": unmounting layer 19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8: replacing mount point \"/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged\": device or resource busy"
Oct 13 15:21:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31037 DF PROTO=TCP SPT=54430 DPT=9102 SEQ=3365876870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBA70B60000000001030307) 
Oct 13 15:21:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:21:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:21:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:21:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e410b4438f0c300953cb1d70d8d493e57beab26e7850ef1c9e5f40f4d7c5012-merged.mount: Deactivated successfully.
Oct 13 15:21:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:30.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:30 standalone.localdomain systemd[1]: tmp-crun.ElQMYn.mount: Deactivated successfully.
Oct 13 15:21:30 standalone.localdomain podman[482979]: 2025-10-13 15:21:30.804907865 +0000 UTC m=+0.068208338 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:21:30 standalone.localdomain podman[482979]: 2025-10-13 15:21:30.818994385 +0000 UTC m=+0.082294898 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:21:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:30.857 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:21:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:30.858 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:21:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:30.858 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:21:30 standalone.localdomain podman[482978]: 2025-10-13 15:21:30.877613251 +0000 UTC m=+0.140275684 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 13 15:21:30 standalone.localdomain ceph-mon[29756]: pgmap v3208: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:30 standalone.localdomain podman[482978]: 2025-10-13 15:21:30.909995981 +0000 UTC m=+0.172658354 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 15:21:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3209: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:21:31 standalone.localdomain systemd[1]: tmp-crun.nuuBHG.mount: Deactivated successfully.
Oct 13 15:21:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a3b3404ad94bc57e2e9bbeee5710aeedc4c81e80a3e18de9d8dfbabe3d88730-merged.mount: Deactivated successfully.
Oct 13 15:21:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:31.882 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:21:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:31.897 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:21:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:31.897 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:21:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:31.897 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:21:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:32.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e3a2cba94cb33211ba0aeff20243c4f38acaa1c8a837e69d7f7d9fdd5f20b02b-merged.mount: Deactivated successfully.
Oct 13 15:21:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3347794ca7bacaae66520ae91082dfbc824007d4215bbab2b34c037a86b9c119-merged.mount: Deactivated successfully.
Oct 13 15:21:32 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:32 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:32 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:21:32 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:21:32 standalone.localdomain ceph-mon[29756]: pgmap v3209: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3210: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:21:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:21:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:21:33 standalone.localdomain podman[483017]: 2025-10-13 15:21:33.381681203 +0000 UTC m=+0.086094935 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, tcib_managed=true, batch=17.1_20250721.1)
Oct 13 15:21:33 standalone.localdomain podman[483016]: 2025-10-13 15:21:33.359708328 +0000 UTC m=+0.071574883 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, tcib_managed=true, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-account, config_id=tripleo_step4, container_name=swift_account_server, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-swift-account-container, maintainer=OpenStack TripleO Team, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, io.openshift.expose-services=)
Oct 13 15:21:33 standalone.localdomain podman[483015]: 2025-10-13 15:21:33.457801947 +0000 UTC m=+0.168561477 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, com.redhat.component=openstack-swift-container-container, release=1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-swift-container, container_name=swift_container_server, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:21:33 standalone.localdomain podman[483016]: 2025-10-13 15:21:33.528936824 +0000 UTC m=+0.240803349 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:11:22, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, container_name=swift_account_server, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:21:33 standalone.localdomain podman[483017]: 2025-10-13 15:21:33.583095832 +0000 UTC m=+0.287509594 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, container_name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, vendor=Red Hat, Inc., version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:21:33 standalone.localdomain podman[483015]: 2025-10-13 15:21:33.652126285 +0000 UTC m=+0.362885815 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, build-date=2025-07-21T15:54:32, batch=17.1_20250721.1, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:21:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:21:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:21:34 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:21:34 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:21:34 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:21:34 standalone.localdomain ceph-mon[29756]: pgmap v3210: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3211: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:35.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a3b3404ad94bc57e2e9bbeee5710aeedc4c81e80a3e18de9d8dfbabe3d88730-merged.mount: Deactivated successfully.
Oct 13 15:21:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-81afee617d1c2ad30d64b9a6344392209bb792d6cbb7bddbfe6b6c66b765cc51-merged.mount: Deactivated successfully.
Oct 13 15:21:36 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.669450) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368896669525, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 2105, "num_deletes": 251, "total_data_size": 1882096, "memory_usage": 1925376, "flush_reason": "Manual Compaction"}
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368896683780, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 1843704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74038, "largest_seqno": 76142, "table_properties": {"data_size": 1835571, "index_size": 4899, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16670, "raw_average_key_size": 19, "raw_value_size": 1819121, "raw_average_value_size": 2152, "num_data_blocks": 222, "num_entries": 845, "num_filter_entries": 845, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760368692, "oldest_key_time": 1760368692, "file_creation_time": 1760368896, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 14627 microseconds, and 7914 cpu microseconds.
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.684064) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 1843704 bytes OK
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.684192) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.686259) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.686280) EVENT_LOG_v1 {"time_micros": 1760368896686274, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.686300) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1873134, prev total WAL file size 1873134, number of live WAL files 2.
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.687670) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(1800KB)], [170(4887KB)]
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368896687776, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 6848769, "oldest_snapshot_seqno": -1}
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 6428 keys, 5868334 bytes, temperature: kUnknown
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368896715332, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 5868334, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5830921, "index_size": 20227, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16133, "raw_key_size": 168557, "raw_average_key_size": 26, "raw_value_size": 5719362, "raw_average_value_size": 889, "num_data_blocks": 798, "num_entries": 6428, "num_filter_entries": 6428, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760368896, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.715691) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 5868334 bytes
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.717381) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 247.7 rd, 212.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 4.8 +0.0 blob) out(5.6 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 6943, records dropped: 515 output_compression: NoCompression
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.717402) EVENT_LOG_v1 {"time_micros": 1760368896717392, "job": 106, "event": "compaction_finished", "compaction_time_micros": 27645, "compaction_time_cpu_micros": 15876, "output_level": 6, "num_output_files": 1, "total_output_size": 5868334, "num_input_records": 6943, "num_output_records": 6428, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368896717742, "job": 106, "event": "table_file_deletion", "file_number": 172}
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760368896718234, "job": 106, "event": "table_file_deletion", "file_number": 170}
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.687520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.718311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.718319) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.718323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.718326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:21:36.718329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:21:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:21:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:36 standalone.localdomain podman[483099]: 2025-10-13 15:21:36.845075415 +0000 UTC m=+0.102664563 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, distribution-scope=public, version=9.6, vcs-type=git, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 13 15:21:36 standalone.localdomain podman[483099]: 2025-10-13 15:21:36.879800937 +0000 UTC m=+0.137390045 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_id=edpm, io.openshift.expose-services=, version=9.6, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350)
Oct 13 15:21:36 standalone.localdomain ceph-mon[29756]: pgmap v3211: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:37.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3212: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:37 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:21:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:21:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:21:38 standalone.localdomain ceph-mon[29756]: pgmap v3212: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3213: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:21:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-56b0412472b532a34bb78587014039d8acbf511963f662c6f865f972c75cba2f-merged.mount: Deactivated successfully.
Oct 13 15:21:40 standalone.localdomain ceph-mon[29756]: pgmap v3213: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:21:40 standalone.localdomain podman[483119]: 2025-10-13 15:21:40.555648592 +0000 UTC m=+0.070814989 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid)
Oct 13 15:21:40 standalone.localdomain podman[483119]: 2025-10-13 15:21:40.591874701 +0000 UTC m=+0.107041108 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible)
Oct 13 15:21:40 standalone.localdomain systemd[1]: tmp-crun.w0kjfZ.mount: Deactivated successfully.
Oct 13 15:21:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:21:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:40.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully.
Oct 13 15:21:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3214: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:41 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:21:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:21:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:21:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:42.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:21:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:42 standalone.localdomain podman[483139]: 2025-10-13 15:21:42.447735923 +0000 UTC m=+0.080206962 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:21:42 standalone.localdomain podman[483139]: 2025-10-13 15:21:42.477899683 +0000 UTC m=+0.110370742 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:21:42 standalone.localdomain podman[483139]: unhealthy
Oct 13 15:21:42 standalone.localdomain ceph-mon[29756]: pgmap v3214: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:21:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:21:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:21:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:21:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:21:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:21:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:21:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:21:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:21:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:21:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:21:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:21:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3215: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:21:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:21:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:21:43 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:43 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:43 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:21:43 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:21:44 standalone.localdomain ceph-mon[29756]: pgmap v3215: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3216: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:21:45 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:45.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:21:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:21:46 standalone.localdomain podman[483163]: 2025-10-13 15:21:46.171697108 +0000 UTC m=+0.077693673 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 13 15:21:46 standalone.localdomain podman[483163]: 2025-10-13 15:21:46.206209694 +0000 UTC m=+0.112206279 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:21:46 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:21:46 standalone.localdomain ceph-mon[29756]: pgmap v3216: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:21:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e410b4438f0c300953cb1d70d8d493e57beab26e7850ef1c9e5f40f4d7c5012-merged.mount: Deactivated successfully.
Oct 13 15:21:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:47 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:21:47 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:47 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:47.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3217: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:47 standalone.localdomain podman[467099]: time="2025-10-13T15:21:47Z" level=error msg="Getting root fs size for \"5949603598ddd49b1e1905db9f6e327ace8e6a9961ced30429e34a6b2c56743c\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": unmounting layer 19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8: replacing mount point \"/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged\": device or resource busy"
Oct 13 15:21:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a3b3404ad94bc57e2e9bbeee5710aeedc4c81e80a3e18de9d8dfbabe3d88730-merged.mount: Deactivated successfully.
Oct 13 15:21:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a3b3404ad94bc57e2e9bbeee5710aeedc4c81e80a3e18de9d8dfbabe3d88730-merged.mount: Deactivated successfully.
Oct 13 15:21:48 standalone.localdomain ceph-mon[29756]: pgmap v3217: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3218: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully.
Oct 13 15:21:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-988b334fbbc02e37b29bbd89994cd142c427e41613fb7debd89bcf8ffd5cd653-merged.mount: Deactivated successfully.
Oct 13 15:21:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-988b334fbbc02e37b29bbd89994cd142c427e41613fb7debd89bcf8ffd5cd653-merged.mount: Deactivated successfully.
Oct 13 15:21:50 standalone.localdomain ceph-mon[29756]: pgmap v3218: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a3b3404ad94bc57e2e9bbeee5710aeedc4c81e80a3e18de9d8dfbabe3d88730-merged.mount: Deactivated successfully.
Oct 13 15:21:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-81afee617d1c2ad30d64b9a6344392209bb792d6cbb7bddbfe6b6c66b765cc51-merged.mount: Deactivated successfully.
Oct 13 15:21:50 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:50.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3219: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:21:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:21:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:21:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:21:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:21:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-aace4bbe54ac2cb069ca4a3a3f3474f5c390cddd7bccc6c051665cd581662d30-merged.mount: Deactivated successfully.
Oct 13 15:21:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:21:52 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:52.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:52 standalone.localdomain ceph-mon[29756]: pgmap v3219: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:21:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:21:53 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:53 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3220: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:21:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:21:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:21:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:21:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:21:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:21:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=533 DF PROTO=TCP SPT=52462 DPT=9102 SEQ=3547969057 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBACA2E0000000001030307) 
Oct 13 15:21:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:21:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=534 DF PROTO=TCP SPT=52462 DPT=9102 SEQ=3547969057 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBACE360000000001030307) 
Oct 13 15:21:54 standalone.localdomain ceph-mon[29756]: pgmap v3220: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:21:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3221: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:21:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:21:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully.
Oct 13 15:21:55 standalone.localdomain systemd[1]: tmp-crun.wd07u5.mount: Deactivated successfully.
Oct 13 15:21:55 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:55.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:55 standalone.localdomain podman[483183]: 2025-10-13 15:21:55.823435458 +0000 UTC m=+0.087121987 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:21:55 standalone.localdomain podman[483183]: 2025-10-13 15:21:55.858677987 +0000 UTC m=+0.122364516 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:21:55 standalone.localdomain podman[483182]: 2025-10-13 15:21:55.860092611 +0000 UTC m=+0.124475932 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:21:55 standalone.localdomain podman[483182]: 2025-10-13 15:21:55.940213339 +0000 UTC m=+0.204596670 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:21:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=535 DF PROTO=TCP SPT=52462 DPT=9102 SEQ=3547969057 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBAD6360000000001030307) 
Oct 13 15:21:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:21:56 standalone.localdomain ceph-mon[29756]: pgmap v3221: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:56 standalone.localdomain systemd[1]: tmp-crun.wA6Pyy.mount: Deactivated successfully.
Oct 13 15:21:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-aace4bbe54ac2cb069ca4a3a3f3474f5c390cddd7bccc6c051665cd581662d30-merged.mount: Deactivated successfully.
Oct 13 15:21:57 standalone.localdomain nova_compute[456855]: 2025-10-13 15:21:57.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:21:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3222: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:21:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:21:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:21:57 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:21:57 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:21:58 standalone.localdomain ceph-mon[29756]: pgmap v3222: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:21:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:21:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3223: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:21:59 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:21:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:21:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:21:59 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:59 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:21:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:22:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=536 DF PROTO=TCP SPT=52462 DPT=9102 SEQ=3547969057 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBAE5F70000000001030307) 
Oct 13 15:22:00 standalone.localdomain ceph-mon[29756]: pgmap v3223: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:00 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:00.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3224: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:22:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:22:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-be825dd8871dd84d3ff7059da1d66b913a6d0660f0a5515a236de88c784717e2-merged.mount: Deactivated successfully.
Oct 13 15:22:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:22:02 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:02.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:22:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:22:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:22:02 standalone.localdomain ceph-mon[29756]: pgmap v3224: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:22:02 standalone.localdomain podman[483225]: 2025-10-13 15:22:02.82271218 +0000 UTC m=+0.084455214 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:22:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fdd31d62755eb7b47d7ccfec1ef24eab469c303eedbb2ace1125337c3205d7ad-merged.mount: Deactivated successfully.
Oct 13 15:22:02 standalone.localdomain podman[483226]: 2025-10-13 15:22:02.883036591 +0000 UTC m=+0.140637526 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:22:02 standalone.localdomain podman[483226]: 2025-10-13 15:22:02.896835451 +0000 UTC m=+0.154436436 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 13 15:22:02 standalone.localdomain podman[483225]: 2025-10-13 15:22:02.908005749 +0000 UTC m=+0.169748783 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 13 15:22:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3225: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully.
Oct 13 15:22:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-988b334fbbc02e37b29bbd89994cd142c427e41613fb7debd89bcf8ffd5cd653-merged.mount: Deactivated successfully.
Oct 13 15:22:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-988b334fbbc02e37b29bbd89994cd142c427e41613fb7debd89bcf8ffd5cd653-merged.mount: Deactivated successfully.
Oct 13 15:22:04 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:22:04 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:22:04 standalone.localdomain ceph-mon[29756]: pgmap v3225: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:22:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:22:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:22:04 standalone.localdomain podman[483261]: 2025-10-13 15:22:04.897167597 +0000 UTC m=+0.058006909 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, name=rhosp17/openstack-swift-object, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, container_name=swift_object_server, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64)
Oct 13 15:22:04 standalone.localdomain systemd[1]: tmp-crun.rEucPu.mount: Deactivated successfully.
Oct 13 15:22:04 standalone.localdomain podman[483262]: 2025-10-13 15:22:04.914630742 +0000 UTC m=+0.072209792 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, batch=17.1_20250721.1, version=17.1.9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905)
Oct 13 15:22:04 standalone.localdomain podman[483263]: 2025-10-13 15:22:04.988702021 +0000 UTC m=+0.144237138 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, architecture=x86_64, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, container_name=swift_account_server, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1, batch=17.1_20250721.1, distribution-scope=public, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:22:05 standalone.localdomain podman[483261]: 2025-10-13 15:22:05.078444399 +0000 UTC m=+0.239283711 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, version=17.1.9, container_name=swift_object_server, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, com.redhat.component=openstack-swift-object-container, tcib_managed=true)
Oct 13 15:22:05 standalone.localdomain podman[483262]: 2025-10-13 15:22:05.101407995 +0000 UTC m=+0.258987025 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, batch=17.1_20250721.1, release=1, container_name=swift_container_server, name=rhosp17/openstack-swift-container, tcib_managed=true, build-date=2025-07-21T15:54:32, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 15:22:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3226: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:05 standalone.localdomain podman[483263]: 2025-10-13 15:22:05.162810399 +0000 UTC m=+0.318345506 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.expose-services=, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., architecture=x86_64, container_name=swift_account_server, release=1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4)
Oct 13 15:22:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:22:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-aace4bbe54ac2cb069ca4a3a3f3474f5c390cddd7bccc6c051665cd581662d30-merged.mount: Deactivated successfully.
Oct 13 15:22:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:22:05 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:05.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:22:05 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:22:05 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:22:06 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:22:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:22:06 standalone.localdomain ceph-mon[29756]: pgmap v3226: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:22:06.938 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:22:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:22:06.938 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:22:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:22:06.939 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:22:07 standalone.localdomain podman[467099]: time="2025-10-13T15:22:07Z" level=error msg="Getting root fs size for \"5bfc43f1cfb42b4c062666f5ece255df6cf78effb6f4723f1da601dd76f689bb\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": unmounting layer d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610: replacing mount point \"/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged\": device or resource busy"
Oct 13 15:22:07 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:07.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3227: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:22:07 standalone.localdomain podman[483343]: 2025-10-13 15:22:07.547343044 +0000 UTC m=+0.116659588 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-type=git)
Oct 13 15:22:07 standalone.localdomain podman[483343]: 2025-10-13 15:22:07.643171881 +0000 UTC m=+0.212488575 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Oct 13 15:22:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:22:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:08 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:22:08 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:08 standalone.localdomain ceph-mon[29756]: pgmap v3227: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fdd31d62755eb7b47d7ccfec1ef24eab469c303eedbb2ace1125337c3205d7ad-merged.mount: Deactivated successfully.
Oct 13 15:22:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f6698bdef4914a669560655838bf20b3925b44012adb424488f64497bc052998-merged.mount: Deactivated successfully.
Oct 13 15:22:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3228: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:22:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-aace4bbe54ac2cb069ca4a3a3f3474f5c390cddd7bccc6c051665cd581662d30-merged.mount: Deactivated successfully.
Oct 13 15:22:10 standalone.localdomain ceph-mon[29756]: pgmap v3228: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:22:10 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:10.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:22:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3229: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:22:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:22:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:22:11 standalone.localdomain podman[483364]: 2025-10-13 15:22:11.818674734 +0000 UTC m=+0.087462787 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 13 15:22:11 standalone.localdomain podman[483364]: 2025-10-13 15:22:11.85798572 +0000 UTC m=+0.126773773 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 15:22:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:22:12 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:12.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:12 standalone.localdomain ceph-mon[29756]: pgmap v3229: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:22:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:22:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:22:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:22:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:22:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:22:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:22:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:22:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:22:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:22:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:22:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:22:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:22:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3230: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:22:13 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:22:13 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:22:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:22:13 standalone.localdomain podman[483385]: 2025-10-13 15:22:13.976831041 +0000 UTC m=+0.069658632 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:22:13 standalone.localdomain podman[483385]: 2025-10-13 15:22:13.984706426 +0000 UTC m=+0.077534017 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:22:14 standalone.localdomain podman[483385]: unhealthy
Oct 13 15:22:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:14 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:22:14 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:22:14 standalone.localdomain ceph-mon[29756]: pgmap v3230: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:22:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3231: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:22:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-be825dd8871dd84d3ff7059da1d66b913a6d0660f0a5515a236de88c784717e2-merged.mount: Deactivated successfully.
Oct 13 15:22:15 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:15.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:22:16 standalone.localdomain ceph-mon[29756]: pgmap v3231: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:22:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fdd31d62755eb7b47d7ccfec1ef24eab469c303eedbb2ace1125337c3205d7ad-merged.mount: Deactivated successfully.
Oct 13 15:22:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fdd31d62755eb7b47d7ccfec1ef24eab469c303eedbb2ace1125337c3205d7ad-merged.mount: Deactivated successfully.
Oct 13 15:22:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3232: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:22:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:17.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:17 standalone.localdomain podman[483408]: 2025-10-13 15:22:17.224890709 +0000 UTC m=+0.050493766 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:22:17 standalone.localdomain podman[483408]: 2025-10-13 15:22:17.260912712 +0000 UTC m=+0.086515759 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:22:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:22:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c691834ad3a3edd918cbfe39bcf9284ebfbeb04084cbf00c4514adb5db703ef9-merged.mount: Deactivated successfully.
Oct 13 15:22:18 standalone.localdomain account-server[114555]: 172.20.0.100 - - [13/Oct/2025:15:22:18 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx6fdd8ed931ff46e6959a2-0068ed192a" "proxy-server 2" 0.0003 "-" 20 -
Oct 13 15:22:18 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx6fdd8ed931ff46e6959a2-0068ed192a)
Oct 13 15:22:18 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx6fdd8ed931ff46e6959a2-0068ed192a)
Oct 13 15:22:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:22:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:22:18 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:18 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:18 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:22:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:22:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3979328111' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:22:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:22:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3979328111' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:22:18 standalone.localdomain ceph-mon[29756]: pgmap v3232: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3979328111' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:22:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3979328111' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:22:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3233: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:22:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:20 standalone.localdomain account-server[114566]: 172.20.0.100 - - [13/Oct/2025:15:22:20 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx52512653b92b421da918c-0068ed192c" "proxy-server 2" 0.0003 "-" 22 -
Oct 13 15:22:20 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx52512653b92b421da918c-0068ed192c)
Oct 13 15:22:20 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx52512653b92b421da918c-0068ed192c)
Oct 13 15:22:20 standalone.localdomain ceph-mon[29756]: pgmap v3233: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:20 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:20.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:20.989 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:22:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:20.991 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:22:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:20.992 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:22:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:20.996 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 64 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.002 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 49 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd8778282-fa7a-461f-8787-bedcc796cb9b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 64, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:22:20.992270', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '6985c856-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.181495185, 'message_signature': 'f88e535e58d6e61de50dcaf00bfe7aa0224defc9e3c74a413ea92bf63fb87625'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 49, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:22:20.992270', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '69869448-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.186606183, 'message_signature': 'bf1f5f7f2b92556d247ab44101066d304f1259d1a6ad76cf663d3d916969f7fb'}]}, 'timestamp': '2025-10-13 15:22:21.002687', '_unique_id': 'ef228588f9984e04abf39a0d627cbde1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.004 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.039 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 857967408 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.039 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 155394833 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.040 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 63114260 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.063 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.063 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7c0e420c-8327-4efe-b265-1deef949957a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 857967408, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:22:21.004858', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '698c3f42-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': '80f975f45719c02ce7c69fb7131c41cc9366c823d6377e52ecf440023a52774c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 155394833, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:22:21.004858', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '698c4b4a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': '660001f2a57d5cd5183b6aa107ec9cc6051aa40dd703559c008ea20c0ce9e85b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 63114260, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:22:21.004858', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '698c5310-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': 'a4a4c3a8003bcf0e7ef6a5279237f309113cc5c07aedc9ee14522a9f8a3fe38d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:22:21.004858', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '698fec32-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.229426639, 'message_signature': 'b039c21d2f5ca8020a5b9f92bde656b5b43e14de018fa9d9310c418e536c6bd6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:22:21.004858', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '698ff9de-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.229426639, 'message_signature': '68bf922acd8ff93b1c0febe4abebe44d0a0ea681f09d784ef40c813ed3b5a6e6'}]}, 'timestamp': '2025-10-13 15:22:21.064181', '_unique_id': '103bb86e99ac4809a2ebbed5133aac7f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.065 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.066 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.090 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 28680000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.108 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 28510000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bcd65d24-8128-4dae-8866-6893d335ad00', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 28680000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:22:21.066172', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '6993ffa2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.279193641, 'message_signature': '4a943afa60c20620c3575b8521cfc070e4f4c8f13fedde569b822ab0d6bc84dc'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 28510000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:22:21.066172', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '6996dc18-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.297852252, 'message_signature': 'e4b26dda726129dd1e23899daa468f9bf56c9a9875e4cf5e3b0aff812813068c'}]}, 'timestamp': '2025-10-13 15:22:21.109373', '_unique_id': 'abc435962f77409087920f065c3abb1e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.110 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.111 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:22:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.135 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.136 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.136 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.148 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.148 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'afe60c33-9a06-4a21-a188-918750b1b0cd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:22:21.111533', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '699aff96-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.300772843, 'message_signature': '40b778ac43c47daa5e070807fdd5d4c6062e704c7f9b75860a6baff2f9157a6d'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:22:21.111533', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '699b0b26-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.300772843, 'message_signature': 'b9a01cd1db80e0f904882e89143f1196caa9d3156a27c9d67782ef97c3ea7681'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:22:21.111533', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '699b142c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.300772843, 'message_signature': '695a1a561d7ae738b0dbce1315c754e86733ca35f0b27349854cd1b8a38be879'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:22:21.111533', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '699ce716-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.326112503, 'message_signature': 'bb00621c38ac0e4bd29b0a11e57574d8b218adec93ea258448a88d3a692abe82'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:22:21.111533', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '699cf13e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.326112503, 'message_signature': 'a612cb969f16fbff7b6b7cc145e019482aa02f20587f65241b18886c87e3340e'}]}, 'timestamp': '2025-10-13 15:22:21.149148', '_unique_id': '033213ccdf794eeabfc3b7e80f5a0d58'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.150 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.151 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.151 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.151 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b3403921-c0c8-4a19-9262-5adf984bf050', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:22:21.151121', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '699d4aa8-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.181495185, 'message_signature': 'cf18a523193cb604c75d3346088fbd551519a1601bbbf0e8e60783bbb15e9e75'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:22:21.151121', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '699d576e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.186606183, 'message_signature': '5848fd025ad03a376723efc1f4900f79be8df2055696a22975bc53c1bc390cb3'}]}, 'timestamp': '2025-10-13 15:22:21.151791', '_unique_id': '131ebf87ac39493498542c0db03cdc71'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.152 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.153 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.153 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.153 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.153 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '26573358-b9c7-4529-9426-71d153d5d040', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:22:21.153558', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '699da958-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.181495185, 'message_signature': '114bbe95965fd68fea18cfb3fc02d335a08ee28e3628c79769691fa52a548668'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:22:21.153558', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '699db5e2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.186606183, 'message_signature': '8e0c94214a588460fb337486fb3c29718ae3214d83736511391e75799d7588cc'}]}, 'timestamp': '2025-10-13 15:22:21.154207', '_unique_id': 'c48f7e24d75c4392b7476fa7e73aa687'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.154 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.155 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.155 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 501653653 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.156 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.156 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.156 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.157 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ebe9e25f-4066-4d1a-a914-dbd0bf9e7721', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 501653653, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:22:21.155771', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '699e0056-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': 'a699a3cabd19fd3f23b01576c7c2704a5e780c172ad3be0066448402abdbc193'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:22:21.155771', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '699e0ba0-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': '3edc868141954542930aad69a45a317319a2ff5d27fbf754bafc13fd9cd65f53'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:22:21.155771', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '699e1ec4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': 'f71498a2fc5b056677e4f94dcc62e2326354de577741e75781a0cc5dcddbdcaf'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:22:21.155771', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '699e29be-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.229426639, 'message_signature': '272cf56ad13ea19b27e4d9befaf8699040b23c9f66bf5978fb6ee843e1e316d2'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:22:21.155771', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '699e3490-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.229426639, 'message_signature': 'f35dd92ca697625d7eb9d943cf09370d5a65dfae2fbc6bf978bb797684e43346'}]}, 'timestamp': '2025-10-13 15:22:21.157435', '_unique_id': '15ccbd3e59e8405fa7290eb5b1c90688'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.158 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.159 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.159 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.159 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.159 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.160 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6f56ddb9-a4ed-4eb4-ba58-f4041de95e12', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:22:21.159065', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '699e7fea-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': '93508fc4a4e6bf1b7b2271b7496092dd564d15444a00e160a9094e12251a439a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:22:21.159065', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '699e8bd4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': 'dd897a7cedfd1b763b91f8f5a0984bb6ad8b5485ab615c8f17083185c84a5186'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:22:21.159065', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '699e98fe-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': '06d3acac68cd009a2a7e6252801d487020d095463066abbfde81b540fcec2980'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:22:21.159065', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '699ea3b2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.229426639, 'message_signature': '2c90fd32151ca666f51d3b1bb1152c89047a4df977a1f9cca1aefe5edb931c62'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:22:21.159065', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '699eabbe-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.229426639, 'message_signature': '23a4fade1f6119f55c331f81fefa19b68823613f9ec3dc943357e18c4ef3d289'}]}, 'timestamp': '2025-10-13 15:22:21.160436', '_unique_id': '15d374d735d545a0a71cf6c37c736ee5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.161 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 4738 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3234: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 3576 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '16b4c64c-905e-47dc-bffe-a15ee1eb6d09', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4738, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:22:21.161917', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '699eee3a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.181495185, 'message_signature': 'eff59906772168e235e074325b79331413aa9359180cfc9fe156ca02b47593a1'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3576, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:22:21.161917', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '699ef880-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.186606183, 'message_signature': '871da02decd745100fe7c0211b244adee9c6a4191a3edc94c84e6cf5d8f2e8a7'}]}, 'timestamp': '2025-10-13 15:22:21.162411', '_unique_id': '54ed08bcdc22456e8d8984c0a56d9aef'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.162 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.163 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.163 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 52.27734375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.163 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2dd2c9f6-5417-4ca3-a25f-f3d48e62d75c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.27734375, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:22:21.163578', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '699f2ecc-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.279193641, 'message_signature': '3011cd934247bcfb0a86329f1a1790e36108fc004985584c2deb868ec0b4a261'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:22:21.163578', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '699f39d0-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.297852252, 'message_signature': '87bbe81e1b9a0832558508019d41aef076418d3c4493d0ad9b3db7b050585731'}]}, 'timestamp': '2025-10-13 15:22:21.164074', '_unique_id': '7e61e424ebf04d52ab680704b819aa8d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.164 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.165 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.165 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.165 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '86f84d14-d1f8-4d03-8af0-06a0c476bdc7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:22:21.165329', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '699f7b84-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.181495185, 'message_signature': '794830f17710146f29114e8e21bb782c4596c0012ee741abbf5a580a52bc1ff6'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:22:21.165329', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '699f8390-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.186606183, 'message_signature': 'e9956efda69159e5c22475cc950fa1a89941b38f0b02fb9d6dac9d72ee21f62e'}]}, 'timestamp': '2025-10-13 15:22:21.166004', '_unique_id': '46f92026d7e84e52891b8bb740f1684a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.166 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.167 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.167 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 4526 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.167 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2619cee6-df1a-4e1d-929b-a84eb47d6f1f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4526, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:22:21.167165', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '699fbad6-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.181495185, 'message_signature': 'c7de1dd81513ae0b25c08ebdb6bd76f7227e3b09328179218ec5014f5aafe033'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:22:21.167165', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '699fc42c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.186606183, 'message_signature': '43ff2eb8474d8de1c70c658aa2615c075c0d954ed22107aa5abde612ef7e0cd6'}]}, 'timestamp': '2025-10-13 15:22:21.167634', '_unique_id': 'd4497d6ab5ae43c4a18ca062d3545f06'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.168 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 55 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.169 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fb416c85-44cc-4ff8-b586-468909fe51d3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 55, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:22:21.168834', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '699ffd2a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.181495185, 'message_signature': 'b1d097c39a92dd6eae1c6175eb85adf661d17921de6cabe7516868227f3197ff'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:22:21.168834', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '69a00950-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.186606183, 'message_signature': 'd900a9c2fd58e7b88464465c428c2c9a371fb3e19bc525b34e88013874657bfa'}]}, 'timestamp': '2025-10-13 15:22:21.169430', '_unique_id': '9e50ff087da243f592fda4f56d23fb3c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.170 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.171 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.171 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 2321920 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.171 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.171 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd96c74e0-efc8-499f-bab5-8022fd4f46ca', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:22:21.170781', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '69a047f8-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': 'ceb306fd3fd7360a3899dbab0483a319376f0dcb18c328b3bb0ee9b00a77974e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:22:21.170781', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '69a05248-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': '1abb99c3d7da5b462dfa7af23db748f2c41f065d30175cecd5a4e0230d84afc7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2321920, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:22:21.170781', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '69a05cd4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': '014297f2d35a39bcb9408692197b67502533ed9f206aa0d77894691f3819bbe6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:22:21.170781', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '69a066d4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.229426639, 'message_signature': '45cb267b61dad5c304be1be89d7697a937ce2e7f05d5e629526231c594a3060e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:22:21.170781', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '69a07052-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.229426639, 'message_signature': '7d2c41d5c8beb8f95b67a5c9db99a31a595a135279b88133464620e43325ddcf'}]}, 'timestamp': '2025-10-13 15:22:21.172057', '_unique_id': '15783a762ddc4224861931b43791ed45'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.172 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.173 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.173 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 495 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.173 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.174 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.174 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.174 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bb881ee1-684b-4164-8e6a-7afd81e3341b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 495, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:22:21.173583', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '69a0b6de-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': '1569b914b9a75ae3398bbd857d92f979e22a05d1db1fad33360283ff318ee53b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:22:21.173583', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '69a0c142-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': '8ef0995118be6895b34627bf2e52d2587bcd43e15ce0319717443cc59a626def'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:22:21.173583', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '69a0cbe2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': '01de063a151fce8a80a6b4864fc06963cb5ff38b7291cf841031ac80f08d9157'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:22:21.173583', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '69a0d66e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.229426639, 'message_signature': 'e694ad3e7bc59e5b12dd9284d31c0e0e3b19c6ca4db02714fa012f5f6a12a6a5'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:22:21.173583', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '69a0ddbc-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.229426639, 'message_signature': '847f4ea4a69a8006655fe4781a546a61f3cabe83530605df6198dfec8a2ad1f6'}]}, 'timestamp': '2025-10-13 15:22:21.174855', '_unique_id': '5b8c040f04164d2a995a4358cfcdad1e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.175 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.176 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.176 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.176 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd481bdde-e4c1-4724-9e50-905eef5be58d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:22:21.176164', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '69a11c96-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.181495185, 'message_signature': '06e46443f332db831ac313298f5dc49b9e2ce11e3ebda2f31110636de2d17275'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:22:21.176164', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '69a12830-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.186606183, 'message_signature': 'c5a044bd14ce6abac793de92ecf8b3dc60b7ec482c2f41cfcbb520c6fd330d19'}]}, 'timestamp': '2025-10-13 15:22:21.176741', '_unique_id': '8820c316f63f45ec800da64998ca8d95'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.177 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.178 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.178 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.178 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.178 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.178 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'db7e4dda-28d1-44df-9c77-3a4a1fe70ac8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:22:21.178039', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '69a163a4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.300772843, 'message_signature': '1b1abdbb7ee72aeef30415880f511aac4540cd5fe3094cf838c35128725dc7a5'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:22:21.178039', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '69a16b2e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.300772843, 'message_signature': '65330bf43bd8d2ce2894999812dda1006e9358c0d832cf1f76db16d44c944430'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:22:21.178039', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '69a172cc-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.300772843, 'message_signature': '17f77ae8b4c7a182acb2bdeaf463fcc5411ee6c98555a3b2bca8a7dba94dcfc9'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:22:21.178039', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '69a179e8-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.326112503, 'message_signature': '4f8b9228f876574e206307b37ca254f9fedb082ed51ee6123fc56c450d78c771'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:22:21.178039', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '69a180b4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.326112503, 'message_signature': '9e454328ed1a1ff445019289e0a3d220c42fa31194d09769a68a0eb039ba1ab7'}]}, 'timestamp': '2025-10-13 15:22:21.179008', '_unique_id': '15e1adfbcd2e4a00ae19cdd8b276484f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.179 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.180 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.180 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.180 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.180 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.180 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '47531f7b-44a0-4f91-ac39-cc3e1a4847f7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:22:21.180378', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '69a1bfa2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': 'a5f15018530097b67d00a665b44dbd6f302102f7ae87f476862af73b010d89a1'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:22:21.180378', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '69a1c768-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': 'aec933801e26c69234ddfa8afe3f57077bd9a6f8eb628c2c48afcf248db02e85'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 108, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:22:21.180378', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '69a1ce7a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.194100307, 'message_signature': '79963f433e0b49c58a992807ff03f7ee41228db6864881bbf0d119c3b38219c1'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:22:21.180378', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '69a1d5c8-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.229426639, 'message_signature': '1748afa2b61c0601283456a8c00bb5ca123ce01d8c962f7ed2db6a0ff27488f0'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:22:21.180378', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '69a1dd5c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.229426639, 'message_signature': '9b6b5462a0858c9f85572d8564a279620e1374e7eb1e6378f110ef7f53afb188'}]}, 'timestamp': '2025-10-13 15:22:21.181364', '_unique_id': 'f7771bb1a9c9446c8948b6a907bed845'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.181 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.182 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.182 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.182 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '00f80c25-02c5-4c0b-a2d3-5c136830d01a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:22:21.182549', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '69a213da-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.181495185, 'message_signature': 'dac3cccba8360cfd06fe3d8d26a7c308d6601139134beb9a66946787d62f6271'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:22:21.182549', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '69a21bc8-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.186606183, 'message_signature': '3371d33679ad19f2f353caed229534f2421a169ef504957bb126049c632ff538'}]}, 'timestamp': '2025-10-13 15:22:21.182970', '_unique_id': 'c96ab55306454a37adfd66f18390aedc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.183 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.184 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.184 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '00f90693-330b-4085-8822-588040cda326', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:22:21.184070', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '69a250fc-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.181495185, 'message_signature': '6d4b34c9d0a5412df7625e8c574c6af8bca4a252b19bf3ec9bb92850bdddb996'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:22:21.184070', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '69a25a8e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.186606183, 'message_signature': '61c3fca374ccb3c1cb79ba7d3cffeb23905868a895be553c02e67d0306bd0e48'}]}, 'timestamp': '2025-10-13 15:22:21.184623', '_unique_id': '26d46f01b12b4a259f80bc25721ed897'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.185 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.186 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.186 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.186 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3317ee15-4f99-4ab9-95ac-40acd3d1e664', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:22:21.185669', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '69a28d74-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.300772843, 'message_signature': 'f418ee7799e58ae00b3b588c879085b9c26ee55a4cd11a1cec7c230b97a6f2d7'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:22:21.185669', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '69a297e2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.300772843, 'message_signature': 'b2c335a78352acf02d4728cc1fe3308b1d1446e69dac6b14081c3f028b021b07'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:22:21.185669', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '69a2a066-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.300772843, 'message_signature': 'ec38763ec30573615411dd3655a0082fd9a018e6a0cd2d7539e1fddcec0ba9d7'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:22:21.185669', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '69a2aa2a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.326112503, 'message_signature': '01121a90e241dc58e428f1690a023f745be76a706ad40a6c1e323739ec260ed3'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:22:21.185669', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '69a2b182-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8384.326112503, 'message_signature': '5d34872e10d11b52781059cc790fc9c245f234d1807ba7de2d674e28e3514f21'}]}, 'timestamp': '2025-10-13 15:22:21.186794', '_unique_id': 'd9a7fdf840e1492e9e7fcdd85ff4b0a9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:22:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:22:21.187 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:22:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:22:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:22:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:22.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fdd31d62755eb7b47d7ccfec1ef24eab469c303eedbb2ace1125337c3205d7ad-merged.mount: Deactivated successfully.
Oct 13 15:22:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f6698bdef4914a669560655838bf20b3925b44012adb424488f64497bc052998-merged.mount: Deactivated successfully.
Oct 13 15:22:22 standalone.localdomain ceph-mon[29756]: pgmap v3234: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3235: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:22:23
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', 'images', 'vms', 'manila_data', 'manila_metadata', 'backups', '.mgr']
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:22:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:22:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36447 DF PROTO=TCP SPT=56062 DPT=9102 SEQ=430934412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBB3F5F0000000001030307) 
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:22:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:22:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36448 DF PROTO=TCP SPT=56062 DPT=9102 SEQ=430934412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBB43770000000001030307) 
Oct 13 15:22:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:24 standalone.localdomain ceph-mon[29756]: pgmap v3235: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:24 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:24 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:24 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:22:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3236: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:22:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:22:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:25 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:25.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:22:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36449 DF PROTO=TCP SPT=56062 DPT=9102 SEQ=430934412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBB4B760000000001030307) 
Oct 13 15:22:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:22:26 standalone.localdomain ceph-mon[29756]: pgmap v3236: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:22:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.117 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.118 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.118 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.119 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.119 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:22:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3237: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:22:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/888930236' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.568 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.662 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.662 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.663 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.669 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.669 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.808 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.809 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9823MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.809 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.810 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.888 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.889 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.889 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.889 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:22:27 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/888930236' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:22:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:22:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:22:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:27.945 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:22:28 standalone.localdomain podman[483450]: 2025-10-13 15:22:28.008626811 +0000 UTC m=+0.076274989 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:22:28 standalone.localdomain podman[483450]: 2025-10-13 15:22:28.013196454 +0000 UTC m=+0.080844652 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:22:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:22:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-20ffedf2813c4567ec9b5d5ba8ae086d18f1a9d9860d73e5badf30326d6edd4c-merged.mount: Deactivated successfully.
Oct 13 15:22:28 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:22:28 standalone.localdomain podman[483451]: 2025-10-13 15:22:28.326786781 +0000 UTC m=+0.392102236 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:22:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:22:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/256807372' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:22:28 standalone.localdomain podman[483451]: 2025-10-13 15:22:28.352477462 +0000 UTC m=+0.417792917 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 13 15:22:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:28.364 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:22:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:28.369 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:22:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:28.381 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:22:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:28.382 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:22:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:28.383 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:22:28 standalone.localdomain ceph-mon[29756]: pgmap v3237: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:22:28 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/256807372' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:22:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:22:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3238: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:22:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:29.382 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:22:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:29.383 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:22:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:29.383 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:22:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:29.383 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:22:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d6e05f96628a1d0a392e4c07ba31d7ae388e98a981a4c87765214b2a9f7eec51-merged.mount: Deactivated successfully.
Oct 13 15:22:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-60bcce74fc42c073a5b67ee9b805081d10dae6671b0a07bce7e651fcfbfa1ee6-merged.mount: Deactivated successfully.
Oct 13 15:22:29 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:22:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-60bcce74fc42c073a5b67ee9b805081d10dae6671b0a07bce7e651fcfbfa1ee6-merged.mount: Deactivated successfully.
Oct 13 15:22:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:30.092 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:22:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:30.092 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:22:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:30.092 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:22:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:22:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:22:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36450 DF PROTO=TCP SPT=56062 DPT=9102 SEQ=430934412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBB5B370000000001030307) 
Oct 13 15:22:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:30.868 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:22:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:30.869 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:22:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:30.869 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:22:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:30.869 2 DEBUG nova.objects.instance [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:22:30 standalone.localdomain ceph-mon[29756]: pgmap v3238: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:22:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:31.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:22:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d6e05f96628a1d0a392e4c07ba31d7ae388e98a981a4c87765214b2a9f7eec51-merged.mount: Deactivated successfully.
Oct 13 15:22:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d6e05f96628a1d0a392e4c07ba31d7ae388e98a981a4c87765214b2a9f7eec51-merged.mount: Deactivated successfully.
Oct 13 15:22:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3239: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:22:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:22:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:22:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:32.028 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:22:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:32.043 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:22:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:32.043 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:22:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:32.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:22:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:22:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:32.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:22:32 standalone.localdomain ceph-mon[29756]: pgmap v3239: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:22:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:22:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c691834ad3a3edd918cbfe39bcf9284ebfbeb04084cbf00c4514adb5db703ef9-merged.mount: Deactivated successfully.
Oct 13 15:22:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 13 15:22:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3240: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:22:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-60bcce74fc42c073a5b67ee9b805081d10dae6671b0a07bce7e651fcfbfa1ee6-merged.mount: Deactivated successfully.
Oct 13 15:22:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:22:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:22:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:22:34 standalone.localdomain systemd[1]: tmp-crun.cDf1EG.mount: Deactivated successfully.
Oct 13 15:22:34 standalone.localdomain podman[483517]: 2025-10-13 15:22:34.826981813 +0000 UTC m=+0.092683121 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 13 15:22:34 standalone.localdomain podman[483517]: 2025-10-13 15:22:34.834682873 +0000 UTC m=+0.100384201 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:22:34 standalone.localdomain ceph-mon[29756]: pgmap v3240: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:22:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3241: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:22:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:22:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:36.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:22:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:22:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:22:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:22:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:22:36 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:22:36 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:36 standalone.localdomain podman[483518]: 2025-10-13 15:22:36.607659421 +0000 UTC m=+1.866292108 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:22:36 standalone.localdomain podman[483518]: 2025-10-13 15:22:36.642591079 +0000 UTC m=+1.901223746 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:22:36 standalone.localdomain podman[483544]: 2025-10-13 15:22:36.65382968 +0000 UTC m=+0.529255723 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, name=rhosp17/openstack-swift-object, release=1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, container_name=swift_object_server)
Oct 13 15:22:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:22:36 standalone.localdomain podman[483546]: 2025-10-13 15:22:36.715216024 +0000 UTC m=+0.585545388 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, tcib_managed=true, version=17.1.9, name=rhosp17/openstack-swift-account, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T16:11:22)
Oct 13 15:22:36 standalone.localdomain podman[483545]: 2025-10-13 15:22:36.782916064 +0000 UTC m=+0.654751234 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, build-date=2025-07-21T15:54:32, batch=17.1_20250721.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, name=rhosp17/openstack-swift-container, container_name=swift_container_server, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, release=1, distribution-scope=public, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team)
Oct 13 15:22:36 standalone.localdomain podman[483544]: 2025-10-13 15:22:36.852875415 +0000 UTC m=+0.728301458 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, tcib_managed=true, batch=17.1_20250721.1, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, version=17.1.9, container_name=swift_object_server, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:22:36 standalone.localdomain podman[483546]: 2025-10-13 15:22:36.950779748 +0000 UTC m=+0.821109092 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-account, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, io.openshift.expose-services=, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, io.buildah.version=1.33.12)
Oct 13 15:22:36 standalone.localdomain ceph-mon[29756]: pgmap v3241: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:22:37 standalone.localdomain podman[483545]: 2025-10-13 15:22:37.000387415 +0000 UTC m=+0.872222575 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, vcs-type=git, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, version=17.1.9, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, 
summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:22:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3242: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:37.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:37 standalone.localdomain ceph-mon[29756]: pgmap v3242: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:22:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:22:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:22:38 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:22:38 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:22:38 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:38 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:38 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:22:38 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:22:38 standalone.localdomain podman[483633]: 2025-10-13 15:22:38.741237503 +0000 UTC m=+0.130440195 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 15:22:38 standalone.localdomain podman[483633]: 2025-10-13 15:22:38.751863099 +0000 UTC m=+0.141065831 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-type=git, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, architecture=x86_64, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Oct 13 15:22:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3243: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:39 standalone.localdomain systemd[1]: tmp-crun.mKIqx6.mount: Deactivated successfully.
Oct 13 15:22:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:22:39 standalone.localdomain podman[467099]: time="2025-10-13T15:22:39Z" level=error msg="Getting root fs size for \"66ee3689e13827480617643d9ed02ae50580d5a94228aa19aa264db63ae93a16\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": unmounting layer 19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8: replacing mount point \"/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged\": device or resource busy"
Oct 13 15:22:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:39 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:39 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:22:39 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:40 standalone.localdomain ceph-mon[29756]: pgmap v3243: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:22:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:41.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3244: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:22:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:22:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f93319d9ea9a072f29f7188e9587784800c667ed7ca47085ef153ed67451a066-merged.mount: Deactivated successfully.
Oct 13 15:22:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:42.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:42 standalone.localdomain ceph-mon[29756]: pgmap v3244: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:22:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:22:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:22:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:22:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:22:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:22:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:22:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:22:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:22:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:22:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:22:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:22:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:22:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-20ffedf2813c4567ec9b5d5ba8ae086d18f1a9d9860d73e5badf30326d6edd4c-merged.mount: Deactivated successfully.
Oct 13 15:22:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3245: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-20ffedf2813c4567ec9b5d5ba8ae086d18f1a9d9860d73e5badf30326d6edd4c-merged.mount: Deactivated successfully.
Oct 13 15:22:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:22:43 standalone.localdomain systemd[1]: tmp-crun.ELKi3p.mount: Deactivated successfully.
Oct 13 15:22:43 standalone.localdomain podman[483658]: 2025-10-13 15:22:43.434903367 +0000 UTC m=+0.066972667 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:22:43 standalone.localdomain podman[483658]: 2025-10-13 15:22:43.469965263 +0000 UTC m=+0.102034573 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 13 15:22:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d6e05f96628a1d0a392e4c07ba31d7ae388e98a981a4c87765214b2a9f7eec51-merged.mount: Deactivated successfully.
Oct 13 15:22:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-60bcce74fc42c073a5b67ee9b805081d10dae6671b0a07bce7e651fcfbfa1ee6-merged.mount: Deactivated successfully.
Oct 13 15:22:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:22:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-60bcce74fc42c073a5b67ee9b805081d10dae6671b0a07bce7e651fcfbfa1ee6-merged.mount: Deactivated successfully.
Oct 13 15:22:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:22:44 standalone.localdomain ceph-mon[29756]: pgmap v3245: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:44 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:22:45 standalone.localdomain podman[483679]: 2025-10-13 15:22:45.020976106 +0000 UTC m=+0.483776222 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:22:45 standalone.localdomain podman[483679]: 2025-10-13 15:22:45.05466181 +0000 UTC m=+0.517461946 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:22:45 standalone.localdomain podman[483679]: unhealthy
Oct 13 15:22:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3246: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:22:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:46.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:22:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d6e05f96628a1d0a392e4c07ba31d7ae388e98a981a4c87765214b2a9f7eec51-merged.mount: Deactivated successfully.
Oct 13 15:22:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d6e05f96628a1d0a392e4c07ba31d7ae388e98a981a4c87765214b2a9f7eec51-merged.mount: Deactivated successfully.
Oct 13 15:22:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:22:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:22:46 standalone.localdomain ceph-mon[29756]: pgmap v3246: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:22:47 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:22:47 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:22:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3247: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:47.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:22:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:22:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:22:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:22:48 standalone.localdomain ceph-mon[29756]: pgmap v3247: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:22:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3248: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-60bcce74fc42c073a5b67ee9b805081d10dae6671b0a07bce7e651fcfbfa1ee6-merged.mount: Deactivated successfully.
Oct 13 15:22:49 standalone.localdomain podman[483700]: 2025-10-13 15:22:49.733720225 +0000 UTC m=+1.070244425 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:22:49 standalone.localdomain podman[483700]: 2025-10-13 15:22:49.739468172 +0000 UTC m=+1.075992402 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:22:50 standalone.localdomain ceph-mon[29756]: pgmap v3248: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:51 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:51.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3249: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:22:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:22:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-36dd811e02ba985b5a336c17f03e47ad2b35e431e58c3e755073c9d6afb5f7b6-merged.mount: Deactivated successfully.
Oct 13 15:22:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:22:52 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:22:52 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:52.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:22:52 standalone.localdomain ceph-mon[29756]: pgmap v3249: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3250: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:22:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:22:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:22:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:22:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:22:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:22:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2599 DF PROTO=TCP SPT=35850 DPT=9102 SEQ=2123769306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBBB4910000000001030307) 
Oct 13 15:22:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:22:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:22:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:22:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2600 DF PROTO=TCP SPT=35850 DPT=9102 SEQ=2123769306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBBB8B60000000001030307) 
Oct 13 15:22:54 standalone.localdomain ceph-mon[29756]: pgmap v3250: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:22:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3251: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:56 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:56.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:56 standalone.localdomain rsyslogd[56156]: imjournal: 8064 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 13 15:22:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:22:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2601 DF PROTO=TCP SPT=35850 DPT=9102 SEQ=2123769306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBBC0B70000000001030307) 
Oct 13 15:22:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:22:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:22:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:22:56 standalone.localdomain ceph-mon[29756]: pgmap v3251: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:56 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:56 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:56 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3252: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:57 standalone.localdomain nova_compute[456855]: 2025-10-13 15:22:57.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:22:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f93319d9ea9a072f29f7188e9587784800c667ed7ca47085ef153ed67451a066-merged.mount: Deactivated successfully.
Oct 13 15:22:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:22:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:22:58 standalone.localdomain podman[483719]: 2025-10-13 15:22:58.560886408 +0000 UTC m=+0.077222861 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:22:58 standalone.localdomain podman[483719]: 2025-10-13 15:22:58.568032798 +0000 UTC m=+0.084369321 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:22:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:22:58 standalone.localdomain ceph-mon[29756]: pgmap v3252: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3253: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:22:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:22:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:22:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:22:59 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:59 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:22:59 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:23:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:23:00 standalone.localdomain podman[483742]: 2025-10-13 15:23:00.139106766 +0000 UTC m=+0.065860283 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:23:00 standalone.localdomain podman[483742]: 2025-10-13 15:23:00.167409155 +0000 UTC m=+0.094162692 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Oct 13 15:23:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2602 DF PROTO=TCP SPT=35850 DPT=9102 SEQ=2123769306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBBD0760000000001030307) 
Oct 13 15:23:00 standalone.localdomain ceph-mon[29756]: pgmap v3253: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:01 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:01.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3254: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:23:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:23:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:23:02 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:23:02 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:02 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:02.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:02 standalone.localdomain ceph-mon[29756]: pgmap v3254: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3255: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:23:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:04 standalone.localdomain ceph-mon[29756]: pgmap v3255: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3edcb83c23c59fae27c97060729ed4fcf0db2402d1d24ba03f5c92901d5ee987-merged.mount: Deactivated successfully.
Oct 13 15:23:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3256: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:23:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae1540d7ad205defc16e9c375b3a204dd23ca8e1e1b2774129dfde318e32042c-merged.mount: Deactivated successfully.
Oct 13 15:23:06 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:06.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:23:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:23:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:23:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:23:06 standalone.localdomain podman[483764]: 2025-10-13 15:23:06.780600423 +0000 UTC m=+0.050310015 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:23:06 standalone.localdomain podman[483764]: 2025-10-13 15:23:06.784104491 +0000 UTC m=+0.053814083 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 13 15:23:06 standalone.localdomain ceph-mon[29756]: pgmap v3256: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:23:06.939 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:23:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:23:06.939 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:23:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:23:06.940 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:23:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:23:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-36dd811e02ba985b5a336c17f03e47ad2b35e431e58c3e755073c9d6afb5f7b6-merged.mount: Deactivated successfully.
Oct 13 15:23:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:07 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:07 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:23:07 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:07 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3257: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:07 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:07.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae1540d7ad205defc16e9c375b3a204dd23ca8e1e1b2774129dfde318e32042c-merged.mount: Deactivated successfully.
Oct 13 15:23:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-99acd49219591d82b934db38de143aa2ac465a793b0260c168f14dd18945007e-merged.mount: Deactivated successfully.
Oct 13 15:23:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:23:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:23:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:23:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:23:08 standalone.localdomain podman[483781]: 2025-10-13 15:23:08.809707672 +0000 UTC m=+0.078580763 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, name=rhosp17/openstack-swift-object, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-swift-object-container, distribution-scope=public, tcib_managed=true, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:23:08 standalone.localdomain podman[483784]: 2025-10-13 15:23:08.826554799 +0000 UTC m=+0.074959702 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, release=1, vendor=Red Hat, Inc., vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, build-date=2025-07-21T16:11:22, container_name=swift_account_server, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, distribution-scope=public, version=17.1.9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git)
Oct 13 15:23:08 standalone.localdomain ceph-mon[29756]: pgmap v3257: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:08 standalone.localdomain podman[483783]: 2025-10-13 15:23:08.865726722 +0000 UTC m=+0.131133007 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, release=1, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, version=17.1.9, container_name=swift_container_server, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:23:08 standalone.localdomain podman[483782]: 2025-10-13 15:23:08.933846703 +0000 UTC m=+0.191885141 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:23:08 standalone.localdomain podman[483782]: 2025-10-13 15:23:08.943821269 +0000 UTC m=+0.201859767 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 13 15:23:08 standalone.localdomain podman[483781]: 2025-10-13 15:23:08.99045203 +0000 UTC m=+0.259325101 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, tcib_managed=true, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, container_name=swift_object_server, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, distribution-scope=public)
Oct 13 15:23:09 standalone.localdomain podman[483784]: 2025-10-13 15:23:09.067272569 +0000 UTC m=+0.315677472 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, release=1, io.openshift.expose-services=, name=rhosp17/openstack-swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-swift-account-container, distribution-scope=public, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12)
Oct 13 15:23:09 standalone.localdomain podman[483783]: 2025-10-13 15:23:09.073712936 +0000 UTC m=+0.339119211 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, build-date=2025-07-21T15:54:32, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=swift_container_server, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9)
Oct 13 15:23:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3258: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:23:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:23:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:23:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:23:10 standalone.localdomain ceph-mon[29756]: pgmap v3258: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:23:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:23:11 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:11.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3259: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:23:11 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:23:11 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:23:11 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:23:11 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:23:11 standalone.localdomain podman[483878]: 2025-10-13 15:23:11.29879138 +0000 UTC m=+1.389658370 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Oct 13 15:23:11 standalone.localdomain podman[483878]: 2025-10-13 15:23:11.313331337 +0000 UTC m=+1.404198307 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, maintainer=Red Hat, Inc.)
Oct 13 15:23:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:23:12 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:12.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:12 standalone.localdomain ceph-mon[29756]: pgmap v3259: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:23:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:23:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:23:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:23:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:23:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:23:13 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:23:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:23:13 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:23:13 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:23:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:23:13 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:23:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:23:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:23:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3260: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:13 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:13 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:13 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:23:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:23:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:23:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:14 standalone.localdomain podman[467099]: time="2025-10-13T15:23:14Z" level=error msg="Getting root fs size for \"723b5dec589f98514b504b21163bdc1dec105240a49dcd4ec56c81a0cf07d6dc\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": unmounting layer d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610: replacing mount point \"/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged\": device or resource busy"
Oct 13 15:23:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:14 standalone.localdomain ceph-mon[29756]: pgmap v3260: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:23:15 standalone.localdomain systemd[1]: tmp-crun.AFjWoF.mount: Deactivated successfully.
Oct 13 15:23:15 standalone.localdomain podman[483899]: 2025-10-13 15:23:15.130329849 +0000 UTC m=+0.072408253 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:23:15 standalone.localdomain podman[483899]: 2025-10-13 15:23:15.139701627 +0000 UTC m=+0.081780011 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:23:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3261: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:16 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:16.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:23:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:23:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3edcb83c23c59fae27c97060729ed4fcf0db2402d1d24ba03f5c92901d5ee987-merged.mount: Deactivated successfully.
Oct 13 15:23:16 standalone.localdomain ceph-mon[29756]: pgmap v3261: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:16 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:16 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:16 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:23:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:23:17 standalone.localdomain podman[483918]: 2025-10-13 15:23:17.119227904 +0000 UTC m=+0.066705039 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:23:17 standalone.localdomain podman[483918]: 2025-10-13 15:23:17.154035752 +0000 UTC m=+0.101512877 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:23:17 standalone.localdomain podman[483918]: unhealthy
Oct 13 15:23:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3262: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:17.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:17 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:23:17 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:23:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae1540d7ad205defc16e9c375b3a204dd23ca8e1e1b2774129dfde318e32042c-merged.mount: Deactivated successfully.
Oct 13 15:23:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:23:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2083202398' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:23:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:23:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2083202398' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:23:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:23:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:23:18 standalone.localdomain ceph-mon[29756]: pgmap v3262: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2083202398' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:23:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2083202398' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:23:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3263: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:23:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-cd7b2ee1594520f0295fcac9eea884c48f523498b8393234dd7cc19be148b00a-merged.mount: Deactivated successfully.
Oct 13 15:23:20 standalone.localdomain ceph-mon[29756]: pgmap v3263: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae1540d7ad205defc16e9c375b3a204dd23ca8e1e1b2774129dfde318e32042c-merged.mount: Deactivated successfully.
Oct 13 15:23:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-99acd49219591d82b934db38de143aa2ac465a793b0260c168f14dd18945007e-merged.mount: Deactivated successfully.
Oct 13 15:23:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:21.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3264: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:23:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:23:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:23:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:23:22 standalone.localdomain podman[483939]: 2025-10-13 15:23:22.267230895 +0000 UTC m=+0.099199587 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:23:22 standalone.localdomain podman[483939]: 2025-10-13 15:23:22.306890322 +0000 UTC m=+0.138859044 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:23:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:22.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:23:22 standalone.localdomain ceph-mon[29756]: pgmap v3264: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:23:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3265: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:23:23
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'manila_data', 'images', '.mgr', 'manila_metadata', 'volumes', 'backups']
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:23:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:23:23 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:23:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17384 DF PROTO=TCP SPT=50676 DPT=9102 SEQ=263900409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBC29BF0000000001030307) 
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:23:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:23:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17385 DF PROTO=TCP SPT=50676 DPT=9102 SEQ=263900409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBC2DB60000000001030307) 
Oct 13 15:23:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:23:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:23:24 standalone.localdomain ceph-mon[29756]: pgmap v3265: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:23:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3266: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:26.086 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:23:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:26.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:26 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17386 DF PROTO=TCP SPT=50676 DPT=9102 SEQ=263900409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBC35B60000000001030307) 
Oct 13 15:23:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:23:26 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:26 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:26 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:26 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:26 standalone.localdomain ceph-mon[29756]: pgmap v3266: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.119 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.120 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.120 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.120 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.120 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:23:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3267: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:23:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2622314171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.562 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.622 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.623 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.624 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.627 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.627 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.781 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.782 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9866MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.782 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:23:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:27.782 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:23:27 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2622314171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:23:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:28.179 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:23:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:28.180 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:23:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:28.180 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:23:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:28.180 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:23:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:23:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eeb0f78246b2c0232f2a45eb268f58d7cda22f69a7a6ba5bc0279db237ec6e56-merged.mount: Deactivated successfully.
Oct 13 15:23:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:28.417 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Refreshing inventories for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 13 15:23:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:28.634 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Updating ProviderTree inventory for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 13 15:23:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:28.635 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Updating inventory in ProviderTree for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 13 15:23:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:28.651 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Refreshing aggregate associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 13 15:23:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:28.673 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Refreshing trait associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, traits: COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,HW_CPU_X86_SSE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 13 15:23:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:28.719 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:23:28 standalone.localdomain ceph-mon[29756]: pgmap v3267: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:23:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:23:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/4168721954' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:23:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3268: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:29.197 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:23:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:29.202 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:23:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:29.216 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:23:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:29.217 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:23:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:29.218 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:23:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:29.218 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:23:29 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4168721954' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:23:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:23:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:23:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:30.099 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:23:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:30.099 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:23:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:30.100 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:23:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:30.100 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:23:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:30.100 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:23:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:30.101 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:23:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:30.101 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:23:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:30.101 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 13 15:23:30 standalone.localdomain podman[484002]: 2025-10-13 15:23:30.13582684 +0000 UTC m=+0.055267326 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:23:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-cd7b2ee1594520f0295fcac9eea884c48f523498b8393234dd7cc19be148b00a-merged.mount: Deactivated successfully.
Oct 13 15:23:30 standalone.localdomain podman[484002]: 2025-10-13 15:23:30.170148615 +0000 UTC m=+0.089589081 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:23:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17387 DF PROTO=TCP SPT=50676 DPT=9102 SEQ=263900409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBC45760000000001030307) 
Oct 13 15:23:30 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:23:30 standalone.localdomain ceph-mon[29756]: pgmap v3268: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f25330bdec64be1c507667e3cd0ee8363140958cb34eea9b35059a74ace96db2-merged.mount: Deactivated successfully.
Oct 13 15:23:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:31.107 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:23:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:31.108 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 13 15:23:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:31.124 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 13 15:23:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:31.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3269: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:23:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:32.106 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:23:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:32.106 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:23:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:32.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:23:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:23:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:23:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:23:32 standalone.localdomain podman[484023]: 2025-10-13 15:23:32.77641222 +0000 UTC m=+0.067975897 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 15:23:32 standalone.localdomain podman[484023]: 2025-10-13 15:23:32.810910819 +0000 UTC m=+0.102474466 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:23:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:32.925 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:23:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:32.926 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:23:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:32.928 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:23:32 standalone.localdomain ceph-mon[29756]: pgmap v3269: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:33 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:33 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:33 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:23:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3270: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:33.324 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:23:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:33.339 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:23:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:33.340 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:23:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:33.340 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:23:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:34 standalone.localdomain podman[467099]: time="2025-10-13T15:23:34Z" level=error msg="Getting root fs size for \"7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": unmounting layer d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610: replacing mount point \"/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged\": device or resource busy"
Oct 13 15:23:34 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:34 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:23:34 standalone.localdomain ceph-mon[29756]: pgmap v3270: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3271: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:36.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f25330bdec64be1c507667e3cd0ee8363140958cb34eea9b35059a74ace96db2-merged.mount: Deactivated successfully.
Oct 13 15:23:36 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:36 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:36 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:23:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d1e79337f55b7749d10f6b1db0bdc54fe96b01e19a9e1616f21af66c8b2090d8-merged.mount: Deactivated successfully.
Oct 13 15:23:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:36 standalone.localdomain ceph-mon[29756]: pgmap v3271: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3272: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:37.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:23:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:23:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:23:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:23:38 standalone.localdomain podman[484048]: 2025-10-13 15:23:38.267888465 +0000 UTC m=+0.814923078 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 15:23:38 standalone.localdomain podman[484048]: 2025-10-13 15:23:38.350007916 +0000 UTC m=+0.897042509 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 15:23:38 standalone.localdomain ceph-mon[29756]: pgmap v3272: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3273: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:23:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:23:39 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:23:39 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:23:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eeb0f78246b2c0232f2a45eb268f58d7cda22f69a7a6ba5bc0279db237ec6e56-merged.mount: Deactivated successfully.
Oct 13 15:23:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:41 standalone.localdomain ceph-mon[29756]: pgmap v3273: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3274: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:41.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:23:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:23:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:23:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:23:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:23:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:23:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f25330bdec64be1c507667e3cd0ee8363140958cb34eea9b35059a74ace96db2-merged.mount: Deactivated successfully.
Oct 13 15:23:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f25330bdec64be1c507667e3cd0ee8363140958cb34eea9b35059a74ace96db2-merged.mount: Deactivated successfully.
Oct 13 15:23:42 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:42 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:42.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:42 standalone.localdomain podman[484094]: 2025-10-13 15:23:42.329539868 +0000 UTC m=+0.597248935 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 13 15:23:42 standalone.localdomain podman[484096]: 2025-10-13 15:23:42.348357495 +0000 UTC m=+0.611088219 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, architecture=x86_64, container_name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-account, config_id=tripleo_step4, vcs-type=git, build-date=2025-07-21T16:11:22, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:23:42 standalone.localdomain podman[484094]: 2025-10-13 15:23:42.419366595 +0000 UTC m=+0.687075662 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 13 15:23:42 standalone.localdomain podman[484095]: 2025-10-13 15:23:42.385800255 +0000 UTC m=+0.649462059 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, tcib_managed=true, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T15:54:32, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, container_name=swift_container_server, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9)
Oct 13 15:23:42 standalone.localdomain podman[484093]: 2025-10-13 15:23:42.456087252 +0000 UTC m=+0.722279792 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, config_id=tripleo_step4, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-swift-object-container, build-date=2025-07-21T14:56:28, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, batch=17.1_20250721.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d)
Oct 13 15:23:42 standalone.localdomain podman[484096]: 2025-10-13 15:23:42.549763018 +0000 UTC m=+0.812493732 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-type=git, tcib_managed=true, architecture=x86_64, container_name=swift_account_server, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, distribution-scope=public)
Oct 13 15:23:42 standalone.localdomain podman[484095]: 2025-10-13 15:23:42.588281581 +0000 UTC m=+0.851943355 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_id=tripleo_step4, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, io.buildah.version=1.33.12, distribution-scope=public, release=1, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:54:32, version=17.1.9, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1)
Oct 13 15:23:42 standalone.localdomain podman[484093]: 2025-10-13 15:23:42.626753151 +0000 UTC m=+0.892945611 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, io.openshift.expose-services=, release=1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:56:28, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64)
Oct 13 15:23:42 standalone.localdomain ceph-mon[29756]: pgmap v3274: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:23:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:23:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:23:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:23:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:23:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:23:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:23:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:23:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:23:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:23:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:23:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:23:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3275: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:23:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:23:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:23:44 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:23:44 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:23:44 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:23:44 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:44 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:44 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:23:44 standalone.localdomain podman[484196]: 2025-10-13 15:23:44.487578495 +0000 UTC m=+0.753239314 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, version=9.6, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64)
Oct 13 15:23:44 standalone.localdomain podman[484196]: 2025-10-13 15:23:44.501915985 +0000 UTC m=+0.767576824 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, version=9.6, distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, release=1755695350, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter)
Oct 13 15:23:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:44 standalone.localdomain ceph-mon[29756]: pgmap v3275: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3276: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:45 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:23:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:46.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:23:46 standalone.localdomain ceph-mon[29756]: pgmap v3276: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:23:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:23:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ea91c370b256a874ce5d07e2995f6a00f5191e3d387d95202833d1f28499d361-merged.mount: Deactivated successfully.
Oct 13 15:23:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3277: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:47 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:47 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:47 standalone.localdomain podman[484230]: 2025-10-13 15:23:47.337668505 +0000 UTC m=+0.212318799 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 15:23:47 standalone.localdomain podman[484230]: 2025-10-13 15:23:47.351780798 +0000 UTC m=+0.226431082 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 15:23:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:47.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:47 standalone.localdomain podman[484242]: 2025-10-13 15:23:47.444792123 +0000 UTC m=+0.157069192 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-09-24T08:57:55, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_CLEAN=True, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, version=7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12)
Oct 13 15:23:47 standalone.localdomain podman[484242]: 2025-10-13 15:23:47.59450608 +0000 UTC m=+0.306783099 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, ceph=True, GIT_CLEAN=True, RELEASE=main)
Oct 13 15:23:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:23:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:48 standalone.localdomain ceph-mon[29756]: pgmap v3277: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3278: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:23:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:23:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:23:49 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:23:49 standalone.localdomain podman[484279]: 2025-10-13 15:23:49.800095515 +0000 UTC m=+1.821005771 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:23:49 standalone.localdomain podman[484279]: 2025-10-13 15:23:49.81001838 +0000 UTC m=+1.830928676 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:23:49 standalone.localdomain podman[484279]: unhealthy
Oct 13 15:23:50 standalone.localdomain ceph-mon[29756]: pgmap v3278: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3279: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:51 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:51.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:23:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f25330bdec64be1c507667e3cd0ee8363140958cb34eea9b35059a74ace96db2-merged.mount: Deactivated successfully.
Oct 13 15:23:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:23:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d1e79337f55b7749d10f6b1db0bdc54fe96b01e19a9e1616f21af66c8b2090d8-merged.mount: Deactivated successfully.
Oct 13 15:23:51 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:51 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:51 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:23:51 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:23:52 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:52.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:23:52 standalone.localdomain ceph-mon[29756]: pgmap v3279: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3280: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:23:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:23:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:23:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:23:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:23:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:23:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:23:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:23:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:23:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34863 DF PROTO=TCP SPT=40070 DPT=9102 SEQ=3761576384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBC9EEF0000000001030307) 
Oct 13 15:23:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:23:53 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:53 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:53 standalone.localdomain podman[484319]: 2025-10-13 15:23:53.641717684 +0000 UTC m=+0.137592935 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:23:53 standalone.localdomain podman[484319]: 2025-10-13 15:23:53.656751586 +0000 UTC m=+0.152626787 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:23:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34864 DF PROTO=TCP SPT=40070 DPT=9102 SEQ=3761576384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBCA2F60000000001030307) 
Oct 13 15:23:54 standalone.localdomain ceph-mon[29756]: pgmap v3280: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:23:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:23:55 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:55 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:55 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:23:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3281: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:56 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:56 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:56 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:56.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:23:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34865 DF PROTO=TCP SPT=40070 DPT=9102 SEQ=3761576384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBCAAF60000000001030307) 
Oct 13 15:23:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:23:56 standalone.localdomain ceph-mon[29756]: pgmap v3281: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:23:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:23:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3282: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:57 standalone.localdomain nova_compute[456855]: 2025-10-13 15:23:57.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:23:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:23:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a179da2c9d84bbf21427b6e010a4aba1f19cf7e5641074900b9e6b403199bfdf-merged.mount: Deactivated successfully.
Oct 13 15:23:58 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:58 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:23:58 standalone.localdomain ceph-mon[29756]: pgmap v3282: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3283: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:23:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:24:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34866 DF PROTO=TCP SPT=40070 DPT=9102 SEQ=3761576384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBCBAB70000000001030307) 
Oct 13 15:24:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:24:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-798b8b03064872dd018f7f961bc8f06e53ebff246a80987dc38ebea2d662177c-merged.mount: Deactivated successfully.
Oct 13 15:24:00 standalone.localdomain ceph-mon[29756]: pgmap v3283: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3284: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:01 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:01.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:24:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:24:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:24:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ea91c370b256a874ce5d07e2995f6a00f5191e3d387d95202833d1f28499d361-merged.mount: Deactivated successfully.
Oct 13 15:24:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:02 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:02.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:24:02 standalone.localdomain ceph-mon[29756]: pgmap v3284: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:02 standalone.localdomain podman[484372]: 2025-10-13 15:24:02.960402895 +0000 UTC m=+1.226534663 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:24:02 standalone.localdomain podman[484372]: 2025-10-13 15:24:02.971840956 +0000 UTC m=+1.237972724 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:24:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3285: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:24:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:04 standalone.localdomain ceph-mon[29756]: pgmap v3285: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3286: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:24:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:24:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:24:05 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:24:05 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:05 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:05 standalone.localdomain podman[484411]: 2025-10-13 15:24:05.590207863 +0000 UTC m=+2.355419406 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251009)
Oct 13 15:24:05 standalone.localdomain podman[484411]: 2025-10-13 15:24:05.627855589 +0000 UTC m=+2.393067122 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller)
Oct 13 15:24:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:06 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:06.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:24:06 standalone.localdomain ceph-mon[29756]: pgmap v3286: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:24:06.939 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:24:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:24:06.940 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:24:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:24:06.940 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:24:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3287: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:24:07 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:07.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:24:07 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:07 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:24:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:08 standalone.localdomain ceph-mon[29756]: pgmap v3287: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:08 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:08 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:08 standalone.localdomain sudo[473716]: pam_unix(sudo:session): session closed for user root
Oct 13 15:24:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 15:24:08 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:24:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 15:24:08 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:24:09 standalone.localdomain sudo[484433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:24:09 standalone.localdomain sudo[484433]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:24:09 standalone.localdomain sudo[484433]: pam_unix(sudo:session): session closed for user root
Oct 13 15:24:09 standalone.localdomain sudo[484451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:24:09 standalone.localdomain sudo[484451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:24:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3288: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:24:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:09 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:24:09 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:24:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:24:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-798b8b03064872dd018f7f961bc8f06e53ebff246a80987dc38ebea2d662177c-merged.mount: Deactivated successfully.
Oct 13 15:24:10 standalone.localdomain ceph-mon[29756]: pgmap v3288: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7e5fd54a930c5dd347b749289ddb340e344fa5157d8593c8cf9b000e4c28a692-merged.mount: Deactivated successfully.
Oct 13 15:24:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7e5fd54a930c5dd347b749289ddb340e344fa5157d8593c8cf9b000e4c28a692-merged.mount: Deactivated successfully.
Oct 13 15:24:11 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:24:11 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:11 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:11 standalone.localdomain podman[484480]: 2025-10-13 15:24:11.152860733 +0000 UTC m=+0.515102834 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:24:11 standalone.localdomain podman[484480]: 2025-10-13 15:24:11.181919554 +0000 UTC m=+0.544161655 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:24:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3289: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:11 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:11.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:24:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:24:12 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:12.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:12 standalone.localdomain ceph-mon[29756]: pgmap v3289: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:24:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:24:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:24:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:24:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:24:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:24:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:24:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:24:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:24:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:24:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:24:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:24:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:24:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3290: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:24:13 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:24:13 standalone.localdomain sudo[484451]: pam_unix(sudo:session): session closed for user root
Oct 13 15:24:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:24:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:24:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:24:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:24:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:24:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:24:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:24:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:24:13 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 0d62b157-1175-41d3-8c88-8af6ab0376b7 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:24:13 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 0d62b157-1175-41d3-8c88-8af6ab0376b7 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:24:13 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 0d62b157-1175-41d3-8c88-8af6ab0376b7 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:24:13 standalone.localdomain sudo[484520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:24:13 standalone.localdomain sudo[484520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:24:13 standalone.localdomain sudo[484520]: pam_unix(sudo:session): session closed for user root
Oct 13 15:24:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:24:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:24:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:24:13 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:24:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:24:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:24:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:24:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:24:14 standalone.localdomain ceph-mon[29756]: pgmap v3290: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:24:15 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:24:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:24:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:24:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3291: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:24:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:24:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:24:15 standalone.localdomain podman[484540]: 2025-10-13 15:24:15.748672533 +0000 UTC m=+1.005082014 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, distribution-scope=public)
Oct 13 15:24:15 standalone.localdomain podman[484577]: 2025-10-13 15:24:15.823323574 +0000 UTC m=+0.088726544 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a 
stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc.)
Oct 13 15:24:15 standalone.localdomain podman[484551]: 2025-10-13 15:24:15.832384683 +0000 UTC m=+1.086846644 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, container_name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, build-date=2025-07-21T16:11:22, name=rhosp17/openstack-swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, 
vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, config_id=tripleo_step4)
Oct 13 15:24:15 standalone.localdomain podman[484538]: 2025-10-13 15:24:15.892137937 +0000 UTC m=+1.154955975 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, release=1, vcs-type=git, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, build-date=2025-07-21T14:56:28, container_name=swift_object_server, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:24:15 standalone.localdomain podman[484577]: 2025-10-13 15:24:15.918783315 +0000 UTC m=+0.184186295 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9)
Oct 13 15:24:15 standalone.localdomain podman[484539]: 2025-10-13 15:24:15.933516618 +0000 UTC m=+1.196885333 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 13 15:24:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a179da2c9d84bbf21427b6e010a4aba1f19cf7e5641074900b9e6b403199bfdf-merged.mount: Deactivated successfully.
Oct 13 15:24:15 standalone.localdomain podman[484540]: 2025-10-13 15:24:15.94791592 +0000 UTC m=+1.204325381 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.buildah.version=1.33.12, io.openshift.expose-services=, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, tcib_managed=true, com.redhat.component=openstack-swift-container-container, 
name=rhosp17/openstack-swift-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, managed_by=tripleo_ansible, vcs-type=git, container_name=swift_container_server)
Oct 13 15:24:15 standalone.localdomain podman[484539]: 2025-10-13 15:24:15.968887043 +0000 UTC m=+1.232255798 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:24:16 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:24:16 standalone.localdomain ceph-mon[29756]: pgmap v3291: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:16 standalone.localdomain podman[484551]: 2025-10-13 15:24:16.03882316 +0000 UTC m=+1.293285081 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, release=1, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, tcib_managed=true, io.openshift.expose-services=, container_name=swift_account_server, io.buildah.version=1.33.12, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, 
Inc., maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:24:16 standalone.localdomain podman[484538]: 2025-10-13 15:24:16.083809731 +0000 UTC m=+1.346627749 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, release=1, version=17.1.9, batch=17.1_20250721.1, container_name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, 
tcib_managed=true, vcs-type=git, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:24:16 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:16.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:24:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3292: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:17.256 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:24:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:17.279 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Triggering sync for uuid 54a46fec-332e-42f9-83ed-88e763d13f63 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 13 15:24:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:17.280 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Triggering sync for uuid 8f68d5aa-abc4-451d-89d2-f5342b71831c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 13 15:24:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:17.280 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "54a46fec-332e-42f9-83ed-88e763d13f63" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:24:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:17.281 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:24:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:17.281 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "8f68d5aa-abc4-451d-89d2-f5342b71831c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:24:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:17.282 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "8f68d5aa-abc4-451d-89d2-f5342b71831c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:24:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:17.335 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:24:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:17.337 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "8f68d5aa-abc4-451d-89d2-f5342b71831c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:24:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:17.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:24:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-798b8b03064872dd018f7f961bc8f06e53ebff246a80987dc38ebea2d662177c-merged.mount: Deactivated successfully.
Oct 13 15:24:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-798b8b03064872dd018f7f961bc8f06e53ebff246a80987dc38ebea2d662177c-merged.mount: Deactivated successfully.
Oct 13 15:24:17 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:24:17 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:24:17 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:24:17 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:24:17 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:24:17 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:17 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:24:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2893296277' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:24:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:24:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2893296277' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:24:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:18 standalone.localdomain ceph-mon[29756]: pgmap v3292: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2893296277' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:24:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2893296277' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:24:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:24:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:24:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3293: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:24:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:24:19 standalone.localdomain podman[484668]: 2025-10-13 15:24:19.852705897 +0000 UTC m=+0.076237352 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 15:24:19 standalone.localdomain podman[484668]: 2025-10-13 15:24:19.859524346 +0000 UTC m=+0.083055771 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Oct 13 15:24:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:24:20 standalone.localdomain ceph-mon[29756]: pgmap v3293: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:24:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f103f3389f1c0c290f1c4785293a9b4615c276e3f45d72abe9bd5ad8e441a786-merged.mount: Deactivated successfully.
Oct 13 15:24:20 standalone.localdomain python3[484776]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_id=neutron_sriov_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:24:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:20.994 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:24:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:20.996 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:24:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:20.997 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.026 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.027 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.028 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.045 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.045 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '974bc26d-a195-40c7-a308-d32539aaa423', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:24:20.997286', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b110fca4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.18649989, 'message_signature': '71fd408b9e38bf44f30593822934ba82b808171bc1620c9dc43a37d279459a52'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:24:20.997286', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b1110866-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.18649989, 'message_signature': 'cbe837438889f22c74dea1fb0eb0f75248387ceb84375ab016b6825e085d8128'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:24:20.997286', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'b1111054-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.18649989, 'message_signature': '29646a02d3a1502797ba0d7b8f2cb9dc8d60fb4e3f8ff626a5ac12dca5898515'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:24:20.997286', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b113bd22-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.217571983, 'message_signature': 'e67f1151d62812b8587f88b6735924a25c66993b97c42cece68539dee60c25c4'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:24:20.997286', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b113c84e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.217571983, 'message_signature': '710732536eca3781bffb3d7ea14fb17a027a407070ab3509f383b179c3185e4b'}]}, 'timestamp': '2025-10-13 15:24:21.046227', '_unique_id': 'e456714f67c648e8b995e0a3976345e2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.047 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.048 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.070 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 52.27734375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:21 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.102 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '289c70da-02eb-4117-b0b6-d5fa0ff261ed', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.27734375, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:24:21.048480', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'b1179988-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.259353066, 'message_signature': 'a2b42c48acff600282664d703c564dc8eef9ba4098bf32e0d5582ab944c05843'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:24:21.048480', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'b11c6fb2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.291308098, 'message_signature': '60f344b6410eba2607ca0c803a443a2000327d102f6c5b8a3ce7f460f1731a9a'}]}, 'timestamp': '2025-10-13 15:24:21.103017', '_unique_id': '3b54b83ab71442c19e7f6951181dc358'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.104 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.105 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.108 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.111 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ac09dc35-5ebd-4c58-9207-e4823cf82cb2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:24:21.105744', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'b11d62fa-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.295011861, 'message_signature': 'a11851fc030bc949eacc5847d00d34a58674d8b84bf243e251ea216578423cb4'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:24:21.105744', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'b11dc7f4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.298405785, 'message_signature': '6ada657fc579fce51f3214a62223c3d52331a883130e488004bfe65ced24b9a7'}]}, 'timestamp': '2025-10-13 15:24:21.111759', '_unique_id': 'd0ceb4d6521f4bd3a47d4429f876bf48'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.112 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.113 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.113 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.113 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1799889e-afe4-46c5-8001-429eb59ef3ae', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:24:21.113621', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'b11e1c40-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.295011861, 'message_signature': '4729fc17941600b33821d046fc9b07e645ff047070077f85eebd3d370b3839dc'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:24:21.113621', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'b11e2500-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.298405785, 'message_signature': '1e1e2e93dd9f886dab6abfa0c31d3765cf11d14c8cabdd430ddef88f72ccc421'}]}, 'timestamp': '2025-10-13 15:24:21.114115', '_unique_id': 'c4dd35ff4d2141a4bc7e1a1afa13d5b7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.114 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.115 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.115 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 29250000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.115 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 29090000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e9d87cca-1590-4ed3-ac86-d0b0e84e2f92', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 29250000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:24:21.115571', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'b11e67e0-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.259353066, 'message_signature': '14bfa084f66a66d26c6a6cb6680c27774d0b817368a07272efb6a1edd29f7603'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 29090000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:24:21.115571', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'b11e6fc4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.291308098, 'message_signature': '8794ca1255451b6cd52dc0d638e981c437b76412de9dd3f5eca134802f291896'}]}, 'timestamp': '2025-10-13 15:24:21.115994', '_unique_id': 'd1cdddd347b1490e8b2596e419e7f620'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.116 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.117 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.117 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.162 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.163 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.163 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.185 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.186 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c6da008e-c9ad-4a74-b999-8ecff4c77a70', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:24:21.117276', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b125a2f8-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': 'a6509929a0ca2e59a7cc04e636f48696cca671a529bc1e6960af86db132d13dc'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:24:21.117276', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b125b25c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': '35666b85dd0cbed01d828a904f7070af202ca913b4dbe7151803f7ef9fcceb8e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 108, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:24:21.117276', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'b125bd9c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': '18effab60844179cefb4af9dac75d276200da026b9b90ef0c3006a6eb072e2b7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:24:21.117276', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b129223e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.353161126, 'message_signature': 'dda4944f4978bbd6fc12bf8db29bab446d6f893bd94f5cd818df42b55424b847'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:24:21.117276', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b1293116-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.353161126, 'message_signature': 'f4c98b85e5fbb99fc0f2c8ad01a4bac47415854aeceb3abe95f090e8a0a08f29'}]}, 'timestamp': '2025-10-13 15:24:21.186541', '_unique_id': '4952e97c1d0e442696fd6d0d44e83636'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.188 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.189 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.189 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.189 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.189 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.189 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f84fb2ce-ecd9-437a-a93a-b8fa64723550', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:24:21.189000', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b1299c6e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.18649989, 'message_signature': 'd2cac586e3d5cad811c91ad3d9eac4b04a757aa9f35a49ac4e81ef9f4a54b83a'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:24:21.189000', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b129a40c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.18649989, 'message_signature': '6514df144da30ea8f33444f00a8691735072ab3225d74ee65c07223fa373defe'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:24:21.189000', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'b129ae16-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.18649989, 'message_signature': 'e2f9a909cd45dcc3093808c78bf37132dfca1598066e7a2093eb7fa4bb6052ff'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:24:21.189000', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b129b546-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.217571983, 'message_signature': 'e7d5dc9a11ec0c5aae20925a6607a591736a68ce2ff883968625992dfa25e4ed'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:24:21.189000', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b129bc1c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.217571983, 'message_signature': '8171e0127aa18567ee2ac37929c0882480998044f538459170ba571be51d058b'}]}, 'timestamp': '2025-10-13 15:24:21.190036', '_unique_id': 'bd4e4ef5bd204737ad13f7d29f8cdd95'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.190 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.191 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.191 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.191 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '86ef5228-47cc-4149-a34b-665c86f76f7b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:24:21.191457', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'b129fd3a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.295011861, 'message_signature': '5d4a89dc638c687e595ea33492c68b92f77c285278f72ccc917fc2395b5c3201'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:24:21.191457', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'b12a050a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.298405785, 'message_signature': '0f4a3b3d322edb9b7c813b1965cfe885fd533483649508ad8cf9015ec4b01b35'}]}, 'timestamp': '2025-10-13 15:24:21.191912', '_unique_id': 'b725cb39b32948708375a344f96516e6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.192 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.193 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.193 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 4526 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.193 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '931c4af5-c6af-4c76-86f7-51f4146135f7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4526, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:24:21.193093', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'b12a3c28-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.295011861, 'message_signature': '4ab8481fc3333c181b6138db47648b93541d73efb120196bc489fff25caa717f'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:24:21.193093', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'b12a43e4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.298405785, 'message_signature': '1ba97a3200c3583cdd28f160235bd0d9188d66095566449908f5d52a1deedda5'}]}, 'timestamp': '2025-10-13 15:24:21.193536', '_unique_id': '98398f4859684d648e5bbf1e54c3311d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 4738 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.194 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 3576 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5196e00a-5456-4790-bab2-aa71f58ef607', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4738, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:24:21.194622', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'b12a7882-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.295011861, 'message_signature': 'a0942c23af9e02db4947dab6175ffaaac456c0876fccf5b1c5e07b76789d5518'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3576, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:24:21.194622', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'b12a814c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.298405785, 'message_signature': '356ba8efe8c93be4194c80fffdcbbe8bfe5f86b5014cb96679a5b6728de9746d'}]}, 'timestamp': '2025-10-13 15:24:21.195091', '_unique_id': 'fd8409a68cf64653a3d07afc4e2da549'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.195 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.196 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.196 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.196 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '15e54472-3eff-4193-bba4-c7c0d1dfb08b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:24:21.196173', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'b12ab3ba-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.295011861, 'message_signature': '05d8d28090e1ecbecff7a9c6403bf63165d69225f21d6a1838a79c8b47401437'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:24:21.196173', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'b12abb6c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.298405785, 'message_signature': '612c7614085c50477da79defc8126e7a481db01af2b0e832a21f2406a2b38c0b'}]}, 'timestamp': '2025-10-13 15:24:21.196599', '_unique_id': '5f3d62d21f1e487384837ad743254ee6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 857967408 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.197 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 155394833 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.198 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 63114260 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.198 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.198 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a5238994-6cfb-4027-9820-e811d5ddd1b1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 857967408, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:24:21.197647', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b12aed58-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': '9b1e9176fe75a284b292a60b7c1dfa0e1aa480f864638c9c472b47582f7cb3de'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 155394833, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:24:21.197647', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b12af49c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': '619177872d5ce0e6266fcc3a7545eaf6581c9b3106e05c6e3a0551c14cd99eda'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 63114260, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:24:21.197647', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'b12afbc2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': 'e20add0c3ba10c7fbac8141776cf26cf785e9b07959bb2b98ef2a0d734a67074'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:24:21.197647', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b12b02d4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.353161126, 'message_signature': 'c14d3cd8052b30e52d164012019a30c13315f88ccf97ad182442282203882ab8'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:24:21.197647', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b12b09aa-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.353161126, 'message_signature': 'ee9e676d6e6f412c152ce47f07506630cfde4aeadaa7221fb0141bc0463b9292'}]}, 'timestamp': '2025-10-13 15:24:21.198589', '_unique_id': '1be786636f86448bae89a1c05025890b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.199 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.200 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.200 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 2321920 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3294: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.200 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.200 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b87b0a3f-7437-4b4d-ab22-47f54933422e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:24:21.199811', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b12b4208-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': 'f6c5ab2798978e21791f0be79ec45b241740360a300212035db0896bebb344a8'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:24:21.199811', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b12b4988-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': 'cf4bcc39e603bc77aff20197995ec5415e802c3ab0a2a05cd43382020c0dbb81'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2321920, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:24:21.199811', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'b12b50cc-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': 'a3eaa755760b0aadb4adf5761a71439e0d97b0ca0c5dc690c4039d4fcf36dc13'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:24:21.199811', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b12b57b6-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.353161126, 'message_signature': 'e2382708de93aa81b33370b67e7bde7494f9fe56e36d8f487228f9cb047e331d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:24:21.199811', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b12b5f9a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.353161126, 'message_signature': 'e25ff6466ca06bc19e592f38d65623b93745f7aa5bfc59cfbbf58211fa235045'}]}, 'timestamp': '2025-10-13 15:24:21.200773', '_unique_id': 'a6ee51060e1d4bedac00ce6446338098'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.201 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 64 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 49 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '278ff988-e302-4697-9e27-c92cf62d7f35', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 64, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:24:21.201971', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'b12b96b8-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.295011861, 'message_signature': '498e6ad3a4424d377f2cca51dcbaf6e83c9608ac1c5fccbf589fc5f28c32cd22'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 49, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:24:21.201971', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'b12b9e9c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.298405785, 'message_signature': 'ba512141f42c735955b1e5d47978fab8f86e986b0b018e2cda6c5e5f9b8f8fc3'}]}, 'timestamp': '2025-10-13 15:24:21.202397', '_unique_id': '665452d97a704202a86051d28d43e895'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.202 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.203 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.203 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 501653653 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.203 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.203 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.204 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.204 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '371fad0e-a3f5-4a5a-9403-938d0960d0c8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 501653653, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:24:21.203584', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b12bd588-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': '7b472382ee6dd0fd6bfecf1648abd0757e3fe6bf4f08e6f99fabd54c608c8fdd'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:24:21.203584', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b12bdcfe-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': 'f68607b93382f9acb084f17279e2fda04cfaae7c6c10db7f5a7e7737c7433c44'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:24:21.203584', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'b12be41a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': '99b92f8a53d12396d137c3b0d9f5aa6c6e962f913f2e88bc8af6cadbd2251ece'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:24:21.203584', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b12beb5e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.353161126, 'message_signature': 'ec285acb293743159470758a9af5844270bc7b991e7e9d846d6f0e59e82a0418'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:24:21.203584', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b12bf25c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.353161126, 'message_signature': '6fc38121e0f20db6b3a5feea536a1f4652b7ce6301066016af2d73f087319cd7'}]}, 'timestamp': '2025-10-13 15:24:21.204548', '_unique_id': '700f198f4c414f519d768670973d4d6e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.205 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.206 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.206 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.206 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5ece744c-18b4-4282-bf67-747e92322ee2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:24:21.205775', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b12c2b14-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': 'c0624697c0d2e364001e02b788de0aa76b0f8d9149a92fd389ed37a833ede41a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:24:21.205775', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b12c328a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': 'cd3d215648c7476aae1ba47aa566f2efa222e48bd2002619bcd956ce7cd3f468'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:24:21.205775', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'b12c39a6-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': '14bf137969eca0fa2d9358570a723e0e79b8ae4ee03558f11dd0f7046768c19c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:24:21.205775', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b12c40ae-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.353161126, 'message_signature': 'dfe04d396dc2b76eb1e53c985b609093b1385e0d100f7e3d6c6d07bdfce6dc62'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:24:21.205775', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b12c482e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.353161126, 'message_signature': 'e821380ca32e35facb79e5ab81da0abf9a5000d99ff3615c28171b0d199e343e'}]}, 'timestamp': '2025-10-13 15:24:21.206727', '_unique_id': '1b4211d8bb354ca596e3d2789f2e42d0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.207 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.208 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.208 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 55 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.208 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a3b0b068-998e-4acd-8c1c-c971d16289b8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 55, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:24:21.208063', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'b12c84f6-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.295011861, 'message_signature': 'f5983a65c279fbcc32aae0a05badf9ad40607f5bf663640e7d13aa0976038372'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:24:21.208063', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'b12c8cee-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.298405785, 'message_signature': '96c92dbd3d8e5fd30cccd6e17bace72c7bf128e092d25357809187522da0e20c'}]}, 'timestamp': '2025-10-13 15:24:21.208529', '_unique_id': '1bd7786e57ab4bd6a4da5078378875f2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.209 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b01b3d64-4951-40d1-aa32-d45e576e76a4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:24:21.209919', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'b12ccd6c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.295011861, 'message_signature': '226f4822effe71b30c55e1252eb7b5f5ae7140752fbcabf631c933ae91f221c2'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:24:21.209919', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'b12cd528-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.298405785, 'message_signature': '298d7e19d9abb06ecfae5f88dab3e8a9e554ecb70e6c12d556aa4eefda7c8a0d'}]}, 'timestamp': '2025-10-13 15:24:21.210342', '_unique_id': '4ea05174d71d4459bd9012fdf8b4390c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.210 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.211 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.211 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.211 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.212 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.212 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.212 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c604f251-598a-48d1-9217-62dd1a8554cd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:24:21.211571', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b12d0dc2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.18649989, 'message_signature': 'acf7225c1418bb3aa9a1d1819c28f9f091d0f4a409b763579edefe61fe945905'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:24:21.211571', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b12d1632-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.18649989, 'message_signature': '349066e122b2530b17df5538917cc7dee1ff7269190bb2b131969c56ae6646d6'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:24:21.211571', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'b12d1d94-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.18649989, 'message_signature': '007347d71e2ee02ccb5585c60d3268ff81ac1a24c114a1107e6fe0cecfca8885'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:24:21.211571', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b12d24ce-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.217571983, 'message_signature': '6d17bf52ced9dcec5b26ce97fac3c5da83d79a47ee8a5ba2082cf0f34f6f72d6'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:24:21.211571', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b12d2be0-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.217571983, 'message_signature': '5cc39051c78581c1e5d59c49dfc4210857e6abeeb89d8f04cc8a6e4ccf32fa40'}]}, 'timestamp': '2025-10-13 15:24:21.212592', '_unique_id': 'cf50b7bdee3b4d20b83bafdf62e6e206'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.213 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.214 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e308edf9-5e64-4b5d-bb6e-f122d448a34a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:24:21.213967', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'b12d6b5a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.295011861, 'message_signature': 'f0b833d05a16bbc8ce8fdb1239d70de830f1b7724a704ce248dcb95f389e8b5f'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:24:21.213967', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'b12d7398-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.298405785, 'message_signature': '38b25450580ea119f8f1fa1e5fc67453e143841d5c7bcaff305d02b8656fc99a'}]}, 'timestamp': '2025-10-13 15:24:21.214402', '_unique_id': '79623bddea05469b8ac80d8e987d34d5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 495 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.215 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.216 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.216 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.216 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9e117bc7-268b-4c47-93e3-90b3fc64db6c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 495, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:24:21.215678', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b12db038-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': '38bbab4e718d6479594616339f582dc43c7c09411b9dec7cbc5289172c9798a3'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:24:21.215678', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b12dbaec-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': '5b4184fb37199b872b9165242bc9151108f8bfd87a560c7b59582391af58e4e5'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:24:21.215678', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'b12dc532-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.306535325, 'message_signature': 'bd9a661859d03115bb4a2dcde1590069170173db7e1267a9fdda4eafa16dacfc'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:24:21.215678', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b12dcf64-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.353161126, 'message_signature': '97bcbfd18c0089193f42c248d881a9814d968785afa80cb889b51dcb7571545f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:24:21.215678', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b12dd7de-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8504.353161126, 'message_signature': '31273a01526c310f7eeb312b7bfdc476cbf9af37f2f0d15a1724463fe0c13f28'}]}, 'timestamp': '2025-10-13 15:24:21.216999', '_unique_id': '39136fda67874a60a0b23d5c32005ec7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.217 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.218 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:24:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:24:21.218 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:24:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:21.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:24:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:24:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:22.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:22 standalone.localdomain ceph-mon[29756]: pgmap v3294: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:24:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3295: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:23 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:24:23 standalone.localdomain podman[484788]: 2025-10-13 15:24:23.2609042 +0000 UTC m=+1.083875513 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:24:23 standalone.localdomain podman[484788]: 2025-10-13 15:24:23.275139557 +0000 UTC m=+1.098110880 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:24:23 standalone.localdomain podman[484788]: unhealthy
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:24:23
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_data', 'manila_metadata', 'vms', '.mgr', 'backups', 'images', 'volumes']
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:24:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4733 DF PROTO=TCP SPT=54792 DPT=9102 SEQ=1551862037 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBD141E0000000001030307) 
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:24:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:24:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:24:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:24 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:24:24 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:24:24 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4734 DF PROTO=TCP SPT=54792 DPT=9102 SEQ=1551862037 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBD18360000000001030307) 
Oct 13 15:24:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:24 standalone.localdomain ceph-mon[29756]: pgmap v3295: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:24:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3296: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:24:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:24:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:26.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4735 DF PROTO=TCP SPT=54792 DPT=9102 SEQ=1551862037 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBD20360000000001030307) 
Oct 13 15:24:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:24:26 standalone.localdomain ceph-mon[29756]: pgmap v3296: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:24:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-798b8b03064872dd018f7f961bc8f06e53ebff246a80987dc38ebea2d662177c-merged.mount: Deactivated successfully.
Oct 13 15:24:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7e5fd54a930c5dd347b749289ddb340e344fa5157d8593c8cf9b000e4c28a692-merged.mount: Deactivated successfully.
Oct 13 15:24:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:27.117 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:24:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3297: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:24:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7e5fd54a930c5dd347b749289ddb340e344fa5157d8593c8cf9b000e4c28a692-merged.mount: Deactivated successfully.
Oct 13 15:24:27 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:27 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:27 standalone.localdomain podman[484828]: 2025-10-13 15:24:27.321709647 +0000 UTC m=+1.590362461 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.build-date=20251009, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:24:27 standalone.localdomain podman[484828]: 2025-10-13 15:24:27.358835176 +0000 UTC m=+1.627487980 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:24:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:27.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.086 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.112 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.112 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.113 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.113 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.113 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:24:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:24:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/769891937' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.567 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.642 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.642 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.642 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.646 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.646 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:24:28 standalone.localdomain ceph-mon[29756]: pgmap v3297: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:24:28 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/769891937' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.838 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.839 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9905MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.840 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.840 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.907 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.907 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.907 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.908 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:24:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:28.970 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:24:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3298: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:24:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:24:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:24:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1193285943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:24:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:29.386 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:24:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:29.394 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:24:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:24:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:29.414 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:24:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:29.416 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:24:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:29.417 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:24:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:24:29 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:24:29 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1193285943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:24:30 standalone.localdomain account-server[114563]: 172.20.0.100 - - [13/Oct/2025:15:24:30 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx2094e63559404cd89bff0-0068ed19ae" "proxy-server 2" 0.0006 "-" 21 -
Oct 13 15:24:30 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx2094e63559404cd89bff0-0068ed19ae)
Oct 13 15:24:30 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx2094e63559404cd89bff0-0068ed19ae)
Oct 13 15:24:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:30.416 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:24:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:30.417 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:24:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:30.417 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:24:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4736 DF PROTO=TCP SPT=54792 DPT=9102 SEQ=1551862037 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBD2FF60000000001030307) 
Oct 13 15:24:30 standalone.localdomain ceph-mon[29756]: pgmap v3298: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:24:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:31.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:24:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:24:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3299: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:24:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:24:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:31.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.775465) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369071775570, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1878, "num_deletes": 257, "total_data_size": 1689236, "memory_usage": 1727648, "flush_reason": "Manual Compaction"}
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369071783135, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1656183, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76143, "largest_seqno": 78020, "table_properties": {"data_size": 1648808, "index_size": 4334, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15408, "raw_average_key_size": 19, "raw_value_size": 1633617, "raw_average_value_size": 2073, "num_data_blocks": 195, "num_entries": 788, "num_filter_entries": 788, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760368896, "oldest_key_time": 1760368896, "file_creation_time": 1760369071, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 7738 microseconds, and 2985 cpu microseconds.
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.783194) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1656183 bytes OK
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.783221) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.784896) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.784912) EVENT_LOG_v1 {"time_micros": 1760369071784907, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.784925) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1681113, prev total WAL file size 1681602, number of live WAL files 2.
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.785431) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373631' seq:72057594037927935, type:22 .. '6C6F676D0033303134' seq:0, type:0; will stop at (end)
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1617KB)], [173(5730KB)]
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369071785477, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 7524517, "oldest_snapshot_seqno": -1}
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 6684 keys, 7422247 bytes, temperature: kUnknown
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369071815299, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 7422247, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7380135, "index_size": 24284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 174604, "raw_average_key_size": 26, "raw_value_size": 7261154, "raw_average_value_size": 1086, "num_data_blocks": 972, "num_entries": 6684, "num_filter_entries": 6684, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760369071, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.815601) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 7422247 bytes
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.817846) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 251.9 rd, 248.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 5.6 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(9.0) write-amplify(4.5) OK, records in: 7216, records dropped: 532 output_compression: NoCompression
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.817867) EVENT_LOG_v1 {"time_micros": 1760369071817858, "job": 108, "event": "compaction_finished", "compaction_time_micros": 29871, "compaction_time_cpu_micros": 16707, "output_level": 6, "num_output_files": 1, "total_output_size": 7422247, "num_input_records": 7216, "num_output_records": 6684, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369071818139, "job": 108, "event": "table_file_deletion", "file_number": 175}
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369071818649, "job": 108, "event": "table_file_deletion", "file_number": 173}
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.785323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.818705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.818713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.818715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.818716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:24:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:31.818717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:24:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:24:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bca11670fe9d9ec31418d826d4a147639a712e2ddff8e509e9c02ae8cdba4233-merged.mount: Deactivated successfully.
Oct 13 15:24:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:32.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:24:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:32.091 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:24:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:32.091 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:24:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bca11670fe9d9ec31418d826d4a147639a712e2ddff8e509e9c02ae8cdba4233-merged.mount: Deactivated successfully.
Oct 13 15:24:32 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:32 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:32.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:32 standalone.localdomain ceph-mon[29756]: pgmap v3299: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:24:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:32.948 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:24:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:32.948 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:24:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:32.949 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:24:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:32.949 2 DEBUG nova.objects.instance [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:24:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3300: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:24:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:33 standalone.localdomain podman[467099]: time="2025-10-13T15:24:33Z" level=error msg="Getting root fs size for \"8083537206f9a520426dd4ac7a910d6426f6b582bfbae2cc7d166c01e39f484c\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": unmounting layer d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610: replacing mount point \"/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged\": device or resource busy"
Oct 13 15:24:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:33.411 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:24:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:33.433 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:24:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:33.434 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:24:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:24:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:24:34 standalone.localdomain python3[484776]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                             {
                                                                  "Id": "ce29a55d2c38088001806b701c9451ea98fc49e7770639b2a643dcaaa2908d05",
                                                                  "Digest": "sha256:181dcf5a5275401ac5ce338eac0af004d81af761f21c80f419e3e0b41697791d",
                                                                  "RepoTags": [
                                                                       "quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified"
                                                                  ],
                                                                  "RepoDigests": [
                                                                       "quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent@sha256:181dcf5a5275401ac5ce338eac0af004d81af761f21c80f419e3e0b41697791d"
                                                                  ],
                                                                  "Parent": "",
                                                                  "Comment": "",
                                                                  "Created": "2025-10-13T06:28:01.944970148Z",
                                                                  "Config": {
                                                                       "User": "neutron",
                                                                       "Env": [
                                                                            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                            "LANG=en_US.UTF-8",
                                                                            "TZ=UTC",
                                                                            "container=oci"
                                                                       ],
                                                                       "Entrypoint": [
                                                                            "dumb-init",
                                                                            "--single-child",
                                                                            "--"
                                                                       ],
                                                                       "Cmd": [
                                                                            "kolla_start"
                                                                       ],
                                                                       "Labels": {
                                                                            "io.buildah.version": "1.41.3",
                                                                            "maintainer": "OpenStack Kubernetes Operator team",
                                                                            "org.label-schema.build-date": "20251009",
                                                                            "org.label-schema.license": "GPLv2",
                                                                            "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                            "org.label-schema.schema-version": "1.0",
                                                                            "org.label-schema.vendor": "CentOS",
                                                                            "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                            "tcib_managed": "true"
                                                                       },
                                                                       "StopSignal": "SIGTERM"
                                                                  },
                                                                  "Version": "",
                                                                  "Author": "",
                                                                  "Architecture": "amd64",
                                                                  "Os": "linux",
                                                                  "Size": 653439291,
                                                                  "VirtualSize": 653439291,
                                                                  "GraphDriver": {
                                                                       "Name": "overlay",
                                                                       "Data": {
                                                                            "LowerDir": "/var/lib/containers/storage/overlay/96e9a1eb6145016f992fadcf28b3705343c009294ada58a4e5bd657e99e23074/diff:/var/lib/containers/storage/overlay/00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4/diff:/var/lib/containers/storage/overlay/00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",
                                                                            "UpperDir": "/var/lib/containers/storage/overlay/b53495fd807a483d3758ac6587e076b70bd5581b77462be79452299021e7f342/diff",
                                                                            "WorkDir": "/var/lib/containers/storage/overlay/b53495fd807a483d3758ac6587e076b70bd5581b77462be79452299021e7f342/work"
                                                                       }
                                                                  },
                                                                  "RootFS": {
                                                                       "Type": "layers",
                                                                       "Layers": [
                                                                            "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",
                                                                            "sha256:b783c4d6e5ad469a2da52d1f9f5642b4a99005a13a4f8f9855f9f6ed3dcfeddf",
                                                                            "sha256:4dd9b6336279e9bb933efd1972a9f033bb03b41daa8f211234bd85673a3f81ca",
                                                                            "sha256:2176850e44c3ecf34b145f4e4f66c2144233e08dc232d76e9b587fc651884364",
                                                                            "sha256:6d5cdc85f3033eb88737a2b601757c99134b1776a08260817c9e12e510e7f333"
                                                                       ]
                                                                  },
                                                                  "Labels": {
                                                                       "io.buildah.version": "1.41.3",
                                                                       "maintainer": "OpenStack Kubernetes Operator team",
                                                                       "org.label-schema.build-date": "20251009",
                                                                       "org.label-schema.license": "GPLv2",
                                                                       "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                       "org.label-schema.schema-version": "1.0",
                                                                       "org.label-schema.vendor": "CentOS",
                                                                       "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                       "tcib_managed": "true"
                                                                  },
                                                                  "Annotations": {},
                                                                  "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                  "User": "neutron",
                                                                  "History": [
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.867908726Z",
                                                                            "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.868015697Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251009\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:07.890794359Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.75496777Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                            "comment": "FROM quay.io/centos/centos:stream9",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.754993311Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755013771Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755032692Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755082953Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755102984Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:47.201574917Z",
                                                                            "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:23.544915378Z",
                                                                            "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.119117694Z",
                                                                            "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.552286083Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.962220801Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:28.715322167Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.14263152Z",
                                                                            "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.560856504Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.004904301Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.42712225Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.823321864Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.238615281Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.634039923Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.0188837Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.395414523Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.805928482Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.197074343Z",
                                                                            "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.605614349Z",
                                                                            "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.372128638Z",
                                                                            "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.755904629Z",
                                                                            "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:38.104256162Z",
                                                                            "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:39.755515954Z",
                                                                            "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870322882Z",
                                                                            "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870431084Z",
                                                                            "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870453635Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870467555Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:43.347797515Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:12:52.112593379Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:13:34.238039873Z",
                                                                            "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:13:37.655425275Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:16:18.853030625Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:16:19.228070681Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage neutron",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:17:47.555190736Z",
                                                                            "created_by": "/bin/sh -c dnf -y install iputils net-tools openstack-neutron openstack-neutron-rpc-server openstack-neutron-ml2 openvswitch python3-networking-baremetal python3-openvswitch python3-unbound && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:17:55.204013754Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/neutron-base/neutron_sudoers /etc/sudoers.d/neutron_sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:18:20.180748405Z",
                                                                            "created_by": "/bin/sh -c chmod 440 /etc/sudoers.d/neutron_sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:18:43.442226259Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:27:25.951453781Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:28:01.942822695Z",
                                                                            "created_by": "/bin/sh -c dnf -y install openstack-neutron-sriov-nic-agent && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:28:01.942880157Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER neutron",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:28:05.652597027Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       }
                                                                  ],
                                                                  "NamesHistory": [
                                                                       "quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified"
                                                                  ]
                                                             }
                                                        ]
                                                        : quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Oct 13 15:24:34 standalone.localdomain ceph-mon[29756]: pgmap v3300: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:24:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:35.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:24:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3301: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:24:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:24:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:24:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f103f3389f1c0c290f1c4785293a9b4615c276e3f45d72abe9bd5ad8e441a786-merged.mount: Deactivated successfully.
Oct 13 15:24:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f103f3389f1c0c290f1c4785293a9b4615c276e3f45d72abe9bd5ad8e441a786-merged.mount: Deactivated successfully.
Oct 13 15:24:36 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:36 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:36 standalone.localdomain podman[484913]: 2025-10-13 15:24:36.283596735 +0000 UTC m=+2.058768170 container remove 7a5b9f2d028d785b3d77081d56907c494e99e88d318975b70dfba328a474389d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1, name=neutron_sriov_agent, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bfea4b567a7178c2bf424bc40994d7e4'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-sriov-agent:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 10, 'ulimit': ['nofile=16384'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/sys/class/net:/sys/class/net:rw']}, config_id=tripleo_step4, container_name=neutron_sriov_agent, description=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-sriov-agent-container, vcs-type=git, name=rhosp17/openstack-neutron-sriov-agent, release=1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, batch=17.1_20250721.1, build-date=2025-07-21T16:03:34, io.openshift.expose-services=, distribution-scope=public, vcs-ref=1ccae6c6723826528d8b740bd513cc0aac105969, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-sriov-agent/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-sriov-agent, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 15:24:36 standalone.localdomain podman[484925]: 2025-10-13 15:24:36.312738429 +0000 UTC m=+0.584330178 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:24:36 standalone.localdomain podman[484925]: 2025-10-13 15:24:36.320873239 +0000 UTC m=+0.592464988 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:24:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:36.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:24:36 standalone.localdomain ceph-mon[29756]: pgmap v3301: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:24:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:24:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3302: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:37.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:24:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:24:38 standalone.localdomain python3[484776]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force neutron_sriov_agent
Oct 13 15:24:38 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:38 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:38 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:24:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:24:38 standalone.localdomain ceph-mon[29756]: pgmap v3302: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3303: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:39 standalone.localdomain podman[484951]: 2025-10-13 15:24:38.603892642 +0000 UTC m=+0.035140939 image pull  quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Oct 13 15:24:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:24:40 standalone.localdomain ceph-mon[29756]: pgmap v3303: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3304: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:24:41 standalone.localdomain podman[484962]: 2025-10-13 15:24:41.292103314 +0000 UTC m=+0.060173149 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true)
Oct 13 15:24:41 standalone.localdomain podman[484962]: 2025-10-13 15:24:41.381021224 +0000 UTC m=+0.149091049 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:24:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:41.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:24:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7237bb817dfe05d48f2606885847d1490e7fbe49ddf4120a0989298e8583c1c8-merged.mount: Deactivated successfully.
Oct 13 15:24:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:24:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7237bb817dfe05d48f2606885847d1490e7fbe49ddf4120a0989298e8583c1c8-merged.mount: Deactivated successfully.
Oct 13 15:24:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:42.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:42 standalone.localdomain ceph-mon[29756]: pgmap v3304: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:24:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:24:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:24:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:24:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:24:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:24:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:24:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:24:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:24:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:24:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:24:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:24:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee8156774c5567d5b052fd2feda4deab37cd8707219e5c5249721102d7085343-merged.mount: Deactivated successfully.
Oct 13 15:24:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bca11670fe9d9ec31418d826d4a147639a712e2ddff8e509e9c02ae8cdba4233-merged.mount: Deactivated successfully.
Oct 13 15:24:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3305: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bca11670fe9d9ec31418d826d4a147639a712e2ddff8e509e9c02ae8cdba4233-merged.mount: Deactivated successfully.
Oct 13 15:24:43 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:24:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:24:43 standalone.localdomain podman[484986]: 2025-10-13 15:24:43.444558859 +0000 UTC m=+0.069868516 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:24:43 standalone.localdomain podman[484986]: 2025-10-13 15:24:43.451921515 +0000 UTC m=+0.077231182 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Oct 13 15:24:44 standalone.localdomain ceph-mon[29756]: pgmap v3305: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:24:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3306: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:24:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:24:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:24:45 standalone.localdomain podman[484951]: 
Oct 13 15:24:45 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:24:45 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:45 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:45 standalone.localdomain podman[484951]: 2025-10-13 15:24:45.661580236 +0000 UTC m=+7.092828493 container create 448fb348af7a883f77918efbad7794ad763f083e96f301e2c7885a93ccc191ce (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=neutron_sriov_agent, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '7161a4fa22f1a2d14e1f5ef4fe5b9714ffae4256b5cd01ffb251a66d1af300cf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']})
Oct 13 15:24:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:46.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:24:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:46 standalone.localdomain ceph-mon[29756]: pgmap v3306: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3307: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:47.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:24:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:24:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:24:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:24:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:24:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:24:48 standalone.localdomain python3[484776]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_sriov_agent --conmon-pidfile /run/neutron_sriov_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=7161a4fa22f1a2d14e1f5ef4fe5b9714ffae4256b5cd01ffb251a66d1af300cf --label config_id=neutron_sriov_agent --label container_name=neutron_sriov_agent --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '7161a4fa22f1a2d14e1f5ef4fe5b9714ffae4256b5cd01ffb251a66d1af300cf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user neutron --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Oct 13 15:24:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:24:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:24:48 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:48 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:48 standalone.localdomain podman[485006]: 2025-10-13 15:24:48.190516821 +0000 UTC m=+0.111137735 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:24:48 standalone.localdomain podman[485006]: 2025-10-13 15:24:48.197826687 +0000 UTC m=+0.118447621 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 15:24:48 standalone.localdomain podman[485005]: 2025-10-13 15:24:48.173514607 +0000 UTC m=+0.099628350 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, version=9.6, maintainer=Red Hat, Inc., config_id=edpm)
Oct 13 15:24:48 standalone.localdomain podman[485005]: 2025-10-13 15:24:48.309303563 +0000 UTC m=+0.235417356 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6)
Oct 13 15:24:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:48 standalone.localdomain ceph-mon[29756]: pgmap v3307: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3308: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:24:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:24:50 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:24:50 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:50 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:50 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:24:50 standalone.localdomain podman[485019]: 2025-10-13 15:24:50.265720046 +0000 UTC m=+2.174749854 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, tcib_managed=true, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-swift-account-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, architecture=x86_64, container_name=swift_account_server, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:24:50 standalone.localdomain podman[485012]: 2025-10-13 15:24:50.320735573 +0000 UTC m=+2.215112520 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, container_name=swift_container_server, vendor=Red Hat, Inc., io.buildah.version=1.33.12, config_id=tripleo_step4, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, release=1, com.redhat.component=openstack-swift-container-container, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:24:50 standalone.localdomain podman[485004]: 2025-10-13 15:24:50.374228662 +0000 UTC m=+2.306701611 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, build-date=2025-07-21T14:56:28, release=1, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, container_name=swift_object_server, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, managed_by=tripleo_ansible)
Oct 13 15:24:50 standalone.localdomain podman[485019]: 2025-10-13 15:24:50.444384934 +0000 UTC m=+2.353414732 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=)
Oct 13 15:24:50 standalone.localdomain podman[485012]: 2025-10-13 15:24:50.515772035 +0000 UTC m=+2.410148972 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, container_name=swift_container_server, name=rhosp17/openstack-swift-container, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc.)
Oct 13 15:24:50 standalone.localdomain podman[485004]: 2025-10-13 15:24:50.570010757 +0000 UTC m=+2.502483746 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, version=17.1.9, container_name=swift_object_server, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git)
Oct 13 15:24:50 standalone.localdomain ceph-mon[29756]: pgmap v3308: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3309: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:24:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:51 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:24:51 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:51 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:51 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:24:51 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:24:51 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:51.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:51 standalone.localdomain podman[485135]: 2025-10-13 15:24:51.459574531 +0000 UTC m=+0.216391202 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251009, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid)
Oct 13 15:24:51 standalone.localdomain podman[485135]: 2025-10-13 15:24:51.540399753 +0000 UTC m=+0.297216364 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:24:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:24:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:24:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:52 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:52.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:52 standalone.localdomain ceph-mon[29756]: pgmap v3309: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3310: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:24:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:24:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:24:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:24:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:24:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:24:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:24:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27100 DF PROTO=TCP SPT=36588 DPT=9102 SEQ=3924438963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBD894E0000000001030307) 
Oct 13 15:24:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7237bb817dfe05d48f2606885847d1490e7fbe49ddf4120a0989298e8583c1c8-merged.mount: Deactivated successfully.
Oct 13 15:24:53 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:24:54 standalone.localdomain python3.9[485274]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:24:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:24:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:24:54 standalone.localdomain podman[485294]: 2025-10-13 15:24:54.541622155 +0000 UTC m=+0.065603503 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:24:54 standalone.localdomain podman[485294]: 2025-10-13 15:24:54.556318549 +0000 UTC m=+0.080299927 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:24:54 standalone.localdomain podman[485294]: unhealthy
Oct 13 15:24:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27101 DF PROTO=TCP SPT=36588 DPT=9102 SEQ=3924438963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBD8D770000000001030307) 
Oct 13 15:24:54 standalone.localdomain ceph-mon[29756]: pgmap v3310: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:55 standalone.localdomain python3.9[485409]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:24:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3311: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:55 standalone.localdomain python3.9[485462]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.925114) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369095925146, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 478, "num_deletes": 251, "total_data_size": 249680, "memory_usage": 259304, "flush_reason": "Manual Compaction"}
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369095928519, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 245362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78021, "largest_seqno": 78498, "table_properties": {"data_size": 242764, "index_size": 646, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6229, "raw_average_key_size": 18, "raw_value_size": 237639, "raw_average_value_size": 713, "num_data_blocks": 29, "num_entries": 333, "num_filter_entries": 333, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760369071, "oldest_key_time": 1760369071, "file_creation_time": 1760369095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 3438 microseconds, and 985 cpu microseconds.
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.928546) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 245362 bytes OK
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.928571) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.930422) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.930436) EVENT_LOG_v1 {"time_micros": 1760369095930431, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.930479) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 246827, prev total WAL file size 246827, number of live WAL files 2.
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.930960) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(239KB)], [176(7248KB)]
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369095931043, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 7667609, "oldest_snapshot_seqno": -1}
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 6502 keys, 6676672 bytes, temperature: kUnknown
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369095978954, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 6676672, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6636699, "index_size": 22659, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 171478, "raw_average_key_size": 26, "raw_value_size": 6521624, "raw_average_value_size": 1003, "num_data_blocks": 896, "num_entries": 6502, "num_filter_entries": 6502, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760369095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.979365) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 6676672 bytes
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.981000) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.9 rd, 139.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 7.1 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(58.5) write-amplify(27.2) OK, records in: 7017, records dropped: 515 output_compression: NoCompression
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.981019) EVENT_LOG_v1 {"time_micros": 1760369095981011, "job": 110, "event": "compaction_finished", "compaction_time_micros": 47957, "compaction_time_cpu_micros": 24791, "output_level": 6, "num_output_files": 1, "total_output_size": 6676672, "num_input_records": 7017, "num_output_records": 6502, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369095981149, "job": 110, "event": "table_file_deletion", "file_number": 178}
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369095981929, "job": 110, "event": "table_file_deletion", "file_number": 176}
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.930825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.981988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.981998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.982000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.982001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:24:55 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:24:55.982003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:24:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:24:56 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:24:56 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:24:56 standalone.localdomain python3.9[485569]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760369095.5931659-346-51835818654136/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:24:56 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:56.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27102 DF PROTO=TCP SPT=36588 DPT=9102 SEQ=3924438963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBD95760000000001030307) 
Oct 13 15:24:56 standalone.localdomain python3.9[485622]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:24:56 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:24:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:24:56 standalone.localdomain systemd-rc-local-generator[485646]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:24:56 standalone.localdomain systemd-sysv-generator[485649]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:24:56 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:24:56 standalone.localdomain ceph-mon[29756]: pgmap v3311: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3312: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:57 standalone.localdomain python3.9[485710]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:24:57 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:24:57 standalone.localdomain nova_compute[456855]: 2025-10-13 15:24:57.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:24:57 standalone.localdomain systemd-rc-local-generator[485734]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:24:57 standalone.localdomain systemd-sysv-generator[485742]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:24:57 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:24:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:24:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:58 standalone.localdomain systemd[1]: Starting neutron_sriov_agent container...
Oct 13 15:24:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:24:58 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:58 standalone.localdomain ceph-mon[29756]: pgmap v3312: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:24:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:24:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:24:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3313: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:24:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:24:59 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:24:59 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:24:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8896dc6d7db0f301ae4b7703c1d81b3a9d7d12b72af3f0b56a52b28afb895f9d/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 13 15:24:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8896dc6d7db0f301ae4b7703c1d81b3a9d7d12b72af3f0b56a52b28afb895f9d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:24:59 standalone.localdomain podman[485751]: 2025-10-13 15:24:59.328749445 +0000 UTC m=+1.139921043 container init 448fb348af7a883f77918efbad7794ad763f083e96f301e2c7885a93ccc191ce (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=neutron_sriov_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '7161a4fa22f1a2d14e1f5ef4fe5b9714ffae4256b5cd01ffb251a66d1af300cf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:24:59 standalone.localdomain podman[485751]: 2025-10-13 15:24:59.336609197 +0000 UTC m=+1.147780785 container start 448fb348af7a883f77918efbad7794ad763f083e96f301e2c7885a93ccc191ce (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '7161a4fa22f1a2d14e1f5ef4fe5b9714ffae4256b5cd01ffb251a66d1af300cf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=neutron_sriov_agent, config_id=neutron_sriov_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:24:59 standalone.localdomain podman[485751]: neutron_sriov_agent
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: + sudo -E kolla_set_configs
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Validating config file
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Copying service configuration files
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Writing out command to execute
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/lock
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dnsmasq_wrapper
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_haproxy_wrapper
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/03f66d1294ee0eb4465c67b7f02f44b76b4fb981cae67c336f96704556f02da9
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/interface
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/leases
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/host
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/addn_hosts
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/opts
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/pid
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/interface
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/leases
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/host
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/addn_hosts
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/opts
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/pid
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/interface
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/leases
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/pid
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/lock/neutron-iptables-qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/lock/neutron-iptables-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/lock/neutron-iptables-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/0c455abd-28d4-47e7-a254-e50de0526def.pid.haproxy
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/0c455abd-28d4-47e7-a254-e50de0526def.conf
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: ++ cat /run_command
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: + CMD=/usr/bin/neutron-sriov-nic-agent
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: + ARGS=
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: + sudo kolla_copy_cacerts
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: Running command: '/usr/bin/neutron-sriov-nic-agent'
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: + [[ ! -n '' ]]
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: + . kolla_extend_start
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\'''
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: + umask 0022
Oct 13 15:24:59 standalone.localdomain neutron_sriov_agent[485765]: + exec /usr/bin/neutron-sriov-nic-agent
Oct 13 15:24:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:25:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:25:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27103 DF PROTO=TCP SPT=36588 DPT=9102 SEQ=3924438963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBDA5360000000001030307) 
Oct 13 15:25:00 standalone.localdomain ceph-mon[29756]: pgmap v3313: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:01 standalone.localdomain neutron_sriov_agent[485765]: 2025-10-13 15:25:01.095 2 INFO neutron.common.config [-] Logging enabled!
Oct 13 15:25:01 standalone.localdomain neutron_sriov_agent[485765]: 2025-10-13 15:25:01.096 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev43
Oct 13 15:25:01 standalone.localdomain neutron_sriov_agent[485765]: 2025-10-13 15:25:01.096 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}
Oct 13 15:25:01 standalone.localdomain neutron_sriov_agent[485765]: 2025-10-13 15:25:01.096 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}
Oct 13 15:25:01 standalone.localdomain neutron_sriov_agent[485765]: 2025-10-13 15:25:01.096 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}
Oct 13 15:25:01 standalone.localdomain neutron_sriov_agent[485765]: 2025-10-13 15:25:01.097 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}
Oct 13 15:25:01 standalone.localdomain neutron_sriov_agent[485765]: 2025-10-13 15:25:01.097 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'standalone.localdomain'}
Oct 13 15:25:01 standalone.localdomain neutron_sriov_agent[485765]: 2025-10-13 15:25:01.097 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-512aa8e1-8523-441b-8fb7-4af102141044 - - - - - -] RPC agent_id: nic-switch-agent.standalone.localdomain
Oct 13 15:25:01 standalone.localdomain neutron_sriov_agent[485765]: 2025-10-13 15:25:01.102 2 INFO neutron.agent.agent_extensions_manager [None req-512aa8e1-8523-441b-8fb7-4af102141044 - - - - - -] Loaded agent extensions: ['qos']
Oct 13 15:25:01 standalone.localdomain neutron_sriov_agent[485765]: 2025-10-13 15:25:01.102 2 INFO neutron.agent.agent_extensions_manager [None req-512aa8e1-8523-441b-8fb7-4af102141044 - - - - - -] Initializing agent extension 'qos'
Oct 13 15:25:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3314: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:25:01 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:01.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:01 standalone.localdomain neutron_sriov_agent[485765]: 2025-10-13 15:25:01.542 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-512aa8e1-8523-441b-8fb7-4af102141044 - - - - - -] Agent initialized successfully, now running... 
Oct 13 15:25:01 standalone.localdomain neutron_sriov_agent[485765]: 2025-10-13 15:25:01.543 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-512aa8e1-8523-441b-8fb7-4af102141044 - - - - - -] SRIOV NIC Agent RPC Daemon Started!
Oct 13 15:25:01 standalone.localdomain neutron_sriov_agent[485765]: 2025-10-13 15:25:01.543 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-512aa8e1-8523-441b-8fb7-4af102141044 - - - - - -] Agent out of sync with plugin!
Oct 13 15:25:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:25:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:25:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:01 standalone.localdomain systemd[1]: Started neutron_sriov_agent container.
Oct 13 15:25:01 standalone.localdomain podman[485777]: 2025-10-13 15:25:01.753311731 +0000 UTC m=+2.022778130 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:25:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:25:01 standalone.localdomain podman[485777]: 2025-10-13 15:25:01.794260593 +0000 UTC m=+2.063727052 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:25:02 standalone.localdomain python3.9[485907]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:25:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:25:02 standalone.localdomain systemd[1]: Stopping neutron_sriov_agent container...
Oct 13 15:25:02 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:02.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:02 standalone.localdomain ceph-mon[29756]: pgmap v3314: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3315: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:25:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:25:03 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:25:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:03 standalone.localdomain systemd[1]: libpod-448fb348af7a883f77918efbad7794ad763f083e96f301e2c7885a93ccc191ce.scope: Deactivated successfully.
Oct 13 15:25:03 standalone.localdomain systemd[1]: libpod-448fb348af7a883f77918efbad7794ad763f083e96f301e2c7885a93ccc191ce.scope: Consumed 1.844s CPU time.
Oct 13 15:25:03 standalone.localdomain podman[485911]: 2025-10-13 15:25:03.822991876 +0000 UTC m=+1.323986908 container died 448fb348af7a883f77918efbad7794ad763f083e96f301e2c7885a93ccc191ce (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=neutron_sriov_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '7161a4fa22f1a2d14e1f5ef4fe5b9714ffae4256b5cd01ffb251a66d1af300cf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:25:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-448fb348af7a883f77918efbad7794ad763f083e96f301e2c7885a93ccc191ce-userdata-shm.mount: Deactivated successfully.
Oct 13 15:25:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:04 standalone.localdomain ceph-mon[29756]: pgmap v3315: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3316: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:25:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db193506eaeea5bc352278b61a9192c8541e438157f80fdad5fbfa7bd832e228-merged.mount: Deactivated successfully.
Oct 13 15:25:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db193506eaeea5bc352278b61a9192c8541e438157f80fdad5fbfa7bd832e228-merged.mount: Deactivated successfully.
Oct 13 15:25:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8896dc6d7db0f301ae4b7703c1d81b3a9d7d12b72af3f0b56a52b28afb895f9d-merged.mount: Deactivated successfully.
Oct 13 15:25:06 standalone.localdomain podman[485911]: 2025-10-13 15:25:06.047766732 +0000 UTC m=+3.548761774 container cleanup 448fb348af7a883f77918efbad7794ad763f083e96f301e2c7885a93ccc191ce (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '7161a4fa22f1a2d14e1f5ef4fe5b9714ffae4256b5cd01ffb251a66d1af300cf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:25:06 standalone.localdomain podman[485911]: neutron_sriov_agent
Oct 13 15:25:06 standalone.localdomain podman[485924]: 2025-10-13 15:25:06.049378801 +0000 UTC m=+2.224383785 container cleanup 448fb348af7a883f77918efbad7794ad763f083e96f301e2c7885a93ccc191ce (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=neutron_sriov_agent, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '7161a4fa22f1a2d14e1f5ef4fe5b9714ffae4256b5cd01ffb251a66d1af300cf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, config_id=neutron_sriov_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:25:06 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:06.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:25:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:25:06.940 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:25:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:25:06.941 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:25:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:25:06.941 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:25:06 standalone.localdomain ceph-mon[29756]: pgmap v3316: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3317: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:25:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:25:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:25:07 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:07 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:07 standalone.localdomain podman[485938]: 2025-10-13 15:25:07.660743367 +0000 UTC m=+0.047181385 container cleanup 448fb348af7a883f77918efbad7794ad763f083e96f301e2c7885a93ccc191ce (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '7161a4fa22f1a2d14e1f5ef4fe5b9714ffae4256b5cd01ffb251a66d1af300cf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.schema-version=1.0, config_id=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=neutron_sriov_agent, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:25:07 standalone.localdomain podman[485938]: neutron_sriov_agent
Oct 13 15:25:07 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:07.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:25:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:25:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:25:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:25:08 standalone.localdomain systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully.
Oct 13 15:25:08 standalone.localdomain systemd[1]: Stopped neutron_sriov_agent container.
Oct 13 15:25:08 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:08 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:08 standalone.localdomain systemd[1]: Starting neutron_sriov_agent container...
Oct 13 15:25:08 standalone.localdomain podman[485950]: 2025-10-13 15:25:08.971612669 +0000 UTC m=+0.235089188 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:25:08 standalone.localdomain ceph-mon[29756]: pgmap v3317: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:08 standalone.localdomain podman[485950]: 2025-10-13 15:25:08.979801922 +0000 UTC m=+0.243278481 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:25:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3318: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:25:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:10 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:25:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:10 standalone.localdomain ceph-mon[29756]: pgmap v3318: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3319: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:11 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:11.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:25:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:25:12 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:12 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:12 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:25:12 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8896dc6d7db0f301ae4b7703c1d81b3a9d7d12b72af3f0b56a52b28afb895f9d/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 13 15:25:12 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8896dc6d7db0f301ae4b7703c1d81b3a9d7d12b72af3f0b56a52b28afb895f9d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:25:12 standalone.localdomain podman[485961]: 2025-10-13 15:25:12.148768237 +0000 UTC m=+3.193464761 container init 448fb348af7a883f77918efbad7794ad763f083e96f301e2c7885a93ccc191ce (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '7161a4fa22f1a2d14e1f5ef4fe5b9714ffae4256b5cd01ffb251a66d1af300cf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 15:25:12 standalone.localdomain podman[485961]: 2025-10-13 15:25:12.154269985 +0000 UTC m=+3.198966499 container start 448fb348af7a883f77918efbad7794ad763f083e96f301e2c7885a93ccc191ce (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=neutron_sriov_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=neutron_sriov_agent, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '7161a4fa22f1a2d14e1f5ef4fe5b9714ffae4256b5cd01ffb251a66d1af300cf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:25:12 standalone.localdomain podman[485961]: neutron_sriov_agent
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: + sudo -E kolla_set_configs
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Validating config file
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Copying service configuration files
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Writing out command to execute
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/lock
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dnsmasq_wrapper
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_haproxy_wrapper
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/03f66d1294ee0eb4465c67b7f02f44b76b4fb981cae67c336f96704556f02da9
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/7fd88ee77a303e69e2d79df25f19e760c643c3e7da2c07fbeb6a01569dbedc50
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/interface
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/leases
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/host
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/addn_hosts
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/opts
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/pid
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/interface
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/leases
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/host
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/addn_hosts
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/opts
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/pid
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/interface
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/leases
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/pid
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/lock/neutron-iptables-qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/lock/neutron-iptables-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/lock/neutron-iptables-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/0c455abd-28d4-47e7-a254-e50de0526def.pid.haproxy
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/0c455abd-28d4-47e7-a254-e50de0526def.conf
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: ++ cat /run_command
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: + CMD=/usr/bin/neutron-sriov-nic-agent
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: + ARGS=
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: + sudo kolla_copy_cacerts
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: + [[ ! -n '' ]]
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: + . kolla_extend_start
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\'''
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: Running command: '/usr/bin/neutron-sriov-nic-agent'
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: + umask 0022
Oct 13 15:25:12 standalone.localdomain neutron_sriov_agent[485987]: + exec /usr/bin/neutron-sriov-nic-agent
Oct 13 15:25:12 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:12.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:12 standalone.localdomain sshd[486002]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:25:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:25:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:25:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:25:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:25:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:25:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:25:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:25:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:25:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:25:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:25:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:25:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:25:12 standalone.localdomain ceph-mon[29756]: pgmap v3319: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:25:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3320: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:13 standalone.localdomain unix_chkpwd[486004]: password check failed for user (root)
Oct 13 15:25:13 standalone.localdomain sshd[486002]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99  user=root
Oct 13 15:25:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:25:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:25:13 standalone.localdomain sudo[486011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:25:13 standalone.localdomain sudo[486011]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:25:13 standalone.localdomain sudo[486011]: pam_unix(sudo:session): session closed for user root
Oct 13 15:25:13 standalone.localdomain sudo[486034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Oct 13 15:25:13 standalone.localdomain sudo[486034]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:25:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:25:13.907 2 INFO neutron.common.config [-] Logging enabled!
Oct 13 15:25:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:25:13.908 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev43
Oct 13 15:25:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:25:13.909 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}
Oct 13 15:25:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:25:13.909 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}
Oct 13 15:25:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:25:13.909 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}
Oct 13 15:25:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:25:13.909 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}
Oct 13 15:25:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:25:13.909 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'standalone.localdomain'}
Oct 13 15:25:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:25:13.910 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-2bca565a-3f6c-48ef-a2af-d8222c0cc5b9 - - - - - -] RPC agent_id: nic-switch-agent.standalone.localdomain
Oct 13 15:25:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:25:13.915 2 INFO neutron.agent.agent_extensions_manager [None req-2bca565a-3f6c-48ef-a2af-d8222c0cc5b9 - - - - - -] Loaded agent extensions: ['qos']
Oct 13 15:25:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:25:13.915 2 INFO neutron.agent.agent_extensions_manager [None req-2bca565a-3f6c-48ef-a2af-d8222c0cc5b9 - - - - - -] Initializing agent extension 'qos'
Oct 13 15:25:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:25:14 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:25:14.049 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-2bca565a-3f6c-48ef-a2af-d8222c0cc5b9 - - - - - -] Agent initialized successfully, now running... 
Oct 13 15:25:14 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:25:14.049 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-2bca565a-3f6c-48ef-a2af-d8222c0cc5b9 - - - - - -] SRIOV NIC Agent RPC Daemon Started!
Oct 13 15:25:14 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:25:14.049 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-2bca565a-3f6c-48ef-a2af-d8222c0cc5b9 - - - - - -] Agent out of sync with plugin!
Oct 13 15:25:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:14 standalone.localdomain systemd[1]: Started neutron_sriov_agent container.
Oct 13 15:25:14 standalone.localdomain podman[486005]: 2025-10-13 15:25:14.124772623 +0000 UTC m=+0.419669169 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:25:14 standalone.localdomain podman[486005]: 2025-10-13 15:25:14.184226136 +0000 UTC m=+0.479122682 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:25:14 standalone.localdomain sshd[477244]: pam_unix(sshd:session): session closed for user root
Oct 13 15:25:14 standalone.localdomain systemd[1]: session-296.scope: Deactivated successfully.
Oct 13 15:25:14 standalone.localdomain systemd[1]: session-296.scope: Consumed 22.417s CPU time.
Oct 13 15:25:14 standalone.localdomain systemd-logind[45629]: Session 296 logged out. Waiting for processes to exit.
Oct 13 15:25:14 standalone.localdomain systemd-logind[45629]: Removed session 296.
Oct 13 15:25:15 standalone.localdomain ceph-mon[29756]: pgmap v3320: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3321: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:25:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:25:15 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:25:15 standalone.localdomain sshd[486002]: Failed password for root from 193.46.255.99 port 52970 ssh2
Oct 13 15:25:15 standalone.localdomain sudo[486034]: pam_unix(sudo:session): session closed for user root
Oct 13 15:25:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 15:25:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:25:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 15:25:15 standalone.localdomain podman[486098]: 2025-10-13 15:25:15.824992998 +0000 UTC m=+0.085283280 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 15:25:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:25:15 standalone.localdomain podman[486098]: 2025-10-13 15:25:15.856356945 +0000 UTC m=+0.116647227 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 15:25:15 standalone.localdomain sudo[486121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:25:15 standalone.localdomain sudo[486121]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:25:15 standalone.localdomain sudo[486121]: pam_unix(sudo:session): session closed for user root
Oct 13 15:25:15 standalone.localdomain sudo[486139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:25:15 standalone.localdomain sudo[486139]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:25:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-78170edbfbc1c4123acff74d0c223c6ed428518291f85e84a08885022922a1b7-merged.mount: Deactivated successfully.
Oct 13 15:25:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:25:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:16 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:16.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:25:16 standalone.localdomain ceph-mon[29756]: pgmap v3321: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:16 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:25:16 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:25:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:25:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:25:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3322: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:25:17 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:25:17 standalone.localdomain unix_chkpwd[486170]: password check failed for user (root)
Oct 13 15:25:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:17.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:18 standalone.localdomain sudo[486139]: pam_unix(sudo:session): session closed for user root
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:25:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:25:18 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 12c4c9f6-3aa0-4dbd-9110-57a7eca3e949 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:25:18 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 12c4c9f6-3aa0-4dbd-9110-57a7eca3e949 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:25:18 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 12c4c9f6-3aa0-4dbd-9110-57a7eca3e949 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:25:18 standalone.localdomain sudo[486190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:25:18 standalone.localdomain sudo[486190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:25:18 standalone.localdomain sudo[486190]: pam_unix(sudo:session): session closed for user root
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3081922376' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3081922376' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: pgmap v3322: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3081922376' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:25:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3081922376' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:25:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:25:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:25:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3323: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:19 standalone.localdomain sshd[486002]: Failed password for root from 193.46.255.99 port 52970 ssh2
Oct 13 15:25:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:25:19 standalone.localdomain sshd[486208]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:25:20 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:25:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:25:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:25:20 standalone.localdomain sshd[486208]: Accepted publickey for root from 192.168.122.30 port 39664 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:25:20 standalone.localdomain systemd-logind[45629]: New session 297 of user root.
Oct 13 15:25:20 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:20 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:20 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:20 standalone.localdomain systemd[1]: Started Session 297 of User root.
Oct 13 15:25:20 standalone.localdomain sshd[486208]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:25:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db193506eaeea5bc352278b61a9192c8541e438157f80fdad5fbfa7bd832e228-merged.mount: Deactivated successfully.
Oct 13 15:25:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:25:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:25:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:20 standalone.localdomain podman[486250]: 2025-10-13 15:25:20.378142364 +0000 UTC m=+0.075836068 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, com.redhat.component=ubi9-minimal-container, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 15:25:20 standalone.localdomain podman[486250]: 2025-10-13 15:25:20.426872217 +0000 UTC m=+0.124565951 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container)
Oct 13 15:25:20 standalone.localdomain podman[486251]: 2025-10-13 15:25:20.435709389 +0000 UTC m=+0.132942089 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:25:20 standalone.localdomain podman[486251]: 2025-10-13 15:25:20.445964255 +0000 UTC m=+0.143196965 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true)
Oct 13 15:25:21 standalone.localdomain ceph-mon[29756]: pgmap v3323: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:21 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:25:21 standalone.localdomain unix_chkpwd[486354]: password check failed for user (root)
Oct 13 15:25:21 standalone.localdomain python3.9[486353]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:25:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3324: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:21.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:25:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:25:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:25:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:25:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:25:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:25:21 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:25:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:25:21 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:21 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:21 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:25:21 standalone.localdomain podman[486377]: 2025-10-13 15:25:21.84256338 +0000 UTC m=+0.232426796 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T15:54:32, architecture=x86_64, tcib_managed=true, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 15:25:21 standalone.localdomain podman[486376]: 2025-10-13 15:25:21.898601378 +0000 UTC m=+0.287976639 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, vendor=Red Hat, Inc., release=1, container_name=swift_object_server, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, io.openshift.expose-services=, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.9, tcib_managed=true)
Oct 13 15:25:21 standalone.localdomain podman[486378]: 2025-10-13 15:25:21.956455792 +0000 UTC m=+0.343717938 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:25:22 standalone.localdomain podman[486377]: 2025-10-13 15:25:22.049798799 +0000 UTC m=+0.439662175 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., version=17.1.9, container_name=swift_container_server, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-07-21T15:54:32, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:25:22 standalone.localdomain podman[486376]: 2025-10-13 15:25:22.133810919 +0000 UTC m=+0.523186170 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, container_name=swift_object_server, tcib_managed=true, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, release=1, com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, summary=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vcs-type=git, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64)
Oct 13 15:25:22 standalone.localdomain podman[486378]: 2025-10-13 15:25:22.147269344 +0000 UTC m=+0.534531470 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc., release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, distribution-scope=public, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, container_name=swift_account_server)
Oct 13 15:25:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:25:22 standalone.localdomain python3.9[486536]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 15:25:22 standalone.localdomain ceph-mon[29756]: pgmap v3324: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:22.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:25:23 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:25:23 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:25:23 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:23 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3325: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:23 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:25:23 standalone.localdomain sshd[486002]: Failed password for root from 193.46.255.99 port 52970 ssh2
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:25:23
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'vms', 'backups', 'manila_data', '.mgr', 'images', 'volumes']
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:25:23 standalone.localdomain python3.9[486607]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 15:25:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44639 DF PROTO=TCP SPT=49530 DPT=9102 SEQ=3918167716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBDFE7F0000000001030307) 
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:25:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:25:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:25:23 standalone.localdomain systemd[1]: tmp-crun.zJNoFE.mount: Deactivated successfully.
Oct 13 15:25:23 standalone.localdomain podman[486609]: 2025-10-13 15:25:23.835599432 +0000 UTC m=+0.099860329 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 13 15:25:23 standalone.localdomain podman[486609]: 2025-10-13 15:25:23.84298538 +0000 UTC m=+0.107246287 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid)
Oct 13 15:25:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44640 DF PROTO=TCP SPT=49530 DPT=9102 SEQ=3918167716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBE02760000000001030307) 
Oct 13 15:25:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:25:24 standalone.localdomain sshd[486002]: Received disconnect from 193.46.255.99 port 52970:11:  [preauth]
Oct 13 15:25:24 standalone.localdomain sshd[486002]: Disconnected from authenticating user root 193.46.255.99 port 52970 [preauth]
Oct 13 15:25:24 standalone.localdomain sshd[486002]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99  user=root
Oct 13 15:25:24 standalone.localdomain ceph-mon[29756]: pgmap v3325: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:24 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:25:24 standalone.localdomain sshd[486628]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:25:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0c8e7524cf6b6fffc8c0516c67342c7bc57cbe29bccdd18f09acae4c940add05-merged.mount: Deactivated successfully.
Oct 13 15:25:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3326: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:25:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-94ddaccf538b3b1317fb0419442c6d79d6cab209dc622d3597fafd8458f595b0-merged.mount: Deactivated successfully.
Oct 13 15:25:25 standalone.localdomain unix_chkpwd[486630]: password check failed for user (root)
Oct 13 15:25:25 standalone.localdomain sshd[486628]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99  user=root
Oct 13 15:25:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:25:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:25:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 13 15:25:26 standalone.localdomain podman[486631]: 2025-10-13 15:25:26.28454605 +0000 UTC m=+0.077024506 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:25:26 standalone.localdomain podman[486631]: 2025-10-13 15:25:26.323829501 +0000 UTC m=+0.116307987 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:25:26 standalone.localdomain podman[486631]: unhealthy
Oct 13 15:25:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:26.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44641 DF PROTO=TCP SPT=49530 DPT=9102 SEQ=3918167716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBE0A760000000001030307) 
Oct 13 15:25:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-94ddaccf538b3b1317fb0419442c6d79d6cab209dc622d3597fafd8458f595b0-merged.mount: Deactivated successfully.
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.811279) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369126811311, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 556, "num_deletes": 250, "total_data_size": 369069, "memory_usage": 378984, "flush_reason": "Manual Compaction"}
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369126814427, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 289299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78499, "largest_seqno": 79054, "table_properties": {"data_size": 286602, "index_size": 745, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7399, "raw_average_key_size": 20, "raw_value_size": 280936, "raw_average_value_size": 786, "num_data_blocks": 33, "num_entries": 357, "num_filter_entries": 357, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760369096, "oldest_key_time": 1760369096, "file_creation_time": 1760369126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 3173 microseconds, and 957 cpu microseconds.
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.814452) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 289299 bytes OK
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.814468) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.818325) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.818338) EVENT_LOG_v1 {"time_micros": 1760369126818334, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.818351) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 365911, prev total WAL file size 366400, number of live WAL files 2.
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.819048) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303033' seq:72057594037927935, type:22 .. '6D6772737461740033323534' seq:0, type:0; will stop at (end)
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(282KB)], [179(6520KB)]
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369126819086, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 6965971, "oldest_snapshot_seqno": -1}
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: pgmap v3326: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 6351 keys, 4967447 bytes, temperature: kUnknown
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369126859930, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 4967447, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4932897, "index_size": 17556, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15941, "raw_key_size": 168505, "raw_average_key_size": 26, "raw_value_size": 4824914, "raw_average_value_size": 759, "num_data_blocks": 685, "num_entries": 6351, "num_filter_entries": 6351, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760369126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.860291) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 4967447 bytes
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.861897) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.9 rd, 121.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 6.4 +0.0 blob) out(4.7 +0.0 blob), read-write-amplify(41.2) write-amplify(17.2) OK, records in: 6859, records dropped: 508 output_compression: NoCompression
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.861925) EVENT_LOG_v1 {"time_micros": 1760369126861913, "job": 112, "event": "compaction_finished", "compaction_time_micros": 40998, "compaction_time_cpu_micros": 21586, "output_level": 6, "num_output_files": 1, "total_output_size": 4967447, "num_input_records": 6859, "num_output_records": 6351, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369126862111, "job": 112, "event": "table_file_deletion", "file_number": 181}
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369126863001, "job": 112, "event": "table_file_deletion", "file_number": 179}
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.818976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.863093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.863099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.863100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.863101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:25:26 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:25:26.863103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:25:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3327: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:25:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-78170edbfbc1c4123acff74d0c223c6ed428518291f85e84a08885022922a1b7-merged.mount: Deactivated successfully.
Oct 13 15:25:27 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:25:27 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:25:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:27.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:28 standalone.localdomain sshd[486628]: Failed password for root from 193.46.255.99 port 39732 ssh2
Oct 13 15:25:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:28.085 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:25:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:28.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:25:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:28.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:25:28 standalone.localdomain python3.9[486760]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 15:25:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:25:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-132f8d778f9e9d93379d71b3e65d4acabc780080be588becf91e896afa6a5dd5-merged.mount: Deactivated successfully.
Oct 13 15:25:28 standalone.localdomain ceph-mon[29756]: pgmap v3327: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-132f8d778f9e9d93379d71b3e65d4acabc780080be588becf91e896afa6a5dd5-merged.mount: Deactivated successfully.
Oct 13 15:25:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:29.086 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:25:29 standalone.localdomain python3.9[486871]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3328: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:29 standalone.localdomain unix_chkpwd[486889]: password check failed for user (root)
Oct 13 15:25:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:25:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:25:29 standalone.localdomain python3.9[486980]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.091 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.109 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.109 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.109 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.110 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.110 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:25:30 standalone.localdomain python3.9[487089]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:25:30 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:30 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:30 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:25:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3168756790' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:25:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44642 DF PROTO=TCP SPT=49530 DPT=9102 SEQ=3918167716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBE1A360000000001030307) 
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.641 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.719 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.721 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.721 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.728 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.728 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:25:30 standalone.localdomain ceph-mon[29756]: pgmap v3328: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:30 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3168756790' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.891 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.892 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9832MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.892 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.892 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.975 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.975 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.975 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:25:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:30.975 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:25:30 standalone.localdomain python3.9[487218]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:31.027 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:25:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3329: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:31 standalone.localdomain sshd[486628]: Failed password for root from 193.46.255.99 port 39732 ssh2
Oct 13 15:25:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:25:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/579363511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:25:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:31.434 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:25:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:25:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:31.447 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:25:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:31.471 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:25:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:31.473 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:25:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:31.473 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:25:31 standalone.localdomain python3.9[487346]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:31.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:25:31 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/579363511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:25:32 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:32 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:32 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:32 standalone.localdomain python3.9[487456]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-132f8d778f9e9d93379d71b3e65d4acabc780080be588becf91e896afa6a5dd5-merged.mount: Deactivated successfully.
Oct 13 15:25:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3942391ca9f80237581af7e7b18037e35031ab92d48c420734c7963817de1c30-merged.mount: Deactivated successfully.
Oct 13 15:25:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:32.473 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:25:32 standalone.localdomain python3.9[487564]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:25:32 standalone.localdomain ceph-mon[29756]: pgmap v3329: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:32.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:33 standalone.localdomain unix_chkpwd[487649]: password check failed for user (root)
Oct 13 15:25:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:33.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:25:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:33.090 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:25:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3330: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:33 standalone.localdomain python3.9[487673]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:33.336 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:25:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:33.336 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:25:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:33.336 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:25:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:33.713 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:25:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:33.728 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:25:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:33.728 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:25:33 standalone.localdomain python3.9[487759]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760369132.6850808-106-146645027841337/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=3ebfe8ab1da42a1c6ca52429f61716009c5fd177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:34 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 15:25:34 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 15:25:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:25:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:25:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:25:34 standalone.localdomain podman[487868]: 2025-10-13 15:25:34.478501916 +0000 UTC m=+0.072328140 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 13 15:25:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:25:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0c8e7524cf6b6fffc8c0516c67342c7bc57cbe29bccdd18f09acae4c940add05-merged.mount: Deactivated successfully.
Oct 13 15:25:34 standalone.localdomain podman[487868]: 2025-10-13 15:25:34.512897847 +0000 UTC m=+0.106724091 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:25:34 standalone.localdomain python3.9[487867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:34 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:25:34 standalone.localdomain ceph-mon[29756]: pgmap v3330: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:35 standalone.localdomain python3.9[487973]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760369134.0856526-121-216446307963371/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:35 standalone.localdomain sshd[486628]: Failed password for root from 193.46.255.99 port 39732 ssh2
Oct 13 15:25:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3331: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0c8e7524cf6b6fffc8c0516c67342c7bc57cbe29bccdd18f09acae4c940add05-merged.mount: Deactivated successfully.
Oct 13 15:25:35 standalone.localdomain python3.9[488081]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 13 15:25:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:25:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:25:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:36.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:25:36 standalone.localdomain python3.9[488167]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760369135.1765704-121-222255076359795/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:36.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:25:36 standalone.localdomain sshd[486628]: Received disconnect from 193.46.255.99 port 39732:11:  [preauth]
Oct 13 15:25:36 standalone.localdomain sshd[486628]: Disconnected from authenticating user root 193.46.255.99 port 39732 [preauth]
Oct 13 15:25:36 standalone.localdomain sshd[486628]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99  user=root
Oct 13 15:25:36 standalone.localdomain python3.9[488275]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:36 standalone.localdomain sshd[488292]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:25:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:25:36 standalone.localdomain ceph-mon[29756]: pgmap v3331: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:25:37 standalone.localdomain python3.9[488363]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760369136.2517116-121-125323251491088/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 checksum=8aba2ade5f4c27481f196753747284490216e81e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3332: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:25:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-132f8d778f9e9d93379d71b3e65d4acabc780080be588becf91e896afa6a5dd5-merged.mount: Deactivated successfully.
Oct 13 15:25:37 standalone.localdomain unix_chkpwd[488381]: password check failed for user (root)
Oct 13 15:25:37 standalone.localdomain sshd[488292]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99  user=root
Oct 13 15:25:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-132f8d778f9e9d93379d71b3e65d4acabc780080be588becf91e896afa6a5dd5-merged.mount: Deactivated successfully.
Oct 13 15:25:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:37.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:38 standalone.localdomain python3.9[488472]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:25:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:38 standalone.localdomain python3.9[488558]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760369137.9409347-179-235651534285054/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=809e3daaf14459c644bf02f1580900e4bae59863 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:38 standalone.localdomain ceph-mon[29756]: pgmap v3332: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:25:38 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:38 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3333: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:39 standalone.localdomain python3.9[488666]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:39 standalone.localdomain sshd[488292]: Failed password for root from 193.46.255.99 port 40516 ssh2
Oct 13 15:25:39 standalone.localdomain python3.9[488752]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760369139.026577-194-94068753828030/.source follow=False _original_basename=haproxy.j2 checksum=e4288860049c1baef23f6e1bb6c6f91acb5432e7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:40 standalone.localdomain python3.9[488860]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-132f8d778f9e9d93379d71b3e65d4acabc780080be588becf91e896afa6a5dd5-merged.mount: Deactivated successfully.
Oct 13 15:25:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:25:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3942391ca9f80237581af7e7b18037e35031ab92d48c420734c7963817de1c30-merged.mount: Deactivated successfully.
Oct 13 15:25:40 standalone.localdomain podman[488894]: 2025-10-13 15:25:40.758915731 +0000 UTC m=+0.081542515 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:25:40 standalone.localdomain podman[488894]: 2025-10-13 15:25:40.770041464 +0000 UTC m=+0.092668268 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:25:40 standalone.localdomain ceph-mon[29756]: pgmap v3333: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:41 standalone.localdomain python3.9[488970]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760369140.108407-194-244502956123934/.source follow=False _original_basename=dnsmasq.j2 checksum=efc19f376a79c40570368e9c2b979cde746f1ea8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3334: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:41 standalone.localdomain unix_chkpwd[489026]: password check failed for user (root)
Oct 13 15:25:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:41.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:41 standalone.localdomain python3.9[489079]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:41 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:25:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3942391ca9f80237581af7e7b18037e35031ab92d48c420734c7963817de1c30-merged.mount: Deactivated successfully.
Oct 13 15:25:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c5481f769777c9753e381b725ebc95f095fff1a1888ddebb0aedc5c68ba2eae3-merged.mount: Deactivated successfully.
Oct 13 15:25:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:25:41 standalone.localdomain python3.9[489134]: ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:25:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:25:42 standalone.localdomain python3.9[489242]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:25:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:25:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:25:42 standalone.localdomain ceph-mon[29756]: pgmap v3334: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:25:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:25:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:25:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:25:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:25:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:25:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:25:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:25:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:25:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:25:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:25:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:25:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:42.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:43 standalone.localdomain python3.9[489330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760369142.1050813-223-76291462068260/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3335: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:43 standalone.localdomain python3.9[489438]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:25:43 standalone.localdomain sshd[488292]: Failed password for root from 193.46.255.99 port 40516 ssh2
Oct 13 15:25:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:25:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:25:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:25:44 standalone.localdomain python3.9[489548]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:25:44 standalone.localdomain python3.9[489656]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:44 standalone.localdomain ceph-mon[29756]: pgmap v3335: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:44 standalone.localdomain unix_chkpwd[489681]: password check failed for user (root)
Oct 13 15:25:45 standalone.localdomain python3.9[489712]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3336: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:45 standalone.localdomain python3.9[489820]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:25:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:25:45 standalone.localdomain podman[489839]: 2025-10-13 15:25:45.87786254 +0000 UTC m=+0.049914200 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 13 15:25:45 standalone.localdomain podman[489839]: 2025-10-13 15:25:45.930209113 +0000 UTC m=+0.102260753 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:25:46 standalone.localdomain python3.9[489898]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:25:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:46.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:46 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:46 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:25:46 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:25:46 standalone.localdomain python3.9[490006]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:25:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:25:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:25:46 standalone.localdomain ceph-mon[29756]: pgmap v3336: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:46 standalone.localdomain sshd[488292]: Failed password for root from 193.46.255.99 port 40516 ssh2
Oct 13 15:25:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3337: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:47 standalone.localdomain python3.9[490114]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:25:47 standalone.localdomain podman[467099]: time="2025-10-13T15:25:47Z" level=error msg="Getting root fs size for \"8bc349857c7dd0419f3cdb5b6b64f912584f4b858f54b05f34a30a8a4c46cd60\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": unmounting layer d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610: replacing mount point \"/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged\": device or resource busy"
Oct 13 15:25:47 standalone.localdomain podman[490170]: 2025-10-13 15:25:47.74823244 +0000 UTC m=+0.076547230 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, tcib_managed=true)
Oct 13 15:25:47 standalone.localdomain podman[490170]: 2025-10-13 15:25:47.783834398 +0000 UTC m=+0.112149188 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:25:47 standalone.localdomain python3.9[490169]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:25:48 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:48.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:25:48 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 15:25:48 standalone.localdomain object-server[490296]: Object update sweep starting on /srv/node/d1 (pid: 28)
Oct 13 15:25:48 standalone.localdomain object-server[490296]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 28)
Oct 13 15:25:48 standalone.localdomain object-server[490296]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 15:25:48 standalone.localdomain object-server[114601]: Object update sweep completed: 0.10s
Oct 13 15:25:48 standalone.localdomain python3.9[490295]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:25:48 standalone.localdomain sshd[488292]: Received disconnect from 193.46.255.99 port 40516:11:  [preauth]
Oct 13 15:25:48 standalone.localdomain sshd[488292]: Disconnected from authenticating user root 193.46.255.99 port 40516 [preauth]
Oct 13 15:25:48 standalone.localdomain sshd[488292]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99  user=root
Oct 13 15:25:48 standalone.localdomain ceph-mon[29756]: pgmap v3337: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:48 standalone.localdomain python3.9[490351]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:25:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3338: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:49 standalone.localdomain python3.9[490459]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:25:49 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:25:49 standalone.localdomain systemd-rc-local-generator[490482]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:25:49 standalone.localdomain systemd-sysv-generator[490485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:25:49 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:25:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:25:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c5481f769777c9753e381b725ebc95f095fff1a1888ddebb0aedc5c68ba2eae3-merged.mount: Deactivated successfully.
Oct 13 15:25:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c5481f769777c9753e381b725ebc95f095fff1a1888ddebb0aedc5c68ba2eae3-merged.mount: Deactivated successfully.
Oct 13 15:25:50 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:25:50 standalone.localdomain python3.9[490606]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:50 standalone.localdomain ceph-mon[29756]: pgmap v3338: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:51 standalone.localdomain python3.9[490661]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:25:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Oct 13 15:25:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:25:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3339: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:25:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:51 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:51.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:51 standalone.localdomain python3.9[490769]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:25:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:25:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:25:52 standalone.localdomain podman[490825]: 2025-10-13 15:25:52.083920553 +0000 UTC m=+0.094520515 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Oct 13 15:25:52 standalone.localdomain systemd[1]: tmp-crun.ZNnXvm.mount: Deactivated successfully.
Oct 13 15:25:52 standalone.localdomain podman[490826]: 2025-10-13 15:25:52.132958924 +0000 UTC m=+0.140196923 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:25:52 standalone.localdomain python3.9[490824]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:25:52 standalone.localdomain podman[490826]: 2025-10-13 15:25:52.145899033 +0000 UTC m=+0.153137052 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:25:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:25:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:25:52 standalone.localdomain podman[490825]: 2025-10-13 15:25:52.201736725 +0000 UTC m=+0.212336697 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=)
Oct 13 15:25:52 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:25:52 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:25:52 standalone.localdomain python3.9[490968]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:25:52 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:25:52 standalone.localdomain ceph-mon[29756]: pgmap v3339: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:53 standalone.localdomain systemd-sysv-generator[490992]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:25:53 standalone.localdomain systemd-rc-local-generator[490986]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:25:53 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:53.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:53 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:25:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3340: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:25:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:25:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:25:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:25:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:25:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:25:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Oct 13 15:25:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:25:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:25:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:25:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:25:53 standalone.localdomain systemd[1]: Starting Create netns directory...
Oct 13 15:25:53 standalone.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 15:25:53 standalone.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 15:25:53 standalone.localdomain systemd[1]: Finished Create netns directory.
Oct 13 15:25:53 standalone.localdomain podman[491006]: 2025-10-13 15:25:53.450910315 +0000 UTC m=+0.097130656 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, build-date=2025-07-21T14:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, name=rhosp17/openstack-swift-object, vcs-type=git, container_name=swift_object_server, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, release=1, com.redhat.component=openstack-swift-object-container)
Oct 13 15:25:53 standalone.localdomain podman[491007]: 2025-10-13 15:25:53.491184176 +0000 UTC m=+0.133223967 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, io.openshift.expose-services=)
Oct 13 15:25:53 standalone.localdomain podman[491008]: 2025-10-13 15:25:53.469326913 +0000 UTC m=+0.107254698 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, vcs-type=git, container_name=swift_account_server, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:25:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12947 DF PROTO=TCP SPT=42232 DPT=9102 SEQ=2452892598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBE73AF0000000001030307) 
Oct 13 15:25:53 standalone.localdomain podman[491006]: 2025-10-13 15:25:53.624713813 +0000 UTC m=+0.270934184 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-object, release=1, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1)
Oct 13 15:25:53 standalone.localdomain podman[491008]: 2025-10-13 15:25:53.640417428 +0000 UTC m=+0.278345223 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, release=1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, architecture=x86_64, build-date=2025-07-21T16:11:22, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container)
Oct 13 15:25:53 standalone.localdomain podman[491007]: 2025-10-13 15:25:53.674788857 +0000 UTC m=+0.316828658 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.expose-services=, com.redhat.component=openstack-swift-container-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, container_name=swift_container_server, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, name=rhosp17/openstack-swift-container, batch=17.1_20250721.1)
Oct 13 15:25:54 standalone.localdomain python3.9[491197]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:25:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12948 DF PROTO=TCP SPT=42232 DPT=9102 SEQ=2452892598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBE77B60000000001030307) 
Oct 13 15:25:54 standalone.localdomain python3.9[491305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:25:54 standalone.localdomain ceph-mon[29756]: pgmap v3340: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3341: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:55 standalone.localdomain python3.9[491391]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/root/.ansible/tmp/ansible-tmp-1760369154.409499-371-52106535014578/.source.json _original_basename=.wqkp1ilp follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:25:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:25:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:25:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-53bdd984fb7049ab7f79c8d6ad9e4d45d8ae9e4d29126e230f8a5d1abdc81fab-merged.mount: Deactivated successfully.
Oct 13 15:25:55 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:25:55 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:25:55 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:25:55 standalone.localdomain podman[491392]: 2025-10-13 15:25:55.707898615 +0000 UTC m=+0.261476043 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 13 15:25:55 standalone.localdomain podman[491392]: 2025-10-13 15:25:55.712987961 +0000 UTC m=+0.266565369 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:25:56 standalone.localdomain python3.9[491516]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:25:56 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:56.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12949 DF PROTO=TCP SPT=42232 DPT=9102 SEQ=2452892598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBE7FB70000000001030307) 
Oct 13 15:25:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:25:56 standalone.localdomain ceph-mon[29756]: pgmap v3341: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3342: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:25:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e3083e85742d15b3688bbd5bfc55abd305d83e3dc3c5e508c9fd32ae2eabd379-merged.mount: Deactivated successfully.
Oct 13 15:25:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:25:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:25:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:25:58 standalone.localdomain python3.9[491829]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False
Oct 13 15:25:58 standalone.localdomain nova_compute[456855]: 2025-10-13 15:25:58.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:25:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:25:58 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:25:58 standalone.localdomain podman[491782]: 2025-10-13 15:25:58.243444991 +0000 UTC m=+0.492460423 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:25:58 standalone.localdomain podman[491782]: 2025-10-13 15:25:58.281868065 +0000 UTC m=+0.530883467 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:25:58 standalone.localdomain podman[491782]: unhealthy
Oct 13 15:25:58 standalone.localdomain python3.9[491949]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:25:58 standalone.localdomain ceph-mon[29756]: pgmap v3342: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3343: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:25:59 standalone.localdomain python3.9[492057]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 13 15:25:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:25:59 standalone.localdomain podman[467099]: time="2025-10-13T15:25:59Z" level=error msg="Getting root fs size for \"8ded7ea8bd22e54751140af9557a4c25b630dc8d0794326e6685ab9e76934366\": getting diffsize of layer \"28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b\" and its parent \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\": unmounting layer d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610: replacing mount point \"/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged\": device or resource busy"
Oct 13 15:25:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:25:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:26:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:26:00 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:00 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:00 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:26:00 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:26:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12950 DF PROTO=TCP SPT=42232 DPT=9102 SEQ=2452892598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBE8F760000000001030307) 
Oct 13 15:26:00 standalone.localdomain ceph-mon[29756]: pgmap v3343: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3344: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:01 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:01.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:26:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e3083e85742d15b3688bbd5bfc55abd305d83e3dc3c5e508c9fd32ae2eabd379-merged.mount: Deactivated successfully.
Oct 13 15:26:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4dfe70c52a10e8bd718ed4752cbab0ea6264e7dcefe217a06d9a919012c408d9-merged.mount: Deactivated successfully.
Oct 13 15:26:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:02 standalone.localdomain ceph-mon[29756]: pgmap v3344: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:03 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:03.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3345: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:26:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:26:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:26:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:26:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:26:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-53bdd984fb7049ab7f79c8d6ad9e4d45d8ae9e4d29126e230f8a5d1abdc81fab-merged.mount: Deactivated successfully.
Oct 13 15:26:04 standalone.localdomain podman[492071]: 2025-10-13 15:26:04.806140308 +0000 UTC m=+0.079630275 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:26:04 standalone.localdomain podman[492071]: 2025-10-13 15:26:04.816528989 +0000 UTC m=+0.090018976 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:26:04 standalone.localdomain ceph-mon[29756]: pgmap v3345: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3346: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:26:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:26:06 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:06 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:06 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:26:06 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:06.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 15:26:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 6600.0 total, 600.0 interval
                                                        Cumulative writes: 17K writes, 79K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
                                                        Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.06 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1502 writes, 7304 keys, 1502 commit groups, 1.0 writes per commit group, ingest: 5.49 MB, 0.01 MB/s
                                                        Interval WAL: 1502 writes, 1502 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     79.7      0.66              0.18        56    0.012       0      0       0.0       0.0
                                                          L6      1/0    4.74 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   5.1    199.2    169.1      1.60              0.77        55    0.029    288K    29K       0.0       0.0
                                                         Sum      1/0    4.74 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   6.1    140.8    142.9      2.26              0.95       111    0.020    288K    29K       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   8.3    146.1    142.3      0.32              0.15        14    0.023     48K   3615       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    199.2    169.1      1.60              0.77        55    0.029    288K    29K       0.0       0.0
                                                        High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     80.0      0.66              0.18        55    0.012       0      0       0.0       0.0
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 6600.0 total, 600.0 interval
                                                        Flush(GB): cumulative 0.052, interval 0.005
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.32 GB write, 0.05 MB/s write, 0.31 GB read, 0.05 MB/s read, 2.3 seconds
                                                        Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.3 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x557b7fb5b350#2 capacity: 308.00 MB usage: 41.91 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.00034 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(3761,40.03 MB,12.9973%) FilterBlock(112,795.86 KB,0.25234%) IndexBlock(112,1.10 MB,0.358438%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
Oct 13 15:26:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:26:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:26:06.941 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:26:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:26:06.941 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:26:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:26:06.942 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:26:07 standalone.localdomain ceph-mon[29756]: pgmap v3346: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3347: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:07 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:07 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:07 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:08 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:08.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e3083e85742d15b3688bbd5bfc55abd305d83e3dc3c5e508c9fd32ae2eabd379-merged.mount: Deactivated successfully.
Oct 13 15:26:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:09 standalone.localdomain ceph-mon[29756]: pgmap v3347: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3348: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:26:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:26:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:10 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:10 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:10 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:11 standalone.localdomain ceph-mon[29756]: pgmap v3348: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3349: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:26:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-006dca8b53e2bed08474999b261511727460c03a977d9613a5d06166fa6108c2-merged.mount: Deactivated successfully.
Oct 13 15:26:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:11 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:11.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:26:11 standalone.localdomain podman[492091]: 2025-10-13 15:26:11.794066425 +0000 UTC m=+0.073689483 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:26:11 standalone.localdomain podman[492091]: 2025-10-13 15:26:11.799904215 +0000 UTC m=+0.079527273 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:26:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:26:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:26:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:26:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:26:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:26:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:26:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:26:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:26:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:26:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:26:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:26:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:26:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:26:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:26:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:26:13 standalone.localdomain ceph-mon[29756]: pgmap v3349: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e3083e85742d15b3688bbd5bfc55abd305d83e3dc3c5e508c9fd32ae2eabd379-merged.mount: Deactivated successfully.
Oct 13 15:26:13 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:13.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4dfe70c52a10e8bd718ed4752cbab0ea6264e7dcefe217a06d9a919012c408d9-merged.mount: Deactivated successfully.
Oct 13 15:26:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3350: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:13 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:13 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:13 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:26:14 standalone.localdomain ceph-mon[29756]: pgmap v3350: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3351: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:16 standalone.localdomain ceph-mon[29756]: pgmap v3351: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:26:16 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:16.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:26:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:26:16 standalone.localdomain podman[492113]: 2025-10-13 15:26:16.830356676 +0000 UTC m=+0.093793123 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 13 15:26:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:26:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-caf1f29ceeb931335ec036bcac576e0e50be69d47f8ecf9dd6e53a9d522384f3-merged.mount: Deactivated successfully.
Oct 13 15:26:16 standalone.localdomain podman[492113]: 2025-10-13 15:26:16.860749562 +0000 UTC m=+0.124185799 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:26:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3352: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-caf1f29ceeb931335ec036bcac576e0e50be69d47f8ecf9dd6e53a9d522384f3-merged.mount: Deactivated successfully.
Oct 13 15:26:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:26:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:26:18 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:26:18 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:18.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:18 standalone.localdomain sudo[492138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:26:18 standalone.localdomain sudo[492138]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:26:18 standalone.localdomain sudo[492138]: pam_unix(sudo:session): session closed for user root
Oct 13 15:26:18 standalone.localdomain sudo[492156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:26:18 standalone.localdomain sudo[492156]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:26:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:26:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3748915866' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:26:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:26:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3748915866' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:26:18 standalone.localdomain ceph-mon[29756]: pgmap v3352: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3748915866' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:26:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3748915866' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:26:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3353: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:26:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:26:19 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:20 standalone.localdomain sudo[492156]: pam_unix(sudo:session): session closed for user root
Oct 13 15:26:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:26:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:26:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:26:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:26:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:26:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:26:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:26:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:26:20 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev fc669337-c37e-4ccc-a094-f5c56ede42d9 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:26:20 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev fc669337-c37e-4ccc-a094-f5c56ede42d9 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:26:20 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event fc669337-c37e-4ccc-a094-f5c56ede42d9 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:26:20 standalone.localdomain sudo[492205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:26:20 standalone.localdomain sudo[492205]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:26:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:26:20 standalone.localdomain sudo[492205]: pam_unix(sudo:session): session closed for user root
Oct 13 15:26:20 standalone.localdomain podman[492223]: 2025-10-13 15:26:20.437041353 +0000 UTC m=+0.067917514 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:26:20 standalone.localdomain podman[492223]: 2025-10-13 15:26:20.479978907 +0000 UTC m=+0.110855068 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:26:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:20 standalone.localdomain ceph-mon[29756]: pgmap v3353: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:20 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:26:20 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:26:20 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:26:20 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:26:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:20.989 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:26:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:20.991 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:26:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:20.991 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.014 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.015 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.015 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.030 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.031 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '617d28d9-ecf7-4595-bfdd-62f7feff0be1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:26:20.991406', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f895a49e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': '76b55a86c28e6b9d3dbd988211fd633fc16bd64f62682f0eb5740149f0c530a6'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:26:20.991406', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f895b40c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': 'a6f6fa726822c397fc22b2996a00320a824c38485600321cb237c8d4d4ab6905'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:26:20.991406', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'f895bf56-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': '7646c1168eed7ee2169817c81605a27476b9e2a6103dddcedc5ee8a5139d2b76'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:26:20.991406', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f897ff64-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.205423826, 'message_signature': 'fd9b4c0fedbd844085eed6c6245c87c5ed0ffb2009a08a39ecc430ab5e53caa3'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:26:20.991406', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f8980e50-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.205423826, 'message_signature': '8183ae43afc232de93992cbaefc7e9eb1ec604fd828503b419a4a053d8ec4749'}]}, 'timestamp': '2025-10-13 15:26:21.031348', '_unique_id': '9e3e6399e1d6455e9e44ed532955aa8c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.032 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.033 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.052 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.052 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.053 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.062 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.064 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ead692a2-2451-4e13-bd73-7e780e896fd6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:26:21.034084', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f89b5704-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.223341008, 'message_signature': 'ca135d26303c0250cbe2a028243b8b96ed515be4bee346bf9587958d629ab47a'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:26:21.034084', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f89b6578-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.223341008, 'message_signature': '7765005cfc1a76d2ba03da41b7bc12199f46ec0ab9755acaa4eeee09994dbc21'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:26:21.034084', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'f89b78d8-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.223341008, 'message_signature': '13dfce1debd83d51b6a0d757aa1cc92d4fc0187b732f420b776251725b83294a'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:26:21.034084', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f89d0a7c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.242945282, 'message_signature': 'ac0ffe56ef20307d54fcd0b342a5489e54253fc877132d75f75a6d93cd2b40a5'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:26:21.034084', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f89d16c0-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.242945282, 'message_signature': '8b193024f8d9e17ec003454a3c2cd085d2d791407e3e474630c33886b1c80944'}]}, 'timestamp': '2025-10-13 15:26:21.064283', '_unique_id': 'eed17ae2550e46c68ae566400465398a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.065 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.068 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 4738 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.069 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 3576 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '79631b5a-76ec-4141-8e4d-40c1d9c41548', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4738, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:26:21.065984', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'f89dbbac-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.255226191, 'message_signature': '7f24ccb267622c89dad84807b23d9b1cc1701881bbd59b8201dc5791b57ffb4b'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3576, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:26:21.065984', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'f89dfce8-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.257706797, 'message_signature': '9819462a2f5de4003a488ebc3cb7307024da5976fd27bbb479b34b9a715a754d'}]}, 'timestamp': '2025-10-13 15:26:21.070210', '_unique_id': '2d1349e1b60a478eafa06c3247c74ee9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.071 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.072 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 4526 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.072 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd1134331-c1fe-4303-b140-60f352b7c9b5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4526, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:26:21.072016', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'f89e5044-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.255226191, 'message_signature': '8f2f5d7d6b4c587bd5f1792b001771a5104bdfecd151cced1fd45d2a9edf1bd4'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:26:21.072016', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'f89e5b84-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.257706797, 'message_signature': '548fe857ebacb77286435571b91ec0e68d47bd7981f5da7e1138121528bd8eec'}]}, 'timestamp': '2025-10-13 15:26:21.072606', '_unique_id': 'e8688e5786994adab914c7bf1c468794'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.073 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4b71a75a-736b-4433-bef4-0fea73de3d4b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:26:21.073750', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'f89e91ee-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.255226191, 'message_signature': '7ea60f3fb6e5ba24640786a493549d0ab178b3a7989da0b9b6655cf50e1f7e58'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:26:21.073750', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'f89e9a54-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.257706797, 'message_signature': '21b534b31bd6757438b6bf289e2142c85af6564890b1fde8d758bc553103a94f'}]}, 'timestamp': '2025-10-13 15:26:21.074188', '_unique_id': '49ee05157fbf4f7db250bcaebf15f6a2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.074 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.075 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.088 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 52.27734375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.099 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bf0cfb77-fca6-4be1-9e28-2fe1c3df4a9d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.27734375, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:26:21.075274', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'f8a0c63a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.27725736, 'message_signature': 'ade60b3d9f42449b43badc74a463d56740197fcd2e29d11ea3d250a3590ebb30'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:26:21.075274', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'f8a2960e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.288594199, 'message_signature': '476bdf34c5bcbfe15c39769e71f79f173bad3d1d00afc707752cac4b24b80502'}]}, 'timestamp': '2025-10-13 15:26:21.100544', '_unique_id': 'f456e4925c374673a9c5ed4b38179616'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.102 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.103 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.103 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 495 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.103 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.103 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.103 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.104 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '77eb0be0-fb7d-4a57-8968-965eb19f925e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 495, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:26:21.103148', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f8a30f94-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': '2c33203d062888a8d94f818d4ec12dca94c142a1f1db78aaabbbec19ab910ae6'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:26:21.103148', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f8a3193a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': 'b25d3ff7221b146b8118ffe731aea0ce7e9bd34f90b392eab2df1b29bd99cc35'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:26:21.103148', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'f8a3220e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': '6aac0493ea6a4672d21d21dbc8edbc3f1afb05b752930f71245647d8ed32036b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:26:21.103148', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f8a32b82-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.205423826, 'message_signature': 'f3d2e4a9700006474beecf9cc2a6687d9114189600b0d627fd8109b6384746ea'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:26:21.103148', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f8a33348-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.205423826, 'message_signature': '30438b76cb35b42be25ee20875200f092e89b18b7b0a6cc05254cc74b09ee5a8'}]}, 'timestamp': '2025-10-13 15:26:21.104313', '_unique_id': 'd3d3ae92e9da414d9c6be9eedc77cce1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.105 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 29800000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 29650000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8d8aa55c-231a-4e2e-b59b-c8cfa5e464cb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 29800000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:26:21.105768', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'f8a37506-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.27725736, 'message_signature': 'd60f3092121cdf4eea1e96f219b88b7b8badb1ce60767ea1e4b2d6ba5e3bdc64'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 29650000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:26:21.105768', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'f8a37cfe-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.288594199, 'message_signature': '363fd1dc4382dcfebe3841672bba1ff5e9666b01d9e161b72c7d2e99c6d12dcc'}]}, 'timestamp': '2025-10-13 15:26:21.106217', '_unique_id': '9c92f6f2f66649aba3a6950e23aaa53e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.106 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.107 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.107 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.107 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd7c768db-950d-4d10-b174-fd6908e7fb40', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:26:21.107427', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'f8a3b656-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.255226191, 'message_signature': '0671b198f9b9dcde91596957bbc5e1fc1161fcffa366f4abe91c5dd88fccd952'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:26:21.107427', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'f8a3bea8-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.257706797, 'message_signature': 'b26dc95be21ce4376b8035df9c3066bd243ead91829b3dff81a13628a2be6065'}]}, 'timestamp': '2025-10-13 15:26:21.107885', '_unique_id': 'c4fc82af661d4be7bef8c898e055cb28'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.108 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6d0fc2ab-5b34-4fc8-8012-bd3c68b23948', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:26:21.108981', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'f8a3f1ac-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.255226191, 'message_signature': '194d82749e17d3ab36a17fba84e7f77888996f16906d08b1af63627d651ffeb5'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:26:21.108981', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'f8a3f972-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.257706797, 'message_signature': 'f8c498d60048f7eabf1e75461d17de8000c97a7e688af46cc8a330503c45e45a'}]}, 'timestamp': '2025-10-13 15:26:21.109388', '_unique_id': 'e5314460ede6415692f58d6d120cc711'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.109 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.110 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.110 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.110 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4ff83693-68c0-489b-b461-12d17d75bc41', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:26:21.110461', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'f8a42c94-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.255226191, 'message_signature': '3aa3cc752cb51030555ec5c6845075ac56eb46078bfbf10c9a6d958436929fb7'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:26:21.110461', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'f8a434c8-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.257706797, 'message_signature': '1f15fbd6c3d8a0535c37046b2d81811d08d2fed5d3670b56a8b01047d55cb671'}]}, 'timestamp': '2025-10-13 15:26:21.110908', '_unique_id': '6bd0813c45744afe88e8d03647c18db1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.111 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.112 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.112 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.112 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.112 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.112 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dad8f247-4d8c-40bb-a218-af07470f9416', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:26:21.111994', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f8a467c2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': '9a56ad163c70c3ef8010eeeec58ee2d7776226de52faad4310de380de3a0d09d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:26:21.111994', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f8a46f74-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': '877fc8f7381456fc7a79127c6bcbcb963f838bdfbfeb7fa416c83dc9e5e68c72'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 108, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:26:21.111994', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'f8a47744-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': 'f8750ce50972c5cc63b9c5dd0dad757e1b12f40506798b2aa8531bd18a3bcc3a'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:26:21.111994', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f8a47f46-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.205423826, 'message_signature': 'fd3b39c3fcdeb45d64ef3d007b765a6c441cfa1cf55cc8221a8650078afd958a'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:26:21.111994', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f8a48680-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.205423826, 'message_signature': 'a415a5b4d5b0a061dea4ff6f5218a7d433f3aaa7ad019f13cf363eb8a9ddbea5'}]}, 'timestamp': '2025-10-13 15:26:21.112991', '_unique_id': '19a285b3a316495bb442efd632e09ba7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.113 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.114 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.114 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.114 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.114 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 2321920 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.114 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.114 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b1da6845-936b-429d-9b02-91e43a03f181', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:26:21.114107', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f8a4bae2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': '2d8821792f3b32157cf095e6b0f7b7b11c38fd3cdb60d75b065b23a54df6089c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:26:21.114107', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f8a4c28a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': '8a567e2337f11300501fc1dafcaa0f5310c3a93ca11950679989f81bb79ea3d7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2321920, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:26:21.114107', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'f8a4ca82-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': 'e8d157522dd55540ade227da2bc91262ab9eb8fa023d5d72fa89d5c3b8f407d9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:26:21.114107', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f8a4d25c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.205423826, 'message_signature': '12d75b527500de33141a301de1f2255acde0d4ad6111e73d4e037ff8a159c19e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:26:21.114107', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f8a4d9d2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.205423826, 'message_signature': '2bf544fa7392ae69f30fdb18b3a8f7f95c02f5f45925265a0c2578932f0225e3'}]}, 'timestamp': '2025-10-13 15:26:21.115122', '_unique_id': 'c5ae83a8dd0647e8bf568a261e42298e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.115 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.116 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.116 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.116 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '588aedc0-2657-4f6b-bf6f-f260d9fc5940', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:26:21.116174', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'f8a50af6-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.255226191, 'message_signature': 'a0912fc313df08d3d08dc83d156187372fb719a507c26818b01b025a520fab6b'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:26:21.116174', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'f8a513a2-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.257706797, 'message_signature': '1610a3aafe20512f9ef263fc95f84f05e4747d9b59abc22c652ef3b07c691a3c'}]}, 'timestamp': '2025-10-13 15:26:21.116615', '_unique_id': '9e4623214cbb41f29cd2677a4f494368'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.117 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.118 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.118 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.118 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '120a3eaa-33b6-4bf6-b371-861257cea7bf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:26:21.117610', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f8a542c8-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.223341008, 'message_signature': '0acb7d2012083b388a253597cc8def0467071f421b6eaab3d1dbe91c5882408a'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:26:21.117610', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f8a54a7a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.223341008, 'message_signature': '4190409397129c078ab3c9fc682c824547c8bb84fc96cb463f334ecb6bef4c45'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:26:21.117610', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'f8a551a0-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.223341008, 'message_signature': '2d3332275c3ad93e3640eac90d20f197fe6b60cebec833fc3b290a1795e36e0b'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:26:21.117610', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f8a55a24-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.242945282, 'message_signature': '6899e9da80b0fd23b42530dafbafe291625193c2e4291d715e9b329db3b22df0'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:26:21.117610', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f8a5641a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.242945282, 'message_signature': '581a397e59f417ee03ed4cdbd7d9095015d20963ee19565518c9fa221176ba1e'}]}, 'timestamp': '2025-10-13 15:26:21.118701', '_unique_id': 'e49f4abc560f47dfac7649844849be9d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.119 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 501653653 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.120 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.120 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.120 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.120 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ba4f999d-cfca-44e1-873a-ba1c127933c3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 501653653, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:26:21.119831', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f8a5998a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': 'a7ee3c6191ad4fcc7c841564fba3d99624c74fef28e7ad52452b4eae517bbe42'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:26:21.119831', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f8a5a646-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': '8e5247b421266d1eafe34546bfae4a7696d6ed4efb4174af58a4c9086c00b122'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:26:21.119831', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'f8a5b1d6-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': 'f5a250f2b3d0c2c862e21e8a50d3c03ff2f08e18022bb07c45ea031676418531'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:26:21.119831', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f8a5b942-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.205423826, 'message_signature': '9854b3bc2edb7cbfa9bf9dfadb453cce549be0bb153a94f273845ff8b83af98b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:26:21.119831', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f8a5c036-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.205423826, 'message_signature': '1e3e4199442a9e1138e50a3b4b4109a2df8c1a6d2224461134f4f3378d71f8ba'}]}, 'timestamp': '2025-10-13 15:26:21.121056', '_unique_id': 'e823ad52b5c748e689415a45bfaec14c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.121 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.122 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.122 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.122 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ff240e6b-5d2e-4150-8e66-18d367a63f73', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:26:21.122572', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'f8a60672-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.255226191, 'message_signature': '01970972c0185d8ac86f5f4b5972ae0b468f64d3f17bd07cdfa8da47cbfdb914'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:26:21.122572', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'f8a610ae-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.257706797, 'message_signature': '10af802bdf1a5ec69053faedf8f6dc11125b430861de61d27436091ff52d0c7b'}]}, 'timestamp': '2025-10-13 15:26:21.123093', '_unique_id': '638f680085e44143b39b7e38324f2eb0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.123 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.124 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.124 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 64 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.124 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 49 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '287d024c-e164-41cf-841a-3f8bfcf9a656', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 64, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:26:21.124172', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'f8a6433a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.255226191, 'message_signature': '2a4c5e5ae8dae3b01964535e0bd1bca1033c1a276015eaca766b7431894f9307'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 49, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:26:21.124172', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'f8a64be6-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.257706797, 'message_signature': 'c6bb940e25ad46a3c391522cdc7a64fe9fa08ca4ff8fa8297ab41dc9c7c6cff4'}]}, 'timestamp': '2025-10-13 15:26:21.124607', '_unique_id': 'ff23491316764c42af40a7f06addf19c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.125 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.126 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.126 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.126 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f8dc1878-081c-4916-95c1-68590ec861ec', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:26:21.125626', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f8a67c9c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.223341008, 'message_signature': '67fcdc3de3a98d8d77f803932ce4410014df798288456efd5b0c1e6d75cd3ae8'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:26:21.125626', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f8a6843a-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.223341008, 'message_signature': '16bd8ca02269d3476c9fe4688f4f20ca0f5d2c299d71090dbbaf2439179a2606'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:26:21.125626', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'f8a68b4c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.223341008, 'message_signature': 'b174383f40ed7926fd5d9cef4448a8087f2403f269d7433ec8add16f0a168e53'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:26:21.125626', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f8a69376-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.242945282, 'message_signature': '14a72c20272646f849fa46c12adac6eb1c9f21da30c798f8b34484b3fb865236'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:26:21.125626', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f8a69ba0-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.242945282, 'message_signature': 'bc69f2a1ecf987e7d801dd9c46fe57b55293fd2480904a7df706c83c2c82caec'}]}, 'timestamp': '2025-10-13 15:26:21.126640', '_unique_id': '0c5adf765c68448eb618543704eadef7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 55 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.127 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5e32c70c-b8ca-483f-b202-206b5cc1002d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 55, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:26:21.127725', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'f8a6ce22-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.255226191, 'message_signature': '318fc0f8dacba0da5d6fcb5d0c202133cf7a4a9eb2761dea415024322524e9fb'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:26:21.127725', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'f8a6d62e-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.257706797, 'message_signature': 'ec6690c251e42e20ca4e361c561ffa6f4ae4f4672cbafcf33b6a381ea079528b'}]}, 'timestamp': '2025-10-13 15:26:21.128150', '_unique_id': '43754b22ac8746eab7e1876ea9f4e440'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.128 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.129 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.129 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 857967408 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.129 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 155394833 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.129 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 63114260 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.129 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'afa45511-dec8-4118-a222-19a6307a8671', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 857967408, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:26:21.129219', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f8a70838-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': '291fae5756f6e817c2b42d3db46bab1eb6f802356f4bacd88692eb939bfe904c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 155394833, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:26:21.129219', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f8a7106c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': '0db207b6aa78c04238f528125fc93801ea2758c6292c4d31d56292707dac3f5a'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 63114260, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:26:21.129219', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'f8a7188c-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.180619761, 'message_signature': '2063f37361ae1a8940fe4079a967bed2e92f64f1646961cea731c51cadf602bb'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:26:21.129219', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f8a71fda-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.205423826, 'message_signature': 'd223c1960ef71f6016f9b9d0ae79ca3735324c0b12c3bc42dc52feff0d3c94c3'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:26:21.129219', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f8a726c4-a848-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8624.205423826, 'message_signature': 'f07ac0e2ad85a2286cccb5f9eda85ea706af15776b3cba4ca1e707a72d8aec3f'}]}, 'timestamp': '2025-10-13 15:26:21.130222', '_unique_id': 'ce4b9f40b5fc4bb8917bdd466b0fa3c5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.130 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.131 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:26:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:26:21.131 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:26:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3354: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:21 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:21 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:21 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:21 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:21.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:26:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:22 standalone.localdomain ceph-mon[29756]: pgmap v3354: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3355: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:26:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:23.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:26:23
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'manila_metadata', 'images', '.mgr', 'vms', 'manila_data', 'volumes']
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:26:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61334 DF PROTO=TCP SPT=40380 DPT=9102 SEQ=1056245617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBEE8DF0000000001030307) 
Oct 13 15:26:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:26:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:26:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:26:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:26:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-006dca8b53e2bed08474999b261511727460c03a977d9613a5d06166fa6108c2-merged.mount: Deactivated successfully.
Oct 13 15:26:23 standalone.localdomain podman[492242]: 2025-10-13 15:26:23.843815099 +0000 UTC m=+0.121011872 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 13 15:26:23 standalone.localdomain podman[492241]: 2025-10-13 15:26:23.904557442 +0000 UTC m=+0.179562497 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container)
Oct 13 15:26:23 standalone.localdomain podman[492242]: 2025-10-13 15:26:23.983879897 +0000 UTC m=+0.261076650 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:26:24 standalone.localdomain podman[492241]: 2025-10-13 15:26:24.045979581 +0000 UTC m=+0.320984646 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=edpm, name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 13 15:26:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61335 DF PROTO=TCP SPT=40380 DPT=9102 SEQ=1056245617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBEECF60000000001030307) 
Oct 13 15:26:24 standalone.localdomain systemd[1]: tmp-crun.IhG1ts.mount: Deactivated successfully.
Oct 13 15:26:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:26:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-868ee7f6adbabfd307c31c3f6589e2eb42d60dc9c98a3ffc82e4943ad43a114b-merged.mount: Deactivated successfully.
Oct 13 15:26:24 standalone.localdomain ceph-mon[29756]: pgmap v3355: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:26:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:26:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:26:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3356: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:26:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:26:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:26:25 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:25 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:25 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:26:25 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:26:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:26:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:26:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:26:25 standalone.localdomain systemd[1]: tmp-crun.bgdOzT.mount: Deactivated successfully.
Oct 13 15:26:25 standalone.localdomain podman[492279]: 2025-10-13 15:26:25.824394437 +0000 UTC m=+0.093409201 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, vcs-type=git, batch=17.1_20250721.1, container_name=swift_account_server, name=rhosp17/openstack-swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container)
Oct 13 15:26:25 standalone.localdomain podman[492277]: 2025-10-13 15:26:25.878783534 +0000 UTC m=+0.149596303 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, batch=17.1_20250721.1, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., vcs-type=git, container_name=swift_object_server, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, summary=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:26:25 standalone.localdomain podman[492278]: 2025-10-13 15:26:25.942807057 +0000 UTC m=+0.211796590 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=swift_container_server, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:26:26 standalone.localdomain podman[492279]: 2025-10-13 15:26:26.027801108 +0000 UTC m=+0.296815892 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, vcs-type=git, container_name=swift_account_server, architecture=x86_64, com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, name=rhosp17/openstack-swift-account, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:26:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:26:26 standalone.localdomain podman[492277]: 2025-10-13 15:26:26.065968414 +0000 UTC m=+0.336781183 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, name=rhosp17/openstack-swift-object, config_id=tripleo_step4, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, distribution-scope=public, container_name=swift_object_server, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team)
Oct 13 15:26:26 standalone.localdomain podman[492278]: 2025-10-13 15:26:26.142878525 +0000 UTC m=+0.411868048 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, distribution-scope=public, vendor=Red Hat, Inc., release=1, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, config_id=tripleo_step4, com.redhat.component=openstack-swift-container-container, vcs-type=git, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, maintainer=OpenStack TripleO Team)
Oct 13 15:26:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61336 DF PROTO=TCP SPT=40380 DPT=9102 SEQ=1056245617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBEF4F60000000001030307) 
Oct 13 15:26:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:26.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:26:27 standalone.localdomain ceph-mon[29756]: pgmap v3356: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:26:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3357: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:26:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:26:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:26:27 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:27 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:27 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:26:27 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:26:27 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:26:28 standalone.localdomain ceph-mon[29756]: pgmap v3357: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:26:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:28.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:26:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:28.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:26:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:28 standalone.localdomain podman[492356]: 2025-10-13 15:26:28.4934756 +0000 UTC m=+0.144492225 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 13 15:26:28 standalone.localdomain podman[492356]: 2025-10-13 15:26:28.569919807 +0000 UTC m=+0.220936472 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct 13 15:26:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:29.085 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:26:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:29.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:26:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3358: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:26:29 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:26:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.126 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.126 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.126 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.127 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.127 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:26:30 standalone.localdomain ceph-mon[29756]: pgmap v3358: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:26:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:26:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/4159285829' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.613 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:26:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:26:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61337 DF PROTO=TCP SPT=40380 DPT=9102 SEQ=1056245617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBF04B60000000001030307) 
Oct 13 15:26:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.713 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.713 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.714 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.718 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.718 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:26:30 standalone.localdomain podman[492398]: 2025-10-13 15:26:30.752682368 +0000 UTC m=+0.078807381 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:26:30 standalone.localdomain podman[492398]: 2025-10-13 15:26:30.786825261 +0000 UTC m=+0.112950294 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:26:30 standalone.localdomain podman[492398]: unhealthy
Oct 13 15:26:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.898 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.902 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9692MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.902 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:26:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:30.903 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:26:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:26:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-caf1f29ceeb931335ec036bcac576e0e50be69d47f8ecf9dd6e53a9d522384f3-merged.mount: Deactivated successfully.
Oct 13 15:26:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:31.045 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:26:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:31.046 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:26:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:31.047 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:26:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:31.048 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:26:31 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:26:31 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:26:31 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:31 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:31.105 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:26:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3359: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:26:31 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4159285829' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:26:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:26:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3719231180' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:26:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:31.520 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:26:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:31.526 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:26:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:31.602 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:26:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:31.603 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:26:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:31.604 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:26:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:31.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:26:32 standalone.localdomain ceph-mon[29756]: pgmap v3359: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:26:32 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3719231180' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:26:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:26:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:26:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3360: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:26:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:33.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:33.603 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:26:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:33.604 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:26:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:33.604 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:26:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:34 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:34 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:34 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:34 standalone.localdomain ceph-mon[29756]: pgmap v3360: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:26:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:26:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bf9420cd69b63e4cf6dac0289efb2e21fb7a99dd6512be09e26d2ba629cde543-merged.mount: Deactivated successfully.
Oct 13 15:26:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:35.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:26:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:35.091 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:26:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:35.091 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:26:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3361: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:26:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:35.494 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:26:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:35.495 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:26:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:35.495 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:26:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:35.495 2 DEBUG nova.objects.instance [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:26:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:35.932 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:26:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:35.953 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:26:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:35.953 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:26:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:36.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:26:36 standalone.localdomain ceph-mon[29756]: pgmap v3361: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:26:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:26:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:26:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:26:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:36.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:36 standalone.localdomain podman[492443]: 2025-10-13 15:26:36.69271546 +0000 UTC m=+0.099299613 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute)
Oct 13 15:26:36 standalone.localdomain podman[492443]: 2025-10-13 15:26:36.708140185 +0000 UTC m=+0.114724308 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:26:36 standalone.localdomain systemd[1]: tmp-crun.WPHMvK.mount: Deactivated successfully.
Oct 13 15:26:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:26:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:26:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-868ee7f6adbabfd307c31c3f6589e2eb42d60dc9c98a3ffc82e4943ad43a114b-merged.mount: Deactivated successfully.
Oct 13 15:26:36 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:26:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3362: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:38.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:26:38 standalone.localdomain ceph-mon[29756]: pgmap v3362: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:26:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:26:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:26:39 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:39 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3363: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:26:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:40 standalone.localdomain ceph-mon[29756]: pgmap v3363: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:26:41 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3364: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:41 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:41 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:41 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:41 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:41.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:26:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:42 standalone.localdomain ceph-mon[29756]: pgmap v3364: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:26:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:26:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:26:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:26:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:26:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:26:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:26:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:26:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:26:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:26:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:26:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:26:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3365: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:43 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:43.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:26:43 standalone.localdomain podman[492461]: 2025-10-13 15:26:43.801387829 +0000 UTC m=+0.074358583 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:26:43 standalone.localdomain podman[492461]: 2025-10-13 15:26:43.807710425 +0000 UTC m=+0.080681219 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:26:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:26:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4eb8db4c42546afae0a3aee10f13d090e0d588e1e1565732d0102851e5608eab-merged.mount: Deactivated successfully.
Oct 13 15:26:44 standalone.localdomain ceph-mon[29756]: pgmap v3365: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:26:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3366: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bf9420cd69b63e4cf6dac0289efb2e21fb7a99dd6512be09e26d2ba629cde543-merged.mount: Deactivated successfully.
Oct 13 15:26:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bf9420cd69b63e4cf6dac0289efb2e21fb7a99dd6512be09e26d2ba629cde543-merged.mount: Deactivated successfully.
Oct 13 15:26:45 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:26:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:46.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:26:46 standalone.localdomain ceph-mon[29756]: pgmap v3366: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3367: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:26:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:26:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:26:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:26:48 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:48.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:26:48 standalone.localdomain podman[492484]: 2025-10-13 15:26:48.561573179 +0000 UTC m=+0.083721053 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:26:48 standalone.localdomain podman[492484]: 2025-10-13 15:26:48.589906092 +0000 UTC m=+0.112053936 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:26:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:26:48 standalone.localdomain ceph-mon[29756]: pgmap v3367: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3368: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:26:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:26:49 standalone.localdomain account-server[114563]: 172.20.0.100 - - [13/Oct/2025:15:26:49 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txebc96e71d96b48f486104-0068ed1a39" "proxy-server 2" 0.0007 "-" 21 -
Oct 13 15:26:49 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: txebc96e71d96b48f486104-0068ed1a39)
Oct 13 15:26:49 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txebc96e71d96b48f486104-0068ed1a39)
Oct 13 15:26:50 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:50 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:50 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:26:50 standalone.localdomain podman[467099]: time="2025-10-13T15:26:50Z" level=error msg="Getting root fs size for \"9add9c0bfbfce496fe734bea7cbae19d31b547406576a599e0d239dd93e80327\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": creating overlay mount to /var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/W4XPC5KOLLZ5MP2UTF333CCBDD:/var/lib/containers/storage/overlay/l/5PSUIGDAXQERVK6KLHOROUARFH,upperdir=/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/diff,workdir=/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/work,nodev,metacopy=on\": no such file or directory"
Oct 13 15:26:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:26:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:26:51 standalone.localdomain ceph-mon[29756]: pgmap v3368: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3369: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:26:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:51 standalone.localdomain podman[492510]: 2025-10-13 15:26:51.587178593 +0000 UTC m=+0.106288908 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true)
Oct 13 15:26:51 standalone.localdomain podman[492510]: 2025-10-13 15:26:51.620936174 +0000 UTC m=+0.140046499 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:26:51 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:51.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:26:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:26:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:26:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-954d2e0c0f5814816175699e6bf4259e781f0fc918aa8a9354f4e7f06e62c6b8-merged.mount: Deactivated successfully.
Oct 13 15:26:52 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:26:53 standalone.localdomain ceph-mon[29756]: pgmap v3369: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:26:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:26:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:26:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:26:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:26:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:26:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3370: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:53 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:53.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5226 DF PROTO=TCP SPT=46038 DPT=9102 SEQ=2520490202 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBF5E0F0000000001030307) 
Oct 13 15:26:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5227 DF PROTO=TCP SPT=46038 DPT=9102 SEQ=2520490202 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBF62370000000001030307) 
Oct 13 15:26:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:26:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b7ccf96a28197636c7a5b8f45056e04db2357d7c2dc122633e916788515691d-merged.mount: Deactivated successfully.
Oct 13 15:26:55 standalone.localdomain ceph-mon[29756]: pgmap v3370: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:26:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4eb8db4c42546afae0a3aee10f13d090e0d588e1e1565732d0102851e5608eab-merged.mount: Deactivated successfully.
Oct 13 15:26:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3371: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:26:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:26:55 standalone.localdomain podman[492529]: 2025-10-13 15:26:55.819865379 +0000 UTC m=+0.073478956 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:26:55 standalone.localdomain podman[492529]: 2025-10-13 15:26:55.835714238 +0000 UTC m=+0.089327795 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:26:55 standalone.localdomain podman[492528]: 2025-10-13 15:26:55.879406885 +0000 UTC m=+0.138725748 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, distribution-scope=public, vcs-type=git, io.openshift.tags=minimal rhel9, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Oct 13 15:26:55 standalone.localdomain podman[492528]: 2025-10-13 15:26:55.893288093 +0000 UTC m=+0.152606956 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 13 15:26:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5228 DF PROTO=TCP SPT=46038 DPT=9102 SEQ=2520490202 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBF6A360000000001030307) 
Oct 13 15:26:56 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:56.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:26:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:57 standalone.localdomain ceph-mon[29756]: pgmap v3371: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3372: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:26:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:26:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:26:57 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:26:57 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:57 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:26:57 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:26:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:26:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:26:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:26:57 standalone.localdomain podman[492567]: 2025-10-13 15:26:57.813604793 +0000 UTC m=+0.076934434 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, container_name=swift_object_server, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public)
Oct 13 15:26:57 standalone.localdomain podman[492568]: 2025-10-13 15:26:57.887470037 +0000 UTC m=+0.143773031 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, release=1, vcs-type=git, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, container_name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-container-container)
Oct 13 15:26:57 standalone.localdomain podman[492570]: 2025-10-13 15:26:57.848390942 +0000 UTC m=+0.096742641 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.buildah.version=1.33.12, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, config_id=tripleo_step4, container_name=swift_account_server, tcib_managed=true, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container)
Oct 13 15:26:58 standalone.localdomain podman[492567]: 2025-10-13 15:26:58.000816307 +0000 UTC m=+0.264145918 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, release=1, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:26:58 standalone.localdomain podman[492570]: 2025-10-13 15:26:58.033634637 +0000 UTC m=+0.281986326 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.openshift.expose-services=)
Oct 13 15:26:58 standalone.localdomain podman[492568]: 2025-10-13 15:26:58.142050491 +0000 UTC m=+0.398353545 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9)
Oct 13 15:26:58 standalone.localdomain nova_compute[456855]: 2025-10-13 15:26:58.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:26:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:26:58 standalone.localdomain podman[467099]: time="2025-10-13T15:26:58Z" level=error msg="Getting root fs size for \"9d920186a9daead738b7368217b8813c98ac1b0778fc986eb1bb21fa49950ebf\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": unmounting layer d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610: replacing mount point \"/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged\": device or resource busy"
Oct 13 15:26:59 standalone.localdomain ceph-mon[29756]: pgmap v3372: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3373: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:26:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:26:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:26:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:26:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:26:59 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:26:59 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:26:59 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:26:59 standalone.localdomain podman[492646]: 2025-10-13 15:26:59.803546353 +0000 UTC m=+0.102798760 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 15:26:59 standalone.localdomain podman[492646]: 2025-10-13 15:26:59.819345789 +0000 UTC m=+0.118598237 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Oct 13 15:27:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5229 DF PROTO=TCP SPT=46038 DPT=9102 SEQ=2520490202 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBF79F60000000001030307) 
Oct 13 15:27:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:01 standalone.localdomain ceph-mon[29756]: pgmap v3373: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3374: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:27:01 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:01.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:27:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:27:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a016b8ee5ebe6d708ab6632d2306c689124fcec83b86e66eb34d55977e3e3b6e-merged.mount: Deactivated successfully.
Oct 13 15:27:02 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:02 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:02 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:27:02 standalone.localdomain podman[492665]: 2025-10-13 15:27:02.306709322 +0000 UTC m=+0.577025467 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:27:02 standalone.localdomain podman[492665]: 2025-10-13 15:27:02.312144793 +0000 UTC m=+0.582460938 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:27:02 standalone.localdomain podman[492665]: unhealthy
Oct 13 15:27:02 standalone.localdomain ceph-mon[29756]: pgmap v3374: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3375: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:03 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:03.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:27:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:27:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:03 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:27:03 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:27:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:04 standalone.localdomain ceph-mon[29756]: pgmap v3375: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3376: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:27:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:27:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:06 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:06.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:27:06 standalone.localdomain ceph-mon[29756]: pgmap v3376: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:27:06.941 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:27:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:27:06.942 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:27:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:27:06.943 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:27:06 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:06 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:06 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:27:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-954d2e0c0f5814816175699e6bf4259e781f0fc918aa8a9354f4e7f06e62c6b8-merged.mount: Deactivated successfully.
Oct 13 15:27:07 standalone.localdomain podman[492686]: 2025-10-13 15:27:07.05319808 +0000 UTC m=+0.076081400 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:27:07 standalone.localdomain podman[492686]: 2025-10-13 15:27:07.062720752 +0000 UTC m=+0.085604072 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Oct 13 15:27:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3377: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:08 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:08.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 15:27:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 6600.1 total, 600.0 interval
                                                        Cumulative writes: 7014 writes, 30K keys, 7014 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                                        Cumulative WAL: 7014 writes, 1318 syncs, 5.32 writes per sync, written: 0.03 GB, 0.00 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 45 writes, 100 keys, 45 commit groups, 1.0 writes per commit group, ingest: 0.03 MB, 0.00 MB/s
                                                        Interval WAL: 45 writes, 20 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 13 15:27:08 standalone.localdomain ceph-mon[29756]: pgmap v3377: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:27:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:27:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3378: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:27:09 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:27:09 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:09 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:10 standalone.localdomain ceph-mon[29756]: pgmap v3378: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3379: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:27:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:27:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-59dd28e840ca55d017be8e7da1cf03b089ee3ce35fdcc06fb7e7fd2d34874879-merged.mount: Deactivated successfully.
Oct 13 15:27:11 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:11 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:11.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:27:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:12 standalone.localdomain ceph-mon[29756]: pgmap v3379: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:27:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:27:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:27:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:27:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:27:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:27:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:27:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:27:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:27:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:27:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:27:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:27:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3380: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:13 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:13.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:13 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:13 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:14 standalone.localdomain ceph-mon[29756]: pgmap v3380: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3381: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:15 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Check health
Oct 13 15:27:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:27:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:27:15 standalone.localdomain systemd[1]: tmp-crun.xxoK5I.mount: Deactivated successfully.
Oct 13 15:27:15 standalone.localdomain podman[492705]: 2025-10-13 15:27:15.836918892 +0000 UTC m=+0.132823618 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:27:15 standalone.localdomain podman[492705]: 2025-10-13 15:27:15.874837602 +0000 UTC m=+0.170742308 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:27:16 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:16 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:16 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:27:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a016b8ee5ebe6d708ab6632d2306c689124fcec83b86e66eb34d55977e3e3b6e-merged.mount: Deactivated successfully.
Oct 13 15:27:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:27:16 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:16.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:27:16 standalone.localdomain ceph-mon[29756]: pgmap v3381: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3382: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:27:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:27:17 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:27:18 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:18 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:18 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:18.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:27:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3419081752' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:27:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:27:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3419081752' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:27:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:18 standalone.localdomain ceph-mon[29756]: pgmap v3382: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3419081752' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:27:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3419081752' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:27:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3383: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:27:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:27:20 standalone.localdomain sudo[492728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:27:20 standalone.localdomain sudo[492728]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:27:20 standalone.localdomain sudo[492728]: pam_unix(sudo:session): session closed for user root
Oct 13 15:27:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:27:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:20 standalone.localdomain sudo[492748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:27:20 standalone.localdomain sudo[492748]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:27:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:20 standalone.localdomain podman[492746]: 2025-10-13 15:27:20.554960959 +0000 UTC m=+0.104946133 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:27:20 standalone.localdomain podman[492746]: 2025-10-13 15:27:20.581742201 +0000 UTC m=+0.131727395 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:27:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:27:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a87f962b5d36e9727ad9caa16c1660fa93b6a87798b880bf239662f5eb69d8c8-merged.mount: Deactivated successfully.
Oct 13 15:27:20 standalone.localdomain ceph-mon[29756]: pgmap v3383: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:20 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:27:20 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:20 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3384: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:21 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:21.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:27:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:27:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:27:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:22 standalone.localdomain sudo[492748]: pam_unix(sudo:session): session closed for user root
Oct 13 15:27:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:27:22 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:27:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:27:22 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:27:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:27:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:27:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:22 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:27:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:27:22 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:27:22 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 608c526b-f866-4f5e-8655-2ecb91d04955 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:27:22 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 608c526b-f866-4f5e-8655-2ecb91d04955 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:27:22 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 608c526b-f866-4f5e-8655-2ecb91d04955 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:27:22 standalone.localdomain podman[492821]: 2025-10-13 15:27:22.84167829 +0000 UTC m=+0.108774367 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Oct 13 15:27:22 standalone.localdomain sudo[492835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:27:22 standalone.localdomain sudo[492835]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:27:22 standalone.localdomain sudo[492835]: pam_unix(sudo:session): session closed for user root
Oct 13 15:27:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:22 standalone.localdomain podman[492821]: 2025-10-13 15:27:22.907144375 +0000 UTC m=+0.174240442 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:27:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:23 standalone.localdomain ceph-mon[29756]: pgmap v3384: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:23 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:27:23 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:27:23 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:27:23 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3385: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:27:23
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['images', 'manila_data', 'volumes', 'vms', 'manila_metadata', 'backups', '.mgr']
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:27:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:23.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38282 DF PROTO=TCP SPT=41500 DPT=9102 SEQ=1080102895 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBFD33F0000000001030307) 
Oct 13 15:27:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:27:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:27:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:27:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-59dd28e840ca55d017be8e7da1cf03b089ee3ce35fdcc06fb7e7fd2d34874879-merged.mount: Deactivated successfully.
Oct 13 15:27:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38283 DF PROTO=TCP SPT=41500 DPT=9102 SEQ=1080102895 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBFD7360000000001030307) 
Oct 13 15:27:24 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:27:24 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:24 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-59dd28e840ca55d017be8e7da1cf03b089ee3ce35fdcc06fb7e7fd2d34874879-merged.mount: Deactivated successfully.
Oct 13 15:27:25 standalone.localdomain ceph-mon[29756]: pgmap v3385: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:27:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:27:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:27:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3386: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:26 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:27:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38284 DF PROTO=TCP SPT=41500 DPT=9102 SEQ=1080102895 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBFDF370000000001030307) 
Oct 13 15:27:26 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:26.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:27:27 standalone.localdomain ceph-mon[29756]: pgmap v3386: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:27:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:27:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:27:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bf7f5cc363ca2027e31b478163d4d24e61ff18cbc712f3882286eff78f2e65ae-merged.mount: Deactivated successfully.
Oct 13 15:27:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3387: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:27:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:27:27 standalone.localdomain podman[492858]: 2025-10-13 15:27:27.797158115 +0000 UTC m=+0.077745578 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:27:27 standalone.localdomain podman[492858]: 2025-10-13 15:27:27.809254123 +0000 UTC m=+0.089841556 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:27:27 standalone.localdomain systemd[1]: tmp-crun.bIfK6I.mount: Deactivated successfully.
Oct 13 15:27:27 standalone.localdomain podman[492857]: 2025-10-13 15:27:27.867609368 +0000 UTC m=+0.147651065 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible)
Oct 13 15:27:27 standalone.localdomain podman[492857]: 2025-10-13 15:27:27.905831518 +0000 UTC m=+0.185873175 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.6, release=1755695350, maintainer=Red Hat, Inc., architecture=x86_64)
Oct 13 15:27:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:28.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:27:28 standalone.localdomain ceph-mon[29756]: pgmap v3387: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:28.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:27:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:27:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:29.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:27:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3388: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:29 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:27:29 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:27:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:27:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:27:29 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:27:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:27:29 standalone.localdomain systemd[1]: tmp-crun.rQB0if.mount: Deactivated successfully.
Oct 13 15:27:29 standalone.localdomain podman[492901]: 2025-10-13 15:27:29.890170671 +0000 UTC m=+0.087901909 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, tcib_managed=true, container_name=swift_account_server, vendor=Red Hat, Inc., version=17.1.9, release=1, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git)
Oct 13 15:27:29 standalone.localdomain podman[492894]: 2025-10-13 15:27:29.934975115 +0000 UTC m=+0.142214313 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, tcib_managed=true, architecture=x86_64, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, name=rhosp17/openstack-swift-object, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, config_id=tripleo_step4, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team)
Oct 13 15:27:30 standalone.localdomain podman[492895]: 2025-10-13 15:27:30.003952345 +0000 UTC m=+0.203282911 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, batch=17.1_20250721.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32)
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:27:30 standalone.localdomain podman[492901]: 2025-10-13 15:27:30.102188438 +0000 UTC m=+0.299919676 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, version=17.1.9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, container_name=swift_account_server, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, io.openshift.expose-services=)
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.108 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.109 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.109 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.109 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.109 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:27:30 standalone.localdomain podman[492894]: 2025-10-13 15:27:30.157303848 +0000 UTC m=+0.364543006 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, name=rhosp17/openstack-swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, container_name=swift_object_server, batch=17.1_20250721.1, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible)
Oct 13 15:27:30 standalone.localdomain podman[492895]: 2025-10-13 15:27:30.213811978 +0000 UTC m=+0.413142594 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, build-date=2025-07-21T15:54:32, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:27:30 standalone.localdomain ceph-mon[29756]: pgmap v3388: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:27:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1016317499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.565 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:27:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38285 DF PROTO=TCP SPT=41500 DPT=9102 SEQ=1080102895 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CBFEEF70000000001030307) 
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.654 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.659 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.660 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.665 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.665 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:27:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.837 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.839 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9631MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.839 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.839 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.909 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.909 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.910 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.910 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:27:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:30.960 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:27:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:27:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3389: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:27:31 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1016317499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:27:31 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:31 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:27:31 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:27:31 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:27:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:27:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2548119485' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:27:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:31.396 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:27:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:31.401 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:27:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:31.417 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:27:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:31.419 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:27:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:31.419 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:27:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:31.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:27:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:27:32 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:32 standalone.localdomain ceph-mon[29756]: pgmap v3389: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:32 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2548119485' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:27:32 standalone.localdomain podman[493020]: 2025-10-13 15:27:32.387992143 +0000 UTC m=+0.084235901 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:27:32 standalone.localdomain podman[493020]: 2025-10-13 15:27:32.39872718 +0000 UTC m=+0.094971028 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:27:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:32.415 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:27:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:32.416 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:27:32 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:27:32 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:32 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:33.085 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:27:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:33.111 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:27:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3390: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:33.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:34.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:27:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:34.090 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:27:34 standalone.localdomain ceph-mon[29756]: pgmap v3390: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:27:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:27:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a87f962b5d36e9727ad9caa16c1660fa93b6a87798b880bf239662f5eb69d8c8-merged.mount: Deactivated successfully.
Oct 13 15:27:34 standalone.localdomain podman[493039]: 2025-10-13 15:27:34.717793308 +0000 UTC m=+0.090772705 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:27:34 standalone.localdomain podman[493039]: 2025-10-13 15:27:34.75609815 +0000 UTC m=+0.129077517 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:27:34 standalone.localdomain podman[493039]: unhealthy
Oct 13 15:27:34 standalone.localdomain systemd[1]: tmp-crun.w2WfPX.mount: Deactivated successfully.
Oct 13 15:27:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:27:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee02a78034d58b97a791f95c9d2a559665d6d7cb40b95ff70c9188da0f009dd0-merged.mount: Deactivated successfully.
Oct 13 15:27:35 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:27:35 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:27:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:35.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:27:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:35.091 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:27:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3391: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:35.513 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:27:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:35.514 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:27:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:35.514 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:27:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:35.899 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:27:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:35.916 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:27:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:35.916 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:27:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:36 standalone.localdomain ceph-mon[29756]: pgmap v3391: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:27:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:27:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:36.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:27:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3392: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:37 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:27:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:27:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:27:37 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:37 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:38.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:27:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:38.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:38 standalone.localdomain ceph-mon[29756]: pgmap v3392: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3393: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:27:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:27:39 standalone.localdomain podman[493062]: 2025-10-13 15:27:39.74254906 +0000 UTC m=+0.097233114 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:27:39 standalone.localdomain podman[493062]: 2025-10-13 15:27:39.751790144 +0000 UTC m=+0.106474128 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute)
Oct 13 15:27:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c090e82b448dcd4a7f534d36b0ebcdfc6d0291dd16888ef524a3f3fe0e525a-merged.mount: Deactivated successfully.
Oct 13 15:27:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bf7f5cc363ca2027e31b478163d4d24e61ff18cbc712f3882286eff78f2e65ae-merged.mount: Deactivated successfully.
Oct 13 15:27:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:27:40 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:27:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:40 standalone.localdomain ceph-mon[29756]: pgmap v3393: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3394: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:41 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:41 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:41 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:27:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:41.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:27:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:42 standalone.localdomain ceph-mon[29756]: pgmap v3394: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:27:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:27:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:27:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:27:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:27:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:27:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:27:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:27:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:27:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:27:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:27:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:27:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:27:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab362c8aede5d2d40f9ec15781951f4230fcc7015e167ef000dee98898beb5a1-merged.mount: Deactivated successfully.
Oct 13 15:27:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3395: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:43 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:43.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:27:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6bf0b4ae8ca561a8c7e45cdbb30d417510f12bd335f16e8c8ff70bfec89efede-merged.mount: Deactivated successfully.
Oct 13 15:27:44 standalone.localdomain ceph-mon[29756]: pgmap v3395: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:45 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:45 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:45 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3396: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:27:46 standalone.localdomain podman[493081]: 2025-10-13 15:27:46.823401299 +0000 UTC m=+0.091540887 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:27:46 standalone.localdomain podman[493081]: 2025-10-13 15:27:46.831872079 +0000 UTC m=+0.100011617 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:27:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:46.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:27:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-689e819bf70445544778bb624df4eb78f812d07944d6c0c226b8a8624d90a4a1-merged.mount: Deactivated successfully.
Oct 13 15:27:46 standalone.localdomain ceph-mon[29756]: pgmap v3396: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee02a78034d58b97a791f95c9d2a559665d6d7cb40b95ff70c9188da0f009dd0-merged.mount: Deactivated successfully.
Oct 13 15:27:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ee02a78034d58b97a791f95c9d2a559665d6d7cb40b95ff70c9188da0f009dd0-merged.mount: Deactivated successfully.
Oct 13 15:27:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:27:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3397: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:47 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:27:47 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e1c1920b4961eb109ce76883e2a632166f4de131f08418737a718e8b679f7eb-merged.mount: Deactivated successfully.
Oct 13 15:27:48 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:48.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:48 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 15:27:48 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 71, saving inputs in /var/lib/pacemaker/pengine/pe-input-69.bz2
Oct 13 15:27:48 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 71 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-69.bz2): Complete
Oct 13 15:27:48 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 15:27:48 standalone.localdomain ceph-mon[29756]: pgmap v3397: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3398: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:27:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:27:50 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:50 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:50 standalone.localdomain ceph-mon[29756]: pgmap v3398: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:27:51 standalone.localdomain podman[493104]: 2025-10-13 15:27:51.120082302 +0000 UTC m=+0.072306219 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:27:51 standalone.localdomain podman[493104]: 2025-10-13 15:27:51.147843522 +0000 UTC m=+0.100067419 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:27:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3399: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:27:51 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:51.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:52 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:27:52 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:27:52 standalone.localdomain ceph-mon[29756]: pgmap v3399: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:27:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:27:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:27:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:27:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:27:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:27:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3400: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:53 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:53.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:27:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24310 DF PROTO=TCP SPT=60086 DPT=9102 SEQ=2146245099 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC0486F0000000001030307) 
Oct 13 15:27:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e1c1920b4961eb109ce76883e2a632166f4de131f08418737a718e8b679f7eb-merged.mount: Deactivated successfully.
Oct 13 15:27:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3b9ec403efa2eb805f6ccd9ed5698840a2ea6e37a8c8a3241e70550634ef8a13-merged.mount: Deactivated successfully.
Oct 13 15:27:54 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3b9ec403efa2eb805f6ccd9ed5698840a2ea6e37a8c8a3241e70550634ef8a13-merged.mount: Deactivated successfully.
Oct 13 15:27:54 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:54 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:27:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24311 DF PROTO=TCP SPT=60086 DPT=9102 SEQ=2146245099 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC04C770000000001030307) 
Oct 13 15:27:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:27:54 standalone.localdomain podman[493129]: 2025-10-13 15:27:54.797765348 +0000 UTC m=+0.068040003 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 13 15:27:54 standalone.localdomain podman[493129]: 2025-10-13 15:27:54.840087779 +0000 UTC m=+0.110362454 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:27:54 standalone.localdomain ceph-mon[29756]: pgmap v3400: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3401: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:27:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-854c5f9be3b2242861943a614259b997a2303cdc4ab9189b356ce2a1b9f8ced1-merged.mount: Deactivated successfully.
Oct 13 15:27:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-854c5f9be3b2242861943a614259b997a2303cdc4ab9189b356ce2a1b9f8ced1-merged.mount: Deactivated successfully.
Oct 13 15:27:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24312 DF PROTO=TCP SPT=60086 DPT=9102 SEQ=2146245099 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC054760000000001030307) 
Oct 13 15:27:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:27:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:27:56 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:56.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6bf0b4ae8ca561a8c7e45cdbb30d417510f12bd335f16e8c8ff70bfec89efede-merged.mount: Deactivated successfully.
Oct 13 15:27:56 standalone.localdomain ceph-mon[29756]: pgmap v3401: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:57 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:27:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3402: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:27:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:27:58 standalone.localdomain nova_compute[456855]: 2025-10-13 15:27:58.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:27:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:27:58 standalone.localdomain ceph-mon[29756]: pgmap v3402: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3403: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:27:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:27:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e1c1920b4961eb109ce76883e2a632166f4de131f08418737a718e8b679f7eb-merged.mount: Deactivated successfully.
Oct 13 15:27:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:27:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:27:59 standalone.localdomain systemd[1]: tmp-crun.4Tpz4q.mount: Deactivated successfully.
Oct 13 15:28:00 standalone.localdomain podman[493147]: 2025-10-13 15:28:00.005755377 +0000 UTC m=+0.124444330 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:28:00 standalone.localdomain podman[493147]: 2025-10-13 15:28:00.022064528 +0000 UTC m=+0.140753481 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 13 15:28:00 standalone.localdomain podman[493146]: 2025-10-13 15:27:59.957969065 +0000 UTC m=+0.085747566 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6)
Oct 13 15:28:00 standalone.localdomain podman[493146]: 2025-10-13 15:28:00.088019688 +0000 UTC m=+0.215798189 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, version=9.6, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.)
Oct 13 15:28:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24313 DF PROTO=TCP SPT=60086 DPT=9102 SEQ=2146245099 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC064370000000001030307) 
Oct 13 15:28:00 standalone.localdomain systemd[1]: tmp-crun.gYl3Mr.mount: Deactivated successfully.
Oct 13 15:28:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-854c5f9be3b2242861943a614259b997a2303cdc4ab9189b356ce2a1b9f8ced1-merged.mount: Deactivated successfully.
Oct 13 15:28:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-07eab6a202bcefe1fc265b9ba1883e76a885192cf5ee2f262b48971b016e8214-merged.mount: Deactivated successfully.
Oct 13 15:28:01 standalone.localdomain ceph-mon[29756]: pgmap v3403: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3404: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:28:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:28:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:28:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:28:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:28:01 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:28:01 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:28:01 standalone.localdomain podman[493184]: 2025-10-13 15:28:01.744666616 +0000 UTC m=+0.281808521 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, release=1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_container_server, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 15:28:01 standalone.localdomain podman[493183]: 2025-10-13 15:28:01.79655257 +0000 UTC m=+0.333159699 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.component=openstack-swift-object-container, build-date=2025-07-21T14:56:28, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=)
Oct 13 15:28:01 standalone.localdomain podman[493185]: 2025-10-13 15:28:01.851222425 +0000 UTC m=+0.386987779 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, managed_by=tripleo_ansible, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git)
Oct 13 15:28:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:28:01 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:01.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:02 standalone.localdomain podman[493184]: 2025-10-13 15:28:02.013993946 +0000 UTC m=+0.551135841 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.buildah.version=1.33.12, release=1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, version=17.1.9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20250721.1)
Oct 13 15:28:02 standalone.localdomain podman[493183]: 2025-10-13 15:28:02.046872598 +0000 UTC m=+0.583479777 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., batch=17.1_20250721.1, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, build-date=2025-07-21T14:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, container_name=swift_object_server)
Oct 13 15:28:02 standalone.localdomain podman[493185]: 2025-10-13 15:28:02.065262082 +0000 UTC m=+0.601027426 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, io.openshift.expose-services=, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=17.1.9, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:28:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:28:03 standalone.localdomain ceph-mon[29756]: pgmap v3404: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3405: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:03 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:03.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:28:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:28:03 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:28:03 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:28:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:03 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:28:03 standalone.localdomain podman[493268]: 2025-10-13 15:28:03.853543901 +0000 UTC m=+1.127711984 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:28:03 standalone.localdomain podman[493268]: 2025-10-13 15:28:03.862866686 +0000 UTC m=+1.137034809 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:28:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:04 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:28:04 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:04 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:05 standalone.localdomain ceph-mon[29756]: pgmap v3405: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:28:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3406: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:05 standalone.localdomain podman[467099]: time="2025-10-13T15:28:05Z" level=error msg="Getting root fs size for \"ab4c512725106015b19651e136652279a60be6d1e42e9a13b456b4ca916ba288\": unmounting layer e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df: replacing mount point \"/var/lib/containers/storage/overlay/e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df/merged\": device or resource busy"
Oct 13 15:28:05 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:05 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:05 standalone.localdomain podman[493288]: 2025-10-13 15:28:05.337175895 +0000 UTC m=+0.101611456 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:28:05 standalone.localdomain podman[493288]: 2025-10-13 15:28:05.343410029 +0000 UTC m=+0.107845610 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:28:05 standalone.localdomain podman[493288]: unhealthy
Oct 13 15:28:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:28:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:28:06.942 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:28:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:28:06.943 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:28:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:28:06.944 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:28:06 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:06.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:07 standalone.localdomain ceph-mon[29756]: pgmap v3406: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e1c1920b4961eb109ce76883e2a632166f4de131f08418737a718e8b679f7eb-merged.mount: Deactivated successfully.
Oct 13 15:28:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3407: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3b9ec403efa2eb805f6ccd9ed5698840a2ea6e37a8c8a3241e70550634ef8a13-merged.mount: Deactivated successfully.
Oct 13 15:28:07 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:28:07 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:28:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:28:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-854c5f9be3b2242861943a614259b997a2303cdc4ab9189b356ce2a1b9f8ced1-merged.mount: Deactivated successfully.
Oct 13 15:28:08 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:08.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:09 standalone.localdomain ceph-mon[29756]: pgmap v3407: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3408: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:28:10 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:10 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:10 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:28:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-172e54c564f32af96811ba336bd9fe2affd9d6300f093bed683876643f1afee7-merged.mount: Deactivated successfully.
Oct 13 15:28:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:28:10 standalone.localdomain podman[493311]: 2025-10-13 15:28:10.329320994 +0000 UTC m=+0.086606711 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:28:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:10 standalone.localdomain podman[493311]: 2025-10-13 15:28:10.364707719 +0000 UTC m=+0.121993456 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Oct 13 15:28:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:11 standalone.localdomain ceph-mon[29756]: pgmap v3408: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:28:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3409: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-aace4bbe54ac2cb069ca4a3a3f3474f5c390cddd7bccc6c051665cd581662d30-merged.mount: Deactivated successfully.
Oct 13 15:28:11 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:28:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:28:11 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:11.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.059799) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369292059834, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1806, "num_deletes": 251, "total_data_size": 1625396, "memory_usage": 1665504, "flush_reason": "Manual Compaction"}
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369292066761, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 1582924, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79055, "largest_seqno": 80860, "table_properties": {"data_size": 1575849, "index_size": 4098, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14941, "raw_average_key_size": 19, "raw_value_size": 1561344, "raw_average_value_size": 2062, "num_data_blocks": 186, "num_entries": 757, "num_filter_entries": 757, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760369126, "oldest_key_time": 1760369126, "file_creation_time": 1760369292, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 7010 microseconds, and 3183 cpu microseconds.
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.066801) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 1582924 bytes OK
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.066824) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.068236) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.068249) EVENT_LOG_v1 {"time_micros": 1760369292068246, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.068266) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 1617543, prev total WAL file size 1617543, number of live WAL files 2.
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.068799) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(1545KB)], [182(4851KB)]
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369292068872, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 6550371, "oldest_snapshot_seqno": -1}
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 6590 keys, 5591224 bytes, temperature: kUnknown
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369292102946, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 5591224, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5554095, "index_size": 19557, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 173970, "raw_average_key_size": 26, "raw_value_size": 5440773, "raw_average_value_size": 825, "num_data_blocks": 766, "num_entries": 6590, "num_filter_entries": 6590, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760369292, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.103192) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 5591224 bytes
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.105220) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.9 rd, 163.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 4.7 +0.0 blob) out(5.3 +0.0 blob), read-write-amplify(7.7) write-amplify(3.5) OK, records in: 7108, records dropped: 518 output_compression: NoCompression
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.105243) EVENT_LOG_v1 {"time_micros": 1760369292105233, "job": 114, "event": "compaction_finished", "compaction_time_micros": 34134, "compaction_time_cpu_micros": 22548, "output_level": 6, "num_output_files": 1, "total_output_size": 5591224, "num_input_records": 7108, "num_output_records": 6590, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369292105503, "job": 114, "event": "table_file_deletion", "file_number": 184}
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369292106152, "job": 114, "event": "table_file_deletion", "file_number": 182}
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.068673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.106249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.106254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.106256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.106258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:28:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:28:12.106259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:28:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-854c5f9be3b2242861943a614259b997a2303cdc4ab9189b356ce2a1b9f8ced1-merged.mount: Deactivated successfully.
Oct 13 15:28:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-07eab6a202bcefe1fc265b9ba1883e76a885192cf5ee2f262b48971b016e8214-merged.mount: Deactivated successfully.
Oct 13 15:28:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:28:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:28:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:28:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:28:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:28:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:28:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:28:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:28:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:28:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:28:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:28:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:28:13 standalone.localdomain ceph-mon[29756]: pgmap v3409: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:28:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3410: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:13 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:13.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:14 standalone.localdomain ceph-mon[29756]: pgmap v3410: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:28:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:14 standalone.localdomain podman[467099]: time="2025-10-13T15:28:14Z" level=error msg="Getting root fs size for \"ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": unmounting layer 19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8: replacing mount point \"/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged\": device or resource busy"
Oct 13 15:28:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3411: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-aace4bbe54ac2cb069ca4a3a3f3474f5c390cddd7bccc6c051665cd581662d30-merged.mount: Deactivated successfully.
Oct 13 15:28:16 standalone.localdomain ceph-mon[29756]: pgmap v3411: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:28:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:17.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3412: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:28:17 standalone.localdomain podman[493331]: 2025-10-13 15:28:17.837013008 +0000 UTC m=+0.104054746 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:28:17 standalone.localdomain podman[493331]: 2025-10-13 15:28:17.844137799 +0000 UTC m=+0.111179567 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:28:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:28:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:28:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:28:18 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:18.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:28:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2481968719' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:28:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:28:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2481968719' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:28:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:28:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-172e54c564f32af96811ba336bd9fe2affd9d6300f093bed683876643f1afee7-merged.mount: Deactivated successfully.
Oct 13 15:28:18 standalone.localdomain ceph-mon[29756]: pgmap v3412: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2481968719' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:28:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2481968719' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:28:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-172e54c564f32af96811ba336bd9fe2affd9d6300f093bed683876643f1afee7-merged.mount: Deactivated successfully.
Oct 13 15:28:18 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:28:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3413: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:28:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-aace4bbe54ac2cb069ca4a3a3f3474f5c390cddd7bccc6c051665cd581662d30-merged.mount: Deactivated successfully.
Oct 13 15:28:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-aace4bbe54ac2cb069ca4a3a3f3474f5c390cddd7bccc6c051665cd581662d30-merged.mount: Deactivated successfully.
Oct 13 15:28:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:28:20 standalone.localdomain ceph-mon[29756]: pgmap v3413: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:20.990 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:28:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:20.992 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:28:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:20.992 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:28:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:20.992 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:28:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:20.992 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:28:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:20.996 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 64 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:20.999 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 49 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '658a8079-1f7d-4c12-977c-2270f902f40c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 64, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:28:20.992848', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '40195a5e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.182056806, 'message_signature': '816464e760507442127732fe13aa4cd0569de029534eae48e8ae74aa8b118a4d'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 49, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:28:20.992848', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4019c2b4-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.186108506, 'message_signature': 'd4e64bc6f7cee8f3ba4232d62cbc3304a052c720f1191c49ccdd81e39f2c3f68'}]}, 'timestamp': '2025-10-13 15:28:20.999582', '_unique_id': 'eb965f3c4fdf426497a2df704a0e5fe4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.000 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.001 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.035 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.036 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.036 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 2321920 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.050 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.050 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd263f767-28d4-4d8a-9f64-a24cef76f097', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:28:21.001290', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '401f5cf6-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': 'c1fb53dbc2d252d3a0861ac1cbcef07c68a2a8b4038c21a59154f98df1e7f013'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:28:21.001290', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '401f6886-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '95b35d845c96a60c403367c0421cd77ea9e96f036af5ab8f5b04dc365b61c082'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2321920, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:28:21.001290', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '401f72ae-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '985c7d8e4dcf074ff03161d84078bcefe58f721ecb011c6ba2299a6fb27b9c89'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:28:21.001290', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '402188b4-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.226080818, 'message_signature': '23666c39031307512ee17260ccba5c46740dae0b13d36b62a409aac0876f93d0'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:28:21.001290', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '40219534-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.226080818, 'message_signature': '13ba5c3837bc468c1682f02b47010ee5484005aec8511bffa5d1f97f7e3bef5b'}]}, 'timestamp': '2025-10-13 15:28:21.050832', '_unique_id': '9d9c536be92348c29edc6df7fc9651c1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.052 12 ERROR oslo_messaging.notify.messaging 
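[Editor's note] The traceback above bottoms out in the amqp transport's `self.sock.connect(sa)` raising `ConnectionRefusedError: [Errno 111]`, which kombu re-raises as `OperationalError`: the ceilometer agent's AMQP broker (normally RabbitMQ, default port 5672) is not accepting TCP connections, so every polled sample batch fails to publish. The sketch below is a minimal, hypothetical reproduction of that failure mode, not part of the log; the host and port are illustrative, and the real endpoint comes from the agent's `transport_url` setting.

```python
# Hypothetical probe reproducing the logged failure: a plain TCP connect to a
# port with no listener raises ConnectionRefusedError (errno 111), the same
# error the amqp transport surfaces from self.sock.connect(sa).
import errno
import socket


def probe_broker(host: str, port: int, timeout: float = 3.0) -> str:
    """Attempt a TCP connect to an AMQP endpoint and classify the outcome."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "listening"
    except ConnectionRefusedError as exc:
        # This is what appears in the log as [Errno 111] Connection refused.
        assert exc.errno == errno.ECONNREFUSED
        return "refused"
    except (socket.timeout, OSError):
        # Firewalled/dropped packets time out instead of being refused.
        return "unreachable"


if __name__ == "__main__":
    # Port 1 on localhost is almost never bound, so this usually reports "refused".
    print(probe_broker("127.0.0.1", 1))
```

A "refused" result means the host is reachable but nothing is listening on the port (broker stopped or crashed), whereas a timeout suggests a network or firewall problem; in this log the immediate errno-111 points at the broker process itself being down on the configured endpoint.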
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.053 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.053 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 501653653 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.053 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.053 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.053 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.054 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'cdeda7b6-fd7b-4c41-9ea5-4070c3608e60', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 501653653, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:28:21.053125', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4021fa60-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '79a88245d6ba6853e00fb97ef8797e9ec0375aa2035934869537b7f68a31ce55'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:28:21.053125', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '40220316-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '8f834493b4747550a2c819f51de1da53443e5ec275d6208d0a9fa2e77c0aad71'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:28:21.053125', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '40220bd6-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '35d6ec4fdfef9fd9500285bc73e69e396455f3193c0f8d2c1c394153353e354c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:28:21.053125', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '40221a0e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.226080818, 'message_signature': '48428d91a5476bf06e6189f4029fbdbaec5166ade14185c651c4f763eae9c398'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:28:21.053125', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '402222d8-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.226080818, 'message_signature': '095f39dffa3898fea7996260e1c845a44d18e1c59d7715750dceb3fdd7c5d8ab'}]}, 'timestamp': '2025-10-13 15:28:21.054470', '_unique_id': '255e31cba6194944921bd3eefebd9d65'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.055 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.056 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.083 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.084 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.084 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.094 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.095 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '91330e20-33fb-4958-a33a-db45d7ee6406', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:28:21.056128', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4026a4f2-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.245371728, 'message_signature': '42ce706d4d3a9cd928504400a1c41cacf80835a25dfa8908bd4dcf4ea9097109'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:28:21.056128', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4026aeb6-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.245371728, 'message_signature': '9045fdebcfb5936e3a7e802cb774a4dd491b9207054dfa8373659a931aa170ca'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:28:21.056128', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '4026b5e6-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.245371728, 'message_signature': 'b385a2478ca1412a0ca2de2ffd8be002fbba1487e0b72524b26949efa2ccb11f'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:28:21.056128', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '402853c4-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.273618043, 'message_signature': '65c2ace9117c1032f464c19e136b03276704e1d4953e205624f52b3b6c2d28e9'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:28:21.056128', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '40285c5c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.273618043, 'message_signature': 'cc5fcec133fef06a4909f5e5abd591eb04526453c7f84262212bf7ce9e1af35a'}]}, 'timestamp': '2025-10-13 15:28:21.095233', '_unique_id': 'fbecba15fd0b46feba59d63e8e76f1ad'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.096 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4cdcc13f-2589-471b-8b7d-a2ed84e1e803', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:28:21.096863', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '4028a50e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.182056806, 'message_signature': '6f79ed60860564850bda9523bce0289c4c40897e7dfe5bead12dc02efcb271fb'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:28:21.096863', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4028b1d4-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.186108506, 'message_signature': '823f0cb042846c90792deb0c47e85a6a94f5144a58d33b67c47f64bea8028dc6'}]}, 'timestamp': '2025-10-13 15:28:21.097431', '_unique_id': '3faadceb15f44f5f99bcaa4c2d0b7863'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.097 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.098 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.098 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 55 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.098 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ccac2910-accf-4991-bb35-b823689e0dab', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 55, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:28:21.098461', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '4028e398-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.182056806, 'message_signature': 'bf0b0235216da157c400310d2b9b5e5bfe1bcd94f53463568999f26717a10af5'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:28:21.098461', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4028ebea-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.186108506, 'message_signature': 'd06b01114a4d086464068d2c6c61c51b68c7bf0f26ff4961e683b292063921ce'}]}, 'timestamp': '2025-10-13 15:28:21.098904', '_unique_id': 'c8d057e5cf7f4c8686660f60400f1589'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.099 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a4c0d89e-0d49-473b-984d-ce1c785766b1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:28:21.099910', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '40291c6e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.182056806, 'message_signature': '77e199edb2680d7f97bea437143680b2351da2d9d750529fdb730b5b5d55e470'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:28:21.099910', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '40292754-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.186108506, 'message_signature': '1ebf875343105a0460c5956aa5645456987fc414bd8a68e8e3aa617fc3915e48'}]}, 'timestamp': '2025-10-13 15:28:21.100429', '_unique_id': 'ec330922ca2c43938143d52c8e141f62'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.100 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.101 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.101 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.101 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '36b840c9-7416-4315-ba0c-4799b74d3704', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:28:21.101474', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '40295990-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.182056806, 'message_signature': '57c5cbd4a5f2ff3c574650ecec4e3960b154a923c258170e35cc502eb6757e54'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:28:21.101474', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '402961b0-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.186108506, 'message_signature': 'b2fe73054672f069278c34b2266ad850b926606354147bdb7d634c5586612039'}]}, 'timestamp': '2025-10-13 15:28:21.101921', '_unique_id': '55547a6c3a624b80939cd77ee46d92f3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.102 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 857967408 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.103 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 155394833 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.103 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 63114260 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.103 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.103 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '168e70ed-aaa9-490b-81c0-9137268ce41a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 857967408, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:28:21.102937', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '402991a8-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '493acfca1d6f9fd8fa1ef728d07e98fc1c5935ff00f96b8ab0468f1cca953ed7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 155394833, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:28:21.102937', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4029995a-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '00112978c96b71403d5e9c66bdae9b7ef52a878194da0fdebd9c349513d686db'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 63114260, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:28:21.102937', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '4029a094-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '2e49e55820dcb6cba6bcdeff51c275f5462bc7ba2db2233d270cba79344cf446'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:28:21.102937', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4029a8aa-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.226080818, 'message_signature': '45e51fd7a6e6bc74480517268a4ed6e6a47b2a18c9a8bd5676d1b0f333c2a806'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:28:21.102937', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4029afda-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.226080818, 'message_signature': '0302ea12986c7be69ecd15c51cb21b8df83770c40782f3450b548b0238fcd302'}]}, 'timestamp': '2025-10-13 15:28:21.103911', '_unique_id': '9b1b8fe5c6f643a380972ebebfaa5ad1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.104 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 4738 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 3576 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e77d409e-6794-47dd-919a-ddf5beac9bc0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4738, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:28:21.104976', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '4029e16c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.182056806, 'message_signature': 'e51a01831ce1eaf15f26f92f1707bc89db921f13d99bcab091e990e3fb9ceda5'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3576, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:28:21.104976', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4029ea54-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.186108506, 'message_signature': '5df99e31b3b416a283b5c831866f71d853d43a4fa7b0f14548c3cacc8e1016a0'}]}, 'timestamp': '2025-10-13 15:28:21.105419', '_unique_id': '25e26c13e3414e37bec9c85b7d9ef805'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.105 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.106 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.106 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.106 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.106 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.107 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.107 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '891b396a-84a7-4962-b72c-e270fe2274f6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:28:21.106423', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '402a1a7e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.245371728, 'message_signature': '7a5449c77a2b86df773f2eb0a418c97cdcaf0d22867cc08a44cc28d17997d73b'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:28:21.106423', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '402a221c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.245371728, 'message_signature': 'e0ba88f1abe43754f8ad8ca4fa5ae5ef30edfbb03b57e799e63266f06d3fd4de'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:28:21.106423', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '402a294c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.245371728, 'message_signature': 'e7df366835e9d55fd050bf7f5819132ca0f4dbfca8020dbded6e4e4b468f9de9'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:28:21.106423', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '402a307c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.273618043, 'message_signature': 'e8c698c785c9e5d73a2c99c6218ffe947d97dfe1dfc7c1459a176f55b02d5fa4'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:28:21.106423', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '402a390a-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.273618043, 'message_signature': '705d5dfaa5fb8fae65f1675aa0c703661d1d7f30be34482ed87cf62f46ee3e53'}]}, 'timestamp': '2025-10-13 15:28:21.107448', '_unique_id': '67f808d8cdcf465da0b95ef0a8c6fbc7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.108 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.109 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.109 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8747aad3-06c0-4fa4-8278-4bbca6dd7849', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:28:21.109038', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '402a8234-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.182056806, 'message_signature': 'b8a3e165ef5a0a3f8bbf7f111c72a67885023337296062a301b3394aa1d62616'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:28:21.109038', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '402a8cc0-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.186108506, 'message_signature': 'd29ee94a838e0826a778d6fcf47e2d24d933df484566c0e68573ab51dee22d52'}]}, 'timestamp': '2025-10-13 15:28:21.109597', '_unique_id': '60bd7176346342bb99c96b76c887da2e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.110 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.123 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 30370000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.139 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 30220000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '62d427f4-46f7-44a5-96ff-14d1e28a6791', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 30370000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:28:21.110970', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '402cc71a-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.312910994, 'message_signature': '1f620b8685304c994a5a3d9df49fc098c1d461cfa6f5d8c1c1c9d8cef8c1c3a9'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 30220000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': '2025-10-13T15:28:21.110970', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '402f34c8-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.328757083, 'message_signature': 'd7ef7eda5ab1f1eda6f5b8699b60a6549927a0d2f0cd7cb780e754a9b9b66d9e'}]}, 'timestamp': '2025-10-13 15:28:21.140218', '_unique_id': '1075045dbd824c1b9d4589d56b5063ec'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.141 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.142 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.142 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.142 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.143 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.143 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.143 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd112919f-c446-4bab-89c5-edc9396bc5ad', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:28:21.142339', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '402f9b2a-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '3dcb459b960e7aa3453b46b8eed8a984c879dcea62ba2475bce6d2c1bcb56479'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:28:21.142339', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '402fa7b4-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '8bfac9b364998560ad27feda0567010c5e9578fc9eebfd940a08fc870f0812a0'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 108, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:28:21.142339', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '402fb326-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': 'e8b1271309aabd7be1b829ac6c8881c4de56d8a51a0137155667e7c1d067c888'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:28:21.142339', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '402fbfa6-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.226080818, 'message_signature': '2cce1b83e8a0041ac939387300c099586dcd0ccd8a74ac56ac68c6c9974215a0'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:28:21.142339', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '402fcb2c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.226080818, 'message_signature': 'a8bf8c6a687b98114fab599eb9ffa441a8d4b1e597b206e307f8fbd6f2721464'}]}, 'timestamp': '2025-10-13 15:28:21.143986', '_unique_id': 'bffebfa658404100b15fbb10417642a3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.144 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.145 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.145 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 495 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.146 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.146 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.146 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '022a0a9e-1045-4589-975e-f804cef2f699', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 495, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:28:21.145850', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4030207c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '9bed56d8c883e95788286588927bdacbe6cfec2d0791e7e440e3d191ac74f819'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:28:21.145850', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '40302d1a-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '7f307bc72114565d8fb65dc14aabedb5234e2c46bd73c101fa0275f82ac57778'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:28:21.145850', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '4030399a-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '4131106c6e59a623dd7ac1fe25e2ceda5c6d87acdcd79341da9ab84488d6a274'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:28:21.145850', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '403044b2-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.226080818, 'message_signature': '6e0c5bf9e60e66a8a8f5d2c7a44e426c730845c1e38361b632a112e7b1bc2e7b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:28:21.145850', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '40304f98-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.226080818, 'message_signature': '7bce51e9e8952f7522f395c33ef32051ce3ffacc5d7176c4b591d9fba3cc1fc0'}]}, 'timestamp': '2025-10-13 15:28:21.147334', '_unique_id': 'a3ea28badf284a578c295a4a79377c83'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.147 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.148 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.148 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.148 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.148 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '800a309e-0679-4c8e-9973-8f3585d79443', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:28:21.148728', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '40308e9a-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.182056806, 'message_signature': '4883b375669b5f9fa0844eb1ccb692dfd18263c55db46501160f164b3d070d05'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:28:21.148728', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '403096f6-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.186108506, 'message_signature': '9df5636b406e938e70e35eb0c788d84fd8a96ea34185892d9bb6dcf58e6def2c'}]}, 'timestamp': '2025-10-13 15:28:21.149183', '_unique_id': '222066070c6d41a3a66bd546ec3954fd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.149 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.150 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.150 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.150 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.150 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.150 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1636338c-497d-43de-b283-ee56d3957dd6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:28:21.150260', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4030ca40-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '4ca25d8b35165ab8275028098fd4b193b08863ea9da70c9189ab4ab78b9814aa'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:28:21.150260', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4030d2c4-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '29485999b42cee9071497d3397820173b8c5a9bb394a955d12ef6dabdcb67b4f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:28:21.150260', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '4030da1c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.190499806, 'message_signature': '2f59cb52e78eef52a9f9b4354c1aef0fcd1afc1454ed0b1f04ff5ce98e079e86'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:28:21.150260', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4030e174-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.226080818, 'message_signature': '3aee6a9d0fd94a3ce6772c4000419c7748d9b082e99924296e32fc24476f5615'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:28:21.150260', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4030e886-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.226080818, 'message_signature': '0355c5e44942420bb26c7f21be1acd2b506fc7478f5a3a6e815c043dbedff58f'}]}, 'timestamp': '2025-10-13 15:28:21.151253', '_unique_id': 'ea0b7ef4b46b4ac8b4019a4688b1c6ae'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.151 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.152 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.152 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 52.27734375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.152 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '676ba1de-0e6c-42d5-93d8-d6354f835424', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.27734375, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:28:21.152382', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '40311dba-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.312910994, 'message_signature': 'd11c5a74f6d67fed05c1a9bc37f6888ceaf75f8d7c02e49a118d77d51abedf5f'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:28:21.152382', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '4031320a-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.328757083, 'message_signature': '3e500cc366f406b111759ac1e3579359f612db5151f884738a96a0496350ba3c'}]}, 'timestamp': '2025-10-13 15:28:21.153142', '_unique_id': '1f2905547590415884aff63dc4188b8b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.153 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.154 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.154 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.154 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.154 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.154 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.154 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9dbb3151-edc9-4f1e-96da-dcee94c41969', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:28:21.154153', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '403162c0-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.245371728, 'message_signature': '1fd02acaadd348787067cd86cc7499ce9f1264da1962edb3b0c5ed7995f27084'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:28:21.154153', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '40316a72-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.245371728, 'message_signature': '40b0e1700e2b9a1a062a1649c0860b2f1376dccb226620040db54cb2e05b67c5'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:28:21.154153', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '4031727e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.245371728, 'message_signature': 'df05dac75d70e1c1f8e31b9fd419a19d81938b8fd626c15db45a50adc0871ceb'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:28:21.154153', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '403179ae-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.273618043, 'message_signature': 'aa17d3c4e490b5ee1e9f0148e14eba41ac49e4a2c1e3b48bb4a4031e267dff4c'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:28:21.154153', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '40318098-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.273618043, 'message_signature': '188ca59ca0559f0196c0917e8e2c17254abacce8b65847c3d55fd18527bfc02c'}]}, 'timestamp': '2025-10-13 15:28:21.155130', '_unique_id': '98bb8e4a496b419d83184558aa404018'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.155 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.156 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.156 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 4526 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.156 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f779493c-4847-4b1e-9c01-d3434f739e9d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4526, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:28:21.156209', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '4031b324-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.182056806, 'message_signature': '6230fbc01beb0e95ca9a95e9b19e3ab8434e572950f6e852ab06619d29c1d518'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:28:21.156209', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4031bbc6-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.186108506, 'message_signature': '2ecae6c0a6494deb9e503f4a8e32471395f3eefdb1ab1f3935210ee939e26e57'}]}, 'timestamp': '2025-10-13 15:28:21.156656', '_unique_id': 'd9e5c2ff8b184362b7ff565d49f48052'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.157 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b10e9161-db0b-4175-8725-1e39b0184379', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:28:21.157694', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '4031eca4-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.182056806, 'message_signature': 'd13aef33214aadc03544f10c5b099f2468702af94ebec94c1e189663c5e3cec0'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:28:21.157694', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '4031f47e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8744.186108506, 'message_signature': '0cc78f3e4560bc20a6676daf2f61eb2ed268fd4b6b0681389705e533cd3c0556'}]}, 'timestamp': '2025-10-13 15:28:21.158105', '_unique_id': '00215e08c0984d0a91018cf6311115a8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:28:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:28:21.158 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:28:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3414: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:28:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:28:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:28:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:22.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:28:22 standalone.localdomain podman[493352]: 2025-10-13 15:28:22.396836538 +0000 UTC m=+0.063541748 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct 13 15:28:22 standalone.localdomain podman[493352]: 2025-10-13 15:28:22.431803002 +0000 UTC m=+0.098508212 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:28:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:22 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:28:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:22 standalone.localdomain ceph-mon[29756]: pgmap v3414: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:22 standalone.localdomain podman[467099]: time="2025-10-13T15:28:22Z" level=error msg="Getting root fs size for \"b0d5af6659c3360267e7f653fff8db1eb89aa8ad92cb679bdf1b2445aacbd264\": unmounting layer e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df: replacing mount point \"/var/lib/containers/storage/overlay/e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df/merged\": device or resource busy"
Oct 13 15:28:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:22 standalone.localdomain sudo[493375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:28:22 standalone.localdomain sudo[493375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:28:22 standalone.localdomain sudo[493375]: pam_unix(sudo:session): session closed for user root
Oct 13 15:28:23 standalone.localdomain sudo[493393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:28:23 standalone.localdomain sudo[493393]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3415: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:28:23
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', 'images', '.mgr', 'manila_data', 'vms', 'backups', 'manila_metadata']
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:28:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:23.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16012 DF PROTO=TCP SPT=34204 DPT=9102 SEQ=125030640 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC0BD9F0000000001030307) 
Oct 13 15:28:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:28:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:28:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16013 DF PROTO=TCP SPT=34204 DPT=9102 SEQ=125030640 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC0C1B70000000001030307) 
Oct 13 15:28:24 standalone.localdomain ceph-mon[29756]: pgmap v3415: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:28:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3416: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:28:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b507ea62e23c4605e5566f890bdcc22037cf6450e2b1b38a6be624fb320b6bf-merged.mount: Deactivated successfully.
Oct 13 15:28:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b507ea62e23c4605e5566f890bdcc22037cf6450e2b1b38a6be624fb320b6bf-merged.mount: Deactivated successfully.
Oct 13 15:28:25 standalone.localdomain sudo[493393]: pam_unix(sudo:session): session closed for user root
Oct 13 15:28:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:28:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:28:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:28:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:28:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:28:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:28:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:28:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:28:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev c2409870-6de1-4c60-b7bc-2f91f46f9c7b (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:28:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev c2409870-6de1-4c60-b7bc-2f91f46f9c7b (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:28:25 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event c2409870-6de1-4c60-b7bc-2f91f46f9c7b (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:28:25 standalone.localdomain sudo[493443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:28:25 standalone.localdomain sudo[493443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:28:25 standalone.localdomain sudo[493443]: pam_unix(sudo:session): session closed for user root
Oct 13 15:28:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:28:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:28:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:28:25 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:28:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16014 DF PROTO=TCP SPT=34204 DPT=9102 SEQ=125030640 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC0C9B60000000001030307) 
Oct 13 15:28:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-aace4bbe54ac2cb069ca4a3a3f3474f5c390cddd7bccc6c051665cd581662d30-merged.mount: Deactivated successfully.
Oct 13 15:28:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:28:26 standalone.localdomain ceph-mon[29756]: pgmap v3416: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:28:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:27.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3417: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:28:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:28:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:28:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:28:27 standalone.localdomain systemd[1]: tmp-crun.z04135.mount: Deactivated successfully.
Oct 13 15:28:27 standalone.localdomain podman[493461]: 2025-10-13 15:28:27.78015495 +0000 UTC m=+0.103911132 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:28:27 standalone.localdomain podman[493461]: 2025-10-13 15:28:27.788765005 +0000 UTC m=+0.112521197 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 15:28:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:28.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:28:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:28.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:28:28 standalone.localdomain ceph-mon[29756]: pgmap v3417: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:28:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:28:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:29.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:28:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:28:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3418: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:28:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:28:29 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:28:30 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:28:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:28:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:28:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16015 DF PROTO=TCP SPT=34204 DPT=9102 SEQ=125030640 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC0D9770000000001030307) 
Oct 13 15:28:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:28:31 standalone.localdomain ceph-mon[29756]: pgmap v3418: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:28:31 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:28:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.086 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.108 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.108 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.108 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.109 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.109 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:28:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:28:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3419: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:28:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:28:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/4203365162' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.528 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.607 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.608 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.608 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.610 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.611 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:28:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:28:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.779 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.780 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9691MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.780 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:28:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:31.780 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:28:31 standalone.localdomain podman[493501]: 2025-10-13 15:28:31.789453368 +0000 UTC m=+0.063179908 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Oct 13 15:28:31 standalone.localdomain podman[493501]: 2025-10-13 15:28:31.801925687 +0000 UTC m=+0.075652217 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, container_name=openstack_network_exporter, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Oct 13 15:28:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:28:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:28:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:28:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:32.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:32 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4203365162' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:28:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:32.082 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:28:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:32.082 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:28:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:32.082 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:28:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:32.083 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:28:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:32.389 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Refreshing inventories for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 13 15:28:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:32 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:28:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:32.745 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Updating ProviderTree inventory for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 13 15:28:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:32.745 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Updating inventory in ProviderTree for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 13 15:28:32 standalone.localdomain podman[493502]: 2025-10-13 15:28:32.753955628 +0000 UTC m=+1.024362660 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, org.label-schema.schema-version=1.0)
Oct 13 15:28:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:32.762 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Refreshing aggregate associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 13 15:28:32 standalone.localdomain podman[493502]: 2025-10-13 15:28:32.7699665 +0000 UTC m=+1.040373502 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:28:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:32.787 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Refreshing trait associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, traits: COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,HW_CPU_X86_SSE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 13 15:28:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:32.832 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:28:33 standalone.localdomain ceph-mon[29756]: pgmap v3419: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:28:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:28:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/361731402' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:28:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:33.251 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:28:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:33.258 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:28:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:33.281 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:28:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3420: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:28:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:33.285 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:28:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:33.286 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:28:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:33.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:33 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:28:33 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:33 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:28:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:28:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:28:34 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/361731402' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:28:34 standalone.localdomain podman[493562]: 2025-10-13 15:28:34.086625109 +0000 UTC m=+0.095042391 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, tcib_managed=true, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:28:34 standalone.localdomain podman[493561]: 2025-10-13 15:28:34.18783248 +0000 UTC m=+0.198944751 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-swift-object, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true)
Oct 13 15:28:34 standalone.localdomain podman[493563]: 2025-10-13 15:28:34.154269898 +0000 UTC m=+0.152427656 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=swift_account_server, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhosp17/openstack-swift-account, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, release=1, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12)
Oct 13 15:28:34 standalone.localdomain podman[493562]: 2025-10-13 15:28:34.273959706 +0000 UTC m=+0.282376998 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-swift-container-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64)
Oct 13 15:28:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:34.288 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:28:34 standalone.localdomain podman[493563]: 2025-10-13 15:28:34.330794166 +0000 UTC m=+0.328951964 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, version=17.1.9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T16:11:22, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-account, release=1, io.buildah.version=1.33.12)
Oct 13 15:28:34 standalone.localdomain podman[493561]: 2025-10-13 15:28:34.396818778 +0000 UTC m=+0.407931079 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, batch=17.1_20250721.1, vcs-type=git, container_name=swift_object_server, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-object-container, name=rhosp17/openstack-swift-object, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d)
Oct 13 15:28:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:28:35 standalone.localdomain systemd[1]: tmp-crun.r3NMo0.mount: Deactivated successfully.
Oct 13 15:28:35 standalone.localdomain ceph-mon[29756]: pgmap v3420: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:28:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:35.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:28:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:35.089 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:28:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:35.090 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:28:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3421: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:28:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:35.540 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:28:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:35.540 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:28:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:35.540 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:28:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:35.540 2 DEBUG nova.objects.instance [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:28:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:28:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-333396cca4f2fb744fdc3f640c53a832d015dd9da8cc6511785f638d57d7bd1c-merged.mount: Deactivated successfully.
Oct 13 15:28:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:28:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-333396cca4f2fb744fdc3f640c53a832d015dd9da8cc6511785f638d57d7bd1c-merged.mount: Deactivated successfully.
Oct 13 15:28:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6b507ea62e23c4605e5566f890bdcc22037cf6450e2b1b38a6be624fb320b6bf-merged.mount: Deactivated successfully.
Oct 13 15:28:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:36.056 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:28:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:36.076 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:28:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:36.077 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:28:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:36.077 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:28:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:36.078 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:28:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:36.078 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:28:36 standalone.localdomain ceph-mon[29756]: pgmap v3421: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:28:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:36.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:28:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:36.090 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 13 15:28:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:36.108 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 13 15:28:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:36.109 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:28:36 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:28:36 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:28:36 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:28:36 standalone.localdomain podman[493643]: 2025-10-13 15:28:36.207263261 +0000 UTC m=+1.219382463 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid)
Oct 13 15:28:36 standalone.localdomain podman[493643]: 2025-10-13 15:28:36.222892133 +0000 UTC m=+1.235011355 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 13 15:28:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:28:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:37.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3422: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:28:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:28:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:28:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:28:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:28:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:38.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:38 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:28:38 standalone.localdomain podman[493663]: 2025-10-13 15:28:38.656689332 +0000 UTC m=+1.213553672 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:28:38 standalone.localdomain podman[493663]: 2025-10-13 15:28:38.669079658 +0000 UTC m=+1.225944048 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:28:38 standalone.localdomain podman[493663]: unhealthy
Oct 13 15:28:38 standalone.localdomain ceph-mon[29756]: pgmap v3422: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:28:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3423: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:39 standalone.localdomain account-server[114563]: 172.20.0.100 - - [13/Oct/2025:15:28:39 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx90d2949748a94a21bd34d-0068ed1aa7" "proxy-server 2" 0.0004 "-" 21 -
Oct 13 15:28:39 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx90d2949748a94a21bd34d-0068ed1aa7)
Oct 13 15:28:39 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx90d2949748a94a21bd34d-0068ed1aa7)
Oct 13 15:28:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:40.102 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:28:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:40.103 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:28:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:40.103 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 13 15:28:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:28:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:28:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:28:40 standalone.localdomain ceph-mon[29756]: pgmap v3423: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:41 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:28:41 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:28:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:28:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3424: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:28:41 standalone.localdomain podman[493685]: 2025-10-13 15:28:41.802542969 +0000 UTC m=+0.073279128 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:28:41 standalone.localdomain podman[493685]: 2025-10-13 15:28:41.833501014 +0000 UTC m=+0.104237173 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251009, managed_by=edpm_ansible)
Oct 13 15:28:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:28:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:42.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:42 standalone.localdomain podman[467099]: time="2025-10-13T15:28:42Z" level=error msg="Getting root fs size for \"b208739d05df384bfccedbbf3c0b2bfb59c3b46c72979b34c93d345e59091a91\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": unmounting layer d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610: replacing mount point \"/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged\": device or resource busy"
Oct 13 15:28:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:28:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:28:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:28:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:28:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:28:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:28:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:28:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:28:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:28:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:28:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:28:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:28:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:28:42 standalone.localdomain ceph-mon[29756]: pgmap v3424: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:28:43 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:43 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:28:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3425: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:43 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:43.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:44 standalone.localdomain ceph-mon[29756]: pgmap v3425: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:28:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-333396cca4f2fb744fdc3f640c53a832d015dd9da8cc6511785f638d57d7bd1c-merged.mount: Deactivated successfully.
Oct 13 15:28:45 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3426: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:28:46 standalone.localdomain ceph-mon[29756]: pgmap v3426: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:47.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3427: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:28:48 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:48.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:48 standalone.localdomain ceph-mon[29756]: pgmap v3427: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3428: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:28:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:28:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:28:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e1c1920b4961eb109ce76883e2a632166f4de131f08418737a718e8b679f7eb-merged.mount: Deactivated successfully.
Oct 13 15:28:49 standalone.localdomain podman[493704]: 2025-10-13 15:28:49.838453516 +0000 UTC m=+0.098251295 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:28:49 standalone.localdomain podman[493704]: 2025-10-13 15:28:49.848874774 +0000 UTC m=+0.108672623 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:28:49 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:28:50 standalone.localdomain systemd[1]: tmp-crun.XGxzbF.mount: Deactivated successfully.
Oct 13 15:28:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:28:51 standalone.localdomain ceph-mon[29756]: pgmap v3428: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3429: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:28:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:28:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:28:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:28:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:28:52 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:52.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:52 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:52 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:28:52 standalone.localdomain podman[493727]: 2025-10-13 15:28:52.825401365 +0000 UTC m=+0.089835887 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:28:52 standalone.localdomain podman[493727]: 2025-10-13 15:28:52.863881382 +0000 UTC m=+0.128315894 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 13 15:28:53 standalone.localdomain ceph-mon[29756]: pgmap v3429: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:28:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:28:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:28:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:28:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:28:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:28:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3430: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:53 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:53.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52136 DF PROTO=TCP SPT=38194 DPT=9102 SEQ=401702496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC132CE0000000001030307) 
Oct 13 15:28:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:28:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:28:54 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:54 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:28:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52137 DF PROTO=TCP SPT=38194 DPT=9102 SEQ=401702496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC136F60000000001030307) 
Oct 13 15:28:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:55 standalone.localdomain ceph-mon[29756]: pgmap v3430: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:55 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3431: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:55 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:55 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:28:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:28:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:28:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:28:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52138 DF PROTO=TCP SPT=38194 DPT=9102 SEQ=401702496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC13EF60000000001030307) 
Oct 13 15:28:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:28:57 standalone.localdomain account-server[114555]: 172.20.0.100 - - [13/Oct/2025:15:28:57 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx29581a7dc9b24f1696f31-0068ed1ab9" "proxy-server 2" 0.0005 "-" 20 -
Oct 13 15:28:57 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx29581a7dc9b24f1696f31-0068ed1ab9)
Oct 13 15:28:57 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx29581a7dc9b24f1696f31-0068ed1ab9)
Oct 13 15:28:57 standalone.localdomain ceph-mon[29756]: pgmap v3431: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:57 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:57.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3432: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e1c1920b4961eb109ce76883e2a632166f4de131f08418737a718e8b679f7eb-merged.mount: Deactivated successfully.
Oct 13 15:28:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d307bd82fd874de705c72ae1d9d0871388589f1e07b79f85f6bf3ab4a2175384-merged.mount: Deactivated successfully.
Oct 13 15:28:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:28:58 standalone.localdomain nova_compute[456855]: 2025-10-13 15:28:58.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:28:59 standalone.localdomain ceph-mon[29756]: pgmap v3432: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3433: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:28:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:28:59 standalone.localdomain systemd[1]: tmp-crun.c3q4BT.mount: Deactivated successfully.
Oct 13 15:28:59 standalone.localdomain podman[493752]: 2025-10-13 15:28:59.79485694 +0000 UTC m=+0.067855177 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:28:59 standalone.localdomain podman[493752]: 2025-10-13 15:28:59.801797195 +0000 UTC m=+0.074795442 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 13 15:28:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:29:00 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:29:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52139 DF PROTO=TCP SPT=38194 DPT=9102 SEQ=401702496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC14EB60000000001030307) 
Oct 13 15:29:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:29:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e1c1920b4961eb109ce76883e2a632166f4de131f08418737a718e8b679f7eb-merged.mount: Deactivated successfully.
Oct 13 15:29:01 standalone.localdomain ceph-mon[29756]: pgmap v3433: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3434: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:29:02 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:02.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:29:03 standalone.localdomain ceph-mon[29756]: pgmap v3434: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:29:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3435: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:03 standalone.localdomain podman[493772]: 2025-10-13 15:29:03.29505719 +0000 UTC m=+0.080389267 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 13 15:29:03 standalone.localdomain podman[493772]: 2025-10-13 15:29:03.3031812 +0000 UTC m=+0.088513247 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 15:29:03 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:03.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:29:03 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:29:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:03 standalone.localdomain podman[493794]: 2025-10-13 15:29:03.970022281 +0000 UTC m=+0.238355746 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:29:03 standalone.localdomain podman[493794]: 2025-10-13 15:29:03.987670772 +0000 UTC m=+0.256004227 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 13 15:29:04 standalone.localdomain podman[467099]: time="2025-10-13T15:29:04Z" level=error msg="Getting root fs size for \"bc95f2511a775392bb0fe1df02ab6c50cd9b3390e418b54698b4eac88a061218\": unmounting layer e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df: replacing mount point \"/var/lib/containers/storage/overlay/e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df/merged\": device or resource busy"
Oct 13 15:29:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:29:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:04 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:29:04 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:05 standalone.localdomain ceph-mon[29756]: pgmap v3435: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3436: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:29:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:29:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:29:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:29:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-88290a70e6b0d5a66e3861564060c18eea821ae554c9c70ccab8ad18232a364f-merged.mount: Deactivated successfully.
Oct 13 15:29:06 standalone.localdomain systemd[1]: tmp-crun.NxUzbS.mount: Deactivated successfully.
Oct 13 15:29:06 standalone.localdomain podman[493815]: 2025-10-13 15:29:06.916592644 +0000 UTC m=+0.165231883 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, build-date=2025-07-21T16:11:22, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, name=rhosp17/openstack-swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, batch=17.1_20250721.1, distribution-scope=public, vcs-type=git, version=17.1.9, tcib_managed=true, container_name=swift_account_server)
Oct 13 15:29:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:29:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:29:06.944 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:29:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:29:06.945 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:29:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:29:06.945 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:29:06 standalone.localdomain podman[493814]: 2025-10-13 15:29:06.95953816 +0000 UTC m=+0.213750104 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, maintainer=OpenStack TripleO Team, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git)
Oct 13 15:29:06 standalone.localdomain podman[493813]: 2025-10-13 15:29:06.882696398 +0000 UTC m=+0.141511107 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, tcib_managed=true, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, release=1, container_name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, summary=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, build-date=2025-07-21T14:56:28, io.openshift.expose-services=, version=17.1.9, name=rhosp17/openstack-swift-object)
Oct 13 15:29:07 standalone.localdomain podman[493813]: 2025-10-13 15:29:07.050182748 +0000 UTC m=+0.308997457 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container, distribution-scope=public, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d)
Oct 13 15:29:07 standalone.localdomain ceph-mon[29756]: pgmap v3436: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:07 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:07.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:07 standalone.localdomain podman[493815]: 2025-10-13 15:29:07.113943336 +0000 UTC m=+0.362582595 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, build-date=2025-07-21T16:11:22, release=1, config_id=tripleo_step4)
Oct 13 15:29:07 standalone.localdomain podman[493814]: 2025-10-13 15:29:07.147680177 +0000 UTC m=+0.401892151 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_id=tripleo_step4, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, distribution-scope=public, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:29:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3437: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:07 standalone.localdomain systemd[1]: tmp-crun.uoCv4P.mount: Deactivated successfully.
Oct 13 15:29:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e1c1920b4961eb109ce76883e2a632166f4de131f08418737a718e8b679f7eb-merged.mount: Deactivated successfully.
Oct 13 15:29:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d307bd82fd874de705c72ae1d9d0871388589f1e07b79f85f6bf3ab4a2175384-merged.mount: Deactivated successfully.
Oct 13 15:29:08 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:08.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:29:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:29:09 standalone.localdomain ceph-mon[29756]: pgmap v3437: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:29:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3438: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:29:09 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:29:09 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:29:09 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:29:09 standalone.localdomain podman[493891]: 2025-10-13 15:29:09.460715454 +0000 UTC m=+0.722520453 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 13 15:29:09 standalone.localdomain podman[493891]: 2025-10-13 15:29:09.467981658 +0000 UTC m=+0.729786637 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:29:10 standalone.localdomain ceph-mon[29756]: pgmap v3438: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:29:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:29:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3439: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:29:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:29:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:29:12 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:29:12 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:29:12 standalone.localdomain podman[493912]: 2025-10-13 15:29:12.093983075 +0000 UTC m=+0.981228109 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:29:12 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:12.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:12 standalone.localdomain podman[493912]: 2025-10-13 15:29:12.130386509 +0000 UTC m=+1.017631573 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:29:12 standalone.localdomain podman[493912]: unhealthy
Oct 13 15:29:12 standalone.localdomain ceph-mon[29756]: pgmap v3439: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:29:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:29:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:29:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:29:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:29:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:29:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:29:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:29:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:29:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:29:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:29:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:29:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3440: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:13 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:13.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:29:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:29:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:29:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:14 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:29:14 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:29:14 standalone.localdomain podman[493936]: 2025-10-13 15:29:14.19899787 +0000 UTC m=+0.497619721 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3)
Oct 13 15:29:14 standalone.localdomain podman[493936]: 2025-10-13 15:29:14.21130345 +0000 UTC m=+0.509925241 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:29:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:29:14 standalone.localdomain ceph-mon[29756]: pgmap v3440: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3441: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:15 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:29:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:15 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:29:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:29:16 standalone.localdomain ceph-mon[29756]: pgmap v3441: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:17.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3442: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:29:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d88352a2ff0b111ac4658de741be23293b9e0b9247be5bcd90b986f8e074eeae-merged.mount: Deactivated successfully.
Oct 13 15:29:18 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:18.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:29:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2422199668' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:29:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:29:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2422199668' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:29:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:29:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-88290a70e6b0d5a66e3861564060c18eea821ae554c9c70ccab8ad18232a364f-merged.mount: Deactivated successfully.
Oct 13 15:29:18 standalone.localdomain ceph-mon[29756]: pgmap v3442: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2422199668' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:29:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2422199668' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:29:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3443: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:29:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:29:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:29:20 standalone.localdomain podman[493953]: 2025-10-13 15:29:20.534143917 +0000 UTC m=+0.078579077 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:29:20 standalone.localdomain podman[493953]: 2025-10-13 15:29:20.541918197 +0000 UTC m=+0.086353357 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:29:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:29:20 standalone.localdomain ceph-mon[29756]: pgmap v3443: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:29:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3444: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:29:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:29:21 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:29:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:29:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:22.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:22 standalone.localdomain ceph-mon[29756]: pgmap v3444: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:29:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:29:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:29:23
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_data', 'backups', '.mgr', 'vms', 'volumes', 'manila_metadata', 'images']
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3445: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35294 DF PROTO=TCP SPT=46070 DPT=9102 SEQ=1616751837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC1A7FF0000000001030307) 
Oct 13 15:29:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:23.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:29:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:29:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:29:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:29:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:29:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35295 DF PROTO=TCP SPT=46070 DPT=9102 SEQ=1616751837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC1ABF60000000001030307) 
Oct 13 15:29:24 standalone.localdomain podman[493977]: 2025-10-13 15:29:24.583931852 +0000 UTC m=+0.098595214 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:29:24 standalone.localdomain podman[493977]: 2025-10-13 15:29:24.644021917 +0000 UTC m=+0.158685239 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:29:24 standalone.localdomain ceph-mon[29756]: pgmap v3445: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:25 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:25 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:25 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:29:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3446: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:25 standalone.localdomain podman[467099]: time="2025-10-13T15:29:25Z" level=error msg="Getting root fs size for \"c1c79e94a2dfbaa19fef6ae97d78176298fc5be6628c154c18b22b79b2ae617b\": unmounting layer e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df: replacing mount point \"/var/lib/containers/storage/overlay/e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df/merged\": device or resource busy"
Oct 13 15:29:25 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:25 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:29:25 standalone.localdomain sudo[494002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:29:25 standalone.localdomain sudo[494002]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:29:25 standalone.localdomain sudo[494002]: pam_unix(sudo:session): session closed for user root
Oct 13 15:29:26 standalone.localdomain sudo[494020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:29:26 standalone.localdomain sudo[494020]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:29:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:29:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35296 DF PROTO=TCP SPT=46070 DPT=9102 SEQ=1616751837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC1B3F60000000001030307) 
Oct 13 15:29:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:29:26 standalone.localdomain ceph-mon[29756]: pgmap v3446: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:27.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3447: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90f296025e41828165e9f30e94e1d049e2fa598b29f53ba8a87fba0d0c070828-merged.mount: Deactivated successfully.
Oct 13 15:29:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:29:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d88352a2ff0b111ac4658de741be23293b9e0b9247be5bcd90b986f8e074eeae-merged.mount: Deactivated successfully.
Oct 13 15:29:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d88352a2ff0b111ac4658de741be23293b9e0b9247be5bcd90b986f8e074eeae-merged.mount: Deactivated successfully.
Oct 13 15:29:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:28.107 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:29:28 standalone.localdomain sudo[494020]: pam_unix(sudo:session): session closed for user root
Oct 13 15:29:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:29:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:29:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:29:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:29:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:29:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:29:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:29:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:29:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev a04437d5-7ef9-4805-8653-5d6833794ad2 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:29:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev a04437d5-7ef9-4805-8653-5d6833794ad2 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:29:28 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event a04437d5-7ef9-4805-8653-5d6833794ad2 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:29:28 standalone.localdomain sudo[494070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:29:28 standalone.localdomain sudo[494070]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:29:28 standalone.localdomain sudo[494070]: pam_unix(sudo:session): session closed for user root
Oct 13 15:29:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:28.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bedf3a02c6a3e654014c5b18d30521b2df9e55f326acde83b2ea6fe9d2600174-merged.mount: Deactivated successfully.
Oct 13 15:29:29 standalone.localdomain ceph-mon[29756]: pgmap v3447: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:29:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:29:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:29:29 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:29:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:29.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:29:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3448: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:29:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:29:30 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:29:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:29:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:29:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:29:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:29:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35297 DF PROTO=TCP SPT=46070 DPT=9102 SEQ=1616751837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC1C3B70000000001030307) 
Oct 13 15:29:30 standalone.localdomain podman[494088]: 2025-10-13 15:29:30.679847984 +0000 UTC m=+0.067564686 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 15:29:30 standalone.localdomain podman[494088]: 2025-10-13 15:29:30.712935266 +0000 UTC m=+0.100651978 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:29:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:29:30 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:29:30 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:29:30 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:30 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:31 standalone.localdomain ceph-mon[29756]: pgmap v3448: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:31 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.108 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.108 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.108 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.109 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.109 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:29:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3449: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:29:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/879039744' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.567 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.640 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.641 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.641 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.647 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.647 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.826 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.827 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9651MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.827 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.827 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.914 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.915 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.915 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.915 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:29:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:29:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:31.985 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:29:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:29:32 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/879039744' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:29:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:29:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:32.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:29:32 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/799147377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:29:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:32.395 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:29:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:32.400 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:29:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:32.419 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:29:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:32.421 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:29:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:32.422 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:29:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:29:33 standalone.localdomain ceph-mon[29756]: pgmap v3449: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:33 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/799147377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:29:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:29:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3450: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:33.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:29:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f71faf331040c26a5f484565c527989857decd5d4ab474ba00bf72472b9f3d87-merged.mount: Deactivated successfully.
Oct 13 15:29:33 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:33 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:29:34 standalone.localdomain podman[494150]: 2025-10-13 15:29:34.232166104 +0000 UTC m=+0.095054105 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Oct 13 15:29:34 standalone.localdomain podman[494150]: 2025-10-13 15:29:34.246764884 +0000 UTC m=+0.109652905 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Oct 13 15:29:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:34.417 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:29:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:34.418 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:29:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:29:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:29:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:35 standalone.localdomain ceph-mon[29756]: pgmap v3450: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:35.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:29:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:35.090 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:29:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3451: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:35.679 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:29:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:35.679 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:29:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:35.680 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:29:35 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:29:35 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:35 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:35 standalone.localdomain podman[494168]: 2025-10-13 15:29:35.775683798 +0000 UTC m=+1.036434993 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 15:29:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:29:35 standalone.localdomain podman[494168]: 2025-10-13 15:29:35.806804798 +0000 UTC m=+1.067555983 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:29:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:29:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:36.042 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:29:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:36.059 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:29:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:36.060 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:29:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:36.060 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:29:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:36.060 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:29:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:29:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:29:36 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:29:37 standalone.localdomain ceph-mon[29756]: pgmap v3451: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:37.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:29:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:37.112 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:29:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:37.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3452: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:29:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:29:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:38.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:39 standalone.localdomain ceph-mon[29756]: pgmap v3452: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3453: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:29:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bedf3a02c6a3e654014c5b18d30521b2df9e55f326acde83b2ea6fe9d2600174-merged.mount: Deactivated successfully.
Oct 13 15:29:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:29:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:29:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:29:39 standalone.localdomain systemd[1]: tmp-crun.ptaP2r.mount: Deactivated successfully.
Oct 13 15:29:39 standalone.localdomain podman[494188]: 2025-10-13 15:29:39.577782637 +0000 UTC m=+0.093783936 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, build-date=2025-07-21T15:54:32, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, io.openshift.expose-services=, release=1, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, version=17.1.9, tcib_managed=true, architecture=x86_64, 
description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:29:39 standalone.localdomain podman[494189]: 2025-10-13 15:29:39.623840119 +0000 UTC m=+0.136975769 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, build-date=2025-07-21T16:11:22, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, container_name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-swift-account, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.buildah.version=1.33.12)
Oct 13 15:29:39 standalone.localdomain podman[494187]: 2025-10-13 15:29:39.700183675 +0000 UTC m=+0.217135533 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-type=git, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, 
io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, release=1, com.redhat.component=openstack-swift-object-container)
Oct 13 15:29:39 standalone.localdomain podman[494188]: 2025-10-13 15:29:39.793861986 +0000 UTC m=+0.309863285 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_id=tripleo_step4, vcs-type=git, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', 
'/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, name=rhosp17/openstack-swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-container-container, vendor=Red Hat, Inc.)
Oct 13 15:29:39 standalone.localdomain podman[494189]: 2025-10-13 15:29:39.811259454 +0000 UTC m=+0.324395104 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vcs-type=git, release=1, io.openshift.expose-services=, tcib_managed=true, version=17.1.9, build-date=2025-07-21T16:11:22, com.redhat.component=openstack-swift-account-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1)
Oct 13 15:29:39 standalone.localdomain podman[494187]: 2025-10-13 15:29:39.907522525 +0000 UTC m=+0.424474413 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-object, release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, container_name=swift_object_server, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container, vcs-type=git, io.openshift.expose-services=)
Oct 13 15:29:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:29:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:40.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:29:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b6dc651db1dd688436f3130013ee90dcee561a6fcd9f4951c6044c10caca8f67-merged.mount: Deactivated successfully.
Oct 13 15:29:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:29:41 standalone.localdomain ceph-mon[29756]: pgmap v3453: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:29:41 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:29:41 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:29:41 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:41 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:41 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:29:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3454: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:29:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:42.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:29:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:42 standalone.localdomain podman[494268]: 2025-10-13 15:29:42.336797869 +0000 UTC m=+0.059486787 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:29:42 standalone.localdomain podman[494268]: 2025-10-13 15:29:42.34266813 +0000 UTC m=+0.065357068 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 13 15:29:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:29:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:29:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:29:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:29:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:29:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:29:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:29:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:29:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:29:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:29:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:29:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:29:43 standalone.localdomain ceph-mon[29756]: pgmap v3454: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3455: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:29:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully.
Oct 13 15:29:43 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:29:43 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:43 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:43 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:43.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:29:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:29:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:44 standalone.localdomain podman[494287]: 2025-10-13 15:29:44.29262474 +0000 UTC m=+0.052369547 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:29:44 standalone.localdomain podman[494287]: 2025-10-13 15:29:44.326872478 +0000 UTC m=+0.086617275 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:29:44 standalone.localdomain podman[494287]: unhealthy
Oct 13 15:29:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:45 standalone.localdomain ceph-mon[29756]: pgmap v3455: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3456: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:29:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:29:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:29:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:29:45 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:29:45 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:29:45 standalone.localdomain podman[494310]: 2025-10-13 15:29:45.92801827 +0000 UTC m=+0.275920858 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 13 15:29:45 standalone.localdomain podman[494310]: 2025-10-13 15:29:45.968787188 +0000 UTC m=+0.316689736 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS)
Oct 13 15:29:46 standalone.localdomain ceph-mon[29756]: pgmap v3456: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:29:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:47.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3457: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:29:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f71faf331040c26a5f484565c527989857decd5d4ab474ba00bf72472b9f3d87-merged.mount: Deactivated successfully.
Oct 13 15:29:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:29:47 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:29:48 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:48.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:48 standalone.localdomain ceph-mon[29756]: pgmap v3457: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3458: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:29:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:29:49 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:50 standalone.localdomain ceph-mon[29756]: pgmap v3458: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3459: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:29:51 standalone.localdomain systemd[1]: tmp-crun.FLgbhX.mount: Deactivated successfully.
Oct 13 15:29:51 standalone.localdomain podman[494329]: 2025-10-13 15:29:51.623881934 +0000 UTC m=+0.060010493 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:29:51 standalone.localdomain podman[494329]: 2025-10-13 15:29:51.633994926 +0000 UTC m=+0.070123515 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:29:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:29:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully.
Oct 13 15:29:52 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:52.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:52 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:29:52 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:52 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:29:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e1ccf73ed83c48b16553af543aff822fe00fea1e3c80e1686975885d7aa5f294-merged.mount: Deactivated successfully.
Oct 13 15:29:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:29:52 standalone.localdomain ceph-mon[29756]: pgmap v3459: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:29:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:29:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:29:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:29:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:29:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:29:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3460: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62146 DF PROTO=TCP SPT=43916 DPT=9102 SEQ=61618363 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC21D2F0000000001030307) 
Oct 13 15:29:53 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:53.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:29:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:29:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62147 DF PROTO=TCP SPT=43916 DPT=9102 SEQ=61618363 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC221370000000001030307) 
Oct 13 15:29:54 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:54 standalone.localdomain ceph-mon[29756]: pgmap v3460: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:29:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-718cafd73a436aee332aceff0da4f3f9129fa96ed8d0b63027c1d529e51c6a98-merged.mount: Deactivated successfully.
Oct 13 15:29:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b6dc651db1dd688436f3130013ee90dcee561a6fcd9f4951c6044c10caca8f67-merged.mount: Deactivated successfully.
Oct 13 15:29:55 standalone.localdomain podman[494352]: 2025-10-13 15:29:55.28138088 +0000 UTC m=+0.052442750 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller)
Oct 13 15:29:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3461: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:55 standalone.localdomain podman[494352]: 2025-10-13 15:29:55.328567547 +0000 UTC m=+0.099629427 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 13 15:29:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:29:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62148 DF PROTO=TCP SPT=43916 DPT=9102 SEQ=61618363 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC229370000000001030307) 
Oct 13 15:29:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully.
Oct 13 15:29:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:29:56 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully.
Oct 13 15:29:56 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:29:56 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:29:57 standalone.localdomain ceph-mon[29756]: pgmap v3461: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:57 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:57.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3462: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:29:58 standalone.localdomain nova_compute[456855]: 2025-10-13 15:29:58.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:29:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:29:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:29:58 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:29:58 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:58 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:29:58 standalone.localdomain podman[467099]: time="2025-10-13T15:29:58Z" level=error msg="Getting root fs size for \"c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb\": getting diffsize of layer \"8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac\" and its parent \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\": unmounting layer 8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac: replacing mount point \"/var/lib/containers/storage/overlay/8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac/merged\": device or resource busy"
Oct 13 15:29:59 standalone.localdomain ceph-mon[29756]: pgmap v3462: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:29:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3463: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:30:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62149 DF PROTO=TCP SPT=43916 DPT=9102 SEQ=61618363 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC238F70000000001030307) 
Oct 13 15:30:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:30:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:30:01 standalone.localdomain ceph-mon[29756]: pgmap v3463: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:30:01 standalone.localdomain podman[494378]: 2025-10-13 15:30:01.143047172 +0000 UTC m=+0.103472565 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:30:01 standalone.localdomain podman[494378]: 2025-10-13 15:30:01.171866912 +0000 UTC m=+0.132292235 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 13 15:30:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3464: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:30:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:30:02 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:30:02 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:02.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:30:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:03 standalone.localdomain ceph-mon[29756]: pgmap v3464: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3465: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:03 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:03.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:30:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-03412483bd4ca7b5539c903f8c964f9a8806e7da0b632626e9e28c6f9211cb67-merged.mount: Deactivated successfully.
Oct 13 15:30:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b6fff9c8e433cbfc969f016d7c00977424b6e0fe3f5e8a6774343b30e6ab0953-merged.mount: Deactivated successfully.
Oct 13 15:30:05 standalone.localdomain ceph-mon[29756]: pgmap v3465: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e1ccf73ed83c48b16553af543aff822fe00fea1e3c80e1686975885d7aa5f294-merged.mount: Deactivated successfully.
Oct 13 15:30:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e1ccf73ed83c48b16553af543aff822fe00fea1e3c80e1686975885d7aa5f294-merged.mount: Deactivated successfully.
Oct 13 15:30:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3466: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1a6a4ee30a57a3d453735aee084adc7708560542e5e91aea887f02a6c49125a0-merged.mount: Deactivated successfully.
Oct 13 15:30:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:30:05 standalone.localdomain podman[494398]: 2025-10-13 15:30:05.922957194 +0000 UTC m=+0.075374088 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, version=9.6, maintainer=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 13 15:30:05 standalone.localdomain podman[494398]: 2025-10-13 15:30:05.965093455 +0000 UTC m=+0.117510339 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, release=1755695350, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 13 15:30:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:30:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:30:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:30:06.946 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:30:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:30:06.947 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:30:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:30:06.947 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:30:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:30:07 standalone.localdomain ceph-mon[29756]: pgmap v3466: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:07 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:07.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:30:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:30:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3467: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:30:07 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:30:07 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:07 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:07 standalone.localdomain podman[494418]: 2025-10-13 15:30:07.508174275 +0000 UTC m=+0.212902273 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:30:07 standalone.localdomain podman[494418]: 2025-10-13 15:30:07.521784904 +0000 UTC m=+0.226512902 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct 13 15:30:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:08 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:08.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:09 standalone.localdomain ceph-mon[29756]: pgmap v3467: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3468: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:30:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:30:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-93c0822e715760ae283b5dfa3c054d7f162a497c51033e354a5256453c1ce67c-merged.mount: Deactivated successfully.
Oct 13 15:30:09 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:09 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:09 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:30:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:11 standalone.localdomain ceph-mon[29756]: pgmap v3468: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3469: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:30:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:30:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:30:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:11 standalone.localdomain systemd[1]: tmp-crun.eYZOEg.mount: Deactivated successfully.
Oct 13 15:30:11 standalone.localdomain podman[494437]: 2025-10-13 15:30:11.542708119 +0000 UTC m=+0.063185682 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, com.redhat.component=openstack-swift-object-container, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, vcs-type=git, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., release=1, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, summary=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true)
Oct 13 15:30:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:11 standalone.localdomain podman[494438]: 2025-10-13 15:30:11.625106762 +0000 UTC m=+0.141980244 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-container, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T15:54:32, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, vendor=Red Hat, Inc., batch=17.1_20250721.1, release=1, container_name=swift_container_server, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team)
Oct 13 15:30:11 standalone.localdomain podman[494437]: 2025-10-13 15:30:11.731858517 +0000 UTC m=+0.252336080 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, managed_by=tripleo_ansible, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-object, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d)
Oct 13 15:30:11 standalone.localdomain podman[494439]: 2025-10-13 15:30:11.741721322 +0000 UTC m=+0.256906731 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, name=rhosp17/openstack-swift-account, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1, io.buildah.version=1.33.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, container_name=swift_account_server, distribution-scope=public)
Oct 13 15:30:11 standalone.localdomain podman[494438]: 2025-10-13 15:30:11.811135165 +0000 UTC m=+0.328008617 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-swift-container, vendor=Red Hat, Inc., vcs-ref=6c85b2809937c86a19237cd4252af09396645905, distribution-scope=public, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, build-date=2025-07-21T15:54:32, release=1, container_name=swift_container_server)
Oct 13 15:30:11 standalone.localdomain podman[494439]: 2025-10-13 15:30:11.919359765 +0000 UTC m=+0.434545174 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1, container_name=swift_account_server, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, com.redhat.component=openstack-swift-account-container, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat 
OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64)
Oct 13 15:30:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:30:12 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:12.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:12 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:30:12 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:30:12 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:30:12 standalone.localdomain systemd[1]: tmp-crun.M1BKCX.mount: Deactivated successfully.
Oct 13 15:30:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1a6a4ee30a57a3d453735aee084adc7708560542e5e91aea887f02a6c49125a0-merged.mount: Deactivated successfully.
Oct 13 15:30:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d6e5a8eff02736f76a8eaf288551e295af46a07d57fc5a8429f8e2724a24fb08-merged.mount: Deactivated successfully.
Oct 13 15:30:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8d123e2bf97cc7b3622c68162b04c29912e1822cdbe31a1ddf70016995925bac-merged.mount: Deactivated successfully.
Oct 13 15:30:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:30:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:30:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:30:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:30:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:30:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:30:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:30:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:30:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:30:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:30:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:30:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:30:13 standalone.localdomain ceph-mon[29756]: pgmap v3469: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3470: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:30:13 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:13.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:13 standalone.localdomain podman[494516]: 2025-10-13 15:30:13.694119577 +0000 UTC m=+0.104252630 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 15:30:13 standalone.localdomain podman[494516]: 2025-10-13 15:30:13.700864034 +0000 UTC m=+0.110997077 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:30:14 standalone.localdomain systemd[1]: tmp-crun.CMEmFf.mount: Deactivated successfully.
Oct 13 15:30:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db748b4cdb30a617a163bf186638b12157c2da5481b73c1cb39630dc1d937c96-merged.mount: Deactivated successfully.
Oct 13 15:30:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2f869e978a3914fac885699b921afa1af4d3c121bcdff90f8b5cbe23f9a11dc5-merged.mount: Deactivated successfully.
Oct 13 15:30:14 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:30:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:15 standalone.localdomain ceph-mon[29756]: pgmap v3470: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3471: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:30:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:30:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db748b4cdb30a617a163bf186638b12157c2da5481b73c1cb39630dc1d937c96-merged.mount: Deactivated successfully.
Oct 13 15:30:16 standalone.localdomain podman[494535]: 2025-10-13 15:30:16.020174915 +0000 UTC m=+0.083875970 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:30:16 standalone.localdomain podman[494535]: 2025-10-13 15:30:16.028038767 +0000 UTC m=+0.091739832 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:30:16 standalone.localdomain podman[494535]: unhealthy
Oct 13 15:30:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db748b4cdb30a617a163bf186638b12157c2da5481b73c1cb39630dc1d937c96-merged.mount: Deactivated successfully.
Oct 13 15:30:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb58a49d5c7110b38db4af0cfdf577e1fdd1bcc07384a7e5d5a914c89b0fa557-merged.mount: Deactivated successfully.
Oct 13 15:30:16 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:30:16 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:30:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:30:17 standalone.localdomain ceph-mon[29756]: pgmap v3471: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:17.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3472: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:30:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:30:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:30:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:30:18 standalone.localdomain ceph-mon[29756]: pgmap v3472: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:30:18 standalone.localdomain podman[494557]: 2025-10-13 15:30:18.573748416 +0000 UTC m=+0.089279227 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:30:18 standalone.localdomain podman[494557]: 2025-10-13 15:30:18.584198738 +0000 UTC m=+0.099729599 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 15:30:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:30:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2712666840' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:30:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:30:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2712666840' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:30:18 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:18.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2712666840' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:30:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2712666840' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:30:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3473: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:20 standalone.localdomain ceph-mon[29756]: pgmap v3473: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2f869e978a3914fac885699b921afa1af4d3c121bcdff90f8b5cbe23f9a11dc5-merged.mount: Deactivated successfully.
Oct 13 15:30:20 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:20 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:20 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:30:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:20.992 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:30:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:20.994 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:30:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:20.995 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.000 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.004 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5f59a05b-1ea1-4def-a5ad-8b53258eb61d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:30:20.995267', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '87a08924-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.184485228, 'message_signature': '36a96ef1c41e617fc22e0ed032493cd5a550298f11da3636c62d804dc9df7457'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:30:20.995267', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '87a12f78-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.190538315, 'message_signature': '5af8f73119e9dafdc1d6a95a7f1ad99d3528861ce3f38bf50631cb00a59c785a'}]}, 'timestamp': '2025-10-13 15:30:21.005600', '_unique_id': 'ad20387547394d05a0e0c78e0b94c81f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.007 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.009 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:30:21 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.045 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 495 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.046 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.046 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.066 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.066 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '386ea09e-a91b-415e-bbb3-4413729b2cd0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 495, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:30:21.009399', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87a770cc-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': '7a657219ef3c9eca6773659b3c23aa2f7aea5cdae91d17bd388031d667049399'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:30:21.009399', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87a785bc-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': '27adc444eea2ee843e1a0629b82d219b6a1cbd4c7aa91dd3e45f96895e6b3ee6'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:30:21.009399', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '87a796f6-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': '0c441b211171ce81c829013dc1ab029d1634aea3b530c5b291142319e738742c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:30:21.009399', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87aa91bc-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.236608098, 'message_signature': 'bd84407c6f5af7e00e69f1fbda33b65e9bb42ff82e72f5c5bbac2b64e58dda90'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:30:21.009399', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87aaa4ea-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.236608098, 'message_signature': '26505643027190eb27ad34f033f7d6bc139277c7d3b2cc4caa6ec8c591ce0b9b'}]}, 'timestamp': '2025-10-13 15:30:21.067468', '_unique_id': '4beade2342324b2b893a5f731d480bb0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.069 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.071 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.071 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 4738 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.072 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 3576 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'acd4ef8c-5b60-416c-87a5-85987d5da872', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4738, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:30:21.071882', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '87ab69fc-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.184485228, 'message_signature': '12efbf219b063df7eb1379dbf3ef7d6a8ee06b2757912602957f9cae7888647d'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3576, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:30:21.071882', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '87ab84e6-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.190538315, 'message_signature': '3d527c33f23e1bd16611939df03b74c1225bf374c39632d0b13f71ea91b1e5ac'}]}, 'timestamp': '2025-10-13 15:30:21.073422', '_unique_id': '291fee7e8c80411e9b4f0e47bbcfc799'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.074 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.075 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.102 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.103 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.103 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.114 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.114 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd85d5e69-4047-46d4-90e9-460507025501', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:30:21.076114', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87b02a8c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.265381535, 'message_signature': '2e7531915208043e1105e75f442433b1c1201e58557ab01266c1b8207dab546b'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:30:21.076114', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87b0386a-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.265381535, 'message_signature': 'fd788ed288ff124e05c6b2e29e56321ce79c99b79c4937408e55d5cf0d959673'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:30:21.076114', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '87b0415c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.265381535, 'message_signature': 'ef62e16444c889d0db1bdd60dd601348899bfe2dbb49422a06e40660770c8805'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:30:21.076114', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87b1e49e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.293306127, 'message_signature': '52a1e9314375301ba1fc70f0d3526d9eb87c882300be323eef76918cc9fa7ef4'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:30:21.076114', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87b1f0f6-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.293306127, 'message_signature': '94309e85b8c8c0f4fc713a3728b981d86968420a7bda4a5cb1397207f4749876'}]}, 'timestamp': '2025-10-13 15:30:21.115171', '_unique_id': '2e8f0826e5d04107ac29d69f966a0516'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.116 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.117 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 857967408 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.117 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 155394833 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.117 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 63114260 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.117 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.118 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2befe8ce-0729-4a00-a4ff-4a54142b0bff', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 857967408, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:30:21.117038', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87b246c8-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': 'b833dda90c1dac846c1e9402853f2c9ddc7466ab374d20ef8e42efc9e1718b0f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 155394833, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:30:21.117038', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87b25406-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': 'dedca41a1e582a9358f7b87712e61a4cbfc50a3a94cfc94f3b5c3aa80b6df16d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 63114260, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:30:21.117038', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '87b25da2-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': '79b89961390717908ac5a30fe2c35fdff379055d9815958324df153bdb750180'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:30:21.117038', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87b267b6-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.236608098, 'message_signature': '1e7f4392d03f9f97622fe9f4faa9500727e6ed46622ad13016f16b6998ed075b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:30:21.117038', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87b27184-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.236608098, 'message_signature': '9c673d6d86baa7b6babc2319955901d3206f438e12886d86b74757eeefb441c1'}]}, 'timestamp': '2025-10-13 15:30:21.118433', '_unique_id': '1ca4312a3ced41ab9a10b7f3b6f63411'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.119 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.120 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.120 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.120 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 2321920 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.120 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.121 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c60bec63-bac6-4688-bd30-7d483aacb3cc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:30:21.120017', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87b2baae-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': 'c1498e50be5a338cc62b1914f74942de4d9d51374492a67c048264c548b12429'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:30:21.120017', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87b2c544-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': '7e10040cc5fa617898ff9a30cd647f64147a58caea20c03b407d994ef4b9301c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2321920, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:30:21.120017', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '87b2cfe4-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': '631cfb6d1dd6a9f16375e86e302b04fd3bfb74d343c6d34c9ee08f8c0286ec1f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:30:21.120017', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87b2d9da-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.236608098, 'message_signature': '0275d4e60b0e397c5213caa859b570b6a61c4d5989ad6bd3f3dcf94943cc72b8'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:30:21.120017', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87b2e376-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.236608098, 'message_signature': '812f9dd660b3ff1d385d9c811166011eaebfc25bbd8abc0f676814b1f32c55dc'}]}, 'timestamp': '2025-10-13 15:30:21.121365', '_unique_id': 'fa8dfa88815441ee885466a6f7f7c0ba'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.122 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 501653653 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.123 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.123 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.123 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.123 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2b838cd7-6afe-44a2-91b1-46346b6f39f6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 501653653, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:30:21.122937', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87b32c82-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': '2d8f193135c4e1f9a6cbb7f4d3837461a999ea58b1f285a9f82f81903be08989'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:30:21.122937', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87b3366e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': 'bbf77e287b8715258ac3d9c8e767388071bea38f9800840c24b24addb0ac29a6'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:30:21.122937', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '87b33fce-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': '52ef350efb52f695b96ac626d845158715b9bacce3fc30a7324825a16a52d562'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:30:21.122937', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87b348c0-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.236608098, 'message_signature': '2fb1d87bc32f62324c2d8f86ab9565cf2f3d9985c28ac4c85c0cacd722481ce8'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:30:21.122937', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87b351e4-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.236608098, 'message_signature': '1daec6763f95e80fd497895eb69325158a1b6e4a4d4a692bb0236cce37d9769b'}]}, 'timestamp': '2025-10-13 15:30:21.124176', '_unique_id': 'fef7edb075e84899afd9b1519a99f2c4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.124 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.125 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.125 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.125 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.126 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.126 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.126 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '43713d2c-5778-4a7f-a754-6a3b5864bbda', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:30:21.125660', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87b396ea-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.265381535, 'message_signature': '8fc5220776baa075b00894ec86d519d70e42b3a954452388c3e9db5df1b40e21'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:30:21.125660', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87b3a16c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.265381535, 'message_signature': 'a60475c5da1b3cf35e207c4b76520881dba6e6617a2943e4a654b9e07f9e6c69'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:30:21.125660', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '87b3aba8-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.265381535, 'message_signature': 'ce983abaad96cfd56d731b554a787ef2dc750197a88562b5962f51f1f5cd5ccf'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:30:21.125660', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87b3b68e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.293306127, 'message_signature': 'c3c010e3cf679316b6ff244789fa6fe2e61599b5c0e556e70334db01f956dc98'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:30:21.125660', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87b3c034-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.293306127, 'message_signature': '980d33027d30ec3d8276af942fce7a15249da262cbb01c16752372edc4bde11a'}]}, 'timestamp': '2025-10-13 15:30:21.126999', '_unique_id': '83a3707d79aa45fc87be776e3f3163f8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.127 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.128 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.128 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.128 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 4526 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.128 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '59d9bb05-9a74-4de2-8542-92a25578cf12', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4526, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:30:21.128569', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '87b40922-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.184485228, 'message_signature': 'e5aae94e345d9d15056b1c99a4d159454f4787fe7d3f7a438f94da95d1bdc625'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:30:21.128569', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '87b41426-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.190538315, 'message_signature': '0ffac18d9fd8860ea581a9fb0a09e1cd0e8f13526dd324d45bb0d5265ec5227c'}]}, 'timestamp': '2025-10-13 15:30:21.129159', '_unique_id': '34996c044ded434291a4cb4d1e0ee75e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.129 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.130 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.130 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.130 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f3a34f7d-cf3b-426e-97cf-6b2befdc48a2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:30:21.130353', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '87b44ebe-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.184485228, 'message_signature': '2fb8ac9cc389d61e84c67714ed3c2ffc58a5d9d1861f32bf4b04d7326f3f3d05'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:30:21.130353', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '87b4592c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.190538315, 'message_signature': '8d1606686efe4c4b6b76b84f917fbaf530d4342777112687abd5ebda29f5a873'}]}, 'timestamp': '2025-10-13 15:30:21.130893', '_unique_id': 'cf2a8f7e19eb43b09876c29521c05ac1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.131 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.148 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 30930000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.163 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 30780000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '33b55600-27c2-40b8-9cfe-0e58f01ee674', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 30930000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:30:21.131959', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '87b713d8-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.337164591, 'message_signature': '28381fe7719744ac6ac0d923facd50fea06c63fbfdf75d06212318b0c330a64b'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 30780000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': '2025-10-13T15:30:21.131959', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '87b955bc-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.352105382, 'message_signature': '4f464c8d414f5b0be8474eaaf6a4c98df1b94448be7f19694268ec27385b6ded'}]}, 'timestamp': '2025-10-13 15:30:21.163755', '_unique_id': '6a5a24d5596e411c90663d7f8ebca61d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.165 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.166 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.166 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.166 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '300c26dc-9690-468f-9616-59c41f00a63c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:30:21.166428', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '87b9d2d0-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.184485228, 'message_signature': '43c1d55acb971ce7df60ef79fe8e92a28a8bed8b67c5598a7c78302080c13ff9'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:30:21.166428', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '87b9de42-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.190538315, 'message_signature': 'f8eaf0d8d498e0c6c6a3f6cf9ab90f5b8032d68ab7998cc1d74ed8cac366a42d'}]}, 'timestamp': '2025-10-13 15:30:21.167099', '_unique_id': '4512b22cd1be4f428b53b89c37048989'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.167 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.168 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.168 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.168 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '822f46e1-bae1-4fe7-9bae-15b94b2830b0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:30:21.168647', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '87ba2690-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.184485228, 'message_signature': '4621507a3fe3b7cfccdbd9d387235f18f5fec63ed005062ef077f57c97c33930'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:30:21.168647', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '87ba320c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.190538315, 'message_signature': '939c0598e69e41e279940edd6890112e7ff2513be64b72960eb88f5c900184e6'}]}, 'timestamp': '2025-10-13 15:30:21.169247', '_unique_id': '02cbd66e327a4318a84d87281876b23e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.169 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.170 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.170 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.170 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.171 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 52.27734375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.171 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '493b32ea-4e97-49ca-adb0-abe243a2d30a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.27734375, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:30:21.171005', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '87ba8270-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.337164591, 'message_signature': '3e64cb5005c8bf93a49d69b5a831044a644d2ac9d2e4d4690beb196b3aa3ab22'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:30:21.171005', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '87ba909e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.352105382, 'message_signature': '7022e4d42123171c966dcacda46e2cb423e78efbab9387040ddf30332c41f843'}]}, 'timestamp': '2025-10-13 15:30:21.171660', '_unique_id': '3d5238a98b7c49b7ae23c15ac5e413fc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.172 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.173 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.173 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.173 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '09063d8e-6ceb-4dfc-9bb5-614a92bd8e42', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:30:21.173166', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '87bad70c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.184485228, 'message_signature': '72fd1126d374c3bf865a456c07d08a5233d85bfd52d782e4a0f2b1b2482c0d4e'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:30:21.173166', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '87bae472-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.190538315, 'message_signature': '94d0e35946879fd50a87c80e7d492be5bf6468df169c6d64830d729d27066c1a'}]}, 'timestamp': '2025-10-13 15:30:21.173885', '_unique_id': 'f01f514979e846ac82cc5a50990b946d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.174 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.175 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.175 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 64 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.176 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 49 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9db29b40-9a32-4e55-8899-9e75ac4a2950', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 64, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:30:21.175753', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '87bb3d50-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.184485228, 'message_signature': '1934afb7f67fb7a3030cadce2d58feac82b40fae4d36650db34fc33e67fad6be'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 49, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:30:21.175753', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '87bb493a-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.190538315, 'message_signature': '09bdcb83bc2c09166af40d4ce3c267b5243b8409344281702bb9c21797f89be8'}]}, 'timestamp': '2025-10-13 15:30:21.176393', '_unique_id': 'baa6f01fa2b2437abf2fceba61aa9f28'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.177 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.178 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.178 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.178 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.179 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9adc9da2-a4e4-4ae7-a05c-2965e1a022ca', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:30:21.177945', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87bb916a-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': 'ca3df0ca5ea9b3f05be663e9f60507c1c534935ddc1cd64dffe6fc6392f118ce'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:30:21.177945', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87bb9b74-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': '91115149ab5bd9180c9d0e5b99be87be26a9577c8025a5fc8b004f72d2b847c7'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:30:21.177945', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '87bba876-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': '455fd5ffdd925df3d82ae1e813205870b2ea393506e35022003bd4614e226d98'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:30:21.177945', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87bbb370-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.236608098, 'message_signature': 'dcea1b626a191a5b1e7e7543bf957cf4b2eb3fabe879da641cd87ae3f3979ee8'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:30:21.177945', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87bbbd52-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.236608098, 'message_signature': 'bb637ae50af346b9ceb99ab44d09078de7002e41e72274d5ddae46312c5d2310'}]}, 'timestamp': '2025-10-13 15:30:21.179350', '_unique_id': 'cdbf286cf61f4958b9e3ce8adf7b979c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.180 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 55 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.181 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '829b487a-52d3-49f5-9189-0fd71954bb29', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 55, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:30:21.180938', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '87bc06b8-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.184485228, 'message_signature': '2b32b06b7c6a0d8d94786f7a5418c1c64e8e0780ee90ec1e6a11e2baf25cac2e'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:30:21.180938', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '87bc13ce-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.190538315, 'message_signature': 'f0e7bb40e25fa1775c3d2ac1461d3b2bdee5cdb3460df325427f5df7f8c1361b'}]}, 'timestamp': '2025-10-13 15:30:21.181604', '_unique_id': 'ec1272853a5d4d57a8306056defe528f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.182 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.183 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.183 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.183 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.183 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.184 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.184 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4cef0854-4ecc-4ecb-a402-60a78fef2a90', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:30:21.183122', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87bc5cf8-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': '3181aa0ad234901640bf934f2fdfb8d18cf75354dbcdec8c2b8817407e828cb6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:30:21.183122', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87bc6978-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': '7a5e1a077b6810f7284348680e6b204f45d3a2c87c67518ff95649cd31aa1063'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 108, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:30:21.183122', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '87bc7364-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.198702287, 'message_signature': '3853eac0e7c3ec8804a97bcf727cc9dd3165918a24490d9a9ac117e497f424e8'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:30:21.183122', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87bc7d3c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.236608098, 'message_signature': '5135108c57bdb8ba837923e27fd68645b608212762849596d578b408f239e20d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:30:21.183122', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87bc86ba-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.236608098, 'message_signature': 'f4bbcf60104ba072f2453d603f3637c17cf197f30ea88db0ba80acb5354ee676'}]}, 'timestamp': '2025-10-13 15:30:21.185575', '_unique_id': '4b66a71479434ac6aef7223d86186942'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.186 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.187 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.187 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.187 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.187 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.188 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.188 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dd7be987-d022-4bd5-8cec-893b9a5122a9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:30:21.187269', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87bcfe10-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.265381535, 'message_signature': 'a442efd55c36293e7f21c6c7e836bda11fcd1add8c6104448650ccb933b886fc'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:30:21.187269', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87bd0a2c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.265381535, 'message_signature': '45d6d67ef0dddbfc810997011e2ab8db388f06e58fe68a439445aeedc0fd2934'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:30:21.187269', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '87bd159e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.265381535, 'message_signature': 'd630bd36eb5da8cf875faaf2b960e2e951f821f1da483411b49393f03475cb65'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:30:21.187269', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87bd2048-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.293306127, 'message_signature': 'f0145233286056ef7ed7284d5c2839fc3f65e2d21ad77a8fe5197bb538060968'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:30:21.187269', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87bd2b88-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.293306127, 'message_signature': 'e0d3ac8e1e2de338e5c199dfc46134deb8382b5aff7cda0f48bfc7c4942315a4'}]}, 'timestamp': '2025-10-13 15:30:21.188731', '_unique_id': '00be13db61254f5fa976830b2fcbdbbf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.189 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.190 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.190 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.190 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c83e4b8e-10ac-433e-8247-ffcb38682fba', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:30:21.190303', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '87bd74a8-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.184485228, 'message_signature': '7a216ab23e8831b529b01ae0fdb5866a9dff8017e6465e3cf5b00ab41111b98e'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:30:21.190303', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '87bd8376-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8864.190538315, 'message_signature': '144d0042dbe3888e1ad40b708511ad9c78ee8aecbe8d4fd9b2bf8355ad7fe633'}]}, 'timestamp': '2025-10-13 15:30:21.191003', '_unique_id': '8ecc59d15e7b43129331bcc3af4fc725'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:30:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:30:21.191 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:30:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3474: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:30:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:22.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:30:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:30:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:22 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:22 standalone.localdomain podman[494574]: 2025-10-13 15:30:22.350179873 +0000 UTC m=+0.086869862 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:30:22 standalone.localdomain podman[494574]: 2025-10-13 15:30:22.355533759 +0000 UTC m=+0.092223768 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:30:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-03412483bd4ca7b5539c903f8c964f9a8806e7da0b632626e9e28c6f9211cb67-merged.mount: Deactivated successfully.
Oct 13 15:30:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:22 standalone.localdomain ceph-mon[29756]: pgmap v3474: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:30:23
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'vms', '.mgr', 'volumes', 'backups', 'manila_data', 'images']
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3475: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47758 DF PROTO=TCP SPT=46394 DPT=9102 SEQ=617630441 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC2925F0000000001030307) 
Oct 13 15:30:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:30:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:23.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1a6a4ee30a57a3d453735aee084adc7708560542e5e91aea887f02a6c49125a0-merged.mount: Deactivated successfully.
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:30:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:30:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1a6a4ee30a57a3d453735aee084adc7708560542e5e91aea887f02a6c49125a0-merged.mount: Deactivated successfully.
Oct 13 15:30:23 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:23 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:23 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:30:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47759 DF PROTO=TCP SPT=46394 DPT=9102 SEQ=617630441 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC296760000000001030307) 
Oct 13 15:30:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:25 standalone.localdomain ceph-mon[29756]: pgmap v3475: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:30:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:30:25 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:25 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3476: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:30:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:30:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d29d2ea8946261949ed750c0626f01bb8c54f722399da45bf75e3237c128367f-merged.mount: Deactivated successfully.
Oct 13 15:30:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47760 DF PROTO=TCP SPT=46394 DPT=9102 SEQ=617630441 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC29E760000000001030307) 
Oct 13 15:30:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:30:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:30:27 standalone.localdomain ceph-mon[29756]: pgmap v3476: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:30:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:27 standalone.localdomain podman[494597]: 2025-10-13 15:30:27.050658343 +0000 UTC m=+0.075852483 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Oct 13 15:30:27 standalone.localdomain podman[494597]: 2025-10-13 15:30:27.078334366 +0000 UTC m=+0.103528506 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:30:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:27.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3477: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:30:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:30:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:30:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:30:27 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:30:28 standalone.localdomain sudo[494622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:30:28 standalone.localdomain sudo[494622]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:30:28 standalone.localdomain sudo[494622]: pam_unix(sudo:session): session closed for user root
Oct 13 15:30:28 standalone.localdomain sudo[494640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:30:28 standalone.localdomain sudo[494640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:30:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:28.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:29 standalone.localdomain ceph-mon[29756]: pgmap v3477: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:30:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:29.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:30:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:29.092 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:30:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1a6a4ee30a57a3d453735aee084adc7708560542e5e91aea887f02a6c49125a0-merged.mount: Deactivated successfully.
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:30:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d6e5a8eff02736f76a8eaf288551e295af46a07d57fc5a8429f8e2724a24fb08-merged.mount: Deactivated successfully.
Oct 13 15:30:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3478: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:30:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:30:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:30:30 standalone.localdomain sudo[494640]: pam_unix(sudo:session): session closed for user root
Oct 13 15:30:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:30:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:30:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:30:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:30:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:30:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:30:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:30:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:30:30 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev a6bb8672-8aa5-4c49-a5df-feaf3533f8dd (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:30:30 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev a6bb8672-8aa5-4c49-a5df-feaf3533f8dd (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:30:30 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event a6bb8672-8aa5-4c49-a5df-feaf3533f8dd (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:30:30 standalone.localdomain sudo[494689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:30:30 standalone.localdomain sudo[494689]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:30:30 standalone.localdomain sudo[494689]: pam_unix(sudo:session): session closed for user root
Oct 13 15:30:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47761 DF PROTO=TCP SPT=46394 DPT=9102 SEQ=617630441 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC2AE370000000001030307) 
Oct 13 15:30:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:31 standalone.localdomain ceph-mon[29756]: pgmap v3478: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:30:31 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:30:31 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:30:31 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:30:31 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:30:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3479: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:30:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:30:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db748b4cdb30a617a163bf186638b12157c2da5481b73c1cb39630dc1d937c96-merged.mount: Deactivated successfully.
Oct 13 15:30:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:30:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2f869e978a3914fac885699b921afa1af4d3c121bcdff90f8b5cbe23f9a11dc5-merged.mount: Deactivated successfully.
Oct 13 15:30:32 standalone.localdomain podman[494707]: 2025-10-13 15:30:32.142528353 +0000 UTC m=+0.066789813 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:30:32 standalone.localdomain podman[494707]: 2025-10-13 15:30:32.149043674 +0000 UTC m=+0.073305154 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:30:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:32.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:32 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:30:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:33 standalone.localdomain ceph-mon[29756]: pgmap v3479: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.115 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.115 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.115 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.115 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.116 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:30:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3480: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:30:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:30:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/169309892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.539 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.594 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.595 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.595 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.597 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.597 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.727 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.728 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9502MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.729 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.729 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:30:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:30:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db748b4cdb30a617a163bf186638b12157c2da5481b73c1cb39630dc1d937c96-merged.mount: Deactivated successfully.
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.802 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.803 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.803 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.803 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:30:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:33.848 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:30:34 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/169309892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:30:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-db748b4cdb30a617a163bf186638b12157c2da5481b73c1cb39630dc1d937c96-merged.mount: Deactivated successfully.
Oct 13 15:30:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:30:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/4145958433' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:30:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:34.277 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:30:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:34.284 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:30:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:34.336 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:30:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:34.339 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:30:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:34.339 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:30:34 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 15:30:34 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 15:30:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:30:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4-merged.mount: Deactivated successfully.
Oct 13 15:30:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:30:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c7b1648ca07963a73362905865b5e148d2096f74e5b48b5a7bd40d6efde1ab3f-merged.mount: Deactivated successfully.
Oct 13 15:30:35 standalone.localdomain ceph-mon[29756]: pgmap v3480: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:30:35 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4145958433' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:30:35 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:30:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:30:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:30:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 13 15:30:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:30:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3481: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:30:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:35.335 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:30:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:35.336 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:30:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:35.336 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:30:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:35.336 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:30:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:30:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:35.547 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:30:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:35.547 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:30:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:35.547 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:30:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:35.548 2 DEBUG nova.objects.instance [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:30:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:35.874 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:30:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:35.891 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:30:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:35.891 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:30:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:35.891 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:30:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:35.892 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:30:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:35.892 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:30:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:30:36 standalone.localdomain ceph-mon[29756]: pgmap v3481: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:30:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:30:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:30:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:30:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:30:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:37.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3482: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2f869e978a3914fac885699b921afa1af4d3c121bcdff90f8b5cbe23f9a11dc5-merged.mount: Deactivated successfully.
Oct 13 15:30:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:30:37 standalone.localdomain podman[494767]: 2025-10-13 15:30:37.647574737 +0000 UTC m=+0.070494847 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc.)
Oct 13 15:30:37 standalone.localdomain podman[494767]: 2025-10-13 15:30:37.685802898 +0000 UTC m=+0.108723018 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal 
Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 13 15:30:38 standalone.localdomain systemd[1]: tmp-crun.ZQeDJy.mount: Deactivated successfully.
Oct 13 15:30:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:30:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:30:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:38.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:38 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:38 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:30:39 standalone.localdomain ceph-mon[29756]: pgmap v3482: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:30:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:39.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:30:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3483: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:39 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:39 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:30:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:40 standalone.localdomain podman[494787]: 2025-10-13 15:30:40.069553417 +0000 UTC m=+0.082730365 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:30:40 standalone.localdomain podman[494787]: 2025-10-13 15:30:40.079238085 +0000 UTC m=+0.092415033 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:30:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:30:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:40 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:40 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:30:41 standalone.localdomain ceph-mon[29756]: pgmap v3483: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d29d2ea8946261949ed750c0626f01bb8c54f722399da45bf75e3237c128367f-merged.mount: Deactivated successfully.
Oct 13 15:30:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3484: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:41 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:30:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:42.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:30:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:42.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:30:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:30:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:30:42 standalone.localdomain podman[494809]: 2025-10-13 15:30:42.551462186 +0000 UTC m=+0.066295248 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., container_name=swift_account_server, io.buildah.version=1.33.12, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, architecture=x86_64, batch=17.1_20250721.1)
Oct 13 15:30:42 standalone.localdomain podman[494808]: 2025-10-13 15:30:42.583262797 +0000 UTC m=+0.097977345 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, version=17.1.9, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:30:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:30:42 standalone.localdomain podman[494807]: 2025-10-13 15:30:42.610528539 +0000 UTC m=+0.115243378 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, io.openshift.expose-services=, container_name=swift_object_server, version=17.1.9, com.redhat.component=openstack-swift-object-container, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-swift-object, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:30:42 standalone.localdomain podman[494809]: 2025-10-13 15:30:42.74856869 +0000 UTC m=+0.263401752 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, tcib_managed=true, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20250721.1, container_name=swift_account_server, release=1, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:30:42 standalone.localdomain podman[494808]: 2025-10-13 15:30:42.76928882 +0000 UTC m=+0.284003398 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_id=tripleo_step4, name=rhosp17/openstack-swift-container, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, 
vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:54:32, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, io.buildah.version=1.33.12)
Oct 13 15:30:42 standalone.localdomain podman[494807]: 2025-10-13 15:30:42.780656641 +0000 UTC m=+0.285371450 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-object, release=1, container_name=swift_object_server, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, 
com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:28, summary=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 15:30:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:30:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:30:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:30:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:30:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:30:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:30:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:30:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:30:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:30:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:30:43 standalone.localdomain ceph-mon[29756]: pgmap v3484: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ce87643b35ec000a1792ecfe3a8ec00ec1224e0b2865439ada2796f407c1075b-merged.mount: Deactivated successfully.
Oct 13 15:30:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:30:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3485: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:30:43 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:30:43 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:30:43 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:30:43 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:43.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:30:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a3b3404ad94bc57e2e9bbeee5710aeedc4c81e80a3e18de9d8dfbabe3d88730-merged.mount: Deactivated successfully.
Oct 13 15:30:44 standalone.localdomain podman[494889]: 2025-10-13 15:30:44.784606666 +0000 UTC m=+0.062935344 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 13 15:30:44 standalone.localdomain podman[494889]: 2025-10-13 15:30:44.817805611 +0000 UTC m=+0.096134289 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 13 15:30:45 standalone.localdomain ceph-mon[29756]: pgmap v3485: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:45 standalone.localdomain systemd[1]: tmp-crun.On3QHv.mount: Deactivated successfully.
Oct 13 15:30:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3486: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:30:45 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:30:45 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:46 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:30:46 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:30:47 standalone.localdomain podman[494906]: 2025-10-13 15:30:47.023507144 +0000 UTC m=+0.045174915 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:30:47 standalone.localdomain podman[494906]: 2025-10-13 15:30:47.035727061 +0000 UTC m=+0.057394822 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:30:47 standalone.localdomain ceph-mon[29756]: pgmap v3486: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:47 standalone.localdomain podman[494906]: unhealthy
Oct 13 15:30:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:47.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3487: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a3b3404ad94bc57e2e9bbeee5710aeedc4c81e80a3e18de9d8dfbabe3d88730-merged.mount: Deactivated successfully.
Oct 13 15:30:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-363ca58f2fdd66d3f83e207fb876fe212480858dd07aa9a917453d678975c0bc-merged.mount: Deactivated successfully.
Oct 13 15:30:48 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:30:48 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:30:48 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:48 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 15:30:48 standalone.localdomain object-server[494929]: Object update sweep starting on /srv/node/d1 (pid: 29)
Oct 13 15:30:48 standalone.localdomain object-server[494929]: Object update sweep completed on /srv/node/d1 in 0.00 seconds: 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 29)
Oct 13 15:30:48 standalone.localdomain object-server[494929]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 15:30:48 standalone.localdomain object-server[114601]: Object update sweep completed: 0.10s
Oct 13 15:30:48 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:48 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:48.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:49 standalone.localdomain ceph-mon[29756]: pgmap v3487: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3488: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:30:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:30:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:30:50 standalone.localdomain podman[494930]: 2025-10-13 15:30:50.820368592 +0000 UTC m=+0.092368613 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:30:50 standalone.localdomain podman[494930]: 2025-10-13 15:30:50.834006733 +0000 UTC m=+0.106006814 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 13 15:30:50 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:51 standalone.localdomain ceph-mon[29756]: pgmap v3488: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3489: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c7b1648ca07963a73362905865b5e148d2096f74e5b48b5a7bd40d6efde1ab3f-merged.mount: Deactivated successfully.
Oct 13 15:30:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:30:52 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:52.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:30:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:30:52 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:52 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:30:53 standalone.localdomain ceph-mon[29756]: pgmap v3489: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:30:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:30:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:30:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:30:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:30:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:30:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3490: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22694 DF PROTO=TCP SPT=38064 DPT=9102 SEQ=234121047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC3078F0000000001030307) 
Oct 13 15:30:53 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:53.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:30:53 standalone.localdomain podman[494949]: 2025-10-13 15:30:53.917553133 +0000 UTC m=+0.078473133 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:30:53 standalone.localdomain podman[494949]: 2025-10-13 15:30:53.924853599 +0000 UTC m=+0.085773599 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:30:54 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:54 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:54 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:30:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22695 DF PROTO=TCP SPT=38064 DPT=9102 SEQ=234121047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC30BB60000000001030307) 
Oct 13 15:30:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:30:55 standalone.localdomain ceph-mon[29756]: pgmap v3490: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3491: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:30:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22696 DF PROTO=TCP SPT=38064 DPT=9102 SEQ=234121047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC313B60000000001030307) 
Oct 13 15:30:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:30:57 standalone.localdomain ceph-mon[29756]: pgmap v3491: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:30:57 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:57.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3492: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:30:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:30:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:30:58 standalone.localdomain podman[494972]: 2025-10-13 15:30:58.195559532 +0000 UTC m=+0.088116591 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:30:58 standalone.localdomain podman[494972]: 2025-10-13 15:30:58.241457189 +0000 UTC m=+0.134014248 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:30:58 standalone.localdomain nova_compute[456855]: 2025-10-13 15:30:58.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:30:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:30:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8b72b428a811fff966752483db02baf6eee3c29de2fa12a8e31c22c4982857ec-merged.mount: Deactivated successfully.
Oct 13 15:30:59 standalone.localdomain ceph-mon[29756]: pgmap v3492: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3493: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:30:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:30:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ce87643b35ec000a1792ecfe3a8ec00ec1224e0b2865439ada2796f407c1075b-merged.mount: Deactivated successfully.
Oct 13 15:30:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ce87643b35ec000a1792ecfe3a8ec00ec1224e0b2865439ada2796f407c1075b-merged.mount: Deactivated successfully.
Oct 13 15:30:59 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:59 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:30:59 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:31:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a3b3404ad94bc57e2e9bbeee5710aeedc4c81e80a3e18de9d8dfbabe3d88730-merged.mount: Deactivated successfully.
Oct 13 15:31:00 standalone.localdomain podman[467099]: time="2025-10-13T15:31:00Z" level=error msg="Getting root fs size for \"da25f04b81165f539615f5b90c3c6210e742164ef5310656704da23cb7def95b\": getting diffsize of layer \"8a3b3404ad94bc57e2e9bbeee5710aeedc4c81e80a3e18de9d8dfbabe3d88730\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": unmounting layer 8a3b3404ad94bc57e2e9bbeee5710aeedc4c81e80a3e18de9d8dfbabe3d88730: replacing mount point \"/var/lib/containers/storage/overlay/8a3b3404ad94bc57e2e9bbeee5710aeedc4c81e80a3e18de9d8dfbabe3d88730/merged\": device or resource busy"
Oct 13 15:31:00 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22697 DF PROTO=TCP SPT=38064 DPT=9102 SEQ=234121047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC323770000000001030307) 
Oct 13 15:31:01 standalone.localdomain ceph-mon[29756]: pgmap v3493: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3494: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-363ca58f2fdd66d3f83e207fb876fe212480858dd07aa9a917453d678975c0bc-merged.mount: Deactivated successfully.
Oct 13 15:31:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:31:02 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:02.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a3b3404ad94bc57e2e9bbeee5710aeedc4c81e80a3e18de9d8dfbabe3d88730-merged.mount: Deactivated successfully.
Oct 13 15:31:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:31:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-927bd228992d5daf6278b2cf1e7a3606939c12c35e65ead5742f030df49b151a-merged.mount: Deactivated successfully.
Oct 13 15:31:02 standalone.localdomain systemd[1]: tmp-crun.nJalSF.mount: Deactivated successfully.
Oct 13 15:31:02 standalone.localdomain podman[494997]: 2025-10-13 15:31:02.78939049 +0000 UTC m=+0.076692738 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:31:02 standalone.localdomain podman[494997]: 2025-10-13 15:31:02.817652422 +0000 UTC m=+0.104954650 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:31:03 standalone.localdomain ceph-mon[29756]: pgmap v3494: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3495: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:03 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:03.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:31:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1a6a4ee30a57a3d453735aee084adc7708560542e5e91aea887f02a6c49125a0-merged.mount: Deactivated successfully.
Oct 13 15:31:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1a6a4ee30a57a3d453735aee084adc7708560542e5e91aea887f02a6c49125a0-merged.mount: Deactivated successfully.
Oct 13 15:31:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:31:04 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:31:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:31:05 standalone.localdomain ceph-mon[29756]: pgmap v3495: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3496: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:31:05 standalone.localdomain podman[467099]: time="2025-10-13T15:31:05Z" level=error msg="Getting root fs size for \"dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004\": getting diffsize of layer \"eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d\" and its parent \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\": unmounting layer d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610: replacing mount point \"/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged\": device or resource busy"
Oct 13 15:31:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:31:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:31:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:31:06.948 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:31:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:31:06.948 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:31:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:31:06.949 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:31:06 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:06 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:31:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1a6a4ee30a57a3d453735aee084adc7708560542e5e91aea887f02a6c49125a0-merged.mount: Deactivated successfully.
Oct 13 15:31:07 standalone.localdomain ceph-mon[29756]: pgmap v3496: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:07 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:07.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3497: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:08 standalone.localdomain ceph-mon[29756]: pgmap v3497: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:08 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:08.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:31:08 standalone.localdomain podman[495016]: 2025-10-13 15:31:08.81168203 +0000 UTC m=+0.085116708 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 15:31:08 standalone.localdomain podman[495016]: 2025-10-13 15:31:08.825974332 +0000 UTC m=+0.099409000 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=)
Oct 13 15:31:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:31:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-798b8b03064872dd018f7f961bc8f06e53ebff246a80987dc38ebea2d662177c-merged.mount: Deactivated successfully.
Oct 13 15:31:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3498: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:10 standalone.localdomain ceph-mon[29756]: pgmap v3498: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:31:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:31:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8b72b428a811fff966752483db02baf6eee3c29de2fa12a8e31c22c4982857ec-merged.mount: Deactivated successfully.
Oct 13 15:31:11 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:31:11 standalone.localdomain podman[495035]: 2025-10-13 15:31:11.243741341 +0000 UTC m=+0.266333612 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 15:31:11 standalone.localdomain podman[495035]: 2025-10-13 15:31:11.26188932 +0000 UTC m=+0.284481561 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 15:31:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3499: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:31:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a3b3404ad94bc57e2e9bbeee5710aeedc4c81e80a3e18de9d8dfbabe3d88730-merged.mount: Deactivated successfully.
Oct 13 15:31:12 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:12.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a3b3404ad94bc57e2e9bbeee5710aeedc4c81e80a3e18de9d8dfbabe3d88730-merged.mount: Deactivated successfully.
Oct 13 15:31:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:31:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:31:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:31:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:31:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:31:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:31:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:31:13 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:31:13 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:31:13 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:31:13 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:31:13 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:31:13 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:31:13 standalone.localdomain ceph-mon[29756]: pgmap v3499: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:31:13 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:13 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:31:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3500: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:13 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:13.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:31:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:31:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:31:13 standalone.localdomain podman[495054]: 2025-10-13 15:31:13.788149749 +0000 UTC m=+0.055679740 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, batch=17.1_20250721.1, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, 
com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, config_id=tripleo_step4, release=1)
Oct 13 15:31:13 standalone.localdomain podman[495055]: 2025-10-13 15:31:13.819585159 +0000 UTC m=+0.084341724 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T15:54:32, summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, managed_by=tripleo_ansible, container_name=swift_container_server, tcib_managed=true, version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat 
OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git)
Oct 13 15:31:13 standalone.localdomain podman[495056]: 2025-10-13 15:31:13.871942715 +0000 UTC m=+0.135759501 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, config_id=tripleo_step4, container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22)
Oct 13 15:31:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:13 standalone.localdomain podman[495054]: 2025-10-13 15:31:13.974762399 +0000 UTC m=+0.242292390 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, tcib_managed=true, config_id=tripleo_step4, container_name=swift_object_server, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-object-container, summary=Red Hat OpenStack Platform 17.1 swift-object, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vendor=Red Hat, Inc.)
Oct 13 15:31:14 standalone.localdomain podman[495055]: 2025-10-13 15:31:14.00170903 +0000 UTC m=+0.266465595 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, distribution-scope=public, release=1, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, vcs-type=git, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, build-date=2025-07-21T15:54:32, vendor=Red Hat, Inc.)
Oct 13 15:31:14 standalone.localdomain podman[495056]: 2025-10-13 15:31:14.049839747 +0000 UTC m=+0.313656563 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, 
build-date=2025-07-21T16:11:22, version=17.1.9, tcib_managed=true, container_name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account)
Oct 13 15:31:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:14 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:31:14 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:31:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:14 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:31:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:14 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:15 standalone.localdomain ceph-mon[29756]: pgmap v3500: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3501: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a3b3404ad94bc57e2e9bbeee5710aeedc4c81e80a3e18de9d8dfbabe3d88730-merged.mount: Deactivated successfully.
Oct 13 15:31:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-927bd228992d5daf6278b2cf1e7a3606939c12c35e65ead5742f030df49b151a-merged.mount: Deactivated successfully.
Oct 13 15:31:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:31:16 standalone.localdomain podman[495137]: 2025-10-13 15:31:16.099579526 +0000 UTC m=+0.113259057 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, container_name=iscsid)
Oct 13 15:31:16 standalone.localdomain podman[495137]: 2025-10-13 15:31:16.107722157 +0000 UTC m=+0.121401698 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 15:31:16 standalone.localdomain systemd[1]: tmp-crun.BZEKZl.mount: Deactivated successfully.
Oct 13 15:31:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:31:17 standalone.localdomain ceph-mon[29756]: pgmap v3501: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:17.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:31:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-798b8b03064872dd018f7f961bc8f06e53ebff246a80987dc38ebea2d662177c-merged.mount: Deactivated successfully.
Oct 13 15:31:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3502: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1a6a4ee30a57a3d453735aee084adc7708560542e5e91aea887f02a6c49125a0-merged.mount: Deactivated successfully.
Oct 13 15:31:17 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:31:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1a6a4ee30a57a3d453735aee084adc7708560542e5e91aea887f02a6c49125a0-merged.mount: Deactivated successfully.
Oct 13 15:31:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7e167f4177388f76c1ad6441c898b0a79fb6ad068ac7f87433fc33b2068b41b1-merged.mount: Deactivated successfully.
Oct 13 15:31:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:31:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:31:18 standalone.localdomain podman[495156]: 2025-10-13 15:31:18.613596453 +0000 UTC m=+0.061783625 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:31:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:31:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2985030973' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:31:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:31:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2985030973' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:31:18 standalone.localdomain podman[495156]: 2025-10-13 15:31:18.625262832 +0000 UTC m=+0.073449994 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:31:18 standalone.localdomain podman[495156]: unhealthy
Oct 13 15:31:18 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:18.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:18 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:31:18 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:31:18 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:31:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:31:19 standalone.localdomain ceph-mon[29756]: pgmap v3502: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2985030973' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:31:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2985030973' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:31:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3503: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:19 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:20 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:20 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:20 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:20 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:20 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:21 standalone.localdomain ceph-mon[29756]: pgmap v3503: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3504: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:31:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-96b51a5cc988a88f9993b66e8a15f32d57fa9953fbb5e785faf55f0f27288b1b-merged.mount: Deactivated successfully.
Oct 13 15:31:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:31:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:22.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1a6a4ee30a57a3d453735aee084adc7708560542e5e91aea887f02a6c49125a0-merged.mount: Deactivated successfully.
Oct 13 15:31:23 standalone.localdomain ceph-mon[29756]: pgmap v3504: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:31:23
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'images', 'volumes', 'manila_data', '.mgr', 'backups', 'manila_metadata']
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3505: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2125 DF PROTO=TCP SPT=42916 DPT=9102 SEQ=160493054 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC37CBF0000000001030307) 
Oct 13 15:31:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:31:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:31:23 standalone.localdomain systemd[1]: tmp-crun.sgFxWJ.mount: Deactivated successfully.
Oct 13 15:31:23 standalone.localdomain podman[495179]: 2025-10-13 15:31:23.66841047 +0000 UTC m=+0.057980268 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 13 15:31:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:31:23 standalone.localdomain podman[495179]: 2025-10-13 15:31:23.684016611 +0000 UTC m=+0.073586389 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:31:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:23.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:31:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:31:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2126 DF PROTO=TCP SPT=42916 DPT=9102 SEQ=160493054 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC380B70000000001030307) 
Oct 13 15:31:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:31:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:31:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-798b8b03064872dd018f7f961bc8f06e53ebff246a80987dc38ebea2d662177c-merged.mount: Deactivated successfully.
Oct 13 15:31:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:31:24 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:31:24 standalone.localdomain systemd[1]: tmp-crun.GaAX1W.mount: Deactivated successfully.
Oct 13 15:31:24 standalone.localdomain podman[495197]: 2025-10-13 15:31:24.704180146 +0000 UTC m=+0.091789059 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:31:24 standalone.localdomain podman[495197]: 2025-10-13 15:31:24.712834943 +0000 UTC m=+0.100443906 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:31:25 standalone.localdomain ceph-mon[29756]: pgmap v3505: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3506: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:31:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:31:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2127 DF PROTO=TCP SPT=42916 DPT=9102 SEQ=160493054 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC388B60000000001030307) 
Oct 13 15:31:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b862ff560e6941e4f65b129950e9f4e88ab5e17db7e4d1f5688162d32661403d-merged.mount: Deactivated successfully.
Oct 13 15:31:26 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:26 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:26 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:31:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:31:27 standalone.localdomain ceph-mon[29756]: pgmap v3506: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:27.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3507: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:28 standalone.localdomain podman[467099]: time="2025-10-13T15:31:28Z" level=error msg="Getting root fs size for \"e1e961b24c94263e906e151226459fe433c4d3741c50997f33ff369591a18ed8\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": unmounting layer d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610: replacing mount point \"/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged\": device or resource busy"
Oct 13 15:31:28 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:28 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:28.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:29 standalone.localdomain ceph-mon[29756]: pgmap v3507: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:31:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3508: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:31:29 standalone.localdomain podman[495219]: 2025-10-13 15:31:29.840030101 +0000 UTC m=+0.099129345 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:31:29 standalone.localdomain podman[495219]: 2025-10-13 15:31:29.911967158 +0000 UTC m=+0.171066362 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 13 15:31:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:30.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:31:30 standalone.localdomain sudo[495244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:31:30 standalone.localdomain sudo[495244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:31:30 standalone.localdomain sudo[495244]: pam_unix(sudo:session): session closed for user root
Oct 13 15:31:30 standalone.localdomain sudo[495262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:31:30 standalone.localdomain sudo[495262]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:31:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2128 DF PROTO=TCP SPT=42916 DPT=9102 SEQ=160493054 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC398760000000001030307) 
Oct 13 15:31:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:31:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8876cdfee2ff3f96001da3a351ac752d9af9c42182b5980adef66a8d52c7f993-merged.mount: Deactivated successfully.
Oct 13 15:31:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:31.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: pgmap v3508: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.126713) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369491126755, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 2111, "num_deletes": 251, "total_data_size": 1968043, "memory_usage": 2008456, "flush_reason": "Manual Compaction"}
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369491136781, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 1908917, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80861, "largest_seqno": 82971, "table_properties": {"data_size": 1900746, "index_size": 4937, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17069, "raw_average_key_size": 19, "raw_value_size": 1884018, "raw_average_value_size": 2190, "num_data_blocks": 224, "num_entries": 860, "num_filter_entries": 860, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760369293, "oldest_key_time": 1760369293, "file_creation_time": 1760369491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 10132 microseconds, and 5343 cpu microseconds.
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.136833) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 1908917 bytes OK
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.136864) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.138689) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.138709) EVENT_LOG_v1 {"time_micros": 1760369491138702, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.138730) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 1959039, prev total WAL file size 1959039, number of live WAL files 2.
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.139370) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(1864KB)], [185(5460KB)]
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369491139423, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 7500141, "oldest_snapshot_seqno": -1}
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 6932 keys, 6510809 bytes, temperature: kUnknown
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369491174729, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 6510809, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6470060, "index_size": 22345, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 181563, "raw_average_key_size": 26, "raw_value_size": 6349351, "raw_average_value_size": 915, "num_data_blocks": 882, "num_entries": 6932, "num_filter_entries": 6932, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760369491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.176435) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 6510809 bytes
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.178334) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.9 rd, 182.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 5.3 +0.0 blob) out(6.2 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 7450, records dropped: 518 output_compression: NoCompression
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.178363) EVENT_LOG_v1 {"time_micros": 1760369491178350, "job": 116, "event": "compaction_finished", "compaction_time_micros": 35733, "compaction_time_cpu_micros": 22296, "output_level": 6, "num_output_files": 1, "total_output_size": 6510809, "num_input_records": 7450, "num_output_records": 6932, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369491178955, "job": 116, "event": "table_file_deletion", "file_number": 187}
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369491179896, "job": 116, "event": "table_file_deletion", "file_number": 185}
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.139233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.180000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.180007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.180011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.180015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:31:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:31:31.180019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:31:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3509: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-798b8b03064872dd018f7f961bc8f06e53ebff246a80987dc38ebea2d662177c-merged.mount: Deactivated successfully.
Oct 13 15:31:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7e167f4177388f76c1ad6441c898b0a79fb6ad068ac7f87433fc33b2068b41b1-merged.mount: Deactivated successfully.
Oct 13 15:31:31 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:31:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:31:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:32.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:31:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:31:33 standalone.localdomain ceph-mon[29756]: pgmap v3509: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3510: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:33.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:31:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f09732678c4f553db79023fb1fd3265d4dd20bff405739440c315d666c1595a6-merged.mount: Deactivated successfully.
Oct 13 15:31:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f09732678c4f553db79023fb1fd3265d4dd20bff405739440c315d666c1595a6-merged.mount: Deactivated successfully.
Oct 13 15:31:34 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:34 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.086 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.171 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.172 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.172 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.172 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.173 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:31:34 standalone.localdomain sudo[495262]: pam_unix(sudo:session): session closed for user root
Oct 13 15:31:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:31:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:31:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:31:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:31:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:31:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:31:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:31:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:31:34 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev ccf2f067-f486-4d84-b53f-cae409d5f8a8 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:31:34 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev ccf2f067-f486-4d84-b53f-cae409d5f8a8 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:31:34 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event ccf2f067-f486-4d84-b53f-cae409d5f8a8 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:31:34 standalone.localdomain sudo[495334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:31:34 standalone.localdomain sudo[495334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:31:34 standalone.localdomain sudo[495334]: pam_unix(sudo:session): session closed for user root
Oct 13 15:31:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:31:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1324515996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.596 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.679 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.679 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.679 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.683 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.684 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:31:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:31:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:34 standalone.localdomain podman[495354]: 2025-10-13 15:31:34.81992929 +0000 UTC m=+0.086683362 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.838 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.840 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9552MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.840 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.840 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:31:34 standalone.localdomain podman[495354]: 2025-10-13 15:31:34.852833485 +0000 UTC m=+0.119587527 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.908 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.908 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.909 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.909 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:31:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:34.957 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:31:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:35 standalone.localdomain ceph-mon[29756]: pgmap v3510: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:31:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:31:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:31:35 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:31:35 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1324515996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:31:35 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:31:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:31:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:31:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3511: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:31:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3230295401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:31:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:35.409 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:31:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:35.415 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:31:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:35.430 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:31:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:35.432 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:31:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:35.432 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:31:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:31:36 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:31:36 standalone.localdomain ceph-mon[29756]: pgmap v3511: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:36 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3230295401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:31:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:31:36 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:31:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:36.466 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:31:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:36.466 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:31:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:31:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:37.130 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:31:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:37.130 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:31:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:37.131 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:31:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:37.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3512: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:37.439 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:31:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:37.458 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:31:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:37.458 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:31:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:37.458 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:31:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:37.458 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:31:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:37.458 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:31:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e8cc2e601c02b5645a16db37fa62b8027dececa9771e380b54894add683430c-merged.mount: Deactivated successfully.
Oct 13 15:31:37 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:37 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-96b51a5cc988a88f9993b66e8a15f32d57fa9953fbb5e785faf55f0f27288b1b-merged.mount: Deactivated successfully.
Oct 13 15:31:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:38.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:39 standalone.localdomain ceph-mon[29756]: pgmap v3512: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3513: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:31:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:31:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:31:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f09732678c4f553db79023fb1fd3265d4dd20bff405739440c315d666c1595a6-merged.mount: Deactivated successfully.
Oct 13 15:31:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8675278b750d990aa4a2dbfe907b259973251cdfa372b24fe7e026c128a60e7a-merged.mount: Deactivated successfully.
Oct 13 15:31:41 standalone.localdomain ceph-mon[29756]: pgmap v3513: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:41.078 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:31:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:41.096 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:31:41 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:31:41 standalone.localdomain podman[495393]: 2025-10-13 15:31:41.313610005 +0000 UTC m=+0.083602467 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Oct 13 15:31:41 standalone.localdomain podman[495393]: 2025-10-13 15:31:41.326975187 +0000 UTC m=+0.096967699 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vcs-type=git, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Oct 13 15:31:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3514: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:31:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:42.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:31:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:31:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:42.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:42 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:31:42 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:31:42 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:31:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:31:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:31:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:31:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:31:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:31:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:31:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:31:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:31:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:31:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:31:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:31:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:31:43 standalone.localdomain ceph-mon[29756]: pgmap v3514: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3515: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:31:43 standalone.localdomain podman[495414]: 2025-10-13 15:31:43.702113303 +0000 UTC m=+0.056794281 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:31:43 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:43.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:43 standalone.localdomain podman[495414]: 2025-10-13 15:31:43.721945414 +0000 UTC m=+0.076626432 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd)
Oct 13 15:31:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:31:44 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:31:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:44 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:44 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:44 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:31:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:31:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:31:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:31:44 standalone.localdomain podman[495433]: 2025-10-13 15:31:44.52078348 +0000 UTC m=+0.075601311 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, tcib_managed=true, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, container_name=swift_object_server, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, build-date=2025-07-21T14:56:28, vendor=Red Hat, Inc., 
io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git)
Oct 13 15:31:44 standalone.localdomain podman[495434]: 2025-10-13 15:31:44.542741096 +0000 UTC m=+0.094068109 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_id=tripleo_step4, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:31:44 standalone.localdomain podman[495435]: 2025-10-13 15:31:44.596746331 +0000 UTC m=+0.147117905 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, tcib_managed=true, build-date=2025-07-21T16:11:22, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container, release=1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:31:44 standalone.localdomain podman[495433]: 2025-10-13 15:31:44.745289498 +0000 UTC m=+0.300107289 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, managed_by=tripleo_ansible, batch=17.1_20250721.1, architecture=x86_64, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, tcib_managed=true, 
version=17.1.9, name=rhosp17/openstack-swift-object, com.redhat.component=openstack-swift-object-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 15:31:44 standalone.localdomain podman[495434]: 2025-10-13 15:31:44.757178644 +0000 UTC m=+0.308505617 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, version=17.1.9, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T15:54:32, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, 
distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, name=rhosp17/openstack-swift-container, config_id=tripleo_step4)
Oct 13 15:31:44 standalone.localdomain podman[495435]: 2025-10-13 15:31:44.816398599 +0000 UTC m=+0.366770153 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, version=17.1.9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, io.openshift.expose-services=, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, 
config_id=tripleo_step4, name=rhosp17/openstack-swift-account, build-date=2025-07-21T16:11:22, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., container_name=swift_account_server)
Oct 13 15:31:45 standalone.localdomain ceph-mon[29756]: pgmap v3515: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3516: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:45 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:31:45 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:31:45 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:31:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:31:47 standalone.localdomain ceph-mon[29756]: pgmap v3516: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:47.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3517: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:31:47 standalone.localdomain podman[495513]: 2025-10-13 15:31:47.825243652 +0000 UTC m=+0.096357819 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 13 15:31:47 standalone.localdomain podman[495513]: 2025-10-13 15:31:47.837961305 +0000 UTC m=+0.109075472 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:31:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:31:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8876cdfee2ff3f96001da3a351ac752d9af9c42182b5980adef66a8d52c7f993-merged.mount: Deactivated successfully.
Oct 13 15:31:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:31:48 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:31:48 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:48.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:31:49 standalone.localdomain podman[495532]: 2025-10-13 15:31:49.067190822 +0000 UTC m=+0.080775330 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:31:49 standalone.localdomain ceph-mon[29756]: pgmap v3517: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:49 standalone.localdomain podman[495532]: 2025-10-13 15:31:49.103926324 +0000 UTC m=+0.117510802 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:31:49 standalone.localdomain podman[495532]: unhealthy
Oct 13 15:31:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3518: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f09732678c4f553db79023fb1fd3265d4dd20bff405739440c315d666c1595a6-merged.mount: Deactivated successfully.
Oct 13 15:31:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:31:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:31:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f09732678c4f553db79023fb1fd3265d4dd20bff405739440c315d666c1595a6-merged.mount: Deactivated successfully.
Oct 13 15:31:51 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:31:51 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:31:51 standalone.localdomain ceph-mon[29756]: pgmap v3518: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3519: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:31:52 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:52.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:53 standalone.localdomain ceph-mon[29756]: pgmap v3519: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:31:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:31:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:31:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:31:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:31:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:31:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:31:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3520: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48976 DF PROTO=TCP SPT=56092 DPT=9102 SEQ=2891388168 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC3F1EF0000000001030307) 
Oct 13 15:31:53 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:53.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:31:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48977 DF PROTO=TCP SPT=56092 DPT=9102 SEQ=2891388168 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC3F5F60000000001030307) 
Oct 13 15:31:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:31:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:31:54 standalone.localdomain podman[495555]: 2025-10-13 15:31:54.857522993 +0000 UTC m=+0.118444271 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251009, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:31:54 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:54 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:31:54 standalone.localdomain podman[495555]: 2025-10-13 15:31:54.900945051 +0000 UTC m=+0.161866329 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:31:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:31:55 standalone.localdomain ceph-mon[29756]: pgmap v3520: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3521: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48978 DF PROTO=TCP SPT=56092 DPT=9102 SEQ=2891388168 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC3FDF60000000001030307) 
Oct 13 15:31:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f09732678c4f553db79023fb1fd3265d4dd20bff405739440c315d666c1595a6-merged.mount: Deactivated successfully.
Oct 13 15:31:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8675278b750d990aa4a2dbfe907b259973251cdfa372b24fe7e026c128a60e7a-merged.mount: Deactivated successfully.
Oct 13 15:31:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:31:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:31:57 standalone.localdomain ceph-mon[29756]: pgmap v3521: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:57 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:31:57 standalone.localdomain podman[495574]: 2025-10-13 15:31:57.185329441 +0000 UTC m=+0.132585856 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:31:57 standalone.localdomain podman[495574]: 2025-10-13 15:31:57.195811114 +0000 UTC m=+0.143067529 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:31:57 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:57.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3522: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:31:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:31:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:31:58 standalone.localdomain nova_compute[456855]: 2025-10-13 15:31:58.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:31:59 standalone.localdomain ceph-mon[29756]: pgmap v3522: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3523: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:31:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:31:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90ad63805e4e2b4992e516cdb5b5f8a715356c3206cbc457c40619f8af2b3b5c-merged.mount: Deactivated successfully.
Oct 13 15:31:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90ad63805e4e2b4992e516cdb5b5f8a715356c3206cbc457c40619f8af2b3b5c-merged.mount: Deactivated successfully.
Oct 13 15:31:59 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:32:00 standalone.localdomain ceph-mon[29756]: pgmap v3523: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully.
Oct 13 15:32:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3-merged.mount: Deactivated successfully.
Oct 13 15:32:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48979 DF PROTO=TCP SPT=56092 DPT=9102 SEQ=2891388168 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC40DB70000000001030307) 
Oct 13 15:32:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 13 15:32:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 13 15:32:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3524: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-eb4c4d0373a82328ba2ba6e7c6b844d401031336d6397d409b55d5dd2f808e3d-merged.mount: Deactivated successfully.
Oct 13 15:32:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:32:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:01 standalone.localdomain podman[495597]: 2025-10-13 15:32:01.836067178 +0000 UTC m=+0.092866983 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:32:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:01 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:01 standalone.localdomain podman[495597]: 2025-10-13 15:32:01.869797678 +0000 UTC m=+0.126597433 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:32:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:32:02 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:02.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:02 standalone.localdomain systemd[1]: tmp-crun.rE6gCT.mount: Deactivated successfully.
Oct 13 15:32:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 13 15:32:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8e53c626af8f2c4d4f73282ff962891d82a21f7b0dd776480787a910cef793ec-merged.mount: Deactivated successfully.
Oct 13 15:32:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8e53c626af8f2c4d4f73282ff962891d82a21f7b0dd776480787a910cef793ec-merged.mount: Deactivated successfully.
Oct 13 15:32:02 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:32:02 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:02 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:03 standalone.localdomain ceph-mon[29756]: pgmap v3524: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3525: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:03 standalone.localdomain podman[467099]: time="2025-10-13T15:32:03Z" level=error msg="Getting root fs size for \"e424aa3074ca6a84ef71baa8ad166a3f305c806ab6c804d6946a55f18e820802\": unmounting layer e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df: replacing mount point \"/var/lib/containers/storage/overlay/e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df/merged\": device or resource busy"
Oct 13 15:32:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:03 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:03 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:03.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb070b9ecf3b54835972bb9ccc48095100b9730e4fda6a8fbfcf81c1bc8e8845-merged.mount: Deactivated successfully.
Oct 13 15:32:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 13 15:32:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-22c9c267afdf127d08eca445cb214129244d6909cac33b961d5f917bf640f921-merged.mount: Deactivated successfully.
Oct 13 15:32:05 standalone.localdomain ceph-mon[29756]: pgmap v3525: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3526: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:32:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:32:06.949 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:32:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:32:06.949 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:32:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:32:06.950 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:32:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:32:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:32:07 standalone.localdomain ceph-mon[29756]: pgmap v3526: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:32:07 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:07.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:32:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3527: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:07 standalone.localdomain podman[495635]: 2025-10-13 15:32:07.688860735 +0000 UTC m=+0.954619987 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:32:07 standalone.localdomain podman[495635]: 2025-10-13 15:32:07.721747598 +0000 UTC m=+0.987506850 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:32:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:32:08 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:08.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:09 standalone.localdomain ceph-mon[29756]: pgmap v3527: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3528: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:32:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:32:09 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:32:09 standalone.localdomain podman[467099]: time="2025-10-13T15:32:09Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged: invalid argument"
Oct 13 15:32:09 standalone.localdomain podman[467099]: time="2025-10-13T15:32:09Z" level=error msg="Getting root fs size for \"f2436a379d8f08d628e1b8357f6fcb1654c8cddbc4a9222c3d173cdff3a680a3\": getting diffsize of layer \"d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610\" and its parent \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\": creating overlay mount to /var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/W4XPC5KOLLZ5MP2UTF333CCBDD:/var/lib/containers/storage/overlay/l/5PSUIGDAXQERVK6KLHOROUARFH,upperdir=/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/diff,workdir=/var/lib/containers/storage/overlay/d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610/work,nodev,metacopy=on\": no such file or directory"
Oct 13 15:32:09 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:09 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:32:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:32:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:11 standalone.localdomain ceph-mon[29756]: pgmap v3528: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3529: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:32:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7861ba3a2d0fa6c88e81d0b962398561746b4f0944affb5e220c19f62f3ba4ad-merged.mount: Deactivated successfully.
Oct 13 15:32:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:11 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:11 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:11 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:32:12 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:12.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:32:12 standalone.localdomain python3[495760]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:32:12 standalone.localdomain systemd[1]: tmp-crun.myz0Ys.mount: Deactivated successfully.
Oct 13 15:32:12 standalone.localdomain podman[495761]: 2025-10-13 15:32:12.822720838 +0000 UTC m=+0.081538824 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_id=edpm, 
vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 13 15:32:12 standalone.localdomain podman[495761]: 2025-10-13 15:32:12.830523148 +0000 UTC m=+0.089341194 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, 
architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 13 15:32:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:32:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:32:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:32:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:32:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:32:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:32:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:32:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:32:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:32:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:32:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:32:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:32:13 standalone.localdomain ceph-mon[29756]: pgmap v3529: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3530: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:13 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:13.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:32:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:32:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:32:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-182f4b56e6e8809f2ffde261aea7a82f597fbc875533d1efd7f59fe7c8a139ed-merged.mount: Deactivated successfully.
Oct 13 15:32:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:32:14 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:32:14 standalone.localdomain podman[495792]: 2025-10-13 15:32:14.6748684 +0000 UTC m=+0.156716610 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 13 15:32:14 standalone.localdomain podman[495792]: 2025-10-13 15:32:14.713955615 +0000 UTC m=+0.195803825 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:32:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-90ad63805e4e2b4992e516cdb5b5f8a715356c3206cbc457c40619f8af2b3b5c-merged.mount: Deactivated successfully.
Oct 13 15:32:15 standalone.localdomain ceph-mon[29756]: pgmap v3530: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3531: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:32:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:32:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:32:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:32:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:32:16 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:32:16 standalone.localdomain podman[495818]: 2025-10-13 15:32:16.927545534 +0000 UTC m=+1.183490680 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_account_server, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T16:11:22, release=1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container)
Oct 13 15:32:16 standalone.localdomain podman[495811]: 2025-10-13 15:32:16.979552516 +0000 UTC m=+1.249117322 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, container_name=swift_object_server, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28)
Oct 13 15:32:17 standalone.localdomain podman[495812]: 2025-10-13 15:32:17.029537996 +0000 UTC m=+1.296498571 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, build-date=2025-07-21T15:54:32, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, distribution-scope=public, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, 
io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, release=1, com.redhat.component=openstack-swift-container-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:32:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:32:17 standalone.localdomain ceph-mon[29756]: pgmap v3531: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:17 standalone.localdomain podman[495818]: 2025-10-13 15:32:17.126594567 +0000 UTC m=+1.382539733 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, version=17.1.9, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, name=rhosp17/openstack-swift-account, tcib_managed=true, com.redhat.component=openstack-swift-account-container, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4)
Oct 13 15:32:17 standalone.localdomain podman[495811]: 2025-10-13 15:32:17.206221381 +0000 UTC m=+1.475786167 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, container_name=swift_object_server, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, tcib_managed=true, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, 
maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1, version=17.1.9, name=rhosp17/openstack-swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:32:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:17.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:17 standalone.localdomain podman[495812]: 2025-10-13 15:32:17.243594633 +0000 UTC m=+1.510555238 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, tcib_managed=true, config_id=tripleo_step4, version=17.1.9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
swift-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, release=1, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, vendor=Red Hat, Inc.)
Oct 13 15:32:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3532: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed-merged.mount: Deactivated successfully.
Oct 13 15:32:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully.
Oct 13 15:32:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:18 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:32:18 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:32:18 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:32:18 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:32:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:32:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2035572529' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:32:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:32:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2035572529' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:32:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:18 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:18.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 13 15:32:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:18 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:18 standalone.localdomain podman[495919]: 2025-10-13 15:32:18.110464904 +0000 UTC m=+0.051788877 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:32:18 standalone.localdomain podman[495933]: 2025-10-13 15:32:18.834431642 +0000 UTC m=+0.361054225 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 13 15:32:18 standalone.localdomain podman[495933]: 2025-10-13 15:32:18.840260722 +0000 UTC m=+0.366883325 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid)
Oct 13 15:32:19 standalone.localdomain ceph-mon[29756]: pgmap v3532: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2035572529' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:32:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2035572529' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:32:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3533: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 13 15:32:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:19 standalone.localdomain podman[495919]: 
Oct 13 15:32:19 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:32:19 standalone.localdomain podman[495919]: 2025-10-13 15:32:19.498888588 +0000 UTC m=+1.440212551 container create 5a98afb93b554b32ec4df4b3396b4fa6dcd5663fb30b41fc4bbc0e8605c2ba00 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'abe420c4812f0b325cc12c1ca24404db8186fb14d6fc3d382b127dc9653132d1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.build-date=20251009, container_name=neutron_dhcp_agent, config_id=neutron_dhcp, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:32:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:19 standalone.localdomain python3[495760]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=abe420c4812f0b325cc12c1ca24404db8186fb14d6fc3d382b127dc9653132d1 --label config_id=neutron_dhcp --label container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'abe420c4812f0b325cc12c1ca24404db8186fb14d6fc3d382b127dc9653132d1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume 
/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:32:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 13 15:32:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8e53c626af8f2c4d4f73282ff962891d82a21f7b0dd776480787a910cef793ec-merged.mount: Deactivated successfully.
Oct 13 15:32:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:20.993 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:32:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:20.998 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:32:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:20.999 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.006 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.010 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '78683fe7-dc02-43c5-a0a9-3d37522415c1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:32:20.999718', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'cf2803e4-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.189000422, 'message_signature': '023dab314b6590ff09eaaaef14d9cd3426edc30d2537e79005ac0ea17a960856'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:32:20.999718', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'cf28a254-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.196739301, 'message_signature': 'e30929dd9a141e69e0bee0a788622400504bf7f617c427e324d3599abb9bd973'}]}, 'timestamp': '2025-10-13 15:32:21.011357', '_unique_id': 'd2b982291c034d0b877fe021f1953d9f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.012 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.013 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.013 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 55 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.014 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a946bd6e-3ae9-4d8a-bf17-0fb0b40a07c2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 55, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:32:21.013941', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'cf291734-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.189000422, 'message_signature': 'e3054feac89b3ee319020b36d75560f7ee20a139cb30d4fc4d5d091656864faa'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:32:21.013941', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'cf292238-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.196739301, 'message_signature': '289788c56b4599e67677f87f8a083c0133c54db7cb5166f224215112534b5806'}]}, 'timestamp': '2025-10-13 15:32:21.014541', '_unique_id': '3d4c2cdbec874c549aa9bb3bb83569d1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.015 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 4526 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.016 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8cea298a-76f4-4972-9c56-aca9e9f4afae', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4526, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:32:21.015969', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'cf29663a-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.189000422, 'message_signature': '2e4912988c61b20d0c67f2487582f90a5d45b7eb07a8fb09217ac03f039c39e5'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:32:21.015969', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'cf297116-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.196739301, 'message_signature': '77d7657743088cac132767a728a876afb8c795d81f6ff5b422ad8f50beae970b'}]}, 'timestamp': '2025-10-13 15:32:21.016585', '_unique_id': 'ffa4ebc87ed34cd4b11f25a8ea07725c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.017 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.018 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.044 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 52.27734375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.068 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9d03b110-0dd2-44e4-a9d3-fedead2f0af1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.27734375, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:32:21.018167', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'cf2dd18e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.233786052, 'message_signature': 'f8ee08731a3302d11a7ae001d3f51e28c6302418edb9464dedae2846e67227ff'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:32:21.018167', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'cf317154-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.257410581, 'message_signature': '6e752c0e8e664d34bbc219ca91a0b42329e81204e046c6b27d3f3a30b0a3fe05'}]}, 'timestamp': '2025-10-13 15:32:21.069135', '_unique_id': '4bbc8b6e19df426588800dded2e6a66f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.070 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.071 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.099 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.099 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.100 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.115 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.116 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9521bc99-dd56-4286-9fbc-108f9bc98bd8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:32:21.072130', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf362726-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.261387663, 'message_signature': '183266ae77cbf28e110e941fda6ce3aeb014324d103338d3660afc87c5f4ae18'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:32:21.072130', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf363a7c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.261387663, 'message_signature': 'c70d3fba7a21d22be7b7f8fff9a34cee02b6f1444cd64fa4ae0336c4f6055eb1'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:32:21.072130', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'cf3652be-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.261387663, 'message_signature': '355301ff668c5ed1ca02829e25bc18f19ebf56db820765a3a9d750277996ca21'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:32:21.072130', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf38a08c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.290274814, 'message_signature': 'ebf203a98fda56e1af3c88aab9855f021a73512749a0f7dc85dfe57426e3b78f'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:32:21.072130', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf38ba2c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.290274814, 'message_signature': 'f241ec1e7e0ac3ff2f6ddb7de757cebcce44aa9eaa6e30510aa1e08baafe8b9c'}]}, 'timestamp': '2025-10-13 15:32:21.116871', '_unique_id': 'b2eb46c00ca84f97b3d89d201d4d3f5c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.118 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.120 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.120 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.120 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '74916105-a775-48c4-a2f3-a19a6d6e76a3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:32:21.120220', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'cf3954d2-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.189000422, 'message_signature': 'a280321aa382ad264d754bd67ddfe58c681c2c6cbdcd904c28b91fad929fc029'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:32:21.120220', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'cf396a4e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.196739301, 'message_signature': 'f6e1188c5b18503dffff1d83eca86fbee833c03cfabcee9013763747727b6a6e'}]}, 'timestamp': '2025-10-13 15:32:21.121327', '_unique_id': 'f1447dd00bda4a7ea74ddb68c28843ba'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.122 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.123 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.123 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.124 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.124 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceph-mon[29756]: pgmap v3533: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.125 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.125 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a2f87a00-bfa7-496f-8bf6-aa7dcab5f6f3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:32:21.123884', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf39e1d6-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.261387663, 'message_signature': '09530cae901a041ce4be8ee3985097229a08227dddc2f4be8042ec876950d91b'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:32:21.123884', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf39f4d2-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.261387663, 'message_signature': 'a9aa30bc62fb96c92f96b64610125434be5394d91b1a4396935ab2437261f267'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:32:21.123884', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'cf3a0706-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.261387663, 'message_signature': 'abc409841b02d2b4491285af2983e29630d9a97c93f9100ae2ea5baf3b42d84e'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:32:21.123884', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf3a1868-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.290274814, 'message_signature': '64d6c8990e94b5f17653747a81cbf337008456a1076bc9e9d6cb553378b82965'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:32:21.123884', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf3a28b2-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.290274814, 'message_signature': '982d0008bf99ee3b312421257b1298479775a1ac334d44439dec461b61844836'}]}, 'timestamp': '2025-10-13 15:32:21.126174', '_unique_id': '937346ce4ea8403f8eeb0a6abb818b34'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.127 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.128 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.163 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.164 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.164 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 2321920 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.189 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.190 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9338f6a3-c752-432f-9c19-262352376aff', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:32:21.128835', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf3ffada-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': 'bfa099031c67f6b220741e468936dd60ce67eb44ae6b79d68a01a0f0d7cd5734'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:32:21.128835', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf4013e4-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': '1a28841d8f8e2968c25c46e05d8d9003b7497c4843430ba5d23ab5506edacda4'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2321920, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:32:21.128835', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'cf4024a6-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': '10fddaf5d94414621c1fc5957d160c70d8222c7936328f42b6e1bf1785b0ea73'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:32:21.128835', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf43e780-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.354624736, 'message_signature': 'd94d645a3f5c6256632bd8e77528d22ed18bc64f5c8d8e3251276879324a2c24'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:32:21.128835', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf440102-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.354624736, 'message_signature': '8cd757b864b313c461692ac90826370bebf469ac1254f6fe8f91afe83b8af14e'}]}, 'timestamp': '2025-10-13 15:32:21.190835', '_unique_id': '5b5123bf3f584654b1ce1657d0feb53e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.192 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.194 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.195 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.195 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.196 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.196 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.197 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a6f1cef2-344c-4e78-88bf-a546bc326c35', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:32:21.194986', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf44bef8-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.261387663, 'message_signature': '3bb3fb7a3c6997d0790492241526aa4feef0b65add8f420406d87c16aa4fd491'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:32:21.194986', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf44d92e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.261387663, 'message_signature': '7c580f3c0282e713bacd0f1241f3e84495f323d9ee369f0469e9ecf86b8ae729'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:32:21.194986', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'cf44ed24-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.261387663, 'message_signature': 'e274c40df53a19e5a7c763d2d3dd4ba0650b07a12131a8efb140b98e15c52af7'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:32:21.194986', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf4501ce-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.290274814, 'message_signature': '7c11c9abeb124c0d6e1eb866532a67550ef607dda7fc2701aa343251a22066e0'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:32:21.194986', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf45192a-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.290274814, 'message_signature': '4bc1d0020454a5b65621940b710f355bf64247ea5b1e93d7681c13d0f25f3b73'}]}, 'timestamp': '2025-10-13 15:32:21.197884', '_unique_id': 'f5347aea32cd449bb691a0acbdff9ae8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.201 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.201 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 857967408 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.202 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 155394833 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.202 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 63114260 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.203 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.203 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0d24cac7-e350-440f-93b5-b643051fa03a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 857967408, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:32:21.201593', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf45c564-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': 'f6c8fc32107702c76a53d210b555dd752921bd2b194259996a08d35781b2f16c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 155394833, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:32:21.201593', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf45da18-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': '70c11b1a12f4c3160f34dde4ed2a6f73563d36def001330cc987d47bf2f3471c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 63114260, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:32:21.201593', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'cf45eb34-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': '4a4013c6e520be05679b0ce06be25cfff46167773ad59cdedfcaebd8afd80425'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:32:21.201593', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf4606aa-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.354624736, 'message_signature': '5f5d510e4bd16c5f39992c2bcbd919fc3323848d583cfccad704fbb724e36ff7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:32:21.201593', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf46183e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.354624736, 'message_signature': '73d0ce4d5bb7321cab8ff7820f0741bac017e11e7889293bd0ad2f5eca50b7dc'}]}, 'timestamp': '2025-10-13 15:32:21.204402', '_unique_id': '27e9522e5bc94cfdbf801d1caed325a1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.207 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.207 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 64 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.208 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 49 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1bd2b738-fe2d-4b83-ad16-ebfaa56f808d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 64, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:32:21.207437', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'cf46a70e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.189000422, 'message_signature': '13b95fc3eefa8e130fc01948dc2e7eb651239f2d93c1369bca225b124524722b'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 49, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:32:21.207437', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'cf46c324-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.196739301, 'message_signature': 'b298d754bdeeeff5ec89ca47c2324e88074655141694673613b3e1700e42fbdc'}]}, 'timestamp': '2025-10-13 15:32:21.208868', '_unique_id': 'f79fb92702e443268d1608567f225c92'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.211 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.211 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 501653653 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.212 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.212 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.213 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.213 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'faad4c88-e1b6-4e94-92bb-88eb5441f300', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 501653653, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:32:21.211480', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf474204-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': 'f23fc0954ef16d90b1c3811255ca9ef65f4e1e0fda2f4b06403cc1e5cd62f0bc'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:32:21.211480', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf4754ec-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': '373c41ecc8e96b7988a758f6a8cd0711ba6e8b55be60d61e59022cd84389e55f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:32:21.211480', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'cf476770-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': '05baae03fe06dab8067fc48567c6ec2632918efabdf0442902aa00e8aa2250e0'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:32:21.211480', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf477c2e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.354624736, 'message_signature': '21fd1e950592f673370a2c1bf1450910649dd1d13dae3e496b315e7c93a0969a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:32:21.211480', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf4789bc-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.354624736, 'message_signature': 'e0b7b63bbb97612af53ed19011b6ae299c515f9fb953e1ef71aa429f53175a71'}]}, 'timestamp': '2025-10-13 15:32:21.213765', '_unique_id': '4996cb65791644e685cb275651fb45b2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.214 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.215 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.215 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 495 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.215 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.215 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.216 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.216 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '587fa58b-388f-4df3-8ed4-60fe81c8698b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 495, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:32:21.215335', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf47d278-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': 'f36fc51772d5f3449f8a89c351f508db3f014384b25cab82a3e3acb4a241b9be'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:32:21.215335', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf47deb2-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': '1167dc1dc6a5bc13a7b09af4ee847f558ad2b9b54f5cde3bca4cfdcf90dbdd92'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:32:21.215335', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'cf47ebaa-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': '1b572064f6133555cc4f897343157ddb2b43ca70dbfba60ffceb50fb8702d76a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:32:21.215335', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf47fc26-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.354624736, 'message_signature': 'bd385ca4fc8ed91df25bd6c9e22e1bf8221e8598be5343a47d25e9d679c1252c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:32:21.215335', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf480c3e-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.354624736, 'message_signature': 'f8aff15834fa38a5b9eca7f497df3f135646aa3bede838d0196983be22cdccd1'}]}, 'timestamp': '2025-10-13 15:32:21.217182', '_unique_id': '879a891f40184911803233218254108f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.218 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.219 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.219 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.219 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '99745148-5ae5-4349-9462-f91b99cfa6fc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:32:21.219119', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'cf48679c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.189000422, 'message_signature': '9bfcce9a0eb2773f24d2415cdc9c88dc42d1657a61a3dff9c5f2f85dd7a12b98'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:32:21.219119', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'cf487656-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.196739301, 'message_signature': '9b9e3c05cb7a457885c1b91c627e091c8865fdeee75b85dc7985a082a6b6fced'}]}, 'timestamp': '2025-10-13 15:32:21.219879', '_unique_id': 'fe9cd52e91974829a6a4285c03fe379f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.220 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.221 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.221 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 4738 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.222 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 3576 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a8f1517c-f0ea-479e-998a-5ef2d36655e9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4738, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:32:21.221753', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'cf48ce08-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.189000422, 'message_signature': 'dc7c267cc79d37d431e3b8bcfbd68b757caa6e8c7fe99c941d3a5664cb9b47f1'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3576, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:32:21.221753', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'cf48dc22-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.196739301, 'message_signature': 'ddc12c2e8835ab20a6663e0d1b3ccf8e7f33b14391b800c23414e24d353d0961'}]}, 'timestamp': '2025-10-13 15:32:21.222505', '_unique_id': '1597152dd0c6457090e0bfe0111dc9d9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.223 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.224 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.224 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.224 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.224 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.224 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.225 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.225 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.225 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.225 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0be01ba9-2ab4-45a9-bb64-30bbb3eab6f5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:32:21.224834', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf49454a-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': '81748f4d9e2feef1c36d46782acd1e15461bb70255930e9cac87a80464597c4d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:32:21.224834', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf495238-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': '568c20d0312371f0b7fa35cc868bdf2019649f79e65d0ff0c2307bf04a3ad34e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 108, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:32:21.224834', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'cf495d50-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': 'cc9c5ea5e88e75ca3989687a23ce05da33e698c6c8a42d28040aa7be00431346'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:32:21.224834', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf496714-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.354624736, 'message_signature': 'b5e170177d709021b1630730f1256340b5d45c32b450260ad066f00c55a6682b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:32:21.224834', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf49713c-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.354624736, 'message_signature': 'cab5ff47b0a73d6c26644c0ae49b166fa69291da1912678010fce53d56eb827e'}]}, 'timestamp': '2025-10-13 15:32:21.226249', '_unique_id': 'c7ae2690cc65423e829584f9f340008c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.226 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.227 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.227 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 31490000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.228 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 31340000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '554d37e1-d82d-4110-85c4-04029599c4e6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 31490000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:32:21.227800', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'cf49b8b8-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.233786052, 'message_signature': '6815514d11f2cb7454deaa72302c93b2f033df98bfba0d9b86fccfcf579003c5'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 31340000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': '2025-10-13T15:32:21.227800', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'cf49c592-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.257410581, 'message_signature': '8d9008520ec17941ea4c9e572cd15bd47af059257b7361471f66a210a78a1866'}]}, 'timestamp': '2025-10-13 15:32:21.228409', '_unique_id': 'b03fb21557074cd5b05b8a27a4a87d7b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.229 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.230 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3774cc9d-94bd-49b5-9a89-061402b5ad49', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:32:21.229944', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'cf4a0d36-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.189000422, 'message_signature': 'a14feadc98a11e485a711047f3169759de9cdd2b48566b4201a79940d300c0af'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:32:21.229944', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'cf4a19d4-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.196739301, 'message_signature': '211173d67fb4030c2649da04df7ec160b8530219501704ea61558f925dc2d0fb'}]}, 'timestamp': '2025-10-13 15:32:21.230606', '_unique_id': '18e0c2aefcaa42ec8888c23c02157a76'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.231 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.232 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.232 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '798db669-d1ed-4235-b6de-10ce7e150cf9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:32:21.232023', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'cf4a5fe8-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.189000422, 'message_signature': '8ca81485abd140bcdde98431ca47e429f6f9a5d9f596e9562934ff1c2563f6ea'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:32:21.232023', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'cf4a6c04-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.196739301, 'message_signature': '9d788bbd11506e41eca49c804b6c0aa0af67b4e6d71af840cd449617b349d4de'}]}, 'timestamp': '2025-10-13 15:32:21.232677', '_unique_id': '2f06067839c6473b94b39e102facb33b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.233 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.234 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.234 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.234 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2d3a3bc9-ffde-4908-a393-a421af4763bb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:32:21.234153', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'cf4ab164-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.189000422, 'message_signature': '4683901e5b4a45ae913ac603682ca9c07738c5b2632a0bb1c7568eecbfec352b'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:32:21.234153', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'cf4abd26-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.196739301, 'message_signature': '16ebb648537baa36443492456f18d4eff0af2cb6b6d0d2f86acb5edaee80e863'}]}, 'timestamp': '2025-10-13 15:32:21.234755', '_unique_id': 'db09782adc374169959800a958d78bfd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.235 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.236 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.236 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.236 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.236 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.237 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.237 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '22e9f53d-af62-4bdb-8c8f-f143ad950657', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:32:21.236200', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf4b0128-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': '7c2b2d0673a7db63eb216e03456cfbe7cff67e429f61425cffcdf3ce93d83ef7'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:32:21.236200', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf4b0de4-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': '7d5debabb35229122111fa7e5076476f1302404685ce5c86fbbb718f1773133f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:32:21.236200', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'cf4b1938-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.318120901, 'message_signature': '4e861c87a6a0e10e7031ef1c9205144a1ba04473bbdd1edf7fbaf322517bc298'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:32:21.236200', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'cf4b2428-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.354624736, 'message_signature': '65230d8b0ad0b99dff4515cfdf2bf7ed538c4ff7f96bec9e51a0ea20b93dc970'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:32:21.236200', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'cf4b2e64-a849-11f0-a10a-fa163e2e69a7', 'monotonic_time': 8984.354624736, 'message_signature': '9b22eca06ef458aa657d08f2af9fac867df50c83a5f9d2836b33b201eec5148e'}]}, 'timestamp': '2025-10-13 15:32:21.237660', '_unique_id': 'e73843da328b425b852ed86e61aabc59'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:32:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:32:21.238 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:32:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3534: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:32:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:32:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:32:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:22.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ed161745c15cbd99d7c949d604dd3eb9ddf0df704d54afd7fe76909dd70cd6ab-merged.mount: Deactivated successfully.
Oct 13 15:32:22 standalone.localdomain podman[495963]: 2025-10-13 15:32:22.384552566 +0000 UTC m=+1.013073318 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:32:22 standalone.localdomain podman[495963]: 2025-10-13 15:32:22.392003185 +0000 UTC m=+1.020523987 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:32:22 standalone.localdomain podman[495963]: unhealthy
Oct 13 15:32:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 13 15:32:23 standalone.localdomain ceph-mon[29756]: pgmap v3534: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:32:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully.
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:32:23 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:32:23 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:32:23 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:32:23
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_data', 'volumes', 'images', '.mgr', 'vms', 'backups', 'manila_metadata']
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3535: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42479 DF PROTO=TCP SPT=37516 DPT=9102 SEQ=592663294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC4671E0000000001030307) 
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:32:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:32:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:23.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:32:24 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:24 standalone.localdomain ceph-mon[29756]: pgmap v3535: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:24 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:24 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:24 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:24 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42480 DF PROTO=TCP SPT=37516 DPT=9102 SEQ=592663294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC46B360000000001030307) 
Oct 13 15:32:24 standalone.localdomain python3.9[496106]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:32:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f49b9fcb7527e4e06386bb74b403d49154983873c705746d0322d416fcfe3182-merged.mount: Deactivated successfully.
Oct 13 15:32:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-22c9c267afdf127d08eca445cb214129244d6909cac33b961d5f917bf640f921-merged.mount: Deactivated successfully.
Oct 13 15:32:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:25 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:25 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:25 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3536: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:32:25 standalone.localdomain python3.9[496216]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:32:25 standalone.localdomain python3.9[496269]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:32:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-696b99ecafeb62775d4be1c37484863243dbe05bbb108c98eb21626c3e1b01e2-merged.mount: Deactivated successfully.
Oct 13 15:32:26 standalone.localdomain ceph-mon[29756]: pgmap v3536: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:32:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42481 DF PROTO=TCP SPT=37516 DPT=9102 SEQ=592663294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC473360000000001030307) 
Oct 13 15:32:26 standalone.localdomain python3.9[496376]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760369546.0399778-459-176391129590354/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:32:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:32:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:32:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:27.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:32:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:32:27 standalone.localdomain python3.9[496429]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:32:27 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:32:27 standalone.localdomain podman[496430]: 2025-10-13 15:32:27.321321476 +0000 UTC m=+0.066453169 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 15:32:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3537: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:32:27 standalone.localdomain podman[496430]: 2025-10-13 15:32:27.362828495 +0000 UTC m=+0.107960208 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute)
Oct 13 15:32:27 standalone.localdomain systemd-rc-local-generator[496471]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:32:27 standalone.localdomain systemd-sysv-generator[496475]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:32:27 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:32:27 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-46476cb54c317ede576986c939135db930b5a6eeb4db9b988aa8d7ddee484bf8-merged.mount: Deactivated successfully.
Oct 13 15:32:27 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:32:28 standalone.localdomain python3.9[496537]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:32:28 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:32:28 standalone.localdomain systemd-sysv-generator[496565]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:32:28 standalone.localdomain systemd-rc-local-generator[496559]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:32:28 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:32:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:28.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:28 standalone.localdomain systemd[1]: Starting neutron_dhcp_agent container...
Oct 13 15:32:29 standalone.localdomain ceph-mon[29756]: pgmap v3537: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:32:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3538: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:32:29 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:29 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:29 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:32:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04fc369ff501ac98757c118a4920bcf5c49fec6ddac9b1bcc6ada4fbd4c0ffb/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 13 15:32:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04fc369ff501ac98757c118a4920bcf5c49fec6ddac9b1bcc6ada4fbd4c0ffb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:32:29 standalone.localdomain podman[496578]: 2025-10-13 15:32:29.708620547 +0000 UTC m=+0.912199139 container init 5a98afb93b554b32ec4df4b3396b4fa6dcd5663fb30b41fc4bbc0e8605c2ba00 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.license=GPLv2, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=neutron_dhcp, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'abe420c4812f0b325cc12c1ca24404db8186fb14d6fc3d382b127dc9653132d1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']})
Oct 13 15:32:29 standalone.localdomain podman[496578]: 2025-10-13 15:32:29.715581182 +0000 UTC m=+0.919159774 container start 5a98afb93b554b32ec4df4b3396b4fa6dcd5663fb30b41fc4bbc0e8605c2ba00 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_managed=true, config_id=neutron_dhcp, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'abe420c4812f0b325cc12c1ca24404db8186fb14d6fc3d382b127dc9653132d1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:32:29 standalone.localdomain podman[496578]: neutron_dhcp_agent
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: + sudo -E kolla_set_configs
Oct 13 15:32:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-73e750623bd8a901358d7444f4c120903c403b7f36bb0eee604846a4f9da0a88-merged.mount: Deactivated successfully.
Oct 13 15:32:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Validating config file
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Copying service configuration files
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Writing out command to execute
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/lock
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dnsmasq_wrapper
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_haproxy_wrapper
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/03f66d1294ee0eb4465c67b7f02f44b76b4fb981cae67c336f96704556f02da9
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/7fd88ee77a303e69e2d79df25f19e760c643c3e7da2c07fbeb6a01569dbedc50
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/interface
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/leases
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/host
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/addn_hosts
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/opts
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/pid
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/interface
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/leases
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/host
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/addn_hosts
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/opts
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/pid
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/interface
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/leases
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/pid
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/lock/neutron-iptables-qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/lock/neutron-iptables-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/lock/neutron-iptables-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/0c455abd-28d4-47e7-a254-e50de0526def.pid.haproxy
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/0c455abd-28d4-47e7-a254-e50de0526def.conf
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: ++ cat /run_command
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: + CMD=/usr/bin/neutron-dhcp-agent
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: + ARGS=
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: + sudo kolla_copy_cacerts
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: + [[ ! -n '' ]]
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: + . kolla_extend_start
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: Running command: '/usr/bin/neutron-dhcp-agent'
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: + umask 0022
Oct 13 15:32:29 standalone.localdomain neutron_dhcp_agent[496591]: + exec /usr/bin/neutron-dhcp-agent
Oct 13 15:32:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:30.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:32:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42482 DF PROTO=TCP SPT=37516 DPT=9102 SEQ=592663294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC482F60000000001030307) 
Oct 13 15:32:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:30 standalone.localdomain systemd[1]: Started neutron_dhcp_agent container.
Oct 13 15:32:30 standalone.localdomain podman[496599]: 2025-10-13 15:32:30.910312216 +0000 UTC m=+1.107926100 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:32:30 standalone.localdomain podman[496599]: 2025-10-13 15:32:30.921026276 +0000 UTC m=+1.118640210 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:32:31 standalone.localdomain ceph-mon[29756]: pgmap v3538: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:32:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:31.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:32:31 standalone.localdomain neutron_dhcp_agent[496591]: 2025-10-13 15:32:31.151 496595 INFO neutron.common.config [-] Logging enabled!
Oct 13 15:32:31 standalone.localdomain neutron_dhcp_agent[496591]: 2025-10-13 15:32:31.152 496595 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43
Oct 13 15:32:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3539: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:32:31 standalone.localdomain python3.9[496733]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:32:31 standalone.localdomain neutron_dhcp_agent[496591]: 2025-10-13 15:32:31.584 496595 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Oct 13 15:32:31 standalone.localdomain systemd[1]: Stopping neutron_dhcp_agent container...
Oct 13 15:32:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-46476cb54c317ede576986c939135db930b5a6eeb4db9b988aa8d7ddee484bf8-merged.mount: Deactivated successfully.
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.066032) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369552066069, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 812, "num_deletes": 256, "total_data_size": 606930, "memory_usage": 623152, "flush_reason": "Manual Compaction"}
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Oct 13 15:32:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b84240793023d0060bc44a6d5c6a81ebdf8f4232b50de9763d8b1b4c56d59c81-merged.mount: Deactivated successfully.
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369552074066, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 596087, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82972, "largest_seqno": 83783, "table_properties": {"data_size": 592338, "index_size": 1541, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8393, "raw_average_key_size": 18, "raw_value_size": 584704, "raw_average_value_size": 1308, "num_data_blocks": 70, "num_entries": 447, "num_filter_entries": 447, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760369492, "oldest_key_time": 1760369492, "file_creation_time": 1760369552, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 8122 microseconds, and 1795 cpu microseconds.
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.074144) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 596087 bytes OK
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.074170) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.075994) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.076007) EVENT_LOG_v1 {"time_micros": 1760369552076003, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.076026) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 602790, prev total WAL file size 603279, number of live WAL files 2.
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.076866) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303133' seq:72057594037927935, type:22 .. '6C6F676D0033323635' seq:0, type:0; will stop at (end)
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(582KB)], [188(6358KB)]
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369552076929, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 7106896, "oldest_snapshot_seqno": -1}
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 6852 keys, 7010827 bytes, temperature: kUnknown
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369552115520, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 7010827, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6969106, "index_size": 23509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 180787, "raw_average_key_size": 26, "raw_value_size": 6848278, "raw_average_value_size": 999, "num_data_blocks": 931, "num_entries": 6852, "num_filter_entries": 6852, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760369552, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.116211) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 7010827 bytes
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.117593) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.1 rd, 179.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 6.2 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(23.7) write-amplify(11.8) OK, records in: 7379, records dropped: 527 output_compression: NoCompression
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.117630) EVENT_LOG_v1 {"time_micros": 1760369552117615, "job": 118, "event": "compaction_finished", "compaction_time_micros": 39035, "compaction_time_cpu_micros": 25548, "output_level": 6, "num_output_files": 1, "total_output_size": 7010827, "num_input_records": 7379, "num_output_records": 6852, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369552118243, "job": 118, "event": "table_file_deletion", "file_number": 190}
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369552119460, "job": 118, "event": "table_file_deletion", "file_number": 188}
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.076397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.119541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.119547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.119549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.119551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:32:32 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:32:32.119552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:32:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b84240793023d0060bc44a6d5c6a81ebdf8f4232b50de9763d8b1b4c56d59c81-merged.mount: Deactivated successfully.
Oct 13 15:32:32 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:32:32 standalone.localdomain neutron_dhcp_agent[496591]: 2025-10-13 15:32:32.179 496595 INFO neutron.agent.dhcp.agent [None req-53c9fe4e-4841-42b8-ab0e-61a0d8d96de5 - - - - - -] Agent has just been revived. Scheduling full sync
Oct 13 15:32:32 standalone.localdomain neutron_dhcp_agent[496591]: 2025-10-13 15:32:32.184 496595 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Oct 13 15:32:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:32.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:32 standalone.localdomain systemd[1]: libpod-5a98afb93b554b32ec4df4b3396b4fa6dcd5663fb30b41fc4bbc0e8605c2ba00.scope: Deactivated successfully.
Oct 13 15:32:32 standalone.localdomain systemd[1]: libpod-5a98afb93b554b32ec4df4b3396b4fa6dcd5663fb30b41fc4bbc0e8605c2ba00.scope: Consumed 2.224s CPU time.
Oct 13 15:32:32 standalone.localdomain podman[496738]: 2025-10-13 15:32:32.574025031 +0000 UTC m=+0.948406045 container died 5a98afb93b554b32ec4df4b3396b4fa6dcd5663fb30b41fc4bbc0e8605c2ba00 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'abe420c4812f0b325cc12c1ca24404db8186fb14d6fc3d382b127dc9653132d1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=neutron_dhcp_agent)
Oct 13 15:32:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:32:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a98afb93b554b32ec4df4b3396b4fa6dcd5663fb30b41fc4bbc0e8605c2ba00-userdata-shm.mount: Deactivated successfully.
Oct 13 15:32:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e794d9016fa7b11bc2e0ec272b767b863143d54dc9add54763809bf9da301e7e-merged.mount: Deactivated successfully.
Oct 13 15:32:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3ea6486050b1357ab82c68b9483cc00c71ff83bf406dd5e32ebd009c718a069e-merged.mount: Deactivated successfully.
Oct 13 15:32:33 standalone.localdomain ceph-mon[29756]: pgmap v3539: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:32:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3540: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:32:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:33.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.122 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.123 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.123 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.123 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.123 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:32:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0872b2fd8f18340bdd8c87aa4e1f8890b937e1bd1d9117661eb5e01a1158d600-merged.mount: Deactivated successfully.
Oct 13 15:32:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7861ba3a2d0fa6c88e81d0b962398561746b4f0944affb5e220c19f62f3ba4ad-merged.mount: Deactivated successfully.
Oct 13 15:32:34 standalone.localdomain sudo[496795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:32:34 standalone.localdomain sudo[496795]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:32:34 standalone.localdomain sudo[496795]: pam_unix(sudo:session): session closed for user root
Oct 13 15:32:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7861ba3a2d0fa6c88e81d0b962398561746b4f0944affb5e220c19f62f3ba4ad-merged.mount: Deactivated successfully.
Oct 13 15:32:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a04fc369ff501ac98757c118a4920bcf5c49fec6ddac9b1bcc6ada4fbd4c0ffb-merged.mount: Deactivated successfully.
Oct 13 15:32:34 standalone.localdomain sudo[496813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:32:34 standalone.localdomain sudo[496813]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:32:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:32:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/541211994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:32:34 standalone.localdomain podman[496738]: 2025-10-13 15:32:34.597260894 +0000 UTC m=+2.971641938 container cleanup 5a98afb93b554b32ec4df4b3396b4fa6dcd5663fb30b41fc4bbc0e8605c2ba00 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, container_name=neutron_dhcp_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'abe420c4812f0b325cc12c1ca24404db8186fb14d6fc3d382b127dc9653132d1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:32:34 standalone.localdomain podman[496738]: neutron_dhcp_agent
Oct 13 15:32:34 standalone.localdomain podman[496752]: 2025-10-13 15:32:34.614632189 +0000 UTC m=+2.031851749 container cleanup 5a98afb93b554b32ec4df4b3396b4fa6dcd5663fb30b41fc4bbc0e8605c2ba00 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'abe420c4812f0b325cc12c1ca24404db8186fb14d6fc3d382b127dc9653132d1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, tcib_managed=true)
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.616 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.709 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.710 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.710 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.714 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.715 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.862 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.863 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9405MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.864 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:32:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:34.864 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:32:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:35.018 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:32:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:35.019 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:32:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:35.019 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:32:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:35.019 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:32:35 standalone.localdomain ceph-mon[29756]: pgmap v3540: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:32:35 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/541211994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:32:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e794d9016fa7b11bc2e0ec272b767b863143d54dc9add54763809bf9da301e7e-merged.mount: Deactivated successfully.
Oct 13 15:32:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3541: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:32:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:35.416 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:32:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:32:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/232822400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:32:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:35.824 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:32:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:35.832 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:32:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:35.892 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:32:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:35.895 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:32:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:35.896 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:32:36 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/232822400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:32:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:32:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:32:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:36.898 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:32:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:36.899 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:32:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:36.899 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:32:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:36.900 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:32:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:32:37 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:32:37 standalone.localdomain ceph-mon[29756]: pgmap v3541: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:32:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:37.201 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:32:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:37.202 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:32:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:37.202 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:32:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:37.202 2 DEBUG nova.objects.instance [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:32:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:37.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3542: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:37.586 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:32:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:37.605 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:32:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:37.605 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:32:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:37.606 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:32:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:37.606 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:32:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:37.607 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:32:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:38.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:32:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:32:39 standalone.localdomain ceph-mon[29756]: pgmap v3542: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:39 standalone.localdomain podman[496887]: error opening file `/run/crun/5a98afb93b554b32ec4df4b3396b4fa6dcd5663fb30b41fc4bbc0e8605c2ba00/status`: No such file or directory
Oct 13 15:32:39 standalone.localdomain podman[496873]: 2025-10-13 15:32:39.1519064 +0000 UTC m=+2.015551697 container cleanup 5a98afb93b554b32ec4df4b3396b4fa6dcd5663fb30b41fc4bbc0e8605c2ba00 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'abe420c4812f0b325cc12c1ca24404db8186fb14d6fc3d382b127dc9653132d1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=neutron_dhcp_agent, org.label-schema.license=GPLv2)
Oct 13 15:32:39 standalone.localdomain podman[496873]: neutron_dhcp_agent
Oct 13 15:32:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3543: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:39 standalone.localdomain sudo[496813]: pam_unix(sudo:session): session closed for user root
Oct 13 15:32:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:32:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:32:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:32:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:32:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:32:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:32:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:32:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:32:39 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 6c604cfb-fb28-4d66-ba45-96ee540da148 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:32:39 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 6c604cfb-fb28-4d66-ba45-96ee540da148 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:32:39 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 6c604cfb-fb28-4d66-ba45-96ee540da148 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:32:39 standalone.localdomain sudo[496906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:32:39 standalone.localdomain sudo[496906]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:32:39 standalone.localdomain sudo[496906]: pam_unix(sudo:session): session closed for user root
Oct 13 15:32:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 13 15:32:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:32:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:32:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:32:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:32:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:32:40 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:32:40 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:32:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:32:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:32:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:40 standalone.localdomain systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully.
Oct 13 15:32:40 standalone.localdomain systemd[1]: Stopped neutron_dhcp_agent container.
Oct 13 15:32:40 standalone.localdomain systemd[1]: Starting neutron_dhcp_agent container...
Oct 13 15:32:40 standalone.localdomain podman[496924]: 2025-10-13 15:32:40.400661979 +0000 UTC m=+0.567571730 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 13 15:32:40 standalone.localdomain podman[496924]: 2025-10-13 15:32:40.406929022 +0000 UTC m=+0.573838803 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:32:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:41 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:32:41 standalone.localdomain ceph-mon[29756]: pgmap v3543: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:41 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:32:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3544: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:41 standalone.localdomain podman[496760]: 2025-10-13 15:32:41.738541104 +0000 UTC m=+9.141257667 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:32:41 standalone.localdomain podman[496760]: 2025-10-13 15:32:41.82479628 +0000 UTC m=+9.227512833 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 15:32:41 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:32:41 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04fc369ff501ac98757c118a4920bcf5c49fec6ddac9b1bcc6ada4fbd4c0ffb/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 13 15:32:41 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04fc369ff501ac98757c118a4920bcf5c49fec6ddac9b1bcc6ada4fbd4c0ffb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:32:41 standalone.localdomain podman[496936]: 2025-10-13 15:32:41.866783536 +0000 UTC m=+1.482495842 container init 5a98afb93b554b32ec4df4b3396b4fa6dcd5663fb30b41fc4bbc0e8605c2ba00 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.vendor=CentOS, config_id=neutron_dhcp, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'abe420c4812f0b325cc12c1ca24404db8186fb14d6fc3d382b127dc9653132d1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.license=GPLv2)
Oct 13 15:32:41 standalone.localdomain systemd[1]: tmp-crun.Vmy4hC.mount: Deactivated successfully.
Oct 13 15:32:41 standalone.localdomain podman[496936]: 2025-10-13 15:32:41.883895753 +0000 UTC m=+1.499608039 container start 5a98afb93b554b32ec4df4b3396b4fa6dcd5663fb30b41fc4bbc0e8605c2ba00 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'abe420c4812f0b325cc12c1ca24404db8186fb14d6fc3d382b127dc9653132d1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=neutron_dhcp, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:32:41 standalone.localdomain podman[496936]: neutron_dhcp_agent
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: + sudo -E kolla_set_configs
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Validating config file
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Copying service configuration files
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Writing out command to execute
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/lock
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dnsmasq_wrapper
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_haproxy_wrapper
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/333254bb87316156e96cebc0941f89c4b6bf7d0c72b62f2bd2e3f232ec27cb23
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/03f66d1294ee0eb4465c67b7f02f44b76b4fb981cae67c336f96704556f02da9
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/7fd88ee77a303e69e2d79df25f19e760c643c3e7da2c07fbeb6a01569dbedc50
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/interface
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/leases
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/host
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/addn_hosts
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/opts
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/pid
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/interface
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/leases
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/host
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/addn_hosts
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/opts
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/pid
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/interface
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/leases
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/pid
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/lock/neutron-iptables-qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/lock/neutron-iptables-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/lock/neutron-iptables-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/0c455abd-28d4-47e7-a254-e50de0526def.pid.haproxy
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/0c455abd-28d4-47e7-a254-e50de0526def.conf
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: ++ cat /run_command
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: + CMD=/usr/bin/neutron-dhcp-agent
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: + ARGS=
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: + sudo kolla_copy_cacerts
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: + [[ ! -n '' ]]
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: + . kolla_extend_start
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: + umask 0022
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: + exec /usr/bin/neutron-dhcp-agent
Oct 13 15:32:41 standalone.localdomain neutron_dhcp_agent[496972]: Running command: '/usr/bin/neutron-dhcp-agent'
Oct 13 15:32:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:32:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:42.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:32:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:32:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:32:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:32:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:32:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:32:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:32:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:32:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:32:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:32:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:32:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:32:43 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:43.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:32:43 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:43.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:32:43 standalone.localdomain ceph-mon[29756]: pgmap v3544: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:43.276 496978 INFO neutron.common.config [-] Logging enabled!
Oct 13 15:32:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:43.276 496978 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43
Oct 13 15:32:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3545: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:32:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:32:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:32:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:43.698 496978 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Oct 13 15:32:43 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:43.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b66d622709c541af08dec90fa6b7756643f5dba8027f8aba80b234ad514ed706-merged.mount: Deactivated successfully.
Oct 13 15:32:44 standalone.localdomain systemd[1]: Started neutron_dhcp_agent container.
Oct 13 15:32:44 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:32:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ed161745c15cbd99d7c949d604dd3eb9ddf0df704d54afd7fe76909dd70cd6ab-merged.mount: Deactivated successfully.
Oct 13 15:32:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:44.381 496978 INFO neutron.agent.dhcp.agent [None req-c91addea-21b0-4582-a274-9e2c3f9b9fdf - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:32:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:44.382 496978 INFO neutron.agent.dhcp.agent [-] Starting network a12f1166-9c4d-4d97-bc78-657c05e7af68 dhcp configuration
Oct 13 15:32:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:44.430 496978 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpm2edmw4h/privsep.sock']
Oct 13 15:32:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:44.430 496978 INFO neutron.agent.dhcp.agent [-] Starting network 0c455abd-28d4-47e7-a254-e50de0526def dhcp configuration
Oct 13 15:32:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:44.433 496978 INFO neutron.agent.dhcp.agent [-] Starting network 777e62f3-0875-437c-ae6a-3924fea34ee8 dhcp configuration
Oct 13 15:32:44 standalone.localdomain sshd[486208]: pam_unix(sshd:session): session closed for user root
Oct 13 15:32:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:32:44 standalone.localdomain systemd[1]: session-297.scope: Deactivated successfully.
Oct 13 15:32:44 standalone.localdomain systemd[1]: session-297.scope: Consumed 32.889s CPU time.
Oct 13 15:32:44 standalone.localdomain systemd-logind[45629]: Session 297 logged out. Waiting for processes to exit.
Oct 13 15:32:44 standalone.localdomain systemd-logind[45629]: Removed session 297.
Oct 13 15:32:44 standalone.localdomain systemd[1]: tmp-crun.cV1WoR.mount: Deactivated successfully.
Oct 13 15:32:44 standalone.localdomain podman[497011]: 2025-10-13 15:32:44.815426495 +0000 UTC m=+0.076597092 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 15:32:44 standalone.localdomain podman[497011]: 2025-10-13 15:32:44.823797113 +0000 UTC m=+0.084967670 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 13 15:32:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:45.084 496978 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 13 15:32:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:44.959 497031 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 13 15:32:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:44.962 497031 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 13 15:32:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:44.965 497031 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 13 15:32:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:44.965 497031 INFO oslo.privsep.daemon [-] privsep daemon running as pid 497031
Oct 13 15:32:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:45.088 496978 WARNING oslo_privsep.priv_context [-] privsep daemon already running
Oct 13 15:32:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:45.089 496978 WARNING oslo_privsep.priv_context [-] privsep daemon already running
Oct 13 15:32:45 standalone.localdomain ceph-mon[29756]: pgmap v3545: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:32:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3546: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully.
Oct 13 15:32:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:32:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:32:45 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:32:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:32:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:46 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:46 standalone.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:46 standalone.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:32:47 standalone.localdomain ceph-mon[29756]: pgmap v3546: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:32:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:47.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3547: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e0d5b365d1d4f2cbdec218bcecccb17a52487dea7c1e0a1ce7e4461f7c3a058-merged.mount: Deactivated successfully.
Oct 13 15:32:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:47 standalone.localdomain podman[467099]: time="2025-10-13T15:32:47Z" level=error msg="Getting root fs size for \"fac4a7bab9f10135839cb5e568f1fad7d37cd7b905abbbe47f89a6220d8e2616\": getting diffsize of layer \"19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8\" and its parent \"e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df\": unmounting layer 19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8: replacing mount point \"/var/lib/containers/storage/overlay/19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8/merged\": device or resource busy"
Oct 13 15:32:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:47 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:47 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:47 standalone.localdomain podman[497084]: 2025-10-13 15:32:47.737098852 +0000 UTC m=+0.502224607 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:32:47 standalone.localdomain podman[497084]: 2025-10-13 15:32:47.748698989 +0000 UTC m=+0.513824734 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:32:48 standalone.localdomain ceph-mon[29756]: pgmap v3547: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:32:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:32:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:32:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:48 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:48.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:32:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ec428e69571aa789160db41f2d270b1a0ec90be244ec1a5a2bef3ac87e11321b-merged.mount: Deactivated successfully.
Oct 13 15:32:49 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:32:49 standalone.localdomain dnsmasq[179222]: exiting on receipt of SIGTERM
Oct 13 15:32:49 standalone.localdomain podman[497099]: 2025-10-13 15:32:49.255052505 +0000 UTC m=+1.536445524 container kill 8344d37429b41c94b4bf4785944166a74975bf8ee81075e6b155aa6b933000ee (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, tcib_managed=true, build-date=2025-07-21T16:28:54, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-dhcp-agent, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, vendor=Red Hat, Inc., release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git)
Oct 13 15:32:49 standalone.localdomain systemd[1]: libpod-8344d37429b41c94b4bf4785944166a74975bf8ee81075e6b155aa6b933000ee.scope: Deactivated successfully.
Oct 13 15:32:49 standalone.localdomain podman[497100]: 2025-10-13 15:32:49.303982723 +0000 UTC m=+1.582358399 container kill b2d77df29e34af655c7c0dafd5c4fb72b58b3b64786c721b0f5671d07c75c50c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, maintainer=OpenStack TripleO Team, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, batch=17.1_20250721.1, com.redhat.component=openstack-neutron-dhcp-agent-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, release=1, architecture=x86_64, version=17.1.9, build-date=2025-07-21T16:28:54)
Oct 13 15:32:49 standalone.localdomain dnsmasq[153655]: exiting on receipt of SIGTERM
Oct 13 15:32:49 standalone.localdomain systemd[1]: libpod-b2d77df29e34af655c7c0dafd5c4fb72b58b3b64786c721b0f5671d07c75c50c.scope: Deactivated successfully.
Oct 13 15:32:49 standalone.localdomain podman[497133]: 2025-10-13 15:32:49.338678932 +0000 UTC m=+1.100867333 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, config_id=tripleo_step4, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, release=1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=swift_object_server, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=)
Oct 13 15:32:49 standalone.localdomain podman[497135]: 2025-10-13 15:32:49.294503222 +0000 UTC m=+1.053537295 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, name=rhosp17/openstack-swift-account, maintainer=OpenStack TripleO Team, version=17.1.9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, container_name=swift_account_server, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 15:32:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3548: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:49 standalone.localdomain systemd[1]: tmp-crun.3iqdZ7.mount: Deactivated successfully.
Oct 13 15:32:49 standalone.localdomain dnsmasq[178073]: exiting on receipt of SIGTERM
Oct 13 15:32:49 standalone.localdomain podman[497101]: 2025-10-13 15:32:49.405622175 +0000 UTC m=+1.680268427 container kill c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-dhcp-agent-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, name=rhosp17/openstack-neutron-dhcp-agent, maintainer=OpenStack TripleO Team, release=1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:28:54, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9)
Oct 13 15:32:49 standalone.localdomain systemd[1]: libpod-c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb.scope: Deactivated successfully.
Oct 13 15:32:49 standalone.localdomain podman[497134]: 2025-10-13 15:32:49.410536127 +0000 UTC m=+1.169590351 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.33.12, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, version=17.1.9, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, name=rhosp17/openstack-swift-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 15:32:49 standalone.localdomain podman[497135]: 2025-10-13 15:32:49.49988352 +0000 UTC m=+1.258917593 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, container_name=swift_account_server, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, release=1, com.redhat.component=openstack-swift-account-container, name=rhosp17/openstack-swift-account)
Oct 13 15:32:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:32:49 standalone.localdomain podman[497133]: 2025-10-13 15:32:49.54175449 +0000 UTC m=+1.303942851 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, io.openshift.expose-services=, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, tcib_managed=true)
Oct 13 15:32:49 standalone.localdomain podman[497134]: 2025-10-13 15:32:49.588852282 +0000 UTC m=+1.347906506 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-swift-container-container, version=17.1.9, vcs-type=git, batch=17.1_20250721.1, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, name=rhosp17/openstack-swift-container, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, container_name=swift_container_server, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team)
Oct 13 15:32:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5c6de20ee9f73151254b053a0024fcbdd9b55691492d339c494637f80bb81826-merged.mount: Deactivated successfully.
Oct 13 15:32:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-696b99ecafeb62775d4be1c37484863243dbe05bbb108c98eb21626c3e1b01e2-merged.mount: Deactivated successfully.
Oct 13 15:32:50 standalone.localdomain ceph-mon[29756]: pgmap v3548: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3549: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:32:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:32:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:32:52 standalone.localdomain podman[497195]: 2025-10-13 15:32:52.052343911 +0000 UTC m=+2.726734012 container died b2d77df29e34af655c7c0dafd5c4fb72b58b3b64786c721b0f5671d07c75c50c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:28:54, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.openshift.expose-services=, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, release=1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, name=rhosp17/openstack-neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, tcib_managed=true)
Oct 13 15:32:52 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:32:52 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:32:52 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:32:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:32:52 standalone.localdomain podman[497173]: 2025-10-13 15:32:52.082533681 +0000 UTC m=+2.807517761 container died 8344d37429b41c94b4bf4785944166a74975bf8ee81075e6b155aa6b933000ee (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, com.redhat.component=openstack-neutron-dhcp-agent-container, batch=17.1_20250721.1, build-date=2025-07-21T16:28:54, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 15:32:52 standalone.localdomain podman[497223]: 2025-10-13 15:32:52.117411276 +0000 UTC m=+2.699174283 container died c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, distribution-scope=public, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:28:54, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, io.openshift.expose-services=, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, version=17.1.9, architecture=x86_64, com.redhat.component=openstack-neutron-dhcp-agent-container, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent)
Oct 13 15:32:52 standalone.localdomain podman[497248]: 2025-10-13 15:32:52.156432618 +0000 UTC m=+2.638149673 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:32:52 standalone.localdomain podman[497248]: 2025-10-13 15:32:52.164782565 +0000 UTC m=+2.646499600 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:32:52 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:52.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8344d37429b41c94b4bf4785944166a74975bf8ee81075e6b155aa6b933000ee-userdata-shm.mount: Deactivated successfully.
Oct 13 15:32:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb-userdata-shm.mount: Deactivated successfully.
Oct 13 15:32:52 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2d77df29e34af655c7c0dafd5c4fb72b58b3b64786c721b0f5671d07c75c50c-userdata-shm.mount: Deactivated successfully.
Oct 13 15:32:53 standalone.localdomain ceph-mon[29756]: pgmap v3549: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:32:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:32:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:32:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:32:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:32:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:32:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3550: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-46476cb54c317ede576986c939135db930b5a6eeb4db9b988aa8d7ddee484bf8-merged.mount: Deactivated successfully.
Oct 13 15:32:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:32:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-46476cb54c317ede576986c939135db930b5a6eeb4db9b988aa8d7ddee484bf8-merged.mount: Deactivated successfully.
Oct 13 15:32:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12646 DF PROTO=TCP SPT=34660 DPT=9102 SEQ=3932664694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC4DC500000000001030307) 
Oct 13 15:32:53 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:53.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:32:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:32:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4de3b1bdb5a8693c0cf197a01129c6d27b663ac340b2bc6737270378a06d0e1e-merged.mount: Deactivated successfully.
Oct 13 15:32:54 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:32:54 standalone.localdomain podman[497313]: 2025-10-13 15:32:54.282672105 +0000 UTC m=+0.883775133 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:32:54 standalone.localdomain podman[497313]: 2025-10-13 15:32:54.293803668 +0000 UTC m=+0.894906706 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:32:54 standalone.localdomain podman[497313]: unhealthy
Oct 13 15:32:54 standalone.localdomain podman[497173]: 2025-10-13 15:32:54.367281702 +0000 UTC m=+5.092265762 container cleanup 8344d37429b41c94b4bf4785944166a74975bf8ee81075e6b155aa6b933000ee (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, build-date=2025-07-21T16:28:54, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, version=17.1.9, com.redhat.component=openstack-neutron-dhcp-agent-container, architecture=x86_64, name=rhosp17/openstack-neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, distribution-scope=public)
Oct 13 15:32:54 standalone.localdomain podman[497195]: 2025-10-13 15:32:54.490684895 +0000 UTC m=+5.165075006 container cleanup b2d77df29e34af655c7c0dafd5c4fb72b58b3b64786c721b0f5671d07c75c50c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8, com.redhat.component=openstack-neutron-dhcp-agent-container, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, tcib_managed=true, name=rhosp17/openstack-neutron-dhcp-agent, build-date=2025-07-21T16:28:54, io.buildah.version=1.33.12, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Oct 13 15:32:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12647 DF PROTO=TCP SPT=34660 DPT=9102 SEQ=3932664694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC4E0760000000001030307) 
Oct 13 15:32:54 standalone.localdomain podman[497223]: 2025-10-13 15:32:54.612282052 +0000 UTC m=+5.194045049 container cleanup c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, architecture=x86_64, com.redhat.component=openstack-neutron-dhcp-agent-container, name=rhosp17/openstack-neutron-dhcp-agent, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, version=17.1.9, build-date=2025-07-21T16:28:54, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=)
Oct 13 15:32:54 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:32:54 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:32:54 standalone.localdomain systemd[1]: libpod-conmon-8344d37429b41c94b4bf4785944166a74975bf8ee81075e6b155aa6b933000ee.scope: Deactivated successfully.
Oct 13 15:32:54 standalone.localdomain podman[497266]: 2025-10-13 15:32:54.633009791 +0000 UTC m=+2.581121635 container remove 8344d37429b41c94b4bf4785944166a74975bf8ee81075e6b155aa6b933000ee (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, io.buildah.version=1.33.12, name=rhosp17/openstack-neutron-dhcp-agent, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, tcib_managed=true, com.redhat.component=openstack-neutron-dhcp-agent-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, build-date=2025-07-21T16:28:54, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vendor=Red Hat, Inc.)
Oct 13 15:32:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7d3d4bbb5323787d2179afa6229aa20b6d84f11955342adb310c1b60e2d4dfdb-merged.mount: Deactivated successfully.
Oct 13 15:32:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-cd5889d2ca5f80aba463de7e4a66b72ebab8dece7bfb670659690bed14931210-merged.mount: Deactivated successfully.
Oct 13 15:32:54 standalone.localdomain systemd[1]: Stopping User Manager for UID 0...
Oct 13 15:32:54 standalone.localdomain systemd[336099]: Activating special unit Exit the Session...
Oct 13 15:32:54 standalone.localdomain systemd[336099]: Removed slice User Background Tasks Slice.
Oct 13 15:32:54 standalone.localdomain systemd[336099]: Stopped target Main User Target.
Oct 13 15:32:54 standalone.localdomain systemd[336099]: Stopped target Basic System.
Oct 13 15:32:54 standalone.localdomain systemd[336099]: Stopped target Paths.
Oct 13 15:32:54 standalone.localdomain systemd[336099]: Stopped target Sockets.
Oct 13 15:32:54 standalone.localdomain systemd[336099]: Stopped target Timers.
Oct 13 15:32:54 standalone.localdomain systemd[336099]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 15:32:54 standalone.localdomain systemd[336099]: Closed D-Bus User Message Bus Socket.
Oct 13 15:32:54 standalone.localdomain systemd[336099]: Stopped Create User's Volatile Files and Directories.
Oct 13 15:32:54 standalone.localdomain systemd[336099]: Removed slice User Application Slice.
Oct 13 15:32:54 standalone.localdomain systemd[336099]: Reached target Shutdown.
Oct 13 15:32:54 standalone.localdomain systemd[336099]: Finished Exit the Session.
Oct 13 15:32:54 standalone.localdomain systemd[336099]: Reached target Exit the Session.
Oct 13 15:32:54 standalone.localdomain systemd[1]: user@0.service: Deactivated successfully.
Oct 13 15:32:54 standalone.localdomain systemd[1]: Stopped User Manager for UID 0.
Oct 13 15:32:54 standalone.localdomain systemd[1]: user@0.service: Consumed 20.081s CPU time, read 20.0K from disk, written 0B to disk.
Oct 13 15:32:54 standalone.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 13 15:32:54 standalone.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 13 15:32:54 standalone.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 13 15:32:54 standalone.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 13 15:32:54 standalone.localdomain systemd[1]: Removed slice User Slice of UID 0.
Oct 13 15:32:54 standalone.localdomain systemd[1]: user-0.slice: Consumed 15min 46.863s CPU time.
Oct 13 15:32:55 standalone.localdomain ceph-mon[29756]: pgmap v3550: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3551: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:55 standalone.localdomain systemd[1]: libpod-conmon-b2d77df29e34af655c7c0dafd5c4fb72b58b3b64786c721b0f5671d07c75c50c.scope: Deactivated successfully.
Oct 13 15:32:55 standalone.localdomain podman[497267]: 2025-10-13 15:32:55.848654769 +0000 UTC m=+3.796893297 container remove b2d77df29e34af655c7c0dafd5c4fb72b58b3b64786c721b0f5671d07c75c50c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8, version=17.1.9, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-neutron-dhcp-agent-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, vcs-type=git, batch=17.1_20250721.1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, release=1, build-date=2025-07-21T16:28:54, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-dhcp-agent, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1)
Oct 13 15:32:55 standalone.localdomain systemd[1]: libpod-conmon-c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb.scope: Deactivated successfully.
Oct 13 15:32:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12648 DF PROTO=TCP SPT=34660 DPT=9102 SEQ=3932664694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC4E8760000000001030307) 
Oct 13 15:32:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:32:56 standalone.localdomain podman[497277]: 2025-10-13 15:32:56.989834434 +0000 UTC m=+4.929533259 container remove c84fc00e92badaf0811eae1ec275f8ed4445d9aab063c61596c2e5c58becd4eb (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-dhcp-agent:17.1, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T16:28:54, release=1, vcs-ref=449ddab920996105186c9c912d7920ff2e2db1a9, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-neutron-dhcp-agent/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-neutron-dhcp-agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-dhcp-agent, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=openstack-neutron-dhcp-agent-container)
Oct 13 15:32:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:32:57 standalone.localdomain ceph-mon[29756]: pgmap v3551: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:57 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:57.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3552: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:57.505 496978 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpn0faivra/privsep.sock']
Oct 13 15:32:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:57 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:32:57 standalone.localdomain podman[497363]: 2025-10-13 15:32:57.785317396 +0000 UTC m=+0.057577696 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm)
Oct 13 15:32:57 standalone.localdomain podman[497363]: 2025-10-13 15:32:57.79681833 +0000 UTC m=+0.069078640 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:32:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:58.161 496978 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 13 15:32:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:58.052 497381 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 13 15:32:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:58.057 497381 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 13 15:32:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:58.061 497381 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 13 15:32:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:58.061 497381 INFO oslo.privsep.daemon [-] privsep daemon running as pid 497381
Oct 13 15:32:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:58.164 496978 WARNING oslo_privsep.priv_context [-] privsep daemon already running
Oct 13 15:32:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:32:58.165 496978 WARNING oslo_privsep.priv_context [-] privsep daemon already running
Oct 13 15:32:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:32:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:32:58 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:32:58 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:58 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:32:58 standalone.localdomain nova_compute[456855]: 2025-10-13 15:32:58.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:32:59 standalone.localdomain ceph-mon[29756]: pgmap v3552: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3553: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:32:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-46476cb54c317ede576986c939135db930b5a6eeb4db9b988aa8d7ddee484bf8-merged.mount: Deactivated successfully.
Oct 13 15:32:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b84240793023d0060bc44a6d5c6a81ebdf8f4232b50de9763d8b1b4c56d59c81-merged.mount: Deactivated successfully.
Oct 13 15:33:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e794d9016fa7b11bc2e0ec272b767b863143d54dc9add54763809bf9da301e7e-merged.mount: Deactivated successfully.
Oct 13 15:33:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3ea6486050b1357ab82c68b9483cc00c71ff83bf406dd5e32ebd009c718a069e-merged.mount: Deactivated successfully.
Oct 13 15:33:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12649 DF PROTO=TCP SPT=34660 DPT=9102 SEQ=3932664694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC4F8370000000001030307) 
Oct 13 15:33:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:33:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e794d9016fa7b11bc2e0ec272b767b863143d54dc9add54763809bf9da301e7e-merged.mount: Deactivated successfully.
Oct 13 15:33:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e794d9016fa7b11bc2e0ec272b767b863143d54dc9add54763809bf9da301e7e-merged.mount: Deactivated successfully.
Oct 13 15:33:01 standalone.localdomain ceph-mon[29756]: pgmap v3553: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3554: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:33:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5d47d2bb79e4159e5b63c2e82543365dfb186987c7afdef56d8598cc2a6c5bdf-merged.mount: Deactivated successfully.
Oct 13 15:33:01 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c-merged.mount: Deactivated successfully.
Oct 13 15:33:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:33:02 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:02.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:33:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:33:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:33:02 standalone.localdomain podman[497549]: 2025-10-13 15:33:02.970219651 +0000 UTC m=+0.269864266 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:33:02 standalone.localdomain podman[497549]: 2025-10-13 15:33:02.983956104 +0000 UTC m=+0.283600739 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:33:03 standalone.localdomain ceph-mon[29756]: pgmap v3554: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3555: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:03 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:03.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:33:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424-merged.mount: Deactivated successfully.
Oct 13 15:33:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:04 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:33:04 standalone.localdomain podman[497528]: 2025-10-13 15:33:02.912305517 +0000 UTC m=+1.553739088 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:33:04 standalone.localdomain podman[497527]: 2025-10-13 15:33:02.914957379 +0000 UTC m=+1.553125379 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:33:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3ea6486050b1357ab82c68b9483cc00c71ff83bf406dd5e32ebd009c718a069e-merged.mount: Deactivated successfully.
Oct 13 15:33:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:33:04 standalone.localdomain podman[497566]: 2025-10-13 15:33:04.044956538 +0000 UTC m=+1.119134446 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:33:05 standalone.localdomain ceph-mon[29756]: pgmap v3555: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:33:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3556: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:33:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:33:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:33:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:33:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d33a74ddba43a8c0deb95d8e678513479d7605354398a4facd4f04bb49f5adb5-merged.mount: Deactivated successfully.
Oct 13 15:33:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:33:06.950 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:33:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:33:06.951 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:33:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:33:06.952 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:33:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:33:07 standalone.localdomain ceph-mon[29756]: pgmap v3556: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:07 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:07.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3557: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:33:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-28b1387bff6df38663e5c9d5c64bcd337df0d8c5cc96c4c247d46c8724c4778b-merged.mount: Deactivated successfully.
Oct 13 15:33:08 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:08.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:33:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:33:09 standalone.localdomain ceph-mon[29756]: pgmap v3557: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:09 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:33:09 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:33:09 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:33:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3558: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e1c1920b4961eb109ce76883e2a632166f4de131f08418737a718e8b679f7eb-merged.mount: Deactivated successfully.
Oct 13 15:33:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:33:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:33:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:33:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:33:11 standalone.localdomain ceph-mon[29756]: pgmap v3558: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:33:11 standalone.localdomain podman[497590]: 2025-10-13 15:33:11.28267676 +0000 UTC m=+0.101242662 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true)
Oct 13 15:33:11 standalone.localdomain podman[497590]: 2025-10-13 15:33:11.317076599 +0000 UTC m=+0.135642581 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:33:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3559: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:33:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-289f8b289900e7324d29f6f964e5f506342d2e856712a01beefed5699acdd003-merged.mount: Deactivated successfully.
Oct 13 15:33:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ec428e69571aa789160db41f2d270b1a0ec90be244ec1a5a2bef3ac87e11321b-merged.mount: Deactivated successfully.
Oct 13 15:33:11 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:33:11 standalone.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Oct 13 15:33:11 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:33:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:33:12 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:12.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:33:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:33:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:33:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:33:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:33:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:33:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:33:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:33:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:33:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:33:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:33:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:33:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:33:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:13 standalone.localdomain ceph-mon[29756]: pgmap v3559: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3560: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:13 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:13.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:33:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:33:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:33:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:33:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:33:14 standalone.localdomain podman[497609]: 2025-10-13 15:33:14.453293868 +0000 UTC m=+0.058000718 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Oct 13 15:33:14 standalone.localdomain podman[497609]: 2025-10-13 15:33:14.515257888 +0000 UTC m=+0.119964758 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:33:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:33:15 standalone.localdomain ceph-mon[29756]: pgmap v3560: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3561: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:15 standalone.localdomain account-server[114571]: 172.20.0.100 - - [13/Oct/2025:15:33:15 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx9175f8ed278d40deb399d-0068ed1bbb" "proxy-server 2" 0.0008 "-" 23 -
Oct 13 15:33:15 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx9175f8ed278d40deb399d-0068ed1bbb)
Oct 13 15:33:15 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx9175f8ed278d40deb399d-0068ed1bbb)
Oct 13 15:33:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e1c1920b4961eb109ce76883e2a632166f4de131f08418737a718e8b679f7eb-merged.mount: Deactivated successfully.
Oct 13 15:33:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:33:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-88928f310f1ba382800c4333f5e27b201b6df1b159808455dd99069f0d30925c-merged.mount: Deactivated successfully.
Oct 13 15:33:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-88928f310f1ba382800c4333f5e27b201b6df1b159808455dd99069f0d30925c-merged.mount: Deactivated successfully.
Oct 13 15:33:16 standalone.localdomain podman[497566]: 
Oct 13 15:33:16 standalone.localdomain podman[467099]: time="2025-10-13T15:33:16Z" level=error msg="Unable to write json: \"write unix /run/podman/podman.sock->@: write: broken pipe\""
Oct 13 15:33:16 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:14:40 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 4096 "" "Go-http-client/1.1"
Oct 13 15:33:16 standalone.localdomain podman[497527]: 
Oct 13 15:33:16 standalone.localdomain podman[497566]: 2025-10-13 15:33:16.641960999 +0000 UTC m=+13.716138847 container create 19af37e22916651190c7368a38efbd7c0065db40d9b894dba9ae2006ed72bd23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 13 15:33:16 standalone.localdomain podman[497528]: 
Oct 13 15:33:16 standalone.localdomain podman[497527]: 2025-10-13 15:33:16.647451159 +0000 UTC m=+15.285619079 container create 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:33:16 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:33:16 standalone.localdomain podman[497528]: 2025-10-13 15:33:16.666419202 +0000 UTC m=+15.307852723 container create 75dc77348389bc4f5776c3a11c75058c6fac24f99f4ecf2b008a06d6c87b1d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 13 15:33:16 standalone.localdomain podman[497634]: 2025-10-13 15:33:16.688845483 +0000 UTC m=+0.254312666 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, version=9.6, name=ubi9-minimal, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 13 15:33:16 standalone.localdomain podman[497634]: 2025-10-13 15:33:16.699863313 +0000 UTC m=+0.265330526 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64)
Oct 13 15:33:16 standalone.localdomain systemd[1]: Started libpod-conmon-47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e.scope.
Oct 13 15:33:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:33:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/728afed0dea2a9676dec0cdf56d8f24419b0c031219f402a4450aa664395ca01/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:33:16 standalone.localdomain podman[497527]: 2025-10-13 15:33:16.771942834 +0000 UTC m=+15.410110734 container init 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:33:16 standalone.localdomain podman[497527]: 2025-10-13 15:33:16.778387282 +0000 UTC m=+15.416555182 container start 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 13 15:33:16 standalone.localdomain systemd[1]: Started libpod-conmon-19af37e22916651190c7368a38efbd7c0065db40d9b894dba9ae2006ed72bd23.scope.
Oct 13 15:33:16 standalone.localdomain dnsmasq[497662]: started, version 2.85 cachesize 150
Oct 13 15:33:16 standalone.localdomain dnsmasq[497662]: DNS service limited to local subnets
Oct 13 15:33:16 standalone.localdomain dnsmasq[497662]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:33:16 standalone.localdomain dnsmasq[497662]: warning: no upstream servers configured
Oct 13 15:33:16 standalone.localdomain dnsmasq-dhcp[497662]: DHCP, static leases only on 192.168.122.0, lease time 1d
Oct 13 15:33:16 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:33:16 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:33:16 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:33:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:33:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d60a92c3dd7e4274e1834143cc681a937d1732f294711d9e88dcd3e0ec4499e2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:33:16 standalone.localdomain podman[497566]: 2025-10-13 15:33:16.814260508 +0000 UTC m=+13.888438346 container init 19af37e22916651190c7368a38efbd7c0065db40d9b894dba9ae2006ed72bd23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:33:16 standalone.localdomain dnsmasq[497667]: started, version 2.85 cachesize 150
Oct 13 15:33:16 standalone.localdomain dnsmasq[497667]: DNS service limited to local subnets
Oct 13 15:33:16 standalone.localdomain dnsmasq[497667]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:33:16 standalone.localdomain dnsmasq[497667]: warning: no upstream servers configured
Oct 13 15:33:16 standalone.localdomain dnsmasq-dhcp[497667]: DHCP, static leases only on 192.168.0.0, lease time 1d
Oct 13 15:33:16 standalone.localdomain dnsmasq[497667]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/addn_hosts - 3 addresses
Oct 13 15:33:16 standalone.localdomain dnsmasq-dhcp[497667]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/host
Oct 13 15:33:16 standalone.localdomain dnsmasq-dhcp[497667]: read /var/lib/neutron/dhcp/0c455abd-28d4-47e7-a254-e50de0526def/opts
Oct 13 15:33:16 standalone.localdomain podman[497566]: 2025-10-13 15:33:16.827736684 +0000 UTC m=+13.901914522 container start 19af37e22916651190c7368a38efbd7c0065db40d9b894dba9ae2006ed72bd23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c455abd-28d4-47e7-a254-e50de0526def, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 13 15:33:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:33:17 standalone.localdomain ceph-mon[29756]: pgmap v3561: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:17.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3562: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:33:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:33:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/817742760' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:33:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:33:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/817742760' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:33:18 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:18.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:33:18 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:33:18 standalone.localdomain systemd[1]: Started libpod-conmon-75dc77348389bc4f5776c3a11c75058c6fac24f99f4ecf2b008a06d6c87b1d68.scope.
Oct 13 15:33:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:33:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf128e99a62c5733593b2d8b7278d7231491ce3f97b5399c63b35bbbee258b5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:33:18 standalone.localdomain podman[497528]: 2025-10-13 15:33:18.869151287 +0000 UTC m=+17.510584798 container init 75dc77348389bc4f5776c3a11c75058c6fac24f99f4ecf2b008a06d6c87b1d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:33:18 standalone.localdomain podman[497528]: 2025-10-13 15:33:18.880105285 +0000 UTC m=+17.521538796 container start 75dc77348389bc4f5776c3a11c75058c6fac24f99f4ecf2b008a06d6c87b1d68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-777e62f3-0875-437c-ae6a-3924fea34ee8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:33:18 standalone.localdomain dnsmasq[497673]: started, version 2.85 cachesize 150
Oct 13 15:33:18 standalone.localdomain dnsmasq[497673]: DNS service limited to local subnets
Oct 13 15:33:18 standalone.localdomain dnsmasq[497673]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:33:18 standalone.localdomain dnsmasq[497673]: warning: no upstream servers configured
Oct 13 15:33:18 standalone.localdomain dnsmasq-dhcp[497673]: DHCP, static leases only on 192.168.199.0, lease time 1d
Oct 13 15:33:18 standalone.localdomain dnsmasq[497673]: read /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/addn_hosts - 0 addresses
Oct 13 15:33:18 standalone.localdomain dnsmasq-dhcp[497673]: read /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/host
Oct 13 15:33:18 standalone.localdomain dnsmasq-dhcp[497673]: read /var/lib/neutron/dhcp/777e62f3-0875-437c-ae6a-3924fea34ee8/opts
Oct 13 15:33:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:33:18.882 496978 INFO neutron.agent.dhcp.agent [None req-e45b1d56-122e-407b-b2e4-935ae7473794 - - - - - -] Finished network a12f1166-9c4d-4d97-bc78-657c05e7af68 dhcp configuration
Oct 13 15:33:19 standalone.localdomain ceph-mon[29756]: pgmap v3562: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/817742760' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:33:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/817742760' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:33:19 standalone.localdomain systemd-journald[48591]: Data hash table of /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal has a fill level at 75.0 (53723 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation.
Oct 13 15:33:19 standalone.localdomain systemd-journald[48591]: /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 13 15:33:19 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 15:33:19 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 15:33:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3563: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:33:19.449 496978 INFO neutron.agent.dhcp.agent [None req-857020c1-2fe8-4922-bafa-7d34a210a6c0 - - - - - -] DHCP configuration for ports {'88d4c439-3ced-4b8c-9aea-848d2cfc109e', 'dc5eaba3-5a43-4142-aa7e-8a037cb22fad', 'ae82040a-05c2-498b-820e-c10da001624a', 'fff1b0db-e739-4525-986c-687923da6b6e'} is completed
Oct 13 15:33:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:33:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:33:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:33:20.130 496978 INFO neutron.agent.dhcp.agent [None req-ecf1dd41-3a0e-44dd-bad6-822260b96783 - - - - - -] Finished network 777e62f3-0875-437c-ae6a-3924fea34ee8 dhcp configuration
Oct 13 15:33:20 standalone.localdomain podman[497690]: 2025-10-13 15:33:20.171064513 +0000 UTC m=+0.683142260 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:33:20 standalone.localdomain podman[497690]: 2025-10-13 15:33:20.213538183 +0000 UTC m=+0.725615930 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd)
Oct 13 15:33:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:33:20.259 496978 INFO neutron.agent.dhcp.agent [None req-d793bf11-b18c-478c-92b1-737109c0b036 - - - - - -] DHCP configuration for ports {'647bb9e9-c19a-483e-af58-d359c6d0b7b5', 'b7ed1856-7267-428d-bfda-0ee53d5ea6b6'} is completed
Oct 13 15:33:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:33:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:33:20 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:33:21 standalone.localdomain ceph-mon[29756]: pgmap v3563: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3564: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:21 standalone.localdomain neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def[379018]: [NOTICE]   (379022) : haproxy version is 2.8.14-c23fe91
Oct 13 15:33:21 standalone.localdomain neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def[379018]: [NOTICE]   (379022) : path to executable is /usr/sbin/haproxy
Oct 13 15:33:21 standalone.localdomain neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def[379018]: [WARNING]  (379022) : Exiting Master process...
Oct 13 15:33:21 standalone.localdomain podman[497710]: 2025-10-13 15:33:21.376418055 +0000 UTC m=+0.061118244 container kill 4a92922603a22fb5abd70dde94970b285cf969a1fa2d4e59c603d0d8a79c109d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:33:21 standalone.localdomain neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def[379018]: [ALERT]    (379022) : Current worker (379024) exited with code 143 (Terminated)
Oct 13 15:33:21 standalone.localdomain neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def[379018]: [WARNING]  (379022) : All workers exited. Exiting... (0)
Oct 13 15:33:21 standalone.localdomain systemd[1]: libpod-4a92922603a22fb5abd70dde94970b285cf969a1fa2d4e59c603d0d8a79c109d.scope: Deactivated successfully.
Oct 13 15:33:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:33:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:22.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:33:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:33:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:33:23 standalone.localdomain ceph-mon[29756]: pgmap v3564: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:33:23
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_data', 'manila_metadata', 'images', 'backups', '.mgr', 'vms', 'volumes']
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3565: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21493 DF PROTO=TCP SPT=47060 DPT=9102 SEQ=18295593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC5517E0000000001030307) 
Oct 13 15:33:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-919c2496449756819846525fbfb351457636bf59d0964ccd47919cff1ec5dc94-merged.mount: Deactivated successfully.
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:33:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:33:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:23.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:23 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5d47d2bb79e4159e5b63c2e82543365dfb186987c7afdef56d8598cc2a6c5bdf-merged.mount: Deactivated successfully.
Oct 13 15:33:23 standalone.localdomain podman[497724]: 2025-10-13 15:33:23.955906658 +0000 UTC m=+2.563241874 container died 4a92922603a22fb5abd70dde94970b285cf969a1fa2d4e59c603d0d8a79c109d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:33:23 standalone.localdomain podman[497736]: 2025-10-13 15:33:23.986645556 +0000 UTC m=+1.255625482 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, release=1, distribution-scope=public, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, container_name=swift_object_server, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:56:28, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:33:24 standalone.localdomain podman[497738]: 2025-10-13 15:33:24.040735023 +0000 UTC m=+1.300492624 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=swift_account_server, managed_by=tripleo_ansible, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12)
Oct 13 15:33:24 standalone.localdomain podman[497737]: 2025-10-13 15:33:24.091566829 +0000 UTC m=+1.361271617 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible)
Oct 13 15:33:24 standalone.localdomain podman[497738]: 2025-10-13 15:33:24.222677819 +0000 UTC m=+1.482435360 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, container_name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-account, release=1, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., batch=17.1_20250721.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64)
Oct 13 15:33:24 standalone.localdomain podman[497736]: 2025-10-13 15:33:24.254986424 +0000 UTC m=+1.523966410 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, tcib_managed=true, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, distribution-scope=public, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, managed_by=tripleo_ansible)
Oct 13 15:33:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:33:24 standalone.localdomain podman[497737]: 2025-10-13 15:33:24.286401302 +0000 UTC m=+1.556106050 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T15:54:32, container_name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20250721.1, version=17.1.9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, release=1, io.openshift.expose-services=)
Oct 13 15:33:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21494 DF PROTO=TCP SPT=47060 DPT=9102 SEQ=18295593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC555760000000001030307) 
Oct 13 15:33:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4a92922603a22fb5abd70dde94970b285cf969a1fa2d4e59c603d0d8a79c109d-userdata-shm.mount: Deactivated successfully.
Oct 13 15:33:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:33:25 standalone.localdomain ceph-mon[29756]: pgmap v3565: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3566: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:33:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-461aa997cb27c140eaa5a112d1c6dafece82fe45950c8843271a52b350b33ee2-merged.mount: Deactivated successfully.
Oct 13 15:33:25 standalone.localdomain podman[497724]: 2025-10-13 15:33:25.883751923 +0000 UTC m=+4.491087109 container cleanup 4a92922603a22fb5abd70dde94970b285cf969a1fa2d4e59c603d0d8a79c109d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:33:25 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:33:25 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:33:25 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:33:25 standalone.localdomain systemd[1]: libpod-conmon-4a92922603a22fb5abd70dde94970b285cf969a1fa2d4e59c603d0d8a79c109d.scope: Deactivated successfully.
Oct 13 15:33:25 standalone.localdomain podman[497770]: 2025-10-13 15:33:25.925701136 +0000 UTC m=+1.973387549 container remove 4a92922603a22fb5abd70dde94970b285cf969a1fa2d4e59c603d0d8a79c109d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:33:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:33:25.939 496978 INFO neutron.agent.dhcp.agent [None req-6fbb5a71-cd57-4b8f-b297-f4064d00954d - - - - - -] Finished network 0c455abd-28d4-47e7-a254-e50de0526def dhcp configuration
Oct 13 15:33:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:33:25.940 496978 INFO neutron.agent.dhcp.agent [None req-c91addea-21b0-4582-a274-9e2c3f9b9fdf - - - - - -] Synchronizing state complete
Oct 13 15:33:25 standalone.localdomain podman[497846]: 2025-10-13 15:33:25.962072057 +0000 UTC m=+1.259519072 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:33:25 standalone.localdomain podman[497846]: 2025-10-13 15:33:25.971804156 +0000 UTC m=+1.269251131 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:33:25 standalone.localdomain podman[497846]: unhealthy
Oct 13 15:33:25 standalone.localdomain podman[497834]: 2025-10-13 15:33:25.938593523 +0000 UTC m=+1.653938305 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 15:33:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:33:26.020 496978 INFO neutron.agent.dhcp.agent [None req-c91addea-21b0-4582-a274-9e2c3f9b9fdf - - - - - -] DHCP agent started
Oct 13 15:33:26 standalone.localdomain podman[497834]: 2025-10-13 15:33:26.02190614 +0000 UTC m=+1.737250902 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, container_name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:33:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:33:26.530 496978 INFO neutron.agent.dhcp.agent [None req-77e1a1b2-74ae-4cae-bc2c-4773eaab0163 - - - - - -] DHCP configuration for ports {'b7ed1856-7267-428d-bfda-0ee53d5ea6b6', '52f81088-6dd1-4b23-992e-fdefd91123e1', '9a2e0de2-0849-4467-97f1-b171963f0a27', '27f28391-9736-4486-ba91-ddea55e8e6c2', 'fff1b0db-e739-4525-986c-687923da6b6e', '8a49767c-fb09-4185-95da-4261d8043fad', 'ae82040a-05c2-498b-820e-c10da001624a', '88d4c439-3ced-4b8c-9aea-848d2cfc109e', 'dc5eaba3-5a43-4142-aa7e-8a037cb22fad', '647bb9e9-c19a-483e-af58-d359c6d0b7b5', 'da3e5a61-7adb-481b-b7f5-703c3939cde2'} is completed
Oct 13 15:33:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21495 DF PROTO=TCP SPT=47060 DPT=9102 SEQ=18295593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC55D770000000001030307) 
Oct 13 15:33:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:33:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:26 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:33:26 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Failed with result 'exit-code'.
Oct 13 15:33:26 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:33:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:33:27 standalone.localdomain ceph-mon[29756]: pgmap v3566: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3567: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:27.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:33:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:33:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:33:28 standalone.localdomain ceph-mon[29756]: pgmap v3567: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:28 standalone.localdomain sshd[497877]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:33:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:33:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:28.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:28 standalone.localdomain podman[497879]: 2025-10-13 15:33:28.790737958 +0000 UTC m=+0.057525573 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 13 15:33:28 standalone.localdomain podman[497879]: 2025-10-13 15:33:28.799658613 +0000 UTC m=+0.066446208 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:33:29 standalone.localdomain unix_chkpwd[497899]: password check failed for user (root)
Oct 13 15:33:29 standalone.localdomain sshd[497877]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:33:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3568: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-da82297647e243d4edb901be4ed74d543084bc0e345f281c209d10ffa81727a6-merged.mount: Deactivated successfully.
Oct 13 15:33:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d33a74ddba43a8c0deb95d8e678513479d7605354398a4facd4f04bb49f5adb5-merged.mount: Deactivated successfully.
Oct 13 15:33:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d33a74ddba43a8c0deb95d8e678513479d7605354398a4facd4f04bb49f5adb5-merged.mount: Deactivated successfully.
Oct 13 15:33:29 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:33:30 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:30.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:33:30 standalone.localdomain ceph-mon[29756]: pgmap v3568: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21496 DF PROTO=TCP SPT=47060 DPT=9102 SEQ=18295593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC56D360000000001030307) 
Oct 13 15:33:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:31.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:33:31 standalone.localdomain sshd[497877]: Failed password for root from 193.46.255.244 port 16578 ssh2
Oct 13 15:33:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3569: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:33:31 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e1c1920b4961eb109ce76883e2a632166f4de131f08418737a718e8b679f7eb-merged.mount: Deactivated successfully.
Oct 13 15:33:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:33:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:32.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:32 standalone.localdomain unix_chkpwd[497900]: password check failed for user (root)
Oct 13 15:33:33 standalone.localdomain ceph-mon[29756]: pgmap v3569: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3570: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:33:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:33.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-948d63d72c90238568600bb4ced3a347f3a772760aabfa54019ccce9078bd0ca-merged.mount: Deactivated successfully.
Oct 13 15:33:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:33:34 standalone.localdomain podman[497901]: 2025-10-13 15:33:34.168123826 +0000 UTC m=+0.083651199 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:33:34 standalone.localdomain podman[497901]: 2025-10-13 15:33:34.177913217 +0000 UTC m=+0.093440600 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:33:34 standalone.localdomain sshd[497877]: Failed password for root from 193.46.255.244 port 16578 ssh2
Oct 13 15:33:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:33:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:35.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:33:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:35.090 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:33:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:35 standalone.localdomain ceph-mon[29756]: pgmap v3570: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:35.232 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:33:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:35.232 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:33:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:35.233 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:33:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d02971ddf65d005a908e4946d9530a2c20c528ccdcb222adb37714b18dbf1610-merged.mount: Deactivated successfully.
Oct 13 15:33:35 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:33:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3571: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:35.575 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:33:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:35.600 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:33:35 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:35.601 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:33:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e0f86229f02c4331620c9ec8e21be769ac9cff125fc1f01f8404fcae9b59e3df-merged.mount: Deactivated successfully.
Oct 13 15:33:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:33:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-19b5df687512785465f13112d48e85c216168957a07bbef3f89b587f68ca7ca8-merged.mount: Deactivated successfully.
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.091 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.114 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.114 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.115 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.115 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.115 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:33:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:33:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/690325662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.531 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:33:36 standalone.localdomain unix_chkpwd[497946]: password check failed for user (root)
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.702 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.702 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.702 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.706 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.706 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.852 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.854 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9288MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.854 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:33:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:36.855 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:33:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:33:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:37.127 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:33:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:37.128 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:33:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:37.128 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:33:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:37.128 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:33:37 standalone.localdomain ceph-mon[29756]: pgmap v3571: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:37 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/690325662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:33:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:37.358 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Refreshing inventories for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 13 15:33:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3572: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:37.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:37.572 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Updating ProviderTree inventory for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 13 15:33:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:37.573 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Updating inventory in ProviderTree for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 13 15:33:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:37.587 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Refreshing aggregate associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 13 15:33:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:37.607 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Refreshing trait associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, traits: COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,HW_CPU_X86_SSE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_AVX2,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_USB _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 13 15:33:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:37.657 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:33:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:33:38 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/835837047' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:33:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:38.048 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:33:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:38.056 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:33:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:38.080 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:33:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:38.083 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:33:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:38.083 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:33:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:38.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:33:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:38.090 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 13 15:33:38 standalone.localdomain sshd[497877]: Failed password for root from 193.46.255.244 port 16578 ssh2
Oct 13 15:33:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:38.107 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 13 15:33:38 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/835837047' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:33:38 standalone.localdomain sshd[497877]: Received disconnect from 193.46.255.244 port 16578:11:  [preauth]
Oct 13 15:33:38 standalone.localdomain sshd[497877]: Disconnected from authenticating user root 193.46.255.244 port 16578 [preauth]
Oct 13 15:33:38 standalone.localdomain sshd[497877]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:33:38 standalone.localdomain sshd[497969]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:33:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e1c1920b4961eb109ce76883e2a632166f4de131f08418737a718e8b679f7eb-merged.mount: Deactivated successfully.
Oct 13 15:33:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-88928f310f1ba382800c4333f5e27b201b6df1b159808455dd99069f0d30925c-merged.mount: Deactivated successfully.
Oct 13 15:33:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:38.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:38 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:14:45 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 417187 "" "Go-http-client/1.1"
Oct 13 15:33:38 standalone.localdomain podman_exporter[467321]: ts=2025-10-13T15:33:38.897Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct 13 15:33:38 standalone.localdomain podman_exporter[467321]: ts=2025-10-13T15:33:38.902Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct 13 15:33:38 standalone.localdomain podman_exporter[467321]: ts=2025-10-13T15:33:38.902Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9882
Oct 13 15:33:39 standalone.localdomain ceph-mon[29756]: pgmap v3572: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3573: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:39 standalone.localdomain unix_chkpwd[497971]: password check failed for user (root)
Oct 13 15:33:39 standalone.localdomain sshd[497969]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:33:39 standalone.localdomain sudo[497972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:33:39 standalone.localdomain sudo[497972]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:33:39 standalone.localdomain sudo[497972]: pam_unix(sudo:session): session closed for user root
Oct 13 15:33:39 standalone.localdomain sudo[497990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:33:39 standalone.localdomain sudo[497990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:33:40 standalone.localdomain sudo[497990]: pam_unix(sudo:session): session closed for user root
Oct 13 15:33:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:33:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:33:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:33:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:33:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:33:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:33:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:33:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:33:40 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 2c3edaa0-a523-40c4-8677-8aea769b23a3 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:33:40 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 2c3edaa0-a523-40c4-8677-8aea769b23a3 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:33:40 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 2c3edaa0-a523-40c4-8677-8aea769b23a3 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:33:40 standalone.localdomain sudo[498040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:33:40 standalone.localdomain sudo[498040]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:33:40 standalone.localdomain sudo[498040]: pam_unix(sudo:session): session closed for user root
Oct 13 15:33:41 standalone.localdomain ceph-mon[29756]: pgmap v3573: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:41 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:33:41 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:33:41 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:33:41 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:33:41 standalone.localdomain sshd[497969]: Failed password for root from 193.46.255.244 port 32882 ssh2
Oct 13 15:33:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3574: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:41 standalone.localdomain podman[467099]: time="2025-10-13T15:33:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:33:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:33:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 413474 "" "Go-http-client/1.1"
Oct 13 15:33:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:33:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 48623 "" "Go-http-client/1.1"
Oct 13 15:33:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:33:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:42.103 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:33:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:42.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:33:42 standalone.localdomain podman[498061]: 2025-10-13 15:33:42.500102645 +0000 UTC m=+0.072356311 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:33:42 standalone.localdomain podman[498061]: 2025-10-13 15:33:42.533889296 +0000 UTC m=+0.106142942 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent)
Oct 13 15:33:42 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:33:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:33:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:33:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:33:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:33:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:33:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:33:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:33:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:33:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:33:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:33:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:33:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:33:43 standalone.localdomain unix_chkpwd[498080]: password check failed for user (root)
Oct 13 15:33:43 standalone.localdomain ceph-mon[29756]: pgmap v3574: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3575: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:43 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:43.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:44 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:44.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:33:44 standalone.localdomain sshd[497969]: Failed password for root from 193.46.255.244 port 32882 ssh2
Oct 13 15:33:45 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:45.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:33:45 standalone.localdomain ceph-mon[29756]: pgmap v3575: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:33:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:33:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:33:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3576: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:33:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:33:46 standalone.localdomain unix_chkpwd[498089]: password check failed for user (root)
Oct 13 15:33:46 standalone.localdomain podman[498081]: 2025-10-13 15:33:46.800378993 +0000 UTC m=+0.064244490 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:33:46 standalone.localdomain podman[498081]: 2025-10-13 15:33:46.8353062 +0000 UTC m=+0.099171817 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:33:46 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:33:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:47.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:33:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:47.092 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.112408) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369627112460, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 985, "num_deletes": 250, "total_data_size": 805293, "memory_usage": 824032, "flush_reason": "Manual Compaction"}
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369627117874, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 517840, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83784, "largest_seqno": 84768, "table_properties": {"data_size": 514327, "index_size": 1241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9624, "raw_average_key_size": 20, "raw_value_size": 506634, "raw_average_value_size": 1077, "num_data_blocks": 58, "num_entries": 470, "num_filter_entries": 470, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760369552, "oldest_key_time": 1760369552, "file_creation_time": 1760369627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 5524 microseconds, and 2226 cpu microseconds.
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.117919) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 517840 bytes OK
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.117951) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.121977) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.121998) EVENT_LOG_v1 {"time_micros": 1760369627121991, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.122018) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 800524, prev total WAL file size 801013, number of live WAL files 2.
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.122674) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323533' seq:72057594037927935, type:22 .. '6D6772737461740033353034' seq:0, type:0; will stop at (end)
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(505KB)], [191(6846KB)]
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369627122754, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 7528667, "oldest_snapshot_seqno": -1}
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 6838 keys, 5728913 bytes, temperature: kUnknown
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369627159083, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 5728913, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5691163, "index_size": 19538, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 180609, "raw_average_key_size": 26, "raw_value_size": 5574440, "raw_average_value_size": 815, "num_data_blocks": 768, "num_entries": 6838, "num_filter_entries": 6838, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760369627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.159433) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 5728913 bytes
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.161436) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.5 rd, 157.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 6.7 +0.0 blob) out(5.5 +0.0 blob), read-write-amplify(25.6) write-amplify(11.1) OK, records in: 7322, records dropped: 484 output_compression: NoCompression
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.161467) EVENT_LOG_v1 {"time_micros": 1760369627161453, "job": 120, "event": "compaction_finished", "compaction_time_micros": 36453, "compaction_time_cpu_micros": 23317, "output_level": 6, "num_output_files": 1, "total_output_size": 5728913, "num_input_records": 7322, "num_output_records": 6838, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369627161746, "job": 120, "event": "table_file_deletion", "file_number": 193}
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369627162943, "job": 120, "event": "table_file_deletion", "file_number": 191}
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.122520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.163120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.163132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.163136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.163140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:33:47.163144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:33:47 standalone.localdomain ceph-mon[29756]: pgmap v3576: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3577: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:47.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:47 standalone.localdomain sshd[498107]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:33:47 standalone.localdomain sshd[498107]: Accepted publickey for root from 192.168.122.30 port 52110 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:33:47 standalone.localdomain systemd[1]: Created slice User Slice of UID 0.
Oct 13 15:33:47 standalone.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 13 15:33:47 standalone.localdomain systemd-logind[45629]: New session 298 of user root.
Oct 13 15:33:47 standalone.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 13 15:33:47 standalone.localdomain systemd[1]: Starting User Manager for UID 0...
Oct 13 15:33:48 standalone.localdomain systemd[498111]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:33:48 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:48.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:33:48 standalone.localdomain ceph-mon[29756]: pgmap v3577: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:48 standalone.localdomain systemd[498111]: Queued start job for default target Main User Target.
Oct 13 15:33:48 standalone.localdomain systemd[498111]: Created slice User Application Slice.
Oct 13 15:33:48 standalone.localdomain systemd[498111]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 13 15:33:48 standalone.localdomain systemd[498111]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 15:33:48 standalone.localdomain systemd[498111]: Reached target Paths.
Oct 13 15:33:48 standalone.localdomain systemd[498111]: Reached target Timers.
Oct 13 15:33:48 standalone.localdomain systemd[498111]: Starting D-Bus User Message Bus Socket...
Oct 13 15:33:48 standalone.localdomain systemd[498111]: Starting Create User's Volatile Files and Directories...
Oct 13 15:33:48 standalone.localdomain systemd[498111]: Listening on D-Bus User Message Bus Socket.
Oct 13 15:33:48 standalone.localdomain systemd[498111]: Finished Create User's Volatile Files and Directories.
Oct 13 15:33:48 standalone.localdomain systemd[498111]: Reached target Sockets.
Oct 13 15:33:48 standalone.localdomain systemd[498111]: Reached target Basic System.
Oct 13 15:33:48 standalone.localdomain systemd[498111]: Reached target Main User Target.
Oct 13 15:33:48 standalone.localdomain systemd[498111]: Startup finished in 178ms.
Oct 13 15:33:48 standalone.localdomain systemd[1]: Started User Manager for UID 0.
Oct 13 15:33:48 standalone.localdomain systemd[1]: Started Session 298 of User root.
Oct 13 15:33:48 standalone.localdomain sshd[498107]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:33:48 standalone.localdomain sshd[497969]: Failed password for root from 193.46.255.244 port 32882 ssh2
Oct 13 15:33:48 standalone.localdomain sshd[497969]: Received disconnect from 193.46.255.244 port 32882:11:  [preauth]
Oct 13 15:33:48 standalone.localdomain sshd[497969]: Disconnected from authenticating user root 193.46.255.244 port 32882 [preauth]
Oct 13 15:33:48 standalone.localdomain sshd[497969]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:33:48 standalone.localdomain sshd[498182]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:33:48 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:48.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:49 standalone.localdomain python3.9[498236]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:33:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3578: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:49 standalone.localdomain unix_chkpwd[498241]: password check failed for user (root)
Oct 13 15:33:49 standalone.localdomain sshd[498182]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:33:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:33:49 standalone.localdomain podman[498242]: 2025-10-13 15:33:49.825649063 +0000 UTC m=+0.076503438 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 13 15:33:49 standalone.localdomain podman[498242]: 2025-10-13 15:33:49.84179114 +0000 UTC m=+0.092645515 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, release=1755695350)
Oct 13 15:33:49 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:33:50 standalone.localdomain ceph-mon[29756]: pgmap v3578: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:50 standalone.localdomain python3.9[498368]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:33:51 standalone.localdomain python3.9[498476]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:33:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3579: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:33:51 standalone.localdomain sshd[498182]: Failed password for root from 193.46.255.244 port 33456 ssh2
Oct 13 15:33:51 standalone.localdomain podman[498540]: 2025-10-13 15:33:51.55758273 +0000 UTC m=+0.094152692 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=multipathd, managed_by=edpm_ansible)
Oct 13 15:33:51 standalone.localdomain podman[498540]: 2025-10-13 15:33:51.573982625 +0000 UTC m=+0.110552627 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Oct 13 15:33:51 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:33:51 standalone.localdomain python3.9[498603]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:33:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:33:52 standalone.localdomain python3.9[498711]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 15:33:52 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:52.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:53 standalone.localdomain python3.9[498819]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:33:53 standalone.localdomain ceph-mon[29756]: pgmap v3579: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:53 standalone.localdomain unix_chkpwd[498851]: password check failed for user (root)
Oct 13 15:33:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:33:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:33:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:33:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:33:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:33:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:33:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3580: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26120 DF PROTO=TCP SPT=34440 DPT=9102 SEQ=38746225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC5C6AF0000000001030307) 
Oct 13 15:33:53 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:53.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:53 standalone.localdomain python3.9[498928]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:33:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26121 DF PROTO=TCP SPT=34440 DPT=9102 SEQ=38746225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC5CAB60000000001030307) 
Oct 13 15:33:54 standalone.localdomain sshd[498182]: Failed password for root from 193.46.255.244 port 33456 ssh2
Oct 13 15:33:55 standalone.localdomain unix_chkpwd[499002]: password check failed for user (root)
Oct 13 15:33:55 standalone.localdomain ceph-mon[29756]: pgmap v3580: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3581: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:55 standalone.localdomain python3.9[499039]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:33:56 standalone.localdomain python3.9[499149]: ansible-ansible.builtin.service_facts Invoked
Oct 13 15:33:56 standalone.localdomain network[499166]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 15:33:56 standalone.localdomain network[499167]: 'network-scripts' will be removed from distribution in near future.
Oct 13 15:33:56 standalone.localdomain network[499168]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 15:33:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:33:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:33:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:33:56 standalone.localdomain podman[499175]: 2025-10-13 15:33:56.563617775 +0000 UTC m=+0.093909084 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4)
Oct 13 15:33:56 standalone.localdomain podman[499174]: 2025-10-13 15:33:56.60789682 +0000 UTC m=+0.141807871 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, container_name=swift_container_server, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, vcs-type=git, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:33:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26122 DF PROTO=TCP SPT=34440 DPT=9102 SEQ=38746225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC5D2B60000000001030307) 
Oct 13 15:33:56 standalone.localdomain podman[499173]: 2025-10-13 15:33:56.683430758 +0000 UTC m=+0.217581196 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, config_id=tripleo_step4, vcs-type=git, release=1, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:33:56 standalone.localdomain podman[499175]: 2025-10-13 15:33:56.778892359 +0000 UTC m=+0.309183708 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, vcs-type=git, distribution-scope=public, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 15:33:56 standalone.localdomain podman[499174]: 2025-10-13 15:33:56.819726498 +0000 UTC m=+0.353637609 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, build-date=2025-07-21T15:54:32, container_name=swift_container_server, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9)
Oct 13 15:33:56 standalone.localdomain sshd[498182]: Failed password for root from 193.46.255.244 port 33456 ssh2
Oct 13 15:33:56 standalone.localdomain podman[499173]: 2025-10-13 15:33:56.897283117 +0000 UTC m=+0.431433535 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-object-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, container_name=swift_object_server, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 15:33:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:33:57 standalone.localdomain ceph-mon[29756]: pgmap v3581: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:57 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:33:57 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:33:57 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:33:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:33:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:33:57 standalone.localdomain podman[499259]: 2025-10-13 15:33:57.273384977 +0000 UTC m=+0.073474606 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:33:57 standalone.localdomain podman[499260]: 2025-10-13 15:33:57.330357872 +0000 UTC m=+0.129599214 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:33:57 standalone.localdomain podman[499260]: 2025-10-13 15:33:57.342829626 +0000 UTC m=+0.142070958 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:33:57 standalone.localdomain podman[499259]: 2025-10-13 15:33:57.359676755 +0000 UTC m=+0.159766474 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 15:33:57 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:33:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3582: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:57 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:33:57 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:57.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:57 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:33:58 standalone.localdomain sshd[498182]: Received disconnect from 193.46.255.244 port 33456:11:  [preauth]
Oct 13 15:33:58 standalone.localdomain sshd[498182]: Disconnected from authenticating user root 193.46.255.244 port 33456 [preauth]
Oct 13 15:33:58 standalone.localdomain sshd[498182]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:33:58 standalone.localdomain nova_compute[456855]: 2025-10-13 15:33:58.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:33:59 standalone.localdomain ceph-mon[29756]: pgmap v3582: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:33:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3583: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26123 DF PROTO=TCP SPT=34440 DPT=9102 SEQ=38746225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC5E2760000000001030307) 
Oct 13 15:34:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:34:00 standalone.localdomain systemd[1]: tmp-crun.tzpqGL.mount: Deactivated successfully.
Oct 13 15:34:00 standalone.localdomain podman[499426]: 2025-10-13 15:34:00.840391519 +0000 UTC m=+0.111349602 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:34:00 standalone.localdomain podman[499426]: 2025-10-13 15:34:00.851038878 +0000 UTC m=+0.121996921 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:34:00 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:34:01 standalone.localdomain ceph-mon[29756]: pgmap v3583: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3584: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:01 standalone.localdomain python3.9[499557]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:34:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:34:02 standalone.localdomain python3.9[499667]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:34:02 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:02.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:03 standalone.localdomain python3.9[499777]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:03 standalone.localdomain ceph-mon[29756]: pgmap v3584: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3585: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:03 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:03.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:03 standalone.localdomain python3.9[499885]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:34:04 standalone.localdomain ceph-mon[29756]: pgmap v3585: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:04 standalone.localdomain python3.9[499993]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:34:04 standalone.localdomain python3.9[500048]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:34:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3586: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:05 standalone.localdomain python3.9[500156]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:34:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:34:05 standalone.localdomain podman[500159]: 2025-10-13 15:34:05.821905089 +0000 UTC m=+0.089256101 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:34:05 standalone.localdomain podman[500159]: 2025-10-13 15:34:05.860952422 +0000 UTC m=+0.128303434 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:34:05 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:34:06 standalone.localdomain python3.9[500234]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:34:06 standalone.localdomain ceph-mon[29756]: pgmap v3586: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:34:06.951 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:34:06.952 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:34:06.952 378821 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0c455abd-28d4-47e7-a254-e50de0526def.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0c455abd-28d4-47e7-a254-e50de0526def.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:34:06.953 378821 ERROR neutron.agent.linux.external_process [-] metadata-proxy for metadata with uuid 0c455abd-28d4-47e7-a254-e50de0526def not found. The process should not have died
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:34:06.953 378821 WARNING neutron.agent.linux.external_process [-] Respawning metadata-proxy for uuid 0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:34:06.953 378821 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0c455abd-28d4-47e7-a254-e50de0526def.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0c455abd-28d4-47e7-a254-e50de0526def.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:34:06.954 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[02152bfe-e5d8-40da-9fd6-a72a175203bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:34:06.954 378821 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]: global
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     log         /dev/log local0 debug
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     log-tag     haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     user        root
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     group       root
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     maxconn     1024
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     pidfile     /var/lib/neutron/external/pids/0c455abd-28d4-47e7-a254-e50de0526def.pid.haproxy
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     daemon
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]: 
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]: defaults
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     log global
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     mode http
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     option httplog
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     option dontlognull
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     option http-server-close
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     option forwardfor
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     retries                 3
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     timeout http-request    30s
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     timeout connect         30s
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     timeout client          32s
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     timeout server          32s
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     timeout http-keep-alive 30s
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]: 
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]: 
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]: listen listener
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     bind 169.254.169.254:80
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     server metadata /var/lib/neutron/metadata_proxy
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:     http-request add-header X-OVN-Network-ID 0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 13 15:34:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:34:06.955 378821 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'env', 'PROCESS_TAG=haproxy-0c455abd-28d4-47e7-a254-e50de0526def', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0c455abd-28d4-47e7-a254-e50de0526def.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 13 15:34:06 standalone.localdomain python3.9[500342]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:34:07 standalone.localdomain podman[500456]: 
Oct 13 15:34:07 standalone.localdomain podman[500456]: 2025-10-13 15:34:07.380226327 +0000 UTC m=+0.072365630 container create 5ebe253be7587c71f27004bcb0f14b3adcc7168592269a4136fa0fdaac981f42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:34:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3587: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:07 standalone.localdomain systemd[1]: Started libpod-conmon-5ebe253be7587c71f27004bcb0f14b3adcc7168592269a4136fa0fdaac981f42.scope.
Oct 13 15:34:07 standalone.localdomain podman[500456]: 2025-10-13 15:34:07.350408098 +0000 UTC m=+0.042547381 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 15:34:07 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:34:07 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:07.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:07 standalone.localdomain python3.9[500486]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:34:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f161c7f0f881da215a30163e3cdd049cc17abff46debd20ccee74e7caeb35c2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:34:07 standalone.localdomain podman[500456]: 2025-10-13 15:34:07.53092436 +0000 UTC m=+0.223063673 container init 5ebe253be7587c71f27004bcb0f14b3adcc7168592269a4136fa0fdaac981f42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:34:07 standalone.localdomain podman[500456]: 2025-10-13 15:34:07.540399503 +0000 UTC m=+0.232538806 container start 5ebe253be7587c71f27004bcb0f14b3adcc7168592269a4136fa0fdaac981f42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:34:07 standalone.localdomain neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def[500490]: [NOTICE]   (500496) : New worker (500498) forked
Oct 13 15:34:07 standalone.localdomain neutron-haproxy-ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def[500490]: [NOTICE]   (500496) : Loading success.
Oct 13 15:34:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:34:07.620 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:34:08 standalone.localdomain python3.9[500559]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:08 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:08.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:09 standalone.localdomain ceph-mon[29756]: pgmap v3587: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:09 standalone.localdomain python3.9[500667]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:34:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3588: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:09 standalone.localdomain python3.9[500722]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:10 standalone.localdomain python3.9[500830]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:34:10 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:34:10 standalone.localdomain systemd-sysv-generator[500854]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:34:10 standalone.localdomain systemd-rc-local-generator[500851]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:34:10 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:34:11 standalone.localdomain ceph-mon[29756]: pgmap v3588: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3589: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:11 standalone.localdomain python3.9[500975]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:34:11 standalone.localdomain podman[467099]: time="2025-10-13T15:34:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:34:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:34:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414659 "" "Go-http-client/1.1"
Oct 13 15:34:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:34:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49105 "" "Go-http-client/1.1"
Oct 13 15:34:11 standalone.localdomain python3.9[501030]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:34:12 standalone.localdomain python3.9[501138]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:34:12 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:12.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:34:12 standalone.localdomain podman[501194]: 2025-10-13 15:34:12.790319672 +0000 UTC m=+0.061319350 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 13 15:34:12 standalone.localdomain podman[501194]: 2025-10-13 15:34:12.799927198 +0000 UTC m=+0.070926916 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 15:34:12 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:34:12 standalone.localdomain python3.9[501193]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:34:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:34:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:34:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:34:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:34:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:34:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:34:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:34:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:34:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:34:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:34:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:34:13 standalone.localdomain ceph-mon[29756]: pgmap v3589: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3590: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:13 standalone.localdomain python3.9[501319]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:34:13 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:34:13 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:13.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:13 standalone.localdomain systemd-rc-local-generator[501345]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:34:13 standalone.localdomain systemd-sysv-generator[501349]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:34:13 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:34:14 standalone.localdomain systemd[1]: Starting Create netns directory...
Oct 13 15:34:14 standalone.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 15:34:14 standalone.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 15:34:14 standalone.localdomain systemd[1]: Finished Create netns directory.
Oct 13 15:34:14 standalone.localdomain python3.9[501469]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:34:15 standalone.localdomain ceph-mon[29756]: pgmap v3590: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3591: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:15 standalone.localdomain python3.9[501577]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:34:15 standalone.localdomain python3.9[501632]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/lib/openstack/healthchecks/iscsid/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/iscsid/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:34:16 standalone.localdomain ceph-mon[29756]: pgmap v3591: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:16 standalone.localdomain python3.9[501740]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:34:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:34:17 standalone.localdomain python3.9[501848]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:34:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3592: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:17.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:34:17 standalone.localdomain python3.9[501903]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/iscsid.json _original_basename=.rac_c6o7 recurse=False state=file path=/var/lib/kolla/config_files/iscsid.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:17 standalone.localdomain podman[501904]: 2025-10-13 15:34:17.844813399 +0000 UTC m=+0.103601043 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 13 15:34:17 standalone.localdomain podman[501904]: 2025-10-13 15:34:17.885262585 +0000 UTC m=+0.144050239 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 15:34:17 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:34:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:34:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1283640814' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:34:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:34:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1283640814' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:34:18 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:18.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:19 standalone.localdomain python3.9[502036]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:19 standalone.localdomain ceph-mon[29756]: pgmap v3592: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1283640814' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:34:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1283640814' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:34:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3593: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:34:20 standalone.localdomain podman[502217]: 2025-10-13 15:34:20.808209723 +0000 UTC m=+0.079608395 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, architecture=x86_64)
Oct 13 15:34:20 standalone.localdomain podman[502217]: 2025-10-13 15:34:20.819270604 +0000 UTC m=+0.090669306 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Oct 13 15:34:20 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:34:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:20.992 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:34:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:20.996 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:34:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:20.997 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:34:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:20.997 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.026 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.028 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.028 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.049 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.050 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd571a75b-f815-4c2a-a0c8-c346419d61c0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:34:20.998209', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16b1bec6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.187534928, 'message_signature': 'd8d935e97a2a5ad98fd9c9f89342b5bf393dea12a4a3f07107103e88bfd2ad5f'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:34:20.998209', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16b1d97e-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.187534928, 'message_signature': 'edb99f0436a85ae6bc0712cbd792f337199bfdc7a8e0731ebd2a251ecc2fb1b2'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:34:20.998209', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '16b1ed6a-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.187534928, 'message_signature': '85773e6a51ff255fc6dffc1bf6be055d5aa862baf04f31fa617e5a8682c339d7'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:34:20.998209', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16b52da4-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.218696568, 'message_signature': '8eaba9ed1c55708bb01851ad0404a940c553be4e37b093faa2782e8526c6f8e5'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:34:20.998209', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16b54398-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.218696568, 'message_signature': 'd28a1d3a752d81590a66df1932573756a0ed980281cb3a0287984c0d9ff2b5ba'}]}, 'timestamp': '2025-10-13 15:34:21.051214', '_unique_id': '83331c44e7114898b73d38e9110007a5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.052 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.054 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.087 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 52.27734375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.114 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2c062ba2-2d3c-4e93-b943-e8178b5089fb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.27734375, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:34:21.054718', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '16baec58-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.276734536, 'message_signature': '58c42c26fc2bfd3edeb11337df9b6d031078c9b6055b758a8454ac51171c6a6f'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': '2025-10-13T15:34:21.054718', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '16bf01bc-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.303543193, 'message_signature': '2fd9a4079592f751a54f632bba255ae133628519c1297d471d07d6efc3cd2f3e'}]}, 'timestamp': '2025-10-13 15:34:21.115086', '_unique_id': 'd7bd72c3af2c4e148039075b146397b2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.116 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.117 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.123 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.126 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2c563800-98a9-4da1-b656-95fc3d37392b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:34:21.117868', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '16c05d96-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.307125343, 'message_signature': 'f3ac49b3a83751dfef35e6b50420cbc7d15be188f46971fb03e783355ae82a41'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:34:21.117868', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '16c0e6a8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.313208921, 'message_signature': '63194b4ec3e75488e69942de7dfdbcb57fa7f2dfc70ead45e14aa2861a12b46a'}]}, 'timestamp': '2025-10-13 15:34:21.127523', '_unique_id': '801554cbfaba447bb2926c7f768bdd1e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.128 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.130 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceph-mon[29756]: pgmap v3593: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.168 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.169 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.170 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.191 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.191 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '43fdb55d-28c5-4233-86df-1a2d62c70bfd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:34:21.130264', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16c7565a-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': 'e2477e20cee722f0adb6f6f4a6b7961ff36b37191e0e31b91eb17f15de3d0cae'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:34:21.130264', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16c768d4-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': '675730b14452867cfc7123a246ceb781d48522b370a6503d3f1707bdfc61d029'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 108, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:34:21.130264', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '16c772fc-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': 'e419f48861f5f9727f323701c8931cf01c5b6b6331ebad010ef607ec2f0e21b5'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:34:21.130264', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16cab9a8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.359547318, 'message_signature': 'b598a71398069ce2df00d40c34efbb5dd6e3cc630c084c295dc440866de596c9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:34:21.130264', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16cac7c2-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.359547318, 'message_signature': '7b87cad26a238b3bf92d1391c1f26e2f39f728e768e0692b70ec54b7c0a94b6a'}]}, 'timestamp': '2025-10-13 15:34:21.192120', '_unique_id': '33d9002e22ee4c0e868748daa77eab84'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.193 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.194 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.194 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.194 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a188c134-275c-41f3-9e78-540af1fa3676', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:34:21.194555', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '16cb32b6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.307125343, 'message_signature': '4de2131fe60f23fdffdfc195b7ffbd63e0b6319a1aaaf3670d354a10aa5b2742'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:34:21.194555', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '16cb3c98-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.313208921, 'message_signature': '135fd865272cb6a4b3c3ef44f6c3ccd5b1f8b75cf85a05f2fe4846baec0f14e6'}]}, 'timestamp': '2025-10-13 15:34:21.195096', '_unique_id': 'd36fafa832c644e6bd196da1e1116cf2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.195 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.196 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.196 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.196 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0afe8263-fa10-4533-b6f4-7e01b8d89367', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:34:21.196436', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '16cb7c3a-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.307125343, 'message_signature': 'cbb20e9da324bc9770e4a25057e4e84e3fcc3884a4ca23f36a6b280fdd870ded'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:34:21.196436', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '16cb8608-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.313208921, 'message_signature': 'cb3481e6744c78e22293a377ca16e754e97b1d9b11ce561f20f544e5376127b9'}]}, 'timestamp': '2025-10-13 15:34:21.196974', '_unique_id': '23142b4a5b804d0b951066fad07ee9ce'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.197 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.198 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.198 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 4918 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.198 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 3756 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c569528c-fe5a-4440-bfdf-3498c43d80be', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4918, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:34:21.198458', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '16cbcaaa-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.307125343, 'message_signature': '13b259b4bce12a2328c28fc7d3c289909a071eae8dd990104e4ca0bee67b0dbc'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3756, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:34:21.198458', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '16cbd3e2-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.313208921, 'message_signature': 'bae3acc56a3d74e6d6e10dafe80e9005d92f31d3ac00a07c683f179f0eff747a'}]}, 'timestamp': '2025-10-13 15:34:21.198966', '_unique_id': '21627a8990e648ce84097b6b25e94575'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.200 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.200 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 32090000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.200 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 31920000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd079d675-394f-4df5-bae5-d9fb0ec6bf27', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 32090000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:34:21.200328', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '16cc12bc-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.276734536, 'message_signature': '7c06d3284c8ed5322880171861f9c47458bff4fb082076de2c49ec2b9397ec81'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 31920000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': '2025-10-13T15:34:21.200328', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '16cc1cf8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.303543193, 'message_signature': 'a09cfdd33fb184fe5f3f4f002a124a39dbc2abd8d173dcc884b60e5d5e94728a'}]}, 'timestamp': '2025-10-13 15:34:21.200827', '_unique_id': '648116ac26a84971b596a3858fed4f0f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.201 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.202 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.202 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.202 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a79658cb-a6f4-4e40-91b1-b41d656bd958', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:34:21.202139', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '16cc59a2-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.307125343, 'message_signature': '2f3df323e0a69f1279eaa8a905a4966350f52c2ebd66e2717d5ee6f61d9b6b0a'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:34:21.202139', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '16cc63f2-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.313208921, 'message_signature': 'd943e30a4f5546fe2a07b965e8d45514fa6996934526f46520342aed4c43e05d'}]}, 'timestamp': '2025-10-13 15:34:21.202653', '_unique_id': '636cf118b9d041cc8f1a907610e7b261'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.203 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 180 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.204 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 180 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '17b8eff7-80d6-4f57-83ec-2361c0b80859', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 180, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:34:21.203918', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '16cc9f48-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.307125343, 'message_signature': '02a0da9d8143901e4ab94c512cdb04d5286ff93b064e498d7f8562f65ee0dcc9'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 180, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:34:21.203918', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '16cca84e-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.313208921, 'message_signature': '622d2e77ed380777daa73653e080e0c810ab3e475a402fbd240a5afc0262ae33'}]}, 'timestamp': '2025-10-13 15:34:21.204403', '_unique_id': '3ce7fbc0928448c286926e0ffef124f9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.205 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.206 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.206 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.206 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.206 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2024ee11-93cd-4e5d-93e2-339bb280d51f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:34:21.205725', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16cce5a2-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.187534928, 'message_signature': '319ce1025343bb324fac726d292c2fca8cccce64db5fe549f0cf7451ecfa6e8c'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:34:21.205725', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16ccef84-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.187534928, 'message_signature': '1ca3cc51ea89f66bf8fe335c93611d3eb548d650fc39e7f34473ded45f1ddec8'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:34:21.205725', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '16ccf7ae-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.187534928, 'message_signature': '00726c81c2e00d70e64e6521c0a654507318efce97ac2a9830fd1342ef9b5440'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:34:21.205725', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16cd0078-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.218696568, 'message_signature': 'f9fa0ea9ba06eeac2baea8f245f4392ecd682e5369337487acea9d71f2ea0166'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:34:21.205725', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16cd08c0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.218696568, 'message_signature': '5fad71039ea9855f613c0c95f03220ea527be495271e6aff733623f781d9021f'}]}, 'timestamp': '2025-10-13 15:34:21.206865', '_unique_id': '4750fd4995dc4002be7e6bb1d779eae7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.207 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.208 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.208 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.208 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.208 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.209 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.209 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '883a40ec-b00a-4e4e-89d8-ea4f84cb1fc1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:34:21.208335', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16cd4bd2-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': 'c3531b65e5cb8a37f2d5a52f8ea59a359b287837b79296a837f7da086b708924'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:34:21.208335', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16cd55e6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': '91aca605d95ebb606c281c40e9d2455fa54d284f11373e8c1aed0ed1dbfc62eb'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:34:21.208335', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '16cd5e4c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': 'f5d3d6de9eba6adcb07f830d90f6c56fb4d6514cc1c4c01d3c1c36cf2487e494'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:34:21.208335', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16cd669e-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.359547318, 'message_signature': '0858d2f8c1b09cc1e4b7e890c61218fff8ba2b71e7fc2a4a65061ec7557fdb1b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:34:21.208335', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16cd6e82-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.359547318, 'message_signature': '360286df63a0884f2f92fd751a3281173a096d7d783dd01743018c1fedaa7df5'}]}, 'timestamp': '2025-10-13 15:34:21.209466', '_unique_id': '00f36efcac17403fae358c2cec6e6069'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.210 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.211 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.211 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 857967408 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.211 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 155394833 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.212 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 63114260 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.212 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.212 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b583fa6c-bcf4-42c1-b005-a8eeee571ae1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 857967408, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:34:21.211140', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16cdbb1c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': '41d9c5c28878c5af6dc6a8c4c1b9ccbe9df23216e595b20cfd135a160d4c2747'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 155394833, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:34:21.211140', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16cdcf1c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': '28f33a528ae010c1b6b0240a9626f65413492335fe03584eb22ba83936286bd4'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 63114260, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:34:21.211140', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '16cde0ba-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': '6da9a7dee185ea384f380863e16f57021c8bdb42980b4acdab11aec8191264dc'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:34:21.211140', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16cdec54-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.359547318, 'message_signature': '3eec2c2a3dbd24ddb7189a3021e496223936e5c3d886e129b5e836674f7b8cf1'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:34:21.211140', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16cdf5f0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.359547318, 'message_signature': 'aae8847722a520d0ec5b74e71da2cb4113d2a46515ac4b5fd08be94738a2398b'}]}, 'timestamp': '2025-10-13 15:34:21.212952', '_unique_id': '1b281a8aa36345d99a4dc99d74e94582'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.213 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.214 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.214 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 501653653 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.215 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.215 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.215 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.215 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '880595ab-894f-45dc-b103-99486f636fb9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 501653653, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:34:21.214835', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16ce4aa0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': '49aafa7a77998450f980a43dcf643f5beccf463fe3722316b91cf986e9326bce'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:34:21.214835', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16ce5590-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': '57fcd2efc841367e8d28762209d3b0b196e060bab45cf915e42ad71a5d8baeb3'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:34:21.214835', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '16ce60b2-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': 'b7ad263133f2be9457512437b94f806daf8a225803a9d81704b0ade2cd4750ea'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:34:21.214835', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16ce6ad0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.359547318, 'message_signature': '0c0bd2875311ca549186e098c7063a7d43b757cff99799501221585e0aa04595'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:34:21.214835', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16ce7494-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.359547318, 'message_signature': '6dbdc7746d9586744059ebeb076b7f0826c2cac2c4ff79759f4c8c3fbf79bb6b'}]}, 'timestamp': '2025-10-13 15:34:21.216200', '_unique_id': 'd2b3e5a210ad45bc9c9f8b5d8aed9c8b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.216 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.218 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.218 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 66 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.219 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 51 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ebb12a26-e85a-4cf7-add6-37c888212d41', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 66, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:34:21.218790', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '16cee640-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.307125343, 'message_signature': '556f1e812a14a45404443f482b59aa71f1cdff5579d43e270163bee99141e829'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 51, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:34:21.218790', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '16cef220-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.313208921, 'message_signature': '1adaf89881f8d96a7c6f5d2b8757d984dcd88e4cc26997bf7fcf848d0ee7ed33'}]}, 'timestamp': '2025-10-13 15:34:21.219428', '_unique_id': '2d26950b1bed4cce862dd98422d9a8f7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.220 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.221 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.221 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.221 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6d21ad0f-e694-4b24-846c-08c03c54057d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:34:21.221171', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '16cf45e0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.307125343, 'message_signature': '352de8db5e25caf519a49c2c95b72125bf0d2299ceb4ea367f29385a5282ff3e'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:34:21.221171', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '16cf5378-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.313208921, 'message_signature': '6a8286df7f3ac86c5fc1be1c5414791e75eb01c73297436db7416bd16030d663'}]}, 'timestamp': '2025-10-13 15:34:21.221922', '_unique_id': 'a9b7adf7398b4cd38e23e660015f8e14'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.222 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.223 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.223 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.224 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.224 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 2321920 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.224 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.224 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '68252739-218a-41f3-bcaf-1e77fdf86279', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:34:21.223687', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16cfa47c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': 'bb142bd0537f729d836fa10567a6dd011a72427db290d26d3152b1e2cfa7063f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:34:21.223687', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16cfaf80-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': 'c70a76f64025be1158f0238d1ee06d432b6f4681392c1e33e6b1c8d4c7ba03f1'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2321920, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:34:21.223687', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '16cfb976-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': 'f79461343282f468d4128b56e17f1d5f28704e801a12632e4f0a1c744550b5ea'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:34:21.223687', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16cfc484-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.359547318, 'message_signature': '704f3d70ee8e08df00765fd4bbfc3ead9b82ef4a7756a12b9cc945c80b56c115'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:34:21.223687', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16cfce52-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.359547318, 'message_signature': '58923cc9b3645232a84484421f3a276632bf95535c2c321b5e044814890ee757'}]}, 'timestamp': '2025-10-13 15:34:21.225049', '_unique_id': '6ac09ebeb7d74cce9e92a4633316d75e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.225 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.226 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.226 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 4526 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.227 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '176c6314-4571-4dbc-b20e-5934a295c11e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4526, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:34:21.226926', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '16d0235c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.307125343, 'message_signature': '008504da951883a20104904bd4a4eacc6d71cd2297b7a4fcbb43aec0f653f8a5'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:34:21.226926', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '16d02f50-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.313208921, 'message_signature': 'ab9605fb91cab97c935613ad1ed7f0f4780e195550f486149e16930892015d1c'}]}, 'timestamp': '2025-10-13 15:34:21.227572', '_unique_id': '0a610488d500470a9032914006c5061f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.228 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.229 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.229 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.229 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.229 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.230 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.230 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0437d8c7-a373-481c-a43d-6e9bcaafaef3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:34:21.229350', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16d08284-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.187534928, 'message_signature': '89da290fb2c7916535be101b345c3db838a3f6a0ea31dad010e19adb0c64778e'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:34:21.229350', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16d08d60-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.187534928, 'message_signature': '03597e598f28712bedb228ad44dc9c814f356b73742494fff01d129c73283bb1'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:34:21.229350', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '16d0977e-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.187534928, 'message_signature': '7c6e760500488cf33878c4238c435eae0099dea4f58d45c1f3a61bfdd03e578a'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:34:21.229350', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16d0a372-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.218696568, 'message_signature': '7c6bccda41c70b59a19821dda19a5fb9d53311b0a2f1f10b050eb7b9a0f6e10f'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:34:21.229350', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16d0aea8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.218696568, 'message_signature': '3867370f3cc226fa30bb97c54b9a25cd9998ec5ff39a19f611605581cdc8abd2'}]}, 'timestamp': '2025-10-13 15:34:21.230795', '_unique_id': 'b8bf87149cf84d7e971cd48419289c4e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.231 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.232 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.232 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 495 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.232 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.233 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.233 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.233 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a331ed85-7487-4bed-a08f-2664d893d12f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 495, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:34:21.232570', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16d0ffc0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': 'c252364b5ded0a7f5bd6421db7b790ad1e500c1a627604772edb4993408ac4c6'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:34:21.232570', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16d10a56-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': '25c4ce17bef236e505158b48a4909f5f3be85b3845cf3fdf4f9d62cce63fc962'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:34:21.232570', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '16d113de-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.319516405, 'message_signature': '8f513eba3d4d323639a6b40ab3fdbd9baaec6c9232e7ff2d831d28e973fa2bcc'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:34:21.232570', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '16d11e74-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.359547318, 'message_signature': '8b96ad7083f618e3c60c4ac5cdeed44d81ae911c78729cca11ac418a62f517ae'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:34:21.232570', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '16d12842-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.359547318, 'message_signature': '1999e549692d10622712b85836fb76cd3c553c6f63d6ddd862193056c2d1af72'}]}, 'timestamp': '2025-10-13 15:34:21.233906', '_unique_id': '45077ce48aee43fcabd8b7c4c3ef9213'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.234 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.235 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.235 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.235 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 55 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.236 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '861f0b8d-7050-4df4-8393-1d5791eeec3e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 55, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:34:21.235857', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '16d17ffe-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.307125343, 'message_signature': 'cfb3a3b6611aeb2d51b7eff55fb3e75c10e4cfacae8ed54ca763fa57411b6ca5'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:34:21.235857', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '16d18b3e-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9104.313208921, 'message_signature': '1bdeb0e45a1c13d88befb8adb1818a167ae39416ba3ff071b7f4ebf3b628a43b'}]}, 'timestamp': '2025-10-13 15:34:21.236451', '_unique_id': '6335bc5568af4ade9b817aff39b690df'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:34:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:34:21.237 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:34:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3594: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:21 standalone.localdomain python3.9[502327]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 13 15:34:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:34:21 standalone.localdomain podman[502383]: 2025-10-13 15:34:21.831326949 +0000 UTC m=+0.090258892 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:34:21 standalone.localdomain podman[502383]: 2025-10-13 15:34:21.841862184 +0000 UTC m=+0.100794117 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:34:21 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:34:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:34:22 standalone.localdomain python3.9[502453]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:34:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:22.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:23 standalone.localdomain ceph-mon[29756]: pgmap v3594: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:34:23 standalone.localdomain python3.9[502561]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:34:23
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', 'vms', 'manila_data', 'backups', '.mgr', 'images', 'manila_metadata']
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3595: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39029 DF PROTO=TCP SPT=47166 DPT=9102 SEQ=2028791049 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC63BDF0000000001030307) 
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:34:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:34:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:23.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39030 DF PROTO=TCP SPT=47166 DPT=9102 SEQ=2028791049 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC63FF60000000001030307) 
Oct 13 15:34:25 standalone.localdomain ceph-mon[29756]: pgmap v3595: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3596: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:34:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39031 DF PROTO=TCP SPT=47166 DPT=9102 SEQ=2028791049 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC647F60000000001030307) 
Oct 13 15:34:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:34:27 standalone.localdomain ceph-mon[29756]: pgmap v3596: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:34:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3597: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:34:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:27.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:34:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:34:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:34:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:34:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:34:27 standalone.localdomain podman[502607]: 2025-10-13 15:34:27.881736093 +0000 UTC m=+0.139591972 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.openshift.expose-services=, build-date=2025-07-21T14:56:28, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, container_name=swift_object_server, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:34:27 standalone.localdomain podman[502625]: 2025-10-13 15:34:27.929792554 +0000 UTC m=+0.172999812 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:34:27 standalone.localdomain podman[502625]: 2025-10-13 15:34:27.962409029 +0000 UTC m=+0.205616347 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:34:27 standalone.localdomain podman[502619]: 2025-10-13 15:34:27.860188949 +0000 UTC m=+0.104786590 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:11:22, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20250721.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., version=17.1.9, com.redhat.component=openstack-swift-account-container, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 15:34:28 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:34:28 standalone.localdomain podman[502608]: 2025-10-13 15:34:27.832462694 +0000 UTC m=+0.084732351 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, name=rhosp17/openstack-swift-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc.)
Oct 13 15:34:28 standalone.localdomain podman[502607]: 2025-10-13 15:34:28.060869133 +0000 UTC m=+0.318725042 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, name=rhosp17/openstack-swift-object, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., 
container_name=swift_object_server, config_id=tripleo_step4, release=1, version=17.1.9, batch=17.1_20250721.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.component=openstack-swift-object-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64)
Oct 13 15:34:28 standalone.localdomain podman[502619]: 2025-10-13 15:34:28.069813148 +0000 UTC m=+0.314410779 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., container_name=swift_account_server, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T16:11:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, 
description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:34:28 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:34:28 standalone.localdomain podman[502608]: 2025-10-13 15:34:28.076793423 +0000 UTC m=+0.329063090 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, tcib_managed=true, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, distribution-scope=public, 
name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20250721.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, version=17.1.9)
Oct 13 15:34:28 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:34:28 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:34:28 standalone.localdomain podman[502606]: 2025-10-13 15:34:28.012821913 +0000 UTC m=+0.271683423 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:34:28 standalone.localdomain podman[502606]: 2025-10-13 15:34:28.146082778 +0000 UTC m=+0.404944318 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid)
Oct 13 15:34:28 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:34:28 standalone.localdomain python3[502817]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:34:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:28.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:28 standalone.localdomain python3[502817]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                             {
                                                                  "Id": "7acf5363984cc8f102650810da36ae6f915a365c30bf42518548c6b195c5c57c",
                                                                  "Digest": "sha256:5f2ae19ef15dc3d54ad5e1703eef8fefe0d5435f620e138c261e2c61f335cf42",
                                                                  "RepoTags": [
                                                                       "quay.io/podified-antelope-centos9/openstack-iscsid:current-podified"
                                                                  ],
                                                                  "RepoDigests": [
                                                                       "quay.io/podified-antelope-centos9/openstack-iscsid@sha256:5f2ae19ef15dc3d54ad5e1703eef8fefe0d5435f620e138c261e2c61f335cf42"
                                                                  ],
                                                                  "Parent": "",
                                                                  "Comment": "",
                                                                  "Created": "2025-10-13T06:15:23.997982946Z",
                                                                  "Config": {
                                                                       "User": "root",
                                                                       "Env": [
                                                                            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                            "LANG=en_US.UTF-8",
                                                                            "TZ=UTC",
                                                                            "container=oci"
                                                                       ],
                                                                       "Entrypoint": [
                                                                            "dumb-init",
                                                                            "--single-child",
                                                                            "--"
                                                                       ],
                                                                       "Cmd": [
                                                                            "kolla_start"
                                                                       ],
                                                                       "Labels": {
                                                                            "io.buildah.version": "1.41.3",
                                                                            "maintainer": "OpenStack Kubernetes Operator team",
                                                                            "org.label-schema.build-date": "20251009",
                                                                            "org.label-schema.license": "GPLv2",
                                                                            "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                            "org.label-schema.schema-version": "1.0",
                                                                            "org.label-schema.vendor": "CentOS",
                                                                            "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                            "tcib_managed": "true"
                                                                       },
                                                                       "StopSignal": "SIGTERM"
                                                                  },
                                                                  "Version": "",
                                                                  "Author": "",
                                                                  "Architecture": "amd64",
                                                                  "Os": "linux",
                                                                  "Size": 403860617,
                                                                  "VirtualSize": 403860617,
                                                                  "GraphDriver": {
                                                                       "Name": "overlay",
                                                                       "Data": {
                                                                            "LowerDir": "/var/lib/containers/storage/overlay/00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4/diff:/var/lib/containers/storage/overlay/00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",
                                                                            "UpperDir": "/var/lib/containers/storage/overlay/91a4eecb381682fec00b796b5c8dfad5758b6624b0e7b9dfc84b2667c778f31e/diff",
                                                                            "WorkDir": "/var/lib/containers/storage/overlay/91a4eecb381682fec00b796b5c8dfad5758b6624b0e7b9dfc84b2667c778f31e/work"
                                                                       }
                                                                  },
                                                                  "RootFS": {
                                                                       "Type": "layers",
                                                                       "Layers": [
                                                                            "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",
                                                                            "sha256:b783c4d6e5ad469a2da52d1f9f5642b4a99005a13a4f8f9855f9f6ed3dcfeddf",
                                                                            "sha256:4dd9b6336279e9bb933efd1972a9f033bb03b41daa8f211234bd85673a3f81ca",
                                                                            "sha256:0ab1c826063c05055623c37c80ed0ce3e93bc2506e1bd27474790c822896beee"
                                                                       ]
                                                                  },
                                                                  "Labels": {
                                                                       "io.buildah.version": "1.41.3",
                                                                       "maintainer": "OpenStack Kubernetes Operator team",
                                                                       "org.label-schema.build-date": "20251009",
                                                                       "org.label-schema.license": "GPLv2",
                                                                       "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                       "org.label-schema.schema-version": "1.0",
                                                                       "org.label-schema.vendor": "CentOS",
                                                                       "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                       "tcib_managed": "true"
                                                                  },
                                                                  "Annotations": {},
                                                                  "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                  "User": "root",
                                                                  "History": [
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.867908726Z",
                                                                            "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.868015697Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251009\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:07.890794359Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.75496777Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                            "comment": "FROM quay.io/centos/centos:stream9",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.754993311Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755013771Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755032692Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755082953Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755102984Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:47.201574917Z",
                                                                            "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:23.544915378Z",
                                                                            "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.119117694Z",
                                                                            "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.552286083Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.962220801Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:28.715322167Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.14263152Z",
                                                                            "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.560856504Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.004904301Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.42712225Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.823321864Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.238615281Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.634039923Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.0188837Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.395414523Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.805928482Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.197074343Z",
                                                                            "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.605614349Z",
                                                                            "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.372128638Z",
                                                                            "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.755904629Z",
                                                                            "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:38.104256162Z",
                                                                            "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:39.755515954Z",
                                                                            "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870322882Z",
                                                                            "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870431084Z",
                                                                            "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870453635Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870467555Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:43.347797515Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:12:52.112593379Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:13:34.238039873Z",
                                                                            "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:13:37.655425275Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:14:28.849756008Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:15:14.186138693Z",
                                                                            "created_by": "/bin/sh -c dnf -y install iscsi-initiator-utils python3-rtslib targetcli socat && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:15:18.201887791Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/iscsid/extend_start.sh /usr/local/bin/kolla_extend_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:15:20.468735099Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_extend_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:15:23.966935506Z",
                                                                            "created_by": "/bin/sh -c rm -f /etc/iscsi/initiatorname.iscsi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:15:26.178206367Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       }
                                                                  ],
                                                                  "NamesHistory": [
                                                                       "quay.io/podified-antelope-centos9/openstack-iscsid:current-podified"
                                                                  ]
                                                             }
                                                        ]
                                                        : quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:34:29 standalone.localdomain ceph-mon[29756]: pgmap v3597: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:34:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3598: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:34:29 standalone.localdomain python3.9[502989]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:34:30 standalone.localdomain ceph-mon[29756]: pgmap v3598: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:34:30 standalone.localdomain python3.9[503099]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39032 DF PROTO=TCP SPT=47166 DPT=9102 SEQ=2028791049 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC657B60000000001030307) 
Oct 13 15:34:30 standalone.localdomain python3.9[503152]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:34:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:31.102 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:34:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3599: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:34:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:34:31 standalone.localdomain podman[503207]: 2025-10-13 15:34:31.599587605 +0000 UTC m=+0.090351924 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:34:31 standalone.localdomain podman[503207]: 2025-10-13 15:34:31.642128316 +0000 UTC m=+0.132892645 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute)
Oct 13 15:34:31 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:34:32 standalone.localdomain python3.9[503279]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760369671.2294197-342-233690429578452/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:32.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:34:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:34:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:32.255 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:34:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:32.280 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Triggering sync for uuid 54a46fec-332e-42f9-83ed-88e763d13f63 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 13 15:34:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:32.280 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Triggering sync for uuid 8f68d5aa-abc4-451d-89d2-f5342b71831c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 13 15:34:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:32.281 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "54a46fec-332e-42f9-83ed-88e763d13f63" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:34:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:32.281 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:34:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:32.282 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "8f68d5aa-abc4-451d-89d2-f5342b71831c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:34:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:32.282 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "8f68d5aa-abc4-451d-89d2-f5342b71831c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:34:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:32.343 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:34:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:32.357 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "8f68d5aa-abc4-451d-89d2-f5342b71831c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:34:32 standalone.localdomain python3.9[503332]: ansible-systemd Invoked with state=started name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:34:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:32.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:33 standalone.localdomain ceph-mon[29756]: pgmap v3599: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:34:33 standalone.localdomain python3.9[503442]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:34:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3600: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:34:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:33.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:33 standalone.localdomain python3.9[503552]: ansible-ansible.builtin.systemd Invoked with name=edpm_iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:34:33 standalone.localdomain systemd[1]: Stopping iscsid container...
Oct 13 15:34:34 standalone.localdomain systemd[1]: tmp-crun.QKPY7B.mount: Deactivated successfully.
Oct 13 15:34:34 standalone.localdomain iscsid[436036]: iscsid shutting down.
Oct 13 15:34:34 standalone.localdomain systemd[1]: libpod-2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.scope: Deactivated successfully.
Oct 13 15:34:34 standalone.localdomain podman[503556]: 2025-10-13 15:34:34.066887722 +0000 UTC m=+0.099531028 container died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 13 15:34:34 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.timer: Deactivated successfully.
Oct 13 15:34:34 standalone.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:34:34 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Failed to open /run/systemd/transient/2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: No such file or directory
Oct 13 15:34:34 standalone.localdomain podman[503556]: 2025-10-13 15:34:34.181558656 +0000 UTC m=+0.214201862 container cleanup 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:34:34 standalone.localdomain podman[503556]: iscsid
Oct 13 15:34:34 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.timer: Failed to open /run/systemd/transient/2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.timer: No such file or directory
Oct 13 15:34:34 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Failed to open /run/systemd/transient/2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: No such file or directory
Oct 13 15:34:34 standalone.localdomain podman[503581]: 2025-10-13 15:34:34.261071125 +0000 UTC m=+0.051753786 container cleanup 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true)
Oct 13 15:34:34 standalone.localdomain podman[503581]: iscsid
Oct 13 15:34:34 standalone.localdomain systemd[1]: edpm_iscsid.service: Deactivated successfully.
Oct 13 15:34:34 standalone.localdomain systemd[1]: Stopped iscsid container.
Oct 13 15:34:34 standalone.localdomain systemd[1]: Starting iscsid container...
Oct 13 15:34:34 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:34:34 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baba114622faabd7df4acfc2ea05e1fb69108349bd698d1d0ee856afc8245d96/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 15:34:34 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baba114622faabd7df4acfc2ea05e1fb69108349bd698d1d0ee856afc8245d96/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 13 15:34:34 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baba114622faabd7df4acfc2ea05e1fb69108349bd698d1d0ee856afc8245d96/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 15:34:34 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.timer: Failed to open /run/systemd/transient/2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.timer: No such file or directory
Oct 13 15:34:34 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Failed to open /run/systemd/transient/2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: No such file or directory
Oct 13 15:34:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:34:34 standalone.localdomain podman[503595]: 2025-10-13 15:34:34.415300048 +0000 UTC m=+0.126783488 container init 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: + sudo -E kolla_set_configs
Oct 13 15:34:34 standalone.localdomain sudo[503616]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 13 15:34:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:34:34 standalone.localdomain podman[503595]: 2025-10-13 15:34:34.447741198 +0000 UTC m=+0.159224518 container start 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid)
Oct 13 15:34:34 standalone.localdomain podman[503595]: iscsid
Oct 13 15:34:34 standalone.localdomain systemd[1]: Started Session c22 of User root.
Oct 13 15:34:34 standalone.localdomain systemd[1]: Started iscsid container.
Oct 13 15:34:34 standalone.localdomain sudo[503616]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: INFO:__main__:Validating config file
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: INFO:__main__:Writing out command to execute
Oct 13 15:34:34 standalone.localdomain sudo[503616]: pam_unix(sudo:session): session closed for user root
Oct 13 15:34:34 standalone.localdomain systemd[1]: session-c22.scope: Deactivated successfully.
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: ++ cat /run_command
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: + CMD='/usr/sbin/iscsid -f'
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: + ARGS=
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: + sudo kolla_copy_cacerts
Oct 13 15:34:34 standalone.localdomain sudo[503640]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 13 15:34:34 standalone.localdomain podman[503618]: 2025-10-13 15:34:34.532245131 +0000 UTC m=+0.080883583 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:34:34 standalone.localdomain systemd[1]: Started Session c23 of User root.
Oct 13 15:34:34 standalone.localdomain sudo[503640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:34:34 standalone.localdomain sudo[503640]: pam_unix(sudo:session): session closed for user root
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: + [[ ! -n '' ]]
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: + . kolla_extend_start
Oct 13 15:34:34 standalone.localdomain systemd[1]: session-c23.scope: Deactivated successfully.
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: Running command: '/usr/sbin/iscsid -f'
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: + umask 0022
Oct 13 15:34:34 standalone.localdomain iscsid[503610]: + exec /usr/sbin/iscsid -f
Oct 13 15:34:34 standalone.localdomain podman[503618]: 2025-10-13 15:34:34.573048038 +0000 UTC m=+0.121686480 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:34:34 standalone.localdomain podman[503618]: unhealthy
Oct 13 15:34:34 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 15:34:34 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Failed with result 'exit-code'.
Oct 13 15:34:35 standalone.localdomain python3.9[503749]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:35 standalone.localdomain ceph-mon[29756]: pgmap v3600: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:34:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3601: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:34:35 standalone.localdomain python3.9[503857]: ansible-ansible.builtin.service_facts Invoked
Oct 13 15:34:35 standalone.localdomain network[503874]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 15:34:35 standalone.localdomain network[503875]: 'network-scripts' will be removed from distribution in near future.
Oct 13 15:34:35 standalone.localdomain network[503876]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 15:34:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:34:35 standalone.localdomain podman[503882]: 2025-10-13 15:34:35.988830284 +0000 UTC m=+0.080633366 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:34:36 standalone.localdomain podman[503882]: 2025-10-13 15:34:36.024037329 +0000 UTC m=+0.115840391 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:34:36 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:36.112 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:34:36 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:34:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:37.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:34:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:37.090 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:34:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:37.090 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:34:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:34:37 standalone.localdomain ceph-mon[29756]: pgmap v3601: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:34:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3602: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:37 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:34:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:37.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.229 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.229 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.230 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.230 2 DEBUG nova.objects.instance [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.588 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.604 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.606 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.607 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.607 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.608 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.608 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.626 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.628 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.628 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.629 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.630 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:34:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:38.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:34:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/4266432424' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:34:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:39.110 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:34:39 standalone.localdomain ceph-mon[29756]: pgmap v3602: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:39 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4266432424' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:34:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:39.181 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:34:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:39.182 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:34:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:39.182 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:34:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:39.185 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:34:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:39.185 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:34:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:39.368 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:34:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:39.371 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9250MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:34:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:39.371 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:34:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:39.372 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:34:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3603: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:39.434 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:34:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:39.434 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:34:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:39.434 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:34:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:39.434 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:34:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:39.503 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:34:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:34:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3175630189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:34:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:40.019 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:34:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:40.030 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:34:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:40.048 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:34:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:40.053 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:34:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:40.053 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:34:40 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3175630189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:34:40 standalone.localdomain python3.9[504185]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 15:34:40 standalone.localdomain sudo[504186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:34:40 standalone.localdomain sudo[504186]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:34:40 standalone.localdomain sudo[504186]: pam_unix(sudo:session): session closed for user root
Oct 13 15:34:40 standalone.localdomain sudo[504221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 15:34:40 standalone.localdomain sudo[504221]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:34:41 standalone.localdomain python3.9[504356]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 13 15:34:41 standalone.localdomain ceph-mon[29756]: pgmap v3603: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3604: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:41 standalone.localdomain podman[504436]: 2025-10-13 15:34:41.437809538 +0000 UTC m=+0.102846091 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, description=Red Hat Ceph Storage 7, release=553, GIT_BRANCH=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, version=7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Oct 13 15:34:41 standalone.localdomain podman[504436]: 2025-10-13 15:34:41.564813461 +0000 UTC m=+0.229850004 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, ceph=True, vcs-type=git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_BRANCH=main, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Oct 13 15:34:41 standalone.localdomain podman[467099]: time="2025-10-13T15:34:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:34:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:34:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414658 "" "Go-http-client/1.1"
Oct 13 15:34:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:34:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49102 "" "Go-http-client/1.1"
Oct 13 15:34:41 standalone.localdomain python3.9[504529]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:34:42 standalone.localdomain sudo[504221]: pam_unix(sudo:session): session closed for user root
Oct 13 15:34:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 15:34:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:34:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 15:34:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:34:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:34:42 standalone.localdomain sudo[504617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:34:42 standalone.localdomain sudo[504617]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:34:42 standalone.localdomain sudo[504617]: pam_unix(sudo:session): session closed for user root
Oct 13 15:34:42 standalone.localdomain sudo[504635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:34:42 standalone.localdomain sudo[504635]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:34:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:42.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:42 standalone.localdomain sudo[504635]: pam_unix(sudo:session): session closed for user root
Oct 13 15:34:42 standalone.localdomain python3.9[504714]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:34:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:34:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:34:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:34:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:34:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:34:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:34:42 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:34:42 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 5e67324b-afdf-43eb-80ba-93366708288b (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:34:42 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 5e67324b-afdf-43eb-80ba-93366708288b (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:34:42 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 5e67324b-afdf-43eb-80ba-93366708288b (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:34:42 standalone.localdomain sudo[504755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:34:42 standalone.localdomain sudo[504755]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:34:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:34:42 standalone.localdomain sudo[504755]: pam_unix(sudo:session): session closed for user root
Oct 13 15:34:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:34:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:34:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:34:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:34:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:34:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:34:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:34:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:34:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:34:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:34:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:34:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:34:43 standalone.localdomain podman[504772]: 2025-10-13 15:34:43.01967676 +0000 UTC m=+0.089054344 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:34:43 standalone.localdomain podman[504772]: 2025-10-13 15:34:43.030804043 +0000 UTC m=+0.100181637 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:34:43 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:34:43 standalone.localdomain ceph-mon[29756]: pgmap v3604: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:34:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:34:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:34:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:34:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:34:43 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:34:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3605: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:43 standalone.localdomain python3.9[504879]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:43 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:43.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:44 standalone.localdomain python3.9[504987]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: pgmap v3605: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.174794) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369685174857, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 823, "num_deletes": 251, "total_data_size": 627156, "memory_usage": 642376, "flush_reason": "Manual Compaction"}
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369685180302, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 615912, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84769, "largest_seqno": 85591, "table_properties": {"data_size": 612087, "index_size": 1617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8803, "raw_average_key_size": 19, "raw_value_size": 604261, "raw_average_value_size": 1339, "num_data_blocks": 72, "num_entries": 451, "num_filter_entries": 451, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760369627, "oldest_key_time": 1760369627, "file_creation_time": 1760369685, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 5542 microseconds, and 2767 cpu microseconds.
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.180338) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 615912 bytes OK
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.180364) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.182194) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.182244) EVENT_LOG_v1 {"time_micros": 1760369685182232, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.182272) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 622992, prev total WAL file size 622992, number of live WAL files 2.
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.182938) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(601KB)], [194(5594KB)]
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369685182991, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 6344825, "oldest_snapshot_seqno": -1}
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 6769 keys, 5319615 bytes, temperature: kUnknown
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369685209667, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 5319615, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5282781, "index_size": 18814, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16965, "raw_key_size": 179802, "raw_average_key_size": 26, "raw_value_size": 5167682, "raw_average_value_size": 763, "num_data_blocks": 732, "num_entries": 6769, "num_filter_entries": 6769, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760369685, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.210004) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 5319615 bytes
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.211928) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 236.9 rd, 198.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 5.5 +0.0 blob) out(5.1 +0.0 blob), read-write-amplify(18.9) write-amplify(8.6) OK, records in: 7289, records dropped: 520 output_compression: NoCompression
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.211961) EVENT_LOG_v1 {"time_micros": 1760369685211946, "job": 122, "event": "compaction_finished", "compaction_time_micros": 26779, "compaction_time_cpu_micros": 16331, "output_level": 6, "num_output_files": 1, "total_output_size": 5319615, "num_input_records": 7289, "num_output_records": 6769, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369685212387, "job": 122, "event": "table_file_deletion", "file_number": 196}
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369685213358, "job": 122, "event": "table_file_deletion", "file_number": 194}
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.182831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.213465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.213470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.213473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.213476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:34:45 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:34:45.213478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:34:45 standalone.localdomain python3.9[505095]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:34:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3606: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:45 standalone.localdomain python3.9[505205]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:34:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:34:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:46.537 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:34:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:46.539 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:34:46 standalone.localdomain python3.9[505315]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:34:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:34:47 standalone.localdomain ceph-mon[29756]: pgmap v3606: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3607: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:47 standalone.localdomain python3.9[505424]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:47.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:48 standalone.localdomain ceph-mon[29756]: pgmap v3607: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:48 standalone.localdomain python3.9[505532]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:34:48 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:48.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:48 standalone.localdomain podman[505641]: 2025-10-13 15:34:48.844184607 +0000 UTC m=+0.103480719 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 13 15:34:48 standalone.localdomain python3.9[505640]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:48 standalone.localdomain podman[505641]: 2025-10-13 15:34:48.882232359 +0000 UTC m=+0.141528361 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:34:48 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:34:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3608: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:49 standalone.localdomain python3.9[505774]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:50 standalone.localdomain python3.9[505882]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:50 standalone.localdomain ceph-mon[29756]: pgmap v3608: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:50 standalone.localdomain python3.9[505990]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:34:51 standalone.localdomain python3.9[506100]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:34:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3609: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:34:51 standalone.localdomain podman[506131]: 2025-10-13 15:34:51.59402841 +0000 UTC m=+0.073254508 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, managed_by=edpm_ansible, config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 13 15:34:51 standalone.localdomain podman[506131]: 2025-10-13 15:34:51.609995622 +0000 UTC m=+0.089221810 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, release=1755695350, config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible)
Oct 13 15:34:51 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:34:51 standalone.localdomain python3.9[506227]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:34:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:34:52 standalone.localdomain python3.9[506282]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:34:52 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:52.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:34:52 standalone.localdomain podman[506371]: 2025-10-13 15:34:52.835775872 +0000 UTC m=+0.091307814 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:34:52 standalone.localdomain podman[506371]: 2025-10-13 15:34:52.847884436 +0000 UTC m=+0.103416398 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:34:52 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:34:53 standalone.localdomain python3.9[506401]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:34:53 standalone.localdomain ceph-mon[29756]: pgmap v3609: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:34:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:34:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:34:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:34:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:34:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:34:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3610: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5242 DF PROTO=TCP SPT=54480 DPT=9102 SEQ=3043655953 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC6B10F0000000001030307) 
Oct 13 15:34:53 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:53.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:54 standalone.localdomain python3.9[506464]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:34:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5243 DF PROTO=TCP SPT=54480 DPT=9102 SEQ=3043655953 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC6B5360000000001030307) 
Oct 13 15:34:54 standalone.localdomain python3.9[506572]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:55 standalone.localdomain ceph-mon[29756]: pgmap v3610: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3611: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:55 standalone.localdomain python3.9[506680]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:34:56 standalone.localdomain python3.9[506735]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5244 DF PROTO=TCP SPT=54480 DPT=9102 SEQ=3043655953 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC6BD360000000001030307) 
Oct 13 15:34:56 standalone.localdomain python3.9[506843]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:34:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:34:57 standalone.localdomain ceph-mon[29756]: pgmap v3611: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:57 standalone.localdomain python3.9[506898]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:34:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3612: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:57 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:57.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:58 standalone.localdomain python3.9[507006]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:34:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:34:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:34:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:34:58 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:34:58 standalone.localdomain systemd-rc-local-generator[507081]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:34:58 standalone.localdomain podman[507008]: 2025-10-13 15:34:58.191706249 +0000 UTC m=+0.087804756 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.openshift.expose-services=, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, container_name=swift_object_server, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, architecture=x86_64, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 swift-object, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28)
Oct 13 15:34:58 standalone.localdomain systemd-sysv-generator[507084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:34:58 standalone.localdomain podman[507009]: 2025-10-13 15:34:58.250747939 +0000 UTC m=+0.143643008 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, 
com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-account, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1)
Oct 13 15:34:58 standalone.localdomain podman[507010]: 2025-10-13 15:34:58.227756541 +0000 UTC m=+0.113794948 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:34:58 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:34:58 standalone.localdomain podman[507010]: 2025-10-13 15:34:58.308140748 +0000 UTC m=+0.194179175 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:34:58 standalone.localdomain podman[507008]: 2025-10-13 15:34:58.389848626 +0000 UTC m=+0.285947113 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_object_server, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, io.buildah.version=1.33.12)
Oct 13 15:34:58 standalone.localdomain podman[507009]: 2025-10-13 15:34:58.423841973 +0000 UTC m=+0.316737032 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-account, io.openshift.expose-services=, batch=17.1_20250721.1, container_name=swift_account_server, tcib_managed=true, com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 15:34:58 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:34:58 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:34:58 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:34:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:34:58 standalone.localdomain podman[507124]: 2025-10-13 15:34:58.613672613 +0000 UTC m=+0.059290778 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-container, version=17.1.9, distribution-scope=public, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32)
Oct 13 15:34:58 standalone.localdomain podman[507124]: 2025-10-13 15:34:58.829924956 +0000 UTC m=+0.275543151 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_id=tripleo_step4, container_name=swift_container_server, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, name=rhosp17/openstack-swift-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, io.buildah.version=1.33.12, io.openshift.expose-services=, 
summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:34:58 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:34:58 standalone.localdomain nova_compute[456855]: 2025-10-13 15:34:58.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:34:59 standalone.localdomain python3.9[507258]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:34:59 standalone.localdomain ceph-mon[29756]: pgmap v3612: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3613: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:34:59 standalone.localdomain python3.9[507313]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:00 standalone.localdomain python3.9[507421]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:35:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5245 DF PROTO=TCP SPT=54480 DPT=9102 SEQ=3043655953 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC6CCF60000000001030307) 
Oct 13 15:35:00 standalone.localdomain python3.9[507476]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:01 standalone.localdomain ceph-mon[29756]: pgmap v3613: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3614: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:01 standalone.localdomain python3.9[507584]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:35:01 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:35:01 standalone.localdomain systemd-rc-local-generator[507608]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:35:01 standalone.localdomain systemd-sysv-generator[507613]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:35:01 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:35:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:35:02 standalone.localdomain systemd[1]: Starting Create netns directory...
Oct 13 15:35:02 standalone.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 15:35:02 standalone.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 15:35:02 standalone.localdomain systemd[1]: Finished Create netns directory.
Oct 13 15:35:02 standalone.localdomain systemd[1]: tmp-crun.UthaOs.mount: Deactivated successfully.
Oct 13 15:35:02 standalone.localdomain podman[507622]: 2025-10-13 15:35:02.076167545 +0000 UTC m=+0.101395485 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Oct 13 15:35:02 standalone.localdomain podman[507622]: 2025-10-13 15:35:02.096849722 +0000 UTC m=+0.122077652 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 13 15:35:02 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:35:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:35:02 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:02.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:02 standalone.localdomain python3.9[507752]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:35:03 standalone.localdomain ceph-mon[29756]: pgmap v3614: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3615: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:03 standalone.localdomain python3.9[507860]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:35:03 standalone.localdomain python3.9[507915]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/lib/openstack/healthchecks/multipathd/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/multipathd/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:35:03 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:03.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:35:04 standalone.localdomain systemd[1]: tmp-crun.7JguLi.mount: Deactivated successfully.
Oct 13 15:35:04 standalone.localdomain podman[507971]: 2025-10-13 15:35:04.833794976 +0000 UTC m=+0.098958660 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:35:04 standalone.localdomain podman[507971]: 2025-10-13 15:35:04.850965455 +0000 UTC m=+0.116129139 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:35:04 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:35:05 standalone.localdomain ceph-mon[29756]: pgmap v3615: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3616: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:05 standalone.localdomain python3.9[508043]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:35:06 standalone.localdomain python3.9[508151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:35:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:35:06 standalone.localdomain systemd[1]: tmp-crun.RHrwvf.mount: Deactivated successfully.
Oct 13 15:35:06 standalone.localdomain podman[508207]: 2025-10-13 15:35:06.817310016 +0000 UTC m=+0.083113062 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:35:06 standalone.localdomain podman[508207]: 2025-10-13 15:35:06.825281692 +0000 UTC m=+0.091084738 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:35:06 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:35:06 standalone.localdomain python3.9[508206]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/multipathd.json _original_basename=.4p9qqo7z recurse=False state=file path=/var/lib/kolla/config_files/multipathd.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:35:06.953 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:35:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:35:06.954 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:35:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:35:06.955 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:35:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:35:07 standalone.localdomain ceph-mon[29756]: pgmap v3616: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3617: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:07 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:07.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:08 standalone.localdomain python3.9[508337]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:08 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:08.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:09 standalone.localdomain ceph-mon[29756]: pgmap v3617: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3618: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:09 standalone.localdomain python3.9[508608]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 13 15:35:10 standalone.localdomain ceph-mon[29756]: pgmap v3618: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:10 standalone.localdomain python3.9[508716]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:35:11 standalone.localdomain python3.9[508824]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 13 15:35:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3619: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:11 standalone.localdomain podman[467099]: time="2025-10-13T15:35:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:35:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:35:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:35:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:35:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49107 "" "Go-http-client/1.1"
Oct 13 15:35:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:35:12 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:12.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:35:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:35:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:35:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:35:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:35:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:35:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:35:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:35:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:35:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:35:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:35:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:35:13 standalone.localdomain ceph-mon[29756]: pgmap v3619: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3620: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:35:13 standalone.localdomain systemd[1]: tmp-crun.Y1BQuj.mount: Deactivated successfully.
Oct 13 15:35:13 standalone.localdomain podman[508869]: 2025-10-13 15:35:13.853779096 +0000 UTC m=+0.102847481 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 13 15:35:13 standalone.localdomain podman[508869]: 2025-10-13 15:35:13.863900028 +0000 UTC m=+0.112968413 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 15:35:13 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:35:14 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:14.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:15 standalone.localdomain ceph-mon[29756]: pgmap v3620: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3621: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:35:17 standalone.localdomain ceph-mon[29756]: pgmap v3621: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:17 standalone.localdomain python3[508977]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:35:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3622: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:17 standalone.localdomain python3[508977]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                             {
                                                                  "Id": "7042d0e4c063a84abce3ee29396c85a102ad504e82c1a0963682094dbdd1cf87",
                                                                  "Digest": "sha256:cf6b2b2c6e86f3ba78d1c237ab387f4ccc3192ed96fdaf8aae01b30f6d5a6f4e",
                                                                  "RepoTags": [
                                                                       "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"
                                                                  ],
                                                                  "RepoDigests": [
                                                                       "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:cf6b2b2c6e86f3ba78d1c237ab387f4ccc3192ed96fdaf8aae01b30f6d5a6f4e"
                                                                  ],
                                                                  "Parent": "",
                                                                  "Comment": "",
                                                                  "Created": "2025-10-13T06:11:26.07180862Z",
                                                                  "Config": {
                                                                       "User": "root",
                                                                       "Env": [
                                                                            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                            "LANG=en_US.UTF-8",
                                                                            "TZ=UTC",
                                                                            "container=oci"
                                                                       ],
                                                                       "Entrypoint": [
                                                                            "dumb-init",
                                                                            "--single-child",
                                                                            "--"
                                                                       ],
                                                                       "Cmd": [
                                                                            "kolla_start"
                                                                       ],
                                                                       "Labels": {
                                                                            "io.buildah.version": "1.41.3",
                                                                            "maintainer": "OpenStack Kubernetes Operator team",
                                                                            "org.label-schema.build-date": "20251009",
                                                                            "org.label-schema.license": "GPLv2",
                                                                            "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                            "org.label-schema.schema-version": "1.0",
                                                                            "org.label-schema.vendor": "CentOS",
                                                                            "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                            "tcib_managed": "true"
                                                                       },
                                                                       "StopSignal": "SIGTERM"
                                                                  },
                                                                  "Version": "",
                                                                  "Author": "",
                                                                  "Architecture": "amd64",
                                                                  "Os": "linux",
                                                                  "Size": 249351656,
                                                                  "VirtualSize": 249351656,
                                                                  "GraphDriver": {
                                                                       "Name": "overlay",
                                                                       "Data": {
                                                                            "LowerDir": "/var/lib/containers/storage/overlay/00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",
                                                                            "UpperDir": "/var/lib/containers/storage/overlay/94ddaccf538b3b1317fb0419442c6d79d6cab209dc622d3597fafd8458f595b0/diff",
                                                                            "WorkDir": "/var/lib/containers/storage/overlay/94ddaccf538b3b1317fb0419442c6d79d6cab209dc622d3597fafd8458f595b0/work"
                                                                       }
                                                                  },
                                                                  "RootFS": {
                                                                       "Type": "layers",
                                                                       "Layers": [
                                                                            "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",
                                                                            "sha256:b783c4d6e5ad469a2da52d1f9f5642b4a99005a13a4f8f9855f9f6ed3dcfeddf",
                                                                            "sha256:d6fd2be14aa1d4de24371de0c3b1d5b1f1b7d0160e56d5be6ddce67596898b25"
                                                                       ]
                                                                  },
                                                                  "Labels": {
                                                                       "io.buildah.version": "1.41.3",
                                                                       "maintainer": "OpenStack Kubernetes Operator team",
                                                                       "org.label-schema.build-date": "20251009",
                                                                       "org.label-schema.license": "GPLv2",
                                                                       "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                       "org.label-schema.schema-version": "1.0",
                                                                       "org.label-schema.vendor": "CentOS",
                                                                       "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                       "tcib_managed": "true"
                                                                  },
                                                                  "Annotations": {},
                                                                  "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                  "User": "root",
                                                                  "History": [
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.867908726Z",
                                                                            "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.868015697Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251009\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:07.890794359Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.75496777Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                            "comment": "FROM quay.io/centos/centos:stream9",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.754993311Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755013771Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755032692Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755082953Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755102984Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:47.201574917Z",
                                                                            "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:23.544915378Z",
                                                                            "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.119117694Z",
                                                                            "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.552286083Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.962220801Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:28.715322167Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.14263152Z",
                                                                            "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.560856504Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.004904301Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.42712225Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.823321864Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.238615281Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.634039923Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.0188837Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.395414523Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.805928482Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.197074343Z",
                                                                            "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.605614349Z",
                                                                            "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.372128638Z",
                                                                            "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.755904629Z",
                                                                            "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:38.104256162Z",
                                                                            "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:39.755515954Z",
                                                                            "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870322882Z",
                                                                            "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870431084Z",
                                                                            "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870453635Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870467555Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:43.347797515Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:45.514463861Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:11:26.069122347Z",
                                                                            "created_by": "/bin/sh -c dnf -y install device-mapper-multipath iscsi-initiator-utils && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:11:30.122832283Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       }
                                                                  ],
                                                                  "NamesHistory": [
                                                                       "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"
                                                                  ]
                                                             }
                                                        ]
                                                        : quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 13 15:35:17 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:17.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:35:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1914988314' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:35:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:35:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1914988314' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:35:18 standalone.localdomain python3.9[509149]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:35:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:19.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:19 standalone.localdomain ceph-mon[29756]: pgmap v3622: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1914988314' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:35:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1914988314' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:35:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3623: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:19 standalone.localdomain python3.9[509259]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:35:19 standalone.localdomain podman[509313]: 2025-10-13 15:35:19.82815634 +0000 UTC m=+0.092855942 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 15:35:19 standalone.localdomain podman[509313]: 2025-10-13 15:35:19.866438789 +0000 UTC m=+0.131138351 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Oct 13 15:35:19 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:35:19 standalone.localdomain python3.9[509312]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:35:20 standalone.localdomain ceph-mon[29756]: pgmap v3623: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:20 standalone.localdomain python3.9[509445]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760369719.997939-743-36711330953364/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:21 standalone.localdomain python3.9[509498]: ansible-systemd Invoked with state=started name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:35:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3624: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:35:21 standalone.localdomain systemd[1]: tmp-crun.jdiMpc.mount: Deactivated successfully.
Oct 13 15:35:21 standalone.localdomain podman[509576]: 2025-10-13 15:35:21.817879291 +0000 UTC m=+0.086675912 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter)
Oct 13 15:35:21 standalone.localdomain podman[509576]: 2025-10-13 15:35:21.830811009 +0000 UTC m=+0.099607610 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, version=9.6, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Oct 13 15:35:21 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:35:21 standalone.localdomain python3.9[509619]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:35:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:35:22 standalone.localdomain python3.9[509736]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:22 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:22.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:23 standalone.localdomain ceph-mon[29756]: pgmap v3624: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:35:23
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_data', 'images', 'volumes', 'manila_metadata', '.mgr', 'backups', 'vms']
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:35:23 standalone.localdomain python3.9[509844]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3625: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52357 DF PROTO=TCP SPT=40110 DPT=9102 SEQ=3246303450 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC7263F0000000001030307) 
Oct 13 15:35:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:35:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:35:23 standalone.localdomain podman[509919]: 2025-10-13 15:35:23.818747484 +0000 UTC m=+0.077997194 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 13 15:35:23 standalone.localdomain podman[509919]: 2025-10-13 15:35:23.832911361 +0000 UTC m=+0.092161101 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 15:35:23 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:35:24 standalone.localdomain python3.9[509965]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 13 15:35:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:24.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52358 DF PROTO=TCP SPT=40110 DPT=9102 SEQ=3246303450 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC72A360000000001030307) 
Oct 13 15:35:24 standalone.localdomain python3.9[510076]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:35:25 standalone.localdomain ceph-mon[29756]: pgmap v3625: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:25 standalone.localdomain python3.9[510131]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3626: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:26 standalone.localdomain python3.9[510239]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52359 DF PROTO=TCP SPT=40110 DPT=9102 SEQ=3246303450 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC732370000000001030307) 
Oct 13 15:35:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:35:27 standalone.localdomain ceph-mon[29756]: pgmap v3626: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:27 standalone.localdomain python3.9[510347]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 15:35:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3627: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:27 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:27.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:28 standalone.localdomain ceph-mon[29756]: pgmap v3627: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:28 standalone.localdomain python3.9[510408]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 15:35:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:35:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:35:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:35:28 standalone.localdomain podman[510410]: 2025-10-13 15:35:28.843445344 +0000 UTC m=+0.107564665 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, tcib_managed=true, managed_by=tripleo_ansible, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:35:28 standalone.localdomain podman[510411]: 2025-10-13 15:35:28.895348813 +0000 UTC m=+0.156470862 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, version=17.1.9, io.k8s.description=Red Hat OpenStack 
Platform 17.1 swift-account, vcs-type=git, container_name=swift_account_server, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22)
Oct 13 15:35:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:35:28 standalone.localdomain podman[510412]: 2025-10-13 15:35:28.951965478 +0000 UTC m=+0.207632278 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:35:28 standalone.localdomain podman[510412]: 2025-10-13 15:35:28.963855555 +0000 UTC m=+0.219522365 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:35:28 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:35:29 standalone.localdomain podman[510410]: 2025-10-13 15:35:29.055780567 +0000 UTC m=+0.319899818 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, container_name=swift_object_server, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, version=17.1.9, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:35:29 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:35:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:29.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:29 standalone.localdomain podman[510461]: 2025-10-13 15:35:29.085379229 +0000 UTC m=+0.174718565 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vendor=Red Hat, Inc., release=1, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, version=17.1.9, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, 
name=rhosp17/openstack-swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 15:35:29 standalone.localdomain podman[510411]: 2025-10-13 15:35:29.131875002 +0000 UTC m=+0.392997061 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, release=1, version=17.1.9, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-account-container)
Oct 13 15:35:29 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:35:29 standalone.localdomain podman[510461]: 2025-10-13 15:35:29.275034493 +0000 UTC m=+0.364373859 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, version=17.1.9, build-date=2025-07-21T15:54:32, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, release=1)
Oct 13 15:35:29 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:35:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3628: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:30 standalone.localdomain ceph-mon[29756]: pgmap v3628: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52360 DF PROTO=TCP SPT=40110 DPT=9102 SEQ=3246303450 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC741F70000000001030307) 
Oct 13 15:35:31 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:31.092 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:35:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3629: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:32.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:35:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:35:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:35:32 standalone.localdomain podman[510589]: 2025-10-13 15:35:32.82366564 +0000 UTC m=+0.079102078 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 15:35:32 standalone.localdomain podman[510589]: 2025-10-13 15:35:32.834806953 +0000 UTC m=+0.090243341 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:35:32 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:35:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:32.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:33 standalone.localdomain python3.9[510639]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 15:35:33 standalone.localdomain ceph-mon[29756]: pgmap v3629: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3630: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:34.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:34 standalone.localdomain python3.9[510752]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:34 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 15:35:34 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 15:35:35 standalone.localdomain ceph-mon[29756]: pgmap v3630: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3631: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:35 standalone.localdomain python3.9[510860]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:35:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:35:35 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:35:35 standalone.localdomain systemd-rc-local-generator[510902]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:35:35 standalone.localdomain podman[510862]: 2025-10-13 15:35:35.715400055 +0000 UTC m=+0.138774438 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:35:35 standalone.localdomain systemd-sysv-generator[510905]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:35:35 standalone.localdomain podman[510862]: 2025-10-13 15:35:35.729793658 +0000 UTC m=+0.153168051 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct 13 15:35:35 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:35:35 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:35:36 standalone.localdomain python3.9[511024]: ansible-ansible.builtin.service_facts Invoked
Oct 13 15:35:36 standalone.localdomain network[511041]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 15:35:36 standalone.localdomain network[511042]: 'network-scripts' will be removed from distribution in near future.
Oct 13 15:35:36 standalone.localdomain network[511043]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 15:35:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:37.085 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:35:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:37.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:35:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:37.090 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:35:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:35:37 standalone.localdomain ceph-mon[29756]: pgmap v3631: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:37.244 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:35:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:37.245 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:35:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:37.245 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:35:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3632: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:35:37 standalone.localdomain systemd[1]: tmp-crun.oX6I7o.mount: Deactivated successfully.
Oct 13 15:35:37 standalone.localdomain podman[511050]: 2025-10-13 15:35:37.586162266 +0000 UTC m=+0.077397168 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:35:37 standalone.localdomain podman[511050]: 2025-10-13 15:35:37.594172264 +0000 UTC m=+0.085407136 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:35:37 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:35:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:37.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:38 standalone.localdomain ceph-mon[29756]: pgmap v3632: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:38.658 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:35:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:38.674 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:35:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:38.675 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:35:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:38.676 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.089 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.106 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.106 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.107 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.107 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.107 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:35:39 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:35:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3633: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:35:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3239779336' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.566 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.637 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.637 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.638 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.641 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.641 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.833 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.834 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9271MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.834 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.835 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.903 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.903 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.903 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.904 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:35:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:39.946 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:35:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:35:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/890625786' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:35:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:40.415 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:35:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:40.425 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:35:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:40.459 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:35:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:40.464 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:35:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:40.465 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:35:40 standalone.localdomain ceph-mon[29756]: pgmap v3633: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:40 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3239779336' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:35:40 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/890625786' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:35:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3634: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:41.468 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:35:41 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:41.468 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:35:41 standalone.localdomain podman[467099]: time="2025-10-13T15:35:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:35:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:35:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:35:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:35:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49110 "" "Go-http-client/1.1"
Oct 13 15:35:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:35:42 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:42.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:35:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:35:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:35:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:35:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:35:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:35:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:35:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:35:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:35:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:35:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:35:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:35:43 standalone.localdomain sudo[511227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:35:43 standalone.localdomain sudo[511227]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:35:43 standalone.localdomain sudo[511227]: pam_unix(sudo:session): session closed for user root
Oct 13 15:35:43 standalone.localdomain sudo[511250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Oct 13 15:35:43 standalone.localdomain sudo[511250]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:35:43 standalone.localdomain ceph-mon[29756]: pgmap v3634: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3635: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:43 standalone.localdomain sudo[511250]: pam_unix(sudo:session): session closed for user root
Oct 13 15:35:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 15:35:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:35:43 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 15:35:43 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:35:43 standalone.localdomain sudo[511300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:35:43 standalone.localdomain sudo[511300]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:35:43 standalone.localdomain sudo[511300]: pam_unix(sudo:session): session closed for user root
Oct 13 15:35:43 standalone.localdomain sudo[511318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:35:43 standalone.localdomain sudo[511318]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:35:44 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:44.086 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:35:44 standalone.localdomain sudo[511318]: pam_unix(sudo:session): session closed for user root
Oct 13 15:35:44 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:44.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:35:44 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:35:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:35:44 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:35:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:35:44 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:35:44 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:35:44 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:35:44 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 792109df-bdcc-4be1-bea0-5be7642102bd (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:35:44 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 792109df-bdcc-4be1-bea0-5be7642102bd (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:35:44 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 792109df-bdcc-4be1-bea0-5be7642102bd (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:35:44 standalone.localdomain sudo[511367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:35:44 standalone.localdomain sudo[511367]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:35:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:35:44 standalone.localdomain sudo[511367]: pam_unix(sudo:session): session closed for user root
Oct 13 15:35:44 standalone.localdomain podman[511386]: 2025-10-13 15:35:44.333610155 +0000 UTC m=+0.078961786 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 13 15:35:44 standalone.localdomain podman[511386]: 2025-10-13 15:35:44.368892593 +0000 UTC m=+0.114244264 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 13 15:35:44 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:35:44 standalone.localdomain ceph-mon[29756]: pgmap v3635: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:35:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:35:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:35:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:35:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:35:44 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:35:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:35:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:35:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:35:45 standalone.localdomain python3.9[511511]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:35:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3636: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:45 standalone.localdomain python3.9[511620]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:35:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:46.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:35:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:35:46 standalone.localdomain python3.9[511729]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:35:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:47.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:35:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:35:47 standalone.localdomain ceph-mon[29756]: pgmap v3636: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:47 standalone.localdomain python3.9[511838]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:35:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3637: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:47 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:47.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:48 standalone.localdomain python3.9[511947]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:35:48 standalone.localdomain ceph-mon[29756]: pgmap v3637: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:48 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 15:35:48 standalone.localdomain object-server[512057]: Object update sweep starting on /srv/node/d1 (pid: 30)
Oct 13 15:35:48 standalone.localdomain object-server[512057]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 30)
Oct 13 15:35:48 standalone.localdomain object-server[512057]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 15:35:48 standalone.localdomain object-server[114601]: Object update sweep completed: 0.07s
Oct 13 15:35:48 standalone.localdomain python3.9[512056]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:35:49 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:49.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3638: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:49 standalone.localdomain python3.9[512166]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:35:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:35:50 standalone.localdomain podman[512168]: 2025-10-13 15:35:50.066262642 +0000 UTC m=+0.076218501 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 15:35:50 standalone.localdomain podman[512168]: 2025-10-13 15:35:50.097096403 +0000 UTC m=+0.107052252 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:35:50 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:35:50 standalone.localdomain ceph-mon[29756]: pgmap v3638: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:50 standalone.localdomain python3.9[512300]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:35:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3639: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:52 standalone.localdomain python3.9[512409]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:35:52 standalone.localdomain python3.9[512517]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:35:52 standalone.localdomain systemd[1]: tmp-crun.mIf5hw.mount: Deactivated successfully.
Oct 13 15:35:52 standalone.localdomain podman[512518]: 2025-10-13 15:35:52.83323195 +0000 UTC m=+0.098087605 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 13 15:35:52 standalone.localdomain podman[512518]: 2025-10-13 15:35:52.850959158 +0000 UTC m=+0.115814813 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, distribution-scope=public)
Oct 13 15:35:52 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:35:53 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:53.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:53 standalone.localdomain ceph-mon[29756]: pgmap v3639: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:35:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:35:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:35:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:35:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:35:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:35:53 standalone.localdomain python3.9[512643]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3640: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21676 DF PROTO=TCP SPT=60684 DPT=9102 SEQ=3241982653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC79B6F0000000001030307) 
Oct 13 15:35:53 standalone.localdomain python3.9[512751]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:54 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:54.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21677 DF PROTO=TCP SPT=60684 DPT=9102 SEQ=3241982653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC79F760000000001030307) 
Oct 13 15:35:54 standalone.localdomain python3.9[512859]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:35:54 standalone.localdomain systemd[1]: tmp-crun.uhNQth.mount: Deactivated successfully.
Oct 13 15:35:54 standalone.localdomain podman[512880]: 2025-10-13 15:35:54.833058122 +0000 UTC m=+0.090366318 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 13 15:35:54 standalone.localdomain podman[512880]: 2025-10-13 15:35:54.868619189 +0000 UTC m=+0.125927385 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:35:54 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:35:55 standalone.localdomain python3.9[512985]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:55 standalone.localdomain ceph-mon[29756]: pgmap v3640: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3641: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:55 standalone.localdomain python3.9[513093]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:56 standalone.localdomain ceph-mon[29756]: pgmap v3641: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:56 standalone.localdomain python3.9[513201]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21678 DF PROTO=TCP SPT=60684 DPT=9102 SEQ=3241982653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC7A7760000000001030307) 
Oct 13 15:35:57 standalone.localdomain python3.9[513309]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:35:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3642: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:57 standalone.localdomain python3.9[513417]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:58 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:58.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:58 standalone.localdomain python3.9[513525]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:58 standalone.localdomain python3.9[513633]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:59 standalone.localdomain ceph-mon[29756]: pgmap v3642: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:59 standalone.localdomain nova_compute[456855]: 2025-10-13 15:35:59.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:35:59 standalone.localdomain python3.9[513741]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3643: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:35:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:35:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:35:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:35:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:35:59 standalone.localdomain podman[513851]: 2025-10-13 15:35:59.81928954 +0000 UTC m=+0.079751711 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, 
io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, container_name=swift_container_server, release=1, com.redhat.component=openstack-swift-container-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, config_id=tripleo_step4)
Oct 13 15:35:59 standalone.localdomain podman[513852]: 2025-10-13 15:35:59.870950273 +0000 UTC m=+0.129763023 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, name=rhosp17/openstack-swift-account, container_name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
com.redhat.component=openstack-swift-account-container, release=1, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, version=17.1.9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12)
Oct 13 15:35:59 standalone.localdomain python3.9[513849]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:35:59 standalone.localdomain podman[513858]: 2025-10-13 15:35:59.920362786 +0000 UTC m=+0.174373858 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:35:59 standalone.localdomain podman[513858]: 2025-10-13 15:35:59.926780015 +0000 UTC m=+0.180791097 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:35:59 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:35:59 standalone.localdomain podman[513850]: 2025-10-13 15:35:59.849264174 +0000 UTC m=+0.115255985 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.expose-services=, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-object-container, name=rhosp17/openstack-swift-object, tcib_managed=true, version=17.1.9, container_name=swift_object_server, architecture=x86_64, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d)
Oct 13 15:35:59 standalone.localdomain podman[513851]: 2025-10-13 15:35:59.993796752 +0000 UTC m=+0.254258913 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, version=17.1.9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, container_name=swift_container_server, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-swift-container-container, architecture=x86_64, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:36:00 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:36:00 standalone.localdomain podman[513852]: 2025-10-13 15:36:00.054995368 +0000 UTC m=+0.313808108 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T16:11:22, version=17.1.9, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, tcib_managed=true, vendor=Red Hat, Inc.)
Oct 13 15:36:00 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:36:00 standalone.localdomain podman[513850]: 2025-10-13 15:36:00.108076985 +0000 UTC m=+0.374068846 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, container_name=swift_object_server, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.9, 
io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, name=rhosp17/openstack-swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:36:00 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:36:00 standalone.localdomain python3.9[514061]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:36:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21679 DF PROTO=TCP SPT=60684 DPT=9102 SEQ=3241982653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC7B7360000000001030307) 
Oct 13 15:36:00 standalone.localdomain python3.9[514169]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:36:01 standalone.localdomain ceph-mon[29756]: pgmap v3643: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3644: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:01 standalone.localdomain python3.9[514277]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                            systemctl disable --now certmonger.service
                                                            test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                          fi
                                                           _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:36:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:36:02 standalone.localdomain python3.9[514387]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 13 15:36:03 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:03.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:03 standalone.localdomain ceph-mon[29756]: pgmap v3644: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:03 standalone.localdomain python3.9[514495]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 15:36:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:36:03 standalone.localdomain systemd[1]: Reloading.
Oct 13 15:36:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3645: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:03 standalone.localdomain podman[514497]: 2025-10-13 15:36:03.465004688 +0000 UTC m=+0.105834315 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 15:36:03 standalone.localdomain podman[514497]: 2025-10-13 15:36:03.477586656 +0000 UTC m=+0.118416253 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 15:36:03 standalone.localdomain systemd-rc-local-generator[514539]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 15:36:03 standalone.localdomain systemd-sysv-generator[514542]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 15:36:03 standalone.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 13 15:36:03 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:36:04 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:04.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:04 standalone.localdomain python3.9[514659]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:36:05 standalone.localdomain python3.9[514768]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:36:05 standalone.localdomain ceph-mon[29756]: pgmap v3645: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3646: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:05 standalone.localdomain python3.9[514877]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:36:06 standalone.localdomain python3.9[514986]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:36:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:36:06 standalone.localdomain ceph-mon[29756]: pgmap v3646: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:06 standalone.localdomain podman[514988]: 2025-10-13 15:36:06.377350149 +0000 UTC m=+0.093777772 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:36:06 standalone.localdomain podman[514988]: 2025-10-13 15:36:06.412018909 +0000 UTC m=+0.128446532 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_managed=true, container_name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:36:06 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 15:36:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 7200.0 total, 600.0 interval
                                                        Cumulative writes: 18K writes, 86K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
                                                        Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.06 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1534 writes, 6949 keys, 1534 commit groups, 1.0 writes per commit group, ingest: 5.79 MB, 0.01 MB/s
                                                        Interval WAL: 1534 writes, 1534 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     82.7      0.70              0.20        61    0.011       0      0       0.0       0.0
                                                          L6      1/0    5.07 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   5.2    198.7    168.9      1.77              0.88        60    0.029    325K    31K       0.0       0.0
                                                         Sum      1/0    5.07 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   6.2    142.5    144.5      2.47              1.08       121    0.020    325K    31K       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.8    160.3    161.9      0.21              0.13        10    0.021     36K   2567       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    198.7    168.9      1.77              0.88        60    0.029    325K    31K       0.0       0.0
                                                        High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     82.9      0.70              0.20        60    0.012       0      0       0.0       0.0
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 7200.0 total, 600.0 interval
                                                        Flush(GB): cumulative 0.056, interval 0.005
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.35 GB write, 0.05 MB/s write, 0.34 GB read, 0.05 MB/s read, 2.5 seconds
                                                        Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x557b7fb5b350#2 capacity: 308.00 MB usage: 46.29 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000522 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(4154,44.18 MB,14.344%) FilterBlock(122,904.17 KB,0.286682%) IndexBlock(122,1.22 MB,0.397447%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
Oct 13 15:36:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:36:06.954 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:36:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:36:06.955 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:36:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:36:06.956 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:36:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:36:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3647: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:36:07 standalone.localdomain podman[515117]: 2025-10-13 15:36:07.830519822 +0000 UTC m=+0.089507301 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:36:07 standalone.localdomain python3.9[515116]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:36:07 standalone.localdomain podman[515117]: 2025-10-13 15:36:07.865435618 +0000 UTC m=+0.124423087 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:36:07 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:36:08 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:08.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:09 standalone.localdomain ceph-mon[29756]: pgmap v3647: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:09 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:09.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3648: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:09 standalone.localdomain python3.9[515249]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:36:10 standalone.localdomain python3.9[515358]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:36:10 standalone.localdomain ceph-mon[29756]: pgmap v3648: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:11 standalone.localdomain python3.9[515467]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:36:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3649: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:11 standalone.localdomain podman[467099]: time="2025-10-13T15:36:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:36:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:36:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:36:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:36:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49114 "" "Go-http-client/1.1"
Oct 13 15:36:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:36:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:36:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:36:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:36:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:36:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:36:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:36:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:36:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:36:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:36:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:36:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:36:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:36:13 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:13.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:13 standalone.localdomain ceph-mon[29756]: pgmap v3649: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3650: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:13 standalone.localdomain python3.9[515576]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:14 standalone.localdomain ceph-mon[29756]: pgmap v3650: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:14 standalone.localdomain python3.9[515684]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:14 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:14.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:36:14 standalone.localdomain systemd[1]: tmp-crun.dnmTJl.mount: Deactivated successfully.
Oct 13 15:36:14 standalone.localdomain podman[515793]: 2025-10-13 15:36:14.824921887 +0000 UTC m=+0.093619958 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 15:36:14 standalone.localdomain podman[515793]: 2025-10-13 15:36:14.829961113 +0000 UTC m=+0.098659174 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 15:36:14 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:36:14 standalone.localdomain python3.9[515792]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3651: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:15 standalone.localdomain python3.9[515918]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:16 standalone.localdomain python3.9[516026]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:16 standalone.localdomain python3.9[516134]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:16 standalone.localdomain ceph-mon[29756]: pgmap v3651: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:17 standalone.localdomain python3.9[516242]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:36:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3652: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:17 standalone.localdomain python3.9[516350]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:18 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:18.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:18 standalone.localdomain python3.9[516458]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:36:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2430572227' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:36:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:36:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2430572227' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:36:18 standalone.localdomain python3.9[516566]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:19 standalone.localdomain ceph-mon[29756]: pgmap v3652: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2430572227' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:36:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2430572227' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:36:19 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:19.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:19 standalone.localdomain python3.9[516674]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3653: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:19 standalone.localdomain python3.9[516782]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:20 standalone.localdomain ceph-mon[29756]: pgmap v3653: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:36:20 standalone.localdomain podman[516800]: 2025-10-13 15:36:20.817809718 +0000 UTC m=+0.084119815 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:36:20 standalone.localdomain podman[516800]: 2025-10-13 15:36:20.858932496 +0000 UTC m=+0.125242623 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 13 15:36:20 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:36:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:20.994 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:36:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:20.998 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:36:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:20.998 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.004 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 66 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.007 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 51 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '399d01e4-19f2-43b4-86d2-712ed806546f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 66, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:36:20.999018', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '5e34b604-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.188267534, 'message_signature': '2b220c1aa89766d9e0ae1b9815db8414857b82de86a2765431ef2c562d3861bd'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 51, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:36:20.999018', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '5e354c2c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.193996921, 'message_signature': '3e39b11ffa44fbf381d1a9e9cd377db4ab7163044d41dae7df06f214ed1d8583'}]}, 'timestamp': '2025-10-13 15:36:21.008640', '_unique_id': 'cb4e1f8558c54cee966c3081212f01f8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.010 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.011 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.038 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 52.27734375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.061 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '325c655e-6b8a-4e8e-84ab-de4d6e6c210c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.27734375, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:36:21.012154', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '5e39faf6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.227708061, 'message_signature': 'f1bb3eb56004d0374113f9f28e5606f776865bb845096016f648e4a5af2429f9'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:36:21.012154', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '5e3d7a14-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.250700839, 'message_signature': '50aa521db63845769dbcdccb9ac8d19f35444cc90bb4a35dca0b63f196667bf8'}]}, 'timestamp': '2025-10-13 15:36:21.062083', '_unique_id': '983ea569a500418ba3820506b79e0159'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.063 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.064 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.064 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.112 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 501653653 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.113 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.113 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.143 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.144 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6824f573-30b1-4832-a7a3-4d5ac3deb70c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 501653653, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:36:21.064218', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e453e84-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': '8581ba1627aaf53239af815f9a4d7dead5631dd4d78121214c3d7f07d3a29b19'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:36:21.064218', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e455cd4-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': 'f1a78c2bb47cac499fb4062eface60222ebff8212fd6900916005769840d9f69'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:36:21.064218', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '5e4575a2-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': 'fd2a44bf3c5eda6e4bfc928ab9292dc7d8cff8e2d51df036bf7663ac83f9ddc2'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:36:21.064218', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e4a0892-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.303823088, 'message_signature': 'ad3cdc6b00a8af216f360a96d13cbf0ae2c73a483604f42f0bee20dea48df144'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:36:21.064218', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e4a1f08-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.303823088, 'message_signature': 'b1f42bb9ae989810f52654daf0660f257ea53354b29a79610c41b3d61c3087bd'}]}, 'timestamp': '2025-10-13 15:36:21.144997', '_unique_id': '6c1e8e8522a44064a085164c18d26503'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.146 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.148 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.148 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.149 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '79333919-8c71-4b73-84c1-1161da82b76f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:36:21.148741', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '5e4ac976-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.188267534, 'message_signature': '52fe0bf4b6c97771790cfccd9b5a29d0d58b815e0d35fd5195889bc12f64128d'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:36:21.148741', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '5e4ae582-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.193996921, 'message_signature': 'eb9493d2e536cad799f4feaca6b9a075489f317a1f2e44580554a87c8e1dde7e'}]}, 'timestamp': '2025-10-13 15:36:21.150187', '_unique_id': 'a96894897ada4625b08dd85b281922df'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.153 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.155 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.155 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.155 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 4918 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.156 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 3756 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1f9442f1-550b-47f7-b257-2e5aaacb9d45', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4918, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:36:21.155801', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '5e4bdb36-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.188267534, 'message_signature': '25bc1a3652257b69388ff64b0c32a8ab9bda7495b06d73e59583762243b7b455'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3756, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:36:21.155801', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '5e4bf2e2-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.193996921, 'message_signature': '5828df762696aa8d3c2bbb3113df5f1ab815d09a5b0a45fe4af05dfb74431493'}]}, 'timestamp': '2025-10-13 15:36:21.156976', '_unique_id': 'c48c316afac4457680c4c335b71f6071'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.158 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.159 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.159 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 857967408 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.160 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 155394833 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.160 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 63114260 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.160 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.161 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8a1e49ed-c2b3-4e0c-8354-1bc5ea7d113a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 857967408, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:36:21.159584', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e4c6c36-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': 'd3084d9b256093944a27e202574fb609c06af4f8d5ef55de77080bdad0c027c6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 155394833, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:36:21.159584', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e4c7dfc-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': '59e2df1d7ddaa24814c4fe634952701c1368ad49c19f6c9630850f89bbac76df'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 63114260, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:36:21.159584', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '5e4c8fcc-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': 'c714d4dd3727112cb22e9229242035c3bf4ebc6952fae202f6c1d52e150817ed'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:36:21.159584', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e4c9ff8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.303823088, 'message_signature': '8bd1bad40d17981f2689586239242dbc7a57aceb16a569847e43fe6ff124d137'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:36:21.159584', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e4cb1be-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.303823088, 'message_signature': '73ef3a13f9fb0c22e21bc0d84c0d3df5286dd3ab34bc0aa7859f575013ef94df'}]}, 'timestamp': '2025-10-13 15:36:21.161836', '_unique_id': 'bacd40d767424056bce85215d89d9486'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.162 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.164 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.196 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.197 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.198 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.215 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.215 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c41bc5a1-5715-4a41-805f-99ad2f4898b4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:36:21.164393', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e522a2c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.353697026, 'message_signature': '123e48225bd7958cf351662428260fd7bd2eaebd65cb64fa9b844d82029eb621'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:36:21.164393', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e524124-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.353697026, 'message_signature': '0d0ab328a804e5eb190c004c56cedfb6768233b8051479f6ed441f16d991b136'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:36:21.164393', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '5e5254f2-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.353697026, 'message_signature': 'f000806f6d29d44f8533e0a40c0a32f23ddbe3a72541389bd7eca756a38c19d6'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:36:21.164393', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e54f324-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.388034375, 'message_signature': '5d9f7a543e5fca52c3ad59a702a5abdb3239401d1c3ee9eed2324074787662c1'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:36:21.164393', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e55045e-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.388034375, 'message_signature': '900b8edb159498d3ae2e6a167f8268048a691e4b1371a59db0f45a9c406bfc11'}]}, 'timestamp': '2025-10-13 15:36:21.216393', '_unique_id': 'b4304fafc719444bafdef57144d3d82e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.218 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.219 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.219 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.220 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '28017f66-d096-4e53-adec-a7a6f90dbd29', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:36:21.219513', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '5e559194-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.188267534, 'message_signature': '8adac4ad67baf382647921a279ba064a6a9892762465bcf11aae6fc2e3b8ce91'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:36:21.219513', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '5e55a38c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.193996921, 'message_signature': 'cc05815d35fd7a42b19bb15ec9d1f8d39cb8340f0d5f621b99e89f55d225c231'}]}, 'timestamp': '2025-10-13 15:36:21.220464', '_unique_id': '6ff8235e2e6a4647a180b23d1a92f955'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.221 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.222 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.223 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.223 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 4526 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.223 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'de54b396-a3ee-4378-afb5-4fb6bd5e6fe1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4526, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:36:21.223159', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '5e561ed4-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.188267534, 'message_signature': 'cfea1b075bd4eb7f10636fa2dcbc480ff40ac6f9201fe16d3227c5e1c7bbe3ec'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:36:21.223159', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '5e563356-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.193996921, 'message_signature': '354bba888f13fe56137b911ea06b0e950de04e26dee7e9ee1d50edceaf6660c8'}]}, 'timestamp': '2025-10-13 15:36:21.224144', '_unique_id': 'feb5182f937945f6bbfc4710a10cf325'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.225 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.226 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.226 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.227 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '463da44b-bf52-40f4-bfcc-0965682aef48', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:36:21.226478', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '5e56a426-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.188267534, 'message_signature': '209a1dc691ceeb2fb3306d27e28d03f040576a71fa1af4c84977af771ae25f12'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:36:21.226478', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '5e56c078-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.193996921, 'message_signature': 'f01e203d7b54d00dc82395de2f1f1b9b01c65e8d258c28f866c6c65f41b04d6e'}]}, 'timestamp': '2025-10-13 15:36:21.227851', '_unique_id': 'ac89174b824f42d6abc371d1ac4d3c9c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.228 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.230 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.230 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.230 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.231 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.231 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.232 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6411c91b-cc73-463e-b9af-f1c65784fdd0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:36:21.230284', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e5736e8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.353697026, 'message_signature': '814cc454ea77c6b3473736fcde55b32461448756dbc6014bfa18ade3d4bf4ee9'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:36:21.230284', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e574836-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.353697026, 'message_signature': 'a29119a87f831eca1634f70cc4db00a7d00191fabac2090d255ef716f2ccb409'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:36:21.230284', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '5e575bfa-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.353697026, 'message_signature': '9c1bb2381386355d1ffe583767602d537cf65d4d6e6c935949c06d0fb3e3989c'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:36:21.230284', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e57711c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.388034375, 'message_signature': '1ea0e036770d55a617b36c40e4b55b1027c9632cf5c8803186b97c04a10d12f1'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:36:21.230284', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e5784ae-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.388034375, 'message_signature': 'a33c4320dfef1f6e30b0ff4dbcee4be554d979d44d7140052480a8561be70f6c'}]}, 'timestamp': '2025-10-13 15:36:21.232765', '_unique_id': '393d7b6d0f064c43948d093561cab274'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.233 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.235 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.235 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.235 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '91502d2b-5e5a-4e6c-9d1f-a8efe28d9550', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:36:21.235181', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '5e57f556-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.188267534, 'message_signature': '2148ec9e975efe9778b4f3aef4d1e1becf8b1fb4f04247939c06390b2bb00cac'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:36:21.235181', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '5e58082a-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.193996921, 'message_signature': 'b38500dc40e894814c000b15e97fc16cd6b667441e4ba6632ecbb7fc18f4fa3e'}]}, 'timestamp': '2025-10-13 15:36:21.236142', '_unique_id': '4412182e5cf14f018e94545cef3d5638'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.237 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.238 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.238 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.238 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 55 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.239 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a531aaaf-69a3-432c-8b4b-980be919d5fb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 55, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:36:21.238551', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '5e5877ec-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.188267534, 'message_signature': '07e4f642bd4f76af217cd7701fa277be5b0d79d3049c02381e8dfeed430629bc'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:36:21.238551', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '5e58893a-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.193996921, 'message_signature': '35188a552070fe7a2970f39c99484fb6907d4f9964f705e94b6f2fdfb74d1766'}]}, 'timestamp': '2025-10-13 15:36:21.239467', '_unique_id': 'f3066482bda2455cb358ef50ff2b5d35'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.240 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.241 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.241 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.242 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.242 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.243 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.243 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c0484569-eddc-427b-8f96-972f89912af1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:36:21.241716', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e58f35c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.353697026, 'message_signature': '7ad8496460570836ffd5db36f8a27c018c3f4ed39a5b99c8e4b39849e6f8a97d'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:36:21.241716', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e590a0e-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.353697026, 'message_signature': 'b39af3512bd119f830155989a0141bf750f8e6f92b96545037a2a1f38040a85c'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:36:21.241716', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '5e5919e0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.353697026, 'message_signature': 'cea6c540172440ddf5e98c354c5509c1f4b4df1341bec9e9ab9b6be8d7914fba'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:36:21.241716', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e59293a-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.388034375, 'message_signature': '87408982c83be7007289ab2e755d44edf4e16ab66d4da386ff25726d6443aea9'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:36:21.241716', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e593a88-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.388034375, 'message_signature': '276ac1a657eef989a6812a47113c0f4ef14ab8f95d42845f3d30f0347a2165bf'}]}, 'timestamp': '2025-10-13 15:36:21.243960', '_unique_id': '93a3a53ffdd745bdb027f6078419a611'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.245 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.246 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.247 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.247 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bac27e19-6751-4640-8cfc-18ddf02eb0d6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:36:21.247053', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '5e59c458-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.188267534, 'message_signature': 'c646589baf086a67882f44a8f9a787763bf3b64a738eb8035fa5152988f55c19'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:36:21.247053', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '5e59ebd6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.193996921, 'message_signature': 'ec2514be7d7cde89acd52c9d414a6f95700f622f6f075d59e90d1087aa8378e4'}]}, 'timestamp': '2025-10-13 15:36:21.248576', '_unique_id': 'a5be3505951f4da0b76e7b8679ce834f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.250 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.252 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.252 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 32690000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.253 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 32550000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'da7cbfcc-410b-4e79-92c5-cb791b7b956d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 32690000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:36:21.252576', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '5e5a9d42-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.227708061, 'message_signature': '37f363f49b86be58c5caed4ba211990294a00395a1c9d6d881bd49e3f5cd2f79'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 32550000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:36:21.252576', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '5e5ab958-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.250700839, 'message_signature': '4b7a77b9494a16c8421e25859dc423045eb80fb1087293a89616330b6c28fbdd'}]}, 'timestamp': '2025-10-13 15:36:21.253805', '_unique_id': '19f29eb8dc634d3db1b483a15d2f3251'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.254 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.255 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.255 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.256 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.256 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.257 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.257 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'afbcc2e7-14c4-4d95-baff-17f4e7af0886', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:36:21.255767', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e5b19c0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': '5c008919fcbcece0ce30d5a051b2d6737c3618d8ebd548498de170d23b932407'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:36:21.255767', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e5b2dd4-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': 'e3055ca1a44eff82be7419e153059cd2892fdf27c031d772cf5115fa2992856e'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:36:21.255767', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '5e5b3e14-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': 'dde39ad9f135cc7eb4a8644522b9d66d5bc3f57c912d760375354d007f4130ef'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:36:21.255767', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e5b4a6c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.303823088, 'message_signature': '4d4fb201ea3de37cb81a77b894c595859c6ef44ee9c2cfef3fa9de5769c5ee99'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:36:21.255767', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e5b5944-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.303823088, 'message_signature': 'b044b521162b0dbc27ea7abad3a5f487c2c89c0b5c73e3f5492521c04864398e'}]}, 'timestamp': '2025-10-13 15:36:21.257884', '_unique_id': 'b4faddd21c0d447983244a06ea276eea'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.258 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.260 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.260 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 495 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.260 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.261 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.261 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.262 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4f9ac177-b9ca-4a4f-898d-cac221f7a89c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 495, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:36:21.260258', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e5bcab4-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': '3d40424179b661d47c25126ddcd8f1326b435762b9b85741a1406e7c1e4ddc49'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:36:21.260258', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e5bdacc-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': 'd24a13db140a4d7d7aab223814d55bb240f52fe659ac4550479a529294379231'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:36:21.260258', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '5e5bea94-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': 'e656c09e9188dd25c47ebec90b85b39602c97e86fc0e3832f560e5646be99730'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:36:21.260258', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e5bfec6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.303823088, 'message_signature': '3b490215c15dd41b724edbea8842f8eb404972e9d86360349acacaeac006bfbd'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:36:21.260258', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e5c1492-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.303823088, 'message_signature': '234f588e48b79c77b57a4c9fd779cc1b3946695600e6982ef2d078927a15c7a4'}]}, 'timestamp': '2025-10-13 15:36:21.262634', '_unique_id': 'd7da017660f84835bddc0f8d5b302493'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.263 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.264 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.264 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.265 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.265 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 2321920 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.266 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.266 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '01115979-1620-4f0e-a6bd-76039e7802e9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:36:21.264951', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e5c7c70-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': 'd3135468bb70bc56bd72cab67f3429bc1e0acf3318a366503b8028e6801d80e7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:36:21.264951', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e5c8f94-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': 'a1ada4303d38f09c69ee176da54416a60c0904c8c099a1d0dc11f6edeb60b423'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2321920, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:36:21.264951', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '5e5ca2d6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': 'dd1afbdbe3aca25948eb3833a265fc450047ad5b8e85ba2984b6fb2622d19d49'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:36:21.264951', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e5cb438-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.303823088, 'message_signature': 'c472f6656daf894ed7aff7bdd4ffcbe2efd048a9e55f11e82aa3c29749ca9379'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:36:21.264951', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e5cc2de-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.303823088, 'message_signature': '638995350b446beec6b7c83d10da686dd5ef2c43d7368909d0a4332b18471708'}]}, 'timestamp': '2025-10-13 15:36:21.267157', '_unique_id': '41fbb10b389743d7ba9f1f13f5430cdb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.268 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.269 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.269 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.269 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.270 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.270 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.270 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ac53480e-2df0-4300-986b-00cd70189281', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:36:21.269362', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e5d2b3e-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': '184e40b19035e032ca2a0767c1ae9e409b7e5462b89617855d809c655923d01f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:36:21.269362', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e5d39d0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': '7b65ff22292c924f5be6b1628797d37702672cd9c0fd06165addd763b94d9a50'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 108, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:36:21.269362', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '5e5d46e6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.253448584, 'message_signature': 'a919e7b3b386bb22a486af3165eb13ccafd20c5f94ed2c633c8b059279c502ba'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:36:21.269362', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5e5d56b8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.303823088, 'message_signature': '7b739a85b2bf77c44af61aa5daa2da40d2ac96505e1a30a4fa2ddce4f76bc8c0'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:36:21.269362', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5e5d628e-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.303823088, 'message_signature': '345be2f64a79d93250ae575f65b4f228db15b12e5529539a2dcca3f19acfec9a'}]}, 'timestamp': '2025-10-13 15:36:21.271132', '_unique_id': 'fce14303e34c4e0cb201b9b29b9d5395'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.272 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.273 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.273 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.273 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '25205e03-3f6f-494a-a68b-dbae1b903b52', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:36:21.273368', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '5e5dcb02-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.188267534, 'message_signature': '35e2dc8d9afb7af2a5d82865cd50d1aa5c428f087be3b878c53fcbb62702b40c'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:36:21.273368', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '5e5ddcfa-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9224.193996921, 'message_signature': '840a031bdd67000ef4286a02ad29568425a254f8ba664ef37a257825f0f96b3a'}]}, 'timestamp': '2025-10-13 15:36:21.274998', '_unique_id': '899afa33d97d43e295e34d575c530691'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:36:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:36:21.276 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:36:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3654: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:36:23 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:23.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:23 standalone.localdomain ceph-mon[29756]: pgmap v3654: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:36:23
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_data', 'backups', 'manila_metadata', 'images', '.mgr', 'volumes', 'vms']
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3655: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25973 DF PROTO=TCP SPT=60388 DPT=9102 SEQ=1922239468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC8109F0000000001030307) 
Oct 13 15:36:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:36:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:36:23 standalone.localdomain podman[516826]: 2025-10-13 15:36:23.797092862 +0000 UTC m=+0.068224285 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350)
Oct 13 15:36:23 standalone.localdomain podman[516826]: 2025-10-13 15:36:23.810188046 +0000 UTC m=+0.081319459 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41)
Oct 13 15:36:23 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:36:24 standalone.localdomain ceph-mon[29756]: pgmap v3655: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:24 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:24.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25974 DF PROTO=TCP SPT=60388 DPT=9102 SEQ=1922239468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC814B60000000001030307) 
Oct 13 15:36:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3656: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:36:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:36:25 standalone.localdomain podman[516847]: 2025-10-13 15:36:25.821161121 +0000 UTC m=+0.088424898 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:36:25 standalone.localdomain podman[516847]: 2025-10-13 15:36:25.840125376 +0000 UTC m=+0.107389143 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 15:36:25 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:36:26 standalone.localdomain ceph-mon[29756]: pgmap v3656: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:36:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25975 DF PROTO=TCP SPT=60388 DPT=9102 SEQ=1922239468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC81CB60000000001030307) 
Oct 13 15:36:26 standalone.localdomain python3.9[516956]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 13 15:36:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:36:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3657: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:36:27 standalone.localdomain sshd[516975]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:36:27 standalone.localdomain sshd[516975]: Accepted publickey for root from 192.168.122.30 port 59824 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:36:27 standalone.localdomain systemd-logind[45629]: New session 300 of user root.
Oct 13 15:36:27 standalone.localdomain systemd[1]: Started Session 300 of User root.
Oct 13 15:36:27 standalone.localdomain sshd[516975]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:36:27 standalone.localdomain sshd[516978]: Received disconnect from 192.168.122.30 port 59824:11: disconnected by user
Oct 13 15:36:27 standalone.localdomain sshd[516978]: Disconnected from user root 192.168.122.30 port 59824
Oct 13 15:36:27 standalone.localdomain sshd[516975]: pam_unix(sshd:session): session closed for user root
Oct 13 15:36:27 standalone.localdomain systemd[1]: session-300.scope: Deactivated successfully.
Oct 13 15:36:27 standalone.localdomain systemd-logind[45629]: Session 300 logged out. Waiting for processes to exit.
Oct 13 15:36:27 standalone.localdomain systemd-logind[45629]: Removed session 300.
Oct 13 15:36:28 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:28.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:28 standalone.localdomain python3.9[517086]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:36:29 standalone.localdomain python3.9[517172]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760369788.030211-1410-134647143651803/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:36:29 standalone.localdomain ceph-mon[29756]: pgmap v3657: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:36:29 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:29.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3658: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:36:29 standalone.localdomain python3.9[517280]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:36:30 standalone.localdomain ceph-mon[29756]: pgmap v3658: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:36:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25976 DF PROTO=TCP SPT=60388 DPT=9102 SEQ=1922239468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC82C760000000001030307) 
Oct 13 15:36:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:36:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:36:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:36:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:36:30 standalone.localdomain podman[517286]: 2025-10-13 15:36:30.824607369 +0000 UTC m=+0.085580079 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:36:30 standalone.localdomain podman[517286]: 2025-10-13 15:36:30.82982968 +0000 UTC m=+0.090802410 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:36:30 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:36:30 standalone.localdomain podman[517283]: 2025-10-13 15:36:30.886593211 +0000 UTC m=+0.148601044 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, vendor=Red Hat, Inc., 
io.openshift.expose-services=, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, build-date=2025-07-21T14:56:28, name=rhosp17/openstack-swift-object, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4)
Oct 13 15:36:30 standalone.localdomain podman[517284]: 2025-10-13 15:36:30.933544069 +0000 UTC m=+0.201595658 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, build-date=2025-07-21T15:54:32, container_name=swift_container_server, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.component=openstack-swift-container-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Oct 13 15:36:31 standalone.localdomain podman[517285]: 2025-10-13 15:36:31.002373302 +0000 UTC m=+0.261120605 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, version=17.1.9, io.openshift.expose-services=, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:36:31 standalone.localdomain python3.9[517391]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:31 standalone.localdomain podman[517283]: 2025-10-13 15:36:31.112983252 +0000 UTC m=+0.374991105 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, tcib_managed=true, batch=17.1_20250721.1, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, 
com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1, name=rhosp17/openstack-swift-object, container_name=swift_object_server, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28)
Oct 13 15:36:31 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:36:31 standalone.localdomain podman[517284]: 2025-10-13 15:36:31.18294254 +0000 UTC m=+0.450994169 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, managed_by=tripleo_ansible, 
com.redhat.component=openstack-swift-container-container, vcs-type=git, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905)
Oct 13 15:36:31 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:36:31 standalone.localdomain podman[517285]: 2025-10-13 15:36:31.22380226 +0000 UTC m=+0.482549543 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.buildah.version=1.33.12, version=17.1.9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, config_id=tripleo_step4, 
architecture=x86_64, build-date=2025-07-21T16:11:22, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, distribution-scope=public, release=1)
Oct 13 15:36:31 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:36:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3659: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:36:31 standalone.localdomain python3.9[517547]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:36:32 standalone.localdomain python3.9[517633]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760369791.1711855-1410-175116325475101/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:32 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:32.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:36:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:36:32 standalone.localdomain python3.9[517741]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:36:33 standalone.localdomain python3.9[517827]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760369792.1565144-1410-42849302090802/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=f34c7eb1930a91cf1fb02b6d58032eaf6db6dbe6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:33.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:36:33 standalone.localdomain ceph-mon[29756]: pgmap v3659: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:36:33 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:33.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3660: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:36:33 standalone.localdomain python3.9[517935]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:36:34 standalone.localdomain ceph-mon[29756]: pgmap v3660: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:36:34 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:34.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:36:34 standalone.localdomain podman[518022]: 2025-10-13 15:36:34.835732656 +0000 UTC m=+0.094416243 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:36:34 standalone.localdomain podman[518022]: 2025-10-13 15:36:34.847853549 +0000 UTC m=+0.106537136 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=edpm, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:36:34 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:36:34 standalone.localdomain python3.9[518021]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/root/.ansible/tmp/ansible-tmp-1760369793.2363453-1410-263890975307378/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3661: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:36:35 standalone.localdomain python3.9[518148]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:36:36 standalone.localdomain ceph-mon[29756]: pgmap v3661: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 13 15:36:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:36:36 standalone.localdomain python3.9[518256]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:36:36 standalone.localdomain podman[518257]: 2025-10-13 15:36:36.7886266 +0000 UTC m=+0.059059192 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:36:36 standalone.localdomain podman[518257]: 2025-10-13 15:36:36.800095854 +0000 UTC m=+0.070528476 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:36:36 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:36:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:37.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:36:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:37.092 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:36:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:37.092 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:36:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:36:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:37.249 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:36:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:37.250 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:36:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:37.250 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:36:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:37.250 2 DEBUG nova.objects.instance [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:36:37 standalone.localdomain python3.9[518383]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:36:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3662: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:37.655 2 DEBUG nova.network.neutron [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:36:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:37.691 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:36:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:37.691 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:36:37 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:37.692 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:36:38 standalone.localdomain python3.9[518493]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:36:38 standalone.localdomain ceph-mon[29756]: pgmap v3662: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:38.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:38 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:38.687 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:36:38 standalone.localdomain python3.9[518601]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:36:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:36:38 standalone.localdomain systemd[1]: tmp-crun.KOJMrR.mount: Deactivated successfully.
Oct 13 15:36:38 standalone.localdomain podman[518604]: 2025-10-13 15:36:38.829654532 +0000 UTC m=+0.096071595 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:36:38 standalone.localdomain podman[518604]: 2025-10-13 15:36:38.84225789 +0000 UTC m=+0.108674953 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:36:38 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.115 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.115 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.116 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.116 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.116 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:36:39 standalone.localdomain python3.9[518736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:36:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3663: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:36:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3723073221' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.614 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.679 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.681 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.682 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.687 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.688 2 DEBUG nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:36:39 standalone.localdomain python3.9[518810]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute.json _original_basename=nova_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.881 2 WARNING nova.virt.libvirt.driver [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.882 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9266MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.882 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:36:39 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:39.882 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:36:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:40.147 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:36:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:40.148 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:36:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:40.148 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:36:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:40.149 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:36:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:40.207 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:36:40 standalone.localdomain ceph-mon[29756]: pgmap v3663: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:40 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3723073221' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:36:40 standalone.localdomain python3.9[518940]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 15:36:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:36:40 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2184913667' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:36:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:40.599 2 DEBUG oslo_concurrency.processutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:36:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:40.606 2 DEBUG nova.compute.provider_tree [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:36:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:40.621 2 DEBUG nova.scheduler.client.report [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:36:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:40.622 2 DEBUG nova.compute.resource_tracker [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:36:40 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:40.622 2 DEBUG oslo_concurrency.lockutils [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:36:40 standalone.localdomain python3.9[518997]: ansible-ansible.legacy.file Invoked with mode=0700 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute_init.json _original_basename=nova_compute_init.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute_init.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 15:36:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3664: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:41 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2184913667' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:36:41 standalone.localdomain podman[467099]: time="2025-10-13T15:36:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:36:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:36:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:36:41 standalone.localdomain python3.9[519105]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 13 15:36:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:36:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49119 "" "Go-http-client/1.1"
Oct 13 15:36:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:36:42 standalone.localdomain python3.9[519213]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:36:42 standalone.localdomain ceph-mon[29756]: pgmap v3664: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:36:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:36:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:36:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:36:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:36:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:36:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:36:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:36:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:36:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:36:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:36:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:36:43 standalone.localdomain python3[519321]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:36:43 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:43.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:43 standalone.localdomain python3[519321]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                             {
                                                                  "Id": "97abb4e5d6eb812c6abde306e15dbdde9dbba5ef5cd42ad11b83abc055914569",
                                                                  "Digest": "sha256:2b70db6709e60609ff94638c3a7a40f509d80d41e53eb54f065703da11802cd3",
                                                                  "RepoTags": [
                                                                       "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                  ],
                                                                  "RepoDigests": [
                                                                       "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:2b70db6709e60609ff94638c3a7a40f509d80d41e53eb54f065703da11802cd3"
                                                                  ],
                                                                  "Parent": "",
                                                                  "Comment": "",
                                                                  "Created": "2025-10-13T06:31:21.381002217Z",
                                                                  "Config": {
                                                                       "User": "nova",
                                                                       "Env": [
                                                                            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                            "LANG=en_US.UTF-8",
                                                                            "TZ=UTC",
                                                                            "container=oci"
                                                                       ],
                                                                       "Entrypoint": [
                                                                            "dumb-init",
                                                                            "--single-child",
                                                                            "--"
                                                                       ],
                                                                       "Cmd": [
                                                                            "kolla_start"
                                                                       ],
                                                                       "Labels": {
                                                                            "io.buildah.version": "1.41.3",
                                                                            "maintainer": "OpenStack Kubernetes Operator team",
                                                                            "org.label-schema.build-date": "20251009",
                                                                            "org.label-schema.license": "GPLv2",
                                                                            "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                            "org.label-schema.schema-version": "1.0",
                                                                            "org.label-schema.vendor": "CentOS",
                                                                            "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                            "tcib_managed": "true"
                                                                       },
                                                                       "StopSignal": "SIGTERM"
                                                                  },
                                                                  "Version": "",
                                                                  "Author": "",
                                                                  "Architecture": "amd64",
                                                                  "Os": "linux",
                                                                  "Size": 1207007612,
                                                                  "VirtualSize": 1207007612,
                                                                  "GraphDriver": {
                                                                       "Name": "overlay",
                                                                       "Data": {
                                                                            "LowerDir": "/var/lib/containers/storage/overlay/db748b4cdb30a617a163bf186638b12157c2da5481b73c1cb39630dc1d937c96/diff:/var/lib/containers/storage/overlay/00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4/diff:/var/lib/containers/storage/overlay/00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",
                                                                            "UpperDir": "/var/lib/containers/storage/overlay/2f869e978a3914fac885699b921afa1af4d3c121bcdff90f8b5cbe23f9a11dc5/diff",
                                                                            "WorkDir": "/var/lib/containers/storage/overlay/2f869e978a3914fac885699b921afa1af4d3c121bcdff90f8b5cbe23f9a11dc5/work"
                                                                       }
                                                                  },
                                                                  "RootFS": {
                                                                       "Type": "layers",
                                                                       "Layers": [
                                                                            "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",
                                                                            "sha256:b783c4d6e5ad469a2da52d1f9f5642b4a99005a13a4f8f9855f9f6ed3dcfeddf",
                                                                            "sha256:4dd9b6336279e9bb933efd1972a9f033bb03b41daa8f211234bd85673a3f81ca",
                                                                            "sha256:1a40b8e27c60dd8893fdbc5fa30120a44a28acf4612c80547c0553f310501dd5",
                                                                            "sha256:2a9ec46259af39bbeda42274a61412e662148c4183ea89d81b5121453e60e59e"
                                                                       ]
                                                                  },
                                                                  "Labels": {
                                                                       "io.buildah.version": "1.41.3",
                                                                       "maintainer": "OpenStack Kubernetes Operator team",
                                                                       "org.label-schema.build-date": "20251009",
                                                                       "org.label-schema.license": "GPLv2",
                                                                       "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                       "org.label-schema.schema-version": "1.0",
                                                                       "org.label-schema.vendor": "CentOS",
                                                                       "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                       "tcib_managed": "true"
                                                                  },
                                                                  "Annotations": {},
                                                                  "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                  "User": "nova",
                                                                  "History": [
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.867908726Z",
                                                                            "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.868015697Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251009\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:07.890794359Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.75496777Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                            "comment": "FROM quay.io/centos/centos:stream9",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.754993311Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755013771Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755032692Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755082953Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755102984Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:47.201574917Z",
                                                                            "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:23.544915378Z",
                                                                            "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.119117694Z",
                                                                            "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.552286083Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.962220801Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:28.715322167Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.14263152Z",
                                                                            "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.560856504Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.004904301Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.42712225Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.823321864Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.238615281Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.634039923Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.0188837Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.395414523Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.805928482Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.197074343Z",
                                                                            "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.605614349Z",
                                                                            "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.372128638Z",
                                                                            "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.755904629Z",
                                                                            "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:38.104256162Z",
                                                                            "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:39.755515954Z",
                                                                            "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870322882Z",
                                                                            "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870431084Z",
                                                                            "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870453635Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870467555Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:43.347797515Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:12:52.112593379Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:13:34.238039873Z",
                                                                            "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:13:37.655425275Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:18:11.761349972Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:18:20.180231702Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:18:44.424181775Z",
                                                                            "created_by": "/bin/sh -c mkdir -p /etc/ssh && touch /etc/ssh/ssh_known_host",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:20:01.549627401Z",
                                                                            "created_by": "/bin/sh -c dnf install -y openstack-nova-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:20:10.269093781Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:29:37.419592755Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-nova-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:31:20.482454363Z",
                                                                            "created_by": "/bin/sh -c dnf -y install e2fsprogs xfsprogs xorriso iscsi-initiator-utils nfs-utils targetcli nvme-cli device-mapper-multipath ceph-common openssh-clients openstack-nova-compute openvswitch swtpm swtpm-tools && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:31:20.985389477Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:31:21.378472405Z",
                                                                            "created_by": "/bin/sh -c rm -f /etc/machine-id",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:31:21.378530597Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER nova",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:31:28.94629191Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       }
                                                                  ],
                                                                  "NamesHistory": [
                                                                       "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                  ]
                                                             }
                                                        ]
                                                        : quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 13 15:36:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3665: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:43 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:43.623 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:36:43 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:43.623 2 DEBUG nova.compute.manager [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:36:44 standalone.localdomain python3.9[519493]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:36:44 standalone.localdomain sudo[519513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:36:44 standalone.localdomain sudo[519513]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:36:44 standalone.localdomain sudo[519513]: pam_unix(sudo:session): session closed for user root
Oct 13 15:36:44 standalone.localdomain sudo[519531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:36:44 standalone.localdomain sudo[519531]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:36:44 standalone.localdomain ceph-mon[29756]: pgmap v3665: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:44 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:44.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:44 standalone.localdomain sudo[519531]: pam_unix(sudo:session): session closed for user root
Oct 13 15:36:45 standalone.localdomain python3.9[519654]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 13 15:36:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:36:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:36:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:36:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:36:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:36:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:36:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:36:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:36:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev d08ab16e-178f-422f-98f7-b7bcb904fae7 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:36:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev d08ab16e-178f-422f-98f7-b7bcb904fae7 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:36:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event d08ab16e-178f-422f-98f7-b7bcb904fae7 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:36:45 standalone.localdomain sudo[519679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:36:45 standalone.localdomain sudo[519679]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:36:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:36:45 standalone.localdomain sudo[519679]: pam_unix(sudo:session): session closed for user root
Oct 13 15:36:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:36:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:36:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:36:45 standalone.localdomain podman[519707]: 2025-10-13 15:36:45.213658803 +0000 UTC m=+0.073669572 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:36:45 standalone.localdomain podman[519707]: 2025-10-13 15:36:45.219962208 +0000 UTC m=+0.079972987 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:36:45 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:36:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3666: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:45 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:36:45 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:36:45 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:36:45 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:36:45 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:36:46 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:46.091 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:36:46 standalone.localdomain python3.9[519814]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 15:36:46 standalone.localdomain python3[519922]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 15:36:47 standalone.localdomain python3[519922]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                             {
                                                                  "Id": "97abb4e5d6eb812c6abde306e15dbdde9dbba5ef5cd42ad11b83abc055914569",
                                                                  "Digest": "sha256:2b70db6709e60609ff94638c3a7a40f509d80d41e53eb54f065703da11802cd3",
                                                                  "RepoTags": [
                                                                       "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                  ],
                                                                  "RepoDigests": [
                                                                       "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:2b70db6709e60609ff94638c3a7a40f509d80d41e53eb54f065703da11802cd3"
                                                                  ],
                                                                  "Parent": "",
                                                                  "Comment": "",
                                                                  "Created": "2025-10-13T06:31:21.381002217Z",
                                                                  "Config": {
                                                                       "User": "nova",
                                                                       "Env": [
                                                                            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                            "LANG=en_US.UTF-8",
                                                                            "TZ=UTC",
                                                                            "container=oci"
                                                                       ],
                                                                       "Entrypoint": [
                                                                            "dumb-init",
                                                                            "--single-child",
                                                                            "--"
                                                                       ],
                                                                       "Cmd": [
                                                                            "kolla_start"
                                                                       ],
                                                                       "Labels": {
                                                                            "io.buildah.version": "1.41.3",
                                                                            "maintainer": "OpenStack Kubernetes Operator team",
                                                                            "org.label-schema.build-date": "20251009",
                                                                            "org.label-schema.license": "GPLv2",
                                                                            "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                            "org.label-schema.schema-version": "1.0",
                                                                            "org.label-schema.vendor": "CentOS",
                                                                            "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                            "tcib_managed": "true"
                                                                       },
                                                                       "StopSignal": "SIGTERM"
                                                                  },
                                                                  "Version": "",
                                                                  "Author": "",
                                                                  "Architecture": "amd64",
                                                                  "Os": "linux",
                                                                  "Size": 1207007612,
                                                                  "VirtualSize": 1207007612,
                                                                  "GraphDriver": {
                                                                       "Name": "overlay",
                                                                       "Data": {
                                                                            "LowerDir": "/var/lib/containers/storage/overlay/db748b4cdb30a617a163bf186638b12157c2da5481b73c1cb39630dc1d937c96/diff:/var/lib/containers/storage/overlay/00cbc6d8860896a86790c9b72aab7c791e98c5a7078f2ddda23ef8d09bdfcbb4/diff:/var/lib/containers/storage/overlay/00e34e4ed090003f737f695a35bb18e31ccfed17328d91583c884a412d772d4c/diff:/var/lib/containers/storage/overlay/f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424/diff",
                                                                            "UpperDir": "/var/lib/containers/storage/overlay/2f869e978a3914fac885699b921afa1af4d3c121bcdff90f8b5cbe23f9a11dc5/diff",
                                                                            "WorkDir": "/var/lib/containers/storage/overlay/2f869e978a3914fac885699b921afa1af4d3c121bcdff90f8b5cbe23f9a11dc5/work"
                                                                       }
                                                                  },
                                                                  "RootFS": {
                                                                       "Type": "layers",
                                                                       "Layers": [
                                                                            "sha256:f3f40f6483bf6d587286da9e86e40878c2aaaf723da5aa2364fff24f5ea28424",
                                                                            "sha256:b783c4d6e5ad469a2da52d1f9f5642b4a99005a13a4f8f9855f9f6ed3dcfeddf",
                                                                            "sha256:4dd9b6336279e9bb933efd1972a9f033bb03b41daa8f211234bd85673a3f81ca",
                                                                            "sha256:1a40b8e27c60dd8893fdbc5fa30120a44a28acf4612c80547c0553f310501dd5",
                                                                            "sha256:2a9ec46259af39bbeda42274a61412e662148c4183ea89d81b5121453e60e59e"
                                                                       ]
                                                                  },
                                                                  "Labels": {
                                                                       "io.buildah.version": "1.41.3",
                                                                       "maintainer": "OpenStack Kubernetes Operator team",
                                                                       "org.label-schema.build-date": "20251009",
                                                                       "org.label-schema.license": "GPLv2",
                                                                       "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                       "org.label-schema.schema-version": "1.0",
                                                                       "org.label-schema.vendor": "CentOS",
                                                                       "tcib_build_tag": "92672cd85fd36317d65faa0525acf849",
                                                                       "tcib_managed": "true"
                                                                  },
                                                                  "Annotations": {},
                                                                  "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                  "User": "nova",
                                                                  "History": [
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.867908726Z",
                                                                            "created_by": "/bin/sh -c #(nop) ADD file:b2e608b9da8e087a764c2aebbd9c2cc9181047f5b301f1dab77fdf098a28268b in / ",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:03.868015697Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251009\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-09T00:18:07.890794359Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.75496777Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                            "comment": "FROM quay.io/centos/centos:stream9",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.754993311Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755013771Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755032692Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755082953Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:46.755102984Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:09:47.201574917Z",
                                                                            "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:23.544915378Z",
                                                                            "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.119117694Z",
                                                                            "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.552286083Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:27.962220801Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:28.715322167Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.14263152Z",
                                                                            "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:29.560856504Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.004904301Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.42712225Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:30.823321864Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.238615281Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:31.634039923Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.0188837Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.395414523Z",
                                                                            "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:32.805928482Z",
                                                                            "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.197074343Z",
                                                                            "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:33.605614349Z",
                                                                            "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.372128638Z",
                                                                            "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:37.755904629Z",
                                                                            "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:38.104256162Z",
                                                                            "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:39.755515954Z",
                                                                            "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870322882Z",
                                                                            "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870431084Z",
                                                                            "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870453635Z",
                                                                            "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:41.870467555Z",
                                                                            "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:10:43.347797515Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:12:52.112593379Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:13:34.238039873Z",
                                                                            "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:13:37.655425275Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:18:11.761349972Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:18:20.180231702Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:18:44.424181775Z",
                                                                            "created_by": "/bin/sh -c mkdir -p /etc/ssh && touch /etc/ssh/ssh_known_host",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:20:01.549627401Z",
                                                                            "created_by": "/bin/sh -c dnf install -y openstack-nova-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:20:10.269093781Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:29:37.419592755Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER root",
                                                                            "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-nova-base:92672cd85fd36317d65faa0525acf849",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:31:20.482454363Z",
                                                                            "created_by": "/bin/sh -c dnf -y install e2fsprogs xfsprogs xorriso iscsi-initiator-utils nfs-utils targetcli nvme-cli device-mapper-multipath ceph-common openssh-clients openstack-nova-compute openvswitch swtpm swtpm-tools && dnf clean all && rm -rf /var/cache/dnf",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:31:20.985389477Z",
                                                                            "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:31:21.378472405Z",
                                                                            "created_by": "/bin/sh -c rm -f /etc/machine-id",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:31:21.378530597Z",
                                                                            "created_by": "/bin/sh -c #(nop) USER nova",
                                                                            "empty_layer": true
                                                                       },
                                                                       {
                                                                            "created": "2025-10-13T06:31:28.94629191Z",
                                                                            "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"92672cd85fd36317d65faa0525acf849\""
                                                                       }
                                                                  ],
                                                                  "NamesHistory": [
                                                                       "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                  ]
                                                             }
                                                        ]
                                                        : quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 13 15:36:47 standalone.localdomain ceph-mon[29756]: pgmap v3666: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:36:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3667: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:48 standalone.localdomain ceph-mon[29756]: pgmap v3667: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:48 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:48.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:48 standalone.localdomain python3.9[520096]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:36:49 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:49.090 2 DEBUG oslo_service.periodic_task [None req-e2e9d507-a691-46b1-b69b-fda8332e4f53 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:36:49 standalone.localdomain python3.9[520206]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:36:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3668: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:49 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:49.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:50 standalone.localdomain python3.9[520313]: ansible-copy Invoked with src=/root/.ansible/tmp/ansible-tmp-1760369809.433266-1622-231276920849436/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 15:36:50 standalone.localdomain ceph-mon[29756]: pgmap v3668: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:50 standalone.localdomain python3.9[520366]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 15:36:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3669: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:36:51 standalone.localdomain podman[520477]: 2025-10-13 15:36:51.764096848 +0000 UTC m=+0.101898164 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 13 15:36:51 standalone.localdomain python3.9[520476]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:36:51 standalone.localdomain podman[520477]: 2025-10-13 15:36:51.803122162 +0000 UTC m=+0.140923468 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:36:51 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:36:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:36:52 standalone.localdomain python3.9[520608]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:36:52 standalone.localdomain ceph-mon[29756]: pgmap v3669: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:53 standalone.localdomain python3.9[520716]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 15:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:36:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:36:53 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:53.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3670: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8194 DF PROTO=TCP SPT=44654 DPT=9102 SEQ=3513195184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC885CF0000000001030307) 
Oct 13 15:36:54 standalone.localdomain python3.9[520824]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 13 15:36:54 standalone.localdomain ceph-mon[29756]: pgmap v3670: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:54 standalone.localdomain systemd-journald[48591]: Field hash table of /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal has a fill level at 109.0 (363 of 333 items), suggesting rotation.
Oct 13 15:36:54 standalone.localdomain systemd-journald[48591]: /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 13 15:36:54 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 15:36:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8195 DF PROTO=TCP SPT=44654 DPT=9102 SEQ=3513195184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC889F70000000001030307) 
Oct 13 15:36:54 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:54.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:36:54 standalone.localdomain systemd[1]: tmp-crun.cCD49h.mount: Deactivated successfully.
Oct 13 15:36:54 standalone.localdomain podman[520954]: 2025-10-13 15:36:54.83471357 +0000 UTC m=+0.093498295 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.buildah.version=1.33.7, container_name=openstack_network_exporter, release=1755695350, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 15:36:54 standalone.localdomain podman[520954]: 2025-10-13 15:36:54.849196367 +0000 UTC m=+0.107981132 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 13 15:36:54 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:36:55 standalone.localdomain python3.9[520960]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 15:36:55 standalone.localdomain systemd[1]: Stopping nova_compute container...
Oct 13 15:36:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3671: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:56 standalone.localdomain ceph-mon[29756]: pgmap v3671: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8196 DF PROTO=TCP SPT=44654 DPT=9102 SEQ=3513195184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC891F70000000001030307) 
Oct 13 15:36:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:36:56 standalone.localdomain podman[520996]: 2025-10-13 15:36:56.817762424 +0000 UTC m=+0.085338393 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:36:56 standalone.localdomain podman[520996]: 2025-10-13 15:36:56.833926002 +0000 UTC m=+0.101501951 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 15:36:56 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:36:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:36:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3672: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:58 standalone.localdomain ceph-mon[29756]: pgmap v3672: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:58 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:58.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:36:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3673: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:36:59 standalone.localdomain nova_compute[456855]: 2025-10-13 15:36:59.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:00 standalone.localdomain nova_compute[456855]: 2025-10-13 15:37:00.240 2 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored
Oct 13 15:37:00 standalone.localdomain nova_compute[456855]: 2025-10-13 15:37:00.242 2 DEBUG oslo_concurrency.lockutils [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:37:00 standalone.localdomain nova_compute[456855]: 2025-10-13 15:37:00.243 2 DEBUG oslo_concurrency.lockutils [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:37:00 standalone.localdomain nova_compute[456855]: 2025-10-13 15:37:00.243 2 DEBUG oslo_concurrency.lockutils [None req-21ce295a-1a1a-4199-9628-a647663ef9a7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:37:00 standalone.localdomain ceph-mon[29756]: pgmap v3673: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8197 DF PROTO=TCP SPT=44654 DPT=9102 SEQ=3513195184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC8A1B60000000001030307) 
Oct 13 15:37:00 standalone.localdomain virtqemud[425408]: End of file while reading data: Input/output error
Oct 13 15:37:00 standalone.localdomain systemd[1]: libpod-d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719.scope: Deactivated successfully.
Oct 13 15:37:00 standalone.localdomain systemd[1]: libpod-d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719.scope: Consumed 34.557s CPU time.
Oct 13 15:37:00 standalone.localdomain podman[520982]: 2025-10-13 15:37:00.700809011 +0000 UTC m=+5.546217277 container died d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:37:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719-userdata-shm.mount: Deactivated successfully.
Oct 13 15:37:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc-merged.mount: Deactivated successfully.
Oct 13 15:37:00 standalone.localdomain podman[520982]: 2025-10-13 15:37:00.903211193 +0000 UTC m=+5.748619409 container cleanup d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:37:00 standalone.localdomain podman[520982]: nova_compute
Oct 13 15:37:00 standalone.localdomain podman[521015]: 2025-10-13 15:37:00.90671079 +0000 UTC m=+0.195832390 container cleanup d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 15:37:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:37:01 standalone.localdomain podman[521029]: 2025-10-13 15:37:01.028577849 +0000 UTC m=+0.092428771 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:37:01 standalone.localdomain podman[521029]: 2025-10-13 15:37:01.043058645 +0000 UTC m=+0.106909517 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:37:01 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:37:01 standalone.localdomain podman[521030]: 2025-10-13 15:37:01.109415392 +0000 UTC m=+0.169848769 container cleanup d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:37:01 standalone.localdomain podman[521030]: nova_compute
Oct 13 15:37:01 standalone.localdomain systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 13 15:37:01 standalone.localdomain systemd[1]: Stopped nova_compute container.
Oct 13 15:37:01 standalone.localdomain systemd[1]: edpm_nova_compute.service: Consumed 1.023s CPU time, read 0B from disk, written 20.0K to disk.
Oct 13 15:37:01 standalone.localdomain systemd[1]: Starting nova_compute container...
Oct 13 15:37:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:37:01 standalone.localdomain podman[521066]: 2025-10-13 15:37:01.22448088 +0000 UTC m=+0.081555335 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, release=1, container_name=swift_object_server, architecture=x86_64, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:37:01 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:37:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:37:01 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 13 15:37:01 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 13 15:37:01 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 15:37:01 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 15:37:01 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d725b6764492faed396078ff3568f346ef427dd61aebcff3834fc770f3242dcc/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 15:37:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:37:01 standalone.localdomain podman[521065]: 2025-10-13 15:37:01.316803538 +0000 UTC m=+0.170219861 container init d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: + sudo -E kolla_set_configs
Oct 13 15:37:01 standalone.localdomain podman[521065]: 2025-10-13 15:37:01.329708905 +0000 UTC m=+0.183125228 container start d014bd5310a8685ce7872624913dd1c3cd066aec5817f5a2e9aeca2654eff719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251009)
Oct 13 15:37:01 standalone.localdomain podman[521065]: nova_compute
Oct 13 15:37:01 standalone.localdomain systemd[1]: Started nova_compute container.
Oct 13 15:37:01 standalone.localdomain podman[521108]: 2025-10-13 15:37:01.387014623 +0000 UTC m=+0.067168053 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-type=git, container_name=swift_account_server, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-account, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9)
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Validating config file
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Copying service configuration files
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Deleting /etc/ceph
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Creating directory /etc/ceph
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /etc/ceph
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Writing out command to execute
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: ++ cat /run_command
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: + CMD=nova-compute
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: + ARGS=
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: + sudo kolla_copy_cacerts
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: + [[ ! -n '' ]]
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: + . kolla_extend_start
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: Running command: 'nova-compute'
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: + echo 'Running command: '\''nova-compute'\'''
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: + umask 0022
Oct 13 15:37:01 standalone.localdomain nova_compute[521101]: + exec nova-compute
Oct 13 15:37:01 standalone.localdomain podman[521066]: 2025-10-13 15:37:01.437783528 +0000 UTC m=+0.294857943 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=1, build-date=2025-07-21T14:56:28, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-swift-object, io.buildah.version=1.33.12, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, container_name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, com.redhat.component=openstack-swift-object-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 15:37:01 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:37:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3674: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:01 standalone.localdomain podman[521107]: 2025-10-13 15:37:01.444073212 +0000 UTC m=+0.128138863 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, container_name=swift_container_server, config_id=tripleo_step4, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team)
Oct 13 15:37:01 standalone.localdomain podman[521108]: 2025-10-13 15:37:01.555009453 +0000 UTC m=+0.235162893 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, vcs-type=git, config_id=tripleo_step4, container_name=swift_account_server, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, build-date=2025-07-21T16:11:22, managed_by=tripleo_ansible, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:37:01 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:37:01 standalone.localdomain podman[521107]: 2025-10-13 15:37:01.642048437 +0000 UTC m=+0.326114078 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, summary=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.buildah.version=1.33.12, config_id=tripleo_step4, container_name=swift_container_server, distribution-scope=public)
Oct 13 15:37:01 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:37:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:37:02 standalone.localdomain python3.9[521280]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 13 15:37:02 standalone.localdomain systemd[1]: Started libpod-conmon-10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d.scope.
Oct 13 15:37:02 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:37:02 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb81b6705c127d8ae6a85a0c5aa00e223afd3bd2e8fc0274e5929ac927615f25/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 13 15:37:02 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb81b6705c127d8ae6a85a0c5aa00e223afd3bd2e8fc0274e5929ac927615f25/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 15:37:02 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb81b6705c127d8ae6a85a0c5aa00e223afd3bd2e8fc0274e5929ac927615f25/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 13 15:37:02 standalone.localdomain ceph-mon[29756]: pgmap v3674: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:02 standalone.localdomain podman[521307]: 2025-10-13 15:37:02.539114571 +0000 UTC m=+0.151279156 container init 10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute_init)
Oct 13 15:37:02 standalone.localdomain podman[521307]: 2025-10-13 15:37:02.553776143 +0000 UTC m=+0.165940728 container start 10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:37:02 standalone.localdomain python3.9[521280]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Applying nova statedir ownership
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 0:0 to 42436:42436
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 0:0 to 42436:42436
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/54a46fec-332e-42f9-83ed-88e763d13f63/
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/54a46fec-332e-42f9-83ed-88e763d13f63 already 42436:42436
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/54a46fec-332e-42f9-83ed-88e763d13f63 to system_u:object_r:container_file_t:s0
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/54a46fec-332e-42f9-83ed-88e763d13f63/console.log
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/_base already 42436:42436
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/_base to system_u:object_r:container_file_t:s0
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/468faea603e9a8e7d8ab655f127ad3cb3131d39b
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/locks already 42436:42436
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/locks to system_u:object_r:container_file_t:s0
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-468faea603e9a8e7d8ab655f127ad3cb3131d39b
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-ephemeral_1_0706d66
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/8f68d5aa-abc4-451d-89d2-f5342b71831c/
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/8f68d5aa-abc4-451d-89d2-f5342b71831c already 42436:42436
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/8f68d5aa-abc4-451d-89d2-f5342b71831c to system_u:object_r:container_file_t:s0
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/8f68d5aa-abc4-451d-89d2-f5342b71831c/console.log
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/7dbe5bae7bc27ef07490c629ec1f09edaa9e8c135ff89c3f08f1e44f39cf5928
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/4e1c8ff05ea4b4d3fd130f5de9a8e0ba3414c26740fc6ca454deea0f548914ea
Oct 13 15:37:02 standalone.localdomain nova_compute_init[521328]: INFO:nova_statedir:Nova statedir ownership complete
Oct 13 15:37:02 standalone.localdomain systemd[1]: libpod-10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d.scope: Deactivated successfully.
Oct 13 15:37:02 standalone.localdomain podman[521342]: 2025-10-13 15:37:02.698446145 +0000 UTC m=+0.052964964 container died 10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 15:37:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d-userdata-shm.mount: Deactivated successfully.
Oct 13 15:37:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fb81b6705c127d8ae6a85a0c5aa00e223afd3bd2e8fc0274e5929ac927615f25-merged.mount: Deactivated successfully.
Oct 13 15:37:02 standalone.localdomain podman[521342]: 2025-10-13 15:37:02.732949299 +0000 UTC m=+0.087468078 container cleanup 10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:37:02 standalone.localdomain systemd[1]: libpod-conmon-10c941742e1135ddb49d309edc51a4dc0ae21f055945ea9ba8cf0fdd3c52c37d.scope: Deactivated successfully.
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.085 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.085 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.085 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.086 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.205 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.219 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:37:03 standalone.localdomain sshd[498107]: pam_unix(sshd:session): session closed for user root
Oct 13 15:37:03 standalone.localdomain systemd-logind[45629]: Session 298 logged out. Waiting for processes to exit.
Oct 13 15:37:03 standalone.localdomain systemd[1]: session-298.scope: Deactivated successfully.
Oct 13 15:37:03 standalone.localdomain systemd[1]: session-298.scope: Consumed 1min 47.124s CPU time.
Oct 13 15:37:03 standalone.localdomain systemd-logind[45629]: Removed session 298.
Oct 13 15:37:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3675: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.733 2 INFO nova.virt.driver [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.898 2 INFO nova.compute.provider_config [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.911 2 DEBUG oslo_concurrency.lockutils [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.912 2 DEBUG oslo_concurrency.lockutils [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.912 2 DEBUG oslo_concurrency.lockutils [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.912 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.913 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.913 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.913 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.913 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.913 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.914 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.914 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.914 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.914 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.915 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.915 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.915 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.915 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.915 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.916 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.916 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.916 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.916 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.917 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] console_host                   = standalone.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.917 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.917 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.917 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.917 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.918 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.918 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.918 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.918 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.918 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.919 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.919 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.919 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.919 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.920 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.920 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.920 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.920 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.920 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.921 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] host                           = standalone.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.921 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.921 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.921 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.922 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.922 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.922 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.922 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.922 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.923 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.923 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.923 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.923 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.924 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.924 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.924 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.924 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.925 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.925 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.925 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.925 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.925 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.926 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.926 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.926 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.926 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.926 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.927 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.927 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.927 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.927 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.927 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.928 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.928 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.928 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.928 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.928 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.929 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.929 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.929 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.929 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.930 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.930 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.930 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.930 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.930 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.931 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.931 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.931 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.932 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.932 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.932 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.932 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.932 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.933 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.933 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.933 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.933 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.933 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.934 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.934 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.934 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.934 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.935 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.935 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.935 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.935 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.935 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.936 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.936 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.936 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.936 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.937 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.937 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.937 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.937 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.937 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.938 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.938 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.938 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.938 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.938 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.939 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.939 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.939 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.939 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.939 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.940 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.940 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.940 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.940 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.940 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.941 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.941 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.941 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.941 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.942 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.942 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.942 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.942 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.942 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.943 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.943 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.943 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.943 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.943 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.944 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.944 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.944 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.944 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.945 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.945 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.945 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.945 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.945 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.946 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.946 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.946 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.946 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.947 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.947 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.947 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.947 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.947 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.948 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.948 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.948 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.948 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.948 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.949 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.949 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.949 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.949 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.949 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.950 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.950 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.950 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.950 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.951 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.951 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.951 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.951 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.951 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.952 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.952 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.952 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.952 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.953 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.953 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.953 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.953 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.953 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.954 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.954 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.954 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.954 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.954 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.954 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.954 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.955 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.955 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.955 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.955 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.955 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.955 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.955 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.956 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.956 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.956 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.956 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.956 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.956 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.956 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.957 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.957 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.957 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.957 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.957 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.957 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.958 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.958 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.958 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.958 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.958 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.958 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.958 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.958 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.959 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.959 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.959 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.959 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.959 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.959 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.959 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.960 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.960 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.960 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.960 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.960 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.960 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.961 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.961 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.961 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.961 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.961 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.961 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.961 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.962 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.962 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.962 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.962 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.962 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.962 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.962 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.963 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.963 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.963 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.963 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.963 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.963 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.963 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.964 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.964 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.964 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.964 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.964 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.964 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.965 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.965 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.965 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.965 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.965 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.965 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.965 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.966 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.966 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.966 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.966 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.966 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.966 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.967 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.967 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.967 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.967 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.967 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.967 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.967 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.968 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.968 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.968 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.968 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.968 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.968 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.968 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.969 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.969 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.969 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.969 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.969 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.969 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.969 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.970 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.970 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.970 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.970 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.970 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.970 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.971 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.971 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.971 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.971 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.971 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.971 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.971 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.972 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.972 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.972 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.972 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.972 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.972 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.973 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.973 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.973 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.973 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.973 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.973 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.973 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.974 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.974 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.974 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.974 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.974 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.974 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.974 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.975 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.975 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.975 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.975 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.976 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.976 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.976 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.976 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.976 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.976 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.977 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.977 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.977 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.977 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.977 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.977 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.977 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.978 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.978 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.978 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.978 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.978 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.978 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.979 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.979 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.979 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.979 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.979 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.979 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.979 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.980 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.980 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.980 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.980 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.980 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.980 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.981 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.981 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.981 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.981 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.981 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.981 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.982 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.982 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.982 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.982 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.982 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.983 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.983 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.983 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.983 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.984 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.984 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.984 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.984 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.984 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.985 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.985 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.985 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.985 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.986 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.986 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.986 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.986 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.987 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.987 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.987 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.987 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.987 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.988 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.988 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.988 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.988 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.989 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.989 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.989 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.989 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.989 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.990 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.990 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.990 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.990 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.990 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.990 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.991 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.991 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.991 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.991 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.991 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.991 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.991 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.991 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.992 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.992 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.992 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.992 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.992 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.992 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.992 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.993 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.993 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.993 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.993 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.993 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.993 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.993 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.994 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.994 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.994 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.994 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.994 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.994 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.994 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.995 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.995 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.995 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.995 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.995 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.995 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.996 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.996 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.996 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.996 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.996 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.996 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.996 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.997 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.997 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.997 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.997 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.997 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.997 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.997 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.998 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.998 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.998 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.998 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.998 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.998 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.998 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.998 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.999 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.999 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.999 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.999 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.999 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.999 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:03.999 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.000 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.000 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.000 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.000 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.000 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.000 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.000 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.001 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.001 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.001 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.001 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.001 2 WARNING oslo_config.cfg [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: and ``live_migration_inbound_addr`` respectively.
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: ).  Its value may be silently ignored in the future.
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.002 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.002 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.002 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.002 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.002 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.002 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.002 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.003 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.003 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.003 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.003 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.003 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.003 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.003 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.004 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.004 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.004 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.004 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.004 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.rbd_secret_uuid        = 627e7f45-65aa-56de-94df-66eaee84a56e log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.004 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.004 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.005 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.005 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.005 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.005 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.005 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.005 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.005 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.006 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.006 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.006 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.007 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.007 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.007 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.007 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.007 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.008 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.008 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.008 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.008 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.008 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.008 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.009 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.009 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.009 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.009 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.009 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.009 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.009 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.010 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.010 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.010 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.010 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.010 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.010 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.010 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.011 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.011 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.011 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.011 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.011 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.011 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.011 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.011 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.012 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.012 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.012 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.012 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.012 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.012 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.012 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.013 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.013 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.013 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.013 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.013 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.013 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.013 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.014 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.014 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.014 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.014 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.014 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.014 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.014 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.014 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.015 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.015 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.015 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.015 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.015 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.015 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.015 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.016 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.016 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.016 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.016 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.016 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.016 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.016 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.017 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.017 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.017 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.017 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.017 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.017 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.017 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.018 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.018 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.018 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.018 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.018 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.018 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.018 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.018 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.019 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.019 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.019 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.019 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.019 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.019 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.019 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.020 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.020 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.020 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.020 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.020 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.020 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.020 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.021 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.021 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.021 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.021 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.021 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.021 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.021 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.021 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.022 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.022 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.022 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.022 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.022 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.022 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.023 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.023 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.023 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.023 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.023 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.023 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.023 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.024 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.024 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.024 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.024 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.024 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.024 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.024 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.025 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.025 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.025 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.025 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.025 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.025 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.025 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.026 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.026 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.026 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.026 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.026 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.026 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.026 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.027 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.027 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.027 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.027 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.027 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.027 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.027 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.028 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.028 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.028 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.028 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.028 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.028 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.029 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.029 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.029 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.029 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.029 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.029 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.029 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.029 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.030 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.030 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.030 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.030 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.030 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.030 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.031 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.031 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.031 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.031 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.031 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.031 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.031 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.031 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.032 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.032 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.032 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.032 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.032 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.032 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.032 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.033 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.033 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.033 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.033 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.033 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.033 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.033 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.033 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.034 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.034 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.034 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.034 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.034 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.034 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.034 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.035 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.035 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.035 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.035 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.035 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.035 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.035 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.035 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.036 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.036 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.036 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.036 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.036 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.036 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.036 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.037 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.037 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.037 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.037 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.037 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.037 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.038 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.038 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.038 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.038 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.038 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.038 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.038 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.039 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.039 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.039 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.039 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.039 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.039 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.039 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.040 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.040 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.040 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.040 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.040 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.040 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.041 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.041 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.041 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.041 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.041 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.041 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.041 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.041 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.042 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.042 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.042 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.042 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.042 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.042 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.042 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.043 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.043 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.043 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.043 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.043 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.043 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.043 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.044 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.044 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.044 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.044 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.044 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.044 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.045 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.045 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.045 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.045 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.045 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.045 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.045 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.046 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.046 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.046 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.046 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.046 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.046 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.046 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.047 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.047 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.047 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.047 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.047 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.047 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.047 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.048 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.048 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.048 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.048 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.048 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.048 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.048 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.049 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.049 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.049 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.049 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.049 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.049 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.049 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.050 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.050 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.050 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.050 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.050 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.050 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.050 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.051 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.051 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.051 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.051 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.051 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.051 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.052 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.052 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.052 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.052 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.052 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.052 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.052 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.053 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.053 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.053 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.053 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.053 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.053 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.053 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.054 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.054 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.054 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.054 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.054 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.054 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.054 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.055 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.055 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.055 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.055 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.055 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.055 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.055 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.055 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.056 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.056 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.056 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.056 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.056 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.056 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.056 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.057 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.057 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.057 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.057 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.057 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.057 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.057 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.058 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.058 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.058 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.058 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.058 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.058 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.058 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.059 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.059 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.059 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.059 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.059 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.059 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.059 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.059 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.060 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.060 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.060 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.060 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.060 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.060 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.060 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.061 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.061 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.061 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.061 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.061 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.061 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.061 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.062 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.062 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.062 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.062 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.062 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.062 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.062 2 DEBUG oslo_service.service [None req-64f275e7-80fd-4dc3-96a0-8c4cc60c3b1e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.063 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.101 2 INFO nova.virt.node [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Determined node identity e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 from /var/lib/nova/compute_id
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.102 2 DEBUG nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.103 2 DEBUG nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.103 2 DEBUG nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.103 2 DEBUG nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.113 2 DEBUG nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f1a087c9250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.116 2 DEBUG nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f1a087c9250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.117 2 INFO nova.virt.libvirt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Connection event '1' reason 'None'
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.122 2 INFO nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Libvirt host capabilities <capabilities>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <host>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <uuid>6635e8bc-8f5a-4138-a382-e7fa61dbcec1</uuid>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <cpu>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <arch>x86_64</arch>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model>EPYC-Rome-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <vendor>AMD</vendor>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <microcode version='16777317'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <signature family='23' model='49' stepping='0'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='x2apic'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='tsc-deadline'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='osxsave'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='hypervisor'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='tsc_adjust'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='spec-ctrl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='stibp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='arch-capabilities'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='ssbd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='cmp_legacy'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='topoext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='virt-ssbd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='lbrv'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='tsc-scale'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='vmcb-clean'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='pause-filter'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='pfthreshold'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='svme-addr-chk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='rdctl-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='skip-l1dfl-vmentry'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='mds-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature name='pschange-mc-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <pages unit='KiB' size='4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <pages unit='KiB' size='2048'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <pages unit='KiB' size='1048576'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </cpu>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <power_management>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <suspend_mem/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <suspend_disk/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <suspend_hybrid/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </power_management>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <iommu support='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <migration_features>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <live/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <uri_transports>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <uri_transport>tcp</uri_transport>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <uri_transport>rdma</uri_transport>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </uri_transports>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </migration_features>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <topology>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <cells num='1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <cell id='0'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:           <memory unit='KiB'>16116612</memory>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:           <pages unit='KiB' size='4'>4029153</pages>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:           <pages unit='KiB' size='2048'>0</pages>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:           <distances>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:             <sibling id='0' value='10'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:           </distances>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:           <cpus num='8'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:           </cpus>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         </cell>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </cells>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </topology>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <cache>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </cache>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <secmodel>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model>selinux</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <doi>0</doi>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </secmodel>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <secmodel>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model>dac</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <doi>0</doi>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </secmodel>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </host>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <guest>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <os_type>hvm</os_type>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <arch name='i686'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <wordsize>32</wordsize>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <domain type='qemu'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <domain type='kvm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </arch>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <features>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <pae/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <nonpae/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <acpi default='on' toggle='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <apic default='on' toggle='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <cpuselection/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <deviceboot/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <disksnapshot default='on' toggle='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <externalSnapshot/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </features>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </guest>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <guest>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <os_type>hvm</os_type>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <arch name='x86_64'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <wordsize>64</wordsize>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <domain type='qemu'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <domain type='kvm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </arch>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <features>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <acpi default='on' toggle='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <apic default='on' toggle='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <cpuselection/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <deviceboot/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <disksnapshot default='on' toggle='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <externalSnapshot/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </features>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </guest>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: </capabilities>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.132 2 DEBUG nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.133 2 DEBUG nova.virt.libvirt.volume.mount [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.139 2 DEBUG nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: <domainCapabilities>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <path>/usr/libexec/qemu-kvm</path>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <domain>kvm</domain>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <arch>i686</arch>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <vcpu max='1024'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <iothreads supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <os supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <enum name='firmware'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <loader supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>rom</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>pflash</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='readonly'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>yes</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>no</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='secure'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>no</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </loader>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </os>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <cpu>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='host-passthrough' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='hostPassthroughMigratable'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>on</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>off</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='maximum' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='maximumMigratable'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>on</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>off</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='host-model' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <vendor>AMD</vendor>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='x2apic'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='tsc-deadline'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='hypervisor'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='tsc_adjust'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='spec-ctrl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='stibp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='arch-capabilities'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='ssbd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='cmp_legacy'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='overflow-recov'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='succor'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='ibrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='amd-ssbd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='virt-ssbd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='lbrv'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='tsc-scale'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='vmcb-clean'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='pause-filter'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='pfthreshold'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='svme-addr-chk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='rdctl-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='mds-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='pschange-mc-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='disable' name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='custom' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cooperlake'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cooperlake-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cooperlake-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Dhyana-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Genoa'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amd-psfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='auto-ibrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='no-nested-data-bp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='null-sel-clr-base'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='stibp-always-on'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Genoa-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amd-psfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='auto-ibrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='no-nested-data-bp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='null-sel-clr-base'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='stibp-always-on'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Milan'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Milan-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Milan-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amd-psfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='no-nested-data-bp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='null-sel-clr-base'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='stibp-always-on'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='GraniteRapids'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='prefetchiti'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='GraniteRapids-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='prefetchiti'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='GraniteRapids-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10-128'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10-256'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10-512'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='prefetchiti'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v6'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v7'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='KnightsMill'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4fmaps'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4vnniw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512er'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512pf'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='KnightsMill-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4fmaps'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4vnniw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512er'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512pf'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G4-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tbm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G5-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tbm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SierraForest'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ne-convert'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cmpccxadd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SierraForest-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ne-convert'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cmpccxadd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='athlon'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='athlon-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='core2duo'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='core2duo-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='coreduo'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='coreduo-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='n270'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='n270-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='phenom'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='phenom-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </cpu>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <memoryBacking supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <enum name='sourceType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>file</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>anonymous</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>memfd</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </memoryBacking>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <devices>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <disk supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='diskDevice'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>disk</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>cdrom</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>floppy</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>lun</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='bus'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>fdc</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>scsi</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>usb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>sata</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-non-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </disk>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <graphics supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vnc</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>egl-headless</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>dbus</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </graphics>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <video supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='modelType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vga</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>cirrus</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>none</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>bochs</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>ramfb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </video>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <hostdev supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='mode'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>subsystem</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='startupPolicy'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>default</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>mandatory</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>requisite</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>optional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='subsysType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>usb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>pci</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>scsi</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='capsType'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='pciBackend'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </hostdev>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <rng supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-non-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendModel'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>random</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>egd</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>builtin</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </rng>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <filesystem supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='driverType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>path</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>handle</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtiofs</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </filesystem>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <tpm supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>tpm-tis</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>tpm-crb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendModel'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>emulator</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>external</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendVersion'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>2.0</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </tpm>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <redirdev supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='bus'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>usb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </redirdev>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <channel supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>pty</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>unix</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </channel>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <crypto supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>qemu</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendModel'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>builtin</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </crypto>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <interface supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>default</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>passt</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </interface>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <panic supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>isa</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>hyperv</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </panic>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </devices>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <features>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <gic supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <vmcoreinfo supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <genid supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <backingStoreInput supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <backup supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <async-teardown supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <ps2 supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <sev supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <sgx supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <hyperv supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='features'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>relaxed</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vapic</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>spinlocks</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vpindex</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>runtime</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>synic</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>stimer</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>reset</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vendor_id</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>frequencies</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>reenlightenment</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>tlbflush</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>ipi</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>avic</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>emsr_bitmap</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>xmm_input</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </hyperv>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <launchSecurity supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </features>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: </domainCapabilities>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.146 2 DEBUG nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: <domainCapabilities>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <path>/usr/libexec/qemu-kvm</path>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <domain>kvm</domain>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <arch>i686</arch>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <vcpu max='240'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <iothreads supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <os supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <enum name='firmware'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <loader supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>rom</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>pflash</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='readonly'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>yes</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>no</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='secure'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>no</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </loader>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </os>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <cpu>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='host-passthrough' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='hostPassthroughMigratable'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>on</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>off</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='maximum' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='maximumMigratable'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>on</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>off</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='host-model' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <vendor>AMD</vendor>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='x2apic'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='tsc-deadline'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='hypervisor'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='tsc_adjust'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='spec-ctrl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='stibp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='arch-capabilities'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='ssbd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='cmp_legacy'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='overflow-recov'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='succor'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='ibrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='amd-ssbd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='virt-ssbd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='lbrv'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='tsc-scale'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='vmcb-clean'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='pause-filter'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='pfthreshold'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='svme-addr-chk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='rdctl-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='mds-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='pschange-mc-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='disable' name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='custom' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cooperlake'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cooperlake-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cooperlake-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Dhyana-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Genoa'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amd-psfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='auto-ibrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='no-nested-data-bp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='null-sel-clr-base'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='stibp-always-on'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Genoa-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amd-psfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='auto-ibrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='no-nested-data-bp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='null-sel-clr-base'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='stibp-always-on'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Milan'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Milan-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Milan-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amd-psfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='no-nested-data-bp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='null-sel-clr-base'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='stibp-always-on'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='GraniteRapids'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='prefetchiti'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='GraniteRapids-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='prefetchiti'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='GraniteRapids-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10-128'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10-256'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10-512'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='prefetchiti'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v6'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v7'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='KnightsMill'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4fmaps'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4vnniw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512er'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512pf'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='KnightsMill-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4fmaps'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4vnniw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512er'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512pf'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G4-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tbm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G5-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tbm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SierraForest'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ne-convert'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cmpccxadd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SierraForest-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ne-convert'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cmpccxadd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='athlon'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='athlon-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='core2duo'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='core2duo-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='coreduo'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='coreduo-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='n270'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='n270-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='phenom'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='phenom-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </cpu>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <memoryBacking supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <enum name='sourceType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>file</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>anonymous</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>memfd</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </memoryBacking>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <devices>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <disk supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='diskDevice'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>disk</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>cdrom</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>floppy</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>lun</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='bus'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>ide</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>fdc</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>scsi</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>usb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>sata</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-non-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </disk>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <graphics supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vnc</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>egl-headless</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>dbus</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </graphics>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <video supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='modelType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vga</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>cirrus</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>none</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>bochs</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>ramfb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </video>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <hostdev supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='mode'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>subsystem</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='startupPolicy'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>default</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>mandatory</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>requisite</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>optional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='subsysType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>usb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>pci</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>scsi</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='capsType'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='pciBackend'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </hostdev>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <rng supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-non-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendModel'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>random</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>egd</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>builtin</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </rng>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <filesystem supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='driverType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>path</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>handle</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtiofs</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </filesystem>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <tpm supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>tpm-tis</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>tpm-crb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendModel'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>emulator</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>external</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendVersion'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>2.0</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </tpm>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <redirdev supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='bus'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>usb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </redirdev>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <channel supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>pty</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>unix</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </channel>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <crypto supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>qemu</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendModel'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>builtin</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </crypto>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <interface supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>default</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>passt</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </interface>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <panic supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>isa</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>hyperv</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </panic>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </devices>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <features>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <gic supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <vmcoreinfo supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <genid supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <backingStoreInput supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <backup supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <async-teardown supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <ps2 supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <sev supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <sgx supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <hyperv supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='features'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>relaxed</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vapic</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>spinlocks</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vpindex</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>runtime</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>synic</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>stimer</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>reset</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vendor_id</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>frequencies</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>reenlightenment</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>tlbflush</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>ipi</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>avic</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>emsr_bitmap</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>xmm_input</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </hyperv>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <launchSecurity supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </features>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: </domainCapabilities>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.166 2 DEBUG nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.171 2 DEBUG nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: <domainCapabilities>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <path>/usr/libexec/qemu-kvm</path>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <domain>kvm</domain>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <arch>x86_64</arch>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <vcpu max='1024'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <iothreads supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <os supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <enum name='firmware'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>efi</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <loader supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>rom</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>pflash</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='readonly'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>yes</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>no</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='secure'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>yes</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>no</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </loader>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </os>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <cpu>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='host-passthrough' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='hostPassthroughMigratable'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>on</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>off</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='maximum' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='maximumMigratable'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>on</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>off</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='host-model' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <vendor>AMD</vendor>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='x2apic'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='tsc-deadline'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='hypervisor'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='tsc_adjust'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='spec-ctrl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='stibp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='arch-capabilities'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='ssbd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='cmp_legacy'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='overflow-recov'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='succor'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='ibrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='amd-ssbd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='virt-ssbd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='lbrv'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='tsc-scale'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='vmcb-clean'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='pause-filter'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='pfthreshold'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='svme-addr-chk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='rdctl-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='mds-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='pschange-mc-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='disable' name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='custom' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cooperlake'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cooperlake-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cooperlake-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Dhyana-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Genoa'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amd-psfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='auto-ibrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='no-nested-data-bp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='null-sel-clr-base'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='stibp-always-on'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Genoa-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amd-psfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='auto-ibrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='no-nested-data-bp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='null-sel-clr-base'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='stibp-always-on'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Milan'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Milan-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Milan-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amd-psfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='no-nested-data-bp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='null-sel-clr-base'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='stibp-always-on'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='GraniteRapids'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='prefetchiti'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='GraniteRapids-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='prefetchiti'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='GraniteRapids-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10-128'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10-256'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10-512'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='prefetchiti'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v6'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v7'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='KnightsMill'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4fmaps'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4vnniw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512er'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512pf'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='KnightsMill-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4fmaps'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4vnniw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512er'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512pf'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G4-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tbm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G5-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tbm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SierraForest'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ne-convert'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cmpccxadd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SierraForest-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ne-convert'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cmpccxadd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='athlon'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='athlon-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='core2duo'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='core2duo-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='coreduo'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='coreduo-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='n270'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='n270-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='phenom'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='phenom-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </cpu>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <memoryBacking supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <enum name='sourceType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>file</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>anonymous</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>memfd</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </memoryBacking>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <devices>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <disk supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='diskDevice'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>disk</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>cdrom</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>floppy</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>lun</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='bus'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>fdc</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>scsi</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>usb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>sata</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-non-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </disk>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <graphics supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vnc</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>egl-headless</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>dbus</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </graphics>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <video supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='modelType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vga</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>cirrus</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>none</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>bochs</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>ramfb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </video>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <hostdev supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='mode'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>subsystem</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='startupPolicy'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>default</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>mandatory</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>requisite</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>optional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='subsysType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>usb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>pci</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>scsi</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='capsType'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='pciBackend'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </hostdev>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <rng supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-non-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendModel'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>random</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>egd</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>builtin</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </rng>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <filesystem supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='driverType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>path</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>handle</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtiofs</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </filesystem>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <tpm supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>tpm-tis</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>tpm-crb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendModel'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>emulator</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>external</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendVersion'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>2.0</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </tpm>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <redirdev supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='bus'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>usb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </redirdev>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <channel supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>pty</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>unix</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </channel>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <crypto supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>qemu</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendModel'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>builtin</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </crypto>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <interface supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>default</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>passt</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </interface>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <panic supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>isa</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>hyperv</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </panic>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </devices>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <features>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <gic supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <vmcoreinfo supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <genid supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <backingStoreInput supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <backup supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <async-teardown supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <ps2 supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <sev supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <sgx supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <hyperv supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='features'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>relaxed</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vapic</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>spinlocks</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vpindex</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>runtime</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>synic</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>stimer</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>reset</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vendor_id</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>frequencies</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>reenlightenment</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>tlbflush</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>ipi</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>avic</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>emsr_bitmap</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>xmm_input</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </hyperv>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <launchSecurity supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </features>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: </domainCapabilities>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.224 2 DEBUG nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: <domainCapabilities>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <path>/usr/libexec/qemu-kvm</path>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <domain>kvm</domain>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <arch>x86_64</arch>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <vcpu max='240'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <iothreads supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <os supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <enum name='firmware'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <loader supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>rom</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>pflash</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='readonly'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>yes</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>no</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='secure'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>no</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </loader>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </os>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <cpu>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='host-passthrough' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='hostPassthroughMigratable'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>on</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>off</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='maximum' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='maximumMigratable'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>on</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>off</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='host-model' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <vendor>AMD</vendor>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='x2apic'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='tsc-deadline'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='hypervisor'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='tsc_adjust'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='spec-ctrl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='stibp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='arch-capabilities'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='ssbd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='cmp_legacy'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='overflow-recov'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='succor'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='ibrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='amd-ssbd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='virt-ssbd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='lbrv'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='tsc-scale'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='vmcb-clean'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='pause-filter'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='pfthreshold'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='svme-addr-chk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='rdctl-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='mds-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='require' name='pschange-mc-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <feature policy='disable' name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <mode name='custom' supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Broadwell-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cascadelake-Server-v5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cooperlake'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cooperlake-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Cooperlake-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Denverton-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Dhyana-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Genoa'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amd-psfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='auto-ibrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='no-nested-data-bp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='null-sel-clr-base'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='stibp-always-on'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Genoa-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amd-psfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='auto-ibrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='no-nested-data-bp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='null-sel-clr-base'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='stibp-always-on'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Milan'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Milan-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Milan-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amd-psfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='no-nested-data-bp'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='null-sel-clr-base'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='stibp-always-on'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-Rome-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='EPYC-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='GraniteRapids'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='prefetchiti'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='GraniteRapids-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='prefetchiti'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='GraniteRapids-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10-128'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10-256'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx10-512'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='prefetchiti'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Haswell-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-noTSX'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v6'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Icelake-Server-v7'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='IvyBridge-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='KnightsMill'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4fmaps'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4vnniw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512er'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512pf'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='KnightsMill-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4fmaps'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-4vnniw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512er'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512pf'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G4-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tbm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Opteron_G5-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fma4'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tbm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xop'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SapphireRapids-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='amx-tile'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-bf16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-fp16'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512-vpopcntdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bitalg'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vbmi2'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrc'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fzrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='la57'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='taa-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='tsx-ldtrk'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xfd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SierraForest'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ne-convert'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cmpccxadd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='SierraForest-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ifma'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-ne-convert'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx-vnni-int8'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='bus-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cmpccxadd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fbsdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='fsrs'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ibrs-all'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mcdt-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pbrsb-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='psdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='sbdr-ssdp-no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='serialize'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vaes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='vpclmulqdq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Client-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='hle'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='rtm'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Skylake-Server-v5'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512bw'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512cd'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512dq'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512f'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='avx512vl'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='invpcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pcid'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='pku'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='mpx'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v2'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v3'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='core-capability'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='split-lock-detect'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='Snowridge-v4'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='cldemote'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='erms'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='gfni'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdir64b'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='movdiri'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='xsaves'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='athlon'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='athlon-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='core2duo'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='core2duo-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='coreduo'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='coreduo-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='n270'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='n270-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='ss'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='phenom'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <blockers model='phenom-v1'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnow'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <feature name='3dnowext'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </blockers>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </mode>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </cpu>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <memoryBacking supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <enum name='sourceType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>file</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>anonymous</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <value>memfd</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </memoryBacking>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <devices>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <disk supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='diskDevice'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>disk</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>cdrom</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>floppy</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>lun</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='bus'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>ide</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>fdc</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>scsi</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>usb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>sata</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-non-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </disk>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <graphics supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vnc</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>egl-headless</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>dbus</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </graphics>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <video supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='modelType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vga</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>cirrus</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>none</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>bochs</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>ramfb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </video>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <hostdev supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='mode'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>subsystem</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='startupPolicy'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>default</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>mandatory</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>requisite</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>optional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='subsysType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>usb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>pci</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>scsi</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='capsType'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='pciBackend'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </hostdev>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <rng supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtio-non-transitional</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendModel'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>random</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>egd</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>builtin</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </rng>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <filesystem supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='driverType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>path</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>handle</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>virtiofs</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </filesystem>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <tpm supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>tpm-tis</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>tpm-crb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendModel'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>emulator</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>external</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendVersion'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>2.0</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </tpm>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <redirdev supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='bus'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>usb</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </redirdev>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <channel supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>pty</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>unix</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </channel>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <crypto supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='type'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>qemu</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendModel'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>builtin</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </crypto>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <interface supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='backendType'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>default</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>passt</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </interface>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <panic supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='model'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>isa</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>hyperv</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </panic>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </devices>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   <features>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <gic supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <vmcoreinfo supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <genid supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <backingStoreInput supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <backup supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <async-teardown supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <ps2 supported='yes'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <sev supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <sgx supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <hyperv supported='yes'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       <enum name='features'>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>relaxed</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vapic</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>spinlocks</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vpindex</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>runtime</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>synic</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>stimer</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>reset</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>vendor_id</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>frequencies</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>reenlightenment</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>tlbflush</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>ipi</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>avic</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>emsr_bitmap</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:         <value>xmm_input</value>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:       </enum>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     </hyperv>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:     <launchSecurity supported='no'/>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:   </features>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: </domainCapabilities>
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.271 2 DEBUG nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.272 2 INFO nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Secure Boot support detected
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.274 2 INFO nova.virt.libvirt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.274 2 INFO nova.virt.libvirt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.284 2 DEBUG nova.virt.libvirt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.332 2 INFO nova.virt.node [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Determined node identity e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 from /var/lib/nova/compute_id
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.355 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Verified node e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 matches my host standalone.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.392 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.397 2 DEBUG nova.virt.libvirt.vif [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T14:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test',display_name='test',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=<?>,hidden=False,host='standalone.localdomain',hostname='test',id=1,image_ref='9de40855-8304-4f15-8bc5-8a8a2b61b79b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T14:16:19Z,launched_on='standalone.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=<?>,node='standalone.localdomain',numa_topology=None,old_flavor=<?>,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e44641a80bcb466cb3dd688e48b72d8e',ramdisk_id='',reservation_id='r-iqeurhsr',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata=<?>,tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T14:16:19Z,user_data=None,user_id='3d9eeef137fc40c78332936114fd7ee4',uuid=54a46fec-332e-42f9-83ed-88e763d13f63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.398 2 DEBUG nova.network.os_vif_util [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Converting VIF {"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.399 2 DEBUG nova.network.os_vif_util [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.400 2 DEBUG os_vif [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.446 2 DEBUG ovsdbapp.backend.ovs_idl [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.447 2 DEBUG ovsdbapp.backend.ovs_idl [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.447 2 DEBUG ovsdbapp.backend.ovs_idl [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [POLLOUT] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.467 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.467 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.468 2 INFO oslo.privsep.daemon [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpn6ilxp8q/privsep.sock']
Oct 13 15:37:04 standalone.localdomain ceph-mon[29756]: pgmap v3675: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:04.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.130 2 INFO oslo.privsep.daemon [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Spawned new privsep daemon via rootwrap
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.013 40 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.018 40 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.022 40 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.022 40 INFO oslo.privsep.daemon [-] privsep daemon running as pid 40
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.417 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a49767c-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a49767c-fb, col_values=(('external_ids', {'iface-id': '8a49767c-fb09-4185-95da-4261d8043fad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:4e:c0', 'vm-uuid': '54a46fec-332e-42f9-83ed-88e763d13f63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.420 2 INFO os_vif [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb')
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.420 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:37:05 standalone.localdomain account-server[114571]: 172.20.0.100 - - [13/Oct/2025:15:37:05 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txb590012497844ef5a1737-0068ed1ca1" "proxy-server 2" 0.0005 "-" 23 -
Oct 13 15:37:05 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: txb590012497844ef5a1737-0068ed1ca1)
Oct 13 15:37:05 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txb590012497844ef5a1737-0068ed1ca1)
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.424 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.425 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.430 2 DEBUG nova.virt.libvirt.vif [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T14:17:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='bfv-server',display_name='bfv-server',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=<?>,hidden=False,host='standalone.localdomain',hostname='bfv-server',id=2,image_ref='',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T14:17:39Z,launched_on='standalone.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=<?>,node='standalone.localdomain',numa_topology=None,old_flavor=<?>,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e44641a80bcb466cb3dd688e48b72d8e',ramdisk_id='',reservation_id='r-n0khd6io',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata=<?>,tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T14:17:40Z,user_data=None,user_id='3d9eeef137fc40c78332936114fd7ee4',uuid=8f68d5aa-abc4-451d-89d2-f5342b71831c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.431 2 DEBUG nova.network.os_vif_util [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Converting VIF {"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.432 2 DEBUG nova.network.os_vif_util [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:20:5b,bridge_name='br-int',has_traffic_filtering=True,id=da3e5a61-7adb-481b-b7f5-703c3939cde2,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda3e5a61-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.433 2 DEBUG os_vif [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:20:5b,bridge_name='br-int',has_traffic_filtering=True,id=da3e5a61-7adb-481b-b7f5-703c3939cde2,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda3e5a61-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.438 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda3e5a61-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.438 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda3e5a61-7a, col_values=(('external_ids', {'iface-id': 'da3e5a61-7adb-481b-b7f5-703c3939cde2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:20:5b', 'vm-uuid': '8f68d5aa-abc4-451d-89d2-f5342b71831c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.440 2 INFO os_vif [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:20:5b,bridge_name='br-int',has_traffic_filtering=True,id=da3e5a61-7adb-481b-b7f5-703c3939cde2,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda3e5a61-7a')
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.440 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.443 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.443 2 INFO nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 13 15:37:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3676: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.802 2 DEBUG oslo_concurrency.lockutils [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.804 2 DEBUG oslo_concurrency.lockutils [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.804 2 DEBUG oslo_concurrency.lockutils [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.804 2 DEBUG nova.compute.resource_tracker [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:37:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:05.805 2 DEBUG oslo_concurrency.processutils [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:37:05 standalone.localdomain systemd[1]: tmp-crun.5AAtXY.mount: Deactivated successfully.
Oct 13 15:37:05 standalone.localdomain podman[521418]: 2025-10-13 15:37:05.844176444 +0000 UTC m=+0.109638982 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm)
Oct 13 15:37:05 standalone.localdomain podman[521418]: 2025-10-13 15:37:05.8628736 +0000 UTC m=+0.128336178 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 15:37:05 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:37:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:37:06 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/659188651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:37:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:06.282 2 DEBUG oslo_concurrency.processutils [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:37:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:06.358 2 DEBUG nova.virt.libvirt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:37:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:06.358 2 DEBUG nova.virt.libvirt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:37:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:06.358 2 DEBUG nova.virt.libvirt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:37:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:06.362 2 DEBUG nova.virt.libvirt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:37:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:06.362 2 DEBUG nova.virt.libvirt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:37:06 standalone.localdomain ceph-mon[29756]: pgmap v3676: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:06 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/659188651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:37:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:06.568 2 WARNING nova.virt.libvirt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:37:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:06.569 2 DEBUG nova.compute.resource_tracker [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9303MB free_disk=6.949638366699219GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:37:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:06.569 2 DEBUG oslo_concurrency.lockutils [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:37:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:06.570 2 DEBUG oslo_concurrency.lockutils [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:37:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:06.937 2 DEBUG nova.compute.resource_tracker [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:37:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:06.937 2 DEBUG nova.compute.resource_tracker [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:37:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:06.938 2 DEBUG nova.compute.resource_tracker [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:37:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:06.939 2 DEBUG nova.compute.resource_tracker [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:37:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:06.956 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:37:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:06.957 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:37:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:06.958 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.158 2 DEBUG nova.scheduler.client.report [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Refreshing inventories for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.176 2 DEBUG nova.scheduler.client.report [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Updating ProviderTree inventory for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.177 2 DEBUG nova.compute.provider_tree [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Updating inventory in ProviderTree for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.197 2 DEBUG nova.scheduler.client.report [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Refreshing aggregate associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.216 2 DEBUG nova.scheduler.client.report [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Refreshing trait associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 13 15:37:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.292 2 DEBUG oslo_concurrency.processutils [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:37:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:37:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3677: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:07 standalone.localdomain systemd[1]: tmp-crun.O3UBEq.mount: Deactivated successfully.
Oct 13 15:37:07 standalone.localdomain podman[521479]: 2025-10-13 15:37:07.563554108 +0000 UTC m=+0.114101230 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 13 15:37:07 standalone.localdomain podman[521479]: 2025-10-13 15:37:07.574882398 +0000 UTC m=+0.125429500 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:37:07 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:37:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:37:07 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2812631868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.731 2 DEBUG oslo_concurrency.processutils [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.738 2 DEBUG nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.738 2 INFO nova.virt.libvirt.host [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] kernel doesn't support AMD SEV
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.741 2 DEBUG nova.compute.provider_tree [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.741 2 DEBUG nova.virt.libvirt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.770 2 DEBUG nova.scheduler.client.report [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.810 2 DEBUG nova.compute.resource_tracker [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.810 2 DEBUG oslo_concurrency.lockutils [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.811 2 DEBUG nova.service [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.839 2 DEBUG nova.service [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 13 15:37:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:07.840 2 DEBUG nova.servicegroup.drivers.db [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] DB_Driver: join new ServiceGroup member standalone.localdomain to the compute group, service = <Service: host=standalone.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 13 15:37:08 standalone.localdomain ceph-mon[29756]: pgmap v3677: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:08 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2812631868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 15:37:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 7200.1 total, 600.0 interval
                                                        Cumulative writes: 7059 writes, 30K keys, 7059 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                                        Cumulative WAL: 7059 writes, 1338 syncs, 5.28 writes per sync, written: 0.03 GB, 0.00 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 45 writes, 100 keys, 45 commit groups, 1.0 writes per commit group, ingest: 0.03 MB, 0.00 MB/s
                                                        Interval WAL: 45 writes, 20 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 13 15:37:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:08.842 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:37:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:08.868 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Triggering sync for uuid 54a46fec-332e-42f9-83ed-88e763d13f63 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 13 15:37:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:08.869 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Triggering sync for uuid 8f68d5aa-abc4-451d-89d2-f5342b71831c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 13 15:37:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:08.870 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "54a46fec-332e-42f9-83ed-88e763d13f63" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:37:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:08.870 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:37:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:08.871 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "8f68d5aa-abc4-451d-89d2-f5342b71831c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:37:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:08.871 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "8f68d5aa-abc4-451d-89d2-f5342b71831c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:37:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:08.872 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:37:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:08.900 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:37:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:08.906 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "8f68d5aa-abc4-451d-89d2-f5342b71831c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:37:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:09.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3678: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:09.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:37:09 standalone.localdomain podman[521500]: 2025-10-13 15:37:09.836904535 +0000 UTC m=+0.088172370 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:37:09 standalone.localdomain podman[521500]: 2025-10-13 15:37:09.850006739 +0000 UTC m=+0.101274524 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:37:09 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:37:10 standalone.localdomain ceph-mon[29756]: pgmap v3678: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3679: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:11 standalone.localdomain podman[467099]: time="2025-10-13T15:37:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:37:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:37:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:37:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:37:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49112 "" "Go-http-client/1.1"
Oct 13 15:37:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:37:12 standalone.localdomain ceph-mon[29756]: pgmap v3679: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:37:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:37:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:37:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:37:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:37:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:37:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:37:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:37:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:37:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:37:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:37:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:37:13 standalone.localdomain systemd[1]: Stopping User Manager for UID 0...
Oct 13 15:37:13 standalone.localdomain systemd[498111]: Activating special unit Exit the Session...
Oct 13 15:37:13 standalone.localdomain systemd[498111]: Stopped target Main User Target.
Oct 13 15:37:13 standalone.localdomain systemd[498111]: Stopped target Basic System.
Oct 13 15:37:13 standalone.localdomain systemd[498111]: Stopped target Paths.
Oct 13 15:37:13 standalone.localdomain systemd[498111]: Stopped target Sockets.
Oct 13 15:37:13 standalone.localdomain systemd[498111]: Stopped target Timers.
Oct 13 15:37:13 standalone.localdomain systemd[498111]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 15:37:13 standalone.localdomain systemd[498111]: Closed D-Bus User Message Bus Socket.
Oct 13 15:37:13 standalone.localdomain systemd[498111]: Stopped Create User's Volatile Files and Directories.
Oct 13 15:37:13 standalone.localdomain systemd[498111]: Removed slice User Application Slice.
Oct 13 15:37:13 standalone.localdomain systemd[498111]: Reached target Shutdown.
Oct 13 15:37:13 standalone.localdomain systemd[498111]: Finished Exit the Session.
Oct 13 15:37:13 standalone.localdomain systemd[498111]: Reached target Exit the Session.
Oct 13 15:37:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3680: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:13 standalone.localdomain systemd[1]: user@0.service: Deactivated successfully.
Oct 13 15:37:13 standalone.localdomain systemd[1]: Stopped User Manager for UID 0.
Oct 13 15:37:13 standalone.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 13 15:37:13 standalone.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 13 15:37:13 standalone.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 13 15:37:13 standalone.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 13 15:37:13 standalone.localdomain systemd[1]: Removed slice User Slice of UID 0.
Oct 13 15:37:13 standalone.localdomain systemd[1]: user-0.slice: Consumed 1min 48.055s CPU time.
Oct 13 15:37:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:14.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:14 standalone.localdomain ceph-mon[29756]: pgmap v3680: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:14.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3681: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:15 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Check health
Oct 13 15:37:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:37:15 standalone.localdomain systemd[1]: tmp-crun.EYtLC0.mount: Deactivated successfully.
Oct 13 15:37:15 standalone.localdomain podman[521525]: 2025-10-13 15:37:15.822095598 +0000 UTC m=+0.088090239 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:37:15 standalone.localdomain podman[521525]: 2025-10-13 15:37:15.857147068 +0000 UTC m=+0.123141769 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 15:37:15 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:37:16 standalone.localdomain ceph-mon[29756]: pgmap v3681: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:37:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3682: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:18 standalone.localdomain ceph-mon[29756]: pgmap v3682: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:37:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1039143348' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:37:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:37:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1039143348' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:37:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1039143348' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:37:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1039143348' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:37:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:19.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3683: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:19.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:20 standalone.localdomain ceph-mon[29756]: pgmap v3683: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3684: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:37:22 standalone.localdomain ceph-mon[29756]: pgmap v3684: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:37:22 standalone.localdomain podman[521543]: 2025-10-13 15:37:22.806152141 +0000 UTC m=+0.065905194 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller)
Oct 13 15:37:22 standalone.localdomain podman[521543]: 2025-10-13 15:37:22.859791275 +0000 UTC m=+0.119544308 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:37:22 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:37:23
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['vms', 'manila_data', 'volumes', 'images', 'manila_metadata', '.mgr', 'backups']
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3685: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3686 DF PROTO=TCP SPT=39158 DPT=9102 SEQ=3778186230 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC8FAFF0000000001030307) 
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:37:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:37:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:24.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:24 standalone.localdomain ceph-mon[29756]: pgmap v3685: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3687 DF PROTO=TCP SPT=39158 DPT=9102 SEQ=3778186230 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC8FEF60000000001030307) 
Oct 13 15:37:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:24.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3686: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:37:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:25.732 2 DEBUG nova.compute.manager [None req-a0d5c487-9d04-4c5e-bd5f-5345fd58609c 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:37:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:25.740 2 INFO nova.compute.manager [None req-a0d5c487-9d04-4c5e-bd5f-5345fd58609c 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Retrieving diagnostics
Oct 13 15:37:25 standalone.localdomain podman[521568]: 2025-10-13 15:37:25.813082779 +0000 UTC m=+0.081858626 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_id=edpm, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.)
Oct 13 15:37:25 standalone.localdomain podman[521568]: 2025-10-13 15:37:25.830896018 +0000 UTC m=+0.099671845 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, version=9.6, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350)
Oct 13 15:37:25 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:37:26 standalone.localdomain ceph-mon[29756]: pgmap v3686: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3688 DF PROTO=TCP SPT=39158 DPT=9102 SEQ=3778186230 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC906F60000000001030307) 
Oct 13 15:37:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:37:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3687: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:37:27 standalone.localdomain podman[521588]: 2025-10-13 15:37:27.815131138 +0000 UTC m=+0.079191974 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 15:37:27 standalone.localdomain podman[521588]: 2025-10-13 15:37:27.830271965 +0000 UTC m=+0.094332841 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Oct 13 15:37:27 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:37:28 standalone.localdomain ceph-mon[29756]: pgmap v3687: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006640188791178113 of space, bias 1.0, pg target 0.6640188791178112 quantized to 32 (current 32)
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:37:29 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:29.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3688: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:29 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:29.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:30 standalone.localdomain ceph-mon[29756]: pgmap v3688: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3689 DF PROTO=TCP SPT=39158 DPT=9102 SEQ=3778186230 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC916B70000000001030307) 
Oct 13 15:37:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3689: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:37:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:37:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:37:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:37:31 standalone.localdomain systemd[1]: tmp-crun.xTlvNi.mount: Deactivated successfully.
Oct 13 15:37:31 standalone.localdomain podman[521607]: 2025-10-13 15:37:31.813390777 +0000 UTC m=+0.085651653 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, build-date=2025-07-21T14:56:28, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, release=1, com.redhat.component=openstack-swift-object-container, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, container_name=swift_object_server, maintainer=OpenStack TripleO Team)
Oct 13 15:37:31 standalone.localdomain podman[521609]: 2025-10-13 15:37:31.880713363 +0000 UTC m=+0.146656113 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, version=17.1.9, com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=swift_account_server, vcs-type=git, build-date=2025-07-21T16:11:22, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-account, release=1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 15:37:31 standalone.localdomain podman[521608]: 2025-10-13 15:37:31.849045757 +0000 UTC m=+0.115425222 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, container_name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, release=1, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public)
Oct 13 15:37:31 standalone.localdomain podman[521615]: 2025-10-13 15:37:31.94515684 +0000 UTC m=+0.205060964 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:37:31 standalone.localdomain podman[521615]: 2025-10-13 15:37:31.957883292 +0000 UTC m=+0.217787226 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:37:31 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:37:32 standalone.localdomain podman[521608]: 2025-10-13 15:37:32.023823516 +0000 UTC m=+0.290202951 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, container_name=swift_container_server, managed_by=tripleo_ansible, version=17.1.9, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-container-container)
Oct 13 15:37:32 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:37:32 standalone.localdomain podman[521607]: 2025-10-13 15:37:32.032823504 +0000 UTC m=+0.305084360 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, release=1, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, distribution-scope=public, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, 
com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, name=rhosp17/openstack-swift-object, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:37:32 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:37:32 standalone.localdomain podman[521609]: 2025-10-13 15:37:32.096706814 +0000 UTC m=+0.362649564 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, release=1, io.openshift.expose-services=, config_id=tripleo_step4, container_name=swift_account_server, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, 
vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-swift-account-container, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account)
Oct 13 15:37:32 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:37:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:37:32 standalone.localdomain ceph-mon[29756]: pgmap v3689: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:32 standalone.localdomain systemd[1]: tmp-crun.Y4dfrj.mount: Deactivated successfully.
Oct 13 15:37:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:33.478 2 DEBUG oslo_concurrency.lockutils [None req-d5f5158f-93f6-40cc-88e5-7b8c7316153d 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Acquiring lock "54a46fec-332e-42f9-83ed-88e763d13f63" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:37:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:33.478 2 DEBUG oslo_concurrency.lockutils [None req-d5f5158f-93f6-40cc-88e5-7b8c7316153d 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:37:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:33.479 2 DEBUG nova.compute.manager [None req-d5f5158f-93f6-40cc-88e5-7b8c7316153d 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:37:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3690: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:33.483 2 DEBUG nova.compute.manager [None req-d5f5158f-93f6-40cc-88e5-7b8c7316153d 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 13 15:37:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:33.488 2 DEBUG nova.objects.instance [None req-d5f5158f-93f6-40cc-88e5-7b8c7316153d 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lazy-loading 'flavor' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:37:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:33.526 2 DEBUG nova.virt.libvirt.driver [None req-d5f5158f-93f6-40cc-88e5-7b8c7316153d 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 13 15:37:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:34.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:34 standalone.localdomain ceph-mon[29756]: pgmap v3690: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:34.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3691: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 682 B/s rd, 8.5 KiB/s wr, 1 op/s
Oct 13 15:37:35 standalone.localdomain kernel: device tap8a49767c-fb left promiscuous mode
Oct 13 15:37:35 standalone.localdomain NetworkManager[5962]: <info>  [1760369855.7578] device (tap8a49767c-fb): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Oct 13 15:37:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:37:35Z|00058|binding|INFO|Releasing lport 8a49767c-fb09-4185-95da-4261d8043fad from this chassis (sb_readonly=0)
Oct 13 15:37:35 standalone.localdomain systemd-journald[48591]: Field hash table of /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal has a fill level at 75.7 (252 of 333 items), suggesting rotation.
Oct 13 15:37:35 standalone.localdomain systemd-journald[48591]: /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 13 15:37:35 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 15:37:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:37:35Z|00059|binding|INFO|Setting lport 8a49767c-fb09-4185-95da-4261d8043fad down in Southbound
Oct 13 15:37:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:37:35Z|00060|binding|INFO|Removing iface tap8a49767c-fb ovn-installed in OVS
Oct 13 15:37:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:35.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:35.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:35.781 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:4e:c0 192.168.0.238'], port_security=['fa:16:3e:15:4e:c0 192.168.0.238'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.238/24', 'neutron:device_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'neutron:device_owner': 'compute:nova', 'neutron:host_id': 'standalone.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c455abd-28d4-47e7-a254-e50de0526def', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'abececb5-f6f2-4dbd-993f-ddc54effe614', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a0c27a6-9f97-46f6-a22d-9a9321701921, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=8a49767c-fb09-4185-95da-4261d8043fad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:37:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:35.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:35.784 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 8a49767c-fb09-4185-95da-4261d8043fad in datapath 0c455abd-28d4-47e7-a254-e50de0526def unbound from our chassis
Oct 13 15:37:35 standalone.localdomain systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct 13 15:37:35 standalone.localdomain systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 31.488s CPU time.
Oct 13 15:37:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:35.788 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 84bcd2dc-1ea4-4b7f-a1ad-ec9a7d798be1 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:37:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:35.788 378821 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:37:35 standalone.localdomain systemd-machined[183383]: Machine qemu-1-instance-00000001 terminated.
Oct 13 15:37:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:35.802 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[eec9f8ef-2869-40f8-b5c1-0d34581d58fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:37:35 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 15:37:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:35.827 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[b0523805-7120-4807-810f-9b8221d3094f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:37:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:35.830 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[82ae0ff8-e486-468b-a17d-9a107541955b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:37:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:35.851 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[45c293c2-028d-4305-8b18-b00fde85af92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:37:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:35.866 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[27731d9b-55ec-4e16-979b-c8a4885eecc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0c455abd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e5:ca:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 55, 'tx_packets': 72, 'rx_bytes': 4010, 'tx_bytes': 4356, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 55, 'tx_packets': 72, 'rx_bytes': 4010, 'tx_bytes': 4356, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], 
['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483664], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442179, 'reachable_time': 40123, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 9, 'inoctets': 672, 'indelivers': 2, 'outforwdatagrams': 0, 'outpkts': 20, 'outoctets': 1412, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 9, 'outmcastpkts': 20, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 672, 'outmcastoctets': 1412, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 9, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 2, 'inerrors': 0, 'outmsgs': 20, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 521722, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:37:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:35.880 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[8f77c5a9-d065-4a78-b2c1-4571cc6a2d27]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0c455abd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442185, 'tstamp': 442185}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 521723, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap0c455abd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442186, 'tstamp': 442186}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 521723, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:37:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:35.883 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c455abd-20, bridge=br-ctlplane, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:37:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:35.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:35.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:35.938 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c455abd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:37:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:35.938 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:37:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:35.939 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0c455abd-20, col_values=(('external_ids', {'iface-id': '52f81088-6dd1-4b23-992e-fdefd91123e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:37:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:35.940 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:37:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:37:36 standalone.localdomain podman[521736]: 2025-10-13 15:37:36.146289306 +0000 UTC m=+0.096325882 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:37:36 standalone.localdomain podman[521736]: 2025-10-13 15:37:36.161584907 +0000 UTC m=+0.111621463 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:37:36 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:37:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:36.549 2 INFO nova.virt.libvirt.driver [None req-d5f5158f-93f6-40cc-88e5-7b8c7316153d 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Instance shutdown successfully after 3 seconds.
Oct 13 15:37:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:36.551 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:da:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'c6:b9:d4:9e:30:db'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:37:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:36.553 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 13 15:37:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:36.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:37:36.554 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b9e3f7-f5c9-4d48-9eca-66995f72d494, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:37:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:36.558 2 INFO nova.virt.libvirt.driver [-] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Instance destroyed successfully.
Oct 13 15:37:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:36.559 2 DEBUG nova.objects.instance [None req-d5f5158f-93f6-40cc-88e5-7b8c7316153d 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lazy-loading 'numa_topology' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:37:36 standalone.localdomain ceph-mon[29756]: pgmap v3691: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 682 B/s rd, 8.5 KiB/s wr, 1 op/s
Oct 13 15:37:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:36.576 2 DEBUG nova.compute.manager [None req-d5f5158f-93f6-40cc-88e5-7b8c7316153d 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:37:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:36.649 2 DEBUG oslo_concurrency.lockutils [None req-d5f5158f-93f6-40cc-88e5-7b8c7316153d 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:37:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:36.661 2 DEBUG nova.compute.manager [req-410074c4-220f-40fd-8a78-7f2718507b45 req-1fa79d32-41cb-4b46-bfe1-ea4b3f47b754 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Received event network-vif-unplugged-8a49767c-fb09-4185-95da-4261d8043fad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 13 15:37:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:36.661 2 DEBUG oslo_concurrency.lockutils [req-410074c4-220f-40fd-8a78-7f2718507b45 req-1fa79d32-41cb-4b46-bfe1-ea4b3f47b754 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Acquiring lock "54a46fec-332e-42f9-83ed-88e763d13f63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:37:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:36.662 2 DEBUG oslo_concurrency.lockutils [req-410074c4-220f-40fd-8a78-7f2718507b45 req-1fa79d32-41cb-4b46-bfe1-ea4b3f47b754 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:37:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:36.662 2 DEBUG oslo_concurrency.lockutils [req-410074c4-220f-40fd-8a78-7f2718507b45 req-1fa79d32-41cb-4b46-bfe1-ea4b3f47b754 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:37:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:36.662 2 DEBUG nova.compute.manager [req-410074c4-220f-40fd-8a78-7f2718507b45 req-1fa79d32-41cb-4b46-bfe1-ea4b3f47b754 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] No waiting events found dispatching network-vif-unplugged-8a49767c-fb09-4185-95da-4261d8043fad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 13 15:37:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:36.663 2 WARNING nova.compute.manager [req-410074c4-220f-40fd-8a78-7f2718507b45 req-1fa79d32-41cb-4b46-bfe1-ea4b3f47b754 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Received unexpected event network-vif-unplugged-8a49767c-fb09-4185-95da-4261d8043fad for instance with vm_state stopped and task_state None.
Oct 13 15:37:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:37:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3692: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 682 B/s rd, 8.5 KiB/s wr, 1 op/s
Oct 13 15:37:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:37:37 standalone.localdomain podman[521755]: 2025-10-13 15:37:37.817896865 +0000 UTC m=+0.085128096 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid)
Oct 13 15:37:37 standalone.localdomain podman[521755]: 2025-10-13 15:37:37.834001392 +0000 UTC m=+0.101232643 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:37:37 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:37:38 standalone.localdomain ceph-mon[29756]: pgmap v3692: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 682 B/s rd, 8.5 KiB/s wr, 1 op/s
Oct 13 15:37:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:38.701 2 DEBUG nova.compute.manager [req-517671f2-7135-4855-b525-c96a16457ed4 req-04c08e6a-8891-4cfb-bfb2-2a4d4931fc3b 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Received event network-vif-plugged-8a49767c-fb09-4185-95da-4261d8043fad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 13 15:37:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:38.701 2 DEBUG oslo_concurrency.lockutils [req-517671f2-7135-4855-b525-c96a16457ed4 req-04c08e6a-8891-4cfb-bfb2-2a4d4931fc3b 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Acquiring lock "54a46fec-332e-42f9-83ed-88e763d13f63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:37:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:38.701 2 DEBUG oslo_concurrency.lockutils [req-517671f2-7135-4855-b525-c96a16457ed4 req-04c08e6a-8891-4cfb-bfb2-2a4d4931fc3b 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:37:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:38.702 2 DEBUG oslo_concurrency.lockutils [req-517671f2-7135-4855-b525-c96a16457ed4 req-04c08e6a-8891-4cfb-bfb2-2a4d4931fc3b 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:37:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:38.702 2 DEBUG nova.compute.manager [req-517671f2-7135-4855-b525-c96a16457ed4 req-04c08e6a-8891-4cfb-bfb2-2a4d4931fc3b 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] No waiting events found dispatching network-vif-plugged-8a49767c-fb09-4185-95da-4261d8043fad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 13 15:37:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:38.702 2 WARNING nova.compute.manager [req-517671f2-7135-4855-b525-c96a16457ed4 req-04c08e6a-8891-4cfb-bfb2-2a4d4931fc3b 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Received unexpected event network-vif-plugged-8a49767c-fb09-4185-95da-4261d8043fad for instance with vm_state stopped and task_state None.
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3693: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 853 B/s rd, 8.5 KiB/s wr, 2 op/s
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.625 2 DEBUG nova.compute.manager [None req-2c9887c5-8d76-4271-9f18-ee29ce90c6a8 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server [None req-2c9887c5-8d76-4271-9f18-ee29ce90c6a8 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Exception during message handling: nova.exception.InstanceInvalidState: Instance 54a46fec-332e-42f9-83ed-88e763d13f63 in power state shutdown. Cannot get_diagnostics while the instance is in this state.
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 6739, in get_instance_diagnostics
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server     raise exception.InstanceInvalidState(
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server nova.exception.InstanceInvalidState: Instance 54a46fec-332e-42f9-83ed-88e763d13f63 in power state shutdown. Cannot get_diagnostics while the instance is in this state.
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.646 2 ERROR oslo_messaging.rpc.server 
Oct 13 15:37:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:39.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:40 standalone.localdomain ceph-mon[29756]: pgmap v3693: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 853 B/s rd, 8.5 KiB/s wr, 2 op/s
Oct 13 15:37:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:37:40 standalone.localdomain podman[521774]: 2025-10-13 15:37:40.832413157 +0000 UTC m=+0.098059465 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:37:40 standalone.localdomain podman[521774]: 2025-10-13 15:37:40.848049249 +0000 UTC m=+0.113695477 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:37:40 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:37:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3694: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 853 B/s rd, 8.5 KiB/s wr, 2 op/s
Oct 13 15:37:41 standalone.localdomain podman[467099]: time="2025-10-13T15:37:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:37:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:37:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:37:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:37:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49121 "" "Go-http-client/1.1"
Oct 13 15:37:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:37:42 standalone.localdomain ceph-mon[29756]: pgmap v3694: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 853 B/s rd, 8.5 KiB/s wr, 2 op/s
Oct 13 15:37:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:37:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:37:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:37:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:37:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:37:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:37:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:37:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:37:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:37:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:37:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:37:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:37:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3695: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 853 B/s rd, 8.5 KiB/s wr, 2 op/s
Oct 13 15:37:44 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:44.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:44 standalone.localdomain ceph-mon[29756]: pgmap v3695: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 853 B/s rd, 8.5 KiB/s wr, 2 op/s
Oct 13 15:37:44 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:44.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:45 standalone.localdomain sudo[521797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:37:45 standalone.localdomain sudo[521797]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:37:45 standalone.localdomain sudo[521797]: pam_unix(sudo:session): session closed for user root
Oct 13 15:37:45 standalone.localdomain sudo[521815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:37:45 standalone.localdomain sudo[521815]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:37:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3696: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 853 B/s rd, 8.5 KiB/s wr, 2 op/s
Oct 13 15:37:45 standalone.localdomain sudo[521815]: pam_unix(sudo:session): session closed for user root
Oct 13 15:37:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:37:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:37:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:37:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:37:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:37:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:37:45 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:37:45 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:37:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev d8a392d6-e576-4a2a-8360-0e6de53012c7 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:37:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev d8a392d6-e576-4a2a-8360-0e6de53012c7 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:37:45 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event d8a392d6-e576-4a2a-8360-0e6de53012c7 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:37:45 standalone.localdomain sudo[521867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:37:46 standalone.localdomain sudo[521867]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:37:46 standalone.localdomain sudo[521867]: pam_unix(sudo:session): session closed for user root
Oct 13 15:37:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:37:46 standalone.localdomain podman[521885]: 2025-10-13 15:37:46.133239165 +0000 UTC m=+0.086687035 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 15:37:46 standalone.localdomain podman[521885]: 2025-10-13 15:37:46.144806501 +0000 UTC m=+0.098254351 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:37:46 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:37:46 standalone.localdomain ceph-mon[29756]: pgmap v3696: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 853 B/s rd, 8.5 KiB/s wr, 2 op/s
Oct 13 15:37:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:37:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:37:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:37:46 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:37:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:37:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3697: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 13 15:37:48 standalone.localdomain ceph-mon[29756]: pgmap v3697: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 13 15:37:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3698: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 13 15:37:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:49.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:49.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:50 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:37:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:37:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:37:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:51.003 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760369856.0013437, 54a46fec-332e-42f9-83ed-88e763d13f63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 13 15:37:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:51.004 2 INFO nova.compute.manager [-] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] VM Stopped (Lifecycle Event)
Oct 13 15:37:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:51.070 2 DEBUG nova.compute.manager [None req-e10a8386-5950-4f78-aace-d6adada95f58 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:37:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:51.074 2 DEBUG nova.compute.manager [None req-e10a8386-5950-4f78-aace-d6adada95f58 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: None, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 13 15:37:51 standalone.localdomain ceph-mon[29756]: pgmap v3698: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 13 15:37:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:37:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3699: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:37:53 standalone.localdomain ceph-mon[29756]: pgmap v3699: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:37:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:37:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3700: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28847 DF PROTO=TCP SPT=54742 DPT=9102 SEQ=485590309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC9702F0000000001030307) 
Oct 13 15:37:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:37:53 standalone.localdomain podman[521905]: 2025-10-13 15:37:53.81998225 +0000 UTC m=+0.083307810 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:37:53 standalone.localdomain podman[521905]: 2025-10-13 15:37:53.878863906 +0000 UTC m=+0.142189436 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:37:53 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:37:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:54.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28848 DF PROTO=TCP SPT=54742 DPT=9102 SEQ=485590309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC974360000000001030307) 
Oct 13 15:37:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:54.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:55 standalone.localdomain ceph-mon[29756]: pgmap v3700: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3701: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28849 DF PROTO=TCP SPT=54742 DPT=9102 SEQ=485590309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC97C360000000001030307) 
Oct 13 15:37:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:37:56 standalone.localdomain podman[521931]: 2025-10-13 15:37:56.816505526 +0000 UTC m=+0.085066704 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 15:37:56 standalone.localdomain podman[521931]: 2025-10-13 15:37:56.859007707 +0000 UTC m=+0.127568915 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, version=9.6, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible)
Oct 13 15:37:56 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:37:57 standalone.localdomain ceph-mon[29756]: pgmap v3701: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:37:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3702: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:58 standalone.localdomain ceph-mon[29756]: pgmap v3702: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.530 2 DEBUG nova.compute.manager [None req-ca9ec1db-8276-4ec7-8257-5dbcb0756a94 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server [None req-ca9ec1db-8276-4ec7-8257-5dbcb0756a94 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Exception during message handling: nova.exception.InstanceInvalidState: Instance 54a46fec-332e-42f9-83ed-88e763d13f63 in power state shutdown. Cannot get_diagnostics while the instance is in this state.
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server     self.force_reraise()
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server     raise self.value
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 6739, in get_instance_diagnostics
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server     raise exception.InstanceInvalidState(
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server nova.exception.InstanceInvalidState: Instance 54a46fec-332e-42f9-83ed-88e763d13f63 in power state shutdown. Cannot get_diagnostics while the instance is in this state.
Oct 13 15:37:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:58.598 2 ERROR oslo_messaging.rpc.server 
Oct 13 15:37:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:37:58 standalone.localdomain podman[521952]: 2025-10-13 15:37:58.796668111 +0000 UTC m=+0.064500080 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009)
Oct 13 15:37:58 standalone.localdomain podman[521952]: 2025-10-13 15:37:58.809890869 +0000 UTC m=+0.077722778 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 13 15:37:58 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:37:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3703: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:37:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:59.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:37:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:37:59.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:00 standalone.localdomain ceph-mon[29756]: pgmap v3703: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:38:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28850 DF PROTO=TCP SPT=54742 DPT=9102 SEQ=485590309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC98BF70000000001030307) 
Oct 13 15:38:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3704: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.565420) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369881565459, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2114, "num_deletes": 251, "total_data_size": 2000206, "memory_usage": 2040064, "flush_reason": "Manual Compaction"}
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369881577653, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 1941366, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85592, "largest_seqno": 87705, "table_properties": {"data_size": 1933045, "index_size": 5087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17237, "raw_average_key_size": 19, "raw_value_size": 1916052, "raw_average_value_size": 2217, "num_data_blocks": 229, "num_entries": 864, "num_filter_entries": 864, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760369686, "oldest_key_time": 1760369686, "file_creation_time": 1760369881, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 12295 microseconds, and 6776 cpu microseconds.
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.577707) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 1941366 bytes OK
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.577740) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.579831) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.579858) EVENT_LOG_v1 {"time_micros": 1760369881579849, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.579883) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 1991181, prev total WAL file size 1991181, number of live WAL files 2.
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.580696) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(1895KB)], [197(5194KB)]
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369881580766, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 7260981, "oldest_snapshot_seqno": -1}
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 7110 keys, 6250699 bytes, temperature: kUnknown
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369881617710, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 6250699, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6210108, "index_size": 21739, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17797, "raw_key_size": 187356, "raw_average_key_size": 26, "raw_value_size": 6087448, "raw_average_value_size": 856, "num_data_blocks": 854, "num_entries": 7110, "num_filter_entries": 7110, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760369881, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.618101) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 6250699 bytes
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.619961) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.9 rd, 168.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 5.1 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 7633, records dropped: 523 output_compression: NoCompression
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.619991) EVENT_LOG_v1 {"time_micros": 1760369881619977, "job": 124, "event": "compaction_finished", "compaction_time_micros": 37059, "compaction_time_cpu_micros": 23263, "output_level": 6, "num_output_files": 1, "total_output_size": 6250699, "num_input_records": 7633, "num_output_records": 7110, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369881620424, "job": 124, "event": "table_file_deletion", "file_number": 199}
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760369881621393, "job": 124, "event": "table_file_deletion", "file_number": 197}
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.580559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.621518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.621524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.621527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.621530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:38:01 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:38:01.621532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:38:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:38:02 standalone.localdomain ceph-mon[29756]: pgmap v3704: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:38:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:38:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:38:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:38:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:38:02 standalone.localdomain podman[521971]: 2025-10-13 15:38:02.814642099 +0000 UTC m=+0.084811756 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-object, release=1, batch=17.1_20250721.1, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-swift-object-container, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.buildah.version=1.33.12, container_name=swift_object_server, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:38:02 standalone.localdomain podman[521977]: 2025-10-13 15:38:02.894457091 +0000 UTC m=+0.151099481 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, 
com.redhat.license_terms=https://www.redhat.com/agreements, release=1, container_name=swift_account_server, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, build-date=2025-07-21T16:11:22, name=rhosp17/openstack-swift-account)
Oct 13 15:38:02 standalone.localdomain podman[521979]: 2025-10-13 15:38:02.927691446 +0000 UTC m=+0.182883851 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:38:02 standalone.localdomain podman[521979]: 2025-10-13 15:38:02.936998713 +0000 UTC m=+0.192191168 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:38:02 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:38:02 standalone.localdomain podman[521972]: 2025-10-13 15:38:02.955664759 +0000 UTC m=+0.215993272 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, vcs-type=git, container_name=swift_container_server, build-date=2025-07-21T15:54:32, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, 
maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12)
Oct 13 15:38:03 standalone.localdomain podman[521971]: 2025-10-13 15:38:03.013067999 +0000 UTC m=+0.283237656 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, managed_by=tripleo_ansible, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, distribution-scope=public)
Oct 13 15:38:03 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:38:03 standalone.localdomain podman[521977]: 2025-10-13 15:38:03.098295297 +0000 UTC m=+0.354937657 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., 
architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, name=rhosp17/openstack-swift-account, release=1, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:38:03 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:38:03 standalone.localdomain podman[521972]: 2025-10-13 15:38:03.178053886 +0000 UTC m=+0.438382339 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, release=1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack 
Platform 17.1 swift-container, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, maintainer=OpenStack TripleO Team)
Oct 13 15:38:03 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:38:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:03.280 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:38:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:03.281 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:38:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:03.282 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:38:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:03.282 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:38:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3705: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:38:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:04.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:04 standalone.localdomain ceph-mon[29756]: pgmap v3705: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:38:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:04.926 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:38:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:04.926 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:38:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:04.927 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:38:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:04.927 2 DEBUG nova.objects.instance [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3706: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.632 2 DEBUG nova.objects.instance [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lazy-loading 'flavor' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.646 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.678 2 DEBUG oslo_concurrency.lockutils [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.743 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.744 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.744 2 DEBUG oslo_concurrency.lockutils [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.745 2 DEBUG nova.network.neutron [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.745 2 DEBUG nova.objects.instance [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.748 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.749 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.749 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.750 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.750 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.751 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.751 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.752 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:38:05 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:38:05Z|00061|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.802 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.803 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.804 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.804 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:38:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:05.805 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:38:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:38:06 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1465638521' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.274 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.300 2 DEBUG nova.network.neutron [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:38:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.316 2 DEBUG oslo_concurrency.lockutils [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.380 2 INFO nova.virt.libvirt.driver [-] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Instance destroyed successfully.
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.383 2 DEBUG nova.objects.instance [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lazy-loading 'numa_topology' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.402 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.403 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.403 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.404 2 DEBUG nova.objects.instance [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lazy-loading 'resources' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.415 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.416 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:38:06 standalone.localdomain podman[522097]: 2025-10-13 15:38:06.428931529 +0000 UTC m=+0.109921691 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.431 2 DEBUG nova.virt.libvirt.vif [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T14:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test',display_name='test',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='standalone.localdomain',hostname='test',id=1,image_ref='9de40855-8304-4f15-8bc5-8a8a2b61b79b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T14:16:19Z,launched_on='standalone.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='standalone.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e44641a80bcb466cb3dd688e48b72d8e',ramdisk_id='',reservation_id='r-iqeurhsr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='9de40855-8304-4f15-8bc5-8a8a2b61b79b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='pc-q35-rhel9.0.0',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:37:36Z,user_data=None,user_id='3d9eeef137fc40c78332936114fd7ee4',uuid=54a46fec-332e-42f9-83ed-88e763d13f63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.434 2 DEBUG nova.network.os_vif_util [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Converting VIF {"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.435 2 DEBUG nova.network.os_vif_util [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.436 2 DEBUG os_vif [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.441 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a49767c-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:38:06 standalone.localdomain podman[522097]: 2025-10-13 15:38:06.44290099 +0000 UTC m=+0.123891122 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.453 2 INFO os_vif [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb')
Oct 13 15:38:06 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:38:06 standalone.localdomain systemd[1]: Starting libvirt secret daemon...
Oct 13 15:38:06 standalone.localdomain systemd[1]: Started libvirt secret daemon.
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.520 2 DEBUG oslo_concurrency.lockutils [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.521 2 DEBUG oslo_concurrency.lockutils [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.523 2 DEBUG oslo_concurrency.lockutils [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.527 2 DEBUG nova.virt.libvirt.host [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.527 2 INFO nova.virt.libvirt.host [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] UEFI support detected
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.537 2 DEBUG nova.virt.libvirt.driver [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Start _get_guest_xml network_info=[{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=9de40855-8304-4f15-8bc5-8a8a2b61b79b,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'image_id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}], 'ephemerals': [{'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'size': 1}], 'block_device_mapping': [{'disk_bus': 'virtio', 'guest_format': None, 'boot_index': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0f364d66-0f07-4584-a6a7-fbb971c71200', 'hosts': ['172.18.0.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0f364d66-0f07-4584-a6a7-fbb971c71200', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '54a46fec-332e-42f9-83ed-88e763d13f63', 'attached_at': '', 'detached_at': '', 'volume_id': '0f364d66-0f07-4584-a6a7-fbb971c71200', 'serial': '0f364d66-0f07-4584-a6a7-fbb971c71200'}, 'mount_device': '/dev/vdc', 'attachment_id': 'a59f404c-c2bc-4084-9b37-4a4c77ab7cd0', 'delete_on_termination': False, 'device_type': 'disk', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.542 2 WARNING nova.virt.libvirt.driver [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.545 2 DEBUG nova.virt.libvirt.host [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Searching host: 'standalone.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.546 2 DEBUG nova.virt.libvirt.host [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.548 2 DEBUG nova.virt.libvirt.host [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Searching host: 'standalone.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.549 2 DEBUG nova.virt.libvirt.host [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.550 2 DEBUG nova.virt.libvirt.driver [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.551 2 DEBUG nova.virt.hardware [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-13T14:15:16Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='993bc811-5d85-498c-8b2f-935e295a6567',id=3,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=9de40855-8304-4f15-8bc5-8a8a2b61b79b,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.552 2 DEBUG nova.virt.hardware [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.552 2 DEBUG nova.virt.hardware [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.552 2 DEBUG nova.virt.hardware [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.553 2 DEBUG nova.virt.hardware [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.553 2 DEBUG nova.virt.hardware [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.553 2 DEBUG nova.virt.hardware [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.554 2 DEBUG nova.virt.hardware [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.554 2 DEBUG nova.virt.hardware [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.554 2 DEBUG nova.virt.hardware [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.555 2 DEBUG nova.virt.hardware [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.555 2 DEBUG nova.objects.instance [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.584 2 DEBUG nova.privsep.utils [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.585 2 DEBUG oslo_concurrency.processutils [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:38:06 standalone.localdomain ceph-mon[29756]: pgmap v3706: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:38:06 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1465638521' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.697 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.698 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9651MB free_disk=6.9495849609375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.698 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.699 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.781 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.781 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.782 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.782 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:38:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:06.864 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:38:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:06.957 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:38:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:06.959 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:38:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:06.960 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:38:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 13 15:38:07 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2860640089' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.033 2 DEBUG oslo_concurrency.processutils [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.036 2 DEBUG oslo_concurrency.processutils [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:38:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:38:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:38:07 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3210316572' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.317 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.327 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.344 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.368 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.369 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:38:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 13 15:38:07 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/32298254' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.472 2 DEBUG oslo_concurrency.processutils [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:38:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3707: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.500 2 DEBUG nova.virt.libvirt.vif [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T14:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test',display_name='test',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='standalone.localdomain',hostname='test',id=1,image_ref='9de40855-8304-4f15-8bc5-8a8a2b61b79b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T14:16:19Z,launched_on='standalone.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='standalone.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e44641a80bcb466cb3dd688e48b72d8e',ramdisk_id='',reservation_id='r-iqeurhsr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='9de40855-8304-4f15-8bc5-8a8a2b61b79b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='pc-q35-rhel9.0.0',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:37:36Z,user_data=None,user_id='3d9eeef137fc40c78332936114fd7ee4',uuid=54a46fec-332e-42f9-83ed-88e763d13f63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.501 2 DEBUG nova.network.os_vif_util [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Converting VIF {"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.502 2 DEBUG nova.network.os_vif_util [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.504 2 DEBUG nova.objects.instance [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lazy-loading 'pci_devices' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.521 2 DEBUG nova.virt.libvirt.driver [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] End _get_guest_xml xml=<domain type="kvm">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   <uuid>54a46fec-332e-42f9-83ed-88e763d13f63</uuid>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   <name>instance-00000001</name>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   <memory>524288</memory>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   <vcpu>1</vcpu>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   <metadata>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <nova:name>test</nova:name>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <nova:creationTime>2025-10-13 15:38:06</nova:creationTime>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <nova:flavor name="m1.small">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:         <nova:memory>512</nova:memory>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:         <nova:disk>1</nova:disk>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:         <nova:swap>0</nova:swap>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:         <nova:ephemeral>1</nova:ephemeral>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:         <nova:vcpus>1</nova:vcpus>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       </nova:flavor>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <nova:owner>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:         <nova:user uuid="3d9eeef137fc40c78332936114fd7ee4">admin</nova:user>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:         <nova:project uuid="e44641a80bcb466cb3dd688e48b72d8e">admin</nova:project>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       </nova:owner>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <nova:root type="image" uuid="9de40855-8304-4f15-8bc5-8a8a2b61b79b"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <nova:ports>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:         <nova:port uuid="8a49767c-fb09-4185-95da-4261d8043fad">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:           <nova:ip type="fixed" address="192.168.0.238" ipVersion="4"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:         </nova:port>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       </nova:ports>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     </nova:instance>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   </metadata>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   <sysinfo type="smbios">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <system>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <entry name="manufacturer">RDO</entry>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <entry name="product">OpenStack Compute</entry>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <entry name="serial">54a46fec-332e-42f9-83ed-88e763d13f63</entry>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <entry name="uuid">54a46fec-332e-42f9-83ed-88e763d13f63</entry>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <entry name="family">Virtual Machine</entry>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     </system>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   </sysinfo>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   <os>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <type arch="x86_64" machine="pc-q35-rhel9.0.0">hvm</type>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <boot dev="hd"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <smbios mode="sysinfo"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   </os>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   <features>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <acpi/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <apic/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <vmcoreinfo/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   </features>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   <clock offset="utc">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <timer name="pit" tickpolicy="delay"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <timer name="hpet" present="no"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   </clock>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   <cpu mode="host-model" match="exact">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <topology sockets="1" cores="1" threads="1"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   </cpu>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   <devices>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <disk type="network" device="disk">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <driver type="raw" cache="none"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <source protocol="rbd" name="vms/54a46fec-332e-42f9-83ed-88e763d13f63_disk">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:         <host name="172.18.0.100" port="6789"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       </source>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <auth username="openstack">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:         <secret type="ceph" uuid="627e7f45-65aa-56de-94df-66eaee84a56e"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       </auth>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <target dev="vda" bus="virtio"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     </disk>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <disk type="network" device="disk">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <driver type="raw" cache="none"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <source protocol="rbd" name="vms/54a46fec-332e-42f9-83ed-88e763d13f63_disk.eph0">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:         <host name="172.18.0.100" port="6789"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       </source>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <auth username="openstack">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:         <secret type="ceph" uuid="627e7f45-65aa-56de-94df-66eaee84a56e"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       </auth>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <target dev="vdb" bus="virtio"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     </disk>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <disk type="network" device="disk">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <source protocol="rbd" name="volumes/volume-0f364d66-0f07-4584-a6a7-fbb971c71200">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:         <host name="172.18.0.100" port="6789"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       </source>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <auth username="openstack">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:         <secret type="ceph" uuid="627e7f45-65aa-56de-94df-66eaee84a56e"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       </auth>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <target dev="vdc" bus="virtio"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <serial>0f364d66-0f07-4584-a6a7-fbb971c71200</serial>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     </disk>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <interface type="ethernet">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <mac address="fa:16:3e:15:4e:c0"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <model type="virtio"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <driver name="vhost" rx_queue_size="512"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <mtu size="1442"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <target dev="tap8a49767c-fb"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     </interface>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <serial type="pty">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <log file="/var/lib/nova/instances/54a46fec-332e-42f9-83ed-88e763d13f63/console.log" append="off"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     </serial>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <video>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <model type="virtio"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     </video>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <input type="tablet" bus="usb"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <input type="keyboard" bus="usb"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <rng model="virtio">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <backend model="random">/dev/urandom</backend>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     </rng>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <controller type="usb" index="0"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     <memballoon model="virtio">
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:       <stats period="10"/>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:     </memballoon>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:   </devices>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: </domain>
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.523 2 DEBUG nova.virt.libvirt.driver [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.523 2 DEBUG nova.virt.libvirt.driver [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.523 2 DEBUG nova.virt.libvirt.driver [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.525 2 DEBUG nova.virt.libvirt.vif [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T14:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test',display_name='test',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='standalone.localdomain',hostname='test',id=1,image_ref='9de40855-8304-4f15-8bc5-8a8a2b61b79b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T14:16:19Z,launched_on='standalone.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='standalone.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='e44641a80bcb466cb3dd688e48b72d8e',ramdisk_id='',reservation_id='r-iqeurhsr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='9de40855-8304-4f15-8bc5-8a8a2b61b79b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='pc-q35-rhel9.0.0',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256=''
,owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:37:36Z,user_data=None,user_id='3d9eeef137fc40c78332936114fd7ee4',uuid=54a46fec-332e-42f9-83ed-88e763d13f63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.525 2 DEBUG nova.network.os_vif_util [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Converting VIF {"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.526 2 DEBUG nova.network.os_vif_util [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.527 2 DEBUG os_vif [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.528 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.529 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a49767c-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a49767c-fb, col_values=(('external_ids', {'iface-id': '8a49767c-fb09-4185-95da-4261d8043fad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:4e:c0', 'vm-uuid': '54a46fec-332e-42f9-83ed-88e763d13f63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.587 2 INFO os_vif [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:4e:c0,bridge_name='br-int',has_traffic_filtering=True,id=8a49767c-fb09-4185-95da-4261d8043fad,network=Network(0c455abd-28d4-47e7-a254-e50de0526def),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a49767c-fb')
Oct 13 15:38:07 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2860640089' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 15:38:07 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3210316572' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:38:07 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/32298254' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 15:38:07 standalone.localdomain kernel: device tap8a49767c-fb entered promiscuous mode
Oct 13 15:38:07 standalone.localdomain NetworkManager[5962]: <info>  [1760369887.6603] manager: (tap8a49767c-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/23)
Oct 13 15:38:07 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:38:07Z|00062|binding|INFO|Claiming lport 8a49767c-fb09-4185-95da-4261d8043fad for this chassis.
Oct 13 15:38:07 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:38:07Z|00063|binding|INFO|8a49767c-fb09-4185-95da-4261d8043fad: Claiming fa:16:3e:15:4e:c0 192.168.0.238
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:07 standalone.localdomain systemd-udevd[522211]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:38:07 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:38:07Z|00064|binding|INFO|Setting lport 8a49767c-fb09-4185-95da-4261d8043fad ovn-installed in OVS
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:07 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:38:07Z|00065|binding|INFO|Setting lport 8a49767c-fb09-4185-95da-4261d8043fad up in Southbound
Oct 13 15:38:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:07.677 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:4e:c0 192.168.0.238'], port_security=['fa:16:3e:15:4e:c0 192.168.0.238'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.238/24', 'neutron:device_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c455abd-28d4-47e7-a254-e50de0526def', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'abececb5-f6f2-4dbd-993f-ddc54effe614', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a0c27a6-9f97-46f6-a22d-9a9321701921, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=1, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=8a49767c-fb09-4185-95da-4261d8043fad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:38:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:07.679 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 8a49767c-fb09-4185-95da-4261d8043fad in datapath 0c455abd-28d4-47e7-a254-e50de0526def bound to our chassis
Oct 13 15:38:07 standalone.localdomain NetworkManager[5962]: <info>  [1760369887.6822] device (tap8a49767c-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Oct 13 15:38:07 standalone.localdomain NetworkManager[5962]: <info>  [1760369887.6831] device (tap8a49767c-fb): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Oct 13 15:38:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:07.683 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 84bcd2dc-1ea4-4b7f-a1ad-ec9a7d798be1 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:38:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:07.683 378821 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0c455abd-28d4-47e7-a254-e50de0526def
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:07.698 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[5d69ac84-ecba-4642-a1b0-ba9fd747825f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:38:07 standalone.localdomain systemd-machined[183383]: New machine qemu-3-instance-00000001.
Oct 13 15:38:07 standalone.localdomain systemd[1]: Started Virtual Machine qemu-3-instance-00000001.
Oct 13 15:38:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:07.722 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[42a8265b-79c6-4947-aafc-8a569aae1d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:38:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:07.727 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac1d920-e31b-4277-943d-5583b1729df3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:38:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:07.756 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[f42b0bf7-8df2-425f-8aed-834b59277e3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:38:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:07.777 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[a14f8ddf-88d2-46f9-8ef0-e3597efae901]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0c455abd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e5:ca:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 55, 'tx_packets': 74, 'rx_bytes': 4010, 'tx_bytes': 4440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 55, 'tx_packets': 74, 'rx_bytes': 4010, 'tx_bytes': 4440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], 
['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483664], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442179, 'reachable_time': 40123, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 9, 'inoctets': 672, 'indelivers': 2, 'outforwdatagrams': 0, 'outpkts': 20, 'outoctets': 1412, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 9, 'outmcastpkts': 20, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 672, 'outmcastoctets': 1412, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 9, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 2, 'inerrors': 0, 'outmsgs': 20, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 522227, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:38:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:07.798 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[68b0e553-bc67-4c24-8182-a8818d774f72]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0c455abd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442185, 'tstamp': 442185}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 522229, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap0c455abd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442186, 'tstamp': 442186}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 522229, 'error': None, 'target': 'ovnmeta-0c455abd-28d4-47e7-a254-e50de0526def', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:38:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:07.800 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c455abd-20, bridge=br-ctlplane, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:07.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:07.806 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c455abd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:38:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:07.806 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:38:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:07.807 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0c455abd-20, col_values=(('external_ids', {'iface-id': '52f81088-6dd1-4b23-992e-fdefd91123e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:38:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:07.807 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.375 2 DEBUG nova.virt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Emitting event <LifecycleEvent: 1760369888.3749642, 54a46fec-332e-42f9-83ed-88e763d13f63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.376 2 INFO nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] VM Resumed (Lifecycle Event)
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.378 2 DEBUG nova.compute.manager [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.382 2 INFO nova.virt.libvirt.driver [-] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Instance rebooted successfully.
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.383 2 DEBUG nova.compute.manager [None req-97a50d16-3391-4f7a-a3b4-c109e3a38c28 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.405 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.408 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.425 2 DEBUG nova.compute.manager [req-41b1b78f-db49-4a75-966b-ecfc5628328d req-2b4f3d3a-a358-4ecc-b7a5-d683a75ac5ed 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Received event network-vif-plugged-8a49767c-fb09-4185-95da-4261d8043fad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.425 2 DEBUG oslo_concurrency.lockutils [req-41b1b78f-db49-4a75-966b-ecfc5628328d req-2b4f3d3a-a358-4ecc-b7a5-d683a75ac5ed 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Acquiring lock "54a46fec-332e-42f9-83ed-88e763d13f63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.425 2 DEBUG oslo_concurrency.lockutils [req-41b1b78f-db49-4a75-966b-ecfc5628328d req-2b4f3d3a-a358-4ecc-b7a5-d683a75ac5ed 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.426 2 DEBUG oslo_concurrency.lockutils [req-41b1b78f-db49-4a75-966b-ecfc5628328d req-2b4f3d3a-a358-4ecc-b7a5-d683a75ac5ed 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.426 2 DEBUG nova.compute.manager [req-41b1b78f-db49-4a75-966b-ecfc5628328d req-2b4f3d3a-a358-4ecc-b7a5-d683a75ac5ed 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] No waiting events found dispatching network-vif-plugged-8a49767c-fb09-4185-95da-4261d8043fad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.426 2 WARNING nova.compute.manager [req-41b1b78f-db49-4a75-966b-ecfc5628328d req-2b4f3d3a-a358-4ecc-b7a5-d683a75ac5ed 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Received unexpected event network-vif-plugged-8a49767c-fb09-4185-95da-4261d8043fad for instance with vm_state stopped and task_state powering-on.
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.429 2 INFO nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.430 2 DEBUG nova.virt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Emitting event <LifecycleEvent: 1760369888.376219, 54a46fec-332e-42f9-83ed-88e763d13f63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.430 2 INFO nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] VM Started (Lifecycle Event)
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.445 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:38:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:08.449 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 13 15:38:08 standalone.localdomain ceph-mon[29756]: pgmap v3707: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:38:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:38:08 standalone.localdomain podman[522291]: 2025-10-13 15:38:08.827296571 +0000 UTC m=+0.095178416 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:38:08 standalone.localdomain podman[522291]: 2025-10-13 15:38:08.839111916 +0000 UTC m=+0.106993811 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:38:08 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:38:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3708: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 98 KiB/s rd, 170 B/s wr, 12 op/s
Oct 13 15:38:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:10.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:10.524 2 DEBUG nova.compute.manager [req-b1061aa6-779e-407b-bd9f-383fc9aedb3a req-4d18a4b8-9c5d-4bb1-beec-48a27e03b652 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Received event network-vif-plugged-8a49767c-fb09-4185-95da-4261d8043fad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 13 15:38:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:10.525 2 DEBUG oslo_concurrency.lockutils [req-b1061aa6-779e-407b-bd9f-383fc9aedb3a req-4d18a4b8-9c5d-4bb1-beec-48a27e03b652 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Acquiring lock "54a46fec-332e-42f9-83ed-88e763d13f63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:38:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:10.526 2 DEBUG oslo_concurrency.lockutils [req-b1061aa6-779e-407b-bd9f-383fc9aedb3a req-4d18a4b8-9c5d-4bb1-beec-48a27e03b652 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:38:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:10.527 2 DEBUG oslo_concurrency.lockutils [req-b1061aa6-779e-407b-bd9f-383fc9aedb3a req-4d18a4b8-9c5d-4bb1-beec-48a27e03b652 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:38:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:10.527 2 DEBUG nova.compute.manager [req-b1061aa6-779e-407b-bd9f-383fc9aedb3a req-4d18a4b8-9c5d-4bb1-beec-48a27e03b652 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] No waiting events found dispatching network-vif-plugged-8a49767c-fb09-4185-95da-4261d8043fad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 13 15:38:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:10.527 2 WARNING nova.compute.manager [req-b1061aa6-779e-407b-bd9f-383fc9aedb3a req-4d18a4b8-9c5d-4bb1-beec-48a27e03b652 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Received unexpected event network-vif-plugged-8a49767c-fb09-4185-95da-4261d8043fad for instance with vm_state active and task_state None.
Oct 13 15:38:10 standalone.localdomain ceph-mon[29756]: pgmap v3708: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 98 KiB/s rd, 170 B/s wr, 12 op/s
Oct 13 15:38:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3709: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 98 KiB/s rd, 170 B/s wr, 12 op/s
Oct 13 15:38:11 standalone.localdomain podman[467099]: time="2025-10-13T15:38:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:38:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:38:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:38:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:38:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:38:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49120 "" "Go-http-client/1.1"
Oct 13 15:38:11 standalone.localdomain podman[522311]: 2025-10-13 15:38:11.795913578 +0000 UTC m=+0.068532164 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:38:11 standalone.localdomain podman[522311]: 2025-10-13 15:38:11.806470544 +0000 UTC m=+0.079089140 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:38:11 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:38:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:38:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:12.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:12 standalone.localdomain ceph-mon[29756]: pgmap v3709: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 98 KiB/s rd, 170 B/s wr, 12 op/s
Oct 13 15:38:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:38:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:38:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:38:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:38:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:38:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:38:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:38:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:38:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:38:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:38:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:38:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:38:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3710: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 98 KiB/s rd, 170 B/s wr, 12 op/s
Oct 13 15:38:14 standalone.localdomain ceph-mon[29756]: pgmap v3710: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 98 KiB/s rd, 170 B/s wr, 12 op/s
Oct 13 15:38:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:15.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3711: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 73 op/s
Oct 13 15:38:16 standalone.localdomain ceph-mon[29756]: pgmap v3711: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 73 op/s
Oct 13 15:38:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:38:16 standalone.localdomain systemd[1]: tmp-crun.paxX8X.mount: Deactivated successfully.
Oct 13 15:38:16 standalone.localdomain podman[522333]: 2025-10-13 15:38:16.826628467 +0000 UTC m=+0.097838449 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:38:16 standalone.localdomain podman[522333]: 2025-10-13 15:38:16.859698017 +0000 UTC m=+0.130907999 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:38:16 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:38:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:38:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3712: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 73 op/s
Oct 13 15:38:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:17.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:18 standalone.localdomain ceph-mon[29756]: pgmap v3712: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 73 op/s
Oct 13 15:38:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3713: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 2.0 MiB/s rd, 170 B/s wr, 75 op/s
Oct 13 15:38:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:20.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:20 standalone.localdomain ceph-mon[29756]: pgmap v3713: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 2.0 MiB/s rd, 170 B/s wr, 75 op/s
Oct 13 15:38:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:20.995 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:20.999 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.000 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.000 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.020 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 10210000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.048 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 33180000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ce6506a1-1028-4e14-9f6c-78db6cc09509', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 10210000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:38:21.000981', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'a5bdd1a4-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.209735196, 'message_signature': '88fd79393e57193f441a97614e129b6cbf85720c8595a8b1b7a083bb584e53cf'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 33180000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': '2025-10-13T15:38:21.000981', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'a5c20e18-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.23774829, 'message_signature': '4bdec5e28f9b62a5bd917ff40068309f10a8db40db4384942a01d5cb956383e8'}]}, 'timestamp': '2025-10-13 15:38:21.049352', '_unique_id': '02af364e6f0b458c92e863cd8b424bdb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.051 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.052 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.090 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.091 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.092 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 26173440 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.119 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.120 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3f2d4200-174d-4c15-b9d1-4d13fece382e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:38:21.053101', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5c882de-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': 'a26b71ad4ad73c6676dcbb389497c8fef383441ea9b241c5517d59d95bb678e5'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:38:21.053101', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5c89f62-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': '99a87e3a51b08273ca00260859030a9abc8b84d767efe8cd61ac4c7b7d7ed1fe'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 26173440, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:38:21.053101', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'a5c8b4ca-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': '2efd6a7d45cc6494306e3efd291ed840c78f91131630cc9a06252eabcddcf32e'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:38:21.053101', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5ccdffa-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.28215217, 'message_signature': 'c416d1402331e5de814338dc34c4ca11a7ac7bda36fffa5c82bac33234a7eadd'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:38:21.053101', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5ccfd28-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.28215217, 'message_signature': '65e81b15c714c781f6bdf4bdcb13e21e84f19839d9a9fac19995dccd202a0a35'}]}, 'timestamp': '2025-10-13 15:38:21.120938', '_unique_id': '8741ea8f1f5a4a8f878f8b4e625c10b3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.122 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.124 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.129 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 616 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.133 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 3924 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '484b4ebb-aed9-4d99-9d5f-0f78de5aac3b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 616, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:38:21.124262', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'a5ce6f78-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.313559038, 'message_signature': 'db721adb453373e0a9bb3f9dd4e4479d2b1d3856994890686383d38d50653bc2'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3924, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:38:21.124262', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'a5cf0762-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.319829531, 'message_signature': 'd5a381b834798f6dfd9197b1d6068149da2312227d0aacb9a8ac7bdd09f5b16b'}]}, 'timestamp': '2025-10-13 15:38:21.134343', '_unique_id': 'b4a165fdc44f443696e369f03c1cf472'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.136 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.137 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.138 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.138 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dcdd5f64-5fca-4c65-936f-a8c48cbd5915', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:38:21.138048', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'a5cfaf96-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.313559038, 'message_signature': 'f85847a0040463db98c48c9adaabfb9642b37961c53b67727b35158319d9f069'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:38:21.138048', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'a5cfcb02-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.319829531, 'message_signature': 'e947669606e876562a0f823ac1836bc0e6e43699ab82a50bcf22ed8cdecacefe'}]}, 'timestamp': '2025-10-13 15:38:21.139275', '_unique_id': 'a8779409e29d47e4838c9b228d37183e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.140 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.142 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.142 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 266 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.143 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '77896ebf-51e5-4672-b085-765d2c03df39', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 266, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:38:21.142439', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'a5d05d92-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.313559038, 'message_signature': '5bc80e11b2e42daeef654e0fec640bf53dc9e6f84d49b8511e238343dfe12d6c'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:38:21.142439', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'a5d07ad4-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.319829531, 'message_signature': 'c13e58d45f2ebccbb1e4ac6d7deb71ccbd743b688835d6636ef9a532986b52c3'}]}, 'timestamp': '2025-10-13 15:38:21.143801', '_unique_id': 'bf03e6b0b53f42319f743dc554c3bdb7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.145 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.146 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.146 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 26499072 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.148 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 2150400 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.149 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 2880512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.149 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.150 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8081e830-1b7e-46dd-b49f-b6eebf5fd045', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 26499072, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:38:21.146769', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5d10850-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': '5eab58c1a55f2116733e111fd51238983f95db669316901e24eb0ef6000c9460'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2150400, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:38:21.146769', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5d1473e-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': 'ede66ce03e9543946d86c6b01419f407317c6373b162547a029f950f94a28488'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2880512, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:38:21.146769', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'a5d15ae4-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': '71e461db683745712718792a31f084c74347425c3b2af7cd737c1d5deede6b92'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:38:21.146769', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5d172d6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.28215217, 'message_signature': '4a658f291ea5891b709cf2cb1069ee45d8b6d1e6b5d0d5afc1ab313f91f23f9b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:38:21.146769', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5d18474-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.28215217, 'message_signature': '1bac1fa7e379cfeaf113b37b1db4b80b420b1a12a61ba23c95950da1fa6324fc'}]}, 'timestamp': '2025-10-13 15:38:21.150665', '_unique_id': 'fcc3d98ad1ad47b78c7389dd37f67284'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.152 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.153 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.154 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.154 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '44b92416-1c96-49d9-bf26-134ebd42cfe4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:38:21.154020', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'a5d220f0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.313559038, 'message_signature': 'bbb7e63c24bf420dfa6b0aa3583d0cc6dff03fae2c898c2d1d0f572d7935e254'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:38:21.154020', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'a5d23b58-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.319829531, 'message_signature': '7983c62591a37719ea6044c0ac2914c6ca525d16a6e3888cafd1f1a53f975464'}]}, 'timestamp': '2025-10-13 15:38:21.155308', '_unique_id': 'f40761cc674e4ed3bd133786bb07b0ea'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.156 12 ERROR oslo_messaging.notify.messaging 
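The root cause at the bottom of the chained traceback above is a plain TCP-level refusal: `self.sock.connect(sa)` in amqp/transport.py raises `ConnectionRefusedError: [Errno 111] Connection refused` because nothing is listening on the RabbitMQ port, and kombu then re-raises it as `kombu.exceptions.OperationalError`. The stdlib-only sketch below (no kombu required, hypothetical `try_connect` helper) reproduces that underlying failure mode by connecting to a port with no listener:

```python
# Minimal sketch of the failure at the bottom of the traceback: a TCP
# connect to a port with no listener fails with ECONNREFUSED (errno 111
# on Linux), which kombu later wraps as OperationalError. The helper
# name `try_connect` is illustrative, not part of kombu or amqp.
import errno
import socket


def try_connect(host: str, port: int, timeout: float = 1.0) -> int:
    """Return 0 on success, or the errno of the connection failure."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        try:
            sock.connect((host, port))
        except ConnectionRefusedError as exc:
            # This is the same error the amqp transport hits in _connect().
            return exc.errno
        return 0
```

In the log this surfaces every polling cycle because each notification attempt opens a fresh connection to the (unreachable) broker; fixing the broker-side service or the `transport_url` the agent uses is what stops the repetition, not anything in the agent itself.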
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.158 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.158 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 616 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.159 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0a1ae350-bb92-4f04-b2af-e3f53f7645f3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 616, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:38:21.158566', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'a5d2d536-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.313559038, 'message_signature': '8f360330b0bcd2c942cea4e7b6f3ba0a4de2918c76ad9e3c9fff19d408274487'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 168, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:38:21.158566', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'a5d2e4fe-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.319829531, 'message_signature': 'fc8b3f3e2deae6541f032707b7ad31727dbe3b01fe14069ecac481df3e895941'}]}, 'timestamp': '2025-10-13 15:38:21.159514', '_unique_id': '476e3c3c3ebd4930b5cbd0af44992bbc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 ERROR oslo_messaging.notify.messaging 
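Each traceback also passes through `retry_over_time` in kombu/utils/functional.py before giving up and re-raising. A much-simplified stdlib analog of that retry-with-backoff pattern is sketched below; the signature and behavior are illustrative only and do not match kombu's actual implementation (which also supports per-retry callbacks and jittered intervals):

```python
# Simplified analog of kombu's retry_over_time: call fun(), retrying on
# the given exception type with a linearly growing sleep, and re-raise
# once the retry budget is exhausted. Not kombu's real code.
import time


def retry_over_time(fun, catch, max_retries=3,
                    interval_start=0.0, interval_step=0.0):
    interval = interval_start
    for attempt in range(max_retries + 1):
        try:
            return fun()
        except catch:
            if attempt == max_retries:
                raise  # budget exhausted: propagate, as seen in the log
            time.sleep(interval)
            interval += interval_step
```

When the broker stays down, every retry hits the same ECONNREFUSED, so the final re-raise is what oslo.messaging logs as "Could not send notification" with the full payload attached.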
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.160 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.175 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.175 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.176 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.190 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.190 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '11ce5062-33be-47f6-b1c7-4a6ecc2f9ecb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:38:21.161090', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5d565b2-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.350335182, 'message_signature': 'f0cf561225e9d923374d2701af95eeb006dbbf942a174730b4462e532023051c'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:38:21.161090', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5d57110-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.350335182, 'message_signature': 'db9358b4cb014c80e5e37e671d42bbadecd14beaae9ca761f4072c90cf680adb'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:38:21.161090', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'a5d57bb0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.350335182, 'message_signature': '77d758a6a3993161f8cb1e1b1e225220378799daf9e01b062c4c62e3fdfa6e56'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:38:21.161090', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5d7b02e-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.365713277, 'message_signature': 'e4bc3df2d7ae0139af0ef29cd6e62907925c47898b06ab379d65f031a689f6f4'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:38:21.161090', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5d7bd44-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.365713277, 'message_signature': 'baeb84a761e75c6205850dbecdad3a4a0d78ee0815f8d9aeb5323a0bb33545b4'}]}, 'timestamp': '2025-10-13 15:38:21.191239', '_unique_id': '338d78a77f074869bd163e6f61855398'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.192 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.193 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.193 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 8 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.193 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 55 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '20c2b223-f6ec-494b-840d-e76a736704f6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 8, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:38:21.193121', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'a5d812da-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.313559038, 'message_signature': '698b99d9c8c6d7a78730e8b8860cd2d36b0c522edb96e9f18e40f60981d4de50'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 55, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:38:21.193121', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'a5d81f0a-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.319829531, 'message_signature': '44c1a71de4ddf84770d141cb53ea68a41b03d84496c1dbee8b4ce3b4a930fb1c'}]}, 'timestamp': '2025-10-13 15:38:21.193742', '_unique_id': 'f3e9e5b3120b4feebcd8153a83efb242'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.194 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.195 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.195 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.195 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.195 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.196 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.196 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f065776c-6118-49d2-9052-bd139ceccfc7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:38:21.195381', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5d86b04-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.350335182, 'message_signature': '3e96009385b21f468c1239b34dd168a234072eba18fde976d724c070f365cb4f'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:38:21.195381', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5d875a4-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.350335182, 'message_signature': 'f2888dfdb213824d64d277d94a3aa6f70f5bb58a31a735c0be5c59d431e1a7f5'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:38:21.195381', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'a5d88008-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.350335182, 'message_signature': '9b536b8aedcb0e0d1c77dca66e87581cfdc8cd9e8a0838a2f85f0fdb63244fce'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:38:21.195381', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5d88a26-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.365713277, 'message_signature': '45ef88c989afd1e1553039db44377b81cfbaf85b1b29c0ff185a2c43cfc2df68'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:38:21.195381', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5d89642-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.365713277, 'message_signature': 'ca0fd29c7981473cec87904b3282624730b65f4a62f278dd48e6ebf33de8961c'}]}, 'timestamp': '2025-10-13 15:38:21.196827', '_unique_id': '1f5006075bff431f824753e03024b789'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.197 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.198 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.198 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.198 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a9e31584-db7b-4916-80b9-de086c716fe0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:38:21.198429', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'a5d8e2c8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.313559038, 'message_signature': '31b8b1838ac33698cd5a84f2cf6b5d97c8f36025e5cdfb4d20b80dd9dfd3e9ac'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:38:21.198429', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'a5d8eec6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.319829531, 'message_signature': '585629dc8fa8803513d70727335377a73f1a2e84f19f12573e4508755a405b53'}]}, 'timestamp': '2025-10-13 15:38:21.199063', '_unique_id': '8988c1f2a6cd4134acc7a93e90941534'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.200 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.200 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 266 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.200 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8d8d2fde-a1bd-422a-92ae-8e2dec7104e2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 266, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:38:21.200501', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'a5d934f8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.313559038, 'message_signature': '053abfee19a0d501cc30e323e838b5fa64d6456e41ba9d94ce7151ddeea06a29'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:38:21.200501', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'a5d94222-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.319829531, 'message_signature': '1c04328f3a54c932b36a07080e02a952630900732320440b97e5699c6a47788e'}]}, 'timestamp': '2025-10-13 15:38:21.201201', '_unique_id': '0632cd60130246d296d25d58c2ad8e2c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.201 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.202 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.202 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 602165617 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.202 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 43901911 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.203 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 91748251 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.203 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.203 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7b035e35-2433-43ce-8a0e-51b8ad779eea', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 602165617, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:38:21.202650', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5d9864c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': '31c943ada22c84cf99abbd8945b4803e78d02ca11bfa1b506c9197158500599c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 43901911, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:38:21.202650', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5d991be-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': 'cf428a104828a13011069914cf5adc90b8272758d754704e065de7a2a7c6716f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 91748251, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:38:21.202650', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'a5d99c72-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': '2c74ba9fcf6268ffabd1736940ff3782ad8621e71d6d6b120799e9e7de0a143b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:38:21.202650', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5d9a802-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.28215217, 'message_signature': '6ee68d1c80f1e3922bff62d0a86ef46179899aa8c4b78216ca8cc3e65b2cc4bb'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:38:21.202650', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5d9b1bc-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.28215217, 'message_signature': 'ab8964b17d439f58194c4c33a70a9e7366dd27957c558e15b6c3a2062aba08b7'}]}, 'timestamp': '2025-10-13 15:38:21.204037', '_unique_id': 'd46621f2bf0c426ea81b220bb20767fc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.204 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.205 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.205 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 878 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.206 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 99 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.206 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 153 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.206 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'db316817-8437-4773-bfaa-4b6aec950450', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 878, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:38:21.205815', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5da0202-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': 'dc8a24764ae0d7e5af89f99e11291c8d1d0931c43a72027338c8fafc8a6b7d30'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 99, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:38:21.205815', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5da0f4a-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': '18481f30153841e017e3de878406a346433f4206262d83fd2340ba796e184949'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 153, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:38:21.205815', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'a5da1c9c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': 'f806f01efd5ddf7de66f198e6263213d6e3295cfbc338c98324f636a219d58c7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:38:21.205815', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5da276e-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.28215217, 'message_signature': '71534bdea780d4d2a5e1150ecac6bc421f199713a3c6b93978499f8308565276'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:38:21.205815', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5da30d8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.28215217, 'message_signature': '38e19f5e80d98904949739363a6aa45015fe008d7a5fc65bc84c00176c3d2f06'}]}, 'timestamp': '2025-10-13 15:38:21.207287', '_unique_id': '0de9c57fe11441708d8c17b8789ccc2b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.207 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.208 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.208 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 48.7109375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '84432378-f4a3-40c2-ac61-b4b15e99269a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 48.7109375, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:38:21.208778', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'a5da757a-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.209735196, 'message_signature': 'b1b280543c071a08b4e906c0d9f034f0f7245dbedad68283c0219cc5bf3c546b'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:38:21.208778', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'a5da7f34-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.23774829, 'message_signature': 'b2c5ebf5994b2a03a88d7b1a086d207c2cd4100ad240ff52ff2c60754b855e29'}]}, 'timestamp': '2025-10-13 15:38:21.209295', '_unique_id': '3d8452ee987d48e49fe78d745eee9669'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.210 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.210 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.210 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.211 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '970fc439-113d-462d-b10b-e9bd28bf425f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:38:21.210871', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'a5dac782-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.313559038, 'message_signature': 'c4be5180d4527a9b568ea290766d4f6dae66da199ec00e35a68202bb821fd368'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:38:21.210871', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'a5dad29a-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.319829531, 'message_signature': 'db7569e2dd8b0aceefc918c3b90f6d2ea53cd5d994a15ed829d26fe4b95bda1c'}]}, 'timestamp': '2025-10-13 15:38:21.211457', '_unique_id': 'b852369247b245edbd6e6cb9fcc68343'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.212 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.213 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.213 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.213 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.213 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.214 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '70085e73-ffa5-46e5-be5d-a5d735442d95', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:38:21.213003', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5db1a70-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.350335182, 'message_signature': '3942fab6f0c4b016ae981c6984329506ec678f98d6b63da0b1ee7582617847a8'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:38:21.213003', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5db24c0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.350335182, 'message_signature': 'cb3a8e8ec3e9e91593f5a6b175941de65dc2d36963453abda80eed9a0ccfdc91'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:38:21.213003', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'a5db3078-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.350335182, 'message_signature': '5921d71f95124b3bab5fd1a117b79b94f52e9f55723250581bb325b22ed8aa41'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:38:21.213003', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5db3a50-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.365713277, 'message_signature': 'c893019493e933dd418fb16217c69eac274a2eac38c60131da56c2ffa4c7bb73'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:38:21.213003', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5db43e2-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.365713277, 'message_signature': '0275708d18afb6276950f425ea6b7643b2ff590da78441c4fc23f810d89fa8cb'}]}, 'timestamp': '2025-10-13 15:38:21.214368', '_unique_id': '00681c6b2e7e41cdbaf47783b0d04f07'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.215 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.216 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.216 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 248 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.216 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9215680e-0f85-448e-89b5-d2131ef892ac', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:38:21.215963', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5db8e42-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': '8eb29a8a2442f4f76a49e6b8d2a614573b6551d9232d102e174906424ea4653d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:38:21.215963', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5db9892-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': '978855422e659abb85b32d63fd046e1993936828c42e2d8d1fb2324b2edb5238'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 248, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:38:21.215963', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'a5dba3e6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': '4960e09e8f4eb60252ea127c09ffbf062e52ae07dab94f97acecc75c84446b21'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:38:21.215963', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5dbadc8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.28215217, 'message_signature': 'fdd47ece69fc57e334b1883b9e7805d944ed3862227fe4f86c8dc2f5861a0a10'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:38:21.215963', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5dbb746-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.28215217, 'message_signature': 'fe680012386a0d6345c68d0219512b3877416d77122fdef66c9b245472b9c021'}]}, 'timestamp': '2025-10-13 15:38:21.217284', '_unique_id': 'a44863b5eb394279bfc628dc5b3f0ae6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.217 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.218 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.218 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.219 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.219 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 7719235907 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.219 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.220 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f709404b-5c58-43d8-ac1d-d586a1e9ed1b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:38:21.218942', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5dc02c8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': '44688683110b107c84371a943248b9ce59f28ee298834ef6cde43d9e77efdb1b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:38:21.218942', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5dc0cdc-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': 'cf570fb361e7aa1adc80f6ef6e0398bbba095fd7b005cf4aa7a3a1a12f0f479b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 7719235907, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:38:21.218942', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'a5dc19fc-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.242533488, 'message_signature': 'd1e64652ca4ecfcda7d81de1e8b16972320105728dec75df4735e9788a52718b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:38:21.218942', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a5dc2410-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.28215217, 'message_signature': '46df3e7af56d247645c875c5e60980dcc578a5293f007bc64187c082d1ff578a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:38:21.218942', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a5dc2db6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.28215217, 'message_signature': 'ec61ea95535aad773a3ffe9a6069bfca3164062f86c5dc38a2ea12e95e638f68'}]}, 'timestamp': '2025-10-13 15:38:21.220317', '_unique_id': '21885647fc8642e99dcd194ad083587b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.221 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 3 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.222 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '36956a17-e281-4daf-aff2-3a9627f116ae', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 3, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:38:21.221847', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'a5dc744c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.313559038, 'message_signature': '9cee3211ccc06d7316e1feae6727fd49a5bffa86e5232a213314d98a1ee64e54'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:38:21.221847', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'a5dc7f64-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9344.319829531, 'message_signature': '699b33b0ce381a5e937439e5456fccdc89e50e1bbd117b40d5f5ccce6a67b647'}]}, 'timestamp': '2025-10-13 15:38:21.222421', '_unique_id': '83c4b6c0b9dc4869b4027cb110a5af8d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:38:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:38:21.223 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:38:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3714: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 62 op/s
Oct 13 15:38:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:38:21Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:4e:c0 192.168.0.238
Oct 13 15:38:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:38:21Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:4e:c0 192.168.0.238
Oct 13 15:38:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:38:22 standalone.localdomain ceph-mon[29756]: pgmap v3714: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 62 op/s
Oct 13 15:38:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:22.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:38:23
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr', 'manila_metadata', 'images', 'volumes', 'manila_data', 'backups', 'vms']
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3715: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 62 op/s
Oct 13 15:38:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13780 DF PROTO=TCP SPT=41728 DPT=9102 SEQ=4157299322 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC9E55F0000000001030307) 
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:38:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:38:24 standalone.localdomain ceph-mon[29756]: pgmap v3715: 177 pgs: 177 active+clean; 308 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 62 op/s
Oct 13 15:38:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13781 DF PROTO=TCP SPT=41728 DPT=9102 SEQ=4157299322 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC9E9760000000001030307) 
Oct 13 15:38:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:38:24 standalone.localdomain podman[522352]: 2025-10-13 15:38:24.872345155 +0000 UTC m=+0.136550083 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 13 15:38:24 standalone.localdomain podman[522352]: 2025-10-13 15:38:24.937138743 +0000 UTC m=+0.201343681 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 13 15:38:24 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:38:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:25.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3716: 177 pgs: 177 active+clean; 339 MiB data, 282 MiB used, 6.7 GiB / 7.0 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Oct 13 15:38:26 standalone.localdomain ceph-mon[29756]: pgmap v3716: 177 pgs: 177 active+clean; 339 MiB data, 282 MiB used, 6.7 GiB / 7.0 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Oct 13 15:38:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13782 DF PROTO=TCP SPT=41728 DPT=9102 SEQ=4157299322 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CC9F1760000000001030307) 
Oct 13 15:38:27 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:27.078 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:27 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:27.082 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/instance-id HTTP/1.0
Oct 13 15:38:27 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:27 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:27 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:27 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:27 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:27 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:27 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:38:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3717: 177 pgs: 177 active+clean; 339 MiB data, 282 MiB used, 6.7 GiB / 7.0 GiB avail; 170 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Oct 13 15:38:27 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:27.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:38:27 standalone.localdomain podman[522377]: 2025-10-13 15:38:27.868906503 +0000 UTC m=+0.097972353 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-type=git, config_id=edpm, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 13 15:38:27 standalone.localdomain podman[522377]: 2025-10-13 15:38:27.909459793 +0000 UTC m=+0.138525643 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1755695350, build-date=2025-08-20T13:12:41, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal)
Oct 13 15:38:27 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:38:28 standalone.localdomain ceph-mon[29756]: pgmap v3717: 177 pgs: 177 active+clean; 339 MiB data, 282 MiB used, 6.7 GiB / 7.0 GiB avail; 170 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Oct 13 15:38:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:28.347 2 DEBUG nova.compute.manager [None req-769cdf9f-0011-4b83-86cd-786c2b1debd7 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:38:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:28.350 2 INFO nova.compute.manager [None req-769cdf9f-0011-4b83-86cd-786c2b1debd7 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Retrieving diagnostics
Oct 13 15:38:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:28.792 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:38:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:28.793 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/instance-id HTTP/1.1" status: 200  len: 146 time: 1.7117164
Oct 13 15:38:28 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:54372 [13/Oct/2025:15:38:27.076] listener listener/metadata 0/0/0/1716/1716 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1"
Oct 13 15:38:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:28.802 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:28.803 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/public-keys HTTP/1.0
Oct 13 15:38:28 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:28 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:28 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:28 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:28 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:28 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:28 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006647822445561139 of space, bias 1.0, pg target 0.6647822445561139 quantized to 32 (current 32)
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.013121706623394751 of space, bias 1.0, pg target 1.312170662339475 quantized to 32 (current 32)
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7161098461055276 quantized to 32 (current 32)
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.29775832286432163 quantized to 32 (current 32)
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 13 15:38:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3718: 177 pgs: 177 active+clean; 339 MiB data, 282 MiB used, 6.7 GiB / 7.0 GiB avail; 170 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Oct 13 15:38:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:38:29 standalone.localdomain podman[522398]: 2025-10-13 15:38:29.813947145 +0000 UTC m=+0.082904458 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:38:29 standalone.localdomain podman[522398]: 2025-10-13 15:38:29.827172503 +0000 UTC m=+0.096129806 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct 13 15:38:29 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:38:30 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:30.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:30 standalone.localdomain ceph-mon[29756]: pgmap v3718: 177 pgs: 177 active+clean; 339 MiB data, 282 MiB used, 6.7 GiB / 7.0 GiB avail; 170 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Oct 13 15:38:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13783 DF PROTO=TCP SPT=41728 DPT=9102 SEQ=4157299322 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCA01360000000001030307) 
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:31.040 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:da:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'c6:b9:d4:9e:30:db'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:31.040 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 13 15:38:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:31.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:31 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:54376 [13/Oct/2025:15:38:28.801] listener listener/metadata 0/0/0/2355/2355 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-keys HTTP/1.1"
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:31.157 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/public-keys HTTP/1.1" status: 404  len: 297 time: 2.3548224
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:31.174 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:31.175 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/instance-id HTTP/1.0
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:31.291 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:31.292 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/instance-id HTTP/1.1" status: 200  len: 146 time: 0.1170914
Oct 13 15:38:31 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:54388 [13/Oct/2025:15:38:31.174] listener listener/metadata 0/0/0/118/118 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1"
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:31.298 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:31.299 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/ami-launch-index HTTP/1.0
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:31 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3719: 177 pgs: 177 active+clean; 339 MiB data, 282 MiB used, 6.7 GiB / 7.0 GiB avail; 163 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Oct 13 15:38:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.337 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.338 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/ami-launch-index HTTP/1.1" status: 200  len: 136 time: 1.0390799
Oct 13 15:38:32 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:54404 [13/Oct/2025:15:38:31.297] listener listener/metadata 0/0/0/1040/1040 200 120 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/ami-launch-index HTTP/1.1"
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.345 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.346 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/instance-type HTTP/1.0
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:32 standalone.localdomain ceph-mon[29756]: pgmap v3719: 177 pgs: 177 active+clean; 339 MiB data, 282 MiB used, 6.7 GiB / 7.0 GiB avail; 163 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.593 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:38:32 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:54418 [13/Oct/2025:15:38:32.345] listener listener/metadata 0/0/0/249/249 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-type HTTP/1.1"
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.594 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/instance-type HTTP/1.1" status: 200  len: 143 time: 0.2478817
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.601 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.602 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/local-ipv4 HTTP/1.0
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:32.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.849 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.850 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/local-ipv4 HTTP/1.1" status: 200  len: 149 time: 0.2480853
Oct 13 15:38:32 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:54432 [13/Oct/2025:15:38:32.601] listener listener/metadata 0/0/0/249/249 200 133 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-ipv4 HTTP/1.1"
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.856 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.857 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/public-ipv4 HTTP/1.0
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.973 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.974 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 150 time: 0.1174500
Oct 13 15:38:32 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:54446 [13/Oct/2025:15:38:32.855] listener listener/metadata 0/0/0/118/118 200 134 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1"
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.978 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:32.979 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/hostname HTTP/1.0
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:32 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:33.089 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:33.089 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/hostname HTTP/1.1" status: 200  len: 139 time: 0.1103675
Oct 13 15:38:33 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:54456 [13/Oct/2025:15:38:32.978] listener listener/metadata 0/0/0/111/111 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/hostname HTTP/1.1"
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:33.097 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:33.098 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/local-hostname HTTP/1.0
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:33.216 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:33.216 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/local-hostname HTTP/1.1" status: 200  len: 139 time: 0.1184347
Oct 13 15:38:33 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:54470 [13/Oct/2025:15:38:33.097] listener listener/metadata 0/0/0/119/119 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-hostname HTTP/1.1"
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:33.225 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:33.225 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/user-data HTTP/1.0
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:33.330 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/user-data HTTP/1.1" status: 404  len: 297 time: 0.1047237
Oct 13 15:38:33 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:37516 [13/Oct/2025:15:38:33.224] listener listener/metadata 0/0/0/105/105 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/user-data HTTP/1.1"
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:33.346 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:33.346 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping HTTP/1.0
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3720: 177 pgs: 177 active+clean; 339 MiB data, 282 MiB used, 6.7 GiB / 7.0 GiB avail; 163 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Oct 13 15:38:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:38:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:38:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:38:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:38:33 standalone.localdomain systemd[1]: tmp-crun.zSzdpa.mount: Deactivated successfully.
Oct 13 15:38:33 standalone.localdomain podman[522417]: 2025-10-13 15:38:33.850819715 +0000 UTC m=+0.117167154 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-object, 
io.openshift.expose-services=, batch=17.1_20250721.1, distribution-scope=public, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 15:38:33 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:37532 [13/Oct/2025:15:38:33.345] listener listener/metadata 0/0/0/545/545 200 144 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping HTTP/1.1"
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:33.890 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:33.891 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/block-device-mapping HTTP/1.1" status: 200  len: 160 time: 0.5446985
Oct 13 15:38:33 standalone.localdomain podman[522430]: 2025-10-13 15:38:33.897055782 +0000 UTC m=+0.145939682 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:33.901 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:33.903 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.0
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:33 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:33 standalone.localdomain podman[522430]: 2025-10-13 15:38:33.910274279 +0000 UTC m=+0.159158199 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:38:33 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:38:34 standalone.localdomain podman[522418]: 2025-10-13 15:38:34.00306471 +0000 UTC m=+0.262013960 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, version=17.1.9, container_name=swift_container_server, description=Red Hat OpenStack 
Platform 17.1 swift-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.010 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:38:34 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:37540 [13/Oct/2025:15:38:33.901] listener listener/metadata 0/0/0/110/110 200 122 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1"
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.011 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1" status: 200  len: 138 time: 0.1079736
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.015 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.015 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/ebs0 HTTP/1.0
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:34 standalone.localdomain podman[522424]: 2025-10-13 15:38:34.047843191 +0000 UTC m=+0.300091405 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, tcib_managed=true, 
com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, distribution-scope=public, build-date=2025-07-21T16:11:22, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, version=17.1.9, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 15:38:34 standalone.localdomain podman[522417]: 2025-10-13 15:38:34.08506781 +0000 UTC m=+0.351415329 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, name=rhosp17/openstack-swift-object, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, batch=17.1_20250721.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team)
Oct 13 15:38:34 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.130 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:38:34 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:37544 [13/Oct/2025:15:38:34.014] listener listener/metadata 0/0/0/115/115 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ebs0 HTTP/1.1"
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.131 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/block-device-mapping/ebs0 HTTP/1.1" status: 200  len: 143 time: 0.1151943
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.134 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.134 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.0
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:34 standalone.localdomain podman[522418]: 2025-10-13 15:38:34.214763879 +0000 UTC m=+0.473713099 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, name=rhosp17/openstack-swift-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, batch=17.1_20250721.1, version=17.1.9, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_container_server, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:38:34 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:38:34 standalone.localdomain podman[522424]: 2025-10-13 15:38:34.234934401 +0000 UTC m=+0.487182525 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-swift-account-container, 
io.openshift.expose-services=, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, build-date=2025-07-21T16:11:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., vcs-type=git)
Oct 13 15:38:34 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.286 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:38:34 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:37556 [13/Oct/2025:15:38:34.133] listener listener/metadata 0/0/0/153/153 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.1"
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.287 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.1" status: 200  len: 143 time: 0.1528759
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.294 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.295 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.0
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.433 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.433 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1" status: 200  len: 143 time: 0.1380689
Oct 13 15:38:34 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:37562 [13/Oct/2025:15:38:34.294] listener listener/metadata 0/0/0/139/139 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1"
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.437 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.438 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/public-hostname HTTP/1.0
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:34 standalone.localdomain ceph-mon[29756]: pgmap v3720: 177 pgs: 177 active+clean; 339 MiB data, 282 MiB used, 6.7 GiB / 7.0 GiB avail; 163 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.937 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:38:34 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:37574 [13/Oct/2025:15:38:34.437] listener listener/metadata 0/0/0/500/500 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-hostname HTTP/1.1"
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.938 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/public-hostname HTTP/1.1" status: 200  len: 139 time: 0.4993889
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.942 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:34.943 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.0
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 192.168.0.238
Oct 13 15:38:34 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: 0c455abd-28d4-47e7-a254-e50de0526def __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:38:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:35.043 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b9e3f7-f5c9-4d48-9eca-66995f72d494, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:38:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:35.185 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:38:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:38:35.186 378920 INFO eventlet.wsgi.server [-] 192.168.0.238,<local> "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1" status: 200  len: 139 time: 0.2431893
Oct 13 15:38:35 standalone.localdomain haproxy-metadata-proxy-0c455abd-28d4-47e7-a254-e50de0526def[500498]: 192.168.0.238:37588 [13/Oct/2025:15:38:34.942] listener listener/metadata 0/0/0/244/244 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1"
Oct 13 15:38:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:35.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3721: 177 pgs: 177 active+clean; 339 MiB data, 282 MiB used, 6.7 GiB / 7.0 GiB avail; 163 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Oct 13 15:38:36 standalone.localdomain ceph-mon[29756]: pgmap v3721: 177 pgs: 177 active+clean; 339 MiB data, 282 MiB used, 6.7 GiB / 7.0 GiB avail; 163 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Oct 13 15:38:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:38:36 standalone.localdomain podman[522524]: 2025-10-13 15:38:36.834565829 +0000 UTC m=+0.094611929 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:38:36 standalone.localdomain podman[522524]: 2025-10-13 15:38:36.850006596 +0000 UTC m=+0.110052696 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:38:36 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:38:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:38:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3722: 177 pgs: 177 active+clean; 339 MiB data, 282 MiB used, 6.7 GiB / 7.0 GiB avail; 682 B/s rd, 25 KiB/s wr, 0 op/s
Oct 13 15:38:37 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:38:37Z|00066|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 13 15:38:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:37.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:38 standalone.localdomain ceph-mon[29756]: pgmap v3722: 177 pgs: 177 active+clean; 339 MiB data, 282 MiB used, 6.7 GiB / 7.0 GiB avail; 682 B/s rd, 25 KiB/s wr, 0 op/s
Oct 13 15:38:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3723: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 188 KiB/s rd, 128 KiB/s wr, 40 op/s
Oct 13 15:38:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:38:39 standalone.localdomain podman[522543]: 2025-10-13 15:38:39.828786056 +0000 UTC m=+0.086706875 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 13 15:38:39 standalone.localdomain podman[522543]: 2025-10-13 15:38:39.866815589 +0000 UTC m=+0.124736428 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:38:39 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:38:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:40.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:40 standalone.localdomain ceph-mon[29756]: pgmap v3723: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 188 KiB/s rd, 128 KiB/s wr, 40 op/s
Oct 13 15:38:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3724: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 187 KiB/s rd, 112 KiB/s wr, 40 op/s
Oct 13 15:38:41 standalone.localdomain podman[467099]: time="2025-10-13T15:38:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:38:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:38:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:38:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:38:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49114 "" "Go-http-client/1.1"
Oct 13 15:38:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:38:42 standalone.localdomain ceph-mon[29756]: pgmap v3724: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 187 KiB/s rd, 112 KiB/s wr, 40 op/s
Oct 13 15:38:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:38:42 standalone.localdomain systemd[1]: tmp-crun.xsn6ka.mount: Deactivated successfully.
Oct 13 15:38:42 standalone.localdomain podman[522562]: 2025-10-13 15:38:42.835189969 +0000 UTC m=+0.097144287 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:38:42 standalone.localdomain podman[522562]: 2025-10-13 15:38:42.850915974 +0000 UTC m=+0.112870302 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:38:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:42.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:42 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:38:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:38:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:38:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:38:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:38:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:38:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:38:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:38:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:38:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:38:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:38:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:38:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:38:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3725: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 187 KiB/s rd, 112 KiB/s wr, 40 op/s
Oct 13 15:38:44 standalone.localdomain ceph-mon[29756]: pgmap v3725: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 187 KiB/s rd, 112 KiB/s wr, 40 op/s
Oct 13 15:38:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:45.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3726: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 187 KiB/s rd, 123 KiB/s wr, 40 op/s
Oct 13 15:38:46 standalone.localdomain sudo[522587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:38:46 standalone.localdomain sudo[522587]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:38:46 standalone.localdomain sudo[522587]: pam_unix(sudo:session): session closed for user root
Oct 13 15:38:46 standalone.localdomain sudo[522605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:38:46 standalone.localdomain sudo[522605]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:38:46 standalone.localdomain ceph-mon[29756]: pgmap v3726: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 187 KiB/s rd, 123 KiB/s wr, 40 op/s
Oct 13 15:38:46 standalone.localdomain sudo[522605]: pam_unix(sudo:session): session closed for user root
Oct 13 15:38:46 standalone.localdomain sudo[522655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:38:46 standalone.localdomain sudo[522655]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:38:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:38:46 standalone.localdomain sudo[522655]: pam_unix(sudo:session): session closed for user root
Oct 13 15:38:47 standalone.localdomain sudo[522679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Oct 13 15:38:47 standalone.localdomain sudo[522679]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:38:47 standalone.localdomain podman[522672]: 2025-10-13 15:38:47.074608275 +0000 UTC m=+0.086042175 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 15:38:47 standalone.localdomain podman[522672]: 2025-10-13 15:38:47.104586419 +0000 UTC m=+0.116020279 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:38:47 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:38:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:38:47 standalone.localdomain sudo[522679]: pam_unix(sudo:session): session closed for user root
Oct 13 15:38:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 15:38:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:38:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 15:38:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:38:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:38:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:38:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:38:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:38:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:38:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:38:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:38:47 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:38:47 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev b0354ae7-e0ac-4153-8f7e-4ddb54f85aa8 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:38:47 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev b0354ae7-e0ac-4153-8f7e-4ddb54f85aa8 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:38:47 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event b0354ae7-e0ac-4153-8f7e-4ddb54f85aa8 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:38:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3727: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 187 KiB/s rd, 114 KiB/s wr, 40 op/s
Oct 13 15:38:47 standalone.localdomain sudo[522729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:38:47 standalone.localdomain sudo[522729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:38:47 standalone.localdomain sudo[522729]: pam_unix(sudo:session): session closed for user root
Oct 13 15:38:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:47.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:38:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:38:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:38:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:38:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:38:48 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:38:48 standalone.localdomain ceph-mon[29756]: pgmap v3727: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 187 KiB/s rd, 114 KiB/s wr, 40 op/s
Oct 13 15:38:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3728: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 187 KiB/s rd, 114 KiB/s wr, 40 op/s
Oct 13 15:38:50 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:38:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:38:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:38:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:50.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:51 standalone.localdomain ceph-mon[29756]: pgmap v3728: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 187 KiB/s rd, 114 KiB/s wr, 40 op/s
Oct 13 15:38:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:38:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3729: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 11 KiB/s wr, 0 op/s
Oct 13 15:38:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:38:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:52.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:53 standalone.localdomain ceph-mon[29756]: pgmap v3729: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 11 KiB/s wr, 0 op/s
Oct 13 15:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:38:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:38:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3730: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 11 KiB/s wr, 0 op/s
Oct 13 15:38:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6164 DF PROTO=TCP SPT=60678 DPT=9102 SEQ=3664566498 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCA5A8F0000000001030307) 
Oct 13 15:38:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6165 DF PROTO=TCP SPT=60678 DPT=9102 SEQ=3664566498 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCA5EB60000000001030307) 
Oct 13 15:38:55 standalone.localdomain ceph-mon[29756]: pgmap v3730: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 11 KiB/s wr, 0 op/s
Oct 13 15:38:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:55.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:55 standalone.localdomain snmpd[110332]: empty variable list in _query
Oct 13 15:38:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3731: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 11 KiB/s wr, 0 op/s
Oct 13 15:38:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:38:55 standalone.localdomain podman[522747]: 2025-10-13 15:38:55.830255654 +0000 UTC m=+0.095419104 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:38:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:55.869 2 DEBUG oslo_concurrency.lockutils [None req-008cf37e-1c35-4ef2-bca4-06d814a4cfd4 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Acquiring lock "54a46fec-332e-42f9-83ed-88e763d13f63" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:38:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:55.870 2 DEBUG oslo_concurrency.lockutils [None req-008cf37e-1c35-4ef2-bca4-06d814a4cfd4 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:38:55 standalone.localdomain podman[522747]: 2025-10-13 15:38:55.873984352 +0000 UTC m=+0.139147782 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:38:55 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:38:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:55.891 2 INFO nova.compute.manager [None req-008cf37e-1c35-4ef2-bca4-06d814a4cfd4 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Detaching volume 0f364d66-0f07-4584-a6a7-fbb971c71200
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:56.037 2 INFO nova.virt.block_device [None req-008cf37e-1c35-4ef2-bca4-06d814a4cfd4 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Attempting to driver detach volume 0f364d66-0f07-4584-a6a7-fbb971c71200 from mountpoint /dev/vdc
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:56.053 2 DEBUG nova.virt.libvirt.driver [None req-008cf37e-1c35-4ef2-bca4-06d814a4cfd4 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Attempting to detach device vdc from instance 54a46fec-332e-42f9-83ed-88e763d13f63 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:56.054 2 DEBUG nova.virt.libvirt.guest [None req-008cf37e-1c35-4ef2-bca4-06d814a4cfd4 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] detach device xml: <disk type="network" device="disk">
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:   <source protocol="rbd" name="volumes/volume-0f364d66-0f07-4584-a6a7-fbb971c71200">
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:     <host name="172.18.0.100" port="6789"/>
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:   </source>
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:   <target dev="vdc" bus="virtio"/>
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:   <serial>0f364d66-0f07-4584-a6a7-fbb971c71200</serial>
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:   <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]: </disk>
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:56.067 2 INFO nova.virt.libvirt.driver [None req-008cf37e-1c35-4ef2-bca4-06d814a4cfd4 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Successfully detached device vdc from instance 54a46fec-332e-42f9-83ed-88e763d13f63 from the persistent domain config.
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:56.067 2 DEBUG nova.virt.libvirt.driver [None req-008cf37e-1c35-4ef2-bca4-06d814a4cfd4 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 54a46fec-332e-42f9-83ed-88e763d13f63 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:56.068 2 DEBUG nova.virt.libvirt.guest [None req-008cf37e-1c35-4ef2-bca4-06d814a4cfd4 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] detach device xml: <disk type="network" device="disk">
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:   <source protocol="rbd" name="volumes/volume-0f364d66-0f07-4584-a6a7-fbb971c71200">
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:     <host name="172.18.0.100" port="6789"/>
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:   </source>
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:   <target dev="vdc" bus="virtio"/>
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:   <serial>0f364d66-0f07-4584-a6a7-fbb971c71200</serial>
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:   <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]: </disk>
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:56.227 2 DEBUG nova.virt.libvirt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Received event <DeviceRemovedEvent: 1760369936.2270255, 54a46fec-332e-42f9-83ed-88e763d13f63 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:56.230 2 DEBUG nova.virt.libvirt.driver [None req-008cf37e-1c35-4ef2-bca4-06d814a4cfd4 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 54a46fec-332e-42f9-83ed-88e763d13f63 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:56.233 2 INFO nova.virt.libvirt.driver [None req-008cf37e-1c35-4ef2-bca4-06d814a4cfd4 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Successfully detached device vdc from instance 54a46fec-332e-42f9-83ed-88e763d13f63 from the live domain config.
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:56.371 2 DEBUG nova.objects.instance [None req-008cf37e-1c35-4ef2-bca4-06d814a4cfd4 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lazy-loading 'flavor' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:38:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:56.406 2 DEBUG oslo_concurrency.lockutils [None req-008cf37e-1c35-4ef2-bca4-06d814a4cfd4 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:38:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6166 DF PROTO=TCP SPT=60678 DPT=9102 SEQ=3664566498 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCA66B60000000001030307) 
Oct 13 15:38:57 standalone.localdomain ceph-mon[29756]: pgmap v3731: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 11 KiB/s wr, 0 op/s
Oct 13 15:38:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:38:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3732: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:38:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:38:57.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:38:58 standalone.localdomain ceph-mon[29756]: pgmap v3732: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:38:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:38:58 standalone.localdomain podman[522774]: 2025-10-13 15:38:58.824615304 +0000 UTC m=+0.086374604 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1755695350, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_id=edpm, io.openshift.tags=minimal rhel9)
Oct 13 15:38:58 standalone.localdomain podman[522774]: 2025-10-13 15:38:58.842891427 +0000 UTC m=+0.104650707 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Oct 13 15:38:58 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:38:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3733: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 170 B/s rd, 9.7 KiB/s wr, 1 op/s
Oct 13 15:39:00 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:00.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:00 standalone.localdomain ceph-mon[29756]: pgmap v3733: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 170 B/s rd, 9.7 KiB/s wr, 1 op/s
Oct 13 15:39:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6167 DF PROTO=TCP SPT=60678 DPT=9102 SEQ=3664566498 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCA76760000000001030307) 
Oct 13 15:39:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:39:00 standalone.localdomain podman[522795]: 2025-10-13 15:39:00.824656802 +0000 UTC m=+0.088623135 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:39:00 standalone.localdomain podman[522795]: 2025-10-13 15:39:00.866053598 +0000 UTC m=+0.130019931 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:39:00 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:39:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3734: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 170 B/s rd, 9.7 KiB/s wr, 1 op/s
Oct 13 15:39:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:39:02 standalone.localdomain ceph-mon[29756]: pgmap v3734: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 170 B/s rd, 9.7 KiB/s wr, 1 op/s
Oct 13 15:39:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:02.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3735: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 170 B/s rd, 9.7 KiB/s wr, 1 op/s
Oct 13 15:39:04 standalone.localdomain ceph-mon[29756]: pgmap v3735: 177 pgs: 177 active+clean; 351 MiB data, 283 MiB used, 6.7 GiB / 7.0 GiB avail; 170 B/s rd, 9.7 KiB/s wr, 1 op/s
Oct 13 15:39:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:39:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:39:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:39:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:39:04 standalone.localdomain systemd[1]: tmp-crun.FIs9zK.mount: Deactivated successfully.
Oct 13 15:39:04 standalone.localdomain podman[522814]: 2025-10-13 15:39:04.852647187 +0000 UTC m=+0.117297077 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-object-container, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, name=rhosp17/openstack-swift-object, architecture=x86_64, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, container_name=swift_object_server, release=1, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9)
Oct 13 15:39:04 standalone.localdomain podman[522815]: 2025-10-13 15:39:04.910406849 +0000 UTC m=+0.167928640 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-swift-container, tcib_managed=true, distribution-scope=public, build-date=2025-07-21T15:54:32, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, release=1)
Oct 13 15:39:04 standalone.localdomain podman[522821]: 2025-10-13 15:39:04.883262662 +0000 UTC m=+0.136024586 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.9, container_name=swift_account_server, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64)
Oct 13 15:39:04 standalone.localdomain podman[522822]: 2025-10-13 15:39:04.983115421 +0000 UTC m=+0.229428016 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:39:05 standalone.localdomain podman[522822]: 2025-10-13 15:39:05.023968651 +0000 UTC m=+0.270281286 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:39:05 standalone.localdomain podman[522814]: 2025-10-13 15:39:05.041038717 +0000 UTC m=+0.305688677 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, architecture=x86_64, release=1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28)
Oct 13 15:39:05 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:39:05 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:39:05 standalone.localdomain podman[522821]: 2025-10-13 15:39:05.110803009 +0000 UTC m=+0.363564923 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, name=rhosp17/openstack-swift-account, architecture=x86_64, tcib_managed=true, container_name=swift_account_server, release=1, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-swift-account-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12)
Oct 13 15:39:05 standalone.localdomain podman[522815]: 2025-10-13 15:39:05.12282774 +0000 UTC m=+0.380349521 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, build-date=2025-07-21T15:54:32, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, vcs-type=git, container_name=swift_container_server, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., release=1, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=17.1.9, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 15:39:05 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:39:05 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:39:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:05.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3736: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 3.8 KiB/s rd, 9.7 KiB/s wr, 7 op/s
Oct 13 15:39:06 standalone.localdomain ceph-mon[29756]: pgmap v3736: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 3.8 KiB/s rd, 9.7 KiB/s wr, 7 op/s
Oct 13 15:39:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:39:06.958 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:39:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:39:06.959 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:39:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:39:06.960 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:39:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:07.312 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:39:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:07.313 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:39:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:39:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:07.340 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:39:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:07.341 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:39:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:39:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3737: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 3.8 KiB/s rd, 9.7 KiB/s wr, 7 op/s
Oct 13 15:39:07 standalone.localdomain podman[522920]: 2025-10-13 15:39:07.534693526 +0000 UTC m=+0.084822286 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:39:07 standalone.localdomain podman[522920]: 2025-10-13 15:39:07.571841582 +0000 UTC m=+0.121970362 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:39:07 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:08 standalone.localdomain ceph-mon[29756]: pgmap v3737: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 3.8 KiB/s rd, 9.7 KiB/s wr, 7 op/s
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.373 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.373 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.373 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.789 2 DEBUG oslo_concurrency.lockutils [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Acquiring lock "54a46fec-332e-42f9-83ed-88e763d13f63" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.790 2 DEBUG oslo_concurrency.lockutils [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.806 2 DEBUG nova.objects.instance [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lazy-loading 'flavor' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.842 2 DEBUG oslo_concurrency.lockutils [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.869 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.886 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.887 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.887 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.888 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.888 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.889 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.889 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.890 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.890 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.890 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.905 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.906 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.907 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.907 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.908 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.966 2 DEBUG oslo_concurrency.lockutils [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Acquiring lock "54a46fec-332e-42f9-83ed-88e763d13f63" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.968 2 DEBUG oslo_concurrency.lockutils [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:39:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:08.968 2 INFO nova.compute.manager [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Attaching volume 0f364d66-0f07-4584-a6a7-fbb971c71200 to /dev/vdc
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.037 2 DEBUG os_brick.utils [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'standalone.localdomain', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.039 2 INFO oslo.privsep.daemon [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp9q8sgl8f/privsep.sock']
Oct 13 15:39:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:39:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1999770300' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.388 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:39:09 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1999770300' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.485 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.486 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.493 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.493 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:39:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3738: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 3.8 KiB/s rd, 9.7 KiB/s wr, 7 op/s
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.716 2 INFO oslo.privsep.daemon [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Spawned new privsep daemon via rootwrap
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.590 202 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.593 202 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.595 202 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.595 202 INFO oslo.privsep.daemon [-] privsep daemon running as pid 202
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.720 202 DEBUG oslo.privsep.daemon [-] privsep: reply[08da1d22-dc74-43eb-b06c-3f7935caf9a2]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.728 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.729 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9329MB free_disk=6.9495849609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.730 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.730 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.813 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.814 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.815 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.815 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.816 202 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.830 202 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.830 202 DEBUG oslo.privsep.daemon [-] privsep: reply[efe8dad4-67ba-40b3-b9d0-f5d32e6ddf02]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.832 202 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.841 202 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.841 202 DEBUG oslo.privsep.daemon [-] privsep: reply[3cc31650-8c9b-438c-b5b5-a742bb69558c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b28b134257b\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.848 202 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.856 202 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.856 202 DEBUG oslo.privsep.daemon [-] privsep: reply[fc5b97a4-9c72-441a-a401-1f3470e23e4a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.859 202 DEBUG oslo.privsep.daemon [-] privsep: reply[316eb944-2ede-4c21-8b3b-154fc03044b3]: (4, '6635e8bc-8f5a-4138-a382-e7fa61dbcec1') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.860 2 DEBUG oslo_concurrency.processutils [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.888 2 DEBUG oslo_concurrency.processutils [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.893 2 DEBUG os_brick.initiator.connectors.lightos [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.894 2 DEBUG os_brick.initiator.connectors.lightos [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.895 2 DEBUG os_brick.initiator.connectors.lightos [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:6635e8bc-8f5a-4138-a382-e7fa61dbcec1 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.896 2 DEBUG os_brick.utils [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] <== get_connector_properties: return (857ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'standalone.localdomain', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b28b134257b', 'do_local_attach': False, 'nvme_hostid': '6635e8bc-8f5a-4138-a382-e7fa61dbcec1', 'system uuid': '6635e8bc-8f5a-4138-a382-e7fa61dbcec1', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:6635e8bc-8f5a-4138-a382-e7fa61dbcec1', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.897 2 DEBUG nova.virt.block_device [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating existing volume attachment record: 26235109-e792-4690-ac16-c54222c55de1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 13 15:39:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:09.904 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:39:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:39:10 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/514150688' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:10.307 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:10.314 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:10.336 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:10.357 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:10.357 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:39:10 standalone.localdomain ceph-mon[29756]: pgmap v3738: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 3.8 KiB/s rd, 9.7 KiB/s wr, 7 op/s
Oct 13 15:39:10 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/514150688' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:10.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 13 15:39:10 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3721118849' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:10.634 2 DEBUG nova.objects.instance [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lazy-loading 'flavor' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:10.656 2 DEBUG nova.virt.libvirt.driver [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Attempting to attach volume 0f364d66-0f07-4584-a6a7-fbb971c71200 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:10.659 2 DEBUG nova.virt.libvirt.guest [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] attach device xml: <disk type="network" device="disk">
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]:   <source protocol="rbd" name="volumes/volume-0f364d66-0f07-4584-a6a7-fbb971c71200">
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]:     <host name="172.18.0.100" port="6789"/>
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]:   </source>
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]:   <auth username="openstack">
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]:     <secret type="ceph" uuid="627e7f45-65aa-56de-94df-66eaee84a56e"/>
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]:   </auth>
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]:   <target dev="vdc" bus="virtio"/>
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]:   <serial>0f364d66-0f07-4584-a6a7-fbb971c71200</serial>
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]: </disk>
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 13 15:39:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:10.788 2 DEBUG nova.virt.libvirt.driver [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:10.789 2 DEBUG nova.virt.libvirt.driver [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:10.789 2 DEBUG nova.virt.libvirt.driver [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 13 15:39:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:10.790 2 DEBUG nova.virt.libvirt.driver [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] No VIF found with MAC fa:16:3e:15:4e:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 13 15:39:10 standalone.localdomain podman[523018]: 2025-10-13 15:39:10.84106752 +0000 UTC m=+0.106740513 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:39:10 standalone.localdomain podman[523018]: 2025-10-13 15:39:10.850887933 +0000 UTC m=+0.116560846 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 15:39:10 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:39:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:11.120 2 DEBUG oslo_concurrency.lockutils [None req-49ba115f-09cc-4226-874e-4b69e3d51353 3d9eeef137fc40c78332936114fd7ee4 e44641a80bcb466cb3dd688e48b72d8e - - default default] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:39:11 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3721118849' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 15:39:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3739: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 3.7 KiB/s rd, 85 B/s wr, 6 op/s
Oct 13 15:39:11 standalone.localdomain podman[467099]: time="2025-10-13T15:39:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:39:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:39:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:39:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:39:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49130 "" "Go-http-client/1.1"
Oct 13 15:39:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:39:12 standalone.localdomain ceph-mon[29756]: pgmap v3739: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 3.7 KiB/s rd, 85 B/s wr, 6 op/s
Oct 13 15:39:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:39:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:39:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:39:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:39:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:39:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:39:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:39:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:39:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:39:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:39:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:13.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3740: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 3.7 KiB/s rd, 85 B/s wr, 6 op/s
Oct 13 15:39:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:39:13 standalone.localdomain podman[523036]: 2025-10-13 15:39:13.827553509 +0000 UTC m=+0.084173647 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:39:13 standalone.localdomain podman[523036]: 2025-10-13 15:39:13.864897261 +0000 UTC m=+0.121517319 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:39:13 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:39:14 standalone.localdomain ceph-mon[29756]: pgmap v3740: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 3.7 KiB/s rd, 85 B/s wr, 6 op/s
Oct 13 15:39:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:15.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3741: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s rd, 170 B/s wr, 10 op/s
Oct 13 15:39:16 standalone.localdomain ceph-mon[29756]: pgmap v3741: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s rd, 170 B/s wr, 10 op/s
Oct 13 15:39:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:39:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3742: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 6.3 KiB/s rd, 85 B/s wr, 4 op/s
Oct 13 15:39:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:39:17 standalone.localdomain podman[523059]: 2025-10-13 15:39:17.815231373 +0000 UTC m=+0.084320482 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 13 15:39:17 standalone.localdomain podman[523059]: 2025-10-13 15:39:17.820928338 +0000 UTC m=+0.090017467 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 13 15:39:17 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:39:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:18.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:18 standalone.localdomain ceph-mon[29756]: pgmap v3742: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 6.3 KiB/s rd, 85 B/s wr, 4 op/s
Oct 13 15:39:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:39:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2897308499' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:39:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:39:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2897308499' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:39:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2897308499' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:39:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2897308499' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:39:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3743: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 6.3 KiB/s rd, 85 B/s wr, 4 op/s
Oct 13 15:39:20 standalone.localdomain ceph-mon[29756]: pgmap v3743: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 6.3 KiB/s rd, 85 B/s wr, 4 op/s
Oct 13 15:39:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:20.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3744: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 6.3 KiB/s rd, 85 B/s wr, 4 op/s
Oct 13 15:39:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:39:22 standalone.localdomain ceph-mon[29756]: pgmap v3744: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 6.3 KiB/s rd, 85 B/s wr, 4 op/s
Oct 13 15:39:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:23.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:39:23
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['images', 'volumes', '.mgr', 'manila_metadata', 'manila_data', 'vms', 'backups']
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3745: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 6.3 KiB/s rd, 85 B/s wr, 4 op/s
Oct 13 15:39:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37509 DF PROTO=TCP SPT=37300 DPT=9102 SEQ=4066418634 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCACFBF0000000001030307) 
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:39:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:39:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37510 DF PROTO=TCP SPT=37300 DPT=9102 SEQ=4066418634 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCAD3B60000000001030307) 
Oct 13 15:39:24 standalone.localdomain ceph-mon[29756]: pgmap v3745: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 6.3 KiB/s rd, 85 B/s wr, 4 op/s
Oct 13 15:39:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3746: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 6.3 KiB/s rd, 85 B/s wr, 4 op/s
Oct 13 15:39:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:25.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:26 standalone.localdomain ceph-mon[29756]: pgmap v3746: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 6.3 KiB/s rd, 85 B/s wr, 4 op/s
Oct 13 15:39:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37511 DF PROTO=TCP SPT=37300 DPT=9102 SEQ=4066418634 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCADBB60000000001030307) 
Oct 13 15:39:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:39:26 standalone.localdomain podman[523077]: 2025-10-13 15:39:26.827586906 +0000 UTC m=+0.087950533 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:39:26 standalone.localdomain podman[523077]: 2025-10-13 15:39:26.890204367 +0000 UTC m=+0.150567984 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:39:26 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:39:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:39:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3747: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:28.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:28 standalone.localdomain ceph-mon[29756]: pgmap v3747: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006647822445561139 of space, bias 1.0, pg target 0.6647822445561139 quantized to 32 (current 32)
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:39:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3748: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:39:29 standalone.localdomain podman[523102]: 2025-10-13 15:39:29.820722709 +0000 UTC m=+0.092369129 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Oct 13 15:39:29 standalone.localdomain podman[523102]: 2025-10-13 15:39:29.833878325 +0000 UTC m=+0.105524715 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.component=ubi9-minimal-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 13 15:39:29 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:39:30 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:30.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:30 standalone.localdomain ceph-mon[29756]: pgmap v3748: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37512 DF PROTO=TCP SPT=37300 DPT=9102 SEQ=4066418634 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCAEB760000000001030307) 
Oct 13 15:39:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3749: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:39:31 standalone.localdomain podman[523122]: 2025-10-13 15:39:31.830178449 +0000 UTC m=+0.095463675 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:39:31 standalone.localdomain podman[523122]: 2025-10-13 15:39:31.854134577 +0000 UTC m=+0.119419803 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:39:31 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:39:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:39:32 standalone.localdomain ceph-mon[29756]: pgmap v3749: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:33.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3750: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:34 standalone.localdomain ceph-mon[29756]: pgmap v3750: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3751: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:35.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:39:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:39:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:39:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:39:35 standalone.localdomain podman[523141]: 2025-10-13 15:39:35.835959931 +0000 UTC m=+0.092043840 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, architecture=x86_64, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, version=17.1.9)
Oct 13 15:39:35 standalone.localdomain podman[523140]: 2025-10-13 15:39:35.873165028 +0000 UTC m=+0.136283975 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, tcib_managed=true, build-date=2025-07-21T14:56:28, distribution-scope=public, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, name=rhosp17/openstack-swift-object, batch=17.1_20250721.1, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:39:35 standalone.localdomain podman[523147]: 2025-10-13 15:39:35.891529233 +0000 UTC m=+0.141481353 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20250721.1, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.33.12, container_name=swift_account_server)
Oct 13 15:39:35 standalone.localdomain podman[523152]: 2025-10-13 15:39:35.908384953 +0000 UTC m=+0.150906514 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:39:35 standalone.localdomain podman[523152]: 2025-10-13 15:39:35.919844047 +0000 UTC m=+0.162365658 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:39:35 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:39:36 standalone.localdomain podman[523140]: 2025-10-13 15:39:36.049777994 +0000 UTC m=+0.312896971 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.9, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:39:36 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:39:36 standalone.localdomain podman[523141]: 2025-10-13 15:39:36.092078398 +0000 UTC m=+0.348162237 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, release=1, version=17.1.9, container_name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, batch=17.1_20250721.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 15:39:36 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:39:36 standalone.localdomain podman[523147]: 2025-10-13 15:39:36.143860936 +0000 UTC m=+0.393813106 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.33.12, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, io.openshift.expose-services=, vcs-type=git, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, container_name=swift_account_server, release=1, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-account-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 15:39:36 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:39:36 standalone.localdomain ceph-mon[29756]: pgmap v3751: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:39:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3752: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:39:37 standalone.localdomain podman[523242]: 2025-10-13 15:39:37.805321653 +0000 UTC m=+0.073532820 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:39:37 standalone.localdomain podman[523242]: 2025-10-13 15:39:37.813969909 +0000 UTC m=+0.082181096 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:39:37 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:39:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:38.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:38 standalone.localdomain ceph-mon[29756]: pgmap v3752: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3753: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:40 standalone.localdomain ceph-mon[29756]: pgmap v3753: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:40.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3754: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:41 standalone.localdomain podman[467099]: time="2025-10-13T15:39:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:39:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:39:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:39:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:39:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:39:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49121 "" "Go-http-client/1.1"
Oct 13 15:39:41 standalone.localdomain podman[523260]: 2025-10-13 15:39:41.843724279 +0000 UTC m=+0.098469818 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:39:41 standalone.localdomain podman[523260]: 2025-10-13 15:39:41.853225832 +0000 UTC m=+0.107971351 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:39:41 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:39:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:39:42 standalone.localdomain ceph-mon[29756]: pgmap v3754: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:39:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:39:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:39:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:39:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:39:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:39:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:39:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:39:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:39:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:39:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:39:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:39:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:43.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3755: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:44 standalone.localdomain ceph-mon[29756]: pgmap v3755: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:39:44 standalone.localdomain systemd[1]: tmp-crun.NCJHUm.mount: Deactivated successfully.
Oct 13 15:39:44 standalone.localdomain podman[523279]: 2025-10-13 15:39:44.8111736 +0000 UTC m=+0.079133562 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:39:44 standalone.localdomain podman[523279]: 2025-10-13 15:39:44.84944095 +0000 UTC m=+0.117400922 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:39:44 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:39:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3756: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:45.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:46 standalone.localdomain ceph-mon[29756]: pgmap v3756: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:39:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3757: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:47 standalone.localdomain sudo[523302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:39:47 standalone.localdomain sudo[523302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:39:47 standalone.localdomain sudo[523302]: pam_unix(sudo:session): session closed for user root
Oct 13 15:39:47 standalone.localdomain sudo[523320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:39:47 standalone.localdomain sudo[523320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:39:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:48.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:48 standalone.localdomain sudo[523320]: pam_unix(sudo:session): session closed for user root
Oct 13 15:39:48 standalone.localdomain ceph-mon[29756]: pgmap v3757: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:48 standalone.localdomain sudo[523371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:39:48 standalone.localdomain sudo[523371]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:39:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:39:48 standalone.localdomain sudo[523371]: pam_unix(sudo:session): session closed for user root
Oct 13 15:39:48 standalone.localdomain sudo[523395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 627e7f45-65aa-56de-94df-66eaee84a56e -- inventory --format=json-pretty --filter-for-batch
Oct 13 15:39:48 standalone.localdomain sudo[523395]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:39:48 standalone.localdomain podman[523389]: 2025-10-13 15:39:48.58212931 +0000 UTC m=+0.091092611 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 15:39:48 standalone.localdomain podman[523389]: 2025-10-13 15:39:48.612776275 +0000 UTC m=+0.121739546 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 13 15:39:48 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:39:49 standalone.localdomain podman[523466]: 
Oct 13 15:39:49 standalone.localdomain podman[523466]: 2025-10-13 15:39:49.17977531 +0000 UTC m=+0.078304306 container create e8005d00aa77d559324cbd73bf8526e3fa21fb824629199529b06f7d465f149c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_maxwell, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, CEPH_POINT_RELEASE=)
Oct 13 15:39:49 standalone.localdomain systemd[1]: Started libpod-conmon-e8005d00aa77d559324cbd73bf8526e3fa21fb824629199529b06f7d465f149c.scope.
Oct 13 15:39:49 standalone.localdomain podman[523466]: 2025-10-13 15:39:49.139655843 +0000 UTC m=+0.038184889 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 15:39:49 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:39:49 standalone.localdomain podman[523466]: 2025-10-13 15:39:49.26539009 +0000 UTC m=+0.163919086 container init e8005d00aa77d559324cbd73bf8526e3fa21fb824629199529b06f7d465f149c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_maxwell, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, name=rhceph, RELEASE=main, CEPH_POINT_RELEASE=, ceph=True, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, release=553, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=)
Oct 13 15:39:49 standalone.localdomain podman[523466]: 2025-10-13 15:39:49.275471591 +0000 UTC m=+0.174000597 container start e8005d00aa77d559324cbd73bf8526e3fa21fb824629199529b06f7d465f149c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_maxwell, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, name=rhceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.buildah.version=1.33.12)
Oct 13 15:39:49 standalone.localdomain podman[523466]: 2025-10-13 15:39:49.275875903 +0000 UTC m=+0.174404909 container attach e8005d00aa77d559324cbd73bf8526e3fa21fb824629199529b06f7d465f149c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_maxwell, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Oct 13 15:39:49 standalone.localdomain peaceful_maxwell[523482]: 167 167
Oct 13 15:39:49 standalone.localdomain systemd[1]: libpod-e8005d00aa77d559324cbd73bf8526e3fa21fb824629199529b06f7d465f149c.scope: Deactivated successfully.
Oct 13 15:39:49 standalone.localdomain podman[523466]: 2025-10-13 15:39:49.28356458 +0000 UTC m=+0.182093616 container died e8005d00aa77d559324cbd73bf8526e3fa21fb824629199529b06f7d465f149c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_maxwell, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.buildah.version=1.33.12, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, release=553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 15:39:49 standalone.localdomain podman[523487]: 2025-10-13 15:39:49.379924192 +0000 UTC m=+0.085641292 container remove e8005d00aa77d559324cbd73bf8526e3fa21fb824629199529b06f7d465f149c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_maxwell, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, GIT_BRANCH=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, release=553, ceph=True)
Oct 13 15:39:49 standalone.localdomain systemd[1]: libpod-conmon-e8005d00aa77d559324cbd73bf8526e3fa21fb824629199529b06f7d465f149c.scope: Deactivated successfully.
Oct 13 15:39:49 standalone.localdomain podman[523509]: 
Oct 13 15:39:49 standalone.localdomain podman[523509]: 2025-10-13 15:39:49.558657074 +0000 UTC m=+0.059751264 container create 4df1f1d8994cb15a5e982dd8401e5a1393c616c58e03c195a9d374d8778939f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_borg, name=rhceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., RELEASE=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.tags=rhceph ceph, version=7, io.buildah.version=1.33.12, GIT_CLEAN=True)
Oct 13 15:39:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3758: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-20039059890f480a3fe94d0a0498b01e1f9e286a8fa2afd2c8a6dbe5fac3ebb9-merged.mount: Deactivated successfully.
Oct 13 15:39:49 standalone.localdomain systemd[1]: Started libpod-conmon-4df1f1d8994cb15a5e982dd8401e5a1393c616c58e03c195a9d374d8778939f6.scope.
Oct 13 15:39:49 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:39:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c602f9fc1f3e5f9b06844f483dd9ed6694bd78b32537124eb6e3e222951db203/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 13 15:39:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c602f9fc1f3e5f9b06844f483dd9ed6694bd78b32537124eb6e3e222951db203/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 13 15:39:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c602f9fc1f3e5f9b06844f483dd9ed6694bd78b32537124eb6e3e222951db203/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 13 15:39:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c602f9fc1f3e5f9b06844f483dd9ed6694bd78b32537124eb6e3e222951db203/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 13 15:39:49 standalone.localdomain podman[523509]: 2025-10-13 15:39:49.633971026 +0000 UTC m=+0.135065246 container init 4df1f1d8994cb15a5e982dd8401e5a1393c616c58e03c195a9d374d8778939f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_borg, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, vendor=Red Hat, Inc., ceph=True, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7)
Oct 13 15:39:49 standalone.localdomain podman[523509]: 2025-10-13 15:39:49.537169991 +0000 UTC m=+0.038264201 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Oct 13 15:39:49 standalone.localdomain podman[523509]: 2025-10-13 15:39:49.644542213 +0000 UTC m=+0.145636403 container start 4df1f1d8994cb15a5e982dd8401e5a1393c616c58e03c195a9d374d8778939f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_borg, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2025-09-24T08:57:55, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc.)
Oct 13 15:39:49 standalone.localdomain podman[523509]: 2025-10-13 15:39:49.6447714 +0000 UTC m=+0.145865620 container attach 4df1f1d8994cb15a5e982dd8401e5a1393c616c58e03c195a9d374d8778939f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_borg, distribution-scope=public, io.openshift.expose-services=, version=7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True)
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]: [
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:     {
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:         "available": false,
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:         "ceph_device": false,
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:         "lsm_data": {},
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:         "lvs": [],
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:         "path": "/dev/sr0",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:         "rejected_reasons": [
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "Has a FileSystem",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "Insufficient space (<5GB)"
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:         ],
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:         "sys_api": {
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "actuators": null,
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "device_nodes": "sr0",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "human_readable_size": "482.00 KB",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "id_bus": "ata",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "model": "QEMU DVD-ROM",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "nr_requests": "2",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "partitions": {},
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "path": "/dev/sr0",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "removable": "1",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "rev": "2.5+",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "ro": "0",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "rotational": "1",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "sas_address": "",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "sas_device_handle": "",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "scheduler_mode": "mq-deadline",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "sectors": 0,
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "sectorsize": "2048",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "size": 493568.0,
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "support_discard": "0",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "type": "disk",
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:             "vendor": "QEMU"
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:         }
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]:     }
Oct 13 15:39:50 standalone.localdomain happy_borg[523524]: ]
Oct 13 15:39:50 standalone.localdomain systemd[1]: libpod-4df1f1d8994cb15a5e982dd8401e5a1393c616c58e03c195a9d374d8778939f6.scope: Deactivated successfully.
Oct 13 15:39:50 standalone.localdomain podman[523509]: 2025-10-13 15:39:50.606934051 +0000 UTC m=+1.108028271 container died 4df1f1d8994cb15a5e982dd8401e5a1393c616c58e03c195a9d374d8778939f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_borg, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, build-date=2025-09-24T08:57:55, distribution-scope=public, RELEASE=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, ceph=True, release=553)
Oct 13 15:39:50 standalone.localdomain ceph-mon[29756]: pgmap v3758: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:50 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c602f9fc1f3e5f9b06844f483dd9ed6694bd78b32537124eb6e3e222951db203-merged.mount: Deactivated successfully.
Oct 13 15:39:50 standalone.localdomain podman[525382]: 2025-10-13 15:39:50.69151329 +0000 UTC m=+0.078451941 container remove 4df1f1d8994cb15a5e982dd8401e5a1393c616c58e03c195a9d374d8778939f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_borg, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, release=553, version=7, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_BRANCH=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Oct 13 15:39:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:50.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:50 standalone.localdomain systemd[1]: libpod-conmon-4df1f1d8994cb15a5e982dd8401e5a1393c616c58e03c195a9d374d8778939f6.scope: Deactivated successfully.
Oct 13 15:39:50 standalone.localdomain sudo[523395]: pam_unix(sudo:session): session closed for user root
Oct 13 15:39:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 15:39:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:39:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 15:39:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:39:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:39:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:39:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:39:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:39:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:39:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:39:50 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:39:50 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:39:50 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 72614e39-17a0-43d4-9bda-fa45814609bf (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:39:50 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 72614e39-17a0-43d4-9bda-fa45814609bf (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:39:50 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 72614e39-17a0-43d4-9bda-fa45814609bf (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:39:50 standalone.localdomain sudo[525396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:39:50 standalone.localdomain sudo[525396]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:39:50 standalone.localdomain sudo[525396]: pam_unix(sudo:session): session closed for user root
Oct 13 15:39:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3759: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:39:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:39:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:39:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:39:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:39:51 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:39:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:39:52 standalone.localdomain ceph-mon[29756]: pgmap v3759: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:39:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:39:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:53.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32433 DF PROTO=TCP SPT=39958 DPT=9102 SEQ=2508135293 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCB44EF0000000001030307) 
Oct 13 15:39:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3760: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32434 DF PROTO=TCP SPT=39958 DPT=9102 SEQ=2508135293 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCB48F60000000001030307) 
Oct 13 15:39:54 standalone.localdomain ceph-mon[29756]: pgmap v3760: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:55 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:39:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:39:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:39:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3761: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:55.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:39:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32435 DF PROTO=TCP SPT=39958 DPT=9102 SEQ=2508135293 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCB50F60000000001030307) 
Oct 13 15:39:57 standalone.localdomain ceph-mon[29756]: pgmap v3761: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:39:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3762: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:39:57 standalone.localdomain systemd[1]: tmp-crun.sO62UG.mount: Deactivated successfully.
Oct 13 15:39:57 standalone.localdomain podman[525414]: 2025-10-13 15:39:57.830252621 +0000 UTC m=+0.089933065 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 13 15:39:57 standalone.localdomain podman[525414]: 2025-10-13 15:39:57.868856654 +0000 UTC m=+0.128537088 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:39:57 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:39:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:39:58.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:39:58 standalone.localdomain ceph-mon[29756]: pgmap v3762: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:39:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3763: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:00 standalone.localdomain ceph-mon[29756]: pgmap v3763: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32436 DF PROTO=TCP SPT=39958 DPT=9102 SEQ=2508135293 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCB60B70000000001030307) 
Oct 13 15:40:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:40:00 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:00.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:00 standalone.localdomain podman[525439]: 2025-10-13 15:40:00.829760272 +0000 UTC m=+0.094013171 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41)
Oct 13 15:40:00 standalone.localdomain podman[525439]: 2025-10-13 15:40:00.84798501 +0000 UTC m=+0.112237909 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6)
Oct 13 15:40:00 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:40:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3764: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:40:02 standalone.localdomain podman[525459]: 2025-10-13 15:40:02.001740492 +0000 UTC m=+0.088288196 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:40:02 standalone.localdomain podman[525459]: 2025-10-13 15:40:02.018867587 +0000 UTC m=+0.105415311 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 15:40:02 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:40:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:40:02 standalone.localdomain ceph-mon[29756]: pgmap v3764: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:03.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3765: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:04 standalone.localdomain ceph-mon[29756]: pgmap v3765: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3766: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:05.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:06 standalone.localdomain ceph-mon[29756]: pgmap v3766: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:40:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:40:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:40:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:40:06 standalone.localdomain podman[525479]: 2025-10-13 15:40:06.842227535 +0000 UTC m=+0.100752387 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, version=17.1.9, io.buildah.version=1.33.12, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, tcib_managed=true, build-date=2025-07-21T14:56:28, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container)
Oct 13 15:40:06 standalone.localdomain podman[525481]: 2025-10-13 15:40:06.902777959 +0000 UTC m=+0.150487610 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=17.1.9, name=rhosp17/openstack-swift-account, release=1, container_name=swift_account_server, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:40:06 standalone.localdomain podman[525480]: 2025-10-13 15:40:06.948716817 +0000 UTC m=+0.201479263 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, batch=17.1_20250721.1, release=1, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, version=17.1.9, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, vcs-type=git, build-date=2025-07-21T15:54:32, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container)
Oct 13 15:40:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:40:06.959 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:40:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:40:06.960 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:40:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:40:06.961 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:40:07 standalone.localdomain podman[525487]: 2025-10-13 15:40:07.01118551 +0000 UTC m=+0.253508806 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:40:07 standalone.localdomain podman[525487]: 2025-10-13 15:40:07.020417463 +0000 UTC m=+0.262740779 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:40:07 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:40:07 standalone.localdomain podman[525481]: 2025-10-13 15:40:07.100777305 +0000 UTC m=+0.348486876 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.expose-services=, vcs-type=git, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, batch=17.1_20250721.1, version=17.1.9, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, com.redhat.component=openstack-swift-account-container)
Oct 13 15:40:07 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:40:07 standalone.localdomain podman[525479]: 2025-10-13 15:40:07.15154207 +0000 UTC m=+0.410066882 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, tcib_managed=true, build-date=2025-07-21T14:56:28, name=rhosp17/openstack-swift-object, architecture=x86_64, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, distribution-scope=public)
Oct 13 15:40:07 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:40:07 standalone.localdomain podman[525480]: 2025-10-13 15:40:07.200047386 +0000 UTC m=+0.452809812 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, version=17.1.9, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, config_id=tripleo_step4, container_name=swift_container_server, release=1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:40:07 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:40:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:40:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3767: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:08 standalone.localdomain ceph-mon[29756]: pgmap v3767: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:08.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:40:08 standalone.localdomain podman[525586]: 2025-10-13 15:40:08.817207062 +0000 UTC m=+0.084558541 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:40:08 standalone.localdomain podman[525586]: 2025-10-13 15:40:08.830383425 +0000 UTC m=+0.097734894 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:40:08 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:40:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3768: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:10.359 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:40:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:10.360 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:40:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:10.361 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:40:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:10.361 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:40:10 standalone.localdomain ceph-mon[29756]: pgmap v3768: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:10.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:10.980 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:40:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:10.980 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:40:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:10.981 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:40:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:10.981 2 DEBUG nova.objects.instance [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.364 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.383 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.384 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.385 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.385 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.385 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.386 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.386 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.386 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.386 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.387 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.403 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.404 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.406 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.406 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.407 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:40:11 standalone.localdomain account-server[114171]: Devices pass completed: 0.00s
Oct 13 15:40:11 standalone.localdomain podman[467099]: time="2025-10-13T15:40:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:40:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3769: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:40:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:40:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:40:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49132 "" "Go-http-client/1.1"
Oct 13 15:40:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:40:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:40:11 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/146248875' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:40:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:11.973 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:40:12 standalone.localdomain systemd[1]: tmp-crun.prrBE7.mount: Deactivated successfully.
Oct 13 15:40:12 standalone.localdomain podman[525625]: 2025-10-13 15:40:12.010424777 +0000 UTC m=+0.080792116 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.043 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:40:12 standalone.localdomain podman[525625]: 2025-10-13 15:40:12.043999725 +0000 UTC m=+0.114367054 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.043 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.044 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.048 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.048 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:40:12 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.180 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.181 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9290MB free_disk=6.9495849609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.182 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.182 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.251 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.252 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.252 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.252 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.303 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:40:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:40:12 standalone.localdomain ceph-mon[29756]: pgmap v3769: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:12 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/146248875' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:40:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:40:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2367238675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.693 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.699 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.715 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.718 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:40:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:12.718 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:40:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:40:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:40:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:40:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:40:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:40:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:40:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:40:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:40:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:40:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:40:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:40:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:40:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:13.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3770: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:13 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2367238675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:40:14 standalone.localdomain ceph-mon[29756]: pgmap v3770: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3771: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:40:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:15.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:15 standalone.localdomain podman[525668]: 2025-10-13 15:40:15.827670336 +0000 UTC m=+0.088735329 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:40:15 standalone.localdomain podman[525668]: 2025-10-13 15:40:15.840048605 +0000 UTC m=+0.101113618 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:40:15 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:40:16 standalone.localdomain ceph-mon[29756]: pgmap v3771: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:16 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=4560 DF PROTO=TCP SPT=53556 DPT=19885 SEQ=130598243 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09A8E7A50000000001030307) 
Oct 13 15:40:16 standalone.localdomain sshd[525691]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:40:16 standalone.localdomain sshd[525691]: Accepted publickey for zuul from 38.102.83.114 port 58128 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:40:16 standalone.localdomain systemd[1]: Starting User Manager for UID 1000...
Oct 13 15:40:16 standalone.localdomain systemd-logind[45629]: New session 301 of user zuul.
Oct 13 15:40:16 standalone.localdomain systemd[525694]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by (uid=0)
Oct 13 15:40:17 standalone.localdomain systemd[525694]: Queued start job for default target Main User Target.
Oct 13 15:40:17 standalone.localdomain systemd[525694]: Created slice User Application Slice.
Oct 13 15:40:17 standalone.localdomain systemd[525694]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 13 15:40:17 standalone.localdomain systemd[525694]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 15:40:17 standalone.localdomain systemd[525694]: Reached target Paths.
Oct 13 15:40:17 standalone.localdomain systemd[525694]: Reached target Timers.
Oct 13 15:40:17 standalone.localdomain systemd[525694]: Starting D-Bus User Message Bus Socket...
Oct 13 15:40:17 standalone.localdomain systemd[525694]: Starting Create User's Volatile Files and Directories...
Oct 13 15:40:17 standalone.localdomain systemd[525694]: Listening on D-Bus User Message Bus Socket.
Oct 13 15:40:17 standalone.localdomain systemd[525694]: Reached target Sockets.
Oct 13 15:40:17 standalone.localdomain systemd[525694]: Finished Create User's Volatile Files and Directories.
Oct 13 15:40:17 standalone.localdomain systemd[525694]: Reached target Basic System.
Oct 13 15:40:17 standalone.localdomain systemd[525694]: Reached target Main User Target.
Oct 13 15:40:17 standalone.localdomain systemd[525694]: Startup finished in 167ms.
Oct 13 15:40:17 standalone.localdomain systemd[1]: Started User Manager for UID 1000.
Oct 13 15:40:17 standalone.localdomain systemd[1]: Started Session 301 of User zuul.
Oct 13 15:40:17 standalone.localdomain sshd[525691]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Oct 13 15:40:17 standalone.localdomain sudo[525727]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qwpgvwxnsiqicyoadxirjzchdujiwqim ; /usr/bin/python3
Oct 13 15:40:17 standalone.localdomain sudo[525727]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.377913) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370017377959, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 1572, "num_deletes": 254, "total_data_size": 1408766, "memory_usage": 1439480, "flush_reason": "Manual Compaction"}
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370017386789, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 1363830, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87706, "largest_seqno": 89277, "table_properties": {"data_size": 1357445, "index_size": 3536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13511, "raw_average_key_size": 19, "raw_value_size": 1344455, "raw_average_value_size": 1937, "num_data_blocks": 159, "num_entries": 694, "num_filter_entries": 694, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760369882, "oldest_key_time": 1760369882, "file_creation_time": 1760370017, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 8937 microseconds, and 4369 cpu microseconds.
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.386840) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 1363830 bytes OK
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.386872) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.388954) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.388977) EVENT_LOG_v1 {"time_micros": 1760370017388971, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.388998) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 1401780, prev total WAL file size 1402269, number of live WAL files 2.
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.389744) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323634' seq:72057594037927935, type:22 .. '6C6F676D0033353135' seq:0, type:0; will stop at (end)
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(1331KB)], [200(6104KB)]
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370017389801, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 7614529, "oldest_snapshot_seqno": -1}
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 7278 keys, 7518797 bytes, temperature: kUnknown
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370017428646, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 7518797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7474538, "index_size": 24959, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 191618, "raw_average_key_size": 26, "raw_value_size": 7346478, "raw_average_value_size": 1009, "num_data_blocks": 992, "num_entries": 7278, "num_filter_entries": 7278, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760370017, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.428981) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 7518797 bytes
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.430570) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.4 rd, 192.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(11.1) write-amplify(5.5) OK, records in: 7804, records dropped: 526 output_compression: NoCompression
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.430600) EVENT_LOG_v1 {"time_micros": 1760370017430586, "job": 126, "event": "compaction_finished", "compaction_time_micros": 38977, "compaction_time_cpu_micros": 24598, "output_level": 6, "num_output_files": 1, "total_output_size": 7518797, "num_input_records": 7804, "num_output_records": 7278, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370017430944, "job": 126, "event": "table_file_deletion", "file_number": 202}
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370017431847, "job": 126, "event": "table_file_deletion", "file_number": 200}
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.389620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.431950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.431957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.431959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.431961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:40:17 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:40:17.431963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:40:17 standalone.localdomain python3[525729]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=fa163ec2-ffbe-5cff-7786-000000000046-1-controller zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:40:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3772: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:17 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=4561 DF PROTO=TCP SPT=53556 DPT=19885 SEQ=130598243 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09A8EBB10000000001030307) 
Oct 13 15:40:17 standalone.localdomain subscription-manager[525730]: Unregistered machine with identity: 8ba5dc1a-e4d1-4e93-afbc-cee2f5a7a9d3
Oct 13 15:40:18 standalone.localdomain sudo[525727]: pam_unix(sudo:session): session closed for user root
Oct 13 15:40:18 standalone.localdomain ceph-mon[29756]: pgmap v3772: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:18.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:40:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2734601862' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:40:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:40:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2734601862' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:40:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:40:18 standalone.localdomain podman[525732]: 2025-10-13 15:40:18.837185204 +0000 UTC m=+0.095350532 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 15:40:18 standalone.localdomain podman[525732]: 2025-10-13 15:40:18.874229448 +0000 UTC m=+0.132394736 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 13 15:40:18 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:40:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2734601862' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:40:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2734601862' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:40:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3773: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:19 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=4562 DF PROTO=TCP SPT=53556 DPT=19885 SEQ=130598243 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09A8F3B10000000001030307) 
Oct 13 15:40:20 standalone.localdomain ceph-mon[29756]: pgmap v3773: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:20.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:20.995 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:40:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:20.999 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:20.999 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.006 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.012 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '325e02a8-7d00-4e5c-9d8e-d29bda6789ff', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:40:21.000138', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'ed4229ee-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.189408778, 'message_signature': '2ac81d096f68d8afc33a353d38a18c57e2746d68d7c1f1a413858860e75ac30d'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:40:21.000138', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'ed430f26-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.196282588, 'message_signature': '9a5d971fddf32ed46bc9a78b3308698e8c01708af28591880604c4e28beac585'}]}, 'timestamp': '2025-10-13 15:40:21.012911', '_unique_id': '8dddc2138d3d41f8aa0479bfe2686e42'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.014 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.015 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.075 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.076 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 15370724 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.076 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.092 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.092 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'df9b4472-4161-4b38-8a3b-331c479481dd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:40:21.016046', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed4cb7ce-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': '4e36d4a57d4ab39dee2d655096d3b139573d3143a77f44ee3abbb14c36c11ad7'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 15370724, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:40:21.016046', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed4cc7a0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': 'a1af97166d6fffad26fd691e886c3d0e7d82d2f473faa02cf580f8a76ea04670'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:40:21.016046', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'ed4cd4ca-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': '8b15a56bb1aefcda1def03268b04fe9998d766146fa26fc9f23fd368b37771dd'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:40:21.016046', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed4f3bca-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.266079226, 'message_signature': 'e167ffb24302b34d3053bab27f9d8a2d3e8b7436c530ab3246b1db464f28555e'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:40:21.016046', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed4f4a34-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.266079226, 'message_signature': '028d5acedbd3717fab2560d2d439229b711585ddbd47058b1ca8bc9a67f1d065'}]}, 'timestamp': '2025-10-13 15:40:21.093016', '_unique_id': 'c0cf49586d8040f3a0fee3f9233a4fe3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.094 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.095 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.095 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 8960 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.095 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 4008 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '003c5c66-c1cb-4f2d-a187-0d22424c0d5d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8960, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:40:21.095327', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'ed4fb438-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.189408778, 'message_signature': '9e93467cf77866254a6233f8a6f26a78e233b92b7a6c7dd2d72b95850f9464dd'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4008, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:40:21.095327', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'ed4fc036-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.196282588, 'message_signature': 'd6751aeb35ae435fecc4925fa3d46019ed0dd2c4794f150f7578eed9eefbe424'}]}, 'timestamp': '2025-10-13 15:40:21.095990', '_unique_id': '4cc912c7277f40eca26599a4d6bc9766'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.096 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.097 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.097 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.097 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.098 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.098 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.098 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '75a51866-14b9-403d-baba-169046d2c37d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:40:21.097631', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed500c26-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': 'e3166bae6c6196a692d71cbfd1ca26949f857e7c171674d214354d15807b20f6'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:40:21.097631', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed50170c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': 'c164d8c51b074820843357f1a34524c4a15c97c12181df97ff06425389c60b0c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:40:21.097631', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'ed5022f6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': '1c2baf558ac967fa920bee3a068af5a3b42e78300eb506cad182b6d2b4ba65ce'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:40:21.097631', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed502ea4-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.266079226, 'message_signature': '3fe08d407bc5087258d34a466dd93220bff5786dd13d3a453e84e4dd2f180f7b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:40:21.097631', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed5038cc-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.266079226, 'message_signature': 'a97771942abc5e777d469a310cc9bc8c89ad475475d2913ffda8f34233d43330'}]}, 'timestamp': '2025-10-13 15:40:21.099062', '_unique_id': '0846fb491897428899ba9eb12275402f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.099 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.100 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.100 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.100 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29758464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.101 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.101 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 40960 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.101 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.102 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dffe68cf-1d72-4cca-90c5-a2f3c51d141e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29758464, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:40:21.100943', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed508d90-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': '4649312cb2af2eb48c3c04123d4894eb93cb9e3ec5b2284aeaa842d6748851d2'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:40:21.100943', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed50986c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': '375d17b5afd166a11781e9cc25995a3ccbd0dda59fae86388389976ef7950484'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 40960, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:40:21.100943', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'ed50a582-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': '02854b8fb04120ffa2fe1b59b6024323005b591d9d96f3e938ea425402084dee'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:40:21.100943', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed50b3ce-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.266079226, 'message_signature': '526416fdfc07cfc9d66ce4b5d61f1f53bf7739e28e8e0b85952b8864ccc3e9e0'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:40:21.100943', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed50be00-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.266079226, 'message_signature': '5dcff47c8543e5365e9f9482c3ef4ff50366dc4d178749a0006788713f96ec5c'}]}, 'timestamp': '2025-10-13 15:40:21.102507', '_unique_id': '782d245b78904464b7acdef6b59f9c27'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.103 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.104 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.104 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.104 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0c1f4c2e-26c6-4d34-9706-aafe96c15719', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:40:21.104099', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'ed5108d8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.189408778, 'message_signature': '88c5d1af617a1a4f4c2ac7437cea88468a68648cb5679a0701d8281a6f5eba70'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:40:21.104099', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'ed5115b2-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.196282588, 'message_signature': '7ec1911edc28fcb9eec4b33b17b0241417a0c41197d19ce4bd11c89efb06407c'}]}, 'timestamp': '2025-10-13 15:40:21.104779', '_unique_id': 'c1fa801505c046bea1c4d0503aeefb8c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.105 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.106 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.106 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 131 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.106 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9bb150da-71e8-4acc-9d13-e9ab317aab31', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 131, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:40:21.106306', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'ed515f04-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.189408778, 'message_signature': '31f8fe6a365a5b4cbc3a430ee2cd84f75f15d28907f4f41bcb1db2f6201f80fc'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:40:21.106306', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'ed516cba-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.196282588, 'message_signature': 'c23179d0bc2cd80857aa2e3dcf6b4a6b08d6712da331f7e755803f10366b923f'}]}, 'timestamp': '2025-10-13 15:40:21.106960', '_unique_id': '4cd02d4483914c2db8f37e34ab0ab7fd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.107 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.108 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.123 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.124 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.124 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.136 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.137 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '79886f08-4b41-4f13-8202-da4c1d2e9eb3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:40:21.108593', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed54066e-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.297837248, 'message_signature': '4c92760876ddd51835e238b809926beaf7822becfbfe88c0a1ec4e546abfe90f'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:40:21.108593', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed54126c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.297837248, 'message_signature': '723e0eeeca6ebaec5ba960db936977aaacc4e39874c2b3a264f88f45d1e7667c'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:40:21.108593', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'ed541ca8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.297837248, 'message_signature': 'bdd28f17989bfdbff22cbf17aa9985d2cce1d71916f6bec4397c7be67f548a6e'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:40:21.108593', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed5604a0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.313803258, 'message_signature': 'd19e9874cafaaa178bf6c992ce90b65ce32a9dde37bea5af139adc02dbd5231e'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:40:21.108593', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed562084-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.313803258, 'message_signature': 'd5b8a396665340dff45af28d2334de5f97ea60fa35ffeadc7d30336f9000c195'}]}, 'timestamp': '2025-10-13 15:40:21.137923', '_unique_id': '9471d942f5314ac08fa9cae4408a1aba'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.139 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.140 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.140 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.140 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.141 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 8344 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.141 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8d6f0323-0e60-4f8d-8545-f9d8977be3db', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 8344, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:40:21.141173', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'ed56b4f4-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.189408778, 'message_signature': 'f681ab04411cb14f61278575a7aa2b72759cf5cf994a0f1582afd5e3e37d932e'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 84, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:40:21.141173', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'ed56c7dc-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.196282588, 'message_signature': 'ab88dedcddce4417ecba9c3aa79a6e3c8d3725c3c1625acea9b937c1b15dd51d'}]}, 'timestamp': '2025-10-13 15:40:21.142128', '_unique_id': 'b25878e61a6a4bd59f6e1e80ff24c9c1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.143 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.144 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.160 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 11910000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.174 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 33800000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'af4db578-13e1-4fa4-9784-cbcdacac3e20', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 11910000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:40:21.145082', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'ed59a880-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.349705938, 'message_signature': 'da8ab9706e050f7f9333520b233d4bfcbfb5ee67e47ffdd6cc47595a818fe108'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 33800000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': '2025-10-13T15:40:21.145082', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'ed5bbd32-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.363356106, 'message_signature': 'f621328ffbdb1e09de9580ef499ff264530bb725775ff4da40291da45dd79d9e'}]}, 'timestamp': '2025-10-13 15:40:21.174596', '_unique_id': 'e9b90fefb4ec4c0bb3127f1d0115be6a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.175 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.176 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.176 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.176 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 11277 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.177 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a8fcdeb3-deab-4aec-98b5-5e765bf5d733', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 11277, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:40:21.176894', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'ed5c2628-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.189408778, 'message_signature': 'fe6a82ffcecc62adbfd6cb7a3a5755fd1a7b253be4ff9a2a1c5636d988054675'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:40:21.176894', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'ed5c3244-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.196282588, 'message_signature': '951324385bf10b88aaae6833311ca19610e522050786ebd219f3b493bcaeba5c'}]}, 'timestamp': '2025-10-13 15:40:21.177577', '_unique_id': '8f0c1e69d8e745ddb662f107eb4c16f0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.178 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.179 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.179 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.179 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.179 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.180 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6469f5ef-7627-4a8f-b62b-d7f50ab90886', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:40:21.179057', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed5c793e-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.297837248, 'message_signature': '0df80ff710d4dd9b32061cbcbffdf5e2ac8c237b08eafd1c4788c44d6b9f994b'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:40:21.179057', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed5c8500-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.297837248, 'message_signature': '480fad0d6e85f2f9279b28491ebd198236b65008415d2fba74cc4c46e242de6c'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:40:21.179057', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'ed5c8fd2-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.297837248, 'message_signature': '199f0034b2b464b684b8358cbb4850b88fdb9aa962a899a353440debe2609e01'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:40:21.179057', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed5c9aa4-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.313803258, 'message_signature': '0afe06591dc1b74ab022347360ee075337c176e88ae7160654a31470d8b2e388'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:40:21.179057', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed5ca508-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.313803258, 'message_signature': '3e6d6d062679d351c9b26dfcb494807cc6410810e58e93e9a6c94dceaf5573f4'}]}, 'timestamp': '2025-10-13 15:40:21.180472', '_unique_id': 'fe9a291c560041c4bb7589386b761b1b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.181 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.182 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.182 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.182 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.182 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.183 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.183 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '43159b4e-1249-4bc0-a054-0685c837857e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:40:21.182144', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed5cf1c0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.297837248, 'message_signature': '26bc338f5ebc2bbba32e0f0701595709a0fa1a14083ae1c0324f5b6f40d6f7f3'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:40:21.182144', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed5cfe40-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.297837248, 'message_signature': '0e256e01803e405999a7142d5777de4c782f79c7496033428cced5fceba95ba4'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:40:21.182144', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'ed5d0bf6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.297837248, 'message_signature': '05e19f417d80bdb2095c33f3f3ab1d7d7467318181cfc37ab49b52ff5a40b964'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:40:21.182144', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed5d16a0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.313803258, 'message_signature': '60cfbc5390a53aa3b900d099ce0c65118a6c3bc7f0cb3d51dbb455ab79888727'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:40:21.182144', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed5d21b8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.313803258, 'message_signature': 'db71320b96ded3429a4deeca28925a5d7a568d23634739b0d190405ef91e2d3b'}]}, 'timestamp': '2025-10-13 15:40:21.183677', '_unique_id': '40ee847747be4a58aa7e2a6057095619'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.184 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.185 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.185 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.185 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '730833a0-d401-460e-b129-8cde49e1888d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:40:21.185433', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'ed5d732a-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.189408778, 'message_signature': '9bf09b4917d5739cc2ca547ba3a3ccadead4f3da5b532fa8d6f96a32ff50340c'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:40:21.185433', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'ed5d7ece-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.196282588, 'message_signature': '70aed4240e3b6a565a45d31823df67f1df9a0196f9a6886e5f82043538943c6a'}]}, 'timestamp': '2025-10-13 15:40:21.186069', '_unique_id': '313a1d3b5c114b8aad953c9ac37a5219'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.186 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.187 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.187 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 689775433 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.187 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 105173955 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.188 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 8149490 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.188 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.188 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b4254d53-ac17-4834-be68-a7fbbc5dc445', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 689775433, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:40:21.187525', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed5dc3fc-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': '92af67e79c151a314da5c92219f030498486ed17e7ef3b49bdd6684161f97a6d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 105173955, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:40:21.187525', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed5dcf28-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': '6f456a55435a78f32ef94f18bc53ad9eba2fe0a64dbf60b1000756bd412dea4d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 8149490, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:40:21.187525', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'ed5dd9c8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': '5ae806b4c67586dbf1117d0b31397816d907ce668045ed8014e487d644dcf035'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:40:21.187525', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed5de526-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.266079226, 'message_signature': '7a5974ee5ff57a98b1c92ce03a3d2283b7a88bb541cdde482b84ba2cb7a4e7fe'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:40:21.187525', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed5defc6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.266079226, 'message_signature': '7deaf7fa5e47212fb621390821fb8e52da9018e3ff2c72915d75da952d20469e'}]}, 'timestamp': '2025-10-13 15:40:21.188928', '_unique_id': 'e78f3c2ee6c34db394a3970782e56bfe'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.189 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.190 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.190 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.190 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7ba770dd-13f8-46c0-88a7-b94d944eb5ce', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:40:21.190475', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'ed5e3832-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.189408778, 'message_signature': 'f30233f69271c30760aa14e00d4dcad0697cea9d220ec202871daaa3bee150b2'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:40:21.190475', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'ed5e43cc-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.196282588, 'message_signature': 'b4dafde5bb7dc771ea81fef5000565e3b5f955273f78a5a30e3558c87816458c'}]}, 'timestamp': '2025-10-13 15:40:21.191113', '_unique_id': 'd989912a1d394646bebb0f8b3f27a5b7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.191 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.192 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.192 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 85 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 57 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9afeb70e-f361-4f0b-aa0e-4eddda0c0813', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 85, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:40:21.192648', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'ed5e8bfc-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.189408778, 'message_signature': 'ceb259e3bb902d6513cae3ab34cc2e0bc7c2a4200f09766c196912af0e824981'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 57, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:40:21.192648', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'ed5e9b06-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.196282588, 'message_signature': '6ce6f1a94b0a93dfdfe2f697b48fa2417eba00b20a2905918fdf5a0f06e35f53'}]}, 'timestamp': '2025-10-13 15:40:21.193346', '_unique_id': '6cf5aa4c9cac4850a03d92d065f81e9e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.193 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.194 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.194 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1018 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.195 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.195 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 10 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.195 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.195 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bdf70e70-50d6-49ab-aa79-b80380056b92', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1018, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:40:21.194803', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed5ee00c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': 'fd3276224002b43ae15bfcc530cabfe11fecdc46d98f11eb4c19876dbe405de2'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:40:21.194803', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed5eeade-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': 'd5a09ee176f9620eab9c97c365bfe84f2e95993ad994a481717fd0cefacecfde'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 10, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:40:21.194803', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'ed5ef632-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': 'ac02b7938a17c9f2ffa8daca0d223fcba8867e51079097e4d8a74e0f398cf071'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:40:21.194803', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed5f00e6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.266079226, 'message_signature': '3e4d8c26235a56bd9b05a68202d3980faee6c01921be064d270cc035f61947d3'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:40:21.194803', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed5f0b4a-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.266079226, 'message_signature': 'b656e69ad07dfdd3a66c121d9409a1e1a3b4eb99b55b4a655dc4437da2c36b35'}]}, 'timestamp': '2025-10-13 15:40:21.196203', '_unique_id': '85e189d504f1446eb02e1efe715227c4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.196 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.197 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.197 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 53.14453125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f5befe63-9916-46dd-a68e-7f3a9a296e54', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 53.14453125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:40:21.197761', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'ed5f53c0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.349705938, 'message_signature': '845feb5b6e5ee418806130828a9a54b7c74182d967e0c47deff158f37bb0163b'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:40:21.197761', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'ed5f5e9c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.363356106, 'message_signature': 'fb2ec160b335f48430afaad3edf6a90850601071dba4ce2b74524284a1ee3f8e'}]}, 'timestamp': '2025-10-13 15:40:21.198333', '_unique_id': 'd6a451d31f9340c4a38f89ed8567388e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.198 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.199 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.199 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.200 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.200 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.200 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.200 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7eacbfdf-5d66-41e5-98a1-fc3968ed9621', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:40:21.199787', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed5fa294-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': '23a9915a75e14e70d6ad29bc8e6510da2103f0aa055f9f6da636337e7f538720'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:40:21.199787', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed5fada2-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': '8debc05eeffff959e4501c9870b8d11ebecca41d316744719b2994da81615a5a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:40:21.199787', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'ed5fb8d8-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.205291754, 'message_signature': '922a66b8e43d79349d1af3b4315a5eef93201232b20535c2024589887b0329c5'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:40:21.199787', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed5fc396-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.266079226, 'message_signature': 'd41907a4b795190c04e70657044a15f3d797d7d75e332c9b726812889a1e9f3a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:40:21.199787', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed5fcdf0-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.266079226, 'message_signature': 'a2b576bb341083ebbefa871e500d5bf532ace1bcf29d64171ffeb5973f5ca89d'}]}, 'timestamp': '2025-10-13 15:40:21.201207', '_unique_id': '4fee453867704c24ba8955673f2b007b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.201 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.202 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.202 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 11543 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.203 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b48e52f2-9df1-4111-944a-16d2dceb80e3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11543, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:40:21.202808', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'ed601ac6-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.189408778, 'message_signature': '1215f1fe36e6f36a800ef7454678c3dd055607a4ebc2cd15d2b18a97137f2bec'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:40:21.202808', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'ed60273c-a84a-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9464.196282588, 'message_signature': 'ca1ba2332d0b1e61344089c2311f2d1bb4aac6511b88f8e718558b9ea66eec33'}]}, 'timestamp': '2025-10-13 15:40:21.203479', '_unique_id': '626a416e106c4445b79963eb1f7901e8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:40:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:40:21.204 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:40:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3774: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:40:22 standalone.localdomain ceph-mon[29756]: pgmap v3774: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:40:23
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', '.mgr', 'backups', 'manila_metadata', 'manila_data', 'images', 'vms']
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:40:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62845 DF PROTO=TCP SPT=33948 DPT=9102 SEQ=1571267338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCBBA200000000001030307) 
Oct 13 15:40:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:23.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3775: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:40:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:40:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62846 DF PROTO=TCP SPT=33948 DPT=9102 SEQ=1571267338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCBBE360000000001030307) 
Oct 13 15:40:24 standalone.localdomain ceph-mon[29756]: pgmap v3775: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3776: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:40:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:25.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62847 DF PROTO=TCP SPT=33948 DPT=9102 SEQ=1571267338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCBC6360000000001030307) 
Oct 13 15:40:26 standalone.localdomain ceph-mon[29756]: pgmap v3776: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:40:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:40:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3777: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:40:28 standalone.localdomain ceph-mon[29756]: pgmap v3777: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:40:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:28.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:40:28 standalone.localdomain podman[525750]: 2025-10-13 15:40:28.828677811 +0000 UTC m=+0.089528923 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 15:40:28 standalone.localdomain podman[525750]: 2025-10-13 15:40:28.869661467 +0000 UTC m=+0.130512559 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:40:28 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006647822445561139 of space, bias 1.0, pg target 0.6647822445561139 quantized to 32 (current 32)
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:40:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3778: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:40:30 standalone.localdomain ceph-mon[29756]: pgmap v3778: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:40:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62848 DF PROTO=TCP SPT=33948 DPT=9102 SEQ=1571267338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCBD5F60000000001030307) 
Oct 13 15:40:30 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:30.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3779: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:40:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:40:31 standalone.localdomain systemd[1]: tmp-crun.4WKKIN.mount: Deactivated successfully.
Oct 13 15:40:31 standalone.localdomain podman[525776]: 2025-10-13 15:40:31.831585386 +0000 UTC m=+0.095264549 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7, version=9.6)
Oct 13 15:40:31 standalone.localdomain podman[525776]: 2025-10-13 15:40:31.850146735 +0000 UTC m=+0.113825878 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm)
Oct 13 15:40:31 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:40:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:40:32 standalone.localdomain ceph-mon[29756]: pgmap v3779: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:40:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:40:32 standalone.localdomain podman[525796]: 2025-10-13 15:40:32.813276087 +0000 UTC m=+0.078354071 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, managed_by=edpm_ansible, container_name=multipathd)
Oct 13 15:40:32 standalone.localdomain podman[525796]: 2025-10-13 15:40:32.828983438 +0000 UTC m=+0.094061462 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 15:40:32 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:40:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3780: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:40:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:33.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:34 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 15:40:34 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 15:40:34 standalone.localdomain ceph-mon[29756]: pgmap v3780: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:40:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3781: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:40:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:35.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:36 standalone.localdomain ceph-mon[29756]: pgmap v3781: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:40:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:40:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:40:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:40:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:40:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:40:37 standalone.localdomain podman[525817]: 2025-10-13 15:40:37.539009964 +0000 UTC m=+0.090290227 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-swift-container-container, release=1, io.openshift.expose-services=, container_name=swift_container_server, 
distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1)
Oct 13 15:40:37 standalone.localdomain systemd[1]: tmp-crun.JocWbe.mount: Deactivated successfully.
Oct 13 15:40:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3782: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:37 standalone.localdomain podman[525816]: 2025-10-13 15:40:37.605867222 +0000 UTC m=+0.158216567 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, build-date=2025-07-21T14:56:28, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, tcib_managed=true, container_name=swift_object_server, version=17.1.9, io.openshift.expose-services=, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:40:37 standalone.localdomain podman[525818]: 2025-10-13 15:40:37.576135802 +0000 UTC m=+0.118445780 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, container_name=swift_account_server, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-account-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, vcs-type=git)
Oct 13 15:40:37 standalone.localdomain podman[525824]: 2025-10-13 15:40:37.677716403 +0000 UTC m=+0.215497372 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:40:37 standalone.localdomain podman[525824]: 2025-10-13 15:40:37.686948636 +0000 UTC m=+0.224729585 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:40:37 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:40:37 standalone.localdomain podman[525817]: 2025-10-13 15:40:37.763864322 +0000 UTC m=+0.315144605 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, release=1, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-container-container, batch=17.1_20250721.1, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
container_name=swift_container_server, version=17.1.9, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, maintainer=OpenStack TripleO Team)
Oct 13 15:40:37 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:40:37 standalone.localdomain podman[525818]: 2025-10-13 15:40:37.784923137 +0000 UTC m=+0.327233145 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container, io.buildah.version=1.33.12, 
vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, container_name=swift_account_server, managed_by=tripleo_ansible, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 15:40:37 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:40:37 standalone.localdomain podman[525816]: 2025-10-13 15:40:37.870981203 +0000 UTC m=+0.423330498 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, batch=17.1_20250721.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, container_name=swift_object_server, name=rhosp17/openstack-swift-object, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4)
Oct 13 15:40:37 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:40:38 standalone.localdomain ceph-mon[29756]: pgmap v3782: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:38.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3783: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:40:39 standalone.localdomain podman[525919]: 2025-10-13 15:40:39.826237486 +0000 UTC m=+0.089156242 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:40:39 standalone.localdomain podman[525919]: 2025-10-13 15:40:39.840868124 +0000 UTC m=+0.103786880 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:40:39 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:40:40 standalone.localdomain ceph-mon[29756]: pgmap v3783: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:40.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:41 standalone.localdomain podman[467099]: time="2025-10-13T15:40:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:40:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3784: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:40:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:40:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:40:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49141 "" "Go-http-client/1.1"
Oct 13 15:40:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:40:42 standalone.localdomain ceph-mon[29756]: pgmap v3784: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:40:42 standalone.localdomain systemd[1]: tmp-crun.1rRIdd.mount: Deactivated successfully.
Oct 13 15:40:42 standalone.localdomain podman[525940]: 2025-10-13 15:40:42.829605185 +0000 UTC m=+0.092947378 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:40:42 standalone.localdomain podman[525940]: 2025-10-13 15:40:42.870049423 +0000 UTC m=+0.133391616 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:40:42 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:40:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:40:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:40:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:40:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:40:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:40:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:40:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:40:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:40:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:40:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:40:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3785: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:43.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:44 standalone.localdomain ceph-mon[29756]: pgmap v3785: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3786: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:45.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:46 standalone.localdomain ceph-mon[29756]: pgmap v3786: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:40:46 standalone.localdomain podman[525959]: 2025-10-13 15:40:46.809572499 +0000 UTC m=+0.080162727 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:40:46 standalone.localdomain podman[525959]: 2025-10-13 15:40:46.820904356 +0000 UTC m=+0.091494584 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:40:46 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:40:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:40:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3787: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:48 standalone.localdomain ceph-mon[29756]: pgmap v3787: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:48 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 15:40:48 standalone.localdomain object-server[525982]: Object update sweep starting on /srv/node/d1 (pid: 31)
Oct 13 15:40:48 standalone.localdomain object-server[525982]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 31)
Oct 13 15:40:48 standalone.localdomain object-server[525982]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 15:40:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:48.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:48 standalone.localdomain object-server[114601]: Object update sweep completed: 0.08s
Oct 13 15:40:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3788: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:40:49 standalone.localdomain podman[525983]: 2025-10-13 15:40:49.825064059 +0000 UTC m=+0.091630888 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:40:49 standalone.localdomain podman[525983]: 2025-10-13 15:40:49.830943659 +0000 UTC m=+0.097510458 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:40:49 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:40:50 standalone.localdomain ceph-mon[29756]: pgmap v3788: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:50.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:50 standalone.localdomain sudo[526001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:40:50 standalone.localdomain sudo[526001]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:40:50 standalone.localdomain sudo[526001]: pam_unix(sudo:session): session closed for user root
Oct 13 15:40:51 standalone.localdomain sudo[526019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:40:51 standalone.localdomain sudo[526019]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:40:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3789: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:51 standalone.localdomain sudo[526019]: pam_unix(sudo:session): session closed for user root
Oct 13 15:40:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:40:51 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:40:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:40:51 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:40:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:40:51 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:40:51 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:40:51 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:40:51 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 6032b993-0829-4352-9e1d-347b96b72273 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:40:51 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 6032b993-0829-4352-9e1d-347b96b72273 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:40:51 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 6032b993-0829-4352-9e1d-347b96b72273 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:40:51 standalone.localdomain sudo[526070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:40:51 standalone.localdomain sudo[526070]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:40:51 standalone.localdomain sudo[526070]: pam_unix(sudo:session): session closed for user root
Oct 13 15:40:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:40:52 standalone.localdomain ceph-mon[29756]: pgmap v3789: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:52 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:40:52 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:40:52 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:40:52 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:40:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:40:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49412 DF PROTO=TCP SPT=46566 DPT=9102 SEQ=2411096963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCC2F4E0000000001030307) 
Oct 13 15:40:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3790: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:53.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49413 DF PROTO=TCP SPT=46566 DPT=9102 SEQ=2411096963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCC33760000000001030307) 
Oct 13 15:40:54 standalone.localdomain ceph-mon[29756]: pgmap v3790: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:55 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:40:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:40:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:40:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3791: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:55.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:40:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49414 DF PROTO=TCP SPT=46566 DPT=9102 SEQ=2411096963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCC3B760000000001030307) 
Oct 13 15:40:57 standalone.localdomain ceph-mon[29756]: pgmap v3791: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:40:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3792: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:58 standalone.localdomain ceph-mon[29756]: pgmap v3792: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:40:58.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:40:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3793: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:40:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:40:59 standalone.localdomain systemd[1]: tmp-crun.1FpWLc.mount: Deactivated successfully.
Oct 13 15:40:59 standalone.localdomain podman[526088]: 2025-10-13 15:40:59.823752838 +0000 UTC m=+0.083046604 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:40:59 standalone.localdomain podman[526088]: 2025-10-13 15:40:59.921937755 +0000 UTC m=+0.181231531 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:40:59 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:41:00 standalone.localdomain ceph-mon[29756]: pgmap v3793: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49415 DF PROTO=TCP SPT=46566 DPT=9102 SEQ=2411096963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCC4B360000000001030307) 
Oct 13 15:41:00 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:00.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3794: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:41:02 standalone.localdomain podman[526113]: 2025-10-13 15:41:02.08352437 +0000 UTC m=+0.092749002 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, distribution-scope=public, release=1755695350, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.6, io.openshift.tags=minimal rhel9)
Oct 13 15:41:02 standalone.localdomain podman[526113]: 2025-10-13 15:41:02.101108448 +0000 UTC m=+0.110333080 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter, vcs-type=git, version=9.6, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7)
Oct 13 15:41:02 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:41:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:41:02 standalone.localdomain ceph-mon[29756]: pgmap v3794: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3795: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:03.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:41:03 standalone.localdomain podman[526134]: 2025-10-13 15:41:03.819063383 +0000 UTC m=+0.078097964 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:41:03 standalone.localdomain podman[526134]: 2025-10-13 15:41:03.852664282 +0000 UTC m=+0.111698903 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:41:03 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:41:04 standalone.localdomain ceph-mon[29756]: pgmap v3795: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3796: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:05.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:06 standalone.localdomain ceph-mon[29756]: pgmap v3796: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:41:06.960 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:41:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:41:06.961 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:41:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:41:06.962 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:41:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:41:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3797: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:41:07 standalone.localdomain podman[526154]: 2025-10-13 15:41:07.829023405 +0000 UTC m=+0.095613060 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:41:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:41:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:41:07 standalone.localdomain podman[526154]: 2025-10-13 15:41:07.862077557 +0000 UTC m=+0.128667232 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:41:07 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:41:07 standalone.localdomain podman[526177]: 2025-10-13 15:41:07.944796011 +0000 UTC m=+0.086419399 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, release=1, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, version=17.1.9, com.redhat.component=openstack-swift-container-container, distribution-scope=public, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32)
Oct 13 15:41:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:41:08 standalone.localdomain podman[526178]: 2025-10-13 15:41:08.005575773 +0000 UTC m=+0.141167425 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-07-21T16:11:22, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64)
Oct 13 15:41:08 standalone.localdomain podman[526208]: 2025-10-13 15:41:08.088973178 +0000 UTC m=+0.126528417 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T14:56:28, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:41:08 standalone.localdomain podman[526177]: 2025-10-13 15:41:08.178046146 +0000 UTC m=+0.319669514 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, version=17.1.9, com.redhat.component=openstack-swift-container-container, release=1, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-container, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, batch=17.1_20250721.1, io.openshift.expose-services=)
Oct 13 15:41:08 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:41:08 standalone.localdomain podman[526178]: 2025-10-13 15:41:08.232883316 +0000 UTC m=+0.368474898 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T16:11:22, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:41:08 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:41:08 standalone.localdomain podman[526208]: 2025-10-13 15:41:08.308950765 +0000 UTC m=+0.346506024 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:41:08 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:41:08 standalone.localdomain ceph-mon[29756]: pgmap v3797: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:08.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:08 standalone.localdomain systemd[1]: tmp-crun.pppBg4.mount: Deactivated successfully.
Oct 13 15:41:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:09.582 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:41:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:09.583 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:41:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3798: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:09.613 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:41:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:09.614 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:41:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:10.365 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:41:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:10.366 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:41:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:10.366 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:41:10 standalone.localdomain ceph-mon[29756]: pgmap v3798: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:41:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:10.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:10 standalone.localdomain systemd[1]: tmp-crun.IcwV8h.mount: Deactivated successfully.
Oct 13 15:41:10 standalone.localdomain podman[526260]: 2025-10-13 15:41:10.827778311 +0000 UTC m=+0.093670899 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 15:41:10 standalone.localdomain podman[526260]: 2025-10-13 15:41:10.863860587 +0000 UTC m=+0.129753145 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:41:10 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.120 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.152 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.153 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.153 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.154 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.154 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.155 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.155 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.156 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.157 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.157 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.274 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.275 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.276 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.276 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.277 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:41:11 standalone.localdomain podman[467099]: time="2025-10-13T15:41:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:41:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3799: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:41:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:41:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:41:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49126 "" "Go-http-client/1.1"
Oct 13 15:41:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:41:11 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1368070941' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.790 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.915 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.916 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.916 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.922 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:41:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:11.922 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:41:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:12.115 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:41:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:12.116 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9300MB free_disk=6.9495849609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:41:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:12.116 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:41:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:12.116 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:41:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:12.181 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:41:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:12.182 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:41:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:12.182 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:41:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:12.182 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:41:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:12.228 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3068831414' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:41:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:12.650 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:41:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:12.657 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: pgmap v3799: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1368070941' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3068831414' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #204. Immutable memtables: 0.
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.678285) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 204
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370072678330, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 793, "num_deletes": 251, "total_data_size": 575293, "memory_usage": 590728, "flush_reason": "Manual Compaction"}
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #205: started
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370072684242, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 205, "file_size": 564441, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89278, "largest_seqno": 90070, "table_properties": {"data_size": 560807, "index_size": 1426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8447, "raw_average_key_size": 19, "raw_value_size": 553394, "raw_average_value_size": 1263, "num_data_blocks": 65, "num_entries": 438, "num_filter_entries": 438, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760370017, "oldest_key_time": 1760370017, "file_creation_time": 1760370072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 205, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 6008 microseconds, and 3014 cpu microseconds.
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.684290) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #205: 564441 bytes OK
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.684312) [db/memtable_list.cc:519] [default] Level-0 commit table #205 started
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.685910) [db/memtable_list.cc:722] [default] Level-0 commit table #205: memtable #1 done
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.685929) EVENT_LOG_v1 {"time_micros": 1760370072685923, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.685973) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 571243, prev total WAL file size 571243, number of live WAL files 2.
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000201.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.686429) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [205(551KB)], [203(7342KB)]
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370072686477, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [205], "files_L6": [203], "score": -1, "input_data_size": 8083238, "oldest_snapshot_seqno": -1}
Oct 13 15:41:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:12.701 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:41:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:12.704 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:41:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:12.704 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #206: 7198 keys, 7041894 bytes, temperature: kUnknown
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370072716515, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 206, "file_size": 7041894, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6998720, "index_size": 24066, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 190589, "raw_average_key_size": 26, "raw_value_size": 6872583, "raw_average_value_size": 954, "num_data_blocks": 949, "num_entries": 7198, "num_filter_entries": 7198, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760370072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.716720) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 7041894 bytes
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.718334) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 268.8 rd, 234.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 7.2 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(26.8) write-amplify(12.5) OK, records in: 7716, records dropped: 518 output_compression: NoCompression
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.718353) EVENT_LOG_v1 {"time_micros": 1760370072718344, "job": 128, "event": "compaction_finished", "compaction_time_micros": 30075, "compaction_time_cpu_micros": 16903, "output_level": 6, "num_output_files": 1, "total_output_size": 7041894, "num_input_records": 7716, "num_output_records": 7198, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000205.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370072718523, "job": 128, "event": "table_file_deletion", "file_number": 205}
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370072719089, "job": 128, "event": "table_file_deletion", "file_number": 203}
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.686356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.719186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.719192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.719196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.719198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:41:12 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:41:12.719201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:41:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:41:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:41:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:41:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:41:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:41:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:41:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:41:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:41:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:41:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:41:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:41:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:41:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3800: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:41:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:13.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:13 standalone.localdomain systemd[1]: tmp-crun.ATngAC.mount: Deactivated successfully.
Oct 13 15:41:13 standalone.localdomain podman[526323]: 2025-10-13 15:41:13.795666194 +0000 UTC m=+0.067090776 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:41:13 standalone.localdomain podman[526323]: 2025-10-13 15:41:13.807910499 +0000 UTC m=+0.079335051 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct 13 15:41:13 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:41:14 standalone.localdomain ceph-mon[29756]: pgmap v3800: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3801: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:15.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:16 standalone.localdomain ceph-mon[29756]: pgmap v3801: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:41:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3802: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:41:17 standalone.localdomain podman[526343]: 2025-10-13 15:41:17.829165367 +0000 UTC m=+0.091259187 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:41:17 standalone.localdomain podman[526343]: 2025-10-13 15:41:17.844060193 +0000 UTC m=+0.106154003 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:41:17 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:41:18 standalone.localdomain sshd[525710]: Received disconnect from 38.102.83.114 port 58128:11: disconnected by user
Oct 13 15:41:18 standalone.localdomain sshd[525710]: Disconnected from user zuul 38.102.83.114 port 58128
Oct 13 15:41:18 standalone.localdomain sshd[525691]: pam_unix(sshd:session): session closed for user zuul
Oct 13 15:41:18 standalone.localdomain systemd[1]: session-301.scope: Deactivated successfully.
Oct 13 15:41:18 standalone.localdomain systemd-logind[45629]: Session 301 logged out. Waiting for processes to exit.
Oct 13 15:41:18 standalone.localdomain systemd-logind[45629]: Removed session 301.
Oct 13 15:41:18 standalone.localdomain ceph-mon[29756]: pgmap v3802: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:41:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2348950073' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:41:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:41:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2348950073' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:41:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:18.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2348950073' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:41:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2348950073' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:41:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3803: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:20 standalone.localdomain ceph-mon[29756]: pgmap v3803: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:41:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:20.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:20 standalone.localdomain systemd[1]: tmp-crun.nSZoQR.mount: Deactivated successfully.
Oct 13 15:41:20 standalone.localdomain podman[526365]: 2025-10-13 15:41:20.811926664 +0000 UTC m=+0.079116155 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:41:20 standalone.localdomain podman[526365]: 2025-10-13 15:41:20.816567026 +0000 UTC m=+0.083756507 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:41:20 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:41:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3804: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:41:22 standalone.localdomain ceph-mon[29756]: pgmap v3804: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:41:23
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'vms', 'backups', 'manila_data', 'volumes', 'images', '.mgr']
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:41:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38554 DF PROTO=TCP SPT=57540 DPT=9102 SEQ=3545834102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCCA47F0000000001030307) 
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3805: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:41:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:41:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:23.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38555 DF PROTO=TCP SPT=57540 DPT=9102 SEQ=3545834102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCCA8770000000001030307) 
Oct 13 15:41:24 standalone.localdomain ceph-mon[29756]: pgmap v3805: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3806: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:25.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38556 DF PROTO=TCP SPT=57540 DPT=9102 SEQ=3545834102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCCB0770000000001030307) 
Oct 13 15:41:26 standalone.localdomain ceph-mon[29756]: pgmap v3806: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:41:27 standalone.localdomain sshd[526383]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:41:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3807: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:28 standalone.localdomain systemd[1]: Stopping User Manager for UID 1000...
Oct 13 15:41:28 standalone.localdomain systemd[525694]: Activating special unit Exit the Session...
Oct 13 15:41:28 standalone.localdomain systemd[525694]: Stopped target Main User Target.
Oct 13 15:41:28 standalone.localdomain systemd[525694]: Stopped target Basic System.
Oct 13 15:41:28 standalone.localdomain systemd[525694]: Stopped target Paths.
Oct 13 15:41:28 standalone.localdomain systemd[525694]: Stopped target Sockets.
Oct 13 15:41:28 standalone.localdomain systemd[525694]: Stopped target Timers.
Oct 13 15:41:28 standalone.localdomain systemd[525694]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 13 15:41:28 standalone.localdomain systemd[525694]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 15:41:28 standalone.localdomain systemd[525694]: Closed D-Bus User Message Bus Socket.
Oct 13 15:41:28 standalone.localdomain systemd[525694]: Stopped Create User's Volatile Files and Directories.
Oct 13 15:41:28 standalone.localdomain systemd[525694]: Removed slice User Application Slice.
Oct 13 15:41:28 standalone.localdomain systemd[525694]: Reached target Shutdown.
Oct 13 15:41:28 standalone.localdomain systemd[525694]: Finished Exit the Session.
Oct 13 15:41:28 standalone.localdomain systemd[525694]: Reached target Exit the Session.
Oct 13 15:41:28 standalone.localdomain systemd[1]: user@1000.service: Deactivated successfully.
Oct 13 15:41:28 standalone.localdomain systemd[1]: Stopped User Manager for UID 1000.
Oct 13 15:41:28 standalone.localdomain unix_chkpwd[526385]: password check failed for user (root)
Oct 13 15:41:28 standalone.localdomain sshd[526383]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:41:28 standalone.localdomain ceph-mon[29756]: pgmap v3807: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:28.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006647822445561139 of space, bias 1.0, pg target 0.6647822445561139 quantized to 32 (current 32)
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0072334327889447235 of space, bias 1.0, pg target 0.7233432788944724 quantized to 32 (current 32)
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.30076598269123395 quantized to 32 (current 32)
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Oct 13 15:41:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3808: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:29 standalone.localdomain sshd[526383]: Failed password for root from 193.46.255.244 port 25856 ssh2
Oct 13 15:41:30 standalone.localdomain unix_chkpwd[526386]: password check failed for user (root)
Oct 13 15:41:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38557 DF PROTO=TCP SPT=57540 DPT=9102 SEQ=3545834102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCCC0360000000001030307) 
Oct 13 15:41:30 standalone.localdomain ceph-mon[29756]: pgmap v3808: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:41:30 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:30.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:30 standalone.localdomain podman[526387]: 2025-10-13 15:41:30.837924709 +0000 UTC m=+0.096333371 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:41:30 standalone.localdomain podman[526387]: 2025-10-13 15:41:30.880020469 +0000 UTC m=+0.138429201 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 15:41:30 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:41:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3809: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:31 standalone.localdomain sshd[526383]: Failed password for root from 193.46.255.244 port 25856 ssh2
Oct 13 15:41:32 standalone.localdomain unix_chkpwd[526413]: password check failed for user (root)
Oct 13 15:41:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:41:32 standalone.localdomain ceph-mon[29756]: pgmap v3809: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:41:32 standalone.localdomain podman[526414]: 2025-10-13 15:41:32.802747786 +0000 UTC m=+0.072409279 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm)
Oct 13 15:41:32 standalone.localdomain podman[526414]: 2025-10-13 15:41:32.817063144 +0000 UTC m=+0.086724637 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, release=1755695350, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 15:41:32 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:41:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3810: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:33.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:33 standalone.localdomain sshd[526383]: Failed password for root from 193.46.255.244 port 25856 ssh2
Oct 13 15:41:34 standalone.localdomain ceph-mon[29756]: pgmap v3810: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:41:34 standalone.localdomain podman[526434]: 2025-10-13 15:41:34.815871511 +0000 UTC m=+0.087672326 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 13 15:41:34 standalone.localdomain podman[526434]: 2025-10-13 15:41:34.853517795 +0000 UTC m=+0.125318590 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Oct 13 15:41:34 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:41:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3811: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:35 standalone.localdomain sshd[526383]: Received disconnect from 193.46.255.244 port 25856:11:  [preauth]
Oct 13 15:41:35 standalone.localdomain sshd[526383]: Disconnected from authenticating user root 193.46.255.244 port 25856 [preauth]
Oct 13 15:41:35 standalone.localdomain sshd[526383]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:41:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:35.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:35 standalone.localdomain sshd[526454]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:41:36 standalone.localdomain unix_chkpwd[526456]: password check failed for user (root)
Oct 13 15:41:36 standalone.localdomain sshd[526454]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:41:36 standalone.localdomain ceph-mon[29756]: pgmap v3811: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:41:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3812: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:38 standalone.localdomain ceph-mon[29756]: pgmap v3812: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:38 standalone.localdomain sshd[526454]: Failed password for root from 193.46.255.244 port 11070 ssh2
Oct 13 15:41:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:41:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:41:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:41:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:41:38 standalone.localdomain podman[526460]: 2025-10-13 15:41:38.810270457 +0000 UTC m=+0.061170195 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:41:38 standalone.localdomain podman[526460]: 2025-10-13 15:41:38.825924647 +0000 UTC m=+0.076824435 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:41:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:38.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:38 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:41:38 standalone.localdomain podman[526458]: 2025-10-13 15:41:38.829785665 +0000 UTC m=+0.091257866 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, name=rhosp17/openstack-swift-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, tcib_managed=true, io.openshift.expose-services=, container_name=swift_container_server, version=17.1.9, build-date=2025-07-21T15:54:32)
Oct 13 15:41:38 standalone.localdomain podman[526459]: 2025-10-13 15:41:38.903978237 +0000 UTC m=+0.162822049 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-swift-account-container, name=rhosp17/openstack-swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, architecture=x86_64, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1)
Oct 13 15:41:38 standalone.localdomain podman[526457]: 2025-10-13 15:41:38.930542001 +0000 UTC m=+0.193281891 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, config_id=tripleo_step4, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, io.openshift.expose-services=, build-date=2025-07-21T14:56:28, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, release=1, com.redhat.component=openstack-swift-object-container, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true)
Oct 13 15:41:39 standalone.localdomain podman[526458]: 2025-10-13 15:41:39.041178421 +0000 UTC m=+0.302650632 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, 
batch=17.1_20250721.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, version=17.1.9, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-container-container, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=)
Oct 13 15:41:39 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:41:39 standalone.localdomain podman[526457]: 2025-10-13 15:41:39.119919152 +0000 UTC m=+0.382659112 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, tcib_managed=true, container_name=swift_object_server, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, 
version=17.1.9, build-date=2025-07-21T14:56:28, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public)
Oct 13 15:41:39 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:41:39 standalone.localdomain podman[526459]: 2025-10-13 15:41:39.151982154 +0000 UTC m=+0.410826026 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', 
'/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, build-date=2025-07-21T16:11:22, container_name=swift_account_server, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 15:41:39 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:41:39 standalone.localdomain account-server[114555]: 172.20.0.100 - - [13/Oct/2025:15:41:39 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txd29951d7ee2f43ab98d84-0068ed1db3" "proxy-server 2" 0.0005 "-" 20 -
Oct 13 15:41:39 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: txd29951d7ee2f43ab98d84-0068ed1db3)
Oct 13 15:41:39 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txd29951d7ee2f43ab98d84-0068ed1db3)
Oct 13 15:41:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3813: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:39 standalone.localdomain systemd[1]: tmp-crun.s5YGcs.mount: Deactivated successfully.
Oct 13 15:41:40 standalone.localdomain unix_chkpwd[526561]: password check failed for user (root)
Oct 13 15:41:40 standalone.localdomain ceph-mon[29756]: pgmap v3813: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:40.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:41 standalone.localdomain podman[467099]: time="2025-10-13T15:41:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:41:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3814: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:41:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:41:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:41:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:41:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49131 "" "Go-http-client/1.1"
Oct 13 15:41:41 standalone.localdomain podman[526562]: 2025-10-13 15:41:41.820905688 +0000 UTC m=+0.090912556 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=edpm)
Oct 13 15:41:41 standalone.localdomain podman[526562]: 2025-10-13 15:41:41.83173255 +0000 UTC m=+0.101739448 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:41:41 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:41:41 standalone.localdomain sshd[526454]: Failed password for root from 193.46.255.244 port 11070 ssh2
Oct 13 15:41:42 standalone.localdomain unix_chkpwd[526581]: password check failed for user (root)
Oct 13 15:41:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:41:42 standalone.localdomain ceph-mon[29756]: pgmap v3814: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:41:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:41:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:41:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:41:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:41:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:41:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:41:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:41:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:41:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:41:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:41:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:41:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3815: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:43.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:44 standalone.localdomain sshd[526454]: Failed password for root from 193.46.255.244 port 11070 ssh2
Oct 13 15:41:44 standalone.localdomain ceph-mon[29756]: pgmap v3815: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:41:44 standalone.localdomain podman[526582]: 2025-10-13 15:41:44.828254649 +0000 UTC m=+0.093785434 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 13 15:41:44 standalone.localdomain podman[526582]: 2025-10-13 15:41:44.839690149 +0000 UTC m=+0.105220984 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:41:44 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:41:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3816: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:45.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:45 standalone.localdomain sshd[526454]: Received disconnect from 193.46.255.244 port 11070:11:  [preauth]
Oct 13 15:41:45 standalone.localdomain sshd[526454]: Disconnected from authenticating user root 193.46.255.244 port 11070 [preauth]
Oct 13 15:41:45 standalone.localdomain sshd[526454]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:41:45 standalone.localdomain sshd[526602]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:41:46 standalone.localdomain ceph-mon[29756]: pgmap v3816: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:46 standalone.localdomain unix_chkpwd[526604]: password check failed for user (root)
Oct 13 15:41:46 standalone.localdomain sshd[526602]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:41:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:41:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3817: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e65 do_prune osdmap full prune enabled
Oct 13 15:41:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e66 e66: 1 total, 1 up, 1 in
Oct 13 15:41:47 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e66: 1 total, 1 up, 1 in
Oct 13 15:41:48 standalone.localdomain sshd[526602]: Failed password for root from 193.46.255.244 port 55494 ssh2
Oct 13 15:41:48 standalone.localdomain unix_chkpwd[526605]: password check failed for user (root)
Oct 13 15:41:48 standalone.localdomain ceph-mon[29756]: pgmap v3817: 177 pgs: 177 active+clean; 318 MiB data, 257 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:41:48 standalone.localdomain ceph-mon[29756]: osdmap e66: 1 total, 1 up, 1 in
Oct 13 15:41:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:41:48 standalone.localdomain podman[526606]: 2025-10-13 15:41:48.834630511 +0000 UTC m=+0.094083272 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:41:48 standalone.localdomain podman[526606]: 2025-10-13 15:41:48.846048182 +0000 UTC m=+0.105500903 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:41:48 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:41:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:48.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e66 do_prune osdmap full prune enabled
Oct 13 15:41:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 e67: 1 total, 1 up, 1 in
Oct 13 15:41:49 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e67: 1 total, 1 up, 1 in
Oct 13 15:41:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3820: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Oct 13 15:41:50 standalone.localdomain sshd[526602]: Failed password for root from 193.46.255.244 port 55494 ssh2
Oct 13 15:41:50 standalone.localdomain ceph-mon[29756]: osdmap e67: 1 total, 1 up, 1 in
Oct 13 15:41:50 standalone.localdomain ceph-mon[29756]: pgmap v3820: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Oct 13 15:41:50 standalone.localdomain unix_chkpwd[526629]: password check failed for user (root)
Oct 13 15:41:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:50.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3821: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Oct 13 15:41:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:41:51 standalone.localdomain podman[526630]: 2025-10-13 15:41:51.814305654 +0000 UTC m=+0.079908278 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:41:51 standalone.localdomain podman[526630]: 2025-10-13 15:41:51.843285612 +0000 UTC m=+0.108888206 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:41:51 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:41:51 standalone.localdomain sudo[526649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:41:51 standalone.localdomain sudo[526649]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:41:51 standalone.localdomain sudo[526649]: pam_unix(sudo:session): session closed for user root
Oct 13 15:41:52 standalone.localdomain sudo[526667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:41:52 standalone.localdomain sudo[526667]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:41:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:41:52 standalone.localdomain ceph-mon[29756]: pgmap v3821: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Oct 13 15:41:52 standalone.localdomain sudo[526667]: pam_unix(sudo:session): session closed for user root
Oct 13 15:41:52 standalone.localdomain sshd[526602]: Failed password for root from 193.46.255.244 port 55494 ssh2
Oct 13 15:41:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:41:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:41:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:41:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:41:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:41:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:41:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:41:52 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:41:52 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 11fbbcd1-3c1f-46b7-803b-afcd88599ded (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:41:52 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 11fbbcd1-3c1f-46b7-803b-afcd88599ded (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:41:52 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 11fbbcd1-3c1f-46b7-803b-afcd88599ded (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:41:52 standalone.localdomain sudo[526718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:41:52 standalone.localdomain sudo[526718]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:41:52 standalone.localdomain sudo[526718]: pam_unix(sudo:session): session closed for user root
Oct 13 15:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:41:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:41:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60009 DF PROTO=TCP SPT=56440 DPT=9102 SEQ=278869355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCD19AF0000000001030307) 
Oct 13 15:41:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3822: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Oct 13 15:41:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:41:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:41:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:41:53 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:41:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:53.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:54 standalone.localdomain sshd[526602]: Received disconnect from 193.46.255.244 port 55494:11:  [preauth]
Oct 13 15:41:54 standalone.localdomain sshd[526602]: Disconnected from authenticating user root 193.46.255.244 port 55494 [preauth]
Oct 13 15:41:54 standalone.localdomain sshd[526602]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 13 15:41:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60010 DF PROTO=TCP SPT=56440 DPT=9102 SEQ=278869355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCD1DB60000000001030307) 
Oct 13 15:41:54 standalone.localdomain ceph-mon[29756]: pgmap v3822: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Oct 13 15:41:55 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:41:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:41:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:41:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3823: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 13 15:41:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:55.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:41:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60011 DF PROTO=TCP SPT=56440 DPT=9102 SEQ=278869355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCD25B60000000001030307) 
Oct 13 15:41:57 standalone.localdomain ceph-mon[29756]: pgmap v3823: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 13 15:41:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:41:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3824: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 27 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Oct 13 15:41:58 standalone.localdomain ceph-mon[29756]: pgmap v3824: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 27 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Oct 13 15:41:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:41:58.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:41:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3825: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 3.2 KiB/s rd, 503 B/s wr, 4 op/s
Oct 13 15:42:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60012 DF PROTO=TCP SPT=56440 DPT=9102 SEQ=278869355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCD35760000000001030307) 
Oct 13 15:42:00 standalone.localdomain ceph-mon[29756]: pgmap v3825: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 3.2 KiB/s rd, 503 B/s wr, 4 op/s
Oct 13 15:42:00 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:00.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3826: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Oct 13 15:42:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:42:01 standalone.localdomain podman[526736]: 2025-10-13 15:42:01.816625385 +0000 UTC m=+0.080624351 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:42:01 standalone.localdomain podman[526736]: 2025-10-13 15:42:01.85794361 +0000 UTC m=+0.121942626 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:42:01 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:42:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:42:02 standalone.localdomain ceph-mon[29756]: pgmap v3826: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Oct 13 15:42:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:03.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:42:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:03.229 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 13 15:42:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:03.257 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 13 15:42:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:03.258 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:42:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:03.258 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 13 15:42:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:03.282 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:42:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3827: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Oct 13 15:42:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:42:03 standalone.localdomain podman[526761]: 2025-10-13 15:42:03.832926518 +0000 UTC m=+0.098171298 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, distribution-scope=public, container_name=openstack_network_exporter, release=1755695350, architecture=x86_64)
Oct 13 15:42:03 standalone.localdomain podman[526761]: 2025-10-13 15:42:03.876926196 +0000 UTC m=+0.142170966 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, version=9.6, architecture=x86_64)
Oct 13 15:42:03 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:42:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:03.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:04 standalone.localdomain ceph-mon[29756]: pgmap v3827: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Oct 13 15:42:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:05.305 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:42:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:05.306 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:42:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:05.306 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:42:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:05.531 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:42:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:05.531 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:42:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:05.531 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:42:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:05.532 2 DEBUG nova.objects.instance [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:42:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3828: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Oct 13 15:42:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:42:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:05.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:05 standalone.localdomain podman[526781]: 2025-10-13 15:42:05.808924137 +0000 UTC m=+0.072879214 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS)
Oct 13 15:42:05 standalone.localdomain podman[526781]: 2025-10-13 15:42:05.821912274 +0000 UTC m=+0.085867371 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 13 15:42:05 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:42:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:06.007 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:42:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:06.022 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:42:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:06.022 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:42:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:06.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:42:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:06.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:42:06 standalone.localdomain ceph-mon[29756]: pgmap v3828: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Oct 13 15:42:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:06.961 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:42:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:06.962 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:42:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:06.963 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:42:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:07.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:42:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:07.229 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #207. Immutable memtables: 0.
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.448521) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 207
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370127448567, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 784, "num_deletes": 250, "total_data_size": 574850, "memory_usage": 590336, "flush_reason": "Manual Compaction"}
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #208: started
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370127453104, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 208, "file_size": 392131, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90071, "largest_seqno": 90854, "table_properties": {"data_size": 388974, "index_size": 1013, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8588, "raw_average_key_size": 20, "raw_value_size": 382162, "raw_average_value_size": 912, "num_data_blocks": 47, "num_entries": 419, "num_filter_entries": 419, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760370073, "oldest_key_time": 1760370073, "file_creation_time": 1760370127, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 208, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 4638 microseconds, and 1868 cpu microseconds.
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.453155) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #208: 392131 bytes OK
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.453177) [db/memtable_list.cc:519] [default] Level-0 commit table #208 started
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.455448) [db/memtable_list.cc:722] [default] Level-0 commit table #208: memtable #1 done
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.455473) EVENT_LOG_v1 {"time_micros": 1760370127455464, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.455524) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 570864, prev total WAL file size 571187, number of live WAL files 2.
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000204.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.456303) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353033' seq:72057594037927935, type:22 .. '6D6772737461740033373534' seq:0, type:0; will stop at (end)
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [208(382KB)], [206(6876KB)]
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370127456382, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [208], "files_L6": [206], "score": -1, "input_data_size": 7434025, "oldest_snapshot_seqno": -1}
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #209: 7119 keys, 5543974 bytes, temperature: kUnknown
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370127488804, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 209, "file_size": 5543974, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5505435, "index_size": 19623, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 189071, "raw_average_key_size": 26, "raw_value_size": 5384776, "raw_average_value_size": 756, "num_data_blocks": 765, "num_entries": 7119, "num_filter_entries": 7119, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760370127, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.489055) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 5543974 bytes
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.490731) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.8 rd, 170.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 6.7 +0.0 blob) out(5.3 +0.0 blob), read-write-amplify(33.1) write-amplify(14.1) OK, records in: 7617, records dropped: 498 output_compression: NoCompression
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.490751) EVENT_LOG_v1 {"time_micros": 1760370127490742, "job": 130, "event": "compaction_finished", "compaction_time_micros": 32489, "compaction_time_cpu_micros": 18027, "output_level": 6, "num_output_files": 1, "total_output_size": 5543974, "num_input_records": 7617, "num_output_records": 7119, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000208.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370127490896, "job": 130, "event": "table_file_deletion", "file_number": 208}
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370127491633, "job": 130, "event": "table_file_deletion", "file_number": 206}
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.456210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.491737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.491743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.491746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.491755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:42:07 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:42:07.491758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:42:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3829: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.229 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.229 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.266 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.267 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.268 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.268 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.269 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:42:08 standalone.localdomain ceph-mon[29756]: pgmap v3829: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:42:08 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2239173894' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.710 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.781 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.782 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.782 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.786 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.786 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.970 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.971 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9310MB free_disk=6.9495849609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.972 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.972 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:42:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:08.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:09.341 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:42:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:09.341 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:42:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:09.341 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:42:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:09.342 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:42:09 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2239173894' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:42:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3830: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:09.665 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Refreshing inventories for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 13 15:42:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:42:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:42:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:42:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:42:09 standalone.localdomain podman[526822]: 2025-10-13 15:42:09.84533497 +0000 UTC m=+0.096626012 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, architecture=x86_64, build-date=2025-07-21T14:56:28, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, io.openshift.expose-services=, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, container_name=swift_object_server, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:42:09 standalone.localdomain podman[526823]: 2025-10-13 15:42:09.902045587 +0000 UTC m=+0.148114819 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, batch=17.1_20250721.1, io.buildah.version=1.33.12, distribution-scope=public, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, tcib_managed=true, build-date=2025-07-21T15:54:32, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 
17.1 swift-container, architecture=x86_64, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=swift_container_server, name=rhosp17/openstack-swift-container, release=1, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:42:09 standalone.localdomain podman[526827]: 2025-10-13 15:42:09.96451943 +0000 UTC m=+0.202982468 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, build-date=2025-07-21T16:11:22, container_name=swift_account_server, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, tcib_managed=true, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, 
summary=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container, distribution-scope=public)
Oct 13 15:42:10 standalone.localdomain podman[526830]: 2025-10-13 15:42:10.009356403 +0000 UTC m=+0.243235942 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:42:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:10.017 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Updating ProviderTree inventory for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 13 15:42:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:10.017 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Updating inventory in ProviderTree for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 13 15:42:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:10.047 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Refreshing aggregate associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 13 15:42:10 standalone.localdomain podman[526830]: 2025-10-13 15:42:10.049884495 +0000 UTC m=+0.283764034 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:42:10 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:42:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:10.078 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Refreshing trait associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 13 15:42:10 standalone.localdomain podman[526822]: 2025-10-13 15:42:10.07976266 +0000 UTC m=+0.331053642 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, com.redhat.component=openstack-swift-object-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, version=17.1.9, container_name=swift_object_server)
Oct 13 15:42:10 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:42:10 standalone.localdomain podman[526823]: 2025-10-13 15:42:10.121898401 +0000 UTC m=+0.367967633 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, release=1, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, batch=17.1_20250721.1, config_id=tripleo_step4, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 15:42:10 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:42:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:10.143 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:42:10 standalone.localdomain podman[526827]: 2025-10-13 15:42:10.145719761 +0000 UTC m=+0.384182789 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, build-date=2025-07-21T16:11:22, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, container_name=swift_account_server, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=)
Oct 13 15:42:10 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:42:10 standalone.localdomain ceph-mon[29756]: pgmap v3830: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:42:10 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3976226775' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:42:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:10.554 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:42:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:10.561 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:42:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:10.582 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:42:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:10.583 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:42:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:10.583 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:42:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:10.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:11 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3976226775' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:42:11 standalone.localdomain podman[467099]: time="2025-10-13T15:42:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:42:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:11.583 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:42:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:11.584 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:42:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3831: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:42:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:42:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:42:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49124 "" "Go-http-client/1.1"
Oct 13 15:42:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:42:12 standalone.localdomain podman[526948]: 2025-10-13 15:42:12.134977876 +0000 UTC m=+0.084557111 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:42:12 standalone.localdomain podman[526948]: 2025-10-13 15:42:12.147939533 +0000 UTC m=+0.097518788 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible)
Oct 13 15:42:12 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:42:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:42:12 standalone.localdomain ceph-mon[29756]: pgmap v3831: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:42:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:42:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:42:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:42:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:42:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:42:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:42:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:42:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:42:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:42:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:42:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:42:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3832: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:14.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:14 standalone.localdomain ceph-mon[29756]: pgmap v3832: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3833: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:42:15 standalone.localdomain systemd[1]: tmp-crun.miGfIG.mount: Deactivated successfully.
Oct 13 15:42:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:15.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:15 standalone.localdomain podman[526967]: 2025-10-13 15:42:15.825073099 +0000 UTC m=+0.094010370 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:42:15 standalone.localdomain podman[526967]: 2025-10-13 15:42:15.834802558 +0000 UTC m=+0.103739859 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 13 15:42:15 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:42:16 standalone.localdomain ceph-mon[29756]: pgmap v3833: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:42:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3834: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:18 standalone.localdomain ceph-mon[29756]: pgmap v3834: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:42:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/358525451' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:42:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:42:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/358525451' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:42:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:19.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/358525451' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:42:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/358525451' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:42:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3835: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:42:19 standalone.localdomain podman[526986]: 2025-10-13 15:42:19.833657179 +0000 UTC m=+0.094246248 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:42:19 standalone.localdomain podman[526986]: 2025-10-13 15:42:19.84512817 +0000 UTC m=+0.105717199 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:42:19 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:42:20 standalone.localdomain ceph-mon[29756]: pgmap v3835: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:20.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:20.995 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:42:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:20.999 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:20.999 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.000 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.052 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.053 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.053 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.082 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.082 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '93256af5-9bd6-48e0-af21-43f094e5cdd4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:42:21.000370', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34cfbe16-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': 'd78e45010455729008c58ec51e794a4e47b42d74fbba51d707f87b2793a6d8ac'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:42:21.000370', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34cfd9c8-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': '8214b1973803a2616ad7a3ffb4310825c7e382c04ef0584ba1612b096cf3bbd4'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:42:21.000370', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '34cff246-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': '4c37ddba83a693621f9536f4a97a27d4e0a89abc4c1552a9b4c0e109ee45c10f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:42:21.000370', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34d44954-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.243710723, 'message_signature': 'fd408f3de318c3385f3833b331a5175ecae7581472115d1189cb9ee45d02f828'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:42:21.000370', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34d463f8-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.243710723, 'message_signature': 'd93e9d10104ef1701f420c02a917a0b74adad09b43e1efb8ed9ab263c4d335db'}]}, 'timestamp': '2025-10-13 15:42:21.083798', '_unique_id': '6dde968b4aa2413c98c19515757f44d5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.086 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.088 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.088 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.088 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 15370724 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.089 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.090 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.090 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '03c81f96-2540-4e89-9945-3ff81ccbcbc9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:42:21.088424', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34d5356c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': '041ad891f84d88dca184c4e36fe581a71e325211c3808efddade79adecf46f75'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 15370724, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:42:21.088424', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34d54818-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': '00d4b4278dd7078fa6be7dac5e77bab5a2a1c710acbaf392b7ad98a2456f0301'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:42:21.088424', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '34d55b8c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': '6c108130a9b1e4e956b0fa25ad5d6fbe22079c15f9a3af2f1afa687d69adddc4'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:42:21.088424', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34d579e6-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.243710723, 'message_signature': '750b4477a45c9f09f2c84a1c29cc07c591439d547d1e02a73ae5277afd9fa3c3'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:42:21.088424', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34d58d8c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.243710723, 'message_signature': '051b8adca722ebb7f03b57c22659c34f8d6f89964730ec9a2ec51972c289bd0c'}]}, 'timestamp': '2025-10-13 15:42:21.091163', '_unique_id': '4c515e388b8445c9844a2840617c80f9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.092 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.093 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.112 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.113 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.113 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.132 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.133 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd142b053-8cfb-45b8-9ffd-7a5da679196c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:42:21.093703', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34d8f120-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.282958445, 'message_signature': 'ccb875e2b877e07bdaeafdc0a54ad0ef89c963e4219b013b5651b6b4afd0123a'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:42:21.093703', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34d90430-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.282958445, 'message_signature': 'ec4311ae3088cd8d64b82d4eb28c2451ab7ef0e13a61fc2dea2f8bb2c7967fbb'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:42:21.093703', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '34d913ee-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.282958445, 'message_signature': '8f6a03482f6f5c2fd3959674bdecd87aedd3c6a80cfbb0ed11b335badb316f73'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:42:21.093703', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34dbfc3a-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.303464403, 'message_signature': 'e0ddc0b3ee56d5eb9146ee8f0f2659b1c1a09264a34962ba0c692797fbbbd770'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:42:21.093703', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34dc0eb4-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.303464403, 'message_signature': '3a17365cb73d759fd81db67c662cff85de534d94a5a33b797680d00b9b4c1b4c'}]}, 'timestamp': '2025-10-13 15:42:21.133802', '_unique_id': 'd884997597844a24a6a3c73f192e2bb3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.135 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.136 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.140 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 131 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.144 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '470baa75-46e7-424a-90ee-c21eaf72d340', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 131, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:42:21.136469', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '34dd3672-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.325760056, 'message_signature': '6dfbef94eb9d96a91ef7e55097ccf613c0a42a6ce0f6d5cfd3fe992cfdb4c6a0'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:42:21.136469', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '34ddbca0-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.330594654, 'message_signature': 'bfcf62b19c15fca819c2dc1a0ba776d0bb01a3b77ae2c550acc5f06fcdc51da1'}]}, 'timestamp': '2025-10-13 15:42:21.144827', '_unique_id': 'cd20f81e68e94bffb20c652b164e7438'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.145 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.146 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.147 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.147 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0ac166d2-2d32-4d4c-9a82-6e5b6dacd344', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:42:21.147130', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '34de28ca-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.325760056, 'message_signature': '1682dbdaec563ec5c14da4f9d88a6b7a5c3775ebe75d1ce5a394c277d729eb7a'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:42:21.147130', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '34de3bb2-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.330594654, 'message_signature': '6335d9800f2f5fbb1f015a71395f0e102c4750ba5f9268d8a39db2c98905918f'}]}, 'timestamp': '2025-10-13 15:42:21.148067', '_unique_id': '445cef1655ea418997671b3def14b04a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.149 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.150 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.150 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.150 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '67ee1a99-fcee-457b-b16b-02a2b50707af', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:42:21.150269', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '34dea494-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.325760056, 'message_signature': '1e02bd732d75fbd671aae4a9fd9f78e68884ddd3bd32ce2c3ce6c496705f6f88'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:42:21.150269', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '34deb5ba-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.330594654, 'message_signature': '0dce43a24d5556488dc6483e24d9c4a93c87d9877c966795557522b4f1e56db2'}]}, 'timestamp': '2025-10-13 15:42:21.151218', '_unique_id': 'b68b118ac79940de933eef8cc1301d50'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.152 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.153 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.153 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.154 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '598b6745-afc6-4fd6-ab28-5784718ceb52', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:42:21.153563', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '34df2414-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.325760056, 'message_signature': '92280599a1e050d01be906a1302f5514837ba9ca41fddd83e970b99647c0d102'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:42:21.153563', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '34df3508-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.330594654, 'message_signature': 'ed8e8698c138b6c1bccc63c9ea524ec94846fb6155c97f1d6c5f1c3797fd6ef9'}]}, 'timestamp': '2025-10-13 15:42:21.154445', '_unique_id': 'b1989e8276984cacbc463df0db870fdd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.155 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.156 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.176 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 53.14453125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.197 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1756d451-b541-4146-8447-75f3865c3abc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 53.14453125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:42:21.156841', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '34e29586-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.365229805, 'message_signature': '10cd1b01d27ecc2945db3fcbf1c3feb14d527c4d440662ae9b7da25fda5fb662'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': '2025-10-13T15:42:21.156841', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '34e5dee4-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.386779045, 'message_signature': '4ae55485ea5efbdb7dd1be128ae77919e0e484226d377492d69c011c30354157'}]}, 'timestamp': '2025-10-13 15:42:21.198122', '_unique_id': '20817d2f17ac4619b3529d4ec0bedc8f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.199 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.200 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.200 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.201 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '119787b5-1578-4f18-a424-f4338b9a8c31', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:42:21.200592', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '34e6514e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.325760056, 'message_signature': 'c60eb1fbbe0d22786b6c735efe7f65e79eb485224e1f0bfb818cf4dd13cf03ca'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:42:21.200592', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '34e6627e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.330594654, 'message_signature': '808b87e7df13f2a2705441088d395b4d5255589cd59ac6826e9b616fb4b417b3'}]}, 'timestamp': '2025-10-13 15:42:21.201492', '_unique_id': '1bbe2100bd3a4f5b93c5a2fb8f2a929b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.202 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.203 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.203 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.204 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.204 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.205 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.205 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2147b796-659d-4bfd-8579-22fd8609d7e3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:42:21.203742', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34e6cc1e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.282958445, 'message_signature': '06158d546828d84928718e3cc0fa650ad0feacb5502c6378e130ceeb1325e9ad'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:42:21.203742', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34e6dc5e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.282958445, 'message_signature': '3f6df74ad1a7912a90c5f14fce2ff5cfefba5690d6fbb0fdbfa582ad2c563816'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:42:21.203742', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '34e6ed48-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.282958445, 'message_signature': '146f39748cd306c77fc56e218bd7d361978121ab9d4c48847c481591f2900bfb'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:42:21.203742', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34e6fc8e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.303464403, 'message_signature': '797072865c5f616229ba6d6379beea8f36414fdf5371b8c16dad48b59b569df1'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:42:21.203742', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34e70cba-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.303464403, 'message_signature': 'b66b001340194a8644e4c273898a39fd41d4cc31e4211812c6ee754f846ae0b5'}]}, 'timestamp': '2025-10-13 15:42:21.205829', '_unique_id': 'be33ff77f7db4614aaab4019832b0ef1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.206 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.208 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.208 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.208 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.209 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.209 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.209 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b8eaa2bf-c098-4735-94ed-4ebb15503515', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:42:21.208147', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34e777fe-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': '2da874bed02ab5b252a597d9ed00607bf4b27c5c0c6bd6ba57cc55f7b5ce0d3a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:42:21.208147', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34e78988-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': 'e9fa005cf66a1f4df3937e51c2ecdbba1ca98b866926dca445686b02645e30ce'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:42:21.208147', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '34e798ce-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': '9ef2a97327e07efddda812c70ebef05f4bd9100e7df4b358927c403a1e2daa36'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:42:21.208147', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34e7a904-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.243710723, 'message_signature': '2675d01195379fb90ef759306a457d752e308425cee9cc85b46a08ac96310376'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:42:21.208147', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34e7b7be-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.243710723, 'message_signature': '38ee50ad1a3ab4879b18c56351fdbf601b682974fd3b94b8d6fae2b1f25ea919'}]}, 'timestamp': '2025-10-13 15:42:21.210198', '_unique_id': '526b91dfed2043cca7af944402db60e4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.211 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.212 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.212 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.212 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.212 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29758464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.213 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.213 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 40960 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.214 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.214 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5df58205-9b93-4194-8ec2-3f139f61b984', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29758464, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:42:21.212827', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34e82ef6-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': 'c053fea700cd9da7d132421a095c0e13eb0d1736215425053c753103d9cfd127'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:42:21.212827', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34e83ed2-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': '895f1f052614407bde9750ab5d0bc650e3d629a3d31194a898bdc1e42b2879d7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 40960, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:42:21.212827', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '34e84f76-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': 'bdff39154674d084c6f315c8d1fc80578202ebaf6b0bf36874bb3d1c8f9fcfb6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:42:21.212827', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34e85e8a-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.243710723, 'message_signature': '1fa8ff120287ea72a8fda1924b9d0d9ec9da93d0838ae3b048bb1b7a3e7b85ac'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:42:21.212827', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34e86e8e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.243710723, 'message_signature': 'b942c3b9da6ead4978217cf4f587e74912b8bd09e76f217dc4132d97e769c37c'}]}, 'timestamp': '2025-10-13 15:42:21.214880', '_unique_id': '967eeef2e2e44f829b8e5919ae579d04'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.215 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.217 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.217 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1018 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.217 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.218 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 10 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.218 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.218 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ad286899-2578-43a4-b13d-64355e36bdff', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1018, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:42:21.217192', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34e8d928-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': '37dbdd5f02d48464e405f1e714c9a31e6083521726f13630ca9fd8f1e67c2a09'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:42:21.217192', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34e8ea80-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': '0cfec49652710765348f2cef8455150d1399d7c32c4bfa568b385131f764659f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 10, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:42:21.217192', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '34e8fc32-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': '9dc01af34fcc3c11f69243d551965c1b2e8067e04f369cf6f9d68159dc426d8c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:42:21.217192', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34e90cfe-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.243710723, 'message_signature': '4c424949334c53e6620ee326255b6b390ccd1e849e9b35c988341c43aac7dcbe'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:42:21.217192', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34e91bd6-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.243710723, 'message_signature': '741047c3419ea8cf9fc42c41f275a1ae23fa6c33ea2728be915e9650743b4051'}]}, 'timestamp': '2025-10-13 15:42:21.219316', '_unique_id': '76bb0a8b7b6e45a181f1a151c0792c6e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.220 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.221 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.221 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 8960 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.222 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 4008 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '80164874-63d8-4408-9f99-c2cf53e5f46a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8960, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:42:21.221942', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '34e992fa-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.325760056, 'message_signature': 'ed053f91907d7c5646f41fa073b72acefc96ce9c4c80a7bb7bd73a84436a5c40'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4008, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:42:21.221942', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '34e9a4fc-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.330594654, 'message_signature': '820ec4eba01d9238c0192a6a68d49ff8985f6032f2e7aba6e0f9e67284b86db9'}]}, 'timestamp': '2025-10-13 15:42:21.222849', '_unique_id': 'abefc63880cd4dcdb95859de9b83c393'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.223 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.224 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.225 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 689775433 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.225 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 105173955 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.226 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 8149490 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.226 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.226 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bbc89258-69ba-474a-bb90-61a4b189b105', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 689775433, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:42:21.225067', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34ea10f4-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': '6c9ff13f82efba747959e8f350ea0347d06275a9faf55136f675774139579f5f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 105173955, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:42:21.225067', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34ea22a6-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': '6e313ab4f45d9f170772855a66b8905be937258db82917c1bcc42801d5a17ff6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 8149490, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:42:21.225067', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '34ea319c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.189634017, 'message_signature': '07376f4d45dec6a00854d8647d5fba883d2ef682a57210c46a206402649962e1'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:42:21.225067', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34ea41f0-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.243710723, 'message_signature': '51a6480042a8302e06fcb4cf21fea6fae8a6ae87cb51b13efe055763fc76b6f0'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:42:21.225067', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34ea50d2-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.243710723, 'message_signature': 'd28ddbf28adeef1dc48e0f64fbc583397f1029632e8110705a21593eb3980c5a'}]}, 'timestamp': '2025-10-13 15:42:21.227226', '_unique_id': '09fd10e1186541609ec65e26d309c487'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.228 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.229 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.229 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.230 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '658718b7-d3d1-4b4d-874c-4f6028cc02df', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:42:21.229590', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '34eabe78-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.325760056, 'message_signature': 'fbaf33c14f361fe46110c3e346cc5e133ebc8a96badbb1b1fef2b8e44f6e92ae'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:42:21.229590', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '34eacf3a-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.330594654, 'message_signature': '095b362b62df2eaedd427526012843da7242abbabbf47d8427a58c375f67d03b'}]}, 'timestamp': '2025-10-13 15:42:21.230479', '_unique_id': '2bdd092d1d3e4e57b60a00f67e15a897'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.231 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.232 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.232 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 11543 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.233 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd46bd247-a6f8-421c-b37d-d330638885bd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11543, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:42:21.232706', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '34eb375e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.325760056, 'message_signature': '27d0d49455737c4d1c086e38ec1f3a2444c00a5c3d975476da67a1b7b0424341'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:42:21.232706', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '34eb4866-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.330594654, 'message_signature': '91cc887f0e182d62a652e4fa8e15f80097116140c695aa4248ec585cf5b8c582'}]}, 'timestamp': '2025-10-13 15:42:21.233614', '_unique_id': '00721131778d4049abe9ce2de51353a9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.234 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.235 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.235 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.236 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0ed78016-54fb-4564-9ed8-9305a79e3340', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:42:21.235767', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '34ebaedc-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.325760056, 'message_signature': 'aff4e56ea56d00a825426743760879d467fa7d8599f7df0f9c1bbf18d956cb8a'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:42:21.235767', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '34ebc0ac-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.330594654, 'message_signature': 'e570c63827f98e83bcd5fc5860154dea7f7ab5f64cd63e753c5f80f58aac9888'}]}, 'timestamp': '2025-10-13 15:42:21.236690', '_unique_id': 'cc188ab6b1b84b2b9a45b5360ccd29d9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.237 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.238 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.238 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 85 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.238 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 57 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f2d55047-d8a6-4a7c-82a2-1fde390e2da9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 85, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:42:21.238101', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '34ec06b6-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.325760056, 'message_signature': 'a00fd4e4dfabe8f184b40a8c1308f54577c06049de5d6e583ea986d93f686155'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 57, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:42:21.238101', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '34ec11d8-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.330594654, 'message_signature': 'ed15dcb287d47043c79e6097c024367a23a77d054725c963facfeb0cf2773857'}]}, 'timestamp': '2025-10-13 15:42:21.238659', '_unique_id': 'b5149fcc7ee14be98b794b5f1d34da6d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.239 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.240 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.240 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 12550000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.240 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 34470000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e08f1f1b-bf51-4e45-a710-8c07d1c54e3b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 12550000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:42:21.240076', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '34ec547c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.365229805, 'message_signature': 'bfe2d69d89a101a415d8c367c9d86e366a356fa870ad3b2a504a8a7277f2ba42'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 34470000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:42:21.240076', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '34ec5df0-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.386779045, 'message_signature': '33536ceeb3828667281f3353eceeb3dc40e974f06ce4d61ec79f13fa6de3a7c4'}]}, 'timestamp': '2025-10-13 15:42:21.240615', '_unique_id': '88da11463b9c4c7c87a477918d1b92a9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.241 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.242 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.242 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.242 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.242 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5b91d6e2-d96e-4a91-92eb-d1b8d889a164', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:42:21.241931', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34ec9c20-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.282958445, 'message_signature': 'b4184c585b83d7927fdcab2cd9e4d40bc116c0abfe6f4f17e9a61ffeea904f17'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:42:21.241931', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34eca576-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.282958445, 'message_signature': '9f47c06a9e7455cf266dc7e3d5feecee022afe17b391f1362623c7f88687175a'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:42:21.241931', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '34ecaf76-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.282958445, 'message_signature': 'f362a2a0ebebcaeb0c8691a63ab7d71ab561cc677fa2091d7e4cdede6de55b54'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:42:21.241931', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '34ecb8ea-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.303464403, 'message_signature': '208fa0b2a27ee2d1dd34a7200e8c32a5a301a87b07a6ebc37f2371bc2dadab4a'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:42:21.241931', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '34ecc204-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9584.303464403, 'message_signature': '5ba0f308bb3ba8efc47650c3de0023e82651fd844fc321e8097842dcff370125'}]}, 'timestamp': '2025-10-13 15:42:21.243155', '_unique_id': '42bfea01856b4caa8090c35fb5ed5c2b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:42:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:42:21.243 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:42:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3836: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:21.682 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:da:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'c6:b9:d4:9e:30:db'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:42:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:21.683 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 13 15:42:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:21.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:42:22 standalone.localdomain podman[527010]: 2025-10-13 15:42:22.159773692 +0000 UTC m=+0.094956869 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 15:42:22 standalone.localdomain podman[527010]: 2025-10-13 15:42:22.19397531 +0000 UTC m=+0.129158537 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct 13 15:42:22 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:42:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:42:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:22.686 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b9e3f7-f5c9-4d48-9eca-66995f72d494, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:42:22 standalone.localdomain ceph-mon[29756]: pgmap v3836: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:42:23
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'vms', 'manila_data', 'images', '.mgr', 'backups', 'volumes']
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:42:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59962 DF PROTO=TCP SPT=35634 DPT=9102 SEQ=3356846214 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCD8EDF0000000001030307) 
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3837: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:42:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:42:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:24.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59963 DF PROTO=TCP SPT=35634 DPT=9102 SEQ=3356846214 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCD92F60000000001030307) 
Oct 13 15:42:24 standalone.localdomain ceph-mon[29756]: pgmap v3837: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3838: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:42:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:25.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59964 DF PROTO=TCP SPT=35634 DPT=9102 SEQ=3356846214 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCD9AF60000000001030307) 
Oct 13 15:42:26 standalone.localdomain ceph-mon[29756]: pgmap v3838: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:42:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:42:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3839: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:42:28 standalone.localdomain ceph-mon[29756]: pgmap v3839: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:42:29 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:29.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006647822445561139 of space, bias 1.0, pg target 0.6647822445561139 quantized to 32 (current 32)
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.012942315745393635 of space, bias 1.0, pg target 1.2942315745393635 quantized to 32 (current 32)
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.29775832286432163 quantized to 32 (current 32)
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 13 15:42:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3840: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:42:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59965 DF PROTO=TCP SPT=35634 DPT=9102 SEQ=3356846214 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCDAAB60000000001030307) 
Oct 13 15:42:30 standalone.localdomain ceph-mon[29756]: pgmap v3840: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:42:30 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:30.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3841: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:42:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:42:32 standalone.localdomain podman[527028]: 2025-10-13 15:42:32.164655441 +0000 UTC m=+0.086256123 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Oct 13 15:42:32 standalone.localdomain podman[527028]: 2025-10-13 15:42:32.206028379 +0000 UTC m=+0.127629051 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:42:32 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:42:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:42:32 standalone.localdomain ceph-mon[29756]: pgmap v3841: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:42:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3842: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:42:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:34.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:34 standalone.localdomain ceph-mon[29756]: pgmap v3842: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:42:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:42:34 standalone.localdomain podman[527053]: 2025-10-13 15:42:34.826413726 +0000 UTC m=+0.091302268 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350)
Oct 13 15:42:34 standalone.localdomain podman[527053]: 2025-10-13 15:42:34.838022021 +0000 UTC m=+0.102910583 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, release=1755695350, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 15:42:34 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:42:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3843: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:42:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:35.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:36 standalone.localdomain ceph-mon[29756]: pgmap v3843: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:42:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:42:36 standalone.localdomain podman[527073]: 2025-10-13 15:42:36.825590045 +0000 UTC m=+0.088510583 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:42:36 standalone.localdomain podman[527073]: 2025-10-13 15:42:36.866410324 +0000 UTC m=+0.129330942 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:42:36 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:42:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:42:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3844: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:38 standalone.localdomain ceph-mon[29756]: pgmap v3844: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:39.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3845: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:40 standalone.localdomain ceph-mon[29756]: pgmap v3845: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:42:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:42:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:42:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:42:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:40.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:40 standalone.localdomain podman[527095]: 2025-10-13 15:42:40.824929351 +0000 UTC m=+0.081769445 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, architecture=x86_64, container_name=swift_account_server, tcib_managed=true, release=1, version=17.1.9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, build-date=2025-07-21T16:11:22, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:42:40 standalone.localdomain podman[527101]: 2025-10-13 15:42:40.837798126 +0000 UTC m=+0.088560854 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:42:40 standalone.localdomain podman[527101]: 2025-10-13 15:42:40.877813331 +0000 UTC m=+0.128576099 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:42:40 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:42:40 standalone.localdomain podman[527093]: 2025-10-13 15:42:40.902744675 +0000 UTC m=+0.166527201 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-object-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, batch=17.1_20250721.1, version=17.1.9, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, container_name=swift_object_server, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:42:40 standalone.localdomain podman[527094]: 2025-10-13 15:42:40.990611296 +0000 UTC m=+0.248582635 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, config_id=tripleo_step4, release=1, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-container-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-container, version=17.1.9, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:42:41 standalone.localdomain podman[527095]: 2025-10-13 15:42:41.005340358 +0000 UTC m=+0.262180472 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=swift_account_server, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public)
Oct 13 15:42:41 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:42:41 standalone.localdomain podman[527093]: 2025-10-13 15:42:41.11903398 +0000 UTC m=+0.382816536 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_id=tripleo_step4, architecture=x86_64, release=1, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.component=openstack-swift-object-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., vcs-type=git, container_name=swift_object_server)
Oct 13 15:42:41 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:42:41 standalone.localdomain podman[527094]: 2025-10-13 15:42:41.162055309 +0000 UTC m=+0.420026578 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, distribution-scope=public, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, summary=Red Hat OpenStack Platform 
17.1 swift-container, com.redhat.component=openstack-swift-container-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git)
Oct 13 15:42:41 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:42:41 standalone.localdomain podman[467099]: time="2025-10-13T15:42:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:42:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3846: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:42:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:42:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:42:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49138 "" "Go-http-client/1.1"
Oct 13 15:42:41 standalone.localdomain systemd[1]: tmp-crun.6Rl615.mount: Deactivated successfully.
Oct 13 15:42:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:42:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:42:42 standalone.localdomain podman[527203]: 2025-10-13 15:42:42.552580773 +0000 UTC m=+0.069729657 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct 13 15:42:42 standalone.localdomain podman[527203]: 2025-10-13 15:42:42.564880839 +0000 UTC m=+0.082029763 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:42:42 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:42:42 standalone.localdomain ceph-mon[29756]: pgmap v3846: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:42:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:42:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:42:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:42:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:42:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:42:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:42:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:42:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:42:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:42:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:42:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:42:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3847: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:44 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:44.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:44 standalone.localdomain ceph-mon[29756]: pgmap v3847: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3848: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:45.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:46 standalone.localdomain ceph-mon[29756]: pgmap v3848: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:42:46 standalone.localdomain podman[527223]: 2025-10-13 15:42:46.812568815 +0000 UTC m=+0.079246270 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:42:46 standalone.localdomain podman[527223]: 2025-10-13 15:42:46.845206064 +0000 UTC m=+0.111883489 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:42:46 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:42:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:42:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3849: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:48 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_IDLE -> S_POLICY_ENGINE
Oct 13 15:42:48 standalone.localdomain ceph-mon[29756]: pgmap v3849: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:48 standalone.localdomain pacemaker-schedulerd[57910]:  notice: Calculated transition 72, saving inputs in /var/lib/pacemaker/pengine/pe-input-69.bz2
Oct 13 15:42:48 standalone.localdomain pacemaker-controld[57911]:  notice: Transition 72 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-69.bz2): Complete
Oct 13 15:42:48 standalone.localdomain pacemaker-controld[57911]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE
Oct 13 15:42:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:49.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3850: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:49 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:49.758 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:42:49Z, description=, device_id=d1e9d42c-0dba-41c5-bfdd-57a6743b3894, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889299a30>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889299160>], id=12c327ba-141a-423f-904e-8aa2c3cfd181, ip_allocation=immediate, mac_address=fa:16:3e:ae:48:c5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=512, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:42:49Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:42:49 standalone.localdomain podman[527260]: 2025-10-13 15:42:49.973918762 +0000 UTC m=+0.046183975 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:42:49 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:42:49 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:42:49 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:42:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:42:50 standalone.localdomain systemd[1]: tmp-crun.Rz4WzM.mount: Deactivated successfully.
Oct 13 15:42:50 standalone.localdomain podman[527274]: 2025-10-13 15:42:50.086409198 +0000 UTC m=+0.091476573 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:42:50 standalone.localdomain podman[527274]: 2025-10-13 15:42:50.093377262 +0000 UTC m=+0.098444657 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:42:50 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:42:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:50.213 496978 INFO neutron.agent.dhcp.agent [None req-b74d570f-a56c-4388-9c8e-e74727ae386b - - - - - -] DHCP configuration for ports {'12c327ba-141a-423f-904e-8aa2c3cfd181'} is completed
Oct 13 15:42:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:50.226 496978 INFO oslo.privsep.daemon [None req-24c97340-6abf-46e9-a835-216dd239f81b - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp444lqzjb/privsep.sock']
Oct 13 15:42:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:50.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:50 standalone.localdomain ceph-mon[29756]: pgmap v3850: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:50.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:50.892 496978 INFO oslo.privsep.daemon [None req-24c97340-6abf-46e9-a835-216dd239f81b - - - - - -] Spawned new privsep daemon via rootwrap
Oct 13 15:42:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:50.769 527308 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 13 15:42:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:50.775 527308 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 13 15:42:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:50.779 527308 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 13 15:42:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:50.780 527308 INFO oslo.privsep.daemon [-] privsep daemon running as pid 527308
Oct 13 15:42:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3851: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:51.878 496978 INFO neutron.agent.linux.ip_lib [None req-24c97340-6abf-46e9-a835-216dd239f81b - - - - - -] Device tap1aecebee-40 cannot be used as it has no MAC address
Oct 13 15:42:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:51.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:51 standalone.localdomain kernel: device tap1aecebee-40 entered promiscuous mode
Oct 13 15:42:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:51.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:42:51Z|00067|binding|INFO|Claiming lport 1aecebee-4022-4047-b159-efaa3357f490 for this chassis.
Oct 13 15:42:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:42:51Z|00068|binding|INFO|1aecebee-4022-4047-b159-efaa3357f490: Claiming unknown
Oct 13 15:42:51 standalone.localdomain NetworkManager[5962]: <info>  [1760370171.9641] manager: (tap1aecebee-40): new Generic device (/org/freedesktop/NetworkManager/Devices/24)
Oct 13 15:42:51 standalone.localdomain systemd-udevd[527324]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:42:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:42:51Z|00069|binding|INFO|Setting lport 1aecebee-4022-4047-b159-efaa3357f490 ovn-installed in OVS
Oct 13 15:42:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:42:51Z|00070|binding|INFO|Setting lport 1aecebee-4022-4047-b159-efaa3357f490 up in Southbound
Oct 13 15:42:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:51.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:51.978 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-dbde66a4-7ae3-49df-83a5-87d4c031bb6a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dbde66a4-7ae3-49df-83a5-87d4c031bb6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37930a68ff674d4e892ddf69bd83b74b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=136ca00c-e299-4386-84a2-0595c4035b9a, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=1aecebee-4022-4047-b159-efaa3357f490) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:42:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:51.981 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 1aecebee-4022-4047-b159-efaa3357f490 in datapath dbde66a4-7ae3-49df-83a5-87d4c031bb6a bound to our chassis
Oct 13 15:42:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:51.983 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 347d90b9-329f-4bd6-97b5-116a9bb981ed IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:42:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:51.983 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dbde66a4-7ae3-49df-83a5-87d4c031bb6a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:42:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:51.984 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[91311507-c0d9-430c-bebf-0283014c77c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:42:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:51.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:52.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:42:52 standalone.localdomain ceph-mon[29756]: pgmap v3851: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:42:52 standalone.localdomain systemd[1]: tmp-crun.TEtTvt.mount: Deactivated successfully.
Oct 13 15:42:52 standalone.localdomain podman[527357]: 2025-10-13 15:42:52.94590962 +0000 UTC m=+0.200879705 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 13 15:42:52 standalone.localdomain podman[527357]: 2025-10-13 15:42:52.957974969 +0000 UTC m=+0.212945124 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 13 15:42:52 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:42:53 standalone.localdomain podman[527390]: 
Oct 13 15:42:53 standalone.localdomain podman[527390]: 2025-10-13 15:42:53.010749746 +0000 UTC m=+0.109431003 container create e61ccf7f7c9ac645548cb37dc9f4d791b7323d4c33644e86bb9644d1e1945f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dbde66a4-7ae3-49df-83a5-87d4c031bb6a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:42:53 standalone.localdomain sudo[527406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:42:53 standalone.localdomain systemd[1]: Started libpod-conmon-e61ccf7f7c9ac645548cb37dc9f4d791b7323d4c33644e86bb9644d1e1945f84.scope.
Oct 13 15:42:53 standalone.localdomain sudo[527406]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:42:53 standalone.localdomain podman[527390]: 2025-10-13 15:42:52.955588486 +0000 UTC m=+0.054269773 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:42:53 standalone.localdomain sudo[527406]: pam_unix(sudo:session): session closed for user root
Oct 13 15:42:53 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:42:53 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eab674be01240c0ea46f8264920e637099224fa9759d22a99ba56703c5afc97/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:42:53 standalone.localdomain podman[527390]: 2025-10-13 15:42:53.093994166 +0000 UTC m=+0.192675423 container init e61ccf7f7c9ac645548cb37dc9f4d791b7323d4c33644e86bb9644d1e1945f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dbde66a4-7ae3-49df-83a5-87d4c031bb6a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:42:53 standalone.localdomain podman[527390]: 2025-10-13 15:42:53.099556156 +0000 UTC m=+0.198237413 container start e61ccf7f7c9ac645548cb37dc9f4d791b7323d4c33644e86bb9644d1e1945f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dbde66a4-7ae3-49df-83a5-87d4c031bb6a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:42:53 standalone.localdomain dnsmasq[527446]: started, version 2.85 cachesize 150
Oct 13 15:42:53 standalone.localdomain dnsmasq[527446]: DNS service limited to local subnets
Oct 13 15:42:53 standalone.localdomain dnsmasq[527446]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:42:53 standalone.localdomain dnsmasq[527446]: warning: no upstream servers configured
Oct 13 15:42:53 standalone.localdomain dnsmasq-dhcp[527446]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:42:53 standalone.localdomain dnsmasq[527446]: read /var/lib/neutron/dhcp/dbde66a4-7ae3-49df-83a5-87d4c031bb6a/addn_hosts - 0 addresses
Oct 13 15:42:53 standalone.localdomain dnsmasq-dhcp[527446]: read /var/lib/neutron/dhcp/dbde66a4-7ae3-49df-83a5-87d4c031bb6a/host
Oct 13 15:42:53 standalone.localdomain dnsmasq-dhcp[527446]: read /var/lib/neutron/dhcp/dbde66a4-7ae3-49df-83a5-87d4c031bb6a/opts
Oct 13 15:42:53 standalone.localdomain sudo[527428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:42:53 standalone.localdomain sudo[527428]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:42:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:53.155 496978 INFO neutron.agent.dhcp.agent [None req-51b7bcb1-344d-4ae6-a157-3c6deb62ee85 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:42:50Z, description=, device_id=d1e9d42c-0dba-41c5-bfdd-57a6743b3894, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889252c40>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188924a4f0>], id=3476abfc-1e53-4603-ab65-8167f7ca77dc, ip_allocation=immediate, mac_address=fa:16:3e:54:09:a2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:42:47Z, description=, dns_domain=, id=dbde66a4-7ae3-49df-83a5-87d4c031bb6a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPDetailsNegativeTestJSON-534897889-network, port_security_enabled=True, project_id=37930a68ff674d4e892ddf69bd83b74b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47538, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=491, status=ACTIVE, subnets=['003b036e-fbe4-4b7f-b4ff-57f6f20acec5'], tags=[], tenant_id=37930a68ff674d4e892ddf69bd83b74b, updated_at=2025-10-13T15:42:48Z, vlan_transparent=None, network_id=dbde66a4-7ae3-49df-83a5-87d4c031bb6a, port_security_enabled=False, project_id=37930a68ff674d4e892ddf69bd83b74b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=529, status=DOWN, tags=[], tenant_id=37930a68ff674d4e892ddf69bd83b74b, updated_at=2025-10-13T15:42:50Z on network dbde66a4-7ae3-49df-83a5-87d4c031bb6a
Oct 13 15:42:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:53.215 496978 INFO neutron.agent.dhcp.agent [None req-2d0dd1cb-e3e7-41e6-8c27-9fd32cd1fd9c - - - - - -] DHCP configuration for ports {'908a9270-4d71-41c8-bcc5-1ee185baec6a'} is completed
Oct 13 15:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:42:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:42:53 standalone.localdomain dnsmasq[527446]: read /var/lib/neutron/dhcp/dbde66a4-7ae3-49df-83a5-87d4c031bb6a/addn_hosts - 1 addresses
Oct 13 15:42:53 standalone.localdomain dnsmasq-dhcp[527446]: read /var/lib/neutron/dhcp/dbde66a4-7ae3-49df-83a5-87d4c031bb6a/host
Oct 13 15:42:53 standalone.localdomain dnsmasq-dhcp[527446]: read /var/lib/neutron/dhcp/dbde66a4-7ae3-49df-83a5-87d4c031bb6a/opts
Oct 13 15:42:53 standalone.localdomain podman[527465]: 2025-10-13 15:42:53.350712059 +0000 UTC m=+0.059290647 container kill e61ccf7f7c9ac645548cb37dc9f4d791b7323d4c33644e86bb9644d1e1945f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dbde66a4-7ae3-49df-83a5-87d4c031bb6a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:42:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:53.525 496978 INFO neutron.agent.dhcp.agent [None req-6a13a632-f1dd-4ebb-92bd-aa73e4a1d482 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:42:50Z, description=, device_id=d1e9d42c-0dba-41c5-bfdd-57a6743b3894, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889162340>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891622e0>], id=3476abfc-1e53-4603-ab65-8167f7ca77dc, ip_allocation=immediate, mac_address=fa:16:3e:54:09:a2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:42:47Z, description=, dns_domain=, id=dbde66a4-7ae3-49df-83a5-87d4c031bb6a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPDetailsNegativeTestJSON-534897889-network, port_security_enabled=True, project_id=37930a68ff674d4e892ddf69bd83b74b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47538, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=491, status=ACTIVE, subnets=['003b036e-fbe4-4b7f-b4ff-57f6f20acec5'], tags=[], tenant_id=37930a68ff674d4e892ddf69bd83b74b, updated_at=2025-10-13T15:42:48Z, vlan_transparent=None, network_id=dbde66a4-7ae3-49df-83a5-87d4c031bb6a, port_security_enabled=False, project_id=37930a68ff674d4e892ddf69bd83b74b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=529, status=DOWN, tags=[], tenant_id=37930a68ff674d4e892ddf69bd83b74b, updated_at=2025-10-13T15:42:50Z on network dbde66a4-7ae3-49df-83a5-87d4c031bb6a
Oct 13 15:42:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29539 DF PROTO=TCP SPT=33190 DPT=9102 SEQ=3030282888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCE040F0000000001030307) 
Oct 13 15:42:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3852: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:53.671 496978 INFO neutron.agent.dhcp.agent [None req-a07d0146-b97d-4302-bf74-3808a7bfb9be - - - - - -] DHCP configuration for ports {'3476abfc-1e53-4603-ab65-8167f7ca77dc'} is completed
Oct 13 15:42:53 standalone.localdomain dnsmasq[527446]: read /var/lib/neutron/dhcp/dbde66a4-7ae3-49df-83a5-87d4c031bb6a/addn_hosts - 1 addresses
Oct 13 15:42:53 standalone.localdomain dnsmasq-dhcp[527446]: read /var/lib/neutron/dhcp/dbde66a4-7ae3-49df-83a5-87d4c031bb6a/host
Oct 13 15:42:53 standalone.localdomain dnsmasq-dhcp[527446]: read /var/lib/neutron/dhcp/dbde66a4-7ae3-49df-83a5-87d4c031bb6a/opts
Oct 13 15:42:53 standalone.localdomain podman[527522]: 2025-10-13 15:42:53.743375118 +0000 UTC m=+0.067387875 container kill e61ccf7f7c9ac645548cb37dc9f4d791b7323d4c33644e86bb9644d1e1945f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dbde66a4-7ae3-49df-83a5-87d4c031bb6a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:42:53 standalone.localdomain sudo[527428]: pam_unix(sudo:session): session closed for user root
Oct 13 15:42:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:42:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:42:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:42:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:42:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:42:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:42:53 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:42:53 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:42:53 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 54d75a1f-6a29-4d97-a3bb-a245746dd973 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:42:53 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 54d75a1f-6a29-4d97-a3bb-a245746dd973 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:42:53 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 54d75a1f-6a29-4d97-a3bb-a245746dd973 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:42:53 standalone.localdomain sudo[527555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:42:53 standalone.localdomain sudo[527555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:42:53 standalone.localdomain sudo[527555]: pam_unix(sudo:session): session closed for user root
Oct 13 15:42:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:54.144 496978 INFO neutron.agent.dhcp.agent [None req-fe1504da-bf49-4d25-a667-4d75d383a031 - - - - - -] DHCP configuration for ports {'3476abfc-1e53-4603-ab65-8167f7ca77dc'} is completed
Oct 13 15:42:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:54.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29540 DF PROTO=TCP SPT=33190 DPT=9102 SEQ=3030282888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCE08360000000001030307) 
Oct 13 15:42:54 standalone.localdomain ceph-mon[29756]: pgmap v3852: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:42:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:42:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:42:54 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:42:55 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:42:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:42:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:42:55 standalone.localdomain dnsmasq[527446]: read /var/lib/neutron/dhcp/dbde66a4-7ae3-49df-83a5-87d4c031bb6a/addn_hosts - 0 addresses
Oct 13 15:42:55 standalone.localdomain dnsmasq-dhcp[527446]: read /var/lib/neutron/dhcp/dbde66a4-7ae3-49df-83a5-87d4c031bb6a/host
Oct 13 15:42:55 standalone.localdomain podman[527591]: 2025-10-13 15:42:55.497729027 +0000 UTC m=+0.069926193 container kill e61ccf7f7c9ac645548cb37dc9f4d791b7323d4c33644e86bb9644d1e1945f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dbde66a4-7ae3-49df-83a5-87d4c031bb6a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:42:55 standalone.localdomain dnsmasq-dhcp[527446]: read /var/lib/neutron/dhcp/dbde66a4-7ae3-49df-83a5-87d4c031bb6a/opts
Oct 13 15:42:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3853: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:42:55Z|00071|binding|INFO|Releasing lport 1aecebee-4022-4047-b159-efaa3357f490 from this chassis (sb_readonly=0)
Oct 13 15:42:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:42:55Z|00072|binding|INFO|Setting lport 1aecebee-4022-4047-b159-efaa3357f490 down in Southbound
Oct 13 15:42:55 standalone.localdomain kernel: device tap1aecebee-40 left promiscuous mode
Oct 13 15:42:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:55.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:55.671 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-dbde66a4-7ae3-49df-83a5-87d4c031bb6a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dbde66a4-7ae3-49df-83a5-87d4c031bb6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37930a68ff674d4e892ddf69bd83b74b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=136ca00c-e299-4386-84a2-0595c4035b9a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=1aecebee-4022-4047-b159-efaa3357f490) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:42:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:55.673 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 1aecebee-4022-4047-b159-efaa3357f490 in datapath dbde66a4-7ae3-49df-83a5-87d4c031bb6a unbound from our chassis
Oct 13 15:42:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:55.675 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dbde66a4-7ae3-49df-83a5-87d4c031bb6a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:42:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:55.676 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[cf30849b-25be-4c04-8af0-dc40e2bdcfd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:42:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:55.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:55.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:42:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29541 DF PROTO=TCP SPT=33190 DPT=9102 SEQ=3030282888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCE10360000000001030307) 
Oct 13 15:42:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:42:56Z|00073|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:42:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:56.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:56 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:42:56 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:42:56 standalone.localdomain podman[527630]: 2025-10-13 15:42:56.943217905 +0000 UTC m=+0.056858023 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:42:56 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:42:57 standalone.localdomain ceph-mon[29756]: pgmap v3853: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:57 standalone.localdomain dnsmasq[527446]: exiting on receipt of SIGTERM
Oct 13 15:42:57 standalone.localdomain podman[527669]: 2025-10-13 15:42:57.304151901 +0000 UTC m=+0.056258174 container kill e61ccf7f7c9ac645548cb37dc9f4d791b7323d4c33644e86bb9644d1e1945f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dbde66a4-7ae3-49df-83a5-87d4c031bb6a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:42:57 standalone.localdomain systemd[1]: libpod-e61ccf7f7c9ac645548cb37dc9f4d791b7323d4c33644e86bb9644d1e1945f84.scope: Deactivated successfully.
Oct 13 15:42:57 standalone.localdomain podman[527683]: 2025-10-13 15:42:57.370554344 +0000 UTC m=+0.050838648 container died e61ccf7f7c9ac645548cb37dc9f4d791b7323d4c33644e86bb9644d1e1945f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dbde66a4-7ae3-49df-83a5-87d4c031bb6a, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:42:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e61ccf7f7c9ac645548cb37dc9f4d791b7323d4c33644e86bb9644d1e1945f84-userdata-shm.mount: Deactivated successfully.
Oct 13 15:42:57 standalone.localdomain podman[527683]: 2025-10-13 15:42:57.40337322 +0000 UTC m=+0.083657454 container cleanup e61ccf7f7c9ac645548cb37dc9f4d791b7323d4c33644e86bb9644d1e1945f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dbde66a4-7ae3-49df-83a5-87d4c031bb6a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:42:57 standalone.localdomain systemd[1]: libpod-conmon-e61ccf7f7c9ac645548cb37dc9f4d791b7323d4c33644e86bb9644d1e1945f84.scope: Deactivated successfully.
Oct 13 15:42:57 standalone.localdomain podman[527691]: 2025-10-13 15:42:57.446529472 +0000 UTC m=+0.117729787 container remove e61ccf7f7c9ac645548cb37dc9f4d791b7323d4c33644e86bb9644d1e1945f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dbde66a4-7ae3-49df-83a5-87d4c031bb6a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:42:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:42:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:57.479 496978 INFO neutron.agent.dhcp.agent [None req-576fdc88-f1f3-438c-bd7b-7ede7fabd4e4 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:42:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:57.488 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:42:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3854: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5eab674be01240c0ea46f8264920e637099224fa9759d22a99ba56703c5afc97-merged.mount: Deactivated successfully.
Oct 13 15:42:57 standalone.localdomain systemd[1]: run-netns-qdhcp\x2ddbde66a4\x2d7ae3\x2d49df\x2d83a5\x2d87d4c031bb6a.mount: Deactivated successfully.
Oct 13 15:42:58 standalone.localdomain ceph-mon[29756]: pgmap v3854: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:58.790 496978 INFO neutron.agent.linux.ip_lib [None req-c3e84545-f33f-4bac-a777-6bd9cc141fc7 - - - - - -] Device tap996d53c6-53 cannot be used as it has no MAC address
Oct 13 15:42:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:58.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:58 standalone.localdomain kernel: device tap996d53c6-53 entered promiscuous mode
Oct 13 15:42:58 standalone.localdomain NetworkManager[5962]: <info>  [1760370178.8214] manager: (tap996d53c6-53): new Generic device (/org/freedesktop/NetworkManager/Devices/25)
Oct 13 15:42:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:42:58Z|00074|binding|INFO|Claiming lport 996d53c6-53d6-4fb8-8e31-3e4ad0c50325 for this chassis.
Oct 13 15:42:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:42:58Z|00075|binding|INFO|996d53c6-53d6-4fb8-8e31-3e4ad0c50325: Claiming unknown
Oct 13 15:42:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:58.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:58 standalone.localdomain systemd-udevd[527726]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:42:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:58.832 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-54611189-ebe7-4ed1-a32e-ac2f11d01075', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54611189-ebe7-4ed1-a32e-ac2f11d01075', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61821c87868d4275b161c847f4c45923', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58b3d824-ef83-4fc1-9a7d-a4e6cfc6beda, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=996d53c6-53d6-4fb8-8e31-3e4ad0c50325) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:42:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:58.833 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 996d53c6-53d6-4fb8-8e31-3e4ad0c50325 in datapath 54611189-ebe7-4ed1-a32e-ac2f11d01075 bound to our chassis
Oct 13 15:42:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:58.835 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 54611189-ebe7-4ed1-a32e-ac2f11d01075 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:42:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:42:58.835 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[b1bd2d05-a2a2-4285-b016-9daa9fa7a9d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:42:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:58.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:58 standalone.localdomain virtnodedevd[457159]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 13 15:42:58 standalone.localdomain virtnodedevd[457159]: hostname: standalone.localdomain
Oct 13 15:42:58 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap996d53c6-53: No such device
Oct 13 15:42:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:42:58Z|00076|binding|INFO|Setting lport 996d53c6-53d6-4fb8-8e31-3e4ad0c50325 ovn-installed in OVS
Oct 13 15:42:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:42:58Z|00077|binding|INFO|Setting lport 996d53c6-53d6-4fb8-8e31-3e4ad0c50325 up in Southbound
Oct 13 15:42:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:58.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:58 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap996d53c6-53: No such device
Oct 13 15:42:58 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap996d53c6-53: No such device
Oct 13 15:42:58 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap996d53c6-53: No such device
Oct 13 15:42:58 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap996d53c6-53: No such device
Oct 13 15:42:58 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap996d53c6-53: No such device
Oct 13 15:42:58 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap996d53c6-53: No such device
Oct 13 15:42:58 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap996d53c6-53: No such device
Oct 13 15:42:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:58.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:42:59.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:42:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3855: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:42:59 standalone.localdomain podman[527797]: 
Oct 13 15:42:59 standalone.localdomain podman[527797]: 2025-10-13 15:42:59.863003513 +0000 UTC m=+0.083819999 container create b104f7294c0a84804153904e2671d69482d9ea3b95d3e6b7842eac73e8f2acdc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54611189-ebe7-4ed1-a32e-ac2f11d01075, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 13 15:42:59 standalone.localdomain systemd[1]: Started libpod-conmon-b104f7294c0a84804153904e2671d69482d9ea3b95d3e6b7842eac73e8f2acdc.scope.
Oct 13 15:42:59 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:42:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299b7f5ed80dafb5025f8c175301dd13fe3036a11381b70e271cf8c092f3145a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:42:59 standalone.localdomain podman[527797]: 2025-10-13 15:42:59.917255456 +0000 UTC m=+0.138071952 container init b104f7294c0a84804153904e2671d69482d9ea3b95d3e6b7842eac73e8f2acdc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54611189-ebe7-4ed1-a32e-ac2f11d01075, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:42:59 standalone.localdomain podman[527797]: 2025-10-13 15:42:59.82439204 +0000 UTC m=+0.045208586 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:42:59 standalone.localdomain podman[527797]: 2025-10-13 15:42:59.925824658 +0000 UTC m=+0.146641144 container start b104f7294c0a84804153904e2671d69482d9ea3b95d3e6b7842eac73e8f2acdc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54611189-ebe7-4ed1-a32e-ac2f11d01075, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:42:59 standalone.localdomain dnsmasq[527816]: started, version 2.85 cachesize 150
Oct 13 15:42:59 standalone.localdomain dnsmasq[527816]: DNS service limited to local subnets
Oct 13 15:42:59 standalone.localdomain dnsmasq[527816]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:42:59 standalone.localdomain dnsmasq[527816]: warning: no upstream servers configured
Oct 13 15:42:59 standalone.localdomain dnsmasq-dhcp[527816]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:42:59 standalone.localdomain dnsmasq[527816]: read /var/lib/neutron/dhcp/54611189-ebe7-4ed1-a32e-ac2f11d01075/addn_hosts - 0 addresses
Oct 13 15:42:59 standalone.localdomain dnsmasq-dhcp[527816]: read /var/lib/neutron/dhcp/54611189-ebe7-4ed1-a32e-ac2f11d01075/host
Oct 13 15:42:59 standalone.localdomain dnsmasq-dhcp[527816]: read /var/lib/neutron/dhcp/54611189-ebe7-4ed1-a32e-ac2f11d01075/opts
Oct 13 15:42:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:42:59.968 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:42:59Z, description=, device_id=29b7daa9-287d-47f9-899e-4fa4c0b8ddf8, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889119fd0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891192e0>], id=6440f31f-76a6-4282-88af-ca177e2a7f22, ip_allocation=immediate, mac_address=fa:16:3e:b4:e1:36, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=586, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:42:59Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:43:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:00.089 496978 INFO neutron.agent.dhcp.agent [None req-2baef891-abc1-4e64-9f87-fc36669748a3 - - - - - -] DHCP configuration for ports {'26d79697-d38d-4767-8507-787a8ebead6b'} is completed
Oct 13 15:43:00 standalone.localdomain podman[527834]: 2025-10-13 15:43:00.194533808 +0000 UTC m=+0.043497673 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:43:00 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:43:00 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:00 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:00.452 496978 INFO neutron.agent.dhcp.agent [None req-601e9b35-4672-4929-bb3a-48a2da99533e - - - - - -] DHCP configuration for ports {'6440f31f-76a6-4282-88af-ca177e2a7f22'} is completed
Oct 13 15:43:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29542 DF PROTO=TCP SPT=33190 DPT=9102 SEQ=3030282888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCE1FF60000000001030307) 
Oct 13 15:43:00 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:00.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:00 standalone.localdomain ceph-mon[29756]: pgmap v3855: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:00 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:00.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:01.170 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:00Z, description=, device_id=29b7daa9-287d-47f9-899e-4fa4c0b8ddf8, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889129d00>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889129220>], id=184cdb02-411e-4622-afb5-d15983545116, ip_allocation=immediate, mac_address=fa:16:3e:94:ca:24, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:42:57Z, description=, dns_domain=, id=54611189-ebe7-4ed1-a32e-ac2f11d01075, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPsNegativeTestJSON-1217574693-network, port_security_enabled=True, project_id=61821c87868d4275b161c847f4c45923, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=19910, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=565, status=ACTIVE, subnets=['1a0bdd75-4cb1-45b3-984e-ef237fd4aa9d'], tags=[], tenant_id=61821c87868d4275b161c847f4c45923, updated_at=2025-10-13T15:42:58Z, vlan_transparent=None, network_id=54611189-ebe7-4ed1-a32e-ac2f11d01075, port_security_enabled=False, project_id=61821c87868d4275b161c847f4c45923, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=597, status=DOWN, tags=[], tenant_id=61821c87868d4275b161c847f4c45923, updated_at=2025-10-13T15:43:01Z on network 54611189-ebe7-4ed1-a32e-ac2f11d01075
Oct 13 15:43:01 standalone.localdomain podman[527873]: 2025-10-13 15:43:01.400199651 +0000 UTC m=+0.067177969 container kill b104f7294c0a84804153904e2671d69482d9ea3b95d3e6b7842eac73e8f2acdc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54611189-ebe7-4ed1-a32e-ac2f11d01075, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:01 standalone.localdomain dnsmasq[527816]: read /var/lib/neutron/dhcp/54611189-ebe7-4ed1-a32e-ac2f11d01075/addn_hosts - 1 addresses
Oct 13 15:43:01 standalone.localdomain dnsmasq-dhcp[527816]: read /var/lib/neutron/dhcp/54611189-ebe7-4ed1-a32e-ac2f11d01075/host
Oct 13 15:43:01 standalone.localdomain dnsmasq-dhcp[527816]: read /var/lib/neutron/dhcp/54611189-ebe7-4ed1-a32e-ac2f11d01075/opts
Oct 13 15:43:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3856: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:01.700 496978 INFO neutron.agent.dhcp.agent [None req-6359ff69-8fc0-48e5-942d-7712787e34de - - - - - -] DHCP configuration for ports {'184cdb02-411e-4622-afb5-d15983545116'} is completed
Oct 13 15:43:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:01.932 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:00Z, description=, device_id=29b7daa9-287d-47f9-899e-4fa4c0b8ddf8, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188911d280>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188911dc40>], id=184cdb02-411e-4622-afb5-d15983545116, ip_allocation=immediate, mac_address=fa:16:3e:94:ca:24, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:42:57Z, description=, dns_domain=, id=54611189-ebe7-4ed1-a32e-ac2f11d01075, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPsNegativeTestJSON-1217574693-network, port_security_enabled=True, project_id=61821c87868d4275b161c847f4c45923, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=19910, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=565, status=ACTIVE, subnets=['1a0bdd75-4cb1-45b3-984e-ef237fd4aa9d'], tags=[], tenant_id=61821c87868d4275b161c847f4c45923, updated_at=2025-10-13T15:42:58Z, vlan_transparent=None, network_id=54611189-ebe7-4ed1-a32e-ac2f11d01075, port_security_enabled=False, project_id=61821c87868d4275b161c847f4c45923, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=597, status=DOWN, tags=[], tenant_id=61821c87868d4275b161c847f4c45923, updated_at=2025-10-13T15:43:01Z on network 54611189-ebe7-4ed1-a32e-ac2f11d01075
Oct 13 15:43:02 standalone.localdomain systemd[1]: tmp-crun.Gj9pgC.mount: Deactivated successfully.
Oct 13 15:43:02 standalone.localdomain dnsmasq[527816]: read /var/lib/neutron/dhcp/54611189-ebe7-4ed1-a32e-ac2f11d01075/addn_hosts - 1 addresses
Oct 13 15:43:02 standalone.localdomain dnsmasq-dhcp[527816]: read /var/lib/neutron/dhcp/54611189-ebe7-4ed1-a32e-ac2f11d01075/host
Oct 13 15:43:02 standalone.localdomain podman[527910]: 2025-10-13 15:43:02.186281049 +0000 UTC m=+0.079437504 container kill b104f7294c0a84804153904e2671d69482d9ea3b95d3e6b7842eac73e8f2acdc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54611189-ebe7-4ed1-a32e-ac2f11d01075, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 15:43:02 standalone.localdomain dnsmasq-dhcp[527816]: read /var/lib/neutron/dhcp/54611189-ebe7-4ed1-a32e-ac2f11d01075/opts
Oct 13 15:43:02 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:02.472 496978 INFO neutron.agent.dhcp.agent [None req-39ebe7fe-8bc9-47e5-91ac-5cbc5ce2068a - - - - - -] DHCP configuration for ports {'184cdb02-411e-4622-afb5-d15983545116'} is completed
Oct 13 15:43:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:43:02 standalone.localdomain ceph-mon[29756]: pgmap v3856: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:02 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:43:02 standalone.localdomain podman[527931]: 2025-10-13 15:43:02.83334835 +0000 UTC m=+0.090904055 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 13 15:43:02 standalone.localdomain podman[527931]: 2025-10-13 15:43:02.865863047 +0000 UTC m=+0.123418692 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:43:02 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:43:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3857: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:04.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e67 do_prune osdmap full prune enabled
Oct 13 15:43:04 standalone.localdomain ceph-mon[29756]: pgmap v3857: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e68 e68: 1 total, 1 up, 1 in
Oct 13 15:43:04 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e68: 1 total, 1 up, 1 in
Oct 13 15:43:05 standalone.localdomain dnsmasq[527816]: read /var/lib/neutron/dhcp/54611189-ebe7-4ed1-a32e-ac2f11d01075/addn_hosts - 0 addresses
Oct 13 15:43:05 standalone.localdomain dnsmasq-dhcp[527816]: read /var/lib/neutron/dhcp/54611189-ebe7-4ed1-a32e-ac2f11d01075/host
Oct 13 15:43:05 standalone.localdomain dnsmasq-dhcp[527816]: read /var/lib/neutron/dhcp/54611189-ebe7-4ed1-a32e-ac2f11d01075/opts
Oct 13 15:43:05 standalone.localdomain podman[527973]: 2025-10-13 15:43:05.046914916 +0000 UTC m=+0.073200505 container kill b104f7294c0a84804153904e2671d69482d9ea3b95d3e6b7842eac73e8f2acdc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54611189-ebe7-4ed1-a32e-ac2f11d01075, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:43:05 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:43:05 standalone.localdomain podman[527985]: 2025-10-13 15:43:05.176057422 +0000 UTC m=+0.101289553 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.7, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Oct 13 15:43:05 standalone.localdomain podman[527985]: 2025-10-13 15:43:05.195145997 +0000 UTC m=+0.120378128 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 13 15:43:05 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:43:05 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:05Z|00078|binding|INFO|Releasing lport 996d53c6-53d6-4fb8-8e31-3e4ad0c50325 from this chassis (sb_readonly=0)
Oct 13 15:43:05 standalone.localdomain kernel: device tap996d53c6-53 left promiscuous mode
Oct 13 15:43:05 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:05Z|00079|binding|INFO|Setting lport 996d53c6-53d6-4fb8-8e31-3e4ad0c50325 down in Southbound
Oct 13 15:43:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:05.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:05 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:05.288 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-54611189-ebe7-4ed1-a32e-ac2f11d01075', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54611189-ebe7-4ed1-a32e-ac2f11d01075', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61821c87868d4275b161c847f4c45923', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58b3d824-ef83-4fc1-9a7d-a4e6cfc6beda, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=996d53c6-53d6-4fb8-8e31-3e4ad0c50325) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:05 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:05.290 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 996d53c6-53d6-4fb8-8e31-3e4ad0c50325 in datapath 54611189-ebe7-4ed1-a32e-ac2f11d01075 unbound from our chassis
Oct 13 15:43:05 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:05.292 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 54611189-ebe7-4ed1-a32e-ac2f11d01075, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:43:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:05.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:05 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:05.294 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[14f46d5b-d6bc-4064-b3dd-dbd561a1556f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3859: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:05 standalone.localdomain ceph-mon[29756]: osdmap e68: 1 total, 1 up, 1 in
Oct 13 15:43:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:05.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:06.229 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:43:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e68 do_prune osdmap full prune enabled
Oct 13 15:43:06 standalone.localdomain ceph-mon[29756]: pgmap v3859: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:06 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e69 e69: 1 total, 1 up, 1 in
Oct 13 15:43:06 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e69: 1 total, 1 up, 1 in
Oct 13 15:43:06 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:06Z|00080|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:43:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:06.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:06.962 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:43:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:06.963 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:43:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:06.964 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:43:07 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:43:07 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:07 standalone.localdomain podman[528030]: 2025-10-13 15:43:07.032270611 +0000 UTC m=+0.069487629 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:07 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:43:07 standalone.localdomain podman[528044]: 2025-10-13 15:43:07.122928159 +0000 UTC m=+0.068309684 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 15:43:07 standalone.localdomain podman[528044]: 2025-10-13 15:43:07.134360479 +0000 UTC m=+0.079742074 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:43:07 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:43:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:07.229 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:43:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:07.230 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:43:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:43:07 standalone.localdomain dnsmasq[527816]: exiting on receipt of SIGTERM
Oct 13 15:43:07 standalone.localdomain podman[528088]: 2025-10-13 15:43:07.512073569 +0000 UTC m=+0.066191289 container kill b104f7294c0a84804153904e2671d69482d9ea3b95d3e6b7842eac73e8f2acdc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54611189-ebe7-4ed1-a32e-ac2f11d01075, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:43:07 standalone.localdomain systemd[1]: libpod-b104f7294c0a84804153904e2671d69482d9ea3b95d3e6b7842eac73e8f2acdc.scope: Deactivated successfully.
Oct 13 15:43:07 standalone.localdomain podman[528104]: 2025-10-13 15:43:07.594654208 +0000 UTC m=+0.065613550 container died b104f7294c0a84804153904e2671d69482d9ea3b95d3e6b7842eac73e8f2acdc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54611189-ebe7-4ed1-a32e-ac2f11d01075, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:43:07 standalone.localdomain podman[528104]: 2025-10-13 15:43:07.621239693 +0000 UTC m=+0.092198965 container cleanup b104f7294c0a84804153904e2671d69482d9ea3b95d3e6b7842eac73e8f2acdc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54611189-ebe7-4ed1-a32e-ac2f11d01075, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:43:07 standalone.localdomain systemd[1]: libpod-conmon-b104f7294c0a84804153904e2671d69482d9ea3b95d3e6b7842eac73e8f2acdc.scope: Deactivated successfully.
Oct 13 15:43:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3861: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:07 standalone.localdomain podman[528105]: 2025-10-13 15:43:07.670432419 +0000 UTC m=+0.137048789 container remove b104f7294c0a84804153904e2671d69482d9ea3b95d3e6b7842eac73e8f2acdc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54611189-ebe7-4ed1-a32e-ac2f11d01075, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 13 15:43:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:07.698 496978 INFO neutron.agent.dhcp.agent [None req-f9f0edb4-fa50-41c8-a31a-5160095e2ff8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:43:07 standalone.localdomain ceph-mon[29756]: osdmap e69: 1 total, 1 up, 1 in
Oct 13 15:43:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:07.840 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:43:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-299b7f5ed80dafb5025f8c175301dd13fe3036a11381b70e271cf8c092f3145a-merged.mount: Deactivated successfully.
Oct 13 15:43:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b104f7294c0a84804153904e2671d69482d9ea3b95d3e6b7842eac73e8f2acdc-userdata-shm.mount: Deactivated successfully.
Oct 13 15:43:08 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d54611189\x2debe7\x2d4ed1\x2da32e\x2dac2f11d01075.mount: Deactivated successfully.
Oct 13 15:43:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:08.157 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:43:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:08.157 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:43:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:08.158 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:43:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:08.340 2 DEBUG oslo_concurrency.processutils [None req-4a3b145c-64d6-4a13-b30d-b8d4aef5443e c3f9d53d09d7497fbeb9e6d9c8ab19f1 f0f1143b9b934b6b8a05084fd28596cb - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:43:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:08.365 2 DEBUG oslo_concurrency.processutils [None req-4a3b145c-64d6-4a13-b30d-b8d4aef5443e c3f9d53d09d7497fbeb9e6d9c8ab19f1 f0f1143b9b934b6b8a05084fd28596cb - - default default] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:43:08 standalone.localdomain ceph-mon[29756]: pgmap v3861: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:08.933 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:43:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:08.955 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:43:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:08.956 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:43:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:08.957 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:43:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:08.975 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:43:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:08.976 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:43:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:08.976 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:43:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:08.976 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:43:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:08.976 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:43:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:43:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2100564212' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.370 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.455 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.456 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.456 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.462 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.462 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.641 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.642 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9218MB free_disk=6.9495849609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.643 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.643 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:43:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3862: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 36 KiB/s rd, 4.0 KiB/s wr, 49 op/s
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.725 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.725 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.726 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.726 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:43:09 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2100564212' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:43:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:09.787 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:43:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:43:10 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1273208401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:43:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:10.240 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:43:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:10.248 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:43:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:10.268 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:43:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:10.271 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:43:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:10.272 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:43:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:10.543 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:43:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:10.544 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:43:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:10.545 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:43:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:10.545 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:43:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:10.546 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:43:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:10.546 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:43:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:10.546 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:43:10 standalone.localdomain ceph-mon[29756]: pgmap v3862: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 36 KiB/s rd, 4.0 KiB/s wr, 49 op/s
Oct 13 15:43:10 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1273208401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:43:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:10.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:11 standalone.localdomain podman[467099]: time="2025-10-13T15:43:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:43:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:43:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:43:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3863: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 36 KiB/s rd, 4.0 KiB/s wr, 49 op/s
Oct 13 15:43:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:43:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:43:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:43:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:43:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:43:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49123 "" "Go-http-client/1.1"
Oct 13 15:43:11 standalone.localdomain podman[528179]: 2025-10-13 15:43:11.836874555 +0000 UTC m=+0.094052511 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, build-date=2025-07-21T16:11:22, release=1, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, container_name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:43:11 standalone.localdomain podman[528177]: 2025-10-13 15:43:11.878570953 +0000 UTC m=+0.145452337 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, tcib_managed=true, release=1, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, version=17.1.9, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, container_name=swift_object_server, com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:43:11 standalone.localdomain podman[528178]: 2025-10-13 15:43:11.944525013 +0000 UTC m=+0.206792065 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, container_name=swift_container_server, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-container-container, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T15:54:32, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9)
Oct 13 15:43:11 standalone.localdomain podman[528180]: 2025-10-13 15:43:11.97837622 +0000 UTC m=+0.239219918 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:43:12 standalone.localdomain podman[528180]: 2025-10-13 15:43:12.013373572 +0000 UTC m=+0.274217250 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:43:12 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:43:12 standalone.localdomain podman[528179]: 2025-10-13 15:43:12.034617163 +0000 UTC m=+0.291795159 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp 
osp openstack osp-17.1, version=17.1.9, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12)
Oct 13 15:43:12 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:43:12 standalone.localdomain podman[528177]: 2025-10-13 15:43:12.066755807 +0000 UTC m=+0.333637171 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, version=17.1.9, com.redhat.component=openstack-swift-object-container, release=1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.12)
Oct 13 15:43:12 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:43:12 standalone.localdomain podman[528178]: 2025-10-13 15:43:12.206113556 +0000 UTC m=+0.468380668 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., container_name=swift_container_server, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, release=1)
Oct 13 15:43:12 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:43:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:12.225 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:43:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:43:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e69 do_prune osdmap full prune enabled
Oct 13 15:43:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e70 e70: 1 total, 1 up, 1 in
Oct 13 15:43:12 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e70: 1 total, 1 up, 1 in
Oct 13 15:43:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:43:12 standalone.localdomain ceph-mon[29756]: pgmap v3863: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 36 KiB/s rd, 4.0 KiB/s wr, 49 op/s
Oct 13 15:43:12 standalone.localdomain ceph-mon[29756]: osdmap e70: 1 total, 1 up, 1 in
Oct 13 15:43:12 standalone.localdomain podman[528283]: 2025-10-13 15:43:12.823199868 +0000 UTC m=+0.085186690 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 13 15:43:12 standalone.localdomain podman[528283]: 2025-10-13 15:43:12.839280851 +0000 UTC m=+0.101267673 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:43:12 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:43:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:43:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:43:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:43:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:43:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:43:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:43:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:43:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:43:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:43:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:43:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:43:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:43:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3865: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 36 KiB/s rd, 4.0 KiB/s wr, 49 op/s
Oct 13 15:43:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:14.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:14 standalone.localdomain ceph-mon[29756]: pgmap v3865: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 36 KiB/s rd, 4.0 KiB/s wr, 49 op/s
Oct 13 15:43:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:15.464 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:15Z, description=, device_id=3f223ac2-1fc7-4557-8ed6-15ac15e4958d, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188911db50>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188911d880>], id=55f8d816-4d76-4229-9e8a-04d28bebf022, ip_allocation=immediate, mac_address=fa:16:3e:2e:23:d0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=690, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:43:15Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:43:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3866: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 32 KiB/s rd, 3.6 KiB/s wr, 45 op/s
Oct 13 15:43:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:15.684 496978 INFO neutron.agent.linux.ip_lib [None req-25ade127-befc-4e20-9ac6-c31d625978b9 - - - - - -] Device tap07c37f3e-66 cannot be used as it has no MAC address
Oct 13 15:43:15 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:43:15 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:15 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:15 standalone.localdomain podman[528324]: 2025-10-13 15:43:15.717371872 +0000 UTC m=+0.062594049 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:43:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:15.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:15 standalone.localdomain kernel: device tap07c37f3e-66 entered promiscuous mode
Oct 13 15:43:15 standalone.localdomain NetworkManager[5962]: <info>  [1760370195.7722] manager: (tap07c37f3e-66): new Generic device (/org/freedesktop/NetworkManager/Devices/26)
Oct 13 15:43:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:15Z|00081|binding|INFO|Claiming lport 07c37f3e-6610-45ba-ba12-dc6ac3fc3363 for this chassis.
Oct 13 15:43:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:15Z|00082|binding|INFO|07c37f3e-6610-45ba-ba12-dc6ac3fc3363: Claiming unknown
Oct 13 15:43:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:15.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:15 standalone.localdomain systemd-udevd[528344]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:43:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:15.782 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-652e654a-6470-4c94-8131-b591b2c48f52', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-652e654a-6470-4c94-8131-b591b2c48f52', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517eaf033d3a490ba9e558b9a65a8c18', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a8badb8-349f-436c-8168-fc044ceb20ed, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=07c37f3e-6610-45ba-ba12-dc6ac3fc3363) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:15.784 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 07c37f3e-6610-45ba-ba12-dc6ac3fc3363 in datapath 652e654a-6470-4c94-8131-b591b2c48f52 bound to our chassis
Oct 13 15:43:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:15.786 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 9a66a155-5a3e-4394-9471-5e1c0e91ea98 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:43:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:15.786 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 652e654a-6470-4c94-8131-b591b2c48f52, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:43:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:15.787 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[ff820acb-54f2-4cd5-a4a7-0253ee35ec22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:15Z|00083|binding|INFO|Setting lport 07c37f3e-6610-45ba-ba12-dc6ac3fc3363 ovn-installed in OVS
Oct 13 15:43:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:15Z|00084|binding|INFO|Setting lport 07c37f3e-6610-45ba-ba12-dc6ac3fc3363 up in Southbound
Oct 13 15:43:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:15.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:15.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:15.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:16.098 496978 INFO neutron.agent.dhcp.agent [None req-020c6647-881e-45ca-9294-5bce42ffc02d - - - - - -] DHCP configuration for ports {'55f8d816-4d76-4229-9e8a-04d28bebf022'} is completed
Oct 13 15:43:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:16.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:16 standalone.localdomain podman[528407]: 
Oct 13 15:43:16 standalone.localdomain podman[528407]: 2025-10-13 15:43:16.798836069 +0000 UTC m=+0.089063319 container create c805771dcc168af9afe9bc239b35f2d5c8d845f83d2ff6c77411558d7d8455f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-652e654a-6470-4c94-8131-b591b2c48f52, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 15:43:16 standalone.localdomain ceph-mon[29756]: pgmap v3866: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 32 KiB/s rd, 3.6 KiB/s wr, 45 op/s
Oct 13 15:43:16 standalone.localdomain podman[528407]: 2025-10-13 15:43:16.750622251 +0000 UTC m=+0.040849541 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:43:16 standalone.localdomain systemd[1]: Started libpod-conmon-c805771dcc168af9afe9bc239b35f2d5c8d845f83d2ff6c77411558d7d8455f9.scope.
Oct 13 15:43:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:16.855 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:16Z, description=, device_id=7ef56836-a806-40d6-a3f1-e21614b7d7d9, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891517f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889151f70>], id=9d12c293-7d5b-4913-bfde-b14a00fe4820, ip_allocation=immediate, mac_address=fa:16:3e:67:7f:ee, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=705, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:43:16Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:43:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:43:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:43:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82ce6502512f8d0c4fc50600665cca4377d845b6901efdf329923fe48e466ada/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:43:16 standalone.localdomain podman[528407]: 2025-10-13 15:43:16.904793724 +0000 UTC m=+0.195020984 container init c805771dcc168af9afe9bc239b35f2d5c8d845f83d2ff6c77411558d7d8455f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-652e654a-6470-4c94-8131-b591b2c48f52, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:43:16 standalone.localdomain podman[528407]: 2025-10-13 15:43:16.917533065 +0000 UTC m=+0.207760315 container start c805771dcc168af9afe9bc239b35f2d5c8d845f83d2ff6c77411558d7d8455f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-652e654a-6470-4c94-8131-b591b2c48f52, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:43:16 standalone.localdomain dnsmasq[528433]: started, version 2.85 cachesize 150
Oct 13 15:43:16 standalone.localdomain dnsmasq[528433]: DNS service limited to local subnets
Oct 13 15:43:16 standalone.localdomain dnsmasq[528433]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:43:16 standalone.localdomain dnsmasq[528433]: warning: no upstream servers configured
Oct 13 15:43:16 standalone.localdomain dnsmasq-dhcp[528433]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:43:16 standalone.localdomain dnsmasq[528433]: read /var/lib/neutron/dhcp/652e654a-6470-4c94-8131-b591b2c48f52/addn_hosts - 0 addresses
Oct 13 15:43:16 standalone.localdomain dnsmasq-dhcp[528433]: read /var/lib/neutron/dhcp/652e654a-6470-4c94-8131-b591b2c48f52/host
Oct 13 15:43:16 standalone.localdomain dnsmasq-dhcp[528433]: read /var/lib/neutron/dhcp/652e654a-6470-4c94-8131-b591b2c48f52/opts
Oct 13 15:43:16 standalone.localdomain podman[528424]: 2025-10-13 15:43:16.98624397 +0000 UTC m=+0.085894083 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 13 15:43:16 standalone.localdomain podman[528424]: 2025-10-13 15:43:16.995843983 +0000 UTC m=+0.095494116 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:17 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:43:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:17.070 496978 INFO neutron.agent.dhcp.agent [None req-57ccf523-c9fe-4305-a962-d468073f628c - - - - - -] DHCP configuration for ports {'9adf379e-fcd7-482c-b212-8ad2c8a0e2ed'} is completed
Oct 13 15:43:17 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:43:17 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:17 standalone.localdomain podman[528461]: 2025-10-13 15:43:17.133955644 +0000 UTC m=+0.047608980 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:43:17 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:17.187 496978 INFO neutron.agent.linux.ip_lib [None req-1b4f95b5-f0cd-404b-8da7-a596d5a8f75e - - - - - -] Device tap93da4689-f6 cannot be used as it has no MAC address
Oct 13 15:43:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:17.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:17 standalone.localdomain kernel: device tap93da4689-f6 entered promiscuous mode
Oct 13 15:43:17 standalone.localdomain NetworkManager[5962]: <info>  [1760370197.2276] manager: (tap93da4689-f6): new Generic device (/org/freedesktop/NetworkManager/Devices/27)
Oct 13 15:43:17 standalone.localdomain systemd-udevd[528346]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:43:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:17.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:17Z|00085|binding|INFO|Claiming lport 93da4689-f6ba-4750-b7f0-96bc35769995 for this chassis.
Oct 13 15:43:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:17Z|00086|binding|INFO|93da4689-f6ba-4750-b7f0-96bc35769995: Claiming unknown
Oct 13 15:43:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:17.242 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d44d7126-23d6-48cf-9909-a6e1eb5b5797', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d44d7126-23d6-48cf-9909-a6e1eb5b5797', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64beb98988de48a5a53cdc81d039c522', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b548e3a-eb7f-4ac4-94a3-73a66c656cf7, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=93da4689-f6ba-4750-b7f0-96bc35769995) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:17.244 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 93da4689-f6ba-4750-b7f0-96bc35769995 in datapath d44d7126-23d6-48cf-9909-a6e1eb5b5797 bound to our chassis
Oct 13 15:43:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:17Z|00087|binding|INFO|Setting lport 93da4689-f6ba-4750-b7f0-96bc35769995 ovn-installed in OVS
Oct 13 15:43:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:17Z|00088|binding|INFO|Setting lport 93da4689-f6ba-4750-b7f0-96bc35769995 up in Southbound
Oct 13 15:43:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:17.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:17.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:17.248 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port b8949f55-5e51-4617-82d9-779e30e12db6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:43:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:17.248 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d44d7126-23d6-48cf-9909-a6e1eb5b5797, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:43:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:17.249 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[05f05a72-0f27-4932-afe9-f7354b59d451]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:17.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:17.286 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:17Z, description=, device_id=3f223ac2-1fc7-4557-8ed6-15ac15e4958d, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889106f70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891062b0>], id=a7702722-30a4-4d91-b3a3-5895b6be3ef3, ip_allocation=immediate, mac_address=fa:16:3e:bb:c7:66, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:13Z, description=, dns_domain=, id=652e654a-6470-4c94-8131-b591b2c48f52, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroup264TestJSON-640864463-network, port_security_enabled=True, project_id=517eaf033d3a490ba9e558b9a65a8c18, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48548, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=675, status=ACTIVE, subnets=['385e922b-28fc-46bf-b734-4107128c044f'], tags=[], tenant_id=517eaf033d3a490ba9e558b9a65a8c18, updated_at=2025-10-13T15:43:14Z, vlan_transparent=None, network_id=652e654a-6470-4c94-8131-b591b2c48f52, port_security_enabled=False, project_id=517eaf033d3a490ba9e558b9a65a8c18, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=706, status=DOWN, tags=[], tenant_id=517eaf033d3a490ba9e558b9a65a8c18, updated_at=2025-10-13T15:43:17Z on network 652e654a-6470-4c94-8131-b591b2c48f52
Oct 13 15:43:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:17.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:17.415 496978 INFO neutron.agent.dhcp.agent [None req-aa332843-13b3-4207-81a2-00869fb9a8f2 - - - - - -] DHCP configuration for ports {'9d12c293-7d5b-4913-bfde-b14a00fe4820'} is completed
Oct 13 15:43:17 standalone.localdomain dnsmasq[528433]: read /var/lib/neutron/dhcp/652e654a-6470-4c94-8131-b591b2c48f52/addn_hosts - 1 addresses
Oct 13 15:43:17 standalone.localdomain dnsmasq-dhcp[528433]: read /var/lib/neutron/dhcp/652e654a-6470-4c94-8131-b591b2c48f52/host
Oct 13 15:43:17 standalone.localdomain dnsmasq-dhcp[528433]: read /var/lib/neutron/dhcp/652e654a-6470-4c94-8131-b591b2c48f52/opts
Oct 13 15:43:17 standalone.localdomain podman[528520]: 2025-10-13 15:43:17.475183786 +0000 UTC m=+0.037632513 container kill c805771dcc168af9afe9bc239b35f2d5c8d845f83d2ff6c77411558d7d8455f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-652e654a-6470-4c94-8131-b591b2c48f52, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:43:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:17.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3867: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 29 KiB/s rd, 3.2 KiB/s wr, 39 op/s
Oct 13 15:43:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:17.649 496978 INFO neutron.agent.dhcp.agent [None req-1d3fa4c5-3ee5-4776-9bcc-e61b7c324eb1 - - - - - -] DHCP configuration for ports {'a7702722-30a4-4d91-b3a3-5895b6be3ef3'} is completed
Oct 13 15:43:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:17.961 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:17Z, description=, device_id=3f223ac2-1fc7-4557-8ed6-15ac15e4958d, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890e5430>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890e5460>], id=a7702722-30a4-4d91-b3a3-5895b6be3ef3, ip_allocation=immediate, mac_address=fa:16:3e:bb:c7:66, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:13Z, description=, dns_domain=, id=652e654a-6470-4c94-8131-b591b2c48f52, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroup264TestJSON-640864463-network, port_security_enabled=True, project_id=517eaf033d3a490ba9e558b9a65a8c18, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48548, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=675, status=ACTIVE, subnets=['385e922b-28fc-46bf-b734-4107128c044f'], tags=[], tenant_id=517eaf033d3a490ba9e558b9a65a8c18, updated_at=2025-10-13T15:43:14Z, vlan_transparent=None, network_id=652e654a-6470-4c94-8131-b591b2c48f52, port_security_enabled=False, project_id=517eaf033d3a490ba9e558b9a65a8c18, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=706, status=DOWN, tags=[], tenant_id=517eaf033d3a490ba9e558b9a65a8c18, updated_at=2025-10-13T15:43:17Z on network 652e654a-6470-4c94-8131-b591b2c48f52
Oct 13 15:43:18 standalone.localdomain podman[528599]: 
Oct 13 15:43:18 standalone.localdomain dnsmasq[528433]: read /var/lib/neutron/dhcp/652e654a-6470-4c94-8131-b591b2c48f52/addn_hosts - 1 addresses
Oct 13 15:43:18 standalone.localdomain podman[528611]: 2025-10-13 15:43:18.226216702 +0000 UTC m=+0.066822458 container kill c805771dcc168af9afe9bc239b35f2d5c8d845f83d2ff6c77411558d7d8455f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-652e654a-6470-4c94-8131-b591b2c48f52, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:18 standalone.localdomain dnsmasq-dhcp[528433]: read /var/lib/neutron/dhcp/652e654a-6470-4c94-8131-b591b2c48f52/host
Oct 13 15:43:18 standalone.localdomain dnsmasq-dhcp[528433]: read /var/lib/neutron/dhcp/652e654a-6470-4c94-8131-b591b2c48f52/opts
Oct 13 15:43:18 standalone.localdomain podman[528599]: 2025-10-13 15:43:18.168517785 +0000 UTC m=+0.035992284 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:43:18 standalone.localdomain podman[528599]: 2025-10-13 15:43:18.267677342 +0000 UTC m=+0.135151761 container create a7c7d372c89e120b4c6bf2b3382152e44ddd7a90c2b79263069879cab5e9696f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d44d7126-23d6-48cf-9909-a6e1eb5b5797, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:43:18 standalone.localdomain systemd[1]: Started libpod-conmon-a7c7d372c89e120b4c6bf2b3382152e44ddd7a90c2b79263069879cab5e9696f.scope.
Oct 13 15:43:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:43:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54538332a3851e7cda28a866980ba50aaeb16370c123a6236944822f71563aea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:43:18 standalone.localdomain podman[528599]: 2025-10-13 15:43:18.332955991 +0000 UTC m=+0.200430410 container init a7c7d372c89e120b4c6bf2b3382152e44ddd7a90c2b79263069879cab5e9696f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d44d7126-23d6-48cf-9909-a6e1eb5b5797, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:43:18 standalone.localdomain podman[528599]: 2025-10-13 15:43:18.341685399 +0000 UTC m=+0.209159798 container start a7c7d372c89e120b4c6bf2b3382152e44ddd7a90c2b79263069879cab5e9696f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d44d7126-23d6-48cf-9909-a6e1eb5b5797, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:43:18 standalone.localdomain dnsmasq[528639]: started, version 2.85 cachesize 150
Oct 13 15:43:18 standalone.localdomain dnsmasq[528639]: DNS service limited to local subnets
Oct 13 15:43:18 standalone.localdomain dnsmasq[528639]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:43:18 standalone.localdomain dnsmasq[528639]: warning: no upstream servers configured
Oct 13 15:43:18 standalone.localdomain dnsmasq-dhcp[528639]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:43:18 standalone.localdomain dnsmasq[528639]: read /var/lib/neutron/dhcp/d44d7126-23d6-48cf-9909-a6e1eb5b5797/addn_hosts - 0 addresses
Oct 13 15:43:18 standalone.localdomain dnsmasq-dhcp[528639]: read /var/lib/neutron/dhcp/d44d7126-23d6-48cf-9909-a6e1eb5b5797/host
Oct 13 15:43:18 standalone.localdomain dnsmasq-dhcp[528639]: read /var/lib/neutron/dhcp/d44d7126-23d6-48cf-9909-a6e1eb5b5797/opts
Oct 13 15:43:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:18.411 496978 INFO neutron.agent.dhcp.agent [None req-8ecb04b2-fef9-4c93-aaeb-f56b6a15ed72 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:17Z, description=, device_id=7ef56836-a806-40d6-a3f1-e21614b7d7d9, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188916ce80>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188916cc40>], id=c4142f00-3794-4087-840d-7c1f1a580c1a, ip_allocation=immediate, mac_address=fa:16:3e:8e:c3:0d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:15Z, description=, dns_domain=, id=d44d7126-23d6-48cf-9909-a6e1eb5b5797, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsNegativeTestJSON-76329201-network, port_security_enabled=True, project_id=64beb98988de48a5a53cdc81d039c522, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30536, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=686, status=ACTIVE, subnets=['1a976448-95e3-4197-8acd-757e360b8e5b'], tags=[], tenant_id=64beb98988de48a5a53cdc81d039c522, updated_at=2025-10-13T15:43:15Z, vlan_transparent=None, network_id=d44d7126-23d6-48cf-9909-a6e1eb5b5797, port_security_enabled=False, project_id=64beb98988de48a5a53cdc81d039c522, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=713, status=DOWN, tags=[], tenant_id=64beb98988de48a5a53cdc81d039c522, updated_at=2025-10-13T15:43:17Z on network d44d7126-23d6-48cf-9909-a6e1eb5b5797
Oct 13 15:43:18 standalone.localdomain ceph-mon[29756]: pgmap v3867: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 29 KiB/s rd, 3.2 KiB/s wr, 39 op/s
Oct 13 15:43:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:18.537 496978 INFO neutron.agent.dhcp.agent [None req-0dbb5941-fdac-4233-837e-76cd311b035a - - - - - -] DHCP configuration for ports {'a7702722-30a4-4d91-b3a3-5895b6be3ef3', '7695212c-6e06-4240-80c5-adb56caa9603'} is completed
Oct 13 15:43:18 standalone.localdomain dnsmasq[528639]: read /var/lib/neutron/dhcp/d44d7126-23d6-48cf-9909-a6e1eb5b5797/addn_hosts - 1 addresses
Oct 13 15:43:18 standalone.localdomain dnsmasq-dhcp[528639]: read /var/lib/neutron/dhcp/d44d7126-23d6-48cf-9909-a6e1eb5b5797/host
Oct 13 15:43:18 standalone.localdomain dnsmasq-dhcp[528639]: read /var/lib/neutron/dhcp/d44d7126-23d6-48cf-9909-a6e1eb5b5797/opts
Oct 13 15:43:18 standalone.localdomain podman[528660]: 2025-10-13 15:43:18.656340368 +0000 UTC m=+0.054107359 container kill a7c7d372c89e120b4c6bf2b3382152e44ddd7a90c2b79263069879cab5e9696f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d44d7126-23d6-48cf-9909-a6e1eb5b5797, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:43:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:43:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3383272458' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:43:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:43:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3383272458' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:43:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:18.941 496978 INFO neutron.agent.dhcp.agent [None req-70bede0e-6f6c-4dca-9ee7-0d67b4f53379 - - - - - -] DHCP configuration for ports {'c4142f00-3794-4087-840d-7c1f1a580c1a'} is completed
Oct 13 15:43:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:19.307 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:17Z, description=, device_id=7ef56836-a806-40d6-a3f1-e21614b7d7d9, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18899d5310>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892b85e0>], id=c4142f00-3794-4087-840d-7c1f1a580c1a, ip_allocation=immediate, mac_address=fa:16:3e:8e:c3:0d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:15Z, description=, dns_domain=, id=d44d7126-23d6-48cf-9909-a6e1eb5b5797, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsNegativeTestJSON-76329201-network, port_security_enabled=True, project_id=64beb98988de48a5a53cdc81d039c522, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30536, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=686, status=ACTIVE, subnets=['1a976448-95e3-4197-8acd-757e360b8e5b'], tags=[], tenant_id=64beb98988de48a5a53cdc81d039c522, updated_at=2025-10-13T15:43:15Z, vlan_transparent=None, network_id=d44d7126-23d6-48cf-9909-a6e1eb5b5797, port_security_enabled=False, project_id=64beb98988de48a5a53cdc81d039c522, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=713, status=DOWN, tags=[], tenant_id=64beb98988de48a5a53cdc81d039c522, updated_at=2025-10-13T15:43:17Z on network d44d7126-23d6-48cf-9909-a6e1eb5b5797
Oct 13 15:43:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:19.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3383272458' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:43:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3383272458' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:43:19 standalone.localdomain systemd[1]: tmp-crun.FFPIzZ.mount: Deactivated successfully.
Oct 13 15:43:19 standalone.localdomain dnsmasq[528639]: read /var/lib/neutron/dhcp/d44d7126-23d6-48cf-9909-a6e1eb5b5797/addn_hosts - 1 addresses
Oct 13 15:43:19 standalone.localdomain dnsmasq-dhcp[528639]: read /var/lib/neutron/dhcp/d44d7126-23d6-48cf-9909-a6e1eb5b5797/host
Oct 13 15:43:19 standalone.localdomain dnsmasq-dhcp[528639]: read /var/lib/neutron/dhcp/d44d7126-23d6-48cf-9909-a6e1eb5b5797/opts
Oct 13 15:43:19 standalone.localdomain podman[528699]: 2025-10-13 15:43:19.540659175 +0000 UTC m=+0.061603738 container kill a7c7d372c89e120b4c6bf2b3382152e44ddd7a90c2b79263069879cab5e9696f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d44d7126-23d6-48cf-9909-a6e1eb5b5797, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:43:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3868: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:19.862 496978 INFO neutron.agent.dhcp.agent [None req-b803c1d8-c6ef-4f6f-b629-e8573f5b60e3 - - - - - -] DHCP configuration for ports {'c4142f00-3794-4087-840d-7c1f1a580c1a'} is completed
Oct 13 15:43:20 standalone.localdomain ceph-mon[29756]: pgmap v3868: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:43:20 standalone.localdomain podman[528719]: 2025-10-13 15:43:20.799155666 +0000 UTC m=+0.070073827 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:43:20 standalone.localdomain podman[528719]: 2025-10-13 15:43:20.81690343 +0000 UTC m=+0.087821611 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:43:20 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:43:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:20.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:20.915 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:20Z, description=, device_id=ae1e46ad-d3da-490d-ab59-22a6fc45375a, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889171550>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889178490>], id=da021f74-3ae4-4991-b89c-821634b58a09, ip_allocation=immediate, mac_address=fa:16:3e:a1:4a:04, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=730, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:43:20Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:43:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:21.159 496978 INFO neutron.agent.linux.ip_lib [None req-8d6a106f-223d-4aa9-b1cc-e199400781ea - - - - - -] Device tap931de190-5d cannot be used as it has no MAC address
Oct 13 15:43:21 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 4 addresses
Oct 13 15:43:21 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:21 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:21 standalone.localdomain podman[528763]: 2025-10-13 15:43:21.172601795 +0000 UTC m=+0.064369982 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:43:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:21.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:21 standalone.localdomain kernel: device tap931de190-5d entered promiscuous mode
Oct 13 15:43:21 standalone.localdomain NetworkManager[5962]: <info>  [1760370201.2093] manager: (tap931de190-5d): new Generic device (/org/freedesktop/NetworkManager/Devices/28)
Oct 13 15:43:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:21Z|00089|binding|INFO|Claiming lport 931de190-5d1b-4dde-9904-56c18df9213a for this chassis.
Oct 13 15:43:21 standalone.localdomain systemd-udevd[528785]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:43:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:21Z|00090|binding|INFO|931de190-5d1b-4dde-9904-56c18df9213a: Claiming unknown
Oct 13 15:43:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:21.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:21.233 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-32a9dbae-2e38-4407-9d1b-3340c23ceaac', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32a9dbae-2e38-4407-9d1b-3340c23ceaac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21021b5518144ee590654b7f7154816b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad85b933-b50e-427a-b768-3c7127288675, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=931de190-5d1b-4dde-9904-56c18df9213a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:21.235 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 931de190-5d1b-4dde-9904-56c18df9213a in datapath 32a9dbae-2e38-4407-9d1b-3340c23ceaac bound to our chassis
Oct 13 15:43:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:21.239 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port f0adced7-984b-48ac-beb3-c601716adb74 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:43:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:21.240 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32a9dbae-2e38-4407-9d1b-3340c23ceaac, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:43:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:21.240 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2d10f6-4b5c-4e38-b21e-acad8c918dde]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap931de190-5d: No such device
Oct 13 15:43:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap931de190-5d: No such device
Oct 13 15:43:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:21Z|00091|binding|INFO|Setting lport 931de190-5d1b-4dde-9904-56c18df9213a ovn-installed in OVS
Oct 13 15:43:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:21Z|00092|binding|INFO|Setting lport 931de190-5d1b-4dde-9904-56c18df9213a up in Southbound
Oct 13 15:43:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap931de190-5d: No such device
Oct 13 15:43:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap931de190-5d: No such device
Oct 13 15:43:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap931de190-5d: No such device
Oct 13 15:43:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap931de190-5d: No such device
Oct 13 15:43:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap931de190-5d: No such device
Oct 13 15:43:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap931de190-5d: No such device
Oct 13 15:43:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:21.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:21.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:21.432 496978 INFO neutron.agent.dhcp.agent [None req-ff874b83-2857-441d-acaa-403b5491cf18 - - - - - -] DHCP configuration for ports {'da021f74-3ae4-4991-b89c-821634b58a09'} is completed
Oct 13 15:43:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3869: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:21.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:21 standalone.localdomain dnsmasq[528433]: read /var/lib/neutron/dhcp/652e654a-6470-4c94-8131-b591b2c48f52/addn_hosts - 0 addresses
Oct 13 15:43:21 standalone.localdomain dnsmasq-dhcp[528433]: read /var/lib/neutron/dhcp/652e654a-6470-4c94-8131-b591b2c48f52/host
Oct 13 15:43:21 standalone.localdomain dnsmasq-dhcp[528433]: read /var/lib/neutron/dhcp/652e654a-6470-4c94-8131-b591b2c48f52/opts
Oct 13 15:43:21 standalone.localdomain podman[528852]: 2025-10-13 15:43:21.795668801 +0000 UTC m=+0.083549930 container kill c805771dcc168af9afe9bc239b35f2d5c8d845f83d2ff6c77411558d7d8455f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-652e654a-6470-4c94-8131-b591b2c48f52, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:43:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:21.861 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:da:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'c6:b9:d4:9e:30:db'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:21.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:21.863 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 13 15:43:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:21.864 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b9e3f7-f5c9-4d48-9eca-66995f72d494, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:43:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:21Z|00093|binding|INFO|Releasing lport 07c37f3e-6610-45ba-ba12-dc6ac3fc3363 from this chassis (sb_readonly=0)
Oct 13 15:43:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:21.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:21Z|00094|binding|INFO|Setting lport 07c37f3e-6610-45ba-ba12-dc6ac3fc3363 down in Southbound
Oct 13 15:43:21 standalone.localdomain kernel: device tap07c37f3e-66 left promiscuous mode
Oct 13 15:43:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:21.994 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-652e654a-6470-4c94-8131-b591b2c48f52', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-652e654a-6470-4c94-8131-b591b2c48f52', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '517eaf033d3a490ba9e558b9a65a8c18', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a8badb8-349f-436c-8168-fc044ceb20ed, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=07c37f3e-6610-45ba-ba12-dc6ac3fc3363) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:21.996 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 07c37f3e-6610-45ba-ba12-dc6ac3fc3363 in datapath 652e654a-6470-4c94-8131-b591b2c48f52 unbound from our chassis
Oct 13 15:43:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:22.000 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 652e654a-6470-4c94-8131-b591b2c48f52, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:43:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:22.001 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[029917ff-ed2f-42fd-a60d-c50cc0420ad7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:22.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:22 standalone.localdomain podman[528902]: 
Oct 13 15:43:22 standalone.localdomain podman[528902]: 2025-10-13 15:43:22.299342319 +0000 UTC m=+0.095280319 container create d29b3a6ca3b47a3170c7d531ad31fbdedc6772d19948d0e15b4a83595900e6d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-32a9dbae-2e38-4407-9d1b-3340c23ceaac, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:22 standalone.localdomain systemd[1]: Started libpod-conmon-d29b3a6ca3b47a3170c7d531ad31fbdedc6772d19948d0e15b4a83595900e6d8.scope.
Oct 13 15:43:22 standalone.localdomain podman[528902]: 2025-10-13 15:43:22.252238316 +0000 UTC m=+0.048176366 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:43:22 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:43:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d7ac5f8f211c05ec6625db5e75afce4b59fd5892be07a56d45495df6e5d327/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:43:22 standalone.localdomain podman[528902]: 2025-10-13 15:43:22.373661566 +0000 UTC m=+0.169599576 container init d29b3a6ca3b47a3170c7d531ad31fbdedc6772d19948d0e15b4a83595900e6d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-32a9dbae-2e38-4407-9d1b-3340c23ceaac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:43:22 standalone.localdomain podman[528902]: 2025-10-13 15:43:22.380904718 +0000 UTC m=+0.176842728 container start d29b3a6ca3b47a3170c7d531ad31fbdedc6772d19948d0e15b4a83595900e6d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-32a9dbae-2e38-4407-9d1b-3340c23ceaac, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:43:22 standalone.localdomain dnsmasq[528920]: started, version 2.85 cachesize 150
Oct 13 15:43:22 standalone.localdomain dnsmasq[528920]: DNS service limited to local subnets
Oct 13 15:43:22 standalone.localdomain dnsmasq[528920]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:43:22 standalone.localdomain dnsmasq[528920]: warning: no upstream servers configured
Oct 13 15:43:22 standalone.localdomain dnsmasq-dhcp[528920]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:43:22 standalone.localdomain dnsmasq[528920]: read /var/lib/neutron/dhcp/32a9dbae-2e38-4407-9d1b-3340c23ceaac/addn_hosts - 0 addresses
Oct 13 15:43:22 standalone.localdomain dnsmasq-dhcp[528920]: read /var/lib/neutron/dhcp/32a9dbae-2e38-4407-9d1b-3340c23ceaac/host
Oct 13 15:43:22 standalone.localdomain dnsmasq-dhcp[528920]: read /var/lib/neutron/dhcp/32a9dbae-2e38-4407-9d1b-3340c23ceaac/opts
Oct 13 15:43:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:43:22 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:22.580 496978 INFO neutron.agent.dhcp.agent [None req-3c51608f-2e9e-4790-b155-a185c88a3d99 - - - - - -] DHCP configuration for ports {'3f054c33-cd78-4e31-ab14-f2657507cd15'} is completed
Oct 13 15:43:22 standalone.localdomain ceph-mon[29756]: pgmap v3869: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:22 standalone.localdomain dnsmasq[528639]: read /var/lib/neutron/dhcp/d44d7126-23d6-48cf-9909-a6e1eb5b5797/addn_hosts - 0 addresses
Oct 13 15:43:22 standalone.localdomain dnsmasq-dhcp[528639]: read /var/lib/neutron/dhcp/d44d7126-23d6-48cf-9909-a6e1eb5b5797/host
Oct 13 15:43:22 standalone.localdomain dnsmasq-dhcp[528639]: read /var/lib/neutron/dhcp/d44d7126-23d6-48cf-9909-a6e1eb5b5797/opts
Oct 13 15:43:22 standalone.localdomain podman[528938]: 2025-10-13 15:43:22.921911249 +0000 UTC m=+0.066202248 container kill a7c7d372c89e120b4c6bf2b3382152e44ddd7a90c2b79263069879cab5e9696f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d44d7126-23d6-48cf-9909-a6e1eb5b5797, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:23Z|00095|binding|INFO|Releasing lport 93da4689-f6ba-4750-b7f0-96bc35769995 from this chassis (sb_readonly=0)
Oct 13 15:43:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:23Z|00096|binding|INFO|Setting lport 93da4689-f6ba-4750-b7f0-96bc35769995 down in Southbound
Oct 13 15:43:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:23.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:23 standalone.localdomain kernel: device tap93da4689-f6 left promiscuous mode
Oct 13 15:43:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:23.129 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d44d7126-23d6-48cf-9909-a6e1eb5b5797', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d44d7126-23d6-48cf-9909-a6e1eb5b5797', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64beb98988de48a5a53cdc81d039c522', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b548e3a-eb7f-4ac4-94a3-73a66c656cf7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=93da4689-f6ba-4750-b7f0-96bc35769995) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:23.132 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 93da4689-f6ba-4750-b7f0-96bc35769995 in datapath d44d7126-23d6-48cf-9909-a6e1eb5b5797 unbound from our chassis
Oct 13 15:43:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:23.135 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d44d7126-23d6-48cf-9909-a6e1eb5b5797, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:43:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:23.137 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[39eb253b-b088-4488-9065-95b9a959f59a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:43:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:23.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:23 standalone.localdomain podman[528962]: 2025-10-13 15:43:23.238727604 +0000 UTC m=+0.089096029 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 15:43:23 standalone.localdomain podman[528962]: 2025-10-13 15:43:23.247425521 +0000 UTC m=+0.097793906 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:23 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:43:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:23.329 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:22Z, description=, device_id=ae1e46ad-d3da-490d-ab59-22a6fc45375a, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891119a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889111610>], id=a7976192-f8e9-466d-84b3-2381b48637d6, ip_allocation=immediate, mac_address=fa:16:3e:ae:05:b9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:17Z, description=, dns_domain=, id=32a9dbae-2e38-4407-9d1b-3340c23ceaac, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesTestJSON-1283891462-network, port_security_enabled=True, project_id=21021b5518144ee590654b7f7154816b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54913, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=712, status=ACTIVE, subnets=['16cca762-2e5a-4ab9-8141-0e89216a5374'], tags=[], tenant_id=21021b5518144ee590654b7f7154816b, updated_at=2025-10-13T15:43:19Z, vlan_transparent=None, network_id=32a9dbae-2e38-4407-9d1b-3340c23ceaac, port_security_enabled=False, project_id=21021b5518144ee590654b7f7154816b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=754, status=DOWN, tags=[], tenant_id=21021b5518144ee590654b7f7154816b, updated_at=2025-10-13T15:43:22Z on network 32a9dbae-2e38-4407-9d1b-3340c23ceaac
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:43:23
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['images', 'vms', 'manila_data', 'volumes', 'manila_metadata', 'backups', '.mgr']
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:43:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28758 DF PROTO=TCP SPT=55294 DPT=9102 SEQ=2412212939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCE793E0000000001030307) 
Oct 13 15:43:23 standalone.localdomain dnsmasq[528920]: read /var/lib/neutron/dhcp/32a9dbae-2e38-4407-9d1b-3340c23ceaac/addn_hosts - 1 addresses
Oct 13 15:43:23 standalone.localdomain dnsmasq-dhcp[528920]: read /var/lib/neutron/dhcp/32a9dbae-2e38-4407-9d1b-3340c23ceaac/host
Oct 13 15:43:23 standalone.localdomain podman[528998]: 2025-10-13 15:43:23.552100193 +0000 UTC m=+0.063393082 container kill d29b3a6ca3b47a3170c7d531ad31fbdedc6772d19948d0e15b4a83595900e6d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-32a9dbae-2e38-4407-9d1b-3340c23ceaac, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:43:23 standalone.localdomain dnsmasq-dhcp[528920]: read /var/lib/neutron/dhcp/32a9dbae-2e38-4407-9d1b-3340c23ceaac/opts
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3870: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:43:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:43:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:23.842 496978 INFO neutron.agent.dhcp.agent [None req-b49a526a-c2a4-4d96-a684-f7c9d0b87528 - - - - - -] DHCP configuration for ports {'a7976192-f8e9-466d-84b3-2381b48637d6'} is completed
Oct 13 15:43:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:24.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28759 DF PROTO=TCP SPT=55294 DPT=9102 SEQ=2412212939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCE7D360000000001030307) 
Oct 13 15:43:24 standalone.localdomain ceph-mon[29756]: pgmap v3870: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:24 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:24Z|00097|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:43:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:24.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:24 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:43:24 standalone.localdomain podman[529036]: 2025-10-13 15:43:24.974295838 +0000 UTC m=+0.102322025 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS)
Oct 13 15:43:24 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:24 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:25 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:25Z|00098|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:43:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:25.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:25.065 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:22Z, description=, device_id=ae1e46ad-d3da-490d-ab59-22a6fc45375a, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188912c730>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188912ca00>], id=a7976192-f8e9-466d-84b3-2381b48637d6, ip_allocation=immediate, mac_address=fa:16:3e:ae:05:b9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:17Z, description=, dns_domain=, id=32a9dbae-2e38-4407-9d1b-3340c23ceaac, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesTestJSON-1283891462-network, port_security_enabled=True, project_id=21021b5518144ee590654b7f7154816b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54913, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=712, status=ACTIVE, subnets=['16cca762-2e5a-4ab9-8141-0e89216a5374'], tags=[], tenant_id=21021b5518144ee590654b7f7154816b, updated_at=2025-10-13T15:43:19Z, vlan_transparent=None, network_id=32a9dbae-2e38-4407-9d1b-3340c23ceaac, port_security_enabled=False, project_id=21021b5518144ee590654b7f7154816b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=754, status=DOWN, tags=[], tenant_id=21021b5518144ee590654b7f7154816b, updated_at=2025-10-13T15:43:22Z on network 32a9dbae-2e38-4407-9d1b-3340c23ceaac
Oct 13 15:43:25 standalone.localdomain dnsmasq[528433]: exiting on receipt of SIGTERM
Oct 13 15:43:25 standalone.localdomain podman[529115]: 2025-10-13 15:43:25.352504224 +0000 UTC m=+0.046153725 container kill c805771dcc168af9afe9bc239b35f2d5c8d845f83d2ff6c77411558d7d8455f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-652e654a-6470-4c94-8131-b591b2c48f52, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:43:25 standalone.localdomain systemd[1]: libpod-c805771dcc168af9afe9bc239b35f2d5c8d845f83d2ff6c77411558d7d8455f9.scope: Deactivated successfully.
Oct 13 15:43:25 standalone.localdomain systemd[1]: tmp-crun.YkgtIL.mount: Deactivated successfully.
Oct 13 15:43:25 standalone.localdomain podman[529125]: 2025-10-13 15:43:25.39416893 +0000 UTC m=+0.069758988 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 13 15:43:25 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:43:25 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:25 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:25 standalone.localdomain dnsmasq[528920]: read /var/lib/neutron/dhcp/32a9dbae-2e38-4407-9d1b-3340c23ceaac/addn_hosts - 1 addresses
Oct 13 15:43:25 standalone.localdomain dnsmasq-dhcp[528920]: read /var/lib/neutron/dhcp/32a9dbae-2e38-4407-9d1b-3340c23ceaac/host
Oct 13 15:43:25 standalone.localdomain podman[529090]: 2025-10-13 15:43:25.408899851 +0000 UTC m=+0.168901584 container kill d29b3a6ca3b47a3170c7d531ad31fbdedc6772d19948d0e15b4a83595900e6d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-32a9dbae-2e38-4407-9d1b-3340c23ceaac, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:43:25 standalone.localdomain dnsmasq-dhcp[528920]: read /var/lib/neutron/dhcp/32a9dbae-2e38-4407-9d1b-3340c23ceaac/opts
Oct 13 15:43:25 standalone.localdomain podman[529143]: 2025-10-13 15:43:25.459712977 +0000 UTC m=+0.099551000 container died c805771dcc168af9afe9bc239b35f2d5c8d845f83d2ff6c77411558d7d8455f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-652e654a-6470-4c94-8131-b591b2c48f52, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:43:25 standalone.localdomain podman[529143]: 2025-10-13 15:43:25.488396857 +0000 UTC m=+0.128234880 container cleanup c805771dcc168af9afe9bc239b35f2d5c8d845f83d2ff6c77411558d7d8455f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-652e654a-6470-4c94-8131-b591b2c48f52, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:43:25 standalone.localdomain systemd[1]: libpod-conmon-c805771dcc168af9afe9bc239b35f2d5c8d845f83d2ff6c77411558d7d8455f9.scope: Deactivated successfully.
Oct 13 15:43:25 standalone.localdomain podman[529154]: 2025-10-13 15:43:25.533792667 +0000 UTC m=+0.149336226 container remove c805771dcc168af9afe9bc239b35f2d5c8d845f83d2ff6c77411558d7d8455f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-652e654a-6470-4c94-8131-b591b2c48f52, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:43:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:25.573 496978 INFO neutron.agent.dhcp.agent [None req-8f710e1e-2e0a-4373-b883-e6076f414d76 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:43:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:25.574 496978 INFO neutron.agent.dhcp.agent [None req-8f710e1e-2e0a-4373-b883-e6076f414d76 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:43:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3871: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:25.754 496978 INFO neutron.agent.dhcp.agent [None req-0bb92fac-290c-418e-aabb-92deb060af1e - - - - - -] DHCP configuration for ports {'a7976192-f8e9-466d-84b3-2381b48637d6'} is completed
Oct 13 15:43:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:25.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-82ce6502512f8d0c4fc50600665cca4377d845b6901efdf329923fe48e466ada-merged.mount: Deactivated successfully.
Oct 13 15:43:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c805771dcc168af9afe9bc239b35f2d5c8d845f83d2ff6c77411558d7d8455f9-userdata-shm.mount: Deactivated successfully.
Oct 13 15:43:25 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d652e654a\x2d6470\x2d4c94\x2d8131\x2db591b2c48f52.mount: Deactivated successfully.
Oct 13 15:43:25 standalone.localdomain dnsmasq[528639]: exiting on receipt of SIGTERM
Oct 13 15:43:25 standalone.localdomain podman[529210]: 2025-10-13 15:43:25.982673446 +0000 UTC m=+0.095298489 container kill a7c7d372c89e120b4c6bf2b3382152e44ddd7a90c2b79263069879cab5e9696f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d44d7126-23d6-48cf-9909-a6e1eb5b5797, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:43:25 standalone.localdomain systemd[1]: libpod-a7c7d372c89e120b4c6bf2b3382152e44ddd7a90c2b79263069879cab5e9696f.scope: Deactivated successfully.
Oct 13 15:43:26 standalone.localdomain podman[529225]: 2025-10-13 15:43:26.040444026 +0000 UTC m=+0.043702570 container died a7c7d372c89e120b4c6bf2b3382152e44ddd7a90c2b79263069879cab5e9696f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d44d7126-23d6-48cf-9909-a6e1eb5b5797, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:43:26 standalone.localdomain podman[529225]: 2025-10-13 15:43:26.064551454 +0000 UTC m=+0.067809958 container cleanup a7c7d372c89e120b4c6bf2b3382152e44ddd7a90c2b79263069879cab5e9696f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d44d7126-23d6-48cf-9909-a6e1eb5b5797, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:43:26 standalone.localdomain systemd[1]: libpod-conmon-a7c7d372c89e120b4c6bf2b3382152e44ddd7a90c2b79263069879cab5e9696f.scope: Deactivated successfully.
Oct 13 15:43:26 standalone.localdomain podman[529227]: 2025-10-13 15:43:26.116883508 +0000 UTC m=+0.110968061 container remove a7c7d372c89e120b4c6bf2b3382152e44ddd7a90c2b79263069879cab5e9696f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d44d7126-23d6-48cf-9909-a6e1eb5b5797, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:43:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:26.310 496978 INFO neutron.agent.dhcp.agent [None req-0209fe25-433d-4fb4-8792-46cf371c3328 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:43:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:26.311 496978 INFO neutron.agent.dhcp.agent [None req-0209fe25-433d-4fb4-8792-46cf371c3328 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:43:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:26.432 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:43:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28760 DF PROTO=TCP SPT=55294 DPT=9102 SEQ=2412212939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCE85360000000001030307) 
Oct 13 15:43:26 standalone.localdomain ceph-mon[29756]: pgmap v3871: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-54538332a3851e7cda28a866980ba50aaeb16370c123a6236944822f71563aea-merged.mount: Deactivated successfully.
Oct 13 15:43:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7c7d372c89e120b4c6bf2b3382152e44ddd7a90c2b79263069879cab5e9696f-userdata-shm.mount: Deactivated successfully.
Oct 13 15:43:26 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dd44d7126\x2d23d6\x2d48cf\x2d9909\x2da6e1eb5b5797.mount: Deactivated successfully.
Oct 13 15:43:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:43:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3872: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:28 standalone.localdomain ceph-mon[29756]: pgmap v3872: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:28 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:43:28.591 2 INFO neutron.agent.securitygroups_rpc [req-fec7f7d6-10b4-47cc-8641-88910b094a30 req-642f7f9a-c5db-4abe-beab-51237f1786e8 81cd5e08497a41f2a73de783bfb5523f 21021b5518144ee590654b7f7154816b - - default default] Security group rule updated ['e187fa2d-888e-46ca-b2ca-1ec586942001']
Oct 13 15:43:28 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:43:28.955 2 INFO neutron.agent.securitygroups_rpc [req-aaf6fb9e-c2da-4e15-bad1-5159cc99171c req-3fb9e731-b4b4-4838-9de1-287c591807a6 81cd5e08497a41f2a73de783bfb5523f 21021b5518144ee590654b7f7154816b - - default default] Security group rule updated ['ddeb77c3-c7b0-49f2-bc3d-f4357eb23eec']
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006647822445561139 of space, bias 1.0, pg target 0.6647822445561139 quantized to 32 (current 32)
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.012942315745393635 of space, bias 1.0, pg target 1.2942315745393635 quantized to 32 (current 32)
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.29775832286432163 quantized to 32 (current 32)
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 13 15:43:29 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:29.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3873: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:29 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:43:29.754 2 INFO neutron.agent.securitygroups_rpc [req-0ad2aea2-b747-41de-bdce-e167df47aaec req-f4811f4d-788d-420a-ac4c-01458610290a 81cd5e08497a41f2a73de783bfb5523f 21021b5518144ee590654b7f7154816b - - default default] Security group rule updated ['60d28831-6070-4bc6-9e33-40f50a97bbdf']
Oct 13 15:43:30 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:43:30.587 2 INFO neutron.agent.securitygroups_rpc [req-ca5c3d2e-0ed2-43db-b9a6-107a5fec814e req-7e3044ec-16c1-4832-b166-223fc6c60616 81cd5e08497a41f2a73de783bfb5523f 21021b5518144ee590654b7f7154816b - - default default] Security group rule updated ['87df4d5b-2a7c-4fe3-8aa9-7caf3477658c']
Oct 13 15:43:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28761 DF PROTO=TCP SPT=55294 DPT=9102 SEQ=2412212939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCE94F70000000001030307) 
Oct 13 15:43:30 standalone.localdomain ceph-mon[29756]: pgmap v3873: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:30 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:30.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:31 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:31.589 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:31Z, description=, device_id=35ae121e-2ac9-4ef9-8c70-41b932b0a80a, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892995b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890c2880>], id=bc4ab67b-7447-4a44-b033-e0e2d1224fa8, ip_allocation=immediate, mac_address=fa:16:3e:3c:31:ff, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=814, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:43:31Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:43:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3874: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:31 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:43:31.688 2 INFO neutron.agent.securitygroups_rpc [req-9c5c60a0-cc7f-475c-a6a5-d5a6eb7c7ff8 req-c8a67c56-72e9-4457-b330-d818dd96592f 81cd5e08497a41f2a73de783bfb5523f 21021b5518144ee590654b7f7154816b - - default default] Security group rule updated ['de62c621-0d19-465b-b5a8-72646daefb77']
Oct 13 15:43:31 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:43:31 standalone.localdomain podman[529273]: 2025-10-13 15:43:31.848400276 +0000 UTC m=+0.072142181 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:43:31 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:31 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:31 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:31.972 496978 INFO neutron.agent.linux.ip_lib [None req-816adf38-2130-47ad-bc89-26908360b2fc - - - - - -] Device tapbaa3eb91-3e cannot be used as it has no MAC address
Oct 13 15:43:32 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:43:32.027 2 INFO neutron.agent.securitygroups_rpc [req-ffbadd81-ce2e-461d-8dc9-5ce4591090fb req-e677f8de-3b0a-43fb-b606-c4e81cd30b4a 81cd5e08497a41f2a73de783bfb5523f 21021b5518144ee590654b7f7154816b - - default default] Security group rule updated ['de62c621-0d19-465b-b5a8-72646daefb77']
Oct 13 15:43:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:32.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:32 standalone.localdomain kernel: device tapbaa3eb91-3e entered promiscuous mode
Oct 13 15:43:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:32.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:32Z|00099|binding|INFO|Claiming lport baa3eb91-3e7f-4a3c-a691-a11c83979add for this chassis.
Oct 13 15:43:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:32Z|00100|binding|INFO|baa3eb91-3e7f-4a3c-a691-a11c83979add: Claiming unknown
Oct 13 15:43:32 standalone.localdomain NetworkManager[5962]: <info>  [1760370212.0878] manager: (tapbaa3eb91-3e): new Generic device (/org/freedesktop/NetworkManager/Devices/29)
Oct 13 15:43:32 standalone.localdomain systemd-udevd[529303]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:43:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:32.093 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f9684289-9ed5-4a83-855f-6832281c6ea8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f9684289-9ed5-4a83-855f-6832281c6ea8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7847f696e2214db29eb613f97c6affa3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afb25bc6-edf8-4b8e-a866-0bb6969bc134, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=baa3eb91-3e7f-4a3c-a691-a11c83979add) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:32.095 378821 INFO neutron.agent.ovn.metadata.agent [-] Port baa3eb91-3e7f-4a3c-a691-a11c83979add in datapath f9684289-9ed5-4a83-855f-6832281c6ea8 bound to our chassis
Oct 13 15:43:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:32.099 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 669dc002-5fdc-46a8-9a66-f49ce02e8ee6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:43:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:32.099 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f9684289-9ed5-4a83-855f-6832281c6ea8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:43:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:32.100 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[967057cf-2dc5-4de7-ba0a-ff6269dec709]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:32.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:32Z|00101|binding|INFO|Setting lport baa3eb91-3e7f-4a3c-a691-a11c83979add ovn-installed in OVS
Oct 13 15:43:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:32Z|00102|binding|INFO|Setting lport baa3eb91-3e7f-4a3c-a691-a11c83979add up in Southbound
Oct 13 15:43:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:32.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:32.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:32.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:32.251 496978 INFO neutron.agent.dhcp.agent [None req-ba1ba1ff-4655-4526-88b1-d2f0c9b97b5e - - - - - -] DHCP configuration for ports {'bc4ab67b-7447-4a44-b033-e0e2d1224fa8'} is completed
Oct 13 15:43:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:43:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:32.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:32 standalone.localdomain ceph-mon[29756]: pgmap v3874: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:32 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:43:32.731 2 INFO neutron.agent.securitygroups_rpc [req-5cb4497b-0714-48cf-b64e-de08678394bc req-e36e6a3b-03b2-48e6-9aff-f6531f84887e 81cd5e08497a41f2a73de783bfb5523f 21021b5518144ee590654b7f7154816b - - default default] Security group rule updated ['de62c621-0d19-465b-b5a8-72646daefb77']
Oct 13 15:43:33 standalone.localdomain podman[529360]: 
Oct 13 15:43:33 standalone.localdomain podman[529360]: 2025-10-13 15:43:33.299737303 +0000 UTC m=+0.109869217 container create 949e983e9346b562bc2d8e05d3d47ec773b705c30962aee72422f1ba81120cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9684289-9ed5-4a83-855f-6832281c6ea8, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:43:33 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:43:33 standalone.localdomain systemd[1]: Started libpod-conmon-949e983e9346b562bc2d8e05d3d47ec773b705c30962aee72422f1ba81120cb8.scope.
Oct 13 15:43:33 standalone.localdomain podman[529360]: 2025-10-13 15:43:33.241065276 +0000 UTC m=+0.051197270 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:43:33 standalone.localdomain systemd[1]: tmp-crun.sv1eRX.mount: Deactivated successfully.
Oct 13 15:43:33 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:43:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff618c0806afd40cc0470268c7911f8d9f395a04f324469a2529b49c509fb6d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:43:33 standalone.localdomain podman[529375]: 2025-10-13 15:43:33.432024265 +0000 UTC m=+0.088624926 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 15:43:33 standalone.localdomain podman[529360]: 2025-10-13 15:43:33.450185052 +0000 UTC m=+0.260316956 container init 949e983e9346b562bc2d8e05d3d47ec773b705c30962aee72422f1ba81120cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9684289-9ed5-4a83-855f-6832281c6ea8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:43:33 standalone.localdomain podman[529360]: 2025-10-13 15:43:33.498692727 +0000 UTC m=+0.308824641 container start 949e983e9346b562bc2d8e05d3d47ec773b705c30962aee72422f1ba81120cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9684289-9ed5-4a83-855f-6832281c6ea8, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:43:33 standalone.localdomain dnsmasq[529401]: started, version 2.85 cachesize 150
Oct 13 15:43:33 standalone.localdomain dnsmasq[529401]: DNS service limited to local subnets
Oct 13 15:43:33 standalone.localdomain dnsmasq[529401]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:43:33 standalone.localdomain dnsmasq[529401]: warning: no upstream servers configured
Oct 13 15:43:33 standalone.localdomain dnsmasq-dhcp[529401]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:43:33 standalone.localdomain dnsmasq[529401]: read /var/lib/neutron/dhcp/f9684289-9ed5-4a83-855f-6832281c6ea8/addn_hosts - 0 addresses
Oct 13 15:43:33 standalone.localdomain dnsmasq-dhcp[529401]: read /var/lib/neutron/dhcp/f9684289-9ed5-4a83-855f-6832281c6ea8/host
Oct 13 15:43:33 standalone.localdomain dnsmasq-dhcp[529401]: read /var/lib/neutron/dhcp/f9684289-9ed5-4a83-855f-6832281c6ea8/opts
Oct 13 15:43:33 standalone.localdomain podman[529375]: 2025-10-13 15:43:33.535173085 +0000 UTC m=+0.191773716 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:43:33 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:43:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:33.564 496978 INFO neutron.agent.dhcp.agent [None req-c0d3a519-2ea1-4a9a-8b2b-384961f52420 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:33Z, description=, device_id=35ae121e-2ac9-4ef9-8c70-41b932b0a80a, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891623d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889162df0>], id=ebac8f4c-103b-43cf-8315-975ac51e8cdf, ip_allocation=immediate, mac_address=fa:16:3e:73:50:4b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:29Z, description=, dns_domain=, id=f9684289-9ed5-4a83-855f-6832281c6ea8, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-1544784029-network, port_security_enabled=True, project_id=7847f696e2214db29eb613f97c6affa3, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=33348, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=791, status=ACTIVE, subnets=['348ca65b-b998-48a6-9e3a-2e12240a9c48'], tags=[], tenant_id=7847f696e2214db29eb613f97c6affa3, updated_at=2025-10-13T15:43:30Z, vlan_transparent=None, network_id=f9684289-9ed5-4a83-855f-6832281c6ea8, port_security_enabled=False, project_id=7847f696e2214db29eb613f97c6affa3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=818, status=DOWN, tags=[], tenant_id=7847f696e2214db29eb613f97c6affa3, updated_at=2025-10-13T15:43:33Z on network f9684289-9ed5-4a83-855f-6832281c6ea8
Oct 13 15:43:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:33.630 496978 INFO neutron.agent.dhcp.agent [None req-ba80ca04-50b8-43c0-9cba-dde99fe71e36 - - - - - -] DHCP configuration for ports {'3783247f-4359-457d-bcc6-3f4e6e51fa07'} is completed
Oct 13 15:43:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3875: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:33 standalone.localdomain dnsmasq[529401]: read /var/lib/neutron/dhcp/f9684289-9ed5-4a83-855f-6832281c6ea8/addn_hosts - 1 addresses
Oct 13 15:43:33 standalone.localdomain dnsmasq-dhcp[529401]: read /var/lib/neutron/dhcp/f9684289-9ed5-4a83-855f-6832281c6ea8/host
Oct 13 15:43:33 standalone.localdomain dnsmasq-dhcp[529401]: read /var/lib/neutron/dhcp/f9684289-9ed5-4a83-855f-6832281c6ea8/opts
Oct 13 15:43:33 standalone.localdomain podman[529423]: 2025-10-13 15:43:33.765644904 +0000 UTC m=+0.048819086 container kill 949e983e9346b562bc2d8e05d3d47ec773b705c30962aee72422f1ba81120cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9684289-9ed5-4a83-855f-6832281c6ea8, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:33.975 496978 INFO neutron.agent.dhcp.agent [None req-d8f71d21-7e8f-4020-aac9-f6771234900f - - - - - -] DHCP configuration for ports {'ebac8f4c-103b-43cf-8315-975ac51e8cdf'} is completed
Oct 13 15:43:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:34.425 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:33Z, description=, device_id=35ae121e-2ac9-4ef9-8c70-41b932b0a80a, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889108310>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889108c70>], id=ebac8f4c-103b-43cf-8315-975ac51e8cdf, ip_allocation=immediate, mac_address=fa:16:3e:73:50:4b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:29Z, description=, dns_domain=, id=f9684289-9ed5-4a83-855f-6832281c6ea8, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-1544784029-network, port_security_enabled=True, project_id=7847f696e2214db29eb613f97c6affa3, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=33348, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=791, status=ACTIVE, subnets=['348ca65b-b998-48a6-9e3a-2e12240a9c48'], tags=[], tenant_id=7847f696e2214db29eb613f97c6affa3, updated_at=2025-10-13T15:43:30Z, vlan_transparent=None, network_id=f9684289-9ed5-4a83-855f-6832281c6ea8, port_security_enabled=False, project_id=7847f696e2214db29eb613f97c6affa3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=818, status=DOWN, tags=[], tenant_id=7847f696e2214db29eb613f97c6affa3, updated_at=2025-10-13T15:43:33Z on network f9684289-9ed5-4a83-855f-6832281c6ea8
Oct 13 15:43:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:34.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:34 standalone.localdomain podman[529460]: 2025-10-13 15:43:34.708732903 +0000 UTC m=+0.069705386 container kill 949e983e9346b562bc2d8e05d3d47ec773b705c30962aee72422f1ba81120cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9684289-9ed5-4a83-855f-6832281c6ea8, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:43:34 standalone.localdomain dnsmasq[529401]: read /var/lib/neutron/dhcp/f9684289-9ed5-4a83-855f-6832281c6ea8/addn_hosts - 1 addresses
Oct 13 15:43:34 standalone.localdomain dnsmasq-dhcp[529401]: read /var/lib/neutron/dhcp/f9684289-9ed5-4a83-855f-6832281c6ea8/host
Oct 13 15:43:34 standalone.localdomain dnsmasq-dhcp[529401]: read /var/lib/neutron/dhcp/f9684289-9ed5-4a83-855f-6832281c6ea8/opts
Oct 13 15:43:34 standalone.localdomain ceph-mon[29756]: pgmap v3875: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:34.984 496978 INFO neutron.agent.dhcp.agent [None req-fca4d6f4-15be-4549-aa34-f8ff3f4cbab2 - - - - - -] DHCP configuration for ports {'ebac8f4c-103b-43cf-8315-975ac51e8cdf'} is completed
Oct 13 15:43:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3876: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:43:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e70 do_prune osdmap full prune enabled
Oct 13 15:43:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e71 e71: 1 total, 1 up, 1 in
Oct 13 15:43:35 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e71: 1 total, 1 up, 1 in
Oct 13 15:43:35 standalone.localdomain podman[529482]: 2025-10-13 15:43:35.824404868 +0000 UTC m=+0.090640787 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, version=9.6, distribution-scope=public, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350)
Oct 13 15:43:35 standalone.localdomain podman[529482]: 2025-10-13 15:43:35.835480227 +0000 UTC m=+0.101716176 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, version=9.6, release=1755695350, architecture=x86_64, distribution-scope=public, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal)
Oct 13 15:43:35 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:43:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:35.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:35 standalone.localdomain podman[529515]: 2025-10-13 15:43:35.902588043 +0000 UTC m=+0.046964060 container kill d29b3a6ca3b47a3170c7d531ad31fbdedc6772d19948d0e15b4a83595900e6d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-32a9dbae-2e38-4407-9d1b-3340c23ceaac, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:43:35 standalone.localdomain dnsmasq[528920]: read /var/lib/neutron/dhcp/32a9dbae-2e38-4407-9d1b-3340c23ceaac/addn_hosts - 0 addresses
Oct 13 15:43:35 standalone.localdomain dnsmasq-dhcp[528920]: read /var/lib/neutron/dhcp/32a9dbae-2e38-4407-9d1b-3340c23ceaac/host
Oct 13 15:43:35 standalone.localdomain dnsmasq-dhcp[528920]: read /var/lib/neutron/dhcp/32a9dbae-2e38-4407-9d1b-3340c23ceaac/opts
Oct 13 15:43:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:36.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:36 standalone.localdomain kernel: device tap931de190-5d left promiscuous mode
Oct 13 15:43:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:36Z|00103|binding|INFO|Releasing lport 931de190-5d1b-4dde-9904-56c18df9213a from this chassis (sb_readonly=0)
Oct 13 15:43:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:36Z|00104|binding|INFO|Setting lport 931de190-5d1b-4dde-9904-56c18df9213a down in Southbound
Oct 13 15:43:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:36.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:36.134 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-32a9dbae-2e38-4407-9d1b-3340c23ceaac', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32a9dbae-2e38-4407-9d1b-3340c23ceaac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21021b5518144ee590654b7f7154816b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad85b933-b50e-427a-b768-3c7127288675, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=931de190-5d1b-4dde-9904-56c18df9213a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:36.136 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 931de190-5d1b-4dde-9904-56c18df9213a in datapath 32a9dbae-2e38-4407-9d1b-3340c23ceaac unbound from our chassis
Oct 13 15:43:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:36.139 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32a9dbae-2e38-4407-9d1b-3340c23ceaac, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:43:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:36.140 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[636a08a0-b1e0-473e-a795-1708c10af25e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:36 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:36.417 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:36Z, description=, device_id=f034d7c1-dbee-4451-adbf-77631ad9b385, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890fcc70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890fc130>], id=90171035-9422-4694-a468-fa1ac56a9de5, ip_allocation=immediate, mac_address=fa:16:3e:70:88:43, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=852, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:43:36Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:43:36 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:36.603 496978 INFO neutron.agent.linux.ip_lib [None req-4d7dadcf-72c3-4f42-89d5-307cde5dbcee - - - - - -] Device tapb266eccc-45 cannot be used as it has no MAC address
Oct 13 15:43:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:36.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:36 standalone.localdomain kernel: device tapb266eccc-45 entered promiscuous mode
Oct 13 15:43:36 standalone.localdomain NetworkManager[5962]: <info>  [1760370216.6471] manager: (tapb266eccc-45): new Generic device (/org/freedesktop/NetworkManager/Devices/30)
Oct 13 15:43:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:36.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:36Z|00105|binding|INFO|Claiming lport b266eccc-451d-4082-b77b-64eb5f44a18d for this chassis.
Oct 13 15:43:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:36Z|00106|binding|INFO|b266eccc-451d-4082-b77b-64eb5f44a18d: Claiming unknown
Oct 13 15:43:36 standalone.localdomain systemd-udevd[529561]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:43:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:36Z|00107|binding|INFO|Setting lport b266eccc-451d-4082-b77b-64eb5f44a18d ovn-installed in OVS
Oct 13 15:43:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:36.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:36Z|00108|binding|INFO|Setting lport b266eccc-451d-4082-b77b-64eb5f44a18d up in Southbound
Oct 13 15:43:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:36.699 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-670a07d7-1dd0-4554-a989-6ec34ada9f3c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-670a07d7-1dd0-4554-a989-6ec34ada9f3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90ebd324693f4ef385b6ff13efc2101b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=627dc6c1-3da8-4859-8268-b7e2daf4c302, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=b266eccc-451d-4082-b77b-64eb5f44a18d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:36.701 378821 INFO neutron.agent.ovn.metadata.agent [-] Port b266eccc-451d-4082-b77b-64eb5f44a18d in datapath 670a07d7-1dd0-4554-a989-6ec34ada9f3c bound to our chassis
Oct 13 15:43:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:36.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:36.705 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 161d4c3b-d6b2-48f7-9b5b-950b3c12c297 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:43:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:36.705 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 670a07d7-1dd0-4554-a989-6ec34ada9f3c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:43:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:36.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:36.707 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[81a4a199-3472-42ca-b53c-47d6c9ea1575]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:36 standalone.localdomain ceph-mon[29756]: pgmap v3876: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:36 standalone.localdomain ceph-mon[29756]: osdmap e71: 1 total, 1 up, 1 in
Oct 13 15:43:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:36.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:36 standalone.localdomain podman[529575]: 2025-10-13 15:43:36.817166868 +0000 UTC m=+0.079343751 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:43:36 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 4 addresses
Oct 13 15:43:36 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:36 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:37.086 496978 INFO neutron.agent.dhcp.agent [None req-0dcba2e5-8e90-4c97-8669-6a1f0299cc5d - - - - - -] DHCP configuration for ports {'90171035-9422-4694-a468-fa1ac56a9de5'} is completed
Oct 13 15:43:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:37.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:43:37 standalone.localdomain podman[529621]: 2025-10-13 15:43:37.496092525 +0000 UTC m=+0.057064699 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:43:37 standalone.localdomain podman[529621]: 2025-10-13 15:43:37.534896824 +0000 UTC m=+0.095868988 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:37 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:43:37 standalone.localdomain podman[529656]: 2025-10-13 15:43:37.590154516 +0000 UTC m=+0.052358944 container kill 949e983e9346b562bc2d8e05d3d47ec773b705c30962aee72422f1ba81120cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9684289-9ed5-4a83-855f-6832281c6ea8, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:43:37 standalone.localdomain dnsmasq[529401]: read /var/lib/neutron/dhcp/f9684289-9ed5-4a83-855f-6832281c6ea8/addn_hosts - 0 addresses
Oct 13 15:43:37 standalone.localdomain dnsmasq-dhcp[529401]: read /var/lib/neutron/dhcp/f9684289-9ed5-4a83-855f-6832281c6ea8/host
Oct 13 15:43:37 standalone.localdomain dnsmasq-dhcp[529401]: read /var/lib/neutron/dhcp/f9684289-9ed5-4a83-855f-6832281c6ea8/opts
Oct 13 15:43:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3878: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:37 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:37Z|00109|binding|INFO|Releasing lport baa3eb91-3e7f-4a3c-a691-a11c83979add from this chassis (sb_readonly=0)
Oct 13 15:43:37 standalone.localdomain kernel: device tapbaa3eb91-3e left promiscuous mode
Oct 13 15:43:37 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:37Z|00110|binding|INFO|Setting lport baa3eb91-3e7f-4a3c-a691-a11c83979add down in Southbound
Oct 13 15:43:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:37.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:37 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:37.898 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f9684289-9ed5-4a83-855f-6832281c6ea8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f9684289-9ed5-4a83-855f-6832281c6ea8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7847f696e2214db29eb613f97c6affa3', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afb25bc6-edf8-4b8e-a866-0bb6969bc134, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=baa3eb91-3e7f-4a3c-a691-a11c83979add) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:37 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:37.900 378821 INFO neutron.agent.ovn.metadata.agent [-] Port baa3eb91-3e7f-4a3c-a691-a11c83979add in datapath f9684289-9ed5-4a83-855f-6832281c6ea8 unbound from our chassis
Oct 13 15:43:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:37.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:37 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:37.902 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f9684289-9ed5-4a83-855f-6832281c6ea8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:43:37 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:37.903 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[418e3f67-57f0-40e0-b08e-849b185757e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:37 standalone.localdomain podman[529700]: 
Oct 13 15:43:37 standalone.localdomain podman[529700]: 2025-10-13 15:43:37.934287797 +0000 UTC m=+0.111059072 container create b3a87ad3731f42032a8070284bed0e8cb7ac79cc935309bb798eaad383866f3f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-670a07d7-1dd0-4554-a989-6ec34ada9f3c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:37 standalone.localdomain podman[529700]: 2025-10-13 15:43:37.878242501 +0000 UTC m=+0.055013786 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:43:37 standalone.localdomain systemd[1]: Started libpod-conmon-b3a87ad3731f42032a8070284bed0e8cb7ac79cc935309bb798eaad383866f3f.scope.
Oct 13 15:43:37 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:43:38 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7dca2432be1b3ec710696d0ed273e319913b3f2cd6b71e5055c1b60400551d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:43:38 standalone.localdomain podman[529700]: 2025-10-13 15:43:38.011506393 +0000 UTC m=+0.188277668 container init b3a87ad3731f42032a8070284bed0e8cb7ac79cc935309bb798eaad383866f3f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-670a07d7-1dd0-4554-a989-6ec34ada9f3c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:43:38 standalone.localdomain podman[529700]: 2025-10-13 15:43:38.020592461 +0000 UTC m=+0.197363736 container start b3a87ad3731f42032a8070284bed0e8cb7ac79cc935309bb798eaad383866f3f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-670a07d7-1dd0-4554-a989-6ec34ada9f3c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:43:38 standalone.localdomain dnsmasq[529721]: started, version 2.85 cachesize 150
Oct 13 15:43:38 standalone.localdomain dnsmasq[529721]: DNS service limited to local subnets
Oct 13 15:43:38 standalone.localdomain dnsmasq[529721]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:43:38 standalone.localdomain dnsmasq[529721]: warning: no upstream servers configured
Oct 13 15:43:38 standalone.localdomain dnsmasq-dhcp[529721]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:43:38 standalone.localdomain dnsmasq[529721]: read /var/lib/neutron/dhcp/670a07d7-1dd0-4554-a989-6ec34ada9f3c/addn_hosts - 0 addresses
Oct 13 15:43:38 standalone.localdomain dnsmasq-dhcp[529721]: read /var/lib/neutron/dhcp/670a07d7-1dd0-4554-a989-6ec34ada9f3c/host
Oct 13 15:43:38 standalone.localdomain dnsmasq-dhcp[529721]: read /var/lib/neutron/dhcp/670a07d7-1dd0-4554-a989-6ec34ada9f3c/opts
Oct 13 15:43:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:38.103 496978 INFO neutron.agent.dhcp.agent [None req-d7d6a881-9e16-4bd0-9493-e5be635bcb89 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:37Z, description=, device_id=f034d7c1-dbee-4451-adbf-77631ad9b385, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892e4370>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892e4f10>], id=19b21733-7e17-457b-beb0-c3b1e734055b, ip_allocation=immediate, mac_address=fa:16:3e:6b:13:bb, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:33Z, description=, dns_domain=, id=670a07d7-1dd0-4554-a989-6ec34ada9f3c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-DeleteServersTestJSON-1163438387-network, port_security_enabled=True, project_id=90ebd324693f4ef385b6ff13efc2101b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38222, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=828, status=ACTIVE, subnets=['d7549e38-d93b-4566-ad85-a677d4bf9beb'], tags=[], tenant_id=90ebd324693f4ef385b6ff13efc2101b, updated_at=2025-10-13T15:43:34Z, vlan_transparent=None, network_id=670a07d7-1dd0-4554-a989-6ec34ada9f3c, port_security_enabled=False, project_id=90ebd324693f4ef385b6ff13efc2101b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=855, status=DOWN, tags=[], tenant_id=90ebd324693f4ef385b6ff13efc2101b, updated_at=2025-10-13T15:43:37Z on network 670a07d7-1dd0-4554-a989-6ec34ada9f3c
Oct 13 15:43:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:38.241 496978 INFO neutron.agent.dhcp.agent [None req-b1d74929-cbb1-4720-a47d-69082b576150 - - - - - -] DHCP configuration for ports {'06639e9e-0ebd-4d31-99fa-43e0caa9518c'} is completed
Oct 13 15:43:38 standalone.localdomain dnsmasq[529721]: read /var/lib/neutron/dhcp/670a07d7-1dd0-4554-a989-6ec34ada9f3c/addn_hosts - 1 addresses
Oct 13 15:43:38 standalone.localdomain dnsmasq-dhcp[529721]: read /var/lib/neutron/dhcp/670a07d7-1dd0-4554-a989-6ec34ada9f3c/host
Oct 13 15:43:38 standalone.localdomain dnsmasq-dhcp[529721]: read /var/lib/neutron/dhcp/670a07d7-1dd0-4554-a989-6ec34ada9f3c/opts
Oct 13 15:43:38 standalone.localdomain podman[529740]: 2025-10-13 15:43:38.344832964 +0000 UTC m=+0.063496137 container kill b3a87ad3731f42032a8070284bed0e8cb7ac79cc935309bb798eaad383866f3f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-670a07d7-1dd0-4554-a989-6ec34ada9f3c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:43:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:38.397 496978 INFO neutron.agent.linux.ip_lib [None req-c1ab82fb-a283-4fcb-ba9a-3b2a0b1fc092 - - - - - -] Device tap739c4fa3-78 cannot be used as it has no MAC address
Oct 13 15:43:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:38.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:38 standalone.localdomain kernel: device tap739c4fa3-78 entered promiscuous mode
Oct 13 15:43:38 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:38Z|00111|binding|INFO|Claiming lport 739c4fa3-78dc-42a6-90ec-44b13c0174b2 for this chassis.
Oct 13 15:43:38 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:38Z|00112|binding|INFO|739c4fa3-78dc-42a6-90ec-44b13c0174b2: Claiming unknown
Oct 13 15:43:38 standalone.localdomain NetworkManager[5962]: <info>  [1760370218.5056] manager: (tap739c4fa3-78): new Generic device (/org/freedesktop/NetworkManager/Devices/31)
Oct 13 15:43:38 standalone.localdomain systemd-udevd[529564]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:43:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:38.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:38 standalone.localdomain ceph-mon[29756]: pgmap v3878: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:38 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:38Z|00113|binding|INFO|Setting lport 739c4fa3-78dc-42a6-90ec-44b13c0174b2 ovn-installed in OVS
Oct 13 15:43:38 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:38Z|00114|binding|INFO|Setting lport 739c4fa3-78dc-42a6-90ec-44b13c0174b2 up in Southbound
Oct 13 15:43:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:38.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:38 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:38.520 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d7f1eea2-e875-4697-aeee-9ff7614f00fa', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7f1eea2-e875-4697-aeee-9ff7614f00fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef2aadab7b86431fa8f7a831d718f0e0', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d44ba895-6ffc-44ca-a604-28090952b6bc, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=739c4fa3-78dc-42a6-90ec-44b13c0174b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:38 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:38.523 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 739c4fa3-78dc-42a6-90ec-44b13c0174b2 in datapath d7f1eea2-e875-4697-aeee-9ff7614f00fa bound to our chassis
Oct 13 15:43:38 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:38.525 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d7f1eea2-e875-4697-aeee-9ff7614f00fa or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:43:38 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:38.525 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[de71979a-b5a9-4705-8c38-f823f3eaca07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:38 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap739c4fa3-78: No such device
Oct 13 15:43:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:38.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:38 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap739c4fa3-78: No such device
Oct 13 15:43:38 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap739c4fa3-78: No such device
Oct 13 15:43:38 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap739c4fa3-78: No such device
Oct 13 15:43:38 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap739c4fa3-78: No such device
Oct 13 15:43:38 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap739c4fa3-78: No such device
Oct 13 15:43:38 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap739c4fa3-78: No such device
Oct 13 15:43:38 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap739c4fa3-78: No such device
Oct 13 15:43:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:38.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:38.674 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:37Z, description=, device_id=f034d7c1-dbee-4451-adbf-77631ad9b385, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889260880>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892603d0>], id=19b21733-7e17-457b-beb0-c3b1e734055b, ip_allocation=immediate, mac_address=fa:16:3e:6b:13:bb, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:33Z, description=, dns_domain=, id=670a07d7-1dd0-4554-a989-6ec34ada9f3c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-DeleteServersTestJSON-1163438387-network, port_security_enabled=True, project_id=90ebd324693f4ef385b6ff13efc2101b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38222, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=828, status=ACTIVE, subnets=['d7549e38-d93b-4566-ad85-a677d4bf9beb'], tags=[], tenant_id=90ebd324693f4ef385b6ff13efc2101b, updated_at=2025-10-13T15:43:34Z, vlan_transparent=None, network_id=670a07d7-1dd0-4554-a989-6ec34ada9f3c, port_security_enabled=False, project_id=90ebd324693f4ef385b6ff13efc2101b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=855, status=DOWN, tags=[], tenant_id=90ebd324693f4ef385b6ff13efc2101b, updated_at=2025-10-13T15:43:37Z on network 670a07d7-1dd0-4554-a989-6ec34ada9f3c
Oct 13 15:43:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:38.740 496978 INFO neutron.agent.dhcp.agent [None req-1e4c63a8-ca41-4c35-8683-dc10dd47a76f - - - - - -] DHCP configuration for ports {'19b21733-7e17-457b-beb0-c3b1e734055b'} is completed
Oct 13 15:43:38 standalone.localdomain dnsmasq[529721]: read /var/lib/neutron/dhcp/670a07d7-1dd0-4554-a989-6ec34ada9f3c/addn_hosts - 1 addresses
Oct 13 15:43:38 standalone.localdomain dnsmasq-dhcp[529721]: read /var/lib/neutron/dhcp/670a07d7-1dd0-4554-a989-6ec34ada9f3c/host
Oct 13 15:43:38 standalone.localdomain dnsmasq-dhcp[529721]: read /var/lib/neutron/dhcp/670a07d7-1dd0-4554-a989-6ec34ada9f3c/opts
Oct 13 15:43:38 standalone.localdomain podman[529819]: 2025-10-13 15:43:38.911945065 +0000 UTC m=+0.053829340 container kill b3a87ad3731f42032a8070284bed0e8cb7ac79cc935309bb798eaad383866f3f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-670a07d7-1dd0-4554-a989-6ec34ada9f3c, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:43:39 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:43:39 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:39 standalone.localdomain podman[529860]: 2025-10-13 15:43:39.078447496 +0000 UTC m=+0.050400025 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:43:39 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:39Z|00115|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:43:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:39.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:39.295 496978 INFO neutron.agent.dhcp.agent [None req-edb592b0-e8cd-43a5-85d0-ede7d31fae10 - - - - - -] DHCP configuration for ports {'19b21733-7e17-457b-beb0-c3b1e734055b'} is completed
Oct 13 15:43:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:39.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3879: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 13 15:43:39 standalone.localdomain podman[529914]: 
Oct 13 15:43:39 standalone.localdomain podman[529914]: 2025-10-13 15:43:39.804096034 +0000 UTC m=+0.107878216 container create 5a43e33c4a710994e8cd9caeca9f92b0b89f7ee0468442e9b808c13392d326b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7f1eea2-e875-4697-aeee-9ff7614f00fa, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:43:39 standalone.localdomain podman[529914]: 2025-10-13 15:43:39.752822783 +0000 UTC m=+0.056605025 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:43:39 standalone.localdomain systemd[1]: Started libpod-conmon-5a43e33c4a710994e8cd9caeca9f92b0b89f7ee0468442e9b808c13392d326b2.scope.
Oct 13 15:43:39 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:43:39 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0060dffc15a1f2f12dd95f8d5e78bd98550f488dd661f3e670dd7cda70d5c7c5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:43:39 standalone.localdomain podman[529914]: 2025-10-13 15:43:39.893617906 +0000 UTC m=+0.197400118 container init 5a43e33c4a710994e8cd9caeca9f92b0b89f7ee0468442e9b808c13392d326b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7f1eea2-e875-4697-aeee-9ff7614f00fa, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:43:39 standalone.localdomain podman[529914]: 2025-10-13 15:43:39.903048745 +0000 UTC m=+0.206830957 container start 5a43e33c4a710994e8cd9caeca9f92b0b89f7ee0468442e9b808c13392d326b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7f1eea2-e875-4697-aeee-9ff7614f00fa, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:43:39 standalone.localdomain dnsmasq[529933]: started, version 2.85 cachesize 150
Oct 13 15:43:39 standalone.localdomain dnsmasq[529933]: DNS service limited to local subnets
Oct 13 15:43:39 standalone.localdomain dnsmasq[529933]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:43:39 standalone.localdomain dnsmasq[529933]: warning: no upstream servers configured
Oct 13 15:43:39 standalone.localdomain dnsmasq-dhcp[529933]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:43:39 standalone.localdomain dnsmasq[529933]: read /var/lib/neutron/dhcp/d7f1eea2-e875-4697-aeee-9ff7614f00fa/addn_hosts - 0 addresses
Oct 13 15:43:39 standalone.localdomain dnsmasq-dhcp[529933]: read /var/lib/neutron/dhcp/d7f1eea2-e875-4697-aeee-9ff7614f00fa/host
Oct 13 15:43:39 standalone.localdomain dnsmasq-dhcp[529933]: read /var/lib/neutron/dhcp/d7f1eea2-e875-4697-aeee-9ff7614f00fa/opts
Oct 13 15:43:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:40.049 496978 INFO neutron.agent.dhcp.agent [None req-26b4b9de-3c7b-40ad-8dc4-736db1da1b12 - - - - - -] DHCP configuration for ports {'0e029095-3ec6-4018-86d5-5af604a78b38'} is completed
Oct 13 15:43:40 standalone.localdomain dnsmasq[528920]: exiting on receipt of SIGTERM
Oct 13 15:43:40 standalone.localdomain podman[529951]: 2025-10-13 15:43:40.456606062 +0000 UTC m=+0.056590486 container kill d29b3a6ca3b47a3170c7d531ad31fbdedc6772d19948d0e15b4a83595900e6d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-32a9dbae-2e38-4407-9d1b-3340c23ceaac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:43:40 standalone.localdomain systemd[1]: libpod-d29b3a6ca3b47a3170c7d531ad31fbdedc6772d19948d0e15b4a83595900e6d8.scope: Deactivated successfully.
Oct 13 15:43:40 standalone.localdomain podman[529970]: 2025-10-13 15:43:40.535171748 +0000 UTC m=+0.053462088 container died d29b3a6ca3b47a3170c7d531ad31fbdedc6772d19948d0e15b4a83595900e6d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-32a9dbae-2e38-4407-9d1b-3340c23ceaac, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:40 standalone.localdomain podman[529970]: 2025-10-13 15:43:40.636302926 +0000 UTC m=+0.154593216 container remove d29b3a6ca3b47a3170c7d531ad31fbdedc6772d19948d0e15b4a83595900e6d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-32a9dbae-2e38-4407-9d1b-3340c23ceaac, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:43:40 standalone.localdomain systemd[1]: libpod-conmon-d29b3a6ca3b47a3170c7d531ad31fbdedc6772d19948d0e15b4a83595900e6d8.scope: Deactivated successfully.
Oct 13 15:43:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:40.666 496978 INFO neutron.agent.dhcp.agent [None req-11f41bca-3977-432f-9dbf-3783e71fd158 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:43:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e71 do_prune osdmap full prune enabled
Oct 13 15:43:40 standalone.localdomain ceph-mon[29756]: pgmap v3879: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 13 15:43:40 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e72 e72: 1 total, 1 up, 1 in
Oct 13 15:43:40 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e72: 1 total, 1 up, 1 in
Oct 13 15:43:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15d7ac5f8f211c05ec6625db5e75afce4b59fd5892be07a56d45495df6e5d327-merged.mount: Deactivated successfully.
Oct 13 15:43:40 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d29b3a6ca3b47a3170c7d531ad31fbdedc6772d19948d0e15b4a83595900e6d8-userdata-shm.mount: Deactivated successfully.
Oct 13 15:43:40 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d32a9dbae\x2d2e38\x2d4407\x2d9d1b\x2d3340c23ceaac.mount: Deactivated successfully.
Oct 13 15:43:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:40.874 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:43:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:40.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:41 standalone.localdomain podman[467099]: time="2025-10-13T15:43:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:43:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:43:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 420129 "" "Go-http-client/1.1"
Oct 13 15:43:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3881: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 13 15:43:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:43:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 50544 "" "Go-http-client/1.1"
Oct 13 15:43:41 standalone.localdomain ceph-mon[29756]: osdmap e72: 1 total, 1 up, 1 in
Oct 13 15:43:41 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:43:41 standalone.localdomain podman[530010]: 2025-10-13 15:43:41.83291907 +0000 UTC m=+0.052308463 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:43:41 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:41 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:42Z|00116|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:43:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:42.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:43:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:43:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:43:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:42.203 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:41Z, description=, device_id=7392a9b8-dd5d-4317-a0a5-a22a766034cb, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890dbd00>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890dbdc0>], id=3a63b370-fa62-46b1-ac0b-8329d0231bbc, ip_allocation=immediate, mac_address=fa:16:3e:cb:ae:b8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=876, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:43:41Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:43:42 standalone.localdomain podman[530032]: 2025-10-13 15:43:42.242466555 +0000 UTC m=+0.098092155 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, container_name=swift_object_server, vendor=Red Hat, Inc., release=1, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, vcs-type=git, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:43:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:43:42 standalone.localdomain podman[530033]: 2025-10-13 15:43:42.303716702 +0000 UTC m=+0.156946269 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-account, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, batch=17.1_20250721.1, build-date=2025-07-21T16:11:22)
Oct 13 15:43:42 standalone.localdomain podman[530034]: 2025-10-13 15:43:42.342292394 +0000 UTC m=+0.191347803 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:43:42 standalone.localdomain podman[530034]: 2025-10-13 15:43:42.355849489 +0000 UTC m=+0.204904918 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:43:42 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:43:42 standalone.localdomain podman[530072]: 2025-10-13 15:43:42.421673995 +0000 UTC m=+0.157864086 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, distribution-scope=public, name=rhosp17/openstack-swift-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, vcs-type=git)
Oct 13 15:43:42 standalone.localdomain podman[530033]: 2025-10-13 15:43:42.497871969 +0000 UTC m=+0.351101576 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-account, tcib_managed=true, version=17.1.9, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, container_name=swift_account_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64)
Oct 13 15:43:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:43:42 standalone.localdomain podman[530032]: 2025-10-13 15:43:42.507737881 +0000 UTC m=+0.363363411 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, release=1, 
architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, container_name=swift_object_server, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, build-date=2025-07-21T14:56:28)
Oct 13 15:43:42 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:43:42 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:43:42 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:43:42 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:42 standalone.localdomain podman[530144]: 2025-10-13 15:43:42.539628418 +0000 UTC m=+0.067686244 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 13 15:43:42 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:42 standalone.localdomain podman[530072]: 2025-10-13 15:43:42.622427935 +0000 UTC m=+0.358618016 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, release=1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, name=rhosp17/openstack-swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=swift_container_server, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-swift-container-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905)
Oct 13 15:43:42 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:43:42 standalone.localdomain ceph-mon[29756]: pgmap v3881: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 13 15:43:42 standalone.localdomain dnsmasq[529401]: exiting on receipt of SIGTERM
Oct 13 15:43:42 standalone.localdomain podman[530190]: 2025-10-13 15:43:42.828644111 +0000 UTC m=+0.065945521 container kill 949e983e9346b562bc2d8e05d3d47ec773b705c30962aee72422f1ba81120cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9684289-9ed5-4a83-855f-6832281c6ea8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:42 standalone.localdomain systemd[1]: libpod-949e983e9346b562bc2d8e05d3d47ec773b705c30962aee72422f1ba81120cb8.scope: Deactivated successfully.
Oct 13 15:43:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:42.864 496978 INFO neutron.agent.dhcp.agent [None req-067e2170-94ce-4837-a4d3-38ffbca65ea5 - - - - - -] DHCP configuration for ports {'3a63b370-fa62-46b1-ac0b-8329d0231bbc'} is completed
Oct 13 15:43:42 standalone.localdomain podman[530204]: 2025-10-13 15:43:42.893344783 +0000 UTC m=+0.049552688 container died 949e983e9346b562bc2d8e05d3d47ec773b705c30962aee72422f1ba81120cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9684289-9ed5-4a83-855f-6832281c6ea8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:43:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:43:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:43:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:43:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:43:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:43:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:43:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:43:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:43:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:43:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:43:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:43:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:43:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-949e983e9346b562bc2d8e05d3d47ec773b705c30962aee72422f1ba81120cb8-userdata-shm.mount: Deactivated successfully.
Oct 13 15:43:43 standalone.localdomain podman[530204]: 2025-10-13 15:43:43.009057058 +0000 UTC m=+0.165264923 container cleanup 949e983e9346b562bc2d8e05d3d47ec773b705c30962aee72422f1ba81120cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9684289-9ed5-4a83-855f-6832281c6ea8, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:43:43 standalone.localdomain systemd[1]: libpod-conmon-949e983e9346b562bc2d8e05d3d47ec773b705c30962aee72422f1ba81120cb8.scope: Deactivated successfully.
Oct 13 15:43:43 standalone.localdomain podman[530206]: 2025-10-13 15:43:43.032557237 +0000 UTC m=+0.178499169 container remove 949e983e9346b562bc2d8e05d3d47ec773b705c30962aee72422f1ba81120cb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f9684289-9ed5-4a83-855f-6832281c6ea8, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:43:43 standalone.localdomain podman[530232]: 2025-10-13 15:43:43.097689182 +0000 UTC m=+0.158068502 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Oct 13 15:43:43 standalone.localdomain podman[530232]: 2025-10-13 15:43:43.107553265 +0000 UTC m=+0.167932595 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, config_id=edpm, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:43:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:43.114 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:43:43 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:43:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:43.127 496978 INFO neutron.agent.linux.ip_lib [None req-998ff137-ee35-4a78-8bbf-9e70aa45736a - - - - - -] Device tap57ec3634-44 cannot be used as it has no MAC address
Oct 13 15:43:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:43.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:43 standalone.localdomain kernel: device tap57ec3634-44 entered promiscuous mode
Oct 13 15:43:43 standalone.localdomain NetworkManager[5962]: <info>  [1760370223.1510] manager: (tap57ec3634-44): new Generic device (/org/freedesktop/NetworkManager/Devices/32)
Oct 13 15:43:43 standalone.localdomain systemd-udevd[530263]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:43:43 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:43Z|00117|binding|INFO|Claiming lport 57ec3634-44ba-4768-88e5-b61cb0ca1c97 for this chassis.
Oct 13 15:43:43 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:43Z|00118|binding|INFO|57ec3634-44ba-4768-88e5-b61cb0ca1c97: Claiming unknown
Oct 13 15:43:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:43.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:43.201 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-78178a9a-81a6-4e63-a7f9-e5e97639bfae', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-78178a9a-81a6-4e63-a7f9-e5e97639bfae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1e6572989ad4feab2b4f2fcd6222c2b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ae9c590-fb2c-47b5-b68e-fa122c376038, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=57ec3634-44ba-4768-88e5-b61cb0ca1c97) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:43.203 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 57ec3634-44ba-4768-88e5-b61cb0ca1c97 in datapath 78178a9a-81a6-4e63-a7f9-e5e97639bfae bound to our chassis
Oct 13 15:43:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:43.204 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 78178a9a-81a6-4e63-a7f9-e5e97639bfae or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:43:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:43.205 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[de3ab393-cd16-4131-a674-e26a0b3cd3d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:43 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:43Z|00119|binding|INFO|Setting lport 57ec3634-44ba-4768-88e5-b61cb0ca1c97 ovn-installed in OVS
Oct 13 15:43:43 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:43Z|00120|binding|INFO|Setting lport 57ec3634-44ba-4768-88e5-b61cb0ca1c97 up in Southbound
Oct 13 15:43:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:43.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:43.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:43.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:43.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3882: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 13 15:43:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:43.701 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:43:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bff618c0806afd40cc0470268c7911f8d9f395a04f324469a2529b49c509fb6d-merged.mount: Deactivated successfully.
Oct 13 15:43:43 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df9684289\x2d9ed5\x2d4a83\x2d855f\x2d6832281c6ea8.mount: Deactivated successfully.
Oct 13 15:43:44 standalone.localdomain podman[530318]: 
Oct 13 15:43:44 standalone.localdomain podman[530318]: 2025-10-13 15:43:44.212972176 +0000 UTC m=+0.078071842 container create dada39bda57a6adba4eff8e34f98bc866602589f593d90c782a96189b4c3e5b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78178a9a-81a6-4e63-a7f9-e5e97639bfae, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:43:44 standalone.localdomain systemd[1]: Started libpod-conmon-dada39bda57a6adba4eff8e34f98bc866602589f593d90c782a96189b4c3e5b4.scope.
Oct 13 15:43:44 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:43:44 standalone.localdomain podman[530318]: 2025-10-13 15:43:44.182102521 +0000 UTC m=+0.047202197 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:43:44 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f82d8d9cf1ee5ad7f6a8a2de78b15ac132a76c37a3bea83ee712d42fa29048/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:43:44 standalone.localdomain podman[530318]: 2025-10-13 15:43:44.292037628 +0000 UTC m=+0.157137314 container init dada39bda57a6adba4eff8e34f98bc866602589f593d90c782a96189b4c3e5b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78178a9a-81a6-4e63-a7f9-e5e97639bfae, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:43:44 standalone.localdomain podman[530318]: 2025-10-13 15:43:44.300389324 +0000 UTC m=+0.165489020 container start dada39bda57a6adba4eff8e34f98bc866602589f593d90c782a96189b4c3e5b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78178a9a-81a6-4e63-a7f9-e5e97639bfae, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 15:43:44 standalone.localdomain dnsmasq[530336]: started, version 2.85 cachesize 150
Oct 13 15:43:44 standalone.localdomain dnsmasq[530336]: DNS service limited to local subnets
Oct 13 15:43:44 standalone.localdomain dnsmasq[530336]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:43:44 standalone.localdomain dnsmasq[530336]: warning: no upstream servers configured
Oct 13 15:43:44 standalone.localdomain dnsmasq-dhcp[530336]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:43:44 standalone.localdomain dnsmasq[530336]: read /var/lib/neutron/dhcp/78178a9a-81a6-4e63-a7f9-e5e97639bfae/addn_hosts - 0 addresses
Oct 13 15:43:44 standalone.localdomain dnsmasq-dhcp[530336]: read /var/lib/neutron/dhcp/78178a9a-81a6-4e63-a7f9-e5e97639bfae/host
Oct 13 15:43:44 standalone.localdomain dnsmasq-dhcp[530336]: read /var/lib/neutron/dhcp/78178a9a-81a6-4e63-a7f9-e5e97639bfae/opts
Oct 13 15:43:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:44.486 496978 INFO neutron.agent.dhcp.agent [None req-a9c94264-8005-40a3-8bef-37963360e87a - - - - - -] DHCP configuration for ports {'425d8e1f-4853-471b-b6d7-a840e4b46e0d'} is completed
Oct 13 15:43:44 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:44.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:44.644 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:44Z, description=, device_id=e025fabb-4213-4b6e-8834-ea7cbb26a400, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188921a130>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188924eca0>], id=f8692ec0-a13c-44a0-a0fa-b3bf9d84c7b4, ip_allocation=immediate, mac_address=fa:16:3e:9a:83:06, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=895, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:43:44Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:43:44 standalone.localdomain ceph-mon[29756]: pgmap v3882: 177 pgs: 177 active+clean; 359 MiB data, 298 MiB used, 6.7 GiB / 7.0 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 13 15:43:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:44.818 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:44Z, description=, device_id=7392a9b8-dd5d-4317-a0a5-a22a766034cb, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889a42be0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890db040>], id=1f11c339-41e2-4fb9-ae05-28f7b9196a45, ip_allocation=immediate, mac_address=fa:16:3e:7f:c5:73, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:35Z, description=, dns_domain=, id=d7f1eea2-e875-4697-aeee-9ff7614f00fa, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesNegativeTestJSON-1746148525-network, port_security_enabled=True, project_id=ef2aadab7b86431fa8f7a831d718f0e0, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=57963, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=849, status=ACTIVE, subnets=['6446aa1a-e2bd-44f5-ad30-c0b7607f3682'], tags=[], tenant_id=ef2aadab7b86431fa8f7a831d718f0e0, updated_at=2025-10-13T15:43:37Z, vlan_transparent=None, network_id=d7f1eea2-e875-4697-aeee-9ff7614f00fa, port_security_enabled=False, project_id=ef2aadab7b86431fa8f7a831d718f0e0, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=896, status=DOWN, tags=[], tenant_id=ef2aadab7b86431fa8f7a831d718f0e0, updated_at=2025-10-13T15:43:44Z on network d7f1eea2-e875-4697-aeee-9ff7614f00fa
Oct 13 15:43:44 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 4 addresses
Oct 13 15:43:44 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:44 standalone.localdomain podman[530354]: 2025-10-13 15:43:44.897751182 +0000 UTC m=+0.061663310 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 13 15:43:44 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:45 standalone.localdomain podman[530387]: 2025-10-13 15:43:45.038698989 +0000 UTC m=+0.057033008 container kill 5a43e33c4a710994e8cd9caeca9f92b0b89f7ee0468442e9b808c13392d326b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7f1eea2-e875-4697-aeee-9ff7614f00fa, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:45 standalone.localdomain dnsmasq[529933]: read /var/lib/neutron/dhcp/d7f1eea2-e875-4697-aeee-9ff7614f00fa/addn_hosts - 1 addresses
Oct 13 15:43:45 standalone.localdomain dnsmasq-dhcp[529933]: read /var/lib/neutron/dhcp/d7f1eea2-e875-4697-aeee-9ff7614f00fa/host
Oct 13 15:43:45 standalone.localdomain dnsmasq-dhcp[529933]: read /var/lib/neutron/dhcp/d7f1eea2-e875-4697-aeee-9ff7614f00fa/opts
Oct 13 15:43:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:45.255 496978 INFO neutron.agent.dhcp.agent [None req-f6e45929-135b-4190-8b05-97a11c50f398 - - - - - -] DHCP configuration for ports {'f8692ec0-a13c-44a0-a0fa-b3bf9d84c7b4'} is completed
Oct 13 15:43:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:45.410 496978 INFO neutron.agent.dhcp.agent [None req-ad114066-5345-4d16-993b-632137e07620 - - - - - -] DHCP configuration for ports {'1f11c339-41e2-4fb9-ae05-28f7b9196a45'} is completed
Oct 13 15:43:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3883: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 29 KiB/s rd, 3.0 KiB/s wr, 40 op/s
Oct 13 15:43:45 standalone.localdomain dnsmasq[529721]: read /var/lib/neutron/dhcp/670a07d7-1dd0-4554-a989-6ec34ada9f3c/addn_hosts - 0 addresses
Oct 13 15:43:45 standalone.localdomain dnsmasq-dhcp[529721]: read /var/lib/neutron/dhcp/670a07d7-1dd0-4554-a989-6ec34ada9f3c/host
Oct 13 15:43:45 standalone.localdomain systemd[1]: tmp-crun.wA8FjL.mount: Deactivated successfully.
Oct 13 15:43:45 standalone.localdomain dnsmasq-dhcp[529721]: read /var/lib/neutron/dhcp/670a07d7-1dd0-4554-a989-6ec34ada9f3c/opts
Oct 13 15:43:45 standalone.localdomain podman[530431]: 2025-10-13 15:43:45.719011679 +0000 UTC m=+0.072010668 container kill b3a87ad3731f42032a8070284bed0e8cb7ac79cc935309bb798eaad383866f3f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-670a07d7-1dd0-4554-a989-6ec34ada9f3c, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:43:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:45.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:45 standalone.localdomain kernel: device tapb266eccc-45 left promiscuous mode
Oct 13 15:43:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:45Z|00121|binding|INFO|Releasing lport b266eccc-451d-4082-b77b-64eb5f44a18d from this chassis (sb_readonly=0)
Oct 13 15:43:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:45Z|00122|binding|INFO|Setting lport b266eccc-451d-4082-b77b-64eb5f44a18d down in Southbound
Oct 13 15:43:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:45.949 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-670a07d7-1dd0-4554-a989-6ec34ada9f3c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-670a07d7-1dd0-4554-a989-6ec34ada9f3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90ebd324693f4ef385b6ff13efc2101b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=627dc6c1-3da8-4859-8268-b7e2daf4c302, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=b266eccc-451d-4082-b77b-64eb5f44a18d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:45.950 378821 INFO neutron.agent.ovn.metadata.agent [-] Port b266eccc-451d-4082-b77b-64eb5f44a18d in datapath 670a07d7-1dd0-4554-a989-6ec34ada9f3c unbound from our chassis
Oct 13 15:43:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:45.952 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 670a07d7-1dd0-4554-a989-6ec34ada9f3c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:43:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:45.953 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[aea36b42-6de5-4486-9bcc-4eba97825094]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:45.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:45.986 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:44Z, description=, device_id=7392a9b8-dd5d-4317-a0a5-a22a766034cb, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188907fca0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188907fa30>], id=1f11c339-41e2-4fb9-ae05-28f7b9196a45, ip_allocation=immediate, mac_address=fa:16:3e:7f:c5:73, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:35Z, description=, dns_domain=, id=d7f1eea2-e875-4697-aeee-9ff7614f00fa, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesNegativeTestJSON-1746148525-network, port_security_enabled=True, project_id=ef2aadab7b86431fa8f7a831d718f0e0, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=57963, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=849, status=ACTIVE, subnets=['6446aa1a-e2bd-44f5-ad30-c0b7607f3682'], tags=[], tenant_id=ef2aadab7b86431fa8f7a831d718f0e0, updated_at=2025-10-13T15:43:37Z, vlan_transparent=None, network_id=d7f1eea2-e875-4697-aeee-9ff7614f00fa, port_security_enabled=False, project_id=ef2aadab7b86431fa8f7a831d718f0e0, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=896, status=DOWN, tags=[], tenant_id=ef2aadab7b86431fa8f7a831d718f0e0, updated_at=2025-10-13T15:43:44Z on network d7f1eea2-e875-4697-aeee-9ff7614f00fa
Oct 13 15:43:46 standalone.localdomain dnsmasq[529933]: read /var/lib/neutron/dhcp/d7f1eea2-e875-4697-aeee-9ff7614f00fa/addn_hosts - 1 addresses
Oct 13 15:43:46 standalone.localdomain dnsmasq-dhcp[529933]: read /var/lib/neutron/dhcp/d7f1eea2-e875-4697-aeee-9ff7614f00fa/host
Oct 13 15:43:46 standalone.localdomain podman[530472]: 2025-10-13 15:43:46.169357104 +0000 UTC m=+0.045996841 container kill 5a43e33c4a710994e8cd9caeca9f92b0b89f7ee0468442e9b808c13392d326b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7f1eea2-e875-4697-aeee-9ff7614f00fa, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:46 standalone.localdomain dnsmasq-dhcp[529933]: read /var/lib/neutron/dhcp/d7f1eea2-e875-4697-aeee-9ff7614f00fa/opts
Oct 13 15:43:46 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:46.453 496978 INFO neutron.agent.dhcp.agent [None req-486f1f07-b448-40d2-b874-5ffa7aefd984 - - - - - -] DHCP configuration for ports {'1f11c339-41e2-4fb9-ae05-28f7b9196a45'} is completed
Oct 13 15:43:46 standalone.localdomain ceph-mon[29756]: pgmap v3883: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 29 KiB/s rd, 3.0 KiB/s wr, 40 op/s
Oct 13 15:43:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:47.092 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:46Z, description=, device_id=e025fabb-4213-4b6e-8834-ea7cbb26a400, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889106340>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889106160>], id=291a13a0-b564-4224-989a-1a73de26bfaa, ip_allocation=immediate, mac_address=fa:16:3e:2f:a3:69, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:40Z, description=, dns_domain=, id=78178a9a-81a6-4e63-a7f9-e5e97639bfae, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-2066401684-network, port_security_enabled=True, project_id=b1e6572989ad4feab2b4f2fcd6222c2b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51621, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=872, status=ACTIVE, subnets=['096bfe55-9536-42a2-81d0-5e74991b41e8'], tags=[], tenant_id=b1e6572989ad4feab2b4f2fcd6222c2b, updated_at=2025-10-13T15:43:41Z, vlan_transparent=None, network_id=78178a9a-81a6-4e63-a7f9-e5e97639bfae, port_security_enabled=False, project_id=b1e6572989ad4feab2b4f2fcd6222c2b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=904, status=DOWN, tags=[], tenant_id=b1e6572989ad4feab2b4f2fcd6222c2b, updated_at=2025-10-13T15:43:46Z on network 78178a9a-81a6-4e63-a7f9-e5e97639bfae
Oct 13 15:43:47 standalone.localdomain dnsmasq[530336]: read /var/lib/neutron/dhcp/78178a9a-81a6-4e63-a7f9-e5e97639bfae/addn_hosts - 1 addresses
Oct 13 15:43:47 standalone.localdomain dnsmasq-dhcp[530336]: read /var/lib/neutron/dhcp/78178a9a-81a6-4e63-a7f9-e5e97639bfae/host
Oct 13 15:43:47 standalone.localdomain podman[530509]: 2025-10-13 15:43:47.349895445 +0000 UTC m=+0.071544442 container kill dada39bda57a6adba4eff8e34f98bc866602589f593d90c782a96189b4c3e5b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78178a9a-81a6-4e63-a7f9-e5e97639bfae, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:47 standalone.localdomain dnsmasq-dhcp[530336]: read /var/lib/neutron/dhcp/78178a9a-81a6-4e63-a7f9-e5e97639bfae/opts
Oct 13 15:43:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:43:47 standalone.localdomain podman[530524]: 2025-10-13 15:43:47.475188804 +0000 UTC m=+0.092151934 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:47 standalone.localdomain podman[530524]: 2025-10-13 15:43:47.491900096 +0000 UTC m=+0.108863206 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:43:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e72 do_prune osdmap full prune enabled
Oct 13 15:43:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 e73: 1 total, 1 up, 1 in
Oct 13 15:43:47 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:43:47 standalone.localdomain ceph-mon[29756]: log_channel(cluster) log [DBG] : osdmap e73: 1 total, 1 up, 1 in
Oct 13 15:43:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:47.635 496978 INFO neutron.agent.dhcp.agent [None req-709058a9-ca06-4ae6-a382-d3d0eaafa57d - - - - - -] DHCP configuration for ports {'291a13a0-b564-4224-989a-1a73de26bfaa'} is completed
Oct 13 15:43:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3885: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 13 15:43:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:47Z|00123|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:43:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:48.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:48 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:43:48 standalone.localdomain podman[530568]: 2025-10-13 15:43:48.080629459 +0000 UTC m=+0.072584624 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:48 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:48 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:48 standalone.localdomain ceph-mon[29756]: osdmap e73: 1 total, 1 up, 1 in
Oct 13 15:43:48 standalone.localdomain ceph-mon[29756]: pgmap v3885: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 13 15:43:48 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:48.635 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:46Z, description=, device_id=e025fabb-4213-4b6e-8834-ea7cbb26a400, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889229220>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892291f0>], id=291a13a0-b564-4224-989a-1a73de26bfaa, ip_allocation=immediate, mac_address=fa:16:3e:2f:a3:69, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:40Z, description=, dns_domain=, id=78178a9a-81a6-4e63-a7f9-e5e97639bfae, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-2066401684-network, port_security_enabled=True, project_id=b1e6572989ad4feab2b4f2fcd6222c2b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51621, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=872, status=ACTIVE, subnets=['096bfe55-9536-42a2-81d0-5e74991b41e8'], tags=[], tenant_id=b1e6572989ad4feab2b4f2fcd6222c2b, updated_at=2025-10-13T15:43:41Z, vlan_transparent=None, network_id=78178a9a-81a6-4e63-a7f9-e5e97639bfae, port_security_enabled=False, project_id=b1e6572989ad4feab2b4f2fcd6222c2b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=904, status=DOWN, tags=[], tenant_id=b1e6572989ad4feab2b4f2fcd6222c2b, updated_at=2025-10-13T15:43:46Z on network 78178a9a-81a6-4e63-a7f9-e5e97639bfae
Oct 13 15:43:48 standalone.localdomain dnsmasq[529721]: exiting on receipt of SIGTERM
Oct 13 15:43:48 standalone.localdomain podman[530607]: 2025-10-13 15:43:48.730054643 +0000 UTC m=+0.084602713 container kill b3a87ad3731f42032a8070284bed0e8cb7ac79cc935309bb798eaad383866f3f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-670a07d7-1dd0-4554-a989-6ec34ada9f3c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:43:48 standalone.localdomain systemd[1]: libpod-b3a87ad3731f42032a8070284bed0e8cb7ac79cc935309bb798eaad383866f3f.scope: Deactivated successfully.
Oct 13 15:43:48 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:48.795 496978 INFO neutron.agent.linux.ip_lib [None req-9a82e827-b5ee-4432-b217-9e0214401a33 - - - - - -] Device tap2244dbdc-4b cannot be used as it has no MAC address
Oct 13 15:43:48 standalone.localdomain podman[530638]: 2025-10-13 15:43:48.811996262 +0000 UTC m=+0.065202918 container died b3a87ad3731f42032a8070284bed0e8cb7ac79cc935309bb798eaad383866f3f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-670a07d7-1dd0-4554-a989-6ec34ada9f3c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:48.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:48 standalone.localdomain kernel: device tap2244dbdc-4b entered promiscuous mode
Oct 13 15:43:48 standalone.localdomain NetworkManager[5962]: <info>  [1760370228.8195] manager: (tap2244dbdc-4b): new Generic device (/org/freedesktop/NetworkManager/Devices/33)
Oct 13 15:43:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:48Z|00124|binding|INFO|Claiming lport 2244dbdc-4b24-424b-828c-6d8ecf587bda for this chassis.
Oct 13 15:43:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:48Z|00125|binding|INFO|2244dbdc-4b24-424b-828c-6d8ecf587bda: Claiming unknown
Oct 13 15:43:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:48.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:48 standalone.localdomain systemd-udevd[530679]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:43:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:48.831 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d5036f63-29f3-448c-ad5c-55534cac285e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5036f63-29f3-448c-ad5c-55534cac285e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ad31fa1626a4d048525338a288fd601', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c17d3ad-231d-4d34-a373-ab16abbf06e3, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=2244dbdc-4b24-424b-828c-6d8ecf587bda) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:48.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:48.833 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 2244dbdc-4b24-424b-828c-6d8ecf587bda in datapath d5036f63-29f3-448c-ad5c-55534cac285e bound to our chassis
Oct 13 15:43:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:48.834 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d5036f63-29f3-448c-ad5c-55534cac285e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:43:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:48.835 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f68c4a-3206-4aac-9a0e-79f9518bf8f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:48Z|00126|binding|INFO|Setting lport 2244dbdc-4b24-424b-828c-6d8ecf587bda ovn-installed in OVS
Oct 13 15:43:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:48Z|00127|binding|INFO|Setting lport 2244dbdc-4b24-424b-828c-6d8ecf587bda up in Southbound
Oct 13 15:43:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:48.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:48 standalone.localdomain podman[530664]: 2025-10-13 15:43:48.851617806 +0000 UTC m=+0.062743533 container kill dada39bda57a6adba4eff8e34f98bc866602589f593d90c782a96189b4c3e5b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78178a9a-81a6-4e63-a7f9-e5e97639bfae, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:43:48 standalone.localdomain dnsmasq[530336]: read /var/lib/neutron/dhcp/78178a9a-81a6-4e63-a7f9-e5e97639bfae/addn_hosts - 1 addresses
Oct 13 15:43:48 standalone.localdomain dnsmasq-dhcp[530336]: read /var/lib/neutron/dhcp/78178a9a-81a6-4e63-a7f9-e5e97639bfae/host
Oct 13 15:43:48 standalone.localdomain dnsmasq-dhcp[530336]: read /var/lib/neutron/dhcp/78178a9a-81a6-4e63-a7f9-e5e97639bfae/opts
Oct 13 15:43:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:48.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:48 standalone.localdomain podman[530638]: 2025-10-13 15:43:48.904930069 +0000 UTC m=+0.158136675 container cleanup b3a87ad3731f42032a8070284bed0e8cb7ac79cc935309bb798eaad383866f3f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-670a07d7-1dd0-4554-a989-6ec34ada9f3c, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 13 15:43:48 standalone.localdomain systemd[1]: libpod-conmon-b3a87ad3731f42032a8070284bed0e8cb7ac79cc935309bb798eaad383866f3f.scope: Deactivated successfully.
Oct 13 15:43:48 standalone.localdomain podman[530640]: 2025-10-13 15:43:48.928952285 +0000 UTC m=+0.177833649 container remove b3a87ad3731f42032a8070284bed0e8cb7ac79cc935309bb798eaad383866f3f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-670a07d7-1dd0-4554-a989-6ec34ada9f3c, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:43:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:48.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:48 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:48.960 496978 INFO neutron.agent.dhcp.agent [None req-a41e9cec-0f22-4363-99f8-ea9270cedab1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:43:49 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:49.094 496978 INFO neutron.agent.dhcp.agent [None req-24c1d896-226d-4fd7-b24d-8b015559b1c2 - - - - - -] DHCP configuration for ports {'291a13a0-b564-4224-989a-1a73de26bfaa'} is completed
Oct 13 15:43:49 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:49.188 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:43:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4a7dca2432be1b3ec710696d0ed273e319913b3f2cd6b71e5055c1b60400551d-merged.mount: Deactivated successfully.
Oct 13 15:43:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b3a87ad3731f42032a8070284bed0e8cb7ac79cc935309bb798eaad383866f3f-userdata-shm.mount: Deactivated successfully.
Oct 13 15:43:49 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d670a07d7\x2d1dd0\x2d4554\x2da989\x2d6ec34ada9f3c.mount: Deactivated successfully.
Oct 13 15:43:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3886: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 27 op/s
Oct 13 15:43:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:49.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:49 standalone.localdomain podman[530753]: 2025-10-13 15:43:49.836804084 +0000 UTC m=+0.098079645 container create b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d5036f63-29f3-448c-ad5c-55534cac285e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:43:49 standalone.localdomain systemd[1]: Started libpod-conmon-b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02.scope.
Oct 13 15:43:49 standalone.localdomain podman[530753]: 2025-10-13 15:43:49.788319709 +0000 UTC m=+0.049595240 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:43:49 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:43:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff3a0edf95c6e1252ebe647dfc9352a29f36dbdd641538873f57af6736257a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:43:49 standalone.localdomain podman[530753]: 2025-10-13 15:43:49.91173217 +0000 UTC m=+0.173007701 container init b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d5036f63-29f3-448c-ad5c-55534cac285e, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:43:49 standalone.localdomain podman[530753]: 2025-10-13 15:43:49.919697174 +0000 UTC m=+0.180972695 container start b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d5036f63-29f3-448c-ad5c-55534cac285e, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:43:49 standalone.localdomain dnsmasq[530771]: started, version 2.85 cachesize 150
Oct 13 15:43:49 standalone.localdomain dnsmasq[530771]: DNS service limited to local subnets
Oct 13 15:43:49 standalone.localdomain dnsmasq[530771]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:43:49 standalone.localdomain dnsmasq[530771]: warning: no upstream servers configured
Oct 13 15:43:49 standalone.localdomain dnsmasq-dhcp[530771]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:43:49 standalone.localdomain dnsmasq[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/addn_hosts - 0 addresses
Oct 13 15:43:49 standalone.localdomain dnsmasq-dhcp[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/host
Oct 13 15:43:49 standalone.localdomain dnsmasq-dhcp[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/opts
Oct 13 15:43:50 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:43:50.020 2 INFO neutron.agent.securitygroups_rpc [req-398cabd8-900e-4658-8436-dfe9cd56f791 req-f8f25c03-1b0a-4294-9a24-dda00a89dc13 90d09ace7156426ebdd6d24092ec14b9 ef2aadab7b86431fa8f7a831d718f0e0 - - default default] Security group rule updated ['1231e586-5edc-47f8-84b2-75ebfc390f30']
Oct 13 15:43:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:50.072 496978 INFO neutron.agent.dhcp.agent [None req-084c1e62-29dc-4824-a74e-8b77a7cf2884 - - - - - -] DHCP configuration for ports {'6d0c7ccf-eb5c-4000-9850-d98e4bccfdba'} is completed
Oct 13 15:43:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:50.712 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:50Z, description=, device_id=b9c6c367-f41c-4c0a-beda-ecdeb22b9dc8, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890eca60>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188924e0d0>], id=c49f0917-509a-4130-b768-4d745accb876, ip_allocation=immediate, mac_address=fa:16:3e:38:83:2e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=926, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:43:50Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:43:50 standalone.localdomain ceph-mon[29756]: pgmap v3886: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 27 op/s
Oct 13 15:43:50 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 4 addresses
Oct 13 15:43:50 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:50 standalone.localdomain podman[530789]: 2025-10-13 15:43:50.939615256 +0000 UTC m=+0.052569752 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:43:50 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:50.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:43:51 standalone.localdomain systemd[1]: tmp-crun.KEMM0K.mount: Deactivated successfully.
Oct 13 15:43:51 standalone.localdomain podman[530803]: 2025-10-13 15:43:51.108502599 +0000 UTC m=+0.103592595 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:43:51 standalone.localdomain podman[530803]: 2025-10-13 15:43:51.124087087 +0000 UTC m=+0.119177083 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:43:51 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:43:51 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:43:51.206 2 INFO neutron.agent.securitygroups_rpc [req-4b1eaa57-38d3-46ac-b1cd-1d4d1789a903 req-fe602599-fa48-4cb2-ad91-0527f2550f6e 90d09ace7156426ebdd6d24092ec14b9 ef2aadab7b86431fa8f7a831d718f0e0 - - default default] Security group rule updated ['1231e586-5edc-47f8-84b2-75ebfc390f30']
Oct 13 15:43:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:51.240 496978 INFO neutron.agent.dhcp.agent [None req-d6fd0cb1-3054-4e2f-bea6-64a33c5657c5 - - - - - -] DHCP configuration for ports {'c49f0917-509a-4130-b768-4d745accb876'} is completed
Oct 13 15:43:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:51.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3887: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 13 15:43:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:43:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:52.667 496978 INFO neutron.agent.linux.ip_lib [None req-56eaffd9-41cb-4b37-bb2f-b7b876e1846e - - - - - -] Device tap959d96cd-85 cannot be used as it has no MAC address
Oct 13 15:43:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:52.689 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:52Z, description=, device_id=b9c6c367-f41c-4c0a-beda-ecdeb22b9dc8, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889162730>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889162ca0>], id=bec668a9-d281-4fd9-bd38-2f6685d57053, ip_allocation=immediate, mac_address=fa:16:3e:10:f6:10, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:46Z, description=, dns_domain=, id=d5036f63-29f3-448c-ad5c-55534cac285e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersV294TestFqdnHostnames-534158699-network, port_security_enabled=True, project_id=0ad31fa1626a4d048525338a288fd601, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25099, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=902, status=ACTIVE, subnets=['6f99dd30-5c89-44e1-93d3-a2e2e37c9df8'], tags=[], tenant_id=0ad31fa1626a4d048525338a288fd601, updated_at=2025-10-13T15:43:47Z, vlan_transparent=None, network_id=d5036f63-29f3-448c-ad5c-55534cac285e, port_security_enabled=False, project_id=0ad31fa1626a4d048525338a288fd601, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=942, status=DOWN, tags=[], tenant_id=0ad31fa1626a4d048525338a288fd601, updated_at=2025-10-13T15:43:52Z on network d5036f63-29f3-448c-ad5c-55534cac285e
Oct 13 15:43:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:52.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:52 standalone.localdomain kernel: device tap959d96cd-85 entered promiscuous mode
Oct 13 15:43:52 standalone.localdomain NetworkManager[5962]: <info>  [1760370232.7069] manager: (tap959d96cd-85): new Generic device (/org/freedesktop/NetworkManager/Devices/34)
Oct 13 15:43:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:52Z|00128|binding|INFO|Claiming lport 959d96cd-857d-4059-90ea-1b59b010fda0 for this chassis.
Oct 13 15:43:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:52.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:52 standalone.localdomain systemd-udevd[530844]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:43:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:52Z|00129|binding|INFO|959d96cd-857d-4059-90ea-1b59b010fda0: Claiming unknown
Oct 13 15:43:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:52.723 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-25474d28-1153-460e-a447-2e833189f1b1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25474d28-1153-460e-a447-2e833189f1b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '545241e1ced34a5d90f35b64da19c3a1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7aad2c98-6279-4708-95f0-631f41c9f944, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=959d96cd-857d-4059-90ea-1b59b010fda0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:52.725 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 959d96cd-857d-4059-90ea-1b59b010fda0 in datapath 25474d28-1153-460e-a447-2e833189f1b1 bound to our chassis
Oct 13 15:43:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:52.728 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 25474d28-1153-460e-a447-2e833189f1b1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:43:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:52.729 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[03b55886-3dec-4ef2-b7bc-2e045dc9537c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:52 standalone.localdomain ceph-mon[29756]: pgmap v3887: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 13 15:43:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:52Z|00130|binding|INFO|Setting lport 959d96cd-857d-4059-90ea-1b59b010fda0 ovn-installed in OVS
Oct 13 15:43:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:52Z|00131|binding|INFO|Setting lport 959d96cd-857d-4059-90ea-1b59b010fda0 up in Southbound
Oct 13 15:43:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:52.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:52.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:52.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:52 standalone.localdomain systemd[1]: tmp-crun.8p26yZ.mount: Deactivated successfully.
Oct 13 15:43:52 standalone.localdomain podman[530871]: 2025-10-13 15:43:52.933672057 +0000 UTC m=+0.059836944 container kill b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d5036f63-29f3-448c-ad5c-55534cac285e, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:43:52 standalone.localdomain dnsmasq[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/addn_hosts - 1 addresses
Oct 13 15:43:52 standalone.localdomain dnsmasq-dhcp[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/host
Oct 13 15:43:52 standalone.localdomain dnsmasq-dhcp[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/opts
Oct 13 15:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:43:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:43:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:53.296 496978 INFO neutron.agent.dhcp.agent [None req-e1c31333-f575-486b-94fd-3e80424c304f - - - - - -] DHCP configuration for ports {'bec668a9-d281-4fd9-bd38-2f6685d57053'} is completed
Oct 13 15:43:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13585 DF PROTO=TCP SPT=43738 DPT=9102 SEQ=3332441405 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCEEE6F0000000001030307) 
Oct 13 15:43:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3888: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 13 15:43:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:43:53 standalone.localdomain podman[530933]: 2025-10-13 15:43:53.813716735 +0000 UTC m=+0.083638043 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:43:53 standalone.localdomain podman[530933]: 2025-10-13 15:43:53.842169396 +0000 UTC m=+0.112090664 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 13 15:43:53 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:43:53 standalone.localdomain podman[530949]: 
Oct 13 15:43:53 standalone.localdomain podman[530949]: 2025-10-13 15:43:53.896362626 +0000 UTC m=+0.118425849 container create ec0982bd39d763c4fa4fee18500c87b264605ef8a9bd540b0c66047449a170b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25474d28-1153-460e-a447-2e833189f1b1, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:43:53 standalone.localdomain systemd[1]: Started libpod-conmon-ec0982bd39d763c4fa4fee18500c87b264605ef8a9bd540b0c66047449a170b5.scope.
Oct 13 15:43:53 standalone.localdomain podman[530949]: 2025-10-13 15:43:53.84164302 +0000 UTC m=+0.063706293 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:43:53 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:43:53 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80c2c49182a160fe13bea3908b318400e224778504d7aa54885d3c54d80d04d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:43:53 standalone.localdomain podman[530949]: 2025-10-13 15:43:53.977088989 +0000 UTC m=+0.199152212 container init ec0982bd39d763c4fa4fee18500c87b264605ef8a9bd540b0c66047449a170b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25474d28-1153-460e-a447-2e833189f1b1, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:43:53 standalone.localdomain podman[530949]: 2025-10-13 15:43:53.987340523 +0000 UTC m=+0.209403746 container start ec0982bd39d763c4fa4fee18500c87b264605ef8a9bd540b0c66047449a170b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25474d28-1153-460e-a447-2e833189f1b1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:53 standalone.localdomain dnsmasq[530985]: started, version 2.85 cachesize 150
Oct 13 15:43:53 standalone.localdomain dnsmasq[530985]: DNS service limited to local subnets
Oct 13 15:43:53 standalone.localdomain dnsmasq[530985]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:43:53 standalone.localdomain dnsmasq[530985]: warning: no upstream servers configured
Oct 13 15:43:53 standalone.localdomain dnsmasq-dhcp[530985]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:43:53 standalone.localdomain dnsmasq[530985]: read /var/lib/neutron/dhcp/25474d28-1153-460e-a447-2e833189f1b1/addn_hosts - 0 addresses
Oct 13 15:43:53 standalone.localdomain dnsmasq-dhcp[530985]: read /var/lib/neutron/dhcp/25474d28-1153-460e-a447-2e833189f1b1/host
Oct 13 15:43:53 standalone.localdomain dnsmasq-dhcp[530985]: read /var/lib/neutron/dhcp/25474d28-1153-460e-a447-2e833189f1b1/opts
Oct 13 15:43:54 standalone.localdomain sudo[530973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:43:54 standalone.localdomain sudo[530973]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:43:54 standalone.localdomain sudo[530973]: pam_unix(sudo:session): session closed for user root
Oct 13 15:43:54 standalone.localdomain sudo[530993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:43:54 standalone.localdomain sudo[530993]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:43:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:54.147 496978 INFO neutron.agent.dhcp.agent [None req-a6282572-0006-4baf-8de0-ba50250115a1 - - - - - -] DHCP configuration for ports {'315da74a-52c4-47a4-95aa-ae8a34f2b19e'} is completed
Oct 13 15:43:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13586 DF PROTO=TCP SPT=43738 DPT=9102 SEQ=3332441405 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCEF2760000000001030307) 
Oct 13 15:43:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:54.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:54.702 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:52Z, description=, device_id=b9c6c367-f41c-4c0a-beda-ecdeb22b9dc8, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889260880>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889260220>], id=bec668a9-d281-4fd9-bd38-2f6685d57053, ip_allocation=immediate, mac_address=fa:16:3e:10:f6:10, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:46Z, description=, dns_domain=, id=d5036f63-29f3-448c-ad5c-55534cac285e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersV294TestFqdnHostnames-534158699-network, port_security_enabled=True, project_id=0ad31fa1626a4d048525338a288fd601, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25099, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=902, status=ACTIVE, subnets=['6f99dd30-5c89-44e1-93d3-a2e2e37c9df8'], tags=[], tenant_id=0ad31fa1626a4d048525338a288fd601, updated_at=2025-10-13T15:43:47Z, vlan_transparent=None, network_id=d5036f63-29f3-448c-ad5c-55534cac285e, port_security_enabled=False, project_id=0ad31fa1626a4d048525338a288fd601, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=942, status=DOWN, tags=[], tenant_id=0ad31fa1626a4d048525338a288fd601, updated_at=2025-10-13T15:43:52Z on network d5036f63-29f3-448c-ad5c-55534cac285e
Oct 13 15:43:54 standalone.localdomain ceph-mon[29756]: pgmap v3888: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 13 15:43:54 standalone.localdomain sudo[530993]: pam_unix(sudo:session): session closed for user root
Oct 13 15:43:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:43:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:43:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:43:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:43:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:43:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:43:54 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:43:54 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:43:54 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev f4a35039-c651-4b2e-8e34-a83a1f94fe04 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:43:54 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev f4a35039-c651-4b2e-8e34-a83a1f94fe04 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:43:54 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event f4a35039-c651-4b2e-8e34-a83a1f94fe04 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:43:54 standalone.localdomain podman[531061]: 2025-10-13 15:43:54.961794392 +0000 UTC m=+0.066956391 container kill b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d5036f63-29f3-448c-ad5c-55534cac285e, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:43:54 standalone.localdomain dnsmasq[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/addn_hosts - 1 addresses
Oct 13 15:43:54 standalone.localdomain dnsmasq-dhcp[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/host
Oct 13 15:43:54 standalone.localdomain dnsmasq-dhcp[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/opts
Oct 13 15:43:54 standalone.localdomain sudo[531067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:43:54 standalone.localdomain sudo[531067]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:43:54 standalone.localdomain sudo[531067]: pam_unix(sudo:session): session closed for user root
Oct 13 15:43:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:55.137 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:54Z, description=, device_id=8aac16a2-319e-4a1f-917d-54e393d5b21a, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889111460>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889111310>], id=e71709ae-fe64-443b-9ceb-ddff059a4b29, ip_allocation=immediate, mac_address=fa:16:3e:dc:dc:39, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=971, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:43:54Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:43:55 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:43:55 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:43:55 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:43:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:55.291 496978 INFO neutron.agent.dhcp.agent [None req-1f41899e-a990-48d5-9865-c145ad667912 - - - - - -] DHCP configuration for ports {'bec668a9-d281-4fd9-bd38-2f6685d57053'} is completed
Oct 13 15:43:55 standalone.localdomain podman[531118]: 2025-10-13 15:43:55.378175147 +0000 UTC m=+0.052213771 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:43:55 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 5 addresses
Oct 13 15:43:55 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:55 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3889: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:55.674 496978 INFO neutron.agent.dhcp.agent [None req-7cc0d657-26ac-4cc6-b13e-cf976f48b2c5 - - - - - -] DHCP configuration for ports {'e71709ae-fe64-443b-9ceb-ddff059a4b29'} is completed
Oct 13 15:43:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:43:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:43:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:43:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:43:55 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:43:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:56.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:56 standalone.localdomain podman[531157]: 2025-10-13 15:43:56.229273448 +0000 UTC m=+0.070747078 container kill 5a43e33c4a710994e8cd9caeca9f92b0b89f7ee0468442e9b808c13392d326b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7f1eea2-e875-4697-aeee-9ff7614f00fa, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:43:56 standalone.localdomain dnsmasq[529933]: read /var/lib/neutron/dhcp/d7f1eea2-e875-4697-aeee-9ff7614f00fa/addn_hosts - 0 addresses
Oct 13 15:43:56 standalone.localdomain dnsmasq-dhcp[529933]: read /var/lib/neutron/dhcp/d7f1eea2-e875-4697-aeee-9ff7614f00fa/host
Oct 13 15:43:56 standalone.localdomain dnsmasq-dhcp[529933]: read /var/lib/neutron/dhcp/d7f1eea2-e875-4697-aeee-9ff7614f00fa/opts
Oct 13 15:43:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:56.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:56.413 496978 INFO neutron.agent.linux.ip_lib [None req-c3f3d0ce-f54b-4290-b3ed-b1f7203539f2 - - - - - -] Device tape55b98cb-6f cannot be used as it has no MAC address
Oct 13 15:43:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:56.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:56 standalone.localdomain kernel: device tape55b98cb-6f entered promiscuous mode
Oct 13 15:43:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:56Z|00132|binding|INFO|Claiming lport e55b98cb-6f91-4ca4-a7c7-25c5a4a083ad for this chassis.
Oct 13 15:43:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:56.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:56Z|00133|binding|INFO|e55b98cb-6f91-4ca4-a7c7-25c5a4a083ad: Claiming unknown
Oct 13 15:43:56 standalone.localdomain NetworkManager[5962]: <info>  [1760370236.4567] manager: (tape55b98cb-6f): new Generic device (/org/freedesktop/NetworkManager/Devices/35)
Oct 13 15:43:56 standalone.localdomain systemd-udevd[531199]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:43:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:56.463 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d783c1bb-e132-4da9-b61c-3ae915fe52b1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d783c1bb-e132-4da9-b61c-3ae915fe52b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea4c55d5c309444f85c557f6936baf4a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e54cb89f-9a76-476e-b769-53ca04799779, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=e55b98cb-6f91-4ca4-a7c7-25c5a4a083ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:56.464 378821 INFO neutron.agent.ovn.metadata.agent [-] Port e55b98cb-6f91-4ca4-a7c7-25c5a4a083ad in datapath d783c1bb-e132-4da9-b61c-3ae915fe52b1 bound to our chassis
Oct 13 15:43:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:56.466 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d783c1bb-e132-4da9-b61c-3ae915fe52b1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:43:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:56.468 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[2f91f616-005b-47d2-813f-9fac7bcfa7ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:56.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:56Z|00134|binding|INFO|Releasing lport 739c4fa3-78dc-42a6-90ec-44b13c0174b2 from this chassis (sb_readonly=0)
Oct 13 15:43:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:56Z|00135|binding|INFO|Setting lport 739c4fa3-78dc-42a6-90ec-44b13c0174b2 down in Southbound
Oct 13 15:43:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:56.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:56.477 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d7f1eea2-e875-4697-aeee-9ff7614f00fa', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7f1eea2-e875-4697-aeee-9ff7614f00fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef2aadab7b86431fa8f7a831d718f0e0', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d44ba895-6ffc-44ca-a604-28090952b6bc, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=739c4fa3-78dc-42a6-90ec-44b13c0174b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:56.478 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 739c4fa3-78dc-42a6-90ec-44b13c0174b2 in datapath d7f1eea2-e875-4697-aeee-9ff7614f00fa unbound from our chassis
Oct 13 15:43:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:56.481 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7f1eea2-e875-4697-aeee-9ff7614f00fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:43:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:56.481 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[941b75cd-a71c-4420-946e-86e18748695b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:56 standalone.localdomain kernel: device tap739c4fa3-78 left promiscuous mode
Oct 13 15:43:56 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape55b98cb-6f: No such device
Oct 13 15:43:56 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape55b98cb-6f: No such device
Oct 13 15:43:56 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape55b98cb-6f: No such device
Oct 13 15:43:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:56.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:56 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape55b98cb-6f: No such device
Oct 13 15:43:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:56Z|00136|binding|INFO|Setting lport e55b98cb-6f91-4ca4-a7c7-25c5a4a083ad ovn-installed in OVS
Oct 13 15:43:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:56Z|00137|binding|INFO|Setting lport e55b98cb-6f91-4ca4-a7c7-25c5a4a083ad up in Southbound
Oct 13 15:43:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:56.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:56 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape55b98cb-6f: No such device
Oct 13 15:43:56 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape55b98cb-6f: No such device
Oct 13 15:43:56 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape55b98cb-6f: No such device
Oct 13 15:43:56 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape55b98cb-6f: No such device
Oct 13 15:43:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:56.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13587 DF PROTO=TCP SPT=43738 DPT=9102 SEQ=3332441405 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCEFA760000000001030307) 
Oct 13 15:43:56 standalone.localdomain podman[531230]: 2025-10-13 15:43:56.625257477 +0000 UTC m=+0.072906864 container kill dada39bda57a6adba4eff8e34f98bc866602589f593d90c782a96189b4c3e5b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78178a9a-81a6-4e63-a7f9-e5e97639bfae, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:43:56 standalone.localdomain dnsmasq[530336]: read /var/lib/neutron/dhcp/78178a9a-81a6-4e63-a7f9-e5e97639bfae/addn_hosts - 0 addresses
Oct 13 15:43:56 standalone.localdomain dnsmasq-dhcp[530336]: read /var/lib/neutron/dhcp/78178a9a-81a6-4e63-a7f9-e5e97639bfae/host
Oct 13 15:43:56 standalone.localdomain dnsmasq-dhcp[530336]: read /var/lib/neutron/dhcp/78178a9a-81a6-4e63-a7f9-e5e97639bfae/opts
Oct 13 15:43:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:56Z|00138|binding|INFO|Releasing lport 57ec3634-44ba-4768-88e5-b61cb0ca1c97 from this chassis (sb_readonly=0)
Oct 13 15:43:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:56Z|00139|binding|INFO|Setting lport 57ec3634-44ba-4768-88e5-b61cb0ca1c97 down in Southbound
Oct 13 15:43:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:56.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:56 standalone.localdomain kernel: device tap57ec3634-44 left promiscuous mode
Oct 13 15:43:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:56.977 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-78178a9a-81a6-4e63-a7f9-e5e97639bfae', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-78178a9a-81a6-4e63-a7f9-e5e97639bfae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1e6572989ad4feab2b4f2fcd6222c2b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ae9c590-fb2c-47b5-b68e-fa122c376038, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=57ec3634-44ba-4768-88e5-b61cb0ca1c97) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:43:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:56.979 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 57ec3634-44ba-4768-88e5-b61cb0ca1c97 in datapath 78178a9a-81a6-4e63-a7f9-e5e97639bfae unbound from our chassis
Oct 13 15:43:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:56.983 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 78178a9a-81a6-4e63-a7f9-e5e97639bfae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:43:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:43:56.984 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[12690787-1152-4707-8857-f2a4fa380fc2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:43:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:56.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:57.291 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:56Z, description=, device_id=8aac16a2-319e-4a1f-917d-54e393d5b21a, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889259fa0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889259f10>], id=90e3628b-c784-4e83-a024-694ac51f8f06, ip_allocation=immediate, mac_address=fa:16:3e:e5:8f:bd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:49Z, description=, dns_domain=, id=25474d28-1153-460e-a447-2e833189f1b1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestFqdnHostnames-326150654-network, port_security_enabled=True, project_id=545241e1ced34a5d90f35b64da19c3a1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=6486, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=922, status=ACTIVE, subnets=['4c2ae349-42c1-45ab-b08f-6af927d075b7'], tags=[], tenant_id=545241e1ced34a5d90f35b64da19c3a1, updated_at=2025-10-13T15:43:51Z, vlan_transparent=None, network_id=25474d28-1153-460e-a447-2e833189f1b1, port_security_enabled=False, project_id=545241e1ced34a5d90f35b64da19c3a1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=979, status=DOWN, tags=[], tenant_id=545241e1ced34a5d90f35b64da19c3a1, updated_at=2025-10-13T15:43:56Z on network 25474d28-1153-460e-a447-2e833189f1b1
Oct 13 15:43:57 standalone.localdomain ceph-mon[29756]: pgmap v3889: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:43:57 standalone.localdomain dnsmasq[530985]: read /var/lib/neutron/dhcp/25474d28-1153-460e-a447-2e833189f1b1/addn_hosts - 1 addresses
Oct 13 15:43:57 standalone.localdomain dnsmasq-dhcp[530985]: read /var/lib/neutron/dhcp/25474d28-1153-460e-a447-2e833189f1b1/host
Oct 13 15:43:57 standalone.localdomain dnsmasq-dhcp[530985]: read /var/lib/neutron/dhcp/25474d28-1153-460e-a447-2e833189f1b1/opts
Oct 13 15:43:57 standalone.localdomain podman[531304]: 2025-10-13 15:43:57.562572289 +0000 UTC m=+0.102933304 container kill ec0982bd39d763c4fa4fee18500c87b264605ef8a9bd540b0c66047449a170b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25474d28-1153-460e-a447-2e833189f1b1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:43:57 standalone.localdomain systemd[1]: tmp-crun.hFw0kL.mount: Deactivated successfully.
Oct 13 15:43:57 standalone.localdomain podman[531325]: 2025-10-13 15:43:57.653059021 +0000 UTC m=+0.134042958 container create 57b442fa7217ca4e29ced866d56d07d15c904f9ff36cee475a5d411dec588535 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d783c1bb-e132-4da9-b61c-3ae915fe52b1, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:43:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3890: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:57 standalone.localdomain systemd[1]: Started libpod-conmon-57b442fa7217ca4e29ced866d56d07d15c904f9ff36cee475a5d411dec588535.scope.
Oct 13 15:43:57 standalone.localdomain podman[531325]: 2025-10-13 15:43:57.606742872 +0000 UTC m=+0.087726829 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:43:57 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:43:57 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb2ed2e095601aa4daebba21697eeb4351b9ce08ac94506c726e721924730216/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:43:57 standalone.localdomain podman[531325]: 2025-10-13 15:43:57.747314468 +0000 UTC m=+0.228298415 container init 57b442fa7217ca4e29ced866d56d07d15c904f9ff36cee475a5d411dec588535 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d783c1bb-e132-4da9-b61c-3ae915fe52b1, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:57 standalone.localdomain podman[531325]: 2025-10-13 15:43:57.757630813 +0000 UTC m=+0.238614711 container start 57b442fa7217ca4e29ced866d56d07d15c904f9ff36cee475a5d411dec588535 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d783c1bb-e132-4da9-b61c-3ae915fe52b1, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:43:57 standalone.localdomain dnsmasq[531352]: started, version 2.85 cachesize 150
Oct 13 15:43:57 standalone.localdomain dnsmasq[531352]: DNS service limited to local subnets
Oct 13 15:43:57 standalone.localdomain dnsmasq[531352]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:43:57 standalone.localdomain dnsmasq[531352]: warning: no upstream servers configured
Oct 13 15:43:57 standalone.localdomain dnsmasq-dhcp[531352]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:43:57 standalone.localdomain dnsmasq[531352]: read /var/lib/neutron/dhcp/d783c1bb-e132-4da9-b61c-3ae915fe52b1/addn_hosts - 0 addresses
Oct 13 15:43:57 standalone.localdomain dnsmasq-dhcp[531352]: read /var/lib/neutron/dhcp/d783c1bb-e132-4da9-b61c-3ae915fe52b1/host
Oct 13 15:43:57 standalone.localdomain dnsmasq-dhcp[531352]: read /var/lib/neutron/dhcp/d783c1bb-e132-4da9-b61c-3ae915fe52b1/opts
Oct 13 15:43:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:57.994 496978 INFO neutron.agent.dhcp.agent [None req-002bbcc4-40e3-4674-8c98-93f480eb5141 - - - - - -] DHCP configuration for ports {'90e3628b-c784-4e83-a024-694ac51f8f06'} is completed
Oct 13 15:43:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:58.136 496978 INFO neutron.agent.dhcp.agent [None req-aa9d5457-59bf-4d28-9164-11cbe6fd50e6 - - - - - -] DHCP configuration for ports {'7ae6f01e-4a08-460c-bbb8-f11979b0e556'} is completed
Oct 13 15:43:58 standalone.localdomain ceph-mon[29756]: pgmap v3890: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:58 standalone.localdomain systemd[1]: tmp-crun.QBAzlF.mount: Deactivated successfully.
Oct 13 15:43:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:58.833 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:58Z, description=, device_id=485ea8b3-4e0c-4aca-b36c-5001c636cd2b, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889252190>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889252610>], id=82476360-2fc3-43d6-9a7c-1de54e9c68cd, ip_allocation=immediate, mac_address=fa:16:3e:61:4f:ee, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=984, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:43:58Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:43:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:58.974 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:43:56Z, description=, device_id=8aac16a2-319e-4a1f-917d-54e393d5b21a, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891736a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889173220>], id=90e3628b-c784-4e83-a024-694ac51f8f06, ip_allocation=immediate, mac_address=fa:16:3e:e5:8f:bd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:49Z, description=, dns_domain=, id=25474d28-1153-460e-a447-2e833189f1b1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestFqdnHostnames-326150654-network, port_security_enabled=True, project_id=545241e1ced34a5d90f35b64da19c3a1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=6486, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=922, status=ACTIVE, subnets=['4c2ae349-42c1-45ab-b08f-6af927d075b7'], tags=[], tenant_id=545241e1ced34a5d90f35b64da19c3a1, updated_at=2025-10-13T15:43:51Z, vlan_transparent=None, network_id=25474d28-1153-460e-a447-2e833189f1b1, port_security_enabled=False, project_id=545241e1ced34a5d90f35b64da19c3a1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=979, status=DOWN, tags=[], tenant_id=545241e1ced34a5d90f35b64da19c3a1, updated_at=2025-10-13T15:43:56Z on network 25474d28-1153-460e-a447-2e833189f1b1
Oct 13 15:43:59 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:43:59.035 2 INFO neutron.agent.securitygroups_rpc [None req-2ded6eda-b969-4056-9fc6-6d20bb546a95 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Security group rule updated ['d0f6910c-2783-4ce6-b538-179724c8f9b8']
Oct 13 15:43:59 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 6 addresses
Oct 13 15:43:59 standalone.localdomain podman[531370]: 2025-10-13 15:43:59.078776133 +0000 UTC m=+0.067574601 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:43:59 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:59 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:59 standalone.localdomain dnsmasq[530985]: read /var/lib/neutron/dhcp/25474d28-1153-460e-a447-2e833189f1b1/addn_hosts - 1 addresses
Oct 13 15:43:59 standalone.localdomain dnsmasq-dhcp[530985]: read /var/lib/neutron/dhcp/25474d28-1153-460e-a447-2e833189f1b1/host
Oct 13 15:43:59 standalone.localdomain dnsmasq-dhcp[530985]: read /var/lib/neutron/dhcp/25474d28-1153-460e-a447-2e833189f1b1/opts
Oct 13 15:43:59 standalone.localdomain podman[531404]: 2025-10-13 15:43:59.231415819 +0000 UTC m=+0.066412325 container kill ec0982bd39d763c4fa4fee18500c87b264605ef8a9bd540b0c66047449a170b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25474d28-1153-460e-a447-2e833189f1b1, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:43:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:59.397 496978 INFO neutron.agent.dhcp.agent [None req-025dfe73-12cc-4eca-bb6f-3b80023344cb - - - - - -] DHCP configuration for ports {'82476360-2fc3-43d6-9a7c-1de54e9c68cd'} is completed
Oct 13 15:43:59 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:59Z|00140|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:43:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:59.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:43:59.534 496978 INFO neutron.agent.dhcp.agent [None req-71a3ae83-1c9c-4c0c-8853-4cb7c9daf9c5 - - - - - -] DHCP configuration for ports {'90e3628b-c784-4e83-a024-694ac51f8f06'} is completed
Oct 13 15:43:59 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 5 addresses
Oct 13 15:43:59 standalone.localdomain podman[531446]: 2025-10-13 15:43:59.549564794 +0000 UTC m=+0.060841715 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:59 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:59 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:43:59 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:43:59.562 2 INFO neutron.agent.securitygroups_rpc [None req-137ee3ab-321d-4f74-b1a0-6def0e5b3d9e 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Security group rule updated ['d0f6910c-2783-4ce6-b538-179724c8f9b8']
Oct 13 15:43:59 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:43:59Z|00141|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:43:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3891: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:43:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:59.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:59.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:43:59.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:43:59 standalone.localdomain systemd[1]: tmp-crun.YCA99J.mount: Deactivated successfully.
Oct 13 15:43:59 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 4 addresses
Oct 13 15:43:59 standalone.localdomain podman[531487]: 2025-10-13 15:43:59.968062974 +0000 UTC m=+0.054959725 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:43:59 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:43:59 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:44:00 standalone.localdomain dnsmasq[529933]: exiting on receipt of SIGTERM
Oct 13 15:44:00 standalone.localdomain podman[531518]: 2025-10-13 15:44:00.088972038 +0000 UTC m=+0.062046592 container kill 5a43e33c4a710994e8cd9caeca9f92b0b89f7ee0468442e9b808c13392d326b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7f1eea2-e875-4697-aeee-9ff7614f00fa, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:44:00 standalone.localdomain systemd[1]: libpod-5a43e33c4a710994e8cd9caeca9f92b0b89f7ee0468442e9b808c13392d326b2.scope: Deactivated successfully.
Oct 13 15:44:00 standalone.localdomain podman[531535]: 2025-10-13 15:44:00.143760955 +0000 UTC m=+0.042418870 container died 5a43e33c4a710994e8cd9caeca9f92b0b89f7ee0468442e9b808c13392d326b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7f1eea2-e875-4697-aeee-9ff7614f00fa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:44:00 standalone.localdomain podman[531535]: 2025-10-13 15:44:00.187214697 +0000 UTC m=+0.085872542 container cleanup 5a43e33c4a710994e8cd9caeca9f92b0b89f7ee0468442e9b808c13392d326b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7f1eea2-e875-4697-aeee-9ff7614f00fa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:44:00 standalone.localdomain systemd[1]: libpod-conmon-5a43e33c4a710994e8cd9caeca9f92b0b89f7ee0468442e9b808c13392d326b2.scope: Deactivated successfully.
Oct 13 15:44:00 standalone.localdomain podman[531543]: 2025-10-13 15:44:00.244387628 +0000 UTC m=+0.130368324 container remove 5a43e33c4a710994e8cd9caeca9f92b0b89f7ee0468442e9b808c13392d326b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7f1eea2-e875-4697-aeee-9ff7614f00fa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:44:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:00.272 496978 INFO neutron.agent.dhcp.agent [None req-d4e9a80e-2a71-46a2-9230-3c9ab7f9e8a6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:00.586 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:00 standalone.localdomain dnsmasq[530336]: exiting on receipt of SIGTERM
Oct 13 15:44:00 standalone.localdomain podman[531586]: 2025-10-13 15:44:00.607328426 +0000 UTC m=+0.048654502 container kill dada39bda57a6adba4eff8e34f98bc866602589f593d90c782a96189b4c3e5b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78178a9a-81a6-4e63-a7f9-e5e97639bfae, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:44:00 standalone.localdomain systemd[1]: libpod-dada39bda57a6adba4eff8e34f98bc866602589f593d90c782a96189b4c3e5b4.scope: Deactivated successfully.
Oct 13 15:44:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13588 DF PROTO=TCP SPT=43738 DPT=9102 SEQ=3332441405 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCF0A370000000001030307) 
Oct 13 15:44:00 standalone.localdomain podman[531600]: 2025-10-13 15:44:00.663777515 +0000 UTC m=+0.049633172 container died dada39bda57a6adba4eff8e34f98bc866602589f593d90c782a96189b4c3e5b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78178a9a-81a6-4e63-a7f9-e5e97639bfae, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:44:00 standalone.localdomain podman[531600]: 2025-10-13 15:44:00.690935287 +0000 UTC m=+0.076790874 container cleanup dada39bda57a6adba4eff8e34f98bc866602589f593d90c782a96189b4c3e5b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78178a9a-81a6-4e63-a7f9-e5e97639bfae, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:44:00 standalone.localdomain systemd[1]: libpod-conmon-dada39bda57a6adba4eff8e34f98bc866602589f593d90c782a96189b4c3e5b4.scope: Deactivated successfully.
Oct 13 15:44:00 standalone.localdomain ceph-mon[29756]: pgmap v3891: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:44:00 standalone.localdomain podman[531607]: 2025-10-13 15:44:00.749195781 +0000 UTC m=+0.123268297 container remove dada39bda57a6adba4eff8e34f98bc866602589f593d90c782a96189b4c3e5b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-78178a9a-81a6-4e63-a7f9-e5e97639bfae, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:44:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:00.890 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:44:00Z, description=, device_id=485ea8b3-4e0c-4aca-b36c-5001c636cd2b, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889139be0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889139970>], id=4cfa3986-500e-4b66-8136-8ec8bc143eed, ip_allocation=immediate, mac_address=fa:16:3e:23:6a:32, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:53Z, description=, dns_domain=, id=d783c1bb-e132-4da9-b61c-3ae915fe52b1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroupTestJSON-1898644010-network, port_security_enabled=True, project_id=ea4c55d5c309444f85c557f6936baf4a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12916, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=967, status=ACTIVE, subnets=['2be5faa3-be3e-4934-aa7d-dc985159baf4'], tags=[], tenant_id=ea4c55d5c309444f85c557f6936baf4a, updated_at=2025-10-13T15:43:55Z, vlan_transparent=None, network_id=d783c1bb-e132-4da9-b61c-3ae915fe52b1, port_security_enabled=False, project_id=ea4c55d5c309444f85c557f6936baf4a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=989, status=DOWN, tags=[], tenant_id=ea4c55d5c309444f85c557f6936baf4a, updated_at=2025-10-13T15:44:00Z on network d783c1bb-e132-4da9-b61c-3ae915fe52b1
Oct 13 15:44:00 standalone.localdomain systemd[1]: tmp-crun.RGderk.mount: Deactivated successfully.
Oct 13 15:44:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f4f82d8d9cf1ee5ad7f6a8a2de78b15ac132a76c37a3bea83ee712d42fa29048-merged.mount: Deactivated successfully.
Oct 13 15:44:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dada39bda57a6adba4eff8e34f98bc866602589f593d90c782a96189b4c3e5b4-userdata-shm.mount: Deactivated successfully.
Oct 13 15:44:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0060dffc15a1f2f12dd95f8d5e78bd98550f488dd661f3e670dd7cda70d5c7c5-merged.mount: Deactivated successfully.
Oct 13 15:44:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a43e33c4a710994e8cd9caeca9f92b0b89f7ee0468442e9b808c13392d326b2-userdata-shm.mount: Deactivated successfully.
Oct 13 15:44:00 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dd7f1eea2\x2de875\x2d4697\x2daeee\x2d9ff7614f00fa.mount: Deactivated successfully.
Oct 13 15:44:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:00.967 496978 INFO neutron.agent.dhcp.agent [None req-34cb58fb-0677-4cf3-81a8-cfb78874f7fa - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:00.968 496978 INFO neutron.agent.dhcp.agent [None req-34cb58fb-0677-4cf3-81a8-cfb78874f7fa - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:00 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d78178a9a\x2d81a6\x2d4e63\x2da7f9\x2de5e97639bfae.mount: Deactivated successfully.
Oct 13 15:44:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:01.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:01 standalone.localdomain dnsmasq[531352]: read /var/lib/neutron/dhcp/d783c1bb-e132-4da9-b61c-3ae915fe52b1/addn_hosts - 1 addresses
Oct 13 15:44:01 standalone.localdomain dnsmasq-dhcp[531352]: read /var/lib/neutron/dhcp/d783c1bb-e132-4da9-b61c-3ae915fe52b1/host
Oct 13 15:44:01 standalone.localdomain dnsmasq-dhcp[531352]: read /var/lib/neutron/dhcp/d783c1bb-e132-4da9-b61c-3ae915fe52b1/opts
Oct 13 15:44:01 standalone.localdomain podman[531649]: 2025-10-13 15:44:01.18657918 +0000 UTC m=+0.061465744 container kill 57b442fa7217ca4e29ced866d56d07d15c904f9ff36cee475a5d411dec588535 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d783c1bb-e132-4da9-b61c-3ae915fe52b1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:44:01 standalone.localdomain systemd[1]: tmp-crun.M4Ji70.mount: Deactivated successfully.
Oct 13 15:44:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:01.275 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:01.505 496978 INFO neutron.agent.dhcp.agent [None req-7c298729-d309-4d46-9f5e-496f6b836809 - - - - - -] DHCP configuration for ports {'4cfa3986-500e-4b66-8136-8ec8bc143eed'} is completed
Oct 13 15:44:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3892: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:44:02 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:02.283 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:44:00Z, description=, device_id=485ea8b3-4e0c-4aca-b36c-5001c636cd2b, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891196d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889119c10>], id=4cfa3986-500e-4b66-8136-8ec8bc143eed, ip_allocation=immediate, mac_address=fa:16:3e:23:6a:32, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:53Z, description=, dns_domain=, id=d783c1bb-e132-4da9-b61c-3ae915fe52b1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroupTestJSON-1898644010-network, port_security_enabled=True, project_id=ea4c55d5c309444f85c557f6936baf4a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12916, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=967, status=ACTIVE, subnets=['2be5faa3-be3e-4934-aa7d-dc985159baf4'], tags=[], tenant_id=ea4c55d5c309444f85c557f6936baf4a, updated_at=2025-10-13T15:43:55Z, vlan_transparent=None, network_id=d783c1bb-e132-4da9-b61c-3ae915fe52b1, port_security_enabled=False, project_id=ea4c55d5c309444f85c557f6936baf4a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=989, status=DOWN, tags=[], tenant_id=ea4c55d5c309444f85c557f6936baf4a, updated_at=2025-10-13T15:44:00Z on network d783c1bb-e132-4da9-b61c-3ae915fe52b1
Oct 13 15:44:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:44:02 standalone.localdomain dnsmasq[531352]: read /var/lib/neutron/dhcp/d783c1bb-e132-4da9-b61c-3ae915fe52b1/addn_hosts - 1 addresses
Oct 13 15:44:02 standalone.localdomain podman[531689]: 2025-10-13 15:44:02.565758786 +0000 UTC m=+0.059431061 container kill 57b442fa7217ca4e29ced866d56d07d15c904f9ff36cee475a5d411dec588535 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d783c1bb-e132-4da9-b61c-3ae915fe52b1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:44:02 standalone.localdomain dnsmasq-dhcp[531352]: read /var/lib/neutron/dhcp/d783c1bb-e132-4da9-b61c-3ae915fe52b1/host
Oct 13 15:44:02 standalone.localdomain dnsmasq-dhcp[531352]: read /var/lib/neutron/dhcp/d783c1bb-e132-4da9-b61c-3ae915fe52b1/opts
Oct 13 15:44:02 standalone.localdomain ceph-mon[29756]: pgmap v3892: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:44:02 standalone.localdomain dnsmasq[530985]: read /var/lib/neutron/dhcp/25474d28-1153-460e-a447-2e833189f1b1/addn_hosts - 0 addresses
Oct 13 15:44:02 standalone.localdomain dnsmasq-dhcp[530985]: read /var/lib/neutron/dhcp/25474d28-1153-460e-a447-2e833189f1b1/host
Oct 13 15:44:02 standalone.localdomain podman[531727]: 2025-10-13 15:44:02.838124139 +0000 UTC m=+0.057467601 container kill ec0982bd39d763c4fa4fee18500c87b264605ef8a9bd540b0c66047449a170b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25474d28-1153-460e-a447-2e833189f1b1, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:44:02 standalone.localdomain dnsmasq-dhcp[530985]: read /var/lib/neutron/dhcp/25474d28-1153-460e-a447-2e833189f1b1/opts
Oct 13 15:44:02 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:02.862 496978 INFO neutron.agent.dhcp.agent [None req-73dd17a1-7daf-4a78-a713-9256a111fc6e - - - - - -] DHCP configuration for ports {'4cfa3986-500e-4b66-8136-8ec8bc143eed'} is completed
Oct 13 15:44:02 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:02.966 496978 INFO neutron.agent.linux.ip_lib [None req-078d47e2-9a55-4035-9524-82942c78883a - - - - - -] Device tap674449c3-c7 cannot be used as it has no MAC address
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:02.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:03 standalone.localdomain kernel: device tap674449c3-c7 entered promiscuous mode
Oct 13 15:44:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:03Z|00142|binding|INFO|Claiming lport 674449c3-c755-4fa5-b86a-7f3571608e86 for this chassis.
Oct 13 15:44:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:03Z|00143|binding|INFO|674449c3-c755-4fa5-b86a-7f3571608e86: Claiming unknown
Oct 13 15:44:03 standalone.localdomain NetworkManager[5962]: <info>  [1760370243.0099] manager: (tap674449c3-c7): new Generic device (/org/freedesktop/NetworkManager/Devices/36)
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:03 standalone.localdomain systemd-udevd[531758]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:44:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:03.023 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-82962eb0-ad81-4fa3-9cd5-0dc555e5fed0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82962eb0-ad81-4fa3-9cd5-0dc555e5fed0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1406682a926240e29f505dd4ed438b0b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3df86e27-752c-4829-88cb-a33e756d0e40, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=674449c3-c755-4fa5-b86a-7f3571608e86) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:44:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:03.026 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 674449c3-c755-4fa5-b86a-7f3571608e86 in datapath 82962eb0-ad81-4fa3-9cd5-0dc555e5fed0 bound to our chassis
Oct 13 15:44:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:03.028 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 82962eb0-ad81-4fa3-9cd5-0dc555e5fed0 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:44:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:03.029 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[330c9d96-fbc9-4226-b2da-044809f22553]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:03Z|00144|binding|INFO|Setting lport 674449c3-c755-4fa5-b86a-7f3571608e86 ovn-installed in OVS
Oct 13 15:44:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:03Z|00145|binding|INFO|Setting lport 674449c3-c755-4fa5-b86a-7f3571608e86 up in Southbound
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:03Z|00146|binding|INFO|Releasing lport 959d96cd-857d-4059-90ea-1b59b010fda0 from this chassis (sb_readonly=0)
Oct 13 15:44:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:03Z|00147|binding|INFO|Setting lport 959d96cd-857d-4059-90ea-1b59b010fda0 down in Southbound
Oct 13 15:44:03 standalone.localdomain kernel: device tap959d96cd-85 left promiscuous mode
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.238 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Acquiring lock "1f4a5f45-5f65-4051-b24e-5815867a3574" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.239 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:44:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:03.299 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-25474d28-1153-460e-a447-2e833189f1b1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25474d28-1153-460e-a447-2e833189f1b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '545241e1ced34a5d90f35b64da19c3a1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7aad2c98-6279-4708-95f0-631f41c9f944, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=959d96cd-857d-4059-90ea-1b59b010fda0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:44:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:03.301 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 959d96cd-857d-4059-90ea-1b59b010fda0 in datapath 25474d28-1153-460e-a447-2e833189f1b1 unbound from our chassis
Oct 13 15:44:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:03.305 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 25474d28-1153-460e-a447-2e833189f1b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:44:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:03.307 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[008e6fd5-1145-492a-a257-cde97dfc7b6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.338 2 DEBUG nova.compute.manager [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.440 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.440 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.444 2 DEBUG nova.virt.hardware [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.444 2 INFO nova.compute.claims [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Claim successful on node standalone.localdomain
Oct 13 15:44:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:03.606 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:44:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3893: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:44:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:44:03 standalone.localdomain systemd[1]: tmp-crun.No0pVG.mount: Deactivated successfully.
Oct 13 15:44:03 standalone.localdomain podman[531795]: 2025-10-13 15:44:03.863709375 +0000 UTC m=+0.132946583 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Oct 13 15:44:03 standalone.localdomain podman[531795]: 2025-10-13 15:44:03.903888265 +0000 UTC m=+0.173125503 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:44:03 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:44:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:44:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/567336875' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.084 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.095 2 DEBUG nova.compute.provider_tree [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.122 2 DEBUG nova.scheduler.client.report [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.149 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.150 2 DEBUG nova.compute.manager [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 13 15:44:04 standalone.localdomain podman[531861]: 
Oct 13 15:44:04 standalone.localdomain podman[531861]: 2025-10-13 15:44:04.205788713 +0000 UTC m=+0.114737306 container create 5e14815606df847349fe6b31ecc410f828abb1a78881536cd4e8d991a376def6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82962eb0-ad81-4fa3-9cd5-0dc555e5fed0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.205 2 DEBUG nova.compute.manager [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.206 2 DEBUG nova.network.neutron [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.244 2 INFO nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 13 15:44:04 standalone.localdomain podman[531861]: 2025-10-13 15:44:04.150554292 +0000 UTC m=+0.059502925 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.263 2 DEBUG nova.compute.manager [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 13 15:44:04 standalone.localdomain systemd[1]: Started libpod-conmon-5e14815606df847349fe6b31ecc410f828abb1a78881536cd4e8d991a376def6.scope.
Oct 13 15:44:04 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:44:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/500281d0b0f4879c5a6d19db2deebb34d424010a9a5c246982fea90dc447d402/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:44:04 standalone.localdomain podman[531861]: 2025-10-13 15:44:04.301894928 +0000 UTC m=+0.210843491 container init 5e14815606df847349fe6b31ecc410f828abb1a78881536cd4e8d991a376def6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82962eb0-ad81-4fa3-9cd5-0dc555e5fed0, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:44:04 standalone.localdomain podman[531861]: 2025-10-13 15:44:04.308082817 +0000 UTC m=+0.217031380 container start 5e14815606df847349fe6b31ecc410f828abb1a78881536cd4e8d991a376def6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82962eb0-ad81-4fa3-9cd5-0dc555e5fed0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:44:04 standalone.localdomain dnsmasq[531879]: started, version 2.85 cachesize 150
Oct 13 15:44:04 standalone.localdomain dnsmasq[531879]: DNS service limited to local subnets
Oct 13 15:44:04 standalone.localdomain dnsmasq[531879]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:44:04 standalone.localdomain dnsmasq[531879]: warning: no upstream servers configured
Oct 13 15:44:04 standalone.localdomain dnsmasq-dhcp[531879]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:44:04 standalone.localdomain dnsmasq[531879]: read /var/lib/neutron/dhcp/82962eb0-ad81-4fa3-9cd5-0dc555e5fed0/addn_hosts - 0 addresses
Oct 13 15:44:04 standalone.localdomain dnsmasq-dhcp[531879]: read /var/lib/neutron/dhcp/82962eb0-ad81-4fa3-9cd5-0dc555e5fed0/host
Oct 13 15:44:04 standalone.localdomain dnsmasq-dhcp[531879]: read /var/lib/neutron/dhcp/82962eb0-ad81-4fa3-9cd5-0dc555e5fed0/opts
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.372 2 DEBUG nova.compute.manager [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.375 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.375 2 INFO nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Creating image(s)
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.396 2 DEBUG nova.storage.rbd_utils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] rbd image 1f4a5f45-5f65-4051-b24e-5815867a3574_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.416 2 DEBUG nova.storage.rbd_utils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] rbd image 1f4a5f45-5f65-4051-b24e-5815867a3574_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.437 2 DEBUG nova.storage.rbd_utils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] rbd image 1f4a5f45-5f65-4051-b24e-5815867a3574_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.443 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Acquiring lock "0666b5c9634fadf3e9e06166b1d8249e1c25061f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.444 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "0666b5c9634fadf3e9e06166b1d8249e1c25061f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.456 2 WARNING oslo_policy.policy [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.456 2 WARNING oslo_policy.policy [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.458 2 DEBUG nova.policy [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 13 15:44:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:04.517 496978 INFO neutron.agent.dhcp.agent [None req-b88ab601-3396-4302-904c-6338f00953e1 - - - - - -] DHCP configuration for ports {'0566e8dc-61a5-4831-b161-7b8a653a259f'} is completed
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.553 2 DEBUG nova.virt.libvirt.imagebackend [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Image locations are: [{'url': 'rbd://627e7f45-65aa-56de-94df-66eaee84a56e/images/c219892b-7f28-4b58-8156-43d84cf6801b/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://627e7f45-65aa-56de-94df-66eaee84a56e/images/c219892b-7f28-4b58-8156-43d84cf6801b/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:04 standalone.localdomain ceph-mon[29756]: pgmap v3893: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:44:04 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/567336875' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:44:04 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:04Z|00148|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:44:04 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:44:04 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:44:04 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:44:04 standalone.localdomain systemd[1]: tmp-crun.2LoBJ4.mount: Deactivated successfully.
Oct 13 15:44:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:04.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:04 standalone.localdomain podman[531951]: 2025-10-13 15:44:04.996827805 +0000 UTC m=+0.062277890 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:44:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:05.318 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0666b5c9634fadf3e9e06166b1d8249e1c25061f.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:44:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:05.379 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0666b5c9634fadf3e9e06166b1d8249e1c25061f.part --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:44:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:05.380 2 DEBUG nova.virt.images [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] c219892b-7f28-4b58-8156-43d84cf6801b was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 13 15:44:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:05.381 2 DEBUG nova.privsep.utils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 13 15:44:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:05.382 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/0666b5c9634fadf3e9e06166b1d8249e1c25061f.part /var/lib/nova/instances/_base/0666b5c9634fadf3e9e06166b1d8249e1c25061f.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:44:05 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:44:05.450 2 INFO neutron.agent.securitygroups_rpc [req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb req-f92193ae-c0eb-401b-82bb-de1a6d3bb9fb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Security group member updated ['d0f6910c-2783-4ce6-b538-179724c8f9b8']
Oct 13 15:44:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:05.522 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:44:04Z, description=, device_id=1f4a5f45-5f65-4051-b24e-5815867a3574, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889129400>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891298b0>], id=bf0a8138-74b3-47a4-8639-a5b7d069ec7f, ip_allocation=immediate, mac_address=fa:16:3e:e9:c8:33, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:43:46Z, description=, dns_domain=, id=d5036f63-29f3-448c-ad5c-55534cac285e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersV294TestFqdnHostnames-534158699-network, port_security_enabled=True, project_id=0ad31fa1626a4d048525338a288fd601, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25099, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=902, status=ACTIVE, subnets=['6f99dd30-5c89-44e1-93d3-a2e2e37c9df8'], tags=[], tenant_id=0ad31fa1626a4d048525338a288fd601, updated_at=2025-10-13T15:43:47Z, vlan_transparent=None, network_id=d5036f63-29f3-448c-ad5c-55534cac285e, port_security_enabled=True, project_id=0ad31fa1626a4d048525338a288fd601, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d0f6910c-2783-4ce6-b538-179724c8f9b8'], standard_attr_id=1021, status=DOWN, tags=[], tenant_id=0ad31fa1626a4d048525338a288fd601, updated_at=2025-10-13T15:44:05Z on network d5036f63-29f3-448c-ad5c-55534cac285e
Oct 13 15:44:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:05.552 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/0666b5c9634fadf3e9e06166b1d8249e1c25061f.part /var/lib/nova/instances/_base/0666b5c9634fadf3e9e06166b1d8249e1c25061f.converted" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:44:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:05.557 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0666b5c9634fadf3e9e06166b1d8249e1c25061f.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:44:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:05.627 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0666b5c9634fadf3e9e06166b1d8249e1c25061f.converted --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:44:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:05.630 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "0666b5c9634fadf3e9e06166b1d8249e1c25061f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:44:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:05.665 2 DEBUG nova.storage.rbd_utils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] rbd image 1f4a5f45-5f65-4051-b24e-5815867a3574_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 13 15:44:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3894: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:44:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:05.675 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0666b5c9634fadf3e9e06166b1d8249e1c25061f 1f4a5f45-5f65-4051-b24e-5815867a3574_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:44:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:05.707 2 DEBUG nova.network.neutron [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Successfully created port: bf0a8138-74b3-47a4-8639-a5b7d069ec7f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 13 15:44:05 standalone.localdomain podman[532020]: 2025-10-13 15:44:05.782316575 +0000 UTC m=+0.064549648 container kill b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d5036f63-29f3-448c-ad5c-55534cac285e, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:44:05 standalone.localdomain systemd[1]: tmp-crun.qGNFiW.mount: Deactivated successfully.
Oct 13 15:44:05 standalone.localdomain dnsmasq[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/addn_hosts - 2 addresses
Oct 13 15:44:05 standalone.localdomain dnsmasq-dhcp[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/host
Oct 13 15:44:05 standalone.localdomain dnsmasq-dhcp[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/opts
Oct 13 15:44:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:05.791 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:44:05Z, description=, device_id=11be536b-6d20-46ca-964a-af6750b237fe, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890e54c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188912cc40>], id=04fa3ee7-f962-4f80-bf89-cf3aa50f590a, ip_allocation=immediate, mac_address=fa:16:3e:69:be:d2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1022, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:44:05Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:44:06 standalone.localdomain dnsmasq[530985]: exiting on receipt of SIGTERM
Oct 13 15:44:06 standalone.localdomain podman[532082]: 2025-10-13 15:44:06.043062102 +0000 UTC m=+0.084924952 container kill ec0982bd39d763c4fa4fee18500c87b264605ef8a9bd540b0c66047449a170b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25474d28-1153-460e-a447-2e833189f1b1, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:44:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:44:06 standalone.localdomain systemd[1]: tmp-crun.gYZQ0T.mount: Deactivated successfully.
Oct 13 15:44:06 standalone.localdomain systemd[1]: libpod-ec0982bd39d763c4fa4fee18500c87b264605ef8a9bd540b0c66047449a170b5.scope: Deactivated successfully.
Oct 13 15:44:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:06.060 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0666b5c9634fadf3e9e06166b1d8249e1c25061f 1f4a5f45-5f65-4051-b24e-5815867a3574_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:44:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:06.101 496978 INFO neutron.agent.dhcp.agent [None req-b764d1b1-c772-4876-ac27-41164dee1624 - - - - - -] DHCP configuration for ports {'bf0a8138-74b3-47a4-8639-a5b7d069ec7f'} is completed
Oct 13 15:44:06 standalone.localdomain systemd[1]: tmp-crun.edVJct.mount: Deactivated successfully.
Oct 13 15:44:06 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 4 addresses
Oct 13 15:44:06 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:44:06 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:44:06 standalone.localdomain podman[532101]: 2025-10-13 15:44:06.120223346 +0000 UTC m=+0.104874904 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:44:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:06.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:06 standalone.localdomain podman[532115]: 2025-10-13 15:44:06.134933726 +0000 UTC m=+0.083892260 container died ec0982bd39d763c4fa4fee18500c87b264605ef8a9bd540b0c66047449a170b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25474d28-1153-460e-a447-2e833189f1b1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:44:06 standalone.localdomain podman[532115]: 2025-10-13 15:44:06.168698291 +0000 UTC m=+0.117656825 container cleanup ec0982bd39d763c4fa4fee18500c87b264605ef8a9bd540b0c66047449a170b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25474d28-1153-460e-a447-2e833189f1b1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:44:06 standalone.localdomain systemd[1]: libpod-conmon-ec0982bd39d763c4fa4fee18500c87b264605ef8a9bd540b0c66047449a170b5.scope: Deactivated successfully.
Oct 13 15:44:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:06.175 2 DEBUG nova.storage.rbd_utils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] resizing rbd image 1f4a5f45-5f65-4051-b24e-5815867a3574_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 13 15:44:06 standalone.localdomain podman[532122]: 2025-10-13 15:44:06.212945226 +0000 UTC m=+0.151911514 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, container_name=openstack_network_exporter)
Oct 13 15:44:06 standalone.localdomain podman[532122]: 2025-10-13 15:44:06.225872442 +0000 UTC m=+0.164838790 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 13 15:44:06 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:44:06 standalone.localdomain podman[532123]: 2025-10-13 15:44:06.264458504 +0000 UTC m=+0.190641651 container remove ec0982bd39d763c4fa4fee18500c87b264605ef8a9bd540b0c66047449a170b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25474d28-1153-460e-a447-2e833189f1b1, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:44:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:06.292 496978 INFO neutron.agent.dhcp.agent [None req-8debef37-a54b-42ae-9c17-8d835a63936a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:06.292 496978 INFO neutron.agent.dhcp.agent [None req-8debef37-a54b-42ae-9c17-8d835a63936a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:06.310 2 DEBUG nova.objects.instance [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lazy-loading 'migration_context' on Instance uuid 1f4a5f45-5f65-4051-b24e-5815867a3574 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:44:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:06.336 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 13 15:44:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:06.336 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Ensure instance console log exists: /var/lib/nova/instances/1f4a5f45-5f65-4051-b24e-5815867a3574/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 13 15:44:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:06.337 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:44:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:06.337 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:44:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:06.338 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:44:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:06.363 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=standalone.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:44:04Z, description=, device_id=1f4a5f45-5f65-4051-b24e-5815867a3574, device_owner=compute:nova, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188924e730>], dns_domain=, dns_name=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188924e130>], id=bf0a8138-74b3-47a4-8639-a5b7d069ec7f, ip_allocation=immediate, mac_address=fa:16:3e:e9:c8:33, name=, network_id=d5036f63-29f3-448c-ad5c-55534cac285e, port_security_enabled=True, project_id=0ad31fa1626a4d048525338a288fd601, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['d0f6910c-2783-4ce6-b538-179724c8f9b8'], standard_attr_id=1021, status=DOWN, tags=[], tenant_id=0ad31fa1626a4d048525338a288fd601, updated_at=2025-10-13T15:44:05Z on network d5036f63-29f3-448c-ad5c-55534cac285e
Oct 13 15:44:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:06.369 496978 INFO neutron.agent.dhcp.agent [None req-9ffb03f5-1261-4da2-ad0e-3abcaa404156 - - - - - -] DHCP configuration for ports {'04fa3ee7-f962-4f80-bf89-cf3aa50f590a'} is completed
Oct 13 15:44:06 standalone.localdomain podman[532261]: 2025-10-13 15:44:06.569097976 +0000 UTC m=+0.049503808 container kill b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d5036f63-29f3-448c-ad5c-55534cac285e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:44:06 standalone.localdomain dnsmasq[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/addn_hosts - 2 addresses
Oct 13 15:44:06 standalone.localdomain dnsmasq-dhcp[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/host
Oct 13 15:44:06 standalone.localdomain dnsmasq-dhcp[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/opts
Oct 13 15:44:06 standalone.localdomain ceph-mon[29756]: pgmap v3894: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:44:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:06.788 496978 INFO neutron.agent.dhcp.agent [None req-2f1e5f6d-31e6-4990-b232-c900b552f0cb - - - - - -] DHCP configuration for ports {'bf0a8138-74b3-47a4-8639-a5b7d069ec7f'} is completed
Oct 13 15:44:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:06.963 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:44:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:06.963 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:44:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:06.964 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:44:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b80c2c49182a160fe13bea3908b318400e224778504d7aa54885d3c54d80d04d-merged.mount: Deactivated successfully.
Oct 13 15:44:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec0982bd39d763c4fa4fee18500c87b264605ef8a9bd540b0c66047449a170b5-userdata-shm.mount: Deactivated successfully.
Oct 13 15:44:07 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d25474d28\x2d1153\x2d460e\x2da447\x2d2e833189f1b1.mount: Deactivated successfully.
Oct 13 15:44:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:07.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:44:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:07.488 2 DEBUG nova.network.neutron [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Successfully updated port: bf0a8138-74b3-47a4-8639-a5b7d069ec7f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 13 15:44:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:07.509 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Acquiring lock "refresh_cache-1f4a5f45-5f65-4051-b24e-5815867a3574" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:44:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:07.509 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Acquired lock "refresh_cache-1f4a5f45-5f65-4051-b24e-5815867a3574" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:44:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:07.509 2 DEBUG nova.network.neutron [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 13 15:44:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:44:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:07.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:07.628 2 DEBUG nova.network.neutron [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 13 15:44:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3895: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:44:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:44:07 standalone.localdomain podman[532283]: 2025-10-13 15:44:07.799098393 +0000 UTC m=+0.068466048 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:44:07 standalone.localdomain podman[532283]: 2025-10-13 15:44:07.810165722 +0000 UTC m=+0.079533457 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:44:07 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:44:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:07.993 2 DEBUG nova.compute.manager [req-7444fafb-fbe6-41e6-bc2d-62cde0ea77bc req-7043f711-60bb-44e4-84c0-163c7a7fa45e 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Received event network-changed-bf0a8138-74b3-47a4-8639-a5b7d069ec7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 13 15:44:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:07.993 2 DEBUG nova.compute.manager [req-7444fafb-fbe6-41e6-bc2d-62cde0ea77bc req-7043f711-60bb-44e4-84c0-163c7a7fa45e 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Refreshing instance network info cache due to event network-changed-bf0a8138-74b3-47a4-8639-a5b7d069ec7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 13 15:44:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:07.993 2 DEBUG oslo_concurrency.lockutils [req-7444fafb-fbe6-41e6-bc2d-62cde0ea77bc req-7043f711-60bb-44e4-84c0-163c7a7fa45e 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Acquiring lock "refresh_cache-1f4a5f45-5f65-4051-b24e-5815867a3574" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:44:08 standalone.localdomain dnsmasq[531352]: read /var/lib/neutron/dhcp/d783c1bb-e132-4da9-b61c-3ae915fe52b1/addn_hosts - 0 addresses
Oct 13 15:44:08 standalone.localdomain dnsmasq-dhcp[531352]: read /var/lib/neutron/dhcp/d783c1bb-e132-4da9-b61c-3ae915fe52b1/host
Oct 13 15:44:08 standalone.localdomain podman[532319]: 2025-10-13 15:44:08.32393438 +0000 UTC m=+0.045392642 container kill 57b442fa7217ca4e29ced866d56d07d15c904f9ff36cee475a5d411dec588535 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d783c1bb-e132-4da9-b61c-3ae915fe52b1, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:44:08 standalone.localdomain dnsmasq-dhcp[531352]: read /var/lib/neutron/dhcp/d783c1bb-e132-4da9-b61c-3ae915fe52b1/opts
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.367 2 DEBUG nova.network.neutron [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Updating instance_info_cache with network_info: [{"id": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "address": "fa:16:3e:e9:c8:33", "network": {"id": "d5036f63-29f3-448c-ad5c-55534cac285e", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-534158699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "0ad31fa1626a4d048525338a288fd601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0a8138-74", "ovs_interfaceid": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.397 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Releasing lock "refresh_cache-1f4a5f45-5f65-4051-b24e-5815867a3574" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.397 2 DEBUG nova.compute.manager [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Instance network_info: |[{"id": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "address": "fa:16:3e:e9:c8:33", "network": {"id": "d5036f63-29f3-448c-ad5c-55534cac285e", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-534158699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "0ad31fa1626a4d048525338a288fd601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0a8138-74", "ovs_interfaceid": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.398 2 DEBUG oslo_concurrency.lockutils [req-7444fafb-fbe6-41e6-bc2d-62cde0ea77bc req-7043f711-60bb-44e4-84c0-163c7a7fa45e 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Acquired lock "refresh_cache-1f4a5f45-5f65-4051-b24e-5815867a3574" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.398 2 DEBUG nova.network.neutron [req-7444fafb-fbe6-41e6-bc2d-62cde0ea77bc req-7043f711-60bb-44e4-84c0-163c7a7fa45e 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Refreshing network info cache for port bf0a8138-74b3-47a4-8639-a5b7d069ec7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.403 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Start _get_guest_xml network_info=[{"id": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "address": "fa:16:3e:e9:c8:33", "network": {"id": "d5036f63-29f3-448c-ad5c-55534cac285e", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-534158699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "0ad31fa1626a4d048525338a288fd601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0a8138-74", "ovs_interfaceid": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:41:46Z,direct_url=<?>,disk_format='qcow2',id=c219892b-7f28-4b58-8156-43d84cf6801b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e44641a80bcb466cb3dd688e48b72d8e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:41:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'size': 0, 'image_id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.418 2 WARNING nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.421 2 DEBUG nova.virt.libvirt.host [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Searching host: 'standalone.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.422 2 DEBUG nova.virt.libvirt.host [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.424 2 DEBUG nova.virt.libvirt.host [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Searching host: 'standalone.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.425 2 DEBUG nova.virt.libvirt.host [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.426 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.426 2 DEBUG nova.virt.hardware [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-13T14:10:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1f304cc6-f658-4c81-8601-a769ec289d72',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:41:46Z,direct_url=<?>,disk_format='qcow2',id=c219892b-7f28-4b58-8156-43d84cf6801b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e44641a80bcb466cb3dd688e48b72d8e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:41:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.427 2 DEBUG nova.virt.hardware [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.427 2 DEBUG nova.virt.hardware [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.427 2 DEBUG nova.virt.hardware [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.428 2 DEBUG nova.virt.hardware [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.428 2 DEBUG nova.virt.hardware [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.428 2 DEBUG nova.virt.hardware [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.429 2 DEBUG nova.virt.hardware [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.429 2 DEBUG nova.virt.hardware [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.430 2 DEBUG nova.virt.hardware [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.430 2 DEBUG nova.virt.hardware [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.435 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:44:08 standalone.localdomain ceph-mon[29756]: pgmap v3895: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:44:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:08Z|00149|binding|INFO|Releasing lport e55b98cb-6f91-4ca4-a7c7-25c5a4a083ad from this chassis (sb_readonly=0)
Oct 13 15:44:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:08Z|00150|binding|INFO|Setting lport e55b98cb-6f91-4ca4-a7c7-25c5a4a083ad down in Southbound
Oct 13 15:44:08 standalone.localdomain kernel: device tape55b98cb-6f left promiscuous mode
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:08.626 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d783c1bb-e132-4da9-b61c-3ae915fe52b1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d783c1bb-e132-4da9-b61c-3ae915fe52b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea4c55d5c309444f85c557f6936baf4a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e54cb89f-9a76-476e-b769-53ca04799779, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=e55b98cb-6f91-4ca4-a7c7-25c5a4a083ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:44:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:08.627 378821 INFO neutron.agent.ovn.metadata.agent [-] Port e55b98cb-6f91-4ca4-a7c7-25c5a4a083ad in datapath d783c1bb-e132-4da9-b61c-3ae915fe52b1 unbound from our chassis
Oct 13 15:44:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:08.630 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d783c1bb-e132-4da9-b61c-3ae915fe52b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:44:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:08.630 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[de9c833b-94f0-4a20-97a9-bea4a78cb55f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:08 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 13 15:44:08 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3441011376' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.928 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.954 2 DEBUG nova.storage.rbd_utils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] rbd image 1f4a5f45-5f65-4051-b24e-5815867a3574_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 13 15:44:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:08.959 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.229 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.230 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.230 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.257 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.362 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.362 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.363 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.363 2 DEBUG nova.objects.instance [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:44:09 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 13 15:44:09 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2323490962' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 15:44:09 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:09.442 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:44:08Z, description=, device_id=11be536b-6d20-46ca-964a-af6750b237fe, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892f3a90>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892f3460>], id=ac26b563-86be-45b4-914a-3f185d50c814, ip_allocation=immediate, mac_address=fa:16:3e:4d:ba:ea, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:44:00Z, description=, dns_domain=, id=82962eb0-ad81-4fa3-9cd5-0dc555e5fed0, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsTestJSON-1526031337-network, port_security_enabled=True, project_id=1406682a926240e29f505dd4ed438b0b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12840, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1000, status=ACTIVE, subnets=['d3ebc0c4-c9dd-44e9-9aa4-fb2fbfa62ed6'], tags=[], tenant_id=1406682a926240e29f505dd4ed438b0b, updated_at=2025-10-13T15:44:01Z, vlan_transparent=None, network_id=82962eb0-ad81-4fa3-9cd5-0dc555e5fed0, port_security_enabled=False, project_id=1406682a926240e29f505dd4ed438b0b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1028, status=DOWN, tags=[], tenant_id=1406682a926240e29f505dd4ed438b0b, updated_at=2025-10-13T15:44:08Z on network 82962eb0-ad81-4fa3-9cd5-0dc555e5fed0
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.448 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.451 2 DEBUG nova.virt.libvirt.vif [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:44:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='standalone.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=4,image_ref='c219892b-7f28-4b58-8156-43d84cf6801b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHC2IIz5lKl3HDlfmZeplMM9FpSRm3+y1rUn/v78Cx3DOIV7ynijtPAGsFZVeIIJ403IWKXviSnyaqtFaTnOX/L8cqanRlSgnheMn0MeNzvR53otIJhAFkKn2gkt7aL1ng==',key_name='tempest-keypair-14677431',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='standalone.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='standalone.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0ad31fa1626a4d048525338a288fd601',ramdisk_id='',reservation_id='r-kywtld1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c219892b-7f28-4b58-8156-43d84cf6801b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-1628500701',owner_user_name='tempest-ServersV294TestFqdnHostnames-1628500701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:44:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b4b76f527df445a9af9fdb570abbb05',uuid=1f4a5f45-5f65-4051-b24e-5815867a3574,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "address": "fa:16:3e:e9:c8:33", "network": {"id": "d5036f63-29f3-448c-ad5c-55534cac285e", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-534158699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "0ad31fa1626a4d048525338a288fd601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0a8138-74", "ovs_interfaceid": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.452 2 DEBUG nova.network.os_vif_util [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Converting VIF {"id": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "address": "fa:16:3e:e9:c8:33", "network": {"id": "d5036f63-29f3-448c-ad5c-55534cac285e", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-534158699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "0ad31fa1626a4d048525338a288fd601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0a8138-74", "ovs_interfaceid": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.452 2 DEBUG nova.network.os_vif_util [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:c8:33,bridge_name='br-int',has_traffic_filtering=True,id=bf0a8138-74b3-47a4-8639-a5b7d069ec7f,network=Network(d5036f63-29f3-448c-ad5c-55534cac285e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0a8138-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.454 2 DEBUG nova.objects.instance [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f4a5f45-5f65-4051-b24e-5815867a3574 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.469 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] End _get_guest_xml xml=<domain type="kvm">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   <uuid>1f4a5f45-5f65-4051-b24e-5815867a3574</uuid>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   <name>instance-00000004</name>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   <memory>131072</memory>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   <vcpu>1</vcpu>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   <metadata>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <nova:name>guest-instance-1</nova:name>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <nova:creationTime>2025-10-13 15:44:08</nova:creationTime>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <nova:flavor name="m1.nano">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:         <nova:memory>128</nova:memory>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:         <nova:disk>1</nova:disk>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:         <nova:swap>0</nova:swap>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:         <nova:ephemeral>0</nova:ephemeral>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:         <nova:vcpus>1</nova:vcpus>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       </nova:flavor>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <nova:owner>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:         <nova:user uuid="3b4b76f527df445a9af9fdb570abbb05">tempest-ServersV294TestFqdnHostnames-1628500701-project-member</nova:user>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:         <nova:project uuid="0ad31fa1626a4d048525338a288fd601">tempest-ServersV294TestFqdnHostnames-1628500701</nova:project>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       </nova:owner>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <nova:root type="image" uuid="c219892b-7f28-4b58-8156-43d84cf6801b"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <nova:ports>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:         <nova:port uuid="bf0a8138-74b3-47a4-8639-a5b7d069ec7f">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:         </nova:port>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       </nova:ports>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     </nova:instance>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   </metadata>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   <sysinfo type="smbios">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <system>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <entry name="manufacturer">RDO</entry>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <entry name="product">OpenStack Compute</entry>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <entry name="serial">1f4a5f45-5f65-4051-b24e-5815867a3574</entry>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <entry name="uuid">1f4a5f45-5f65-4051-b24e-5815867a3574</entry>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <entry name="family">Virtual Machine</entry>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     </system>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   </sysinfo>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   <os>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <boot dev="hd"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <smbios mode="sysinfo"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   </os>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   <features>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <acpi/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <apic/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <vmcoreinfo/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   </features>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   <clock offset="utc">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <timer name="pit" tickpolicy="delay"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <timer name="hpet" present="no"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   </clock>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   <cpu mode="host-model" match="exact">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <topology sockets="1" cores="1" threads="1"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   </cpu>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   <devices>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <disk type="network" device="disk">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <driver type="raw" cache="none"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <source protocol="rbd" name="vms/1f4a5f45-5f65-4051-b24e-5815867a3574_disk">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:         <host name="172.18.0.100" port="6789"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       </source>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <auth username="openstack">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:         <secret type="ceph" uuid="627e7f45-65aa-56de-94df-66eaee84a56e"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       </auth>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <target dev="vda" bus="virtio"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     </disk>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <disk type="network" device="cdrom">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <driver type="raw" cache="none"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <source protocol="rbd" name="vms/1f4a5f45-5f65-4051-b24e-5815867a3574_disk.config">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:         <host name="172.18.0.100" port="6789"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       </source>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <auth username="openstack">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:         <secret type="ceph" uuid="627e7f45-65aa-56de-94df-66eaee84a56e"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       </auth>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <target dev="sda" bus="sata"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     </disk>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <interface type="ethernet">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <mac address="fa:16:3e:e9:c8:33"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <model type="virtio"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <driver name="vhost" rx_queue_size="512"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <mtu size="1442"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <target dev="tapbf0a8138-74"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     </interface>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <serial type="pty">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <log file="/var/lib/nova/instances/1f4a5f45-5f65-4051-b24e-5815867a3574/console.log" append="off"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     </serial>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <video>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <model type="virtio"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     </video>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <input type="tablet" bus="usb"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <rng model="virtio">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <backend model="random">/dev/urandom</backend>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     </rng>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="pci" model="pcie-root-port"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <controller type="usb" index="0"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     <memballoon model="virtio">
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:       <stats period="10"/>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:     </memballoon>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:   </devices>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: </domain>
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.469 2 DEBUG nova.compute.manager [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Preparing to wait for external event network-vif-plugged-bf0a8138-74b3-47a4-8639-a5b7d069ec7f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.470 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Acquiring lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.470 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.470 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.470 2 DEBUG nova.virt.libvirt.vif [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:44:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='standalone.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=4,image_ref='c219892b-7f28-4b58-8156-43d84cf6801b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHC2IIz5lKl3HDlfmZeplMM9FpSRm3+y1rUn/v78Cx3DOIV7ynijtPAGsFZVeIIJ403IWKXviSnyaqtFaTnOX/L8cqanRlSgnheMn0MeNzvR53otIJhAFkKn2gkt7aL1ng==',key_name='tempest-keypair-14677431',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='standalone.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='standalone.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0ad31fa1626a4d048525338a288fd601',ramdisk_id='',reservation_id='r-kywtld1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c219892b-7f28-4b58-8156-43d84cf6801b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-1628500701',owner_user_name='tempest-ServersV294TestFqdnHostnames-1628500701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:44:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b4b76f527df445a9af9fdb570abbb05',uuid=1f4a5f45-5f65-4051-b24e-5815867a3574,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "address": "fa:16:3e:e9:c8:33", "network": {"id": "d5036f63-29f3-448c-ad5c-55534cac285e", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-534158699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "0ad31fa1626a4d048525338a288fd601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0a8138-74", "ovs_interfaceid": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.471 2 DEBUG nova.network.os_vif_util [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Converting VIF {"id": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "address": "fa:16:3e:e9:c8:33", "network": {"id": "d5036f63-29f3-448c-ad5c-55534cac285e", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-534158699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "0ad31fa1626a4d048525338a288fd601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0a8138-74", "ovs_interfaceid": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.471 2 DEBUG nova.network.os_vif_util [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:c8:33,bridge_name='br-int',has_traffic_filtering=True,id=bf0a8138-74b3-47a4-8639-a5b7d069ec7f,network=Network(d5036f63-29f3-448c-ad5c-55534cac285e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0a8138-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.471 2 DEBUG os_vif [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:c8:33,bridge_name='br-int',has_traffic_filtering=True,id=bf0a8138-74b3-47a4-8639-a5b7d069ec7f,network=Network(d5036f63-29f3-448c-ad5c-55534cac285e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0a8138-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.477 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf0a8138-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.477 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf0a8138-74, col_values=(('external_ids', {'iface-id': 'bf0a8138-74b3-47a4-8639-a5b7d069ec7f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:c8:33', 'vm-uuid': '1f4a5f45-5f65-4051-b24e-5815867a3574'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.489 2 INFO os_vif [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:c8:33,bridge_name='br-int',has_traffic_filtering=True,id=bf0a8138-74b3-47a4-8639-a5b7d069ec7f,network=Network(d5036f63-29f3-448c-ad5c-55534cac285e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0a8138-74')
Oct 13 15:44:09 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3441011376' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 15:44:09 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2323490962' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.548 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.549 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.549 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] No VIF found with MAC fa:16:3e:e9:c8:33, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.550 2 INFO nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Using config drive
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.571 2 DEBUG nova.storage.rbd_utils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] rbd image 1f4a5f45-5f65-4051-b24e-5815867a3574_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.599 2 DEBUG nova.network.neutron [req-7444fafb-fbe6-41e6-bc2d-62cde0ea77bc req-7043f711-60bb-44e4-84c0-163c7a7fa45e 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Updated VIF entry in instance network info cache for port bf0a8138-74b3-47a4-8639-a5b7d069ec7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.599 2 DEBUG nova.network.neutron [req-7444fafb-fbe6-41e6-bc2d-62cde0ea77bc req-7043f711-60bb-44e4-84c0-163c7a7fa45e 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Updating instance_info_cache with network_info: [{"id": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "address": "fa:16:3e:e9:c8:33", "network": {"id": "d5036f63-29f3-448c-ad5c-55534cac285e", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-534158699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "0ad31fa1626a4d048525338a288fd601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0a8138-74", "ovs_interfaceid": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.618 2 DEBUG oslo_concurrency.lockutils [req-7444fafb-fbe6-41e6-bc2d-62cde0ea77bc req-7043f711-60bb-44e4-84c0-163c7a7fa45e 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Releasing lock "refresh_cache-1f4a5f45-5f65-4051-b24e-5815867a3574" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:44:09 standalone.localdomain dnsmasq[531879]: read /var/lib/neutron/dhcp/82962eb0-ad81-4fa3-9cd5-0dc555e5fed0/addn_hosts - 1 addresses
Oct 13 15:44:09 standalone.localdomain dnsmasq-dhcp[531879]: read /var/lib/neutron/dhcp/82962eb0-ad81-4fa3-9cd5-0dc555e5fed0/host
Oct 13 15:44:09 standalone.localdomain podman[532441]: 2025-10-13 15:44:09.670640802 +0000 UTC m=+0.071557923 container kill 5e14815606df847349fe6b31ecc410f828abb1a78881536cd4e8d991a376def6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82962eb0-ad81-4fa3-9cd5-0dc555e5fed0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 13 15:44:09 standalone.localdomain dnsmasq-dhcp[531879]: read /var/lib/neutron/dhcp/82962eb0-ad81-4fa3-9cd5-0dc555e5fed0/opts
Oct 13 15:44:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3896: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.697 2 INFO nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Creating config drive at /var/lib/nova/instances/1f4a5f45-5f65-4051-b24e-5815867a3574/disk.config
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.701 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f4a5f45-5f65-4051-b24e-5815867a3574/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpat4ig4oz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.827 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f4a5f45-5f65-4051-b24e-5815867a3574/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpat4ig4oz" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.846 2 DEBUG nova.storage.rbd_utils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] rbd image 1f4a5f45-5f65-4051-b24e-5815867a3574_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.849 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f4a5f45-5f65-4051-b24e-5815867a3574/disk.config 1f4a5f45-5f65-4051-b24e-5815867a3574_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:44:09 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:09.957 496978 INFO neutron.agent.dhcp.agent [None req-bf4c7f1c-af47-4377-8a9b-a4c87b08d39f - - - - - -] DHCP configuration for ports {'ac26b563-86be-45b4-914a-3f185d50c814'} is completed
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.986 2 DEBUG oslo_concurrency.processutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f4a5f45-5f65-4051-b24e-5815867a3574/disk.config 1f4a5f45-5f65-4051-b24e-5815867a3574_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:44:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:09.987 2 INFO nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Deleting local config drive /var/lib/nova/instances/1f4a5f45-5f65-4051-b24e-5815867a3574/disk.config because it was imported into RBD.
Oct 13 15:44:10 standalone.localdomain kernel: device tapbf0a8138-74 entered promiscuous mode
Oct 13 15:44:10 standalone.localdomain NetworkManager[5962]: <info>  [1760370250.0359] manager: (tapbf0a8138-74): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:10 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:10Z|00151|binding|INFO|Claiming lport bf0a8138-74b3-47a4-8639-a5b7d069ec7f for this chassis.
Oct 13 15:44:10 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:10Z|00152|binding|INFO|bf0a8138-74b3-47a4-8639-a5b7d069ec7f: Claiming fa:16:3e:e9:c8:33 10.100.0.10
Oct 13 15:44:10 standalone.localdomain systemd-udevd[532511]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.047 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:c8:33 10.100.0.10'], port_security=['fa:16:3e:e9:c8:33 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5036f63-29f3-448c-ad5c-55534cac285e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ad31fa1626a4d048525338a288fd601', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd0f6910c-2783-4ce6-b538-179724c8f9b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c17d3ad-231d-4d34-a373-ab16abbf06e3, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=bf0a8138-74b3-47a4-8639-a5b7d069ec7f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.050 378821 INFO neutron.agent.ovn.metadata.agent [-] Port bf0a8138-74b3-47a4-8639-a5b7d069ec7f in datapath d5036f63-29f3-448c-ad5c-55534cac285e bound to our chassis
Oct 13 15:44:10 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:10Z|00153|binding|INFO|Setting lport bf0a8138-74b3-47a4-8639-a5b7d069ec7f ovn-installed in OVS
Oct 13 15:44:10 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:10Z|00154|binding|INFO|Setting lport bf0a8138-74b3-47a4-8639-a5b7d069ec7f up in Southbound
Oct 13 15:44:10 standalone.localdomain NetworkManager[5962]: <info>  [1760370250.0517] device (tapbf0a8138-74): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:10 standalone.localdomain NetworkManager[5962]: <info>  [1760370250.0558] device (tapbf0a8138-74): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.058 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port bcbd804f-edbb-47fa-9ba8-f47ac32fbf9e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.059 378821 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5036f63-29f3-448c-ad5c-55534cac285e
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.066 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[3f902c55-30e5-4a9f-ab10-cf5e6c8d3fa5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.068 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd5036f63-21 in ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.071 378925 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd5036f63-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.071 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc8d427-696d-4600-8df7-089ed0750b77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.075 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0e34d6-0d65-4b6e-a08b-7f26f9f2438f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain systemd-machined[183383]: New machine qemu-4-instance-00000004.
Oct 13 15:44:10 standalone.localdomain systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.105 379037 DEBUG oslo.privsep.daemon [-] privsep: reply[99735320-9137-43f0-9b1f-1971d56683ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.134 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[16237081-7bc0-49ef-bb04-24fc624e290d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.160 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[124868e1-a047-488b-845c-a05e96eba8d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain NetworkManager[5962]: <info>  [1760370250.1663] manager: (tapd5036f63-20): new Veth device (/org/freedesktop/NetworkManager/Devices/38)
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.167 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[1bad879c-e070-4562-9aca-93c181656a7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.198 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3aa53a-254c-440d-bfdc-da919367aad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.200 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[fa3e7f7c-d659-49f0-b451-f36b7be04a9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapd5036f63-21: link becomes ready
Oct 13 15:44:10 standalone.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapd5036f63-20: link becomes ready
Oct 13 15:44:10 standalone.localdomain NetworkManager[5962]: <info>  [1760370250.2207] device (tapd5036f63-20): carrier: link connected
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.226 378936 DEBUG oslo.privsep.daemon [-] privsep: reply[2183f97c-95fe-40f7-8556-50620130261a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.240 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[060a26fe-3676-4d59-b92c-35953776108b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5036f63-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:bd:3a:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', 
{'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 969333, 'reachable_time': 28918, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 
'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 532567, 'error': None, 'target': 'ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.255 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[66f99f99-df37-4751-b0be-02a68dbe0f4a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:3aea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 969333, 'tstamp': 969333}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 532569, 'error': None, 'target': 'ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.281 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[b96e4c8a-b9c3-4d1d-8081-6eb75fb809a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5036f63-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:bd:3a:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', 
{'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 969333, 'reachable_time': 28918, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 
'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 532585, 'error': None, 'target': 'ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.309 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9ca4eb-61f7-4da6-89d5-bb9218c94196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.363 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6519e9-a54c-4760-afe3-8440fbb19aa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.365 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5036f63-20, bridge=br-ctlplane, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.365 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.366 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5036f63-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:44:10 standalone.localdomain kernel: device tapd5036f63-20 entered promiscuous mode
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.379 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5036f63-20, col_values=(('external_ids', {'iface-id': '6d0c7ccf-eb5c-4000-9850-d98e4bccfdba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:44:10 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:10Z|00155|binding|INFO|Releasing lport 6d0c7ccf-eb5c-4000-9850-d98e4bccfdba from this chassis (sb_readonly=0)
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.385 378821 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d5036f63-29f3-448c-ad5c-55534cac285e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d5036f63-29f3-448c-ad5c-55534cac285e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.386 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[77cf9fc2-3bb8-4240-adc3-1748c3c535c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.387 378821 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: global
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     log         /dev/log local0 debug
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     log-tag     haproxy-metadata-proxy-d5036f63-29f3-448c-ad5c-55534cac285e
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     user        root
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     group       root
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     maxconn     1024
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     pidfile     /var/lib/neutron/external/pids/d5036f63-29f3-448c-ad5c-55534cac285e.pid.haproxy
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     daemon
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: defaults
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     log global
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     mode http
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     option httplog
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     option dontlognull
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     option http-server-close
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     option forwardfor
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     retries                 3
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     timeout http-request    30s
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     timeout connect         30s
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     timeout client          32s
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     timeout server          32s
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     timeout http-keep-alive 30s
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: listen listener
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     bind 169.254.169.254:80
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     server metadata /var/lib/neutron/metadata_proxy
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:     http-request add-header X-OVN-Network-ID d5036f63-29f3-448c-ad5c-55534cac285e
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 13 15:44:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:10.389 378821 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e', 'env', 'PROCESS_TAG=haproxy-d5036f63-29f3-448c-ad5c-55534cac285e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d5036f63-29f3-448c-ad5c-55534cac285e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:10 standalone.localdomain ceph-mon[29756]: pgmap v3896: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.592 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.607 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.607 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.608 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.608 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.608 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.609 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.609 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.623 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.624 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.624 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.624 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.625 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:44:10 standalone.localdomain podman[532628]: 
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.785 2 DEBUG nova.virt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Emitting event <LifecycleEvent: 1760370250.784727, 1f4a5f45-5f65-4051-b24e-5815867a3574 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.786 2 INFO nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] VM Started (Lifecycle Event)
Oct 13 15:44:10 standalone.localdomain podman[532628]: 2025-10-13 15:44:10.793316542 +0000 UTC m=+0.087734899 container create fa99f08118073dfe78589558388d1a212b52bb94972f65bf8e66d3803c4192d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.815 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.821 2 DEBUG nova.virt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Emitting event <LifecycleEvent: 1760370250.7890682, 1f4a5f45-5f65-4051-b24e-5815867a3574 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.821 2 INFO nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] VM Paused (Lifecycle Event)
Oct 13 15:44:10 standalone.localdomain systemd[1]: Started libpod-conmon-fa99f08118073dfe78589558388d1a212b52bb94972f65bf8e66d3803c4192d4.scope.
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.839 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.842 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 13 15:44:10 standalone.localdomain podman[532628]: 2025-10-13 15:44:10.74464539 +0000 UTC m=+0.039063847 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 15:44:10 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:44:10 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4def9279d2afc491e16a86c7504ebe42c84466598fd4e9cf13041559eb1df0a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:44:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:10.864 2 INFO nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 13 15:44:10 standalone.localdomain podman[532628]: 2025-10-13 15:44:10.870064212 +0000 UTC m=+0.164482569 container init fa99f08118073dfe78589558388d1a212b52bb94972f65bf8e66d3803c4192d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:44:10 standalone.localdomain podman[532628]: 2025-10-13 15:44:10.878651275 +0000 UTC m=+0.173069632 container start fa99f08118073dfe78589558388d1a212b52bb94972f65bf8e66d3803c4192d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:44:10 standalone.localdomain neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e[532661]: [NOTICE]   (532665) : New worker (532667) forked
Oct 13 15:44:10 standalone.localdomain neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e[532661]: [NOTICE]   (532665) : Loading success.
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.067 2 DEBUG nova.compute.manager [req-4a55825f-3535-4cee-9061-3c0cbb262da0 req-a800d1b4-b35a-4322-b6ba-7778c2b8225f 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Received event network-vif-plugged-bf0a8138-74b3-47a4-8639-a5b7d069ec7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.067 2 DEBUG oslo_concurrency.lockutils [req-4a55825f-3535-4cee-9061-3c0cbb262da0 req-a800d1b4-b35a-4322-b6ba-7778c2b8225f 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Acquiring lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.068 2 DEBUG oslo_concurrency.lockutils [req-4a55825f-3535-4cee-9061-3c0cbb262da0 req-a800d1b4-b35a-4322-b6ba-7778c2b8225f 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.068 2 DEBUG oslo_concurrency.lockutils [req-4a55825f-3535-4cee-9061-3c0cbb262da0 req-a800d1b4-b35a-4322-b6ba-7778c2b8225f 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.068 2 DEBUG nova.compute.manager [req-4a55825f-3535-4cee-9061-3c0cbb262da0 req-a800d1b4-b35a-4322-b6ba-7778c2b8225f 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Processing event network-vif-plugged-bf0a8138-74b3-47a4-8639-a5b7d069ec7f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.069 2 DEBUG nova.compute.manager [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.073 2 DEBUG nova.virt.driver [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] Emitting event <LifecycleEvent: 1760370251.0731273, 1f4a5f45-5f65-4051-b24e-5815867a3574 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.074 2 INFO nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] VM Resumed (Lifecycle Event)
Oct 13 15:44:11 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:44:11 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1889562756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.087 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.094 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.095 2 INFO nova.virt.libvirt.driver [-] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Instance spawned successfully.
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.096 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.101 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.103 2 DEBUG nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.122 2 INFO nova.compute.manager [None req-18b69862-8605-4b6f-b0c1-44895c7db812 - - - - - -] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.151 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.151 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.152 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.153 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.154 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.154 2 DEBUG nova.virt.libvirt.driver [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 15:44:11 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:11.224 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:44:08Z, description=, device_id=11be536b-6d20-46ca-964a-af6750b237fe, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889119b80>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889119d60>], id=ac26b563-86be-45b4-914a-3f185d50c814, ip_allocation=immediate, mac_address=fa:16:3e:4d:ba:ea, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:44:00Z, description=, dns_domain=, id=82962eb0-ad81-4fa3-9cd5-0dc555e5fed0, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsTestJSON-1526031337-network, port_security_enabled=True, project_id=1406682a926240e29f505dd4ed438b0b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12840, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1000, status=ACTIVE, subnets=['d3ebc0c4-c9dd-44e9-9aa4-fb2fbfa62ed6'], tags=[], tenant_id=1406682a926240e29f505dd4ed438b0b, updated_at=2025-10-13T15:44:01Z, vlan_transparent=None, network_id=82962eb0-ad81-4fa3-9cd5-0dc555e5fed0, port_security_enabled=False, project_id=1406682a926240e29f505dd4ed438b0b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1028, status=DOWN, tags=[], tenant_id=1406682a926240e29f505dd4ed438b0b, updated_at=2025-10-13T15:44:08Z on network 82962eb0-ad81-4fa3-9cd5-0dc555e5fed0
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.225 2 INFO nova.compute.manager [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Took 6.85 seconds to spawn the instance on the hypervisor.
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.226 2 DEBUG nova.compute.manager [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.243 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.243 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.243 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.249 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.249 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.254 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.254 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.367 2 INFO nova.compute.manager [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Took 7.96 seconds to build instance.
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.383 2 DEBUG oslo_concurrency.lockutils [None req-6256f42b-24c4-4a44-9ba9-ea709cca8ceb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:44:11 standalone.localdomain dnsmasq[531879]: read /var/lib/neutron/dhcp/82962eb0-ad81-4fa3-9cd5-0dc555e5fed0/addn_hosts - 1 addresses
Oct 13 15:44:11 standalone.localdomain podman[532696]: 2025-10-13 15:44:11.512786671 +0000 UTC m=+0.058194164 container kill 5e14815606df847349fe6b31ecc410f828abb1a78881536cd4e8d991a376def6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82962eb0-ad81-4fa3-9cd5-0dc555e5fed0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:44:11 standalone.localdomain dnsmasq-dhcp[531879]: read /var/lib/neutron/dhcp/82962eb0-ad81-4fa3-9cd5-0dc555e5fed0/host
Oct 13 15:44:11 standalone.localdomain dnsmasq-dhcp[531879]: read /var/lib/neutron/dhcp/82962eb0-ad81-4fa3-9cd5-0dc555e5fed0/opts
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.545 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.548 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9108MB free_disk=6.9288330078125GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.548 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.548 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:44:11 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1889562756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:44:11 standalone.localdomain podman[467099]: time="2025-10-13T15:44:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:44:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:44:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 421316 "" "Go-http-client/1.1"
Oct 13 15:44:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3897: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.745 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.745 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.745 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 1f4a5f45-5f65-4051-b24e-5815867a3574 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.745 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.745 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1664MB phys_disk=6GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:44:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:44:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 51036 "" "Go-http-client/1.1"
Oct 13 15:44:11 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:11.819 496978 INFO neutron.agent.dhcp.agent [None req-0806d009-33ff-43de-93d0-2f9562ca6998 - - - - - -] DHCP configuration for ports {'ac26b563-86be-45b4-914a-3f185d50c814'} is completed
Oct 13 15:44:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:11.826 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:44:11 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:44:11 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:44:11 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:44:11 standalone.localdomain podman[532735]: 2025-10-13 15:44:11.897273068 +0000 UTC m=+0.045547996 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:44:12 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:12Z|00156|binding|INFO|Releasing lport 6d0c7ccf-eb5c-4000-9850-d98e4bccfdba from this chassis (sb_readonly=0)
Oct 13 15:44:12 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:12Z|00157|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:44:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:12.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:44:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2746292559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:44:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:12.374 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:44:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:12.386 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:44:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:12.407 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:44:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:12.435 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:44:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:12.436 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:44:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:44:12 standalone.localdomain ceph-mon[29756]: pgmap v3897: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 13 15:44:12 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2746292559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:44:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:44:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:44:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:44:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:44:12 standalone.localdomain podman[532786]: 2025-10-13 15:44:12.809216762 +0000 UTC m=+0.080964681 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, distribution-scope=public, io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T14:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=)
Oct 13 15:44:12 standalone.localdomain systemd[1]: tmp-crun.F8Z2ef.mount: Deactivated successfully.
Oct 13 15:44:12 standalone.localdomain podman[532789]: 2025-10-13 15:44:12.889602005 +0000 UTC m=+0.159239329 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-swift-account-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, vcs-type=git, container_name=swift_account_server, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, build-date=2025-07-21T16:11:22, distribution-scope=public, release=1, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:44:12 standalone.localdomain podman[532790]: 2025-10-13 15:44:12.901097127 +0000 UTC m=+0.162326574 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:44:12 standalone.localdomain podman[532790]: 2025-10-13 15:44:12.914984032 +0000 UTC m=+0.176213479 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:44:12 standalone.localdomain podman[532788]: 2025-10-13 15:44:12.871607454 +0000 UTC m=+0.138618278 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, name=rhosp17/openstack-swift-container, architecture=x86_64, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, summary=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9)
Oct 13 15:44:12 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:44:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:44:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:44:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:44:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:44:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:44:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:44:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:44:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:44:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:44:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:44:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:44:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:44:13 standalone.localdomain podman[532786]: 2025-10-13 15:44:13.028246302 +0000 UTC m=+0.299994221 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_id=tripleo_step4, io.buildah.version=1.33.12, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., version=17.1.9, build-date=2025-07-21T14:56:28, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, release=1, container_name=swift_object_server, distribution-scope=public, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, architecture=x86_64)
Oct 13 15:44:13 standalone.localdomain podman[532884]: 2025-10-13 15:44:13.031140731 +0000 UTC m=+0.053128079 container kill 57b442fa7217ca4e29ced866d56d07d15c904f9ff36cee475a5d411dec588535 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d783c1bb-e132-4da9-b61c-3ae915fe52b1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:44:13 standalone.localdomain dnsmasq[531352]: exiting on receipt of SIGTERM
Oct 13 15:44:13 standalone.localdomain systemd[1]: libpod-57b442fa7217ca4e29ced866d56d07d15c904f9ff36cee475a5d411dec588535.scope: Deactivated successfully.
Oct 13 15:44:13 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:44:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:13.055 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:44:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:13.056 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:44:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:13.056 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:44:13 standalone.localdomain podman[532905]: 2025-10-13 15:44:13.085743513 +0000 UTC m=+0.044020670 container died 57b442fa7217ca4e29ced866d56d07d15c904f9ff36cee475a5d411dec588535 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d783c1bb-e132-4da9-b61c-3ae915fe52b1, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:44:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:13.109 2 DEBUG nova.compute.manager [req-3ebff58d-25d5-40ee-a6c1-5a15ccae27eb req-48958dfb-be06-456e-80e7-b808b03e1979 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Received event network-vif-plugged-bf0a8138-74b3-47a4-8639-a5b7d069ec7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 13 15:44:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:13.110 2 DEBUG oslo_concurrency.lockutils [req-3ebff58d-25d5-40ee-a6c1-5a15ccae27eb req-48958dfb-be06-456e-80e7-b808b03e1979 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Acquiring lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:44:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:13.110 2 DEBUG oslo_concurrency.lockutils [req-3ebff58d-25d5-40ee-a6c1-5a15ccae27eb req-48958dfb-be06-456e-80e7-b808b03e1979 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:44:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:13.111 2 DEBUG oslo_concurrency.lockutils [req-3ebff58d-25d5-40ee-a6c1-5a15ccae27eb req-48958dfb-be06-456e-80e7-b808b03e1979 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:44:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:13.111 2 DEBUG nova.compute.manager [req-3ebff58d-25d5-40ee-a6c1-5a15ccae27eb req-48958dfb-be06-456e-80e7-b808b03e1979 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] No waiting events found dispatching network-vif-plugged-bf0a8138-74b3-47a4-8639-a5b7d069ec7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 13 15:44:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:13.111 2 WARNING nova.compute.manager [req-3ebff58d-25d5-40ee-a6c1-5a15ccae27eb req-48958dfb-be06-456e-80e7-b808b03e1979 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Received unexpected event network-vif-plugged-bf0a8138-74b3-47a4-8639-a5b7d069ec7f for instance with vm_state active and task_state None.
Oct 13 15:44:13 standalone.localdomain podman[532905]: 2025-10-13 15:44:13.11829536 +0000 UTC m=+0.076572497 container cleanup 57b442fa7217ca4e29ced866d56d07d15c904f9ff36cee475a5d411dec588535 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d783c1bb-e132-4da9-b61c-3ae915fe52b1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:44:13 standalone.localdomain systemd[1]: libpod-conmon-57b442fa7217ca4e29ced866d56d07d15c904f9ff36cee475a5d411dec588535.scope: Deactivated successfully.
Oct 13 15:44:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:44:13 standalone.localdomain podman[532789]: 2025-10-13 15:44:13.134371442 +0000 UTC m=+0.404008786 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, build-date=2025-07-21T16:11:22, container_name=swift_account_server, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, version=17.1.9)
Oct 13 15:44:13 standalone.localdomain podman[532908]: 2025-10-13 15:44:13.146088572 +0000 UTC m=+0.091544786 container remove 57b442fa7217ca4e29ced866d56d07d15c904f9ff36cee475a5d411dec588535 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d783c1bb-e132-4da9-b61c-3ae915fe52b1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:44:13 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:44:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:13.181 496978 INFO neutron.agent.dhcp.agent [None req-fb5ad333-0d15-4894-bd4b-804c654ed851 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:13.182 496978 INFO neutron.agent.dhcp.agent [None req-fb5ad333-0d15-4894-bd4b-804c654ed851 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:13 standalone.localdomain podman[532788]: 2025-10-13 15:44:13.186372055 +0000 UTC m=+0.453382889 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20250721.1, version=17.1.9, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:44:13 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:44:13 standalone.localdomain podman[532937]: 2025-10-13 15:44:13.199308692 +0000 UTC m=+0.058802442 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:44:13 standalone.localdomain podman[532937]: 2025-10-13 15:44:13.210766323 +0000 UTC m=+0.070260083 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:44:13 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:44:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3898: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 13 15:44:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bb2ed2e095601aa4daebba21697eeb4351b9ce08ac94506c726e721924730216-merged.mount: Deactivated successfully.
Oct 13 15:44:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57b442fa7217ca4e29ced866d56d07d15c904f9ff36cee475a5d411dec588535-userdata-shm.mount: Deactivated successfully.
Oct 13 15:44:13 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dd783c1bb\x2de132\x2d4da9\x2db61c\x2d3ae915fe52b1.mount: Deactivated successfully.
Oct 13 15:44:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:13.890 2 DEBUG nova.compute.manager [req-fe4ff966-72bf-4147-9502-df6eb192bf65 req-79e476cc-9801-4bd8-b200-3595be2b8414 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Received event network-changed-bf0a8138-74b3-47a4-8639-a5b7d069ec7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 13 15:44:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:13.891 2 DEBUG nova.compute.manager [req-fe4ff966-72bf-4147-9502-df6eb192bf65 req-79e476cc-9801-4bd8-b200-3595be2b8414 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Refreshing instance network info cache due to event network-changed-bf0a8138-74b3-47a4-8639-a5b7d069ec7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 13 15:44:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:13.891 2 DEBUG oslo_concurrency.lockutils [req-fe4ff966-72bf-4147-9502-df6eb192bf65 req-79e476cc-9801-4bd8-b200-3595be2b8414 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Acquiring lock "refresh_cache-1f4a5f45-5f65-4051-b24e-5815867a3574" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:44:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:13.892 2 DEBUG oslo_concurrency.lockutils [req-fe4ff966-72bf-4147-9502-df6eb192bf65 req-79e476cc-9801-4bd8-b200-3595be2b8414 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Acquired lock "refresh_cache-1f4a5f45-5f65-4051-b24e-5815867a3574" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:44:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:13.892 2 DEBUG nova.network.neutron [req-fe4ff966-72bf-4147-9502-df6eb192bf65 req-79e476cc-9801-4bd8-b200-3595be2b8414 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Refreshing network info cache for port bf0a8138-74b3-47a4-8639-a5b7d069ec7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 13 15:44:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:14.248 496978 INFO neutron.agent.linux.ip_lib [None req-b35af70a-57b8-4d3a-87fb-47607c376f5f - - - - - -] Device tap7bd955f5-64 cannot be used as it has no MAC address
Oct 13 15:44:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:14.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:14 standalone.localdomain kernel: device tap7bd955f5-64 entered promiscuous mode
Oct 13 15:44:14 standalone.localdomain NetworkManager[5962]: <info>  [1760370254.2859] manager: (tap7bd955f5-64): new Generic device (/org/freedesktop/NetworkManager/Devices/39)
Oct 13 15:44:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:14.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:14 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:14Z|00158|binding|INFO|Claiming lport 7bd955f5-6410-49e1-86d0-72c79ea57b61 for this chassis.
Oct 13 15:44:14 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:14Z|00159|binding|INFO|7bd955f5-6410-49e1-86d0-72c79ea57b61: Claiming unknown
Oct 13 15:44:14 standalone.localdomain systemd-udevd[532966]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:44:14 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:14.300 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-e1d53cab-7437-4d64-ab81-a5510ea78cab', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e1d53cab-7437-4d64-ab81-a5510ea78cab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '405ee72dfc854aa88303025334758c24', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0499b2ed-3fde-4959-a490-2a87c83c6791, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=7bd955f5-6410-49e1-86d0-72c79ea57b61) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:44:14 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:14.303 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 7bd955f5-6410-49e1-86d0-72c79ea57b61 in datapath e1d53cab-7437-4d64-ab81-a5510ea78cab bound to our chassis
Oct 13 15:44:14 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:14.307 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e1d53cab-7437-4d64-ab81-a5510ea78cab or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:44:14 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:14.311 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[bc73884e-f898-4549-be9b-452167144b16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:14 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap7bd955f5-64: No such device
Oct 13 15:44:14 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap7bd955f5-64: No such device
Oct 13 15:44:14 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap7bd955f5-64: No such device
Oct 13 15:44:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:14.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:14 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap7bd955f5-64: No such device
Oct 13 15:44:14 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:14Z|00160|binding|INFO|Setting lport 7bd955f5-6410-49e1-86d0-72c79ea57b61 ovn-installed in OVS
Oct 13 15:44:14 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:14Z|00161|binding|INFO|Setting lport 7bd955f5-6410-49e1-86d0-72c79ea57b61 up in Southbound
Oct 13 15:44:14 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap7bd955f5-64: No such device
Oct 13 15:44:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:14.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:14 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap7bd955f5-64: No such device
Oct 13 15:44:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:14.357 2 DEBUG nova.network.neutron [req-fe4ff966-72bf-4147-9502-df6eb192bf65 req-79e476cc-9801-4bd8-b200-3595be2b8414 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Updated VIF entry in instance network info cache for port bf0a8138-74b3-47a4-8639-a5b7d069ec7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 13 15:44:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:14.358 2 DEBUG nova.network.neutron [req-fe4ff966-72bf-4147-9502-df6eb192bf65 req-79e476cc-9801-4bd8-b200-3595be2b8414 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Updating instance_info_cache with network_info: [{"id": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "address": "fa:16:3e:e9:c8:33", "network": {"id": "d5036f63-29f3-448c-ad5c-55534cac285e", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-534158699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "0ad31fa1626a4d048525338a288fd601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0a8138-74", "ovs_interfaceid": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:44:14 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap7bd955f5-64: No such device
Oct 13 15:44:14 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap7bd955f5-64: No such device
Oct 13 15:44:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:14.388 2 DEBUG oslo_concurrency.lockutils [req-fe4ff966-72bf-4147-9502-df6eb192bf65 req-79e476cc-9801-4bd8-b200-3595be2b8414 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Releasing lock "refresh_cache-1f4a5f45-5f65-4051-b24e-5815867a3574" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:44:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:14.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:14.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:14 standalone.localdomain ceph-mon[29756]: pgmap v3898: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 13 15:44:15 standalone.localdomain podman[533037]: 2025-10-13 15:44:15.413671072 +0000 UTC m=+0.094261719 container create 3da846bbc928965b9effd13042508b2d76eece04bb2b96a6c1d300f71e153136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1d53cab-7437-4d64-ab81-a5510ea78cab, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:44:15 standalone.localdomain systemd[1]: Started libpod-conmon-3da846bbc928965b9effd13042508b2d76eece04bb2b96a6c1d300f71e153136.scope.
Oct 13 15:44:15 standalone.localdomain podman[533037]: 2025-10-13 15:44:15.373894064 +0000 UTC m=+0.054484731 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:44:15 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:44:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acbda2dee89a820875d7c5033b136efe637628f02cc6f565aed8a4a0299446ed/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:44:15 standalone.localdomain podman[533037]: 2025-10-13 15:44:15.496961473 +0000 UTC m=+0.177552140 container init 3da846bbc928965b9effd13042508b2d76eece04bb2b96a6c1d300f71e153136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1d53cab-7437-4d64-ab81-a5510ea78cab, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:44:15 standalone.localdomain podman[533037]: 2025-10-13 15:44:15.50664911 +0000 UTC m=+0.187239767 container start 3da846bbc928965b9effd13042508b2d76eece04bb2b96a6c1d300f71e153136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1d53cab-7437-4d64-ab81-a5510ea78cab, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 13 15:44:15 standalone.localdomain dnsmasq[533055]: started, version 2.85 cachesize 150
Oct 13 15:44:15 standalone.localdomain dnsmasq[533055]: DNS service limited to local subnets
Oct 13 15:44:15 standalone.localdomain dnsmasq[533055]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:44:15 standalone.localdomain dnsmasq[533055]: warning: no upstream servers configured
Oct 13 15:44:15 standalone.localdomain dnsmasq-dhcp[533055]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:44:15 standalone.localdomain dnsmasq[533055]: read /var/lib/neutron/dhcp/e1d53cab-7437-4d64-ab81-a5510ea78cab/addn_hosts - 0 addresses
Oct 13 15:44:15 standalone.localdomain dnsmasq-dhcp[533055]: read /var/lib/neutron/dhcp/e1d53cab-7437-4d64-ab81-a5510ea78cab/host
Oct 13 15:44:15 standalone.localdomain dnsmasq-dhcp[533055]: read /var/lib/neutron/dhcp/e1d53cab-7437-4d64-ab81-a5510ea78cab/opts
Oct 13 15:44:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3899: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 13 15:44:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:15.690 496978 INFO neutron.agent.dhcp.agent [None req-f3443c9f-1c13-4a3f-b518-18ac1a231eff - - - - - -] DHCP configuration for ports {'8857de49-9033-440a-adee-a18796141120'} is completed
Oct 13 15:44:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:16.148 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:44:15Z, description=, device_id=072f4e36-3678-48bc-b1a7-a51370d6f12f, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889162160>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889252e20>], id=df9404d2-40da-451b-ba0d-79a4e7c272ad, ip_allocation=immediate, mac_address=fa:16:3e:e1:70:98, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1061, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:44:16Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:44:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:16.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:16 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 4 addresses
Oct 13 15:44:16 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:44:16 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:44:16 standalone.localdomain podman[533073]: 2025-10-13 15:44:16.374336449 +0000 UTC m=+0.062234237 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:44:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:16.634 496978 INFO neutron.agent.dhcp.agent [None req-db066e60-c535-4fe0-91ac-ac208c82d8f5 - - - - - -] DHCP configuration for ports {'df9404d2-40da-451b-ba0d-79a4e7c272ad'} is completed
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: pgmap v3899: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #210. Immutable memtables: 0.
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.750857) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 210
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370256750886, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 1600, "num_deletes": 252, "total_data_size": 1373853, "memory_usage": 1405312, "flush_reason": "Manual Compaction"}
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #211: started
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370256758225, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 211, "file_size": 1335544, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90855, "largest_seqno": 92454, "table_properties": {"data_size": 1329094, "index_size": 3601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14122, "raw_average_key_size": 19, "raw_value_size": 1315729, "raw_average_value_size": 1858, "num_data_blocks": 162, "num_entries": 708, "num_filter_entries": 708, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760370127, "oldest_key_time": 1760370127, "file_creation_time": 1760370256, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 211, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 7419 microseconds, and 3515 cpu microseconds.
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.758258) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #211: 1335544 bytes OK
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.758290) [db/memtable_list.cc:519] [default] Level-0 commit table #211 started
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.760320) [db/memtable_list.cc:722] [default] Level-0 commit table #211: memtable #1 done
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.760338) EVENT_LOG_v1 {"time_micros": 1760370256760333, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.760358) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 1366769, prev total WAL file size 1366769, number of live WAL files 2.
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000207.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.761012) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039323837' seq:72057594037927935, type:22 .. '7061786F730039353339' seq:0, type:0; will stop at (end)
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [211(1304KB)], [209(5414KB)]
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370256761091, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [211], "files_L6": [209], "score": -1, "input_data_size": 6879518, "oldest_snapshot_seqno": -1}
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #212: 7304 keys, 5860786 bytes, temperature: kUnknown
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370256796407, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 212, "file_size": 5860786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5820421, "index_size": 21001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 193562, "raw_average_key_size": 26, "raw_value_size": 5695817, "raw_average_value_size": 779, "num_data_blocks": 819, "num_entries": 7304, "num_filter_entries": 7304, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760370256, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.796740) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 5860786 bytes
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.798462) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.1 rd, 165.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 5.3 +0.0 blob) out(5.6 +0.0 blob), read-write-amplify(9.5) write-amplify(4.4) OK, records in: 7827, records dropped: 523 output_compression: NoCompression
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.798522) EVENT_LOG_v1 {"time_micros": 1760370256798480, "job": 132, "event": "compaction_finished", "compaction_time_micros": 35449, "compaction_time_cpu_micros": 21940, "output_level": 6, "num_output_files": 1, "total_output_size": 5860786, "num_input_records": 7827, "num_output_records": 7304, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000211.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370256798884, "job": 132, "event": "table_file_deletion", "file_number": 211}
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370256799645, "job": 132, "event": "table_file_deletion", "file_number": 209}
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.760857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.799732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.799738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.799740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.799743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:44:16 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:44:16.799744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:44:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:16.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:44:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3900: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 13 15:44:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:17.684 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:44:17Z, description=, device_id=072f4e36-3678-48bc-b1a7-a51370d6f12f, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890aeb80>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890aeb20>], id=afc4ec50-b83b-4e40-a090-50be4ab9300c, ip_allocation=immediate, mac_address=fa:16:3e:da:ce:0a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:44:11Z, description=, dns_domain=, id=e1d53cab-7437-4d64-ab81-a5510ea78cab, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerDiskConfigTestJSON-11723410-network, port_security_enabled=True, project_id=405ee72dfc854aa88303025334758c24, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=8205, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1039, status=ACTIVE, subnets=['fbbad9a1-c6d1-49a0-be14-8d10da95883f'], tags=[], tenant_id=405ee72dfc854aa88303025334758c24, updated_at=2025-10-13T15:44:13Z, vlan_transparent=None, network_id=e1d53cab-7437-4d64-ab81-a5510ea78cab, port_security_enabled=False, project_id=405ee72dfc854aa88303025334758c24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1068, status=DOWN, tags=[], tenant_id=405ee72dfc854aa88303025334758c24, updated_at=2025-10-13T15:44:17Z on network e1d53cab-7437-4d64-ab81-a5510ea78cab
Oct 13 15:44:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:44:17 standalone.localdomain podman[533095]: 2025-10-13 15:44:17.823004117 +0000 UTC m=+0.087227022 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 13 15:44:17 standalone.localdomain podman[533095]: 2025-10-13 15:44:17.832116751 +0000 UTC m=+0.096339666 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:44:17 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:44:17 standalone.localdomain dnsmasq[533055]: read /var/lib/neutron/dhcp/e1d53cab-7437-4d64-ab81-a5510ea78cab/addn_hosts - 1 addresses
Oct 13 15:44:17 standalone.localdomain dnsmasq-dhcp[533055]: read /var/lib/neutron/dhcp/e1d53cab-7437-4d64-ab81-a5510ea78cab/host
Oct 13 15:44:17 standalone.localdomain dnsmasq-dhcp[533055]: read /var/lib/neutron/dhcp/e1d53cab-7437-4d64-ab81-a5510ea78cab/opts
Oct 13 15:44:17 standalone.localdomain podman[533132]: 2025-10-13 15:44:17.997078837 +0000 UTC m=+0.057941318 container kill 3da846bbc928965b9effd13042508b2d76eece04bb2b96a6c1d300f71e153136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1d53cab-7437-4d64-ab81-a5510ea78cab, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:44:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:18.179 496978 INFO neutron.agent.dhcp.agent [None req-cfd41d1b-4e5e-4bd8-8da6-9227afccb09f - - - - - -] DHCP configuration for ports {'afc4ec50-b83b-4e40-a090-50be4ab9300c'} is completed
Oct 13 15:44:18 standalone.localdomain ceph-mon[29756]: pgmap v3900: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 13 15:44:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:44:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/555716882' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:44:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:44:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/555716882' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:44:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:18.964 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:44:17Z, description=, device_id=072f4e36-3678-48bc-b1a7-a51370d6f12f, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188927bbb0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889111460>], id=afc4ec50-b83b-4e40-a090-50be4ab9300c, ip_allocation=immediate, mac_address=fa:16:3e:da:ce:0a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:44:11Z, description=, dns_domain=, id=e1d53cab-7437-4d64-ab81-a5510ea78cab, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerDiskConfigTestJSON-11723410-network, port_security_enabled=True, project_id=405ee72dfc854aa88303025334758c24, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=8205, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1039, status=ACTIVE, subnets=['fbbad9a1-c6d1-49a0-be14-8d10da95883f'], tags=[], tenant_id=405ee72dfc854aa88303025334758c24, updated_at=2025-10-13T15:44:13Z, vlan_transparent=None, network_id=e1d53cab-7437-4d64-ab81-a5510ea78cab, port_security_enabled=False, project_id=405ee72dfc854aa88303025334758c24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1068, status=DOWN, tags=[], tenant_id=405ee72dfc854aa88303025334758c24, updated_at=2025-10-13T15:44:17Z on network e1d53cab-7437-4d64-ab81-a5510ea78cab
Oct 13 15:44:19 standalone.localdomain dnsmasq[533055]: read /var/lib/neutron/dhcp/e1d53cab-7437-4d64-ab81-a5510ea78cab/addn_hosts - 1 addresses
Oct 13 15:44:19 standalone.localdomain dnsmasq-dhcp[533055]: read /var/lib/neutron/dhcp/e1d53cab-7437-4d64-ab81-a5510ea78cab/host
Oct 13 15:44:19 standalone.localdomain podman[533169]: 2025-10-13 15:44:19.181219384 +0000 UTC m=+0.054178650 container kill 3da846bbc928965b9effd13042508b2d76eece04bb2b96a6c1d300f71e153136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1d53cab-7437-4d64-ab81-a5510ea78cab, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:44:19 standalone.localdomain dnsmasq-dhcp[533055]: read /var/lib/neutron/dhcp/e1d53cab-7437-4d64-ab81-a5510ea78cab/opts
Oct 13 15:44:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:19.431 496978 INFO neutron.agent.dhcp.agent [None req-03f8e632-6a72-4476-9609-98b02b82cae7 - - - - - -] DHCP configuration for ports {'afc4ec50-b83b-4e40-a090-50be4ab9300c'} is completed
Oct 13 15:44:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:19.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/555716882' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:44:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/555716882' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:44:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3901: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 13 15:44:20 standalone.localdomain podman[533208]: 2025-10-13 15:44:20.21174728 +0000 UTC m=+0.052457357 container kill 5e14815606df847349fe6b31ecc410f828abb1a78881536cd4e8d991a376def6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82962eb0-ad81-4fa3-9cd5-0dc555e5fed0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:44:20 standalone.localdomain dnsmasq[531879]: read /var/lib/neutron/dhcp/82962eb0-ad81-4fa3-9cd5-0dc555e5fed0/addn_hosts - 0 addresses
Oct 13 15:44:20 standalone.localdomain dnsmasq-dhcp[531879]: read /var/lib/neutron/dhcp/82962eb0-ad81-4fa3-9cd5-0dc555e5fed0/host
Oct 13 15:44:20 standalone.localdomain dnsmasq-dhcp[531879]: read /var/lib/neutron/dhcp/82962eb0-ad81-4fa3-9cd5-0dc555e5fed0/opts
Oct 13 15:44:20 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:20Z|00162|binding|INFO|Releasing lport 674449c3-c755-4fa5-b86a-7f3571608e86 from this chassis (sb_readonly=0)
Oct 13 15:44:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:20.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:20 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:20Z|00163|binding|INFO|Setting lport 674449c3-c755-4fa5-b86a-7f3571608e86 down in Southbound
Oct 13 15:44:20 standalone.localdomain kernel: device tap674449c3-c7 left promiscuous mode
Oct 13 15:44:20 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:20.398 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-82962eb0-ad81-4fa3-9cd5-0dc555e5fed0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82962eb0-ad81-4fa3-9cd5-0dc555e5fed0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1406682a926240e29f505dd4ed438b0b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3df86e27-752c-4829-88cb-a33e756d0e40, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=674449c3-c755-4fa5-b86a-7f3571608e86) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:44:20 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:20.399 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 674449c3-c755-4fa5-b86a-7f3571608e86 in datapath 82962eb0-ad81-4fa3-9cd5-0dc555e5fed0 unbound from our chassis
Oct 13 15:44:20 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:20.401 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82962eb0-ad81-4fa3-9cd5-0dc555e5fed0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:44:20 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:20.401 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[4e853fd0-a016-4a7b-9da3-bb153bce1083]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:20.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:20 standalone.localdomain ceph-mon[29756]: pgmap v3901: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 13 15:44:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:20.994 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:44:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:20.998 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5b6ce2c2280625cff9d72184537087fc2419d2206b034d898812e337b1cf178d" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519
Oct 13 15:44:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:21.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.374 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 954 Content-Type: application/json Date: Mon, 13 Oct 2025 15:44:21 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-5a553711-3884-48bb-94c7-7698b494084c x-openstack-request-id: req-5a553711-3884-48bb-94c7-7698b494084c _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.375 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "1f304cc6-f658-4c81-8601-a769ec289d72", "name": "m1.nano", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/1f304cc6-f658-4c81-8601-a769ec289d72"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/1f304cc6-f658-4c81-8601-a769ec289d72"}]}, {"id": "993bc811-5d85-498c-8b2f-935e295a6567", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/993bc811-5d85-498c-8b2f-935e295a6567"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/993bc811-5d85-498c-8b2f-935e295a6567"}]}, {"id": "a4dc5724-1446-4a58-9cd5-9b31504ed897", "name": "m1.micro", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/a4dc5724-1446-4a58-9cd5-9b31504ed897"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/a4dc5724-1446-4a58-9cd5-9b31504ed897"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.376 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-5a553711-3884-48bb-94c7-7698b494084c request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.378 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/1f304cc6-f658-4c81-8601-a769ec289d72 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5b6ce2c2280625cff9d72184537087fc2419d2206b034d898812e337b1cf178d" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.396 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 493 Content-Type: application/json Date: Mon, 13 Oct 2025 15:44:21 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-b3bbd8fb-c8a9-4391-a53d-b408bbdd6823 x-openstack-request-id: req-b3bbd8fb-c8a9-4391-a53d-b408bbdd6823 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.396 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "1f304cc6-f658-4c81-8601-a769ec289d72", "name": "m1.nano", "ram": 128, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 0, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/1f304cc6-f658-4c81-8601-a769ec289d72"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/1f304cc6-f658-4c81-8601-a769ec289d72"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.396 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/1f304cc6-f658-4c81-8601-a769ec289d72 used request id req-b3bbd8fb-c8a9-4391-a53d-b408bbdd6823 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.397 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'name': 'guest-instance-1', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '0ad31fa1626a4d048525338a288fd601', 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'hostId': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.400 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.401 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.401 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.401 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: guest-instance-1>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: guest-instance-1>]
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.402 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.405 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.407 12 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 1f4a5f45-5f65-4051-b24e-5815867a3574 / tapbf0a8138-74 inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.407 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.410 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'eb7ae7e8-55d7-4135-aa10-5cdb2795598a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:44:21.402387', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '7c8c1e48-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.591651449, 'message_signature': '093432df46dc7c1f6c4aec3a49242de02b3d13c67bc1e662e71cb24f2f0f5917'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 
'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': 'instance-00000004-1f4a5f45-5f65-4051-b24e-5815867a3574-tapbf0a8138-74', 'timestamp': '2025-10-13T15:44:21.402387', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapbf0a8138-74', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:e9:c8:33', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapbf0a8138-74'}, 'message_id': '7c8c7802-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.595044325, 'message_signature': '76fd22570fa063d3b5f0a490105de06bbb2192fe1fdfe9da18a94cc035ed687a'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:44:21.402387', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 
'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '7c8ce8aa-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.597295065, 'message_signature': '32353112353a91aafe687f34cc8085bfc110d99e0e4de1bd64f6c6690f1a3bc6'}]}, 'timestamp': '2025-10-13 15:44:21.411010', '_unique_id': '62e8609ca9e04888b99ce2f73fd120a7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.412 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.413 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.437 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29758464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.437 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.437 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 40960 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.451 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.451 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.466 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.466 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2fa337ad-bb1d-4f15-af27-ece064bd70b7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29758464, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:44:21.413419', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7c90f8c8-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': '1aae2853496a9e22aa7ccada537eac7ebd80b76e41958aafd8d693f7073a0bac'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:44:21.413419', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7c910462-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': '3898c3b15ce0d55d9bb4342123b232dc8a609a37d6313b9966b626eb2ecf811b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 40960, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:44:21.413419', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '7c910ec6-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': '4c96f777f8c13ddcb76cd92977041a348b358a228292e2fc97adf0fd7e8bfcbc'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 23775232, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-vda', 'timestamp': '2025-10-13T15:44:21.413419', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '7c93299a-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.627338192, 'message_signature': 'cea27057789544f7c871f91f138a8537a7be97d83c8c3dc030514a95ff3fc601'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2048, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-sda', 'timestamp': 
'2025-10-13T15:44:21.413419', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7c933516-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.627338192, 'message_signature': '902976fbbc258bd414b0e6dda89de7339be201c3bdebacb5b5068a961b58b74b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:44:21.413419', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'disk_name': 'vda'}, 'message_id': '7c956b42-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.641458703, 'message_signature': 'b8e1ee1bf2ab0918dc9d4bd739501c75dfb77c6ff0e1a09445721f8355079324'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:44:21.413419', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7c9576aa-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.641458703, 'message_signature': '670ca1426443918746ac64caad18d71e3d9174f2dc55bd419306703f315c1701'}]}, 'timestamp': '2025-10-13 15:44:21.467004', '_unique_id': '78b6d09bbbfc4d77a32b93322f3e3565'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.469 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.469 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.469 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.469 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '76c293b9-9763-4541-9423-dc47e5a30f77', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:44:21.469114', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '7c95d21c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.591651449, 'message_signature': 'c38433a02b3ab5bde5db7d9b39202a5cf7077ec1bd0e45e3362ee4070aa348b9'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 
'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': 'instance-00000004-1f4a5f45-5f65-4051-b24e-5815867a3574-tapbf0a8138-74', 'timestamp': '2025-10-13T15:44:21.469114', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapbf0a8138-74', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:e9:c8:33', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapbf0a8138-74'}, 'message_id': '7c95d9f6-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.595044325, 'message_signature': '2d5e69317bd48fddadd468250c665e9f4590b8053168e3df58a43326316bd8a5'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:44:21.469114', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 
'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '7c95e270-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.597295065, 'message_signature': 'ddd073eee5796773246220dc05b05640fea84dd02c48da355dc07c94eefcf9a4'}]}, 'timestamp': '2025-10-13 15:44:21.469754', '_unique_id': '5d50a646236747e5866bb06fd05a0165'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.470 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.471 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.471 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '894e2c95-28d7-41e2-9557-e94eeadd4fc8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:44:21.470910', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '7c9617c2-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.591651449, 'message_signature': 'a1598f00af4f31aa5ad590b5cb519db05c834413c62d9a9609efeba66fe3de95'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': 'instance-00000004-1f4a5f45-5f65-4051-b24e-5815867a3574-tapbf0a8138-74', 'timestamp': '2025-10-13T15:44:21.470910', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapbf0a8138-74', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:e9:c8:33', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapbf0a8138-74'}, 'message_id': '7c96206e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.595044325, 'message_signature': '863079d835d3807e9772b74be6e923a53df4893bdb970c8fee56ec143bbb3fb9'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:44:21.470910', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': 
{'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '7c962816-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.597295065, 'message_signature': 'dedf83951d70a4d553d3403b4b5d4fa997abcd7e77b32cc0c48387f8e7b56a34'}]}, 'timestamp': '2025-10-13 15:44:21.471590', '_unique_id': 'c791a6f157484ac0be6a5021b6029cbc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.472 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.473 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.484 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.485 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.485 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.493 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.493 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.505 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.505 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '21df3133-708a-4733-b557-8ae44d591635', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:44:21.473388', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7c983a20-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.662630853, 'message_signature': '588c3cffeb3d73d706bba5b8f57ddb26bd0fa10d03a99fbd43103a2b2fe768e8'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:44:21.473388', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7c984542-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.662630853, 'message_signature': '9a09e5120aa0486777e44e0a85634d26e141c58345e975dd11c786873ad1349e'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:44:21.473388', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '7c984db2-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.662630853, 'message_signature': '17ff3311323ce3474b73f1d230cbc25cc2223d22a88607fdc210066df3ad2e9d'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-vda', 'timestamp': '2025-10-13T15:44:21.473388', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '7c999136-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.674813593, 'message_signature': '8617afc02bf188a793e8eee08b3ce2ca92f510e9894df9e46654521f8ac8ce70'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 509952, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-sda', 'timestamp': 
'2025-10-13T15:44:21.473388', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7c999d3e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.674813593, 'message_signature': '3307ccdd065367d1304cb3b75ca0abbefe387b74106191158925bb8c66671959'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:44:21.473388', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'disk_name': 'vda'}, 'message_id': '7c9b6290-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.683399891, 'message_signature': '1e4690cf83d7ad8ea9478fb4023ba948b54de1364da8a13f0e3e9efad4e3215e'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'us
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: er_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:44:21.473388', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7c9b75e6-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.683399891, 'message_signature': '1eb529d24bf46f680e6dc0a9b7afd4fcca88252db5d03d290c707ce449abdf50'}]}, 'timestamp': '2025-10-13 15:44:21.506351', '_unique_id': 'bf9702ef170247ac9186c9b084a9986e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.508 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.520 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 13200000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.539 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/cpu volume: 10060000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.558 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 35100000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd6e65dd5-51cc-4327-9f6b-082528ec19ee', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 13200000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:44:21.508624', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '7c9da988-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.709450934, 'message_signature': '353cf1c55912de85088d8d2fc37bf4c219e6f290e5b38db04c1483bcf2b5fb10'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 10060000000, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'timestamp': 
'2025-10-13T15:44:21.508624', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'cpu_number': 1}, 'message_id': '7ca09080-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.728116916, 'message_signature': 'a4e668a6ab0685638e055a0865a64532b55bb38239b72743b4c0db5d544b897c'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 35100000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': '2025-10-13T15:44:21.508624', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 
'message_id': '7ca37372-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.747099608, 'message_signature': '33484770035552bc4ba3e4022eca050b0c4c486f804200b6138b1c452a191eac'}]}, 'timestamp': '2025-10-13 15:44:21.558987', '_unique_id': '1008ff7904874419952f74c9ed8a1343'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.560 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.562 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.562 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1018 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.563 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.563 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 10 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.564 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.564 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.564 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.565 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5d75a119-cb20-46f1-83a2-30aa6d07fa08', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1018, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:44:21.562865', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7ca42b0a-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': 'f8ced2dc324f76ddeffcc8623426db56c76f8c62fca9f6d9316b88ce1c50a00d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:44:21.562865', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7ca43ec4-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': '165f07014475e33b9fab17e2c5ebc904482fd9791f664d6041e5752ad61936b8'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 10, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:44:21.562865', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '7ca44b76-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': 'a0659c88161ad546d1ffb2dec3652ea6ab826248d8bc60a286c46c7da4b5770e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 760, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-vda', 'timestamp': '2025-10-13T15:44:21.562865', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '7ca4579c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.627338192, 'message_signature': 'fbcf72e8604d343f494773dc8acaed5298a03367e418e1f068e921347f8deb07'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-sda', 'timestamp': 
'2025-10-13T15:44:21.562865', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7ca4621e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.627338192, 'message_signature': '10feff2778861dc6b51ae27b86ff975bddd7a23c3bfd430d14b52e33f7467ab8'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:44:21.562865', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 
0, 'disk_name': 'vda'}, 'message_id': '7ca47358-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.641458703, 'message_signature': '73dcb300bce40268de258ffa488afe7bba649a089c93463bab73cfbc275c9b34'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:44:21.562865', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7ca48636-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.641458703, 'message_signature': '9587ac8133b267d14a0840f69a166a4c97067300d208e9ddfb2c3bc83d3b52bc'}]}, 'timestamp': '2025-10-13 15:44:21.565730', '_unique_id': '9672800541cc4e4abb62cb1c1f9e01cb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.567 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.567 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 689775433 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.568 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 105173955 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.568 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 8149490 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.568 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.read.latency volume: 713915213 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.569 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.read.latency volume: 1505196 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.569 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.569 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f4e29490-455d-4fc3-b20b-f7b65b0475c6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 689775433, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:44:21.567884', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7ca4e78e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': 'bc547984984757e1bedfa41f926f119eb584a8e77e37d9f5faab6c801eec3201'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 105173955, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:44:21.567884', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7ca4f29c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': 'd2f5abbfd9898f126fe3cc9dd1ef86170dac8f1593bfdc39d29b7037a4196894'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 8149490, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:44:21.567884', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '7ca4fd1e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': '432d5bd662933a5da3402ee79e64d7c0896283ac11525bef13ec569615c8b544'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 713915213, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-vda', 'timestamp': '2025-10-13T15:44:21.567884', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '7ca5069c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.627338192, 'message_signature': 'f49d6257e2811dde6f4a3037320424c19172b74833792978fbc2ef2bc412fb9d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1505196, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-sda', 'timestamp': '2025-10-13T15:44:21.567884', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7ca51222-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.627338192, 'message_signature': 'a7909e43e3a0700ff50b6226add0405a754ec98450a5c56d2228b72f8aef371f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:44:21.567884', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7ca51c04-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.641458703, 'message_signature': 'e6a907b2819997af916b5c25a0155ca760d173f57729540d55836b13ce0d3ed5'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumu
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: lative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:44:21.567884', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7ca52622-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.641458703, 'message_signature': '801352c6e4988bf1c19e9880806778d7a0fbf17219a1bd3f18c8b56e80fab5e0'}]}, 'timestamp': '2025-10-13 15:44:21.569814', '_unique_id': '6f33dd15a0c04ce4a6d7b396d85874c8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.571 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.571 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 8960 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.571 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.572 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 4008 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3edd042c-d8f8-45d6-b1fe-b7ac58e24951', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8960, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:44:21.571574', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '7ca5765e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.591651449, 'message_signature': 'de90024706a34d5d13816b2c8dceb25dd303dd2e3140d2ddf711f2bfc53cebdd'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 90, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': 'instance-00000004-1f4a5f45-5f65-4051-b24e-5815867a3574-tapbf0a8138-74', 'timestamp': '2025-10-13T15:44:21.571574', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapbf0a8138-74', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:e9:c8:33', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapbf0a8138-74'}, 'message_id': '7ca580ae-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.595044325, 'message_signature': '0fd600937fe44656d40644581611b07959bb25ca3d55b1b09e68213f124c524a'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4008, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:44:21.571574', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '7ca58d2e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.597295065, 'message_signature': '220c8d4e72dcf4dc652b321aa615c1c2bbf2c27bce602f10b8889a3ce44ff6ea'}]}, 'timestamp': '2025-10-13 15:44:21.572466', '_unique_id': 'b10cb3a956f244f1a4ccb5cd13a351f6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.573 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.574 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.574 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '29ea4d75-c428-48e0-bf2f-ae3e58378fd8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:44:21.573970', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '7ca5d34c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.591651449, 'message_signature': '23924c9fbac430fc4fa6a443f0482f43e618798b5f3001cd4c4854eef5a7f37f'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': 'instance-00000004-1f4a5f45-5f65-4051-b24e-5815867a3574-tapbf0a8138-74', 'timestamp': '2025-10-13T15:44:21.573970', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapbf0a8138-74', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:e9:c8:33', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapbf0a8138-74'}, 'message_id': '7ca5e33c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.595044325, 'message_signature': '9751b8fb35e066cf5f1c072fb2640ede229942823275211f087910978d25ce60'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:44:21.573970', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': 
{'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '7ca5edaa-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.597295065, 'message_signature': '78f631df9e02abe5357f3b0e092b20f4437711e117f7792db5b30b7e3995ac90'}]}, 'timestamp': '2025-10-13 15:44:21.574932', '_unique_id': '5d25c5c21ff04674b089c9b19bd519c3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.575 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.576 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.576 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 11543 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.576 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.577 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b5a846c3-0e63-4921-85c8-ed9740ba2c16', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11543, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:44:21.576628', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '7ca63ac6-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.591651449, 'message_signature': '71684d7e1bb9a7c57fdd90291b315b90faac0bc33efc894dbaca12918cddc6ef'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 
'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': 'instance-00000004-1f4a5f45-5f65-4051-b24e-5815867a3574-tapbf0a8138-74', 'timestamp': '2025-10-13T15:44:21.576628', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapbf0a8138-74', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:e9:c8:33', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapbf0a8138-74'}, 'message_id': '7ca64502-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.595044325, 'message_signature': '094d0cd1264e12efb798744852194acbfc229eb27c4fe8ffe148f6f73dad85f7'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:44:21.576628', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 
'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '7ca64f0c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.597295065, 'message_signature': 'bd23424bb25c656f2c2e3487ff00473d8f9fe6ebdb1e1c5f287fcfb5fd519e1f'}]}, 'timestamp': '2025-10-13 15:44:21.577480', '_unique_id': '5d229915fce2428f9dc6c41374683757'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.578 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.579 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.579 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.579 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: guest-instance-1>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: guest-instance-1>]
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.579 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.579 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.580 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.580 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.580 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.580 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.581 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.581 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a13154c7-5417-4159-8352-4aafbf9f007d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:44:21.579742', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7ca6b5c8-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': 'f9671afde13119535e76b40bb9cc4eefd94776b5cf36e4bf1c798b6570a5ddb2'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:44:21.579742', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7ca6c1e4-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': '9a4ea500cd0dc2fc3b5d508ebd713d8f2f4f6341ae6643ccba1679d1dc093192'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:44:21.579742', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '7ca6cee6-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': 'cd058a0138c964b7a004c2e8c282c76fa2b5b3bf21832bf059f7509180d69419'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-vda', 'timestamp': '2025-10-13T15:44:21.579742', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '7ca6d97c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.627338192, 'message_signature': '2ddc76d77e950d23200d8fec81cb0fdbf430cc3ce5839d17059e81a9d6d92998'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-sda', 'timestamp': 
'2025-10-13T15:44:21.579742', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7ca6e322-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.627338192, 'message_signature': 'd24b00579e6fd3a3c5571033b58c69dc25a8700421052ce06ea453c5abe22726'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:44:21.579742', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'disk_name': 'vda'}, 'message_id': '7ca6f9e8-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.641458703, 'message_signature': 'c9a18803f5e208ac655a1e79903b8d5bdc0fd55a843245a1d4a98a1e317b7e09'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:44:21.579742', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7ca703fc-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.641458703, 'message_signature': '6192c025b4bfc44aa4fca5b94d0faf85905bb5379d04c736a24cf8910fde685a'}]}, 'timestamp': '2025-10-13 15:44:21.582050', '_unique_id': '9b143b14215f4c7887518b312f7f7c9c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.583 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.583 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.584 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.584 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.584 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.585 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.585 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.585 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '50dd2bb7-b631-4c92-b5ee-65c085854e0a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:44:21.583910', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7ca757c6-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.662630853, 'message_signature': 'a38e2a6283ccd729bd414596f5132dfced97c65794b95c036c7834444386c1d6'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:44:21.583910', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7ca761d0-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.662630853, 'message_signature': '8d2a5347cae844cf43ec9ed6fbf38065b9914753b14eba5ccd31f261580bbd83'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:44:21.583910', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '7ca770a8-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.662630853, 'message_signature': '9fe215c0c553bc649ad87a8fbee92d8af84e6a83cf97f3c8ff046a56d715670a'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-vda', 'timestamp': '2025-10-13T15:44:21.583910', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '7ca77d5a-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.674813593, 'message_signature': '4fbb946f5f0082ec72564c0feb6d408e3f690d2acd5d1c9baab224183f199624'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 509952, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-sda', 'timestamp': 
'2025-10-13T15:44:21.583910', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7ca78714-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.674813593, 'message_signature': '1a175116a379c9a2e61aef489a32420c7442d8ae6efb71a9afa0c67f6d5a8850'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:44:21.583910', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'disk_name': 'vda'}, 'message_id': '7ca7960a-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.683399891, 'message_signature': '4b618bf78838a056045ab9328b492a8a5bc29ac3c27103b12751ac9d7ee5d3eb'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:44:21.583910', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7ca7a0b4-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.683399891, 'message_signature': 'ff5ce8c7992c3c833a902b346dc3073a089c000a4c702c6d0d58bfb1bddeba0e'}]}, 'timestamp': '2025-10-13 15:44:21.586063', '_unique_id': 'ad0247c5baf64ec5a3f23a23758992b5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.587 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.587 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 53.14453125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.588 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.588 12 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 1f4a5f45-5f65-4051-b24e-5815867a3574: ceilometer.compute.pollsters.NoVolumeException
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.588 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bfbd537d-d22b-4d57-a4c2-4c15ea97f62d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 53.14453125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:44:21.587886', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '7ca7f294-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.709450934, 'message_signature': '203b683ed1c914bd29d800c2c476770823a04139d3b2cdfba4af0dfbf30d39d6'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:44:21.587886', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '7ca80554-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.747099608, 'message_signature': '0eec4dfc9723a3ddbea5ff31c0f0c4bffa1b04619ce8043323be94a4846b5f0f'}]}, 'timestamp': '2025-10-13 15:44:21.588641', '_unique_id': '3ba8fe22314a46139d0414bc80e5bb41'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.589 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.599 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.600 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.600 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.600 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.601 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.601 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.601 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.602 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain rsyslogd[56156]: message too long (8192) with configured size 8096, begin of message is: 2025-10-13 15:44:21.468 12 ERROR oslo_messaging.notify.messaging [-] Could not s [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fe0e82ba-c2dd-4fb6-8d5d-99780a1670b5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:44:21.600137', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7ca9d532-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': 'dc0ac89c03b7af9aebf1561574264339028f3f2651e3cd6d2c15c767ccc53141'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:44:21.600137', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7ca9e658-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': '174982fad9d3d795fbd490cbfe1c3ba7d88432e2deb50599d7acb5781ed3d938'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:44:21.600137', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '7ca9f10c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': '6b07b27edada19e043b35c94ead16eb58d5224cc2f5e6f2c5d0dece698e1a9df'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-vda', 'timestamp': '2025-10-13T15:44:21.600137', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '7ca9fb70-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.627338192, 'message_signature': '177f61561f8d3f7aeca6df892b14f0e03cca44ebff2785994005d27ad9447474'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-sda', 'timestamp': 
'2025-10-13T15:44:21.600137', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7caa071e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.627338192, 'message_signature': 'c615b403c47cc7f96eae9faa59e38a751e308aec37bdd5778eb538ecada003e8'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:44:21.600137', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 
0, 'disk_name': 'vda'}, 'message_id': '7caa1150-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.641458703, 'message_signature': '5baec7937f2a7c7b818efa44f706d92a605adc402662dd7ac4c1b04df4941b12'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cu
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: mulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:44:21.600137', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7caa1b46-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.641458703, 'message_signature': 'b2a75d7c12d447fe96eadf2f89c03f9ac696f95cf855dca6e37c1c7bf7c374e4'}]}, 'timestamp': '2025-10-13 15:44:21.602323', '_unique_id': '48dedbefbeda43a098eee166cec04809'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.604 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.605 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.605 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.605 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '39c89e4b-5d7e-4653-8bf3-eb788ebdb7e5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:44:21.605100', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '7caa959e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.591651449, 'message_signature': '08d49e3bfbc46cf1919ebc4fbd44f07087d520ec09d780c5bd5359c4ccfd5097'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': 'instance-00000004-1f4a5f45-5f65-4051-b24e-5815867a3574-tapbf0a8138-74', 'timestamp': '2025-10-13T15:44:21.605100', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapbf0a8138-74', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:e9:c8:33', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapbf0a8138-74'}, 'message_id': '7caaa5de-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.595044325, 'message_signature': '4f5bf2ba83c21b85c0372b202d87a9ad69d1c2dcee408d918bbdf15c29c9887c'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:44:21.605100', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': 
{'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '7caab330-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.597295065, 'message_signature': 'bed067be632663b6cc41749607342176ee2db707e1bf4ac81a49924c4e23c175'}]}, 'timestamp': '2025-10-13 15:44:21.606251', '_unique_id': '211b0d739e5d493d9af6d766acf3cde0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.608 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.611 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.612 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 131 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.612 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.613 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd092abaf-4a50-455a-97b6-c67957ad5a3c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 131, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:44:21.612284', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '7cabb316-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.591651449, 'message_signature': '8b3388860bc50c32d010efe9f4d38025f20bba26192f1a201407fc4fb40387fe'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': 'instance-00000004-1f4a5f45-5f65-4051-b24e-5815867a3574-tapbf0a8138-74', 'timestamp': '2025-10-13T15:44:21.612284', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapbf0a8138-74', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:e9:c8:33', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapbf0a8138-74'}, 'message_id': '7cabc496-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.595044325, 'message_signature': 'ad57b8ce1b3edc82e9f90cbaa5aa6204bf9d2787c51cea8b79d6fa151676ffef'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:44:21.612284', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': 
'993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '7cabd850-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.597295065, 'message_signature': '86307a1ebf0fb98fb27c3ab4c1f3191b898a324f36ac389bb70acffd5519dd19'}]}, 'timestamp': '2025-10-13 15:44:21.613733', '_unique_id': 'a08c242159c4488fa3c0d0d260b78bdd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain rsyslogd[56156]: message too long (8192) with configured size 8096, begin of message is: 2025-10-13 15:44:21.507 12 ERROR oslo_messaging.notify.messaging [-] Could not s [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.615 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.617 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.617 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.618 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [<NovaLikeServer: guest-instance-1>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: guest-instance-1>]
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.618 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.618 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.618 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 15370724 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.618 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.619 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.620 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.621 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.622 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5a766efd-87f0-4a47-8356-31484336f05a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:44:21.618387', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7cac9b78-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': '215c1480f5d996cfa36cc30c43cc8e84aeac26a20bbf0167f75f82f051fc5407'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 15370724, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:44:21.618387', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7caca67c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': '333f10fb309cbf69145605127876f3d77800d26d64b99b91b8212600cb35705f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:44:21.618387', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '7cacb0a4-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.602668383, 'message_signature': '296b0257691bc73d3c35efa28c503374212298e20df5ea70b1f7a1cd97b33a63'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-vda', 'timestamp': '2025-10-13T15:44:21.618387', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '7cacbc0c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.627338192, 'message_signature': '2572e4d39736714730dbcad66a62a3f9f3a80f11830c7fd26cce89bd861495e6'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-sda', 'timestamp': 
'2025-10-13T15:44:21.618387', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7cace9a2-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.627338192, 'message_signature': 'dad248e79a36d47ab175fa5ad486f7f05c2498f91ce62e619a4f90457d2073a4'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:44:21.618387', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 
0, 'disk_name': 'vda'}, 'message_id': '7cad2ef8-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.641458703, 'message_signature': '30f3247feeeb761aa26a896f5b24b44a4c9f048b38f60977fb23dd3d08d38d32'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: ': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:44:21.618387', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7cad4870-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.641458703, 'message_signature': '19550266f5da3f5415fa09425b5623a8cce2b2383ea20c4f8249d0cc2dd181bc'}]}, 'timestamp': '2025-10-13 15:44:21.623144', '_unique_id': '31b3e03a707b47a799018b379dbff6bb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.625 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.625 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.625 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.625 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain rsyslogd[56156]: message too long (8192) with configured size 8096, begin of message is: 2025-10-13 15:44:21.566 12 ERROR oslo_messaging.notify.messaging [-] Could not s [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '26b14070-9f17-4f77-9341-46e7c788a6fb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:44:21.625284', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '7cada87e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.591651449, 'message_signature': '4749d46a18da00e7681401a0bdd1e8db9635649b6eca2d6fb3b63fee13c2f5c0'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': 'instance-00000004-1f4a5f45-5f65-4051-b24e-5815867a3574-tapbf0a8138-74', 'timestamp': '2025-10-13T15:44:21.625284', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapbf0a8138-74', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:e9:c8:33', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapbf0a8138-74'}, 'message_id': '7cadb774-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.595044325, 'message_signature': 'a8a8bfdef89a69f2651ecb87de5102c8b038455c3fb1581ae1a02791f341ce2c'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:44:21.625284', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': 
{'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '7cadc278-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.597295065, 'message_signature': 'd6dbaa0af5238d2068b11f7c1d13188bab222f70a6979f9d821e857738d1f097'}]}, 'timestamp': '2025-10-13 15:44:21.626266', '_unique_id': '7724ddf218a44138a36028b72f18eaac'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.627 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.629 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.629 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.630 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.630 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.630 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.631 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.632 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.633 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain rsyslogd[56156]: message too long (8192) with configured size 8096, begin of message is: 2025-10-13 15:44:21.570 12 ERROR oslo_messaging.notify.messaging [-] Could not s [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '08228026-c9b8-4d33-a6c6-ef42e70b7608', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:44:21.629839', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7cae5a6c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.662630853, 'message_signature': '2c3fa728a4215ef6a78fdc0577f942435332563fc4a55a204bb44451e285b29e'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:44:21.629839', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7cae65a2-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.662630853, 'message_signature': '837147ef3f0e82cd7835ae691d840db708af454dc47cb982347b3af4ec9de6ba'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:44:21.629839', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '7cae716e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.662630853, 'message_signature': 'dc4a19ca810b4220a3ff1105fe00b50944f73ba0e082214873ebaa4d905c52bf'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-vda', 'timestamp': '2025-10-13T15:44:21.629839', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '7cae7bdc-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.674813593, 'message_signature': '7ada9b00552367526be244baea8a202483e3eae2fd006664495a66c89a86fe82'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 509952, 'user_id': '3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': '1f4a5f45-5f65-4051-b24e-5815867a3574-sda', 'timestamp': 
'2025-10-13T15:44:21.629839', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'instance-00000004', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7caead32-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.674813593, 'message_signature': '075edfb055f201f197ab5affe63b9fbe1851813dc2b729ab257be086e1750744'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:44:21.629839', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'disk_name': 'vda'}, 'message_id': '7caee07c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.683399891, 'message_signature': '295e9f3f9db2642cbc14f316930a18679333da5f0e734c7bd641ab00427b96b4'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:44:21.629839', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7caeed74-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.683399891, 'message_signature': '8da61526c322e9cb8fda57691c320489238827512889580691229fb3cc6dd829'}]}, 'timestamp': '2025-10-13 15:44:21.633919', '_unique_id': '6a1b0a031190415eb5516355ef3a44fd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.636 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.636 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.637 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [<NovaLikeServer: guest-instance-1>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: guest-instance-1>]
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.638 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.638 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 85 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.639 12 DEBUG ceilometer.compute.pollsters [-] 1f4a5f45-5f65-4051-b24e-5815867a3574/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.639 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 57 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '20d7fce9-82dc-4d3f-b032-10c3dc52ae79', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 85, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:44:21.638829', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '7cafc244-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.591651449, 'message_signature': '7cf37f65fda5b84445a9d08439337482be04fc2f39a733f474827e26e906a419'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 1, 'user_id': 
'3b4b76f527df445a9af9fdb570abbb05', 'user_name': None, 'project_id': '0ad31fa1626a4d048525338a288fd601', 'project_name': None, 'resource_id': 'instance-00000004-1f4a5f45-5f65-4051-b24e-5815867a3574-tapbf0a8138-74', 'timestamp': '2025-10-13T15:44:21.638829', 'resource_metadata': {'display_name': 'guest-instance-1', 'name': 'tapbf0a8138-74', 'instance_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'instance_type': 'm1.nano', 'host': '3962d03ff4a180ada74df1d83ed282f2bd526acda81be029a7cbec94', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '1f304cc6-f658-4c81-8601-a769ec289d72', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c219892b-7f28-4b58-8156-43d84cf6801b'}, 'image_ref': 'c219892b-7f28-4b58-8156-43d84cf6801b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:e9:c8:33', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapbf0a8138-74'}, 'message_id': '7cafd0c2-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.595044325, 'message_signature': '30c7520289b81922842c8906340fd3704c069d068e2a53b3174065f8bb42c8bd'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 57, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:44:21.638829', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': 
'993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '7cafdb4e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9704.597295065, 'message_signature': 'cc9b99861560a97ed24b0b76761a78c557e0113cd67d1f1276924812c5c54108'}]}, 'timestamp': '2025-10-13 15:44:21.639966', '_unique_id': '281192f2f0354c2e82d85386eae4da90'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:44:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:44:21.640 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:44:21 standalone.localdomain rsyslogd[56156]: message too long (8192) with configured size 8096, begin of message is: 2025-10-13 15:44:21.582 12 ERROR oslo_messaging.notify.messaging [-] Could not s [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 15:44:21 standalone.localdomain rsyslogd[56156]: message too long (8192) with configured size 8096, begin of message is: 2025-10-13 15:44:21.586 12 ERROR oslo_messaging.notify.messaging [-] Could not s [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 15:44:21 standalone.localdomain rsyslogd[56156]: message too long (8192) with configured size 8096, begin of message is: 2025-10-13 15:44:21.603 12 ERROR oslo_messaging.notify.messaging [-] Could not s [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 15:44:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3902: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 13 15:44:21 standalone.localdomain rsyslogd[56156]: message too long (8192) with configured size 8096, begin of message is: 2025-10-13 15:44:21.624 12 ERROR oslo_messaging.notify.messaging [-] Could not s [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 15:44:21 standalone.localdomain rsyslogd[56156]: message too long (8192) with configured size 8096, begin of message is: 2025-10-13 15:44:21.635 12 ERROR oslo_messaging.notify.messaging [-] Could not s [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Oct 13 15:44:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:44:21 standalone.localdomain podman[533248]: 2025-10-13 15:44:21.778718599 +0000 UTC m=+0.049619879 container kill 3da846bbc928965b9effd13042508b2d76eece04bb2b96a6c1d300f71e153136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1d53cab-7437-4d64-ab81-a5510ea78cab, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:44:21 standalone.localdomain dnsmasq[533055]: read /var/lib/neutron/dhcp/e1d53cab-7437-4d64-ab81-a5510ea78cab/addn_hosts - 0 addresses
Oct 13 15:44:21 standalone.localdomain dnsmasq-dhcp[533055]: read /var/lib/neutron/dhcp/e1d53cab-7437-4d64-ab81-a5510ea78cab/host
Oct 13 15:44:21 standalone.localdomain dnsmasq-dhcp[533055]: read /var/lib/neutron/dhcp/e1d53cab-7437-4d64-ab81-a5510ea78cab/opts
Oct 13 15:44:21 standalone.localdomain podman[533249]: 2025-10-13 15:44:21.882268509 +0000 UTC m=+0.151072204 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:44:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:21Z|00164|binding|INFO|Releasing lport 7bd955f5-6410-49e1-86d0-72c79ea57b61 from this chassis (sb_readonly=0)
Oct 13 15:44:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:21.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:21Z|00165|binding|INFO|Setting lport 7bd955f5-6410-49e1-86d0-72c79ea57b61 down in Southbound
Oct 13 15:44:21 standalone.localdomain kernel: device tap7bd955f5-64 left promiscuous mode
Oct 13 15:44:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:21.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:21.955 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-e1d53cab-7437-4d64-ab81-a5510ea78cab', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e1d53cab-7437-4d64-ab81-a5510ea78cab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '405ee72dfc854aa88303025334758c24', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0499b2ed-3fde-4959-a490-2a87c83c6791, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=7bd955f5-6410-49e1-86d0-72c79ea57b61) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:44:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:21.958 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 7bd955f5-6410-49e1-86d0-72c79ea57b61 in datapath e1d53cab-7437-4d64-ab81-a5510ea78cab unbound from our chassis
Oct 13 15:44:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:21.961 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e1d53cab-7437-4d64-ab81-a5510ea78cab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:44:21 standalone.localdomain podman[533249]: 2025-10-13 15:44:21.96211954 +0000 UTC m=+0.230923235 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:44:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:21.961 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[d31af53f-2e13-47cb-951a-87a1cecd4c1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:21Z|00166|binding|INFO|Releasing lport 6d0c7ccf-eb5c-4000-9850-d98e4bccfdba from this chassis (sb_readonly=0)
Oct 13 15:44:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:21Z|00167|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:44:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:22.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:22 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:44:22 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:44:22 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:44:22 standalone.localdomain podman[533312]: 2025-10-13 15:44:22.191379951 +0000 UTC m=+0.210317601 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:44:22 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:44:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:22.460 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:da:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'c6:b9:d4:9e:30:db'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:44:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:22.461 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 13 15:44:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:22.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:44:22 standalone.localdomain dnsmasq[531879]: exiting on receipt of SIGTERM
Oct 13 15:44:22 standalone.localdomain podman[533350]: 2025-10-13 15:44:22.672307393 +0000 UTC m=+0.053351925 container kill 5e14815606df847349fe6b31ecc410f828abb1a78881536cd4e8d991a376def6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82962eb0-ad81-4fa3-9cd5-0dc555e5fed0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:44:22 standalone.localdomain systemd[1]: libpod-5e14815606df847349fe6b31ecc410f828abb1a78881536cd4e8d991a376def6.scope: Deactivated successfully.
Oct 13 15:44:22 standalone.localdomain podman[533371]: 2025-10-13 15:44:22.769712332 +0000 UTC m=+0.066123564 container died 5e14815606df847349fe6b31ecc410f828abb1a78881536cd4e8d991a376def6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82962eb0-ad81-4fa3-9cd5-0dc555e5fed0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:44:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e14815606df847349fe6b31ecc410f828abb1a78881536cd4e8d991a376def6-userdata-shm.mount: Deactivated successfully.
Oct 13 15:44:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-500281d0b0f4879c5a6d19db2deebb34d424010a9a5c246982fea90dc447d402-merged.mount: Deactivated successfully.
Oct 13 15:44:22 standalone.localdomain ceph-mon[29756]: pgmap v3902: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 13 15:44:22 standalone.localdomain podman[533371]: 2025-10-13 15:44:22.866374727 +0000 UTC m=+0.162785909 container remove 5e14815606df847349fe6b31ecc410f828abb1a78881536cd4e8d991a376def6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82962eb0-ad81-4fa3-9cd5-0dc555e5fed0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:44:22 standalone.localdomain systemd[1]: libpod-conmon-5e14815606df847349fe6b31ecc410f828abb1a78881536cd4e8d991a376def6.scope: Deactivated successfully.
Oct 13 15:44:22 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d82962eb0\x2dad81\x2d4fa3\x2d9cd5\x2d0dc555e5fed0.mount: Deactivated successfully.
Oct 13 15:44:22 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:22.904 496978 INFO neutron.agent.dhcp.agent [None req-7de2c773-9a84-4e21-858e-9c74a3dda7d5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:23.169 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:44:23
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'volumes', 'backups', '.mgr', 'images', 'vms', 'manila_data']
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:44:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14972 DF PROTO=TCP SPT=50006 DPT=9102 SEQ=225840487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCF639E0000000001030307) 
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3903: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:44:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:44:24 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:24Z|00168|binding|INFO|Releasing lport 6d0c7ccf-eb5c-4000-9850-d98e4bccfdba from this chassis (sb_readonly=0)
Oct 13 15:44:24 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:24Z|00169|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:44:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:24.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:24 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:44:24 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:44:24 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:44:24 standalone.localdomain podman[533409]: 2025-10-13 15:44:24.143981939 +0000 UTC m=+0.048299477 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:44:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:44:24 standalone.localdomain podman[533423]: 2025-10-13 15:44:24.233995328 +0000 UTC m=+0.070356886 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:44:24 standalone.localdomain podman[533423]: 2025-10-13 15:44:24.263436566 +0000 UTC m=+0.099798104 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:44:24 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:44:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:24.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14973 DF PROTO=TCP SPT=50006 DPT=9102 SEQ=225840487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCF67B60000000001030307) 
Oct 13 15:44:24 standalone.localdomain dnsmasq[533055]: exiting on receipt of SIGTERM
Oct 13 15:44:24 standalone.localdomain podman[533466]: 2025-10-13 15:44:24.672405423 +0000 UTC m=+0.061176410 container kill 3da846bbc928965b9effd13042508b2d76eece04bb2b96a6c1d300f71e153136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1d53cab-7437-4d64-ab81-a5510ea78cab, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:44:24 standalone.localdomain systemd[1]: libpod-3da846bbc928965b9effd13042508b2d76eece04bb2b96a6c1d300f71e153136.scope: Deactivated successfully.
Oct 13 15:44:24 standalone.localdomain podman[533477]: 2025-10-13 15:44:24.747716592 +0000 UTC m=+0.065406041 container died 3da846bbc928965b9effd13042508b2d76eece04bb2b96a6c1d300f71e153136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1d53cab-7437-4d64-ab81-a5510ea78cab, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:44:24 standalone.localdomain podman[533477]: 2025-10-13 15:44:24.778605596 +0000 UTC m=+0.096295005 container cleanup 3da846bbc928965b9effd13042508b2d76eece04bb2b96a6c1d300f71e153136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1d53cab-7437-4d64-ab81-a5510ea78cab, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:44:24 standalone.localdomain systemd[1]: libpod-conmon-3da846bbc928965b9effd13042508b2d76eece04bb2b96a6c1d300f71e153136.scope: Deactivated successfully.
Oct 13 15:44:24 standalone.localdomain ceph-mon[29756]: pgmap v3903: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 13 15:44:24 standalone.localdomain podman[533484]: 2025-10-13 15:44:24.876602453 +0000 UTC m=+0.180123760 container remove 3da846bbc928965b9effd13042508b2d76eece04bb2b96a6c1d300f71e153136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e1d53cab-7437-4d64-ab81-a5510ea78cab, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:44:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:24.919 496978 INFO neutron.agent.dhcp.agent [None req-b085cffa-acbb-44ab-94da-157249780735 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:24.920 496978 INFO neutron.agent.dhcp.agent [None req-b085cffa-acbb-44ab-94da-157249780735 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-acbda2dee89a820875d7c5033b136efe637628f02cc6f565aed8a4a0299446ed-merged.mount: Deactivated successfully.
Oct 13 15:44:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3da846bbc928965b9effd13042508b2d76eece04bb2b96a6c1d300f71e153136-userdata-shm.mount: Deactivated successfully.
Oct 13 15:44:25 standalone.localdomain systemd[1]: run-netns-qdhcp\x2de1d53cab\x2d7437\x2d4d64\x2dab81\x2da5510ea78cab.mount: Deactivated successfully.
Oct 13 15:44:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3904: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Oct 13 15:44:26 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:26.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14974 DF PROTO=TCP SPT=50006 DPT=9102 SEQ=225840487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCF6FB60000000001030307) 
Oct 13 15:44:26 standalone.localdomain ceph-mon[29756]: pgmap v3904: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Oct 13 15:44:27 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:27Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e9:c8:33 10.100.0.10
Oct 13 15:44:27 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:27Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e9:c8:33 10.100.0.10
Oct 13 15:44:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:44:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3905: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 3.8 KiB/s rd, 170 B/s wr, 1 op/s
Oct 13 15:44:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:28.315 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:28 standalone.localdomain ceph-mon[29756]: pgmap v3905: 177 pgs: 177 active+clean; 406 MiB data, 323 MiB used, 6.7 GiB / 7.0 GiB avail; 3.8 KiB/s rd, 170 B/s wr, 1 op/s
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00963803391959799 of space, bias 1.0, pg target 0.963803391959799 quantized to 32 (current 32)
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.012942315745393635 of space, bias 1.0, pg target 1.2942315745393635 quantized to 32 (current 32)
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.29775832286432163 quantized to 32 (current 32)
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 13 15:44:29 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:29.463 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b9e3f7-f5c9-4d48-9eca-66995f72d494, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:44:29 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:29.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3906: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 13 15:44:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14975 DF PROTO=TCP SPT=50006 DPT=9102 SEQ=225840487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCF7F770000000001030307) 
Oct 13 15:44:30 standalone.localdomain ceph-mon[29756]: pgmap v3906: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 13 15:44:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:31.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3907: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 13 15:44:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:44:32 standalone.localdomain ceph-mon[29756]: pgmap v3907: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 13 15:44:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3908: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 13 15:44:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:34.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:44:34 standalone.localdomain ceph-mon[29756]: pgmap v3908: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 13 15:44:34 standalone.localdomain podman[533504]: 2025-10-13 15:44:34.803549308 +0000 UTC m=+0.067157356 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:44:34 standalone.localdomain podman[533504]: 2025-10-13 15:44:34.839972054 +0000 UTC m=+0.103580182 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 15:44:34 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:44:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3909: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 13 15:44:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:36.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:44:36 standalone.localdomain ceph-mon[29756]: pgmap v3909: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 13 15:44:36 standalone.localdomain podman[533529]: 2025-10-13 15:44:36.836836473 +0000 UTC m=+0.090560346 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9-minimal)
Oct 13 15:44:36 standalone.localdomain podman[533529]: 2025-10-13 15:44:36.849462617 +0000 UTC m=+0.103186520 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 13 15:44:36 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:44:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:44:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3910: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 13 15:44:38 standalone.localdomain ceph-mon[29756]: pgmap v3910: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 13 15:44:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:44:38 standalone.localdomain podman[533548]: 2025-10-13 15:44:38.821006506 +0000 UTC m=+0.085754216 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:44:38 standalone.localdomain podman[533548]: 2025-10-13 15:44:38.861001774 +0000 UTC m=+0.125749494 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:44:38 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:44:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:39.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3911: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 13 15:44:40 standalone.localdomain ceph-mon[29756]: pgmap v3911: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 13 15:44:41 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:41.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:41 standalone.localdomain podman[467099]: time="2025-10-13T15:44:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:44:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:44:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 417668 "" "Go-http-client/1.1"
Oct 13 15:44:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3912: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 14 KiB/s wr, 0 op/s
Oct 13 15:44:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:44:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 50088 "" "Go-http-client/1.1"
Oct 13 15:44:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:44:42 standalone.localdomain ceph-mon[29756]: pgmap v3912: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 14 KiB/s wr, 0 op/s
Oct 13 15:44:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:44:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:44:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:44:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:44:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:44:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:44:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:44:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:44:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:44:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:44:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:44:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:44:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3913: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 14 KiB/s wr, 0 op/s
Oct 13 15:44:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:44:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:44:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:44:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:44:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:44:43 standalone.localdomain podman[533569]: 2025-10-13 15:44:43.84724424 +0000 UTC m=+0.097353738 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:44:43 standalone.localdomain podman[533569]: 2025-10-13 15:44:43.861838525 +0000 UTC m=+0.111948023 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 15:44:43 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:44:43 standalone.localdomain systemd[1]: tmp-crun.wJbINn.mount: Deactivated successfully.
Oct 13 15:44:43 standalone.localdomain podman[533570]: 2025-10-13 15:44:43.905208589 +0000 UTC m=+0.151502668 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, vcs-type=git, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., container_name=swift_container_server, io.openshift.expose-services=, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:44:43 standalone.localdomain podman[533568]: 2025-10-13 15:44:43.952663509 +0000 UTC m=+0.209963051 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, 
config_id=tripleo_step4, container_name=swift_object_server, com.redhat.component=openstack-swift-object-container, name=rhosp17/openstack-swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:44:44 standalone.localdomain podman[533577]: 2025-10-13 15:44:44.017765079 +0000 UTC m=+0.253026984 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', 
'/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, release=1, io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64, vendor=Red Hat, Inc.)
Oct 13 15:44:44 standalone.localdomain podman[533583]: 2025-10-13 15:44:44.069589505 +0000 UTC m=+0.304746437 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:44:44 standalone.localdomain podman[533570]: 2025-10-13 15:44:44.078724101 +0000 UTC m=+0.325018190 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, version=17.1.9, batch=17.1_20250721.1, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1, tcib_managed=true, vcs-type=git, 
managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, container_name=swift_container_server, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:44:44 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:44:44 standalone.localdomain podman[533583]: 2025-10-13 15:44:44.132767587 +0000 UTC m=+0.367924479 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:44:44 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:44:44 standalone.localdomain podman[533568]: 2025-10-13 15:44:44.219824242 +0000 UTC m=+0.477123794 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.buildah.version=1.33.12, description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, maintainer=OpenStack TripleO Team, container_name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, 
vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-swift-object-container)
Oct 13 15:44:44 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:44:44 standalone.localdomain podman[533577]: 2025-10-13 15:44:44.258728866 +0000 UTC m=+0.493990741 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, build-date=2025-07-21T16:11:22, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, batch=17.1_20250721.1, vcs-type=git, architecture=x86_64, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:44:44 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:44:44 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:44.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:44 standalone.localdomain ceph-mon[29756]: pgmap v3913: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 14 KiB/s wr, 0 op/s
Oct 13 15:44:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3914: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 14 KiB/s wr, 0 op/s
Oct 13 15:44:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:45.999 378920 DEBUG eventlet.wsgi.server [-] (378920) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 13 15:44:46 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:46.000 378920 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0
Oct 13 15:44:46 standalone.localdomain ovn_metadata_agent[378816]: Accept: */*
Oct 13 15:44:46 standalone.localdomain ovn_metadata_agent[378816]: Connection: close
Oct 13 15:44:46 standalone.localdomain ovn_metadata_agent[378816]: Content-Type: text/plain
Oct 13 15:44:46 standalone.localdomain ovn_metadata_agent[378816]: Host: 169.254.169.254
Oct 13 15:44:46 standalone.localdomain ovn_metadata_agent[378816]: User-Agent: curl/7.84.0
Oct 13 15:44:46 standalone.localdomain ovn_metadata_agent[378816]: X-Forwarded-For: 10.100.0.10
Oct 13 15:44:46 standalone.localdomain ovn_metadata_agent[378816]: X-Ovn-Network-Id: d5036f63-29f3-448c-ad5c-55534cac285e __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 13 15:44:46 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:46.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:46 standalone.localdomain ceph-mon[29756]: pgmap v3914: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 14 KiB/s wr, 0 op/s
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.181 378920 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.182 378920 INFO eventlet.wsgi.server [-] 10.100.0.10,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1669 time: 1.1820986
Oct 13 15:44:47 standalone.localdomain haproxy-metadata-proxy-d5036f63-29f3-448c-ad5c-55534cac285e[532667]: 10.100.0.10:38248 [13/Oct/2025:15:44:45.998] listener listener/metadata 0/0/0/1184/1184 200 1653 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.354 2 DEBUG oslo_concurrency.lockutils [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Acquiring lock "1f4a5f45-5f65-4051-b24e-5815867a3574" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.355 2 DEBUG oslo_concurrency.lockutils [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.356 2 DEBUG oslo_concurrency.lockutils [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Acquiring lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.356 2 DEBUG oslo_concurrency.lockutils [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.356 2 DEBUG oslo_concurrency.lockutils [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.359 2 INFO nova.compute.manager [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Terminating instance
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.360 2 DEBUG nova.compute.manager [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 13 15:44:47 standalone.localdomain kernel: device tapbf0a8138-74 left promiscuous mode
Oct 13 15:44:47 standalone.localdomain NetworkManager[5962]: <info>  [1760370287.4333] device (tapbf0a8138-74): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Oct 13 15:44:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:47Z|00170|binding|INFO|Releasing lport bf0a8138-74b3-47a4-8639-a5b7d069ec7f from this chassis (sb_readonly=0)
Oct 13 15:44:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:47Z|00171|binding|INFO|Setting lport bf0a8138-74b3-47a4-8639-a5b7d069ec7f down in Southbound
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:47Z|00172|binding|INFO|Removing iface tapbf0a8138-74 ovn-installed in OVS
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.488 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:c8:33 10.100.0.10'], port_security=['fa:16:3e:e9:c8:33 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1f4a5f45-5f65-4051-b24e-5815867a3574', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5036f63-29f3-448c-ad5c-55534cac285e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ad31fa1626a4d048525338a288fd601', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd0f6910c-2783-4ce6-b538-179724c8f9b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c17d3ad-231d-4d34-a373-ab16abbf06e3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=bf0a8138-74b3-47a4-8639-a5b7d069ec7f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.490 378821 INFO neutron.agent.ovn.metadata.agent [-] Port bf0a8138-74b3-47a4-8639-a5b7d069ec7f in datapath d5036f63-29f3-448c-ad5c-55534cac285e unbound from our chassis
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.492 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port bcbd804f-edbb-47fa-9ba8-f47ac32fbf9e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.492 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5036f63-29f3-448c-ad5c-55534cac285e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.493 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[66b19af9-ebb7-4f6b-8966-b22be05b174f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.494 378821 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e namespace which is not needed anymore
Oct 13 15:44:47 standalone.localdomain systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 13 15:44:47 standalone.localdomain systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 16.192s CPU time.
Oct 13 15:44:47 standalone.localdomain systemd-machined[183383]: Machine qemu-4-instance-00000004 terminated.
Oct 13 15:44:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.596 2 INFO nova.virt.libvirt.driver [-] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Instance destroyed successfully.
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.596 2 DEBUG nova.objects.instance [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lazy-loading 'resources' on Instance uuid 1f4a5f45-5f65-4051-b24e-5815867a3574 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.616 2 DEBUG nova.virt.libvirt.vif [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:44:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='standalone.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=4,image_ref='c219892b-7f28-4b58-8156-43d84cf6801b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHC2IIz5lKl3HDlfmZeplMM9FpSRm3+y1rUn/v78Cx3DOIV7ynijtPAGsFZVeIIJ403IWKXviSnyaqtFaTnOX/L8cqanRlSgnheMn0MeNzvR53otIJhAFkKn2gkt7aL1ng==',key_name='tempest-keypair-14677431',keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:44:11Z,launched_on='standalone.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='standalone.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0ad31fa1626a4d048525338a288fd601',ramdisk_id='',reservation_id='r-kywtld1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c219892b-7f28-4b58-8156-43d84cf6801b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-1628500701',owner_user_name='tempest-ServersV294TestFqdnHostnames-1628500701-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:44:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3b4b76f527df445a9af9fdb570abbb05',uuid=1f4a5f45-5f65-4051-b24e-5815867a3574,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "address": "fa:16:3e:e9:c8:33", "network": {"id": "d5036f63-29f3-448c-ad5c-55534cac285e", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-534158699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "0ad31fa1626a4d048525338a288fd601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0a8138-74", "ovs_interfaceid": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.617 2 DEBUG nova.network.os_vif_util [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Converting VIF {"id": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "address": "fa:16:3e:e9:c8:33", "network": {"id": "d5036f63-29f3-448c-ad5c-55534cac285e", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-534158699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "0ad31fa1626a4d048525338a288fd601", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0a8138-74", "ovs_interfaceid": "bf0a8138-74b3-47a4-8639-a5b7d069ec7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.618 2 DEBUG nova.network.os_vif_util [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e9:c8:33,bridge_name='br-int',has_traffic_filtering=True,id=bf0a8138-74b3-47a4-8639-a5b7d069ec7f,network=Network(d5036f63-29f3-448c-ad5c-55534cac285e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0a8138-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.618 2 DEBUG os_vif [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:c8:33,bridge_name='br-int',has_traffic_filtering=True,id=bf0a8138-74b3-47a4-8639-a5b7d069ec7f,network=Network(d5036f63-29f3-448c-ad5c-55534cac285e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0a8138-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.620 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf0a8138-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.627 2 INFO os_vif [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:c8:33,bridge_name='br-int',has_traffic_filtering=True,id=bf0a8138-74b3-47a4-8639-a5b7d069ec7f,network=Network(d5036f63-29f3-448c-ad5c-55534cac285e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0a8138-74')
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.643 2 DEBUG nova.compute.manager [req-1889d060-2f17-4f85-9315-e0cb417a3d2a req-49b0de80-fd5a-43e0-ad00-32d33d68bdbf 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Received event network-vif-unplugged-bf0a8138-74b3-47a4-8639-a5b7d069ec7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.643 2 DEBUG oslo_concurrency.lockutils [req-1889d060-2f17-4f85-9315-e0cb417a3d2a req-49b0de80-fd5a-43e0-ad00-32d33d68bdbf 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Acquiring lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.643 2 DEBUG oslo_concurrency.lockutils [req-1889d060-2f17-4f85-9315-e0cb417a3d2a req-49b0de80-fd5a-43e0-ad00-32d33d68bdbf 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.644 2 DEBUG oslo_concurrency.lockutils [req-1889d060-2f17-4f85-9315-e0cb417a3d2a req-49b0de80-fd5a-43e0-ad00-32d33d68bdbf 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.644 2 DEBUG nova.compute.manager [req-1889d060-2f17-4f85-9315-e0cb417a3d2a req-49b0de80-fd5a-43e0-ad00-32d33d68bdbf 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] No waiting events found dispatching network-vif-unplugged-bf0a8138-74b3-47a4-8639-a5b7d069ec7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.644 2 DEBUG nova.compute.manager [req-1889d060-2f17-4f85-9315-e0cb417a3d2a req-49b0de80-fd5a-43e0-ad00-32d33d68bdbf 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Received event network-vif-unplugged-bf0a8138-74b3-47a4-8639-a5b7d069ec7f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 13 15:44:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3915: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 1023 B/s wr, 0 op/s
Oct 13 15:44:47 standalone.localdomain systemd[1]: tmp-crun.fshx6k.mount: Deactivated successfully.
Oct 13 15:44:47 standalone.localdomain neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e[532661]: [NOTICE]   (532665) : haproxy version is 2.8.14-c23fe91
Oct 13 15:44:47 standalone.localdomain neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e[532661]: [NOTICE]   (532665) : path to executable is /usr/sbin/haproxy
Oct 13 15:44:47 standalone.localdomain neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e[532661]: [WARNING]  (532665) : Exiting Master process...
Oct 13 15:44:47 standalone.localdomain neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e[532661]: [ALERT]    (532665) : Current worker (532667) exited with code 143 (Terminated)
Oct 13 15:44:47 standalone.localdomain neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e[532661]: [WARNING]  (532665) : All workers exited. Exiting... (0)
Oct 13 15:44:47 standalone.localdomain systemd[1]: libpod-fa99f08118073dfe78589558388d1a212b52bb94972f65bf8e66d3803c4192d4.scope: Deactivated successfully.
Oct 13 15:44:47 standalone.localdomain podman[533725]: 2025-10-13 15:44:47.724560587 +0000 UTC m=+0.094629073 container died fa99f08118073dfe78589558388d1a212b52bb94972f65bf8e66d3803c4192d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:44:47 standalone.localdomain podman[533725]: 2025-10-13 15:44:47.78267982 +0000 UTC m=+0.152748246 container cleanup fa99f08118073dfe78589558388d1a212b52bb94972f65bf8e66d3803c4192d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:44:47 standalone.localdomain podman[533757]: 2025-10-13 15:44:47.820647514 +0000 UTC m=+0.090312048 container cleanup fa99f08118073dfe78589558388d1a212b52bb94972f65bf8e66d3803c4192d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 15:44:47 standalone.localdomain systemd[1]: libpod-conmon-fa99f08118073dfe78589558388d1a212b52bb94972f65bf8e66d3803c4192d4.scope: Deactivated successfully.
Oct 13 15:44:47 standalone.localdomain podman[533771]: 2025-10-13 15:44:47.889543883 +0000 UTC m=+0.086881651 container remove fa99f08118073dfe78589558388d1a212b52bb94972f65bf8e66d3803c4192d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.895 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8ca515-e418-46ef-bd29-3e1224b743c3]: (4, ('Mon Oct 13 03:44:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e (fa99f08118073dfe78589558388d1a212b52bb94972f65bf8e66d3803c4192d4)\nfa99f08118073dfe78589558388d1a212b52bb94972f65bf8e66d3803c4192d4\nMon Oct 13 03:44:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e (fa99f08118073dfe78589558388d1a212b52bb94972f65bf8e66d3803c4192d4)\nfa99f08118073dfe78589558388d1a212b52bb94972f65bf8e66d3803c4192d4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.897 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8463ed-816e-478e-926e-ddf449763465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.899 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5036f63-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:44:47 standalone.localdomain kernel: device tapd5036f63-20 left promiscuous mode
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:47.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.913 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4fdce3-a6af-485f-b638-dd9c89420645]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.934 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[9f689f5f-8cb2-49de-81a5-c3876f573200]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.936 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[5999d61d-55c5-4409-ada3-fb6c4ddd6ac0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.956 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[51ed9bc5-47a5-49bd-8859-64092b9df32b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 969326, 'reachable_time': 23417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 
'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 533798, 'error': None, 'target': 'ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.968 379037 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d5036f63-29f3-448c-ad5c-55534cac285e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 13 15:44:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:47.969 379037 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd9c432-1700-44a8-9334-13117dd2137e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:48 standalone.localdomain podman[533790]: 2025-10-13 15:44:48.022544562 +0000 UTC m=+0.081641218 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:44:48 standalone.localdomain podman[533790]: 2025-10-13 15:44:48.037903741 +0000 UTC m=+0.097000427 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:44:48 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:44:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:48.079 2 INFO nova.virt.libvirt.driver [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Deleting instance files /var/lib/nova/instances/1f4a5f45-5f65-4051-b24e-5815867a3574_del
Oct 13 15:44:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:48.079 2 INFO nova.virt.libvirt.driver [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Deletion of /var/lib/nova/instances/1f4a5f45-5f65-4051-b24e-5815867a3574_del complete
Oct 13 15:44:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:48.367 2 INFO nova.compute.manager [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Took 1.01 seconds to destroy the instance on the hypervisor.
Oct 13 15:44:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:48.367 2 DEBUG oslo.service.loopingcall [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 13 15:44:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:48.368 2 DEBUG nova.compute.manager [-] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 13 15:44:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:48.368 2 DEBUG nova.network.neutron [-] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 13 15:44:48 standalone.localdomain ceph-mon[29756]: pgmap v3915: 177 pgs: 177 active+clean; 439 MiB data, 348 MiB used, 6.7 GiB / 7.0 GiB avail; 1023 B/s wr, 0 op/s
Oct 13 15:44:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4def9279d2afc491e16a86c7504ebe42c84466598fd4e9cf13041559eb1df0a0-merged.mount: Deactivated successfully.
Oct 13 15:44:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fa99f08118073dfe78589558388d1a212b52bb94972f65bf8e66d3803c4192d4-userdata-shm.mount: Deactivated successfully.
Oct 13 15:44:48 standalone.localdomain systemd[1]: run-netns-ovnmeta\x2dd5036f63\x2d29f3\x2d448c\x2dad5c\x2d55534cac285e.mount: Deactivated successfully.
Oct 13 15:44:49 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:44:49.215 2 INFO neutron.agent.securitygroups_rpc [req-26ae7892-1189-48a4-b05b-41f2128fc9cb req-ad366fe5-551b-4e15-af60-c2c103cbcc36 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Security group member updated ['d0f6910c-2783-4ce6-b538-179724c8f9b8']
Oct 13 15:44:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:49.301 2 DEBUG nova.network.neutron [-] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:44:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:49.321 2 INFO nova.compute.manager [-] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Took 0.95 seconds to deallocate network for instance.
Oct 13 15:44:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:49.378 2 DEBUG oslo_concurrency.lockutils [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:44:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:49.379 2 DEBUG oslo_concurrency.lockutils [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:44:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:49.448 2 DEBUG oslo_concurrency.processutils [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:44:49 standalone.localdomain dnsmasq[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/addn_hosts - 1 addresses
Oct 13 15:44:49 standalone.localdomain dnsmasq-dhcp[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/host
Oct 13 15:44:49 standalone.localdomain dnsmasq-dhcp[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/opts
Oct 13 15:44:49 standalone.localdomain podman[533830]: 2025-10-13 15:44:49.490636936 +0000 UTC m=+0.069734176 container kill b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d5036f63-29f3-448c-ad5c-55534cac285e, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 13 15:44:49 standalone.localdomain systemd[1]: tmp-crun.lMgM22.mount: Deactivated successfully.
Oct 13 15:44:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:49.665 2 DEBUG nova.compute.manager [req-30a06b34-87fc-49d4-ad48-344f174e698f req-30e38f31-7453-4473-9750-f10c7785ea89 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Received event network-vif-plugged-bf0a8138-74b3-47a4-8639-a5b7d069ec7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 13 15:44:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:49.665 2 DEBUG oslo_concurrency.lockutils [req-30a06b34-87fc-49d4-ad48-344f174e698f req-30e38f31-7453-4473-9750-f10c7785ea89 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Acquiring lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:44:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:49.666 2 DEBUG oslo_concurrency.lockutils [req-30a06b34-87fc-49d4-ad48-344f174e698f req-30e38f31-7453-4473-9750-f10c7785ea89 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:44:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:49.666 2 DEBUG oslo_concurrency.lockutils [req-30a06b34-87fc-49d4-ad48-344f174e698f req-30e38f31-7453-4473-9750-f10c7785ea89 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:44:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:49.667 2 DEBUG nova.compute.manager [req-30a06b34-87fc-49d4-ad48-344f174e698f req-30e38f31-7453-4473-9750-f10c7785ea89 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] No waiting events found dispatching network-vif-plugged-bf0a8138-74b3-47a4-8639-a5b7d069ec7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 13 15:44:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:49.667 2 WARNING nova.compute.manager [req-30a06b34-87fc-49d4-ad48-344f174e698f req-30e38f31-7453-4473-9750-f10c7785ea89 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Received unexpected event network-vif-plugged-bf0a8138-74b3-47a4-8639-a5b7d069ec7f for instance with vm_state deleted and task_state None.
Oct 13 15:44:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:49.667 2 DEBUG nova.compute.manager [req-30a06b34-87fc-49d4-ad48-344f174e698f req-30e38f31-7453-4473-9750-f10c7785ea89 150f517a42bd40f99c171eab4e1a8605 206ad444831c476581ca5d3bd344a788 - - default default] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Received event network-vif-deleted-bf0a8138-74b3-47a4-8639-a5b7d069ec7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 13 15:44:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3916: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 13 15:44:49 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:44:49 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2369973532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:44:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:49.990 2 DEBUG oslo_concurrency.processutils [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:44:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:49.999 2 DEBUG nova.compute.provider_tree [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:44:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:50.016 2 DEBUG nova.scheduler.client.report [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:44:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:50.048 2 DEBUG oslo_concurrency.lockutils [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:44:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:50.092 2 INFO nova.scheduler.client.report [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Deleted allocations for instance 1f4a5f45-5f65-4051-b24e-5815867a3574
Oct 13 15:44:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:50.173 2 DEBUG oslo_concurrency.lockutils [None req-26ae7892-1189-48a4-b05b-41f2128fc9cb 3b4b76f527df445a9af9fdb570abbb05 0ad31fa1626a4d048525338a288fd601 - - default default] Lock "1f4a5f45-5f65-4051-b24e-5815867a3574" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:44:50 standalone.localdomain ceph-mon[29756]: pgmap v3916: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 13 15:44:50 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2369973532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:44:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:51.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:51.440 496978 INFO neutron.agent.linux.ip_lib [None req-c153dc87-47c8-4a98-882a-38689881dadd - - - - - -] Device tap2707842e-7b cannot be used as it has no MAC address
Oct 13 15:44:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:51.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:51 standalone.localdomain kernel: device tap2707842e-7b entered promiscuous mode
Oct 13 15:44:51 standalone.localdomain NetworkManager[5962]: <info>  [1760370291.4685] manager: (tap2707842e-7b): new Generic device (/org/freedesktop/NetworkManager/Devices/40)
Oct 13 15:44:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:51Z|00173|binding|INFO|Claiming lport 2707842e-7bb2-4dbc-9083-86897c806804 for this chassis.
Oct 13 15:44:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:51Z|00174|binding|INFO|2707842e-7bb2-4dbc-9083-86897c806804: Claiming unknown
Oct 13 15:44:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:51.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:51 standalone.localdomain systemd-udevd[533882]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:44:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:51.479 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-70c4970f-1b6a-4b76-8c1a-6b354d322a22', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70c4970f-1b6a-4b76-8c1a-6b354d322a22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2b32b6c0f9d493ab8b8605b9e3d027a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1db8bacc-37da-43f4-9324-fa0f59a6dea4, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=2707842e-7bb2-4dbc-9083-86897c806804) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:44:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:51.480 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 2707842e-7bb2-4dbc-9083-86897c806804 in datapath 70c4970f-1b6a-4b76-8c1a-6b354d322a22 bound to our chassis
Oct 13 15:44:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:51.482 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port a1f0bad1-3ba6-4f95-9235-d9983b957f3f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:44:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:51.482 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 70c4970f-1b6a-4b76-8c1a-6b354d322a22, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:44:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:51.485 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[78ce8b96-695e-4704-96f8-1e2c5258af6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2707842e-7b: No such device
Oct 13 15:44:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2707842e-7b: No such device
Oct 13 15:44:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:51Z|00175|binding|INFO|Setting lport 2707842e-7bb2-4dbc-9083-86897c806804 ovn-installed in OVS
Oct 13 15:44:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:51Z|00176|binding|INFO|Setting lport 2707842e-7bb2-4dbc-9083-86897c806804 up in Southbound
Oct 13 15:44:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:51.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2707842e-7b: No such device
Oct 13 15:44:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2707842e-7b: No such device
Oct 13 15:44:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2707842e-7b: No such device
Oct 13 15:44:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2707842e-7b: No such device
Oct 13 15:44:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2707842e-7b: No such device
Oct 13 15:44:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2707842e-7b: No such device
Oct 13 15:44:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:51.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3917: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 13 15:44:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:44:52 standalone.localdomain podman[533932]: 2025-10-13 15:44:52.312389227 +0000 UTC m=+0.106426661 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:44:52 standalone.localdomain podman[533932]: 2025-10-13 15:44:52.349803163 +0000 UTC m=+0.143840597 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:44:52 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:44:52 standalone.localdomain podman[533977]: 
Oct 13 15:44:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:44:52 standalone.localdomain podman[533977]: 2025-10-13 15:44:52.596043565 +0000 UTC m=+0.176004282 container create d0577d2d8f4d7b529a5e75a1e00f5e39c0b9ea2f3334393ef5500dbc3a3a649e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-70c4970f-1b6a-4b76-8c1a-6b354d322a22, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:44:52 standalone.localdomain podman[533977]: 2025-10-13 15:44:52.505888402 +0000 UTC m=+0.085849179 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:44:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:52.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:52 standalone.localdomain systemd[1]: Started libpod-conmon-d0577d2d8f4d7b529a5e75a1e00f5e39c0b9ea2f3334393ef5500dbc3a3a649e.scope.
Oct 13 15:44:52 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:44:52 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/720c2a4ec4d0c91692d89e3a2574a1b061e80cc2c4a6004d1c2a819b90d9ea49/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:44:52 standalone.localdomain podman[533977]: 2025-10-13 15:44:52.694598459 +0000 UTC m=+0.274559176 container init d0577d2d8f4d7b529a5e75a1e00f5e39c0b9ea2f3334393ef5500dbc3a3a649e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-70c4970f-1b6a-4b76-8c1a-6b354d322a22, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:44:52 standalone.localdomain dnsmasq[533995]: started, version 2.85 cachesize 150
Oct 13 15:44:52 standalone.localdomain dnsmasq[533995]: DNS service limited to local subnets
Oct 13 15:44:52 standalone.localdomain dnsmasq[533995]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:44:52 standalone.localdomain dnsmasq[533995]: warning: no upstream servers configured
Oct 13 15:44:52 standalone.localdomain dnsmasq-dhcp[533995]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:44:52 standalone.localdomain dnsmasq[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/addn_hosts - 0 addresses
Oct 13 15:44:52 standalone.localdomain dnsmasq-dhcp[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/host
Oct 13 15:44:52 standalone.localdomain dnsmasq-dhcp[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/opts
Oct 13 15:44:52 standalone.localdomain podman[533977]: 2025-10-13 15:44:52.704038513 +0000 UTC m=+0.283999220 container start d0577d2d8f4d7b529a5e75a1e00f5e39c0b9ea2f3334393ef5500dbc3a3a649e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-70c4970f-1b6a-4b76-8c1a-6b354d322a22, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:44:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:52.749 496978 INFO neutron.agent.dhcp.agent [None req-2a86d2f7-1d15-4446-b549-9d95da151829 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:44:50Z, description=, device_id=df10fe17-755c-44c7-a877-4622ff69de74, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890e7430>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892ce8e0>], id=a411a202-fea6-43f1-b518-75a8a5c4afcc, ip_allocation=immediate, mac_address=fa:16:3e:e6:4d:3f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:44:49Z, description=, dns_domain=, id=70c4970f-1b6a-4b76-8c1a-6b354d322a22, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-2102348093, port_security_enabled=True, project_id=d2b32b6c0f9d493ab8b8605b9e3d027a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51097, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1205, status=ACTIVE, subnets=['32d87668-a43f-4a4e-8674-15c3d258797e'], tags=[], tenant_id=d2b32b6c0f9d493ab8b8605b9e3d027a, updated_at=2025-10-13T15:44:49Z, vlan_transparent=None, network_id=70c4970f-1b6a-4b76-8c1a-6b354d322a22, port_security_enabled=False, project_id=d2b32b6c0f9d493ab8b8605b9e3d027a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1212, status=DOWN, tags=[], tenant_id=d2b32b6c0f9d493ab8b8605b9e3d027a, updated_at=2025-10-13T15:44:51Z on network 70c4970f-1b6a-4b76-8c1a-6b354d322a22
Oct 13 15:44:52 standalone.localdomain ceph-mon[29756]: pgmap v3917: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 13 15:44:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:52.876 496978 INFO neutron.agent.dhcp.agent [None req-827d280b-ae25-4209-8dcb-3ccac3f73c46 - - - - - -] DHCP configuration for ports {'668e0d84-6688-4c3f-8c93-646ca85a9d3f'} is completed
Oct 13 15:44:52 standalone.localdomain dnsmasq[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/addn_hosts - 1 addresses
Oct 13 15:44:52 standalone.localdomain dnsmasq-dhcp[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/host
Oct 13 15:44:52 standalone.localdomain dnsmasq-dhcp[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/opts
Oct 13 15:44:52 standalone.localdomain podman[534014]: 2025-10-13 15:44:52.997425465 +0000 UTC m=+0.071493231 container kill d0577d2d8f4d7b529a5e75a1e00f5e39c0b9ea2f3334393ef5500dbc3a3a649e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-70c4970f-1b6a-4b76-8c1a-6b354d322a22, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:44:53 standalone.localdomain dnsmasq[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/addn_hosts - 0 addresses
Oct 13 15:44:53 standalone.localdomain dnsmasq-dhcp[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/host
Oct 13 15:44:53 standalone.localdomain dnsmasq-dhcp[530771]: read /var/lib/neutron/dhcp/d5036f63-29f3-448c-ad5c-55534cac285e/opts
Oct 13 15:44:53 standalone.localdomain podman[534044]: 2025-10-13 15:44:53.122741204 +0000 UTC m=+0.068181598 container kill b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d5036f63-29f3-448c-ad5c-55534cac285e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS)
Oct 13 15:44:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:53.176 496978 INFO neutron.agent.dhcp.agent [None req-b3599002-34d4-49ec-8dbc-49e258e14449 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:44:50Z, description=, device_id=df10fe17-755c-44c7-a877-4622ff69de74, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889139310>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889139b80>], id=a411a202-fea6-43f1-b518-75a8a5c4afcc, ip_allocation=immediate, mac_address=fa:16:3e:e6:4d:3f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:44:49Z, description=, dns_domain=, id=70c4970f-1b6a-4b76-8c1a-6b354d322a22, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-2102348093, port_security_enabled=True, project_id=d2b32b6c0f9d493ab8b8605b9e3d027a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51097, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1205, status=ACTIVE, subnets=['32d87668-a43f-4a4e-8674-15c3d258797e'], tags=[], tenant_id=d2b32b6c0f9d493ab8b8605b9e3d027a, updated_at=2025-10-13T15:44:49Z, vlan_transparent=None, network_id=70c4970f-1b6a-4b76-8c1a-6b354d322a22, port_security_enabled=False, project_id=d2b32b6c0f9d493ab8b8605b9e3d027a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1212, status=DOWN, tags=[], tenant_id=d2b32b6c0f9d493ab8b8605b9e3d027a, updated_at=2025-10-13T15:44:51Z on network 70c4970f-1b6a-4b76-8c1a-6b354d322a22
Oct 13 15:44:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:53.228 496978 INFO neutron.agent.dhcp.agent [None req-236f128a-148d-4274-a427-00c47a544221 - - - - - -] DHCP configuration for ports {'a411a202-fea6-43f1-b518-75a8a5c4afcc'} is completed
Oct 13 15:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:44:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:44:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:53Z|00177|binding|INFO|Releasing lport 2244dbdc-4b24-424b-828c-6d8ecf587bda from this chassis (sb_readonly=0)
Oct 13 15:44:53 standalone.localdomain kernel: device tap2244dbdc-4b left promiscuous mode
Oct 13 15:44:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:53Z|00178|binding|INFO|Setting lport 2244dbdc-4b24-424b-828c-6d8ecf587bda down in Southbound
Oct 13 15:44:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:53.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:53 standalone.localdomain systemd[1]: tmp-crun.aJJpY7.mount: Deactivated successfully.
Oct 13 15:44:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:53.315 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d5036f63-29f3-448c-ad5c-55534cac285e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5036f63-29f3-448c-ad5c-55534cac285e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ad31fa1626a4d048525338a288fd601', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c17d3ad-231d-4d34-a373-ab16abbf06e3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=2244dbdc-4b24-424b-828c-6d8ecf587bda) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:44:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:53.318 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 2244dbdc-4b24-424b-828c-6d8ecf587bda in datapath d5036f63-29f3-448c-ad5c-55534cac285e unbound from our chassis
Oct 13 15:44:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:53.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:53.320 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5036f63-29f3-448c-ad5c-55534cac285e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:44:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:53.321 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac9c59e-7e77-47b0-b9df-600adf9e48e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:53 standalone.localdomain dnsmasq[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/addn_hosts - 1 addresses
Oct 13 15:44:53 standalone.localdomain podman[534089]: 2025-10-13 15:44:53.388954308 +0000 UTC m=+0.051410734 container kill d0577d2d8f4d7b529a5e75a1e00f5e39c0b9ea2f3334393ef5500dbc3a3a649e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-70c4970f-1b6a-4b76-8c1a-6b354d322a22, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:44:53 standalone.localdomain dnsmasq-dhcp[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/host
Oct 13 15:44:53 standalone.localdomain dnsmasq-dhcp[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/opts
Oct 13 15:44:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9265 DF PROTO=TCP SPT=36610 DPT=9102 SEQ=1942387619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCFD8CF0000000001030307) 
Oct 13 15:44:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:53.675 496978 INFO neutron.agent.dhcp.agent [None req-8ec91822-f25c-4577-9e7b-7a0d75a4e45d - - - - - -] DHCP configuration for ports {'a411a202-fea6-43f1-b518-75a8a5c4afcc'} is completed
Oct 13 15:44:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3918: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 13 15:44:53 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:44:53.916 2 INFO neutron.agent.securitygroups_rpc [None req-31e3f056-6e82-4081-8e7a-40fd25f260cc c7031b5cba0747f6a1f442505a3c33d5 d2b32b6c0f9d493ab8b8605b9e3d027a - - default default] Security group member updated ['f52959fb-7dca-42e9-8b7f-c748e241201b']
Oct 13 15:44:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:53.975 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:44:53Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889103d30>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889103d00>], id=434f2a8a-e129-4791-9c8f-9d8429662b1e, ip_allocation=immediate, mac_address=fa:16:3e:de:9a:75, name=tempest-FloatingIPNegativeTestJSON-1134668370, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:44:49Z, description=, dns_domain=, id=70c4970f-1b6a-4b76-8c1a-6b354d322a22, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-2102348093, port_security_enabled=True, project_id=d2b32b6c0f9d493ab8b8605b9e3d027a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51097, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1205, status=ACTIVE, subnets=['32d87668-a43f-4a4e-8674-15c3d258797e'], tags=[], tenant_id=d2b32b6c0f9d493ab8b8605b9e3d027a, updated_at=2025-10-13T15:44:49Z, vlan_transparent=None, network_id=70c4970f-1b6a-4b76-8c1a-6b354d322a22, port_security_enabled=True, project_id=d2b32b6c0f9d493ab8b8605b9e3d027a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['f52959fb-7dca-42e9-8b7f-c748e241201b'], standard_attr_id=1218, status=DOWN, tags=[], tenant_id=d2b32b6c0f9d493ab8b8605b9e3d027a, updated_at=2025-10-13T15:44:53Z on network 70c4970f-1b6a-4b76-8c1a-6b354d322a22
Oct 13 15:44:54 standalone.localdomain dnsmasq[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/addn_hosts - 2 addresses
Oct 13 15:44:54 standalone.localdomain dnsmasq-dhcp[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/host
Oct 13 15:44:54 standalone.localdomain podman[534127]: 2025-10-13 15:44:54.196672624 +0000 UTC m=+0.061078307 container kill d0577d2d8f4d7b529a5e75a1e00f5e39c0b9ea2f3334393ef5500dbc3a3a649e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-70c4970f-1b6a-4b76-8c1a-6b354d322a22, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:44:54 standalone.localdomain dnsmasq-dhcp[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/opts
Oct 13 15:44:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:54.495 496978 INFO neutron.agent.dhcp.agent [None req-b6310a5a-fda9-42bf-910a-b84ff31b1163 - - - - - -] DHCP configuration for ports {'434f2a8a-e129-4791-9c8f-9d8429662b1e'} is completed
Oct 13 15:44:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9266 DF PROTO=TCP SPT=36610 DPT=9102 SEQ=1942387619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCFDCF70000000001030307) 
Oct 13 15:44:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:44:54 standalone.localdomain ceph-mon[29756]: pgmap v3918: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 13 15:44:54 standalone.localdomain podman[534149]: 2025-10-13 15:44:54.777918265 +0000 UTC m=+0.049179785 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:44:54 standalone.localdomain podman[534149]: 2025-10-13 15:44:54.808868461 +0000 UTC m=+0.080129981 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent)
Oct 13 15:44:54 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:44:55 standalone.localdomain sudo[534167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:44:55 standalone.localdomain sudo[534167]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:44:55 standalone.localdomain sudo[534167]: pam_unix(sudo:session): session closed for user root
Oct 13 15:44:55 standalone.localdomain sudo[534185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Oct 13 15:44:55 standalone.localdomain sudo[534185]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:44:55 standalone.localdomain snmpd[110332]: empty variable list in _query
Oct 13 15:44:55 standalone.localdomain snmpd[110332]: empty variable list in _query
Oct 13 15:44:55 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:44:55 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:44:55 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:44:55 standalone.localdomain podman[534244]: 2025-10-13 15:44:55.597552092 +0000 UTC m=+0.148198204 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:44:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:55Z|00179|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:44:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:55.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3919: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 13 15:44:55 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:44:55.921 2 INFO neutron.agent.securitygroups_rpc [None req-80309fe5-99f0-44f9-8f96-49e96d5e70dd eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:44:56 standalone.localdomain podman[534313]: 2025-10-13 15:44:55.999517481 +0000 UTC m=+0.082806064 container exec 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, distribution-scope=public, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, release=553, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vcs-type=git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git)
Oct 13 15:44:56 standalone.localdomain podman[534313]: 2025-10-13 15:44:56.128225526 +0000 UTC m=+0.211514109 container exec_died 0d24c44801d510322759caff89f3bbc2a8f0e596728ff1cb774d94bbc921a580 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mon-standalone, ceph=True, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-type=git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7)
Oct 13 15:44:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:56.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:56 standalone.localdomain sudo[534185]: pam_unix(sudo:session): session closed for user root
Oct 13 15:44:56 standalone.localdomain podman[534431]: 2025-10-13 15:44:56.649470605 +0000 UTC m=+0.084318641 container kill b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d5036f63-29f3-448c-ad5c-55534cac285e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 15:44:56 standalone.localdomain dnsmasq[530771]: exiting on receipt of SIGTERM
Oct 13 15:44:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 15:44:56 standalone.localdomain systemd[1]: libpod-b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02.scope: Deactivated successfully.
Oct 13 15:44:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9267 DF PROTO=TCP SPT=36610 DPT=9102 SEQ=1942387619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCFE4F60000000001030307) 
Oct 13 15:44:56 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:44:56 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 15:44:56 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:44:56 standalone.localdomain sudo[534453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:44:56 standalone.localdomain sudo[534453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:44:56 standalone.localdomain sudo[534453]: pam_unix(sudo:session): session closed for user root
Oct 13 15:44:56 standalone.localdomain podman[534445]: 2025-10-13 15:44:56.72624892 +0000 UTC m=+0.059603050 container died b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d5036f63-29f3-448c-ad5c-55534cac285e, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:44:56 standalone.localdomain podman[534445]: 2025-10-13 15:44:56.76313416 +0000 UTC m=+0.096488260 container cleanup b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d5036f63-29f3-448c-ad5c-55534cac285e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:44:56 standalone.localdomain systemd[1]: libpod-conmon-b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02.scope: Deactivated successfully.
Oct 13 15:44:56 standalone.localdomain ceph-mon[29756]: pgmap v3919: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 13 15:44:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:44:56 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:44:56 standalone.localdomain sudo[534490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:44:56 standalone.localdomain sudo[534490]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:44:56 standalone.localdomain podman[534447]: 2025-10-13 15:44:56.81890547 +0000 UTC m=+0.142784005 container remove b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d5036f63-29f3-448c-ad5c-55534cac285e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:44:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:56.848 496978 INFO neutron.agent.dhcp.agent [None req-325a1082-49ec-4698-932a-0845cd16f375 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fff3a0edf95c6e1252ebe647dfc9352a29f36dbdd641538873f57af6736257a2-merged.mount: Deactivated successfully.
Oct 13 15:44:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b437fe4c3732da8f9bebe6cc4aae6cbdb6ce6f601845af2c33737e1b2f538a02-userdata-shm.mount: Deactivated successfully.
Oct 13 15:44:56 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dd5036f63\x2d29f3\x2d448c\x2dad5c\x2d55534cac285e.mount: Deactivated successfully.
Oct 13 15:44:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:57.246 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:57 standalone.localdomain sudo[534490]: pam_unix(sudo:session): session closed for user root
Oct 13 15:44:57 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:44:57.456 2 INFO neutron.agent.securitygroups_rpc [None req-1c36ca46-42b1-43ca-960d-d8a85308c3e2 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:44:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:44:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:44:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:44:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:44:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:44:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:44:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:44:57 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:44:57 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 54e46bd3-819a-4695-8c8d-b23a0387a5d8 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:44:57 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 54e46bd3-819a-4695-8c8d-b23a0387a5d8 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:44:57 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 54e46bd3-819a-4695-8c8d-b23a0387a5d8 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:44:57 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:44:57.546 2 INFO neutron.agent.securitygroups_rpc [None req-1c36ca46-42b1-43ca-960d-d8a85308c3e2 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:44:57 standalone.localdomain sudo[534543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:44:57 standalone.localdomain sudo[534543]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:44:57 standalone.localdomain sudo[534543]: pam_unix(sudo:session): session closed for user root
Oct 13 15:44:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:44:57 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:44:57.687 2 INFO neutron.agent.securitygroups_rpc [None req-7836aef0-a92f-4d44-bd04-aff58472a5d7 c7031b5cba0747f6a1f442505a3c33d5 d2b32b6c0f9d493ab8b8605b9e3d027a - - default default] Security group member updated ['f52959fb-7dca-42e9-8b7f-c748e241201b']
Oct 13 15:44:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3920: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 13 15:44:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:57.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:57 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:44:57 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:44:57 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:44:57 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:44:57 standalone.localdomain dnsmasq[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/addn_hosts - 1 addresses
Oct 13 15:44:57 standalone.localdomain dnsmasq-dhcp[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/host
Oct 13 15:44:57 standalone.localdomain dnsmasq-dhcp[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/opts
Oct 13 15:44:57 standalone.localdomain podman[534578]: 2025-10-13 15:44:57.902402629 +0000 UTC m=+0.053312204 container kill d0577d2d8f4d7b529a5e75a1e00f5e39c0b9ea2f3334393ef5500dbc3a3a649e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-70c4970f-1b6a-4b76-8c1a-6b354d322a22, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:44:57 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:44:57.944 2 INFO neutron.agent.securitygroups_rpc [None req-191348d4-e339-4cfa-b23c-5a5c051954c9 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:44:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:57.971 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:58 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:44:58.561 2 INFO neutron.agent.securitygroups_rpc [None req-4771286a-9684-4224-a739-a2a87bf55b2b eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:44:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:58.599 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:58 standalone.localdomain ceph-mon[29756]: pgmap v3920: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 13 15:44:58 standalone.localdomain dnsmasq[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/addn_hosts - 0 addresses
Oct 13 15:44:58 standalone.localdomain dnsmasq-dhcp[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/host
Oct 13 15:44:58 standalone.localdomain dnsmasq-dhcp[533995]: read /var/lib/neutron/dhcp/70c4970f-1b6a-4b76-8c1a-6b354d322a22/opts
Oct 13 15:44:58 standalone.localdomain podman[534617]: 2025-10-13 15:44:58.92256594 +0000 UTC m=+0.047207373 container kill d0577d2d8f4d7b529a5e75a1e00f5e39c0b9ea2f3334393ef5500dbc3a3a649e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-70c4970f-1b6a-4b76-8c1a-6b354d322a22, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:44:59 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:59Z|00180|binding|INFO|Releasing lport 2707842e-7bb2-4dbc-9083-86897c806804 from this chassis (sb_readonly=0)
Oct 13 15:44:59 standalone.localdomain kernel: device tap2707842e-7b left promiscuous mode
Oct 13 15:44:59 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:44:59Z|00181|binding|INFO|Setting lport 2707842e-7bb2-4dbc-9083-86897c806804 down in Southbound
Oct 13 15:44:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:59.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:59 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:59.098 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-70c4970f-1b6a-4b76-8c1a-6b354d322a22', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70c4970f-1b6a-4b76-8c1a-6b354d322a22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2b32b6c0f9d493ab8b8605b9e3d027a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1db8bacc-37da-43f4-9324-fa0f59a6dea4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=2707842e-7bb2-4dbc-9083-86897c806804) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:44:59 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:59.100 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 2707842e-7bb2-4dbc-9083-86897c806804 in datapath 70c4970f-1b6a-4b76-8c1a-6b354d322a22 unbound from our chassis
Oct 13 15:44:59 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:59.102 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 70c4970f-1b6a-4b76-8c1a-6b354d322a22, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:44:59 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:44:59.102 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[95b17ab2-70b8-4328-ae9a-b2613fb2d583]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:44:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:44:59.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:44:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:44:59.249 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:44:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3921: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 13 15:45:00 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:45:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:45:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:45:00 standalone.localdomain dnsmasq[533995]: exiting on receipt of SIGTERM
Oct 13 15:45:00 standalone.localdomain podman[534657]: 2025-10-13 15:45:00.417155042 +0000 UTC m=+0.048087761 container kill d0577d2d8f4d7b529a5e75a1e00f5e39c0b9ea2f3334393ef5500dbc3a3a649e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-70c4970f-1b6a-4b76-8c1a-6b354d322a22, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:45:00 standalone.localdomain systemd[1]: libpod-d0577d2d8f4d7b529a5e75a1e00f5e39c0b9ea2f3334393ef5500dbc3a3a649e.scope: Deactivated successfully.
Oct 13 15:45:00 standalone.localdomain podman[534671]: 2025-10-13 15:45:00.47801996 +0000 UTC m=+0.042365953 container died d0577d2d8f4d7b529a5e75a1e00f5e39c0b9ea2f3334393ef5500dbc3a3a649e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-70c4970f-1b6a-4b76-8c1a-6b354d322a22, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 15:45:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d0577d2d8f4d7b529a5e75a1e00f5e39c0b9ea2f3334393ef5500dbc3a3a649e-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-720c2a4ec4d0c91692d89e3a2574a1b061e80cc2c4a6004d1c2a819b90d9ea49-merged.mount: Deactivated successfully.
Oct 13 15:45:00 standalone.localdomain podman[534671]: 2025-10-13 15:45:00.529406303 +0000 UTC m=+0.093752316 container remove d0577d2d8f4d7b529a5e75a1e00f5e39c0b9ea2f3334393ef5500dbc3a3a649e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-70c4970f-1b6a-4b76-8c1a-6b354d322a22, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:00 standalone.localdomain systemd[1]: libpod-conmon-d0577d2d8f4d7b529a5e75a1e00f5e39c0b9ea2f3334393ef5500dbc3a3a649e.scope: Deactivated successfully.
Oct 13 15:45:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:00.576 496978 INFO neutron.agent.dhcp.agent [None req-a1b9d8fa-8666-43a2-8965-b1059dbbcca5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9268 DF PROTO=TCP SPT=36610 DPT=9102 SEQ=1942387619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CCFF4B60000000001030307) 
Oct 13 15:45:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:01.121 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:01 standalone.localdomain ceph-mon[29756]: pgmap v3921: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 13 15:45:01 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:45:01 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d70c4970f\x2d1b6a\x2d4b76\x2d8c1a\x2d6b354d322a22.mount: Deactivated successfully.
Oct 13 15:45:01 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:01Z|00182|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:45:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:01.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3922: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:02.003 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:da:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'c6:b9:d4:9e:30:db'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:02.006 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 13 15:45:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:02.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:02.007 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b9e3f7-f5c9-4d48-9eca-66995f72d494, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:45:02 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:02.288 496978 INFO neutron.agent.linux.ip_lib [None req-1fa17b22-5371-456a-9f6f-c58ab5e002cd - - - - - -] Device tapc5d02878-1e cannot be used as it has no MAC address
Oct 13 15:45:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:02.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:02 standalone.localdomain kernel: device tapc5d02878-1e entered promiscuous mode
Oct 13 15:45:02 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:02Z|00183|binding|INFO|Claiming lport c5d02878-1e57-449e-99c9-30733c6645ca for this chassis.
Oct 13 15:45:02 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:02Z|00184|binding|INFO|c5d02878-1e57-449e-99c9-30733c6645ca: Claiming unknown
Oct 13 15:45:02 standalone.localdomain NetworkManager[5962]: <info>  [1760370302.3218] manager: (tapc5d02878-1e): new Generic device (/org/freedesktop/NetworkManager/Devices/41)
Oct 13 15:45:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:02.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:02 standalone.localdomain systemd-udevd[534706]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:45:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:02.332 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-391c09bd-3213-4e10-b743-59304d4b5e3e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-391c09bd-3213-4e10-b743-59304d4b5e3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b9fc9d4-ec8b-4645-bccb-3c2e10132fa0, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=c5d02878-1e57-449e-99c9-30733c6645ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:02.334 378821 INFO neutron.agent.ovn.metadata.agent [-] Port c5d02878-1e57-449e-99c9-30733c6645ca in datapath 391c09bd-3213-4e10-b743-59304d4b5e3e bound to our chassis
Oct 13 15:45:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:02.336 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 391c09bd-3213-4e10-b743-59304d4b5e3e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:02.337 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[21d96452-6c44-43ed-99e5-efe4393b48ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:02 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapc5d02878-1e: No such device
Oct 13 15:45:02 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:02Z|00185|binding|INFO|Setting lport c5d02878-1e57-449e-99c9-30733c6645ca ovn-installed in OVS
Oct 13 15:45:02 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:02Z|00186|binding|INFO|Setting lport c5d02878-1e57-449e-99c9-30733c6645ca up in Southbound
Oct 13 15:45:02 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapc5d02878-1e: No such device
Oct 13 15:45:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:02.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:02 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapc5d02878-1e: No such device
Oct 13 15:45:02 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapc5d02878-1e: No such device
Oct 13 15:45:02 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapc5d02878-1e: No such device
Oct 13 15:45:02 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapc5d02878-1e: No such device
Oct 13 15:45:02 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapc5d02878-1e: No such device
Oct 13 15:45:02 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapc5d02878-1e: No such device
Oct 13 15:45:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:02.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:45:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:02.593 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760370287.5918348, 1f4a5f45-5f65-4051-b24e-5815867a3574 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 13 15:45:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:02.594 2 INFO nova.compute.manager [-] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] VM Stopped (Lifecycle Event)
Oct 13 15:45:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:02.619 2 DEBUG nova.compute.manager [None req-4445b53c-620e-481b-a038-92810881ac9e - - - - - -] [instance: 1f4a5f45-5f65-4051-b24e-5815867a3574] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 15:45:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:02.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:03 standalone.localdomain ceph-mon[29756]: pgmap v3922: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:03 standalone.localdomain podman[534777]: 2025-10-13 15:45:03.412830557 +0000 UTC m=+0.108453754 container create ba7dd8217a82a2844e95fa7fd8f7699a13b44b7ac1a4fa0ec0dc3f62e11c7c82 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-391c09bd-3213-4e10-b743-59304d4b5e3e, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:45:03 standalone.localdomain systemd[1]: Started libpod-conmon-ba7dd8217a82a2844e95fa7fd8f7699a13b44b7ac1a4fa0ec0dc3f62e11c7c82.scope.
Oct 13 15:45:03 standalone.localdomain podman[534777]: 2025-10-13 15:45:03.359865755 +0000 UTC m=+0.055488992 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:03 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:03 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ff4b7daad39223d296efecf70e9d5acb22df721f53f2e0bb26f321849b60da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:03 standalone.localdomain podman[534777]: 2025-10-13 15:45:03.497328953 +0000 UTC m=+0.192952150 container init ba7dd8217a82a2844e95fa7fd8f7699a13b44b7ac1a4fa0ec0dc3f62e11c7c82 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-391c09bd-3213-4e10-b743-59304d4b5e3e, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:45:03 standalone.localdomain podman[534777]: 2025-10-13 15:45:03.506040684 +0000 UTC m=+0.201663901 container start ba7dd8217a82a2844e95fa7fd8f7699a13b44b7ac1a4fa0ec0dc3f62e11c7c82 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-391c09bd-3213-4e10-b743-59304d4b5e3e, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:45:03 standalone.localdomain dnsmasq[534796]: started, version 2.85 cachesize 150
Oct 13 15:45:03 standalone.localdomain dnsmasq[534796]: DNS service limited to local subnets
Oct 13 15:45:03 standalone.localdomain dnsmasq[534796]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:03 standalone.localdomain dnsmasq[534796]: warning: no upstream servers configured
Oct 13 15:45:03 standalone.localdomain dnsmasq-dhcp[534796]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:45:03 standalone.localdomain dnsmasq[534796]: read /var/lib/neutron/dhcp/391c09bd-3213-4e10-b743-59304d4b5e3e/addn_hosts - 0 addresses
Oct 13 15:45:03 standalone.localdomain dnsmasq-dhcp[534796]: read /var/lib/neutron/dhcp/391c09bd-3213-4e10-b743-59304d4b5e3e/host
Oct 13 15:45:03 standalone.localdomain dnsmasq-dhcp[534796]: read /var/lib/neutron/dhcp/391c09bd-3213-4e10-b743-59304d4b5e3e/opts
Oct 13 15:45:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:03.631 496978 INFO neutron.agent.dhcp.agent [None req-26902df2-4dc5-45cb-836e-b84399d91b05 - - - - - -] DHCP configuration for ports {'5d4a039e-c7a1-4b44-a2ec-b8b652de5b37'} is completed
Oct 13 15:45:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3923: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:03 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:03.973 2 INFO neutron.agent.securitygroups_rpc [None req-44b99d7c-1ef8-4f71-919a-4bae4c037500 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:04.038 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:03Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892a59a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890a8640>], id=3060dd7f-ccfb-4ab4-a152-ae9d75898b5b, ip_allocation=immediate, mac_address=fa:16:3e:71:dd:5f, name=tempest-PortsTestJSON-263927198, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:44:59Z, description=, dns_domain=, id=391c09bd-3213-4e10-b743-59304d4b5e3e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-745531828, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16285, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1246, status=ACTIVE, subnets=['ba0928a6-a3b8-430e-89fd-748c172d32b6'], tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:45:01Z, vlan_transparent=None, network_id=391c09bd-3213-4e10-b743-59304d4b5e3e, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['e5246d74-7a72-4d89-9669-125c9e305a6a'], standard_attr_id=1266, status=DOWN, tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:45:03Z on network 391c09bd-3213-4e10-b743-59304d4b5e3e
Oct 13 15:45:04 standalone.localdomain dnsmasq[534796]: read /var/lib/neutron/dhcp/391c09bd-3213-4e10-b743-59304d4b5e3e/addn_hosts - 1 addresses
Oct 13 15:45:04 standalone.localdomain dnsmasq-dhcp[534796]: read /var/lib/neutron/dhcp/391c09bd-3213-4e10-b743-59304d4b5e3e/host
Oct 13 15:45:04 standalone.localdomain dnsmasq-dhcp[534796]: read /var/lib/neutron/dhcp/391c09bd-3213-4e10-b743-59304d4b5e3e/opts
Oct 13 15:45:04 standalone.localdomain podman[534814]: 2025-10-13 15:45:04.29193832 +0000 UTC m=+0.067049733 container kill ba7dd8217a82a2844e95fa7fd8f7699a13b44b7ac1a4fa0ec0dc3f62e11c7c82 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-391c09bd-3213-4e10-b743-59304d4b5e3e, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:45:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:04.486 496978 INFO neutron.agent.dhcp.agent [None req-ee1e734d-51be-4514-85f2-cc0146bc59ee - - - - - -] DHCP configuration for ports {'3060dd7f-ccfb-4ab4-a152-ae9d75898b5b'} is completed
Oct 13 15:45:04 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:04.618 2 INFO neutron.agent.securitygroups_rpc [None req-66682c8a-719f-4047-b42b-70ef642684ba eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:04 standalone.localdomain dnsmasq[534796]: read /var/lib/neutron/dhcp/391c09bd-3213-4e10-b743-59304d4b5e3e/addn_hosts - 0 addresses
Oct 13 15:45:04 standalone.localdomain podman[534853]: 2025-10-13 15:45:04.930620653 +0000 UTC m=+0.045639665 container kill ba7dd8217a82a2844e95fa7fd8f7699a13b44b7ac1a4fa0ec0dc3f62e11c7c82 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-391c09bd-3213-4e10-b743-59304d4b5e3e, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:45:04 standalone.localdomain dnsmasq-dhcp[534796]: read /var/lib/neutron/dhcp/391c09bd-3213-4e10-b743-59304d4b5e3e/host
Oct 13 15:45:04 standalone.localdomain dnsmasq-dhcp[534796]: read /var/lib/neutron/dhcp/391c09bd-3213-4e10-b743-59304d4b5e3e/opts
Oct 13 15:45:04 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:45:05 standalone.localdomain systemd[1]: tmp-crun.E4JnN8.mount: Deactivated successfully.
Oct 13 15:45:05 standalone.localdomain podman[534867]: 2025-10-13 15:45:05.037998812 +0000 UTC m=+0.081256326 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:45:05 standalone.localdomain podman[534867]: 2025-10-13 15:45:05.115681365 +0000 UTC m=+0.158938889 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, container_name=ovn_controller)
Oct 13 15:45:05 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:45:05 standalone.localdomain dnsmasq[534796]: exiting on receipt of SIGTERM
Oct 13 15:45:05 standalone.localdomain podman[534915]: 2025-10-13 15:45:05.31555274 +0000 UTC m=+0.049543436 container kill ba7dd8217a82a2844e95fa7fd8f7699a13b44b7ac1a4fa0ec0dc3f62e11c7c82 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-391c09bd-3213-4e10-b743-59304d4b5e3e, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:45:05 standalone.localdomain systemd[1]: libpod-ba7dd8217a82a2844e95fa7fd8f7699a13b44b7ac1a4fa0ec0dc3f62e11c7c82.scope: Deactivated successfully.
Oct 13 15:45:05 standalone.localdomain ceph-mon[29756]: pgmap v3923: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:05 standalone.localdomain podman[534928]: 2025-10-13 15:45:05.367617604 +0000 UTC m=+0.043647713 container died ba7dd8217a82a2844e95fa7fd8f7699a13b44b7ac1a4fa0ec0dc3f62e11c7c82 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-391c09bd-3213-4e10-b743-59304d4b5e3e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:45:05 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:05Z|00187|binding|INFO|Removing iface tapc5d02878-1e ovn-installed in OVS
Oct 13 15:45:05 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:05Z|00188|binding|INFO|Removing lport c5d02878-1e57-449e-99c9-30733c6645ca ovn-installed in OVS
Oct 13 15:45:05 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:05.381 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 589f252f-da79-487c-a8d7-208b901b2dd3 with type ""
Oct 13 15:45:05 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:05.382 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-391c09bd-3213-4e10-b743-59304d4b5e3e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-391c09bd-3213-4e10-b743-59304d4b5e3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b9fc9d4-ec8b-4645-bccb-3c2e10132fa0, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=c5d02878-1e57-449e-99c9-30733c6645ca) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:05 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:05.384 378821 INFO neutron.agent.ovn.metadata.agent [-] Port c5d02878-1e57-449e-99c9-30733c6645ca in datapath 391c09bd-3213-4e10-b743-59304d4b5e3e unbound from our chassis
Oct 13 15:45:05 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:05.386 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 391c09bd-3213-4e10-b743-59304d4b5e3e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:45:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:05.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:05 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:05.390 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[b61c7a04-29b9-43e7-9b32-5be8ab7cba8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:05 standalone.localdomain podman[534928]: 2025-10-13 15:45:05.407113356 +0000 UTC m=+0.083143445 container cleanup ba7dd8217a82a2844e95fa7fd8f7699a13b44b7ac1a4fa0ec0dc3f62e11c7c82 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-391c09bd-3213-4e10-b743-59304d4b5e3e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:45:05 standalone.localdomain systemd[1]: libpod-conmon-ba7dd8217a82a2844e95fa7fd8f7699a13b44b7ac1a4fa0ec0dc3f62e11c7c82.scope: Deactivated successfully.
Oct 13 15:45:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b3ff4b7daad39223d296efecf70e9d5acb22df721f53f2e0bb26f321849b60da-merged.mount: Deactivated successfully.
Oct 13 15:45:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba7dd8217a82a2844e95fa7fd8f7699a13b44b7ac1a4fa0ec0dc3f62e11c7c82-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:05 standalone.localdomain podman[534935]: 2025-10-13 15:45:05.433724856 +0000 UTC m=+0.093065494 container remove ba7dd8217a82a2844e95fa7fd8f7699a13b44b7ac1a4fa0ec0dc3f62e11c7c82 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-391c09bd-3213-4e10-b743-59304d4b5e3e, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:05 standalone.localdomain kernel: device tapc5d02878-1e left promiscuous mode
Oct 13 15:45:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:05.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:05.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:05.474 496978 INFO neutron.agent.dhcp.agent [None req-2ec211fd-4bad-4a6f-b31b-8968e3149e0e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:05 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d391c09bd\x2d3213\x2d4e10\x2db743\x2d59304d4b5e3e.mount: Deactivated successfully.
Oct 13 15:45:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:05.510 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:05 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:05Z|00189|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:45:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3924: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:05.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:06.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:06.964 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:45:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:06.965 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:45:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:06.966 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:45:07 standalone.localdomain ceph-mon[29756]: pgmap v3924: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:45:07 standalone.localdomain podman[534959]: 2025-10-13 15:45:07.519721175 +0000 UTC m=+0.079639744 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9-minimal, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Oct 13 15:45:07 standalone.localdomain podman[534959]: 2025-10-13 15:45:07.53974574 +0000 UTC m=+0.099664309 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, release=1755695350, distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git)
Oct 13 15:45:07 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:45:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:45:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3925: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:07.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:08.229 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:45:08 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:08.311 496978 INFO neutron.agent.linux.ip_lib [None req-13962abe-339e-4104-a8c0-550ae4c75b63 - - - - - -] Device tap057f5038-f0 cannot be used as it has no MAC address
Oct 13 15:45:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:08.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:08 standalone.localdomain kernel: device tap057f5038-f0 entered promiscuous mode
Oct 13 15:45:08 standalone.localdomain NetworkManager[5962]: <info>  [1760370308.3452] manager: (tap057f5038-f0): new Generic device (/org/freedesktop/NetworkManager/Devices/42)
Oct 13 15:45:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:08Z|00190|binding|INFO|Claiming lport 057f5038-f0df-4fa8-b115-2be08de0b330 for this chassis.
Oct 13 15:45:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:08Z|00191|binding|INFO|057f5038-f0df-4fa8-b115-2be08de0b330: Claiming unknown
Oct 13 15:45:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:08.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:08 standalone.localdomain systemd-udevd[534989]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:45:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:08.361 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c7b3640-eb0e-4a66-89c4-ba3b99a5775e, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=057f5038-f0df-4fa8-b115-2be08de0b330) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:08.363 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 057f5038-f0df-4fa8-b115-2be08de0b330 in datapath 0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc bound to our chassis
Oct 13 15:45:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:08.366 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port ba173817-600e-47e4-9406-276f39d58040 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:45:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:08.366 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:45:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:08.369 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[13ba9bd4-d42e-4b58-a78d-f822bc84a5ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:08 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap057f5038-f0: No such device
Oct 13 15:45:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:08Z|00192|binding|INFO|Setting lport 057f5038-f0df-4fa8-b115-2be08de0b330 ovn-installed in OVS
Oct 13 15:45:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:08Z|00193|binding|INFO|Setting lport 057f5038-f0df-4fa8-b115-2be08de0b330 up in Southbound
Oct 13 15:45:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:08.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:08 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap057f5038-f0: No such device
Oct 13 15:45:08 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap057f5038-f0: No such device
Oct 13 15:45:08 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap057f5038-f0: No such device
Oct 13 15:45:08 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap057f5038-f0: No such device
Oct 13 15:45:08 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap057f5038-f0: No such device
Oct 13 15:45:08 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap057f5038-f0: No such device
Oct 13 15:45:08 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap057f5038-f0: No such device
Oct 13 15:45:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:08.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:08 standalone.localdomain ceph-mon[29756]: pgmap v3925: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:09.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:45:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:09.228 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:45:09 standalone.localdomain podman[535060]: 
Oct 13 15:45:09 standalone.localdomain podman[535060]: 2025-10-13 15:45:09.275646719 +0000 UTC m=+0.066566408 container create 2eeb2bdb2f1459d962953b55066a5204c432b48aa112feb7cf200427f1a9b885 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:45:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:45:09 standalone.localdomain systemd[1]: Started libpod-conmon-2eeb2bdb2f1459d962953b55066a5204c432b48aa112feb7cf200427f1a9b885.scope.
Oct 13 15:45:09 standalone.localdomain systemd[1]: tmp-crun.WkzE23.mount: Deactivated successfully.
Oct 13 15:45:09 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:09 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea556e7a16e7cdeb0952f176764e2c296b78cdc52aa285e1c28ec8d6dad53/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:09 standalone.localdomain podman[535060]: 2025-10-13 15:45:09.235823537 +0000 UTC m=+0.026743246 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:09 standalone.localdomain podman[535060]: 2025-10-13 15:45:09.341765992 +0000 UTC m=+0.132685681 container init 2eeb2bdb2f1459d962953b55066a5204c432b48aa112feb7cf200427f1a9b885 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:45:09 standalone.localdomain podman[535060]: 2025-10-13 15:45:09.354171408 +0000 UTC m=+0.145091097 container start 2eeb2bdb2f1459d962953b55066a5204c432b48aa112feb7cf200427f1a9b885 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:45:09 standalone.localdomain dnsmasq[535090]: started, version 2.85 cachesize 150
Oct 13 15:45:09 standalone.localdomain dnsmasq[535090]: DNS service limited to local subnets
Oct 13 15:45:09 standalone.localdomain dnsmasq[535090]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:09 standalone.localdomain dnsmasq[535090]: warning: no upstream servers configured
Oct 13 15:45:09 standalone.localdomain dnsmasq-dhcp[535090]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:45:09 standalone.localdomain dnsmasq[535090]: read /var/lib/neutron/dhcp/0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc/addn_hosts - 0 addresses
Oct 13 15:45:09 standalone.localdomain dnsmasq-dhcp[535090]: read /var/lib/neutron/dhcp/0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc/host
Oct 13 15:45:09 standalone.localdomain dnsmasq-dhcp[535090]: read /var/lib/neutron/dhcp/0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc/opts
Oct 13 15:45:09 standalone.localdomain podman[535074]: 2025-10-13 15:45:09.378673633 +0000 UTC m=+0.067904000 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 15:45:09 standalone.localdomain podman[535074]: 2025-10-13 15:45:09.416829092 +0000 UTC m=+0.106059459 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, tcib_managed=true)
Oct 13 15:45:09 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:09.419 496978 INFO neutron.agent.dhcp.agent [None req-d836c71e-91a4-48b0-86ae-ac35fa2de7a4 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:08Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188924e0d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188924ed60>], id=f057fd2a-f164-49c9-8e46-c7e5a08e8fb0, ip_allocation=immediate, mac_address=fa:16:3e:2a:d8:df, name=tempest-PortsTestJSON-1444966935, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:45:05Z, description=, dns_domain=, id=0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1442021458, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=62223, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1277, status=ACTIVE, subnets=['2a06b993-7941-4c1a-a180-01eb8867b793'], tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:45:06Z, vlan_transparent=None, network_id=0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1284, status=DOWN, tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:45:08Z on network 0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc
Oct 13 15:45:09 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:45:09 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:09.511 496978 INFO neutron.agent.dhcp.agent [None req-1bb330e1-0237-4bc0-9df3-891b42d97ee6 - - - - - -] DHCP configuration for ports {'7d3656f2-2a4b-47aa-886a-11470c51df1a'} is completed
Oct 13 15:45:09 standalone.localdomain dnsmasq[535090]: read /var/lib/neutron/dhcp/0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc/addn_hosts - 1 addresses
Oct 13 15:45:09 standalone.localdomain dnsmasq-dhcp[535090]: read /var/lib/neutron/dhcp/0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc/host
Oct 13 15:45:09 standalone.localdomain dnsmasq-dhcp[535090]: read /var/lib/neutron/dhcp/0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc/opts
Oct 13 15:45:09 standalone.localdomain podman[535115]: 2025-10-13 15:45:09.683600154 +0000 UTC m=+0.056506764 container kill 2eeb2bdb2f1459d962953b55066a5204c432b48aa112feb7cf200427f1a9b885 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:45:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3926: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:09 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:09.929 496978 INFO neutron.agent.dhcp.agent [None req-bdc80943-79ea-4a6d-b8a8-4633c9d20e4b - - - - - -] DHCP configuration for ports {'f057fd2a-f164-49c9-8e46-c7e5a08e8fb0'} is completed
Oct 13 15:45:10 standalone.localdomain dnsmasq[535090]: read /var/lib/neutron/dhcp/0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc/addn_hosts - 0 addresses
Oct 13 15:45:10 standalone.localdomain dnsmasq-dhcp[535090]: read /var/lib/neutron/dhcp/0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc/host
Oct 13 15:45:10 standalone.localdomain podman[535155]: 2025-10-13 15:45:10.033630582 +0000 UTC m=+0.046964155 container kill 2eeb2bdb2f1459d962953b55066a5204c432b48aa112feb7cf200427f1a9b885 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:45:10 standalone.localdomain dnsmasq-dhcp[535090]: read /var/lib/neutron/dhcp/0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc/opts
Oct 13 15:45:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:10.229 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:45:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:10.229 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:45:10 standalone.localdomain systemd[1]: tmp-crun.IgG7Mu.mount: Deactivated successfully.
Oct 13 15:45:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:10.353 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:45:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:10.354 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:45:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:10.355 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:45:10 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:10Z|00194|binding|INFO|Removing iface tap057f5038-f0 ovn-installed in OVS
Oct 13 15:45:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:10.422 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port ba173817-600e-47e4-9406-276f39d58040 with type ""
Oct 13 15:45:10 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:10Z|00195|binding|INFO|Removing lport 057f5038-f0df-4fa8-b115-2be08de0b330 ovn-installed in OVS
Oct 13 15:45:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:10.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:10.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:10.430 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c7b3640-eb0e-4a66-89c4-ba3b99a5775e, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=057f5038-f0df-4fa8-b115-2be08de0b330) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:10.433 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 057f5038-f0df-4fa8-b115-2be08de0b330 in datapath 0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc unbound from our chassis
Oct 13 15:45:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:10.436 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:45:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:10.437 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b7407d-e0b2-409b-8729-b585efdc4781]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:10 standalone.localdomain systemd[1]: tmp-crun.QyLtLm.mount: Deactivated successfully.
Oct 13 15:45:10 standalone.localdomain dnsmasq[535090]: exiting on receipt of SIGTERM
Oct 13 15:45:10 standalone.localdomain systemd[1]: libpod-2eeb2bdb2f1459d962953b55066a5204c432b48aa112feb7cf200427f1a9b885.scope: Deactivated successfully.
Oct 13 15:45:10 standalone.localdomain podman[535194]: 2025-10-13 15:45:10.523575456 +0000 UTC m=+0.057303549 container kill 2eeb2bdb2f1459d962953b55066a5204c432b48aa112feb7cf200427f1a9b885 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:10 standalone.localdomain podman[535207]: 2025-10-13 15:45:10.580924475 +0000 UTC m=+0.049065892 container died 2eeb2bdb2f1459d962953b55066a5204c432b48aa112feb7cf200427f1a9b885 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 13 15:45:10 standalone.localdomain podman[535207]: 2025-10-13 15:45:10.62019828 +0000 UTC m=+0.088339677 container cleanup 2eeb2bdb2f1459d962953b55066a5204c432b48aa112feb7cf200427f1a9b885 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:45:10 standalone.localdomain systemd[1]: libpod-conmon-2eeb2bdb2f1459d962953b55066a5204c432b48aa112feb7cf200427f1a9b885.scope: Deactivated successfully.
Oct 13 15:45:10 standalone.localdomain podman[535214]: 2025-10-13 15:45:10.660712693 +0000 UTC m=+0.115519404 container remove 2eeb2bdb2f1459d962953b55066a5204c432b48aa112feb7cf200427f1a9b885 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c58ccd2-7c66-48bd-8bbd-33ff2d4d0fdc, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:45:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:10.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:10 standalone.localdomain kernel: device tap057f5038-f0 left promiscuous mode
Oct 13 15:45:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:10.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:10 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:10.706 496978 INFO neutron.agent.dhcp.agent [None req-779dc80f-dee9-4c2d-a702-1d692d89ed4e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:10 standalone.localdomain ceph-mon[29756]: pgmap v3926: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:10 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:10.832 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:11.008 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:45:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:11.024 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:45:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:11.024 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:45:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:11.025 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:45:11 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:11Z|00196|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:45:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:11.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:11.227 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:45:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:11.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:45:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-81aea556e7a16e7cdeb0952f176764e2c296b78cdc52aa285e1c28ec8d6dad53-merged.mount: Deactivated successfully.
Oct 13 15:45:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2eeb2bdb2f1459d962953b55066a5204c432b48aa112feb7cf200427f1a9b885-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:11 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d0c58ccd2\x2d7c66\x2d48bd\x2d8bbd\x2d33ff2d4d0fdc.mount: Deactivated successfully.
Oct 13 15:45:11 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:11.361 2 INFO neutron.agent.securitygroups_rpc [None req-7f529f27-948a-4782-900c-ff312e8ec6b9 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:11 standalone.localdomain podman[467099]: time="2025-10-13T15:45:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:45:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:11.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:11 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:11.600 2 INFO neutron.agent.securitygroups_rpc [None req-7d1bf53c-5f6c-46da-a9ad-002b382f8ad3 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:11 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:11.618 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:45:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:45:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3927: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:45:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49137 "" "Go-http-client/1.1"
Oct 13 15:45:12 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:12.019 2 INFO neutron.agent.securitygroups_rpc [None req-4bac261b-cac9-4cca-bc3a-39a4603d8bf9 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:12.223 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:45:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:12.251 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:45:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:12.252 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:45:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:12.271 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:45:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:12.272 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:45:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:12.273 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:45:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:12.273 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:45:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:12.273 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:45:12 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:12.442 2 INFO neutron.agent.securitygroups_rpc [None req-e16ab0f5-b26e-4c4b-a038-c9cb8a6719d1 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:12 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:12.473 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:45:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:45:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3096616755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:45:12 standalone.localdomain ceph-mon[29756]: pgmap v3927: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:12.785 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:45:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:12.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:12.918 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:45:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:12.919 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:45:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:12.919 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:45:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:12.924 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:45:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:12.924 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:45:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:45:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:45:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:45:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:45:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:45:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:45:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:45:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:45:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:45:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:45:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:45:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:45:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:13.090 2 INFO neutron.agent.securitygroups_rpc [None req-38d336ca-29e8-424a-86fc-c6bec1c9a2e1 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.129 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.130 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9205MB free_disk=6.9495849609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.131 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.131 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.205 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.205 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.205 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.206 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.269 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:45:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:13.625 496978 INFO neutron.agent.linux.ip_lib [None req-0d2cc74f-6e48-439c-b205-f6bd89f2c166 - - - - - -] Device tap6250f4f3-33 cannot be used as it has no MAC address
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:13 standalone.localdomain kernel: device tap6250f4f3-33 entered promiscuous mode
Oct 13 15:45:13 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:13Z|00197|binding|INFO|Claiming lport 6250f4f3-3337-4a22-bb48-1353a8d9c25a for this chassis.
Oct 13 15:45:13 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:13Z|00198|binding|INFO|6250f4f3-3337-4a22-bb48-1353a8d9c25a: Claiming unknown
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:13 standalone.localdomain NetworkManager[5962]: <info>  [1760370313.6528] manager: (tap6250f4f3-33): new Generic device (/org/freedesktop/NetworkManager/Devices/43)
Oct 13 15:45:13 standalone.localdomain systemd-udevd[535291]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:45:13 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:13.664 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:ffff::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-4ed029b4-cbf3-4090-b43d-8017c56a090d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ed029b4-cbf3-4090-b43d-8017c56a090d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=749d7bc7-f648-4d5b-9814-dc042e11a679, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=6250f4f3-3337-4a22-bb48-1353a8d9c25a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:13 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:13.665 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 6250f4f3-3337-4a22-bb48-1353a8d9c25a in datapath 4ed029b4-cbf3-4090-b43d-8017c56a090d bound to our chassis
Oct 13 15:45:13 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:13.666 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4ed029b4-cbf3-4090-b43d-8017c56a090d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:13 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:13.667 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6648c0-d033-43cf-9e0f-f7d844c77e18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:13 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6250f4f3-33: No such device
Oct 13 15:45:13 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:13Z|00199|binding|INFO|Setting lport 6250f4f3-3337-4a22-bb48-1353a8d9c25a ovn-installed in OVS
Oct 13 15:45:13 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:13Z|00200|binding|INFO|Setting lport 6250f4f3-3337-4a22-bb48-1353a8d9c25a up in Southbound
Oct 13 15:45:13 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6250f4f3-33: No such device
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:13 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6250f4f3-33: No such device
Oct 13 15:45:13 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6250f4f3-33: No such device
Oct 13 15:45:13 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6250f4f3-33: No such device
Oct 13 15:45:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3928: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:13 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6250f4f3-33: No such device
Oct 13 15:45:13 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6250f4f3-33: No such device
Oct 13 15:45:13 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6250f4f3-33: No such device
Oct 13 15:45:13 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:45:13 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2139424470' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.764 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.771 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:45:13 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3096616755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:45:13 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2139424470' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:45:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:13.782 2 INFO neutron.agent.securitygroups_rpc [None req-6f671dce-3fd8-4899-a74c-037c3191e9bf 432171b00b9e4155b0a6545bfa78781f 305b71cfb2a247209481ca6b4fdffffc - - default default] Security group member updated ['7bb9144c-44e5-4a09-89f5-c3c5e4f2b9db']
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.787 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.808 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:45:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:13.809 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:45:14 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:14.147 2 INFO neutron.agent.securitygroups_rpc [None req-656fd67e-0d42-47ae-a183-fb3f81dae7dd eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:14.183 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:14 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:14.319 2 INFO neutron.agent.securitygroups_rpc [None req-f5a18d4f-4be2-4f05-958c-777a5183ae89 432171b00b9e4155b0a6545bfa78781f 305b71cfb2a247209481ca6b4fdffffc - - default default] Security group member updated ['7bb9144c-44e5-4a09-89f5-c3c5e4f2b9db']
Oct 13 15:45:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:14.347 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:14 standalone.localdomain podman[535365]: 
Oct 13 15:45:14 standalone.localdomain podman[535365]: 2025-10-13 15:45:14.580030339 +0000 UTC m=+0.079170370 container create 37686b1b6f3156610d752a2b558d977dcd87d1d5c714612db28da10f20e857ee (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ed029b4-cbf3-4090-b43d-8017c56a090d, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:45:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:45:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:45:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:45:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:45:14 standalone.localdomain systemd[1]: Started libpod-conmon-37686b1b6f3156610d752a2b558d977dcd87d1d5c714612db28da10f20e857ee.scope.
Oct 13 15:45:14 standalone.localdomain podman[535365]: 2025-10-13 15:45:14.535231522 +0000 UTC m=+0.034371593 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:14 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:14 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/703063e5e507588f12aa33d1faad86b9b7dbb68cc9ec5d96332c9acd2096132a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:14 standalone.localdomain podman[535365]: 2025-10-13 15:45:14.662200633 +0000 UTC m=+0.161340664 container init 37686b1b6f3156610d752a2b558d977dcd87d1d5c714612db28da10f20e857ee (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ed029b4-cbf3-4090-b43d-8017c56a090d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:14 standalone.localdomain dnsmasq[535434]: started, version 2.85 cachesize 150
Oct 13 15:45:14 standalone.localdomain dnsmasq[535434]: DNS service limited to local subnets
Oct 13 15:45:14 standalone.localdomain dnsmasq[535434]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:14 standalone.localdomain dnsmasq[535434]: warning: no upstream servers configured
Oct 13 15:45:14 standalone.localdomain dnsmasq-dhcp[535434]: DHCPv6, static leases only on 2001:db8:0:ffff::, lease time 1d
Oct 13 15:45:14 standalone.localdomain dnsmasq[535434]: read /var/lib/neutron/dhcp/4ed029b4-cbf3-4090-b43d-8017c56a090d/addn_hosts - 0 addresses
Oct 13 15:45:14 standalone.localdomain dnsmasq-dhcp[535434]: read /var/lib/neutron/dhcp/4ed029b4-cbf3-4090-b43d-8017c56a090d/host
Oct 13 15:45:14 standalone.localdomain dnsmasq-dhcp[535434]: read /var/lib/neutron/dhcp/4ed029b4-cbf3-4090-b43d-8017c56a090d/opts
Oct 13 15:45:14 standalone.localdomain podman[535383]: 2025-10-13 15:45:14.693908022 +0000 UTC m=+0.068038663 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.component=openstack-swift-account-container, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, container_name=swift_account_server, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1, architecture=x86_64, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, 
vendor=Red Hat, Inc., batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 15:45:14 standalone.localdomain podman[535393]: 2025-10-13 15:45:14.71759202 +0000 UTC m=+0.084052053 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:45:14 standalone.localdomain podman[535393]: 2025-10-13 15:45:14.722537835 +0000 UTC m=+0.088997958 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:45:14 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:45:14 standalone.localdomain podman[535365]: 2025-10-13 15:45:14.771013098 +0000 UTC m=+0.270153129 container start 37686b1b6f3156610d752a2b558d977dcd87d1d5c714612db28da10f20e857ee (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ed029b4-cbf3-4090-b43d-8017c56a090d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:45:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:14.785 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:45:14 standalone.localdomain ceph-mon[29756]: pgmap v3928: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:14 standalone.localdomain podman[535380]: 2025-10-13 15:45:14.810804169 +0000 UTC m=+0.191640800 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, container_name=swift_object_server, distribution-scope=public, name=rhosp17/openstack-swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, com.redhat.component=openstack-swift-object-container, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, release=1)
Oct 13 15:45:14 standalone.localdomain podman[535381]: 2025-10-13 15:45:14.870778949 +0000 UTC m=+0.250468414 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 13 15:45:14 standalone.localdomain podman[535381]: 2025-10-13 15:45:14.89068361 +0000 UTC m=+0.270373075 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 13 15:45:14 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:45:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:14.917 496978 INFO neutron.agent.dhcp.agent [None req-7efea360-f10b-4d49-9f34-011c8c5fa77b - - - - - -] DHCP configuration for ports {'053ca28f-5513-43e4-9692-c36f60de900f'} is completed
Oct 13 15:45:14 standalone.localdomain podman[535383]: 2025-10-13 15:45:14.941985231 +0000 UTC m=+0.316115862 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, release=1, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9)
Oct 13 15:45:14 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:45:14 standalone.localdomain podman[535382]: 2025-10-13 15:45:14.947970397 +0000 UTC m=+0.325869326 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-swift-container-container, summary=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, vendor=Red Hat, Inc., release=1, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, name=rhosp17/openstack-swift-container, build-date=2025-07-21T15:54:32, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, version=17.1.9, container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:45:15 standalone.localdomain podman[535380]: 2025-10-13 15:45:15.050846376 +0000 UTC m=+0.431683047 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, release=1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9)
Oct 13 15:45:15 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:45:15 standalone.localdomain podman[535382]: 2025-10-13 15:45:15.188773399 +0000 UTC m=+0.566672368 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-container-container, version=17.1.9, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, tcib_managed=true, vendor=Red Hat, Inc., container_name=swift_container_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:45:15 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:45:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:15.431 496978 INFO neutron.agent.linux.ip_lib [None req-c8a02ffd-6455-48c3-9b55-c2b99e132983 - - - - - -] Device tapd6721b0c-85 cannot be used as it has no MAC address
Oct 13 15:45:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:15.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:15 standalone.localdomain kernel: device tapd6721b0c-85 entered promiscuous mode
Oct 13 15:45:15 standalone.localdomain NetworkManager[5962]: <info>  [1760370315.4656] manager: (tapd6721b0c-85): new Generic device (/org/freedesktop/NetworkManager/Devices/44)
Oct 13 15:45:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:15.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:15Z|00201|binding|INFO|Claiming lport d6721b0c-8549-4f92-a3fd-952c9498db40 for this chassis.
Oct 13 15:45:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:15Z|00202|binding|INFO|d6721b0c-8549-4f92-a3fd-952c9498db40: Claiming unknown
Oct 13 15:45:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:15.479 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-778d4861-e0d8-4606-91c7-2ad0fc59fb7c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-778d4861-e0d8-4606-91c7-2ad0fc59fb7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed20abd6-da10-4002-98f8-fc05a83627f1, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=d6721b0c-8549-4f92-a3fd-952c9498db40) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:15.480 378821 INFO neutron.agent.ovn.metadata.agent [-] Port d6721b0c-8549-4f92-a3fd-952c9498db40 in datapath 778d4861-e0d8-4606-91c7-2ad0fc59fb7c bound to our chassis
Oct 13 15:45:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:15.481 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 778d4861-e0d8-4606-91c7-2ad0fc59fb7c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:15.482 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[e1618bc6-9c07-4aaa-b143-f6d7a65babbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:15Z|00203|binding|INFO|Setting lport d6721b0c-8549-4f92-a3fd-952c9498db40 ovn-installed in OVS
Oct 13 15:45:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:15Z|00204|binding|INFO|Setting lport d6721b0c-8549-4f92-a3fd-952c9498db40 up in Southbound
Oct 13 15:45:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:15.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:15.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:15 standalone.localdomain systemd[1]: tmp-crun.lWPcRj.mount: Deactivated successfully.
Oct 13 15:45:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3929: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:16Z|00205|binding|INFO|Removing iface tapd6721b0c-85 ovn-installed in OVS
Oct 13 15:45:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:16Z|00206|binding|INFO|Removing lport d6721b0c-8549-4f92-a3fd-952c9498db40 ovn-installed in OVS
Oct 13 15:45:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:16.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:16.146 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port b3cd530f-2ba7-4d75-905e-1bb4907eee97 with type ""
Oct 13 15:45:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:16.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:16.155 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-778d4861-e0d8-4606-91c7-2ad0fc59fb7c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-778d4861-e0d8-4606-91c7-2ad0fc59fb7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed20abd6-da10-4002-98f8-fc05a83627f1, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=d6721b0c-8549-4f92-a3fd-952c9498db40) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:16.158 378821 INFO neutron.agent.ovn.metadata.agent [-] Port d6721b0c-8549-4f92-a3fd-952c9498db40 in datapath 778d4861-e0d8-4606-91c7-2ad0fc59fb7c unbound from our chassis
Oct 13 15:45:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:16.160 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 778d4861-e0d8-4606-91c7-2ad0fc59fb7c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:16.161 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2034ec-a912-49aa-8dfa-a92ea3214a77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:16Z|00207|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:45:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:16.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:16 standalone.localdomain podman[535565]: 
Oct 13 15:45:16 standalone.localdomain podman[535565]: 2025-10-13 15:45:16.515788422 +0000 UTC m=+0.101502287 container create 72711804070a1e7497b62a39a977f5b460465b6758eb91d78de3b85e1d8b2e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-778d4861-e0d8-4606-91c7-2ad0fc59fb7c, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:45:16 standalone.localdomain systemd[1]: Started libpod-conmon-72711804070a1e7497b62a39a977f5b460465b6758eb91d78de3b85e1d8b2e3c.scope.
Oct 13 15:45:16 standalone.localdomain podman[535565]: 2025-10-13 15:45:16.464861654 +0000 UTC m=+0.050575569 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f333815574e304dae5aa551b4c4b579c7b689447bc5726e95d8a489f643ed12/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:16 standalone.localdomain podman[535565]: 2025-10-13 15:45:16.592161145 +0000 UTC m=+0.177875020 container init 72711804070a1e7497b62a39a977f5b460465b6758eb91d78de3b85e1d8b2e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-778d4861-e0d8-4606-91c7-2ad0fc59fb7c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:45:16 standalone.localdomain podman[535565]: 2025-10-13 15:45:16.599173164 +0000 UTC m=+0.184887039 container start 72711804070a1e7497b62a39a977f5b460465b6758eb91d78de3b85e1d8b2e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-778d4861-e0d8-4606-91c7-2ad0fc59fb7c, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:16 standalone.localdomain dnsmasq[535584]: started, version 2.85 cachesize 150
Oct 13 15:45:16 standalone.localdomain dnsmasq[535584]: DNS service limited to local subnets
Oct 13 15:45:16 standalone.localdomain dnsmasq[535584]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:16 standalone.localdomain dnsmasq[535584]: warning: no upstream servers configured
Oct 13 15:45:16 standalone.localdomain dnsmasq-dhcp[535584]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:45:16 standalone.localdomain dnsmasq[535584]: read /var/lib/neutron/dhcp/778d4861-e0d8-4606-91c7-2ad0fc59fb7c/addn_hosts - 0 addresses
Oct 13 15:45:16 standalone.localdomain dnsmasq-dhcp[535584]: read /var/lib/neutron/dhcp/778d4861-e0d8-4606-91c7-2ad0fc59fb7c/host
Oct 13 15:45:16 standalone.localdomain dnsmasq-dhcp[535584]: read /var/lib/neutron/dhcp/778d4861-e0d8-4606-91c7-2ad0fc59fb7c/opts
Oct 13 15:45:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:16.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:16.762 496978 INFO neutron.agent.dhcp.agent [None req-fc8fbe0e-d921-47f7-aa66-fc546d2e23ad - - - - - -] DHCP configuration for ports {'2e5320b8-0c39-4f04-92f6-3f80f054140c'} is completed
Oct 13 15:45:16 standalone.localdomain ceph-mon[29756]: pgmap v3929: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:16 standalone.localdomain dnsmasq[535584]: exiting on receipt of SIGTERM
Oct 13 15:45:16 standalone.localdomain podman[535602]: 2025-10-13 15:45:16.867760572 +0000 UTC m=+0.066456424 container kill 72711804070a1e7497b62a39a977f5b460465b6758eb91d78de3b85e1d8b2e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-778d4861-e0d8-4606-91c7-2ad0fc59fb7c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:45:16 standalone.localdomain systemd[1]: libpod-72711804070a1e7497b62a39a977f5b460465b6758eb91d78de3b85e1d8b2e3c.scope: Deactivated successfully.
Oct 13 15:45:16 standalone.localdomain podman[535616]: 2025-10-13 15:45:16.941952757 +0000 UTC m=+0.061995296 container died 72711804070a1e7497b62a39a977f5b460465b6758eb91d78de3b85e1d8b2e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-778d4861-e0d8-4606-91c7-2ad0fc59fb7c, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:16 standalone.localdomain podman[535616]: 2025-10-13 15:45:16.982806821 +0000 UTC m=+0.102849360 container cleanup 72711804070a1e7497b62a39a977f5b460465b6758eb91d78de3b85e1d8b2e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-778d4861-e0d8-4606-91c7-2ad0fc59fb7c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:45:16 standalone.localdomain systemd[1]: libpod-conmon-72711804070a1e7497b62a39a977f5b460465b6758eb91d78de3b85e1d8b2e3c.scope: Deactivated successfully.
Oct 13 15:45:17 standalone.localdomain podman[535618]: 2025-10-13 15:45:17.037429065 +0000 UTC m=+0.148511824 container remove 72711804070a1e7497b62a39a977f5b460465b6758eb91d78de3b85e1d8b2e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-778d4861-e0d8-4606-91c7-2ad0fc59fb7c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:45:17 standalone.localdomain kernel: device tapd6721b0c-85 left promiscuous mode
Oct 13 15:45:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:17.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:17.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:17.081 496978 INFO neutron.agent.dhcp.agent [None req-25af112c-ae38-4743-8d2b-c01d0dacf747 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:17.082 496978 INFO neutron.agent.dhcp.agent [None req-25af112c-ae38-4743-8d2b-c01d0dacf747 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6f333815574e304dae5aa551b4c4b579c7b689447bc5726e95d8a489f643ed12-merged.mount: Deactivated successfully.
Oct 13 15:45:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-72711804070a1e7497b62a39a977f5b460465b6758eb91d78de3b85e1d8b2e3c-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:17 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d778d4861\x2de0d8\x2d4606\x2d91c7\x2d2ad0fc59fb7c.mount: Deactivated successfully.
Oct 13 15:45:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:45:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:17.632 496978 INFO neutron.agent.linux.ip_lib [None req-abe9256e-2b5f-4e9e-b13c-6a1a2653dc50 - - - - - -] Device tap96e5f1f3-cd cannot be used as it has no MAC address
Oct 13 15:45:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:17.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:17 standalone.localdomain kernel: device tap96e5f1f3-cd entered promiscuous mode
Oct 13 15:45:17 standalone.localdomain NetworkManager[5962]: <info>  [1760370317.6668] manager: (tap96e5f1f3-cd): new Generic device (/org/freedesktop/NetworkManager/Devices/45)
Oct 13 15:45:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3930: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:17.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:17Z|00208|binding|INFO|Claiming lport 96e5f1f3-cd57-4b3c-93b5-a57fb9002c7f for this chassis.
Oct 13 15:45:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:17Z|00209|binding|INFO|96e5f1f3-cd57-4b3c-93b5-a57fb9002c7f: Claiming unknown
Oct 13 15:45:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:17.722 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-7c2a0c62-163a-4617-8ca9-d872c66d8566', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c2a0c62-163a-4617-8ca9-d872c66d8566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=136bc237-fb76-4519-b037-952a34c29ab6, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=96e5f1f3-cd57-4b3c-93b5-a57fb9002c7f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:17.725 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 96e5f1f3-cd57-4b3c-93b5-a57fb9002c7f in datapath 7c2a0c62-163a-4617-8ca9-d872c66d8566 bound to our chassis
Oct 13 15:45:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:17.728 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7c2a0c62-163a-4617-8ca9-d872c66d8566 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:17.729 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[4e65532b-e352-46cc-b0da-39b90065301e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:17Z|00210|binding|INFO|Setting lport 96e5f1f3-cd57-4b3c-93b5-a57fb9002c7f ovn-installed in OVS
Oct 13 15:45:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:17Z|00211|binding|INFO|Setting lport 96e5f1f3-cd57-4b3c-93b5-a57fb9002c7f up in Southbound
Oct 13 15:45:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:17.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:17.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:17.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:17.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:17Z|00212|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:45:18 standalone.localdomain ceph-mon[29756]: pgmap v3930: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:45:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2048935679' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:45:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:45:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2048935679' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:45:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:45:18 standalone.localdomain systemd[1]: tmp-crun.WhwIya.mount: Deactivated successfully.
Oct 13 15:45:18 standalone.localdomain podman[535702]: 2025-10-13 15:45:18.865721535 +0000 UTC m=+0.129244162 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 13 15:45:18 standalone.localdomain podman[535702]: 2025-10-13 15:45:18.872560489 +0000 UTC m=+0.136083126 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 13 15:45:18 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:45:18 standalone.localdomain podman[535719]: 
Oct 13 15:45:18 standalone.localdomain podman[535719]: 2025-10-13 15:45:18.930730223 +0000 UTC m=+0.103731017 container create cb0c381aecee0fcf1bf80ce8e2e4f5284eff9bbec7ac5e730eb8be2cf5dd972f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:45:18 standalone.localdomain systemd[1]: Started libpod-conmon-cb0c381aecee0fcf1bf80ce8e2e4f5284eff9bbec7ac5e730eb8be2cf5dd972f.scope.
Oct 13 15:45:18 standalone.localdomain podman[535719]: 2025-10-13 15:45:18.875011896 +0000 UTC m=+0.048012670 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300de0fc17f5597e2a474549933d6f0a3560005ee70d322da2f104c4729ef7ef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:19 standalone.localdomain podman[535719]: 2025-10-13 15:45:19.009598764 +0000 UTC m=+0.182599558 container init cb0c381aecee0fcf1bf80ce8e2e4f5284eff9bbec7ac5e730eb8be2cf5dd972f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:45:19 standalone.localdomain podman[535719]: 2025-10-13 15:45:19.019569754 +0000 UTC m=+0.192570548 container start cb0c381aecee0fcf1bf80ce8e2e4f5284eff9bbec7ac5e730eb8be2cf5dd972f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 13 15:45:19 standalone.localdomain dnsmasq[535743]: started, version 2.85 cachesize 150
Oct 13 15:45:19 standalone.localdomain dnsmasq[535743]: DNS service limited to local subnets
Oct 13 15:45:19 standalone.localdomain dnsmasq[535743]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:19 standalone.localdomain dnsmasq[535743]: warning: no upstream servers configured
Oct 13 15:45:19 standalone.localdomain dnsmasq-dhcp[535743]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:45:19 standalone.localdomain dnsmasq[535743]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/addn_hosts - 0 addresses
Oct 13 15:45:19 standalone.localdomain dnsmasq-dhcp[535743]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/host
Oct 13 15:45:19 standalone.localdomain dnsmasq-dhcp[535743]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/opts
Oct 13 15:45:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:19.158 496978 INFO neutron.agent.dhcp.agent [None req-bd7e8883-99cf-48e5-aa99-f82131632a72 - - - - - -] DHCP configuration for ports {'47ba8f9b-0450-439a-a29a-4856ce6e3f76'} is completed
Oct 13 15:45:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:19.233 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:19.327 496978 INFO neutron.agent.linux.ip_lib [None req-b320f897-11cd-496d-9d36-9f9777db4f94 - - - - - -] Device tap14695a0a-2d cannot be used as it has no MAC address
Oct 13 15:45:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:19.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:19 standalone.localdomain kernel: device tap14695a0a-2d entered promiscuous mode
Oct 13 15:45:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:19Z|00213|binding|INFO|Claiming lport 14695a0a-2de5-41ca-821a-c3a45cb1af27 for this chassis.
Oct 13 15:45:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:19.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:19 standalone.localdomain NetworkManager[5962]: <info>  [1760370319.3633] manager: (tap14695a0a-2d): new Generic device (/org/freedesktop/NetworkManager/Devices/46)
Oct 13 15:45:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:19Z|00214|binding|INFO|14695a0a-2de5-41ca-821a-c3a45cb1af27: Claiming unknown
Oct 13 15:45:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:19.375 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=374753c3-9aa8-427b-8fe9-f87c26f8457e, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=14695a0a-2de5-41ca-821a-c3a45cb1af27) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:19.377 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 14695a0a-2de5-41ca-821a-c3a45cb1af27 in datapath 4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf bound to our chassis
Oct 13 15:45:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:19Z|00215|binding|INFO|Setting lport 14695a0a-2de5-41ca-821a-c3a45cb1af27 ovn-installed in OVS
Oct 13 15:45:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:19Z|00216|binding|INFO|Setting lport 14695a0a-2de5-41ca-821a-c3a45cb1af27 up in Southbound
Oct 13 15:45:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:19.379 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:19.381 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9b309d-0390-4b34-9641-2ee880eb0478]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:19.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:19.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:19Z|00217|binding|INFO|Removing iface tap14695a0a-2d ovn-installed in OVS
Oct 13 15:45:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:19Z|00218|binding|INFO|Removing lport 14695a0a-2de5-41ca-821a-c3a45cb1af27 ovn-installed in OVS
Oct 13 15:45:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:19.466 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port ed5b1d72-3c77-4425-93c1-4479b6250530 with type ""
Oct 13 15:45:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:19.468 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=374753c3-9aa8-427b-8fe9-f87c26f8457e, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=14695a0a-2de5-41ca-821a-c3a45cb1af27) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:19.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:19.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:19.472 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 14695a0a-2de5-41ca-821a-c3a45cb1af27 in datapath 4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf unbound from our chassis
Oct 13 15:45:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:19.474 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:19.475 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[f3bff02f-d488-4843-907c-a3967d047091]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:19.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2048935679' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:45:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2048935679' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:45:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3931: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:20 standalone.localdomain podman[535805]: 
Oct 13 15:45:20 standalone.localdomain podman[535805]: 2025-10-13 15:45:20.425506531 +0000 UTC m=+0.097134882 container create e0282c81e205d00a5f5a97357c52ad0649b1be66003975c680ebfd0fea4577fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:45:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:20.447 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:20 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:20Z|00219|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:45:20 standalone.localdomain systemd[1]: Started libpod-conmon-e0282c81e205d00a5f5a97357c52ad0649b1be66003975c680ebfd0fea4577fc.scope.
Oct 13 15:45:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:20.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:20 standalone.localdomain podman[535805]: 2025-10-13 15:45:20.39024178 +0000 UTC m=+0.061870171 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:20 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:20 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6834688cc5f5fb64c48695113ba190dd6a10171d332fe08f44bdf23fd38fbde6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:20 standalone.localdomain podman[535805]: 2025-10-13 15:45:20.507387744 +0000 UTC m=+0.179016125 container init e0282c81e205d00a5f5a97357c52ad0649b1be66003975c680ebfd0fea4577fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:20 standalone.localdomain podman[535805]: 2025-10-13 15:45:20.512606917 +0000 UTC m=+0.184235268 container start e0282c81e205d00a5f5a97357c52ad0649b1be66003975c680ebfd0fea4577fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:45:20 standalone.localdomain dnsmasq[535823]: started, version 2.85 cachesize 150
Oct 13 15:45:20 standalone.localdomain dnsmasq[535823]: DNS service limited to local subnets
Oct 13 15:45:20 standalone.localdomain dnsmasq[535823]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:20 standalone.localdomain dnsmasq[535823]: warning: no upstream servers configured
Oct 13 15:45:20 standalone.localdomain dnsmasq-dhcp[535823]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:45:20 standalone.localdomain dnsmasq[535823]: read /var/lib/neutron/dhcp/4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf/addn_hosts - 0 addresses
Oct 13 15:45:20 standalone.localdomain dnsmasq-dhcp[535823]: read /var/lib/neutron/dhcp/4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf/host
Oct 13 15:45:20 standalone.localdomain dnsmasq-dhcp[535823]: read /var/lib/neutron/dhcp/4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf/opts
Oct 13 15:45:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:20.582 496978 INFO neutron.agent.dhcp.agent [None req-d0b908a2-b75a-443d-bf97-d9ef31636974 - - - - - -] DHCP configuration for ports {'fa8ec328-b08f-4a94-a1d4-ac6910213756'} is completed
Oct 13 15:45:20 standalone.localdomain ceph-mon[29756]: pgmap v3931: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:20 standalone.localdomain dnsmasq[535743]: exiting on receipt of SIGTERM
Oct 13 15:45:20 standalone.localdomain podman[535854]: 2025-10-13 15:45:20.712230004 +0000 UTC m=+0.051681273 container kill cb0c381aecee0fcf1bf80ce8e2e4f5284eff9bbec7ac5e730eb8be2cf5dd972f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:45:20 standalone.localdomain systemd[1]: libpod-cb0c381aecee0fcf1bf80ce8e2e4f5284eff9bbec7ac5e730eb8be2cf5dd972f.scope: Deactivated successfully.
Oct 13 15:45:20 standalone.localdomain dnsmasq[535823]: exiting on receipt of SIGTERM
Oct 13 15:45:20 standalone.localdomain podman[535869]: 2025-10-13 15:45:20.762165612 +0000 UTC m=+0.063449860 container kill e0282c81e205d00a5f5a97357c52ad0649b1be66003975c680ebfd0fea4577fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:45:20 standalone.localdomain systemd[1]: libpod-e0282c81e205d00a5f5a97357c52ad0649b1be66003975c680ebfd0fea4577fc.scope: Deactivated successfully.
Oct 13 15:45:20 standalone.localdomain podman[535888]: 2025-10-13 15:45:20.801165519 +0000 UTC m=+0.067687113 container died cb0c381aecee0fcf1bf80ce8e2e4f5284eff9bbec7ac5e730eb8be2cf5dd972f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:45:20 standalone.localdomain podman[535903]: 2025-10-13 15:45:20.827847321 +0000 UTC m=+0.050886549 container died e0282c81e205d00a5f5a97357c52ad0649b1be66003975c680ebfd0fea4577fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:45:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-300de0fc17f5597e2a474549933d6f0a3560005ee70d322da2f104c4729ef7ef-merged.mount: Deactivated successfully.
Oct 13 15:45:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cb0c381aecee0fcf1bf80ce8e2e4f5284eff9bbec7ac5e730eb8be2cf5dd972f-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0282c81e205d00a5f5a97357c52ad0649b1be66003975c680ebfd0fea4577fc-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6834688cc5f5fb64c48695113ba190dd6a10171d332fe08f44bdf23fd38fbde6-merged.mount: Deactivated successfully.
Oct 13 15:45:20 standalone.localdomain podman[535903]: 2025-10-13 15:45:20.906368391 +0000 UTC m=+0.129407589 container cleanup e0282c81e205d00a5f5a97357c52ad0649b1be66003975c680ebfd0fea4577fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:45:20 standalone.localdomain systemd[1]: libpod-conmon-e0282c81e205d00a5f5a97357c52ad0649b1be66003975c680ebfd0fea4577fc.scope: Deactivated successfully.
Oct 13 15:45:20 standalone.localdomain podman[535907]: 2025-10-13 15:45:20.971260705 +0000 UTC m=+0.185359113 container remove e0282c81e205d00a5f5a97357c52ad0649b1be66003975c680ebfd0fea4577fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4b1e5e11-2de9-4a5f-9e9e-d5d82c956caf, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:20.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:20 standalone.localdomain kernel: device tap14695a0a-2d left promiscuous mode
Oct 13 15:45:20 standalone.localdomain podman[535888]: 2025-10-13 15:45:20.995702227 +0000 UTC m=+0.262223761 container remove cb0c381aecee0fcf1bf80ce8e2e4f5284eff9bbec7ac5e730eb8be2cf5dd972f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:45:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:20.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:21 standalone.localdomain systemd[1]: libpod-conmon-cb0c381aecee0fcf1bf80ce8e2e4f5284eff9bbec7ac5e730eb8be2cf5dd972f.scope: Deactivated successfully.
Oct 13 15:45:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:21.060 496978 INFO neutron.agent.dhcp.agent [None req-208f3ef3-1197-4822-9067-f998473d8258 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:21.061 496978 INFO neutron.agent.dhcp.agent [None req-208f3ef3-1197-4822-9067-f998473d8258 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:21 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d4b1e5e11\x2d2de9\x2d4a5f\x2d9e9e\x2dd5d82c956caf.mount: Deactivated successfully.
Oct 13 15:45:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:21.346 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:cb:95 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-7c2a0c62-163a-4617-8ca9-d872c66d8566', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c2a0c62-163a-4617-8ca9-d872c66d8566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=136bc237-fb76-4519-b037-952a34c29ab6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=47ba8f9b-0450-439a-a29a-4856ce6e3f76) old=Port_Binding(mac=['fa:16:3e:29:cb:95 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-7c2a0c62-163a-4617-8ca9-d872c66d8566', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c2a0c62-163a-4617-8ca9-d872c66d8566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:21.348 378821 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 47ba8f9b-0450-439a-a29a-4856ce6e3f76 in datapath 7c2a0c62-163a-4617-8ca9-d872c66d8566 updated
Oct 13 15:45:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:21.352 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 5ed1f40a-b315-4f4f-ae56-ab4310111ff3 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:45:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:21.352 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c2a0c62-163a-4617-8ca9-d872c66d8566, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:45:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:21.353 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[87265ab1-4410-4bfb-8651-779b0726d238]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:21.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3932: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:22 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:22.183 2 INFO neutron.agent.securitygroups_rpc [None req-d6ec5a5a-5c6c-488c-82c6-873e5042cea4 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:22 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:22.281 496978 INFO neutron.agent.linux.ip_lib [None req-36cbac6d-5264-421f-b175-3748c3950c6e - - - - - -] Device tap455e91df-86 cannot be used as it has no MAC address
Oct 13 15:45:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:22.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:22 standalone.localdomain kernel: device tap455e91df-86 entered promiscuous mode
Oct 13 15:45:22 standalone.localdomain NetworkManager[5962]: <info>  [1760370322.3032] manager: (tap455e91df-86): new Generic device (/org/freedesktop/NetworkManager/Devices/47)
Oct 13 15:45:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:22.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:22Z|00220|binding|INFO|Claiming lport 455e91df-8673-4f81-b9d7-0fa28d25fa30 for this chassis.
Oct 13 15:45:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:22Z|00221|binding|INFO|455e91df-8673-4f81-b9d7-0fa28d25fa30: Claiming unknown
Oct 13 15:45:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:22.318 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-39554d73-8d1e-479c-8f6c-7b5c172c6abd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39554d73-8d1e-479c-8f6c-7b5c172c6abd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6569146f-e9c9-49d5-8344-d6ba7a6cf79c, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=455e91df-8673-4f81-b9d7-0fa28d25fa30) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:22.319 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 455e91df-8673-4f81-b9d7-0fa28d25fa30 in datapath 39554d73-8d1e-479c-8f6c-7b5c172c6abd bound to our chassis
Oct 13 15:45:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:22.320 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 39554d73-8d1e-479c-8f6c-7b5c172c6abd or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:22.321 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[df83899f-bdc0-4f45-ad9d-e4035bdf7f38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:22Z|00222|binding|INFO|Setting lport 455e91df-8673-4f81-b9d7-0fa28d25fa30 ovn-installed in OVS
Oct 13 15:45:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:22Z|00223|binding|INFO|Setting lport 455e91df-8673-4f81-b9d7-0fa28d25fa30 up in Southbound
Oct 13 15:45:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:22.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:22.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:22 standalone.localdomain podman[536010]: 2025-10-13 15:45:22.491889449 +0000 UTC m=+0.061669035 container create bf96c707106c939314258a9d8ff1e0051b7ffdac3b982da14d8a81fbad5404f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:45:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:45:22 standalone.localdomain systemd[1]: Started libpod-conmon-bf96c707106c939314258a9d8ff1e0051b7ffdac3b982da14d8a81fbad5404f5.scope.
Oct 13 15:45:22 standalone.localdomain systemd[1]: tmp-crun.fd1OmE.mount: Deactivated successfully.
Oct 13 15:45:22 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a23d9a5d66168b007105f529421f551b6c03f2458959f0310cc4cb46467890f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:22 standalone.localdomain podman[536010]: 2025-10-13 15:45:22.555945317 +0000 UTC m=+0.125724873 container init bf96c707106c939314258a9d8ff1e0051b7ffdac3b982da14d8a81fbad5404f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:22 standalone.localdomain podman[536010]: 2025-10-13 15:45:22.457364531 +0000 UTC m=+0.027144137 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:22 standalone.localdomain podman[536010]: 2025-10-13 15:45:22.564251305 +0000 UTC m=+0.134030901 container start bf96c707106c939314258a9d8ff1e0051b7ffdac3b982da14d8a81fbad5404f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:45:22 standalone.localdomain dnsmasq[536042]: started, version 2.85 cachesize 150
Oct 13 15:45:22 standalone.localdomain dnsmasq[536042]: DNS service limited to local subnets
Oct 13 15:45:22 standalone.localdomain dnsmasq[536042]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:22 standalone.localdomain dnsmasq[536042]: warning: no upstream servers configured
Oct 13 15:45:22 standalone.localdomain dnsmasq-dhcp[536042]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:45:22 standalone.localdomain dnsmasq-dhcp[536042]: DHCP, static leases only on 10.100.0.16, lease time 1d
Oct 13 15:45:22 standalone.localdomain dnsmasq[536042]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/addn_hosts - 0 addresses
Oct 13 15:45:22 standalone.localdomain dnsmasq-dhcp[536042]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/host
Oct 13 15:45:22 standalone.localdomain dnsmasq-dhcp[536042]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/opts
Oct 13 15:45:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:45:22 standalone.localdomain podman[536026]: 2025-10-13 15:45:22.616844826 +0000 UTC m=+0.086227511 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:45:22 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:22.624 496978 INFO neutron.agent.dhcp.agent [None req-e6ba8eb5-7ee0-4f10-b1dd-3f4ab2f2a9c0 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:21Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892e98e0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f18890a8040>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889252400>, <neutron.agent.linux.dhcp.DictModel object at 0x7f18892e9d90>], id=171d7040-7f3d-4b72-b48a-ef513f37886f, ip_allocation=immediate, mac_address=fa:16:3e:b4:3f:9e, name=tempest-PortsTestJSON-1686459708, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:45:14Z, description=, dns_domain=, id=7c2a0c62-163a-4617-8ca9-d872c66d8566, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-446280570, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=19040, qos_policy_id=None, revision_number=3, router:external=False, shared=False, standard_attr_id=1324, status=ACTIVE, subnets=['5e5a16a7-8fd7-4f94-9fcf-9398e1f821c4', '63ff50fa-0e77-4b6a-82db-3c28f57a00ae'], tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:45:20Z, vlan_transparent=None, network_id=7c2a0c62-163a-4617-8ca9-d872c66d8566, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['e5246d74-7a72-4d89-9669-125c9e305a6a'], standard_attr_id=1371, status=DOWN, tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:45:21Z on network 7c2a0c62-163a-4617-8ca9-d872c66d8566
Oct 13 15:45:22 standalone.localdomain podman[536026]: 2025-10-13 15:45:22.650439674 +0000 UTC m=+0.119822369 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:45:22 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:45:22 standalone.localdomain ceph-mon[29756]: pgmap v3932: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:22.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:22 standalone.localdomain dnsmasq[536042]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/addn_hosts - 2 addresses
Oct 13 15:45:22 standalone.localdomain dnsmasq-dhcp[536042]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/host
Oct 13 15:45:22 standalone.localdomain dnsmasq-dhcp[536042]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/opts
Oct 13 15:45:22 standalone.localdomain podman[536089]: 2025-10-13 15:45:22.881591205 +0000 UTC m=+0.064418161 container kill bf96c707106c939314258a9d8ff1e0051b7ffdac3b982da14d8a81fbad5404f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:45:22 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:22.908 496978 INFO neutron.agent.dhcp.agent [None req-61dbbbb4-fda6-4186-9ee2-24052cce4c7b - - - - - -] DHCP configuration for ports {'47ba8f9b-0450-439a-a29a-4856ce6e3f76', '96e5f1f3-cd57-4b3c-93b5-a57fb9002c7f'} is completed
Oct 13 15:45:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:23.146 496978 INFO neutron.agent.dhcp.agent [None req-b6d5dc7d-d9d5-4c8d-819f-a1d5a5c6f9c1 - - - - - -] DHCP configuration for ports {'171d7040-7f3d-4b72-b48a-ef513f37886f'} is completed
Oct 13 15:45:23 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:23.222 2 INFO neutron.agent.securitygroups_rpc [None req-73a69d49-e2ed-407d-a225-e74854e0b0a6 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:23.250 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:21Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890d7be0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890d7f10>], id=171d7040-7f3d-4b72-b48a-ef513f37886f, ip_allocation=immediate, mac_address=fa:16:3e:b4:3f:9e, name=tempest-PortsTestJSON-1686459708, network_id=7c2a0c62-163a-4617-8ca9-d872c66d8566, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['e5246d74-7a72-4d89-9669-125c9e305a6a'], standard_attr_id=1371, status=DOWN, tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:45:23Z on network 7c2a0c62-163a-4617-8ca9-d872c66d8566
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:45:23
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'vms', '.mgr', 'volumes', 'manila_data', 'images', 'manila_metadata']
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:45:23 standalone.localdomain podman[536133]: 2025-10-13 15:45:23.340413207 +0000 UTC m=+0.099601318 container create 3ac06919d62f04c8bbe075af676f07d587f2da83931939e9de9a4051968fcade (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39554d73-8d1e-479c-8f6c-7b5c172c6abd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:45:23 standalone.localdomain systemd[1]: Started libpod-conmon-3ac06919d62f04c8bbe075af676f07d587f2da83931939e9de9a4051968fcade.scope.
Oct 13 15:45:23 standalone.localdomain podman[536133]: 2025-10-13 15:45:23.289097996 +0000 UTC m=+0.048286127 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd491ac44e968ac1f40c0f76ec1f85aefcbd2de3f8ce3eddd4e6cf810d033af6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:23 standalone.localdomain podman[536133]: 2025-10-13 15:45:23.440010243 +0000 UTC m=+0.199198354 container init 3ac06919d62f04c8bbe075af676f07d587f2da83931939e9de9a4051968fcade (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39554d73-8d1e-479c-8f6c-7b5c172c6abd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:45:23 standalone.localdomain dnsmasq[536167]: started, version 2.85 cachesize 150
Oct 13 15:45:23 standalone.localdomain dnsmasq[536167]: DNS service limited to local subnets
Oct 13 15:45:23 standalone.localdomain dnsmasq[536167]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:23 standalone.localdomain dnsmasq[536167]: warning: no upstream servers configured
Oct 13 15:45:23 standalone.localdomain dnsmasq-dhcp[536167]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:45:23 standalone.localdomain dnsmasq[536167]: read /var/lib/neutron/dhcp/39554d73-8d1e-479c-8f6c-7b5c172c6abd/addn_hosts - 0 addresses
Oct 13 15:45:23 standalone.localdomain dnsmasq-dhcp[536167]: read /var/lib/neutron/dhcp/39554d73-8d1e-479c-8f6c-7b5c172c6abd/host
Oct 13 15:45:23 standalone.localdomain dnsmasq-dhcp[536167]: read /var/lib/neutron/dhcp/39554d73-8d1e-479c-8f6c-7b5c172c6abd/opts
Oct 13 15:45:23 standalone.localdomain podman[536133]: 2025-10-13 15:45:23.499700795 +0000 UTC m=+0.258888906 container start 3ac06919d62f04c8bbe075af676f07d587f2da83931939e9de9a4051968fcade (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39554d73-8d1e-479c-8f6c-7b5c172c6abd, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41533 DF PROTO=TCP SPT=51520 DPT=9102 SEQ=2316298919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD04DFF0000000001030307) 
Oct 13 15:45:23 standalone.localdomain dnsmasq[536042]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/addn_hosts - 1 addresses
Oct 13 15:45:23 standalone.localdomain dnsmasq-dhcp[536042]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/host
Oct 13 15:45:23 standalone.localdomain dnsmasq-dhcp[536042]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/opts
Oct 13 15:45:23 standalone.localdomain podman[536170]: 2025-10-13 15:45:23.562227716 +0000 UTC m=+0.074930909 container kill bf96c707106c939314258a9d8ff1e0051b7ffdac3b982da14d8a81fbad5404f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3933: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:23.784 496978 INFO neutron.agent.dhcp.agent [None req-677eb50d-87ae-4768-b458-a3bfa75893a0 - - - - - -] DHCP configuration for ports {'333d166b-6c9f-4fb2-857d-e10a0e36db2f'} is completed
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:45:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:45:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:23.919 496978 INFO neutron.agent.dhcp.agent [None req-5fc5c672-2258-4893-b862-d80fa7a8091b - - - - - -] DHCP configuration for ports {'171d7040-7f3d-4b72-b48a-ef513f37886f'} is completed
Oct 13 15:45:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:23.927 496978 INFO neutron.agent.linux.ip_lib [None req-0a864492-923a-4c04-8926-613781ceb091 - - - - - -] Device tap3219658f-d8 cannot be used as it has no MAC address
Oct 13 15:45:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:23.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:23 standalone.localdomain kernel: device tap3219658f-d8 entered promiscuous mode
Oct 13 15:45:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:23Z|00224|binding|INFO|Claiming lport 3219658f-d89b-47e3-bba8-9b12bf480a82 for this chassis.
Oct 13 15:45:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:23Z|00225|binding|INFO|3219658f-d89b-47e3-bba8-9b12bf480a82: Claiming unknown
Oct 13 15:45:23 standalone.localdomain NetworkManager[5962]: <info>  [1760370323.9612] manager: (tap3219658f-d8): new Generic device (/org/freedesktop/NetworkManager/Devices/48)
Oct 13 15:45:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:23.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:23.970 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-08bbd394-1256-44d6-b3a6-f42b965fa8d3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08bbd394-1256-44d6-b3a6-f42b965fa8d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8acea2c0-1123-4e57-a59b-d2b96658053b, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=3219658f-d89b-47e3-bba8-9b12bf480a82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:23.972 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 3219658f-d89b-47e3-bba8-9b12bf480a82 in datapath 08bbd394-1256-44d6-b3a6-f42b965fa8d3 bound to our chassis
Oct 13 15:45:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:23.973 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 08bbd394-1256-44d6-b3a6-f42b965fa8d3 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:23.973 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[3af6a8f9-1cc5-4bce-902c-786c1413172f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:24 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:24Z|00226|binding|INFO|Setting lport 3219658f-d89b-47e3-bba8-9b12bf480a82 ovn-installed in OVS
Oct 13 15:45:24 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:24Z|00227|binding|INFO|Setting lport 3219658f-d89b-47e3-bba8-9b12bf480a82 up in Southbound
Oct 13 15:45:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:24.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:24.062 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:23Z, description=, device_id=1ebc3671-90d2-49c9-a4df-da59add06fa1, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188920a9a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889028ee0>], id=daaad04f-74d1-418d-b520-736014b3f272, ip_allocation=immediate, mac_address=fa:16:3e:c2:de:35, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1379, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:45:23Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:45:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:24.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:24 standalone.localdomain systemd[1]: tmp-crun.eed2bU.mount: Deactivated successfully.
Oct 13 15:45:24 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:24.256 2 INFO neutron.agent.securitygroups_rpc [None req-1239f757-6de6-4d4f-9988-186c1252e427 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:24 standalone.localdomain podman[536232]: 2025-10-13 15:45:24.289898833 +0000 UTC m=+0.066059401 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:45:24 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:45:24 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:45:24 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:45:24 standalone.localdomain systemd[1]: tmp-crun.scflZq.mount: Deactivated successfully.
Oct 13 15:45:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:24.307 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:21Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889035730>, <neutron.agent.linux.dhcp.DictModel object at 0x7f18890356d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890355b0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1889035640>], id=171d7040-7f3d-4b72-b48a-ef513f37886f, ip_allocation=immediate, mac_address=fa:16:3e:b4:3f:9e, name=tempest-PortsTestJSON-1686459708, network_id=7c2a0c62-163a-4617-8ca9-d872c66d8566, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['e5246d74-7a72-4d89-9669-125c9e305a6a'], standard_attr_id=1371, status=DOWN, tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:45:23Z on network 7c2a0c62-163a-4617-8ca9-d872c66d8566
Oct 13 15:45:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:24.557 496978 INFO neutron.agent.dhcp.agent [None req-3040ea76-6585-4386-92b1-d5061d7d3fed - - - - - -] DHCP configuration for ports {'daaad04f-74d1-418d-b520-736014b3f272'} is completed
Oct 13 15:45:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41534 DF PROTO=TCP SPT=51520 DPT=9102 SEQ=2316298919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD051F60000000001030307) 
Oct 13 15:45:24 standalone.localdomain podman[536283]: 2025-10-13 15:45:24.588629462 +0000 UTC m=+0.063686858 container kill bf96c707106c939314258a9d8ff1e0051b7ffdac3b982da14d8a81fbad5404f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:45:24 standalone.localdomain dnsmasq[536042]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/addn_hosts - 2 addresses
Oct 13 15:45:24 standalone.localdomain dnsmasq-dhcp[536042]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/host
Oct 13 15:45:24 standalone.localdomain dnsmasq-dhcp[536042]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/opts
Oct 13 15:45:24 standalone.localdomain ceph-mon[29756]: pgmap v3933: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:24.808 496978 INFO neutron.agent.dhcp.agent [None req-e1cb4b2d-a313-433c-bb93-152b38ca6ba9 - - - - - -] DHCP configuration for ports {'171d7040-7f3d-4b72-b48a-ef513f37886f'} is completed
Oct 13 15:45:25 standalone.localdomain podman[536331]: 
Oct 13 15:45:25 standalone.localdomain podman[536331]: 2025-10-13 15:45:25.018632316 +0000 UTC m=+0.105016567 container create 4bd2f29241a08690e26a59ad9b39d4dbb07a5a6da10f1d873a5365a6da23a7f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-08bbd394-1256-44d6-b3a6-f42b965fa8d3, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:45:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:45:25 standalone.localdomain systemd[1]: Started libpod-conmon-4bd2f29241a08690e26a59ad9b39d4dbb07a5a6da10f1d873a5365a6da23a7f1.scope.
Oct 13 15:45:25 standalone.localdomain podman[536331]: 2025-10-13 15:45:24.96683383 +0000 UTC m=+0.053218101 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:25 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:25 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1e7e1145abbe5443579bd98d1ddfb9cb98f1f7b01eeeac37e59ebcad509a0c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:25 standalone.localdomain podman[536331]: 2025-10-13 15:45:25.090399274 +0000 UTC m=+0.176783515 container init 4bd2f29241a08690e26a59ad9b39d4dbb07a5a6da10f1d873a5365a6da23a7f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-08bbd394-1256-44d6-b3a6-f42b965fa8d3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:45:25 standalone.localdomain podman[536331]: 2025-10-13 15:45:25.106040142 +0000 UTC m=+0.192424413 container start 4bd2f29241a08690e26a59ad9b39d4dbb07a5a6da10f1d873a5365a6da23a7f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-08bbd394-1256-44d6-b3a6-f42b965fa8d3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:45:25 standalone.localdomain dnsmasq[536363]: started, version 2.85 cachesize 150
Oct 13 15:45:25 standalone.localdomain dnsmasq[536363]: DNS service limited to local subnets
Oct 13 15:45:25 standalone.localdomain dnsmasq[536363]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:25 standalone.localdomain dnsmasq[536363]: warning: no upstream servers configured
Oct 13 15:45:25 standalone.localdomain dnsmasq-dhcp[536363]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:45:25 standalone.localdomain dnsmasq[536363]: read /var/lib/neutron/dhcp/08bbd394-1256-44d6-b3a6-f42b965fa8d3/addn_hosts - 0 addresses
Oct 13 15:45:25 standalone.localdomain dnsmasq-dhcp[536363]: read /var/lib/neutron/dhcp/08bbd394-1256-44d6-b3a6-f42b965fa8d3/host
Oct 13 15:45:25 standalone.localdomain dnsmasq-dhcp[536363]: read /var/lib/neutron/dhcp/08bbd394-1256-44d6-b3a6-f42b965fa8d3/opts
Oct 13 15:45:25 standalone.localdomain podman[536345]: 2025-10-13 15:45:25.138103922 +0000 UTC m=+0.082974209 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 13 15:45:25 standalone.localdomain podman[536345]: 2025-10-13 15:45:25.146246436 +0000 UTC m=+0.091116733 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009)
Oct 13 15:45:25 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:45:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:25.227 496978 INFO neutron.agent.dhcp.agent [None req-c06ac8b7-1910-43fe-8a88-b107375484a7 - - - - - -] DHCP configuration for ports {'e5083daf-8bdd-4ecd-93dc-268af9f3bc18'} is completed
Oct 13 15:45:25 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:25.300 2 INFO neutron.agent.securitygroups_rpc [None req-e1fe39ec-d4bf-42fd-afbc-e7916b29b587 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:25 standalone.localdomain podman[536386]: 2025-10-13 15:45:25.407547767 +0000 UTC m=+0.067753464 container kill 4bd2f29241a08690e26a59ad9b39d4dbb07a5a6da10f1d873a5365a6da23a7f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-08bbd394-1256-44d6-b3a6-f42b965fa8d3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 15:45:25 standalone.localdomain dnsmasq[536363]: exiting on receipt of SIGTERM
Oct 13 15:45:25 standalone.localdomain systemd[1]: libpod-4bd2f29241a08690e26a59ad9b39d4dbb07a5a6da10f1d873a5365a6da23a7f1.scope: Deactivated successfully.
Oct 13 15:45:25 standalone.localdomain podman[536406]: 2025-10-13 15:45:25.481276257 +0000 UTC m=+0.043948602 container died 4bd2f29241a08690e26a59ad9b39d4dbb07a5a6da10f1d873a5365a6da23a7f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-08bbd394-1256-44d6-b3a6-f42b965fa8d3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:45:25 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:25.518 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 9d78c8f4-ae8e-49f1-b9db-0d67f1f4d6d3 with type ""
Oct 13 15:45:25 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:25Z|00228|binding|INFO|Removing iface tap3219658f-d8 ovn-installed in OVS
Oct 13 15:45:25 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:25Z|00229|binding|INFO|Removing lport 3219658f-d89b-47e3-bba8-9b12bf480a82 ovn-installed in OVS
Oct 13 15:45:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:25.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:25 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:25.523 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-08bbd394-1256-44d6-b3a6-f42b965fa8d3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08bbd394-1256-44d6-b3a6-f42b965fa8d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8acea2c0-1123-4e57-a59b-d2b96658053b, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=3219658f-d89b-47e3-bba8-9b12bf480a82) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:25 standalone.localdomain podman[536406]: 2025-10-13 15:45:25.525463706 +0000 UTC m=+0.088136031 container remove 4bd2f29241a08690e26a59ad9b39d4dbb07a5a6da10f1d873a5365a6da23a7f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-08bbd394-1256-44d6-b3a6-f42b965fa8d3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:45:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:25.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:25 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:25.527 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 3219658f-d89b-47e3-bba8-9b12bf480a82 in datapath 08bbd394-1256-44d6-b3a6-f42b965fa8d3 unbound from our chassis
Oct 13 15:45:25 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:25.528 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 08bbd394-1256-44d6-b3a6-f42b965fa8d3 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:25 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:25.529 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[ad6b99ac-78e4-42f9-b784-7819896c45e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:25 standalone.localdomain systemd[1]: libpod-conmon-4bd2f29241a08690e26a59ad9b39d4dbb07a5a6da10f1d873a5365a6da23a7f1.scope: Deactivated successfully.
Oct 13 15:45:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:25.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:25 standalone.localdomain kernel: device tap3219658f-d8 left promiscuous mode
Oct 13 15:45:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:25.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:25.575 496978 INFO neutron.agent.dhcp.agent [None req-007f8f69-2837-4ff5-a5e2-882808bb6dd7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:25.575 496978 INFO neutron.agent.dhcp.agent [None req-007f8f69-2837-4ff5-a5e2-882808bb6dd7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:25 standalone.localdomain dnsmasq[536042]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/addn_hosts - 0 addresses
Oct 13 15:45:25 standalone.localdomain dnsmasq-dhcp[536042]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/host
Oct 13 15:45:25 standalone.localdomain podman[536444]: 2025-10-13 15:45:25.636613662 +0000 UTC m=+0.056372159 container kill bf96c707106c939314258a9d8ff1e0051b7ffdac3b982da14d8a81fbad5404f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:45:25 standalone.localdomain dnsmasq-dhcp[536042]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/opts
Oct 13 15:45:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3934: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:25 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:25Z|00230|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:45:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:25.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ff1e7e1145abbe5443579bd98d1ddfb9cb98f1f7b01eeeac37e59ebcad509a0c-merged.mount: Deactivated successfully.
Oct 13 15:45:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4bd2f29241a08690e26a59ad9b39d4dbb07a5a6da10f1d873a5365a6da23a7f1-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:26 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d08bbd394\x2d1256\x2d44d6\x2db3a6\x2df42b965fa8d3.mount: Deactivated successfully.
Oct 13 15:45:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41535 DF PROTO=TCP SPT=51520 DPT=9102 SEQ=2316298919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD059F60000000001030307) 
Oct 13 15:45:26 standalone.localdomain systemd[1]: tmp-crun.WMNueN.mount: Deactivated successfully.
Oct 13 15:45:26 standalone.localdomain dnsmasq[536042]: exiting on receipt of SIGTERM
Oct 13 15:45:26 standalone.localdomain podman[536483]: 2025-10-13 15:45:26.618276815 +0000 UTC m=+0.077347204 container kill bf96c707106c939314258a9d8ff1e0051b7ffdac3b982da14d8a81fbad5404f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 13 15:45:26 standalone.localdomain systemd[1]: libpod-bf96c707106c939314258a9d8ff1e0051b7ffdac3b982da14d8a81fbad5404f5.scope: Deactivated successfully.
Oct 13 15:45:26 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:26.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:26 standalone.localdomain podman[536497]: 2025-10-13 15:45:26.733402546 +0000 UTC m=+0.100125524 container died bf96c707106c939314258a9d8ff1e0051b7ffdac3b982da14d8a81fbad5404f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:45:26 standalone.localdomain podman[536497]: 2025-10-13 15:45:26.762903816 +0000 UTC m=+0.129626774 container cleanup bf96c707106c939314258a9d8ff1e0051b7ffdac3b982da14d8a81fbad5404f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:45:26 standalone.localdomain systemd[1]: libpod-conmon-bf96c707106c939314258a9d8ff1e0051b7ffdac3b982da14d8a81fbad5404f5.scope: Deactivated successfully.
Oct 13 15:45:26 standalone.localdomain ceph-mon[29756]: pgmap v3934: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:26 standalone.localdomain podman[536504]: 2025-10-13 15:45:26.825000392 +0000 UTC m=+0.178585200 container remove bf96c707106c939314258a9d8ff1e0051b7ffdac3b982da14d8a81fbad5404f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:45:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7a23d9a5d66168b007105f529421f551b6c03f2458959f0310cc4cb46467890f-merged.mount: Deactivated successfully.
Oct 13 15:45:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf96c707106c939314258a9d8ff1e0051b7ffdac3b982da14d8a81fbad5404f5-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:27 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:27Z|00231|binding|INFO|Removing iface tap96e5f1f3-cd ovn-installed in OVS
Oct 13 15:45:27 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:27Z|00232|binding|INFO|Removing lport 96e5f1f3-cd57-4b3c-93b5-a57fb9002c7f ovn-installed in OVS
Oct 13 15:45:27 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:27.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:27 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:27.234 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 5ed1f40a-b315-4f4f-ae56-ab4310111ff3 with type ""
Oct 13 15:45:27 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:27.237 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-7c2a0c62-163a-4617-8ca9-d872c66d8566', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c2a0c62-163a-4617-8ca9-d872c66d8566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=136bc237-fb76-4519-b037-952a34c29ab6, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=96e5f1f3-cd57-4b3c-93b5-a57fb9002c7f) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:27 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:27.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:27 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:27.240 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 96e5f1f3-cd57-4b3c-93b5-a57fb9002c7f in datapath 7c2a0c62-163a-4617-8ca9-d872c66d8566 unbound from our chassis
Oct 13 15:45:27 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:27.244 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c2a0c62-163a-4617-8ca9-d872c66d8566, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:45:27 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:27.245 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[96cb132b-1ce0-40bb-8c8d-b571223b2850]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:27 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:45:27 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:45:27 standalone.localdomain podman[536563]: 2025-10-13 15:45:27.301706303 +0000 UTC m=+0.044806698 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:45:27 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:45:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:45:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3935: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:27 standalone.localdomain podman[536612]: 2025-10-13 15:45:27.841928474 +0000 UTC m=+0.106265126 container create eaa5041f78b9fc6bcbab971b3385f4a21a9acd56db084b25816acdf9bec03a06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:27 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:27.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:27 standalone.localdomain systemd[1]: Started libpod-conmon-eaa5041f78b9fc6bcbab971b3385f4a21a9acd56db084b25816acdf9bec03a06.scope.
Oct 13 15:45:27 standalone.localdomain podman[536612]: 2025-10-13 15:45:27.789462618 +0000 UTC m=+0.053799300 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:27 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:27 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc7f948a274a6888832ada005c92a0d9cf60fd5cf742e9fdf1d49a6143c53f8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:27 standalone.localdomain podman[536612]: 2025-10-13 15:45:27.913252209 +0000 UTC m=+0.177588921 container init eaa5041f78b9fc6bcbab971b3385f4a21a9acd56db084b25816acdf9bec03a06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:45:27 standalone.localdomain podman[536612]: 2025-10-13 15:45:27.922553359 +0000 UTC m=+0.186890041 container start eaa5041f78b9fc6bcbab971b3385f4a21a9acd56db084b25816acdf9bec03a06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:45:27 standalone.localdomain dnsmasq[536630]: started, version 2.85 cachesize 150
Oct 13 15:45:27 standalone.localdomain dnsmasq[536630]: DNS service limited to local subnets
Oct 13 15:45:27 standalone.localdomain dnsmasq[536630]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:27 standalone.localdomain dnsmasq[536630]: warning: no upstream servers configured
Oct 13 15:45:27 standalone.localdomain dnsmasq-dhcp[536630]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:45:27 standalone.localdomain dnsmasq[536630]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/addn_hosts - 0 addresses
Oct 13 15:45:27 standalone.localdomain dnsmasq-dhcp[536630]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/host
Oct 13 15:45:27 standalone.localdomain dnsmasq-dhcp[536630]: read /var/lib/neutron/dhcp/7c2a0c62-163a-4617-8ca9-d872c66d8566/opts
Oct 13 15:45:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:28.081 496978 INFO neutron.agent.dhcp.agent [None req-00a933bd-790f-449c-ab6f-f670a32d49a4 - - - - - -] DHCP configuration for ports {'47ba8f9b-0450-439a-a29a-4856ce6e3f76', '96e5f1f3-cd57-4b3c-93b5-a57fb9002c7f'} is completed
Oct 13 15:45:28 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:28Z|00233|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:45:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:28.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:28 standalone.localdomain dnsmasq[536630]: exiting on receipt of SIGTERM
Oct 13 15:45:28 standalone.localdomain podman[536648]: 2025-10-13 15:45:28.234611324 +0000 UTC m=+0.056160173 container kill eaa5041f78b9fc6bcbab971b3385f4a21a9acd56db084b25816acdf9bec03a06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:45:28 standalone.localdomain systemd[1]: libpod-eaa5041f78b9fc6bcbab971b3385f4a21a9acd56db084b25816acdf9bec03a06.scope: Deactivated successfully.
Oct 13 15:45:28 standalone.localdomain podman[536662]: 2025-10-13 15:45:28.30631742 +0000 UTC m=+0.053146669 container died eaa5041f78b9fc6bcbab971b3385f4a21a9acd56db084b25816acdf9bec03a06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:45:28 standalone.localdomain podman[536662]: 2025-10-13 15:45:28.336837052 +0000 UTC m=+0.083666231 container cleanup eaa5041f78b9fc6bcbab971b3385f4a21a9acd56db084b25816acdf9bec03a06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:45:28 standalone.localdomain systemd[1]: libpod-conmon-eaa5041f78b9fc6bcbab971b3385f4a21a9acd56db084b25816acdf9bec03a06.scope: Deactivated successfully.
Oct 13 15:45:28 standalone.localdomain podman[536664]: 2025-10-13 15:45:28.383965653 +0000 UTC m=+0.125985501 container remove eaa5041f78b9fc6bcbab971b3385f4a21a9acd56db084b25816acdf9bec03a06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7c2a0c62-163a-4617-8ca9-d872c66d8566, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:45:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:28.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:28 standalone.localdomain kernel: device tap96e5f1f3-cd left promiscuous mode
Oct 13 15:45:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:28.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:28.432 496978 INFO neutron.agent.dhcp.agent [None req-eeb6c9eb-c56e-4812-84f9-fe0ea8fd081a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:28.433 496978 INFO neutron.agent.dhcp.agent [None req-eeb6c9eb-c56e-4812-84f9-fe0ea8fd081a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:28.545 496978 INFO neutron.agent.linux.ip_lib [None req-b62c30ac-00a6-4c20-a9ee-7108c9287fb9 - - - - - -] Device tapbe847d01-21 cannot be used as it has no MAC address
Oct 13 15:45:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:28.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:28 standalone.localdomain kernel: device tapbe847d01-21 entered promiscuous mode
Oct 13 15:45:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:28.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:28 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:28Z|00234|binding|INFO|Claiming lport be847d01-21e1-45ac-a7b7-ce10b59ab30c for this chassis.
Oct 13 15:45:28 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:28Z|00235|binding|INFO|be847d01-21e1-45ac-a7b7-ce10b59ab30c: Claiming unknown
Oct 13 15:45:28 standalone.localdomain NetworkManager[5962]: <info>  [1760370328.5810] manager: (tapbe847d01-21): new Generic device (/org/freedesktop/NetworkManager/Devices/49)
Oct 13 15:45:28 standalone.localdomain systemd-udevd[536703]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:45:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:28.588 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d24a5474-4856-4e69-8d2c-c3d6842817d3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d24a5474-4856-4e69-8d2c-c3d6842817d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a661d60-55c3-4158-a391-6ab2af3f6284, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=be847d01-21e1-45ac-a7b7-ce10b59ab30c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:28.591 378821 INFO neutron.agent.ovn.metadata.agent [-] Port be847d01-21e1-45ac-a7b7-ce10b59ab30c in datapath d24a5474-4856-4e69-8d2c-c3d6842817d3 bound to our chassis
Oct 13 15:45:28 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:28Z|00236|binding|INFO|Setting lport be847d01-21e1-45ac-a7b7-ce10b59ab30c ovn-installed in OVS
Oct 13 15:45:28 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:28Z|00237|binding|INFO|Setting lport be847d01-21e1-45ac-a7b7-ce10b59ab30c up in Southbound
Oct 13 15:45:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:28.593 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d24a5474-4856-4e69-8d2c-c3d6842817d3 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:28.593 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[4eae068e-bcfc-48b3-920f-08f41220dc69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:28.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:28 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapbe847d01-21: No such device
Oct 13 15:45:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:28.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:28 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapbe847d01-21: No such device
Oct 13 15:45:28 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapbe847d01-21: No such device
Oct 13 15:45:28 standalone.localdomain ceph-mon[29756]: pgmap v3935: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:28 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapbe847d01-21: No such device
Oct 13 15:45:28 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapbe847d01-21: No such device
Oct 13 15:45:28 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapbe847d01-21: No such device
Oct 13 15:45:28 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapbe847d01-21: No such device
Oct 13 15:45:28 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapbe847d01-21: No such device
Oct 13 15:45:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:28.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:28.864 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:28Z, description=, device_id=3b8e915a-cadf-461f-935b-731c25db3959, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892cefd0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891f4f70>], id=04b467c3-cd61-4fec-8411-18de167307ff, ip_allocation=immediate, mac_address=fa:16:3e:45:d3:c4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1392, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:45:28Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:45:29 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:45:29 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:45:29 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:45:29 standalone.localdomain podman[536760]: 2025-10-13 15:45:29.08916855 +0000 UTC m=+0.070709787 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fbc7f948a274a6888832ada005c92a0d9cf60fd5cf742e9fdf1d49a6143c53f8-merged.mount: Deactivated successfully.
Oct 13 15:45:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eaa5041f78b9fc6bcbab971b3385f4a21a9acd56db084b25816acdf9bec03a06-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:29 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d7c2a0c62\x2d163a\x2d4617\x2d8ca9\x2dd872c66d8566.mount: Deactivated successfully.
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006647822445561139 of space, bias 1.0, pg target 0.6647822445561139 quantized to 32 (current 32)
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.012942315745393635 of space, bias 1.0, pg target 1.2942315745393635 quantized to 32 (current 32)
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.29775832286432163 quantized to 32 (current 32)
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 13 15:45:29 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:29.333 496978 INFO neutron.agent.dhcp.agent [None req-f02f8270-f29a-40f7-b4ff-50720e18c3f3 - - - - - -] DHCP configuration for ports {'04b467c3-cd61-4fec-8411-18de167307ff'} is completed
Oct 13 15:45:29 standalone.localdomain podman[536812]: 2025-10-13 15:45:29.550415768 +0000 UTC m=+0.071892013 container create 87dd1ef3d61799ad196222ef0d6abf2c9b9a694e81f58fb3a6bff7eb86e81619 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d24a5474-4856-4e69-8d2c-c3d6842817d3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:29 standalone.localdomain systemd[1]: Started libpod-conmon-87dd1ef3d61799ad196222ef0d6abf2c9b9a694e81f58fb3a6bff7eb86e81619.scope.
Oct 13 15:45:29 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fe799eab9c19ca55b74a21c4d69f13fab9973d818bd2bb7d422c1defebc292/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:29 standalone.localdomain podman[536812]: 2025-10-13 15:45:29.611207384 +0000 UTC m=+0.132683659 container init 87dd1ef3d61799ad196222ef0d6abf2c9b9a694e81f58fb3a6bff7eb86e81619 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d24a5474-4856-4e69-8d2c-c3d6842817d3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:45:29 standalone.localdomain podman[536812]: 2025-10-13 15:45:29.515142428 +0000 UTC m=+0.036618743 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:29 standalone.localdomain podman[536812]: 2025-10-13 15:45:29.620880426 +0000 UTC m=+0.142356741 container start 87dd1ef3d61799ad196222ef0d6abf2c9b9a694e81f58fb3a6bff7eb86e81619 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d24a5474-4856-4e69-8d2c-c3d6842817d3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:45:29 standalone.localdomain dnsmasq[536831]: started, version 2.85 cachesize 150
Oct 13 15:45:29 standalone.localdomain dnsmasq[536831]: DNS service limited to local subnets
Oct 13 15:45:29 standalone.localdomain dnsmasq[536831]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:29 standalone.localdomain dnsmasq[536831]: warning: no upstream servers configured
Oct 13 15:45:29 standalone.localdomain dnsmasq-dhcp[536831]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:45:29 standalone.localdomain dnsmasq[536831]: read /var/lib/neutron/dhcp/d24a5474-4856-4e69-8d2c-c3d6842817d3/addn_hosts - 0 addresses
Oct 13 15:45:29 standalone.localdomain dnsmasq-dhcp[536831]: read /var/lib/neutron/dhcp/d24a5474-4856-4e69-8d2c-c3d6842817d3/host
Oct 13 15:45:29 standalone.localdomain dnsmasq-dhcp[536831]: read /var/lib/neutron/dhcp/d24a5474-4856-4e69-8d2c-c3d6842817d3/opts
Oct 13 15:45:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3936: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:29 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:29.801 496978 INFO neutron.agent.dhcp.agent [None req-68c858bb-2a70-465a-9191-3321af38a700 - - - - - -] DHCP configuration for ports {'b9c5d5fb-4115-4d11-8a23-2bc3c94d363e'} is completed
Oct 13 15:45:30 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:30.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41536 DF PROTO=TCP SPT=51520 DPT=9102 SEQ=2316298919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD069B60000000001030307) 
Oct 13 15:45:30 standalone.localdomain ceph-mon[29756]: pgmap v3936: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:30 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:30.962 496978 INFO neutron.agent.linux.ip_lib [None req-252542db-e5fc-4da6-a6c1-2bde3c9e2e07 - - - - - -] Device tap136e54d8-a1 cannot be used as it has no MAC address
Oct 13 15:45:30 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:30.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:30 standalone.localdomain kernel: device tap136e54d8-a1 entered promiscuous mode
Oct 13 15:45:31 standalone.localdomain NetworkManager[5962]: <info>  [1760370330.9997] manager: (tap136e54d8-a1): new Generic device (/org/freedesktop/NetworkManager/Devices/50)
Oct 13 15:45:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:31Z|00238|binding|INFO|Claiming lport 136e54d8-a1fd-4f5e-b8c3-2919a7fa6be3 for this chassis.
Oct 13 15:45:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:31Z|00239|binding|INFO|136e54d8-a1fd-4f5e-b8c3-2919a7fa6be3: Claiming unknown
Oct 13 15:45:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:31.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:31.009 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-04142abb-9330-44a1-acd5-71674c9f0b34', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04142abb-9330-44a1-acd5-71674c9f0b34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02ffe7ae-7c8a-4622-98b4-df65774f0799, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=136e54d8-a1fd-4f5e-b8c3-2919a7fa6be3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:31.010 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 136e54d8-a1fd-4f5e-b8c3-2919a7fa6be3 in datapath 04142abb-9330-44a1-acd5-71674c9f0b34 bound to our chassis
Oct 13 15:45:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:31.013 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 04142abb-9330-44a1-acd5-71674c9f0b34 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:31.014 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[f74716ee-13c7-4f9a-89b0-1aa0f406c56e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:31Z|00240|binding|INFO|Setting lport 136e54d8-a1fd-4f5e-b8c3-2919a7fa6be3 ovn-installed in OVS
Oct 13 15:45:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:31Z|00241|binding|INFO|Setting lport 136e54d8-a1fd-4f5e-b8c3-2919a7fa6be3 up in Southbound
Oct 13 15:45:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:31.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:31.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:31 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:31.279 496978 INFO neutron.agent.linux.ip_lib [None req-49384c60-2e6c-4ba8-81d8-ec2373bfdf17 - - - - - -] Device tap4df9c34f-66 cannot be used as it has no MAC address
Oct 13 15:45:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:31.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:31 standalone.localdomain kernel: device tap4df9c34f-66 entered promiscuous mode
Oct 13 15:45:31 standalone.localdomain NetworkManager[5962]: <info>  [1760370331.3207] manager: (tap4df9c34f-66): new Generic device (/org/freedesktop/NetworkManager/Devices/51)
Oct 13 15:45:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:31Z|00242|binding|INFO|Claiming lport 4df9c34f-6631-4a03-88eb-6814e6e40644 for this chassis.
Oct 13 15:45:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:31Z|00243|binding|INFO|4df9c34f-6631-4a03-88eb-6814e6e40644: Claiming unknown
Oct 13 15:45:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:31.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:31Z|00244|binding|INFO|Removing iface tap136e54d8-a1 ovn-installed in OVS
Oct 13 15:45:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:31Z|00245|binding|INFO|Removing lport 136e54d8-a1fd-4f5e-b8c3-2919a7fa6be3 ovn-installed in OVS
Oct 13 15:45:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:31.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:31.327 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 3605645f-8107-4899-aead-6c1b2ccbc031 with type ""
Oct 13 15:45:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:31.329 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-04142abb-9330-44a1-acd5-71674c9f0b34', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04142abb-9330-44a1-acd5-71674c9f0b34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02ffe7ae-7c8a-4622-98b4-df65774f0799, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=136e54d8-a1fd-4f5e-b8c3-2919a7fa6be3) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:31.331 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 136e54d8-a1fd-4f5e-b8c3-2919a7fa6be3 in datapath 04142abb-9330-44a1-acd5-71674c9f0b34 unbound from our chassis
Oct 13 15:45:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:31.333 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 04142abb-9330-44a1-acd5-71674c9f0b34 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:31.338 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-65aaae47-faab-4db8-a516-d8d2ce33ab92', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65aaae47-faab-4db8-a516-d8d2ce33ab92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bcf8139-2178-4a5e-86d2-0d4e7c4077d1, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=4df9c34f-6631-4a03-88eb-6814e6e40644) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:31.334 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[deaf2f1a-7ed5-4b59-a40c-5f1ab448b31f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:31.341 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 4df9c34f-6631-4a03-88eb-6814e6e40644 in datapath 65aaae47-faab-4db8-a516-d8d2ce33ab92 bound to our chassis
Oct 13 15:45:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:31.342 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 65aaae47-faab-4db8-a516-d8d2ce33ab92 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:31.343 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[01beb971-5f2d-4f2d-93b2-0435b6ac31bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:31.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:31.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:31Z|00246|binding|INFO|Setting lport 4df9c34f-6631-4a03-88eb-6814e6e40644 ovn-installed in OVS
Oct 13 15:45:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:31Z|00247|binding|INFO|Setting lport 4df9c34f-6631-4a03-88eb-6814e6e40644 up in Southbound
Oct 13 15:45:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:31.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:31.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3937: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:31.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:32.099 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:31Z, description=, device_id=89840cdd-2a77-4591-b0bc-420f8ca97708, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188901ff70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188901fac0>], id=b9baeb12-8072-4325-b1b3-5f5a90afedcd, ip_allocation=immediate, mac_address=fa:16:3e:9a:38:f8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1418, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:45:31Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:45:32 standalone.localdomain podman[536934]: 
Oct 13 15:45:32 standalone.localdomain podman[536934]: 2025-10-13 15:45:32.221117048 +0000 UTC m=+0.089677628 container create 8f6a36c9faa1475c2f451f5b1fd44c5ec0a57e752579ac168107f6613f54757a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04142abb-9330-44a1-acd5-71674c9f0b34, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:45:32 standalone.localdomain systemd[1]: Started libpod-conmon-8f6a36c9faa1475c2f451f5b1fd44c5ec0a57e752579ac168107f6613f54757a.scope.
Oct 13 15:45:32 standalone.localdomain podman[536934]: 2025-10-13 15:45:32.168305451 +0000 UTC m=+0.036866051 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:32 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:32 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be08dabb8144536678194d965fa2c856077179053e90898146824e86efaa4dd9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:32 standalone.localdomain podman[536934]: 2025-10-13 15:45:32.293521196 +0000 UTC m=+0.162081766 container init 8f6a36c9faa1475c2f451f5b1fd44c5ec0a57e752579ac168107f6613f54757a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04142abb-9330-44a1-acd5-71674c9f0b34, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:45:32 standalone.localdomain podman[536934]: 2025-10-13 15:45:32.304666713 +0000 UTC m=+0.173227263 container start 8f6a36c9faa1475c2f451f5b1fd44c5ec0a57e752579ac168107f6613f54757a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04142abb-9330-44a1-acd5-71674c9f0b34, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:45:32 standalone.localdomain dnsmasq[536985]: started, version 2.85 cachesize 150
Oct 13 15:45:32 standalone.localdomain dnsmasq[536985]: DNS service limited to local subnets
Oct 13 15:45:32 standalone.localdomain dnsmasq[536985]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:32 standalone.localdomain dnsmasq[536985]: warning: no upstream servers configured
Oct 13 15:45:32 standalone.localdomain dnsmasq-dhcp[536985]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:45:32 standalone.localdomain dnsmasq[536985]: read /var/lib/neutron/dhcp/04142abb-9330-44a1-acd5-71674c9f0b34/addn_hosts - 0 addresses
Oct 13 15:45:32 standalone.localdomain dnsmasq-dhcp[536985]: read /var/lib/neutron/dhcp/04142abb-9330-44a1-acd5-71674c9f0b34/host
Oct 13 15:45:32 standalone.localdomain dnsmasq-dhcp[536985]: read /var/lib/neutron/dhcp/04142abb-9330-44a1-acd5-71674c9f0b34/opts
Oct 13 15:45:32 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:45:32 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:45:32 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:45:32 standalone.localdomain podman[536984]: 2025-10-13 15:45:32.362016573 +0000 UTC m=+0.051558429 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:45:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:32.446 496978 INFO neutron.agent.dhcp.agent [None req-99b6fab1-0cf0-40c8-918d-7886b0c4122f - - - - - -] DHCP configuration for ports {'e5378dc2-252a-471a-9330-b0791590e700'} is completed
Oct 13 15:45:32 standalone.localdomain podman[537014]: 
Oct 13 15:45:32 standalone.localdomain podman[537014]: 2025-10-13 15:45:32.580078755 +0000 UTC m=+0.092675812 container create 150e777573f6516aa2b9403f2304f4317d531545c06bdf4fc5bc889d5e6fe4a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-65aaae47-faab-4db8-a516-d8d2ce33ab92, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:45:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:32.582 496978 INFO neutron.agent.dhcp.agent [None req-b5651581-71dc-4b64-b074-b73595a33557 - - - - - -] DHCP configuration for ports {'b9baeb12-8072-4325-b1b3-5f5a90afedcd'} is completed
Oct 13 15:45:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:45:32 standalone.localdomain systemd[1]: Started libpod-conmon-150e777573f6516aa2b9403f2304f4317d531545c06bdf4fc5bc889d5e6fe4a8.scope.
Oct 13 15:45:32 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:32 standalone.localdomain podman[537014]: 2025-10-13 15:45:32.53441387 +0000 UTC m=+0.047010967 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:32 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839d00b8e59b02c6e606b791fc94dde7b2400cb340e49653dbbf82b5d8381386/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:32 standalone.localdomain dnsmasq[536985]: exiting on receipt of SIGTERM
Oct 13 15:45:32 standalone.localdomain podman[537040]: 2025-10-13 15:45:32.643179553 +0000 UTC m=+0.101713534 container kill 8f6a36c9faa1475c2f451f5b1fd44c5ec0a57e752579ac168107f6613f54757a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04142abb-9330-44a1-acd5-71674c9f0b34, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:32 standalone.localdomain systemd[1]: libpod-8f6a36c9faa1475c2f451f5b1fd44c5ec0a57e752579ac168107f6613f54757a.scope: Deactivated successfully.
Oct 13 15:45:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:32Z|00248|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:45:32 standalone.localdomain podman[537014]: 2025-10-13 15:45:32.649145399 +0000 UTC m=+0.161742426 container init 150e777573f6516aa2b9403f2304f4317d531545c06bdf4fc5bc889d5e6fe4a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-65aaae47-faab-4db8-a516-d8d2ce33ab92, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:45:32 standalone.localdomain podman[537014]: 2025-10-13 15:45:32.657865712 +0000 UTC m=+0.170462729 container start 150e777573f6516aa2b9403f2304f4317d531545c06bdf4fc5bc889d5e6fe4a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-65aaae47-faab-4db8-a516-d8d2ce33ab92, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:45:32 standalone.localdomain dnsmasq[537065]: started, version 2.85 cachesize 150
Oct 13 15:45:32 standalone.localdomain dnsmasq[537065]: DNS service limited to local subnets
Oct 13 15:45:32 standalone.localdomain dnsmasq[537065]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:32 standalone.localdomain dnsmasq[537065]: warning: no upstream servers configured
Oct 13 15:45:32 standalone.localdomain dnsmasq-dhcp[537065]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:45:32 standalone.localdomain dnsmasq[537065]: read /var/lib/neutron/dhcp/65aaae47-faab-4db8-a516-d8d2ce33ab92/addn_hosts - 0 addresses
Oct 13 15:45:32 standalone.localdomain dnsmasq-dhcp[537065]: read /var/lib/neutron/dhcp/65aaae47-faab-4db8-a516-d8d2ce33ab92/host
Oct 13 15:45:32 standalone.localdomain dnsmasq-dhcp[537065]: read /var/lib/neutron/dhcp/65aaae47-faab-4db8-a516-d8d2ce33ab92/opts
Oct 13 15:45:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:32.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:32 standalone.localdomain podman[537057]: 2025-10-13 15:45:32.725107589 +0000 UTC m=+0.068061384 container died 8f6a36c9faa1475c2f451f5b1fd44c5ec0a57e752579ac168107f6613f54757a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04142abb-9330-44a1-acd5-71674c9f0b34, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:45:32 standalone.localdomain ceph-mon[29756]: pgmap v3937: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:32 standalone.localdomain podman[537057]: 2025-10-13 15:45:32.806006902 +0000 UTC m=+0.148960707 container cleanup 8f6a36c9faa1475c2f451f5b1fd44c5ec0a57e752579ac168107f6613f54757a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04142abb-9330-44a1-acd5-71674c9f0b34, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:45:32 standalone.localdomain systemd[1]: libpod-conmon-8f6a36c9faa1475c2f451f5b1fd44c5ec0a57e752579ac168107f6613f54757a.scope: Deactivated successfully.
Oct 13 15:45:32 standalone.localdomain podman[537059]: 2025-10-13 15:45:32.832958313 +0000 UTC m=+0.166953579 container remove 8f6a36c9faa1475c2f451f5b1fd44c5ec0a57e752579ac168107f6613f54757a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04142abb-9330-44a1-acd5-71674c9f0b34, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:45:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:32.838 496978 INFO neutron.agent.dhcp.agent [None req-d8da2a77-522b-4961-bd1d-e018e7085cc0 - - - - - -] DHCP configuration for ports {'435950da-2059-471f-b49d-8d89d88a323b'} is completed
Oct 13 15:45:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:32.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:32 standalone.localdomain kernel: device tap136e54d8-a1 left promiscuous mode
Oct 13 15:45:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:32.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:32.875 496978 INFO neutron.agent.dhcp.agent [None req-5651b0f9-d0a9-443b-82e6-5a762577e812 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:32.876 496978 INFO neutron.agent.dhcp.agent [None req-5651b0f9-d0a9-443b-82e6-5a762577e812 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:32.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-be08dabb8144536678194d965fa2c856077179053e90898146824e86efaa4dd9-merged.mount: Deactivated successfully.
Oct 13 15:45:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f6a36c9faa1475c2f451f5b1fd44c5ec0a57e752579ac168107f6613f54757a-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:33 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d04142abb\x2d9330\x2d44a1\x2dacd5\x2d71674c9f0b34.mount: Deactivated successfully.
Oct 13 15:45:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3938: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:34.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:34 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:34.252 2 INFO neutron.agent.securitygroups_rpc [None req-86fa64d3-bbad-4dd0-ab38-eaa44e179380 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:34.319 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:33Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188908d460>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188ae4d1f0>], id=2f7e609d-cd27-4de6-b3c3-7853c0398e51, ip_allocation=immediate, mac_address=fa:16:3e:93:d9:77, name=tempest-PortsTestJSON-1930638454, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:45:28Z, description=, dns_domain=, id=65aaae47-faab-4db8-a516-d8d2ce33ab92, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-610699653, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13934, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1399, status=ACTIVE, subnets=['0f44f45f-60b9-4661-971a-c5516056af94'], tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:45:30Z, vlan_transparent=None, network_id=65aaae47-faab-4db8-a516-d8d2ce33ab92, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['e5246d74-7a72-4d89-9669-125c9e305a6a'], standard_attr_id=1432, status=DOWN, tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:45:33Z on network 65aaae47-faab-4db8-a516-d8d2ce33ab92
Oct 13 15:45:34 standalone.localdomain podman[537107]: 2025-10-13 15:45:34.592006154 +0000 UTC m=+0.068469067 container kill 150e777573f6516aa2b9403f2304f4317d531545c06bdf4fc5bc889d5e6fe4a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-65aaae47-faab-4db8-a516-d8d2ce33ab92, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 13 15:45:34 standalone.localdomain dnsmasq[537065]: read /var/lib/neutron/dhcp/65aaae47-faab-4db8-a516-d8d2ce33ab92/addn_hosts - 1 addresses
Oct 13 15:45:34 standalone.localdomain dnsmasq-dhcp[537065]: read /var/lib/neutron/dhcp/65aaae47-faab-4db8-a516-d8d2ce33ab92/host
Oct 13 15:45:34 standalone.localdomain dnsmasq-dhcp[537065]: read /var/lib/neutron/dhcp/65aaae47-faab-4db8-a516-d8d2ce33ab92/opts
Oct 13 15:45:34 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 15:45:34 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 15:45:34 standalone.localdomain ceph-mon[29756]: pgmap v3938: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:34.906 496978 INFO neutron.agent.dhcp.agent [None req-58d64496-2875-43f0-88d9-4a1bc038af39 - - - - - -] DHCP configuration for ports {'2f7e609d-cd27-4de6-b3c3-7853c0398e51'} is completed
Oct 13 15:45:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3939: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:35 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:45:35 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:35.782 2 INFO neutron.agent.securitygroups_rpc [None req-68ee94fc-7909-4015-a463-d95b626dccd6 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:35 standalone.localdomain systemd[1]: tmp-crun.NEAVDu.mount: Deactivated successfully.
Oct 13 15:45:35 standalone.localdomain podman[537129]: 2025-10-13 15:45:35.821798656 +0000 UTC m=+0.088416260 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:45:35 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:35.846 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:34Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889034fa0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889034e50>], id=fa64140d-4e8d-49a8-aa2a-18e0377a767c, ip_allocation=immediate, mac_address=fa:16:3e:85:35:94, name=tempest-PortsTestJSON-1818713340, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:45:28Z, description=, dns_domain=, id=65aaae47-faab-4db8-a516-d8d2ce33ab92, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-610699653, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13934, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1399, status=ACTIVE, subnets=['0f44f45f-60b9-4661-971a-c5516056af94'], tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:45:30Z, vlan_transparent=None, network_id=65aaae47-faab-4db8-a516-d8d2ce33ab92, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['e5246d74-7a72-4d89-9669-125c9e305a6a'], standard_attr_id=1439, status=DOWN, tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:45:35Z on network 65aaae47-faab-4db8-a516-d8d2ce33ab92
Oct 13 15:45:35 standalone.localdomain podman[537129]: 2025-10-13 15:45:35.855799286 +0000 UTC m=+0.122416870 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:45:35 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:45:36 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:36.048 496978 INFO neutron.agent.linux.ip_lib [None req-e3e6730e-61b8-4b05-8ff3-edd896260990 - - - - - -] Device tapa0724f98-24 cannot be used as it has no MAC address
Oct 13 15:45:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:36.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:36 standalone.localdomain kernel: device tapa0724f98-24 entered promiscuous mode
Oct 13 15:45:36 standalone.localdomain NetworkManager[5962]: <info>  [1760370336.0861] manager: (tapa0724f98-24): new Generic device (/org/freedesktop/NetworkManager/Devices/52)
Oct 13 15:45:36 standalone.localdomain systemd-udevd[537212]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:45:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:36Z|00249|binding|INFO|Claiming lport a0724f98-2479-430d-b750-98e925602501 for this chassis.
Oct 13 15:45:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:36Z|00250|binding|INFO|a0724f98-2479-430d-b750-98e925602501: Claiming unknown
Oct 13 15:45:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:36.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:36.100 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-0def371f-2710-4225-89ab-3d82571d9e8b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0def371f-2710-4225-89ab-3d82571d9e8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a062e50-27b5-47aa-8437-6b43173a76da, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=a0724f98-2479-430d-b750-98e925602501) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:36.103 378821 INFO neutron.agent.ovn.metadata.agent [-] Port a0724f98-2479-430d-b750-98e925602501 in datapath 0def371f-2710-4225-89ab-3d82571d9e8b bound to our chassis
Oct 13 15:45:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:36.107 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0def371f-2710-4225-89ab-3d82571d9e8b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:36.109 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[b53395b2-5488-40d9-91d8-e49df8d35939]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:36 standalone.localdomain dnsmasq[537065]: read /var/lib/neutron/dhcp/65aaae47-faab-4db8-a516-d8d2ce33ab92/addn_hosts - 2 addresses
Oct 13 15:45:36 standalone.localdomain dnsmasq-dhcp[537065]: read /var/lib/neutron/dhcp/65aaae47-faab-4db8-a516-d8d2ce33ab92/host
Oct 13 15:45:36 standalone.localdomain podman[537188]: 2025-10-13 15:45:36.111387889 +0000 UTC m=+0.074513475 container kill 150e777573f6516aa2b9403f2304f4317d531545c06bdf4fc5bc889d5e6fe4a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-65aaae47-faab-4db8-a516-d8d2ce33ab92, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:45:36 standalone.localdomain dnsmasq-dhcp[537065]: read /var/lib/neutron/dhcp/65aaae47-faab-4db8-a516-d8d2ce33ab92/opts
Oct 13 15:45:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:36Z|00251|binding|INFO|Setting lport a0724f98-2479-430d-b750-98e925602501 ovn-installed in OVS
Oct 13 15:45:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:36Z|00252|binding|INFO|Setting lport a0724f98-2479-430d-b750-98e925602501 up in Southbound
Oct 13 15:45:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:36.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:36.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:36 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:45:36 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:45:36 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:45:36 standalone.localdomain podman[537204]: 2025-10-13 15:45:36.186659247 +0000 UTC m=+0.109940330 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:36.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:36Z|00253|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:45:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:36.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:36Z|00254|binding|INFO|Removing iface tapa0724f98-24 ovn-installed in OVS
Oct 13 15:45:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:36.451 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 16cbb04f-83da-4763-af03-a1a923e10734 with type ""
Oct 13 15:45:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:36.452 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-0def371f-2710-4225-89ab-3d82571d9e8b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0def371f-2710-4225-89ab-3d82571d9e8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a062e50-27b5-47aa-8437-6b43173a76da, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=a0724f98-2479-430d-b750-98e925602501) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:36.453 378821 INFO neutron.agent.ovn.metadata.agent [-] Port a0724f98-2479-430d-b750-98e925602501 in datapath 0def371f-2710-4225-89ab-3d82571d9e8b unbound from our chassis
Oct 13 15:45:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:36Z|00255|binding|INFO|Removing lport a0724f98-2479-430d-b750-98e925602501 ovn-installed in OVS
Oct 13 15:45:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:36.454 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0def371f-2710-4225-89ab-3d82571d9e8b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:36.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:36.456 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ea2b55-42f9-45e4-a020-fc177e136328]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:36 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:36.484 496978 INFO neutron.agent.dhcp.agent [None req-d9a8e34f-21e7-4df8-967e-fca28109e9ca - - - - - -] DHCP configuration for ports {'fa64140d-4e8d-49a8-aa2a-18e0377a767c'} is completed
Oct 13 15:45:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:36.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:36 standalone.localdomain ceph-mon[29756]: pgmap v3939: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:37 standalone.localdomain podman[537291]: 
Oct 13 15:45:37 standalone.localdomain podman[537291]: 2025-10-13 15:45:37.076886526 +0000 UTC m=+0.088181741 container create dab42a0bcd33012efafcf7feefc45bd36bc04afc0fb361a2624a7fb4e4c8edf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0def371f-2710-4225-89ab-3d82571d9e8b, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:37 standalone.localdomain systemd[1]: Started libpod-conmon-dab42a0bcd33012efafcf7feefc45bd36bc04afc0fb361a2624a7fb4e4c8edf1.scope.
Oct 13 15:45:37 standalone.localdomain podman[537291]: 2025-10-13 15:45:37.026516955 +0000 UTC m=+0.037812200 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:37 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85e4e075539f2415ef4b4d95be686dcae8b12d9301b72d130e17f49a441934f8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:37 standalone.localdomain podman[537291]: 2025-10-13 15:45:37.163472977 +0000 UTC m=+0.174768182 container init dab42a0bcd33012efafcf7feefc45bd36bc04afc0fb361a2624a7fb4e4c8edf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0def371f-2710-4225-89ab-3d82571d9e8b, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:37 standalone.localdomain podman[537291]: 2025-10-13 15:45:37.172590751 +0000 UTC m=+0.183885966 container start dab42a0bcd33012efafcf7feefc45bd36bc04afc0fb361a2624a7fb4e4c8edf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0def371f-2710-4225-89ab-3d82571d9e8b, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:45:37 standalone.localdomain dnsmasq[537309]: started, version 2.85 cachesize 150
Oct 13 15:45:37 standalone.localdomain dnsmasq[537309]: DNS service limited to local subnets
Oct 13 15:45:37 standalone.localdomain dnsmasq[537309]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:37 standalone.localdomain dnsmasq[537309]: warning: no upstream servers configured
Oct 13 15:45:37 standalone.localdomain dnsmasq-dhcp[537309]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:45:37 standalone.localdomain dnsmasq[537309]: read /var/lib/neutron/dhcp/0def371f-2710-4225-89ab-3d82571d9e8b/addn_hosts - 0 addresses
Oct 13 15:45:37 standalone.localdomain dnsmasq-dhcp[537309]: read /var/lib/neutron/dhcp/0def371f-2710-4225-89ab-3d82571d9e8b/host
Oct 13 15:45:37 standalone.localdomain dnsmasq-dhcp[537309]: read /var/lib/neutron/dhcp/0def371f-2710-4225-89ab-3d82571d9e8b/opts
Oct 13 15:45:37 standalone.localdomain kernel: device tapa0724f98-24 left promiscuous mode
Oct 13 15:45:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:37.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:37.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.323 496978 INFO neutron.agent.dhcp.agent [None req-79fa84cf-f551-43b2-a210-51e8fc0ad92f - - - - - -] DHCP configuration for ports {'50ba921a-2844-464a-8522-aaef4a13d3ee'} is completed
Oct 13 15:45:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:45:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3940: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:45:37 standalone.localdomain podman[537329]: 2025-10-13 15:45:37.767929273 +0000 UTC m=+0.067075454 container kill dab42a0bcd33012efafcf7feefc45bd36bc04afc0fb361a2624a7fb4e4c8edf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0def371f-2710-4225-89ab-3d82571d9e8b, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:45:37 standalone.localdomain dnsmasq[537309]: read /var/lib/neutron/dhcp/0def371f-2710-4225-89ab-3d82571d9e8b/addn_hosts - 0 addresses
Oct 13 15:45:37 standalone.localdomain dnsmasq-dhcp[537309]: read /var/lib/neutron/dhcp/0def371f-2710-4225-89ab-3d82571d9e8b/host
Oct 13 15:45:37 standalone.localdomain dnsmasq-dhcp[537309]: read /var/lib/neutron/dhcp/0def371f-2710-4225-89ab-3d82571d9e8b/opts
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent [-] Unable to reload_allocations dhcp for 0def371f-2710-4225-89ab-3d82571d9e8b.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapa0724f98-24 not found in namespace qdhcp-0def371f-2710-4225-89ab-3d82571d9e8b.
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     return fut.result()
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     raise self._exception
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapa0724f98-24 not found in namespace qdhcp-0def371f-2710-4225-89ab-3d82571d9e8b.
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.802 496978 ERROR neutron.agent.dhcp.agent 
Oct 13 15:45:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:37.810 496978 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Oct 13 15:45:37 standalone.localdomain podman[537338]: 2025-10-13 15:45:37.818020905 +0000 UTC m=+0.086373025 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, name=ubi9-minimal)
Oct 13 15:45:37 standalone.localdomain podman[537338]: 2025-10-13 15:45:37.825157007 +0000 UTC m=+0.093509207 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, build-date=2025-08-20T13:12:41, version=9.6, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm)
Oct 13 15:45:37 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:45:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:37.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:37 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:37.890 2 INFO neutron.agent.securitygroups_rpc [None req-b1b7a26b-149c-4796-8678-2f3959a9c385 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:38.024 496978 INFO neutron.agent.dhcp.agent [None req-c7bcb7f4-cbe2-4e87-9c77-4fd87e2dbbed - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:45:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:38.025 496978 INFO neutron.agent.dhcp.agent [-] Starting network 0def371f-2710-4225-89ab-3d82571d9e8b dhcp configuration
Oct 13 15:45:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:38.028 496978 INFO neutron.agent.dhcp.agent [-] Starting network 8669e1a6-2c0a-4fef-817e-c674b6e65b80 dhcp configuration
Oct 13 15:45:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:38.029 496978 INFO neutron.agent.dhcp.agent [-] Finished network 8669e1a6-2c0a-4fef-817e-c674b6e65b80 dhcp configuration
Oct 13 15:45:38 standalone.localdomain dnsmasq[537309]: exiting on receipt of SIGTERM
Oct 13 15:45:38 standalone.localdomain podman[537381]: 2025-10-13 15:45:38.20386491 +0000 UTC m=+0.075278999 container kill dab42a0bcd33012efafcf7feefc45bd36bc04afc0fb361a2624a7fb4e4c8edf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0def371f-2710-4225-89ab-3d82571d9e8b, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:38 standalone.localdomain systemd[1]: tmp-crun.KB9ZXN.mount: Deactivated successfully.
Oct 13 15:45:38 standalone.localdomain systemd[1]: libpod-dab42a0bcd33012efafcf7feefc45bd36bc04afc0fb361a2624a7fb4e4c8edf1.scope: Deactivated successfully.
Oct 13 15:45:38 standalone.localdomain podman[537395]: 2025-10-13 15:45:38.258175825 +0000 UTC m=+0.041000860 container died dab42a0bcd33012efafcf7feefc45bd36bc04afc0fb361a2624a7fb4e4c8edf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0def371f-2710-4225-89ab-3d82571d9e8b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:45:38 standalone.localdomain podman[537395]: 2025-10-13 15:45:38.342823225 +0000 UTC m=+0.125648180 container cleanup dab42a0bcd33012efafcf7feefc45bd36bc04afc0fb361a2624a7fb4e4c8edf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0def371f-2710-4225-89ab-3d82571d9e8b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:45:38 standalone.localdomain systemd[1]: libpod-conmon-dab42a0bcd33012efafcf7feefc45bd36bc04afc0fb361a2624a7fb4e4c8edf1.scope: Deactivated successfully.
Oct 13 15:45:38 standalone.localdomain podman[537397]: 2025-10-13 15:45:38.367991361 +0000 UTC m=+0.140544725 container remove dab42a0bcd33012efafcf7feefc45bd36bc04afc0fb361a2624a7fb4e4c8edf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0def371f-2710-4225-89ab-3d82571d9e8b, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:38.492 496978 INFO neutron.agent.dhcp.agent [None req-ec322036-cd71-43bf-931b-a3bdda9192ea - - - - - -] Finished network 0def371f-2710-4225-89ab-3d82571d9e8b dhcp configuration
Oct 13 15:45:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:38.493 496978 INFO neutron.agent.dhcp.agent [None req-c7bcb7f4-cbe2-4e87-9c77-4fd87e2dbbed - - - - - -] Synchronizing state complete
Oct 13 15:45:38 standalone.localdomain ceph-mon[29756]: pgmap v3940: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:38 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:38Z|00256|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:45:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:38.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:38 standalone.localdomain dnsmasq[537065]: read /var/lib/neutron/dhcp/65aaae47-faab-4db8-a516-d8d2ce33ab92/addn_hosts - 1 addresses
Oct 13 15:45:38 standalone.localdomain dnsmasq-dhcp[537065]: read /var/lib/neutron/dhcp/65aaae47-faab-4db8-a516-d8d2ce33ab92/host
Oct 13 15:45:38 standalone.localdomain podman[537457]: 2025-10-13 15:45:38.762990472 +0000 UTC m=+0.073132232 container kill 150e777573f6516aa2b9403f2304f4317d531545c06bdf4fc5bc889d5e6fe4a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-65aaae47-faab-4db8-a516-d8d2ce33ab92, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:38 standalone.localdomain dnsmasq-dhcp[537065]: read /var/lib/neutron/dhcp/65aaae47-faab-4db8-a516-d8d2ce33ab92/opts
Oct 13 15:45:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-85e4e075539f2415ef4b4d95be686dcae8b12d9301b72d130e17f49a441934f8-merged.mount: Deactivated successfully.
Oct 13 15:45:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dab42a0bcd33012efafcf7feefc45bd36bc04afc0fb361a2624a7fb4e4c8edf1-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:38 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d0def371f\x2d2710\x2d4225\x2d89ab\x2d3d82571d9e8b.mount: Deactivated successfully.
Oct 13 15:45:38 standalone.localdomain podman[537473]: 2025-10-13 15:45:38.825268774 +0000 UTC m=+0.069648013 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:45:38 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:45:38 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:45:38 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:45:39 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:39.332 2 INFO neutron.agent.securitygroups_rpc [None req-2fe4439c-7921-4b78-bb65-1bcf60171ad1 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:39 standalone.localdomain dnsmasq[537065]: read /var/lib/neutron/dhcp/65aaae47-faab-4db8-a516-d8d2ce33ab92/addn_hosts - 0 addresses
Oct 13 15:45:39 standalone.localdomain dnsmasq-dhcp[537065]: read /var/lib/neutron/dhcp/65aaae47-faab-4db8-a516-d8d2ce33ab92/host
Oct 13 15:45:39 standalone.localdomain dnsmasq-dhcp[537065]: read /var/lib/neutron/dhcp/65aaae47-faab-4db8-a516-d8d2ce33ab92/opts
Oct 13 15:45:39 standalone.localdomain podman[537521]: 2025-10-13 15:45:39.587774409 +0000 UTC m=+0.059430345 container kill 150e777573f6516aa2b9403f2304f4317d531545c06bdf4fc5bc889d5e6fe4a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-65aaae47-faab-4db8-a516-d8d2ce33ab92, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:45:39 standalone.localdomain podman[537536]: 2025-10-13 15:45:39.698837844 +0000 UTC m=+0.085047694 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:45:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3941: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:39 standalone.localdomain podman[537536]: 2025-10-13 15:45:39.710881969 +0000 UTC m=+0.097091869 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:45:39 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:45:39 standalone.localdomain systemd[1]: tmp-crun.CKUIbJ.mount: Deactivated successfully.
Oct 13 15:45:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:40.510 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:39Z, description=, device_id=935b7e3a-2fa2-4b40-8bbf-b9c686db1e8b, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188924e580>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188924e430>], id=bd8a28f2-5eb1-4049-983c-1e4442603bed, ip_allocation=immediate, mac_address=fa:16:3e:0b:05:f9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1475, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:45:39Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:45:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:40.723 496978 INFO neutron.agent.linux.ip_lib [None req-501f74de-5a3e-4a49-bbb5-7acbdd4f411b - - - - - -] Device tap4bdbbaea-e2 cannot be used as it has no MAC address
Oct 13 15:45:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:40.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:40 standalone.localdomain podman[537585]: 2025-10-13 15:45:40.746543176 +0000 UTC m=+0.052623353 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:40 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:45:40 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:45:40 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:45:40 standalone.localdomain kernel: device tap4bdbbaea-e2 entered promiscuous mode
Oct 13 15:45:40 standalone.localdomain NetworkManager[5962]: <info>  [1760370340.7523] manager: (tap4bdbbaea-e2): new Generic device (/org/freedesktop/NetworkManager/Devices/53)
Oct 13 15:45:40 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:40Z|00257|binding|INFO|Claiming lport 4bdbbaea-e27b-4be7-a8ff-e96f89f5b9bf for this chassis.
Oct 13 15:45:40 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:40Z|00258|binding|INFO|4bdbbaea-e27b-4be7-a8ff-e96f89f5b9bf: Claiming unknown
Oct 13 15:45:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:40.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:40 standalone.localdomain systemd-udevd[537609]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:45:40 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:40.764 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-357f4107-b3fb-41d3-b9fc-38e46382a636', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-357f4107-b3fb-41d3-b9fc-38e46382a636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4071f84865644ae796190f7e252315e4', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1278b6f7-d94a-4f0c-a65f-4e19b537c178, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=4bdbbaea-e27b-4be7-a8ff-e96f89f5b9bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:40 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:40.767 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 4bdbbaea-e27b-4be7-a8ff-e96f89f5b9bf in datapath 357f4107-b3fb-41d3-b9fc-38e46382a636 bound to our chassis
Oct 13 15:45:40 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:40.768 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 357f4107-b3fb-41d3-b9fc-38e46382a636 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:40 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:40.769 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[a38c940c-4e50-4412-816e-996943a7cda4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:40 standalone.localdomain ceph-mon[29756]: pgmap v3941: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:40 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:40Z|00259|binding|INFO|Setting lport 4bdbbaea-e27b-4be7-a8ff-e96f89f5b9bf ovn-installed in OVS
Oct 13 15:45:40 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:40Z|00260|binding|INFO|Setting lport 4bdbbaea-e27b-4be7-a8ff-e96f89f5b9bf up in Southbound
Oct 13 15:45:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:40.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:40.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:40 standalone.localdomain dnsmasq[537065]: exiting on receipt of SIGTERM
Oct 13 15:45:40 standalone.localdomain podman[537631]: 2025-10-13 15:45:40.894622935 +0000 UTC m=+0.053916463 container kill 150e777573f6516aa2b9403f2304f4317d531545c06bdf4fc5bc889d5e6fe4a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-65aaae47-faab-4db8-a516-d8d2ce33ab92, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:45:40 standalone.localdomain systemd[1]: libpod-150e777573f6516aa2b9403f2304f4317d531545c06bdf4fc5bc889d5e6fe4a8.scope: Deactivated successfully.
Oct 13 15:45:40 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:40Z|00261|binding|INFO|Removing iface tap4df9c34f-66 ovn-installed in OVS
Oct 13 15:45:40 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:40.913 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port a7c26862-301a-4ea8-baeb-2b860db4b792 with type ""
Oct 13 15:45:40 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:40Z|00262|binding|INFO|Removing lport 4df9c34f-6631-4a03-88eb-6814e6e40644 ovn-installed in OVS
Oct 13 15:45:40 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:40.914 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-65aaae47-faab-4db8-a516-d8d2ce33ab92', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65aaae47-faab-4db8-a516-d8d2ce33ab92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bcf8139-2178-4a5e-86d2-0d4e7c4077d1, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=4df9c34f-6631-4a03-88eb-6814e6e40644) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:40.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:40 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:40.915 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 4df9c34f-6631-4a03-88eb-6814e6e40644 in datapath 65aaae47-faab-4db8-a516-d8d2ce33ab92 unbound from our chassis
Oct 13 15:45:40 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:40.918 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65aaae47-faab-4db8-a516-d8d2ce33ab92, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:45:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:40.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:40 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:40.919 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[0be006a7-dbac-47af-872b-f282cd64eb03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:40 standalone.localdomain podman[537650]: 2025-10-13 15:45:40.955717971 +0000 UTC m=+0.052036255 container died 150e777573f6516aa2b9403f2304f4317d531545c06bdf4fc5bc889d5e6fe4a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-65aaae47-faab-4db8-a516-d8d2ce33ab92, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:40 standalone.localdomain podman[537650]: 2025-10-13 15:45:40.983716074 +0000 UTC m=+0.080034328 container cleanup 150e777573f6516aa2b9403f2304f4317d531545c06bdf4fc5bc889d5e6fe4a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-65aaae47-faab-4db8-a516-d8d2ce33ab92, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:45:40 standalone.localdomain systemd[1]: libpod-conmon-150e777573f6516aa2b9403f2304f4317d531545c06bdf4fc5bc889d5e6fe4a8.scope: Deactivated successfully.
Oct 13 15:45:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:40.991 496978 INFO neutron.agent.dhcp.agent [None req-9b63a5db-9b68-45aa-a9b6-6f6dd338a5f1 - - - - - -] DHCP configuration for ports {'bd8a28f2-5eb1-4049-983c-1e4442603bed'} is completed
Oct 13 15:45:41 standalone.localdomain podman[537652]: 2025-10-13 15:45:41.03135797 +0000 UTC m=+0.120410897 container remove 150e777573f6516aa2b9403f2304f4317d531545c06bdf4fc5bc889d5e6fe4a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-65aaae47-faab-4db8-a516-d8d2ce33ab92, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:45:41 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:41.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:41 standalone.localdomain kernel: device tap4df9c34f-66 left promiscuous mode
Oct 13 15:45:41 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:41.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:41 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:41.159 2 INFO neutron.agent.securitygroups_rpc [None req-0374b79a-9e6a-4131-9a90-4cc5b60adb1f b3ab664f86e14aef808c0b70afd1127d 4071f84865644ae796190f7e252315e4 - - default default] Security group member updated ['7537441e-577a-4568-8769-77153bd4a825']
Oct 13 15:45:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:41.162 496978 INFO neutron.agent.dhcp.agent [None req-cad07ce3-8233-43ca-81af-77019d3cb739 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:41.195 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:41 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:41Z|00263|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:45:41 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:41.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:41 standalone.localdomain podman[467099]: time="2025-10-13T15:45:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:45:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:45:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 420115 "" "Go-http-client/1.1"
Oct 13 15:45:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3942: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:41 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:41.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:41 standalone.localdomain podman[537723]: 
Oct 13 15:45:41 standalone.localdomain podman[537723]: 2025-10-13 15:45:41.805965972 +0000 UTC m=+0.187724886 container create a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:45:41 standalone.localdomain systemd[1]: Started libpod-conmon-a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1.scope.
Oct 13 15:45:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:45:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 50568 "" "Go-http-client/1.1"
Oct 13 15:45:41 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:41 standalone.localdomain podman[537723]: 2025-10-13 15:45:41.764352524 +0000 UTC m=+0.146111428 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:41 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3da98327ed50c7286cf16c632124a034a9c273d907eeabd3fc1246ed7ad5805/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:41 standalone.localdomain podman[537723]: 2025-10-13 15:45:41.883722968 +0000 UTC m=+0.265481902 container init a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 13 15:45:41 standalone.localdomain podman[537723]: 2025-10-13 15:45:41.898012704 +0000 UTC m=+0.279771618 container start a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-839d00b8e59b02c6e606b791fc94dde7b2400cb340e49653dbbf82b5d8381386-merged.mount: Deactivated successfully.
Oct 13 15:45:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-150e777573f6516aa2b9403f2304f4317d531545c06bdf4fc5bc889d5e6fe4a8-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:41 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d65aaae47\x2dfaab\x2d4db8\x2da516\x2dd8d2ce33ab92.mount: Deactivated successfully.
Oct 13 15:45:41 standalone.localdomain dnsmasq[537743]: started, version 2.85 cachesize 150
Oct 13 15:45:41 standalone.localdomain dnsmasq[537743]: DNS service limited to local subnets
Oct 13 15:45:41 standalone.localdomain dnsmasq[537743]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:41 standalone.localdomain dnsmasq[537743]: warning: no upstream servers configured
Oct 13 15:45:41 standalone.localdomain dnsmasq-dhcp[537743]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:45:41 standalone.localdomain dnsmasq[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/addn_hosts - 0 addresses
Oct 13 15:45:41 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/host
Oct 13 15:45:41 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/opts
Oct 13 15:45:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:41.963 496978 INFO neutron.agent.dhcp.agent [None req-501f74de-5a3e-4a49-bbb5-7acbdd4f411b - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:40Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889078130>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889078df0>], id=15fe2540-2528-45c5-a584-cd16f49b7ed6, ip_allocation=immediate, mac_address=fa:16:3e:cc:b2:47, name=tempest-AllowedAddressPairIpV6TestJSON-461367466, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:45:37Z, description=, dns_domain=, id=357f4107-b3fb-41d3-b9fc-38e46382a636, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1261217596, port_security_enabled=True, project_id=4071f84865644ae796190f7e252315e4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23253, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1456, status=ACTIVE, subnets=['7748d93a-9f64-4b56-99c3-67948e71030c'], tags=[], tenant_id=4071f84865644ae796190f7e252315e4, updated_at=2025-10-13T15:45:39Z, vlan_transparent=None, network_id=357f4107-b3fb-41d3-b9fc-38e46382a636, port_security_enabled=True, project_id=4071f84865644ae796190f7e252315e4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['7537441e-577a-4568-8769-77153bd4a825'], standard_attr_id=1477, status=DOWN, tags=[], tenant_id=4071f84865644ae796190f7e252315e4, updated_at=2025-10-13T15:45:40Z on network 357f4107-b3fb-41d3-b9fc-38e46382a636
Oct 13 15:45:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:42.021 496978 INFO neutron.agent.linux.ip_lib [None req-3eb5c136-d530-4acd-82ed-6098ecf28b29 - - - - - -] Device tapbebfb2cb-5d cannot be used as it has no MAC address
Oct 13 15:45:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:42.044 496978 INFO neutron.agent.dhcp.agent [None req-4760f242-e266-4096-a540-951ceb207827 - - - - - -] DHCP configuration for ports {'85194641-5ddb-4373-865e-1410d5872cd3'} is completed
Oct 13 15:45:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:42.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:42 standalone.localdomain kernel: device tapbebfb2cb-5d entered promiscuous mode
Oct 13 15:45:42 standalone.localdomain NetworkManager[5962]: <info>  [1760370342.0610] manager: (tapbebfb2cb-5d): new Generic device (/org/freedesktop/NetworkManager/Devices/54)
Oct 13 15:45:42 standalone.localdomain systemd-udevd[537615]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:45:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:42.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:42Z|00264|binding|INFO|Claiming lport bebfb2cb-5d13-4ccc-9929-1fc3115c20e0 for this chassis.
Oct 13 15:45:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:42Z|00265|binding|INFO|bebfb2cb-5d13-4ccc-9929-1fc3115c20e0: Claiming unknown
Oct 13 15:45:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:42.073 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-094fdf1a-b7ce-4696-904e-ce4465bab8e0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-094fdf1a-b7ce-4696-904e-ce4465bab8e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b703cce8-d8eb-455c-b1af-8b5f01ac164a, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=bebfb2cb-5d13-4ccc-9929-1fc3115c20e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:42.075 378821 INFO neutron.agent.ovn.metadata.agent [-] Port bebfb2cb-5d13-4ccc-9929-1fc3115c20e0 in datapath 094fdf1a-b7ce-4696-904e-ce4465bab8e0 bound to our chassis
Oct 13 15:45:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:42.077 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 094fdf1a-b7ce-4696-904e-ce4465bab8e0 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:42 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:42.078 2 INFO neutron.agent.securitygroups_rpc [None req-b151cd43-8f83-4ca8-928b-dd16013b3d39 b3ab664f86e14aef808c0b70afd1127d 4071f84865644ae796190f7e252315e4 - - default default] Security group member updated ['7537441e-577a-4568-8769-77153bd4a825']
Oct 13 15:45:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:42.079 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9f64fa-fd97-47d3-8da2-5b1edb5ad479]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:42Z|00266|binding|INFO|Setting lport bebfb2cb-5d13-4ccc-9929-1fc3115c20e0 ovn-installed in OVS
Oct 13 15:45:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:42Z|00267|binding|INFO|Setting lport bebfb2cb-5d13-4ccc-9929-1fc3115c20e0 up in Southbound
Oct 13 15:45:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:42.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:42.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:42 standalone.localdomain dnsmasq[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/addn_hosts - 1 addresses
Oct 13 15:45:42 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/host
Oct 13 15:45:42 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/opts
Oct 13 15:45:42 standalone.localdomain podman[537774]: 2025-10-13 15:45:42.161967897 +0000 UTC m=+0.044644993 container kill a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:42.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:42.306 496978 INFO neutron.agent.dhcp.agent [None req-501f74de-5a3e-4a49-bbb5-7acbdd4f411b - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889011850>], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:41Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889011550>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889011a60>], id=09df1d12-cb62-4fec-9db8-81c1b39db468, ip_allocation=immediate, mac_address=fa:16:3e:af:75:a7, name=tempest-AllowedAddressPairIpV6TestJSON-500334303, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:45:37Z, description=, dns_domain=, id=357f4107-b3fb-41d3-b9fc-38e46382a636, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1261217596, port_security_enabled=True, project_id=4071f84865644ae796190f7e252315e4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23253, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1456, status=ACTIVE, subnets=['7748d93a-9f64-4b56-99c3-67948e71030c'], tags=[], tenant_id=4071f84865644ae796190f7e252315e4, updated_at=2025-10-13T15:45:39Z, vlan_transparent=None, network_id=357f4107-b3fb-41d3-b9fc-38e46382a636, port_security_enabled=True, project_id=4071f84865644ae796190f7e252315e4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['7537441e-577a-4568-8769-77153bd4a825'], standard_attr_id=1487, status=DOWN, tags=[], tenant_id=4071f84865644ae796190f7e252315e4, updated_at=2025-10-13T15:45:41Z on network 357f4107-b3fb-41d3-b9fc-38e46382a636
Oct 13 15:45:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:42.383 496978 INFO neutron.agent.dhcp.agent [None req-9f82af0a-b696-4b59-833f-0b2b24c43aa6 - - - - - -] DHCP configuration for ports {'15fe2540-2528-45c5-a584-cd16f49b7ed6'} is completed
Oct 13 15:45:42 standalone.localdomain dnsmasq[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/addn_hosts - 2 addresses
Oct 13 15:45:42 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/host
Oct 13 15:45:42 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/opts
Oct 13 15:45:42 standalone.localdomain podman[537828]: 2025-10-13 15:45:42.501844199 +0000 UTC m=+0.062730027 container kill a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:45:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:45:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:42Z|00268|binding|INFO|Removing iface tapbebfb2cb-5d ovn-installed in OVS
Oct 13 15:45:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:42Z|00269|binding|INFO|Removing lport bebfb2cb-5d13-4ccc-9929-1fc3115c20e0 ovn-installed in OVS
Oct 13 15:45:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:42.695 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port ca28d508-1fea-4ab8-b330-fad29eef9629 with type ""
Oct 13 15:45:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:42.697 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-094fdf1a-b7ce-4696-904e-ce4465bab8e0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-094fdf1a-b7ce-4696-904e-ce4465bab8e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b703cce8-d8eb-455c-b1af-8b5f01ac164a, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=bebfb2cb-5d13-4ccc-9929-1fc3115c20e0) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:42.700 378821 INFO neutron.agent.ovn.metadata.agent [-] Port bebfb2cb-5d13-4ccc-9929-1fc3115c20e0 in datapath 094fdf1a-b7ce-4696-904e-ce4465bab8e0 unbound from our chassis
Oct 13 15:45:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:42.702 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 094fdf1a-b7ce-4696-904e-ce4465bab8e0 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:42.703 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[8116e755-b585-47fe-882e-a7806bd371de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:42 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:42.725 2 INFO neutron.agent.securitygroups_rpc [None req-08789c1e-e7ad-4988-915d-276ac1a5a351 b3ab664f86e14aef808c0b70afd1127d 4071f84865644ae796190f7e252315e4 - - default default] Security group member updated ['7537441e-577a-4568-8769-77153bd4a825']
Oct 13 15:45:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:42.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:42.745 496978 INFO neutron.agent.dhcp.agent [None req-f3e47c4a-60cf-4ea8-a796-ca2c071825f7 - - - - - -] DHCP configuration for ports {'09df1d12-cb62-4fec-9db8-81c1b39db468'} is completed
Oct 13 15:45:42 standalone.localdomain ceph-mon[29756]: pgmap v3942: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:42.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:45:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:45:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:45:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:45:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:45:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:45:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:45:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:45:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:45:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:45:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:45:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:45:43 standalone.localdomain dnsmasq[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/addn_hosts - 1 addresses
Oct 13 15:45:43 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/host
Oct 13 15:45:43 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/opts
Oct 13 15:45:43 standalone.localdomain systemd[1]: tmp-crun.XPx9JA.mount: Deactivated successfully.
Oct 13 15:45:43 standalone.localdomain podman[537882]: 2025-10-13 15:45:43.028838889 +0000 UTC m=+0.133605749 container kill a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:45:43 standalone.localdomain podman[537911]: 
Oct 13 15:45:43 standalone.localdomain podman[537911]: 2025-10-13 15:45:43.117736441 +0000 UTC m=+0.097153381 container create fd3a8f2e780b363fb415076f7d14a823d16d94d51e2481fe1d99debce3bf5f0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-094fdf1a-b7ce-4696-904e-ce4465bab8e0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:43 standalone.localdomain systemd[1]: Started libpod-conmon-fd3a8f2e780b363fb415076f7d14a823d16d94d51e2481fe1d99debce3bf5f0d.scope.
Oct 13 15:45:43 standalone.localdomain podman[537911]: 2025-10-13 15:45:43.070071964 +0000 UTC m=+0.049488934 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:43 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:43 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b394de7b4d9c26f44f977e3c6db2ecdac35fce152fc2bdeb4ff9d5162f656f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:43 standalone.localdomain podman[537911]: 2025-10-13 15:45:43.189657705 +0000 UTC m=+0.169074635 container init fd3a8f2e780b363fb415076f7d14a823d16d94d51e2481fe1d99debce3bf5f0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-094fdf1a-b7ce-4696-904e-ce4465bab8e0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:43 standalone.localdomain podman[537911]: 2025-10-13 15:45:43.195121045 +0000 UTC m=+0.174537965 container start fd3a8f2e780b363fb415076f7d14a823d16d94d51e2481fe1d99debce3bf5f0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-094fdf1a-b7ce-4696-904e-ce4465bab8e0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:45:43 standalone.localdomain dnsmasq[537939]: started, version 2.85 cachesize 150
Oct 13 15:45:43 standalone.localdomain dnsmasq[537939]: DNS service limited to local subnets
Oct 13 15:45:43 standalone.localdomain dnsmasq[537939]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:43 standalone.localdomain dnsmasq[537939]: warning: no upstream servers configured
Oct 13 15:45:43 standalone.localdomain dnsmasq-dhcp[537939]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:45:43 standalone.localdomain dnsmasq[537939]: read /var/lib/neutron/dhcp/094fdf1a-b7ce-4696-904e-ce4465bab8e0/addn_hosts - 0 addresses
Oct 13 15:45:43 standalone.localdomain dnsmasq-dhcp[537939]: read /var/lib/neutron/dhcp/094fdf1a-b7ce-4696-904e-ce4465bab8e0/host
Oct 13 15:45:43 standalone.localdomain dnsmasq-dhcp[537939]: read /var/lib/neutron/dhcp/094fdf1a-b7ce-4696-904e-ce4465bab8e0/opts
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.265 496978 INFO neutron.agent.dhcp.agent [None req-68b9e553-b03b-40fa-ba49-a0c117ed3a3e - - - - - -] DHCP configuration for ports {'660480a3-4b76-4452-8cc7-85a531d193a2'} is completed
Oct 13 15:45:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:43.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:43 standalone.localdomain kernel: device tapbebfb2cb-5d left promiscuous mode
Oct 13 15:45:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:43.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:43 standalone.localdomain dnsmasq[537939]: read /var/lib/neutron/dhcp/094fdf1a-b7ce-4696-904e-ce4465bab8e0/addn_hosts - 0 addresses
Oct 13 15:45:43 standalone.localdomain dnsmasq-dhcp[537939]: read /var/lib/neutron/dhcp/094fdf1a-b7ce-4696-904e-ce4465bab8e0/host
Oct 13 15:45:43 standalone.localdomain podman[537959]: 2025-10-13 15:45:43.437918969 +0000 UTC m=+0.048658569 container kill fd3a8f2e780b363fb415076f7d14a823d16d94d51e2481fe1d99debce3bf5f0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-094fdf1a-b7ce-4696-904e-ce4465bab8e0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:43 standalone.localdomain dnsmasq-dhcp[537939]: read /var/lib/neutron/dhcp/094fdf1a-b7ce-4696-904e-ce4465bab8e0/opts
Oct 13 15:45:43 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:43.460 2 INFO neutron.agent.securitygroups_rpc [None req-0712c5e8-528d-47e1-879d-d20dec9c6aa4 b3ab664f86e14aef808c0b70afd1127d 4071f84865644ae796190f7e252315e4 - - default default] Security group member updated ['7537441e-577a-4568-8769-77153bd4a825']
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent [None req-3eb5c136-d530-4acd-82ed-6098ecf28b29 - - - - - -] Unable to reload_allocations dhcp for 094fdf1a-b7ce-4696-904e-ce4465bab8e0.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapbebfb2cb-5d not found in namespace qdhcp-094fdf1a-b7ce-4696-904e-ce4465bab8e0.
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     return fut.result()
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     raise self._exception
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapbebfb2cb-5d not found in namespace qdhcp-094fdf1a-b7ce-4696-904e-ce4465bab8e0.
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.467 496978 ERROR neutron.agent.dhcp.agent 
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.471 496978 INFO neutron.agent.dhcp.agent [None req-c7bcb7f4-cbe2-4e87-9c77-4fd87e2dbbed - - - - - -] Synchronizing state
Oct 13 15:45:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3943: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.735 496978 INFO neutron.agent.dhcp.agent [None req-87885e52-862c-4da1-b809-5f0f8e082c7a - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.736 496978 INFO neutron.agent.dhcp.agent [-] Starting network 094fdf1a-b7ce-4696-904e-ce4465bab8e0 dhcp configuration
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.737 496978 INFO neutron.agent.dhcp.agent [-] Finished network 094fdf1a-b7ce-4696-904e-ce4465bab8e0 dhcp configuration
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.737 496978 INFO neutron.agent.dhcp.agent [-] Starting network 8669e1a6-2c0a-4fef-817e-c674b6e65b80 dhcp configuration
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.738 496978 INFO neutron.agent.dhcp.agent [-] Finished network 8669e1a6-2c0a-4fef-817e-c674b6e65b80 dhcp configuration
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.738 496978 INFO neutron.agent.dhcp.agent [-] Starting network d9d84da0-72ac-45c1-b703-5737905d90fa dhcp configuration
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.739 496978 INFO neutron.agent.dhcp.agent [-] Finished network d9d84da0-72ac-45c1-b703-5737905d90fa dhcp configuration
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.740 496978 INFO neutron.agent.dhcp.agent [None req-87885e52-862c-4da1-b809-5f0f8e082c7a - - - - - -] Synchronizing state complete
Oct 13 15:45:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:43.741 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:43Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188900dfd0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188900d970>], id=3994fa8f-1b9c-4fe4-83b4-66139a62d0c3, ip_allocation=immediate, mac_address=fa:16:3e:29:3b:05, name=tempest-AllowedAddressPairIpV6TestJSON-490044826, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:45:37Z, description=, dns_domain=, id=357f4107-b3fb-41d3-b9fc-38e46382a636, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1261217596, port_security_enabled=True, project_id=4071f84865644ae796190f7e252315e4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23253, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1456, status=ACTIVE, subnets=['7748d93a-9f64-4b56-99c3-67948e71030c'], tags=[], tenant_id=4071f84865644ae796190f7e252315e4, updated_at=2025-10-13T15:45:39Z, vlan_transparent=None, network_id=357f4107-b3fb-41d3-b9fc-38e46382a636, port_security_enabled=True, project_id=4071f84865644ae796190f7e252315e4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['7537441e-577a-4568-8769-77153bd4a825'], standard_attr_id=1489, status=DOWN, tags=[], tenant_id=4071f84865644ae796190f7e252315e4, updated_at=2025-10-13T15:45:43Z on network 357f4107-b3fb-41d3-b9fc-38e46382a636
Oct 13 15:45:43 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:43Z|00270|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:45:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:43.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:43 standalone.localdomain dnsmasq[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/addn_hosts - 2 addresses
Oct 13 15:45:43 standalone.localdomain podman[538000]: 2025-10-13 15:45:43.974472076 +0000 UTC m=+0.082019209 container kill a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:45:43 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/host
Oct 13 15:45:43 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/opts
Oct 13 15:45:44 standalone.localdomain podman[538022]: 2025-10-13 15:45:44.066036252 +0000 UTC m=+0.086494769 container kill fd3a8f2e780b363fb415076f7d14a823d16d94d51e2481fe1d99debce3bf5f0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-094fdf1a-b7ce-4696-904e-ce4465bab8e0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:44 standalone.localdomain dnsmasq[537939]: exiting on receipt of SIGTERM
Oct 13 15:45:44 standalone.localdomain systemd[1]: libpod-fd3a8f2e780b363fb415076f7d14a823d16d94d51e2481fe1d99debce3bf5f0d.scope: Deactivated successfully.
Oct 13 15:45:44 standalone.localdomain podman[538042]: 2025-10-13 15:45:44.147766952 +0000 UTC m=+0.061257732 container died fd3a8f2e780b363fb415076f7d14a823d16d94d51e2481fe1d99debce3bf5f0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-094fdf1a-b7ce-4696-904e-ce4465bab8e0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:45:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd3a8f2e780b363fb415076f7d14a823d16d94d51e2481fe1d99debce3bf5f0d-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:44 standalone.localdomain podman[538042]: 2025-10-13 15:45:44.187971955 +0000 UTC m=+0.101462675 container cleanup fd3a8f2e780b363fb415076f7d14a823d16d94d51e2481fe1d99debce3bf5f0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-094fdf1a-b7ce-4696-904e-ce4465bab8e0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:45:44 standalone.localdomain systemd[1]: libpod-conmon-fd3a8f2e780b363fb415076f7d14a823d16d94d51e2481fe1d99debce3bf5f0d.scope: Deactivated successfully.
Oct 13 15:45:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:44.221 496978 INFO neutron.agent.dhcp.agent [None req-40a0a817-9b64-4401-b3f7-5c1fc0900763 - - - - - -] DHCP configuration for ports {'3994fa8f-1b9c-4fe4-83b4-66139a62d0c3'} is completed
Oct 13 15:45:44 standalone.localdomain podman[538044]: 2025-10-13 15:45:44.269986364 +0000 UTC m=+0.176038322 container remove fd3a8f2e780b363fb415076f7d14a823d16d94d51e2481fe1d99debce3bf5f0d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-094fdf1a-b7ce-4696-904e-ce4465bab8e0, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:45:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:44.466 496978 INFO neutron.agent.linux.ip_lib [None req-2b93b94c-5281-4324-ace7-cbd98dafd6b8 - - - - - -] Device tap96688557-c5 cannot be used as it has no MAC address
Oct 13 15:45:44 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:44.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:44 standalone.localdomain kernel: device tap96688557-c5 entered promiscuous mode
Oct 13 15:45:44 standalone.localdomain NetworkManager[5962]: <info>  [1760370344.5013] manager: (tap96688557-c5): new Generic device (/org/freedesktop/NetworkManager/Devices/55)
Oct 13 15:45:44 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:44Z|00271|binding|INFO|Claiming lport 96688557-c598-477a-a404-4919556f30c2 for this chassis.
Oct 13 15:45:44 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:44Z|00272|binding|INFO|96688557-c598-477a-a404-4919556f30c2: Claiming unknown
Oct 13 15:45:44 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:44.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:44 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:44.513 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d9d84da0-72ac-45c1-b703-5737905d90fa', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d84da0-72ac-45c1-b703-5737905d90fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c68151c-f4cd-4f91-a63a-24f9cabfbb1c, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=96688557-c598-477a-a404-4919556f30c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:44 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:44.515 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 96688557-c598-477a-a404-4919556f30c2 in datapath d9d84da0-72ac-45c1-b703-5737905d90fa bound to our chassis
Oct 13 15:45:44 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:44.517 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d9d84da0-72ac-45c1-b703-5737905d90fa or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:44 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:44.519 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[2a41d7a0-16f1-45b0-a4f6-940774ddd655]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:44 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap96688557-c5: No such device
Oct 13 15:45:44 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:44Z|00273|binding|INFO|Setting lport 96688557-c598-477a-a404-4919556f30c2 ovn-installed in OVS
Oct 13 15:45:44 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:44Z|00274|binding|INFO|Setting lport 96688557-c598-477a-a404-4919556f30c2 up in Southbound
Oct 13 15:45:44 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:44.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:44 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap96688557-c5: No such device
Oct 13 15:45:44 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap96688557-c5: No such device
Oct 13 15:45:44 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap96688557-c5: No such device
Oct 13 15:45:44 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap96688557-c5: No such device
Oct 13 15:45:44 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap96688557-c5: No such device
Oct 13 15:45:44 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap96688557-c5: No such device
Oct 13 15:45:44 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap96688557-c5: No such device
Oct 13 15:45:44 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:44.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:44 standalone.localdomain ceph-mon[29756]: pgmap v3943: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:44 standalone.localdomain podman[538134]: 2025-10-13 15:45:44.881666564 +0000 UTC m=+0.052319833 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:44 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:45:44 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:45:44 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:45:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:45:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:45:44 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:45:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8b394de7b4d9c26f44f977e3c6db2ecdac35fce152fc2bdeb4ff9d5162f656f7-merged.mount: Deactivated successfully.
Oct 13 15:45:45 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d094fdf1a\x2db7ce\x2d4696\x2d904e\x2dce4465bab8e0.mount: Deactivated successfully.
Oct 13 15:45:45 standalone.localdomain podman[538162]: 2025-10-13 15:45:45.080147216 +0000 UTC m=+0.088498862 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, version=17.1.9, container_name=swift_account_server, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-07-21T16:11:22, batch=17.1_20250721.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, distribution-scope=public)
Oct 13 15:45:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:45:45 standalone.localdomain podman[538205]: 2025-10-13 15:45:45.158577242 +0000 UTC m=+0.062696007 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.33.12, batch=17.1_20250721.1, name=rhosp17/openstack-swift-object, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, container_name=swift_object_server, maintainer=OpenStack TripleO Team, version=17.1.9, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, build-date=2025-07-21T14:56:28)
Oct 13 15:45:45 standalone.localdomain podman[538163]: 2025-10-13 15:45:45.131800857 +0000 UTC m=+0.135548379 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:45:45 standalone.localdomain podman[538163]: 2025-10-13 15:45:45.21106247 +0000 UTC m=+0.214809992 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:45:45 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:45:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:45:45 standalone.localdomain podman[538160]: 2025-10-13 15:45:45.28770721 +0000 UTC m=+0.292788764 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 15:45:45 standalone.localdomain podman[538162]: 2025-10-13 15:45:45.313892917 +0000 UTC m=+0.322244553 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, container_name=swift_account_server, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, com.redhat.component=openstack-swift-account-container)
Oct 13 15:45:45 standalone.localdomain podman[538160]: 2025-10-13 15:45:45.322881688 +0000 UTC m=+0.327963212 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:45:45 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:45:45 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:45.334 2 INFO neutron.agent.securitygroups_rpc [None req-bab039c1-3f76-4ff6-a819-2b2ddedfd98d b3ab664f86e14aef808c0b70afd1127d 4071f84865644ae796190f7e252315e4 - - default default] Security group member updated ['7537441e-577a-4568-8769-77153bd4a825']
Oct 13 15:45:45 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:45:45 standalone.localdomain podman[538205]: 2025-10-13 15:45:45.358980283 +0000 UTC m=+0.263099068 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, container_name=swift_object_server, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, name=rhosp17/openstack-swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, com.redhat.component=openstack-swift-object-container, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, release=1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:45:45 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:45:45 standalone.localdomain podman[538247]: 2025-10-13 15:45:45.488616118 +0000 UTC m=+0.245523480 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-swift-container-container, distribution-scope=public, release=1, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=17.1.9, build-date=2025-07-21T15:54:32, name=rhosp17/openstack-swift-container)
Oct 13 15:45:45 standalone.localdomain podman[538308]: 
Oct 13 15:45:45 standalone.localdomain podman[538317]: 2025-10-13 15:45:45.527951595 +0000 UTC m=+0.057534736 container kill a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:45 standalone.localdomain dnsmasq[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/addn_hosts - 1 addresses
Oct 13 15:45:45 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/host
Oct 13 15:45:45 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/opts
Oct 13 15:45:45 standalone.localdomain podman[538308]: 2025-10-13 15:45:45.576915462 +0000 UTC m=+0.127073025 container create c15a48cfac11f94a14ad8d0e9f0bb40538c51ef80aa66897421944809ea42df5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9d84da0-72ac-45c1-b703-5737905d90fa, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:45 standalone.localdomain podman[538308]: 2025-10-13 15:45:45.479035858 +0000 UTC m=+0.029193431 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:45 standalone.localdomain systemd[1]: Started libpod-conmon-c15a48cfac11f94a14ad8d0e9f0bb40538c51ef80aa66897421944809ea42df5.scope.
Oct 13 15:45:45 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:45 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6c3e437417901409fbbfc4d768f18e6d0f9efd985a5c70ad8460e590dcfd42/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:45 standalone.localdomain podman[538308]: 2025-10-13 15:45:45.661695556 +0000 UTC m=+0.211853099 container init c15a48cfac11f94a14ad8d0e9f0bb40538c51ef80aa66897421944809ea42df5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9d84da0-72ac-45c1-b703-5737905d90fa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:45 standalone.localdomain podman[538308]: 2025-10-13 15:45:45.671596195 +0000 UTC m=+0.221753738 container start c15a48cfac11f94a14ad8d0e9f0bb40538c51ef80aa66897421944809ea42df5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9d84da0-72ac-45c1-b703-5737905d90fa, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:45:45 standalone.localdomain dnsmasq[538365]: started, version 2.85 cachesize 150
Oct 13 15:45:45 standalone.localdomain dnsmasq[538365]: DNS service limited to local subnets
Oct 13 15:45:45 standalone.localdomain dnsmasq[538365]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:45 standalone.localdomain dnsmasq[538365]: warning: no upstream servers configured
Oct 13 15:45:45 standalone.localdomain dnsmasq-dhcp[538365]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:45:45 standalone.localdomain dnsmasq[538365]: read /var/lib/neutron/dhcp/d9d84da0-72ac-45c1-b703-5737905d90fa/addn_hosts - 0 addresses
Oct 13 15:45:45 standalone.localdomain dnsmasq-dhcp[538365]: read /var/lib/neutron/dhcp/d9d84da0-72ac-45c1-b703-5737905d90fa/host
Oct 13 15:45:45 standalone.localdomain dnsmasq-dhcp[538365]: read /var/lib/neutron/dhcp/d9d84da0-72ac-45c1-b703-5737905d90fa/opts
Oct 13 15:45:45 standalone.localdomain podman[538247]: 2025-10-13 15:45:45.69196071 +0000 UTC m=+0.448868112 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, com.redhat.component=openstack-swift-container-container, distribution-scope=public, name=rhosp17/openstack-swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, release=1, version=17.1.9, container_name=swift_container_server)
Oct 13 15:45:45 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:45:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3944: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:45.878 496978 INFO neutron.agent.dhcp.agent [None req-111f2264-f807-4273-8195-700eec7653e2 - - - - - -] DHCP configuration for ports {'1150d1e1-0e8f-4cec-9769-7b11d213f19a'} is completed
Oct 13 15:45:46 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:46.539 496978 INFO neutron.agent.linux.ip_lib [None req-2a818e4a-54e8-4b6b-8fcc-648416cce604 - - - - - -] Device tapf31ccec1-20 cannot be used as it has no MAC address
Oct 13 15:45:46 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:46.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:46 standalone.localdomain kernel: device tapf31ccec1-20 entered promiscuous mode
Oct 13 15:45:46 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:46Z|00275|binding|INFO|Claiming lport f31ccec1-20e7-41e5-b93c-77e7913d87ef for this chassis.
Oct 13 15:45:46 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:46Z|00276|binding|INFO|f31ccec1-20e7-41e5-b93c-77e7913d87ef: Claiming unknown
Oct 13 15:45:46 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:46.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:46 standalone.localdomain NetworkManager[5962]: <info>  [1760370346.6115] manager: (tapf31ccec1-20): new Generic device (/org/freedesktop/NetworkManager/Devices/56)
Oct 13 15:45:46 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:46.621 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f487554e-b45e-43a8-a75f-cdae84bd0099', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f487554e-b45e-43a8-a75f-cdae84bd0099', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=258957cb-1903-482f-82ff-e7d8d931c651, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=f31ccec1-20e7-41e5-b93c-77e7913d87ef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:46 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:46.623 378821 INFO neutron.agent.ovn.metadata.agent [-] Port f31ccec1-20e7-41e5-b93c-77e7913d87ef in datapath f487554e-b45e-43a8-a75f-cdae84bd0099 bound to our chassis
Oct 13 15:45:46 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:46.625 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f487554e-b45e-43a8-a75f-cdae84bd0099 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:46 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:46.626 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[292c53f9-039d-45fc-90bb-4183ff8eddf7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:46 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:46Z|00277|binding|INFO|Setting lport f31ccec1-20e7-41e5-b93c-77e7913d87ef ovn-installed in OVS
Oct 13 15:45:46 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:46Z|00278|binding|INFO|Setting lport f31ccec1-20e7-41e5-b93c-77e7913d87ef up in Southbound
Oct 13 15:45:46 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:46.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:46 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:46.726 2 INFO neutron.agent.securitygroups_rpc [None req-8961f155-ce65-49c5-8759-32d203de6f5e b3ab664f86e14aef808c0b70afd1127d 4071f84865644ae796190f7e252315e4 - - default default] Security group member updated ['7537441e-577a-4568-8769-77153bd4a825']
Oct 13 15:45:46 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:46.731 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:46Z, description=, device_id=b289f3f1-cafb-42b6-a4e0-a4876a4c76be, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f11340>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890616a0>], id=5e6c3a51-e250-4e4a-92ab-e00f2d0bd699, ip_allocation=immediate, mac_address=fa:16:3e:de:f7:07, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1504, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:45:46Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:45:46 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:46.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:46 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:46.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:46 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:46.780 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:46Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f32400>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f32c40>], id=23fe510d-09e5-4241-9c33-f86de3a38e2a, ip_allocation=immediate, mac_address=fa:16:3e:ab:3c:4c, name=tempest-AllowedAddressPairIpV6TestJSON-732589740, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:45:37Z, description=, dns_domain=, id=357f4107-b3fb-41d3-b9fc-38e46382a636, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1261217596, port_security_enabled=True, project_id=4071f84865644ae796190f7e252315e4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23253, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1456, status=ACTIVE, subnets=['7748d93a-9f64-4b56-99c3-67948e71030c'], tags=[], tenant_id=4071f84865644ae796190f7e252315e4, updated_at=2025-10-13T15:45:39Z, vlan_transparent=None, network_id=357f4107-b3fb-41d3-b9fc-38e46382a636, port_security_enabled=True, project_id=4071f84865644ae796190f7e252315e4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['7537441e-577a-4568-8769-77153bd4a825'], standard_attr_id=1502, status=DOWN, tags=[], tenant_id=4071f84865644ae796190f7e252315e4, updated_at=2025-10-13T15:45:46Z on network 357f4107-b3fb-41d3-b9fc-38e46382a636
Oct 13 15:45:46 standalone.localdomain ceph-mon[29756]: pgmap v3944: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:46 standalone.localdomain dnsmasq[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/addn_hosts - 2 addresses
Oct 13 15:45:46 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/host
Oct 13 15:45:46 standalone.localdomain podman[538426]: 2025-10-13 15:45:46.995600696 +0000 UTC m=+0.075288450 container kill a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:45:46 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/opts
Oct 13 15:45:47 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:45:47 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:45:47 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:45:47 standalone.localdomain podman[538438]: 2025-10-13 15:45:47.053922124 +0000 UTC m=+0.093747515 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:45:47 standalone.localdomain systemd[1]: tmp-crun.NVz0aa.mount: Deactivated successfully.
Oct 13 15:45:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:47.380 496978 INFO neutron.agent.dhcp.agent [None req-5bd3070a-34ac-4dab-810e-341a34a3c0ac - - - - - -] DHCP configuration for ports {'5e6c3a51-e250-4e4a-92ab-e00f2d0bd699', '23fe510d-09e5-4241-9c33-f86de3a38e2a'} is completed
Oct 13 15:45:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:45:47 standalone.localdomain podman[538510]: 
Oct 13 15:45:47 standalone.localdomain podman[538510]: 2025-10-13 15:45:47.679749126 +0000 UTC m=+0.087024225 container create 566550e19d7e4c69a1426783854e90ad8ac4b71867a8875111117bc0c360d5cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:45:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3945: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:47 standalone.localdomain systemd[1]: Started libpod-conmon-566550e19d7e4c69a1426783854e90ad8ac4b71867a8875111117bc0c360d5cf.scope.
Oct 13 15:45:47 standalone.localdomain podman[538510]: 2025-10-13 15:45:47.629472218 +0000 UTC m=+0.036747327 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:47 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:47 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb99193fe8f1150cf5ff44f76f67bf949432de83cfb593ffe2683cda46bb0fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:47 standalone.localdomain podman[538510]: 2025-10-13 15:45:47.745116695 +0000 UTC m=+0.152391774 container init 566550e19d7e4c69a1426783854e90ad8ac4b71867a8875111117bc0c360d5cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:45:47 standalone.localdomain podman[538510]: 2025-10-13 15:45:47.751194224 +0000 UTC m=+0.158469293 container start 566550e19d7e4c69a1426783854e90ad8ac4b71867a8875111117bc0c360d5cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:45:47 standalone.localdomain dnsmasq[538528]: started, version 2.85 cachesize 150
Oct 13 15:45:47 standalone.localdomain dnsmasq[538528]: DNS service limited to local subnets
Oct 13 15:45:47 standalone.localdomain dnsmasq[538528]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:47 standalone.localdomain dnsmasq[538528]: warning: no upstream servers configured
Oct 13 15:45:47 standalone.localdomain dnsmasq-dhcp[538528]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:45:47 standalone.localdomain dnsmasq[538528]: read /var/lib/neutron/dhcp/f487554e-b45e-43a8-a75f-cdae84bd0099/addn_hosts - 0 addresses
Oct 13 15:45:47 standalone.localdomain dnsmasq-dhcp[538528]: read /var/lib/neutron/dhcp/f487554e-b45e-43a8-a75f-cdae84bd0099/host
Oct 13 15:45:47 standalone.localdomain dnsmasq-dhcp[538528]: read /var/lib/neutron/dhcp/f487554e-b45e-43a8-a75f-cdae84bd0099/opts
Oct 13 15:45:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:47.798 496978 INFO neutron.agent.dhcp.agent [None req-2a818e4a-54e8-4b6b-8fcc-648416cce604 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:45:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:47.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:47.920 496978 INFO neutron.agent.dhcp.agent [None req-a23b2f00-d3cc-42a6-b3c8-47c685fc0f2b - - - - - -] DHCP configuration for ports {'f813096a-b43f-47ff-9ed6-484b8508115e'} is completed
Oct 13 15:45:48 standalone.localdomain dnsmasq[538528]: exiting on receipt of SIGTERM
Oct 13 15:45:48 standalone.localdomain podman[538546]: 2025-10-13 15:45:48.035730881 +0000 UTC m=+0.046506533 container kill 566550e19d7e4c69a1426783854e90ad8ac4b71867a8875111117bc0c360d5cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:48 standalone.localdomain systemd[1]: libpod-566550e19d7e4c69a1426783854e90ad8ac4b71867a8875111117bc0c360d5cf.scope: Deactivated successfully.
Oct 13 15:45:48 standalone.localdomain podman[538560]: 2025-10-13 15:45:48.088167747 +0000 UTC m=+0.043955093 container died 566550e19d7e4c69a1426783854e90ad8ac4b71867a8875111117bc0c360d5cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:48 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-566550e19d7e4c69a1426783854e90ad8ac4b71867a8875111117bc0c360d5cf-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:48 standalone.localdomain podman[538560]: 2025-10-13 15:45:48.122362163 +0000 UTC m=+0.078149489 container cleanup 566550e19d7e4c69a1426783854e90ad8ac4b71867a8875111117bc0c360d5cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:48 standalone.localdomain systemd[1]: libpod-conmon-566550e19d7e4c69a1426783854e90ad8ac4b71867a8875111117bc0c360d5cf.scope: Deactivated successfully.
Oct 13 15:45:48 standalone.localdomain podman[538567]: 2025-10-13 15:45:48.149684386 +0000 UTC m=+0.089000038 container remove 566550e19d7e4c69a1426783854e90ad8ac4b71867a8875111117bc0c360d5cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:48 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:48.560 2 INFO neutron.agent.securitygroups_rpc [None req-8c3a34c8-3e5b-45f4-8502-1d1cd6930ae8 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:48 standalone.localdomain ceph-mon[29756]: pgmap v3945: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:48 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:48.638 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:47Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890022e0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889002f70>], id=65fe2ef1-d5e6-4c6c-8e4e-9b2e604d221d, ip_allocation=immediate, mac_address=fa:16:3e:19:3d:7f, name=tempest-PortsTestJSON-785187392, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:45:41Z, description=, dns_domain=, id=d9d84da0-72ac-45c1-b703-5737905d90fa, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1938036791, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50086, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1480, status=ACTIVE, subnets=['f2cef6f9-2d04-4fc5-ba60-76b0ac3a3a9c'], tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:45:43Z, vlan_transparent=None, network_id=d9d84da0-72ac-45c1-b703-5737905d90fa, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['e5246d74-7a72-4d89-9669-125c9e305a6a'], standard_attr_id=1512, status=DOWN, tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:45:48Z on network d9d84da0-72ac-45c1-b703-5737905d90fa
Oct 13 15:45:48 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 15:45:48 standalone.localdomain object-server[538590]: Object update sweep starting on /srv/node/d1 (pid: 32)
Oct 13 15:45:48 standalone.localdomain object-server[538590]: Object update sweep completed on /srv/node/d1 in 0.00s seconds:, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 32)
Oct 13 15:45:48 standalone.localdomain object-server[538590]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 15:45:48 standalone.localdomain object-server[114601]: Object update sweep completed: 0.11s
Oct 13 15:45:48 standalone.localdomain dnsmasq[538365]: read /var/lib/neutron/dhcp/d9d84da0-72ac-45c1-b703-5737905d90fa/addn_hosts - 1 addresses
Oct 13 15:45:48 standalone.localdomain dnsmasq-dhcp[538365]: read /var/lib/neutron/dhcp/d9d84da0-72ac-45c1-b703-5737905d90fa/host
Oct 13 15:45:48 standalone.localdomain dnsmasq-dhcp[538365]: read /var/lib/neutron/dhcp/d9d84da0-72ac-45c1-b703-5737905d90fa/opts
Oct 13 15:45:48 standalone.localdomain podman[538612]: 2025-10-13 15:45:48.87660655 +0000 UTC m=+0.047053469 container kill c15a48cfac11f94a14ad8d0e9f0bb40538c51ef80aa66897421944809ea42df5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9d84da0-72ac-45c1-b703-5737905d90fa, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:45:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:45:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-beb99193fe8f1150cf5ff44f76f67bf949432de83cfb593ffe2683cda46bb0fd-merged.mount: Deactivated successfully.
Oct 13 15:45:49 standalone.localdomain podman[538636]: 2025-10-13 15:45:49.061420065 +0000 UTC m=+0.082643718 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct 13 15:45:49 standalone.localdomain podman[538636]: 2025-10-13 15:45:49.074725121 +0000 UTC m=+0.095948784 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible)
Oct 13 15:45:49 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:45:49 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:49.171 496978 INFO neutron.agent.dhcp.agent [None req-9c15d685-1d48-44af-ab83-2ffe60008508 - - - - - -] DHCP configuration for ports {'65fe2ef1-d5e6-4c6c-8e4e-9b2e604d221d'} is completed
Oct 13 15:45:49 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:49.209 2 INFO neutron.agent.securitygroups_rpc [None req-010e4aa4-d40c-4b5b-acda-b9bb534dd6b2 b3ab664f86e14aef808c0b70afd1127d 4071f84865644ae796190f7e252315e4 - - default default] Security group member updated ['7537441e-577a-4568-8769-77153bd4a825']
Oct 13 15:45:49 standalone.localdomain dnsmasq[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/addn_hosts - 1 addresses
Oct 13 15:45:49 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/host
Oct 13 15:45:49 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/opts
Oct 13 15:45:49 standalone.localdomain podman[538692]: 2025-10-13 15:45:49.467526323 +0000 UTC m=+0.063010456 container kill a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:45:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3946: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:49 standalone.localdomain podman[538736]: 
Oct 13 15:45:49 standalone.localdomain podman[538736]: 2025-10-13 15:45:49.764672202 +0000 UTC m=+0.096542152 container create c727c22ac0341f9964fd812e967bd785f901618e70d43bc83555dc9c65875878 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:45:49 standalone.localdomain systemd[1]: Started libpod-conmon-c727c22ac0341f9964fd812e967bd785f901618e70d43bc83555dc9c65875878.scope.
Oct 13 15:45:49 standalone.localdomain podman[538736]: 2025-10-13 15:45:49.717250533 +0000 UTC m=+0.049120503 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:49 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aad769ddee43d4e4d5a105c5a675fb33411d43e6692238f7b6571b6eb2bb3a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:49 standalone.localdomain podman[538736]: 2025-10-13 15:45:49.85435605 +0000 UTC m=+0.186225960 container init c727c22ac0341f9964fd812e967bd785f901618e70d43bc83555dc9c65875878 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:49 standalone.localdomain podman[538736]: 2025-10-13 15:45:49.864107274 +0000 UTC m=+0.195977184 container start c727c22ac0341f9964fd812e967bd785f901618e70d43bc83555dc9c65875878 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 15:45:49 standalone.localdomain dnsmasq[538754]: started, version 2.85 cachesize 150
Oct 13 15:45:49 standalone.localdomain dnsmasq[538754]: DNS service limited to local subnets
Oct 13 15:45:49 standalone.localdomain dnsmasq[538754]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:49 standalone.localdomain dnsmasq[538754]: warning: no upstream servers configured
Oct 13 15:45:49 standalone.localdomain dnsmasq-dhcp[538754]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:45:49 standalone.localdomain dnsmasq-dhcp[538754]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:45:49 standalone.localdomain dnsmasq[538754]: read /var/lib/neutron/dhcp/f487554e-b45e-43a8-a75f-cdae84bd0099/addn_hosts - 0 addresses
Oct 13 15:45:49 standalone.localdomain dnsmasq-dhcp[538754]: read /var/lib/neutron/dhcp/f487554e-b45e-43a8-a75f-cdae84bd0099/host
Oct 13 15:45:49 standalone.localdomain dnsmasq-dhcp[538754]: read /var/lib/neutron/dhcp/f487554e-b45e-43a8-a75f-cdae84bd0099/opts
Oct 13 15:45:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:50.167 496978 INFO neutron.agent.dhcp.agent [None req-5ac63454-9c0b-4230-97e6-d8a02c0ab453 - - - - - -] DHCP configuration for ports {'f813096a-b43f-47ff-9ed6-484b8508115e', 'f31ccec1-20e7-41e5-b93c-77e7913d87ef'} is completed
Oct 13 15:45:50 standalone.localdomain ceph-mon[29756]: pgmap v3946: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:50 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:50.831 2 INFO neutron.agent.securitygroups_rpc [None req-48eeb583-acc6-4b38-b8ac-dfd045884e2f b3ab664f86e14aef808c0b70afd1127d 4071f84865644ae796190f7e252315e4 - - default default] Security group member updated ['7537441e-577a-4568-8769-77153bd4a825']
Oct 13 15:45:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:50.923 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:50Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ffcf70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892593d0>], id=a3214b1b-140a-4c9e-8faf-7d95863ddb4e, ip_allocation=immediate, mac_address=fa:16:3e:06:74:01, name=tempest-AllowedAddressPairIpV6TestJSON-1815443778, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:45:37Z, description=, dns_domain=, id=357f4107-b3fb-41d3-b9fc-38e46382a636, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1261217596, port_security_enabled=True, project_id=4071f84865644ae796190f7e252315e4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23253, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1456, status=ACTIVE, subnets=['7748d93a-9f64-4b56-99c3-67948e71030c'], tags=[], tenant_id=4071f84865644ae796190f7e252315e4, updated_at=2025-10-13T15:45:39Z, vlan_transparent=None, network_id=357f4107-b3fb-41d3-b9fc-38e46382a636, port_security_enabled=True, project_id=4071f84865644ae796190f7e252315e4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['7537441e-577a-4568-8769-77153bd4a825'], standard_attr_id=1521, status=DOWN, tags=[], tenant_id=4071f84865644ae796190f7e252315e4, updated_at=2025-10-13T15:45:50Z on network 357f4107-b3fb-41d3-b9fc-38e46382a636
Oct 13 15:45:51 standalone.localdomain dnsmasq[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/addn_hosts - 2 addresses
Oct 13 15:45:51 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/host
Oct 13 15:45:51 standalone.localdomain podman[538787]: 2025-10-13 15:45:51.134316126 +0000 UTC m=+0.063690638 container kill a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:45:51 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/opts
Oct 13 15:45:51 standalone.localdomain systemd[1]: tmp-crun.qfGFH7.mount: Deactivated successfully.
Oct 13 15:45:51 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:45:51 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:45:51 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:45:51 standalone.localdomain podman[538803]: 2025-10-13 15:45:51.191714726 +0000 UTC m=+0.075838966 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:51.414 496978 INFO neutron.agent.dhcp.agent [None req-4462ac05-ccc0-44e9-98b7-64fc1dd05f1a - - - - - -] DHCP configuration for ports {'a3214b1b-140a-4c9e-8faf-7d95863ddb4e'} is completed
Oct 13 15:45:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:51.456 496978 INFO neutron.agent.linux.ip_lib [None req-eaff6a4a-11ad-475f-96dd-57d7e2601bac - - - - - -] Device tap4b1a2daf-dc cannot be used as it has no MAC address
Oct 13 15:45:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:51.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:51 standalone.localdomain kernel: device tap4b1a2daf-dc entered promiscuous mode
Oct 13 15:45:51 standalone.localdomain NetworkManager[5962]: <info>  [1760370351.5406] manager: (tap4b1a2daf-dc): new Generic device (/org/freedesktop/NetworkManager/Devices/57)
Oct 13 15:45:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:51.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:51Z|00279|binding|INFO|Claiming lport 4b1a2daf-dcfe-4cc4-8b11-581c55f4f7d9 for this chassis.
Oct 13 15:45:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:51Z|00280|binding|INFO|4b1a2daf-dcfe-4cc4-8b11-581c55f4f7d9: Claiming unknown
Oct 13 15:45:51 standalone.localdomain systemd-udevd[538845]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:45:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:51.551 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-8c14ff61-fed2-40eb-ad54-5796dda02a0e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c14ff61-fed2-40eb-ad54-5796dda02a0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1fa7e44-3640-47b9-8a98-4997dc1cdb5a, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=4b1a2daf-dcfe-4cc4-8b11-581c55f4f7d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:51.553 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 4b1a2daf-dcfe-4cc4-8b11-581c55f4f7d9 in datapath 8c14ff61-fed2-40eb-ad54-5796dda02a0e bound to our chassis
Oct 13 15:45:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:51.555 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8c14ff61-fed2-40eb-ad54-5796dda02a0e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:51.556 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[dfa4e2df-361c-43dc-be6e-785cad220d25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap4b1a2daf-dc: No such device
Oct 13 15:45:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:51Z|00281|binding|INFO|Setting lport 4b1a2daf-dcfe-4cc4-8b11-581c55f4f7d9 ovn-installed in OVS
Oct 13 15:45:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:51Z|00282|binding|INFO|Setting lport 4b1a2daf-dcfe-4cc4-8b11-581c55f4f7d9 up in Southbound
Oct 13 15:45:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:51.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap4b1a2daf-dc: No such device
Oct 13 15:45:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:51.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap4b1a2daf-dc: No such device
Oct 13 15:45:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap4b1a2daf-dc: No such device
Oct 13 15:45:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap4b1a2daf-dc: No such device
Oct 13 15:45:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap4b1a2daf-dc: No such device
Oct 13 15:45:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap4b1a2daf-dc: No such device
Oct 13 15:45:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap4b1a2daf-dc: No such device
Oct 13 15:45:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:51.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3947: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:51.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:51 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:51.834 2 INFO neutron.agent.securitygroups_rpc [None req-f32c6c46-3f7d-4742-9970-bde86eb24287 b3ab664f86e14aef808c0b70afd1127d 4071f84865644ae796190f7e252315e4 - - default default] Security group member updated ['7537441e-577a-4568-8769-77153bd4a825']
Oct 13 15:45:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:51.893 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:51Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f15b20>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f15190>], id=68cb5a90-3a17-4394-a9d1-1ea21a81c6ad, ip_allocation=immediate, mac_address=fa:16:3e:7d:eb:26, name=tempest-AllowedAddressPairIpV6TestJSON-1587185636, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:45:37Z, description=, dns_domain=, id=357f4107-b3fb-41d3-b9fc-38e46382a636, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1261217596, port_security_enabled=True, project_id=4071f84865644ae796190f7e252315e4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23253, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1456, status=ACTIVE, subnets=['7748d93a-9f64-4b56-99c3-67948e71030c'], tags=[], tenant_id=4071f84865644ae796190f7e252315e4, updated_at=2025-10-13T15:45:39Z, vlan_transparent=None, network_id=357f4107-b3fb-41d3-b9fc-38e46382a636, port_security_enabled=True, project_id=4071f84865644ae796190f7e252315e4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['7537441e-577a-4568-8769-77153bd4a825'], standard_attr_id=1535, status=DOWN, tags=[], tenant_id=4071f84865644ae796190f7e252315e4, updated_at=2025-10-13T15:45:51Z on network 357f4107-b3fb-41d3-b9fc-38e46382a636
Oct 13 15:45:52 standalone.localdomain dnsmasq[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/addn_hosts - 3 addresses
Oct 13 15:45:52 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/host
Oct 13 15:45:52 standalone.localdomain podman[538906]: 2025-10-13 15:45:52.084598749 +0000 UTC m=+0.049777514 container kill a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:45:52 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/opts
Oct 13 15:45:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:52.130 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:52.353 496978 INFO neutron.agent.dhcp.agent [None req-55d30ca5-0fbe-4884-a172-8100cb027fbf - - - - - -] DHCP configuration for ports {'68cb5a90-3a17-4394-a9d1-1ea21a81c6ad'} is completed
Oct 13 15:45:52 standalone.localdomain podman[538955]: 
Oct 13 15:45:52 standalone.localdomain podman[538955]: 2025-10-13 15:45:52.587170966 +0000 UTC m=+0.086883321 container create d5952ddf6dc803ea13ecee1ad22429568c07386d60b344709122b9701862d51e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:45:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:45:52 standalone.localdomain systemd[1]: Started libpod-conmon-d5952ddf6dc803ea13ecee1ad22429568c07386d60b344709122b9701862d51e.scope.
Oct 13 15:45:52 standalone.localdomain podman[538955]: 2025-10-13 15:45:52.547175798 +0000 UTC m=+0.046888203 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:52 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:52 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb963e7858002226755e7b6a370ff92ee42963a801196b0006cdd3856dad5a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:52 standalone.localdomain podman[538955]: 2025-10-13 15:45:52.675757599 +0000 UTC m=+0.175469974 container init d5952ddf6dc803ea13ecee1ad22429568c07386d60b344709122b9701862d51e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:45:52 standalone.localdomain podman[538955]: 2025-10-13 15:45:52.686436542 +0000 UTC m=+0.186148897 container start d5952ddf6dc803ea13ecee1ad22429568c07386d60b344709122b9701862d51e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 13 15:45:52 standalone.localdomain dnsmasq[538974]: started, version 2.85 cachesize 150
Oct 13 15:45:52 standalone.localdomain dnsmasq[538974]: DNS service limited to local subnets
Oct 13 15:45:52 standalone.localdomain dnsmasq[538974]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:52 standalone.localdomain dnsmasq[538974]: warning: no upstream servers configured
Oct 13 15:45:52 standalone.localdomain dnsmasq-dhcp[538974]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:45:52 standalone.localdomain dnsmasq[538974]: read /var/lib/neutron/dhcp/8c14ff61-fed2-40eb-ad54-5796dda02a0e/addn_hosts - 0 addresses
Oct 13 15:45:52 standalone.localdomain dnsmasq-dhcp[538974]: read /var/lib/neutron/dhcp/8c14ff61-fed2-40eb-ad54-5796dda02a0e/host
Oct 13 15:45:52 standalone.localdomain dnsmasq-dhcp[538974]: read /var/lib/neutron/dhcp/8c14ff61-fed2-40eb-ad54-5796dda02a0e/opts
Oct 13 15:45:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:45:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:52.749 496978 INFO neutron.agent.dhcp.agent [None req-eaff6a4a-11ad-475f-96dd-57d7e2601bac - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:45:52 standalone.localdomain ceph-mon[29756]: pgmap v3947: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:52 standalone.localdomain podman[538975]: 2025-10-13 15:45:52.792574463 +0000 UTC m=+0.084405344 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:45:52 standalone.localdomain podman[538975]: 2025-10-13 15:45:52.801508382 +0000 UTC m=+0.093339303 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:45:52 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:45:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:52.833 496978 INFO neutron.agent.dhcp.agent [None req-ca8393d6-af12-4aee-9121-e41e9e63e0ae - - - - - -] DHCP configuration for ports {'61af236f-40ac-4bc1-95e8-becfbe91711e'} is completed
Oct 13 15:45:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:52.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:53 standalone.localdomain dnsmasq[538974]: read /var/lib/neutron/dhcp/8c14ff61-fed2-40eb-ad54-5796dda02a0e/addn_hosts - 0 addresses
Oct 13 15:45:53 standalone.localdomain dnsmasq-dhcp[538974]: read /var/lib/neutron/dhcp/8c14ff61-fed2-40eb-ad54-5796dda02a0e/host
Oct 13 15:45:53 standalone.localdomain podman[539016]: 2025-10-13 15:45:53.075666324 +0000 UTC m=+0.050898029 container kill d5952ddf6dc803ea13ecee1ad22429568c07386d60b344709122b9701862d51e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:45:53 standalone.localdomain dnsmasq-dhcp[538974]: read /var/lib/neutron/dhcp/8c14ff61-fed2-40eb-ad54-5796dda02a0e/opts
Oct 13 15:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:45:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:45:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:53.315 496978 INFO neutron.agent.dhcp.agent [None req-9076396d-babe-4d99-869b-bd137fc8ec78 - - - - - -] DHCP configuration for ports {'4b1a2daf-dcfe-4cc4-8b11-581c55f4f7d9', '61af236f-40ac-4bc1-95e8-becfbe91711e'} is completed
Oct 13 15:45:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43234 DF PROTO=TCP SPT=59660 DPT=9102 SEQ=3782827029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD0C32E0000000001030307) 
Oct 13 15:45:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:53.594 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 204486ca-d987-4732-a96a-51680d96a38e with type ""
Oct 13 15:45:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:53Z|00283|binding|INFO|Removing iface tap4b1a2daf-dc ovn-installed in OVS
Oct 13 15:45:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:53Z|00284|binding|INFO|Removing lport 4b1a2daf-dcfe-4cc4-8b11-581c55f4f7d9 ovn-installed in OVS
Oct 13 15:45:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:53.595 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-8c14ff61-fed2-40eb-ad54-5796dda02a0e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c14ff61-fed2-40eb-ad54-5796dda02a0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1fa7e44-3640-47b9-8a98-4997dc1cdb5a, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=4b1a2daf-dcfe-4cc4-8b11-581c55f4f7d9) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:53.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:53.598 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 4b1a2daf-dcfe-4cc4-8b11-581c55f4f7d9 in datapath 8c14ff61-fed2-40eb-ad54-5796dda02a0e unbound from our chassis
Oct 13 15:45:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:53.601 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8c14ff61-fed2-40eb-ad54-5796dda02a0e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:45:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:53.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:53.603 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2b85d0-7ddb-4419-9b0c-d6548a164e8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:53.606 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:47Z, description=, device_id=96123734-7715-44ff-91b4-5f1946485c11, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd0940>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd0c40>], id=65fe2ef1-d5e6-4c6c-8e4e-9b2e604d221d, ip_allocation=immediate, mac_address=fa:16:3e:19:3d:7f, name=tempest-PortsTestJSON-785187392, network_id=d9d84da0-72ac-45c1-b703-5737905d90fa, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['e5246d74-7a72-4d89-9669-125c9e305a6a'], standard_attr_id=1512, status=DOWN, tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:45:50Z on network d9d84da0-72ac-45c1-b703-5737905d90fa
Oct 13 15:45:53 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:53.695 2 INFO neutron.agent.securitygroups_rpc [None req-af54a74d-8441-42e8-8dda-8a706f25d2f6 b3ab664f86e14aef808c0b70afd1127d 4071f84865644ae796190f7e252315e4 - - default default] Security group member updated ['7537441e-577a-4568-8769-77153bd4a825']
Oct 13 15:45:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3948: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:53 standalone.localdomain dnsmasq[538974]: exiting on receipt of SIGTERM
Oct 13 15:45:53 standalone.localdomain podman[539062]: 2025-10-13 15:45:53.786047004 +0000 UTC m=+0.091601289 container kill d5952ddf6dc803ea13ecee1ad22429568c07386d60b344709122b9701862d51e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:45:53 standalone.localdomain systemd[1]: libpod-d5952ddf6dc803ea13ecee1ad22429568c07386d60b344709122b9701862d51e.scope: Deactivated successfully.
Oct 13 15:45:53 standalone.localdomain dnsmasq[538365]: read /var/lib/neutron/dhcp/d9d84da0-72ac-45c1-b703-5737905d90fa/addn_hosts - 1 addresses
Oct 13 15:45:53 standalone.localdomain dnsmasq-dhcp[538365]: read /var/lib/neutron/dhcp/d9d84da0-72ac-45c1-b703-5737905d90fa/host
Oct 13 15:45:53 standalone.localdomain dnsmasq-dhcp[538365]: read /var/lib/neutron/dhcp/d9d84da0-72ac-45c1-b703-5737905d90fa/opts
Oct 13 15:45:53 standalone.localdomain podman[539085]: 2025-10-13 15:45:53.842429072 +0000 UTC m=+0.067655691 container kill c15a48cfac11f94a14ad8d0e9f0bb40538c51ef80aa66897421944809ea42df5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9d84da0-72ac-45c1-b703-5737905d90fa, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:45:53 standalone.localdomain podman[539098]: 2025-10-13 15:45:53.843595128 +0000 UTC m=+0.042405013 container died d5952ddf6dc803ea13ecee1ad22429568c07386d60b344709122b9701862d51e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:45:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:53.916 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:45:53Z, description=, device_id=f0582c52-b968-4297-813d-cfc4772f11e8, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889173280>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188927bbb0>], id=e0a8f293-e92f-4867-9385-9db174b1ed47, ip_allocation=immediate, mac_address=fa:16:3e:a1:c2:53, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1539, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:45:53Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:45:53 standalone.localdomain podman[539098]: 2025-10-13 15:45:53.923246233 +0000 UTC m=+0.122056088 container cleanup d5952ddf6dc803ea13ecee1ad22429568c07386d60b344709122b9701862d51e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:53 standalone.localdomain systemd[1]: libpod-conmon-d5952ddf6dc803ea13ecee1ad22429568c07386d60b344709122b9701862d51e.scope: Deactivated successfully.
Oct 13 15:45:53 standalone.localdomain podman[539102]: 2025-10-13 15:45:53.943957479 +0000 UTC m=+0.129656475 container remove d5952ddf6dc803ea13ecee1ad22429568c07386d60b344709122b9701862d51e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:45:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:53.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:53 standalone.localdomain kernel: device tap4b1a2daf-dc left promiscuous mode
Oct 13 15:45:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:53.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:53 standalone.localdomain podman[539141]: 2025-10-13 15:45:53.996934162 +0000 UTC m=+0.117224418 container kill a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:53 standalone.localdomain dnsmasq[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/addn_hosts - 2 addresses
Oct 13 15:45:53 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/host
Oct 13 15:45:53 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/opts
Oct 13 15:45:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:54.081 496978 INFO neutron.agent.dhcp.agent [None req-31677eda-898e-4987-9d79-7914175922d7 - - - - - -] DHCP configuration for ports {'65fe2ef1-d5e6-4c6c-8e4e-9b2e604d221d'} is completed
Oct 13 15:45:54 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:45:54 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:45:54 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:45:54 standalone.localdomain podman[539184]: 2025-10-13 15:45:54.130055894 +0000 UTC m=+0.067414964 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:45:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5fb963e7858002226755e7b6a370ff92ee42963a801196b0006cdd3856dad5a4-merged.mount: Deactivated successfully.
Oct 13 15:45:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d5952ddf6dc803ea13ecee1ad22429568c07386d60b344709122b9701862d51e-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:54 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d8c14ff61\x2dfed2\x2d40eb\x2dad54\x2d5796dda02a0e.mount: Deactivated successfully.
Oct 13 15:45:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:54.312 496978 INFO neutron.agent.dhcp.agent [None req-87885e52-862c-4da1-b809-5f0f8e082c7a - - - - - -] Synchronizing state
Oct 13 15:45:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:54.447 496978 INFO neutron.agent.dhcp.agent [None req-88d8af89-3751-418f-9ebf-3abea4d84c74 - - - - - -] DHCP configuration for ports {'e0a8f293-e92f-4867-9385-9db174b1ed47'} is completed
Oct 13 15:45:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:54.566 496978 INFO neutron.agent.dhcp.agent [None req-c6a32117-daf6-478c-be31-b6ed10593660 - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:45:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:54.567 496978 INFO neutron.agent.dhcp.agent [-] Starting network c5bf3aef-bf61-4abe-aa76-d54a61d47348 dhcp configuration
Oct 13 15:45:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:54.568 496978 INFO neutron.agent.dhcp.agent [-] Finished network c5bf3aef-bf61-4abe-aa76-d54a61d47348 dhcp configuration
Oct 13 15:45:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:54.569 496978 INFO neutron.agent.dhcp.agent [-] Starting network 72c8411a-a0c0-4326-8ad0-ef42960e9cbe dhcp configuration
Oct 13 15:45:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:54.569 496978 INFO neutron.agent.dhcp.agent [-] Finished network 72c8411a-a0c0-4326-8ad0-ef42960e9cbe dhcp configuration
Oct 13 15:45:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:54.570 496978 INFO neutron.agent.dhcp.agent [-] Starting network 8669e1a6-2c0a-4fef-817e-c674b6e65b80 dhcp configuration
Oct 13 15:45:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:54.570 496978 INFO neutron.agent.dhcp.agent [-] Finished network 8669e1a6-2c0a-4fef-817e-c674b6e65b80 dhcp configuration
Oct 13 15:45:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:54.571 496978 INFO neutron.agent.dhcp.agent [-] Starting network 8c14ff61-fed2-40eb-ad54-5796dda02a0e dhcp configuration
Oct 13 15:45:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43235 DF PROTO=TCP SPT=59660 DPT=9102 SEQ=3782827029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD0C7360000000001030307) 
Oct 13 15:45:54 standalone.localdomain ceph-mon[29756]: pgmap v3948: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:55 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:55.113 2 INFO neutron.agent.securitygroups_rpc [None req-e84e954b-2c01-4f03-872e-58cd6d23d873 b3ab664f86e14aef808c0b70afd1127d 4071f84865644ae796190f7e252315e4 - - default default] Security group member updated ['7537441e-577a-4568-8769-77153bd4a825']
Oct 13 15:45:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:55.242 496978 INFO neutron.agent.linux.ip_lib [None req-1d8ae29b-1668-4139-83a0-ff5a7141632c - - - - - -] Device tap59204015-fa cannot be used as it has no MAC address
Oct 13 15:45:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:55.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:55 standalone.localdomain kernel: device tap59204015-fa entered promiscuous mode
Oct 13 15:45:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:55.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:55Z|00285|binding|INFO|Claiming lport 59204015-facd-4903-acb5-cf563f511392 for this chassis.
Oct 13 15:45:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:55Z|00286|binding|INFO|59204015-facd-4903-acb5-cf563f511392: Claiming unknown
Oct 13 15:45:55 standalone.localdomain NetworkManager[5962]: <info>  [1760370355.2765] manager: (tap59204015-fa): new Generic device (/org/freedesktop/NetworkManager/Devices/58)
Oct 13 15:45:55 standalone.localdomain systemd-udevd[539219]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:45:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:45:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:55.283 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-8c14ff61-fed2-40eb-ad54-5796dda02a0e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c14ff61-fed2-40eb-ad54-5796dda02a0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1fa7e44-3640-47b9-8a98-4997dc1cdb5a, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=1, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=59204015-facd-4903-acb5-cf563f511392) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:55.286 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 59204015-facd-4903-acb5-cf563f511392 in datapath 8c14ff61-fed2-40eb-ad54-5796dda02a0e bound to our chassis
Oct 13 15:45:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:55.288 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8c14ff61-fed2-40eb-ad54-5796dda02a0e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:55Z|00287|binding|INFO|Setting lport 59204015-facd-4903-acb5-cf563f511392 ovn-installed in OVS
Oct 13 15:45:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:55Z|00288|binding|INFO|Setting lport 59204015-facd-4903-acb5-cf563f511392 up in Southbound
Oct 13 15:45:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:55.290 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[bc77b65c-4e60-4f17-864a-7e337c76d3ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:55.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:55.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:55 standalone.localdomain podman[539221]: 2025-10-13 15:45:55.408250176 +0000 UTC m=+0.106576045 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:45:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:55.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:55 standalone.localdomain podman[539221]: 2025-10-13 15:45:55.443971209 +0000 UTC m=+0.142297068 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Oct 13 15:45:55 standalone.localdomain snmpd[110332]: empty variable list in _query
Oct 13 15:45:55 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:45:55 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:55.706 2 INFO neutron.agent.securitygroups_rpc [None req-5a764614-f478-4e4f-9a10-5cd5a8c59731 b3ab664f86e14aef808c0b70afd1127d 4071f84865644ae796190f7e252315e4 - - default default] Security group member updated ['7537441e-577a-4568-8769-77153bd4a825']
Oct 13 15:45:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3949: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:56 standalone.localdomain podman[539291]: 
Oct 13 15:45:56 standalone.localdomain podman[539291]: 2025-10-13 15:45:56.372153712 +0000 UTC m=+0.102678294 container create 248501564f0fb058d1f53f192ad2eafe4665d3d655c051ba3e157835a977f691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS)
Oct 13 15:45:56 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:45:56.414 2 INFO neutron.agent.securitygroups_rpc [None req-4c740794-cd99-414e-a184-5eb1d99f14c9 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:45:56 standalone.localdomain podman[539291]: 2025-10-13 15:45:56.322049549 +0000 UTC m=+0.052574151 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:45:56 standalone.localdomain systemd[1]: Started libpod-conmon-248501564f0fb058d1f53f192ad2eafe4665d3d655c051ba3e157835a977f691.scope.
Oct 13 15:45:56 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:45:56 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42dfc3f275f18e7912250976d0379494d71b9ba0442c01d3e82acfec371f68d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:45:56 standalone.localdomain podman[539291]: 2025-10-13 15:45:56.479848221 +0000 UTC m=+0.210372793 container init 248501564f0fb058d1f53f192ad2eafe4665d3d655c051ba3e157835a977f691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:45:56 standalone.localdomain podman[539291]: 2025-10-13 15:45:56.490465082 +0000 UTC m=+0.220989664 container start 248501564f0fb058d1f53f192ad2eafe4665d3d655c051ba3e157835a977f691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:56 standalone.localdomain dnsmasq[539309]: started, version 2.85 cachesize 150
Oct 13 15:45:56 standalone.localdomain dnsmasq[539309]: DNS service limited to local subnets
Oct 13 15:45:56 standalone.localdomain dnsmasq[539309]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:45:56 standalone.localdomain dnsmasq[539309]: warning: no upstream servers configured
Oct 13 15:45:56 standalone.localdomain dnsmasq-dhcp[539309]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:45:56 standalone.localdomain dnsmasq[539309]: read /var/lib/neutron/dhcp/8c14ff61-fed2-40eb-ad54-5796dda02a0e/addn_hosts - 0 addresses
Oct 13 15:45:56 standalone.localdomain dnsmasq-dhcp[539309]: read /var/lib/neutron/dhcp/8c14ff61-fed2-40eb-ad54-5796dda02a0e/host
Oct 13 15:45:56 standalone.localdomain dnsmasq-dhcp[539309]: read /var/lib/neutron/dhcp/8c14ff61-fed2-40eb-ad54-5796dda02a0e/opts
Oct 13 15:45:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:56.572 496978 INFO neutron.agent.dhcp.agent [None req-1d8ae29b-1668-4139-83a0-ff5a7141632c - - - - - -] Finished network 8c14ff61-fed2-40eb-ad54-5796dda02a0e dhcp configuration
Oct 13 15:45:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:56.573 496978 INFO neutron.agent.dhcp.agent [None req-c6a32117-daf6-478c-be31-b6ed10593660 - - - - - -] Synchronizing state complete
Oct 13 15:45:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43236 DF PROTO=TCP SPT=59660 DPT=9102 SEQ=3782827029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD0CF370000000001030307) 
Oct 13 15:45:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:56.683 496978 INFO neutron.agent.dhcp.agent [None req-bfbda4d0-4767-4f1d-8683-0680f28c59f3 - - - - - -] DHCP configuration for ports {'61af236f-40ac-4bc1-95e8-becfbe91711e'} is completed
Oct 13 15:45:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:56.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:56 standalone.localdomain dnsmasq[539309]: read /var/lib/neutron/dhcp/8c14ff61-fed2-40eb-ad54-5796dda02a0e/addn_hosts - 0 addresses
Oct 13 15:45:56 standalone.localdomain dnsmasq-dhcp[539309]: read /var/lib/neutron/dhcp/8c14ff61-fed2-40eb-ad54-5796dda02a0e/host
Oct 13 15:45:56 standalone.localdomain dnsmasq-dhcp[539309]: read /var/lib/neutron/dhcp/8c14ff61-fed2-40eb-ad54-5796dda02a0e/opts
Oct 13 15:45:56 standalone.localdomain podman[539344]: 2025-10-13 15:45:56.799266875 +0000 UTC m=+0.098483243 container kill 248501564f0fb058d1f53f192ad2eafe4665d3d655c051ba3e157835a977f691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:45:56 standalone.localdomain ceph-mon[29756]: pgmap v3949: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:56 standalone.localdomain dnsmasq[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/addn_hosts - 1 addresses
Oct 13 15:45:56 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/host
Oct 13 15:45:56 standalone.localdomain dnsmasq-dhcp[537743]: read /var/lib/neutron/dhcp/357f4107-b3fb-41d3-b9fc-38e46382a636/opts
Oct 13 15:45:56 standalone.localdomain podman[539367]: 2025-10-13 15:45:56.825632788 +0000 UTC m=+0.070205061 container kill a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:56 standalone.localdomain dnsmasq[538365]: read /var/lib/neutron/dhcp/d9d84da0-72ac-45c1-b703-5737905d90fa/addn_hosts - 0 addresses
Oct 13 15:45:56 standalone.localdomain dnsmasq-dhcp[538365]: read /var/lib/neutron/dhcp/d9d84da0-72ac-45c1-b703-5737905d90fa/host
Oct 13 15:45:56 standalone.localdomain podman[539385]: 2025-10-13 15:45:56.868311599 +0000 UTC m=+0.068565541 container kill c15a48cfac11f94a14ad8d0e9f0bb40538c51ef80aa66897421944809ea42df5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9d84da0-72ac-45c1-b703-5737905d90fa, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:45:56 standalone.localdomain dnsmasq-dhcp[538365]: read /var/lib/neutron/dhcp/d9d84da0-72ac-45c1-b703-5737905d90fa/opts
Oct 13 15:45:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:56Z|00289|binding|INFO|Removing iface tap4bdbbaea-e2 ovn-installed in OVS
Oct 13 15:45:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:56Z|00290|binding|INFO|Removing lport 4bdbbaea-e27b-4be7-a8ff-e96f89f5b9bf ovn-installed in OVS
Oct 13 15:45:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:56.933 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 8da60366-6796-4f1f-a95e-2d27a885c83f with type ""
Oct 13 15:45:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:56.934 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-357f4107-b3fb-41d3-b9fc-38e46382a636', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-357f4107-b3fb-41d3-b9fc-38e46382a636', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4071f84865644ae796190f7e252315e4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1278b6f7-d94a-4f0c-a65f-4e19b537c178, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=4bdbbaea-e27b-4be7-a8ff-e96f89f5b9bf) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:56.936 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 4bdbbaea-e27b-4be7-a8ff-e96f89f5b9bf in datapath 357f4107-b3fb-41d3-b9fc-38e46382a636 unbound from our chassis
Oct 13 15:45:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:56.938 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 357f4107-b3fb-41d3-b9fc-38e46382a636 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:56.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:56.939 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[7a43b904-8714-4b48-b5c7-6c4ad106bc6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:57 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:57.121 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port bb5085e8-eb25-4721-99da-6263b6bf598d with type ""
Oct 13 15:45:57 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:57Z|00291|binding|INFO|Removing iface tap59204015-fa ovn-installed in OVS
Oct 13 15:45:57 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:57Z|00292|binding|INFO|Removing lport 59204015-facd-4903-acb5-cf563f511392 ovn-installed in OVS
Oct 13 15:45:57 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:57.122 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-8c14ff61-fed2-40eb-ad54-5796dda02a0e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c14ff61-fed2-40eb-ad54-5796dda02a0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1fa7e44-3640-47b9-8a98-4997dc1cdb5a, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=1, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=59204015-facd-4903-acb5-cf563f511392) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:57.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:57 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:57.124 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 59204015-facd-4903-acb5-cf563f511392 in datapath 8c14ff61-fed2-40eb-ad54-5796dda02a0e unbound from our chassis
Oct 13 15:45:57 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:57.125 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8c14ff61-fed2-40eb-ad54-5796dda02a0e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:45:57 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:57.126 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e65a94-dad3-4d7f-9ef6-ecfafc20f608]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:57.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:57.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:57 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:57Z|00293|binding|INFO|Releasing lport 96688557-c598-477a-a404-4919556f30c2 from this chassis (sb_readonly=0)
Oct 13 15:45:57 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:57Z|00294|binding|INFO|Setting lport 96688557-c598-477a-a404-4919556f30c2 down in Southbound
Oct 13 15:45:57 standalone.localdomain kernel: device tap96688557-c5 left promiscuous mode
Oct 13 15:45:57 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:57.168 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d9d84da0-72ac-45c1-b703-5737905d90fa', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d84da0-72ac-45c1-b703-5737905d90fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c68151c-f4cd-4f91-a63a-24f9cabfbb1c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=96688557-c598-477a-a404-4919556f30c2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:45:57 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:57.169 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 96688557-c598-477a-a404-4919556f30c2 in datapath d9d84da0-72ac-45c1-b703-5737905d90fa unbound from our chassis
Oct 13 15:45:57 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:57.173 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9d84da0-72ac-45c1-b703-5737905d90fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:45:57 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:45:57.173 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[78e72135-649c-4e6c-acf7-eceee2304fe7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:45:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:57.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:57.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:57 standalone.localdomain dnsmasq[539309]: exiting on receipt of SIGTERM
Oct 13 15:45:57 standalone.localdomain podman[539458]: 2025-10-13 15:45:57.301122029 +0000 UTC m=+0.059553558 container kill 248501564f0fb058d1f53f192ad2eafe4665d3d655c051ba3e157835a977f691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:45:57 standalone.localdomain systemd[1]: libpod-248501564f0fb058d1f53f192ad2eafe4665d3d655c051ba3e157835a977f691.scope: Deactivated successfully.
Oct 13 15:45:57 standalone.localdomain dnsmasq[537743]: exiting on receipt of SIGTERM
Oct 13 15:45:57 standalone.localdomain podman[539467]: 2025-10-13 15:45:57.332988264 +0000 UTC m=+0.065205786 container kill a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:45:57 standalone.localdomain systemd[1]: libpod-a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1.scope: Deactivated successfully.
Oct 13 15:45:57 standalone.localdomain podman[539486]: 2025-10-13 15:45:57.372828906 +0000 UTC m=+0.056848314 container died 248501564f0fb058d1f53f192ad2eafe4665d3d655c051ba3e157835a977f691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:45:57 standalone.localdomain podman[539486]: 2025-10-13 15:45:57.403472622 +0000 UTC m=+0.087492000 container cleanup 248501564f0fb058d1f53f192ad2eafe4665d3d655c051ba3e157835a977f691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:45:57 standalone.localdomain systemd[1]: libpod-conmon-248501564f0fb058d1f53f192ad2eafe4665d3d655c051ba3e157835a977f691.scope: Deactivated successfully.
Oct 13 15:45:57 standalone.localdomain podman[539515]: 2025-10-13 15:45:57.411311987 +0000 UTC m=+0.049094353 container died a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:45:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a42dfc3f275f18e7912250976d0379494d71b9ba0442c01d3e82acfec371f68d-merged.mount: Deactivated successfully.
Oct 13 15:45:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-248501564f0fb058d1f53f192ad2eafe4665d3d655c051ba3e157835a977f691-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d3da98327ed50c7286cf16c632124a034a9c273d907eeabd3fc1246ed7ad5805-merged.mount: Deactivated successfully.
Oct 13 15:45:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:57 standalone.localdomain podman[539488]: 2025-10-13 15:45:57.454927237 +0000 UTC m=+0.133458233 container remove 248501564f0fb058d1f53f192ad2eafe4665d3d655c051ba3e157835a977f691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c14ff61-fed2-40eb-ad54-5796dda02a0e, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:45:57 standalone.localdomain kernel: device tap59204015-fa left promiscuous mode
Oct 13 15:45:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:57.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:57.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:57 standalone.localdomain podman[539515]: 2025-10-13 15:45:57.493577563 +0000 UTC m=+0.131359959 container remove a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-357f4107-b3fb-41d3-b9fc-38e46382a636, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:45:57 standalone.localdomain systemd[1]: libpod-conmon-a6ce2aba541ed692e2f6b71c9220799cc6925d8a02a43a7d584c8482cd0ff0f1.scope: Deactivated successfully.
Oct 13 15:45:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:57.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:57 standalone.localdomain kernel: device tap4bdbbaea-e2 left promiscuous mode
Oct 13 15:45:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:57.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:57.585 496978 INFO neutron.agent.dhcp.agent [None req-2f726af7-bd0d-4ff1-9153-9afb8604aab3 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:45:57 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d8c14ff61\x2dfed2\x2d40eb\x2dad54\x2d5796dda02a0e.mount: Deactivated successfully.
Oct 13 15:45:57 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d357f4107\x2db3fb\x2d41d3\x2db9fc\x2d38e46382a636.mount: Deactivated successfully.
Oct 13 15:45:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:57.587 496978 INFO neutron.agent.dhcp.agent [None req-2f726af7-bd0d-4ff1-9153-9afb8604aab3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:57.589 496978 INFO neutron.agent.dhcp.agent [None req-2eb3a58d-11df-4ca0-84b7-25712ba5265d - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:45:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:57.590 496978 INFO neutron.agent.dhcp.agent [None req-2eb3a58d-11df-4ca0-84b7-25712ba5265d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:57.591 496978 INFO neutron.agent.dhcp.agent [None req-2eb3a58d-11df-4ca0-84b7-25712ba5265d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:45:57 standalone.localdomain sudo[539543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:45:57 standalone.localdomain sudo[539543]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:45:57 standalone.localdomain sudo[539543]: pam_unix(sudo:session): session closed for user root
Oct 13 15:45:57 standalone.localdomain sudo[539561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Oct 13 15:45:57 standalone.localdomain sudo[539561]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:45:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3950: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:57 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:45:57Z|00295|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:45:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:57.895 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:57.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:45:57.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:45:58 standalone.localdomain sudo[539561]: pam_unix(sudo:session): session closed for user root
Oct 13 15:45:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain.devices.0}] v 0)
Oct 13 15:45:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:45:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.standalone.localdomain}] v 0)
Oct 13 15:45:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:45:58 standalone.localdomain sudo[539601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:45:58 standalone.localdomain sudo[539601]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:45:58 standalone.localdomain sudo[539601]: pam_unix(sudo:session): session closed for user root
Oct 13 15:45:58 standalone.localdomain sudo[539619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:45:58 standalone.localdomain sudo[539619]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:45:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:58.513 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:58 standalone.localdomain podman[539669]: 2025-10-13 15:45:58.771596628 +0000 UTC m=+0.059882538 container kill c15a48cfac11f94a14ad8d0e9f0bb40538c51ef80aa66897421944809ea42df5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9d84da0-72ac-45c1-b703-5737905d90fa, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:45:58 standalone.localdomain dnsmasq[538365]: exiting on receipt of SIGTERM
Oct 13 15:45:58 standalone.localdomain systemd[1]: libpod-c15a48cfac11f94a14ad8d0e9f0bb40538c51ef80aa66897421944809ea42df5.scope: Deactivated successfully.
Oct 13 15:45:58 standalone.localdomain podman[539686]: 2025-10-13 15:45:58.830860687 +0000 UTC m=+0.045521011 container died c15a48cfac11f94a14ad8d0e9f0bb40538c51ef80aa66897421944809ea42df5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9d84da0-72ac-45c1-b703-5737905d90fa, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c15a48cfac11f94a14ad8d0e9f0bb40538c51ef80aa66897421944809ea42df5-userdata-shm.mount: Deactivated successfully.
Oct 13 15:45:58 standalone.localdomain podman[539686]: 2025-10-13 15:45:58.867628295 +0000 UTC m=+0.082288599 container cleanup c15a48cfac11f94a14ad8d0e9f0bb40538c51ef80aa66897421944809ea42df5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9d84da0-72ac-45c1-b703-5737905d90fa, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:45:58 standalone.localdomain systemd[1]: libpod-conmon-c15a48cfac11f94a14ad8d0e9f0bb40538c51ef80aa66897421944809ea42df5.scope: Deactivated successfully.
Oct 13 15:45:58 standalone.localdomain podman[539693]: 2025-10-13 15:45:58.908596322 +0000 UTC m=+0.104946284 container remove c15a48cfac11f94a14ad8d0e9f0bb40538c51ef80aa66897421944809ea42df5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d9d84da0-72ac-45c1-b703-5737905d90fa, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:45:58 standalone.localdomain sudo[539619]: pam_unix(sudo:session): session closed for user root
Oct 13 15:45:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:58.951 496978 INFO neutron.agent.dhcp.agent [None req-e1e5acc6-fb34-4160-9278-46d478dceafe - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:45:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:45:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:45:58 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:45:58 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:45:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:45:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:45:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:45:59 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 1b0bc30e-5b42-46e8-a24e-b404ddc66087 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:45:59 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 1b0bc30e-5b42-46e8-a24e-b404ddc66087 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:45:59 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 1b0bc30e-5b42-46e8-a24e-b404ddc66087 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:45:59 standalone.localdomain sudo[539743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:45:59 standalone.localdomain sudo[539743]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:45:59 standalone.localdomain sudo[539743]: pam_unix(sudo:session): session closed for user root
Oct 13 15:45:59 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:45:59 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:45:59 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:45:59 standalone.localdomain podman[539764]: 2025-10-13 15:45:59.127519161 +0000 UTC m=+0.054852452 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:45:59 standalone.localdomain ceph-mon[29756]: pgmap v3950: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:45:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:45:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:45:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:45:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:45:59 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:45:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:59.210 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:45:59.471 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:45:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3951: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:45:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ab6c3e437417901409fbbfc4d768f18e6d0f9efd985a5c70ad8460e590dcfd42-merged.mount: Deactivated successfully.
Oct 13 15:45:59 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dd9d84da0\x2d72ac\x2d45c1\x2db703\x2d5737905d90fa.mount: Deactivated successfully.
Oct 13 15:46:00 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:46:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:46:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:46:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43237 DF PROTO=TCP SPT=59660 DPT=9102 SEQ=3782827029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD0DEF60000000001030307) 
Oct 13 15:46:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:00.748 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:00Z, description=, device_id=caa021ad-2a9a-4f1c-8202-402702c1b41f, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890ae160>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892e9f10>], id=f798cbb2-fb89-43fc-874c-b0999bdd0a45, ip_allocation=immediate, mac_address=fa:16:3e:66:8d:18, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1559, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:46:00Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:46:01 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:46:01 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:46:01 standalone.localdomain podman[539805]: 2025-10-13 15:46:01.005605155 +0000 UTC m=+0.055584655 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:46:01 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:46:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:01.291 496978 INFO neutron.agent.dhcp.agent [None req-e276cb54-4695-4fad-b557-044ae288660b - - - - - -] DHCP configuration for ports {'f798cbb2-fb89-43fc-874c-b0999bdd0a45'} is completed
Oct 13 15:46:01 standalone.localdomain ceph-mon[29756]: pgmap v3951: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:01 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:46:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3952: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:01.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:02.065 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:da:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'c6:b9:d4:9e:30:db'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:02.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:02.068 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 13 15:46:02 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:02.192 496978 INFO neutron.agent.linux.ip_lib [None req-fba70d6f-3392-45a2-b092-f86831dbc623 - - - - - -] Device tap2513fba5-33 cannot be used as it has no MAC address
Oct 13 15:46:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:02.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:02 standalone.localdomain kernel: device tap2513fba5-33 entered promiscuous mode
Oct 13 15:46:02 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:02Z|00296|binding|INFO|Claiming lport 2513fba5-3362-4add-8a7c-752b1114c953 for this chassis.
Oct 13 15:46:02 standalone.localdomain NetworkManager[5962]: <info>  [1760370362.2366] manager: (tap2513fba5-33): new Generic device (/org/freedesktop/NetworkManager/Devices/59)
Oct 13 15:46:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:02.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:02 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:02Z|00297|binding|INFO|2513fba5-3362-4add-8a7c-752b1114c953: Claiming unknown
Oct 13 15:46:02 standalone.localdomain systemd-udevd[539836]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:46:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:02.263 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=486905a0-8fbf-4847-a0a3-d828ace412ce, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=2513fba5-3362-4add-8a7c-752b1114c953) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:02.265 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 2513fba5-3362-4add-8a7c-752b1114c953 in datapath 8669e1a6-2c0a-4fef-817e-c674b6e65b80 bound to our chassis
Oct 13 15:46:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:02.267 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8669e1a6-2c0a-4fef-817e-c674b6e65b80 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:02.268 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[5bfe097f-1fad-41b7-b3e4-91aa595f8da7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:02 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:02Z|00298|binding|INFO|Setting lport 2513fba5-3362-4add-8a7c-752b1114c953 ovn-installed in OVS
Oct 13 15:46:02 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:02Z|00299|binding|INFO|Setting lport 2513fba5-3362-4add-8a7c-752b1114c953 up in Southbound
Oct 13 15:46:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:02.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:02.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:02.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:46:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:03.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:03 standalone.localdomain ceph-mon[29756]: pgmap v3952: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:03 standalone.localdomain podman[539889]: 
Oct 13 15:46:03 standalone.localdomain podman[539889]: 2025-10-13 15:46:03.448363283 +0000 UTC m=+0.108197136 container create 74ea9f116bbe567602d20091fd081b08672c4ae3b51757f726284f32ae11c942 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:46:03 standalone.localdomain systemd[1]: Started libpod-conmon-74ea9f116bbe567602d20091fd081b08672c4ae3b51757f726284f32ae11c942.scope.
Oct 13 15:46:03 standalone.localdomain podman[539889]: 2025-10-13 15:46:03.403161833 +0000 UTC m=+0.062995766 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:03 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:03 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e6dbeea878d986eafeb402f1b3cf06dd108131db9b1c747276f1804c1bb4b98/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:03 standalone.localdomain podman[539889]: 2025-10-13 15:46:03.526741298 +0000 UTC m=+0.186575181 container init 74ea9f116bbe567602d20091fd081b08672c4ae3b51757f726284f32ae11c942 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:46:03 standalone.localdomain podman[539889]: 2025-10-13 15:46:03.533684464 +0000 UTC m=+0.193518347 container start 74ea9f116bbe567602d20091fd081b08672c4ae3b51757f726284f32ae11c942 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:03 standalone.localdomain dnsmasq[539907]: started, version 2.85 cachesize 150
Oct 13 15:46:03 standalone.localdomain dnsmasq[539907]: DNS service limited to local subnets
Oct 13 15:46:03 standalone.localdomain dnsmasq[539907]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:03 standalone.localdomain dnsmasq[539907]: warning: no upstream servers configured
Oct 13 15:46:03 standalone.localdomain dnsmasq-dhcp[539907]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:46:03 standalone.localdomain dnsmasq[539907]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/addn_hosts - 0 addresses
Oct 13 15:46:03 standalone.localdomain dnsmasq-dhcp[539907]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/host
Oct 13 15:46:03 standalone.localdomain dnsmasq-dhcp[539907]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/opts
Oct 13 15:46:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:03.598 496978 INFO neutron.agent.dhcp.agent [None req-c6a32117-daf6-478c-be31-b6ed10593660 - - - - - -] Synchronizing state
Oct 13 15:46:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3953: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:03.751 496978 INFO neutron.agent.dhcp.agent [None req-8c9dc59c-120a-4dae-95ff-0654e4e2bb6f - - - - - -] DHCP configuration for ports {'20b13e30-b2d6-40a4-9537-84abbaa32a14', '36e7e924-139d-4d57-a01f-3a7052149280'} is completed
Oct 13 15:46:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:03.816 496978 INFO neutron.agent.dhcp.agent [None req-f3b8993b-abf5-4655-8563-9a3852ce8deb - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:46:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:03.817 496978 INFO neutron.agent.dhcp.agent [-] Starting network 6525232d-3fc2-4340-afa8-43e35cd92774 dhcp configuration
Oct 13 15:46:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:03.817 496978 INFO neutron.agent.dhcp.agent [-] Finished network 6525232d-3fc2-4340-afa8-43e35cd92774 dhcp configuration
Oct 13 15:46:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:03.818 496978 INFO neutron.agent.dhcp.agent [-] Starting network 72c8411a-a0c0-4326-8ad0-ef42960e9cbe dhcp configuration
Oct 13 15:46:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:03.818 496978 INFO neutron.agent.dhcp.agent [-] Finished network 72c8411a-a0c0-4326-8ad0-ef42960e9cbe dhcp configuration
Oct 13 15:46:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:03.819 496978 INFO neutron.agent.dhcp.agent [None req-f3b8993b-abf5-4655-8563-9a3852ce8deb - - - - - -] Synchronizing state complete
Oct 13 15:46:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:03.820 496978 INFO neutron.agent.dhcp.agent [None req-a822fffb-aada-47f3-a76b-c848c6989989 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:03.820 496978 INFO neutron.agent.dhcp.agent [None req-a822fffb-aada-47f3-a76b-c848c6989989 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:03.821 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:04 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:46:04 standalone.localdomain podman[539925]: 2025-10-13 15:46:04.471987413 +0000 UTC m=+0.061085896 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:04 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:46:04 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:46:05 standalone.localdomain ceph-mon[29756]: pgmap v3953: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3954: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:06.025 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:06.574 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:05Z, description=, device_id=eb03462d-5903-4ab7-b3e1-f3ef7c87d97c, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890b65e0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890b6a90>], id=041339fd-e588-465f-a557-e570a3c02835, ip_allocation=immediate, mac_address=fa:16:3e:94:b4:1c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1598, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:46:06Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:46:06 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:46:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 15:46:06 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 7800.0 total, 600.0 interval
                                                        Cumulative writes: 20K writes, 93K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s
                                                        Cumulative WAL: 20K writes, 20K syncs, 1.00 writes per sync, written: 0.07 GB, 0.01 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1592 writes, 7155 keys, 1592 commit groups, 1.0 writes per commit group, ingest: 5.95 MB, 0.01 MB/s
                                                        Interval WAL: 1592 writes, 1592 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        
                                                        ** Compaction Stats [default] **
                                                        Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     85.5      0.74              0.22        66    0.011       0      0       0.0       0.0
                                                          L6      1/0    5.59 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   5.2    199.2    169.6      1.94              0.98        65    0.030    363K    34K       0.0       0.0
                                                         Sum      1/0    5.59 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   6.2    144.4    146.5      2.68              1.20       131    0.020    363K    34K       0.0       0.0
                                                         Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.8    166.6    169.0      0.21              0.12        10    0.021     38K   2588       0.0       0.0
                                                        
                                                        ** Compaction Stats [default] **
                                                        Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                         Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    199.2    169.6      1.94              0.98        65    0.030    363K    34K       0.0       0.0
                                                        High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     85.8      0.73              0.22        65    0.011       0      0       0.0       0.0
                                                        User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                        
                                                        Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                        
                                                        Uptime(secs): 7800.0 total, 600.0 interval
                                                        Flush(GB): cumulative 0.062, interval 0.005
                                                        AddFile(GB): cumulative 0.000, interval 0.000
                                                        AddFile(Total Files): cumulative 0, interval 0
                                                        AddFile(L0 Files): cumulative 0, interval 0
                                                        AddFile(Keys): cumulative 0, interval 0
                                                        Cumulative compaction: 0.38 GB write, 0.05 MB/s write, 0.38 GB read, 0.05 MB/s read, 2.7 seconds
                                                        Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds
                                                        Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                        Block cache BinnedLRUCache@0x557b7fb5b350#2 capacity: 308.00 MB usage: 49.80 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000436 secs_since: 0
                                                        Block cache entry stats(count,size,portion): DataBlock(4436,47.45 MB,15.4072%) FilterBlock(132,1013.11 KB,0.321222%) IndexBlock(132,1.36 MB,0.440023%) Misc(1,0.00 KB,0%)
                                                        
                                                        ** File Read Latency Histogram By Level [default] **
Oct 13 15:46:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:06.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:06 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:46:06 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:46:06 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:46:06 standalone.localdomain podman[539962]: 2025-10-13 15:46:06.86021054 +0000 UTC m=+0.131076550 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:46:06 standalone.localdomain podman[539963]: 2025-10-13 15:46:06.837606885 +0000 UTC m=+0.098295298 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:46:06 standalone.localdomain podman[539963]: 2025-10-13 15:46:06.92209181 +0000 UTC m=+0.182780183 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:06 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:46:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:06.965 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:46:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:06.965 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:46:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:06.966 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:46:06 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:06.996 2 INFO neutron.agent.securitygroups_rpc [None req-ced33245-5a73-4179-a8c1-c79f3025c3ea eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['ee66a319-849e-4d64-befc-3297e5dcd3f2']
Oct 13 15:46:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:07.080 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:06Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e7bfa0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889106b80>], id=b30d48dd-00c4-460f-a66d-6c0e877b5fd9, ip_allocation=immediate, mac_address=fa:16:3e:e3:d9:b6, name=tempest-PortsTestJSON-537333031, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:44:54Z, description=, dns_domain=, id=8669e1a6-2c0a-4fef-817e-c674b6e65b80, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-test-network-831292785, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=359, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1224, status=ACTIVE, subnets=['e5a09200-1806-418d-a827-8bb50a2bebfd'], tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:46:00Z, vlan_transparent=None, network_id=8669e1a6-2c0a-4fef-817e-c674b6e65b80, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['ee66a319-849e-4d64-befc-3297e5dcd3f2'], standard_attr_id=1600, status=DOWN, tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:46:06Z on network 8669e1a6-2c0a-4fef-817e-c674b6e65b80
Oct 13 15:46:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:07.168 496978 INFO neutron.agent.dhcp.agent [None req-d0936ae8-21dd-47ba-96a6-5cfe09850663 - - - - - -] DHCP configuration for ports {'041339fd-e588-465f-a557-e570a3c02835'} is completed
Oct 13 15:46:07 standalone.localdomain podman[540024]: 2025-10-13 15:46:07.346737796 +0000 UTC m=+0.065596987 container kill 74ea9f116bbe567602d20091fd081b08672c4ae3b51757f726284f32ae11c942 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:46:07 standalone.localdomain ceph-mon[29756]: pgmap v3954: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:07 standalone.localdomain dnsmasq[539907]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/addn_hosts - 1 addresses
Oct 13 15:46:07 standalone.localdomain dnsmasq-dhcp[539907]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/host
Oct 13 15:46:07 standalone.localdomain dnsmasq-dhcp[539907]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/opts
Oct 13 15:46:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:07.616 496978 INFO neutron.agent.dhcp.agent [None req-4ffc4011-47f2-47bb-96ff-8da750fb476e - - - - - -] DHCP configuration for ports {'b30d48dd-00c4-460f-a66d-6c0e877b5fd9'} is completed
Oct 13 15:46:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:46:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3955: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:46:07 standalone.localdomain systemd[1]: tmp-crun.ZoVVou.mount: Deactivated successfully.
Oct 13 15:46:07 standalone.localdomain podman[540047]: 2025-10-13 15:46:07.974632782 +0000 UTC m=+0.086748217 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public)
Oct 13 15:46:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:07.980 496978 INFO neutron.agent.linux.ip_lib [None req-e732915a-1d0e-4c43-9c02-1217f846812d - - - - - -] Device tapd4c19129-3f cannot be used as it has no MAC address
Oct 13 15:46:07 standalone.localdomain podman[540047]: 2025-10-13 15:46:07.991198029 +0000 UTC m=+0.103313444 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container)
Oct 13 15:46:08 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:46:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:08.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:08.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 13 15:46:08 standalone.localdomain kernel: device tapd4c19129-3f entered promiscuous mode
Oct 13 15:46:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:08.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:08 standalone.localdomain NetworkManager[5962]: <info>  [1760370368.0247] manager: (tapd4c19129-3f): new Generic device (/org/freedesktop/NetworkManager/Devices/60)
Oct 13 15:46:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:08Z|00300|binding|INFO|Claiming lport d4c19129-3fee-417a-b885-f36bf7a33dd4 for this chassis.
Oct 13 15:46:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:08Z|00301|binding|INFO|d4c19129-3fee-417a-b885-f36bf7a33dd4: Claiming unknown
Oct 13 15:46:08 standalone.localdomain systemd-udevd[540077]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:46:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:08.039 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-9bab5d1f-c037-4145-871c-9d14f5e9153e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bab5d1f-c037-4145-871c-9d14f5e9153e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2138f2d9-9487-454c-baee-06fbe8074074, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=d4c19129-3fee-417a-b885-f36bf7a33dd4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:08.042 378821 INFO neutron.agent.ovn.metadata.agent [-] Port d4c19129-3fee-417a-b885-f36bf7a33dd4 in datapath 9bab5d1f-c037-4145-871c-9d14f5e9153e bound to our chassis
Oct 13 15:46:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:08Z|00302|binding|INFO|Setting lport d4c19129-3fee-417a-b885-f36bf7a33dd4 ovn-installed in OVS
Oct 13 15:46:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:08Z|00303|binding|INFO|Setting lport d4c19129-3fee-417a-b885-f36bf7a33dd4 up in Southbound
Oct 13 15:46:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:08.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:08.045 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9bab5d1f-c037-4145-871c-9d14f5e9153e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:08.047 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[2b364f12-7d80-4c64-931b-9f7bfbc260a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:08.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:08.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:08.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:46:08 standalone.localdomain ceph-mon[29756]: pgmap v3955: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:08 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:08.876 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:09 standalone.localdomain podman[540133]: 
Oct 13 15:46:09 standalone.localdomain podman[540133]: 2025-10-13 15:46:09.051605048 +0000 UTC m=+0.076107815 container create 3c5460778326f9f7d1158288ad89d6e79d214c0b4a6cd9a0bfc1128e91dee228 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9bab5d1f-c037-4145-871c-9d14f5e9153e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 13 15:46:09 standalone.localdomain systemd[1]: Started libpod-conmon-3c5460778326f9f7d1158288ad89d6e79d214c0b4a6cd9a0bfc1128e91dee228.scope.
Oct 13 15:46:09 standalone.localdomain podman[540133]: 2025-10-13 15:46:09.011441784 +0000 UTC m=+0.035944571 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:09 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:09 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56301cae562762bfbf9293e6685dca77fb134e9fd98e0d8220c64c07aa27befe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:09 standalone.localdomain podman[540133]: 2025-10-13 15:46:09.124208772 +0000 UTC m=+0.148711549 container init 3c5460778326f9f7d1158288ad89d6e79d214c0b4a6cd9a0bfc1128e91dee228 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9bab5d1f-c037-4145-871c-9d14f5e9153e, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 15:46:09 standalone.localdomain podman[540133]: 2025-10-13 15:46:09.133640096 +0000 UTC m=+0.158142873 container start 3c5460778326f9f7d1158288ad89d6e79d214c0b4a6cd9a0bfc1128e91dee228 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9bab5d1f-c037-4145-871c-9d14f5e9153e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:09 standalone.localdomain dnsmasq[540151]: started, version 2.85 cachesize 150
Oct 13 15:46:09 standalone.localdomain dnsmasq[540151]: DNS service limited to local subnets
Oct 13 15:46:09 standalone.localdomain dnsmasq[540151]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:09 standalone.localdomain dnsmasq[540151]: warning: no upstream servers configured
Oct 13 15:46:09 standalone.localdomain dnsmasq-dhcp[540151]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:09 standalone.localdomain dnsmasq[540151]: read /var/lib/neutron/dhcp/9bab5d1f-c037-4145-871c-9d14f5e9153e/addn_hosts - 0 addresses
Oct 13 15:46:09 standalone.localdomain dnsmasq-dhcp[540151]: read /var/lib/neutron/dhcp/9bab5d1f-c037-4145-871c-9d14f5e9153e/host
Oct 13 15:46:09 standalone.localdomain dnsmasq-dhcp[540151]: read /var/lib/neutron/dhcp/9bab5d1f-c037-4145-871c-9d14f5e9153e/opts
Oct 13 15:46:09 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:09.195 496978 INFO neutron.agent.dhcp.agent [None req-e732915a-1d0e-4c43-9c02-1217f846812d - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:46:09 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:09.286 496978 INFO neutron.agent.dhcp.agent [None req-1fabd1cb-aa02-49f7-83df-15f67b3f527c - - - - - -] DHCP configuration for ports {'6dc68c4e-3a88-45d4-a4f3-69407c5f676b'} is completed
Oct 13 15:46:09 standalone.localdomain dnsmasq[540151]: read /var/lib/neutron/dhcp/9bab5d1f-c037-4145-871c-9d14f5e9153e/addn_hosts - 0 addresses
Oct 13 15:46:09 standalone.localdomain dnsmasq-dhcp[540151]: read /var/lib/neutron/dhcp/9bab5d1f-c037-4145-871c-9d14f5e9153e/host
Oct 13 15:46:09 standalone.localdomain dnsmasq-dhcp[540151]: read /var/lib/neutron/dhcp/9bab5d1f-c037-4145-871c-9d14f5e9153e/opts
Oct 13 15:46:09 standalone.localdomain podman[540169]: 2025-10-13 15:46:09.502450501 +0000 UTC m=+0.064240215 container kill 3c5460778326f9f7d1158288ad89d6e79d214c0b4a6cd9a0bfc1128e91dee228 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9bab5d1f-c037-4145-871c-9d14f5e9153e, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:46:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3956: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:46:10 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:10.050 496978 INFO neutron.agent.dhcp.agent [None req-2509d176-9d96-4f19-874c-6aee6319781e - - - - - -] DHCP configuration for ports {'d4c19129-3fee-417a-b885-f36bf7a33dd4', '6dc68c4e-3a88-45d4-a4f3-69407c5f676b'} is completed
Oct 13 15:46:10 standalone.localdomain systemd[1]: tmp-crun.xkutuv.mount: Deactivated successfully.
Oct 13 15:46:10 standalone.localdomain podman[540191]: 2025-10-13 15:46:10.102127217 +0000 UTC m=+0.116197526 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:10 standalone.localdomain podman[540191]: 2025-10-13 15:46:10.117738543 +0000 UTC m=+0.131808792 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Oct 13 15:46:10 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:46:10 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:10Z|00304|binding|INFO|Removing iface tapd4c19129-3f ovn-installed in OVS
Oct 13 15:46:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:10.201 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 281457f4-fb83-422f-8ac9-1f68548e1ad6 with type ""
Oct 13 15:46:10 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:10Z|00305|binding|INFO|Removing lport d4c19129-3fee-417a-b885-f36bf7a33dd4 ovn-installed in OVS
Oct 13 15:46:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:10.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:10.204 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-9bab5d1f-c037-4145-871c-9d14f5e9153e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bab5d1f-c037-4145-871c-9d14f5e9153e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2138f2d9-9487-454c-baee-06fbe8074074, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=d4c19129-3fee-417a-b885-f36bf7a33dd4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:10.208 378821 INFO neutron.agent.ovn.metadata.agent [-] Port d4c19129-3fee-417a-b885-f36bf7a33dd4 in datapath 9bab5d1f-c037-4145-871c-9d14f5e9153e unbound from our chassis
Oct 13 15:46:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:10.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:10.213 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bab5d1f-c037-4145-871c-9d14f5e9153e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:46:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:10.214 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[9c528613-2893-44db-8d36-6658c106c7ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:10 standalone.localdomain dnsmasq[540151]: exiting on receipt of SIGTERM
Oct 13 15:46:10 standalone.localdomain systemd[1]: libpod-3c5460778326f9f7d1158288ad89d6e79d214c0b4a6cd9a0bfc1128e91dee228.scope: Deactivated successfully.
Oct 13 15:46:10 standalone.localdomain podman[540227]: 2025-10-13 15:46:10.299570945 +0000 UTC m=+0.081347288 container kill 3c5460778326f9f7d1158288ad89d6e79d214c0b4a6cd9a0bfc1128e91dee228 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9bab5d1f-c037-4145-871c-9d14f5e9153e, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:46:10 standalone.localdomain podman[540239]: 2025-10-13 15:46:10.374023828 +0000 UTC m=+0.062386536 container died 3c5460778326f9f7d1158288ad89d6e79d214c0b4a6cd9a0bfc1128e91dee228 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9bab5d1f-c037-4145-871c-9d14f5e9153e, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:46:10 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:10.386 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:10 standalone.localdomain podman[540239]: 2025-10-13 15:46:10.412416746 +0000 UTC m=+0.100779374 container cleanup 3c5460778326f9f7d1158288ad89d6e79d214c0b4a6cd9a0bfc1128e91dee228 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9bab5d1f-c037-4145-871c-9d14f5e9153e, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:10 standalone.localdomain systemd[1]: libpod-conmon-3c5460778326f9f7d1158288ad89d6e79d214c0b4a6cd9a0bfc1128e91dee228.scope: Deactivated successfully.
Oct 13 15:46:10 standalone.localdomain podman[540246]: 2025-10-13 15:46:10.466662228 +0000 UTC m=+0.135618271 container remove 3c5460778326f9f7d1158288ad89d6e79d214c0b4a6cd9a0bfc1128e91dee228 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9bab5d1f-c037-4145-871c-9d14f5e9153e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:46:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:10.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:10 standalone.localdomain kernel: device tapd4c19129-3f left promiscuous mode
Oct 13 15:46:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:10.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:10 standalone.localdomain podman[540286]: 2025-10-13 15:46:10.61162759 +0000 UTC m=+0.060637893 container kill 74ea9f116bbe567602d20091fd081b08672c4ae3b51757f726284f32ae11c942 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:46:10 standalone.localdomain dnsmasq[539907]: exiting on receipt of SIGTERM
Oct 13 15:46:10 standalone.localdomain systemd[1]: libpod-74ea9f116bbe567602d20091fd081b08672c4ae3b51757f726284f32ae11c942.scope: Deactivated successfully.
Oct 13 15:46:10 standalone.localdomain podman[540300]: 2025-10-13 15:46:10.685069451 +0000 UTC m=+0.057393342 container died 74ea9f116bbe567602d20091fd081b08672c4ae3b51757f726284f32ae11c942 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:10 standalone.localdomain podman[540300]: 2025-10-13 15:46:10.720389573 +0000 UTC m=+0.092713444 container cleanup 74ea9f116bbe567602d20091fd081b08672c4ae3b51757f726284f32ae11c942 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:46:10 standalone.localdomain systemd[1]: libpod-conmon-74ea9f116bbe567602d20091fd081b08672c4ae3b51757f726284f32ae11c942.scope: Deactivated successfully.
Oct 13 15:46:10 standalone.localdomain podman[540302]: 2025-10-13 15:46:10.768736891 +0000 UTC m=+0.132900917 container remove 74ea9f116bbe567602d20091fd081b08672c4ae3b51757f726284f32ae11c942 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:46:10 standalone.localdomain ceph-mon[29756]: pgmap v3956: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:11 standalone.localdomain systemd[1]: tmp-crun.gUQoLT.mount: Deactivated successfully.
Oct 13 15:46:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-56301cae562762bfbf9293e6685dca77fb134e9fd98e0d8220c64c07aa27befe-merged.mount: Deactivated successfully.
Oct 13 15:46:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3c5460778326f9f7d1158288ad89d6e79d214c0b4a6cd9a0bfc1128e91dee228-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:11 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d9bab5d1f\x2dc037\x2d4145\x2d871c\x2d9d14f5e9153e.mount: Deactivated successfully.
Oct 13 15:46:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2e6dbeea878d986eafeb402f1b3cf06dd108131db9b1c747276f1804c1bb4b98-merged.mount: Deactivated successfully.
Oct 13 15:46:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-74ea9f116bbe567602d20091fd081b08672c4ae3b51757f726284f32ae11c942-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:11.070 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b9e3f7-f5c9-4d48-9eca-66995f72d494, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:46:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:11.227 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:46:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:11.228 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:46:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:11.228 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:46:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:11.347 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:46:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:11.348 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:46:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:11.349 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:46:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:11.350 2 DEBUG nova.objects.instance [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:46:11 standalone.localdomain podman[467099]: time="2025-10-13T15:46:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:46:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:46:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 422036 "" "Go-http-client/1.1"
Oct 13 15:46:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3957: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:46:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 51043 "" "Go-http-client/1.1"
Oct 13 15:46:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:11.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:12 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:12Z|00306|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:46:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:12.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:12.221 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:5f:f4 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=486905a0-8fbf-4847-a0a3-d828ace412ce, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=20b13e30-b2d6-40a4-9537-84abbaa32a14) old=Port_Binding(mac=['fa:16:3e:b5:5f:f4 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:12.223 378821 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 20b13e30-b2d6-40a4-9537-84abbaa32a14 in datapath 8669e1a6-2c0a-4fef-817e-c674b6e65b80 updated
Oct 13 15:46:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:12.226 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2a9e7c65-0557-4dfe-b10c-718cd1a7b6b6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:46:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:12.227 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8669e1a6-2c0a-4fef-817e-c674b6e65b80, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:46:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:12.228 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[2d88bab0-b5f4-48bd-af80-c6ac4f4aba49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:46:12 standalone.localdomain ceph-mon[29756]: pgmap v3957: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:12.825 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:46:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:12.840 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:46:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:12.840 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:46:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:12.841 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:46:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:12.841 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:46:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:12.842 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:46:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:46:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:46:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:46:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:46:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:46:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:46:13 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:46:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:46:13 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:46:13 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:46:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:46:13 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:46:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:13.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:13 standalone.localdomain podman[540379]: 
Oct 13 15:46:13 standalone.localdomain podman[540379]: 2025-10-13 15:46:13.033025081 +0000 UTC m=+0.118175007 container create d5c834f8ca72c55fcaccac7b6507da81f09596e9120c8aac4bfc4e34f8d41c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:13 standalone.localdomain podman[540379]: 2025-10-13 15:46:12.953299535 +0000 UTC m=+0.038449521 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:13 standalone.localdomain systemd[1]: Started libpod-conmon-d5c834f8ca72c55fcaccac7b6507da81f09596e9120c8aac4bfc4e34f8d41c7d.scope.
Oct 13 15:46:13 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:13 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905fff5a19c593be060d3f9c2a679b421280ad05bcf79062bdb6f815fa046594/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:13 standalone.localdomain podman[540379]: 2025-10-13 15:46:13.103868451 +0000 UTC m=+0.189018387 container init d5c834f8ca72c55fcaccac7b6507da81f09596e9120c8aac4bfc4e34f8d41c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:46:13 standalone.localdomain podman[540379]: 2025-10-13 15:46:13.113605735 +0000 UTC m=+0.198755671 container start d5c834f8ca72c55fcaccac7b6507da81f09596e9120c8aac4bfc4e34f8d41c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:13 standalone.localdomain dnsmasq[540397]: started, version 2.85 cachesize 150
Oct 13 15:46:13 standalone.localdomain dnsmasq[540397]: DNS service limited to local subnets
Oct 13 15:46:13 standalone.localdomain dnsmasq[540397]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:13 standalone.localdomain dnsmasq[540397]: warning: no upstream servers configured
Oct 13 15:46:13 standalone.localdomain dnsmasq-dhcp[540397]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:46:13 standalone.localdomain dnsmasq-dhcp[540397]: DHCP, static leases only on 10.100.0.16, lease time 1d
Oct 13 15:46:13 standalone.localdomain dnsmasq[540397]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/addn_hosts - 1 addresses
Oct 13 15:46:13 standalone.localdomain dnsmasq-dhcp[540397]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/host
Oct 13 15:46:13 standalone.localdomain dnsmasq-dhcp[540397]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/opts
Oct 13 15:46:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:13.188 496978 INFO neutron.agent.dhcp.agent [None req-f3b8993b-abf5-4655-8563-9a3852ce8deb - - - - - -] Synchronizing state
Oct 13 15:46:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:13.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:46:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:13.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:46:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:13.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:46:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:13.229 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:46:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:13.475 496978 INFO neutron.agent.dhcp.agent [None req-44af39ae-650d-4493-b36d-92659daa5ae8 - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:46:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:13.497 496978 INFO neutron.agent.dhcp.agent [None req-0c6c371e-611b-497f-9868-2e2dafdb0fcf - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:46:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:13.498 496978 INFO neutron.agent.dhcp.agent [-] Starting network 6f6d9b8d-568f-46ee-a430-3390e28c4ce0 dhcp configuration
Oct 13 15:46:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:13.499 496978 INFO neutron.agent.dhcp.agent [-] Finished network 6f6d9b8d-568f-46ee-a430-3390e28c4ce0 dhcp configuration
Oct 13 15:46:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:13.499 496978 INFO neutron.agent.dhcp.agent [-] Starting network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e dhcp configuration
Oct 13 15:46:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:13.499 496978 INFO neutron.agent.dhcp.agent [-] Finished network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e dhcp configuration
Oct 13 15:46:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:13.500 496978 INFO neutron.agent.dhcp.agent [-] Starting network daebeaf0-5f61-41df-be30-06585ca2c410 dhcp configuration
Oct 13 15:46:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:13.500 496978 INFO neutron.agent.dhcp.agent [-] Finished network daebeaf0-5f61-41df-be30-06585ca2c410 dhcp configuration
Oct 13 15:46:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:13.501 496978 INFO neutron.agent.dhcp.agent [None req-0c6c371e-611b-497f-9868-2e2dafdb0fcf - - - - - -] Synchronizing state complete
Oct 13 15:46:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:13.502 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:13.508 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:13.510 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:13.510 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:13.602 2 INFO neutron.agent.securitygroups_rpc [None req-60715d4c-7c97-40f6-a8c7-93ad72af3006 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:13.676 496978 INFO neutron.agent.dhcp.agent [None req-7c921aa0-d445-45ce-b847-8c8f384583e1 - - - - - -] DHCP configuration for ports {'20b13e30-b2d6-40a4-9537-84abbaa32a14', '36e7e924-139d-4d57-a01f-3a7052149280', 'b30d48dd-00c4-460f-a66d-6c0e877b5fd9', '2513fba5-3362-4add-8a7c-752b1114c953'} is completed
Oct 13 15:46:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3958: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:13 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:46:13 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:46:13 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:46:13 standalone.localdomain podman[540416]: 2025-10-13 15:46:13.872367924 +0000 UTC m=+0.070006115 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:13.982 2 INFO neutron.agent.securitygroups_rpc [None req-d9b4bf18-48fc-4a67-aebd-612af25d1951 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['ce2025a8-3588-4e7d-affc-4fbdcac1d479', 'ee66a319-849e-4d64-befc-3297e5dcd3f2']
Oct 13 15:46:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:14.039 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:06Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ed0910>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ed0820>], id=b30d48dd-00c4-460f-a66d-6c0e877b5fd9, ip_allocation=immediate, mac_address=fa:16:3e:e3:d9:b6, name=tempest-PortsTestJSON-28318411, network_id=8669e1a6-2c0a-4fef-817e-c674b6e65b80, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['ce2025a8-3588-4e7d-affc-4fbdcac1d479'], standard_attr_id=1600, status=DOWN, tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:46:13Z on network 8669e1a6-2c0a-4fef-817e-c674b6e65b80
Oct 13 15:46:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:14.043 496978 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.dhcp_release_cmd', '--privsep_sock_path', '/tmp/tmprkl85fc_/privsep.sock']
Oct 13 15:46:14 standalone.localdomain dnsmasq[538754]: exiting on receipt of SIGTERM
Oct 13 15:46:14 standalone.localdomain podman[540453]: 2025-10-13 15:46:14.116523509 +0000 UTC m=+0.056726620 container kill c727c22ac0341f9964fd812e967bd785f901618e70d43bc83555dc9c65875878 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:46:14 standalone.localdomain systemd[1]: libpod-c727c22ac0341f9964fd812e967bd785f901618e70d43bc83555dc9c65875878.scope: Deactivated successfully.
Oct 13 15:46:14 standalone.localdomain podman[540477]: 2025-10-13 15:46:14.203835773 +0000 UTC m=+0.049172386 container died c727c22ac0341f9964fd812e967bd785f901618e70d43bc83555dc9c65875878 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:46:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c727c22ac0341f9964fd812e967bd785f901618e70d43bc83555dc9c65875878-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:14.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:46:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6aad769ddee43d4e4d5a105c5a675fb33411d43e6692238f7b6571b6eb2bb3a2-merged.mount: Deactivated successfully.
Oct 13 15:46:14 standalone.localdomain podman[540477]: 2025-10-13 15:46:14.250394546 +0000 UTC m=+0.095731139 container remove c727c22ac0341f9964fd812e967bd785f901618e70d43bc83555dc9c65875878 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:46:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:14.250 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:46:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:14.251 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:46:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:14.252 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:46:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:14.252 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:46:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:14.253 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:46:14 standalone.localdomain systemd[1]: libpod-conmon-c727c22ac0341f9964fd812e967bd785f901618e70d43bc83555dc9c65875878.scope: Deactivated successfully.
Oct 13 15:46:14 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:14.665 2 INFO neutron.agent.securitygroups_rpc [None req-2725aa18-79c0-44d1-97aa-9b691cc3e40a 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:46:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/949719524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:46:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:14.725 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:46:14 standalone.localdomain ceph-mon[29756]: pgmap v3958: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:14 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/949719524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:46:14 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:14.793 2 INFO neutron.agent.securitygroups_rpc [None req-2725aa18-79c0-44d1-97aa-9b691cc3e40a 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:14.806 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:46:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:14.807 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:46:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:14.808 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:46:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:14.813 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:46:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:14.814 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:46:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:14.827 496978 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 13 15:46:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:14.692 540539 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 13 15:46:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:14.708 540539 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 13 15:46:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:14.718 540539 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 13 15:46:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:14.718 540539 INFO oslo.privsep.daemon [-] privsep daemon running as pid 540539
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.021 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.022 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9118MB free_disk=6.9495849609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.023 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.023 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:46:15 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:15.110 2 INFO neutron.agent.securitygroups_rpc [None req-bfcc0573-c3fa-4f18-96dd-dbe1c8b1ff9c 980549859d4249c8849e996aa6d0726a 11d6ccc1374f45899525aa43b59ab0c1 - - default default] Security group member updated ['e4c8dfed-cd8d-46d2-81c0-39763ceadbc8']
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.119 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.119 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.120 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.120 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:46:15 standalone.localdomain dnsmasq-dhcp[540397]: DHCPRELEASE(tap2513fba5-33) 10.100.0.6 fa:16:3e:e3:d9:b6
Oct 13 15:46:15 standalone.localdomain podman[540570]: 
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.202 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:46:15 standalone.localdomain podman[540570]: 2025-10-13 15:46:15.211662021 +0000 UTC m=+0.109100645 container create 3bd2cfc53b9fccb66b1edc69eb768f11c3bebbbfaaa0df2b5cafbde0e63e5394 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:46:15 standalone.localdomain systemd[1]: Started libpod-conmon-3bd2cfc53b9fccb66b1edc69eb768f11c3bebbbfaaa0df2b5cafbde0e63e5394.scope.
Oct 13 15:46:15 standalone.localdomain podman[540570]: 2025-10-13 15:46:15.158592395 +0000 UTC m=+0.056031059 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:15 standalone.localdomain systemd[1]: tmp-crun.51HYj8.mount: Deactivated successfully.
Oct 13 15:46:15 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc697487596517eb994405436671a3f35ad20d8f77f0585a09aaf578495d027/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:15 standalone.localdomain podman[540570]: 2025-10-13 15:46:15.291722237 +0000 UTC m=+0.189160871 container init 3bd2cfc53b9fccb66b1edc69eb768f11c3bebbbfaaa0df2b5cafbde0e63e5394 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:15 standalone.localdomain podman[540570]: 2025-10-13 15:46:15.299923503 +0000 UTC m=+0.197362127 container start 3bd2cfc53b9fccb66b1edc69eb768f11c3bebbbfaaa0df2b5cafbde0e63e5394 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:46:15 standalone.localdomain dnsmasq[540599]: started, version 2.85 cachesize 150
Oct 13 15:46:15 standalone.localdomain dnsmasq[540599]: DNS service limited to local subnets
Oct 13 15:46:15 standalone.localdomain dnsmasq[540599]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:15 standalone.localdomain dnsmasq[540599]: warning: no upstream servers configured
Oct 13 15:46:15 standalone.localdomain dnsmasq-dhcp[540599]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:15 standalone.localdomain dnsmasq[540599]: read /var/lib/neutron/dhcp/f487554e-b45e-43a8-a75f-cdae84bd0099/addn_hosts - 0 addresses
Oct 13 15:46:15 standalone.localdomain dnsmasq-dhcp[540599]: read /var/lib/neutron/dhcp/f487554e-b45e-43a8-a75f-cdae84bd0099/host
Oct 13 15:46:15 standalone.localdomain dnsmasq-dhcp[540599]: read /var/lib/neutron/dhcp/f487554e-b45e-43a8-a75f-cdae84bd0099/opts
Oct 13 15:46:15 standalone.localdomain podman[540586]: 2025-10-13 15:46:15.365377386 +0000 UTC m=+0.107615359 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:46:15 standalone.localdomain podman[540586]: 2025-10-13 15:46:15.376246265 +0000 UTC m=+0.118484198 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:46:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:46:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:46:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:46:15 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:46:15 standalone.localdomain podman[540632]: 2025-10-13 15:46:15.480104375 +0000 UTC m=+0.077835650 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:46:15 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:15.499 2 INFO neutron.agent.securitygroups_rpc [None req-b09ffe15-a6e1-4af8-9d4f-7b1459b349cf eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['ce2025a8-3588-4e7d-affc-4fbdcac1d479']
Oct 13 15:46:15 standalone.localdomain podman[540632]: 2025-10-13 15:46:15.519991048 +0000 UTC m=+0.117722333 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 13 15:46:15 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:46:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:15.532 496978 INFO neutron.agent.dhcp.agent [None req-31256f2a-648a-4a06-8d88-6e5444ee1dad - - - - - -] DHCP configuration for ports {'f813096a-b43f-47ff-9ed6-484b8508115e', 'f31ccec1-20e7-41e5-b93c-77e7913d87ef'} is completed
Oct 13 15:46:15 standalone.localdomain podman[540631]: 2025-10-13 15:46:15.459612865 +0000 UTC m=+0.054611614 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, build-date=2025-07-21T14:56:28, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:46:15 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:15.581 2 INFO neutron.agent.securitygroups_rpc [None req-bfcc0573-c3fa-4f18-96dd-dbe1c8b1ff9c 980549859d4249c8849e996aa6d0726a 11d6ccc1374f45899525aa43b59ab0c1 - - default default] Security group member updated ['e4c8dfed-cd8d-46d2-81c0-39763ceadbc8']
Oct 13 15:46:15 standalone.localdomain podman[540633]: 2025-10-13 15:46:15.597184146 +0000 UTC m=+0.192251018 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, container_name=swift_account_server, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, com.redhat.component=openstack-swift-account-container, config_id=tripleo_step4, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, io.buildah.version=1.33.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:46:15 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:46:15 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1570952136' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:46:15 standalone.localdomain dnsmasq[540397]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/addn_hosts - 1 addresses
Oct 13 15:46:15 standalone.localdomain dnsmasq-dhcp[540397]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/host
Oct 13 15:46:15 standalone.localdomain dnsmasq-dhcp[540397]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/opts
Oct 13 15:46:15 standalone.localdomain podman[540728]: 2025-10-13 15:46:15.701526081 +0000 UTC m=+0.053122598 container kill d5c834f8ca72c55fcaccac7b6507da81f09596e9120c8aac4bfc4e34f8d41c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:46:15 standalone.localdomain podman[540631]: 2025-10-13 15:46:15.707841778 +0000 UTC m=+0.302840507 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, name=rhosp17/openstack-swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, com.redhat.component=openstack-swift-object-container, architecture=x86_64, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28)
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.708 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.715 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:46:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:46:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3959: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:15 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:15.723 2 INFO neutron.agent.securitygroups_rpc [None req-bb7ef2f6-d280-4250-b0be-8f847e5f5657 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:15 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.742 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.744 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.745 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:46:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:15.745 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 2dc54bc3-f80e-4090-90eb-306bf3c526d8 with type ""
Oct 13 15:46:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:15Z|00307|binding|INFO|Removing iface tapf31ccec1-20 ovn-installed in OVS
Oct 13 15:46:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:15Z|00308|binding|INFO|Removing lport f31ccec1-20e7-41e5-b93c-77e7913d87ef ovn-installed in OVS
Oct 13 15:46:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:15.745 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f487554e-b45e-43a8-a75f-cdae84bd0099', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f487554e-b45e-43a8-a75f-cdae84bd0099', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=258957cb-1903-482f-82ff-e7d8d931c651, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=f31ccec1-20e7-41e5-b93c-77e7913d87ef) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:15.747 378821 INFO neutron.agent.ovn.metadata.agent [-] Port f31ccec1-20e7-41e5-b93c-77e7913d87ef in datapath f487554e-b45e-43a8-a75f-cdae84bd0099 unbound from our chassis
Oct 13 15:46:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:15.748 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f487554e-b45e-43a8-a75f-cdae84bd0099, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:46:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:15.749 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[438e7415-cbf6-4868-ae7d-76310f8b0ffa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:15.756 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:15 standalone.localdomain podman[540762]: 2025-10-13 15:46:15.794506692 +0000 UTC m=+0.063386719 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, architecture=x86_64, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, io.buildah.version=1.33.12, container_name=swift_container_server, release=1, summary=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-swift-container-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, version=17.1.9, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=)
Oct 13 15:46:15 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1570952136' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:46:15 standalone.localdomain dnsmasq[540599]: exiting on receipt of SIGTERM
Oct 13 15:46:15 standalone.localdomain podman[540735]: 2025-10-13 15:46:15.825949582 +0000 UTC m=+0.162089407 container kill 3bd2cfc53b9fccb66b1edc69eb768f11c3bebbbfaaa0df2b5cafbde0e63e5394 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:46:15 standalone.localdomain systemd[1]: libpod-3bd2cfc53b9fccb66b1edc69eb768f11c3bebbbfaaa0df2b5cafbde0e63e5394.scope: Deactivated successfully.
Oct 13 15:46:15 standalone.localdomain podman[540633]: 2025-10-13 15:46:15.837020848 +0000 UTC m=+0.432087720 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, tcib_managed=true, com.redhat.component=openstack-swift-account-container, name=rhosp17/openstack-swift-account, release=1, batch=17.1_20250721.1, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T16:11:22, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, container_name=swift_account_server)
Oct 13 15:46:15 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:46:15 standalone.localdomain podman[540793]: 2025-10-13 15:46:15.874968882 +0000 UTC m=+0.035099587 container died 3bd2cfc53b9fccb66b1edc69eb768f11c3bebbbfaaa0df2b5cafbde0e63e5394 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:15 standalone.localdomain podman[540793]: 2025-10-13 15:46:15.899603819 +0000 UTC m=+0.059734504 container cleanup 3bd2cfc53b9fccb66b1edc69eb768f11c3bebbbfaaa0df2b5cafbde0e63e5394 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:15 standalone.localdomain systemd[1]: libpod-conmon-3bd2cfc53b9fccb66b1edc69eb768f11c3bebbbfaaa0df2b5cafbde0e63e5394.scope: Deactivated successfully.
Oct 13 15:46:15 standalone.localdomain podman[540795]: 2025-10-13 15:46:15.950699504 +0000 UTC m=+0.085053505 container remove 3bd2cfc53b9fccb66b1edc69eb768f11c3bebbbfaaa0df2b5cafbde0e63e5394 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f487554e-b45e-43a8-a75f-cdae84bd0099, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:15 standalone.localdomain kernel: device tapf31ccec1-20 left promiscuous mode
Oct 13 15:46:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:15.964 496978 INFO neutron.agent.dhcp.agent [None req-66e55af5-5f22-409f-aeee-95f2bf2d250b - - - - - -] DHCP configuration for ports {'b30d48dd-00c4-460f-a66d-6c0e877b5fd9'} is completed
Oct 13 15:46:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:15.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:15 standalone.localdomain podman[540762]: 2025-10-13 15:46:15.985119217 +0000 UTC m=+0.253999234 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, container_name=swift_container_server, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, release=1, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, batch=17.1_20250721.1, io.buildah.version=1.33.12, vcs-type=git, version=17.1.9)
Oct 13 15:46:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:15.996 496978 INFO neutron.agent.dhcp.agent [None req-2af609af-d742-4b7b-bb64-fe346b0a8335 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:15 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:46:16 standalone.localdomain dnsmasq[540397]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/addn_hosts - 0 addresses
Oct 13 15:46:16 standalone.localdomain dnsmasq-dhcp[540397]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/host
Oct 13 15:46:16 standalone.localdomain dnsmasq-dhcp[540397]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/opts
Oct 13 15:46:16 standalone.localdomain podman[540847]: 2025-10-13 15:46:16.093788237 +0000 UTC m=+0.051670223 container kill d5c834f8ca72c55fcaccac7b6507da81f09596e9120c8aac4bfc4e34f8d41c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:46:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:16.128 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bbc697487596517eb994405436671a3f35ad20d8f77f0585a09aaf578495d027-merged.mount: Deactivated successfully.
Oct 13 15:46:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3bd2cfc53b9fccb66b1edc69eb768f11c3bebbbfaaa0df2b5cafbde0e63e5394-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:16 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df487554e\x2db45e\x2d43a8\x2da75f\x2dcdae84bd0099.mount: Deactivated successfully.
Oct 13 15:46:16 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:16.276 2 INFO neutron.agent.securitygroups_rpc [None req-8d6bba11-b567-489f-b75e-97840007d88c 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:16.305 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:16Z|00309|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:46:16 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:16.409 2 INFO neutron.agent.securitygroups_rpc [None req-928d7846-0e0c-4a59-bdd8-448e76b06320 980549859d4249c8849e996aa6d0726a 11d6ccc1374f45899525aa43b59ab0c1 - - default default] Security group member updated ['e4c8dfed-cd8d-46d2-81c0-39763ceadbc8']
Oct 13 15:46:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:16.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:16.460 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:16 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:16.762 2 INFO neutron.agent.securitygroups_rpc [None req-b6ac3da8-b3a1-43df-aa0d-8240ac4b66da 980549859d4249c8849e996aa6d0726a 11d6ccc1374f45899525aa43b59ab0c1 - - default default] Security group member updated ['e4c8dfed-cd8d-46d2-81c0-39763ceadbc8']
Oct 13 15:46:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:16.798 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:16 standalone.localdomain ceph-mon[29756]: pgmap v3959: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:16.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:16.948 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:17 standalone.localdomain dnsmasq[540397]: exiting on receipt of SIGTERM
Oct 13 15:46:17 standalone.localdomain podman[540885]: 2025-10-13 15:46:17.104597757 +0000 UTC m=+0.062555352 container kill d5c834f8ca72c55fcaccac7b6507da81f09596e9120c8aac4bfc4e34f8d41c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 13 15:46:17 standalone.localdomain systemd[1]: libpod-d5c834f8ca72c55fcaccac7b6507da81f09596e9120c8aac4bfc4e34f8d41c7d.scope: Deactivated successfully.
Oct 13 15:46:17 standalone.localdomain podman[540899]: 2025-10-13 15:46:17.178060999 +0000 UTC m=+0.061892471 container died d5c834f8ca72c55fcaccac7b6507da81f09596e9120c8aac4bfc4e34f8d41c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:46:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d5c834f8ca72c55fcaccac7b6507da81f09596e9120c8aac4bfc4e34f8d41c7d-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-905fff5a19c593be060d3f9c2a679b421280ad05bcf79062bdb6f815fa046594-merged.mount: Deactivated successfully.
Oct 13 15:46:17 standalone.localdomain podman[540899]: 2025-10-13 15:46:17.269684437 +0000 UTC m=+0.153515859 container cleanup d5c834f8ca72c55fcaccac7b6507da81f09596e9120c8aac4bfc4e34f8d41c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:46:17 standalone.localdomain systemd[1]: libpod-conmon-d5c834f8ca72c55fcaccac7b6507da81f09596e9120c8aac4bfc4e34f8d41c7d.scope: Deactivated successfully.
Oct 13 15:46:17 standalone.localdomain podman[540906]: 2025-10-13 15:46:17.309282633 +0000 UTC m=+0.176097384 container remove d5c834f8ca72c55fcaccac7b6507da81f09596e9120c8aac4bfc4e34f8d41c7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:46:17 standalone.localdomain dnsmasq[536831]: exiting on receipt of SIGTERM
Oct 13 15:46:17 standalone.localdomain systemd[1]: libpod-87dd1ef3d61799ad196222ef0d6abf2c9b9a694e81f58fb3a6bff7eb86e81619.scope: Deactivated successfully.
Oct 13 15:46:17 standalone.localdomain podman[540948]: 2025-10-13 15:46:17.44064811 +0000 UTC m=+0.070991405 container kill 87dd1ef3d61799ad196222ef0d6abf2c9b9a694e81f58fb3a6bff7eb86e81619 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d24a5474-4856-4e69-8d2c-c3d6842817d3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:17 standalone.localdomain podman[540965]: 2025-10-13 15:46:17.502391656 +0000 UTC m=+0.059418955 container died 87dd1ef3d61799ad196222ef0d6abf2c9b9a694e81f58fb3a6bff7eb86e81619 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d24a5474-4856-4e69-8d2c-c3d6842817d3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:17 standalone.localdomain podman[540965]: 2025-10-13 15:46:17.550299911 +0000 UTC m=+0.107327190 container cleanup 87dd1ef3d61799ad196222ef0d6abf2c9b9a694e81f58fb3a6bff7eb86e81619 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d24a5474-4856-4e69-8d2c-c3d6842817d3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:17 standalone.localdomain systemd[1]: libpod-conmon-87dd1ef3d61799ad196222ef0d6abf2c9b9a694e81f58fb3a6bff7eb86e81619.scope: Deactivated successfully.
Oct 13 15:46:17 standalone.localdomain podman[540974]: 2025-10-13 15:46:17.586369735 +0000 UTC m=+0.115723870 container remove 87dd1ef3d61799ad196222ef0d6abf2c9b9a694e81f58fb3a6bff7eb86e81619 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d24a5474-4856-4e69-8d2c-c3d6842817d3, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:46:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:46:17 standalone.localdomain kernel: device tapbe847d01-21 left promiscuous mode
Oct 13 15:46:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:17Z|00310|binding|INFO|Releasing lport be847d01-21e1-45ac-a7b7-ce10b59ab30c from this chassis (sb_readonly=0)
Oct 13 15:46:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:17Z|00311|binding|INFO|Setting lport be847d01-21e1-45ac-a7b7-ce10b59ab30c down in Southbound
Oct 13 15:46:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:17.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:17.650 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d24a5474-4856-4e69-8d2c-c3d6842817d3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d24a5474-4856-4e69-8d2c-c3d6842817d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a661d60-55c3-4158-a391-6ab2af3f6284, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=be847d01-21e1-45ac-a7b7-ce10b59ab30c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:17.652 378821 INFO neutron.agent.ovn.metadata.agent [-] Port be847d01-21e1-45ac-a7b7-ce10b59ab30c in datapath d24a5474-4856-4e69-8d2c-c3d6842817d3 unbound from our chassis
Oct 13 15:46:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:17.653 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d24a5474-4856-4e69-8d2c-c3d6842817d3 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:17.654 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[778abb7f-8f91-4ec1-ba2a-423ead9af56a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:17.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:17.683 496978 INFO neutron.agent.dhcp.agent [None req-9c2a0b22-2e52-41e8-9e68-4966ccf4adc5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3960: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:18.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:18 standalone.localdomain podman[541037]: 
Oct 13 15:46:18 standalone.localdomain podman[541037]: 2025-10-13 15:46:18.254090094 +0000 UTC m=+0.113495971 container create 7bd7039e863ed840b403db1c272617199852e2a33a067f89016e45429c1747e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:46:18 standalone.localdomain systemd[1]: tmp-crun.ScZcyJ.mount: Deactivated successfully.
Oct 13 15:46:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b0fe799eab9c19ca55b74a21c4d69f13fab9973d818bd2bb7d422c1defebc292-merged.mount: Deactivated successfully.
Oct 13 15:46:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87dd1ef3d61799ad196222ef0d6abf2c9b9a694e81f58fb3a6bff7eb86e81619-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:18 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dd24a5474\x2d4856\x2d4e69\x2d8d2c\x2dc3d6842817d3.mount: Deactivated successfully.
Oct 13 15:46:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:18.290 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:18 standalone.localdomain podman[541037]: 2025-10-13 15:46:18.1959183 +0000 UTC m=+0.055324217 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:18 standalone.localdomain systemd[1]: Started libpod-conmon-7bd7039e863ed840b403db1c272617199852e2a33a067f89016e45429c1747e4.scope.
Oct 13 15:46:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b83225cd4237f862e51e1212f4e6382a35f8f7a7a613082880abcbd969db23/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:18 standalone.localdomain podman[541037]: 2025-10-13 15:46:18.354892609 +0000 UTC m=+0.214298506 container init 7bd7039e863ed840b403db1c272617199852e2a33a067f89016e45429c1747e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:18 standalone.localdomain podman[541037]: 2025-10-13 15:46:18.365371686 +0000 UTC m=+0.224777563 container start 7bd7039e863ed840b403db1c272617199852e2a33a067f89016e45429c1747e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 13 15:46:18 standalone.localdomain dnsmasq[541055]: started, version 2.85 cachesize 150
Oct 13 15:46:18 standalone.localdomain dnsmasq[541055]: DNS service limited to local subnets
Oct 13 15:46:18 standalone.localdomain dnsmasq[541055]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:18 standalone.localdomain dnsmasq[541055]: warning: no upstream servers configured
Oct 13 15:46:18 standalone.localdomain dnsmasq-dhcp[541055]: DHCP, static leases only on 10.100.0.16, lease time 1d
Oct 13 15:46:18 standalone.localdomain dnsmasq[541055]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/addn_hosts - 0 addresses
Oct 13 15:46:18 standalone.localdomain dnsmasq-dhcp[541055]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/host
Oct 13 15:46:18 standalone.localdomain dnsmasq-dhcp[541055]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/opts
Oct 13 15:46:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:18.568 496978 INFO neutron.agent.linux.ip_lib [None req-d43006af-ee74-4bbf-90a3-2524c1a1f5b3 - - - - - -] Device tap0a13f0c9-f1 cannot be used as it has no MAC address
Oct 13 15:46:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:18.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:18 standalone.localdomain kernel: device tap0a13f0c9-f1 entered promiscuous mode
Oct 13 15:46:18 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:18Z|00312|binding|INFO|Claiming lport 0a13f0c9-f11f-4ac2-bcc9-e3fb7cc1fb1d for this chassis.
Oct 13 15:46:18 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:18Z|00313|binding|INFO|0a13f0c9-f11f-4ac2-bcc9-e3fb7cc1fb1d: Claiming unknown
Oct 13 15:46:18 standalone.localdomain NetworkManager[5962]: <info>  [1760370378.5988] manager: (tap0a13f0c9-f1): new Generic device (/org/freedesktop/NetworkManager/Devices/61)
Oct 13 15:46:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:18.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:18 standalone.localdomain systemd-udevd[541078]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:46:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:18.608 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-175b2471-e33d-4aac-a849-9c4809446f60', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-175b2471-e33d-4aac-a849-9c4809446f60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce51b6cd7de42daa5659d6797c11a37', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1de65ef5-c697-4aa9-9b7f-a00f5e421d3d, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=0a13f0c9-f11f-4ac2-bcc9-e3fb7cc1fb1d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:18.609 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 0a13f0c9-f11f-4ac2-bcc9-e3fb7cc1fb1d in datapath 175b2471-e33d-4aac-a849-9c4809446f60 bound to our chassis
Oct 13 15:46:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:18.610 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 175b2471-e33d-4aac-a849-9c4809446f60 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:18.612 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[2783eaff-eff5-4042-8d64-589f2d644f12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:18 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:18Z|00314|binding|INFO|Setting lport 0a13f0c9-f11f-4ac2-bcc9-e3fb7cc1fb1d ovn-installed in OVS
Oct 13 15:46:18 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:18Z|00315|binding|INFO|Setting lport 0a13f0c9-f11f-4ac2-bcc9-e3fb7cc1fb1d up in Southbound
Oct 13 15:46:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:18.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:46:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1702111441' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:46:18 standalone.localdomain ceph-mon[29756]: pgmap v3960: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:46:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1702111441' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:46:18 standalone.localdomain podman[541082]: 2025-10-13 15:46:18.671964229 +0000 UTC m=+0.046990667 container kill 7bd7039e863ed840b403db1c272617199852e2a33a067f89016e45429c1747e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:18 standalone.localdomain dnsmasq[541055]: exiting on receipt of SIGTERM
Oct 13 15:46:18 standalone.localdomain systemd[1]: libpod-7bd7039e863ed840b403db1c272617199852e2a33a067f89016e45429c1747e4.scope: Deactivated successfully.
Oct 13 15:46:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:18.682 496978 INFO neutron.agent.dhcp.agent [None req-c6c95101-aae4-4335-9268-3972750dfb5f - - - - - -] DHCP configuration for ports {'20b13e30-b2d6-40a4-9537-84abbaa32a14', '36e7e924-139d-4d57-a01f-3a7052149280', '2513fba5-3362-4add-8a7c-752b1114c953'} is completed
Oct 13 15:46:18 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:18Z|00316|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:46:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:18.711 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:5f:f4 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=486905a0-8fbf-4847-a0a3-d828ace412ce, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=20b13e30-b2d6-40a4-9537-84abbaa32a14) old=Port_Binding(mac=['fa:16:3e:b5:5f:f4 10.100.0.18 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:18.712 378821 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 20b13e30-b2d6-40a4-9537-84abbaa32a14 in datapath 8669e1a6-2c0a-4fef-817e-c674b6e65b80 updated
Oct 13 15:46:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:18.714 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2a9e7c65-0557-4dfe-b10c-718cd1a7b6b6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:46:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:18.714 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8669e1a6-2c0a-4fef-817e-c674b6e65b80, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:46:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:18.715 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[7291308c-c315-4c31-9303-3864a5acbe03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:18 standalone.localdomain podman[541101]: 2025-10-13 15:46:18.734660225 +0000 UTC m=+0.046966206 container died 7bd7039e863ed840b403db1c272617199852e2a33a067f89016e45429c1747e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 13 15:46:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:18.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:18 standalone.localdomain podman[541101]: 2025-10-13 15:46:18.789413173 +0000 UTC m=+0.101719124 container remove 7bd7039e863ed840b403db1c272617199852e2a33a067f89016e45429c1747e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:18 standalone.localdomain systemd[1]: libpod-conmon-7bd7039e863ed840b403db1c272617199852e2a33a067f89016e45429c1747e4.scope: Deactivated successfully.
Oct 13 15:46:18 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:18.832 2 INFO neutron.agent.securitygroups_rpc [None req-80928e15-0409-4a9a-9f04-ddb878920003 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:46:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-34b83225cd4237f862e51e1212f4e6382a35f8f7a7a613082880abcbd969db23-merged.mount: Deactivated successfully.
Oct 13 15:46:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7bd7039e863ed840b403db1c272617199852e2a33a067f89016e45429c1747e4-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:19 standalone.localdomain podman[541150]: 2025-10-13 15:46:19.343063074 +0000 UTC m=+0.107236697 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible)
Oct 13 15:46:19 standalone.localdomain podman[541150]: 2025-10-13 15:46:19.379947854 +0000 UTC m=+0.144121527 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:46:19 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:46:19 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:19.556 2 INFO neutron.agent.securitygroups_rpc [None req-fdfadf9e-66d9-4687-8e9d-bd4e7dd93b5d 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:19 standalone.localdomain podman[541218]: 
Oct 13 15:46:19 standalone.localdomain dnsmasq[536167]: exiting on receipt of SIGTERM
Oct 13 15:46:19 standalone.localdomain systemd[1]: libpod-3ac06919d62f04c8bbe075af676f07d587f2da83931939e9de9a4051968fcade.scope: Deactivated successfully.
Oct 13 15:46:19 standalone.localdomain podman[541235]: 2025-10-13 15:46:19.628558628 +0000 UTC m=+0.066510665 container kill 3ac06919d62f04c8bbe075af676f07d587f2da83931939e9de9a4051968fcade (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39554d73-8d1e-479c-8f6c-7b5c172c6abd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:46:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1702111441' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:46:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1702111441' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:46:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:19Z|00317|binding|INFO|Removing iface tap455e91df-86 ovn-installed in OVS
Oct 13 15:46:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:19.661 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 0afd24ef-477b-4291-be97-a6c497b42df6 with type ""
Oct 13 15:46:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:19Z|00318|binding|INFO|Removing lport 455e91df-8673-4f81-b9d7-0fa28d25fa30 ovn-installed in OVS
Oct 13 15:46:19 standalone.localdomain podman[541218]: 2025-10-13 15:46:19.563241591 +0000 UTC m=+0.055267725 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:19.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:19.668 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-39554d73-8d1e-479c-8f6c-7b5c172c6abd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39554d73-8d1e-479c-8f6c-7b5c172c6abd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6569146f-e9c9-49d5-8344-d6ba7a6cf79c, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=455e91df-8673-4f81-b9d7-0fa28d25fa30) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:19.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:19 standalone.localdomain podman[541218]: 2025-10-13 15:46:19.670321281 +0000 UTC m=+0.162347385 container create cf709cbdfe5d09d9c0a75c87611791ab6f0ddb9a40a27f46acb32ca9e6872feb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-175b2471-e33d-4aac-a849-9c4809446f60, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:46:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:19.671 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 455e91df-8673-4f81-b9d7-0fa28d25fa30 in datapath 39554d73-8d1e-479c-8f6c-7b5c172c6abd unbound from our chassis
Oct 13 15:46:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:19.674 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 39554d73-8d1e-479c-8f6c-7b5c172c6abd or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:19.675 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6c3f28-4315-40ea-a42b-e06e38016bf6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:19 standalone.localdomain podman[541257]: 2025-10-13 15:46:19.701005558 +0000 UTC m=+0.054601573 container died 3ac06919d62f04c8bbe075af676f07d587f2da83931939e9de9a4051968fcade (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39554d73-8d1e-479c-8f6c-7b5c172c6abd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:46:19 standalone.localdomain systemd[1]: Started libpod-conmon-cf709cbdfe5d09d9c0a75c87611791ab6f0ddb9a40a27f46acb32ca9e6872feb.scope.
Oct 13 15:46:19 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c1aa3d139204e3d57173cf87bd40cb28298f96483efd358c5ff4814856a47e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3961: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:19 standalone.localdomain podman[541218]: 2025-10-13 15:46:19.728826646 +0000 UTC m=+0.220852750 container init cf709cbdfe5d09d9c0a75c87611791ab6f0ddb9a40a27f46acb32ca9e6872feb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-175b2471-e33d-4aac-a849-9c4809446f60, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:19 standalone.localdomain dnsmasq[541285]: started, version 2.85 cachesize 150
Oct 13 15:46:19 standalone.localdomain dnsmasq[541285]: DNS service limited to local subnets
Oct 13 15:46:19 standalone.localdomain dnsmasq[541285]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:19 standalone.localdomain dnsmasq[541285]: warning: no upstream servers configured
Oct 13 15:46:19 standalone.localdomain dnsmasq-dhcp[541285]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:19 standalone.localdomain dnsmasq[541285]: read /var/lib/neutron/dhcp/175b2471-e33d-4aac-a849-9c4809446f60/addn_hosts - 0 addresses
Oct 13 15:46:19 standalone.localdomain dnsmasq-dhcp[541285]: read /var/lib/neutron/dhcp/175b2471-e33d-4aac-a849-9c4809446f60/host
Oct 13 15:46:19 standalone.localdomain dnsmasq-dhcp[541285]: read /var/lib/neutron/dhcp/175b2471-e33d-4aac-a849-9c4809446f60/opts
Oct 13 15:46:19 standalone.localdomain podman[541257]: 2025-10-13 15:46:19.745882628 +0000 UTC m=+0.099478613 container remove 3ac06919d62f04c8bbe075af676f07d587f2da83931939e9de9a4051968fcade (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39554d73-8d1e-479c-8f6c-7b5c172c6abd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:46:19 standalone.localdomain systemd[1]: libpod-conmon-3ac06919d62f04c8bbe075af676f07d587f2da83931939e9de9a4051968fcade.scope: Deactivated successfully.
Oct 13 15:46:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:19.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:19 standalone.localdomain kernel: device tap455e91df-86 left promiscuous mode
Oct 13 15:46:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:19.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:19 standalone.localdomain podman[541218]: 2025-10-13 15:46:19.785644129 +0000 UTC m=+0.277670273 container start cf709cbdfe5d09d9c0a75c87611791ab6f0ddb9a40a27f46acb32ca9e6872feb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-175b2471-e33d-4aac-a849-9c4809446f60, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:46:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:19.810 496978 INFO neutron.agent.dhcp.agent [None req-87bde682-ff53-428a-bcb4-b4adbb095bd2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:19.854 496978 INFO neutron.agent.dhcp.agent [None req-d43006af-ee74-4bbf-90a3-2524c1a1f5b3 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:18Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889173cd0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889173a90>], id=ab410841-acb2-4c4e-a8e5-8b12de9b6d0b, ip_allocation=immediate, mac_address=fa:16:3e:36:5d:01, name=tempest-PortsIpV6TestJSON-1069104284, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:17Z, description=, dns_domain=, id=175b2471-e33d-4aac-a849-9c4809446f60, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-1393478243, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=22251, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1662, status=ACTIVE, subnets=['0e49e1c3-cf0c-49f7-aa87-5eecc168bd02'], tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:46:17Z, vlan_transparent=None, network_id=175b2471-e33d-4aac-a849-9c4809446f60, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['6c3ff5e9-4017-4412-baf0-58ac2954cf2d'], standard_attr_id=1669, status=DOWN, tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:46:18Z on network 175b2471-e33d-4aac-a849-9c4809446f60
Oct 13 15:46:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:19.928 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:19.969 496978 INFO neutron.agent.dhcp.agent [None req-19e7ab32-88b7-477d-8df6-951bf0e5a278 - - - - - -] DHCP configuration for ports {'2b17e457-478c-43eb-b027-93646173d626'} is completed
Oct 13 15:46:20 standalone.localdomain dnsmasq[541285]: read /var/lib/neutron/dhcp/175b2471-e33d-4aac-a849-9c4809446f60/addn_hosts - 1 addresses
Oct 13 15:46:20 standalone.localdomain podman[541308]: 2025-10-13 15:46:20.074918722 +0000 UTC m=+0.068278661 container kill cf709cbdfe5d09d9c0a75c87611791ab6f0ddb9a40a27f46acb32ca9e6872feb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-175b2471-e33d-4aac-a849-9c4809446f60, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:46:20 standalone.localdomain dnsmasq-dhcp[541285]: read /var/lib/neutron/dhcp/175b2471-e33d-4aac-a849-9c4809446f60/host
Oct 13 15:46:20 standalone.localdomain dnsmasq-dhcp[541285]: read /var/lib/neutron/dhcp/175b2471-e33d-4aac-a849-9c4809446f60/opts
Oct 13 15:46:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-cd491ac44e968ac1f40c0f76ec1f85aefcbd2de3f8ce3eddd4e6cf810d033af6-merged.mount: Deactivated successfully.
Oct 13 15:46:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ac06919d62f04c8bbe075af676f07d587f2da83931939e9de9a4051968fcade-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:20 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d39554d73\x2d8d1e\x2d479c\x2d8f6c\x2d7b5c172c6abd.mount: Deactivated successfully.
Oct 13 15:46:20 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:20Z|00319|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:46:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:20.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:20 standalone.localdomain podman[541347]: 
Oct 13 15:46:20 standalone.localdomain podman[541347]: 2025-10-13 15:46:20.29254626 +0000 UTC m=+0.098321518 container create efe84cbb7cdcb02dbd240765beccea1b05f476999b0656e2d0685e983bea72de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:20.302 496978 INFO neutron.agent.dhcp.agent [None req-ab00f64b-6878-4a7b-bd79-514d0e0e6283 - - - - - -] DHCP configuration for ports {'ab410841-acb2-4c4e-a8e5-8b12de9b6d0b'} is completed
Oct 13 15:46:20 standalone.localdomain systemd[1]: Started libpod-conmon-efe84cbb7cdcb02dbd240765beccea1b05f476999b0656e2d0685e983bea72de.scope.
Oct 13 15:46:20 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:20 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc12c97e3091e121242eafbd879d1993879c465232f8f73959304a33b1c9b948/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:20 standalone.localdomain podman[541347]: 2025-10-13 15:46:20.253855324 +0000 UTC m=+0.059630612 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:20 standalone.localdomain podman[541347]: 2025-10-13 15:46:20.362985848 +0000 UTC m=+0.168761126 container init efe84cbb7cdcb02dbd240765beccea1b05f476999b0656e2d0685e983bea72de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:20 standalone.localdomain podman[541347]: 2025-10-13 15:46:20.374165567 +0000 UTC m=+0.179940825 container start efe84cbb7cdcb02dbd240765beccea1b05f476999b0656e2d0685e983bea72de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 15:46:20 standalone.localdomain dnsmasq[541372]: started, version 2.85 cachesize 150
Oct 13 15:46:20 standalone.localdomain dnsmasq[541372]: DNS service limited to local subnets
Oct 13 15:46:20 standalone.localdomain dnsmasq[541372]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:20 standalone.localdomain dnsmasq[541372]: warning: no upstream servers configured
Oct 13 15:46:20 standalone.localdomain dnsmasq-dhcp[541372]: DHCP, static leases only on 10.100.0.16, lease time 1d
Oct 13 15:46:20 standalone.localdomain dnsmasq-dhcp[541372]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:46:20 standalone.localdomain dnsmasq[541372]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/addn_hosts - 0 addresses
Oct 13 15:46:20 standalone.localdomain dnsmasq-dhcp[541372]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/host
Oct 13 15:46:20 standalone.localdomain dnsmasq-dhcp[541372]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/opts
Oct 13 15:46:20 standalone.localdomain dnsmasq[541285]: exiting on receipt of SIGTERM
Oct 13 15:46:20 standalone.localdomain podman[541387]: 2025-10-13 15:46:20.526456127 +0000 UTC m=+0.056873255 container kill cf709cbdfe5d09d9c0a75c87611791ab6f0ddb9a40a27f46acb32ca9e6872feb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-175b2471-e33d-4aac-a849-9c4809446f60, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:46:20 standalone.localdomain systemd[1]: libpod-cf709cbdfe5d09d9c0a75c87611791ab6f0ddb9a40a27f46acb32ca9e6872feb.scope: Deactivated successfully.
Oct 13 15:46:20 standalone.localdomain podman[541401]: 2025-10-13 15:46:20.602166169 +0000 UTC m=+0.061837100 container died cf709cbdfe5d09d9c0a75c87611791ab6f0ddb9a40a27f46acb32ca9e6872feb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-175b2471-e33d-4aac-a849-9c4809446f60, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:20.620 496978 INFO neutron.agent.dhcp.agent [None req-e441e417-f92f-482f-975d-606523657e6e - - - - - -] DHCP configuration for ports {'20b13e30-b2d6-40a4-9537-84abbaa32a14', '36e7e924-139d-4d57-a01f-3a7052149280', '2513fba5-3362-4add-8a7c-752b1114c953'} is completed
Oct 13 15:46:20 standalone.localdomain podman[541401]: 2025-10-13 15:46:20.635070585 +0000 UTC m=+0.094741456 container cleanup cf709cbdfe5d09d9c0a75c87611791ab6f0ddb9a40a27f46acb32ca9e6872feb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-175b2471-e33d-4aac-a849-9c4809446f60, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:46:20 standalone.localdomain systemd[1]: libpod-conmon-cf709cbdfe5d09d9c0a75c87611791ab6f0ddb9a40a27f46acb32ca9e6872feb.scope: Deactivated successfully.
Oct 13 15:46:20 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:20Z|00320|binding|INFO|Removing iface tap0a13f0c9-f1 ovn-installed in OVS
Oct 13 15:46:20 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:20Z|00321|binding|INFO|Removing lport 0a13f0c9-f11f-4ac2-bcc9-e3fb7cc1fb1d ovn-installed in OVS
Oct 13 15:46:20 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:20.656 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 30e4d3e4-c246-498b-8b31-c28664ee82ab with type ""
Oct 13 15:46:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:20.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:20 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:20.658 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-175b2471-e33d-4aac-a849-9c4809446f60', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-175b2471-e33d-4aac-a849-9c4809446f60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce51b6cd7de42daa5659d6797c11a37', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1de65ef5-c697-4aa9-9b7f-a00f5e421d3d, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=0a13f0c9-f11f-4ac2-bcc9-e3fb7cc1fb1d) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:20 standalone.localdomain ceph-mon[29756]: pgmap v3961: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:20 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:20.661 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 0a13f0c9-f11f-4ac2-bcc9-e3fb7cc1fb1d in datapath 175b2471-e33d-4aac-a849-9c4809446f60 unbound from our chassis
Oct 13 15:46:20 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:20.663 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 175b2471-e33d-4aac-a849-9c4809446f60 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:20.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:20 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:20.666 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[857bb4e0-ee5c-48dd-9148-9fa85a53bd7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:20 standalone.localdomain podman[541408]: 2025-10-13 15:46:20.678609423 +0000 UTC m=+0.126279270 container remove cf709cbdfe5d09d9c0a75c87611791ab6f0ddb9a40a27f46acb32ca9e6872feb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-175b2471-e33d-4aac-a849-9c4809446f60, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:20.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:20 standalone.localdomain kernel: device tap0a13f0c9-f1 left promiscuous mode
Oct 13 15:46:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:20.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:20.717 496978 INFO neutron.agent.dhcp.agent [None req-20a6f322-3220-4047-9585-86c56c30f5e2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:20.718 496978 INFO neutron.agent.dhcp.agent [None req-20a6f322-3220-4047-9585-86c56c30f5e2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:20 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:20.803 2 INFO neutron.agent.securitygroups_rpc [None req-ee9425d1-0a3b-4647-abdf-332f20c6f83c 1e7122234f844ff48633888d2464f9a1 2a90758ca570420084aa8bec8bdfee73 - - default default] Security group member updated ['94c6b164-ff9d-4bd1-b0ec-d17a36adf8d1']
Oct 13 15:46:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:20.898 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:20Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890d72b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890d7040>], id=fb7984a7-e650-4a52-9a44-2a7ace86892c, ip_allocation=immediate, mac_address=fa:16:3e:4e:8d:62, name=tempest-RoutersAdminNegativeTest-1609911924, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=True, project_id=2a90758ca570420084aa8bec8bdfee73, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['94c6b164-ff9d-4bd1-b0ec-d17a36adf8d1'], standard_attr_id=1701, status=DOWN, tags=[], tenant_id=2a90758ca570420084aa8bec8bdfee73, updated_at=2025-10-13T15:46:20Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:46:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:20.996 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:46:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:20.999 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:46:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:20.999 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.003 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.007 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fa8ebcc6-4d8d-4e96-a9a6-0b27f55f4b84', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:46:20.999741', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'c3d56304-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.188986629, 'message_signature': 'c8ba7ab444f4fbb0e68e5fdcca23be43f71f3a3765de85ed2ba6618e840c7448'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:46:20.999741', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'c3d6066a-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.193629824, 'message_signature': '5eb76ecd3c4799f5d8b48266eb5eef188662a0d17a052af592c69c1540ef1a79'}]}, 'timestamp': '2025-10-13 15:46:21.008537', '_unique_id': '31e7bfbf07b34af0914f8abcbcdb382a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.010 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.012 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.012 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.013 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fbc9dfff-1466-4e7d-9dcf-4d86263cb12e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:46:21.012554', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'c3d6b9b6-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.188986629, 'message_signature': '360abc8e8dd359b9239bcc5795d60d9e661813587e8954f0d06ee9cf9074e53a'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:46:21.012554', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'c3d6cdca-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.193629824, 'message_signature': 'b0f2b096072e517780ec0d77fd0a7c9c762518bc3aeff475de5288bb2b4d7219'}]}, 'timestamp': '2025-10-13 15:46:21.013910', '_unique_id': '82fe8be8a3464dd18510cbf7a8ec7dd4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.015 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.016 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:46:21 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:21.033 2 INFO neutron.agent.securitygroups_rpc [None req-04580103-7d32-4e6b-820d-d4bafed69aca eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['1cccd955-cbbd-4a20-9085-c7e737d8cdd4']
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.038 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.039 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.039 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.053 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.054 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4e131fd3-23ea-470f-bc7e-35bd1060c6da', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:46:21.016886', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3dab232-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.206178285, 'message_signature': '8f5888db0ab72d7f036a5bd323dc7a7314da53a7b045faedf7fac9ab36a5ef03'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:46:21.016886', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3dac5f6-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.206178285, 'message_signature': 'b95a4b437625f10d70d7e443abb0f6a1d472595c844bdfb8139e160cb8554a8f'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:46:21.016886', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'c3dad94c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.206178285, 'message_signature': '7d63d7d13fbeebc22ddb4f6016b9b6bde8a66e823f340147a3559f150067eba1'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:46:21.016886', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3dd1180-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.229299506, 'message_signature': 'c7db72f0db5fe060d768220bcf65205609306c2dcc9445d5daa4c0f237e45558'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:46:21.016886', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3dd30ac-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.229299506, 'message_signature': '9f73e968e18f146d2788bac26f91e73a68e5b2572c8dd603f3fa167ac950330e'}]}, 'timestamp': '2025-10-13 15:46:21.055448', '_unique_id': '4a3888cdc3804c84a13eb2cbdb0e74b2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.057 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.058 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.088 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29758464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.089 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.089 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 40960 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:21.093 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:20Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fe86d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fe87f0>], id=8b58413e-cfab-47f1-92d6-bab5eddc4b5e, ip_allocation=immediate, mac_address=fa:16:3e:e2:49:3a, name=tempest-PortsTestJSON-877305522, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:44:54Z, description=, dns_domain=, id=8669e1a6-2c0a-4fef-817e-c674b6e65b80, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-test-network-831292785, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=359, qos_policy_id=None, revision_number=5, router:external=False, shared=False, standard_attr_id=1224, status=ACTIVE, subnets=['bcdeeb43-04fe-4c6e-94c2-219de9eb7111', 'c1d5d592-b99b-4373-8779-a5c7a0d57bbc'], tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:46:17Z, vlan_transparent=None, network_id=8669e1a6-2c0a-4fef-817e-c674b6e65b80, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['1cccd955-cbbd-4a20-9085-c7e737d8cdd4'], standard_attr_id=1702, status=DOWN, tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:46:20Z on network 8669e1a6-2c0a-4fef-817e-c674b6e65b80
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.110 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.110 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7b119de7-dc08-4337-a659-fcc3732eda1e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29758464, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:46:21.059045', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3e24fe2-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': 'ee6139c6f27a1f0a54b0a1f1bdb0642e82388f8ae2afe7eaa19090878d6e7eeb'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:46:21.059045', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3e26388-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': '47d8783e7826c69a7bd870d7c52e5f6baed90c3e07fc29ab949113c91adb1a28'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 40960, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:46:21.059045', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'c3e27634-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': 'c8817ccfc1c06760bcdf67d3adcb54ca0310f803cf93d3dcf13f16f671a55a27'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:46:21.059045', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3e5a35e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.279187783, 'message_signature': 'f3e943bb4bff85c376b6805a4eb16ab5810d7cfff5b7b29dc24933753d25f2e7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:46:21.059045', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3e5b65a-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.279187783, 'message_signature': '8b2cff8e7d126119039273d6ab84e01d4036312bf5dc15f50907ccae06e085d8'}]}, 'timestamp': '2025-10-13 15:46:21.111271', '_unique_id': '20356307eb644705ae5e777c83b28b99'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.112 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.114 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.114 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.114 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c77efb51-316f-4e11-b3e9-9e195304dc2e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:46:21.114434', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'c3e6462e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.188986629, 'message_signature': '5cbe5367ece37d727efea53a7b9e10d4c64d8b9de14f631237bce108d46f3aa2'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:46:21.114434', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'c3e6588a-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.193629824, 'message_signature': 'e1acf23387e4507fe7b7eaf86091f4271541b3b3a3ebf5ead57952f4796d2d36'}]}, 'timestamp': '2025-10-13 15:46:21.115432', '_unique_id': 'cbd9e8a15dbf4a719963725f73d15bbf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.116 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.118 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.118 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.118 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.118 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 85 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.121 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 57 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fd261b36-fc1a-48c9-98e8-f3e7377afc2f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 85, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:46:21.118636', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'c3e6f088-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.188986629, 'message_signature': 'd5e6b30ed4230c574ef33d43b3dd192f3c9580f93992a3b1be4d8009131cfe13'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 57, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:46:21.118636', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'c3e74d44-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.193629824, 'message_signature': '13077bf25508b667acea2b032a58b9a7a423207032542245b543e742186c0eba'}]}, 'timestamp': '2025-10-13 15:46:21.121757', '_unique_id': 'd2e872a4c1874b0cba0fc6c7234770a6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.123 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.124 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:46:21 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:46:21 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:46:21 standalone.localdomain podman[541446]: 2025-10-13 15:46:21.14302581 +0000 UTC m=+0.078577912 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:21 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.148 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 53.14453125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.167 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '683eef50-ca8a-4e4b-926e-5cb777d8e422', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 53.14453125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:46:21.124419', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'c3eb7c2a-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.337585014, 'message_signature': '3c9dbbc006507ff401791feb7d6cff0db432c035e66277edcf0154af4fc22fb5'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': '2025-10-13T15:46:21.124419', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'c3ee5d6e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.356415302, 'message_signature': '96df1309ba028beaa31f8618000a2cbc7832ed9f463f6ec763f354ae12cc8194'}]}, 'timestamp': '2025-10-13 15:46:21.168091', '_unique_id': '4b39a7bf00fe44cba4d3d82f40014c9e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.170 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.171 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.172 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.172 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.173 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.174 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.174 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '907d3b19-f123-47a1-806a-72119d361c8d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:46:21.172204', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3ef1b82-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.206178285, 'message_signature': '01de306fe34e734cebc5a1f82f9839a248032329825d2ebf0c78d8fb3defcca8'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:46:21.172204', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3ef363a-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.206178285, 'message_signature': '225d0cdd92472312c968b5b0b00f30eca4f2b83ec75cbb48d026a4c8cf706e26'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:46:21.172204', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'c3ef5106-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.206178285, 'message_signature': 'ca218b0499c3b37c609aa9108311f190eab585b8c06cbad898e8caba9a02803f'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:46:21.172204', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3ef6ac4-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.229299506, 'message_signature': 'bd2036f8df17c62ce5b8c15b71bbb2710f0b2f965470106e752a2cfc7d996d54'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:46:21.172204', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3ef82ca-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.229299506, 'message_signature': 'b8faec58e51b2f0461059a1c2eeec1e5f1be46fb2008f0ab114798d9d11baf97'}]}, 'timestamp': '2025-10-13 15:46:21.175630', '_unique_id': '43c225d0143e4b128bec95ee7a4dd5e8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.177 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.179 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.179 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.180 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5c30d632-83c6-4258-9fef-893723b95612', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:46:21.179602', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'c3f03a62-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.188986629, 'message_signature': '49094c7713f2ec1d436108cfaa5d0268e20ba99f9e9f66f06c7ca298ec352e00'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:46:21.179602', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'c3f05416-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.193629824, 'message_signature': 'ba93e886c48a61b857ea88d444abb5564b321e87887d506fa32dda8a1c9aedaf'}]}, 'timestamp': '2025-10-13 15:46:21.180951', '_unique_id': '54982352d4204d6d8bebf380ff9bb634'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.182 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.183 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.184 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1018 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.184 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.185 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 10 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.186 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.186 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '33af603b-93f5-4cf7-b85f-9ab2ad7347ca', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1018, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:46:21.184173', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3f0ec82-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': '503cb9a2f56bc4830dc899aec6b350490bd1b6a6378f65d8b8c9866d04ccabc5'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:46:21.184173', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3f103e8-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': '1c966845f442916f0877100d14c568a7d97570a82802ace4458c3cd0a2b556a7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 10, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:46:21.184173', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'c3f11b6c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': '573b38a8784bbb1947fb5926ba1161a11c4bfac30cb70ce615e289415723a54e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:46:21.184173', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3f13372-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.279187783, 'message_signature': '787f1419856311cfa2f81a2ec63d828772d9accb8b8688decc1fba39d4073623'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:46:21.184173', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3f14b78-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.279187783, 'message_signature': '4154051e848e7a80a1b71fa13ff1d4426ab89978eb76c6cbf8a484303239a077'}]}, 'timestamp': '2025-10-13 15:46:21.187249', '_unique_id': '51ae2d8bf8864587bf30b6fff58928e4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.189 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.190 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.191 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.191 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '69fd310d-4aee-4ae6-ac25-5d5be38bfa7d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:46:21.191115', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'c3f1fbc2-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.188986629, 'message_signature': '46e0aea1752cf1d579fbd3ea567763c1e4e763b5b8a1e209d00df89afd0a2735'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:46:21.191115', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'c3f2165c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.193629824, 'message_signature': 'ae7082638c6dd8416d0243d544e83bf89fb4a6bb74ac34b67fd01d3db784eda0'}]}, 'timestamp': '2025-10-13 15:46:21.192473', '_unique_id': 'dd91a40c4a254101b146008d40d26a25'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.193 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.196 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.196 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 689775433 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.197 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 105173955 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.198 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 8149490 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.199 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.199 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e8a5d738-ff79-41bf-bc95-74da057290aa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 689775433, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:46:21.196363', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3f2cab6-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': '4ff2d842103cf3eb68bbccd96c8b8e5fea30ed4a42b48f96e9d570b021814a2d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 105173955, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:46:21.196363', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3f314d0-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': '241e0f14363c33bf399a3a57dacbca49974fc41e3095d02b0a96d0ff5dacdbc6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 8149490, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:46:21.196363', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'c3f32484-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': '32f14f6eb82e201dc430963db47e2349b3bc8d3ba06b88b4b64e1a0e510a8858'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:46:21.196363', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3f33424-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.279187783, 'message_signature': '5329491255cca8a7c86cff374e0355a1856bbc842a167ddb5a16397662181b85'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:46:21.196363', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3f342f2-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.279187783, 'message_signature': '722206ad0b2d3aba6a23d259dcb5c50b41429a5067c68e7eae2512d5d51fcf12'}]}, 'timestamp': '2025-10-13 15:46:21.200036', '_unique_id': '4cf2e8c2c8cc4fee98d9a98c692017e1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.201 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.202 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.202 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 11543 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.203 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'da930a1a-5c88-4961-aa35-5bf2d92122ad', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11543, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:46:21.202857', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'c3f3c240-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.188986629, 'message_signature': 'c5e1345d5c9c73ac011283aa29fa7363ba917a8e054cd21b87acde8917ece557'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:46:21.202857', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'c3f3d762-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.193629824, 'message_signature': '6b2574656a3f1147621154ed56376a7091a7e77d274f2876fef8ffbf11f0bfcd'}]}, 'timestamp': '2025-10-13 15:46:21.203866', '_unique_id': '0fc41fcc4fe6459d96d68e0eac474865'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.204 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.206 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.206 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.206 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.207 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.207 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.208 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4a3564d4-2e87-4686-92f5-d6ed47184117', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:46:21.206390', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3f44fa8-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.206178285, 'message_signature': '7b59c2755b2b1e8906a61ba476440b1bd224555f04bcb64433ccd3a6442765b2'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:46:21.206390', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3f45fac-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.206178285, 'message_signature': 'f7fcb3021cdebf718f55a6d49a4f5a765daecd08c852ac1fc9507fcd2855b03c'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:46:21.206390', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'c3f470aa-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.206178285, 'message_signature': 'cb9370489b33f113d556a0f195a75dd093f692292ed6a60725319a4cc6a4068e'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:46:21.206390', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3f48298-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.229299506, 'message_signature': '6718f6476957bc0fafe9a14912dad40baeacbbd7d003efe8c484289bb248f941'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:46:21.206390', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3f48fd6-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.229299506, 'message_signature': '2708b3e4a9078de3f6f7160d7ebe04e5077006e9a6687f389d70c06bd95b314c'}]}, 'timestamp': '2025-10-13 15:46:21.208598', '_unique_id': '7795a3c5b2784f97bf370f01a00e8b28'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.210 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.211 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.211 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.211 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.211 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.212 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.212 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '57305a84-06da-44b2-99fc-09cc9e6e9b5a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:46:21.211187', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3f5060a-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': '6d12f650e34d78254edf28511ed65cbdff86187be8bcc065b7cfee437a9cd2cf'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:46:21.211187', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3f514c4-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': '9fa7c23659727d07a94debfdad10b1a471111222f009e858ec18a62dff8b2246'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:46:21.211187', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'c3f52306-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': '3dc0ec70efb25200f24728c09aa1fba291c2cf033ec369d7f8deca95ca269756'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:46:21.211187', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3f53170-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.279187783, 'message_signature': 'c75767cc5d44af78ea9ab94db4804985ce28d503ed72235ecb2fcd43e199ec70'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:46:21.211187', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3f53fe4-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.279187783, 'message_signature': 'ae6750abbc37ea628e4a75d62aaa64ebcdbb844f0eda85f0b079e7c0a151a560'}]}, 'timestamp': '2025-10-13 15:46:21.213069', '_unique_id': 'ef64687fb9034c98b2e16f10700002d7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.214 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.215 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.215 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.215 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e125a38f-d8f9-44e5-8898-79fc488e96ca', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:46:21.215371', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'c3f5ab8c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.188986629, 'message_signature': 'bf447a397705acb284e33d9bc0dbe6c1dbfe400c2b4d3b4ad7cdeb02d287d63b'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:46:21.215371', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'c3f5bbea-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.193629824, 'message_signature': 'fd2a2adc984b2cb6844701df58a2886474851dd74fea4c7b66845078f12de143'}]}, 'timestamp': '2025-10-13 15:46:21.216260', '_unique_id': 'fd0973d94e154d76bb4bee128d912833'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.217 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.218 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.218 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 131 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.218 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ed0f2a6d-eeee-410a-8782-aa3fe9222c12', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 131, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:46:21.218464', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'c3f623be-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.188986629, 'message_signature': '681968001a23d7f09e7a9e9d74954c0b5d54088975bba7364a8478d7380f2060'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:46:21.218464', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'c3f63412-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.193629824, 'message_signature': '7cd47c6716b910a3fea399176d7001c03ed89fc211af7706603fd491423cb6bc'}]}, 'timestamp': '2025-10-13 15:46:21.219333', '_unique_id': '5ebac216575349a3b3a7d0a62fa23f64'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.220 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.221 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.221 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.221 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 15370724 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.222 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.222 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.223 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '76922da4-5060-43b5-95a4-32fd7906d27c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:46:21.221432', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3f6972c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': 'da73054702427d7d54cf79f3bd68327281b0909a584323484b80b9684b74d80f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 15370724, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:46:21.221432', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3f6a67c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': '3907d2a89e919715e556d88b170e23281eec17f18432a4bc75bb25cf656e1573'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:46:21.221432', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'c3f6b4b4-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': 'b2c061f0855e142af7bf8bd91423b9a08e1445cc089a90a0bbb668091e2f841b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:46:21.221432', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3f6c436-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.279187783, 'message_signature': 'f34e19ee3047a36f4dac85d76f6850b2cdd016adb299a017a8d4176893e1d73d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:46:21.221432', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3f6d20a-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.279187783, 'message_signature': 'ff37632526e9c765e8ca39e9c17fb1f50bc9d3b5be1ebb2f2217eceef5ec9e7a'}]}, 'timestamp': '2025-10-13 15:46:21.223354', '_unique_id': 'f05a51e17a864b76a93c0ba3c7c06785'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.224 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.225 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.225 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 8960 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.226 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 4008 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f02db53b-d1e0-4079-84e3-557eb9462d2b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8960, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:46:21.225634', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': 'c3f73c90-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.188986629, 'message_signature': '295784c7778aff812dde94db81dcf6a8ed0555ea9f1e41bd0667a459b6e9025b'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4008, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:46:21.225634', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': 'c3f74c76-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.193629824, 'message_signature': '335b508749ac95ba298f83f0a7a2386cda436413c54d4647fd50d1b6f22f88fc'}]}, 'timestamp': '2025-10-13 15:46:21.226580', '_unique_id': 'e2d5dceb2add47918f21f125a3a14129'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.227 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.232 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.233 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.233 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.234 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.234 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.234 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '62ee9de3-163a-47a4-87ff-86460ffa51bf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:46:21.233101', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3f85fe4-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': 'b42ec224b97e8b24f504d1f5bcd9487102b335f8a39e9a9013d8acfc993a4dac'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:46:21.233101', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3f8715a-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': '202561faf1edf1209a03d62df504c64895b18e4b2b3d365d18ab8135a61a1086'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:46:21.233101', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': 'c3f88096-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.24832431, 'message_signature': '80adb6a555cdba34bad19d5992eb1f8c9c9010a516014dd9f8828f3fe8078bfd'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:46:21.233101', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3f88e9c-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.279187783, 'message_signature': '9ebd78e5b7e5ff4768ae1b9e5d144c795a26a78a910fc527acefa0e47fbb632e'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:46:21.233101', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3f89c8e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.279187783, 'message_signature': '570006e89fedc599a8f14d9b715c4e8cbbee0710dc14c55ff6138c9cf0ae0dc6'}]}, 'timestamp': '2025-10-13 15:46:21.235098', '_unique_id': 'ad7143e1e8e145fbaccdcad08e98db91'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.236 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.237 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.237 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 13830000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.238 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 35700000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a57930d7-c127-4161-9881-a5a0e4e0d515', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 13830000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:46:21.237708', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'c3f913e4-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.337585014, 'message_signature': '96920c66d8ef36e0e91f1c29234021a46d558b0c4784fa4d66a598a739ef8ebc'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 35700000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:46:21.237708', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'c3f9233e-a84b-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9824.356415302, 'message_signature': '38d3b714a43e45d22b9b9d61434eb7aee19658cc0be0ef6acb4bb0e1c0603a90'}]}, 'timestamp': '2025-10-13 15:46:21.238575', '_unique_id': '31ec9d2367874b53b26eb8199e42b05c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:46:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:46:21.239 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:46:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-34c1aa3d139204e3d57173cf87bd40cb28298f96483efd358c5ff4814856a47e-merged.mount: Deactivated successfully.
Oct 13 15:46:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf709cbdfe5d09d9c0a75c87611791ab6f0ddb9a40a27f46acb32ca9e6872feb-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:21 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d175b2471\x2de33d\x2d4aac\x2da849\x2d9c4809446f60.mount: Deactivated successfully.
Oct 13 15:46:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:21.343 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:21.368 496978 INFO neutron.agent.dhcp.agent [None req-298d115b-ed70-4114-bcfa-a2bdb326a57d - - - - - -] DHCP configuration for ports {'fb7984a7-e650-4a52-9a44-2a7ace86892c'} is completed
Oct 13 15:46:21 standalone.localdomain systemd[1]: tmp-crun.kfMHGY.mount: Deactivated successfully.
Oct 13 15:46:21 standalone.localdomain dnsmasq[541372]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/addn_hosts - 1 addresses
Oct 13 15:46:21 standalone.localdomain dnsmasq-dhcp[541372]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/host
Oct 13 15:46:21 standalone.localdomain dnsmasq-dhcp[541372]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/opts
Oct 13 15:46:21 standalone.localdomain podman[541500]: 2025-10-13 15:46:21.390082207 +0000 UTC m=+0.053430058 container kill efe84cbb7cdcb02dbd240765beccea1b05f476999b0656e2d0685e983bea72de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:46:21 standalone.localdomain dnsmasq[535434]: exiting on receipt of SIGTERM
Oct 13 15:46:21 standalone.localdomain podman[541507]: 2025-10-13 15:46:21.403569457 +0000 UTC m=+0.055540713 container kill 37686b1b6f3156610d752a2b558d977dcd87d1d5c714612db28da10f20e857ee (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ed029b4-cbf3-4090-b43d-8017c56a090d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:21 standalone.localdomain systemd[1]: libpod-37686b1b6f3156610d752a2b558d977dcd87d1d5c714612db28da10f20e857ee.scope: Deactivated successfully.
Oct 13 15:46:21 standalone.localdomain podman[541528]: 2025-10-13 15:46:21.45846212 +0000 UTC m=+0.041364381 container died 37686b1b6f3156610d752a2b558d977dcd87d1d5c714612db28da10f20e857ee (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ed029b4-cbf3-4090-b43d-8017c56a090d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 15:46:21 standalone.localdomain podman[541528]: 2025-10-13 15:46:21.534549213 +0000 UTC m=+0.117451454 container cleanup 37686b1b6f3156610d752a2b558d977dcd87d1d5c714612db28da10f20e857ee (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ed029b4-cbf3-4090-b43d-8017c56a090d, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 13 15:46:21 standalone.localdomain systemd[1]: libpod-conmon-37686b1b6f3156610d752a2b558d977dcd87d1d5c714612db28da10f20e857ee.scope: Deactivated successfully.
Oct 13 15:46:21 standalone.localdomain podman[541530]: 2025-10-13 15:46:21.559338907 +0000 UTC m=+0.131588336 container remove 37686b1b6f3156610d752a2b558d977dcd87d1d5c714612db28da10f20e857ee (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ed029b4-cbf3-4090-b43d-8017c56a090d, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:21Z|00322|binding|INFO|Releasing lport 6250f4f3-3337-4a22-bb48-1353a8d9c25a from this chassis (sb_readonly=0)
Oct 13 15:46:21 standalone.localdomain kernel: device tap6250f4f3-33 left promiscuous mode
Oct 13 15:46:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:21Z|00323|binding|INFO|Setting lport 6250f4f3-3337-4a22-bb48-1353a8d9c25a down in Southbound
Oct 13 15:46:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:21.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:21.608 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:ffff::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-4ed029b4-cbf3-4090-b43d-8017c56a090d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ed029b4-cbf3-4090-b43d-8017c56a090d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7cd3473e07a94d118687954024245a38', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=749d7bc7-f648-4d5b-9814-dc042e11a679, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=6250f4f3-3337-4a22-bb48-1353a8d9c25a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:21.609 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 6250f4f3-3337-4a22-bb48-1353a8d9c25a in datapath 4ed029b4-cbf3-4090-b43d-8017c56a090d unbound from our chassis
Oct 13 15:46:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:21.611 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4ed029b4-cbf3-4090-b43d-8017c56a090d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:21.611 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[34ef5840-22a5-46bf-ab59-3b11b70fb933]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:21.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:21Z|00324|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:46:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:21.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3962: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:21.751 496978 INFO neutron.agent.dhcp.agent [None req-2bb73252-4462-489a-877c-da3307b2495f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:21.757 496978 INFO neutron.agent.dhcp.agent [None req-5fe76132-56bc-4c8b-8a99-ec72d1b7a7eb - - - - - -] DHCP configuration for ports {'8b58413e-cfab-47f1-92d6-bab5eddc4b5e'} is completed
Oct 13 15:46:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:21.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:21.901 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-703063e5e507588f12aa33d1faad86b9b7dbb68cc9ec5d96332c9acd2096132a-merged.mount: Deactivated successfully.
Oct 13 15:46:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37686b1b6f3156610d752a2b558d977dcd87d1d5c714612db28da10f20e857ee-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:22 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d4ed029b4\x2dcbf3\x2d4090\x2db43d\x2d8017c56a090d.mount: Deactivated successfully.
Oct 13 15:46:22 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:22.406 496978 INFO neutron.agent.linux.ip_lib [None req-e866be0b-cec5-40e2-9980-24991de34433 - - - - - -] Device tap38aa07c4-d8 cannot be used as it has no MAC address
Oct 13 15:46:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:22.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:22 standalone.localdomain kernel: device tap38aa07c4-d8 entered promiscuous mode
Oct 13 15:46:22 standalone.localdomain NetworkManager[5962]: <info>  [1760370382.4445] manager: (tap38aa07c4-d8): new Generic device (/org/freedesktop/NetworkManager/Devices/62)
Oct 13 15:46:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:22Z|00325|binding|INFO|Claiming lport 38aa07c4-d8d7-4359-a8be-624656cf0df9 for this chassis.
Oct 13 15:46:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:22Z|00326|binding|INFO|38aa07c4-d8d7-4359-a8be-624656cf0df9: Claiming unknown
Oct 13 15:46:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:22.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:22 standalone.localdomain systemd-udevd[541573]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:46:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:22.456 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-de5e5683-4c18-4bce-b1f9-21c50e66c871', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de5e5683-4c18-4bce-b1f9-21c50e66c871', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11d6ccc1374f45899525aa43b59ab0c1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7107f28f-33cc-4a18-8a7b-9c1fe49ff5bb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=38aa07c4-d8d7-4359-a8be-624656cf0df9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:22.457 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 38aa07c4-d8d7-4359-a8be-624656cf0df9 in datapath de5e5683-4c18-4bce-b1f9-21c50e66c871 bound to our chassis
Oct 13 15:46:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:22.459 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network de5e5683-4c18-4bce-b1f9-21c50e66c871 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:22.460 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[03ae32bb-71db-47cb-8c3b-271c07cc0922]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:22Z|00327|binding|INFO|Setting lport 38aa07c4-d8d7-4359-a8be-624656cf0df9 ovn-installed in OVS
Oct 13 15:46:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:22Z|00328|binding|INFO|Setting lport 38aa07c4-d8d7-4359-a8be-624656cf0df9 up in Southbound
Oct 13 15:46:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:22.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:22.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:22.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:46:22 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:22.694 496978 INFO neutron.agent.linux.ip_lib [None req-3b15b82d-0247-4f44-be29-76a493864f5d - - - - - -] Device tapbbddb96e-51 cannot be used as it has no MAC address
Oct 13 15:46:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:22.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:22 standalone.localdomain kernel: device tapbbddb96e-51 entered promiscuous mode
Oct 13 15:46:22 standalone.localdomain NetworkManager[5962]: <info>  [1760370382.7679] manager: (tapbbddb96e-51): new Generic device (/org/freedesktop/NetworkManager/Devices/63)
Oct 13 15:46:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:22Z|00329|binding|INFO|Claiming lport bbddb96e-5144-4040-8642-da228d6f8b1c for this chassis.
Oct 13 15:46:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:22Z|00330|binding|INFO|bbddb96e-5144-4040-8642-da228d6f8b1c: Claiming unknown
Oct 13 15:46:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:22.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:22.780 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::1/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-00cc37e7-ccb3-4a97-90ab-dd00ad85362a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00cc37e7-ccb3-4a97-90ab-dd00ad85362a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11d6ccc1374f45899525aa43b59ab0c1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd3f760f-6ab6-46df-865b-8bda4d7eeb59, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=bbddb96e-5144-4040-8642-da228d6f8b1c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:22.781 378821 INFO neutron.agent.ovn.metadata.agent [-] Port bbddb96e-5144-4040-8642-da228d6f8b1c in datapath 00cc37e7-ccb3-4a97-90ab-dd00ad85362a bound to our chassis
Oct 13 15:46:22 standalone.localdomain ceph-mon[29756]: pgmap v3962: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:22.782 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 00cc37e7-ccb3-4a97-90ab-dd00ad85362a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:22.782 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3026c2-9a85-4418-b7fc-5f268906dd4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:22Z|00331|binding|INFO|Setting lport bbddb96e-5144-4040-8642-da228d6f8b1c ovn-installed in OVS
Oct 13 15:46:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:22Z|00332|binding|INFO|Setting lport bbddb96e-5144-4040-8642-da228d6f8b1c up in Southbound
Oct 13 15:46:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:22.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:22.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:22.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:22.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:46:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:23.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:23 standalone.localdomain podman[541620]: 2025-10-13 15:46:23.064819027 +0000 UTC m=+0.084412854 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:46:23 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:23.075 2 INFO neutron.agent.securitygroups_rpc [None req-4ca8910c-33f8-403f-9f57-839b7fae53ad 1e7122234f844ff48633888d2464f9a1 2a90758ca570420084aa8bec8bdfee73 - - default default] Security group member updated ['94c6b164-ff9d-4bd1-b0ec-d17a36adf8d1']
Oct 13 15:46:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:23.080 496978 INFO neutron.agent.linux.ip_lib [None req-86766d47-6585-443e-aa7f-76745d8f8663 - - - - - -] Device tapb2b644e5-54 cannot be used as it has no MAC address
Oct 13 15:46:23 standalone.localdomain podman[541620]: 2025-10-13 15:46:23.097939021 +0000 UTC m=+0.117532878 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:46:23 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:46:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:23.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:23 standalone.localdomain kernel: device tapb2b644e5-54 entered promiscuous mode
Oct 13 15:46:23 standalone.localdomain NetworkManager[5962]: <info>  [1760370383.1189] manager: (tapb2b644e5-54): new Generic device (/org/freedesktop/NetworkManager/Devices/64)
Oct 13 15:46:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:23.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:23Z|00333|binding|INFO|Claiming lport b2b644e5-541f-482f-9d12-04ceb278cc96 for this chassis.
Oct 13 15:46:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:23Z|00334|binding|INFO|b2b644e5-541f-482f-9d12-04ceb278cc96: Claiming unknown
Oct 13 15:46:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:23.138 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.242/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-68d6abcf-919b-4bb7-94e1-d848f2626899', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68d6abcf-919b-4bb7-94e1-d848f2626899', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84bb2532-77ab-42be-8da1-f4d14aec11cb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=b2b644e5-541f-482f-9d12-04ceb278cc96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:23.141 378821 INFO neutron.agent.ovn.metadata.agent [-] Port b2b644e5-541f-482f-9d12-04ceb278cc96 in datapath 68d6abcf-919b-4bb7-94e1-d848f2626899 bound to our chassis
Oct 13 15:46:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:23.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:23.144 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 68d6abcf-919b-4bb7-94e1-d848f2626899 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:23Z|00335|binding|INFO|Setting lport b2b644e5-541f-482f-9d12-04ceb278cc96 ovn-installed in OVS
Oct 13 15:46:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:23Z|00336|binding|INFO|Setting lport b2b644e5-541f-482f-9d12-04ceb278cc96 up in Southbound
Oct 13 15:46:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:23.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:23.146 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[7eba5378-b0ca-4d85-8167-c9485910e774]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:23.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:23.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:46:23
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['manila_metadata', 'vms', 'manila_data', 'images', 'volumes', 'backups', '.mgr']
Oct 13 15:46:23 standalone.localdomain podman[541693]: 2025-10-13 15:46:23.340255289 +0000 UTC m=+0.072149611 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:46:23 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:46:23 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:46:23 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:46:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40844 DF PROTO=TCP SPT=58820 DPT=9102 SEQ=483385854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD1385F0000000001030307) 
Oct 13 15:46:23 standalone.localdomain podman[541752]: 
Oct 13 15:46:23 standalone.localdomain podman[541752]: 2025-10-13 15:46:23.591801336 +0000 UTC m=+0.096472130 container create ae579166ff5f1cd3e86f300bd3e450ba8b9e14a1f0c0d120110b1d432ebbb39b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de5e5683-4c18-4bce-b1f9-21c50e66c871, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:23 standalone.localdomain podman[541752]: 2025-10-13 15:46:23.534403606 +0000 UTC m=+0.039074430 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:23 standalone.localdomain systemd[1]: Started libpod-conmon-ae579166ff5f1cd3e86f300bd3e450ba8b9e14a1f0c0d120110b1d432ebbb39b.scope.
Oct 13 15:46:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4729d0dd1beffe7d8a81cd5d14a9bebc180cb6f79164af6fcbbf93aa24365c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:23 standalone.localdomain podman[541752]: 2025-10-13 15:46:23.66183121 +0000 UTC m=+0.166502004 container init ae579166ff5f1cd3e86f300bd3e450ba8b9e14a1f0c0d120110b1d432ebbb39b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de5e5683-4c18-4bce-b1f9-21c50e66c871, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:46:23 standalone.localdomain podman[541752]: 2025-10-13 15:46:23.66760483 +0000 UTC m=+0.172275624 container start ae579166ff5f1cd3e86f300bd3e450ba8b9e14a1f0c0d120110b1d432ebbb39b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de5e5683-4c18-4bce-b1f9-21c50e66c871, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:23 standalone.localdomain dnsmasq[541782]: started, version 2.85 cachesize 150
Oct 13 15:46:23 standalone.localdomain dnsmasq[541782]: DNS service limited to local subnets
Oct 13 15:46:23 standalone.localdomain dnsmasq[541782]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:23 standalone.localdomain dnsmasq[541782]: warning: no upstream servers configured
Oct 13 15:46:23 standalone.localdomain dnsmasq-dhcp[541782]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:23 standalone.localdomain dnsmasq[541782]: read /var/lib/neutron/dhcp/de5e5683-4c18-4bce-b1f9-21c50e66c871/addn_hosts - 0 addresses
Oct 13 15:46:23 standalone.localdomain dnsmasq-dhcp[541782]: read /var/lib/neutron/dhcp/de5e5683-4c18-4bce-b1f9-21c50e66c871/host
Oct 13 15:46:23 standalone.localdomain dnsmasq-dhcp[541782]: read /var/lib/neutron/dhcp/de5e5683-4c18-4bce-b1f9-21c50e66c871/opts
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3963: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:46:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:46:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:23.820 496978 INFO neutron.agent.linux.ip_lib [None req-b86b236b-2766-4d79-89ee-ee356a8cf6d9 - - - - - -] Device tap25f5d330-75 cannot be used as it has no MAC address
Oct 13 15:46:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:23.827 496978 INFO neutron.agent.dhcp.agent [None req-2067b44b-2594-430c-aae7-3103bb61f59b - - - - - -] DHCP configuration for ports {'4dfe0731-0864-4c5f-9827-e1c66ddd9e96'} is completed
Oct 13 15:46:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:23.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:23 standalone.localdomain kernel: device tap25f5d330-75 entered promiscuous mode
Oct 13 15:46:23 standalone.localdomain NetworkManager[5962]: <info>  [1760370383.9113] manager: (tap25f5d330-75): new Generic device (/org/freedesktop/NetworkManager/Devices/65)
Oct 13 15:46:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:23Z|00337|binding|INFO|Claiming lport 25f5d330-75c5-40ff-890c-c5a1ff84e19a for this chassis.
Oct 13 15:46:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:23Z|00338|binding|INFO|25f5d330-75c5-40ff-890c-c5a1ff84e19a: Claiming unknown
Oct 13 15:46:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:23.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:23.925 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-04ff85e7-2162-40dd-a6a4-d0f61bd13dfe', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04ff85e7-2162-40dd-a6a4-d0f61bd13dfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce51b6cd7de42daa5659d6797c11a37', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9a8329-7080-4d82-ad98-5f872785bf8c, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=25f5d330-75c5-40ff-890c-c5a1ff84e19a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:23.927 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 25f5d330-75c5-40ff-890c-c5a1ff84e19a in datapath 04ff85e7-2162-40dd-a6a4-d0f61bd13dfe bound to our chassis
Oct 13 15:46:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:23.928 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 04ff85e7-2162-40dd-a6a4-d0f61bd13dfe or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:23.929 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[c11a39f7-6030-4d57-9d75-cb044cb2a181]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:23Z|00339|binding|INFO|Setting lport 25f5d330-75c5-40ff-890c-c5a1ff84e19a ovn-installed in OVS
Oct 13 15:46:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:23Z|00340|binding|INFO|Setting lport 25f5d330-75c5-40ff-890c-c5a1ff84e19a up in Southbound
Oct 13 15:46:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:23.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:23 standalone.localdomain podman[541820]: 
Oct 13 15:46:23 standalone.localdomain podman[541820]: 2025-10-13 15:46:23.976686492 +0000 UTC m=+0.155011327 container create 62293be1e972759dea57d822979235656ec918aa07c57c50207867fce5d02184 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-00cc37e7-ccb3-4a97-90ab-dd00ad85362a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:23.979 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 91fe4110-946e-4f13-86b9-475585284d88 with type ""
Oct 13 15:46:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:23Z|00341|binding|INFO|Removing iface tapbbddb96e-51 ovn-installed in OVS
Oct 13 15:46:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:23Z|00342|binding|INFO|Removing lport bbddb96e-5144-4040-8642-da228d6f8b1c ovn-installed in OVS
Oct 13 15:46:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:23.980 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::1/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-00cc37e7-ccb3-4a97-90ab-dd00ad85362a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00cc37e7-ccb3-4a97-90ab-dd00ad85362a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11d6ccc1374f45899525aa43b59ab0c1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd3f760f-6ab6-46df-865b-8bda4d7eeb59, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=bbddb96e-5144-4040-8642-da228d6f8b1c) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:23.982 378821 INFO neutron.agent.ovn.metadata.agent [-] Port bbddb96e-5144-4040-8642-da228d6f8b1c in datapath 00cc37e7-ccb3-4a97-90ab-dd00ad85362a unbound from our chassis
Oct 13 15:46:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:23.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:23.984 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 00cc37e7-ccb3-4a97-90ab-dd00ad85362a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:23.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:23.984 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[6140306e-fd43-446e-af63-c9eec60123b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:24 standalone.localdomain podman[541820]: 2025-10-13 15:46:23.907788603 +0000 UTC m=+0.086113428 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:24 standalone.localdomain systemd[1]: Started libpod-conmon-62293be1e972759dea57d822979235656ec918aa07c57c50207867fce5d02184.scope.
Oct 13 15:46:24 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceaeed89fe5d7379e913daea347ce84fb6d44f109406d2f1670f6922c368617c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:24 standalone.localdomain dnsmasq[541782]: exiting on receipt of SIGTERM
Oct 13 15:46:24 standalone.localdomain podman[541868]: 2025-10-13 15:46:24.038395027 +0000 UTC m=+0.068464097 container kill ae579166ff5f1cd3e86f300bd3e450ba8b9e14a1f0c0d120110b1d432ebbb39b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de5e5683-4c18-4bce-b1f9-21c50e66c871, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:46:24 standalone.localdomain systemd[1]: libpod-ae579166ff5f1cd3e86f300bd3e450ba8b9e14a1f0c0d120110b1d432ebbb39b.scope: Deactivated successfully.
Oct 13 15:46:24 standalone.localdomain dnsmasq[541372]: exiting on receipt of SIGTERM
Oct 13 15:46:24 standalone.localdomain podman[541876]: 2025-10-13 15:46:24.051276918 +0000 UTC m=+0.060807028 container kill efe84cbb7cdcb02dbd240765beccea1b05f476999b0656e2d0685e983bea72de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:46:24 standalone.localdomain systemd[1]: libpod-efe84cbb7cdcb02dbd240765beccea1b05f476999b0656e2d0685e983bea72de.scope: Deactivated successfully.
Oct 13 15:46:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:24.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:24 standalone.localdomain podman[541820]: 2025-10-13 15:46:24.136656192 +0000 UTC m=+0.314981047 container init 62293be1e972759dea57d822979235656ec918aa07c57c50207867fce5d02184 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-00cc37e7-ccb3-4a97-90ab-dd00ad85362a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:24 standalone.localdomain dnsmasq[541970]: started, version 2.85 cachesize 150
Oct 13 15:46:24 standalone.localdomain dnsmasq[541970]: DNS service limited to local subnets
Oct 13 15:46:24 standalone.localdomain dnsmasq[541970]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:24 standalone.localdomain dnsmasq[541970]: warning: no upstream servers configured
Oct 13 15:46:24 standalone.localdomain dnsmasq-dhcp[541970]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Oct 13 15:46:24 standalone.localdomain dnsmasq[541970]: read /var/lib/neutron/dhcp/00cc37e7-ccb3-4a97-90ab-dd00ad85362a/addn_hosts - 0 addresses
Oct 13 15:46:24 standalone.localdomain dnsmasq-dhcp[541970]: read /var/lib/neutron/dhcp/00cc37e7-ccb3-4a97-90ab-dd00ad85362a/host
Oct 13 15:46:24 standalone.localdomain dnsmasq-dhcp[541970]: read /var/lib/neutron/dhcp/00cc37e7-ccb3-4a97-90ab-dd00ad85362a/opts
Oct 13 15:46:24 standalone.localdomain podman[541917]: 2025-10-13 15:46:24.16190634 +0000 UTC m=+0.104577564 container died ae579166ff5f1cd3e86f300bd3e450ba8b9e14a1f0c0d120110b1d432ebbb39b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de5e5683-4c18-4bce-b1f9-21c50e66c871, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 13 15:46:24 standalone.localdomain podman[541917]: 2025-10-13 15:46:24.189744098 +0000 UTC m=+0.132415332 container cleanup ae579166ff5f1cd3e86f300bd3e450ba8b9e14a1f0c0d120110b1d432ebbb39b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de5e5683-4c18-4bce-b1f9-21c50e66c871, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:46:24 standalone.localdomain systemd[1]: libpod-conmon-ae579166ff5f1cd3e86f300bd3e450ba8b9e14a1f0c0d120110b1d432ebbb39b.scope: Deactivated successfully.
Oct 13 15:46:24 standalone.localdomain podman[541820]: 2025-10-13 15:46:24.206655556 +0000 UTC m=+0.384980421 container start 62293be1e972759dea57d822979235656ec918aa07c57c50207867fce5d02184 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-00cc37e7-ccb3-4a97-90ab-dd00ad85362a, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:24 standalone.localdomain podman[541928]: 2025-10-13 15:46:24.214132048 +0000 UTC m=+0.140206214 container died efe84cbb7cdcb02dbd240765beccea1b05f476999b0656e2d0685e983bea72de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:46:24 standalone.localdomain podman[541928]: 2025-10-13 15:46:24.255740186 +0000 UTC m=+0.181814332 container remove efe84cbb7cdcb02dbd240765beccea1b05f476999b0656e2d0685e983bea72de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:46:24 standalone.localdomain podman[541918]: 2025-10-13 15:46:24.298705267 +0000 UTC m=+0.228901841 container remove ae579166ff5f1cd3e86f300bd3e450ba8b9e14a1f0c0d120110b1d432ebbb39b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de5e5683-4c18-4bce-b1f9-21c50e66c871, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:46:24 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:24Z|00343|binding|INFO|Releasing lport 38aa07c4-d8d7-4359-a8be-624656cf0df9 from this chassis (sb_readonly=0)
Oct 13 15:46:24 standalone.localdomain kernel: device tap38aa07c4-d8 left promiscuous mode
Oct 13 15:46:24 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:24Z|00344|binding|INFO|Setting lport 38aa07c4-d8d7-4359-a8be-624656cf0df9 down in Southbound
Oct 13 15:46:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:24.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:24 standalone.localdomain dnsmasq[541970]: exiting on receipt of SIGTERM
Oct 13 15:46:24 standalone.localdomain systemd[1]: libpod-62293be1e972759dea57d822979235656ec918aa07c57c50207867fce5d02184.scope: Deactivated successfully.
Oct 13 15:46:24 standalone.localdomain podman[541986]: 2025-10-13 15:46:24.32156182 +0000 UTC m=+0.086015244 container died 62293be1e972759dea57d822979235656ec918aa07c57c50207867fce5d02184 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-00cc37e7-ccb3-4a97-90ab-dd00ad85362a, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:24.323 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-de5e5683-4c18-4bce-b1f9-21c50e66c871', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de5e5683-4c18-4bce-b1f9-21c50e66c871', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '11d6ccc1374f45899525aa43b59ab0c1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7107f28f-33cc-4a18-8a7b-9c1fe49ff5bb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=38aa07c4-d8d7-4359-a8be-624656cf0df9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:24.327 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 38aa07c4-d8d7-4359-a8be-624656cf0df9 in datapath de5e5683-4c18-4bce-b1f9-21c50e66c871 unbound from our chassis
Oct 13 15:46:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:24.329 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network de5e5683-4c18-4bce-b1f9-21c50e66c871 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:24.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:24.330 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[c3178281-b1aa-443e-8aed-82604e461fa1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0f4729d0dd1beffe7d8a81cd5d14a9bebc180cb6f79164af6fcbbf93aa24365c-merged.mount: Deactivated successfully.
Oct 13 15:46:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae579166ff5f1cd3e86f300bd3e450ba8b9e14a1f0c0d120110b1d432ebbb39b-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-fc12c97e3091e121242eafbd879d1993879c465232f8f73959304a33b1c9b948-merged.mount: Deactivated successfully.
Oct 13 15:46:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-efe84cbb7cdcb02dbd240765beccea1b05f476999b0656e2d0685e983bea72de-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:24 standalone.localdomain systemd[1]: tmp-crun.z1nRq5.mount: Deactivated successfully.
Oct 13 15:46:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-62293be1e972759dea57d822979235656ec918aa07c57c50207867fce5d02184-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ceaeed89fe5d7379e913daea347ce84fb6d44f109406d2f1670f6922c368617c-merged.mount: Deactivated successfully.
Oct 13 15:46:24 standalone.localdomain podman[541986]: 2025-10-13 15:46:24.381930243 +0000 UTC m=+0.146383647 container cleanup 62293be1e972759dea57d822979235656ec918aa07c57c50207867fce5d02184 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-00cc37e7-ccb3-4a97-90ab-dd00ad85362a, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:24 standalone.localdomain podman[542010]: 2025-10-13 15:46:24.40332424 +0000 UTC m=+0.071253014 container cleanup 62293be1e972759dea57d822979235656ec918aa07c57c50207867fce5d02184 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-00cc37e7-ccb3-4a97-90ab-dd00ad85362a, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:46:24 standalone.localdomain systemd[1]: libpod-conmon-62293be1e972759dea57d822979235656ec918aa07c57c50207867fce5d02184.scope: Deactivated successfully.
Oct 13 15:46:24 standalone.localdomain systemd[1]: libpod-conmon-efe84cbb7cdcb02dbd240765beccea1b05f476999b0656e2d0685e983bea72de.scope: Deactivated successfully.
Oct 13 15:46:24 standalone.localdomain podman[542026]: 2025-10-13 15:46:24.466637405 +0000 UTC m=+0.067813077 container remove 62293be1e972759dea57d822979235656ec918aa07c57c50207867fce5d02184 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-00cc37e7-ccb3-4a97-90ab-dd00ad85362a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:46:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:24.517 496978 INFO neutron.agent.dhcp.agent [None req-221a3815-1087-41e4-8a9f-ba83e22404ee - - - - - -] DHCP configuration for ports {'d10ec140-39d0-4933-a4d9-bac388a1fc99'} is completed
Oct 13 15:46:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:24.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:24 standalone.localdomain kernel: device tapbbddb96e-51 left promiscuous mode
Oct 13 15:46:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:24.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:24 standalone.localdomain podman[542049]: 
Oct 13 15:46:24 standalone.localdomain podman[542049]: 2025-10-13 15:46:24.588182137 +0000 UTC m=+0.103802799 container create 1b15db3f4f347fa3bc2265a235abb7e978902f6669ef63218213acf0a9ebb628 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-68d6abcf-919b-4bb7-94e1-d848f2626899, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 13 15:46:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40845 DF PROTO=TCP SPT=58820 DPT=9102 SEQ=483385854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD13C760000000001030307) 
Oct 13 15:46:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:24.603 496978 INFO neutron.agent.dhcp.agent [None req-c4565a60-8e49-4dca-a885-05ee97230864 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:24 standalone.localdomain podman[542049]: 2025-10-13 15:46:24.530883089 +0000 UTC m=+0.046503771 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:24 standalone.localdomain systemd[1]: Started libpod-conmon-1b15db3f4f347fa3bc2265a235abb7e978902f6669ef63218213acf0a9ebb628.scope.
Oct 13 15:46:24 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5e71b6d35c72e5434c6ec8fc231ab9f19b73d06665d6c0487ef7836b1642d7b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:24 standalone.localdomain podman[542049]: 2025-10-13 15:46:24.6610742 +0000 UTC m=+0.176694822 container init 1b15db3f4f347fa3bc2265a235abb7e978902f6669ef63218213acf0a9ebb628 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-68d6abcf-919b-4bb7-94e1-d848f2626899, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:46:24 standalone.localdomain podman[542049]: 2025-10-13 15:46:24.670599827 +0000 UTC m=+0.186220459 container start 1b15db3f4f347fa3bc2265a235abb7e978902f6669ef63218213acf0a9ebb628 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-68d6abcf-919b-4bb7-94e1-d848f2626899, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:24 standalone.localdomain dnsmasq[542070]: started, version 2.85 cachesize 150
Oct 13 15:46:24 standalone.localdomain dnsmasq[542070]: DNS service limited to local subnets
Oct 13 15:46:24 standalone.localdomain dnsmasq[542070]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:24 standalone.localdomain dnsmasq[542070]: warning: no upstream servers configured
Oct 13 15:46:24 standalone.localdomain dnsmasq-dhcp[542070]: DHCP, static leases only on 10.100.255.240, lease time 1d
Oct 13 15:46:24 standalone.localdomain dnsmasq[542070]: read /var/lib/neutron/dhcp/68d6abcf-919b-4bb7-94e1-d848f2626899/addn_hosts - 0 addresses
Oct 13 15:46:24 standalone.localdomain dnsmasq-dhcp[542070]: read /var/lib/neutron/dhcp/68d6abcf-919b-4bb7-94e1-d848f2626899/host
Oct 13 15:46:24 standalone.localdomain dnsmasq-dhcp[542070]: read /var/lib/neutron/dhcp/68d6abcf-919b-4bb7-94e1-d848f2626899/opts
Oct 13 15:46:24 standalone.localdomain ceph-mon[29756]: pgmap v3963: 177 pgs: 177 active+clean; 359 MiB data, 306 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:24.819 496978 INFO neutron.agent.dhcp.agent [None req-b38c4554-37a2-4f67-a986-290908319851 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:24.854 496978 INFO neutron.agent.dhcp.agent [None req-644ea3d3-d5c6-4627-940e-b16bf39bee30 - - - - - -] DHCP configuration for ports {'e4f7cd0b-696d-42e7-95e2-ae23c358e7fb'} is completed
Oct 13 15:46:24 standalone.localdomain podman[542094]: 
Oct 13 15:46:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:24.957 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:24 standalone.localdomain podman[542094]: 2025-10-13 15:46:24.969593814 +0000 UTC m=+0.105399049 container create aa2ddc0178c2e1ceca9052f68db538ed69312285079798c7f73e98e5c7876f3d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04ff85e7-2162-40dd-a6a4-d0f61bd13dfe, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:46:25 standalone.localdomain systemd[1]: Started libpod-conmon-aa2ddc0178c2e1ceca9052f68db538ed69312285079798c7f73e98e5c7876f3d.scope.
Oct 13 15:46:25 standalone.localdomain podman[542094]: 2025-10-13 15:46:24.917159028 +0000 UTC m=+0.052964323 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:25 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:25 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f7338c75a7e1180da5b6a47c79f7d31d1955960200582c40ec9a7bcafde5f14/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:25 standalone.localdomain podman[542094]: 2025-10-13 15:46:25.043246451 +0000 UTC m=+0.179051686 container init aa2ddc0178c2e1ceca9052f68db538ed69312285079798c7f73e98e5c7876f3d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04ff85e7-2162-40dd-a6a4-d0f61bd13dfe, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:46:25 standalone.localdomain podman[542094]: 2025-10-13 15:46:25.053270104 +0000 UTC m=+0.189075339 container start aa2ddc0178c2e1ceca9052f68db538ed69312285079798c7f73e98e5c7876f3d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04ff85e7-2162-40dd-a6a4-d0f61bd13dfe, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:46:25 standalone.localdomain dnsmasq[542113]: started, version 2.85 cachesize 150
Oct 13 15:46:25 standalone.localdomain dnsmasq[542113]: DNS service limited to local subnets
Oct 13 15:46:25 standalone.localdomain dnsmasq[542113]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:25 standalone.localdomain dnsmasq[542113]: warning: no upstream servers configured
Oct 13 15:46:25 standalone.localdomain dnsmasq-dhcp[542113]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:25 standalone.localdomain dnsmasq[542113]: read /var/lib/neutron/dhcp/04ff85e7-2162-40dd-a6a4-d0f61bd13dfe/addn_hosts - 0 addresses
Oct 13 15:46:25 standalone.localdomain dnsmasq-dhcp[542113]: read /var/lib/neutron/dhcp/04ff85e7-2162-40dd-a6a4-d0f61bd13dfe/host
Oct 13 15:46:25 standalone.localdomain dnsmasq-dhcp[542113]: read /var/lib/neutron/dhcp/04ff85e7-2162-40dd-a6a4-d0f61bd13dfe/opts
Oct 13 15:46:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:25.120 496978 INFO neutron.agent.dhcp.agent [None req-b86b236b-2766-4d79-89ee-ee356a8cf6d9 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:23Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891296a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889111f40>], id=1063e4be-7674-401b-8130-2bd412acc5b9, ip_allocation=immediate, mac_address=fa:16:3e:67:a0:d2, name=tempest-PortsIpV6TestJSON-335168661, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:21Z, description=, dns_domain=, id=04ff85e7-2162-40dd-a6a4-d0f61bd13dfe, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-1878222392, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53099, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1709, status=ACTIVE, subnets=['443f92ed-e270-4d52-8d3b-6bc59b90ed57'], tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:46:22Z, vlan_transparent=None, network_id=04ff85e7-2162-40dd-a6a4-d0f61bd13dfe, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1732, status=DOWN, tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:46:23Z on network 04ff85e7-2162-40dd-a6a4-d0f61bd13dfe
Oct 13 15:46:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:25.231 496978 INFO neutron.agent.dhcp.agent [None req-c51647b8-bf2d-418a-89c0-b9c44fa752be - - - - - -] DHCP configuration for ports {'172890a7-81ee-48b2-ac9d-b1908bc74a03'} is completed
Oct 13 15:46:25 standalone.localdomain dnsmasq[542113]: read /var/lib/neutron/dhcp/04ff85e7-2162-40dd-a6a4-d0f61bd13dfe/addn_hosts - 1 addresses
Oct 13 15:46:25 standalone.localdomain podman[542132]: 2025-10-13 15:46:25.318602271 +0000 UTC m=+0.058758834 container kill aa2ddc0178c2e1ceca9052f68db538ed69312285079798c7f73e98e5c7876f3d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04ff85e7-2162-40dd-a6a4-d0f61bd13dfe, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:46:25 standalone.localdomain dnsmasq-dhcp[542113]: read /var/lib/neutron/dhcp/04ff85e7-2162-40dd-a6a4-d0f61bd13dfe/host
Oct 13 15:46:25 standalone.localdomain dnsmasq-dhcp[542113]: read /var/lib/neutron/dhcp/04ff85e7-2162-40dd-a6a4-d0f61bd13dfe/opts
Oct 13 15:46:25 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d00cc37e7\x2dccb3\x2d4a97\x2d90ab\x2ddd00ad85362a.mount: Deactivated successfully.
Oct 13 15:46:25 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dde5e5683\x2d4c18\x2d4bce\x2db1f9\x2d21c50e66c871.mount: Deactivated successfully.
Oct 13 15:46:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:25.621 496978 INFO neutron.agent.dhcp.agent [None req-485cbd1a-5860-411d-a86b-abeb50e15752 - - - - - -] DHCP configuration for ports {'1063e4be-7674-401b-8130-2bd412acc5b9'} is completed
Oct 13 15:46:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3964: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:46:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:46:25 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:25Z|00345|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:46:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:25.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:25 standalone.localdomain podman[542167]: 2025-10-13 15:46:25.844132104 +0000 UTC m=+0.101151266 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Oct 13 15:46:25 standalone.localdomain podman[542167]: 2025-10-13 15:46:25.873196741 +0000 UTC m=+0.130215873 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 13 15:46:25 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:46:25 standalone.localdomain dnsmasq[542113]: exiting on receipt of SIGTERM
Oct 13 15:46:25 standalone.localdomain podman[542182]: 2025-10-13 15:46:25.939201339 +0000 UTC m=+0.124387420 container kill aa2ddc0178c2e1ceca9052f68db538ed69312285079798c7f73e98e5c7876f3d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04ff85e7-2162-40dd-a6a4-d0f61bd13dfe, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:25 standalone.localdomain systemd[1]: libpod-aa2ddc0178c2e1ceca9052f68db538ed69312285079798c7f73e98e5c7876f3d.scope: Deactivated successfully.
Oct 13 15:46:26 standalone.localdomain podman[542203]: 2025-10-13 15:46:26.001558574 +0000 UTC m=+0.051272630 container died aa2ddc0178c2e1ceca9052f68db538ed69312285079798c7f73e98e5c7876f3d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04ff85e7-2162-40dd-a6a4-d0f61bd13dfe, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:46:26 standalone.localdomain podman[542203]: 2025-10-13 15:46:26.037847726 +0000 UTC m=+0.087561752 container cleanup aa2ddc0178c2e1ceca9052f68db538ed69312285079798c7f73e98e5c7876f3d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04ff85e7-2162-40dd-a6a4-d0f61bd13dfe, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:26 standalone.localdomain systemd[1]: libpod-conmon-aa2ddc0178c2e1ceca9052f68db538ed69312285079798c7f73e98e5c7876f3d.scope: Deactivated successfully.
Oct 13 15:46:26 standalone.localdomain podman[542210]: 2025-10-13 15:46:26.09534979 +0000 UTC m=+0.132471953 container remove aa2ddc0178c2e1ceca9052f68db538ed69312285079798c7f73e98e5c7876f3d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04ff85e7-2162-40dd-a6a4-d0f61bd13dfe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:46:26 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:26Z|00346|binding|INFO|Releasing lport 25f5d330-75c5-40ff-890c-c5a1ff84e19a from this chassis (sb_readonly=0)
Oct 13 15:46:26 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:26Z|00347|binding|INFO|Setting lport 25f5d330-75c5-40ff-890c-c5a1ff84e19a down in Southbound
Oct 13 15:46:26 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:26.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:26 standalone.localdomain kernel: device tap25f5d330-75 left promiscuous mode
Oct 13 15:46:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:26.116 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-04ff85e7-2162-40dd-a6a4-d0f61bd13dfe', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04ff85e7-2162-40dd-a6a4-d0f61bd13dfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce51b6cd7de42daa5659d6797c11a37', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9a8329-7080-4d82-ad98-5f872785bf8c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=25f5d330-75c5-40ff-890c-c5a1ff84e19a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:26.117 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 25f5d330-75c5-40ff-890c-c5a1ff84e19a in datapath 04ff85e7-2162-40dd-a6a4-d0f61bd13dfe unbound from our chassis
Oct 13 15:46:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:26.119 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 04ff85e7-2162-40dd-a6a4-d0f61bd13dfe or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:26.120 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[b54a0207-ea8f-4030-bb55-1b4b6c411409]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:26 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:26.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0f7338c75a7e1180da5b6a47c79f7d31d1955960200582c40ec9a7bcafde5f14-merged.mount: Deactivated successfully.
Oct 13 15:46:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aa2ddc0178c2e1ceca9052f68db538ed69312285079798c7f73e98e5c7876f3d-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:26.447 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:5f:f4 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=486905a0-8fbf-4847-a0a3-d828ace412ce, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=20b13e30-b2d6-40a4-9537-84abbaa32a14) old=Port_Binding(mac=['fa:16:3e:b5:5f:f4 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:26.450 378821 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 20b13e30-b2d6-40a4-9537-84abbaa32a14 in datapath 8669e1a6-2c0a-4fef-817e-c674b6e65b80 updated
Oct 13 15:46:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:26.454 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2a9e7c65-0557-4dfe-b10c-718cd1a7b6b6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:46:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:26.454 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8669e1a6-2c0a-4fef-817e-c674b6e65b80, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:46:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:26.455 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[928ce59f-257a-4291-a1a3-d6ec9349f204]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:26 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d04ff85e7\x2d2162\x2d40dd\x2da6a4\x2dd0f61bd13dfe.mount: Deactivated successfully.
Oct 13 15:46:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:26.520 496978 INFO neutron.agent.dhcp.agent [None req-08d4611e-01d7-4738-a18f-d1ee22822e0d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:26.522 496978 INFO neutron.agent.dhcp.agent [None req-08d4611e-01d7-4738-a18f-d1ee22822e0d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40846 DF PROTO=TCP SPT=58820 DPT=9102 SEQ=483385854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD144760000000001030307) 
Oct 13 15:46:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:26.752 496978 INFO neutron.agent.linux.ip_lib [None req-e3be87cd-f21d-4988-a35c-a21a409195a6 - - - - - -] Device tapae6e1459-ef cannot be used as it has no MAC address
Oct 13 15:46:26 standalone.localdomain ceph-mon[29756]: pgmap v3964: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:46:26 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:26.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:26 standalone.localdomain kernel: device tapae6e1459-ef entered promiscuous mode
Oct 13 15:46:26 standalone.localdomain NetworkManager[5962]: <info>  [1760370386.8270] manager: (tapae6e1459-ef): new Generic device (/org/freedesktop/NetworkManager/Devices/66)
Oct 13 15:46:26 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:26Z|00348|binding|INFO|Claiming lport ae6e1459-ef41-49b9-bb94-17a908eaa937 for this chassis.
Oct 13 15:46:26 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:26Z|00349|binding|INFO|ae6e1459-ef41-49b9-bb94-17a908eaa937: Claiming unknown
Oct 13 15:46:26 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:26.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:26.837 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-1425a739-9d65-4c05-9700-1ab66a63077c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1425a739-9d65-4c05-9700-1ab66a63077c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd3c1410d6c264516966bcf7dffd0e4d5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4736446f-bcd8-4b47-83cd-20705e82abe7, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=ae6e1459-ef41-49b9-bb94-17a908eaa937) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:26.839 378821 INFO neutron.agent.ovn.metadata.agent [-] Port ae6e1459-ef41-49b9-bb94-17a908eaa937 in datapath 1425a739-9d65-4c05-9700-1ab66a63077c bound to our chassis
Oct 13 15:46:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:26.841 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1425a739-9d65-4c05-9700-1ab66a63077c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:26.842 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[70d3c207-fa38-4139-8e1b-9280194dac1d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:26 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:26Z|00350|binding|INFO|Setting lport ae6e1459-ef41-49b9-bb94-17a908eaa937 ovn-installed in OVS
Oct 13 15:46:26 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:26Z|00351|binding|INFO|Setting lport ae6e1459-ef41-49b9-bb94-17a908eaa937 up in Southbound
Oct 13 15:46:26 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:26.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:26 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:26.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:26 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:26.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:26 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:26.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:27 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:27.014 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:27 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:27.094 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:27 standalone.localdomain podman[542316]: 
Oct 13 15:46:27 standalone.localdomain podman[542316]: 2025-10-13 15:46:27.491294264 +0000 UTC m=+0.091544347 container create beb3a42ebc84cbcc3250e92baefc96b9929bf0a7b99ddd27a412e9489c93ec2d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:46:27 standalone.localdomain systemd[1]: Started libpod-conmon-beb3a42ebc84cbcc3250e92baefc96b9929bf0a7b99ddd27a412e9489c93ec2d.scope.
Oct 13 15:46:27 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:27 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f116139a694d530e5ae5513791b772978bd635aefc016a6ccf54babe430ee02/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:27 standalone.localdomain podman[542316]: 2025-10-13 15:46:27.452838555 +0000 UTC m=+0.053088658 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:27 standalone.localdomain podman[542316]: 2025-10-13 15:46:27.558387388 +0000 UTC m=+0.158637481 container init beb3a42ebc84cbcc3250e92baefc96b9929bf0a7b99ddd27a412e9489c93ec2d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:46:27 standalone.localdomain podman[542316]: 2025-10-13 15:46:27.573394805 +0000 UTC m=+0.173644888 container start beb3a42ebc84cbcc3250e92baefc96b9929bf0a7b99ddd27a412e9489c93ec2d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:46:27 standalone.localdomain dnsmasq[542342]: started, version 2.85 cachesize 150
Oct 13 15:46:27 standalone.localdomain dnsmasq[542342]: DNS service limited to local subnets
Oct 13 15:46:27 standalone.localdomain dnsmasq[542342]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:27 standalone.localdomain dnsmasq[542342]: warning: no upstream servers configured
Oct 13 15:46:27 standalone.localdomain dnsmasq-dhcp[542342]: DHCP, static leases only on 10.100.0.16, lease time 1d
Oct 13 15:46:27 standalone.localdomain dnsmasq-dhcp[542342]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:46:27 standalone.localdomain dnsmasq-dhcp[542342]: DHCP, static leases only on 10.100.0.32, lease time 1d
Oct 13 15:46:27 standalone.localdomain dnsmasq[542342]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/addn_hosts - 1 addresses
Oct 13 15:46:27 standalone.localdomain dnsmasq-dhcp[542342]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/host
Oct 13 15:46:27 standalone.localdomain dnsmasq-dhcp[542342]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/opts
Oct 13 15:46:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:46:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3965: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:46:27 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:27.790 496978 INFO neutron.agent.dhcp.agent [None req-eb579302-dd35-4bbb-9b08-dfa6b58f7628 - - - - - -] DHCP configuration for ports {'8b58413e-cfab-47f1-92d6-bab5eddc4b5e', '20b13e30-b2d6-40a4-9537-84abbaa32a14', '36e7e924-139d-4d57-a01f-3a7052149280', '2513fba5-3362-4add-8a7c-752b1114c953'} is completed
Oct 13 15:46:27 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:27.923 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:27 standalone.localdomain podman[542365]: 
Oct 13 15:46:27 standalone.localdomain podman[542365]: 2025-10-13 15:46:27.980537856 +0000 UTC m=+0.107158154 container create 5ebde18dc498d58313ce74e1a055c3944ef5789f37936415c61cbdba6070e7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1425a739-9d65-4c05-9700-1ab66a63077c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:28 standalone.localdomain podman[542365]: 2025-10-13 15:46:27.927265884 +0000 UTC m=+0.053886202 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:28.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:28 standalone.localdomain systemd[1]: Started libpod-conmon-5ebde18dc498d58313ce74e1a055c3944ef5789f37936415c61cbdba6070e7ce.scope.
Oct 13 15:46:28 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:28 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0153093156bd8d2ca9490348c5f4041d9fcb901cd22295fe33da7f9dc0e6f49a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:28 standalone.localdomain podman[542365]: 2025-10-13 15:46:28.105318017 +0000 UTC m=+0.231938315 container init 5ebde18dc498d58313ce74e1a055c3944ef5789f37936415c61cbdba6070e7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1425a739-9d65-4c05-9700-1ab66a63077c, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:46:28 standalone.localdomain podman[542365]: 2025-10-13 15:46:28.11437128 +0000 UTC m=+0.240991578 container start 5ebde18dc498d58313ce74e1a055c3944ef5789f37936415c61cbdba6070e7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1425a739-9d65-4c05-9700-1ab66a63077c, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:46:28 standalone.localdomain dnsmasq[542384]: started, version 2.85 cachesize 150
Oct 13 15:46:28 standalone.localdomain dnsmasq[542384]: DNS service limited to local subnets
Oct 13 15:46:28 standalone.localdomain dnsmasq[542384]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:28 standalone.localdomain dnsmasq[542384]: warning: no upstream servers configured
Oct 13 15:46:28 standalone.localdomain dnsmasq-dhcp[542384]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:46:28 standalone.localdomain dnsmasq[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/addn_hosts - 0 addresses
Oct 13 15:46:28 standalone.localdomain dnsmasq-dhcp[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/host
Oct 13 15:46:28 standalone.localdomain dnsmasq-dhcp[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/opts
Oct 13 15:46:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:28.212 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:28.361 496978 INFO neutron.agent.dhcp.agent [None req-d861d727-a9f9-400c-9dc9-4aad3db138e7 - - - - - -] DHCP configuration for ports {'af398378-ab26-4493-8d45-6d8d019a53ae'} is completed
Oct 13 15:46:28 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:28Z|00352|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:46:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:28.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:28 standalone.localdomain systemd[1]: tmp-crun.rPMQHu.mount: Deactivated successfully.
Oct 13 15:46:28 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:28.563 2 INFO neutron.agent.securitygroups_rpc [None req-7b44dca2-98ff-481a-bea2-8af96ea39f87 eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['1cccd955-cbbd-4a20-9085-c7e737d8cdd4', 'd9f2b665-3193-4eac-82d4-83e12a54a264', '9e9e6c64-05e8-4b50-a6bf-85cec054e42d']
Oct 13 15:46:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:28.628 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:20Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ea28e0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ea2250>], id=8b58413e-cfab-47f1-92d6-bab5eddc4b5e, ip_allocation=immediate, mac_address=fa:16:3e:e2:49:3a, name=tempest-PortsTestJSON-1708764017, network_id=8669e1a6-2c0a-4fef-817e-c674b6e65b80, port_security_enabled=True, project_id=b5a94d2f8d3c41ee91638c01feac6bc7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['9e9e6c64-05e8-4b50-a6bf-85cec054e42d', 'd9f2b665-3193-4eac-82d4-83e12a54a264'], standard_attr_id=1702, status=DOWN, tags=[], tenant_id=b5a94d2f8d3c41ee91638c01feac6bc7, updated_at=2025-10-13T15:46:28Z on network 8669e1a6-2c0a-4fef-817e-c674b6e65b80
Oct 13 15:46:28 standalone.localdomain dnsmasq-dhcp[542342]: DHCPRELEASE(tap2513fba5-33) 10.100.0.10 fa:16:3e:e2:49:3a
Oct 13 15:46:28 standalone.localdomain ceph-mon[29756]: pgmap v3965: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:46:28 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:28.709 2 INFO neutron.agent.securitygroups_rpc [None req-ab806f8e-f1a5-47fd-9378-ebd015bc5d92 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:28.924 496978 INFO neutron.agent.linux.ip_lib [None req-900e66cc-4545-44c6-a7ba-3d39aa920858 - - - - - -] Device tapa9c12277-7a cannot be used as it has no MAC address
Oct 13 15:46:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:28.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:28 standalone.localdomain kernel: device tapa9c12277-7a entered promiscuous mode
Oct 13 15:46:28 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:28Z|00353|binding|INFO|Claiming lport a9c12277-7a51-4287-8ea9-3f597110889f for this chassis.
Oct 13 15:46:28 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:28Z|00354|binding|INFO|a9c12277-7a51-4287-8ea9-3f597110889f: Claiming unknown
Oct 13 15:46:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:28.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:28 standalone.localdomain NetworkManager[5962]: <info>  [1760370388.9619] manager: (tapa9c12277-7a): new Generic device (/org/freedesktop/NetworkManager/Devices/67)
Oct 13 15:46:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:28.975 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f531c292-f9f5-42d9-9ba4-08c52dc16c2d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f531c292-f9f5-42d9-9ba4-08c52dc16c2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9c108fe-9724-4cbc-a6cd-fdef70e0a3a4, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=a9c12277-7a51-4287-8ea9-3f597110889f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:28.977 378821 INFO neutron.agent.ovn.metadata.agent [-] Port a9c12277-7a51-4287-8ea9-3f597110889f in datapath f531c292-f9f5-42d9-9ba4-08c52dc16c2d bound to our chassis
Oct 13 15:46:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:28.978 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f531c292-f9f5-42d9-9ba4-08c52dc16c2d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:28.979 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[189bd535-e9ab-48a2-b538-0c6cc18fef8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:28 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:28Z|00355|binding|INFO|Setting lport a9c12277-7a51-4287-8ea9-3f597110889f ovn-installed in OVS
Oct 13 15:46:29 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:29Z|00356|binding|INFO|Setting lport a9c12277-7a51-4287-8ea9-3f597110889f up in Southbound
Oct 13 15:46:29 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:29.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:29 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:29.068 2 INFO neutron.agent.securitygroups_rpc [None req-b2be920b-3e25-4a28-a77b-50f9fac6a30d 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:29 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:29.091 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:29 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:29.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006647822445561139 of space, bias 1.0, pg target 0.6647822445561139 quantized to 32 (current 32)
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.012942315745393635 of space, bias 1.0, pg target 1.2942315745393635 quantized to 32 (current 32)
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.29775832286432163 quantized to 32 (current 32)
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 13 15:46:29 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:29.349 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:29 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:29.356 2 INFO neutron.agent.securitygroups_rpc [None req-b6471eae-6e3b-4338-b570-d35ac6296bd7 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:29 standalone.localdomain dnsmasq[542342]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/addn_hosts - 1 addresses
Oct 13 15:46:29 standalone.localdomain dnsmasq-dhcp[542342]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/host
Oct 13 15:46:29 standalone.localdomain dnsmasq-dhcp[542342]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/opts
Oct 13 15:46:29 standalone.localdomain podman[542427]: 2025-10-13 15:46:29.364524546 +0000 UTC m=+0.064177493 container kill beb3a42ebc84cbcc3250e92baefc96b9929bf0a7b99ddd27a412e9489c93ec2d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 13 15:46:29 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:29.646 2 INFO neutron.agent.securitygroups_rpc [None req-4a0ded5b-1c12-4906-b528-c2e458810f7f eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['d9f2b665-3193-4eac-82d4-83e12a54a264', '9e9e6c64-05e8-4b50-a6bf-85cec054e42d']
Oct 13 15:46:29 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:29.701 496978 INFO neutron.agent.dhcp.agent [None req-1bf64243-feab-4115-9671-730f551750c1 - - - - - -] DHCP configuration for ports {'8b58413e-cfab-47f1-92d6-bab5eddc4b5e'} is completed
Oct 13 15:46:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3966: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:46:29 standalone.localdomain dnsmasq[542342]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/addn_hosts - 0 addresses
Oct 13 15:46:29 standalone.localdomain dnsmasq-dhcp[542342]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/host
Oct 13 15:46:29 standalone.localdomain dnsmasq-dhcp[542342]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/opts
Oct 13 15:46:29 standalone.localdomain systemd[1]: tmp-crun.HIUaBV.mount: Deactivated successfully.
Oct 13 15:46:29 standalone.localdomain podman[542482]: 2025-10-13 15:46:29.987342674 +0000 UTC m=+0.074949780 container kill beb3a42ebc84cbcc3250e92baefc96b9929bf0a7b99ddd27a412e9489c93ec2d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:30 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:30.159 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:29Z, description=, device_id=515e3f8e-80ee-4789-992e-8c76303121fd, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ee41f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892e4130>], id=48085968-44a5-4d43-a6ac-f3fc81e26f50, ip_allocation=immediate, mac_address=fa:16:3e:ea:0b:17, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1754, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:46:29Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:46:30 standalone.localdomain podman[542523]: 2025-10-13 15:46:30.243406752 +0000 UTC m=+0.109419535 container create ffcd9cce691b814402e1f32bb28358dfae2088355d9f7075bba1c32edd2ae5a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f531c292-f9f5-42d9-9ba4-08c52dc16c2d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:30 standalone.localdomain systemd[1]: Started libpod-conmon-ffcd9cce691b814402e1f32bb28358dfae2088355d9f7075bba1c32edd2ae5a5.scope.
Oct 13 15:46:30 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:30 standalone.localdomain podman[542523]: 2025-10-13 15:46:30.193169295 +0000 UTC m=+0.059182118 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:30 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d176da66e7333c4804c8a17b02ea37df200683ecd5db02e4d3eebf054edde7b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:30 standalone.localdomain podman[542523]: 2025-10-13 15:46:30.305824259 +0000 UTC m=+0.171837072 container init ffcd9cce691b814402e1f32bb28358dfae2088355d9f7075bba1c32edd2ae5a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f531c292-f9f5-42d9-9ba4-08c52dc16c2d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:30 standalone.localdomain podman[542523]: 2025-10-13 15:46:30.314772578 +0000 UTC m=+0.180785391 container start ffcd9cce691b814402e1f32bb28358dfae2088355d9f7075bba1c32edd2ae5a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f531c292-f9f5-42d9-9ba4-08c52dc16c2d, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:30 standalone.localdomain dnsmasq[542566]: started, version 2.85 cachesize 150
Oct 13 15:46:30 standalone.localdomain dnsmasq[542566]: DNS service limited to local subnets
Oct 13 15:46:30 standalone.localdomain dnsmasq[542566]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:30 standalone.localdomain dnsmasq[542566]: warning: no upstream servers configured
Oct 13 15:46:30 standalone.localdomain dnsmasq-dhcp[542566]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:46:30 standalone.localdomain dnsmasq[542566]: read /var/lib/neutron/dhcp/f531c292-f9f5-42d9-9ba4-08c52dc16c2d/addn_hosts - 0 addresses
Oct 13 15:46:30 standalone.localdomain dnsmasq-dhcp[542566]: read /var/lib/neutron/dhcp/f531c292-f9f5-42d9-9ba4-08c52dc16c2d/host
Oct 13 15:46:30 standalone.localdomain dnsmasq-dhcp[542566]: read /var/lib/neutron/dhcp/f531c292-f9f5-42d9-9ba4-08c52dc16c2d/opts
Oct 13 15:46:30 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:46:30 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:46:30 standalone.localdomain podman[542559]: 2025-10-13 15:46:30.362882968 +0000 UTC m=+0.056440311 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:46:30 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:46:30 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:30.458 496978 INFO neutron.agent.dhcp.agent [None req-1134b037-9417-4bc7-a620-4d84459225ed - - - - - -] DHCP configuration for ports {'8c7268f2-bba0-4ab1-914f-927df7dd2bc8'} is completed
Oct 13 15:46:30 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:30.483 2 INFO neutron.agent.securitygroups_rpc [None req-5ab03602-1882-443c-9f97-36e1f01f2713 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:30 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:30.538 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:30 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:30.621 496978 INFO neutron.agent.dhcp.agent [None req-de40712f-7f14-4e2c-951d-b04e0e6e7111 - - - - - -] DHCP configuration for ports {'48085968-44a5-4d43-a6ac-f3fc81e26f50'} is completed
Oct 13 15:46:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40847 DF PROTO=TCP SPT=58820 DPT=9102 SEQ=483385854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD154360000000001030307) 
Oct 13 15:46:30 standalone.localdomain ceph-mon[29756]: pgmap v3966: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:46:30 standalone.localdomain dnsmasq[542342]: exiting on receipt of SIGTERM
Oct 13 15:46:30 standalone.localdomain podman[542601]: 2025-10-13 15:46:30.869357007 +0000 UTC m=+0.067884748 container kill beb3a42ebc84cbcc3250e92baefc96b9929bf0a7b99ddd27a412e9489c93ec2d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:30 standalone.localdomain systemd[1]: libpod-beb3a42ebc84cbcc3250e92baefc96b9929bf0a7b99ddd27a412e9489c93ec2d.scope: Deactivated successfully.
Oct 13 15:46:30 standalone.localdomain podman[542617]: 2025-10-13 15:46:30.936358917 +0000 UTC m=+0.046681448 container died beb3a42ebc84cbcc3250e92baefc96b9929bf0a7b99ddd27a412e9489c93ec2d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4f116139a694d530e5ae5513791b772978bd635aefc016a6ccf54babe430ee02-merged.mount: Deactivated successfully.
Oct 13 15:46:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-beb3a42ebc84cbcc3250e92baefc96b9929bf0a7b99ddd27a412e9489c93ec2d-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:30 standalone.localdomain podman[542617]: 2025-10-13 15:46:30.992767117 +0000 UTC m=+0.103089638 container remove beb3a42ebc84cbcc3250e92baefc96b9929bf0a7b99ddd27a412e9489c93ec2d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:31 standalone.localdomain systemd[1]: libpod-conmon-beb3a42ebc84cbcc3250e92baefc96b9929bf0a7b99ddd27a412e9489c93ec2d.scope: Deactivated successfully.
Oct 13 15:46:31 standalone.localdomain account-server[114571]: 172.20.0.100 - - [13/Oct/2025:15:46:31 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txde44228ea83b4644a47ec-0068ed1ed7" "proxy-server 2" 0.0005 "-" 23 -
Oct 13 15:46:31 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: txde44228ea83b4644a47ec-0068ed1ed7)
Oct 13 15:46:31 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txde44228ea83b4644a47ec-0068ed1ed7)
Oct 13 15:46:31 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:31.604 2 INFO neutron.agent.securitygroups_rpc [None req-c1faacc2-785e-4f05-8100-537bc1b3fe0e 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3967: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:46:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:31.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:32 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:32.023 2 INFO neutron.agent.securitygroups_rpc [None req-5ccb6b4a-6690-4a82-80e5-24c0edf08b38 006ab206b5984ead95fbaca834760999 8c4c286df95f4c959af0b9e61d174094 - - default default] Security group member updated ['69096691-b39e-40b0-8e74-b77b5660807f']
Oct 13 15:46:32 standalone.localdomain podman[542691]: 2025-10-13 15:46:32.028793464 +0000 UTC m=+0.075628501 container create f1978457686c1bf25fa654259b42df24e8054fe9cde6cde489f0b24a40da1dc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:32 standalone.localdomain systemd[1]: Started libpod-conmon-f1978457686c1bf25fa654259b42df24e8054fe9cde6cde489f0b24a40da1dc2.scope.
Oct 13 15:46:32 standalone.localdomain systemd[1]: tmp-crun.ckJJij.mount: Deactivated successfully.
Oct 13 15:46:32 standalone.localdomain podman[542691]: 2025-10-13 15:46:31.990056615 +0000 UTC m=+0.036891662 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:32 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:32 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27be3a239959552304c7dd225dbb51644314b4c1a80bd529c82fbd6e91723ecc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:32 standalone.localdomain podman[542691]: 2025-10-13 15:46:32.117851012 +0000 UTC m=+0.164686049 container init f1978457686c1bf25fa654259b42df24e8054fe9cde6cde489f0b24a40da1dc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:32 standalone.localdomain podman[542691]: 2025-10-13 15:46:32.127596285 +0000 UTC m=+0.174431322 container start f1978457686c1bf25fa654259b42df24e8054fe9cde6cde489f0b24a40da1dc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:32 standalone.localdomain dnsmasq[542710]: started, version 2.85 cachesize 150
Oct 13 15:46:32 standalone.localdomain dnsmasq[542710]: DNS service limited to local subnets
Oct 13 15:46:32 standalone.localdomain dnsmasq[542710]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:32 standalone.localdomain dnsmasq[542710]: warning: no upstream servers configured
Oct 13 15:46:32 standalone.localdomain dnsmasq-dhcp[542710]: DHCP, static leases only on 10.100.0.16, lease time 1d
Oct 13 15:46:32 standalone.localdomain dnsmasq-dhcp[542710]: DHCP, static leases only on 10.100.0.32, lease time 1d
Oct 13 15:46:32 standalone.localdomain dnsmasq[542710]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/addn_hosts - 0 addresses
Oct 13 15:46:32 standalone.localdomain dnsmasq-dhcp[542710]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/host
Oct 13 15:46:32 standalone.localdomain dnsmasq-dhcp[542710]: read /var/lib/neutron/dhcp/8669e1a6-2c0a-4fef-817e-c674b6e65b80/opts
Oct 13 15:46:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:32.166 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:31Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ee45e0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ee4e80>], id=bd6f5e17-412d-4a20-a66b-235216f6ce59, ip_allocation=immediate, mac_address=fa:16:3e:0a:7a:74, name=tempest-RoutersAdminNegativeIpV6Test-2031296659, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=True, project_id=8c4c286df95f4c959af0b9e61d174094, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['69096691-b39e-40b0-8e74-b77b5660807f'], standard_attr_id=1769, status=DOWN, tags=[], tenant_id=8c4c286df95f4c959af0b9e61d174094, updated_at=2025-10-13T15:46:31Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:46:32 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:46:32 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:46:32 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:46:32 standalone.localdomain podman[542728]: 2025-10-13 15:46:32.399009702 +0000 UTC m=+0.067197907 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:46:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:32Z|00357|binding|INFO|Releasing lport a9c12277-7a51-4287-8ea9-3f597110889f from this chassis (sb_readonly=0)
Oct 13 15:46:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:32.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:32 standalone.localdomain kernel: device tapa9c12277-7a left promiscuous mode
Oct 13 15:46:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:32Z|00358|binding|INFO|Setting lport a9c12277-7a51-4287-8ea9-3f597110889f down in Southbound
Oct 13 15:46:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:32.468 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f531c292-f9f5-42d9-9ba4-08c52dc16c2d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f531c292-f9f5-42d9-9ba4-08c52dc16c2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9c108fe-9724-4cbc-a6cd-fdef70e0a3a4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=a9c12277-7a51-4287-8ea9-3f597110889f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:32.469 378821 INFO neutron.agent.ovn.metadata.agent [-] Port a9c12277-7a51-4287-8ea9-3f597110889f in datapath f531c292-f9f5-42d9-9ba4-08c52dc16c2d unbound from our chassis
Oct 13 15:46:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:32.471 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f531c292-f9f5-42d9-9ba4-08c52dc16c2d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:46:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:32.472 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[3f062aaf-36a0-4afb-8717-e18165636896]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:32.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:32 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:32.482 2 INFO neutron.agent.securitygroups_rpc [None req-7079e8cf-8a5a-4478-a704-c86df29381ec eefa420ee92f417da50bfb2b44a724dc b5a94d2f8d3c41ee91638c01feac6bc7 - - default default] Security group member updated ['e5246d74-7a72-4d89-9669-125c9e305a6a']
Oct 13 15:46:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:32.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:32 standalone.localdomain dnsmasq[542710]: exiting on receipt of SIGTERM
Oct 13 15:46:32 standalone.localdomain podman[542763]: 2025-10-13 15:46:32.551033504 +0000 UTC m=+0.046734159 container kill f1978457686c1bf25fa654259b42df24e8054fe9cde6cde489f0b24a40da1dc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 15:46:32 standalone.localdomain systemd[1]: libpod-f1978457686c1bf25fa654259b42df24e8054fe9cde6cde489f0b24a40da1dc2.scope: Deactivated successfully.
Oct 13 15:46:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:32.584 496978 INFO neutron.agent.dhcp.agent [None req-43d23dfa-fddd-466a-8506-54f935b1c6f5 - - - - - -] DHCP configuration for ports {'20b13e30-b2d6-40a4-9537-84abbaa32a14', '36e7e924-139d-4d57-a01f-3a7052149280', '2513fba5-3362-4add-8a7c-752b1114c953'} is completed
Oct 13 15:46:32 standalone.localdomain podman[542782]: 2025-10-13 15:46:32.614868715 +0000 UTC m=+0.046654987 container died f1978457686c1bf25fa654259b42df24e8054fe9cde6cde489f0b24a40da1dc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:46:32 standalone.localdomain podman[542782]: 2025-10-13 15:46:32.656135673 +0000 UTC m=+0.087921915 container remove f1978457686c1bf25fa654259b42df24e8054fe9cde6cde489f0b24a40da1dc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8669e1a6-2c0a-4fef-817e-c674b6e65b80, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:46:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:32Z|00359|binding|INFO|Releasing lport 2513fba5-3362-4add-8a7c-752b1114c953 from this chassis (sb_readonly=0)
Oct 13 15:46:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:32.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:32Z|00360|binding|INFO|Setting lport 2513fba5-3362-4add-8a7c-752b1114c953 down in Southbound
Oct 13 15:46:32 standalone.localdomain kernel: device tap2513fba5-33 left promiscuous mode
Oct 13 15:46:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:32.677 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.3/28 10.100.0.35/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8669e1a6-2c0a-4fef-817e-c674b6e65b80', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5a94d2f8d3c41ee91638c01feac6bc7', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=486905a0-8fbf-4847-a0a3-d828ace412ce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=2513fba5-3362-4add-8a7c-752b1114c953) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:32.678 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 2513fba5-3362-4add-8a7c-752b1114c953 in datapath 8669e1a6-2c0a-4fef-817e-c674b6e65b80 unbound from our chassis
Oct 13 15:46:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:32.680 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8669e1a6-2c0a-4fef-817e-c674b6e65b80, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:46:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:32.682 496978 INFO neutron.agent.linux.ip_lib [None req-887fad3b-f7bd-47e5-bff3-663b0bd9a12b - - - - - -] Device tape03ea3a7-8c cannot be used as it has no MAC address
Oct 13 15:46:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:32.681 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff4f427-17c6-4c24-9b5b-ebe0ffdf4c96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:32.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:32 standalone.localdomain systemd[1]: libpod-conmon-f1978457686c1bf25fa654259b42df24e8054fe9cde6cde489f0b24a40da1dc2.scope: Deactivated successfully.
Oct 13 15:46:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:32.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:32 standalone.localdomain kernel: device tape03ea3a7-8c entered promiscuous mode
Oct 13 15:46:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:32Z|00361|binding|INFO|Claiming lport e03ea3a7-8c63-461b-95c4-931f5f3c5693 for this chassis.
Oct 13 15:46:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:32Z|00362|binding|INFO|e03ea3a7-8c63-461b-95c4-931f5f3c5693: Claiming unknown
Oct 13 15:46:32 standalone.localdomain NetworkManager[5962]: <info>  [1760370392.7112] manager: (tape03ea3a7-8c): new Generic device (/org/freedesktop/NetworkManager/Devices/68)
Oct 13 15:46:32 standalone.localdomain systemd-udevd[542814]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:46:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:32.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:32.727 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=e03ea3a7-8c63-461b-95c4-931f5f3c5693) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:32.729 378821 INFO neutron.agent.ovn.metadata.agent [-] Port e03ea3a7-8c63-461b-95c4-931f5f3c5693 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:46:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:32.730 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:32.731 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[9fbf04e7-6d93-45c7-8d97-2d0e856b3dbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:32 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape03ea3a7-8c: No such device
Oct 13 15:46:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:32Z|00363|binding|INFO|Setting lport e03ea3a7-8c63-461b-95c4-931f5f3c5693 ovn-installed in OVS
Oct 13 15:46:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:32Z|00364|binding|INFO|Setting lport e03ea3a7-8c63-461b-95c4-931f5f3c5693 up in Southbound
Oct 13 15:46:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:32.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:32 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape03ea3a7-8c: No such device
Oct 13 15:46:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:32.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:32 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape03ea3a7-8c: No such device
Oct 13 15:46:32 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape03ea3a7-8c: No such device
Oct 13 15:46:32 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape03ea3a7-8c: No such device
Oct 13 15:46:32 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape03ea3a7-8c: No such device
Oct 13 15:46:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:32Z|00365|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:46:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:32.756 496978 INFO neutron.agent.dhcp.agent [None req-719ef11c-ec72-4ae3-b6a2-eb21f42c801a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:32 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape03ea3a7-8c: No such device
Oct 13 15:46:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:32.757 496978 INFO neutron.agent.dhcp.agent [None req-719ef11c-ec72-4ae3-b6a2-eb21f42c801a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:32 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape03ea3a7-8c: No such device
Oct 13 15:46:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:32.779 496978 INFO neutron.agent.dhcp.agent [None req-c05e4e15-2488-45d3-a26a-31db8d1ef945 - - - - - -] DHCP configuration for ports {'bd6f5e17-412d-4a20-a66b-235216f6ce59'} is completed
Oct 13 15:46:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:32.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:32 standalone.localdomain ceph-mon[29756]: pgmap v3967: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:46:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:32.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:32.991 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:32 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:32.995 2 INFO neutron.agent.securitygroups_rpc [None req-76785da4-df71-4b8f-8ed8-d8f06c47f406 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:46:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-27be3a239959552304c7dd225dbb51644314b4c1a80bd529c82fbd6e91723ecc-merged.mount: Deactivated successfully.
Oct 13 15:46:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f1978457686c1bf25fa654259b42df24e8054fe9cde6cde489f0b24a40da1dc2-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:33 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d8669e1a6\x2d2c0a\x2d4fef\x2d817e\x2dc674b6e65b80.mount: Deactivated successfully.
Oct 13 15:46:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:33.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:33 standalone.localdomain dnsmasq[542566]: read /var/lib/neutron/dhcp/f531c292-f9f5-42d9-9ba4-08c52dc16c2d/addn_hosts - 0 addresses
Oct 13 15:46:33 standalone.localdomain dnsmasq-dhcp[542566]: read /var/lib/neutron/dhcp/f531c292-f9f5-42d9-9ba4-08c52dc16c2d/host
Oct 13 15:46:33 standalone.localdomain dnsmasq-dhcp[542566]: read /var/lib/neutron/dhcp/f531c292-f9f5-42d9-9ba4-08c52dc16c2d/opts
Oct 13 15:46:33 standalone.localdomain podman[542878]: 2025-10-13 15:46:33.426362019 +0000 UTC m=+0.035688275 container kill ffcd9cce691b814402e1f32bb28358dfae2088355d9f7075bba1c32edd2ae5a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f531c292-f9f5-42d9-9ba4-08c52dc16c2d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent [None req-d4b67b3d-ef94-4516-a81f-fc5d9611c88f - - - - - -] Unable to reload_allocations dhcp for f531c292-f9f5-42d9-9ba4-08c52dc16c2d.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapa9c12277-7a not found in namespace qdhcp-f531c292-f9f5-42d9-9ba4-08c52dc16c2d.
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     return fut.result()
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     raise self._exception
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapa9c12277-7a not found in namespace qdhcp-f531c292-f9f5-42d9-9ba4-08c52dc16c2d.
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.455 496978 ERROR neutron.agent.dhcp.agent 
Oct 13 15:46:33 standalone.localdomain podman[542914]: 
Oct 13 15:46:33 standalone.localdomain podman[542914]: 2025-10-13 15:46:33.677544913 +0000 UTC m=+0.083714001 container create de084fee45f8c5785e88a195f999e19f2d952e69c233e13d2bcc8c63c736c20d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:46:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3968: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:46:33 standalone.localdomain systemd[1]: Started libpod-conmon-de084fee45f8c5785e88a195f999e19f2d952e69c233e13d2bcc8c63c736c20d.scope.
Oct 13 15:46:33 standalone.localdomain podman[542914]: 2025-10-13 15:46:33.635656347 +0000 UTC m=+0.041825505 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:33 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:33.741 2 INFO neutron.agent.securitygroups_rpc [None req-3b17ebf1-d8a1-45fb-bfa3-e3e369d89530 006ab206b5984ead95fbaca834760999 8c4c286df95f4c959af0b9e61d174094 - - default default] Security group member updated ['69096691-b39e-40b0-8e74-b77b5660807f']
Oct 13 15:46:33 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e5362a52f1ac1e9a0f8282a6c868b6c33b99402f2ee52e901e135101d88d66a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:33 standalone.localdomain podman[542914]: 2025-10-13 15:46:33.755359631 +0000 UTC m=+0.161528709 container init de084fee45f8c5785e88a195f999e19f2d952e69c233e13d2bcc8c63c736c20d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:33 standalone.localdomain podman[542914]: 2025-10-13 15:46:33.764095873 +0000 UTC m=+0.170264961 container start de084fee45f8c5785e88a195f999e19f2d952e69c233e13d2bcc8c63c736c20d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:46:33 standalone.localdomain dnsmasq[542932]: started, version 2.85 cachesize 150
Oct 13 15:46:33 standalone.localdomain dnsmasq[542932]: DNS service limited to local subnets
Oct 13 15:46:33 standalone.localdomain dnsmasq[542932]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:33 standalone.localdomain dnsmasq[542932]: warning: no upstream servers configured
Oct 13 15:46:33 standalone.localdomain dnsmasq-dhcp[542932]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:33 standalone.localdomain dnsmasq[542932]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:46:33 standalone.localdomain dnsmasq-dhcp[542932]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:46:33 standalone.localdomain dnsmasq-dhcp[542932]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:46:33 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:33.824 2 INFO neutron.agent.securitygroups_rpc [None req-639d9767-f096-47e5-8601-bc495c181aad 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.850 496978 INFO neutron.agent.dhcp.agent [None req-0c6c371e-611b-497f-9868-2e2dafdb0fcf - - - - - -] Synchronizing state
Oct 13 15:46:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:33.991 496978 INFO neutron.agent.dhcp.agent [None req-ad989400-80f0-42ef-8dd8-5dba9781e7d2 - - - - - -] DHCP configuration for ports {'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:46:34 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:34.074 2 INFO neutron.agent.securitygroups_rpc [None req-a25466e5-c80a-4cff-bb2e-3d4805d3334d db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:46:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:34.077 496978 INFO neutron.agent.dhcp.agent [None req-30244603-2a22-4385-9705-be812a2f1f3b - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:46:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:34.078 496978 INFO neutron.agent.dhcp.agent [-] Starting network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e dhcp configuration
Oct 13 15:46:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:34.079 496978 INFO neutron.agent.dhcp.agent [-] Finished network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e dhcp configuration
Oct 13 15:46:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:34.079 496978 INFO neutron.agent.dhcp.agent [-] Starting network f531c292-f9f5-42d9-9ba4-08c52dc16c2d dhcp configuration
Oct 13 15:46:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:34.080 496978 INFO neutron.agent.dhcp.agent [-] Finished network f531c292-f9f5-42d9-9ba4-08c52dc16c2d dhcp configuration
Oct 13 15:46:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:34.080 496978 INFO neutron.agent.dhcp.agent [None req-30244603-2a22-4385-9705-be812a2f1f3b - - - - - -] Synchronizing state complete
Oct 13 15:46:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:34.081 496978 INFO neutron.agent.dhcp.agent [None req-887fad3b-f7bd-47e5-bff3-663b0bd9a12b - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:32Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891ac460>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f113a0>], id=dc455f95-6547-4796-9918-8bce90cab714, ip_allocation=immediate, mac_address=fa:16:3e:1f:f4:fd, name=tempest-NetworksTestDHCPv6-1221600900, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['efdea10a-2986-452c-9ff3-9d9334b746f3'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:46:31Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], standard_attr_id=1772, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:46:32Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:46:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:34.103 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:34 standalone.localdomain dnsmasq[542932]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 1 addresses
Oct 13 15:46:34 standalone.localdomain dnsmasq-dhcp[542932]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:46:34 standalone.localdomain dnsmasq-dhcp[542932]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:46:34 standalone.localdomain podman[542975]: 2025-10-13 15:46:34.300605089 +0000 UTC m=+0.058267878 container kill de084fee45f8c5785e88a195f999e19f2d952e69c233e13d2bcc8c63c736c20d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:46:34 standalone.localdomain dnsmasq[542566]: exiting on receipt of SIGTERM
Oct 13 15:46:34 standalone.localdomain podman[542992]: 2025-10-13 15:46:34.368479977 +0000 UTC m=+0.066328061 container kill ffcd9cce691b814402e1f32bb28358dfae2088355d9f7075bba1c32edd2ae5a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f531c292-f9f5-42d9-9ba4-08c52dc16c2d, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:46:34 standalone.localdomain systemd[1]: libpod-ffcd9cce691b814402e1f32bb28358dfae2088355d9f7075bba1c32edd2ae5a5.scope: Deactivated successfully.
Oct 13 15:46:34 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:46:34 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:46:34 standalone.localdomain podman[543011]: 2025-10-13 15:46:34.424351189 +0000 UTC m=+0.057290207 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 13 15:46:34 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:46:34 standalone.localdomain podman[543027]: 2025-10-13 15:46:34.455558133 +0000 UTC m=+0.056816633 container died ffcd9cce691b814402e1f32bb28358dfae2088355d9f7075bba1c32edd2ae5a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f531c292-f9f5-42d9-9ba4-08c52dc16c2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:46:34 standalone.localdomain podman[543027]: 2025-10-13 15:46:34.499178774 +0000 UTC m=+0.100437254 container remove ffcd9cce691b814402e1f32bb28358dfae2088355d9f7075bba1c32edd2ae5a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f531c292-f9f5-42d9-9ba4-08c52dc16c2d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:46:34 standalone.localdomain systemd[1]: libpod-conmon-ffcd9cce691b814402e1f32bb28358dfae2088355d9f7075bba1c32edd2ae5a5.scope: Deactivated successfully.
Oct 13 15:46:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:34.534 496978 INFO neutron.agent.dhcp.agent [None req-a7fdd39b-db29-46eb-9ef1-76c4216c4fae - - - - - -] DHCP configuration for ports {'dc455f95-6547-4796-9918-8bce90cab714'} is completed
Oct 13 15:46:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:34.646 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:34Z, description=, device_id=515e3f8e-80ee-4789-992e-8c76303121fd, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890aeb50>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890aeee0>], id=f0f923a8-f6cd-4374-8752-204a37cb5079, ip_allocation=immediate, mac_address=fa:16:3e:54:21:37, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:23Z, description=, dns_domain=, id=1425a739-9d65-4c05-9700-1ab66a63077c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-1237094141, port_security_enabled=True, project_id=d3c1410d6c264516966bcf7dffd0e4d5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=29682, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1729, status=ACTIVE, subnets=['3ef35283-0a92-40c9-9bce-0cba6325d140'], tags=[], tenant_id=d3c1410d6c264516966bcf7dffd0e4d5, updated_at=2025-10-13T15:46:25Z, vlan_transparent=None, network_id=1425a739-9d65-4c05-9700-1ab66a63077c, port_security_enabled=False, project_id=d3c1410d6c264516966bcf7dffd0e4d5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1780, status=DOWN, tags=[], tenant_id=d3c1410d6c264516966bcf7dffd0e4d5, updated_at=2025-10-13T15:46:34Z on network 1425a739-9d65-4c05-9700-1ab66a63077c
Oct 13 15:46:34 standalone.localdomain dnsmasq[542932]: exiting on receipt of SIGTERM
Oct 13 15:46:34 standalone.localdomain podman[543078]: 2025-10-13 15:46:34.727866677 +0000 UTC m=+0.061930943 container kill de084fee45f8c5785e88a195f999e19f2d952e69c233e13d2bcc8c63c736c20d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 13 15:46:34 standalone.localdomain systemd[1]: libpod-de084fee45f8c5785e88a195f999e19f2d952e69c233e13d2bcc8c63c736c20d.scope: Deactivated successfully.
Oct 13 15:46:34 standalone.localdomain ceph-mon[29756]: pgmap v3968: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:46:34 standalone.localdomain podman[543110]: 2025-10-13 15:46:34.822646934 +0000 UTC m=+0.063849593 container died de084fee45f8c5785e88a195f999e19f2d952e69c233e13d2bcc8c63c736c20d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:46:34 standalone.localdomain podman[543110]: 2025-10-13 15:46:34.870569458 +0000 UTC m=+0.111772067 container remove de084fee45f8c5785e88a195f999e19f2d952e69c233e13d2bcc8c63c736c20d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:34.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:34 standalone.localdomain kernel: device tape03ea3a7-8c left promiscuous mode
Oct 13 15:46:34 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:34Z|00366|binding|INFO|Releasing lport e03ea3a7-8c63-461b-95c4-931f5f3c5693 from this chassis (sb_readonly=0)
Oct 13 15:46:34 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:34Z|00367|binding|INFO|Setting lport e03ea3a7-8c63-461b-95c4-931f5f3c5693 down in Southbound
Oct 13 15:46:34 standalone.localdomain dnsmasq[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/addn_hosts - 1 addresses
Oct 13 15:46:34 standalone.localdomain dnsmasq-dhcp[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/host
Oct 13 15:46:34 standalone.localdomain podman[543128]: 2025-10-13 15:46:34.893098861 +0000 UTC m=+0.064801192 container kill 5ebde18dc498d58313ce74e1a055c3944ef5789f37936415c61cbdba6070e7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1425a739-9d65-4c05-9700-1ab66a63077c, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:34 standalone.localdomain dnsmasq-dhcp[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/opts
Oct 13 15:46:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:34.895 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=e03ea3a7-8c63-461b-95c4-931f5f3c5693) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:34.897 378821 INFO neutron.agent.ovn.metadata.agent [-] Port e03ea3a7-8c63-461b-95c4-931f5f3c5693 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:46:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:34.898 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:34.901 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[df6fb41a-e00a-4dea-87f9-b0514fe0783c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:34 standalone.localdomain systemd[1]: libpod-conmon-de084fee45f8c5785e88a195f999e19f2d952e69c233e13d2bcc8c63c736c20d.scope: Deactivated successfully.
Oct 13 15:46:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:34.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8e5362a52f1ac1e9a0f8282a6c868b6c33b99402f2ee52e901e135101d88d66a-merged.mount: Deactivated successfully.
Oct 13 15:46:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de084fee45f8c5785e88a195f999e19f2d952e69c233e13d2bcc8c63c736c20d-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5d176da66e7333c4804c8a17b02ea37df200683ecd5db02e4d3eebf054edde7b-merged.mount: Deactivated successfully.
Oct 13 15:46:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ffcd9cce691b814402e1f32bb28358dfae2088355d9f7075bba1c32edd2ae5a5-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:35 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df531c292\x2df9f5\x2d42d9\x2d9ba4\x2d08c52dc16c2d.mount: Deactivated successfully.
Oct 13 15:46:35 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:35.199 496978 INFO neutron.agent.dhcp.agent [None req-c59b87c4-9f4c-4559-a67b-1567baf9862c - - - - - -] DHCP configuration for ports {'f0f923a8-f6cd-4374-8752-204a37cb5079'} is completed
Oct 13 15:46:35 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:35.203 2 INFO neutron.agent.securitygroups_rpc [None req-ebbb41ad-5a15-40d5-bb11-0ce78cdd9e73 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:46:35 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:46:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3969: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:46:35 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:35.832 496978 INFO neutron.agent.linux.ip_lib [None req-d83dbf77-205e-4d1e-adc3-b7051de80f10 - - - - - -] Device tap2eb9a25b-49 cannot be used as it has no MAC address
Oct 13 15:46:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:35.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:35 standalone.localdomain kernel: device tap2eb9a25b-49 entered promiscuous mode
Oct 13 15:46:35 standalone.localdomain NetworkManager[5962]: <info>  [1760370395.8673] manager: (tap2eb9a25b-49): new Generic device (/org/freedesktop/NetworkManager/Devices/69)
Oct 13 15:46:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:35Z|00368|binding|INFO|Claiming lport 2eb9a25b-4971-409e-a0e9-b7db8b835df3 for this chassis.
Oct 13 15:46:35 standalone.localdomain systemd-udevd[543161]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:46:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:35Z|00369|binding|INFO|2eb9a25b-4971-409e-a0e9-b7db8b835df3: Claiming unknown
Oct 13 15:46:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:35.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:35Z|00370|binding|INFO|Setting lport 2eb9a25b-4971-409e-a0e9-b7db8b835df3 ovn-installed in OVS
Oct 13 15:46:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:35.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:35Z|00371|binding|INFO|Setting lport 2eb9a25b-4971-409e-a0e9-b7db8b835df3 up in Southbound
Oct 13 15:46:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:35.882 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=2eb9a25b-4971-409e-a0e9-b7db8b835df3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:35.884 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 2eb9a25b-4971-409e-a0e9-b7db8b835df3 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:46:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:35.888 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:35.888 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[88f56ce4-58e1-406c-be5c-73fac0feabf4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:35 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2eb9a25b-49: No such device
Oct 13 15:46:35 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2eb9a25b-49: No such device
Oct 13 15:46:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:35.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:35 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2eb9a25b-49: No such device
Oct 13 15:46:35 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2eb9a25b-49: No such device
Oct 13 15:46:35 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2eb9a25b-49: No such device
Oct 13 15:46:35 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2eb9a25b-49: No such device
Oct 13 15:46:35 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2eb9a25b-49: No such device
Oct 13 15:46:35 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2eb9a25b-49: No such device
Oct 13 15:46:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:35.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:36 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:36.099 2 INFO neutron.agent.securitygroups_rpc [None req-cc1efc51-9a1d-4271-bc86-6179d41dc6b0 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:46:36 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:36.156 2 INFO neutron.agent.securitygroups_rpc [None req-9cbc90bc-de21-421e-aa64-c35d2f80271a 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['067acc51-cbd6-44fc-b2aa-6a8270beb669']
Oct 13 15:46:36 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:36.459 496978 INFO neutron.agent.linux.ip_lib [None req-b2b4b920-2703-48d9-963f-1801e51b433e - - - - - -] Device tapcfbcd979-ac cannot be used as it has no MAC address
Oct 13 15:46:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:36.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:36 standalone.localdomain kernel: device tapcfbcd979-ac entered promiscuous mode
Oct 13 15:46:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:36.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:36Z|00372|binding|INFO|Claiming lport cfbcd979-acb8-49ae-b5f5-f71da8b6e964 for this chassis.
Oct 13 15:46:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:36Z|00373|binding|INFO|cfbcd979-acb8-49ae-b5f5-f71da8b6e964: Claiming unknown
Oct 13 15:46:36 standalone.localdomain NetworkManager[5962]: <info>  [1760370396.4915] manager: (tapcfbcd979-ac): new Generic device (/org/freedesktop/NetworkManager/Devices/70)
Oct 13 15:46:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:36.500 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce51b6cd7de42daa5659d6797c11a37', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=865c3b7f-9b2e-4591-b3e2-a355865805ff, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=cfbcd979-acb8-49ae-b5f5-f71da8b6e964) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:36.502 378821 INFO neutron.agent.ovn.metadata.agent [-] Port cfbcd979-acb8-49ae-b5f5-f71da8b6e964 in datapath 07fc0ca9-86f3-46bc-898b-45e15ca9bdf4 bound to our chassis
Oct 13 15:46:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:36.503 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 07fc0ca9-86f3-46bc-898b-45e15ca9bdf4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:36.504 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[84fffff0-af96-48c6-a7fd-5d54bd1a75ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:36Z|00374|binding|INFO|Setting lport cfbcd979-acb8-49ae-b5f5-f71da8b6e964 ovn-installed in OVS
Oct 13 15:46:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:36Z|00375|binding|INFO|Setting lport cfbcd979-acb8-49ae-b5f5-f71da8b6e964 up in Southbound
Oct 13 15:46:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:36.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:36.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:36.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:36 standalone.localdomain account-server[114563]: 172.20.0.100 - - [13/Oct/2025:15:46:36 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txb1b82628b36047a6a9e03-0068ed1edc" "proxy-server 2" 0.0012 "-" 21 -
Oct 13 15:46:36 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: txb1b82628b36047a6a9e03-0068ed1edc)
Oct 13 15:46:36 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txb1b82628b36047a6a9e03-0068ed1edc)
Oct 13 15:46:36 standalone.localdomain ceph-mon[29756]: pgmap v3969: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:46:36 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:36.877 2 INFO neutron.agent.securitygroups_rpc [None req-8e9c9f9f-742c-4761-8eb3-1ea16c047d31 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['067acc51-cbd6-44fc-b2aa-6a8270beb669']
Oct 13 15:46:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:36.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:36 standalone.localdomain podman[543261]: 
Oct 13 15:46:36 standalone.localdomain podman[543261]: 2025-10-13 15:46:36.960411937 +0000 UTC m=+0.116594567 container create 147bd6ddea7fdbcd8e604d78acedcb84acce7f4b27c6cb8686fac68832d1ce2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:46:36 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:46:36 standalone.localdomain systemd[1]: Started libpod-conmon-147bd6ddea7fdbcd8e604d78acedcb84acce7f4b27c6cb8686fac68832d1ce2a.scope.
Oct 13 15:46:37 standalone.localdomain podman[543261]: 2025-10-13 15:46:36.920601466 +0000 UTC m=+0.076784066 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:37 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1166c09dc43f99cb601473ccc989347bcddc45be20e8d00b692276385fec004/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:37 standalone.localdomain podman[543261]: 2025-10-13 15:46:37.040732293 +0000 UTC m=+0.196914903 container init 147bd6ddea7fdbcd8e604d78acedcb84acce7f4b27c6cb8686fac68832d1ce2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:37 standalone.localdomain dnsmasq[543303]: started, version 2.85 cachesize 150
Oct 13 15:46:37 standalone.localdomain dnsmasq[543303]: DNS service limited to local subnets
Oct 13 15:46:37 standalone.localdomain dnsmasq[543303]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:37 standalone.localdomain dnsmasq[543303]: warning: no upstream servers configured
Oct 13 15:46:37 standalone.localdomain dnsmasq[543303]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 1 addresses
Oct 13 15:46:37 standalone.localdomain podman[543284]: 2025-10-13 15:46:37.075518758 +0000 UTC m=+0.074048410 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:37 standalone.localdomain podman[543261]: 2025-10-13 15:46:37.097181384 +0000 UTC m=+0.253363984 container start 147bd6ddea7fdbcd8e604d78acedcb84acce7f4b27c6cb8686fac68832d1ce2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:37 standalone.localdomain podman[543284]: 2025-10-13 15:46:37.134991494 +0000 UTC m=+0.133521136 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 13 15:46:37 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:46:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:37.206 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:34Z, description=, device_id=515e3f8e-80ee-4789-992e-8c76303121fd, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889173490>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889173b80>], id=f0f923a8-f6cd-4374-8752-204a37cb5079, ip_allocation=immediate, mac_address=fa:16:3e:54:21:37, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:23Z, description=, dns_domain=, id=1425a739-9d65-4c05-9700-1ab66a63077c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-1237094141, port_security_enabled=True, project_id=d3c1410d6c264516966bcf7dffd0e4d5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=29682, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1729, status=ACTIVE, subnets=['3ef35283-0a92-40c9-9bce-0cba6325d140'], tags=[], tenant_id=d3c1410d6c264516966bcf7dffd0e4d5, updated_at=2025-10-13T15:46:25Z, vlan_transparent=None, network_id=1425a739-9d65-4c05-9700-1ab66a63077c, port_security_enabled=False, project_id=d3c1410d6c264516966bcf7dffd0e4d5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1780, status=DOWN, tags=[], tenant_id=d3c1410d6c264516966bcf7dffd0e4d5, updated_at=2025-10-13T15:46:34Z on network 1425a739-9d65-4c05-9700-1ab66a63077c
Oct 13 15:46:37 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:37.354 2 INFO neutron.agent.securitygroups_rpc [None req-370bd01f-4b4f-4dbf-945e-81dba44b2478 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:37.404 496978 INFO neutron.agent.dhcp.agent [None req-7db73640-2474-4bf8-887f-bbe3019627c2 - - - - - -] DHCP configuration for ports {'0c6e277b-43af-44ef-acd1-8e1246a726b9', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:46:37 standalone.localdomain podman[543354]: 
Oct 13 15:46:37 standalone.localdomain podman[543354]: 2025-10-13 15:46:37.49616515 +0000 UTC m=+0.057559727 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:37 standalone.localdomain dnsmasq[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/addn_hosts - 1 addresses
Oct 13 15:46:37 standalone.localdomain dnsmasq-dhcp[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/host
Oct 13 15:46:37 standalone.localdomain dnsmasq-dhcp[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/opts
Oct 13 15:46:37 standalone.localdomain podman[543380]: 2025-10-13 15:46:37.605681136 +0000 UTC m=+0.115025329 container kill 5ebde18dc498d58313ce74e1a055c3944ef5789f37936415c61cbdba6070e7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1425a739-9d65-4c05-9700-1ab66a63077c, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:46:37 standalone.localdomain dnsmasq[543303]: exiting on receipt of SIGTERM
Oct 13 15:46:37 standalone.localdomain podman[543398]: 2025-10-13 15:46:37.650509755 +0000 UTC m=+0.077314133 container kill 147bd6ddea7fdbcd8e604d78acedcb84acce7f4b27c6cb8686fac68832d1ce2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:37 standalone.localdomain podman[543354]: 2025-10-13 15:46:37.652285439 +0000 UTC m=+0.213680066 container create 144b157d700662110e57dcb1fb79e43a21735aa4169386ddd6d7c5e3dc6079c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:46:37 standalone.localdomain systemd[1]: tmp-crun.4mQACJ.mount: Deactivated successfully.
Oct 13 15:46:37 standalone.localdomain systemd[1]: libpod-147bd6ddea7fdbcd8e604d78acedcb84acce7f4b27c6cb8686fac68832d1ce2a.scope: Deactivated successfully.
Oct 13 15:46:37 standalone.localdomain systemd[1]: Started libpod-conmon-144b157d700662110e57dcb1fb79e43a21735aa4169386ddd6d7c5e3dc6079c8.scope.
Oct 13 15:46:37 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a8ea1847df0f7ed50315f7684fd5a76f7d81850b4db5d184024515c6897b93/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:37 standalone.localdomain podman[543413]: 2025-10-13 15:46:37.718678411 +0000 UTC m=+0.053874542 container died 147bd6ddea7fdbcd8e604d78acedcb84acce7f4b27c6cb8686fac68832d1ce2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 13 15:46:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3970: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:37 standalone.localdomain podman[543354]: 2025-10-13 15:46:37.73563387 +0000 UTC m=+0.297028427 container init 144b157d700662110e57dcb1fb79e43a21735aa4169386ddd6d7c5e3dc6079c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:37 standalone.localdomain podman[543354]: 2025-10-13 15:46:37.743269938 +0000 UTC m=+0.304664515 container start 144b157d700662110e57dcb1fb79e43a21735aa4169386ddd6d7c5e3dc6079c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:37 standalone.localdomain dnsmasq[543453]: started, version 2.85 cachesize 150
Oct 13 15:46:37 standalone.localdomain dnsmasq[543453]: DNS service limited to local subnets
Oct 13 15:46:37 standalone.localdomain dnsmasq[543453]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:37 standalone.localdomain dnsmasq[543453]: warning: no upstream servers configured
Oct 13 15:46:37 standalone.localdomain dnsmasq-dhcp[543453]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:37 standalone.localdomain dnsmasq[543453]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/addn_hosts - 0 addresses
Oct 13 15:46:37 standalone.localdomain dnsmasq-dhcp[543453]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/host
Oct 13 15:46:37 standalone.localdomain dnsmasq-dhcp[543453]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/opts
Oct 13 15:46:37 standalone.localdomain podman[543413]: 2025-10-13 15:46:37.807743169 +0000 UTC m=+0.142939300 container cleanup 147bd6ddea7fdbcd8e604d78acedcb84acce7f4b27c6cb8686fac68832d1ce2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:46:37 standalone.localdomain systemd[1]: libpod-conmon-147bd6ddea7fdbcd8e604d78acedcb84acce7f4b27c6cb8686fac68832d1ce2a.scope: Deactivated successfully.
Oct 13 15:46:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:37.817 496978 INFO neutron.agent.linux.ip_lib [None req-67be501e-d444-4a38-94e4-e4e5c04150d1 - - - - - -] Device tap021e7d92-75 cannot be used as it has no MAC address
Oct 13 15:46:37 standalone.localdomain podman[543421]: 2025-10-13 15:46:37.836552968 +0000 UTC m=+0.148364839 container remove 147bd6ddea7fdbcd8e604d78acedcb84acce7f4b27c6cb8686fac68832d1ce2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:46:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:37.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:37 standalone.localdomain kernel: device tap021e7d92-75 entered promiscuous mode
Oct 13 15:46:37 standalone.localdomain NetworkManager[5962]: <info>  [1760370397.8543] manager: (tap021e7d92-75): new Generic device (/org/freedesktop/NetworkManager/Devices/71)
Oct 13 15:46:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:37.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:37.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:37 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:37Z|00376|binding|INFO|Releasing lport 2eb9a25b-4971-409e-a0e9-b7db8b835df3 from this chassis (sb_readonly=0)
Oct 13 15:46:37 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:37Z|00377|binding|INFO|Setting lport 2eb9a25b-4971-409e-a0e9-b7db8b835df3 down in Southbound
Oct 13 15:46:37 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:37Z|00378|binding|INFO|Claiming lport 021e7d92-754c-4310-8958-4cdd5973c132 for this chassis.
Oct 13 15:46:37 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:37Z|00379|binding|INFO|021e7d92-754c-4310-8958-4cdd5973c132: Claiming unknown
Oct 13 15:46:37 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:37.868 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=2eb9a25b-4971-409e-a0e9-b7db8b835df3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:37 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:37.870 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-616c7868-4abb-4d79-8438-572ba78bfc1d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-616c7868-4abb-4d79-8438-572ba78bfc1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9c8f020-11c0-4269-949e-e6bc0d89cb3c, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=021e7d92-754c-4310-8958-4cdd5973c132) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:37 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:37.871 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 2eb9a25b-4971-409e-a0e9-b7db8b835df3 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:46:37 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:37.872 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:37 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:37.872 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4cdd36-edf6-4b51-bde2-51f4866dc019]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:37 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:37.873 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 021e7d92-754c-4310-8958-4cdd5973c132 in datapath 616c7868-4abb-4d79-8438-572ba78bfc1d bound to our chassis
Oct 13 15:46:37 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:37.874 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 616c7868-4abb-4d79-8438-572ba78bfc1d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:37 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:37.874 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0827ae-e964-4bf1-ac48-30e1b74fab6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:37 standalone.localdomain kernel: device tap2eb9a25b-49 left promiscuous mode
Oct 13 15:46:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:37.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:37 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:37Z|00380|binding|INFO|Setting lport 021e7d92-754c-4310-8958-4cdd5973c132 ovn-installed in OVS
Oct 13 15:46:37 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:37Z|00381|binding|INFO|Setting lport 021e7d92-754c-4310-8958-4cdd5973c132 up in Southbound
Oct 13 15:46:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:37.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:37.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:37.935 496978 INFO neutron.agent.dhcp.agent [None req-a92bb886-479b-4454-9d26-3dd44012db9d - - - - - -] DHCP configuration for ports {'36c35903-c856-491f-b22b-b8cb3d675c13', 'f0f923a8-f6cd-4374-8752-204a37cb5079'} is completed
Oct 13 15:46:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:37.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:38 standalone.localdomain dnsmasq[543453]: exiting on receipt of SIGTERM
Oct 13 15:46:38 standalone.localdomain podman[543488]: 2025-10-13 15:46:38.077787982 +0000 UTC m=+0.050504876 container kill 144b157d700662110e57dcb1fb79e43a21735aa4169386ddd6d7c5e3dc6079c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:38 standalone.localdomain systemd[1]: libpod-144b157d700662110e57dcb1fb79e43a21735aa4169386ddd6d7c5e3dc6079c8.scope: Deactivated successfully.
Oct 13 15:46:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:46:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:38.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:38 standalone.localdomain podman[543505]: 2025-10-13 15:46:38.128637658 +0000 UTC m=+0.039220104 container died 144b157d700662110e57dcb1fb79e43a21735aa4169386ddd6d7c5e3dc6079c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:38 standalone.localdomain podman[543505]: 2025-10-13 15:46:38.162080902 +0000 UTC m=+0.072663368 container cleanup 144b157d700662110e57dcb1fb79e43a21735aa4169386ddd6d7c5e3dc6079c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:46:38 standalone.localdomain systemd[1]: libpod-conmon-144b157d700662110e57dcb1fb79e43a21735aa4169386ddd6d7c5e3dc6079c8.scope: Deactivated successfully.
Oct 13 15:46:38 standalone.localdomain podman[543507]: 2025-10-13 15:46:38.203165044 +0000 UTC m=+0.102730136 container remove 144b157d700662110e57dcb1fb79e43a21735aa4169386ddd6d7c5e3dc6079c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:38 standalone.localdomain podman[543513]: 2025-10-13 15:46:38.266198029 +0000 UTC m=+0.163432958 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, name=ubi9-minimal, managed_by=edpm_ansible, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc.)
Oct 13 15:46:38 standalone.localdomain podman[543513]: 2025-10-13 15:46:38.279112143 +0000 UTC m=+0.176347052 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc.)
Oct 13 15:46:38 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:46:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a1166c09dc43f99cb601473ccc989347bcddc45be20e8d00b692276385fec004-merged.mount: Deactivated successfully.
Oct 13 15:46:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-147bd6ddea7fdbcd8e604d78acedcb84acce7f4b27c6cb8686fac68832d1ce2a-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:38 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:46:38 standalone.localdomain ceph-mon[29756]: pgmap v3970: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:38 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:38.704 2 INFO neutron.agent.securitygroups_rpc [None req-a8ae7f03-867e-4671-8aa8-d1d18c5df1b7 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:38 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:38.997 2 INFO neutron.agent.securitygroups_rpc [None req-34a64dfe-d252-455a-8023-26efaa989960 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['142f0c53-854b-4ec8-814d-7a1be19f0f0a']
Oct 13 15:46:39 standalone.localdomain podman[543595]: 
Oct 13 15:46:39 standalone.localdomain podman[543595]: 2025-10-13 15:46:39.023578405 +0000 UTC m=+0.111885031 container create 489e7f4a6ea7994a5f089696746787d9b0b43e2947b2dc1eeac6b7dd44568039 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-616c7868-4abb-4d79-8438-572ba78bfc1d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:39 standalone.localdomain podman[543595]: 2025-10-13 15:46:38.973138672 +0000 UTC m=+0.061445348 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:39 standalone.localdomain systemd[1]: Started libpod-conmon-489e7f4a6ea7994a5f089696746787d9b0b43e2947b2dc1eeac6b7dd44568039.scope.
Oct 13 15:46:39 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:39 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d0978919b33f15517bb459767baa4c7e25a966601c1de9cb8671cd8bc7729b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:39 standalone.localdomain podman[543595]: 2025-10-13 15:46:39.116963778 +0000 UTC m=+0.205270404 container init 489e7f4a6ea7994a5f089696746787d9b0b43e2947b2dc1eeac6b7dd44568039 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-616c7868-4abb-4d79-8438-572ba78bfc1d, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS)
Oct 13 15:46:39 standalone.localdomain podman[543595]: 2025-10-13 15:46:39.128208249 +0000 UTC m=+0.216514875 container start 489e7f4a6ea7994a5f089696746787d9b0b43e2947b2dc1eeac6b7dd44568039 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-616c7868-4abb-4d79-8438-572ba78bfc1d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:39 standalone.localdomain dnsmasq[543613]: started, version 2.85 cachesize 150
Oct 13 15:46:39 standalone.localdomain dnsmasq[543613]: DNS service limited to local subnets
Oct 13 15:46:39 standalone.localdomain dnsmasq[543613]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:39 standalone.localdomain dnsmasq[543613]: warning: no upstream servers configured
Oct 13 15:46:39 standalone.localdomain dnsmasq-dhcp[543613]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:46:39 standalone.localdomain dnsmasq[543613]: read /var/lib/neutron/dhcp/616c7868-4abb-4d79-8438-572ba78bfc1d/addn_hosts - 0 addresses
Oct 13 15:46:39 standalone.localdomain dnsmasq-dhcp[543613]: read /var/lib/neutron/dhcp/616c7868-4abb-4d79-8438-572ba78bfc1d/host
Oct 13 15:46:39 standalone.localdomain dnsmasq-dhcp[543613]: read /var/lib/neutron/dhcp/616c7868-4abb-4d79-8438-572ba78bfc1d/opts
Oct 13 15:46:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:39.280 496978 INFO neutron.agent.linux.ip_lib [None req-68072d84-5142-41bc-8077-2ddc486bceb9 - - - - - -] Device tapd0451465-19 cannot be used as it has no MAC address
Oct 13 15:46:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:39.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:39 standalone.localdomain kernel: device tapd0451465-19 entered promiscuous mode
Oct 13 15:46:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:39Z|00382|binding|INFO|Claiming lport d0451465-1990-46e4-8f17-c93f365e9ece for this chassis.
Oct 13 15:46:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:39Z|00383|binding|INFO|d0451465-1990-46e4-8f17-c93f365e9ece: Claiming unknown
Oct 13 15:46:39 standalone.localdomain NetworkManager[5962]: <info>  [1760370399.3678] manager: (tapd0451465-19): new Generic device (/org/freedesktop/NetworkManager/Devices/72)
Oct 13 15:46:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:39.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:39Z|00384|binding|INFO|Setting lport d0451465-1990-46e4-8f17-c93f365e9ece ovn-installed in OVS
Oct 13 15:46:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:39Z|00385|binding|INFO|Setting lport d0451465-1990-46e4-8f17-c93f365e9ece up in Southbound
Oct 13 15:46:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:39.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:39.375 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=d0451465-1990-46e4-8f17-c93f365e9ece) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:39.377 378821 INFO neutron.agent.ovn.metadata.agent [-] Port d0451465-1990-46e4-8f17-c93f365e9ece in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:46:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:39.379 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:39.381 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2ebb47-4fb4-4939-9547-75ae52e02cb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:39.385 496978 INFO neutron.agent.dhcp.agent [None req-55e11a27-9002-424c-b96e-abd0d14033f3 - - - - - -] DHCP configuration for ports {'ee39fc2f-c8f8-4729-b251-91c70485c661'} is completed
Oct 13 15:46:39 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapd0451465-19: No such device
Oct 13 15:46:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:39.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:39 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapd0451465-19: No such device
Oct 13 15:46:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:39.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:39 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapd0451465-19: No such device
Oct 13 15:46:39 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapd0451465-19: No such device
Oct 13 15:46:39 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapd0451465-19: No such device
Oct 13 15:46:39 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:39.422 2 INFO neutron.agent.securitygroups_rpc [None req-2e5fe8ef-fe82-42b5-8f6a-b3adade0709c 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['142f0c53-854b-4ec8-814d-7a1be19f0f0a']
Oct 13 15:46:39 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapd0451465-19: No such device
Oct 13 15:46:39 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapd0451465-19: No such device
Oct 13 15:46:39 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapd0451465-19: No such device
Oct 13 15:46:39 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:39.460 2 INFO neutron.agent.securitygroups_rpc [None req-69a6c9ff-2419-4dde-b94f-e719ce0ea925 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:46:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:39.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:39.604 496978 INFO neutron.agent.linux.ip_lib [None req-657011cd-a87b-4b24-9767-acffe202f132 - - - - - -] Device tap80012df1-7a cannot be used as it has no MAC address
Oct 13 15:46:39 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:39.638 2 INFO neutron.agent.securitygroups_rpc [None req-6db099fa-6916-43ea-84b8-4ab290038855 03073a8ccfc64fa88b1047377a9fc037 d3c1410d6c264516966bcf7dffd0e4d5 - - default default] Security group member updated ['2415d0e2-7475-4494-9eb4-c0035b945053']
Oct 13 15:46:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:39.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:39 standalone.localdomain kernel: device tap80012df1-7a entered promiscuous mode
Oct 13 15:46:39 standalone.localdomain NetworkManager[5962]: <info>  [1760370399.6598] manager: (tap80012df1-7a): new Generic device (/org/freedesktop/NetworkManager/Devices/73)
Oct 13 15:46:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:39Z|00386|binding|INFO|Claiming lport 80012df1-7aa7-444b-9e20-828cf59722ed for this chassis.
Oct 13 15:46:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:39Z|00387|binding|INFO|80012df1-7aa7-444b-9e20-828cf59722ed: Claiming unknown
Oct 13 15:46:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:39.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:39.674 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:ffff::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-e83d980f-4895-497f-8345-b0812704fdfa', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e83d980f-4895-497f-8345-b0812704fdfa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a966fd035a9e426eabc035f6c807bc5e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fad20e64-0dc5-4330-9ec8-00069b6431ee, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=80012df1-7aa7-444b-9e20-828cf59722ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:39.676 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 80012df1-7aa7-444b-9e20-828cf59722ed in datapath e83d980f-4895-497f-8345-b0812704fdfa bound to our chassis
Oct 13 15:46:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:39Z|00388|binding|INFO|Setting lport 80012df1-7aa7-444b-9e20-828cf59722ed ovn-installed in OVS
Oct 13 15:46:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:39Z|00389|binding|INFO|Setting lport 80012df1-7aa7-444b-9e20-828cf59722ed up in Southbound
Oct 13 15:46:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:39.678 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e83d980f-4895-497f-8345-b0812704fdfa or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:39.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:39.679 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f88286-11bc-4d6d-8bad-f7d2a48d33fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:39.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3971: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:39.773 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:38Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188912c610>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18899dc370>], id=ec8df3a6-ebd9-40cd-b429-aea59eec26f5, ip_allocation=immediate, mac_address=fa:16:3e:1b:26:f4, name=tempest-FloatingIPAdminTestJSON-109004894, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:23Z, description=, dns_domain=, id=1425a739-9d65-4c05-9700-1ab66a63077c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-1237094141, port_security_enabled=True, project_id=d3c1410d6c264516966bcf7dffd0e4d5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=29682, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1729, status=ACTIVE, subnets=['3ef35283-0a92-40c9-9bce-0cba6325d140'], tags=[], tenant_id=d3c1410d6c264516966bcf7dffd0e4d5, updated_at=2025-10-13T15:46:25Z, vlan_transparent=None, network_id=1425a739-9d65-4c05-9700-1ab66a63077c, port_security_enabled=True, project_id=d3c1410d6c264516966bcf7dffd0e4d5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['2415d0e2-7475-4494-9eb4-c0035b945053'], standard_attr_id=1830, status=DOWN, tags=[], tenant_id=d3c1410d6c264516966bcf7dffd0e4d5, updated_at=2025-10-13T15:46:39Z on network 1425a739-9d65-4c05-9700-1ab66a63077c
Oct 13 15:46:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:39.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:39 standalone.localdomain podman[543730]: 2025-10-13 15:46:39.994956666 +0000 UTC m=+0.047770322 container kill 5ebde18dc498d58313ce74e1a055c3944ef5789f37936415c61cbdba6070e7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1425a739-9d65-4c05-9700-1ab66a63077c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:39 standalone.localdomain dnsmasq[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/addn_hosts - 2 addresses
Oct 13 15:46:39 standalone.localdomain dnsmasq-dhcp[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/host
Oct 13 15:46:39 standalone.localdomain dnsmasq-dhcp[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/opts
Oct 13 15:46:40 standalone.localdomain podman[543786]: 
Oct 13 15:46:40 standalone.localdomain podman[543786]: 2025-10-13 15:46:40.20859096 +0000 UTC m=+0.068437156 container create dc282b88ed014e07d77434ccc4f96281762e012e31868001d68d0d09df25e493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:46:40 standalone.localdomain systemd[1]: Started libpod-conmon-dc282b88ed014e07d77434ccc4f96281762e012e31868001d68d0d09df25e493.scope.
Oct 13 15:46:40 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:40 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c288f411022dee4bb0beed992915631fb324db15d55a1a957b1a0fb27347ce0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:40 standalone.localdomain podman[543786]: 2025-10-13 15:46:40.171929807 +0000 UTC m=+0.031776043 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:40 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:40.278 2 INFO neutron.agent.securitygroups_rpc [None req-f70f1ffd-e608-4b66-9e62-4c1cae65fafb 3d4d48f603104c4d919565e926e139c4 a966fd035a9e426eabc035f6c807bc5e - - default default] Security group member updated ['4142dd56-682e-4ee4-8478-a4934a3acf9a']
Oct 13 15:46:40 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:40.279 2 INFO neutron.agent.securitygroups_rpc [None req-e9bb177a-1d11-4f75-9ed4-d60b87335f59 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['142f0c53-854b-4ec8-814d-7a1be19f0f0a']
Oct 13 15:46:40 standalone.localdomain podman[543786]: 2025-10-13 15:46:40.281532835 +0000 UTC m=+0.141379051 container init dc282b88ed014e07d77434ccc4f96281762e012e31868001d68d0d09df25e493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:40 standalone.localdomain podman[543786]: 2025-10-13 15:46:40.28746245 +0000 UTC m=+0.147308646 container start dc282b88ed014e07d77434ccc4f96281762e012e31868001d68d0d09df25e493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:46:40 standalone.localdomain dnsmasq[543829]: started, version 2.85 cachesize 150
Oct 13 15:46:40 standalone.localdomain dnsmasq[543829]: DNS service limited to local subnets
Oct 13 15:46:40 standalone.localdomain dnsmasq[543829]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:40 standalone.localdomain dnsmasq[543829]: warning: no upstream servers configured
Oct 13 15:46:40 standalone.localdomain dnsmasq-dhcp[543829]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Oct 13 15:46:40 standalone.localdomain dnsmasq-dhcp[543829]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:40 standalone.localdomain dnsmasq[543829]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/addn_hosts - 2 addresses
Oct 13 15:46:40 standalone.localdomain dnsmasq-dhcp[543829]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/host
Oct 13 15:46:40 standalone.localdomain dnsmasq-dhcp[543829]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/opts
Oct 13 15:46:40 standalone.localdomain podman[543807]: 2025-10-13 15:46:40.293753726 +0000 UTC m=+0.057669879 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd)
Oct 13 15:46:40 standalone.localdomain podman[543807]: 2025-10-13 15:46:40.303670796 +0000 UTC m=+0.067586979 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:46:40 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.333 496978 INFO neutron.agent.dhcp.agent [None req-2221b68d-bb01-4f86-8f6a-7e67457c0195 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:37Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892e9bb0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f18892e95e0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892e9d90>, <neutron.agent.linux.dhcp.DictModel object at 0x7f18892e9df0>], id=cc034d9b-d42d-408a-9acd-1c6633b7dcb3, ip_allocation=immediate, mac_address=fa:16:3e:ee:54:92, name=tempest-PortsIpV6TestJSON-69332142, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:34Z, description=, dns_domain=, id=07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-1394714767, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1146, qos_policy_id=None, revision_number=3, router:external=False, shared=False, standard_attr_id=1781, status=ACTIVE, subnets=['6ae1f90f-2a71-4df9-8c4e-a4ac6ec00a67', 'dde928fc-2453-481a-8389-2d5d0fcdfb41'], tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:46:36Z, vlan_transparent=None, network_id=07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['6c3ff5e9-4017-4412-baf0-58ac2954cf2d'], standard_attr_id=1814, status=DOWN, tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:46:37Z on network 07fc0ca9-86f3-46bc-898b-45e15ca9bdf4
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.354 496978 INFO neutron.agent.dhcp.agent [None req-ef342b6b-4841-4019-aff1-bbe25bd7d0ff - - - - - -] DHCP configuration for ports {'ec8df3a6-ebd9-40cd-b429-aea59eec26f5'} is completed
Oct 13 15:46:40 standalone.localdomain podman[543873]: 
Oct 13 15:46:40 standalone.localdomain podman[543889]: 2025-10-13 15:46:40.52178316 +0000 UTC m=+0.056636879 container kill dc282b88ed014e07d77434ccc4f96281762e012e31868001d68d0d09df25e493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:46:40 standalone.localdomain dnsmasq[543829]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/addn_hosts - 2 addresses
Oct 13 15:46:40 standalone.localdomain dnsmasq-dhcp[543829]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/host
Oct 13 15:46:40 standalone.localdomain dnsmasq-dhcp[543829]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/opts
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.559 496978 INFO neutron.agent.linux.ip_lib [None req-ed5a30e0-1c59-4069-9d77-c18e77391ea1 - - - - - -] Device tap32b34ca1-18 cannot be used as it has no MAC address
Oct 13 15:46:40 standalone.localdomain podman[543873]: 2025-10-13 15:46:40.564987227 +0000 UTC m=+0.149457513 container create 944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:46:40 standalone.localdomain podman[543873]: 2025-10-13 15:46:40.470821789 +0000 UTC m=+0.055292075 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.584 496978 INFO neutron.agent.dhcp.agent [None req-01a5bfd4-8ab5-442d-a4a6-72cd4b9f9e8f - - - - - -] DHCP configuration for ports {'36c35903-c856-491f-b22b-b8cb3d675c13', 'cfbcd979-acb8-49ae-b5f5-f71da8b6e964', 'cc034d9b-d42d-408a-9acd-1c6633b7dcb3'} is completed
Oct 13 15:46:40 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:40.584 2 INFO neutron.agent.securitygroups_rpc [None req-15a75744-c280-4efd-9cdf-e0bb0af19d76 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['142f0c53-854b-4ec8-814d-7a1be19f0f0a']
Oct 13 15:46:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:40.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:40 standalone.localdomain kernel: device tap32b34ca1-18 entered promiscuous mode
Oct 13 15:46:40 standalone.localdomain NetworkManager[5962]: <info>  [1760370400.6306] manager: (tap32b34ca1-18): new Generic device (/org/freedesktop/NetworkManager/Devices/74)
Oct 13 15:46:40 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:40Z|00390|binding|INFO|Claiming lport 32b34ca1-1805-4606-8a84-7b600181d749 for this chassis.
Oct 13 15:46:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:40.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:40 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:40Z|00391|binding|INFO|32b34ca1-1805-4606-8a84-7b600181d749: Claiming unknown
Oct 13 15:46:40 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:40.642 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe84:8a6b/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-5f993e6f-30f4-4230-9cfb-649f43d1ab22', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f993e6f-30f4-4230-9cfb-649f43d1ab22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a966fd035a9e426eabc035f6c807bc5e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6963109-8ab0-4ad9-850b-845c3afaddf4, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=32b34ca1-1805-4606-8a84-7b600181d749) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:40 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:40.643 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 32b34ca1-1805-4606-8a84-7b600181d749 in datapath 5f993e6f-30f4-4230-9cfb-649f43d1ab22 bound to our chassis
Oct 13 15:46:40 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:40.646 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 32552d05-7ce9-4974-9e9f-9a6c98041ddc IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:46:40 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:40.647 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5f993e6f-30f4-4230-9cfb-649f43d1ab22, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:46:40 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:40.648 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e567db-bd1d-4a99-a85c-3eaaf5f6bcf8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:40 standalone.localdomain systemd[1]: Started libpod-conmon-944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764.scope.
Oct 13 15:46:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:40.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:40 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:40Z|00392|binding|INFO|Setting lport 32b34ca1-1805-4606-8a84-7b600181d749 ovn-installed in OVS
Oct 13 15:46:40 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:40Z|00393|binding|INFO|Setting lport 32b34ca1-1805-4606-8a84-7b600181d749 up in Southbound
Oct 13 15:46:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:40.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:40 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:40 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00637dd7e2d747e7d10b0abf518dc0efb4d31c19299bdd6b8b3609ce30988633/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:40 standalone.localdomain podman[543873]: 2025-10-13 15:46:40.696858021 +0000 UTC m=+0.281328297 container init 944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:40 standalone.localdomain podman[543873]: 2025-10-13 15:46:40.705244242 +0000 UTC m=+0.289714518 container start 944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:40 standalone.localdomain dnsmasq[543945]: started, version 2.85 cachesize 150
Oct 13 15:46:40 standalone.localdomain dnsmasq[543945]: DNS service limited to local subnets
Oct 13 15:46:40 standalone.localdomain dnsmasq[543945]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:40 standalone.localdomain dnsmasq[543945]: warning: no upstream servers configured
Oct 13 15:46:40 standalone.localdomain dnsmasq-dhcp[543945]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:40 standalone.localdomain dnsmasq[543945]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:46:40 standalone.localdomain dnsmasq-dhcp[543945]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:46:40 standalone.localdomain dnsmasq-dhcp[543945]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:46:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:40.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.753 496978 INFO neutron.agent.dhcp.agent [None req-68072d84-5142-41bc-8077-2ddc486bceb9 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:34Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ee43d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ee4550>], id=0c6e277b-43af-44ef-acd1-8e1246a726b9, ip_allocation=immediate, mac_address=fa:16:3e:5b:d4:48, name=tempest-NetworksTestDHCPv6-159652585, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=4, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['4b80e8bc-e613-4388-9298-de2ba7df553e'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:46:34Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], standard_attr_id=1790, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:46:34Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.755 496978 INFO neutron.agent.dhcp.agent [None req-2221b68d-bb01-4f86-8f6a-7e67457c0195 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:37Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e86340>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e86610>], id=cc034d9b-d42d-408a-9acd-1c6633b7dcb3, ip_allocation=immediate, mac_address=fa:16:3e:ee:54:92, name=tempest-PortsIpV6TestJSON-69332142, network_id=07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['6c3ff5e9-4017-4412-baf0-58ac2954cf2d'], standard_attr_id=1814, status=DOWN, tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:46:38Z on network 07fc0ca9-86f3-46bc-898b-45e15ca9bdf4
Oct 13 15:46:40 standalone.localdomain dnsmasq[543945]: exiting on receipt of SIGTERM
Oct 13 15:46:40 standalone.localdomain systemd[1]: libpod-944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764.scope: Deactivated successfully.
Oct 13 15:46:40 standalone.localdomain podman[543955]: 2025-10-13 15:46:40.786244559 +0000 UTC m=+0.054316425 container died 944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:46:40 standalone.localdomain ceph-mon[29756]: pgmap v3971: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:40 standalone.localdomain podman[543955]: 2025-10-13 15:46:40.817586796 +0000 UTC m=+0.085658652 container cleanup 944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.844 496978 INFO neutron.agent.dhcp.agent [None req-2fa28c30-192b-4ccf-8d92-b2a58772c5ef - - - - - -] DHCP configuration for ports {'cc034d9b-d42d-408a-9acd-1c6633b7dcb3', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:46:40 standalone.localdomain podman[543978]: 2025-10-13 15:46:40.851269267 +0000 UTC m=+0.062714587 container cleanup 944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:40 standalone.localdomain systemd[1]: libpod-conmon-944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764.scope: Deactivated successfully.
Oct 13 15:46:40 standalone.localdomain podman[544013]: 2025-10-13 15:46:40.880708496 +0000 UTC m=+0.052838969 container remove 944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.893 496978 ERROR neutron.agent.linux.utils [None req-68072d84-5142-41bc-8077-2ddc486bceb9 - - - - - -] Exit code: 125; Cmd: ['/etc/neutron/kill_scripts/dnsmasq-kill', 'HUP', 543945]; Stdin: ; Stdout: Mon Oct 13 03:46:40 PM UTC 2025 Sending signal 'HUP' to neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007 (944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764)
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: ; Stderr: Error: no container with name or ID "944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764" found: no such container
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent [None req-68072d84-5142-41bc-8077-2ddc486bceb9 - - - - - -] Unable to reload_allocations dhcp for f934a9b1-f0ba-494d-9c34-bbdad3043007.: neutron_lib.exceptions.ProcessExecutionError: Exit code: 125; Cmd: ['/etc/neutron/kill_scripts/dnsmasq-kill', 'HUP', 543945]; Stdin: ; Stdout: Mon Oct 13 03:46:40 PM UTC 2025 Sending signal 'HUP' to neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007 (944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764)
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: ; Stderr: Error: no container with name or ID "944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764" found: no such container
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 671, in reload_allocations
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent     self._spawn_or_reload_process(reload_with_HUP=True)
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 603, in _spawn_or_reload_process
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent     pm.enable(reload_cfg=reload_with_HUP, ensure_active=True)
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/external_process.py", line 108, in enable
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent     self.reload_cfg()
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/external_process.py", line 117, in reload_cfg
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent     self.disable('HUP', delete_pid_file=False)
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/external_process.py", line 132, in disable
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent     utils.execute(cmd, addl_env=self.cmd_addl_env,
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py", line 156, in execute
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent     raise exceptions.ProcessExecutionError(msg,
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent neutron_lib.exceptions.ProcessExecutionError: Exit code: 125; Cmd: ['/etc/neutron/kill_scripts/dnsmasq-kill', 'HUP', 543945]; Stdin: ; Stdout: Mon Oct 13 03:46:40 PM UTC 2025 Sending signal 'HUP' to neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007 (944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764)
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent ; Stderr: Error: no container with name or ID "944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764" found: no such container
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent 
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.894 496978 ERROR neutron.agent.dhcp.agent 
Oct 13 15:46:40 standalone.localdomain podman[544052]: 
Oct 13 15:46:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:40.955 496978 INFO neutron.agent.dhcp.agent [None req-9221093e-7919-44ae-ba5e-e6b93cdaa26c - - - - - -] DHCP configuration for ports {'0c6e277b-43af-44ef-acd1-8e1246a726b9'} is completed
Oct 13 15:46:40 standalone.localdomain dnsmasq[543829]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/addn_hosts - 1 addresses
Oct 13 15:46:40 standalone.localdomain dnsmasq-dhcp[543829]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/host
Oct 13 15:46:40 standalone.localdomain dnsmasq-dhcp[543829]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/opts
Oct 13 15:46:40 standalone.localdomain podman[544060]: 2025-10-13 15:46:40.963061414 +0000 UTC m=+0.052191779 container kill dc282b88ed014e07d77434ccc4f96281762e012e31868001d68d0d09df25e493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:46:40 standalone.localdomain podman[544052]: 2025-10-13 15:46:40.966536903 +0000 UTC m=+0.072990608 container create e28f59f926e09c88c75850b0c4324f318a1e0f448fe36ec51a70b108761aab62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e83d980f-4895-497f-8345-b0812704fdfa, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:46:40 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:40.986 2 INFO neutron.agent.securitygroups_rpc [None req-172ee0dc-4992-4f6e-af0b-c1a9a05eeadc 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:41 standalone.localdomain systemd[1]: Started libpod-conmon-e28f59f926e09c88c75850b0c4324f318a1e0f448fe36ec51a70b108761aab62.scope.
Oct 13 15:46:41 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:41 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c0c11bad23f68c59e1aeff9e769cee1f6d6ecf05d1f0840e43f2250acaadfa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:41 standalone.localdomain podman[544052]: 2025-10-13 15:46:40.928795326 +0000 UTC m=+0.035249031 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:41 standalone.localdomain podman[544052]: 2025-10-13 15:46:41.026419591 +0000 UTC m=+0.132873346 container init e28f59f926e09c88c75850b0c4324f318a1e0f448fe36ec51a70b108761aab62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e83d980f-4895-497f-8345-b0812704fdfa, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 13 15:46:41 standalone.localdomain podman[544052]: 2025-10-13 15:46:41.035321059 +0000 UTC m=+0.141774784 container start e28f59f926e09c88c75850b0c4324f318a1e0f448fe36ec51a70b108761aab62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e83d980f-4895-497f-8345-b0812704fdfa, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 13 15:46:41 standalone.localdomain dnsmasq[544098]: started, version 2.85 cachesize 150
Oct 13 15:46:41 standalone.localdomain dnsmasq[544098]: DNS service limited to local subnets
Oct 13 15:46:41 standalone.localdomain dnsmasq[544098]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:41 standalone.localdomain dnsmasq[544098]: warning: no upstream servers configured
Oct 13 15:46:41 standalone.localdomain dnsmasq-dhcp[544098]: DHCPv6, static leases only on 2001:db8:0:ffff::, lease time 1d
Oct 13 15:46:41 standalone.localdomain dnsmasq[544098]: read /var/lib/neutron/dhcp/e83d980f-4895-497f-8345-b0812704fdfa/addn_hosts - 0 addresses
Oct 13 15:46:41 standalone.localdomain dnsmasq-dhcp[544098]: read /var/lib/neutron/dhcp/e83d980f-4895-497f-8345-b0812704fdfa/host
Oct 13 15:46:41 standalone.localdomain dnsmasq-dhcp[544098]: read /var/lib/neutron/dhcp/e83d980f-4895-497f-8345-b0812704fdfa/opts
Oct 13 15:46:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:41.092 496978 INFO neutron.agent.dhcp.agent [None req-657011cd-a87b-4b24-9767-acffe202f132 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:46:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:41.274 496978 INFO neutron.agent.dhcp.agent [None req-e5a275d8-7713-43c5-a0dc-477331155eba - - - - - -] DHCP configuration for ports {'cc034d9b-d42d-408a-9acd-1c6633b7dcb3', '5a928a4b-42ad-412d-aff2-58d3625f3d92'} is completed
Oct 13 15:46:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00637dd7e2d747e7d10b0abf518dc0efb4d31c19299bdd6b8b3609ce30988633-merged.mount: Deactivated successfully.
Oct 13 15:46:41 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-944edc849047794a25109d57ba5534fac411ce97e187fa24808fdc8a01d5a764-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:41 standalone.localdomain podman[544133]: 
Oct 13 15:46:41 standalone.localdomain podman[544133]: 2025-10-13 15:46:41.539192906 +0000 UTC m=+0.090844345 container create 5c3d756af88cf490fb2b2f00d24d2b20f068739bb2375f8902eea7ae31681e28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f993e6f-30f4-4230-9cfb-649f43d1ab22, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:41 standalone.localdomain podman[467099]: time="2025-10-13T15:46:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:46:41 standalone.localdomain systemd[1]: Started libpod-conmon-5c3d756af88cf490fb2b2f00d24d2b20f068739bb2375f8902eea7ae31681e28.scope.
Oct 13 15:46:41 standalone.localdomain podman[544133]: 2025-10-13 15:46:41.501009575 +0000 UTC m=+0.052661034 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:41 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:41.612 2 INFO neutron.agent.securitygroups_rpc [None req-27dad5df-a1dc-4d16-92d7-2cdc80b79ace 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['142f0c53-854b-4ec8-814d-7a1be19f0f0a']
Oct 13 15:46:41 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:41 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/018a0acec287dd3209351c00b0136c97a3166c222c81e1f60c2dce0adc07d3cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:41 standalone.localdomain podman[544133]: 2025-10-13 15:46:41.63163818 +0000 UTC m=+0.183289629 container init 5c3d756af88cf490fb2b2f00d24d2b20f068739bb2375f8902eea7ae31681e28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f993e6f-30f4-4230-9cfb-649f43d1ab22, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:41 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:41.632 2 INFO neutron.agent.securitygroups_rpc [None req-e5fd2376-32ee-4530-95cf-583c8174193d db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:46:41 standalone.localdomain podman[544133]: 2025-10-13 15:46:41.640541357 +0000 UTC m=+0.192192796 container start 5c3d756af88cf490fb2b2f00d24d2b20f068739bb2375f8902eea7ae31681e28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f993e6f-30f4-4230-9cfb-649f43d1ab22, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:41 standalone.localdomain dnsmasq[544151]: started, version 2.85 cachesize 150
Oct 13 15:46:41 standalone.localdomain dnsmasq[544151]: DNS service limited to local subnets
Oct 13 15:46:41 standalone.localdomain dnsmasq[544151]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:41 standalone.localdomain dnsmasq[544151]: warning: no upstream servers configured
Oct 13 15:46:41 standalone.localdomain dnsmasq[544151]: read /var/lib/neutron/dhcp/5f993e6f-30f4-4230-9cfb-649f43d1ab22/addn_hosts - 0 addresses
Oct 13 15:46:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:46:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 425584 "" "Go-http-client/1.1"
Oct 13 15:46:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:41.722 496978 INFO neutron.agent.dhcp.agent [None req-ed5a30e0-1c59-4069-9d77-c18e77391ea1 - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:46:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:41.723 496978 INFO neutron.agent.dhcp.agent [None req-30244603-2a22-4385-9705-be812a2f1f3b - - - - - -] Synchronizing state
Oct 13 15:46:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3972: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:46:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 51968 "" "Go-http-client/1.1"
Oct 13 15:46:41 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:41.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:41.920 496978 INFO neutron.agent.dhcp.agent [None req-f4228060-862b-4a6e-803c-b52ff869f85f - - - - - -] DHCP configuration for ports {'7660adc2-06e9-4aa8-a235-fe36e67b7901'} is completed
Oct 13 15:46:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:42.036 496978 INFO neutron.agent.dhcp.agent [None req-0c4836d5-b5e9-426e-ae92-75786fc64a67 - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:46:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:42.037 496978 INFO neutron.agent.dhcp.agent [-] Starting network 36648d54-03b4-46b3-81f4-8c595eafdf9f dhcp configuration
Oct 13 15:46:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:42.037 496978 INFO neutron.agent.dhcp.agent [-] Finished network 36648d54-03b4-46b3-81f4-8c595eafdf9f dhcp configuration
Oct 13 15:46:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:42.038 496978 INFO neutron.agent.dhcp.agent [-] Starting network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e dhcp configuration
Oct 13 15:46:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:42.038 496978 INFO neutron.agent.dhcp.agent [-] Finished network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e dhcp configuration
Oct 13 15:46:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:42.039 496978 INFO neutron.agent.dhcp.agent [-] Starting network ba49ab79-5018-4b85-93a5-708447c5be06 dhcp configuration
Oct 13 15:46:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:42.039 496978 INFO neutron.agent.dhcp.agent [-] Finished network ba49ab79-5018-4b85-93a5-708447c5be06 dhcp configuration
Oct 13 15:46:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:42.039 496978 INFO neutron.agent.dhcp.agent [-] Starting network e640877a-3ffa-40a9-9920-52ad8d244aaf dhcp configuration
Oct 13 15:46:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:42.040 496978 INFO neutron.agent.dhcp.agent [-] Finished network e640877a-3ffa-40a9-9920-52ad8d244aaf dhcp configuration
Oct 13 15:46:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:42.040 496978 INFO neutron.agent.dhcp.agent [-] Starting network f934a9b1-f0ba-494d-9c34-bbdad3043007 dhcp configuration
Oct 13 15:46:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:42Z|00394|binding|INFO|Removing iface tap021e7d92-75 ovn-installed in OVS
Oct 13 15:46:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:42.046 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port cd74d23e-6384-48cd-b0fe-7d38010d7fe7 with type ""
Oct 13 15:46:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:42Z|00395|binding|INFO|Removing lport 021e7d92-754c-4310-8958-4cdd5973c132 ovn-installed in OVS
Oct 13 15:46:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:42.049 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-616c7868-4abb-4d79-8438-572ba78bfc1d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-616c7868-4abb-4d79-8438-572ba78bfc1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9c8f020-11c0-4269-949e-e6bc0d89cb3c, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=021e7d92-754c-4310-8958-4cdd5973c132) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:42.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:42.055 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 021e7d92-754c-4310-8958-4cdd5973c132 in datapath 616c7868-4abb-4d79-8438-572ba78bfc1d unbound from our chassis
Oct 13 15:46:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:42.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:42.059 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 616c7868-4abb-4d79-8438-572ba78bfc1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:46:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:42.061 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed2c757-9d9e-451c-9e08-212ccb2cf83b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:42 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:42.436 2 INFO neutron.agent.securitygroups_rpc [None req-4ab34c2f-cc9a-4240-b62b-2afab319594a 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['142f0c53-854b-4ec8-814d-7a1be19f0f0a']
Oct 13 15:46:42 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:42.493 2 INFO neutron.agent.securitygroups_rpc [None req-9cd690ea-6509-455e-a52b-2843e2312609 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:46:42 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:42.668 2 INFO neutron.agent.securitygroups_rpc [None req-1d38e6bc-b8b4-4277-89f0-4ed21580d8cf 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['142f0c53-854b-4ec8-814d-7a1be19f0f0a']
Oct 13 15:46:42 standalone.localdomain ceph-mon[29756]: pgmap v3972: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:42 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:42.813 2 INFO neutron.agent.securitygroups_rpc [None req-66c5e9a3-6aec-4b17-b4f8-ef77a0d31afd 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['142f0c53-854b-4ec8-814d-7a1be19f0f0a']
Oct 13 15:46:42 standalone.localdomain podman[544201]: 
Oct 13 15:46:42 standalone.localdomain podman[544201]: 2025-10-13 15:46:42.877133841 +0000 UTC m=+0.097983248 container create ba8aedb496cb9ca8cc0332fc41f4a4a34771b9df064b6d9c1d0e0e635afbf9a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:46:42 standalone.localdomain systemd[1]: Started libpod-conmon-ba8aedb496cb9ca8cc0332fc41f4a4a34771b9df064b6d9c1d0e0e635afbf9a0.scope.
Oct 13 15:46:42 standalone.localdomain podman[544201]: 2025-10-13 15:46:42.830568399 +0000 UTC m=+0.051417836 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:42 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:42 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bcb317d34ceeabd95b72b351005439f338ff61752f535caea472d42896f9c5b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:42 standalone.localdomain podman[544201]: 2025-10-13 15:46:42.946082622 +0000 UTC m=+0.166932029 container init ba8aedb496cb9ca8cc0332fc41f4a4a34771b9df064b6d9c1d0e0e635afbf9a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:42 standalone.localdomain podman[544201]: 2025-10-13 15:46:42.951995186 +0000 UTC m=+0.172844593 container start ba8aedb496cb9ca8cc0332fc41f4a4a34771b9df064b6d9c1d0e0e635afbf9a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:42 standalone.localdomain dnsmasq[544219]: started, version 2.85 cachesize 150
Oct 13 15:46:42 standalone.localdomain dnsmasq[544219]: DNS service limited to local subnets
Oct 13 15:46:42 standalone.localdomain dnsmasq[544219]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:42 standalone.localdomain dnsmasq[544219]: warning: no upstream servers configured
Oct 13 15:46:42 standalone.localdomain dnsmasq-dhcp[544219]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:42 standalone.localdomain dnsmasq[544219]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:46:42 standalone.localdomain dnsmasq-dhcp[544219]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:46:42 standalone.localdomain dnsmasq-dhcp[544219]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:46:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:46:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:46:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:46:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:46:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:46:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:46:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:46:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:46:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:46:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:46:43 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:43.008 2 INFO neutron.agent.securitygroups_rpc [None req-35befbbf-305c-4f9f-8fe2-51dd87e6d016 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['142f0c53-854b-4ec8-814d-7a1be19f0f0a']
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.045 496978 INFO neutron.agent.dhcp.agent [-] Finished network f934a9b1-f0ba-494d-9c34-bbdad3043007 dhcp configuration
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.045 496978 INFO neutron.agent.dhcp.agent [None req-0c4836d5-b5e9-426e-ae92-75786fc64a67 - - - - - -] Synchronizing state complete
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.047 496978 INFO neutron.agent.dhcp.agent [None req-2221b68d-bb01-4f86-8f6a-7e67457c0195 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:37Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890e5c10>, <neutron.agent.linux.dhcp.DictModel object at 0x7f18890e5670>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890fce80>, <neutron.agent.linux.dhcp.DictModel object at 0x7f18890fcbb0>], id=cc034d9b-d42d-408a-9acd-1c6633b7dcb3, ip_allocation=immediate, mac_address=fa:16:3e:ee:54:92, name=tempest-PortsIpV6TestJSON-69332142, network_id=07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['6c3ff5e9-4017-4412-baf0-58ac2954cf2d'], standard_attr_id=1814, status=DOWN, tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:46:40Z on network 07fc0ca9-86f3-46bc-898b-45e15ca9bdf4
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.060 496978 INFO neutron.agent.dhcp.agent [None req-68072d84-5142-41bc-8077-2ddc486bceb9 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:38Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889151d60>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889151730>], id=83e3ef43-00fc-49d0-924c-133720475aab, ip_allocation=immediate, mac_address=fa:16:3e:84:f8:5d, name=tempest-NetworksTestDHCPv6-31751363, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=6, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['78b340e9-fb16-42ac-9120-181c04db1e4e'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:46:37Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], standard_attr_id=1828, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:46:38Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:46:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:43.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:43.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:43 standalone.localdomain kernel: device tap021e7d92-75 left promiscuous mode
Oct 13 15:46:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:43.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:43 standalone.localdomain dnsmasq[543829]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/addn_hosts - 2 addresses
Oct 13 15:46:43 standalone.localdomain dnsmasq-dhcp[543829]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/host
Oct 13 15:46:43 standalone.localdomain podman[544255]: 2025-10-13 15:46:43.289090241 +0000 UTC m=+0.101996592 container kill dc282b88ed014e07d77434ccc4f96281762e012e31868001d68d0d09df25e493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:43 standalone.localdomain dnsmasq-dhcp[543829]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/opts
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.310 496978 INFO neutron.agent.dhcp.agent [None req-1c792ee4-bf95-45e9-94b4-da62b76ce36f - - - - - -] DHCP configuration for ports {'d0451465-1990-46e4-8f17-c93f365e9ece', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:46:43 standalone.localdomain dnsmasq[544219]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 1 addresses
Oct 13 15:46:43 standalone.localdomain dnsmasq-dhcp[544219]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:46:43 standalone.localdomain dnsmasq-dhcp[544219]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:46:43 standalone.localdomain podman[544269]: 2025-10-13 15:46:43.31309039 +0000 UTC m=+0.065203735 container kill ba8aedb496cb9ca8cc0332fc41f4a4a34771b9df064b6d9c1d0e0e635afbf9a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:43 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:43.410 2 INFO neutron.agent.securitygroups_rpc [None req-31265eb7-28d0-4ee1-8804-b44656879884 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['142f0c53-854b-4ec8-814d-7a1be19f0f0a']
Oct 13 15:46:43 standalone.localdomain dnsmasq[544151]: exiting on receipt of SIGTERM
Oct 13 15:46:43 standalone.localdomain systemd[1]: libpod-5c3d756af88cf490fb2b2f00d24d2b20f068739bb2375f8902eea7ae31681e28.scope: Deactivated successfully.
Oct 13 15:46:43 standalone.localdomain podman[544303]: 2025-10-13 15:46:43.426537738 +0000 UTC m=+0.060243529 container kill 5c3d756af88cf490fb2b2f00d24d2b20f068739bb2375f8902eea7ae31681e28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f993e6f-30f4-4230-9cfb-649f43d1ab22, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:43 standalone.localdomain podman[544319]: 2025-10-13 15:46:43.485059214 +0000 UTC m=+0.046842992 container died 5c3d756af88cf490fb2b2f00d24d2b20f068739bb2375f8902eea7ae31681e28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f993e6f-30f4-4230-9cfb-649f43d1ab22, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5c3d756af88cf490fb2b2f00d24d2b20f068739bb2375f8902eea7ae31681e28-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:43 standalone.localdomain podman[544319]: 2025-10-13 15:46:43.510678714 +0000 UTC m=+0.072462472 container cleanup 5c3d756af88cf490fb2b2f00d24d2b20f068739bb2375f8902eea7ae31681e28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f993e6f-30f4-4230-9cfb-649f43d1ab22, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:43 standalone.localdomain systemd[1]: libpod-conmon-5c3d756af88cf490fb2b2f00d24d2b20f068739bb2375f8902eea7ae31681e28.scope: Deactivated successfully.
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.524 496978 INFO neutron.agent.dhcp.agent [None req-867269c4-9b4d-4cd1-b254-25bd1430cded - - - - - -] DHCP configuration for ports {'cc034d9b-d42d-408a-9acd-1c6633b7dcb3'} is completed
Oct 13 15:46:43 standalone.localdomain podman[544320]: 2025-10-13 15:46:43.557668099 +0000 UTC m=+0.111247081 container remove 5c3d756af88cf490fb2b2f00d24d2b20f068739bb2375f8902eea7ae31681e28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f993e6f-30f4-4230-9cfb-649f43d1ab22, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:46:43 standalone.localdomain kernel: device tap32b34ca1-18 left promiscuous mode
Oct 13 15:46:43 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:43Z|00396|binding|INFO|Releasing lport 32b34ca1-1805-4606-8a84-7b600181d749 from this chassis (sb_readonly=0)
Oct 13 15:46:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:43.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:43 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:43Z|00397|binding|INFO|Setting lport 32b34ca1-1805-4606-8a84-7b600181d749 down in Southbound
Oct 13 15:46:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:43.577 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe84:8a6b/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-5f993e6f-30f4-4230-9cfb-649f43d1ab22', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f993e6f-30f4-4230-9cfb-649f43d1ab22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a966fd035a9e426eabc035f6c807bc5e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6963109-8ab0-4ad9-850b-845c3afaddf4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=32b34ca1-1805-4606-8a84-7b600181d749) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:43.579 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 32b34ca1-1805-4606-8a84-7b600181d749 in datapath 5f993e6f-30f4-4230-9cfb-649f43d1ab22 unbound from our chassis
Oct 13 15:46:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:43.583 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5f993e6f-30f4-4230-9cfb-649f43d1ab22, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:46:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:43.584 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[22a0571d-4609-4606-b149-1913fd2fd8e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:43.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.645 496978 INFO neutron.agent.dhcp.agent [None req-7fe49ef0-3753-43f0-82c4-f37da2639028 - - - - - -] DHCP configuration for ports {'83e3ef43-00fc-49d0-924c-133720475aab'} is completed
Oct 13 15:46:43 standalone.localdomain podman[544378]: 2025-10-13 15:46:43.699554965 +0000 UTC m=+0.057476774 container kill dc282b88ed014e07d77434ccc4f96281762e012e31868001d68d0d09df25e493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:43 standalone.localdomain dnsmasq[543829]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/addn_hosts - 0 addresses
Oct 13 15:46:43 standalone.localdomain dnsmasq-dhcp[543829]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/host
Oct 13 15:46:43 standalone.localdomain dnsmasq-dhcp[543829]: read /var/lib/neutron/dhcp/07fc0ca9-86f3-46bc-898b-45e15ca9bdf4/opts
Oct 13 15:46:43 standalone.localdomain dnsmasq[544219]: exiting on receipt of SIGTERM
Oct 13 15:46:43 standalone.localdomain podman[544396]: 2025-10-13 15:46:43.722216022 +0000 UTC m=+0.041902178 container kill ba8aedb496cb9ca8cc0332fc41f4a4a34771b9df064b6d9c1d0e0e635afbf9a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:46:43 standalone.localdomain systemd[1]: libpod-ba8aedb496cb9ca8cc0332fc41f4a4a34771b9df064b6d9c1d0e0e635afbf9a0.scope: Deactivated successfully.
Oct 13 15:46:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3973: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.785 496978 INFO neutron.agent.dhcp.agent [None req-d3aec8a4-a8de-46e0-aade-b10a8cd2c0f9 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.786 496978 INFO neutron.agent.dhcp.agent [None req-d3aec8a4-a8de-46e0-aade-b10a8cd2c0f9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:43 standalone.localdomain podman[544434]: 2025-10-13 15:46:43.807940706 +0000 UTC m=+0.054203232 container died ba8aedb496cb9ca8cc0332fc41f4a4a34771b9df064b6d9c1d0e0e635afbf9a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:46:43 standalone.localdomain podman[544434]: 2025-10-13 15:46:43.852075122 +0000 UTC m=+0.098337648 container remove ba8aedb496cb9ca8cc0332fc41f4a4a34771b9df064b6d9c1d0e0e635afbf9a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:46:43 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:43Z|00398|binding|INFO|Releasing lport d0451465-1990-46e4-8f17-c93f365e9ece from this chassis (sb_readonly=0)
Oct 13 15:46:43 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:43Z|00399|binding|INFO|Setting lport d0451465-1990-46e4-8f17-c93f365e9ece down in Southbound
Oct 13 15:46:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:43.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:43 standalone.localdomain kernel: device tapd0451465-19 left promiscuous mode
Oct 13 15:46:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:43.866 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=d0451465-1990-46e4-8f17-c93f365e9ece) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:43.867 378821 INFO neutron.agent.ovn.metadata.agent [-] Port d0451465-1990-46e4-8f17-c93f365e9ece in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:46:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:43.868 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:43.869 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[0a61de52-b03e-4982-9787-3f6b4d54c7d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:43.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:43 standalone.localdomain systemd[1]: libpod-conmon-ba8aedb496cb9ca8cc0332fc41f4a4a34771b9df064b6d9c1d0e0e635afbf9a0.scope: Deactivated successfully.
Oct 13 15:46:43 standalone.localdomain dnsmasq[543613]: read /var/lib/neutron/dhcp/616c7868-4abb-4d79-8438-572ba78bfc1d/addn_hosts - 0 addresses
Oct 13 15:46:43 standalone.localdomain dnsmasq-dhcp[543613]: read /var/lib/neutron/dhcp/616c7868-4abb-4d79-8438-572ba78bfc1d/host
Oct 13 15:46:43 standalone.localdomain podman[544455]: 2025-10-13 15:46:43.947702086 +0000 UTC m=+0.145891883 container kill 489e7f4a6ea7994a5f089696746787d9b0b43e2947b2dc1eeac6b7dd44568039 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-616c7868-4abb-4d79-8438-572ba78bfc1d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 13 15:46:43 standalone.localdomain dnsmasq-dhcp[543613]: read /var/lib/neutron/dhcp/616c7868-4abb-4d79-8438-572ba78bfc1d/opts
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent [None req-822ffe72-ffeb-4f45-aba6-a6fcc9fc624e - - - - - -] Unable to reload_allocations dhcp for 616c7868-4abb-4d79-8438-572ba78bfc1d.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap021e7d92-75 not found in namespace qdhcp-616c7868-4abb-4d79-8438-572ba78bfc1d.
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     return fut.result()
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     raise self._exception
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap021e7d92-75 not found in namespace qdhcp-616c7868-4abb-4d79-8438-572ba78bfc1d.
Oct 13 15:46:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:43.982 496978 ERROR neutron.agent.dhcp.agent 
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.062 496978 INFO neutron.agent.dhcp.agent [None req-e1e93fee-8259-4837-83af-8347dfaf06fe - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.064 496978 INFO neutron.agent.dhcp.agent [None req-0c4836d5-b5e9-426e-ae92-75786fc64a67 - - - - - -] Synchronizing state
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.228 496978 INFO neutron.agent.dhcp.agent [None req-c4a77193-e250-44e3-9610-931116eeb0c3 - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.230 496978 INFO neutron.agent.dhcp.agent [-] Starting network 0ab29eaa-5224-4e03-897b-7f43cb33bf62 dhcp configuration
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.230 496978 INFO neutron.agent.dhcp.agent [-] Finished network 0ab29eaa-5224-4e03-897b-7f43cb33bf62 dhcp configuration
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.231 496978 INFO neutron.agent.dhcp.agent [-] Starting network 36648d54-03b4-46b3-81f4-8c595eafdf9f dhcp configuration
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.231 496978 INFO neutron.agent.dhcp.agent [-] Finished network 36648d54-03b4-46b3-81f4-8c595eafdf9f dhcp configuration
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.231 496978 INFO neutron.agent.dhcp.agent [-] Starting network 5f993e6f-30f4-4230-9cfb-649f43d1ab22 dhcp configuration
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.232 496978 INFO neutron.agent.dhcp.agent [-] Finished network 5f993e6f-30f4-4230-9cfb-649f43d1ab22 dhcp configuration
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.232 496978 INFO neutron.agent.dhcp.agent [-] Starting network 616c7868-4abb-4d79-8438-572ba78bfc1d dhcp configuration
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.235 496978 INFO neutron.agent.dhcp.agent [-] Starting network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e dhcp configuration
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.236 496978 INFO neutron.agent.dhcp.agent [-] Finished network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e dhcp configuration
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.236 496978 INFO neutron.agent.dhcp.agent [-] Starting network b2b1caf7-9527-477a-a4f6-9c95f6068c7e dhcp configuration
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.237 496978 INFO neutron.agent.dhcp.agent [-] Finished network b2b1caf7-9527-477a-a4f6-9c95f6068c7e dhcp configuration
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.237 496978 INFO neutron.agent.dhcp.agent [-] Starting network ba49ab79-5018-4b85-93a5-708447c5be06 dhcp configuration
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.243 496978 INFO neutron.agent.dhcp.agent [-] Starting network e640877a-3ffa-40a9-9920-52ad8d244aaf dhcp configuration
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.243 496978 INFO neutron.agent.dhcp.agent [-] Finished network e640877a-3ffa-40a9-9920-52ad8d244aaf dhcp configuration
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.244 496978 INFO neutron.agent.dhcp.agent [-] Starting network f934a9b1-f0ba-494d-9c34-bbdad3043007 dhcp configuration
Oct 13 15:46:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3bcb317d34ceeabd95b72b351005439f338ff61752f535caea472d42896f9c5b-merged.mount: Deactivated successfully.
Oct 13 15:46:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba8aedb496cb9ca8cc0332fc41f4a4a34771b9df064b6d9c1d0e0e635afbf9a0-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-018a0acec287dd3209351c00b0136c97a3166c222c81e1f60c2dce0adc07d3cd-merged.mount: Deactivated successfully.
Oct 13 15:46:44 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d5f993e6f\x2d30f4\x2d4230\x2d9cfb\x2d649f43d1ab22.mount: Deactivated successfully.
Oct 13 15:46:44 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:46:44 standalone.localdomain podman[544499]: 2025-10-13 15:46:44.448079434 +0000 UTC m=+0.066842876 container kill 489e7f4a6ea7994a5f089696746787d9b0b43e2947b2dc1eeac6b7dd44568039 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-616c7868-4abb-4d79-8438-572ba78bfc1d, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:44 standalone.localdomain dnsmasq[543613]: exiting on receipt of SIGTERM
Oct 13 15:46:44 standalone.localdomain systemd[1]: libpod-489e7f4a6ea7994a5f089696746787d9b0b43e2947b2dc1eeac6b7dd44568039.scope: Deactivated successfully.
Oct 13 15:46:44 standalone.localdomain podman[544514]: 2025-10-13 15:46:44.531462165 +0000 UTC m=+0.063698338 container died 489e7f4a6ea7994a5f089696746787d9b0b43e2947b2dc1eeac6b7dd44568039 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-616c7868-4abb-4d79-8438-572ba78bfc1d, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:44 standalone.localdomain podman[544514]: 2025-10-13 15:46:44.564674322 +0000 UTC m=+0.096910455 container cleanup 489e7f4a6ea7994a5f089696746787d9b0b43e2947b2dc1eeac6b7dd44568039 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-616c7868-4abb-4d79-8438-572ba78bfc1d, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:44 standalone.localdomain systemd[1]: libpod-conmon-489e7f4a6ea7994a5f089696746787d9b0b43e2947b2dc1eeac6b7dd44568039.scope: Deactivated successfully.
Oct 13 15:46:44 standalone.localdomain podman[544515]: 2025-10-13 15:46:44.616709374 +0000 UTC m=+0.143537668 container remove 489e7f4a6ea7994a5f089696746787d9b0b43e2947b2dc1eeac6b7dd44568039 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-616c7868-4abb-4d79-8438-572ba78bfc1d, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent [None req-efd3686a-ee67-4e9f-b4b5-3f3c83301886 - - - - - -] Unable to enable dhcp for 616c7868-4abb-4d79-8438-572ba78bfc1d.: oslo_messaging.rpc.client.RemoteError: Remote error: SubnetInUse Unable to complete operation on subnet 9b1384d3-0cb4-49c4-b91f-0079b4456503: This subnet is being modified by another concurrent operation.
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n    res = self.dispatcher.dispatch(message)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n    result = func(ctxt, **new_args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n    return func(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    context_reference.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in 
force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/quota/resource_registry.py", line 95, in wrapper\n    ret_val = f(_self, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 292, in create_dhcp_port\n    return self._port_action(plugin, context, port, \'create_port\')\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 118, in _port_action\n    return p_utils.create_port(plugin, context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/plugins/utils.py", line 338, in create_port\n    return core_plugin.create_port(\n', '  File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 728, in inner\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 226, in wrapped\n    return f_with_retry(*args, **kwargs,\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n  
  return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    context_reference.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1582, in create_port\n    result, mech_context = self._create_port_db(context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1547, in _create_port_db\n    port_db = self.create_port_db(context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/db_base_plugin_v2.py", line 1501, in create_port_db\n    self.ipam.allocate_ips_for_port_and_store(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 219, in allocate_ips_for_port_and_store\n    ips = self.allocate_ips_for_port(context, port_copy)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 224, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 1044, in wrapper\n    return fn(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 226, in allocate_ips_for_port\n    return self._allocate_ips_for_port(context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 258, in _allocate_ips_for_port\n    subnets = self._ipam_get_subnets(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 686, in _ipam_get_subnets\n    subnet.read_lock_register(\n', '  File 
"/usr/lib/python3.9/site-packages/neutron/db/models_v2.py", line 81, in read_lock_register\n    raise exception\n', 'neutron_lib.exceptions.SubnetInUse: Unable to complete operation on subnet 9b1384d3-0cb4-49c4-b91f-0079b4456503: This subnet is being modified by another concurrent operation.\n'].
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 324, in enable
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     common_utils.wait_until_true(self._enable, timeout=300)
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 744, in wait_until_true
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     while not predicate():
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 336, in _enable
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     interface_name = self.device_manager.setup(
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1825, in setup
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     self.cleanup_stale_devices(network, dhcp_port=None)
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     self.force_reraise()
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     raise self.value
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1820, in setup
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     port = self.setup_dhcp_port(network, segment)
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1755, in setup_dhcp_port
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     dhcp_port = setup_method(network, device_id, dhcp_subnets)
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1703, in _setup_new_dhcp_port
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     return self.plugin.create_dhcp_port({'port': port_dict})
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 893, in create_dhcp_port
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     port = cctxt.call(self.context, 'create_dhcp_port',
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron_lib/rpc.py", line 157, in call
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     return self._original_context.call(ctxt, method, **kwargs)
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     result = self.transport._send(
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     return self._driver.send(target, ctxt, message,
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     return self._send(target, ctxt, message, wait_for_reply, timeout,
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent     raise result
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent oslo_messaging.rpc.client.RemoteError: Remote error: SubnetInUse Unable to complete operation on subnet 9b1384d3-0cb4-49c4-b91f-0079b4456503: This subnet is being modified by another concurrent operation.
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n    res = self.dispatcher.dispatch(message)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n    result = func(ctxt, **new_args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n    return func(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    context_reference.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/quota/resource_registry.py", line 95, in wrapper\n    ret_val = f(_self, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 292, in create_dhcp_port\n    return self._port_action(plugin, context, port, \'create_port\')\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 118, in _port_action\n    return p_utils.create_port(plugin, context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/plugins/utils.py", line 338, in create_port\n    return core_plugin.create_port(\n', '  File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 728, in inner\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 226, in wrapped\n    return f_with_retry(*args, **kwargs,\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    context_reference.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1582, in create_port\n    result, mech_context = self._create_port_db(context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1547, in _create_port_db\n    port_db = self.create_port_db(context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/db_base_plugin_v2.py", line 1501, in create_port_db\n    self.ipam.allocate_ips_for_port_and_store(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 219, in allocate_ips_for_port_and_store\n    ips = self.allocate_ips_for_port(context, port_copy)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 224, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 1044, in wrapper\n    return fn(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 226, in allocate_ips_for_port\n    return self._allocate_ips_for_port(context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 258, in _allocate_ips_for_port\n    subnets = self._ipam_get_subnets(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 686, in _ipam_get_subnets\n    
subnet.read_lock_register(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/models_v2.py", line 81, in read_lock_register\n    raise exception\n', 'neutron_lib.exceptions.SubnetInUse: Unable to complete operation on subnet 9b1384d3-0cb4-49c4-b91f-0079b4456503: This subnet is being modified by another concurrent operation.\n'].
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.780 496978 ERROR neutron.agent.dhcp.agent 
Oct 13 15:46:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:44.783 496978 INFO neutron.agent.dhcp.agent [None req-efd3686a-ee67-4e9f-b4b5-3f3c83301886 - - - - - -] Finished network 616c7868-4abb-4d79-8438-572ba78bfc1d dhcp configuration
Oct 13 15:46:44 standalone.localdomain ceph-mon[29756]: pgmap v3973: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:45Z|00400|binding|INFO|Removing iface tapcfbcd979-ac ovn-installed in OVS
Oct 13 15:46:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:45Z|00401|binding|INFO|Removing lport cfbcd979-acb8-49ae-b5f5-f71da8b6e964 ovn-installed in OVS
Oct 13 15:46:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:45.007 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 2c9acefe-c4cb-4c5f-8e69-bf6d3c97c2ad with type ""
Oct 13 15:46:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:45.009 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::2/64 2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce51b6cd7de42daa5659d6797c11a37', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=865c3b7f-9b2e-4591-b3e2-a355865805ff, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=cfbcd979-acb8-49ae-b5f5-f71da8b6e964) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:45.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:45.013 378821 INFO neutron.agent.ovn.metadata.agent [-] Port cfbcd979-acb8-49ae-b5f5-f71da8b6e964 in datapath 07fc0ca9-86f3-46bc-898b-45e15ca9bdf4 unbound from our chassis
Oct 13 15:46:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:45.015 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 07fc0ca9-86f3-46bc-898b-45e15ca9bdf4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:45.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:45.017 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[9d19ba5f-84f5-4d62-b0d0-e830451e0dc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:45.214 496978 INFO neutron.agent.linux.ip_lib [None req-d1d58826-4530-4355-bffb-d6d0d00aee83 - - - - - -] Device tape9565d20-56 cannot be used as it has no MAC address
Oct 13 15:46:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:45.246 496978 INFO neutron.agent.linux.ip_lib [None req-66634ce9-43ef-48cd-89b0-99cde73dd9b6 - - - - - -] Device tape4c0cef2-6b cannot be used as it has no MAC address
Oct 13 15:46:45 standalone.localdomain account-server[114571]: 172.20.0.100 - - [13/Oct/2025:15:46:45 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txf056f40131a14c4590c04-0068ed1ee5" "proxy-server 2" 0.0007 "-" 23 -
Oct 13 15:46:45 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: txf056f40131a14c4590c04-0068ed1ee5)
Oct 13 15:46:45 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txf056f40131a14c4590c04-0068ed1ee5)
Oct 13 15:46:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:45.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:45 standalone.localdomain kernel: device tape9565d20-56 entered promiscuous mode
Oct 13 15:46:45 standalone.localdomain NetworkManager[5962]: <info>  [1760370405.3030] manager: (tape9565d20-56): new Generic device (/org/freedesktop/NetworkManager/Devices/75)
Oct 13 15:46:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:45Z|00402|binding|INFO|Claiming lport e9565d20-564d-4f68-a5fa-bce92968253e for this chassis.
Oct 13 15:46:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:45Z|00403|binding|INFO|e9565d20-564d-4f68-a5fa-bce92968253e: Claiming unknown
Oct 13 15:46:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:45.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:45 standalone.localdomain systemd-udevd[544559]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:46:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:45.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:45Z|00404|binding|INFO|Setting lport e9565d20-564d-4f68-a5fa-bce92968253e ovn-installed in OVS
Oct 13 15:46:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:45.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:45Z|00405|binding|INFO|Setting lport e9565d20-564d-4f68-a5fa-bce92968253e up in Southbound
Oct 13 15:46:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:45.317 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=e9565d20-564d-4f68-a5fa-bce92968253e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:45 standalone.localdomain kernel: device tape4c0cef2-6b entered promiscuous mode
Oct 13 15:46:45 standalone.localdomain NetworkManager[5962]: <info>  [1760370405.3207] manager: (tape4c0cef2-6b): new Generic device (/org/freedesktop/NetworkManager/Devices/76)
Oct 13 15:46:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:45.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:45.320 378821 INFO neutron.agent.ovn.metadata.agent [-] Port e9565d20-564d-4f68-a5fa-bce92968253e in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:46:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:45Z|00406|if_status|INFO|Not updating pb chassis for e4c0cef2-6b47-4cf5-a8f3-05a1c46e427c now as sb is readonly
Oct 13 15:46:45 standalone.localdomain systemd-udevd[544562]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:46:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:45.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:45Z|00407|binding|INFO|Claiming lport e4c0cef2-6b47-4cf5-a8f3-05a1c46e427c for this chassis.
Oct 13 15:46:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:45Z|00408|binding|INFO|e4c0cef2-6b47-4cf5-a8f3-05a1c46e427c: Claiming unknown
Oct 13 15:46:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:45Z|00409|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:46:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:45.334 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:45.336 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b55461-92f6-4145-be5f-37a8168f669a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-54d0978919b33f15517bb459767baa4c7e25a966601c1de9cb8671cd8bc7729b-merged.mount: Deactivated successfully.
Oct 13 15:46:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-489e7f4a6ea7994a5f089696746787d9b0b43e2947b2dc1eeac6b7dd44568039-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:45 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape9565d20-56: No such device
Oct 13 15:46:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:45.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:45.354 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fec6:750d/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-ba49ab79-5018-4b85-93a5-708447c5be06', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba49ab79-5018-4b85-93a5-708447c5be06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a966fd035a9e426eabc035f6c807bc5e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba89c7f2-0e58-463e-b8dc-2f8f67ed0ca3, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=e4c0cef2-6b47-4cf5-a8f3-05a1c46e427c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:45 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape9565d20-56: No such device
Oct 13 15:46:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:45.356 378821 INFO neutron.agent.ovn.metadata.agent [-] Port e4c0cef2-6b47-4cf5-a8f3-05a1c46e427c in datapath ba49ab79-5018-4b85-93a5-708447c5be06 bound to our chassis
Oct 13 15:46:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:45.361 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port e549a507-7157-4e4f-941f-2ed944bb7475 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:46:45 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape9565d20-56: No such device
Oct 13 15:46:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:45.362 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba49ab79-5018-4b85-93a5-708447c5be06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:46:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:45.363 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[5e34a3b5-b695-44b2-8a1b-e41961296c4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:45 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape9565d20-56: No such device
Oct 13 15:46:45 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape9565d20-56: No such device
Oct 13 15:46:45 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape9565d20-56: No such device
Oct 13 15:46:45 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape9565d20-56: No such device
Oct 13 15:46:45 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tape9565d20-56: No such device
Oct 13 15:46:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:45.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:45Z|00410|binding|INFO|Setting lport e4c0cef2-6b47-4cf5-a8f3-05a1c46e427c ovn-installed in OVS
Oct 13 15:46:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:45Z|00411|binding|INFO|Setting lport e4c0cef2-6b47-4cf5-a8f3-05a1c46e427c up in Southbound
Oct 13 15:46:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:45.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:45.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:45 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:45.710 2 INFO neutron.agent.securitygroups_rpc [None req-5ea59e6b-46ac-496c-b54d-fd3650c3b9a1 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:46:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:46:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3974: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:46:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:46:45 standalone.localdomain podman[544612]: 2025-10-13 15:46:45.846219707 +0000 UTC m=+0.101734854 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:46:45 standalone.localdomain podman[544612]: 2025-10-13 15:46:45.856393635 +0000 UTC m=+0.111908832 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:46:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:46:45 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:46:45 standalone.localdomain podman[544610]: 2025-10-13 15:46:45.95656094 +0000 UTC m=+0.214454081 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:45 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:45.962 2 INFO neutron.agent.securitygroups_rpc [None req-6006608c-62e2-4806-b42c-818cd73623c8 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['415691d6-4848-4612-b65c-790f4b1e40ee']
Oct 13 15:46:45 standalone.localdomain podman[544613]: 2025-10-13 15:46:45.923904411 +0000 UTC m=+0.169891731 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, release=1, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-object-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, container_name=swift_object_server, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12)
Oct 13 15:46:45 standalone.localdomain podman[544661]: 2025-10-13 15:46:45.978745652 +0000 UTC m=+0.097549024 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., version=17.1.9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, summary=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, release=1, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, distribution-scope=public)
Oct 13 15:46:45 standalone.localdomain podman[544610]: 2025-10-13 15:46:45.996043421 +0000 UTC m=+0.253936572 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 13 15:46:46 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:46:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:46:46 standalone.localdomain podman[544709]: 2025-10-13 15:46:46.089247638 +0000 UTC m=+0.061815749 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, managed_by=tripleo_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, io.openshift.expose-services=, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, name=rhosp17/openstack-swift-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, tcib_managed=true, release=1, vendor=Red Hat, Inc., batch=17.1_20250721.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, architecture=x86_64)
Oct 13 15:46:46 standalone.localdomain podman[544613]: 2025-10-13 15:46:46.142769498 +0000 UTC m=+0.388756828 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, com.redhat.component=openstack-swift-object-container, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-07-21T14:56:28, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, container_name=swift_object_server, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:46:46 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:46:46 standalone.localdomain podman[544661]: 2025-10-13 15:46:46.22715788 +0000 UTC m=+0.345961222 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, name=rhosp17/openstack-swift-account, release=1, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, version=17.1.9, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=swift_account_server, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, build-date=2025-07-21T16:11:22)
Oct 13 15:46:46 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:46:46 standalone.localdomain podman[544709]: 2025-10-13 15:46:46.317690525 +0000 UTC m=+0.290258626 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, name=rhosp17/openstack-swift-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, container_name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, version=17.1.9, release=1)
Oct 13 15:46:46 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:46:46 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:46.503 2 INFO neutron.agent.securitygroups_rpc [None req-cc9c0176-9643-4a8e-88c7-14e3c0f44a5f 3d4d48f603104c4d919565e926e139c4 a966fd035a9e426eabc035f6c807bc5e - - default default] Security group member updated ['4142dd56-682e-4ee4-8478-a4934a3acf9a']
Oct 13 15:46:46 standalone.localdomain podman[544794]: 
Oct 13 15:46:46 standalone.localdomain podman[544794]: 2025-10-13 15:46:46.538007546 +0000 UTC m=+0.141151693 container create b395f481d1a728b98d84c2c8a7bf7f87917480d019415f9116f757f68973060a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba49ab79-5018-4b85-93a5-708447c5be06, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:46:46 standalone.localdomain systemd[1]: Started libpod-conmon-b395f481d1a728b98d84c2c8a7bf7f87917480d019415f9116f757f68973060a.scope.
Oct 13 15:46:46 standalone.localdomain podman[544813]: 
Oct 13 15:46:46 standalone.localdomain podman[544794]: 2025-10-13 15:46:46.495182691 +0000 UTC m=+0.098326878 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:46 standalone.localdomain podman[544813]: 2025-10-13 15:46:46.602808558 +0000 UTC m=+0.097752601 container create bf4f7947376a353dceba0c3259a15f89f872a5a6c7be1eebdc1b998483d30968 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:46 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:46 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b1903a6565e352f4edae36f8048de3d6f60dc081ef743d3e136797f3ee557d4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:46 standalone.localdomain podman[544794]: 2025-10-13 15:46:46.630869773 +0000 UTC m=+0.234013970 container init b395f481d1a728b98d84c2c8a7bf7f87917480d019415f9116f757f68973060a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba49ab79-5018-4b85-93a5-708447c5be06, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:46 standalone.localdomain systemd[1]: Started libpod-conmon-bf4f7947376a353dceba0c3259a15f89f872a5a6c7be1eebdc1b998483d30968.scope.
Oct 13 15:46:46 standalone.localdomain podman[544813]: 2025-10-13 15:46:46.544525489 +0000 UTC m=+0.039469552 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:46 standalone.localdomain podman[544794]: 2025-10-13 15:46:46.64488419 +0000 UTC m=+0.248028357 container start b395f481d1a728b98d84c2c8a7bf7f87917480d019415f9116f757f68973060a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba49ab79-5018-4b85-93a5-708447c5be06, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:46 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:46 standalone.localdomain dnsmasq[544835]: started, version 2.85 cachesize 150
Oct 13 15:46:46 standalone.localdomain dnsmasq[544835]: DNS service limited to local subnets
Oct 13 15:46:46 standalone.localdomain dnsmasq[544835]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:46 standalone.localdomain dnsmasq[544835]: warning: no upstream servers configured
Oct 13 15:46:46 standalone.localdomain dnsmasq-dhcp[544835]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:46 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c7caa976d9f9e686438e610e6f894a89f4b968bef67a631c877eb46e0af6930/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:46 standalone.localdomain dnsmasq[544835]: read /var/lib/neutron/dhcp/ba49ab79-5018-4b85-93a5-708447c5be06/addn_hosts - 0 addresses
Oct 13 15:46:46 standalone.localdomain dnsmasq-dhcp[544835]: read /var/lib/neutron/dhcp/ba49ab79-5018-4b85-93a5-708447c5be06/host
Oct 13 15:46:46 standalone.localdomain dnsmasq-dhcp[544835]: read /var/lib/neutron/dhcp/ba49ab79-5018-4b85-93a5-708447c5be06/opts
Oct 13 15:46:46 standalone.localdomain podman[544813]: 2025-10-13 15:46:46.671588194 +0000 UTC m=+0.166532217 container init bf4f7947376a353dceba0c3259a15f89f872a5a6c7be1eebdc1b998483d30968 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:46:46 standalone.localdomain podman[544813]: 2025-10-13 15:46:46.680857622 +0000 UTC m=+0.175801645 container start bf4f7947376a353dceba0c3259a15f89f872a5a6c7be1eebdc1b998483d30968 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:46:46 standalone.localdomain dnsmasq[544838]: started, version 2.85 cachesize 150
Oct 13 15:46:46 standalone.localdomain dnsmasq[544838]: DNS service limited to local subnets
Oct 13 15:46:46 standalone.localdomain dnsmasq[544838]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:46 standalone.localdomain dnsmasq[544838]: warning: no upstream servers configured
Oct 13 15:46:46 standalone.localdomain dnsmasq-dhcp[544838]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:46 standalone.localdomain dnsmasq[544838]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:46:46 standalone.localdomain dnsmasq-dhcp[544838]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:46:46 standalone.localdomain dnsmasq-dhcp[544838]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:46:46 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:46.712 496978 INFO neutron.agent.dhcp.agent [None req-66634ce9-43ef-48cd-89b0-99cde73dd9b6 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:46:46 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:46.712 496978 INFO neutron.agent.dhcp.agent [None req-66634ce9-43ef-48cd-89b0-99cde73dd9b6 - - - - - -] Finished network ba49ab79-5018-4b85-93a5-708447c5be06 dhcp configuration
Oct 13 15:46:46 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:46.728 496978 INFO neutron.agent.dhcp.agent [None req-d1d58826-4530-4355-bffb-d6d0d00aee83 - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:46:46 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:46.728 496978 INFO neutron.agent.dhcp.agent [None req-d1d58826-4530-4355-bffb-d6d0d00aee83 - - - - - -] Finished network f934a9b1-f0ba-494d-9c34-bbdad3043007 dhcp configuration
Oct 13 15:46:46 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:46.729 496978 INFO neutron.agent.dhcp.agent [None req-c4a77193-e250-44e3-9610-931116eeb0c3 - - - - - -] Synchronizing state complete
Oct 13 15:46:46 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:46.730 496978 INFO neutron.agent.dhcp.agent [None req-c4a77193-e250-44e3-9610-931116eeb0c3 - - - - - -] Synchronizing state
Oct 13 15:46:46 standalone.localdomain ceph-mon[29756]: pgmap v3974: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:46 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:46.872 496978 INFO neutron.agent.dhcp.agent [None req-584bcf5e-83b3-4b84-a6f1-e699b41b6371 - - - - - -] DHCP configuration for ports {'66edc8ec-1627-417a-87a9-59aee1458efa', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:46:46 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:46.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:47.037 496978 INFO neutron.agent.dhcp.agent [None req-2cae94b1-3c68-4232-ba9d-d4dcefc09ebc - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:46:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:47.066 496978 INFO neutron.agent.dhcp.agent [None req-3d1acb02-4e63-4fd5-8a06-3569ee461e17 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:46:47 standalone.localdomain dnsmasq[543829]: exiting on receipt of SIGTERM
Oct 13 15:46:47 standalone.localdomain podman[544855]: 2025-10-13 15:46:47.258049377 +0000 UTC m=+0.063629486 container kill dc282b88ed014e07d77434ccc4f96281762e012e31868001d68d0d09df25e493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:47 standalone.localdomain systemd[1]: libpod-dc282b88ed014e07d77434ccc4f96281762e012e31868001d68d0d09df25e493.scope: Deactivated successfully.
Oct 13 15:46:47 standalone.localdomain podman[544869]: 2025-10-13 15:46:47.338048803 +0000 UTC m=+0.062750049 container died dc282b88ed014e07d77434ccc4f96281762e012e31868001d68d0d09df25e493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:46:47 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d616c7868\x2d4abb\x2d4d79\x2d8438\x2d572ba78bfc1d.mount: Deactivated successfully.
Oct 13 15:46:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc282b88ed014e07d77434ccc4f96281762e012e31868001d68d0d09df25e493-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:47 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1c288f411022dee4bb0beed992915631fb324db15d55a1a957b1a0fb27347ce0-merged.mount: Deactivated successfully.
Oct 13 15:46:47 standalone.localdomain podman[544869]: 2025-10-13 15:46:47.37036971 +0000 UTC m=+0.095070926 container cleanup dc282b88ed014e07d77434ccc4f96281762e012e31868001d68d0d09df25e493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:47 standalone.localdomain systemd[1]: libpod-conmon-dc282b88ed014e07d77434ccc4f96281762e012e31868001d68d0d09df25e493.scope: Deactivated successfully.
Oct 13 15:46:47 standalone.localdomain podman[544871]: 2025-10-13 15:46:47.430418054 +0000 UTC m=+0.145463608 container remove dc282b88ed014e07d77434ccc4f96281762e012e31868001d68d0d09df25e493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-07fc0ca9-86f3-46bc-898b-45e15ca9bdf4, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:46:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:47.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:47 standalone.localdomain kernel: device tapcfbcd979-ac left promiscuous mode
Oct 13 15:46:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:47.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:47.491 496978 INFO neutron.agent.dhcp.agent [None req-a0c7966c-b9ee-4467-a08c-bb1059b68fa5 - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:46:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:47.492 496978 INFO neutron.agent.dhcp.agent [-] Starting network 0ab29eaa-5224-4e03-897b-7f43cb33bf62 dhcp configuration
Oct 13 15:46:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:47.493 496978 INFO neutron.agent.dhcp.agent [-] Finished network 0ab29eaa-5224-4e03-897b-7f43cb33bf62 dhcp configuration
Oct 13 15:46:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:47.493 496978 INFO neutron.agent.dhcp.agent [-] Starting network 36648d54-03b4-46b3-81f4-8c595eafdf9f dhcp configuration
Oct 13 15:46:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:47.497 496978 INFO neutron.agent.dhcp.agent [-] Starting network 5f993e6f-30f4-4230-9cfb-649f43d1ab22 dhcp configuration
Oct 13 15:46:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:47.497 496978 INFO neutron.agent.dhcp.agent [-] Finished network 5f993e6f-30f4-4230-9cfb-649f43d1ab22 dhcp configuration
Oct 13 15:46:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:47.498 496978 INFO neutron.agent.dhcp.agent [-] Starting network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e dhcp configuration
Oct 13 15:46:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:47.498 496978 INFO neutron.agent.dhcp.agent [-] Finished network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e dhcp configuration
Oct 13 15:46:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:47.501 496978 INFO neutron.agent.dhcp.agent [-] Starting network b2b1caf7-9527-477a-a4f6-9c95f6068c7e dhcp configuration
Oct 13 15:46:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:47.502 496978 INFO neutron.agent.dhcp.agent [-] Finished network b2b1caf7-9527-477a-a4f6-9c95f6068c7e dhcp configuration
Oct 13 15:46:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:47.502 496978 INFO neutron.agent.dhcp.agent [-] Starting network c01f2fb2-62f8-4e62-8183-0b35ede8dc9b dhcp configuration
Oct 13 15:46:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:47.506 496978 INFO neutron.agent.dhcp.agent [-] Starting network e640877a-3ffa-40a9-9920-52ad8d244aaf dhcp configuration
Oct 13 15:46:47 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:47.536 2 INFO neutron.agent.securitygroups_rpc [None req-d236ade7-d6c9-4923-b260-3491932e6544 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:46:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3975: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:47 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:47.821 2 INFO neutron.agent.securitygroups_rpc [None req-15b74e09-16ca-4728-a53b-6139f377e166 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:46:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:48.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:48 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d07fc0ca9\x2d86f3\x2d46bc\x2d898b\x2d45e15ca9bdf4.mount: Deactivated successfully.
Oct 13 15:46:48 standalone.localdomain ceph-mon[29756]: pgmap v3975: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:48 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:48.739 496978 INFO neutron.agent.linux.ip_lib [None req-076e4382-a6ba-4c74-bcfb-bdf1ff4b6cf6 - - - - - -] Device tap2540a99b-db cannot be used as it has no MAC address
Oct 13 15:46:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:48.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:48 standalone.localdomain kernel: device tap2540a99b-db entered promiscuous mode
Oct 13 15:46:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:48.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:48 standalone.localdomain NetworkManager[5962]: <info>  [1760370408.7885] manager: (tap2540a99b-db): new Generic device (/org/freedesktop/NetworkManager/Devices/77)
Oct 13 15:46:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:48Z|00412|binding|INFO|Claiming lport 2540a99b-db26-4a1d-8d4d-ea860067fbf8 for this chassis.
Oct 13 15:46:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:48Z|00413|binding|INFO|2540a99b-db26-4a1d-8d4d-ea860067fbf8: Claiming unknown
Oct 13 15:46:48 standalone.localdomain systemd-udevd[544922]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:46:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:48.803 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-36648d54-03b4-46b3-81f4-8c595eafdf9f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36648d54-03b4-46b3-81f4-8c595eafdf9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=907bffa3-ca71-4872-a6b0-fa5a1f4dc5ce, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=2540a99b-db26-4a1d-8d4d-ea860067fbf8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:48.805 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 2540a99b-db26-4a1d-8d4d-ea860067fbf8 in datapath 36648d54-03b4-46b3-81f4-8c595eafdf9f bound to our chassis
Oct 13 15:46:48 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:48.806 496978 INFO neutron.agent.linux.ip_lib [None req-0e3db199-d4a3-44a7-9974-eb0b51d4ebfb - - - - - -] Device tap72882db9-fc cannot be used as it has no MAC address
Oct 13 15:46:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:48.807 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 36648d54-03b4-46b3-81f4-8c595eafdf9f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:48.811 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[23c84612-b56a-4dee-b9a0-28d52bee77cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2540a99b-db: No such device
Oct 13 15:46:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:48Z|00414|binding|INFO|Setting lport 2540a99b-db26-4a1d-8d4d-ea860067fbf8 ovn-installed in OVS
Oct 13 15:46:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:48Z|00415|binding|INFO|Setting lport 2540a99b-db26-4a1d-8d4d-ea860067fbf8 up in Southbound
Oct 13 15:46:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2540a99b-db: No such device
Oct 13 15:46:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:48.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:48.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2540a99b-db: No such device
Oct 13 15:46:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2540a99b-db: No such device
Oct 13 15:46:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2540a99b-db: No such device
Oct 13 15:46:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:48.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2540a99b-db: No such device
Oct 13 15:46:48 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:48.886 496978 INFO neutron.agent.linux.ip_lib [None req-c87f1308-6e6f-4854-aca3-6b3308dba318 - - - - - -] Device tapd61fa2d6-03 cannot be used as it has no MAC address
Oct 13 15:46:48 standalone.localdomain kernel: device tap72882db9-fc entered promiscuous mode
Oct 13 15:46:48 standalone.localdomain systemd-udevd[544925]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:46:48 standalone.localdomain NetworkManager[5962]: <info>  [1760370408.8892] manager: (tap72882db9-fc): new Generic device (/org/freedesktop/NetworkManager/Devices/78)
Oct 13 15:46:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:48.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:48Z|00416|binding|INFO|Claiming lport 72882db9-fc97-4582-bd23-512b00ec5590 for this chassis.
Oct 13 15:46:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:48Z|00417|binding|INFO|72882db9-fc97-4582-bd23-512b00ec5590: Claiming unknown
Oct 13 15:46:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2540a99b-db: No such device
Oct 13 15:46:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap2540a99b-db: No such device
Oct 13 15:46:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:48.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:48Z|00418|binding|INFO|Setting lport 72882db9-fc97-4582-bd23-512b00ec5590 ovn-installed in OVS
Oct 13 15:46:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:48Z|00419|binding|INFO|Setting lport 72882db9-fc97-4582-bd23-512b00ec5590 up in Southbound
Oct 13 15:46:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:48.908 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce51b6cd7de42daa5659d6797c11a37', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbe80c7b-02af-4cd6-82e0-dba84bd90fa5, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=72882db9-fc97-4582-bd23-512b00ec5590) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:48.909 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 72882db9-fc97-4582-bd23-512b00ec5590 in datapath c01f2fb2-62f8-4e62-8183-0b35ede8dc9b bound to our chassis
Oct 13 15:46:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:48.911 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c01f2fb2-62f8-4e62-8183-0b35ede8dc9b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:48.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:48.911 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[12d80d36-ae10-4151-8937-f1fcbf529819]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:48 standalone.localdomain kernel: device tapd61fa2d6-03 entered promiscuous mode
Oct 13 15:46:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:48.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:48Z|00420|binding|INFO|Claiming lport d61fa2d6-039b-4502-83b2-48131f68658b for this chassis.
Oct 13 15:46:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:48Z|00421|binding|INFO|d61fa2d6-039b-4502-83b2-48131f68658b: Claiming unknown
Oct 13 15:46:48 standalone.localdomain NetworkManager[5962]: <info>  [1760370408.9172] manager: (tapd61fa2d6-03): new Generic device (/org/freedesktop/NetworkManager/Devices/79)
Oct 13 15:46:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:48.926 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-e640877a-3ffa-40a9-9920-52ad8d244aaf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e640877a-3ffa-40a9-9920-52ad8d244aaf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af9cf0f6-01d6-4f09-b73e-fbe2e4fb151d, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=d61fa2d6-039b-4502-83b2-48131f68658b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:48Z|00422|binding|INFO|Setting lport d61fa2d6-039b-4502-83b2-48131f68658b ovn-installed in OVS
Oct 13 15:46:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:48Z|00423|binding|INFO|Setting lport d61fa2d6-039b-4502-83b2-48131f68658b up in Southbound
Oct 13 15:46:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:48.927 378821 INFO neutron.agent.ovn.metadata.agent [-] Port d61fa2d6-039b-4502-83b2-48131f68658b in datapath e640877a-3ffa-40a9-9920-52ad8d244aaf bound to our chassis
Oct 13 15:46:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:48.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:48.933 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 7da7f1e5-e7f8-41e2-844e-0e9e521128f1 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:46:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:48.934 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e640877a-3ffa-40a9-9920-52ad8d244aaf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:46:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:48.934 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[191f69ad-cf82-49d8-b743-56433f32a92a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:48.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:48.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:48.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:49.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:49.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:49 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:49.553 2 INFO neutron.agent.securitygroups_rpc [None req-0c913be3-b16b-49cd-ac9c-d80514d55790 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:49 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:46:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3976: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:49 standalone.localdomain podman[545036]: 2025-10-13 15:46:49.841286257 +0000 UTC m=+0.103524110 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, config_id=iscsid)
Oct 13 15:46:49 standalone.localdomain podman[545036]: 2025-10-13 15:46:49.848340948 +0000 UTC m=+0.110578811 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:46:49 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:46:50 standalone.localdomain podman[545103]: 
Oct 13 15:46:50 standalone.localdomain podman[545103]: 2025-10-13 15:46:50.230649203 +0000 UTC m=+0.216958809 container create 7f7ac5942270bfc8a52ad2eed772bf50744f16233888ea2b43af6268ac36b7a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36648d54-03b4-46b3-81f4-8c595eafdf9f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:50 standalone.localdomain podman[545125]: 
Oct 13 15:46:50 standalone.localdomain podman[545125]: 2025-10-13 15:46:50.263035723 +0000 UTC m=+0.158962650 container create 82941276989a3f0ec697426214922eafacff6621ae31a76ae9f1c7caa264ee8e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:50 standalone.localdomain podman[545103]: 2025-10-13 15:46:50.168832334 +0000 UTC m=+0.155142000 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:50 standalone.localdomain systemd[1]: Started libpod-conmon-7f7ac5942270bfc8a52ad2eed772bf50744f16233888ea2b43af6268ac36b7a5.scope.
Oct 13 15:46:50 standalone.localdomain podman[545125]: 2025-10-13 15:46:50.190916633 +0000 UTC m=+0.086843630 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:50 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:50 standalone.localdomain systemd[1]: Started libpod-conmon-82941276989a3f0ec697426214922eafacff6621ae31a76ae9f1c7caa264ee8e.scope.
Oct 13 15:46:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5810b9ac76003dd2df3cbb916b9194576ac6604b67133c5d2406667c0af247b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:50 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:50 standalone.localdomain podman[545103]: 2025-10-13 15:46:50.313097205 +0000 UTC m=+0.299406811 container init 7f7ac5942270bfc8a52ad2eed772bf50744f16233888ea2b43af6268ac36b7a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36648d54-03b4-46b3-81f4-8c595eafdf9f, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/758c28d68250cdd25dbf431b1651ab567f18221cda6d77a4b2219945470097e6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:50 standalone.localdomain podman[545103]: 2025-10-13 15:46:50.339132397 +0000 UTC m=+0.325441983 container start 7f7ac5942270bfc8a52ad2eed772bf50744f16233888ea2b43af6268ac36b7a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36648d54-03b4-46b3-81f4-8c595eafdf9f, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:50 standalone.localdomain dnsmasq[545170]: started, version 2.85 cachesize 150
Oct 13 15:46:50 standalone.localdomain dnsmasq[545170]: DNS service limited to local subnets
Oct 13 15:46:50 standalone.localdomain podman[545125]: 2025-10-13 15:46:50.347118536 +0000 UTC m=+0.243045463 container init 82941276989a3f0ec697426214922eafacff6621ae31a76ae9f1c7caa264ee8e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:46:50 standalone.localdomain dnsmasq[545170]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:50 standalone.localdomain dnsmasq[545170]: warning: no upstream servers configured
Oct 13 15:46:50 standalone.localdomain dnsmasq-dhcp[545170]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:50 standalone.localdomain dnsmasq[545170]: read /var/lib/neutron/dhcp/36648d54-03b4-46b3-81f4-8c595eafdf9f/addn_hosts - 0 addresses
Oct 13 15:46:50 standalone.localdomain dnsmasq-dhcp[545170]: read /var/lib/neutron/dhcp/36648d54-03b4-46b3-81f4-8c595eafdf9f/host
Oct 13 15:46:50 standalone.localdomain dnsmasq-dhcp[545170]: read /var/lib/neutron/dhcp/36648d54-03b4-46b3-81f4-8c595eafdf9f/opts
Oct 13 15:46:50 standalone.localdomain podman[545144]: 
Oct 13 15:46:50 standalone.localdomain podman[545125]: 2025-10-13 15:46:50.354137885 +0000 UTC m=+0.250064812 container start 82941276989a3f0ec697426214922eafacff6621ae31a76ae9f1c7caa264ee8e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 15:46:50 standalone.localdomain dnsmasq[545172]: started, version 2.85 cachesize 150
Oct 13 15:46:50 standalone.localdomain dnsmasq[545172]: DNS service limited to local subnets
Oct 13 15:46:50 standalone.localdomain dnsmasq[545172]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:50 standalone.localdomain dnsmasq[545172]: warning: no upstream servers configured
Oct 13 15:46:50 standalone.localdomain dnsmasq-dhcp[545172]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:50 standalone.localdomain dnsmasq[545172]: read /var/lib/neutron/dhcp/c01f2fb2-62f8-4e62-8183-0b35ede8dc9b/addn_hosts - 0 addresses
Oct 13 15:46:50 standalone.localdomain dnsmasq-dhcp[545172]: read /var/lib/neutron/dhcp/c01f2fb2-62f8-4e62-8183-0b35ede8dc9b/host
Oct 13 15:46:50 standalone.localdomain dnsmasq-dhcp[545172]: read /var/lib/neutron/dhcp/c01f2fb2-62f8-4e62-8183-0b35ede8dc9b/opts
Oct 13 15:46:50 standalone.localdomain podman[545144]: 2025-10-13 15:46:50.367806651 +0000 UTC m=+0.199047210 container create 12b590425c5895458b4ae02ca42648370fef2c6f42421e3beb062e73d0f387fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e640877a-3ffa-40a9-9920-52ad8d244aaf, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:50 standalone.localdomain podman[545144]: 2025-10-13 15:46:50.290378876 +0000 UTC m=+0.121619485 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:50.393 496978 INFO neutron.agent.dhcp.agent [None req-076e4382-a6ba-4c74-bcfb-bdf1ff4b6cf6 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:46:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:50.394 496978 INFO neutron.agent.dhcp.agent [None req-076e4382-a6ba-4c74-bcfb-bdf1ff4b6cf6 - - - - - -] Finished network 36648d54-03b4-46b3-81f4-8c595eafdf9f dhcp configuration
Oct 13 15:46:50 standalone.localdomain systemd[1]: Started libpod-conmon-12b590425c5895458b4ae02ca42648370fef2c6f42421e3beb062e73d0f387fb.scope.
Oct 13 15:46:50 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:50.413 496978 INFO neutron.agent.dhcp.agent [None req-0e3db199-d4a3-44a7-9974-eb0b51d4ebfb - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:46:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:50.414 496978 INFO neutron.agent.dhcp.agent [None req-0e3db199-d4a3-44a7-9974-eb0b51d4ebfb - - - - - -] Finished network c01f2fb2-62f8-4e62-8183-0b35ede8dc9b dhcp configuration
Oct 13 15:46:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84387f4f930d23b59a71dcba5d417955612cdfd8e410c6739d2f08a5a452245/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:50 standalone.localdomain podman[545144]: 2025-10-13 15:46:50.42578395 +0000 UTC m=+0.257024509 container init 12b590425c5895458b4ae02ca42648370fef2c6f42421e3beb062e73d0f387fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e640877a-3ffa-40a9-9920-52ad8d244aaf, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:50.427 496978 INFO neutron.agent.dhcp.agent [None req-5e377412-10ab-462c-837d-48323b1a8426 - - - - - -] DHCP configuration for ports {'eb515ebc-697b-43a0-9c2a-c3b7acbe9af6'} is completed
Oct 13 15:46:50 standalone.localdomain podman[545144]: 2025-10-13 15:46:50.432347664 +0000 UTC m=+0.263588223 container start 12b590425c5895458b4ae02ca42648370fef2c6f42421e3beb062e73d0f387fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e640877a-3ffa-40a9-9920-52ad8d244aaf, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:50 standalone.localdomain dnsmasq[545178]: started, version 2.85 cachesize 150
Oct 13 15:46:50 standalone.localdomain dnsmasq[545178]: DNS service limited to local subnets
Oct 13 15:46:50 standalone.localdomain dnsmasq[545178]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:50 standalone.localdomain dnsmasq[545178]: warning: no upstream servers configured
Oct 13 15:46:50 standalone.localdomain dnsmasq-dhcp[545178]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:46:50 standalone.localdomain dnsmasq[545178]: read /var/lib/neutron/dhcp/e640877a-3ffa-40a9-9920-52ad8d244aaf/addn_hosts - 0 addresses
Oct 13 15:46:50 standalone.localdomain dnsmasq-dhcp[545178]: read /var/lib/neutron/dhcp/e640877a-3ffa-40a9-9920-52ad8d244aaf/host
Oct 13 15:46:50 standalone.localdomain dnsmasq-dhcp[545178]: read /var/lib/neutron/dhcp/e640877a-3ffa-40a9-9920-52ad8d244aaf/opts
Oct 13 15:46:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:50.480 496978 INFO neutron.agent.dhcp.agent [None req-8f85d792-19b3-4b2c-a229-c42d6573356d - - - - - -] Resizing dhcp processing queue green pool size to: 11
Oct 13 15:46:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:50.482 496978 INFO neutron.agent.dhcp.agent [None req-8f85d792-19b3-4b2c-a229-c42d6573356d - - - - - -] Finished network e640877a-3ffa-40a9-9920-52ad8d244aaf dhcp configuration
Oct 13 15:46:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:50.483 496978 INFO neutron.agent.dhcp.agent [None req-a0c7966c-b9ee-4467-a08c-bb1059b68fa5 - - - - - -] Synchronizing state complete
Oct 13 15:46:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:50.488 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:50.489 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:50 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:50.650 2 INFO neutron.agent.securitygroups_rpc [None req-8a2e7568-ef17-4c76-8b06-55cf0d618bbc 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['0396172e-a5db-4454-881c-e63e845823da']
Oct 13 15:46:50 standalone.localdomain ceph-mon[29756]: pgmap v3976: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:51 standalone.localdomain podman[545202]: 2025-10-13 15:46:51.109538558 +0000 UTC m=+0.078521600 container kill bf4f7947376a353dceba0c3259a15f89f872a5a6c7be1eebdc1b998483d30968 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:46:51 standalone.localdomain dnsmasq[544838]: exiting on receipt of SIGTERM
Oct 13 15:46:51 standalone.localdomain systemd[1]: libpod-bf4f7947376a353dceba0c3259a15f89f872a5a6c7be1eebdc1b998483d30968.scope: Deactivated successfully.
Oct 13 15:46:51 standalone.localdomain dnsmasq[544835]: exiting on receipt of SIGTERM
Oct 13 15:46:51 standalone.localdomain podman[545226]: 2025-10-13 15:46:51.175441795 +0000 UTC m=+0.059722415 container kill b395f481d1a728b98d84c2c8a7bf7f87917480d019415f9116f757f68973060a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba49ab79-5018-4b85-93a5-708447c5be06, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:51 standalone.localdomain systemd[1]: libpod-b395f481d1a728b98d84c2c8a7bf7f87917480d019415f9116f757f68973060a.scope: Deactivated successfully.
Oct 13 15:46:51 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:51.184 2 INFO neutron.agent.securitygroups_rpc [None req-9ef6d56f-db89-4a6c-9634-a6644a01e998 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:51 standalone.localdomain podman[545235]: 2025-10-13 15:46:51.192046482 +0000 UTC m=+0.066107393 container died bf4f7947376a353dceba0c3259a15f89f872a5a6c7be1eebdc1b998483d30968 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:51 standalone.localdomain podman[545263]: 2025-10-13 15:46:51.255413169 +0000 UTC m=+0.062967915 container died b395f481d1a728b98d84c2c8a7bf7f87917480d019415f9116f757f68973060a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba49ab79-5018-4b85-93a5-708447c5be06, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:46:51 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:51.261 2 INFO neutron.agent.securitygroups_rpc [None req-3f411edd-e571-41c9-9516-29c2e4175128 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['0396172e-a5db-4454-881c-e63e845823da']
Oct 13 15:46:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:51.268 496978 INFO neutron.agent.dhcp.agent [None req-0bb10108-8a14-441e-9ad3-d56f3098e242 - - - - - -] DHCP configuration for ports {'6872e591-f339-498d-9026-13cc145f9e3a', '19e10a42-825e-42b7-b23a-8fc7afd1daf4'} is completed
Oct 13 15:46:51 standalone.localdomain podman[545263]: 2025-10-13 15:46:51.290572186 +0000 UTC m=+0.098126912 container cleanup b395f481d1a728b98d84c2c8a7bf7f87917480d019415f9116f757f68973060a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba49ab79-5018-4b85-93a5-708447c5be06, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:51 standalone.localdomain podman[545235]: 2025-10-13 15:46:51.294662033 +0000 UTC m=+0.168722884 container cleanup bf4f7947376a353dceba0c3259a15f89f872a5a6c7be1eebdc1b998483d30968 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:51 standalone.localdomain systemd[1]: libpod-conmon-b395f481d1a728b98d84c2c8a7bf7f87917480d019415f9116f757f68973060a.scope: Deactivated successfully.
Oct 13 15:46:51 standalone.localdomain systemd[1]: libpod-conmon-bf4f7947376a353dceba0c3259a15f89f872a5a6c7be1eebdc1b998483d30968.scope: Deactivated successfully.
Oct 13 15:46:51 standalone.localdomain podman[545242]: 2025-10-13 15:46:51.320447347 +0000 UTC m=+0.181084499 container remove bf4f7947376a353dceba0c3259a15f89f872a5a6c7be1eebdc1b998483d30968 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:46:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:51Z|00424|binding|INFO|Releasing lport e9565d20-564d-4f68-a5fa-bce92968253e from this chassis (sb_readonly=0)
Oct 13 15:46:51 standalone.localdomain kernel: device tape9565d20-56 left promiscuous mode
Oct 13 15:46:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:51Z|00425|binding|INFO|Setting lport e9565d20-564d-4f68-a5fa-bce92968253e down in Southbound
Oct 13 15:46:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:51.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:51.344 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=e9565d20-564d-4f68-a5fa-bce92968253e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:51.346 378821 INFO neutron.agent.ovn.metadata.agent [-] Port e9565d20-564d-4f68-a5fa-bce92968253e in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:46:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:51.347 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:51.347 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba41092-5493-4f34-8279-fc7028699a4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:51.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:51 standalone.localdomain podman[545274]: 2025-10-13 15:46:51.363411588 +0000 UTC m=+0.143811928 container remove b395f481d1a728b98d84c2c8a7bf7f87917480d019415f9116f757f68973060a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba49ab79-5018-4b85-93a5-708447c5be06, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:46:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:51Z|00426|binding|INFO|Releasing lport e4c0cef2-6b47-4cf5-a8f3-05a1c46e427c from this chassis (sb_readonly=0)
Oct 13 15:46:51 standalone.localdomain kernel: device tape4c0cef2-6b left promiscuous mode
Oct 13 15:46:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:51Z|00427|binding|INFO|Setting lport e4c0cef2-6b47-4cf5-a8f3-05a1c46e427c down in Southbound
Oct 13 15:46:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:51.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:51.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:51.398 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fec6:750d/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-ba49ab79-5018-4b85-93a5-708447c5be06', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba49ab79-5018-4b85-93a5-708447c5be06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a966fd035a9e426eabc035f6c807bc5e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba89c7f2-0e58-463e-b8dc-2f8f67ed0ca3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=e4c0cef2-6b47-4cf5-a8f3-05a1c46e427c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:51.400 378821 INFO neutron.agent.ovn.metadata.agent [-] Port e4c0cef2-6b47-4cf5-a8f3-05a1c46e427c in datapath ba49ab79-5018-4b85-93a5-708447c5be06 unbound from our chassis
Oct 13 15:46:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:51.403 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba49ab79-5018-4b85-93a5-708447c5be06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:46:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:51.403 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[f51e4d7d-8ba3-48c4-aed1-f16f5386123f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:51 standalone.localdomain dnsmasq[545172]: exiting on receipt of SIGTERM
Oct 13 15:46:51 standalone.localdomain podman[545328]: 2025-10-13 15:46:51.420903971 +0000 UTC m=+0.070496690 container kill 82941276989a3f0ec697426214922eafacff6621ae31a76ae9f1c7caa264ee8e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:51 standalone.localdomain systemd[1]: libpod-82941276989a3f0ec697426214922eafacff6621ae31a76ae9f1c7caa264ee8e.scope: Deactivated successfully.
Oct 13 15:46:51 standalone.localdomain dnsmasq[545170]: read /var/lib/neutron/dhcp/36648d54-03b4-46b3-81f4-8c595eafdf9f/addn_hosts - 1 addresses
Oct 13 15:46:51 standalone.localdomain dnsmasq-dhcp[545170]: read /var/lib/neutron/dhcp/36648d54-03b4-46b3-81f4-8c595eafdf9f/host
Oct 13 15:46:51 standalone.localdomain podman[545358]: 2025-10-13 15:46:51.46703172 +0000 UTC m=+0.042556279 container kill 7f7ac5942270bfc8a52ad2eed772bf50744f16233888ea2b43af6268ac36b7a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36648d54-03b4-46b3-81f4-8c595eafdf9f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:46:51 standalone.localdomain dnsmasq-dhcp[545170]: read /var/lib/neutron/dhcp/36648d54-03b4-46b3-81f4-8c595eafdf9f/opts
Oct 13 15:46:51 standalone.localdomain podman[545372]: 2025-10-13 15:46:51.491551414 +0000 UTC m=+0.050802815 container died 82941276989a3f0ec697426214922eafacff6621ae31a76ae9f1c7caa264ee8e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:51 standalone.localdomain podman[545372]: 2025-10-13 15:46:51.523233683 +0000 UTC m=+0.082485064 container remove 82941276989a3f0ec697426214922eafacff6621ae31a76ae9f1c7caa264ee8e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:46:51 standalone.localdomain dnsmasq[545178]: read /var/lib/neutron/dhcp/e640877a-3ffa-40a9-9920-52ad8d244aaf/addn_hosts - 0 addresses
Oct 13 15:46:51 standalone.localdomain dnsmasq-dhcp[545178]: read /var/lib/neutron/dhcp/e640877a-3ffa-40a9-9920-52ad8d244aaf/host
Oct 13 15:46:51 standalone.localdomain dnsmasq-dhcp[545178]: read /var/lib/neutron/dhcp/e640877a-3ffa-40a9-9920-52ad8d244aaf/opts
Oct 13 15:46:51 standalone.localdomain podman[545388]: 2025-10-13 15:46:51.560580588 +0000 UTC m=+0.096515101 container kill 12b590425c5895458b4ae02ca42648370fef2c6f42421e3beb062e73d0f387fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e640877a-3ffa-40a9-9920-52ad8d244aaf, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:46:51 standalone.localdomain systemd[1]: libpod-conmon-82941276989a3f0ec697426214922eafacff6621ae31a76ae9f1c7caa264ee8e.scope: Deactivated successfully.
Oct 13 15:46:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:51.634 496978 INFO neutron.agent.dhcp.agent [None req-317dfe42-42f1-4302-9e0f-35fa070033cf - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:49Z, description=, device_id=4e2fec08-60d2-40a1-83ad-e5ee7cf7c04d, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892dc9a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892dcc40>], id=6765551e-a2e6-4f12-ab17-d7c2ad3722de, ip_allocation=immediate, mac_address=fa:16:3e:bb:47:26, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:38Z, description=, dns_domain=, id=36648d54-03b4-46b3-81f4-8c595eafdf9f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-998262857, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58562, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1834, status=ACTIVE, subnets=['65345a8a-5891-4e38-940d-781585d48938'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:46:45Z, vlan_transparent=None, network_id=36648d54-03b4-46b3-81f4-8c595eafdf9f, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1903, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:46:49Z on network 36648d54-03b4-46b3-81f4-8c595eafdf9f
Oct 13 15:46:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3977: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-758c28d68250cdd25dbf431b1651ab567f18221cda6d77a4b2219945470097e6-merged.mount: Deactivated successfully.
Oct 13 15:46:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82941276989a3f0ec697426214922eafacff6621ae31a76ae9f1c7caa264ee8e-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-9c7caa976d9f9e686438e610e6f894a89f4b968bef67a631c877eb46e0af6930-merged.mount: Deactivated successfully.
Oct 13 15:46:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf4f7947376a353dceba0c3259a15f89f872a5a6c7be1eebdc1b998483d30968-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2b1903a6565e352f4edae36f8048de3d6f60dc081ef743d3e136797f3ee557d4-merged.mount: Deactivated successfully.
Oct 13 15:46:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b395f481d1a728b98d84c2c8a7bf7f87917480d019415f9116f757f68973060a-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:51 standalone.localdomain dnsmasq[545170]: read /var/lib/neutron/dhcp/36648d54-03b4-46b3-81f4-8c595eafdf9f/addn_hosts - 1 addresses
Oct 13 15:46:51 standalone.localdomain dnsmasq-dhcp[545170]: read /var/lib/neutron/dhcp/36648d54-03b4-46b3-81f4-8c595eafdf9f/host
Oct 13 15:46:51 standalone.localdomain dnsmasq-dhcp[545170]: read /var/lib/neutron/dhcp/36648d54-03b4-46b3-81f4-8c595eafdf9f/opts
Oct 13 15:46:51 standalone.localdomain podman[545464]: 2025-10-13 15:46:51.848537681 +0000 UTC m=+0.060847610 container kill 7f7ac5942270bfc8a52ad2eed772bf50744f16233888ea2b43af6268ac36b7a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36648d54-03b4-46b3-81f4-8c595eafdf9f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:46:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:51.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:52.034 496978 INFO neutron.agent.dhcp.agent [None req-5695bcb4-14d3-4523-8522-718f5161f550 - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:46:52 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:46:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:52.076 496978 INFO neutron.agent.dhcp.agent [None req-76edd96d-b28e-4cf2-ad05-e31aee39709f - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:46:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:52.077 496978 INFO neutron.agent.dhcp.agent [None req-76edd96d-b28e-4cf2-ad05-e31aee39709f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:52 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dba49ab79\x2d5018\x2d4b85\x2d93a5\x2d708447c5be06.mount: Deactivated successfully.
Oct 13 15:46:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:52.101 496978 INFO neutron.agent.dhcp.agent [None req-6312e7b2-355b-4daa-968c-1d9add327839 - - - - - -] DHCP configuration for ports {'6765551e-a2e6-4f12-ab17-d7c2ad3722de', 'eb515ebc-697b-43a0-9c2a-c3b7acbe9af6', '2540a99b-db26-4a1d-8d4d-ea860067fbf8'} is completed
Oct 13 15:46:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:52.109 496978 INFO neutron.agent.dhcp.agent [None req-317dfe42-42f1-4302-9e0f-35fa070033cf - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:49Z, description=, device_id=4e2fec08-60d2-40a1-83ad-e5ee7cf7c04d, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889135a60>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889135460>], id=6765551e-a2e6-4f12-ab17-d7c2ad3722de, ip_allocation=immediate, mac_address=fa:16:3e:bb:47:26, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:38Z, description=, dns_domain=, id=36648d54-03b4-46b3-81f4-8c595eafdf9f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-998262857, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58562, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1834, status=ACTIVE, subnets=['65345a8a-5891-4e38-940d-781585d48938'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:46:45Z, vlan_transparent=None, network_id=36648d54-03b4-46b3-81f4-8c595eafdf9f, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1903, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:46:49Z on network 36648d54-03b4-46b3-81f4-8c595eafdf9f
Oct 13 15:46:52 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:52.222 2 INFO neutron.agent.securitygroups_rpc [None req-5d4cc723-78f7-42fc-a14f-e53193ccf75e 03073a8ccfc64fa88b1047377a9fc037 d3c1410d6c264516966bcf7dffd0e4d5 - - default default] Security group member updated ['2415d0e2-7475-4494-9eb4-c0035b945053']
Oct 13 15:46:52 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:52.224 2 INFO neutron.agent.securitygroups_rpc [None req-0c4a5a81-817d-4b25-b6d7-a645565311ae db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:46:52 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:52.245 2 INFO neutron.agent.securitygroups_rpc [None req-d8b1cda3-190d-494f-bd1c-e3c194f9646f 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:46:52 standalone.localdomain dnsmasq[545170]: read /var/lib/neutron/dhcp/36648d54-03b4-46b3-81f4-8c595eafdf9f/addn_hosts - 1 addresses
Oct 13 15:46:52 standalone.localdomain dnsmasq-dhcp[545170]: read /var/lib/neutron/dhcp/36648d54-03b4-46b3-81f4-8c595eafdf9f/host
Oct 13 15:46:52 standalone.localdomain dnsmasq-dhcp[545170]: read /var/lib/neutron/dhcp/36648d54-03b4-46b3-81f4-8c595eafdf9f/opts
Oct 13 15:46:52 standalone.localdomain podman[545517]: 2025-10-13 15:46:52.336995267 +0000 UTC m=+0.065143713 container kill 7f7ac5942270bfc8a52ad2eed772bf50744f16233888ea2b43af6268ac36b7a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36648d54-03b4-46b3-81f4-8c595eafdf9f, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 13 15:46:52 standalone.localdomain dnsmasq[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/addn_hosts - 1 addresses
Oct 13 15:46:52 standalone.localdomain podman[545585]: 2025-10-13 15:46:52.602111727 +0000 UTC m=+0.058811205 container kill 5ebde18dc498d58313ce74e1a055c3944ef5789f37936415c61cbdba6070e7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1425a739-9d65-4c05-9700-1ab66a63077c, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS)
Oct 13 15:46:52 standalone.localdomain dnsmasq-dhcp[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/host
Oct 13 15:46:52 standalone.localdomain dnsmasq-dhcp[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/opts
Oct 13 15:46:52 standalone.localdomain podman[545575]: 2025-10-13 15:46:52.626724825 +0000 UTC m=+0.092644011 container create 76c2a8d4dc69fb9ae03964c87a2826eb95a0cc2f98ca5b38c89fc04b654c4f66 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:46:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:46:52 standalone.localdomain systemd[1]: Started libpod-conmon-76c2a8d4dc69fb9ae03964c87a2826eb95a0cc2f98ca5b38c89fc04b654c4f66.scope.
Oct 13 15:46:52 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:52 standalone.localdomain podman[545575]: 2025-10-13 15:46:52.580946947 +0000 UTC m=+0.046866193 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:52 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbeb7087b24ed238f9d4138260fcfbf167378eb72369bce40569b093dc110be/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:52.683 496978 INFO neutron.agent.dhcp.agent [None req-b3b0951b-2a90-41ea-a7bc-bfc24af11bc6 - - - - - -] DHCP configuration for ports {'6765551e-a2e6-4f12-ab17-d7c2ad3722de', 'd61fa2d6-039b-4502-83b2-48131f68658b', '6872e591-f339-498d-9026-13cc145f9e3a'} is completed
Oct 13 15:46:52 standalone.localdomain podman[545575]: 2025-10-13 15:46:52.692167986 +0000 UTC m=+0.158087172 container init 76c2a8d4dc69fb9ae03964c87a2826eb95a0cc2f98ca5b38c89fc04b654c4f66 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:46:52 standalone.localdomain podman[545575]: 2025-10-13 15:46:52.701693013 +0000 UTC m=+0.167612199 container start 76c2a8d4dc69fb9ae03964c87a2826eb95a0cc2f98ca5b38c89fc04b654c4f66 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 15:46:52 standalone.localdomain dnsmasq[545620]: started, version 2.85 cachesize 150
Oct 13 15:46:52 standalone.localdomain dnsmasq[545620]: DNS service limited to local subnets
Oct 13 15:46:52 standalone.localdomain dnsmasq[545620]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:52 standalone.localdomain dnsmasq[545620]: warning: no upstream servers configured
Oct 13 15:46:52 standalone.localdomain dnsmasq-dhcp[545620]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:52 standalone.localdomain dnsmasq[545620]: read /var/lib/neutron/dhcp/c01f2fb2-62f8-4e62-8183-0b35ede8dc9b/addn_hosts - 2 addresses
Oct 13 15:46:52 standalone.localdomain dnsmasq-dhcp[545620]: read /var/lib/neutron/dhcp/c01f2fb2-62f8-4e62-8183-0b35ede8dc9b/host
Oct 13 15:46:52 standalone.localdomain dnsmasq-dhcp[545620]: read /var/lib/neutron/dhcp/c01f2fb2-62f8-4e62-8183-0b35ede8dc9b/opts
Oct 13 15:46:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:52.769 496978 INFO neutron.agent.linux.ip_lib [None req-0408180b-cab5-4745-86dc-9efcd70fedf9 - - - - - -] Device tape69c0691-83 cannot be used as it has no MAC address
Oct 13 15:46:52 standalone.localdomain ceph-mon[29756]: pgmap v3977: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:52.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:52 standalone.localdomain kernel: device tape69c0691-83 entered promiscuous mode
Oct 13 15:46:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:52.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:52Z|00428|binding|INFO|Claiming lport e69c0691-83a7-4795-b604-549e2456d7dd for this chassis.
Oct 13 15:46:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:52Z|00429|binding|INFO|e69c0691-83a7-4795-b604-549e2456d7dd: Claiming unknown
Oct 13 15:46:52 standalone.localdomain NetworkManager[5962]: <info>  [1760370412.8276] manager: (tape69c0691-83): new Generic device (/org/freedesktop/NetworkManager/Devices/80)
Oct 13 15:46:52 standalone.localdomain systemd-udevd[545638]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:46:52 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:52.828 2 INFO neutron.agent.securitygroups_rpc [None req-3c3c2108-5706-4c40-b397-0c936f2b17fe 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['ae31dde6-04a7-46d6-ac95-0b32617425e4']
Oct 13 15:46:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:52.843 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-9e5394bc-dc47-4b7a-9e53-f9917f23f6db', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e5394bc-dc47-4b7a-9e53-f9917f23f6db', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6ce212c-f5ef-43ef-9aaa-25f87dc74bb3, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=e69c0691-83a7-4795-b604-549e2456d7dd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:52.848 378821 INFO neutron.agent.ovn.metadata.agent [-] Port e69c0691-83a7-4795-b604-549e2456d7dd in datapath 9e5394bc-dc47-4b7a-9e53-f9917f23f6db bound to our chassis
Oct 13 15:46:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:52.851 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9e5394bc-dc47-4b7a-9e53-f9917f23f6db or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:52.852 496978 INFO neutron.agent.linux.ip_lib [None req-e4e2dbbd-3952-4152-bb30-5dd62c287bdc - - - - - -] Device tap1bad5609-50 cannot be used as it has no MAC address
Oct 13 15:46:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:52.852 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[0122fedd-94d1-4be8-9f1c-2ff2fd312ac0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:52.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:52.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:52Z|00430|binding|INFO|Setting lport e69c0691-83a7-4795-b604-549e2456d7dd ovn-installed in OVS
Oct 13 15:46:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:52Z|00431|binding|INFO|Setting lport e69c0691-83a7-4795-b604-549e2456d7dd up in Southbound
Oct 13 15:46:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:52.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:52.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:52 standalone.localdomain kernel: device tap1bad5609-50 entered promiscuous mode
Oct 13 15:46:52 standalone.localdomain NetworkManager[5962]: <info>  [1760370412.9412] manager: (tap1bad5609-50): new Generic device (/org/freedesktop/NetworkManager/Devices/81)
Oct 13 15:46:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:52.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:52Z|00432|binding|INFO|Claiming lport 1bad5609-50fa-4cf3-be8a-0f80dbbfa2bc for this chassis.
Oct 13 15:46:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:52Z|00433|binding|INFO|1bad5609-50fa-4cf3-be8a-0f80dbbfa2bc: Claiming unknown
Oct 13 15:46:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:52.949 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-0ab29eaa-5224-4e03-897b-7f43cb33bf62', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ab29eaa-5224-4e03-897b-7f43cb33bf62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7ccded5-1f07-432b-b3e6-4a8c9f6d769c, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=1bad5609-50fa-4cf3-be8a-0f80dbbfa2bc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:52.950 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 1bad5609-50fa-4cf3-be8a-0f80dbbfa2bc in datapath 0ab29eaa-5224-4e03-897b-7f43cb33bf62 bound to our chassis
Oct 13 15:46:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:52.952 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0ab29eaa-5224-4e03-897b-7f43cb33bf62 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:52.952 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[8727cf26-e26c-438b-8084-a4c71ef32c62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:52.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:52Z|00434|binding|INFO|Setting lport 1bad5609-50fa-4cf3-be8a-0f80dbbfa2bc ovn-installed in OVS
Oct 13 15:46:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:52Z|00435|binding|INFO|Setting lport 1bad5609-50fa-4cf3-be8a-0f80dbbfa2bc up in Southbound
Oct 13 15:46:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:52.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:52.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:53.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:53.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain dnsmasq[545620]: read /var/lib/neutron/dhcp/c01f2fb2-62f8-4e62-8183-0b35ede8dc9b/addn_hosts - 0 addresses
Oct 13 15:46:53 standalone.localdomain podman[545672]: 2025-10-13 15:46:53.083081321 +0000 UTC m=+0.041154425 container kill 76c2a8d4dc69fb9ae03964c87a2826eb95a0cc2f98ca5b38c89fc04b654c4f66 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:46:53 standalone.localdomain dnsmasq-dhcp[545620]: read /var/lib/neutron/dhcp/c01f2fb2-62f8-4e62-8183-0b35ede8dc9b/host
Oct 13 15:46:53 standalone.localdomain dnsmasq-dhcp[545620]: read /var/lib/neutron/dhcp/c01f2fb2-62f8-4e62-8183-0b35ede8dc9b/opts
Oct 13 15:46:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:53.171 496978 INFO neutron.agent.dhcp.agent [None req-fbb53044-a0f9-49f9-a24c-942d1cb181c1 - - - - - -] DHCP configuration for ports {'72882db9-fc97-4582-bd23-512b00ec5590', '19e10a42-825e-42b7-b23a-8fc7afd1daf4', '8455868f-a994-4f2e-845d-9a64333b95a9', '6765551e-a2e6-4f12-ab17-d7c2ad3722de', 'ba5505cf-3c6a-4e37-8254-fc4fb1a297c7'} is completed
Oct 13 15:46:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:53.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:53.231 496978 INFO neutron.agent.dhcp.agent [None req-985118de-14c6-46f2-9761-ff70619af95b - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:47Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889028550>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889028ee0>], id=ba5505cf-3c6a-4e37-8254-fc4fb1a297c7, ip_allocation=immediate, mac_address=fa:16:3e:b6:ca:63, name=tempest-PortsIpV6TestJSON-375404757, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:45Z, description=, dns_domain=, id=c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-1200099792, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=35386, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1875, status=ACTIVE, subnets=['7a355263-0d7e-4887-8547-9dbd1025581e'], tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:46:46Z, vlan_transparent=None, network_id=c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['6c3ff5e9-4017-4412-baf0-58ac2954cf2d'], standard_attr_id=1885, status=DOWN, tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:46:47Z on network c01f2fb2-62f8-4e62-8183-0b35ede8dc9b
Oct 13 15:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:46:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:46:53 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:53.401 2 INFO neutron.agent.securitygroups_rpc [None req-3bb10290-63f4-4c23-ae9e-7b7512345cb0 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['ae31dde6-04a7-46d6-ac95-0b32617425e4']
Oct 13 15:46:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:46:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:53.426 496978 INFO neutron.agent.dhcp.agent [None req-66ce9750-8009-4b0b-bdd1-3ff9664467a4 - - - - - -] DHCP configuration for ports {'72882db9-fc97-4582-bd23-512b00ec5590', '19e10a42-825e-42b7-b23a-8fc7afd1daf4'} is completed
Oct 13 15:46:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43212 DF PROTO=TCP SPT=60716 DPT=9102 SEQ=3437621082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD1AD8E0000000001030307) 
Oct 13 15:46:53 standalone.localdomain dnsmasq[545620]: read /var/lib/neutron/dhcp/c01f2fb2-62f8-4e62-8183-0b35ede8dc9b/addn_hosts - 1 addresses
Oct 13 15:46:53 standalone.localdomain dnsmasq-dhcp[545620]: read /var/lib/neutron/dhcp/c01f2fb2-62f8-4e62-8183-0b35ede8dc9b/host
Oct 13 15:46:53 standalone.localdomain dnsmasq-dhcp[545620]: read /var/lib/neutron/dhcp/c01f2fb2-62f8-4e62-8183-0b35ede8dc9b/opts
Oct 13 15:46:53 standalone.localdomain podman[545756]: 2025-10-13 15:46:53.590977563 +0000 UTC m=+0.074543606 container kill 76c2a8d4dc69fb9ae03964c87a2826eb95a0cc2f98ca5b38c89fc04b654c4f66 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:46:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:53.592 496978 INFO neutron.agent.linux.ip_lib [None req-6c361ce5-3ce3-443f-9e62-b4bec8c09bd9 - - - - - -] Device tap2974eb36-03 cannot be used as it has no MAC address
Oct 13 15:46:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:53.621 496978 INFO neutron.agent.linux.ip_lib [None req-5b2e8f28-fa37-47c3-a823-051cc71b4335 - - - - - -] Device tapd3d72e38-9a cannot be used as it has no MAC address
Oct 13 15:46:53 standalone.localdomain podman[545727]: 2025-10-13 15:46:53.579262218 +0000 UTC m=+0.155415259 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:46:53 standalone.localdomain podman[545727]: 2025-10-13 15:46:53.658855741 +0000 UTC m=+0.235008752 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:46:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:53.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain kernel: device tap2974eb36-03 entered promiscuous mode
Oct 13 15:46:53 standalone.localdomain NetworkManager[5962]: <info>  [1760370413.6662] manager: (tap2974eb36-03): new Generic device (/org/freedesktop/NetworkManager/Devices/82)
Oct 13 15:46:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:53.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:53Z|00436|binding|INFO|Claiming lport 2974eb36-03c0-4f0c-937a-c1a821a148bf for this chassis.
Oct 13 15:46:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:53Z|00437|binding|INFO|2974eb36-03c0-4f0c-937a-c1a821a148bf: Claiming unknown
Oct 13 15:46:53 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:46:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:53.678 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-b2b1caf7-9527-477a-a4f6-9c95f6068c7e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2b1caf7-9527-477a-a4f6-9c95f6068c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=333f6906-4b8b-4a0d-ab8c-7cb0cafb7c27, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=2974eb36-03c0-4f0c-937a-c1a821a148bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:53Z|00438|binding|INFO|Setting lport 2974eb36-03c0-4f0c-937a-c1a821a148bf ovn-installed in OVS
Oct 13 15:46:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:53Z|00439|binding|INFO|Setting lport 2974eb36-03c0-4f0c-937a-c1a821a148bf up in Southbound
Oct 13 15:46:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:53.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:53.688 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 2974eb36-03c0-4f0c-937a-c1a821a148bf in datapath b2b1caf7-9527-477a-a4f6-9c95f6068c7e bound to our chassis
Oct 13 15:46:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:53.694 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 8d41c9ab-6b55-4eb7-ae65-e0617dbec05d IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:46:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:53.694 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2b1caf7-9527-477a-a4f6-9c95f6068c7e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:46:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:53.695 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[027af8b4-1e7d-4164-9437-36d28f3b1002]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:53.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain kernel: device tapd3d72e38-9a entered promiscuous mode
Oct 13 15:46:53 standalone.localdomain NetworkManager[5962]: <info>  [1760370413.7144] manager: (tapd3d72e38-9a): new Generic device (/org/freedesktop/NetworkManager/Devices/83)
Oct 13 15:46:53 standalone.localdomain systemd-udevd[545641]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:46:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:53Z|00440|binding|INFO|Claiming lport d3d72e38-9a15-44ac-a9db-925b52929957 for this chassis.
Oct 13 15:46:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:53Z|00441|binding|INFO|d3d72e38-9a15-44ac-a9db-925b52929957: Claiming unknown
Oct 13 15:46:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:53.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:53.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:53Z|00442|binding|INFO|Setting lport d3d72e38-9a15-44ac-a9db-925b52929957 ovn-installed in OVS
Oct 13 15:46:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:53Z|00443|binding|INFO|Setting lport d3d72e38-9a15-44ac-a9db-925b52929957 up in Southbound
Oct 13 15:46:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:53.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3978: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:53.734 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=d3d72e38-9a15-44ac-a9db-925b52929957) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:53.736 378821 INFO neutron.agent.ovn.metadata.agent [-] Port d3d72e38-9a15-44ac-a9db-925b52929957 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:46:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:53.738 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:53.738 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[a7df74f9-c628-4182-86fe-0f585759bdc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:53.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:53.792 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port ee7bea34-8a11-4685-a964-f3f6026041ba with type ""
Oct 13 15:46:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:53Z|00444|binding|INFO|Removing iface tape69c0691-83 ovn-installed in OVS
Oct 13 15:46:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:53Z|00445|binding|INFO|Removing lport e69c0691-83a7-4795-b604-549e2456d7dd ovn-installed in OVS
Oct 13 15:46:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:53.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:53.794 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-9e5394bc-dc47-4b7a-9e53-f9917f23f6db', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e5394bc-dc47-4b7a-9e53-f9917f23f6db', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6ce212c-f5ef-43ef-9aaa-25f87dc74bb3, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=e69c0691-83a7-4795-b604-549e2456d7dd) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:53.796 378821 INFO neutron.agent.ovn.metadata.agent [-] Port e69c0691-83a7-4795-b604-549e2456d7dd in datapath 9e5394bc-dc47-4b7a-9e53-f9917f23f6db unbound from our chassis
Oct 13 15:46:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:53.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:53.798 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9e5394bc-dc47-4b7a-9e53-f9917f23f6db or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:53.799 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[4499ebb8-91ab-4ede-8f25-cc8a25f4958e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:53.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:53.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:53.981 496978 INFO neutron.agent.dhcp.agent [None req-56752a66-e13e-41ae-a9b9-bae985f553c8 - - - - - -] DHCP configuration for ports {'ba5505cf-3c6a-4e37-8254-fc4fb1a297c7'} is completed
Oct 13 15:46:54 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:54Z|00446|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:46:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:54.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:54 standalone.localdomain dnsmasq[545620]: exiting on receipt of SIGTERM
Oct 13 15:46:54 standalone.localdomain podman[545925]: 2025-10-13 15:46:54.362982504 +0000 UTC m=+0.116131053 container kill 76c2a8d4dc69fb9ae03964c87a2826eb95a0cc2f98ca5b38c89fc04b654c4f66 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:46:54 standalone.localdomain systemd[1]: libpod-76c2a8d4dc69fb9ae03964c87a2826eb95a0cc2f98ca5b38c89fc04b654c4f66.scope: Deactivated successfully.
Oct 13 15:46:54 standalone.localdomain podman[545910]: 
Oct 13 15:46:54 standalone.localdomain podman[545910]: 2025-10-13 15:46:54.38911764 +0000 UTC m=+0.161014814 container create b48a1d9473757351ae73152b53faef9535a73d950d7d4444b8e026b576ef866e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9e5394bc-dc47-4b7a-9e53-f9917f23f6db, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:46:54 standalone.localdomain podman[545950]: 
Oct 13 15:46:54 standalone.localdomain podman[545967]: 2025-10-13 15:46:54.422199912 +0000 UTC m=+0.046343267 container died 76c2a8d4dc69fb9ae03964c87a2826eb95a0cc2f98ca5b38c89fc04b654c4f66 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:54 standalone.localdomain systemd[1]: Started libpod-conmon-b48a1d9473757351ae73152b53faef9535a73d950d7d4444b8e026b576ef866e.scope.
Oct 13 15:46:54 standalone.localdomain podman[545910]: 2025-10-13 15:46:54.344610792 +0000 UTC m=+0.116508046 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:54 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:54 standalone.localdomain podman[545967]: 2025-10-13 15:46:54.455827201 +0000 UTC m=+0.079970556 container cleanup 76c2a8d4dc69fb9ae03964c87a2826eb95a0cc2f98ca5b38c89fc04b654c4f66 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:46:54 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1501197eb071c205241a919c4b3700ee9516627d1d49448cf996e6663a86fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:54 standalone.localdomain systemd[1]: libpod-conmon-76c2a8d4dc69fb9ae03964c87a2826eb95a0cc2f98ca5b38c89fc04b654c4f66.scope: Deactivated successfully.
Oct 13 15:46:54 standalone.localdomain podman[545910]: 2025-10-13 15:46:54.463549592 +0000 UTC m=+0.235446766 container init b48a1d9473757351ae73152b53faef9535a73d950d7d4444b8e026b576ef866e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9e5394bc-dc47-4b7a-9e53-f9917f23f6db, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:54 standalone.localdomain podman[545910]: 2025-10-13 15:46:54.470251561 +0000 UTC m=+0.242148745 container start b48a1d9473757351ae73152b53faef9535a73d950d7d4444b8e026b576ef866e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9e5394bc-dc47-4b7a-9e53-f9917f23f6db, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:54 standalone.localdomain dnsmasq[546005]: started, version 2.85 cachesize 150
Oct 13 15:46:54 standalone.localdomain dnsmasq[546005]: DNS service limited to local subnets
Oct 13 15:46:54 standalone.localdomain dnsmasq[546005]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:54 standalone.localdomain dnsmasq[546005]: warning: no upstream servers configured
Oct 13 15:46:54 standalone.localdomain dnsmasq-dhcp[546005]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:46:54 standalone.localdomain dnsmasq[546005]: read /var/lib/neutron/dhcp/9e5394bc-dc47-4b7a-9e53-f9917f23f6db/addn_hosts - 0 addresses
Oct 13 15:46:54 standalone.localdomain dnsmasq-dhcp[546005]: read /var/lib/neutron/dhcp/9e5394bc-dc47-4b7a-9e53-f9917f23f6db/host
Oct 13 15:46:54 standalone.localdomain dnsmasq-dhcp[546005]: read /var/lib/neutron/dhcp/9e5394bc-dc47-4b7a-9e53-f9917f23f6db/opts
Oct 13 15:46:54 standalone.localdomain podman[545950]: 2025-10-13 15:46:54.381140961 +0000 UTC m=+0.068088254 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:54 standalone.localdomain podman[545950]: 2025-10-13 15:46:54.491250286 +0000 UTC m=+0.178197569 container create b185d39b196cec2508e1c7fd310a62d95fa8c2546ce32db9fa971e8680d52640 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ab29eaa-5224-4e03-897b-7f43cb33bf62, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 13 15:46:54 standalone.localdomain systemd[1]: Started libpod-conmon-b185d39b196cec2508e1c7fd310a62d95fa8c2546ce32db9fa971e8680d52640.scope.
Oct 13 15:46:54 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:54 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2680d5d6cce5ddbf74ec46573e88f60ca35a4f25eb022093628e95c2b45c5d74/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:54 standalone.localdomain podman[545950]: 2025-10-13 15:46:54.535469955 +0000 UTC m=+0.222417268 container init b185d39b196cec2508e1c7fd310a62d95fa8c2546ce32db9fa971e8680d52640 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ab29eaa-5224-4e03-897b-7f43cb33bf62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 15:46:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:54.535 496978 INFO neutron.agent.dhcp.agent [None req-0b36979e-2dc2-45c4-ac27-47481cbf4158 - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:46:54 standalone.localdomain podman[545950]: 2025-10-13 15:46:54.540671048 +0000 UTC m=+0.227618341 container start b185d39b196cec2508e1c7fd310a62d95fa8c2546ce32db9fa971e8680d52640 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ab29eaa-5224-4e03-897b-7f43cb33bf62, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:54 standalone.localdomain dnsmasq[546022]: started, version 2.85 cachesize 150
Oct 13 15:46:54 standalone.localdomain dnsmasq[546022]: DNS service limited to local subnets
Oct 13 15:46:54 standalone.localdomain dnsmasq[546022]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:54 standalone.localdomain dnsmasq[546022]: warning: no upstream servers configured
Oct 13 15:46:54 standalone.localdomain dnsmasq-dhcp[546022]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d
Oct 13 15:46:54 standalone.localdomain dnsmasq[546022]: read /var/lib/neutron/dhcp/0ab29eaa-5224-4e03-897b-7f43cb33bf62/addn_hosts - 0 addresses
Oct 13 15:46:54 standalone.localdomain dnsmasq-dhcp[546022]: read /var/lib/neutron/dhcp/0ab29eaa-5224-4e03-897b-7f43cb33bf62/host
Oct 13 15:46:54 standalone.localdomain dnsmasq-dhcp[546022]: read /var/lib/neutron/dhcp/0ab29eaa-5224-4e03-897b-7f43cb33bf62/opts
Oct 13 15:46:54 standalone.localdomain podman[545984]: 2025-10-13 15:46:54.549348238 +0000 UTC m=+0.129018535 container remove 76c2a8d4dc69fb9ae03964c87a2826eb95a0cc2f98ca5b38c89fc04b654c4f66 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:46:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:54.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:54 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:54Z|00447|binding|INFO|Releasing lport 72882db9-fc97-4582-bd23-512b00ec5590 from this chassis (sb_readonly=0)
Oct 13 15:46:54 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:54Z|00448|binding|INFO|Setting lport 72882db9-fc97-4582-bd23-512b00ec5590 down in Southbound
Oct 13 15:46:54 standalone.localdomain kernel: device tap72882db9-fc left promiscuous mode
Oct 13 15:46:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:54.574 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c01f2fb2-62f8-4e62-8183-0b35ede8dc9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce51b6cd7de42daa5659d6797c11a37', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbe80c7b-02af-4cd6-82e0-dba84bd90fa5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=72882db9-fc97-4582-bd23-512b00ec5590) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:54.576 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 72882db9-fc97-4582-bd23-512b00ec5590 in datapath c01f2fb2-62f8-4e62-8183-0b35ede8dc9b unbound from our chassis
Oct 13 15:46:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:54.578 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c01f2fb2-62f8-4e62-8183-0b35ede8dc9b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:54.579 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[c960822a-2ae4-4c7f-b7ef-1e0cefe88363]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:54.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:54.601 496978 INFO neutron.agent.dhcp.agent [None req-e4e2dbbd-3952-4152-bb30-5dd62c287bdc - - - - - -] Resizing dhcp processing queue green pool size to: 11
Oct 13 15:46:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:54.602 496978 INFO neutron.agent.dhcp.agent [None req-e4e2dbbd-3952-4152-bb30-5dd62c287bdc - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:53Z, description=, device_id=4e2fec08-60d2-40a1-83ad-e5ee7cf7c04d, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4eeb0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4eee0>], id=b25fae3e-0333-48f8-a5eb-fce6e67a5ac3, ip_allocation=immediate, mac_address=fa:16:3e:01:52:0f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:41Z, description=, dns_domain=, id=0ab29eaa-5224-4e03-897b-7f43cb33bf62, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-82592824, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40809, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1855, status=ACTIVE, subnets=['7cacffe9-7fb5-4b4b-9f0a-2c87e55b4f5a'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:46:46Z, vlan_transparent=None, network_id=0ab29eaa-5224-4e03-897b-7f43cb33bf62, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1920, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:46:54Z on network 0ab29eaa-5224-4e03-897b-7f43cb33bf62
Oct 13 15:46:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43213 DF PROTO=TCP SPT=60716 DPT=9102 SEQ=3437621082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD1B1B60000000001030307) 
Oct 13 15:46:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:54.618 496978 INFO neutron.agent.dhcp.agent [None req-404997c1-9a57-4bc2-a6ed-8478e2c866c1 - - - - - -] DHCP configuration for ports {'39ee785f-3103-4dd2-bda5-b75c0f6d9ad9'} is completed
Oct 13 15:46:54 standalone.localdomain dnsmasq[546005]: exiting on receipt of SIGTERM
Oct 13 15:46:54 standalone.localdomain podman[546053]: 2025-10-13 15:46:54.714088857 +0000 UTC m=+0.045256742 container kill b48a1d9473757351ae73152b53faef9535a73d950d7d4444b8e026b576ef866e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9e5394bc-dc47-4b7a-9e53-f9917f23f6db, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:46:54 standalone.localdomain systemd[1]: libpod-b48a1d9473757351ae73152b53faef9535a73d950d7d4444b8e026b576ef866e.scope: Deactivated successfully.
Oct 13 15:46:54 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:54.762 2 INFO neutron.agent.securitygroups_rpc [None req-da0b4cd2-c848-4116-a65d-70d4a1fb50bb db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:46:54 standalone.localdomain podman[546102]: 2025-10-13 15:46:54.764097507 +0000 UTC m=+0.037053627 container died b48a1d9473757351ae73152b53faef9535a73d950d7d4444b8e026b576ef866e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9e5394bc-dc47-4b7a-9e53-f9917f23f6db, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:54 standalone.localdomain podman[546102]: 2025-10-13 15:46:54.801935497 +0000 UTC m=+0.074891597 container remove b48a1d9473757351ae73152b53faef9535a73d950d7d4444b8e026b576ef866e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9e5394bc-dc47-4b7a-9e53-f9917f23f6db, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:54 standalone.localdomain ceph-mon[29756]: pgmap v3978: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:54.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:54 standalone.localdomain kernel: device tape69c0691-83 left promiscuous mode
Oct 13 15:46:54 standalone.localdomain dnsmasq[546022]: read /var/lib/neutron/dhcp/0ab29eaa-5224-4e03-897b-7f43cb33bf62/addn_hosts - 1 addresses
Oct 13 15:46:54 standalone.localdomain podman[546077]: 2025-10-13 15:46:54.818298028 +0000 UTC m=+0.109960981 container kill b185d39b196cec2508e1c7fd310a62d95fa8c2546ce32db9fa971e8680d52640 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ab29eaa-5224-4e03-897b-7f43cb33bf62, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:46:54 standalone.localdomain dnsmasq-dhcp[546022]: read /var/lib/neutron/dhcp/0ab29eaa-5224-4e03-897b-7f43cb33bf62/host
Oct 13 15:46:54 standalone.localdomain dnsmasq-dhcp[546022]: read /var/lib/neutron/dhcp/0ab29eaa-5224-4e03-897b-7f43cb33bf62/opts
Oct 13 15:46:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5a1501197eb071c205241a919c4b3700ee9516627d1d49448cf996e6663a86fa-merged.mount: Deactivated successfully.
Oct 13 15:46:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b48a1d9473757351ae73152b53faef9535a73d950d7d4444b8e026b576ef866e-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2cbeb7087b24ed238f9d4138260fcfbf167378eb72369bce40569b093dc110be-merged.mount: Deactivated successfully.
Oct 13 15:46:54 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-76c2a8d4dc69fb9ae03964c87a2826eb95a0cc2f98ca5b38c89fc04b654c4f66-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:54.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:54 standalone.localdomain systemd[1]: libpod-conmon-b48a1d9473757351ae73152b53faef9535a73d950d7d4444b8e026b576ef866e.scope: Deactivated successfully.
Oct 13 15:46:54 standalone.localdomain podman[546134]: 
Oct 13 15:46:54 standalone.localdomain podman[546134]: 2025-10-13 15:46:54.960160133 +0000 UTC m=+0.189317277 container create 69e001c15a490be44afbb78e0fe0f02c70f644c094603395c6a197478292af72 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:46:54 standalone.localdomain podman[546162]: 
Oct 13 15:46:54 standalone.localdomain systemd[1]: Started libpod-conmon-69e001c15a490be44afbb78e0fe0f02c70f644c094603395c6a197478292af72.scope.
Oct 13 15:46:54 standalone.localdomain podman[546162]: 2025-10-13 15:46:54.991701797 +0000 UTC m=+0.089969568 container create 6d432a516ab14d78eb06fd52290755dd68a57f3a3b187a7dd726602cd7b2c6fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2b1caf7-9527-477a-a4f6-9c95f6068c7e, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:55 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.010 496978 INFO neutron.agent.dhcp.agent [None req-13419f5c-7323-48bc-ac78-b6126ddc6d69 - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:46:55 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5471983709a84c3ae0a385f85a88db7974bbb59814161c58b48a8453762fd523/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:55 standalone.localdomain podman[546134]: 2025-10-13 15:46:54.913138776 +0000 UTC m=+0.142295960 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:55 standalone.localdomain systemd[1]: Started libpod-conmon-6d432a516ab14d78eb06fd52290755dd68a57f3a3b187a7dd726602cd7b2c6fe.scope.
Oct 13 15:46:55 standalone.localdomain podman[546134]: 2025-10-13 15:46:55.021473565 +0000 UTC m=+0.250630729 container init 69e001c15a490be44afbb78e0fe0f02c70f644c094603395c6a197478292af72 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:46:55 standalone.localdomain dnsmasq[546192]: started, version 2.85 cachesize 150
Oct 13 15:46:55 standalone.localdomain dnsmasq[546192]: DNS service limited to local subnets
Oct 13 15:46:55 standalone.localdomain podman[546162]: 2025-10-13 15:46:54.92897618 +0000 UTC m=+0.027243981 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:55 standalone.localdomain dnsmasq[546192]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:55 standalone.localdomain podman[546134]: 2025-10-13 15:46:55.032895042 +0000 UTC m=+0.262052216 container start 69e001c15a490be44afbb78e0fe0f02c70f644c094603395c6a197478292af72 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:46:55 standalone.localdomain dnsmasq[546192]: warning: no upstream servers configured
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.033 496978 INFO neutron.agent.dhcp.agent [None req-58192f28-da05-49da-9b6e-8a11b0d2d783 - - - - - -] DHCP configuration for ports {'6867cb00-afff-4651-87b7-3094e1bf36c3'} is completed
Oct 13 15:46:55 standalone.localdomain dnsmasq[546192]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:46:55 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:55 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b6313d209ce1e1fb10e2be82f0a3d78a18111e3c56b623a82ca94d9fe6b430/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:55 standalone.localdomain podman[546162]: 2025-10-13 15:46:55.049709737 +0000 UTC m=+0.147977498 container init 6d432a516ab14d78eb06fd52290755dd68a57f3a3b187a7dd726602cd7b2c6fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2b1caf7-9527-477a-a4f6-9c95f6068c7e, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.052 496978 INFO neutron.agent.dhcp.agent [None req-b36b3926-e86e-4eee-8f4f-eb410b11fd14 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.053 496978 INFO neutron.agent.dhcp.agent [None req-b36b3926-e86e-4eee-8f4f-eb410b11fd14 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:55 standalone.localdomain podman[546162]: 2025-10-13 15:46:55.057715785 +0000 UTC m=+0.155983546 container start 6d432a516ab14d78eb06fd52290755dd68a57f3a3b187a7dd726602cd7b2c6fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2b1caf7-9527-477a-a4f6-9c95f6068c7e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:55 standalone.localdomain dnsmasq[546194]: started, version 2.85 cachesize 150
Oct 13 15:46:55 standalone.localdomain dnsmasq[546194]: DNS service limited to local subnets
Oct 13 15:46:55 standalone.localdomain dnsmasq[546194]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:55 standalone.localdomain dnsmasq[546194]: warning: no upstream servers configured
Oct 13 15:46:55 standalone.localdomain dnsmasq-dhcp[546194]: DHCP, static leases only on 10.101.0.0, lease time 1d
Oct 13 15:46:55 standalone.localdomain dnsmasq[546194]: read /var/lib/neutron/dhcp/b2b1caf7-9527-477a-a4f6-9c95f6068c7e/addn_hosts - 0 addresses
Oct 13 15:46:55 standalone.localdomain dnsmasq-dhcp[546194]: read /var/lib/neutron/dhcp/b2b1caf7-9527-477a-a4f6-9c95f6068c7e/host
Oct 13 15:46:55 standalone.localdomain dnsmasq-dhcp[546194]: read /var/lib/neutron/dhcp/b2b1caf7-9527-477a-a4f6-9c95f6068c7e/opts
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.097 496978 INFO neutron.agent.dhcp.agent [None req-5b2e8f28-fa37-47c3-a823-051cc71b4335 - - - - - -] Resizing dhcp processing queue green pool size to: 11
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.098 496978 INFO neutron.agent.dhcp.agent [None req-5b2e8f28-fa37-47c3-a823-051cc71b4335 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:44Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889028250>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ffca30>], id=635d51db-186f-4542-a7ad-af64d8e39a76, ip_allocation=immediate, mac_address=fa:16:3e:4d:25:c2, name=tempest-NetworksTestDHCPv6-1554856877, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=8, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['342938c0-db26-4c77-a6c7-21c8dbb898d2'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:46:43Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], standard_attr_id=1870, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:46:44Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.112 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:54Z, description=, device_id=8295397f-6eb8-4cf5-8e1e-340656ddc70c, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889259970>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889259af0>], id=7d6faee4-f367-44d2-8a32-cb6ed05313fa, ip_allocation=immediate, mac_address=fa:16:3e:b4:b3:54, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:40Z, description=, dns_domain=, id=e640877a-3ffa-40a9-9920-52ad8d244aaf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-727913338, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=28045, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1848, status=ACTIVE, subnets=['0139707e-d1d7-4d30-8a10-efebc04a0ba4'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:46:45Z, vlan_transparent=None, network_id=e640877a-3ffa-40a9-9920-52ad8d244aaf, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1924, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:46:54Z on network e640877a-3ffa-40a9-9920-52ad8d244aaf
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.127 496978 INFO neutron.agent.dhcp.agent [None req-bea39146-200d-498a-973f-c00368649482 - - - - - -] Resizing dhcp processing queue green pool size to: 12
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.241 496978 INFO neutron.agent.dhcp.agent [None req-311d05d5-0c13-40c9-a716-280b48c17145 - - - - - -] Resizing dhcp processing queue green pool size to: 11
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.242 496978 INFO neutron.agent.dhcp.agent [None req-311d05d5-0c13-40c9-a716-280b48c17145 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.242 496978 INFO neutron.agent.dhcp.agent [None req-311d05d5-0c13-40c9-a716-280b48c17145 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.243 496978 INFO neutron.agent.dhcp.agent [None req-311d05d5-0c13-40c9-a716-280b48c17145 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.244 496978 INFO neutron.agent.dhcp.agent [None req-311d05d5-0c13-40c9-a716-280b48c17145 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:55 standalone.localdomain dnsmasq[546192]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 1 addresses
Oct 13 15:46:55 standalone.localdomain podman[546213]: 2025-10-13 15:46:55.286122011 +0000 UTC m=+0.074971300 container kill 69e001c15a490be44afbb78e0fe0f02c70f644c094603395c6a197478292af72 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.341 496978 INFO neutron.agent.dhcp.agent [None req-d868db0c-0280-457d-b740-43b33834b5b0 - - - - - -] DHCP configuration for ports {'f4ad9bb4-e6d3-4e35-8554-941e17a630b0', 'b25fae3e-0333-48f8-a5eb-fce6e67a5ac3', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.409 496978 INFO neutron.agent.linux.ip_lib [None req-ec62536b-759e-427e-8b30-9d40f8744160 - - - - - -] Device tap316431c1-f2 cannot be used as it has no MAC address
Oct 13 15:46:55 standalone.localdomain dnsmasq[545178]: read /var/lib/neutron/dhcp/e640877a-3ffa-40a9-9920-52ad8d244aaf/addn_hosts - 1 addresses
Oct 13 15:46:55 standalone.localdomain dnsmasq-dhcp[545178]: read /var/lib/neutron/dhcp/e640877a-3ffa-40a9-9920-52ad8d244aaf/host
Oct 13 15:46:55 standalone.localdomain podman[546270]: 2025-10-13 15:46:55.431355351 +0000 UTC m=+0.064474212 container kill 12b590425c5895458b4ae02ca42648370fef2c6f42421e3beb062e73d0f387fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e640877a-3ffa-40a9-9920-52ad8d244aaf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:46:55 standalone.localdomain dnsmasq-dhcp[545178]: read /var/lib/neutron/dhcp/e640877a-3ffa-40a9-9920-52ad8d244aaf/opts
Oct 13 15:46:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:55.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:55 standalone.localdomain kernel: device tap316431c1-f2 entered promiscuous mode
Oct 13 15:46:55 standalone.localdomain NetworkManager[5962]: <info>  [1760370415.4902] manager: (tap316431c1-f2): new Generic device (/org/freedesktop/NetworkManager/Devices/84)
Oct 13 15:46:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:55.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:55.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:55 standalone.localdomain podman[546292]: 2025-10-13 15:46:55.504390039 +0000 UTC m=+0.106512873 container kill 5ebde18dc498d58313ce74e1a055c3944ef5789f37936415c61cbdba6070e7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1425a739-9d65-4c05-9700-1ab66a63077c, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:55 standalone.localdomain dnsmasq[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/addn_hosts - 0 addresses
Oct 13 15:46:55 standalone.localdomain dnsmasq-dhcp[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/host
Oct 13 15:46:55 standalone.localdomain dnsmasq-dhcp[542384]: read /var/lib/neutron/dhcp/1425a739-9d65-4c05-9700-1ab66a63077c/opts
Oct 13 15:46:55 standalone.localdomain dnsmasq[546194]: read /var/lib/neutron/dhcp/b2b1caf7-9527-477a-a4f6-9c95f6068c7e/addn_hosts - 0 addresses
Oct 13 15:46:55 standalone.localdomain dnsmasq-dhcp[546194]: read /var/lib/neutron/dhcp/b2b1caf7-9527-477a-a4f6-9c95f6068c7e/host
Oct 13 15:46:55 standalone.localdomain dnsmasq-dhcp[546194]: read /var/lib/neutron/dhcp/b2b1caf7-9527-477a-a4f6-9c95f6068c7e/opts
Oct 13 15:46:55 standalone.localdomain podman[546306]: 2025-10-13 15:46:55.521843363 +0000 UTC m=+0.101403223 container kill 6d432a516ab14d78eb06fd52290755dd68a57f3a3b187a7dd726602cd7b2c6fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2b1caf7-9527-477a-a4f6-9c95f6068c7e, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:55.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.620 496978 INFO neutron.agent.dhcp.agent [None req-5b2e8f28-fa37-47c3-a823-051cc71b4335 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:51Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18899d5250>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18899d5400>], id=ce5e5dc5-f47c-4a54-bdb3-d5e6a22ec356, ip_allocation=immediate, mac_address=fa:16:3e:eb:ad:6e, name=tempest-NetworksTestDHCPv6-1740699012, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=10, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['1142beaa-b079-44e0-ae7d-477aa55e16ef'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:46:50Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], standard_attr_id=1911, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:46:51Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:46:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:55.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.708 496978 INFO neutron.agent.dhcp.agent [None req-e7da6724-4067-4ba7-9858-909dcae4bc27 - - - - - -] DHCP configuration for ports {'635d51db-186f-4542-a7ad-af64d8e39a76'} is completed
Oct 13 15:46:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3979: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:55.753 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:55 standalone.localdomain podman[546377]: 2025-10-13 15:46:55.80641258 +0000 UTC m=+0.037448319 container kill 69e001c15a490be44afbb78e0fe0f02c70f644c094603395c6a197478292af72 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 15:46:55 standalone.localdomain dnsmasq[546192]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 2 addresses
Oct 13 15:46:55 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d9e5394bc\x2ddc47\x2d4b7a\x2d9e53\x2df9917f23f6db.mount: Deactivated successfully.
Oct 13 15:46:55 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dc01f2fb2\x2d62f8\x2d4e62\x2d8183\x2d0b35ede8dc9b.mount: Deactivated successfully.
Oct 13 15:46:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:55Z|00449|binding|INFO|Releasing lport ae6e1459-ef41-49b9-bb94-17a908eaa937 from this chassis (sb_readonly=0)
Oct 13 15:46:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:55Z|00450|binding|INFO|Setting lport ae6e1459-ef41-49b9-bb94-17a908eaa937 down in Southbound
Oct 13 15:46:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:55.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:55 standalone.localdomain kernel: device tapae6e1459-ef left promiscuous mode
Oct 13 15:46:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:55.882 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-1425a739-9d65-4c05-9700-1ab66a63077c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1425a739-9d65-4c05-9700-1ab66a63077c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd3c1410d6c264516966bcf7dffd0e4d5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4736446f-bcd8-4b47-83cd-20705e82abe7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=ae6e1459-ef41-49b9-bb94-17a908eaa937) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:55.883 378821 INFO neutron.agent.ovn.metadata.agent [-] Port ae6e1459-ef41-49b9-bb94-17a908eaa937 in datapath 1425a739-9d65-4c05-9700-1ab66a63077c unbound from our chassis
Oct 13 15:46:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:55.887 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1425a739-9d65-4c05-9700-1ab66a63077c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:46:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:55.888 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[f4548c68-9212-4459-94d6-133d2c720a34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:55.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:55 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:46:55 standalone.localdomain systemd[1]: tmp-crun.MS70dD.mount: Deactivated successfully.
Oct 13 15:46:56 standalone.localdomain podman[546403]: 2025-10-13 15:46:56.004683845 +0000 UTC m=+0.090324989 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 15:46:56 standalone.localdomain podman[546403]: 2025-10-13 15:46:56.012026594 +0000 UTC m=+0.097667758 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true)
Oct 13 15:46:56 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.133 496978 INFO neutron.agent.dhcp.agent [None req-aba0a5bd-cf16-469d-bca8-95ec7f869657 - - - - - -] DHCP configuration for ports {'2974eb36-03c0-4f0c-937a-c1a821a148bf', 'f4ad9bb4-e6d3-4e35-8554-941e17a630b0', '7d6faee4-f367-44d2-8a32-cb6ed05313fa'} is completed
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.246 496978 INFO neutron.agent.dhcp.agent [None req-a8d68dd2-5c49-41f1-bcc1-043a814d6d62 - - - - - -] DHCP configuration for ports {'ce5e5dc5-f47c-4a54-bdb3-d5e6a22ec356'} is completed
Oct 13 15:46:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:56Z|00451|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:46:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:56.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:56 standalone.localdomain dnsmasq[546192]: exiting on receipt of SIGTERM
Oct 13 15:46:56 standalone.localdomain podman[546455]: 2025-10-13 15:46:56.326320468 +0000 UTC m=+0.076970062 container kill 69e001c15a490be44afbb78e0fe0f02c70f644c094603395c6a197478292af72 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:46:56 standalone.localdomain systemd[1]: libpod-69e001c15a490be44afbb78e0fe0f02c70f644c094603395c6a197478292af72.scope: Deactivated successfully.
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.349 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:53Z, description=, device_id=4e2fec08-60d2-40a1-83ad-e5ee7cf7c04d, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188912c100>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188912cca0>], id=b25fae3e-0333-48f8-a5eb-fce6e67a5ac3, ip_allocation=immediate, mac_address=fa:16:3e:01:52:0f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:41Z, description=, dns_domain=, id=0ab29eaa-5224-4e03-897b-7f43cb33bf62, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-82592824, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40809, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1855, status=ACTIVE, subnets=['7cacffe9-7fb5-4b4b-9f0a-2c87e55b4f5a'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:46:46Z, vlan_transparent=None, network_id=0ab29eaa-5224-4e03-897b-7f43cb33bf62, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1920, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:46:54Z on network 0ab29eaa-5224-4e03-897b-7f43cb33bf62
Oct 13 15:46:56 standalone.localdomain podman[546469]: 2025-10-13 15:46:56.409137561 +0000 UTC m=+0.071097989 container died 69e001c15a490be44afbb78e0fe0f02c70f644c094603395c6a197478292af72 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:56 standalone.localdomain podman[546469]: 2025-10-13 15:46:56.442446831 +0000 UTC m=+0.104407179 container cleanup 69e001c15a490be44afbb78e0fe0f02c70f644c094603395c6a197478292af72 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:56 standalone.localdomain systemd[1]: libpod-conmon-69e001c15a490be44afbb78e0fe0f02c70f644c094603395c6a197478292af72.scope: Deactivated successfully.
Oct 13 15:46:56 standalone.localdomain podman[546476]: 2025-10-13 15:46:56.474259473 +0000 UTC m=+0.121445869 container remove 69e001c15a490be44afbb78e0fe0f02c70f644c094603395c6a197478292af72 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:46:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:56Z|00452|binding|INFO|Releasing lport d3d72e38-9a15-44ac-a9db-925b52929957 from this chassis (sb_readonly=0)
Oct 13 15:46:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:56Z|00453|binding|INFO|Setting lport d3d72e38-9a15-44ac-a9db-925b52929957 down in Southbound
Oct 13 15:46:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:56.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:56 standalone.localdomain kernel: device tapd3d72e38-9a left promiscuous mode
Oct 13 15:46:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:56.496 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=d3d72e38-9a15-44ac-a9db-925b52929957) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:56.497 378821 INFO neutron.agent.ovn.metadata.agent [-] Port d3d72e38-9a15-44ac-a9db-925b52929957 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:46:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:56.498 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:56.498 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b14942-5835-4408-8b29-30578ceefb7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:56.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:56 standalone.localdomain dnsmasq[546022]: read /var/lib/neutron/dhcp/0ab29eaa-5224-4e03-897b-7f43cb33bf62/addn_hosts - 1 addresses
Oct 13 15:46:56 standalone.localdomain dnsmasq-dhcp[546022]: read /var/lib/neutron/dhcp/0ab29eaa-5224-4e03-897b-7f43cb33bf62/host
Oct 13 15:46:56 standalone.localdomain podman[546529]: 2025-10-13 15:46:56.613257829 +0000 UTC m=+0.060914312 container kill b185d39b196cec2508e1c7fd310a62d95fa8c2546ce32db9fa971e8680d52640 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ab29eaa-5224-4e03-897b-7f43cb33bf62, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:46:56 standalone.localdomain dnsmasq-dhcp[546022]: read /var/lib/neutron/dhcp/0ab29eaa-5224-4e03-897b-7f43cb33bf62/opts
Oct 13 15:46:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43214 DF PROTO=TCP SPT=60716 DPT=9102 SEQ=3437621082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD1B9B60000000001030307) 
Oct 13 15:46:56 standalone.localdomain podman[546548]: 2025-10-13 15:46:56.706343752 +0000 UTC m=+0.098587656 container create f05300778aed36428ca063d510ace56a8022ab9ea2540fac1b7a0defd37d7c95 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa144ad-423b-408b-ae57-8e53ff0a75e6, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:56 standalone.localdomain systemd[1]: Started libpod-conmon-f05300778aed36428ca063d510ace56a8022ab9ea2540fac1b7a0defd37d7c95.scope.
Oct 13 15:46:56 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:56.758 2 INFO neutron.agent.securitygroups_rpc [None req-cf5b7eb6-de15-4233-bce8-cc4a07da873c 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['a49474e2-176f-4b64-bfa8-c264efdb310b']
Oct 13 15:46:56 standalone.localdomain podman[546548]: 2025-10-13 15:46:56.659371937 +0000 UTC m=+0.051615881 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.780 496978 INFO neutron.agent.dhcp.agent [None req-212dc728-3f51-46c8-a39a-a5469676d8c9 - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.781 496978 INFO neutron.agent.dhcp.agent [None req-212dc728-3f51-46c8-a39a-a5469676d8c9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:56 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.781 496978 INFO neutron.agent.dhcp.agent [None req-212dc728-3f51-46c8-a39a-a5469676d8c9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.783 496978 INFO neutron.agent.dhcp.agent [None req-212dc728-3f51-46c8-a39a-a5469676d8c9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.783 496978 INFO neutron.agent.dhcp.agent [None req-212dc728-3f51-46c8-a39a-a5469676d8c9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.783 496978 INFO neutron.agent.dhcp.agent [None req-212dc728-3f51-46c8-a39a-a5469676d8c9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.784 496978 INFO neutron.agent.dhcp.agent [None req-212dc728-3f51-46c8-a39a-a5469676d8c9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.784 496978 INFO neutron.agent.dhcp.agent [None req-212dc728-3f51-46c8-a39a-a5469676d8c9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.784 496978 INFO neutron.agent.dhcp.agent [None req-212dc728-3f51-46c8-a39a-a5469676d8c9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.784 496978 INFO neutron.agent.dhcp.agent [None req-212dc728-3f51-46c8-a39a-a5469676d8c9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.784 496978 INFO neutron.agent.dhcp.agent [None req-212dc728-3f51-46c8-a39a-a5469676d8c9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:56 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5adc478aa71c110fd5dbb414646ee2564508c6f053c76a48eb940740df2269b4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:56 standalone.localdomain podman[546548]: 2025-10-13 15:46:56.797052991 +0000 UTC m=+0.189296885 container init f05300778aed36428ca063d510ace56a8022ab9ea2540fac1b7a0defd37d7c95 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa144ad-423b-408b-ae57-8e53ff0a75e6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:56 standalone.localdomain podman[546548]: 2025-10-13 15:46:56.807412665 +0000 UTC m=+0.199656559 container start f05300778aed36428ca063d510ace56a8022ab9ea2540fac1b7a0defd37d7c95 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa144ad-423b-408b-ae57-8e53ff0a75e6, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:46:56 standalone.localdomain dnsmasq[546575]: started, version 2.85 cachesize 150
Oct 13 15:46:56 standalone.localdomain dnsmasq[546575]: DNS service limited to local subnets
Oct 13 15:46:56 standalone.localdomain dnsmasq[546575]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:56 standalone.localdomain dnsmasq[546575]: warning: no upstream servers configured
Oct 13 15:46:56 standalone.localdomain dnsmasq[546575]: read /var/lib/neutron/dhcp/caa144ad-423b-408b-ae57-8e53ff0a75e6/addn_hosts - 0 addresses
Oct 13 15:46:56 standalone.localdomain ceph-mon[29756]: pgmap v3979: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5471983709a84c3ae0a385f85a88db7974bbb59814161c58b48a8453762fd523-merged.mount: Deactivated successfully.
Oct 13 15:46:56 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-69e001c15a490be44afbb78e0fe0f02c70f644c094603395c6a197478292af72-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:56 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.859 496978 INFO neutron.agent.dhcp.agent [None req-ec62536b-759e-427e-8b30-9d40f8744160 - - - - - -] Resizing dhcp processing queue green pool size to: 11
Oct 13 15:46:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:56.916 496978 INFO neutron.agent.dhcp.agent [None req-ab7da5a3-8911-4b4b-8eca-a49dd636dde2 - - - - - -] DHCP configuration for ports {'b25fae3e-0333-48f8-a5eb-fce6e67a5ac3'} is completed
Oct 13 15:46:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:56.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:57 standalone.localdomain dnsmasq[546575]: exiting on receipt of SIGTERM
Oct 13 15:46:57 standalone.localdomain podman[546594]: 2025-10-13 15:46:57.023612159 +0000 UTC m=+0.059566799 container kill f05300778aed36428ca063d510ace56a8022ab9ea2540fac1b7a0defd37d7c95 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa144ad-423b-408b-ae57-8e53ff0a75e6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 15:46:57 standalone.localdomain systemd[1]: libpod-f05300778aed36428ca063d510ace56a8022ab9ea2540fac1b7a0defd37d7c95.scope: Deactivated successfully.
Oct 13 15:46:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:57.032 496978 INFO neutron.agent.dhcp.agent [None req-469b3ebb-4096-41ed-b2f4-c90fff16a34a - - - - - -] DHCP configuration for ports {'dbb609a9-5e5f-4d53-bbc6-11b333e9effb'} is completed
Oct 13 15:46:57 standalone.localdomain podman[546610]: 2025-10-13 15:46:57.088862404 +0000 UTC m=+0.044665335 container died f05300778aed36428ca063d510ace56a8022ab9ea2540fac1b7a0defd37d7c95 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa144ad-423b-408b-ae57-8e53ff0a75e6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:46:57 standalone.localdomain podman[546610]: 2025-10-13 15:46:57.126999944 +0000 UTC m=+0.082802845 container remove f05300778aed36428ca063d510ace56a8022ab9ea2540fac1b7a0defd37d7c95 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa144ad-423b-408b-ae57-8e53ff0a75e6, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:57.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:57 standalone.localdomain kernel: device tap316431c1-f2 left promiscuous mode
Oct 13 15:46:57 standalone.localdomain systemd[1]: libpod-conmon-f05300778aed36428ca063d510ace56a8022ab9ea2540fac1b7a0defd37d7c95.scope: Deactivated successfully.
Oct 13 15:46:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:57.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:57.197 496978 INFO neutron.agent.dhcp.agent [None req-d9e1bddf-6191-45cd-ad3a-7042d239c353 - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:46:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:57.198 496978 INFO neutron.agent.dhcp.agent [None req-d9e1bddf-6191-45cd-ad3a-7042d239c353 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:57.199 496978 INFO neutron.agent.dhcp.agent [None req-d9e1bddf-6191-45cd-ad3a-7042d239c353 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:46:57 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:57.327 2 INFO neutron.agent.securitygroups_rpc [None req-526fb909-1a9a-45da-9153-39d83a943e4a 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['a49474e2-176f-4b64-bfa8-c264efdb310b']
Oct 13 15:46:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:46:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3980: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5adc478aa71c110fd5dbb414646ee2564508c6f053c76a48eb940740df2269b4-merged.mount: Deactivated successfully.
Oct 13 15:46:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f05300778aed36428ca063d510ace56a8022ab9ea2540fac1b7a0defd37d7c95-userdata-shm.mount: Deactivated successfully.
Oct 13 15:46:57 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dcaa144ad\x2d423b\x2d408b\x2dae57\x2d8e53ff0a75e6.mount: Deactivated successfully.
Oct 13 15:46:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:57.932 496978 INFO neutron.agent.linux.ip_lib [None req-f5fd581a-961a-4ff1-b306-4553916b8beb - - - - - -] Device tap38c7337e-55 cannot be used as it has no MAC address
Oct 13 15:46:57 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:57.941 2 INFO neutron.agent.securitygroups_rpc [None req-a5dc7619-3293-40a1-ba4c-54055d139922 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:46:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:58.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:58 standalone.localdomain kernel: device tap38c7337e-55 entered promiscuous mode
Oct 13 15:46:58 standalone.localdomain NetworkManager[5962]: <info>  [1760370418.0123] manager: (tap38c7337e-55): new Generic device (/org/freedesktop/NetworkManager/Devices/85)
Oct 13 15:46:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:58.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:58Z|00454|binding|INFO|Claiming lport 38c7337e-555c-430d-9050-c23875103e8d for this chassis.
Oct 13 15:46:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:58Z|00455|binding|INFO|38c7337e-555c-430d-9050-c23875103e8d: Claiming unknown
Oct 13 15:46:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:58Z|00456|binding|INFO|Setting lport 38c7337e-555c-430d-9050-c23875103e8d ovn-installed in OVS
Oct 13 15:46:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:58.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:58 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:58.028 2 INFO neutron.agent.securitygroups_rpc [None req-39574b33-16f3-429e-96d1-34550911e7aa 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['a49474e2-176f-4b64-bfa8-c264efdb310b']
Oct 13 15:46:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:58Z|00457|binding|INFO|Setting lport 38c7337e-555c-430d-9050-c23875103e8d up in Southbound
Oct 13 15:46:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:58.035 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=38c7337e-555c-430d-9050-c23875103e8d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:58.036 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 38c7337e-555c-430d-9050-c23875103e8d in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:46:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:58.037 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:58.037 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[3668b611-0a96-4463-b72c-4663544ee150]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:58.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:58.078 496978 INFO neutron.agent.linux.ip_lib [None req-20baf091-dd86-42ca-91d6-b54342f13f34 - - - - - -] Device tap62ad21a5-33 cannot be used as it has no MAC address
Oct 13 15:46:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:58.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:58 standalone.localdomain kernel: device tap62ad21a5-33 entered promiscuous mode
Oct 13 15:46:58 standalone.localdomain NetworkManager[5962]: <info>  [1760370418.1096] manager: (tap62ad21a5-33): new Generic device (/org/freedesktop/NetworkManager/Devices/86)
Oct 13 15:46:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:58Z|00458|binding|INFO|Claiming lport 62ad21a5-33e4-428d-b173-e74b9ce7e052 for this chassis.
Oct 13 15:46:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:58.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:58Z|00459|binding|INFO|62ad21a5-33e4-428d-b173-e74b9ce7e052: Claiming unknown
Oct 13 15:46:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:58.124 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-22271b1b-6a76-4563-ad62-48148526587a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22271b1b-6a76-4563-ad62-48148526587a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fad11c39-2630-44b5-bb91-2e00e194a528, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=62ad21a5-33e4-428d-b173-e74b9ce7e052) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:58.126 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 62ad21a5-33e4-428d-b173-e74b9ce7e052 in datapath 22271b1b-6a76-4563-ad62-48148526587a bound to our chassis
Oct 13 15:46:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:58.127 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 22271b1b-6a76-4563-ad62-48148526587a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:58.128 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[10492080-8d20-4afa-ba3b-3b6bf55b32ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:46:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:58.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:58Z|00460|binding|INFO|Setting lport 62ad21a5-33e4-428d-b173-e74b9ce7e052 ovn-installed in OVS
Oct 13 15:46:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:58Z|00461|binding|INFO|Setting lport 62ad21a5-33e4-428d-b173-e74b9ce7e052 up in Southbound
Oct 13 15:46:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:58.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:58.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:58.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:58.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:58.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:58 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:58.340 2 INFO neutron.agent.securitygroups_rpc [None req-6e3f6022-2df1-41ee-91e7-b407ad6cada8 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:46:58 standalone.localdomain ceph-mon[29756]: pgmap v3980: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:58.852 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:46:54Z, description=, device_id=8295397f-6eb8-4cf5-8e1e-340656ddc70c, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889173910>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891733d0>], id=7d6faee4-f367-44d2-8a32-cb6ed05313fa, ip_allocation=immediate, mac_address=fa:16:3e:b4:b3:54, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:40Z, description=, dns_domain=, id=e640877a-3ffa-40a9-9920-52ad8d244aaf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-727913338, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=28045, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1848, status=ACTIVE, subnets=['0139707e-d1d7-4d30-8a10-efebc04a0ba4'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:46:45Z, vlan_transparent=None, network_id=e640877a-3ffa-40a9-9920-52ad8d244aaf, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1924, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:46:54Z on network e640877a-3ffa-40a9-9920-52ad8d244aaf
Oct 13 15:46:59 standalone.localdomain dnsmasq[545178]: read /var/lib/neutron/dhcp/e640877a-3ffa-40a9-9920-52ad8d244aaf/addn_hosts - 1 addresses
Oct 13 15:46:59 standalone.localdomain podman[546744]: 2025-10-13 15:46:59.133858444 +0000 UTC m=+0.106997498 container kill 12b590425c5895458b4ae02ca42648370fef2c6f42421e3beb062e73d0f387fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e640877a-3ffa-40a9-9920-52ad8d244aaf, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:46:59 standalone.localdomain dnsmasq-dhcp[545178]: read /var/lib/neutron/dhcp/e640877a-3ffa-40a9-9920-52ad8d244aaf/host
Oct 13 15:46:59 standalone.localdomain dnsmasq-dhcp[545178]: read /var/lib/neutron/dhcp/e640877a-3ffa-40a9-9920-52ad8d244aaf/opts
Oct 13 15:46:59 standalone.localdomain sudo[546758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:46:59 standalone.localdomain sudo[546758]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:46:59 standalone.localdomain sudo[546758]: pam_unix(sudo:session): session closed for user root
Oct 13 15:46:59 standalone.localdomain podman[546779]: 
Oct 13 15:46:59 standalone.localdomain sudo[546798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:46:59 standalone.localdomain sudo[546798]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:46:59 standalone.localdomain podman[546779]: 2025-10-13 15:46:59.175198934 +0000 UTC m=+0.060469387 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:59 standalone.localdomain podman[546779]: 2025-10-13 15:46:59.317149382 +0000 UTC m=+0.202419855 container create d807ae8b69b080bc6eab12cf9f99e06ce155c559f5c267ed874cf23ed801ae35 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:46:59 standalone.localdomain systemd[1]: Started libpod-conmon-d807ae8b69b080bc6eab12cf9f99e06ce155c559f5c267ed874cf23ed801ae35.scope.
Oct 13 15:46:59 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:46:59.371 2 INFO neutron.agent.securitygroups_rpc [None req-650f2444-6d98-4d33-b929-0f4e35d12ef8 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['a49474e2-176f-4b64-bfa8-c264efdb310b']
Oct 13 15:46:59 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f9d939f9e5b686fbb14c517b5e6fdded70d58482531d42ffde3472d3e4ea92/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:59 standalone.localdomain podman[546779]: 2025-10-13 15:46:59.401635947 +0000 UTC m=+0.286906430 container init d807ae8b69b080bc6eab12cf9f99e06ce155c559f5c267ed874cf23ed801ae35 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:46:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:59.403 496978 INFO neutron.agent.dhcp.agent [None req-ce81dbd3-a251-484f-b456-aa19c88e78e4 - - - - - -] DHCP configuration for ports {'7d6faee4-f367-44d2-8a32-cb6ed05313fa'} is completed
Oct 13 15:46:59 standalone.localdomain podman[546779]: 2025-10-13 15:46:59.410797573 +0000 UTC m=+0.296068046 container start d807ae8b69b080bc6eab12cf9f99e06ce155c559f5c267ed874cf23ed801ae35 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:46:59 standalone.localdomain dnsmasq[546861]: started, version 2.85 cachesize 150
Oct 13 15:46:59 standalone.localdomain dnsmasq[546861]: DNS service limited to local subnets
Oct 13 15:46:59 standalone.localdomain dnsmasq[546861]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:59 standalone.localdomain dnsmasq[546861]: warning: no upstream servers configured
Oct 13 15:46:59 standalone.localdomain dnsmasq-dhcp[546861]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:46:59 standalone.localdomain dnsmasq[546861]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:46:59 standalone.localdomain dnsmasq-dhcp[546861]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:46:59 standalone.localdomain dnsmasq-dhcp[546861]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:46:59 standalone.localdomain podman[546843]: 
Oct 13 15:46:59 standalone.localdomain podman[546843]: 2025-10-13 15:46:59.450052297 +0000 UTC m=+0.148164912 container create 5672baca3306af0d092fc6fdb14c9bf7652b70ef3d7521c85dc8b3a8859a535f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22271b1b-6a76-4563-ad62-48148526587a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:59 standalone.localdomain podman[546843]: 2025-10-13 15:46:59.370409963 +0000 UTC m=+0.068522578 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:46:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:59.486 496978 INFO neutron.agent.dhcp.agent [None req-f5fd581a-961a-4ff1-b306-4553916b8beb - - - - - -] Resizing dhcp processing queue green pool size to: 11
Oct 13 15:46:59 standalone.localdomain systemd[1]: Started libpod-conmon-5672baca3306af0d092fc6fdb14c9bf7652b70ef3d7521c85dc8b3a8859a535f.scope.
Oct 13 15:46:59 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:46:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7365bc19b98939272aa99d362176cd59e98b259bb67914cc8aac96bfccd1dca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:46:59 standalone.localdomain podman[546843]: 2025-10-13 15:46:59.53571754 +0000 UTC m=+0.233830125 container init 5672baca3306af0d092fc6fdb14c9bf7652b70ef3d7521c85dc8b3a8859a535f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22271b1b-6a76-4563-ad62-48148526587a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:46:59 standalone.localdomain podman[546843]: 2025-10-13 15:46:59.548579741 +0000 UTC m=+0.246692326 container start 5672baca3306af0d092fc6fdb14c9bf7652b70ef3d7521c85dc8b3a8859a535f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22271b1b-6a76-4563-ad62-48148526587a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 13 15:46:59 standalone.localdomain dnsmasq[546893]: started, version 2.85 cachesize 150
Oct 13 15:46:59 standalone.localdomain dnsmasq[546893]: DNS service limited to local subnets
Oct 13 15:46:59 standalone.localdomain dnsmasq[546893]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:46:59 standalone.localdomain dnsmasq[546893]: warning: no upstream servers configured
Oct 13 15:46:59 standalone.localdomain dnsmasq-dhcp[546893]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:46:59 standalone.localdomain dnsmasq[546893]: read /var/lib/neutron/dhcp/22271b1b-6a76-4563-ad62-48148526587a/addn_hosts - 0 addresses
Oct 13 15:46:59 standalone.localdomain dnsmasq-dhcp[546893]: read /var/lib/neutron/dhcp/22271b1b-6a76-4563-ad62-48148526587a/host
Oct 13 15:46:59 standalone.localdomain dnsmasq-dhcp[546893]: read /var/lib/neutron/dhcp/22271b1b-6a76-4563-ad62-48148526587a/opts
Oct 13 15:46:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:59.617 496978 INFO neutron.agent.dhcp.agent [None req-f96e0c62-842c-4062-91ba-8df031d75a98 - - - - - -] DHCP configuration for ports {'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:46:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:59.629 496978 INFO neutron.agent.dhcp.agent [None req-b4963526-3538-4c29-8f86-6fa608a00e66 - - - - - -] Resizing dhcp processing queue green pool size to: 12
Oct 13 15:46:59 standalone.localdomain dnsmasq[546022]: read /var/lib/neutron/dhcp/0ab29eaa-5224-4e03-897b-7f43cb33bf62/addn_hosts - 0 addresses
Oct 13 15:46:59 standalone.localdomain dnsmasq-dhcp[546022]: read /var/lib/neutron/dhcp/0ab29eaa-5224-4e03-897b-7f43cb33bf62/host
Oct 13 15:46:59 standalone.localdomain podman[546899]: 2025-10-13 15:46:59.672039693 +0000 UTC m=+0.060976574 container kill b185d39b196cec2508e1c7fd310a62d95fa8c2546ce32db9fa971e8680d52640 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ab29eaa-5224-4e03-897b-7f43cb33bf62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:46:59 standalone.localdomain dnsmasq-dhcp[546022]: read /var/lib/neutron/dhcp/0ab29eaa-5224-4e03-897b-7f43cb33bf62/opts
Oct 13 15:46:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3981: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:46:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:46:59.765 496978 INFO neutron.agent.dhcp.agent [None req-be328d95-36b5-4e73-a38e-fd9bff363fe6 - - - - - -] DHCP configuration for ports {'7b4db094-7d75-4d67-b68b-6d3949ce8cc5'} is completed
Oct 13 15:46:59 standalone.localdomain dnsmasq[546861]: exiting on receipt of SIGTERM
Oct 13 15:46:59 standalone.localdomain podman[546936]: 2025-10-13 15:46:59.788097202 +0000 UTC m=+0.048069700 container kill d807ae8b69b080bc6eab12cf9f99e06ce155c559f5c267ed874cf23ed801ae35 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:59 standalone.localdomain systemd[1]: libpod-d807ae8b69b080bc6eab12cf9f99e06ce155c559f5c267ed874cf23ed801ae35.scope: Deactivated successfully.
Oct 13 15:46:59 standalone.localdomain sudo[546798]: pam_unix(sudo:session): session closed for user root
Oct 13 15:46:59 standalone.localdomain podman[546975]: 2025-10-13 15:46:59.858913311 +0000 UTC m=+0.058575178 container died d807ae8b69b080bc6eab12cf9f99e06ce155c559f5c267ed874cf23ed801ae35 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:46:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:46:59 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:46:59 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:46:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:46:59 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:46:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:46:59 standalone.localdomain podman[547000]: 2025-10-13 15:46:59.887256765 +0000 UTC m=+0.042805266 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:46:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:46:59 standalone.localdomain podman[546975]: 2025-10-13 15:46:59.891078595 +0000 UTC m=+0.090740442 container cleanup d807ae8b69b080bc6eab12cf9f99e06ce155c559f5c267ed874cf23ed801ae35 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:46:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:46:59 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:46:59 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:46:59 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev b77aa6a0-8898-4ba4-833f-ee300c7fed3f (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:46:59 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev b77aa6a0-8898-4ba4-833f-ee300c7fed3f (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:46:59 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event b77aa6a0-8898-4ba4-833f-ee300c7fed3f (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:46:59 standalone.localdomain systemd[1]: libpod-conmon-d807ae8b69b080bc6eab12cf9f99e06ce155c559f5c267ed874cf23ed801ae35.scope: Deactivated successfully.
Oct 13 15:46:59 standalone.localdomain podman[546985]: 2025-10-13 15:46:59.943059055 +0000 UTC m=+0.129425537 container remove d807ae8b69b080bc6eab12cf9f99e06ce155c559f5c267ed874cf23ed801ae35 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:46:59 standalone.localdomain sudo[547020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:46:59 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:59Z|00462|binding|INFO|Releasing lport 38c7337e-555c-430d-9050-c23875103e8d from this chassis (sb_readonly=0)
Oct 13 15:46:59 standalone.localdomain kernel: device tap38c7337e-55 left promiscuous mode
Oct 13 15:46:59 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:46:59Z|00463|binding|INFO|Setting lport 38c7337e-555c-430d-9050-c23875103e8d down in Southbound
Oct 13 15:46:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:59.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:59 standalone.localdomain sudo[547020]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:46:59 standalone.localdomain sudo[547020]: pam_unix(sudo:session): session closed for user root
Oct 13 15:46:59 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:59.966 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=38c7337e-555c-430d-9050-c23875103e8d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:46:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:46:59.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:46:59 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:59.969 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 38c7337e-555c-430d-9050-c23875103e8d in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:46:59 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:59.971 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:46:59 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:46:59.972 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[21562d40-482f-4a51-be32-a9f98e1803eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:00 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:00Z|00464|binding|INFO|Releasing lport 1bad5609-50fa-4cf3-be8a-0f80dbbfa2bc from this chassis (sb_readonly=0)
Oct 13 15:47:00 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:00Z|00465|binding|INFO|Setting lport 1bad5609-50fa-4cf3-be8a-0f80dbbfa2bc down in Southbound
Oct 13 15:47:00 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:00.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:00 standalone.localdomain kernel: device tap1bad5609-50 left promiscuous mode
Oct 13 15:47:00 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:00.065 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-0ab29eaa-5224-4e03-897b-7f43cb33bf62', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ab29eaa-5224-4e03-897b-7f43cb33bf62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7ccded5-1f07-432b-b3e6-4a8c9f6d769c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=1bad5609-50fa-4cf3-be8a-0f80dbbfa2bc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:00 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:00.067 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 1bad5609-50fa-4cf3-be8a-0f80dbbfa2bc in datapath 0ab29eaa-5224-4e03-897b-7f43cb33bf62 unbound from our chassis
Oct 13 15:47:00 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:00.069 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0ab29eaa-5224-4e03-897b-7f43cb33bf62 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:00 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:00.070 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[4a279a0f-c595-4e3c-9144-e928e8b3e22e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:00 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:00.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-00f9d939f9e5b686fbb14c517b5e6fdded70d58482531d42ffde3472d3e4ea92-merged.mount: Deactivated successfully.
Oct 13 15:47:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d807ae8b69b080bc6eab12cf9f99e06ce155c559f5c267ed874cf23ed801ae35-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:00 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:47:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:47:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:47:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:00.546 496978 INFO neutron.agent.dhcp.agent [None req-609bc242-b155-4064-b7df-801dba8a52bc - - - - - -] Resizing dhcp processing queue green pool size to: 11
Oct 13 15:47:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:00.548 496978 INFO neutron.agent.dhcp.agent [None req-609bc242-b155-4064-b7df-801dba8a52bc - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:00.548 496978 INFO neutron.agent.dhcp.agent [None req-609bc242-b155-4064-b7df-801dba8a52bc - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:00 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:47:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43215 DF PROTO=TCP SPT=60716 DPT=9102 SEQ=3437621082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD1C9770000000001030307) 
Oct 13 15:47:00 standalone.localdomain ceph-mon[29756]: pgmap v3981: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:47:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:47:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:47:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:47:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:47:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:00.977 496978 INFO neutron.agent.linux.ip_lib [None req-c55b4af0-bec6-4506-a222-2f96d244fcc1 - - - - - -] Device tapcd2874f4-f6 cannot be used as it has no MAC address
Oct 13 15:47:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:01.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:01 standalone.localdomain kernel: device tapcd2874f4-f6 entered promiscuous mode
Oct 13 15:47:01 standalone.localdomain NetworkManager[5962]: <info>  [1760370421.0085] manager: (tapcd2874f4-f6): new Generic device (/org/freedesktop/NetworkManager/Devices/87)
Oct 13 15:47:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:01.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:01 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:01Z|00466|binding|INFO|Claiming lport cd2874f4-f6ad-4d57-adc4-e5003eea2d52 for this chassis.
Oct 13 15:47:01 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:01Z|00467|binding|INFO|cd2874f4-f6ad-4d57-adc4-e5003eea2d52: Claiming unknown
Oct 13 15:47:01 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:01.020 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-3a458d6a-963b-4ca0-872d-1a5eac18cdff', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a458d6a-963b-4ca0-872d-1a5eac18cdff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce51b6cd7de42daa5659d6797c11a37', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a77a6193-dc1a-461a-8114-a0cd5f5cbc0f, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=cd2874f4-f6ad-4d57-adc4-e5003eea2d52) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:01 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:01.022 378821 INFO neutron.agent.ovn.metadata.agent [-] Port cd2874f4-f6ad-4d57-adc4-e5003eea2d52 in datapath 3a458d6a-963b-4ca0-872d-1a5eac18cdff bound to our chassis
Oct 13 15:47:01 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:01.024 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3a458d6a-963b-4ca0-872d-1a5eac18cdff or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:01 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:01.024 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[bf2e447a-e804-4830-a35c-a324ed9007a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:01 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:01Z|00468|binding|INFO|Setting lport cd2874f4-f6ad-4d57-adc4-e5003eea2d52 ovn-installed in OVS
Oct 13 15:47:01 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:01Z|00469|binding|INFO|Setting lport cd2874f4-f6ad-4d57-adc4-e5003eea2d52 up in Southbound
Oct 13 15:47:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:01.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:01.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:01 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:01.124 2 INFO neutron.agent.securitygroups_rpc [None req-5b001313-7cf1-4aa8-82d0-c8bcd94bad3b 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['a49474e2-176f-4b64-bfa8-c264efdb310b']
Oct 13 15:47:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:01.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:01.144 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:00Z, description=, device_id=8295397f-6eb8-4cf5-8e1e-340656ddc70c, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889a427c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891063d0>], id=04ee27db-0fb5-43b7-8b52-630451dad0e2, ip_allocation=immediate, mac_address=fa:16:3e:01:c5:0b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:41Z, description=, dns_domain=, id=b2b1caf7-9527-477a-a4f6-9c95f6068c7e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1926174009, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30955, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1858, status=ACTIVE, subnets=['991356d8-3190-4320-9895-3c4fbb10570d'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:46:49Z, vlan_transparent=None, network_id=b2b1caf7-9527-477a-a4f6-9c95f6068c7e, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1947, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:47:00Z on network b2b1caf7-9527-477a-a4f6-9c95f6068c7e
Oct 13 15:47:01 standalone.localdomain dnsmasq[546194]: read /var/lib/neutron/dhcp/b2b1caf7-9527-477a-a4f6-9c95f6068c7e/addn_hosts - 1 addresses
Oct 13 15:47:01 standalone.localdomain dnsmasq-dhcp[546194]: read /var/lib/neutron/dhcp/b2b1caf7-9527-477a-a4f6-9c95f6068c7e/host
Oct 13 15:47:01 standalone.localdomain dnsmasq-dhcp[546194]: read /var/lib/neutron/dhcp/b2b1caf7-9527-477a-a4f6-9c95f6068c7e/opts
Oct 13 15:47:01 standalone.localdomain podman[547085]: 2025-10-13 15:47:01.379446842 +0000 UTC m=+0.066399883 container kill 6d432a516ab14d78eb06fd52290755dd68a57f3a3b187a7dd726602cd7b2c6fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2b1caf7-9527-477a-a4f6-9c95f6068c7e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:01.554 496978 INFO neutron.agent.linux.ip_lib [None req-739736e4-2e54-4abf-8e47-1e6d691fc70d - - - - - -] Device tape0d5db4c-32 cannot be used as it has no MAC address
Oct 13 15:47:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:01.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:01 standalone.localdomain kernel: device tape0d5db4c-32 entered promiscuous mode
Oct 13 15:47:01 standalone.localdomain NetworkManager[5962]: <info>  [1760370421.6034] manager: (tape0d5db4c-32): new Generic device (/org/freedesktop/NetworkManager/Devices/88)
Oct 13 15:47:01 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:01Z|00470|binding|INFO|Claiming lport e0d5db4c-3232-4d9f-97e7-cd00c1b5e03d for this chassis.
Oct 13 15:47:01 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:01Z|00471|binding|INFO|e0d5db4c-3232-4d9f-97e7-cd00c1b5e03d: Claiming unknown
Oct 13 15:47:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:01.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:01 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:01.611 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=e0d5db4c-3232-4d9f-97e7-cd00c1b5e03d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:01 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:01Z|00472|binding|INFO|Setting lport e0d5db4c-3232-4d9f-97e7-cd00c1b5e03d up in Southbound
Oct 13 15:47:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:01.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:01 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:01Z|00473|binding|INFO|Setting lport e0d5db4c-3232-4d9f-97e7-cd00c1b5e03d ovn-installed in OVS
Oct 13 15:47:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:01.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:01 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:01.617 378821 INFO neutron.agent.ovn.metadata.agent [-] Port e0d5db4c-3232-4d9f-97e7-cd00c1b5e03d in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:47:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:01.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:01 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:01.620 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:01 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:01.621 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[943a3a3e-3481-496d-9d23-f452b5a3fee8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:01.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3982: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:01.756 496978 INFO neutron.agent.dhcp.agent [None req-2140cec7-b4d5-41f6-a14f-129a12f71a66 - - - - - -] DHCP configuration for ports {'04ee27db-0fb5-43b7-8b52-630451dad0e2'} is completed
Oct 13 15:47:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:01.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:01.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:02 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:02.158 496978 INFO neutron.agent.linux.ip_lib [None req-005d32a8-f0ee-4688-82c1-df4aa4900e81 - - - - - -] Device tap700fc85d-22 cannot be used as it has no MAC address
Oct 13 15:47:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:02.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:02 standalone.localdomain podman[547195]: 
Oct 13 15:47:02 standalone.localdomain kernel: device tap700fc85d-22 entered promiscuous mode
Oct 13 15:47:02 standalone.localdomain NetworkManager[5962]: <info>  [1760370422.2115] manager: (tap700fc85d-22): new Generic device (/org/freedesktop/NetworkManager/Devices/89)
Oct 13 15:47:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:02.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:02 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:02Z|00474|binding|INFO|Claiming lport 700fc85d-223f-4eec-9422-d27dd484823f for this chassis.
Oct 13 15:47:02 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:02Z|00475|binding|INFO|700fc85d-223f-4eec-9422-d27dd484823f: Claiming unknown
Oct 13 15:47:02 standalone.localdomain podman[547195]: 2025-10-13 15:47:02.22508195 +0000 UTC m=+0.099704411 container create 9924d173612139713123bcbb0210f859fe0866536d6f95d14ce243f4f8acc31c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a458d6a-963b-4ca0-872d-1a5eac18cdff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:47:02 standalone.localdomain systemd[1]: tmp-crun.DgrrYu.mount: Deactivated successfully.
Oct 13 15:47:02 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:02Z|00476|binding|INFO|Setting lport 700fc85d-223f-4eec-9422-d27dd484823f ovn-installed in OVS
Oct 13 15:47:02 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:02Z|00477|binding|INFO|Setting lport 700fc85d-223f-4eec-9422-d27dd484823f up in Southbound
Oct 13 15:47:02 standalone.localdomain dnsmasq[545170]: read /var/lib/neutron/dhcp/36648d54-03b4-46b3-81f4-8c595eafdf9f/addn_hosts - 0 addresses
Oct 13 15:47:02 standalone.localdomain dnsmasq-dhcp[545170]: read /var/lib/neutron/dhcp/36648d54-03b4-46b3-81f4-8c595eafdf9f/host
Oct 13 15:47:02 standalone.localdomain dnsmasq-dhcp[545170]: read /var/lib/neutron/dhcp/36648d54-03b4-46b3-81f4-8c595eafdf9f/opts
Oct 13 15:47:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:02.230 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-8529956a-4260-425e-ad81-51ddfb728f4a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8529956a-4260-425e-ad81-51ddfb728f4a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a966fd035a9e426eabc035f6c807bc5e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f0e9488-788d-4dbd-b628-5709c129379a, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=700fc85d-223f-4eec-9422-d27dd484823f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:02.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:02.232 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 700fc85d-223f-4eec-9422-d27dd484823f in datapath 8529956a-4260-425e-ad81-51ddfb728f4a bound to our chassis
Oct 13 15:47:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:02.234 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8529956a-4260-425e-ad81-51ddfb728f4a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:02.235 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[c15ca13d-5d5b-4c7c-864a-b9cf93e3205b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:02 standalone.localdomain podman[547208]: 2025-10-13 15:47:02.235052921 +0000 UTC m=+0.073179874 container kill 7f7ac5942270bfc8a52ad2eed772bf50744f16233888ea2b43af6268ac36b7a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36648d54-03b4-46b3-81f4-8c595eafdf9f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:02.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:02 standalone.localdomain systemd[1]: Started libpod-conmon-9924d173612139713123bcbb0210f859fe0866536d6f95d14ce243f4f8acc31c.scope.
Oct 13 15:47:02 standalone.localdomain podman[547195]: 2025-10-13 15:47:02.172691556 +0000 UTC m=+0.047314027 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:02 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:02 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6170d30f0a1d26d931caa62db5893aaf8125f1dda463307341a23d0b005422d8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:02 standalone.localdomain podman[547195]: 2025-10-13 15:47:02.291603236 +0000 UTC m=+0.166225717 container init 9924d173612139713123bcbb0210f859fe0866536d6f95d14ce243f4f8acc31c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a458d6a-963b-4ca0-872d-1a5eac18cdff, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:02 standalone.localdomain podman[547195]: 2025-10-13 15:47:02.304085435 +0000 UTC m=+0.178707916 container start 9924d173612139713123bcbb0210f859fe0866536d6f95d14ce243f4f8acc31c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a458d6a-963b-4ca0-872d-1a5eac18cdff, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:47:02 standalone.localdomain dnsmasq[547269]: started, version 2.85 cachesize 150
Oct 13 15:47:02 standalone.localdomain dnsmasq[547269]: DNS service limited to local subnets
Oct 13 15:47:02 standalone.localdomain dnsmasq[547269]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:02 standalone.localdomain dnsmasq[547269]: warning: no upstream servers configured
Oct 13 15:47:02 standalone.localdomain dnsmasq-dhcp[547269]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:02 standalone.localdomain dnsmasq[547269]: read /var/lib/neutron/dhcp/3a458d6a-963b-4ca0-872d-1a5eac18cdff/addn_hosts - 0 addresses
Oct 13 15:47:02 standalone.localdomain dnsmasq-dhcp[547269]: read /var/lib/neutron/dhcp/3a458d6a-963b-4ca0-872d-1a5eac18cdff/host
Oct 13 15:47:02 standalone.localdomain dnsmasq-dhcp[547269]: read /var/lib/neutron/dhcp/3a458d6a-963b-4ca0-872d-1a5eac18cdff/opts
Oct 13 15:47:02 standalone.localdomain dnsmasq[542384]: exiting on receipt of SIGTERM
Oct 13 15:47:02 standalone.localdomain podman[547256]: 2025-10-13 15:47:02.323361046 +0000 UTC m=+0.038485812 container kill 5ebde18dc498d58313ce74e1a055c3944ef5789f37936415c61cbdba6070e7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1425a739-9d65-4c05-9700-1ab66a63077c, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:47:02 standalone.localdomain systemd[1]: libpod-5ebde18dc498d58313ce74e1a055c3944ef5789f37936415c61cbdba6070e7ce.scope: Deactivated successfully.
Oct 13 15:47:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:02.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:02 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:02.349 496978 INFO neutron.agent.dhcp.agent [None req-c55b4af0-bec6-4506-a222-2f96d244fcc1 - - - - - -] Resizing dhcp processing queue green pool size to: 12
Oct 13 15:47:02 standalone.localdomain podman[547289]: 2025-10-13 15:47:02.384817313 +0000 UTC m=+0.033956240 container died 5ebde18dc498d58313ce74e1a055c3944ef5789f37936415c61cbdba6070e7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1425a739-9d65-4c05-9700-1ab66a63077c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:47:02 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:02.399 496978 INFO neutron.agent.dhcp.agent [None req-b928b3f7-8e7e-49d6-a3e1-19131a61d726 - - - - - -] DHCP configuration for ports {'dbf87609-2bfa-4a03-968f-9c48192d33ce'} is completed
Oct 13 15:47:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ebde18dc498d58313ce74e1a055c3944ef5789f37936415c61cbdba6070e7ce-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:02 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0153093156bd8d2ca9490348c5f4041d9fcb901cd22295fe33da7f9dc0e6f49a-merged.mount: Deactivated successfully.
Oct 13 15:47:02 standalone.localdomain podman[547289]: 2025-10-13 15:47:02.448853201 +0000 UTC m=+0.097992118 container remove 5ebde18dc498d58313ce74e1a055c3944ef5789f37936415c61cbdba6070e7ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1425a739-9d65-4c05-9700-1ab66a63077c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:47:02 standalone.localdomain systemd[1]: libpod-conmon-5ebde18dc498d58313ce74e1a055c3944ef5789f37936415c61cbdba6070e7ce.scope: Deactivated successfully.
Oct 13 15:47:02 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:02.494 2 INFO neutron.agent.securitygroups_rpc [None req-f670ccff-95a3-49e9-bed0-9f5f2fa1b994 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['a49474e2-176f-4b64-bfa8-c264efdb310b']
Oct 13 15:47:02 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:02.503 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:00Z, description=, device_id=8295397f-6eb8-4cf5-8e1e-340656ddc70c, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f2b730>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f2ba00>], id=04ee27db-0fb5-43b7-8b52-630451dad0e2, ip_allocation=immediate, mac_address=fa:16:3e:01:c5:0b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:41Z, description=, dns_domain=, id=b2b1caf7-9527-477a-a4f6-9c95f6068c7e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1926174009, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30955, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1858, status=ACTIVE, subnets=['991356d8-3190-4320-9895-3c4fbb10570d'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:46:49Z, vlan_transparent=None, network_id=b2b1caf7-9527-477a-a4f6-9c95f6068c7e, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1947, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:47:00Z on network b2b1caf7-9527-477a-a4f6-9c95f6068c7e
Oct 13 15:47:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:47:02 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:02.769 2 INFO neutron.agent.securitygroups_rpc [None req-f3ff3eb6-7f02-409a-b963-842de9c72b49 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:47:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:02.775 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:da:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'c6:b9:d4:9e:30:db'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:02.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:02.777 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 13 15:47:02 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:02.782 2 INFO neutron.agent.securitygroups_rpc [None req-e19a71d1-86f1-497c-8eda-44c1e34e031d db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:47:02 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:02.796 496978 INFO neutron.agent.dhcp.agent [None req-4b8dde92-327e-4a22-8790-be579a889222 - - - - - -] Resizing dhcp processing queue green pool size to: 11
Oct 13 15:47:02 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:02.798 496978 INFO neutron.agent.dhcp.agent [None req-4b8dde92-327e-4a22-8790-be579a889222 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:02 standalone.localdomain podman[547353]: 2025-10-13 15:47:02.802243434 +0000 UTC m=+0.110822848 container create 349f471128fe958b3b7890d14bf9b28b00c84f8d38c7e33e55a985332afb0ecc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 13 15:47:02 standalone.localdomain ceph-mon[29756]: pgmap v3982: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:02 standalone.localdomain systemd[1]: Started libpod-conmon-349f471128fe958b3b7890d14bf9b28b00c84f8d38c7e33e55a985332afb0ecc.scope.
Oct 13 15:47:02 standalone.localdomain podman[547353]: 2025-10-13 15:47:02.739207467 +0000 UTC m=+0.047786971 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:02 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:02 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:02Z|00478|binding|INFO|Releasing lport 62ad21a5-33e4-428d-b173-e74b9ce7e052 from this chassis (sb_readonly=0)
Oct 13 15:47:02 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:02Z|00479|binding|INFO|Setting lport 62ad21a5-33e4-428d-b173-e74b9ce7e052 down in Southbound
Oct 13 15:47:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:02.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:02 standalone.localdomain kernel: device tap62ad21a5-33 left promiscuous mode
Oct 13 15:47:02 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:02.920 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:01Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889035be0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ed5f10>], id=d03f82c1-0038-48c0-a2dc-1464bde314b7, ip_allocation=immediate, mac_address=fa:16:3e:f2:0a:b3, name=tempest-PortsIpV6TestJSON-305225615, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:56Z, description=, dns_domain=, id=3a458d6a-963b-4ca0-872d-1a5eac18cdff, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-1553576541, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=31948, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1932, status=ACTIVE, subnets=['3b54bbfc-48ca-4721-810f-ef501cb0554f'], tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:46:59Z, vlan_transparent=None, network_id=3a458d6a-963b-4ca0-872d-1a5eac18cdff, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['6c3ff5e9-4017-4412-baf0-58ac2954cf2d'], standard_attr_id=1952, status=DOWN, tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:47:02Z on network 3a458d6a-963b-4ca0-872d-1a5eac18cdff
Oct 13 15:47:02 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/454acc41e10ba727c3b1cc726734ed4e39dc1c47d9d9bd950e99748e3a972683/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:02.923 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-22271b1b-6a76-4563-ad62-48148526587a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22271b1b-6a76-4563-ad62-48148526587a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fad11c39-2630-44b5-bb91-2e00e194a528, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=62ad21a5-33e4-428d-b173-e74b9ce7e052) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:02.925 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 62ad21a5-33e4-428d-b173-e74b9ce7e052 in datapath 22271b1b-6a76-4563-ad62-48148526587a unbound from our chassis
Oct 13 15:47:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:02.928 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22271b1b-6a76-4563-ad62-48148526587a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:02.928 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[daec77c1-9e5c-4ea4-b0dd-c7c1f1430fe1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:02.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:02 standalone.localdomain dnsmasq[546194]: read /var/lib/neutron/dhcp/b2b1caf7-9527-477a-a4f6-9c95f6068c7e/addn_hosts - 1 addresses
Oct 13 15:47:02 standalone.localdomain dnsmasq-dhcp[546194]: read /var/lib/neutron/dhcp/b2b1caf7-9527-477a-a4f6-9c95f6068c7e/host
Oct 13 15:47:02 standalone.localdomain dnsmasq-dhcp[546194]: read /var/lib/neutron/dhcp/b2b1caf7-9527-477a-a4f6-9c95f6068c7e/opts
Oct 13 15:47:02 standalone.localdomain podman[547385]: 2025-10-13 15:47:02.950146268 +0000 UTC m=+0.142902869 container kill 6d432a516ab14d78eb06fd52290755dd68a57f3a3b187a7dd726602cd7b2c6fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2b1caf7-9527-477a-a4f6-9c95f6068c7e, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:47:02 standalone.localdomain podman[547353]: 2025-10-13 15:47:02.98420824 +0000 UTC m=+0.292787654 container init 349f471128fe958b3b7890d14bf9b28b00c84f8d38c7e33e55a985332afb0ecc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:47:02 standalone.localdomain podman[547353]: 2025-10-13 15:47:02.98998371 +0000 UTC m=+0.298563124 container start 349f471128fe958b3b7890d14bf9b28b00c84f8d38c7e33e55a985332afb0ecc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:47:02 standalone.localdomain dnsmasq[547423]: started, version 2.85 cachesize 150
Oct 13 15:47:02 standalone.localdomain dnsmasq[547423]: DNS service limited to local subnets
Oct 13 15:47:02 standalone.localdomain dnsmasq[547423]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:02 standalone.localdomain dnsmasq[547423]: warning: no upstream servers configured
Oct 13 15:47:02 standalone.localdomain dnsmasq-dhcp[547423]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:02 standalone.localdomain dnsmasq[547423]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:02 standalone.localdomain dnsmasq-dhcp[547423]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:02 standalone.localdomain dnsmasq-dhcp[547423]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:03.048 496978 INFO neutron.agent.dhcp.agent [None req-739736e4-2e54-4abf-8e47-1e6d691fc70d - - - - - -] Resizing dhcp processing queue green pool size to: 12
Oct 13 15:47:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:03.050 496978 INFO neutron.agent.dhcp.agent [None req-739736e4-2e54-4abf-8e47-1e6d691fc70d - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:01Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd94f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd9f70>], id=600cfaeb-a62f-4f97-81e3-f4b0fffca771, ip_allocation=immediate, mac_address=fa:16:3e:12:23:a1, name=tempest-NetworksTestDHCPv6-411717225, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=14, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['fb379c73-0996-438c-bdd3-46aa8f0fee8d'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:46:59Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], standard_attr_id=1951, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:47:02Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:47:03 standalone.localdomain dnsmasq[547269]: read /var/lib/neutron/dhcp/3a458d6a-963b-4ca0-872d-1a5eac18cdff/addn_hosts - 1 addresses
Oct 13 15:47:03 standalone.localdomain dnsmasq-dhcp[547269]: read /var/lib/neutron/dhcp/3a458d6a-963b-4ca0-872d-1a5eac18cdff/host
Oct 13 15:47:03 standalone.localdomain dnsmasq-dhcp[547269]: read /var/lib/neutron/dhcp/3a458d6a-963b-4ca0-872d-1a5eac18cdff/opts
Oct 13 15:47:03 standalone.localdomain podman[547426]: 2025-10-13 15:47:03.100709674 +0000 UTC m=+0.056079910 container kill 9924d173612139713123bcbb0210f859fe0866536d6f95d14ce243f4f8acc31c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a458d6a-963b-4ca0-872d-1a5eac18cdff, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:47:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:03.156 496978 INFO neutron.agent.dhcp.agent [None req-a448bef5-d00e-460e-abc4-081dbc70928f - - - - - -] DHCP configuration for ports {'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:47:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:03.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:03.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:47:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:03.229 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 13 15:47:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:03.246 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 13 15:47:03 standalone.localdomain dnsmasq[547423]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 1 addresses
Oct 13 15:47:03 standalone.localdomain dnsmasq-dhcp[547423]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:03 standalone.localdomain podman[547476]: 2025-10-13 15:47:03.263033157 +0000 UTC m=+0.099525005 container kill 349f471128fe958b3b7890d14bf9b28b00c84f8d38c7e33e55a985332afb0ecc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:47:03 standalone.localdomain dnsmasq-dhcp[547423]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:03 standalone.localdomain podman[547502]: 2025-10-13 15:47:03.317551889 +0000 UTC m=+0.087840451 container create b9d9324ddba79af596e6e88c264b1416382c2fb9001f58f29fff48cd98f16910 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8529956a-4260-425e-ad81-51ddfb728f4a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:47:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:03.327 496978 INFO neutron.agent.dhcp.agent [None req-37d0c644-88ff-4cb9-bec3-fb9bf99be8d3 - - - - - -] DHCP configuration for ports {'d03f82c1-0038-48c0-a2dc-1464bde314b7', '04ee27db-0fb5-43b7-8b52-630451dad0e2'} is completed
Oct 13 15:47:03 standalone.localdomain systemd[1]: Started libpod-conmon-b9d9324ddba79af596e6e88c264b1416382c2fb9001f58f29fff48cd98f16910.scope.
Oct 13 15:47:03 standalone.localdomain podman[547502]: 2025-10-13 15:47:03.263909425 +0000 UTC m=+0.034197817 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:03 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:03 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a511553deb7083967c11a93951496392390e2596a1e2f715fab8c72af108bca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:03 standalone.localdomain podman[547502]: 2025-10-13 15:47:03.381289287 +0000 UTC m=+0.151577659 container init b9d9324ddba79af596e6e88c264b1416382c2fb9001f58f29fff48cd98f16910 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8529956a-4260-425e-ad81-51ddfb728f4a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:47:03 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d1425a739\x2d9d65\x2d4c05\x2d9700\x2d1ab66a63077c.mount: Deactivated successfully.
Oct 13 15:47:03 standalone.localdomain podman[547502]: 2025-10-13 15:47:03.390560066 +0000 UTC m=+0.160848438 container start b9d9324ddba79af596e6e88c264b1416382c2fb9001f58f29fff48cd98f16910 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8529956a-4260-425e-ad81-51ddfb728f4a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:47:03 standalone.localdomain dnsmasq[547530]: started, version 2.85 cachesize 150
Oct 13 15:47:03 standalone.localdomain dnsmasq[547530]: DNS service limited to local subnets
Oct 13 15:47:03 standalone.localdomain dnsmasq[547530]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:03 standalone.localdomain dnsmasq[547530]: warning: no upstream servers configured
Oct 13 15:47:03 standalone.localdomain dnsmasq-dhcp[547530]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:03 standalone.localdomain dnsmasq[547530]: read /var/lib/neutron/dhcp/8529956a-4260-425e-ad81-51ddfb728f4a/addn_hosts - 0 addresses
Oct 13 15:47:03 standalone.localdomain dnsmasq-dhcp[547530]: read /var/lib/neutron/dhcp/8529956a-4260-425e-ad81-51ddfb728f4a/host
Oct 13 15:47:03 standalone.localdomain dnsmasq-dhcp[547530]: read /var/lib/neutron/dhcp/8529956a-4260-425e-ad81-51ddfb728f4a/opts
Oct 13 15:47:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:03.441 496978 INFO neutron.agent.dhcp.agent [None req-005d32a8-f0ee-4688-82c1-df4aa4900e81 - - - - - -] Resizing dhcp processing queue green pool size to: 13
Oct 13 15:47:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:03Z|00480|binding|INFO|Releasing lport 700fc85d-223f-4eec-9422-d27dd484823f from this chassis (sb_readonly=0)
Oct 13 15:47:03 standalone.localdomain kernel: device tap700fc85d-22 left promiscuous mode
Oct 13 15:47:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:03Z|00481|binding|INFO|Setting lport 700fc85d-223f-4eec-9422-d27dd484823f down in Southbound
Oct 13 15:47:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:03.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:03.471 496978 INFO neutron.agent.dhcp.agent [None req-d9e23124-7754-442c-9861-8f5144df34f9 - - - - - -] DHCP configuration for ports {'600cfaeb-a62f-4f97-81e3-f4b0fffca771'} is completed
Oct 13 15:47:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:03.484 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-8529956a-4260-425e-ad81-51ddfb728f4a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8529956a-4260-425e-ad81-51ddfb728f4a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a966fd035a9e426eabc035f6c807bc5e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f0e9488-788d-4dbd-b628-5709c129379a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=700fc85d-223f-4eec-9422-d27dd484823f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:03.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:03.485 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 700fc85d-223f-4eec-9422-d27dd484823f in datapath 8529956a-4260-425e-ad81-51ddfb728f4a unbound from our chassis
Oct 13 15:47:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:03.486 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8529956a-4260-425e-ad81-51ddfb728f4a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:03.487 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd19006-f819-4586-90a2-9606eab10a7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:03.540 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:03.646 496978 INFO neutron.agent.dhcp.agent [None req-f556eced-e591-4853-a438-8091dd66e613 - - - - - -] DHCP configuration for ports {'62e5fa6b-16f4-4537-88d8-afbf32765ce4'} is completed
Oct 13 15:47:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3983: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.365 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:04 standalone.localdomain podman[547566]: 2025-10-13 15:47:04.611363667 +0000 UTC m=+0.069084836 container kill 5672baca3306af0d092fc6fdb14c9bf7652b70ef3d7521c85dc8b3a8859a535f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22271b1b-6a76-4563-ad62-48148526587a, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:47:04 standalone.localdomain dnsmasq[546893]: read /var/lib/neutron/dhcp/22271b1b-6a76-4563-ad62-48148526587a/addn_hosts - 0 addresses
Oct 13 15:47:04 standalone.localdomain dnsmasq-dhcp[546893]: read /var/lib/neutron/dhcp/22271b1b-6a76-4563-ad62-48148526587a/host
Oct 13 15:47:04 standalone.localdomain dnsmasq-dhcp[546893]: read /var/lib/neutron/dhcp/22271b1b-6a76-4563-ad62-48148526587a/opts
Oct 13 15:47:04 standalone.localdomain dnsmasq[547530]: read /var/lib/neutron/dhcp/8529956a-4260-425e-ad81-51ddfb728f4a/addn_hosts - 0 addresses
Oct 13 15:47:04 standalone.localdomain dnsmasq-dhcp[547530]: read /var/lib/neutron/dhcp/8529956a-4260-425e-ad81-51ddfb728f4a/host
Oct 13 15:47:04 standalone.localdomain podman[547578]: 2025-10-13 15:47:04.655599766 +0000 UTC m=+0.054969585 container kill b9d9324ddba79af596e6e88c264b1416382c2fb9001f58f29fff48cd98f16910 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8529956a-4260-425e-ad81-51ddfb728f4a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 13 15:47:04 standalone.localdomain dnsmasq-dhcp[547530]: read /var/lib/neutron/dhcp/8529956a-4260-425e-ad81-51ddfb728f4a/opts
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent [None req-77126e2b-921f-4080-945e-b37779e255ed - - - - - -] Unable to reload_allocations dhcp for 22271b1b-6a76-4563-ad62-48148526587a.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap62ad21a5-33 not found in namespace qdhcp-22271b1b-6a76-4563-ad62-48148526587a.
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     return fut.result()
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     raise self._exception
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap62ad21a5-33 not found in namespace qdhcp-22271b1b-6a76-4563-ad62-48148526587a.
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.656 496978 ERROR neutron.agent.dhcp.agent 
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent [-] Unable to reload_allocations dhcp for 8529956a-4260-425e-ad81-51ddfb728f4a.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap700fc85d-22 not found in namespace qdhcp-8529956a-4260-425e-ad81-51ddfb728f4a.
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     return fut.result()
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     raise self._exception
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap700fc85d-22 not found in namespace qdhcp-8529956a-4260-425e-ad81-51ddfb728f4a.
Oct 13 15:47:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:04.693 496978 ERROR neutron.agent.dhcp.agent 
Oct 13 15:47:04 standalone.localdomain podman[547613]: 2025-10-13 15:47:04.806443892 +0000 UTC m=+0.059574319 container kill b185d39b196cec2508e1c7fd310a62d95fa8c2546ce32db9fa971e8680d52640 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ab29eaa-5224-4e03-897b-7f43cb33bf62, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:04 standalone.localdomain dnsmasq[546022]: exiting on receipt of SIGTERM
Oct 13 15:47:04 standalone.localdomain systemd[1]: libpod-b185d39b196cec2508e1c7fd310a62d95fa8c2546ce32db9fa971e8680d52640.scope: Deactivated successfully.
Oct 13 15:47:04 standalone.localdomain ceph-mon[29756]: pgmap v3983: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:04 standalone.localdomain podman[547627]: 2025-10-13 15:47:04.853257022 +0000 UTC m=+0.034357982 container died b185d39b196cec2508e1c7fd310a62d95fa8c2546ce32db9fa971e8680d52640 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ab29eaa-5224-4e03-897b-7f43cb33bf62, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:47:04 standalone.localdomain podman[547627]: 2025-10-13 15:47:04.935044613 +0000 UTC m=+0.116145543 container cleanup b185d39b196cec2508e1c7fd310a62d95fa8c2546ce32db9fa971e8680d52640 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ab29eaa-5224-4e03-897b-7f43cb33bf62, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:47:04 standalone.localdomain systemd[1]: libpod-conmon-b185d39b196cec2508e1c7fd310a62d95fa8c2546ce32db9fa971e8680d52640.scope: Deactivated successfully.
Oct 13 15:47:04 standalone.localdomain podman[547634]: 2025-10-13 15:47:04.962367486 +0000 UTC m=+0.132667619 container remove b185d39b196cec2508e1c7fd310a62d95fa8c2546ce32db9fa971e8680d52640 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ab29eaa-5224-4e03-897b-7f43cb33bf62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.223 496978 INFO neutron.agent.dhcp.agent [None req-589c16f3-b09c-4510-b657-980aa9ff6aab - - - - - -] Resizing dhcp processing queue green pool size to: 12
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.224 496978 INFO neutron.agent.dhcp.agent [None req-a0c7966c-b9ee-4467-a08c-bb1059b68fa5 - - - - - -] Synchronizing state
Oct 13 15:47:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:05.227 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:47:05 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:05.372 2 INFO neutron.agent.securitygroups_rpc [None req-b3b78615-a1c4-48a7-8032-476782f06578 3c43d7dd6ec04ebfbff9fe18886d436d c8db907361b947e6873b49b4ad64f52f - - default default] Security group rule updated ['f8fab133-ec08-4e56-b889-93a05a093103']
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.458 496978 INFO neutron.agent.dhcp.agent [None req-acd07e2e-c9e6-4654-9a09-1ad79a61ce46 - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.459 496978 INFO neutron.agent.dhcp.agent [-] Starting network 0ab29eaa-5224-4e03-897b-7f43cb33bf62 dhcp configuration
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.460 496978 INFO neutron.agent.dhcp.agent [-] Finished network 0ab29eaa-5224-4e03-897b-7f43cb33bf62 dhcp configuration
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.460 496978 INFO neutron.agent.dhcp.agent [-] Starting network 1425a739-9d65-4c05-9700-1ab66a63077c dhcp configuration
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.460 496978 INFO neutron.agent.dhcp.agent [-] Finished network 1425a739-9d65-4c05-9700-1ab66a63077c dhcp configuration
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.461 496978 INFO neutron.agent.dhcp.agent [-] Starting network 22271b1b-6a76-4563-ad62-48148526587a dhcp configuration
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.463 496978 INFO neutron.agent.dhcp.agent [-] Starting network 5f993e6f-30f4-4230-9cfb-649f43d1ab22 dhcp configuration
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.464 496978 INFO neutron.agent.dhcp.agent [-] Finished network 5f993e6f-30f4-4230-9cfb-649f43d1ab22 dhcp configuration
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.465 496978 INFO neutron.agent.dhcp.agent [-] Starting network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e dhcp configuration
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.466 496978 INFO neutron.agent.dhcp.agent [-] Finished network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e dhcp configuration
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.466 496978 INFO neutron.agent.dhcp.agent [-] Starting network ba49ab79-5018-4b85-93a5-708447c5be06 dhcp configuration
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.466 496978 INFO neutron.agent.dhcp.agent [-] Finished network ba49ab79-5018-4b85-93a5-708447c5be06 dhcp configuration
Oct 13 15:47:05 standalone.localdomain dnsmasq[546893]: exiting on receipt of SIGTERM
Oct 13 15:47:05 standalone.localdomain podman[547674]: 2025-10-13 15:47:05.606301643 +0000 UTC m=+0.049281699 container kill 5672baca3306af0d092fc6fdb14c9bf7652b70ef3d7521c85dc8b3a8859a535f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22271b1b-6a76-4563-ad62-48148526587a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:05 standalone.localdomain systemd[1]: libpod-5672baca3306af0d092fc6fdb14c9bf7652b70ef3d7521c85dc8b3a8859a535f.scope: Deactivated successfully.
Oct 13 15:47:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2680d5d6cce5ddbf74ec46573e88f60ca35a4f25eb022093628e95c2b45c5d74-merged.mount: Deactivated successfully.
Oct 13 15:47:05 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b185d39b196cec2508e1c7fd310a62d95fa8c2546ce32db9fa971e8680d52640-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:05 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d0ab29eaa\x2d5224\x2d4e03\x2d897b\x2d7f43cb33bf62.mount: Deactivated successfully.
Oct 13 15:47:05 standalone.localdomain podman[547686]: 2025-10-13 15:47:05.67610917 +0000 UTC m=+0.060036364 container died 5672baca3306af0d092fc6fdb14c9bf7652b70ef3d7521c85dc8b3a8859a535f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22271b1b-6a76-4563-ad62-48148526587a, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:47:05 standalone.localdomain systemd[1]: tmp-crun.XMsNxB.mount: Deactivated successfully.
Oct 13 15:47:05 standalone.localdomain podman[547686]: 2025-10-13 15:47:05.722116795 +0000 UTC m=+0.106043959 container cleanup 5672baca3306af0d092fc6fdb14c9bf7652b70ef3d7521c85dc8b3a8859a535f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22271b1b-6a76-4563-ad62-48148526587a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:05 standalone.localdomain systemd[1]: libpod-conmon-5672baca3306af0d092fc6fdb14c9bf7652b70ef3d7521c85dc8b3a8859a535f.scope: Deactivated successfully.
Oct 13 15:47:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3984: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:05 standalone.localdomain podman[547694]: 2025-10-13 15:47:05.793330367 +0000 UTC m=+0.157981170 container remove 5672baca3306af0d092fc6fdb14c9bf7652b70ef3d7521c85dc8b3a8859a535f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22271b1b-6a76-4563-ad62-48148526587a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.896 496978 INFO neutron.agent.dhcp.agent [None req-98c94cea-2f98-4f68-8c2b-eee94ddbb877 - - - - - -] Finished network 22271b1b-6a76-4563-ad62-48148526587a dhcp configuration
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.897 496978 INFO neutron.agent.dhcp.agent [None req-acd07e2e-c9e6-4654-9a09-1ad79a61ce46 - - - - - -] Synchronizing state complete
Oct 13 15:47:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:05.899 496978 INFO neutron.agent.dhcp.agent [None req-acd07e2e-c9e6-4654-9a09-1ad79a61ce46 - - - - - -] Synchronizing state
Oct 13 15:47:05 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:05.960 2 INFO neutron.agent.securitygroups_rpc [None req-396447d0-fb67-48c6-8b99-615edb8f640b db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:47:06 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:06Z|00482|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:06.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:06.136 496978 INFO neutron.agent.dhcp.agent [None req-decae6dd-bb18-4063-96b5-edbe27a959d8 - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:47:06 standalone.localdomain dnsmasq[547530]: exiting on receipt of SIGTERM
Oct 13 15:47:06 standalone.localdomain podman[547735]: 2025-10-13 15:47:06.32045981 +0000 UTC m=+0.062463240 container kill b9d9324ddba79af596e6e88c264b1416382c2fb9001f58f29fff48cd98f16910 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8529956a-4260-425e-ad81-51ddfb728f4a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:47:06 standalone.localdomain systemd[1]: libpod-b9d9324ddba79af596e6e88c264b1416382c2fb9001f58f29fff48cd98f16910.scope: Deactivated successfully.
Oct 13 15:47:06 standalone.localdomain podman[547749]: 2025-10-13 15:47:06.407294819 +0000 UTC m=+0.075318621 container died b9d9324ddba79af596e6e88c264b1416382c2fb9001f58f29fff48cd98f16910 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8529956a-4260-425e-ad81-51ddfb728f4a, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:47:06 standalone.localdomain podman[547749]: 2025-10-13 15:47:06.441798165 +0000 UTC m=+0.109821937 container cleanup b9d9324ddba79af596e6e88c264b1416382c2fb9001f58f29fff48cd98f16910 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8529956a-4260-425e-ad81-51ddfb728f4a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:06 standalone.localdomain systemd[1]: libpod-conmon-b9d9324ddba79af596e6e88c264b1416382c2fb9001f58f29fff48cd98f16910.scope: Deactivated successfully.
Oct 13 15:47:06 standalone.localdomain podman[547756]: 2025-10-13 15:47:06.498658668 +0000 UTC m=+0.150133453 container remove b9d9324ddba79af596e6e88c264b1416382c2fb9001f58f29fff48cd98f16910 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8529956a-4260-425e-ad81-51ddfb728f4a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:06.529 496978 INFO neutron.agent.dhcp.agent [None req-e975d3c0-798c-4f81-b775-4e5d271dfd6d - - - - - -] Resizing dhcp processing queue green pool size to: 11
Oct 13 15:47:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:06.530 496978 INFO neutron.agent.dhcp.agent [-] Starting network 0ab29eaa-5224-4e03-897b-7f43cb33bf62 dhcp configuration
Oct 13 15:47:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:06.530 496978 INFO neutron.agent.dhcp.agent [-] Finished network 0ab29eaa-5224-4e03-897b-7f43cb33bf62 dhcp configuration
Oct 13 15:47:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:06.531 496978 INFO neutron.agent.dhcp.agent [-] Starting network 5f993e6f-30f4-4230-9cfb-649f43d1ab22 dhcp configuration
Oct 13 15:47:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:06.531 496978 INFO neutron.agent.dhcp.agent [-] Finished network 5f993e6f-30f4-4230-9cfb-649f43d1ab22 dhcp configuration
Oct 13 15:47:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:06.532 496978 INFO neutron.agent.dhcp.agent [-] Starting network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e dhcp configuration
Oct 13 15:47:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:06.532 496978 INFO neutron.agent.dhcp.agent [-] Finished network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e dhcp configuration
Oct 13 15:47:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:06.533 496978 INFO neutron.agent.dhcp.agent [-] Starting network ba49ab79-5018-4b85-93a5-708447c5be06 dhcp configuration
Oct 13 15:47:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:06.533 496978 INFO neutron.agent.dhcp.agent [-] Finished network ba49ab79-5018-4b85-93a5-708447c5be06 dhcp configuration
Oct 13 15:47:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:06.534 496978 INFO neutron.agent.dhcp.agent [None req-e975d3c0-798c-4f81-b775-4e5d271dfd6d - - - - - -] Synchronizing state complete
Oct 13 15:47:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:06.535 496978 INFO neutron.agent.dhcp.agent [None req-589c16f3-b09c-4510-b657-980aa9ff6aab - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:06.536 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:01Z, description=, device_id=df23467e-7917-4395-8903-27c473fb052d, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889229460>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889229400>], id=d03f82c1-0038-48c0-a2dc-1464bde314b7, ip_allocation=immediate, mac_address=fa:16:3e:f2:0a:b3, name=tempest-PortsIpV6TestJSON-305225615, network_id=3a458d6a-963b-4ca0-872d-1a5eac18cdff, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['6c3ff5e9-4017-4412-baf0-58ac2954cf2d'], standard_attr_id=1952, status=DOWN, tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:47:03Z on network 3a458d6a-963b-4ca0-872d-1a5eac18cdff
Oct 13 15:47:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-9a511553deb7083967c11a93951496392390e2596a1e2f715fab8c72af108bca-merged.mount: Deactivated successfully.
Oct 13 15:47:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b9d9324ddba79af596e6e88c264b1416382c2fb9001f58f29fff48cd98f16910-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:06 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d8529956a\x2d4260\x2d425e\x2dad81\x2d51ddfb728f4a.mount: Deactivated successfully.
Oct 13 15:47:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c7365bc19b98939272aa99d362176cd59e98b259bb67914cc8aac96bfccd1dca-merged.mount: Deactivated successfully.
Oct 13 15:47:06 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5672baca3306af0d092fc6fdb14c9bf7652b70ef3d7521c85dc8b3a8859a535f-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:06 standalone.localdomain podman[547810]: 2025-10-13 15:47:06.761476687 +0000 UTC m=+0.075189647 container kill 349f471128fe958b3b7890d14bf9b28b00c84f8d38c7e33e55a985332afb0ecc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 15:47:06 standalone.localdomain systemd[1]: tmp-crun.JVvNv1.mount: Deactivated successfully.
Oct 13 15:47:06 standalone.localdomain dnsmasq[547423]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:06 standalone.localdomain dnsmasq-dhcp[547423]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:06 standalone.localdomain dnsmasq-dhcp[547423]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:06.808 496978 INFO neutron.agent.dhcp.agent [None req-079833d4-cf75-49d7-8bce-26ac7e2d7a9b - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:47:06 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d22271b1b\x2d6a76\x2d4563\x2dad62\x2d48148526587a.mount: Deactivated successfully.
Oct 13 15:47:06 standalone.localdomain ceph-mon[29756]: pgmap v3984: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:06 standalone.localdomain dnsmasq[547269]: read /var/lib/neutron/dhcp/3a458d6a-963b-4ca0-872d-1a5eac18cdff/addn_hosts - 1 addresses
Oct 13 15:47:06 standalone.localdomain dnsmasq-dhcp[547269]: read /var/lib/neutron/dhcp/3a458d6a-963b-4ca0-872d-1a5eac18cdff/host
Oct 13 15:47:06 standalone.localdomain dnsmasq-dhcp[547269]: read /var/lib/neutron/dhcp/3a458d6a-963b-4ca0-872d-1a5eac18cdff/opts
Oct 13 15:47:06 standalone.localdomain podman[547831]: 2025-10-13 15:47:06.902586419 +0000 UTC m=+0.114950937 container kill 9924d173612139713123bcbb0210f859fe0866536d6f95d14ce243f4f8acc31c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a458d6a-963b-4ca0-872d-1a5eac18cdff, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:06.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:06 standalone.localdomain dnsmasq[546194]: read /var/lib/neutron/dhcp/b2b1caf7-9527-477a-a4f6-9c95f6068c7e/addn_hosts - 0 addresses
Oct 13 15:47:06 standalone.localdomain dnsmasq-dhcp[546194]: read /var/lib/neutron/dhcp/b2b1caf7-9527-477a-a4f6-9c95f6068c7e/host
Oct 13 15:47:06 standalone.localdomain podman[547871]: 2025-10-13 15:47:06.933334898 +0000 UTC m=+0.071089159 container kill 6d432a516ab14d78eb06fd52290755dd68a57f3a3b187a7dd726602cd7b2c6fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2b1caf7-9527-477a-a4f6-9c95f6068c7e, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:06 standalone.localdomain dnsmasq-dhcp[546194]: read /var/lib/neutron/dhcp/b2b1caf7-9527-477a-a4f6-9c95f6068c7e/opts
Oct 13 15:47:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:06.966 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:47:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:06.966 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:47:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:06.967 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:47:06 standalone.localdomain dnsmasq[545170]: exiting on receipt of SIGTERM
Oct 13 15:47:06 standalone.localdomain podman[547886]: 2025-10-13 15:47:06.97058771 +0000 UTC m=+0.069726686 container kill 7f7ac5942270bfc8a52ad2eed772bf50744f16233888ea2b43af6268ac36b7a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36648d54-03b4-46b3-81f4-8c595eafdf9f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:06 standalone.localdomain systemd[1]: libpod-7f7ac5942270bfc8a52ad2eed772bf50744f16233888ea2b43af6268ac36b7a5.scope: Deactivated successfully.
Oct 13 15:47:07 standalone.localdomain podman[547912]: 2025-10-13 15:47:07.027121433 +0000 UTC m=+0.041948279 container died 7f7ac5942270bfc8a52ad2eed772bf50744f16233888ea2b43af6268ac36b7a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36648d54-03b4-46b3-81f4-8c595eafdf9f, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:07 standalone.localdomain podman[547912]: 2025-10-13 15:47:07.074665226 +0000 UTC m=+0.089492062 container remove 7f7ac5942270bfc8a52ad2eed772bf50744f16233888ea2b43af6268ac36b7a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36648d54-03b4-46b3-81f4-8c595eafdf9f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:47:07 standalone.localdomain systemd[1]: libpod-conmon-7f7ac5942270bfc8a52ad2eed772bf50744f16233888ea2b43af6268ac36b7a5.scope: Deactivated successfully.
Oct 13 15:47:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:07.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:07 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:07Z|00483|binding|INFO|Releasing lport 2540a99b-db26-4a1d-8d4d-ea860067fbf8 from this chassis (sb_readonly=0)
Oct 13 15:47:07 standalone.localdomain kernel: device tap2540a99b-db left promiscuous mode
Oct 13 15:47:07 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:07Z|00484|binding|INFO|Setting lport 2540a99b-db26-4a1d-8d4d-ea860067fbf8 down in Southbound
Oct 13 15:47:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:07.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:07.174 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-36648d54-03b4-46b3-81f4-8c595eafdf9f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36648d54-03b4-46b3-81f4-8c595eafdf9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=907bffa3-ca71-4872-a6b0-fa5a1f4dc5ce, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=2540a99b-db26-4a1d-8d4d-ea860067fbf8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:07.176 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 2540a99b-db26-4a1d-8d4d-ea860067fbf8 in datapath 36648d54-03b4-46b3-81f4-8c595eafdf9f unbound from our chassis
Oct 13 15:47:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:07.178 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 36648d54-03b4-46b3-81f4-8c595eafdf9f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:07.179 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[f75d8761-a877-4006-8e2d-520d08fba1c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:07.248 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:07.277 496978 INFO neutron.agent.dhcp.agent [None req-3312e0c3-8d39-430d-ad4e-d8db2f98dddf - - - - - -] DHCP configuration for ports {'d03f82c1-0038-48c0-a2dc-1464bde314b7'} is completed
Oct 13 15:47:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:47:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:07.437 496978 INFO neutron.agent.dhcp.agent [None req-e3545d3c-ae37-4cf5-a5d9-0573a1e67c38 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:47:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:07.437 496978 INFO neutron.agent.dhcp.agent [None req-e3545d3c-ae37-4cf5-a5d9-0573a1e67c38 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:07 standalone.localdomain podman[547968]: 2025-10-13 15:47:07.450432197 +0000 UTC m=+0.055221103 container kill 349f471128fe958b3b7890d14bf9b28b00c84f8d38c7e33e55a985332afb0ecc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:07 standalone.localdomain dnsmasq[547423]: exiting on receipt of SIGTERM
Oct 13 15:47:07 standalone.localdomain systemd[1]: libpod-349f471128fe958b3b7890d14bf9b28b00c84f8d38c7e33e55a985332afb0ecc.scope: Deactivated successfully.
Oct 13 15:47:07 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:07.477 2 INFO neutron.agent.securitygroups_rpc [None req-0f52b984-f709-407a-aee9-2e6b7174e93b 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:47:07 standalone.localdomain podman[548001]: 2025-10-13 15:47:07.507380083 +0000 UTC m=+0.037760978 container died 349f471128fe958b3b7890d14bf9b28b00c84f8d38c7e33e55a985332afb0ecc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:47:07 standalone.localdomain podman[548001]: 2025-10-13 15:47:07.547843316 +0000 UTC m=+0.078224191 container remove 349f471128fe958b3b7890d14bf9b28b00c84f8d38c7e33e55a985332afb0ecc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 13 15:47:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:07.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:07 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:07Z|00485|binding|INFO|Releasing lport e0d5db4c-3232-4d9f-97e7-cd00c1b5e03d from this chassis (sb_readonly=0)
Oct 13 15:47:07 standalone.localdomain kernel: device tape0d5db4c-32 left promiscuous mode
Oct 13 15:47:07 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:07Z|00486|binding|INFO|Setting lport e0d5db4c-3232-4d9f-97e7-cd00c1b5e03d down in Southbound
Oct 13 15:47:07 standalone.localdomain podman[547979]: 2025-10-13 15:47:07.506533207 +0000 UTC m=+0.077306762 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 13 15:47:07 standalone.localdomain systemd[1]: libpod-conmon-349f471128fe958b3b7890d14bf9b28b00c84f8d38c7e33e55a985332afb0ecc.scope: Deactivated successfully.
Oct 13 15:47:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:07.567 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=e0d5db4c-3232-4d9f-97e7-cd00c1b5e03d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:07.568 378821 INFO neutron.agent.ovn.metadata.agent [-] Port e0d5db4c-3232-4d9f-97e7-cd00c1b5e03d in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:47:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:07.569 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:07.570 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[0d78ebcc-d3a1-4683-8b0a-5608105954c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:07.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:07 standalone.localdomain podman[547979]: 2025-10-13 15:47:07.589808635 +0000 UTC m=+0.160582170 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 15:47:07 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:47:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-454acc41e10ba727c3b1cc726734ed4e39dc1c47d9d9bd950e99748e3a972683-merged.mount: Deactivated successfully.
Oct 13 15:47:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-349f471128fe958b3b7890d14bf9b28b00c84f8d38c7e33e55a985332afb0ecc-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5810b9ac76003dd2df3cbb916b9194576ac6604b67133c5d2406667c0af247b0-merged.mount: Deactivated successfully.
Oct 13 15:47:07 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f7ac5942270bfc8a52ad2eed772bf50744f16233888ea2b43af6268ac36b7a5-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:07 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d36648d54\x2d03b4\x2d46b3\x2d81f4\x2d8c595eafdf9f.mount: Deactivated successfully.
Oct 13 15:47:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:47:07 standalone.localdomain dnsmasq[547269]: read /var/lib/neutron/dhcp/3a458d6a-963b-4ca0-872d-1a5eac18cdff/addn_hosts - 0 addresses
Oct 13 15:47:07 standalone.localdomain podman[548050]: 2025-10-13 15:47:07.731691931 +0000 UTC m=+0.039218604 container kill 9924d173612139713123bcbb0210f859fe0866536d6f95d14ce243f4f8acc31c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a458d6a-963b-4ca0-872d-1a5eac18cdff, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:07 standalone.localdomain dnsmasq-dhcp[547269]: read /var/lib/neutron/dhcp/3a458d6a-963b-4ca0-872d-1a5eac18cdff/host
Oct 13 15:47:07 standalone.localdomain dnsmasq-dhcp[547269]: read /var/lib/neutron/dhcp/3a458d6a-963b-4ca0-872d-1a5eac18cdff/opts
Oct 13 15:47:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3985: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:07.822 496978 INFO neutron.agent.dhcp.agent [None req-c5786749-13f7-4762-8fc5-eed586cfe511 - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:47:07 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:08Z|00487|binding|INFO|Releasing lport cd2874f4-f6ad-4d57-adc4-e5003eea2d52 from this chassis (sb_readonly=0)
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:08Z|00488|binding|INFO|Setting lport cd2874f4-f6ad-4d57-adc4-e5003eea2d52 down in Southbound
Oct 13 15:47:08 standalone.localdomain kernel: device tapcd2874f4-f6 left promiscuous mode
Oct 13 15:47:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:08.231 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-3a458d6a-963b-4ca0-872d-1a5eac18cdff', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a458d6a-963b-4ca0-872d-1a5eac18cdff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce51b6cd7de42daa5659d6797c11a37', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a77a6193-dc1a-461a-8114-a0cd5f5cbc0f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=cd2874f4-f6ad-4d57-adc4-e5003eea2d52) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:08.234 378821 INFO neutron.agent.ovn.metadata.agent [-] Port cd2874f4-f6ad-4d57-adc4-e5003eea2d52 in datapath 3a458d6a-963b-4ca0-872d-1a5eac18cdff unbound from our chassis
Oct 13 15:47:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:08.236 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3a458d6a-963b-4ca0-872d-1a5eac18cdff or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:08.237 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[4020a05c-ebf1-480a-bec4-ccf4e7e27c1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:47:08 standalone.localdomain podman[548075]: 2025-10-13 15:47:08.486463875 +0000 UTC m=+0.090901186 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, version=9.6, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 15:47:08 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:08.493 496978 INFO neutron.agent.linux.ip_lib [None req-9fdcf17a-0a77-4058-96bf-b2a770b5b6fa - - - - - -] Device tap466f024c-69 cannot be used as it has no MAC address
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:08 standalone.localdomain kernel: device tap466f024c-69 entered promiscuous mode
Oct 13 15:47:08 standalone.localdomain podman[548075]: 2025-10-13 15:47:08.525518383 +0000 UTC m=+0.129955674 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, architecture=x86_64, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 15:47:08 standalone.localdomain NetworkManager[5962]: <info>  [1760370428.5262] manager: (tap466f024c-69): new Generic device (/org/freedesktop/NetworkManager/Devices/90)
Oct 13 15:47:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:08Z|00489|binding|INFO|Claiming lport 466f024c-69bd-44f6-bec8-2f849fdeed76 for this chassis.
Oct 13 15:47:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:08Z|00490|binding|INFO|466f024c-69bd-44f6-bec8-2f849fdeed76: Claiming unknown
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:08 standalone.localdomain systemd-udevd[548100]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:47:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:08.536 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=466f024c-69bd-44f6-bec8-2f849fdeed76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:08.537 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 466f024c-69bd-44f6-bec8-2f849fdeed76 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:47:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:08.538 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:08Z|00491|binding|INFO|Setting lport 466f024c-69bd-44f6-bec8-2f849fdeed76 up in Southbound
Oct 13 15:47:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:08Z|00492|binding|INFO|Setting lport 466f024c-69bd-44f6-bec8-2f849fdeed76 ovn-installed in OVS
Oct 13 15:47:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:08.539 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[f7cbd18a-37f9-4520-aaa2-041a33b4b727]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:08 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:08 standalone.localdomain ceph-mon[29756]: pgmap v3985: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 15:47:08 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 7800.1 total, 600.0 interval
                                                        Cumulative writes: 8868 writes, 39K keys, 8868 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                                        Cumulative WAL: 8868 writes, 1999 syncs, 4.44 writes per sync, written: 0.03 GB, 0.00 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1809 writes, 8313 keys, 1809 commit groups, 1.0 writes per commit group, ingest: 7.87 MB, 0.01 MB/s
                                                        Interval WAL: 1809 writes, 661 syncs, 2.74 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.861 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.889 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Triggering sync for uuid 54a46fec-332e-42f9-83ed-88e763d13f63 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.890 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Triggering sync for uuid 8f68d5aa-abc4-451d-89d2-f5342b71831c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.891 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "54a46fec-332e-42f9-83ed-88e763d13f63" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.891 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.892 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "8f68d5aa-abc4-451d-89d2-f5342b71831c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.892 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "8f68d5aa-abc4-451d-89d2-f5342b71831c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.929 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "54a46fec-332e-42f9-83ed-88e763d13f63" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:47:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:08.937 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "8f68d5aa-abc4-451d-89d2-f5342b71831c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:47:09 standalone.localdomain dnsmasq[545178]: read /var/lib/neutron/dhcp/e640877a-3ffa-40a9-9920-52ad8d244aaf/addn_hosts - 0 addresses
Oct 13 15:47:09 standalone.localdomain dnsmasq-dhcp[545178]: read /var/lib/neutron/dhcp/e640877a-3ffa-40a9-9920-52ad8d244aaf/host
Oct 13 15:47:09 standalone.localdomain dnsmasq-dhcp[545178]: read /var/lib/neutron/dhcp/e640877a-3ffa-40a9-9920-52ad8d244aaf/opts
Oct 13 15:47:09 standalone.localdomain podman[548165]: 2025-10-13 15:47:09.455559504 +0000 UTC m=+0.065106231 container kill 12b590425c5895458b4ae02ca42648370fef2c6f42421e3beb062e73d0f387fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e640877a-3ffa-40a9-9920-52ad8d244aaf, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:09 standalone.localdomain systemd[1]: tmp-crun.UyIe8R.mount: Deactivated successfully.
Oct 13 15:47:09 standalone.localdomain podman[548185]: 
Oct 13 15:47:09 standalone.localdomain podman[548185]: 2025-10-13 15:47:09.530616666 +0000 UTC m=+0.087626925 container create 060bd0e7f0753cdc6bc1f05515136b6d62f726fd40c9ca22c19ceaa4a2a66f2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:47:09 standalone.localdomain systemd[1]: Started libpod-conmon-060bd0e7f0753cdc6bc1f05515136b6d62f726fd40c9ca22c19ceaa4a2a66f2f.scope.
Oct 13 15:47:09 standalone.localdomain podman[548185]: 2025-10-13 15:47:09.485684664 +0000 UTC m=+0.042694953 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:09 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:09 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2611b617890e3932a3540b5305881da9165852ce416b0f4e6f10f934a80a18/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:09 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:09.601 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:09 standalone.localdomain podman[548185]: 2025-10-13 15:47:09.60226504 +0000 UTC m=+0.159275299 container init 060bd0e7f0753cdc6bc1f05515136b6d62f726fd40c9ca22c19ceaa4a2a66f2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 13 15:47:09 standalone.localdomain podman[548185]: 2025-10-13 15:47:09.610037473 +0000 UTC m=+0.167047732 container start 060bd0e7f0753cdc6bc1f05515136b6d62f726fd40c9ca22c19ceaa4a2a66f2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:47:09 standalone.localdomain dnsmasq[548213]: started, version 2.85 cachesize 150
Oct 13 15:47:09 standalone.localdomain dnsmasq[548213]: DNS service limited to local subnets
Oct 13 15:47:09 standalone.localdomain dnsmasq[548213]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:09 standalone.localdomain dnsmasq[548213]: warning: no upstream servers configured
Oct 13 15:47:09 standalone.localdomain dnsmasq-dhcp[548213]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:09 standalone.localdomain dnsmasq[548213]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:09 standalone.localdomain dnsmasq-dhcp[548213]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:09 standalone.localdomain dnsmasq-dhcp[548213]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:09 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:09.653 496978 INFO neutron.agent.dhcp.agent [None req-9fdcf17a-0a77-4058-96bf-b2a770b5b6fa - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:47:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3986: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:09 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:09.770 496978 INFO neutron.agent.dhcp.agent [None req-0fc8aab6-0116-4de2-aaeb-cf906e06bb9b - - - - - -] DHCP configuration for ports {'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:47:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:09.780 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b9e3f7-f5c9-4d48-9eca-66995f72d494, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:47:09 standalone.localdomain dnsmasq[548213]: exiting on receipt of SIGTERM
Oct 13 15:47:09 standalone.localdomain podman[548231]: 2025-10-13 15:47:09.932139121 +0000 UTC m=+0.052430807 container kill 060bd0e7f0753cdc6bc1f05515136b6d62f726fd40c9ca22c19ceaa4a2a66f2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:47:09 standalone.localdomain systemd[1]: libpod-060bd0e7f0753cdc6bc1f05515136b6d62f726fd40c9ca22c19ceaa4a2a66f2f.scope: Deactivated successfully.
Oct 13 15:47:10 standalone.localdomain podman[548245]: 2025-10-13 15:47:10.010842986 +0000 UTC m=+0.066282249 container died 060bd0e7f0753cdc6bc1f05515136b6d62f726fd40c9ca22c19ceaa4a2a66f2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:10 standalone.localdomain podman[548245]: 2025-10-13 15:47:10.047379875 +0000 UTC m=+0.102819098 container cleanup 060bd0e7f0753cdc6bc1f05515136b6d62f726fd40c9ca22c19ceaa4a2a66f2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:10 standalone.localdomain systemd[1]: libpod-conmon-060bd0e7f0753cdc6bc1f05515136b6d62f726fd40c9ca22c19ceaa4a2a66f2f.scope: Deactivated successfully.
Oct 13 15:47:10 standalone.localdomain podman[548252]: 2025-10-13 15:47:10.087375042 +0000 UTC m=+0.129819410 container remove 060bd0e7f0753cdc6bc1f05515136b6d62f726fd40c9ca22c19ceaa4a2a66f2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:10 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:10Z|00493|binding|INFO|Releasing lport 466f024c-69bd-44f6-bec8-2f849fdeed76 from this chassis (sb_readonly=0)
Oct 13 15:47:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:10.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:10 standalone.localdomain kernel: device tap466f024c-69 left promiscuous mode
Oct 13 15:47:10 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:10Z|00494|binding|INFO|Setting lport 466f024c-69bd-44f6-bec8-2f849fdeed76 down in Southbound
Oct 13 15:47:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:10.119 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=466f024c-69bd-44f6-bec8-2f849fdeed76) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:10.120 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 466f024c-69bd-44f6-bec8-2f849fdeed76 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:47:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:10.122 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:10.123 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[ddfe2534-bdf5-41a7-ae65-79e51f08558e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:10.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:10.260 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:47:10 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:10.398 496978 INFO neutron.agent.dhcp.agent [None req-2b7467cb-c853-403e-a0f1-a4dc6dcff6bd - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:47:10 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:10.399 496978 INFO neutron.agent.dhcp.agent [None req-2b7467cb-c853-403e-a0f1-a4dc6dcff6bd - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:10 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:10.399 496978 INFO neutron.agent.dhcp.agent [None req-2b7467cb-c853-403e-a0f1-a4dc6dcff6bd - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-0b2611b617890e3932a3540b5305881da9165852ce416b0f4e6f10f934a80a18-merged.mount: Deactivated successfully.
Oct 13 15:47:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-060bd0e7f0753cdc6bc1f05515136b6d62f726fd40c9ca22c19ceaa4a2a66f2f-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:10 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:47:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:47:10 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:10Z|00495|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:10.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:10 standalone.localdomain systemd[1]: tmp-crun.W3Gqsn.mount: Deactivated successfully.
Oct 13 15:47:10 standalone.localdomain podman[548276]: 2025-10-13 15:47:10.608444006 +0000 UTC m=+0.125966010 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, config_id=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:10 standalone.localdomain podman[548276]: 2025-10-13 15:47:10.642272202 +0000 UTC m=+0.159794186 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:47:10 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:47:10 standalone.localdomain ceph-mon[29756]: pgmap v3986: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:11 standalone.localdomain podman[548313]: 2025-10-13 15:47:11.005944636 +0000 UTC m=+0.060222919 container kill 9924d173612139713123bcbb0210f859fe0866536d6f95d14ce243f4f8acc31c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a458d6a-963b-4ca0-872d-1a5eac18cdff, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:47:11 standalone.localdomain dnsmasq[547269]: exiting on receipt of SIGTERM
Oct 13 15:47:11 standalone.localdomain systemd[1]: libpod-9924d173612139713123bcbb0210f859fe0866536d6f95d14ce243f4f8acc31c.scope: Deactivated successfully.
Oct 13 15:47:11 standalone.localdomain podman[548329]: 2025-10-13 15:47:11.069040235 +0000 UTC m=+0.042808386 container died 9924d173612139713123bcbb0210f859fe0866536d6f95d14ce243f4f8acc31c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a458d6a-963b-4ca0-872d-1a5eac18cdff, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:11 standalone.localdomain podman[548329]: 2025-10-13 15:47:11.123341828 +0000 UTC m=+0.097109939 container remove 9924d173612139713123bcbb0210f859fe0866536d6f95d14ce243f4f8acc31c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a458d6a-963b-4ca0-872d-1a5eac18cdff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:47:11 standalone.localdomain systemd[1]: libpod-conmon-9924d173612139713123bcbb0210f859fe0866536d6f95d14ce243f4f8acc31c.scope: Deactivated successfully.
Oct 13 15:47:11 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:11.200 496978 INFO neutron.agent.linux.ip_lib [None req-a0a7662a-d9af-4eab-99fb-9f2c2802b136 - - - - - -] Device tap262a0a44-38 cannot be used as it has no MAC address
Oct 13 15:47:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:11.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:11.227 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:47:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:11.228 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:47:11 standalone.localdomain kernel: device tap262a0a44-38 entered promiscuous mode
Oct 13 15:47:11 standalone.localdomain NetworkManager[5962]: <info>  [1760370431.2349] manager: (tap262a0a44-38): new Generic device (/org/freedesktop/NetworkManager/Devices/91)
Oct 13 15:47:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:11.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:11 standalone.localdomain systemd-udevd[548102]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:47:11 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:11Z|00496|binding|INFO|Claiming lport 262a0a44-3866-4453-bbd1-0992cb77c0db for this chassis.
Oct 13 15:47:11 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:11Z|00497|binding|INFO|262a0a44-3866-4453-bbd1-0992cb77c0db: Claiming unknown
Oct 13 15:47:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:11.254 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=262a0a44-3866-4453-bbd1-0992cb77c0db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:11.256 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 262a0a44-3866-4453-bbd1-0992cb77c0db in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:47:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:11.257 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:11.258 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[474604a3-e052-4b48-b9f9-7bcc8a3b29e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:11 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap262a0a44-38: No such device
Oct 13 15:47:11 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:11Z|00498|binding|INFO|Setting lport 262a0a44-3866-4453-bbd1-0992cb77c0db ovn-installed in OVS
Oct 13 15:47:11 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:11Z|00499|binding|INFO|Setting lport 262a0a44-3866-4453-bbd1-0992cb77c0db up in Southbound
Oct 13 15:47:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:11.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:11 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap262a0a44-38: No such device
Oct 13 15:47:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:11.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:11 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap262a0a44-38: No such device
Oct 13 15:47:11 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap262a0a44-38: No such device
Oct 13 15:47:11 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap262a0a44-38: No such device
Oct 13 15:47:11 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap262a0a44-38: No such device
Oct 13 15:47:11 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap262a0a44-38: No such device
Oct 13 15:47:11 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap262a0a44-38: No such device
Oct 13 15:47:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:11.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6170d30f0a1d26d931caa62db5893aaf8125f1dda463307341a23d0b005422d8-merged.mount: Deactivated successfully.
Oct 13 15:47:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9924d173612139713123bcbb0210f859fe0866536d6f95d14ce243f4f8acc31c-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:11 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d3a458d6a\x2d963b\x2d4ca0\x2d872d\x2d1a5eac18cdff.mount: Deactivated successfully.
Oct 13 15:47:11 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:11.507 496978 INFO neutron.agent.dhcp.agent [None req-91027ded-5d05-49f2-8afb-9c64cbe3cfaf - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:11 standalone.localdomain podman[467099]: time="2025-10-13T15:47:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:47:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:47:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 421956 "" "Go-http-client/1.1"
Oct 13 15:47:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:47:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 51041 "" "Go-http-client/1.1"
Oct 13 15:47:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3987: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:11.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:12 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:12.110 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:12.229 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:47:12 standalone.localdomain podman[548434]: 
Oct 13 15:47:12 standalone.localdomain podman[548434]: 2025-10-13 15:47:12.283728394 +0000 UTC m=+0.106750090 container create e0c127a07607da01c5abb0eb8f981e6c88bbe61de372f8c259f8fb0384b2246a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 15:47:12 standalone.localdomain systemd[1]: Started libpod-conmon-e0c127a07607da01c5abb0eb8f981e6c88bbe61de372f8c259f8fb0384b2246a.scope.
Oct 13 15:47:12 standalone.localdomain podman[548434]: 2025-10-13 15:47:12.228132661 +0000 UTC m=+0.051154387 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:12 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:12 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4719bbed8b08463ab5ab56ff8d3694270c9cd0cf33f32f34723bc97953bec1bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:12 standalone.localdomain podman[548434]: 2025-10-13 15:47:12.357439784 +0000 UTC m=+0.180461510 container init e0c127a07607da01c5abb0eb8f981e6c88bbe61de372f8c259f8fb0384b2246a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:47:12 standalone.localdomain podman[548434]: 2025-10-13 15:47:12.364348979 +0000 UTC m=+0.187370695 container start e0c127a07607da01c5abb0eb8f981e6c88bbe61de372f8c259f8fb0384b2246a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:47:12 standalone.localdomain dnsmasq[548459]: started, version 2.85 cachesize 150
Oct 13 15:47:12 standalone.localdomain dnsmasq[548459]: DNS service limited to local subnets
Oct 13 15:47:12 standalone.localdomain dnsmasq[548459]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:12 standalone.localdomain dnsmasq[548459]: warning: no upstream servers configured
Oct 13 15:47:12 standalone.localdomain dnsmasq-dhcp[548459]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:12 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:12.371 496978 INFO neutron.agent.linux.ip_lib [None req-d7eb3ea5-961f-4b80-bcb2-e31822837ee2 - - - - - -] Device tapb995c9b5-aa cannot be used as it has no MAC address
Oct 13 15:47:12 standalone.localdomain dnsmasq[548459]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:12 standalone.localdomain dnsmasq-dhcp[548459]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:12 standalone.localdomain dnsmasq-dhcp[548459]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:12.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:12 standalone.localdomain kernel: device tapb995c9b5-aa entered promiscuous mode
Oct 13 15:47:12 standalone.localdomain NetworkManager[5962]: <info>  [1760370432.4091] manager: (tapb995c9b5-aa): new Generic device (/org/freedesktop/NetworkManager/Devices/92)
Oct 13 15:47:12 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:12Z|00500|binding|INFO|Claiming lport b995c9b5-aa63-43a5-8a00-4d620f229b45 for this chassis.
Oct 13 15:47:12 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:12Z|00501|binding|INFO|b995c9b5-aa63-43a5-8a00-4d620f229b45: Claiming unknown
Oct 13 15:47:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:12.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:12.427 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-acfb9846-7661-4c9b-b512-a32963ac8b00', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfb9846-7661-4c9b-b512-a32963ac8b00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34eb737-df08-42fc-a6c1-adbb44336758, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=b995c9b5-aa63-43a5-8a00-4d620f229b45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:12.430 378821 INFO neutron.agent.ovn.metadata.agent [-] Port b995c9b5-aa63-43a5-8a00-4d620f229b45 in datapath acfb9846-7661-4c9b-b512-a32963ac8b00 bound to our chassis
Oct 13 15:47:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:12.432 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network acfb9846-7661-4c9b-b512-a32963ac8b00 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:12.435 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc106f2-3839-46f7-ae90-402b3a361933]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:12 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:12Z|00502|binding|INFO|Setting lport b995c9b5-aa63-43a5-8a00-4d620f229b45 ovn-installed in OVS
Oct 13 15:47:12 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:12Z|00503|binding|INFO|Setting lport b995c9b5-aa63-43a5-8a00-4d620f229b45 up in Southbound
Oct 13 15:47:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:12.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:12.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:12.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:12 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:12.568 496978 INFO neutron.agent.dhcp.agent [None req-78d5eab6-d19b-42d0-b16e-e210958ab7e1 - - - - - -] DHCP configuration for ports {'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:47:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:47:12 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:12.747 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:12Z, description=, device_id=58319b90-2dfb-43bd-8994-8081960dfb37, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889229430>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889229460>], id=7d85ec44-e622-4be6-9e7f-ad6d848ab30a, ip_allocation=immediate, mac_address=fa:16:3e:3c:95:2e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=18, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['5254e1fb-71c3-4c4e-b7a9-5a0d6db619f0'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:47:10Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=False, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1991, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:47:12Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:47:12 standalone.localdomain ceph-mon[29756]: pgmap v3987: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:47:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:47:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:47:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:47:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:47:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:47:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:47:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:47:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:47:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:47:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:47:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:47:13 standalone.localdomain dnsmasq[548459]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 1 addresses
Oct 13 15:47:13 standalone.localdomain dnsmasq-dhcp[548459]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:13 standalone.localdomain dnsmasq-dhcp[548459]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:13 standalone.localdomain systemd[1]: tmp-crun.ihfU6Q.mount: Deactivated successfully.
Oct 13 15:47:13 standalone.localdomain podman[548506]: 2025-10-13 15:47:13.012299952 +0000 UTC m=+0.096042458 container kill e0c127a07607da01c5abb0eb8f981e6c88bbe61de372f8c259f8fb0384b2246a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:47:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:13.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:13.225 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:47:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:13.227 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:47:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:13.227 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:47:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:13.250 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:13.339 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:47:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:13.340 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:47:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:13.341 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:47:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:13.406 496978 INFO neutron.agent.dhcp.agent [None req-d6b99379-da1a-4d99-8694-84f32525a292 - - - - - -] DHCP configuration for ports {'7d85ec44-e622-4be6-9e7f-ad6d848ab30a'} is completed
Oct 13 15:47:13 standalone.localdomain podman[548569]: 
Oct 13 15:47:13 standalone.localdomain podman[548569]: 2025-10-13 15:47:13.562345249 +0000 UTC m=+0.068140496 container create 21fcccae843228faed633b338a6a9caf776a1d63bc843cbb350173d34b9ff5b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-acfb9846-7661-4c9b-b512-a32963ac8b00, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:47:13 standalone.localdomain systemd[1]: Started libpod-conmon-21fcccae843228faed633b338a6a9caf776a1d63bc843cbb350173d34b9ff5b9.scope.
Oct 13 15:47:13 standalone.localdomain systemd[1]: tmp-crun.MoW0u0.mount: Deactivated successfully.
Oct 13 15:47:13 standalone.localdomain dnsmasq[546194]: exiting on receipt of SIGTERM
Oct 13 15:47:13 standalone.localdomain podman[548583]: 2025-10-13 15:47:13.599755076 +0000 UTC m=+0.071697727 container kill 6d432a516ab14d78eb06fd52290755dd68a57f3a3b187a7dd726602cd7b2c6fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2b1caf7-9527-477a-a4f6-9c95f6068c7e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:13 standalone.localdomain systemd[1]: libpod-6d432a516ab14d78eb06fd52290755dd68a57f3a3b187a7dd726602cd7b2c6fe.scope: Deactivated successfully.
Oct 13 15:47:13 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:13 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb7eae96dbea3ff90fa9a89472ae18d0ad73948e79957470011f7f5db810fffc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:13 standalone.localdomain podman[548569]: 2025-10-13 15:47:13.619188292 +0000 UTC m=+0.124983559 container init 21fcccae843228faed633b338a6a9caf776a1d63bc843cbb350173d34b9ff5b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-acfb9846-7661-4c9b-b512-a32963ac8b00, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:47:13 standalone.localdomain podman[548569]: 2025-10-13 15:47:13.628047758 +0000 UTC m=+0.133843005 container start 21fcccae843228faed633b338a6a9caf776a1d63bc843cbb350173d34b9ff5b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-acfb9846-7661-4c9b-b512-a32963ac8b00, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:13 standalone.localdomain podman[548569]: 2025-10-13 15:47:13.531009572 +0000 UTC m=+0.036804839 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:13 standalone.localdomain dnsmasq[548620]: started, version 2.85 cachesize 150
Oct 13 15:47:13 standalone.localdomain dnsmasq[548620]: DNS service limited to local subnets
Oct 13 15:47:13 standalone.localdomain dnsmasq[548620]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:13 standalone.localdomain dnsmasq[548620]: warning: no upstream servers configured
Oct 13 15:47:13 standalone.localdomain dnsmasq-dhcp[548620]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:47:13 standalone.localdomain dnsmasq[548620]: read /var/lib/neutron/dhcp/acfb9846-7661-4c9b-b512-a32963ac8b00/addn_hosts - 0 addresses
Oct 13 15:47:13 standalone.localdomain dnsmasq-dhcp[548620]: read /var/lib/neutron/dhcp/acfb9846-7661-4c9b-b512-a32963ac8b00/host
Oct 13 15:47:13 standalone.localdomain dnsmasq-dhcp[548620]: read /var/lib/neutron/dhcp/acfb9846-7661-4c9b-b512-a32963ac8b00/opts
Oct 13 15:47:13 standalone.localdomain podman[548604]: 2025-10-13 15:47:13.657645312 +0000 UTC m=+0.044689865 container died 6d432a516ab14d78eb06fd52290755dd68a57f3a3b187a7dd726602cd7b2c6fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2b1caf7-9527-477a-a4f6-9c95f6068c7e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:13.697 496978 INFO neutron.agent.dhcp.agent [None req-587992ec-bf80-4b3c-9ecc-6892119497d6 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:47:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3988: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:13 standalone.localdomain podman[548604]: 2025-10-13 15:47:13.741817798 +0000 UTC m=+0.128862341 container cleanup 6d432a516ab14d78eb06fd52290755dd68a57f3a3b187a7dd726602cd7b2c6fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2b1caf7-9527-477a-a4f6-9c95f6068c7e, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:47:13 standalone.localdomain systemd[1]: libpod-conmon-6d432a516ab14d78eb06fd52290755dd68a57f3a3b187a7dd726602cd7b2c6fe.scope: Deactivated successfully.
Oct 13 15:47:13 standalone.localdomain podman[548610]: 2025-10-13 15:47:13.768585033 +0000 UTC m=+0.141531526 container remove 6d432a516ab14d78eb06fd52290755dd68a57f3a3b187a7dd726602cd7b2c6fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2b1caf7-9527-477a-a4f6-9c95f6068c7e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:13.775 496978 INFO neutron.agent.dhcp.agent [None req-f5ff423a-66f6-46de-919d-cad65dab181d - - - - - -] DHCP configuration for ports {'b552de40-11c5-4339-a024-b6fc68b6dcbb'} is completed
Oct 13 15:47:13 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:13Z|00504|binding|INFO|Releasing lport 2974eb36-03c0-4f0c-937a-c1a821a148bf from this chassis (sb_readonly=0)
Oct 13 15:47:13 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:13Z|00505|binding|INFO|Setting lport 2974eb36-03c0-4f0c-937a-c1a821a148bf down in Southbound
Oct 13 15:47:13 standalone.localdomain kernel: device tap2974eb36-03 left promiscuous mode
Oct 13 15:47:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:13.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:13 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:13.792 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-b2b1caf7-9527-477a-a4f6-9c95f6068c7e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2b1caf7-9527-477a-a4f6-9c95f6068c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=333f6906-4b8b-4a0d-ab8c-7cb0cafb7c27, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=2974eb36-03c0-4f0c-937a-c1a821a148bf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:13 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:13.794 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 2974eb36-03c0-4f0c-937a-c1a821a148bf in datapath b2b1caf7-9527-477a-a4f6-9c95f6068c7e unbound from our chassis
Oct 13 15:47:13 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:13.797 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2b1caf7-9527-477a-a4f6-9c95f6068c7e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:13 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:13.798 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[182dc602-0500-4460-9abe-140ecdf963bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:13.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:13.823 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:12Z, description=, device_id=58319b90-2dfb-43bd-8994-8081960dfb37, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f0a790>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f0a220>], id=7d85ec44-e622-4be6-9e7f-ad6d848ab30a, ip_allocation=immediate, mac_address=fa:16:3e:3c:95:2e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=18, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['5254e1fb-71c3-4c4e-b7a9-5a0d6db619f0'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:47:10Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=False, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1991, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:47:12Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:47:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:13.923 496978 INFO neutron.agent.linux.ip_lib [None req-ca30a900-7386-458f-894e-4bd5a803f19e - - - - - -] Device tap158a0209-7d cannot be used as it has no MAC address
Oct 13 15:47:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:13.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:13 standalone.localdomain kernel: device tap158a0209-7d entered promiscuous mode
Oct 13 15:47:13 standalone.localdomain NetworkManager[5962]: <info>  [1760370433.9545] manager: (tap158a0209-7d): new Generic device (/org/freedesktop/NetworkManager/Devices/93)
Oct 13 15:47:13 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:13Z|00506|binding|INFO|Claiming lport 158a0209-7d45-4b97-911f-383c6161fb56 for this chassis.
Oct 13 15:47:13 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:13Z|00507|binding|INFO|158a0209-7d45-4b97-911f-383c6161fb56: Claiming unknown
Oct 13 15:47:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:13.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:13 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:13.964 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe43:d64e/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-9bed9e4d-3fb7-4db3-ad56-35605e9c080b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bed9e4d-3fb7-4db3-ad56-35605e9c080b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a966fd035a9e426eabc035f6c807bc5e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59c96014-6a2a-4c1a-be14-cc9120aee75e, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=158a0209-7d45-4b97-911f-383c6161fb56) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:13 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:13.967 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 158a0209-7d45-4b97-911f-383c6161fb56 in datapath 9bed9e4d-3fb7-4db3-ad56-35605e9c080b bound to our chassis
Oct 13 15:47:13 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:13.972 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port d29c0227-d745-444f-96d6-4dbe3e6e5576 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:47:13 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:13.973 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bed9e4d-3fb7-4db3-ad56-35605e9c080b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:13 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:13.974 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[22b79c53-3719-46ab-8fd8-796f34bda4ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:13.986 496978 INFO neutron.agent.dhcp.agent [None req-acb562a8-876d-4a21-b116-96408070c806 - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:47:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:13.987 496978 INFO neutron.agent.dhcp.agent [None req-acb562a8-876d-4a21-b116-96408070c806 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:14 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:14Z|00508|binding|INFO|Setting lport 158a0209-7d45-4b97-911f-383c6161fb56 up in Southbound
Oct 13 15:47:14 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:14Z|00509|binding|INFO|Setting lport 158a0209-7d45-4b97-911f-383c6161fb56 ovn-installed in OVS
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:14 standalone.localdomain dnsmasq[548459]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 1 addresses
Oct 13 15:47:14 standalone.localdomain podman[548666]: 2025-10-13 15:47:14.040530856 +0000 UTC m=+0.067928491 container kill e0c127a07607da01c5abb0eb8f981e6c88bbe61de372f8c259f8fb0384b2246a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:47:14 standalone.localdomain dnsmasq-dhcp[548459]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:14 standalone.localdomain dnsmasq-dhcp[548459]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.264 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.281 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.282 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.282 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.283 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.283 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.306 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.307 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.307 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.307 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.308 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:47:14 standalone.localdomain dnsmasq[545178]: exiting on receipt of SIGTERM
Oct 13 15:47:14 standalone.localdomain podman[548715]: 2025-10-13 15:47:14.317550836 +0000 UTC m=+0.060745085 container kill 12b590425c5895458b4ae02ca42648370fef2c6f42421e3beb062e73d0f387fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e640877a-3ffa-40a9-9920-52ad8d244aaf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:14 standalone.localdomain systemd[1]: libpod-12b590425c5895458b4ae02ca42648370fef2c6f42421e3beb062e73d0f387fb.scope: Deactivated successfully.
Oct 13 15:47:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:14.338 496978 INFO neutron.agent.dhcp.agent [None req-edba973d-9007-4ef8-9f99-5fc5053b752e - - - - - -] DHCP configuration for ports {'7d85ec44-e622-4be6-9e7f-ad6d848ab30a'} is completed
Oct 13 15:47:14 standalone.localdomain podman[548730]: 2025-10-13 15:47:14.398239223 +0000 UTC m=+0.067327161 container died 12b590425c5895458b4ae02ca42648370fef2c6f42421e3beb062e73d0f387fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e640877a-3ffa-40a9-9920-52ad8d244aaf, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:14 standalone.localdomain podman[548730]: 2025-10-13 15:47:14.438080796 +0000 UTC m=+0.107168684 container cleanup 12b590425c5895458b4ae02ca42648370fef2c6f42421e3beb062e73d0f387fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e640877a-3ffa-40a9-9920-52ad8d244aaf, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:14 standalone.localdomain systemd[1]: libpod-conmon-12b590425c5895458b4ae02ca42648370fef2c6f42421e3beb062e73d0f387fb.scope: Deactivated successfully.
Oct 13 15:47:14 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:14Z|00510|binding|INFO|Removing iface tap158a0209-7d ovn-installed in OVS
Oct 13 15:47:14 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:14.459 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port d29c0227-d745-444f-96d6-4dbe3e6e5576 with type ""
Oct 13 15:47:14 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:14Z|00511|binding|INFO|Removing lport 158a0209-7d45-4b97-911f-383c6161fb56 ovn-installed in OVS
Oct 13 15:47:14 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:14.461 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe43:d64e/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-9bed9e4d-3fb7-4db3-ad56-35605e9c080b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bed9e4d-3fb7-4db3-ad56-35605e9c080b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a966fd035a9e426eabc035f6c807bc5e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59c96014-6a2a-4c1a-be14-cc9120aee75e, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=158a0209-7d45-4b97-911f-383c6161fb56) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:14 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:14.466 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 158a0209-7d45-4b97-911f-383c6161fb56 in datapath 9bed9e4d-3fb7-4db3-ad56-35605e9c080b unbound from our chassis
Oct 13 15:47:14 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:14.478 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bed9e4d-3fb7-4db3-ad56-35605e9c080b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:14 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:14.479 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[35f94646-c1b4-40b9-b41e-1894b16677b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:14 standalone.localdomain podman[548738]: 2025-10-13 15:47:14.487696674 +0000 UTC m=+0.126627101 container remove 12b590425c5895458b4ae02ca42648370fef2c6f42421e3beb062e73d0f387fb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e640877a-3ffa-40a9-9920-52ad8d244aaf, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:14 standalone.localdomain kernel: device tapd61fa2d6-03 left promiscuous mode
Oct 13 15:47:14 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:14Z|00512|binding|INFO|Releasing lport d61fa2d6-039b-4502-83b2-48131f68658b from this chassis (sb_readonly=0)
Oct 13 15:47:14 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:14Z|00513|binding|INFO|Setting lport d61fa2d6-039b-4502-83b2-48131f68658b down in Southbound
Oct 13 15:47:14 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:14.512 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-e640877a-3ffa-40a9-9920-52ad8d244aaf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e640877a-3ffa-40a9-9920-52ad8d244aaf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af9cf0f6-01d6-4f09-b73e-fbe2e4fb151d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=d61fa2d6-039b-4502-83b2-48131f68658b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:14 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:14.514 378821 INFO neutron.agent.ovn.metadata.agent [-] Port d61fa2d6-039b-4502-83b2-48131f68658b in datapath e640877a-3ffa-40a9-9920-52ad8d244aaf unbound from our chassis
Oct 13 15:47:14 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:14.516 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e640877a-3ffa-40a9-9920-52ad8d244aaf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:14 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:14.517 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[a297d455-c6a8-4efd-8624-fccd74728d73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-10b6313d209ce1e1fb10e2be82f0a3d78a18111e3c56b623a82ca94d9fe6b430-merged.mount: Deactivated successfully.
Oct 13 15:47:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d432a516ab14d78eb06fd52290755dd68a57f3a3b187a7dd726602cd7b2c6fe-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:14 standalone.localdomain systemd[1]: run-netns-qdhcp\x2db2b1caf7\x2d9527\x2d477a\x2da4f6\x2d9c95f6068c7e.mount: Deactivated successfully.
Oct 13 15:47:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a84387f4f930d23b59a71dcba5d417955612cdfd8e410c6739d2f08a5a452245-merged.mount: Deactivated successfully.
Oct 13 15:47:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-12b590425c5895458b4ae02ca42648370fef2c6f42421e3beb062e73d0f387fb-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:14.627 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:14 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:47:14 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/4009561394' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:47:14 standalone.localdomain ceph-mon[29756]: pgmap v3988: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:14 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4009561394' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.819 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.893 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.894 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.894 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.899 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:47:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:14.899 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:47:14 standalone.localdomain systemd[1]: run-netns-qdhcp\x2de640877a\x2d3ffa\x2d40a9\x2d9920\x2d52ad8d244aaf.mount: Deactivated successfully.
Oct 13 15:47:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:14.985 496978 INFO neutron.agent.dhcp.agent [None req-5f695f91-ec9c-4572-8237-f30aa3648549 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.094 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.095 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9115MB free_disk=6.9495849609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.095 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.096 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:47:15 standalone.localdomain podman[548818]: 2025-10-13 15:47:15.100299594 +0000 UTC m=+0.101228019 container create 5f4e965193787c7c8bef0b6151dcba9f4d297c6c708d13404c02cbcfacd38a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9bed9e4d-3fb7-4db3-ad56-35605e9c080b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:15Z|00514|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:15 standalone.localdomain systemd[1]: Started libpod-conmon-5f4e965193787c7c8bef0b6151dcba9f4d297c6c708d13404c02cbcfacd38a09.scope.
Oct 13 15:47:15 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:15 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf27f83b4fef2c114a897f8eff3e9cc81cce8c8e3cda3a7aac9cf0edba6ac5c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:15 standalone.localdomain podman[548818]: 2025-10-13 15:47:15.055222958 +0000 UTC m=+0.056151463 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:15 standalone.localdomain podman[548818]: 2025-10-13 15:47:15.163295878 +0000 UTC m=+0.164224303 container init 5f4e965193787c7c8bef0b6151dcba9f4d297c6c708d13404c02cbcfacd38a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9bed9e4d-3fb7-4db3-ad56-35605e9c080b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:47:15 standalone.localdomain podman[548818]: 2025-10-13 15:47:15.174927342 +0000 UTC m=+0.175855767 container start 5f4e965193787c7c8bef0b6151dcba9f4d297c6c708d13404c02cbcfacd38a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9bed9e4d-3fb7-4db3-ad56-35605e9c080b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:15 standalone.localdomain dnsmasq[548836]: started, version 2.85 cachesize 150
Oct 13 15:47:15 standalone.localdomain dnsmasq[548836]: DNS service limited to local subnets
Oct 13 15:47:15 standalone.localdomain dnsmasq[548836]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:15 standalone.localdomain dnsmasq[548836]: warning: no upstream servers configured
Oct 13 15:47:15 standalone.localdomain dnsmasq-dhcp[548836]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:15 standalone.localdomain dnsmasq[548836]: read /var/lib/neutron/dhcp/9bed9e4d-3fb7-4db3-ad56-35605e9c080b/addn_hosts - 0 addresses
Oct 13 15:47:15 standalone.localdomain dnsmasq-dhcp[548836]: read /var/lib/neutron/dhcp/9bed9e4d-3fb7-4db3-ad56-35605e9c080b/host
Oct 13 15:47:15 standalone.localdomain dnsmasq-dhcp[548836]: read /var/lib/neutron/dhcp/9bed9e4d-3fb7-4db3-ad56-35605e9c080b/opts
Oct 13 15:47:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:15.268 496978 INFO neutron.agent.dhcp.agent [None req-640d29b0-1493-447b-97d3-ba2873b40350 - - - - - -] DHCP configuration for ports {'269e03db-a913-4755-88c7-a4794b2ac038'} is completed
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.398 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.399 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.399 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.399 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:47:15 standalone.localdomain dnsmasq[548836]: exiting on receipt of SIGTERM
Oct 13 15:47:15 standalone.localdomain podman[548855]: 2025-10-13 15:47:15.401594532 +0000 UTC m=+0.063404719 container kill 5f4e965193787c7c8bef0b6151dcba9f4d297c6c708d13404c02cbcfacd38a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9bed9e4d-3fb7-4db3-ad56-35605e9c080b, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:15 standalone.localdomain systemd[1]: libpod-5f4e965193787c7c8bef0b6151dcba9f4d297c6c708d13404c02cbcfacd38a09.scope: Deactivated successfully.
Oct 13 15:47:15 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:15.419 2 INFO neutron.agent.securitygroups_rpc [None req-2bfa556e-8dec-401a-8593-15a419656bac 3d4d48f603104c4d919565e926e139c4 a966fd035a9e426eabc035f6c807bc5e - - default default] Security group member updated ['4142dd56-682e-4ee4-8478-a4934a3acf9a']
Oct 13 15:47:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:15.434 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:15 standalone.localdomain podman[548871]: 2025-10-13 15:47:15.462302606 +0000 UTC m=+0.042081624 container died 5f4e965193787c7c8bef0b6151dcba9f4d297c6c708d13404c02cbcfacd38a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9bed9e4d-3fb7-4db3-ad56-35605e9c080b, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 13 15:47:15 standalone.localdomain podman[548871]: 2025-10-13 15:47:15.507116493 +0000 UTC m=+0.086895491 container remove 5f4e965193787c7c8bef0b6151dcba9f4d297c6c708d13404c02cbcfacd38a09 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9bed9e4d-3fb7-4db3-ad56-35605e9c080b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:47:15 standalone.localdomain systemd[1]: libpod-conmon-5f4e965193787c7c8bef0b6151dcba9f4d297c6c708d13404c02cbcfacd38a09.scope: Deactivated successfully.
Oct 13 15:47:15 standalone.localdomain kernel: device tap158a0209-7d left promiscuous mode
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:15.558 496978 INFO neutron.agent.dhcp.agent [None req-128dddf1-d066-4986-94ff-4e45bbef8cb3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:15.559 496978 INFO neutron.agent.dhcp.agent [None req-128dddf1-d066-4986-94ff-4e45bbef8cb3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4cf27f83b4fef2c114a897f8eff3e9cc81cce8c8e3cda3a7aac9cf0edba6ac5c-merged.mount: Deactivated successfully.
Oct 13 15:47:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f4e965193787c7c8bef0b6151dcba9f4d297c6c708d13404c02cbcfacd38a09-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:15 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d9bed9e4d\x2d3fb7\x2d4db3\x2dad56\x2d35605e9c080b.mount: Deactivated successfully.
Oct 13 15:47:15 standalone.localdomain ceph-mgr[29999]: [devicehealth INFO root] Check health
Oct 13 15:47:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:15.661 496978 INFO neutron.agent.linux.ip_lib [None req-3130a5e4-d472-4b9c-bdac-7f52e1249abe - - - - - -] Device tapc25941d0-85 cannot be used as it has no MAC address
Oct 13 15:47:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:15.699 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port d5841cac-80e3-409b-b9f5-66c754983018 with type ""
Oct 13 15:47:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:15Z|00515|binding|INFO|Removing iface tapb995c9b5-aa ovn-installed in OVS
Oct 13 15:47:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:15Z|00516|binding|INFO|Removing lport b995c9b5-aa63-43a5-8a00-4d620f229b45 ovn-installed in OVS
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:15.702 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-acfb9846-7661-4c9b-b512-a32963ac8b00', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-acfb9846-7661-4c9b-b512-a32963ac8b00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b34eb737-df08-42fc-a6c1-adbb44336758, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=b995c9b5-aa63-43a5-8a00-4d620f229b45) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:15.705 378821 INFO neutron.agent.ovn.metadata.agent [-] Port b995c9b5-aa63-43a5-8a00-4d620f229b45 in datapath acfb9846-7661-4c9b-b512-a32963ac8b00 unbound from our chassis
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:15 standalone.localdomain kernel: device tapb995c9b5-aa left promiscuous mode
Oct 13 15:47:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:15.710 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network acfb9846-7661-4c9b-b512-a32963ac8b00, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:15.711 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[ae2ac59d-6b5c-4f91-b4fa-1390fca6229c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:15 standalone.localdomain kernel: device tapc25941d0-85 entered promiscuous mode
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:15 standalone.localdomain NetworkManager[5962]: <info>  [1760370435.7333] manager: (tapc25941d0-85): new Generic device (/org/freedesktop/NetworkManager/Devices/94)
Oct 13 15:47:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:15Z|00517|binding|INFO|Claiming lport c25941d0-858b-4710-91a2-50a6efc58ca5 for this chassis.
Oct 13 15:47:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:15Z|00518|binding|INFO|c25941d0-858b-4710-91a2-50a6efc58ca5: Claiming unknown
Oct 13 15:47:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3989: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:15.747 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-ac515510-2ee8-49c2-aaf8-85a363df7851', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac515510-2ee8-49c2-aaf8-85a363df7851', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b39821bd-fc24-4f53-8b85-30b939379a0d, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=c25941d0-858b-4710-91a2-50a6efc58ca5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:15.749 378821 INFO neutron.agent.ovn.metadata.agent [-] Port c25941d0-858b-4710-91a2-50a6efc58ca5 in datapath ac515510-2ee8-49c2-aaf8-85a363df7851 bound to our chassis
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.750 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Refreshing inventories for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 13 15:47:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:15.751 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ac515510-2ee8-49c2-aaf8-85a363df7851 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:15.752 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[884c0bc8-1cbb-41ab-9e1b-907a76b1f871]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:15Z|00519|binding|INFO|Setting lport c25941d0-858b-4710-91a2-50a6efc58ca5 ovn-installed in OVS
Oct 13 15:47:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:15Z|00520|binding|INFO|Setting lport c25941d0-858b-4710-91a2-50a6efc58ca5 up in Southbound
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:15.814 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:15.863 496978 INFO neutron.agent.linux.ip_lib [None req-90eae700-9e64-4203-bda3-94f71638c87b - - - - - -] Device tape3d92db7-51 cannot be used as it has no MAC address
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:15 standalone.localdomain kernel: device tape3d92db7-51 entered promiscuous mode
Oct 13 15:47:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:15Z|00521|binding|INFO|Claiming lport e3d92db7-5106-4a58-83fb-0c74a16b72a2 for this chassis.
Oct 13 15:47:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:15Z|00522|binding|INFO|e3d92db7-5106-4a58-83fb-0c74a16b72a2: Claiming unknown
Oct 13 15:47:15 standalone.localdomain NetworkManager[5962]: <info>  [1760370435.8984] manager: (tape3d92db7-51): new Generic device (/org/freedesktop/NetworkManager/Devices/95)
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:15.908 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce51b6cd7de42daa5659d6797c11a37', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=def8bcce-52c2-4cc8-9e08-a28c0a56cc80, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=e3d92db7-5106-4a58-83fb-0c74a16b72a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:15.910 378821 INFO neutron.agent.ovn.metadata.agent [-] Port e3d92db7-5106-4a58-83fb-0c74a16b72a2 in datapath a92a537c-4a4b-4bdd-9c48-bafcd5dc371e bound to our chassis
Oct 13 15:47:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:15.912 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:15.913 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ff1a6f-3810-49d9-95d7-4bcf93e10b08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:15 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:15Z|00523|binding|INFO|Setting lport e3d92db7-5106-4a58-83fb-0c74a16b72a2 ovn-installed in OVS
Oct 13 15:47:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:15Z|00524|binding|INFO|Setting lport e3d92db7-5106-4a58-83fb-0c74a16b72a2 up in Southbound
Oct 13 15:47:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:15.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:16.011 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Updating ProviderTree inventory for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 13 15:47:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:16.011 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Updating inventory in ProviderTree for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 13 15:47:16 standalone.localdomain systemd[1]: tmp-crun.SVu2Pf.mount: Deactivated successfully.
Oct 13 15:47:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:16.030 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Refreshing aggregate associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 13 15:47:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:47:16 standalone.localdomain podman[548927]: 2025-10-13 15:47:16.049619316 +0000 UTC m=+0.116554787 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:47:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:16.059 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Refreshing trait associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 13 15:47:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:16.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:16 standalone.localdomain podman[548927]: 2025-10-13 15:47:16.082393798 +0000 UTC m=+0.149329239 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:47:16 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:47:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:16.110 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:47:16 standalone.localdomain podman[548957]: 2025-10-13 15:47:16.189981215 +0000 UTC m=+0.136499870 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute)
Oct 13 15:47:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:47:16 standalone.localdomain podman[548957]: 2025-10-13 15:47:16.205713625 +0000 UTC m=+0.152232250 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 13 15:47:16 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:47:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:47:16 standalone.localdomain podman[548991]: 2025-10-13 15:47:16.278699112 +0000 UTC m=+0.078953604 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, release=1, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, build-date=2025-07-21T14:56:28, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, summary=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.expose-services=, version=17.1.9, vendor=Red Hat, Inc., io.buildah.version=1.33.12)
Oct 13 15:47:16 standalone.localdomain podman[549034]: 2025-10-13 15:47:16.34658848 +0000 UTC m=+0.065008299 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, com.redhat.component=openstack-swift-account-container, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, container_name=swift_account_server, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, tcib_managed=true, io.buildah.version=1.33.12, release=1, config_id=tripleo_step4, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-account, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 15:47:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:47:16 standalone.localdomain podman[548991]: 2025-10-13 15:47:16.500741528 +0000 UTC m=+0.300996040 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, batch=17.1_20250721.1, io.openshift.expose-services=, release=1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, container_name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git)
Oct 13 15:47:16 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:47:16 standalone.localdomain podman[549034]: 2025-10-13 15:47:16.523293052 +0000 UTC m=+0.241712871 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, build-date=2025-07-21T16:11:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, container_name=swift_account_server)
Oct 13 15:47:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:47:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3264876939' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:47:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:16.558 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:47:16 standalone.localdomain podman[549085]: 2025-10-13 15:47:16.559547813 +0000 UTC m=+0.076694134 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, release=1, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, distribution-scope=public, build-date=2025-07-21T15:54:32, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_container_server, name=rhosp17/openstack-swift-container)
Oct 13 15:47:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:16.566 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:47:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:16.582 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:47:16 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:47:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:16.586 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:47:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:16.586 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:47:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:16.710 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:16 standalone.localdomain podman[549085]: 2025-10-13 15:47:16.725969024 +0000 UTC m=+0.243115305 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.9, tcib_managed=true, name=rhosp17/openstack-swift-container, release=1, container_name=swift_container_server, maintainer=OpenStack TripleO Team, vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:47:16 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:47:16 standalone.localdomain podman[549144]: 
Oct 13 15:47:16 standalone.localdomain podman[549144]: 2025-10-13 15:47:16.763881837 +0000 UTC m=+0.084005162 container create 692c1edad5033a6a8fb853ade051ba243d58775900bf88beff14770ea2d6e126 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ac515510-2ee8-49c2-aaf8-85a363df7851, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:16 standalone.localdomain systemd[1]: Started libpod-conmon-692c1edad5033a6a8fb853ade051ba243d58775900bf88beff14770ea2d6e126.scope.
Oct 13 15:47:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e8cd2e2ce4a3d0c3430f8f266bb2cf26b4a8ca969c9b49af016ffd26eae36e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:16 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:16.810 2 INFO neutron.agent.securitygroups_rpc [None req-bd2b1b7e-36a1-4ba7-9b07-2f33953da45b 3d4d48f603104c4d919565e926e139c4 a966fd035a9e426eabc035f6c807bc5e - - default default] Security group member updated ['4142dd56-682e-4ee4-8478-a4934a3acf9a']
Oct 13 15:47:16 standalone.localdomain podman[549144]: 2025-10-13 15:47:16.725161639 +0000 UTC m=+0.045284974 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:16 standalone.localdomain ceph-mon[29756]: pgmap v3989: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:16 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3264876939' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:47:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:16.865 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:16 standalone.localdomain podman[549144]: 2025-10-13 15:47:16.86914189 +0000 UTC m=+0.189265205 container init 692c1edad5033a6a8fb853ade051ba243d58775900bf88beff14770ea2d6e126 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ac515510-2ee8-49c2-aaf8-85a363df7851, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:47:16 standalone.localdomain podman[549144]: 2025-10-13 15:47:16.87429271 +0000 UTC m=+0.194416035 container start 692c1edad5033a6a8fb853ade051ba243d58775900bf88beff14770ea2d6e126 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ac515510-2ee8-49c2-aaf8-85a363df7851, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:16 standalone.localdomain dnsmasq[549196]: started, version 2.85 cachesize 150
Oct 13 15:47:16 standalone.localdomain dnsmasq[549196]: DNS service limited to local subnets
Oct 13 15:47:16 standalone.localdomain dnsmasq[549196]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:16 standalone.localdomain dnsmasq[549196]: warning: no upstream servers configured
Oct 13 15:47:16 standalone.localdomain dnsmasq-dhcp[549196]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:16 standalone.localdomain dnsmasq[549196]: read /var/lib/neutron/dhcp/ac515510-2ee8-49c2-aaf8-85a363df7851/addn_hosts - 0 addresses
Oct 13 15:47:16 standalone.localdomain dnsmasq-dhcp[549196]: read /var/lib/neutron/dhcp/ac515510-2ee8-49c2-aaf8-85a363df7851/host
Oct 13 15:47:16 standalone.localdomain dnsmasq-dhcp[549196]: read /var/lib/neutron/dhcp/ac515510-2ee8-49c2-aaf8-85a363df7851/opts
Oct 13 15:47:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:16.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:16 standalone.localdomain dnsmasq[549196]: exiting on receipt of SIGTERM
Oct 13 15:47:16 standalone.localdomain systemd[1]: libpod-692c1edad5033a6a8fb853ade051ba243d58775900bf88beff14770ea2d6e126.scope: Deactivated successfully.
Oct 13 15:47:16 standalone.localdomain podman[549204]: 2025-10-13 15:47:16.981931559 +0000 UTC m=+0.067271840 container died 692c1edad5033a6a8fb853ade051ba243d58775900bf88beff14770ea2d6e126 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ac515510-2ee8-49c2-aaf8-85a363df7851, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:47:16 standalone.localdomain podman[549205]: 2025-10-13 15:47:16.982251808 +0000 UTC m=+0.063331437 container kill 21fcccae843228faed633b338a6a9caf776a1d63bc843cbb350173d34b9ff5b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-acfb9846-7661-4c9b-b512-a32963ac8b00, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:47:16 standalone.localdomain dnsmasq[548620]: read /var/lib/neutron/dhcp/acfb9846-7661-4c9b-b512-a32963ac8b00/addn_hosts - 0 addresses
Oct 13 15:47:16 standalone.localdomain dnsmasq-dhcp[548620]: read /var/lib/neutron/dhcp/acfb9846-7661-4c9b-b512-a32963ac8b00/host
Oct 13 15:47:16 standalone.localdomain dnsmasq-dhcp[548620]: read /var/lib/neutron/dhcp/acfb9846-7661-4c9b-b512-a32963ac8b00/opts
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent [None req-5b8ec66a-c156-470d-b8d6-ddf2a4ebdbd8 - - - - - -] Unable to reload_allocations dhcp for acfb9846-7661-4c9b-b512-a32963ac8b00.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapb995c9b5-aa not found in namespace qdhcp-acfb9846-7661-4c9b-b512-a32963ac8b00.
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     return fut.result()
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     raise self._exception
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapb995c9b5-aa not found in namespace qdhcp-acfb9846-7661-4c9b-b512-a32963ac8b00.
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.009 496978 ERROR neutron.agent.dhcp.agent 
Oct 13 15:47:17 standalone.localdomain podman[549204]: 2025-10-13 15:47:17.010009424 +0000 UTC m=+0.095349695 container cleanup 692c1edad5033a6a8fb853ade051ba243d58775900bf88beff14770ea2d6e126 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ac515510-2ee8-49c2-aaf8-85a363df7851, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.030 496978 INFO neutron.agent.dhcp.agent [None req-e1e6d6d2-d972-47e8-8fb3-5852a79da558 - - - - - -] DHCP configuration for ports {'2b773556-5513-4efa-9f4c-825490c751e4'} is completed
Oct 13 15:47:17 standalone.localdomain podman[549230]: 2025-10-13 15:47:17.036679906 +0000 UTC m=+0.056286177 container cleanup 692c1edad5033a6a8fb853ade051ba243d58775900bf88beff14770ea2d6e126 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ac515510-2ee8-49c2-aaf8-85a363df7851, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:47:17 standalone.localdomain systemd[1]: libpod-conmon-692c1edad5033a6a8fb853ade051ba243d58775900bf88beff14770ea2d6e126.scope: Deactivated successfully.
Oct 13 15:47:17 standalone.localdomain podman[549248]: 2025-10-13 15:47:17.106554616 +0000 UTC m=+0.078779599 container remove 692c1edad5033a6a8fb853ade051ba243d58775900bf88beff14770ea2d6e126 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ac515510-2ee8-49c2-aaf8-85a363df7851, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:47:17 standalone.localdomain podman[549266]: 
Oct 13 15:47:17 standalone.localdomain podman[549266]: 2025-10-13 15:47:17.219754736 +0000 UTC m=+0.094625412 container create dc355ab0b3d622044d7f5cea009b2e9761565606d19934135c7e42acb7368ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:17.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:47:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:17.254 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:47:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:17.254 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:47:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:17.255 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 13 15:47:17 standalone.localdomain systemd[1]: Started libpod-conmon-dc355ab0b3d622044d7f5cea009b2e9761565606d19934135c7e42acb7368ac0.scope.
Oct 13 15:47:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:17 standalone.localdomain podman[549266]: 2025-10-13 15:47:17.173848295 +0000 UTC m=+0.048718981 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5299352c86420b8ce383d06dc993a72205c329dd593f3ff4964aa680bb060e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:17 standalone.localdomain podman[549266]: 2025-10-13 15:47:17.284120024 +0000 UTC m=+0.158990680 container init dc355ab0b3d622044d7f5cea009b2e9761565606d19934135c7e42acb7368ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 15:47:17 standalone.localdomain podman[549266]: 2025-10-13 15:47:17.292353711 +0000 UTC m=+0.167224367 container start dc355ab0b3d622044d7f5cea009b2e9761565606d19934135c7e42acb7368ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:17 standalone.localdomain dnsmasq[549283]: started, version 2.85 cachesize 150
Oct 13 15:47:17 standalone.localdomain dnsmasq[549283]: DNS service limited to local subnets
Oct 13 15:47:17 standalone.localdomain dnsmasq[549283]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:17 standalone.localdomain dnsmasq[549283]: warning: no upstream servers configured
Oct 13 15:47:17 standalone.localdomain dnsmasq-dhcp[549283]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:17 standalone.localdomain dnsmasq[549283]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/addn_hosts - 0 addresses
Oct 13 15:47:17 standalone.localdomain dnsmasq-dhcp[549283]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/host
Oct 13 15:47:17 standalone.localdomain dnsmasq-dhcp[549283]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/opts
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.340 496978 INFO neutron.agent.dhcp.agent [None req-90eae700-9e64-4203-bda3-94f71638c87b - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.341 496978 INFO neutron.agent.dhcp.agent [None req-e975d3c0-798c-4f81-b775-4e5d271dfd6d - - - - - -] Synchronizing state
Oct 13 15:47:17 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:17.419 2 INFO neutron.agent.securitygroups_rpc [None req-96035f9c-c7b2-475e-a7b4-420b48754625 a82b5fa1fae74c179e19a0065df4811f aaa42b564883447080cd4183011edf7e - - default default] Security group member updated ['43fa1b64-8433-4588-a0be-9c15e8444ccc']
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.543 496978 INFO neutron.agent.dhcp.agent [None req-949ca5f1-0004-4248-adef-930b62abb9ec - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.545 496978 INFO neutron.agent.dhcp.agent [-] Starting network 5f993e6f-30f4-4230-9cfb-649f43d1ab22 dhcp configuration
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.546 496978 INFO neutron.agent.dhcp.agent [-] Finished network 5f993e6f-30f4-4230-9cfb-649f43d1ab22 dhcp configuration
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.546 496978 INFO neutron.agent.dhcp.agent [-] Starting network acfb9846-7661-4c9b-b512-a32963ac8b00 dhcp configuration
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.549 496978 INFO neutron.agent.dhcp.agent [-] Starting network e640877a-3ffa-40a9-9920-52ad8d244aaf dhcp configuration
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.550 496978 INFO neutron.agent.dhcp.agent [-] Finished network e640877a-3ffa-40a9-9920-52ad8d244aaf dhcp configuration
Oct 13 15:47:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:17.551 496978 INFO neutron.agent.dhcp.agent [None req-9daf7402-8998-44a3-8043-2ed2d23803af - - - - - -] DHCP configuration for ports {'e2f9cc6e-dfa2-4cc5-9e74-6ca425ef35a5', '06dc6f46-a50b-481f-a741-bfad8623c5d3'} is completed
Oct 13 15:47:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-a0e8cd2e2ce4a3d0c3430f8f266bb2cf26b4a8ca969c9b49af016ffd26eae36e-merged.mount: Deactivated successfully.
Oct 13 15:47:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-692c1edad5033a6a8fb853ade051ba243d58775900bf88beff14770ea2d6e126-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:47:17 standalone.localdomain dnsmasq[548620]: exiting on receipt of SIGTERM
Oct 13 15:47:17 standalone.localdomain podman[549301]: 2025-10-13 15:47:17.73278897 +0000 UTC m=+0.056611668 container kill 21fcccae843228faed633b338a6a9caf776a1d63bc843cbb350173d34b9ff5b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-acfb9846-7661-4c9b-b512-a32963ac8b00, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:47:17 standalone.localdomain systemd[1]: libpod-21fcccae843228faed633b338a6a9caf776a1d63bc843cbb350173d34b9ff5b9.scope: Deactivated successfully.
Oct 13 15:47:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3990: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:17 standalone.localdomain podman[549315]: 2025-10-13 15:47:17.80814821 +0000 UTC m=+0.059185927 container died 21fcccae843228faed633b338a6a9caf776a1d63bc843cbb350173d34b9ff5b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-acfb9846-7661-4c9b-b512-a32963ac8b00, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:17 standalone.localdomain systemd[1]: tmp-crun.SB4hqk.mount: Deactivated successfully.
Oct 13 15:47:17 standalone.localdomain podman[549315]: 2025-10-13 15:47:17.843533244 +0000 UTC m=+0.094570951 container cleanup 21fcccae843228faed633b338a6a9caf776a1d63bc843cbb350173d34b9ff5b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-acfb9846-7661-4c9b-b512-a32963ac8b00, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:47:17 standalone.localdomain systemd[1]: libpod-conmon-21fcccae843228faed633b338a6a9caf776a1d63bc843cbb350173d34b9ff5b9.scope: Deactivated successfully.
Oct 13 15:47:17 standalone.localdomain podman[549317]: 2025-10-13 15:47:17.900264744 +0000 UTC m=+0.142885168 container remove 21fcccae843228faed633b338a6a9caf776a1d63bc843cbb350173d34b9ff5b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-acfb9846-7661-4c9b-b512-a32963ac8b00, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:47:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:18.038 496978 INFO neutron.agent.dhcp.agent [None req-ef537e79-492d-459c-817b-a5242393f4b3 - - - - - -] Finished network acfb9846-7661-4c9b-b512-a32963ac8b00 dhcp configuration
Oct 13 15:47:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:18.039 496978 INFO neutron.agent.dhcp.agent [None req-949ca5f1-0004-4248-adef-930b62abb9ec - - - - - -] Synchronizing state complete
Oct 13 15:47:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:18.040 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:16Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889135910>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4eb80>], id=33d18682-ae5d-41c3-a09c-2ddc359b8c77, ip_allocation=immediate, mac_address=fa:16:3e:1c:62:7f, name=tempest-RoutersIpV6Test-1963956478, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:47:11Z, description=, dns_domain=, id=ac515510-2ee8-49c2-aaf8-85a363df7851, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-378241187, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=63086, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1982, status=ACTIVE, subnets=['d8c06ef7-6e8f-413a-844c-14499819a9a2'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:47:14Z, vlan_transparent=None, network_id=ac515510-2ee8-49c2-aaf8-85a363df7851, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['43fa1b64-8433-4588-a0be-9c15e8444ccc'], standard_attr_id=2010, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:47:16Z on network ac515510-2ee8-49c2-aaf8-85a363df7851
Oct 13 15:47:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:18.063 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:18 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:18.143 2 INFO neutron.agent.securitygroups_rpc [None req-cd6fb207-4c31-458a-9b4c-e9f386864389 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['e3c7f324-2845-409e-af7f-0e24c3fc0056']
Oct 13 15:47:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:18.146 496978 INFO neutron.agent.dhcp.agent [None req-eaf5fa06-dddb-4db1-a2f4-bbd65d13d499 - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:47:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:18.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:18 standalone.localdomain dnsmasq[548459]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:18 standalone.localdomain dnsmasq-dhcp[548459]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:18 standalone.localdomain dnsmasq-dhcp[548459]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:18.259 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:17Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892b8b50>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ee4910>], id=ad1bf49c-f6d5-4492-b419-e179adc327b9, ip_allocation=immediate, mac_address=fa:16:3e:e8:d2:c0, name=tempest-PortsIpV6TestJSON-570751602, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:12Z, description=, dns_domain=, id=a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-test-network-196486360, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44606, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1632, status=ACTIVE, subnets=['13222482-ddf6-4a14-b89c-71c8f220b9de'], tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:47:14Z, vlan_transparent=None, network_id=a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['e3c7f324-2845-409e-af7f-0e24c3fc0056'], standard_attr_id=2013, status=DOWN, tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:47:17Z on network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e
Oct 13 15:47:18 standalone.localdomain podman[549368]: 2025-10-13 15:47:18.26755429 +0000 UTC m=+0.075904758 container kill e0c127a07607da01c5abb0eb8f981e6c88bbe61de372f8c259f8fb0384b2246a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:47:18 standalone.localdomain kernel: device tap262a0a44-38 left promiscuous mode
Oct 13 15:47:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:18.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:18 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:18Z|00525|binding|INFO|Releasing lport 262a0a44-3866-4453-bbd1-0992cb77c0db from this chassis (sb_readonly=0)
Oct 13 15:47:18 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:18Z|00526|binding|INFO|Setting lport 262a0a44-3866-4453-bbd1-0992cb77c0db down in Southbound
Oct 13 15:47:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:18.489 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=262a0a44-3866-4453-bbd1-0992cb77c0db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:18.490 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 262a0a44-3866-4453-bbd1-0992cb77c0db in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:47:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:18.491 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:18.492 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[65dc444b-f193-4fd0-b3c0-25cc4f70184b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:18.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:18 standalone.localdomain dnsmasq[549283]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/addn_hosts - 1 addresses
Oct 13 15:47:18 standalone.localdomain dnsmasq-dhcp[549283]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/host
Oct 13 15:47:18 standalone.localdomain podman[549421]: 2025-10-13 15:47:18.525204437 +0000 UTC m=+0.120820249 container kill dc355ab0b3d622044d7f5cea009b2e9761565606d19934135c7e42acb7368ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:47:18 standalone.localdomain dnsmasq-dhcp[549283]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/opts
Oct 13 15:47:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-cb7eae96dbea3ff90fa9a89472ae18d0ad73948e79957470011f7f5db810fffc-merged.mount: Deactivated successfully.
Oct 13 15:47:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21fcccae843228faed633b338a6a9caf776a1d63bc843cbb350173d34b9ff5b9-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:18 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dacfb9846\x2d7661\x2d4c9b\x2db512\x2da32963ac8b00.mount: Deactivated successfully.
Oct 13 15:47:18 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:18Z|00527|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:18 standalone.localdomain podman[549443]: 2025-10-13 15:47:18.616467475 +0000 UTC m=+0.093931641 container create c02b926510deb953ea33e59ab7c891af470f4add86f688c1fb1483ef6fcb5d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ac515510-2ee8-49c2-aaf8-85a363df7851, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:47:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3837908600' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:47:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:47:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3837908600' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:47:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:18.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:18 standalone.localdomain systemd[1]: Started libpod-conmon-c02b926510deb953ea33e59ab7c891af470f4add86f688c1fb1483ef6fcb5d6a.scope.
Oct 13 15:47:18 standalone.localdomain podman[549443]: 2025-10-13 15:47:18.56438718 +0000 UTC m=+0.041851346 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4020596c37758d9cf8d119b0c05d73781ab185b43eae5425d6cb857317b6fbeb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:18 standalone.localdomain ceph-mon[29756]: pgmap v3990: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3837908600' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:47:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3837908600' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:47:18 standalone.localdomain podman[549443]: 2025-10-13 15:47:18.689818742 +0000 UTC m=+0.167282898 container init c02b926510deb953ea33e59ab7c891af470f4add86f688c1fb1483ef6fcb5d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ac515510-2ee8-49c2-aaf8-85a363df7851, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:47:18 standalone.localdomain podman[549443]: 2025-10-13 15:47:18.701116065 +0000 UTC m=+0.178580231 container start c02b926510deb953ea33e59ab7c891af470f4add86f688c1fb1483ef6fcb5d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ac515510-2ee8-49c2-aaf8-85a363df7851, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:18 standalone.localdomain dnsmasq[549472]: started, version 2.85 cachesize 150
Oct 13 15:47:18 standalone.localdomain dnsmasq[549472]: DNS service limited to local subnets
Oct 13 15:47:18 standalone.localdomain dnsmasq[549472]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:18 standalone.localdomain dnsmasq[549472]: warning: no upstream servers configured
Oct 13 15:47:18 standalone.localdomain dnsmasq-dhcp[549472]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:18 standalone.localdomain dnsmasq[549472]: read /var/lib/neutron/dhcp/ac515510-2ee8-49c2-aaf8-85a363df7851/addn_hosts - 1 addresses
Oct 13 15:47:18 standalone.localdomain dnsmasq-dhcp[549472]: read /var/lib/neutron/dhcp/ac515510-2ee8-49c2-aaf8-85a363df7851/host
Oct 13 15:47:18 standalone.localdomain dnsmasq-dhcp[549472]: read /var/lib/neutron/dhcp/ac515510-2ee8-49c2-aaf8-85a363df7851/opts
Oct 13 15:47:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:18.868 496978 INFO neutron.agent.dhcp.agent [None req-f731e5a5-d543-4a84-9d0b-75d4f473057e - - - - - -] DHCP configuration for ports {'ad1bf49c-f6d5-4492-b419-e179adc327b9'} is completed
Oct 13 15:47:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:19.035 496978 INFO neutron.agent.dhcp.agent [None req-5472a040-48a4-4423-8470-370f6149f7a1 - - - - - -] DHCP configuration for ports {'33d18682-ae5d-41c3-a09c-2ddc359b8c77'} is completed
Oct 13 15:47:19 standalone.localdomain dnsmasq[544098]: exiting on receipt of SIGTERM
Oct 13 15:47:19 standalone.localdomain podman[549499]: 2025-10-13 15:47:19.061017092 +0000 UTC m=+0.062187911 container kill e28f59f926e09c88c75850b0c4324f318a1e0f448fe36ec51a70b108761aab62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e83d980f-4895-497f-8345-b0812704fdfa, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:19 standalone.localdomain systemd[1]: libpod-e28f59f926e09c88c75850b0c4324f318a1e0f448fe36ec51a70b108761aab62.scope: Deactivated successfully.
Oct 13 15:47:19 standalone.localdomain podman[549513]: 2025-10-13 15:47:19.13312412 +0000 UTC m=+0.054484150 container died e28f59f926e09c88c75850b0c4324f318a1e0f448fe36ec51a70b108761aab62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e83d980f-4895-497f-8345-b0812704fdfa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:47:19 standalone.localdomain podman[549513]: 2025-10-13 15:47:19.158323456 +0000 UTC m=+0.079683466 container cleanup e28f59f926e09c88c75850b0c4324f318a1e0f448fe36ec51a70b108761aab62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e83d980f-4895-497f-8345-b0812704fdfa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:47:19 standalone.localdomain systemd[1]: libpod-conmon-e28f59f926e09c88c75850b0c4324f318a1e0f448fe36ec51a70b108761aab62.scope: Deactivated successfully.
Oct 13 15:47:19 standalone.localdomain podman[549515]: 2025-10-13 15:47:19.219585158 +0000 UTC m=+0.132799853 container remove e28f59f926e09c88c75850b0c4324f318a1e0f448fe36ec51a70b108761aab62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e83d980f-4895-497f-8345-b0812704fdfa, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:47:19 standalone.localdomain kernel: device tap80012df1-7a left promiscuous mode
Oct 13 15:47:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:19Z|00528|binding|INFO|Releasing lport 80012df1-7aa7-444b-9e20-828cf59722ed from this chassis (sb_readonly=0)
Oct 13 15:47:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:19Z|00529|binding|INFO|Setting lport 80012df1-7aa7-444b-9e20-828cf59722ed down in Southbound
Oct 13 15:47:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:19.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:19.279 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:ffff::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-e83d980f-4895-497f-8345-b0812704fdfa', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e83d980f-4895-497f-8345-b0812704fdfa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a966fd035a9e426eabc035f6c807bc5e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fad20e64-0dc5-4330-9ec8-00069b6431ee, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=80012df1-7aa7-444b-9e20-828cf59722ed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:19.281 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 80012df1-7aa7-444b-9e20-828cf59722ed in datapath e83d980f-4895-497f-8345-b0812704fdfa unbound from our chassis
Oct 13 15:47:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:19.284 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e83d980f-4895-497f-8345-b0812704fdfa or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:19.286 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[cad1c8c5-32bc-40e5-9ef3-2dd9384c4937]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:19.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:19.536 496978 INFO neutron.agent.dhcp.agent [None req-641acfb2-6366-48b1-8cfc-07d3d8b9756d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:19 standalone.localdomain dnsmasq[548459]: exiting on receipt of SIGTERM
Oct 13 15:47:19 standalone.localdomain podman[549560]: 2025-10-13 15:47:19.540091636 +0000 UTC m=+0.062035526 container kill e0c127a07607da01c5abb0eb8f981e6c88bbe61de372f8c259f8fb0384b2246a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:19 standalone.localdomain systemd[1]: libpod-e0c127a07607da01c5abb0eb8f981e6c88bbe61de372f8c259f8fb0384b2246a.scope: Deactivated successfully.
Oct 13 15:47:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-64c0c11bad23f68c59e1aeff9e769cee1f6d6ecf05d1f0840e43f2250acaadfa-merged.mount: Deactivated successfully.
Oct 13 15:47:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e28f59f926e09c88c75850b0c4324f318a1e0f448fe36ec51a70b108761aab62-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:19 standalone.localdomain systemd[1]: run-netns-qdhcp\x2de83d980f\x2d4895\x2d497f\x2d8345\x2db0812704fdfa.mount: Deactivated successfully.
Oct 13 15:47:19 standalone.localdomain podman[549572]: 2025-10-13 15:47:19.60853063 +0000 UTC m=+0.054310465 container died e0c127a07607da01c5abb0eb8f981e6c88bbe61de372f8c259f8fb0384b2246a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0c127a07607da01c5abb0eb8f981e6c88bbe61de372f8c259f8fb0384b2246a-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:19.635 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:19 standalone.localdomain podman[549572]: 2025-10-13 15:47:19.640914521 +0000 UTC m=+0.086694326 container cleanup e0c127a07607da01c5abb0eb8f981e6c88bbe61de372f8c259f8fb0384b2246a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:19 standalone.localdomain systemd[1]: libpod-conmon-e0c127a07607da01c5abb0eb8f981e6c88bbe61de372f8c259f8fb0384b2246a.scope: Deactivated successfully.
Oct 13 15:47:19 standalone.localdomain podman[549579]: 2025-10-13 15:47:19.674831239 +0000 UTC m=+0.103609883 container remove e0c127a07607da01c5abb0eb8f981e6c88bbe61de372f8c259f8fb0384b2246a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:47:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3991: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:20.120 496978 INFO neutron.agent.dhcp.agent [None req-592df032-1cbc-49fd-a91e-c8b5bde17cb3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:20.406 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:47:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4719bbed8b08463ab5ab56ff8d3694270c9cd0cf33f32f34723bc97953bec1bf-merged.mount: Deactivated successfully.
Oct 13 15:47:20 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:47:20 standalone.localdomain podman[549610]: 2025-10-13 15:47:20.570870479 +0000 UTC m=+0.086247092 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:47:20 standalone.localdomain podman[549610]: 2025-10-13 15:47:20.581895343 +0000 UTC m=+0.097271916 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:47:20 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:47:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:20.601 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:16Z, description=, device_id=1a840656-e0f5-49ec-8743-a96b71ff70db, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188916c2e0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e7b9d0>], id=33d18682-ae5d-41c3-a09c-2ddc359b8c77, ip_allocation=immediate, mac_address=fa:16:3e:1c:62:7f, name=tempest-RoutersIpV6Test-1963956478, network_id=ac515510-2ee8-49c2-aaf8-85a363df7851, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['43fa1b64-8433-4588-a0be-9c15e8444ccc'], standard_attr_id=2010, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:47:18Z on network ac515510-2ee8-49c2-aaf8-85a363df7851
Oct 13 15:47:20 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:20.610 2 INFO neutron.agent.securitygroups_rpc [None req-8accb366-5134-48fc-906a-23c272d442f6 9a91485a24644e3db238314d1ba0dfae 82cdce1bd6fa492b89e940508379ac1a - - default default] Security group member updated ['26a05b85-6995-492c-91b8-d91900bd8a5f']
Oct 13 15:47:20 standalone.localdomain dnsmasq[549283]: exiting on receipt of SIGTERM
Oct 13 15:47:20 standalone.localdomain podman[549649]: 2025-10-13 15:47:20.755709705 +0000 UTC m=+0.076093785 container kill dc355ab0b3d622044d7f5cea009b2e9761565606d19934135c7e42acb7368ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:47:20 standalone.localdomain systemd[1]: tmp-crun.S8x0NE.mount: Deactivated successfully.
Oct 13 15:47:20 standalone.localdomain systemd[1]: libpod-dc355ab0b3d622044d7f5cea009b2e9761565606d19934135c7e42acb7368ac0.scope: Deactivated successfully.
Oct 13 15:47:20 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:20Z|00530|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:20 standalone.localdomain podman[549678]: 2025-10-13 15:47:20.809053029 +0000 UTC m=+0.043174799 container died dc355ab0b3d622044d7f5cea009b2e9761565606d19934135c7e42acb7368ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:20.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:20 standalone.localdomain dnsmasq[549472]: read /var/lib/neutron/dhcp/ac515510-2ee8-49c2-aaf8-85a363df7851/addn_hosts - 1 addresses
Oct 13 15:47:20 standalone.localdomain dnsmasq-dhcp[549472]: read /var/lib/neutron/dhcp/ac515510-2ee8-49c2-aaf8-85a363df7851/host
Oct 13 15:47:20 standalone.localdomain dnsmasq-dhcp[549472]: read /var/lib/neutron/dhcp/ac515510-2ee8-49c2-aaf8-85a363df7851/opts
Oct 13 15:47:20 standalone.localdomain ceph-mon[29756]: pgmap v3991: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:20 standalone.localdomain podman[549664]: 2025-10-13 15:47:20.825593115 +0000 UTC m=+0.094866971 container kill c02b926510deb953ea33e59ab7c891af470f4add86f688c1fb1483ef6fcb5d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ac515510-2ee8-49c2-aaf8-85a363df7851, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:47:20 standalone.localdomain podman[549678]: 2025-10-13 15:47:20.841528461 +0000 UTC m=+0.075650211 container cleanup dc355ab0b3d622044d7f5cea009b2e9761565606d19934135c7e42acb7368ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:47:20 standalone.localdomain systemd[1]: libpod-conmon-dc355ab0b3d622044d7f5cea009b2e9761565606d19934135c7e42acb7368ac0.scope: Deactivated successfully.
Oct 13 15:47:20 standalone.localdomain podman[549686]: 2025-10-13 15:47:20.877825833 +0000 UTC m=+0.084231878 container remove dc355ab0b3d622044d7f5cea009b2e9761565606d19934135c7e42acb7368ac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:21.183 496978 INFO neutron.agent.dhcp.agent [None req-98ac21c3-301b-453b-bda3-bd49c80806ce - - - - - -] DHCP configuration for ports {'33d18682-ae5d-41c3-a09c-2ddc359b8c77'} is completed
Oct 13 15:47:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:21.383 496978 INFO neutron.agent.linux.ip_lib [None req-66b399ed-8798-4e80-bdb8-5c583ea4ef17 - - - - - -] Device tap217f6bba-9c cannot be used as it has no MAC address
Oct 13 15:47:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:21.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:21 standalone.localdomain kernel: device tap217f6bba-9c entered promiscuous mode
Oct 13 15:47:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:21Z|00531|binding|INFO|Claiming lport 217f6bba-9c27-4a0c-afea-2b78e62ee1c7 for this chassis.
Oct 13 15:47:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:21Z|00532|binding|INFO|217f6bba-9c27-4a0c-afea-2b78e62ee1c7: Claiming unknown
Oct 13 15:47:21 standalone.localdomain NetworkManager[5962]: <info>  [1760370441.4277] manager: (tap217f6bba-9c): new Generic device (/org/freedesktop/NetworkManager/Devices/96)
Oct 13 15:47:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:21.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:21 standalone.localdomain systemd-udevd[549726]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:47:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:21.434 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-265b8599-466a-474a-bf1f-214d900622c4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-265b8599-466a-474a-bf1f-214d900622c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e97900d-e928-4b98-beb6-cfd18974b360, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=217f6bba-9c27-4a0c-afea-2b78e62ee1c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:21.436 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 217f6bba-9c27-4a0c-afea-2b78e62ee1c7 in datapath 265b8599-466a-474a-bf1f-214d900622c4 bound to our chassis
Oct 13 15:47:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:21.438 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 265b8599-466a-474a-bf1f-214d900622c4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:21 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:21.441 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[0b401dd1-71e2-4439-827e-1b6216ad68bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap217f6bba-9c: No such device
Oct 13 15:47:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:21Z|00533|binding|INFO|Setting lport 217f6bba-9c27-4a0c-afea-2b78e62ee1c7 ovn-installed in OVS
Oct 13 15:47:21 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:21Z|00534|binding|INFO|Setting lport 217f6bba-9c27-4a0c-afea-2b78e62ee1c7 up in Southbound
Oct 13 15:47:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:21.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap217f6bba-9c: No such device
Oct 13 15:47:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap217f6bba-9c: No such device
Oct 13 15:47:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap217f6bba-9c: No such device
Oct 13 15:47:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap217f6bba-9c: No such device
Oct 13 15:47:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap217f6bba-9c: No such device
Oct 13 15:47:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap217f6bba-9c: No such device
Oct 13 15:47:21 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap217f6bba-9c: No such device
Oct 13 15:47:21 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:21.531 2 INFO neutron.agent.securitygroups_rpc [None req-6b1fad6d-8c8e-44a3-b2ea-29e669193cd1 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['e3c7f324-2845-409e-af7f-0e24c3fc0056', '7637afb4-2d99-46f0-bb9e-4de8293673f4']
Oct 13 15:47:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:21.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e5299352c86420b8ce383d06dc993a72205c329dd593f3ff4964aa680bb060e0-merged.mount: Deactivated successfully.
Oct 13 15:47:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc355ab0b3d622044d7f5cea009b2e9761565606d19934135c7e42acb7368ac0-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:21 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:21.635 2 INFO neutron.agent.securitygroups_rpc [None req-99c61575-db6d-4edd-b010-ea13b3d4cacf 9a91485a24644e3db238314d1ba0dfae 82cdce1bd6fa492b89e940508379ac1a - - default default] Security group member updated ['26a05b85-6995-492c-91b8-d91900bd8a5f']
Oct 13 15:47:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:21.667 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3992: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:22.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:22 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:22.417 496978 INFO neutron.agent.linux.ip_lib [None req-0fa0597c-e137-4e16-a61f-1107eb5af3eb - - - - - -] Device tapcfba9bd9-b8 cannot be used as it has no MAC address
Oct 13 15:47:22 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:22.449 496978 INFO neutron.agent.linux.ip_lib [None req-8ef156c9-101f-403c-9eef-dbe37cfacd28 - - - - - -] Device tap976d5a40-86 cannot be used as it has no MAC address
Oct 13 15:47:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:22.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:22 standalone.localdomain kernel: device tapcfba9bd9-b8 entered promiscuous mode
Oct 13 15:47:22 standalone.localdomain NetworkManager[5962]: <info>  [1760370442.4850] manager: (tapcfba9bd9-b8): new Generic device (/org/freedesktop/NetworkManager/Devices/97)
Oct 13 15:47:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:22Z|00535|binding|INFO|Claiming lport cfba9bd9-b8ad-4c91-98b2-ecf52a866833 for this chassis.
Oct 13 15:47:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:22Z|00536|binding|INFO|cfba9bd9-b8ad-4c91-98b2-ecf52a866833: Claiming unknown
Oct 13 15:47:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:22.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:22 standalone.localdomain systemd-udevd[549728]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:47:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:22.496 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=cfba9bd9-b8ad-4c91-98b2-ecf52a866833) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:22.498 378821 INFO neutron.agent.ovn.metadata.agent [-] Port cfba9bd9-b8ad-4c91-98b2-ecf52a866833 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:47:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:22.499 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:22.500 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[39ca571f-3719-4c2f-9609-dde0fae78f54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:22.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:22Z|00537|binding|INFO|Setting lport cfba9bd9-b8ad-4c91-98b2-ecf52a866833 ovn-installed in OVS
Oct 13 15:47:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:22Z|00538|binding|INFO|Setting lport cfba9bd9-b8ad-4c91-98b2-ecf52a866833 up in Southbound
Oct 13 15:47:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:22.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:22 standalone.localdomain kernel: device tap976d5a40-86 entered promiscuous mode
Oct 13 15:47:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:22Z|00539|binding|INFO|Claiming lport 976d5a40-8629-40ce-98a9-4cd16a58e34d for this chassis.
Oct 13 15:47:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:22.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:22 standalone.localdomain NetworkManager[5962]: <info>  [1760370442.5294] manager: (tap976d5a40-86): new Generic device (/org/freedesktop/NetworkManager/Devices/98)
Oct 13 15:47:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:22Z|00540|binding|INFO|976d5a40-8629-40ce-98a9-4cd16a58e34d: Claiming unknown
Oct 13 15:47:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:22.542 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-bbc77373-2781-4d9c-81a6-fd69e1b2c2ea', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbc77373-2781-4d9c-81a6-fd69e1b2c2ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20a0102a-0a81-411d-8a51-114dbdfbde1b, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=976d5a40-8629-40ce-98a9-4cd16a58e34d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:22.543 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 976d5a40-8629-40ce-98a9-4cd16a58e34d in datapath bbc77373-2781-4d9c-81a6-fd69e1b2c2ea bound to our chassis
Oct 13 15:47:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:22.545 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bbc77373-2781-4d9c-81a6-fd69e1b2c2ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:22.546 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[3859b144-6a9b-4bdb-ab4b-f482a2d9179f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:22Z|00541|binding|INFO|Setting lport 976d5a40-8629-40ce-98a9-4cd16a58e34d ovn-installed in OVS
Oct 13 15:47:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:22Z|00542|binding|INFO|Setting lport 976d5a40-8629-40ce-98a9-4cd16a58e34d up in Southbound
Oct 13 15:47:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:22.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:22.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:22.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:22.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:22.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:47:22 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:22.710 2 INFO neutron.agent.securitygroups_rpc [None req-0706a0a0-f309-4c66-af7a-f30ca7caa58c 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['7637afb4-2d99-46f0-bb9e-4de8293673f4']
Oct 13 15:47:22 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:22.716 2 INFO neutron.agent.securitygroups_rpc [None req-95db35f7-7ad5-4e0d-b1ff-c4d569de63f0 a82b5fa1fae74c179e19a0065df4811f aaa42b564883447080cd4183011edf7e - - default default] Security group member updated ['43fa1b64-8433-4588-a0be-9c15e8444ccc']
Oct 13 15:47:22 standalone.localdomain podman[549860]: 2025-10-13 15:47:22.757025092 +0000 UTC m=+0.108999951 container create 55fb45d69123a27f76dd28da3d2871d25b823df30548be601f72877565b7c436 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-265b8599-466a-474a-bf1f-214d900622c4, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:22 standalone.localdomain systemd[1]: Started libpod-conmon-55fb45d69123a27f76dd28da3d2871d25b823df30548be601f72877565b7c436.scope.
Oct 13 15:47:22 standalone.localdomain podman[549860]: 2025-10-13 15:47:22.707451555 +0000 UTC m=+0.059426404 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:22 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e282d97a99871c626acf87fac9888f3e7fe911a2642833eb154fe1afa9eb872/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:22 standalone.localdomain ceph-mon[29756]: pgmap v3992: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:22 standalone.localdomain podman[549860]: 2025-10-13 15:47:22.843799849 +0000 UTC m=+0.195774698 container init 55fb45d69123a27f76dd28da3d2871d25b823df30548be601f72877565b7c436 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-265b8599-466a-474a-bf1f-214d900622c4, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 13 15:47:22 standalone.localdomain podman[549860]: 2025-10-13 15:47:22.853695888 +0000 UTC m=+0.205670747 container start 55fb45d69123a27f76dd28da3d2871d25b823df30548be601f72877565b7c436 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-265b8599-466a-474a-bf1f-214d900622c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:47:22 standalone.localdomain dnsmasq[549910]: started, version 2.85 cachesize 150
Oct 13 15:47:22 standalone.localdomain dnsmasq[549910]: DNS service limited to local subnets
Oct 13 15:47:22 standalone.localdomain dnsmasq[549910]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:22 standalone.localdomain dnsmasq[549910]: warning: no upstream servers configured
Oct 13 15:47:22 standalone.localdomain dnsmasq-dhcp[549910]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:47:22 standalone.localdomain dnsmasq[549910]: read /var/lib/neutron/dhcp/265b8599-466a-474a-bf1f-214d900622c4/addn_hosts - 0 addresses
Oct 13 15:47:22 standalone.localdomain dnsmasq-dhcp[549910]: read /var/lib/neutron/dhcp/265b8599-466a-474a-bf1f-214d900622c4/host
Oct 13 15:47:22 standalone.localdomain dnsmasq-dhcp[549910]: read /var/lib/neutron/dhcp/265b8599-466a-474a-bf1f-214d900622c4/opts
Oct 13 15:47:23 standalone.localdomain podman[549935]: 2025-10-13 15:47:23.029165801 +0000 UTC m=+0.070611554 container create ce7fa8da1cf0f251624e8b78807687a82f7929fa657daaef185148ea690b91e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 13 15:47:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:23.051 496978 INFO neutron.agent.dhcp.agent [None req-7aaa40ce-7500-44cb-a032-b4185d0e1999 - - - - - -] DHCP configuration for ports {'2cd6b058-2091-4af7-b547-02cb6bb7749a'} is completed
Oct 13 15:47:23 standalone.localdomain systemd[1]: Started libpod-conmon-ce7fa8da1cf0f251624e8b78807687a82f7929fa657daaef185148ea690b91e1.scope.
Oct 13 15:47:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6fa242e82ffc20a1b0c7bed487f960b5f016f0e353551c8a4e7816d3f5b5bc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:23 standalone.localdomain dnsmasq[549472]: read /var/lib/neutron/dhcp/ac515510-2ee8-49c2-aaf8-85a363df7851/addn_hosts - 0 addresses
Oct 13 15:47:23 standalone.localdomain dnsmasq-dhcp[549472]: read /var/lib/neutron/dhcp/ac515510-2ee8-49c2-aaf8-85a363df7851/host
Oct 13 15:47:23 standalone.localdomain podman[549949]: 2025-10-13 15:47:23.078861971 +0000 UTC m=+0.075334941 container kill c02b926510deb953ea33e59ab7c891af470f4add86f688c1fb1483ef6fcb5d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ac515510-2ee8-49c2-aaf8-85a363df7851, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:23 standalone.localdomain dnsmasq-dhcp[549472]: read /var/lib/neutron/dhcp/ac515510-2ee8-49c2-aaf8-85a363df7851/opts
Oct 13 15:47:23 standalone.localdomain podman[549935]: 2025-10-13 15:47:22.997291477 +0000 UTC m=+0.038737230 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:23 standalone.localdomain podman[549935]: 2025-10-13 15:47:23.126618061 +0000 UTC m=+0.168063824 container init ce7fa8da1cf0f251624e8b78807687a82f7929fa657daaef185148ea690b91e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:23 standalone.localdomain podman[549935]: 2025-10-13 15:47:23.13237165 +0000 UTC m=+0.173817413 container start ce7fa8da1cf0f251624e8b78807687a82f7929fa657daaef185148ea690b91e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:23 standalone.localdomain dnsmasq[549982]: started, version 2.85 cachesize 150
Oct 13 15:47:23 standalone.localdomain dnsmasq[549982]: DNS service limited to local subnets
Oct 13 15:47:23 standalone.localdomain dnsmasq[549982]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:23 standalone.localdomain dnsmasq[549982]: warning: no upstream servers configured
Oct 13 15:47:23 standalone.localdomain dnsmasq-dhcp[549982]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Oct 13 15:47:23 standalone.localdomain dnsmasq-dhcp[549982]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:23 standalone.localdomain dnsmasq[549982]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/addn_hosts - 1 addresses
Oct 13 15:47:23 standalone.localdomain dnsmasq-dhcp[549982]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/host
Oct 13 15:47:23 standalone.localdomain dnsmasq-dhcp[549982]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/opts
Oct 13 15:47:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:23.193 496978 INFO neutron.agent.dhcp.agent [None req-7c5766f3-33d8-49c9-9d7a-7eb31bc90030 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:17Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889004190>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889004e50>], id=ad1bf49c-f6d5-4492-b419-e179adc327b9, ip_allocation=immediate, mac_address=fa:16:3e:e8:d2:c0, name=tempest-PortsIpV6TestJSON-2004025040, network_id=a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['7637afb4-2d99-46f0-bb9e-4de8293673f4'], standard_attr_id=2013, status=DOWN, tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:47:21Z on network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e
Oct 13 15:47:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:23.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:47:23
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['volumes', 'manila_metadata', 'vms', 'manila_data', 'backups', '.mgr', 'images']
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:47:23 standalone.localdomain dnsmasq[549982]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/addn_hosts - 1 addresses
Oct 13 15:47:23 standalone.localdomain dnsmasq-dhcp[549982]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/host
Oct 13 15:47:23 standalone.localdomain dnsmasq-dhcp[549982]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/opts
Oct 13 15:47:23 standalone.localdomain podman[550018]: 2025-10-13 15:47:23.422924974 +0000 UTC m=+0.045099848 container kill ce7fa8da1cf0f251624e8b78807687a82f7929fa657daaef185148ea690b91e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:23Z|00543|binding|INFO|Releasing lport c25941d0-858b-4710-91a2-50a6efc58ca5 from this chassis (sb_readonly=0)
Oct 13 15:47:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:23Z|00544|binding|INFO|Setting lport c25941d0-858b-4710-91a2-50a6efc58ca5 down in Southbound
Oct 13 15:47:23 standalone.localdomain kernel: device tapc25941d0-85 left promiscuous mode
Oct 13 15:47:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:23.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:23.439 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-ac515510-2ee8-49c2-aaf8-85a363df7851', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac515510-2ee8-49c2-aaf8-85a363df7851', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b39821bd-fc24-4f53-8b85-30b939379a0d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=c25941d0-858b-4710-91a2-50a6efc58ca5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:23.441 378821 INFO neutron.agent.ovn.metadata.agent [-] Port c25941d0-858b-4710-91a2-50a6efc58ca5 in datapath ac515510-2ee8-49c2-aaf8-85a363df7851 unbound from our chassis
Oct 13 15:47:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:23.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:23.444 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ac515510-2ee8-49c2-aaf8-85a363df7851 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:23.445 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[47566bc0-0676-4f63-a4a6-d91b49eb5e89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:23.449 496978 INFO neutron.agent.dhcp.agent [None req-82f50ce8-cf45-4a30-bef8-b0a6a8eba0c9 - - - - - -] DHCP configuration for ports {'ad1bf49c-f6d5-4492-b419-e179adc327b9', '06dc6f46-a50b-481f-a741-bfad8623c5d3', 'e2f9cc6e-dfa2-4cc5-9e74-6ca425ef35a5', 'e3d92db7-5106-4a58-83fb-0c74a16b72a2'} is completed
Oct 13 15:47:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61255 DF PROTO=TCP SPT=34406 DPT=9102 SEQ=3101837516 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD222BF0000000001030307) 
Oct 13 15:47:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3993: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:23.799 496978 INFO neutron.agent.dhcp.agent [None req-69def963-f99f-45ce-a3f3-2b159b87d81f - - - - - -] DHCP configuration for ports {'ad1bf49c-f6d5-4492-b419-e179adc327b9'} is completed
Oct 13 15:47:23 standalone.localdomain systemd[1]: tmp-crun.Goggsy.mount: Deactivated successfully.
Oct 13 15:47:23 standalone.localdomain podman[550110]: 2025-10-13 15:47:23.814187428 +0000 UTC m=+0.085752495 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:47:23 standalone.localdomain podman[550110]: 2025-10-13 15:47:23.819948988 +0000 UTC m=+0.091514025 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:47:23 standalone.localdomain dnsmasq[549982]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/addn_hosts - 0 addresses
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:47:23 standalone.localdomain dnsmasq-dhcp[549982]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/host
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:47:23 standalone.localdomain dnsmasq-dhcp[549982]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/opts
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:47:23 standalone.localdomain podman[550107]: 2025-10-13 15:47:23.821356852 +0000 UTC m=+0.101995822 container kill ce7fa8da1cf0f251624e8b78807687a82f7929fa657daaef185148ea690b91e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:47:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:47:23 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:47:23 standalone.localdomain podman[550098]: 
Oct 13 15:47:23 standalone.localdomain podman[550098]: 2025-10-13 15:47:23.851479842 +0000 UTC m=+0.146841252 container create e09acd48870f01b9f048db3d11b011af1f1eb2db4b9c2d7afea56ca8d945b23e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:23 standalone.localdomain podman[550138]: 
Oct 13 15:47:23 standalone.localdomain podman[550138]: 2025-10-13 15:47:23.890888991 +0000 UTC m=+0.101458225 container create 5d6f038b6441311f7c43074205b0ae17cf45784ecd4bed4dddaee862a072f936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbc77373-2781-4d9c-81a6-fd69e1b2c2ea, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:23 standalone.localdomain podman[550098]: 2025-10-13 15:47:23.799676356 +0000 UTC m=+0.095037796 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:23 standalone.localdomain systemd[1]: Started libpod-conmon-5d6f038b6441311f7c43074205b0ae17cf45784ecd4bed4dddaee862a072f936.scope.
Oct 13 15:47:23 standalone.localdomain systemd[1]: Started libpod-conmon-e09acd48870f01b9f048db3d11b011af1f1eb2db4b9c2d7afea56ca8d945b23e.scope.
Oct 13 15:47:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:23 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb54709bb9bb9d742dff98d6e083a79e5032357671aa31e14cdf44f3911186d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:23 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203b93d5dc8fb790b905eee78c3680776a5ee026899f85fdcd51105574e79b6d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:23 standalone.localdomain podman[550138]: 2025-10-13 15:47:23.950207931 +0000 UTC m=+0.160777155 container init 5d6f038b6441311f7c43074205b0ae17cf45784ecd4bed4dddaee862a072f936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbc77373-2781-4d9c-81a6-fd69e1b2c2ea, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:47:23 standalone.localdomain podman[550098]: 2025-10-13 15:47:23.952792912 +0000 UTC m=+0.248154312 container init e09acd48870f01b9f048db3d11b011af1f1eb2db4b9c2d7afea56ca8d945b23e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 15:47:23 standalone.localdomain podman[550138]: 2025-10-13 15:47:23.956239789 +0000 UTC m=+0.166809013 container start 5d6f038b6441311f7c43074205b0ae17cf45784ecd4bed4dddaee862a072f936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbc77373-2781-4d9c-81a6-fd69e1b2c2ea, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:47:23 standalone.localdomain podman[550138]: 2025-10-13 15:47:23.858311095 +0000 UTC m=+0.068880349 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:23 standalone.localdomain podman[550098]: 2025-10-13 15:47:23.959316135 +0000 UTC m=+0.254677575 container start e09acd48870f01b9f048db3d11b011af1f1eb2db4b9c2d7afea56ca8d945b23e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:47:23 standalone.localdomain dnsmasq[550184]: started, version 2.85 cachesize 150
Oct 13 15:47:23 standalone.localdomain dnsmasq[550184]: DNS service limited to local subnets
Oct 13 15:47:23 standalone.localdomain dnsmasq[550184]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:23 standalone.localdomain dnsmasq[550184]: warning: no upstream servers configured
Oct 13 15:47:23 standalone.localdomain dnsmasq-dhcp[550184]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:47:23 standalone.localdomain dnsmasq[550184]: read /var/lib/neutron/dhcp/bbc77373-2781-4d9c-81a6-fd69e1b2c2ea/addn_hosts - 0 addresses
Oct 13 15:47:23 standalone.localdomain dnsmasq-dhcp[550184]: read /var/lib/neutron/dhcp/bbc77373-2781-4d9c-81a6-fd69e1b2c2ea/host
Oct 13 15:47:23 standalone.localdomain dnsmasq[550185]: started, version 2.85 cachesize 150
Oct 13 15:47:23 standalone.localdomain dnsmasq[550185]: DNS service limited to local subnets
Oct 13 15:47:23 standalone.localdomain dnsmasq[550185]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:23 standalone.localdomain dnsmasq[550185]: warning: no upstream servers configured
Oct 13 15:47:23 standalone.localdomain dnsmasq[550185]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:23 standalone.localdomain dnsmasq-dhcp[550184]: read /var/lib/neutron/dhcp/bbc77373-2781-4d9c-81a6-fd69e1b2c2ea/opts
Oct 13 15:47:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:24.009 496978 INFO neutron.agent.dhcp.agent [None req-0fa0597c-e137-4e16-a61f-1107eb5af3eb - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:47:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:24.009 496978 INFO neutron.agent.dhcp.agent [None req-0fa0597c-e137-4e16-a61f-1107eb5af3eb - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:21Z, description=, device_id=0ccf2baf-a93a-48a3-a11c-3d38fe54a31a, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4b3a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889103070>], id=0aff5504-181f-4f36-8552-049b35943bc0, ip_allocation=immediate, mac_address=fa:16:3e:c5:70:40, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=20, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['9ecab9ee-fd82-4dc8-91ff-82f21d38e541'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:47:20Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=False, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2032, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:47:22Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:47:24 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:24.084 2 INFO neutron.agent.securitygroups_rpc [None req-cb79032c-fcfc-48d6-aa11-6394dcd47d91 1673af16e0954b21afa19d18870dddb4 745bbaede12e4123b190dd25fdacfb7d - - default default] Security group member updated ['698974ef-6627-4166-8798-766ef81e7360']
Oct 13 15:47:24 standalone.localdomain dnsmasq[550185]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 1 addresses
Oct 13 15:47:24 standalone.localdomain podman[550205]: 2025-10-13 15:47:24.141579941 +0000 UTC m=+0.041106143 container kill e09acd48870f01b9f048db3d11b011af1f1eb2db4b9c2d7afea56ca8d945b23e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:47:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:24.147 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:23Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889095040>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890951f0>], id=f5259448-0368-456d-9bf1-7bd1ec8b1aea, ip_allocation=immediate, mac_address=fa:16:3e:c4:43:26, name=tempest-RoutersTest-1133844275, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:47:17Z, description=, dns_domain=, id=265b8599-466a-474a-bf1f-214d900622c4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1220390129, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40274, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2015, status=ACTIVE, subnets=['0047aaeb-f643-4fad-9978-81a2a5852d7b'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:47:19Z, vlan_transparent=None, network_id=265b8599-466a-474a-bf1f-214d900622c4, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['698974ef-6627-4166-8798-766ef81e7360'], standard_attr_id=2036, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:47:23Z on network 265b8599-466a-474a-bf1f-214d900622c4
Oct 13 15:47:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:24.236 496978 INFO neutron.agent.dhcp.agent [None req-0b411538-70bc-4aca-8d49-44717ef7cd52 - - - - - -] DHCP configuration for ports {'299f523f-abcf-4c61-b6db-156b1856deb4', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:47:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:24.326 496978 INFO neutron.agent.dhcp.agent [None req-0fa0597c-e137-4e16-a61f-1107eb5af3eb - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:21Z, description=, device_id=0ccf2baf-a93a-48a3-a11c-3d38fe54a31a, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e83250>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e83640>], id=0aff5504-181f-4f36-8552-049b35943bc0, ip_allocation=immediate, mac_address=fa:16:3e:c5:70:40, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=20, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['9ecab9ee-fd82-4dc8-91ff-82f21d38e541'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:47:20Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=False, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2032, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:47:22Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:47:24 standalone.localdomain dnsmasq[549910]: read /var/lib/neutron/dhcp/265b8599-466a-474a-bf1f-214d900622c4/addn_hosts - 1 addresses
Oct 13 15:47:24 standalone.localdomain podman[550244]: 2025-10-13 15:47:24.412083809 +0000 UTC m=+0.066660091 container kill 55fb45d69123a27f76dd28da3d2871d25b823df30548be601f72877565b7c436 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-265b8599-466a-474a-bf1f-214d900622c4, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:47:24 standalone.localdomain dnsmasq-dhcp[549910]: read /var/lib/neutron/dhcp/265b8599-466a-474a-bf1f-214d900622c4/host
Oct 13 15:47:24 standalone.localdomain dnsmasq-dhcp[549910]: read /var/lib/neutron/dhcp/265b8599-466a-474a-bf1f-214d900622c4/opts
Oct 13 15:47:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:24.424 496978 INFO neutron.agent.dhcp.agent [None req-047f009b-c64a-4bb9-95df-838ed97c4b41 - - - - - -] DHCP configuration for ports {'0aff5504-181f-4f36-8552-049b35943bc0'} is completed
Oct 13 15:47:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61256 DF PROTO=TCP SPT=34406 DPT=9102 SEQ=3101837516 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD226B60000000001030307) 
Oct 13 15:47:24 standalone.localdomain dnsmasq[550185]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 1 addresses
Oct 13 15:47:24 standalone.localdomain podman[550278]: 2025-10-13 15:47:24.569904882 +0000 UTC m=+0.076249680 container kill e09acd48870f01b9f048db3d11b011af1f1eb2db4b9c2d7afea56ca8d945b23e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:47:24 standalone.localdomain dnsmasq[549472]: exiting on receipt of SIGTERM
Oct 13 15:47:24 standalone.localdomain podman[550315]: 2025-10-13 15:47:24.70288769 +0000 UTC m=+0.076726855 container kill c02b926510deb953ea33e59ab7c891af470f4add86f688c1fb1483ef6fcb5d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ac515510-2ee8-49c2-aaf8-85a363df7851, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:24.704 496978 INFO neutron.agent.dhcp.agent [None req-160ed40c-5a15-44c0-93c4-c26e4e7fc2f8 - - - - - -] DHCP configuration for ports {'f5259448-0368-456d-9bf1-7bd1ec8b1aea'} is completed
Oct 13 15:47:24 standalone.localdomain systemd[1]: libpod-c02b926510deb953ea33e59ab7c891af470f4add86f688c1fb1483ef6fcb5d6a.scope: Deactivated successfully.
Oct 13 15:47:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:24.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:24 standalone.localdomain podman[550338]: 2025-10-13 15:47:24.772696018 +0000 UTC m=+0.047123821 container died c02b926510deb953ea33e59ab7c891af470f4add86f688c1fb1483ef6fcb5d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ac515510-2ee8-49c2-aaf8-85a363df7851, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c02b926510deb953ea33e59ab7c891af470f4add86f688c1fb1483ef6fcb5d6a-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-4020596c37758d9cf8d119b0c05d73781ab185b43eae5425d6cb857317b6fbeb-merged.mount: Deactivated successfully.
Oct 13 15:47:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:24.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:24 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:24Z|00545|binding|INFO|Releasing lport 976d5a40-8629-40ce-98a9-4cd16a58e34d from this chassis (sb_readonly=0)
Oct 13 15:47:24 standalone.localdomain kernel: device tap976d5a40-86 left promiscuous mode
Oct 13 15:47:24 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:24Z|00546|binding|INFO|Setting lport 976d5a40-8629-40ce-98a9-4cd16a58e34d down in Southbound
Oct 13 15:47:24 standalone.localdomain podman[550338]: 2025-10-13 15:47:24.82853278 +0000 UTC m=+0.102960583 container remove c02b926510deb953ea33e59ab7c891af470f4add86f688c1fb1483ef6fcb5d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ac515510-2ee8-49c2-aaf8-85a363df7851, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:24.832 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 7557798a-247e-4a37-8d29-e4737fab0ab3 with type ""
Oct 13 15:47:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:24.834 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-bbc77373-2781-4d9c-81a6-fd69e1b2c2ea', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbc77373-2781-4d9c-81a6-fd69e1b2c2ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20a0102a-0a81-411d-8a51-114dbdfbde1b, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=976d5a40-8629-40ce-98a9-4cd16a58e34d) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:24.837 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 976d5a40-8629-40ce-98a9-4cd16a58e34d in datapath bbc77373-2781-4d9c-81a6-fd69e1b2c2ea unbound from our chassis
Oct 13 15:47:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:24.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:24.843 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bbc77373-2781-4d9c-81a6-fd69e1b2c2ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:24 standalone.localdomain ceph-mon[29756]: pgmap v3993: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:24 standalone.localdomain systemd[1]: libpod-conmon-c02b926510deb953ea33e59ab7c891af470f4add86f688c1fb1483ef6fcb5d6a.scope: Deactivated successfully.
Oct 13 15:47:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:24.846 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[dfbb4960-a034-4d16-ad35-7c4b5c361ff3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:24.875 496978 INFO neutron.agent.dhcp.agent [None req-e77dd974-4e84-42e0-ba47-59f4495b1e4e - - - - - -] DHCP configuration for ports {'0aff5504-181f-4f36-8552-049b35943bc0'} is completed
Oct 13 15:47:24 standalone.localdomain podman[550381]: 2025-10-13 15:47:24.962627452 +0000 UTC m=+0.059322061 container kill ce7fa8da1cf0f251624e8b78807687a82f7929fa657daaef185148ea690b91e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:24 standalone.localdomain dnsmasq[549982]: exiting on receipt of SIGTERM
Oct 13 15:47:24 standalone.localdomain systemd[1]: libpod-ce7fa8da1cf0f251624e8b78807687a82f7929fa657daaef185148ea690b91e1.scope: Deactivated successfully.
Oct 13 15:47:25 standalone.localdomain podman[550397]: 2025-10-13 15:47:25.038682535 +0000 UTC m=+0.047644547 container died ce7fa8da1cf0f251624e8b78807687a82f7929fa657daaef185148ea690b91e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.080 496978 INFO neutron.agent.dhcp.agent [None req-33b4bab6-4daa-417a-9c02-77aa5e3a4358 - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.081 496978 INFO neutron.agent.dhcp.agent [None req-33b4bab6-4daa-417a-9c02-77aa5e3a4358 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:25 standalone.localdomain podman[550397]: 2025-10-13 15:47:25.087557409 +0000 UTC m=+0.096519421 container remove ce7fa8da1cf0f251624e8b78807687a82f7929fa657daaef185148ea690b91e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:47:25 standalone.localdomain systemd[1]: libpod-conmon-ce7fa8da1cf0f251624e8b78807687a82f7929fa657daaef185148ea690b91e1.scope: Deactivated successfully.
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.157 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.558 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:25 standalone.localdomain dnsmasq[550184]: read /var/lib/neutron/dhcp/bbc77373-2781-4d9c-81a6-fd69e1b2c2ea/addn_hosts - 0 addresses
Oct 13 15:47:25 standalone.localdomain podman[550459]: 2025-10-13 15:47:25.561510753 +0000 UTC m=+0.058813615 container kill 5d6f038b6441311f7c43074205b0ae17cf45784ecd4bed4dddaee862a072f936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbc77373-2781-4d9c-81a6-fd69e1b2c2ea, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:25 standalone.localdomain dnsmasq-dhcp[550184]: read /var/lib/neutron/dhcp/bbc77373-2781-4d9c-81a6-fd69e1b2c2ea/host
Oct 13 15:47:25 standalone.localdomain dnsmasq-dhcp[550184]: read /var/lib/neutron/dhcp/bbc77373-2781-4d9c-81a6-fd69e1b2c2ea/opts
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent [None req-95bb2820-a1a7-4897-9cd4-057aa819acf3 - - - - - -] Unable to reload_allocations dhcp for bbc77373-2781-4d9c-81a6-fd69e1b2c2ea.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap976d5a40-86 not found in namespace qdhcp-bbc77373-2781-4d9c-81a6-fd69e1b2c2ea.
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     return fut.result()
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     raise self._exception
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap976d5a40-86 not found in namespace qdhcp-bbc77373-2781-4d9c-81a6-fd69e1b2c2ea.
Oct 13 15:47:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:25.593 496978 ERROR neutron.agent.dhcp.agent 
Oct 13 15:47:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3994: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6e6fa242e82ffc20a1b0c7bed487f960b5f016f0e353551c8a4e7816d3f5b5bc-merged.mount: Deactivated successfully.
Oct 13 15:47:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ce7fa8da1cf0f251624e8b78807687a82f7929fa657daaef185148ea690b91e1-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:25 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dac515510\x2d2ee8\x2d49c2\x2daaf8\x2d85a363df7851.mount: Deactivated successfully.
Oct 13 15:47:25 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:25Z|00547|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:25.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:26 standalone.localdomain podman[550502]: 
Oct 13 15:47:26 standalone.localdomain podman[550502]: 2025-10-13 15:47:26.053679456 +0000 UTC m=+0.084466626 container create 467ec001f810d21a79fff01701f0ddd1e1f68aebe8e5ba6d1be7f6ad70e2bfd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS)
Oct 13 15:47:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:47:26 standalone.localdomain systemd[1]: Started libpod-conmon-467ec001f810d21a79fff01701f0ddd1e1f68aebe8e5ba6d1be7f6ad70e2bfd4.scope.
Oct 13 15:47:26 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:26 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f10bac4cc903ee494d1f499e1d66c61e37323cacf6dd45a2ae07f3771d73ddf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:26 standalone.localdomain podman[550502]: 2025-10-13 15:47:26.014775242 +0000 UTC m=+0.045562452 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:26 standalone.localdomain podman[550502]: 2025-10-13 15:47:26.11955063 +0000 UTC m=+0.150337830 container init 467ec001f810d21a79fff01701f0ddd1e1f68aebe8e5ba6d1be7f6ad70e2bfd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:47:26 standalone.localdomain podman[550502]: 2025-10-13 15:47:26.130587105 +0000 UTC m=+0.161374315 container start 467ec001f810d21a79fff01701f0ddd1e1f68aebe8e5ba6d1be7f6ad70e2bfd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:47:26 standalone.localdomain dnsmasq[550531]: started, version 2.85 cachesize 150
Oct 13 15:47:26 standalone.localdomain dnsmasq[550531]: DNS service limited to local subnets
Oct 13 15:47:26 standalone.localdomain dnsmasq[550531]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:26 standalone.localdomain dnsmasq[550531]: warning: no upstream servers configured
Oct 13 15:47:26 standalone.localdomain dnsmasq-dhcp[550531]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Oct 13 15:47:26 standalone.localdomain dnsmasq[550531]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/addn_hosts - 0 addresses
Oct 13 15:47:26 standalone.localdomain dnsmasq-dhcp[550531]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/host
Oct 13 15:47:26 standalone.localdomain dnsmasq-dhcp[550531]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/opts
Oct 13 15:47:26 standalone.localdomain podman[550516]: 2025-10-13 15:47:26.191547586 +0000 UTC m=+0.103510430 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 13 15:47:26 standalone.localdomain podman[550516]: 2025-10-13 15:47:26.201940171 +0000 UTC m=+0.113902995 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 13 15:47:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:26.205 496978 INFO neutron.agent.dhcp.agent [None req-949ca5f1-0004-4248-adef-930b62abb9ec - - - - - -] Synchronizing state
Oct 13 15:47:26 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:47:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:26.472 496978 INFO neutron.agent.dhcp.agent [None req-a88d60b0-2a99-4a06-a84c-2b8d149c12aa - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:47:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:26.474 496978 INFO neutron.agent.dhcp.agent [-] Starting network 08b6cfc7-21c4-4d6f-b53f-cf8140490984 dhcp configuration
Oct 13 15:47:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:26.474 496978 INFO neutron.agent.dhcp.agent [-] Finished network 08b6cfc7-21c4-4d6f-b53f-cf8140490984 dhcp configuration
Oct 13 15:47:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:26.474 496978 INFO neutron.agent.dhcp.agent [-] Starting network bbc77373-2781-4d9c-81a6-fd69e1b2c2ea dhcp configuration
Oct 13 15:47:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:26.475 496978 INFO neutron.agent.dhcp.agent [-] Finished network bbc77373-2781-4d9c-81a6-fd69e1b2c2ea dhcp configuration
Oct 13 15:47:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:26.475 496978 INFO neutron.agent.dhcp.agent [None req-a88d60b0-2a99-4a06-a84c-2b8d149c12aa - - - - - -] Synchronizing state complete
Oct 13 15:47:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:26.477 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:26.552 496978 INFO neutron.agent.dhcp.agent [None req-ba307118-2cba-49b7-b96b-d14e67dc9e91 - - - - - -] DHCP configuration for ports {'06dc6f46-a50b-481f-a741-bfad8623c5d3', 'e2f9cc6e-dfa2-4cc5-9e74-6ca425ef35a5', 'e3d92db7-5106-4a58-83fb-0c74a16b72a2'} is completed
Oct 13 15:47:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61257 DF PROTO=TCP SPT=34406 DPT=9102 SEQ=3101837516 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD22EB60000000001030307) 
Oct 13 15:47:26 standalone.localdomain dnsmasq[550184]: exiting on receipt of SIGTERM
Oct 13 15:47:26 standalone.localdomain podman[550593]: 2025-10-13 15:47:26.700557384 +0000 UTC m=+0.051998983 container kill 5d6f038b6441311f7c43074205b0ae17cf45784ecd4bed4dddaee862a072f936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbc77373-2781-4d9c-81a6-fd69e1b2c2ea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:47:26 standalone.localdomain systemd[1]: libpod-5d6f038b6441311f7c43074205b0ae17cf45784ecd4bed4dddaee862a072f936.scope: Deactivated successfully.
Oct 13 15:47:26 standalone.localdomain dnsmasq[550185]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:26 standalone.localdomain podman[550573]: 2025-10-13 15:47:26.728672991 +0000 UTC m=+0.110434656 container kill e09acd48870f01b9f048db3d11b011af1f1eb2db4b9c2d7afea56ca8d945b23e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:26 standalone.localdomain podman[550623]: 2025-10-13 15:47:26.752806584 +0000 UTC m=+0.039936107 container died 5d6f038b6441311f7c43074205b0ae17cf45784ecd4bed4dddaee862a072f936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbc77373-2781-4d9c-81a6-fd69e1b2c2ea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d6f038b6441311f7c43074205b0ae17cf45784ecd4bed4dddaee862a072f936-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-203b93d5dc8fb790b905eee78c3680776a5ee026899f85fdcd51105574e79b6d-merged.mount: Deactivated successfully.
Oct 13 15:47:26 standalone.localdomain podman[550623]: 2025-10-13 15:47:26.785588107 +0000 UTC m=+0.072717610 container cleanup 5d6f038b6441311f7c43074205b0ae17cf45784ecd4bed4dddaee862a072f936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbc77373-2781-4d9c-81a6-fd69e1b2c2ea, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:47:26 standalone.localdomain systemd[1]: libpod-conmon-5d6f038b6441311f7c43074205b0ae17cf45784ecd4bed4dddaee862a072f936.scope: Deactivated successfully.
Oct 13 15:47:26 standalone.localdomain podman[550631]: 2025-10-13 15:47:26.806353305 +0000 UTC m=+0.078196700 container remove 5d6f038b6441311f7c43074205b0ae17cf45784ecd4bed4dddaee862a072f936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbc77373-2781-4d9c-81a6-fd69e1b2c2ea, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:47:26 standalone.localdomain dnsmasq[550531]: exiting on receipt of SIGTERM
Oct 13 15:47:26 standalone.localdomain podman[550609]: 2025-10-13 15:47:26.828198615 +0000 UTC m=+0.147469370 container kill 467ec001f810d21a79fff01701f0ddd1e1f68aebe8e5ba6d1be7f6ad70e2bfd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:47:26 standalone.localdomain systemd[1]: libpod-467ec001f810d21a79fff01701f0ddd1e1f68aebe8e5ba6d1be7f6ad70e2bfd4.scope: Deactivated successfully.
Oct 13 15:47:26 standalone.localdomain ceph-mon[29756]: pgmap v3994: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:26 standalone.localdomain podman[550661]: 2025-10-13 15:47:26.880786956 +0000 UTC m=+0.043321272 container died 467ec001f810d21a79fff01701f0ddd1e1f68aebe8e5ba6d1be7f6ad70e2bfd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:47:26 standalone.localdomain kernel: device tapcfba9bd9-b8 left promiscuous mode
Oct 13 15:47:26 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:26.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:26 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:26Z|00548|binding|INFO|Releasing lport cfba9bd9-b8ad-4c91-98b2-ecf52a866833 from this chassis (sb_readonly=0)
Oct 13 15:47:26 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:26Z|00549|binding|INFO|Setting lport cfba9bd9-b8ad-4c91-98b2-ecf52a866833 down in Southbound
Oct 13 15:47:26 standalone.localdomain podman[550661]: 2025-10-13 15:47:26.956666593 +0000 UTC m=+0.119200889 container cleanup 467ec001f810d21a79fff01701f0ddd1e1f68aebe8e5ba6d1be7f6ad70e2bfd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:47:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:26.960 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=cfba9bd9-b8ad-4c91-98b2-ecf52a866833) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:26 standalone.localdomain systemd[1]: libpod-conmon-467ec001f810d21a79fff01701f0ddd1e1f68aebe8e5ba6d1be7f6ad70e2bfd4.scope: Deactivated successfully.
Oct 13 15:47:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:26.961 378821 INFO neutron.agent.ovn.metadata.agent [-] Port cfba9bd9-b8ad-4c91-98b2-ecf52a866833 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:47:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:26.962 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f934a9b1-f0ba-494d-9c34-bbdad3043007 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:26.963 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[9f425ddf-ed67-4a46-ad49-3f13d4ee66ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:26 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:26.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:26 standalone.localdomain podman[550664]: 2025-10-13 15:47:26.988018871 +0000 UTC m=+0.138802610 container remove 467ec001f810d21a79fff01701f0ddd1e1f68aebe8e5ba6d1be7f6ad70e2bfd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 13 15:47:27 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:27.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:27 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:27.233 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:23Z, description=, device_id=fe290e63-85f9-4bd4-8534-52c79419ee28, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890ecd00>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f23df0>], id=f5259448-0368-456d-9bf1-7bd1ec8b1aea, ip_allocation=immediate, mac_address=fa:16:3e:c4:43:26, name=tempest-RoutersTest-1133844275, network_id=265b8599-466a-474a-bf1f-214d900622c4, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['698974ef-6627-4166-8798-766ef81e7360'], standard_attr_id=2036, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:47:25Z on network 265b8599-466a-474a-bf1f-214d900622c4
Oct 13 15:47:27 standalone.localdomain dnsmasq[549910]: read /var/lib/neutron/dhcp/265b8599-466a-474a-bf1f-214d900622c4/addn_hosts - 1 addresses
Oct 13 15:47:27 standalone.localdomain dnsmasq-dhcp[549910]: read /var/lib/neutron/dhcp/265b8599-466a-474a-bf1f-214d900622c4/host
Oct 13 15:47:27 standalone.localdomain podman[550711]: 2025-10-13 15:47:27.490375641 +0000 UTC m=+0.057533966 container kill 55fb45d69123a27f76dd28da3d2871d25b823df30548be601f72877565b7c436 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-265b8599-466a-474a-bf1f-214d900622c4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:47:27 standalone.localdomain dnsmasq-dhcp[549910]: read /var/lib/neutron/dhcp/265b8599-466a-474a-bf1f-214d900622c4/opts
Oct 13 15:47:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:47:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3995: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:27 standalone.localdomain dnsmasq[550185]: exiting on receipt of SIGTERM
Oct 13 15:47:27 standalone.localdomain podman[550760]: 2025-10-13 15:47:27.751745214 +0000 UTC m=+0.053302714 container kill e09acd48870f01b9f048db3d11b011af1f1eb2db4b9c2d7afea56ca8d945b23e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:47:27 standalone.localdomain systemd[1]: libpod-e09acd48870f01b9f048db3d11b011af1f1eb2db4b9c2d7afea56ca8d945b23e.scope: Deactivated successfully.
Oct 13 15:47:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-9f10bac4cc903ee494d1f499e1d66c61e37323cacf6dd45a2ae07f3771d73ddf-merged.mount: Deactivated successfully.
Oct 13 15:47:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-467ec001f810d21a79fff01701f0ddd1e1f68aebe8e5ba6d1be7f6ad70e2bfd4-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:27 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dbbc77373\x2d2781\x2d4d9c\x2d81a6\x2dfd69e1b2c2ea.mount: Deactivated successfully.
Oct 13 15:47:27 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:27.806 496978 INFO neutron.agent.dhcp.agent [None req-4e73d71d-7fa3-4fe0-8c6a-a2ce1d451ff1 - - - - - -] DHCP configuration for ports {'f5259448-0368-456d-9bf1-7bd1ec8b1aea'} is completed
Oct 13 15:47:27 standalone.localdomain podman[550785]: 2025-10-13 15:47:27.822978156 +0000 UTC m=+0.047663368 container died e09acd48870f01b9f048db3d11b011af1f1eb2db4b9c2d7afea56ca8d945b23e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 13 15:47:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e09acd48870f01b9f048db3d11b011af1f1eb2db4b9c2d7afea56ca8d945b23e-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:27 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-afb54709bb9bb9d742dff98d6e083a79e5032357671aa31e14cdf44f3911186d-merged.mount: Deactivated successfully.
Oct 13 15:47:27 standalone.localdomain podman[550785]: 2025-10-13 15:47:27.873185552 +0000 UTC m=+0.097870704 container remove e09acd48870f01b9f048db3d11b011af1f1eb2db4b9c2d7afea56ca8d945b23e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:27 standalone.localdomain systemd[1]: libpod-conmon-e09acd48870f01b9f048db3d11b011af1f1eb2db4b9c2d7afea56ca8d945b23e.scope: Deactivated successfully.
Oct 13 15:47:28 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:28.033 2 INFO neutron.agent.securitygroups_rpc [None req-6572ac94-c7bd-41a8-89c1-9588949d2a18 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['81d762f2-ab19-4999-b1e3-c646428dcad5']
Oct 13 15:47:28 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:47:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:28.159 496978 INFO neutron.agent.dhcp.agent [None req-eca6eae5-7f8d-47a6-846a-539528a32fe8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:28.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:28.279 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:28 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:28Z|00550|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:28.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:28 standalone.localdomain podman[550840]: 
Oct 13 15:47:28 standalone.localdomain podman[550840]: 2025-10-13 15:47:28.493037478 +0000 UTC m=+0.102840359 container create d3cf2adfb03253e75b8b09962145daff2197904165713e359970b717ac6d7d2c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:28.526 496978 INFO neutron.agent.linux.ip_lib [None req-0932924e-6912-4933-b0ab-ecb45861b7df - - - - - -] Device tapbb828ff7-92 cannot be used as it has no MAC address
Oct 13 15:47:28 standalone.localdomain systemd[1]: Started libpod-conmon-d3cf2adfb03253e75b8b09962145daff2197904165713e359970b717ac6d7d2c.scope.
Oct 13 15:47:28 standalone.localdomain podman[550840]: 2025-10-13 15:47:28.442515262 +0000 UTC m=+0.052318193 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:28 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:28.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:28 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8555981688f39cc8771382ae7970cb3d00a4604d0ec01faf8d8c9933fc3eccdc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:28 standalone.localdomain kernel: device tapbb828ff7-92 entered promiscuous mode
Oct 13 15:47:28 standalone.localdomain NetworkManager[5962]: <info>  [1760370448.5596] manager: (tapbb828ff7-92): new Generic device (/org/freedesktop/NetworkManager/Devices/99)
Oct 13 15:47:28 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:28Z|00551|binding|INFO|Claiming lport bb828ff7-92f8-4812-b055-786e33f7bcaa for this chassis.
Oct 13 15:47:28 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:28Z|00552|binding|INFO|bb828ff7-92f8-4812-b055-786e33f7bcaa: Claiming unknown
Oct 13 15:47:28 standalone.localdomain systemd-udevd[550867]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:47:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:28.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:28.577 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-9a51cfe2-ad26-447b-a406-49386aa040cb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a51cfe2-ad26-447b-a406-49386aa040cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9f3031b-af4e-4609-9fa8-4267aa88ece9, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=bb828ff7-92f8-4812-b055-786e33f7bcaa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:28.578 378821 INFO neutron.agent.ovn.metadata.agent [-] Port bb828ff7-92f8-4812-b055-786e33f7bcaa in datapath 9a51cfe2-ad26-447b-a406-49386aa040cb bound to our chassis
Oct 13 15:47:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:28.579 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9a51cfe2-ad26-447b-a406-49386aa040cb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:28 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:28.579 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1b6f48-59f9-49b8-852f-1c2bf7e04dd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:28 standalone.localdomain podman[550840]: 2025-10-13 15:47:28.581863308 +0000 UTC m=+0.191666189 container init d3cf2adfb03253e75b8b09962145daff2197904165713e359970b717ac6d7d2c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:47:28 standalone.localdomain podman[550840]: 2025-10-13 15:47:28.587836214 +0000 UTC m=+0.197639105 container start d3cf2adfb03253e75b8b09962145daff2197904165713e359970b717ac6d7d2c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:47:28 standalone.localdomain dnsmasq[550871]: started, version 2.85 cachesize 150
Oct 13 15:47:28 standalone.localdomain dnsmasq[550871]: DNS service limited to local subnets
Oct 13 15:47:28 standalone.localdomain dnsmasq[550871]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:28 standalone.localdomain dnsmasq[550871]: warning: no upstream servers configured
Oct 13 15:47:28 standalone.localdomain dnsmasq-dhcp[550871]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Oct 13 15:47:28 standalone.localdomain dnsmasq-dhcp[550871]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:28 standalone.localdomain dnsmasq[550871]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/addn_hosts - 0 addresses
Oct 13 15:47:28 standalone.localdomain dnsmasq-dhcp[550871]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/host
Oct 13 15:47:28 standalone.localdomain dnsmasq-dhcp[550871]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/opts
Oct 13 15:47:28 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:28Z|00553|binding|INFO|Setting lport bb828ff7-92f8-4812-b055-786e33f7bcaa ovn-installed in OVS
Oct 13 15:47:28 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:28Z|00554|binding|INFO|Setting lport bb828ff7-92f8-4812-b055-786e33f7bcaa up in Southbound
Oct 13 15:47:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:28.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:28.643 496978 INFO neutron.agent.dhcp.agent [None req-8747f957-f64b-4697-9a85-11601af7f0df - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:27Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889028df0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889028cd0>], id=257714f6-d5b6-4677-ad9d-5b41a2ff2fca, ip_allocation=immediate, mac_address=fa:16:3e:74:cb:d4, name=tempest-PortsIpV6TestJSON-1492709070, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:12Z, description=, dns_domain=, id=a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-test-network-196486360, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44606, qos_policy_id=None, revision_number=5, router:external=False, shared=False, standard_attr_id=1632, status=ACTIVE, subnets=['29ba168e-9c07-44c8-aa04-e39b71390c7b', 'adaf5b8b-6392-44a4-83f6-16fe7a675997'], tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:47:25Z, vlan_transparent=None, network_id=a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['81d762f2-ab19-4999-b1e3-c646428dcad5'], standard_attr_id=2063, status=DOWN, tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:47:27Z on network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e
Oct 13 15:47:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:28.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:28 standalone.localdomain ceph-mon[29756]: pgmap v3995: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:28 standalone.localdomain dnsmasq[550871]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/addn_hosts - 1 addresses
Oct 13 15:47:28 standalone.localdomain dnsmasq-dhcp[550871]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/host
Oct 13 15:47:28 standalone.localdomain dnsmasq-dhcp[550871]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/opts
Oct 13 15:47:28 standalone.localdomain podman[550901]: 2025-10-13 15:47:28.862594676 +0000 UTC m=+0.066289829 container kill d3cf2adfb03253e75b8b09962145daff2197904165713e359970b717ac6d7d2c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006647822445561139 of space, bias 1.0, pg target 0.6647822445561139 quantized to 32 (current 32)
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.012942315745393635 of space, bias 1.0, pg target 1.2942315745393635 quantized to 32 (current 32)
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.29775832286432163 quantized to 32 (current 32)
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 13 15:47:29 standalone.localdomain podman[550962]: 
Oct 13 15:47:29 standalone.localdomain podman[550962]: 2025-10-13 15:47:29.738937691 +0000 UTC m=+0.099209675 container create 5833cc8b33e89f3a6e95df48940248911d988164a70840a37ce8e475bbf8dec4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a51cfe2-ad26-447b-a406-49386aa040cb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 13 15:47:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3996: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:29 standalone.localdomain systemd[1]: Started libpod-conmon-5833cc8b33e89f3a6e95df48940248911d988164a70840a37ce8e475bbf8dec4.scope.
Oct 13 15:47:29 standalone.localdomain podman[550962]: 2025-10-13 15:47:29.691884374 +0000 UTC m=+0.052156398 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:29 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f41f7c547a500306311fb5012b939e5c7e280ca284de09d71e35054d1bf83f7e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:29 standalone.localdomain podman[550962]: 2025-10-13 15:47:29.834857134 +0000 UTC m=+0.195129118 container init 5833cc8b33e89f3a6e95df48940248911d988164a70840a37ce8e475bbf8dec4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a51cfe2-ad26-447b-a406-49386aa040cb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:47:29 standalone.localdomain podman[550962]: 2025-10-13 15:47:29.843796163 +0000 UTC m=+0.204068137 container start 5833cc8b33e89f3a6e95df48940248911d988164a70840a37ce8e475bbf8dec4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a51cfe2-ad26-447b-a406-49386aa040cb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:29 standalone.localdomain dnsmasq[550981]: started, version 2.85 cachesize 150
Oct 13 15:47:29 standalone.localdomain dnsmasq[550981]: DNS service limited to local subnets
Oct 13 15:47:29 standalone.localdomain dnsmasq[550981]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:29 standalone.localdomain dnsmasq[550981]: warning: no upstream servers configured
Oct 13 15:47:29 standalone.localdomain dnsmasq-dhcp[550981]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:29 standalone.localdomain dnsmasq[550981]: read /var/lib/neutron/dhcp/9a51cfe2-ad26-447b-a406-49386aa040cb/addn_hosts - 0 addresses
Oct 13 15:47:29 standalone.localdomain dnsmasq-dhcp[550981]: read /var/lib/neutron/dhcp/9a51cfe2-ad26-447b-a406-49386aa040cb/host
Oct 13 15:47:29 standalone.localdomain dnsmasq-dhcp[550981]: read /var/lib/neutron/dhcp/9a51cfe2-ad26-447b-a406-49386aa040cb/opts
Oct 13 15:47:29 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:29.934 496978 INFO neutron.agent.dhcp.agent [None req-0b73e02e-a8c9-4336-8770-9642c90e76da - - - - - -] DHCP configuration for ports {'06dc6f46-a50b-481f-a741-bfad8623c5d3', 'e2f9cc6e-dfa2-4cc5-9e74-6ca425ef35a5', 'e3d92db7-5106-4a58-83fb-0c74a16b72a2'} is completed
Oct 13 15:47:30 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:30.058 496978 INFO neutron.agent.linux.ip_lib [None req-94128ffb-5a28-4628-b2f1-3b618dc5c027 - - - - - -] Device tap6abac8a0-f9 cannot be used as it has no MAC address
Oct 13 15:47:30 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:30.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:30 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:30.097 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:29Z, description=, device_id=ac7efc8a-59ba-4ac0-af7a-0eab0be98b5e, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ffcc10>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188921a100>], id=58ab8037-2950-4134-bbd3-c447f10ce57f, ip_allocation=immediate, mac_address=fa:16:3e:b5:32:ae, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:47:26Z, description=, dns_domain=, id=9a51cfe2-ad26-447b-a406-49386aa040cb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-2107849883, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=33637, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2040, status=ACTIVE, subnets=['0694df94-308c-463f-a826-3c29237c20a3'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:47:27Z, vlan_transparent=None, network_id=9a51cfe2-ad26-447b-a406-49386aa040cb, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2069, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:47:29Z on network 9a51cfe2-ad26-447b-a406-49386aa040cb
Oct 13 15:47:30 standalone.localdomain kernel: device tap6abac8a0-f9 entered promiscuous mode
Oct 13 15:47:30 standalone.localdomain NetworkManager[5962]: <info>  [1760370450.1019] manager: (tap6abac8a0-f9): new Generic device (/org/freedesktop/NetworkManager/Devices/100)
Oct 13 15:47:30 standalone.localdomain systemd-udevd[550869]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:47:30 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:30.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:30 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:30Z|00555|binding|INFO|Claiming lport 6abac8a0-f930-4687-b13f-1e77c1b12419 for this chassis.
Oct 13 15:47:30 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:30Z|00556|binding|INFO|6abac8a0-f930-4687-b13f-1e77c1b12419: Claiming unknown
Oct 13 15:47:30 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:30.120 496978 INFO neutron.agent.dhcp.agent [None req-08438b88-be8d-49c3-acef-6e69ec8d34d4 - - - - - -] DHCP configuration for ports {'da18da3f-ee25-415d-90de-b1c3ff66ce2d', '257714f6-d5b6-4677-ad9d-5b41a2ff2fca'} is completed
Oct 13 15:47:30 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:30.115 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-fd4b300a-8407-493c-a4cb-d44c73cd92ff', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4b300a-8407-493c-a4cb-d44c73cd92ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfb95b1f-1952-4e12-a16c-228508edbab2, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=6abac8a0-f930-4687-b13f-1e77c1b12419) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:30 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:30.117 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 6abac8a0-f930-4687-b13f-1e77c1b12419 in datapath fd4b300a-8407-493c-a4cb-d44c73cd92ff bound to our chassis
Oct 13 15:47:30 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:30.119 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fd4b300a-8407-493c-a4cb-d44c73cd92ff or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:30 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:30.119 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[314cd538-6269-4e70-82a8-7a2a2efe4a7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:30 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:30Z|00557|binding|INFO|Setting lport 6abac8a0-f930-4687-b13f-1e77c1b12419 ovn-installed in OVS
Oct 13 15:47:30 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:30Z|00558|binding|INFO|Setting lport 6abac8a0-f930-4687-b13f-1e77c1b12419 up in Southbound
Oct 13 15:47:30 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:30.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:30 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:30.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:30 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:30.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:30 standalone.localdomain dnsmasq[550981]: read /var/lib/neutron/dhcp/9a51cfe2-ad26-447b-a406-49386aa040cb/addn_hosts - 1 addresses
Oct 13 15:47:30 standalone.localdomain dnsmasq-dhcp[550981]: read /var/lib/neutron/dhcp/9a51cfe2-ad26-447b-a406-49386aa040cb/host
Oct 13 15:47:30 standalone.localdomain dnsmasq-dhcp[550981]: read /var/lib/neutron/dhcp/9a51cfe2-ad26-447b-a406-49386aa040cb/opts
Oct 13 15:47:30 standalone.localdomain podman[551019]: 2025-10-13 15:47:30.323610969 +0000 UTC m=+0.054272653 container kill 5833cc8b33e89f3a6e95df48940248911d988164a70840a37ce8e475bbf8dec4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a51cfe2-ad26-447b-a406-49386aa040cb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:30 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:30.383 2 INFO neutron.agent.securitygroups_rpc [None req-7407cdcd-7dbe-434d-b739-24b50bd16249 1673af16e0954b21afa19d18870dddb4 745bbaede12e4123b190dd25fdacfb7d - - default default] Security group member updated ['698974ef-6627-4166-8798-766ef81e7360']
Oct 13 15:47:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61258 DF PROTO=TCP SPT=34406 DPT=9102 SEQ=3101837516 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD23E760000000001030307) 
Oct 13 15:47:30 standalone.localdomain dnsmasq[549910]: read /var/lib/neutron/dhcp/265b8599-466a-474a-bf1f-214d900622c4/addn_hosts - 0 addresses
Oct 13 15:47:30 standalone.localdomain dnsmasq-dhcp[549910]: read /var/lib/neutron/dhcp/265b8599-466a-474a-bf1f-214d900622c4/host
Oct 13 15:47:30 standalone.localdomain podman[551069]: 2025-10-13 15:47:30.652465807 +0000 UTC m=+0.070508840 container kill 55fb45d69123a27f76dd28da3d2871d25b823df30548be601f72877565b7c436 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-265b8599-466a-474a-bf1f-214d900622c4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:47:30 standalone.localdomain dnsmasq-dhcp[549910]: read /var/lib/neutron/dhcp/265b8599-466a-474a-bf1f-214d900622c4/opts
Oct 13 15:47:30 standalone.localdomain ceph-mon[29756]: pgmap v3996: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:30 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:30.834 496978 INFO neutron.agent.dhcp.agent [None req-659d1059-3526-47ca-97fe-4263c6886be4 - - - - - -] DHCP configuration for ports {'58ab8037-2950-4134-bbd3-c447f10ce57f'} is completed
Oct 13 15:47:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:31Z|00559|binding|INFO|Releasing lport 217f6bba-9c27-4a0c-afea-2b78e62ee1c7 from this chassis (sb_readonly=0)
Oct 13 15:47:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:31Z|00560|binding|INFO|Setting lport 217f6bba-9c27-4a0c-afea-2b78e62ee1c7 down in Southbound
Oct 13 15:47:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:31.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:31 standalone.localdomain kernel: device tap217f6bba-9c left promiscuous mode
Oct 13 15:47:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:31.072 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-265b8599-466a-474a-bf1f-214d900622c4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-265b8599-466a-474a-bf1f-214d900622c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e97900d-e928-4b98-beb6-cfd18974b360, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=217f6bba-9c27-4a0c-afea-2b78e62ee1c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:31.074 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 217f6bba-9c27-4a0c-afea-2b78e62ee1c7 in datapath 265b8599-466a-474a-bf1f-214d900622c4 unbound from our chassis
Oct 13 15:47:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:31.077 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 265b8599-466a-474a-bf1f-214d900622c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:31.078 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[c9437912-c692-4204-b420-784faddb6fe7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:31.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:31 standalone.localdomain podman[551122]: 2025-10-13 15:47:31.213109336 +0000 UTC m=+0.086898981 container create bc2689c9ea8e67dcf37de525a610863f5ef20a998b7fb30f6360f4db5cecd8bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd4b300a-8407-493c-a4cb-d44c73cd92ff, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:31 standalone.localdomain systemd[1]: Started libpod-conmon-bc2689c9ea8e67dcf37de525a610863f5ef20a998b7fb30f6360f4db5cecd8bc.scope.
Oct 13 15:47:31 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:31 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d119a5b8ceddf2ecd6a01ae48cd7f9b8d997c63486c66d31fe7df5e955679835/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:31 standalone.localdomain podman[551122]: 2025-10-13 15:47:31.178579799 +0000 UTC m=+0.052369494 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:31 standalone.localdomain podman[551122]: 2025-10-13 15:47:31.284887265 +0000 UTC m=+0.158676930 container init bc2689c9ea8e67dcf37de525a610863f5ef20a998b7fb30f6360f4db5cecd8bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd4b300a-8407-493c-a4cb-d44c73cd92ff, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:47:31 standalone.localdomain podman[551122]: 2025-10-13 15:47:31.293801973 +0000 UTC m=+0.167591638 container start bc2689c9ea8e67dcf37de525a610863f5ef20a998b7fb30f6360f4db5cecd8bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd4b300a-8407-493c-a4cb-d44c73cd92ff, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 13 15:47:31 standalone.localdomain dnsmasq[551142]: started, version 2.85 cachesize 150
Oct 13 15:47:31 standalone.localdomain dnsmasq[551142]: DNS service limited to local subnets
Oct 13 15:47:31 standalone.localdomain dnsmasq[551142]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:31 standalone.localdomain dnsmasq[551142]: warning: no upstream servers configured
Oct 13 15:47:31 standalone.localdomain dnsmasq-dhcp[551142]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:47:31 standalone.localdomain dnsmasq[551142]: read /var/lib/neutron/dhcp/fd4b300a-8407-493c-a4cb-d44c73cd92ff/addn_hosts - 0 addresses
Oct 13 15:47:31 standalone.localdomain dnsmasq-dhcp[551142]: read /var/lib/neutron/dhcp/fd4b300a-8407-493c-a4cb-d44c73cd92ff/host
Oct 13 15:47:31 standalone.localdomain dnsmasq-dhcp[551142]: read /var/lib/neutron/dhcp/fd4b300a-8407-493c-a4cb-d44c73cd92ff/opts
Oct 13 15:47:31 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:31.378 496978 INFO neutron.agent.linux.ip_lib [None req-5e44f044-a7b5-46e4-a82d-7ce2c43cc9b6 - - - - - -] Device tapb28fbc52-72 cannot be used as it has no MAC address
Oct 13 15:47:31 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:31.388 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:29Z, description=, device_id=ac7efc8a-59ba-4ac0-af7a-0eab0be98b5e, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e83dc0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e83eb0>], id=58ab8037-2950-4134-bbd3-c447f10ce57f, ip_allocation=immediate, mac_address=fa:16:3e:b5:32:ae, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:47:26Z, description=, dns_domain=, id=9a51cfe2-ad26-447b-a406-49386aa040cb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-2107849883, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=33637, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2040, status=ACTIVE, subnets=['0694df94-308c-463f-a826-3c29237c20a3'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:47:27Z, vlan_transparent=None, network_id=9a51cfe2-ad26-447b-a406-49386aa040cb, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2069, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:47:29Z on network 9a51cfe2-ad26-447b-a406-49386aa040cb
Oct 13 15:47:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:31.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:31 standalone.localdomain kernel: device tapb28fbc52-72 entered promiscuous mode
Oct 13 15:47:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:31Z|00561|binding|INFO|Claiming lport b28fbc52-7229-4d82-8fc1-2d6e095d1a0f for this chassis.
Oct 13 15:47:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:31.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:31Z|00562|binding|INFO|b28fbc52-7229-4d82-8fc1-2d6e095d1a0f: Claiming unknown
Oct 13 15:47:31 standalone.localdomain NetworkManager[5962]: <info>  [1760370451.4266] manager: (tapb28fbc52-72): new Generic device (/org/freedesktop/NetworkManager/Devices/101)
Oct 13 15:47:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:31.436 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-7b5a1eaf-f859-44dd-8036-8ec5e471e646', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b5a1eaf-f859-44dd-8036-8ec5e471e646', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89713a2255ea411083c22360247bdb7c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd1d190f-8216-4eb2-aa59-d3dd37b8484e, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=b28fbc52-7229-4d82-8fc1-2d6e095d1a0f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:31.439 378821 INFO neutron.agent.ovn.metadata.agent [-] Port b28fbc52-7229-4d82-8fc1-2d6e095d1a0f in datapath 7b5a1eaf-f859-44dd-8036-8ec5e471e646 bound to our chassis
Oct 13 15:47:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:31.446 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 70caee29-c3a2-4610-94e0-75f18fe80bcb IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:47:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:31.446 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b5a1eaf-f859-44dd-8036-8ec5e471e646, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:31 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:31.447 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[1882bb29-cf2b-47f8-a939-e436b3ec6ffa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:31 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapb28fbc52-72: No such device
Oct 13 15:47:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:31.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:31 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapb28fbc52-72: No such device
Oct 13 15:47:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:31Z|00563|binding|INFO|Setting lport b28fbc52-7229-4d82-8fc1-2d6e095d1a0f ovn-installed in OVS
Oct 13 15:47:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:31Z|00564|binding|INFO|Setting lport b28fbc52-7229-4d82-8fc1-2d6e095d1a0f up in Southbound
Oct 13 15:47:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:31.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:31 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapb28fbc52-72: No such device
Oct 13 15:47:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:31.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:31 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapb28fbc52-72: No such device
Oct 13 15:47:31 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapb28fbc52-72: No such device
Oct 13 15:47:31 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapb28fbc52-72: No such device
Oct 13 15:47:31 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapb28fbc52-72: No such device
Oct 13 15:47:31 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapb28fbc52-72: No such device
Oct 13 15:47:31 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:31.505 496978 INFO neutron.agent.dhcp.agent [None req-458dd399-2ba5-41e8-a258-9b18249171b7 - - - - - -] DHCP configuration for ports {'0245ffe9-ed54-4ab2-9832-70851b47b8dc'} is completed
Oct 13 15:47:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:31.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:31 standalone.localdomain podman[551187]: 2025-10-13 15:47:31.572187137 +0000 UTC m=+0.053386196 container kill 5833cc8b33e89f3a6e95df48940248911d988164a70840a37ce8e475bbf8dec4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a51cfe2-ad26-447b-a406-49386aa040cb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:47:31 standalone.localdomain dnsmasq[550981]: read /var/lib/neutron/dhcp/9a51cfe2-ad26-447b-a406-49386aa040cb/addn_hosts - 1 addresses
Oct 13 15:47:31 standalone.localdomain dnsmasq-dhcp[550981]: read /var/lib/neutron/dhcp/9a51cfe2-ad26-447b-a406-49386aa040cb/host
Oct 13 15:47:31 standalone.localdomain dnsmasq-dhcp[550981]: read /var/lib/neutron/dhcp/9a51cfe2-ad26-447b-a406-49386aa040cb/opts
Oct 13 15:47:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3997: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #213. Immutable memtables: 0.
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.817302) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 213
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370451817345, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 2116, "num_deletes": 251, "total_data_size": 2019264, "memory_usage": 2059392, "flush_reason": "Manual Compaction"}
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #214: started
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370451828639, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 214, "file_size": 1952351, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92455, "largest_seqno": 94570, "table_properties": {"data_size": 1943963, "index_size": 5154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17487, "raw_average_key_size": 20, "raw_value_size": 1926948, "raw_average_value_size": 2230, "num_data_blocks": 228, "num_entries": 864, "num_filter_entries": 864, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760370257, "oldest_key_time": 1760370257, "file_creation_time": 1760370451, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 214, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 11407 microseconds, and 6445 cpu microseconds.
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.828692) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #214: 1952351 bytes OK
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.828732) [db/memtable_list.cc:519] [default] Level-0 commit table #214 started
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.830216) [db/memtable_list.cc:722] [default] Level-0 commit table #214: memtable #1 done
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.830238) EVENT_LOG_v1 {"time_micros": 1760370451830230, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.830261) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 2010225, prev total WAL file size 2010225, number of live WAL files 2.
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000210.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.831021) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039353338' seq:72057594037927935, type:22 .. '7061786F730039373930' seq:0, type:0; will stop at (end)
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [214(1906KB)], [212(5723KB)]
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370451831070, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [214], "files_L6": [212], "score": -1, "input_data_size": 7813137, "oldest_snapshot_seqno": -1}
Oct 13 15:47:31 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:31.866 496978 INFO neutron.agent.dhcp.agent [None req-45cf1813-9303-4dcb-937b-dda5c50eca44 - - - - - -] DHCP configuration for ports {'58ab8037-2950-4134-bbd3-c447f10ce57f'} is completed
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #215: 7648 keys, 6793487 bytes, temperature: kUnknown
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370451867636, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 215, "file_size": 6793487, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6749214, "index_size": 24077, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 201439, "raw_average_key_size": 26, "raw_value_size": 6616939, "raw_average_value_size": 865, "num_data_blocks": 942, "num_entries": 7648, "num_filter_entries": 7648, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760370451, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.867947) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 6793487 bytes
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.869723) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.2 rd, 185.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 5.6 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 8168, records dropped: 520 output_compression: NoCompression
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.869753) EVENT_LOG_v1 {"time_micros": 1760370451869739, "job": 134, "event": "compaction_finished", "compaction_time_micros": 36648, "compaction_time_cpu_micros": 24585, "output_level": 6, "num_output_files": 1, "total_output_size": 6793487, "num_input_records": 8168, "num_output_records": 7648, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000214.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370451870162, "job": 134, "event": "table_file_deletion", "file_number": 214}
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370451870937, "job": 134, "event": "table_file_deletion", "file_number": 212}
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.830919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.871053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.871061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.871065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.871069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:47:31 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:31.871074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:47:32 standalone.localdomain podman[551247]: 2025-10-13 15:47:32.006202366 +0000 UTC m=+0.045464220 container kill d3cf2adfb03253e75b8b09962145daff2197904165713e359970b717ac6d7d2c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:47:32 standalone.localdomain dnsmasq[550871]: exiting on receipt of SIGTERM
Oct 13 15:47:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:32.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:32 standalone.localdomain systemd[1]: libpod-d3cf2adfb03253e75b8b09962145daff2197904165713e359970b717ac6d7d2c.scope: Deactivated successfully.
Oct 13 15:47:32 standalone.localdomain podman[551264]: 2025-10-13 15:47:32.115843945 +0000 UTC m=+0.045034125 container died d3cf2adfb03253e75b8b09962145daff2197904165713e359970b717ac6d7d2c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:47:32 standalone.localdomain podman[551264]: 2025-10-13 15:47:32.144082326 +0000 UTC m=+0.073272476 container cleanup d3cf2adfb03253e75b8b09962145daff2197904165713e359970b717ac6d7d2c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:47:32 standalone.localdomain systemd[1]: libpod-conmon-d3cf2adfb03253e75b8b09962145daff2197904165713e359970b717ac6d7d2c.scope: Deactivated successfully.
Oct 13 15:47:32 standalone.localdomain podman[551263]: 2025-10-13 15:47:32.180295276 +0000 UTC m=+0.108868747 container remove d3cf2adfb03253e75b8b09962145daff2197904165713e359970b717ac6d7d2c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:47:32 standalone.localdomain kernel: device tap6abac8a0-f9 left promiscuous mode
Oct 13 15:47:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:32.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:32Z|00565|binding|INFO|Releasing lport 6abac8a0-f930-4687-b13f-1e77c1b12419 from this chassis (sb_readonly=0)
Oct 13 15:47:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:32Z|00566|binding|INFO|Setting lport 6abac8a0-f930-4687-b13f-1e77c1b12419 down in Southbound
Oct 13 15:47:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:32.427 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port ee494704-419c-443e-a4ef-4cf7ad22b3dd with type ""
Oct 13 15:47:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:32.428 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-fd4b300a-8407-493c-a4cb-d44c73cd92ff', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4b300a-8407-493c-a4cb-d44c73cd92ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfb95b1f-1952-4e12-a16c-228508edbab2, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=6abac8a0-f930-4687-b13f-1e77c1b12419) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:32.429 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 6abac8a0-f930-4687-b13f-1e77c1b12419 in datapath fd4b300a-8407-493c-a4cb-d44c73cd92ff unbound from our chassis
Oct 13 15:47:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:32.431 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd4b300a-8407-493c-a4cb-d44c73cd92ff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:32.432 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[49995ab6-7f7e-4b44-a789-7c1295f4536c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:32.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:32 standalone.localdomain podman[551315]: 
Oct 13 15:47:32 standalone.localdomain podman[551315]: 2025-10-13 15:47:32.474193184 +0000 UTC m=+0.097609746 container create f59f61cdd48e4ca9922837221dfd97e97f75d642cb924dd8acf04016f282a5c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b5a1eaf-f859-44dd-8036-8ec5e471e646, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:47:32 standalone.localdomain systemd[1]: Started libpod-conmon-f59f61cdd48e4ca9922837221dfd97e97f75d642cb924dd8acf04016f282a5c9.scope.
Oct 13 15:47:32 standalone.localdomain podman[551315]: 2025-10-13 15:47:32.431885144 +0000 UTC m=+0.055301716 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:32 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:32 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fd686c1d5cbc3492536adc36af1051998859accc41a705f3b6fa2669820a1a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:32 standalone.localdomain podman[551315]: 2025-10-13 15:47:32.546665694 +0000 UTC m=+0.170082256 container init f59f61cdd48e4ca9922837221dfd97e97f75d642cb924dd8acf04016f282a5c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b5a1eaf-f859-44dd-8036-8ec5e471e646, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:47:32 standalone.localdomain podman[551315]: 2025-10-13 15:47:32.555978364 +0000 UTC m=+0.179394926 container start f59f61cdd48e4ca9922837221dfd97e97f75d642cb924dd8acf04016f282a5c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b5a1eaf-f859-44dd-8036-8ec5e471e646, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:32 standalone.localdomain dnsmasq[551334]: started, version 2.85 cachesize 150
Oct 13 15:47:32 standalone.localdomain dnsmasq[551334]: DNS service limited to local subnets
Oct 13 15:47:32 standalone.localdomain dnsmasq[551334]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:32 standalone.localdomain dnsmasq[551334]: warning: no upstream servers configured
Oct 13 15:47:32 standalone.localdomain dnsmasq-dhcp[551334]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:47:32 standalone.localdomain dnsmasq[551334]: read /var/lib/neutron/dhcp/7b5a1eaf-f859-44dd-8036-8ec5e471e646/addn_hosts - 0 addresses
Oct 13 15:47:32 standalone.localdomain dnsmasq-dhcp[551334]: read /var/lib/neutron/dhcp/7b5a1eaf-f859-44dd-8036-8ec5e471e646/host
Oct 13 15:47:32 standalone.localdomain dnsmasq-dhcp[551334]: read /var/lib/neutron/dhcp/7b5a1eaf-f859-44dd-8036-8ec5e471e646/opts
Oct 13 15:47:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:32.650 496978 INFO neutron.agent.dhcp.agent [None req-c10a52d0-8b8e-424e-8a4f-0827c74337e4 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:47:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:47:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:32.746 496978 INFO neutron.agent.dhcp.agent [None req-fa690632-ff44-4b80-bdd7-024fb1a28bc9 - - - - - -] DHCP configuration for ports {'704196e2-4b0a-44b7-b089-4b95a2028a73'} is completed
Oct 13 15:47:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8555981688f39cc8771382ae7970cb3d00a4604d0ec01faf8d8c9933fc3eccdc-merged.mount: Deactivated successfully.
Oct 13 15:47:32 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d3cf2adfb03253e75b8b09962145daff2197904165713e359970b717ac6d7d2c-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:32 standalone.localdomain ceph-mon[29756]: pgmap v3997: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:33 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:33.064 2 INFO neutron.agent.securitygroups_rpc [None req-3c404a25-90b5-479b-80b0-c56c4c248327 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['14ab85b1-cdcd-45a1-97c4-5ae3cebf76b7', '81d762f2-ab19-4999-b1e3-c646428dcad5', 'e29339c2-7522-492d-9813-b73972b3db72']
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.077 496978 INFO neutron.agent.linux.ip_lib [None req-8f3059b1-3690-4a09-be68-0cbcc2ac062e - - - - - -] Device tap768a2903-b9 cannot be used as it has no MAC address
Oct 13 15:47:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:33.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:33 standalone.localdomain dnsmasq[550981]: read /var/lib/neutron/dhcp/9a51cfe2-ad26-447b-a406-49386aa040cb/addn_hosts - 0 addresses
Oct 13 15:47:33 standalone.localdomain kernel: device tap768a2903-b9 entered promiscuous mode
Oct 13 15:47:33 standalone.localdomain dnsmasq-dhcp[550981]: read /var/lib/neutron/dhcp/9a51cfe2-ad26-447b-a406-49386aa040cb/host
Oct 13 15:47:33 standalone.localdomain podman[551379]: 2025-10-13 15:47:33.145227474 +0000 UTC m=+0.087546892 container kill 5833cc8b33e89f3a6e95df48940248911d988164a70840a37ce8e475bbf8dec4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a51cfe2-ad26-447b-a406-49386aa040cb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:47:33 standalone.localdomain dnsmasq-dhcp[550981]: read /var/lib/neutron/dhcp/9a51cfe2-ad26-447b-a406-49386aa040cb/opts
Oct 13 15:47:33 standalone.localdomain NetworkManager[5962]: <info>  [1760370453.1467] manager: (tap768a2903-b9): new Generic device (/org/freedesktop/NetworkManager/Devices/102)
Oct 13 15:47:33 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:33Z|00567|binding|INFO|Claiming lport 768a2903-b967-4e86-ae92-fc2a1242b65b for this chassis.
Oct 13 15:47:33 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:33Z|00568|binding|INFO|768a2903-b967-4e86-ae92-fc2a1242b65b: Claiming unknown
Oct 13 15:47:33 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:33Z|00569|binding|INFO|Setting lport 768a2903-b967-4e86-ae92-fc2a1242b65b ovn-installed in OVS
Oct 13 15:47:33 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:33Z|00570|binding|INFO|Setting lport 768a2903-b967-4e86-ae92-fc2a1242b65b up in Southbound
Oct 13 15:47:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:33.159 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe8a:bd77/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=768a2903-b967-4e86-ae92-fc2a1242b65b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:33.160 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 768a2903-b967-4e86-ae92-fc2a1242b65b in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:47:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:33.162 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 74436405-b09f-40e8-aa5a-3f0e72ab25db IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:47:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:33.162 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:33.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:33.163 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[410b3e9a-1c7f-4877-ac53-14ea4aacb602]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:33.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:33 standalone.localdomain dnsmasq[549910]: exiting on receipt of SIGTERM
Oct 13 15:47:33 standalone.localdomain systemd[1]: libpod-55fb45d69123a27f76dd28da3d2871d25b823df30548be601f72877565b7c436.scope: Deactivated successfully.
Oct 13 15:47:33 standalone.localdomain podman[551401]: 2025-10-13 15:47:33.224715484 +0000 UTC m=+0.078955445 container kill 55fb45d69123a27f76dd28da3d2871d25b823df30548be601f72877565b7c436 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-265b8599-466a-474a-bf1f-214d900622c4, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 15:47:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:33.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:33 standalone.localdomain podman[551437]: 2025-10-13 15:47:33.280750282 +0000 UTC m=+0.051874080 container died 55fb45d69123a27f76dd28da3d2871d25b823df30548be601f72877565b7c436 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-265b8599-466a-474a-bf1f-214d900622c4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:47:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:33.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.297 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:32Z, description=, device_id=d79db68a-35e2-4940-9b61-846765fe1630, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188916c100>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890347c0>], id=778a20ca-91df-4274-82e8-5cc6b4247e39, ip_allocation=immediate, mac_address=fa:16:3e:7f:93:b6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2084, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:47:32Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:47:33 standalone.localdomain podman[551437]: 2025-10-13 15:47:33.30666242 +0000 UTC m=+0.077786198 container cleanup 55fb45d69123a27f76dd28da3d2871d25b823df30548be601f72877565b7c436 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-265b8599-466a-474a-bf1f-214d900622c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:47:33 standalone.localdomain systemd[1]: libpod-conmon-55fb45d69123a27f76dd28da3d2871d25b823df30548be601f72877565b7c436.scope: Deactivated successfully.
Oct 13 15:47:33 standalone.localdomain dnsmasq[551142]: read /var/lib/neutron/dhcp/fd4b300a-8407-493c-a4cb-d44c73cd92ff/addn_hosts - 0 addresses
Oct 13 15:47:33 standalone.localdomain dnsmasq-dhcp[551142]: read /var/lib/neutron/dhcp/fd4b300a-8407-493c-a4cb-d44c73cd92ff/host
Oct 13 15:47:33 standalone.localdomain podman[551472]: 2025-10-13 15:47:33.326996284 +0000 UTC m=+0.039652528 container kill bc2689c9ea8e67dcf37de525a610863f5ef20a998b7fb30f6360f4db5cecd8bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd4b300a-8407-493c-a4cb-d44c73cd92ff, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:47:33 standalone.localdomain dnsmasq-dhcp[551142]: read /var/lib/neutron/dhcp/fd4b300a-8407-493c-a4cb-d44c73cd92ff/opts
Oct 13 15:47:33 standalone.localdomain podman[551448]: 2025-10-13 15:47:33.369285613 +0000 UTC m=+0.116888597 container remove 55fb45d69123a27f76dd28da3d2871d25b823df30548be601f72877565b7c436 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-265b8599-466a-474a-bf1f-214d900622c4, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent [None req-a8c872b0-75f1-42dd-a1ee-a2823694f12e - - - - - -] Unable to reload_allocations dhcp for fd4b300a-8407-493c-a4cb-d44c73cd92ff.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap6abac8a0-f9 not found in namespace qdhcp-fd4b300a-8407-493c-a4cb-d44c73cd92ff.
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     return fut.result()
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     raise self._exception
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap6abac8a0-f9 not found in namespace qdhcp-fd4b300a-8407-493c-a4cb-d44c73cd92ff.
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.382 496978 ERROR neutron.agent.dhcp.agent 
Oct 13 15:47:33 standalone.localdomain podman[551521]: 2025-10-13 15:47:33.499587998 +0000 UTC m=+0.053608303 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:47:33 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:47:33 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:47:33 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.584 496978 INFO neutron.agent.dhcp.agent [None req-6aaf6ef3-eb8f-4849-8861-fe4eb6026dd2 - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:47:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3998: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5e282d97a99871c626acf87fac9888f3e7fe911a2642833eb154fe1afa9eb872-merged.mount: Deactivated successfully.
Oct 13 15:47:33 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-55fb45d69123a27f76dd28da3d2871d25b823df30548be601f72877565b7c436-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:33 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d265b8599\x2d466a\x2d474a\x2dbf1f\x2d214d900622c4.mount: Deactivated successfully.
Oct 13 15:47:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:33.803 496978 INFO neutron.agent.dhcp.agent [None req-b5028593-1166-41ef-89cf-ca1c2977b39e - - - - - -] DHCP configuration for ports {'778a20ca-91df-4274-82e8-5cc6b4247e39'} is completed
Oct 13 15:47:33 standalone.localdomain podman[551589]: 
Oct 13 15:47:33 standalone.localdomain podman[551589]: 2025-10-13 15:47:33.928066404 +0000 UTC m=+0.083130175 container create d99417dd3359d57fee0a1cd860f4853eb8fdaf55222dea386d72ee0b6dbf5030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:47:33 standalone.localdomain systemd[1]: Started libpod-conmon-d99417dd3359d57fee0a1cd860f4853eb8fdaf55222dea386d72ee0b6dbf5030.scope.
Oct 13 15:47:33 standalone.localdomain podman[551589]: 2025-10-13 15:47:33.876646179 +0000 UTC m=+0.031709620 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:33 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:33 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:33Z|00571|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d537fcd3973a24e3b361b28e717d5297c0a3d189f527f1f8d0fbd76ebb865682/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:33 standalone.localdomain podman[551589]: 2025-10-13 15:47:33.992674309 +0000 UTC m=+0.147737660 container init d99417dd3359d57fee0a1cd860f4853eb8fdaf55222dea386d72ee0b6dbf5030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:47:34 standalone.localdomain podman[551589]: 2025-10-13 15:47:34.008618905 +0000 UTC m=+0.163682296 container start d99417dd3359d57fee0a1cd860f4853eb8fdaf55222dea386d72ee0b6dbf5030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:47:34 standalone.localdomain dnsmasq[551609]: started, version 2.85 cachesize 150
Oct 13 15:47:34 standalone.localdomain dnsmasq[551609]: DNS service limited to local subnets
Oct 13 15:47:34 standalone.localdomain dnsmasq[551609]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:34 standalone.localdomain dnsmasq[551609]: warning: no upstream servers configured
Oct 13 15:47:34 standalone.localdomain dnsmasq-dhcp[551609]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Oct 13 15:47:34 standalone.localdomain dnsmasq-dhcp[551609]: DHCPv6, static leases only on 2001:db8:0:2::, lease time 1d
Oct 13 15:47:34 standalone.localdomain dnsmasq-dhcp[551609]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:34 standalone.localdomain dnsmasq[551609]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/addn_hosts - 1 addresses
Oct 13 15:47:34 standalone.localdomain dnsmasq-dhcp[551609]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/host
Oct 13 15:47:34 standalone.localdomain dnsmasq-dhcp[551609]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/opts
Oct 13 15:47:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:34.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:34 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:34.228 2 INFO neutron.agent.securitygroups_rpc [None req-a2aaff3b-6eee-4900-b7b7-c62f03f5f596 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['14ab85b1-cdcd-45a1-97c4-5ae3cebf76b7', 'e29339c2-7522-492d-9813-b73972b3db72']
Oct 13 15:47:34 standalone.localdomain podman[551630]: 
Oct 13 15:47:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:34.247 496978 INFO neutron.agent.dhcp.agent [None req-419eb3b5-a082-49a0-8234-e62d50eed4de - - - - - -] DHCP configuration for ports {'06dc6f46-a50b-481f-a741-bfad8623c5d3', 'e2f9cc6e-dfa2-4cc5-9e74-6ca425ef35a5', 'e3d92db7-5106-4a58-83fb-0c74a16b72a2', '257714f6-d5b6-4677-ad9d-5b41a2ff2fca'} is completed
Oct 13 15:47:34 standalone.localdomain podman[551630]: 2025-10-13 15:47:34.257626313 +0000 UTC m=+0.086082376 container create 165516649e68c979a7f6ebe78a67d7dccac2fbd34c0ee1363dd7aa1590d820fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:34 standalone.localdomain systemd[1]: Started libpod-conmon-165516649e68c979a7f6ebe78a67d7dccac2fbd34c0ee1363dd7aa1590d820fe.scope.
Oct 13 15:47:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:34.305 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:e7:a4 10.100.0.2 2001:db8::f816:3eff:fe02:e7a4'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe02:e7a4/64', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b3d945e2-458a-437c-9a37-dcc9416fabab) old=Port_Binding(mac=['fa:16:3e:02:e7:a4 2001:db8::f816:3eff:fe02:e7a4'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe02:e7a4/64', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:34.308 378821 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b3d945e2-458a-437c-9a37-dcc9416fabab in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 updated
Oct 13 15:47:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:34.313 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 74436405-b09f-40e8-aa5a-3f0e72ab25db IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:47:34 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:34.313 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:34.315 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[fec69d3e-d666-4d8b-b8b4-757bc4a199ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:34 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bb4ab4a7f562799fa9472b8a52e7467a9294f72f85fea66c07f0cfdb985b96/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:34 standalone.localdomain podman[551630]: 2025-10-13 15:47:34.21840861 +0000 UTC m=+0.046864663 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:34 standalone.localdomain podman[551630]: 2025-10-13 15:47:34.327912475 +0000 UTC m=+0.156368538 container init 165516649e68c979a7f6ebe78a67d7dccac2fbd34c0ee1363dd7aa1590d820fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:34 standalone.localdomain podman[551630]: 2025-10-13 15:47:34.340994244 +0000 UTC m=+0.169450317 container start 165516649e68c979a7f6ebe78a67d7dccac2fbd34c0ee1363dd7aa1590d820fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:47:34 standalone.localdomain dnsmasq[551648]: started, version 2.85 cachesize 150
Oct 13 15:47:34 standalone.localdomain dnsmasq[551648]: DNS service limited to local subnets
Oct 13 15:47:34 standalone.localdomain dnsmasq[551648]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:34 standalone.localdomain dnsmasq[551648]: warning: no upstream servers configured
Oct 13 15:47:34 standalone.localdomain dnsmasq[551648]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:34.411 496978 INFO neutron.agent.dhcp.agent [None req-8f3059b1-3690-4a09-be68-0cbcc2ac062e - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:47:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:34.412 496978 INFO neutron.agent.dhcp.agent [None req-a88d60b0-2a99-4a06-a84c-2b8d149c12aa - - - - - -] Synchronizing state
Oct 13 15:47:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:34.498 496978 INFO neutron.agent.dhcp.agent [None req-21edf800-2bf9-4676-8df3-a368a73eb2c2 - - - - - -] DHCP configuration for ports {'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:47:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:34.591 496978 INFO neutron.agent.dhcp.agent [None req-819de79b-481d-486c-ba78-6fb6cf9c28b8 - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:47:34 standalone.localdomain dnsmasq[551142]: exiting on receipt of SIGTERM
Oct 13 15:47:34 standalone.localdomain podman[551667]: 2025-10-13 15:47:34.788578216 +0000 UTC m=+0.069923792 container kill bc2689c9ea8e67dcf37de525a610863f5ef20a998b7fb30f6360f4db5cecd8bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd4b300a-8407-493c-a4cb-d44c73cd92ff, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:47:34 standalone.localdomain systemd[1]: libpod-bc2689c9ea8e67dcf37de525a610863f5ef20a998b7fb30f6360f4db5cecd8bc.scope: Deactivated successfully.
Oct 13 15:47:34 standalone.localdomain ceph-mon[29756]: pgmap v3998: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:34 standalone.localdomain podman[551682]: 2025-10-13 15:47:34.872226175 +0000 UTC m=+0.057184065 container died bc2689c9ea8e67dcf37de525a610863f5ef20a998b7fb30f6360f4db5cecd8bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd4b300a-8407-493c-a4cb-d44c73cd92ff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:47:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc2689c9ea8e67dcf37de525a610863f5ef20a998b7fb30f6360f4db5cecd8bc-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:34 standalone.localdomain podman[551682]: 2025-10-13 15:47:34.90733295 +0000 UTC m=+0.092290770 container cleanup bc2689c9ea8e67dcf37de525a610863f5ef20a998b7fb30f6360f4db5cecd8bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd4b300a-8407-493c-a4cb-d44c73cd92ff, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:34 standalone.localdomain systemd[1]: libpod-conmon-bc2689c9ea8e67dcf37de525a610863f5ef20a998b7fb30f6360f4db5cecd8bc.scope: Deactivated successfully.
Oct 13 15:47:34 standalone.localdomain podman[551683]: 2025-10-13 15:47:34.949740493 +0000 UTC m=+0.132925247 container remove bc2689c9ea8e67dcf37de525a610863f5ef20a998b7fb30f6360f4db5cecd8bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd4b300a-8407-493c-a4cb-d44c73cd92ff, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:34.972 496978 INFO neutron.agent.dhcp.agent [None req-af71c68e-bc38-47d3-b309-a26180b55ba5 - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:47:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:34.973 496978 INFO neutron.agent.dhcp.agent [-] Starting network 147b6b76-2e5c-4c83-8fde-2c38b71b5fee dhcp configuration
Oct 13 15:47:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:34.977 496978 INFO neutron.agent.dhcp.agent [-] Starting network 265b8599-466a-474a-bf1f-214d900622c4 dhcp configuration
Oct 13 15:47:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:34.977 496978 INFO neutron.agent.dhcp.agent [-] Finished network 265b8599-466a-474a-bf1f-214d900622c4 dhcp configuration
Oct 13 15:47:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:34.978 496978 INFO neutron.agent.dhcp.agent [-] Starting network fac02be2-073c-46c5-9613-b3e18596338a dhcp configuration
Oct 13 15:47:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:34.978 496978 INFO neutron.agent.dhcp.agent [-] Finished network fac02be2-073c-46c5-9613-b3e18596338a dhcp configuration
Oct 13 15:47:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:35.201 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port f86c2eae-8f1b-41e2-8630-b15eecd5cd74 with type ""
Oct 13 15:47:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:35Z|00572|binding|INFO|Removing iface tapbb828ff7-92 ovn-installed in OVS
Oct 13 15:47:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:35Z|00573|binding|INFO|Removing lport bb828ff7-92f8-4812-b055-786e33f7bcaa ovn-installed in OVS
Oct 13 15:47:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:35.203 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-9a51cfe2-ad26-447b-a406-49386aa040cb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a51cfe2-ad26-447b-a406-49386aa040cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9f3031b-af4e-4609-9fa8-4267aa88ece9, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=bb828ff7-92f8-4812-b055-786e33f7bcaa) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:35.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:35.207 378821 INFO neutron.agent.ovn.metadata.agent [-] Port bb828ff7-92f8-4812-b055-786e33f7bcaa in datapath 9a51cfe2-ad26-447b-a406-49386aa040cb unbound from our chassis
Oct 13 15:47:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:35.210 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9a51cfe2-ad26-447b-a406-49386aa040cb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:35.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:35.211 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[738cffa9-36cb-458d-9a75-0e7622c54a97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v3999: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d119a5b8ceddf2ecd6a01ae48cd7f9b8d997c63486c66d31fe7df5e955679835-merged.mount: Deactivated successfully.
Oct 13 15:47:35 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dfd4b300a\x2d8407\x2d493c\x2da4cb\x2dd44c73cd92ff.mount: Deactivated successfully.
Oct 13 15:47:35 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:35.832 496978 INFO neutron.agent.linux.ip_lib [None req-c71a99c1-0697-488e-913c-32537b6651ef - - - - - -] Device tapc592a2b8-b8 cannot be used as it has no MAC address
Oct 13 15:47:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:35.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:35 standalone.localdomain kernel: device tapc592a2b8-b8 entered promiscuous mode
Oct 13 15:47:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:35Z|00574|binding|INFO|Claiming lport c592a2b8-b84c-464a-ac06-59686b702837 for this chassis.
Oct 13 15:47:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:35.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:35Z|00575|binding|INFO|c592a2b8-b84c-464a-ac06-59686b702837: Claiming unknown
Oct 13 15:47:35 standalone.localdomain NetworkManager[5962]: <info>  [1760370455.8664] manager: (tapc592a2b8-b8): new Generic device (/org/freedesktop/NetworkManager/Devices/103)
Oct 13 15:47:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:35.877 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-147b6b76-2e5c-4c83-8fde-2c38b71b5fee', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-147b6b76-2e5c-4c83-8fde-2c38b71b5fee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89713a2255ea411083c22360247bdb7c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109bd2fe-b0bb-46f1-a8dd-33163a315639, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=c592a2b8-b84c-464a-ac06-59686b702837) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:35.879 378821 INFO neutron.agent.ovn.metadata.agent [-] Port c592a2b8-b84c-464a-ac06-59686b702837 in datapath 147b6b76-2e5c-4c83-8fde-2c38b71b5fee bound to our chassis
Oct 13 15:47:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:35Z|00576|binding|INFO|Setting lport c592a2b8-b84c-464a-ac06-59686b702837 ovn-installed in OVS
Oct 13 15:47:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:35Z|00577|binding|INFO|Setting lport c592a2b8-b84c-464a-ac06-59686b702837 up in Southbound
Oct 13 15:47:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:35.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:35.882 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port dce1ebaa-bed9-466e-a941-62943f6c4fd0 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:47:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:35.883 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 147b6b76-2e5c-4c83-8fde-2c38b71b5fee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:35.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:35.884 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[f7364b9e-e85a-4cf5-b501-7cba41f88151]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:35.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:35.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:35 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:35.984 2 INFO neutron.agent.securitygroups_rpc [None req-90849701-eb2c-4f47-9175-e27d26b314c9 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:47:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:36Z|00578|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:36.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:36 standalone.localdomain ceph-mon[29756]: pgmap v3999: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:37 standalone.localdomain podman[551773]: 2025-10-13 15:47:37.029910691 +0000 UTC m=+0.090838614 container create 89f54d7a6b87efb420bd5eb082faf2830df7b4189b29b753e4dd4c2a3c4939df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-147b6b76-2e5c-4c83-8fde-2c38b71b5fee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:37.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:37 standalone.localdomain systemd[1]: Started libpod-conmon-89f54d7a6b87efb420bd5eb082faf2830df7b4189b29b753e4dd4c2a3c4939df.scope.
Oct 13 15:47:37 standalone.localdomain systemd[1]: tmp-crun.I1UILu.mount: Deactivated successfully.
Oct 13 15:47:37 standalone.localdomain podman[551773]: 2025-10-13 15:47:36.988584212 +0000 UTC m=+0.049512215 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:37 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:37 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c30c690e803ec38f20d7580efe2545a1ab67b0b0b5defafe10fef94bd70fa03/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:37 standalone.localdomain podman[551773]: 2025-10-13 15:47:37.133201893 +0000 UTC m=+0.194129846 container init 89f54d7a6b87efb420bd5eb082faf2830df7b4189b29b753e4dd4c2a3c4939df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-147b6b76-2e5c-4c83-8fde-2c38b71b5fee, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 13 15:47:37 standalone.localdomain podman[551773]: 2025-10-13 15:47:37.143210455 +0000 UTC m=+0.204138408 container start 89f54d7a6b87efb420bd5eb082faf2830df7b4189b29b753e4dd4c2a3c4939df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-147b6b76-2e5c-4c83-8fde-2c38b71b5fee, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 13 15:47:37 standalone.localdomain dnsmasq[551791]: started, version 2.85 cachesize 150
Oct 13 15:47:37 standalone.localdomain dnsmasq[551791]: DNS service limited to local subnets
Oct 13 15:47:37 standalone.localdomain dnsmasq[551791]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:37 standalone.localdomain dnsmasq[551791]: warning: no upstream servers configured
Oct 13 15:47:37 standalone.localdomain dnsmasq-dhcp[551791]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:47:37 standalone.localdomain dnsmasq[551791]: read /var/lib/neutron/dhcp/147b6b76-2e5c-4c83-8fde-2c38b71b5fee/addn_hosts - 0 addresses
Oct 13 15:47:37 standalone.localdomain dnsmasq-dhcp[551791]: read /var/lib/neutron/dhcp/147b6b76-2e5c-4c83-8fde-2c38b71b5fee/host
Oct 13 15:47:37 standalone.localdomain dnsmasq-dhcp[551791]: read /var/lib/neutron/dhcp/147b6b76-2e5c-4c83-8fde-2c38b71b5fee/opts
Oct 13 15:47:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:37.193 496978 INFO neutron.agent.dhcp.agent [None req-68a5417c-0f5f-4c35-a9e2-e42736729fc0 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:47:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:37.194 496978 INFO neutron.agent.dhcp.agent [None req-68a5417c-0f5f-4c35-a9e2-e42736729fc0 - - - - - -] Finished network 147b6b76-2e5c-4c83-8fde-2c38b71b5fee dhcp configuration
Oct 13 15:47:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:37.196 496978 INFO neutron.agent.dhcp.agent [None req-af71c68e-bc38-47d3-b309-a26180b55ba5 - - - - - -] Synchronizing state complete
Oct 13 15:47:37 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:37.196 2 INFO neutron.agent.securitygroups_rpc [None req-665e810c-7d09-4c63-8873-f0cf8d8809d4 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:47:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:37.201 496978 INFO neutron.agent.dhcp.agent [None req-693c0c7b-083f-453b-b82e-f31533888ab0 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:27Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188908da90>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188908d1f0>], id=257714f6-d5b6-4677-ad9d-5b41a2ff2fca, ip_allocation=immediate, mac_address=fa:16:3e:74:cb:d4, name=tempest-PortsIpV6TestJSON-2141632524, network_id=a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, port_security_enabled=True, project_id=4ce51b6cd7de42daa5659d6797c11a37, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['14ab85b1-cdcd-45a1-97c4-5ae3cebf76b7', 'e29339c2-7522-492d-9813-b73972b3db72'], standard_attr_id=2063, status=DOWN, tags=[], tenant_id=4ce51b6cd7de42daa5659d6797c11a37, updated_at=2025-10-13T15:47:32Z on network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e
Oct 13 15:47:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:37.216 496978 INFO neutron.agent.dhcp.agent [None req-6aaf6ef3-eb8f-4849-8861-fe4eb6026dd2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:37.218 496978 INFO neutron.agent.dhcp.agent [None req-6aaf6ef3-eb8f-4849-8861-fe4eb6026dd2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:37.219 496978 INFO neutron.agent.dhcp.agent [None req-6aaf6ef3-eb8f-4849-8861-fe4eb6026dd2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:37.241 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:36Z, description=, device_id=6ac3ab42-77a9-4a6b-9bc9-fceb3b634877, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fe6610>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fe66a0>], id=124d2740-3281-4c52-b73f-2a590180be2c, ip_allocation=immediate, mac_address=fa:16:3e:35:5d:fc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2097, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:47:36Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:47:37 standalone.localdomain kernel: device tapbb828ff7-92 left promiscuous mode
Oct 13 15:47:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:37.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:37.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:37 standalone.localdomain dnsmasq[551609]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/addn_hosts - 1 addresses
Oct 13 15:47:37 standalone.localdomain dnsmasq-dhcp[551609]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/host
Oct 13 15:47:37 standalone.localdomain dnsmasq-dhcp[551609]: read /var/lib/neutron/dhcp/a92a537c-4a4b-4bdd-9c48-bafcd5dc371e/opts
Oct 13 15:47:37 standalone.localdomain podman[551821]: 2025-10-13 15:47:37.447978372 +0000 UTC m=+0.108084192 container kill d99417dd3359d57fee0a1cd860f4853eb8fdaf55222dea386d72ee0b6dbf5030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:47:37 standalone.localdomain dnsmasq[550981]: exiting on receipt of SIGTERM
Oct 13 15:47:37 standalone.localdomain podman[551839]: 2025-10-13 15:47:37.510325487 +0000 UTC m=+0.114573104 container kill 5833cc8b33e89f3a6e95df48940248911d988164a70840a37ce8e475bbf8dec4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a51cfe2-ad26-447b-a406-49386aa040cb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:37 standalone.localdomain systemd[1]: libpod-5833cc8b33e89f3a6e95df48940248911d988164a70840a37ce8e475bbf8dec4.scope: Deactivated successfully.
Oct 13 15:47:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:37.529 496978 INFO neutron.agent.dhcp.agent [None req-07f148e0-6b48-4194-95fb-e00a827988d9 - - - - - -] DHCP configuration for ports {'7c2e91f9-5fb0-43e9-ae21-27d289fd2383'} is completed
Oct 13 15:47:37 standalone.localdomain podman[551857]: 2025-10-13 15:47:37.583786329 +0000 UTC m=+0.054191582 container died 5833cc8b33e89f3a6e95df48940248911d988164a70840a37ce8e475bbf8dec4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a51cfe2-ad26-447b-a406-49386aa040cb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:47:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:47:37 standalone.localdomain podman[551857]: 2025-10-13 15:47:37.695677049 +0000 UTC m=+0.166082282 container remove 5833cc8b33e89f3a6e95df48940248911d988164a70840a37ce8e475bbf8dec4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a51cfe2-ad26-447b-a406-49386aa040cb, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:37 standalone.localdomain systemd[1]: libpod-conmon-5833cc8b33e89f3a6e95df48940248911d988164a70840a37ce8e475bbf8dec4.scope: Deactivated successfully.
Oct 13 15:47:37 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:47:37 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:47:37 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:47:37 standalone.localdomain podman[551912]: 2025-10-13 15:47:37.732433276 +0000 UTC m=+0.064441461 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:47:37 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:47:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4000: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:37.755 496978 INFO neutron.agent.dhcp.agent [None req-a832e73c-bfd7-4fce-844b-aa9d6a78c13c - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:47:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:37.755 496978 INFO neutron.agent.dhcp.agent [None req-a832e73c-bfd7-4fce-844b-aa9d6a78c13c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:37.756 496978 INFO neutron.agent.dhcp.agent [None req-a832e73c-bfd7-4fce-844b-aa9d6a78c13c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:37.777 496978 INFO neutron.agent.dhcp.agent [None req-16bae8d1-4251-4180-bfdb-a5c206bb9c06 - - - - - -] DHCP configuration for ports {'257714f6-d5b6-4677-ad9d-5b41a2ff2fca'} is completed
Oct 13 15:47:37 standalone.localdomain podman[551939]: 2025-10-13 15:47:37.793898523 +0000 UTC m=+0.072943186 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 15:47:37 standalone.localdomain dnsmasq[551648]: exiting on receipt of SIGTERM
Oct 13 15:47:37 standalone.localdomain podman[551935]: 2025-10-13 15:47:37.809627983 +0000 UTC m=+0.095908892 container kill 165516649e68c979a7f6ebe78a67d7dccac2fbd34c0ee1363dd7aa1590d820fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:47:37 standalone.localdomain systemd[1]: libpod-165516649e68c979a7f6ebe78a67d7dccac2fbd34c0ee1363dd7aa1590d820fe.scope: Deactivated successfully.
Oct 13 15:47:37 standalone.localdomain dnsmasq[551791]: read /var/lib/neutron/dhcp/147b6b76-2e5c-4c83-8fde-2c38b71b5fee/addn_hosts - 0 addresses
Oct 13 15:47:37 standalone.localdomain dnsmasq-dhcp[551791]: read /var/lib/neutron/dhcp/147b6b76-2e5c-4c83-8fde-2c38b71b5fee/host
Oct 13 15:47:37 standalone.localdomain dnsmasq-dhcp[551791]: read /var/lib/neutron/dhcp/147b6b76-2e5c-4c83-8fde-2c38b71b5fee/opts
Oct 13 15:47:37 standalone.localdomain podman[551965]: 2025-10-13 15:47:37.820624596 +0000 UTC m=+0.050245328 container kill 89f54d7a6b87efb420bd5eb082faf2830df7b4189b29b753e4dd4c2a3c4939df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-147b6b76-2e5c-4c83-8fde-2c38b71b5fee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:37 standalone.localdomain podman[551939]: 2025-10-13 15:47:37.825812228 +0000 UTC m=+0.104856901 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Oct 13 15:47:37 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:47:37 standalone.localdomain podman[551993]: 2025-10-13 15:47:37.872190765 +0000 UTC m=+0.051484027 container died 165516649e68c979a7f6ebe78a67d7dccac2fbd34c0ee1363dd7aa1590d820fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:47:37 standalone.localdomain dnsmasq[551609]: exiting on receipt of SIGTERM
Oct 13 15:47:37 standalone.localdomain podman[552046]: 2025-10-13 15:47:37.975634492 +0000 UTC m=+0.054231113 container kill d99417dd3359d57fee0a1cd860f4853eb8fdaf55222dea386d72ee0b6dbf5030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:47:37 standalone.localdomain systemd[1]: libpod-d99417dd3359d57fee0a1cd860f4853eb8fdaf55222dea386d72ee0b6dbf5030.scope: Deactivated successfully.
Oct 13 15:47:37 standalone.localdomain podman[551993]: 2025-10-13 15:47:37.99963149 +0000 UTC m=+0.178924752 container cleanup 165516649e68c979a7f6ebe78a67d7dccac2fbd34c0ee1363dd7aa1590d820fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:47:38 standalone.localdomain systemd[1]: libpod-conmon-165516649e68c979a7f6ebe78a67d7dccac2fbd34c0ee1363dd7aa1590d820fe.scope: Deactivated successfully.
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.009 496978 INFO neutron.agent.dhcp.agent [None req-a3aedc88-bd65-4c79-b48f-cd068ced7db6 - - - - - -] DHCP configuration for ports {'124d2740-3281-4c52-b73f-2a590180be2c'} is completed
Oct 13 15:47:38 standalone.localdomain podman[552007]: 2025-10-13 15:47:38.030236905 +0000 UTC m=+0.189956657 container remove 165516649e68c979a7f6ebe78a67d7dccac2fbd34c0ee1363dd7aa1590d820fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 13 15:47:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-88bb4ab4a7f562799fa9472b8a52e7467a9294f72f85fea66c07f0cfdb985b96-merged.mount: Deactivated successfully.
Oct 13 15:47:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-165516649e68c979a7f6ebe78a67d7dccac2fbd34c0ee1363dd7aa1590d820fe-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f41f7c547a500306311fb5012b939e5c7e280ca284de09d71e35054d1bf83f7e-merged.mount: Deactivated successfully.
Oct 13 15:47:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5833cc8b33e89f3a6e95df48940248911d988164a70840a37ce8e475bbf8dec4-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:38 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d9a51cfe2\x2dad26\x2d447b\x2da406\x2d49386aa040cb.mount: Deactivated successfully.
Oct 13 15:47:38 standalone.localdomain podman[552068]: 2025-10-13 15:47:38.059338912 +0000 UTC m=+0.073729850 container died d99417dd3359d57fee0a1cd860f4853eb8fdaf55222dea386d72ee0b6dbf5030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:47:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d99417dd3359d57fee0a1cd860f4853eb8fdaf55222dea386d72ee0b6dbf5030-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:38 standalone.localdomain podman[552068]: 2025-10-13 15:47:38.08775677 +0000 UTC m=+0.102147698 container cleanup d99417dd3359d57fee0a1cd860f4853eb8fdaf55222dea386d72ee0b6dbf5030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 13 15:47:38 standalone.localdomain systemd[1]: libpod-conmon-d99417dd3359d57fee0a1cd860f4853eb8fdaf55222dea386d72ee0b6dbf5030.scope: Deactivated successfully.
Oct 13 15:47:38 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:47:38 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:47:38 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:47:38 standalone.localdomain podman[552108]: 2025-10-13 15:47:38.116327371 +0000 UTC m=+0.053976305 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:38 standalone.localdomain podman[552077]: 2025-10-13 15:47:38.188885274 +0000 UTC m=+0.181757641 container remove d99417dd3359d57fee0a1cd860f4853eb8fdaf55222dea386d72ee0b6dbf5030 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:47:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:38.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:38 standalone.localdomain kernel: device tape3d92db7-51 left promiscuous mode
Oct 13 15:47:38 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:38Z|00579|binding|INFO|Releasing lport e3d92db7-5106-4a58-83fb-0c74a16b72a2 from this chassis (sb_readonly=0)
Oct 13 15:47:38 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:38Z|00580|binding|INFO|Setting lport e3d92db7-5106-4a58-83fb-0c74a16b72a2 down in Southbound
Oct 13 15:47:38 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:38.213 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::2/64 2001:db8:0:2::2/64 2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a92a537c-4a4b-4bdd-9c48-bafcd5dc371e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce51b6cd7de42daa5659d6797c11a37', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=def8bcce-52c2-4cc8-9e08-a28c0a56cc80, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=e3d92db7-5106-4a58-83fb-0c74a16b72a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:38 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:38.215 378821 INFO neutron.agent.ovn.metadata.agent [-] Port e3d92db7-5106-4a58-83fb-0c74a16b72a2 in datapath a92a537c-4a4b-4bdd-9c48-bafcd5dc371e unbound from our chassis
Oct 13 15:47:38 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:38.217 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a92a537c-4a4b-4bdd-9c48-bafcd5dc371e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:38.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:38 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:38.218 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[50010478-2feb-47e8-8a03-bc2129835e73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:38.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.339 496978 INFO neutron.agent.dhcp.agent [None req-ce33b444-6ef2-435a-b3e9-d282761fee07 - - - - - -] DHCP configuration for ports {'7c2e91f9-5fb0-43e9-ae21-27d289fd2383', 'c592a2b8-b84c-464a-ac06-59686b702837'} is completed
Oct 13 15:47:38 standalone.localdomain ceph-mon[29756]: pgmap v4000: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:47:38 standalone.localdomain podman[552138]: 2025-10-13 15:47:38.816457551 +0000 UTC m=+0.083789865 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-type=git, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Oct 13 15:47:38 standalone.localdomain podman[552138]: 2025-10-13 15:47:38.828429174 +0000 UTC m=+0.095761458 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350)
Oct 13 15:47:38 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent [None req-9e0ea11f-300b-44a9-b32f-c334db958301 - - - - - -] Unable to restart dhcp for f934a9b1-f0ba-494d-9c34-bbdad3043007.: oslo_messaging.rpc.client.RemoteError: Remote error: SubnetInUse Unable to complete operation on subnet bc12a5e1-5cc3-4453-8c62-4f234ecd0ace: This subnet is being modified by another concurrent operation.
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n    res = self.dispatcher.dispatch(message)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n    result = func(ctxt, **new_args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n    return func(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n    return func(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    context_reference.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 
227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 329, in update_dhcp_port\n    return self._port_action(plugin, context, port, \'update_port\')\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 120, in _port_action\n    return plugin.update_port(context, port[\'id\'], port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 728, in inner\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 226, in wrapped\n    return f_with_retry(*args, **kwargs,\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    
context_reference.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1868, in update_port\n    updated_port = super(Ml2Plugin, self).update_port(context, id,\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 224, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/db_base_plugin_v2.py", line 1557, in update_port\n    self.ipam.update_port(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 729, in update_port\n    changes = self.update_port_with_ips(context,\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 455, in update_port_with_ips\n    changes = self._update_ips_for_port(context,\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 379, in _update_ips_for_port\n    subnets = self._ipam_get_subnets(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 686, in _ipam_get_subnets\n    subnet.read_lock_register(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/models_v2.py", line 81, in read_lock_register\n    raise exception\n', 'neutron_lib.exceptions.SubnetInUse: Unable to complete operation on subnet bc12a5e1-5cc3-4453-8c62-4f234ecd0ace: This subnet is being modified by another concurrent operation.\n'].
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 207, in restart
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     self.enable()
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 324, in enable
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     common_utils.wait_until_true(self._enable, timeout=300)
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 744, in wait_until_true
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     while not predicate():
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 336, in _enable
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     interface_name = self.device_manager.setup(
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1825, in setup
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     self.cleanup_stale_devices(network, dhcp_port=None)
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     self.force_reraise()
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     raise self.value
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1820, in setup
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     port = self.setup_dhcp_port(network, segment)
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1755, in setup_dhcp_port
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     dhcp_port = setup_method(network, device_id, dhcp_subnets)
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1660, in _setup_existing_dhcp_port
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     port = self.plugin.update_dhcp_port(
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 901, in update_dhcp_port
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     port = cctxt.call(self.context, 'update_dhcp_port',
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron_lib/rpc.py", line 157, in call
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     return self._original_context.call(ctxt, method, **kwargs)
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     result = self.transport._send(
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     return self._driver.send(target, ctxt, message,
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     return self._send(target, ctxt, message, wait_for_reply, timeout,
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent     raise result
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent oslo_messaging.rpc.client.RemoteError: Remote error: SubnetInUse Unable to complete operation on subnet bc12a5e1-5cc3-4453-8c62-4f234ecd0ace: This subnet is being modified by another concurrent operation.
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n    res = self.dispatcher.dispatch(message)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n    result = func(ctxt, **new_args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n    return func(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n    return func(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    context_reference.session.rollback()\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 329, in update_dhcp_port\n    return self._port_action(plugin, context, port, \'update_port\')\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 120, in _port_action\n    return plugin.update_port(context, port[\'id\'], port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 728, in inner\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 226, in wrapped\n    return f_with_retry(*args, **kwargs,\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File 
"/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    context_reference.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1868, in update_port\n    updated_port = super(Ml2Plugin, self).update_port(context, id,\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 224, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/db_base_plugin_v2.py", line 1557, in update_port\n    self.ipam.update_port(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 729, in update_port\n    changes = self.update_port_with_ips(context,\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 455, in update_port_with_ips\n    changes = self._update_ips_for_port(context,\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 379, in _update_ips_for_port\n    subnets = self._ipam_get_subnets(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 686, in _ipam_get_subnets\n    subnet.read_lock_register(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/models_v2.py", line 81, in read_lock_register\n    raise exception\n', 'neutron_lib.exceptions.SubnetInUse: Unable to complete operation on subnet bc12a5e1-5cc3-4453-8c62-4f234ecd0ace: This subnet is being modified by another concurrent operation.\n'].
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.866 496978 ERROR neutron.agent.dhcp.agent 
Oct 13 15:47:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:38.902 496978 INFO neutron.agent.dhcp.agent [None req-1b1809b4-718f-4ab4-ba98-1d960b0d667d - - - - - -] DHCP configuration for ports {'768a2903-b967-4e86-ae92-fc2a1242b65b', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:47:38 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:38.963 2 INFO neutron.agent.securitygroups_rpc [None req-1f50df88-1a3b-4c64-a18a-f7574c0ae044 4cf1f1f3d2e74d9f9af31665280ca7d5 4ce51b6cd7de42daa5659d6797c11a37 - - default default] Security group member updated ['6c3ff5e9-4017-4412-baf0-58ac2954cf2d']
Oct 13 15:47:39 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d537fcd3973a24e3b361b28e717d5297c0a3d189f527f1f8d0fbd76ebb865682-merged.mount: Deactivated successfully.
Oct 13 15:47:39 standalone.localdomain systemd[1]: run-netns-qdhcp\x2da92a537c\x2d4a4b\x2d4bdd\x2d9c48\x2dbafcd5dc371e.mount: Deactivated successfully.
Oct 13 15:47:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:39.273 496978 INFO neutron.agent.dhcp.agent [None req-2fe112ff-244f-4128-95e7-f9657308a26e - - - - - -] DHCP configuration for ports {'768a2903-b967-4e86-ae92-fc2a1242b65b', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:47:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4001: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00581|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:39.834 496978 INFO neutron.agent.linux.ip_lib [None req-658a28a2-4659-4fe2-a676-b90d752bd2fe - - - - - -] Device tap3319ac37-79 cannot be used as it has no MAC address
Oct 13 15:47:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:39.842 496978 INFO neutron.agent.linux.ip_lib [None req-b147c8dc-4552-47f4-b5d3-c57d05969b4d - - - - - -] Device tap6574be41-ca cannot be used as it has no MAC address
Oct 13 15:47:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:39.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:39.856 496978 INFO neutron.agent.linux.ip_lib [None req-84a26203-ce3b-4516-8619-9a24c7e12f99 - - - - - -] Device tapf27841a2-95 cannot be used as it has no MAC address
Oct 13 15:47:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:39.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:39 standalone.localdomain kernel: device tap6574be41-ca entered promiscuous mode
Oct 13 15:47:39 standalone.localdomain NetworkManager[5962]: <info>  [1760370459.8851] manager: (tap6574be41-ca): new Generic device (/org/freedesktop/NetworkManager/Devices/104)
Oct 13 15:47:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:39.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00582|binding|INFO|Claiming lport 6574be41-ca4e-471f-95bb-967c3c3d1519 for this chassis.
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00583|binding|INFO|6574be41-ca4e-471f-95bb-967c3c3d1519: Claiming unknown
Oct 13 15:47:39 standalone.localdomain systemd-udevd[552185]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:47:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:39.896 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-6889bbd9-925c-49b9-a8ce-cbdafffa55bc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6889bbd9-925c-49b9-a8ce-cbdafffa55bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9339d97e-bb70-41ef-ad96-800e3b5e545d, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=6574be41-ca4e-471f-95bb-967c3c3d1519) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:39.898 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 6574be41-ca4e-471f-95bb-967c3c3d1519 in datapath 6889bbd9-925c-49b9-a8ce-cbdafffa55bc bound to our chassis
Oct 13 15:47:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:39.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:39.903 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 74d78f60-d328-41ba-877f-d7e1122d73e5 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:47:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:39.903 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6889bbd9-925c-49b9-a8ce-cbdafffa55bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:39.905 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[29f663e8-7eab-4fc7-be05-a3ac6747a0cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:39.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:39 standalone.localdomain kernel: device tap3319ac37-79 entered promiscuous mode
Oct 13 15:47:39 standalone.localdomain NetworkManager[5962]: <info>  [1760370459.9324] manager: (tap3319ac37-79): new Generic device (/org/freedesktop/NetworkManager/Devices/105)
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00584|binding|INFO|Setting lport 6574be41-ca4e-471f-95bb-967c3c3d1519 ovn-installed in OVS
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00585|binding|INFO|Setting lport 6574be41-ca4e-471f-95bb-967c3c3d1519 up in Southbound
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00586|if_status|INFO|Not updating pb chassis for 3319ac37-7952-476d-9260-b8de17a0b995 now as sb is readonly
Oct 13 15:47:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:39.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:39.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:39 standalone.localdomain kernel: device tapf27841a2-95 entered promiscuous mode
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00587|binding|INFO|Claiming lport 3319ac37-7952-476d-9260-b8de17a0b995 for this chassis.
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00588|binding|INFO|3319ac37-7952-476d-9260-b8de17a0b995: Claiming unknown
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00589|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:39 standalone.localdomain NetworkManager[5962]: <info>  [1760370459.9479] manager: (tapf27841a2-95): new Generic device (/org/freedesktop/NetworkManager/Devices/106)
Oct 13 15:47:39 standalone.localdomain systemd-udevd[552194]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:47:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:39.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:39.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00590|binding|INFO|Claiming lport f27841a2-952e-4e8e-98e5-f31809abe25f for this chassis.
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00591|binding|INFO|f27841a2-952e-4e8e-98e5-f31809abe25f: Claiming unknown
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00592|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:39.960 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-fac02be2-073c-46c5-9613-b3e18596338a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fac02be2-073c-46c5-9613-b3e18596338a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89713a2255ea411083c22360247bdb7c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b4f12ea6-1f2f-4673-b0cb-6932bceae79c, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=3319ac37-7952-476d-9260-b8de17a0b995) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:39.961 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 3319ac37-7952-476d-9260-b8de17a0b995 in datapath fac02be2-073c-46c5-9613-b3e18596338a bound to our chassis
Oct 13 15:47:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:39.965 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port a2e29aa7-6d43-4bf7-a255-31ad2a74bf9a IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:47:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:39.965 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fac02be2-073c-46c5-9613-b3e18596338a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:39.971 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[685002cb-9ad6-446d-b5ab-8d3c35e034ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:39.980 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-3f766d5b-c1b4-4796-a835-bd80645a9419', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f766d5b-c1b4-4796-a835-bd80645a9419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=398c2979-6fba-43f5-a6f5-2660246e29c1, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=f27841a2-952e-4e8e-98e5-f31809abe25f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:39.981 378821 INFO neutron.agent.ovn.metadata.agent [-] Port f27841a2-952e-4e8e-98e5-f31809abe25f in datapath 3f766d5b-c1b4-4796-a835-bd80645a9419 bound to our chassis
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00593|binding|INFO|Setting lport 3319ac37-7952-476d-9260-b8de17a0b995 ovn-installed in OVS
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00594|binding|INFO|Setting lport 3319ac37-7952-476d-9260-b8de17a0b995 up in Southbound
Oct 13 15:47:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:39.985 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 9c236c09-c236-4f66-b294-48a08352196d IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:47:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:39.986 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f766d5b-c1b4-4796-a835-bd80645a9419, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:39.987 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[6af5dfa3-f411-4bee-8e5c-5c7b8d70b6bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:39.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:39.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00595|binding|INFO|Setting lport f27841a2-952e-4e8e-98e5-f31809abe25f ovn-installed in OVS
Oct 13 15:47:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:39Z|00596|binding|INFO|Setting lport f27841a2-952e-4e8e-98e5-f31809abe25f up in Southbound
Oct 13 15:47:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:40.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:40.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:40.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:40.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:47:40 standalone.localdomain ceph-mon[29756]: pgmap v4001: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:40 standalone.localdomain podman[552273]: 2025-10-13 15:47:40.836398819 +0000 UTC m=+0.094984624 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:40 standalone.localdomain podman[552273]: 2025-10-13 15:47:40.845682358 +0000 UTC m=+0.104268183 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:47:40 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:47:41 standalone.localdomain podman[552355]: 
Oct 13 15:47:41 standalone.localdomain podman[552355]: 2025-10-13 15:47:41.188974477 +0000 UTC m=+0.087496341 container create 406562ed1cd6ad20000046b0961657e5ea4675b3e6770a962eacb320ff2bd044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889bbd9-925c-49b9-a8ce-cbdafffa55bc, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:47:41 standalone.localdomain podman[552375]: 
Oct 13 15:47:41 standalone.localdomain systemd[1]: Started libpod-conmon-406562ed1cd6ad20000046b0961657e5ea4675b3e6770a962eacb320ff2bd044.scope.
Oct 13 15:47:41 standalone.localdomain podman[552375]: 2025-10-13 15:47:41.236994395 +0000 UTC m=+0.080283456 container create d10755a932de24e279d63143f2dbd235e8fa48b462317655c7a29c216b169493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f766d5b-c1b4-4796-a835-bd80645a9419, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:41 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:41 standalone.localdomain podman[552355]: 2025-10-13 15:47:41.143444146 +0000 UTC m=+0.041966040 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:41 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/359ca4bcc2106bdf4c677339541c31e2f0095c8a11ff8046221b436a60404859/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:41 standalone.localdomain podman[552355]: 2025-10-13 15:47:41.256231615 +0000 UTC m=+0.154753479 container init 406562ed1cd6ad20000046b0961657e5ea4675b3e6770a962eacb320ff2bd044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889bbd9-925c-49b9-a8ce-cbdafffa55bc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:47:41 standalone.localdomain systemd[1]: Started libpod-conmon-d10755a932de24e279d63143f2dbd235e8fa48b462317655c7a29c216b169493.scope.
Oct 13 15:47:41 standalone.localdomain podman[552355]: 2025-10-13 15:47:41.261384395 +0000 UTC m=+0.159906259 container start 406562ed1cd6ad20000046b0961657e5ea4675b3e6770a962eacb320ff2bd044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889bbd9-925c-49b9-a8ce-cbdafffa55bc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:47:41 standalone.localdomain dnsmasq[552413]: started, version 2.85 cachesize 150
Oct 13 15:47:41 standalone.localdomain dnsmasq[552413]: DNS service limited to local subnets
Oct 13 15:47:41 standalone.localdomain dnsmasq[552413]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:41 standalone.localdomain dnsmasq[552413]: warning: no upstream servers configured
Oct 13 15:47:41 standalone.localdomain dnsmasq-dhcp[552413]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:47:41 standalone.localdomain dnsmasq[552413]: read /var/lib/neutron/dhcp/6889bbd9-925c-49b9-a8ce-cbdafffa55bc/addn_hosts - 0 addresses
Oct 13 15:47:41 standalone.localdomain dnsmasq-dhcp[552413]: read /var/lib/neutron/dhcp/6889bbd9-925c-49b9-a8ce-cbdafffa55bc/host
Oct 13 15:47:41 standalone.localdomain dnsmasq-dhcp[552413]: read /var/lib/neutron/dhcp/6889bbd9-925c-49b9-a8ce-cbdafffa55bc/opts
Oct 13 15:47:41 standalone.localdomain podman[552394]: 
Oct 13 15:47:41 standalone.localdomain podman[552394]: 2025-10-13 15:47:41.275736964 +0000 UTC m=+0.065541586 container create 55ec28dda79e0d63959a1921e6b71ed2c3b6c0d798528d129fa99f4fe0aecf20 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fac02be2-073c-46c5-9613-b3e18596338a, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:41 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:41 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd9c89f0ac9e1c5c9003659842aa24625d04643105a75ad3ec9997e261e8eea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:41 standalone.localdomain podman[552375]: 2025-10-13 15:47:41.288575103 +0000 UTC m=+0.131864194 container init d10755a932de24e279d63143f2dbd235e8fa48b462317655c7a29c216b169493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f766d5b-c1b4-4796-a835-bd80645a9419, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:41 standalone.localdomain podman[552375]: 2025-10-13 15:47:41.191496066 +0000 UTC m=+0.034785147 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:41 standalone.localdomain podman[552375]: 2025-10-13 15:47:41.298252896 +0000 UTC m=+0.141541987 container start d10755a932de24e279d63143f2dbd235e8fa48b462317655c7a29c216b169493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f766d5b-c1b4-4796-a835-bd80645a9419, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:47:41 standalone.localdomain dnsmasq[552419]: started, version 2.85 cachesize 150
Oct 13 15:47:41 standalone.localdomain dnsmasq[552419]: DNS service limited to local subnets
Oct 13 15:47:41 standalone.localdomain dnsmasq[552419]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:41 standalone.localdomain dnsmasq[552419]: warning: no upstream servers configured
Oct 13 15:47:41 standalone.localdomain dnsmasq-dhcp[552419]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:47:41 standalone.localdomain dnsmasq[552419]: read /var/lib/neutron/dhcp/3f766d5b-c1b4-4796-a835-bd80645a9419/addn_hosts - 0 addresses
Oct 13 15:47:41 standalone.localdomain dnsmasq-dhcp[552419]: read /var/lib/neutron/dhcp/3f766d5b-c1b4-4796-a835-bd80645a9419/host
Oct 13 15:47:41 standalone.localdomain dnsmasq-dhcp[552419]: read /var/lib/neutron/dhcp/3f766d5b-c1b4-4796-a835-bd80645a9419/opts
Oct 13 15:47:41 standalone.localdomain systemd[1]: Started libpod-conmon-55ec28dda79e0d63959a1921e6b71ed2c3b6c0d798528d129fa99f4fe0aecf20.scope.
Oct 13 15:47:41 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:41 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939113b247a4f23d3bfa24046d7b8ca2b59a18176015381181f4ad42d7f03e8b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:41 standalone.localdomain podman[552394]: 2025-10-13 15:47:41.334116534 +0000 UTC m=+0.123921176 container init 55ec28dda79e0d63959a1921e6b71ed2c3b6c0d798528d129fa99f4fe0aecf20 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fac02be2-073c-46c5-9613-b3e18596338a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:47:41 standalone.localdomain podman[552394]: 2025-10-13 15:47:41.340818953 +0000 UTC m=+0.130623595 container start 55ec28dda79e0d63959a1921e6b71ed2c3b6c0d798528d129fa99f4fe0aecf20 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fac02be2-073c-46c5-9613-b3e18596338a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:47:41 standalone.localdomain podman[552394]: 2025-10-13 15:47:41.241361931 +0000 UTC m=+0.031166593 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:41 standalone.localdomain dnsmasq[552424]: started, version 2.85 cachesize 150
Oct 13 15:47:41 standalone.localdomain dnsmasq[552424]: DNS service limited to local subnets
Oct 13 15:47:41 standalone.localdomain dnsmasq[552424]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:41 standalone.localdomain dnsmasq[552424]: warning: no upstream servers configured
Oct 13 15:47:41 standalone.localdomain dnsmasq-dhcp[552424]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:47:41 standalone.localdomain dnsmasq[552424]: read /var/lib/neutron/dhcp/fac02be2-073c-46c5-9613-b3e18596338a/addn_hosts - 0 addresses
Oct 13 15:47:41 standalone.localdomain dnsmasq-dhcp[552424]: read /var/lib/neutron/dhcp/fac02be2-073c-46c5-9613-b3e18596338a/host
Oct 13 15:47:41 standalone.localdomain dnsmasq-dhcp[552424]: read /var/lib/neutron/dhcp/fac02be2-073c-46c5-9613-b3e18596338a/opts
Oct 13 15:47:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:41.356 496978 INFO neutron.agent.dhcp.agent [None req-074ba696-94cc-47f1-9793-f91783e9c033 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:47:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:41.396 496978 INFO neutron.agent.dhcp.agent [None req-30ebadb8-5e35-4dd0-bede-4e0cd435bb57 - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:47:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:41.397 496978 INFO neutron.agent.dhcp.agent [None req-af71c68e-bc38-47d3-b309-a26180b55ba5 - - - - - -] Synchronizing state
Oct 13 15:47:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:41.523 496978 INFO neutron.agent.dhcp.agent [None req-e9a68a6a-41d6-4edb-8498-b840f8356d5d - - - - - -] DHCP configuration for ports {'8440c2b9-c006-4a8c-81e6-11439fc90fd6', '55cef195-e70c-44be-8822-10570763018c'} is completed
Oct 13 15:47:41 standalone.localdomain podman[467099]: time="2025-10-13T15:47:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:47:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:47:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 425605 "" "Go-http-client/1.1"
Oct 13 15:47:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:41.667 496978 INFO neutron.agent.dhcp.agent [None req-fda24b74-ec6e-4205-aa7b-8f4079b485c7 - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:47:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:41.668 496978 INFO neutron.agent.dhcp.agent [-] Starting network f934a9b1-f0ba-494d-9c34-bbdad3043007 dhcp configuration
Oct 13 15:47:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:41.678 496978 INFO neutron.agent.dhcp.agent [None req-b2fe2602-dce7-469d-8a56-2028fc3cef18 - - - - - -] DHCP configuration for ports {'d39da729-7d31-4c93-874a-65974d2bd8c3'} is completed
Oct 13 15:47:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:47:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 51975 "" "Go-http-client/1.1"
Oct 13 15:47:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4002: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:41 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:41.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:41 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:41Z|00597|binding|INFO|Removing iface tapf27841a2-95 ovn-installed in OVS
Oct 13 15:47:41 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:41Z|00598|binding|INFO|Removing lport f27841a2-952e-4e8e-98e5-f31809abe25f ovn-installed in OVS
Oct 13 15:47:41 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:41.921 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 9c236c09-c236-4f66-b294-48a08352196d with type ""
Oct 13 15:47:41 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:41.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:41 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:41.924 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-3f766d5b-c1b4-4796-a835-bd80645a9419', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f766d5b-c1b4-4796-a835-bd80645a9419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=398c2979-6fba-43f5-a6f5-2660246e29c1, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=f27841a2-952e-4e8e-98e5-f31809abe25f) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:41 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:41.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:41 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:41.927 378821 INFO neutron.agent.ovn.metadata.agent [-] Port f27841a2-952e-4e8e-98e5-f31809abe25f in datapath 3f766d5b-c1b4-4796-a835-bd80645a9419 unbound from our chassis
Oct 13 15:47:41 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:41.931 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f766d5b-c1b4-4796-a835-bd80645a9419, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:41 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:41.931 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[7e20fb69-6bd4-465e-8c81-e79e80c59e3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:42.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:42 standalone.localdomain podman[552473]: 2025-10-13 15:47:42.531617218 +0000 UTC m=+0.061850100 container create 1a7c010c449f4a7e12cb4eab731d6e4f5a57e397c8c678e23d3068523ffa493e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:47:42 standalone.localdomain systemd[1]: Started libpod-conmon-1a7c010c449f4a7e12cb4eab731d6e4f5a57e397c8c678e23d3068523ffa493e.scope.
Oct 13 15:47:42 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:42 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb4cf7d968c0b2f64336afa2c86ea49c978a3c649090787ad12c130df0eee42/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:42 standalone.localdomain podman[552473]: 2025-10-13 15:47:42.593794578 +0000 UTC m=+0.124027450 container init 1a7c010c449f4a7e12cb4eab731d6e4f5a57e397c8c678e23d3068523ffa493e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:42 standalone.localdomain podman[552473]: 2025-10-13 15:47:42.497139963 +0000 UTC m=+0.027372835 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:42 standalone.localdomain podman[552473]: 2025-10-13 15:47:42.603352166 +0000 UTC m=+0.133585038 container start 1a7c010c449f4a7e12cb4eab731d6e4f5a57e397c8c678e23d3068523ffa493e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:47:42 standalone.localdomain dnsmasq[552492]: started, version 2.85 cachesize 150
Oct 13 15:47:42 standalone.localdomain dnsmasq[552492]: DNS service limited to local subnets
Oct 13 15:47:42 standalone.localdomain dnsmasq[552492]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:42 standalone.localdomain dnsmasq[552492]: warning: no upstream servers configured
Oct 13 15:47:42 standalone.localdomain dnsmasq-dhcp[552492]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:42 standalone.localdomain dnsmasq[552492]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:42 standalone.localdomain dnsmasq-dhcp[552492]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:42 standalone.localdomain dnsmasq-dhcp[552492]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:42.647 496978 INFO neutron.agent.dhcp.agent [-] Finished network f934a9b1-f0ba-494d-9c34-bbdad3043007 dhcp configuration
Oct 13 15:47:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:42.648 496978 INFO neutron.agent.dhcp.agent [None req-fda24b74-ec6e-4205-aa7b-8f4079b485c7 - - - - - -] Synchronizing state complete
Oct 13 15:47:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:42.652 496978 INFO neutron.agent.dhcp.agent [None req-2eb363a8-197b-43c2-ad22-bbc67fc053c1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:42.653 496978 INFO neutron.agent.dhcp.agent [None req-2eb363a8-197b-43c2-ad22-bbc67fc053c1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:42.653 496978 INFO neutron.agent.dhcp.agent [None req-2eb363a8-197b-43c2-ad22-bbc67fc053c1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:42.654 496978 INFO neutron.agent.dhcp.agent [None req-2eb363a8-197b-43c2-ad22-bbc67fc053c1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:42.654 496978 INFO neutron.agent.dhcp.agent [None req-9e0ea11f-300b-44a9-b32f-c334db958301 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:35Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd9910>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd5250>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd95b0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd9f70>], id=502f4f66-677b-4edf-8be4-e9e8a328a5d5, ip_allocation=immediate, mac_address=fa:16:3e:3d:ea:59, name=tempest-NetworksTestDHCPv6-1848213038, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=23, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['0b5f0a92-918c-4bac-9073-5e593f6dc225', 'bc12a5e1-5cc3-4453-8c62-4f234ecd0ace'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:47:32Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], standard_attr_id=2092, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:47:35Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:47:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:42.660 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:38Z, description=, device_id=bebe41fc-5b25-4a1e-a65e-29c20ed7513a, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890e70d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fe8490>], id=024aaba8-496c-44a7-aeff-3d1dbde6844f, ip_allocation=immediate, mac_address=fa:16:3e:2f:6d:6b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2105, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:47:38Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:47:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:42.666 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:39Z, description=, device_id=67c3f68d-fa9e-47e8-801e-8736f87850a2, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889103070>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889119160>], id=4a0d5e76-6cc7-49ab-9edb-9630d9a71643, ip_allocation=immediate, mac_address=fa:16:3e:84:bd:49, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:47:31Z, description=, dns_domain=, id=147b6b76-2e5c-4c83-8fde-2c38b71b5fee, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--214529436, port_security_enabled=True, project_id=89713a2255ea411083c22360247bdb7c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=46741, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2075, status=ACTIVE, subnets=['61460850-a731-48c9-b65c-0db55b7a088f'], tags=[], tenant_id=89713a2255ea411083c22360247bdb7c, updated_at=2025-10-13T15:47:33Z, vlan_transparent=None, network_id=147b6b76-2e5c-4c83-8fde-2c38b71b5fee, port_security_enabled=False, project_id=89713a2255ea411083c22360247bdb7c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2109, status=DOWN, tags=[], tenant_id=89713a2255ea411083c22360247bdb7c, updated_at=2025-10-13T15:47:39Z on network 147b6b76-2e5c-4c83-8fde-2c38b71b5fee
Oct 13 15:47:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:47:42 standalone.localdomain ceph-mon[29756]: pgmap v4002: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:42 standalone.localdomain dnsmasq[552492]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 2 addresses
Oct 13 15:47:42 standalone.localdomain dnsmasq-dhcp[552492]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:42 standalone.localdomain dnsmasq-dhcp[552492]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:42 standalone.localdomain podman[552511]: 2025-10-13 15:47:42.843845747 +0000 UTC m=+0.067792985 container kill 1a7c010c449f4a7e12cb4eab731d6e4f5a57e397c8c678e23d3068523ffa493e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:47:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:42.928 496978 INFO neutron.agent.dhcp.agent [None req-70102a2e-6182-4231-a651-23e04ea20584 - - - - - -] DHCP configuration for ports {'768a2903-b967-4e86-ae92-fc2a1242b65b', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:47:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:47:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:47:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:47:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:47:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:47:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:47:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:47:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:47:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:47:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:47:43 standalone.localdomain dnsmasq[551791]: read /var/lib/neutron/dhcp/147b6b76-2e5c-4c83-8fde-2c38b71b5fee/addn_hosts - 1 addresses
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[551791]: read /var/lib/neutron/dhcp/147b6b76-2e5c-4c83-8fde-2c38b71b5fee/host
Oct 13 15:47:43 standalone.localdomain podman[552575]: 2025-10-13 15:47:43.010957281 +0000 UTC m=+0.068337493 container kill 89f54d7a6b87efb420bd5eb082faf2830df7b4189b29b753e4dd4c2a3c4939df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-147b6b76-2e5c-4c83-8fde-2c38b71b5fee, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[551791]: read /var/lib/neutron/dhcp/147b6b76-2e5c-4c83-8fde-2c38b71b5fee/opts
Oct 13 15:47:43 standalone.localdomain dnsmasq[552419]: exiting on receipt of SIGTERM
Oct 13 15:47:43 standalone.localdomain systemd[1]: libpod-d10755a932de24e279d63143f2dbd235e8fa48b462317655c7a29c216b169493.scope: Deactivated successfully.
Oct 13 15:47:43 standalone.localdomain podman[552605]: 2025-10-13 15:47:43.059788693 +0000 UTC m=+0.063602584 container kill d10755a932de24e279d63143f2dbd235e8fa48b462317655c7a29c216b169493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f766d5b-c1b4-4796-a835-bd80645a9419, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:43.079 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:e7:a4 2001:db8::f816:3eff:fe02:e7a4'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe02:e7a4/64', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b3d945e2-458a-437c-9a37-dcc9416fabab) old=Port_Binding(mac=['fa:16:3e:02:e7:a4 10.100.0.2 2001:db8::f816:3eff:fe02:e7a4'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe02:e7a4/64', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:43.081 378821 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b3d945e2-458a-437c-9a37-dcc9416fabab in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 updated
Oct 13 15:47:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:43.086 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 74436405-b09f-40e8-aa5a-3f0e72ab25db IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:47:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:43.086 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:43.087 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[ab23eff6-f111-4891-9af4-a17624fa5bff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:43 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:47:43 standalone.localdomain podman[552619]: 2025-10-13 15:47:43.113634734 +0000 UTC m=+0.069884381 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 15:47:43 standalone.localdomain podman[552632]: 2025-10-13 15:47:43.128055943 +0000 UTC m=+0.062968215 container died d10755a932de24e279d63143f2dbd235e8fa48b462317655c7a29c216b169493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f766d5b-c1b4-4796-a835-bd80645a9419, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:47:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:43.147 496978 INFO neutron.agent.dhcp.agent [None req-fbcec0e4-26d7-4a2e-95c6-2b49d998bad0 - - - - - -] DHCP configuration for ports {'502f4f66-677b-4edf-8be4-e9e8a328a5d5'} is completed
Oct 13 15:47:43 standalone.localdomain podman[552632]: 2025-10-13 15:47:43.211769245 +0000 UTC m=+0.146681447 container cleanup d10755a932de24e279d63143f2dbd235e8fa48b462317655c7a29c216b169493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f766d5b-c1b4-4796-a835-bd80645a9419, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:47:43 standalone.localdomain systemd[1]: libpod-conmon-d10755a932de24e279d63143f2dbd235e8fa48b462317655c7a29c216b169493.scope: Deactivated successfully.
Oct 13 15:47:43 standalone.localdomain dnsmasq[552413]: read /var/lib/neutron/dhcp/6889bbd9-925c-49b9-a8ce-cbdafffa55bc/addn_hosts - 1 addresses
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[552413]: read /var/lib/neutron/dhcp/6889bbd9-925c-49b9-a8ce-cbdafffa55bc/host
Oct 13 15:47:43 standalone.localdomain podman[552690]: 2025-10-13 15:47:43.233160632 +0000 UTC m=+0.051223439 container kill 406562ed1cd6ad20000046b0961657e5ea4675b3e6770a962eacb320ff2bd044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889bbd9-925c-49b9-a8ce-cbdafffa55bc, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[552413]: read /var/lib/neutron/dhcp/6889bbd9-925c-49b9-a8ce-cbdafffa55bc/opts
Oct 13 15:47:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:43.246 496978 INFO neutron.agent.dhcp.agent [None req-af4647e9-65c1-4227-aa2b-de62fcf161e6 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:39Z, description=, device_id=67c3f68d-fa9e-47e8-801e-8736f87850a2, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ec3a90>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ec3ac0>], id=4a0d5e76-6cc7-49ab-9edb-9630d9a71643, ip_allocation=immediate, mac_address=fa:16:3e:84:bd:49, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:47:31Z, description=, dns_domain=, id=147b6b76-2e5c-4c83-8fde-2c38b71b5fee, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--214529436, port_security_enabled=True, project_id=89713a2255ea411083c22360247bdb7c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=46741, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2075, status=ACTIVE, subnets=['61460850-a731-48c9-b65c-0db55b7a088f'], tags=[], tenant_id=89713a2255ea411083c22360247bdb7c, updated_at=2025-10-13T15:47:33Z, vlan_transparent=None, network_id=147b6b76-2e5c-4c83-8fde-2c38b71b5fee, port_security_enabled=False, project_id=89713a2255ea411083c22360247bdb7c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2109, status=DOWN, tags=[], tenant_id=89713a2255ea411083c22360247bdb7c, updated_at=2025-10-13T15:47:39Z on network 147b6b76-2e5c-4c83-8fde-2c38b71b5fee
Oct 13 15:47:43 standalone.localdomain podman[552645]: 2025-10-13 15:47:43.265826291 +0000 UTC m=+0.182251746 container remove d10755a932de24e279d63143f2dbd235e8fa48b462317655c7a29c216b169493 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f766d5b-c1b4-4796-a835-bd80645a9419, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:43.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:43.312 496978 INFO neutron.agent.dhcp.agent [None req-a3967cb8-6d55-4f09-8591-fa2f4702a1aa - - - - - -] DHCP configuration for ports {'4a0d5e76-6cc7-49ab-9edb-9630d9a71643'} is completed
Oct 13 15:47:43 standalone.localdomain dnsmasq[552492]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[552492]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[552492]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:43 standalone.localdomain podman[552701]: 2025-10-13 15:47:43.315378177 +0000 UTC m=+0.103103767 container kill 1a7c010c449f4a7e12cb4eab731d6e4f5a57e397c8c678e23d3068523ffa493e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:47:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:43.446 496978 INFO neutron.agent.dhcp.agent [None req-97b48cfe-68a9-4442-afc6-b47d985ce55c - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:40Z, description=, device_id=d92e604a-c276-40b8-9bab-9b5625b089a5, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889076940>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890b6220>], id=4c6f7ece-3ba0-4f5a-a50e-baa82dea5785, ip_allocation=immediate, mac_address=fa:16:3e:b4:e1:07, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:47:34Z, description=, dns_domain=, id=6889bbd9-925c-49b9-a8ce-cbdafffa55bc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1764417669, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=32307, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2088, status=ACTIVE, subnets=['66756f5c-39da-440d-a99b-80e6830a729e'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:47:36Z, vlan_transparent=None, network_id=6889bbd9-925c-49b9-a8ce-cbdafffa55bc, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2110, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:47:40Z on network 6889bbd9-925c-49b9-a8ce-cbdafffa55bc
Oct 13 15:47:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:43.458 496978 INFO neutron.agent.dhcp.agent [None req-8f535c78-3100-4367-acd9-b57f1ed47c82 - - - - - -] DHCP configuration for ports {'024aaba8-496c-44a7-aeff-3d1dbde6844f'} is completed
Oct 13 15:47:43 standalone.localdomain dnsmasq[551791]: read /var/lib/neutron/dhcp/147b6b76-2e5c-4c83-8fde-2c38b71b5fee/addn_hosts - 1 addresses
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[551791]: read /var/lib/neutron/dhcp/147b6b76-2e5c-4c83-8fde-2c38b71b5fee/host
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[551791]: read /var/lib/neutron/dhcp/147b6b76-2e5c-4c83-8fde-2c38b71b5fee/opts
Oct 13 15:47:43 standalone.localdomain podman[552758]: 2025-10-13 15:47:43.484862953 +0000 UTC m=+0.046947135 container kill 89f54d7a6b87efb420bd5eb082faf2830df7b4189b29b753e4dd4c2a3c4939df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-147b6b76-2e5c-4c83-8fde-2c38b71b5fee, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:47:43 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:47:43 standalone.localdomain podman[552787]: 2025-10-13 15:47:43.528969109 +0000 UTC m=+0.038415909 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:43 standalone.localdomain dnsmasq[552413]: read /var/lib/neutron/dhcp/6889bbd9-925c-49b9-a8ce-cbdafffa55bc/addn_hosts - 1 addresses
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[552413]: read /var/lib/neutron/dhcp/6889bbd9-925c-49b9-a8ce-cbdafffa55bc/host
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[552413]: read /var/lib/neutron/dhcp/6889bbd9-925c-49b9-a8ce-cbdafffa55bc/opts
Oct 13 15:47:43 standalone.localdomain podman[552827]: 2025-10-13 15:47:43.632931542 +0000 UTC m=+0.045837370 container kill 406562ed1cd6ad20000046b0961657e5ea4675b3e6770a962eacb320ff2bd044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889bbd9-925c-49b9-a8ce-cbdafffa55bc, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:47:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4003: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5bd9c89f0ac9e1c5c9003659842aa24625d04643105a75ad3ec9997e261e8eea-merged.mount: Deactivated successfully.
Oct 13 15:47:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d10755a932de24e279d63143f2dbd235e8fa48b462317655c7a29c216b169493-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:43.913 496978 INFO neutron.agent.dhcp.agent [None req-19d9b86c-2e7e-4227-a948-5142f03e4a95 - - - - - -] DHCP configuration for ports {'8440c2b9-c006-4a8c-81e6-11439fc90fd6', '4c6f7ece-3ba0-4f5a-a50e-baa82dea5785', '6574be41-ca4e-471f-95bb-967c3c3d1519'} is completed
Oct 13 15:47:43 standalone.localdomain dnsmasq[551791]: read /var/lib/neutron/dhcp/147b6b76-2e5c-4c83-8fde-2c38b71b5fee/addn_hosts - 0 addresses
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[551791]: read /var/lib/neutron/dhcp/147b6b76-2e5c-4c83-8fde-2c38b71b5fee/host
Oct 13 15:47:43 standalone.localdomain dnsmasq-dhcp[551791]: read /var/lib/neutron/dhcp/147b6b76-2e5c-4c83-8fde-2c38b71b5fee/opts
Oct 13 15:47:43 standalone.localdomain podman[552876]: 2025-10-13 15:47:43.927589793 +0000 UTC m=+0.043148476 container kill 89f54d7a6b87efb420bd5eb082faf2830df7b4189b29b753e4dd4c2a3c4939df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-147b6b76-2e5c-4c83-8fde-2c38b71b5fee, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:47:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:44.046 496978 INFO neutron.agent.dhcp.agent [None req-79d761a2-ecb9-456d-91ec-cb2530c35ea4 - - - - - -] DHCP configuration for ports {'4c6f7ece-3ba0-4f5a-a50e-baa82dea5785', '4a0d5e76-6cc7-49ab-9edb-9630d9a71643'} is completed
Oct 13 15:47:44 standalone.localdomain kernel: device tapf27841a2-95 left promiscuous mode
Oct 13 15:47:44 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:44.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:44 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:44.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:44 standalone.localdomain dnsmasq[552492]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:44 standalone.localdomain dnsmasq-dhcp[552492]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:44 standalone.localdomain dnsmasq-dhcp[552492]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:44 standalone.localdomain podman[552908]: 2025-10-13 15:47:44.06435744 +0000 UTC m=+0.059378573 container kill 1a7c010c449f4a7e12cb4eab731d6e4f5a57e397c8c678e23d3068523ffa493e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:44.223 496978 INFO neutron.agent.dhcp.agent [None req-fda24b74-ec6e-4205-aa7b-8f4079b485c7 - - - - - -] Synchronizing state
Oct 13 15:47:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:44.409 496978 INFO neutron.agent.dhcp.agent [None req-7ea0266e-5fa7-40ee-a6c4-e0b79385d32d - - - - - -] DHCP configuration for ports {'768a2903-b967-4e86-ae92-fc2a1242b65b', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:47:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:44.455 496978 INFO neutron.agent.dhcp.agent [None req-915778d8-0e6b-4f7f-aaf9-f04f6562baf0 - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:47:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:44.456 496978 INFO neutron.agent.dhcp.agent [-] Starting network 3f766d5b-c1b4-4796-a835-bd80645a9419 dhcp configuration
Oct 13 15:47:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:44.457 496978 INFO neutron.agent.dhcp.agent [-] Finished network 3f766d5b-c1b4-4796-a835-bd80645a9419 dhcp configuration
Oct 13 15:47:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:44.457 496978 INFO neutron.agent.dhcp.agent [None req-915778d8-0e6b-4f7f-aaf9-f04f6562baf0 - - - - - -] Synchronizing state complete
Oct 13 15:47:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:44.532 496978 INFO neutron.agent.dhcp.agent [None req-377d7ed2-977f-4bcd-9698-90298cbc7849 - - - - - -] DHCP configuration for ports {'55cef195-e70c-44be-8822-10570763018c'} is completed
Oct 13 15:47:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:44.594 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:43Z, description=, device_id=3be463c0-8fd1-4861-bda3-e5049496a142, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889035880>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889035550>], id=3d846327-1838-47e8-84a3-347dd7571df4, ip_allocation=immediate, mac_address=fa:16:3e:b0:84:97, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2115, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:47:44Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:47:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:44.604 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:40Z, description=, device_id=d92e604a-c276-40b8-9bab-9b5625b089a5, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ffc790>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd7760>], id=4c6f7ece-3ba0-4f5a-a50e-baa82dea5785, ip_allocation=immediate, mac_address=fa:16:3e:b4:e1:07, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:47:34Z, description=, dns_domain=, id=6889bbd9-925c-49b9-a8ce-cbdafffa55bc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1764417669, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=32307, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2088, status=ACTIVE, subnets=['66756f5c-39da-440d-a99b-80e6830a729e'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:47:36Z, vlan_transparent=None, network_id=6889bbd9-925c-49b9-a8ce-cbdafffa55bc, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2110, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:47:40Z on network 6889bbd9-925c-49b9-a8ce-cbdafffa55bc
Oct 13 15:47:44 standalone.localdomain ceph-mon[29756]: pgmap v4003: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:44.852 496978 INFO neutron.agent.dhcp.agent [None req-636a4db4-6c18-4171-a0cc-71b6a59c567d - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:47:44 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d3f766d5b\x2dc1b4\x2d4796\x2da835\x2dbd80645a9419.mount: Deactivated successfully.
Oct 13 15:47:44 standalone.localdomain dnsmasq[552492]: exiting on receipt of SIGTERM
Oct 13 15:47:44 standalone.localdomain podman[552953]: 2025-10-13 15:47:44.865516971 +0000 UTC m=+0.186889741 container kill 1a7c010c449f4a7e12cb4eab731d6e4f5a57e397c8c678e23d3068523ffa493e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:47:44 standalone.localdomain systemd[1]: libpod-1a7c010c449f4a7e12cb4eab731d6e4f5a57e397c8c678e23d3068523ffa493e.scope: Deactivated successfully.
Oct 13 15:47:44 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:47:44 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:47:44 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:47:44 standalone.localdomain podman[552993]: 2025-10-13 15:47:44.923041696 +0000 UTC m=+0.134846639 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:44 standalone.localdomain podman[553016]: 2025-10-13 15:47:44.94082271 +0000 UTC m=+0.053511930 container died 1a7c010c449f4a7e12cb4eab731d6e4f5a57e397c8c678e23d3068523ffa493e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:47:44 standalone.localdomain dnsmasq[552413]: read /var/lib/neutron/dhcp/6889bbd9-925c-49b9-a8ce-cbdafffa55bc/addn_hosts - 1 addresses
Oct 13 15:47:44 standalone.localdomain dnsmasq-dhcp[552413]: read /var/lib/neutron/dhcp/6889bbd9-925c-49b9-a8ce-cbdafffa55bc/host
Oct 13 15:47:44 standalone.localdomain podman[553007]: 2025-10-13 15:47:44.977122162 +0000 UTC m=+0.126775615 container kill 406562ed1cd6ad20000046b0961657e5ea4675b3e6770a962eacb320ff2bd044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889bbd9-925c-49b9-a8ce-cbdafffa55bc, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:47:44 standalone.localdomain dnsmasq-dhcp[552413]: read /var/lib/neutron/dhcp/6889bbd9-925c-49b9-a8ce-cbdafffa55bc/opts
Oct 13 15:47:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:45.025 496978 INFO neutron.agent.dhcp.agent [None req-11b0c289-ad2b-4ac6-bb36-2347b567b20d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:45.027 496978 INFO neutron.agent.dhcp.agent [None req-11b0c289-ad2b-4ac6-bb36-2347b567b20d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:45.028 496978 INFO neutron.agent.dhcp.agent [None req-11b0c289-ad2b-4ac6-bb36-2347b567b20d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:45.034 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:e7:a4 10.100.0.2 2001:db8::f816:3eff:fe02:e7a4'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe02:e7a4/64', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b3d945e2-458a-437c-9a37-dcc9416fabab) old=Port_Binding(mac=['fa:16:3e:02:e7:a4 2001:db8::f816:3eff:fe02:e7a4'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe02:e7a4/64', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:45.035 378821 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b3d945e2-458a-437c-9a37-dcc9416fabab in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 updated
Oct 13 15:47:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:45.037 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 74436405-b09f-40e8-aa5a-3f0e72ab25db IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:47:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:45.037 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:45.038 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[426c0164-2b85-4578-956c-c51b71f4fdee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:45 standalone.localdomain podman[553016]: 2025-10-13 15:47:45.101550503 +0000 UTC m=+0.214239683 container remove 1a7c010c449f4a7e12cb4eab731d6e4f5a57e397c8c678e23d3068523ffa493e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:47:45 standalone.localdomain systemd[1]: libpod-conmon-1a7c010c449f4a7e12cb4eab731d6e4f5a57e397c8c678e23d3068523ffa493e.scope: Deactivated successfully.
Oct 13 15:47:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:45.216 496978 INFO neutron.agent.dhcp.agent [None req-b497a078-09e3-4144-965c-c40537331fa4 - - - - - -] DHCP configuration for ports {'3d846327-1838-47e8-84a3-347dd7571df4'} is completed
Oct 13 15:47:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:45.383 496978 INFO neutron.agent.dhcp.agent [None req-76d8ea1c-d515-45e8-a1e5-b637e6047e69 - - - - - -] DHCP configuration for ports {'4c6f7ece-3ba0-4f5a-a50e-baa82dea5785'} is completed
Oct 13 15:47:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4004: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:45Z|00599|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-5cb4cf7d968c0b2f64336afa2c86ea49c978a3c649090787ad12c130df0eee42-merged.mount: Deactivated successfully.
Oct 13 15:47:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1a7c010c449f4a7e12cb4eab731d6e4f5a57e397c8c678e23d3068523ffa493e-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:45.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:46 standalone.localdomain podman[553118]: 
Oct 13 15:47:46 standalone.localdomain podman[553118]: 2025-10-13 15:47:46.597331882 +0000 UTC m=+0.080651347 container create cd411a78ce0a8ddf6a081b4eab5daacf84a0fcbc3e8294013598c5ce8634a8f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:47:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:47:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:47:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:47:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:47:46 standalone.localdomain systemd[1]: Started libpod-conmon-cd411a78ce0a8ddf6a081b4eab5daacf84a0fcbc3e8294013598c5ce8634a8f4.scope.
Oct 13 15:47:46 standalone.localdomain podman[553118]: 2025-10-13 15:47:46.564575311 +0000 UTC m=+0.047894846 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:46 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:46 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b06bf7ff673147468dc74a2d539526480d08a4a0f22171acc32275dd5e655f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:46 standalone.localdomain podman[553118]: 2025-10-13 15:47:46.753269776 +0000 UTC m=+0.236589261 container init cd411a78ce0a8ddf6a081b4eab5daacf84a0fcbc3e8294013598c5ce8634a8f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:47:46 standalone.localdomain podman[553118]: 2025-10-13 15:47:46.762580166 +0000 UTC m=+0.245899651 container start cd411a78ce0a8ddf6a081b4eab5daacf84a0fcbc3e8294013598c5ce8634a8f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:47:46 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:47:46 standalone.localdomain dnsmasq[553193]: started, version 2.85 cachesize 150
Oct 13 15:47:46 standalone.localdomain dnsmasq[553193]: DNS service limited to local subnets
Oct 13 15:47:46 standalone.localdomain dnsmasq[553193]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:46 standalone.localdomain dnsmasq[553193]: warning: no upstream servers configured
Oct 13 15:47:46 standalone.localdomain dnsmasq-dhcp[553193]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:47:46 standalone.localdomain dnsmasq-dhcp[553193]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:46 standalone.localdomain dnsmasq[553193]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:46 standalone.localdomain dnsmasq-dhcp[553193]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:46 standalone.localdomain dnsmasq-dhcp[553193]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:46 standalone.localdomain podman[553132]: 2025-10-13 15:47:46.73414801 +0000 UTC m=+0.087436789 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, container_name=swift_object_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, build-date=2025-07-21T14:56:28, tcib_managed=true, config_id=tripleo_step4, version=17.1.9, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git)
Oct 13 15:47:46 standalone.localdomain podman[553133]: 2025-10-13 15:47:46.811231104 +0000 UTC m=+0.156750450 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 13 15:47:46 standalone.localdomain podman[553133]: 2025-10-13 15:47:46.820900236 +0000 UTC m=+0.166419542 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 15:47:46 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:47:46 standalone.localdomain ceph-mon[29756]: pgmap v4004: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:46 standalone.localdomain podman[553141]: 2025-10-13 15:47:46.85759922 +0000 UTC m=+0.191936647 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:47:46 standalone.localdomain podman[553140]: 2025-10-13 15:47:46.771594507 +0000 UTC m=+0.113493551 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-swift-account-container, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, name=rhosp17/openstack-swift-account, vendor=Red Hat, Inc., container_name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, build-date=2025-07-21T16:11:22, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, release=1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible)
Oct 13 15:47:46 standalone.localdomain podman[553141]: 2025-10-13 15:47:46.890875658 +0000 UTC m=+0.225213095 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:47:46 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:47:46 standalone.localdomain podman[553195]: 2025-10-13 15:47:46.92169368 +0000 UTC m=+0.145966264 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T15:54:32, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-container-container, io.buildah.version=1.33.12, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, batch=17.1_20250721.1, version=17.1.9, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:47:46 standalone.localdomain podman[553132]: 2025-10-13 15:47:46.929813433 +0000 UTC m=+0.283102212 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, version=17.1.9, batch=17.1_20250721.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-swift-object-container, architecture=x86_64, distribution-scope=public, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:28, release=1)
Oct 13 15:47:46 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:47:47 standalone.localdomain podman[553140]: 2025-10-13 15:47:47.001751457 +0000 UTC m=+0.343650491 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, name=rhosp17/openstack-swift-account, container_name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, build-date=2025-07-21T16:11:22, summary=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1)
Oct 13 15:47:47 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:47:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:47.077 496978 INFO neutron.agent.dhcp.agent [None req-321fdf7f-b727-405b-b068-0ddc9fbe6c81 - - - - - -] DHCP configuration for ports {'768a2903-b967-4e86-ae92-fc2a1242b65b', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:47:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:47.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:47 standalone.localdomain podman[553195]: 2025-10-13 15:47:47.169842891 +0000 UTC m=+0.394115475 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, io.buildah.version=1.33.12, version=17.1.9, io.openshift.expose-services=, release=1, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:47:47 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:47:47 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:47.221 2 INFO neutron.agent.securitygroups_rpc [None req-944e93c3-5481-4c43-95c9-53b8dcc88d12 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:47:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:47.262 496978 INFO neutron.agent.linux.ip_lib [None req-6ae5e6cb-541d-404e-a769-20fd3494aca6 - - - - - -] Device tap74f50749-45 cannot be used as it has no MAC address
Oct 13 15:47:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:47.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:47 standalone.localdomain kernel: device tap74f50749-45 entered promiscuous mode
Oct 13 15:47:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:47.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:47 standalone.localdomain NetworkManager[5962]: <info>  [1760370467.2894] manager: (tap74f50749-45): new Generic device (/org/freedesktop/NetworkManager/Devices/107)
Oct 13 15:47:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:47Z|00600|binding|INFO|Claiming lport 74f50749-4537-40d2-bdbc-66d171022745 for this chassis.
Oct 13 15:47:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:47Z|00601|binding|INFO|74f50749-4537-40d2-bdbc-66d171022745: Claiming unknown
Oct 13 15:47:47 standalone.localdomain systemd-udevd[553267]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:47:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:47.299 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d625f8ce-1911-4e84-990c-f2e4e4969371', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d625f8ce-1911-4e84-990c-f2e4e4969371', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b42fe79-9dd7-4d56-84f0-439dedd37b39, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=74f50749-4537-40d2-bdbc-66d171022745) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:47.301 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 74f50749-4537-40d2-bdbc-66d171022745 in datapath d625f8ce-1911-4e84-990c-f2e4e4969371 bound to our chassis
Oct 13 15:47:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:47.303 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d625f8ce-1911-4e84-990c-f2e4e4969371 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:47.305 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[880c0328-5a26-467e-bd32-8e5f697217c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap74f50749-45: No such device
Oct 13 15:47:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:47Z|00602|binding|INFO|Setting lport 74f50749-4537-40d2-bdbc-66d171022745 ovn-installed in OVS
Oct 13 15:47:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:47Z|00603|binding|INFO|Setting lport 74f50749-4537-40d2-bdbc-66d171022745 up in Southbound
Oct 13 15:47:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap74f50749-45: No such device
Oct 13 15:47:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:47.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:47.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap74f50749-45: No such device
Oct 13 15:47:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap74f50749-45: No such device
Oct 13 15:47:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap74f50749-45: No such device
Oct 13 15:47:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap74f50749-45: No such device
Oct 13 15:47:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:47.344 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:46Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891e8c40>, <neutron.agent.linux.dhcp.DictModel object at 0x7f18891e8280>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892296a0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f18892292e0>], id=88804659-cef5-44c5-bec2-913ddac633e9, ip_allocation=immediate, mac_address=fa:16:3e:d3:de:44, name=tempest-NetworksTestDHCPv6-1377165566, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=27, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['360e1186-a23b-45ba-b143-bb070fc61158', 'd7f48132-9833-425d-a2cf-e4786456457c'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:47:44Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], standard_attr_id=2129, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:47:46Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:47:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap74f50749-45: No such device
Oct 13 15:47:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap74f50749-45: No such device
Oct 13 15:47:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:47.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:47 standalone.localdomain dnsmasq[553193]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 2 addresses
Oct 13 15:47:47 standalone.localdomain dnsmasq-dhcp[553193]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:47 standalone.localdomain dnsmasq-dhcp[553193]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:47 standalone.localdomain podman[553315]: 2025-10-13 15:47:47.605701986 +0000 UTC m=+0.074602137 container kill cd411a78ce0a8ddf6a081b4eab5daacf84a0fcbc3e8294013598c5ce8634a8f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:47:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4005: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:47.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:48 standalone.localdomain dnsmasq[552413]: read /var/lib/neutron/dhcp/6889bbd9-925c-49b9-a8ce-cbdafffa55bc/addn_hosts - 0 addresses
Oct 13 15:47:48 standalone.localdomain dnsmasq-dhcp[552413]: read /var/lib/neutron/dhcp/6889bbd9-925c-49b9-a8ce-cbdafffa55bc/host
Oct 13 15:47:48 standalone.localdomain podman[553372]: 2025-10-13 15:47:48.032111277 +0000 UTC m=+0.056970088 container kill 406562ed1cd6ad20000046b0961657e5ea4675b3e6770a962eacb320ff2bd044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889bbd9-925c-49b9-a8ce-cbdafffa55bc, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:47:48 standalone.localdomain dnsmasq-dhcp[552413]: read /var/lib/neutron/dhcp/6889bbd9-925c-49b9-a8ce-cbdafffa55bc/opts
Oct 13 15:47:48 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:48.060 496978 INFO neutron.agent.dhcp.agent [None req-4ea6f2b2-bf98-4b5a-bd36-7551dbd57ede - - - - - -] DHCP configuration for ports {'88804659-cef5-44c5-bec2-913ddac633e9'} is completed
Oct 13 15:47:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:48.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:48 standalone.localdomain podman[553420]: 
Oct 13 15:47:48 standalone.localdomain podman[553420]: 2025-10-13 15:47:48.520978567 +0000 UTC m=+0.097423880 container create 12251474e13272c613f6cf165df987d30064079e620e6ea098238e930a65174b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d625f8ce-1911-4e84-990c-f2e4e4969371, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:48 standalone.localdomain podman[553420]: 2025-10-13 15:47:48.474752725 +0000 UTC m=+0.051198058 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:48 standalone.localdomain systemd[1]: Started libpod-conmon-12251474e13272c613f6cf165df987d30064079e620e6ea098238e930a65174b.scope.
Oct 13 15:47:48 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:48 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd1aada25f08f7071253f258f07a6765f0b07cc348978d02adfa3a718642ede/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:48 standalone.localdomain podman[553420]: 2025-10-13 15:47:48.615873747 +0000 UTC m=+0.192319050 container init 12251474e13272c613f6cf165df987d30064079e620e6ea098238e930a65174b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d625f8ce-1911-4e84-990c-f2e4e4969371, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS)
Oct 13 15:47:48 standalone.localdomain podman[553420]: 2025-10-13 15:47:48.625731784 +0000 UTC m=+0.202177097 container start 12251474e13272c613f6cf165df987d30064079e620e6ea098238e930a65174b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d625f8ce-1911-4e84-990c-f2e4e4969371, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:47:48 standalone.localdomain dnsmasq[553439]: started, version 2.85 cachesize 150
Oct 13 15:47:48 standalone.localdomain dnsmasq[553439]: DNS service limited to local subnets
Oct 13 15:47:48 standalone.localdomain dnsmasq[553439]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:48 standalone.localdomain dnsmasq[553439]: warning: no upstream servers configured
Oct 13 15:47:48 standalone.localdomain dnsmasq-dhcp[553439]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:48 standalone.localdomain dnsmasq[553439]: read /var/lib/neutron/dhcp/d625f8ce-1911-4e84-990c-f2e4e4969371/addn_hosts - 0 addresses
Oct 13 15:47:48 standalone.localdomain dnsmasq-dhcp[553439]: read /var/lib/neutron/dhcp/d625f8ce-1911-4e84-990c-f2e4e4969371/host
Oct 13 15:47:48 standalone.localdomain dnsmasq-dhcp[553439]: read /var/lib/neutron/dhcp/d625f8ce-1911-4e84-990c-f2e4e4969371/opts
Oct 13 15:47:48 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:48.693 496978 INFO neutron.agent.dhcp.agent [None req-6ae5e6cb-541d-404e-a769-20fd3494aca6 - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:47:48 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:48.695 496978 INFO neutron.agent.dhcp.agent [None req-6ae5e6cb-541d-404e-a769-20fd3494aca6 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:47Z, description=, device_id=45a4e3bc-7a11-4435-b161-bc386e25bb8f, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f15520>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f15220>], id=2bbc1d4a-6965-4298-9873-33e8fff2c808, ip_allocation=immediate, mac_address=fa:16:3e:20:a7:b7, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:47:42Z, description=, dns_domain=, id=d625f8ce-1911-4e84-990c-f2e4e4969371, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-893498818, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21822, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2112, status=ACTIVE, subnets=['4a3b2f6b-3e35-4092-8418-3a9e18d7ae6b'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:47:45Z, vlan_transparent=None, network_id=d625f8ce-1911-4e84-990c-f2e4e4969371, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2136, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:47:47Z on network d625f8ce-1911-4e84-990c-f2e4e4969371
Oct 13 15:47:48 standalone.localdomain ceph-mon[29756]: pgmap v4005: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:48 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:48.809 496978 INFO neutron.agent.linux.ip_lib [None req-fbf66404-eca9-449b-b8d2-3acecd4f1a8b - - - - - -] Device tap32c2f85f-eb cannot be used as it has no MAC address
Oct 13 15:47:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:48.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:48 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:48.836 496978 INFO neutron.agent.dhcp.agent [None req-f36ea4fe-9526-42dc-bca9-5708783879ba - - - - - -] DHCP configuration for ports {'7d5c2f96-5914-4b62-bead-86d5c38e4274'} is completed
Oct 13 15:47:48 standalone.localdomain kernel: device tap32c2f85f-eb entered promiscuous mode
Oct 13 15:47:48 standalone.localdomain systemd-udevd[553269]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:47:48 standalone.localdomain NetworkManager[5962]: <info>  [1760370468.8401] manager: (tap32c2f85f-eb): new Generic device (/org/freedesktop/NetworkManager/Devices/108)
Oct 13 15:47:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:48.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:48Z|00604|binding|INFO|Claiming lport 32c2f85f-eb39-41e8-af26-b19bf7252add for this chassis.
Oct 13 15:47:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:48Z|00605|binding|INFO|32c2f85f-eb39-41e8-af26-b19bf7252add: Claiming unknown
Oct 13 15:47:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:48.858 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-c0ddff84-359f-452c-98ae-c7b5e8430e4f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0ddff84-359f-452c-98ae-c7b5e8430e4f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89713a2255ea411083c22360247bdb7c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a804136c-6f0a-42d8-a30b-fc7bc6bf96e9, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=32c2f85f-eb39-41e8-af26-b19bf7252add) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:48.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:48.860 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 32c2f85f-eb39-41e8-af26-b19bf7252add in datapath c0ddff84-359f-452c-98ae-c7b5e8430e4f bound to our chassis
Oct 13 15:47:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:48Z|00606|binding|INFO|Setting lport 32c2f85f-eb39-41e8-af26-b19bf7252add ovn-installed in OVS
Oct 13 15:47:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:48Z|00607|binding|INFO|Setting lport 32c2f85f-eb39-41e8-af26-b19bf7252add up in Southbound
Oct 13 15:47:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:48.861 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c0ddff84-359f-452c-98ae-c7b5e8430e4f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:48.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:48.862 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[f3852ff4-8584-4e3c-829b-f1b312e6b064]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:48.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:48 standalone.localdomain dnsmasq[553439]: read /var/lib/neutron/dhcp/d625f8ce-1911-4e84-990c-f2e4e4969371/addn_hosts - 1 addresses
Oct 13 15:47:48 standalone.localdomain dnsmasq-dhcp[553439]: read /var/lib/neutron/dhcp/d625f8ce-1911-4e84-990c-f2e4e4969371/host
Oct 13 15:47:48 standalone.localdomain dnsmasq-dhcp[553439]: read /var/lib/neutron/dhcp/d625f8ce-1911-4e84-990c-f2e4e4969371/opts
Oct 13 15:47:48 standalone.localdomain podman[553465]: 2025-10-13 15:47:48.907592777 +0000 UTC m=+0.082955469 container kill 12251474e13272c613f6cf165df987d30064079e620e6ea098238e930a65174b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d625f8ce-1911-4e84-990c-f2e4e4969371, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:48 standalone.localdomain systemd[1]: tmp-crun.hG2i76.mount: Deactivated successfully.
Oct 13 15:47:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:48.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:49 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:49.030 2 INFO neutron.agent.securitygroups_rpc [None req-987027d9-60de-4817-b601-e0bdbc6bbffd db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:47:49 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:49.118 496978 INFO neutron.agent.dhcp.agent [None req-6ae5e6cb-541d-404e-a769-20fd3494aca6 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:47Z, description=, device_id=45a4e3bc-7a11-4435-b161-bc386e25bb8f, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889085e80>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889085f40>], id=2bbc1d4a-6965-4298-9873-33e8fff2c808, ip_allocation=immediate, mac_address=fa:16:3e:20:a7:b7, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:47:42Z, description=, dns_domain=, id=d625f8ce-1911-4e84-990c-f2e4e4969371, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-893498818, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21822, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2112, status=ACTIVE, subnets=['4a3b2f6b-3e35-4092-8418-3a9e18d7ae6b'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:47:45Z, vlan_transparent=None, network_id=d625f8ce-1911-4e84-990c-f2e4e4969371, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2136, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:47:47Z on network d625f8ce-1911-4e84-990c-f2e4e4969371
Oct 13 15:47:49 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:49.140 2 INFO neutron.agent.securitygroups_rpc [None req-3ffdfe60-3236-4f5f-aae6-fa013dc047fa 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['cdbf2f3d-c7b1-4501-8a22-2b21e104a40a']
Oct 13 15:47:49 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:49.174 496978 INFO neutron.agent.dhcp.agent [None req-5f8782d0-93fe-47aa-87d2-7f3327385d44 - - - - - -] DHCP configuration for ports {'2bbc1d4a-6965-4298-9873-33e8fff2c808'} is completed
Oct 13 15:47:49 standalone.localdomain dnsmasq[553439]: read /var/lib/neutron/dhcp/d625f8ce-1911-4e84-990c-f2e4e4969371/addn_hosts - 1 addresses
Oct 13 15:47:49 standalone.localdomain dnsmasq-dhcp[553439]: read /var/lib/neutron/dhcp/d625f8ce-1911-4e84-990c-f2e4e4969371/host
Oct 13 15:47:49 standalone.localdomain dnsmasq-dhcp[553439]: read /var/lib/neutron/dhcp/d625f8ce-1911-4e84-990c-f2e4e4969371/opts
Oct 13 15:47:49 standalone.localdomain podman[553540]: 2025-10-13 15:47:49.341693868 +0000 UTC m=+0.074543327 container kill 12251474e13272c613f6cf165df987d30064079e620e6ea098238e930a65174b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d625f8ce-1911-4e84-990c-f2e4e4969371, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:49 standalone.localdomain dnsmasq[553193]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:49 standalone.localdomain dnsmasq-dhcp[553193]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:49 standalone.localdomain dnsmasq-dhcp[553193]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:49 standalone.localdomain podman[553558]: 2025-10-13 15:47:49.383553914 +0000 UTC m=+0.062885923 container kill cd411a78ce0a8ddf6a081b4eab5daacf84a0fcbc3e8294013598c5ce8634a8f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:47:49 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:49.720 496978 INFO neutron.agent.dhcp.agent [None req-f35d87d0-1673-4a19-9bb5-368fe13db7b5 - - - - - -] DHCP configuration for ports {'2bbc1d4a-6965-4298-9873-33e8fff2c808'} is completed
Oct 13 15:47:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4006: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:50 standalone.localdomain podman[553624]: 
Oct 13 15:47:50 standalone.localdomain podman[553624]: 2025-10-13 15:47:50.011370247 +0000 UTC m=+0.086978314 container create 192a37fdc7dafa06f59511ff9ddbcb794af225471f25756de9501eeb826b4ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0ddff84-359f-452c-98ae-c7b5e8430e4f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:47:50 standalone.localdomain systemd[1]: Started libpod-conmon-192a37fdc7dafa06f59511ff9ddbcb794af225471f25756de9501eeb826b4ff2.scope.
Oct 13 15:47:50 standalone.localdomain podman[553624]: 2025-10-13 15:47:49.960656876 +0000 UTC m=+0.036264963 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:50 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce29101fc849ab4ed4086a3948077dcd30cea60e9d919ffbc5978e62f424f6a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:50 standalone.localdomain podman[553624]: 2025-10-13 15:47:50.098022791 +0000 UTC m=+0.173630838 container init 192a37fdc7dafa06f59511ff9ddbcb794af225471f25756de9501eeb826b4ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0ddff84-359f-452c-98ae-c7b5e8430e4f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:47:50 standalone.localdomain podman[553624]: 2025-10-13 15:47:50.105202794 +0000 UTC m=+0.180810841 container start 192a37fdc7dafa06f59511ff9ddbcb794af225471f25756de9501eeb826b4ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0ddff84-359f-452c-98ae-c7b5e8430e4f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:47:50 standalone.localdomain dnsmasq[553641]: started, version 2.85 cachesize 150
Oct 13 15:47:50 standalone.localdomain dnsmasq[553641]: DNS service limited to local subnets
Oct 13 15:47:50 standalone.localdomain dnsmasq[553641]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:50 standalone.localdomain dnsmasq[553641]: warning: no upstream servers configured
Oct 13 15:47:50 standalone.localdomain dnsmasq-dhcp[553641]: DHCP, static leases only on 10.101.0.0, lease time 1d
Oct 13 15:47:50 standalone.localdomain dnsmasq[553641]: read /var/lib/neutron/dhcp/c0ddff84-359f-452c-98ae-c7b5e8430e4f/addn_hosts - 0 addresses
Oct 13 15:47:50 standalone.localdomain dnsmasq-dhcp[553641]: read /var/lib/neutron/dhcp/c0ddff84-359f-452c-98ae-c7b5e8430e4f/host
Oct 13 15:47:50 standalone.localdomain dnsmasq-dhcp[553641]: read /var/lib/neutron/dhcp/c0ddff84-359f-452c-98ae-c7b5e8430e4f/opts
Oct 13 15:47:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:50.159 496978 INFO neutron.agent.dhcp.agent [None req-afbc937c-e1e1-4f19-a9d0-249a8ef00439 - - - - - -] Resizing dhcp processing queue green pool size to: 11
Oct 13 15:47:50 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:50.205 2 INFO neutron.agent.securitygroups_rpc [None req-aaac51c6-e9cb-4b25-aa9b-03a08f7fd268 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['cdbf2f3d-c7b1-4501-8a22-2b21e104a40a']
Oct 13 15:47:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:50.268 496978 INFO neutron.agent.dhcp.agent [None req-67edcee8-62be-4751-9c7a-d66c06a77412 - - - - - -] DHCP configuration for ports {'6c7dcd48-dbcf-4ef9-b6b3-e4fa9bd4efa6'} is completed
Oct 13 15:47:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:50.292 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:49Z, description=, device_id=67c3f68d-fa9e-47e8-801e-8736f87850a2, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f32250>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188921a2e0>], id=75f7f421-07c0-4dd8-afa7-65e4bfd81be3, ip_allocation=immediate, mac_address=fa:16:3e:4e:81:1b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:47:26Z, description=, dns_domain=, id=7b5a1eaf-f859-44dd-8036-8ec5e471e646, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeTest-test-network-1353397107, port_security_enabled=True, project_id=89713a2255ea411083c22360247bdb7c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58608, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2053, status=ACTIVE, subnets=['918b3f66-0e16-40f5-9fec-43b27ef3cf8f'], tags=[], tenant_id=89713a2255ea411083c22360247bdb7c, updated_at=2025-10-13T15:47:28Z, vlan_transparent=None, network_id=7b5a1eaf-f859-44dd-8036-8ec5e471e646, port_security_enabled=False, project_id=89713a2255ea411083c22360247bdb7c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2145, status=DOWN, tags=[], tenant_id=89713a2255ea411083c22360247bdb7c, updated_at=2025-10-13T15:47:49Z on network 7b5a1eaf-f859-44dd-8036-8ec5e471e646
Oct 13 15:47:50 standalone.localdomain dnsmasq[551334]: read /var/lib/neutron/dhcp/7b5a1eaf-f859-44dd-8036-8ec5e471e646/addn_hosts - 1 addresses
Oct 13 15:47:50 standalone.localdomain dnsmasq-dhcp[551334]: read /var/lib/neutron/dhcp/7b5a1eaf-f859-44dd-8036-8ec5e471e646/host
Oct 13 15:47:50 standalone.localdomain dnsmasq-dhcp[551334]: read /var/lib/neutron/dhcp/7b5a1eaf-f859-44dd-8036-8ec5e471e646/opts
Oct 13 15:47:50 standalone.localdomain podman[553659]: 2025-10-13 15:47:50.536813827 +0000 UTC m=+0.065424731 container kill f59f61cdd48e4ca9922837221dfd97e97f75d642cb924dd8acf04016f282a5c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b5a1eaf-f859-44dd-8036-8ec5e471e646, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:47:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:47:50 standalone.localdomain ceph-mon[29756]: pgmap v4006: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:50 standalone.localdomain podman[553681]: 2025-10-13 15:47:50.830806968 +0000 UTC m=+0.093835398 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 13 15:47:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:50.839 496978 INFO neutron.agent.dhcp.agent [None req-c8ea0522-93ca-46d9-b221-4c4d3166a8b3 - - - - - -] DHCP configuration for ports {'75f7f421-07c0-4dd8-afa7-65e4bfd81be3'} is completed
Oct 13 15:47:50 standalone.localdomain podman[553681]: 2025-10-13 15:47:50.86998382 +0000 UTC m=+0.133012280 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, org.label-schema.build-date=20251009)
Oct 13 15:47:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:50.879 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:49Z, description=, device_id=67c3f68d-fa9e-47e8-801e-8736f87850a2, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889035670>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889035d90>], id=75f7f421-07c0-4dd8-afa7-65e4bfd81be3, ip_allocation=immediate, mac_address=fa:16:3e:4e:81:1b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:47:26Z, description=, dns_domain=, id=7b5a1eaf-f859-44dd-8036-8ec5e471e646, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeTest-test-network-1353397107, port_security_enabled=True, project_id=89713a2255ea411083c22360247bdb7c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58608, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2053, status=ACTIVE, subnets=['918b3f66-0e16-40f5-9fec-43b27ef3cf8f'], tags=[], tenant_id=89713a2255ea411083c22360247bdb7c, updated_at=2025-10-13T15:47:28Z, vlan_transparent=None, network_id=7b5a1eaf-f859-44dd-8036-8ec5e471e646, port_security_enabled=False, project_id=89713a2255ea411083c22360247bdb7c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2145, status=DOWN, tags=[], tenant_id=89713a2255ea411083c22360247bdb7c, updated_at=2025-10-13T15:47:49Z on network 7b5a1eaf-f859-44dd-8036-8ec5e471e646
Oct 13 15:47:50 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:47:51 standalone.localdomain dnsmasq[551334]: read /var/lib/neutron/dhcp/7b5a1eaf-f859-44dd-8036-8ec5e471e646/addn_hosts - 1 addresses
Oct 13 15:47:51 standalone.localdomain dnsmasq-dhcp[551334]: read /var/lib/neutron/dhcp/7b5a1eaf-f859-44dd-8036-8ec5e471e646/host
Oct 13 15:47:51 standalone.localdomain dnsmasq-dhcp[551334]: read /var/lib/neutron/dhcp/7b5a1eaf-f859-44dd-8036-8ec5e471e646/opts
Oct 13 15:47:51 standalone.localdomain podman[553718]: 2025-10-13 15:47:51.14182097 +0000 UTC m=+0.062652115 container kill f59f61cdd48e4ca9922837221dfd97e97f75d642cb924dd8acf04016f282a5c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b5a1eaf-f859-44dd-8036-8ec5e471e646, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 15:47:51 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:47:51 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:47:51 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:47:51 standalone.localdomain podman[553755]: 2025-10-13 15:47:51.274939503 +0000 UTC m=+0.065718771 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:51 standalone.localdomain systemd[1]: tmp-crun.iB4Ipm.mount: Deactivated successfully.
Oct 13 15:47:51 standalone.localdomain dnsmasq[553193]: exiting on receipt of SIGTERM
Oct 13 15:47:51 standalone.localdomain podman[553808]: 2025-10-13 15:47:51.407572369 +0000 UTC m=+0.050814395 container kill cd411a78ce0a8ddf6a081b4eab5daacf84a0fcbc3e8294013598c5ce8634a8f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:51 standalone.localdomain systemd[1]: libpod-cd411a78ce0a8ddf6a081b4eab5daacf84a0fcbc3e8294013598c5ce8634a8f4.scope: Deactivated successfully.
Oct 13 15:47:51 standalone.localdomain dnsmasq[552413]: exiting on receipt of SIGTERM
Oct 13 15:47:51 standalone.localdomain podman[553809]: 2025-10-13 15:47:51.431699082 +0000 UTC m=+0.065326288 container kill 406562ed1cd6ad20000046b0961657e5ea4675b3e6770a962eacb320ff2bd044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889bbd9-925c-49b9-a8ce-cbdafffa55bc, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 13 15:47:51 standalone.localdomain systemd[1]: libpod-406562ed1cd6ad20000046b0961657e5ea4675b3e6770a962eacb320ff2bd044.scope: Deactivated successfully.
Oct 13 15:47:51 standalone.localdomain podman[553835]: 2025-10-13 15:47:51.46336887 +0000 UTC m=+0.044267452 container died cd411a78ce0a8ddf6a081b4eab5daacf84a0fcbc3e8294013598c5ce8634a8f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:47:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:51.480 496978 INFO neutron.agent.dhcp.agent [None req-cf0d72f4-0c9b-4dae-bdc5-c67a7197e2bc - - - - - -] DHCP configuration for ports {'75f7f421-07c0-4dd8-afa7-65e4bfd81be3'} is completed
Oct 13 15:47:51 standalone.localdomain podman[553835]: 2025-10-13 15:47:51.496924527 +0000 UTC m=+0.077823079 container cleanup cd411a78ce0a8ddf6a081b4eab5daacf84a0fcbc3e8294013598c5ce8634a8f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:47:51 standalone.localdomain systemd[1]: libpod-conmon-cd411a78ce0a8ddf6a081b4eab5daacf84a0fcbc3e8294013598c5ce8634a8f4.scope: Deactivated successfully.
Oct 13 15:47:51 standalone.localdomain podman[553855]: 2025-10-13 15:47:51.504249636 +0000 UTC m=+0.055154582 container died 406562ed1cd6ad20000046b0961657e5ea4675b3e6770a962eacb320ff2bd044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889bbd9-925c-49b9-a8ce-cbdafffa55bc, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:47:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:51Z|00608|binding|INFO|Removing iface tap6574be41-ca ovn-installed in OVS
Oct 13 15:47:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:51.512 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 74d78f60-d328-41ba-877f-d7e1122d73e5 with type ""
Oct 13 15:47:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:51Z|00609|binding|INFO|Removing lport 6574be41-ca4e-471f-95bb-967c3c3d1519 ovn-installed in OVS
Oct 13 15:47:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:51.513 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-6889bbd9-925c-49b9-a8ce-cbdafffa55bc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6889bbd9-925c-49b9-a8ce-cbdafffa55bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9339d97e-bb70-41ef-ad96-800e3b5e545d, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=6574be41-ca4e-471f-95bb-967c3c3d1519) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:51.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:51.514 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 6574be41-ca4e-471f-95bb-967c3c3d1519 in datapath 6889bbd9-925c-49b9-a8ce-cbdafffa55bc unbound from our chassis
Oct 13 15:47:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:51.516 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6889bbd9-925c-49b9-a8ce-cbdafffa55bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:51.517 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[1684e52f-e4b2-4024-9e5f-a3ded763b0c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:51.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:51 standalone.localdomain podman[553855]: 2025-10-13 15:47:51.540765395 +0000 UTC m=+0.091670321 container remove 406562ed1cd6ad20000046b0961657e5ea4675b3e6770a962eacb320ff2bd044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889bbd9-925c-49b9-a8ce-cbdafffa55bc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:51 standalone.localdomain systemd[1]: libpod-conmon-406562ed1cd6ad20000046b0961657e5ea4675b3e6770a962eacb320ff2bd044.scope: Deactivated successfully.
Oct 13 15:47:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:51.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:51 standalone.localdomain kernel: device tap6574be41-ca left promiscuous mode
Oct 13 15:47:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:51.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:51.594 496978 INFO neutron.agent.dhcp.agent [None req-51360991-e5b1-48c5-b731-6d24bc5c05fd - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:47:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:51.595 496978 INFO neutron.agent.dhcp.agent [None req-51360991-e5b1-48c5-b731-6d24bc5c05fd - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:51 standalone.localdomain podman[553842]: 2025-10-13 15:47:51.618982655 +0000 UTC m=+0.186087307 container remove cd411a78ce0a8ddf6a081b4eab5daacf84a0fcbc3e8294013598c5ce8634a8f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:51Z|00610|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:51.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4007: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-66b06bf7ff673147468dc74a2d539526480d08a4a0f22171acc32275dd5e655f-merged.mount: Deactivated successfully.
Oct 13 15:47:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd411a78ce0a8ddf6a081b4eab5daacf84a0fcbc3e8294013598c5ce8634a8f4-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-359ca4bcc2106bdf4c677339541c31e2f0095c8a11ff8046221b436a60404859-merged.mount: Deactivated successfully.
Oct 13 15:47:51 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-406562ed1cd6ad20000046b0961657e5ea4675b3e6770a962eacb320ff2bd044-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:51 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d6889bbd9\x2d925c\x2d49b9\x2da8ce\x2dcbdafffa55bc.mount: Deactivated successfully.
Oct 13 15:47:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:51.863 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:51 standalone.localdomain dnsmasq[553439]: read /var/lib/neutron/dhcp/d625f8ce-1911-4e84-990c-f2e4e4969371/addn_hosts - 0 addresses
Oct 13 15:47:51 standalone.localdomain dnsmasq-dhcp[553439]: read /var/lib/neutron/dhcp/d625f8ce-1911-4e84-990c-f2e4e4969371/host
Oct 13 15:47:51 standalone.localdomain dnsmasq-dhcp[553439]: read /var/lib/neutron/dhcp/d625f8ce-1911-4e84-990c-f2e4e4969371/opts
Oct 13 15:47:51 standalone.localdomain podman[553926]: 2025-10-13 15:47:51.877619562 +0000 UTC m=+0.069351514 container kill 12251474e13272c613f6cf165df987d30064079e620e6ea098238e930a65174b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d625f8ce-1911-4e84-990c-f2e4e4969371, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:51 standalone.localdomain systemd[1]: tmp-crun.DgDNDi.mount: Deactivated successfully.
Oct 13 15:47:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:52.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:52Z|00611|binding|INFO|Releasing lport 74f50749-4537-40d2-bdbc-66d171022745 from this chassis (sb_readonly=0)
Oct 13 15:47:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:52Z|00612|binding|INFO|Setting lport 74f50749-4537-40d2-bdbc-66d171022745 down in Southbound
Oct 13 15:47:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:52.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:52 standalone.localdomain kernel: device tap74f50749-45 left promiscuous mode
Oct 13 15:47:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:52.222 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d625f8ce-1911-4e84-990c-f2e4e4969371', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d625f8ce-1911-4e84-990c-f2e4e4969371', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b42fe79-9dd7-4d56-84f0-439dedd37b39, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=74f50749-4537-40d2-bdbc-66d171022745) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:52.224 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 74f50749-4537-40d2-bdbc-66d171022745 in datapath d625f8ce-1911-4e84-990c-f2e4e4969371 unbound from our chassis
Oct 13 15:47:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:52.227 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d625f8ce-1911-4e84-990c-f2e4e4969371 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:52.228 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1bdcdf-c7ce-44fd-96fe-23e01804490d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:52.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:52 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:52.461 2 INFO neutron.agent.securitygroups_rpc [None req-9514c8c3-e1d9-4b90-8724-04d4861682ee 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['38ecbc22-009d-4583-a9ab-fde721d3f0af']
Oct 13 15:47:52 standalone.localdomain podman[553987]: 2025-10-13 15:47:52.624768199 +0000 UTC m=+0.071962547 container create 2a2e632c99126f7931dee2e4972c8a0d09eb494d81935d51e775ec2927ac88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:47:52 standalone.localdomain systemd[1]: Started libpod-conmon-2a2e632c99126f7931dee2e4972c8a0d09eb494d81935d51e775ec2927ac88d2.scope.
Oct 13 15:47:52 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:52 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2292ac85b98712dd545228aa017a60cdef6c0da169da774bb7d2a218ed1ebeec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:52 standalone.localdomain podman[553987]: 2025-10-13 15:47:52.683024806 +0000 UTC m=+0.130219144 container init 2a2e632c99126f7931dee2e4972c8a0d09eb494d81935d51e775ec2927ac88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 15:47:52 standalone.localdomain podman[553987]: 2025-10-13 15:47:52.588063014 +0000 UTC m=+0.035257342 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:52 standalone.localdomain podman[553987]: 2025-10-13 15:47:52.692846912 +0000 UTC m=+0.140041250 container start 2a2e632c99126f7931dee2e4972c8a0d09eb494d81935d51e775ec2927ac88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:47:52 standalone.localdomain dnsmasq[554005]: started, version 2.85 cachesize 150
Oct 13 15:47:52 standalone.localdomain dnsmasq[554005]: DNS service limited to local subnets
Oct 13 15:47:52 standalone.localdomain dnsmasq[554005]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:52 standalone.localdomain dnsmasq[554005]: warning: no upstream servers configured
Oct 13 15:47:52 standalone.localdomain dnsmasq-dhcp[554005]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:47:52 standalone.localdomain dnsmasq[554005]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:52 standalone.localdomain dnsmasq-dhcp[554005]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:52 standalone.localdomain dnsmasq-dhcp[554005]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #216. Immutable memtables: 0.
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.700099) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 216
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370472700136, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 436, "num_deletes": 255, "total_data_size": 193427, "memory_usage": 201800, "flush_reason": "Manual Compaction"}
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #217: started
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370472703382, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 217, "file_size": 190221, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 94571, "largest_seqno": 95006, "table_properties": {"data_size": 187808, "index_size": 525, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5789, "raw_average_key_size": 17, "raw_value_size": 182946, "raw_average_value_size": 568, "num_data_blocks": 24, "num_entries": 322, "num_filter_entries": 322, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760370452, "oldest_key_time": 1760370452, "file_creation_time": 1760370472, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 217, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 3334 microseconds, and 1223 cpu microseconds.
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.703430) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #217: 190221 bytes OK
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.703450) [db/memtable_list.cc:519] [default] Level-0 commit table #217 started
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.705611) [db/memtable_list.cc:722] [default] Level-0 commit table #217: memtable #1 done
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.705631) EVENT_LOG_v1 {"time_micros": 1760370472705624, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.705649) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 190720, prev total WAL file size 191210, number of live WAL files 2.
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000213.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.708691) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353134' seq:72057594037927935, type:22 .. '6C6F676D0033373635' seq:0, type:0; will stop at (end)
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [217(185KB)], [215(6634KB)]
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370472708758, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [217], "files_L6": [215], "score": -1, "input_data_size": 6983708, "oldest_snapshot_seqno": -1}
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #218: 7448 keys, 6897562 bytes, temperature: kUnknown
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370472735030, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 218, "file_size": 6897562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6853709, "index_size": 24169, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18629, "raw_key_size": 198214, "raw_average_key_size": 26, "raw_value_size": 6724007, "raw_average_value_size": 902, "num_data_blocks": 944, "num_entries": 7448, "num_filter_entries": 7448, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760370472, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.735228) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 6897562 bytes
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.736696) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 265.2 rd, 262.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 6.5 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(73.0) write-amplify(36.3) OK, records in: 7970, records dropped: 522 output_compression: NoCompression
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.736711) EVENT_LOG_v1 {"time_micros": 1760370472736704, "job": 136, "event": "compaction_finished", "compaction_time_micros": 26330, "compaction_time_cpu_micros": 14003, "output_level": 6, "num_output_files": 1, "total_output_size": 6897562, "num_input_records": 7970, "num_output_records": 7448, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000217.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370472736823, "job": 136, "event": "table_file_deletion", "file_number": 217}
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370472737332, "job": 136, "event": "table_file_deletion", "file_number": 215}
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.708596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.737413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.737419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.737421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.737424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:47:52.737427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:47:52 standalone.localdomain ceph-mon[29756]: pgmap v4007: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:52 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:52.878 2 INFO neutron.agent.securitygroups_rpc [None req-26f95bfa-5c8d-4d20-a1b4-719f6a21ddb3 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['38ecbc22-009d-4583-a9ab-fde721d3f0af']
Oct 13 15:47:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:52.973 496978 INFO neutron.agent.dhcp.agent [None req-7a8adad2-ca12-417a-a47b-be26ca9c34e6 - - - - - -] DHCP configuration for ports {'768a2903-b967-4e86-ae92-fc2a1242b65b', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:47:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:52.974 496978 INFO neutron.agent.linux.ip_lib [None req-d13805cb-2318-4fc5-a698-009a82766f4c - - - - - -] Device tap9404aa16-92 cannot be used as it has no MAC address
Oct 13 15:47:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:52.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:53 standalone.localdomain kernel: device tap9404aa16-92 entered promiscuous mode
Oct 13 15:47:53 standalone.localdomain NetworkManager[5962]: <info>  [1760370473.0071] manager: (tap9404aa16-92): new Generic device (/org/freedesktop/NetworkManager/Devices/109)
Oct 13 15:47:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:53Z|00613|binding|INFO|Claiming lport 9404aa16-9257-434b-aa76-92c18dba29ec for this chassis.
Oct 13 15:47:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:53Z|00614|binding|INFO|9404aa16-9257-434b-aa76-92c18dba29ec: Claiming unknown
Oct 13 15:47:53 standalone.localdomain systemd-udevd[554072]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:47:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:53.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:53.017 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-ee7d652a-8f84-452b-8307-6a986a345242', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee7d652a-8f84-452b-8307-6a986a345242', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c8a682d3-690c-4875-bdbe-a5be698a16ea, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=9404aa16-9257-434b-aa76-92c18dba29ec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:53.020 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 9404aa16-9257-434b-aa76-92c18dba29ec in datapath ee7d652a-8f84-452b-8307-6a986a345242 bound to our chassis
Oct 13 15:47:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:53.022 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ee7d652a-8f84-452b-8307-6a986a345242 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:53.023 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[ef702714-353e-482e-89bb-89d9c5aa6a50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:53Z|00615|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:53 standalone.localdomain podman[554057]: 2025-10-13 15:47:53.040316401 +0000 UTC m=+0.063025297 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:47:53 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:47:53 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:47:53 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:47:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:53.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:53.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:53Z|00616|binding|INFO|Setting lport 9404aa16-9257-434b-aa76-92c18dba29ec ovn-installed in OVS
Oct 13 15:47:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:53Z|00617|binding|INFO|Setting lport 9404aa16-9257-434b-aa76-92c18dba29ec up in Southbound
Oct 13 15:47:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:53.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:53 standalone.localdomain dnsmasq[551334]: read /var/lib/neutron/dhcp/7b5a1eaf-f859-44dd-8036-8ec5e471e646/addn_hosts - 0 addresses
Oct 13 15:47:53 standalone.localdomain dnsmasq-dhcp[551334]: read /var/lib/neutron/dhcp/7b5a1eaf-f859-44dd-8036-8ec5e471e646/host
Oct 13 15:47:53 standalone.localdomain dnsmasq-dhcp[551334]: read /var/lib/neutron/dhcp/7b5a1eaf-f859-44dd-8036-8ec5e471e646/opts
Oct 13 15:47:53 standalone.localdomain podman[554073]: 2025-10-13 15:47:53.096868555 +0000 UTC m=+0.076593250 container kill f59f61cdd48e4ca9922837221dfd97e97f75d642cb924dd8acf04016f282a5c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b5a1eaf-f859-44dd-8036-8ec5e471e646, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:53.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:53 standalone.localdomain dnsmasq[554005]: exiting on receipt of SIGTERM
Oct 13 15:47:53 standalone.localdomain podman[554095]: 2025-10-13 15:47:53.139922728 +0000 UTC m=+0.048498984 container kill 2a2e632c99126f7931dee2e4972c8a0d09eb494d81935d51e775ec2927ac88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:47:53 standalone.localdomain systemd[1]: libpod-2a2e632c99126f7931dee2e4972c8a0d09eb494d81935d51e775ec2927ac88d2.scope: Deactivated successfully.
Oct 13 15:47:53 standalone.localdomain podman[554118]: 2025-10-13 15:47:53.197123932 +0000 UTC m=+0.049485995 container died 2a2e632c99126f7931dee2e4972c8a0d09eb494d81935d51e775ec2927ac88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:47:53 standalone.localdomain podman[554118]: 2025-10-13 15:47:53.226706365 +0000 UTC m=+0.079068398 container cleanup 2a2e632c99126f7931dee2e4972c8a0d09eb494d81935d51e775ec2927ac88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:53 standalone.localdomain systemd[1]: libpod-conmon-2a2e632c99126f7931dee2e4972c8a0d09eb494d81935d51e775ec2927ac88d2.scope: Deactivated successfully.
Oct 13 15:47:53 standalone.localdomain podman[554125]: 2025-10-13 15:47:53.265948899 +0000 UTC m=+0.110344233 container remove 2a2e632c99126f7931dee2e4972c8a0d09eb494d81935d51e775ec2927ac88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:47:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:47:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:53Z|00618|binding|INFO|Releasing lport b28fbc52-7229-4d82-8fc1-2d6e095d1a0f from this chassis (sb_readonly=0)
Oct 13 15:47:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:53Z|00619|binding|INFO|Setting lport b28fbc52-7229-4d82-8fc1-2d6e095d1a0f down in Southbound
Oct 13 15:47:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:53.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:53 standalone.localdomain kernel: device tapb28fbc52-72 left promiscuous mode
Oct 13 15:47:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:53.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 13 15:47:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:53.366 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-7b5a1eaf-f859-44dd-8036-8ec5e471e646', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b5a1eaf-f859-44dd-8036-8ec5e471e646', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89713a2255ea411083c22360247bdb7c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd1d190f-8216-4eb2-aa59-d3dd37b8484e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=b28fbc52-7229-4d82-8fc1-2d6e095d1a0f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:53.368 378821 INFO neutron.agent.ovn.metadata.agent [-] Port b28fbc52-7229-4d82-8fc1-2d6e095d1a0f in datapath 7b5a1eaf-f859-44dd-8036-8ec5e471e646 unbound from our chassis
Oct 13 15:47:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:53.372 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b5a1eaf-f859-44dd-8036-8ec5e471e646, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:53.372 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[b44e3954-b7d7-4193-908b-e3da1d484214]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:53.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11366 DF PROTO=TCP SPT=42744 DPT=9102 SEQ=938044110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD297EE0000000001030307) 
Oct 13 15:47:53 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:53.552 2 INFO neutron.agent.securitygroups_rpc [None req-cbeb2076-890f-41ac-afed-5dd84cb298c1 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['38ecbc22-009d-4583-a9ab-fde721d3f0af']
Oct 13 15:47:53 standalone.localdomain dnsmasq[553439]: exiting on receipt of SIGTERM
Oct 13 15:47:53 standalone.localdomain podman[554194]: 2025-10-13 15:47:53.69460235 +0000 UTC m=+0.065470153 container kill 12251474e13272c613f6cf165df987d30064079e620e6ea098238e930a65174b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d625f8ce-1911-4e84-990c-f2e4e4969371, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:47:53 standalone.localdomain systemd[1]: libpod-12251474e13272c613f6cf165df987d30064079e620e6ea098238e930a65174b.scope: Deactivated successfully.
Oct 13 15:47:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4008: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:53 standalone.localdomain podman[554211]: 2025-10-13 15:47:53.77345274 +0000 UTC m=+0.065033439 container died 12251474e13272c613f6cf165df987d30064079e620e6ea098238e930a65174b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d625f8ce-1911-4e84-990c-f2e4e4969371, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:47:53 standalone.localdomain podman[554211]: 2025-10-13 15:47:53.809081442 +0000 UTC m=+0.100662061 container cleanup 12251474e13272c613f6cf165df987d30064079e620e6ea098238e930a65174b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d625f8ce-1911-4e84-990c-f2e4e4969371, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:53 standalone.localdomain systemd[1]: libpod-conmon-12251474e13272c613f6cf165df987d30064079e620e6ea098238e930a65174b.scope: Deactivated successfully.
Oct 13 15:47:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2292ac85b98712dd545228aa017a60cdef6c0da169da774bb7d2a218ed1ebeec-merged.mount: Deactivated successfully.
Oct 13 15:47:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a2e632c99126f7931dee2e4972c8a0d09eb494d81935d51e775ec2927ac88d2-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-3fd1aada25f08f7071253f258f07a6765f0b07cc348978d02adfa3a718642ede-merged.mount: Deactivated successfully.
Oct 13 15:47:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-12251474e13272c613f6cf165df987d30064079e620e6ea098238e930a65174b-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:53 standalone.localdomain podman[554218]: 2025-10-13 15:47:53.86868327 +0000 UTC m=+0.148747910 container remove 12251474e13272c613f6cf165df987d30064079e620e6ea098238e930a65174b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d625f8ce-1911-4e84-990c-f2e4e4969371, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:47:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:53.917 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:53Z, description=, device_id=fee17db6-0972-4c11-84a8-c8e21526f774, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891086a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ea9790>], id=0f569103-8a09-40b5-93f7-b9eaac2fbf11, ip_allocation=immediate, mac_address=fa:16:3e:50:cf:09, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2158, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:47:53Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:47:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:53.952 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:e7:a4 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b3d945e2-458a-437c-9a37-dcc9416fabab) old=Port_Binding(mac=['fa:16:3e:02:e7:a4 10.100.0.2 2001:db8::f816:3eff:fe02:e7a4'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe02:e7a4/64', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:53.953 378821 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b3d945e2-458a-437c-9a37-dcc9416fabab in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 updated
Oct 13 15:47:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:53.956 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 74436405-b09f-40e8-aa5a-3f0e72ab25db IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:47:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:53.957 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:53.958 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[44b32cb7-fbb5-462f-a418-e575360504a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:53 standalone.localdomain podman[554240]: 2025-10-13 15:47:53.989992954 +0000 UTC m=+0.098696559 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:47:53 standalone.localdomain podman[554240]: 2025-10-13 15:47:53.999256143 +0000 UTC m=+0.107959788 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:47:54 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:47:54 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:54.061 2 INFO neutron.agent.securitygroups_rpc [None req-70ea053f-d436-402d-9b3c-3e936e6ba9af 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['38ecbc22-009d-4583-a9ab-fde721d3f0af']
Oct 13 15:47:54 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:47:54 standalone.localdomain podman[554308]: 2025-10-13 15:47:54.178321999 +0000 UTC m=+0.055189342 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:54 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:47:54 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:47:54 standalone.localdomain podman[554297]: 
Oct 13 15:47:54 standalone.localdomain podman[554297]: 2025-10-13 15:47:54.2023974 +0000 UTC m=+0.091698212 container create f60c4d4d1128212e4aa3680e1da8e60b32bab9bf40c3acbabde5896e35d858ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee7d652a-8f84-452b-8307-6a986a345242, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 15:47:54 standalone.localdomain podman[554297]: 2025-10-13 15:47:54.148087956 +0000 UTC m=+0.037388838 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:54 standalone.localdomain systemd[1]: Started libpod-conmon-f60c4d4d1128212e4aa3680e1da8e60b32bab9bf40c3acbabde5896e35d858ba.scope.
Oct 13 15:47:54 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:54 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dacd9c8e03fd33adc545b60d78ac52ac0aca2d4846f281ec8524b3974143ee3b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:54.293 496978 INFO neutron.agent.dhcp.agent [None req-2f11c8e9-51f3-4272-81d4-c1185c6222d9 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:47:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:54.294 496978 INFO neutron.agent.dhcp.agent [None req-2f11c8e9-51f3-4272-81d4-c1185c6222d9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:54.295 496978 INFO neutron.agent.dhcp.agent [None req-2f11c8e9-51f3-4272-81d4-c1185c6222d9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:54 standalone.localdomain podman[554297]: 2025-10-13 15:47:54.296378342 +0000 UTC m=+0.185679174 container init f60c4d4d1128212e4aa3680e1da8e60b32bab9bf40c3acbabde5896e35d858ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee7d652a-8f84-452b-8307-6a986a345242, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 15:47:54 standalone.localdomain podman[554297]: 2025-10-13 15:47:54.306435035 +0000 UTC m=+0.195735877 container start f60c4d4d1128212e4aa3680e1da8e60b32bab9bf40c3acbabde5896e35d858ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee7d652a-8f84-452b-8307-6a986a345242, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:47:54 standalone.localdomain dnsmasq[554340]: started, version 2.85 cachesize 150
Oct 13 15:47:54 standalone.localdomain dnsmasq[554340]: DNS service limited to local subnets
Oct 13 15:47:54 standalone.localdomain dnsmasq[554340]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:54 standalone.localdomain dnsmasq[554340]: warning: no upstream servers configured
Oct 13 15:47:54 standalone.localdomain dnsmasq-dhcp[554340]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:47:54 standalone.localdomain dnsmasq[554340]: read /var/lib/neutron/dhcp/ee7d652a-8f84-452b-8307-6a986a345242/addn_hosts - 0 addresses
Oct 13 15:47:54 standalone.localdomain dnsmasq-dhcp[554340]: read /var/lib/neutron/dhcp/ee7d652a-8f84-452b-8307-6a986a345242/host
Oct 13 15:47:54 standalone.localdomain dnsmasq-dhcp[554340]: read /var/lib/neutron/dhcp/ee7d652a-8f84-452b-8307-6a986a345242/opts
Oct 13 15:47:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:54.360 496978 INFO neutron.agent.dhcp.agent [None req-9292a9cb-fa7f-4ab0-853d-dfc89f730870 - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:47:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:54.492 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:53Z, description=, device_id=a95b315a-337f-4a2c-a0a9-a693cdc71cf0, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891395b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891394f0>], id=624f9d5a-b8e6-45f0-828f-c17284b8c791, ip_allocation=immediate, mac_address=fa:16:3e:88:c7:f6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2160, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:47:54Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:47:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:54.510 496978 INFO neutron.agent.dhcp.agent [None req-d53188d6-1d9c-457a-958f-2cdc56782fe0 - - - - - -] DHCP configuration for ports {'19cbb7d0-65c6-46ee-bd0b-64e62c8be99c', '0f569103-8a09-40b5-93f7-b9eaac2fbf11'} is completed
Oct 13 15:47:54 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:54.545 2 INFO neutron.agent.securitygroups_rpc [None req-a580ca4a-71c4-4264-ae34-efef59d8d5cc 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['38ecbc22-009d-4583-a9ab-fde721d3f0af']
Oct 13 15:47:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11367 DF PROTO=TCP SPT=42744 DPT=9102 SEQ=938044110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD29BF70000000001030307) 
Oct 13 15:47:54 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:47:54 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:47:54 standalone.localdomain podman[554383]: 2025-10-13 15:47:54.747350889 +0000 UTC m=+0.062886153 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:54 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:47:54 standalone.localdomain dnsmasq[553641]: exiting on receipt of SIGTERM
Oct 13 15:47:54 standalone.localdomain podman[554403]: 2025-10-13 15:47:54.786419967 +0000 UTC m=+0.050744093 container kill 192a37fdc7dafa06f59511ff9ddbcb794af225471f25756de9501eeb826b4ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0ddff84-359f-452c-98ae-c7b5e8430e4f, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:47:54 standalone.localdomain systemd[1]: libpod-192a37fdc7dafa06f59511ff9ddbcb794af225471f25756de9501eeb826b4ff2.scope: Deactivated successfully.
Oct 13 15:47:54 standalone.localdomain ceph-mon[29756]: pgmap v4008: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:54 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dd625f8ce\x2d1911\x2d4e84\x2d990c\x2df2e4e4969371.mount: Deactivated successfully.
Oct 13 15:47:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:54.866 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:54 standalone.localdomain podman[554422]: 2025-10-13 15:47:54.86920001 +0000 UTC m=+0.068979963 container died 192a37fdc7dafa06f59511ff9ddbcb794af225471f25756de9501eeb826b4ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0ddff84-359f-452c-98ae-c7b5e8430e4f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:47:54 standalone.localdomain systemd[1]: tmp-crun.POvGFa.mount: Deactivated successfully.
Oct 13 15:47:54 standalone.localdomain systemd[1]: tmp-crun.Tc97sf.mount: Deactivated successfully.
Oct 13 15:47:54 standalone.localdomain podman[554422]: 2025-10-13 15:47:54.916872907 +0000 UTC m=+0.116652790 container cleanup 192a37fdc7dafa06f59511ff9ddbcb794af225471f25756de9501eeb826b4ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0ddff84-359f-452c-98ae-c7b5e8430e4f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:47:54 standalone.localdomain systemd[1]: libpod-conmon-192a37fdc7dafa06f59511ff9ddbcb794af225471f25756de9501eeb826b4ff2.scope: Deactivated successfully.
Oct 13 15:47:54 standalone.localdomain podman[554429]: 2025-10-13 15:47:54.958057602 +0000 UTC m=+0.141752183 container remove 192a37fdc7dafa06f59511ff9ddbcb794af225471f25756de9501eeb826b4ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0ddff84-359f-452c-98ae-c7b5e8430e4f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:47:54 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:54Z|00620|binding|INFO|Releasing lport 32c2f85f-eb39-41e8-af26-b19bf7252add from this chassis (sb_readonly=0)
Oct 13 15:47:54 standalone.localdomain kernel: device tap32c2f85f-eb left promiscuous mode
Oct 13 15:47:54 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:54Z|00621|binding|INFO|Setting lport 32c2f85f-eb39-41e8-af26-b19bf7252add down in Southbound
Oct 13 15:47:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:54.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:54.980 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-c0ddff84-359f-452c-98ae-c7b5e8430e4f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0ddff84-359f-452c-98ae-c7b5e8430e4f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89713a2255ea411083c22360247bdb7c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a804136c-6f0a-42d8-a30b-fc7bc6bf96e9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=32c2f85f-eb39-41e8-af26-b19bf7252add) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:54.982 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 32c2f85f-eb39-41e8-af26-b19bf7252add in datapath c0ddff84-359f-452c-98ae-c7b5e8430e4f unbound from our chassis
Oct 13 15:47:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:54.987 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c0ddff84-359f-452c-98ae-c7b5e8430e4f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:54.991 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[1231dcfa-32a8-4b58-a238-634beec264d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:54.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:55 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:55.036 2 INFO neutron.agent.securitygroups_rpc [None req-17fc9afb-bfc4-4ca4-a936-6dfb1bd452c0 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['38ecbc22-009d-4583-a9ab-fde721d3f0af']
Oct 13 15:47:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:55.072 496978 INFO neutron.agent.dhcp.agent [None req-00081899-00be-46cf-98e2-82441903c686 - - - - - -] DHCP configuration for ports {'624f9d5a-b8e6-45f0-828f-c17284b8c791'} is completed
Oct 13 15:47:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:55.215 496978 INFO neutron.agent.dhcp.agent [None req-ee00b4fe-013d-4a8e-9cb0-02d62936d1f5 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:47:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:55.216 496978 INFO neutron.agent.dhcp.agent [None req-ee00b4fe-013d-4a8e-9cb0-02d62936d1f5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:55.217 496978 INFO neutron.agent.dhcp.agent [None req-ee00b4fe-013d-4a8e-9cb0-02d62936d1f5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:55 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:55.589 2 INFO neutron.agent.securitygroups_rpc [None req-f83d8f7e-5589-4f91-8737-d5f6c2b8080c 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['38ecbc22-009d-4583-a9ab-fde721d3f0af']
Oct 13 15:47:55 standalone.localdomain podman[554496]: 
Oct 13 15:47:55 standalone.localdomain podman[554496]: 2025-10-13 15:47:55.678723902 +0000 UTC m=+0.081013698 container create 9765f83de8c103fbea3b50b6fdc0bbdca07551ca7810d36ca38d74bf7ecbd2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:55 standalone.localdomain systemd[1]: Started libpod-conmon-9765f83de8c103fbea3b50b6fdc0bbdca07551ca7810d36ca38d74bf7ecbd2a6.scope.
Oct 13 15:47:55 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:55 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ae98a8ae37e0d2f32c4a94821c59c3bf6cddcdda742251b407acf8e05c357e8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:55 standalone.localdomain podman[554496]: 2025-10-13 15:47:55.730536228 +0000 UTC m=+0.132826054 container init 9765f83de8c103fbea3b50b6fdc0bbdca07551ca7810d36ca38d74bf7ecbd2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:47:55 standalone.localdomain podman[554496]: 2025-10-13 15:47:55.737995751 +0000 UTC m=+0.140285577 container start 9765f83de8c103fbea3b50b6fdc0bbdca07551ca7810d36ca38d74bf7ecbd2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:47:55 standalone.localdomain podman[554496]: 2025-10-13 15:47:55.64501455 +0000 UTC m=+0.047304426 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:55 standalone.localdomain dnsmasq[554516]: started, version 2.85 cachesize 150
Oct 13 15:47:55 standalone.localdomain dnsmasq[554516]: DNS service limited to local subnets
Oct 13 15:47:55 standalone.localdomain dnsmasq[554516]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:55 standalone.localdomain dnsmasq[554516]: warning: no upstream servers configured
Oct 13 15:47:55 standalone.localdomain dnsmasq-dhcp[554516]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:47:55 standalone.localdomain dnsmasq[554516]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:55 standalone.localdomain dnsmasq-dhcp[554516]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:55 standalone.localdomain dnsmasq-dhcp[554516]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4009: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ce29101fc849ab4ed4086a3948077dcd30cea60e9d919ffbc5978e62f424f6a8-merged.mount: Deactivated successfully.
Oct 13 15:47:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-192a37fdc7dafa06f59511ff9ddbcb794af225471f25756de9501eeb826b4ff2-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:55 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dc0ddff84\x2d359f\x2d452c\x2d98ae\x2dc7b5e8430e4f.mount: Deactivated successfully.
Oct 13 15:47:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:55.885 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:55Z|00622|binding|INFO|Removing iface tap9404aa16-92 ovn-installed in OVS
Oct 13 15:47:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:55.904 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 634ad86c-adaf-4997-a93f-04f53ff48b3a with type ""
Oct 13 15:47:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:55Z|00623|binding|INFO|Removing lport 9404aa16-9257-434b-aa76-92c18dba29ec ovn-installed in OVS
Oct 13 15:47:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:55.906 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-ee7d652a-8f84-452b-8307-6a986a345242', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee7d652a-8f84-452b-8307-6a986a345242', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c8a682d3-690c-4875-bdbe-a5be698a16ea, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=9404aa16-9257-434b-aa76-92c18dba29ec) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:55.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:55.908 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 9404aa16-9257-434b-aa76-92c18dba29ec in datapath ee7d652a-8f84-452b-8307-6a986a345242 unbound from our chassis
Oct 13 15:47:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:55.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:55.911 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee7d652a-8f84-452b-8307-6a986a345242, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:55.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:55 standalone.localdomain kernel: device tap9404aa16-92 left promiscuous mode
Oct 13 15:47:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:55.912 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[fac07cc5-0ea0-4b1a-aa6e-16fbfd07e205]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:55.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:55.984 496978 INFO neutron.agent.dhcp.agent [None req-0b829000-e9d8-41f6-9d53-bced71c4a9a0 - - - - - -] DHCP configuration for ports {'768a2903-b967-4e86-ae92-fc2a1242b65b', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:47:56 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:56.185 2 INFO neutron.agent.securitygroups_rpc [None req-faf7852f-0219-496d-9ddb-88ff82c5abe3 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['38ecbc22-009d-4583-a9ab-fde721d3f0af']
Oct 13 15:47:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:56Z|00624|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:56.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:56 standalone.localdomain systemd[1]: tmp-crun.QwhAKf.mount: Deactivated successfully.
Oct 13 15:47:56 standalone.localdomain podman[554535]: 2025-10-13 15:47:56.570700955 +0000 UTC m=+0.073039799 container kill f60c4d4d1128212e4aa3680e1da8e60b32bab9bf40c3acbabde5896e35d858ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee7d652a-8f84-452b-8307-6a986a345242, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:56 standalone.localdomain dnsmasq[554340]: read /var/lib/neutron/dhcp/ee7d652a-8f84-452b-8307-6a986a345242/addn_hosts - 0 addresses
Oct 13 15:47:56 standalone.localdomain dnsmasq-dhcp[554340]: read /var/lib/neutron/dhcp/ee7d652a-8f84-452b-8307-6a986a345242/host
Oct 13 15:47:56 standalone.localdomain dnsmasq-dhcp[554340]: read /var/lib/neutron/dhcp/ee7d652a-8f84-452b-8307-6a986a345242/opts
Oct 13 15:47:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:47:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:56.609 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:e7:a4 10.100.0.2 2001:db8::f816:3eff:fe02:e7a4'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe02:e7a4/64', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b3d945e2-458a-437c-9a37-dcc9416fabab) old=Port_Binding(mac=['fa:16:3e:02:e7:a4 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent [None req-bd2d8175-e8a6-4dc4-b4a7-84b26d8447e1 - - - - - -] Unable to reload_allocations dhcp for ee7d652a-8f84-452b-8307-6a986a345242.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap9404aa16-92 not found in namespace qdhcp-ee7d652a-8f84-452b-8307-6a986a345242.
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     return fut.result()
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     raise self._exception
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap9404aa16-92 not found in namespace qdhcp-ee7d652a-8f84-452b-8307-6a986a345242.
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.609 496978 ERROR neutron.agent.dhcp.agent 
Oct 13 15:47:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:56.613 378821 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b3d945e2-458a-437c-9a37-dcc9416fabab in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 updated
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.613 496978 INFO neutron.agent.dhcp.agent [None req-915778d8-0e6b-4f7f-aaf9-f04f6562baf0 - - - - - -] Synchronizing state
Oct 13 15:47:56 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:56.615 2 INFO neutron.agent.securitygroups_rpc [None req-f2e494e9-1daa-4b40-af3d-1222d7972ca7 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['38ecbc22-009d-4583-a9ab-fde721d3f0af']
Oct 13 15:47:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:56.616 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 74436405-b09f-40e8-aa5a-3f0e72ab25db IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:47:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:56.616 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:56.617 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[8bad5228-d503-4065-aced-8198b99d8421]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11368 DF PROTO=TCP SPT=42744 DPT=9102 SEQ=938044110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD2A3F60000000001030307) 
Oct 13 15:47:56 standalone.localdomain podman[554549]: 2025-10-13 15:47:56.677022652 +0000 UTC m=+0.080598655 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 15:47:56 standalone.localdomain podman[554549]: 2025-10-13 15:47:56.683955439 +0000 UTC m=+0.087531432 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 15:47:56 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.824 496978 INFO neutron.agent.dhcp.agent [None req-557f92c7-4107-4d0e-8512-b985d78b2236 - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.826 496978 INFO neutron.agent.dhcp.agent [-] Starting network c5fef9bf-ce37-49cb-b6c1-cf5c6b8cb009 dhcp configuration
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.827 496978 INFO neutron.agent.dhcp.agent [-] Finished network c5fef9bf-ce37-49cb-b6c1-cf5c6b8cb009 dhcp configuration
Oct 13 15:47:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:56.827 496978 INFO neutron.agent.dhcp.agent [-] Starting network ee7d652a-8f84-452b-8307-6a986a345242 dhcp configuration
Oct 13 15:47:56 standalone.localdomain ceph-mon[29756]: pgmap v4009: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:57 standalone.localdomain dnsmasq[554340]: exiting on receipt of SIGTERM
Oct 13 15:47:57 standalone.localdomain podman[554586]: 2025-10-13 15:47:57.034221525 +0000 UTC m=+0.069862081 container kill f60c4d4d1128212e4aa3680e1da8e60b32bab9bf40c3acbabde5896e35d858ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee7d652a-8f84-452b-8307-6a986a345242, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:47:57 standalone.localdomain systemd[1]: libpod-f60c4d4d1128212e4aa3680e1da8e60b32bab9bf40c3acbabde5896e35d858ba.scope: Deactivated successfully.
Oct 13 15:47:57 standalone.localdomain podman[554602]: 2025-10-13 15:47:57.124836421 +0000 UTC m=+0.065473253 container died f60c4d4d1128212e4aa3680e1da8e60b32bab9bf40c3acbabde5896e35d858ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee7d652a-8f84-452b-8307-6a986a345242, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:47:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f60c4d4d1128212e4aa3680e1da8e60b32bab9bf40c3acbabde5896e35d858ba-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-dacd9c8e03fd33adc545b60d78ac52ac0aca2d4846f281ec8524b3974143ee3b-merged.mount: Deactivated successfully.
Oct 13 15:47:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:57.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:57 standalone.localdomain podman[554602]: 2025-10-13 15:47:57.226241014 +0000 UTC m=+0.166877846 container remove f60c4d4d1128212e4aa3680e1da8e60b32bab9bf40c3acbabde5896e35d858ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ee7d652a-8f84-452b-8307-6a986a345242, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:57 standalone.localdomain systemd[1]: libpod-conmon-f60c4d4d1128212e4aa3680e1da8e60b32bab9bf40c3acbabde5896e35d858ba.scope: Deactivated successfully.
Oct 13 15:47:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:57.326 496978 INFO neutron.agent.dhcp.agent [None req-49dd5f59-770b-42d6-9f09-ab3ed69e4680 - - - - - -] Finished network ee7d652a-8f84-452b-8307-6a986a345242 dhcp configuration
Oct 13 15:47:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:57.329 496978 INFO neutron.agent.dhcp.agent [None req-557f92c7-4107-4d0e-8512-b985d78b2236 - - - - - -] Synchronizing state complete
Oct 13 15:47:57 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:57.457 2 INFO neutron.agent.securitygroups_rpc [None req-d5312b35-93b5-4cfe-8f55-919461e07b30 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['38ecbc22-009d-4583-a9ab-fde721d3f0af']
Oct 13 15:47:57 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dee7d652a\x2d8f84\x2d452b\x2d8307\x2d6a986a345242.mount: Deactivated successfully.
Oct 13 15:47:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:57.632 496978 INFO neutron.agent.dhcp.agent [None req-712c4e97-2ce7-4fe3-a131-5265949cdff0 - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:47:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:47:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4010: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:57 standalone.localdomain dnsmasq[552424]: exiting on receipt of SIGTERM
Oct 13 15:47:57 standalone.localdomain podman[554661]: 2025-10-13 15:47:57.809358654 +0000 UTC m=+0.063306466 container kill 55ec28dda79e0d63959a1921e6b71ed2c3b6c0d798528d129fa99f4fe0aecf20 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fac02be2-073c-46c5-9613-b3e18596338a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:57 standalone.localdomain systemd[1]: libpod-55ec28dda79e0d63959a1921e6b71ed2c3b6c0d798528d129fa99f4fe0aecf20.scope: Deactivated successfully.
Oct 13 15:47:57 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:57.880 2 INFO neutron.agent.securitygroups_rpc [None req-7aa6bf81-6903-43b4-884e-f0c210f81f8d db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:47:57 standalone.localdomain dnsmasq[554516]: exiting on receipt of SIGTERM
Oct 13 15:47:57 standalone.localdomain podman[554670]: 2025-10-13 15:47:57.884400264 +0000 UTC m=+0.110172947 container kill 9765f83de8c103fbea3b50b6fdc0bbdca07551ca7810d36ca38d74bf7ecbd2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:57 standalone.localdomain systemd[1]: libpod-9765f83de8c103fbea3b50b6fdc0bbdca07551ca7810d36ca38d74bf7ecbd2a6.scope: Deactivated successfully.
Oct 13 15:47:57 standalone.localdomain podman[554693]: 2025-10-13 15:47:57.910681285 +0000 UTC m=+0.071009877 container died 55ec28dda79e0d63959a1921e6b71ed2c3b6c0d798528d129fa99f4fe0aecf20 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fac02be2-073c-46c5-9613-b3e18596338a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:47:57 standalone.localdomain podman[554712]: 2025-10-13 15:47:57.936579402 +0000 UTC m=+0.042699122 container died 9765f83de8c103fbea3b50b6fdc0bbdca07551ca7810d36ca38d74bf7ecbd2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:47:58 standalone.localdomain podman[554693]: 2025-10-13 15:47:58.013604925 +0000 UTC m=+0.173933527 container remove 55ec28dda79e0d63959a1921e6b71ed2c3b6c0d798528d129fa99f4fe0aecf20 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fac02be2-073c-46c5-9613-b3e18596338a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:47:58 standalone.localdomain podman[554712]: 2025-10-13 15:47:58.024145904 +0000 UTC m=+0.130265614 container cleanup 9765f83de8c103fbea3b50b6fdc0bbdca07551ca7810d36ca38d74bf7ecbd2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:47:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2ae98a8ae37e0d2f32c4a94821c59c3bf6cddcdda742251b407acf8e05c357e8-merged.mount: Deactivated successfully.
Oct 13 15:47:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9765f83de8c103fbea3b50b6fdc0bbdca07551ca7810d36ca38d74bf7ecbd2a6-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-939113b247a4f23d3bfa24046d7b8ca2b59a18176015381181f4ad42d7f03e8b-merged.mount: Deactivated successfully.
Oct 13 15:47:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-55ec28dda79e0d63959a1921e6b71ed2c3b6c0d798528d129fa99f4fe0aecf20-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:58 standalone.localdomain kernel: device tap3319ac37-79 left promiscuous mode
Oct 13 15:47:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:58Z|00625|binding|INFO|Releasing lport 3319ac37-7952-476d-9260-b8de17a0b995 from this chassis (sb_readonly=0)
Oct 13 15:47:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:58.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:58Z|00626|binding|INFO|Setting lport 3319ac37-7952-476d-9260-b8de17a0b995 down in Southbound
Oct 13 15:47:58 standalone.localdomain systemd[1]: libpod-conmon-9765f83de8c103fbea3b50b6fdc0bbdca07551ca7810d36ca38d74bf7ecbd2a6.scope: Deactivated successfully.
Oct 13 15:47:58 standalone.localdomain systemd[1]: libpod-conmon-55ec28dda79e0d63959a1921e6b71ed2c3b6c0d798528d129fa99f4fe0aecf20.scope: Deactivated successfully.
Oct 13 15:47:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:58.045 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-fac02be2-073c-46c5-9613-b3e18596338a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fac02be2-073c-46c5-9613-b3e18596338a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89713a2255ea411083c22360247bdb7c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b4f12ea6-1f2f-4673-b0cb-6932bceae79c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=3319ac37-7952-476d-9260-b8de17a0b995) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:58.048 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 3319ac37-7952-476d-9260-b8de17a0b995 in datapath fac02be2-073c-46c5-9613-b3e18596338a unbound from our chassis
Oct 13 15:47:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:58.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:58.055 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fac02be2-073c-46c5-9613-b3e18596338a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:58.056 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8ef75d-cd54-4cb5-b40a-b52622fd22ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:58 standalone.localdomain podman[554722]: 2025-10-13 15:47:58.066928028 +0000 UTC m=+0.154177400 container remove 9765f83de8c103fbea3b50b6fdc0bbdca07551ca7810d36ca38d74bf7ecbd2a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:47:58 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dfac02be2\x2d073c\x2d46c5\x2d9613\x2db3e18596338a.mount: Deactivated successfully.
Oct 13 15:47:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:58.305 496978 INFO neutron.agent.dhcp.agent [None req-b0faaabd-a55e-4386-b974-230f7431cc5b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:58.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:58 standalone.localdomain podman[554781]: 2025-10-13 15:47:58.442863875 +0000 UTC m=+0.057363020 container kill 89f54d7a6b87efb420bd5eb082faf2830df7b4189b29b753e4dd4c2a3c4939df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-147b6b76-2e5c-4c83-8fde-2c38b71b5fee, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:47:58 standalone.localdomain dnsmasq[551791]: exiting on receipt of SIGTERM
Oct 13 15:47:58 standalone.localdomain systemd[1]: libpod-89f54d7a6b87efb420bd5eb082faf2830df7b4189b29b753e4dd4c2a3c4939df.scope: Deactivated successfully.
Oct 13 15:47:58 standalone.localdomain podman[554798]: 2025-10-13 15:47:58.506008325 +0000 UTC m=+0.052097636 container died 89f54d7a6b87efb420bd5eb082faf2830df7b4189b29b753e4dd4c2a3c4939df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-147b6b76-2e5c-4c83-8fde-2c38b71b5fee, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:47:58 standalone.localdomain podman[554798]: 2025-10-13 15:47:58.532004515 +0000 UTC m=+0.078093786 container cleanup 89f54d7a6b87efb420bd5eb082faf2830df7b4189b29b753e4dd4c2a3c4939df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-147b6b76-2e5c-4c83-8fde-2c38b71b5fee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:58 standalone.localdomain systemd[1]: libpod-conmon-89f54d7a6b87efb420bd5eb082faf2830df7b4189b29b753e4dd4c2a3c4939df.scope: Deactivated successfully.
Oct 13 15:47:58 standalone.localdomain podman[554805]: 2025-10-13 15:47:58.59497778 +0000 UTC m=+0.131594256 container remove 89f54d7a6b87efb420bd5eb082faf2830df7b4189b29b753e4dd4c2a3c4939df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-147b6b76-2e5c-4c83-8fde-2c38b71b5fee, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:47:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:58Z|00627|binding|INFO|Releasing lport c592a2b8-b84c-464a-ac06-59686b702837 from this chassis (sb_readonly=0)
Oct 13 15:47:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:58.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:58 standalone.localdomain kernel: device tapc592a2b8-b8 left promiscuous mode
Oct 13 15:47:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:58Z|00628|binding|INFO|Setting lport c592a2b8-b84c-464a-ac06-59686b702837 down in Southbound
Oct 13 15:47:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:58.617 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-147b6b76-2e5c-4c83-8fde-2c38b71b5fee', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-147b6b76-2e5c-4c83-8fde-2c38b71b5fee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89713a2255ea411083c22360247bdb7c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109bd2fe-b0bb-46f1-a8dd-33163a315639, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=c592a2b8-b84c-464a-ac06-59686b702837) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:58.619 378821 INFO neutron.agent.ovn.metadata.agent [-] Port c592a2b8-b84c-464a-ac06-59686b702837 in datapath 147b6b76-2e5c-4c83-8fde-2c38b71b5fee unbound from our chassis
Oct 13 15:47:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:58.623 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 147b6b76-2e5c-4c83-8fde-2c38b71b5fee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:47:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:58.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:58.625 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[22ef7bb0-f725-4a85-a38b-5d5f636caea9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:58 standalone.localdomain ceph-mon[29756]: pgmap v4010: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:58.919 496978 INFO neutron.agent.dhcp.agent [None req-9463474d-6f41-4a0d-a2d1-147302ce2d9f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1c30c690e803ec38f20d7580efe2545a1ab67b0b0b5defafe10fef94bd70fa03-merged.mount: Deactivated successfully.
Oct 13 15:47:59 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89f54d7a6b87efb420bd5eb082faf2830df7b4189b29b753e4dd4c2a3c4939df-userdata-shm.mount: Deactivated successfully.
Oct 13 15:47:59 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d147b6b76\x2d2e5c\x2d4c83\x2d8fde\x2d2c38b71b5fee.mount: Deactivated successfully.
Oct 13 15:47:59 standalone.localdomain podman[554861]: 
Oct 13 15:47:59 standalone.localdomain podman[554861]: 2025-10-13 15:47:59.119954466 +0000 UTC m=+0.093497428 container create 5ee0fc36b6d64837d50d434622923ae1874524a34dc6abc64f6663cbb2cda2b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:47:59 standalone.localdomain podman[554861]: 2025-10-13 15:47:59.07584078 +0000 UTC m=+0.049383792 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:47:59 standalone.localdomain systemd[1]: Started libpod-conmon-5ee0fc36b6d64837d50d434622923ae1874524a34dc6abc64f6663cbb2cda2b1.scope.
Oct 13 15:47:59 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:47:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63df9b2465fc0a2f2c3013204ff1f688bbdc91a809fc8ad28451e9f7a2823bb8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:47:59 standalone.localdomain podman[554861]: 2025-10-13 15:47:59.214257358 +0000 UTC m=+0.187800320 container init 5ee0fc36b6d64837d50d434622923ae1874524a34dc6abc64f6663cbb2cda2b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:59 standalone.localdomain podman[554861]: 2025-10-13 15:47:59.220708048 +0000 UTC m=+0.194251010 container start 5ee0fc36b6d64837d50d434622923ae1874524a34dc6abc64f6663cbb2cda2b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:47:59 standalone.localdomain dnsmasq[554879]: started, version 2.85 cachesize 150
Oct 13 15:47:59 standalone.localdomain dnsmasq[554879]: DNS service limited to local subnets
Oct 13 15:47:59 standalone.localdomain dnsmasq[554879]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:47:59 standalone.localdomain dnsmasq[554879]: warning: no upstream servers configured
Oct 13 15:47:59 standalone.localdomain dnsmasq-dhcp[554879]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:47:59 standalone.localdomain dnsmasq[554879]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:47:59 standalone.localdomain dnsmasq-dhcp[554879]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:59 standalone.localdomain dnsmasq-dhcp[554879]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:59.238 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:47:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:59.275 496978 INFO neutron.agent.dhcp.agent [None req-378a17e7-4369-490a-b689-e754a987902a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:57Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889085160>, <neutron.agent.linux.dhcp.DictModel object at 0x7f18892947f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890b6d60>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1889135850>], id=096ca49c-943e-4101-b495-c787953e4d38, ip_allocation=immediate, mac_address=fa:16:3e:22:16:6c, name=tempest-NetworksTestDHCPv6-1435106518, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=31, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['3603bf6a-23e5-41f8-9ea4-221a207114c0', 'bd87c199-8ee1-4483-8687-5c30bd771db5'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:47:54Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], 
standard_attr_id=2168, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:47:57Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:47:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:59.472 496978 INFO neutron.agent.dhcp.agent [None req-4ac7a462-b66d-4e82-b2a8-e199cadb1dde - - - - - -] DHCP configuration for ports {'768a2903-b967-4e86-ae92-fc2a1242b65b', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:47:59 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:47:59 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:47:59 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:47:59 standalone.localdomain podman[554914]: 2025-10-13 15:47:59.494954093 +0000 UTC m=+0.064925026 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:47:59 standalone.localdomain dnsmasq[554879]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 2 addresses
Oct 13 15:47:59 standalone.localdomain dnsmasq-dhcp[554879]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:47:59 standalone.localdomain podman[554925]: 2025-10-13 15:47:59.52272742 +0000 UTC m=+0.065734391 container kill 5ee0fc36b6d64837d50d434622923ae1874524a34dc6abc64f6663cbb2cda2b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:47:59 standalone.localdomain dnsmasq-dhcp[554879]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:47:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:59.716 496978 INFO neutron.agent.linux.ip_lib [None req-dea15c3b-88b2-4fde-954e-66c5cdd83003 - - - - - -] Device tapea3677b6-e4 cannot be used as it has no MAC address
Oct 13 15:47:59 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:59Z|00629|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:47:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4011: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:47:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:59.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:59 standalone.localdomain kernel: device tapea3677b6-e4 entered promiscuous mode
Oct 13 15:47:59 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:59Z|00630|binding|INFO|Claiming lport ea3677b6-e4f5-4a84-bd8e-182407d5122f for this chassis.
Oct 13 15:47:59 standalone.localdomain NetworkManager[5962]: <info>  [1760370479.8173] manager: (tapea3677b6-e4): new Generic device (/org/freedesktop/NetworkManager/Devices/110)
Oct 13 15:47:59 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:59Z|00631|binding|INFO|ea3677b6-e4f5-4a84-bd8e-182407d5122f: Claiming unknown
Oct 13 15:47:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:59.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:59 standalone.localdomain systemd-udevd[554969]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:47:59 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:47:59.830 2 INFO neutron.agent.securitygroups_rpc [None req-4cef3a7c-ce29-4a1c-a8cd-42dee95a9f88 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:47:59 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapea3677b6-e4: No such device
Oct 13 15:47:59 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:59Z|00632|binding|INFO|Setting lport ea3677b6-e4f5-4a84-bd8e-182407d5122f ovn-installed in OVS
Oct 13 15:47:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:59.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:59 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapea3677b6-e4: No such device
Oct 13 15:47:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:59.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:59 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapea3677b6-e4: No such device
Oct 13 15:47:59 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapea3677b6-e4: No such device
Oct 13 15:47:59 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapea3677b6-e4: No such device
Oct 13 15:47:59 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapea3677b6-e4: No such device
Oct 13 15:47:59 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapea3677b6-e4: No such device
Oct 13 15:47:59 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapea3677b6-e4: No such device
Oct 13 15:47:59 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:47:59Z|00633|binding|INFO|Setting lport ea3677b6-e4f5-4a84-bd8e-182407d5122f up in Southbound
Oct 13 15:47:59 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:59.912 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d1f78710-d032-4f42-8b33-952b5cd721ff', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d1f78710-d032-4f42-8b33-952b5cd721ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e828193-8760-40ca-9388-096c1736714f, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=ea3677b6-e4f5-4a84-bd8e-182407d5122f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:47:59 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:59.914 378821 INFO neutron.agent.ovn.metadata.agent [-] Port ea3677b6-e4f5-4a84-bd8e-182407d5122f in datapath d1f78710-d032-4f42-8b33-952b5cd721ff bound to our chassis
Oct 13 15:47:59 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:59.916 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d1f78710-d032-4f42-8b33-952b5cd721ff or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:47:59 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:47:59.917 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[14c3ac6f-fc0e-4fca-8717-59bb65ff1a2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:47:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:47:59.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:47:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:47:59.942 496978 INFO neutron.agent.dhcp.agent [None req-66e83674-66fd-469a-b814-0963df7e6f48 - - - - - -] DHCP configuration for ports {'096ca49c-943e-4101-b495-c787953e4d38'} is completed
Oct 13 15:48:00 standalone.localdomain sudo[554997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:48:00 standalone.localdomain sudo[554997]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:48:00 standalone.localdomain sudo[554997]: pam_unix(sudo:session): session closed for user root
Oct 13 15:48:00 standalone.localdomain sudo[555017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:48:00 standalone.localdomain sudo[555017]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:48:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11369 DF PROTO=TCP SPT=42744 DPT=9102 SEQ=938044110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD2B3B70000000001030307) 
Oct 13 15:48:00 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:00.701 2 INFO neutron.agent.securitygroups_rpc [None req-38788210-1cea-4e7e-b7d4-6ac7967408ae 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['d859d02b-1fa4-4cf1-a494-b3b1ec0508fa']
Oct 13 15:48:00 standalone.localdomain sudo[555017]: pam_unix(sudo:session): session closed for user root
Oct 13 15:48:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:48:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:48:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:48:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:48:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:48:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:48:00 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:48:00 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:48:00 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 401ea72f-36d2-4071-bdb2-d9acb7b8481d (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:48:00 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 401ea72f-36d2-4071-bdb2-d9acb7b8481d (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:48:00 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 401ea72f-36d2-4071-bdb2-d9acb7b8481d (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:48:00 standalone.localdomain ceph-mon[29756]: pgmap v4011: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:48:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:48:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:48:00 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:48:00 standalone.localdomain podman[555124]: 
Oct 13 15:48:00 standalone.localdomain podman[555124]: 2025-10-13 15:48:00.838690679 +0000 UTC m=+0.083821035 container create 811b7737fe271e3e64d5fd0eb6854e59ae9aa81f861e798ebd6b669be6958416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d1f78710-d032-4f42-8b33-952b5cd721ff, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:48:00 standalone.localdomain sudo[555140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:48:00 standalone.localdomain sudo[555140]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:48:00 standalone.localdomain sudo[555140]: pam_unix(sudo:session): session closed for user root
Oct 13 15:48:00 standalone.localdomain dnsmasq[554879]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:00 standalone.localdomain dnsmasq-dhcp[554879]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:00 standalone.localdomain dnsmasq-dhcp[554879]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:00 standalone.localdomain podman[555139]: 2025-10-13 15:48:00.859394095 +0000 UTC m=+0.060344663 container kill 5ee0fc36b6d64837d50d434622923ae1874524a34dc6abc64f6663cbb2cda2b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:48:00 standalone.localdomain podman[555124]: 2025-10-13 15:48:00.793016525 +0000 UTC m=+0.038146951 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:00 standalone.localdomain systemd[1]: Started libpod-conmon-811b7737fe271e3e64d5fd0eb6854e59ae9aa81f861e798ebd6b669be6958416.scope.
Oct 13 15:48:00 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:00 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8b8b0dfe5ae18481da19860b6ad547f079135306b95c1999ded7d4e52b9d9f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:00 standalone.localdomain podman[555124]: 2025-10-13 15:48:00.964791123 +0000 UTC m=+0.209921499 container init 811b7737fe271e3e64d5fd0eb6854e59ae9aa81f861e798ebd6b669be6958416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d1f78710-d032-4f42-8b33-952b5cd721ff, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:48:00 standalone.localdomain podman[555124]: 2025-10-13 15:48:00.97623521 +0000 UTC m=+0.221365596 container start 811b7737fe271e3e64d5fd0eb6854e59ae9aa81f861e798ebd6b669be6958416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d1f78710-d032-4f42-8b33-952b5cd721ff, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:00 standalone.localdomain dnsmasq[555181]: started, version 2.85 cachesize 150
Oct 13 15:48:00 standalone.localdomain dnsmasq[555181]: DNS service limited to local subnets
Oct 13 15:48:00 standalone.localdomain dnsmasq[555181]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:00 standalone.localdomain dnsmasq[555181]: warning: no upstream servers configured
Oct 13 15:48:00 standalone.localdomain dnsmasq-dhcp[555181]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:48:00 standalone.localdomain dnsmasq[555181]: read /var/lib/neutron/dhcp/d1f78710-d032-4f42-8b33-952b5cd721ff/addn_hosts - 0 addresses
Oct 13 15:48:00 standalone.localdomain dnsmasq-dhcp[555181]: read /var/lib/neutron/dhcp/d1f78710-d032-4f42-8b33-952b5cd721ff/host
Oct 13 15:48:00 standalone.localdomain dnsmasq-dhcp[555181]: read /var/lib/neutron/dhcp/d1f78710-d032-4f42-8b33-952b5cd721ff/opts
Oct 13 15:48:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:01.047 496978 INFO neutron.agent.dhcp.agent [None req-dea15c3b-88b2-4fde-954e-66c5cdd83003 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:59Z, description=, device_id=b4c672be-7dfa-443a-90ab-5f7aa9eb0a30, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f321c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f32880>], id=60208601-16f7-4f64-bf08-2b697aed4082, ip_allocation=immediate, mac_address=fa:16:3e:e3:35:72, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:47:56Z, description=, dns_domain=, id=d1f78710-d032-4f42-8b33-952b5cd721ff, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-358981427, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=22717, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2164, status=ACTIVE, subnets=['12ee8704-19af-4e8c-8682-2c1e1c13a7f0'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:47:58Z, vlan_transparent=None, network_id=d1f78710-d032-4f42-8b33-952b5cd721ff, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2176, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:47:59Z on network d1f78710-d032-4f42-8b33-952b5cd721ff
Oct 13 15:48:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:01.102 496978 INFO neutron.agent.dhcp.agent [None req-a17cb03b-5f24-488b-9adc-527c22ee2b37 - - - - - -] DHCP configuration for ports {'83b26ec2-2e97-4442-9132-299bfce3bf9e'} is completed
Oct 13 15:48:01 standalone.localdomain dnsmasq[555181]: read /var/lib/neutron/dhcp/d1f78710-d032-4f42-8b33-952b5cd721ff/addn_hosts - 1 addresses
Oct 13 15:48:01 standalone.localdomain dnsmasq-dhcp[555181]: read /var/lib/neutron/dhcp/d1f78710-d032-4f42-8b33-952b5cd721ff/host
Oct 13 15:48:01 standalone.localdomain dnsmasq-dhcp[555181]: read /var/lib/neutron/dhcp/d1f78710-d032-4f42-8b33-952b5cd721ff/opts
Oct 13 15:48:01 standalone.localdomain podman[555205]: 2025-10-13 15:48:01.2557772 +0000 UTC m=+0.069487219 container kill 811b7737fe271e3e64d5fd0eb6854e59ae9aa81f861e798ebd6b669be6958416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d1f78710-d032-4f42-8b33-952b5cd721ff, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:48:01 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:01.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:01.443 496978 INFO neutron.agent.dhcp.agent [None req-dea15c3b-88b2-4fde-954e-66c5cdd83003 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:47:59Z, description=, device_id=b4c672be-7dfa-443a-90ab-5f7aa9eb0a30, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890b76d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890b7640>], id=60208601-16f7-4f64-bf08-2b697aed4082, ip_allocation=immediate, mac_address=fa:16:3e:e3:35:72, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:47:56Z, description=, dns_domain=, id=d1f78710-d032-4f42-8b33-952b5cd721ff, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-358981427, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=22717, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2164, status=ACTIVE, subnets=['12ee8704-19af-4e8c-8682-2c1e1c13a7f0'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:47:58Z, vlan_transparent=None, network_id=d1f78710-d032-4f42-8b33-952b5cd721ff, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2176, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:47:59Z on network d1f78710-d032-4f42-8b33-952b5cd721ff
Oct 13 15:48:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:01.561 496978 INFO neutron.agent.dhcp.agent [None req-fb85d9ec-e228-4d26-a896-35ae052aa0e2 - - - - - -] DHCP configuration for ports {'60208601-16f7-4f64-bf08-2b697aed4082'} is completed
Oct 13 15:48:01 standalone.localdomain dnsmasq[555181]: read /var/lib/neutron/dhcp/d1f78710-d032-4f42-8b33-952b5cd721ff/addn_hosts - 1 addresses
Oct 13 15:48:01 standalone.localdomain dnsmasq-dhcp[555181]: read /var/lib/neutron/dhcp/d1f78710-d032-4f42-8b33-952b5cd721ff/host
Oct 13 15:48:01 standalone.localdomain dnsmasq-dhcp[555181]: read /var/lib/neutron/dhcp/d1f78710-d032-4f42-8b33-952b5cd721ff/opts
Oct 13 15:48:01 standalone.localdomain podman[555245]: 2025-10-13 15:48:01.662751785 +0000 UTC m=+0.065348289 container kill 811b7737fe271e3e64d5fd0eb6854e59ae9aa81f861e798ebd6b669be6958416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d1f78710-d032-4f42-8b33-952b5cd721ff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:48:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:01.722 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4012: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:01.935 496978 INFO neutron.agent.dhcp.agent [None req-9eefeb52-444a-479b-8c46-2a5f4d119e19 - - - - - -] DHCP configuration for ports {'60208601-16f7-4f64-bf08-2b697aed4082'} is completed
Oct 13 15:48:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:02.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:02 standalone.localdomain dnsmasq[554879]: exiting on receipt of SIGTERM
Oct 13 15:48:02 standalone.localdomain podman[555298]: 2025-10-13 15:48:02.696946964 +0000 UTC m=+0.066744303 container kill 5ee0fc36b6d64837d50d434622923ae1874524a34dc6abc64f6663cbb2cda2b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:48:02 standalone.localdomain systemd[1]: libpod-5ee0fc36b6d64837d50d434622923ae1874524a34dc6abc64f6663cbb2cda2b1.scope: Deactivated successfully.
Oct 13 15:48:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:48:02 standalone.localdomain systemd[1]: tmp-crun.iNlUWM.mount: Deactivated successfully.
Oct 13 15:48:02 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:48:02 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:48:02 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:48:02 standalone.localdomain podman[555308]: 2025-10-13 15:48:02.739632946 +0000 UTC m=+0.074283298 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:48:02 standalone.localdomain podman[555324]: 2025-10-13 15:48:02.774412571 +0000 UTC m=+0.061664434 container died 5ee0fc36b6d64837d50d434622923ae1874524a34dc6abc64f6663cbb2cda2b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:48:02 standalone.localdomain podman[555324]: 2025-10-13 15:48:02.804760938 +0000 UTC m=+0.092012721 container cleanup 5ee0fc36b6d64837d50d434622923ae1874524a34dc6abc64f6663cbb2cda2b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:48:02 standalone.localdomain systemd[1]: libpod-conmon-5ee0fc36b6d64837d50d434622923ae1874524a34dc6abc64f6663cbb2cda2b1.scope: Deactivated successfully.
Oct 13 15:48:02 standalone.localdomain ceph-mon[29756]: pgmap v4012: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:02 standalone.localdomain podman[555333]: 2025-10-13 15:48:02.85741501 +0000 UTC m=+0.133285578 container remove 5ee0fc36b6d64837d50d434622923ae1874524a34dc6abc64f6663cbb2cda2b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 15:48:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:02.942 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:da:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'c6:b9:d4:9e:30:db'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:02.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:02 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:02.944 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 13 15:48:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:03.404 496978 INFO neutron.agent.linux.ip_lib [None req-7aee389c-1f5f-49a7-aa03-4e4202e57985 - - - - - -] Device tap0f21b957-ad cannot be used as it has no MAC address
Oct 13 15:48:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:03.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:03.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:03 standalone.localdomain kernel: device tap0f21b957-ad entered promiscuous mode
Oct 13 15:48:03 standalone.localdomain NetworkManager[5962]: <info>  [1760370483.4453] manager: (tap0f21b957-ad): new Generic device (/org/freedesktop/NetworkManager/Devices/111)
Oct 13 15:48:03 standalone.localdomain systemd-udevd[555404]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:48:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:03Z|00634|binding|INFO|Claiming lport 0f21b957-ad45-4a33-92e6-bc9238d2802b for this chassis.
Oct 13 15:48:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:03.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:03Z|00635|binding|INFO|0f21b957-ad45-4a33-92e6-bc9238d2802b: Claiming unknown
Oct 13 15:48:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:03.463 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-bde12338-b889-4412-87bf-7969bdf7bdfb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bde12338-b889-4412-87bf-7969bdf7bdfb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ed8b681-65d6-49ac-9636-38e243825c46, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=0f21b957-ad45-4a33-92e6-bc9238d2802b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:03.465 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 0f21b957-ad45-4a33-92e6-bc9238d2802b in datapath bde12338-b889-4412-87bf-7969bdf7bdfb bound to our chassis
Oct 13 15:48:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:03.467 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bde12338-b889-4412-87bf-7969bdf7bdfb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:03.468 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[0d1db1aa-952e-4c81-9f4e-7ffdc0df2017]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:03 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap0f21b957-ad: No such device
Oct 13 15:48:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:03Z|00636|binding|INFO|Setting lport 0f21b957-ad45-4a33-92e6-bc9238d2802b ovn-installed in OVS
Oct 13 15:48:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:03Z|00637|binding|INFO|Setting lport 0f21b957-ad45-4a33-92e6-bc9238d2802b up in Southbound
Oct 13 15:48:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:03.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:03 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap0f21b957-ad: No such device
Oct 13 15:48:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:03.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:03 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap0f21b957-ad: No such device
Oct 13 15:48:03 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap0f21b957-ad: No such device
Oct 13 15:48:03 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap0f21b957-ad: No such device
Oct 13 15:48:03 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap0f21b957-ad: No such device
Oct 13 15:48:03 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap0f21b957-ad: No such device
Oct 13 15:48:03 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap0f21b957-ad: No such device
Oct 13 15:48:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:03.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:03.569 496978 INFO neutron.agent.linux.ip_lib [None req-e1a11e1c-1708-4c13-b913-296a07c7e766 - - - - - -] Device tap4c6ff4b2-ba cannot be used as it has no MAC address
Oct 13 15:48:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:03.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:03 standalone.localdomain kernel: device tap4c6ff4b2-ba entered promiscuous mode
Oct 13 15:48:03 standalone.localdomain systemd-udevd[555407]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:48:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:03.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:03 standalone.localdomain NetworkManager[5962]: <info>  [1760370483.6030] manager: (tap4c6ff4b2-ba): new Generic device (/org/freedesktop/NetworkManager/Devices/112)
Oct 13 15:48:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:03Z|00638|binding|INFO|Claiming lport 4c6ff4b2-bad6-4303-8aaf-c85431968519 for this chassis.
Oct 13 15:48:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:03Z|00639|binding|INFO|4c6ff4b2-bad6-4303-8aaf-c85431968519: Claiming unknown
Oct 13 15:48:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:03.618 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-a216632f-cda4-4672-88f3-7cc11177318c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a216632f-cda4-4672-88f3-7cc11177318c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ce5d7dd-0f62-46d4-aa82-c2711697a291, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=4c6ff4b2-bad6-4303-8aaf-c85431968519) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:03.619 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 4c6ff4b2-bad6-4303-8aaf-c85431968519 in datapath a216632f-cda4-4672-88f3-7cc11177318c bound to our chassis
Oct 13 15:48:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:03.621 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a216632f-cda4-4672-88f3-7cc11177318c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:03 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:03.621 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[c53c1361-5a6e-4f2c-a94f-4a6879f85fbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:03Z|00640|binding|INFO|Setting lport 4c6ff4b2-bad6-4303-8aaf-c85431968519 ovn-installed in OVS
Oct 13 15:48:03 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:03Z|00641|binding|INFO|Setting lport 4c6ff4b2-bad6-4303-8aaf-c85431968519 up in Southbound
Oct 13 15:48:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:03.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:03.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-63df9b2465fc0a2f2c3013204ff1f688bbdc91a809fc8ad28451e9f7a2823bb8-merged.mount: Deactivated successfully.
Oct 13 15:48:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ee0fc36b6d64837d50d434622923ae1874524a34dc6abc64f6663cbb2cda2b1-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:03.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4013: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:03 standalone.localdomain podman[555479]: 
Oct 13 15:48:03 standalone.localdomain podman[555479]: 2025-10-13 15:48:03.851759897 +0000 UTC m=+0.066997980 container create c378e6ab32f1e62f081c8f49114206d8582e9a4112233f3e13f78700e28e4422 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:48:03 standalone.localdomain systemd[1]: Started libpod-conmon-c378e6ab32f1e62f081c8f49114206d8582e9a4112233f3e13f78700e28e4422.scope.
Oct 13 15:48:03 standalone.localdomain systemd[1]: tmp-crun.AHIkLR.mount: Deactivated successfully.
Oct 13 15:48:03 standalone.localdomain podman[555479]: 2025-10-13 15:48:03.815976271 +0000 UTC m=+0.031214364 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:03 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:03 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13d11269551c317c2d1921c4b3e0ab837fea8bbf426ef29d5c238eb5dc41e913/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:03 standalone.localdomain podman[555479]: 2025-10-13 15:48:03.938305776 +0000 UTC m=+0.153543819 container init c378e6ab32f1e62f081c8f49114206d8582e9a4112233f3e13f78700e28e4422 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:48:03 standalone.localdomain podman[555479]: 2025-10-13 15:48:03.943727206 +0000 UTC m=+0.158965259 container start c378e6ab32f1e62f081c8f49114206d8582e9a4112233f3e13f78700e28e4422 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:03 standalone.localdomain dnsmasq[555512]: started, version 2.85 cachesize 150
Oct 13 15:48:03 standalone.localdomain dnsmasq[555512]: DNS service limited to local subnets
Oct 13 15:48:03 standalone.localdomain dnsmasq[555512]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:03 standalone.localdomain dnsmasq[555512]: warning: no upstream servers configured
Oct 13 15:48:03 standalone.localdomain dnsmasq-dhcp[555512]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:48:03 standalone.localdomain dnsmasq[555512]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:03 standalone.localdomain dnsmasq-dhcp[555512]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:03 standalone.localdomain dnsmasq-dhcp[555512]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:04.076 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:03Z, description=, device_id=2240e398-76be-4605-9593-c08898a48334, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890b7c10>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890b7c40>], id=e9eeb242-82c2-4cad-8c6a-703970a7da0c, ip_allocation=immediate, mac_address=fa:16:3e:61:68:45, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2200, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:48:03Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:48:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:04.285 496978 INFO neutron.agent.dhcp.agent [None req-3f0a0ac9-3be4-4ce4-8f7e-98d41cce1acd - - - - - -] DHCP configuration for ports {'768a2903-b967-4e86-ae92-fc2a1242b65b', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:48:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:04.370 496978 INFO neutron.agent.linux.ip_lib [None req-ee6721b5-da91-4e86-93eb-64f400e9ff8d - - - - - -] Device tap24e82d40-4e cannot be used as it has no MAC address
Oct 13 15:48:04 standalone.localdomain dnsmasq[551334]: exiting on receipt of SIGTERM
Oct 13 15:48:04 standalone.localdomain podman[555581]: 2025-10-13 15:48:04.409521896 +0000 UTC m=+0.063380398 container kill f59f61cdd48e4ca9922837221dfd97e97f75d642cb924dd8acf04016f282a5c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b5a1eaf-f859-44dd-8036-8ec5e471e646, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:48:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:04.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:04 standalone.localdomain systemd[1]: libpod-f59f61cdd48e4ca9922837221dfd97e97f75d642cb924dd8acf04016f282a5c9.scope: Deactivated successfully.
Oct 13 15:48:04 standalone.localdomain kernel: device tap24e82d40-4e entered promiscuous mode
Oct 13 15:48:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:04.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:04 standalone.localdomain NetworkManager[5962]: <info>  [1760370484.4166] manager: (tap24e82d40-4e): new Generic device (/org/freedesktop/NetworkManager/Devices/113)
Oct 13 15:48:04 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:04Z|00642|binding|INFO|Claiming lport 24e82d40-4e94-4c99-855c-7dd1cefb9950 for this chassis.
Oct 13 15:48:04 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:04Z|00643|binding|INFO|24e82d40-4e94-4c99-855c-7dd1cefb9950: Claiming unknown
Oct 13 15:48:04 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:04.426 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-061740cd-cd36-44f9-8305-b1b1c923e9f6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-061740cd-cd36-44f9-8305-b1b1c923e9f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e3c4560-96c5-4b6d-96ec-7d2ad9942d55, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=24e82d40-4e94-4c99-855c-7dd1cefb9950) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:04 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:04.427 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 24e82d40-4e94-4c99-855c-7dd1cefb9950 in datapath 061740cd-cd36-44f9-8305-b1b1c923e9f6 bound to our chassis
Oct 13 15:48:04 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:04Z|00644|binding|INFO|Setting lport 24e82d40-4e94-4c99-855c-7dd1cefb9950 ovn-installed in OVS
Oct 13 15:48:04 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:04Z|00645|binding|INFO|Setting lport 24e82d40-4e94-4c99-855c-7dd1cefb9950 up in Southbound
Oct 13 15:48:04 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:04.428 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 061740cd-cd36-44f9-8305-b1b1c923e9f6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:04.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:04 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:04.429 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[a24f79e6-7d28-4105-a8e9-7595ea10d821]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:04 standalone.localdomain dnsmasq[555512]: exiting on receipt of SIGTERM
Oct 13 15:48:04 standalone.localdomain systemd[1]: libpod-c378e6ab32f1e62f081c8f49114206d8582e9a4112233f3e13f78700e28e4422.scope: Deactivated successfully.
Oct 13 15:48:04 standalone.localdomain podman[555591]: 2025-10-13 15:48:04.444412844 +0000 UTC m=+0.080986767 container kill c378e6ab32f1e62f081c8f49114206d8582e9a4112233f3e13f78700e28e4422 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:48:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:04.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:04 standalone.localdomain podman[555676]: 2025-10-13 15:48:04.537180128 +0000 UTC m=+0.032790145 container died c378e6ab32f1e62f081c8f49114206d8582e9a4112233f3e13f78700e28e4422 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:04.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:04 standalone.localdomain podman[555676]: 2025-10-13 15:48:04.572324824 +0000 UTC m=+0.067934841 container remove c378e6ab32f1e62f081c8f49114206d8582e9a4112233f3e13f78700e28e4422 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:48:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:04.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:04 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:04Z|00646|binding|INFO|Releasing lport 768a2903-b967-4e86-ae92-fc2a1242b65b from this chassis (sb_readonly=0)
Oct 13 15:48:04 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:04Z|00647|binding|INFO|Setting lport 768a2903-b967-4e86-ae92-fc2a1242b65b down in Southbound
Oct 13 15:48:04 standalone.localdomain kernel: device tap768a2903-b9 left promiscuous mode
Oct 13 15:48:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:04.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:04 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:04.601 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe8a:bd77/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=768a2903-b967-4e86-ae92-fc2a1242b65b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:04 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:04.602 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 768a2903-b967-4e86-ae92-fc2a1242b65b in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:48:04 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:04.605 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:04 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:04.606 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[fced87a9-4587-4203-98d4-946b649d8286]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:04 standalone.localdomain podman[555620]: 2025-10-13 15:48:04.623083857 +0000 UTC m=+0.179293773 container died f59f61cdd48e4ca9922837221dfd97e97f75d642cb924dd8acf04016f282a5c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b5a1eaf-f859-44dd-8036-8ec5e471e646, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:48:04 standalone.localdomain podman[555620]: 2025-10-13 15:48:04.675061058 +0000 UTC m=+0.231270974 container remove f59f61cdd48e4ca9922837221dfd97e97f75d642cb924dd8acf04016f282a5c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7b5a1eaf-f859-44dd-8036-8ec5e471e646, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:48:04 standalone.localdomain systemd[1]: libpod-conmon-f59f61cdd48e4ca9922837221dfd97e97f75d642cb924dd8acf04016f282a5c9.scope: Deactivated successfully.
Oct 13 15:48:04 standalone.localdomain podman[555704]: 
Oct 13 15:48:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-13d11269551c317c2d1921c4b3e0ab837fea8bbf426ef29d5c238eb5dc41e913-merged.mount: Deactivated successfully.
Oct 13 15:48:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c378e6ab32f1e62f081c8f49114206d8582e9a4112233f3e13f78700e28e4422-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8fd686c1d5cbc3492536adc36af1051998859accc41a705f3b6fa2669820a1a6-merged.mount: Deactivated successfully.
Oct 13 15:48:04 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f59f61cdd48e4ca9922837221dfd97e97f75d642cb924dd8acf04016f282a5c9-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:04 standalone.localdomain podman[555704]: 2025-10-13 15:48:04.701242995 +0000 UTC m=+0.145534460 container create 4e5f97619e620ef90942060eabc09bab3284e53e98576f1f362a7614d69a8cf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:04 standalone.localdomain systemd[1]: libpod-conmon-c378e6ab32f1e62f081c8f49114206d8582e9a4112233f3e13f78700e28e4422.scope: Deactivated successfully.
Oct 13 15:48:04 standalone.localdomain systemd[1]: Started libpod-conmon-4e5f97619e620ef90942060eabc09bab3284e53e98576f1f362a7614d69a8cf1.scope.
Oct 13 15:48:04 standalone.localdomain podman[555704]: 2025-10-13 15:48:04.646055314 +0000 UTC m=+0.090346799 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:04 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d88a6f96b24e66842c3126e10483bd2553c6be397666c8fdaf1c5d3313a26d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:04 standalone.localdomain podman[555723]: 
Oct 13 15:48:04 standalone.localdomain podman[555704]: 2025-10-13 15:48:04.757559662 +0000 UTC m=+0.201851137 container init 4e5f97619e620ef90942060eabc09bab3284e53e98576f1f362a7614d69a8cf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:04 standalone.localdomain podman[555704]: 2025-10-13 15:48:04.767994878 +0000 UTC m=+0.212286343 container start 4e5f97619e620ef90942060eabc09bab3284e53e98576f1f362a7614d69a8cf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:04 standalone.localdomain dnsmasq[555760]: started, version 2.85 cachesize 150
Oct 13 15:48:04 standalone.localdomain dnsmasq[555760]: DNS service limited to local subnets
Oct 13 15:48:04 standalone.localdomain dnsmasq[555760]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:04 standalone.localdomain dnsmasq[555760]: warning: no upstream servers configured
Oct 13 15:48:04 standalone.localdomain dnsmasq-dhcp[555760]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:48:04 standalone.localdomain dnsmasq[555760]: read /var/lib/neutron/dhcp/bde12338-b889-4412-87bf-7969bdf7bdfb/addn_hosts - 0 addresses
Oct 13 15:48:04 standalone.localdomain dnsmasq-dhcp[555760]: read /var/lib/neutron/dhcp/bde12338-b889-4412-87bf-7969bdf7bdfb/host
Oct 13 15:48:04 standalone.localdomain dnsmasq-dhcp[555760]: read /var/lib/neutron/dhcp/bde12338-b889-4412-87bf-7969bdf7bdfb/opts
Oct 13 15:48:04 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:48:04 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:48:04 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:48:04 standalone.localdomain podman[555645]: 2025-10-13 15:48:04.791066257 +0000 UTC m=+0.337277652 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:04 standalone.localdomain podman[555723]: 2025-10-13 15:48:04.809342206 +0000 UTC m=+0.176516516 container create 1975218bbc6077511a516e09cd208a65ff7ec1d241e8cb8ae3f82624a9128c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a216632f-cda4-4672-88f3-7cc11177318c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:48:04 standalone.localdomain podman[555723]: 2025-10-13 15:48:04.715210591 +0000 UTC m=+0.082384921 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:04 standalone.localdomain ceph-mon[29756]: pgmap v4013: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:04 standalone.localdomain systemd[1]: Started libpod-conmon-1975218bbc6077511a516e09cd208a65ff7ec1d241e8cb8ae3f82624a9128c2a.scope.
Oct 13 15:48:04 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c59daf35003511ec47ed41d0f85dd8a76f6464b70e6d2ca1ce659e7c77700142/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:04 standalone.localdomain podman[555723]: 2025-10-13 15:48:04.867996256 +0000 UTC m=+0.235170576 container init 1975218bbc6077511a516e09cd208a65ff7ec1d241e8cb8ae3f82624a9128c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a216632f-cda4-4672-88f3-7cc11177318c, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:48:04 standalone.localdomain podman[555723]: 2025-10-13 15:48:04.880042412 +0000 UTC m=+0.247216722 container start 1975218bbc6077511a516e09cd208a65ff7ec1d241e8cb8ae3f82624a9128c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a216632f-cda4-4672-88f3-7cc11177318c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 13 15:48:04 standalone.localdomain dnsmasq[555777]: started, version 2.85 cachesize 150
Oct 13 15:48:04 standalone.localdomain dnsmasq[555777]: DNS service limited to local subnets
Oct 13 15:48:04 standalone.localdomain dnsmasq[555777]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:04 standalone.localdomain dnsmasq[555777]: warning: no upstream servers configured
Oct 13 15:48:04 standalone.localdomain dnsmasq-dhcp[555777]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d
Oct 13 15:48:04 standalone.localdomain dnsmasq[555777]: read /var/lib/neutron/dhcp/a216632f-cda4-4672-88f3-7cc11177318c/addn_hosts - 0 addresses
Oct 13 15:48:04 standalone.localdomain dnsmasq-dhcp[555777]: read /var/lib/neutron/dhcp/a216632f-cda4-4672-88f3-7cc11177318c/host
Oct 13 15:48:04 standalone.localdomain dnsmasq-dhcp[555777]: read /var/lib/neutron/dhcp/a216632f-cda4-4672-88f3-7cc11177318c/opts
Oct 13 15:48:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:04.943 496978 INFO neutron.agent.dhcp.agent [None req-e1a11e1c-1708-4c13-b913-296a07c7e766 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:48:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:04.944 496978 INFO neutron.agent.dhcp.agent [None req-e1a11e1c-1708-4c13-b913-296a07c7e766 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:03Z, description=, device_id=b4c672be-7dfa-443a-90ab-5f7aa9eb0a30, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890d9d00>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890d9b80>], id=b2f8ce23-0ffe-4252-9e4e-7cb2f72050d9, ip_allocation=immediate, mac_address=fa:16:3e:90:09:b9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:01Z, description=, dns_domain=, id=a216632f-cda4-4672-88f3-7cc11177318c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-668831366, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=14838, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2181, status=ACTIVE, subnets=['f371dc77-d670-460b-be6b-f85488a8956f'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:48:02Z, vlan_transparent=None, network_id=a216632f-cda4-4672-88f3-7cc11177318c, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2199, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:48:04Z on network a216632f-cda4-4672-88f3-7cc11177318c
Oct 13 15:48:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:05.009 496978 INFO neutron.agent.dhcp.agent [None req-15bdd49a-62b8-4b79-9181-4fd79a81b119 - - - - - -] DHCP configuration for ports {'caca84c5-1ad7-4f1e-9ec5-ce99db831b88'} is completed
Oct 13 15:48:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:05.027 496978 INFO neutron.agent.dhcp.agent [None req-a9bed9df-7e18-497e-9920-21f967ac44e9 - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:48:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:05.030 496978 INFO neutron.agent.dhcp.agent [None req-297f947a-4853-4355-8882-95a631fb8ed3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:05.031 496978 INFO neutron.agent.dhcp.agent [None req-297f947a-4853-4355-8882-95a631fb8ed3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:05 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:05.107 2 INFO neutron.agent.securitygroups_rpc [None req-a53af16d-93a0-4633-ab38-3d4c6aabe16c 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['d171522c-2802-4c8d-b622-82580ceb455d']
Oct 13 15:48:05 standalone.localdomain dnsmasq[555777]: read /var/lib/neutron/dhcp/a216632f-cda4-4672-88f3-7cc11177318c/addn_hosts - 1 addresses
Oct 13 15:48:05 standalone.localdomain dnsmasq-dhcp[555777]: read /var/lib/neutron/dhcp/a216632f-cda4-4672-88f3-7cc11177318c/host
Oct 13 15:48:05 standalone.localdomain dnsmasq-dhcp[555777]: read /var/lib/neutron/dhcp/a216632f-cda4-4672-88f3-7cc11177318c/opts
Oct 13 15:48:05 standalone.localdomain podman[555807]: 2025-10-13 15:48:05.116149186 +0000 UTC m=+0.051529929 container kill 1975218bbc6077511a516e09cd208a65ff7ec1d241e8cb8ae3f82624a9128c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a216632f-cda4-4672-88f3-7cc11177318c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:05.232 496978 INFO neutron.agent.dhcp.agent [None req-e4bbac63-86f8-440f-a822-f707dee47334 - - - - - -] DHCP configuration for ports {'c51af9d7-13f7-4f07-838d-069c38da4d3d', 'e9eeb242-82c2-4cad-8c6a-703970a7da0c'} is completed
Oct 13 15:48:05 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:48:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:48:05 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:48:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:05.379 496978 INFO neutron.agent.dhcp.agent [None req-ace269e8-04b4-480f-bedb-7d1bc4492a3f - - - - - -] DHCP configuration for ports {'b2f8ce23-0ffe-4252-9e4e-7cb2f72050d9'} is completed
Oct 13 15:48:05 standalone.localdomain podman[555854]: 
Oct 13 15:48:05 standalone.localdomain podman[555854]: 2025-10-13 15:48:05.443479007 +0000 UTC m=+0.065543076 container create a0d33588ab0dc57bf82ae1ac9b202a5357f4931c20f59a5ac038e12a8cb30e89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-061740cd-cd36-44f9-8305-b1b1c923e9f6, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 13 15:48:05 standalone.localdomain systemd[1]: Started libpod-conmon-a0d33588ab0dc57bf82ae1ac9b202a5357f4931c20f59a5ac038e12a8cb30e89.scope.
Oct 13 15:48:05 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:05 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ebf6e241a2e557053457bdc34646a7ac15d548ca89ab3a7b5c83fb2ebfd4527/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:05 standalone.localdomain podman[555854]: 2025-10-13 15:48:05.492692271 +0000 UTC m=+0.114756340 container init a0d33588ab0dc57bf82ae1ac9b202a5357f4931c20f59a5ac038e12a8cb30e89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-061740cd-cd36-44f9-8305-b1b1c923e9f6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:48:05 standalone.localdomain podman[555854]: 2025-10-13 15:48:05.497830852 +0000 UTC m=+0.119894911 container start a0d33588ab0dc57bf82ae1ac9b202a5357f4931c20f59a5ac038e12a8cb30e89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-061740cd-cd36-44f9-8305-b1b1c923e9f6, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:05 standalone.localdomain podman[555854]: 2025-10-13 15:48:05.410184088 +0000 UTC m=+0.032248167 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:05 standalone.localdomain dnsmasq[555873]: started, version 2.85 cachesize 150
Oct 13 15:48:05 standalone.localdomain dnsmasq[555873]: DNS service limited to local subnets
Oct 13 15:48:05 standalone.localdomain dnsmasq[555873]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:05 standalone.localdomain dnsmasq[555873]: warning: no upstream servers configured
Oct 13 15:48:05 standalone.localdomain dnsmasq-dhcp[555873]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:48:05 standalone.localdomain dnsmasq[555873]: read /var/lib/neutron/dhcp/061740cd-cd36-44f9-8305-b1b1c923e9f6/addn_hosts - 0 addresses
Oct 13 15:48:05 standalone.localdomain dnsmasq-dhcp[555873]: read /var/lib/neutron/dhcp/061740cd-cd36-44f9-8305-b1b1c923e9f6/host
Oct 13 15:48:05 standalone.localdomain dnsmasq-dhcp[555873]: read /var/lib/neutron/dhcp/061740cd-cd36-44f9-8305-b1b1c923e9f6/opts
Oct 13 15:48:05 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:48:05 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d7b5a1eaf\x2df859\x2d44dd\x2d8036\x2d8ec5e471e646.mount: Deactivated successfully.
Oct 13 15:48:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4014: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:05.848 496978 INFO neutron.agent.dhcp.agent [None req-734975d8-4414-407c-b2fb-9c5bcdbec57e - - - - - -] DHCP configuration for ports {'cd3ba24a-dfc6-4465-98a3-581905daa0c3'} is completed
Oct 13 15:48:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:06.130 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:06 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:06.159 2 INFO neutron.agent.securitygroups_rpc [None req-1d0e56eb-a9e6-41c1-83df-39c3e89e5abf 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['d171522c-2802-4c8d-b622-82580ceb455d']
Oct 13 15:48:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:06.292 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:e7:a4 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b3d945e2-458a-437c-9a37-dcc9416fabab) old=Port_Binding(mac=['fa:16:3e:02:e7:a4 10.100.0.2 2001:db8::f816:3eff:fe02:e7a4'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe02:e7a4/64', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:06.294 378821 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b3d945e2-458a-437c-9a37-dcc9416fabab in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 updated
Oct 13 15:48:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:06.298 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:06.300 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[1370d43a-2eab-441b-bbc9-a78b05da99df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:06 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:48:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:06.655 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:06Z, description=, device_id=4130931a-8a87-485d-99eb-cd7546294713, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889076af0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889076dc0>], id=b3d8a2e1-ab2b-4a86-930e-c74615a09ecf, ip_allocation=immediate, mac_address=fa:16:3e:05:a6:01, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:00Z, description=, dns_domain=, id=061740cd-cd36-44f9-8305-b1b1c923e9f6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1364042987, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=33500, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2179, status=ACTIVE, subnets=['119f8cb1-a419-4913-ba09-bf02e55e958b'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:03Z, vlan_transparent=None, network_id=061740cd-cd36-44f9-8305-b1b1c923e9f6, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2204, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:06Z on network 061740cd-cd36-44f9-8305-b1b1c923e9f6
Oct 13 15:48:06 standalone.localdomain dnsmasq[555873]: read /var/lib/neutron/dhcp/061740cd-cd36-44f9-8305-b1b1c923e9f6/addn_hosts - 1 addresses
Oct 13 15:48:06 standalone.localdomain dnsmasq-dhcp[555873]: read /var/lib/neutron/dhcp/061740cd-cd36-44f9-8305-b1b1c923e9f6/host
Oct 13 15:48:06 standalone.localdomain dnsmasq-dhcp[555873]: read /var/lib/neutron/dhcp/061740cd-cd36-44f9-8305-b1b1c923e9f6/opts
Oct 13 15:48:06 standalone.localdomain podman[555891]: 2025-10-13 15:48:06.858848807 +0000 UTC m=+0.053979654 container kill a0d33588ab0dc57bf82ae1ac9b202a5357f4931c20f59a5ac038e12a8cb30e89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-061740cd-cd36-44f9-8305-b1b1c923e9f6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:48:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:06.967 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:48:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:06.968 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:48:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:06.969 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:48:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:07.128 496978 INFO neutron.agent.linux.ip_lib [None req-05146142-e96d-4707-b12f-d01c37bcff7d - - - - - -] Device tap9ca17eab-88 cannot be used as it has no MAC address
Oct 13 15:48:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:07.209 496978 INFO neutron.agent.dhcp.agent [None req-ba348a1b-e5ee-4595-931e-a6f3976a1710 - - - - - -] DHCP configuration for ports {'b3d8a2e1-ab2b-4a86-930e-c74615a09ecf'} is completed
Oct 13 15:48:07 standalone.localdomain dnsmasq[555760]: read /var/lib/neutron/dhcp/bde12338-b889-4412-87bf-7969bdf7bdfb/addn_hosts - 0 addresses
Oct 13 15:48:07 standalone.localdomain dnsmasq-dhcp[555760]: read /var/lib/neutron/dhcp/bde12338-b889-4412-87bf-7969bdf7bdfb/host
Oct 13 15:48:07 standalone.localdomain dnsmasq-dhcp[555760]: read /var/lib/neutron/dhcp/bde12338-b889-4412-87bf-7969bdf7bdfb/opts
Oct 13 15:48:07 standalone.localdomain podman[555934]: 2025-10-13 15:48:07.212057885 +0000 UTC m=+0.106426211 container kill 4e5f97619e620ef90942060eabc09bab3284e53e98576f1f362a7614d69a8cf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:48:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:07.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:07 standalone.localdomain kernel: device tap9ca17eab-88 entered promiscuous mode
Oct 13 15:48:07 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:07Z|00648|binding|INFO|Claiming lport 9ca17eab-8896-42c0-83ac-636695eb3c16 for this chassis.
Oct 13 15:48:07 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:07Z|00649|binding|INFO|9ca17eab-8896-42c0-83ac-636695eb3c16: Claiming unknown
Oct 13 15:48:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:07.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:07 standalone.localdomain NetworkManager[5962]: <info>  [1760370487.2226] manager: (tap9ca17eab-88): new Generic device (/org/freedesktop/NetworkManager/Devices/114)
Oct 13 15:48:07 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:07Z|00650|binding|INFO|Setting lport 9ca17eab-8896-42c0-83ac-636695eb3c16 ovn-installed in OVS
Oct 13 15:48:07 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:07Z|00651|binding|INFO|Setting lport 9ca17eab-8896-42c0-83ac-636695eb3c16 up in Southbound
Oct 13 15:48:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:07.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:07.234 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=9ca17eab-8896-42c0-83ac-636695eb3c16) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:07.237 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 9ca17eab-8896-42c0-83ac-636695eb3c16 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:48:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:07.241 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port e91e68a3-71de-4908-9f0b-b2d4e111b100 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:48:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:07.241 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:07.242 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1f6839-5541-4f83-b93b-7d5b9c23ec10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:07 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap9ca17eab-88: No such device
Oct 13 15:48:07 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap9ca17eab-88: No such device
Oct 13 15:48:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:07.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:07.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:07 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap9ca17eab-88: No such device
Oct 13 15:48:07 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap9ca17eab-88: No such device
Oct 13 15:48:07 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap9ca17eab-88: No such device
Oct 13 15:48:07 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap9ca17eab-88: No such device
Oct 13 15:48:07 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap9ca17eab-88: No such device
Oct 13 15:48:07 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap9ca17eab-88: No such device
Oct 13 15:48:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:07.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:07 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:48:07 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:48:07 standalone.localdomain podman[555991]: 2025-10-13 15:48:07.351796374 +0000 UTC m=+0.058411424 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:48:07 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:48:07 standalone.localdomain ceph-mon[29756]: pgmap v4014: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:07.368 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:03Z, description=, device_id=b4c672be-7dfa-443a-90ab-5f7aa9eb0a30, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889004bb0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890047c0>], id=b2f8ce23-0ffe-4252-9e4e-7cb2f72050d9, ip_allocation=immediate, mac_address=fa:16:3e:90:09:b9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:01Z, description=, dns_domain=, id=a216632f-cda4-4672-88f3-7cc11177318c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-668831366, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=14838, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2181, status=ACTIVE, subnets=['f371dc77-d670-460b-be6b-f85488a8956f'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:48:02Z, vlan_transparent=None, network_id=a216632f-cda4-4672-88f3-7cc11177318c, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2199, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:48:04Z on network a216632f-cda4-4672-88f3-7cc11177318c
Oct 13 15:48:07 standalone.localdomain dnsmasq[555777]: read /var/lib/neutron/dhcp/a216632f-cda4-4672-88f3-7cc11177318c/addn_hosts - 1 addresses
Oct 13 15:48:07 standalone.localdomain dnsmasq-dhcp[555777]: read /var/lib/neutron/dhcp/a216632f-cda4-4672-88f3-7cc11177318c/host
Oct 13 15:48:07 standalone.localdomain podman[556040]: 2025-10-13 15:48:07.54721517 +0000 UTC m=+0.046230114 container kill 1975218bbc6077511a516e09cd208a65ff7ec1d241e8cb8ae3f82624a9128c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a216632f-cda4-4672-88f3-7cc11177318c, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:07 standalone.localdomain dnsmasq-dhcp[555777]: read /var/lib/neutron/dhcp/a216632f-cda4-4672-88f3-7cc11177318c/opts
Oct 13 15:48:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:07.566 496978 INFO neutron.agent.dhcp.agent [None req-d9159a92-f148-4d20-aa00-b7287f019477 - - - - - -] DHCP configuration for ports {'0f21b957-ad45-4a33-92e6-bc9238d2802b', 'caca84c5-1ad7-4f1e-9ec5-ce99db831b88'} is completed
Oct 13 15:48:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:48:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4015: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:07.837 496978 INFO neutron.agent.dhcp.agent [None req-6ae28015-224b-42fc-82c8-bac30a28d058 - - - - - -] DHCP configuration for ports {'b2f8ce23-0ffe-4252-9e4e-7cb2f72050d9'} is completed
Oct 13 15:48:07 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:48:07 standalone.localdomain podman[556080]: 2025-10-13 15:48:07.943413618 +0000 UTC m=+0.063430020 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251009, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:07 standalone.localdomain podman[556080]: 2025-10-13 15:48:07.980072411 +0000 UTC m=+0.100088893 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:48:07 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:48:08 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:08.150 2 INFO neutron.agent.securitygroups_rpc [None req-b37da128-5ebe-4409-8078-e22251c3f0d6 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['1cb4f14e-54f9-4d1c-8836-3eae1e3f0a6c']
Oct 13 15:48:08 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:08.218 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:06Z, description=, device_id=4130931a-8a87-485d-99eb-cd7546294713, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888eaa7c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888eaac40>], id=b3d8a2e1-ab2b-4a86-930e-c74615a09ecf, ip_allocation=immediate, mac_address=fa:16:3e:05:a6:01, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:00Z, description=, dns_domain=, id=061740cd-cd36-44f9-8305-b1b1c923e9f6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1364042987, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=33500, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2179, status=ACTIVE, subnets=['119f8cb1-a419-4913-ba09-bf02e55e958b'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:03Z, vlan_transparent=None, network_id=061740cd-cd36-44f9-8305-b1b1c923e9f6, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2204, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:06Z on network 061740cd-cd36-44f9-8305-b1b1c923e9f6
Oct 13 15:48:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:08Z|00652|binding|INFO|Removing iface tap0f21b957-ad ovn-installed in OVS
Oct 13 15:48:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:08.222 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 01659456-dc53-4417-baac-5f99f930e960 with type ""
Oct 13 15:48:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:08Z|00653|binding|INFO|Removing lport 0f21b957-ad45-4a33-92e6-bc9238d2802b ovn-installed in OVS
Oct 13 15:48:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:08.225 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-bde12338-b889-4412-87bf-7969bdf7bdfb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bde12338-b889-4412-87bf-7969bdf7bdfb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ed8b681-65d6-49ac-9636-38e243825c46, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=0f21b957-ad45-4a33-92e6-bc9238d2802b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:08.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:08.228 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 0f21b957-ad45-4a33-92e6-bc9238d2802b in datapath bde12338-b889-4412-87bf-7969bdf7bdfb unbound from our chassis
Oct 13 15:48:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:08.232 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bde12338-b889-4412-87bf-7969bdf7bdfb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:08.233 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[07b05ec8-dd80-4407-a99a-509dde3a8f15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:08 standalone.localdomain podman[556126]: 
Oct 13 15:48:08 standalone.localdomain podman[556126]: 2025-10-13 15:48:08.291475965 +0000 UTC m=+0.097360217 container create b0c972db354849662969ec944e3814901cd8a8d5d675d5dc7c438f53f2c60a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:08 standalone.localdomain systemd[1]: Started libpod-conmon-b0c972db354849662969ec944e3814901cd8a8d5d675d5dc7c438f53f2c60a03.scope.
Oct 13 15:48:08 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:08 standalone.localdomain podman[556126]: 2025-10-13 15:48:08.248763483 +0000 UTC m=+0.054647755 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:08 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36673501ab6fd92a3d301f5ccdfd9865f6e7aac32d8196804a00c34b475fcde5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:08 standalone.localdomain podman[556126]: 2025-10-13 15:48:08.358903538 +0000 UTC m=+0.164787770 container init b0c972db354849662969ec944e3814901cd8a8d5d675d5dc7c438f53f2c60a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 13 15:48:08 standalone.localdomain podman[556126]: 2025-10-13 15:48:08.367182946 +0000 UTC m=+0.173067178 container start b0c972db354849662969ec944e3814901cd8a8d5d675d5dc7c438f53f2c60a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:08 standalone.localdomain dnsmasq[556188]: started, version 2.85 cachesize 150
Oct 13 15:48:08 standalone.localdomain dnsmasq[556188]: DNS service limited to local subnets
Oct 13 15:48:08 standalone.localdomain dnsmasq[556188]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:08 standalone.localdomain dnsmasq[556188]: warning: no upstream servers configured
Oct 13 15:48:08 standalone.localdomain dnsmasq-dhcp[556188]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:48:08 standalone.localdomain dnsmasq[556188]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:08 standalone.localdomain dnsmasq-dhcp[556188]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:08 standalone.localdomain dnsmasq-dhcp[556188]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:08 standalone.localdomain dnsmasq[555760]: exiting on receipt of SIGTERM
Oct 13 15:48:08 standalone.localdomain podman[556166]: 2025-10-13 15:48:08.385186889 +0000 UTC m=+0.063361188 container kill 4e5f97619e620ef90942060eabc09bab3284e53e98576f1f362a7614d69a8cf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:08 standalone.localdomain systemd[1]: libpod-4e5f97619e620ef90942060eabc09bab3284e53e98576f1f362a7614d69a8cf1.scope: Deactivated successfully.
Oct 13 15:48:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:08.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:08 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:08.457 496978 INFO neutron.agent.dhcp.agent [None req-59378018-f66e-4cd2-a1bd-b2e0a0b378d7 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:48:08 standalone.localdomain dnsmasq[555873]: read /var/lib/neutron/dhcp/061740cd-cd36-44f9-8305-b1b1c923e9f6/addn_hosts - 1 addresses
Oct 13 15:48:08 standalone.localdomain dnsmasq-dhcp[555873]: read /var/lib/neutron/dhcp/061740cd-cd36-44f9-8305-b1b1c923e9f6/host
Oct 13 15:48:08 standalone.localdomain podman[556190]: 2025-10-13 15:48:08.479055296 +0000 UTC m=+0.087273043 container kill a0d33588ab0dc57bf82ae1ac9b202a5357f4931c20f59a5ac038e12a8cb30e89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-061740cd-cd36-44f9-8305-b1b1c923e9f6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:48:08 standalone.localdomain dnsmasq-dhcp[555873]: read /var/lib/neutron/dhcp/061740cd-cd36-44f9-8305-b1b1c923e9f6/opts
Oct 13 15:48:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:08Z|00654|binding|INFO|Releasing lport 9ca17eab-8896-42c0-83ac-636695eb3c16 from this chassis (sb_readonly=0)
Oct 13 15:48:08 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:08Z|00655|binding|INFO|Setting lport 9ca17eab-8896-42c0-83ac-636695eb3c16 down in Southbound
Oct 13 15:48:08 standalone.localdomain kernel: device tap9ca17eab-88 left promiscuous mode
Oct 13 15:48:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:08.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:08.502 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=9ca17eab-8896-42c0-83ac-636695eb3c16) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:08.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:08.504 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 9ca17eab-8896-42c0-83ac-636695eb3c16 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:48:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:08.507 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:08.507 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[1879bd8f-40d9-4ad5-96bb-95bcbdfa1f4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:08 standalone.localdomain podman[556197]: 2025-10-13 15:48:08.533800904 +0000 UTC m=+0.134896838 container died 4e5f97619e620ef90942060eabc09bab3284e53e98576f1f362a7614d69a8cf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:48:08 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:08.598 496978 INFO neutron.agent.dhcp.agent [None req-ab683f36-6881-4895-af37-2d47f479c922 - - - - - -] DHCP configuration for ports {'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:48:08 standalone.localdomain podman[556197]: 2025-10-13 15:48:08.611593011 +0000 UTC m=+0.212688915 container cleanup 4e5f97619e620ef90942060eabc09bab3284e53e98576f1f362a7614d69a8cf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:48:08 standalone.localdomain systemd[1]: libpod-conmon-4e5f97619e620ef90942060eabc09bab3284e53e98576f1f362a7614d69a8cf1.scope: Deactivated successfully.
Oct 13 15:48:08 standalone.localdomain podman[556199]: 2025-10-13 15:48:08.638605573 +0000 UTC m=+0.229516620 container remove 4e5f97619e620ef90942060eabc09bab3284e53e98576f1f362a7614d69a8cf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:08.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:08 standalone.localdomain kernel: device tap0f21b957-ad left promiscuous mode
Oct 13 15:48:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:08.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:08 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:08.689 496978 INFO neutron.agent.dhcp.agent [None req-557f92c7-4107-4d0e-8512-b985d78b2236 - - - - - -] Synchronizing state
Oct 13 15:48:08 standalone.localdomain ceph-mon[29756]: pgmap v4015: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:08 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:08.760 496978 INFO neutron.agent.dhcp.agent [None req-0d4127ff-58ac-4309-a696-590cc07ba176 - - - - - -] DHCP configuration for ports {'b3d8a2e1-ab2b-4a86-930e-c74615a09ecf'} is completed
Oct 13 15:48:08 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:08.814 2 INFO neutron.agent.securitygroups_rpc [None req-6568e004-71f8-4996-bdb3-7acdfa159f9d 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['1cb4f14e-54f9-4d1c-8836-3eae1e3f0a6c']
Oct 13 15:48:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-86d88a6f96b24e66842c3126e10483bd2553c6be397666c8fdaf1c5d3313a26d-merged.mount: Deactivated successfully.
Oct 13 15:48:08 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e5f97619e620ef90942060eabc09bab3284e53e98576f1f362a7614d69a8cf1-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:08 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dbde12338\x2db889\x2d4412\x2d87bf\x2d7969bdf7bdfb.mount: Deactivated successfully.
Oct 13 15:48:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:48:08 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:08.878 496978 INFO neutron.agent.dhcp.agent [None req-059c8e20-eb0d-48f0-a8d3-01a0297fca35 - - - - - -] All active networks have been fetched through RPC.
Oct 13 15:48:08 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:08.879 496978 INFO neutron.agent.dhcp.agent [-] Starting network bde12338-b889-4412-87bf-7969bdf7bdfb dhcp configuration
Oct 13 15:48:08 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:08.884 496978 INFO neutron.agent.dhcp.agent [-] Starting network c5fef9bf-ce37-49cb-b6c1-cf5c6b8cb009 dhcp configuration
Oct 13 15:48:08 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:08.884 496978 INFO neutron.agent.dhcp.agent [-] Finished network c5fef9bf-ce37-49cb-b6c1-cf5c6b8cb009 dhcp configuration
Oct 13 15:48:08 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:08.946 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b9e3f7-f5c9-4d48-9eca-66995f72d494, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:48:08 standalone.localdomain podman[556241]: 2025-10-13 15:48:08.961627809 +0000 UTC m=+0.076765546 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 15:48:08 standalone.localdomain podman[556241]: 2025-10-13 15:48:08.981025074 +0000 UTC m=+0.096162791 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-type=git, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 13 15:48:08 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:48:09 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:09.496 496978 INFO neutron.agent.linux.ip_lib [None req-58ab6a92-8dea-468c-84c1-6f21c731a6c2 - - - - - -] Device tap5050939e-3b cannot be used as it has no MAC address
Oct 13 15:48:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:09.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:09 standalone.localdomain kernel: device tap5050939e-3b entered promiscuous mode
Oct 13 15:48:09 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:09Z|00656|binding|INFO|Claiming lport 5050939e-3b02-4706-89ba-034a50323a59 for this chassis.
Oct 13 15:48:09 standalone.localdomain NetworkManager[5962]: <info>  [1760370489.5686] manager: (tap5050939e-3b): new Generic device (/org/freedesktop/NetworkManager/Devices/115)
Oct 13 15:48:09 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:09Z|00657|binding|INFO|5050939e-3b02-4706-89ba-034a50323a59: Claiming unknown
Oct 13 15:48:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:09.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:09.577 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-bde12338-b889-4412-87bf-7969bdf7bdfb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bde12338-b889-4412-87bf-7969bdf7bdfb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ed8b681-65d6-49ac-9636-38e243825c46, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=5050939e-3b02-4706-89ba-034a50323a59) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:09.579 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 5050939e-3b02-4706-89ba-034a50323a59 in datapath bde12338-b889-4412-87bf-7969bdf7bdfb bound to our chassis
Oct 13 15:48:09 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:09Z|00658|binding|INFO|Setting lport 5050939e-3b02-4706-89ba-034a50323a59 ovn-installed in OVS
Oct 13 15:48:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:09.580 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bde12338-b889-4412-87bf-7969bdf7bdfb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:09 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:09Z|00659|binding|INFO|Setting lport 5050939e-3b02-4706-89ba-034a50323a59 up in Southbound
Oct 13 15:48:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:09.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:09.582 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[82e88311-13a7-4bf2-9a83-3bbe857d7f3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:09.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:09.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:09 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:09.715 2 INFO neutron.agent.securitygroups_rpc [None req-28b17507-47ad-4802-864e-4b6517c6399a 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['147adaf3-123e-4998-b893-94b4e54c59a7']
Oct 13 15:48:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4016: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:10.113 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:e7:a4 10.100.0.2 2001:db8::f816:3eff:fe02:e7a4'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe02:e7a4/64', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b3d945e2-458a-437c-9a37-dcc9416fabab) old=Port_Binding(mac=['fa:16:3e:02:e7:a4 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:10.115 378821 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b3d945e2-458a-437c-9a37-dcc9416fabab in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 updated
Oct 13 15:48:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:10.119 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:10.119 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[9dfc0848-d6ce-4bc2-8a2f-27fd72856a15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:10.245 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:48:10 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:10.286 2 INFO neutron.agent.securitygroups_rpc [None req-0897ede1-aaaa-4eb8-be14-c103bdd98033 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['147adaf3-123e-4998-b893-94b4e54c59a7']
Oct 13 15:48:10 standalone.localdomain podman[556324]: 2025-10-13 15:48:10.711641817 +0000 UTC m=+0.071477370 container create 88c95c08a379015851bc8130afe8672d51af604f796368af1a75534d4e92751a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:10 standalone.localdomain systemd[1]: Started libpod-conmon-88c95c08a379015851bc8130afe8672d51af604f796368af1a75534d4e92751a.scope.
Oct 13 15:48:10 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:10 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7daa49a4c77b0d2a357ad4c029730f9707a0eac0dad085dd7cad73099064bc5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:10 standalone.localdomain podman[556324]: 2025-10-13 15:48:10.676846813 +0000 UTC m=+0.036682376 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:10 standalone.localdomain podman[556324]: 2025-10-13 15:48:10.78512786 +0000 UTC m=+0.144963433 container init 88c95c08a379015851bc8130afe8672d51af604f796368af1a75534d4e92751a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS)
Oct 13 15:48:10 standalone.localdomain podman[556324]: 2025-10-13 15:48:10.795322968 +0000 UTC m=+0.155158551 container start 88c95c08a379015851bc8130afe8672d51af604f796368af1a75534d4e92751a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:10 standalone.localdomain dnsmasq[556343]: started, version 2.85 cachesize 150
Oct 13 15:48:10 standalone.localdomain dnsmasq[556343]: DNS service limited to local subnets
Oct 13 15:48:10 standalone.localdomain dnsmasq[556343]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:10 standalone.localdomain dnsmasq[556343]: warning: no upstream servers configured
Oct 13 15:48:10 standalone.localdomain dnsmasq-dhcp[556343]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:48:10 standalone.localdomain dnsmasq[556343]: read /var/lib/neutron/dhcp/bde12338-b889-4412-87bf-7969bdf7bdfb/addn_hosts - 0 addresses
Oct 13 15:48:10 standalone.localdomain dnsmasq-dhcp[556343]: read /var/lib/neutron/dhcp/bde12338-b889-4412-87bf-7969bdf7bdfb/host
Oct 13 15:48:10 standalone.localdomain dnsmasq-dhcp[556343]: read /var/lib/neutron/dhcp/bde12338-b889-4412-87bf-7969bdf7bdfb/opts
Oct 13 15:48:10 standalone.localdomain ceph-mon[29756]: pgmap v4016: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:10 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:10.858 496978 INFO neutron.agent.dhcp.agent [None req-07742f3d-5ec4-4517-8e5b-285819d42f3a - - - - - -] Finished network bde12338-b889-4412-87bf-7969bdf7bdfb dhcp configuration
Oct 13 15:48:10 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:10.859 496978 INFO neutron.agent.dhcp.agent [None req-059c8e20-eb0d-48f0-a8d3-01a0297fca35 - - - - - -] Synchronizing state complete
Oct 13 15:48:10 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:10.867 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:09Z, description=, device_id=b3c1f567-0bd9-4a82-a26b-e032e99b4192, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889151c10>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889274550>], id=1514fa98-d6cb-4ca1-a454-ffd388c60a44, ip_allocation=immediate, mac_address=fa:16:3e:fc:8c:a7, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2216, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:48:09Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:48:11 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:11.011 496978 INFO neutron.agent.dhcp.agent [None req-383d4ac5-6276-4a52-b8b4-383bb626afb3 - - - - - -] DHCP configuration for ports {'caca84c5-1ad7-4f1e-9ec5-ce99db831b88'} is completed
Oct 13 15:48:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:11.148 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port cd27fefe-76bb-4917-b6e9-a7b36bf4a7ba with type ""
Oct 13 15:48:11 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:11Z|00660|binding|INFO|Removing iface tap5050939e-3b ovn-installed in OVS
Oct 13 15:48:11 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:11Z|00661|binding|INFO|Removing lport 5050939e-3b02-4706-89ba-034a50323a59 ovn-installed in OVS
Oct 13 15:48:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:11.149 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-bde12338-b889-4412-87bf-7969bdf7bdfb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bde12338-b889-4412-87bf-7969bdf7bdfb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ed8b681-65d6-49ac-9636-38e243825c46, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=5050939e-3b02-4706-89ba-034a50323a59) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:11.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:11.150 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 5050939e-3b02-4706-89ba-034a50323a59 in datapath bde12338-b889-4412-87bf-7969bdf7bdfb unbound from our chassis
Oct 13 15:48:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:11.151 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bde12338-b889-4412-87bf-7969bdf7bdfb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:11.152 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[90a4c677-aa32-4003-baf4-9d9eae986527]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:11.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:11 standalone.localdomain dnsmasq[556343]: read /var/lib/neutron/dhcp/bde12338-b889-4412-87bf-7969bdf7bdfb/addn_hosts - 0 addresses
Oct 13 15:48:11 standalone.localdomain dnsmasq-dhcp[556343]: read /var/lib/neutron/dhcp/bde12338-b889-4412-87bf-7969bdf7bdfb/host
Oct 13 15:48:11 standalone.localdomain dnsmasq-dhcp[556343]: read /var/lib/neutron/dhcp/bde12338-b889-4412-87bf-7969bdf7bdfb/opts
Oct 13 15:48:11 standalone.localdomain podman[556369]: 2025-10-13 15:48:11.19932485 +0000 UTC m=+0.151029902 container kill 88c95c08a379015851bc8130afe8672d51af604f796368af1a75534d4e92751a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:48:11 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:11.248 2 INFO neutron.agent.securitygroups_rpc [None req-5de2bb75-f554-45ba-9432-87ba4515071a 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['147adaf3-123e-4998-b893-94b4e54c59a7']
Oct 13 15:48:11 standalone.localdomain dnsmasq[556188]: exiting on receipt of SIGTERM
Oct 13 15:48:11 standalone.localdomain podman[556430]: 2025-10-13 15:48:11.261793289 +0000 UTC m=+0.065978779 container kill b0c972db354849662969ec944e3814901cd8a8d5d675d5dc7c438f53f2c60a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:48:11 standalone.localdomain systemd[1]: libpod-b0c972db354849662969ec944e3814901cd8a8d5d675d5dc7c438f53f2c60a03.scope: Deactivated successfully.
Oct 13 15:48:11 standalone.localdomain dnsmasq[555873]: read /var/lib/neutron/dhcp/061740cd-cd36-44f9-8305-b1b1c923e9f6/addn_hosts - 0 addresses
Oct 13 15:48:11 standalone.localdomain dnsmasq-dhcp[555873]: read /var/lib/neutron/dhcp/061740cd-cd36-44f9-8305-b1b1c923e9f6/host
Oct 13 15:48:11 standalone.localdomain podman[556416]: 2025-10-13 15:48:11.306202304 +0000 UTC m=+0.152790667 container kill a0d33588ab0dc57bf82ae1ac9b202a5357f4931c20f59a5ac038e12a8cb30e89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-061740cd-cd36-44f9-8305-b1b1c923e9f6, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:48:11 standalone.localdomain dnsmasq-dhcp[555873]: read /var/lib/neutron/dhcp/061740cd-cd36-44f9-8305-b1b1c923e9f6/opts
Oct 13 15:48:11 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:48:11 standalone.localdomain podman[556440]: 2025-10-13 15:48:11.322473271 +0000 UTC m=+0.108961590 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:48:11 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:48:11 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:48:11 standalone.localdomain podman[556471]: 2025-10-13 15:48:11.339001577 +0000 UTC m=+0.060748506 container died b0c972db354849662969ec944e3814901cd8a8d5d675d5dc7c438f53f2c60a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:11 standalone.localdomain podman[556439]: 2025-10-13 15:48:11.381592955 +0000 UTC m=+0.166947738 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 15:48:11 standalone.localdomain podman[556439]: 2025-10-13 15:48:11.416831815 +0000 UTC m=+0.202186588 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:11 standalone.localdomain podman[556471]: 2025-10-13 15:48:11.429432798 +0000 UTC m=+0.151179697 container cleanup b0c972db354849662969ec944e3814901cd8a8d5d675d5dc7c438f53f2c60a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:11 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:48:11 standalone.localdomain systemd[1]: libpod-conmon-b0c972db354849662969ec944e3814901cd8a8d5d675d5dc7c438f53f2c60a03.scope: Deactivated successfully.
Oct 13 15:48:11 standalone.localdomain podman[556473]: 2025-10-13 15:48:11.451188536 +0000 UTC m=+0.166468403 container remove b0c972db354849662969ec944e3814901cd8a8d5d675d5dc7c438f53f2c60a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:48:11 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:11.544 496978 INFO neutron.agent.linux.ip_lib [None req-070de822-546f-4adf-81bf-0b20be853c8c - - - - - -] Device tap9ca17eab-88 cannot be used as it has no MAC address
Oct 13 15:48:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:11.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:11 standalone.localdomain kernel: device tap9ca17eab-88 entered promiscuous mode
Oct 13 15:48:11 standalone.localdomain NetworkManager[5962]: <info>  [1760370491.5713] manager: (tap9ca17eab-88): new Generic device (/org/freedesktop/NetworkManager/Devices/116)
Oct 13 15:48:11 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:11Z|00662|binding|INFO|Claiming lport 9ca17eab-8896-42c0-83ac-636695eb3c16 for this chassis.
Oct 13 15:48:11 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:11Z|00663|binding|INFO|9ca17eab-8896-42c0-83ac-636695eb3c16: Claiming unknown
Oct 13 15:48:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:11.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:11 standalone.localdomain podman[467099]: time="2025-10-13T15:48:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:48:11 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:11Z|00664|binding|INFO|Setting lport 9ca17eab-8896-42c0-83ac-636695eb3c16 ovn-installed in OVS
Oct 13 15:48:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:11.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:11.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:48:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 423769 "" "Go-http-client/1.1"
Oct 13 15:48:11 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:11Z|00665|binding|INFO|Setting lport 9ca17eab-8896-42c0-83ac-636695eb3c16 up in Southbound
Oct 13 15:48:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:11.648 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe66:c56c/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=9ca17eab-8896-42c0-83ac-636695eb3c16) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:11.649 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 9ca17eab-8896-42c0-83ac-636695eb3c16 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:48:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:11.652 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port e91e68a3-71de-4908-9f0b-b2d4e111b100 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:48:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:11.652 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:11 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:11.653 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf83e08-fca2-4cf5-97ac-ea43389fe7d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:11.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:48:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 51504 "" "Go-http-client/1.1"
Oct 13 15:48:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-36673501ab6fd92a3d301f5ccdfd9865f6e7aac32d8196804a00c34b475fcde5-merged.mount: Deactivated successfully.
Oct 13 15:48:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b0c972db354849662969ec944e3814901cd8a8d5d675d5dc7c438f53f2c60a03-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:11 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:11.755 496978 INFO neutron.agent.dhcp.agent [None req-fa42f7be-7057-48cc-b64c-72689bdd07f5 - - - - - -] DHCP configuration for ports {'1514fa98-d6cb-4ca1-a454-ffd388c60a44'} is completed
Oct 13 15:48:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4017: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:11 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:11.790 2 INFO neutron.agent.securitygroups_rpc [None req-bd111555-ec36-4b25-b6ae-3e2ab9f40f52 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['147adaf3-123e-4998-b893-94b4e54c59a7']
Oct 13 15:48:11 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:11Z|00666|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:48:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:11.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:11 standalone.localdomain systemd[1]: tmp-crun.Wv94Vm.mount: Deactivated successfully.
Oct 13 15:48:11 standalone.localdomain dnsmasq[556343]: exiting on receipt of SIGTERM
Oct 13 15:48:11 standalone.localdomain podman[556589]: 2025-10-13 15:48:11.962183197 +0000 UTC m=+0.093880720 container kill 88c95c08a379015851bc8130afe8672d51af604f796368af1a75534d4e92751a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:48:11 standalone.localdomain systemd[1]: libpod-88c95c08a379015851bc8130afe8672d51af604f796368af1a75534d4e92751a.scope: Deactivated successfully.
Oct 13 15:48:12 standalone.localdomain dnsmasq[555873]: exiting on receipt of SIGTERM
Oct 13 15:48:12 standalone.localdomain podman[556606]: 2025-10-13 15:48:12.009783992 +0000 UTC m=+0.075980882 container kill a0d33588ab0dc57bf82ae1ac9b202a5357f4931c20f59a5ac038e12a8cb30e89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-061740cd-cd36-44f9-8305-b1b1c923e9f6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:48:12 standalone.localdomain systemd[1]: libpod-a0d33588ab0dc57bf82ae1ac9b202a5357f4931c20f59a5ac038e12a8cb30e89.scope: Deactivated successfully.
Oct 13 15:48:12 standalone.localdomain podman[556617]: 2025-10-13 15:48:12.020185175 +0000 UTC m=+0.050173766 container died 88c95c08a379015851bc8130afe8672d51af604f796368af1a75534d4e92751a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:12 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:12.025 496978 INFO neutron.agent.linux.ip_lib [None req-50486ba8-0a2c-4ee8-a03c-9417bcdbb0b1 - - - - - -] Device tapbb990f0f-22 cannot be used as it has no MAC address
Oct 13 15:48:12 standalone.localdomain podman[556617]: 2025-10-13 15:48:12.050727128 +0000 UTC m=+0.080715649 container cleanup 88c95c08a379015851bc8130afe8672d51af604f796368af1a75534d4e92751a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:12.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:12 standalone.localdomain systemd[1]: libpod-conmon-88c95c08a379015851bc8130afe8672d51af604f796368af1a75534d4e92751a.scope: Deactivated successfully.
Oct 13 15:48:12 standalone.localdomain kernel: device tapbb990f0f-22 entered promiscuous mode
Oct 13 15:48:12 standalone.localdomain NetworkManager[5962]: <info>  [1760370492.0577] manager: (tapbb990f0f-22): new Generic device (/org/freedesktop/NetworkManager/Devices/117)
Oct 13 15:48:12 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:12Z|00667|binding|INFO|Claiming lport bb990f0f-221d-4c30-954e-02ad60858a8b for this chassis.
Oct 13 15:48:12 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:12Z|00668|binding|INFO|bb990f0f-221d-4c30-954e-02ad60858a8b: Claiming unknown
Oct 13 15:48:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:12.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:12.070 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:2::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=900de971-b2cc-44cf-80c7-61a04318cca1, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=bb990f0f-221d-4c30-954e-02ad60858a8b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:12.072 378821 INFO neutron.agent.ovn.metadata.agent [-] Port bb990f0f-221d-4c30-954e-02ad60858a8b in datapath 2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf bound to our chassis
Oct 13 15:48:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:12.074 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:12.075 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[40fc4a92-dd0f-440a-96e2-d73958a48958]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:12 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:12Z|00669|binding|INFO|Setting lport bb990f0f-221d-4c30-954e-02ad60858a8b ovn-installed in OVS
Oct 13 15:48:12 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:12Z|00670|binding|INFO|Setting lport bb990f0f-221d-4c30-954e-02ad60858a8b up in Southbound
Oct 13 15:48:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:12.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:12.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:12 standalone.localdomain podman[556645]: 2025-10-13 15:48:12.096801296 +0000 UTC m=+0.072171423 container died a0d33588ab0dc57bf82ae1ac9b202a5357f4931c20f59a5ac038e12a8cb30e89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-061740cd-cd36-44f9-8305-b1b1c923e9f6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:48:12 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:12.100 2 INFO neutron.agent.securitygroups_rpc [None req-f2bae09a-872d-412c-a7a4-33c8b969aaef 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['147adaf3-123e-4998-b893-94b4e54c59a7']
Oct 13 15:48:12 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:12.107 2 INFO neutron.agent.securitygroups_rpc [None req-40501475-c9a7-4290-9c1d-5674f99b3565 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:48:12 standalone.localdomain podman[556629]: 2025-10-13 15:48:12.163283909 +0000 UTC m=+0.164935776 container remove 88c95c08a379015851bc8130afe8672d51af604f796368af1a75534d4e92751a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bde12338-b889-4412-87bf-7969bdf7bdfb, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:12 standalone.localdomain podman[556645]: 2025-10-13 15:48:12.177363778 +0000 UTC m=+0.152733875 container cleanup a0d33588ab0dc57bf82ae1ac9b202a5357f4931c20f59a5ac038e12a8cb30e89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-061740cd-cd36-44f9-8305-b1b1c923e9f6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:48:12 standalone.localdomain systemd[1]: libpod-conmon-a0d33588ab0dc57bf82ae1ac9b202a5357f4931c20f59a5ac038e12a8cb30e89.scope: Deactivated successfully.
Oct 13 15:48:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:12.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:12.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:12 standalone.localdomain kernel: device tap5050939e-3b left promiscuous mode
Oct 13 15:48:12 standalone.localdomain podman[556651]: 2025-10-13 15:48:12.226076038 +0000 UTC m=+0.195143448 container remove a0d33588ab0dc57bf82ae1ac9b202a5357f4931c20f59a5ac038e12a8cb30e89 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-061740cd-cd36-44f9-8305-b1b1c923e9f6, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:48:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:12.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:12.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:12 standalone.localdomain kernel: device tap24e82d40-4e left promiscuous mode
Oct 13 15:48:12 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:12Z|00671|binding|INFO|Releasing lport 24e82d40-4e94-4c99-855c-7dd1cefb9950 from this chassis (sb_readonly=0)
Oct 13 15:48:12 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:12Z|00672|binding|INFO|Setting lport 24e82d40-4e94-4c99-855c-7dd1cefb9950 down in Southbound
Oct 13 15:48:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:12.245 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-061740cd-cd36-44f9-8305-b1b1c923e9f6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-061740cd-cd36-44f9-8305-b1b1c923e9f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e3c4560-96c5-4b6d-96ec-7d2ad9942d55, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=24e82d40-4e94-4c99-855c-7dd1cefb9950) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:12.246 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 24e82d40-4e94-4c99-855c-7dd1cefb9950 in datapath 061740cd-cd36-44f9-8305-b1b1c923e9f6 unbound from our chassis
Oct 13 15:48:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:12.248 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 061740cd-cd36-44f9-8305-b1b1c923e9f6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:12 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:12.248 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[b590a9cc-fea3-44b9-90ba-96ae5a22d558]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:12.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:12 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:12.267 496978 INFO neutron.agent.dhcp.agent [None req-a54629c7-7ead-40d2-902d-83140bae9ff5 - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:48:12 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:12.267 496978 INFO neutron.agent.dhcp.agent [None req-a54629c7-7ead-40d2-902d-83140bae9ff5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:12 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:12.404 496978 INFO neutron.agent.dhcp.agent [None req-ccf2282d-2fd3-4752-a100-f25ddf0a6740 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:12 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:12.404 496978 INFO neutron.agent.dhcp.agent [None req-ccf2282d-2fd3-4752-a100-f25ddf0a6740 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:12 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:12.551 2 INFO neutron.agent.securitygroups_rpc [None req-5326aef8-72e2-4516-9f1e-a5e738e7615d 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['147adaf3-123e-4998-b893-94b4e54c59a7']
Oct 13 15:48:12 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:12.625 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:48:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f7daa49a4c77b0d2a357ad4c029730f9707a0eac0dad085dd7cad73099064bc5-merged.mount: Deactivated successfully.
Oct 13 15:48:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88c95c08a379015851bc8130afe8672d51af604f796368af1a75534d4e92751a-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:12 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dbde12338\x2db889\x2d4412\x2d87bf\x2d7969bdf7bdfb.mount: Deactivated successfully.
Oct 13 15:48:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7ebf6e241a2e557053457bdc34646a7ac15d548ca89ab3a7b5c83fb2ebfd4527-merged.mount: Deactivated successfully.
Oct 13 15:48:12 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0d33588ab0dc57bf82ae1ac9b202a5357f4931c20f59a5ac038e12a8cb30e89-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:12 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d061740cd\x2dcd36\x2d44f9\x2d8305\x2db1b1c923e9f6.mount: Deactivated successfully.
Oct 13 15:48:12 standalone.localdomain podman[556741]: 
Oct 13 15:48:12 standalone.localdomain podman[556741]: 2025-10-13 15:48:12.744754828 +0000 UTC m=+0.077765757 container create 1d10019bb6074e27491cab73d28ba88d89e9af96d48bf008506d4789ab506f0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:12 standalone.localdomain systemd[1]: Started libpod-conmon-1d10019bb6074e27491cab73d28ba88d89e9af96d48bf008506d4789ab506f0a.scope.
Oct 13 15:48:12 standalone.localdomain podman[556741]: 2025-10-13 15:48:12.701447876 +0000 UTC m=+0.034458815 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:12 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:12 standalone.localdomain ceph-mon[29756]: pgmap v4017: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:12 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aeca40c2c780bfaddc755de17b413a4b9d49aaa9c1e356f4221b17f83a9a9ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:12 standalone.localdomain podman[556741]: 2025-10-13 15:48:12.837387357 +0000 UTC m=+0.170398286 container init 1d10019bb6074e27491cab73d28ba88d89e9af96d48bf008506d4789ab506f0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:12 standalone.localdomain podman[556741]: 2025-10-13 15:48:12.845847551 +0000 UTC m=+0.178858480 container start 1d10019bb6074e27491cab73d28ba88d89e9af96d48bf008506d4789ab506f0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:12 standalone.localdomain dnsmasq[556763]: started, version 2.85 cachesize 150
Oct 13 15:48:12 standalone.localdomain dnsmasq[556763]: DNS service limited to local subnets
Oct 13 15:48:12 standalone.localdomain dnsmasq[556763]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:12 standalone.localdomain dnsmasq[556763]: warning: no upstream servers configured
Oct 13 15:48:12 standalone.localdomain dnsmasq-dhcp[556763]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:48:12 standalone.localdomain dnsmasq-dhcp[556763]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:48:12 standalone.localdomain dnsmasq[556763]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:12 standalone.localdomain dnsmasq-dhcp[556763]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:12 standalone.localdomain dnsmasq-dhcp[556763]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:12 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:12.893 496978 INFO neutron.agent.dhcp.agent [None req-eff54fee-b6dd-46d8-924b-87cfdf01141a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:11Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888eaa730>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1888eaaa00>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888eaa160>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1888eaa640>], id=6a546227-2a82-48f8-a660-509b909305a1, ip_allocation=immediate, mac_address=fa:16:3e:7a:10:e3, name=tempest-NetworksTestDHCPv6-394487400, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=35, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['610b4118-47de-4f60-b214-81a58d317554', 'c6278ef3-5d50-4dd8-b636-f15d7cd657ba'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:48:07Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], 
standard_attr_id=2223, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:48:11Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:48:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:48:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:48:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:48:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:48:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:48:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:48:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:48:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:48:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:48:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:48:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:48:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:48:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:13.033 496978 INFO neutron.agent.dhcp.agent [None req-93dd409f-c320-454b-a3bd-0caedb3d8359 - - - - - -] DHCP configuration for ports {'9ca17eab-8896-42c0-83ac-636695eb3c16', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:48:13 standalone.localdomain podman[556800]: 
Oct 13 15:48:13 standalone.localdomain podman[556800]: 2025-10-13 15:48:13.160705672 +0000 UTC m=+0.094346193 container create 46ea2c8118b0b8a01a9e89582401de0566e2676c8402d1c33a6a1f21b3e15aef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 13 15:48:13 standalone.localdomain systemd[1]: Started libpod-conmon-46ea2c8118b0b8a01a9e89582401de0566e2676c8402d1c33a6a1f21b3e15aef.scope.
Oct 13 15:48:13 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:13 standalone.localdomain dnsmasq[556763]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 2 addresses
Oct 13 15:48:13 standalone.localdomain dnsmasq-dhcp[556763]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:13 standalone.localdomain dnsmasq-dhcp[556763]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:13 standalone.localdomain podman[556815]: 2025-10-13 15:48:13.210093773 +0000 UTC m=+0.073621317 container kill 1d10019bb6074e27491cab73d28ba88d89e9af96d48bf008506d4789ab506f0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:48:13 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/311c9ce671e0a8fbe3cd245820e0bbd620dfcfa75cf9e781be97fa13f862baf5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:13 standalone.localdomain podman[556800]: 2025-10-13 15:48:13.119360733 +0000 UTC m=+0.053001244 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:13 standalone.localdomain podman[556800]: 2025-10-13 15:48:13.22252704 +0000 UTC m=+0.156167561 container init 46ea2c8118b0b8a01a9e89582401de0566e2676c8402d1c33a6a1f21b3e15aef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:48:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:13.227 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:48:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:13.228 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:48:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:13.228 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:48:13 standalone.localdomain podman[556800]: 2025-10-13 15:48:13.231431999 +0000 UTC m=+0.165072530 container start 46ea2c8118b0b8a01a9e89582401de0566e2676c8402d1c33a6a1f21b3e15aef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 15:48:13 standalone.localdomain dnsmasq[556835]: started, version 2.85 cachesize 150
Oct 13 15:48:13 standalone.localdomain dnsmasq[556835]: DNS service limited to local subnets
Oct 13 15:48:13 standalone.localdomain dnsmasq[556835]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:13 standalone.localdomain dnsmasq[556835]: warning: no upstream servers configured
Oct 13 15:48:13 standalone.localdomain dnsmasq-dhcp[556835]: DHCPv6, static leases only on 2001:db8:2::, lease time 1d
Oct 13 15:48:13 standalone.localdomain dnsmasq[556835]: read /var/lib/neutron/dhcp/2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf/addn_hosts - 0 addresses
Oct 13 15:48:13 standalone.localdomain dnsmasq-dhcp[556835]: read /var/lib/neutron/dhcp/2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf/host
Oct 13 15:48:13 standalone.localdomain dnsmasq-dhcp[556835]: read /var/lib/neutron/dhcp/2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf/opts
Oct 13 15:48:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:13.406 496978 INFO neutron.agent.dhcp.agent [None req-9d7fc08d-3248-43d6-a85e-783d0810f761 - - - - - -] DHCP configuration for ports {'568988ba-3075-499d-b5b9-cc19b2167d66'} is completed
Oct 13 15:48:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:13.420 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:13.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:13.501 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:13.586 496978 INFO neutron.agent.dhcp.agent [None req-80043b18-d2fd-475f-9e49-034cf18ec58b - - - - - -] DHCP configuration for ports {'6a546227-2a82-48f8-a660-509b909305a1'} is completed
Oct 13 15:48:13 standalone.localdomain dnsmasq[556835]: read /var/lib/neutron/dhcp/2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf/addn_hosts - 1 addresses
Oct 13 15:48:13 standalone.localdomain dnsmasq-dhcp[556835]: read /var/lib/neutron/dhcp/2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf/host
Oct 13 15:48:13 standalone.localdomain dnsmasq-dhcp[556835]: read /var/lib/neutron/dhcp/2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf/opts
Oct 13 15:48:13 standalone.localdomain podman[556863]: 2025-10-13 15:48:13.589765727 +0000 UTC m=+0.056955419 container kill 46ea2c8118b0b8a01a9e89582401de0566e2676c8402d1c33a6a1f21b3e15aef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:48:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4018: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:13.771 496978 INFO neutron.agent.dhcp.agent [None req-b776770a-1d85-4101-9cf5-438ecd6136ed - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:10Z, description=, device_id=b4c672be-7dfa-443a-90ab-5f7aa9eb0a30, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188908dd90>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188908de80>], id=140d2f04-9272-4ff2-ace0-32018d4e8203, ip_allocation=immediate, mac_address=fa:16:3e:dc:a7:68, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:08Z, description=, dns_domain=, id=2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1978168461, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15031, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2212, status=ACTIVE, subnets=['51cd2913-8c85-4c8c-b4ba-228004896ab3'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:48:09Z, vlan_transparent=None, network_id=2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2222, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:48:10Z on network 2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf
Oct 13 15:48:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:13.780 2 INFO neutron.agent.securitygroups_rpc [None req-2a9344f0-d68b-468c-b0be-a11df333072a 40c7e26fbd48451b913a7ab97ae2e08d 274e1d0dda434f35b2f8cb5b19376851 - - default default] Security group rule updated ['27dd0fcb-3a53-40d9-879d-2c852fc1ff12']
Oct 13 15:48:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:13.964 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:48:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:13.965 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:48:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:13.966 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:48:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:13.966 2 DEBUG nova.objects.instance [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:48:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:13.969 496978 INFO neutron.agent.dhcp.agent [None req-9343c5b5-8294-444f-8856-d33feeff7f79 - - - - - -] DHCP configuration for ports {'568988ba-3075-499d-b5b9-cc19b2167d66', 'bb990f0f-221d-4c30-954e-02ad60858a8b', '140d2f04-9272-4ff2-ace0-32018d4e8203'} is completed
Oct 13 15:48:13 standalone.localdomain dnsmasq[556835]: read /var/lib/neutron/dhcp/2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf/addn_hosts - 1 addresses
Oct 13 15:48:13 standalone.localdomain dnsmasq-dhcp[556835]: read /var/lib/neutron/dhcp/2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf/host
Oct 13 15:48:13 standalone.localdomain dnsmasq-dhcp[556835]: read /var/lib/neutron/dhcp/2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf/opts
Oct 13 15:48:13 standalone.localdomain podman[556902]: 2025-10-13 15:48:13.982801517 +0000 UTC m=+0.071847983 container kill 46ea2c8118b0b8a01a9e89582401de0566e2676c8402d1c33a6a1f21b3e15aef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:48:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:14.154 496978 INFO neutron.agent.dhcp.agent [None req-b776770a-1d85-4101-9cf5-438ecd6136ed - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:10Z, description=, device_id=b4c672be-7dfa-443a-90ab-5f7aa9eb0a30, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890fcd60>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f32d60>], id=140d2f04-9272-4ff2-ace0-32018d4e8203, ip_allocation=immediate, mac_address=fa:16:3e:dc:a7:68, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:08Z, description=, dns_domain=, id=2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1978168461, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15031, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2212, status=ACTIVE, subnets=['51cd2913-8c85-4c8c-b4ba-228004896ab3'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:48:09Z, vlan_transparent=None, network_id=2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2222, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:48:10Z on network 2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf
Oct 13 15:48:14 standalone.localdomain dnsmasq[556835]: read /var/lib/neutron/dhcp/2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf/addn_hosts - 1 addresses
Oct 13 15:48:14 standalone.localdomain dnsmasq-dhcp[556835]: read /var/lib/neutron/dhcp/2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf/host
Oct 13 15:48:14 standalone.localdomain dnsmasq-dhcp[556835]: read /var/lib/neutron/dhcp/2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf/opts
Oct 13 15:48:14 standalone.localdomain podman[556942]: 2025-10-13 15:48:14.371745679 +0000 UTC m=+0.068859729 container kill 46ea2c8118b0b8a01a9e89582401de0566e2676c8402d1c33a6a1f21b3e15aef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:48:14 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:14Z|00673|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:48:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:14.467 496978 INFO neutron.agent.dhcp.agent [None req-c35f1b7a-9c18-41e2-8356-f00b570fedf9 - - - - - -] DHCP configuration for ports {'140d2f04-9272-4ff2-ace0-32018d4e8203'} is completed
Oct 13 15:48:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:14.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:14.636 496978 INFO neutron.agent.dhcp.agent [None req-b3f2157f-ef24-47be-88c3-6ac247b90c21 - - - - - -] DHCP configuration for ports {'140d2f04-9272-4ff2-ace0-32018d4e8203'} is completed
Oct 13 15:48:14 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:14.714 2 INFO neutron.agent.securitygroups_rpc [None req-38610f22-917c-4f85-ac6f-8b17b48d26a7 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:48:14 standalone.localdomain ceph-mon[29756]: pgmap v4018: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:15.007 496978 INFO neutron.agent.linux.ip_lib [None req-c4d54f89-323a-4e9a-a368-c1684bc0584a - - - - - -] Device tap86faa77f-d3 cannot be used as it has no MAC address
Oct 13 15:48:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:15.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:15 standalone.localdomain kernel: device tap86faa77f-d3 entered promiscuous mode
Oct 13 15:48:15 standalone.localdomain NetworkManager[5962]: <info>  [1760370495.0535] manager: (tap86faa77f-d3): new Generic device (/org/freedesktop/NetworkManager/Devices/118)
Oct 13 15:48:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:15Z|00674|binding|INFO|Claiming lport 86faa77f-d3ab-481c-867d-a19dc673e00a for this chassis.
Oct 13 15:48:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:15Z|00675|binding|INFO|86faa77f-d3ab-481c-867d-a19dc673e00a: Claiming unknown
Oct 13 15:48:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:15.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:15.070 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:3::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-bce779c4-d902-46c0-836b-06ef68090247', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce779c4-d902-46c0-836b-06ef68090247', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=816dfbaa-c3f8-4b08-ac30-5d1518cccb61, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=86faa77f-d3ab-481c-867d-a19dc673e00a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:15.072 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 86faa77f-d3ab-481c-867d-a19dc673e00a in datapath bce779c4-d902-46c0-836b-06ef68090247 bound to our chassis
Oct 13 15:48:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:15.073 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bce779c4-d902-46c0-836b-06ef68090247 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:15 standalone.localdomain systemd[1]: tmp-crun.kuQkwU.mount: Deactivated successfully.
Oct 13 15:48:15 standalone.localdomain dnsmasq[556763]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:15 standalone.localdomain dnsmasq-dhcp[556763]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:15 standalone.localdomain dnsmasq-dhcp[556763]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:15.076 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[0f27e575-3e1f-44d4-9044-fa9b2e1535dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:15 standalone.localdomain podman[556986]: 2025-10-13 15:48:15.079715743 +0000 UTC m=+0.080593444 container kill 1d10019bb6074e27491cab73d28ba88d89e9af96d48bf008506d4789ab506f0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:48:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:15Z|00676|binding|INFO|Setting lport 86faa77f-d3ab-481c-867d-a19dc673e00a ovn-installed in OVS
Oct 13 15:48:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:15Z|00677|binding|INFO|Setting lport 86faa77f-d3ab-481c-867d-a19dc673e00a up in Southbound
Oct 13 15:48:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:15.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:15.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:15.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:15 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:48:15 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:48:15 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:48:15 standalone.localdomain podman[557053]: 2025-10-13 15:48:15.630571606 +0000 UTC m=+0.073193204 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4019: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:15 standalone.localdomain systemd[1]: tmp-crun.VeJIZT.mount: Deactivated successfully.
Oct 13 15:48:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:16.201 496978 INFO neutron.agent.linux.ip_lib [None req-d704058b-f9b3-4656-a6dd-2baa72096f6f - - - - - -] Device tap83892680-1e cannot be used as it has no MAC address
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:16 standalone.localdomain kernel: device tap83892680-1e entered promiscuous mode
Oct 13 15:48:16 standalone.localdomain NetworkManager[5962]: <info>  [1760370496.2412] manager: (tap83892680-1e): new Generic device (/org/freedesktop/NetworkManager/Devices/119)
Oct 13 15:48:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:16Z|00678|binding|INFO|Claiming lport 83892680-1e8b-49b4-8103-a3c79e4d07f6 for this chassis.
Oct 13 15:48:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:16Z|00679|binding|INFO|83892680-1e8b-49b4-8103-a3c79e4d07f6: Claiming unknown
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:16.256 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-36ceda34-93ec-4a19-8f02-93322043557c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36ceda34-93ec-4a19-8f02-93322043557c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aefab81a3f68448f93dd20e2d275ad53', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb2b1cfd-56c8-4807-81ed-31847ef54a59, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=83892680-1e8b-49b4-8103-a3c79e4d07f6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:16.258 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 83892680-1e8b-49b4-8103-a3c79e4d07f6 in datapath 36ceda34-93ec-4a19-8f02-93322043557c bound to our chassis
Oct 13 15:48:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:16.260 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 36ceda34-93ec-4a19-8f02-93322043557c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:16.261 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[24af525c-749e-4d7a-82cb-6a0127dd6a2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:16Z|00680|binding|INFO|Setting lport 83892680-1e8b-49b4-8103-a3c79e4d07f6 ovn-installed in OVS
Oct 13 15:48:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:16Z|00681|binding|INFO|Setting lport 83892680-1e8b-49b4-8103-a3c79e4d07f6 up in Southbound
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:16 standalone.localdomain podman[557111]: 
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:16 standalone.localdomain podman[557111]: 2025-10-13 15:48:16.299581955 +0000 UTC m=+0.115702490 container create cb85ff222a69a0cd6ef39bf07719a1e3d0d4648e9ef91b52ae95a35d5a68849f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bce779c4-d902-46c0-836b-06ef68090247, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:16 standalone.localdomain podman[557111]: 2025-10-13 15:48:16.239749419 +0000 UTC m=+0.055870024 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:16 standalone.localdomain systemd[1]: Started libpod-conmon-cb85ff222a69a0cd6ef39bf07719a1e3d0d4648e9ef91b52ae95a35d5a68849f.scope.
Oct 13 15:48:16 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:16 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ece9f32e9639ad3fa62cb229d9d90021466bfc7ea141b855af800291bc1346/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:16 standalone.localdomain podman[557111]: 2025-10-13 15:48:16.376783793 +0000 UTC m=+0.192904358 container init cb85ff222a69a0cd6ef39bf07719a1e3d0d4648e9ef91b52ae95a35d5a68849f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bce779c4-d902-46c0-836b-06ef68090247, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:48:16 standalone.localdomain podman[557111]: 2025-10-13 15:48:16.386871518 +0000 UTC m=+0.202992073 container start cb85ff222a69a0cd6ef39bf07719a1e3d0d4648e9ef91b52ae95a35d5a68849f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bce779c4-d902-46c0-836b-06ef68090247, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:48:16 standalone.localdomain dnsmasq[557141]: started, version 2.85 cachesize 150
Oct 13 15:48:16 standalone.localdomain dnsmasq[557141]: DNS service limited to local subnets
Oct 13 15:48:16 standalone.localdomain dnsmasq[557141]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:16 standalone.localdomain dnsmasq[557141]: warning: no upstream servers configured
Oct 13 15:48:16 standalone.localdomain dnsmasq-dhcp[557141]: DHCPv6, static leases only on 2001:db8:3::, lease time 1d
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:16 standalone.localdomain dnsmasq[557141]: read /var/lib/neutron/dhcp/bce779c4-d902-46c0-836b-06ef68090247/addn_hosts - 0 addresses
Oct 13 15:48:16 standalone.localdomain dnsmasq-dhcp[557141]: read /var/lib/neutron/dhcp/bce779c4-d902-46c0-836b-06ef68090247/host
Oct 13 15:48:16 standalone.localdomain dnsmasq-dhcp[557141]: read /var/lib/neutron/dhcp/bce779c4-d902-46c0-836b-06ef68090247/opts
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.458 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:48:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:16.475 496978 INFO neutron.agent.dhcp.agent [None req-c4d54f89-323a-4e9a-a368-c1684bc0584a - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:48:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:16.477 496978 INFO neutron.agent.dhcp.agent [None req-c4d54f89-323a-4e9a-a368-c1684bc0584a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:15Z, description=, device_id=b4c672be-7dfa-443a-90ab-5f7aa9eb0a30, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889274f10>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890e5ca0>], id=4a849bd5-236f-44d3-8aed-ddf3f2888458, ip_allocation=immediate, mac_address=fa:16:3e:0e:dd:6c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:12Z, description=, dns_domain=, id=bce779c4-d902-46c0-836b-06ef68090247, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1972524470, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=39049, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2234, status=ACTIVE, subnets=['97a7e799-898d-47f8-aad8-510877a3e045'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:48:13Z, vlan_transparent=None, network_id=bce779c4-d902-46c0-836b-06ef68090247, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2246, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:48:15Z on network bce779c4-d902-46c0-836b-06ef68090247
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.480 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.481 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.482 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.482 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.483 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.483 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.484 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.485 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.485 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.506 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.507 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.507 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.508 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.508 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:48:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:16.653 496978 INFO neutron.agent.dhcp.agent [None req-0539f3c7-82cc-4b88-bc17-b9f5f9a1fba0 - - - - - -] DHCP configuration for ports {'764a0b38-cd6f-4420-803e-a978f1700db2'} is completed
Oct 13 15:48:16 standalone.localdomain dnsmasq[557141]: read /var/lib/neutron/dhcp/bce779c4-d902-46c0-836b-06ef68090247/addn_hosts - 1 addresses
Oct 13 15:48:16 standalone.localdomain dnsmasq-dhcp[557141]: read /var/lib/neutron/dhcp/bce779c4-d902-46c0-836b-06ef68090247/host
Oct 13 15:48:16 standalone.localdomain dnsmasq-dhcp[557141]: read /var/lib/neutron/dhcp/bce779c4-d902-46c0-836b-06ef68090247/opts
Oct 13 15:48:16 standalone.localdomain podman[557170]: 2025-10-13 15:48:16.708446989 +0000 UTC m=+0.063210223 container kill cb85ff222a69a0cd6ef39bf07719a1e3d0d4648e9ef91b52ae95a35d5a68849f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bce779c4-d902-46c0-836b-06ef68090247, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:48:16 standalone.localdomain ceph-mon[29756]: pgmap v4019: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:48:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:48:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:48:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/35119191' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:48:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:16.992 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:48:17 standalone.localdomain podman[557219]: 2025-10-13 15:48:17.006569078 +0000 UTC m=+0.078209041 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:48:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:48:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:48:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:17.033 496978 INFO neutron.agent.dhcp.agent [None req-3ce67bcb-3b68-4dad-9f3a-1ca8c57c311b - - - - - -] DHCP configuration for ports {'4a849bd5-236f-44d3-8aed-ddf3f2888458'} is completed
Oct 13 15:48:17 standalone.localdomain podman[557220]: 2025-10-13 15:48:17.060738749 +0000 UTC m=+0.132209756 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:48:17 standalone.localdomain podman[557219]: 2025-10-13 15:48:17.086710478 +0000 UTC m=+0.158350381 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.086 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.087 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.087 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:48:17 standalone.localdomain podman[557220]: 2025-10-13 15:48:17.095148092 +0000 UTC m=+0.166619099 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:48:17 standalone.localdomain systemd[1]: tmp-crun.cuym86.mount: Deactivated successfully.
Oct 13 15:48:17 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.094 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.103 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:48:17 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:48:17 standalone.localdomain podman[557265]: 2025-10-13 15:48:17.111384588 +0000 UTC m=+0.077539440 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-account, architecture=x86_64, build-date=2025-07-21T16:11:22, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.9, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true)
Oct 13 15:48:17 standalone.localdomain dnsmasq[556763]: exiting on receipt of SIGTERM
Oct 13 15:48:17 standalone.localdomain podman[557272]: 2025-10-13 15:48:17.149964411 +0000 UTC m=+0.108868196 container kill 1d10019bb6074e27491cab73d28ba88d89e9af96d48bf008506d4789ab506f0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:48:17 standalone.localdomain systemd[1]: libpod-1d10019bb6074e27491cab73d28ba88d89e9af96d48bf008506d4789ab506f0a.scope: Deactivated successfully.
Oct 13 15:48:17 standalone.localdomain podman[557264]: 2025-10-13 15:48:17.234661244 +0000 UTC m=+0.205206073 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, com.redhat.component=openstack-swift-object-container, release=1, tcib_managed=true, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, batch=17.1_20250721.1, vcs-type=git, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, container_name=swift_object_server, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:48:17 standalone.localdomain podman[557327]: 2025-10-13 15:48:17.254008737 +0000 UTC m=+0.073036329 container died 1d10019bb6074e27491cab73d28ba88d89e9af96d48bf008506d4789ab506f0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.291 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.292 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9068MB free_disk=6.9495849609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.292 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.292 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:48:17 standalone.localdomain podman[557265]: 2025-10-13 15:48:17.317830107 +0000 UTC m=+0.283984969 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, container_name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., io.buildah.version=1.33.12, name=rhosp17/openstack-swift-account, config_id=tripleo_step4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git)
Oct 13 15:48:17 standalone.localdomain podman[557385]: 2025-10-13 15:48:17.354228803 +0000 UTC m=+0.062652255 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, release=1, build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, container_name=swift_container_server, name=rhosp17/openstack-swift-container, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, version=17.1.9, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.365 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.365 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.365 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.366 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:48:17 standalone.localdomain podman[557327]: 2025-10-13 15:48:17.371539894 +0000 UTC m=+0.190567466 container remove 1d10019bb6074e27491cab73d28ba88d89e9af96d48bf008506d4789ab506f0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.427 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:48:17 standalone.localdomain dnsmasq[542070]: exiting on receipt of SIGTERM
Oct 13 15:48:17 standalone.localdomain podman[557412]: 2025-10-13 15:48:17.454228842 +0000 UTC m=+0.110085314 container kill 1b15db3f4f347fa3bc2265a235abb7e978902f6669ef63218213acf0a9ebb628 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-68d6abcf-919b-4bb7-94e1-d848f2626899, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:17 standalone.localdomain systemd[1]: libpod-1b15db3f4f347fa3bc2265a235abb7e978902f6669ef63218213acf0a9ebb628.scope: Deactivated successfully.
Oct 13 15:48:17 standalone.localdomain systemd[1]: libpod-conmon-1d10019bb6074e27491cab73d28ba88d89e9af96d48bf008506d4789ab506f0a.scope: Deactivated successfully.
Oct 13 15:48:17 standalone.localdomain podman[557264]: 2025-10-13 15:48:17.468748855 +0000 UTC m=+0.439293674 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, tcib_managed=true, vcs-type=git, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, container_name=swift_object_server, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:48:17 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:48:17 standalone.localdomain podman[557467]: 2025-10-13 15:48:17.52372644 +0000 UTC m=+0.052890020 container died 1b15db3f4f347fa3bc2265a235abb7e978902f6669ef63218213acf0a9ebb628 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-68d6abcf-919b-4bb7-94e1-d848f2626899, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:17 standalone.localdomain podman[557456]: 
Oct 13 15:48:17 standalone.localdomain podman[557456]: 2025-10-13 15:48:17.541877596 +0000 UTC m=+0.082866345 container create 151ac0e8cf50f0835cea73cc2a1d1d3c4d6c583f7211c9c737d8172bd15b97f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36ceda34-93ec-4a19-8f02-93322043557c, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:48:17 standalone.localdomain podman[557467]: 2025-10-13 15:48:17.558274838 +0000 UTC m=+0.087438428 container remove 1b15db3f4f347fa3bc2265a235abb7e978902f6669ef63218213acf0a9ebb628 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-68d6abcf-919b-4bb7-94e1-d848f2626899, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:48:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:17Z|00682|binding|INFO|Releasing lport b2b644e5-541f-482f-9d12-04ceb278cc96 from this chassis (sb_readonly=0)
Oct 13 15:48:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:17Z|00683|binding|INFO|Setting lport b2b644e5-541f-482f-9d12-04ceb278cc96 down in Southbound
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:17 standalone.localdomain kernel: device tapb2b644e5-54 left promiscuous mode
Oct 13 15:48:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:17.577 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.242/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-68d6abcf-919b-4bb7-94e1-d848f2626899', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68d6abcf-919b-4bb7-94e1-d848f2626899', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6bae4ad0831e437b8077d414e4772ce7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84bb2532-77ab-42be-8da1-f4d14aec11cb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=b2b644e5-541f-482f-9d12-04ceb278cc96) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:17 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:48:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:17.579 378821 INFO neutron.agent.ovn.metadata.agent [-] Port b2b644e5-541f-482f-9d12-04ceb278cc96 in datapath 68d6abcf-919b-4bb7-94e1-d848f2626899 unbound from our chassis
Oct 13 15:48:17 standalone.localdomain systemd[1]: libpod-conmon-1b15db3f4f347fa3bc2265a235abb7e978902f6669ef63218213acf0a9ebb628.scope: Deactivated successfully.
Oct 13 15:48:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:17.581 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 68d6abcf-919b-4bb7-94e1-d848f2626899, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:17.582 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[e76fd7c6-7fb8-4372-9c77-107c6cd0954a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:17 standalone.localdomain podman[557385]: 2025-10-13 15:48:17.582788293 +0000 UTC m=+0.291211705 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, container_name=swift_container_server, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-container-container, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, name=rhosp17/openstack-swift-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:17 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:48:17 standalone.localdomain podman[557456]: 2025-10-13 15:48:17.509905979 +0000 UTC m=+0.050894738 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:17 standalone.localdomain systemd[1]: Started libpod-conmon-151ac0e8cf50f0835cea73cc2a1d1d3c4d6c583f7211c9c737d8172bd15b97f6.scope.
Oct 13 15:48:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/303db635c915b698097852bef2d84a9616a567ec99741ff40953eab556d38aae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:17 standalone.localdomain podman[557456]: 2025-10-13 15:48:17.646246373 +0000 UTC m=+0.187235122 container init 151ac0e8cf50f0835cea73cc2a1d1d3c4d6c583f7211c9c737d8172bd15b97f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36ceda34-93ec-4a19-8f02-93322043557c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:17 standalone.localdomain podman[557456]: 2025-10-13 15:48:17.652291141 +0000 UTC m=+0.193279890 container start 151ac0e8cf50f0835cea73cc2a1d1d3c4d6c583f7211c9c737d8172bd15b97f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36ceda34-93ec-4a19-8f02-93322043557c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:48:17 standalone.localdomain dnsmasq[557535]: started, version 2.85 cachesize 150
Oct 13 15:48:17 standalone.localdomain dnsmasq[557535]: DNS service limited to local subnets
Oct 13 15:48:17 standalone.localdomain dnsmasq[557535]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:17 standalone.localdomain dnsmasq[557535]: warning: no upstream servers configured
Oct 13 15:48:17 standalone.localdomain dnsmasq-dhcp[557535]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:48:17 standalone.localdomain dnsmasq[557535]: read /var/lib/neutron/dhcp/36ceda34-93ec-4a19-8f02-93322043557c/addn_hosts - 0 addresses
Oct 13 15:48:17 standalone.localdomain dnsmasq-dhcp[557535]: read /var/lib/neutron/dhcp/36ceda34-93ec-4a19-8f02-93322043557c/host
Oct 13 15:48:17 standalone.localdomain dnsmasq-dhcp[557535]: read /var/lib/neutron/dhcp/36ceda34-93ec-4a19-8f02-93322043557c/opts
Oct 13 15:48:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:48:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:17.712 496978 INFO neutron.agent.dhcp.agent [None req-587000de-6824-492b-93e9-2f1e31f3b2d7 - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:48:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4020: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:48:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1867468476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.831 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.838 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:48:17 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/35119191' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:48:17 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1867468476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.859 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.863 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:48:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:17.863 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:48:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7aeca40c2c780bfaddc755de17b413a4b9d49aaa9c1e356f4221b17f83a9a9ce-merged.mount: Deactivated successfully.
Oct 13 15:48:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d10019bb6074e27491cab73d28ba88d89e9af96d48bf008506d4789ab506f0a-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d5e71b6d35c72e5434c6ec8fc231ab9f19b73d06665d6c0487ef7836b1642d7b-merged.mount: Deactivated successfully.
Oct 13 15:48:17 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b15db3f4f347fa3bc2265a235abb7e978902f6669ef63218213acf0a9ebb628-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:17.909 496978 INFO neutron.agent.dhcp.agent [None req-88e1a4bf-1727-47ad-914c-ee16daea889a - - - - - -] DHCP configuration for ports {'245d6d25-134b-4c62-a003-27d9134a7849'} is completed
Oct 13 15:48:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:18.046 496978 INFO neutron.agent.dhcp.agent [None req-443179d8-efa3-407e-b25b-e46091edead5 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:48:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:18.047 496978 INFO neutron.agent.dhcp.agent [None req-443179d8-efa3-407e-b25b-e46091edead5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:18 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d68d6abcf\x2d919b\x2d4bb7\x2d94e1\x2dd848f2626899.mount: Deactivated successfully.
Oct 13 15:48:18 standalone.localdomain podman[557570]: 
Oct 13 15:48:18 standalone.localdomain podman[557570]: 2025-10-13 15:48:18.355777395 +0000 UTC m=+0.085066034 container create 519c57cd0db881364492026e89a79ad2b04509d7b474682a4eeab373da30ba70 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:48:18 standalone.localdomain systemd[1]: Started libpod-conmon-519c57cd0db881364492026e89a79ad2b04509d7b474682a4eeab373da30ba70.scope.
Oct 13 15:48:18 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:18 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/668c09148618153d48ed886079eb71135bf7d4a12a580d2954793bdbee598567/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:18 standalone.localdomain podman[557570]: 2025-10-13 15:48:18.317658856 +0000 UTC m=+0.046947515 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:18 standalone.localdomain podman[557570]: 2025-10-13 15:48:18.427827013 +0000 UTC m=+0.157115682 container init 519c57cd0db881364492026e89a79ad2b04509d7b474682a4eeab373da30ba70 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:48:18 standalone.localdomain podman[557570]: 2025-10-13 15:48:18.434192271 +0000 UTC m=+0.163480940 container start 519c57cd0db881364492026e89a79ad2b04509d7b474682a4eeab373da30ba70 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:18 standalone.localdomain dnsmasq[557588]: started, version 2.85 cachesize 150
Oct 13 15:48:18 standalone.localdomain dnsmasq[557588]: DNS service limited to local subnets
Oct 13 15:48:18 standalone.localdomain dnsmasq[557588]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:18 standalone.localdomain dnsmasq[557588]: warning: no upstream servers configured
Oct 13 15:48:18 standalone.localdomain dnsmasq-dhcp[557588]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:48:18 standalone.localdomain dnsmasq[557588]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:18 standalone.localdomain dnsmasq-dhcp[557588]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:18 standalone.localdomain dnsmasq-dhcp[557588]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:18.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:18.492 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:48:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3660360127' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:48:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:48:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3660360127' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:48:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:18.662 496978 INFO neutron.agent.dhcp.agent [None req-4b9f85d0-87bb-4e28-a1d0-7859ed38442b - - - - - -] DHCP configuration for ports {'9ca17eab-8896-42c0-83ac-636695eb3c16', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:48:18 standalone.localdomain dnsmasq[557588]: exiting on receipt of SIGTERM
Oct 13 15:48:18 standalone.localdomain podman[557606]: 2025-10-13 15:48:18.81531811 +0000 UTC m=+0.061702066 container kill 519c57cd0db881364492026e89a79ad2b04509d7b474682a4eeab373da30ba70 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:18 standalone.localdomain systemd[1]: libpod-519c57cd0db881364492026e89a79ad2b04509d7b474682a4eeab373da30ba70.scope: Deactivated successfully.
Oct 13 15:48:18 standalone.localdomain ceph-mon[29756]: pgmap v4020: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3660360127' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:48:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/3660360127' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:48:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:18.856 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:18Z, description=, device_id=6502e9b3-26ea-44ad-afad-86a6c6b93dc3, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888eaa910>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888eaac70>], id=91518588-d76a-4596-9f84-45e1ebc646b9, ip_allocation=immediate, mac_address=fa:16:3e:8e:49:cf, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2253, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:48:18Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:48:18 standalone.localdomain podman[557619]: 2025-10-13 15:48:18.896004917 +0000 UTC m=+0.069457338 container died 519c57cd0db881364492026e89a79ad2b04509d7b474682a4eeab373da30ba70 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-519c57cd0db881364492026e89a79ad2b04509d7b474682a4eeab373da30ba70-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:18 standalone.localdomain podman[557619]: 2025-10-13 15:48:18.936972764 +0000 UTC m=+0.110425145 container cleanup 519c57cd0db881364492026e89a79ad2b04509d7b474682a4eeab373da30ba70 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:48:18 standalone.localdomain systemd[1]: libpod-conmon-519c57cd0db881364492026e89a79ad2b04509d7b474682a4eeab373da30ba70.scope: Deactivated successfully.
Oct 13 15:48:18 standalone.localdomain podman[557627]: 2025-10-13 15:48:18.979285674 +0000 UTC m=+0.137897162 container remove 519c57cd0db881364492026e89a79ad2b04509d7b474682a4eeab373da30ba70 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:18 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:18Z|00684|binding|INFO|Releasing lport 9ca17eab-8896-42c0-83ac-636695eb3c16 from this chassis (sb_readonly=0)
Oct 13 15:48:18 standalone.localdomain kernel: device tap9ca17eab-88 left promiscuous mode
Oct 13 15:48:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:18.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:18 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:18Z|00685|binding|INFO|Setting lport 9ca17eab-8896-42c0-83ac-636695eb3c16 down in Southbound
Oct 13 15:48:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:19.006 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe66:c56c/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=9ca17eab-8896-42c0-83ac-636695eb3c16) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:19.009 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 9ca17eab-8896-42c0-83ac-636695eb3c16 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:48:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:19.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:19.014 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:19.015 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[34095345-5549-4e7f-b553-cdc471dff7c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:19 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:48:19 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:48:19 standalone.localdomain podman[557665]: 2025-10-13 15:48:19.130176271 +0000 UTC m=+0.064325198 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 13 15:48:19 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:48:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:19.243 496978 INFO neutron.agent.dhcp.agent [None req-495a451b-f212-4a6e-b3f0-3d2ab893275c - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:48:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:19.245 496978 INFO neutron.agent.dhcp.agent [None req-495a451b-f212-4a6e-b3f0-3d2ab893275c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:19.331 496978 INFO neutron.agent.linux.ip_lib [None req-d32de692-ecf7-4735-a32c-2a943250a6c3 - - - - - -] Device tap6599aeca-da cannot be used as it has no MAC address
Oct 13 15:48:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:19.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:19 standalone.localdomain kernel: device tap6599aeca-da entered promiscuous mode
Oct 13 15:48:19 standalone.localdomain NetworkManager[5962]: <info>  [1760370499.3673] manager: (tap6599aeca-da): new Generic device (/org/freedesktop/NetworkManager/Devices/120)
Oct 13 15:48:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:19.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:19 standalone.localdomain systemd-udevd[557695]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:48:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:19Z|00686|binding|INFO|Claiming lport 6599aeca-daf6-403a-a046-673e6a3ca277 for this chassis.
Oct 13 15:48:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:19Z|00687|binding|INFO|6599aeca-daf6-403a-a046-673e6a3ca277: Claiming unknown
Oct 13 15:48:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:19.386 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-2d0e9665-b447-40a6-9fa6-412d077bfcc1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d0e9665-b447-40a6-9fa6-412d077bfcc1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=627a9433-425e-4a66-be18-c8c6a42b1b43, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=6599aeca-daf6-403a-a046-673e6a3ca277) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:19.388 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 6599aeca-daf6-403a-a046-673e6a3ca277 in datapath 2d0e9665-b447-40a6-9fa6-412d077bfcc1 bound to our chassis
Oct 13 15:48:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:19.391 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2d0e9665-b447-40a6-9fa6-412d077bfcc1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:19.393 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[6d45eb52-9e98-4ee5-a239-11b5380fde6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:19.409 496978 INFO neutron.agent.dhcp.agent [None req-28d6f36a-a932-4fb0-8514-48dd8889d98b - - - - - -] DHCP configuration for ports {'91518588-d76a-4596-9f84-45e1ebc646b9'} is completed
Oct 13 15:48:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:19.412 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:19Z|00688|binding|INFO|Setting lport 6599aeca-daf6-403a-a046-673e6a3ca277 ovn-installed in OVS
Oct 13 15:48:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:19Z|00689|binding|INFO|Setting lport 6599aeca-daf6-403a-a046-673e6a3ca277 up in Southbound
Oct 13 15:48:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:19.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:19.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:19.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4021: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:19.773 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:18Z, description=, device_id=cf49f818-1bcb-4c13-ba84-0496bf40f984, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e5e520>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889085d00>], id=279b1cc5-53eb-45e2-918c-301d7a2cb080, ip_allocation=immediate, mac_address=fa:16:3e:89:99:74, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2256, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:48:19Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:48:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:19.860 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:48:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-668c09148618153d48ed886079eb71135bf7d4a12a580d2954793bdbee598567-merged.mount: Deactivated successfully.
Oct 13 15:48:19 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:48:20 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:48:20 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:48:20 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:48:20 standalone.localdomain podman[557740]: 2025-10-13 15:48:20.019927796 +0000 UTC m=+0.061006284 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:48:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:20.080 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:15Z, description=, device_id=b4c672be-7dfa-443a-90ab-5f7aa9eb0a30, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890a8160>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890a81c0>], id=4a849bd5-236f-44d3-8aed-ddf3f2888458, ip_allocation=immediate, mac_address=fa:16:3e:0e:dd:6c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:12Z, description=, dns_domain=, id=bce779c4-d902-46c0-836b-06ef68090247, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1972524470, port_security_enabled=True, project_id=aaa42b564883447080cd4183011edf7e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=39049, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2234, status=ACTIVE, subnets=['97a7e799-898d-47f8-aad8-510877a3e045'], tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:48:13Z, vlan_transparent=None, network_id=bce779c4-d902-46c0-836b-06ef68090247, port_security_enabled=False, project_id=aaa42b564883447080cd4183011edf7e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2246, status=DOWN, tags=[], tenant_id=aaa42b564883447080cd4183011edf7e, updated_at=2025-10-13T15:48:15Z on network bce779c4-d902-46c0-836b-06ef68090247
Oct 13 15:48:20 standalone.localdomain dnsmasq[557141]: read /var/lib/neutron/dhcp/bce779c4-d902-46c0-836b-06ef68090247/addn_hosts - 1 addresses
Oct 13 15:48:20 standalone.localdomain dnsmasq-dhcp[557141]: read /var/lib/neutron/dhcp/bce779c4-d902-46c0-836b-06ef68090247/host
Oct 13 15:48:20 standalone.localdomain dnsmasq-dhcp[557141]: read /var/lib/neutron/dhcp/bce779c4-d902-46c0-836b-06ef68090247/opts
Oct 13 15:48:20 standalone.localdomain podman[557785]: 2025-10-13 15:48:20.298386751 +0000 UTC m=+0.055537423 container kill cb85ff222a69a0cd6ef39bf07719a1e3d0d4648e9ef91b52ae95a35d5a68849f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bce779c4-d902-46c0-836b-06ef68090247, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:48:20 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:20Z|00690|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:48:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:20.370 496978 INFO neutron.agent.dhcp.agent [None req-ed008a09-a8b1-4132-9c74-5cedf4c00001 - - - - - -] DHCP configuration for ports {'279b1cc5-53eb-45e2-918c-301d7a2cb080'} is completed
Oct 13 15:48:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:20.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:20 standalone.localdomain podman[557826]: 
Oct 13 15:48:20 standalone.localdomain podman[557826]: 2025-10-13 15:48:20.509632462 +0000 UTC m=+0.077654054 container create 700686dfb0855ba201ecd4d23a8eab0d3bb14da21685374b3260a05026ee07c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d0e9665-b447-40a6-9fa6-412d077bfcc1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:20.520 496978 INFO neutron.agent.dhcp.agent [None req-b9fbcec1-274f-4aec-a251-c1f773530612 - - - - - -] DHCP configuration for ports {'4a849bd5-236f-44d3-8aed-ddf3f2888458'} is completed
Oct 13 15:48:20 standalone.localdomain systemd[1]: Started libpod-conmon-700686dfb0855ba201ecd4d23a8eab0d3bb14da21685374b3260a05026ee07c0.scope.
Oct 13 15:48:20 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:20 standalone.localdomain podman[557826]: 2025-10-13 15:48:20.463607245 +0000 UTC m=+0.031628807 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:20 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2754dab4f64d172425d301727b977ed56d581a46c6322d5f47e86aa3a820d981/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:20 standalone.localdomain podman[557826]: 2025-10-13 15:48:20.571276514 +0000 UTC m=+0.139298086 container init 700686dfb0855ba201ecd4d23a8eab0d3bb14da21685374b3260a05026ee07c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d0e9665-b447-40a6-9fa6-412d077bfcc1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:48:20 standalone.localdomain podman[557826]: 2025-10-13 15:48:20.578127438 +0000 UTC m=+0.146149040 container start 700686dfb0855ba201ecd4d23a8eab0d3bb14da21685374b3260a05026ee07c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d0e9665-b447-40a6-9fa6-412d077bfcc1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:48:20 standalone.localdomain dnsmasq[557847]: started, version 2.85 cachesize 150
Oct 13 15:48:20 standalone.localdomain dnsmasq[557847]: DNS service limited to local subnets
Oct 13 15:48:20 standalone.localdomain dnsmasq[557847]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:20 standalone.localdomain dnsmasq[557847]: warning: no upstream servers configured
Oct 13 15:48:20 standalone.localdomain dnsmasq-dhcp[557847]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:48:20 standalone.localdomain dnsmasq[557847]: read /var/lib/neutron/dhcp/2d0e9665-b447-40a6-9fa6-412d077bfcc1/addn_hosts - 0 addresses
Oct 13 15:48:20 standalone.localdomain dnsmasq-dhcp[557847]: read /var/lib/neutron/dhcp/2d0e9665-b447-40a6-9fa6-412d077bfcc1/host
Oct 13 15:48:20 standalone.localdomain dnsmasq-dhcp[557847]: read /var/lib/neutron/dhcp/2d0e9665-b447-40a6-9fa6-412d077bfcc1/opts
Oct 13 15:48:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:20.640 496978 INFO neutron.agent.dhcp.agent [None req-7b19d8d6-a7d5-4e6a-9502-e9943267163d - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:48:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:20.786 496978 INFO neutron.agent.dhcp.agent [None req-9125b4b8-8c26-4993-9e41-1977295e514c - - - - - -] DHCP configuration for ports {'fbcb77e7-19b6-4098-abfb-f217811f572b'} is completed
Oct 13 15:48:20 standalone.localdomain ceph-mon[29756]: pgmap v4021: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:48:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:20.996 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.001 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.002 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.007 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.014 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e0dc8c89-7732-4896-9b8b-1d71a0925820', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:48:21.002792', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '0b5c96c0-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.192072154, 'message_signature': 'b2e7f36266377d5e7642215552f52b9969b6581f90f510e88d99df429415a0ce'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:48:21.002792', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '0b5d9a3e-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.197970308, 'message_signature': '62a1835c860bc081e14b1eb2e4f8b0b96470154c5de2ba28a7fdfd49f036447c'}]}, 'timestamp': '2025-10-13 15:48:21.015324', '_unique_id': 'c6915686b41647198806a2644ee7a25b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.017 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.018 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:48:21 standalone.localdomain podman[557848]: 2025-10-13 15:48:21.032849802 +0000 UTC m=+0.099322769 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.build-date=20251009, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.041 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 53.14453125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain podman[557848]: 2025-10-13 15:48:21.048233872 +0000 UTC m=+0.114706859 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.065 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '49ffbe55-38f1-47c5-8f25-f5ad775ed3b8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 53.14453125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:48:21.018867', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '0b61b63c-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.230545925, 'message_signature': '69dfe2682d93fde317f75fa0341409ee16b24d9e975af84e604fca50511c9115'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:48:21.018867', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '0b6558c8-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.254241383, 'message_signature': '667b0409766bddf30b3f301dc6b77dc29e79d4891be344e1e690003122c6b42b'}]}, 'timestamp': '2025-10-13 15:48:21.066070', '_unique_id': 'cbe1985dc04841c8b58d772e632d22f4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.068 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.069 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.070 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:48:21 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.106 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.106 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 15370724 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.107 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.125 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.126 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'be9c1a96-6449-4a59-bf26-2ea624709e49', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:48:21.070263', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b6b9634-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': '868148fdba059671503a9666bc902d7b5f6278fb3d50416a51ebc8b0580a34c7'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 15370724, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:48:21.070263', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b6bac78-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': '9f52bb05cfd6fc0ca505eb184c18b2054ed9cb351d14853b1b9b4706966d0c33'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:48:21.070263', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '0b6bc37a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': 'd800433ac748c9d1c5559969188c29fadd605e7605227a3d0fe9e770752f981b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:48:21.070263', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b6e942e-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.297246094, 'message_signature': '34af6da2d097260226c397b57d0c09ada52a0802a52fdc9452b880c91597f441'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:48:21.070263', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b6ea180-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.297246094, 'message_signature': '1ae34886b8b3e1c25a0846b248ee893cf2e501aeb3b896ff143d927b2db4f44e'}]}, 'timestamp': '2025-10-13 15:48:21.126793', '_unique_id': '44df5f901006487e86035eee746986a2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.128 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.129 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.130 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1018 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.131 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.131 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 10 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.131 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.131 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3dcede2c-6edc-481d-8318-cf8dc5dbd43d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1018, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:48:21.130010', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b6f4a9a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': '3dfef86038fe69c977331ef8fb0c88c0ac5a16303ee4b3a84ec68595948ed3e6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:48:21.130010', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b6f56de-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': '6739225188955cc9b8bd5b1f73e86e512f4d0c15152620809447f87a717cf0fb'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 10, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:48:21.130010', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '0b6f6228-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': '9ca2c47e71bf350cec0c50b86fe16f351bc183247351c9a08b23e00341be4522'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:48:21.130010', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b6f6eda-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.297246094, 'message_signature': '49786262bfd81a32acd6f562e01c457ad0b77f54a47e30f3d665d7b6ec5fbe5f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:48:21.130010', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b6f7a38-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.297246094, 'message_signature': '1a40b8322d939d84903247d7264316cbdecf77a79961245af63fc888b4db323f'}]}, 'timestamp': '2025-10-13 15:48:21.132282', '_unique_id': '7974447e6b6c4110a8402a7d2b2c06b0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.133 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.134 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.134 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29758464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.134 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.134 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 40960 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.135 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.135 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '359a1377-0f40-4077-a7f3-82961cf1f215', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29758464, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:48:21.134257', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b6fd2e4-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': 'c6fc817b612ac1e4375606933f8525017fd88fdbddc287cbc71a02511b83f075'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:48:21.134257', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b6fddfc-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': '6761dbbdcbd1c2f270729d2677df868886ac4a7ccee89006b79c7c8c62629fd3'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 40960, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:48:21.134257', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '0b6fea9a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': '9b7d5555ed12b6407ca0270751dcdd6b0f4603273650c72f9e7017c68c86733c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:48:21.134257', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b6ff594-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.297246094, 'message_signature': '543880eac6c6a47aafbc862582132ff1790bb87a646e345c4dd788013d69a79d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:48:21.134257', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b7002b4-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.297246094, 'message_signature': 'ce6832fef5ed3fa56147dc72bc452b2fc913b1796b32845cb8c28029b26aedf5'}]}, 'timestamp': '2025-10-13 15:48:21.135789', '_unique_id': 'aca7b9e64a49433286560b44e5998ff2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.136 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.137 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.137 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 131 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.138 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3b094073-983c-4a6e-9c5b-6948344ea13b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 131, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:48:21.137707', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '0b705a48-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.192072154, 'message_signature': 'd6dd7b7ca56c7872b861f65450d8c186bdff1075adf70b4d84831767cd30c71c'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:48:21.137707', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '0b70683a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.197970308, 'message_signature': '8b6ada3f83e8186a2c5503406004c535777841523276787ffce44a24748eb441'}]}, 'timestamp': '2025-10-13 15:48:21.138383', '_unique_id': 'df763063438b4cde851b9fd95ea311c4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.139 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.149 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.149 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.150 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.159 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.160 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '78786496-e0cb-47fb-b63d-8d9d7ae256eb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:48:21.140100', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b7223be-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.329349946, 'message_signature': '8ff86532ed90c47a4e17cddaa9a9c805b41828822554f73d38112d30ed52ea1e'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:48:21.140100', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b722fc6-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.329349946, 'message_signature': '9f2b52c8b2ab5c0a26957851d5d0f82460a530132fba3666cc4559ffd8bf56fe'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:48:21.140100', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '0b723a0c-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.329349946, 'message_signature': 'c5c473199d0d49a9d5c922dcb3958003f907eaa5562fe48242da1ed4fc4bc0c1'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:48:21.140100', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b73c0a2-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.339502212, 'message_signature': 'acfe3ad0a999d27880c95a36b12ea0194d83d26d481926fe25bbc988cfb0b185'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:48:21.140100', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b73cbc4-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.339502212, 'message_signature': 'bd961d0c12e015ac0f1b02448703164f17208a0333815bc922ced354947efaf7'}]}, 'timestamp': '2025-10-13 15:48:21.160601', '_unique_id': '39b5059e52544997933d7090d47dd250'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.161 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.162 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.162 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.163 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8332a968-1330-4eca-a073-8195af6cc69a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:48:21.162807', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '0b742e66-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.192072154, 'message_signature': '089dd549e8eb53ce9d87122345502fa4c689be4a042c7204c4af899a6c70d57b'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:48:21.162807', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '0b743bc2-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.197970308, 'message_signature': '68eebbd2726190a9d0f3a2737fb79318d36100731e497cb6bba43cf6c251addb'}]}, 'timestamp': '2025-10-13 15:48:21.163453', '_unique_id': '8207dc3318f043158086321f8eee2ff7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.164 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.165 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 689775433 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.165 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 105173955 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.165 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 8149490 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.165 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.166 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '299aaf74-816b-421d-aaa3-72bf25ca0884', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 689775433, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:48:21.165116', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b74897e-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': 'f8c0c2aebe0ccc1699ceda8772ca01512c9a6875d5550158f6adae9228ccb448'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 105173955, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:48:21.165116', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b749450-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': '1abd9157b573c585e27cc7e7c3e5cc23c76d5b211c92975fb53be2ff0bf35bda'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 8149490, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:48:21.165116', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '0b749e32-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': '67d935cc2b59633c4c95ff748272779dce09e875c7cc8922f84da3237368cbaa'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:48:21.165116', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b74a9e0-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.297246094, 'message_signature': 'f76e31d28ec9a26740a7537a5803a0c4daa01cbabda33dee8cf40e838e0bbcff'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:48:21.165116', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b74b3f4-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.297246094, 'message_signature': '91b66e7cf609a6648afaf53729305b2c37237cedcd1e03ddf76513661671e538'}]}, 'timestamp': '2025-10-13 15:48:21.166530', '_unique_id': 'f12a7a3a3d1b4d67b6fb5c138e283205'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.167 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.168 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.168 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 8960 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.168 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 4008 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2d46a067-8939-43d6-9709-30d5961354f7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8960, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:48:21.168174', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '0b750020-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.192072154, 'message_signature': '991ce7b3675b50a0d45544863028427668905ca7b35d72670824658cc0dbe660'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4008, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:48:21.168174', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '0b750b74-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.197970308, 'message_signature': 'b01ba53207f122f42d269c51327fcad585eac8116fb2d1a9934f97ab090d6ecc'}]}, 'timestamp': '2025-10-13 15:48:21.168768', '_unique_id': 'c279f3d78c5541d1a6a7555f40dfb598'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.169 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.170 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.170 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 85 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.170 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 57 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '81703acb-2db6-4545-a412-5276252b82a0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 85, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:48:21.170230', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '0b755066-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.192072154, 'message_signature': '9de91567f81314449daf51fa2729b4ba13e18f1e146675d3e19ff29340a41ab0'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 57, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:48:21.170230', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '0b755bc4-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.197970308, 'message_signature': '60933c4f78a62c127064740732ba9ab67d9bd68b3a560a76c914b1a82cd316f2'}]}, 'timestamp': '2025-10-13 15:48:21.170821', '_unique_id': '23bb72920c534a2e81b6c542b0d0d96f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.171 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.172 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.172 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 11543 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.172 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1bdac618-c2f1-4387-ac9d-59d1199de1cd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11543, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:48:21.172459', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '0b75a7dc-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.192072154, 'message_signature': 'a478043c5975ad20ed4c517a0bd9e4287922543196a4541bcef69c6745d246b6'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:48:21.172459', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '0b75b29a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.197970308, 'message_signature': '025c53c5bbfd848e2bec5483f94ac77d473690be063435be170b4794d8805ed4'}]}, 'timestamp': '2025-10-13 15:48:21.173057', '_unique_id': '6f6eacb07149471ca53e64fffe13e9c7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.173 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.174 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.174 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.174 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '59874703-68de-4f92-a086-9d9edb1f642c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:48:21.174526', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '0b75f7fa-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.192072154, 'message_signature': '1d904a9c0f8fdd715606ee93ee57bf1d39fe958e3db0ae5d543c0c0a3ce00291'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:48:21.174526', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '0b760290-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.197970308, 'message_signature': '07723e15c33a576d66e388b5cfc10584626bd38fc69542de6d68bc8ec45c3f4e'}]}, 'timestamp': '2025-10-13 15:48:21.175090', '_unique_id': 'd8577c34c7434e6fa70d680d90d67916'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.175 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.176 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.176 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.176 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.177 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.177 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.177 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a948ef4b-9a83-4003-9f47-1dd6826bf606', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:48:21.176562', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b76473c-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.329349946, 'message_signature': '088ebb52ccdb32114a7d935d7ad079b62fa95fc425e47987c56e8dc060923bfc'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:48:21.176562', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b765150-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.329349946, 'message_signature': '9810047d3ed5b58d772ec28be735562c6b9fc19b80ab2e0176e6e698f3379f47'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:48:21.176562', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '0b7662f8-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.329349946, 'message_signature': '671af2a76eaf40f1ecb890f918fbdafac10c1a02f2d8ac1f9638b9785cf01ee3'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:48:21.176562', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b766e1a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.339502212, 'message_signature': 'b159842ba63b0fa9f21c2b1f7ad6960c208699502ad356d9c19752f7efe609c3'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:48:21.176562', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b7677b6-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.339502212, 'message_signature': 'dc92d60823f9874a5bf622a9b2d27b6c00046066f17974cf835c5a87367a1bbe'}]}, 'timestamp': '2025-10-13 15:48:21.178076', '_unique_id': '83f347e618b042c98bfd67336014fec0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.178 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.179 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.179 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.179 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bd08d584-7d17-47e7-8af4-5e8c5a3d0d5e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:48:21.179607', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '0b76be6a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.192072154, 'message_signature': '8a83c5cde59306993dfa44580dd8f3d9850f94703b638a211bd459dd46e6cf19'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:48:21.179607', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '0b76c914-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.197970308, 'message_signature': '5c4b1587a4ae6d6b81578bebfe98b8c0b7da3d3f343360cc9c90128d8d073944'}]}, 'timestamp': '2025-10-13 15:48:21.180170', '_unique_id': '29fde38d567d4c88bad53f6419631849'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.180 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.181 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.181 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.181 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.182 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.182 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.182 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'aca1554d-53fe-4f0f-b58b-5154ffb0e24d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:48:21.181549', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b770a0a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.329349946, 'message_signature': 'cd6ad9af6fd85a228db6282cdb80296a227fa65ede054a6303c38380d1a53f63'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:48:21.181549', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b771428-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.329349946, 'message_signature': 'd819c47512a2cbb2f34cbe98dfbd9d7e57695a987039a0f7a25423d672690e29'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:48:21.181549', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '0b771dce-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.329349946, 'message_signature': 'b6f8b20eaa00d74dfcfe9ae875a086b63fbd0004c224b9af6c6eac6b40a2a321'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:48:21.181549', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b77276a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.339502212, 'message_signature': 'f2effb2e19350736445add15a37e947d9386819cb37861579f2fb04b778357ee'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:48:21.181549', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b77339a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.339502212, 'message_signature': 'db4bd6fa4142e2c69d08c04f636248801c15cde322b038796777f1026d269cb2'}]}, 'timestamp': '2025-10-13 15:48:21.182886', '_unique_id': 'dc216a87e25f48e2ae56911b8da0a50e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.183 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.184 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.184 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.184 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a4f2fc63-79b4-41f7-97b4-19cec4a5cb59', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:48:21.184333', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '0b7776a2-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.192072154, 'message_signature': '39ec048fec0b52b8744ecc4d9aead9e8ceb65048dc37f18958ab8534c838b2d1'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:48:21.184333', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '0b77841c-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.197970308, 'message_signature': '242314b9035d367aa8d733a4e64f6501d260028e3ac34f67de3d319ab4e59084'}]}, 'timestamp': '2025-10-13 15:48:21.184978', '_unique_id': '2d0d4fed9df04b12ac670fc2061f9f03'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.185 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.186 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.186 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.186 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ae8eeb54-56f6-4e01-a893-ed6d8c97793b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:48:21.186378', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '0b77c738-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.192072154, 'message_signature': 'd12137ed5660c24c14de4ec40ade6387e560d8b59f7e607f396dc0c1ebd69cde'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:48:21.186378', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '0b77d426-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.197970308, 'message_signature': 'e7be15081a372b7a12d9a232dfec495cd6ef212042213d159906061e11830c6b'}]}, 'timestamp': '2025-10-13 15:48:21.187012', '_unique_id': '02f4a9fa28ce4698a11f718a85bc2b6b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.187 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.188 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.188 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.188 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.189 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.189 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.189 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '70b11cbc-dd0d-48ed-8bd4-d2877dbb253a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:48:21.188396', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b781620-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': 'ef851b0c4cbaca2934833b26e7e055824a37878f5ac63224f261cbe040c2294e'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:48:21.188396', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b78226e-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': '62cb3d720ef6faaff7e2d64917cd320e438aab1d72144a4f8b3780e9ad846d52'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:48:21.188396', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '0b782cb4-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': 'c93e612d5a4f60bb85b06f9cbcc6c27eb446533da2dcae27688cd818225cd1c9'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:48:21.188396', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b783664-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.297246094, 'message_signature': 'f830625c7cac525a8d09dea3297795503eaa541d642a2669b8aa40fee21718e3'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:48:21.188396', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b7840e6-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.297246094, 'message_signature': '15391f87cc09f15f09dda4c964f94b41fc5504434e8a3bcba553597dd63a450d'}]}, 'timestamp': '2025-10-13 15:48:21.189832', '_unique_id': 'fe215d7fc23442d39dedda7145dbf927'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.190 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.191 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.191 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 14440000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.191 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 36290000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ee1efc28-8ce8-4dd1-92ad-fd4c6daf09a8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 14440000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:48:21.191295', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '0b7886c8-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.230545925, 'message_signature': 'd8ae18aaad0a2079c5158be54a4a98926db84d48a50b8b6eadba10b86ca1388e'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 36290000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:48:21.191295', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '0b7891ae-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.254241383, 'message_signature': '80482ad2ee5982e216e66ed1d4226e48ef21367f92d2c6e4575361d79d50ff19'}]}, 'timestamp': '2025-10-13 15:48:21.191899', '_unique_id': 'fa11d3db1b7a47f4bf301c3ee4a6c681'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.192 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.193 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.193 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.193 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.194 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.195 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.195 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1ce313c2-e048-40d7-87dd-ffa1508bfca6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:48:21.193406', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b78d9de-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': 'b95e500924502e0ab49050fafe73d7cb413bf1a82cfe8bcecdc8031e2ac7fcd2'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:48:21.193406', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b78e604-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': 'd97c6b1645ad13a3eb82253b69e152c9c804c74480ad87dd022f6c92a372b127'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:48:21.193406', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '0b78f022-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.259536848, 'message_signature': '3abfcbc502e22c2033da58634982611efc6b3a958b422693a9f1d56f17908b1e'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:48:21.193406', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b791e76-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.297246094, 'message_signature': '2d476eda6c06275bcf7d1af25ff00387df096b7aaa7568d353eb4d8430b5a6b3'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:48:21.193406', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b792952-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 9944.297246094, 'message_signature': '16af5b40c65ef2c1574ea6932fad46c94a5a85643cec08e2e33526b12df9898e'}]}, 'timestamp': '2025-10-13 15:48:21.195740', '_unique_id': 'b315a5c73191457abfc1cf5d0d39049d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:48:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:48:21.196 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:48:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4022: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:22.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:48:22 standalone.localdomain ceph-mon[29756]: pgmap v4022: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:22.941 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:e7:a4 2001:db8::f816:3eff:fe02:e7a4'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe02:e7a4/64', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b3d945e2-458a-437c-9a37-dcc9416fabab) old=Port_Binding(mac=['fa:16:3e:02:e7:a4 10.100.0.2 2001:db8::f816:3eff:fe02:e7a4'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe02:e7a4/64', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 
'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:22.943 378821 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b3d945e2-458a-437c-9a37-dcc9416fabab in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 updated
Oct 13 15:48:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:22.947 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:22.947 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[3a13e526-adcb-4050-a1bb-08425df04d03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:48:23
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'manila_metadata', '.mgr', 'manila_data', 'images']
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:48:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:23.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:23.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27061 DF PROTO=TCP SPT=43468 DPT=9102 SEQ=2123169327 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD30D1F0000000001030307) 
Oct 13 15:48:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:23.751 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:22Z, description=, device_id=da6bd250-b26b-4200-8dd4-d383b9b96aa0, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188912cf10>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188912c580>], id=014e9c15-0d95-4fd8-966e-3d6aa50edc70, ip_allocation=immediate, mac_address=fa:16:3e:38:35:18, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:15Z, description=, dns_domain=, id=2d0e9665-b447-40a6-9fa6-412d077bfcc1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-420152032, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23130, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2245, status=ACTIVE, subnets=['b571a4ae-90f8-4e28-b4e5-c1109a08e8da'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:18Z, vlan_transparent=None, network_id=2d0e9665-b447-40a6-9fa6-412d077bfcc1, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2258, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:23Z on network 2d0e9665-b447-40a6-9fa6-412d077bfcc1
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4023: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:48:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:48:23 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:23.949 2 INFO neutron.agent.securitygroups_rpc [None req-77a3f418-3276-41bd-a8e7-7e8fcffefb20 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:48:23 standalone.localdomain dnsmasq[557847]: read /var/lib/neutron/dhcp/2d0e9665-b447-40a6-9fa6-412d077bfcc1/addn_hosts - 1 addresses
Oct 13 15:48:23 standalone.localdomain dnsmasq-dhcp[557847]: read /var/lib/neutron/dhcp/2d0e9665-b447-40a6-9fa6-412d077bfcc1/host
Oct 13 15:48:23 standalone.localdomain dnsmasq-dhcp[557847]: read /var/lib/neutron/dhcp/2d0e9665-b447-40a6-9fa6-412d077bfcc1/opts
Oct 13 15:48:23 standalone.localdomain podman[557885]: 2025-10-13 15:48:23.972607173 +0000 UTC m=+0.048300368 container kill 700686dfb0855ba201ecd4d23a8eab0d3bb14da21685374b3260a05026ee07c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d0e9665-b447-40a6-9fa6-412d077bfcc1, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:48:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:48:24 standalone.localdomain podman[557903]: 2025-10-13 15:48:24.16582879 +0000 UTC m=+0.106564035 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:48:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:24.193 496978 INFO neutron.agent.linux.ip_lib [None req-abd043e6-053e-4120-aec8-bc71cc05edf7 - - - - - -] Device tap66ba1a63-52 cannot be used as it has no MAC address
Oct 13 15:48:24 standalone.localdomain podman[557903]: 2025-10-13 15:48:24.204987222 +0000 UTC m=+0.145722427 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:48:24 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:48:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:24.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:24 standalone.localdomain kernel: device tap66ba1a63-52 entered promiscuous mode
Oct 13 15:48:24 standalone.localdomain NetworkManager[5962]: <info>  [1760370504.2408] manager: (tap66ba1a63-52): new Generic device (/org/freedesktop/NetworkManager/Devices/121)
Oct 13 15:48:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:24.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:24 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:24Z|00691|binding|INFO|Claiming lport 66ba1a63-5247-47d8-8431-b6ea9f4c0710 for this chassis.
Oct 13 15:48:24 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:24Z|00692|binding|INFO|66ba1a63-5247-47d8-8431-b6ea9f4c0710: Claiming unknown
Oct 13 15:48:24 standalone.localdomain systemd-udevd[557940]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:48:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:24.257 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe09:5584/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=66ba1a63-5247-47d8-8431-b6ea9f4c0710) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:24.259 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 66ba1a63-5247-47d8-8431-b6ea9f4c0710 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:48:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:24.263 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 8deee478-9fa3-4dfb-ae02-4d1f1324999b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:48:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:24.264 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:24.265 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[964ce672-2e2f-4e52-a488-30816b58f96a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:24 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap66ba1a63-52: No such device
Oct 13 15:48:24 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap66ba1a63-52: No such device
Oct 13 15:48:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:24.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:24 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:24Z|00693|binding|INFO|Setting lport 66ba1a63-5247-47d8-8431-b6ea9f4c0710 ovn-installed in OVS
Oct 13 15:48:24 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:24Z|00694|binding|INFO|Setting lport 66ba1a63-5247-47d8-8431-b6ea9f4c0710 up in Southbound
Oct 13 15:48:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:24.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:24 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap66ba1a63-52: No such device
Oct 13 15:48:24 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap66ba1a63-52: No such device
Oct 13 15:48:24 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap66ba1a63-52: No such device
Oct 13 15:48:24 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap66ba1a63-52: No such device
Oct 13 15:48:24 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap66ba1a63-52: No such device
Oct 13 15:48:24 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap66ba1a63-52: No such device
Oct 13 15:48:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:24.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:24.381 496978 INFO neutron.agent.dhcp.agent [None req-e27fa203-160e-4b2d-8a67-f20106cf1c94 - - - - - -] DHCP configuration for ports {'014e9c15-0d95-4fd8-966e-3d6aa50edc70'} is completed
Oct 13 15:48:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27062 DF PROTO=TCP SPT=43468 DPT=9102 SEQ=2123169327 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD311370000000001030307) 
Oct 13 15:48:24 standalone.localdomain ceph-mon[29756]: pgmap v4023: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:25 standalone.localdomain podman[558010]: 
Oct 13 15:48:25 standalone.localdomain podman[558010]: 2025-10-13 15:48:25.299735261 +0000 UTC m=+0.094332204 container create 59140a16f281ec7ae1c9469bd62f68b54579d1378b3d54dcad849772f6b08652 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:25 standalone.localdomain systemd[1]: Started libpod-conmon-59140a16f281ec7ae1c9469bd62f68b54579d1378b3d54dcad849772f6b08652.scope.
Oct 13 15:48:25 standalone.localdomain podman[558010]: 2025-10-13 15:48:25.25193187 +0000 UTC m=+0.046528823 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:25 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:25 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21708baf337c7d06a8679d436d5c48f492945e40d53a7bd58f9f004a310116de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:25 standalone.localdomain podman[558010]: 2025-10-13 15:48:25.379044434 +0000 UTC m=+0.173641327 container init 59140a16f281ec7ae1c9469bd62f68b54579d1378b3d54dcad849772f6b08652 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:25 standalone.localdomain podman[558010]: 2025-10-13 15:48:25.384763993 +0000 UTC m=+0.179360906 container start 59140a16f281ec7ae1c9469bd62f68b54579d1378b3d54dcad849772f6b08652 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 13 15:48:25 standalone.localdomain dnsmasq[558028]: started, version 2.85 cachesize 150
Oct 13 15:48:25 standalone.localdomain dnsmasq[558028]: DNS service limited to local subnets
Oct 13 15:48:25 standalone.localdomain dnsmasq[558028]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:25 standalone.localdomain dnsmasq[558028]: warning: no upstream servers configured
Oct 13 15:48:25 standalone.localdomain dnsmasq[558028]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:25.447 496978 INFO neutron.agent.dhcp.agent [None req-abd043e6-053e-4120-aec8-bc71cc05edf7 - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:48:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:25.449 496978 INFO neutron.agent.dhcp.agent [None req-abd043e6-053e-4120-aec8-bc71cc05edf7 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:23Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e837c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e83130>], id=4ce3a592-7915-4713-b748-aeb23a378759, ip_allocation=immediate, mac_address=fa:16:3e:98:d8:0f, name=tempest-NetworksTestDHCPv6-119757466, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=38, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['5a57bb4c-4a6b-482c-a9ff-686951abcf17'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:48:20Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], standard_attr_id=2260, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:48:23Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:48:25 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:25.522 2 INFO neutron.agent.securitygroups_rpc [None req-6cdf6680-48f3-43ce-8e68-147572c29200 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:48:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:25.529 496978 INFO neutron.agent.dhcp.agent [None req-583caa37-90f4-4382-a83e-ca898130f6df - - - - - -] DHCP configuration for ports {'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:48:25 standalone.localdomain dnsmasq[558028]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 1 addresses
Oct 13 15:48:25 standalone.localdomain podman[558048]: 2025-10-13 15:48:25.673878271 +0000 UTC m=+0.072241145 container kill 59140a16f281ec7ae1c9469bd62f68b54579d1378b3d54dcad849772f6b08652 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:25.741 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:24Z, description=, device_id=cf49f818-1bcb-4c13-ba84-0496bf40f984, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e70f40>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f0a700>], id=cc4516cd-20dc-4642-a43e-7a6ae906f837, ip_allocation=immediate, mac_address=fa:16:3e:20:2e:67, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:12Z, description=, dns_domain=, id=36ceda34-93ec-4a19-8f02-93322043557c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-1521697551-network, port_security_enabled=True, project_id=aefab81a3f68448f93dd20e2d275ad53, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=3158, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2232, status=ACTIVE, subnets=['2ea5d153-74f2-4180-8fcf-89360605bf20'], tags=[], tenant_id=aefab81a3f68448f93dd20e2d275ad53, updated_at=2025-10-13T15:48:15Z, vlan_transparent=None, network_id=36ceda34-93ec-4a19-8f02-93322043557c, port_security_enabled=False, project_id=aefab81a3f68448f93dd20e2d275ad53, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2261, status=DOWN, tags=[], tenant_id=aefab81a3f68448f93dd20e2d275ad53, updated_at=2025-10-13T15:48:25Z on network 36ceda34-93ec-4a19-8f02-93322043557c
Oct 13 15:48:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4024: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:48:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:25.943 496978 INFO neutron.agent.dhcp.agent [None req-09bb279e-f6bd-4551-8444-eb51a33a446a - - - - - -] DHCP configuration for ports {'4ce3a592-7915-4713-b748-aeb23a378759'} is completed
Oct 13 15:48:25 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:48:25 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:48:25 standalone.localdomain podman[558094]: 2025-10-13 15:48:25.984182261 +0000 UTC m=+0.146624505 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:25 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:48:25 standalone.localdomain podman[558117]: 2025-10-13 15:48:25.990839698 +0000 UTC m=+0.092181977 container kill 151ac0e8cf50f0835cea73cc2a1d1d3c4d6c583f7211c9c737d8172bd15b97f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36ceda34-93ec-4a19-8f02-93322043557c, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:48:25 standalone.localdomain dnsmasq[557535]: read /var/lib/neutron/dhcp/36ceda34-93ec-4a19-8f02-93322043557c/addn_hosts - 1 addresses
Oct 13 15:48:25 standalone.localdomain dnsmasq-dhcp[557535]: read /var/lib/neutron/dhcp/36ceda34-93ec-4a19-8f02-93322043557c/host
Oct 13 15:48:25 standalone.localdomain dnsmasq-dhcp[557535]: read /var/lib/neutron/dhcp/36ceda34-93ec-4a19-8f02-93322043557c/opts
Oct 13 15:48:26 standalone.localdomain dnsmasq[558028]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:26 standalone.localdomain podman[558141]: 2025-10-13 15:48:26.039581399 +0000 UTC m=+0.086332914 container kill 59140a16f281ec7ae1c9469bd62f68b54579d1378b3d54dcad849772f6b08652 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:48:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:26.419 496978 INFO neutron.agent.dhcp.agent [None req-3e10951b-51bb-4f5c-bb35-b70e4b9c252b - - - - - -] DHCP configuration for ports {'cc4516cd-20dc-4642-a43e-7a6ae906f837'} is completed
Oct 13 15:48:26 standalone.localdomain dnsmasq[558028]: exiting on receipt of SIGTERM
Oct 13 15:48:26 standalone.localdomain podman[558201]: 2025-10-13 15:48:26.591590178 +0000 UTC m=+0.068229350 container kill 59140a16f281ec7ae1c9469bd62f68b54579d1378b3d54dcad849772f6b08652 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:48:26 standalone.localdomain systemd[1]: libpod-59140a16f281ec7ae1c9469bd62f68b54579d1378b3d54dcad849772f6b08652.scope: Deactivated successfully.
Oct 13 15:48:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27063 DF PROTO=TCP SPT=43468 DPT=9102 SEQ=2123169327 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD319360000000001030307) 
Oct 13 15:48:26 standalone.localdomain podman[558222]: 2025-10-13 15:48:26.673277326 +0000 UTC m=+0.057822545 container died 59140a16f281ec7ae1c9469bd62f68b54579d1378b3d54dcad849772f6b08652 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59140a16f281ec7ae1c9469bd62f68b54579d1378b3d54dcad849772f6b08652-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:26 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-21708baf337c7d06a8679d436d5c48f492945e40d53a7bd58f9f004a310116de-merged.mount: Deactivated successfully.
Oct 13 15:48:26 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:48:26 standalone.localdomain podman[558222]: 2025-10-13 15:48:26.726859577 +0000 UTC m=+0.111404706 container remove 59140a16f281ec7ae1c9469bd62f68b54579d1378b3d54dcad849772f6b08652 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:48:26 standalone.localdomain kernel: device tap66ba1a63-52 left promiscuous mode
Oct 13 15:48:26 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:26.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:26 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:26Z|00695|binding|INFO|Releasing lport 66ba1a63-5247-47d8-8431-b6ea9f4c0710 from this chassis (sb_readonly=0)
Oct 13 15:48:26 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:26Z|00696|binding|INFO|Setting lport 66ba1a63-5247-47d8-8431-b6ea9f4c0710 down in Southbound
Oct 13 15:48:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:26.746 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe09:5584/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=66ba1a63-5247-47d8-8431-b6ea9f4c0710) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:26.749 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 66ba1a63-5247-47d8-8431-b6ea9f4c0710 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:48:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:26.754 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:26 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:26.755 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[4f90dad3-4b4c-476b-b1ad-0984ba21bc36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:26 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:26.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:26 standalone.localdomain systemd[1]: libpod-conmon-59140a16f281ec7ae1c9469bd62f68b54579d1378b3d54dcad849772f6b08652.scope: Deactivated successfully.
Oct 13 15:48:26 standalone.localdomain podman[558242]: 2025-10-13 15:48:26.800570736 +0000 UTC m=+0.082115742 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Oct 13 15:48:26 standalone.localdomain podman[558242]: 2025-10-13 15:48:26.829459488 +0000 UTC m=+0.111004474 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 13 15:48:26 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:48:26 standalone.localdomain ceph-mon[29756]: pgmap v4024: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:48:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:26.945 496978 INFO neutron.agent.dhcp.agent [None req-8e400524-eb75-4931-a0db-e54c0992b82f - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:48:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:26.946 496978 INFO neutron.agent.dhcp.agent [None req-8e400524-eb75-4931-a0db-e54c0992b82f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:27 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:27.205 2 INFO neutron.agent.securitygroups_rpc [None req-b2d7fff2-7b4c-4ff1-906e-80a43092f406 4cc81e303b2c4d76a0e66c6d18325f62 4c6b9765b67a41dca7f29a394a2cb7a6 - - default default] Security group rule updated ['7e1b98e6-7731-40a8-8211-7ec6223effb1']
Oct 13 15:48:27 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:27.257 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:22Z, description=, device_id=da6bd250-b26b-4200-8dd4-d383b9b96aa0, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890e5bb0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890e5550>], id=014e9c15-0d95-4fd8-966e-3d6aa50edc70, ip_allocation=immediate, mac_address=fa:16:3e:38:35:18, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:15Z, description=, dns_domain=, id=2d0e9665-b447-40a6-9fa6-412d077bfcc1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-420152032, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23130, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2245, status=ACTIVE, subnets=['b571a4ae-90f8-4e28-b4e5-c1109a08e8da'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:18Z, vlan_transparent=None, network_id=2d0e9665-b447-40a6-9fa6-412d077bfcc1, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2258, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:23Z on network 2d0e9665-b447-40a6-9fa6-412d077bfcc1
Oct 13 15:48:27 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:27.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:27 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:48:27 standalone.localdomain podman[558280]: 2025-10-13 15:48:27.358692797 +0000 UTC m=+0.071263904 container kill cb85ff222a69a0cd6ef39bf07719a1e3d0d4648e9ef91b52ae95a35d5a68849f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bce779c4-d902-46c0-836b-06ef68090247, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:48:27 standalone.localdomain dnsmasq[557141]: read /var/lib/neutron/dhcp/bce779c4-d902-46c0-836b-06ef68090247/addn_hosts - 0 addresses
Oct 13 15:48:27 standalone.localdomain dnsmasq-dhcp[557141]: read /var/lib/neutron/dhcp/bce779c4-d902-46c0-836b-06ef68090247/host
Oct 13 15:48:27 standalone.localdomain dnsmasq-dhcp[557141]: read /var/lib/neutron/dhcp/bce779c4-d902-46c0-836b-06ef68090247/opts
Oct 13 15:48:27 standalone.localdomain dnsmasq[557847]: read /var/lib/neutron/dhcp/2d0e9665-b447-40a6-9fa6-412d077bfcc1/addn_hosts - 1 addresses
Oct 13 15:48:27 standalone.localdomain dnsmasq-dhcp[557847]: read /var/lib/neutron/dhcp/2d0e9665-b447-40a6-9fa6-412d077bfcc1/host
Oct 13 15:48:27 standalone.localdomain dnsmasq-dhcp[557847]: read /var/lib/neutron/dhcp/2d0e9665-b447-40a6-9fa6-412d077bfcc1/opts
Oct 13 15:48:27 standalone.localdomain podman[558313]: 2025-10-13 15:48:27.493606895 +0000 UTC m=+0.060959412 container kill 700686dfb0855ba201ecd4d23a8eab0d3bb14da21685374b3260a05026ee07c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d0e9665-b447-40a6-9fa6-412d077bfcc1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:48:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:48:27 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:27.760 496978 INFO neutron.agent.dhcp.agent [None req-97abacb3-6fee-4214-9f9d-4b900ce7e4ab - - - - - -] DHCP configuration for ports {'014e9c15-0d95-4fd8-966e-3d6aa50edc70'} is completed
Oct 13 15:48:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4025: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:48:27 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:27.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:27 standalone.localdomain kernel: device tap86faa77f-d3 left promiscuous mode
Oct 13 15:48:27 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:27Z|00697|binding|INFO|Releasing lport 86faa77f-d3ab-481c-867d-a19dc673e00a from this chassis (sb_readonly=0)
Oct 13 15:48:27 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:27Z|00698|binding|INFO|Setting lport 86faa77f-d3ab-481c-867d-a19dc673e00a down in Southbound
Oct 13 15:48:27 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:27.810 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:3::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-bce779c4-d902-46c0-836b-06ef68090247', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce779c4-d902-46c0-836b-06ef68090247', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=816dfbaa-c3f8-4b08-ac30-5d1518cccb61, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=86faa77f-d3ab-481c-867d-a19dc673e00a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:27 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:27.812 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 86faa77f-d3ab-481c-867d-a19dc673e00a in datapath bce779c4-d902-46c0-836b-06ef68090247 unbound from our chassis
Oct 13 15:48:27 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:27.814 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bce779c4-d902-46c0-836b-06ef68090247 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:27 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:27.815 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[239721be-ab42-40cc-aafe-fe2983b5a3ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:27 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:27.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:28.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:28.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:28.601 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:27Z, description=, device_id=bf08c355-143a-4c8d-a06c-e65431a5d250, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f32880>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f32ac0>], id=9bbbacf7-9093-4a5b-8ffc-e4f133783013, ip_allocation=immediate, mac_address=fa:16:3e:d1:c3:7a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2273, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:48:27Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:48:28 standalone.localdomain ceph-mon[29756]: pgmap v4025: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:48:28 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:48:28 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:48:28 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:48:28 standalone.localdomain podman[558361]: 2025-10-13 15:48:28.835258426 +0000 UTC m=+0.061340125 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:48:29 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:29.106 496978 INFO neutron.agent.dhcp.agent [None req-eb6cef9e-615d-4256-8a6d-5f95826ca458 - - - - - -] DHCP configuration for ports {'9bbbacf7-9093-4a5b-8ffc-e4f133783013'} is completed
Oct 13 15:48:29 standalone.localdomain podman[558399]: 2025-10-13 15:48:29.224144786 +0000 UTC m=+0.066214926 container kill cb85ff222a69a0cd6ef39bf07719a1e3d0d4648e9ef91b52ae95a35d5a68849f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bce779c4-d902-46c0-836b-06ef68090247, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:29 standalone.localdomain dnsmasq[557141]: exiting on receipt of SIGTERM
Oct 13 15:48:29 standalone.localdomain systemd[1]: libpod-cb85ff222a69a0cd6ef39bf07719a1e3d0d4648e9ef91b52ae95a35d5a68849f.scope: Deactivated successfully.
Oct 13 15:48:29 standalone.localdomain podman[558413]: 2025-10-13 15:48:29.294937205 +0000 UTC m=+0.056055170 container died cb85ff222a69a0cd6ef39bf07719a1e3d0d4648e9ef91b52ae95a35d5a68849f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bce779c4-d902-46c0-836b-06ef68090247, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 15:48:29 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:29.304 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:24Z, description=, device_id=cf49f818-1bcb-4c13-ba84-0496bf40f984, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188920a070>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f11070>], id=cc4516cd-20dc-4642-a43e-7a6ae906f837, ip_allocation=immediate, mac_address=fa:16:3e:20:2e:67, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:12Z, description=, dns_domain=, id=36ceda34-93ec-4a19-8f02-93322043557c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-1521697551-network, port_security_enabled=True, project_id=aefab81a3f68448f93dd20e2d275ad53, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=3158, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2232, status=ACTIVE, subnets=['2ea5d153-74f2-4180-8fcf-89360605bf20'], tags=[], tenant_id=aefab81a3f68448f93dd20e2d275ad53, updated_at=2025-10-13T15:48:15Z, vlan_transparent=None, network_id=36ceda34-93ec-4a19-8f02-93322043557c, port_security_enabled=False, project_id=aefab81a3f68448f93dd20e2d275ad53, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2261, status=DOWN, tags=[], tenant_id=aefab81a3f68448f93dd20e2d275ad53, updated_at=2025-10-13T15:48:25Z on network 36ceda34-93ec-4a19-8f02-93322043557c
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006647822445561139 of space, bias 1.0, pg target 0.6647822445561139 quantized to 32 (current 32)
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.012942315745393635 of space, bias 1.0, pg target 1.2942315745393635 quantized to 32 (current 32)
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.29775832286432163 quantized to 32 (current 32)
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 13 15:48:29 standalone.localdomain podman[558413]: 2025-10-13 15:48:29.375965782 +0000 UTC m=+0.137083707 container cleanup cb85ff222a69a0cd6ef39bf07719a1e3d0d4648e9ef91b52ae95a35d5a68849f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bce779c4-d902-46c0-836b-06ef68090247, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:48:29 standalone.localdomain systemd[1]: libpod-conmon-cb85ff222a69a0cd6ef39bf07719a1e3d0d4648e9ef91b52ae95a35d5a68849f.scope: Deactivated successfully.
Oct 13 15:48:29 standalone.localdomain podman[558415]: 2025-10-13 15:48:29.402867861 +0000 UTC m=+0.153120887 container remove cb85ff222a69a0cd6ef39bf07719a1e3d0d4648e9ef91b52ae95a35d5a68849f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bce779c4-d902-46c0-836b-06ef68090247, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:48:29 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:29.428 496978 INFO neutron.agent.dhcp.agent [None req-62ac87ac-f800-4e0e-9807-7775f08c1cd6 - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:48:29 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:29.429 496978 INFO neutron.agent.dhcp.agent [None req-62ac87ac-f800-4e0e-9807-7775f08c1cd6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:29 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:29.429 496978 INFO neutron.agent.dhcp.agent [None req-62ac87ac-f800-4e0e-9807-7775f08c1cd6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:29 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:29Z|00699|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:48:29 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:29.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:29 standalone.localdomain dnsmasq[557535]: read /var/lib/neutron/dhcp/36ceda34-93ec-4a19-8f02-93322043557c/addn_hosts - 1 addresses
Oct 13 15:48:29 standalone.localdomain dnsmasq-dhcp[557535]: read /var/lib/neutron/dhcp/36ceda34-93ec-4a19-8f02-93322043557c/host
Oct 13 15:48:29 standalone.localdomain podman[558460]: 2025-10-13 15:48:29.557691151 +0000 UTC m=+0.064270136 container kill 151ac0e8cf50f0835cea73cc2a1d1d3c4d6c583f7211c9c737d8172bd15b97f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36ceda34-93ec-4a19-8f02-93322043557c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:48:29 standalone.localdomain dnsmasq-dhcp[557535]: read /var/lib/neutron/dhcp/36ceda34-93ec-4a19-8f02-93322043557c/opts
Oct 13 15:48:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4026: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:48:29 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:29.817 496978 INFO neutron.agent.linux.ip_lib [None req-e62291c9-c6b3-490a-a10c-09885b5c0ea3 - - - - - -] Device tap3b0e3bc5-be cannot be used as it has no MAC address
Oct 13 15:48:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-48ece9f32e9639ad3fa62cb229d9d90021466bfc7ea141b855af800291bc1346-merged.mount: Deactivated successfully.
Oct 13 15:48:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cb85ff222a69a0cd6ef39bf07719a1e3d0d4648e9ef91b52ae95a35d5a68849f-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:29 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dbce779c4\x2dd902\x2d46c0\x2d836b\x2d06ef68090247.mount: Deactivated successfully.
Oct 13 15:48:29 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:29.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:29 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:29.888 496978 INFO neutron.agent.dhcp.agent [None req-71f2b816-d458-4921-89ab-39fb70d70505 - - - - - -] DHCP configuration for ports {'cc4516cd-20dc-4642-a43e-7a6ae906f837'} is completed
Oct 13 15:48:29 standalone.localdomain kernel: device tap3b0e3bc5-be entered promiscuous mode
Oct 13 15:48:29 standalone.localdomain NetworkManager[5962]: <info>  [1760370509.8931] manager: (tap3b0e3bc5-be): new Generic device (/org/freedesktop/NetworkManager/Devices/122)
Oct 13 15:48:29 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:29.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:29 standalone.localdomain systemd-udevd[558492]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:48:29 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:29Z|00700|binding|INFO|Claiming lport 3b0e3bc5-be7a-45cb-b587-2249a159b1a1 for this chassis.
Oct 13 15:48:29 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:29Z|00701|binding|INFO|3b0e3bc5-be7a-45cb-b587-2249a159b1a1: Claiming unknown
Oct 13 15:48:29 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:29.911 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe94:9c43/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=3b0e3bc5-be7a-45cb-b587-2249a159b1a1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:29 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:29.913 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 3b0e3bc5-be7a-45cb-b587-2249a159b1a1 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:48:29 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:29.918 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 6c03a667-bd91-4c75-bea1-f2dbe7992e6c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:48:29 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:29.918 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:29 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:29.919 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[bd0f6d54-94c9-4cf1-9949-9d1092fb4a53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:29 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3b0e3bc5-be: No such device
Oct 13 15:48:29 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3b0e3bc5-be: No such device
Oct 13 15:48:29 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:29Z|00702|binding|INFO|Setting lport 3b0e3bc5-be7a-45cb-b587-2249a159b1a1 ovn-installed in OVS
Oct 13 15:48:29 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:29Z|00703|binding|INFO|Setting lport 3b0e3bc5-be7a-45cb-b587-2249a159b1a1 up in Southbound
Oct 13 15:48:29 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:29.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:29 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3b0e3bc5-be: No such device
Oct 13 15:48:29 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3b0e3bc5-be: No such device
Oct 13 15:48:29 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3b0e3bc5-be: No such device
Oct 13 15:48:29 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3b0e3bc5-be: No such device
Oct 13 15:48:29 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3b0e3bc5-be: No such device
Oct 13 15:48:29 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3b0e3bc5-be: No such device
Oct 13 15:48:29 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:29.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27064 DF PROTO=TCP SPT=43468 DPT=9102 SEQ=2123169327 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD328F60000000001030307) 
Oct 13 15:48:30 standalone.localdomain ceph-mon[29756]: pgmap v4026: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:48:30 standalone.localdomain podman[558563]: 
Oct 13 15:48:30 standalone.localdomain podman[558563]: 2025-10-13 15:48:30.934329203 +0000 UTC m=+0.096970416 container create 2d4eb58e1a516d494cfe8d4afee9d5083b0345adc2c743de7bdd963d1ef015c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:30 standalone.localdomain systemd[1]: Started libpod-conmon-2d4eb58e1a516d494cfe8d4afee9d5083b0345adc2c743de7bdd963d1ef015c6.scope.
Oct 13 15:48:30 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:30 standalone.localdomain podman[558563]: 2025-10-13 15:48:30.888001957 +0000 UTC m=+0.050643250 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:30 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6851567f084f5ff2a7241a7f7931a33ee59a2ba2ba8a94dd5e8c734584287fc8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:31 standalone.localdomain podman[558563]: 2025-10-13 15:48:31.000538058 +0000 UTC m=+0.163179301 container init 2d4eb58e1a516d494cfe8d4afee9d5083b0345adc2c743de7bdd963d1ef015c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:48:31 standalone.localdomain podman[558563]: 2025-10-13 15:48:31.010293252 +0000 UTC m=+0.172934495 container start 2d4eb58e1a516d494cfe8d4afee9d5083b0345adc2c743de7bdd963d1ef015c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:48:31 standalone.localdomain dnsmasq[558581]: started, version 2.85 cachesize 150
Oct 13 15:48:31 standalone.localdomain dnsmasq[558581]: DNS service limited to local subnets
Oct 13 15:48:31 standalone.localdomain dnsmasq[558581]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:31 standalone.localdomain dnsmasq[558581]: warning: no upstream servers configured
Oct 13 15:48:31 standalone.localdomain dnsmasq-dhcp[558581]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:48:31 standalone.localdomain dnsmasq[558581]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:31 standalone.localdomain dnsmasq-dhcp[558581]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:31 standalone.localdomain dnsmasq-dhcp[558581]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:31 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:31.070 496978 INFO neutron.agent.dhcp.agent [None req-e62291c9-c6b3-490a-a10c-09885b5c0ea3 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:48:31 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:31.143 496978 INFO neutron.agent.dhcp.agent [None req-4d365118-f723-4b65-9644-70f1b5bc2b57 - - - - - -] DHCP configuration for ports {'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:48:31 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:31.223 2 INFO neutron.agent.securitygroups_rpc [None req-65ec5cfd-154c-47f5-ba02-2fee6cd64f3f db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:48:31 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:31.339 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:30Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890ae970>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890ae8e0>], id=f843c9e5-c83f-4f2d-a2af-8928dc7df989, ip_allocation=immediate, mac_address=fa:16:3e:66:a7:6e, name=tempest-NetworksTestDHCPv6-253753832, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=40, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['bf74469d-e123-4611-9250-8dbeae1391af'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:48:26Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], standard_attr_id=2281, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:48:30Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:48:31 standalone.localdomain dnsmasq[558581]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 1 addresses
Oct 13 15:48:31 standalone.localdomain dnsmasq-dhcp[558581]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:31 standalone.localdomain dnsmasq-dhcp[558581]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:31 standalone.localdomain podman[558600]: 2025-10-13 15:48:31.553962651 +0000 UTC m=+0.057764812 container kill 2d4eb58e1a516d494cfe8d4afee9d5083b0345adc2c743de7bdd963d1ef015c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:31 standalone.localdomain dnsmasq[556835]: read /var/lib/neutron/dhcp/2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf/addn_hosts - 0 addresses
Oct 13 15:48:31 standalone.localdomain dnsmasq-dhcp[556835]: read /var/lib/neutron/dhcp/2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf/host
Oct 13 15:48:31 standalone.localdomain podman[558636]: 2025-10-13 15:48:31.753574327 +0000 UTC m=+0.067644251 container kill 46ea2c8118b0b8a01a9e89582401de0566e2676c8402d1c33a6a1f21b3e15aef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 15:48:31 standalone.localdomain dnsmasq-dhcp[556835]: read /var/lib/neutron/dhcp/2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf/opts
Oct 13 15:48:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4027: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:48:31 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:31.815 496978 INFO neutron.agent.dhcp.agent [None req-c2833dd0-c36d-4b84-8637-314f3af9d721 - - - - - -] DHCP configuration for ports {'f843c9e5-c83f-4f2d-a2af-8928dc7df989'} is completed
Oct 13 15:48:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:32.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:32.652 496978 INFO neutron.agent.linux.ip_lib [None req-06484e9d-95e6-4acd-8a5b-df566a6b8762 - - - - - -] Device tapb9ec2641-69 cannot be used as it has no MAC address
Oct 13 15:48:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:32.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:32 standalone.localdomain kernel: device tapb9ec2641-69 entered promiscuous mode
Oct 13 15:48:32 standalone.localdomain NetworkManager[5962]: <info>  [1760370512.6814] manager: (tapb9ec2641-69): new Generic device (/org/freedesktop/NetworkManager/Devices/123)
Oct 13 15:48:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:32.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:32Z|00704|binding|INFO|Claiming lport b9ec2641-69a2-484a-98e9-00e31e258d56 for this chassis.
Oct 13 15:48:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:32Z|00705|binding|INFO|b9ec2641-69a2-484a-98e9-00e31e258d56: Claiming unknown
Oct 13 15:48:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:32Z|00706|binding|INFO|Setting lport b9ec2641-69a2-484a-98e9-00e31e258d56 ovn-installed in OVS
Oct 13 15:48:32 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:32Z|00707|binding|INFO|Setting lport b9ec2641-69a2-484a-98e9-00e31e258d56 up in Southbound
Oct 13 15:48:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:32.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:32.699 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-bfc42da9-a81e-4c8a-9c74-bef94743e411', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfc42da9-a81e-4c8a-9c74-bef94743e411', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=613b0626-ce44-4ccf-af6a-635d57bebef1, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=b9ec2641-69a2-484a-98e9-00e31e258d56) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:32.701 378821 INFO neutron.agent.ovn.metadata.agent [-] Port b9ec2641-69a2-484a-98e9-00e31e258d56 in datapath bfc42da9-a81e-4c8a-9c74-bef94743e411 bound to our chassis
Oct 13 15:48:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:32.704 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bfc42da9-a81e-4c8a-9c74-bef94743e411 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:32 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:32.705 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[f27d81aa-6cc9-4722-96d1-af3b111283c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:32.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:48:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:32.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:32.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:32 standalone.localdomain ceph-mon[29756]: pgmap v4027: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:48:33 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:33.135 2 INFO neutron.agent.securitygroups_rpc [None req-d3d5a5a5-0e8e-407f-9073-769449aef933 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:48:33 standalone.localdomain systemd[1]: tmp-crun.foRh4N.mount: Deactivated successfully.
Oct 13 15:48:33 standalone.localdomain dnsmasq[558581]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:33 standalone.localdomain dnsmasq-dhcp[558581]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:33 standalone.localdomain podman[558720]: 2025-10-13 15:48:33.374593243 +0000 UTC m=+0.054531982 container kill 2d4eb58e1a516d494cfe8d4afee9d5083b0345adc2c743de7bdd963d1ef015c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:33 standalone.localdomain dnsmasq-dhcp[558581]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:33.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:33 standalone.localdomain podman[558788]: 2025-10-13 15:48:33.671390241 +0000 UTC m=+0.070260002 container kill 46ea2c8118b0b8a01a9e89582401de0566e2676c8402d1c33a6a1f21b3e15aef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:48:33 standalone.localdomain dnsmasq[556835]: exiting on receipt of SIGTERM
Oct 13 15:48:33 standalone.localdomain systemd[1]: libpod-46ea2c8118b0b8a01a9e89582401de0566e2676c8402d1c33a6a1f21b3e15aef.scope: Deactivated successfully.
Oct 13 15:48:33 standalone.localdomain podman[558775]: 
Oct 13 15:48:33 standalone.localdomain podman[558775]: 2025-10-13 15:48:33.696641009 +0000 UTC m=+0.123070350 container create 66c4210ce927c7799e9ec2fee5a19f099e3259e7718a9f929f8841ca0ab436f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfc42da9-a81e-4c8a-9c74-bef94743e411, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:33 standalone.localdomain podman[558775]: 2025-10-13 15:48:33.606102795 +0000 UTC m=+0.032532146 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:33 standalone.localdomain podman[558803]: 2025-10-13 15:48:33.733114877 +0000 UTC m=+0.046708969 container died 46ea2c8118b0b8a01a9e89582401de0566e2676c8402d1c33a6a1f21b3e15aef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:48:33 standalone.localdomain systemd[1]: Started libpod-conmon-66c4210ce927c7799e9ec2fee5a19f099e3259e7718a9f929f8841ca0ab436f3.scope.
Oct 13 15:48:33 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:33 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbee31f0984946c6a615f6017a59b1d9dd6e9c409e7575c7954c553ebed20f28/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4028: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:48:33 standalone.localdomain podman[558803]: 2025-10-13 15:48:33.810846392 +0000 UTC m=+0.124440464 container cleanup 46ea2c8118b0b8a01a9e89582401de0566e2676c8402d1c33a6a1f21b3e15aef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:33 standalone.localdomain systemd[1]: libpod-conmon-46ea2c8118b0b8a01a9e89582401de0566e2676c8402d1c33a6a1f21b3e15aef.scope: Deactivated successfully.
Oct 13 15:48:33 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:33Z|00708|binding|INFO|Removing iface tapbb990f0f-22 ovn-installed in OVS
Oct 13 15:48:33 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:33Z|00709|binding|INFO|Removing lport bb990f0f-221d-4c30-954e-02ad60858a8b ovn-installed in OVS
Oct 13 15:48:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:33.829 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port d8f94541-2ae7-4d7d-8816-83ebe1aef78f with type ""
Oct 13 15:48:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:33.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:33.832 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:2::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=900de971-b2cc-44cf-80c7-61a04318cca1, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=bb990f0f-221d-4c30-954e-02ad60858a8b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:33.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:33.834 378821 INFO neutron.agent.ovn.metadata.agent [-] Port bb990f0f-221d-4c30-954e-02ad60858a8b in datapath 2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf unbound from our chassis
Oct 13 15:48:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:33.835 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:33 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:33.836 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[09b88ce2-5487-4fe2-8003-718347d03d63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:33 standalone.localdomain podman[558811]: 2025-10-13 15:48:33.879986478 +0000 UTC m=+0.183335460 container remove 46ea2c8118b0b8a01a9e89582401de0566e2676c8402d1c33a6a1f21b3e15aef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dc0a88e-a6bc-4bb9-8207-a59eaff08fdf, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:48:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:33.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:33 standalone.localdomain kernel: device tapbb990f0f-22 left promiscuous mode
Oct 13 15:48:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:33.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:33 standalone.localdomain podman[558775]: 2025-10-13 15:48:33.909925382 +0000 UTC m=+0.336354713 container init 66c4210ce927c7799e9ec2fee5a19f099e3259e7718a9f929f8841ca0ab436f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfc42da9-a81e-4c8a-9c74-bef94743e411, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:48:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:33.914 496978 INFO neutron.agent.dhcp.agent [None req-2be0e9b4-5c36-46dc-9871-bfb31cb62c0c - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:48:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:33.915 496978 INFO neutron.agent.dhcp.agent [None req-2be0e9b4-5c36-46dc-9871-bfb31cb62c0c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:33 standalone.localdomain dnsmasq[558841]: started, version 2.85 cachesize 150
Oct 13 15:48:33 standalone.localdomain dnsmasq[558841]: DNS service limited to local subnets
Oct 13 15:48:33 standalone.localdomain dnsmasq[558841]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:33 standalone.localdomain dnsmasq[558841]: warning: no upstream servers configured
Oct 13 15:48:33 standalone.localdomain dnsmasq-dhcp[558841]: DHCP, static leases only on 10.101.0.0, lease time 1d
Oct 13 15:48:33 standalone.localdomain dnsmasq[558841]: read /var/lib/neutron/dhcp/bfc42da9-a81e-4c8a-9c74-bef94743e411/addn_hosts - 0 addresses
Oct 13 15:48:33 standalone.localdomain dnsmasq-dhcp[558841]: read /var/lib/neutron/dhcp/bfc42da9-a81e-4c8a-9c74-bef94743e411/host
Oct 13 15:48:33 standalone.localdomain dnsmasq-dhcp[558841]: read /var/lib/neutron/dhcp/bfc42da9-a81e-4c8a-9c74-bef94743e411/opts
Oct 13 15:48:33 standalone.localdomain podman[558775]: 2025-10-13 15:48:33.931055831 +0000 UTC m=+0.357485162 container start 66c4210ce927c7799e9ec2fee5a19f099e3259e7718a9f929f8841ca0ab436f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfc42da9-a81e-4c8a-9c74-bef94743e411, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 13 15:48:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:33.976 496978 INFO neutron.agent.dhcp.agent [None req-81fe2925-bf7c-46e6-bad8-1ffc9b582fa7 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:48:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:34.061 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:34.072 496978 INFO neutron.agent.dhcp.agent [None req-e838bb22-e8b3-45bc-aed0-b12e7a534a96 - - - - - -] DHCP configuration for ports {'d6cb4027-6f23-4799-89a9-162249473be2'} is completed
Oct 13 15:48:34 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:34Z|00710|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:48:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:34.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:34 standalone.localdomain podman[558859]: 2025-10-13 15:48:34.358904987 +0000 UTC m=+0.051001702 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:48:34 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:48:34 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:48:34 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:48:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-311c9ce671e0a8fbe3cd245820e0bbd620dfcfa75cf9e781be97fa13f862baf5-merged.mount: Deactivated successfully.
Oct 13 15:48:34 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46ea2c8118b0b8a01a9e89582401de0566e2676c8402d1c33a6a1f21b3e15aef-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:34 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d2dc0a88e\x2da6bc\x2d4bb9\x2d8207\x2da59eaff08fdf.mount: Deactivated successfully.
Oct 13 15:48:34 standalone.localdomain dnsmasq[558581]: exiting on receipt of SIGTERM
Oct 13 15:48:34 standalone.localdomain podman[558895]: 2025-10-13 15:48:34.562714514 +0000 UTC m=+0.065594556 container kill 2d4eb58e1a516d494cfe8d4afee9d5083b0345adc2c743de7bdd963d1ef015c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:34 standalone.localdomain systemd[1]: libpod-2d4eb58e1a516d494cfe8d4afee9d5083b0345adc2c743de7bdd963d1ef015c6.scope: Deactivated successfully.
Oct 13 15:48:34 standalone.localdomain podman[558910]: 2025-10-13 15:48:34.646677704 +0000 UTC m=+0.066548268 container died 2d4eb58e1a516d494cfe8d4afee9d5083b0345adc2c743de7bdd963d1ef015c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:34 standalone.localdomain systemd[1]: tmp-crun.6eMknm.mount: Deactivated successfully.
Oct 13 15:48:34 standalone.localdomain podman[558910]: 2025-10-13 15:48:34.688633222 +0000 UTC m=+0.108503756 container cleanup 2d4eb58e1a516d494cfe8d4afee9d5083b0345adc2c743de7bdd963d1ef015c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:34 standalone.localdomain systemd[1]: libpod-conmon-2d4eb58e1a516d494cfe8d4afee9d5083b0345adc2c743de7bdd963d1ef015c6.scope: Deactivated successfully.
Oct 13 15:48:34 standalone.localdomain podman[558919]: 2025-10-13 15:48:34.778598979 +0000 UTC m=+0.185763496 container remove 2d4eb58e1a516d494cfe8d4afee9d5083b0345adc2c743de7bdd963d1ef015c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:34 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:34Z|00711|binding|INFO|Releasing lport 3b0e3bc5-be7a-45cb-b587-2249a159b1a1 from this chassis (sb_readonly=0)
Oct 13 15:48:34 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:34Z|00712|binding|INFO|Setting lport 3b0e3bc5-be7a-45cb-b587-2249a159b1a1 down in Southbound
Oct 13 15:48:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:34.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:34 standalone.localdomain kernel: device tap3b0e3bc5-be left promiscuous mode
Oct 13 15:48:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:34.826 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe94:9c43/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=3b0e3bc5-be7a-45cb-b587-2249a159b1a1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:34.828 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 3b0e3bc5-be7a-45cb-b587-2249a159b1a1 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:48:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:34.832 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:34.833 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec68f60-dc2e-4bd8-8ad1-72ba5740ed57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:34.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:34 standalone.localdomain ceph-mon[29756]: pgmap v4028: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:48:35 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:35.029 496978 INFO neutron.agent.dhcp.agent [None req-b619a88f-ff66-4a5c-a3d9-24bebefa10e9 - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:48:35 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:35.029 496978 INFO neutron.agent.dhcp.agent [None req-b619a88f-ff66-4a5c-a3d9-24bebefa10e9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6851567f084f5ff2a7241a7f7931a33ee59a2ba2ba8a94dd5e8c734584287fc8-merged.mount: Deactivated successfully.
Oct 13 15:48:35 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2d4eb58e1a516d494cfe8d4afee9d5083b0345adc2c743de7bdd963d1ef015c6-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:35 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:48:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4029: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:48:36 standalone.localdomain dnsmasq[555777]: read /var/lib/neutron/dhcp/a216632f-cda4-4672-88f3-7cc11177318c/addn_hosts - 0 addresses
Oct 13 15:48:36 standalone.localdomain dnsmasq-dhcp[555777]: read /var/lib/neutron/dhcp/a216632f-cda4-4672-88f3-7cc11177318c/host
Oct 13 15:48:36 standalone.localdomain dnsmasq-dhcp[555777]: read /var/lib/neutron/dhcp/a216632f-cda4-4672-88f3-7cc11177318c/opts
Oct 13 15:48:36 standalone.localdomain podman[558960]: 2025-10-13 15:48:36.516645465 +0000 UTC m=+0.073923308 container kill 1975218bbc6077511a516e09cd208a65ff7ec1d241e8cb8ae3f82624a9128c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a216632f-cda4-4672-88f3-7cc11177318c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:48:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:36Z|00713|binding|INFO|Releasing lport 4c6ff4b2-bad6-4303-8aaf-c85431968519 from this chassis (sb_readonly=0)
Oct 13 15:48:36 standalone.localdomain kernel: device tap4c6ff4b2-ba left promiscuous mode
Oct 13 15:48:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:36Z|00714|binding|INFO|Setting lport 4c6ff4b2-bad6-4303-8aaf-c85431968519 down in Southbound
Oct 13 15:48:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:36.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:36.720 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-a216632f-cda4-4672-88f3-7cc11177318c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a216632f-cda4-4672-88f3-7cc11177318c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ce5d7dd-0f62-46d4-aa82-c2711697a291, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=4c6ff4b2-bad6-4303-8aaf-c85431968519) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:36.722 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 4c6ff4b2-bad6-4303-8aaf-c85431968519 in datapath a216632f-cda4-4672-88f3-7cc11177318c unbound from our chassis
Oct 13 15:48:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:36.725 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a216632f-cda4-4672-88f3-7cc11177318c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:36.726 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[a0c7f211-be04-400e-b60e-3a969ab1b980]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:36.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:36 standalone.localdomain ceph-mon[29756]: pgmap v4029: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:48:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:37.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:37.554 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:36Z, description=, device_id=da6bd250-b26b-4200-8dd4-d383b9b96aa0, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f23af0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889106820>], id=e0e07217-8899-4e2a-8606-44178de9710a, ip_allocation=immediate, mac_address=fa:16:3e:70:a1:72, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:28Z, description=, dns_domain=, id=bfc42da9-a81e-4c8a-9c74-bef94743e411, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1691488240, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=26112, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2274, status=ACTIVE, subnets=['a20fb82d-97cd-48e9-8c19-5400e024a448'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:31Z, vlan_transparent=None, network_id=bfc42da9-a81e-4c8a-9c74-bef94743e411, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2304, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:37Z on network bfc42da9-a81e-4c8a-9c74-bef94743e411
Oct 13 15:48:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:48:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4030: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:37 standalone.localdomain podman[559000]: 2025-10-13 15:48:37.794068109 +0000 UTC m=+0.054397331 container kill 66c4210ce927c7799e9ec2fee5a19f099e3259e7718a9f929f8841ca0ab436f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfc42da9-a81e-4c8a-9c74-bef94743e411, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:48:37 standalone.localdomain dnsmasq[558841]: read /var/lib/neutron/dhcp/bfc42da9-a81e-4c8a-9c74-bef94743e411/addn_hosts - 1 addresses
Oct 13 15:48:37 standalone.localdomain dnsmasq-dhcp[558841]: read /var/lib/neutron/dhcp/bfc42da9-a81e-4c8a-9c74-bef94743e411/host
Oct 13 15:48:37 standalone.localdomain dnsmasq-dhcp[558841]: read /var/lib/neutron/dhcp/bfc42da9-a81e-4c8a-9c74-bef94743e411/opts
Oct 13 15:48:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:38.101 496978 INFO neutron.agent.dhcp.agent [None req-6a2c73cb-d2a2-4439-a2fd-e25350803844 - - - - - -] DHCP configuration for ports {'e0e07217-8899-4e2a-8606-44178de9710a'} is completed
Oct 13 15:48:38 standalone.localdomain dnsmasq[555777]: exiting on receipt of SIGTERM
Oct 13 15:48:38 standalone.localdomain podman[559040]: 2025-10-13 15:48:38.292114862 +0000 UTC m=+0.062247953 container kill 1975218bbc6077511a516e09cd208a65ff7ec1d241e8cb8ae3f82624a9128c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a216632f-cda4-4672-88f3-7cc11177318c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:38 standalone.localdomain systemd[1]: libpod-1975218bbc6077511a516e09cd208a65ff7ec1d241e8cb8ae3f82624a9128c2a.scope: Deactivated successfully.
Oct 13 15:48:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:48:38 standalone.localdomain podman[559053]: 2025-10-13 15:48:38.3694718 +0000 UTC m=+0.066463053 container died 1975218bbc6077511a516e09cd208a65ff7ec1d241e8cb8ae3f82624a9128c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a216632f-cda4-4672-88f3-7cc11177318c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:48:38 standalone.localdomain podman[559053]: 2025-10-13 15:48:38.397090602 +0000 UTC m=+0.094081805 container cleanup 1975218bbc6077511a516e09cd208a65ff7ec1d241e8cb8ae3f82624a9128c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a216632f-cda4-4672-88f3-7cc11177318c, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:48:38 standalone.localdomain systemd[1]: libpod-conmon-1975218bbc6077511a516e09cd208a65ff7ec1d241e8cb8ae3f82624a9128c2a.scope: Deactivated successfully.
Oct 13 15:48:38 standalone.localdomain podman[559062]: 2025-10-13 15:48:38.450293635 +0000 UTC m=+0.133843803 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:38.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:38 standalone.localdomain podman[559061]: 2025-10-13 15:48:38.528165589 +0000 UTC m=+0.211456229 container remove 1975218bbc6077511a516e09cd208a65ff7ec1d241e8cb8ae3f82624a9128c2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a216632f-cda4-4672-88f3-7cc11177318c, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:48:38 standalone.localdomain podman[559062]: 2025-10-13 15:48:38.557981948 +0000 UTC m=+0.241532106 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:38 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:48:38 standalone.localdomain ceph-mon[29756]: pgmap v4030: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:38.776 496978 INFO neutron.agent.linux.ip_lib [None req-6109ecec-f44d-48bc-be90-13f751319261 - - - - - -] Device tap17c22467-a0 cannot be used as it has no MAC address
Oct 13 15:48:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c59daf35003511ec47ed41d0f85dd8a76f6464b70e6d2ca1ce659e7c77700142-merged.mount: Deactivated successfully.
Oct 13 15:48:38 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1975218bbc6077511a516e09cd208a65ff7ec1d241e8cb8ae3f82624a9128c2a-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:38.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:38.810 496978 INFO neutron.agent.dhcp.agent [None req-3710bf0c-f2c4-45e9-b291-61d345a55f5f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:38.810 496978 INFO neutron.agent.dhcp.agent [None req-3710bf0c-f2c4-45e9-b291-61d345a55f5f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:38 standalone.localdomain kernel: device tap17c22467-a0 entered promiscuous mode
Oct 13 15:48:38 standalone.localdomain systemd[1]: run-netns-qdhcp\x2da216632f\x2dcda4\x2d4672\x2d88f3\x2d7cc11177318c.mount: Deactivated successfully.
Oct 13 15:48:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:38.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:38 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:38Z|00715|binding|INFO|Claiming lport 17c22467-a03c-4c01-8b16-9fa67c76a117 for this chassis.
Oct 13 15:48:38 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:38Z|00716|binding|INFO|17c22467-a03c-4c01-8b16-9fa67c76a117: Claiming unknown
Oct 13 15:48:38 standalone.localdomain NetworkManager[5962]: <info>  [1760370518.8169] manager: (tap17c22467-a0): new Generic device (/org/freedesktop/NetworkManager/Devices/124)
Oct 13 15:48:38 standalone.localdomain systemd-udevd[559119]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:48:38 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:38Z|00717|binding|INFO|Setting lport 17c22467-a03c-4c01-8b16-9fa67c76a117 ovn-installed in OVS
Oct 13 15:48:38 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:38Z|00718|binding|INFO|Setting lport 17c22467-a03c-4c01-8b16-9fa67c76a117 up in Southbound
Oct 13 15:48:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:38.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:38 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:38.826 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3a:b3e7/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=17c22467-a03c-4c01-8b16-9fa67c76a117) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:38 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:38.829 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 17c22467-a03c-4c01-8b16-9fa67c76a117 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:48:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:38.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:38 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:38.837 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 9a7f562c-2ebb-4582-ba9e-acce18be7632 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:48:38 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:38.837 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:38 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:38.838 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[971d1d98-2232-493e-b422-7872da0512b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:38.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:38.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:39 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:39.277 2 INFO neutron.agent.securitygroups_rpc [None req-5da0bcc2-7cd9-4644-9c54-45bf7be9b34e db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:48:39 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:48:39 standalone.localdomain podman[559146]: 2025-10-13 15:48:39.400185185 +0000 UTC m=+0.095182539 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350)
Oct 13 15:48:39 standalone.localdomain podman[559146]: 2025-10-13 15:48:39.417919242 +0000 UTC m=+0.112916596 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 13 15:48:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:39.417 496978 INFO neutron.agent.linux.ip_lib [None req-97f856da-e079-4fe1-aae7-e22c7f8877f4 - - - - - -] Device tapd4908183-16 cannot be used as it has no MAC address
Oct 13 15:48:39 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:48:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:39.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:39 standalone.localdomain kernel: device tapd4908183-16 entered promiscuous mode
Oct 13 15:48:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:39Z|00719|binding|INFO|Claiming lport d4908183-1630-4217-b316-23bcb326b470 for this chassis.
Oct 13 15:48:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:39Z|00720|binding|INFO|d4908183-1630-4217-b316-23bcb326b470: Claiming unknown
Oct 13 15:48:39 standalone.localdomain NetworkManager[5962]: <info>  [1760370519.4578] manager: (tapd4908183-16): new Generic device (/org/freedesktop/NetworkManager/Devices/125)
Oct 13 15:48:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:39.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:39.472 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-6b5ab3aa-b7b0-43a6-9f72-c1785d339346', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b5ab3aa-b7b0-43a6-9f72-c1785d339346', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f65d01b638479e947a9c2611255c4e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4866d3a8-dc59-4347-84d0-87409d8e96ea, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=d4908183-1630-4217-b316-23bcb326b470) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:39.476 378821 INFO neutron.agent.ovn.metadata.agent [-] Port d4908183-1630-4217-b316-23bcb326b470 in datapath 6b5ab3aa-b7b0-43a6-9f72-c1785d339346 bound to our chassis
Oct 13 15:48:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:39.479 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6b5ab3aa-b7b0-43a6-9f72-c1785d339346 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:39.480 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[a1dc3ba7-e97d-417c-980a-b531666ab2e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:39.490 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:39.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:39Z|00721|binding|INFO|Setting lport d4908183-1630-4217-b316-23bcb326b470 ovn-installed in OVS
Oct 13 15:48:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:39Z|00722|binding|INFO|Setting lport d4908183-1630-4217-b316-23bcb326b470 up in Southbound
Oct 13 15:48:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:39.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:39.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4031: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:40 standalone.localdomain podman[559226]: 
Oct 13 15:48:40 standalone.localdomain podman[559226]: 2025-10-13 15:48:40.047518987 +0000 UTC m=+0.091866487 container create 79412cf7249d215a40ea6cd251cdf32546e05dcce60144fe3104676d73afda28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:48:40 standalone.localdomain systemd[1]: Started libpod-conmon-79412cf7249d215a40ea6cd251cdf32546e05dcce60144fe3104676d73afda28.scope.
Oct 13 15:48:40 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:40 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7572d5461c13eaa97f1ad0a21757bd15816e2e02504d38b02ed6c4f7d67c9a1b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:40 standalone.localdomain podman[559226]: 2025-10-13 15:48:40.009296137 +0000 UTC m=+0.053643687 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:40 standalone.localdomain podman[559226]: 2025-10-13 15:48:40.117653762 +0000 UTC m=+0.162001292 container init 79412cf7249d215a40ea6cd251cdf32546e05dcce60144fe3104676d73afda28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:48:40 standalone.localdomain podman[559226]: 2025-10-13 15:48:40.129976732 +0000 UTC m=+0.174324252 container start 79412cf7249d215a40ea6cd251cdf32546e05dcce60144fe3104676d73afda28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:40 standalone.localdomain dnsmasq[559250]: started, version 2.85 cachesize 150
Oct 13 15:48:40 standalone.localdomain dnsmasq[559250]: DNS service limited to local subnets
Oct 13 15:48:40 standalone.localdomain dnsmasq[559250]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:40 standalone.localdomain dnsmasq[559250]: warning: no upstream servers configured
Oct 13 15:48:40 standalone.localdomain dnsmasq[559250]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:40.193 496978 INFO neutron.agent.dhcp.agent [None req-6109ecec-f44d-48bc-be90-13f751319261 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:38Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890e5610>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890e55b0>], id=81c3005d-5c23-4bc1-a993-6bda131e824a, ip_allocation=immediate, mac_address=fa:16:3e:22:19:3e, name=tempest-NetworksTestDHCPv6-1285774073, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=42, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['a040ebb0-c3ee-426a-8175-0abb02dfd247'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:48:35Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], standard_attr_id=2319, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:48:38Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:48:40 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:40Z|00723|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:48:40 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:40.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:40.297 496978 INFO neutron.agent.dhcp.agent [None req-532c4832-bb0b-47f1-a764-9f721dde1ce4 - - - - - -] DHCP configuration for ports {'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:48:40 standalone.localdomain dnsmasq[559250]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 1 addresses
Oct 13 15:48:40 standalone.localdomain podman[559269]: 2025-10-13 15:48:40.365087379 +0000 UTC m=+0.046365672 container kill 79412cf7249d215a40ea6cd251cdf32546e05dcce60144fe3104676d73afda28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:48:40 standalone.localdomain podman[559308]: 
Oct 13 15:48:40 standalone.localdomain podman[559308]: 2025-10-13 15:48:40.566349112 +0000 UTC m=+0.102328809 container create 0674cf4b04559f9736427267426ecc85f2fb5aa5f3a8b53c787e435a56968960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6b5ab3aa-b7b0-43a6-9f72-c1785d339346, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 15:48:40 standalone.localdomain systemd[1]: Started libpod-conmon-0674cf4b04559f9736427267426ecc85f2fb5aa5f3a8b53c787e435a56968960.scope.
Oct 13 15:48:40 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:40 standalone.localdomain podman[559308]: 2025-10-13 15:48:40.521889679 +0000 UTC m=+0.057869396 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:40 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33e666f7a3d43485d6534dea6c0ca4701a2b13699afe2588a6689f2f0b83e06/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:40 standalone.localdomain podman[559308]: 2025-10-13 15:48:40.63076351 +0000 UTC m=+0.166743217 container init 0674cf4b04559f9736427267426ecc85f2fb5aa5f3a8b53c787e435a56968960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6b5ab3aa-b7b0-43a6-9f72-c1785d339346, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:48:40 standalone.localdomain podman[559308]: 2025-10-13 15:48:40.639645084 +0000 UTC m=+0.175624771 container start 0674cf4b04559f9736427267426ecc85f2fb5aa5f3a8b53c787e435a56968960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6b5ab3aa-b7b0-43a6-9f72-c1785d339346, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:40 standalone.localdomain dnsmasq[559331]: started, version 2.85 cachesize 150
Oct 13 15:48:40 standalone.localdomain dnsmasq[559331]: DNS service limited to local subnets
Oct 13 15:48:40 standalone.localdomain dnsmasq[559331]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:40 standalone.localdomain dnsmasq[559331]: warning: no upstream servers configured
Oct 13 15:48:40 standalone.localdomain dnsmasq-dhcp[559331]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:48:40 standalone.localdomain dnsmasq[559331]: read /var/lib/neutron/dhcp/6b5ab3aa-b7b0-43a6-9f72-c1785d339346/addn_hosts - 0 addresses
Oct 13 15:48:40 standalone.localdomain dnsmasq-dhcp[559331]: read /var/lib/neutron/dhcp/6b5ab3aa-b7b0-43a6-9f72-c1785d339346/host
Oct 13 15:48:40 standalone.localdomain dnsmasq-dhcp[559331]: read /var/lib/neutron/dhcp/6b5ab3aa-b7b0-43a6-9f72-c1785d339346/opts
Oct 13 15:48:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:40.658 496978 INFO neutron.agent.dhcp.agent [None req-13609758-37dd-4c2e-b6a3-1dc8f1d7a6fa - - - - - -] DHCP configuration for ports {'81c3005d-5c23-4bc1-a993-6bda131e824a'} is completed
Oct 13 15:48:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:40.690 496978 INFO neutron.agent.dhcp.agent [None req-e4ff91e8-1387-449e-b968-7b717e45a28d - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:48:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:40.819 496978 INFO neutron.agent.dhcp.agent [None req-3f4e41fe-b5b7-4780-96df-edf465395934 - - - - - -] DHCP configuration for ports {'f4c911ae-1166-48c4-8f31-f125d9bac2e8'} is completed
Oct 13 15:48:40 standalone.localdomain ceph-mon[29756]: pgmap v4031: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:40 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:40.909 2 INFO neutron.agent.securitygroups_rpc [None req-f90030df-f665-4382-afe7-cf2827800dad db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:48:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:41.066 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:36Z, description=, device_id=da6bd250-b26b-4200-8dd4-d383b9b96aa0, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fc63a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fc6e20>], id=e0e07217-8899-4e2a-8606-44178de9710a, ip_allocation=immediate, mac_address=fa:16:3e:70:a1:72, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:28Z, description=, dns_domain=, id=bfc42da9-a81e-4c8a-9c74-bef94743e411, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1691488240, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=26112, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2274, status=ACTIVE, subnets=['a20fb82d-97cd-48e9-8c19-5400e024a448'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:31Z, vlan_transparent=None, network_id=bfc42da9-a81e-4c8a-9c74-bef94743e411, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2304, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:37Z on network bfc42da9-a81e-4c8a-9c74-bef94743e411
Oct 13 15:48:41 standalone.localdomain dnsmasq[559250]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:41 standalone.localdomain podman[559349]: 2025-10-13 15:48:41.16464827 +0000 UTC m=+0.055393171 container kill 79412cf7249d215a40ea6cd251cdf32546e05dcce60144fe3104676d73afda28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:48:41 standalone.localdomain dnsmasq[558841]: read /var/lib/neutron/dhcp/bfc42da9-a81e-4c8a-9c74-bef94743e411/addn_hosts - 1 addresses
Oct 13 15:48:41 standalone.localdomain dnsmasq-dhcp[558841]: read /var/lib/neutron/dhcp/bfc42da9-a81e-4c8a-9c74-bef94743e411/host
Oct 13 15:48:41 standalone.localdomain dnsmasq-dhcp[558841]: read /var/lib/neutron/dhcp/bfc42da9-a81e-4c8a-9c74-bef94743e411/opts
Oct 13 15:48:41 standalone.localdomain systemd[1]: tmp-crun.Tq2I8J.mount: Deactivated successfully.
Oct 13 15:48:41 standalone.localdomain podman[559381]: 2025-10-13 15:48:41.282706044 +0000 UTC m=+0.075875193 container kill 66c4210ce927c7799e9ec2fee5a19f099e3259e7718a9f929f8841ca0ab436f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfc42da9-a81e-4c8a-9c74-bef94743e411, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:41.569 496978 INFO neutron.agent.dhcp.agent [None req-2ff638d7-6348-49e2-8599-8de2efcb92d5 - - - - - -] DHCP configuration for ports {'e0e07217-8899-4e2a-8606-44178de9710a'} is completed
Oct 13 15:48:41 standalone.localdomain podman[467099]: time="2025-10-13T15:48:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:48:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:48:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 425487 "" "Go-http-client/1.1"
Oct 13 15:48:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:48:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:48:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 51984 "" "Go-http-client/1.1"
Oct 13 15:48:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4032: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:41 standalone.localdomain podman[559409]: 2025-10-13 15:48:41.817631056 +0000 UTC m=+0.085389147 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 15:48:41 standalone.localdomain podman[559409]: 2025-10-13 15:48:41.824731085 +0000 UTC m=+0.092489226 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009)
Oct 13 15:48:41 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:48:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:42.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:42 standalone.localdomain dnsmasq[555181]: read /var/lib/neutron/dhcp/d1f78710-d032-4f42-8b33-952b5cd721ff/addn_hosts - 0 addresses
Oct 13 15:48:42 standalone.localdomain dnsmasq-dhcp[555181]: read /var/lib/neutron/dhcp/d1f78710-d032-4f42-8b33-952b5cd721ff/host
Oct 13 15:48:42 standalone.localdomain dnsmasq-dhcp[555181]: read /var/lib/neutron/dhcp/d1f78710-d032-4f42-8b33-952b5cd721ff/opts
Oct 13 15:48:42 standalone.localdomain podman[559445]: 2025-10-13 15:48:42.337331617 +0000 UTC m=+0.042508012 container kill 811b7737fe271e3e64d5fd0eb6854e59ae9aa81f861e798ebd6b669be6958416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d1f78710-d032-4f42-8b33-952b5cd721ff, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 15:48:42 standalone.localdomain systemd[1]: tmp-crun.WcCyzY.mount: Deactivated successfully.
Oct 13 15:48:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:42Z|00724|binding|INFO|Releasing lport ea3677b6-e4f5-4a84-bd8e-182407d5122f from this chassis (sb_readonly=0)
Oct 13 15:48:42 standalone.localdomain kernel: device tapea3677b6-e4 left promiscuous mode
Oct 13 15:48:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:42Z|00725|binding|INFO|Setting lport ea3677b6-e4f5-4a84-bd8e-182407d5122f down in Southbound
Oct 13 15:48:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:42.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:42 standalone.localdomain dnsmasq[559250]: exiting on receipt of SIGTERM
Oct 13 15:48:42 standalone.localdomain systemd[1]: libpod-79412cf7249d215a40ea6cd251cdf32546e05dcce60144fe3104676d73afda28.scope: Deactivated successfully.
Oct 13 15:48:42 standalone.localdomain podman[559483]: 2025-10-13 15:48:42.569558346 +0000 UTC m=+0.072848880 container kill 79412cf7249d215a40ea6cd251cdf32546e05dcce60144fe3104676d73afda28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:48:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:42.574 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d1f78710-d032-4f42-8b33-952b5cd721ff', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d1f78710-d032-4f42-8b33-952b5cd721ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaa42b564883447080cd4183011edf7e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e828193-8760-40ca-9388-096c1736714f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=ea3677b6-e4f5-4a84-bd8e-182407d5122f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:42.575 378821 INFO neutron.agent.ovn.metadata.agent [-] Port ea3677b6-e4f5-4a84-bd8e-182407d5122f in datapath d1f78710-d032-4f42-8b33-952b5cd721ff unbound from our chassis
Oct 13 15:48:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:42.577 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d1f78710-d032-4f42-8b33-952b5cd721ff or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:42.578 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[bee1bc77-6c04-4c09-9e7d-12a1af403426]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:42.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:42 standalone.localdomain podman[559498]: 2025-10-13 15:48:42.642294601 +0000 UTC m=+0.053605695 container died 79412cf7249d215a40ea6cd251cdf32546e05dcce60144fe3104676d73afda28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:48:42 standalone.localdomain podman[559498]: 2025-10-13 15:48:42.676978792 +0000 UTC m=+0.088289886 container cleanup 79412cf7249d215a40ea6cd251cdf32546e05dcce60144fe3104676d73afda28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:42 standalone.localdomain systemd[1]: libpod-conmon-79412cf7249d215a40ea6cd251cdf32546e05dcce60144fe3104676d73afda28.scope: Deactivated successfully.
Oct 13 15:48:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:48:42 standalone.localdomain podman[559500]: 2025-10-13 15:48:42.732380882 +0000 UTC m=+0.137739823 container remove 79412cf7249d215a40ea6cd251cdf32546e05dcce60144fe3104676d73afda28 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:42.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:42Z|00726|binding|INFO|Releasing lport 17c22467-a03c-4c01-8b16-9fa67c76a117 from this chassis (sb_readonly=0)
Oct 13 15:48:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:42Z|00727|binding|INFO|Setting lport 17c22467-a03c-4c01-8b16-9fa67c76a117 down in Southbound
Oct 13 15:48:42 standalone.localdomain kernel: device tap17c22467-a0 left promiscuous mode
Oct 13 15:48:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:42.751 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3a:b3e7/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=17c22467-a03c-4c01-8b16-9fa67c76a117) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:42.752 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 17c22467-a03c-4c01-8b16-9fa67c76a117 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:48:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:42.754 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:42.755 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1e817f-8d65-4236-a43c-968f4df32e4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:42.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:42 standalone.localdomain ceph-mon[29756]: pgmap v4032: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:48:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:48:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:48:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:48:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:48:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:48:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:48:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:48:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:48:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:48:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:48:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:48:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:43.000 496978 INFO neutron.agent.dhcp.agent [None req-f72c3cad-ce9d-4dee-8473-a0eb965b8331 - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:48:43 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:43.001 496978 INFO neutron.agent.dhcp.agent [None req-f72c3cad-ce9d-4dee-8473-a0eb965b8331 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7572d5461c13eaa97f1ad0a21757bd15816e2e02504d38b02ed6c4f7d67c9a1b-merged.mount: Deactivated successfully.
Oct 13 15:48:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-79412cf7249d215a40ea6cd251cdf32546e05dcce60144fe3104676d73afda28-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:43 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:48:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:43.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4033: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:44 standalone.localdomain dnsmasq[555181]: exiting on receipt of SIGTERM
Oct 13 15:48:44 standalone.localdomain podman[559544]: 2025-10-13 15:48:44.005633104 +0000 UTC m=+0.069456755 container kill 811b7737fe271e3e64d5fd0eb6854e59ae9aa81f861e798ebd6b669be6958416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d1f78710-d032-4f42-8b33-952b5cd721ff, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:44 standalone.localdomain systemd[1]: libpod-811b7737fe271e3e64d5fd0eb6854e59ae9aa81f861e798ebd6b669be6958416.scope: Deactivated successfully.
Oct 13 15:48:44 standalone.localdomain podman[559558]: 2025-10-13 15:48:44.065370418 +0000 UTC m=+0.047293711 container died 811b7737fe271e3e64d5fd0eb6854e59ae9aa81f861e798ebd6b669be6958416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d1f78710-d032-4f42-8b33-952b5cd721ff, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:48:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-811b7737fe271e3e64d5fd0eb6854e59ae9aa81f861e798ebd6b669be6958416-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:44 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-8a8b8b0dfe5ae18481da19860b6ad547f079135306b95c1999ded7d4e52b9d9f-merged.mount: Deactivated successfully.
Oct 13 15:48:44 standalone.localdomain podman[559558]: 2025-10-13 15:48:44.153166408 +0000 UTC m=+0.135089661 container cleanup 811b7737fe271e3e64d5fd0eb6854e59ae9aa81f861e798ebd6b669be6958416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d1f78710-d032-4f42-8b33-952b5cd721ff, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:44 standalone.localdomain systemd[1]: libpod-conmon-811b7737fe271e3e64d5fd0eb6854e59ae9aa81f861e798ebd6b669be6958416.scope: Deactivated successfully.
Oct 13 15:48:44 standalone.localdomain podman[559565]: 2025-10-13 15:48:44.178868361 +0000 UTC m=+0.148879076 container remove 811b7737fe271e3e64d5fd0eb6854e59ae9aa81f861e798ebd6b669be6958416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d1f78710-d032-4f42-8b33-952b5cd721ff, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:44.384 496978 INFO neutron.agent.dhcp.agent [None req-57b22eb2-8932-4b26-a5f4-1497f796d9a9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:44 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dd1f78710\x2dd032\x2d4f42\x2d8b33\x2d952b5cd721ff.mount: Deactivated successfully.
Oct 13 15:48:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:44.446 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:44.451 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:43Z, description=, device_id=33f9fffd-0171-4e1a-94d1-3a98e0c7be8a, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888eaae80>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888eaa5b0>], id=431a7a01-e3ba-4366-a6d5-5407004c7581, ip_allocation=immediate, mac_address=fa:16:3e:52:f3:46, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2332, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:48:44Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:48:44 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:48:44 standalone.localdomain podman[559604]: 2025-10-13 15:48:44.62559628 +0000 UTC m=+0.051770608 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:48:44 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:48:44 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:48:44 standalone.localdomain ceph-mon[29756]: pgmap v4033: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:44.851 496978 INFO neutron.agent.dhcp.agent [None req-30d3aa4d-28c4-4760-8299-400e0ca5f2fc - - - - - -] DHCP configuration for ports {'431a7a01-e3ba-4366-a6d5-5407004c7581'} is completed
Oct 13 15:48:44 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:44.962 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:45.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:45.070 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:da:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'c6:b9:d4:9e:30:db'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:45.072 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 13 15:48:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:45Z|00728|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:48:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:45.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4034: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:45.888 496978 INFO neutron.agent.linux.ip_lib [None req-9285dbb0-c1ce-4314-a597-e3dcc65d7904 - - - - - -] Device tap0c9d7e03-89 cannot be used as it has no MAC address
Oct 13 15:48:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:45.898 496978 INFO neutron.agent.linux.ip_lib [None req-93571939-9769-44f8-9180-ee35a8c6338f - - - - - -] Device tap7be04010-9a cannot be used as it has no MAC address
Oct 13 15:48:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:45.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:45 standalone.localdomain kernel: device tap0c9d7e03-89 entered promiscuous mode
Oct 13 15:48:45 standalone.localdomain NetworkManager[5962]: <info>  [1760370525.9427] manager: (tap0c9d7e03-89): new Generic device (/org/freedesktop/NetworkManager/Devices/126)
Oct 13 15:48:45 standalone.localdomain systemd-udevd[559644]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:48:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:45Z|00729|binding|INFO|Claiming lport 0c9d7e03-89ce-4128-b4af-4116af140786 for this chassis.
Oct 13 15:48:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:45Z|00730|binding|INFO|0c9d7e03-89ce-4128-b4af-4116af140786: Claiming unknown
Oct 13 15:48:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:45.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:45.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:45.957 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-c78f46e2-5bde-47ef-9634-d0f5d8db615c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c78f46e2-5bde-47ef-9634-d0f5d8db615c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a101722d-1595-4b7d-8999-0ab37e536fee, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=0c9d7e03-89ce-4128-b4af-4116af140786) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:45.960 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 0c9d7e03-89ce-4128-b4af-4116af140786 in datapath c78f46e2-5bde-47ef-9634-d0f5d8db615c bound to our chassis
Oct 13 15:48:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:45.962 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c78f46e2-5bde-47ef-9634-d0f5d8db615c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:45 standalone.localdomain kernel: device tap7be04010-9a entered promiscuous mode
Oct 13 15:48:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:45.963 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[e485594b-e2f0-44c0-b33c-071cd3e2e134]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:45 standalone.localdomain NetworkManager[5962]: <info>  [1760370525.9825] manager: (tap7be04010-9a): new Generic device (/org/freedesktop/NetworkManager/Devices/127)
Oct 13 15:48:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:45Z|00731|binding|INFO|Claiming lport 7be04010-9aea-45c4-8c8e-79eae8fc063d for this chassis.
Oct 13 15:48:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:45Z|00732|binding|INFO|7be04010-9aea-45c4-8c8e-79eae8fc063d: Claiming unknown
Oct 13 15:48:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:45.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:45Z|00733|binding|INFO|Setting lport 0c9d7e03-89ce-4128-b4af-4116af140786 ovn-installed in OVS
Oct 13 15:48:45 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:45.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:45 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:45Z|00734|binding|INFO|Setting lport 0c9d7e03-89ce-4128-b4af-4116af140786 up in Southbound
Oct 13 15:48:45 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:45.996 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe60:edc8/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=7be04010-9aea-45c4-8c8e-79eae8fc063d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:46 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:45.999 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 7be04010-9aea-45c4-8c8e-79eae8fc063d in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:48:46 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:46.006 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 9d02db6b-9ab5-4cfa-ad14-02daf01e3577 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:48:46 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:46.006 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:46 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:46.007 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[676f59b6-06d9-4458-82f1-da1887b3743b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:46 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:46Z|00735|binding|INFO|Setting lport 7be04010-9aea-45c4-8c8e-79eae8fc063d ovn-installed in OVS
Oct 13 15:48:46 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:46Z|00736|binding|INFO|Setting lport 7be04010-9aea-45c4-8c8e-79eae8fc063d up in Southbound
Oct 13 15:48:46 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:46.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:46 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:46.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:46 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:46.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:46 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:46.143 2 INFO neutron.agent.securitygroups_rpc [None req-ab6104b9-7ab0-42fd-9f27-c17647165b45 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:48:46 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:46.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:46 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:46.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:46 standalone.localdomain ceph-mon[29756]: pgmap v4034: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:47 standalone.localdomain podman[559746]: 
Oct 13 15:48:47 standalone.localdomain podman[559746]: 2025-10-13 15:48:47.216608889 +0000 UTC m=+0.128033564 container create 163f98061f5c7483c7f5a0abb3f8b85ce1cb19ca24084b939091a83e23984698 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c78f46e2-5bde-47ef-9634-d0f5d8db615c, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:48:47 standalone.localdomain podman[559764]: 
Oct 13 15:48:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:48:47 standalone.localdomain podman[559764]: 2025-10-13 15:48:47.248208644 +0000 UTC m=+0.088315657 container create a24ac7e75f31b60ae23af90340087c1b40ecae0e96fe08b6047b6f16aadb3c90 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:48:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:48:47 standalone.localdomain podman[559746]: 2025-10-13 15:48:47.171146725 +0000 UTC m=+0.082571441 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:47 standalone.localdomain systemd[1]: Started libpod-conmon-163f98061f5c7483c7f5a0abb3f8b85ce1cb19ca24084b939091a83e23984698.scope.
Oct 13 15:48:47 standalone.localdomain systemd[1]: Started libpod-conmon-a24ac7e75f31b60ae23af90340087c1b40ecae0e96fe08b6047b6f16aadb3c90.scope.
Oct 13 15:48:47 standalone.localdomain podman[559764]: 2025-10-13 15:48:47.207175458 +0000 UTC m=+0.047282441 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:47.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:47 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:47 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:47 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377aaf4b54f9203a08dc3a25bd87766b36ec4830cc77ed98d2e1ea51d11bf944/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:47 standalone.localdomain podman[559779]: 2025-10-13 15:48:47.345866668 +0000 UTC m=+0.089299547 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:47 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40e5c1201ff198ad71abecbc292a17bb6345703f629c95c6dc12ffd559649bd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:47 standalone.localdomain podman[559779]: 2025-10-13 15:48:47.354882237 +0000 UTC m=+0.098315176 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:48:47 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:48:47 standalone.localdomain podman[559764]: 2025-10-13 15:48:47.408735369 +0000 UTC m=+0.248842352 container init a24ac7e75f31b60ae23af90340087c1b40ecae0e96fe08b6047b6f16aadb3c90 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:47 standalone.localdomain podman[559746]: 2025-10-13 15:48:47.41040084 +0000 UTC m=+0.321825485 container init 163f98061f5c7483c7f5a0abb3f8b85ce1cb19ca24084b939091a83e23984698 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c78f46e2-5bde-47ef-9634-d0f5d8db615c, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:48:47 standalone.localdomain podman[559746]: 2025-10-13 15:48:47.417778138 +0000 UTC m=+0.329202783 container start 163f98061f5c7483c7f5a0abb3f8b85ce1cb19ca24084b939091a83e23984698 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c78f46e2-5bde-47ef-9634-d0f5d8db615c, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:48:47 standalone.localdomain dnsmasq[559830]: started, version 2.85 cachesize 150
Oct 13 15:48:47 standalone.localdomain dnsmasq[559830]: DNS service limited to local subnets
Oct 13 15:48:47 standalone.localdomain dnsmasq[559830]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:47 standalone.localdomain dnsmasq[559830]: warning: no upstream servers configured
Oct 13 15:48:47 standalone.localdomain dnsmasq-dhcp[559830]: DHCP, static leases only on 10.102.0.0, lease time 1d
Oct 13 15:48:47 standalone.localdomain dnsmasq[559830]: read /var/lib/neutron/dhcp/c78f46e2-5bde-47ef-9634-d0f5d8db615c/addn_hosts - 0 addresses
Oct 13 15:48:47 standalone.localdomain dnsmasq-dhcp[559830]: read /var/lib/neutron/dhcp/c78f46e2-5bde-47ef-9634-d0f5d8db615c/host
Oct 13 15:48:47 standalone.localdomain dnsmasq-dhcp[559830]: read /var/lib/neutron/dhcp/c78f46e2-5bde-47ef-9634-d0f5d8db615c/opts
Oct 13 15:48:47 standalone.localdomain dnsmasq[559831]: started, version 2.85 cachesize 150
Oct 13 15:48:47 standalone.localdomain dnsmasq[559831]: DNS service limited to local subnets
Oct 13 15:48:47 standalone.localdomain dnsmasq[559831]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:47 standalone.localdomain dnsmasq[559831]: warning: no upstream servers configured
Oct 13 15:48:47 standalone.localdomain dnsmasq-dhcp[559831]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:48:47 standalone.localdomain dnsmasq[559831]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:47 standalone.localdomain dnsmasq-dhcp[559831]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:47 standalone.localdomain dnsmasq-dhcp[559831]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:47 standalone.localdomain podman[559780]: 2025-10-13 15:48:47.40779003 +0000 UTC m=+0.138199877 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:48:47 standalone.localdomain podman[559764]: 2025-10-13 15:48:47.469409182 +0000 UTC m=+0.309516175 container start a24ac7e75f31b60ae23af90340087c1b40ecae0e96fe08b6047b6f16aadb3c90 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:47 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:47.476 2 INFO neutron.agent.securitygroups_rpc [None req-4b07a601-cc7b-40dc-9ab4-e336d3068466 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:48:47 standalone.localdomain podman[559780]: 2025-10-13 15:48:47.49297 +0000 UTC m=+0.223379887 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:48:47 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:48:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:48:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:47.529 496978 INFO neutron.agent.dhcp.agent [None req-93571939-9769-44f8-9180-ee35a8c6338f - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:48:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:47.531 496978 INFO neutron.agent.dhcp.agent [None req-93571939-9769-44f8-9180-ee35a8c6338f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:45Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188911de20>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188911dd60>], id=5c5b4a9a-27cb-40c0-993c-91d424518ba7, ip_allocation=immediate, mac_address=fa:16:3e:66:c5:fd, name=tempest-NetworksTestDHCPv6-1862686636, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=44, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['a215e05f-00a2-4906-98a0-266cafe9d599'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:48:42Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], standard_attr_id=2336, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:48:45Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:48:47 standalone.localdomain podman[559832]: 2025-10-13 15:48:47.637258683 +0000 UTC m=+0.105172487 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, architecture=x86_64, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, managed_by=tripleo_ansible, release=1, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=swift_object_server, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, summary=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:48:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:47.641 496978 INFO neutron.agent.dhcp.agent [None req-d23e2bd8-2bfc-4e31-bc87-989a8fab1d05 - - - - - -] DHCP configuration for ports {'b3d945e2-458a-437c-9a37-dcc9416fabab', '1ca58d75-e233-46ca-b614-c49c032b0542'} is completed
Oct 13 15:48:47 standalone.localdomain dnsmasq[559831]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 1 addresses
Oct 13 15:48:47 standalone.localdomain dnsmasq-dhcp[559831]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:47 standalone.localdomain podman[559867]: 2025-10-13 15:48:47.700208476 +0000 UTC m=+0.056771963 container kill a24ac7e75f31b60ae23af90340087c1b40ecae0e96fe08b6047b6f16aadb3c90 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:48:47 standalone.localdomain dnsmasq-dhcp[559831]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:48:47 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:48:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:48:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4035: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:47 standalone.localdomain podman[559889]: 2025-10-13 15:48:47.808981004 +0000 UTC m=+0.081526257 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-container, release=1, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.component=openstack-swift-container-container, name=rhosp17/openstack-swift-container, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, version=17.1.9, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12)
Oct 13 15:48:47 standalone.localdomain podman[559832]: 2025-10-13 15:48:47.834797171 +0000 UTC m=+0.302710915 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, release=1, version=17.1.9, architecture=x86_64, io.buildah.version=1.33.12, tcib_managed=true, 
batch=17.1_20250721.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, build-date=2025-07-21T14:56:28, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:48:47 standalone.localdomain podman[559890]: 2025-10-13 15:48:47.860638248 +0000 UTC m=+0.132717037 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T16:11:22, vendor=Red Hat, Inc., release=1, config_id=tripleo_step4, version=17.1.9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, architecture=x86_64, com.redhat.component=openstack-swift-account-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server)
Oct 13 15:48:47 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:48:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:47.961 496978 INFO neutron.agent.dhcp.agent [None req-b660d62b-ff83-4bd1-9dbb-fd9f0f1f3923 - - - - - -] DHCP configuration for ports {'5c5b4a9a-27cb-40c0-993c-91d424518ba7'} is completed
Oct 13 15:48:48 standalone.localdomain dnsmasq[559831]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:48 standalone.localdomain dnsmasq-dhcp[559831]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:48 standalone.localdomain podman[559966]: 2025-10-13 15:48:48.009209784 +0000 UTC m=+0.040647015 container kill a24ac7e75f31b60ae23af90340087c1b40ecae0e96fe08b6047b6f16aadb3c90 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:48 standalone.localdomain dnsmasq-dhcp[559831]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:48 standalone.localdomain podman[559890]: 2025-10-13 15:48:48.047835827 +0000 UTC m=+0.319914616 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, distribution-scope=public, name=rhosp17/openstack-swift-account, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, batch=17.1_20250721.1, description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, 
io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-07-21T16:11:22, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-type=git)
Oct 13 15:48:48 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:48:48 standalone.localdomain podman[559889]: 2025-10-13 15:48:48.102566396 +0000 UTC m=+0.375111619 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, release=1, build-date=2025-07-21T15:54:32, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, name=rhosp17/openstack-swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-container-container)
Oct 13 15:48:48 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:48:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:48.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:48 standalone.localdomain ceph-mon[29756]: pgmap v4035: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:48 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:48.888 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:47Z, description=, device_id=33f9fffd-0171-4e1a-94d1-3a98e0c7be8a, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889085eb0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889085100>], id=4244cb61-3853-4c90-904f-cf97b33ac3c0, ip_allocation=immediate, mac_address=fa:16:3e:09:47:39, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:35Z, description=, dns_domain=, id=6b5ab3aa-b7b0-43a6-9f72-c1785d339346, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-430596313-network, port_security_enabled=True, project_id=57f65d01b638479e947a9c2611255c4e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42130, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2302, status=ACTIVE, subnets=['e062825a-401b-43f4-b1b0-bd3bf1b02ae0'], tags=[], tenant_id=57f65d01b638479e947a9c2611255c4e, updated_at=2025-10-13T15:48:38Z, vlan_transparent=None, network_id=6b5ab3aa-b7b0-43a6-9f72-c1785d339346, port_security_enabled=False, project_id=57f65d01b638479e947a9c2611255c4e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2343, status=DOWN, tags=[], tenant_id=57f65d01b638479e947a9c2611255c4e, updated_at=2025-10-13T15:48:48Z on network 6b5ab3aa-b7b0-43a6-9f72-c1785d339346
Oct 13 15:48:49 standalone.localdomain podman[560010]: 2025-10-13 15:48:49.124982015 +0000 UTC m=+0.068266408 container kill 0674cf4b04559f9736427267426ecc85f2fb5aa5f3a8b53c787e435a56968960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6b5ab3aa-b7b0-43a6-9f72-c1785d339346, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:48:49 standalone.localdomain dnsmasq[559331]: read /var/lib/neutron/dhcp/6b5ab3aa-b7b0-43a6-9f72-c1785d339346/addn_hosts - 1 addresses
Oct 13 15:48:49 standalone.localdomain dnsmasq-dhcp[559331]: read /var/lib/neutron/dhcp/6b5ab3aa-b7b0-43a6-9f72-c1785d339346/host
Oct 13 15:48:49 standalone.localdomain dnsmasq-dhcp[559331]: read /var/lib/neutron/dhcp/6b5ab3aa-b7b0-43a6-9f72-c1785d339346/opts
Oct 13 15:48:49 standalone.localdomain dnsmasq[559831]: exiting on receipt of SIGTERM
Oct 13 15:48:49 standalone.localdomain podman[560042]: 2025-10-13 15:48:49.291609999 +0000 UTC m=+0.071216050 container kill a24ac7e75f31b60ae23af90340087c1b40ecae0e96fe08b6047b6f16aadb3c90 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:48:49 standalone.localdomain systemd[1]: libpod-a24ac7e75f31b60ae23af90340087c1b40ecae0e96fe08b6047b6f16aadb3c90.scope: Deactivated successfully.
Oct 13 15:48:49 standalone.localdomain podman[560068]: 2025-10-13 15:48:49.360544596 +0000 UTC m=+0.047277029 container died a24ac7e75f31b60ae23af90340087c1b40ecae0e96fe08b6047b6f16aadb3c90 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:48:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a24ac7e75f31b60ae23af90340087c1b40ecae0e96fe08b6047b6f16aadb3c90-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:49 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-377aaf4b54f9203a08dc3a25bd87766b36ec4830cc77ed98d2e1ea51d11bf944-merged.mount: Deactivated successfully.
Oct 13 15:48:49 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:49.407 496978 INFO neutron.agent.dhcp.agent [None req-dc9b8711-8503-4aac-8134-c797de85f326 - - - - - -] DHCP configuration for ports {'4244cb61-3853-4c90-904f-cf97b33ac3c0'} is completed
Oct 13 15:48:49 standalone.localdomain podman[560068]: 2025-10-13 15:48:49.420774836 +0000 UTC m=+0.107507229 container remove a24ac7e75f31b60ae23af90340087c1b40ecae0e96fe08b6047b6f16aadb3c90 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:48:49 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:49Z|00737|binding|INFO|Releasing lport 7be04010-9aea-45c4-8c8e-79eae8fc063d from this chassis (sb_readonly=0)
Oct 13 15:48:49 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:49Z|00738|binding|INFO|Setting lport 7be04010-9aea-45c4-8c8e-79eae8fc063d down in Southbound
Oct 13 15:48:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:49.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:49 standalone.localdomain kernel: device tap7be04010-9a left promiscuous mode
Oct 13 15:48:49 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:49.443 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe60:edc8/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=7be04010-9aea-45c4-8c8e-79eae8fc063d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:49 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:49.445 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 7be04010-9aea-45c4-8c8e-79eae8fc063d in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:48:49 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:49.449 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:49.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:49 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:49.454 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[5e91e543-13e5-4a2b-abd8-16442c800084]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:49 standalone.localdomain systemd[1]: libpod-conmon-a24ac7e75f31b60ae23af90340087c1b40ecae0e96fe08b6047b6f16aadb3c90.scope: Deactivated successfully.
Oct 13 15:48:49 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:49.648 496978 INFO neutron.agent.dhcp.agent [None req-de67feeb-78cc-4727-915d-888af857f4c1 - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:48:49 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:48:49 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:49.648 496978 INFO neutron.agent.dhcp.agent [None req-de67feeb-78cc-4727-915d-888af857f4c1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4036: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:50.180 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:49Z, description=, device_id=da6bd250-b26b-4200-8dd4-d383b9b96aa0, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890610d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889061c10>], id=395822af-d0bc-4e70-9c66-982066032481, ip_allocation=immediate, mac_address=fa:16:3e:59:11:12, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:42Z, description=, dns_domain=, id=c78f46e2-5bde-47ef-9634-d0f5d8db615c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-58319771, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9898, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2324, status=ACTIVE, subnets=['ace5e96c-adb0-411a-839e-d5edaf087765'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:44Z, vlan_transparent=None, network_id=c78f46e2-5bde-47ef-9634-d0f5d8db615c, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2344, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:49Z on network c78f46e2-5bde-47ef-9634-d0f5d8db615c
Oct 13 15:48:50 standalone.localdomain dnsmasq[559830]: read /var/lib/neutron/dhcp/c78f46e2-5bde-47ef-9634-d0f5d8db615c/addn_hosts - 1 addresses
Oct 13 15:48:50 standalone.localdomain dnsmasq-dhcp[559830]: read /var/lib/neutron/dhcp/c78f46e2-5bde-47ef-9634-d0f5d8db615c/host
Oct 13 15:48:50 standalone.localdomain podman[560108]: 2025-10-13 15:48:50.384648088 +0000 UTC m=+0.058394194 container kill 163f98061f5c7483c7f5a0abb3f8b85ce1cb19ca24084b939091a83e23984698 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c78f46e2-5bde-47ef-9634-d0f5d8db615c, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:48:50 standalone.localdomain dnsmasq-dhcp[559830]: read /var/lib/neutron/dhcp/c78f46e2-5bde-47ef-9634-d0f5d8db615c/opts
Oct 13 15:48:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:50.662 496978 INFO neutron.agent.dhcp.agent [None req-28080cdc-5255-4e9a-b8bc-48f17c59f700 - - - - - -] DHCP configuration for ports {'395822af-d0bc-4e70-9c66-982066032481'} is completed
Oct 13 15:48:50 standalone.localdomain ceph-mon[29756]: pgmap v4036: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:50.976 496978 INFO neutron.agent.linux.ip_lib [None req-d396042f-ff6e-47c7-8bdd-0f386e38ad5e - - - - - -] Device tap28a48265-c6 cannot be used as it has no MAC address
Oct 13 15:48:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:51.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:51 standalone.localdomain kernel: device tap28a48265-c6 entered promiscuous mode
Oct 13 15:48:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:51Z|00739|binding|INFO|Claiming lport 28a48265-c6ac-47ba-a841-f43c556e1254 for this chassis.
Oct 13 15:48:51 standalone.localdomain NetworkManager[5962]: <info>  [1760370531.0101] manager: (tap28a48265-c6): new Generic device (/org/freedesktop/NetworkManager/Devices/128)
Oct 13 15:48:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:51Z|00740|binding|INFO|28a48265-c6ac-47ba-a841-f43c556e1254: Claiming unknown
Oct 13 15:48:51 standalone.localdomain systemd-udevd[560138]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:48:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:51.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:51Z|00741|binding|INFO|Setting lport 28a48265-c6ac-47ba-a841-f43c556e1254 ovn-installed in OVS
Oct 13 15:48:51 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:51Z|00742|binding|INFO|Setting lport 28a48265-c6ac-47ba-a841-f43c556e1254 up in Southbound
Oct 13 15:48:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:51.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:51.021 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=28a48265-c6ac-47ba-a841-f43c556e1254) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:51.024 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 28a48265-c6ac-47ba-a841-f43c556e1254 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:48:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:51.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:51.030 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port b805aa9c-d1c6-4b99-8ef9-3d1051d1ff9d IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:48:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:51.030 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:51 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:51.031 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[41a21fc5-4110-41ec-b202-55a2e9464520]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap28a48265-c6: No such device
Oct 13 15:48:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:51.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap28a48265-c6: No such device
Oct 13 15:48:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap28a48265-c6: No such device
Oct 13 15:48:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap28a48265-c6: No such device
Oct 13 15:48:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap28a48265-c6: No such device
Oct 13 15:48:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap28a48265-c6: No such device
Oct 13 15:48:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap28a48265-c6: No such device
Oct 13 15:48:51 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap28a48265-c6: No such device
Oct 13 15:48:51 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:51.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:48:51 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:51.768 2 INFO neutron.agent.securitygroups_rpc [None req-2dfc02a5-c3cd-418a-b49a-9b6b7d1ed098 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:48:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4037: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:51 standalone.localdomain systemd[1]: tmp-crun.bxC1W6.mount: Deactivated successfully.
Oct 13 15:48:51 standalone.localdomain podman[560187]: 2025-10-13 15:48:51.845251083 +0000 UTC m=+0.100065219 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:48:51 standalone.localdomain podman[560187]: 2025-10-13 15:48:51.879931394 +0000 UTC m=+0.134745540 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid)
Oct 13 15:48:51 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:48:52 standalone.localdomain podman[560228]: 
Oct 13 15:48:52 standalone.localdomain podman[560228]: 2025-10-13 15:48:52.121577853 +0000 UTC m=+0.098169601 container create 9057b1c31062dc5677b30d691ad77d974191414aad65a0a4347531c7a36303e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:52 standalone.localdomain podman[560228]: 2025-10-13 15:48:52.069076152 +0000 UTC m=+0.045667930 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:52 standalone.localdomain systemd[1]: Started libpod-conmon-9057b1c31062dc5677b30d691ad77d974191414aad65a0a4347531c7a36303e1.scope.
Oct 13 15:48:52 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:52 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d65714f1a08fcf2481bbfb97d79356b5b039d82eecaaf3eaa4ba611fd4199ec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:52 standalone.localdomain podman[560228]: 2025-10-13 15:48:52.198722345 +0000 UTC m=+0.175314083 container init 9057b1c31062dc5677b30d691ad77d974191414aad65a0a4347531c7a36303e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:48:52 standalone.localdomain podman[560228]: 2025-10-13 15:48:52.207594488 +0000 UTC m=+0.184186236 container start 9057b1c31062dc5677b30d691ad77d974191414aad65a0a4347531c7a36303e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:52 standalone.localdomain dnsmasq[560247]: started, version 2.85 cachesize 150
Oct 13 15:48:52 standalone.localdomain dnsmasq[560247]: DNS service limited to local subnets
Oct 13 15:48:52 standalone.localdomain dnsmasq[560247]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:52 standalone.localdomain dnsmasq[560247]: warning: no upstream servers configured
Oct 13 15:48:52 standalone.localdomain dnsmasq-dhcp[560247]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:48:52 standalone.localdomain dnsmasq[560247]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:52 standalone.localdomain dnsmasq-dhcp[560247]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:52 standalone.localdomain dnsmasq-dhcp[560247]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:52.272 496978 INFO neutron.agent.dhcp.agent [None req-d396042f-ff6e-47c7-8bdd-0f386e38ad5e - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:48:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:52.274 496978 INFO neutron.agent.dhcp.agent [None req-d396042f-ff6e-47c7-8bdd-0f386e38ad5e - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:51Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e707f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e70fa0>], id=823d930a-5645-4486-8d64-9be84a5c2720, ip_allocation=immediate, mac_address=fa:16:3e:05:f0:8b, name=tempest-NetworksTestDHCPv6-727283780, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=46, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['1395e6b4-5f90-4d9b-a337-84cf4830771f'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:48:50Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], standard_attr_id=2357, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:48:51Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:48:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:52.346 496978 INFO neutron.agent.dhcp.agent [None req-bf2ff53f-52ef-4111-b7f8-42ea612909ec - - - - - -] DHCP configuration for ports {'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:48:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:52.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:52.441 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:47Z, description=, device_id=33f9fffd-0171-4e1a-94d1-3a98e0c7be8a, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889a28ca0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188911d730>], id=4244cb61-3853-4c90-904f-cf97b33ac3c0, ip_allocation=immediate, mac_address=fa:16:3e:09:47:39, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:35Z, description=, dns_domain=, id=6b5ab3aa-b7b0-43a6-9f72-c1785d339346, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-430596313-network, port_security_enabled=True, project_id=57f65d01b638479e947a9c2611255c4e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42130, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2302, status=ACTIVE, subnets=['e062825a-401b-43f4-b1b0-bd3bf1b02ae0'], tags=[], tenant_id=57f65d01b638479e947a9c2611255c4e, updated_at=2025-10-13T15:48:38Z, vlan_transparent=None, network_id=6b5ab3aa-b7b0-43a6-9f72-c1785d339346, port_security_enabled=False, project_id=57f65d01b638479e947a9c2611255c4e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2343, status=DOWN, tags=[], tenant_id=57f65d01b638479e947a9c2611255c4e, updated_at=2025-10-13T15:48:48Z on network 6b5ab3aa-b7b0-43a6-9f72-c1785d339346
Oct 13 15:48:52 standalone.localdomain dnsmasq[560247]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 1 addresses
Oct 13 15:48:52 standalone.localdomain dnsmasq-dhcp[560247]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:52 standalone.localdomain podman[560266]: 2025-10-13 15:48:52.458059769 +0000 UTC m=+0.061415697 container kill 9057b1c31062dc5677b30d691ad77d974191414aad65a0a4347531c7a36303e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:52 standalone.localdomain dnsmasq-dhcp[560247]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:52 standalone.localdomain dnsmasq[559331]: read /var/lib/neutron/dhcp/6b5ab3aa-b7b0-43a6-9f72-c1785d339346/addn_hosts - 1 addresses
Oct 13 15:48:52 standalone.localdomain dnsmasq-dhcp[559331]: read /var/lib/neutron/dhcp/6b5ab3aa-b7b0-43a6-9f72-c1785d339346/host
Oct 13 15:48:52 standalone.localdomain podman[560304]: 2025-10-13 15:48:52.680575488 +0000 UTC m=+0.063370847 container kill 0674cf4b04559f9736427267426ecc85f2fb5aa5f3a8b53c787e435a56968960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6b5ab3aa-b7b0-43a6-9f72-c1785d339346, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:48:52 standalone.localdomain dnsmasq-dhcp[559331]: read /var/lib/neutron/dhcp/6b5ab3aa-b7b0-43a6-9f72-c1785d339346/opts
Oct 13 15:48:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:52.717 496978 INFO neutron.agent.dhcp.agent [None req-36ec5ba2-66cf-4864-906a-c7d7b5101016 - - - - - -] DHCP configuration for ports {'823d930a-5645-4486-8d64-9be84a5c2720'} is completed
Oct 13 15:48:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:48:52 standalone.localdomain ceph-mon[29756]: pgmap v4037: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:52.966 496978 INFO neutron.agent.dhcp.agent [None req-25784d1f-d3d3-4d62-b1f0-9c42b41d6cb9 - - - - - -] DHCP configuration for ports {'4244cb61-3853-4c90-904f-cf97b33ac3c0'} is completed
Oct 13 15:48:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:53.100 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:49Z, description=, device_id=da6bd250-b26b-4200-8dd4-d383b9b96aa0, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889299880>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18892e9df0>], id=395822af-d0bc-4e70-9c66-982066032481, ip_allocation=immediate, mac_address=fa:16:3e:59:11:12, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:42Z, description=, dns_domain=, id=c78f46e2-5bde-47ef-9634-d0f5d8db615c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-58319771, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9898, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2324, status=ACTIVE, subnets=['ace5e96c-adb0-411a-839e-d5edaf087765'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:44Z, vlan_transparent=None, network_id=c78f46e2-5bde-47ef-9634-d0f5d8db615c, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2344, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:49Z on network c78f46e2-5bde-47ef-9634-d0f5d8db615c
Oct 13 15:48:53 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:53.154 2 INFO neutron.agent.securitygroups_rpc [None req-902d8584-7cb1-4409-822d-059285e20662 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:48:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:48:53 standalone.localdomain dnsmasq[559830]: read /var/lib/neutron/dhcp/c78f46e2-5bde-47ef-9634-d0f5d8db615c/addn_hosts - 1 addresses
Oct 13 15:48:53 standalone.localdomain podman[560358]: 2025-10-13 15:48:53.348439743 +0000 UTC m=+0.057782385 container kill 163f98061f5c7483c7f5a0abb3f8b85ce1cb19ca24084b939091a83e23984698 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c78f46e2-5bde-47ef-9634-d0f5d8db615c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:48:53 standalone.localdomain dnsmasq-dhcp[559830]: read /var/lib/neutron/dhcp/c78f46e2-5bde-47ef-9634-d0f5d8db615c/host
Oct 13 15:48:53 standalone.localdomain dnsmasq-dhcp[559830]: read /var/lib/neutron/dhcp/c78f46e2-5bde-47ef-9634-d0f5d8db615c/opts
Oct 13 15:48:53 standalone.localdomain dnsmasq[560247]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:53 standalone.localdomain podman[560373]: 2025-10-13 15:48:53.375893051 +0000 UTC m=+0.044719092 container kill 9057b1c31062dc5677b30d691ad77d974191414aad65a0a4347531c7a36303e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 13 15:48:53 standalone.localdomain dnsmasq-dhcp[560247]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:48:53 standalone.localdomain dnsmasq-dhcp[560247]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:48:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56875 DF PROTO=TCP SPT=56374 DPT=9102 SEQ=1777627862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD3824F0000000001030307) 
Oct 13 15:48:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:53.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:53.644 496978 INFO neutron.agent.dhcp.agent [None req-0b02f5c7-87d3-4bf5-a232-63a556dafdea - - - - - -] DHCP configuration for ports {'395822af-d0bc-4e70-9c66-982066032481'} is completed
Oct 13 15:48:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4038: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:54.075 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b9e3f7-f5c9-4d48-9eca-66995f72d494, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:48:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56876 DF PROTO=TCP SPT=56374 DPT=9102 SEQ=1777627862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD386760000000001030307) 
Oct 13 15:48:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:48:54 standalone.localdomain podman[560402]: 2025-10-13 15:48:54.83144223 +0000 UTC m=+0.098528903 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:48:54 standalone.localdomain podman[560402]: 2025-10-13 15:48:54.84504867 +0000 UTC m=+0.112135323 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:48:54 standalone.localdomain ceph-mon[29756]: pgmap v4038: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:54 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:48:55 standalone.localdomain dnsmasq[560247]: exiting on receipt of SIGTERM
Oct 13 15:48:55 standalone.localdomain podman[560443]: 2025-10-13 15:48:55.089552358 +0000 UTC m=+0.067170045 container kill 9057b1c31062dc5677b30d691ad77d974191414aad65a0a4347531c7a36303e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:48:55 standalone.localdomain systemd[1]: tmp-crun.yefLba.mount: Deactivated successfully.
Oct 13 15:48:55 standalone.localdomain systemd[1]: libpod-9057b1c31062dc5677b30d691ad77d974191414aad65a0a4347531c7a36303e1.scope: Deactivated successfully.
Oct 13 15:48:55 standalone.localdomain podman[560457]: 2025-10-13 15:48:55.157817075 +0000 UTC m=+0.056747053 container died 9057b1c31062dc5677b30d691ad77d974191414aad65a0a4347531c7a36303e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:55 standalone.localdomain podman[560457]: 2025-10-13 15:48:55.193781865 +0000 UTC m=+0.092711813 container cleanup 9057b1c31062dc5677b30d691ad77d974191414aad65a0a4347531c7a36303e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:48:55 standalone.localdomain systemd[1]: libpod-conmon-9057b1c31062dc5677b30d691ad77d974191414aad65a0a4347531c7a36303e1.scope: Deactivated successfully.
Oct 13 15:48:55 standalone.localdomain podman[560464]: 2025-10-13 15:48:55.261525686 +0000 UTC m=+0.147202555 container remove 9057b1c31062dc5677b30d691ad77d974191414aad65a0a4347531c7a36303e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:48:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:55.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:55 standalone.localdomain kernel: device tap28a48265-c6 left promiscuous mode
Oct 13 15:48:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:55Z|00743|binding|INFO|Releasing lport 28a48265-c6ac-47ba-a841-f43c556e1254 from this chassis (sb_readonly=0)
Oct 13 15:48:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:55Z|00744|binding|INFO|Setting lport 28a48265-c6ac-47ba-a841-f43c556e1254 down in Southbound
Oct 13 15:48:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:55.290 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=28a48265-c6ac-47ba-a841-f43c556e1254) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:55.293 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 28a48265-c6ac-47ba-a841-f43c556e1254 in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:48:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:55.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:55.300 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:55.301 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3e6b73-83b1-4829-993a-4c6fb34ee551]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:55.346 496978 INFO neutron.agent.linux.ip_lib [None req-88a68ff8-f544-4542-96ca-172227651e4e - - - - - -] Device tap439015dd-77 cannot be used as it has no MAC address
Oct 13 15:48:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:55.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:55 standalone.localdomain kernel: device tap439015dd-77 entered promiscuous mode
Oct 13 15:48:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:55Z|00745|binding|INFO|Claiming lport 439015dd-77cd-4b09-8545-f29f99597f23 for this chassis.
Oct 13 15:48:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:55Z|00746|binding|INFO|439015dd-77cd-4b09-8545-f29f99597f23: Claiming unknown
Oct 13 15:48:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:55.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:55 standalone.localdomain NetworkManager[5962]: <info>  [1760370535.3875] manager: (tap439015dd-77): new Generic device (/org/freedesktop/NetworkManager/Devices/129)
Oct 13 15:48:55 standalone.localdomain systemd-udevd[560496]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:48:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:55.401 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d535e530-d170-4ecf-a39a-8a44c6a50a08', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d535e530-d170-4ecf-a39a-8a44c6a50a08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4451aaa3a924fe48c89930707dab5c1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ebea9263-888d-4cb9-bb3f-25996716ef94, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=439015dd-77cd-4b09-8545-f29f99597f23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:55.404 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 439015dd-77cd-4b09-8545-f29f99597f23 in datapath d535e530-d170-4ecf-a39a-8a44c6a50a08 bound to our chassis
Oct 13 15:48:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:55.408 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d535e530-d170-4ecf-a39a-8a44c6a50a08 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:55.410 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3b4bb4-5a73-4dba-8427-4baa67ae33d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:55.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:55Z|00747|binding|INFO|Setting lport 439015dd-77cd-4b09-8545-f29f99597f23 ovn-installed in OVS
Oct 13 15:48:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:55Z|00748|binding|INFO|Setting lport 439015dd-77cd-4b09-8545-f29f99597f23 up in Southbound
Oct 13 15:48:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:55.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:55.464 496978 INFO neutron.agent.dhcp.agent [None req-63721169-b97d-43e9-bc95-6266a77f6bd0 - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:48:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:55.464 496978 INFO neutron.agent.dhcp.agent [None req-63721169-b97d-43e9-bc95-6266a77f6bd0 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:48:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:55.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4039: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1d65714f1a08fcf2481bbfb97d79356b5b039d82eecaaf3eaa4ba611fd4199ec-merged.mount: Deactivated successfully.
Oct 13 15:48:55 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9057b1c31062dc5677b30d691ad77d974191414aad65a0a4347531c7a36303e1-userdata-shm.mount: Deactivated successfully.
Oct 13 15:48:55 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:48:56 standalone.localdomain podman[560550]: 
Oct 13 15:48:56 standalone.localdomain podman[560550]: 2025-10-13 15:48:56.445856953 +0000 UTC m=+0.096042036 container create d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:48:56 standalone.localdomain systemd[1]: Started libpod-conmon-d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649.scope.
Oct 13 15:48:56 standalone.localdomain podman[560550]: 2025-10-13 15:48:56.400250525 +0000 UTC m=+0.050435678 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:56 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:56 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbc91082f87a24d62a342c2b4bcf93244593a5e0f5ec4490a644bd602636afeb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:56 standalone.localdomain podman[560550]: 2025-10-13 15:48:56.535522241 +0000 UTC m=+0.185707334 container init d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:48:56 standalone.localdomain podman[560550]: 2025-10-13 15:48:56.544663293 +0000 UTC m=+0.194848376 container start d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:56 standalone.localdomain dnsmasq[560569]: started, version 2.85 cachesize 150
Oct 13 15:48:56 standalone.localdomain dnsmasq[560569]: DNS service limited to local subnets
Oct 13 15:48:56 standalone.localdomain dnsmasq[560569]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:56 standalone.localdomain dnsmasq[560569]: warning: no upstream servers configured
Oct 13 15:48:56 standalone.localdomain dnsmasq-dhcp[560569]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:48:56 standalone.localdomain dnsmasq[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/addn_hosts - 0 addresses
Oct 13 15:48:56 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/host
Oct 13 15:48:56 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/opts
Oct 13 15:48:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:56.604 496978 INFO neutron.agent.dhcp.agent [None req-5afb1167-b9c8-40a9-96a4-2fbcb9d8d034 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:48:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56877 DF PROTO=TCP SPT=56374 DPT=9102 SEQ=1777627862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD38E760000000001030307) 
Oct 13 15:48:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:56.696 496978 INFO neutron.agent.dhcp.agent [None req-0ee9b01d-cb71-4d06-9bac-9cb093310090 - - - - - -] DHCP configuration for ports {'f3223190-3a4a-40e8-81ab-af960ad5dfd4'} is completed
Oct 13 15:48:56 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:56.844 2 INFO neutron.agent.securitygroups_rpc [None req-a38cb9df-408c-4e89-88bb-7a95473f16c7 3cbd21a290e2488c985d719dbc818a4b 4b9d3f4836ab467c87ff3f058089323e - - default default] Security group rule updated ['242045ef-c1d6-4fcd-bd39-944eb22c7ee0']
Oct 13 15:48:56 standalone.localdomain ceph-mon[29756]: pgmap v4039: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:57.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:48:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:48:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4040: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:57 standalone.localdomain systemd[1]: tmp-crun.6fGtBF.mount: Deactivated successfully.
Oct 13 15:48:57 standalone.localdomain podman[560570]: 2025-10-13 15:48:57.833706632 +0000 UTC m=+0.095372844 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:48:57 standalone.localdomain podman[560570]: 2025-10-13 15:48:57.868922 +0000 UTC m=+0.130588192 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:48:57 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:48:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:58.159 496978 INFO neutron.agent.linux.ip_lib [None req-08d32530-5564-42a0-8e94-b1c4bfc3b51c - - - - - -] Device tap769381ca-22 cannot be used as it has no MAC address
Oct 13 15:48:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:58.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:58 standalone.localdomain kernel: device tap769381ca-22 entered promiscuous mode
Oct 13 15:48:58 standalone.localdomain systemd-udevd[560498]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:48:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:58Z|00749|binding|INFO|Claiming lport 769381ca-2269-4615-94aa-36a47e389713 for this chassis.
Oct 13 15:48:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:58Z|00750|binding|INFO|769381ca-2269-4615-94aa-36a47e389713: Claiming unknown
Oct 13 15:48:58 standalone.localdomain NetworkManager[5962]: <info>  [1760370538.1863] manager: (tap769381ca-22): new Generic device (/org/freedesktop/NetworkManager/Devices/130)
Oct 13 15:48:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:58.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:58.197 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.103.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-7f3425af-981f-423f-b53a-5f5a3280d882', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f3425af-981f-423f-b53a-5f5a3280d882', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62d6d89f-6c25-4284-87f6-09e7604c1dab, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=769381ca-2269-4615-94aa-36a47e389713) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:58.198 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 769381ca-2269-4615-94aa-36a47e389713 in datapath 7f3425af-981f-423f-b53a-5f5a3280d882 bound to our chassis
Oct 13 15:48:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:58.201 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7f3425af-981f-423f-b53a-5f5a3280d882 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:48:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:58.201 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[a7cc1ef1-a97d-4af5-a93d-fa61795d5a51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:58 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:58.212 2 INFO neutron.agent.securitygroups_rpc [None req-f263eaef-e61b-43a3-bbe2-8178b8448a6e 979d8e902c31424e9474d83415014da1 c4451aaa3a924fe48c89930707dab5c1 - - default default] Security group member updated ['64edacfd-ff73-4946-8d3e-149467f27125']
Oct 13 15:48:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:58Z|00751|binding|INFO|Setting lport 769381ca-2269-4615-94aa-36a47e389713 ovn-installed in OVS
Oct 13 15:48:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:58Z|00752|binding|INFO|Setting lport 769381ca-2269-4615-94aa-36a47e389713 up in Southbound
Oct 13 15:48:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:58.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:58.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:58.284 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:57Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891354f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889135820>], id=dc8efb2a-edb7-45a4-869c-6880e289e1ee, ip_allocation=immediate, mac_address=fa:16:3e:db:76:10, name=tempest-AllowedAddressPairTestJSON-787071566, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:51Z, description=, dns_domain=, id=d535e530-d170-4ecf-a39a-8a44c6a50a08, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-22600680, port_security_enabled=True, project_id=c4451aaa3a924fe48c89930707dab5c1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9436, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2358, status=ACTIVE, subnets=['00b17518-15e8-4f80-8ffa-635c27d9585b'], tags=[], tenant_id=c4451aaa3a924fe48c89930707dab5c1, updated_at=2025-10-13T15:48:53Z, vlan_transparent=None, network_id=d535e530-d170-4ecf-a39a-8a44c6a50a08, port_security_enabled=True, project_id=c4451aaa3a924fe48c89930707dab5c1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['64edacfd-ff73-4946-8d3e-149467f27125'], standard_attr_id=2385, status=DOWN, tags=[], tenant_id=c4451aaa3a924fe48c89930707dab5c1, updated_at=2025-10-13T15:48:57Z on network d535e530-d170-4ecf-a39a-8a44c6a50a08
Oct 13 15:48:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:58.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:58.341 496978 INFO neutron.agent.linux.ip_lib [None req-d2a6baee-acde-4c5c-8b20-a05fe3ab25bd - - - - - -] Device tap042f2002-99 cannot be used as it has no MAC address
Oct 13 15:48:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:58.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:58 standalone.localdomain kernel: device tap042f2002-99 entered promiscuous mode
Oct 13 15:48:58 standalone.localdomain NetworkManager[5962]: <info>  [1760370538.4095] manager: (tap042f2002-99): new Generic device (/org/freedesktop/NetworkManager/Devices/131)
Oct 13 15:48:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:58.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:58Z|00753|binding|INFO|Claiming lport 042f2002-993c-416e-bc05-97dbf1ad379a for this chassis.
Oct 13 15:48:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:58Z|00754|binding|INFO|042f2002-993c-416e-bc05-97dbf1ad379a: Claiming unknown
Oct 13 15:48:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:58.418 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe16:70a2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=042f2002-993c-416e-bc05-97dbf1ad379a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:48:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:58Z|00755|binding|INFO|Setting lport 042f2002-993c-416e-bc05-97dbf1ad379a ovn-installed in OVS
Oct 13 15:48:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:58.419 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 042f2002-993c-416e-bc05-97dbf1ad379a in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:48:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:48:58Z|00756|binding|INFO|Setting lport 042f2002-993c-416e-bc05-97dbf1ad379a up in Southbound
Oct 13 15:48:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:58.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:58.422 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 0da70521-f8dc-46de-a472-65c898e0f22d IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:48:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:58.422 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:48:58 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:48:58.423 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[fc438df3-868f-495c-90c0-4a26299398ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:48:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:58.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:58.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:48:58.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:48:58 standalone.localdomain podman[560640]: 2025-10-13 15:48:58.577322767 +0000 UTC m=+0.078617519 container kill d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:48:58 standalone.localdomain dnsmasq[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/addn_hosts - 1 addresses
Oct 13 15:48:58 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/host
Oct 13 15:48:58 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/opts
Oct 13 15:48:58 standalone.localdomain ceph-mon[29756]: pgmap v4040: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:58.977 496978 INFO neutron.agent.dhcp.agent [None req-ac5ec94b-3bb8-43ac-8ca9-1fe485b26040 - - - - - -] DHCP configuration for ports {'dc8efb2a-edb7-45a4-869c-6880e289e1ee'} is completed
Oct 13 15:48:59 standalone.localdomain podman[560740]: 
Oct 13 15:48:59 standalone.localdomain podman[560740]: 2025-10-13 15:48:59.509031676 +0000 UTC m=+0.151500968 container create b5b6333009f47716819526e52c1353593fd81e05731df89031f80f9bf56d21aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7f3425af-981f-423f-b53a-5f5a3280d882, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:48:59 standalone.localdomain systemd[1]: Started libpod-conmon-b5b6333009f47716819526e52c1353593fd81e05731df89031f80f9bf56d21aa.scope.
Oct 13 15:48:59 standalone.localdomain podman[560740]: 2025-10-13 15:48:59.46121357 +0000 UTC m=+0.103682882 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:59 standalone.localdomain podman[560760]: 
Oct 13 15:48:59 standalone.localdomain podman[560760]: 2025-10-13 15:48:59.5865848 +0000 UTC m=+0.114794015 container create bd2a1752b4a4556049578814138adb429e6f1edd52b593ff6c63a96386250c47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:48:59 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15c3f74a1ac5089efd3c67cdf02681ea548bdd8422e6f78c3b194c27e07cbe2c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:59 standalone.localdomain podman[560740]: 2025-10-13 15:48:59.606222356 +0000 UTC m=+0.248691658 container init b5b6333009f47716819526e52c1353593fd81e05731df89031f80f9bf56d21aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7f3425af-981f-423f-b53a-5f5a3280d882, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:48:59 standalone.localdomain podman[560740]: 2025-10-13 15:48:59.615088559 +0000 UTC m=+0.257557821 container start b5b6333009f47716819526e52c1353593fd81e05731df89031f80f9bf56d21aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7f3425af-981f-423f-b53a-5f5a3280d882, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:48:59 standalone.localdomain dnsmasq[560784]: started, version 2.85 cachesize 150
Oct 13 15:48:59 standalone.localdomain dnsmasq[560784]: DNS service limited to local subnets
Oct 13 15:48:59 standalone.localdomain dnsmasq[560784]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:59 standalone.localdomain dnsmasq[560784]: warning: no upstream servers configured
Oct 13 15:48:59 standalone.localdomain dnsmasq-dhcp[560784]: DHCP, static leases only on 10.103.0.0, lease time 1d
Oct 13 15:48:59 standalone.localdomain dnsmasq[560784]: read /var/lib/neutron/dhcp/7f3425af-981f-423f-b53a-5f5a3280d882/addn_hosts - 0 addresses
Oct 13 15:48:59 standalone.localdomain dnsmasq-dhcp[560784]: read /var/lib/neutron/dhcp/7f3425af-981f-423f-b53a-5f5a3280d882/host
Oct 13 15:48:59 standalone.localdomain dnsmasq-dhcp[560784]: read /var/lib/neutron/dhcp/7f3425af-981f-423f-b53a-5f5a3280d882/opts
Oct 13 15:48:59 standalone.localdomain systemd[1]: Started libpod-conmon-bd2a1752b4a4556049578814138adb429e6f1edd52b593ff6c63a96386250c47.scope.
Oct 13 15:48:59 standalone.localdomain podman[560760]: 2025-10-13 15:48:59.533995396 +0000 UTC m=+0.062204651 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:48:59 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:48:59 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca5f7c9f845c338091ae8c473b167f4e1ac9053d7a4f4190a4bc15174d804754/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:48:59 standalone.localdomain podman[560760]: 2025-10-13 15:48:59.658699746 +0000 UTC m=+0.186909031 container init bd2a1752b4a4556049578814138adb429e6f1edd52b593ff6c63a96386250c47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:48:59 standalone.localdomain podman[560760]: 2025-10-13 15:48:59.668512088 +0000 UTC m=+0.196721323 container start bd2a1752b4a4556049578814138adb429e6f1edd52b593ff6c63a96386250c47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:48:59 standalone.localdomain dnsmasq[560789]: started, version 2.85 cachesize 150
Oct 13 15:48:59 standalone.localdomain dnsmasq[560789]: DNS service limited to local subnets
Oct 13 15:48:59 standalone.localdomain dnsmasq[560789]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:48:59 standalone.localdomain dnsmasq[560789]: warning: no upstream servers configured
Oct 13 15:48:59 standalone.localdomain dnsmasq[560789]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:48:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:59.688 496978 INFO neutron.agent.dhcp.agent [None req-61b72187-eb96-43c2-82b2-fd64319e61ab - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:48:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:59.740 496978 INFO neutron.agent.dhcp.agent [None req-d2a6baee-acde-4c5c-8b20-a05fe3ab25bd - - - - - -] Resizing dhcp processing queue green pool size to: 11
Oct 13 15:48:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4041: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:48:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:48:59.896 496978 INFO neutron.agent.dhcp.agent [None req-0f5950c4-0f5b-430e-af2e-3d673a7a02bf - - - - - -] DHCP configuration for ports {'080aa095-ad28-4888-9d2f-2e414663f27a', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:48:59 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:48:59.984 2 INFO neutron.agent.securitygroups_rpc [None req-5507bb61-1cf3-467b-b606-2fddd21ea2e7 979d8e902c31424e9474d83415014da1 c4451aaa3a924fe48c89930707dab5c1 - - default default] Security group member updated ['64edacfd-ff73-4946-8d3e-149467f27125']
Oct 13 15:49:00 standalone.localdomain podman[560808]: 2025-10-13 15:49:00.048208869 +0000 UTC m=+0.056275988 container kill bd2a1752b4a4556049578814138adb429e6f1edd52b593ff6c63a96386250c47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:49:00 standalone.localdomain dnsmasq[560789]: exiting on receipt of SIGTERM
Oct 13 15:49:00 standalone.localdomain systemd[1]: libpod-bd2a1752b4a4556049578814138adb429e6f1edd52b593ff6c63a96386250c47.scope: Deactivated successfully.
Oct 13 15:49:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:00.080 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ffcb80>], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:59Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ffc970>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ffcd90>], id=cd3f1d96-0031-4fa1-bd01-b799ec806e35, ip_allocation=immediate, mac_address=fa:16:3e:6b:93:85, name=tempest-AllowedAddressPairTestJSON-859033021, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:51Z, description=, dns_domain=, id=d535e530-d170-4ecf-a39a-8a44c6a50a08, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-22600680, port_security_enabled=True, project_id=c4451aaa3a924fe48c89930707dab5c1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9436, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2358, status=ACTIVE, subnets=['00b17518-15e8-4f80-8ffa-635c27d9585b'], tags=[], tenant_id=c4451aaa3a924fe48c89930707dab5c1, updated_at=2025-10-13T15:48:53Z, vlan_transparent=None, network_id=d535e530-d170-4ecf-a39a-8a44c6a50a08, port_security_enabled=True, project_id=c4451aaa3a924fe48c89930707dab5c1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['64edacfd-ff73-4946-8d3e-149467f27125'], standard_attr_id=2390, status=DOWN, tags=[], tenant_id=c4451aaa3a924fe48c89930707dab5c1, updated_at=2025-10-13T15:48:59Z on network d535e530-d170-4ecf-a39a-8a44c6a50a08
Oct 13 15:49:00 standalone.localdomain podman[560823]: 2025-10-13 15:49:00.12402807 +0000 UTC m=+0.053961768 container died bd2a1752b4a4556049578814138adb429e6f1edd52b593ff6c63a96386250c47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:49:00 standalone.localdomain podman[560823]: 2025-10-13 15:49:00.188238012 +0000 UTC m=+0.118171630 container remove bd2a1752b4a4556049578814138adb429e6f1edd52b593ff6c63a96386250c47 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:49:00 standalone.localdomain systemd[1]: libpod-conmon-bd2a1752b4a4556049578814138adb429e6f1edd52b593ff6c63a96386250c47.scope: Deactivated successfully.
Oct 13 15:49:00 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:00.292 2 INFO neutron.agent.securitygroups_rpc [None req-6ab1defd-0762-4df9-a57d-afe3756ab004 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:49:00 standalone.localdomain dnsmasq[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/addn_hosts - 2 addresses
Oct 13 15:49:00 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/host
Oct 13 15:49:00 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/opts
Oct 13 15:49:00 standalone.localdomain podman[560867]: 2025-10-13 15:49:00.302354843 +0000 UTC m=+0.057021780 container kill d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:49:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ca5f7c9f845c338091ae8c473b167f4e1ac9053d7a4f4190a4bc15174d804754-merged.mount: Deactivated successfully.
Oct 13 15:49:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd2a1752b4a4556049578814138adb429e6f1edd52b593ff6c63a96386250c47-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:00.579 496978 INFO neutron.agent.dhcp.agent [None req-6048de33-9520-4f02-b15c-890eda2c1d89 - - - - - -] DHCP configuration for ports {'cd3f1d96-0031-4fa1-bd01-b799ec806e35'} is completed
Oct 13 15:49:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56878 DF PROTO=TCP SPT=56374 DPT=9102 SEQ=1777627862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD39E360000000001030307) 
Oct 13 15:49:00 standalone.localdomain ceph-mon[29756]: pgmap v4041: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:00 standalone.localdomain sudo[560898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:49:00 standalone.localdomain sudo[560898]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:49:00 standalone.localdomain sudo[560898]: pam_unix(sudo:session): session closed for user root
Oct 13 15:49:01 standalone.localdomain sudo[560920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:49:01 standalone.localdomain sudo[560920]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:49:01 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:01.318 2 INFO neutron.agent.securitygroups_rpc [None req-08643992-5c2a-477d-85e1-d0b078e7334f db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:49:01 standalone.localdomain podman[560993]: 
Oct 13 15:49:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:01.648 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:00Z, description=, device_id=da6bd250-b26b-4200-8dd4-d383b9b96aa0, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e834f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e83d30>], id=207a8535-f654-4985-9520-bf07ca7be270, ip_allocation=immediate, mac_address=fa:16:3e:36:18:91, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:54Z, description=, dns_domain=, id=7f3425af-981f-423f-b53a-5f5a3280d882, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1545907218, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58401, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2362, status=ACTIVE, subnets=['4bf65f38-eea2-4235-ae60-5c9121a97cdd'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:56Z, vlan_transparent=None, network_id=7f3425af-981f-423f-b53a-5f5a3280d882, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2395, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:49:00Z on network 7f3425af-981f-423f-b53a-5f5a3280d882
Oct 13 15:49:01 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:01.651 2 INFO neutron.agent.securitygroups_rpc [None req-ffefe5e6-816c-4465-a24f-ca2cbe8bec08 979d8e902c31424e9474d83415014da1 c4451aaa3a924fe48c89930707dab5c1 - - default default] Security group member updated ['64edacfd-ff73-4946-8d3e-149467f27125']
Oct 13 15:49:01 standalone.localdomain podman[560993]: 2025-10-13 15:49:01.652740726 +0000 UTC m=+0.105182757 container create 5bf1c3f5550ceb03288ee2922740a0fb97505059bc7fb46c9c02936f612c9a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:01 standalone.localdomain sudo[560920]: pam_unix(sudo:session): session closed for user root
Oct 13 15:49:01 standalone.localdomain systemd[1]: Started libpod-conmon-5bf1c3f5550ceb03288ee2922740a0fb97505059bc7fb46c9c02936f612c9a7d.scope.
Oct 13 15:49:01 standalone.localdomain podman[560993]: 2025-10-13 15:49:01.596779709 +0000 UTC m=+0.049221730 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:49:01 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:49:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:49:01 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:49:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:49:01 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:01 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:49:01 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed603a01b5dde51373ba9e882eccc322d8d2047105509ef7e332bae203a10098/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:01 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:49:01 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:49:01 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev 2b70f4a5-e203-438e-8136-dd5b24049dd1 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:49:01 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev 2b70f4a5-e203-438e-8136-dd5b24049dd1 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:49:01 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event 2b70f4a5-e203-438e-8136-dd5b24049dd1 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:49:01 standalone.localdomain podman[560993]: 2025-10-13 15:49:01.743171589 +0000 UTC m=+0.195613660 container init 5bf1c3f5550ceb03288ee2922740a0fb97505059bc7fb46c9c02936f612c9a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:49:01 standalone.localdomain podman[560993]: 2025-10-13 15:49:01.753559999 +0000 UTC m=+0.206002000 container start 5bf1c3f5550ceb03288ee2922740a0fb97505059bc7fb46c9c02936f612c9a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:49:01 standalone.localdomain dnsmasq[561042]: started, version 2.85 cachesize 150
Oct 13 15:49:01 standalone.localdomain dnsmasq[561042]: DNS service limited to local subnets
Oct 13 15:49:01 standalone.localdomain dnsmasq[561042]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:01 standalone.localdomain dnsmasq[561042]: warning: no upstream servers configured
Oct 13 15:49:01 standalone.localdomain dnsmasq-dhcp[561042]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Oct 13 15:49:01 standalone.localdomain dnsmasq[561042]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:49:01 standalone.localdomain dnsmasq-dhcp[561042]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:01 standalone.localdomain dnsmasq-dhcp[561042]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4042: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:01 standalone.localdomain sudo[561033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:49:01 standalone.localdomain sudo[561033]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:49:01 standalone.localdomain sudo[561033]: pam_unix(sudo:session): session closed for user root
Oct 13 15:49:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:01.803 496978 INFO neutron.agent.dhcp.agent [None req-27edccb6-a425-4c4a-9454-4106289bba53 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:48:59Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4bac0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4ba30>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4b6a0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4b580>], id=49c638e2-6494-4bed-9536-ec49b0ce3470, ip_allocation=immediate, mac_address=fa:16:3e:7a:67:e1, name=tempest-NetworksTestDHCPv6-321772689, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=49, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['80f940e1-08a9-41e3-972f-97a507a19f9c', 'e4e9a899-34a3-4715-8374-2a79e2a52311'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:48:58Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], 
standard_attr_id=2394, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:49:00Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:49:01 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:49:01 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:49:01 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:49:01 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:49:01 standalone.localdomain dnsmasq[560784]: read /var/lib/neutron/dhcp/7f3425af-981f-423f-b53a-5f5a3280d882/addn_hosts - 1 addresses
Oct 13 15:49:01 standalone.localdomain dnsmasq-dhcp[560784]: read /var/lib/neutron/dhcp/7f3425af-981f-423f-b53a-5f5a3280d882/host
Oct 13 15:49:01 standalone.localdomain dnsmasq-dhcp[560784]: read /var/lib/neutron/dhcp/7f3425af-981f-423f-b53a-5f5a3280d882/opts
Oct 13 15:49:01 standalone.localdomain podman[561076]: 2025-10-13 15:49:01.862894143 +0000 UTC m=+0.043136942 container kill b5b6333009f47716819526e52c1353593fd81e05731df89031f80f9bf56d21aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7f3425af-981f-423f-b53a-5f5a3280d882, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS)
Oct 13 15:49:01 standalone.localdomain dnsmasq[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/addn_hosts - 1 addresses
Oct 13 15:49:01 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/host
Oct 13 15:49:01 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/opts
Oct 13 15:49:01 standalone.localdomain podman[561091]: 2025-10-13 15:49:01.896382177 +0000 UTC m=+0.054920165 container kill d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 13 15:49:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:01.992 496978 INFO neutron.agent.dhcp.agent [None req-b1176bc0-c688-47f8-85d0-923150040c70 - - - - - -] DHCP configuration for ports {'042f2002-993c-416e-bc05-97dbf1ad379a', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:49:02 standalone.localdomain dnsmasq[561042]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 2 addresses
Oct 13 15:49:02 standalone.localdomain dnsmasq-dhcp[561042]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:02 standalone.localdomain dnsmasq-dhcp[561042]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:02 standalone.localdomain podman[561127]: 2025-10-13 15:49:02.021288723 +0000 UTC m=+0.056900047 container kill 5bf1c3f5550ceb03288ee2922740a0fb97505059bc7fb46c9c02936f612c9a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:49:02 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:02.161 496978 INFO neutron.agent.dhcp.agent [None req-2ae111e8-5ccc-449a-86a7-7ba40234c5e8 - - - - - -] DHCP configuration for ports {'207a8535-f654-4985-9520-bf07ca7be270'} is completed
Oct 13 15:49:02 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:02.331 496978 INFO neutron.agent.dhcp.agent [None req-6fc828ea-0b5a-4f48-862c-e619ae5c09bf - - - - - -] DHCP configuration for ports {'49c638e2-6494-4bed-9536-ec49b0ce3470'} is completed
Oct 13 15:49:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:02.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:02 standalone.localdomain podman[561172]: 2025-10-13 15:49:02.380225772 +0000 UTC m=+0.085911273 container kill 5bf1c3f5550ceb03288ee2922740a0fb97505059bc7fb46c9c02936f612c9a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:49:02 standalone.localdomain dnsmasq[561042]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:49:02 standalone.localdomain dnsmasq-dhcp[561042]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:02 standalone.localdomain dnsmasq-dhcp[561042]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:49:02 standalone.localdomain ceph-mon[29756]: pgmap v4042: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:02 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:02.976 2 INFO neutron.agent.securitygroups_rpc [None req-ab67f780-2fa2-40ab-a1a6-c6856699bc7a 979d8e902c31424e9474d83415014da1 c4451aaa3a924fe48c89930707dab5c1 - - default default] Security group member updated ['64edacfd-ff73-4946-8d3e-149467f27125']
Oct 13 15:49:03 standalone.localdomain podman[561209]: 2025-10-13 15:49:03.009071264 +0000 UTC m=+0.072939023 container kill 5bf1c3f5550ceb03288ee2922740a0fb97505059bc7fb46c9c02936f612c9a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:49:03 standalone.localdomain dnsmasq[561042]: exiting on receipt of SIGTERM
Oct 13 15:49:03 standalone.localdomain systemd[1]: libpod-5bf1c3f5550ceb03288ee2922740a0fb97505059bc7fb46c9c02936f612c9a7d.scope: Deactivated successfully.
Oct 13 15:49:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:03.054 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:02Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ea9520>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ea9ac0>], id=5f34dfe2-f441-4d92-bd2b-1ed67d0f8e23, ip_allocation=immediate, mac_address=fa:16:3e:1a:87:2b, name=tempest-AllowedAddressPairTestJSON-220502326, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:51Z, description=, dns_domain=, id=d535e530-d170-4ecf-a39a-8a44c6a50a08, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-22600680, port_security_enabled=True, project_id=c4451aaa3a924fe48c89930707dab5c1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9436, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2358, status=ACTIVE, subnets=['00b17518-15e8-4f80-8ffa-635c27d9585b'], tags=[], tenant_id=c4451aaa3a924fe48c89930707dab5c1, updated_at=2025-10-13T15:48:53Z, vlan_transparent=None, network_id=d535e530-d170-4ecf-a39a-8a44c6a50a08, port_security_enabled=True, project_id=c4451aaa3a924fe48c89930707dab5c1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['64edacfd-ff73-4946-8d3e-149467f27125'], standard_attr_id=2402, status=DOWN, tags=[], tenant_id=c4451aaa3a924fe48c89930707dab5c1, updated_at=2025-10-13T15:49:02Z on network d535e530-d170-4ecf-a39a-8a44c6a50a08
Oct 13 15:49:03 standalone.localdomain podman[561225]: 2025-10-13 15:49:03.090118285 +0000 UTC m=+0.050753028 container died 5bf1c3f5550ceb03288ee2922740a0fb97505059bc7fb46c9c02936f612c9a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:49:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5bf1c3f5550ceb03288ee2922740a0fb97505059bc7fb46c9c02936f612c9a7d-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:03 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-ed603a01b5dde51373ba9e882eccc322d8d2047105509ef7e332bae203a10098-merged.mount: Deactivated successfully.
Oct 13 15:49:03 standalone.localdomain podman[561225]: 2025-10-13 15:49:03.149512659 +0000 UTC m=+0.110147352 container remove 5bf1c3f5550ceb03288ee2922740a0fb97505059bc7fb46c9c02936f612c9a7d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:49:03 standalone.localdomain systemd[1]: libpod-conmon-5bf1c3f5550ceb03288ee2922740a0fb97505059bc7fb46c9c02936f612c9a7d.scope: Deactivated successfully.
Oct 13 15:49:03 standalone.localdomain dnsmasq[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/addn_hosts - 2 addresses
Oct 13 15:49:03 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/host
Oct 13 15:49:03 standalone.localdomain podman[561269]: 2025-10-13 15:49:03.305000858 +0000 UTC m=+0.065245725 container kill d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:03 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/opts
Oct 13 15:49:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:03.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:03 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:03.589 496978 INFO neutron.agent.dhcp.agent [None req-f1c027b8-6fe1-419e-8ac3-7d090b186d53 - - - - - -] DHCP configuration for ports {'5f34dfe2-f441-4d92-bd2b-1ed67d0f8e23'} is completed
Oct 13 15:49:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4043: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:04.125 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:00Z, description=, device_id=da6bd250-b26b-4200-8dd4-d383b9b96aa0, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fc6ac0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fc65e0>], id=207a8535-f654-4985-9520-bf07ca7be270, ip_allocation=immediate, mac_address=fa:16:3e:36:18:91, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:54Z, description=, dns_domain=, id=7f3425af-981f-423f-b53a-5f5a3280d882, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1545907218, port_security_enabled=True, project_id=745bbaede12e4123b190dd25fdacfb7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58401, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2362, status=ACTIVE, subnets=['4bf65f38-eea2-4235-ae60-5c9121a97cdd'], tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:48:56Z, vlan_transparent=None, network_id=7f3425af-981f-423f-b53a-5f5a3280d882, port_security_enabled=False, project_id=745bbaede12e4123b190dd25fdacfb7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2395, status=DOWN, tags=[], tenant_id=745bbaede12e4123b190dd25fdacfb7d, updated_at=2025-10-13T15:49:00Z on network 7f3425af-981f-423f-b53a-5f5a3280d882
Oct 13 15:49:04 standalone.localdomain podman[561338]: 
Oct 13 15:49:04 standalone.localdomain podman[561338]: 2025-10-13 15:49:04.166614524 +0000 UTC m=+0.100423831 container create 3f59c34e78823fe1079156bb88859d2b51b87b0eb60c5a12036786d69ab2eaa5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:49:04 standalone.localdomain systemd[1]: Started libpod-conmon-3f59c34e78823fe1079156bb88859d2b51b87b0eb60c5a12036786d69ab2eaa5.scope.
Oct 13 15:49:04 standalone.localdomain podman[561338]: 2025-10-13 15:49:04.115450384 +0000 UTC m=+0.049259711 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:04 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:04 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7de3c2330334d2f3d32569df74a004644500b6778f3c0a668aef2fd4bd73cd6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:04 standalone.localdomain podman[561338]: 2025-10-13 15:49:04.233160908 +0000 UTC m=+0.166970195 container init 3f59c34e78823fe1079156bb88859d2b51b87b0eb60c5a12036786d69ab2eaa5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:04 standalone.localdomain podman[561338]: 2025-10-13 15:49:04.241994531 +0000 UTC m=+0.175803818 container start 3f59c34e78823fe1079156bb88859d2b51b87b0eb60c5a12036786d69ab2eaa5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:04 standalone.localdomain dnsmasq[561366]: started, version 2.85 cachesize 150
Oct 13 15:49:04 standalone.localdomain dnsmasq[561366]: DNS service limited to local subnets
Oct 13 15:49:04 standalone.localdomain dnsmasq[561366]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:04 standalone.localdomain dnsmasq[561366]: warning: no upstream servers configured
Oct 13 15:49:04 standalone.localdomain dnsmasq[561366]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:49:04 standalone.localdomain dnsmasq[560784]: read /var/lib/neutron/dhcp/7f3425af-981f-423f-b53a-5f5a3280d882/addn_hosts - 1 addresses
Oct 13 15:49:04 standalone.localdomain podman[561374]: 2025-10-13 15:49:04.392757674 +0000 UTC m=+0.066092790 container kill b5b6333009f47716819526e52c1353593fd81e05731df89031f80f9bf56d21aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7f3425af-981f-423f-b53a-5f5a3280d882, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:04 standalone.localdomain dnsmasq-dhcp[560784]: read /var/lib/neutron/dhcp/7f3425af-981f-423f-b53a-5f5a3280d882/host
Oct 13 15:49:04 standalone.localdomain dnsmasq-dhcp[560784]: read /var/lib/neutron/dhcp/7f3425af-981f-423f-b53a-5f5a3280d882/opts
Oct 13 15:49:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:04.639 496978 INFO neutron.agent.dhcp.agent [None req-20977df6-6d7e-4faf-a5dc-d6576c82974a - - - - - -] DHCP configuration for ports {'042f2002-993c-416e-bc05-97dbf1ad379a', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:49:04 standalone.localdomain systemd[1]: tmp-crun.rawTOv.mount: Deactivated successfully.
Oct 13 15:49:04 standalone.localdomain dnsmasq[561366]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:49:04 standalone.localdomain podman[561411]: 2025-10-13 15:49:04.793635939 +0000 UTC m=+0.060722716 container kill 3f59c34e78823fe1079156bb88859d2b51b87b0eb60c5a12036786d69ab2eaa5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:49:04 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:04.845 496978 INFO neutron.agent.dhcp.agent [None req-1e243022-d7de-44be-a82d-afe83e6bf1ed - - - - - -] DHCP configuration for ports {'207a8535-f654-4985-9520-bf07ca7be270'} is completed
Oct 13 15:49:04 standalone.localdomain ceph-mon[29756]: pgmap v4043: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:05 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:05.171 496978 INFO neutron.agent.dhcp.agent [None req-e4bc2797-3767-4508-9271-273b8a137c06 - - - - - -] DHCP configuration for ports {'042f2002-993c-416e-bc05-97dbf1ad379a', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:49:05 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:49:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:49:05 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:49:05 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:05.615 2 INFO neutron.agent.securitygroups_rpc [None req-2f845830-68b0-43b4-8a08-cbbd19834339 979d8e902c31424e9474d83415014da1 c4451aaa3a924fe48c89930707dab5c1 - - default default] Security group member updated ['64edacfd-ff73-4946-8d3e-149467f27125']
Oct 13 15:49:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4044: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:05 standalone.localdomain dnsmasq[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/addn_hosts - 1 addresses
Oct 13 15:49:05 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/host
Oct 13 15:49:05 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/opts
Oct 13 15:49:05 standalone.localdomain podman[561448]: 2025-10-13 15:49:05.858719676 +0000 UTC m=+0.039979996 container kill d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:49:06 standalone.localdomain account-server[114571]: 172.20.0.100 - - [13/Oct/2025:15:49:06 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx96bce1db1eac4fee96cd3-0068ed1f72" "proxy-server 2" 0.0010 "-" 23 -
Oct 13 15:49:06 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx96bce1db1eac4fee96cd3-0068ed1f72)
Oct 13 15:49:06 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx96bce1db1eac4fee96cd3-0068ed1f72)
Oct 13 15:49:06 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:49:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:06.968 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:49:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:06.969 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:49:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:06.970 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:49:07 standalone.localdomain dnsmasq[557535]: read /var/lib/neutron/dhcp/36ceda34-93ec-4a19-8f02-93322043557c/addn_hosts - 0 addresses
Oct 13 15:49:07 standalone.localdomain dnsmasq-dhcp[557535]: read /var/lib/neutron/dhcp/36ceda34-93ec-4a19-8f02-93322043557c/host
Oct 13 15:49:07 standalone.localdomain dnsmasq-dhcp[557535]: read /var/lib/neutron/dhcp/36ceda34-93ec-4a19-8f02-93322043557c/opts
Oct 13 15:49:07 standalone.localdomain podman[561487]: 2025-10-13 15:49:07.170013932 +0000 UTC m=+0.056090952 container kill 151ac0e8cf50f0835cea73cc2a1d1d3c4d6c583f7211c9c737d8172bd15b97f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36ceda34-93ec-4a19-8f02-93322043557c, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:49:07 standalone.localdomain ceph-mon[29756]: pgmap v4044: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:07 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:07.380 2 INFO neutron.agent.securitygroups_rpc [None req-86143497-8af5-42c3-87ad-893c0c021d1f 979d8e902c31424e9474d83415014da1 c4451aaa3a924fe48c89930707dab5c1 - - default default] Security group member updated ['64edacfd-ff73-4946-8d3e-149467f27125']
Oct 13 15:49:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:07.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:07.458 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:06Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890b7670>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890ae7f0>], id=3166d78f-1af3-478e-a0cb-a755851c1636, ip_allocation=immediate, mac_address=fa:16:3e:0b:dc:b0, name=tempest-AllowedAddressPairTestJSON-313759290, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:51Z, description=, dns_domain=, id=d535e530-d170-4ecf-a39a-8a44c6a50a08, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-22600680, port_security_enabled=True, project_id=c4451aaa3a924fe48c89930707dab5c1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9436, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2358, status=ACTIVE, subnets=['00b17518-15e8-4f80-8ffa-635c27d9585b'], tags=[], tenant_id=c4451aaa3a924fe48c89930707dab5c1, updated_at=2025-10-13T15:48:53Z, vlan_transparent=None, network_id=d535e530-d170-4ecf-a39a-8a44c6a50a08, port_security_enabled=True, project_id=c4451aaa3a924fe48c89930707dab5c1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['64edacfd-ff73-4946-8d3e-149467f27125'], standard_attr_id=2407, status=DOWN, tags=[], tenant_id=c4451aaa3a924fe48c89930707dab5c1, updated_at=2025-10-13T15:49:07Z on network d535e530-d170-4ecf-a39a-8a44c6a50a08
Oct 13 15:49:07 standalone.localdomain podman[561524]: 2025-10-13 15:49:07.533330797 +0000 UTC m=+0.071455577 container kill 3f59c34e78823fe1079156bb88859d2b51b87b0eb60c5a12036786d69ab2eaa5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:49:07 standalone.localdomain dnsmasq[561366]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:49:07 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:07Z|00757|binding|INFO|Releasing lport 83892680-1e8b-49b4-8103-a3c79e4d07f6 from this chassis (sb_readonly=0)
Oct 13 15:49:07 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:07Z|00758|binding|INFO|Setting lport 83892680-1e8b-49b4-8103-a3c79e4d07f6 down in Southbound
Oct 13 15:49:07 standalone.localdomain kernel: device tap83892680-1e left promiscuous mode
Oct 13 15:49:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:07.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:07.547 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-36ceda34-93ec-4a19-8f02-93322043557c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36ceda34-93ec-4a19-8f02-93322043557c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aefab81a3f68448f93dd20e2d275ad53', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb2b1cfd-56c8-4807-81ed-31847ef54a59, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=83892680-1e8b-49b4-8103-a3c79e4d07f6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:07.549 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 83892680-1e8b-49b4-8103-a3c79e4d07f6 in datapath 36ceda34-93ec-4a19-8f02-93322043557c unbound from our chassis
Oct 13 15:49:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:07.553 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 36ceda34-93ec-4a19-8f02-93322043557c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:07 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:07.553 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[f48f012d-1417-4f15-b367-1fe039677cd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:07.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:07 standalone.localdomain dnsmasq[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/addn_hosts - 2 addresses
Oct 13 15:49:07 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/host
Oct 13 15:49:07 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/opts
Oct 13 15:49:07 standalone.localdomain podman[561561]: 2025-10-13 15:49:07.679631753 +0000 UTC m=+0.063661767 container kill d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:49:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:49:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4045: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:07.834 496978 INFO neutron.agent.dhcp.agent [None req-e30d180c-ab09-4fc4-95b8-9441586bbd4e - - - - - -] DHCP configuration for ports {'042f2002-993c-416e-bc05-97dbf1ad379a', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:49:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:07.977 496978 INFO neutron.agent.dhcp.agent [None req-559e1421-fc57-4f13-b371-55e733cfc014 - - - - - -] DHCP configuration for ports {'3166d78f-1af3-478e-a0cb-a755851c1636'} is completed
Oct 13 15:49:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:08.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:08 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:49:08 standalone.localdomain ceph-mon[29756]: pgmap v4045: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:08 standalone.localdomain podman[561587]: 2025-10-13 15:49:08.79496725 +0000 UTC m=+0.062592542 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:49:08 standalone.localdomain podman[561587]: 2025-10-13 15:49:08.820692804 +0000 UTC m=+0.088318076 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 13 15:49:08 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:49:09 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:09.181 2 INFO neutron.agent.securitygroups_rpc [None req-211e4557-e0b7-47a8-a0c0-ee8234dc2af2 979d8e902c31424e9474d83415014da1 c4451aaa3a924fe48c89930707dab5c1 - - default default] Security group member updated ['64edacfd-ff73-4946-8d3e-149467f27125']
Oct 13 15:49:09 standalone.localdomain podman[561631]: 2025-10-13 15:49:09.228077589 +0000 UTC m=+0.046827736 container kill 3f59c34e78823fe1079156bb88859d2b51b87b0eb60c5a12036786d69ab2eaa5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:09 standalone.localdomain dnsmasq[561366]: exiting on receipt of SIGTERM
Oct 13 15:49:09 standalone.localdomain systemd[1]: libpod-3f59c34e78823fe1079156bb88859d2b51b87b0eb60c5a12036786d69ab2eaa5.scope: Deactivated successfully.
Oct 13 15:49:09 standalone.localdomain podman[561645]: 2025-10-13 15:49:09.290771985 +0000 UTC m=+0.046024413 container died 3f59c34e78823fe1079156bb88859d2b51b87b0eb60c5a12036786d69ab2eaa5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:49:09 standalone.localdomain podman[561645]: 2025-10-13 15:49:09.338213849 +0000 UTC m=+0.093466267 container cleanup 3f59c34e78823fe1079156bb88859d2b51b87b0eb60c5a12036786d69ab2eaa5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:49:09 standalone.localdomain systemd[1]: libpod-conmon-3f59c34e78823fe1079156bb88859d2b51b87b0eb60c5a12036786d69ab2eaa5.scope: Deactivated successfully.
Oct 13 15:49:09 standalone.localdomain podman[561650]: 2025-10-13 15:49:09.367703609 +0000 UTC m=+0.109842921 container remove 3f59c34e78823fe1079156bb88859d2b51b87b0eb60c5a12036786d69ab2eaa5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 13 15:49:09 standalone.localdomain podman[561690]: 2025-10-13 15:49:09.532022341 +0000 UTC m=+0.066599057 container kill d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:49:09 standalone.localdomain dnsmasq[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/addn_hosts - 1 addresses
Oct 13 15:49:09 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/host
Oct 13 15:49:09 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/opts
Oct 13 15:49:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:49:09 standalone.localdomain dnsmasq[560784]: read /var/lib/neutron/dhcp/7f3425af-981f-423f-b53a-5f5a3280d882/addn_hosts - 0 addresses
Oct 13 15:49:09 standalone.localdomain dnsmasq-dhcp[560784]: read /var/lib/neutron/dhcp/7f3425af-981f-423f-b53a-5f5a3280d882/host
Oct 13 15:49:09 standalone.localdomain dnsmasq-dhcp[560784]: read /var/lib/neutron/dhcp/7f3425af-981f-423f-b53a-5f5a3280d882/opts
Oct 13 15:49:09 standalone.localdomain podman[561724]: 2025-10-13 15:49:09.735580105 +0000 UTC m=+0.070600101 container kill b5b6333009f47716819526e52c1353593fd81e05731df89031f80f9bf56d21aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7f3425af-981f-423f-b53a-5f5a3280d882, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:49:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4046: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:09 standalone.localdomain systemd[1]: tmp-crun.I3CMoT.mount: Deactivated successfully.
Oct 13 15:49:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d7de3c2330334d2f3d32569df74a004644500b6778f3c0a668aef2fd4bd73cd6-merged.mount: Deactivated successfully.
Oct 13 15:49:09 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f59c34e78823fe1079156bb88859d2b51b87b0eb60c5a12036786d69ab2eaa5-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:09 standalone.localdomain podman[561739]: 2025-10-13 15:49:09.832145955 +0000 UTC m=+0.090566516 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, release=1755695350, config_id=edpm, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Oct 13 15:49:09 standalone.localdomain podman[561739]: 2025-10-13 15:49:09.850849493 +0000 UTC m=+0.109270074 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7)
Oct 13 15:49:09 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:49:09 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:09.961 2 INFO neutron.agent.securitygroups_rpc [None req-42442323-4977-4faa-9f8b-a1cef256cf4d 979d8e902c31424e9474d83415014da1 c4451aaa3a924fe48c89930707dab5c1 - - default default] Security group member updated ['64edacfd-ff73-4946-8d3e-149467f27125']
Oct 13 15:49:10 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:10Z|00759|binding|INFO|Releasing lport 769381ca-2269-4615-94aa-36a47e389713 from this chassis (sb_readonly=0)
Oct 13 15:49:10 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:10Z|00760|binding|INFO|Setting lport 769381ca-2269-4615-94aa-36a47e389713 down in Southbound
Oct 13 15:49:10 standalone.localdomain kernel: device tap769381ca-22 left promiscuous mode
Oct 13 15:49:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:10.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:10 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:10.011 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:09Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889111dc0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889111c70>], id=23f611c6-3ee1-418d-99b1-0693dbf1c0ef, ip_allocation=immediate, mac_address=fa:16:3e:46:be:b4, name=tempest-AllowedAddressPairTestJSON-1343919762, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:51Z, description=, dns_domain=, id=d535e530-d170-4ecf-a39a-8a44c6a50a08, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-22600680, port_security_enabled=True, project_id=c4451aaa3a924fe48c89930707dab5c1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9436, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2358, status=ACTIVE, subnets=['00b17518-15e8-4f80-8ffa-635c27d9585b'], tags=[], tenant_id=c4451aaa3a924fe48c89930707dab5c1, updated_at=2025-10-13T15:48:53Z, vlan_transparent=None, network_id=d535e530-d170-4ecf-a39a-8a44c6a50a08, port_security_enabled=True, project_id=c4451aaa3a924fe48c89930707dab5c1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['64edacfd-ff73-4946-8d3e-149467f27125'], standard_attr_id=2418, status=DOWN, tags=[], tenant_id=c4451aaa3a924fe48c89930707dab5c1, updated_at=2025-10-13T15:49:09Z on network d535e530-d170-4ecf-a39a-8a44c6a50a08
Oct 13 15:49:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:10.014 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.103.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-7f3425af-981f-423f-b53a-5f5a3280d882', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f3425af-981f-423f-b53a-5f5a3280d882', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=62d6d89f-6c25-4284-87f6-09e7604c1dab, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=769381ca-2269-4615-94aa-36a47e389713) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:10.015 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 769381ca-2269-4615-94aa-36a47e389713 in datapath 7f3425af-981f-423f-b53a-5f5a3280d882 unbound from our chassis
Oct 13 15:49:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:10.018 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7f3425af-981f-423f-b53a-5f5a3280d882, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:10.018 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe1c075-f3d9-4244-987a-5e7e30f095cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:10.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:10 standalone.localdomain dnsmasq[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/addn_hosts - 2 addresses
Oct 13 15:49:10 standalone.localdomain podman[561801]: 2025-10-13 15:49:10.245276798 +0000 UTC m=+0.066367980 container kill d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 13 15:49:10 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/host
Oct 13 15:49:10 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/opts
Oct 13 15:49:10 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:10.389 2 INFO neutron.agent.securitygroups_rpc [None req-028cd782-1308-4380-af48-29a80a4bc472 979d8e902c31424e9474d83415014da1 c4451aaa3a924fe48c89930707dab5c1 - - default default] Security group member updated ['64edacfd-ff73-4946-8d3e-149467f27125']
Oct 13 15:49:10 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:10.470 496978 INFO neutron.agent.dhcp.agent [None req-5b447675-946b-4ba7-87d2-4080c1877995 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:10Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890a8d90>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890a8040>], id=dc086ed1-78b3-4b7f-b282-1b28716dad55, ip_allocation=immediate, mac_address=fa:16:3e:d9:7c:bd, name=tempest-AllowedAddressPairTestJSON-409178214, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:48:51Z, description=, dns_domain=, id=d535e530-d170-4ecf-a39a-8a44c6a50a08, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-22600680, port_security_enabled=True, project_id=c4451aaa3a924fe48c89930707dab5c1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9436, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2358, status=ACTIVE, subnets=['00b17518-15e8-4f80-8ffa-635c27d9585b'], tags=[], tenant_id=c4451aaa3a924fe48c89930707dab5c1, updated_at=2025-10-13T15:48:53Z, vlan_transparent=None, network_id=d535e530-d170-4ecf-a39a-8a44c6a50a08, port_security_enabled=True, project_id=c4451aaa3a924fe48c89930707dab5c1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['64edacfd-ff73-4946-8d3e-149467f27125'], standard_attr_id=2420, status=DOWN, tags=[], tenant_id=c4451aaa3a924fe48c89930707dab5c1, updated_at=2025-10-13T15:49:10Z on network d535e530-d170-4ecf-a39a-8a44c6a50a08
Oct 13 15:49:10 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:10.563 496978 INFO neutron.agent.dhcp.agent [None req-4d4d2828-0ec8-468b-850c-0b110ec6e3fd - - - - - -] DHCP configuration for ports {'23f611c6-3ee1-418d-99b1-0693dbf1c0ef'} is completed
Oct 13 15:49:10 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:10.635 2 INFO neutron.agent.securitygroups_rpc [None req-89247181-0e0c-4708-bf65-9ba49fda00eb db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:49:10 standalone.localdomain dnsmasq[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/addn_hosts - 3 addresses
Oct 13 15:49:10 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/host
Oct 13 15:49:10 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/opts
Oct 13 15:49:10 standalone.localdomain systemd[1]: tmp-crun.C2ZNiv.mount: Deactivated successfully.
Oct 13 15:49:10 standalone.localdomain podman[561852]: 2025-10-13 15:49:10.714608175 +0000 UTC m=+0.074454139 container kill d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:10 standalone.localdomain ceph-mon[29756]: pgmap v4046: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:10 standalone.localdomain podman[561894]: 2025-10-13 15:49:10.975856409 +0000 UTC m=+0.099037048 container create adabb73f86a20605df34f01e1292319a007f0e0e5e2a800409de69d5e28fe11d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:49:11 standalone.localdomain systemd[1]: Started libpod-conmon-adabb73f86a20605df34f01e1292319a007f0e0e5e2a800409de69d5e28fe11d.scope.
Oct 13 15:49:11 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:11.019 496978 INFO neutron.agent.dhcp.agent [None req-18c03764-da16-4bcd-89ea-24ed7dd46e0a - - - - - -] DHCP configuration for ports {'dc086ed1-78b3-4b7f-b282-1b28716dad55'} is completed
Oct 13 15:49:11 standalone.localdomain podman[561894]: 2025-10-13 15:49:10.932725998 +0000 UTC m=+0.055906627 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:11 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:11 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f0e713087ba3ad702299bef25a09d8af7e4eb91c2d031e819743f6a46de757c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:11 standalone.localdomain podman[561894]: 2025-10-13 15:49:11.056042154 +0000 UTC m=+0.179222803 container init adabb73f86a20605df34f01e1292319a007f0e0e5e2a800409de69d5e28fe11d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:49:11 standalone.localdomain podman[561894]: 2025-10-13 15:49:11.068952723 +0000 UTC m=+0.192133372 container start adabb73f86a20605df34f01e1292319a007f0e0e5e2a800409de69d5e28fe11d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:11 standalone.localdomain dnsmasq[561914]: started, version 2.85 cachesize 150
Oct 13 15:49:11 standalone.localdomain dnsmasq[561914]: DNS service limited to local subnets
Oct 13 15:49:11 standalone.localdomain dnsmasq[561914]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:11 standalone.localdomain dnsmasq[561914]: warning: no upstream servers configured
Oct 13 15:49:11 standalone.localdomain dnsmasq-dhcp[561914]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Oct 13 15:49:11 standalone.localdomain dnsmasq-dhcp[561914]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:49:11 standalone.localdomain dnsmasq[561914]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:49:11 standalone.localdomain dnsmasq-dhcp[561914]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:11 standalone.localdomain dnsmasq-dhcp[561914]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:11 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:11.130 496978 INFO neutron.agent.dhcp.agent [None req-17c6a442-4a81-4f55-8b0c-019a97dfaee8 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:09Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188921aac0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f188921ad90>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188921aa00>, <neutron.agent.linux.dhcp.DictModel object at 0x7f188921ac40>], id=7501d27e-f449-430b-9dbd-8d43f5487067, ip_allocation=immediate, mac_address=fa:16:3e:40:27:2f, name=tempest-NetworksTestDHCPv6-2123074031, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=53, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['53547d3d-118a-4968-ad66-b447af2dac49', 'b2e8baec-2ccd-4f64-873a-528080fc2d38'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:49:08Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], standard_attr_id=2415, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:49:10Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:49:11 standalone.localdomain dnsmasq[561914]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 2 addresses
Oct 13 15:49:11 standalone.localdomain dnsmasq-dhcp[561914]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:11 standalone.localdomain dnsmasq-dhcp[561914]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:11 standalone.localdomain podman[561933]: 2025-10-13 15:49:11.366210139 +0000 UTC m=+0.066147194 container kill adabb73f86a20605df34f01e1292319a007f0e0e5e2a800409de69d5e28fe11d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:11 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:11.516 2 INFO neutron.agent.securitygroups_rpc [None req-29f03847-3204-4e66-9c2b-bd117a7e91ec db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:49:11 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:11.559 496978 INFO neutron.agent.dhcp.agent [None req-bbd831fb-5315-46ef-8f78-ec0faf742a83 - - - - - -] DHCP configuration for ports {'042f2002-993c-416e-bc05-97dbf1ad379a', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:49:11 standalone.localdomain podman[467099]: time="2025-10-13T15:49:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:49:11 standalone.localdomain dnsmasq[560784]: exiting on receipt of SIGTERM
Oct 13 15:49:11 standalone.localdomain podman[561967]: 2025-10-13 15:49:11.601081429 +0000 UTC m=+0.080254519 container kill b5b6333009f47716819526e52c1353593fd81e05731df89031f80f9bf56d21aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7f3425af-981f-423f-b53a-5f5a3280d882, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:49:11 standalone.localdomain systemd[1]: libpod-b5b6333009f47716819526e52c1353593fd81e05731df89031f80f9bf56d21aa.scope: Deactivated successfully.
Oct 13 15:49:11 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:11.677 496978 INFO neutron.agent.dhcp.agent [None req-d76efba9-a6ea-44c0-939f-969b9143990c - - - - - -] DHCP configuration for ports {'7501d27e-f449-430b-9dbd-8d43f5487067'} is completed
Oct 13 15:49:11 standalone.localdomain podman[561998]: 2025-10-13 15:49:11.681550332 +0000 UTC m=+0.067133603 container died b5b6333009f47716819526e52c1353593fd81e05731df89031f80f9bf56d21aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7f3425af-981f-423f-b53a-5f5a3280d882, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:49:11 standalone.localdomain podman[561998]: 2025-10-13 15:49:11.767352861 +0000 UTC m=+0.152936102 container cleanup b5b6333009f47716819526e52c1353593fd81e05731df89031f80f9bf56d21aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7f3425af-981f-423f-b53a-5f5a3280d882, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:49:11 standalone.localdomain systemd[1]: libpod-conmon-b5b6333009f47716819526e52c1353593fd81e05731df89031f80f9bf56d21aa.scope: Deactivated successfully.
Oct 13 15:49:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4047: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:11 standalone.localdomain podman[562005]: 2025-10-13 15:49:11.797913514 +0000 UTC m=+0.161621620 container remove b5b6333009f47716819526e52c1353593fd81e05731df89031f80f9bf56d21aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7f3425af-981f-423f-b53a-5f5a3280d882, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:11 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:49:11 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:49:11 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:49:11 standalone.localdomain podman[562021]: 2025-10-13 15:49:11.832279795 +0000 UTC m=+0.165524511 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:49:11 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:11Z|00761|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:49:11 standalone.localdomain dnsmasq[561914]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:49:11 standalone.localdomain dnsmasq-dhcp[561914]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:11 standalone.localdomain dnsmasq-dhcp[561914]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:11 standalone.localdomain podman[562056]: 2025-10-13 15:49:11.957960674 +0000 UTC m=+0.117441296 container kill adabb73f86a20605df34f01e1292319a007f0e0e5e2a800409de69d5e28fe11d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:49:11 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:11.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:49:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15c3f74a1ac5089efd3c67cdf02681ea548bdd8422e6f78c3b194c27e07cbe2c-merged.mount: Deactivated successfully.
Oct 13 15:49:11 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5b6333009f47716819526e52c1353593fd81e05731df89031f80f9bf56d21aa-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:12 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:49:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 427511 "" "Go-http-client/1.1"
Oct 13 15:49:12 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d7f3425af\x2d981f\x2d423f\x2db53a\x2d5f5a3280d882.mount: Deactivated successfully.
Oct 13 15:49:12 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:12.031 496978 INFO neutron.agent.dhcp.agent [None req-8e0f632b-21a2-4870-8614-d35b5694f715 - - - - - -] Resizing dhcp processing queue green pool size to: 10
Oct 13 15:49:12 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:12.034 496978 INFO neutron.agent.dhcp.agent [None req-8e0f632b-21a2-4870-8614-d35b5694f715 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:12 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:12.056 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:12 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:49:12 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 52462 "" "Go-http-client/1.1"
Oct 13 15:49:12 standalone.localdomain podman[562074]: 2025-10-13 15:49:12.133790502 +0000 UTC m=+0.147135003 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:12 standalone.localdomain podman[562074]: 2025-10-13 15:49:12.169015729 +0000 UTC m=+0.182360280 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 13 15:49:12 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:49:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:12.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:49:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:12.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:49:12 standalone.localdomain ceph-mon[29756]: pgmap v4047: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:49:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:49:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:49:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:49:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:49:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:49:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:49:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:49:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:49:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:49:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:49:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:49:13 standalone.localdomain dnsmasq[561914]: exiting on receipt of SIGTERM
Oct 13 15:49:13 standalone.localdomain systemd[1]: libpod-adabb73f86a20605df34f01e1292319a007f0e0e5e2a800409de69d5e28fe11d.scope: Deactivated successfully.
Oct 13 15:49:13 standalone.localdomain podman[562126]: 2025-10-13 15:49:13.073594221 +0000 UTC m=+0.083402856 container kill adabb73f86a20605df34f01e1292319a007f0e0e5e2a800409de69d5e28fe11d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 13 15:49:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:13.134 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:13 standalone.localdomain podman[562158]: 2025-10-13 15:49:13.139779044 +0000 UTC m=+0.057799895 container died adabb73f86a20605df34f01e1292319a007f0e0e5e2a800409de69d5e28fe11d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 13 15:49:13 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-adabb73f86a20605df34f01e1292319a007f0e0e5e2a800409de69d5e28fe11d-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:13 standalone.localdomain podman[562158]: 2025-10-13 15:49:13.178275493 +0000 UTC m=+0.096296294 container cleanup adabb73f86a20605df34f01e1292319a007f0e0e5e2a800409de69d5e28fe11d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:13 standalone.localdomain systemd[1]: libpod-conmon-adabb73f86a20605df34f01e1292319a007f0e0e5e2a800409de69d5e28fe11d.scope: Deactivated successfully.
Oct 13 15:49:13 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:13.189 2 INFO neutron.agent.securitygroups_rpc [None req-8a9161b5-fe8d-4f95-9ba6-0ff9c0cc3ffa 979d8e902c31424e9474d83415014da1 c4451aaa3a924fe48c89930707dab5c1 - - default default] Security group member updated ['64edacfd-ff73-4946-8d3e-149467f27125']
Oct 13 15:49:13 standalone.localdomain dnsmasq[557535]: exiting on receipt of SIGTERM
Oct 13 15:49:13 standalone.localdomain podman[562149]: 2025-10-13 15:49:13.198479726 +0000 UTC m=+0.135733621 container kill 151ac0e8cf50f0835cea73cc2a1d1d3c4d6c583f7211c9c737d8172bd15b97f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36ceda34-93ec-4a19-8f02-93322043557c, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:49:13 standalone.localdomain systemd[1]: libpod-151ac0e8cf50f0835cea73cc2a1d1d3c4d6c583f7211c9c737d8172bd15b97f6.scope: Deactivated successfully.
Oct 13 15:49:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:13.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:49:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:13.229 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:49:13 standalone.localdomain podman[562166]: 2025-10-13 15:49:13.287782343 +0000 UTC m=+0.190898874 container remove adabb73f86a20605df34f01e1292319a007f0e0e5e2a800409de69d5e28fe11d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:13 standalone.localdomain podman[562194]: 2025-10-13 15:49:13.308967956 +0000 UTC m=+0.093249539 container died 151ac0e8cf50f0835cea73cc2a1d1d3c4d6c583f7211c9c737d8172bd15b97f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36ceda34-93ec-4a19-8f02-93322043557c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:13 standalone.localdomain podman[562194]: 2025-10-13 15:49:13.341541532 +0000 UTC m=+0.125823045 container cleanup 151ac0e8cf50f0835cea73cc2a1d1d3c4d6c583f7211c9c737d8172bd15b97f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36ceda34-93ec-4a19-8f02-93322043557c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:49:13 standalone.localdomain systemd[1]: libpod-conmon-151ac0e8cf50f0835cea73cc2a1d1d3c4d6c583f7211c9c737d8172bd15b97f6.scope: Deactivated successfully.
Oct 13 15:49:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:13.395 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:49:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:13.396 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:49:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:13.396 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:49:13 standalone.localdomain podman[562196]: 2025-10-13 15:49:13.455633314 +0000 UTC m=+0.233868841 container remove 151ac0e8cf50f0835cea73cc2a1d1d3c4d6c583f7211c9c737d8172bd15b97f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36ceda34-93ec-4a19-8f02-93322043557c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:49:13 standalone.localdomain dnsmasq[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/addn_hosts - 2 addresses
Oct 13 15:49:13 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/host
Oct 13 15:49:13 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/opts
Oct 13 15:49:13 standalone.localdomain podman[562246]: 2025-10-13 15:49:13.474825986 +0000 UTC m=+0.085938384 container kill d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:49:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:13.555 496978 INFO neutron.agent.dhcp.agent [None req-449426a9-7bef-4117-9e68-ed1189094723 - - - - - -] Resizing dhcp processing queue green pool size to: 9
Oct 13 15:49:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:13.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:13 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:13.644 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4048: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1f0e713087ba3ad702299bef25a09d8af7e4eb91c2d031e819743f6a46de757c-merged.mount: Deactivated successfully.
Oct 13 15:49:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-303db635c915b698097852bef2d84a9616a567ec99741ff40953eab556d38aae-merged.mount: Deactivated successfully.
Oct 13 15:49:14 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-151ac0e8cf50f0835cea73cc2a1d1d3c4d6c583f7211c9c737d8172bd15b97f6-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:14 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d36ceda34\x2d93ec\x2d4a19\x2d8f02\x2d93322043557c.mount: Deactivated successfully.
Oct 13 15:49:14 standalone.localdomain podman[562316]: 2025-10-13 15:49:14.384842966 +0000 UTC m=+0.073764978 container create de73f7fa4efd7180edca2d6d0bd44ed07e31167af9e1bc0ba9a1f425903634a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:49:14 standalone.localdomain systemd[1]: Started libpod-conmon-de73f7fa4efd7180edca2d6d0bd44ed07e31167af9e1bc0ba9a1f425903634a9.scope.
Oct 13 15:49:14 standalone.localdomain systemd[1]: tmp-crun.1XmelD.mount: Deactivated successfully.
Oct 13 15:49:14 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:14 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37eb83ed92eb8014bf269eaf41503fe1fadeb97d93a2274acf683ff87bc28cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:14 standalone.localdomain podman[562316]: 2025-10-13 15:49:14.353808638 +0000 UTC m=+0.042730630 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:14 standalone.localdomain podman[562316]: 2025-10-13 15:49:14.459669756 +0000 UTC m=+0.148591778 container init de73f7fa4efd7180edca2d6d0bd44ed07e31167af9e1bc0ba9a1f425903634a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:49:14 standalone.localdomain podman[562316]: 2025-10-13 15:49:14.467971362 +0000 UTC m=+0.156893384 container start de73f7fa4efd7180edca2d6d0bd44ed07e31167af9e1bc0ba9a1f425903634a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:49:14 standalone.localdomain dnsmasq[562334]: started, version 2.85 cachesize 150
Oct 13 15:49:14 standalone.localdomain dnsmasq[562334]: DNS service limited to local subnets
Oct 13 15:49:14 standalone.localdomain dnsmasq[562334]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:14 standalone.localdomain dnsmasq[562334]: warning: no upstream servers configured
Oct 13 15:49:14 standalone.localdomain dnsmasq-dhcp[562334]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Oct 13 15:49:14 standalone.localdomain dnsmasq[562334]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:49:14 standalone.localdomain dnsmasq-dhcp[562334]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:14 standalone.localdomain dnsmasq-dhcp[562334]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:14.705 496978 INFO neutron.agent.dhcp.agent [None req-a46d7d82-ac43-4376-98de-51464617a6e6 - - - - - -] DHCP configuration for ports {'042f2002-993c-416e-bc05-97dbf1ad379a', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:49:14 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:14.747 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:14 standalone.localdomain dnsmasq[562334]: exiting on receipt of SIGTERM
Oct 13 15:49:14 standalone.localdomain systemd[1]: libpod-de73f7fa4efd7180edca2d6d0bd44ed07e31167af9e1bc0ba9a1f425903634a9.scope: Deactivated successfully.
Oct 13 15:49:14 standalone.localdomain podman[562352]: 2025-10-13 15:49:14.833610678 +0000 UTC m=+0.065190393 container kill de73f7fa4efd7180edca2d6d0bd44ed07e31167af9e1bc0ba9a1f425903634a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:49:14 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:14.847 2 INFO neutron.agent.securitygroups_rpc [None req-26c324ec-6457-401f-8d72-8e0ba4e16c19 979d8e902c31424e9474d83415014da1 c4451aaa3a924fe48c89930707dab5c1 - - default default] Security group member updated ['64edacfd-ff73-4946-8d3e-149467f27125']
Oct 13 15:49:14 standalone.localdomain ceph-mon[29756]: pgmap v4048: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:14.900 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:49:14 standalone.localdomain podman[562365]: 2025-10-13 15:49:14.906844359 +0000 UTC m=+0.061626533 container died de73f7fa4efd7180edca2d6d0bd44ed07e31167af9e1bc0ba9a1f425903634a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:49:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:14.915 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:49:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:14.916 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:49:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:14.917 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:49:14 standalone.localdomain podman[562365]: 2025-10-13 15:49:14.940752066 +0000 UTC m=+0.095534180 container cleanup de73f7fa4efd7180edca2d6d0bd44ed07e31167af9e1bc0ba9a1f425903634a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 15:49:14 standalone.localdomain systemd[1]: libpod-conmon-de73f7fa4efd7180edca2d6d0bd44ed07e31167af9e1bc0ba9a1f425903634a9.scope: Deactivated successfully.
Oct 13 15:49:15 standalone.localdomain podman[562372]: 2025-10-13 15:49:15.002216063 +0000 UTC m=+0.144648986 container remove de73f7fa4efd7180edca2d6d0bd44ed07e31167af9e1bc0ba9a1f425903634a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:49:15 standalone.localdomain kernel: device tap042f2002-99 left promiscuous mode
Oct 13 15:49:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:15Z|00762|binding|INFO|Releasing lport 042f2002-993c-416e-bc05-97dbf1ad379a from this chassis (sb_readonly=0)
Oct 13 15:49:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:15.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:15Z|00763|binding|INFO|Setting lport 042f2002-993c-416e-bc05-97dbf1ad379a down in Southbound
Oct 13 15:49:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:15.060 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::2/64 2001:db8::f816:3eff:fe16:70a2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '8', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=042f2002-993c-416e-bc05-97dbf1ad379a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:15.062 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 042f2002-993c-416e-bc05-97dbf1ad379a in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:49:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:15.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:15.066 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:15.067 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[60f404b7-e27f-42b8-b104-924e89030080]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f37eb83ed92eb8014bf269eaf41503fe1fadeb97d93a2274acf683ff87bc28cf-merged.mount: Deactivated successfully.
Oct 13 15:49:15 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de73f7fa4efd7180edca2d6d0bd44ed07e31167af9e1bc0ba9a1f425903634a9-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:15 standalone.localdomain dnsmasq[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/addn_hosts - 1 addresses
Oct 13 15:49:15 standalone.localdomain podman[562412]: 2025-10-13 15:49:15.220550622 +0000 UTC m=+0.069714074 container kill d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:49:15 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/host
Oct 13 15:49:15 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/opts
Oct 13 15:49:15 standalone.localdomain systemd[1]: tmp-crun.d4u9BJ.mount: Deactivated successfully.
Oct 13 15:49:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:15.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:49:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:15.252 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:49:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:15.256 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:49:15 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:49:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:15.308 496978 INFO neutron.agent.dhcp.agent [None req-47fff5c2-0808-4b02-8647-ccff2402562d - - - - - -] Resizing dhcp processing queue green pool size to: 8
Oct 13 15:49:15 standalone.localdomain podman[562451]: 2025-10-13 15:49:15.691982024 +0000 UTC m=+0.069049672 container kill 163f98061f5c7483c7f5a0abb3f8b85ce1cb19ca24084b939091a83e23984698 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c78f46e2-5bde-47ef-9634-d0f5d8db615c, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:49:15 standalone.localdomain dnsmasq[559830]: read /var/lib/neutron/dhcp/c78f46e2-5bde-47ef-9634-d0f5d8db615c/addn_hosts - 0 addresses
Oct 13 15:49:15 standalone.localdomain dnsmasq-dhcp[559830]: read /var/lib/neutron/dhcp/c78f46e2-5bde-47ef-9634-d0f5d8db615c/host
Oct 13 15:49:15 standalone.localdomain dnsmasq-dhcp[559830]: read /var/lib/neutron/dhcp/c78f46e2-5bde-47ef-9634-d0f5d8db615c/opts
Oct 13 15:49:15 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:15.743 2 INFO neutron.agent.securitygroups_rpc [None req-f0e055eb-8361-4cae-94b4-b7c4443ab7e7 979d8e902c31424e9474d83415014da1 c4451aaa3a924fe48c89930707dab5c1 - - default default] Security group member updated ['64edacfd-ff73-4946-8d3e-149467f27125']
Oct 13 15:49:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4049: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:15Z|00764|binding|INFO|Releasing lport 0c9d7e03-89ce-4128-b4af-4116af140786 from this chassis (sb_readonly=0)
Oct 13 15:49:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:15.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:15 standalone.localdomain kernel: device tap0c9d7e03-89 left promiscuous mode
Oct 13 15:49:15 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:15Z|00765|binding|INFO|Setting lport 0c9d7e03-89ce-4128-b4af-4116af140786 down in Southbound
Oct 13 15:49:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:15.939 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-c78f46e2-5bde-47ef-9634-d0f5d8db615c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c78f46e2-5bde-47ef-9634-d0f5d8db615c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a101722d-1595-4b7d-8999-0ab37e536fee, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=0c9d7e03-89ce-4128-b4af-4116af140786) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:15.941 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 0c9d7e03-89ce-4128-b4af-4116af140786 in datapath c78f46e2-5bde-47ef-9634-d0f5d8db615c unbound from our chassis
Oct 13 15:49:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:15.943 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c78f46e2-5bde-47ef-9634-d0f5d8db615c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:15 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:15.944 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[f0771332-9a50-4b38-85a8-52be4b0ceaae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:15.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:15.993 496978 INFO neutron.agent.linux.ip_lib [None req-b213c6e3-d41b-42be-95b0-4c5cde8efc5a - - - - - -] Device tap3e92ec44-32 cannot be used as it has no MAC address
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:16 standalone.localdomain dnsmasq[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/addn_hosts - 0 addresses
Oct 13 15:49:16 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/host
Oct 13 15:49:16 standalone.localdomain dnsmasq-dhcp[560569]: read /var/lib/neutron/dhcp/d535e530-d170-4ecf-a39a-8a44c6a50a08/opts
Oct 13 15:49:16 standalone.localdomain podman[562495]: 2025-10-13 15:49:16.025588982 +0000 UTC m=+0.064252534 container kill d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:49:16 standalone.localdomain kernel: device tap3e92ec44-32 entered promiscuous mode
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:16Z|00766|binding|INFO|Claiming lport 3e92ec44-32d0-48f0-a90f-a14a72aa9cbb for this chassis.
Oct 13 15:49:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:16Z|00767|binding|INFO|3e92ec44-32d0-48f0-a90f-a14a72aa9cbb: Claiming unknown
Oct 13 15:49:16 standalone.localdomain NetworkManager[5962]: <info>  [1760370556.0349] manager: (tap3e92ec44-32): new Generic device (/org/freedesktop/NetworkManager/Devices/132)
Oct 13 15:49:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:16Z|00768|binding|INFO|Setting lport 3e92ec44-32d0-48f0-a90f-a14a72aa9cbb ovn-installed in OVS
Oct 13 15:49:16 standalone.localdomain systemd-udevd[562513]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:16Z|00769|binding|INFO|Setting lport 3e92ec44-32d0-48f0-a90f-a14a72aa9cbb up in Southbound
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:16.041 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=3e92ec44-32d0-48f0-a90f-a14a72aa9cbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:16.042 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 3e92ec44-32d0-48f0-a90f-a14a72aa9cbb in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:49:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:16.044 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port fd659904-445f-4119-aff8-3e04170446f1 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:49:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:16.044 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:16.045 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[db80e171-520b-4633-8b20-c7ee626c5a16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3e92ec44-32: No such device
Oct 13 15:49:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3e92ec44-32: No such device
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3e92ec44-32: No such device
Oct 13 15:49:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3e92ec44-32: No such device
Oct 13 15:49:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3e92ec44-32: No such device
Oct 13 15:49:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3e92ec44-32: No such device
Oct 13 15:49:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3e92ec44-32: No such device
Oct 13 15:49:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap3e92ec44-32: No such device
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.247 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.248 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.248 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.249 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.250 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:49:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:16Z|00770|binding|INFO|Removing iface tap439015dd-77 ovn-installed in OVS
Oct 13 15:49:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:16.440 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 2a5fa35b-b9d0-469d-a384-35af60e7eec3 with type ""
Oct 13 15:49:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:16Z|00771|binding|INFO|Removing lport 439015dd-77cd-4b09-8545-f29f99597f23 ovn-installed in OVS
Oct 13 15:49:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:16.442 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-d535e530-d170-4ecf-a39a-8a44c6a50a08', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d535e530-d170-4ecf-a39a-8a44c6a50a08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4451aaa3a924fe48c89930707dab5c1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ebea9263-888d-4cb9-bb3f-25996716ef94, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=439015dd-77cd-4b09-8545-f29f99597f23) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:16.444 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 439015dd-77cd-4b09-8545-f29f99597f23 in datapath d535e530-d170-4ecf-a39a-8a44c6a50a08 unbound from our chassis
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:16.448 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d535e530-d170-4ecf-a39a-8a44c6a50a08, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:16.449 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5c20dc-c73f-418c-99a7-68833456db89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:16 standalone.localdomain dnsmasq[560569]: exiting on receipt of SIGTERM
Oct 13 15:49:16 standalone.localdomain podman[562596]: 2025-10-13 15:49:16.499861201 +0000 UTC m=+0.059674342 container kill d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:49:16 standalone.localdomain systemd[1]: libpod-d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649.scope: Deactivated successfully.
Oct 13 15:49:16 standalone.localdomain podman[562615]: 2025-10-13 15:49:16.56523938 +0000 UTC m=+0.039884313 container died d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:49:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:16 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-cbc91082f87a24d62a342c2b4bcf93244593a5e0f5ec4490a644bd602636afeb-merged.mount: Deactivated successfully.
Oct 13 15:49:16 standalone.localdomain podman[562615]: 2025-10-13 15:49:16.6135261 +0000 UTC m=+0.088171013 container remove d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d535e530-d170-4ecf-a39a-8a44c6a50a08, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:16 standalone.localdomain systemd[1]: libpod-conmon-d30615e0170f44a7280d849e953abc314654064cc2a389a2b1837ab215f52649.scope: Deactivated successfully.
Oct 13 15:49:16 standalone.localdomain kernel: device tap439015dd-77 left promiscuous mode
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:16.680 496978 INFO neutron.agent.dhcp.agent [None req-0790d42d-c16b-40a6-97f2-df54fdabae56 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:16.681 496978 INFO neutron.agent.dhcp.agent [None req-0790d42d-c16b-40a6-97f2-df54fdabae56 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:16 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:49:16 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2764246100' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:49:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:16Z|00772|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.777 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.852 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.852 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.852 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:49:16 standalone.localdomain dnsmasq[559331]: read /var/lib/neutron/dhcp/6b5ab3aa-b7b0-43a6-9f72-c1785d339346/addn_hosts - 0 addresses
Oct 13 15:49:16 standalone.localdomain dnsmasq-dhcp[559331]: read /var/lib/neutron/dhcp/6b5ab3aa-b7b0-43a6-9f72-c1785d339346/host
Oct 13 15:49:16 standalone.localdomain dnsmasq-dhcp[559331]: read /var/lib/neutron/dhcp/6b5ab3aa-b7b0-43a6-9f72-c1785d339346/opts
Oct 13 15:49:16 standalone.localdomain podman[562664]: 2025-10-13 15:49:16.855322704 +0000 UTC m=+0.043945768 container kill 0674cf4b04559f9736427267426ecc85f2fb5aa5f3a8b53c787e435a56968960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6b5ab3aa-b7b0-43a6-9f72-c1785d339346, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.855 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:49:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:16.855 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:49:16 standalone.localdomain ceph-mon[29756]: pgmap v4049: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:16 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2764246100' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.000 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.001 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9117MB free_disk=6.9495849609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.001 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.001 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:49:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:17Z|00773|binding|INFO|Releasing lport d4908183-1630-4217-b316-23bcb326b470 from this chassis (sb_readonly=0)
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:17Z|00774|binding|INFO|Setting lport d4908183-1630-4217-b316-23bcb326b470 down in Southbound
Oct 13 15:49:17 standalone.localdomain kernel: device tapd4908183-16 left promiscuous mode
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.065 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.065 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.065 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.065 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:49:17 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dd535e530\x2dd170\x2d4ecf\x2da39a\x2d8a44c6a50a08.mount: Deactivated successfully.
Oct 13 15:49:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:17.074 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-6b5ab3aa-b7b0-43a6-9f72-c1785d339346', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b5ab3aa-b7b0-43a6-9f72-c1785d339346', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f65d01b638479e947a9c2611255c4e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4866d3a8-dc59-4347-84d0-87409d8e96ea, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=d4908183-1630-4217-b316-23bcb326b470) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:17.076 378821 INFO neutron.agent.ovn.metadata.agent [-] Port d4908183-1630-4217-b316-23bcb326b470 in datapath 6b5ab3aa-b7b0-43a6-9f72-c1785d339346 unbound from our chassis
Oct 13 15:49:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:17.080 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6b5ab3aa-b7b0-43a6-9f72-c1785d339346, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:17.080 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[945adf63-5436-4f02-928d-dbb692af9b3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:17 standalone.localdomain dnsmasq[559830]: exiting on receipt of SIGTERM
Oct 13 15:49:17 standalone.localdomain podman[562735]: 2025-10-13 15:49:17.114709951 +0000 UTC m=+0.058633001 container kill 163f98061f5c7483c7f5a0abb3f8b85ce1cb19ca24084b939091a83e23984698 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c78f46e2-5bde-47ef-9634-d0f5d8db615c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:49:17 standalone.localdomain systemd[1]: libpod-163f98061f5c7483c7f5a0abb3f8b85ce1cb19ca24084b939091a83e23984698.scope: Deactivated successfully.
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.118 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:49:17 standalone.localdomain podman[562706]: 
Oct 13 15:49:17 standalone.localdomain podman[562706]: 2025-10-13 15:49:17.138371481 +0000 UTC m=+0.146816063 container create 9345bdc9460de2e19edd5c58b59929271aea6de0a2c3e6d4d4254d23d8ef9293 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:17 standalone.localdomain podman[562748]: 2025-10-13 15:49:17.166099237 +0000 UTC m=+0.041410060 container died 163f98061f5c7483c7f5a0abb3f8b85ce1cb19ca24084b939091a83e23984698 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c78f46e2-5bde-47ef-9634-d0f5d8db615c, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:49:17 standalone.localdomain systemd[1]: Started libpod-conmon-9345bdc9460de2e19edd5c58b59929271aea6de0a2c3e6d4d4254d23d8ef9293.scope.
Oct 13 15:49:17 standalone.localdomain podman[562706]: 2025-10-13 15:49:17.089927796 +0000 UTC m=+0.098372438 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/619b395e8d0e533ff802204f2ef0d960fd29df13211e1afd3715dead1019eedc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:17 standalone.localdomain podman[562706]: 2025-10-13 15:49:17.20768523 +0000 UTC m=+0.216129832 container init 9345bdc9460de2e19edd5c58b59929271aea6de0a2c3e6d4d4254d23d8ef9293 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:17 standalone.localdomain podman[562706]: 2025-10-13 15:49:17.216220994 +0000 UTC m=+0.224665596 container start 9345bdc9460de2e19edd5c58b59929271aea6de0a2c3e6d4d4254d23d8ef9293 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:49:17 standalone.localdomain dnsmasq[562795]: started, version 2.85 cachesize 150
Oct 13 15:49:17 standalone.localdomain dnsmasq[562795]: DNS service limited to local subnets
Oct 13 15:49:17 standalone.localdomain dnsmasq[562795]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:17 standalone.localdomain dnsmasq[562795]: warning: no upstream servers configured
Oct 13 15:49:17 standalone.localdomain dnsmasq-dhcp[562795]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:49:17 standalone.localdomain dnsmasq[562795]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:49:17 standalone.localdomain dnsmasq-dhcp[562795]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:17 standalone.localdomain dnsmasq-dhcp[562795]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:17 standalone.localdomain podman[562748]: 2025-10-13 15:49:17.251741801 +0000 UTC m=+0.127052594 container cleanup 163f98061f5c7483c7f5a0abb3f8b85ce1cb19ca24084b939091a83e23984698 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c78f46e2-5bde-47ef-9634-d0f5d8db615c, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:17 standalone.localdomain systemd[1]: libpod-conmon-163f98061f5c7483c7f5a0abb3f8b85ce1cb19ca24084b939091a83e23984698.scope: Deactivated successfully.
Oct 13 15:49:17 standalone.localdomain podman[562758]: 2025-10-13 15:49:17.283624464 +0000 UTC m=+0.137398712 container remove 163f98061f5c7483c7f5a0abb3f8b85ce1cb19ca24084b939091a83e23984698 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c78f46e2-5bde-47ef-9634-d0f5d8db615c, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:49:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:17Z|00775|binding|INFO|Releasing lport 3e92ec44-32d0-48f0-a90f-a14a72aa9cbb from this chassis (sb_readonly=0)
Oct 13 15:49:17 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:17Z|00776|binding|INFO|Setting lport 3e92ec44-32d0-48f0-a90f-a14a72aa9cbb down in Southbound
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:17 standalone.localdomain kernel: device tap3e92ec44-32 left promiscuous mode
Oct 13 15:49:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:17.347 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe07:da36/64 2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=3e92ec44-32d0-48f0-a90f-a14a72aa9cbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:17.348 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 3e92ec44-32d0-48f0-a90f-a14a72aa9cbb in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:49:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:17.351 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:17.351 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[de81ee64-27ec-41be-b855-38aee1e35216]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:17.389 496978 INFO neutron.agent.dhcp.agent [None req-b330973d-5e61-4c9f-9092-fd94ccb24efe - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:17.389 496978 INFO neutron.agent.dhcp.agent [None req-b330973d-5e61-4c9f-9092-fd94ccb24efe - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:17.413 496978 INFO neutron.agent.dhcp.agent [None req-47eebf44-edff-4ac2-ad2d-70bef2afef13 - - - - - -] DHCP configuration for ports {'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:49:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2143762171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.525 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.531 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.555 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.558 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:49:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:17.559 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:49:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:49:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:49:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:49:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4050: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:17 standalone.localdomain podman[562809]: 2025-10-13 15:49:17.831750744 +0000 UTC m=+0.094279181 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 13 15:49:17 standalone.localdomain podman[562809]: 2025-10-13 15:49:17.846960863 +0000 UTC m=+0.109489330 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible)
Oct 13 15:49:17 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:49:17 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2143762171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:49:17 standalone.localdomain podman[562810]: 2025-10-13 15:49:17.928746327 +0000 UTC m=+0.184851386 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:49:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:17.955 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:e7:a4 2001:db8:0:1:f816:3eff:fe02:e7a4'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe02:e7a4/64', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b3d945e2-458a-437c-9a37-dcc9416fabab) old=Port_Binding(mac=['fa:16:3e:02:e7:a4 2001:db8::f816:3eff:fe02:e7a4'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe02:e7a4/64', 'neutron:device_id': 'ovnmeta-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:17 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:49:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:17.957 378821 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b3d945e2-458a-437c-9a37-dcc9416fabab in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 updated
Oct 13 15:49:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:17.961 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:17 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:17.963 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[4029d8f5-7f8f-4016-905b-ab7f04d4f566]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:17 standalone.localdomain podman[562810]: 2025-10-13 15:49:17.96801981 +0000 UTC m=+0.224124919 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:49:17 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:49:18 standalone.localdomain podman[562850]: 2025-10-13 15:49:18.053628153 +0000 UTC m=+0.084053796 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, name=rhosp17/openstack-swift-object, maintainer=OpenStack TripleO Team, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, build-date=2025-07-21T14:56:28, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-object-container, architecture=x86_64, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, batch=17.1_20250721.1, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, release=1)
Oct 13 15:49:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b40e5c1201ff198ad71abecbc292a17bb6345703f629c95c6dc12ffd559649bd-merged.mount: Deactivated successfully.
Oct 13 15:49:18 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-163f98061f5c7483c7f5a0abb3f8b85ce1cb19ca24084b939091a83e23984698-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:18 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dc78f46e2\x2d5bde\x2d47ef\x2d9634\x2dd0f5d8db615c.mount: Deactivated successfully.
Oct 13 15:49:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:49:18 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:49:18 standalone.localdomain podman[562872]: 2025-10-13 15:49:18.2015875 +0000 UTC m=+0.097491141 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, release=1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-07-21T16:11:22, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, config_id=tripleo_step4, io.buildah.version=1.33.12, name=rhosp17/openstack-swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.9, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 15:49:18 standalone.localdomain podman[562895]: 2025-10-13 15:49:18.27611764 +0000 UTC m=+0.071124186 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, name=rhosp17/openstack-swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, vcs-type=git, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, container_name=swift_container_server, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-container, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, version=17.1.9, build-date=2025-07-21T15:54:32, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-container-container)
Oct 13 15:49:18 standalone.localdomain podman[562850]: 2025-10-13 15:49:18.319997725 +0000 UTC m=+0.350423428 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, build-date=2025-07-21T14:56:28, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, architecture=x86_64, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, release=1, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, container_name=swift_object_server, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:49:18 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:49:18 standalone.localdomain podman[562872]: 2025-10-13 15:49:18.417893427 +0000 UTC m=+0.313797088 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vcs-type=git, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, summary=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, version=17.1.9, managed_by=tripleo_ansible, batch=17.1_20250721.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, container_name=swift_account_server, config_id=tripleo_step4, tcib_managed=true, build-date=2025-07-21T16:11:22, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:49:18 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:49:18 standalone.localdomain podman[562895]: 2025-10-13 15:49:18.453803605 +0000 UTC m=+0.248810151 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-container-container, io.openshift.expose-services=, build-date=2025-07-21T15:54:32, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.buildah.version=1.33.12, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, container_name=swift_container_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team)
Oct 13 15:49:18 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:49:18 standalone.localdomain dnsmasq[562795]: exiting on receipt of SIGTERM
Oct 13 15:49:18 standalone.localdomain podman[562948]: 2025-10-13 15:49:18.521959169 +0000 UTC m=+0.109583594 container kill 9345bdc9460de2e19edd5c58b59929271aea6de0a2c3e6d4d4254d23d8ef9293 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:49:18 standalone.localdomain systemd[1]: libpod-9345bdc9460de2e19edd5c58b59929271aea6de0a2c3e6d4d4254d23d8ef9293.scope: Deactivated successfully.
Oct 13 15:49:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:18.554 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:49:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:18.555 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:49:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:18.556 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:49:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:18.556 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:49:18 standalone.localdomain podman[562963]: 2025-10-13 15:49:18.571992413 +0000 UTC m=+0.041139541 container died 9345bdc9460de2e19edd5c58b59929271aea6de0a2c3e6d4d4254d23d8ef9293 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 15:49:18 standalone.localdomain podman[562963]: 2025-10-13 15:49:18.612921847 +0000 UTC m=+0.082068935 container cleanup 9345bdc9460de2e19edd5c58b59929271aea6de0a2c3e6d4d4254d23d8ef9293 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:49:18 standalone.localdomain systemd[1]: libpod-conmon-9345bdc9460de2e19edd5c58b59929271aea6de0a2c3e6d4d4254d23d8ef9293.scope: Deactivated successfully.
Oct 13 15:49:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:18.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:18 standalone.localdomain podman[562970]: 2025-10-13 15:49:18.647871775 +0000 UTC m=+0.103877087 container remove 9345bdc9460de2e19edd5c58b59929271aea6de0a2c3e6d4d4254d23d8ef9293 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:49:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:49:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2913508704' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:49:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:49:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2913508704' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:49:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:18.699 496978 INFO neutron.agent.linux.ip_lib [None req-7d3cdfec-fb2f-4ff5-b848-d5031207a62e - - - - - -] Device tap3e92ec44-32 cannot be used as it has no MAC address
Oct 13 15:49:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:18.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:18 standalone.localdomain kernel: device tap3e92ec44-32 entered promiscuous mode
Oct 13 15:49:18 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:18Z|00777|binding|INFO|Claiming lport 3e92ec44-32d0-48f0-a90f-a14a72aa9cbb for this chassis.
Oct 13 15:49:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:18.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:18 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:18Z|00778|binding|INFO|3e92ec44-32d0-48f0-a90f-a14a72aa9cbb: Claiming unknown
Oct 13 15:49:18 standalone.localdomain NetworkManager[5962]: <info>  [1760370558.7528] manager: (tap3e92ec44-32): new Generic device (/org/freedesktop/NetworkManager/Devices/133)
Oct 13 15:49:18 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:18Z|00779|binding|INFO|Setting lport 3e92ec44-32d0-48f0-a90f-a14a72aa9cbb ovn-installed in OVS
Oct 13 15:49:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:18.763 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe07:da36/64 2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=3e92ec44-32d0-48f0-a90f-a14a72aa9cbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:18 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:18Z|00780|binding|INFO|Setting lport 3e92ec44-32d0-48f0-a90f-a14a72aa9cbb up in Southbound
Oct 13 15:49:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:18.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:18.765 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 3e92ec44-32d0-48f0-a90f-a14a72aa9cbb in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 bound to our chassis
Oct 13 15:49:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:18.768 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port fd659904-445f-4119-aff8-3e04170446f1 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:49:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:18.768 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:18 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:18.768 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[c478ce1c-675d-4d94-9059-f0727922b1d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:18.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:18.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:18 standalone.localdomain ceph-mon[29756]: pgmap v4050: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2913508704' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:49:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2913508704' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:49:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:18.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:18 standalone.localdomain dnsmasq[558841]: read /var/lib/neutron/dhcp/bfc42da9-a81e-4c8a-9c74-bef94743e411/addn_hosts - 0 addresses
Oct 13 15:49:18 standalone.localdomain dnsmasq-dhcp[558841]: read /var/lib/neutron/dhcp/bfc42da9-a81e-4c8a-9c74-bef94743e411/host
Oct 13 15:49:18 standalone.localdomain podman[563023]: 2025-10-13 15:49:18.945479302 +0000 UTC m=+0.055017839 container kill 66c4210ce927c7799e9ec2fee5a19f099e3259e7718a9f929f8841ca0ab436f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfc42da9-a81e-4c8a-9c74-bef94743e411, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:49:18 standalone.localdomain dnsmasq-dhcp[558841]: read /var/lib/neutron/dhcp/bfc42da9-a81e-4c8a-9c74-bef94743e411/opts
Oct 13 15:49:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-619b395e8d0e533ff802204f2ef0d960fd29df13211e1afd3715dead1019eedc-merged.mount: Deactivated successfully.
Oct 13 15:49:19 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9345bdc9460de2e19edd5c58b59929271aea6de0a2c3e6d4d4254d23d8ef9293-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:19Z|00781|binding|INFO|Releasing lport b9ec2641-69a2-484a-98e9-00e31e258d56 from this chassis (sb_readonly=0)
Oct 13 15:49:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:19.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:19Z|00782|binding|INFO|Setting lport b9ec2641-69a2-484a-98e9-00e31e258d56 down in Southbound
Oct 13 15:49:19 standalone.localdomain kernel: device tapb9ec2641-69 left promiscuous mode
Oct 13 15:49:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:19.184 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-bfc42da9-a81e-4c8a-9c74-bef94743e411', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfc42da9-a81e-4c8a-9c74-bef94743e411', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=613b0626-ce44-4ccf-af6a-635d57bebef1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=b9ec2641-69a2-484a-98e9-00e31e258d56) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:19.185 378821 INFO neutron.agent.ovn.metadata.agent [-] Port b9ec2641-69a2-484a-98e9-00e31e258d56 in datapath bfc42da9-a81e-4c8a-9c74-bef94743e411 unbound from our chassis
Oct 13 15:49:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:19.187 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bfc42da9-a81e-4c8a-9c74-bef94743e411, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:19 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:19.188 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[d2dd4805-687d-448f-944a-361f62200812]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:19.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:19 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:19.339 2 INFO neutron.agent.securitygroups_rpc [None req-01e39149-0f37-4007-9cc3-b1eebbbd2dc1 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:49:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4051: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:19 standalone.localdomain podman[563090]: 
Oct 13 15:49:19 standalone.localdomain podman[563090]: 2025-10-13 15:49:19.816415795 +0000 UTC m=+0.054743340 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:19 standalone.localdomain podman[563090]: 2025-10-13 15:49:19.917204126 +0000 UTC m=+0.155531601 container create a3143394b6fe9c6104f5421b04bd2ec655527c7e50ecfdfdaf3a65dbd8a76bef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:19 standalone.localdomain podman[563137]: 2025-10-13 15:49:19.963717473 +0000 UTC m=+0.063711918 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:49:19 standalone.localdomain systemd[1]: Started libpod-conmon-a3143394b6fe9c6104f5421b04bd2ec655527c7e50ecfdfdaf3a65dbd8a76bef.scope.
Oct 13 15:49:19 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:49:19 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:49:19 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:49:19 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:19Z|00783|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:49:19 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:19 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d42bb11c74a4b1deb3c9a52246cabacd9c68d5eb301f9f46715dfab7b7089442/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:20 standalone.localdomain podman[563090]: 2025-10-13 15:49:20.003368996 +0000 UTC m=+0.241696471 container init a3143394b6fe9c6104f5421b04bd2ec655527c7e50ecfdfdaf3a65dbd8a76bef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:49:20 standalone.localdomain dnsmasq[558841]: exiting on receipt of SIGTERM
Oct 13 15:49:20 standalone.localdomain podman[563145]: 2025-10-13 15:49:20.008207726 +0000 UTC m=+0.085660725 container kill 66c4210ce927c7799e9ec2fee5a19f099e3259e7718a9f929f8841ca0ab436f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfc42da9-a81e-4c8a-9c74-bef94743e411, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:49:20 standalone.localdomain systemd[1]: libpod-66c4210ce927c7799e9ec2fee5a19f099e3259e7718a9f929f8841ca0ab436f3.scope: Deactivated successfully.
Oct 13 15:49:20 standalone.localdomain podman[563090]: 2025-10-13 15:49:20.013252471 +0000 UTC m=+0.251579956 container start a3143394b6fe9c6104f5421b04bd2ec655527c7e50ecfdfdaf3a65dbd8a76bef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:20 standalone.localdomain dnsmasq[563174]: started, version 2.85 cachesize 150
Oct 13 15:49:20 standalone.localdomain dnsmasq[563174]: DNS service limited to local subnets
Oct 13 15:49:20 standalone.localdomain dnsmasq[563174]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:20 standalone.localdomain dnsmasq[563174]: warning: no upstream servers configured
Oct 13 15:49:20 standalone.localdomain dnsmasq-dhcp[563174]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:49:20 standalone.localdomain dnsmasq[563174]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:49:20 standalone.localdomain dnsmasq-dhcp[563174]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:20 standalone.localdomain dnsmasq-dhcp[563174]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:20.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:20 standalone.localdomain podman[563183]: 2025-10-13 15:49:20.122015128 +0000 UTC m=+0.034713772 container died 66c4210ce927c7799e9ec2fee5a19f099e3259e7718a9f929f8841ca0ab436f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfc42da9-a81e-4c8a-9c74-bef94743e411, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:49:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:20.133 496978 INFO neutron.agent.dhcp.agent [None req-7d3cdfec-fb2f-4ff5-b848-d5031207a62e - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:18Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4e130>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4e7c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4ec40>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4e850>], id=f673ed15-ad91-40ef-a372-be9531a38204, ip_allocation=immediate, mac_address=fa:16:3e:99:15:a3, name=tempest-NetworksTestDHCPv6-503135463, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=57, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['1fb78c18-36b8-4636-ab3d-7c9b0af6ea66', 'dd398638-f9dc-4562-8b40-1987ad893d04'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:49:15Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], 
standard_attr_id=2424, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:49:19Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:49:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-66c4210ce927c7799e9ec2fee5a19f099e3259e7718a9f929f8841ca0ab436f3-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:20 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bbee31f0984946c6a615f6017a59b1d9dd6e9c409e7575c7954c553ebed20f28-merged.mount: Deactivated successfully.
Oct 13 15:49:20 standalone.localdomain podman[563183]: 2025-10-13 15:49:20.160571609 +0000 UTC m=+0.073270243 container remove 66c4210ce927c7799e9ec2fee5a19f099e3259e7718a9f929f8841ca0ab436f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfc42da9-a81e-4c8a-9c74-bef94743e411, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:20 standalone.localdomain systemd[1]: libpod-conmon-66c4210ce927c7799e9ec2fee5a19f099e3259e7718a9f929f8841ca0ab436f3.scope: Deactivated successfully.
Oct 13 15:49:20 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:20.196 2 INFO neutron.agent.securitygroups_rpc [None req-c604ebca-ccc9-49cd-a18e-2e3f24b6fcd5 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:49:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:20.204 496978 INFO neutron.agent.dhcp.agent [None req-a90e7293-6b90-40a9-8f88-ae7e4b6fcfbc - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:20 standalone.localdomain systemd[1]: run-netns-qdhcp\x2dbfc42da9\x2da81e\x2d4c8a\x2d9c74\x2dbef94743e411.mount: Deactivated successfully.
Oct 13 15:49:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:20.270 496978 INFO neutron.agent.dhcp.agent [None req-21bade2a-708a-403a-9b3f-9736526d4e72 - - - - - -] DHCP configuration for ports {'3e92ec44-32d0-48f0-a90f-a14a72aa9cbb', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:49:20 standalone.localdomain dnsmasq[563174]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 2 addresses
Oct 13 15:49:20 standalone.localdomain dnsmasq-dhcp[563174]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:20 standalone.localdomain dnsmasq-dhcp[563174]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:20 standalone.localdomain podman[563226]: 2025-10-13 15:49:20.344022541 +0000 UTC m=+0.047748905 container kill a3143394b6fe9c6104f5421b04bd2ec655527c7e50ecfdfdaf3a65dbd8a76bef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:49:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:20.698 496978 INFO neutron.agent.dhcp.agent [None req-7565c474-d3d8-416e-8dbb-c8133c09c616 - - - - - -] DHCP configuration for ports {'f673ed15-ad91-40ef-a372-be9531a38204'} is completed
Oct 13 15:49:20 standalone.localdomain dnsmasq[563174]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:49:20 standalone.localdomain dnsmasq-dhcp[563174]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:20 standalone.localdomain dnsmasq-dhcp[563174]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:20 standalone.localdomain podman[563275]: 2025-10-13 15:49:20.700999481 +0000 UTC m=+0.107344875 container kill a3143394b6fe9c6104f5421b04bd2ec655527c7e50ecfdfdaf3a65dbd8a76bef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:49:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:20.728 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:20 standalone.localdomain dnsmasq[559331]: exiting on receipt of SIGTERM
Oct 13 15:49:20 standalone.localdomain podman[563294]: 2025-10-13 15:49:20.740713826 +0000 UTC m=+0.078291217 container kill 0674cf4b04559f9736427267426ecc85f2fb5aa5f3a8b53c787e435a56968960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6b5ab3aa-b7b0-43a6-9f72-c1785d339346, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:49:20 standalone.localdomain systemd[1]: libpod-0674cf4b04559f9736427267426ecc85f2fb5aa5f3a8b53c787e435a56968960.scope: Deactivated successfully.
Oct 13 15:49:20 standalone.localdomain podman[563312]: 2025-10-13 15:49:20.804031581 +0000 UTC m=+0.039168940 container died 0674cf4b04559f9736427267426ecc85f2fb5aa5f3a8b53c787e435a56968960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6b5ab3aa-b7b0-43a6-9f72-c1785d339346, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:49:20 standalone.localdomain podman[563312]: 2025-10-13 15:49:20.837719671 +0000 UTC m=+0.072857030 container remove 0674cf4b04559f9736427267426ecc85f2fb5aa5f3a8b53c787e435a56968960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6b5ab3aa-b7b0-43a6-9f72-c1785d339346, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:20 standalone.localdomain systemd[1]: libpod-conmon-0674cf4b04559f9736427267426ecc85f2fb5aa5f3a8b53c787e435a56968960.scope: Deactivated successfully.
Oct 13 15:49:20 standalone.localdomain ceph-mon[29756]: pgmap v4051: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:21.056 496978 INFO neutron.agent.dhcp.agent [None req-9bc0a732-8254-4372-bd3e-4ca0807399e4 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:21.057 496978 INFO neutron.agent.dhcp.agent [None req-9bc0a732-8254-4372-bd3e-4ca0807399e4 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-b33e666f7a3d43485d6534dea6c0ca4701a2b13699afe2588a6689f2f0b83e06-merged.mount: Deactivated successfully.
Oct 13 15:49:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0674cf4b04559f9736427267426ecc85f2fb5aa5f3a8b53c787e435a56968960-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:21 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d6b5ab3aa\x2db7b0\x2d43a6\x2d9f72\x2dc1785d339346.mount: Deactivated successfully.
Oct 13 15:49:21 standalone.localdomain podman[563361]: 2025-10-13 15:49:21.744170571 +0000 UTC m=+0.058270100 container kill a3143394b6fe9c6104f5421b04bd2ec655527c7e50ecfdfdaf3a65dbd8a76bef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:49:21 standalone.localdomain dnsmasq[563174]: exiting on receipt of SIGTERM
Oct 13 15:49:21 standalone.localdomain systemd[1]: libpod-a3143394b6fe9c6104f5421b04bd2ec655527c7e50ecfdfdaf3a65dbd8a76bef.scope: Deactivated successfully.
Oct 13 15:49:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4052: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:21 standalone.localdomain podman[563375]: 2025-10-13 15:49:21.799172139 +0000 UTC m=+0.044455904 container died a3143394b6fe9c6104f5421b04bd2ec655527c7e50ecfdfdaf3a65dbd8a76bef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:49:21 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a3143394b6fe9c6104f5421b04bd2ec655527c7e50ecfdfdaf3a65dbd8a76bef-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:21 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:21.826 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:21 standalone.localdomain podman[563375]: 2025-10-13 15:49:21.83807763 +0000 UTC m=+0.083361365 container cleanup a3143394b6fe9c6104f5421b04bd2ec655527c7e50ecfdfdaf3a65dbd8a76bef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:49:21 standalone.localdomain systemd[1]: libpod-conmon-a3143394b6fe9c6104f5421b04bd2ec655527c7e50ecfdfdaf3a65dbd8a76bef.scope: Deactivated successfully.
Oct 13 15:49:21 standalone.localdomain podman[563382]: 2025-10-13 15:49:21.860599115 +0000 UTC m=+0.090945989 container remove a3143394b6fe9c6104f5421b04bd2ec655527c7e50ecfdfdaf3a65dbd8a76bef (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:49:22 standalone.localdomain podman[563413]: 2025-10-13 15:49:22.047164713 +0000 UTC m=+0.066712049 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 13 15:49:22 standalone.localdomain podman[563413]: 2025-10-13 15:49:22.055049787 +0000 UTC m=+0.074597093 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 15:49:22 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:49:22 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d42bb11c74a4b1deb3c9a52246cabacd9c68d5eb301f9f46715dfab7b7089442-merged.mount: Deactivated successfully.
Oct 13 15:49:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:22.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:49:22 standalone.localdomain podman[563481]: 2025-10-13 15:49:22.796253286 +0000 UTC m=+0.117347103 container kill 700686dfb0855ba201ecd4d23a8eab0d3bb14da21685374b3260a05026ee07c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d0e9665-b447-40a6-9fa6-412d077bfcc1, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:49:22 standalone.localdomain dnsmasq[557847]: read /var/lib/neutron/dhcp/2d0e9665-b447-40a6-9fa6-412d077bfcc1/addn_hosts - 0 addresses
Oct 13 15:49:22 standalone.localdomain dnsmasq-dhcp[557847]: read /var/lib/neutron/dhcp/2d0e9665-b447-40a6-9fa6-412d077bfcc1/host
Oct 13 15:49:22 standalone.localdomain dnsmasq-dhcp[557847]: read /var/lib/neutron/dhcp/2d0e9665-b447-40a6-9fa6-412d077bfcc1/opts
Oct 13 15:49:22 standalone.localdomain podman[563499]: 
Oct 13 15:49:22 standalone.localdomain podman[563499]: 2025-10-13 15:49:22.847998083 +0000 UTC m=+0.089862295 container create f442d14d1ed2250f14e857ea3030918f96c103b153195ed91a29accf10f092ed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:49:22 standalone.localdomain systemd[1]: Started libpod-conmon-f442d14d1ed2250f14e857ea3030918f96c103b153195ed91a29accf10f092ed.scope.
Oct 13 15:49:22 standalone.localdomain ceph-mon[29756]: pgmap v4052: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:22 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:22 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66df7aa277b55534be63e364d439e32cbbed793c0c2ee0c8d9558a18751c77aa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:22 standalone.localdomain podman[563499]: 2025-10-13 15:49:22.909398238 +0000 UTC m=+0.151262450 container init f442d14d1ed2250f14e857ea3030918f96c103b153195ed91a29accf10f092ed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:49:22 standalone.localdomain podman[563499]: 2025-10-13 15:49:22.915537568 +0000 UTC m=+0.157401780 container start f442d14d1ed2250f14e857ea3030918f96c103b153195ed91a29accf10f092ed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:49:22 standalone.localdomain podman[563499]: 2025-10-13 15:49:22.810278449 +0000 UTC m=+0.052142681 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:22 standalone.localdomain dnsmasq[563526]: started, version 2.85 cachesize 150
Oct 13 15:49:22 standalone.localdomain dnsmasq[563526]: DNS service limited to local subnets
Oct 13 15:49:22 standalone.localdomain dnsmasq[563526]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:22 standalone.localdomain dnsmasq[563526]: warning: no upstream servers configured
Oct 13 15:49:22 standalone.localdomain dnsmasq-dhcp[563526]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:49:22 standalone.localdomain dnsmasq[563526]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:49:22 standalone.localdomain dnsmasq-dhcp[563526]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:22 standalone.localdomain dnsmasq-dhcp[563526]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:22 standalone.localdomain kernel: device tap6599aeca-da left promiscuous mode
Oct 13 15:49:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:22.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:22Z|00784|binding|INFO|Releasing lport 6599aeca-daf6-403a-a046-673e6a3ca277 from this chassis (sb_readonly=0)
Oct 13 15:49:22 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:22Z|00785|binding|INFO|Setting lport 6599aeca-daf6-403a-a046-673e6a3ca277 down in Southbound
Oct 13 15:49:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:22.986 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-2d0e9665-b447-40a6-9fa6-412d077bfcc1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d0e9665-b447-40a6-9fa6-412d077bfcc1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '745bbaede12e4123b190dd25fdacfb7d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=627a9433-425e-4a66-be18-c8c6a42b1b43, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=6599aeca-daf6-403a-a046-673e6a3ca277) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:22.987 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 6599aeca-daf6-403a-a046-673e6a3ca277 in datapath 2d0e9665-b447-40a6-9fa6-412d077bfcc1 unbound from our chassis
Oct 13 15:49:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:22.989 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2d0e9665-b447-40a6-9fa6-412d077bfcc1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:22.989 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[e533019f-2163-4752-afac-c42e3c5136c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:23.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:23 standalone.localdomain systemd[1]: tmp-crun.7tnbGA.mount: Deactivated successfully.
Oct 13 15:49:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:23.133 496978 INFO neutron.agent.dhcp.agent [None req-b6cf51f9-4d1a-4e43-878d-2248550216cd - - - - - -] DHCP configuration for ports {'3e92ec44-32d0-48f0-a90f-a14a72aa9cbb', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:49:23 standalone.localdomain dnsmasq[563526]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:49:23 standalone.localdomain dnsmasq-dhcp[563526]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:23 standalone.localdomain dnsmasq-dhcp[563526]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:23 standalone.localdomain podman[563548]: 2025-10-13 15:49:23.25447856 +0000 UTC m=+0.060651043 container kill f442d14d1ed2250f14e857ea3030918f96c103b153195ed91a29accf10f092ed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:49:23
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['images', 'volumes', 'manila_data', 'manila_metadata', 'vms', '.mgr', 'backups']
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:49:23 standalone.localdomain podman[563584]: 2025-10-13 15:49:23.456265239 +0000 UTC m=+0.056602848 container kill 700686dfb0855ba201ecd4d23a8eab0d3bb14da21685374b3260a05026ee07c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d0e9665-b447-40a6-9fa6-412d077bfcc1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:49:23 standalone.localdomain dnsmasq[557847]: exiting on receipt of SIGTERM
Oct 13 15:49:23 standalone.localdomain systemd[1]: libpod-700686dfb0855ba201ecd4d23a8eab0d3bb14da21685374b3260a05026ee07c0.scope: Deactivated successfully.
Oct 13 15:49:23 standalone.localdomain podman[563607]: 2025-10-13 15:49:23.53470341 +0000 UTC m=+0.053795742 container died 700686dfb0855ba201ecd4d23a8eab0d3bb14da21685374b3260a05026ee07c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d0e9665-b447-40a6-9fa6-412d077bfcc1, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:49:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8172 DF PROTO=TCP SPT=41500 DPT=9102 SEQ=230257982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD3F77E0000000001030307) 
Oct 13 15:49:23 standalone.localdomain podman[563607]: 2025-10-13 15:49:23.581263187 +0000 UTC m=+0.100355479 container remove 700686dfb0855ba201ecd4d23a8eab0d3bb14da21685374b3260a05026ee07c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d0e9665-b447-40a6-9fa6-412d077bfcc1, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:23.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:23.658 496978 INFO neutron.agent.dhcp.agent [None req-d76f5ce5-d9da-46da-8ee9-5acaedb6941c - - - - - -] DHCP configuration for ports {'3e92ec44-32d0-48f0-a90f-a14a72aa9cbb', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:49:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:23.659 496978 INFO neutron.agent.dhcp.agent [None req-3fd001c5-694b-403b-8781-3e2ebb2b9eba - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:23.659 496978 INFO neutron.agent.dhcp.agent [None req-3fd001c5-694b-403b-8781-3e2ebb2b9eba - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:23 standalone.localdomain systemd[1]: libpod-conmon-700686dfb0855ba201ecd4d23a8eab0d3bb14da21685374b3260a05026ee07c0.scope: Deactivated successfully.
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4053: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:23Z|00786|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:49:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:23.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:49:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:49:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:23.916 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:da:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'c6:b9:d4:9e:30:db'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:23.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:23.919 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 13 15:49:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2754dab4f64d172425d301727b977ed56d581a46c6322d5f47e86aa3a820d981-merged.mount: Deactivated successfully.
Oct 13 15:49:24 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-700686dfb0855ba201ecd4d23a8eab0d3bb14da21685374b3260a05026ee07c0-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:24 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d2d0e9665\x2db447\x2d40a6\x2d9fa6\x2d412d077bfcc1.mount: Deactivated successfully.
Oct 13 15:49:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8173 DF PROTO=TCP SPT=41500 DPT=9102 SEQ=230257982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD3FB770000000001030307) 
Oct 13 15:49:24 standalone.localdomain ceph-mon[29756]: pgmap v4053: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:24.942 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:25.646 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:49:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4054: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:25 standalone.localdomain podman[563644]: 2025-10-13 15:49:25.804565605 +0000 UTC m=+0.075351937 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:49:25 standalone.localdomain dnsmasq[563526]: exiting on receipt of SIGTERM
Oct 13 15:49:25 standalone.localdomain podman[563651]: 2025-10-13 15:49:25.836613104 +0000 UTC m=+0.098508272 container kill f442d14d1ed2250f14e857ea3030918f96c103b153195ed91a29accf10f092ed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:49:25 standalone.localdomain systemd[1]: libpod-f442d14d1ed2250f14e857ea3030918f96c103b153195ed91a29accf10f092ed.scope: Deactivated successfully.
Oct 13 15:49:25 standalone.localdomain podman[563644]: 2025-10-13 15:49:25.844879399 +0000 UTC m=+0.115665711 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:49:25 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:49:25 standalone.localdomain podman[563683]: 2025-10-13 15:49:25.901578409 +0000 UTC m=+0.044367420 container died f442d14d1ed2250f14e857ea3030918f96c103b153195ed91a29accf10f092ed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:49:25 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:25.921 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b9e3f7-f5c9-4d48-9eca-66995f72d494, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:49:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f442d14d1ed2250f14e857ea3030918f96c103b153195ed91a29accf10f092ed-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:25 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-66df7aa277b55534be63e364d439e32cbbed793c0c2ee0c8d9558a18751c77aa-merged.mount: Deactivated successfully.
Oct 13 15:49:25 standalone.localdomain podman[563683]: 2025-10-13 15:49:25.963717808 +0000 UTC m=+0.106506789 container remove f442d14d1ed2250f14e857ea3030918f96c103b153195ed91a29accf10f092ed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:26 standalone.localdomain systemd[1]: libpod-conmon-f442d14d1ed2250f14e857ea3030918f96c103b153195ed91a29accf10f092ed.scope: Deactivated successfully.
Oct 13 15:49:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8174 DF PROTO=TCP SPT=41500 DPT=9102 SEQ=230257982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD403760000000001030307) 
Oct 13 15:49:26 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:26.767 2 INFO neutron.agent.securitygroups_rpc [None req-9c310660-eb4b-40a6-a52b-3b9a9f11ebb9 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:49:26 standalone.localdomain ceph-mon[29756]: pgmap v4054: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:27 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:27.367 2 INFO neutron.agent.securitygroups_rpc [None req-23eb5ecb-76db-4a05-b073-5695771fd769 db28b41eddc54287a80f00ba256475f4 4076f5e0155f4270bb86d83d93b6e717 - - default default] Security group member updated ['806e10a6-01ff-49b5-b912-1261c07321ec']
Oct 13 15:49:27 standalone.localdomain podman[563757]: 
Oct 13 15:49:27 standalone.localdomain podman[563757]: 2025-10-13 15:49:27.453569165 +0000 UTC m=+0.077293846 container create 6ed9077bb18668e798c931ddbcbcf7c2c62ca4fcef9b03edbf601d15b2040c59 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:49:27 standalone.localdomain systemd[1]: Started libpod-conmon-6ed9077bb18668e798c931ddbcbcf7c2c62ca4fcef9b03edbf601d15b2040c59.scope.
Oct 13 15:49:27 standalone.localdomain systemd[1]: tmp-crun.vNJrLX.mount: Deactivated successfully.
Oct 13 15:49:27 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:27.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:27 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:27 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9256738a74710a42ad352493dbfd443bedb5be2afa62fa8972499f0bbd918f25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:27 standalone.localdomain podman[563757]: 2025-10-13 15:49:27.409103463 +0000 UTC m=+0.032828164 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:27 standalone.localdomain podman[563757]: 2025-10-13 15:49:27.514135005 +0000 UTC m=+0.137859706 container init 6ed9077bb18668e798c931ddbcbcf7c2c62ca4fcef9b03edbf601d15b2040c59 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:49:27 standalone.localdomain podman[563757]: 2025-10-13 15:49:27.519643375 +0000 UTC m=+0.143368056 container start 6ed9077bb18668e798c931ddbcbcf7c2c62ca4fcef9b03edbf601d15b2040c59 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 15:49:27 standalone.localdomain dnsmasq[563774]: started, version 2.85 cachesize 150
Oct 13 15:49:27 standalone.localdomain dnsmasq[563774]: DNS service limited to local subnets
Oct 13 15:49:27 standalone.localdomain dnsmasq[563774]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:27 standalone.localdomain dnsmasq[563774]: warning: no upstream servers configured
Oct 13 15:49:27 standalone.localdomain dnsmasq-dhcp[563774]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Oct 13 15:49:27 standalone.localdomain dnsmasq-dhcp[563774]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:49:27 standalone.localdomain dnsmasq[563774]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:49:27 standalone.localdomain dnsmasq-dhcp[563774]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:27 standalone.localdomain dnsmasq-dhcp[563774]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:27 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:27.583 496978 INFO neutron.agent.dhcp.agent [None req-87544464-7337-4917-a140-613470666153 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:26Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18899d5d90>, <neutron.agent.linux.dhcp.DictModel object at 0x7f18899d5400>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18899d5910>, <neutron.agent.linux.dhcp.DictModel object at 0x7f188998f040>], id=d22f40d7-b505-4a07-93a0-f8bf61545f22, ip_allocation=immediate, mac_address=fa:16:3e:56:20:fc, name=tempest-NetworksTestDHCPv6-691570706, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:46:30Z, description=, dns_domain=, id=f934a9b1-f0ba-494d-9c34-bbdad3043007, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1823190806, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7513, qos_policy_id=None, revision_number=61, router:external=False, shared=False, standard_attr_id=1760, status=ACTIVE, subnets=['2aad64d1-161f-43c9-a834-fa5697b5f33b', '445ab960-e379-42a7-ab78-06cdd59dd32a'], tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:49:23Z, vlan_transparent=None, network_id=f934a9b1-f0ba-494d-9c34-bbdad3043007, port_security_enabled=True, project_id=4076f5e0155f4270bb86d83d93b6e717, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['806e10a6-01ff-49b5-b912-1261c07321ec'], standard_attr_id=2439, status=DOWN, tags=[], tenant_id=4076f5e0155f4270bb86d83d93b6e717, updated_at=2025-10-13T15:49:26Z on network f934a9b1-f0ba-494d-9c34-bbdad3043007
Oct 13 15:49:27 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:27.722 496978 INFO neutron.agent.dhcp.agent [None req-214f80ab-7bc3-46ba-8d2e-6a4df233d1af - - - - - -] DHCP configuration for ports {'3e92ec44-32d0-48f0-a90f-a14a72aa9cbb', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:49:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:49:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4055: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:27 standalone.localdomain dnsmasq[563774]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 2 addresses
Oct 13 15:49:27 standalone.localdomain dnsmasq-dhcp[563774]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:27 standalone.localdomain podman[563794]: 2025-10-13 15:49:27.790207627 +0000 UTC m=+0.061863921 container kill 6ed9077bb18668e798c931ddbcbcf7c2c62ca4fcef9b03edbf601d15b2040c59 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:27 standalone.localdomain dnsmasq-dhcp[563774]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:28 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:28.025 2 INFO neutron.agent.securitygroups_rpc [None req-4ac8eac3-b306-420e-9064-556c2df271a4 118f63fe8531418a9a3eeb428d44c2e2 2b86e9279d7d4f0e8ff4244980e3ba19 - - default default] Security group member updated ['082236ce-f6b2-4668-93d7-679264d01112']
Oct 13 15:49:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:28.043 496978 INFO neutron.agent.dhcp.agent [None req-7bca0170-b6a3-46e0-a894-de8ffc9f7a5c - - - - - -] DHCP configuration for ports {'d22f40d7-b505-4a07-93a0-f8bf61545f22'} is completed
Oct 13 15:49:28 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:28.204 2 INFO neutron.agent.securitygroups_rpc [None req-4ac8eac3-b306-420e-9064-556c2df271a4 118f63fe8531418a9a3eeb428d44c2e2 2b86e9279d7d4f0e8ff4244980e3ba19 - - default default] Security group member updated ['082236ce-f6b2-4668-93d7-679264d01112']
Oct 13 15:49:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:49:28 standalone.localdomain dnsmasq[563774]: exiting on receipt of SIGTERM
Oct 13 15:49:28 standalone.localdomain podman[563834]: 2025-10-13 15:49:28.275757195 +0000 UTC m=+0.062869432 container kill 6ed9077bb18668e798c931ddbcbcf7c2c62ca4fcef9b03edbf601d15b2040c59 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:49:28 standalone.localdomain systemd[1]: libpod-6ed9077bb18668e798c931ddbcbcf7c2c62ca4fcef9b03edbf601d15b2040c59.scope: Deactivated successfully.
Oct 13 15:49:28 standalone.localdomain podman[563841]: 2025-10-13 15:49:28.331892868 +0000 UTC m=+0.089978789 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 13 15:49:28 standalone.localdomain podman[563841]: 2025-10-13 15:49:28.341226715 +0000 UTC m=+0.099312696 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 13 15:49:28 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:49:28 standalone.localdomain podman[563865]: 2025-10-13 15:49:28.407152601 +0000 UTC m=+0.106394105 container died 6ed9077bb18668e798c931ddbcbcf7c2c62ca4fcef9b03edbf601d15b2040c59 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 15:49:28 standalone.localdomain podman[563865]: 2025-10-13 15:49:28.458284309 +0000 UTC m=+0.157525783 container remove 6ed9077bb18668e798c931ddbcbcf7c2c62ca4fcef9b03edbf601d15b2040c59 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:49:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-9256738a74710a42ad352493dbfd443bedb5be2afa62fa8972499f0bbd918f25-merged.mount: Deactivated successfully.
Oct 13 15:49:28 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ed9077bb18668e798c931ddbcbcf7c2c62ca4fcef9b03edbf601d15b2040c59-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:28 standalone.localdomain systemd[1]: libpod-conmon-6ed9077bb18668e798c931ddbcbcf7c2c62ca4fcef9b03edbf601d15b2040c59.scope: Deactivated successfully.
Oct 13 15:49:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:28.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:28 standalone.localdomain ceph-mon[29756]: pgmap v4055: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:29 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:29.079 2 INFO neutron.agent.securitygroups_rpc [None req-b04ac193-2ac7-4844-8aea-00ef1fcaf7ec 118f63fe8531418a9a3eeb428d44c2e2 2b86e9279d7d4f0e8ff4244980e3ba19 - - default default] Security group member updated ['082236ce-f6b2-4668-93d7-679264d01112']
Oct 13 15:49:29 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:29.103 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:29 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:29.287 2 INFO neutron.agent.securitygroups_rpc [None req-bf6c55c4-0900-472c-b02a-2cbde23e6875 118f63fe8531418a9a3eeb428d44c2e2 2b86e9279d7d4f0e8ff4244980e3ba19 - - default default] Security group member updated ['082236ce-f6b2-4668-93d7-679264d01112']
Oct 13 15:49:29 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:29.302 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006647822445561139 of space, bias 1.0, pg target 0.6647822445561139 quantized to 32 (current 32)
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.012942315745393635 of space, bias 1.0, pg target 1.2942315745393635 quantized to 32 (current 32)
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.29775832286432163 quantized to 32 (current 32)
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 13 15:49:29 standalone.localdomain podman[563942]: 
Oct 13 15:49:29 standalone.localdomain podman[563942]: 2025-10-13 15:49:29.53162165 +0000 UTC m=+0.088571864 container create 075686443a125f8d8fc125824de93954c7f2ccf1d3e738442040ae6ab881da9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS)
Oct 13 15:49:29 standalone.localdomain systemd[1]: Started libpod-conmon-075686443a125f8d8fc125824de93954c7f2ccf1d3e738442040ae6ab881da9e.scope.
Oct 13 15:49:29 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:29 standalone.localdomain podman[563942]: 2025-10-13 15:49:29.479992167 +0000 UTC m=+0.036942461 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:29 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c1b907e397ef4cfd660482d28a6209c5c9bb8f0f5c11c28324a713c1348d01b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:29 standalone.localdomain podman[563942]: 2025-10-13 15:49:29.606189362 +0000 UTC m=+0.163139566 container init 075686443a125f8d8fc125824de93954c7f2ccf1d3e738442040ae6ab881da9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:49:29 standalone.localdomain podman[563942]: 2025-10-13 15:49:29.61519067 +0000 UTC m=+0.172140884 container start 075686443a125f8d8fc125824de93954c7f2ccf1d3e738442040ae6ab881da9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:49:29 standalone.localdomain dnsmasq[563961]: started, version 2.85 cachesize 150
Oct 13 15:49:29 standalone.localdomain dnsmasq[563961]: DNS service limited to local subnets
Oct 13 15:49:29 standalone.localdomain dnsmasq[563961]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:29 standalone.localdomain dnsmasq[563961]: warning: no upstream servers configured
Oct 13 15:49:29 standalone.localdomain dnsmasq-dhcp[563961]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Oct 13 15:49:29 standalone.localdomain dnsmasq[563961]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/addn_hosts - 0 addresses
Oct 13 15:49:29 standalone.localdomain dnsmasq-dhcp[563961]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/host
Oct 13 15:49:29 standalone.localdomain dnsmasq-dhcp[563961]: read /var/lib/neutron/dhcp/f934a9b1-f0ba-494d-9c34-bbdad3043007/opts
Oct 13 15:49:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4056: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:30 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:30.193 496978 INFO neutron.agent.dhcp.agent [None req-089062d2-0a82-4310-8d2e-06cad3b5c3e5 - - - - - -] DHCP configuration for ports {'3e92ec44-32d0-48f0-a90f-a14a72aa9cbb', 'b3d945e2-458a-437c-9a37-dcc9416fabab'} is completed
Oct 13 15:49:30 standalone.localdomain dnsmasq[563961]: exiting on receipt of SIGTERM
Oct 13 15:49:30 standalone.localdomain podman[563980]: 2025-10-13 15:49:30.357005458 +0000 UTC m=+0.071011834 container kill 075686443a125f8d8fc125824de93954c7f2ccf1d3e738442040ae6ab881da9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 13 15:49:30 standalone.localdomain systemd[1]: libpod-075686443a125f8d8fc125824de93954c7f2ccf1d3e738442040ae6ab881da9e.scope: Deactivated successfully.
Oct 13 15:49:30 standalone.localdomain podman[563994]: 2025-10-13 15:49:30.428140893 +0000 UTC m=+0.060524928 container died 075686443a125f8d8fc125824de93954c7f2ccf1d3e738442040ae6ab881da9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:49:30 standalone.localdomain podman[563994]: 2025-10-13 15:49:30.462796124 +0000 UTC m=+0.095180129 container cleanup 075686443a125f8d8fc125824de93954c7f2ccf1d3e738442040ae6ab881da9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:49:30 standalone.localdomain systemd[1]: libpod-conmon-075686443a125f8d8fc125824de93954c7f2ccf1d3e738442040ae6ab881da9e.scope: Deactivated successfully.
Oct 13 15:49:30 standalone.localdomain podman[564001]: 2025-10-13 15:49:30.525997204 +0000 UTC m=+0.143939284 container remove 075686443a125f8d8fc125824de93954c7f2ccf1d3e738442040ae6ab881da9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f934a9b1-f0ba-494d-9c34-bbdad3043007, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-1c1b907e397ef4cfd660482d28a6209c5c9bb8f0f5c11c28324a713c1348d01b-merged.mount: Deactivated successfully.
Oct 13 15:49:30 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-075686443a125f8d8fc125824de93954c7f2ccf1d3e738442040ae6ab881da9e-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:30 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:30Z|00787|binding|INFO|Releasing lport 3e92ec44-32d0-48f0-a90f-a14a72aa9cbb from this chassis (sb_readonly=0)
Oct 13 15:49:30 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:30Z|00788|binding|INFO|Setting lport 3e92ec44-32d0-48f0-a90f-a14a72aa9cbb down in Southbound
Oct 13 15:49:30 standalone.localdomain kernel: device tap3e92ec44-32 left promiscuous mode
Oct 13 15:49:30 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:30.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:30 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:30.549 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe07:da36/64 2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f934a9b1-f0ba-494d-9c34-bbdad3043007', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4076f5e0155f4270bb86d83d93b6e717', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e8e8e9e-8b7d-461d-b2e7-7828a7dcbfdb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=3e92ec44-32d0-48f0-a90f-a14a72aa9cbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:30 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:30.552 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 3e92ec44-32d0-48f0-a90f-a14a72aa9cbb in datapath f934a9b1-f0ba-494d-9c34-bbdad3043007 unbound from our chassis
Oct 13 15:49:30 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:30.555 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f934a9b1-f0ba-494d-9c34-bbdad3043007, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:30 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:30.557 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[d623272d-adb0-4684-8726-f286e95582b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:30 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:30.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8175 DF PROTO=TCP SPT=41500 DPT=9102 SEQ=230257982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD413360000000001030307) 
Oct 13 15:49:30 standalone.localdomain ceph-mon[29756]: pgmap v4056: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:30 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:30.856 496978 INFO neutron.agent.dhcp.agent [None req-868df7ed-3600-41f5-94d9-f466031420ec - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:30 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:30.857 496978 INFO neutron.agent.dhcp.agent [None req-868df7ed-3600-41f5-94d9-f466031420ec - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:30 standalone.localdomain systemd[1]: run-netns-qdhcp\x2df934a9b1\x2df0ba\x2d494d\x2d9c34\x2dbbdad3043007.mount: Deactivated successfully.
Oct 13 15:49:30 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:30.991 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:31 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:31.234 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:31 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:31Z|00789|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:49:31 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:31.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4057: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:32.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:49:32 standalone.localdomain ceph-mon[29756]: pgmap v4057: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:33.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4058: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:33 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:33.789 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:33Z, description=, device_id=e5f52e86-f5c5-49af-a5bd-798783a76e46, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd51c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd5280>], id=1e3e2301-cf4e-4274-a907-d29519388b3a, ip_allocation=immediate, mac_address=fa:16:3e:4c:fa:76, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2471, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:49:33Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:49:34 standalone.localdomain podman[564043]: 2025-10-13 15:49:34.065900502 +0000 UTC m=+0.061383056 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:49:34 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:49:34 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:49:34 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:49:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:34.438 496978 INFO neutron.agent.dhcp.agent [None req-55e76b06-693a-485c-b614-ac003913f55d - - - - - -] DHCP configuration for ports {'1e3e2301-cf4e-4274-a907-d29519388b3a'} is completed
Oct 13 15:49:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:34.559 496978 INFO neutron.agent.linux.ip_lib [None req-8c4d56e1-7d69-41b0-b1a9-024361e42e9d - - - - - -] Device tap5a790146-6b cannot be used as it has no MAC address
Oct 13 15:49:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:34.587 496978 INFO neutron.agent.linux.ip_lib [None req-43dfa47e-e895-45ae-9ac8-a3af60f0f0ba - - - - - -] Device tap40e5d086-04 cannot be used as it has no MAC address
Oct 13 15:49:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:34.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:34 standalone.localdomain kernel: device tap5a790146-6b entered promiscuous mode
Oct 13 15:49:34 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:34Z|00790|binding|INFO|Claiming lport 5a790146-6b07-4cf3-a9bf-8c790e3e5efc for this chassis.
Oct 13 15:49:34 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:34Z|00791|binding|INFO|5a790146-6b07-4cf3-a9bf-8c790e3e5efc: Claiming unknown
Oct 13 15:49:34 standalone.localdomain NetworkManager[5962]: <info>  [1760370574.6360] manager: (tap5a790146-6b): new Generic device (/org/freedesktop/NetworkManager/Devices/134)
Oct 13 15:49:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:34.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:34 standalone.localdomain systemd-udevd[564082]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:49:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:34.650 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-35ac4f78-471c-41f8-b8ab-f9a7b0175257', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35ac4f78-471c-41f8-b8ab-f9a7b0175257', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'adf6933f04d74c949ba5fc4f332031e4', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d1a0e53-2894-4778-8935-8dd07ebd59ad, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=5a790146-6b07-4cf3-a9bf-8c790e3e5efc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:34.652 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 5a790146-6b07-4cf3-a9bf-8c790e3e5efc in datapath 35ac4f78-471c-41f8-b8ab-f9a7b0175257 bound to our chassis
Oct 13 15:49:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:34.656 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port caf4ddc4-7470-42ec-a037-83b06b7eade2 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:49:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:34.657 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 35ac4f78-471c-41f8-b8ab-f9a7b0175257, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:34.658 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[ec1b6085-dc8b-496b-b594-2c5f6f624782]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:34.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5a790146-6b: No such device
Oct 13 15:49:34 standalone.localdomain kernel: device tap40e5d086-04 entered promiscuous mode
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5a790146-6b: No such device
Oct 13 15:49:34 standalone.localdomain NetworkManager[5962]: <info>  [1760370574.6799] manager: (tap40e5d086-04): new Generic device (/org/freedesktop/NetworkManager/Devices/135)
Oct 13 15:49:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:34.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:34.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:34 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:34Z|00792|binding|INFO|Claiming lport 40e5d086-0427-4c94-bb70-cc33b07438ab for this chassis.
Oct 13 15:49:34 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:34Z|00793|binding|INFO|40e5d086-0427-4c94-bb70-cc33b07438ab: Claiming unknown
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5a790146-6b: No such device
Oct 13 15:49:34 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:34Z|00794|binding|INFO|Setting lport 5a790146-6b07-4cf3-a9bf-8c790e3e5efc ovn-installed in OVS
Oct 13 15:49:34 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:34Z|00795|binding|INFO|Setting lport 5a790146-6b07-4cf3-a9bf-8c790e3e5efc up in Southbound
Oct 13 15:49:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:34.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5a790146-6b: No such device
Oct 13 15:49:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:34.700 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-90833556-463b-4132-b651-1d11a5807f5d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-90833556-463b-4132-b651-1d11a5807f5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b86e9279d7d4f0e8ff4244980e3ba19', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59524132-179b-46d9-a838-52b590fc5bf3, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=40e5d086-0427-4c94-bb70-cc33b07438ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:34.702 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 40e5d086-0427-4c94-bb70-cc33b07438ab in datapath 90833556-463b-4132-b651-1d11a5807f5d unbound from our chassis
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5a790146-6b: No such device
Oct 13 15:49:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:34.705 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port ff9f12e2-50cc-4d74-9e12-bfd5ec26e38f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:49:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:34.705 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 90833556-463b-4132-b651-1d11a5807f5d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:34.706 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[70600d67-11ee-4270-98b5-0101f6468548]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:34.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5a790146-6b: No such device
Oct 13 15:49:34 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:34Z|00796|binding|INFO|Setting lport 40e5d086-0427-4c94-bb70-cc33b07438ab ovn-installed in OVS
Oct 13 15:49:34 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:34Z|00797|binding|INFO|Setting lport 40e5d086-0427-4c94-bb70-cc33b07438ab up in Southbound
Oct 13 15:49:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:34.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5a790146-6b: No such device
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5a790146-6b: No such device
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap40e5d086-04: No such device
Oct 13 15:49:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:34.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap40e5d086-04: No such device
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap40e5d086-04: No such device
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap40e5d086-04: No such device
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap40e5d086-04: No such device
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap40e5d086-04: No such device
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap40e5d086-04: No such device
Oct 13 15:49:34 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap40e5d086-04: No such device
Oct 13 15:49:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:34.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:34.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:34 standalone.localdomain ceph-mon[29756]: pgmap v4058: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:34 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:34.966 496978 INFO neutron.agent.linux.ip_lib [None req-7cb70baa-eb88-4b95-bd1c-42842bba0383 - - - - - -] Device tap6183912a-4d cannot be used as it has no MAC address
Oct 13 15:49:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:34.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:34 standalone.localdomain kernel: device tap6183912a-4d entered promiscuous mode
Oct 13 15:49:35 standalone.localdomain NetworkManager[5962]: <info>  [1760370575.0013] manager: (tap6183912a-4d): new Generic device (/org/freedesktop/NetworkManager/Devices/136)
Oct 13 15:49:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:35.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:35Z|00798|binding|INFO|Claiming lport 6183912a-4d3c-4283-9a2c-63aa01d9dfdd for this chassis.
Oct 13 15:49:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:35Z|00799|binding|INFO|6183912a-4d3c-4283-9a2c-63aa01d9dfdd: Claiming unknown
Oct 13 15:49:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:35.015 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-54ce625b-14b6-412d-9530-884fab5292af', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54ce625b-14b6-412d-9530-884fab5292af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b86e9279d7d4f0e8ff4244980e3ba19', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0b71d23-0c18-48e2-a3b3-1ec91a8ac694, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=6183912a-4d3c-4283-9a2c-63aa01d9dfdd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:35.017 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 6183912a-4d3c-4283-9a2c-63aa01d9dfdd in datapath 54ce625b-14b6-412d-9530-884fab5292af bound to our chassis
Oct 13 15:49:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:35.021 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 6642a5e6-238c-40fa-ab9f-24b3dea93563 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:49:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:35.021 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 54ce625b-14b6-412d-9530-884fab5292af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:35 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:35.022 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[4dd31541-b474-4c57-b719-d6594bc18b36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:35Z|00800|binding|INFO|Setting lport 6183912a-4d3c-4283-9a2c-63aa01d9dfdd ovn-installed in OVS
Oct 13 15:49:35 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:35Z|00801|binding|INFO|Setting lport 6183912a-4d3c-4283-9a2c-63aa01d9dfdd up in Southbound
Oct 13 15:49:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:35.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:35.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:35.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:35 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:35.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:35 standalone.localdomain sshd[564199]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:49:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4059: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:35 standalone.localdomain podman[564244]: 
Oct 13 15:49:35 standalone.localdomain podman[564244]: 2025-10-13 15:49:35.82432645 +0000 UTC m=+0.142352384 container create 07c7ecfafd43a5f9ce1a2836764365a695c915ab39e39214b28ee8e75e0dabf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:35 standalone.localdomain systemd[1]: Started libpod-conmon-07c7ecfafd43a5f9ce1a2836764365a695c915ab39e39214b28ee8e75e0dabf4.scope.
Oct 13 15:49:35 standalone.localdomain systemd[1]: tmp-crun.4IRWEl.mount: Deactivated successfully.
Oct 13 15:49:35 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:35 standalone.localdomain podman[564244]: 2025-10-13 15:49:35.783267813 +0000 UTC m=+0.101293727 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:35 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dba510fca8eb3be4ec4f614d22b5be08def5937c9c0cb09fe6988d044345f66/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:35 standalone.localdomain podman[564277]: 
Oct 13 15:49:35 standalone.localdomain podman[564277]: 2025-10-13 15:49:35.8440973 +0000 UTC m=+0.053295436 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:35 standalone.localdomain podman[564244]: 2025-10-13 15:49:35.943061486 +0000 UTC m=+0.261087400 container init 07c7ecfafd43a5f9ce1a2836764365a695c915ab39e39214b28ee8e75e0dabf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:35 standalone.localdomain podman[564244]: 2025-10-13 15:49:35.948971008 +0000 UTC m=+0.266996942 container start 07c7ecfafd43a5f9ce1a2836764365a695c915ab39e39214b28ee8e75e0dabf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:35 standalone.localdomain podman[564277]: 2025-10-13 15:49:35.953430216 +0000 UTC m=+0.162628322 container create 726499acb505b9c54b2b0263ac571419ef67c9a080f34e5fd2cc85474efe6a30 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90833556-463b-4132-b651-1d11a5807f5d, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:35 standalone.localdomain dnsmasq[564313]: started, version 2.85 cachesize 150
Oct 13 15:49:35 standalone.localdomain dnsmasq[564313]: DNS service limited to local subnets
Oct 13 15:49:35 standalone.localdomain dnsmasq[564313]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:35 standalone.localdomain dnsmasq[564313]: warning: no upstream servers configured
Oct 13 15:49:35 standalone.localdomain dnsmasq-dhcp[564313]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:49:35 standalone.localdomain dnsmasq[564313]: read /var/lib/neutron/dhcp/35ac4f78-471c-41f8-b8ab-f9a7b0175257/addn_hosts - 0 addresses
Oct 13 15:49:35 standalone.localdomain dnsmasq-dhcp[564313]: read /var/lib/neutron/dhcp/35ac4f78-471c-41f8-b8ab-f9a7b0175257/host
Oct 13 15:49:35 standalone.localdomain dnsmasq-dhcp[564313]: read /var/lib/neutron/dhcp/35ac4f78-471c-41f8-b8ab-f9a7b0175257/opts
Oct 13 15:49:35 standalone.localdomain systemd[1]: Started libpod-conmon-726499acb505b9c54b2b0263ac571419ef67c9a080f34e5fd2cc85474efe6a30.scope.
Oct 13 15:49:36 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:36 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da38589581115e1080005876cb89074d761ce8d1b7eef456f83a7dfe24b51827/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:36 standalone.localdomain podman[564277]: 2025-10-13 15:49:36.018237136 +0000 UTC m=+0.227435242 container init 726499acb505b9c54b2b0263ac571419ef67c9a080f34e5fd2cc85474efe6a30 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90833556-463b-4132-b651-1d11a5807f5d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:49:36 standalone.localdomain podman[564277]: 2025-10-13 15:49:36.025667365 +0000 UTC m=+0.234865461 container start 726499acb505b9c54b2b0263ac571419ef67c9a080f34e5fd2cc85474efe6a30 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90833556-463b-4132-b651-1d11a5807f5d, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:49:36 standalone.localdomain dnsmasq[564337]: started, version 2.85 cachesize 150
Oct 13 15:49:36 standalone.localdomain dnsmasq[564337]: DNS service limited to local subnets
Oct 13 15:49:36 standalone.localdomain dnsmasq[564337]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:36 standalone.localdomain dnsmasq[564337]: warning: no upstream servers configured
Oct 13 15:49:36 standalone.localdomain dnsmasq-dhcp[564337]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:49:36 standalone.localdomain dnsmasq[564337]: read /var/lib/neutron/dhcp/90833556-463b-4132-b651-1d11a5807f5d/addn_hosts - 0 addresses
Oct 13 15:49:36 standalone.localdomain dnsmasq-dhcp[564337]: read /var/lib/neutron/dhcp/90833556-463b-4132-b651-1d11a5807f5d/host
Oct 13 15:49:36 standalone.localdomain dnsmasq-dhcp[564337]: read /var/lib/neutron/dhcp/90833556-463b-4132-b651-1d11a5807f5d/opts
Oct 13 15:49:36 standalone.localdomain dnsmasq[564313]: exiting on receipt of SIGTERM
Oct 13 15:49:36 standalone.localdomain systemd[1]: libpod-07c7ecfafd43a5f9ce1a2836764365a695c915ab39e39214b28ee8e75e0dabf4.scope: Deactivated successfully.
Oct 13 15:49:36 standalone.localdomain podman[564321]: 2025-10-13 15:49:36.081180159 +0000 UTC m=+0.097012696 container died 07c7ecfafd43a5f9ce1a2836764365a695c915ab39e39214b28ee8e75e0dabf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:49:36 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:36.095 496978 INFO neutron.agent.dhcp.agent [None req-c7d773af-5aa3-4f31-908f-50e02cfccd35 - - - - - -] DHCP configuration for ports {'78c6f3fe-8b6c-4654-90f7-22c1e10f17b3'} is completed
Oct 13 15:49:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:36Z|00802|binding|INFO|Removing iface tap6183912a-4d ovn-installed in OVS
Oct 13 15:49:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:36.098 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 6642a5e6-238c-40fa-ab9f-24b3dea93563 with type ""
Oct 13 15:49:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:36Z|00803|binding|INFO|Removing lport 6183912a-4d3c-4283-9a2c-63aa01d9dfdd ovn-installed in OVS
Oct 13 15:49:36 standalone.localdomain unix_chkpwd[564348]: password check failed for user (root)
Oct 13 15:49:36 standalone.localdomain sshd[564199]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 13 15:49:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:36.134 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-54ce625b-14b6-412d-9530-884fab5292af', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54ce625b-14b6-412d-9530-884fab5292af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b86e9279d7d4f0e8ff4244980e3ba19', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e0b71d23-0c18-48e2-a3b3-1ec91a8ac694, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=6183912a-4d3c-4283-9a2c-63aa01d9dfdd) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:36.135 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 6183912a-4d3c-4283-9a2c-63aa01d9dfdd in datapath 54ce625b-14b6-412d-9530-884fab5292af unbound from our chassis
Oct 13 15:49:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:36.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:36.137 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 54ce625b-14b6-412d-9530-884fab5292af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:36.138 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[e92e3ac7-6bd9-435a-842f-b6745f5aa8c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:36 standalone.localdomain podman[564321]: 2025-10-13 15:49:36.149406744 +0000 UTC m=+0.165239231 container cleanup 07c7ecfafd43a5f9ce1a2836764365a695c915ab39e39214b28ee8e75e0dabf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:36 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:36.165 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:35Z, description=, device_id=e5f52e86-f5c5-49af-a5bd-798783a76e46, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188926d5b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889004f40>], id=2a07bd7a-081b-4257-8ddb-431980407c6e, ip_allocation=immediate, mac_address=fa:16:3e:30:c4:85, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:49:31Z, description=, dns_domain=, id=35ac4f78-471c-41f8-b8ab-f9a7b0175257, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIMysqlTest-1692098810-network, port_security_enabled=True, project_id=adf6933f04d74c949ba5fc4f332031e4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50988, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2459, status=ACTIVE, subnets=['f99dda7c-2eb5-4c75-9a98-2d8fca2d95e5'], tags=[], tenant_id=adf6933f04d74c949ba5fc4f332031e4, updated_at=2025-10-13T15:49:31Z, vlan_transparent=None, network_id=35ac4f78-471c-41f8-b8ab-f9a7b0175257, port_security_enabled=False, project_id=adf6933f04d74c949ba5fc4f332031e4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2478, status=DOWN, tags=[], tenant_id=adf6933f04d74c949ba5fc4f332031e4, updated_at=2025-10-13T15:49:35Z on network 35ac4f78-471c-41f8-b8ab-f9a7b0175257
Oct 13 15:49:36 standalone.localdomain podman[564340]: 2025-10-13 15:49:36.189923976 +0000 UTC m=+0.104545418 container cleanup 07c7ecfafd43a5f9ce1a2836764365a695c915ab39e39214b28ee8e75e0dabf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:49:36 standalone.localdomain systemd[1]: libpod-conmon-07c7ecfafd43a5f9ce1a2836764365a695c915ab39e39214b28ee8e75e0dabf4.scope: Deactivated successfully.
Oct 13 15:49:36 standalone.localdomain podman[564353]: 2025-10-13 15:49:36.230673173 +0000 UTC m=+0.069158575 container remove 07c7ecfafd43a5f9ce1a2836764365a695c915ab39e39214b28ee8e75e0dabf4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:49:36 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:36.285 496978 INFO neutron.agent.dhcp.agent [None req-623b332c-8f93-45af-8392-d131b84a8555 - - - - - -] DHCP configuration for ports {'cb6926da-6940-41f1-bbe0-8111b0a82864'} is completed
Oct 13 15:49:36 standalone.localdomain podman[564386]: 2025-10-13 15:49:36.384923495 +0000 UTC m=+0.139514217 container create c6309e0cbf6e4427dc797eec8d0d8f36ab2e5dbb2b2167ef24d1c896ed9ab6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54ce625b-14b6-412d-9530-884fab5292af, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 13 15:49:36 standalone.localdomain dnsmasq[564337]: exiting on receipt of SIGTERM
Oct 13 15:49:36 standalone.localdomain podman[564404]: 2025-10-13 15:49:36.418122629 +0000 UTC m=+0.099259024 container kill 726499acb505b9c54b2b0263ac571419ef67c9a080f34e5fd2cc85474efe6a30 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90833556-463b-4132-b651-1d11a5807f5d, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:49:36 standalone.localdomain systemd[1]: Started libpod-conmon-c6309e0cbf6e4427dc797eec8d0d8f36ab2e5dbb2b2167ef24d1c896ed9ab6e2.scope.
Oct 13 15:49:36 standalone.localdomain systemd[1]: libpod-726499acb505b9c54b2b0263ac571419ef67c9a080f34e5fd2cc85474efe6a30.scope: Deactivated successfully.
Oct 13 15:49:36 standalone.localdomain podman[564386]: 2025-10-13 15:49:36.337369967 +0000 UTC m=+0.091960759 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:36 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:36 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05e3aa74b322d7b1dbf83082bcc1426f14b466e71dedef931f49c6c8da4b7fee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:36 standalone.localdomain podman[564386]: 2025-10-13 15:49:36.45087172 +0000 UTC m=+0.205462422 container init c6309e0cbf6e4427dc797eec8d0d8f36ab2e5dbb2b2167ef24d1c896ed9ab6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54ce625b-14b6-412d-9530-884fab5292af, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:49:36 standalone.localdomain podman[564386]: 2025-10-13 15:49:36.461655303 +0000 UTC m=+0.216246005 container start c6309e0cbf6e4427dc797eec8d0d8f36ab2e5dbb2b2167ef24d1c896ed9ab6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54ce625b-14b6-412d-9530-884fab5292af, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:49:36 standalone.localdomain dnsmasq[564440]: started, version 2.85 cachesize 150
Oct 13 15:49:36 standalone.localdomain dnsmasq[564440]: DNS service limited to local subnets
Oct 13 15:49:36 standalone.localdomain dnsmasq[564440]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:36 standalone.localdomain dnsmasq[564440]: warning: no upstream servers configured
Oct 13 15:49:36 standalone.localdomain dnsmasq-dhcp[564440]: DHCP, static leases only on 10.100.0.16, lease time 1d
Oct 13 15:49:36 standalone.localdomain dnsmasq[564440]: read /var/lib/neutron/dhcp/54ce625b-14b6-412d-9530-884fab5292af/addn_hosts - 0 addresses
Oct 13 15:49:36 standalone.localdomain dnsmasq-dhcp[564440]: read /var/lib/neutron/dhcp/54ce625b-14b6-412d-9530-884fab5292af/host
Oct 13 15:49:36 standalone.localdomain dnsmasq-dhcp[564440]: read /var/lib/neutron/dhcp/54ce625b-14b6-412d-9530-884fab5292af/opts
Oct 13 15:49:36 standalone.localdomain podman[564419]: 2025-10-13 15:49:36.501843143 +0000 UTC m=+0.065582515 container died 726499acb505b9c54b2b0263ac571419ef67c9a080f34e5fd2cc85474efe6a30 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90833556-463b-4132-b651-1d11a5807f5d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:36 standalone.localdomain podman[564419]: 2025-10-13 15:49:36.532439648 +0000 UTC m=+0.096178990 container cleanup 726499acb505b9c54b2b0263ac571419ef67c9a080f34e5fd2cc85474efe6a30 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90833556-463b-4132-b651-1d11a5807f5d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:49:36 standalone.localdomain systemd[1]: libpod-conmon-726499acb505b9c54b2b0263ac571419ef67c9a080f34e5fd2cc85474efe6a30.scope: Deactivated successfully.
Oct 13 15:49:36 standalone.localdomain podman[564429]: 2025-10-13 15:49:36.58433346 +0000 UTC m=+0.131471139 container remove 726499acb505b9c54b2b0263ac571419ef67c9a080f34e5fd2cc85474efe6a30 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90833556-463b-4132-b651-1d11a5807f5d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:49:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:36Z|00804|binding|INFO|Releasing lport 40e5d086-0427-4c94-bb70-cc33b07438ab from this chassis (sb_readonly=0)
Oct 13 15:49:36 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:36Z|00805|binding|INFO|Setting lport 40e5d086-0427-4c94-bb70-cc33b07438ab down in Southbound
Oct 13 15:49:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:36.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:36 standalone.localdomain kernel: device tap40e5d086-04 left promiscuous mode
Oct 13 15:49:36 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:36.604 496978 INFO neutron.agent.dhcp.agent [None req-69de7d27-47ce-48c4-84a0-f74c2d23fd05 - - - - - -] DHCP configuration for ports {'edfa5e73-7c5c-4a9d-a100-defc387d50d9'} is completed
Oct 13 15:49:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:36.611 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-90833556-463b-4132-b651-1d11a5807f5d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-90833556-463b-4132-b651-1d11a5807f5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b86e9279d7d4f0e8ff4244980e3ba19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59524132-179b-46d9-a838-52b590fc5bf3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=40e5d086-0427-4c94-bb70-cc33b07438ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:36.613 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 40e5d086-0427-4c94-bb70-cc33b07438ab in datapath 90833556-463b-4132-b651-1d11a5807f5d unbound from our chassis
Oct 13 15:49:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:36.616 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 90833556-463b-4132-b651-1d11a5807f5d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:36 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:36.617 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[c78292e0-8e6e-4245-8f10-de7a72b6a745]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:36.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:36 standalone.localdomain dnsmasq[564440]: exiting on receipt of SIGTERM
Oct 13 15:49:36 standalone.localdomain podman[564501]: 2025-10-13 15:49:36.760613972 +0000 UTC m=+0.048169169 container kill c6309e0cbf6e4427dc797eec8d0d8f36ab2e5dbb2b2167ef24d1c896ed9ab6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54ce625b-14b6-412d-9530-884fab5292af, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:36 standalone.localdomain systemd[1]: libpod-c6309e0cbf6e4427dc797eec8d0d8f36ab2e5dbb2b2167ef24d1c896ed9ab6e2.scope: Deactivated successfully.
Oct 13 15:49:36 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:36.775 496978 INFO neutron.agent.dhcp.agent [None req-fc06c720-d8b4-47c2-a79f-df526772a528 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:36 standalone.localdomain podman[564490]: 2025-10-13 15:49:36.796813749 +0000 UTC m=+0.116601940 container create 07b5fd62aeca48b1b18d9b00152ab0507f75ae24c3d39c9c6bfd71e76f3c6498 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:49:36 standalone.localdomain podman[564520]: 2025-10-13 15:49:36.822587235 +0000 UTC m=+0.052286846 container died c6309e0cbf6e4427dc797eec8d0d8f36ab2e5dbb2b2167ef24d1c896ed9ab6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54ce625b-14b6-412d-9530-884fab5292af, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:36 standalone.localdomain podman[564490]: 2025-10-13 15:49:36.736752515 +0000 UTC m=+0.056540746 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:36 standalone.localdomain systemd[1]: Started libpod-conmon-07b5fd62aeca48b1b18d9b00152ab0507f75ae24c3d39c9c6bfd71e76f3c6498.scope.
Oct 13 15:49:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-2dba510fca8eb3be4ec4f614d22b5be08def5937c9c0cb09fe6988d044345f66-merged.mount: Deactivated successfully.
Oct 13 15:49:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-07c7ecfafd43a5f9ce1a2836764365a695c915ab39e39214b28ee8e75e0dabf4-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:36 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d90833556\x2d463b\x2d4132\x2db651\x2d1d11a5807f5d.mount: Deactivated successfully.
Oct 13 15:49:36 standalone.localdomain ceph-mon[29756]: pgmap v4059: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:36 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:36 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15776ffcc7c3417e189b45b1b17fcedd341e363408d699bcf455f20c9dfbf0bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:36 standalone.localdomain podman[564490]: 2025-10-13 15:49:36.889657005 +0000 UTC m=+0.209445196 container init 07b5fd62aeca48b1b18d9b00152ab0507f75ae24c3d39c9c6bfd71e76f3c6498 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:49:36 standalone.localdomain podman[564490]: 2025-10-13 15:49:36.895612279 +0000 UTC m=+0.215400460 container start 07b5fd62aeca48b1b18d9b00152ab0507f75ae24c3d39c9c6bfd71e76f3c6498 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:49:36 standalone.localdomain dnsmasq[564555]: started, version 2.85 cachesize 150
Oct 13 15:49:36 standalone.localdomain dnsmasq[564555]: DNS service limited to local subnets
Oct 13 15:49:36 standalone.localdomain dnsmasq[564555]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:36 standalone.localdomain dnsmasq[564555]: warning: no upstream servers configured
Oct 13 15:49:36 standalone.localdomain dnsmasq-dhcp[564555]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:49:36 standalone.localdomain dnsmasq[564555]: read /var/lib/neutron/dhcp/35ac4f78-471c-41f8-b8ab-f9a7b0175257/addn_hosts - 1 addresses
Oct 13 15:49:36 standalone.localdomain dnsmasq-dhcp[564555]: read /var/lib/neutron/dhcp/35ac4f78-471c-41f8-b8ab-f9a7b0175257/host
Oct 13 15:49:36 standalone.localdomain dnsmasq-dhcp[564555]: read /var/lib/neutron/dhcp/35ac4f78-471c-41f8-b8ab-f9a7b0175257/opts
Oct 13 15:49:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6309e0cbf6e4427dc797eec8d0d8f36ab2e5dbb2b2167ef24d1c896ed9ab6e2-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-05e3aa74b322d7b1dbf83082bcc1426f14b466e71dedef931f49c6c8da4b7fee-merged.mount: Deactivated successfully.
Oct 13 15:49:36 standalone.localdomain podman[564520]: 2025-10-13 15:49:36.925196181 +0000 UTC m=+0.154895752 container cleanup c6309e0cbf6e4427dc797eec8d0d8f36ab2e5dbb2b2167ef24d1c896ed9ab6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54ce625b-14b6-412d-9530-884fab5292af, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:49:36 standalone.localdomain systemd[1]: libpod-conmon-c6309e0cbf6e4427dc797eec8d0d8f36ab2e5dbb2b2167ef24d1c896ed9ab6e2.scope: Deactivated successfully.
Oct 13 15:49:36 standalone.localdomain podman[564531]: 2025-10-13 15:49:36.952119213 +0000 UTC m=+0.152023504 container remove c6309e0cbf6e4427dc797eec8d0d8f36ab2e5dbb2b2167ef24d1c896ed9ab6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-54ce625b-14b6-412d-9530-884fab5292af, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:49:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:36.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:36 standalone.localdomain kernel: device tap6183912a-4d left promiscuous mode
Oct 13 15:49:36 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:36.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:36 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:36.999 496978 INFO neutron.agent.dhcp.agent [None req-b787f8cc-0364-454b-a95f-dd8d9fa375d3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:37.000 496978 INFO neutron.agent.dhcp.agent [None req-b787f8cc-0364-454b-a95f-dd8d9fa375d3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:37 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:37Z|00806|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:49:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:37.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:37.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:49:37 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:37.758 496978 INFO neutron.agent.dhcp.agent [None req-3d90cc69-557e-4ed7-b301-215a6ff7fee8 - - - - - -] DHCP configuration for ports {'2a07bd7a-081b-4257-8ddb-431980407c6e'} is completed
Oct 13 15:49:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4060: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:37 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d54ce625b\x2d14b6\x2d412d\x2d9530\x2d884fab5292af.mount: Deactivated successfully.
Oct 13 15:49:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:38.185 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:38 standalone.localdomain sshd[564199]: Failed password for root from 193.46.255.159 port 29778 ssh2
Oct 13 15:49:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:38.668 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:35Z, description=, device_id=e5f52e86-f5c5-49af-a5bd-798783a76e46, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889119520>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889119580>], id=2a07bd7a-081b-4257-8ddb-431980407c6e, ip_allocation=immediate, mac_address=fa:16:3e:30:c4:85, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:49:31Z, description=, dns_domain=, id=35ac4f78-471c-41f8-b8ab-f9a7b0175257, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIMysqlTest-1692098810-network, port_security_enabled=True, project_id=adf6933f04d74c949ba5fc4f332031e4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50988, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2459, status=ACTIVE, subnets=['f99dda7c-2eb5-4c75-9a98-2d8fca2d95e5'], tags=[], tenant_id=adf6933f04d74c949ba5fc4f332031e4, updated_at=2025-10-13T15:49:31Z, vlan_transparent=None, network_id=35ac4f78-471c-41f8-b8ab-f9a7b0175257, port_security_enabled=False, project_id=adf6933f04d74c949ba5fc4f332031e4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2478, status=DOWN, tags=[], tenant_id=adf6933f04d74c949ba5fc4f332031e4, updated_at=2025-10-13T15:49:35Z on network 35ac4f78-471c-41f8-b8ab-f9a7b0175257
Oct 13 15:49:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:38.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:38 standalone.localdomain ceph-mon[29756]: pgmap v4060: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:38 standalone.localdomain dnsmasq[564555]: read /var/lib/neutron/dhcp/35ac4f78-471c-41f8-b8ab-f9a7b0175257/addn_hosts - 1 addresses
Oct 13 15:49:38 standalone.localdomain podman[564583]: 2025-10-13 15:49:38.904926141 +0000 UTC m=+0.060869640 container kill 07b5fd62aeca48b1b18d9b00152ab0507f75ae24c3d39c9c6bfd71e76f3c6498 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:49:38 standalone.localdomain dnsmasq-dhcp[564555]: read /var/lib/neutron/dhcp/35ac4f78-471c-41f8-b8ab-f9a7b0175257/host
Oct 13 15:49:38 standalone.localdomain dnsmasq-dhcp[564555]: read /var/lib/neutron/dhcp/35ac4f78-471c-41f8-b8ab-f9a7b0175257/opts
Oct 13 15:49:38 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:49:39 standalone.localdomain podman[564596]: 2025-10-13 15:49:39.027719601 +0000 UTC m=+0.095720465 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 13 15:49:39 standalone.localdomain podman[564596]: 2025-10-13 15:49:39.068678926 +0000 UTC m=+0.136679810 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:49:39 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:49:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:39.095 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:39.119 496978 INFO neutron.agent.linux.ip_lib [None req-8b4f8984-2d44-400a-920a-306d970366be - - - - - -] Device tapb5c8af7b-46 cannot be used as it has no MAC address
Oct 13 15:49:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:39.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:39 standalone.localdomain kernel: device tapb5c8af7b-46 entered promiscuous mode
Oct 13 15:49:39 standalone.localdomain NetworkManager[5962]: <info>  [1760370579.1552] manager: (tapb5c8af7b-46): new Generic device (/org/freedesktop/NetworkManager/Devices/137)
Oct 13 15:49:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:39Z|00807|binding|INFO|Claiming lport b5c8af7b-4628-4de0-8eac-0df176719249 for this chassis.
Oct 13 15:49:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:39Z|00808|binding|INFO|b5c8af7b-4628-4de0-8eac-0df176719249: Claiming unknown
Oct 13 15:49:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:39.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:39 standalone.localdomain systemd-udevd[564637]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:49:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:39.165 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-dfb2252b-9ac0-44da-93fe-e98b4cd60215', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfb2252b-9ac0-44da-93fe-e98b4cd60215', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b590ef07afc04283817dba3e7ac7fe5f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc2a8b3d-afda-4492-a8d6-388704e69773, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=b5c8af7b-4628-4de0-8eac-0df176719249) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:39.167 378821 INFO neutron.agent.ovn.metadata.agent [-] Port b5c8af7b-4628-4de0-8eac-0df176719249 in datapath dfb2252b-9ac0-44da-93fe-e98b4cd60215 bound to our chassis
Oct 13 15:49:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:39.169 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dfb2252b-9ac0-44da-93fe-e98b4cd60215 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:49:39 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:39.171 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[d3063058-48b2-45fb-944c-67daa7ef0bcc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:39Z|00809|binding|INFO|Setting lport b5c8af7b-4628-4de0-8eac-0df176719249 ovn-installed in OVS
Oct 13 15:49:39 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:39Z|00810|binding|INFO|Setting lport b5c8af7b-4628-4de0-8eac-0df176719249 up in Southbound
Oct 13 15:49:39 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:39.197 2 INFO neutron.agent.securitygroups_rpc [None req-de88a6ff-1dbf-4d35-9f4a-ec688a2048fc e52bc13f462f48aa918b14a7bdec0b0e b590ef07afc04283817dba3e7ac7fe5f - - default default] Security group member updated ['b12fa3ef-1513-43b4-8cd9-76991177073d']
Oct 13 15:49:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:39.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:39.234 496978 INFO neutron.agent.dhcp.agent [None req-79114c81-bea2-4f71-9661-0bfd85c95315 - - - - - -] DHCP configuration for ports {'2a07bd7a-081b-4257-8ddb-431980407c6e'} is completed
Oct 13 15:49:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:39.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4061: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:39 standalone.localdomain unix_chkpwd[564670]: password check failed for user (root)
Oct 13 15:49:40 standalone.localdomain podman[564693]: 2025-10-13 15:49:40.181979631 +0000 UTC m=+0.093838448 container create 5911a09d9b18db4ee1435144923ae7d3aa40f9be2210dde282b987cadd0086fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfb2252b-9ac0-44da-93fe-e98b4cd60215, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:49:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:49:40 standalone.localdomain systemd[1]: Started libpod-conmon-5911a09d9b18db4ee1435144923ae7d3aa40f9be2210dde282b987cadd0086fc.scope.
Oct 13 15:49:40 standalone.localdomain systemd[1]: tmp-crun.uEI8oF.mount: Deactivated successfully.
Oct 13 15:49:40 standalone.localdomain podman[564693]: 2025-10-13 15:49:40.134794514 +0000 UTC m=+0.046653341 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:40.251 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:40 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:40 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6800e0c7d1655b445688274a607aebe7765951b102625cd4b1695869f0a176d9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:40 standalone.localdomain podman[564693]: 2025-10-13 15:49:40.267548201 +0000 UTC m=+0.179407018 container init 5911a09d9b18db4ee1435144923ae7d3aa40f9be2210dde282b987cadd0086fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfb2252b-9ac0-44da-93fe-e98b4cd60215, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 13 15:49:40 standalone.localdomain podman[564693]: 2025-10-13 15:49:40.275018812 +0000 UTC m=+0.186877609 container start 5911a09d9b18db4ee1435144923ae7d3aa40f9be2210dde282b987cadd0086fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfb2252b-9ac0-44da-93fe-e98b4cd60215, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:40 standalone.localdomain dnsmasq[564724]: started, version 2.85 cachesize 150
Oct 13 15:49:40 standalone.localdomain dnsmasq[564724]: DNS service limited to local subnets
Oct 13 15:49:40 standalone.localdomain dnsmasq[564724]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:40 standalone.localdomain dnsmasq[564724]: warning: no upstream servers configured
Oct 13 15:49:40 standalone.localdomain dnsmasq-dhcp[564724]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:49:40 standalone.localdomain dnsmasq[564724]: read /var/lib/neutron/dhcp/dfb2252b-9ac0-44da-93fe-e98b4cd60215/addn_hosts - 0 addresses
Oct 13 15:49:40 standalone.localdomain dnsmasq-dhcp[564724]: read /var/lib/neutron/dhcp/dfb2252b-9ac0-44da-93fe-e98b4cd60215/host
Oct 13 15:49:40 standalone.localdomain dnsmasq-dhcp[564724]: read /var/lib/neutron/dhcp/dfb2252b-9ac0-44da-93fe-e98b4cd60215/opts
Oct 13 15:49:40 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:40.295 2 INFO neutron.agent.securitygroups_rpc [None req-c118e905-9285-466d-9f5f-524c6442c358 e52bc13f462f48aa918b14a7bdec0b0e b590ef07afc04283817dba3e7ac7fe5f - - default default] Security group member updated ['b12fa3ef-1513-43b4-8cd9-76991177073d']
Oct 13 15:49:40 standalone.localdomain podman[564708]: 2025-10-13 15:49:40.317273086 +0000 UTC m=+0.091528165 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Oct 13 15:49:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:40.321 496978 INFO neutron.agent.dhcp.agent [None req-8b4f8984-2d44-400a-920a-306d970366be - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:38Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e702b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e70ca0>], id=67e617d5-3477-4e9d-b2c3-4f50ca761ce6, ip_allocation=immediate, mac_address=fa:16:3e:28:35:9a, name=tempest-ExtraDHCPOptionsIpV6TestJSON-1397051671, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:49:36Z, description=, dns_domain=, id=dfb2252b-9ac0-44da-93fe-e98b4cd60215, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsIpV6TestJSON-test-network-1598535027, port_security_enabled=True, project_id=b590ef07afc04283817dba3e7ac7fe5f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30939, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2479, status=ACTIVE, subnets=['590bf1f7-8b31-4f4b-abe8-045b1c2508df'], tags=[], tenant_id=b590ef07afc04283817dba3e7ac7fe5f, updated_at=2025-10-13T15:49:38Z, vlan_transparent=None, network_id=dfb2252b-9ac0-44da-93fe-e98b4cd60215, port_security_enabled=True, project_id=b590ef07afc04283817dba3e7ac7fe5f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['b12fa3ef-1513-43b4-8cd9-76991177073d'], standard_attr_id=2484, status=DOWN, tags=[], tenant_id=b590ef07afc04283817dba3e7ac7fe5f, updated_at=2025-10-13T15:49:38Z on network dfb2252b-9ac0-44da-93fe-e98b4cd60215
Oct 13 15:49:40 standalone.localdomain podman[564708]: 2025-10-13 15:49:40.331869657 +0000 UTC m=+0.106124696 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, container_name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 15:49:40 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:49:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:40.393 496978 INFO neutron.agent.dhcp.agent [None req-057dac5b-6e9b-47a1-9be1-665d122e8461 - - - - - -] DHCP configuration for ports {'8606fd9d-4683-4a7a-9cb9-e7df466199c9'} is completed
Oct 13 15:49:40 standalone.localdomain dnsmasq[564724]: read /var/lib/neutron/dhcp/dfb2252b-9ac0-44da-93fe-e98b4cd60215/addn_hosts - 1 addresses
Oct 13 15:49:40 standalone.localdomain dnsmasq-dhcp[564724]: read /var/lib/neutron/dhcp/dfb2252b-9ac0-44da-93fe-e98b4cd60215/host
Oct 13 15:49:40 standalone.localdomain dnsmasq-dhcp[564724]: read /var/lib/neutron/dhcp/dfb2252b-9ac0-44da-93fe-e98b4cd60215/opts
Oct 13 15:49:40 standalone.localdomain podman[564752]: 2025-10-13 15:49:40.530707365 +0000 UTC m=+0.061503390 container kill 5911a09d9b18db4ee1435144923ae7d3aa40f9be2210dde282b987cadd0086fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfb2252b-9ac0-44da-93fe-e98b4cd60215, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:40.702 496978 INFO neutron.agent.dhcp.agent [None req-8b4f8984-2d44-400a-920a-306d970366be - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:39Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e83280>], dns_domain=, dns_name=, extra_dhcp_opts=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e83df0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1888e83a30>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1888e83430>], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e83130>], id=853764bd-1d7f-468d-8ffc-e89c25cb9ccf, ip_allocation=immediate, mac_address=fa:16:3e:14:4f:c9, name=tempest-ExtraDHCPOptionsIpV6TestJSON-1022393442, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:49:36Z, description=, dns_domain=, id=dfb2252b-9ac0-44da-93fe-e98b4cd60215, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsIpV6TestJSON-test-network-1598535027, port_security_enabled=True, project_id=b590ef07afc04283817dba3e7ac7fe5f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30939, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2479, status=ACTIVE, subnets=['590bf1f7-8b31-4f4b-abe8-045b1c2508df'], tags=[], tenant_id=b590ef07afc04283817dba3e7ac7fe5f, updated_at=2025-10-13T15:49:38Z, vlan_transparent=None, network_id=dfb2252b-9ac0-44da-93fe-e98b4cd60215, port_security_enabled=True, project_id=b590ef07afc04283817dba3e7ac7fe5f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['b12fa3ef-1513-43b4-8cd9-76991177073d'], standard_attr_id=2485, status=DOWN, tags=[], tenant_id=b590ef07afc04283817dba3e7ac7fe5f, updated_at=2025-10-13T15:49:39Z on network dfb2252b-9ac0-44da-93fe-e98b4cd60215
Oct 13 15:49:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:40.729 496978 INFO neutron.agent.linux.dhcp [None req-8b4f8984-2d44-400a-920a-306d970366be - - - - - -] Cannot apply dhcp option server-ip-address because it's ip_version 4 is not in port's address IP versions
Oct 13 15:49:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:40.730 496978 INFO neutron.agent.linux.dhcp [None req-8b4f8984-2d44-400a-920a-306d970366be - - - - - -] Cannot apply dhcp option bootfile-name because it's ip_version 4 is not in port's address IP versions
Oct 13 15:49:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:40.730 496978 INFO neutron.agent.linux.dhcp [None req-8b4f8984-2d44-400a-920a-306d970366be - - - - - -] Cannot apply dhcp option tftp-server because it's ip_version 4 is not in port's address IP versions
Oct 13 15:49:40 standalone.localdomain ceph-mon[29756]: pgmap v4061: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:40 standalone.localdomain dnsmasq[564724]: read /var/lib/neutron/dhcp/dfb2252b-9ac0-44da-93fe-e98b4cd60215/addn_hosts - 2 addresses
Oct 13 15:49:40 standalone.localdomain dnsmasq-dhcp[564724]: read /var/lib/neutron/dhcp/dfb2252b-9ac0-44da-93fe-e98b4cd60215/host
Oct 13 15:49:40 standalone.localdomain dnsmasq-dhcp[564724]: read /var/lib/neutron/dhcp/dfb2252b-9ac0-44da-93fe-e98b4cd60215/opts
Oct 13 15:49:40 standalone.localdomain podman[564791]: 2025-10-13 15:49:40.926886424 +0000 UTC m=+0.066929307 container kill 5911a09d9b18db4ee1435144923ae7d3aa40f9be2210dde282b987cadd0086fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfb2252b-9ac0-44da-93fe-e98b4cd60215, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 15:49:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:40.932 496978 INFO neutron.agent.dhcp.agent [None req-e3db196f-7d4e-4e0c-9aeb-23957dc59092 - - - - - -] DHCP configuration for ports {'67e617d5-3477-4e9d-b2c3-4f50ca761ce6'} is completed
Oct 13 15:49:40 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:40.995 2 INFO neutron.agent.securitygroups_rpc [None req-21067529-400d-4e97-9a8f-baff74c64704 e52bc13f462f48aa918b14a7bdec0b0e b590ef07afc04283817dba3e7ac7fe5f - - default default] Security group member updated ['b12fa3ef-1513-43b4-8cd9-76991177073d']
Oct 13 15:49:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:41.145 496978 INFO neutron.agent.dhcp.agent [None req-39eb8a22-a493-45ee-8c3a-7736895918db - - - - - -] DHCP configuration for ports {'853764bd-1d7f-468d-8ffc-e89c25cb9ccf'} is completed
Oct 13 15:49:41 standalone.localdomain dnsmasq[564724]: read /var/lib/neutron/dhcp/dfb2252b-9ac0-44da-93fe-e98b4cd60215/addn_hosts - 1 addresses
Oct 13 15:49:41 standalone.localdomain dnsmasq-dhcp[564724]: read /var/lib/neutron/dhcp/dfb2252b-9ac0-44da-93fe-e98b4cd60215/host
Oct 13 15:49:41 standalone.localdomain dnsmasq-dhcp[564724]: read /var/lib/neutron/dhcp/dfb2252b-9ac0-44da-93fe-e98b4cd60215/opts
Oct 13 15:49:41 standalone.localdomain podman[564828]: 2025-10-13 15:49:41.30060485 +0000 UTC m=+0.067984170 container kill 5911a09d9b18db4ee1435144923ae7d3aa40f9be2210dde282b987cadd0086fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfb2252b-9ac0-44da-93fe-e98b4cd60215, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:49:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:41.490 496978 INFO neutron.agent.dhcp.agent [None req-8b4f8984-2d44-400a-920a-306d970366be - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:38Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889111b20>], dns_domain=, dns_name=, extra_dhcp_opts=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889111160>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1889111b80>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1889111d90>], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889111e20>], id=67e617d5-3477-4e9d-b2c3-4f50ca761ce6, ip_allocation=immediate, mac_address=fa:16:3e:28:35:9a, name=tempest-new-port-name-1423031875, network_id=dfb2252b-9ac0-44da-93fe-e98b4cd60215, port_security_enabled=True, project_id=b590ef07afc04283817dba3e7ac7fe5f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['b12fa3ef-1513-43b4-8cd9-76991177073d'], standard_attr_id=2484, status=DOWN, tags=[], tenant_id=b590ef07afc04283817dba3e7ac7fe5f, updated_at=2025-10-13T15:49:41Z on network dfb2252b-9ac0-44da-93fe-e98b4cd60215
Oct 13 15:49:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:41.513 496978 INFO neutron.agent.linux.dhcp [None req-8b4f8984-2d44-400a-920a-306d970366be - - - - - -] Cannot apply dhcp option bootfile-name because it's ip_version 4 is not in port's address IP versions
Oct 13 15:49:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:41.548 496978 INFO neutron.agent.linux.dhcp [None req-8b4f8984-2d44-400a-920a-306d970366be - - - - - -] Cannot apply dhcp option server-ip-address because it's ip_version 4 is not in port's address IP versions
Oct 13 15:49:41 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:41.550 496978 INFO neutron.agent.linux.dhcp [None req-8b4f8984-2d44-400a-920a-306d970366be - - - - - -] Cannot apply dhcp option tftp-server because it's ip_version 4 is not in port's address IP versions
Oct 13 15:49:41 standalone.localdomain podman[467099]: time="2025-10-13T15:49:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:49:41 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:41.625 2 INFO neutron.agent.securitygroups_rpc [None req-3ddb436a-3d20-4a05-a9f6-d2ee94201c67 e52bc13f462f48aa918b14a7bdec0b0e b590ef07afc04283817dba3e7ac7fe5f - - default default] Security group member updated ['b12fa3ef-1513-43b4-8cd9-76991177073d']
Oct 13 15:49:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:49:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 418298 "" "Go-http-client/1.1"
Oct 13 15:49:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:49:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 50098 "" "Go-http-client/1.1"
Oct 13 15:49:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4062: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:41 standalone.localdomain dnsmasq[564724]: read /var/lib/neutron/dhcp/dfb2252b-9ac0-44da-93fe-e98b4cd60215/addn_hosts - 1 addresses
Oct 13 15:49:41 standalone.localdomain dnsmasq-dhcp[564724]: read /var/lib/neutron/dhcp/dfb2252b-9ac0-44da-93fe-e98b4cd60215/host
Oct 13 15:49:41 standalone.localdomain dnsmasq-dhcp[564724]: read /var/lib/neutron/dhcp/dfb2252b-9ac0-44da-93fe-e98b4cd60215/opts
Oct 13 15:49:41 standalone.localdomain podman[564866]: 2025-10-13 15:49:41.805879347 +0000 UTC m=+0.073751708 container kill 5911a09d9b18db4ee1435144923ae7d3aa40f9be2210dde282b987cadd0086fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfb2252b-9ac0-44da-93fe-e98b4cd60215, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 13 15:49:41 standalone.localdomain sshd[564199]: Failed password for root from 193.46.255.159 port 29778 ssh2
Oct 13 15:49:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:42.071 496978 INFO neutron.agent.dhcp.agent [None req-2f2ff4b2-bee7-4d1a-b778-6d5b48941c7a - - - - - -] DHCP configuration for ports {'67e617d5-3477-4e9d-b2c3-4f50ca761ce6'} is completed
Oct 13 15:49:42 standalone.localdomain dnsmasq[564724]: exiting on receipt of SIGTERM
Oct 13 15:49:42 standalone.localdomain podman[564905]: 2025-10-13 15:49:42.227308065 +0000 UTC m=+0.049889281 container kill 5911a09d9b18db4ee1435144923ae7d3aa40f9be2210dde282b987cadd0086fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfb2252b-9ac0-44da-93fe-e98b4cd60215, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:49:42 standalone.localdomain systemd[1]: libpod-5911a09d9b18db4ee1435144923ae7d3aa40f9be2210dde282b987cadd0086fc.scope: Deactivated successfully.
Oct 13 15:49:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:49:42 standalone.localdomain podman[564918]: 2025-10-13 15:49:42.301997561 +0000 UTC m=+0.062700817 container died 5911a09d9b18db4ee1435144923ae7d3aa40f9be2210dde282b987cadd0086fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfb2252b-9ac0-44da-93fe-e98b4cd60215, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:49:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:42Z|00811|binding|INFO|Removing iface tapb5c8af7b-46 ovn-installed in OVS
Oct 13 15:49:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:42Z|00812|binding|INFO|Removing lport b5c8af7b-4628-4de0-8eac-0df176719249 ovn-installed in OVS
Oct 13 15:49:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:42.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:42.303 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 6b0cb9e9-b7b8-4ab5-9a0c-a69f585ccb49 with type ""
Oct 13 15:49:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:42.305 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-dfb2252b-9ac0-44da-93fe-e98b4cd60215', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfb2252b-9ac0-44da-93fe-e98b4cd60215', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b590ef07afc04283817dba3e7ac7fe5f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc2a8b3d-afda-4492-a8d6-388704e69773, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=b5c8af7b-4628-4de0-8eac-0df176719249) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:42.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:42.309 378821 INFO neutron.agent.ovn.metadata.agent [-] Port b5c8af7b-4628-4de0-8eac-0df176719249 in datapath dfb2252b-9ac0-44da-93fe-e98b4cd60215 unbound from our chassis
Oct 13 15:49:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:42.312 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dfb2252b-9ac0-44da-93fe-e98b4cd60215 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:49:42 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:42.313 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[023deab1-524f-4479-a43a-7b46ee5351f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:42 standalone.localdomain podman[564925]: 2025-10-13 15:49:42.348411313 +0000 UTC m=+0.092013262 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 15:49:42 standalone.localdomain podman[564925]: 2025-10-13 15:49:42.36290693 +0000 UTC m=+0.106508919 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 15:49:42 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:49:42 standalone.localdomain podman[564918]: 2025-10-13 15:49:42.391271386 +0000 UTC m=+0.151974592 container cleanup 5911a09d9b18db4ee1435144923ae7d3aa40f9be2210dde282b987cadd0086fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfb2252b-9ac0-44da-93fe-e98b4cd60215, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:49:42 standalone.localdomain systemd[1]: libpod-conmon-5911a09d9b18db4ee1435144923ae7d3aa40f9be2210dde282b987cadd0086fc.scope: Deactivated successfully.
Oct 13 15:49:42 standalone.localdomain podman[564926]: 2025-10-13 15:49:42.435206402 +0000 UTC m=+0.181988589 container remove 5911a09d9b18db4ee1435144923ae7d3aa40f9be2210dde282b987cadd0086fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dfb2252b-9ac0-44da-93fe-e98b4cd60215, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 15:49:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:42.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:42 standalone.localdomain kernel: device tapb5c8af7b-46 left promiscuous mode
Oct 13 15:49:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:42.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:42.485 496978 INFO neutron.agent.dhcp.agent [None req-966afe6f-192c-4bad-98a2-ac30073f55b6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:42.485 496978 INFO neutron.agent.dhcp.agent [None req-966afe6f-192c-4bad-98a2-ac30073f55b6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:42.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:42 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:42.536 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:49:42 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:42Z|00813|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:49:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:42.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:42 standalone.localdomain ceph-mon[29756]: pgmap v4062: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:42 standalone.localdomain dnsmasq[564555]: read /var/lib/neutron/dhcp/35ac4f78-471c-41f8-b8ab-f9a7b0175257/addn_hosts - 0 addresses
Oct 13 15:49:42 standalone.localdomain podman[564986]: 2025-10-13 15:49:42.863883224 +0000 UTC m=+0.103332961 container kill 07b5fd62aeca48b1b18d9b00152ab0507f75ae24c3d39c9c6bfd71e76f3c6498 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:49:42 standalone.localdomain dnsmasq-dhcp[564555]: read /var/lib/neutron/dhcp/35ac4f78-471c-41f8-b8ab-f9a7b0175257/host
Oct 13 15:49:42 standalone.localdomain dnsmasq-dhcp[564555]: read /var/lib/neutron/dhcp/35ac4f78-471c-41f8-b8ab-f9a7b0175257/opts
Oct 13 15:49:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:49:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:49:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:49:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:49:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:49:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:49:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:49:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:49:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:49:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:49:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:49:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:49:43 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:43Z|00814|binding|INFO|Releasing lport 5a790146-6b07-4cf3-a9bf-8c790e3e5efc from this chassis (sb_readonly=0)
Oct 13 15:49:43 standalone.localdomain kernel: device tap5a790146-6b left promiscuous mode
Oct 13 15:49:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:43.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:43 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:43Z|00815|binding|INFO|Setting lport 5a790146-6b07-4cf3-a9bf-8c790e3e5efc down in Southbound
Oct 13 15:49:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:43.086 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-35ac4f78-471c-41f8-b8ab-f9a7b0175257', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35ac4f78-471c-41f8-b8ab-f9a7b0175257', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'adf6933f04d74c949ba5fc4f332031e4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d1a0e53-2894-4778-8935-8dd07ebd59ad, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=5a790146-6b07-4cf3-a9bf-8c790e3e5efc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:43.088 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 5a790146-6b07-4cf3-a9bf-8c790e3e5efc in datapath 35ac4f78-471c-41f8-b8ab-f9a7b0175257 unbound from our chassis
Oct 13 15:49:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:43.091 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 35ac4f78-471c-41f8-b8ab-f9a7b0175257, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:43 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:43.092 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[e86c3957-290f-4f67-bfa9-d9f9ae5a2729]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:43.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-6800e0c7d1655b445688274a607aebe7765951b102625cd4b1695869f0a176d9-merged.mount: Deactivated successfully.
Oct 13 15:49:43 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5911a09d9b18db4ee1435144923ae7d3aa40f9be2210dde282b987cadd0086fc-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:43 standalone.localdomain systemd[1]: run-netns-qdhcp\x2ddfb2252b\x2d9ac0\x2d44da\x2d93fe\x2de98b4cd60215.mount: Deactivated successfully.
Oct 13 15:49:43 standalone.localdomain unix_chkpwd[565007]: password check failed for user (root)
Oct 13 15:49:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:43.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4063: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:44 standalone.localdomain systemd[1]: tmp-crun.3dgKps.mount: Deactivated successfully.
Oct 13 15:49:44 standalone.localdomain podman[565024]: 2025-10-13 15:49:44.339298367 +0000 UTC m=+0.067498774 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:49:44 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:49:44 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:49:44 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:49:44 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:44Z|00816|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:49:44 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:44.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:44 standalone.localdomain ceph-mon[29756]: pgmap v4063: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:45 standalone.localdomain dnsmasq[564555]: exiting on receipt of SIGTERM
Oct 13 15:49:45 standalone.localdomain podman[565063]: 2025-10-13 15:49:45.103976041 +0000 UTC m=+0.065557964 container kill 07b5fd62aeca48b1b18d9b00152ab0507f75ae24c3d39c9c6bfd71e76f3c6498 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:49:45 standalone.localdomain systemd[1]: libpod-07b5fd62aeca48b1b18d9b00152ab0507f75ae24c3d39c9c6bfd71e76f3c6498.scope: Deactivated successfully.
Oct 13 15:49:45 standalone.localdomain podman[565077]: 2025-10-13 15:49:45.183391322 +0000 UTC m=+0.066610166 container died 07b5fd62aeca48b1b18d9b00152ab0507f75ae24c3d39c9c6bfd71e76f3c6498 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:45 standalone.localdomain podman[565077]: 2025-10-13 15:49:45.212350446 +0000 UTC m=+0.095569200 container cleanup 07b5fd62aeca48b1b18d9b00152ab0507f75ae24c3d39c9c6bfd71e76f3c6498 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:49:45 standalone.localdomain systemd[1]: libpod-conmon-07b5fd62aeca48b1b18d9b00152ab0507f75ae24c3d39c9c6bfd71e76f3c6498.scope: Deactivated successfully.
Oct 13 15:49:45 standalone.localdomain podman[565081]: 2025-10-13 15:49:45.262943808 +0000 UTC m=+0.134049309 container remove 07b5fd62aeca48b1b18d9b00152ab0507f75ae24c3d39c9c6bfd71e76f3c6498 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-35ac4f78-471c-41f8-b8ab-f9a7b0175257, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:49:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:45.291 496978 INFO neutron.agent.dhcp.agent [None req-e66cfe66-9f47-422a-8cf8-6219bd7c201e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-15776ffcc7c3417e189b45b1b17fcedd341e363408d699bcf455f20c9dfbf0bf-merged.mount: Deactivated successfully.
Oct 13 15:49:45 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-07b5fd62aeca48b1b18d9b00152ab0507f75ae24c3d39c9c6bfd71e76f3c6498-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:45 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d35ac4f78\x2d471c\x2d41f8\x2db8ab\x2df9a7b0175257.mount: Deactivated successfully.
Oct 13 15:49:45 standalone.localdomain systemd-journald[48591]: Data hash table of /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal has a fill level at 75.0 (53723 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation.
Oct 13 15:49:45 standalone.localdomain systemd-journald[48591]: /run/log/journal/89829c24d904bea15dec4d2c9d1ee875/system.journal: Journal header limits reached or header out-of-date, rotating.
Oct 13 15:49:45 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 15:49:45 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:45.404 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:45 standalone.localdomain sshd[564199]: Failed password for root from 193.46.255.159 port 29778 ssh2
Oct 13 15:49:45 standalone.localdomain rsyslogd[56156]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 15:49:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4064: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:46 standalone.localdomain ceph-mon[29756]: pgmap v4064: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:47 standalone.localdomain sshd[564199]: Received disconnect from 193.46.255.159 port 29778:11:  [preauth]
Oct 13 15:49:47 standalone.localdomain sshd[564199]: Disconnected from authenticating user root 193.46.255.159 port 29778 [preauth]
Oct 13 15:49:47 standalone.localdomain sshd[564199]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 13 15:49:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:47.234 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:46Z, description=, device_id=f39e341a-5081-42c4-839f-f241afded935, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188907e9a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188907e6a0>], id=272bec0c-0f53-43db-81db-f5f7bb4f0675, ip_allocation=immediate, mac_address=fa:16:3e:32:57:27, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2504, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:49:46Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:49:47 standalone.localdomain sshd[565111]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:49:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:47.335 496978 INFO neutron.agent.linux.ip_lib [None req-107ac522-79cb-46f5-9c3a-4226c2f6cd10 - - - - - -] Device tapa1fd74d5-23 cannot be used as it has no MAC address
Oct 13 15:49:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:47.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:47 standalone.localdomain kernel: device tapa1fd74d5-23 entered promiscuous mode
Oct 13 15:49:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:47Z|00817|binding|INFO|Claiming lport a1fd74d5-2309-4182-982b-b2b7bce3fe41 for this chassis.
Oct 13 15:49:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:47Z|00818|binding|INFO|a1fd74d5-2309-4182-982b-b2b7bce3fe41: Claiming unknown
Oct 13 15:49:47 standalone.localdomain NetworkManager[5962]: <info>  [1760370587.3728] manager: (tapa1fd74d5-23): new Generic device (/org/freedesktop/NetworkManager/Devices/138)
Oct 13 15:49:47 standalone.localdomain systemd-udevd[565133]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:49:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:47.379 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-a368014c-0d95-47fb-b354-f20b74027268', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a368014c-0d95-47fb-b354-f20b74027268', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7b9cdfef53b462b964a0aa49f0147ee', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e0996a4-7ddd-47d7-b812-65c5d0135796, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=a1fd74d5-2309-4182-982b-b2b7bce3fe41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:47.379 378821 INFO neutron.agent.ovn.metadata.agent [-] Port a1fd74d5-2309-4182-982b-b2b7bce3fe41 in datapath a368014c-0d95-47fb-b354-f20b74027268 bound to our chassis
Oct 13 15:49:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:47.381 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 32bf3916-4039-4dc1-b1d1-bf5eabedacf9 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:49:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:47.381 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a368014c-0d95-47fb-b354-f20b74027268, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:47.382 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[03816106-aa5e-41b5-8139-aa34a5719633]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:47Z|00819|binding|INFO|Setting lport a1fd74d5-2309-4182-982b-b2b7bce3fe41 ovn-installed in OVS
Oct 13 15:49:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:47Z|00820|binding|INFO|Setting lport a1fd74d5-2309-4182-982b-b2b7bce3fe41 up in Southbound
Oct 13 15:49:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:47.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapa1fd74d5-23: No such device
Oct 13 15:49:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:47.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapa1fd74d5-23: No such device
Oct 13 15:49:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapa1fd74d5-23: No such device
Oct 13 15:49:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapa1fd74d5-23: No such device
Oct 13 15:49:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapa1fd74d5-23: No such device
Oct 13 15:49:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapa1fd74d5-23: No such device
Oct 13 15:49:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapa1fd74d5-23: No such device
Oct 13 15:49:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tapa1fd74d5-23: No such device
Oct 13 15:49:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:47.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:47 standalone.localdomain podman[565147]: 2025-10-13 15:49:47.482550422 +0000 UTC m=+0.059322152 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:47 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:49:47 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:49:47 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:49:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:47.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:49:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:47.777 496978 INFO neutron.agent.dhcp.agent [None req-3930403e-f3a0-4be3-900e-b62268330933 - - - - - -] DHCP configuration for ports {'272bec0c-0f53-43db-81db-f5f7bb4f0675'} is completed
Oct 13 15:49:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4065: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:48 standalone.localdomain unix_chkpwd[565205]: password check failed for user (root)
Oct 13 15:49:48 standalone.localdomain sshd[565111]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 13 15:49:48 standalone.localdomain podman[565228]: 
Oct 13 15:49:48 standalone.localdomain podman[565228]: 2025-10-13 15:49:48.476632787 +0000 UTC m=+0.098239393 container create b2d5634eabe2594167a9c787dca641ffb1d8fd0f0d23aea5d67f95497196ced6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a368014c-0d95-47fb-b354-f20b74027268, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:49:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:49:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:49:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:49:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:49:48 standalone.localdomain podman[565228]: 2025-10-13 15:49:48.428084679 +0000 UTC m=+0.049691335 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:48 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:49:48 standalone.localdomain systemd[1]: Started libpod-conmon-b2d5634eabe2594167a9c787dca641ffb1d8fd0f0d23aea5d67f95497196ced6.scope.
Oct 13 15:49:48 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:48 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fed44da8941ee34497f502ebfad777bf128f707a4ffb87dcb0fb7a7e9587a6d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:48 standalone.localdomain podman[565228]: 2025-10-13 15:49:48.582420323 +0000 UTC m=+0.204026909 container init b2d5634eabe2594167a9c787dca641ffb1d8fd0f0d23aea5d67f95497196ced6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a368014c-0d95-47fb-b354-f20b74027268, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 15:49:48 standalone.localdomain dnsmasq[565299]: started, version 2.85 cachesize 150
Oct 13 15:49:48 standalone.localdomain dnsmasq[565299]: DNS service limited to local subnets
Oct 13 15:49:48 standalone.localdomain dnsmasq[565299]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:48 standalone.localdomain dnsmasq[565299]: warning: no upstream servers configured
Oct 13 15:49:48 standalone.localdomain dnsmasq-dhcp[565299]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:49:48 standalone.localdomain dnsmasq[565299]: read /var/lib/neutron/dhcp/a368014c-0d95-47fb-b354-f20b74027268/addn_hosts - 0 addresses
Oct 13 15:49:48 standalone.localdomain dnsmasq-dhcp[565299]: read /var/lib/neutron/dhcp/a368014c-0d95-47fb-b354-f20b74027268/host
Oct 13 15:49:48 standalone.localdomain dnsmasq-dhcp[565299]: read /var/lib/neutron/dhcp/a368014c-0d95-47fb-b354-f20b74027268/opts
Oct 13 15:49:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:48.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:48 standalone.localdomain podman[565250]: 2025-10-13 15:49:48.621465858 +0000 UTC m=+0.086336466 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T16:11:22, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.component=openstack-swift-account-container, container_name=swift_account_server, managed_by=tripleo_ansible, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, release=1, summary=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, distribution-scope=public, batch=17.1_20250721.1, name=rhosp17/openstack-swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true)
Oct 13 15:49:48 standalone.localdomain podman[565243]: 2025-10-13 15:49:48.638859725 +0000 UTC m=+0.110762991 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:48 standalone.localdomain podman[565242]: 2025-10-13 15:49:48.682574983 +0000 UTC m=+0.158985037 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, version=17.1.9, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, container_name=swift_object_server, summary=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, batch=17.1_20250721.1, release=1)
Oct 13 15:49:48 standalone.localdomain podman[565228]: 2025-10-13 15:49:48.693345876 +0000 UTC m=+0.314952492 container start b2d5634eabe2594167a9c787dca641ffb1d8fd0f0d23aea5d67f95497196ced6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a368014c-0d95-47fb-b354-f20b74027268, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 13 15:49:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:48.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:48 standalone.localdomain podman[565244]: 2025-10-13 15:49:48.739236283 +0000 UTC m=+0.208939191 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-07-21T15:54:32, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, batch=17.1_20250721.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, io.openshift.expose-services=, com.redhat.component=openstack-swift-container-container, distribution-scope=public)
Oct 13 15:49:48 standalone.localdomain podman[565243]: 2025-10-13 15:49:48.756002731 +0000 UTC m=+0.227906017 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3)
Oct 13 15:49:48 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:49:48 standalone.localdomain ceph-mon[29756]: pgmap v4065: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:48 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:48.814 496978 INFO neutron.agent.dhcp.agent [None req-09fe9dca-a69e-4ee3-9826-0203f1dd305b - - - - - -] DHCP configuration for ports {'3c543a60-d462-473e-a461-6977fcfaf939'} is completed
Oct 13 15:49:48 standalone.localdomain podman[565250]: 2025-10-13 15:49:48.835711491 +0000 UTC m=+0.300582099 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, version=17.1.9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, container_name=swift_account_server, release=1, managed_by=tripleo_ansible, io.buildah.version=1.33.12, build-date=2025-07-21T16:11:22, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64)
Oct 13 15:49:48 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:49:48 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:48.860 496978 INFO neutron.agent.linux.ip_lib [None req-d77051bf-07e1-4d07-8e69-c7be7993a770 - - - - - -] Device tap8b780136-2a cannot be used as it has no MAC address
Oct 13 15:49:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:48.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:48 standalone.localdomain kernel: device tap8b780136-2a entered promiscuous mode
Oct 13 15:49:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:48Z|00821|binding|INFO|Claiming lport 8b780136-2aa0-4c14-a76c-6e341b937017 for this chassis.
Oct 13 15:49:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:48Z|00822|binding|INFO|8b780136-2aa0-4c14-a76c-6e341b937017: Claiming unknown
Oct 13 15:49:48 standalone.localdomain NetworkManager[5962]: <info>  [1760370588.8860] manager: (tap8b780136-2a): new Generic device (/org/freedesktop/NetworkManager/Devices/139)
Oct 13 15:49:48 standalone.localdomain systemd-udevd[565135]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:49:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:48.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:48 standalone.localdomain podman[565256]: 2025-10-13 15:49:48.839860689 +0000 UTC m=+0.299826927 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:49:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:48.897 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-80e2d217-2be0-4f94-a8cc-8d8662df1dfa', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80e2d217-2be0-4f94-a8cc-8d8662df1dfa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e44725d777a49ae8c7ac32962e181eb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02fa8594-8f78-47b5-a222-0789ffe15773, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=8b780136-2aa0-4c14-a76c-6e341b937017) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:48.898 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 8b780136-2aa0-4c14-a76c-6e341b937017 in datapath 80e2d217-2be0-4f94-a8cc-8d8662df1dfa bound to our chassis
Oct 13 15:49:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:48.899 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 80e2d217-2be0-4f94-a8cc-8d8662df1dfa or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:49:48 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:48.900 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[95941756-4e19-4f6d-9912-0102a642cab9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8b780136-2a: No such device
Oct 13 15:49:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8b780136-2a: No such device
Oct 13 15:49:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:48Z|00823|binding|INFO|Setting lport 8b780136-2aa0-4c14-a76c-6e341b937017 ovn-installed in OVS
Oct 13 15:49:48 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:48Z|00824|binding|INFO|Setting lport 8b780136-2aa0-4c14-a76c-6e341b937017 up in Southbound
Oct 13 15:49:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:48.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8b780136-2a: No such device
Oct 13 15:49:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:48.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8b780136-2a: No such device
Oct 13 15:49:48 standalone.localdomain podman[565256]: 2025-10-13 15:49:48.929020501 +0000 UTC m=+0.388986799 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:49:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8b780136-2a: No such device
Oct 13 15:49:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8b780136-2a: No such device
Oct 13 15:49:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8b780136-2a: No such device
Oct 13 15:49:48 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8b780136-2a: No such device
Oct 13 15:49:48 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:49:48 standalone.localdomain podman[565244]: 2025-10-13 15:49:48.95687036 +0000 UTC m=+0.426573308 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, summary=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., container_name=swift_container_server, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-07-21T15:54:32, batch=17.1_20250721.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, managed_by=tripleo_ansible, release=1)
Oct 13 15:49:48 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:49:48 standalone.localdomain podman[565242]: 2025-10-13 15:49:48.979801818 +0000 UTC m=+0.456211822 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.component=openstack-swift-object-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, release=1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc.)
Oct 13 15:49:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:48.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:48 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:49:49 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:49.291 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:49Z, description=, device_id=f39e341a-5081-42c4-839f-f241afded935, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889119c10>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889119280>], id=ee619a3d-b6bc-479a-b3e8-dbba19ceaa64, ip_allocation=immediate, mac_address=fa:16:3e:99:29:da, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:49:44Z, description=, dns_domain=, id=a368014c-0d95-47fb-b354-f20b74027268, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingNegativeTest-872730407-network, port_security_enabled=True, project_id=d7b9cdfef53b462b964a0aa49f0147ee, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=3418, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2491, status=ACTIVE, subnets=['fd0c64c1-a356-4a08-aef7-7d44f040622d'], tags=[], tenant_id=d7b9cdfef53b462b964a0aa49f0147ee, updated_at=2025-10-13T15:49:45Z, vlan_transparent=None, network_id=a368014c-0d95-47fb-b354-f20b74027268, port_security_enabled=False, project_id=d7b9cdfef53b462b964a0aa49f0147ee, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2509, status=DOWN, tags=[], tenant_id=d7b9cdfef53b462b964a0aa49f0147ee, updated_at=2025-10-13T15:49:49Z on network a368014c-0d95-47fb-b354-f20b74027268
Oct 13 15:49:49 standalone.localdomain podman[565438]: 2025-10-13 15:49:49.552333651 +0000 UTC m=+0.070874528 container kill b2d5634eabe2594167a9c787dca641ffb1d8fd0f0d23aea5d67f95497196ced6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a368014c-0d95-47fb-b354-f20b74027268, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:49:49 standalone.localdomain dnsmasq[565299]: read /var/lib/neutron/dhcp/a368014c-0d95-47fb-b354-f20b74027268/addn_hosts - 1 addresses
Oct 13 15:49:49 standalone.localdomain dnsmasq-dhcp[565299]: read /var/lib/neutron/dhcp/a368014c-0d95-47fb-b354-f20b74027268/host
Oct 13 15:49:49 standalone.localdomain dnsmasq-dhcp[565299]: read /var/lib/neutron/dhcp/a368014c-0d95-47fb-b354-f20b74027268/opts
Oct 13 15:49:49 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:49.676 2 INFO neutron.agent.securitygroups_rpc [None req-120790da-563c-4bc5-88b4-82faa882aa00 a0a0053b1e624ba590c68b85c00f7bd4 0e44725d777a49ae8c7ac32962e181eb - - default default] Security group member updated ['d29d933e-8f6e-48c4-9edc-1def4c87f9cf']
Oct 13 15:49:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4066: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:49 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:49.874 496978 INFO neutron.agent.dhcp.agent [None req-fd33c757-1d8d-4935-81ac-01c1461d4064 - - - - - -] DHCP configuration for ports {'ee619a3d-b6bc-479a-b3e8-dbba19ceaa64'} is completed
Oct 13 15:49:50 standalone.localdomain podman[565485]: 
Oct 13 15:49:50 standalone.localdomain podman[565485]: 2025-10-13 15:49:50.051296203 +0000 UTC m=+0.116108335 container create 8dc62648fb60ed6734512a22f2ed0eb672ec3179ea9fb7c297243c58e2dcf221 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80e2d217-2be0-4f94-a8cc-8d8662df1dfa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:49:50 standalone.localdomain podman[565485]: 2025-10-13 15:49:49.985263414 +0000 UTC m=+0.050075606 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:49:50 standalone.localdomain systemd[1]: Started libpod-conmon-8dc62648fb60ed6734512a22f2ed0eb672ec3179ea9fb7c297243c58e2dcf221.scope.
Oct 13 15:49:50 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:49:50 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8c1d879ed8f027bc138d10fff6404ee12fde846eebbb97ba1824fe6b87ccba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:49:50 standalone.localdomain podman[565485]: 2025-10-13 15:49:50.119369044 +0000 UTC m=+0.184181206 container init 8dc62648fb60ed6734512a22f2ed0eb672ec3179ea9fb7c297243c58e2dcf221 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80e2d217-2be0-4f94-a8cc-8d8662df1dfa, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:49:50 standalone.localdomain podman[565485]: 2025-10-13 15:49:50.128748334 +0000 UTC m=+0.193560496 container start 8dc62648fb60ed6734512a22f2ed0eb672ec3179ea9fb7c297243c58e2dcf221 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80e2d217-2be0-4f94-a8cc-8d8662df1dfa, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:49:50 standalone.localdomain dnsmasq[565503]: started, version 2.85 cachesize 150
Oct 13 15:49:50 standalone.localdomain dnsmasq[565503]: DNS service limited to local subnets
Oct 13 15:49:50 standalone.localdomain dnsmasq[565503]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:49:50 standalone.localdomain dnsmasq[565503]: warning: no upstream servers configured
Oct 13 15:49:50 standalone.localdomain dnsmasq-dhcp[565503]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:49:50 standalone.localdomain dnsmasq[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/addn_hosts - 0 addresses
Oct 13 15:49:50 standalone.localdomain dnsmasq-dhcp[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/host
Oct 13 15:49:50 standalone.localdomain dnsmasq-dhcp[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/opts
Oct 13 15:49:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:50.195 496978 INFO neutron.agent.dhcp.agent [None req-1d594166-635e-47e7-8644-cbb667de232f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:49Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188900d4f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188900de20>], id=08427adf-9650-44ee-b3e7-c209767ea447, ip_allocation=immediate, mac_address=fa:16:3e:e9:0f:87, name=tempest-ExtraDHCPOptionsTestJSON-1315673039, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:49:46Z, description=, dns_domain=, id=80e2d217-2be0-4f94-a8cc-8d8662df1dfa, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsTestJSON-test-network-2107909802, port_security_enabled=True, project_id=0e44725d777a49ae8c7ac32962e181eb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=2390, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2503, status=ACTIVE, subnets=['815df10a-d3f7-4970-9d6f-c372bff2c778'], tags=[], tenant_id=0e44725d777a49ae8c7ac32962e181eb, updated_at=2025-10-13T15:49:48Z, vlan_transparent=None, network_id=80e2d217-2be0-4f94-a8cc-8d8662df1dfa, port_security_enabled=True, project_id=0e44725d777a49ae8c7ac32962e181eb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d29d933e-8f6e-48c4-9edc-1def4c87f9cf'], standard_attr_id=2510, status=DOWN, tags=[], tenant_id=0e44725d777a49ae8c7ac32962e181eb, updated_at=2025-10-13T15:49:49Z on network 80e2d217-2be0-4f94-a8cc-8d8662df1dfa
Oct 13 15:49:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:50.311 496978 INFO neutron.agent.dhcp.agent [None req-f2ea0c94-0dcb-4518-8670-ffd69df30d61 - - - - - -] DHCP configuration for ports {'e46b92e5-ec20-4ebe-b91a-b5eb9e91794f'} is completed
Oct 13 15:49:50 standalone.localdomain sshd[565111]: Failed password for root from 193.46.255.159 port 24352 ssh2
Oct 13 15:49:50 standalone.localdomain dnsmasq[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/addn_hosts - 1 addresses
Oct 13 15:49:50 standalone.localdomain dnsmasq-dhcp[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/host
Oct 13 15:49:50 standalone.localdomain dnsmasq-dhcp[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/opts
Oct 13 15:49:50 standalone.localdomain podman[565521]: 2025-10-13 15:49:50.381522256 +0000 UTC m=+0.053141501 container kill 8dc62648fb60ed6734512a22f2ed0eb672ec3179ea9fb7c297243c58e2dcf221 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80e2d217-2be0-4f94-a8cc-8d8662df1dfa, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:49:50 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:50.450 2 INFO neutron.agent.securitygroups_rpc [None req-d8795e69-7533-4514-9628-458d8a81ae9a a0a0053b1e624ba590c68b85c00f7bd4 0e44725d777a49ae8c7ac32962e181eb - - default default] Security group member updated ['d29d933e-8f6e-48c4-9edc-1def4c87f9cf']
Oct 13 15:49:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:50.559 496978 INFO neutron.agent.dhcp.agent [None req-a82b171b-e53d-4d57-b78f-367538a1619c - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:50Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890353d0>], dns_domain=, dns_name=, extra_dhcp_opts=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890356d0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1889035130>, <neutron.agent.linux.dhcp.DictModel object at 0x7f18890352e0>], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889035e50>], id=8333ac2c-2dc9-418e-b360-5759c6545118, ip_allocation=immediate, mac_address=fa:16:3e:06:e6:3d, name=tempest-ExtraDHCPOptionsTestJSON-995006957, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:49:46Z, description=, dns_domain=, id=80e2d217-2be0-4f94-a8cc-8d8662df1dfa, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsTestJSON-test-network-2107909802, port_security_enabled=True, project_id=0e44725d777a49ae8c7ac32962e181eb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=2390, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2503, status=ACTIVE, subnets=['815df10a-d3f7-4970-9d6f-c372bff2c778'], tags=[], tenant_id=0e44725d777a49ae8c7ac32962e181eb, updated_at=2025-10-13T15:49:48Z, vlan_transparent=None, network_id=80e2d217-2be0-4f94-a8cc-8d8662df1dfa, port_security_enabled=True, project_id=0e44725d777a49ae8c7ac32962e181eb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d29d933e-8f6e-48c4-9edc-1def4c87f9cf'], standard_attr_id=2511, status=DOWN, tags=[], tenant_id=0e44725d777a49ae8c7ac32962e181eb, updated_at=2025-10-13T15:49:50Z on network 80e2d217-2be0-4f94-a8cc-8d8662df1dfa
Oct 13 15:49:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:50.761 496978 INFO neutron.agent.dhcp.agent [None req-fbf86f40-5a72-40ee-aeb6-13dd32fdd437 - - - - - -] DHCP configuration for ports {'08427adf-9650-44ee-b3e7-c209767ea447'} is completed
Oct 13 15:49:50 standalone.localdomain ceph-mon[29756]: pgmap v4066: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:50 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:50.883 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:49Z, description=, device_id=f39e341a-5081-42c4-839f-f241afded935, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188926d5b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888f326d0>], id=ee619a3d-b6bc-479a-b3e8-dbba19ceaa64, ip_allocation=immediate, mac_address=fa:16:3e:99:29:da, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:49:44Z, description=, dns_domain=, id=a368014c-0d95-47fb-b354-f20b74027268, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingNegativeTest-872730407-network, port_security_enabled=True, project_id=d7b9cdfef53b462b964a0aa49f0147ee, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=3418, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2491, status=ACTIVE, subnets=['fd0c64c1-a356-4a08-aef7-7d44f040622d'], tags=[], tenant_id=d7b9cdfef53b462b964a0aa49f0147ee, updated_at=2025-10-13T15:49:45Z, vlan_transparent=None, network_id=a368014c-0d95-47fb-b354-f20b74027268, port_security_enabled=False, project_id=d7b9cdfef53b462b964a0aa49f0147ee, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2509, status=DOWN, tags=[], tenant_id=d7b9cdfef53b462b964a0aa49f0147ee, updated_at=2025-10-13T15:49:49Z on network a368014c-0d95-47fb-b354-f20b74027268
Oct 13 15:49:50 standalone.localdomain dnsmasq[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/addn_hosts - 2 addresses
Oct 13 15:49:50 standalone.localdomain dnsmasq-dhcp[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/host
Oct 13 15:49:50 standalone.localdomain dnsmasq-dhcp[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/opts
Oct 13 15:49:50 standalone.localdomain podman[565560]: 2025-10-13 15:49:50.944583457 +0000 UTC m=+0.067381301 container kill 8dc62648fb60ed6734512a22f2ed0eb672ec3179ea9fb7c297243c58e2dcf221 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80e2d217-2be0-4f94-a8cc-8d8662df1dfa, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:49:51 standalone.localdomain podman[565593]: 2025-10-13 15:49:51.141172325 +0000 UTC m=+0.076510723 container kill b2d5634eabe2594167a9c787dca641ffb1d8fd0f0d23aea5d67f95497196ced6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a368014c-0d95-47fb-b354-f20b74027268, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:49:51 standalone.localdomain dnsmasq[565299]: read /var/lib/neutron/dhcp/a368014c-0d95-47fb-b354-f20b74027268/addn_hosts - 1 addresses
Oct 13 15:49:51 standalone.localdomain dnsmasq-dhcp[565299]: read /var/lib/neutron/dhcp/a368014c-0d95-47fb-b354-f20b74027268/host
Oct 13 15:49:51 standalone.localdomain dnsmasq-dhcp[565299]: read /var/lib/neutron/dhcp/a368014c-0d95-47fb-b354-f20b74027268/opts
Oct 13 15:49:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:51.214 496978 INFO neutron.agent.dhcp.agent [None req-6b2daf3c-d851-422a-92c1-c3270b91b016 - - - - - -] DHCP configuration for ports {'8333ac2c-2dc9-418e-b360-5759c6545118'} is completed
Oct 13 15:49:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:51.345 496978 INFO neutron.agent.dhcp.agent [None req-869f1366-4722-4562-a453-989c3081b761 - - - - - -] DHCP configuration for ports {'ee619a3d-b6bc-479a-b3e8-dbba19ceaa64'} is completed
Oct 13 15:49:51 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:51.673 2 INFO neutron.agent.securitygroups_rpc [None req-0abb8789-edd3-41f0-84b9-89bdb7f2873e a0a0053b1e624ba590c68b85c00f7bd4 0e44725d777a49ae8c7ac32962e181eb - - default default] Security group member updated ['d29d933e-8f6e-48c4-9edc-1def4c87f9cf']
Oct 13 15:49:51 standalone.localdomain unix_chkpwd[565619]: password check failed for user (root)
Oct 13 15:49:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4067: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:51 standalone.localdomain dnsmasq[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/addn_hosts - 1 addresses
Oct 13 15:49:51 standalone.localdomain dnsmasq-dhcp[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/host
Oct 13 15:49:51 standalone.localdomain podman[565637]: 2025-10-13 15:49:51.9395871 +0000 UTC m=+0.060341944 container kill 8dc62648fb60ed6734512a22f2ed0eb672ec3179ea9fb7c297243c58e2dcf221 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80e2d217-2be0-4f94-a8cc-8d8662df1dfa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:49:51 standalone.localdomain dnsmasq-dhcp[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/opts
Oct 13 15:49:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:52.128 496978 INFO neutron.agent.dhcp.agent [None req-b20d0360-a20c-45d3-a174-0bd17f2883ab - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:49Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f188916ce80>], dns_domain=, dns_name=, extra_dhcp_opts=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4b3d0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4ba00>, <neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4bf10>], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e4b670>], id=08427adf-9650-44ee-b3e7-c209767ea447, ip_allocation=immediate, mac_address=fa:16:3e:e9:0f:87, name=tempest-new-port-name-753160051, network_id=80e2d217-2be0-4f94-a8cc-8d8662df1dfa, port_security_enabled=True, project_id=0e44725d777a49ae8c7ac32962e181eb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['d29d933e-8f6e-48c4-9edc-1def4c87f9cf'], standard_attr_id=2510, status=DOWN, tags=[], tenant_id=0e44725d777a49ae8c7ac32962e181eb, updated_at=2025-10-13T15:49:51Z on network 80e2d217-2be0-4f94-a8cc-8d8662df1dfa
Oct 13 15:49:52 standalone.localdomain dnsmasq[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/addn_hosts - 1 addresses
Oct 13 15:49:52 standalone.localdomain dnsmasq-dhcp[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/host
Oct 13 15:49:52 standalone.localdomain podman[565675]: 2025-10-13 15:49:52.322450588 +0000 UTC m=+0.052825692 container kill 8dc62648fb60ed6734512a22f2ed0eb672ec3179ea9fb7c297243c58e2dcf221 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80e2d217-2be0-4f94-a8cc-8d8662df1dfa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:49:52 standalone.localdomain dnsmasq-dhcp[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/opts
Oct 13 15:49:52 standalone.localdomain systemd[1]: tmp-crun.R7ssAT.mount: Deactivated successfully.
Oct 13 15:49:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:49:52 standalone.localdomain podman[565689]: 2025-10-13 15:49:52.432234517 +0000 UTC m=+0.074064398 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:49:52 standalone.localdomain podman[565689]: 2025-10-13 15:49:52.468168206 +0000 UTC m=+0.109998057 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:49:52 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:49:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:52.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:52 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:49:52.584 2 INFO neutron.agent.securitygroups_rpc [None req-af87b0b8-643a-42da-8918-9af633df608f a0a0053b1e624ba590c68b85c00f7bd4 0e44725d777a49ae8c7ac32962e181eb - - default default] Security group member updated ['d29d933e-8f6e-48c4-9edc-1def4c87f9cf']
Oct 13 15:49:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:52.720 496978 INFO neutron.agent.dhcp.agent [None req-bef515c2-a6c9-480b-874e-bec47dd6f83c - - - - - -] DHCP configuration for ports {'08427adf-9650-44ee-b3e7-c209767ea447'} is completed
Oct 13 15:49:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:49:52 standalone.localdomain ceph-mon[29756]: pgmap v4067: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:53 standalone.localdomain dnsmasq[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/addn_hosts - 0 addresses
Oct 13 15:49:53 standalone.localdomain dnsmasq-dhcp[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/host
Oct 13 15:49:53 standalone.localdomain dnsmasq-dhcp[565503]: read /var/lib/neutron/dhcp/80e2d217-2be0-4f94-a8cc-8d8662df1dfa/opts
Oct 13 15:49:53 standalone.localdomain podman[565730]: 2025-10-13 15:49:53.003549182 +0000 UTC m=+0.073917153 container kill 8dc62648fb60ed6734512a22f2ed0eb672ec3179ea9fb7c297243c58e2dcf221 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80e2d217-2be0-4f94-a8cc-8d8662df1dfa, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:49:53 standalone.localdomain systemd[1]: tmp-crun.7RmdJ0.mount: Deactivated successfully.
Oct 13 15:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:49:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:49:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:53Z|00825|binding|INFO|Removing iface tap8b780136-2a ovn-installed in OVS
Oct 13 15:49:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:53.332 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 28f90f1d-981c-4ddb-9b52-c76a4f529554 with type ""
Oct 13 15:49:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:53Z|00826|binding|INFO|Removing lport 8b780136-2aa0-4c14-a76c-6e341b937017 ovn-installed in OVS
Oct 13 15:49:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:53.334 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-80e2d217-2be0-4f94-a8cc-8d8662df1dfa', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80e2d217-2be0-4f94-a8cc-8d8662df1dfa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e44725d777a49ae8c7ac32962e181eb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02fa8594-8f78-47b5-a222-0789ffe15773, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=8b780136-2aa0-4c14-a76c-6e341b937017) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:53.336 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 8b780136-2aa0-4c14-a76c-6e341b937017 in datapath 80e2d217-2be0-4f94-a8cc-8d8662df1dfa unbound from our chassis
Oct 13 15:49:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:53.339 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 80e2d217-2be0-4f94-a8cc-8d8662df1dfa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:53 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:53.340 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[d63687e2-b48a-4c42-a864-e1812bda9cdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:53.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:53 standalone.localdomain dnsmasq[565503]: exiting on receipt of SIGTERM
Oct 13 15:49:53 standalone.localdomain podman[565769]: 2025-10-13 15:49:53.471582489 +0000 UTC m=+0.064537883 container kill 8dc62648fb60ed6734512a22f2ed0eb672ec3179ea9fb7c297243c58e2dcf221 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80e2d217-2be0-4f94-a8cc-8d8662df1dfa, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:49:53 standalone.localdomain systemd[1]: libpod-8dc62648fb60ed6734512a22f2ed0eb672ec3179ea9fb7c297243c58e2dcf221.scope: Deactivated successfully.
Oct 13 15:49:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31889 DF PROTO=TCP SPT=40598 DPT=9102 SEQ=3037410951 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD46CAF0000000001030307) 
Oct 13 15:49:53 standalone.localdomain podman[565785]: 2025-10-13 15:49:53.559060229 +0000 UTC m=+0.061870721 container died 8dc62648fb60ed6734512a22f2ed0eb672ec3179ea9fb7c297243c58e2dcf221 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80e2d217-2be0-4f94-a8cc-8d8662df1dfa, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:49:53 standalone.localdomain podman[565785]: 2025-10-13 15:49:53.6180654 +0000 UTC m=+0.120875862 container remove 8dc62648fb60ed6734512a22f2ed0eb672ec3179ea9fb7c297243c58e2dcf221 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-80e2d217-2be0-4f94-a8cc-8d8662df1dfa, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:49:53 standalone.localdomain kernel: device tap8b780136-2a left promiscuous mode
Oct 13 15:49:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:53.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:53.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:53 standalone.localdomain systemd[1]: libpod-conmon-8dc62648fb60ed6734512a22f2ed0eb672ec3179ea9fb7c297243c58e2dcf221.scope: Deactivated successfully.
Oct 13 15:49:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:53.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:53.718 496978 INFO neutron.agent.dhcp.agent [None req-0e295817-8d33-4567-9d8f-27e45c709e33 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:53 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:53.719 496978 INFO neutron.agent.dhcp.agent [None req-0e295817-8d33-4567-9d8f-27e45c709e33 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:53 standalone.localdomain sshd[565111]: Failed password for root from 193.46.255.159 port 24352 ssh2
Oct 13 15:49:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4068: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:53 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:53Z|00827|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:49:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:53.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bc8c1d879ed8f027bc138d10fff6404ee12fde846eebbb97ba1824fe6b87ccba-merged.mount: Deactivated successfully.
Oct 13 15:49:53 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8dc62648fb60ed6734512a22f2ed0eb672ec3179ea9fb7c297243c58e2dcf221-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:53 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d80e2d217\x2d2be0\x2d4f94\x2da8cc\x2d8d8662df1dfa.mount: Deactivated successfully.
Oct 13 15:49:54 standalone.localdomain systemd[1]: tmp-crun.D4GQxC.mount: Deactivated successfully.
Oct 13 15:49:54 standalone.localdomain podman[565828]: 2025-10-13 15:49:54.009867994 +0000 UTC m=+0.078878035 container kill b2d5634eabe2594167a9c787dca641ffb1d8fd0f0d23aea5d67f95497196ced6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a368014c-0d95-47fb-b354-f20b74027268, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:54 standalone.localdomain dnsmasq[565299]: read /var/lib/neutron/dhcp/a368014c-0d95-47fb-b354-f20b74027268/addn_hosts - 0 addresses
Oct 13 15:49:54 standalone.localdomain dnsmasq-dhcp[565299]: read /var/lib/neutron/dhcp/a368014c-0d95-47fb-b354-f20b74027268/host
Oct 13 15:49:54 standalone.localdomain dnsmasq-dhcp[565299]: read /var/lib/neutron/dhcp/a368014c-0d95-47fb-b354-f20b74027268/opts
Oct 13 15:49:54 standalone.localdomain account-server[114563]: 172.20.0.100 - - [13/Oct/2025:15:49:54 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "tx573b3a5739654eb0bd9a9-0068ed1fa2" "proxy-server 2" 0.0009 "-" 21 -
Oct 13 15:49:54 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: tx573b3a5739654eb0bd9a9-0068ed1fa2)
Oct 13 15:49:54 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: tx573b3a5739654eb0bd9a9-0068ed1fa2)
Oct 13 15:49:54 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:54Z|00828|binding|INFO|Releasing lport a1fd74d5-2309-4182-982b-b2b7bce3fe41 from this chassis (sb_readonly=0)
Oct 13 15:49:54 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:54Z|00829|binding|INFO|Setting lport a1fd74d5-2309-4182-982b-b2b7bce3fe41 down in Southbound
Oct 13 15:49:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:54.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:54 standalone.localdomain kernel: device tapa1fd74d5-23 left promiscuous mode
Oct 13 15:49:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:54.236 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-a368014c-0d95-47fb-b354-f20b74027268', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a368014c-0d95-47fb-b354-f20b74027268', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd7b9cdfef53b462b964a0aa49f0147ee', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e0996a4-7ddd-47d7-b812-65c5d0135796, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=a1fd74d5-2309-4182-982b-b2b7bce3fe41) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:49:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:54.238 378821 INFO neutron.agent.ovn.metadata.agent [-] Port a1fd74d5-2309-4182-982b-b2b7bce3fe41 in datapath a368014c-0d95-47fb-b354-f20b74027268 unbound from our chassis
Oct 13 15:49:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:54.241 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a368014c-0d95-47fb-b354-f20b74027268, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:49:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:49:54.242 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[b2bc191e-de40-4c4c-8d26-a74001c9fc47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:49:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:54.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31890 DF PROTO=TCP SPT=40598 DPT=9102 SEQ=3037410951 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD470B70000000001030307) 
Oct 13 15:49:54 standalone.localdomain ceph-mon[29756]: pgmap v4068: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:55 standalone.localdomain unix_chkpwd[565855]: password check failed for user (root)
Oct 13 15:49:55 standalone.localdomain snmpd[110332]: empty variable list in _query
Oct 13 15:49:55 standalone.localdomain podman[565869]: 2025-10-13 15:49:55.59879711 +0000 UTC m=+0.071356003 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:49:55 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:49:55 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:49:55 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:49:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4069: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:49:55Z|00830|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:49:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:55.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31891 DF PROTO=TCP SPT=40598 DPT=9102 SEQ=3037410951 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD478B60000000001030307) 
Oct 13 15:49:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:49:56 standalone.localdomain podman[565891]: 2025-10-13 15:49:56.828343293 +0000 UTC m=+0.096910372 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:49:56 standalone.localdomain podman[565891]: 2025-10-13 15:49:56.843952315 +0000 UTC m=+0.112519444 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:49:56 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:49:56 standalone.localdomain ceph-mon[29756]: pgmap v4069: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:57 standalone.localdomain podman[565931]: 2025-10-13 15:49:57.248175912 +0000 UTC m=+0.063415108 container kill b2d5634eabe2594167a9c787dca641ffb1d8fd0f0d23aea5d67f95497196ced6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a368014c-0d95-47fb-b354-f20b74027268, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:49:57 standalone.localdomain dnsmasq[565299]: exiting on receipt of SIGTERM
Oct 13 15:49:57 standalone.localdomain systemd[1]: libpod-b2d5634eabe2594167a9c787dca641ffb1d8fd0f0d23aea5d67f95497196ced6.scope: Deactivated successfully.
Oct 13 15:49:57 standalone.localdomain podman[565946]: 2025-10-13 15:49:57.336829339 +0000 UTC m=+0.064077369 container died b2d5634eabe2594167a9c787dca641ffb1d8fd0f0d23aea5d67f95497196ced6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a368014c-0d95-47fb-b354-f20b74027268, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:49:57 standalone.localdomain sshd[565111]: Failed password for root from 193.46.255.159 port 24352 ssh2
Oct 13 15:49:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2d5634eabe2594167a9c787dca641ffb1d8fd0f0d23aea5d67f95497196ced6-userdata-shm.mount: Deactivated successfully.
Oct 13 15:49:57 standalone.localdomain podman[565946]: 2025-10-13 15:49:57.390000361 +0000 UTC m=+0.117248331 container remove b2d5634eabe2594167a9c787dca641ffb1d8fd0f0d23aea5d67f95497196ced6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a368014c-0d95-47fb-b354-f20b74027268, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:49:57 standalone.localdomain systemd[1]: libpod-conmon-b2d5634eabe2594167a9c787dca641ffb1d8fd0f0d23aea5d67f95497196ced6.scope: Deactivated successfully.
Oct 13 15:49:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:57.415 496978 INFO neutron.agent.dhcp.agent [None req-d07c52de-845d-43c9-a54b-d0dcbb6fc27c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:57 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:49:57.416 496978 INFO neutron.agent.dhcp.agent [None req-d07c52de-845d-43c9-a54b-d0dcbb6fc27c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:49:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:57.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:49:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4070: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-9fed44da8941ee34497f502ebfad777bf128f707a4ffb87dcb0fb7a7e9587a6d-merged.mount: Deactivated successfully.
Oct 13 15:49:57 standalone.localdomain systemd[1]: run-netns-qdhcp\x2da368014c\x2d0d95\x2d47fb\x2db354\x2df20b74027268.mount: Deactivated successfully.
Oct 13 15:49:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:49:58.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:49:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:49:58 standalone.localdomain ceph-mon[29756]: pgmap v4070: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:49:58 standalone.localdomain podman[565970]: 2025-10-13 15:49:58.865835295 +0000 UTC m=+0.090513625 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 15:49:58 standalone.localdomain podman[565970]: 2025-10-13 15:49:58.87505047 +0000 UTC m=+0.099728790 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 13 15:49:58 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:49:59 standalone.localdomain sshd[565111]: Received disconnect from 193.46.255.159 port 24352:11:  [preauth]
Oct 13 15:49:59 standalone.localdomain sshd[565111]: Disconnected from authenticating user root 193.46.255.159 port 24352 [preauth]
Oct 13 15:49:59 standalone.localdomain sshd[565111]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 13 15:49:59 standalone.localdomain sshd[565988]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:49:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4071: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:00 standalone.localdomain unix_chkpwd[565991]: password check failed for user (root)
Oct 13 15:50:00 standalone.localdomain sshd[565988]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 13 15:50:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31892 DF PROTO=TCP SPT=40598 DPT=9102 SEQ=3037410951 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD488760000000001030307) 
Oct 13 15:50:00 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:00.756 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:49:59Z, description=, device_id=1ea8ec99-6306-4c13-bc63-675407329c09, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889119e80>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891190a0>], id=3fa0583f-a5f9-427c-bdb6-91bb835fbfad, ip_allocation=immediate, mac_address=fa:16:3e:97:f5:61, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2522, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:49:59Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:50:00 standalone.localdomain ceph-mon[29756]: pgmap v4071: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:00 standalone.localdomain podman[566010]: 2025-10-13 15:50:00.97535516 +0000 UTC m=+0.058672531 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:50:00 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:50:00 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:50:00 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:50:01 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:01.777 496978 INFO neutron.agent.dhcp.agent [None req-5b68db0f-79f3-40fe-98ab-c2e788d5fa32 - - - - - -] DHCP configuration for ports {'3fa0583f-a5f9-427c-bdb6-91bb835fbfad'} is completed
Oct 13 15:50:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4072: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:01 standalone.localdomain sudo[566032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:50:01 standalone.localdomain sudo[566032]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:50:01 standalone.localdomain sudo[566032]: pam_unix(sudo:session): session closed for user root
Oct 13 15:50:01 standalone.localdomain sudo[566050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:50:01 standalone.localdomain sudo[566050]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:50:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:02.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:02 standalone.localdomain sudo[566050]: pam_unix(sudo:session): session closed for user root
Oct 13 15:50:02 standalone.localdomain sshd[565988]: Failed password for root from 193.46.255.159 port 33226 ssh2
Oct 13 15:50:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:50:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:50:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:50:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:50:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:50:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:50:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:50:02 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:50:02 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev c25a7855-cc85-4268-b31b-8ef214eb678d (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:50:02 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev c25a7855-cc85-4268-b31b-8ef214eb678d (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:50:02 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event c25a7855-cc85-4268-b31b-8ef214eb678d (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:50:02 standalone.localdomain sudo[566099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:50:02 standalone.localdomain sudo[566099]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:50:02 standalone.localdomain sudo[566099]: pam_unix(sudo:session): session closed for user root
Oct 13 15:50:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:50:02 standalone.localdomain ceph-mon[29756]: pgmap v4072: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:50:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:50:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:50:02 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:50:03 standalone.localdomain unix_chkpwd[566117]: password check failed for user (root)
Oct 13 15:50:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:03.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4073: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:04 standalone.localdomain ceph-mon[29756]: pgmap v4073: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:05 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:50:05.136 2 INFO neutron.agent.securitygroups_rpc [None req-a1b71eb7-3407-4eb5-aa94-4a97befb1b81 be08220f08f940b6a6fca74c3865e72b ee3b60847f7e4cd3a4cb39299db12595 - - default default] Security group member updated ['75c21eba-3b80-42fe-babe-711d5726a710']
Oct 13 15:50:05 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:50:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:50:05 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:50:05 standalone.localdomain sshd[565988]: Failed password for root from 193.46.255.159 port 33226 ssh2
Oct 13 15:50:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4074: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:05 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:50:05.967 2 INFO neutron.agent.securitygroups_rpc [None req-872ea880-0684-4679-8cb3-c419ea5a6226 be08220f08f940b6a6fca74c3865e72b ee3b60847f7e4cd3a4cb39299db12595 - - default default] Security group member updated ['75c21eba-3b80-42fe-babe-711d5726a710']
Oct 13 15:50:06 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:50:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:06.969 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:50:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:06.970 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:50:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:06.970 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:50:07 standalone.localdomain ceph-mon[29756]: pgmap v4074: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:07 standalone.localdomain unix_chkpwd[566118]: password check failed for user (root)
Oct 13 15:50:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:07.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:50:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4075: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:08.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:08 standalone.localdomain ceph-mon[29756]: pgmap v4075: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:09 standalone.localdomain sshd[565988]: Failed password for root from 193.46.255.159 port 33226 ssh2
Oct 13 15:50:09 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:50:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4076: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:09 standalone.localdomain systemd[1]: tmp-crun.EanSsM.mount: Deactivated successfully.
Oct 13 15:50:09 standalone.localdomain podman[566120]: 2025-10-13 15:50:09.825305207 +0000 UTC m=+0.092522457 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:50:09 standalone.localdomain podman[566120]: 2025-10-13 15:50:09.929989408 +0000 UTC m=+0.197206698 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller)
Oct 13 15:50:09 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:50:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:50:10 standalone.localdomain podman[566146]: 2025-10-13 15:50:10.81285096 +0000 UTC m=+0.081624511 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 13 15:50:10 standalone.localdomain podman[566146]: 2025-10-13 15:50:10.854953199 +0000 UTC m=+0.123726690 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal)
Oct 13 15:50:10 standalone.localdomain ceph-mon[29756]: pgmap v4076: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:10 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:50:11 standalone.localdomain sshd[565988]: Received disconnect from 193.46.255.159 port 33226:11:  [preauth]
Oct 13 15:50:11 standalone.localdomain sshd[565988]: Disconnected from authenticating user root 193.46.255.159 port 33226 [preauth]
Oct 13 15:50:11 standalone.localdomain sshd[565988]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 13 15:50:11 standalone.localdomain podman[467099]: time="2025-10-13T15:50:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:50:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:50:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:50:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:50:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49156 "" "Go-http-client/1.1"
Oct 13 15:50:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4077: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:50:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:12.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:12 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:50:12.538 2 INFO neutron.agent.securitygroups_rpc [None req-6c8b1f56-c0fa-4064-aa13-c0c89f0ef427 be08220f08f940b6a6fca74c3865e72b ee3b60847f7e4cd3a4cb39299db12595 - - default default] Security group member updated ['75c21eba-3b80-42fe-babe-711d5726a710']
Oct 13 15:50:12 standalone.localdomain podman[566168]: 2025-10-13 15:50:12.602813942 +0000 UTC m=+0.072011134 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:50:12 standalone.localdomain podman[566168]: 2025-10-13 15:50:12.614117541 +0000 UTC m=+0.083314703 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:50:12 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:50:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:50:12 standalone.localdomain ceph-mon[29756]: pgmap v4077: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:50:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:50:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:50:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:50:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:50:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:50:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:50:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:50:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:50:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:50:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:50:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:50:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:13.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:50:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:13.229 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:50:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:13.229 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:50:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:13.368 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:50:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:13.369 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:50:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:13.369 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:50:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:13.369 2 DEBUG nova.objects.instance [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:50:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:13.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4078: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:14.042 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:50:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:14.061 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:50:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:14.062 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:50:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:14.062 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:50:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:14.227 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:50:14 standalone.localdomain ceph-mon[29756]: pgmap v4078: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:15 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:50:15.231 2 INFO neutron.agent.securitygroups_rpc [None req-ca0000c2-42fa-42a4-8dd0-3f74e648b6ad be08220f08f940b6a6fca74c3865e72b ee3b60847f7e4cd3a4cb39299db12595 - - default default] Security group member updated ['75c21eba-3b80-42fe-babe-711d5726a710']
Oct 13 15:50:15 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:15.250 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4079: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:16.227 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:50:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:16.228 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:50:16 standalone.localdomain ceph-mon[29756]: pgmap v4079: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:16 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:50:16.897 2 INFO neutron.agent.securitygroups_rpc [None req-9f0d69e1-da66-4a2a-bb6b-acb4ae14ef56 be08220f08f940b6a6fca74c3865e72b ee3b60847f7e4cd3a4cb39299db12595 - - default default] Security group member updated ['75c21eba-3b80-42fe-babe-711d5726a710']
Oct 13 15:50:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:17.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:50:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:17.252 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:50:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:17.252 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:50:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:17.253 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:50:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:17.253 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:50:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:17.253 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:50:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:17.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:50:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3485922578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:50:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:17.709 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:50:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:17.770 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:50:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:17.771 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:50:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:17.771 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:50:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:17.775 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:50:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:17.775 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:50:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:50:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4080: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:17 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3485922578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:50:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:18.003 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:50:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:18.005 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9135MB free_disk=6.9495849609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:50:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:18.006 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:50:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:18.006 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:50:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:18.091 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:50:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:18.092 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:50:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:18.093 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:50:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:18.093 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:50:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:18.149 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:50:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:50:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2481312698' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:50:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:18.606 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:50:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:18.614 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:50:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:50:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4012067973' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:50:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:50:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4012067973' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:50:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:18.634 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:50:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:18.637 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:50:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:18.637 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:50:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:18.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:18 standalone.localdomain ceph-mon[29756]: pgmap v4080: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2481312698' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:50:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/4012067973' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:50:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/4012067973' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:50:19 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:50:19.257 2 INFO neutron.agent.securitygroups_rpc [None req-214b39bd-67c6-482f-8a8d-a600f69c48f7 be08220f08f940b6a6fca74c3865e72b ee3b60847f7e4cd3a4cb39299db12595 - - default default] Security group member updated ['75c21eba-3b80-42fe-babe-711d5726a710']
Oct 13 15:50:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:19.635 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:50:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:19.635 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:50:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:19.636 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:50:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:19.636 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:50:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:50:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:50:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:50:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:19.733 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:50:19 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:50:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4081: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:19 standalone.localdomain podman[566233]: 2025-10-13 15:50:19.859008843 +0000 UTC m=+0.113774912 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, summary=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, version=17.1.9, tcib_managed=true, name=rhosp17/openstack-swift-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-07-21T15:54:32, release=1, config_id=tripleo_step4, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=)
Oct 13 15:50:19 standalone.localdomain podman[566232]: 2025-10-13 15:50:19.913814675 +0000 UTC m=+0.170909716 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:50:19 standalone.localdomain podman[566232]: 2025-10-13 15:50:19.952974564 +0000 UTC m=+0.210069635 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:50:19 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:50:20 standalone.localdomain podman[566231]: 2025-10-13 15:50:20.020747326 +0000 UTC m=+0.277712783 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-object, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, com.redhat.component=openstack-swift-object-container, version=17.1.9, release=1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, container_name=swift_object_server, io.openshift.tags=rhosp osp openstack osp-17.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4)
Oct 13 15:50:20 standalone.localdomain podman[566234]: 2025-10-13 15:50:19.979773811 +0000 UTC m=+0.224872992 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, release=1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, name=rhosp17/openstack-swift-account, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-account, com.redhat.component=openstack-swift-account-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, version=17.1.9, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20250721.1, vendor=Red Hat, Inc., vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, summary=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, build-date=2025-07-21T16:11:22)
Oct 13 15:50:20 standalone.localdomain podman[566233]: 2025-10-13 15:50:20.044806019 +0000 UTC m=+0.299572078 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-container, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, architecture=x86_64, 
com.redhat.component=openstack-swift-container-container, managed_by=tripleo_ansible, release=1, tcib_managed=true, name=rhosp17/openstack-swift-container, batch=17.1_20250721.1, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, io.openshift.expose-services=)
Oct 13 15:50:20 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:50:20 standalone.localdomain podman[566235]: 2025-10-13 15:50:20.123724195 +0000 UTC m=+0.369933930 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:50:20 standalone.localdomain podman[566235]: 2025-10-13 15:50:20.138909253 +0000 UTC m=+0.385119058 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 15:50:20 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:50:20 standalone.localdomain podman[566234]: 2025-10-13 15:50:20.194919732 +0000 UTC m=+0.440018883 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, description=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, vcs-type=git, distribution-scope=public, batch=17.1_20250721.1, config_id=tripleo_step4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, version=17.1.9, managed_by=tripleo_ansible, release=1, build-date=2025-07-21T16:11:22, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', 
'/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container, vendor=Red Hat, Inc.)
Oct 13 15:50:20 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:50:20 standalone.localdomain podman[566231]: 2025-10-13 15:50:20.25996193 +0000 UTC m=+0.516927447 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-object-container, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, description=Red Hat OpenStack Platform 
17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, release=1, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=)
Oct 13 15:50:20 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:50:20 standalone.localdomain ceph-mon[29756]: pgmap v4081: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:20.998 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.001 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.001 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.006 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.010 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1b134f50-4b5b-46e9-8558-6b87121b8074', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:50:21.001983', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '52e2f6ec-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.191234322, 'message_signature': 'ded7cbca06d4329a0a847099887e0337a36325a459b643fa4c97b9939ff50d2d'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:50:21.001983', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '52e37d60-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.196788194, 'message_signature': '2e2475c9153469a8f44bbf5066a915d13f2189c4462666115a20b1404715751e'}]}, 'timestamp': '2025-10-13 15:50:21.010926', '_unique_id': '27b9956e51d24c1386aab3273600ad42'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.012 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.014 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.014 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 131 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.015 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '128b8ade-9a32-4e41-b956-1d115d7c10d0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 131, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:50:21.014277', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '52e41612-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.191234322, 'message_signature': '664c2ef683a45600522afa20bedeba46352bed9ca6e6855397e0d5d115357813'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:50:21.014277', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '52e43188-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.196788194, 'message_signature': '7e4c59aaf04ae1fe33bf64a24a46cdf4057fa7d23b6c1c076121f908694b0616'}]}, 'timestamp': '2025-10-13 15:50:21.015540', '_unique_id': '038acbf49c87464c956e49da4bac5d0e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.016 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.017 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.017 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 11543 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.018 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5bccabde-784d-488c-9f52-812cda7bf229', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11543, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:50:21.017922', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '52e4a2b2-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.191234322, 'message_signature': '5e1c43820ee84acb271783ab7a1885033d8f456779734ee41eb25be58e689097'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:50:21.017922', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '52e4b6c6-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.196788194, 'message_signature': 'c89a7aba0b34fa18379018bcc9887c5b13079b7da774da3ae8ca2502b5bf2d02'}]}, 'timestamp': '2025-10-13 15:50:21.018921', '_unique_id': '3dfe8ed580b741eb88de6b2c59142fcb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.019 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.021 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.055 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1018 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.055 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.056 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 10 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.081 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.082 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '07dd187a-96c7-4fd0-b513-720a9baab019', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1018, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:50:21.021290', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52ea5810-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': '4955767cbce53438aba7a041563b53f2f79049f801fa57fb7d8ebd3d65e713cd'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:50:21.021290', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52ea6cb0-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': '0a0fbfa82d5d012610ac7501260d8e40e8bd2dbceb67bda55ab332d9b1e1cdf7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 10, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:50:21.021290', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '52ea7e9e-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': 'eeaffb81ceaed868b8e80040b8f5fe7a440856839fae6d36ea0035800af11662'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:50:21.021290', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52ee58a2-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.246019044, 'message_signature': 'ed7dad6f44c1e68f9a2b457fae37626f79ea391c003a614d151aaca7f93f7140'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:50:21.021290', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52ee695a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.246019044, 'message_signature': '821d23042314639f86e8aae355673fc8cebcc92d5855156918f53484456a68f9'}]}, 'timestamp': '2025-10-13 15:50:21.082478', '_unique_id': 'a3b1e7a118204f7f9cb7cbf84fedabca'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.084 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.085 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.085 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 8960 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.086 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 4008 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f9649650-d543-4812-ba94-d6caa44898a2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8960, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:50:21.085803', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '52eefe9c-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.191234322, 'message_signature': 'be7c77e2ba3453e62835e6aec80f1fa6287df0c1ab69628ad690569bdb624244'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4008, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:50:21.085803', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '52ef1242-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.196788194, 'message_signature': '620cd297f3a6f12fdcf83a78894f1dfc5571888d1a4d4ce21f8fb93d19540564'}]}, 'timestamp': '2025-10-13 15:50:21.086798', '_unique_id': '81ef76e90db7464897c6dc8a8a81a86f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.087 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.089 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.089 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.089 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.090 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.090 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.090 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7529438f-d0ca-42cd-b970-ab055ce5f03f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:50:21.089170', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52ef811e-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': 'd644a99480270900daaf8788491a5075bdef3318f66faa3a43f3cabc438ba0da'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:50:21.089170', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52ef92e4-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': 'fe74b2b4c0d9bf1ba602d896e07bbdacff590f3cce13b27e159579189f7d21d4'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:50:21.089170', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '52efa20c-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': '41ea599b9a457a46ef944a4ae0555db64dd9b9803e58d51f670c946e20c1ad3b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:50:21.089170', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52efb224-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.246019044, 'message_signature': '3e4ba809214b61693e584ada21f528f06426ec667af0d6fc978971ed4f7c0872'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:50:21.089170', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52efc0ac-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.246019044, 'message_signature': '6b150d74613bb2c0862252c4c30d3e069932e806ba422bfcb3a2f64d35c3df12'}]}, 'timestamp': '2025-10-13 15:50:21.091229', '_unique_id': 'dbe9df006ac146eb94737d135147293f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.092 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.093 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.093 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.110 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.111 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.111 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.128 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.129 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f1df8e54-1099-4eb6-8d8f-aaea97786f98', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:50:21.093978', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52f2cbe4-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.283255803, 'message_signature': '74024f49214e5ccf1889bf6fd35e13ec8fcad83765cc3c9dd2e2ff416509f1a4'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:50:21.093978', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52f2de54-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.283255803, 'message_signature': '3617deadd644237f17a2978de62c2d7d832587d03392a298ba9cc6266228e8b0'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:50:21.093978', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '52f2f056-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.283255803, 'message_signature': '76136a5d09fec403c47ae9d7d2c33ed8280ae0ef09b716fc2fd8ef3b5f261a3a'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:50:21.093978', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52f5832a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.301330932, 'message_signature': '5b684bdde85d29695505331fe76c5da0a69b5aed08ce700d69d2a28c5c62111a'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:50:21.093978', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52f59540-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.301330932, 'message_signature': 'de38105fc5674050362742c6bdf5423bac3243c35baec0f85520b693c693a3d8'}]}, 'timestamp': '2025-10-13 15:50:21.129460', '_unique_id': '254adb06056f4916a24c33c9f5ff8f3a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.131 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.132 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.132 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.132 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ad5a33e6-2fb0-42a5-a1e9-9a7185c13232', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:50:21.132472', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '52f61e16-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.191234322, 'message_signature': 'f84edd929bcf3ad64ec0db92cb2beab216c5c32315631926aeab4250993a332e'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:50:21.132472', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '52f62f1e-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.196788194, 'message_signature': '52f407fd132c7666186813702abd0a120575a092d25e1bc6bdfb98c36fbbca11'}]}, 'timestamp': '2025-10-13 15:50:21.133402', '_unique_id': '24451dd21fbe49c3a89474978192f605'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.134 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.135 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.152 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 15070000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.173 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 36900000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f7fad106-7574-4762-91e9-7526dfd06809', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 15070000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:50:21.135691', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '52f934fc-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.341823381, 'message_signature': '33efb2cef42fa65a1181463d4a616f786434b4e15d6d00ae8af5a4960134aeb2'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 36900000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:50:21.135691', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '52fc69a6-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.362790179, 'message_signature': '2018209c6e253cd207f609f4c45d1c70819771bdb611298e69a259debce01f21'}]}, 'timestamp': '2025-10-13 15:50:21.174264', '_unique_id': '5021fdc129b34a1492396d270edde97d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.175 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.176 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.177 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.177 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.177 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7d13a6fb-4941-46d7-a919-77a1a39b7ef5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:50:21.177239', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '52fcf2e0-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.191234322, 'message_signature': '739b22644f3e31910852cec4d7a8607b1ec7ab16b1831ec4ca0f99f4a99cd4da'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:50:21.177239', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '52fd049c-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.196788194, 'message_signature': 'a8ec73bea99c2681f5e48b6cbe948cc23c507a02db99aab7e981b287ae79010f'}]}, 'timestamp': '2025-10-13 15:50:21.178195', '_unique_id': '3a226946bb0b4bdb94733014e4110055'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.179 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.180 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.180 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29758464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.180 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.181 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 40960 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.181 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.182 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ea2a39ee-4d82-43c6-8e96-78aca64aa4bb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29758464, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:50:21.180519', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52fd7166-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': 'c9209b34072009e216c59567952148bf5ebc93f0ca0ee7b2e5abda38dd996daa'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:50:21.180519', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52fd81a6-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': '97ac615be28de2a14adc3b84920e70db303ec13b5b3e2219d420589cb4a33b52'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 40960, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:50:21.180519', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '52fd9196-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': '9c9be5a2d4da3b7b322c2ea12079c3f33b1f0a39ce4c5162ad2dc843b2668eb9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:50:21.180519', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52fda0c8-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.246019044, 'message_signature': 'a3668a1e64981bd953530cd816327b13126bcdf536dfd9387d8dfd0b10c78d3b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:50:21.180519', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52fdaf82-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.246019044, 'message_signature': 'dd31ec692bb0aea0ca5efec4d2b7db7d1dd2bd3c43cae23fbe4f9153bf91eb70'}]}, 'timestamp': '2025-10-13 15:50:21.182570', '_unique_id': '757e09857fb446be9b5113823b266dc2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.183 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.185 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.185 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.185 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 15370724 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.186 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.186 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.187 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b8605792-e600-48d2-99cb-b62892a54a7a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:50:21.185397', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52fe30ec-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': '49d2b2483bb6d98c9fe21b1a4c11818443e119f6f1b0f162e616ed51997a383c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 15370724, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:50:21.185397', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52fe4104-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': '88207da2f8df1503f91a635d03f569f020213037479785e2e3c8520080cc2bca'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:50:21.185397', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '52fe502c-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': '3e439633ef129732444a2a6ae1576dba90f2ced0dd4662c410e008bf8341e1fe'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:50:21.185397', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52fe6080-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.246019044, 'message_signature': 'ed1217c89a145a89daf950310ba8ed13156c1c8f32c26f3d2eda2b271012d649'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:50:21.185397', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52fe7016-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.246019044, 'message_signature': 'b362a6ad03850805ad207a31513236f70cf73bc6f9f0605df72a5b4455277f9e'}]}, 'timestamp': '2025-10-13 15:50:21.187472', '_unique_id': '1f13d33a7d0543d48a0160c7eca2fbc4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.188 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.190 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.190 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.190 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9ecacf1e-1302-47ee-9938-583302f721d5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:50:21.190339', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '52fef392-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.191234322, 'message_signature': '56dbb9e011a45250182f6d8327600233c4f16ae4b927347b8068775cc755096c'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:50:21.190339', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '52ff059e-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.196788194, 'message_signature': '3a1d98280aececea6360ba2e1cb9ae2a52ca22ffd54a2df2b94771c6b45ba1d6'}]}, 'timestamp': '2025-10-13 15:50:21.191410', '_unique_id': 'c198900d890a4fd1b6377aeffac536f7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.192 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.193 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.193 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 689775433 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.194 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 105173955 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.194 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 8149490 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.195 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.195 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9a0bf451-e8f9-4bec-b23c-587ef2adb3e9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 689775433, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:50:21.193824', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52ff7902-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': '3b2622a606202a68518250b4b4af8958911bdb177eeb1c976931597b31ce7b6f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 105173955, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:50:21.193824', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52ff8ae6-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': 'ae792fe21e94a5bfdff9ffeb262fd8629ed17f0009ff6c0e8fbba21636b3a992'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 8149490, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:50:21.193824', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '52ff9cac-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': '71a96f1a2085ce8e01c9c678023681c14314521c435a0c4cbd2857e25a8e1667'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:50:21.193824', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52ffad5a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.246019044, 'message_signature': '009596a048ab80adcb533309b641b4f24cd82b8957dd0063baed8bb77674038f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:50:21.193824', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52ffc088-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.246019044, 'message_signature': '5f75debdc9100f011fc31c62ec1f62b8052a698411f9517b58b936a7765af4e2'}]}, 'timestamp': '2025-10-13 15:50:21.196095', '_unique_id': '427ccd4a1d2745c2a81fc93ab2bc2360'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.197 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.198 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.198 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.199 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.199 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.200 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.200 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd9cd7977-715a-4168-9089-cf39c4fd0347', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:50:21.198773', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '53003c7a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': 'f30147d1c83ace77a03516150e9d660df398c33ee3658c527a6c926bad31688f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:50:21.198773', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '53004fd0-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': '73cbd629966fdda219a3c2c8167b3520dba8c6c961cd6bbe9a23dcdf43755494'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:50:21.198773', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '53006038-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.210542529, 'message_signature': '1c289f76ebe02e27d918fcd8cb0a11c86288f16ee411078673b11aee271ec948'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:50:21.198773', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5300719a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.246019044, 'message_signature': 'e4c36ded6076c6bcb6fb44cbe78bc05bcb29ae212ec4948f389acfdaf7a7cf97'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:50:21.198773', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '53008298-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.246019044, 'message_signature': 'cc7305346a0af83cd46f0cda9d816bd237764f2492590c64890107accafdd17e'}]}, 'timestamp': '2025-10-13 15:50:21.201085', '_unique_id': 'a30d1ddaf8d744839e645327cbdd30f4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.202 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.203 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.203 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.203 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '33265e3b-9cd8-4f02-b9a4-13d1a9b1892b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:50:21.203715', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '5300f7e6-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.191234322, 'message_signature': 'bcf536a809eb12421a288120dc901952fbb23ee382e95343a0a67689edb5ec03'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:50:21.203715', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '530102cc-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.196788194, 'message_signature': '4923984751099ffd998d49b2cdccb94cf1e9920cc8f8f35fc5a8d59ffb4a0869'}]}, 'timestamp': '2025-10-13 15:50:21.204291', '_unique_id': '380217dc9685416bb995a05c5f42d613'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.204 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.205 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.205 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 53.14453125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '71f15877-932b-4292-b0be-5bd6802130b4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 53.14453125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:50:21.205770', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '53014818-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.341823381, 'message_signature': 'e6a55e21e048fbe766fe821e4fd36ecffe8e38e6cb830a541f7364934e75ad4d'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:50:21.205770', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '53015254-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.362790179, 'message_signature': '0eed3ceac31da21cd179d1117fe8dd3d57947f04fb1e9e587bb9312213f5118f'}]}, 'timestamp': '2025-10-13 15:50:21.206303', '_unique_id': 'b029531d153a4572a72d3e6f979642b9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.206 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.207 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.207 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 85 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.208 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 57 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1cf6dcc4-d116-4a76-868c-4c887610fc19', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 85, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:50:21.207892', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '53019b4c-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.191234322, 'message_signature': 'b9b7aa0d491c48888a1fafa7745390d5bf809b07d2a1db34c17ccc43c25e88d7'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 57, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:50:21.207892', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '5301a650-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.196788194, 'message_signature': '2d4838915bb81d6efa75d882835053a952f80279eb8f1b56d284b2049e271046'}]}, 'timestamp': '2025-10-13 15:50:21.208464', '_unique_id': '1dedf26aa9264a27beb9f96f16d05b53'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.209 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.210 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.210 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.210 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.210 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.211 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e0a1e255-d553-451f-a630-75a24e6d20e8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:50:21.210011', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5301edae-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.283255803, 'message_signature': '3a974f9dce4e66e28742aed00bd87442421d115e257f92daae0e2eb5d41dad51'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:50:21.210011', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5301f7e0-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.283255803, 'message_signature': 'eb5a2918d71bc38af070bd1a33d8dec7df68912320b8609e8242eb0689acf3d7'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:50:21.210011', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '53020370-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.283255803, 'message_signature': '521fa80b49a1a27b14c09020e9e153b909cd4fb662d8113c076e6140cef79cf1'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:50:21.210011', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '53020d8e-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.301330932, 'message_signature': '43f9448b9445eadca013634950eb94f4bdd25f6ec1531c5faa1a3c735a4ae43d'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:50:21.210011', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '53021842-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.301330932, 'message_signature': '68f765c69aad6511291aca4c203bb39e1e95330886e69524a9a2fc9e456c2a83'}]}, 'timestamp': '2025-10-13 15:50:21.211370', '_unique_id': '44b0bf2e2f9541ba9c5c4c5f388ded9b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.212 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.213 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '97133946-da26-47d9-83bf-b6c245b411ef', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:50:21.212918', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '53025f82-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.191234322, 'message_signature': 'cde1e5194d536c5e9a887f5eed2feaf09d5fdc674edd99d2ce463d340b25fd36'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:50:21.212918', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '53026a54-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.196788194, 'message_signature': 'f908b3335198b5c880abdec5b8b0a73bee1fd6b2207ad711ff2ea160869ab1c7'}]}, 'timestamp': '2025-10-13 15:50:21.213480', '_unique_id': '3ca9925b55bb471c9ce63d12f1c2604f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.214 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.215 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.215 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.215 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.215 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.216 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '80a8f771-71b1-4c13-bc75-763d906c750c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:50:21.215081', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5302b40a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.283255803, 'message_signature': 'bf875b534154be3bccabe3f9096101d5b6a0758d086aa1cbf4ef3e6f2b4c2e88'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:50:21.215081', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5302bf54-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.283255803, 'message_signature': '8eb0f4652e270a5224841cf9e6d6fdf18e995eab439f5ed7ab54ae57448949be'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:50:21.215081', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '5302c92c-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.283255803, 'message_signature': 'ea85c4c71cb94a0afd9975415f9cd8d5134d673469f5fab4af25c730fb8d0a9e'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:50:21.215081', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5302d494-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.301330932, 'message_signature': '1aa9db532dc4b1f0992ea603fbba0ba030a55a3310b58da6b850d425133272d4'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:50:21.215081', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5302e1aa-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10064.301330932, 'message_signature': 'b03567437ba23c936deefdfd60d75d43c0a05389bebf7e7fa3eb8960a4a8492a'}]}, 'timestamp': '2025-10-13 15:50:21.216604', '_unique_id': '6ac8d4b9739447ae98077afd0481fc44'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:50:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:50:21.217 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:50:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4082: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:22.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:50:22 standalone.localdomain podman[566357]: 2025-10-13 15:50:22.614268562 +0000 UTC m=+0.073937494 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct 13 15:50:22 standalone.localdomain podman[566357]: 2025-10-13 15:50:22.652943936 +0000 UTC m=+0.112612818 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 15:50:22 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:50:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:50:22 standalone.localdomain ceph-mon[29756]: pgmap v4082: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:22.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:22.932 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:da:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'c6:b9:d4:9e:30:db'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:50:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:22.934 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 13 15:50:22 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:22.935 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b9e3f7-f5c9-4d48-9eca-66995f72d494, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:50:23 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:23.212 496978 INFO neutron.agent.linux.ip_lib [None req-126da2ce-13a9-448b-8bb5-21b051b1c7bf - - - - - -] Device tapfdc5b9c2-96 cannot be used as it has no MAC address
Oct 13 15:50:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:23.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:50:23 standalone.localdomain kernel: device tapfdc5b9c2-96 entered promiscuous mode
Oct 13 15:50:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:23Z|00831|binding|INFO|Claiming lport fdc5b9c2-9644-4420-845d-072fc4b1e6cc for this chassis.
Oct 13 15:50:23 standalone.localdomain NetworkManager[5962]: <info>  [1760370623.2922] manager: (tapfdc5b9c2-96): new Generic device (/org/freedesktop/NetworkManager/Devices/140)
Oct 13 15:50:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:23Z|00832|binding|INFO|fdc5b9c2-9644-4420-845d-072fc4b1e6cc: Claiming unknown
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:50:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:23.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:50:23 standalone.localdomain systemd-udevd[566386]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:50:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:23.301 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-4e817c1a-e6e2-4079-9009-a0b41f7e4e32', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e817c1a-e6e2-4079-9009-a0b41f7e4e32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ee3b60847f7e4cd3a4cb39299db12595', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=907448ed-42f3-42c4-82fe-aa2ab0a8d9a4, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=fdc5b9c2-9644-4420-845d-072fc4b1e6cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:50:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:23.302 378821 INFO neutron.agent.ovn.metadata.agent [-] Port fdc5b9c2-9644-4420-845d-072fc4b1e6cc in datapath 4e817c1a-e6e2-4079-9009-a0b41f7e4e32 bound to our chassis
Oct 13 15:50:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:23.304 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4e817c1a-e6e2-4079-9009-a0b41f7e4e32 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:50:23 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:23.305 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[3a9e252b-b485-4d15-a4aa-4442c8a8b9c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:50:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:23Z|00833|binding|INFO|Setting lport fdc5b9c2-9644-4420-845d-072fc4b1e6cc ovn-installed in OVS
Oct 13 15:50:23 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:23Z|00834|binding|INFO|Setting lport fdc5b9c2-9644-4420-845d-072fc4b1e6cc up in Southbound
Oct 13 15:50:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:23.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:23.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:50:23
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'manila_metadata', 'vms', 'manila_data', 'images', '.mgr', 'volumes']
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:50:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:23.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21626 DF PROTO=TCP SPT=47304 DPT=9102 SEQ=3692452726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD4E1DF0000000001030307) 
Oct 13 15:50:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:23.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4083: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:50:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:50:24 standalone.localdomain podman[566441]: 
Oct 13 15:50:24 standalone.localdomain podman[566441]: 2025-10-13 15:50:24.357400228 +0000 UTC m=+0.099587135 container create dcff0835fd9af02a638425342d9e6856492ef78f384f06abd7fe73cbde2aac18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e817c1a-e6e2-4079-9009-a0b41f7e4e32, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 15:50:24 standalone.localdomain systemd[1]: Started libpod-conmon-dcff0835fd9af02a638425342d9e6856492ef78f384f06abd7fe73cbde2aac18.scope.
Oct 13 15:50:24 standalone.localdomain podman[566441]: 2025-10-13 15:50:24.307993863 +0000 UTC m=+0.050180800 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:50:24 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:50:24 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b6ed862737c0b3962e4316add307d9b30259ccc8ac85e32f93f259a7508e0f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:50:24 standalone.localdomain podman[566441]: 2025-10-13 15:50:24.47473072 +0000 UTC m=+0.216917637 container init dcff0835fd9af02a638425342d9e6856492ef78f384f06abd7fe73cbde2aac18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e817c1a-e6e2-4079-9009-a0b41f7e4e32, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0)
Oct 13 15:50:24 standalone.localdomain podman[566441]: 2025-10-13 15:50:24.484680337 +0000 UTC m=+0.226867244 container start dcff0835fd9af02a638425342d9e6856492ef78f384f06abd7fe73cbde2aac18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e817c1a-e6e2-4079-9009-a0b41f7e4e32, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:50:24 standalone.localdomain dnsmasq[566460]: started, version 2.85 cachesize 150
Oct 13 15:50:24 standalone.localdomain dnsmasq[566460]: DNS service limited to local subnets
Oct 13 15:50:24 standalone.localdomain dnsmasq[566460]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:50:24 standalone.localdomain dnsmasq[566460]: warning: no upstream servers configured
Oct 13 15:50:24 standalone.localdomain dnsmasq-dhcp[566460]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:50:24 standalone.localdomain dnsmasq[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/addn_hosts - 0 addresses
Oct 13 15:50:24 standalone.localdomain dnsmasq-dhcp[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/host
Oct 13 15:50:24 standalone.localdomain dnsmasq-dhcp[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/opts
Oct 13 15:50:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21627 DF PROTO=TCP SPT=47304 DPT=9102 SEQ=3692452726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD4E5F60000000001030307) 
Oct 13 15:50:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:24.669 496978 INFO neutron.agent.dhcp.agent [None req-232fced7-994b-4492-a0f5-5e8dadae0b23 - - - - - -] DHCP configuration for ports {'442311aa-2e29-42e4-bdbd-f462ed5b0af4'} is completed
Oct 13 15:50:24 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:24.676 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:50:24Z, description=, device_id=03a6e6cb-0318-4cec-a806-11ce667a241b, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889111640>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889111250>], id=4ad10917-6eec-452a-adc7-b9bbfd7da4ab, ip_allocation=immediate, mac_address=fa:16:3e:0c:e4:91, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2544, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:50:24Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:50:24 standalone.localdomain podman[566479]: 2025-10-13 15:50:24.895051294 +0000 UTC m=+0.061092437 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:50:24 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 3 addresses
Oct 13 15:50:24 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:50:24 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:50:24 standalone.localdomain sshd[566491]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:50:24 standalone.localdomain sshd[566491]: error: kex_exchange_identification: banner line contains invalid characters
Oct 13 15:50:24 standalone.localdomain sshd[566491]: banner exchange: Connection from 148.113.212.55 port 47200: invalid format
Oct 13 15:50:24 standalone.localdomain ceph-mon[29756]: pgmap v4083: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:25 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:25.109 496978 INFO neutron.agent.dhcp.agent [None req-4bfa1d9f-7b54-4ca4-ad93-64446adfa6fb - - - - - -] DHCP configuration for ports {'4ad10917-6eec-452a-adc7-b9bbfd7da4ab'} is completed
Oct 13 15:50:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4084: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:50:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21628 DF PROTO=TCP SPT=47304 DPT=9102 SEQ=3692452726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD4EDF60000000001030307) 
Oct 13 15:50:26 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:26.774 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:50:25Z, description=, device_id=03a6e6cb-0318-4cec-a806-11ce667a241b, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd5730>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd55b0>], id=b013391e-569f-473a-9aff-694d452bfac9, ip_allocation=immediate, mac_address=fa:16:3e:68:47:bf, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:50:21Z, description=, dns_domain=, id=4e817c1a-e6e2-4079-9009-a0b41f7e4e32, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-254992044, port_security_enabled=True, project_id=ee3b60847f7e4cd3a4cb39299db12595, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=2322, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2538, status=ACTIVE, subnets=['a0079187-aa70-4300-ba63-e6824509fd1e'], tags=[], tenant_id=ee3b60847f7e4cd3a4cb39299db12595, updated_at=2025-10-13T15:50:22Z, vlan_transparent=None, network_id=4e817c1a-e6e2-4079-9009-a0b41f7e4e32, port_security_enabled=False, project_id=ee3b60847f7e4cd3a4cb39299db12595, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2545, status=DOWN, tags=[], tenant_id=ee3b60847f7e4cd3a4cb39299db12595, updated_at=2025-10-13T15:50:25Z on network 4e817c1a-e6e2-4079-9009-a0b41f7e4e32
Oct 13 15:50:26 standalone.localdomain ceph-mon[29756]: pgmap v4084: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:50:27 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:27.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #219. Immutable memtables: 0.
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.801640) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 219
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370627801692, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 1734, "num_deletes": 250, "total_data_size": 1570288, "memory_usage": 1602296, "flush_reason": "Manual Compaction"}
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #220: started
Oct 13 15:50:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4085: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370627809545, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 220, "file_size": 937577, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95007, "largest_seqno": 96740, "table_properties": {"data_size": 932447, "index_size": 2346, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13926, "raw_average_key_size": 20, "raw_value_size": 920922, "raw_average_value_size": 1374, "num_data_blocks": 106, "num_entries": 670, "num_filter_entries": 670, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760370472, "oldest_key_time": 1760370472, "file_creation_time": 1760370627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 220, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 7991 microseconds, and 3792 cpu microseconds.
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.809613) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #220: 937577 bytes OK
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.809654) [db/memtable_list.cc:519] [default] Level-0 commit table #220 started
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.812382) [db/memtable_list.cc:722] [default] Level-0 commit table #220: memtable #1 done
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.812412) EVENT_LOG_v1 {"time_micros": 1760370627812404, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.812432) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 1562700, prev total WAL file size 1563190, number of live WAL files 2.
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000216.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.813242) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373533' seq:72057594037927935, type:22 .. '6D6772737461740034303034' seq:0, type:0; will stop at (end)
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [220(915KB)], [218(6735KB)]
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370627813296, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [220], "files_L6": [218], "score": -1, "input_data_size": 7835139, "oldest_snapshot_seqno": -1}
Oct 13 15:50:27 standalone.localdomain podman[566502]: 2025-10-13 15:50:27.838678117 +0000 UTC m=+0.100141322 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #221: 7673 keys, 6354544 bytes, temperature: kUnknown
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370627854731, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 221, "file_size": 6354544, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6311951, "index_size": 22333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 203118, "raw_average_key_size": 26, "raw_value_size": 6181021, "raw_average_value_size": 805, "num_data_blocks": 869, "num_entries": 7673, "num_filter_entries": 7673, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760362563, "oldest_key_time": 0, "file_creation_time": 1760370627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ca07273-cffb-4aba-ab62-7eb3dc930841", "db_session_id": "GD13TFTIGQ1NUS7TF7HW", "orig_file_number": 221, "seqno_to_time_mapping": "N/A"}}
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.854961) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 6354544 bytes
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.856279) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.8 rd, 153.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 6.6 +0.0 blob) out(6.1 +0.0 blob), read-write-amplify(15.1) write-amplify(6.8) OK, records in: 8118, records dropped: 445 output_compression: NoCompression
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.856297) EVENT_LOG_v1 {"time_micros": 1760370627856288, "job": 138, "event": "compaction_finished", "compaction_time_micros": 41496, "compaction_time_cpu_micros": 25690, "output_level": 6, "num_output_files": 1, "total_output_size": 6354544, "num_input_records": 8118, "num_output_records": 7673, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000220.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370627856512, "job": 138, "event": "table_file_deletion", "file_number": 220}
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-standalone/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760370627857148, "job": 138, "event": "table_file_deletion", "file_number": 218}
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.813114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.857253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.857260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.857262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.857265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:50:27 standalone.localdomain ceph-mon[29756]: rocksdb: (Original Log Time 2025/10/13-15:50:27.857267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 13 15:50:27 standalone.localdomain podman[566502]: 2025-10-13 15:50:27.900552307 +0000 UTC m=+0.162015522 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:50:27 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:50:27 standalone.localdomain dnsmasq[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/addn_hosts - 1 addresses
Oct 13 15:50:27 standalone.localdomain dnsmasq-dhcp[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/host
Oct 13 15:50:27 standalone.localdomain dnsmasq-dhcp[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/opts
Oct 13 15:50:27 standalone.localdomain podman[566540]: 2025-10-13 15:50:27.965715778 +0000 UTC m=+0.093922900 container kill dcff0835fd9af02a638425342d9e6856492ef78f384f06abd7fe73cbde2aac18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e817c1a-e6e2-4079-9009-a0b41f7e4e32, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 15:50:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:28.572 496978 INFO neutron.agent.dhcp.agent [None req-849aab38-8a6d-4701-91c6-4e0acda945ba - - - - - -] DHCP configuration for ports {'b013391e-569f-473a-9aff-694d452bfac9'} is completed
Oct 13 15:50:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:28.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:28 standalone.localdomain ceph-mon[29756]: pgmap v4085: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006647822445561139 of space, bias 1.0, pg target 0.6647822445561139 quantized to 32 (current 32)
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.012942315745393635 of space, bias 1.0, pg target 1.2942315745393635 quantized to 32 (current 32)
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.29775832286432163 quantized to 32 (current 32)
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 13 15:50:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:50:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4086: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:50:29 standalone.localdomain podman[566562]: 2025-10-13 15:50:29.832329805 +0000 UTC m=+0.092777955 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:50:29 standalone.localdomain podman[566562]: 2025-10-13 15:50:29.84186016 +0000 UTC m=+0.102308320 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 15:50:29 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:50:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21629 DF PROTO=TCP SPT=47304 DPT=9102 SEQ=3692452726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD4FDB60000000001030307) 
Oct 13 15:50:30 standalone.localdomain ceph-mon[29756]: pgmap v4086: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:50:30 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:30.931 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:50:25Z, description=, device_id=03a6e6cb-0318-4cec-a806-11ce667a241b, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd7d90>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889004fd0>], id=b013391e-569f-473a-9aff-694d452bfac9, ip_allocation=immediate, mac_address=fa:16:3e:68:47:bf, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:50:21Z, description=, dns_domain=, id=4e817c1a-e6e2-4079-9009-a0b41f7e4e32, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-254992044, port_security_enabled=True, project_id=ee3b60847f7e4cd3a4cb39299db12595, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=2322, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2538, status=ACTIVE, subnets=['a0079187-aa70-4300-ba63-e6824509fd1e'], tags=[], tenant_id=ee3b60847f7e4cd3a4cb39299db12595, updated_at=2025-10-13T15:50:22Z, vlan_transparent=None, network_id=4e817c1a-e6e2-4079-9009-a0b41f7e4e32, port_security_enabled=False, project_id=ee3b60847f7e4cd3a4cb39299db12595, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2545, status=DOWN, tags=[], tenant_id=ee3b60847f7e4cd3a4cb39299db12595, updated_at=2025-10-13T15:50:25Z on network 4e817c1a-e6e2-4079-9009-a0b41f7e4e32
Oct 13 15:50:31 standalone.localdomain dnsmasq[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/addn_hosts - 1 addresses
Oct 13 15:50:31 standalone.localdomain dnsmasq-dhcp[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/host
Oct 13 15:50:31 standalone.localdomain podman[566597]: 2025-10-13 15:50:31.136869194 +0000 UTC m=+0.045255558 container kill dcff0835fd9af02a638425342d9e6856492ef78f384f06abd7fe73cbde2aac18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e817c1a-e6e2-4079-9009-a0b41f7e4e32, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 15:50:31 standalone.localdomain dnsmasq-dhcp[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/opts
Oct 13 15:50:31 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:31.424 496978 INFO neutron.agent.dhcp.agent [None req-328236a3-2d4a-4802-ac6b-098b531e433c - - - - - -] DHCP configuration for ports {'b013391e-569f-473a-9aff-694d452bfac9'} is completed
Oct 13 15:50:31 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:50:31.623 2 INFO neutron.agent.securitygroups_rpc [None req-fa4d731a-8bd4-4252-a0ab-76d6a7e7e8a6 be08220f08f940b6a6fca74c3865e72b ee3b60847f7e4cd3a4cb39299db12595 - - default default] Security group member updated ['75c21eba-3b80-42fe-babe-711d5726a710']
Oct 13 15:50:31 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:31.650 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:50:31Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ffcfd0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ffc130>], id=5f6ae40d-dd39-4a83-95f7-d994412c3cf0, ip_allocation=immediate, mac_address=fa:16:3e:41:7a:fc, name=tempest-FloatingIPTestJSON-836609434, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:50:21Z, description=, dns_domain=, id=4e817c1a-e6e2-4079-9009-a0b41f7e4e32, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-254992044, port_security_enabled=True, project_id=ee3b60847f7e4cd3a4cb39299db12595, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=2322, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2538, status=ACTIVE, subnets=['a0079187-aa70-4300-ba63-e6824509fd1e'], tags=[], tenant_id=ee3b60847f7e4cd3a4cb39299db12595, updated_at=2025-10-13T15:50:22Z, vlan_transparent=None, network_id=4e817c1a-e6e2-4079-9009-a0b41f7e4e32, port_security_enabled=True, project_id=ee3b60847f7e4cd3a4cb39299db12595, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['75c21eba-3b80-42fe-babe-711d5726a710'], standard_attr_id=2546, status=DOWN, tags=[], tenant_id=ee3b60847f7e4cd3a4cb39299db12595, updated_at=2025-10-13T15:50:31Z on network 4e817c1a-e6e2-4079-9009-a0b41f7e4e32
Oct 13 15:50:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4087: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:50:31 standalone.localdomain systemd[1]: tmp-crun.eGvRZH.mount: Deactivated successfully.
Oct 13 15:50:31 standalone.localdomain dnsmasq[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/addn_hosts - 2 addresses
Oct 13 15:50:31 standalone.localdomain dnsmasq-dhcp[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/host
Oct 13 15:50:31 standalone.localdomain dnsmasq-dhcp[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/opts
Oct 13 15:50:31 standalone.localdomain podman[566636]: 2025-10-13 15:50:31.899326239 +0000 UTC m=+0.080069812 container kill dcff0835fd9af02a638425342d9e6856492ef78f384f06abd7fe73cbde2aac18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e817c1a-e6e2-4079-9009-a0b41f7e4e32, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:50:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:32.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:32 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:32.579 496978 INFO neutron.agent.dhcp.agent [None req-3cef2ee2-bacc-4b5d-9404-22d038268d9d - - - - - -] DHCP configuration for ports {'5f6ae40d-dd39-4a83-95f7-d994412c3cf0'} is completed
Oct 13 15:50:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:50:32 standalone.localdomain ceph-mon[29756]: pgmap v4087: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:50:33 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:50:33.175 2 INFO neutron.agent.securitygroups_rpc [None req-614af6a4-37d2-4612-9489-6deca4e952cb be08220f08f940b6a6fca74c3865e72b ee3b60847f7e4cd3a4cb39299db12595 - - default default] Security group member updated ['75c21eba-3b80-42fe-babe-711d5726a710']
Oct 13 15:50:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:33.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4088: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:50:33 standalone.localdomain dnsmasq[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/addn_hosts - 1 addresses
Oct 13 15:50:33 standalone.localdomain dnsmasq-dhcp[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/host
Oct 13 15:50:33 standalone.localdomain podman[566675]: 2025-10-13 15:50:33.97369043 +0000 UTC m=+0.066955018 container kill dcff0835fd9af02a638425342d9e6856492ef78f384f06abd7fe73cbde2aac18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e817c1a-e6e2-4079-9009-a0b41f7e4e32, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:50:33 standalone.localdomain dnsmasq-dhcp[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/opts
Oct 13 15:50:34 standalone.localdomain dnsmasq[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/addn_hosts - 0 addresses
Oct 13 15:50:34 standalone.localdomain dnsmasq-dhcp[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/host
Oct 13 15:50:34 standalone.localdomain dnsmasq-dhcp[566460]: read /var/lib/neutron/dhcp/4e817c1a-e6e2-4079-9009-a0b41f7e4e32/opts
Oct 13 15:50:34 standalone.localdomain podman[566715]: 2025-10-13 15:50:34.452726836 +0000 UTC m=+0.058293690 container kill dcff0835fd9af02a638425342d9e6856492ef78f384f06abd7fe73cbde2aac18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e817c1a-e6e2-4079-9009-a0b41f7e4e32, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:50:34 standalone.localdomain systemd[1]: tmp-crun.wWlcy8.mount: Deactivated successfully.
Oct 13 15:50:34 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:34Z|00835|binding|INFO|Releasing lport fdc5b9c2-9644-4420-845d-072fc4b1e6cc from this chassis (sb_readonly=0)
Oct 13 15:50:34 standalone.localdomain kernel: device tapfdc5b9c2-96 left promiscuous mode
Oct 13 15:50:34 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:34Z|00836|binding|INFO|Setting lport fdc5b9c2-9644-4420-845d-072fc4b1e6cc down in Southbound
Oct 13 15:50:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:34.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:34.678 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-4e817c1a-e6e2-4079-9009-a0b41f7e4e32', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e817c1a-e6e2-4079-9009-a0b41f7e4e32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ee3b60847f7e4cd3a4cb39299db12595', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=907448ed-42f3-42c4-82fe-aa2ab0a8d9a4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=fdc5b9c2-9644-4420-845d-072fc4b1e6cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:50:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:34.680 378821 INFO neutron.agent.ovn.metadata.agent [-] Port fdc5b9c2-9644-4420-845d-072fc4b1e6cc in datapath 4e817c1a-e6e2-4079-9009-a0b41f7e4e32 unbound from our chassis
Oct 13 15:50:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:34.683 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4e817c1a-e6e2-4079-9009-a0b41f7e4e32, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:50:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:34.684 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[31a706ad-e768-40c0-ab16-e4fe3cdaec5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:50:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:34.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:34 standalone.localdomain container-server[114430]: Begin container update sweep
Oct 13 15:50:34 standalone.localdomain container-server[114430]: Container update sweep completed: 0.00s
Oct 13 15:50:34 standalone.localdomain ceph-mon[29756]: pgmap v4088: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:50:35 standalone.localdomain podman[566757]: 2025-10-13 15:50:35.186184687 +0000 UTC m=+0.065344379 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:50:35 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:50:35 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:50:35 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:50:35 standalone.localdomain systemd[1]: tmp-crun.wQjSWE.mount: Deactivated successfully.
Oct 13 15:50:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4089: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:50:36 standalone.localdomain dnsmasq[566460]: exiting on receipt of SIGTERM
Oct 13 15:50:36 standalone.localdomain podman[566795]: 2025-10-13 15:50:36.110728805 +0000 UTC m=+0.051396168 container kill dcff0835fd9af02a638425342d9e6856492ef78f384f06abd7fe73cbde2aac18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e817c1a-e6e2-4079-9009-a0b41f7e4e32, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 15:50:36 standalone.localdomain systemd[1]: libpod-dcff0835fd9af02a638425342d9e6856492ef78f384f06abd7fe73cbde2aac18.scope: Deactivated successfully.
Oct 13 15:50:36 standalone.localdomain podman[566811]: 2025-10-13 15:50:36.163553385 +0000 UTC m=+0.033228216 container died dcff0835fd9af02a638425342d9e6856492ef78f384f06abd7fe73cbde2aac18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e817c1a-e6e2-4079-9009-a0b41f7e4e32, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:50:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dcff0835fd9af02a638425342d9e6856492ef78f384f06abd7fe73cbde2aac18-userdata-shm.mount: Deactivated successfully.
Oct 13 15:50:36 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-f0b6ed862737c0b3962e4316add307d9b30259ccc8ac85e32f93f259a7508e0f-merged.mount: Deactivated successfully.
Oct 13 15:50:36 standalone.localdomain podman[566811]: 2025-10-13 15:50:36.215270772 +0000 UTC m=+0.084945593 container remove dcff0835fd9af02a638425342d9e6856492ef78f384f06abd7fe73cbde2aac18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4e817c1a-e6e2-4079-9009-a0b41f7e4e32, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 15:50:36 standalone.localdomain systemd[1]: libpod-conmon-dcff0835fd9af02a638425342d9e6856492ef78f384f06abd7fe73cbde2aac18.scope: Deactivated successfully.
Oct 13 15:50:36 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d4e817c1a\x2de6e2\x2d4079\x2d9009\x2da0b41f7e4e32.mount: Deactivated successfully.
Oct 13 15:50:36 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:36.729 496978 INFO neutron.agent.dhcp.agent [None req-697d4aa2-fb83-4de5-884a-b877b167de10 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:36 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:36.730 496978 INFO neutron.agent.dhcp.agent [None req-697d4aa2-fb83-4de5-884a-b877b167de10 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:36 standalone.localdomain ceph-mon[29756]: pgmap v4089: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:50:37 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:37Z|00837|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:50:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:37.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:37.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:50:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4090: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:38 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:50:38.058 2 INFO neutron.agent.securitygroups_rpc [None req-3efd1858-1d40-4410-972a-e0d8ae40b5dd be08220f08f940b6a6fca74c3865e72b ee3b60847f7e4cd3a4cb39299db12595 - - default default] Security group member updated ['75c21eba-3b80-42fe-babe-711d5726a710']
Oct 13 15:50:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:38.074 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:38 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:50:38.336 2 INFO neutron.agent.securitygroups_rpc [None req-4ba0cb23-ba95-4c06-85e0-0c95b4e408fd be08220f08f940b6a6fca74c3865e72b ee3b60847f7e4cd3a4cb39299db12595 - - default default] Security group member updated ['75c21eba-3b80-42fe-babe-711d5726a710']
Oct 13 15:50:38 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:38.571 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:38.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:38 standalone.localdomain ceph-mon[29756]: pgmap v4090: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:39.046 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:39 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:39.165 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4091: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:40 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:50:40 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:50:40 standalone.localdomain podman[566856]: 2025-10-13 15:50:40.376996083 +0000 UTC m=+0.055632627 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:50:40 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:50:40 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:50:40 standalone.localdomain podman[566871]: 2025-10-13 15:50:40.486227895 +0000 UTC m=+0.082868618 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 13 15:50:40 standalone.localdomain podman[566871]: 2025-10-13 15:50:40.557020531 +0000 UTC m=+0.153661284 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251009, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible)
Oct 13 15:50:40 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:50:40 standalone.localdomain ceph-mon[29756]: pgmap v4091: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:40 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:40.954 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:41 standalone.localdomain podman[467099]: time="2025-10-13T15:50:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:50:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:50:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:50:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:50:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:50:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49149 "" "Go-http-client/1.1"
Oct 13 15:50:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4092: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:41 standalone.localdomain podman[566903]: 2025-10-13 15:50:41.8146093 +0000 UTC m=+0.078081572 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, release=1755695350, version=9.6, name=ubi9-minimal, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Oct 13 15:50:41 standalone.localdomain podman[566903]: 2025-10-13 15:50:41.825908429 +0000 UTC m=+0.089380661 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=edpm, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6)
Oct 13 15:50:41 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:50:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:42.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:50:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:50:42 standalone.localdomain podman[566923]: 2025-10-13 15:50:42.824589615 +0000 UTC m=+0.089229625 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 13 15:50:42 standalone.localdomain ceph-mon[29756]: pgmap v4092: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:42 standalone.localdomain podman[566923]: 2025-10-13 15:50:42.836893215 +0000 UTC m=+0.101533245 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009)
Oct 13 15:50:42 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:50:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:50:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:50:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:50:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:50:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:50:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:50:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:50:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:50:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:50:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:50:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:43.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4093: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:44 standalone.localdomain ceph-mon[29756]: pgmap v4093: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4094: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:46 standalone.localdomain ceph-mon[29756]: pgmap v4094: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:47.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:50:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4095: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:47 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:47.856 496978 INFO neutron.agent.linux.ip_lib [None req-57c90c2e-ba9e-4eb5-b53d-20e4979fb7e1 - - - - - -] Device tap8c4469c0-fd cannot be used as it has no MAC address
Oct 13 15:50:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:47.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:47 standalone.localdomain kernel: device tap8c4469c0-fd entered promiscuous mode
Oct 13 15:50:47 standalone.localdomain NetworkManager[5962]: <info>  [1760370647.8910] manager: (tap8c4469c0-fd): new Generic device (/org/freedesktop/NetworkManager/Devices/141)
Oct 13 15:50:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:47Z|00838|binding|INFO|Claiming lport 8c4469c0-fd49-4943-87a1-daef63c78308 for this chassis.
Oct 13 15:50:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:47Z|00839|binding|INFO|8c4469c0-fd49-4943-87a1-daef63c78308: Claiming unknown
Oct 13 15:50:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:47.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:47 standalone.localdomain systemd-udevd[566952]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:50:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:47.902 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '700deafc96f14116854ada9bcef9278a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a91f760-a218-49be-8559-696c6b4a3cb3, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=8c4469c0-fd49-4943-87a1-daef63c78308) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:50:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:47.904 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 8c4469c0-fd49-4943-87a1-daef63c78308 in datapath 3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba bound to our chassis
Oct 13 15:50:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:47.905 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:50:47 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:47.906 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[d6414932-3953-4d4d-ac3f-7d0c9da1ce90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:50:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8c4469c0-fd: No such device
Oct 13 15:50:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:47Z|00840|binding|INFO|Setting lport 8c4469c0-fd49-4943-87a1-daef63c78308 ovn-installed in OVS
Oct 13 15:50:47 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:47Z|00841|binding|INFO|Setting lport 8c4469c0-fd49-4943-87a1-daef63c78308 up in Southbound
Oct 13 15:50:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8c4469c0-fd: No such device
Oct 13 15:50:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:47.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8c4469c0-fd: No such device
Oct 13 15:50:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8c4469c0-fd: No such device
Oct 13 15:50:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8c4469c0-fd: No such device
Oct 13 15:50:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8c4469c0-fd: No such device
Oct 13 15:50:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8c4469c0-fd: No such device
Oct 13 15:50:47 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap8c4469c0-fd: No such device
Oct 13 15:50:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:48.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:48 standalone.localdomain object-server[114601]: Begin object update sweep
Oct 13 15:50:48 standalone.localdomain object-server[567006]: Object update sweep starting on /srv/node/d1 (pid: 33)
Oct 13 15:50:48 standalone.localdomain object-server[567006]: Object update sweep completed on /srv/node/d1 in 0.00s: 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects (pid: 33)
Oct 13 15:50:48 standalone.localdomain object-server[567006]: Object update sweep of d1 completed: 0.00s, 0 successes, 0 failures, 0 quarantines, 0 unlinks, 0 errors, 0 redirects
Oct 13 15:50:48 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:48.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:48 standalone.localdomain object-server[114601]: Object update sweep completed: 0.11s
Oct 13 15:50:48 standalone.localdomain ceph-mon[29756]: pgmap v4095: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:48 standalone.localdomain podman[567024]: 2025-10-13 15:50:48.980319848 +0000 UTC m=+0.103237268 container create addd761ee6e61e7575ccf70b45af72ea97db4a4245fc155dc7108128236ca530 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:50:49 standalone.localdomain podman[567024]: 2025-10-13 15:50:48.926863767 +0000 UTC m=+0.049781217 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:50:49 standalone.localdomain systemd[1]: Started libpod-conmon-addd761ee6e61e7575ccf70b45af72ea97db4a4245fc155dc7108128236ca530.scope.
Oct 13 15:50:49 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:50:49 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e36e1a9663902242618152cea75510a3022f74cdf72b69a8d15031f1a42ae3fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:50:49 standalone.localdomain podman[567024]: 2025-10-13 15:50:49.08503889 +0000 UTC m=+0.207956300 container init addd761ee6e61e7575ccf70b45af72ea97db4a4245fc155dc7108128236ca530 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:50:49 standalone.localdomain podman[567024]: 2025-10-13 15:50:49.094516182 +0000 UTC m=+0.217433592 container start addd761ee6e61e7575ccf70b45af72ea97db4a4245fc155dc7108128236ca530 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:50:49 standalone.localdomain dnsmasq[567042]: started, version 2.85 cachesize 150
Oct 13 15:50:49 standalone.localdomain dnsmasq[567042]: DNS service limited to local subnets
Oct 13 15:50:49 standalone.localdomain dnsmasq[567042]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:50:49 standalone.localdomain dnsmasq[567042]: warning: no upstream servers configured
Oct 13 15:50:49 standalone.localdomain dnsmasq-dhcp[567042]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:50:49 standalone.localdomain dnsmasq[567042]: read /var/lib/neutron/dhcp/3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba/addn_hosts - 0 addresses
Oct 13 15:50:49 standalone.localdomain dnsmasq-dhcp[567042]: read /var/lib/neutron/dhcp/3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba/host
Oct 13 15:50:49 standalone.localdomain dnsmasq-dhcp[567042]: read /var/lib/neutron/dhcp/3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba/opts
Oct 13 15:50:49 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:49.237 496978 INFO neutron.agent.dhcp.agent [None req-e6c14616-850f-40e5-aac3-077dfef5744a - - - - - -] DHCP configuration for ports {'ebe673d2-0d4c-47e8-8fb5-3b5f9b4f1155'} is completed
Oct 13 15:50:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4096: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:49 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:49.824 496978 INFO neutron.agent.linux.ip_lib [None req-a5f6e4d4-3b0b-4057-811c-2d9769d484e8 - - - - - -] Device tap5145476f-9a cannot be used as it has no MAC address
Oct 13 15:50:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:49.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:49 standalone.localdomain kernel: device tap5145476f-9a entered promiscuous mode
Oct 13 15:50:49 standalone.localdomain NetworkManager[5962]: <info>  [1760370649.8759] manager: (tap5145476f-9a): new Generic device (/org/freedesktop/NetworkManager/Devices/142)
Oct 13 15:50:49 standalone.localdomain systemd-udevd[566955]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:50:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:49.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:49 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:49Z|00842|binding|INFO|Claiming lport 5145476f-9a19-444d-b5f4-025c89fa95eb for this chassis.
Oct 13 15:50:49 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:49Z|00843|binding|INFO|5145476f-9a19-444d-b5f4-025c89fa95eb: Claiming unknown
Oct 13 15:50:49 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:49.890 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-9c1e5883-f007-44dc-afe4-ab1ee2ba062d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9c1e5883-f007-44dc-afe4-ab1ee2ba062d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '700deafc96f14116854ada9bcef9278a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36561b9d-0186-4591-8f40-6e8ad2bc1f68, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=5145476f-9a19-444d-b5f4-025c89fa95eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:50:49 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:49.892 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 5145476f-9a19-444d-b5f4-025c89fa95eb in datapath 9c1e5883-f007-44dc-afe4-ab1ee2ba062d bound to our chassis
Oct 13 15:50:49 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:49.894 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9c1e5883-f007-44dc-afe4-ab1ee2ba062d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:50:49 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:49.895 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[a0cd96f2-0f52-4b94-bcc7-d0a31ee1fffd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:50:49 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5145476f-9a: No such device
Oct 13 15:50:49 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5145476f-9a: No such device
Oct 13 15:50:49 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:49Z|00844|binding|INFO|Setting lport 5145476f-9a19-444d-b5f4-025c89fa95eb ovn-installed in OVS
Oct 13 15:50:49 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:49Z|00845|binding|INFO|Setting lport 5145476f-9a19-444d-b5f4-025c89fa95eb up in Southbound
Oct 13 15:50:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:49.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:49.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:49 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5145476f-9a: No such device
Oct 13 15:50:49 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5145476f-9a: No such device
Oct 13 15:50:49 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:49.929 496978 INFO neutron.agent.linux.ip_lib [None req-445d92de-39b5-41a2-8ead-1836a3ed3bac - - - - - -] Device tap1e035664-c8 cannot be used as it has no MAC address
Oct 13 15:50:49 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5145476f-9a: No such device
Oct 13 15:50:49 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5145476f-9a: No such device
Oct 13 15:50:49 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5145476f-9a: No such device
Oct 13 15:50:49 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap5145476f-9a: No such device
Oct 13 15:50:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:49.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:49 standalone.localdomain kernel: device tap1e035664-c8 entered promiscuous mode
Oct 13 15:50:49 standalone.localdomain NetworkManager[5962]: <info>  [1760370649.9724] manager: (tap1e035664-c8): new Generic device (/org/freedesktop/NetworkManager/Devices/143)
Oct 13 15:50:49 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:49Z|00846|binding|INFO|Claiming lport 1e035664-c8ca-4a82-b9b7-3e82245eb86c for this chassis.
Oct 13 15:50:49 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:49Z|00847|binding|INFO|1e035664-c8ca-4a82-b9b7-3e82245eb86c: Claiming unknown
Oct 13 15:50:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:49.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:49 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:49Z|00848|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:50:49 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:49.990 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-4a5437bf-c853-4246-b682-9bda5511af96', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a5437bf-c853-4246-b682-9bda5511af96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '700deafc96f14116854ada9bcef9278a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9fadf1b2-56af-4579-8437-056e7dd7d979, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=1e035664-c8ca-4a82-b9b7-3e82245eb86c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:50:49 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:49.992 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 1e035664-c8ca-4a82-b9b7-3e82245eb86c in datapath 4a5437bf-c853-4246-b682-9bda5511af96 bound to our chassis
Oct 13 15:50:49 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:49.994 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4a5437bf-c853-4246-b682-9bda5511af96 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:50:49 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:49.995 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[efbc365e-1715-412b-a353-f819927d8dc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:50:49 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:49Z|00849|binding|INFO|Setting lport 1e035664-c8ca-4a82-b9b7-3e82245eb86c ovn-installed in OVS
Oct 13 15:50:49 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:49Z|00850|binding|INFO|Setting lport 1e035664-c8ca-4a82-b9b7-3e82245eb86c up in Southbound
Oct 13 15:50:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:50.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:50.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:50.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:50:50 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:50.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:50:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:50:50 standalone.localdomain podman[567090]: 2025-10-13 15:50:50.171984181 +0000 UTC m=+0.109019656 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:50:50 standalone.localdomain podman[567117]: 2025-10-13 15:50:50.238720681 +0000 UTC m=+0.066951897 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, description=Red Hat OpenStack Platform 17.1 swift-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, release=1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-swift-container-container, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1)
Oct 13 15:50:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:50:50 standalone.localdomain podman[567090]: 2025-10-13 15:50:50.262237707 +0000 UTC m=+0.199273172 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ceilometer_agent_compute)
Oct 13 15:50:50 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:50:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:50:50 standalone.localdomain podman[567118]: 2025-10-13 15:50:50.359985524 +0000 UTC m=+0.184260219 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:50:50 standalone.localdomain podman[567118]: 2025-10-13 15:50:50.365877846 +0000 UTC m=+0.190152531 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:50:50 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:50:50 standalone.localdomain podman[567180]: 2025-10-13 15:50:50.412589548 +0000 UTC m=+0.068090052 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-object, distribution-scope=public, io.buildah.version=1.33.12, config_id=tripleo_step4, build-date=2025-07-21T14:56:28, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, managed_by=tripleo_ansible, release=1, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.expose-services=, version=17.1.9, com.redhat.component=openstack-swift-object-container, container_name=swift_object_server, tcib_managed=true, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements)
Oct 13 15:50:50 standalone.localdomain podman[567152]: 2025-10-13 15:50:50.330703671 +0000 UTC m=+0.071812608 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, architecture=x86_64, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.component=openstack-swift-account-container, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_account_server, io.openshift.expose-services=, release=1, config_id=tripleo_step4, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-swift-account, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T16:11:22, distribution-scope=public, vendor=Red Hat, Inc.)
Oct 13 15:50:50 standalone.localdomain podman[567117]: 2025-10-13 15:50:50.486117298 +0000 UTC m=+0.314348534 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-07-21T15:54:32, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.component=openstack-swift-container-container, managed_by=tripleo_ansible, batch=17.1_20250721.1, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_container_server, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-swift-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:50:50 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:50:50 standalone.localdomain podman[567152]: 2025-10-13 15:50:50.533853541 +0000 UTC m=+0.274962468 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-account, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, name=rhosp17/openstack-swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, container_name=swift_account_server, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.9, tcib_managed=true, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:50:50 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:50:50 standalone.localdomain podman[567180]: 2025-10-13 15:50:50.642041991 +0000 UTC m=+0.297542545 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, distribution-scope=public, name=rhosp17/openstack-swift-object, architecture=x86_64, version=17.1.9, config_id=tripleo_step4, build-date=2025-07-21T14:56:28, batch=17.1_20250721.1, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, com.redhat.component=openstack-swift-object-container, description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, release=1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d)
Oct 13 15:50:50 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:50:50 standalone.localdomain ceph-mon[29756]: pgmap v4096: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:51 standalone.localdomain podman[567300]: 
Oct 13 15:50:51 standalone.localdomain podman[567300]: 2025-10-13 15:50:51.130769526 +0000 UTC m=+0.151181107 container create 9daf7d93902e29f9028c07803a714ffebd3679e3dc50dc034300edd5b520d16f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c1e5883-f007-44dc-afe4-ab1ee2ba062d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:50:51 standalone.localdomain podman[567316]: 
Oct 13 15:50:51 standalone.localdomain podman[567316]: 2025-10-13 15:50:51.1658581 +0000 UTC m=+0.108823970 container create 14ccf28ae1046a16cccc3635f916bf4c1d6393887e208cb8fd0bb65abc7d1c6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4a5437bf-c853-4246-b682-9bda5511af96, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:50:51 standalone.localdomain systemd[1]: Started libpod-conmon-9daf7d93902e29f9028c07803a714ffebd3679e3dc50dc034300edd5b520d16f.scope.
Oct 13 15:50:51 standalone.localdomain podman[567300]: 2025-10-13 15:50:51.075339535 +0000 UTC m=+0.095751146 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:50:51 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:50:51 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d961ba4a096825feb9406d370df3d880931bd7d721e6a0b31f2153f2e686cbd1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:50:51 standalone.localdomain systemd[1]: Started libpod-conmon-14ccf28ae1046a16cccc3635f916bf4c1d6393887e208cb8fd0bb65abc7d1c6b.scope.
Oct 13 15:50:51 standalone.localdomain podman[567300]: 2025-10-13 15:50:51.19760549 +0000 UTC m=+0.218017081 container init 9daf7d93902e29f9028c07803a714ffebd3679e3dc50dc034300edd5b520d16f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c1e5883-f007-44dc-afe4-ab1ee2ba062d, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:50:51 standalone.localdomain podman[567300]: 2025-10-13 15:50:51.205191444 +0000 UTC m=+0.225603025 container start 9daf7d93902e29f9028c07803a714ffebd3679e3dc50dc034300edd5b520d16f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c1e5883-f007-44dc-afe4-ab1ee2ba062d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:50:51 standalone.localdomain dnsmasq[567340]: started, version 2.85 cachesize 150
Oct 13 15:50:51 standalone.localdomain dnsmasq[567340]: DNS service limited to local subnets
Oct 13 15:50:51 standalone.localdomain dnsmasq[567340]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:50:51 standalone.localdomain dnsmasq[567340]: warning: no upstream servers configured
Oct 13 15:50:51 standalone.localdomain dnsmasq-dhcp[567340]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:50:51 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:50:51 standalone.localdomain dnsmasq[567340]: read /var/lib/neutron/dhcp/9c1e5883-f007-44dc-afe4-ab1ee2ba062d/addn_hosts - 0 addresses
Oct 13 15:50:51 standalone.localdomain dnsmasq-dhcp[567340]: read /var/lib/neutron/dhcp/9c1e5883-f007-44dc-afe4-ab1ee2ba062d/host
Oct 13 15:50:51 standalone.localdomain dnsmasq-dhcp[567340]: read /var/lib/neutron/dhcp/9c1e5883-f007-44dc-afe4-ab1ee2ba062d/opts
Oct 13 15:50:51 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b7a39a6c3ccccfab127bcdb95c3dee66a4bd2b55a45874c8e02844686458d9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:50:51 standalone.localdomain podman[567316]: 2025-10-13 15:50:51.117193337 +0000 UTC m=+0.060159237 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:50:51 standalone.localdomain podman[567316]: 2025-10-13 15:50:51.225660285 +0000 UTC m=+0.168626125 container init 14ccf28ae1046a16cccc3635f916bf4c1d6393887e208cb8fd0bb65abc7d1c6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4a5437bf-c853-4246-b682-9bda5511af96, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:50:51 standalone.localdomain podman[567316]: 2025-10-13 15:50:51.233606281 +0000 UTC m=+0.176572121 container start 14ccf28ae1046a16cccc3635f916bf4c1d6393887e208cb8fd0bb65abc7d1c6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4a5437bf-c853-4246-b682-9bda5511af96, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 15:50:51 standalone.localdomain dnsmasq[567343]: started, version 2.85 cachesize 150
Oct 13 15:50:51 standalone.localdomain dnsmasq[567343]: DNS service limited to local subnets
Oct 13 15:50:51 standalone.localdomain dnsmasq[567343]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:50:51 standalone.localdomain dnsmasq[567343]: warning: no upstream servers configured
Oct 13 15:50:51 standalone.localdomain dnsmasq-dhcp[567343]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Oct 13 15:50:51 standalone.localdomain dnsmasq[567343]: read /var/lib/neutron/dhcp/4a5437bf-c853-4246-b682-9bda5511af96/addn_hosts - 0 addresses
Oct 13 15:50:51 standalone.localdomain dnsmasq-dhcp[567343]: read /var/lib/neutron/dhcp/4a5437bf-c853-4246-b682-9bda5511af96/host
Oct 13 15:50:51 standalone.localdomain dnsmasq-dhcp[567343]: read /var/lib/neutron/dhcp/4a5437bf-c853-4246-b682-9bda5511af96/opts
Oct 13 15:50:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:51.285 496978 INFO neutron.agent.dhcp.agent [None req-a5f6e4d4-3b0b-4057-811c-2d9769d484e8 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:50:49Z, description=, device_id=736b5a5e-06be-420d-ab1f-9766274c1303, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890db9d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890dba90>], id=b19b41a4-4d80-4b3e-85dd-a7069486bde5, ip_allocation=immediate, mac_address=fa:16:3e:e6:6f:e1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:50:47Z, description=, dns_domain=, id=9c1e5883-f007-44dc-afe4-ab1ee2ba062d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--1261133052, port_security_enabled=True, project_id=700deafc96f14116854ada9bcef9278a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54126, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2557, status=ACTIVE, subnets=['aeb8d9bc-6d93-4a91-ac90-9dac421c3145'], tags=[], tenant_id=700deafc96f14116854ada9bcef9278a, updated_at=2025-10-13T15:50:48Z, vlan_transparent=None, network_id=9c1e5883-f007-44dc-afe4-ab1ee2ba062d, port_security_enabled=False, project_id=700deafc96f14116854ada9bcef9278a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2568, status=DOWN, tags=[], tenant_id=700deafc96f14116854ada9bcef9278a, updated_at=2025-10-13T15:50:49Z on network 9c1e5883-f007-44dc-afe4-ab1ee2ba062d
Oct 13 15:50:51 standalone.localdomain dnsmasq[567340]: read /var/lib/neutron/dhcp/9c1e5883-f007-44dc-afe4-ab1ee2ba062d/addn_hosts - 1 addresses
Oct 13 15:50:51 standalone.localdomain dnsmasq-dhcp[567340]: read /var/lib/neutron/dhcp/9c1e5883-f007-44dc-afe4-ab1ee2ba062d/host
Oct 13 15:50:51 standalone.localdomain dnsmasq-dhcp[567340]: read /var/lib/neutron/dhcp/9c1e5883-f007-44dc-afe4-ab1ee2ba062d/opts
Oct 13 15:50:51 standalone.localdomain podman[567362]: 2025-10-13 15:50:51.473525457 +0000 UTC m=+0.062546012 container kill 9daf7d93902e29f9028c07803a714ffebd3679e3dc50dc034300edd5b520d16f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c1e5883-f007-44dc-afe4-ab1ee2ba062d, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:50:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:51.654 496978 INFO neutron.agent.dhcp.agent [None req-a5f6e4d4-3b0b-4057-811c-2d9769d484e8 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:50:49Z, description=, device_id=736b5a5e-06be-420d-ab1f-9766274c1303, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ec32e0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888ec3f70>], id=b19b41a4-4d80-4b3e-85dd-a7069486bde5, ip_allocation=immediate, mac_address=fa:16:3e:e6:6f:e1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:50:47Z, description=, dns_domain=, id=9c1e5883-f007-44dc-afe4-ab1ee2ba062d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--1261133052, port_security_enabled=True, project_id=700deafc96f14116854ada9bcef9278a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54126, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2557, status=ACTIVE, subnets=['aeb8d9bc-6d93-4a91-ac90-9dac421c3145'], tags=[], tenant_id=700deafc96f14116854ada9bcef9278a, updated_at=2025-10-13T15:50:48Z, vlan_transparent=None, network_id=9c1e5883-f007-44dc-afe4-ab1ee2ba062d, port_security_enabled=False, project_id=700deafc96f14116854ada9bcef9278a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2568, status=DOWN, tags=[], tenant_id=700deafc96f14116854ada9bcef9278a, updated_at=2025-10-13T15:50:49Z on network 9c1e5883-f007-44dc-afe4-ab1ee2ba062d
Oct 13 15:50:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:51.797 496978 INFO neutron.agent.dhcp.agent [None req-d7145d9c-dc9a-4201-b0f6-2916f628b341 - - - - - -] DHCP configuration for ports {'522c2e4d-4a23-4295-9db9-c37dee3def04', 'a436c465-bd7c-4452-8964-2a05ea0b6033'} is completed
Oct 13 15:50:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4097: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:51 standalone.localdomain dnsmasq[567340]: read /var/lib/neutron/dhcp/9c1e5883-f007-44dc-afe4-ab1ee2ba062d/addn_hosts - 1 addresses
Oct 13 15:50:51 standalone.localdomain podman[567403]: 2025-10-13 15:50:51.852428983 +0000 UTC m=+0.050812090 container kill 9daf7d93902e29f9028c07803a714ffebd3679e3dc50dc034300edd5b520d16f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c1e5883-f007-44dc-afe4-ab1ee2ba062d, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:50:51 standalone.localdomain dnsmasq-dhcp[567340]: read /var/lib/neutron/dhcp/9c1e5883-f007-44dc-afe4-ab1ee2ba062d/host
Oct 13 15:50:51 standalone.localdomain dnsmasq-dhcp[567340]: read /var/lib/neutron/dhcp/9c1e5883-f007-44dc-afe4-ab1ee2ba062d/opts
Oct 13 15:50:51 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:51.946 496978 INFO neutron.agent.dhcp.agent [None req-e592c1cc-d18d-4e48-807e-c3e8db0ab46f - - - - - -] DHCP configuration for ports {'b19b41a4-4d80-4b3e-85dd-a7069486bde5'} is completed
Oct 13 15:50:52 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:52.063 496978 INFO neutron.agent.dhcp.agent [None req-6951a34d-d03c-411e-bfda-16902195a208 - - - - - -] DHCP configuration for ports {'b19b41a4-4d80-4b3e-85dd-a7069486bde5'} is completed
Oct 13 15:50:52 standalone.localdomain dnsmasq[567340]: read /var/lib/neutron/dhcp/9c1e5883-f007-44dc-afe4-ab1ee2ba062d/addn_hosts - 0 addresses
Oct 13 15:50:52 standalone.localdomain dnsmasq-dhcp[567340]: read /var/lib/neutron/dhcp/9c1e5883-f007-44dc-afe4-ab1ee2ba062d/host
Oct 13 15:50:52 standalone.localdomain dnsmasq-dhcp[567340]: read /var/lib/neutron/dhcp/9c1e5883-f007-44dc-afe4-ab1ee2ba062d/opts
Oct 13 15:50:52 standalone.localdomain podman[567441]: 2025-10-13 15:50:52.247625731 +0000 UTC m=+0.087895505 container kill 9daf7d93902e29f9028c07803a714ffebd3679e3dc50dc034300edd5b520d16f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c1e5883-f007-44dc-afe4-ab1ee2ba062d, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:50:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:52Z|00851|binding|INFO|Releasing lport 5145476f-9a19-444d-b5f4-025c89fa95eb from this chassis (sb_readonly=0)
Oct 13 15:50:52 standalone.localdomain kernel: device tap5145476f-9a left promiscuous mode
Oct 13 15:50:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:52.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:52 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:52Z|00852|binding|INFO|Setting lport 5145476f-9a19-444d-b5f4-025c89fa95eb down in Southbound
Oct 13 15:50:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:52.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:52.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:52.615 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-9c1e5883-f007-44dc-afe4-ab1ee2ba062d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9c1e5883-f007-44dc-afe4-ab1ee2ba062d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '700deafc96f14116854ada9bcef9278a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36561b9d-0186-4591-8f40-6e8ad2bc1f68, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=5145476f-9a19-444d-b5f4-025c89fa95eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:50:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:52.618 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 5145476f-9a19-444d-b5f4-025c89fa95eb in datapath 9c1e5883-f007-44dc-afe4-ab1ee2ba062d unbound from our chassis
Oct 13 15:50:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:52.620 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9c1e5883-f007-44dc-afe4-ab1ee2ba062d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:50:52 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:52.621 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7c6463-a9b1-4cbf-89af-775bef0b4e68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:50:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:50:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:50:52 standalone.localdomain podman[567464]: 2025-10-13 15:50:52.829378579 +0000 UTC m=+0.096514221 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct 13 15:50:52 standalone.localdomain podman[567464]: 2025-10-13 15:50:52.843090291 +0000 UTC m=+0.110225963 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 13 15:50:52 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:50:52 standalone.localdomain ceph-mon[29756]: pgmap v4097: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:50:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:50:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12108 DF PROTO=TCP SPT=57666 DPT=9102 SEQ=3144847939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD5570F0000000001030307) 
Oct 13 15:50:53 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:53.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4098: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:54.106 496978 INFO neutron.agent.linux.ip_lib [None req-deaf4169-c9b3-46de-8733-9daf265b2872 - - - - - -] Device tap2a5401af-a2 cannot be used as it has no MAC address
Oct 13 15:50:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:54.112 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:50:53Z, description=, device_id=736b5a5e-06be-420d-ab1f-9766274c1303, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890d7490>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890d75b0>], id=452f0a3c-b6ce-45a2-b8d9-81d28a416f8d, ip_allocation=immediate, mac_address=fa:16:3e:1a:fb:f6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:50:45Z, description=, dns_domain=, id=3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeIpV6Test-test-network-199003545, port_security_enabled=True, project_id=700deafc96f14116854ada9bcef9278a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=61811, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2553, status=ACTIVE, subnets=['7b223bf0-e849-4f95-8e22-e9ddd9af4d73'], tags=[], tenant_id=700deafc96f14116854ada9bcef9278a, updated_at=2025-10-13T15:50:47Z, vlan_transparent=None, network_id=3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba, port_security_enabled=False, project_id=700deafc96f14116854ada9bcef9278a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2574, status=DOWN, tags=[], tenant_id=700deafc96f14116854ada9bcef9278a, updated_at=2025-10-13T15:50:53Z on network 3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba
Oct 13 15:50:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:54.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:54 standalone.localdomain kernel: device tap2a5401af-a2 entered promiscuous mode
Oct 13 15:50:54 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:54Z|00853|binding|INFO|Claiming lport 2a5401af-a2f9-4e23-b5e5-947a2493acfd for this chassis.
Oct 13 15:50:54 standalone.localdomain NetworkManager[5962]: <info>  [1760370654.1485] manager: (tap2a5401af-a2): new Generic device (/org/freedesktop/NetworkManager/Devices/144)
Oct 13 15:50:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:54.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:54 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:54Z|00854|binding|INFO|2a5401af-a2f9-4e23-b5e5-947a2493acfd: Claiming unknown
Oct 13 15:50:54 standalone.localdomain systemd-udevd[567497]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:50:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:54.166 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-a21acfed-b66a-4f59-acda-ec6ac456b5ad', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a21acfed-b66a-4f59-acda-ec6ac456b5ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '700deafc96f14116854ada9bcef9278a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1192b55-ddc9-47c2-800c-9c89e9b9cfd8, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=2a5401af-a2f9-4e23-b5e5-947a2493acfd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:50:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:54.168 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 2a5401af-a2f9-4e23-b5e5-947a2493acfd in datapath a21acfed-b66a-4f59-acda-ec6ac456b5ad bound to our chassis
Oct 13 15:50:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:54.169 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a21acfed-b66a-4f59-acda-ec6ac456b5ad or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:50:54 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:54.171 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[fcb2884e-55db-48e1-95c9-418ed567280a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:50:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:54.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:54 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:54Z|00855|binding|INFO|Setting lport 2a5401af-a2f9-4e23-b5e5-947a2493acfd ovn-installed in OVS
Oct 13 15:50:54 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:54Z|00856|binding|INFO|Setting lport 2a5401af-a2f9-4e23-b5e5-947a2493acfd up in Southbound
Oct 13 15:50:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:54.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:54.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:54.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:54 standalone.localdomain podman[567521]: 2025-10-13 15:50:54.317050689 +0000 UTC m=+0.062891212 container kill addd761ee6e61e7575ccf70b45af72ea97db4a4245fc155dc7108128236ca530 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:50:54 standalone.localdomain dnsmasq[567042]: read /var/lib/neutron/dhcp/3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba/addn_hosts - 1 addresses
Oct 13 15:50:54 standalone.localdomain dnsmasq-dhcp[567042]: read /var/lib/neutron/dhcp/3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba/host
Oct 13 15:50:54 standalone.localdomain dnsmasq-dhcp[567042]: read /var/lib/neutron/dhcp/3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba/opts
Oct 13 15:50:54 standalone.localdomain systemd[1]: tmp-crun.5o4HG5.mount: Deactivated successfully.
Oct 13 15:50:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12109 DF PROTO=TCP SPT=57666 DPT=9102 SEQ=3144847939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD55B360000000001030307) 
Oct 13 15:50:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:54.780 496978 INFO neutron.agent.dhcp.agent [None req-37cfa4c1-7e59-4f51-94cb-0190a470c7bd - - - - - -] DHCP configuration for ports {'452f0a3c-b6ce-45a2-b8d9-81d28a416f8d'} is completed
Oct 13 15:50:54 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:54.843 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:50:53Z, description=, device_id=736b5a5e-06be-420d-ab1f-9766274c1303, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889274820>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1889274f10>], id=452f0a3c-b6ce-45a2-b8d9-81d28a416f8d, ip_allocation=immediate, mac_address=fa:16:3e:1a:fb:f6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:50:45Z, description=, dns_domain=, id=3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeIpV6Test-test-network-199003545, port_security_enabled=True, project_id=700deafc96f14116854ada9bcef9278a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=61811, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2553, status=ACTIVE, subnets=['7b223bf0-e849-4f95-8e22-e9ddd9af4d73'], tags=[], tenant_id=700deafc96f14116854ada9bcef9278a, updated_at=2025-10-13T15:50:47Z, vlan_transparent=None, network_id=3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba, port_security_enabled=False, project_id=700deafc96f14116854ada9bcef9278a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2574, status=DOWN, tags=[], tenant_id=700deafc96f14116854ada9bcef9278a, updated_at=2025-10-13T15:50:53Z on network 3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba
Oct 13 15:50:54 standalone.localdomain ceph-mon[29756]: pgmap v4098: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:55 standalone.localdomain dnsmasq[567042]: read /var/lib/neutron/dhcp/3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba/addn_hosts - 1 addresses
Oct 13 15:50:55 standalone.localdomain podman[567585]: 2025-10-13 15:50:55.045895057 +0000 UTC m=+0.059379494 container kill addd761ee6e61e7575ccf70b45af72ea97db4a4245fc155dc7108128236ca530 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 15:50:55 standalone.localdomain dnsmasq-dhcp[567042]: read /var/lib/neutron/dhcp/3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba/host
Oct 13 15:50:55 standalone.localdomain dnsmasq-dhcp[567042]: read /var/lib/neutron/dhcp/3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba/opts
Oct 13 15:50:55 standalone.localdomain podman[567625]: 
Oct 13 15:50:55 standalone.localdomain podman[567625]: 2025-10-13 15:50:55.198176437 +0000 UTC m=+0.069062293 container create 25f9e76a6459f3b33eec7b3ea30a001a51d0c36613bf25e27e1e5fdb9e72b5fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a21acfed-b66a-4f59-acda-ec6ac456b5ad, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:50:55 standalone.localdomain systemd[1]: Started libpod-conmon-25f9e76a6459f3b33eec7b3ea30a001a51d0c36613bf25e27e1e5fdb9e72b5fc.scope.
Oct 13 15:50:55 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:50:55 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7839ac884839691cb7feab054b6cd4ef0f346d00cb7fdf22eccbc3e7f66163c1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:50:55 standalone.localdomain podman[567625]: 2025-10-13 15:50:55.156393317 +0000 UTC m=+0.027279173 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:50:55 standalone.localdomain podman[567625]: 2025-10-13 15:50:55.260428368 +0000 UTC m=+0.131314264 container init 25f9e76a6459f3b33eec7b3ea30a001a51d0c36613bf25e27e1e5fdb9e72b5fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a21acfed-b66a-4f59-acda-ec6ac456b5ad, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:50:55 standalone.localdomain podman[567625]: 2025-10-13 15:50:55.266011471 +0000 UTC m=+0.136897367 container start 25f9e76a6459f3b33eec7b3ea30a001a51d0c36613bf25e27e1e5fdb9e72b5fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a21acfed-b66a-4f59-acda-ec6ac456b5ad, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 15:50:55 standalone.localdomain dnsmasq[567647]: started, version 2.85 cachesize 150
Oct 13 15:50:55 standalone.localdomain dnsmasq[567647]: DNS service limited to local subnets
Oct 13 15:50:55 standalone.localdomain dnsmasq[567647]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:50:55 standalone.localdomain dnsmasq[567647]: warning: no upstream servers configured
Oct 13 15:50:55 standalone.localdomain dnsmasq-dhcp[567647]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d
Oct 13 15:50:55 standalone.localdomain dnsmasq[567647]: read /var/lib/neutron/dhcp/a21acfed-b66a-4f59-acda-ec6ac456b5ad/addn_hosts - 0 addresses
Oct 13 15:50:55 standalone.localdomain dnsmasq-dhcp[567647]: read /var/lib/neutron/dhcp/a21acfed-b66a-4f59-acda-ec6ac456b5ad/host
Oct 13 15:50:55 standalone.localdomain dnsmasq-dhcp[567647]: read /var/lib/neutron/dhcp/a21acfed-b66a-4f59-acda-ec6ac456b5ad/opts
Oct 13 15:50:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:55.318 496978 INFO neutron.agent.dhcp.agent [None req-9248eaf2-98fa-427d-9891-043fb03b6140 - - - - - -] DHCP configuration for ports {'452f0a3c-b6ce-45a2-b8d9-81d28a416f8d'} is completed
Oct 13 15:50:55 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:55.462 496978 INFO neutron.agent.dhcp.agent [None req-7ee3a216-bb47-4a02-97b6-d07deda20e0a - - - - - -] DHCP configuration for ports {'8e2a93ae-97c6-4d2a-9014-4cce04328891'} is completed
Oct 13 15:50:55 standalone.localdomain dnsmasq[567042]: read /var/lib/neutron/dhcp/3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba/addn_hosts - 0 addresses
Oct 13 15:50:55 standalone.localdomain dnsmasq-dhcp[567042]: read /var/lib/neutron/dhcp/3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba/host
Oct 13 15:50:55 standalone.localdomain podman[567666]: 2025-10-13 15:50:55.703038761 +0000 UTC m=+0.045848706 container kill addd761ee6e61e7575ccf70b45af72ea97db4a4245fc155dc7108128236ca530 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009)
Oct 13 15:50:55 standalone.localdomain dnsmasq-dhcp[567042]: read /var/lib/neutron/dhcp/3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba/opts
Oct 13 15:50:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4099: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:55Z|00857|binding|INFO|Releasing lport 8c4469c0-fd49-4943-87a1-daef63c78308 from this chassis (sb_readonly=0)
Oct 13 15:50:55 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:55Z|00858|binding|INFO|Setting lport 8c4469c0-fd49-4943-87a1-daef63c78308 down in Southbound
Oct 13 15:50:55 standalone.localdomain kernel: device tap8c4469c0-fd left promiscuous mode
Oct 13 15:50:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:55.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:55.922 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '700deafc96f14116854ada9bcef9278a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a91f760-a218-49be-8559-696c6b4a3cb3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=8c4469c0-fd49-4943-87a1-daef63c78308) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:50:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:55.924 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 8c4469c0-fd49-4943-87a1-daef63c78308 in datapath 3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba unbound from our chassis
Oct 13 15:50:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:55.926 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:50:55 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:55.927 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[ad6e2972-c789-43f8-8d42-5711a6ef5327]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:50:55 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:55.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:56 standalone.localdomain systemd[1]: tmp-crun.Lg4lUI.mount: Deactivated successfully.
Oct 13 15:50:56 standalone.localdomain dnsmasq[567647]: exiting on receipt of SIGTERM
Oct 13 15:50:56 standalone.localdomain podman[567708]: 2025-10-13 15:50:56.605358433 +0000 UTC m=+0.071358363 container kill 25f9e76a6459f3b33eec7b3ea30a001a51d0c36613bf25e27e1e5fdb9e72b5fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a21acfed-b66a-4f59-acda-ec6ac456b5ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:50:56 standalone.localdomain systemd[1]: libpod-25f9e76a6459f3b33eec7b3ea30a001a51d0c36613bf25e27e1e5fdb9e72b5fc.scope: Deactivated successfully.
Oct 13 15:50:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12110 DF PROTO=TCP SPT=57666 DPT=9102 SEQ=3144847939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD563360000000001030307) 
Oct 13 15:50:56 standalone.localdomain podman[567721]: 2025-10-13 15:50:56.677800859 +0000 UTC m=+0.059581699 container died 25f9e76a6459f3b33eec7b3ea30a001a51d0c36613bf25e27e1e5fdb9e72b5fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a21acfed-b66a-4f59-acda-ec6ac456b5ad, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:50:56 standalone.localdomain podman[567721]: 2025-10-13 15:50:56.717949209 +0000 UTC m=+0.099730049 container cleanup 25f9e76a6459f3b33eec7b3ea30a001a51d0c36613bf25e27e1e5fdb9e72b5fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a21acfed-b66a-4f59-acda-ec6ac456b5ad, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:50:56 standalone.localdomain systemd[1]: libpod-conmon-25f9e76a6459f3b33eec7b3ea30a001a51d0c36613bf25e27e1e5fdb9e72b5fc.scope: Deactivated successfully.
Oct 13 15:50:56 standalone.localdomain podman[567729]: 2025-10-13 15:50:56.761564755 +0000 UTC m=+0.127268999 container remove 25f9e76a6459f3b33eec7b3ea30a001a51d0c36613bf25e27e1e5fdb9e72b5fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a21acfed-b66a-4f59-acda-ec6ac456b5ad, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 13 15:50:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:56Z|00859|binding|INFO|Removing iface tap2a5401af-a2 ovn-installed in OVS
Oct 13 15:50:56 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:56Z|00860|binding|INFO|Removing lport 2a5401af-a2f9-4e23-b5e5-947a2493acfd ovn-installed in OVS
Oct 13 15:50:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:56.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:56 standalone.localdomain kernel: device tap2a5401af-a2 left promiscuous mode
Oct 13 15:50:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:56.813 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port db1d36c6-d812-4a7c-8903-d60a4796ff1c with type ""
Oct 13 15:50:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:56.815 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-a21acfed-b66a-4f59-acda-ec6ac456b5ad', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a21acfed-b66a-4f59-acda-ec6ac456b5ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '700deafc96f14116854ada9bcef9278a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1192b55-ddc9-47c2-800c-9c89e9b9cfd8, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=2a5401af-a2f9-4e23-b5e5-947a2493acfd) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:50:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:56.817 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 2a5401af-a2f9-4e23-b5e5-947a2493acfd in datapath a21acfed-b66a-4f59-acda-ec6ac456b5ad unbound from our chassis
Oct 13 15:50:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:56.819 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a21acfed-b66a-4f59-acda-ec6ac456b5ad or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:50:56 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:56.821 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[d962b2fb-b512-41da-abdc-96385c540494]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:50:56 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:56.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:56.861 496978 INFO neutron.agent.dhcp.agent [None req-d8b0528d-d7d1-4594-8f60-1b35a741fb59 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:56 standalone.localdomain ceph-mon[29756]: pgmap v4099: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:56 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:56.978 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:57 standalone.localdomain systemd[1]: tmp-crun.BNdWQg.mount: Deactivated successfully.
Oct 13 15:50:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-7839ac884839691cb7feab054b6cd4ef0f346d00cb7fdf22eccbc3e7f66163c1-merged.mount: Deactivated successfully.
Oct 13 15:50:57 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-25f9e76a6459f3b33eec7b3ea30a001a51d0c36613bf25e27e1e5fdb9e72b5fc-userdata-shm.mount: Deactivated successfully.
Oct 13 15:50:57 standalone.localdomain systemd[1]: run-netns-qdhcp\x2da21acfed\x2db66a\x2d4f59\x2dacda\x2dec6ac456b5ad.mount: Deactivated successfully.
Oct 13 15:50:57 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:57Z|00861|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:50:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:57.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:57.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:57 standalone.localdomain dnsmasq[567343]: exiting on receipt of SIGTERM
Oct 13 15:50:57 standalone.localdomain podman[567767]: 2025-10-13 15:50:57.660647208 +0000 UTC m=+0.071359244 container kill 14ccf28ae1046a16cccc3635f916bf4c1d6393887e208cb8fd0bb65abc7d1c6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4a5437bf-c853-4246-b682-9bda5511af96, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:50:57 standalone.localdomain systemd[1]: libpod-14ccf28ae1046a16cccc3635f916bf4c1d6393887e208cb8fd0bb65abc7d1c6b.scope: Deactivated successfully.
Oct 13 15:50:57 standalone.localdomain podman[567781]: 2025-10-13 15:50:57.749406998 +0000 UTC m=+0.070967353 container died 14ccf28ae1046a16cccc3635f916bf4c1d6393887e208cb8fd0bb65abc7d1c6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4a5437bf-c853-4246-b682-9bda5511af96, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:50:57 standalone.localdomain podman[567781]: 2025-10-13 15:50:57.782130267 +0000 UTC m=+0.103690572 container cleanup 14ccf28ae1046a16cccc3635f916bf4c1d6393887e208cb8fd0bb65abc7d1c6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4a5437bf-c853-4246-b682-9bda5511af96, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 15:50:57 standalone.localdomain systemd[1]: libpod-conmon-14ccf28ae1046a16cccc3635f916bf4c1d6393887e208cb8fd0bb65abc7d1c6b.scope: Deactivated successfully.
Oct 13 15:50:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:50:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4100: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:57 standalone.localdomain podman[567784]: 2025-10-13 15:50:57.833836594 +0000 UTC m=+0.147100572 container remove 14ccf28ae1046a16cccc3635f916bf4c1d6393887e208cb8fd0bb65abc7d1c6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4a5437bf-c853-4246-b682-9bda5511af96, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:50:57 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:57Z|00862|binding|INFO|Releasing lport 1e035664-c8ca-4a82-b9b7-3e82245eb86c from this chassis (sb_readonly=0)
Oct 13 15:50:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:57.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:57 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:57Z|00863|binding|INFO|Setting lport 1e035664-c8ca-4a82-b9b7-3e82245eb86c down in Southbound
Oct 13 15:50:57 standalone.localdomain kernel: device tap1e035664-c8 left promiscuous mode
Oct 13 15:50:57 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:57.892 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-4a5437bf-c853-4246-b682-9bda5511af96', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a5437bf-c853-4246-b682-9bda5511af96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '700deafc96f14116854ada9bcef9278a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9fadf1b2-56af-4579-8437-056e7dd7d979, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=1e035664-c8ca-4a82-b9b7-3e82245eb86c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:50:57 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:57.894 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 1e035664-c8ca-4a82-b9b7-3e82245eb86c in datapath 4a5437bf-c853-4246-b682-9bda5511af96 unbound from our chassis
Oct 13 15:50:57 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:57.896 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4a5437bf-c853-4246-b682-9bda5511af96 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:50:57 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:50:57.897 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[4cded83b-cf90-40ae-b9b8-e39a39fe0c2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:50:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:57.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:57 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:50:58 standalone.localdomain dnsmasq[567340]: exiting on receipt of SIGTERM
Oct 13 15:50:58 standalone.localdomain podman[567827]: 2025-10-13 15:50:58.01545998 +0000 UTC m=+0.071615252 container kill 9daf7d93902e29f9028c07803a714ffebd3679e3dc50dc034300edd5b520d16f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c1e5883-f007-44dc-afe4-ab1ee2ba062d, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:50:58 standalone.localdomain systemd[1]: libpod-9daf7d93902e29f9028c07803a714ffebd3679e3dc50dc034300edd5b520d16f.scope: Deactivated successfully.
Oct 13 15:50:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d6b7a39a6c3ccccfab127bcdb95c3dee66a4bd2b55a45874c8e02844686458d9-merged.mount: Deactivated successfully.
Oct 13 15:50:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14ccf28ae1046a16cccc3635f916bf4c1d6393887e208cb8fd0bb65abc7d1c6b-userdata-shm.mount: Deactivated successfully.
Oct 13 15:50:58 standalone.localdomain podman[567838]: 2025-10-13 15:50:58.074852623 +0000 UTC m=+0.080835546 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:50:58 standalone.localdomain podman[567838]: 2025-10-13 15:50:58.089211586 +0000 UTC m=+0.095194519 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:50:58 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:50:58 standalone.localdomain podman[567854]: 2025-10-13 15:50:58.143248914 +0000 UTC m=+0.102892866 container died 9daf7d93902e29f9028c07803a714ffebd3679e3dc50dc034300edd5b520d16f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c1e5883-f007-44dc-afe4-ab1ee2ba062d, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 15:50:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9daf7d93902e29f9028c07803a714ffebd3679e3dc50dc034300edd5b520d16f-userdata-shm.mount: Deactivated successfully.
Oct 13 15:50:58 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-d961ba4a096825feb9406d370df3d880931bd7d721e6a0b31f2153f2e686cbd1-merged.mount: Deactivated successfully.
Oct 13 15:50:58 standalone.localdomain podman[567854]: 2025-10-13 15:50:58.18491125 +0000 UTC m=+0.144555192 container remove 9daf7d93902e29f9028c07803a714ffebd3679e3dc50dc034300edd5b520d16f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9c1e5883-f007-44dc-afe4-ab1ee2ba062d, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 15:50:58 standalone.localdomain systemd[1]: libpod-conmon-9daf7d93902e29f9028c07803a714ffebd3679e3dc50dc034300edd5b520d16f.scope: Deactivated successfully.
Oct 13 15:50:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:58.274 496978 INFO neutron.agent.dhcp.agent [None req-e7b50695-9d6a-4dca-b4c7-26cbede6e0b8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:58.275 496978 INFO neutron.agent.dhcp.agent [None req-e7b50695-9d6a-4dca-b4c7-26cbede6e0b8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:58.356 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:58.442 496978 INFO neutron.agent.dhcp.agent [None req-ccb697d3-a2ab-4d29-81ab-80dd65514e04 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:50:58Z|00864|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:50:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:58.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:58 standalone.localdomain nova_compute[521101]: 2025-10-13 15:50:58.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:50:58 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:58.901 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:58 standalone.localdomain ceph-mon[29756]: pgmap v4100: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:59 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d4a5437bf\x2dc853\x2d4246\x2db682\x2d9bda5511af96.mount: Deactivated successfully.
Oct 13 15:50:59 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d9c1e5883\x2df007\x2d44dc\x2dafe4\x2dab1ee2ba062d.mount: Deactivated successfully.
Oct 13 15:50:59 standalone.localdomain dnsmasq[567042]: exiting on receipt of SIGTERM
Oct 13 15:50:59 standalone.localdomain podman[567908]: 2025-10-13 15:50:59.593529481 +0000 UTC m=+0.062411457 container kill addd761ee6e61e7575ccf70b45af72ea97db4a4245fc155dc7108128236ca530 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:50:59 standalone.localdomain systemd[1]: libpod-addd761ee6e61e7575ccf70b45af72ea97db4a4245fc155dc7108128236ca530.scope: Deactivated successfully.
Oct 13 15:50:59 standalone.localdomain podman[567920]: 2025-10-13 15:50:59.637613122 +0000 UTC m=+0.037088296 container died addd761ee6e61e7575ccf70b45af72ea97db4a4245fc155dc7108128236ca530 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:50:59 standalone.localdomain podman[567920]: 2025-10-13 15:50:59.660904651 +0000 UTC m=+0.060379805 container cleanup addd761ee6e61e7575ccf70b45af72ea97db4a4245fc155dc7108128236ca530 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:50:59 standalone.localdomain systemd[1]: libpod-conmon-addd761ee6e61e7575ccf70b45af72ea97db4a4245fc155dc7108128236ca530.scope: Deactivated successfully.
Oct 13 15:50:59 standalone.localdomain podman[567927]: 2025-10-13 15:50:59.704437535 +0000 UTC m=+0.085182001 container remove addd761ee6e61e7575ccf70b45af72ea97db4a4245fc155dc7108128236ca530 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3de0a1aa-60ad-4b5b-b6e9-f4e8b39129ba, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:50:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:59.783 496978 INFO neutron.agent.dhcp.agent [None req-017a131b-96ab-47c9-9966-a2c70b248fb2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:59 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:50:59.784 496978 INFO neutron.agent.dhcp.agent [None req-017a131b-96ab-47c9-9966-a2c70b248fb2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:50:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4101: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:50:59 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:51:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-e36e1a9663902242618152cea75510a3022f74cdf72b69a8d15031f1a42ae3fe-merged.mount: Deactivated successfully.
Oct 13 15:51:00 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-addd761ee6e61e7575ccf70b45af72ea97db4a4245fc155dc7108128236ca530-userdata-shm.mount: Deactivated successfully.
Oct 13 15:51:00 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d3de0a1aa\x2d60ad\x2d4b5b\x2db6e9\x2df4e8b39129ba.mount: Deactivated successfully.
Oct 13 15:51:00 standalone.localdomain podman[567952]: 2025-10-13 15:51:00.069663888 +0000 UTC m=+0.084824869 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Oct 13 15:51:00 standalone.localdomain podman[567952]: 2025-10-13 15:51:00.078047747 +0000 UTC m=+0.093208748 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 13 15:51:00 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:51:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12111 DF PROTO=TCP SPT=57666 DPT=9102 SEQ=3144847939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD572F60000000001030307) 
Oct 13 15:51:00 standalone.localdomain ceph-mon[29756]: pgmap v4101: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4102: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:02.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:51:02 standalone.localdomain sudo[567971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:51:02 standalone.localdomain sudo[567971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:51:02 standalone.localdomain sudo[567971]: pam_unix(sudo:session): session closed for user root
Oct 13 15:51:02 standalone.localdomain sudo[567989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:51:02 standalone.localdomain sudo[567989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:51:02 standalone.localdomain ceph-mon[29756]: pgmap v4102: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:03 standalone.localdomain sudo[567989]: pam_unix(sudo:session): session closed for user root
Oct 13 15:51:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:51:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:51:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:51:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:51:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:51:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:51:03 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:51:03 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:51:03 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev dc45f30b-44a2-4641-aa78-cc0e6f5dd61a (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:51:03 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev dc45f30b-44a2-4641-aa78-cc0e6f5dd61a (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:51:03 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event dc45f30b-44a2-4641-aa78-cc0e6f5dd61a (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:51:03 standalone.localdomain sudo[568039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:51:03 standalone.localdomain sudo[568039]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:51:03 standalone.localdomain sudo[568039]: pam_unix(sudo:session): session closed for user root
Oct 13 15:51:03 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:03.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4103: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:03 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:51:03 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:51:03 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:51:03 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:51:04 standalone.localdomain ceph-mon[29756]: pgmap v4103: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:05 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:51:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:51:05 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:51:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4104: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:05 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:51:05.943 2 INFO neutron.agent.securitygroups_rpc [None req-2fd15d57-72ce-48ce-9bff-ab483937c1d8 203d662b329f4b6da6f1531c1f8dcf7d 7a822832adef403fbc41daade998b757 - - default default] Security group member updated ['3ec00e78-ca2d-4b09-bb14-e539ff2b515c']
Oct 13 15:51:06 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:51:06 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:06.410 496978 INFO neutron.agent.linux.ip_lib [None req-bb1c62b5-4f6a-4e6c-982c-95b5578fd288 - - - - - -] Device tap6db4044d-74 cannot be used as it has no MAC address
Oct 13 15:51:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:06.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:06 standalone.localdomain kernel: device tap6db4044d-74 entered promiscuous mode
Oct 13 15:51:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:06.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:06 standalone.localdomain NetworkManager[5962]: <info>  [1760370666.4969] manager: (tap6db4044d-74): new Generic device (/org/freedesktop/NetworkManager/Devices/145)
Oct 13 15:51:06 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:51:06Z|00865|binding|INFO|Claiming lport 6db4044d-7422-4cc4-b9cb-7fa149862d4f for this chassis.
Oct 13 15:51:06 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:51:06Z|00866|binding|INFO|6db4044d-7422-4cc4-b9cb-7fa149862d4f: Claiming unknown
Oct 13 15:51:06 standalone.localdomain systemd-udevd[568067]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:51:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:06.506 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-790c70ca-fa91-4cc5-99ca-b6b51a5fe818', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-790c70ca-fa91-4cc5-99ca-b6b51a5fe818', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7a822832adef403fbc41daade998b757', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fccadc7-ea52-4ab6-8dc3-59c05740e921, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=6db4044d-7422-4cc4-b9cb-7fa149862d4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:51:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:06.508 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 6db4044d-7422-4cc4-b9cb-7fa149862d4f in datapath 790c70ca-fa91-4cc5-99ca-b6b51a5fe818 bound to our chassis
Oct 13 15:51:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:06.510 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Port 068e646e-93f1-466e-bb61-6192fd9a691a IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Oct 13 15:51:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:06.511 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 790c70ca-fa91-4cc5-99ca-b6b51a5fe818, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:51:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:06.512 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[2d604d14-3904-4a78-afe2-1f00b4b2e26c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:51:06 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6db4044d-74: No such device
Oct 13 15:51:06 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6db4044d-74: No such device
Oct 13 15:51:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:06.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:06 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:51:06Z|00867|binding|INFO|Setting lport 6db4044d-7422-4cc4-b9cb-7fa149862d4f ovn-installed in OVS
Oct 13 15:51:06 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:51:06Z|00868|binding|INFO|Setting lport 6db4044d-7422-4cc4-b9cb-7fa149862d4f up in Southbound
Oct 13 15:51:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:06.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:06 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6db4044d-74: No such device
Oct 13 15:51:06 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6db4044d-74: No such device
Oct 13 15:51:06 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6db4044d-74: No such device
Oct 13 15:51:06 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6db4044d-74: No such device
Oct 13 15:51:06 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6db4044d-74: No such device
Oct 13 15:51:06 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap6db4044d-74: No such device
Oct 13 15:51:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:06.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:06.970 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:51:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:06.971 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:51:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:06.972 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:51:07 standalone.localdomain ceph-mon[29756]: pgmap v4104: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:07.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:07 standalone.localdomain podman[568138]: 2025-10-13 15:51:07.623732824 +0000 UTC m=+0.112885506 container create bc978b2e6020002498a12e98f346f01df3816d95cde4568f1c7df80deadfda7a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-790c70ca-fa91-4cc5-99ca-b6b51a5fe818, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 13 15:51:07 standalone.localdomain systemd[1]: Started libpod-conmon-bc978b2e6020002498a12e98f346f01df3816d95cde4568f1c7df80deadfda7a.scope.
Oct 13 15:51:07 standalone.localdomain podman[568138]: 2025-10-13 15:51:07.584355598 +0000 UTC m=+0.073508350 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:51:07 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:51:07 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf257ededa913f2e690956f21542d27e133dbc07365987cb58eccdf724d9efe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:51:07 standalone.localdomain podman[568138]: 2025-10-13 15:51:07.717552099 +0000 UTC m=+0.206704781 container init bc978b2e6020002498a12e98f346f01df3816d95cde4568f1c7df80deadfda7a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-790c70ca-fa91-4cc5-99ca-b6b51a5fe818, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:51:07 standalone.localdomain podman[568138]: 2025-10-13 15:51:07.726476845 +0000 UTC m=+0.215629567 container start bc978b2e6020002498a12e98f346f01df3816d95cde4568f1c7df80deadfda7a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-790c70ca-fa91-4cc5-99ca-b6b51a5fe818, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:51:07 standalone.localdomain dnsmasq[568156]: started, version 2.85 cachesize 150
Oct 13 15:51:07 standalone.localdomain dnsmasq[568156]: DNS service limited to local subnets
Oct 13 15:51:07 standalone.localdomain dnsmasq[568156]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:51:07 standalone.localdomain dnsmasq[568156]: warning: no upstream servers configured
Oct 13 15:51:07 standalone.localdomain dnsmasq-dhcp[568156]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:51:07 standalone.localdomain dnsmasq[568156]: read /var/lib/neutron/dhcp/790c70ca-fa91-4cc5-99ca-b6b51a5fe818/addn_hosts - 0 addresses
Oct 13 15:51:07 standalone.localdomain dnsmasq-dhcp[568156]: read /var/lib/neutron/dhcp/790c70ca-fa91-4cc5-99ca-b6b51a5fe818/host
Oct 13 15:51:07 standalone.localdomain dnsmasq-dhcp[568156]: read /var/lib/neutron/dhcp/790c70ca-fa91-4cc5-99ca-b6b51a5fe818/opts
Oct 13 15:51:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:07.785 496978 INFO neutron.agent.dhcp.agent [None req-2795dac7-eee8-4414-bfd7-b92a697778af - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:51:05Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd7880>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888fd7c70>], id=777af228-d533-4ab1-a194-43c0804141f3, ip_allocation=immediate, mac_address=fa:16:3e:c5:af:f0, name=tempest-TagsExtTest-435163121, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:51:03Z, description=, dns_domain=, id=790c70ca-fa91-4cc5-99ca-b6b51a5fe818, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TagsExtTest-test-network-1265699494, port_security_enabled=True, project_id=7a822832adef403fbc41daade998b757, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50328, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2580, status=ACTIVE, subnets=['47d5a3d0-7604-4e67-8ad6-0c975459591d'], tags=[], tenant_id=7a822832adef403fbc41daade998b757, updated_at=2025-10-13T15:51:04Z, vlan_transparent=None, network_id=790c70ca-fa91-4cc5-99ca-b6b51a5fe818, port_security_enabled=True, project_id=7a822832adef403fbc41daade998b757, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['3ec00e78-ca2d-4b09-bb14-e539ff2b515c'], standard_attr_id=2585, status=DOWN, tags=[], tenant_id=7a822832adef403fbc41daade998b757, updated_at=2025-10-13T15:51:05Z on network 790c70ca-fa91-4cc5-99ca-b6b51a5fe818
Oct 13 15:51:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4105: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:51:07 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:07.873 496978 INFO neutron.agent.dhcp.agent [None req-5a1d714f-703a-4a25-8c4d-3fc1fa3c75ff - - - - - -] DHCP configuration for ports {'6ca45637-84ca-469e-a6b5-d5ddfd091cce'} is completed
Oct 13 15:51:08 standalone.localdomain dnsmasq[568156]: read /var/lib/neutron/dhcp/790c70ca-fa91-4cc5-99ca-b6b51a5fe818/addn_hosts - 1 addresses
Oct 13 15:51:08 standalone.localdomain dnsmasq-dhcp[568156]: read /var/lib/neutron/dhcp/790c70ca-fa91-4cc5-99ca-b6b51a5fe818/host
Oct 13 15:51:08 standalone.localdomain dnsmasq-dhcp[568156]: read /var/lib/neutron/dhcp/790c70ca-fa91-4cc5-99ca-b6b51a5fe818/opts
Oct 13 15:51:08 standalone.localdomain podman[568175]: 2025-10-13 15:51:08.005852878 +0000 UTC m=+0.061625833 container kill bc978b2e6020002498a12e98f346f01df3816d95cde4568f1c7df80deadfda7a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-790c70ca-fa91-4cc5-99ca-b6b51a5fe818, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 15:51:08 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:08.285 496978 INFO neutron.agent.dhcp.agent [None req-fb092a42-9e54-400b-bffa-baf0d00163f2 - - - - - -] DHCP configuration for ports {'777af228-d533-4ab1-a194-43c0804141f3'} is completed
Oct 13 15:51:08 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:08.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:09 standalone.localdomain ceph-mon[29756]: pgmap v4105: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:09 standalone.localdomain neutron_sriov_agent[485987]: 2025-10-13 15:51:09.487 2 INFO neutron.agent.securitygroups_rpc [None req-eabb881a-e27f-462d-8e0d-cf12efa059a9 203d662b329f4b6da6f1531c1f8dcf7d 7a822832adef403fbc41daade998b757 - - default default] Security group member updated ['3ec00e78-ca2d-4b09-bb14-e539ff2b515c']
Oct 13 15:51:09 standalone.localdomain dnsmasq[568156]: read /var/lib/neutron/dhcp/790c70ca-fa91-4cc5-99ca-b6b51a5fe818/addn_hosts - 0 addresses
Oct 13 15:51:09 standalone.localdomain podman[568214]: 2025-10-13 15:51:09.738589994 +0000 UTC m=+0.068079413 container kill bc978b2e6020002498a12e98f346f01df3816d95cde4568f1c7df80deadfda7a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-790c70ca-fa91-4cc5-99ca-b6b51a5fe818, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:51:09 standalone.localdomain dnsmasq-dhcp[568156]: read /var/lib/neutron/dhcp/790c70ca-fa91-4cc5-99ca-b6b51a5fe818/host
Oct 13 15:51:09 standalone.localdomain dnsmasq-dhcp[568156]: read /var/lib/neutron/dhcp/790c70ca-fa91-4cc5-99ca-b6b51a5fe818/opts
Oct 13 15:51:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4106: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:09 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:51:09Z|00869|binding|INFO|Removing iface tap6db4044d-74 ovn-installed in OVS
Oct 13 15:51:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:09.995 378821 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 068e646e-93f1-466e-bb61-6192fd9a691a with type ""
Oct 13 15:51:09 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:51:09Z|00870|binding|INFO|Removing lport 6db4044d-7422-4cc4-b9cb-7fa149862d4f ovn-installed in OVS
Oct 13 15:51:09 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:09.997 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-790c70ca-fa91-4cc5-99ca-b6b51a5fe818', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-790c70ca-fa91-4cc5-99ca-b6b51a5fe818', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7a822832adef403fbc41daade998b757', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fccadc7-ea52-4ab6-8dc3-59c05740e921, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=6db4044d-7422-4cc4-b9cb-7fa149862d4f) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:51:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:09.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:10.000 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 6db4044d-7422-4cc4-b9cb-7fa149862d4f in datapath 790c70ca-fa91-4cc5-99ca-b6b51a5fe818 unbound from our chassis
Oct 13 15:51:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:10.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:10.004 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 790c70ca-fa91-4cc5-99ca-b6b51a5fe818, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:51:10 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:10.005 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[e483a41c-4985-43fe-9a95-07be89153f5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:51:10 standalone.localdomain dnsmasq[568156]: exiting on receipt of SIGTERM
Oct 13 15:51:10 standalone.localdomain podman[568252]: 2025-10-13 15:51:10.171655201 +0000 UTC m=+0.060424775 container kill bc978b2e6020002498a12e98f346f01df3816d95cde4568f1c7df80deadfda7a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-790c70ca-fa91-4cc5-99ca-b6b51a5fe818, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:51:10 standalone.localdomain systemd[1]: libpod-bc978b2e6020002498a12e98f346f01df3816d95cde4568f1c7df80deadfda7a.scope: Deactivated successfully.
Oct 13 15:51:10 standalone.localdomain podman[568264]: 2025-10-13 15:51:10.251890468 +0000 UTC m=+0.061854030 container died bc978b2e6020002498a12e98f346f01df3816d95cde4568f1c7df80deadfda7a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-790c70ca-fa91-4cc5-99ca-b6b51a5fe818, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:51:10 standalone.localdomain podman[568264]: 2025-10-13 15:51:10.291726408 +0000 UTC m=+0.101689930 container cleanup bc978b2e6020002498a12e98f346f01df3816d95cde4568f1c7df80deadfda7a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-790c70ca-fa91-4cc5-99ca-b6b51a5fe818, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:51:10 standalone.localdomain systemd[1]: libpod-conmon-bc978b2e6020002498a12e98f346f01df3816d95cde4568f1c7df80deadfda7a.scope: Deactivated successfully.
Oct 13 15:51:10 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:51:10Z|00871|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:51:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:10.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:10 standalone.localdomain podman[568266]: 2025-10-13 15:51:10.336279713 +0000 UTC m=+0.138881708 container remove bc978b2e6020002498a12e98f346f01df3816d95cde4568f1c7df80deadfda7a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-790c70ca-fa91-4cc5-99ca-b6b51a5fe818, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:51:10 standalone.localdomain kernel: device tap6db4044d-74 left promiscuous mode
Oct 13 15:51:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:10.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:10 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:10.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:10 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:10.385 496978 INFO neutron.agent.dhcp.agent [None req-f7fd4734-abed-4d82-9bf9-c6fe64a4e2d3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:51:10 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:10.385 496978 INFO neutron.agent.dhcp.agent [None req-f7fd4734-abed-4d82-9bf9-c6fe64a4e2d3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:51:10 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:51:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-bcf257ededa913f2e690956f21542d27e133dbc07365987cb58eccdf724d9efe-merged.mount: Deactivated successfully.
Oct 13 15:51:10 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc978b2e6020002498a12e98f346f01df3816d95cde4568f1c7df80deadfda7a-userdata-shm.mount: Deactivated successfully.
Oct 13 15:51:10 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d790c70ca\x2dfa91\x2d4cc5\x2d99ca\x2db6b51a5fe818.mount: Deactivated successfully.
Oct 13 15:51:10 standalone.localdomain podman[568294]: 2025-10-13 15:51:10.80497248 +0000 UTC m=+0.076010297 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 13 15:51:10 standalone.localdomain podman[568294]: 2025-10-13 15:51:10.843892132 +0000 UTC m=+0.114929949 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 13 15:51:10 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:51:11 standalone.localdomain ceph-mon[29756]: pgmap v4106: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:11 standalone.localdomain podman[467099]: time="2025-10-13T15:51:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:51:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:51:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:51:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:51:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49156 "" "Go-http-client/1.1"
Oct 13 15:51:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4107: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:12 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:51:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:12.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:12 standalone.localdomain podman[568319]: 2025-10-13 15:51:12.6518967 +0000 UTC m=+0.069057272 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, name=ubi9-minimal, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 13 15:51:12 standalone.localdomain podman[568319]: 2025-10-13 15:51:12.714382799 +0000 UTC m=+0.131543311 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.expose-services=, name=ubi9-minimal, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container)
Oct 13 15:51:12 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:51:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:51:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:51:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:51:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:51:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:51:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:51:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:51:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:51:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:51:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:51:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:51:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:51:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:51:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:13.230 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:51:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:13.230 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:51:13 standalone.localdomain ceph-mon[29756]: pgmap v4107: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:13.475 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:51:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:13.476 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:51:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:13.476 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:51:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:51:13 standalone.localdomain podman[568340]: 2025-10-13 15:51:13.817268942 +0000 UTC m=+0.082342982 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:51:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4108: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:13 standalone.localdomain podman[568340]: 2025-10-13 15:51:13.829327555 +0000 UTC m=+0.094401585 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 13 15:51:13 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:51:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:13.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:13.931 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:51:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:13.948 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:51:13 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:13.949 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:51:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:15.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:51:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:15.253 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:51:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:15.256 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:51:15 standalone.localdomain ceph-mon[29756]: pgmap v4108: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4109: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:16.227 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:51:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:16.228 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:51:16 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:16.441 496978 INFO neutron.agent.linux.ip_lib [None req-9a5efe99-126a-4354-bc56-a1a9e6fd7444 - - - - - -] Device tap14a64a74-bc cannot be used as it has no MAC address
Oct 13 15:51:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:16.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:16 standalone.localdomain kernel: device tap14a64a74-bc entered promiscuous mode
Oct 13 15:51:16 standalone.localdomain NetworkManager[5962]: <info>  [1760370676.5229] manager: (tap14a64a74-bc): new Generic device (/org/freedesktop/NetworkManager/Devices/146)
Oct 13 15:51:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:16.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:51:16Z|00872|binding|INFO|Claiming lport 14a64a74-bcba-41d1-b2e1-7f1f750e9e9e for this chassis.
Oct 13 15:51:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:51:16Z|00873|binding|INFO|14a64a74-bcba-41d1-b2e1-7f1f750e9e9e: Claiming unknown
Oct 13 15:51:16 standalone.localdomain systemd-udevd[568371]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 15:51:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:16.535 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-51cc093b-0f9a-4073-a115-0b0e5b97209c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51cc093b-0f9a-4073-a115-0b0e5b97209c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9866a2963b94af2add6208a27fe1027', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=41562e99-7bff-4a75-a285-fe8655f7e7b6, chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=14a64a74-bcba-41d1-b2e1-7f1f750e9e9e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:51:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:16.536 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 14a64a74-bcba-41d1-b2e1-7f1f750e9e9e in datapath 51cc093b-0f9a-4073-a115-0b0e5b97209c bound to our chassis
Oct 13 15:51:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:16.538 378821 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 51cc093b-0f9a-4073-a115-0b0e5b97209c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 13 15:51:16 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:16.538 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c40aff-1c9a-4163-9531-a29804457718]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:51:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap14a64a74-bc: No such device
Oct 13 15:51:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap14a64a74-bc: No such device
Oct 13 15:51:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:51:16Z|00874|binding|INFO|Setting lport 14a64a74-bcba-41d1-b2e1-7f1f750e9e9e ovn-installed in OVS
Oct 13 15:51:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:16.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:16 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:51:16Z|00875|binding|INFO|Setting lport 14a64a74-bcba-41d1-b2e1-7f1f750e9e9e up in Southbound
Oct 13 15:51:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:16.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap14a64a74-bc: No such device
Oct 13 15:51:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:16.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap14a64a74-bc: No such device
Oct 13 15:51:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap14a64a74-bc: No such device
Oct 13 15:51:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap14a64a74-bc: No such device
Oct 13 15:51:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap14a64a74-bc: No such device
Oct 13 15:51:16 standalone.localdomain virtnodedevd[457159]: ethtool ioctl error on tap14a64a74-bc: No such device
Oct 13 15:51:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:16.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.251 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.252 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.252 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.253 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.253 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:51:17 standalone.localdomain ceph-mon[29756]: pgmap v4109: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:17 standalone.localdomain podman[568462]: 
Oct 13 15:51:17 standalone.localdomain podman[568462]: 2025-10-13 15:51:17.564086708 +0000 UTC m=+0.076974417 container create eb0f5a3e69ab439f8bdeed78583e3e513ad6646ab83eea975ecd05dadad1899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51cc093b-0f9a-4073-a115-0b0e5b97209c, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:51:17 standalone.localdomain podman[568462]: 2025-10-13 15:51:17.522780593 +0000 UTC m=+0.035668412 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:17 standalone.localdomain systemd[1]: Started libpod-conmon-eb0f5a3e69ab439f8bdeed78583e3e513ad6646ab83eea975ecd05dadad1899e.scope.
Oct 13 15:51:17 standalone.localdomain systemd[1]: Started libcrun container.
Oct 13 15:51:17 standalone.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76d7a91aefc8000434323aaa57e318ef94e26f8e79c157aefe68bcfb641f23b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 15:51:17 standalone.localdomain podman[568462]: 2025-10-13 15:51:17.669444151 +0000 UTC m=+0.182331860 container init eb0f5a3e69ab439f8bdeed78583e3e513ad6646ab83eea975ecd05dadad1899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51cc093b-0f9a-4073-a115-0b0e5b97209c, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:51:17 standalone.localdomain podman[568462]: 2025-10-13 15:51:17.675816657 +0000 UTC m=+0.188704366 container start eb0f5a3e69ab439f8bdeed78583e3e513ad6646ab83eea975ecd05dadad1899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51cc093b-0f9a-4073-a115-0b0e5b97209c, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 13 15:51:17 standalone.localdomain dnsmasq[568480]: started, version 2.85 cachesize 150
Oct 13 15:51:17 standalone.localdomain dnsmasq[568480]: DNS service limited to local subnets
Oct 13 15:51:17 standalone.localdomain dnsmasq[568480]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 13 15:51:17 standalone.localdomain dnsmasq[568480]: warning: no upstream servers configured
Oct 13 15:51:17 standalone.localdomain dnsmasq-dhcp[568480]: DHCP, static leases only on 10.100.0.0, lease time 1d
Oct 13 15:51:17 standalone.localdomain dnsmasq[568480]: read /var/lib/neutron/dhcp/51cc093b-0f9a-4073-a115-0b0e5b97209c/addn_hosts - 0 addresses
Oct 13 15:51:17 standalone.localdomain dnsmasq-dhcp[568480]: read /var/lib/neutron/dhcp/51cc093b-0f9a-4073-a115-0b0e5b97209c/host
Oct 13 15:51:17 standalone.localdomain dnsmasq-dhcp[568480]: read /var/lib/neutron/dhcp/51cc093b-0f9a-4073-a115-0b0e5b97209c/opts
Oct 13 15:51:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:51:17 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/374061877' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.735 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.797 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.798 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.798 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.802 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.803 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:51:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4110: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:51:17 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:17.883 496978 INFO neutron.agent.dhcp.agent [None req-b6f87428-55f4-473b-b126-1563eda14e7d - - - - - -] DHCP configuration for ports {'cab3ce82-b746-4b76-8dc0-ff4e76d08c12'} is completed
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.978 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.979 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9139MB free_disk=6.9495849609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.979 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:51:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:17.979 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:51:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:18.065 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:51:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:18.066 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:51:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:18.066 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:51:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:18.067 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:51:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:18.131 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:51:17Z, description=, device_id=d3fa6d11-c0a2-43a6-9299-9a62a1cc76b7, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e7b700>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1888e7b070>], id=a16087af-2dc9-477e-9cb3-7063547b4629, ip_allocation=immediate, mac_address=fa:16:3e:d7:be:ca, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T14:15:33Z, description=, dns_domain=, id=a12f1166-9c4d-4d97-bc78-657c05e7af68, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1500, name=public, port_security_enabled=True, project_id=e44641a80bcb466cb3dd688e48b72d8e, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['574149d7-d1e5-4575-8c05-cfa5cfe683e5'], tags=[], tenant_id=e44641a80bcb466cb3dd688e48b72d8e, updated_at=2025-10-13T14:15:39Z, vlan_transparent=None, network_id=a12f1166-9c4d-4d97-bc78-657c05e7af68, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2600, status=DOWN, tags=[], tenant_id=, updated_at=2025-10-13T15:51:18Z on network a12f1166-9c4d-4d97-bc78-657c05e7af68
Oct 13 15:51:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:18.138 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:51:18 standalone.localdomain podman[568518]: 2025-10-13 15:51:18.367478227 +0000 UTC m=+0.056638739 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:51:18 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 2 addresses
Oct 13 15:51:18 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:51:18 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:51:18 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/374061877' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:51:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:51:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2014328523' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:51:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:18.602 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:51:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:18.609 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:51:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:18.660 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:51:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:18.663 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:51:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:18.663 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:51:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:51:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2729350715' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:51:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:51:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2729350715' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:51:18 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:18.824 496978 INFO neutron.agent.dhcp.agent [None req-a05559ba-5179-4b8e-8a95-0fc4b3990bf1 - - - - - -] DHCP configuration for ports {'a16087af-2dc9-477e-9cb3-7063547b4629'} is completed
Oct 13 15:51:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:18.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:18.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:19.360 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:51:19Z, description=, device_id=d3fa6d11-c0a2-43a6-9299-9a62a1cc76b7, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890b7220>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18890b77f0>], id=cfb81b44-738c-4251-b761-7d8f720246cd, ip_allocation=immediate, mac_address=fa:16:3e:0f:01:7e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:51:14Z, description=, dns_domain=, id=51cc093b-0f9a-4073-a115-0b0e5b97209c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPITest-1267651952-network, port_security_enabled=True, project_id=d9866a2963b94af2add6208a27fe1027, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53284, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2594, status=ACTIVE, subnets=['7ccc3dcd-fbc0-401d-9c17-f37d1e73990d'], tags=[], tenant_id=d9866a2963b94af2add6208a27fe1027, updated_at=2025-10-13T15:51:15Z, vlan_transparent=None, network_id=51cc093b-0f9a-4073-a115-0b0e5b97209c, port_security_enabled=False, project_id=d9866a2963b94af2add6208a27fe1027, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2601, status=DOWN, tags=[], tenant_id=d9866a2963b94af2add6208a27fe1027, updated_at=2025-10-13T15:51:19Z on network 51cc093b-0f9a-4073-a115-0b0e5b97209c
Oct 13 15:51:19 standalone.localdomain ceph-mon[29756]: pgmap v4110: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2014328523' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:51:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2729350715' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:51:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2729350715' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:51:19 standalone.localdomain dnsmasq[568480]: read /var/lib/neutron/dhcp/51cc093b-0f9a-4073-a115-0b0e5b97209c/addn_hosts - 1 addresses
Oct 13 15:51:19 standalone.localdomain dnsmasq-dhcp[568480]: read /var/lib/neutron/dhcp/51cc093b-0f9a-4073-a115-0b0e5b97209c/host
Oct 13 15:51:19 standalone.localdomain podman[568559]: 2025-10-13 15:51:19.581593094 +0000 UTC m=+0.054241885 container kill eb0f5a3e69ab439f8bdeed78583e3e513ad6646ab83eea975ecd05dadad1899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51cc093b-0f9a-4073-a115-0b0e5b97209c, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:51:19 standalone.localdomain dnsmasq-dhcp[568480]: read /var/lib/neutron/dhcp/51cc093b-0f9a-4073-a115-0b0e5b97209c/opts
Oct 13 15:51:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4111: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:19 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:19.878 496978 INFO neutron.agent.dhcp.agent [None req-cec4d478-dc29-4fb5-951b-d62747c77a8f - - - - - -] DHCP configuration for ports {'cfb81b44-738c-4251-b761-7d8f720246cd'} is completed
Oct 13 15:51:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:20.124 496978 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-10-13T15:51:19Z, description=, device_id=d3fa6d11-c0a2-43a6-9299-9a62a1cc76b7, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891f47c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f18891f4cd0>], id=cfb81b44-738c-4251-b761-7d8f720246cd, ip_allocation=immediate, mac_address=fa:16:3e:0f:01:7e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-10-13T15:51:14Z, description=, dns_domain=, id=51cc093b-0f9a-4073-a115-0b0e5b97209c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPITest-1267651952-network, port_security_enabled=True, project_id=d9866a2963b94af2add6208a27fe1027, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53284, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2594, status=ACTIVE, subnets=['7ccc3dcd-fbc0-401d-9c17-f37d1e73990d'], tags=[], tenant_id=d9866a2963b94af2add6208a27fe1027, updated_at=2025-10-13T15:51:15Z, vlan_transparent=None, network_id=51cc093b-0f9a-4073-a115-0b0e5b97209c, port_security_enabled=False, project_id=d9866a2963b94af2add6208a27fe1027, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2601, status=DOWN, tags=[], tenant_id=d9866a2963b94af2add6208a27fe1027, updated_at=2025-10-13T15:51:19Z on network 51cc093b-0f9a-4073-a115-0b0e5b97209c
Oct 13 15:51:20 standalone.localdomain dnsmasq[568480]: read /var/lib/neutron/dhcp/51cc093b-0f9a-4073-a115-0b0e5b97209c/addn_hosts - 1 addresses
Oct 13 15:51:20 standalone.localdomain podman[568598]: 2025-10-13 15:51:20.34925826 +0000 UTC m=+0.063626936 container kill eb0f5a3e69ab439f8bdeed78583e3e513ad6646ab83eea975ecd05dadad1899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51cc093b-0f9a-4073-a115-0b0e5b97209c, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 15:51:20 standalone.localdomain dnsmasq-dhcp[568480]: read /var/lib/neutron/dhcp/51cc093b-0f9a-4073-a115-0b0e5b97209c/host
Oct 13 15:51:20 standalone.localdomain dnsmasq-dhcp[568480]: read /var/lib/neutron/dhcp/51cc093b-0f9a-4073-a115-0b0e5b97209c/opts
Oct 13 15:51:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:51:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:51:20 standalone.localdomain podman[568612]: 2025-10-13 15:51:20.489725495 +0000 UTC m=+0.108448728 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_compute)
Oct 13 15:51:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:51:20 standalone.localdomain podman[568635]: 2025-10-13 15:51:20.571232571 +0000 UTC m=+0.080010640 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:51:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:51:20 standalone.localdomain podman[568635]: 2025-10-13 15:51:20.607933344 +0000 UTC m=+0.116711493 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:51:20 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:20.614 496978 INFO neutron.agent.dhcp.agent [None req-ee5775fe-41e3-4fc9-aa00-421b51440ee7 - - - - - -] DHCP configuration for ports {'cfb81b44-738c-4251-b761-7d8f720246cd'} is completed
Oct 13 15:51:20 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:51:20 standalone.localdomain podman[568612]: 2025-10-13 15:51:20.628793729 +0000 UTC m=+0.247516931 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:51:20 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:51:20 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:51:20 standalone.localdomain systemd[1]: tmp-crun.c1RtF2.mount: Deactivated successfully.
Oct 13 15:51:20 standalone.localdomain podman[568655]: 2025-10-13 15:51:20.750843906 +0000 UTC m=+0.172293429 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, version=17.1.9, tcib_managed=true, config_id=tripleo_step4, container_name=swift_container_server, io.buildah.version=1.33.12, release=1, vcs-type=git, io.openshift.expose-services=, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, batch=17.1_20250721.1, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:51:20 standalone.localdomain podman[568661]: 2025-10-13 15:51:20.818908857 +0000 UTC m=+0.237966926 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, name=rhosp17/openstack-swift-account, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-account-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.buildah.version=1.33.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, release=1, build-date=2025-07-21T16:11:22, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, batch=17.1_20250721.1, container_name=swift_account_server, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:51:20 standalone.localdomain podman[568682]: 2025-10-13 15:51:20.911746663 +0000 UTC m=+0.183693402 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, batch=17.1_20250721.1, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 swift-object, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, release=1, version=17.1.9)
Oct 13 15:51:20 standalone.localdomain podman[568655]: 2025-10-13 15:51:20.939719726 +0000 UTC m=+0.361169229 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, version=17.1.9, container_name=swift_container_server, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-swift-container, com.redhat.component=openstack-swift-container-container, description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, batch=17.1_20250721.1, io.buildah.version=1.33.12, release=1, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64)
Oct 13 15:51:20 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:51:20 standalone.localdomain podman[568661]: 2025-10-13 15:51:20.990781812 +0000 UTC m=+0.409839861 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, container_name=swift_account_server, io.openshift.expose-services=, build-date=2025-07-21T16:11:22, version=17.1.9, release=1, distribution-scope=public, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, tcib_managed=true, name=rhosp17/openstack-swift-account, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1)
Oct 13 15:51:21 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:51:21 standalone.localdomain podman[568682]: 2025-10-13 15:51:21.066669935 +0000 UTC m=+0.338616634 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, vendor=Red Hat, Inc., io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, summary=Red Hat OpenStack Platform 17.1 swift-object, release=1, com.redhat.component=openstack-swift-object-container, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git, version=17.1.9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=swift_object_server, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, build-date=2025-07-21T14:56:28, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, distribution-scope=public, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, name=rhosp17/openstack-swift-object, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:51:21 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:51:21 standalone.localdomain ceph-mon[29756]: pgmap v4111: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:21 standalone.localdomain systemd[1]: tmp-crun.n7Lpop.mount: Deactivated successfully.
Oct 13 15:51:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:21.660 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:51:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:21.661 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:51:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:21.662 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:51:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:21.662 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:51:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4112: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:22.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:51:23
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', 'vms', 'manila_metadata', 'manila_data', 'volumes', '.mgr', 'images']
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:51:23 standalone.localdomain ceph-mon[29756]: pgmap v4112: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6692 DF PROTO=TCP SPT=34068 DPT=9102 SEQ=3623894562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD5CC3F0000000001030307) 
Oct 13 15:51:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4113: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:23 standalone.localdomain podman[568743]: 2025-10-13 15:51:23.825097141 +0000 UTC m=+0.080640330 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:51:23 standalone.localdomain podman[568743]: 2025-10-13 15:51:23.835894014 +0000 UTC m=+0.091437253 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:51:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:51:23 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:51:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:23.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:24 standalone.localdomain ceph-mgr[29999]: client.0 ms_handle_reset on v2:172.18.0.100:6800/1677275897
Oct 13 15:51:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6693 DF PROTO=TCP SPT=34068 DPT=9102 SEQ=3623894562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD5D0360000000001030307) 
Oct 13 15:51:24 standalone.localdomain dnsmasq[568480]: read /var/lib/neutron/dhcp/51cc093b-0f9a-4073-a115-0b0e5b97209c/addn_hosts - 0 addresses
Oct 13 15:51:24 standalone.localdomain podman[568780]: 2025-10-13 15:51:24.63631139 +0000 UTC m=+0.061080895 container kill eb0f5a3e69ab439f8bdeed78583e3e513ad6646ab83eea975ecd05dadad1899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51cc093b-0f9a-4073-a115-0b0e5b97209c, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:51:24 standalone.localdomain dnsmasq-dhcp[568480]: read /var/lib/neutron/dhcp/51cc093b-0f9a-4073-a115-0b0e5b97209c/host
Oct 13 15:51:24 standalone.localdomain dnsmasq-dhcp[568480]: read /var/lib/neutron/dhcp/51cc093b-0f9a-4073-a115-0b0e5b97209c/opts
Oct 13 15:51:24 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:51:24Z|00876|binding|INFO|Releasing lport 14a64a74-bcba-41d1-b2e1-7f1f750e9e9e from this chassis (sb_readonly=0)
Oct 13 15:51:24 standalone.localdomain kernel: device tap14a64a74-bc left promiscuous mode
Oct 13 15:51:24 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:51:24Z|00877|binding|INFO|Setting lport 14a64a74-bcba-41d1-b2e1-7f1f750e9e9e down in Southbound
Oct 13 15:51:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:24.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:24.845 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'standalone.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpbf48e912-6a48-578a-9570-8adccffa1a86-51cc093b-0f9a-4073-a115-0b0e5b97209c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51cc093b-0f9a-4073-a115-0b0e5b97209c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9866a2963b94af2add6208a27fe1027', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'standalone.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=41562e99-7bff-4a75-a285-fe8655f7e7b6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>], logical_port=14a64a74-bcba-41d1-b2e1-7f1f750e9e9e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f7216c0d550>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:51:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:24.846 378821 INFO neutron.agent.ovn.metadata.agent [-] Port 14a64a74-bcba-41d1-b2e1-7f1f750e9e9e in datapath 51cc093b-0f9a-4073-a115-0b0e5b97209c unbound from our chassis
Oct 13 15:51:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:24.849 378821 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 51cc093b-0f9a-4073-a115-0b0e5b97209c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 13 15:51:24 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:24.850 378925 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e46b17-d373-4153-9670-6541b227ad7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 15:51:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:24.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:25 standalone.localdomain ceph-mon[29756]: pgmap v4113: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:25.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:25 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:25.545 378821 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'b6:da:53', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'c6:b9:d4:9e:30:db'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 15:51:25 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:25.547 378821 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 13 15:51:25 standalone.localdomain dnsmasq[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/addn_hosts - 1 addresses
Oct 13 15:51:25 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/host
Oct 13 15:51:25 standalone.localdomain podman[568821]: 2025-10-13 15:51:25.719409003 +0000 UTC m=+0.050848930 container kill 47eb45e37b93054c5eecb19bba5f60cf9addfb53f0fe6d4a88c1df1dfca4ee5e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a12f1166-9c4d-4d97-bc78-657c05e7af68, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:51:25 standalone.localdomain dnsmasq-dhcp[497662]: read /var/lib/neutron/dhcp/a12f1166-9c4d-4d97-bc78-657c05e7af68/opts
Oct 13 15:51:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4114: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:25 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:51:25Z|00878|binding|INFO|Releasing lport 52f81088-6dd1-4b23-992e-fdefd91123e1 from this chassis (sb_readonly=0)
Oct 13 15:51:25 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:25.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:26 standalone.localdomain ceph-mon[29756]: pgmap v4114: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6694 DF PROTO=TCP SPT=34068 DPT=9102 SEQ=3623894562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD5D8370000000001030307) 
Oct 13 15:51:27 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:27.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4115: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:51:28 standalone.localdomain systemd[1]: tmp-crun.gfEVee.mount: Deactivated successfully.
Oct 13 15:51:28 standalone.localdomain podman[568860]: 2025-10-13 15:51:28.106730984 +0000 UTC m=+0.058128815 container kill eb0f5a3e69ab439f8bdeed78583e3e513ad6646ab83eea975ecd05dadad1899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51cc093b-0f9a-4073-a115-0b0e5b97209c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:51:28 standalone.localdomain dnsmasq[568480]: exiting on receipt of SIGTERM
Oct 13 15:51:28 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:51:28 standalone.localdomain systemd[1]: libpod-eb0f5a3e69ab439f8bdeed78583e3e513ad6646ab83eea975ecd05dadad1899e.scope: Deactivated successfully.
Oct 13 15:51:28 standalone.localdomain podman[568876]: 2025-10-13 15:51:28.192153041 +0000 UTC m=+0.062395947 container died eb0f5a3e69ab439f8bdeed78583e3e513ad6646ab83eea975ecd05dadad1899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51cc093b-0f9a-4073-a115-0b0e5b97209c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 15:51:28 standalone.localdomain podman[568877]: 2025-10-13 15:51:28.197132665 +0000 UTC m=+0.067604179 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct 13 15:51:28 standalone.localdomain podman[568876]: 2025-10-13 15:51:28.234981273 +0000 UTC m=+0.105224159 container remove eb0f5a3e69ab439f8bdeed78583e3e513ad6646ab83eea975ecd05dadad1899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51cc093b-0f9a-4073-a115-0b0e5b97209c, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 15:51:28 standalone.localdomain podman[568877]: 2025-10-13 15:51:28.260278414 +0000 UTC m=+0.130749948 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:51:28 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:51:28 standalone.localdomain systemd[1]: libpod-conmon-eb0f5a3e69ab439f8bdeed78583e3e513ad6646ab83eea975ecd05dadad1899e.scope: Deactivated successfully.
Oct 13 15:51:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:28.554 496978 INFO neutron.agent.dhcp.agent [None req-ec2239ff-3b4e-4d38-8a32-8f4f2ef3e624 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:51:28 standalone.localdomain neutron_dhcp_agent[496972]: 2025-10-13 15:51:28.679 496978 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Oct 13 15:51:28 standalone.localdomain ceph-mon[29756]: pgmap v4115: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:28.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay-c76d7a91aefc8000434323aaa57e318ef94e26f8e79c157aefe68bcfb641f23b-merged.mount: Deactivated successfully.
Oct 13 15:51:29 standalone.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb0f5a3e69ab439f8bdeed78583e3e513ad6646ab83eea975ecd05dadad1899e-userdata-shm.mount: Deactivated successfully.
Oct 13 15:51:29 standalone.localdomain systemd[1]: run-netns-qdhcp\x2d51cc093b\x2d0f9a\x2d4073\x2da115\x2d0b0e5b97209c.mount: Deactivated successfully.
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006647822445561139 of space, bias 1.0, pg target 0.6647822445561139 quantized to 32 (current 32)
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.012942315745393635 of space, bias 1.0, pg target 1.2942315745393635 quantized to 32 (current 32)
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.29775832286432163 quantized to 32 (current 32)
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 13 15:51:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4116: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6695 DF PROTO=TCP SPT=34068 DPT=9102 SEQ=3623894562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD5E7F60000000001030307) 
Oct 13 15:51:30 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:51:30 standalone.localdomain podman[568925]: 2025-10-13 15:51:30.806083406 +0000 UTC m=+0.074354836 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 13 15:51:30 standalone.localdomain podman[568925]: 2025-10-13 15:51:30.841368905 +0000 UTC m=+0.109640275 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 15:51:30 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:51:30 standalone.localdomain ceph-mon[29756]: pgmap v4116: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4117: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:32.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:51:32 standalone.localdomain ceph-mon[29756]: pgmap v4117: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4118: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:33.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:34 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:51:34.549 378821 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b9e3f7-f5c9-4d48-9eca-66995f72d494, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 15:51:34 standalone.localdomain ceph-mon[29756]: pgmap v4118: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4119: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:36 standalone.localdomain ceph-mon[29756]: pgmap v4119: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:37.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4120: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:51:38 standalone.localdomain ceph-mon[29756]: pgmap v4120: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:38.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4121: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:40 standalone.localdomain ceph-mon[29756]: pgmap v4121: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:41 standalone.localdomain podman[467099]: time="2025-10-13T15:51:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:51:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:51:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:51:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:51:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49162 "" "Go-http-client/1.1"
Oct 13 15:51:41 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:51:41 standalone.localdomain podman[568944]: 2025-10-13 15:51:41.827589113 +0000 UTC m=+0.087154211 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 15:51:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4122: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:41 standalone.localdomain podman[568944]: 2025-10-13 15:51:41.931920633 +0000 UTC m=+0.191485681 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 15:51:41 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:51:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:42.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:51:42 standalone.localdomain ceph-mon[29756]: pgmap v4122: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:51:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:51:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:51:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:51:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:51:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:51:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:51:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:51:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:51:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:51:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:51:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:51:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:51:43 standalone.localdomain podman[568969]: 2025-10-13 15:51:43.821716346 +0000 UTC m=+0.079816534 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vcs-type=git, config_id=edpm, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 13 15:51:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4123: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:43 standalone.localdomain podman[568969]: 2025-10-13 15:51:43.836707099 +0000 UTC m=+0.094807277 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal)
Oct 13 15:51:43 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:51:43 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:51:43 standalone.localdomain podman[568989]: 2025-10-13 15:51:43.942479724 +0000 UTC m=+0.068706711 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 15:51:43 standalone.localdomain podman[568989]: 2025-10-13 15:51:43.952974428 +0000 UTC m=+0.079201475 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Oct 13 15:51:43 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:51:43 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:43.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:44 standalone.localdomain ceph-mon[29756]: pgmap v4123: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4124: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:46 standalone.localdomain ceph-mon[29756]: pgmap v4124: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:47.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4125: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:51:48 standalone.localdomain ceph-mon[29756]: pgmap v4125: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:49.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4126: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:51:50 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:51:50 standalone.localdomain systemd[1]: tmp-crun.qr6GQA.mount: Deactivated successfully.
Oct 13 15:51:50 standalone.localdomain podman[569009]: 2025-10-13 15:51:50.798756622 +0000 UTC m=+0.074045877 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:51:50 standalone.localdomain podman[569009]: 2025-10-13 15:51:50.806578572 +0000 UTC m=+0.081867817 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:51:50 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:51:50 standalone.localdomain podman[569010]: 2025-10-13 15:51:50.859006161 +0000 UTC m=+0.130727067 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:51:50 standalone.localdomain podman[569010]: 2025-10-13 15:51:50.87290692 +0000 UTC m=+0.144627856 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:51:50 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:51:50 standalone.localdomain ceph-mon[29756]: pgmap v4126: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:51:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:51:51 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:51:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4127: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:51 standalone.localdomain podman[569054]: 2025-10-13 15:51:51.84089321 +0000 UTC m=+0.105160077 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, batch=17.1_20250721.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, com.redhat.component=openstack-swift-container-container, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container, release=1, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, container_name=swift_container_server, name=rhosp17/openstack-swift-container, version=17.1.9, build-date=2025-07-21T15:54:32)
Oct 13 15:51:51 standalone.localdomain podman[569053]: 2025-10-13 15:51:51.888081986 +0000 UTC m=+0.156813171 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, release=1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, version=17.1.9, container_name=swift_object_server, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-type=git, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-object, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, summary=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:51:51 standalone.localdomain systemd[1]: tmp-crun.Gkoupn.mount: Deactivated successfully.
Oct 13 15:51:51 standalone.localdomain podman[569055]: 2025-10-13 15:51:51.955959692 +0000 UTC m=+0.215613136 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, architecture=x86_64, batch=17.1_20250721.1, name=rhosp17/openstack-swift-account, container_name=swift_account_server, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, release=1, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.component=openstack-swift-account-container, build-date=2025-07-21T16:11:22, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, version=17.1.9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account)
Oct 13 15:51:52 standalone.localdomain podman[569054]: 2025-10-13 15:51:52.07638942 +0000 UTC m=+0.340656257 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, name=rhosp17/openstack-swift-container, version=17.1.9, vendor=Red Hat, Inc., com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, release=1, build-date=2025-07-21T15:54:32, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20250721.1, vcs-type=git)
Oct 13 15:51:52 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:51:52 standalone.localdomain podman[569053]: 2025-10-13 15:51:52.13019839 +0000 UTC m=+0.398929625 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, tcib_managed=true, version=17.1.9, batch=17.1_20250721.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, vendor=Red Hat, Inc., container_name=swift_object_server, release=1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1, build-date=2025-07-21T14:56:28, distribution-scope=public, com.redhat.component=openstack-swift-object-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12)
Oct 13 15:51:52 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:51:52 standalone.localdomain podman[569055]: 2025-10-13 15:51:52.182754753 +0000 UTC m=+0.442408217 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vcs-type=git, container_name=swift_account_server, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, managed_by=tripleo_ansible, build-date=2025-07-21T16:11:22, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 swift-account, release=1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, batch=17.1_20250721.1, name=rhosp17/openstack-swift-account, config_id=tripleo_step4, io.buildah.version=1.33.12, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-swift-account-container)
Oct 13 15:51:52 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:51:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:52.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:51:52 standalone.localdomain ceph-mon[29756]: pgmap v4127: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:51:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:51:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53514 DF PROTO=TCP SPT=60834 DPT=9102 SEQ=2868609770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD6416F0000000001030307) 
Oct 13 15:51:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4128: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:54.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53515 DF PROTO=TCP SPT=60834 DPT=9102 SEQ=2868609770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD645760000000001030307) 
Oct 13 15:51:54 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:51:54 standalone.localdomain podman[569137]: 2025-10-13 15:51:54.829253513 +0000 UTC m=+0.088882214 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 15:51:54 standalone.localdomain podman[569137]: 2025-10-13 15:51:54.861992174 +0000 UTC m=+0.121620845 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:51:54 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:51:55 standalone.localdomain ceph-mon[29756]: pgmap v4128: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4129: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:56 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=63351 DF PROTO=TCP SPT=41544 DPT=19885 SEQ=1693254943 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09B395570000000001030307) 
Oct 13 15:51:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53516 DF PROTO=TCP SPT=60834 DPT=9102 SEQ=2868609770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD64D770000000001030307) 
Oct 13 15:51:56 standalone.localdomain sshd[569157]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:51:56 standalone.localdomain sshd[569157]: Accepted publickey for zuul from 38.102.83.114 port 49416 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:51:56 standalone.localdomain systemd[1]: Starting User Manager for UID 1000...
Oct 13 15:51:56 standalone.localdomain systemd-logind[45629]: New session 303 of user zuul.
Oct 13 15:51:56 standalone.localdomain systemd[569160]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by (uid=0)
Oct 13 15:51:57 standalone.localdomain ceph-mon[29756]: pgmap v4129: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:57 standalone.localdomain systemd[569160]: Queued start job for default target Main User Target.
Oct 13 15:51:57 standalone.localdomain systemd[569160]: Created slice User Application Slice.
Oct 13 15:51:57 standalone.localdomain systemd[569160]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 13 15:51:57 standalone.localdomain systemd[569160]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 15:51:57 standalone.localdomain systemd[569160]: Reached target Paths.
Oct 13 15:51:57 standalone.localdomain systemd[569160]: Reached target Timers.
Oct 13 15:51:57 standalone.localdomain systemd[569160]: Starting D-Bus User Message Bus Socket...
Oct 13 15:51:57 standalone.localdomain systemd[569160]: Starting Create User's Volatile Files and Directories...
Oct 13 15:51:57 standalone.localdomain systemd[569160]: Finished Create User's Volatile Files and Directories.
Oct 13 15:51:57 standalone.localdomain systemd[569160]: Listening on D-Bus User Message Bus Socket.
Oct 13 15:51:57 standalone.localdomain systemd[569160]: Reached target Sockets.
Oct 13 15:51:57 standalone.localdomain systemd[569160]: Reached target Basic System.
Oct 13 15:51:57 standalone.localdomain systemd[569160]: Reached target Main User Target.
Oct 13 15:51:57 standalone.localdomain systemd[569160]: Startup finished in 182ms.
Oct 13 15:51:57 standalone.localdomain systemd[1]: Started User Manager for UID 1000.
Oct 13 15:51:57 standalone.localdomain systemd[1]: Started Session 303 of User zuul.
Oct 13 15:51:57 standalone.localdomain sshd[569157]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Oct 13 15:51:57 standalone.localdomain sudo[569192]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-brkzjlzcdwibdoxwdbjhxtymmpqnrrvz ; /usr/bin/python3
Oct 13 15:51:57 standalone.localdomain sudo[569192]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Oct 13 15:51:57 standalone.localdomain python3[569194]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=fa163ec2-ffbe-1130-f2bb-000000000006-1-standalone zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 15:51:57 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=63352 DF PROTO=TCP SPT=41544 DPT=19885 SEQ=1693254943 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09B399710000000001030307) 
Oct 13 15:51:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:57.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4130: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:51:57 standalone.localdomain sudo[569192]: pam_unix(sudo:session): session closed for user root
Oct 13 15:51:58 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:51:58 standalone.localdomain podman[569197]: 2025-10-13 15:51:58.838648694 +0000 UTC m=+0.099391100 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:51:58 standalone.localdomain ovn_controller[373136]: 2025-10-13T15:51:58Z|00879|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Oct 13 15:51:58 standalone.localdomain podman[569197]: 2025-10-13 15:51:58.856754762 +0000 UTC m=+0.117497128 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:51:58 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:51:59 standalone.localdomain ceph-mon[29756]: pgmap v4130: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:51:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:51:59.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:51:59 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=63353 DF PROTO=TCP SPT=41544 DPT=19885 SEQ=1693254943 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09B3A1710000000001030307) 
Oct 13 15:51:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4131: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53517 DF PROTO=TCP SPT=60834 DPT=9102 SEQ=2868609770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD65D370000000001030307) 
Oct 13 15:52:01 standalone.localdomain ceph-mon[29756]: pgmap v4131: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:01 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:52:01 standalone.localdomain sshd[569157]: pam_unix(sshd:session): session closed for user zuul
Oct 13 15:52:01 standalone.localdomain systemd[1]: session-303.scope: Deactivated successfully.
Oct 13 15:52:01 standalone.localdomain systemd-logind[45629]: Session 303 logged out. Waiting for processes to exit.
Oct 13 15:52:01 standalone.localdomain systemd-logind[45629]: Removed session 303.
Oct 13 15:52:01 standalone.localdomain podman[569222]: 2025-10-13 15:52:01.828001688 +0000 UTC m=+0.093185918 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 13 15:52:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4132: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:01 standalone.localdomain podman[569222]: 2025-10-13 15:52:01.838018367 +0000 UTC m=+0.103202617 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 13 15:52:01 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:52:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:02.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:52:03 standalone.localdomain ceph-mon[29756]: pgmap v4132: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:03 standalone.localdomain sudo[569240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:52:03 standalone.localdomain sudo[569240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:52:03 standalone.localdomain sudo[569240]: pam_unix(sudo:session): session closed for user root
Oct 13 15:52:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4133: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:03 standalone.localdomain sudo[569258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:52:03 standalone.localdomain sudo[569258]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:52:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:04.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:04 standalone.localdomain sudo[569258]: pam_unix(sudo:session): session closed for user root
Oct 13 15:52:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:52:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:52:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:52:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:52:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:52:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:52:04 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:52:04 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:52:04 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev ecb8f88e-993b-4aca-ba90-4072a8563aa6 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:52:04 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev ecb8f88e-993b-4aca-ba90-4072a8563aa6 (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:52:04 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event ecb8f88e-993b-4aca-ba90-4072a8563aa6 (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:52:04 standalone.localdomain sudo[569307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:52:04 standalone.localdomain sudo[569307]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:52:04 standalone.localdomain sudo[569307]: pam_unix(sudo:session): session closed for user root
Oct 13 15:52:05 standalone.localdomain ceph-mon[29756]: pgmap v4133: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:05 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:52:05 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:52:05 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:52:05 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:52:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:05.229 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:52:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:05.230 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 13 15:52:05 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:05.258 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 13 15:52:05 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:52:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:52:05 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:52:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4134: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:06 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:06.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:52:06 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:52:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:52:06.972 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:52:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:52:06.973 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:52:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:52:06.974 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:52:07 standalone.localdomain sshd[569325]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:52:07 standalone.localdomain ceph-mon[29756]: pgmap v4134: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:07 standalone.localdomain sshd[569325]: Accepted publickey for root from 192.168.122.11 port 43148 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:52:07 standalone.localdomain systemd[1]: Created slice User Slice of UID 0.
Oct 13 15:52:07 standalone.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 13 15:52:07 standalone.localdomain systemd-logind[45629]: New session 305 of user root.
Oct 13 15:52:07 standalone.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 13 15:52:07 standalone.localdomain systemd[1]: Starting User Manager for UID 0...
Oct 13 15:52:07 standalone.localdomain systemd[569329]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:52:07 standalone.localdomain systemd[569329]: Queued start job for default target Main User Target.
Oct 13 15:52:07 standalone.localdomain systemd[569329]: Created slice User Application Slice.
Oct 13 15:52:07 standalone.localdomain systemd[569329]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 13 15:52:07 standalone.localdomain systemd[569329]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 15:52:07 standalone.localdomain systemd[569329]: Reached target Paths.
Oct 13 15:52:07 standalone.localdomain systemd[569329]: Reached target Timers.
Oct 13 15:52:07 standalone.localdomain systemd[569329]: Starting D-Bus User Message Bus Socket...
Oct 13 15:52:07 standalone.localdomain systemd[569329]: Starting Create User's Volatile Files and Directories...
Oct 13 15:52:07 standalone.localdomain systemd[569329]: Listening on D-Bus User Message Bus Socket.
Oct 13 15:52:07 standalone.localdomain systemd[569329]: Reached target Sockets.
Oct 13 15:52:07 standalone.localdomain systemd[569329]: Finished Create User's Volatile Files and Directories.
Oct 13 15:52:07 standalone.localdomain systemd[569329]: Reached target Basic System.
Oct 13 15:52:07 standalone.localdomain systemd[569329]: Reached target Main User Target.
Oct 13 15:52:07 standalone.localdomain systemd[569329]: Startup finished in 186ms.
Oct 13 15:52:07 standalone.localdomain systemd[1]: Started User Manager for UID 0.
Oct 13 15:52:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:07.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:07 standalone.localdomain systemd[1]: Started Session 305 of User root.
Oct 13 15:52:07 standalone.localdomain sshd[569325]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:52:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4135: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:52:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:09.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:09 standalone.localdomain ceph-mon[29756]: pgmap v4135: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:09 standalone.localdomain sshd[569345]: Received disconnect from 192.168.122.11 port 43148:11: disconnected by user
Oct 13 15:52:09 standalone.localdomain sshd[569345]: Disconnected from user root 192.168.122.11 port 43148
Oct 13 15:52:09 standalone.localdomain sshd[569325]: pam_unix(sshd:session): session closed for user root
Oct 13 15:52:09 standalone.localdomain systemd[1]: session-305.scope: Deactivated successfully.
Oct 13 15:52:09 standalone.localdomain systemd-logind[45629]: Session 305 logged out. Waiting for processes to exit.
Oct 13 15:52:09 standalone.localdomain systemd-logind[45629]: Removed session 305.
Oct 13 15:52:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4136: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:10 standalone.localdomain sshd[569362]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:52:10 standalone.localdomain sshd[569362]: Accepted publickey for root from 192.168.122.11 port 39884 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:52:10 standalone.localdomain systemd-logind[45629]: New session 307 of user root.
Oct 13 15:52:10 standalone.localdomain systemd[1]: Started Session 307 of User root.
Oct 13 15:52:10 standalone.localdomain sshd[569362]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:52:10 standalone.localdomain sshd[569365]: Received disconnect from 192.168.122.11 port 39884:11: disconnected by user
Oct 13 15:52:10 standalone.localdomain sshd[569365]: Disconnected from user root 192.168.122.11 port 39884
Oct 13 15:52:10 standalone.localdomain sshd[569362]: pam_unix(sshd:session): session closed for user root
Oct 13 15:52:10 standalone.localdomain systemd-logind[45629]: Session 307 logged out. Waiting for processes to exit.
Oct 13 15:52:10 standalone.localdomain systemd[1]: session-307.scope: Deactivated successfully.
Oct 13 15:52:10 standalone.localdomain systemd-logind[45629]: Removed session 307.
Oct 13 15:52:11 standalone.localdomain sshd[569382]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:52:11 standalone.localdomain ceph-mon[29756]: pgmap v4136: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:11 standalone.localdomain sshd[569382]: Accepted publickey for root from 192.168.122.11 port 39890 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:52:11 standalone.localdomain systemd-logind[45629]: New session 308 of user root.
Oct 13 15:52:11 standalone.localdomain systemd[1]: Started Session 308 of User root.
Oct 13 15:52:11 standalone.localdomain sshd[569382]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:52:11 standalone.localdomain podman[467099]: time="2025-10-13T15:52:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:52:11 standalone.localdomain sshd[569385]: Received disconnect from 192.168.122.11 port 39890:11: disconnected by user
Oct 13 15:52:11 standalone.localdomain sshd[569385]: Disconnected from user root 192.168.122.11 port 39890
Oct 13 15:52:11 standalone.localdomain sshd[569382]: pam_unix(sshd:session): session closed for user root
Oct 13 15:52:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:52:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:52:11 standalone.localdomain systemd[1]: session-308.scope: Deactivated successfully.
Oct 13 15:52:11 standalone.localdomain systemd-logind[45629]: Session 308 logged out. Waiting for processes to exit.
Oct 13 15:52:11 standalone.localdomain systemd-logind[45629]: Removed session 308.
Oct 13 15:52:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:52:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49161 "" "Go-http-client/1.1"
Oct 13 15:52:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4137: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:11 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:52:11 standalone.localdomain systemd[1]: Stopping User Manager for UID 1000...
Oct 13 15:52:11 standalone.localdomain systemd[569160]: Activating special unit Exit the Session...
Oct 13 15:52:11 standalone.localdomain systemd[569160]: Stopped target Main User Target.
Oct 13 15:52:11 standalone.localdomain systemd[569160]: Stopped target Basic System.
Oct 13 15:52:11 standalone.localdomain systemd[569160]: Stopped target Paths.
Oct 13 15:52:11 standalone.localdomain systemd[569160]: Stopped target Sockets.
Oct 13 15:52:11 standalone.localdomain systemd[569160]: Stopped target Timers.
Oct 13 15:52:11 standalone.localdomain systemd[569160]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 13 15:52:11 standalone.localdomain systemd[569160]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 15:52:11 standalone.localdomain systemd[569160]: Closed D-Bus User Message Bus Socket.
Oct 13 15:52:11 standalone.localdomain systemd[569160]: Stopped Create User's Volatile Files and Directories.
Oct 13 15:52:11 standalone.localdomain systemd[569160]: Removed slice User Application Slice.
Oct 13 15:52:11 standalone.localdomain systemd[569160]: Reached target Shutdown.
Oct 13 15:52:11 standalone.localdomain systemd[569160]: Finished Exit the Session.
Oct 13 15:52:11 standalone.localdomain systemd[569160]: Reached target Exit the Session.
Oct 13 15:52:11 standalone.localdomain systemd[1]: user@1000.service: Deactivated successfully.
Oct 13 15:52:11 standalone.localdomain systemd[1]: Stopped User Manager for UID 1000.
Oct 13 15:52:12 standalone.localdomain podman[569402]: 2025-10-13 15:52:12.088861075 +0000 UTC m=+0.096756377 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 13 15:52:12 standalone.localdomain podman[569402]: 2025-10-13 15:52:12.172776645 +0000 UTC m=+0.180671967 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:52:12 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:52:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:12.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:52:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:52:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:52:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:52:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:52:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:52:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:52:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:52:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:52:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:52:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:52:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:52:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:52:13 standalone.localdomain ceph-mon[29756]: pgmap v4137: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4138: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:14.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:14.243 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:52:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:14.244 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:52:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:14.244 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 15:52:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:14.321 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:52:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:14.322 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:52:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:14.322 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:52:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:14.323 2 DEBUG nova.objects.instance [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54a46fec-332e-42f9-83ed-88e763d13f63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 15:52:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:52:14 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:52:14 standalone.localdomain podman[569428]: 2025-10-13 15:52:14.827403887 +0000 UTC m=+0.085147239 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:52:14 standalone.localdomain podman[569428]: 2025-10-13 15:52:14.83883749 +0000 UTC m=+0.096580812 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 13 15:52:14 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:52:14 standalone.localdomain systemd[1]: tmp-crun.Ed91en.mount: Deactivated successfully.
Oct 13 15:52:14 standalone.localdomain podman[569427]: 2025-10-13 15:52:14.895019944 +0000 UTC m=+0.153149179 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, release=1755695350, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-type=git)
Oct 13 15:52:14 standalone.localdomain podman[569427]: 2025-10-13 15:52:14.914025691 +0000 UTC m=+0.172154916 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter)
Oct 13 15:52:14 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:52:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:14.987 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updating instance_info_cache with network_info: [{"id": "8a49767c-fb09-4185-95da-4261d8043fad", "address": "fa:16:3e:15:4e:c0", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.238", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a49767c-fb", "ovs_interfaceid": "8a49767c-fb09-4185-95da-4261d8043fad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:52:14 standalone.localdomain rsyslogd[56156]: imjournal from <standalone:nova_compute>: begin to drop messages due to rate-limiting
Oct 13 15:52:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:15.002 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-54a46fec-332e-42f9-83ed-88e763d13f63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:52:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:15.002 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 54a46fec-332e-42f9-83ed-88e763d13f63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:52:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:15.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:52:15 standalone.localdomain ceph-mon[29756]: pgmap v4138: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4139: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:16.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:52:16 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:16.229 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:52:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:17.229 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:52:17 standalone.localdomain ceph-mon[29756]: pgmap v4139: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:17.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4140: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:52:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:18.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:52:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:18.257 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:52:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:18.258 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:52:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:18.258 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:52:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:18.258 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:52:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:18.259 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:52:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:52:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1993141214' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:52:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:52:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1993141214' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:52:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:52:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1747723494' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:52:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:18.726 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:52:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:18.829 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:52:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:18.829 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:52:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:18.830 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:52:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:18.835 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:52:18 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:18.835 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:52:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:19.072 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:52:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:19.075 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9123MB free_disk=6.9495849609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:52:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:19.076 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:52:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:19.076 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:52:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:19.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:19 standalone.localdomain ceph-mon[29756]: pgmap v4140: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1993141214' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:52:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/1993141214' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:52:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1747723494' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:52:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:19.521 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:52:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:19.522 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:52:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:19.523 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:52:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:19.523 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:52:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:19.802 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Refreshing inventories for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 13 15:52:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4141: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:20.091 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Updating ProviderTree inventory for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 13 15:52:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:20.091 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Updating inventory in ProviderTree for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 13 15:52:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:20.109 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Refreshing aggregate associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 13 15:52:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:20.130 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Refreshing trait associations for resource provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269, traits: COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 13 15:52:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:20.176 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:52:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:52:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1834212520' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:52:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:20.582 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:52:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:20.590 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:52:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:20.606 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:52:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:20.607 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:52:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:20.608 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:52:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:20.608 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:52:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:20.609 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 13 15:52:20 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:20.998 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'name': 'test', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.002 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'name': 'bfv-server', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': None, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'standalone.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'hostId': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.003 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.003 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.020 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.021 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.021 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.040 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.040 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '49c77c42-e012-4c52-a169-83d3d9b785c3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:52:21.003784', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a6b9a6e-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.193041316, 'message_signature': '65b57f3189ff601f1c8609d1cab28651eb91961537e77f31e832a08a030d73f2'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:52:21.003784', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a6baea0-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.193041316, 'message_signature': 'f863aefa7e2b264fc953d6f1b06e81f30b820382488d0b4114859d44a1d65f30'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:52:21.003784', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '9a6bc19c-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.193041316, 'message_signature': '95b4bcdcdb2d16c2bffea4bc40c7029b1bfb3b74a60a4c549586d1db73cf5510'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:52:21.003784', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a6e9b38-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.211394272, 'message_signature': '7fe1a5c1d6f5803eeac50b3b432acd791be4b8584dcbf5fee16fdecd81e2bea1'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:52:21.003784', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a6eb2e4-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.211394272, 'message_signature': 'd8d812af9d575bdbbe8dbdcb778009aee15bc274b2bc4c3747034cbc30141ebc'}]}, 'timestamp': '2025-10-13 15:52:21.041536', '_unique_id': '23fe8e6ba79c42519770d92098b64a16'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.043 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.044 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.076 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 29758464 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.078 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.079 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.bytes volume: 40960 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.100 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 29036032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.100 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.bytes volume: 3223552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd1187a0f-00e1-4e3e-9663-02945d78cf53', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29758464, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:52:21.044949', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a744e98-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '4780a90aebbfffa6ec773f821c3c8a18c8471a3443c0b480dbe3c871e91ef085'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:52:21.044949', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a746e78-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '92a2787a3821ee11026951ba781f7417de4f57f83dd11bee9d0ce2883c99a4ac'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 40960, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:52:21.044949', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '9a7483a4-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '6dd0e72c886eae253411b7ae47b63552ac73553f4ff68a2af8ecdb72de40f6f7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29036032, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:52:21.044949', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a77bf56-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.268848256, 'message_signature': 'dc4eb3305a2cbc67805c0313d519b88f6b0db87e11fdb4c080657ad3f60eea6f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3223552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:52:21.044949', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a77cac8-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.268848256, 'message_signature': '687a8ba784477259fe7c7b38ab2990208227ab18164be8b2b602b84240c0f4ba'}]}, 'timestamp': '2025-10-13 15:52:21.100963', '_unique_id': '3cc7e7c557904c778b1a0ac360846738'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.102 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.103 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.103 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.103 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.103 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.104 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.104 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5507287b-41a2-44f0-a711-61dea40ebc87', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:52:21.103208', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a782ff4-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.193041316, 'message_signature': '35b55b070a7086559658fedd4f271a9e5ae698574e9e2a6dd2a005463cbfb1f3'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:52:21.103208', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a783c92-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.193041316, 'message_signature': '2d5c10291365a98ee6280bc6b73b5f5d8ce1db91deb34b86d0d7ddcdb9029db1'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:52:21.103208', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '9a784606-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.193041316, 'message_signature': '034aaca890fb46de556ee264eb320801104bd04c9841f71863b2d7e1c18059c0'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:52:21.103208', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a785024-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.211394272, 'message_signature': '57bc3ec967bb7e40a52416c64fa451c9fbb49dc0525ecbe3a825227a6715dbad'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:52:21.103208', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a785c72-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.211394272, 'message_signature': '4514c3cd2555e1197779604b590650acadf17d571b34ee7d1ea118c2624a0f9c'}]}, 'timestamp': '2025-10-13 15:52:21.104677', '_unique_id': '4aeb2d3bb32e43d8b50ca027e448d979'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.105 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.106 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.106 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.106 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 15370724 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.106 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.107 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 256722871 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.107 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd1ecc025-d730-4c64-ae38-dd1f1db82fa2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:52:21.106342', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a78aa92-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '9421c1cfa281c446cbe318e6c514e28bba143e45e4c325c0ed6450dedfe9d86a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 15370724, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:52:21.106342', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a78b50a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '9f465856bfb599a048ab49149c966ee2c81e7aedf79b6d92a4b35d5a7ce985ce'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:52:21.106342', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '9a78be92-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '70e15c2b9e41a28b82848594e6a6ef23e703fd65722732166d8bf28cce7a98ea'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 256722871, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:52:21.106342', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a78c7fc-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.268848256, 'message_signature': 'b4bc33cb7c12f1602e1cfbd7c80324697cb2b6048699882f5919acc57935617e'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:52:21.106342', 'resource_metadata': {'display_name': 'bfv-server', 
'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a78d382-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.268848256, 'message_signature': '0ab988de323d36d8a3b0f55474201fb6c945d2de78c21508b03231efd06a6a07'}]}, 'timestamp': '2025-10-13 15:52:21.107726', '_unique_id': '3ce33edca3a24565847d0b26b8e62da7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.108 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.109 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.112 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.114 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd8499f06-5137-42dc-bbf8-22afadd07c3b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:52:21.109238', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '9a799ee8-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.29847718, 'message_signature': '050551981cee6bbbea438aa13ab3da44c5db56ffa6601f0a747dba1dffaceb59'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 
'3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:52:21.109238', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '9a79ebc8-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.302142933, 'message_signature': '62e86b0023bbdef9736f0cd001b628a36c205d2312ae6374956da2d20ceef786'}]}, 'timestamp': '2025-10-13 15:52:21.114938', '_unique_id': '1a6755af7e91416599d41e92c65e33a3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.115 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.116 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.127 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/memory.usage volume: 53.14453125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.142 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/memory.usage volume: 51.8203125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c1b6743b-efb7-4830-9d35-2705622631ce', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 53.14453125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:52:21.116420', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '9a7bee50-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.316921519, 'message_signature': '298ad459c7ddabf2b7ba0e43447adc3241b1afaa3c9982794e98fdb9203f7335'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.8203125, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': '2025-10-13T15:52:21.116420', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '9a7e30c0-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.331698675, 'message_signature': 'dd9e4d19ceb0269a5313885d3a38652ce3277ec57bfb520b2d9a35774578f096'}]}, 'timestamp': '2025-10-13 15:52:21.142901', '_unique_id': 'cd496f3d232041fe83ef7cc8445235e3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.143 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.144 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.144 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.144 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '32ceb986-d2c1-4954-baa7-4833dd7fcf0c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:52:21.144698', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '9a7e82d2-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.29847718, 'message_signature': 'fc3470afcb44a0c4c3d5ba73fd9ea302ee8f9079fea4d9aec5d1a071aa77f05e'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:52:21.144698', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '9a7e8e58-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.302142933, 'message_signature': '2d133f90288638c4e3d33da3e53dc33cef6c0c3ddd63454f84f08352750d815c'}]}, 'timestamp': '2025-10-13 15:52:21.145289', '_unique_id': '8053141d26a04899b0221c03b7835557'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.145 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.146 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.146 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets volume: 131 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'abdad3d7-7a1b-4940-afcb-495c9dc90091', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 131, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:52:21.146724', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '9a7ed1ba-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.29847718, 'message_signature': 'ee9a928748fc0fa841d907bf373fb592c766b8a395f52dbe18e8895a15084896'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:52:21.146724', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '9a7edc46-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.302142933, 'message_signature': '6374132463750b35371390be2dc568743f9e73cccedbf0c3765ecf350e1eac22'}]}, 'timestamp': '2025-10-13 15:52:21.147313', '_unique_id': '5880794afde54be28fd2c34a04ef7679'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.147 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.148 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.148 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '73a4bcda-1e59-4fe5-8e9f-72362f894c55', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:52:21.148769', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '9a7f216a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.29847718, 'message_signature': '412638bce89439005bc3728fdeab4d60c259960c205d8e22808821704daab8c5'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:52:21.148769', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '9a7f2c14-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.302142933, 'message_signature': '8ad23be65fd6b43f07d474c3a5675d97c0780996acfded8618aaaa6dfdb03c0a'}]}, 'timestamp': '2025-10-13 15:52:21.149329', '_unique_id': 'd6baf644edb642a8a916b01f57ca4ddf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.149 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.150 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.150 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes volume: 11543 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.151 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes volume: 4386 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '74782e16-8353-464c-bb50-8f82001b4af5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11543, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:52:21.150859', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '9a7f7322-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.29847718, 'message_signature': 'df0023e4a7b1d7c2c36dfcf94fbc28f0111662c1e145a055e7069c5b38129695'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4386, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:52:21.150859', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '9a7f7dd6-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.302142933, 'message_signature': '215fca09df0dc47a8ebb0599113fcad33fbc53757ace2cd6935927f471ccbae8'}]}, 'timestamp': '2025-10-13 15:52:21.151413', '_unique_id': '5b21ad1e098b4666962bad3685ab4028'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.152 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 689775433 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.153 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 105173955 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.153 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.latency volume: 8149490 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.153 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 901782749 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.153 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.latency volume: 83578923 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2380c44f-e698-4600-a744-9da4f7d02193', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 689775433, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:52:21.152905', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a7fc318-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '87fe711bbd24db14d93fa68f3b478b71188741091607784905cafc5d7644e450'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 105173955, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:52:21.152905', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a7fcd18-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '9c28e081c5c038e09f849929e4e526f82c7320b6a8eb7592a7e37a87dcdaf7a8'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 8149490, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:52:21.152905', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '9a7fd89e-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '469b68491a3014fa6a6bf17da043c6061eb27dd41e68f41e5bb02b848401746a'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 901782749, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:52:21.152905', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a7fe334-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.268848256, 'message_signature': '7e18ffd3e222b63ff9d891599a14773eab929043b275590bbc8122bbe6d29bb0'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 83578923, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:52:21.152905', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a7fec8a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.268848256, 'message_signature': '0af2f7f4b77d079e4137d8b857335f6d30fc330a9ceed90d608db47cf4a8ebaa'}]}, 'timestamp': '2025-10-13 15:52:21.154233', '_unique_id': '76ec0e1d1f4d4837816a4d8da02162d5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.154 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.155 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.156 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.156 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b0e503b9-85b9-4821-9a9b-c953b1b44248', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:52:21.156013', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '9a803d5c-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.29847718, 'message_signature': '5457cbbc27a4f8d36dbb460a2fe75ebc373cf1fefc702cbeb87c8a69bff9140c'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:52:21.156013', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '9a804810-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.302142933, 'message_signature': '82e8b84dafb426c057dbae417f1ec52e83dcace12936df3d1073f1136e02b9d3'}]}, 'timestamp': '2025-10-13 15:52:21.156619', '_unique_id': '10460648d5d141d7a7ad64772a7b6c9e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.157 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.158 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.158 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.packets volume: 85 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.158 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.packets volume: 57 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'eb6f2398-0ab8-4eb2-9c03-101eb7237fb3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 85, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:52:21.158091', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '9a808d98-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.29847718, 'message_signature': 'd6022e89054f088f7309e775f4b45f67606609dc95530c67dd91206e077a6962'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 57, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:52:21.158091', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '9a809900-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.302142933, 'message_signature': '6489bf4db87b60846508cbd37c63b3694fc180abc255c2a622a7f7668d6be2c6'}]}, 'timestamp': '2025-10-13 15:52:21.158667', '_unique_id': 'dc09bd217b8247dca8be7b263fc34662'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.159 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.160 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.160 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.160 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.160 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.161 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.161 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'db0330e8-3857-46b4-b4fc-5072bdddc8dc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:52:21.160167', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a80dfb4-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.193041316, 'message_signature': '2e9639f730001407fa617d4c3abda5183c4f2a81e3a47b6f9610df907a21097b'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:52:21.160167', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a80ea9a-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.193041316, 'message_signature': '1bbea752d13ba80a3149b6e0d1a21fde4609ea7279fab23009244ebe1f62bb97'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:52:21.160167', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '9a80f422-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.193041316, 'message_signature': '690763c2c9c56a6db00f9c45750ab1348248415ba9b8009766eb4a22ee88e68f'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:52:21.160167', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a81008e-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.211394272, 'message_signature': '6f4be8c729a0eedd41cb1ebbc39db538c64c19a1ad6741ea7b71c78ad5c92506'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:52:21.160167', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a810a3e-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.211394272, 'message_signature': '03339204528aa987aff210fbafc3a5fd23a82039195c2d93e144ab495c9b4559'}]}, 'timestamp': '2025-10-13 15:52:21.161574', '_unique_id': '738ee48183214e6299ab6d123fc7adfe'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.162 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.163 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.163 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes volume: 8960 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.163 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes volume: 4008 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '74435553-1623-4214-9dcf-f28ec7804e1e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8960, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:52:21.163146', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '9a8153b8-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.29847718, 'message_signature': '06dddbb98bec8c2288a10d12242bb0319a2cde27ab6b977e622c39b9f2afa3ae'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4008, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:52:21.163146', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '9a815f70-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.302142933, 'message_signature': 'fa42a816c5bec0ae364078293962297abe4fd75bfa4a15082fdefc4072071731'}]}, 'timestamp': '2025-10-13 15:52:21.163742', '_unique_id': 'f064f3bfcea945de925fd2ce5f569646'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.164 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.165 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.165 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 1018 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.165 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.165 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.read.requests volume: 10 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.166 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 1043 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.166 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.read.requests volume: 148 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c44a67c4-e680-456d-956b-3255120d132e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1018, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:52:21.165180', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a81a37c-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '2eb083a07005f2af36a57ff89b7e26811b9f7c69707670d1a385feb17c40025d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:52:21.165180', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a81ae9e-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': 'e65a1af97c6f993995f39180655c48df28722158676b1c86e51f7f3b6f0bd88e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 10, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:52:21.165180', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '9a81b826-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '306e08e782a62400516dbdc2736a494e43157d226010549d5cd081e8a347ec35'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1043, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:52:21.165180', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a81c190-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.268848256, 'message_signature': 'a9a872cc324e802c264290ed7de0f8b1f46e270f621ea67a8a4b81be8910030b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 148, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:52:21.165180', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a81cc58-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.268848256, 'message_signature': '1f77ba53d443eff48f0269eb17425d796e7355bf64b6ccd6d7e2d97cff05b2f4'}]}, 'timestamp': '2025-10-13 15:52:21.166545', '_unique_id': '937d4586e01646799817a5a074cd2d79'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.167 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.168 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.168 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.168 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.168 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.168 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 534 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.169 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ec74d198-3bc8-49b3-b60a-bd5c67c6bcb3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:52:21.168124', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a8215b4-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '1afe8271dd80ca671b7e7cbaddd2deb60729c71b122b237259081de89c0e40f5'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:52:21.168124', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a8222c0-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '7f5d3a8e691b5a335386899f3d3bd65582a8f06df3de92e1010470ce4ec6a7ff'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:52:21.168124', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '9a822c8e-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '5e27d7d5c977c2f1b47b0f5a9bb6d5e916fd53f503cbd3486a327add44d063da'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 534, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:52:21.168124', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a823602-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.268848256, 'message_signature': 'b64cd4e86a89c9ae7fd3d1bf6573a9c90559cc30d01cd23634b48f7f149641a8'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:52:21.168124', 'resource_metadata': {'display_name': 
'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a823f30-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.268848256, 'message_signature': '2f71d7da5f16d74b0e2fbe1591012c0179a3a49a08f60e1f87c95ce47be3f742'}]}, 'timestamp': '2025-10-13 15:52:21.169509', '_unique_id': '9d7a82476a82495393c2f19dc690ee3c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.170 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.171 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.171 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a1eaf364-8cdf-4802-8a3b-8853a997eded', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:52:21.171050', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '9a8287ce-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.29847718, 'message_signature': '8649b1818a4507021aa486ed48545692eb0ee999fa030337e35cfbbb8fa1406e'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:52:21.171050', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '9a829264-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.302142933, 'message_signature': '68a6694063321af8debb3e830621ce3151e20f87b048d4524280e3aaf4a83368'}]}, 'timestamp': '2025-10-13 15:52:21.171663', '_unique_id': 'ccaaa9fae06d4626bac3f30a0e0c3c9c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.172 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.173 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.173 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.173 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.173 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.173 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.174 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 73879552 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.174 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bf7b9779-35b8-4ea1-9933-d7697c113885', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vda', 'timestamp': '2025-10-13T15:52:21.173186', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a82daf8-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '4021fe53a7e781110e4f71f0ace018d2e2bf0d0cbf3002a889488f26ec88402a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 
'54a46fec-332e-42f9-83ed-88e763d13f63-vdb', 'timestamp': '2025-10-13T15:52:21.173186', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a82e638-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': 'abd7d094028e493e7c4d7c9b21a15dc1d306655ca8901c56d1a43b23c5df0aeb'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63-vdc', 'timestamp': '2025-10-13T15:52:21.173186', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdc'}, 'message_id': '9a82f1dc-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.234246127, 'message_signature': '3b0bf6968b01c44c4cb94cdd4afb3a6cb4553fce06be84ea4625318742bd67ef'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73879552, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vda', 'timestamp': '2025-10-13T15:52:21.173186', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a82fb82-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.268848256, 'message_signature': 'e5fa20124dead428fd53c4dad64eb6a86a57a8796b2ddacd4559b586d3ceed01'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c-vdb', 'timestamp': '2025-10-13T15:52:21.173186', 'resource_metadata': {'display_name': 'bfv-server', 'name': 
'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a8304ce-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.268848256, 'message_signature': 'df6c9a03361a2e6c3e6889f4540a102f3dcd9970df645e6b8cb9dce339b71d8b'}]}, 'timestamp': '2025-10-13 15:52:21.174537', '_unique_id': 'dc1c636a9e32406d970db0f01ca540ee'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.175 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.176 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/cpu volume: 15710000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.176 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/cpu volume: 37520000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4cbc74dd-acdc-432d-be89-a283b842fd6f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 15710000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'timestamp': '2025-10-13T15:52:21.176049', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000001', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '9a834bd2-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.316921519, 'message_signature': '68eae955801e12b981672d37839ed4c6eed6e5ee62f9ecab1d19f633ebee3b2a'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 37520000000, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'timestamp': 
'2025-10-13T15:52:21.176049', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'instance-00000002', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '9a8355e6-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.331698675, 'message_signature': '85ac0cfde53498a78a44cc216a79618b13ec17735a7d2213d22cd2f30732d478'}]}, 'timestamp': '2025-10-13 15:52:21.176623', '_unique_id': '3f4c9b43a4564c758a0d607b3c065f32'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.177 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.178 12 DEBUG ceilometer.compute.pollsters [-] 54a46fec-332e-42f9-83ed-88e763d13f63/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.178 12 DEBUG ceilometer.compute.pollsters [-] 8f68d5aa-abc4-451d-89d2-f5342b71831c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5b1ee513-603f-491e-b48f-939b3097e330', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000001-54a46fec-332e-42f9-83ed-88e763d13f63-tap8a49767c-fb', 'timestamp': '2025-10-13T15:52:21.178055', 'resource_metadata': {'display_name': 'test', 'name': 'tap8a49767c-fb', 'instance_id': '54a46fec-332e-42f9-83ed-88e763d13f63', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '9de40855-8304-4f15-8bc5-8a8a2b61b79b'}, 'image_ref': '9de40855-8304-4f15-8bc5-8a8a2b61b79b', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:15:4e:c0', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8a49767c-fb'}, 'message_id': '9a83995c-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.29847718, 'message_signature': '328da77b25cf68a244d39a2bfba893903e35a8089e72b90cd111507180ce6240'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '3d9eeef137fc40c78332936114fd7ee4', 
'user_name': None, 'project_id': 'e44641a80bcb466cb3dd688e48b72d8e', 'project_name': None, 'resource_id': 'instance-00000002-8f68d5aa-abc4-451d-89d2-f5342b71831c-tapda3e5a61-7a', 'timestamp': '2025-10-13T15:52:21.178055', 'resource_metadata': {'display_name': 'bfv-server', 'name': 'tapda3e5a61-7a', 'instance_id': '8f68d5aa-abc4-451d-89d2-f5342b71831c', 'instance_type': 'm1.small', 'host': '43db149b8032e5fe2d9d3d4c6a63b152b22f837bdaf4d38cb5efc407', 'instance_host': 'standalone.localdomain', 'flavor': {'id': '993bc811-5d85-498c-8b2f-935e295a6567', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': None, 'image_ref': None, 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:6c:20:5b', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapda3e5a61-7a'}, 'message_id': '9a83a3de-a84c-11f0-a10a-fa163e2e69a7', 'monotonic_time': 10184.302142933, 'message_signature': '5268da2fb7c21e2fa4b08e9d371b06a2334b5dd2fb4e4d9b9ce074d1677a3bc8'}]}, 'timestamp': '2025-10-13 15:52:21.178621', '_unique_id': '5f203d5d1f2d40aeb467c8527ba627bb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     yield
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Oct 13 15:52:21 standalone.localdomain ceilometer_agent_compute[464376]: 2025-10-13 15:52:21.179 12 ERROR oslo_messaging.notify.messaging 
Oct 13 15:52:21 standalone.localdomain ceph-mon[29756]: pgmap v4141: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:21 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1834212520' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:52:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:52:21 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:52:21 standalone.localdomain systemd[1]: Stopping User Manager for UID 0...
Oct 13 15:52:21 standalone.localdomain systemd[569329]: Activating special unit Exit the Session...
Oct 13 15:52:21 standalone.localdomain systemd[569329]: Stopped target Main User Target.
Oct 13 15:52:21 standalone.localdomain systemd[569329]: Stopped target Basic System.
Oct 13 15:52:21 standalone.localdomain systemd[569329]: Stopped target Paths.
Oct 13 15:52:21 standalone.localdomain systemd[569329]: Stopped target Sockets.
Oct 13 15:52:21 standalone.localdomain systemd[569329]: Stopped target Timers.
Oct 13 15:52:21 standalone.localdomain systemd[569329]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 15:52:21 standalone.localdomain systemd[569329]: Closed D-Bus User Message Bus Socket.
Oct 13 15:52:21 standalone.localdomain systemd[569329]: Stopped Create User's Volatile Files and Directories.
Oct 13 15:52:21 standalone.localdomain systemd[569329]: Removed slice User Application Slice.
Oct 13 15:52:21 standalone.localdomain systemd[569329]: Reached target Shutdown.
Oct 13 15:52:21 standalone.localdomain systemd[569329]: Finished Exit the Session.
Oct 13 15:52:21 standalone.localdomain systemd[569329]: Reached target Exit the Session.
Oct 13 15:52:21 standalone.localdomain systemd[1]: user@0.service: Deactivated successfully.
Oct 13 15:52:21 standalone.localdomain systemd[1]: Stopped User Manager for UID 0.
Oct 13 15:52:21 standalone.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 13 15:52:21 standalone.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 13 15:52:21 standalone.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 13 15:52:21 standalone.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 13 15:52:21 standalone.localdomain systemd[1]: Removed slice User Slice of UID 0.
Oct 13 15:52:21 standalone.localdomain systemd[1]: user-0.slice: Consumed 1.640s CPU time.
Oct 13 15:52:21 standalone.localdomain podman[569511]: 2025-10-13 15:52:21.842525117 +0000 UTC m=+0.107976594 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 15:52:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4142: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:21 standalone.localdomain podman[569511]: 2025-10-13 15:52:21.883944285 +0000 UTC m=+0.149395692 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 15:52:21 standalone.localdomain podman[569512]: 2025-10-13 15:52:21.902099365 +0000 UTC m=+0.161829195 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:52:21 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:52:21 standalone.localdomain podman[569512]: 2025-10-13 15:52:21.937902751 +0000 UTC m=+0.197632521 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:52:21 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:52:22 standalone.localdomain ceph-mon[29756]: pgmap v4142: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:22.615 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:52:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:22.615 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:52:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:22.616 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:52:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:52:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:52:22 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:52:22 standalone.localdomain systemd[1]: tmp-crun.28mVCN.mount: Deactivated successfully.
Oct 13 15:52:22 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:22.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:22 standalone.localdomain podman[569555]: 2025-10-13 15:52:22.813216868 +0000 UTC m=+0.154589631 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, version=17.1.9, maintainer=OpenStack TripleO Team, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, name=rhosp17/openstack-swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, tcib_managed=true, io.buildah.version=1.33.12, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20250721.1, release=1, description=Red Hat OpenStack Platform 17.1 swift-container, architecture=x86_64, container_name=swift_container_server, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-07-21T15:54:32, com.redhat.component=openstack-swift-container-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:52:22 standalone.localdomain podman[569556]: 2025-10-13 15:52:22.832206504 +0000 UTC m=+0.170339627 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-account-container, name=rhosp17/openstack-swift-account, build-date=2025-07-21T16:11:22, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_account_server, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1, description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, batch=17.1_20250721.1, release=1, 
com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, vendor=Red Hat, Inc., version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible)
Oct 13 15:52:22 standalone.localdomain podman[569554]: 2025-10-13 15:52:22.791082665 +0000 UTC m=+0.139544627 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=swift_object_server, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 swift-object, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.component=openstack-swift-object-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-type=git, architecture=x86_64, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, release=1)
Oct 13 15:52:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:52:22 standalone.localdomain podman[569554]: 2025-10-13 15:52:22.985057133 +0000 UTC m=+0.333519055 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, build-date=2025-07-21T14:56:28, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.buildah.version=1.33.12, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, version=17.1.9, batch=17.1_20250721.1, config_id=tripleo_step4, com.redhat.component=openstack-swift-object-container, summary=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, com.redhat.license_terms=https://www.redhat.com/agreements, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, release=1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object)
Oct 13 15:52:23 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:52:23 standalone.localdomain podman[569555]: 2025-10-13 15:52:23.054119056 +0000 UTC m=+0.395491829 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-container, summary=Red Hat OpenStack Platform 17.1 swift-container, maintainer=OpenStack TripleO Team, release=1, com.redhat.component=openstack-swift-container-container, tcib_managed=true, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, architecture=x86_64, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.expose-services=, batch=17.1_20250721.1, managed_by=tripleo_ansible, name=rhosp17/openstack-swift-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, config_id=tripleo_step4, distribution-scope=public, version=17.1.9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, build-date=2025-07-21T15:54:32, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:52:23 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:52:23 standalone.localdomain podman[569556]: 2025-10-13 15:52:23.08633965 +0000 UTC m=+0.424472763 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.openshift.expose-services=, architecture=x86_64, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, batch=17.1_20250721.1, com.redhat.component=openstack-swift-account-container, release=1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_id=tripleo_step4, io.buildah.version=1.33.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.tags=rhosp osp openstack osp-17.1, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, distribution-scope=public, container_name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, name=rhosp17/openstack-swift-account, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:52:23 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:52:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:23.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:52:23
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['backups', '.mgr', 'images', 'vms', 'manila_data', 'manila_metadata', 'volumes']
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:52:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3239 DF PROTO=TCP SPT=41694 DPT=9102 SEQ=1978645764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD6B69F0000000001030307) 
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4143: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:52:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:52:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:24.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3240 DF PROTO=TCP SPT=41694 DPT=9102 SEQ=1978645764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD6BAB70000000001030307) 
Oct 13 15:52:24 standalone.localdomain ceph-mon[29756]: pgmap v4143: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:25 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=34667 DF PROTO=TCP SPT=55902 DPT=19885 SEQ=674204246 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09B405F40000000001030307) 
Oct 13 15:52:25 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:52:25 standalone.localdomain podman[569636]: 2025-10-13 15:52:25.819082444 +0000 UTC m=+0.084758508 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, container_name=iscsid)
Oct 13 15:52:25 standalone.localdomain podman[569636]: 2025-10-13 15:52:25.832938842 +0000 UTC m=+0.098614906 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 15:52:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4144: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:52:25 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:52:26 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=34668 DF PROTO=TCP SPT=55902 DPT=19885 SEQ=674204246 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09B409F10000000001030307) 
Oct 13 15:52:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3241 DF PROTO=TCP SPT=41694 DPT=9102 SEQ=1978645764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD6C2B60000000001030307) 
Oct 13 15:52:26 standalone.localdomain ceph-mon[29756]: pgmap v4144: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:52:27 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:27.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4145: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:52:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:52:28 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=34669 DF PROTO=TCP SPT=55902 DPT=19885 SEQ=674204246 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09B411F10000000001030307) 
Oct 13 15:52:28 standalone.localdomain ceph-mon[29756]: pgmap v4145: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:52:29 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:29.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006647822445561139 of space, bias 1.0, pg target 0.6647822445561139 quantized to 32 (current 32)
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.012942315745393635 of space, bias 1.0, pg target 1.2942315745393635 quantized to 32 (current 32)
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.29775832286432163 quantized to 32 (current 32)
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 13 15:52:29 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:52:29 standalone.localdomain podman[569655]: 2025-10-13 15:52:29.796052772 +0000 UTC m=+0.065674758 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:52:29 standalone.localdomain podman[569655]: 2025-10-13 15:52:29.830618949 +0000 UTC m=+0.100240945 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:52:29 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:52:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4146: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:52:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3242 DF PROTO=TCP SPT=41694 DPT=9102 SEQ=1978645764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD6D2760000000001030307) 
Oct 13 15:52:30 standalone.localdomain ceph-mon[29756]: pgmap v4146: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:52:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4147: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:52:32 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:52:32 standalone.localdomain podman[569678]: 2025-10-13 15:52:32.755354969 +0000 UTC m=+0.091433124 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 15:52:32 standalone.localdomain podman[569678]: 2025-10-13 15:52:32.790094051 +0000 UTC m=+0.126172196 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS)
Oct 13 15:52:32 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:32.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:32 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:52:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:52:32 standalone.localdomain ceph-mon[29756]: pgmap v4147: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:52:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4148: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:52:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:34.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:34 standalone.localdomain ceph-mon[29756]: pgmap v4148: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:52:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4149: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:52:36 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=31486 DF PROTO=TCP SPT=33606 DPT=19885 SEQ=485302722 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09B431650000000001030307) 
Oct 13 15:52:36 standalone.localdomain ceph-mon[29756]: pgmap v4149: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Oct 13 15:52:37 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=31487 DF PROTO=TCP SPT=33606 DPT=19885 SEQ=485302722 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09B435710000000001030307) 
Oct 13 15:52:37 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:37.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4150: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:52:38 standalone.localdomain ceph-mon[29756]: pgmap v4150: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:39.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:39 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=31488 DF PROTO=TCP SPT=33606 DPT=19885 SEQ=485302722 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09B43D710000000001030307) 
Oct 13 15:52:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4151: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:40 standalone.localdomain ceph-mon[29756]: pgmap v4151: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:41 standalone.localdomain podman[467099]: time="2025-10-13T15:52:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:52:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:52:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:52:41 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:52:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49160 "" "Go-http-client/1.1"
Oct 13 15:52:41 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4152: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:42 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:52:42 standalone.localdomain podman[569695]: 2025-10-13 15:52:42.76233862 +0000 UTC m=+0.088817823 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, config_id=ovn_controller)
Oct 13 15:52:42 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:42.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:42 standalone.localdomain podman[569695]: 2025-10-13 15:52:42.83846684 +0000 UTC m=+0.164946013 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 15:52:42 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:52:42 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:52:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:52:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:52:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:52:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:52:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:52:42 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:52:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:52:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:52:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:52:42 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:52:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:52:42 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:52:42 standalone.localdomain ceph-mon[29756]: pgmap v4152: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:43 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4153: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:44 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:44.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:44 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=43317 DF PROTO=TCP SPT=49036 DPT=19885 SEQ=2056983387 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09B4519D0000000001030307) 
Oct 13 15:52:44 standalone.localdomain sshd[569721]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:52:44 standalone.localdomain ceph-mon[29756]: pgmap v4153: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:45 standalone.localdomain sshd[569721]: error: kex_exchange_identification: Connection closed by remote host
Oct 13 15:52:45 standalone.localdomain sshd[569721]: Connection closed by 106.36.198.78 port 34542
Oct 13 15:52:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:52:45 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:52:45 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=43318 DF PROTO=TCP SPT=49036 DPT=19885 SEQ=2056983387 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09B455B10000000001030307) 
Oct 13 15:52:45 standalone.localdomain systemd[1]: tmp-crun.qLuDFd.mount: Deactivated successfully.
Oct 13 15:52:45 standalone.localdomain podman[569723]: 2025-10-13 15:52:45.799905882 +0000 UTC m=+0.064455571 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd)
Oct 13 15:52:45 standalone.localdomain podman[569723]: 2025-10-13 15:52:45.808935611 +0000 UTC m=+0.073485280 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 13 15:52:45 standalone.localdomain podman[569722]: 2025-10-13 15:52:45.82087957 +0000 UTC m=+0.085052437 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-type=git, io.openshift.tags=minimal rhel9)
Oct 13 15:52:45 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:52:45 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4154: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:45 standalone.localdomain podman[569722]: 2025-10-13 15:52:45.855021474 +0000 UTC m=+0.119194361 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal)
Oct 13 15:52:45 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:52:46 standalone.localdomain systemd[1]: tmp-crun.fVuvwN.mount: Deactivated successfully.
Oct 13 15:52:46 standalone.localdomain ceph-mon[29756]: pgmap v4154: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:47 standalone.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:16:3e:77:13:69 MACDST=fa:16:3e:2e:69:a7 MACPROTO=0800 SRC=38.102.83.114 DST=38.102.83.151 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=43319 DF PROTO=TCP SPT=49036 DPT=19885 SEQ=2056983387 ACK=0 WINDOW=32120 RES=0x00 SYN URGP=0 OPT (020405B40402080A09B45DB10000000001030307) 
Oct 13 15:52:47 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:47.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:47 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4155: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:47 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:52:48 standalone.localdomain ceph-mon[29756]: pgmap v4155: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:49 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:49.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:49 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4156: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:51 standalone.localdomain ceph-mon[29756]: pgmap v4156: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:51 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4157: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:52:52 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:52:52 standalone.localdomain podman[569762]: 2025-10-13 15:52:52.769680291 +0000 UTC m=+0.085682056 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 15:52:52 standalone.localdomain podman[569762]: 2025-10-13 15:52:52.777698589 +0000 UTC m=+0.093700334 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 15:52:52 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:52:52 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:52.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:52 standalone.localdomain podman[569761]: 2025-10-13 15:52:52.878857091 +0000 UTC m=+0.195663630 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251009, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct 13 15:52:52 standalone.localdomain podman[569761]: 2025-10-13 15:52:52.889998575 +0000 UTC m=+0.206805044 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 15:52:52 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:52:52 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:52:53 standalone.localdomain ceph-mon[29756]: pgmap v4157: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:52:53 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:52:53 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1415 DF PROTO=TCP SPT=52934 DPT=9102 SEQ=709866565 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD72BCF0000000001030307) 
Oct 13 15:52:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:52:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:52:53 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:52:53 standalone.localdomain podman[569801]: 2025-10-13 15:52:53.812703527 +0000 UTC m=+0.078412132 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, container_name=swift_object_server, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.9, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, summary=Red Hat OpenStack Platform 17.1 swift-object, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, name=rhosp17/openstack-swift-object, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, io.openshift.expose-services=, release=1, com.redhat.component=openstack-swift-object-container, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Oct 13 15:52:53 standalone.localdomain podman[569802]: 2025-10-13 15:52:53.835756029 +0000 UTC m=+0.094088736 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, release=1, vcs-type=git, com.redhat.component=openstack-swift-container-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, version=17.1.9, container_name=swift_container_server, io.openshift.expose-services=, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-swift-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, build-date=2025-07-21T15:54:32, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:52:53 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4158: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:53 standalone.localdomain podman[569808]: 2025-10-13 15:52:53.89764919 +0000 UTC m=+0.151744086 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20250721.1, container_name=swift_account_server, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-swift-account, release=1, tcib_managed=true, build-date=2025-07-21T16:11:22, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, version=17.1.9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, io.buildah.version=1.33.12, com.redhat.component=openstack-swift-account-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-account, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 swift-account, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f)
Oct 13 15:52:54 standalone.localdomain podman[569801]: 2025-10-13 15:52:54.014814765 +0000 UTC m=+0.280523370 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, version=17.1.9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, build-date=2025-07-21T14:56:28, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, release=1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, vcs-type=git, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, com.redhat.component=openstack-swift-object-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, name=rhosp17/openstack-swift-object, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, config_id=tripleo_step4, tcib_managed=true)
Oct 13 15:52:54 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:52:54 standalone.localdomain podman[569802]: 2025-10-13 15:52:54.05284155 +0000 UTC m=+0.311174247 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, build-date=2025-07-21T15:54:32, description=Red Hat OpenStack Platform 17.1 swift-container, managed_by=tripleo_ansible, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, tcib_managed=true, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, vcs-type=git, io.buildah.version=1.33.12, summary=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.tags=rhosp osp openstack osp-17.1, name=rhosp17/openstack-swift-container, vendor=Red Hat, Inc., batch=17.1_20250721.1, version=17.1.9, io.openshift.expose-services=, com.redhat.component=openstack-swift-container-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, container_name=swift_container_server, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:52:54 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:52:54 standalone.localdomain podman[569808]: 2025-10-13 15:52:54.081802623 +0000 UTC m=+0.335897529 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, summary=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, io.buildah.version=1.33.12, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, release=1, batch=17.1_20250721.1, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-swift-account, com.redhat.component=openstack-swift-account-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, maintainer=OpenStack TripleO Team, version=17.1.9, build-date=2025-07-21T16:11:22, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat OpenStack Platform 17.1 swift-account, architecture=x86_64, container_name=swift_account_server, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1)
Oct 13 15:52:54 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:52:54 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:54.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:54 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1416 DF PROTO=TCP SPT=52934 DPT=9102 SEQ=709866565 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD72FF60000000001030307) 
Oct 13 15:52:55 standalone.localdomain ceph-mon[29756]: pgmap v4158: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:55 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4159: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:56 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1417 DF PROTO=TCP SPT=52934 DPT=9102 SEQ=709866565 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD737F60000000001030307) 
Oct 13 15:52:56 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:52:56 standalone.localdomain podman[569884]: 2025-10-13 15:52:56.809016796 +0000 UTC m=+0.073353995 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 15:52:56 standalone.localdomain podman[569884]: 2025-10-13 15:52:56.821169201 +0000 UTC m=+0.085506350 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 13 15:52:56 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:52:57 standalone.localdomain ceph-mon[29756]: pgmap v4159: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:57 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4160: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:57 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:57.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:57 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:52:59 standalone.localdomain ceph-mon[29756]: pgmap v4160: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:52:59 standalone.localdomain rsyslogd[56156]: imjournal: 1542 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 13 15:52:59 standalone.localdomain nova_compute[521101]: 2025-10-13 15:52:59.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:52:59 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4161: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:00 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1418 DF PROTO=TCP SPT=52934 DPT=9102 SEQ=709866565 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD747B60000000001030307) 
Oct 13 15:53:00 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:53:00 standalone.localdomain systemd[1]: tmp-crun.01J3of.mount: Deactivated successfully.
Oct 13 15:53:00 standalone.localdomain podman[569903]: 2025-10-13 15:53:00.833886024 +0000 UTC m=+0.095559991 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 13 15:53:00 standalone.localdomain podman[569903]: 2025-10-13 15:53:00.846818932 +0000 UTC m=+0.108492899 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:53:00 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:53:01 standalone.localdomain ceph-mon[29756]: pgmap v4161: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:01 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4162: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:02 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:53:02 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:02.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:03 standalone.localdomain ceph-mon[29756]: pgmap v4162: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:03 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:53:03 standalone.localdomain podman[569926]: 2025-10-13 15:53:03.836372182 +0000 UTC m=+0.094226869 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 15:53:03 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4163: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:03 standalone.localdomain podman[569926]: 2025-10-13 15:53:03.866168572 +0000 UTC m=+0.124023259 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 15:53:03 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:53:04 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:04.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:04 standalone.localdomain sudo[569944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 13 15:53:04 standalone.localdomain sudo[569944]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:53:04 standalone.localdomain sudo[569944]: pam_unix(sudo:session): session closed for user root
Oct 13 15:53:04 standalone.localdomain sudo[569962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/627e7f45-65aa-56de-94df-66eaee84a56e/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Oct 13 15:53:04 standalone.localdomain sudo[569962]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:53:05 standalone.localdomain ceph-mon[29756]: pgmap v4163: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:05 standalone.localdomain sudo[569962]: pam_unix(sudo:session): session closed for user root
Oct 13 15:53:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:53:05 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:53:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct 13 15:53:05 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:53:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct 13 15:53:05 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:53:05 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct 13 15:53:05 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:53:05 standalone.localdomain ceph-mgr[29999]: [progress INFO root] update: starting ev fffa5d1d-58a5-4df7-a109-f720d11e7cac (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:53:05 standalone.localdomain ceph-mgr[29999]: [progress INFO root] complete: finished ev fffa5d1d-58a5-4df7-a109-f720d11e7cac (Updating node-proxy deployment (+1 -> 1))
Oct 13 15:53:05 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Completed event fffa5d1d-58a5-4df7-a109-f720d11e7cac (Updating node-proxy deployment (+1 -> 1)) in 0 seconds
Oct 13 15:53:05 standalone.localdomain sudo[570012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 13 15:53:05 standalone.localdomain sudo[570012]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Oct 13 15:53:05 standalone.localdomain sudo[570012]: pam_unix(sudo:session): session closed for user root
Oct 13 15:53:05 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4164: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:06 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:53:06 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Oct 13 15:53:06 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:53:06 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Oct 13 15:53:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:53:06.973 378821 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:53:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:53:06.974 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:53:06 standalone.localdomain ovn_metadata_agent[378816]: 2025-10-13 15:53:06.975 378821 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:53:07 standalone.localdomain sshd[570030]: main: sshd: ssh-rsa algorithm is disabled
Oct 13 15:53:07 standalone.localdomain sshd[570030]: Accepted publickey for root from 192.168.122.10 port 41110 ssh2: RSA SHA256:QbE9nqFsi6zXzPufoNKts/sIEwkxg3SLBdG8T5xLYEw
Oct 13 15:53:07 standalone.localdomain systemd[1]: Created slice User Slice of UID 0.
Oct 13 15:53:07 standalone.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 13 15:53:07 standalone.localdomain systemd-logind[45629]: New session 309 of user root.
Oct 13 15:53:07 standalone.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 13 15:53:07 standalone.localdomain systemd[1]: Starting User Manager for UID 0...
Oct 13 15:53:07 standalone.localdomain systemd[570034]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:53:07 standalone.localdomain ceph-mon[29756]: pgmap v4164: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:07 standalone.localdomain systemd[570034]: Queued start job for default target Main User Target.
Oct 13 15:53:07 standalone.localdomain systemd[570034]: Created slice User Application Slice.
Oct 13 15:53:07 standalone.localdomain systemd[570034]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 13 15:53:07 standalone.localdomain systemd[570034]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 15:53:07 standalone.localdomain systemd[570034]: Reached target Paths.
Oct 13 15:53:07 standalone.localdomain systemd[570034]: Reached target Timers.
Oct 13 15:53:07 standalone.localdomain systemd[570034]: Starting D-Bus User Message Bus Socket...
Oct 13 15:53:07 standalone.localdomain systemd[570034]: Starting Create User's Volatile Files and Directories...
Oct 13 15:53:07 standalone.localdomain systemd[570034]: Listening on D-Bus User Message Bus Socket.
Oct 13 15:53:07 standalone.localdomain systemd[570034]: Reached target Sockets.
Oct 13 15:53:07 standalone.localdomain systemd[570034]: Finished Create User's Volatile Files and Directories.
Oct 13 15:53:07 standalone.localdomain systemd[570034]: Reached target Basic System.
Oct 13 15:53:07 standalone.localdomain systemd[570034]: Reached target Main User Target.
Oct 13 15:53:07 standalone.localdomain systemd[570034]: Startup finished in 161ms.
Oct 13 15:53:07 standalone.localdomain systemd[1]: Started User Manager for UID 0.
Oct 13 15:53:07 standalone.localdomain systemd[1]: Started Session 309 of User root.
Oct 13 15:53:07 standalone.localdomain sshd[570030]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:53:07 standalone.localdomain sudo[570050]:     root : PWD=/root ; USER=root ; COMMAND=/bin/bash -c rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt
Oct 13 15:53:07 standalone.localdomain sudo[570050]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 13 15:53:07 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4165: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:07 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:53:07 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:07.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:09 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:09.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:09 standalone.localdomain ceph-mon[29756]: pgmap v4165: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:09 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4166: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:10 standalone.localdomain ceph-mgr[29999]: [progress INFO root] Writing back 50 completed events
Oct 13 15:53:10 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct 13 15:53:10 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [INF] : from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:53:11 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15214 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:11 standalone.localdomain ceph-mon[29756]: pgmap v4166: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:11 standalone.localdomain ceph-mon[29756]: from='mgr.14120 172.18.0.100:0/534555256' entity='mgr.standalone.ectizd' 
Oct 13 15:53:11 standalone.localdomain podman[467099]: time="2025-10-13T15:53:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 15:53:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:53:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 414657 "" "Go-http-client/1.1"
Oct 13 15:53:11 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15216 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:11 standalone.localdomain podman[467099]: @ - - [13/Oct/2025:15:53:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 49152 "" "Go-http-client/1.1"
Oct 13 15:53:11 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4167: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Oct 13 15:53:12 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3950031253' entity='client.admin' cmd={"prefix": "status"} : dispatch
Oct 13 15:53:12 standalone.localdomain ceph-mon[29756]: from='client.15214 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:12 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3950031253' entity='client.admin' cmd={"prefix": "status"} : dispatch
Oct 13 15:53:12 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:53:12 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:12.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:53:12 appctl.go:154: net.Dial: dial unix /run/ovn/ovnsb_db.ctl: connect: connection refused
Oct 13 15:53:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:53:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:53:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:53:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 15:53:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:53:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 15:53:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:53:12 standalone.localdomain openstack_network_exporter[469248]: ERROR   15:53:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 15:53:12 standalone.localdomain openstack_network_exporter[469248]: 
Oct 13 15:53:13 standalone.localdomain ceph-mon[29756]: from='client.15216 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:13 standalone.localdomain ceph-mon[29756]: pgmap v4167: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:13 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.
Oct 13 15:53:13 standalone.localdomain systemd[1]: tmp-crun.yHEiud.mount: Deactivated successfully.
Oct 13 15:53:13 standalone.localdomain podman[570301]: 2025-10-13 15:53:13.84421424 +0000 UTC m=+0.102465984 container health_status fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 15:53:13 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4168: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:13 standalone.localdomain podman[570301]: 2025-10-13 15:53:13.882819812 +0000 UTC m=+0.141071536 container exec_died fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:53:13 standalone.localdomain systemd[1]: fa0c22729719c38a74dedc839d85a46401f2bf71e93e60bc7ce984d269473a4e.service: Deactivated successfully.
Oct 13 15:53:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:14.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:53:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:14.228 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 15:53:14 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:14.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:14 standalone.localdomain account-server[114563]: 172.20.0.100 - - [13/Oct/2025:15:53:14 +0000] "HEAD /d1/234/.expiring_objects" 404 - "HEAD http://localhost/v1/.expiring_objects?format=json" "txdfcf860014ef4d4a8d39e-0068ed206a" "proxy-server 2" 0.0007 "-" 21 -
Oct 13 15:53:14 standalone.localdomain swift[114487]: Error connecting to memcached: standalone.internalapi.localdomain:11211: [Errno 111] ECONNREFUSED (txn: txdfcf860014ef4d4a8d39e-0068ed206a)
Oct 13 15:53:14 standalone.localdomain object-expirer[114487]: Pass completed in 0s; 0 objects expired (txn: txdfcf860014ef4d4a8d39e-0068ed206a)
Oct 13 15:53:15 standalone.localdomain ovs-vsctl[570359]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 13 15:53:15 standalone.localdomain ceph-mon[29756]: pgmap v4168: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:15.798 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 15:53:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:15.799 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquired lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 15:53:15 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:15.799 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 13 15:53:15 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4169: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:16 standalone.localdomain virtqemud[425408]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 13 15:53:16 standalone.localdomain virtqemud[425408]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 13 15:53:16 standalone.localdomain virtqemud[425408]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 13 15:53:16 standalone.localdomain systemd[1]: efi.automount: Got automount request for /efi, triggered by 570533 (lsinitrd)
Oct 13 15:53:16 standalone.localdomain systemd[1]: Mounting EFI System Partition Automount...
Oct 13 15:53:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.
Oct 13 15:53:16 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.
Oct 13 15:53:16 standalone.localdomain systemd[1]: Mounted EFI System Partition Automount.
Oct 13 15:53:16 standalone.localdomain podman[570541]: 2025-10-13 15:53:16.525689711 +0000 UTC m=+0.101305868 container health_status 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Oct 13 15:53:16 standalone.localdomain podman[570541]: 2025-10-13 15:53:16.540751505 +0000 UTC m=+0.116367652 container exec_died 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, architecture=x86_64, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Oct 13 15:53:16 standalone.localdomain systemd[1]: 6b06c2c51a8d55202752b8a9fe45ea8948ddda29e8de12bbab3d37ed3af1b21d.service: Deactivated successfully.
Oct 13 15:53:16 standalone.localdomain podman[570542]: 2025-10-13 15:53:16.6205674 +0000 UTC m=+0.191717260 container health_status 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 13 15:53:16 standalone.localdomain podman[570542]: 2025-10-13 15:53:16.65880901 +0000 UTC m=+0.229958870 container exec_died 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd)
Oct 13 15:53:16 standalone.localdomain systemd[1]: 87729595b07b05ff589d2b345c95f985b23b44434a24ba42e418b1525ebd7898.service: Deactivated successfully.
Oct 13 15:53:16 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq asok_command: cache status {prefix=cache status} (starting...)
Oct 13 15:53:16 standalone.localdomain lvm[570689]: PV /dev/loop3 online, VG vg2 is complete.
Oct 13 15:53:16 standalone.localdomain lvm[570689]: VG vg2 finished
Oct 13 15:53:16 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq asok_command: client ls {prefix=client ls} (starting...)
Oct 13 15:53:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:17.009 2 DEBUG nova.network.neutron [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updating instance_info_cache with network_info: [{"id": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "address": "fa:16:3e:6c:20:5b", "network": {"id": "0c455abd-28d4-47e7-a254-e50de0526def", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.237", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e44641a80bcb466cb3dd688e48b72d8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda3e5a61-7a", "ovs_interfaceid": "da3e5a61-7adb-481b-b7f5-703c3939cde2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 15:53:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:17.095 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Releasing lock "refresh_cache-8f68d5aa-abc4-451d-89d2-f5342b71831c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 15:53:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:17.096 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] [instance: 8f68d5aa-abc4-451d-89d2-f5342b71831c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 13 15:53:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:17.096 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:53:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:17.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:53:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:17.259 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:53:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:17.259 2 DEBUG nova.compute.manager [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 15:53:17 standalone.localdomain ceph-mon[29756]: pgmap v4169: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:17 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15220 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:17 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq asok_command: damage ls {prefix=damage ls} (starting...)
Oct 13 15:53:17 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq asok_command: dump loads {prefix=dump loads} (starting...)
Oct 13 15:53:17 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4170: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:17 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 13 15:53:17 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15222 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:17 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:53:17 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:17.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:17 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 13 15:53:18 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 13 15:53:18 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 13 15:53:18 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 13 15:53:18 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 13 15:53:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 13 15:53:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2879088903' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:53:18 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 13 15:53:18 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2879088903' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:53:18 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15226 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:18 standalone.localdomain ceph-mgr[29999]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 13 15:53:18 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T15:53:18.728+0000 7f0263570640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 13 15:53:18 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq asok_command: ops {prefix=ops} (starting...)
Oct 13 15:53:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct 13 15:53:19 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/376830451' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.227 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.258 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.259 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.259 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.259 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Auditing locally available compute resources for standalone.localdomain (node: standalone.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.259 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:53:19 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq asok_command: session ls {prefix=session ls} (starting...)
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:19 standalone.localdomain ceph-mon[29756]: from='client.15220 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:19 standalone.localdomain ceph-mon[29756]: pgmap v4170: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:19 standalone.localdomain ceph-mon[29756]: from='client.15222 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2879088903' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Oct 13 15:53:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.32:0/2879088903' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Oct 13 15:53:19 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/376830451' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Oct 13 15:53:19 standalone.localdomain ceph-mds[67044]: mds.mds.standalone.ophgjq asok_command: status {prefix=status} (starting...)
Oct 13 15:53:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 13 15:53:19 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3685027549' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Oct 13 15:53:19 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:53:19 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/461031439' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.712 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.789 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.790 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.790 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.793 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.793 2 DEBUG nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 13 15:53:19 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4171: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.920 2 WARNING nova.virt.libvirt.driver [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.921 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Hypervisor/Node resource view: name=standalone.localdomain free_ram=9009MB free_disk=6.9495849609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.921 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.921 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.994 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 54a46fec-332e-42f9-83ed-88e763d13f63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.994 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Instance 8f68d5aa-abc4-451d-89d2-f5342b71831c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.994 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 15:53:19 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:19.994 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Final resource view: name=standalone.localdomain phys_ram=15738MB used_ram=1536MB phys_disk=6GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 15:53:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:20.043 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3259116339' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3083063180' entity='client.admin' cmd={"prefix": "report"} : dispatch
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2152185525' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:53:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:20.481 2 DEBUG oslo_concurrency.processutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 15:53:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:20.486 2 DEBUG nova.compute.provider_tree [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed in ProviderTree for provider: e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3882805984' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Oct 13 15:53:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:20.516 2 DEBUG nova.scheduler.client.report [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Inventory has not changed for provider e9d9b0c7-d821-4c8a-b3d0-4ee64acbd269 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 6, 'reserved': 1, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 15:53:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:20.517 2 DEBUG nova.compute.resource_tracker [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Compute_service record updated for standalone.localdomain:standalone.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 15:53:20 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:20.518 2 DEBUG oslo_concurrency.lockutils [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: from='client.15226 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3685027549' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/461031439' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3259116339' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3083063180' entity='client.admin' cmd={"prefix": "report"} : dispatch
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2152185525' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3882805984' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/356760299' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 13 15:53:20 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2887719541' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Oct 13 15:53:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Oct 13 15:53:21 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/4126700683' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Oct 13 15:53:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 13 15:53:21 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1873686443' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Oct 13 15:53:21 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:21.514 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:53:21 standalone.localdomain ceph-mon[29756]: pgmap v4171: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:21 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/356760299' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Oct 13 15:53:21 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2887719541' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Oct 13 15:53:21 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4126700683' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Oct 13 15:53:21 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1873686443' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Oct 13 15:53:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct 13 15:53:21 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/126656018' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Oct 13 15:53:21 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 13 15:53:21 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2909174833' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Oct 13 15:53:21 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4172: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:22 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15258 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:22 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15260 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:22 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15262 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:22 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/126656018' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Oct 13 15:53:22 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2909174833' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Oct 13 15:53:22 standalone.localdomain ceph-mon[29756]: pgmap v4172: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:22 standalone.localdomain ceph-mon[29756]: from='client.15258 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:22 standalone.localdomain ceph-mon[29756]: from='client.15260 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:22 standalone.localdomain ceph-mon[29756]: from='client.15262 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:22 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15264 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:22 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Oct 13 15:53:22 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3819371863' entity='client.admin' cmd={"prefix": "features"} : dispatch
Oct 13 15:53:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15268 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:23.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:23.227 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:53:23 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:23.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] scanning for idle connections..
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [volumes INFO mgr_util] cleaning up connections: []
Oct 13 15:53:23 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct 13 15:53:23 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3124388006' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Optimize plan auto_2025-10-13_15:53:23
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] do_upmap
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] pools ['.mgr', 'manila_metadata', 'backups', 'images', 'manila_data', 'volumes', 'vms']
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [balancer INFO root] prepared 0/10 changes
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15272 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:23 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41630 DF PROTO=TCP SPT=49620 DPT=9102 SEQ=3033798873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD7A0FE0000000001030307) 
Oct 13 15:53:23 standalone.localdomain ceph-mon[29756]: from='client.15264 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:23 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3819371863' entity='client.admin' cmd={"prefix": "features"} : dispatch
Oct 13 15:53:23 standalone.localdomain ceph-mon[29756]: from='client.15268 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:23 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3124388006' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Oct 13 15:53:23 standalone.localdomain ceph-mon[29756]: from='client.15272 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15274 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 13 15:53:23 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T15:53:23.711+0000 7f0263570640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 13 15:53:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.
Oct 13 15:53:23 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.
Oct 13 15:53:23 standalone.localdomain podman[571884]: 2025-10-13 15:53:23.820969878 +0000 UTC m=+0.082393734 container health_status 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 13 15:53:23 standalone.localdomain podman[571884]: 2025-10-13 15:53:23.833819655 +0000 UTC m=+0.095243541 container exec_died 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 15:53:23 standalone.localdomain systemd[1]: 64fc015ec3d0202d22909f172036ba2567b921a6774c0dd8788413a52a636bea.service: Deactivated successfully.
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15276 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 13 15:53:23 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4173: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:23 standalone.localdomain podman[571886]: 2025-10-13 15:53:23.924991479 +0000 UTC m=+0.186264720 container health_status e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 15:53:23 standalone.localdomain podman[571886]: 2025-10-13 15:53:23.930426727 +0000 UTC m=+0.191699958 container exec_died e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 15:53:23 standalone.localdomain systemd[1]: e75a9ce8e64c724457b96c2f563e8f7b5f17d25479f1eb6447a196535e91ee52.service: Deactivated successfully.
Oct 13 15:53:24 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15280 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:24.228 2 DEBUG oslo_service.periodic_task [None req-7fc1d580-9b06-4e69-9705-9756b8ae025f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 15:53:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 13 15:53:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/886107392' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Oct 13 15:53:24 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:24.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:24 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41631 DF PROTO=TCP SPT=49620 DPT=9102 SEQ=3033798873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD7A4F70000000001030307) 
Oct 13 15:53:24 standalone.localdomain ceph-mon[29756]: from='client.15274 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:24 standalone.localdomain ceph-mon[29756]: from='client.15276 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:24 standalone.localdomain ceph-mon[29756]: pgmap v4173: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:24 standalone.localdomain ceph-mon[29756]: from='client.15280 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:24 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/886107392' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Oct 13 15:53:24 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:24 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 13 15:53:24 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/340976138' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Oct 13 15:53:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.
Oct 13 15:53:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.
Oct 13 15:53:24 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.
Oct 13 15:53:24 standalone.localdomain podman[572104]: 2025-10-13 15:53:24.819022705 +0000 UTC m=+0.081569488 container health_status 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, health_status=healthy, description=Red Hat OpenStack Platform 17.1 swift-object, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, container_name=swift_object_server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhosp17/openstack-swift-object, release=1, io.openshift.tags=rhosp osp openstack osp-17.1, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1, build-date=2025-07-21T14:56:28, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 swift-object, vendor=Red Hat, Inc., version=17.1.9, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-swift-object-container)
Oct 13 15:53:24 standalone.localdomain systemd[1]: tmp-crun.pHKP9P.mount: Deactivated successfully.
Oct 13 15:53:24 standalone.localdomain podman[572106]: 2025-10-13 15:53:24.887333755 +0000 UTC m=+0.149641491 container health_status dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, health_status=healthy, version=17.1.9, com.redhat.component=openstack-swift-account-container, build-date=2025-07-21T16:11:22, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.33.12, managed_by=tripleo_ansible, release=1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, vendor=Red Hat, Inc., name=rhosp17/openstack-swift-account, description=Red Hat OpenStack Platform 17.1 swift-account, distribution-scope=public, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhosp osp openstack osp-17.1, batch=17.1_20250721.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=swift_account_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']})
Oct 13 15:53:24 standalone.localdomain podman[572105]: 2025-10-13 15:53:24.925611226 +0000 UTC m=+0.186242070 container health_status ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.9, io.buildah.version=1.33.12, build-date=2025-07-21T15:54:32, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, name=rhosp17/openstack-swift-container, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.expose-services=, release=1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-swift-container-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 swift-container, batch=17.1_20250721.1, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=6c85b2809937c86a19237cd4252af09396645905, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:37.662160+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:38.662287+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:39.662410+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:40.662537+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:41.662690+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:42.662886+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:43.663101+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:44.663860+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:45.664152+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:46.664406+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:47.664605+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:48.664750+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:49.664911+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:50.665108+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 119.847549438s of 119.863853455s, submitted: 5
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:51.665238+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:52.665396+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142041088 unmapped: 25526272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:53.665521+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:54.665736+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:55.665874+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:56.666015+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:57.666153+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:58.666277+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:19:59.666411+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:00.666821+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:01.667057+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:02.667182+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:03.667407+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:04.667538+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:05.667659+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:06.667813+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:07.667941+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:08.668095+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:09.668227+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:10.668409+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:11.668520+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:12.668667+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:13.668811+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:14.668997+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:15.669128+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:16.669314+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:17.669468+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:18.669725+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142057472 unmapped: 25509888 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:19.669862+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:20.670008+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:21.670153+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:22.670293+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:23.670430+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:24.670670+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:25.670789+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:26.670923+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:27.671080+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:28.671198+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:29.671358+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:30.671496+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:31.671624+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:32.671767+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:33.671922+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:34.672101+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:35.672238+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:36.672379+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:37.672559+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:38.672681+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:39.672834+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:40.673002+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:41.673169+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:42.673329+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:43.673447+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:44.673580+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:45.673735+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:46.673847+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:47.673957+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:48.674623+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:49.674760+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:50.674883+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:51.674960+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:52.675092+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142073856 unmapped: 25493504 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:53.675205+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:54.675432+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:55.675607+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:56.675737+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:57.675897+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:58.676053+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:20:59.676200+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:00.676337+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:01.676523+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:02.676652+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:03.676811+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:04.677019+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:05.677176+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:06.677382+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:07.677536+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:08.677671+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:09.677814+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:10.677984+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:11.678109+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:12.678272+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:13.678423+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:14.678615+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:15.678788+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:16.678909+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:17.679072+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:18.679264+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:19.679427+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:20.679573+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:21.679765+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:22.679905+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:23.680053+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:24.680276+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:25.680428+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:26.680545+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:27.680689+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:28.680830+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:29.680947+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:30.681065+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:31.681179+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:32.681396+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:33.681563+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:34.681733+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:35.681934+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:36.682082+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:37.682210+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:38.682368+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:39.682572+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:40.682704+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:41.682841+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:42.682998+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:43.683110+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: mgrc ms_handle_reset ms_handle_reset con 0x55a74609b400
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: mgrc reconnect Terminating session with v2:172.18.0.100:6800/1677275897
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: mgrc reconnect Starting new session with [v2:172.18.0.100:6800/1677275897,v1:172.18.0.100:6801/1677275897]
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: get_auth_request con 0x55a745019c00 auth_method 0
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: mgrc handle_mgr_configure stats_period=5
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:44.683250+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 ms_handle_reset con 0x55a746292800 session 0x55a748ac7e00
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74627a400
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 ms_handle_reset con 0x55a74609a800 session 0x55a74709f0e0
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746292800
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 ms_handle_reset con 0x55a7460a5800 session 0x55a74411b4a0
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746270c00
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:45.683387+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:46.683524+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:47.683645+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:48.683788+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:49.683904+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:50.684160+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 120.062294006s of 120.085784912s, submitted: 5
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:51.684294+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:52.684423+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:53.684549+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:54.684704+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:55.684804+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:56.684934+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:57.685065+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:58.685213+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:21:59.685348+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:00.685463+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:01.685571+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:02.686072+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:03.686190+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:04.686354+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:05.686560+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:06.686719+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:07.686876+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:08.687028+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:09.687204+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:10.687392+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:11.687580+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:12.687714+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:13.687846+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:14.688010+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:15.688154+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142114816 unmapped: 25452544 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:16.688333+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _send_mon_message to mon.standalone at v2:172.18.0.100:3300/0
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:17.688461+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:18.688986+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:19.689176+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:20.689362+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:21.689613+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:22.689741+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:23.689956+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:24.690146+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:25.690290+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:26.690440+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:27.690546+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:28.690707+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:29.690878+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:30.691073+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:31.691230+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:32.691390+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:33.691595+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:34.691746+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:35.691872+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:36.692001+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:37.692139+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:38.692280+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:39.692432+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:40.692679+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:41.692878+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:42.693014+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:43.693172+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:44.693326+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:45.693448+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:46.693588+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:47.693737+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:48.693875+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:49.694048+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:50.694184+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:51.694305+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:52.694469+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:53.694662+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:54.694809+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:55.694955+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:56.695087+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:57.695210+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:58.695348+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:22:59.695525+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:00.695671+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:01.695859+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:02.696051+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:03.696197+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:04.696383+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:05.696512+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:06.696635+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:07.696776+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:08.696868+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:09.697241+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:10.697352+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:11.697555+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:12.697702+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:13.697826+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:14.698046+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:15.698208+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:16.698377+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:17.698561+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:18.698691+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:19.698818+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:20.699146+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:21.699297+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:22.699416+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:23.699540+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:24.699732+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:25.699892+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:26.700035+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:27.700193+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:28.700421+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:29.700584+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:30.700955+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:31.701349+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:32.701658+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:33.701892+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:34.702153+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:35.702314+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:36.702504+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:37.702724+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:38.702902+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:39.703097+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:40.703258+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:41.703434+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:42.703598+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:43.703737+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:44.703917+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:45.704073+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:46.704238+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:47.704374+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:48.704552+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:49.704776+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:50.704927+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 119.903823853s of 119.921813965s, submitted: 5
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:51.705049+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:52.705180+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:53.705337+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:54.705533+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:55.705722+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:56.705867+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:57.706037+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:58.706179+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142131200 unmapped: 25436160 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:23:59.706365+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:00.706601+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:01.706821+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:02.706956+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:03.707194+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:04.707426+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:05.707637+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:06.707802+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:07.708025+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:08.708185+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:09.708353+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:10.708586+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:11.708753+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:12.709057+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:13.709437+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:14.709727+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:15.710003+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:16.710352+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:17.710545+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:18.710665+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:19.710830+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:20.710979+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:21.711152+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:22.711290+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:23.711443+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:24.711660+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:25.711788+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:26.711903+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:27.712015+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:28.712208+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:29.712348+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:30.712511+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:31.712662+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:32.712797+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:33.713029+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:34.713195+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:35.713387+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:36.713574+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:37.713729+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:38.713859+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:39.714068+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:40.714189+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:41.714329+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:42.714513+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:43.714638+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:44.714854+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:45.716803+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:46.716926+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:47.717068+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:48.717250+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:49.717447+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:50.717578+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:51.717724+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:52.717860+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142139392 unmapped: 25427968 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:53.718018+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:54.718224+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:55.718338+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:56.718464+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:57.718560+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:58.718741+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:24:59.718892+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:00.718993+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 ms_handle_reset con 0x55a746273400 session 0x55a7495b45a0
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746271000
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:01.719348+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:02.719528+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:03.719730+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:04.719919+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:05.720334+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:06.720710+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:07.721048+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:08.721196+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:09.721441+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:10.721610+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:11.721762+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:12.723947+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:13.724176+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:14.724520+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:15.733589+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:16.733738+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:17.733894+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:18.734077+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:19.734260+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:20.734412+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:21.734582+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:22.734756+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:23.734929+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:24.735158+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:25.735316+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:26.735496+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:27.745665+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:28.745946+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:29.746088+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:30.746300+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:31.746553+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:32.746722+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:33.746915+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:34.747134+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:35.747286+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:36.747408+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:37.747538+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:38.747666+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:39.747881+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:40.748082+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:41.748254+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:42.748414+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:43.748571+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:44.748820+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:45.749037+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:46.749213+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:47.749409+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:48.749572+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:49.749851+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142163968 unmapped: 25403392 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:50.750068+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 120.012695312s of 120.032173157s, submitted: 5
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142180352 unmapped: 25387008 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:51.750262+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142180352 unmapped: 25387008 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:52.750435+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142180352 unmapped: 25387008 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:53.750602+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142180352 unmapped: 25387008 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:54.750855+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142180352 unmapped: 25387008 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:55.751055+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142180352 unmapped: 25387008 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:56.751267+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142180352 unmapped: 25387008 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:57.751505+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142180352 unmapped: 25387008 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:58.751698+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142180352 unmapped: 25387008 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:25:59.751862+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142180352 unmapped: 25387008 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:00.752631+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142180352 unmapped: 25387008 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:01.752789+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142180352 unmapped: 25387008 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:02.752922+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142180352 unmapped: 25387008 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:03.753113+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:04.753357+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:05.753467+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:06.753638+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:07.753771+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:08.753915+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:09.754036+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:10.754253+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:11.754447+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:12.754669+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:13.754856+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:14.755123+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:15.755250+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:16.755379+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:17.755609+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:18.755822+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:19.756039+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:20.756259+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:21.768732+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:22.768909+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:23.769065+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:24.769250+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:25.769384+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:26.769543+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:27.769673+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:28.769802+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:29.769943+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:30.770128+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:31.770265+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:32.770383+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:33.770545+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:34.770689+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:35.770813+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:36.770924+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:37.771078+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:38.771249+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 6600.1 total, 600.0 interval
                                                        Cumulative writes: 7014 writes, 30K keys, 7014 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                                        Cumulative WAL: 7014 writes, 1318 syncs, 5.32 writes per sync, written: 0.03 GB, 0.00 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 45 writes, 100 keys, 45 commit groups, 1.0 writes per commit group, ingest: 0.03 MB, 0.00 MB/s
                                                        Interval WAL: 45 writes, 20 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:39.771455+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:40.771584+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:41.771699+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:42.771847+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:43.771993+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:44.772205+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:45.772366+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:46.772505+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:47.772645+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:48.772775+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:49.772925+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:50.773047+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:51.773152+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:52.773328+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:53.773467+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:54.773627+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:55.773870+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:56.774083+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:57.774255+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:58.774448+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:26:59.774697+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:00.774923+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:01.775085+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:02.775425+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:03.775585+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:04.775810+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:05.775978+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:06.776145+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:07.776304+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:08.776524+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:09.776688+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:10.776838+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:11.776982+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:12.777130+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:13.777284+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:14.777452+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:15.777591+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:16.777735+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:17.777900+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _send_mon_message to mon.standalone at v2:172.18.0.100:3300/0
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:18.778018+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:19.778162+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:20.778265+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:21.778387+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:22.778523+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:23.778682+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:24.778917+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:25.779073+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:26.779269+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:27.779454+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:28.779705+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:29.779867+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:30.780051+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:31.780268+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:32.780547+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:33.780744+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:34.781036+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:35.781220+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:36.781387+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:37.781580+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:38.781783+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:39.781945+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:40.782146+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:41.782302+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:42.782459+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:43.782677+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:44.782899+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:45.783275+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:46.783436+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:47.783570+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:48.783752+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:49.798765+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142196736 unmapped: 25370624 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:50.798922+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 120.019523621s of 120.032279968s, submitted: 5
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:51.799085+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:52.799255+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:53.799419+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:54.799619+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:55.799775+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:56.799915+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:57.800052+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:58.800231+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:27:59.800361+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:00.800589+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:01.800723+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:02.803036+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:03.803976+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:04.804263+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:05.804423+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:06.804707+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:07.804857+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:08.805021+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:09.805151+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:10.805297+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:11.805419+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:12.805648+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:13.805798+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:14.806024+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:15.806165+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:16.806309+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:17.806447+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:18.806586+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:19.806768+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:20.807164+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:21.807282+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:22.807514+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:23.807661+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:24.807874+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:25.808025+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:26.808183+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:27.808319+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:28.808412+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:29.808543+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:30.808637+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:31.808763+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:32.808837+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:33.809001+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:34.809170+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:35.809297+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:36.809416+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:37.809551+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:38.809718+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:39.809913+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:40.810065+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:41.810182+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:42.810338+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:43.810475+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:44.810650+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:45.810769+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:46.810898+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:47.811022+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:48.811138+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:49.811290+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:50.811419+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:51.811570+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:52.811686+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:53.811801+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:54.811971+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:55.812054+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:56.812218+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:57.812355+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:58.812584+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:28:59.812703+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:00.813048+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:01.813173+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:02.813310+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:03.813462+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:04.813653+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:05.813781+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:06.813941+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:07.814087+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:08.814295+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:09.814415+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:10.814563+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:11.814711+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:12.814846+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:13.814972+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:14.815110+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:15.815238+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:16.815397+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:17.815538+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:18.815689+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:19.815810+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:20.815941+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:21.816101+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:22.816235+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:23.816403+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:24.816580+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:25.816714+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:26.816841+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:27.816995+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:28.817155+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:29.817299+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:30.818597+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:31.818754+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:32.819246+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:33.821711+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:34.821933+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:35.822216+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:36.822335+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:37.822475+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:38.822640+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:39.822759+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:40.822961+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:41.823125+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:42.823258+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:43.823376+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:44.823539+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:45.823666+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:46.823787+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:47.824712+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:48.824858+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:49.825072+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142229504 unmapped: 25337856 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:50.825195+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 119.998916626s of 120.013305664s, submitted: 5
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:51.825381+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:52.825548+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:53.825679+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:54.825891+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:55.826019+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:56.826240+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:57.826364+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:58.826545+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:29:59.826707+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:00.826814+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:01.826950+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:02.827101+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:03.827284+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:04.827529+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:05.827714+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:06.827853+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:07.827990+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:08.828077+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:09.828217+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:10.828356+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:11.828560+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:12.828676+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:13.828845+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:14.829396+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:15.829509+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:16.829662+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:17.829815+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:18.829978+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:19.830110+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:20.830339+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:21.830671+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:22.830875+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:23.831021+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:24.831246+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:25.831415+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:26.831586+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:27.831744+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:28.831888+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:29.832009+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:30.832128+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:31.832331+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:32.832504+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:33.832638+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:34.832845+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:35.832954+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:36.833090+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:37.833211+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:38.833344+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:39.833512+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:40.833663+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:41.833837+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:42.833999+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:43.834129+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:44.834313+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:45.834429+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:46.834585+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:47.834829+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:48.835009+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:49.837084+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:50.837284+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:51.837542+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:52.837669+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:53.837834+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:54.838032+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:55.838163+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:56.838382+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:57.838704+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:58.838868+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:30:59.839064+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:00.839237+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:01.839366+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:02.839570+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:03.839718+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:04.839884+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:05.840103+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:06.840259+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:07.840396+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:08.840591+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:09.840904+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:10.841171+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:11.841343+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:12.841512+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:13.841658+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142237696 unmapped: 25329664 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:14.842055+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:15.842393+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:16.842571+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:17.842757+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:18.842876+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:19.843003+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:20.843157+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:21.843313+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:22.843477+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:23.843666+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:24.843921+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:25.844055+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:26.844192+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:27.844415+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:28.844546+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:29.844675+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:30.844835+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:31.844991+0000)
Oct 13 15:53:24 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:24 standalone.localdomain podman[572104]: 2025-10-13 15:53:24.998748454 +0000 UTC m=+0.261295227 container exec_died 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada (image=registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1, name=swift_object_server, com.redhat.component=openstack-swift-object-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhosp17/openstack-swift-object, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-object:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, io.openshift.tags=rhosp osp openstack osp-17.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 swift-object, tcib_managed=true, architecture=x86_64, version=17.1.9, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-object, batch=17.1_20250721.1, build-date=2025-07-21T14:56:28, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-object, release=1, summary=Red Hat OpenStack Platform 17.1 swift-object, container_name=swift_object_server, vcs-ref=ea64d80aa7bb87c383eb80a1619bc9319d43295d, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-object/images/17.1.9-1)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:32.845173+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:33.845367+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:34.845614+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:35.845753+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:36.845873+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:37.845986+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:38.846137+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:39.846297+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:40.846514+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:41.846664+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:42.846794+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:43.846916+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:44.847075+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:45.847224+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:46.847403+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:47.847621+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:48.847786+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:49.847981+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142254080 unmapped: 25313280 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:50.848180+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 119.934669495s of 119.953399658s, submitted: 5
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142278656 unmapped: 25288704 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:51.848390+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142278656 unmapped: 25288704 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:52.848552+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142278656 unmapped: 25288704 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:53.848730+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142278656 unmapped: 25288704 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:54.849064+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142278656 unmapped: 25288704 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:55.849259+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142278656 unmapped: 25288704 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:56.849442+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142278656 unmapped: 25288704 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:57.849575+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142278656 unmapped: 25288704 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:58.849730+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142278656 unmapped: 25288704 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:31:59.849863+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142278656 unmapped: 25288704 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:00.850010+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142278656 unmapped: 25288704 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:01.850165+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142278656 unmapped: 25288704 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:02.850279+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142278656 unmapped: 25288704 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:03.850422+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:04.850606+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:05.850804+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:06.851068+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:07.851260+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:08.851440+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:09.851527+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:10.851722+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:11.851886+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:12.852049+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:13.852200+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:14.852340+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:15.852527+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:16.852654+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:17.852777+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:18.852956+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:19.853119+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _send_mon_message to mon.standalone at v2:172.18.0.100:3300/0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:20.853319+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:21.853538+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:22.853693+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142295040 unmapped: 25272320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:23.853866+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:24.854829+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:25.854969+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:26.855132+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:27.855258+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:28.855372+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:29.855499+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:30.855619+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:31.855740+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:32.855914+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:33.856105+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:34.856324+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:35.856558+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:36.856788+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:37.856967+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:38.857185+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain systemd[1]: 45aa7f4339187121d7461ffff2d1616d59ec7d020da530cac0bab0af7da86ada.service: Deactivated successfully.
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:39.857322+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:40.857459+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:41.857839+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:42.857995+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142311424 unmapped: 25255936 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:43.858197+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142327808 unmapped: 25239552 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:44.858390+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142327808 unmapped: 25239552 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:45.858530+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142327808 unmapped: 25239552 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:46.858616+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142327808 unmapped: 25239552 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:47.858794+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142327808 unmapped: 25239552 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:48.858976+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142327808 unmapped: 25239552 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:49.859155+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142327808 unmapped: 25239552 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:50.859297+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142327808 unmapped: 25239552 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:51.859473+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142327808 unmapped: 25239552 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:52.859592+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142327808 unmapped: 25239552 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:53.859761+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142344192 unmapped: 25223168 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:54.859958+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142344192 unmapped: 25223168 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:55.874732+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142344192 unmapped: 25223168 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:56.875042+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142344192 unmapped: 25223168 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:57.875188+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142344192 unmapped: 25223168 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:58.875323+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142344192 unmapped: 25223168 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:32:59.876749+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142344192 unmapped: 25223168 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:00.876924+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142344192 unmapped: 25223168 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:01.877125+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142344192 unmapped: 25223168 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:02.877794+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142344192 unmapped: 25223168 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:03.877953+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:04.878116+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:05.878240+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:06.878368+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:07.878575+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:08.878771+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:09.878919+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:10.879056+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:11.879243+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:12.879420+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:13.879589+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:14.879783+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:15.879962+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:16.880084+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:17.880236+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:18.880403+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:19.880573+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:20.880688+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:21.880854+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:22.881018+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:23.881185+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:24.881366+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:25.881539+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:26.881671+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:27.881827+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:28.882000+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:29.882161+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:30.882311+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:31.882569+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:32.882725+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:33.882863+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:34.883044+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:35.883172+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:36.883351+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:37.883562+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:38.883730+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:39.883876+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:40.884074+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:41.884225+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:42.884430+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:43.884666+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:44.884950+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:45.885113+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:46.885275+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:47.885415+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:48.885585+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:49.885739+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:50.885901+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 119.951950073s of 119.969505310s, submitted: 5
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:51.886032+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:52.886189+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:53.886381+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:54.886587+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:55.886727+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:56.886912+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:57.887089+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:58.887221+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:33:59.887375+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:00.887586+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:01.887744+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:02.887925+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:03.888159+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:04.888351+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:05.888542+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:06.888726+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:07.888906+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:08.889073+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:09.889226+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:10.889394+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:11.889558+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:12.889679+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:13.889859+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:14.890077+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:15.890212+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:16.890358+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:17.890529+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:18.890648+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:19.890802+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:20.890972+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:21.891130+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:22.891298+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:23.891533+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:24.891710+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:25.891848+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:26.892018+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:27.892276+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:28.892464+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:29.892664+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:30.892824+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:31.892999+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:32.893252+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:33.893410+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:34.893756+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:35.893933+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:36.894098+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:37.894301+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:38.894534+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:39.894699+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:40.894903+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:41.895126+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:42.895327+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:43.895581+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142360576 unmapped: 25206784 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:44.896000+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:45.896213+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:46.896432+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:47.896625+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:48.897046+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:49.897322+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:50.897602+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:51.897760+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:52.897915+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:53.898098+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:54.898481+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:55.898710+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:56.898868+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:57.899059+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:58.899196+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:34:59.899367+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:00.899568+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:01.899761+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:02.899929+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:03.900064+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:04.900380+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:05.900535+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:06.900679+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:07.900882+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:08.901044+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:09.901227+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:10.901470+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:11.901699+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:12.901874+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:13.902008+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:14.902239+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:15.902386+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:16.902520+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:17.902659+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:18.902806+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:19.902971+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:20.903140+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:21.903345+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:22.903545+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:23.903723+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:24.903918+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:25.904045+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:26.904216+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:27.904455+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:28.904663+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:29.904803+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:30.905001+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:31.905153+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:32.905307+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:33.905560+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:34.906624+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:35.906748+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:36.906901+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:37.907103+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:38.907289+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:39.907441+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:40.907592+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:41.907740+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:42.907888+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:43.908234+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:44.909250+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:45.909379+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:46.909606+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:47.909773+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:48.909900+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:49.910062+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142376960 unmapped: 25190400 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:50.910198+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 120.004753113s of 120.022567749s, submitted: 5
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:51.910589+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:52.910773+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:53.910945+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:54.911196+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:55.911540+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:56.911868+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:57.912086+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:58.912259+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:35:59.912450+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:00.912654+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:01.912842+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:02.913037+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-10-13T15:36:03.913276+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _send_mon_message to mon.standalone at v2:172.18.0.100:3300/0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _finish_auth 0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:03.914250+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:04.913588+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:05.913828+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:06.914014+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:07.914182+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:08.914379+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:09.914563+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:10.914736+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:11.914996+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:12.915252+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:13.915530+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:14.915752+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:15.915918+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:16.916091+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:17.916264+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:18.916410+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:19.916626+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:20.916790+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:21.917011+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:22.917189+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:23.917358+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:24.917560+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:25.917737+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:26.917915+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:27.918092+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:28.918274+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142401536 unmapped: 25165824 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:29.918574+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:30.918814+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:31.918994+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:32.919225+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:33.919458+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:34.919705+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:35.919894+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:36.920064+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:37.920229+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 7200.1 total, 600.0 interval
                                                        Cumulative writes: 7059 writes, 30K keys, 7059 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                                        Cumulative WAL: 7059 writes, 1338 syncs, 5.28 writes per sync, written: 0.03 GB, 0.00 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 45 writes, 100 keys, 45 commit groups, 1.0 writes per commit group, ingest: 0.03 MB, 0.00 MB/s
                                                        Interval WAL: 45 writes, 20 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:38.920469+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:39.920825+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:40.921041+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:41.921190+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:42.921414+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:43.921596+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: mgrc ms_handle_reset ms_handle_reset con 0x55a745019c00
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: mgrc reconnect Terminating session with v2:172.18.0.100:6800/1677275897
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: mgrc reconnect Starting new session with [v2:172.18.0.100:6800/1677275897,v1:172.18.0.100:6801/1677275897]
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: get_auth_request con 0x55a746297400 auth_method 0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: mgrc handle_mgr_configure stats_period=5
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:44.921849+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 ms_handle_reset con 0x55a74627a400 session 0x55a7495b4000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74423dc00
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 ms_handle_reset con 0x55a746292800 session 0x55a74b445c20
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74627a400
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 ms_handle_reset con 0x55a746270c00 session 0x55a74620c1e0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746270800
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:45.922031+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:46.922197+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:47.922446+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:48.922752+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:49.923009+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:50.923237+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:51.923460+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:52.923594+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142409728 unmapped: 25157632 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:53.923786+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142426112 unmapped: 25141248 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:54.923970+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142426112 unmapped: 25141248 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:55.924098+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142426112 unmapped: 25141248 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:56.924274+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142426112 unmapped: 25141248 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:57.924448+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142426112 unmapped: 25141248 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:58.924643+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142426112 unmapped: 25141248 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:36:59.924790+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142426112 unmapped: 25141248 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb68000/0x0/0x1bfc00000, data 0xbe96c22/0xbf26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:00.924915+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2652127 data_alloc: 83886080 data_used: 42278912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142426112 unmapped: 25141248 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:01.925056+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142426112 unmapped: 25141248 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:02.925305+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142426112 unmapped: 25141248 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 72.482543945s of 72.502090454s, submitted: 5
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:03.925506+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:04.925796+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 ms_handle_reset con 0x55a744968400 session 0x55a7461dc3c0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 ms_handle_reset con 0x55a74627a000 session 0x55a74411bc20
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 ms_handle_reset con 0x55a746290400 session 0x55a748d8b0e0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:05.925997+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xbea4c22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2654975 data_alloc: 83886080 data_used: 41353216
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:06.926135+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:07.926330+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:08.926606+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:09.926793+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xbea4c22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:10.926998+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2654975 data_alloc: 83886080 data_used: 41353216
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:11.927210+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:12.927397+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:13.927627+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:14.927870+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xbea4c22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:15.928017+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2654975 data_alloc: 83886080 data_used: 41353216
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:16.928265+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:17.928463+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:18.928601+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:19.928753+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xbea4c22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:20.928932+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _send_mon_message to mon.standalone at v2:172.18.0.100:3300/0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2654975 data_alloc: 83886080 data_used: 41353216
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:21.929084+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:22.929242+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:23.929429+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:24.929635+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xbea4c22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xbea4c22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:25.929845+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2654975 data_alloc: 83886080 data_used: 41353216
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:26.930022+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:27.930203+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:28.930410+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xbea4c22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:29.930610+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xbea4c22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:30.930792+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2654975 data_alloc: 83886080 data_used: 41353216
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:31.931020+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xbea4c22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:32.931192+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:33.931355+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xbea4c22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:34.931557+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:35.931785+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 2654975 data_alloc: 83886080 data_used: 41353216
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:36.931976+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142065664 unmapped: 25501696 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a7443aa000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.274757385s of 34.356849670s, submitted: 23
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a744968400
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:37.932131+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74627a000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142082048 unmapped: 25485312 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:38.932258+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 142204928 unmapped: 25362432 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:39.932378+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xbea4c22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 141983744 unmapped: 25583616 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:40.932539+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2625499 data_alloc: 100663296 data_used: 40349696
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 141983744 unmapped: 25583616 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:41.932690+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 141983744 unmapped: 25583616 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:42.932825+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 141983744 unmapped: 25583616 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:43.933006+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 141983744 unmapped: 25583616 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:44.933184+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 141983744 unmapped: 25583616 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xbea4c22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:45.933315+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2625499 data_alloc: 100663296 data_used: 40349696
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 141983744 unmapped: 25583616 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:46.933481+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 141983744 unmapped: 25583616 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:47.933673+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 141901824 unmapped: 25665536 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xbea4c22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:48.933839+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.297326088s of 11.317374229s, submitted: 5
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 147136512 unmapped: 20430848 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:49.933961+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 147152896 unmapped: 20414464 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:50.934111+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2942185 data_alloc: 100663296 data_used: 39084032
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 147161088 unmapped: 20406272 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:51.934270+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 147169280 unmapped: 20398080 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:52.934439+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 147169280 unmapped: 20398080 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:53.934562+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1ae266000/0x0/0x1bfc00000, data 0xe7dcc22/0xd824000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 147169280 unmapped: 20398080 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:54.934861+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146513920 unmapped: 21053440 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:55.935017+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2941469 data_alloc: 100663296 data_used: 39084032
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146513920 unmapped: 21053440 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:56.935219+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146513920 unmapped: 21053440 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:57.935421+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146513920 unmapped: 21053440 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1ae23a000/0x0/0x1bfc00000, data 0xe80bc22/0xd853000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:58.935634+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146513920 unmapped: 21053440 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:37:59.935825+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146513920 unmapped: 21053440 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:00.936033+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2941789 data_alloc: 100663296 data_used: 39092224
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146530304 unmapped: 21037056 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:01.936215+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.683083534s of 13.030928612s, submitted: 194
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1ae23a000/0x0/0x1bfc00000, data 0xe80bc22/0xd853000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146743296 unmapped: 20824064 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:02.936390+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146751488 unmapped: 20815872 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:03.936550+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1ae21f000/0x0/0x1bfc00000, data 0xe827c22/0xd86f000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146751488 unmapped: 20815872 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:04.936756+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1ae21f000/0x0/0x1bfc00000, data 0xe827c22/0xd86f000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146915328 unmapped: 20652032 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:05.936916+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1ae21f000/0x0/0x1bfc00000, data 0xe827c22/0xd86f000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [0,10,2,1])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2977805 data_alloc: 100663296 data_used: 38793216
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145358848 unmapped: 22208512 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:06.937089+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145358848 unmapped: 22208512 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:07.937261+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145358848 unmapped: 22208512 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:08.937478+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145358848 unmapped: 22208512 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:09.937665+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145358848 unmapped: 22208512 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:10.937872+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1ae139000/0x0/0x1bfc00000, data 0xea14c22/0xd949000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2977805 data_alloc: 100663296 data_used: 38793216
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145367040 unmapped: 22200320 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:11.938067+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.703477859s of 10.002468109s, submitted: 118
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145604608 unmapped: 21962752 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:12.938345+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145604608 unmapped: 21962752 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:13.938556+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145604608 unmapped: 21962752 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:14.938779+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145604608 unmapped: 21962752 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:15.938989+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1ae123000/0x0/0x1bfc00000, data 0xea35c22/0xd96a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2973853 data_alloc: 100663296 data_used: 38797312
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145604608 unmapped: 21962752 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:16.939147+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145604608 unmapped: 21962752 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1ae123000/0x0/0x1bfc00000, data 0xea35c22/0xd96a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:17.939296+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145604608 unmapped: 21962752 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:18.939532+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145612800 unmapped: 21954560 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:19.939782+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145612800 unmapped: 21954560 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:20.940043+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2973853 data_alloc: 100663296 data_used: 38797312
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145612800 unmapped: 21954560 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:21.940232+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1ae123000/0x0/0x1bfc00000, data 0xea35c22/0xd96a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145612800 unmapped: 21954560 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:22.940378+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145612800 unmapped: 21954560 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:23.940545+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145612800 unmapped: 21954560 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:24.940800+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145620992 unmapped: 21946368 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:25.940982+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.822599411s of 13.834762573s, submitted: 2
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 ms_handle_reset con 0x55a74627a000 session 0x55a74709f2c0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2976333 data_alloc: 100663296 data_used: 38789120
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145686528 unmapped: 21880832 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:26.941132+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145694720 unmapped: 21872640 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1ae122000/0x0/0x1bfc00000, data 0xea36c22/0xd96b000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:27.941268+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145694720 unmapped: 21872640 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:28.941439+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1ae122000/0x0/0x1bfc00000, data 0xea36c22/0xd96b000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145694720 unmapped: 21872640 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:29.941657+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145694720 unmapped: 21872640 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:30.941818+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2976333 data_alloc: 100663296 data_used: 38789120
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145694720 unmapped: 21872640 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:31.941969+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74628c000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 ms_handle_reset con 0x55a74628c000 session 0x55a744fb4f00
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145760256 unmapped: 21807104 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:32.942129+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145768448 unmapped: 21798912 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:33.942331+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145768448 unmapped: 21798912 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:34.942526+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145776640 unmapped: 21790720 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:35.942728+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800443 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145776640 unmapped: 21790720 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:36.942917+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145776640 unmapped: 21790720 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:37.943089+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145776640 unmapped: 21790720 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:38.943224+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145776640 unmapped: 21790720 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:39.943371+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746517800
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.543926239s of 14.643568993s, submitted: 57
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145776640 unmapped: 21790720 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:40.943539+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145784832 unmapped: 21782528 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:41.943693+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145784832 unmapped: 21782528 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:42.943864+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145784832 unmapped: 21782528 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:43.944037+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145784832 unmapped: 21782528 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:44.945168+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145784832 unmapped: 21782528 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:45.945359+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145784832 unmapped: 21782528 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:46.945601+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145784832 unmapped: 21782528 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:47.945745+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145784832 unmapped: 21782528 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:48.945927+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145793024 unmapped: 21774336 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:49.946087+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145793024 unmapped: 21774336 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:50.946288+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145801216 unmapped: 21766144 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:51.946576+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145801216 unmapped: 21766144 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:52.946786+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145801216 unmapped: 21766144 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:53.946950+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145801216 unmapped: 21766144 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:54.947134+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145801216 unmapped: 21766144 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:55.947285+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145801216 unmapped: 21766144 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:56.947449+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145801216 unmapped: 21766144 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:57.947629+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145801216 unmapped: 21766144 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:58.947820+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:38:59.948081+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:00.948271+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:01.948591+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:02.948770+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:03.948979+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:04.949186+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:05.949317+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:06.949403+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:07.949546+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:08.949775+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:09.949991+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:10.950241+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:11.950410+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:12.950545+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:13.950770+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:14.951011+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:15.951183+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:16.951357+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:17.951577+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:18.951705+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:19.951869+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:20.952094+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:21.952284+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:22.952406+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:23.952566+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:24.952762+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:25.952958+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:26.953115+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:27.953290+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:28.953455+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:29.953599+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:30.953745+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:31.953907+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:32.954084+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:33.954282+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:34.954481+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:35.954948+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:36.955125+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:37.955376+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:38.955588+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:39.955851+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:40.956134+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:41.956282+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:42.956425+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:43.956599+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:44.956800+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:45.957105+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:46.957255+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:47.957391+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:48.957620+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:49.957935+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:50.958167+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 70.402816772s of 70.410789490s, submitted: 2
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:51.958357+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:52.958522+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:53.958672+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:54.958869+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:55.959017+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:56.959253+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:57.959435+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:58.959582+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:39:59.959733+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:00.959892+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 ms_handle_reset con 0x55a746271000 session 0x55a74aa71c20
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746517c00
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:01.960063+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145809408 unmapped: 21757952 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:02.960477+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:03.960630+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:04.960803+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:05.960882+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:06.961044+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:07.961181+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:08.961339+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:09.961573+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:10.961731+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:11.961862+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:12.962468+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:13.962625+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:14.962874+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:15.963063+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:16.963212+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:17.963343+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:18.963442+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:19.963633+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:20.963805+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:21.963962+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:22.964113+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:23.964286+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:24.964570+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:25.964756+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:26.964946+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:27.965107+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:28.965275+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:29.965407+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:30.965551+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:31.965708+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:32.965909+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:33.966097+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:34.966372+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:35.966559+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:36.966720+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:37.966847+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:38.966971+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:39.967137+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:40.967285+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:41.967446+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:42.967569+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:43.967731+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:44.967951+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:45.968099+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:46.968271+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:47.968465+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:48.968689+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:49.968834+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:50.969011+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:51.969213+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:52.969414+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:53.969587+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:54.969773+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:55.969936+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:56.970117+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:57.970291+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:58.970482+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:40:59.970706+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:00.970897+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:01.971048+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:02.971190+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:03.971324+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:04.971545+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:05.971640+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:06.971810+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:07.972003+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:08.972119+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:09.972265+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:10.972416+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:11.972557+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2800399 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:12.972717+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:13.973964+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:14.974300+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145817600 unmapped: 21749760 heap: 167567360 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 heartbeat osd_stat(store_statfs(0x1afb5a000/0x0/0x1bfc00000, data 0xd42bc22/0xbf34000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:15.974500+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746290400
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 85.120056152s of 85.130554199s, submitted: 2
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145850368 unmapped: 30113792 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:16.974779+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 2946379 data_alloc: 100663296 data_used: 38637568
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145850368 unmapped: 30113792 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 65 handle_osd_map epochs [65,66], i have 66, src has [1,66]
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 66 ms_handle_reset con 0x55a746290400 session 0x55a748ac6d20
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:17.974982+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746277400
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145965056 unmapped: 29999104 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:18.975135+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _renew_subs
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _send_mon_message to mon.standalone at v2:172.18.0.100:3300/0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 ms_handle_reset con 0x55a746277400 session 0x55a748d8a1e0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145965056 unmapped: 29999104 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26c000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:19.975372+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:20.975608+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:21.975800+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3103487 data_alloc: 100663296 data_used: 38645760
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26c000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:22.975952+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26c000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:23.976397+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:24.976621+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26c000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:25.976776+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26c000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _renew_subs
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _send_mon_message to mon.standalone at v2:172.18.0.100:3300/0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:26.976975+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3103487 data_alloc: 100663296 data_used: 38645760
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:27.977158+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:28.977297+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26c000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:29.977454+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:30.977614+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:31.977767+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3103487 data_alloc: 100663296 data_used: 38645760
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:32.977919+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26c000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:33.978923+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:34.979167+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:35.979331+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26c000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:36.979511+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3103487 data_alloc: 100663296 data_used: 38645760
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:37.979769+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26c000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:38.979890+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26c000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:39.980020+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:40.980153+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:41.980334+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3103487 data_alloc: 100663296 data_used: 38645760
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:42.980524+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:43.980683+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26c000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:44.981137+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:45.981443+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26c000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:46.981666+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3103487 data_alloc: 100663296 data_used: 38645760
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:47.981943+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26c000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:48.982170+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:49.982359+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145973248 unmapped: 29990912 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:50.982766+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.499660492s of 34.862449646s, submitted: 56
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26c000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:51.983005+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3100799 data_alloc: 100663296 data_used: 38645760
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:52.983157+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:53.983409+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:54.983793+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26e000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:55.984046+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:56.984270+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3100959 data_alloc: 100663296 data_used: 38649856
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:57.984444+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26e000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:58.984547+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:41:59.984736+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:00.984943+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26e000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:01.985119+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3100959 data_alloc: 100663296 data_used: 38649856
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:02.985294+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26e000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:03.985468+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:04.985731+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:05.985890+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26e000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:06.986056+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3100959 data_alloc: 100663296 data_used: 38649856
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:07.986278+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26e000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:08.986435+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:09.986604+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26e000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:10.986734+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:11.986926+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3100959 data_alloc: 100663296 data_used: 38649856
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:12.987031+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:13.987184+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26e000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:14.987362+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:15.987560+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:16.987749+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3100959 data_alloc: 100663296 data_used: 38649856
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:17.987904+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26e000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:18.988036+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:19.988183+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:20.988358+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:21.988538+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3100959 data_alloc: 100663296 data_used: 38649856
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _send_mon_message to mon.standalone at v2:172.18.0.100:3300/0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:22.988676+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:23.988840+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26e000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:24.989099+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:25.989294+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:26.989435+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3100959 data_alloc: 100663296 data_used: 38649856
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:27.989603+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 heartbeat osd_stat(store_statfs(0x1ad26e000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:28.989739+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:29.989881+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:30.990002+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:31.990204+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3100959 data_alloc: 100663296 data_used: 38649856
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:32.990389+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145195008 unmapped: 30769152 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:33.990529+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746276000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.008090973s of 43.016746521s, submitted: 2
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 67 handle_osd_map epochs [67,68], i have 68, src has [1,68]
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 68 heartbeat osd_stat(store_statfs(0x1ad26e000/0x0/0x1bfc00000, data 0xfd0e172/0xe820000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [0,1])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 68 ms_handle_reset con 0x55a746276000 session 0x55a7464d52c0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145268736 unmapped: 30695424 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:34.990750+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145276928 unmapped: 30687232 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:35.990943+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746276000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145276928 unmapped: 30687232 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:36.991091+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 69 ms_handle_reset con 0x55a746276000 session 0x55a744fafe00
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3114642 data_alloc: 100663296 data_used: 38666240
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145301504 unmapped: 30662656 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:37.991228+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 69 heartbeat osd_stat(store_statfs(0x1ad266000/0x0/0x1bfc00000, data 0xfd1066a/0xe826000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145301504 unmapped: 30662656 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:38.991361+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145432576 unmapped: 30531584 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:39.991563+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 69 heartbeat osd_stat(store_statfs(0x1ad266000/0x0/0x1bfc00000, data 0xfd1066a/0xe826000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145432576 unmapped: 30531584 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:40.991728+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145432576 unmapped: 30531584 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:41.991870+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3117422 data_alloc: 100663296 data_used: 38666240
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145489920 unmapped: 30474240 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:42.992023+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145489920 unmapped: 30474240 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:43.992191+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145498112 unmapped: 30466048 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:44.992395+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 70 heartbeat osd_stat(store_statfs(0x1ad264000/0x0/0x1bfc00000, data 0xfd117a2/0xe829000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145498112 unmapped: 30466048 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:45.992540+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145498112 unmapped: 30466048 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:46.992644+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3117422 data_alloc: 100663296 data_used: 38666240
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145498112 unmapped: 30466048 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:47.992795+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 70 heartbeat osd_stat(store_statfs(0x1ad264000/0x0/0x1bfc00000, data 0xfd117a2/0xe829000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145498112 unmapped: 30466048 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:48.992937+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145498112 unmapped: 30466048 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:49.993069+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145498112 unmapped: 30466048 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:50.993215+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 70 heartbeat osd_stat(store_statfs(0x1ad264000/0x0/0x1bfc00000, data 0xfd117a2/0xe829000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145498112 unmapped: 30466048 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:51.993360+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3117422 data_alloc: 100663296 data_used: 38666240
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145498112 unmapped: 30466048 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:52.993528+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145506304 unmapped: 30457856 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:53.993650+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 70 heartbeat osd_stat(store_statfs(0x1ad264000/0x0/0x1bfc00000, data 0xfd117a2/0xe829000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145506304 unmapped: 30457856 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:54.993832+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:55.993969+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145506304 unmapped: 30457856 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:56.994060+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145506304 unmapped: 30457856 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3117582 data_alloc: 100663296 data_used: 38670336
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:57.994243+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145506304 unmapped: 30457856 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 70 heartbeat osd_stat(store_statfs(0x1ad264000/0x0/0x1bfc00000, data 0xfd117a2/0xe829000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:58.994405+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145506304 unmapped: 30457856 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 70 heartbeat osd_stat(store_statfs(0x1ad264000/0x0/0x1bfc00000, data 0xfd117a2/0xe829000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _renew_subs
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _send_mon_message to mon.standalone at v2:172.18.0.100:3300/0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:42:59.994574+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145506304 unmapped: 30457856 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:00.994705+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145506304 unmapped: 30457856 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 70 heartbeat osd_stat(store_statfs(0x1ad264000/0x0/0x1bfc00000, data 0xfd117a2/0xe829000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:01.994848+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 70 heartbeat osd_stat(store_statfs(0x1ad264000/0x0/0x1bfc00000, data 0xfd117a2/0xe829000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145506304 unmapped: 30457856 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3117582 data_alloc: 100663296 data_used: 38670336
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:02.995041+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145506304 unmapped: 30457856 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:03.995221+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145506304 unmapped: 30457856 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 70 heartbeat osd_stat(store_statfs(0x1ad264000/0x0/0x1bfc00000, data 0xfd117a2/0xe829000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:04.995436+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145489920 unmapped: 30474240 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746277400
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.932216644s of 31.328594208s, submitted: 131
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 70 handle_osd_map epochs [70,71], i have 71, src has [1,71]
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 71 ms_handle_reset con 0x55a746277400 session 0x55a748ac3a40
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:05.995625+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145514496 unmapped: 30449664 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:06.995769+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145514496 unmapped: 30449664 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3129682 data_alloc: 100663296 data_used: 38678528
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:07.995944+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145514496 unmapped: 30449664 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 71 heartbeat osd_stat(store_statfs(0x1ad25d000/0x0/0x1bfc00000, data 0xfd12e4a/0xe830000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:08.996111+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145514496 unmapped: 30449664 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74627a000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:09.996265+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145514496 unmapped: 30449664 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:10.996429+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145514496 unmapped: 30449664 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 72 ms_handle_reset con 0x55a74627a000 session 0x55a748a93a40
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 72 heartbeat osd_stat(store_statfs(0x1ace5b000/0x0/0x1bfc00000, data 0xfd140cd/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:11.996615+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145514496 unmapped: 30449664 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3127871 data_alloc: 100663296 data_used: 38686720
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:13.004320+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145514496 unmapped: 30449664 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:14.004520+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145514496 unmapped: 30449664 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:15.004775+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145514496 unmapped: 30449664 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:16.004911+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145514496 unmapped: 30449664 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:17.005059+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 145514496 unmapped: 30449664 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 72 heartbeat osd_stat(store_statfs(0x1ace5f000/0x0/0x1bfc00000, data 0xfd13c9a/0xe82f000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.610957146s of 12.043168068s, submitted: 99
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3132043 data_alloc: 100663296 data_used: 38694912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:18.005218+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:19.005376+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5b000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:20.005567+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5b000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:21.006647+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:22.006885+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3132043 data_alloc: 100663296 data_used: 38694912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:23.007068+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:24.007310+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:25.007625+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:26.007941+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5b000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:27.008120+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3132043 data_alloc: 100663296 data_used: 38694912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:28.008272+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5b000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:29.008557+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:30.008745+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:31.008954+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:32.009115+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5b000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _renew_subs
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _send_mon_message to mon.standalone at v2:172.18.0.100:3300/0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.280702
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 239075328 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3132043 data_alloc: 100663296 data_used: 38694912
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:33.011166+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5b000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:34.011334+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 146563072 unmapped: 29401088 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74628c000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a74628c000 session 0x55a748cf90e0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746290400
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a746290400 session 0x55a7486865a0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746276000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a746276000 session 0x55a7495b4b40
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5b000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746277400
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.154132843s of 17.200281143s, submitted: 34
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a746277400 session 0x55a748de8000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74627a000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:35.011523+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 152608768 unmapped: 23355392 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a74627a000 session 0x55a744e665a0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74628c000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a74628c000 session 0x55a74709f4a0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74943e800
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:36.011650+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a74943e800 session 0x55a744ec1e00
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746276000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a746276000 session 0x55a7466463c0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746277400
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a746277400 session 0x55a746196780
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 153944064 unmapped: 22020096 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74627a000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a74627a000 session 0x55a748ac6960
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74628c000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a74628c000 session 0x55a74aa712c0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:37.011798+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 153968640 unmapped: 21995520 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 167772160 meta_used: 3317603 data_alloc: 104857600 data_used: 43765760
Oct 13 15:53:25 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15286 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:38.012008+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 153935872 unmapped: 22028288 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74922c000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a74922c000 session 0x55a7464d54a0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:39.012269+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 153935872 unmapped: 22028288 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746276000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a746276000 session 0x55a744fb9c20
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ab91c000/0x0/0x1bfc00000, data 0x11252e44/0xfd72000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746277400
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a746277400 session 0x55a7493e7680
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74627a000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a74627a000 session 0x55a744fb1860
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:40.012446+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 154320896 unmapped: 21643264 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74628c000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746274400
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:41.012537+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 154337280 unmapped: 21626880 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:42.012640+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 155131904 unmapped: 20832256 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:43.012751+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.470588
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.290909
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 230686720 kv_used: 2144 kv_onode_alloc: 142606336 kv_onode_used: 464 meta_alloc: 163577856 meta_used: 3409057 data_alloc: 117440512 data_used: 54632448
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161832960 unmapped: 14131200 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:44.012929+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 166043648 unmapped: 9920512 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ab8f0000/0x0/0x1bfc00000, data 0x1127ce77/0xfd9e000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:45.013109+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 166043648 unmapped: 9920512 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:46.013275+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 166051840 unmapped: 9912320 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ab8f0000/0x0/0x1bfc00000, data 0x1127ce77/0xfd9e000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:47.013406+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 166051840 unmapped: 9912320 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ab8f0000/0x0/0x1bfc00000, data 0x1127ce77/0xfd9e000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:48.013545+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.470588
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.296296
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 226492416 kv_used: 2144 kv_onode_alloc: 142606336 kv_onode_used: 464 meta_alloc: 159383552 meta_used: 3453377 data_alloc: 121634816 data_used: 59990016
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 166051840 unmapped: 9912320 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ab8f0000/0x0/0x1bfc00000, data 0x1127ce77/0xfd9e000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:49.013713+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 166051840 unmapped: 9912320 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:50.013868+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 166051840 unmapped: 9912320 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:51.014151+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ab8f0000/0x0/0x1bfc00000, data 0x1127ce77/0xfd9e000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 166051840 unmapped: 9912320 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.431604385s of 16.788042068s, submitted: 84
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:52.015593+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 166051840 unmapped: 9912320 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:53.015758+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.470588
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.296296
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 226492416 kv_used: 2144 kv_onode_alloc: 142606336 kv_onode_used: 464 meta_alloc: 163577856 meta_used: 3453313 data_alloc: 121634816 data_used: 59998208
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 166051840 unmapped: 9912320 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:54.015882+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ab8f0000/0x0/0x1bfc00000, data 0x1127ce77/0xfd9e000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 166051840 unmapped: 9912320 heap: 175964160 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:55.027627+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 174366720 unmapped: 7340032 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:56.027784+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 173744128 unmapped: 7962624 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:57.027931+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175046656 unmapped: 6660096 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:58.028065+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.470588
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.296296
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 226492416 kv_used: 2144 kv_onode_alloc: 142606336 kv_onode_used: 464 meta_alloc: 159383552 meta_used: 3681921 data_alloc: 125829120 data_used: 62853120
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175054848 unmapped: 6651904 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:43:59.028237+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175054848 unmapped: 6651904 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1a9fda000/0x0/0x1bfc00000, data 0x12b92e77/0x116b4000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:00.028409+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175054848 unmapped: 6651904 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:01.028565+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175054848 unmapped: 6651904 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1a9fda000/0x0/0x1bfc00000, data 0x12b92e77/0x116b4000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:02.028744+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.386187553s of 10.748475075s, submitted: 236
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175202304 unmapped: 6504448 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:03.028914+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.470588
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.296296
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 226492416 kv_used: 2144 kv_onode_alloc: 142606336 kv_onode_used: 464 meta_alloc: 159383552 meta_used: 3683793 data_alloc: 125829120 data_used: 62853120
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175202304 unmapped: 6504448 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:04.029038+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175210496 unmapped: 6496256 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:05.029245+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175210496 unmapped: 6496256 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:06.029403+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175210496 unmapped: 6496256 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1a9fb5000/0x0/0x1bfc00000, data 0x12bb7e77/0x116d9000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:07.029585+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175210496 unmapped: 6496256 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:08.029676+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.470588
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.296296
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 226492416 kv_used: 2144 kv_onode_alloc: 142606336 kv_onode_used: 464 meta_alloc: 159383552 meta_used: 3683497 data_alloc: 125829120 data_used: 62853120
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1a9fb2000/0x0/0x1bfc00000, data 0x12bbae77/0x116dc000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175210496 unmapped: 6496256 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:09.029795+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1a9fb2000/0x0/0x1bfc00000, data 0x12bbae77/0x116dc000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175210496 unmapped: 6496256 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:10.029942+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175226880 unmapped: 6479872 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:11.030066+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175226880 unmapped: 6479872 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:12.030260+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175235072 unmapped: 6471680 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1a9fb2000/0x0/0x1bfc00000, data 0x12bbae77/0x116dc000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:13.030422+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.470588
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.296296
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 226492416 kv_used: 2144 kv_onode_alloc: 142606336 kv_onode_used: 464 meta_alloc: 159383552 meta_used: 3683497 data_alloc: 125829120 data_used: 62853120
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175235072 unmapped: 6471680 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:14.030563+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175235072 unmapped: 6471680 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.193229675s of 12.222276688s, submitted: 7
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:15.031022+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175235072 unmapped: 6471680 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:16.031191+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175235072 unmapped: 6471680 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1a9faf000/0x0/0x1bfc00000, data 0x12bbde77/0x116df000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:17.031398+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 175243264 unmapped: 6463488 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a74628c000 session 0x55a7466470e0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a746274400 session 0x55a7475c1680
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746276000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:18.031559+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a746276000 session 0x55a748687860
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:19.031771+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:20.031968+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:21.032106+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:22.032257+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:23.032397+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:24.032566+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:25.032729+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:26.032883+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:27.033055+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:28.033221+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:29.033407+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:30.033623+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:31.033800+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:32.033942+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:33.034112+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:34.034265+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:35.034437+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:36.034579+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:37.034750+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161177600 unmapped: 20529152 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:38.034912+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161185792 unmapped: 20520960 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:39.035070+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161185792 unmapped: 20520960 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:40.035222+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161185792 unmapped: 20520960 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:41.035374+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161185792 unmapped: 20520960 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:42.035527+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161185792 unmapped: 20520960 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:43.035677+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161185792 unmapped: 20520960 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:44.035810+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161193984 unmapped: 20512768 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:45.036003+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161193984 unmapped: 20512768 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:46.036242+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161193984 unmapped: 20512768 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:47.036407+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161193984 unmapped: 20512768 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:48.036593+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161193984 unmapped: 20512768 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:49.036785+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161193984 unmapped: 20512768 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:50.036940+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161193984 unmapped: 20512768 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:51.037088+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161202176 unmapped: 20504576 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:52.037358+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161202176 unmapped: 20504576 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:53.037549+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161202176 unmapped: 20504576 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:54.037697+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:55.037877+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:56.038030+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:57.038206+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:58.038377+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:44:59.038560+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:00.038939+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:01.039601+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:02.039746+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:03.039903+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:04.040211+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:05.040416+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:06.040555+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:07.040753+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:08.041011+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:09.041360+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:10.041574+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:11.041709+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:12.041875+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161210368 unmapped: 20496384 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:13.042004+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:14.042227+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:15.047697+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:16.047847+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:17.047980+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:18.048125+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:19.048257+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:20.048453+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:21.048608+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:22.048814+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:23.048990+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:24.049122+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:25.049397+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:26.049545+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:27.049683+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:28.049842+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:29.050202+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:30.050395+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:31.050745+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:32.050912+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:33.051110+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:34.051318+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:35.051556+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:36.051864+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:37.052344+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:38.052478+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:39.052638+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:40.052801+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:41.052963+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:42.053118+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:43.053270+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:44.053588+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:45.053829+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:46.053951+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:47.054095+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:48.054237+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1aca1f000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154904 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:49.054391+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:50.054558+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161218560 unmapped: 20488192 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 96.276962280s of 96.576667786s, submitted: 94
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:51.054689+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:52.054869+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:53.056598+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:54.056724+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:55.056920+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:56.057065+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:57.057210+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:58.057343+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:45:59.057516+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:00.057657+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:01.057819+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:02.057983+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:03.058119+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:04.058335+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:05.058785+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:06.058929+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:07.059124+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:08.059260+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:09.059407+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:10.059541+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:11.059697+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:12.059851+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:13.059972+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:14.060156+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:15.060579+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:16.060724+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:17.060925+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:18.061117+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:19.061228+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:20.061381+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:21.061565+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:22.061754+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:23.061888+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:24.067014+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:25.067219+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:26.067373+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:27.067537+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:28.067678+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:29.067858+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:30.067998+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:31.068152+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:32.068301+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:33.068474+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:34.068711+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:35.068906+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:36.069055+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:37.069210+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:38.069392+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                        ** DB Stats **
                                                        Uptime(secs): 7800.1 total, 600.0 interval
                                                        Cumulative writes: 8868 writes, 39K keys, 8868 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                                        Cumulative WAL: 8868 writes, 1999 syncs, 4.44 writes per sync, written: 0.03 GB, 0.00 MB/s
                                                        Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                        Interval writes: 1809 writes, 8313 keys, 1809 commit groups, 1.0 writes per commit group, ingest: 7.87 MB, 0.01 MB/s
                                                        Interval WAL: 1809 writes, 661 syncs, 2.74 writes per sync, written: 0.01 GB, 0.01 MB/s
                                                        Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:39.069576+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:40.069723+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:41.069880+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:42.070039+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:43.070180+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:44.070362+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:45.070565+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:46.070728+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:47.070879+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:48.071036+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:49.071180+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:50.071373+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:51.071546+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:52.071706+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:53.071866+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:54.071984+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:55.072119+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:56.072268+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:57.072462+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 161226752 unmapped: 20480000 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:58.072663+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:46:59.072830+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:00.073018+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:01.073161+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:02.073291+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:03.073422+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:04.073549+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:05.073731+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:06.073915+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:07.074086+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:08.074306+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:09.074481+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:10.074705+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:11.074871+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:12.075037+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:13.075162+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:14.075303+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:15.075471+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:16.075639+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:17.075771+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:18.075892+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:19.076063+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:20.076203+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160563200 unmapped: 21143552 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:21.076362+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:22.076545+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:23.076675+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:24.076802+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _send_mon_message to mon.standalone at v2:172.18.0.100:3300/0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:25.076982+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:26.077160+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:27.077304+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:28.077432+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:29.077584+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:30.077727+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:31.077882+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:32.078204+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:33.078406+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:34.078596+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:35.078767+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:36.078956+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:37.079098+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:38.079230+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:39.079388+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:40.079548+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:41.079692+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:42.079821+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:43.080018+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:44.080182+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:45.080376+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:46.080554+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:47.080683+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:48.080831+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:49.080978+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:50.081140+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160571392 unmapped: 21135360 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 120.001533508s of 120.012054443s, submitted: 2
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:51.081313+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:52.081460+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:53.081633+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:54.081836+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:55.082006+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:56.082157+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:57.082325+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:58.082519+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:47:59.082645+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:00.082798+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:01.082902+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:02.083002+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:03.083155+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:04.083336+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:05.083600+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:06.083734+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:07.083918+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:08.084078+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:09.084217+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:10.084338+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:11.084453+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:12.084637+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:13.084858+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:14.085020+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:15.085213+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:16.114068+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:17.114201+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:18.114438+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:19.114562+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:20.114739+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:21.114867+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:22.115012+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:23.115176+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:24.115345+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:25.115543+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:26.115733+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:27.115902+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:28.116057+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:29.116183+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:30.116327+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:31.116502+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:32.116648+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:33.116969+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:34.117137+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:35.117328+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:36.117535+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:37.117682+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:38.117891+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:39.118042+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:40.118182+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:41.118359+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:42.118606+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:43.118750+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:44.118916+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:45.119243+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:46.119398+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:47.119556+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:48.119740+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:49.119868+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:50.120251+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:51.120474+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:52.120627+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:53.120764+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:54.120950+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:55.121151+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:56.121327+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:57.121651+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:58.121803+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:48:59.121960+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:00.122118+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:01.122338+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:02.122593+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:03.122921+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:04.123096+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:05.123265+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:06.134190+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:07.134344+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:08.134532+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:09.134674+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:10.134815+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:11.134967+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:12.135117+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:13.135315+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:14.135549+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:15.135729+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:16.135987+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:17.136181+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:18.136374+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:19.136520+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:20.136662+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:21.136806+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:22.136941+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:23.137121+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:24.137281+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:25.137591+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:26.137789+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:27.137951+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:28.138130+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:29.138303+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:30.138578+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:31.138747+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:32.138894+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:33.139044+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:34.139270+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:35.139455+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:36.139659+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:37.139834+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:38.140056+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:39.140250+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 104857600 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:40.140464+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:41.140660+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:42.140856+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain podman[572106]: 2025-10-13 15:53:25.104832809 +0000 UTC m=+0.367140545 container exec_died dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1, name=swift_account_server, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-account, summary=Red Hat OpenStack Platform 17.1 swift-account, version=17.1.9, batch=17.1_20250721.1, tcib_managed=true, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-account/images/17.1.9-1, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, container_name=swift_account_server, io.openshift.tags=rhosp osp openstack osp-17.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-account:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, config_id=tripleo_step4, vcs-ref=506557c4100df74c62ec18c5362691f2850e1b3f, com.redhat.component=openstack-swift-account-container, distribution-scope=public, name=rhosp17/openstack-swift-account, release=1, description=Red Hat OpenStack Platform 17.1 swift-account, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-account, maintainer=OpenStack TripleO Team, build-date=2025-07-21T16:11:22, io.openshift.expose-services=)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:43.141018+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:44.141218+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:45.141404+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:46.141624+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:47.141841+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:48.142018+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:49.142183+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:50.142313+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 120.015861511s of 120.060104370s, submitted: 2
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:51.142448+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:52.142632+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:53.142825+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:54.143027+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:55.143212+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:56.143363+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:57.143529+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:58.143621+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:49:59.143825+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:00.144000+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:01.144201+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:02.144352+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:03.144475+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:04.144650+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:05.144850+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:06.144981+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:07.145149+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:08.145365+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:09.145571+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:10.145760+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:11.145923+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:12.146087+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:13.146257+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:14.146429+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:15.146688+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:16.146881+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:17.147075+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:18.147253+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:19.147448+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:20.150246+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:21.150409+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:22.150550+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:23.150721+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:24.151052+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:25.151196+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:26.151379+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:27.151567+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:28.151690+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:29.151892+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:30.152092+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:31.152295+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:32.152593+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:33.152764+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:34.153074+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:35.153306+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:36.153688+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:37.153890+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:38.154104+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:39.154294+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:40.154506+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:41.154790+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:42.155058+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:43.155196+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:44.155372+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:45.155636+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:46.155843+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:47.156035+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:48.156169+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:49.156396+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:50.156554+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:51.156719+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:52.156939+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:53.157117+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:54.158628+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:55.159444+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:56.159663+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:57.160382+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:58.160609+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:50:59.162156+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:00.163560+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:01.164752+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:02.165955+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:03.166109+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:04.166318+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:05.166748+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:06.166955+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:07.167103+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:08.167327+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:09.167545+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:10.167713+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:11.167932+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:12.168094+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:13.168291+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:14.168682+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:15.169283+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:16.169459+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:17.169641+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:18.169804+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:19.170035+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:20.170164+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:21.170306+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:22.170452+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:23.170761+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:24.170955+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:25.171360+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160587776 unmapped: 21118976 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:26.171565+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160595968 unmapped: 21110784 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:27.171803+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160595968 unmapped: 21110784 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:28.171995+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160595968 unmapped: 21110784 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:29.172179+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160595968 unmapped: 21110784 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:30.172317+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160595968 unmapped: 21110784 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:31.172455+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160595968 unmapped: 21110784 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:32.172560+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160595968 unmapped: 21110784 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:33.172811+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160595968 unmapped: 21110784 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:34.173063+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160595968 unmapped: 21110784 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:35.173615+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain systemd[1]: dddd84069bd6892961231b6c017f265f10221d7d4a7e19ff09657592cdf7b004.service: Deactivated successfully.
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160595968 unmapped: 21110784 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:36.173801+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160595968 unmapped: 21110784 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:37.173951+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160595968 unmapped: 21110784 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:38.174133+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160595968 unmapped: 21110784 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:39.174246+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160595968 unmapped: 21110784 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:40.174396+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160595968 unmapped: 21110784 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets getting new tickets!
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _send_mon_message to mon.standalone at v2:172.18.0.100:3300/0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:41.174959+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _finish_auth 0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:41.176187+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160604160 unmapped: 21102592 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:42.175141+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160604160 unmapped: 21102592 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:43.175323+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160604160 unmapped: 21102592 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: mgrc ms_handle_reset ms_handle_reset con 0x55a746297400
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: mgrc reconnect Terminating session with v2:172.18.0.100:6800/1677275897
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: mgrc reconnect Starting new session with [v2:172.18.0.100:6800/1677275897,v1:172.18.0.100:6801/1677275897]
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: get_auth_request con 0x55a7435bf400 auth_method 0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: mgrc handle_mgr_configure stats_period=5
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:44.175473+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160604160 unmapped: 21102592 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a74423dc00 session 0x55a7492ba780
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a746271800
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:45.175672+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a746270800 session 0x55a744fb7c20
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 ms_handle_reset con 0x55a74627a400 session 0x55a74a69ba40
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74627a000
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: handle_auth_request added challenge on 0x55a74423dc00
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160604160 unmapped: 21102592 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:46.175810+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160604160 unmapped: 21102592 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:47.175954+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160604160 unmapped: 21102592 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:48.176123+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160604160 unmapped: 21102592 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:49.176312+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160604160 unmapped: 21102592 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:50.176548+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160604160 unmapped: 21102592 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 119.900215149s of 119.911254883s, submitted: 2
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:51.176729+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160604160 unmapped: 21102592 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:52.176898+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160604160 unmapped: 21102592 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:53.177030+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160604160 unmapped: 21102592 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:54.177197+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 160604160 unmapped: 21102592 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:55.177404+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:56.177589+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:57.177735+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:58.177891+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:51:59.178037+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:00.178234+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:01.178396+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:02.178576+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:03.178731+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:04.178868+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:05.179041+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:06.179200+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:07.179413+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:08.179580+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:09.179886+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:10.180059+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:11.180180+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:12.180352+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:13.180531+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:14.180668+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:15.180909+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:16.181133+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:17.181274+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:18.181389+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:19.181545+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:20.181725+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:21.181876+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:22.182096+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:23.182281+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:24.182437+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:25.182707+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _send_mon_message to mon.standalone at v2:172.18.0.100:3300/0
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:26.182898+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:27.183088+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:28.183235+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:29.183396+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:30.183739+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:31.183970+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:32.184262+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:33.184510+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:34.184680+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:35.184903+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:36.185246+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:37.189601+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:38.189771+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:39.189895+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:40.190044+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:41.190205+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:42.190363+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:43.190528+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:44.190719+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:45.190916+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:46.191062+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:47.191183+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:48.191254+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:49.191378+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:50.191524+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.457143
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: bluestore.MempoolThread(0x55a7428bfb60) _resize_shards cache_size: 686012968 kv_alloc: 234881024 kv_used: 2144 kv_onode_alloc: 146800640 kv_onode_used: 464 meta_alloc: 171966464 meta_used: 3154552 data_alloc: 92274688 data_used: 40812544
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159924224 unmapped: 21782528 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:51.191696+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159981568 unmapped: 21725184 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:52.191761+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: do_command 'config diff' '{prefix=config diff}'
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: osd.0 73 heartbeat osd_stat(store_statfs(0x1ace5c000/0x0/0x1bfc00000, data 0xfd14dd2/0xe832000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [] op hist [])
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: do_command 'config show' '{prefix=config show}'
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: do_command 'counter dump' '{prefix=counter dump}'
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: do_command 'counter schema' '{prefix=counter schema}'
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159326208 unmapped: 22380544 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:53.191902+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: prioritycache tune_memory target: 1754493337 mapped: 159580160 unmapped: 22126592 heap: 181706752 old mem: 686012968 new mem: 686012968
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: tick
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_tickets
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-13T15:52:54.192019+0000)
Oct 13 15:53:25 standalone.localdomain ceph-osd[37878]: do_command 'log dump' '{prefix=log dump}'
Oct 13 15:53:25 standalone.localdomain podman[572105]: 2025-10-13 15:53:25.141302674 +0000 UTC m=+0.401933488 container exec_died ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9 (image=registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1, name=swift_container_server, description=Red Hat OpenStack Platform 17.1 swift-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=17.1.9, io.buildah.version=1.33.12, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=6c85b2809937c86a19237cd4252af09396645905, build-date=2025-07-21T15:54:32, name=rhosp17/openstack-swift-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd8b7706e98ee5ba7e1c4d758abf97545'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-swift-container:17.1', 'net': 'host', 'restart': 'always', 'user': 'swift', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/swift:/var/lib/kolla/config_files/src:ro', '/srv/node:/srv/node', '/dev:/dev', '/var/cache/swift:/var/cache/swift', '/var/log/containers/swift:/var/log/swift:z']}, release=1, batch=17.1_20250721.1, com.redhat.component=openstack-swift-container-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhosp17/openstack-swift-container/images/17.1.9-1, io.openshift.tags=rhosp osp openstack osp-17.1, summary=Red Hat OpenStack Platform 17.1 swift-container, io.k8s.description=Red Hat OpenStack Platform 17.1 swift-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 swift-container, container_name=swift_container_server)
Oct 13 15:53:25 standalone.localdomain systemd[1]: ac758a23ee9d609f84ab9b739ead17ba7655685abbd9fc9c28e795388a7705f9.service: Deactivated successfully.
Oct 13 15:53:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 13 15:53:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/925262375' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Oct 13 15:53:25 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15290 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:25 standalone.localdomain ceph-mon[29756]: from='client.15282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:25 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/340976138' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Oct 13 15:53:25 standalone.localdomain ceph-mon[29756]: from='client.15286 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:25 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/925262375' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Oct 13 15:53:25 standalone.localdomain ceph-mon[29756]: from='client.15290 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:25 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 13 15:53:25 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2160996285' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Oct 13 15:53:25 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4174: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 13 15:53:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2736154996' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Oct 13 15:53:26 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15298 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:26 standalone.localdomain ceph-627e7f45-65aa-56de-94df-66eaee84a56e-mgr-standalone-ectizd[29995]: 2025-10-13T15:53:26.212+0000 7f0263570640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 13 15:53:26 standalone.localdomain ceph-mgr[29999]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 13 15:53:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 13 15:53:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1630348340' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Oct 13 15:53:26 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41632 DF PROTO=TCP SPT=49620 DPT=9102 SEQ=3033798873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD7ACF60000000001030307) 
Oct 13 15:53:26 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2160996285' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Oct 13 15:53:26 standalone.localdomain ceph-mon[29756]: pgmap v4174: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:26 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2736154996' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Oct 13 15:53:26 standalone.localdomain ceph-mon[29756]: from='client.15298 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:26 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1630348340' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Oct 13 15:53:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct 13 15:53:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1249216463' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Oct 13 15:53:26 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 13 15:53:26 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3682249474' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Oct 13 15:53:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 13 15:53:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1697356376' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Oct 13 15:53:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 13 15:53:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1181025476' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Oct 13 15:53:27 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1249216463' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Oct 13 15:53:27 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3682249474' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Oct 13 15:53:27 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1697356376' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Oct 13 15:53:27 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1181025476' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Oct 13 15:53:27 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct 13 15:53:27 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/263277413' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Oct 13 15:53:27 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.
Oct 13 15:53:27 standalone.localdomain crontab[572743]: (root) LIST (root)
Oct 13 15:53:27 standalone.localdomain podman[572733]: 2025-10-13 15:53:27.80642133 +0000 UTC m=+0.064040778 container health_status 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 13 15:53:27 standalone.localdomain podman[572733]: 2025-10-13 15:53:27.840056508 +0000 UTC m=+0.097675956 container exec_died 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 15:53:27 standalone.localdomain systemd[1]: 2283f9ea4cf9ff0a03026b6918e7daeea0401fc167810005063efcfce1d76a54.service: Deactivated successfully.
Oct 13 15:53:27 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4175: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:28 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:28.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/33197714' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2740765926' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/920886950' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3583051280' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/263277413' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: pgmap v4175: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/33197714' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2740765926' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/920886950' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3583051280' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 13 15:53:28 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/299491819' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Oct 13 15:53:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct 13 15:53:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1759650862' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] _maybe_adjust
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006647822445561139 of space, bias 1.0, pg target 0.6647822445561139 quantized to 32 (current 32)
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009640760224734785 of space, bias 1.0, pg target 0.9640760224734785 quantized to 32 (current 32)
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.012942315745393635 of space, bias 1.0, pg target 1.2942315745393635 quantized to 32 (current 32)
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0030076598269123396 of space, bias 1.0, pg target 0.29775832286432163 quantized to 32 (current 32)
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3620882188721385e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Oct 13 15:53:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct 13 15:53:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1983331839' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Oct 13 15:53:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct 13 15:53:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/4230634958' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Oct 13 15:53:29 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:29.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:29 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/299491819' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Oct 13 15:53:29 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1759650862' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Oct 13 15:53:29 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1983331839' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Oct 13 15:53:29 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4230634958' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Oct 13 15:53:29 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct 13 15:53:29 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1978789346' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15332 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:29 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4176: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 13 15:53:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1787477771' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Oct 13 15:53:30 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15336 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct 13 15:53:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/4181822752' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Oct 13 15:53:30 standalone.localdomain kernel: DROPPING: IN=br-ctlplane OUT= MACSRC=fa:16:3e:13:3c:3c MACDST=fa:16:3e:0b:50:81 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.100 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41633 DF PROTO=TCP SPT=49620 DPT=9102 SEQ=3033798873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7CD7BCB60000000001030307) 
Oct 13 15:53:30 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15340 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:30 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1978789346' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Oct 13 15:53:30 standalone.localdomain ceph-mon[29756]: from='client.15332 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:30 standalone.localdomain ceph-mon[29756]: pgmap v4176: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:30 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1787477771' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Oct 13 15:53:30 standalone.localdomain ceph-mon[29756]: from='client.15336 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:30 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4181822752' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Oct 13 15:53:30 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 13 15:53:30 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3422271118' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Oct 13 15:53:31 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15344 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:31 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.
Oct 13 15:53:31 standalone.localdomain systemd[1]: Starting Hostname Service...
Oct 13 15:53:31 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct 13 15:53:31 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2393410866' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Oct 13 15:53:31 standalone.localdomain systemd[1]: tmp-crun.v1gcga.mount: Deactivated successfully.
Oct 13 15:53:31 standalone.localdomain podman[573243]: 2025-10-13 15:53:31.290617529 +0000 UTC m=+0.117744276 container health_status 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct 13 15:53:31 standalone.localdomain podman[573243]: 2025-10-13 15:53:31.302020171 +0000 UTC m=+0.129146848 container exec_died 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct 13 15:53:31 standalone.localdomain systemd[1]: 4ebf95155e90c7dc0bb5258d5bfa710d139cfb3281916c73538f4ca181471018.service: Deactivated successfully.
Oct 13 15:53:31 standalone.localdomain systemd[1]: Started Hostname Service.
Oct 13 15:53:31 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15348 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:31 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15350 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:31 standalone.localdomain ceph-mon[29756]: from='client.15340 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:31 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3422271118' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Oct 13 15:53:31 standalone.localdomain ceph-mon[29756]: from='client.15344 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:31 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2393410866' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Oct 13 15:53:31 standalone.localdomain ceph-mon[29756]: from='client.15348 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:31 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4177: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:31 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15352 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:32 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15354 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:32 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15356 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:32 standalone.localdomain ceph-mon[29756]: from='client.15350 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:32 standalone.localdomain ceph-mon[29756]: pgmap v4177: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:32 standalone.localdomain ceph-mon[29756]: from='client.15352 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:32 standalone.localdomain ceph-mon[29756]: from='client.15354 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:32 standalone.localdomain ceph-mon[29756]: from='client.15356 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 13 15:53:32 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct 13 15:53:32 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1514764890' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Oct 13 15:53:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:53:33 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:33.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Oct 13 15:53:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3539274709' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Oct 13 15:53:33 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1514764890' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Oct 13 15:53:33 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3539274709' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Oct 13 15:53:33 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4178: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:33 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct 13 15:53:33 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/4170922202' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Oct 13 15:53:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct 13 15:53:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1784213325' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Oct 13 15:53:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 13 15:53:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 13 15:53:34 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:34.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:34 standalone.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.
Oct 13 15:53:34 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 13 15:53:34 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/228991591' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Oct 13 15:53:34 standalone.localdomain ceph-mon[29756]: pgmap v4178: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:34 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/4170922202' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Oct 13 15:53:34 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1784213325' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Oct 13 15:53:34 standalone.localdomain ceph-mon[29756]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 13 15:53:34 standalone.localdomain ceph-mon[29756]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 13 15:53:34 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/228991591' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Oct 13 15:53:34 standalone.localdomain podman[573665]: 2025-10-13 15:53:34.838872024 +0000 UTC m=+0.105405644 container health_status 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 15:53:34 standalone.localdomain podman[573665]: 2025-10-13 15:53:34.870740448 +0000 UTC m=+0.137274118 container exec_died 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 15:53:34 standalone.localdomain systemd[1]: 0352e4a7cb8c8ec574f88db9070451d3ba08e32ed39f5de5234595679f858063.service: Deactivated successfully.
Oct 13 15:53:35 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15374 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct 13 15:53:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2393449641' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Oct 13 15:53:35 standalone.localdomain ceph-mon[29756]: from='client.15374 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:35 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/2393449641' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Oct 13 15:53:35 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4179: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:35 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Oct 13 15:53:35 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1819539382' entity='client.admin' cmd={"prefix": "df"} : dispatch
Oct 13 15:53:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct 13 15:53:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1303110148' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Oct 13 15:53:36 standalone.localdomain ceph-mon[29756]: pgmap v4179: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:36 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1819539382' entity='client.admin' cmd={"prefix": "df"} : dispatch
Oct 13 15:53:36 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1303110148' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Oct 13 15:53:36 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Oct 13 15:53:36 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/1777225181' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Oct 13 15:53:37 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15384 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:37 standalone.localdomain kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 13 15:53:37 standalone.localdomain kernel: cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 13 15:53:37 standalone.localdomain kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 13 15:53:37 standalone.localdomain kernel: cfg80211: failed to load regulatory.db
Oct 13 15:53:37 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Oct 13 15:53:37 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/3250924704' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Oct 13 15:53:37 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/1777225181' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Oct 13 15:53:37 standalone.localdomain ceph-mon[29756]: from='client.15384 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:37 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/3250924704' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Oct 13 15:53:37 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4180: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 13 15:53:38 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:38.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:38 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct 13 15:53:38 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/577379540' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Oct 13 15:53:38 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15390 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:39 standalone.localdomain ceph-mon[29756]: mon.standalone@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Oct 13 15:53:39 standalone.localdomain ceph-mon[29756]: log_channel(audit) log [DBG] : from='client.? 172.18.0.100:0/2089216064' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Oct 13 15:53:39 standalone.localdomain ceph-mon[29756]: pgmap v4180: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:39 standalone.localdomain ceph-mon[29756]: from='client.? 172.18.0.100:0/577379540' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Oct 13 15:53:39 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15394 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 13 15:53:39 standalone.localdomain nova_compute[521101]: 2025-10-13 15:53:39.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 15:53:39 standalone.localdomain ceph-mgr[29999]: log_channel(cluster) log [DBG] : pgmap v4181: 177 pgs: 177 active+clean; 359 MiB data, 302 MiB used, 6.7 GiB / 7.0 GiB avail
Oct 13 15:53:39 standalone.localdomain ceph-mgr[29999]: log_channel(audit) log [DBG] : from='client.15396 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
